UsabilityWorks Blog

If you want your users to love your designs, fall in love with your users.

Coming soon…

There’s a usability testing revival going on. I don’t know if you know that.

This new testing is leaner, faster, smarter, more collaborative, and covers more ground in less time. How does that happen? Everyone on the team is empowered to go do usability testing themselves. This isn’t science; it’s sensible design research. At its essence, usability testing is a simple thing: something to test, somewhere that makes sense, with someone who would be a real user.

But not everyone has time to get a Ph.D. in Human Computer Interaction or cognitive or behavioral psychology. Most of the teams I work with don’t even have time to attend a 2-day workshop or read a 400-page manual. These people are brave and experimental, anyway. Why not give them a tiny, sweet tool to guide them, and just let them have at it? Let us not hold them back. Continue reading.


Ending the opinion wars: fast, collaborative design direction

I’ve seen it dozens of times. The team meets after observing people use their design, and they’re excited and energized by what they saw and heard during the sessions. They’re all charged up about fixing the design. Everyone comes in with ideas, certain they have the right solution to remedy the frustrations users had. Then what happens?

On a super collaborative team, everyone is in the design together, just with different skills. Splendid! Everyone was involved in the design of the usability test, they all watched most of the sessions, and they participated in debriefs between sessions. They took detailed, copious notes. And now the “what ifs” begin:

What if we just changed the color of the icon? What if we made the type bigger? What if we moved the icon to the other side of the screen? Or a couple of pixels? What if? Continue reading.


The importance of rehearsing

Sports teams drill endlessly. They walk through plays, they run plays, they practice plays in scrimmages. They tweak and prompt in between drills and practice. And when the game happens, the ball just knows where to go.

This seems like such an obvious thing, but we researchers often pooh-pooh dry runs and rehearsals. In big studies, it is common to run pilot studies to get the kinks out of an experiment design before running the experiment with a large number of participants.

But I’ve been getting the feeling that we general research practitioners are afraid of rehearsals. One researcher I know told me that he doesn’t do dry runs or pilot sessions because he fears that makes it look to his team like he doesn’t know what he is doing. Well, guess what. The first “real” session ends up being your rehearsal, whether you like it or not. Because you actually don’t know exactly what you’re doing — yet. If it goes well, you were lucky and you have good, valid, reliable data. But if it didn’t go well, you just wasted a lot of time and probably some money. Continue reading.


Wilder than testing in the wild: usability testing by flash mob

It was a spectacularly beautiful Saturday in San Francisco. Exactly the perfect day to do some field usability testing. But this was no ordinary field usability test. Sure, there’d been plenty of planning and organizing ahead of time. And there would be data analysis afterward. What made this test different from most usability tests?

  • 16 people gathered to make 6 research teams
  • Most of the people on the teams had never met
  • Some of the research teams had people who had never taken part in usability testing before
  • The teams were going to intercept people on the street, at libraries, and in farmers’ markets

Continue reading.

Are you testing for delight?

 

Maybe you just read Jared Spool’s article about deconstructing delight. And maybe you want to hear my take, since Jared did such a good job of shilling for my framework. 

Here’s a talk I did a couple of years ago, though I’ve been giving it for a while. Have a listen. (The post below was originally published in May 2012.)

Everybody’s talking about designing for delight. Even me! Well, it does get a bit sad when you spend too much time finding bad things in design. So, I went positive. I looked at positive psychology, and behavioral economics, and the science of play, and hedonics, and a whole bunch of other things, and came away from all that with a framework in mind for what I call “happy design.” It comes in three flavors: pleasure, flow, and meaning.

I used to think of the framework as being in layers or levels. But it’s not like that when you start looking at great digital designs and the great experiences they are part of. Pleasure, flow and meaning end up commingled.

So, I think we need to deconstruct what we mean by “delight.” I’ve tried to do that in a talk that I’ve been giving. Here are the slides:

Deconstructing delight from Dana Chisnell

 

You can listen to audio of the talk from the IA Summit here.


The form that changed *everything*

There’s a lot of crap going on in the world right now: terrorism, two major wars, and worldwide economic collapse. Let’s not forget the lack of movement on climate change and serious unrest in the Middle East and other places.

People trust governments less than ever — perhaps because of the transparency that ambient technology brings — leading to more regulation of privacy and security, but also to protests. Protests that started in Egypt have rippled around the world.

This wave started with a butterfly. Not the butterfly of chaos theory, but there is a metaphor here that should not be missed: when a butterfly flaps its wings in the Amazon rainforest, there are ripple effects that you might not anticipate. The butterfly I am talking about is the butterfly ballot used in Palm Beach County, Florida in the US 2000 presidential election.

Palm Beach County, Florida November 2000 ballot

 


Four secrets of getting great participants who show up

What if you had a near-perfect participant show rate for all your studies? The first time it happens, it’s surprising. The next few times, it’s refreshing — a relief. Teams that do great user research start with the recruiting process, and they come to expect near perfect attendance.

Secret 1: Participants are people, not data points
The people who opt in to a study have rich, complex lives that offer rich, complex experiences that a design may or may not fit into. People don’t always fit nicely into the boxes that screening questionnaires create.

Screeners can be constraining, and not in a good way. An agency that isn’t familiar with your design or your audience or both — and may not be experienced with user research — may eliminate people who could be great participants in user research or usability testing. Teams we work with find that participants who are selected through open-ended interviews conducted voice-to-voice become engaged and invested in the study. The conversation helps the participant know they’re interesting to you, and that makes them feel wanted. The team learns about variations in the user profile that they might want to design for. Continue reading.


The true costs of no-shows

One of the first things people say when they call up looking for help with recruiting is that they want to recruit “12 for 8” or “20 for 15”. They know what they want to end up with. They’ve got to get data. Managers are showing up to observe. They’ve gone through a lot to get a study to happen at all. They don’t want to risk putting a study together only to get less data than they need. So, compensating for a show rate of between 60% and 80% means over-recruiting.
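To make that over-recruiting arithmetic concrete, here is a minimal sketch in Python. The function name is just for illustration, and the show rates are the ones implied by the “12 for 8” and “20 for 15” requests above; plug in whatever your own history suggests.

    import math

    def participants_to_schedule(sessions_needed, expected_show_rate):
        # Schedule enough people that, at the expected show rate,
        # you still end up with the sessions you need.
        return math.ceil(sessions_needed / expected_show_rate)

    # "12 for 8": 8 sessions at a ~67% show rate means scheduling 12 people.
    print(participants_to_schedule(8, 0.67))   # 12
    # "20 for 15": 15 sessions at a 75% show rate means scheduling 20 people.
    print(participants_to_schedule(15, 0.75))  # 20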

Even though a recruiting agency probably won’t charge for no-shows, those no-shows can be costly in lots of ways. Continue reading.


What’s the best way to find people for user research and usability testing?

There are lots of great sources of participants for usability studies and other user research. The key: know what behavior you want to learn about. For example:

  • Playing online games
  • Voting
  • Planning for retirement
  • Shopping for a new car
  • Treating a chronic illness

Note that there’s nothing about demographics here.

After you identify the behaviors you want to learn about — preferably by observing people using a design rather than just asking them about it — brainstorming ideas for where to find them can be fun. There are loads of options. Continue reading.


Usability testing is HOT

For many of us, usability testing is a necessary evil. For others, it’s too much work, or it’s too disruptive to the development process. As you might expect, I have issues with all that. It’s unfortunate that some teams don’t see the value in observing people use their designs. Done well, usability testing can be an amazing event in the life of a design. Even done very informally, it can still turn up useful insights that can help a team make informed design decisions. But I probably don’t have to tell you that.

Usability testing can be enormously elevating for teams at all stages of UX maturity. In fact, there probably isn’t nearly enough of it being done. Even enlightened teams that know about and do usability testing probably aren’t doing it often enough. There seems to be a correlation between successful user experiences and how often and how much time designers and developers spend observing users. (hat tip Jared Spool) Continue reading.