If you want your users to love your designs, fall in love with your users.
There are lots of great sources of participants for usability studies and other user research. The key: know what behavior you want to learn about. For example:
- Playing online games
- Planning for retirement
- Shopping for a new car
- Treating a chronic illness
Note that there’s nothing about demographics here.
After you identify the behaviors you want to learn about — preferably by observing people using a design rather than just asking them about it — brainstorming ideas for where to find them can be fun. There are loads of options. Continue reading.
For many of us, usability testing is a necessary evil. For others, it’s too much work, or it’s too disruptive to the development process. As you might expect, I have issues with all that. It’s unfortunate that some teams don’t see the value in observing people use their designs. Done well, it can be an amazing event in the life of a design. Even done very informally, it can still turn up useful insights that help a team make informed design decisions. But I probably don’t have to tell you that.
Usability testing can be enormously elevating for teams at all stages of UX maturity. In fact, there probably isn’t nearly enough of it being done. Even enlightened teams that know about and do usability testing probably aren’t doing it often enough. There seems to be a correlation between successful user experiences and how much time designers and developers spend observing users. (hat tip Jared Spool) Continue reading.
There are some brilliant questions on Quora. This morning, I was prompted to answer one about recruiting.
The question was, How do I recruit prospective customers to shadow as part of a user-centered design approach? The asker elaborated:
I’m interested in shadowing prospective customers in order to better understand how my tool can fit into their life and complement, supplement, or replace the existing tools that they use. How do I find prospective customers? How do I convince them to let me shadow them?
Seemed like a very thoughtful question. I have some experience with recruiting for field studies and other user research, so I thought I might share my lessons learned. Here’s my answer. Would love to hear yours. Continue reading.
How many of you have run usability tests that look like this: individual, one-hour sessions in which the participant performs one or more tasks from a scenario that you and your team have come up with, on a prototype, using bogus or imaginary data. It’s a hypothetical situation for the user; sometimes they’re even role-playing.
Anyone? That’s what I thought. Me too. I just did it a couple of weeks ago.
But that model of usability testing is broken. Why? Because one of the first things we found out is that the task we were asking people to do – doing some basic financial estimates based on goals for retirement – involved more than the person in the room with me.
For the husbands, the task involved their wives because the guys didn’t actually know what the numbers were for the household expenses. For the women, it was their children, because they wanted to talk to them about medical expenses and plans for assisted living. For younger people it was their parents or grandparents, because they wanted to learn from them how they’d managed to save enough to help them through school and retire, too. Continue reading.
This happens. The team is heads down, just trying to do the work, to make things work, and then you realize it. Perspective is gone. Recently I gave a couple of talks about usability testing and collaboratively analyzing data. There was a guy in the first row who was super attentive as I showed screen shots of web sites and walked the attendees through tasks that regular people might try to do on the sites. Sweat beaded on his brow. His hands came up to his forehead the way they do when someone has a sudden realization. He put his hand over his mouth. I assumed he was simply passionate about web design and was feeling distressed about the crimes this web site committed against its users.
Turns out, he was the web site’s owner. Continue reading.
[This is an excerpt of an article published in UX Magazine on June 16, 2010.]
I’m a devotee of TED talks. I was once assigned to watch several TED talks to deconstruct what made each a good or a bad presentation. TED topics are wide-ranging, though they generally relate to the categories that make up the “TED” acronym: Technology, Entertainment, and Design. I tend to stick to the design topics, but during my research I came across a video of Martin Seligman talking about positive psychology.
Happiness is a topic I’ve been interested in for a while. According to Darrin McMahon, author of Happiness: A History, happiness is a relatively new construct in the history of humanness. It’s only been in the last 250 years or so in the West that we’ve been safe and healthy enough to think about how we feel emotionally. Continue reading.
I’ve often said that most of the value in doing user research is in spending time with users — observing them, listening to them. This act, especially if done by everyone on the design team, can be unexpectedly enlightening. Insights are abundant.
But it’s data, right? Now that the team has done this observing, what do you know? What are you going to do with what you know? How do you figure that out?
The old way: Tally the data, write a report, make recommendations
This is the *usual* sequence of things after the sessions with users are done: finish the sessions; count incidents; tally data; summarize data; if there’s enough data, do some statistical analysis; write a report listing all the issues; maybe apply severity ratings; present the report to the team; make recommendations for changes to the user interface; wait to see what happens.
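To make the counting-and-tallying step concrete, here’s a minimal sketch of what that part of the old process amounts to. The incident labels, severity scale, and session data are all hypothetical, invented for illustration — not from any real study:

```python
from collections import Counter

# Hypothetical incident log: each entry is an issue one participant
# ran into during a session (one line per observed incident).
incidents = [
    "couldn't find search", "misread error message", "couldn't find search",
    "misread error message", "couldn't find search", "abandoned checkout",
]

# Assumed severity ratings for each issue type (1 = cosmetic, 4 = blocker).
severity = {
    "couldn't find search": 3,
    "misread error message": 2,
    "abandoned checkout": 4,
}

# Tally how many participants hit each issue, most frequent first.
tally = Counter(incidents)
for issue, count in tally.most_common():
    print(f"{issue}: seen {count}x, severity {severity[issue]}")
```

The point of the excerpt, of course, is that producing a tally like this is the easy part — the report it feeds into is what tends to go unread.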
There are a couple of problems with this process, though. UXers feel pressure to analyze the data really quickly. They complain that no one reads the report. And if you’re an outside consultant, there’s often no good way of knowing whether your recommendations will be implemented. Continue reading.