Test topics. The first thing to decide is just what we're going to test and what format it will take. From paper prototypes to finished products, we'll discuss the benefits and drawbacks of using each.
Recruiting. We recruit participants based on the personas we've defined. A persona is a description of a user type, together with usage scenarios. Because we're looking at user behavior rather than doing statistical analysis, a small number of participants is usually sufficient (3 or 4 of each user type).
Compensation. We'll decide what to give participants in exchange for their time: cash, a gift certificate, or a copy of your software.
Location. It's good to test at your offices so members of your team can drop in for a few minutes or an entire session. We can also use third-party locations, which provide formal viewing rooms. We can also visit participants if they're too busy to travel or if we want to learn about their work environments (that's useful for building personas).
Tasks and questionnaires. With your input and review, we'll prepare a task list and a set of questions to ask before and after each session.
Pilot session. It's often good to review the test setup in a pilot session, or at least to go over it in a meeting.
Each session has one participant and lasts an hour to an hour and a half. That includes:
Release forms and pre-test questionnaire. Participants sign release forms and nondisclosure agreements. A questionnaire helps us understand the user's conceptual model of the product.
Tasks. Tasks should reflect what participants do in real life. They're based on the usage scenarios we develop during persona creation, and the actual task order may vary depending on what each participant does in the session. Through careful observation and limited interaction with the participant, we see how well the product meets each customer's needs.
Post-test questionnaire. Participants answer questions about each aspect of the product. Comparing these answers with responses to the pre-test questions can show whether the product matched the user's conceptual model of the task.
Members of your team watch sessions from another room. Watching real users work
with your product is an eye-opening experience.
One or more observers take notes using a software tool (described below). It captures audio, the participant's face, the screen, and all observer notes, synchronized with one another, which is very helpful in the analysis phase.
We write a report based on everyone's observations. The report includes recommendations to fix problems and build upon strengths that we saw in the study. Recommendations may include sketches, but they do not constitute a design specification, because that requires much more detailed design work.
In addition to product-specific categories, usability reports cover these topics: layout and visual design, conformance with platform UI guidelines, terminology, how well the product matches the user's conceptual model of the task, and ways you can improve transaction flow.
We can create a short video of test highlights as part of the study. This is particularly useful for remote team members or senior management.
There are a number of great tools for usability testing, whether it's remote or in-person:
Morae captures the exact screen that the participant sees. Observers see it with a picture-in-picture image of the participant's face, and they take notes that are time-stamped and included with the video for analysis.
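To illustrate why time-stamped, synchronized notes are so useful in analysis, here is a minimal sketch (not Morae's actual data format; the note structure and field names are assumptions) of how notes stamped with seconds-from-start can be looked up against any window of the recording:

```python
from dataclasses import dataclass

@dataclass
class Note:
    t: float       # seconds since the recording started
    observer: str  # who took the note
    text: str      # the observation itself

def notes_in_range(notes, start, end):
    """Return the notes whose timestamps fall within [start, end) seconds."""
    return [n for n in notes if start <= n.t < end]

# Hypothetical notes from two observers during one session.
notes = [
    Note(12.5, "A", "Participant hesitates on the login screen"),
    Note(95.0, "B", "Confused by the 'Sync' label"),
    Note(130.0, "A", "Recovers after reading the tooltip"),
]

# Jump to the second minute of the recording and see what observers flagged.
print([n.text for n in notes_in_range(notes, 60, 120)])
```

Because every note shares the recording's timeline, anyone reviewing the video can jump straight from an observation to the moment it describes.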
WebEx is also helpful because it allows for screen sharing and voice communication.