Friday, March 7, 2014

Preparing for usability testing

You can do a usability study in different ways. Alice Preston described the types of usability methods for the STC usability community (Vol. 10, No. 3), listing 11 different ways to evaluate the usability of a design. Her list:
  1. Interviews/Observations: One-on-one sessions with users.
  2. Focus Groups: Often used in marketing well before there is any kind of prototype or product to test, a facilitated meeting with multiple attendees from the target user group.
  3. Group Review or Walk-Through: A facilitator presents planned workflow to multiple attendees, who present comments on it.
  4. Heuristic Review: Using a predefined set of standards, a professional usability expert reviews someone else's product or product design and presents a marked checklist back to the designer.
  5. Walk-Around Review: Copies of the design/prototype/wireframe are tacked to the walls, and colleagues are invited to comment.
  6. Do-it-Yourself Walk-Through: Make mock-ups of artifacts, but make the scenarios realistic. Walk through the work yourself.
  7. Paper Prototype Test: Use realistic scenarios but a fake product.
  8. Prototype Test: A step up from a paper prototype, using some type of animated prototype with realistic scenarios.
  9. Formal Usability Test: Using a stable product, an animated prototype, or even a paper prototype, test a reasonably large number of subjects against a controlled variety of scenarios.
  10. Controlled Experiment: A comparison of two products, with careful statistical balancing, etc.
  11. Questionnaires: Ask testers to complete a formal questionnaire, or a matching questionnaire.
In 2012, I suggested that only 4 of Preston's usability methods would apply to the study of open source usability:
  1. Heuristic Review
  2. Prototype Test
  3. Formal Usability Test
  4. Questionnaires
In sharing that list of 4 methods, I made an unspoken assumption: the open source programs under study are suitably large, yet still have usability issues waiting to be uncovered and analyzed.

The heuristic review is interesting. A “Usability Analysis” or “Usability Critical Analysis” is basically a heuristic review: essentially a “plus/delta” group exercise, focused on what is working in the design (plus) and what needs to be improved (delta). In this review, a panel of usability experts comments on a program's user interface design and predicts how users might respond to it. The interface might be a working prototype, or a mock-up on paper. A parallel in other fields of academia is the “Literary Critical Analysis,” a discussion of a work of literature. Here, “criticism” doesn't imply disapproval or a negative review, but rather a full examination of the work.
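
To make the plus/delta exercise concrete, here's a minimal sketch in Python of one way a panel's comments might be recorded and tallied. This is just an illustration of the bookkeeping, not part of Preston's method; the Finding fields and the sample comments are hypothetical.

    # Hypothetical sketch: recording and tallying "plus/delta" comments
    # from a heuristic review panel. Fields and sample data are made up.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Finding:
        reviewer: str  # panel member who made the comment
        screen: str    # the part of the interface under review
        kind: str      # "plus" (working well) or "delta" (needs improvement)
        comment: str   # the reviewer's observation

    findings = [
        Finding("reviewer-1", "Save dialog", "plus", "Default filename is sensible."),
        Finding("reviewer-2", "Save dialog", "delta", "No feedback after saving."),
        Finding("reviewer-1", "Menu bar", "delta", "Preferences item is hard to find."),
    ]

    # Group the deltas by screen, so the maintainer can see where
    # the predicted problems cluster.
    deltas = defaultdict(list)
    for f in findings:
        if f.kind == "delta":
            deltas[f.screen].append(f.comment)

    for screen, comments in deltas.items():
        print(f"{screen}: {len(comments)} item(s) to improve")
        for c in comments:
            print(f"  - {c}")

However you keep the notes, the point of the exercise is the same: the panel hands the maintainer a balanced picture of what works and what doesn't, organized so the problem areas stand out.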

However, the heuristic review relies on trust between the reviewer and the program maintainer. If that trust is lacking, or if the maintainer doesn't "buy into" the usability review, the maintainer will likely dismiss it. This is one reason why many open source software programs lack good usability. In a previous conversation with me, Eric Raymond commented that most programmers believe "Menus and icons are like the frosting on the cake after you've baked it," and that any attempt to correct usability is "swimming against a strong cultural headwind." It's a mixed metaphor (intentionally so), but it describes the problem well.

A more compelling way to achieve usability "buy-in" from developers is to perform a usability test. There's nothing quite like watching a user experience problems with your own program.

In my next post, I'll share some notes on how to do a usability test, and how to plan for your test audience.
