Thursday, October 11, 2012

A new kind of usability test

Alice Preston described the types of usability methods in the STC Usability SIG Newsletter, listing 11 different ways to evaluate the usability of a design. Her list:

  1. Interviews/Observations: One-on-one sessions with users.
  2. Focus Groups: A facilitated meeting with multiple attendees from the target user group; often used in marketing well before there is any kind of prototype or product to test.
  3. Group Review or Walk-Through: A facilitator presents the planned workflow to multiple attendees, who comment on it.
  4. Heuristic Review: Using a predefined set of standards, a professional usability expert reviews someone else's product or product design and presents a marked checklist back to the designer.
  5. Walk-Around Review: Copies of the design/prototype/wireframe are tacked to the walls, and colleagues are invited to comment.
  6. Do-it-Yourself Walk-Through: Make mock-ups of artifacts, but make the scenarios realistic. Walk through the work yourself.
  7. Paper Prototype Test: Use realistic scenarios but a fake product.
  8. Prototype Test: A step up from a paper prototype, using some type of animated prototype with realistic scenarios.
  9. Formal Usability Test: Using a stable product, an animated prototype, or even a paper prototype, test a reasonably large number of subjects against a controlled variety of scenarios.
  10. Controlled Experiment: A comparison of two products, with careful statistical balancing, etc.
  11. Questionnaires: Ask testers to complete a formal questionnaire or a matching questionnaire.

In my prospectus, I suggested that only 4 of these usability methods would apply to the study of open source usability:

  1. Heuristic Review
  2. Prototype Test
  3. Formal Usability Test
  4. Questionnaires

In that post, I made an unspoken assumption: that I would study an open source program of a suitable size, but one with usability issues that needed to be uncovered and analyzed.

However, after a discussion with Eric Raymond, I wonder if it would be better to modify that assumption. Instead, I could study an open source program of a suitable size, one that was successful in addressing usability. The results of the case study wouldn't be a diagnostic review of the usability issues, but a summary of what works in open source usability.

The benefits of this kind of usability study would be immediate for the open source community. In my experience, and in Raymond's, most open source programmers are more likely to imitate successful designs than to apply the rigor of usability studies to their own programs. If my study is to benefit the open source community as a whole, I need to change how I approach the case study.

I'm not sure how to refer to this new kind of usability test; I don't recall seeing a name for it in the literature. It seems very similar to a Heuristic Review, except that the results identify positive design ideas rather than usability issues for a designer to correct. In a different framework, it is the first half of a "plus/delta" exercise (where participants identify what works and what needs improvement). In that respect, it's also similar to the method of Questionnaires. But I still think it's a different thing.

Perhaps a good name for this is a "Usability Analysis," which adequately and simply describes the method. I'll use that name for now, until someone points out a more accurate one.
