Saturday, December 8, 2018

Week 1 of GNOME usability testing

The Outreachy internship started this week! For this cycle, we are joined by Clarissa, who will help us with usability testing in GNOME.

I want to share our progress throughout the internship, so I hope to provide regular status updates on our work.

For this week, I want to start with a grounding in usability testing. You can do usability testing in different ways. Alice Preston described the different types of usability evaluation methods in the STC newsletter (Vol. 10, No. 3). I can't find a copy of that newsletter online, but in it she describes 11 ways to evaluate the usability of a design. Preston's list is:

  1. Interviews/Observations: One-on-one sessions with users.
  2. Focus Groups: Often used in marketing well before there is any kind of prototype or product to test, a facilitated meeting with multiple attendees from the target user group.
  3. Group Review or Walk-Through: A facilitator presents a planned workflow to multiple attendees, who comment on it.
  4. Heuristic Review: Using a predefined set of standards, a professional usability expert reviews someone else's product or product design and presents a marked checklist back to the designer.
  5. Walk-Around Review: Copies of the design/prototype/wireframe are tacked to the walls, and colleagues are invited to comment.
  6. Do-it-Yourself Walk-Through: Make mock-ups of artifacts, but make the scenarios realistic. Walk through the work yourself.
  7. Paper Prototype Test: Use realistic scenarios but a fake product.
  8. Prototype Test: A step up from a paper prototype, using some type of animated prototype with realistic scenarios.
  9. Formal Usability Test: Using a stable product, an animated prototype, or even a paper prototype, test a reasonably large number of subjects against a controlled variety of scenarios.
  10. Controlled Experiment: A comparison of two products, with careful statistical balancing, etc.
  11. Questionnaires: Ask testers to complete a formal questionnaire, or a matching questionnaire.


Most of our work in the internship will be testing designs that haven't gone "live" yet (this is called "prototype usability testing"). Allan and Jakub will create mock-ups of new designs, and Clarissa will do usability testing on them. You can read a bit about this in Allan's blog post.

In that post, Allan writes:
Therefore, for this round of the Outreachy internships, we are only going to test UX changes that are actively being worked on. Instead of testing finished features, the tests will be on two things:
  1. Mockups or prototypes of changes that we hope to implement soon (this can include static mockups and paper or software prototypes)
  2. Features or UI changes that are being actively worked on, but haven’t been released to users yet

Ciarrai interned with us a few years ago, and worked on prototype usability testing for the then-upcoming GNOME Settings app redesign. We wrote an article together for OpenSource.com about paper-based usability testing that you might find interesting.

You may also be interested in an older article I wrote about usability testing with prototypes. I don't expect that we'll be testing with the animated prototypes that my article proposes, but it's good background material.

As you prepare for usability testing, you may ask "How many testers do I need?" The answer is "about five" if you are doing a traditional usability test. For a paper prototype test, this may be different. How many do you think we need?

For additional background, you might want to read this essay about how many testers you need, and why the answer is "about five." Note that "five" only applies to a traditional or "formal" usability test, where you test real people with real scenario tasks against a real product. For paper prototype testing, you have a different value for L (the proportion of usability problems a single tester uncovers), so we'll need a different number of testers. But this is a good start to understanding the assumptions.
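
If it helps to see where "about five" comes from: the model behind that answer, from Nielsen and Landauer, says that n testers will find about 1 - (1 - L)^n of the usability problems, and Nielsen reports roughly 31% as a typical value of L for traditional tests. Here is a quick Python sketch of that curve. The code, and the second L value of 50%, are just my own illustration of a case where problems are easier to spot, not figures from any particular study:

    # Sketch of the model behind "about five testers":
    # with n testers you expect to find about 1 - (1 - L)**n of all
    # usability problems, where L is the share of problems that a
    # single tester uncovers. L = 0.31 is the typical value Nielsen
    # cites for traditional tests; L = 0.5 is a made-up example of a
    # design whose problems are easier to hit.

    def share_of_problems_found(n_testers, L):
        """Fraction of all usability problems found by n testers."""
        return 1 - (1 - L) ** n_testers

    for L in (0.31, 0.5):
        print(f"L = {L}:")
        for n in range(1, 8):
            print(f"  {n} testers -> {share_of_problems_found(n, L):.0%} of problems")

With L = 0.31, five testers find roughly 85% of the problems, and each additional tester adds less and less. With a larger L, you reach the same coverage with fewer testers, which is why the kind of test you are running matters when you pick the number.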

Usability expert Jakob Nielsen has a good video about "five testers" that I often recommend.

Looking for more background information? Here are a few additional articles related to "how many testers do I need?"

That last article makes an interesting point that I'll quote here: "Studies to evaluate a prototype of a novel user-interface design often concern the discovery of severe show-stopper problems. Testing usually reveals such severe errors relatively quickly, so these tests often require fewer participants."

But how many testers do you think you'll need for each iteration of prototype testing?
