Tuesday, July 26, 2016

Testing for Usability

I recently came across a copy of Web Redesign 2.0: Workflow That Works (book, 2005) by Goto and Cotler. The book includes a chapter on "Testing for Usability," which is brief but informative. The authors comment that many websites are redesigned because customers want to add new features or to drive more traffic to the website. But they rarely ask the important questions: "How easy is it to use our website?" "How easily can visitors get to the information they want and need?" and "How easily does the website 'lead' visitors to do what you want them to do?" (That last question is particularly interesting for certain markets.)

The authors highlight this important attribute of usability: (p. 212)
Ease of use continues to be a top reason why customers repeatedly return to a site. It usually only takes one bad experience for a site to lose a customer. Guesswork has no place here; besides, you are probably too close to your site to be an impartial judge.

This is highly insightful, and underscores why I continue to claim that open source developers need to engage with usability. As an open source developer, you are very close to the project you are working on. You know where to find all the menu items, you know what actions each menu item represents, and you know how to get the program to do what you need it to do. This is obvious to you because you wrote the program. It probably made a lot of sense to you at the time to label a button or menu item the way you did. Will it make the same sense to someone who hasn't used the program before?

Goto and Cotler advise testing your current, soon-to-be-redesigned product. Testing the current system for usability helps you understand how users actually use it, and indicates the rough spots to focus on first.

The authors also provide this useful advice, which I quite like: (p. 215)
Testing new looks and navigation on your current site's regular users will almost always yield skewed results. As a rule, people dislike change. If a site is changed, many regular users will have something negative to say, even if the redesign is easier to use. Don't test solely on your existing audience.

Emphasis is mine. People dislike change. They will respond negatively to any change. So be cautious and include new users in your testing.

But how do you test usability? I've discussed several methods here before, including Heuristic Review, Prototype Test, Formal Usability Test, and Questionnaires. Similarly, Goto and Cotler recommend traditional usability testing, and suggest three general categories of testing: (p. 218)

Informal testing: May take place in the tester's work environment or another setting. Testers are co-workers or friends. Uses a simple task list, observed and noted by a moderator.
Semiformal testing: May or may not take place in a formal test environment. Testers are pre-screened and selected. The moderator is usually a member of the team.
Formal testing: Takes place in a formal facility. Testers are pre-screened and selected. Uses scenario tasks, moderated by a human factors specialist. May also include a one-way mirror and video monitoring.

The authors also recommend building a task list that represents what real people actually would do, and writing a usability test plan (essentially, a brief document that describes your overall goals, typical users, and methodology). Goto and Cotler follow this with a discussion about screening test candidates, then conducting the session. I didn't see a discussion about how to write scenario tasks.

The role of the moderator is to remain neutral. When you introduce yourself as moderator, remind the tester that you will be a silent observer, and that you are testing the system, not them. Encourage the testers to "think aloud"—if they are looking for a "Print" button, they should say "I am looking for a 'Print' button." Don't describe the tasks in advance, and don't set expectations (such as "This is an easy task").

It can be hard to moderate a usability test, especially if you haven't done it before. You need to remain an observer; you cannot "rescue" a tester who seems stuck. Let them work it out for themselves. At the same time, if the tester has given up, you should move on to the next task.

Goto and Cotler recommend you compile and summarize data as you go; don't leave it all for the end. Think about your results while the sessions are still fresh in your mind. The authors prefer a written report to summarize the usability test, showing the results of each test, problem areas, comments, and feedback. As a general outline, they suggest: (p. 231)
  1. Executive summary
  2. Methodology
  3. Results
  4. Findings and recommendations
  5. Appendices

This is fine, but I prefer to summarize usability test results with a heat map. This is a simple visual device that concisely displays test results in a colored grid. Scenario tasks are on rows, and testers are on columns. For each cell, use green for a task the tester found easy, yellow for a task with some difficulty, orange for a harder task, red for a very difficult task, and black for a task the tester could not figure out.
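To make the idea concrete, here is a minimal sketch of such a heat map, rendered as an HTML table in Python. The task names, tester labels, and difficulty scores below are hypothetical examples for illustration, not data from an actual usability test.

```python
# Difficulty scale for each cell: index 0 is easiest, 4 means the
# tester could not figure out the task at all.
COLORS = ["green", "yellow", "orange", "red", "black"]

def heat_map(tasks, testers, scores):
    """Build an HTML table with scenario tasks on rows and testers
    on columns. scores[task][tester] is an index into COLORS."""
    header = "<tr><th></th>" + "".join(f"<th>{t}</th>" for t in testers) + "</tr>"
    rows = [header]
    for task in tasks:
        cells = "".join(
            f'<td style="background:{COLORS[scores[task][t]]}">&nbsp;</td>'
            for t in testers
        )
        rows.append(f"<tr><th>{task}</th>{cells}</tr>")
    return "<table>" + "".join(rows) + "</table>"

# Made-up results for two scenario tasks and three testers:
scores = {
    "Print a document": {"T1": 0, "T2": 1, "T3": 0},
    "Change the default font": {"T1": 2, "T2": 4, "T3": 3},
}
html = heat_map(list(scores), ["T1", "T2", "T3"], scores)
```

A row of mostly green cells needs no further attention; a row with red or black cells is an item for immediate work in the next design iteration.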

Whatever your method, at the end of your test, you should identify those items that need immediate attention. Work on those, make improvements as suggested by the usability test, then test it again. The value in usability testing is to make it an iterative part of the design process: create a design, test it, update the design, test it again, repeat.
image: web-redesign.com
