Friday, December 28, 2012

Usability wrapup

I wanted to share some thoughts about this usability project, and how I would apply its lessons outside of open source software. The project offered real insight into improving the usability of open source software, but usability methods don't just apply to software.

At work, I am the Director of Information Technology for a small liberal arts university. As part of my role, I often interact with students and faculty. For example, last year we held several "listening sessions" to solicit direct feedback from our customers about the level of service we currently provide, and where we can improve. These listening sessions aren't very different in concept from usability testing. In usability testing, you gather input from your target audience, and use that input to inform a process.

In software, that process is the development lifecycle, which turns feedback into improvements to the product. In listening sessions, the same methods drive improvements in the organization. The two are strongly parallel.

Going forward, I now recognize that you don't need very many people to provide effective and useful feedback. In our listening sessions, we worried about attracting a "sufficient" number of folks to garner useful feedback. But as Jakob Nielsen has shown, five testers is usually enough to get you there, if you're just interested in finding the core problems with a product. Per Nielsen: "If you want a single number, the answer is simple: test 5 users in a usability study. This lets you find almost as many usability problems as you'd find using many more test participants."
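Nielsen's recommendation comes from a simple model he published with Tom Landauer: the share of usability problems found by n testers is 1 - (1 - L)^n, where L is the chance that any single tester uncovers a given problem (about 31% in their data). A quick sketch of the curve, using their published value of L:

```python
# Nielsen & Landauer model the share of usability problems found
# by n testers as 1 - (1 - L)**n, where L is the probability that
# a single tester uncovers a given problem (~31% in their data).
L = 0.31

def share_found(n, L=L):
    """Fraction of usability problems found by n testers."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} testers -> {share_found(n):.0%} of problems found")
```

With L at 31%, five testers surface roughly 85% of the problems, and each additional tester past that mostly rediscovers problems you already know about.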

And in retrospect, that rule would have applied equally well to our listening sessions. We started the sessions with 20-40 people, depending on attendance, and asked participants to identify three to four ideas that would raise the level of technology on campus. But from each roomful of contributors, we ended up with only about a dozen unique ideas; there was quite a lot of overlap. Five participants at three or four ideas each would have generated 15-20 ideas. So yes, Nielsen's rule seems to hold, even for this very different process.

That's a key takeaway from this usability study that applies outside usability work: a small sample can surface most of the problems. Of course, if you have a larger group available and the resources to test them, certainly do so. But if you can only scrape together a few participants, you can get by with as few as five interested testers.

Friday, December 14, 2012

Interesting reading

A few links to share about usability testing methods:

Usability Testing Essentials: Ready, Set...Test! by Carol Barnum. A few folks had recommended this book to me under the incomplete title "Ready, Set, Test," so I had a hard time tracking it down. From the description on Amazon:

Usability Testing Essentials presents a practical, step-by-step approach to learning the entire process of planning and conducting a usability test. It also explains how to analyze and apply the results and what to do when confronted with budgetary and time restrictions. This is the ideal book for anyone involved in usability or user-centered design, from students to seasoned professionals.

"Current Issues of Usability Characteristics and Usability Testing" by Ayobami, Hector, Hammed. It's a short read (five pages, including the references) but provides a good summary of the issues. From the abstract:

A well given design’s attention to Learnability, Contextuality and Operationability as Usability characteristics issues of software, and information systems generally is an evolving and challenging area in the field of Information systems, and Human Computer Interaction as an area of expertise. The procedure of testing and evaluating usability as quality characteristics in information systems has equally attracted researchers and professionals more in the recent times due to the developers’ attempt to meet the diverse users’ needs in the development of usable information systems. Hence, the paper aims to bring to the limelight; these current issues of usability and usability testing, the trend of the challenges inherent in the usability testing and evaluation process, then suggest the necessary attention that must be provided by subsequent researches in the bid to solve these lingering design research problems.

The article quickly identifies the key issues of usability: Learnability, Contextuality and Operationability. If your design can master those qualities, you should be okay. This echoes my own findings about open source software and usability, where open source software that exhibits successful usability shares these characteristics:

  1. Familiarity
  2. Consistency
  3. Menus
  4. Obviousness

Thursday, December 13, 2012

Usability testing tools

If you are looking to improve the usability of your website, one of the first steps is to see how your users are trying to use the site. You can start with your website statistics to get an idea of the "top ten" pages they use. You might even have additional analytics that reveal how long users remain on the site, and where they come from.
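If you don't have an analytics package, even the raw web server access log can give you that "top ten" list. Here is a minimal sketch that tallies the most-requested pages from a log in Common Log Format; the regular expression and the sample lines are my assumptions, so adjust them to match your server's actual log format:

```python
import re
from collections import Counter

# Matches the request path in a Common Log Format line,
# e.g. ... "GET /index.html HTTP/1.1" 200 512
REQUEST = re.compile(r'"(?:GET|POST) (\S+) HTTP')

def top_pages(lines, n=10):
    """Return the n most-requested paths with their hit counts."""
    hits = Counter()
    for line in lines:
        match = REQUEST.search(line)
        if match:
            hits[match.group(1)] += 1
    return hits.most_common(n)

# Made-up sample lines standing in for a real access log.
sample = [
    '127.0.0.1 - - [28/Dec/2012] "GET /index.html HTTP/1.1" 200 512',
    '127.0.0.1 - - [28/Dec/2012] "GET /about.html HTTP/1.1" 200 128',
    '127.0.0.1 - - [28/Dec/2012] "GET /index.html HTTP/1.1" 200 512',
]
print(top_pages(sample))  # /index.html appears twice, so it ranks first
```

In practice you would read the lines from the log file itself rather than an in-memory list, but the counting logic is the same.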

But I can speak from experience: seeing where users are clicking makes a very compelling case. The Useful Usability blog lists 24 website usability testing tools that you may find interesting; I've mentioned some of these in another post. At work, we have used Crazy Egg. You may also be interested in ClickHeat, an open source software tool.

These tools help to generate a "heat map" of your users' navigation habits. Where are they clicking? How much of the screen can they see?
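Under the hood, a click heat map is conceptually simple: record the (x, y) coordinate of each click, bin the coordinates into a coarse grid over the page, and count the hits per cell. Tools like Crazy Egg and ClickHeat capture the clicks in the browser; the coordinates below are made up just to show the binning step:

```python
# Conceptual sketch of heat-map binning: divide an 800x600 page
# into a 4x4 grid and count clicks per cell. The click coordinates
# are invented for illustration; real tools record them client-side.
def heat_map(clicks, width, height, cols=4, rows=4):
    """Bin (x, y) click coordinates into a rows-by-cols count grid."""
    grid = [[0] * cols for _ in range(rows)]
    for x, y in clicks:
        col = min(x * cols // width, cols - 1)
        row = min(y * rows // height, rows - 1)
        grid[row][col] += 1
    return grid

clicks = [(10, 10), (12, 14), (790, 10), (400, 300)]
for row in heat_map(clicks, width=800, height=600):
    print(row)
```

A real tool renders those counts as colored overlays on a screenshot of the page, but the clustering you care about is already visible in the raw counts.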

As always, heat maps are just one tool you can use to analyze the current usability of your website. Don't forget to start with a user-focused design built on personas and scenarios, and to do a full usability analysis.