Tuesday, October 30, 2012

Designing a usability test

Another thing that came out of my discussions with thought leaders is how to design a usability test. A few thoughts on that.

When I first considered this project, I thought I could do an "open ended" field usability test, recruiting users at large to run a usability test of a program on their own computers, according to a set of scenarios that I would design for them. I imagined that, while users would carry their own experiences with them, they would be able to execute the test by following a set of user personas for the program. Personas would describe general users of the program, including details that inform the tester of what the persona would be like.

In discussing this option with thought leaders, I realized that this is not the right way to do a test. There's a lot of value in doing an in-person usability test. Bring the test subject into a lab (the lab can be formal or informal - even a desk in an office) and ask that person to execute a series of workflow tasks. For example, to test a word processor, we might ask testers to type a few short paragraphs of text (provided for them), start a new tab, search and replace text, print, change the font, and perform other typical tasks. The key is that these scenarios should demonstrate typical, average use of the program.

The test scenarios do not (necessarily) need to exercise every function of the program, as long as they demonstrate how a typical user of average knowledge would use the program. Generally, those test results can be applied to other parts of the program as well.

In this study, I am interested in what makes for good usability in open source software. The usability test design for this need not be complicated: I will create a bootable USB "flash" drive that runs Fedora, a version of Linux. This flash drive can be easily rewritten between tests, restoring the drive to a known, default state so each usability tester has the same starting point as the first.
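
Resetting the drive is essentially a raw image copy. As a rough sketch of that reset step (the image and device paths here are hypothetical, and on Linux this is more commonly done with a tool like dd), it might look like:

```python
# Sketch: restore a USB flash drive to a known state between tests.
# The paths below are hypothetical; the device path must point at the
# actual USB device (e.g. /dev/sdX), and writing to it requires root.

def restore_image(image_path, device_path, chunk_size=4 * 1024 * 1024):
    """Copy a live image onto the target device, chunk by chunk."""
    written = 0
    with open(image_path, "rb") as src, open(device_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)
            written += len(chunk)
    return written  # bytes copied, for a quick sanity check

# restore_image("fedora-live.iso", "/dev/sdX")  # run between test sessions
```

The point of the reset is simply that every tester starts from the same bits, so no tester inherits the settings or files left behind by the previous one.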

The usability lab can be as simple as an office with a closed door (to prevent distractions). To start the test, I would provide some background information for the tester: why we are doing the test, what they will be expected to do (the test scenarios), and how they should act during the test (ignore the observer, speak aloud what is going through your mind at each step, etc.). Usability testers should be drawn from a wide pool, including users who are similar to the target audience: general users. Since I work at a university, potential testers might include students, faculty, and staff. I may also draw on family and friends.

After each test, I will ask the testers about their experience, and possibly explore areas that looked particularly interesting or challenging for them. What did they expect to see during this part of the test? What should this other screen have shown, since you said you were doing X? To wrap up, I will engage the tester in a brief "plus/delta" exercise: What worked well in the program? What features were particularly easy to use? What features were more challenging to use? Where did you feel confused?
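
To make the plus/delta results easy to compare across testers, the debrief notes could be captured in a simple structure and tallied afterward. A minimal sketch, with invented response data for illustration:

```python
from collections import Counter

def tally_plus_delta(responses):
    """Count how often each "plus" (worked well) and "delta"
    (needs improvement) item came up across all testers."""
    plus, delta = Counter(), Counter()
    for response in responses:
        plus.update(response.get("plus", []))
        delta.update(response.get("delta", []))
    return plus, delta

# Hypothetical debrief notes from three testers:
responses = [
    {"plus": ["search and replace"], "delta": ["print dialog"]},
    {"plus": ["search and replace", "font menu"], "delta": []},
    {"plus": ["font menu"], "delta": ["print dialog"]},
]
plus, delta = tally_plus_delta(responses)
print(plus.most_common())   # recurring "plus" themes across testers
print(delta.most_common())  # recurring problem areas
```

Items that several testers mention independently are the ones worth reporting; a one-off comment may just reflect that tester's background.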

The plus/delta will be important for the results. I will need to report both in the final publication, even though I intend to focus on the features that contributed to good usability, so that other open source developers might mimic those successful features in their own programs.

Discussions with thought leaders

Over the last few weeks, I've interviewed several thought leaders in usability and open source software. I'd like to take another opportunity to recognize Ginny Redish, Eric Raymond, Richard Stallman, Chad Fennell, Dave Rosen, Greg Laden, and others for their time, and for their input to my study.

What did I learn?

In general, all agree that open source software provides a unique challenge. To be sure, many programs (including "commercial" or "proprietary" programs) may view usability as a final phase effort - going through a usability study only at the end of a project, right before go-live, intending to "validate" design decisions made earlier. That's if they consider usability at all during the project.

However, open source programmers tend not to think about usability. Open source programmers, to borrow and paraphrase from Raymond, tend to "scratch an itch," to create programs that are interesting to them or solve a particular problem. The functionality and features of the open source program are key. How the program looks is secondary.

To put it in another context, writing an open source program is like baking a cake. The interesting part of the process is putting the ingredients together, mixing them, to assemble the cake. Once the cake (or open source program) is "baked," the look and feel is less important. Appearance is like putting the icing on the cake. It's window-dressing, and thus carries less value.

But we also (generally) agree that usability is important. How a program is used and how it appears are both key items. Truly technically-minded folks may not mind that a program is difficult to use, as long as it does the job. (For example, many folks responded to my "what programs have good usability?" question by suggesting very specific programs, with the caveat that the program may be difficult to pick up, but once you figure out how to use the menus - or once you get past a particular, quirky design choice - the program is fairly straightforward.) However, the general user with average knowledge will not be able to accomplish tasks with the software if it is difficult to use. If menus are obscure, if the workflow is convoluted, or if other aspects of usability are poorly implemented, then these users will not adopt the program.

Interestingly, thought leaders also suggest that there has been very little - if any - empirical study of the usability of open source programs. This seems to be a largely unexplored area, save for a few high-profile projects (such as the Gnome desktop environment) where a key partner (Sun Microsystems, in Gnome's case) invested effort to examine usability.

With a few exceptions (Drupal is one), the open source community at large does not include usability in its development process. As mentioned earlier, these open source developers do not see the value in studying the usability of their programs. They have little interest in it - or indeed, little knowledge of how to go about it. Thought leaders (such as Raymond) suggest that open source developers would be more likely to mimic successful usability from other programs than apply the rigor of usability study to their own work.

So that is the launching point for the rest of my project. I want to do a study of programs that demonstrate good usability and provide an analysis for why these programs "work" successfully. Of course, I've been advised by at least one thought leader that successful usability may not be very transportable to other programs - what works for one program may not be good usability in another. My results will need to carefully discuss what makes for good usability in each of the test candidates, and how this might be applied as "lessons learned" for open source developers to adopt - with the caveat to apply the lessons carefully to other projects.

Monday, October 29, 2012

Candidates for good usability

I'd like to thank everyone who responded to my plea asking what programs have good usability. This question was repeated elsewhere on other blogs and on Twitter, and I got quite a lot of feedback. Together, you suggested almost 100 candidates for good usability in open source software. I've collated the responses, and I'd like to share the results.

I was looking for GUI programs, not command-line programs or programs that use text mode or "TUI" (i.e. full-screen text-mode programs you can run at a regular terminal). In the usability test, we'll ask participants to sit at a computer and run through several exercises that are typical for that program. For example, to test a word processor, we might ask testers to type a few short paragraphs of text (provided for them), start a new tab, search and replace text, print, change the font, and other typical workflow tasks.

Picking the right open source program is a tricky thing. The ideal program should not be too big (for example, very complex menus can "lose" the audience in the details), but neither should it be too small (a trivial program will not yield results that are as valuable). The program should be approachable by general users.

Several folks suggested graphics suites such as Gimp or Inkscape. Both are fine graphics programs, and I personally agree that these seem to do a lot of things "right" in usability. However, I've decided to skip these types of programs, as they are intended for people in the graphics profession, and not the general user. Similarly, I decided to exclude disk image programs, PDF writers, operating system tools, and similar programs because they are too specific to one area, or too technical.
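
That screening can be expressed as a simple filter over the suggestion list. A sketch, where the attribute judgments are my own rough calls, for illustration only:

```python
def is_candidate(program):
    """Screen a suggested program against the selection criteria:
    approachable by general users, not trivial, not too complex."""
    return (program["general_audience"]      # not aimed at one profession
            and not program["trivial"]       # big enough to yield results
            and program["menu_depth"] <= 3)  # menus don't "lose" the audience

# Hypothetical judgments for a few of the suggestions:
suggestions = [
    {"name": "Gedit",    "general_audience": True,  "trivial": False, "menu_depth": 2},
    {"name": "Gimp",     "general_audience": False, "trivial": False, "menu_depth": 4},
    {"name": "Nautilus", "general_audience": True,  "trivial": False, "menu_depth": 2},
]
candidates = [p["name"] for p in suggestions if is_candidate(p)]
print(candidates)  # ['Gedit', 'Nautilus']
```

The numbers are not measurements, just a way to make the "not too big, not too small, approachable" test explicit and apply it consistently to every suggestion.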

So, what programs did I decide to include as candidates?

Many folks listed Gedit as an open source program with good usability. At first, I assumed text editors would be too simple to include in a usability test. When I think of text editors, my mind jumps to trivial text editors such as Windows Notepad. However, Gedit is a great suggestion! It provides a powerful text editor with a simple interface. In fact, the simplicity of the interface belies the features contained in Gedit. To quote the Gedit website, it currently features:

  • Full support for internationalized text (UTF-8)
  • Configurable syntax highlighting for various languages (C, C++, Java, HTML, XML, Python, Perl and many others)
  • Undo/Redo
  • Editing files from remote locations
  • File reverting
  • Print and print preview support
  • Clipboard support (cut/copy/paste)
  • Search and replace
  • Go to specific line
  • Auto indentation
  • Text wrapping
  • Line numbers
  • Right margin
  • Current line highlighting
  • Bracket matching
  • Backup files
  • Configurable fonts and colors
  • A complete online user manual

In other words, Gedit successfully addresses the advanced user (programmers, etc.) and the general user. For more, view the screenshots.

Ideally, I'd like to include more than one open source program in the usability test. Looking through the suggestions, possible program types include email clients, file managers, simple graphics ("paint") programs, music players, file viewers, and web browsers. There's a lot there. I am drawn to the web browsers (Chrome, Firefox) but I wonder if too many of our potential testers have used these same programs on other platforms (Windows, Mac) so will already be too familiar with them when they do the usability test. We won't really be testing the usability of these programs, but how well the user already navigates them on a different platform.

In the interests of expanding the candidate pool, I will add the Nautilus file manager to the list. File management is often overlooked because it is such a "basic" part of a desktop operating system. However, the usability of this essential feature is key. It is also a different kind of program than Gedit, so would yield different results. Together, the usability tests for Gedit and Nautilus would provide hints to successful usability in open source programs.

Tuesday, October 23, 2012

What programs have good usability?

I want to ask for your help in my study.

For my study, I want to do a "deep dive" on usability in open source software. After speaking with several "thought leaders," my thinking now is that it's better to do a case study: a usability critical analysis of an open source software program that has good usability. The results will be a discussion of why that program has good usability, and what makes good usability, so that other open source programmers can mimic the good parts.

I'll also discuss what features are not good usability examples, so programmers can avoid those mistakes. But the focus will be more on the good and less on the bad.

Picking the right open source program is a tricky thing. The ideal program should not be too big (for example, very complex menus can "lose" the audience in the details), but neither should it be too small (a trivial program will not yield results that are as valuable). The program should be approachable by general users.

There's no reason the program needs to be a Linux program. However, I prefer that the case study be of an open source program. Many open source programs also exist for Windows and Mac OS X.

So, what open source program would you suggest for the case study? Leave your suggestions in the comments.

Monday, October 22, 2012

Making an impact

Just wanted to share this item from the University of Minnesota Libraries, on research impact measures.

My discussions with thought leaders have led me to think that Linux Journal would be an excellent place to share my results. LJ is a respected magazine in the open source community - and as the name implies, it is read primarily by Linux developers and users. While LJ doesn't need the same level of impact measures that, say, Science would require, it will help to review the research impact measures when writing up my results.

Sunday, October 14, 2012

More on critical analysis

I wanted to share more about what I mean by a usability critical analysis. Who does that? What are the measures? What are the steps?

By way of providing context, let me take a step back to another usability test I performed on the FreeDOS website earlier this spring. This was a complete redesign of the website, which had not previously undergone any kind of usability study. The old website was a hodgepodge of content by different editors, each with different agendas.

To redesign the website, I started by creating user personas that represented our user base, and usage scenarios typical of those users. FreeDOS users typically fall into one of three types: casual users who want to use FreeDOS to play old DOS games, people who want to use FreeDOS to run a legacy application, and technical users who use FreeDOS in embedded systems. From that information base, I derived the new site by asking "what does each persona need to find?" and designing navigation and content areas that responded to those needs. Through a process of iteration, I arrived at a prototype of the new website, and invited the FreeDOS community to help test the new website.
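
That persona-to-navigation step can be sketched concretely. The needs below paraphrase the three FreeDOS user types; the navigation labels are invented for illustration, not the actual site design:

```python
# Map each persona's primary need to candidate navigation areas.
# The needs and labels here are illustrative only.
personas = {
    "gamer":    "install FreeDOS to play classic DOS games",
    "legacy":   "run an existing legacy DOS application",
    "embedded": "use FreeDOS in an embedded system",
}

navigation = {
    "gamer":    ["Download", "Getting Started", "Games How-To"],
    "legacy":   ["Download", "Documentation", "Compatibility"],
    "embedded": ["Download", "Technical Notes", "Source Code"],
}

def shared_areas(nav):
    """Areas every persona needs -- candidates for top-level navigation."""
    areas = [set(labels) for labels in nav.values()]
    return set.intersection(*areas)

print(shared_areas(navigation))  # every persona needs "Download"
```

Areas that every persona needs belong in the most prominent navigation; persona-specific areas can live one level down, where that persona will still find them.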

The usability evaluation was a typical prototype test: I asked each tester to exercise the new prototype as if they were one of the user personas, and according to the usage scenarios. Could the testers find the information they needed? At the end of the usability test, I asked the testers to respond to a formal questionnaire about using the new website. The questionnaire also included a section where testers identified what was working and what needed improvement with room to provide detailed suggestions.

This first evaluation led to several improvements in the prototype, and another prototype test with questionnaire. Through repetition, I arrived at the FreeDOS website that you see today.

The process of asking usability testers to evaluate a prototype and comment on it via a questionnaire was invaluable. In that usability evaluation, I was interested in both what was working and what needed improvement - in effect, a plus/delta exercise without using the terms "plus" and "delta". The "plus" items helped me identify what features were good, and the "delta" items allowed me to focus on problem areas to improve for the next iteration of the prototype website.

And frankly, the "delta" items would have been no use to me if I had not been interested in using that feedback to improve the website.

Let's jump back to the topic of usability in open source software. Last week, I interviewed Eric S. Raymond (quotes below are from that discussion). He provided an interesting view into open source developers: the process of usability is antithetical to how open source software is created. Open source developers prefer functionality over appearance, and by extension put (typically) no emphasis on interface usability. While some open source projects have a maintainer with good taste who dictates that good taste on the interface, Raymond commented that most programmers view menus and icons as "the frosting on the cake after you've baked it," and that any effort to correct usability is "swimming against a strong cultural headwind."

Our discussion left us both with the realization that if my study is to have a positive contribution to the open source community, it needs to "focus first on the good examples, rather than trying to fix the bad."

Therefore, I cannot use diagnostic usability in my study. I had originally planned to do a case study on the usability of an open source software program. My unwritten assumption was that I would start with user personas and usage scenarios, working with the author of the candidate program to understand the user base and their typical actions. And (I assumed) the study would generate output similar to my work on the FreeDOS website: what is working and what needs improvement. I might even have created an animated, non-functioning prototype of an existing program, and asked usability testers to evaluate that prototype against the user scenarios.

However, after my discussion with Raymond, I need to modify that assumption. Instead, I plan to study an open source program of a suitable size, one that was successful in addressing usability. The results of the case study wouldn't be a diagnostic analysis of the usability issues, but a summary of what works in open source usability: a usability critical analysis.

The benefits to this kind of usability study are immediate to the open source community. In my experience, and that of Raymond, most open source programmers are more likely to imitate successful designs rather than apply the rigor of usability studies to their own programs. If my study is to benefit the open source community as a whole, I need to change how I approach the case study.

That brings me back to the questions "Who does that?" and "What steps?"

In this model, the usability testers can be any willing participant. When I tested the FreeDOS website, my testers were members of the open source software community - many directly from the FreeDOS Project. In this study, the usability testers can be anyone who is interested. They do not need to be members of that open source software project; they do not even need to be part of any open source software project. The minimum requirement for these critical analysis testers is a willingness to use the program. Some base level of familiarity with the program would be helpful, but should not be required in a usability test.

The steps in this critical analysis would likely be the same that I applied in the FreeDOS website usability test: starting with user personas and scenarios, ask each tester to exercise the program according to those scenarios. At the end of the test, ask the testers to respond to a formal questionnaire about their experience, for each scenario in the test. But in this case, the focus of the questions would be what worked well rather than what needs improvement.

And that leads directly to the output of the study: a publishable result that identifies what works well in software usability, in a format that allows other open source developers to mimic those successful aspects of the design in their own projects.

Saturday, October 13, 2012

Usability critical analysis

In my other post, I shared a new kind of usability test that I might use in my study: the "Usability Analysis". Similar to a heuristic review, this other usability test would essentially be the first half of a "plus/delta" exercise, focused on what is working in the design rather than discussing what needs to be improved.

I realize now that such a critique already has a name: the critical analysis. For example, a literary critical analysis is simply a discussion of a work in literature. The use of criticism doesn't imply disapproval or a negative review, but instead a full review of the work.

A better term for my proposed usability study might be "Usability Critical Analysis."

Thursday, October 11, 2012

A new kind of usability test

Alice Preston described the types of usability methods in the STC Usability SIG Newsletter. In the list, she describes 11 different ways to evaluate usability in a design. Her list:

  1. Interviews/Observations: One-on-one sessions with users.
  2. Focus Groups: Often used in marketing well before there is any kind of prototype or product to test, a facilitated meeting with multiple attendees from the target user group.
  3. Group Review or Walk-Through: A facilitator presents planned workflow to multiple attendees, who present comments on it.
  4. Heuristic Review: Using a predefined set of standards, a professional usability expert reviews someone else's product or product design and presents a marked checklist back to the designer.
  5. Walk-Around Review: Copies of the design/prototype/wireframe are tacked to the walls, and colleagues are invited to comment.
  6. Do-it-Yourself Walk-Through: Make mock-ups of artifacts, but make the scenarios realistic. Walk through the work yourself.
  7. Paper Prototype Test: Use realistic scenarios but a fake product.
  8. Prototype Test: A step up from a paper prototype, using some type of animated prototype with realistic scenarios.
  9. Formal Usability Test: Using a stable product, an animated prototype, or even a paper prototype, test a reasonably large number of subjects against a controlled variety of scenarios.
  10. Controlled Experiment: A comparison of two products, with careful statistical balancing, etc.
  11. Questionnaires: Ask testers to complete a formal questionnaire, or matching questionnaire.

In my prospectus, I suggested that only 4 of these usability methods would apply to the study of open source usability:

  1. Heuristic Review
  2. Prototype Test
  3. Formal Usability Test
  4. Questionnaires

In that post, I had an unspoken assumption that I would likely study an open source program of a suitable size, but one that had usability issues that needed to be uncovered and analyzed.

However, after a discussion with Eric Raymond, I wonder if it would be better to modify that assumption. Instead, I could study an open source program of a suitable size, one that was successful in addressing usability. The results of the case study wouldn't be a diagnostic review of the usability issues, but a summary of what works in open source usability.

The benefits to this kind of usability study are immediate to the open source community. In my experience, and that of Raymond, most open source programmers are more likely to imitate successful designs rather than apply the rigor of usability studies to their own programs. If my study is to benefit the open source community as a whole, I need to change how I approach the case study.

I'm not sure how to refer to this new kind of usability test. I don't recall seeing a name for it in the literature. It seems very similar to a Heuristic Review - except that the results identify positive design ideas rather than identifying usability issues for a designer to correct. In a different framework, this is the first half of a "plus/delta" exercise (where participants identify what works and what needs improvement). In that respect, it's also similar to the method of Questionnaires. But I still think it's a different thing.

Perhaps a good name for this is a "Usability Analysis," which adequately and simply describes the method. I'll use that name for now, until someone points out a more correct description.

Sunday, October 7, 2012

Project prospectus

I wanted to share my project prospectus. Some of the text has been pulled from previous posts here. This prospectus serves as a good "state of the project" post:

How does usability impact open source software development? How might usability be applied to open source software?

Today, many popular computer programs are written under an “open source” or “Free” software license. Generally, “open source” means that the program’s source code, the computer instructions that define the program’s operation, is available for anyone to view and edit. In open source software, other developers may use the source code to add new features and functionality to a program, and contribute those changes back to the program’s author or maintainer.

This open access to the program’s source code allows ideas and information to flow between programmers, which in turn enables rapid development of enhancements and bug fixes. An open source program can quickly grow from an interesting proof of concept used by a few developers to a mature, stable application used by many general users.

However, open source applications are frequently written by developers for other developers. While some larger open source projects (e.g. GNOME) have performed usability studies, most projects lack the resources or interest to pursue a usability examination. Open source programs are often utilitarian, focused on functionality and features, with little attention paid to the user interface. This lack of focus on usability can mean open source programs are underutilized because they are perceived as difficult to use.

In my study, I will do a “deep dive” on the usability of open source software user interfaces, and examine how usability might be applied to open source software programs in general. I intend to perform a case study of a specific open source software program, a usability examination of menus and icons. The output will be a publishable research paper that discusses the usability of open source software and the results of a case study, including a proposed reorganization of menus.


Scope

A key part of this project is the case study of a specific open source program. Managing scope is important.

I will avoid open source programs that serve a very limited audience - for example, programming tools, special utilities, and other programs that are intended for advanced users. Similarly, usability results from simple open source programs would not "scale up" to larger programs. At the other extreme, programs that are overly complex or have very deep menus would be impossible to manage within this scope. For example, programs such as OpenOffice, LibreOffice, or even Firefox that have rich menus would expand the usability evaluation to such an extent that the results may not be applicable to other open source programs.

Icons are an important part of usability, and today's graphical programs often leverage some icon set to represent program actions. Typical examples are a “toolbar” of commonly-used activities, such as copy and paste, or bold and italics. Programs that exist solely in text mode, or other programs that do not make use of icons in the user interface, will yield limited usability results that apply to a small subset of users. Text-only programs will not be included in the study.

The right open source program for this usability case study needs to strike a careful balance:

  • Generally useful to a wide audience
  • Not overly technical, yet not trivial
  • Not too many menus
  • Some icons without being purely icon-driven

Methods of Usability Testing

This study will employ one or more of the following types of usability testing: (list derived from Preston)

  • Heuristic Review: Evaluators independently examine a user interface and judge its compliance with a set of usability principles. The result is a list of potential usability issues or problems.
  • Prototype Test: Test a design using an animated prototype with realistic scenarios.
  • Formal Usability Test: Using a stable product, or even a prototype, test a reasonably large number of subjects against a variety of defined scenarios. Formal usability tests are often conducted in controlled environments.
  • Questionnaires: Users respond to a formal questionnaire about their experiences using the product.

Literature Review

Calum Benson, Matthias Müller-Prove, Jiri Mzourek. “Professional Usability in Open Source Projects: GNOME, OpenOffice.org, NetBeans.” (ACM)

David M Nichols, Michael B Twidale. “Usability Processes in Open Source Projects.”

Alice Preston. “Types of Usability Methods.” STC Usability SIG Newsletter.

Eric S Raymond. “The Cathedral and the Bazaar.”

David M Nichols, Michael B Twidale. (2003). “The Usability of Open Source Software.” First Monday 8(1).

David M Nichols, Michael B Twidale. (2005). “Exploring Usability Discussions in Open Source Development.” Proceedings, 38th Annual Hawaii International Conference on System Sciences, HICSS-38, Track 7: “Internet and the Digital Economy,” p.198c.