Friday, December 9, 2016

Examining User eXperience

When I talk about usability and User eXperience (UX) I often pause to explain the difference between the two concepts.

Usability is really about how easily people can use the software. Some researchers attach definitions to it, like Learnability and Memorability, but in the end usability is all about real people trying to do real tasks in a reasonable amount of time. If people can easily use the software to do the things they need to do, then the software probably has good usability. If people can't do real tasks, or they can't do so in a reasonable amount of time, then the software probably has bad usability.

User eXperience (UX) is more about the emotional response people have when using the software. Does the software make them feel happy, and do they want to use it again next time? Then the software probably has a positive UX. If the software makes users feel unhappy and they don't want to use it again, then the software probably has a negative UX.

In most cases, usability and UX are strongly aligned. And that makes sense. If you can use the software to get your work done, you probably have a good opinion of the software (good usability, positive UX). And if you can't use the software to do real work, then you probably won't have a great opinion of it (bad usability, negative UX).

But it doesn't always need to be that way. You can have it the other way around; it just doesn't happen that often. For example, there's an open source software game that I like to play sometimes (I won't name it here). It's a fun game, the graphics are well done, the sounds are adorable. When I'm done playing the game, I think I've had a fun time. And days or weeks later, when I remember the game, I look forward to playing it again. But the game is really hard to play. I don't know the controls. The game doesn't show you how to move around or how to fire the weapons. And for a turn-based game with a time limit, it's important that you know how to move and shoot. Every time I play this game, I end up banging on keys to figure out which key does which action. It's not intuitive to me. Essentially, I have to re-learn how to play the game every time I play it.

That game has bad usability, but a positive UX. I don't know how to play it, I have trouble figuring out how to play it, but (once I figure it out) I have a fun time playing it and I look forward to playing it again. That's bad usability and a positive UX.

And that means we can't rely on good usability to guarantee a positive UX. We need to examine both.

When I mentored Diana, Renata and Ciarrai this summer in GNOME and Outreachy, Diana performed a UX test on GNOME. This was the first time we'd attempted a UX test on GNOME, and I think we all recognize that it didn't go as well as we'd hoped. But I think we have a good foundation to make the next UX test even better.

We did some research into UX testing, and one method that looked interesting was asking test participants to identify their emotional response with an emoji. Diana looked around and found several examples of emojis, and we decided to move forward with this scale:


The emoji range from "angry" and "sad" on the left, to "meh" and "?" in the middle, to "happy" and "love" on the right. It seemed like a good scale for asking respondents to indicate their emotion.

For the test, we identified three broad scenario tasks that testers would respond to. Each tester logged into a GNOME workstation with a fresh account, so they would get the "new user" experience. From there, each tester was asked to complete the scenario tasks. We intentionally left the scenario tasks somewhat vague, so testers wouldn't feel "boxed in" by the test. We only wanted testers to exercise GNOME. The scenario tasks represented what new users would likely do when using GNOME for the first time: access their email (we wiped the machine afterwards) and pretend that the files on a USB flash drive came from an old machine, copying those files to the new computer wherever they saw fit.

The scenario tasks took about thirty minutes to complete. Afterward, Diana interviewed the testers, including asking them to respond with the emoji chart. Specifically: "Thinking back to the first ten minutes or so, what emoji represents what you thought of GNOME?" and "Thinking about the last ten minutes or so, what emoji represents what you thought of GNOME?"

From those questions, we hoped to identify what new users thought of GNOME when they first started using it, and after they'd used it for a little while.

However, when we looked at the results, I saw two problems:

We didn't use enough testers. Diana was only able to find a few testers, and this clearly wasn't enough to find consensus. With iterative usability testing, you usually only need about five testers to get "good enough" results to make improvements. But that assumes usability testing, not UX testing. We need more than five UX testers to understand what users are thinking.

We used too many emoji. Ten emoji turns out to be a lot. If you go through the list of emoji, you may ask what's the difference between #3, #4, and #5. All seem to express "meh." And is there significant distinction between #8 and #9? Both are "happy." There may be similar overlap on other emoji in this list. Having too many choices makes it that much more difficult to read the emotions of users.

I spoke at the Government IT Symposium this week, and I gave three presentations. One of them was "Usability Testing in Open Source Software" and was about our usability tests this summer with GNOME and Outreachy. Attendees had great comments about Renata's traditional usability test and Ciarrai's paper prototype test. People also liked the heat map method to examine usability test results. When I talked about the UX test, attendees thought the emojis were a good start, but also suggested the method could be improved. I asked for help to make this better next time.

These are a few things we might do differently in the next UX test:

Use the emojis, but use fewer of them. Others agreed that there are too many emojis for testers to choose from. With so many options and little variation between them, testers may ascribe a feeling to an emoji that you don't share. With fewer emoji, we should have better reproducibility. Some people suggested five emoji (similar to a typical Likert scale) or six emoji (making it more difficult to give a "no feeling" response).

Also ask testers to name their feeling. When we ask testers to identify the emoji that represents their emotional response to part of the test, also ask the testers to name that feeling. "I pick X emoji, which means 'Y'." Then you have another data point to use in describing the UX.

Use a Likert scale. One researcher suggested that UX would be easier to quantify if we asked testers to respond to a traditional five-point or six-point Likert scale, from "hate" to "love."

Use word association. I had thought about doing this before, so it was good to hear the suggestion from someone else. When asking testers to talk about how they felt during part of the test, ask the testers to pick a few words from a word list. Say, five words. The word list could be created by selecting a range of emotions ("love" and "meh" and "hate," for example) and using a thesaurus to generate alternative words for each emotion. Sort the list alphabetically so similar words aren't grouped next to each other. We'd need to be careful in creating a word list like this, but it could provide an interesting way to identify emotion. How many "love" words did testers select versus "hate" words, and so on? One way to display the results is a "word cloud" based on the base words ("love" and "meh" and "hate," in this example). I sketch one way to tally those word choices at the end of this list.

Mix UX testing with usability testing. In this UX test, we decided to let users experiment with GNOME before we asked them what they thought about GNOME. We provided the testers with a few broad scenario tasks that represented typical "first time user" tasks. After testers had experienced GNOME, we asked them what they thought about it. Others suggested it would be better and more interesting to pause after each scenario task to ask testers what they think. Do the emoji exercise or other UX measurement, then move on to the next task. This would provide a better indicator of how their emotional response changes as they use GNOME. And it could provide some correlation between a difficult scenario task and a sudden negative turn in UX.

Add more UX questions. When I asked for help in UX testing, several people offered other questions they use to examine UX. Some also shared questionnaires they use for usability testing. And there's some crossover here. A few questions that we might consider adding include "How would you describe the overall look and feel?" and "What is your first impression of X?" and "Talk about what you're seeing on this screen and describe what you're thinking." These questions can be useful in usability testing, but also provide some insight into UX.
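
To make the word-association idea concrete, here's a small sketch of how the tally might work in the shell. This is only an illustration: the file names (love-words.txt, meh-words.txt, hate-words.txt, and responses.txt) are hypothetical, with one word per line in each word list and one tester-selected word per line in the responses file:

# build the alphabetized word list that testers choose from
cat love-words.txt meh-words.txt hate-words.txt | sort > word-list.txt

# count how many selected words map back to each base emotion
for base in love meh hate ; do
  echo "$base: $(grep -c -F -x -f $base-words.txt responses.txt)"
done

The per-emotion counts could then feed a simple bar chart or the "word cloud" on the base words.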

Thursday, November 24, 2016

FreeDOS 1.2 Release Candidate 2

We started FreeDOS in 1994 to create a free and open source version of DOS that anyone could use. We've been slow to make new releases, but DOS isn't exactly a moving target anymore. New versions of FreeDOS are mostly about updating the software and making FreeDOS more modern. We made our first Alpha release in 1994, and our first Beta in 1998. In 2006, we finally released FreeDOS 1.0, and updated to FreeDOS 1.1 in 2012. And all these years later, it's exciting to see so many people using FreeDOS in 2016.

If you follow my work on the FreeDOS Project, you should know that we are working towards a new release of FreeDOS. You should see the official FreeDOS 1.2 release on December 25, 2016.

We are almost ready for the new FreeDOS 1.2 release! Please help us to test this new version. Download the FreeDOS 1.2 RC2 ("Release Candidate 2") and try it out. If you already have an operating system on your computer (such as Linux or Windows) we recommend you install FreeDOS 1.2 RC2 in a PC emulator or "virtual machine." Report any issues to the freedos-devel email list.

You can download FreeDOS 1.2 RC2 from our Download page or at ibiblio.

Here's what you'll find:

  • Release notes
  • Changes from FreeDOS 1.1
  • FD12CD.iso (full installer CDROM) — if you have problems with this image, try FD12LGCY.iso
  • FD12FLOPPY.zip (boot floppy for CDROM)
  • FD12FULL.zip (full installer USB image)
  • FD12LITE.zip (minimal installer USB image)

Thanks to everyone in the FreeDOS Project for their work towards this new release! There are too many of you to recognize individually, but you have all helped enormously. Thank you!
If you'd like an interview about the FreeDOS Project and our upcoming FreeDOS 1.2 release, you can email me at jhall@freedos.org.

Monday, October 31, 2016

FreeDOS 1.2 RC1

You may know that I am involved in many open source software projects. Aside from my usability work with GNOME, I am probably best known as the founder and project coordinator of the FreeDOS Project.

I started the FreeDOS Project back in 1994, when MS-DOS was still the platform of choice for many people. You can read the history of FreeDOS on our website, but the short version is this: I announced the FreeDOS Project in June 1994 as a way to replace the functionality of MS-DOS. The idea of a free DOS quickly caught on, and soon developers across the world came together to contribute to and improve FreeDOS.

It's 2016, and FreeDOS is still going strong. In fact, we are planning a new release this year: FreeDOS 1.2 should be available on December 25, 2016.

You can help us make FreeDOS 1.2 a reality! We just released the FreeDOS 1.2 RC1 ("Release Candidate 1") for folks to try out. Please download and test this latest version of FreeDOS! Report any bugs or problems to the freedos-devel email list.

You can get FreeDOS 1.2 RC1 here: www.freedos.org/download

Saturday, October 29, 2016

Solitaire in a Bash script

I like to write Bash scripts. It stems from my time as a Unix and Linux systems administrator, years ago. I used to automate everything. So I got very good at writing shell scripts. Even today, when managing a personal server, I write Bash scripts to automate various jobs so I don't have to keep logging into the server all the time. For example, I have a job that parses an RSS news feed with Bash.

I admit that my Bash scripts aren't always for automation. Some of my scripts are just for fun. Like the Bash script to fill out my March Madness basketball brackets. It can be an interesting diversion to write a Bash script to do something innovative.

And lately, I've started writing another such Bash script. Let me tell you about it.

We all know the classic Klondike Solitaire card game. There have been countless computer implementations of Solitaire on every platform. We even had a simple Solitaire game on our old Apple IIe computer in the 1980s. If you run Linux, you may be familiar with AisleRiot, which supports multiple card solitaire games, including the classic Klondike Solitaire. More recently, Google added a browser-based version of Klondike Solitaire; just search for the term "solitaire" and you'll get an option to "Click to play" the web version.

I wanted to write my own version of Klondike Solitaire as a Bash script. Sure, I could grab an existing shell script implementation of Solitaire called Shellitaire, but I liked the challenge of writing my own.

And I did. Or rather, I mostly did. I have run out of free time to work on it. So I'm sharing it here in case others want to build on it. I have implemented most of the game, except for the card selection. You might think that's the toughest part, but I don't think so; I'll explain at the end.

So, how do you write a solitaire card game in a shell script?

I found it was easiest to leverage the strength of shell scripts: files. I started with all 52 cards in a single "deck" file, shuffled it, then "drew" cards from the deck into piles on the tableau.

Let's start with the basics. Creating 52 cards, thirteen cards in each of four suits, is straightforward:

for s in c d h s ; do
  for n in $(seq 1 13) ; do
    echo "$s$n"
  done
done

You can direct the output to a file, and you have a file containing an ordered representation of all 52 cards. On a Linux system, you can use GNU shuf(1) to randomize ("shuffle") an input file, which can also be a pipe. So to define a shuffled deck of 52 cards, you do this:

deck=/tmp/sol/deck

for s in c d h s ; do
  for n in $(seq 1 13) ; do
    echo "$s$n"
  done
done | shuf > $deck

Drawing cards from the deck requires a little more work, but not much more. I wrote a simple function popn() that takes ("pops") the first n lines of a file and returns those lines, and shortens the file at the same time. Usually, this will be one at a time, but we'll need the flexibility later.

On top of that function, I wrote another simple function drawcard() that draws a single card from the deck and assigns it to the end of a "drawn cards" pile. This function also needs a little extra logic to deal with an empty deck; if the deck is empty, we return the drawn cards to the deck and start over. This is why it's important to pop from the start of the deck and append drawn cards to the end of the "drawn cards" pile; you can easily reset the deck using the drawn cards:

function popn() {
  # pop the first n($2) items from a file($1) and print it
  head -$2 $1
  cp $1 /tmp/tempfile
  awk "NR>$2 {print}" /tmp/tempfile > $1
}

function drawcard() {
  if [ $(cat $deck | wc -l) -eq 0 ] ; then
    cp $draw $deck
    >$draw
  fi

  popn $deck 1 >> $draw
}

In Klondike Solitaire, the play area is a tableau of seven piles of cards, where the first n-1 cards are piled "face down" and the last card is placed "face up." So for the first column, there are no "face down" cards, and only one "face up" card. On the seventh column, you start with six "face down" cards, and only one "face up" card. The player must move cards on these seven piles, eventually transferring the cards to four separate "foundation" piles, where each pile is dedicated to a separate suit: clubs, diamonds, hearts, and spades.

Creating the play area requires the use of Bash arrays. I rarely use arrays in Bash, but they are a very useful feature. Bash supports both indexed arrays (0, 1, 2, …) and associative arrays (where the index is a string value). You define a variable as an indexed array using the declare -a directive, and as an associative array with the declare -A directive.

With this, it's simple enough to define the tableau card piles as separate files, then use the popn() function to deal cards from the deck into these files.

declare -a tabdraw
for n in $(seq 1 7) ; do tabdraw[$n]="/tmp/sol/tab.draw.$n" ; done

declare -a tab
for n in $(seq 1 7) ; do tab[$n]="/tmp/sol/tab.$n" ; done

declare -A found
for s in c d h s ; do found[$s]="/tmp/sol/found.$s" ; done

# deal cards from deck into tableau {curly brackets needed on array refs}

for n in $(seq 1 7) ; do popn $deck 1 >> ${tab[$n]} ; done

for n in $(seq 1 7) ; do popn $deck $((n - 1)) >> ${tabdraw[$n]} ; done

And that's the guts of a Solitaire game in Bash! When assembled, my Bash script looked like this:

#!/bin/bash

# create work directory

tmpdir="/tmp/sol.tmp.$RANDOM"
[ ! -d $tmpdir ] && mkdir $tmpdir

# create variables

deck="$tmpdir/deck"
draw="$tmpdir/draw"
tmpf="$tmpdir/tmpf"

declare -a tabdraw
for n in $(seq 1 7) ; do tabdraw[$n]="$tmpdir/tab.draw.$n" ; done

declare -a tab
for n in $(seq 1 7) ; do tab[$n]="$tmpdir/tab.$n" ; done

declare -A found
for s in c d h s ; do found[$s]="$tmpdir/found.$s" ; done

# create functions

function popn() {
  # pop the first n($2) items from a file($1) and print it
  head -$2 $1
  cp $1 $tmpf
  awk "NR>$2 {print}" $tmpf > $1
}

function drawcard() {
  if [ $(cat $deck | wc -l) -eq 0 ] ; then
    cp $draw $deck
    >$draw
  fi

  popn $deck 1 >> $draw
}

function showcards() {
  # show the cards again - repaint the screen

  clear

  for n in $(seq 1 7) ; do cat ${tabdraw[$n]} | wc -l > $tmpdir/wc.$n ; done

  paste $tmpdir/wc.[1-7]
  paste ${tab[*]}

  echo '___  ___  ___  ___'

  paste ${found[*]}

  echo '___'

  cat $deck | wc -l
  tail -1 $draw
}

# build and shuffle initial deck

for s in c d h s ; do
  echo "${s}0" > ${found[$s]}

  for n in $(seq 1 13) ; do
    echo "$s$n"
  done
done | shuf > $deck

# deal cards from deck into tableau {curly brackets needed on array refs}

for n in $(seq 1 7) ; do popn $deck 1 >> ${tab[$n]} ; done

for n in $(seq 1 7) ; do popn $deck $((n - 1)) >> ${tabdraw[$n]} ; done

# loop until quit ('q')

input='n'
while [ "$input" != "q" ] ; do
  case "$input" in
  'n')
    # draw next card
    drawcard
    ;;
  *)
    # parse into card1,card2
    # is this a valid request?
    # is this a valid move?
    ;;
  esac

  showcards

  echo -n '?> '
  read input
done

# cleanup

rm -rf $tmpdir

I have left unfinished the logic to move cards around on the tableau. This might seem like the most difficult part, but not really. Since every pile on the tableau is a file, it's easy enough to search for a requested card in each of the "face up" piles, then extract all lines (cards) from that "face up" pile and append them onto another "face up" pile. You'd need a little extra logic in there to move Kings to an empty space on the tableau, but that's not very difficult.

I envisioned splitting the logic into three parts:

1. Verifying that this is a valid request
For example, the user is not allowed to move a black card onto a black card. Also, the user can only move cards in descending order on the tableau, and ascending order on the foundation. That's easy logic.
2. Confirming that the cards are there to move
This should be a fairly straightforward process using fgrep(1) to locate the requested cards on the tableau or in the foundation.
3. Moving the cards
I believe this should be easy, if you remember that cards are only entries in files. You can easily write a function that outputs all lines starting with card A, and appends them to the end of another file. At the same time, the function can truncate the first file starting at card A.
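
Here's a rough sketch of how steps 2 and 3 might look. This isn't part of the script above; the movecard function name and its arguments are placeholders, it reuses the script's $tmpf temporary file, and it assumes the step-1 validity checks happen before you call it:

function movecard() {
  # movecard CARD FROMFILE TOFILE
  # find the line number of the requested card in the source pile
  local line=$(grep -n "^$1$" $2 | cut -d: -f1)
  [ -z "$line" ] && return 1

  # append that card, and everything piled after it, to the destination pile
  tail -n +$line $2 >> $3

  # truncate the source pile just before the requested card
  head -n $((line - 1)) $2 > $tmpf
  cp $tmpf $2
}

For example, movecard h7 ${tab[4]} ${tab[2]} would move the 7 of Hearts, and any cards stacked after it, from the fourth tableau pile onto the second.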

When you start the script, you should see output similar to this:

0       1       2       3       4       5       6
s12     c11     s2      h7      s11     h11     d3
___     ___     ___     ___
c0      d0      h0      s0
___
23
c13
?> 

The output is intentionally functional, and uses paste(1) to merge the output from several tableau files. What you're seeing:
  • The first line shows the number of cards remaining in each of the tableau "face down" piles.
  • The second line shows the "face up" cards for each pile on the tableau. This sample output indicates: 12 (Queen) of Spades, 11 (Jack) of Clubs, 2 of Spades, 7 of Hearts, 11 (Jack) of Spades, 11 (Jack) of Hearts, and 3 of Diamonds.
  • The third line shows the empty foundation piles. I initialized these as the "zero" cards so the logic to transfer cards to the foundation could remain simple; you only move cards of the same suit in ascending order. (I sketch that check after this list.)
  • The fourth line shows the number of cards remaining to be drawn on the deck.
  • The fifth line shows the "face up" card from the deck. In this case, it is the 13 (King) of Clubs.
  • The last line shows a prompt (?>) for the user to enter a command.
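
As an aside, here's a minimal sketch of why those "zero" cards keep the foundation logic simple. This function isn't in the script above, and the canfound name is just a placeholder; it succeeds only when a card is exactly one higher than the current top card of its suit's foundation pile:

function canfound() {
  # card format is "$s$n", for example "h7"
  local suit=${1:0:1}
  local num=${1:1}

  # the foundation starts with a "zero" card (like "h0"), so an Ace ("h1")
  # passes the same "exactly one higher" test with no special case
  local top=$(tail -1 ${found[$suit]})
  [ $num -eq $(( ${top:1} + 1 )) ]
}

You would call something like canfound h1 before appending that card to ${found[h]}.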

This Bash script was a lot of fun to write, but I don't have time to finish it. The script took an afternoon to write, and an hour to tweak. Even so, I think this is a solid start to play Solitaire in a Bash script.

Feel free to finish this script. Please consider it licensed under Creative Commons Attribution. So if you use it somewhere else, such as to write an article and publish it, you should credit me as the original author.

Sunday, October 2, 2016

Looking ahead: Usability of open source software

This Spring semester, I look forward to teaching CSCI 4609 Processes, Programming, and Languages: Usability of Open Source Software. This is the second time I will teach the class at the University of Minnesota Morris, although it's more like the fifth time because I structured the course outline to be very similar to the Outreachy internships I've mentored now for three cycles.

Interested in a preview of the course? Here's a quick breakdown of the syllabus:
CSCI 4609: Usability of Open Source Software

Introduction to usability studies and how users interact with systems using open source software as an example. Students learn usability methods, then explore and contribute to open source software by performing usability tests, presenting their analysis of these tests, and making suggestions or changes that may improve the usability.

Course objectives:
  • To understand what usability is and apply basic principles for how to test usability
  • Design and develop personas, scenarios, and other artifacts for usability testing
  • Create and execute a usability test against an actual product
  • To identify and reflect on the value and presentation of usability test results

Each student will work with the professor and other students to choose an individual project to complete during the second half of the term.

Requirements:
  • Class engagement (discussions, presentations, feedback)
  • Projects (small group project, and larger individual project)
  • Final paper to document your individual project

Each discussion will be worth 5 points. This is graded on a scale: no points for no discussion posted, and 1 to 5 points based on the quality of your discussion. For example: a well thought-out discussion will be given 5 points; a barely sketched-out discussion post will earn 1 point.

Course outline:
  1. Introduction
  2. What is usability?
  3. How do we test usability?
  4. Personas
  5. Scenarios
  6. Scenario tasks
  7. User interfaces
  8. Mini project (two weeks)
  9. Final project (four weeks)
  10. Final paper

Based on what I learned from teaching the class last time, I'll be sure to arrange the weeks to leave more time for the final project, and to spread discussion throughout the week. For example, in the first half of the course, there's a lot of research and practice: learn about a topic and post a summary, then apply what you have learned towards a specific assignment. This time, I'll have the first discussion assignment due around Thursday each week, and the practice assignment due on Sunday, assuming each week starts on a Monday and ends on Sunday night.

I will also change the points. Last time, I had a 60/40 split between discussion points and the final paper. I totaled your discussion points, and that was 60% of your grade; your final paper was the other 40%. This year, I plan to make the points cumulative. If you assume 20 discussions at 5 points each, that's 100 points for discussion. The paper is an additional 50 points. That makes it clear you cannot skip the discussion and hope for a strong paper; neither can you punt the paper and rely on your discussion points. You need to participate every week and you need to make an effort on the final paper to get a good grade in the class.
image: University of Minnesota Morris

Sunday, September 25, 2016

Great first year at LAS GNOME!

This was the first year of the Libre Application Summit, hosted by GNOME (aka "LAS GNOME"). Congratulations to the LAS GNOME team for a successful launch of this new conference! I hope to see more of them.

In case you missed LAS GNOME, the conference was in Portland, Oregon. I thoroughly enjoyed this very walkable city; Portland is a great venue for a conference. When I booked my hotel, I found lots of hotel options within easy walking distance of the LAS GNOME location. I walked every day, but you could also take any of the many light rail, bus, or trolley options running throughout the city.

I encourage you to review this year's conference schedule to learn about the different presentations. I'll share only a few highlights of my own. I also live-tweeted through many of the presentations, and I'll share some of those tweets here:


Alexander Larsson gave a great presentation ("Taking back the apps from the distributions") about Flatpak. I'd followed Flatpak before, but until Alexander's presentation, I never really grokked what Flatpak can do for us. With Flatpak, anyone can provide an application or app anywhere, on any distribution.


Today, you may be comfortable installing whatever application you need through your distribution. But what if your distribution doesn't provide the application you are looking for? And what if you want to run an application that a distribution isn't likely to want to provide (such as a commercial third-party office application or game)? In these cases, you either hope the person providing the application can provide a package for your distribution, or you go without.

With Flatpak, you can bundle an application to run on any Linux distribution, and it should just run. I see this as a huge opportunity for indie games on Linux! Imagine the next version of PuzzleQuest also being available for Linux, or a version of Worms for Linux, or a version of Inside for Linux. With Flatpak, these indie developers could bundle up their apps for Linux—and sell them.


Also, a great thing about going to conferences is the opportunity to finally meet people you've only chatted with online. Ciarrai was one of my interns during the May-August cycle of Outreachy, and we finally got to meet at LAS GNOME! We had emailed back and forth in the weeks ahead of time, since we knew we'd both be there. But it was great to finally meet them in person!

LAS GNOME had scheduled talks in the morning, and "unconference" topics in the afternoon. With an "unconference," people suggest topics for an impromptu presentation or workshop, and attendees vote on them.

In the afternoon of day 1, I hosted an informal workshop on usability testing, at the same time my friend Asheesh gave an unconference presentation on web app packaging in Sandstorm. I was surprised to see so many developers at my how-to presentation about usability testing. Thanks to all who attended! Scott snapped this selfie of the two of us:


On day 2, Asheesh Laroia gave a presentation ("How to make open source web apps viable") about Sandstorm. Asheesh and I have known each other for a few years now, and I've followed his work with Sandstorm. But it was great to see Asheesh walk us through a demo of Sandstorm to really show off what it can do. In short, Sandstorm provides an open source software platform that helps people collaborate over the web.

And Sandstorm is designed with several layers of security, so it's very locked down. Even if you can "break out" of one web application, you can't get elsewhere on the system. As Asheesh put it during his talk, it's like Google Docs, but more secure:


On day 3, Stephano Cetola shared a wonderful story about how he got involved in open source software and made it his career ("Endless Summer of Code: Getting Involved in OSS"). I loved how Stephano talked about his experimental nature, and how he learned about technology by letting the "magic smoke" out of things.

Because of his eagerness and willingness to learn, Stephano found new opportunities to grow—eventually landing a position where he works for Intel, working on the Yocto project. A great journey that was a joy to experience!


Finally, I gave my presentation on usability testing for GNOME ("GNOME Usability Testing"). I opened by talking about different ways (direct and indirect) that you can test usability of software. From there, I gave an overview of the Outreachy internship, and what Renata, Ciarrai and Diana worked on as part of their internships.

I was thrilled for Ciarrai to share their part of usability testing from this cycle of Outreachy. Ciarrai's project was a paper prototype test of a new version of GNOME Settings. Renata conducted a traditional usability test of other ongoing work in GNOME, and Diana worked on a User eXperience (UX) test of GNOME.


I think my presentation went well, and we had a lot of great questions. Thanks to everyone!

Again, congratulations to the LAS GNOME team for a successful first year!
image: LAS GNOME
photo: Scott

Sunday, September 18, 2016

Headed to LAS GNOME!

By the time this gets posted on the blog, I will be headed to LAS GNOME. I'm really looking forward to being there!

I'm on the schedule to talk about usability testing. Specifically, I'll discuss how you can do usability testing for your own open source software projects. Maybe you think usability testing is hard—it's not! Anyone can do usability testing! It only takes a little prep work and about five testers to get enough useful feedback that you can improve your interface.

As part of my presentation, I'll use our usability tests from this summer's GNOME usability tests as examples. Diana, Renata and Ciarrai did a great job on their usability testing.

These usability testing projects also demonstrate three different methods you can use to examine usability in software:

Ciarrai did a paper prototype test of the new GNOME Settings. A paper prototype is a useful test if you don't have the user interface nailed down yet. The paper prototype test allows you to examine how real people would respond to the interface while it's still mocked up on paper. If you have a demo version of the software, you can do a variation of this called an "animated prototype" that looks a lot like a traditional usability test.

Renata performed a traditional usability test of other areas of ongoing development in GNOME. This is the kind of test most people think of when they hear about usability testing. In a traditional usability test, you ask testers to perform various tasks ("scenario tasks") while you observe what they do and take notes of what they say or how they attempt to complete the tasks. This is a very powerful test that can provide great insights to how users approach your software.

Diana led a user experience (UX) test of users who used GNOME for the first time. User experience is technically different from usability. Where usability is about how easily real people can accomplish real tasks using the software, user experience focuses more on the emotional reaction these people have when using the software. Usability and user experience are often confused with each other, but they are separate concepts.

I hope to see you at the conference!
Interested in my slides from LAS GNOME? You can find them on my personal website at www.freedos.org/jhall/uploads/. I'll keep them there for at least the next few weeks.
image: LAS GNOME