Friday, December 29, 2017

So long, Linux Journal

If you don't know, Linux Journal has ceased publication. Unless an investor drops in at the last minute, the LJ website will soon shut down. Thus ends over twenty-three years of Linux and open source publishing. That's quite a legacy!

Linux Journal first hit shelves in April 1994. To remind you about those times: that's when Linux reached the 1.0 version milestone. That's also the same year that I started the FreeDOS Project. Elsewhere in technology, Yahoo!, Amazon, and Netscape got their start in 1994. That's also the same year the shows E.R. and Friends first hit TV. Also that year, the movies Pulp Fiction, Forrest Gump, Speed, and Stargate hit theaters.

In 1994, you most likely connected to the Internet using a dial-up modem. KDE and GNOME were several years away, so the most popular Linux graphical interface in 1994 was FVWM, or the more lightweight TWM. In 1994, you probably ran an 80486 Intel CPU, unless you had upgraded to the recently-released Pentium CPU. In mainstream computing, Microsoft's Windows 3.1 ruled; Windows95 wouldn't come out for another year. In the Apple world, Macs ran MacOS 7.1 and PowerPC CPUs. Apple was strictly a hardware company; no one had heard of iTunes or an iPod.

With that context, we should recognize Linux Journal as having made an indelible mark in computing history. LJ chronicled the new features of Linux, and Linux applications. I would argue that Linux Journal helped raise the visibility of Linux and fostered a kind of Linux ecosystem.

Linux Journal operated in the same way that Linux developers did: LJ encouraged its community to write articles, essays, and reviews for the magazine and website. You didn't do it for the money; I think I received tiny payments for the articles I submitted. Rather, you wrote for LJ for the love of the community. That's certainly why I contributed to Linux Journal. I wanted to share what I had learned about Linux, and hoped others would enjoy my contributions.

So before the Linux Journal website goes dark, I wanted to share a few articles I wrote for them. Here you are:

Update: Looks like Linux Journal was saved at the last minute by investors! From the article:
In fact, we're more alive than ever, thanks to a rescue by readers—specifically, by the hackers who run Private Internet Access (PIA) VPN, a London Trust Media company. … In addition, they aren't merely rescuing this ship we were ready to scuttle; they're making it seaworthy again and are committed to making it bigger and better than we were ever in a position to think about during our entirely self-funded past.

Saturday, December 23, 2017

Top ten in 2017 (part 2 of 2)

Following up from part 1, here are the rest of my favorite blog articles from this year:

6. Debian and GNOME usability testing
Intrigeri emailed me to share that "During the Contribute your skills to Debian event that took place in Paris last week-end, we conducted a usability testing session" of GNOME 3.22 and Debian 9. They have posted their usability test results at Intrigeri's blog: "GNOME and Debian usability testing, May 2017." The results are very interesting and I encourage you to read them! Here's an overview.
7. FreeDOS is 23 years old
In the 1980s and early 1990s, I was a huge DOS nerd. I loved DOS, and used it for everything. I wrote all my class papers in WordPerfect or shareware Galaxy Write on MS-DOS, and did all of my physics lab analysis using the As-Easy-As shareware spreadsheet. I just thought DOS was a great little operating system. So I wasn't pleased when Microsoft said they were going to "kill" MS-DOS with the next release of Windows (Windows95). In June 1994, I announced an open source software project to create our own compatible implementation of DOS, which became the FreeDOS Project. In June 2017, FreeDOS turned 23 years old. See also our free FreeDOS ebook.
8. A look back at Linux 1.0
The Linux kernel turned 26 years old this year. To celebrate, I installed one of the first true Linux distributions: SoftLanding Systems Linux 1.03, featuring the then-new Linux 1.0 kernel. I wrote about the experience; it was great to go back and explore what Linux looked like in 1994.
9. How I put Linux in the enterprise
I used to work in higher ed. In the late 1990s, we moved to a new student records system. We created an "add-on" web registration system, so students could register on-line—still a new idea in 1998. But when we finally went live, the load crushed the web servers. No one could register. We tried to fix it, but nothing worked. Then we had the idea to move our web registration to Linux, which rescued our failing system. I wrote about the experience, and shared some lessons you can apply to your own Linux migration.
10. Reflection on trip to Kiel
This summer, I attended the Kieler Open Source und Linux Tage in Kiel, Germany. I shared two presentations: a history of the FreeDOS Project, and how to do usability testing in open source software. Here is my summary of that trip.

And here's looking forward to a great 2018!

Saturday, December 16, 2017

Top ten in 2017 (part 1 of 2)

It's been a great year in the usability of open source software. As I look back on the last twelve months, I thought it would be interesting to highlight a few of my favorite blog posts from the year. And here they are, in no particular order:

1. The importance of the press kit
In December 2016, we released FreeDOS 1.2. Throughout December and into January, FreeDOS was the subject of many, many articles in tech press, magazines, websites, and journals. I credit our press kit, which made it really easy for anyone to write an article about FreeDOS. In this essay, I explain how we created our press kit, and the features your open source software press kit should contain so journalists can write about you.
2. Good usability but poor user experience
I like to remind people that usability is not the same as user experience. They really are two different concepts. Usability is about making software that real people can use to do real tasks in a reasonable amount of time. User experience is more about the emotional response users have when they use the software. Often, programs that have good usability will have a positive user experience, and vice versa. But it's possible for a program to have good usability and a negative user experience. Here's one example.
3. I can't read your website
The current trend in website design seems to be grey text on a light background. That's really hard for most people to read. And your small font sizes aren't helping, either. Here are a few examples of the same stanza of text in different styles. See also Calculating contrast ratios of text and The readability of DOS applications as interesting followup.
4. Screencasts for usability testing
When you're moderating a usability test, you may ask your testers to use the "speak aloud" protocol, where they say out loud what they are trying to do. If they are looking for a Print button, they should say "I'm looking for a Print button." As the tester works through the usability test, you might take notes about what the tester is doing, and where they are "looking" with their mouse. An easier way to track this is to record a screencast, for later playback. Here's an example in GNOME.
5. How many testers do you need?
Usability testing in open source software isn't that hard. But when I talk about how to do a usability test in open source software, most people ask me "How many testers do I need?" It turns out that you don't need that many people. The short answer is "about five."

I'll share part two of my top-ten list next week!

Sunday, December 10, 2017

The art of the usability interview

During a usability test, it's important to understand what the tester is thinking. What were they looking for when they couldn't find a button or menu item? During the usability test, I recommend that you try to observe, take notes, capture as much data as you can about what the tester is doing. Only after the tester is finished with a scenario or set of scenarios should you ask questions.

But how do you ask questions in a way to gain the most insight? Asking the right questions can sometimes be an art form; it certainly requires practice. A colleague shared with me a few questions she uses in her usability interviews, and I am sharing them here for your usability interviews:

Before starting a scenario or set of scenarios:

  • What are three things you might do in this application?
  • What menu options do you see here and what do you think they do?
  • What might you do on this panel?
  • What is your first impression of the application?
  • What do these icons do? What do they represent?

After finishing a set of scenarios:

  • Who do you think the application was created for?
  • How easy did you think it was to get around the application?
  • If you could make one change to the application, what would it be?
  • Is there a feature you think is missing?
  • Do you remember any phrases or icons that you didn't understand?

The goal is to avoid leading questions, or any questions that suggest a "right" and "wrong" answer.

Saturday, November 18, 2017

Happy birthday to Fortran!

A recent article reminded me that the Fortran programming language is now sixty years old! This is a major milestone. And while I don't really write Fortran code anymore, the article was a great reminder of my early use of Fortran.

My first compiled language was Fortran. I was an undergraduate physics student at the University of Wisconsin-River Falls, and as part of the physics program, every student had to learn Fortran programming. Since this was the very early 1990s, we used FORTRAN 77 (the new capitalization of "Fortran" would be established with Fortran 90, a few years later).

We learned FORTRAN 77 so we could do numerical analysis, or other programmatic analysis of lab data. For example, while spreadsheets of the era could calculate a linear fit to x-y data, including standard deviations, they could not fit polynomials to nonlinear data. But given a math textbook, you could write your own program in FORTRAN 77. I wrote many programs like this throughout my undergraduate career.

As a research intern at a national lab between my junior and senior years, my mentors discovered I knew FORTRAN. So I got the assignment to port a FORTRAN IV program to FORTRAN 77 (Fortran 90 had recently been defined, but the lab didn't have a compiler for it yet). It was my first programming "job," and through that experience I realized I wanted a career in IT rather than physics research.

I also taught myself the C programming language, and thereafter switched to C when I needed to write a program. I have rarely written Fortran code since then.

The last time I wrote anything in Fortran was a few years ago. At the time, I read an article about a proposed proof to the Collatz conjecture: the so-called "hailstone" sequence. From Slashdot: "A hailstone sequence starts from any positive integer n the next number in the sequence is n/2 if n is even and 3n+1 if n is odd. The conjecture is that this simple sequence always ends in 1. Simple to state but very difficult to prove and it has taken more than 60 years to get close to a solution."

I hadn't heard of the hailstone sequence before, but I thought it was an interesting problem. I could have easily written this program in C or even Bash, but I used the opportunity to return to my roots with Fortran. I created a simple FORTRAN 77 program to iterate the hailstone sequence. To celebrate Fortran's milestone birthday, I'd like to share my program with you:

C     ITERATE THE HAILSTONE SEQUENCE: THE NEXT NUMBER
C     IS N/2 IF N IS EVEN, OR 3N+1 IF N IS ODD. ITERATE
C     UNTIL N REACHES 1.

 10   PRINT *, 'Enter starting number (any positive integer):'
      READ *, N

C     PRINT *, 'You entered: ', N

      IF (N.LT.1) THEN
         PRINT *, 'You must enter a positive integer'
         GOTO 10
      ENDIF

      PRINT *, N

      ITER = 0

 20   IF (MOD(N,2).EQ.0) THEN
         N = N / 2
      ELSE
         N = (3 * N) + 1
      ENDIF

      ITER = ITER + 1

      PRINT *, N
      IF (N.NE.1) GOTO 20

      PRINT *, 'Number of iterations: ', ITER
      END

This program doesn't demonstrate the best programming practices, but it does represent many FORTRAN 77 programs. To illustrate, allow me to walk you through the code:

First, FORTRAN code was originally written on punched cards, and the first FORTRAN compilers used specific columns to interpret the program listing. FORTRAN 77 programs used the same column rules:

  • A C or * in column 1 marks the line as a comment
  • Program labels (line numbers) go in columns 1–5
  • Program statements begin in column 7, and cannot go beyond column 72
  • Any character in column 6 continues the statement from the preceding line (not used here)

While you could (and should) declare variables to be of a certain type, FORTRAN 77 used a set of implicit rules to assign variable types: variables starting with the letters I through N are assumed INTEGER, and variables starting with any other letter are assumed REAL (floating point).

My program uses only two variables, N and ITER, which are both integer variables.

FORTRAN 77 is a simple language, so you should be able to figure out what the program is doing based on those rules. I'll add a note about the code starting with line label 20. FORTRAN 77 doesn't have a do-while loop concept, so you end up constructing your own using a label, IF, and GOTO.

And that's what happens starting at label 20. The program begins a loop iteration, following the rules of the hailstone sequence: for each positive integer n, the next number in the sequence is n/2 if n is even, and 3n+1 if n is odd. After updating the ITER iteration count and printing the current value of N, the program loops back to line label 20 (using GOTO) until N reaches 1.

When the loop is complete, the program prints the number of iterations, and exits.

Here's a sample run:

 Enter starting number (any positive integer):
 …
 Number of iterations:           14

Happy birthday, Fortran!

Wednesday, November 15, 2017

QEMU and function keys (follow-up)

Since I posted my suggestion for QEMU a few weeks ago, I've learned a few things about QEMU. Thanks so much to the folks who contacted me via email to help me out.

A brief review of my issue:

I like to run FreeDOS in QEMU, on my Linux laptop. QEMU makes it really easy to boot FreeDOS or to test new installations. During our run up to the FreeDOS 1.2 release, I tested every pre-release version by installing under QEMU.

But one problem pops up occasionally when using QEMU. A lot of old DOS software uses the function keys to do various things. The most common was F1 for help, but it was common for an install program to use F10 to start the install.

And with QEMU, you can use those keys. Except some of them. Some function keys, like F10, are intercepted by the window system or desktop environment. You can get around this in QEMU by using the QEMU console (available in the menu bar or tabs) and typing a sendkey command, like sendkey f10. But that's kind of awkward, especially for new users. Nor is it very fast if you often need to use the function keys.

So I suggested that QEMU add a toolbar with the function keys.

Of course, QEMU's preference is to grab the keyboard itself and intercept all the function keys, blocking window system shortcut keys like F10. QEMU wants to do this, and I understand that QEMU used to do this. It sounds like the current issue is a regression in the Wayland implementation, and since I run Fedora Linux, I'm using Wayland.

As Daniel responded via the QEMU tracker for my bug:

Recently though, there has been a regression in this area. With the switch to Wayland instead of Xorg, the standard GTK APIs for doing this keyboard grabbing / accel blocking no longer work correctly. Instead apps need to add custom code to talk to the Wayland compositor IIRC. There's work underway to address this but its a way off.

So that explains it. I'm happy to have this captured by the application. Doing the keyboard interception "live" is a much better solution (better usability) than the toolbar I suggested. Thanks!

Wednesday, November 1, 2017

My suggestion for QEMU

I have been involved in open source software since 1993. And in 1994, I believed so strongly in the ability for people to come together to write code that I created the FreeDOS Project, to replicate the functionality of MS-DOS. And twenty-three years later, I'm still using and developing FreeDOS.

My desktop system is Linux, and I run FreeDOS using QEMU (Quick EMUlator). QEMU is very easy to use, and provides great flexibility to define your virtual machine. I run FreeDOS in QEMU when I want to play an old DOS game, or when I want to test some legacy software, or when I want to write code to update a FreeDOS program.

But one problem pops up occasionally when using QEMU. A lot of old DOS software uses the function keys to do various things. The most extreme example is WordPerfect, which was arguably the most popular commercial word processor of the day. WordPerfect is notorious for using all of the function keys, in every combination, including use of Ctrl and Alt to access all the common features. I think WordPerfect probably used all of the expanded keys too, like Home and End.

Other DOS programs use the function keys in different ways. The most common was F1 for help, but it was common for an install program to use F10 to start the install.

And with QEMU, you can use those keys. Except some of them. And strictly speaking, that's not QEMU's fault. Some function keys, like F10, are intercepted by the window system or desktop environment. You can get around this in QEMU by using the QEMU console (available in the menu bar or tabs) and typing a sendkey command, like sendkey f10. But that's kind of awkward, especially for new users. Nor is it very fast if you often need to use the function keys.

So as a frequent user of QEMU, I'd like to suggest a modification to the user interface: a toolbar with the function keys. Here's a simple mock-up to show what I mean:

A possible improvement would be to add "modifier" buttons for Ctrl, Shift, and Alt to make it easier for users to enter combinations like Ctrl-F1 or Shift-F5 or Alt-F7.

I've already submitted this idea as a feature request in the QEMU tracker, and it's been added to a Wishlist. If you are a QEMU developer, or want to make a contribution to QEMU, I encourage you to work on this toolbar.

Monday, September 18, 2017

Reflection on trip to Kiel

On Sunday, I flew home from my trip to Kiel, Germany. I was there for the Kieler Open Source und LinuxTage, September 15 and 16. It was a great conference! I wanted to share a few details while they are still fresh in my mind:

I gave a plenary keynote presentation about FreeDOS! I'll admit I was a little concerned that people wouldn't find "DOS" an interesting topic in 2017, but everyone was really engaged. I got a lot of questions—so many that we had to wrap up before I could answer all the questions.

FreeDOS has been around for a long time. We started FreeDOS in 1994, when I was still an undergraduate physics student. I loved DOS at the time, and I was upset that Microsoft planned to eliminate DOS when they released the next version of Windows. If you remember, the then-current version was Windows 3.1, and it wasn't great. And Windows's history up to this point wasn't promising: Windows 1 looked pretty much like Windows 2, and Windows 2 looked like Windows 3. I decided that if Windows 4 would look anything like Windows 3.1, I wanted nothing to do with it. I preferred DOS to clicking around the clumsy Windows interface. So I decided to create my own version of DOS, compatible with MS-DOS so I could continue to run all my DOS programs.

We recently published a free ebook about the history of FreeDOS. You can find it on our website, at 23 Years of FreeDOS.

My second talk that afternoon was about usability testing in open source software. The crowd was smaller, but they seemed very engaged during the presentation, so that's good.

I talked about how I got started in usability testing in open source software, and focused most of my presentation on the usability testing we've done as part of the Outreachy internships. I highlighted the GNOME usability testing from my interns throughout my participation in Outreachy: Sanskriti, Gina, Renata, Ciarrai, and Diana.

Interesting note: Ciarrai's paper prototype test on the then-proposed Settings redesign will be published this week, so watch for that.

The conference recorded both presentations, and they'll be uploaded to YouTube in the next few days. I'll link to them when they are up.
image: Kieler Open Source und LinuxTage

Sunday, September 17, 2017

Documentation needs usability, too

If you're like most developers, writing documentation is hard. Even more so if you are writing for end users. How do you approach writing your documentation?

Remember that documentation needs good usability, too. If documentation is too difficult to read—if it's filled with grammatical mistakes, or the vocabulary is just too dense, or even if it's just too long—then few people will bother to read it. Your documentation needs to reach your audience where they are.

Finding the right tone and "level" of writing can be difficult. When I was in my Master's program, I referred to three different styles of writing: "High academic," "Medium academic," and "Low academic."

High academic is typical for many peer-reviewed journals. This writing is often very dense and uses large words that demonstrate the author's command of the field. High academic writing can seem very imposing.

Medium academic is more typical of undergraduate writing. It is less formal than high academic, yet more formal than what you find in the popular press.

Low academic tends to include most professional and trade publications. Low academic authors may sprinkle technical terms here and there, but generally write in a way that's approachable to their audience. Low academic writing uses contractions, although sparingly. Certain other formal writing conventions continue, however. For example, numbers should be written out unless they are measurements; "fifty" instead of "50," and "two-thirds" instead of "2/3." But do use "250 MB" and "1.18 GHz."

In my Master's program, I learned to adjust my writing style according to my instructors' preferences. One professor might have a very formal attitude towards academic writing, so I would use High academic. Another professor might approach the subject more loosely, so I would write in Medium academic. When I translated some of my papers into articles for magazines or trade journals, I wrote in Low academic.

And when I write my own documentation, I usually aim for Low academic. It's a good balance of professional writing that's still easy to read.

To make writing your own documentation easier, you might also consult the Google Developer Documentation Style Guide. Google just released their guide for anyone to use. The guide has guidelines for style and tone, documenting future features, accessible content, and writing for a global audience. The guide also includes details about language, grammar, punctuation, and formatting.

Wednesday, September 13, 2017

On my way to Kieler Open Source und Linux Tage

Just wanted to share a brief update that I'm now on my way to Kiel, Germany for the Kieler Open Source und Linux Tage. I will be sharing two presentations:

The FreeDOS Project: Then and Now

I'll be talking about the history of the FreeDOS Project, and a little about where things are headed. If you don't know about FreeDOS: FreeDOS is a complete, free, DOS-compatible operating system that you can use to play classic DOS games, run legacy business software, or develop embedded systems. Any program that works on MS-DOS should also run on FreeDOS.

Usability Testing in Open Source Software

This presentation is for anyone who works on open source software, and wants to make it easier for everyone to use. I'll talk about some easy methods you can use to test the usability of your software, and then how to quickly identify the "trouble spots" that you need to fix.

If you're planning to attend Kieler, please let me know!

Update: I've made my slides available for download. You can find them on my personal page. Both presentations are available under the Creative Commons Attribution (CC BY) license.
image: Kieler Open Source und LinuxTage

Saturday, September 9, 2017

23 Years of FreeDOS

On June 29th, 2017, FreeDOS turned 23 years old. There’s nothing special about "23," but I thought it would be great to celebrate the anniversary by having a bunch of past and current users share their stories about why they use FreeDOS. So, I made a call for users to write their own FreeDOS stories.

Many people contributed their FreeDOS stories, many of which were posted on the FreeDOS Blog. We have collected these stories into a free ebook, 23 Years of FreeDOS. (CC BY 4.0) This ebook contains the voices of many of the users who contributed their stories, as well as the history of FreeDOS.

These stories are written from different perspectives, such as: "How did you discover FreeDOS?" "What do you use FreeDOS for?" and "How do you contribute to FreeDOS?" In short, I requested users to answer the question: "Why FreeDOS?"

Many individuals have helped make FreeDOS what it is, but this ebook represents only a few of them. I hope you enjoy this collection of 23 years of everything FreeDOS!

To download the free ebook, go to 23 Years of FreeDOS (ebook) on the FreeDOS website.
image: FreeDOS

Flat design is harder to understand

Interesting research from the Nielsen Norman Group shows that a flat web design is harder for people to understand. The usability study examined web pages, but the results apply equally well to graphical user interfaces.

First, let me define the "flat" web design: Websites used to use colored links (usually blue) with underlines, and 3D-style buttons. Web designers really didn't have to do anything to make that happen; the standard web styles define blue as the link color (purple for visited links), and any button element appears in a 3D style (such as beveled edges).

In recent years, it has become fashionable for web designers to "flatten" the website design: links appear like normal paragraph text, and buttons are plain rectangles with no special decoration. Here's a trivial example of a flat web design:
Title text

Hi there! This is some sample text that you might find on a website. Let's say you are on a shopping website, this text might be an item description. Or if you're on a news website, this might be the summary for a news article. And below it, you might have a link for more information.

Click here for more

Looking at that example, do you know what to click on? Do you know that you can click on something? Actually, you can click on the title text or the "Click here for more." Both are links to Google.

These flat user interface elements attract less attention and cause uncertainty, according to Nielsen's research.

The Nielsen article is very interesting, and if you are interested in usability or user interface design (or web design), then I encourage you to read it. The article includes "gaze maps" (heat maps that show where testers looked on the web page) on web pages that used a flat design (weak signifiers) versus a more traditional design (strong signifiers).

It's not all bad, though. A flat design can work in some specific circumstances. From the article: "As we saw in this experiment, the potential negative consequences of weak signifiers are diminished when the site has a low information density, traditional or consistent layout, and places important interactive elements where they stand out from surrounding elements." (emphasis mine) And, "Ideally, to avoid click uncertainty, all three of those criteria should be met, not just one or two."

So your best bet in user interface design is to make sure clickable items look clickable: buttons should have a 3D design, and links should be styled with a different color and underline to look like clickable links instead of regular text.

Thursday, September 7, 2017

Dissecting the Sierpinski Triangle program

Last week, I shared my return to old school programming. My first home computer was an Apple II clone called the Franklin ACE 1000, and it was on this machine that my brother and I taught ourselves how to write programs in AppleSoft BASIC. And last week, I had an itch to return to the Apple II and write a simple BASIC program to generate the Sierpinski Triangle:

Let me briefly show you how to code the Sierpinski Triangle in AppleSoft BASIC. Fortunately, this is a simple program that only uses a few functions and statements:

  • DIM: Create an array variable
  • LET: Assign a value to a variable
  • FOR and NEXT: Basic iteration loop structure
  • PRINT: Display information to the screen
  • INT(): Return the integer portion of a number
  • RND(): Return a random number between 0 and 1

And for the graphics, these statements:

  • GR and HGR: Set standard graphics (GR) mode or high-resolution graphics (HGR) mode
  • COLOR= and HCOLOR=: Set the color for drawing graphics in GR or HGR mode, respectively
  • PLOT and HPLOT: Make a single pixel dot in GR or HGR mode, respectively

So with those instructions, I was able to iterate a chaos generation of the Sierpinski Triangle. Again, if you aren't familiar with this method to generate the Sierpinski Triangle, the brief rules are:
  1. Set three points that define a triangle
  2. Randomly select a point anywhere (x,y)
  3. Then loop:
       1. Randomly select one of the triangle's points
       2. Set the new x,y to be the midpoint between the previous x,y and the triangle point
       3. Repeat
Let's start with the first step, to define the triangle's end points. I used two parallel single-dimension arrays, and assigned values to them. This assigns the points at 0,39 (left,bottom) and 20,0 (middle,top) and 39,39 (right,bottom):
DIM X(3)
DIM Y(3)
LET X(1) = 0
LET X(2) = 20
LET X(3) = 39
LET Y(1) = 39
LET Y(2) = 0
LET Y(3) = 39
Note that AppleSoft BASIC arrays actually begin at index 0, so DIM X(3) allocates indices 0 through 3; I simply use indices 1 through 3 here. And of course, all lines must have line numbers, but I won't show those here.

Then I defined a random point anywhere on the board. Since it's random, it doesn't really matter what I pick, so I hard-coded this at 10,10:
LET XX = 10
LET YY = 10
To draw each pixel in the Sierpinski Triangle, we need to be in graphics mode. So I use GR to set standard graphics mode, and set the draw color to blue (7):
GR
COLOR= 7
I chose to make 2,000 iterations on the triangle. Maybe this was overkill for GR mode, but it worked well in HGR mode, so I just left it. You start an iteration loop with the FOR statement that specifies the start and end values, and close the loop with the NEXT statement. Since you can have nested loops, the NEXT statement requires the variable you are iterating:
FOR I = 1 TO 2000
Inside the loop, I pick a triangle end point at random, then do some simple math to find the midpoint between the current XX,YY and the end point. AppleSoft BASIC only recognizes the first two characters of a variable name. So here, IX stands for "index."

Technically, RND(1) returns a floating point value from 0 to 0.999…, so multiplying by 3 gives 0 to 2.999…. Adding 1 to this results in a floating point value between 1 and 3.999…. To use this as the index of an array, I want an integer, so I use the INT() function to return just the integer part, resulting in a final integer value between 1 and 3, inclusive:
LET IX = 1 + INT (3 * RND (1))
With that, I can compute the midpoint:
LET XN = INT ((XX + X(IX)) / 2)
LET YN = INT ((YY + Y(IX)) / 2)
Again, since AppleSoft BASIC only recognizes the first two characters of a variable name, XN represents "X new" and YN represents "Y new."

Then I reassign those values back to XX and YY, then plot the pixel on the screen:
LET XX = XN
LET YY = YN
PLOT XX,YY
And as a way to track the progress of the overall iteration, I print the counter I before starting the loop over again:
PRINT I
NEXT I
And that's how to create a Sierpinski Triangle in AppleSoft BASIC!

(Oops, I used floating point variables throughout, when I should have been using integer variables. My bad. If I were to code this again, I would use variables like XX% instead of XX. That would also obviate the need to use the INT() function, since the value would be automatically cast to integer when saving the value to the integer variable.)

BASIC is a very straightforward language, with only a few statements and essential functions. AppleSoft BASIC was not a very difficult language to learn. Even as a young child, I quickly figured out how to create simple math quizzes. From there, I graduated to larger and more complex programs, including one that effectively emulated the thermonuclear war simulator from the 1983 movie WarGames!

But I'll stop here, and leave it to you to learn more about BASIC on your own. You can find AppleSoft BASIC programming guides in lots of places on the Internet. Landsnail's Apple II Programmer's Reference is a good one.

Saturday, September 2, 2017

Return to old school programming

When my brother and I were growing up, our parents brought home an Apple II personal computer. Actually ours was one of the first Apple "clones," a Franklin ACE 1000, but it ran all of the original Apple software. And more importantly, you could write your own programs with the included AppleSoft BASIC.

My brother and I cracked open the computer guide that came with it, and slowly taught ourselves how to write programs in BASIC. My first attempts were fairly straightforward math quizzes and other simple programs. But as I gained more experience in BASIC, I was able to harness high resolution graphics mode and do all kinds of nifty things.

AppleSoft BASIC was my first programming language. And while I eventually moved on to other compiled languages (I prefer C) and other programming environments, I think I'll always have a soft spot for AppleSoft BASIC.

BASIC was a very simple programming language. Two-letter variable names, line numbers, and other hallmarks were typical for AppleSoft BASIC. But even within these limitations, you could create pretty impressive programs if you were clever.

Recently, I've been spending free time playing around with an Apple II emulator, writing a few simple programs as a "throwback" to that old school programming. The Apple IIjs emulator runs in your web browser and very effectively simulates running an old Apple II computer from the 1980s. You can find other Apple II emulators specifically for Linux.

I want to share a program I recently wrote on Apple IIjs: a chaos generation of the Sierpinski Triangle. If you aren't familiar with this method to generate the Sierpinski Triangle, the brief rules are:

  1. Set three points that define a triangle (A,B,C)
  2. Randomly select a point anywhere (x,y)
  3. Randomly select one of the triangle's points (A,B,C)
  4. Set the new x,y to be the midpoint between the previous x,y and the triangle point
  5. Repeat from step 3

And with this rule set, I created a very simple iteration of the Sierpinski Triangle. This sample uses the standard graphics resolution mode (GR) with 40×40 pixels.
image: Sierpinski Triangle in GR mode

The code to generate this image is fairly straightforward:

Two thousand steps takes forever to run on the simulated 6502 microprocessor, by the way. Just like computing in the 1980s.

For a more interesting view of the Sierpinski Triangle on the Apple II, it helps to switch to a higher resolution. Apple's high resolution mode (HGR) allowed a whopping 280×192 pixels.

This requires changing two lines of code: line 50 sets HGR mode instead of GR mode, and line 140 uses HPLOT instead of PLOT.

I don't have our original Franklin ACE 1000 anymore, or an original Apple II computer, but at least I can return to old school programming whenever I like by using an emulator.

Monday, August 28, 2017

Leadership lessons from open source software

Just wanted to point to an article I recently wrote for CIO Review Magazine, about Leadership lessons from open source software. I've been involved in open source software since I was a university student—as a user, contributor, and maintainer. Today, I'm a chief information officer in local government. While my day job is unrelated to my personal interest in open source software, I draw on many of the lessons I learned throughout my years in open source software projects.

My article shares three key lessons from open source software that I’ve carried into my career as chief information officer:
  1. Feedback is a gift
  2. Everyone brings different viewpoints
  3. You don’t have to do it all yourself
Looking for leadership lessons through the lens of unexpected sources can be interesting and insightful. We need to find inspiration from everything we experience. For myself, I like to reflect on what I have done, to find ways to improve myself.

As chief information officer, I leverage many of the lessons I learned from maintaining or contributing to open source software. While I find insights from other areas, experience drives learning, and my twenty years of personal experience in open source software has taught me much about accepting feedback, listening to others, and sharing the burden. This applies directly to my professional career.

Sunday, August 27, 2017

A look back at Linux 1.0

The Linux Kernel is 26 years old this year. And to mark this anniversary, I took a look back at where it all began. You can find my journey into Linux nostalgia over at

I discovered Linux in 1993. My first Linux distribution was Softlanding Linux System (SLS) 1.03, with Linux kernel 0.99 alpha patch level 11. That required a whopping 2MB of RAM, or 4MB if you wanted to compile programs, and 8MB to run X windows.

A year later, I upgraded to SLS 1.05, which sported the brand-new Linux kernel 1.0.

Check out the article for some great screenshots from SLS 1.05, including the color-enabled text-mode installer, a few full-screen console applications, and a sample X session with TWM, the tabbed window manager.

Tuesday, August 15, 2017

Happy birthday, GNOME!

The GNOME desktop turns 20 today, and I'm so excited! Twenty years is a major milestone for any open source software project, especially a graphical desktop environment like GNOME that has to appeal to many different users. The 20th anniversary is definitely something to celebrate!

I wrote an article on about "GNOME at 20: Four reasons it's still my favorite GUI." I encourage you to read it!

In summary: GNOME was a big deal to me because when GNOME first appeared, we really didn't have a free software "desktop" system. The most common desktop environments at the time included FVWM, FVWM95, and their variants like WindowMaker or XFCE, but GNOME was the first complete, integrated "desktop" environment for Linux.

And over time, GNOME has evolved as technology has matured and Linux users demand more from their desktop than simply a system to manage files. GNOME 3 is modern yet familiar, striking that difficult balance between features and utility.

So, why do I still enjoy GNOME today?

  1. It's easy to get to work
  2. Open windows are easy to find
  3. No wasted screen space
  4. The desktop of the future

The article expands on these ideas, and provides a brief history of GNOME throughout the major milestones of GNOME 1, 2, and 3.
image: GNOME

Saturday, August 12, 2017

Allan Day on The GNOME Way

If you don't read Allan Day's blog, I encourage you to do so. Allan is one of the designers on the GNOME Design team, and is also a great guy in person. Allan recently presented at GUADEC, the GNOME Users And Developers European Conference, about several key principles in GNOME design. Allan has turned his talk into a blog post: "The GNOME Way." You should read it.

Allan writes in the introduction: "In what follows, I’m going to summarise what I think are GNOME’s most important principles. It’s a personal list, but it’s also one that I’ve developed after years of working within the GNOME project, as well as talking to other members of the community. If you know the GNOME project, it should be familiar. If you don’t know it so well, it will hopefully help you understand why GNOME is important."

A quick summary of those key principles:

1: GNOME is principled
"Members of the GNOME project don’t just make things up as they go along and they don’t always take the easiest path."

2: software freedom
"GNOME was born out of a concern with software freedom: the desire to create a Free Software desktop. That commitment exists to this day. "

3: inclusive software
"GNOME is committed to making its software usable by as many people as possible. This principle emerged during the project’s early years."

4: high-quality engineering
"GNOME has high standards when it comes to engineering. We expect our software to be well-designed, reliable and performant. We expect our code to be well-written and easy to maintain."

5: we care about the stack
"GNOME cares about the entire system: how it performs, its architecture, its security."

6: take responsibility for the user’s experience
"Taking responsibility means taking quality seriously, and rejecting the “works for me” culture that is so common in open source. It requires testing and QA."

Allan's article is a terrific read for anyone interested in why GNOME is the way it is, and how it came to be. Thanks, Allan!

Tuesday, August 8, 2017

Simplify, Standardize, and Automate

On my Coaching Buttons blog, I sometimes write about "Simplify, Standardize, and Automate." I have reiterated this mantra in my professional career since 2008, when I worked in higher ed. A challenge that constantly faces higher ed is limited budgets; we often had to "do more with less." One way to respond to shrinking budgets was to become more efficient, which we did through a three-pronged approach of simplifying our environment, standardizing our systems, and automating tasks.

The concept of automation was always very important to me. Automation is very powerful. It can remove drudgery work from the shoulders of our staff. By allowing a machine to do repetitive tasks, we free up our staff to do more valuable tasks.

What common tasks do you do every day that could be automated, and turned into a script or program? When I worked in higher ed, I shared this comment about automation:
"If you need a report from the Data Warehouse every month, documenting the steps is certainly a good first step. But it's much better to create a script to generate it for you automatically. The file just appears when you need it, without having to repeat the steps to create it manually. That's less time to manage an individual thing, leaving you more time to work on other tasks."
Kyle Rankin recently wrote at Linux Journal about the importance of automation, part of "Sysadmin 101." Kyle identifies several types of tasks you should automate, including routine and repeatable tasks, then goes on to discuss when you should automate and how you should automate.

If you are a systems administrator, and especially if you are new to systems administration, I encourage you to read Kyle's article. Then, learn about the automation available on your system. I leverage cron and (mostly) Bash scripts on my own Linux systems. I don't have very complex tasks that have dependencies on other jobs, so that works well for me. If you need more complex automation, tools for that exist too.

Saturday, July 29, 2017

A new GNOME Board

After my Board term expired, I had planned to stay involved with the GNOME Foundation Board of Directors until the official hand-off to the new Board at GUADEC. Since GUADEC is happening right now, this marks the end of my time on the GNOME Board of Directors.

It was great to serve on the GNOME Board this year! I know we accomplished a lot of great things. Among other things, we hired a new Executive Director, who I believe will provide strong leadership for GNOME. The GNOME Board is an important part of governance too, and the Board demonstrated that by keeping GNOME moving forward in the absence of an Executive Director.

I may run for GNOME Board again in a few years, when things settle down for me. It's been a busy time lately, but as things reach a new normal, I'll be able to take on new activities in GNOME.

Good luck to everyone on the Board for the coming year! I know everyone is highly engaged, and that's what really matters for a successful Board.
image: GNOME

Friday, July 14, 2017

How I put Linux in the enterprise

I recently wrote an article for that tells the story about How I introduced my organization to Linux. Here's the short version:

I used to work in higher ed. In the late 1990s, we moved to a new student records system. We created an "add-on" web registration system, so students could register on-line—still a new idea in 1998. But when we finally went live, the load crushed the web servers. No one could register. We tried to fix it, but nothing worked.

Instead, we just shifted everything to Linux, and it worked! No code changes, just a different platform. That was our first time using Linux in the enterprise. When I left the university some seventeen years later, I think about two-thirds of our enterprise servers ran on Linux.

There's a lot going on behind the scenes here, so I encourage you to read the full article. The key takeaways aren't really the move to Linux. Instead, I use this as an example for how to deploy a big change in any environment: Solve a problem, don't stroke an ego. Change as little as possible. Be honest about the risks and benefits. And communicate broadly. These are the keys to success.

Friday, June 30, 2017

FreeDOS is 23 years old

I have been involved in open source software for a long time, since before anyone coined the term "open source." My first introduction to Free software was GNU Emacs on our campus Unix system, when I was an undergraduate. Then I discovered other Free software tools. Through that exposure, I decided to install Linux on my home computer in 1993. But as great as Linux was at the time, it was still limited, with few applications like word processors and spreadsheets—great for writing programs and analysis tools for my physics labs, but not (yet) for writing class papers or playing games.

So my primary system at the time was still MS-DOS. I loved DOS, and had since the 1980s. While the MS-DOS command line was under-powered compared to Unix, I found it very flexible. I wrote my own utilities and tools to expand the MS-DOS command line experience. And of course, I had a bunch of DOS applications and games. I was a DOS "power user." For me, DOS was a great mix of function and features, so that's what I used most of the time.

And while Microsoft Windows was also a thing in the 1990s, if you remember Windows 3.1, you should know that Windows wasn't a great system. Windows was ugly and difficult to use. I preferred to work at the DOS command line, rather than clicking around the primitive graphical user interface offered by Windows.

With this perspective, I was a little distraught to learn in 1994, through Microsoft's interviews with tech magazines, that the next version of Windows would do away with MS-DOS. It seemed MS-DOS was dead. Microsoft wanted everyone to move to Windows. But I thought "If Windows 3.2 or 4.0 is anything like Windows 3.1, I want nothing to do with that."

So in early 1994, I had an idea. Let's create our own version of DOS! And that's what I did.

On June 29, 1994, I made a little announcement to the comp.os.msdos.apps discussion group on Usenet. My post read, in part:
Announcing the first effort to produce a PD-DOS.  I have written up a
"manifest" describing the goals of such a project and an outline of
the work, as well as a "task list" that shows exactly what needs to be
written.  I'll post those here, and let discussion follow.
That announcement of "PD-DOS," or "Public Domain DOS," later grew into the FreeDOS Project that you know today. And today, FreeDOS is 23 years old!

All this month, we've asked people to share their FreeDOS stories about how they use FreeDOS. You can find them on the FreeDOS blog, including stories from longtime FreeDOS contributors and new users. In addition, we've highlighted several interesting moments in FreeDOS history, including a history of the FreeDOS logo, a timeline of all FreeDOS distributions, an evolution of the FreeDOS website, and more. You can read everything on our celebration page at our blog: Happy 23rd birthday to FreeDOS.

Since we've received so many "FreeDOS story" contributions, I plan to collect them into a free ebook, which we'll make available via the FreeDOS website. We are still collecting FreeDOS stories for the ebook! If you use FreeDOS, and would like to contribute to the ebook, send me your FreeDOS story by Tuesday, July 18.
image: FreeDOS

Monday, June 5, 2017

Help us celebrate 23 years of FreeDOS

This year on June 29, FreeDOS will turn 23 years old. That's pretty good for a legacy 16-bit operating system like DOS. It's interesting to note that we have been doing FreeDOS for longer than MS-DOS was a thing. And we're still going!

There's nothing special about "23 years old" but I thought it would be a good idea to mark this year's anniversary by having people contribute stories about how they use FreeDOS. So over at the FreeDOS Blog, I've started a FreeDOS blog challenge.

If you use FreeDOS, I'm asking you to write a blog post about it. Maybe your story is about how you found FreeDOS. Or about how you use FreeDOS to run certain programs. Or maybe you want to tell a story about how you installed FreeDOS to recover data that was locked away in an old program. There are lots of ways you could write your FreeDOS story. Tell us about how you use FreeDOS!

Your story can be short, or it can be long. Make it as long or short as you need to talk about how you use FreeDOS.

Write your story, post it on your blog, and email me so I can find it. Or if you don't have a blog of your own, email your story to me and I'll put it up as a "guest post" on the FreeDOS Blog.

I'm planning to post a special blog item on June 29 to collect all of these great stories. So you need to write your story by June 28.
image: FreeDOS

Tuesday, May 23, 2017

Please run for GNOME Board

Are you a member of the GNOME Foundation? Please consider running for Board.

Serving on the Board is a great way to contribute to GNOME, and it doesn't take a lot of your time. The GNOME Board of Directors meets every week via a one-hour phone conference to discuss various topics about the GNOME Foundation and GNOME. In addition, individual Board members may volunteer to take on actions from meetings—usually to follow up with someone who asked the Board for action, such as a funding request.

At least two current Board members have decided not to run again this year. (I am one of them.) So if you want to run for the GNOME Foundation Board of Directors, this is an excellent opportunity!

If you are planning on running for the Board, please be aware that the Board meets 2 days before GUADEC begins to do a formal handoff, plan for the upcoming year, and meet with the Advisory Board. GUADEC 2017 is 28 July to 2 August in Manchester, UK. If elected, you should plan on attending meetings this year on 26 and 27 July in Manchester, UK.

To announce your candidacy, just send an email to foundation-announce that gives your name, your affiliation (who you work for), and a few sentences about your background and interest in serving on the Board.
Update: the election is over. Congratulations to the new Board members!
image: GNOME

Friday, May 19, 2017

Can't make GUADEC this year

This year, the GNOME Users And Developers European Conference (GUADEC) will be hosted in beautiful Manchester, UK between 28th July and 2nd August. Unfortunately, I can't make it. I missed last year, too. The timing is not great for me.

I work in local government, and just like last year, GUADEC falls during our budget time at the county. Our county budget is set every two years. That means during an "on" year, we make our budget proposals for the next two years. In the "off" year, we share a budget status.

I missed GUADEC last year because I was giving a budget status in our "off" year. And guess what? This year, department budget presentations again happen during GUADEC.

During GUADEC, I'll be making our county IT budget proposal. This is our one opportunity to share with the Board our budget priorities for the next two years, and to defend any budget adjustment. I can't miss this meeting.

Wednesday, May 17, 2017

GNOME and Debian usability testing

Intrigeri emailed me to share that "During the Contribute your skills to Debian event that took place in Paris last week-end, we conducted a usability testing session" of GNOME 3.22 and Debian 9. They have posted their usability test results at Intrigeri's blog: "GNOME and Debian usability testing, May 2017." The results are very interesting and I encourage you to read them!

There's nothing like watching real people do real tasks with your software. You can learn a lot about how people interact with the software, what paths they take to accomplish goals, where they find the software easy to use, and where they get frustrated. Normally we do usability testing with scenario tasks, presented one at a time. But in this usability test, they asked testers to complete a series of "missions." Each "mission" was a set of two or more goals. For example:

Mission A.1 — Download and rename file in Nautilus

  1. Download a file from the web, a PDF document for example.
  2. Open the folder in which the file has been downloaded.
  3. Rename the downloaded file to SUCCESS.pdf.
  4. Toggle the browser window to full screen.
  5. Open the file SUCCESS.pdf.
  6. Go back to the File manager.
  7. Close the file SUCCESS.pdf.

Mission A.2 — Manipulate folders in Nautilus

  1. Create a new folder named cats in your user directory.
  2. Create a new folder named to do in your user directory.
  3. Move the cats folder to the to do folder.
  4. Delete the cats folder.

These "missions" take the place of scenario tasks. My suggestion to the usability testing team would be to add a brief context that "sets the stage" for each "mission." In my experience, that helps testers get settled into the task. This may have been part of the introduction they used for the overall usability test, but generally I like to see a brief context for each scenario task.

The usability test results also include a heat map, to help identify any problem areas. I've talked about the Heat Map Method before (see also “It’s about the user: Applying usability in open source software.” Jim Hall. Linux Journal, print, December 2013). The heat map shows your usability test results in a neat grid, coded by different colors that represent increasing difficulty:

  • Green if the tester didn't have any problems completing the task.
  • Yellow if the tester encountered a few problems, but generally it was pretty smooth.
  • Orange if the tester experienced some difficulty in completing the task.
  • Red if the tester had a really hard time with the task.
  • Black if the task was too difficult and the tester gave up.

The colors borrow from the familiar green-yellow-red color scheme used in traffic signals, and which most people can associate with easy-medium-hard. The colors also suggest greater levels of "heat," from green (easy) to red (very hard) and black (too hard).

To build a heat map, arrange your usability test scenario tasks in rows, and your testers in columns. This gives you a colorful grid. Scan across the rows for "hot" rows (lots of black, red, and orange) and "cool" rows (lots of green, with some yellow). Focus on the hot rows; these are where testers struggled the most.
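
As a sketch of that bookkeeping (with made-up ratings, not Intrigeri's data), you can tally each row and flag the hot ones:

```python
# Hypothetical ratings, one column per tester (not Intrigeri's data):
# g=green, y=yellow, o=orange, r=red, b=black
results = {
    "A1 rename a file":       ["g", "g", "y", "g", "g"],
    "B1 install a package":   ["r", "b", "o", "r", "b"],
    "C3 change video player": ["o", "r", "o", "b", "r"],
}

HOT = {"o", "r", "b"}  # orange, red, and black count as "hot"

for task, ratings in results.items():
    hot = sum(1 for r in ratings if r in HOT)
    flag = "HOT" if hot > len(ratings) / 2 else "ok"
    print(f"{task:24} {' '.join(ratings)}  -> {flag}")
```

The threshold here (more than half the testers struggled) is my own choice; in practice you read the grid by eye and let the colors guide you.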

Intrigeri's heat map suggests some issues with B1 (install and remove a package), C2 (temporary files) and C3 (change default video player). There's some difficulty with A3 (create a bookmark in Nautilus) and C4 (add and remove world clocks), but these seem secondary. Certainly these are issues to address, but the results suggest to focus on B1, C2 and C3 first.

For more, including observations and discussion, go read Intrigeri's article.

Saturday, May 6, 2017

Not running for Board this year

After some serious thinking, I've decided not to run for the GNOME Foundation Board of Directors for the 2017-18 session.

As the other directors are aware, I've over-committed myself. I think I did a good job keeping up with GNOME Board issues, but it was sometimes a real stretch. And due to some budget and planning items happening at work, I've been busier in 2017 than I planned. I've missed a few Board meetings due to meeting conflicts or other issues.

It's not fair to GNOME for me to continue on the Board if I'm going to be this busy. So I've decided not to run again this year, and let someone with more time take my seat.

However, I do plan to continue as director for the rest of the 2016-17 session.
image: GNOME

Thursday, May 4, 2017

How I found Linux

Growing up through the 1980s and 1990s, I was always into computers. As I entered university in the early 1990s, I was a huge DOS nerd. Then I discovered Linux, a powerful Unix system that I could run on my home computer. And I have been a Linux user ever since.

I wrote my story for, about How I got started with Linux.

In the article, I also talk about how I've deployed Linux in every organization where I've worked. I'm a CIO in local government now, and while we have yet to install Linux in the year since I've arrived, I have no doubt that we will someday.

Tuesday, April 18, 2017

A better March Madness script?

Last year, I wrote an article for Linux Journal describing how to create a Bash script to build your NCAA "March Madness" brackets. I don't really follow basketball, but I have friends that do, so by filling out a bracket at least I can have a stake in the games.

Since then, I realized my script had a bug that prevented any rank 16 team from winning over a rank 1 team. So this year, I wrote another article for Linux Journal with an improved Bash script to build a better NCAA "March Madness" bracket. In brief, the updated script builds a custom random "die roll" based on the relative strength of each team. My "predictions" this year are included in the Linux Journal article.
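
I won't reproduce the Linux Journal script here, but the core idea of a strength-weighted "die roll" looks something like this in Python (a simplified sketch; the 17 − seed weighting is my own illustration, not the article's exact formula):

```python
import random

def pick_winner(seed_a, seed_b, rng=random):
    """Weighted "die roll": the stronger seed (lower number) wins more
    often, but an upset is always possible. The weight 17 - seed is my
    own illustration, not the formula from the article."""
    weights = [17 - seed_a, 17 - seed_b]
    return rng.choices([seed_a, seed_b], weights=weights)[0]

rng = random.Random(42)
wins = sum(1 for _ in range(10000) if pick_winner(1, 16, rng) == 1)
print(f"The 1 seed beat the 16 seed in {wins} of 10,000 simulated games")
```

Unlike my original buggy script, a 16 seed can now beat a 1 seed—just not very often.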

Since the games are now over, I figured this was a great time to see how my bracket performed. If you followed the games, you know that there were a lot of upsets this year. No one really predicted the final two teams for the championship. So maybe I shouldn't be too surprised if my brackets didn't do well either. Next year might be a better comparison.

In the first round of the NCAA March Madness, you start with teams 1–16 in four regions, so that's 64 teams that compete in 32 games. In that "round of 64," my shell script correctly predicted 21 outcomes. That's not a bad start.

March Madness is single-elimination, so for the second round, you have 32 teams competing in 16 games. My shell script correctly guessed 7 of those games. So just under half were predicted correctly. Not great, but not bad.

In the third round, my brackets suffered. This is the "Sweet Sixteen," where 16 teams compete in 8 games, but my script only predicted 1 of those games.

And in the fourth round, the "Elite Eight" round, my script didn't predict any of the winners. And that wrapped up my brackets.

Following the standard method for scoring "March Madness" brackets, each round has 320 possible points. In round one, assign 10 points for each correctly predicted outcome. In round two, assign 20 points for each correct outcome. And so on, doubling the points per game in each round. From there, the math is pretty simple.

round one:    21 × 10 = 210
round two:     7 × 20 = 140
round three:   1 × 40 =  40
round four:    0 × 80 =   0
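
A few lines of Python confirm the arithmetic:

```python
correct = [21, 7, 1, 0]    # correct picks in rounds one through four
points = [10, 20, 40, 80]  # points per correct pick, doubling each round
total = sum(c * p for c, p in zip(correct, points))
print(total)  # 390
```
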
My total score this year is 390 points. As a comparison, last year's script (the one with the bug) scored 530 in one instance, and 490 in another instance. But remember that there were a lot of upsets in this year's games, so everyone's brackets fared poorly this year, anyway.

Maybe next year will be better.

Did you use the Bash script to help fill out your "March Madness" brackets? How did you do?

Monday, April 3, 2017

How many testers do you need?

When you start a usability test, the first question you may ask is "how many testers do I need?" The standard go-to article on this is Nielsen's "Why You Only Need to Test with 5 Users" which gives the answer right there in the title: you need five testers.

But it's important to understand why Nielsen picks five as the magic number. MeasuringU has a good explanation, but I think I can provide my own.

The core assumption is that each tester will uncover a certain number of issues in a usability test, assuming good test design and well-crafted scenario tasks. The next tester will uncover about the same number of usability issues, but not exactly the same issues. So there's some overlap, and some new issues too.

If you've done usability testing before, you've observed this yourself. Some testers will find certain issues, other testers will find different issues. There's overlap, but each tester is on their own journey of discovery.

How many usability issues each tester uncovers is up for some debate. Nielsen uses his own research and asserts that a single tester can uncover about 31% of the usability issues. Again, that assumes good test design and scenario tasks. So one tester finds 31% of the issues, the next tester finds 31% but not the same 31%, and so on. With each tester, there's some overlap, but you discover some new issues too.

In his article, Nielsen describes a function to demonstrate the number of usability issues found vs the number of testers in your test, for a traditional formal usability test:

found(n) = 1 − (1 − L)^n

where L is the proportion of issues a single tester can uncover (Nielsen assumes L = 31%) and n is the number of testers.

I encourage you to run the numbers here. A simple spreadsheet will help you see how the value changes for increasing numbers of testers. What you'll find is a curve that grows quickly then slowly approaches 100%.
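
If you'd rather skip the spreadsheet, a short Python sketch prints the same curve as a text chart, using Nielsen's model with L = 31%:

```python
L = 0.31  # share of issues a single tester uncovers (Nielsen's estimate)

def found(n):
    """Share of usability issues found by n testers: 1 - (1 - L)^n."""
    return 1 - (1 - L) ** n

for n in range(1, 16):
    bar = "#" * int(found(n) * 50)
    print(f"{n:2} testers: {found(n):6.1%} {bar}")
```

The bars grow quickly at first, then flatten out—the diminishing returns are easy to see.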

Note that at five testers, you have uncovered about 85% of the issues. Nielsen's curve suggests diminishing returns at higher numbers of testers. As you add testers, you'll certainly discover more usability issues, but the increment gets smaller each time. Hence Nielsen's recommendation of five testers.

Again, the reason that five is a good number is because of overlap of results. Each tester will help you identify a certain number of usability issues, given a good test design and high quality scenario tasks. The next tester will identify some of the same issues, plus a few others. And as you add testers, you'll continue to have some overlap, and continue to expand into new territory.

Let me help you visualize this. We can create a simple program to show this overlap. I wrote a Bash script to generate SVG files with varying numbers of overlapping red squares. Each red square covers about 31% of the gray background.
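
My script was written in Bash, but the same idea fits in a few lines of Python (a sketch, not the original script; the file names and sizes are my own choices):

```python
import random

def make_svg(n, size=200, seed=1):
    """Gray background with n random red squares, each ~31% of the area."""
    side = int(size * 0.31 ** 0.5)  # side length so each square is ~31% of the background
    rng = random.Random(seed)
    parts = [
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">',
        f'<rect width="{size}" height="{size}" fill="gray"/>',
    ]
    for _ in range(n):
        x = rng.randint(0, size - side)
        y = rng.randint(0, size - side)
        parts.append(
            f'<rect x="{x}" y="{y}" width="{side}" height="{side}" fill="red" fill-opacity="0.6"/>'
        )
    parts.append("</svg>")
    return "\n".join(parts)

# one file per number of testers, like the original images
for n in (1, 5, 10, 15):
    with open(f"testers-{n}.svg", "w") as f:
        f.write(make_svg(n))
```

The semi-transparent fill makes the overlap between "testers" visible, which is the whole point of the visual.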

If you run this script, you should see output that looks something like this, for different values of n. Each image starts over; the iterations are not additive:

image: overlapping red squares for increasing values of n
As you increase the number of testers, you cover more of the gray background, and you also have more overlap. The increase in coverage is quite dramatic from one to five. Compare that with five to ten: there's certainly more coverage (and more overlap) at ten than at five, but not significantly more. And the same going from ten to fifteen.

These visuals aren't meant to be an exact representation of the Nielsen iteration curve, but they do help show how adding more testers gives significant return up to a point, and then adding more testers doesn't really get you much more.

The core takeaway is that it doesn't take many testers to get results that are "good enough" to improve your design. The key idea is that you should do usability testing iteratively with your design process. I think every usability researcher would agree. Ellen Francik, writing for Human Factors, refers to this process as the Rapid Iterative Testing and Evaluation (RITE) method, arguing "small tests are intended to deliver design guidance in a timely way throughout development." (emphasis mine)

Don't wait until the end to do your usability tests. By then, it's probably too late to make substantive changes to your design, anyway. Instead, test your design as you go: create (or update) your design, do a usability test, tweak the design based on the results, test it again, tweak it again, and so on. After a few iterations, you will have a design that works well for most users.

Sunday, April 2, 2017

A throwback theme for gedit

This isn't exactly about usability, but I wanted to share it with you anyway.

I've been involved in a lot of open source software projects since about 1993. You may know that I'm also the founder and coordinator of the FreeDOS Project. I started that project in 1994, to write a free version of DOS that anyone could use.

DOS is an old operating system. It runs entirely in text mode. So anyone who was a DOS user "back in the day" should remember text mode and the prevalence of white-on-blue text.

For April 1, we used a new "throwback" theme on the FreeDOS website. We rendered the site using old-style DOS colors, with a monospace DOS VGA font.

Even though the redesign was meant only for a day, I sort of loved the new design. This made me nostalgic for using the DOS console: editing text in that white-on-blue, without the "distraction" of other fonts or the glare of modern black-on-white text.

So I decided to create a new theme for gedit, based on the DOS throwback theme. Here's a screenshot of gedit editing a Bash script, and editing the XML theme file itself:

[screenshot: gedit with the DOS throwback theme]

The theme uses the same sixteen-color palette as DOS. You can find an explanation of why DOS has sixteen colors at the FreeDOS blog. I find the white-on-blue text to be calming, and easy on the eyes.

Of course, to make this a true callback to the earlier days of computing, I used a custom font. On my computer, I used Mateusz Viste's DOSEGA font. Mateusz created this font by redrawing each glyph in FontForge, using the original DOS CPI files as a model. I think it's really easy to read. (Download DOSEGA here.)

Want to create this on your own system? Here's the XML source for the theme file. Save it as ~/.local/share/gtksourceview-3.0/styles/dosedit.xml and gedit should find it as a new theme.
<?xml version="1.0" encoding="UTF-8"?>
<style-scheme id="dos-edit" name="DOS Edit" version="1.0">
<author>Jim Hall</author>
<description>Color scheme using DOS Edit color palette</description>
  Emulate colors used in a DOS Editor. For best results, use a monospaced font
  like DOSEGA.

<!-- Color Palette -->

<color name="black"           value="#000"/>
<color name="blue"            value="#00A"/>
<color name="green"           value="#0A0"/>
<color name="cyan"            value="#0AA"/>
<color name="red"             value="#A00"/>
<color name="magenta"         value="#A0A"/>
<color name="brown"           value="#A50"/>
<color name="white"           value="#AAA"/>
<color name="brightblack"     value="#555"/>
<color name="brightblue"      value="#55F"/>
<color name="brightgreen"     value="#5F5"/>
<color name="brightcyan"      value="#5FF"/>
<color name="brightred"       value="#F55"/>
<color name="brightmagenta"   value="#F5F"/>
<color name="brightyellow"    value="#FF5"/>
<color name="brightwhite"     value="#FFF"/>

<!-- Settings -->

<style name="text"                 foreground="white" background="blue"/>
<style name="selection"            foreground="blue" background="white"/>
<style name="selection-unfocused"  foreground="black" background="white"/>

<style name="cursor"               foreground="brown"/>
<style name="secondary-cursor"     foreground="magenta"/>

<style name="current-line"         background="black"/>
<style name="line-numbers"         foreground="black" background="white"/>
<style name="current-line-number"  background="cyan"/>

<style name="bracket-match"        foreground="brightwhite" background="cyan"/>
<style name="bracket-mismatch"     foreground="brightyellow" background="red"/>

<style name="right-margin"         foreground="white" background="blue"/>
<style name="draw-spaces"          foreground="green"/>
<style name="background-pattern"   background="black"/>

<!-- Extra Settings -->

<style name="def:base-n-integer"   foreground="cyan"/>
<style name="def:boolean"          foreground="cyan"/>
<style name="def:builtin"          foreground="brightwhite"/>
<style name="def:character"        foreground="red"/>
<style name="def:comment"          foreground="green"/>
<style name="def:complex"          foreground="cyan"/>
<style name="def:constant"         foreground="cyan"/>
<style name="def:decimal"          foreground="cyan"/>
<style name="def:doc-comment"      foreground="green"/>
<style name="def:doc-comment-element" foreground="green"/>
<style name="def:error"            foreground="brightwhite" background="red"/>
<style name="def:floating-point"   foreground="cyan"/>
<style name="def:function"         foreground="cyan"/>
<style name="def:heading0"         foreground="brightyellow"/>
<style name="def:heading1"         foreground="brightyellow"/>
<style name="def:heading2"         foreground="brightyellow"/>
<style name="def:heading3"         foreground="brightyellow"/>
<style name="def:heading4"         foreground="brightyellow"/>
<style name="def:heading5"         foreground="brightyellow"/>
<style name="def:heading6"         foreground="brightyellow"/>
<style name="def:identifier"       foreground="brightyellow"/>
<style name="def:keyword"          foreground="brightyellow"/>
<style name="def:net-address-in-comment" foreground="brightgreen"/>
<style name="def:note"             foreground="green"/>
<style name="def:number"           foreground="cyan"/>
<style name="def:operator"         foreground="brightwhite"/>
<style name="def:preprocessor"     foreground="brightcyan"/>
<style name="def:shebang"          foreground="brightgreen"/>
<style name="def:special-char"     foreground="brightred"/>
<style name="def:special-constant" foreground="brightred"/>
<style name="def:specials"         foreground="brightmagenta"/>
<style name="def:statement"        foreground="brightmagenta"/>
<style name="def:string"           foreground="brightred"/>
<style name="def:type"             foreground="cyan"/>
<style name="def:underlined"       foreground="brightgreen"/>
<style name="def:variable"         foreground="cyan"/>
<style name="def:warning"          foreground="brightwhite" background="brown"/>

</style-scheme>
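If you'd rather install the theme from the command line, a small sketch (assuming you saved the XML above as dosedit.xml in your current directory):

```shell
# Assumption: the theme XML was saved as dosedit.xml in the current
# directory. GtkSourceView 3 (used by gedit) reads per-user style
# schemes from this directory.
styles="$HOME/.local/share/gtksourceview-3.0/styles"
mkdir -p "$styles"
if [ -f dosedit.xml ]; then
    cp dosedit.xml "$styles/dosedit.xml"
fi
```

After that, the theme should appear in gedit's Preferences, under the Font & Colors tab.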