Sunday, September 29, 2013

OpenHatch open source software event in Morris

This weekend, I was excited to participate in an OpenHatch open source software event at my campus, the University of Minnesota Morris. Thanks to Shauna and Asheesh for providing a very engaging day with everyone, and to Nic, Elena, and KK for hosting the event.

I didn't do a specific headcount, but I'd guess we had more than 25 people in attendance, most of them students.

The purpose of the workshop was to help students get engaged in open source software. It was a very full day. We started by discussing communication tools common in most open source software projects (IRC and email), tools used in open source software (such as Git), and how to pick an open source software project you'd like to work with. Several of us also participated in a panel discussion about open source software, and answered questions about how folks can make a living in open source software.

In the afternoon, we divided into groups of interested students to actually contribute to an open source software project. I was fortunate to work with a student who helped me with Simple Senet, a simulation of the ancient Egyptian board game Senet. (Since there are two competing accepted rulesets, we worked on a way to let users tweak the gameplay rules of Senet to the version they prefer. This solution allows the user to specify command line options that turn certain options on or off, including "ruleset groups" of common play options.) We got most of the way toward a solution during the afternoon session; I can take it from there and code the final implementation in a future version of Simple Senet.

Friday, September 27, 2013

The persistence of ctrl-alt-del

Just a quick note to share this video from the Harvard Campaign, interviewing Bill Gates from Microsoft. See the related article on GeekWire for discussion and a link to the video. The interview is very interesting, but from a programmer and usability perspective, I was drawn to Gates's comments starting around 16:30, in response to a question about Windows users having to press ctrl-alt-del to login.

The article summarizes Gates's response pretty well:
“You want to have something you do with the keyboard that is signaling to a very low level of the software — actually hard-coded in the hardware — that it really is bringing in the operating system you expect, instead of just a funny piece of software that puts up a screen that looks like a log-in screen, and then it listens to your password and then it’s able to do that,” Gates said. 
The Microsoft co-founder said that there was an option to make a single button for such a command, but the IBM keyboard designer didn’t want to give Microsoft a single button. So Microsoft decided to use “Ctrl+Alt+Del” as a way to log into Windows. 
“It was a mistake,” Gates said, drawing a big laugh from the crowd.
While this was certainly a laugh line, it's interesting to comment on the origins of ctrl-alt-del. It actually dates back to the original IBM PC, in the early MS-DOS days. The keyboard handler could watch for certain key events; one important event was the ctrl-alt-del key combination. If you pressed these keys at the same time, the machine would immediately reboot. This provided a way to recover the system if MS-DOS "hung" and stopped responding to the user.

When Microsoft later introduced Windows NT (a completely redesigned operating system that imitated the then-familiar look of Windows 3.1, which still ran on MS-DOS) Microsoft was concerned that "bad guys" would create a screen that mimicked the Windows NT login screen, to capture people's usernames and passwords. This would be bad news in a corporate environment, for example.

So Microsoft had users initiate the login function by pressing the ctrl-alt-del key combination. If you were looking at a fake login screen (probably running on MS-DOS) meant to capture your password, pressing ctrl-alt-del would immediately reboot the machine. Windows NT, by contrast, intercepted the key combination and presented its genuine login screen. That way, you knew you were always talking to Windows NT.

While this may not be a very visible usability issue, it's worth reflecting that the ctrl-alt-del key combination has stuck with us. A "hack" originally introduced in the MS-DOS era remains as a vestige in today's Windows operating system. The "three finger salute" (as ctrl-alt-del is sometimes affectionately known) has a certain persistence.

Thursday, September 26, 2013

GNOME 3.10 now available

I saw that GNOME 3.10 was released today. From the announcement: "The release comes six months after 3.8, and contains 34786 changes by approximately 985 contributors. It contains major new features as well as a large collection of smaller enhancements. 3.10 provides an improved experience for users, as well as new capabilities for application developers."

The major features include:
  • Support for the new, faster Wayland graphics system
  • A new, redesigned system status area
  • Merging titlebars and toolbars into a single "Header Bar"
  • New core applications, including Music, Photos, Notes, Software and Maps

But most important to me: GNOME 3.10 claims an improved user experience. These UX features include:
  • A new "Paginated Application View" when selecting programs you can run, rather than a scrolling list (this seems similar to an iPad, for example)
  • A customizable lock screen
  • Changes to scrollbars, so it's easier to move small distances
  • Fixes to the Settings application, including changes to Date & Time, easier configuration of Displays, integration of chat accounts into Online Accounts, all of the Universal Access features combined into a single page, and the ability to set your background using your Flickr account.

Additionally, GNOME 3.10 advertises clearer text, smart card support, and improved support for low-vision users through caret and focus tracking in the magnifier, among other changes.

I'll admit that I'm looking forward to trying this release. However, I prefer to wait for developers and beta testers to shake out any bugs in this initial release. Perhaps I'll wait until December, when Fedora Linux 20 is due out; Fedora uses GNOME as the default desktop, so will include GNOME 3.10.

As a major player in the open source software desktop, GNOME is often the focus of usability reviews. Calum Benson, Matthias Müller-Prove, and Jiri Mzourek included GNOME among their 2004 findings about Professional Usability in Open Source Software (see slides, PDF). Similarly, Nichols and Twidale discuss GNOME on their webpage The Usability of Open Source Software. GNOME itself pays attention to usability, as suggested by its Usability Testing Suite wiki.

So with GNOME 3.10, I'm hopeful that the project has addressed the usability errors I mentioned previously. I'll note that large open source projects like Fedora, which depend on many "upstream" projects like GNOME, often slip their release dates. While the Fedora Project currently estimates "2013-12-03" for the release, note that this has already slipped from the original estimate of "2013-11-26." By the time Fedora 20 actually gets released, it may be late December 2013 or early January 2014. On an academic note, that would be perfect timing; my M.S. capstone project starts in January 2014.

Friday, September 13, 2013

Supporting multiple languages

This is much more technical than is needed for a discussion about Open Source Software & Usability, but since having a program operate in your preferred language affects software usability, I thought I'd briefly explain how software supports multiple languages. Adding support for different languages is easy. If you are a programmer working on an open source software project, I encourage you to become familiar with these or similar methods. By including different languages, you increase your program's audience, and you find new ways for people to contribute to your project (by translating your program into other languages).

Most programming languages provide some kind of support for multiple languages. The most common is the catgets method used in the C programming language. The "cat" in catgets refers to a "message catalog," and the "gets" refers to "get a string from that message catalog." You can think of a "message catalog" as a list of all the messages that a program will need to print out, and a "string" is just a message.

I've previously mentioned that I do a lot of work in open source software, much of it in the FreeDOS Project. I wrote a version of "catgets" for FreeDOS, so I think I can explain how catgets works:

Programs first need to open a "message catalog" before they can use it. This is done using the catopen function. The catopen function figures out what language you use and opens the right message catalog for that language. So programs have a different message catalog for each language they support.

The catopen function returns with a special code that the program can use to refer just to that "message catalog." A large program might use several message catalogs. But to keep things simple, I'll show you with just one message catalog.

Once the program has opened the message catalog, it can get messages from that catalog using the catgets function. For example, if the program wants to say "Hello," it gets that message (called a "string") from the message catalog. Each message (or "string") has two numbers assigned to it: one number for the "message set" and one for the "message" itself. A program might put all error messages into one set ("1") and all information messages into another set ("2"). Let's assume the message "Hello" is message 1 in the "information" message set ("2").

Since catopen already figured out what language you want to use, catgets will always pick messages from the right language.

When the program is done, it needs to close the "message catalog" using a function called catclose.

To a programmer writing a simple program called "Foo," it all comes together like this:
catalog_number = catopen("Foo", MCLoadAll); /* You don't need to know what "MCLoadAll" means for this example */

string = catgets(catalog_number, 2, 1, "Hello"); /* In case "catgets" can't find message 1 in set 2, it just gives you "Hello" */


And somewhere, the program has separate "message catalogs" for English, Spanish, German, … and so on.

Creating the message catalogs is fairly straightforward. Under catgets, this is actually a two-step process: write a file containing all the messages that your program will use, then use gencat to convert this into a message catalog that can be easily opened by catopen.

For example, a message file (foo.msg) might look like this:
$set 1
1 Error reading file.
2 Abort!

$set 2
1 Hello world!
2 I like pie.
In that file, error messages are in one set ("1") and all information messages are in another set ("2"). So message 1 in set 2 (information) is "Hello world!"

At the command line, the programmer converts the message file into a message catalog (which catopen can easily open) using the gencat command. Using the example file above:
gencat foo.cat foo.msg

Wednesday, September 11, 2013

What killed Linux on the Desktop

I recently found Miguel de Icaza's 2012 article, "What Killed the Linux Desktop." It's basically de Icaza's "exit memo" from when he decided to quit open source software and move entirely to Apple's Mac OSX. While I disagree with many of de Icaza's views, I want to provide an accurate summary of his article. de Icaza identifies two basic things that he believes went wrong on the Linux desktop:

1. The developer culture

In de Icaza's view, the problem with Linux on the Desktop is rooted in the developer culture that was created around it.
The attitude of our community was one of engineering excellence: we do not want deprecated code in our source trees, we do not want to keep broken designs around, we want pure and beautiful designs and we want to eliminate all traces of bad or poorly implemented ideas from our source code trees. 
And we did. 
We deprecated APIs, because there was a better way. We removed functionality because "that approach is broken," for degrees of broken from "it is a security hole" all the way to "it does not conform to the new style we are using."
In essence, de Icaza argues that the Desktop became a moving target. Developers created new ways to do things, and deprecated the now-outmoded methods. While this happens in proprietary software, as well, the cycle is often much shorter in open source software.

2. Incompatibilities between distributions

Fragmentation and differences between similar Linux on the Desktop implementations was also a major factor, according to de Icaza. "Either they did not agree, the schedule of the transitions were out of sync or there were competing implementations for the same functionality."

In de Icaza's view, having to support different desktop environments that were all "Linux" killed the ecosystem for third party developers trying to target Linux on the Desktop. You would try once, do your best effort to support the "top" distro (such as Fedora or Ubuntu) only to find out that your software no longer worked six months later.

de Icaza argues that few developers are interested in stabilizing and supporting a standard API. "Backwards compatibility, and compatibility across Linux distributions is not a sexy problem. It is not even remotely an interesting problem to solve. Nobody wants to do that work, everyone wants to innovate, and be responsible for the next big feature in Linux."

In short, de Icaza says Linux on the Desktop lost the race for a consumer operating system to the likes of Windows and Mac OSX, both of which have stable APIs that programs can rely on for many years. "Meanwhile, you can still run the 2001 Photoshop that came when XP was launched on Windows 8. And you can still run your old OSX apps on Mountain Lion."

The various Linux desktops are the best they have ever been (Ubuntu and Unity, Fedora and GnomeShell, RHEL and Gnome 2, Debian and Xfce, plus the KDE distros), yet we still have four major desktop APIs, each a popular and slightly incompatible version of Linux on the Desktop: different OS subsystems, different packaging systems, different dependencies, and slightly different versions of the core libraries. This model works great for pure open source, where a distribution can simply recompile programs to run on the new platform, but it does not work well for proprietary code that needs certain operating assumptions to remain true. Shipping and maintaining applications for these rapidly evolving platforms is a big challenge.