Saturday, March 26, 2016

Visual brand and user experience

Note: This article is not an official GNOME position on visual brand.

How does the visual brand of a graphical desktop affect the user experience?

A while back, I started to think about the "visual brand" or "visual identity" of graphical desktops. I wondered if there was a way to break down the components of a user interface to just those elements that truly signify the desktop.

Some desktop environments try to brand their desktop with visual elements including a distinctive wallpaper. For example, this picture of rolling green hills with a blue sky immediately reminds many users of Microsoft Windows XP:

Bliss, the default Microsoft Windows XP wallpaper (Wikipedia)

But it's not just images that define a visual identity for a desktop environment. The shapes and arrangements used in the presentation also define a user interface's visual brand. In this way, the shapes influence our perception and create an association with a particular identity. We recognize a particular arrangement and connect that pattern with a brand.

One reference that I use to explain our association of shapes with ideas is Picture This by Molly Bang. This book is a must-read for anyone interested in user interface design. Bang presents the idea that we perceive certain shapes in different ways, and we can affix ideas and emotional states to colors and shapes in an image.

By way of example, Bang tells the familiar story of Little Red Riding Hood through simple shapes. Red Riding Hood is a red triangle, a girl in a red hood and cape. Put the red triangle in a large white space, and we perceive the girl as alone. Make the background a dark purple, and Red Riding Hood is alone at night. Add vertical black bars to the image, and we interpret the bars as tree trunks, so the girl is alone in a dark forest. If some of those black bars are canted at angles, then the dark forest becomes spooky and foreboding. And so on.

Our recognition of user interfaces works with the same association. Even without the distinctive wallpaper background, we can recognize particular arrangements of shapes and colors as different user interfaces.

Let's try a little experiment to demonstrate. Here are depictions of several common graphical desktops, constructed using block shapes and simple symbols. Can you guess which operating systems are shown here?

1.
2.
3.
4.
5.
6.
7.
8.

In no particular order, the operating systems shown are:

  • GNOME 3
  • GNOME 2
  • Windows 7
  • Windows XP
  • Windows 3
  • MacOS X
  • MacOS 9 (or earlier MacOS)
  • the command line (Linux, Unix, DOS)

Even without visual markings, logos, or wallpapers, you can probably guess that image #5 is GNOME 3. The distinctive pattern for the GNOME 3 desktop is the black top bar across a field. We perceive this arrangement as the GNOME 3 visual identity, the visual brand.

That association of shapes to GNOME 3 is so strong that you can assign any wallpaper to the background and still perceive the mock-up as GNOME 3. As long as the black bar is at the top, we recognize GNOME 3. As an example, let's apply the Windows XP wallpaper to mock-up #5:


You may experience some cognitive dissonance in seeing the Windows XP wallpaper on a GNOME 3 frame, but you still recognize the mock-up as GNOME 3.

Let's consider the history of GNOME desktop development, and look more closely at some of the above mock-ups. GNOME 1 (1997) used a visual framework of a desktop with a gray task bar at the bottom. This arrangement was familiar to many users; it mimicked the layout Microsoft Windows had used since Windows 95 (1995), which eased the transition from Windows 95 to GNOME 1.

GNOME 1 screenshot (Wikipedia)

GNOME 2 (2002) modified the user interface arrangement, placing a separate task bar at the top where users could launch programs, and where GNOME displayed the date and time. A separate task bar at the bottom showed running applications. It was around this time that I helped friends and family transition to free software, using GNOME 2 on the desktop. I explained GNOME 2's arrangement as "things you can do" (top) and "things you are doing" (bottom).

While the two task bars were a deviation from other popular desktop environments, the arrangement was not too dissimilar to that of Windows XP (2001).

GNOME 2 screenshot (Wikipedia)

GNOME 3 (2011) further adjusted the user interface arrangement, in response to several key issues in GNOME that had become more pronounced over time:

  • Finding windows was frustrating and difficult.
  • Workspaces were useful, but not easy or natural to use.
  • Launching applications was labor-intensive and error-prone.
  • The panel suffered from over-configurability, and most users made little use of applets.

GNOME 3 removed the traditional task bar, in favor of an "Overview" mode that shows all running applications. Instead of using a launch menu, users start applications with an "Activities" hot button in the black top bar. Selecting the Activities menu brings up the Overview mode, showing both things you can do (an application dock to the left of the screen) and things you are doing (window representations of open applications).

Comparing the user interface elements, the GNOME 3 visual identity is more similar to the MacOS arrangement, shown below in the same aspect ratio as an example:

GNOME 3 showing Overview mode (Wikipedia)


While we recognize the arrangement of these visual elements as the GNOME 3 identity, not everyone is very fond of it. You don't have to look very far on the Internet to find comments about how GNOME 3 is clunky or breaks the desktop metaphor. I don't think these people are reacting to the GNOME 3 user interface per se, but rather to the disassociation of GNOME's visual identity from other desktop environments.

I don't know that we've specifically explored in usability testing how users first approach the GNOME 3 desktop. Is the visual identity of the black bar and other elements a significant departure from other visual brands? I can only share observations from my usability tests (also noted in my students' usability tests) that the GNOME 3 visual arrangement is similar to that of MacOS, while the window decorations are more similar to those of Microsoft Windows.

I believe it's not GNOME's change in desktop metaphor that causes strain in certain users, but rather the discontinuity of seeing Microsoft Windows decorative elements in a MacOS framework. The mix of visual elements creates a sense of cognitive dissonance, confusion in relating different yet familiar user interface elements in one graphical desktop system. In their frustration, these users find targets for their dissatisfaction, and "desktop metaphor" is a common focus point because the "desktop" is an easy concept to communicate.

But the "desktop metaphor" isn't the crux of the issue for these users. I think some users react badly to the incongruence of different user interface elements overlaid on a new visual identity. That's where dissatisfaction in GNOME 3 stems from. Users need a framework, a particular arrangement of visual elements, to guide them in what to do. The user interfaces for MacOS and Windows can be quite different. Certainly each user interface represents a distinct visual brand, and that brand evokes a particular emotional response in users. And with a sense of mixed identities, from MacOS to Windows to GNOME, I think users feel disoriented.

I'm not sure how GNOME can respond to this dysphoria in its user interface. But a few ideas do come to mind:

Reference happy colors
We know that humans attach emotional states to particular colors. I conducted my own study a few years ago about how people perceive desktop colors. We perceive bright colors as happy, and dark colors as moody. GNOME should establish a wallpaper image standard that emphasizes bright, happy colors. The current trend seems to be toward darker colors, such as the new "starry night" wallpaper to be packaged with Fedora 24.
Put an icon with the Activities menu
The Activities menu is simply the word "Activities" in the top-left hot corner. I think users quickly realize that they should use the Activities menu to start applications, but perhaps users would respond better if the "Activities" wordmark had a GNOME foot icon next to it, or some other simple icon. The GNOME 3 visual brand is similar to the MacOS visual identity, but lacks the familiar Apple logo. GNOME users might have a more positive association with the Activities menu if it had a GNOME logo or some other identifying mark next to the word "Activities."

The visual brand of GNOME 3 has close ties to user experience. User experience isn't the same as usability; the two are related, but different. Usability is about getting something done; user experience is about the user's emotional impression. Lots of things can affect the emotional experience of a graphical desktop like GNOME. Colors, fonts, the location of elements, and window decorations are just some of the things that influence how a person feels about using GNOME. That's the user experience.

Usability focuses on the user. The general rule about usability is that people use programs to be productive; they are busy people trying to get things done. Through usability testing, the user decides when a product is easy to use. If a program is hard to use, no one will want to use it. And if they don't use the software, they won't have an emotional experience with it.

Usability and user experience go hand-in-hand. Programs need to be pleasant (user experience) but people need to be able to use them, too (usability). But even with good usability, the visual brand can clearly influence the emotional response of end users. Users associate the arrangement of visual elements used in the desktop with other similar desktops, and experience a similar emotional response.

And that is how the visual brand of a graphical desktop can affect the user experience.
images: Wikipedia

Friday, March 25, 2016

A few recommendations

As many of you know, from July 2010 until December 2015, I was the campus CIO at the University of Minnesota Morris, a residential liberal arts university in rural Minnesota. While it was a great campus and we did a lot of wonderful work there, one thing I won't miss is the driving. You see, Morris is a three-hour drive from the Twin Cities. That's a significant distance when my wife lived in the Twin Cities and I lived and worked in Morris.

So I did a lot of driving. To pass the time, I listened to audiobooks. On my Coaching Buttons blog, I shared a few of my favorite audiobook recommendations. I thought I'd share those here.

First, a comment on my audiobook preferences. I like science fiction and strong drama. I prefer full-cast performances over traditional readings, although I listen to both. A year or so before moving to Morris, I discovered Big Finish, which holds the license to make new audio stories based on the classic Doctor Who television program. They do other stories too, but I am a huge Doctor Who nerd. So it's no surprise that almost all of my recommendations are Big Finish productions.

Mine is a very long list, but I have condensed my recommendations to just those must-have stories that I listen to all the time. Click the links for the much longer lists of my favorites.

Traditional audiobooks, full-cast audio plays, and spin-offs from Doctor Who.

A few highlights:
UNIT, series 1
A spin-off from Doctor Who, but you don't need to have seen the show to enjoy this. More closely related to The X-Files, this is about a military unit that investigates unexplained encounters.

Blake's 7, series 1
A continuation of the classic British sci-fi show, about a small group of renegades fighting against an oppressive galactic government.

Survivors
A compelling audio series about survival after a devastating plague wipes out almost all of humanity. Not for the casual listener, Survivors features adult content and situations.

The well-known BBC science fiction program, Doctor Who, featuring the original cast.

A few highlights:
Spare Parts
The origins of the Cybermen, featuring Peter Davison as the 5th Doctor.

Seasons of Fear
Paul McGann as the 8th Doctor, featuring new companion Charlotte (Charley) Pollard.

Dalek Empire, series 1–2
Another spin-off from Doctor Who, following the Daleks as they conquer the human race, and our struggle for freedom.

I, Davros
About the creator of the Daleks, this tells the story of Davros as a young man during the Thousand Year War.

Cyberman, series 1
One more spin-off from Doctor Who, about Earth at war with its own android creations in Orion, and leveraging captured Cyberman technology in a bid for victory.

Sword of Orion
A prequel to the Cyberman series, featuring the 8th Doctor and Charley.

And I'm glad to share that Big Finish doesn't use DRM. These are available as plain MP3 or audiobook downloads, or you can purchase CDs and rip them to your preferred format.

Thursday, March 24, 2016

A Usability Study of GNOME

Last summer, I mentored a usability testing project for Outreachy, working with Gina to examine the usability of GNOME.

Gina did a great job in her usability work. After her internship was complete, we authored a paper together, which FOSS Force has published. Please read A Usability Study of GNOME at FOSS Force.

Here's a brief excerpt from the article, describing the usability test:
During a Summer 2015 internship with Outreachy, an organization that helps underrepresented groups get involved in free and open source software, we conducted a usability test of the GNOME desktop. This was a formal usability test, where we invited a dozen testers to use GNOME and GNOME applications to complete a few simple tasks.

Test volunteers were about evenly divided between men and women (slightly more men than women) predominantly at college age (12 to 25) representing all levels of computer skill but describing mostly “constant” computer use. About half of testers had not used GNOME previously (slightly more had not than had).

Participants were free to choose the language version of GNOME. Five of them chose to test GNOME in French, while the other seven used GNOME in English. The choice of language did not appear to influence the results of the usability test.

In our test, we presented each tester a set of sample tasks, one task at a time. Throughout the test, we watched each volunteer as they completed the sample tasks, and noted any problems they had with the software. We asked the testers to speak aloud during the usability test, to describe what they were looking for; if they were looking for the “Print” button, they should say, “I’m looking for the ‘Print’ button.” After each set of tasks, we took a “comment break” so participants could share their thoughts about the software and the problems they encountered.

The total test duration varied, with the shortest test at thirty minutes and the longest at an hour and a half.

image: FOSS Force (Gina Dobrescu and Jim Hall)

Wednesday, March 23, 2016

March Madness in a shell script

I don't really follow the sportsball. But I do like to engage with others at my office who do. So every Spring, I feel somewhat left out as everyone at my office gets wrapped up in college basketball fever. I watch them fill out their NCAA March Madness brackets, and I always think about participating, but I know nothing about the sport other than you have to dunk the orange ball in the other team's hoopy net.

I'd like to take part in the fun, maybe put my five dollars into the office pool, but I just don't know enough about the teams to make an informed decision on my own March Madness bracket. So a few years ago, I found another way: I wrote a little program to do it for me.

Computer models that predict the outcome of games aren't a new topic. You can find lots of models, including the Pythagorean Expectation model and other algorithms that build on previous team performance to predict future game outcomes. But that requires some research into sports statistics and following each team throughout the season. That's too much work for me. I used a different method that should be familiar to many of my fellow nerds: the "Dungeons and Dragons Model."

That's right! You can apply a simple D16 method to build a March Madness basketball bracket. How scientific is this? Probably not very. But it's enough to give me a stake in following March Madness basketball, and not so much that I feel particularly sad if my bracket doesn't perform well. That's good enough for me.

Let me show you how to build a Bash shell script to build your own NCAA March Madness bracket. I'll use the following three simple assumptions:
  1. The NCAA March Madness basketball brackets are seeded with the NCAA's ranking of 64 college basketball teams, divided into four regions, with teams ranked #1 through #16 in each region.
  2. The NCAA March Madness basketball brackets are always initialized with the same contests: #1 vs #16, #8 vs #9, #5 vs #12, and so on.
  3. A #1 ranked team should perform better than a #16 team, but a #8 team should perform about the same as a #9 team.
Using these assumptions, let's examine the D16 method. In a tabletop role-playing game, you might throw a 1D16 to determine the outcome of an encounter. You would compare the value of the 1D16 to a player's statistic, such as Dexterity or Strength. This kind of throw assumes a "probability" or "likelihood" of an outcome based on the relative strength of the player. A player with a high Dexterity is more likely to dodge a blow than a player with a lower Dexterity. Usually, I see the DM compare the 1D16 to the player's statistic to determine the outcome.

Similarly, we can compare the outcome of a 1D16 to a team's NCAA ranking to determine the outcome of a team's performance. A #1 team should be a strong team, so let's assume the #1 team has fifteen out of sixteen "chances" to win, and one out of sixteen "chance" to lose. Without any other inputs, the #1 team would win if the 1D16 value is two or greater, and the #1 team would lose if the 1D16 value is one.

Using this assumption, we can throw a 1D16 to determine if team "A" wins, and a 1D16 to determine if team "B" loses, or vice versa. If the two throws agree, we know the outcome of the game.

In Bash, you generate a random number every time you reference the $RANDOM shell variable. The variable returns a value between 0 and 32,767, but we want a number between one and sixteen. We can reduce the random number's range to sixteen values by using the modulo operator. Using modulo 16 returns a value between zero and fifteen. Adjusting that to a number between one and sixteen is simple addition:
d16=$(( ( $RANDOM % 16 ) + 1 ))
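If you want to convince yourself the arithmetic always lands between one and sixteen, here's a quick throwaway loop of my own (not part of the bracket script) that rolls the d16 a thousand times and tracks the extremes:
min=16 ; max=1
for i in $(seq 1 1000) ; do
 d16=$(( ( $RANDOM % 16 ) + 1 ))
 # track the smallest and largest rolls seen so far
 if [ $d16 -lt $min ] ; then min=$d16 ; fi
 if [ $d16 -gt $max ] ; then max=$d16 ; fi
done
echo "after 1000 rolls: min=$min max=$max"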
Here's a Bash function that assumes two inputs are the NCAA rankings of two teams, team "A" and team "B." Using the D16 method, the function predicts the winner of a game and returns the winning team in the function's exit value.
function guesswinner {
 rankA=$1
 rankB=$2

 d16A=$(( ( $RANDOM % 16 ) + 1 ))
 d16B=$(( ( $RANDOM % 16 ) + 1 ))

 if [ $d16A -gt $rankA -a $d16B -le $rankB ] ; then
         # team A wins and team B loses
         return $rankA
 elif [ $d16A -le $rankA -a $d16B -gt $rankB ] ; then
         # team A loses and team B wins
         return $rankB
 else
         # no winner
         return 0
 fi
}
Of course, the D16 method only decides a game when the two throws agree. While that happens most of the time, it's possible that neither throw produces a winner. A simple workaround is to try again. I find the throws usually agree within one or a few attempts, but for an evenly-matched game, such as a #1 team against a #2 team, you might have to give up after too many attempts.
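
To get a feel for how often the throws disagree, here's a throwaway test of my own (it assumes the guesswinner function above has already been defined, and is not part of the final script) that simulates a thousand #1 vs #2 match-ups and counts how many were decided on a single throw:
decided=0
for i in $(seq 1 1000) ; do
 # one attempt at an evenly-matched #1 vs #2 game
 guesswinner 1 2
 if [ $? -gt 0 ] ; then
  decided=$(( $decided + 1 ))
 fi
done
echo "a single throw decided $decided of 1000 games"
By my rough math, a single throw decides a #1 vs #2 game only about 44 times in 256 (the #1 team wins the pair of throws 30 times in 256, and the #2 team 14 times in 256), or roughly 17 percent of the time, which is why the retry loop below is worth having.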

With that assumption, let's write a Bash function to repeatedly call guesswinner until the two outcomes agree. The function prints the match-up, prints the winner, and returns the winning team via the exit value.
function winner {
 teamA=$1
 teamB=$2

 echo -n "$teamA vs $teamB : "

 count=0

 # iterate and return winner, if found

 while [ $count -lt 10 ] ; do
         guesswinner $teamA $teamB
         win=$?

         if [ $win -gt 0 ] ; then
                 # winner found
                 echo $win
                 return $win
         fi

         count=$(( $count + 1 ))
 done

 # no winner found, return a default winner

 echo "=$teamA"
 return $teamA
}
The = in the last echo statement helps you see if the function was unable to determine a winner after ten attempts.

With these two functions, it's very simple to run through all the first-round games to determine winners, then iterate through those winners to build the rest of the basketball bracket. A few echo statements help us to follow each round in the bracket. The function returns the winner of the bracket via the return value.
function playbracket {
 echo -e '\nround 1\n'

 winner 1 16
 round1A=$?

 winner 8 9
 round1B=$?

 winner 5 12
 round1C=$?

 winner 4 13
 round1D=$?

 winner 6 11
 round1E=$?

 winner 3 14
 round1F=$?

 winner 7 10
 round1G=$?

 winner 2 15
 round1H=$?

 echo -e '\nround 2\n'

 winner $round1A $round1B
 round2A=$?

 winner $round1C $round1D
 round2B=$?

 winner $round1E $round1F
 round2C=$?

 winner $round1G $round1H
 round2D=$?

 echo -e '\nround 3\n'

 winner $round2A $round2B
 round3A=$?

 winner $round2C $round2D
 round3B=$?

 echo -e '\nround 4\n'

 winner $round3A $round3B

 return $?
}
Finally, we need only call the playbracket function for each of the four regions. We are left with a "Final Four" made up of the winners of each regional bracket, but I'll leave the final determination of those contests for you to resolve on your own.
#!/bin/bash

function guesswinner {
 …
}
function winner {
 …
}
function playbracket {
 …
}

echo -e '\n___ MIDWEST ___'

playbracket

echo -e '\n___ EAST ___'

playbracket

echo -e '\n___ WEST ___'

playbracket

echo -e '\n___ SOUTH ___'

playbracket
Every time you run the script, you will generate a fresh NCAA March Madness basketball bracket. It's entirely random, based on a D16 prediction similar to Dungeons and Dragons, so each iteration of the bracket will be different. In my experience, the D16 prediction works pretty well for the first few rounds, but often predicts the #1 team will make it to the fourth round. It's not a very scientific method, but I'll share that my computer-generated brackets usually fare well compared to others in my office.

The point of using a script to build your NCAA March Madness basketball bracket isn't to take away the fun of the game. On the contrary, since I don't have much familiarity with basketball, building my bracket programmatically allows me to participate in the office basketball pool. It's entertaining without requiring much familiarity with sports statistics. My script gives me a stake to follow the games, but without the emotional investment if my bracket doesn't perform well. And that's good enough for me!

Curious to see my brackets? The output isn't in "bracket" format, but you can see my bracket below, as predicted by my script (at least, as of today):
$ ./basketball.sh

___ MIDWEST ___

round 1

1 vs 16 : 1
8 vs 9 : 8
5 vs 12 : 5
4 vs 13 : 4
6 vs 11 : 6
3 vs 14 : 3
7 vs 10 : 10
2 vs 15 : 2

round 2

1 vs 8 : 1
5 vs 4 : 4
6 vs 3 : 3
10 vs 2 : 2

round 3

1 vs 4 : 1
3 vs 2 : 3

round 4

1 vs 3 : 1

___ EAST ___

round 1

1 vs 16 : 1
8 vs 9 : 9
5 vs 12 : 5
4 vs 13 : 4
6 vs 11 : 6
3 vs 14 : 3
7 vs 10 : 7
2 vs 15 : 2

round 2

1 vs 9 : 9
5 vs 4 : 5
6 vs 3 : 3
7 vs 2 : 2

round 3

9 vs 5 : 5
3 vs 2 : 3

round 4

5 vs 3 : =5

___ WEST ___

round 1

1 vs 16 : 1
8 vs 9 : 9
5 vs 12 : 5
4 vs 13 : 4
6 vs 11 : 11
3 vs 14 : 3
7 vs 10 : 10
2 vs 15 : 2

round 2

1 vs 9 : 1
5 vs 4 : 4
11 vs 3 : 3
10 vs 2 : 2

round 3

1 vs 4 : 4
3 vs 2 : 2

round 4

4 vs 2 : 4

___ SOUTH ___

round 1

1 vs 16 : 1
8 vs 9 : 9
5 vs 12 : 5
4 vs 13 : 4
6 vs 11 : 6
3 vs 14 : 3
7 vs 10 : 7
2 vs 15 : 2

round 2

1 vs 9 : 1
5 vs 4 : 4
6 vs 3 : 6
7 vs 2 : 2

round 3

1 vs 4 : 1
6 vs 2 : 6

round 4

1 vs 6 : 1

So for my Final Four, I'm left with Midwest #1, East #5, West #4, and South #1. I wonder how I'll do in my office pool?
image: Wikimedia (public domain)

Monday, March 21, 2016

How to create a heat map (Google Spreadsheets)

I received some great comments via email from several of you about my last post, describing how to create a heat map using LibreOffice Calc. Thanks for the feedback! It's nice to know my blog posts are helpful.

A few asked how to do this in other spreadsheets. All modern spreadsheets share basically the same features. The key to creating a heat map in a spreadsheet is to use conditional formatting. Every modern spreadsheet should support that.

So I thought I'd also share the same steps to create a heat map using Google Spreadsheets. While I am an open source software advocate, I also recognize that not everyone uses open source software. And since my blog is hosted on Blogspot, which is owned by Google, it's a safe bet that I also use Gmail and the other Google Apps. Many of you also have gmail.com email addresses, so you already have access to Google Apps, which includes Google Spreadsheets.

Google Apps can be a great platform if you need to collaborate with others who are far away. You can edit the same document or spreadsheet at the same time, all without having to email files to each other.

Here's a demonstration of how to create a heat map using Google Spreadsheets, using data from a first contribution to usability testing (used here with permission):
First, enter your usability scenario tasks in the spreadsheet. I like to add some whitespace before my results; this will become obvious at the end when we set the borders to white.

Write a brief summary of the usability scenario task on each row. If your usability test involves different groupings of scenario tasks (in this example, different programs) then type those in a separate column. These will be used as "headings" in a later step:


When entering your usability test results, use "G" for green, "Y" for yellow, and so on:


Every modern spreadsheet supports conditional formatting. In Google Spreadsheets, just highlight the data cells you want to format, then select Format → Conditional formatting… and set your formatting:


For a heat map, define any cells with a value equal to "G" to have a green background, and so on for the other colors:


The spreadsheet will automatically apply consistent formatting to every data cell you highlighted. Note that you can also set the text color to be the same as the background color, which will effectively "hide" the data text. But in my example I have left the text color as the default so you can easily see how the spreadsheet applies the conditional formatting:


Now you just need to do a bit of manual formatting to make everything look nice. I like to adjust the columns and rows so the data cells are square with centered text. Also set the vertical alignment for everything to "Middle":


Highlight the heat map and a few rows and columns around it, then set the borders to solid white. This effectively erases the grid lines, but leaves nice-looking white lines between each data cell so you can easily follow columns and rows:


Finally, merge the heading cells with empty cells to the right, then set a grey background. You can take a screenshot of the final heat map and insert it into whatever summary you are writing about your results:

images: mine (data from first contribution to usability testing)

Sunday, March 20, 2016

How to create a heat map (LibreOffice)

The traditional way to present usability test results is to share a summary of the test itself. What worked well? What were the challenges?

This written summary works well, and it's important to report your findings accurately, but the summary requires a lot of reading on the part of anyone who reviews the results. And it can be difficult to spot problem areas. While a well-written summary should highlight these pain points, the reality is that the reader will need to dig through the report to understand where testers ran into problems, and which areas of the software seemed to be okay.

When presenting my usability test results, I still provide a summary of the findings. But I also include a "heat map." The heat map is a simple information design tool that presents a summary of the test results in a novel way.

I first developed the heat map method in 2014, when sharing my research results on the usability of the GNOME desktop. The heat map I used then was a primitive visualization, using simple colored blocks.

Over time, I modified the heat map presentation to use a grid style. For example, I used this style of heat map when I discussed an online class I taught about the usability of open source software.

Throughout the development of my heat map tool, I have used the same basic three rules to create the heat map:

  1. Scenario tasks (from the usability test) are arranged in rows.
  2. Test participants (one column for each tester) are arranged in columns.
  3. Each tester's difficulty with each scenario task is represented by a colored block.

The color indicates the relative difficulty of each task for each tester:

Green if the tester easily completed the task. For example, if the tester seemed to know exactly what to do, what menu item to activate or which icon to click, you would code the task in green for that tester.

Yellow if the tester experienced some (but not too much) difficulty in the task.

Orange if the tester had some trouble in the task. For example, if the tester had to poke around the menus for a while to find the right option, or had to hunt through toolbars and selection lists to locate the appropriate icon, you would code the task in orange for that tester.

Red if the tester experienced severe difficulty in completing the task.

Black if the tester was unable to figure out how to complete the task, and gave up.

The colors borrow from the standard green-yellow-red indicators that suggest go-caution-stop from stoplights. The extra orange and black colors provide gradations of difficulty. In the heat map, red indicates a task that the tester completed, but only with extreme difficulty. And black represents a task that was so difficult, the tester was unable to figure it out.

The colors also lend the heat map its name. The gradient from "cool" colors (green and yellow) to "hot" (orange and red, to black) implies increasing difficulty.

You can create a heat map using different methods. In my first usability test results, I created the heat map by inserting different colored graphical "block" elements into my document. The constant width of the blocks produced the intended "grid" effect, where each row represented a scenario task, and each column represented an individual tester. In my subsequent usability test results, I created the heat map using a spreadsheet.

The spreadsheet method is much easier. You can create a heat map of your usability results in just a few minutes. Here is a demonstration of how to create a heat map using LibreOffice Calc, using data from a first contribution to usability testing (used here with permission):
First, enter your usability scenario tasks in the spreadsheet. I like to add some whitespace before my results; this will become obvious at the end when we set the borders to white.

Write a brief summary of the usability scenario task on each row. If your usability test involves different groupings of scenario tasks (in this example, different programs) then type those in a separate column. These will be used as "headings" in a later step:


When entering your usability test results, use "G" for green, "Y" for yellow, and so on:


Every modern spreadsheet supports conditional formatting. In LibreOffice Calc, just highlight the data cells you want to format, then select Format → Conditional Formatting → Condition… and set your formatting. For a heat map, define any cells with a value equal to "G" to have a green background, and so on for the other colors:

(Note to the LibreOffice folks: Conditional formatting in LibreOffice Calc is kind of rough. Can you make this easier to do? I knew the functionality was in there somewhere, but it took me about a minute to find conditional formatting, then a few minutes more to actually figure out how LibreOffice wanted me to do it. Hint for the rest of us: For each rule, use Apply Style → New Style… and set the cell style background to whatever color you need.)


The spreadsheet will automatically apply consistent formatting to every data cell you highlighted. Note that you can also set the text color to be the same as the background color, which will effectively "hide" the data text. But in my example I have left the text color as the default so you can easily see how the spreadsheet applies the conditional formatting:


Now you just need to do a bit of manual formatting to make everything look nice. I like to adjust the columns and rows so the data cells are square with centered text. Also set the vertical alignment for everything to "Middle." Merge the heading cells with empty cells to the right, then set a grey background:


Finally, highlight the heat map and a few rows and columns around it, then set the borders to solid white. This effectively erases the grid lines, but leaves nice-looking white lines between each data cell so you can easily follow columns and rows. In this example, I've not merged the heading rows, so you can see the results of the white borders:


The final heat map after merging the headings. Just take a screenshot of the heat map, and insert it into your results:

images: mine (data from first contribution to usability testing)

Friday, March 18, 2016

Possibilities

A few weeks ago, I was invited to attend a private address by Vice President Joe Biden. It was actually because of my new job; I work in local government. Only a limited number of folks were invited to the event; something like 200 of us were there.

Biden is an incredible speaker. While his address focused primarily on transportation benefits from a federal stimulus package, Biden spoke on several other topics too. I felt motivated by one particular point he made about America.

Vice President Biden visits Union Depot in St Paul, Minnesota

At one point in his remarks, Biden reflected on a visit to China, several years ago. During his tour, China's president Xi Jinping asked him what made America great. "One word," Biden told him. "Possibilities." America is great because we have so many possibilities. And the future of America will be strong, said Biden, because so many enterprising Americans find opportunity in those possibilities. We continue to explore new technology, new innovation, new ways to grow.

This point connected with me. Afterwards, I reflected on Biden's story and realized that is one reason why I am so invested in open source software. It's about possibilities.

Open source software exists because a developer saw something that didn't exist yet, and wrote a program to fill that need. As an open source community, we explore possibilities. How can we do this thing? And now that we've done that thing, how can we make it even better?

My first experience with open source software was in 1993. The term "open source software" hadn't been popularized yet, but the concept of Free Software was already alive. I discovered tools that a community of developers created, then gave away for free. And that software was on par with (and sometimes better than) many commercial programs of the era. Even better, you could download the source code and make improvements or fix bugs, then give away your changes so others could enjoy them.

I was amazed at Free Software in 1993, and quickly realized the potential of a group of developers working together to create great software. So in 1994, when Microsoft announced that MS-DOS was "dead," I realized we could leverage the Free Software concept to create our own free version of DOS for everyone to use. With that, FreeDOS was born.

More than twenty years later, I continue to engage in open source software because of the possibilities inherent in that ideal. Given enough developers, we can do anything. Look at the reach of Linux. At its inception in 1991, Linux was a neat experiment, a promising small Unix operating system that would run on home computers, but didn't do much. Today, even Microsoft plans to support products for Linux.

My professional career keeps me pretty busy these days, so I don't have as much free time to write code for open source software projects. Instead, I have shifted my open source focus to usability testing. I want all open source software to be easy to use. That's how I contribute and work with others in open source. Because when we work together, we seize possibilities and make open source software truly amazing.
photo: mine

Thursday, March 17, 2016

SQL Server on Linux

I wrote a much longer piece about SQL Server on Linux on my IT leadership blog ("Coaching Buttons") but wanted to share a brief summary here for those who didn't see the news.

Last week, Microsoft made an amazing announcement: they are bringing SQL Server to Linux. Quoting Scott Guthrie - Executive Vice President, Cloud and Enterprise Group, Microsoft:
Today I’m excited to announce our plans to bring SQL Server to Linux as well. This will enable SQL Server to deliver a consistent data platform across Windows Server and Linux, as well as on-premises and cloud. We are bringing the core relational database capabilities to preview today, and are targeting availability in mid-2017.
I find this interesting because of Microsoft's history. There was a time not so very long ago that Microsoft feared open source software. Former CEO Steve Ballmer once referred to open source as a "cancer" that would taint everything it touched.

Over time, Microsoft advanced their strategy to instill "Fear, Uncertainty, and Doubt" when talking about open source software. The goal here was to raise unanswered questions that cause C-level executives to fear open source software. For example, "Look at the copyleft. If you use open source software in your enterprise, you'll need to give away your proprietary source code to anyone who asks." (No, you don't.) This led to Ballmer's famous "cancer" statement.

But under CEO Satya Nadella, Microsoft seems to have genuinely changed its tune. And now, we find Microsoft plans to release a version of SQL Server for Linux. I'm excited by this news. I don't have any Linux systems at my new organization (I work in local government, and the culture of government seems to be very pro-Windows) but I want to shift our IT organizational culture to eventually embrace other options, including Linux. SQL Server for Linux opens up new possibilities for us. And for that, I welcome this news.
image: Wikimedia (public domain)

Wednesday, March 16, 2016

First contribution to usability testing

This is a guest post from Ciarrai, who is applying for the usability testing internship.

In order to apply for an Outreachy internship, we ask that you make an initial contribution. For usability testing, I suggest a small usability test with a few testers, then write a summary about what you learned and what you would do better next time. This small test is a great introduction to usability testing. It exercises the skills you'll grow during the usability testing internship, and demonstrates your interest in the project.

Ciarrai's summary was an excellent example of usability testing, and I am posting it here with permission:
What is usability testing and why is it important?

When writing a piece of code, we often obsess about making our logic elegant and concise, coming up with clever ways to execute tasks and demonstrate ideas. What we sometimes lose sight of is the fact that we aren't just trying to craft well-built software; we also need it to be useful to the (hopefully) many others who will be using it. "Useful" encompasses a complex scope, but usability is a large part of what makes something useful. To further simplify an intricate topic: when we're talking about usability, we're basically asking the question, "Can people easily use this thing?"

What if we could just give users the software in question, ask them to perform some of the general tasks associated with the program, and observe the results? Would this not likely be the most effective way to judge a program's usability? We're not asking people to describe an experience; we're actually watching and listening to them as they do it. This is what usability testing is all about, and it is crucial to the creative process of building anything user-based. If people can't use our software, then all the hard work of creating it was in vain.

Usability testing gives us insight into the holes in our development game from the user perspective, but also lets us see what works well about our software. Both types of data are indispensable as we continue to design and modify new programs. We can start to see patterns and refine our approach in a way that hopefully makes the whole process of creating more effective.

Methods

For the usability test, I set aside a guest account on my machine, a laptop running the Fedora 23 operating system with the GNOME desktop. There were no modifications to the install that should affect the results of the test. The participants executed the test separately from one another, using identical settings.

The scenario tasks I used were taken from previous usability tests. I used the six Gedit scenario tasks from Jim Hall's blog, and I borrowed the four Nautilus scenario tasks from Gina Dobrescu's blog, from a previous usability testing internship.

I was given permission to use previous scenario tasks. One reason for using previously-tested scenario tasks was that I don't yet have the skills for coming up with the most appropriate scenario tasks for a usability test. Another thought I had, though, is that by reusing scenarios, I am widening the pool of test subjects for these scenario tasks, which could provide useful additional testing for these GNOME applications. Using these approved scenarios also meant that I could go ahead with conducting a usability test on a shorter timeline.

I introduced the test by letting the volunteers know that they would be testing two GNOME applications: a file manager and a text editor. I let them know that they were not being judged; rather, I was looking to see how well the software worked for them. I told them there was no time limit, and that they should try to complete the tasks in a manner comfortable to them. I also stressed that if they felt a task was too difficult, it was fine to stop; they should only proceed as far as they normally would when trying out a new application. I asked that the participants talk aloud during the test, vocalizing their thoughts about the test to the best of their abilities. I said that I would be there to listen, observe, and help with any general issues, but that I was not going to provide assistance in completing any of the tasks.

I sat next to each tester and watched the screen over their shoulders while listening to them describe the process. I reminded testers that they could take their time and that they could abandon a task if it proved too difficult, but I was otherwise silent as they worked.

Directly following each test, I recorded the test results using a heat map and made a few notes to myself for reference when doing this write-up. I didn't write anything down during the test itself because I thought that would make the participants nervous.

Results

Overall, the testing proved to be quite interesting. Of the three testers, one works on Linux systems for a living and had experience with GNOME before. The other two use computers daily but were not particularly skilled users and were not familiar with the GNOME environment. The level of confidence with which they completed the tasks was strikingly different between the more and less skilled users. What was very interesting, though, was seeing them all struggle on the same tasks. I think I got some real insight into a few things that could enhance the usability of Gedit.

So, what commonly seems intuitive about the software?

Well, no one seemed to have a problem writing, saving, and copying a file in Gedit. I think it functions similarly enough to other software, at least at the level of editing a note. One participant mentioned expecting the Save button to be on the upper left instead of the right, but had no real trouble finding it.

As for Nautilus, using the search function seemed to be intuitive to all participants. The magnifying glass icon is well understood to be associated with search functionality. Participants seemed to understand what they were looking at and were able to navigate the home file system with ease. I think the takeaway for what went well regarding both applications was that they looked familiar enough and operated in a way that all participants were accustomed to on the basic level.

The difficulties in using both Gedit and Nautilus were similar, though to varying degrees across all participants. With Gedit, it seemed that everyone wanted the user preferences for fonts and colors to be located in the drop-down menu. I watched one participant navigate through each option in the menu drawer at least three times, believing that font changes had to be located there. Two participants failed to ever check the Gedit → Preferences menu for fonts. The third used the Internet to locate the fonts tab for the application. Gedit seems to separate editor text tasks (find and replace, for example) from editor preferences (color themes), but this seemed a non-intuitive divide for users.

With Nautilus, the difficulty seemed to be similar to that of Gedit. Testers had trouble changing preferences for how Nautilus works. I think that navigating to the Nautilus → Preferences menu did not come naturally to most participants. The menu being located above the application window is perhaps an unusual setup, and it confuses testers who are used to other desktop environments. It also seemed as though participants wanted the layout preferences to adjust automatically when changed. It took users much longer to change Nautilus to list format because of this. One tester abandoned the task, even though he figured out where the preferences were located, because he never tried closing the application.

The feedback from testers was generally that the tasks were either very easy or quite difficult, but that they all felt that they could use these applications to do their daily tasks.

What worked well in the test?

I think the test was successful in getting a small amount of data about the usability of Gedit and Nautilus. Having participants that represented different user groupings was helpful in getting a sense of how a variety of target users would react to GNOME. It was also helpful for me as the moderator to see where difficulty may have arisen through lack of familiarity with the task as opposed to the application.

All test subjects seemed to be mostly comfortable with the test, and were put more at ease by my explaining that they were not being scrutinized for how well they accomplished the tasks. The testers gave their earnest efforts to complete the tasks. None of the testers took more than 45 minutes to complete the test to the best of their abilities.

What were the challenges?

Participants reacted differently to being asked to talk their way through completing the tasks. One tester spoke aloud every action he was doing (ex. "Okay, now I am pressing alt + tab to navigate back to the file system"). Another seemed shy to say anything about what she was doing, especially as the tasks became more difficult. This made each test experience unique, in a way that made it harder to judge the experience for the tester.

One participant mentioned feeling stupid after not being able to complete a task. I think it's valuable to hear this kind of feedback, though it was discouraging to know that people take it personally when they cannot execute a task they deem simple.

It felt strange to be so closely watching the testers as they worked. I found it difficult to restrain myself from helping the testers when they were having a rough time with a task.

I didn't know how to respond to the question of searching the Internet for answers when someone got stuck on a task. I could see how both allowing and disallowing the Internet could be valid decisions. In the end, I decided for this test that participants should go through whatever normal channels they take when trying to complete a task with an application. This meant that one tester used an Internet search to complete a task while the others never did.

Recommendations

What comes to mind when thinking about conducting a test or experiment is accuracy. I think striving for more consistency is the best way to improve on future usability testing. One part of achieving accuracy would be removing known variables. The other would be adding functionality that yields more of the type of results that we want. With that in mind, my recommendations for future tests would be:
  • Give testers examples for how to talk through the execution of their tasks so that similar information can be gathered from all testers.
  • Record the tests so that I am not relying on memory when making comparisons and so that others could hear/view the test and interpret the results.
  • Get as much of a variety of users as possible to perform the test. This would include computer science professionals, different operating system users, those who use computers heavily but not for computer science, people of a variety of ages, etc.
  • Be clear on boundaries, such as Internet usage during the test.
Appendix/Scenarios

These are the scenario tasks used in the usability test:

Gedit (GNOME Text Editor)

1. You need to type up a quick note for yourself, briefly describing an event that you want to remember later. You start the Gedit text editor (this has been done for you).

Please type the following short paragraphs into the text editor:
Note for myself:

Jenny and I decided to throw a surprise party for Jeff, for Jeff's birthday. We'll get together at 5:30 on Friday after work, at Jeff's place. Jenny arranged for Jeff's roommate to keep him away until 7:00.

We need to get the decorations up and music ready to go by 6:30, when people will start to arrive. We asked everyone to be there no later than 6:45.
Save this note as party reminder.txt in the Documents folder.

2. After typing the note, you realize that you got a few details wrong. Please make these edits:
  • In the first paragraph, change Friday to Thursday.
  • Change 5:30 to 5:00.
  • Move the entire sentence Jenny arranged for Jeff's roommate to keep him away until 7:00. to the end of the second paragraph, after no later than 6:45.
When you are done, please save the file. You can use the same filename.

3. Actually, Jeff prefers to go by Geoff, and Jenny prefers Ginny. Please replace every occurrence of Jeff with Geoff, and all instances of Jenny with Ginny.

When you are done, please save the file. You can use the same filename.

4. You'd like to make a copy of the note, using a different name that you can find more easily later. Please save a copy of this note as Geoff surprise party.txt in the Documents folder.

For the purposes of this exercise, you do not need to delete the original file.

5. You decide the font in the editor is difficult to read, and you would prefer to use a different font. Please change the font to DejaVu Sans Mono, 12 point.

6. You decide the black-on-white text is too bright for your eyes, and you would prefer to use different colors. Please change the colors to the Oblivion color scheme.

Nautilus (GNOME File Manager)

1. Yesterday, you re-organized your files and you don’t remember where you saved the copy of one of the articles you were working on. Please search for a file named The Hobbit.

2. Files and folders are usually displayed as icons, but you can display them in other ways too. Change how the file manager displays files and folders, to show them as a list.

3. You don’t have your glasses with you, so you aren’t able to read the names of the files and folders very well. Please make the text bigger, so it is easier to read.

4. Please search for a folder or a file that you recently worked on, maybe this will help you find the lost article.

Tuesday, March 15, 2016

A preview of March Madness

Every Spring, it seems everyone I know follows the NCAA March Madness basketball tournaments. The games start soon.

I've written an article about how to "predict" your March Madness bracket (PDF) using a shell script. That should go live at Linux Journal within a few days, and I'll post another version of that article here a week or so after that. [Edit: My article is now on Linux Journal, as Bash Shell Script: Building Your March Madness Bracket. -jh]

But I wanted to share a preview of the article, by way of me running that script:
$ ./basketball.sh

___ MIDWEST ___

round 1

1 vs 16 : 1
8 vs 9 : 8
5 vs 12 : 5
4 vs 13 : 4
6 vs 11 : 6
3 vs 14 : 3
7 vs 10 : 10
2 vs 15 : 2

round 2

1 vs 8 : 1
5 vs 4 : 4
6 vs 3 : 3
10 vs 2 : 2

round 3

1 vs 4 : 1
3 vs 2 : 3

round 4

1 vs 3 : 1

___ EAST ___

round 1

1 vs 16 : 1
8 vs 9 : 9
5 vs 12 : 5
4 vs 13 : 4
6 vs 11 : 6
3 vs 14 : 3
7 vs 10 : 7
2 vs 15 : 2

round 2

1 vs 9 : 9
5 vs 4 : 5
6 vs 3 : 3
7 vs 2 : 2

round 3

9 vs 5 : 5
3 vs 2 : 3

round 4

5 vs 3 : =5

___ WEST ___

round 1

1 vs 16 : 1
8 vs 9 : 9
5 vs 12 : 5
4 vs 13 : 4
6 vs 11 : 11
3 vs 14 : 3
7 vs 10 : 10
2 vs 15 : 2

round 2

1 vs 9 : 1
5 vs 4 : 4
11 vs 3 : 3
10 vs 2 : 2

round 3

1 vs 4 : 4
3 vs 2 : 2

round 4

4 vs 2 : 4

___ SOUTH ___

round 1

1 vs 16 : 1
8 vs 9 : 9
5 vs 12 : 5
4 vs 13 : 4
6 vs 11 : 6
3 vs 14 : 3
7 vs 10 : 7
2 vs 15 : 2

round 2

1 vs 9 : 1
5 vs 4 : 4
6 vs 3 : 6
7 vs 2 : 2

round 3

1 vs 4 : 1
6 vs 2 : 6

round 4

1 vs 6 : 1

Monday, March 14, 2016

Happy Pi Day!

I wanted to wish everyone a happy Pi Day! If you aren't familiar with Pi Day, here's a quote from the Pi Day website to explain:

Pi Day is celebrated on March 14th (3/14) around the world. Pi (Greek letter “π”) is the symbol used in mathematics to represent a constant — the ratio of the circumference of a circle to its diameter — which is approximately 3.14159.

And if I've scheduled this post correctly, it should show up on 3/14 at 1:59pm, thus recognizing 3.14159.
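
If you want to compute that approximation yourself, the bc calculator (installed on most Linux systems) can derive it from the arctangent identity pi = 4 * atan(1). This one-liner is just my own quick check; the last digit or two may be off because bc truncates intermediate results:
$ echo "scale=10; 4*a(1)" | bc -l
3.1415926532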
image: Wikimedia (public domain)

New web design

You may have noticed that I updated the web design for the Open Source Software & Usability blog. The new design is cleaner and easier to read. I hope you like it.

If you have any technical or readability issues with the new design, please leave a comment below and I'll look into it. Thanks!

Thursday, March 10, 2016

How to join the internship

I've already been contacted by a few people interested in applying for the internship in usability testing. GNOME and other projects are offering paid, mentored, remote internships to people from groups underrepresented in free and open source software as part of the Outreachy project. I'm proud to mentor again with Outreachy, for the usability testing project.

Are you interested in doing usability testing? Internship dates are May 23 to August 23. You need to apply by March 22.

Here's what to expect in the usability testing project:

I prefer to use a project outline similar to an online class. In the Fall semester, I taught an online class on the usability of open source software. We'll follow the same basic outline: learn about usability, then practice usability. I described the course outline in my earlier article on Teaching usability of open source software. That's probably a good article to read so you know what to expect.

One requirement in Outreachy is that you will maintain a regular blog about your work. But that shouldn't be a problem because I structure the internship so you'll have something to write every week. To understand what's expected, interested applicants should review the blogs from the two previous usability interns: Gina and Sanskriti.

My project will focus specifically on the usability of GNOME. You'll learn a lot about usability in general along the way, but in the end we want to learn about how to make GNOME easier for everyone to use.

So, how to get started?

The first step in the application process is to make a first contribution, and to review that contribution with the project mentor (me). For the usability project, I ask that applicants perform a small formal usability test, and perform some basic analysis using the heat map method.

Previously, applicants did a one-person usability test, but it's really hard to compare results from only one tester. So I'm asking folks in this cycle to find three volunteers for their usability test. As you'll find, that's not enough to make general comparisons, but it's enough to see what's involved in a usability test.

To get a jump-start on this, you can re-use the scenario tasks from a previous usability test. You can probably find these on Gina's and Sanskriti's blogs, or you can search for "scenarios" on my blog to find a few examples, including my usability scenarios from 2012. For your first contribution, this doesn't need to be a long test. I think ten scenario tasks is fine.

You can do your first contribution usability test on any open source software program. But I recommend GNOME, since that's the topic of the usability testing internship. So just pick some programs from GNOME, and do a total of ten scenario tasks.

When you've finished your usability test, generate a heat map of your results, then write a brief summary about the experience. This is not a formal paper, just a few pages about the usability test. And I mean "a few," like two or three pages. Feel free to use an informal voice, like you would write in a blog; imagine you are explaining your test to a friend. Email your summary to me in ODT or PDF format, and I'll review the results with you.

There's a usual format for writing about usability test results. The general outline for your summary should be the following:
Introduction
Provide one or two paragraphs to describe the work and why it is important.
Methods
Provide a few paragraphs to describe how you did the research, including what equipment you used ("I set up a test account on my laptop running Fedora Linux 23…"), whether the research is based on something someone else did, etc.
Results
Describe what data you collected. Include your usability test heat map in this section. What areas of the software are easy to use? What parts are more difficult to use? What worked well in the test? What were the challenges? (One paragraph for each.)
Recommendations
Briefly describe how you would do the usability test differently next time, to make it better. One paragraph introduction and three bullet points.

If you need help getting started, feel free to email me! I'm always happy to help! (My email address is listed on my JimHall profile page on the GNOME Wiki.)
image: Outreachy

Tuesday, March 8, 2016

Cultural context in open source software

Have you ever worked on an open source software project—and out of nowhere, a flame war starts up on the mailing list? You review the emails, and think to yourself "Someone over-reacted here." The problem may not be an over-reaction per se, but rather a cultural conflict.

Anthropologist Edward T. Hall (not related) identified the "context preferences" of different cultures, describing high- and low-context cultures. In his 1976 book, Beyond Culture, Hall described the different communication styles of low- and high-context communicators. This difference is why, for example, German and English speakers interact differently with each other.

I don't mean to reduce entire cultures to a simple scale, but understanding the general communication preferences of different cultures can help you in any open source project that involves worldwide contributors. A brief introduction:

Hall's cultural factors say low-context cultures are more direct, and high-context cultures more indirect. A low-context communicator will get right to the point and prefers explicit messages that are simple and clear, while a high-context communicator relies more on shared context, relationships, and what is left unsaid.

For example, in a high-context culture, communication will be very indirect, sometimes nonverbal. High-context cultures may hide reactions; you may not realize if you've offended someone because it would be rude to react and risk embarrassing you. It may take a very long time for a high-context speaker to get to the point. Japan and China are typical examples of a high-context culture.

Where do you fit on the cultural context scale? Germany and the Netherlands are typically low-context; China and Japan are high-context. Italy and France are somewhere in the middle. The US and England skew towards the low-context end, at about the one-quarter mark.

In a personal example: I'm in the US. At that one-quarter mark, we are mostly low-context, but we share some high-context qualities. We appreciate that you are on time, but we don't worry if you're a few minutes late. We speak openly about what's on our minds, but we're cautious not to embarrass or offend. We are organized, but not strictly so. We get to the point, but also use "phatic language" and talk about the weather or sports as a way to "ease into" a conversation.

Hall's high- and low-context model is a broad cultural classification, and individuals may differ, but you can apply the high/low-context classification to understand the speaking styles of different audiences. And in doing so, you can become a more effective communicator.

Of course, these are cultural averages. While the US overall skews to low-context on average, you can find examples within the US that deviate somewhat. New York is probably lower context than, say, Minnesota. And you may find differences within industries. I find higher education to be higher context than industry. But as a national average, the US is around that one-quarter mark, trending to low-context.

After I learned about high- and low-context cultures, I adjusted my email style to my audience. Am I writing to someone from a high-context culture? I'll try to reference our relationship and include more background in my email. Are you from a low-context culture? I'll be more direct, and aim for the clearest delivery in the shortest message. If you need more from me, I'll assume you will reply and ask for details. For low-context cultures, my mantra in writing emails is 1. write message, 2. delete most of it, 3. click Send.

I encourage you to learn more about Hall's cultural factors. How does your organization communicate? How do others in your field work together? While the US tends to be lower context, at about that one-quarter mark, some regional and professional variances mean you may need to adapt your personal style to suit the environment you work in. Understand how best to communicate, so your message will be heard.
images: mine (Feb 2016)