Opening eyes and minds with usability testing

I have had the good fortune recently to participate in two Government of Canada projects where we’ve had a usability expert as part of the team. (My double good fortune has been that the expert is Lisa Fast, of Neo Insight.)

Both of these projects involve the design and development of web-based discovery systems, where the content itself is sourced and aggregated from multiple levels of government.

My role has been designing the information architecture, specifically metadata structures for describing and tagging the information, and the vocabularies that are used to populate the metadata. These vocabularies are used as facets in the interface, to help users navigate, filter and discover information.
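
For readers who like to see the mechanics, here is a minimal sketch, in Python, of how a controlled vocabulary can drive faceted filtering. The facet names, terms, and records below are all invented for illustration; this is not the actual project metadata model.

```python
# A minimal sketch of vocabulary-driven faceted filtering.
# Facets, terms, and records are invented for illustration.

# Controlled vocabularies: each facet draws on a fixed set of terms.
VOCABULARIES = {
    "need": {"financing", "growth & planning", "hiring & training"},
    "audience": {"farmer", "small business", "manufacturer"},
}

# Content items tagged with terms from the vocabularies.
RECORDS = [
    {"title": "Export loan program", "need": "financing",
     "audience": "manufacturer"},
    {"title": "Farm succession guide", "need": "growth & planning",
     "audience": "farmer"},
    {"title": "Wage subsidy", "need": "hiring & training",
     "audience": "small business"},
]

def filter_by_facets(records, selections):
    """Keep only records whose tags match every selected facet value."""
    return [r for r in records
            if all(r.get(facet) == term for facet, term in selections.items())]

def facet_counts(records, facet):
    """Count how many remaining records carry each term of a facet,
    i.e. the numbers shown beside facet links in the interface."""
    counts = {term: 0 for term in VOCABULARIES[facet]}
    for r in records:
        if r.get(facet) in counts:
            counts[r[facet]] += 1
    return counts

if __name__ == "__main__":
    remaining = filter_by_facets(RECORDS, {"audience": "farmer"})
    print([r["title"] for r in remaining])   # ['Farm succession guide']
    print(facet_counts(remaining, "need"))
```

The real systems are far more sophisticated, of course, but the principle is the same: the terms users see in the facet lists are exactly the terms used to tag the content, which is why the wording of those terms matters so much (more on that below).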

This role has also involved content design, since we’re concerned not only about how stuff gets found, but also about what gets found. The content portion of the project has meant a lot of work simplifying and streamlining government content, and encouraging authors to use plain language (oriented to the citizen-user) and the active voice. Usability testing has had an impact here, too: it’s quite bracing to watch someone struggle to make sense of what your website is trying to tell them – typically, hunting for a nugget of relevant fact in a sea of words.

Lisa’s testing asks users – farmers, small business owners, manufacturers – to complete a set of tasks fundamental to the discovery system and “think aloud” as they go. To get early input, the tests are conducted on a private online prototype system. She records each session in a way that protects the identity of the person: in the video clips, we can see what they are doing on screen and hear their voices, and we know something about the type of business each subject operates.

The focus of the testing is to determine whether users can perform essential tasks efficiently and successfully. The process allows us to zero in on the things that slow them down, or stymie them completely: finding it difficult to choose between options, choosing incorrect options, correcting themselves when they’ve made a choice with unexpected results, or hitting a dead end. In our iterative, prototype-based process, we’ve even been able to make design tweaks on the fly during the testing, and see the impact immediately. In future, we’ll be able to continue to monitor task performance on the live systems, and improve them further.
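
To give a concrete sense of what “task performance” means in numbers, here is a toy sketch of the per-task success rates and completion times this kind of testing produces. The tasks, timings, and field names are made up; the real sessions capture far richer detail (video, think-aloud audio, navigation paths).

```python
# A sketch of the per-task metrics usability testing yields.
# The session data is invented for illustration only.
from statistics import median

# Each tuple: (task, succeeded, seconds to complete or abandon)
SESSIONS = [
    ("find growth services", True, 95),
    ("find growth services", False, 240),
    ("find growth services", True, 120),
    ("locate a permit form", True, 60),
    ("locate a permit form", True, 75),
]

def task_metrics(sessions):
    """Group sessions by task; report success rate and median time."""
    by_task = {}
    for task, ok, secs in sessions:
        by_task.setdefault(task, []).append((ok, secs))
    report = {}
    for task, results in by_task.items():
        report[task] = {
            "success_rate": sum(ok for ok, _ in results) / len(results),
            "median_time_s": median(secs for _, secs in results),
        }
    return report

if __name__ == "__main__":
    for task, m in task_metrics(SESSIONS).items():
        print(f"{task}: {m['success_rate']:.0%} success, "
              f"median {m['median_time_s']}s")
```

Numbers like these are what let us compare one round of testing against the next, and eventually the live systems against the prototypes.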

For the taxonomy, the usability testing has revealed the enormous difference in efficiency that small changes can make. These changes involve term selection, ordering of terms, and ordering of items in lists.

As an example, we used “business planning &amp; development” as an entry under the question “What do you need assistance with?” That phrasing tends to over-emphasize “business planning,” which many business operators think of as an annual process, or something that’s done in order to secure financing. “Business development” proved largely meaningless to the test subjects. What we wanted to convey was the idea of setting goals for business expansion and long-term success. Putting “growth” as a trigger word close to the beginning of the phrase, and removing “development” altogether, increased success rates when users were asked to find services that would help them increase the profitability of their businesses.

Content can be a tricky thing. The testing has borne out everything you ever heard about good web content: it needs to be scannable, to the point, and task-oriented. (Readable content is only one aspect of the content challenge. Another partner on the project, Joe Gollner of Gnostyx Research, has blogged about how IT deals with content on his blog, The Fractal Enterprise: see Fear of Content.)

Perhaps most importantly, however, it’s not just the designers who have benefited from the testing. Lisa’s video clips, recommendations, and test results have been shared extensively with the business side of the project: with project sponsors, partners, and other stakeholders. The “voice of the customer” is not always so immediate in government projects, and it has really engaged them. One provincial executive described the video performance of users on his provincial pages as “depressing,” which I think is a sign that Lisa’s work really hit home. In another study, partners who had been dragging their feet about the need for a new system came fully on board when they saw the new system deliver success rates 30% higher than the old one.

Says Lisa: “Usability testing tends to engage partners and stakeholders as part of the team – they all want to help the users they’ve seen struggling in the videos. They become committed to improving their components of the system. The iterative testing in these projects rewarded that team approach with user success rates that climbed into the golden 90% range by the final round of testing.”

Yay team. And thanks, Lisa.

P.S. Both projects have used an iterative design process, with multiple rounds of testing and refinement. Project executive Stephen Karam of Systemscope will be speaking about this design process at GTEC on October 19: http://www.systemscope.com/news/systemscope-at-gtec-2011/ (Workshop #2).