Monday, July 31, 2006

The role(s) of user research

I've been reading a fair few articles and posts recently about various aspects of, and issues with, user research, and it has led me to put together the following proposition:

User research in information architecture, interaction design &/or user experience design is used to:
  • define the problem
  • inform the solution design
  • evaluate different solution options
  • validate and/or fine-tune the designed solution
  • test the implementation
When I have more time I'll put together some specific user research tasks that might typically be used in each of these ways, but if you have any ideas or thoughts feel free to send them through or add them as a comment.

Thursday, July 27, 2006

Brand Experience in User Experience - now at uxmatters.com

I'm very pleased to report that my first article for UXmatters - www.uxmatters.com - has been published!! The article - Brand Experience in User Experience - looks at the role of brand experience in the definition of project objectives for user experience projects, and how those objectives can flow through into the resultant solution design.

The article references previous work by Jared Spool (of UIE) and Dirk Knemeyer (of Involution Studios) - see the article for specific works.

But since writing the article I've also found this article by Dan Saffer, written in June 2002 for Boxes and Arrows. The article - Building Brand into Structure - takes a similar view of brand experience (in this case, brand values) as it applies to the design of sites - information architecture & interaction design. Well worth a read, particularly for the examples provided at a very nuts-and-bolts level. It's a pity I didn't find this article earlier, as it's rich in material.

For those heading to the Sydney IA-Peers f2f tonight, I'll see you there.

Friday, July 21, 2006

What's wrong with the following...

Using the old form design, users took an average of 4mins 31secs to complete Task 6. In the re-designed form, task completion time was reduced, with users taking as little as 48secs on the task.

Can you spot the problem? Answers welcome...

---------------------------------------------------------------------------------------------------------------------------

OK, I understand. Better things to do on a weekend than think about something like this...

The above is an example of statistical sleight of hand. In reading through the above you've probably come away with the impression that:
  • Two form designs have been tested;
  • The task completion time on the old form is longer than for the new form;
  • Based on the numbers given - 4mins 31secs & 48secs - the improvement from old to new is substantial.
Well, the second and third points are completely unsupported. Why?

The first part of the statement provides us with the average task completion time for the old design - 4mins 31secs. This is followed by an assertion: "In the re-designed form, task completion time was reduced". The only supporting evidence provided for this assertion is the low-end extreme measure of task completion time for the new form design.

But how does this compare to the low-end extreme measure of task completion for the old design? What about the mean task completion time for the new design? And what about the variation in each sample?

We're simply not comparing the same thing in each case, but the 'discussion of results' clearly wants us to think that we are. This is a common type of analytical two-step seen in many forms of research, and one you should look out for.
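To make the trick concrete, here's a minimal sketch in Python with invented task times (the numbers are illustrative only, not data from any real study). The new design's minimum looks spectacular even though its mean is no better than the old design's:

import statistics

# Hypothetical task completion times in seconds (invented for illustration).
old_design = [271, 250, 310, 265, 280, 248, 273]   # mean = 271s, i.e. 4mins 31secs
new_design = [48, 295, 300, 310, 315, 320, 325]    # one very fast outlier at 48s

# The misleading comparison: the old design's mean against the new design's minimum.
print(f"old mean: {statistics.mean(old_design):.0f}s vs new minimum: {min(new_design)}s")

# A like-for-like comparison reports the same statistic, plus spread, for both samples.
for label, sample in (("old", old_design), ("new", new_design)):
    print(f"{label}: mean {statistics.mean(sample):.0f}s, "
          f"min {min(sample)}s, max {max(sample)}s, "
          f"stdev {statistics.stdev(sample):.0f}s")

Run on these made-up samples, the first line suggests a dramatic improvement, while the like-for-like summary shows the new design's mean is actually slightly worse.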

User research smoke & mirrors - a 5-part series by Christopher Fahey

Over the past week I've really enjoyed reading (and re-reading) Christopher Fahey's 5-part series of blog posts on user research. This thought-provoking and provocative series sets out Christopher's issues with much of the research carried out in the interaction design/user experience space, providing examples, caveats, pros & cons.

The series is, I believe, a much-needed dose of scepticism and critical analysis of our discipline's love-hate relationship with research, and the results derived from it.

An absolute must-read for anyone involved in the design of Web sites and applications based around user research.

Make sure you take the time to read through the various comments & Christopher's responses. They contain some interesting perspectives on the issue from other practitioners (myself included).

Wednesday, July 19, 2006

Luke W on granular bucket testing

Carrying on the user research theme...

Just reading through an article on bucket testing over at Luke Wroblewski's blog - Functioning Form (thanks to Pabini @ UXmatters for the link). Bucket testing is the process wherein you test two versions of the same thing (a page, form, design element, etc.) in parallel by channelling part of your site traffic through one version or the other and analysing the results (see the note at the end of this post).
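For anyone unfamiliar with the mechanics, the channelling is often done by assigning each visitor to a bucket deterministically, for example by hashing a visitor ID. Here's a minimal sketch of that idea (my own illustration, not anything from Luke's article):

import hashlib

def assign_bucket(visitor_id: str, buckets=("A", "B")) -> str:
    """Deterministically assign a visitor to a bucket.

    Hashing the visitor ID (rather than choosing randomly on each request)
    keeps a given visitor in the same bucket across visits, while splitting
    overall traffic roughly evenly between the two versions.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return buckets[int(digest, 16) % len(buckets)]

# Example: split some hypothetical visitor IDs across the two versions.
for vid in ("user-1001", "user-1002", "user-1003", "user-1004"):
    print(vid, "->", assign_bucket(vid))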

Luke raises the issue that, with advances in the ease with which bucket testing can be undertaken, organisations are performing tests on increasingly isolated design elements - for example, the use of text colours in particular areas of a page. However, looking at the results of a very granular test, and then adjusting the design accordingly, does not necessarily result in an optimised design overall.

Any student of mathematics, finance, economics, etc. will tell you that the 'optimal' solution to a set of equations or conditions is rarely the combination of the optimal solutions of each equation taken separately. This carries directly into the design of a page, form or screen of a Web site/application. It is pointless to carry out a test on a specific element in isolation and then adjust the design based on those results without also verifying that the change produces a better solution for the whole.

Luke puts it like this:
"A cohesive integration and layout of all the elements within an interface design is what enables effective communication with end users." [emphasis mine]

In other words (not that he needs my help), the optimal solution to a design problem is not the stitching together of individually-optimised components.
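A tiny, entirely invented example makes the point. Suppose two design elements interact, so the payoff of one choice depends on the other. Picking each element by its own best average result can give a worse page than evaluating the combinations together:

from itertools import product

# Hypothetical completion rates (made-up numbers) for each combination of
# two design elements. The elements interact: the best value for one choice
# depends on what the other is set to.
completion_rate = {
    ("blue link", "short label"): 0.62,
    ("blue link", "long label"):  0.71,
    ("red link",  "short label"): 0.74,
    ("red link",  "long label"):  0.58,
}

links = ["blue link", "red link"]
labels = ["short label", "long label"]

# Optimising each element in isolation (by its average across the other element).
best_link = max(links, key=lambda l: sum(completion_rate[(l, x)] for x in labels))
best_label = max(labels, key=lambda x: sum(completion_rate[(l, x)] for l in links))
print("element-by-element 'optimum':", (best_link, best_label),
      completion_rate[(best_link, best_label)])

# Optimising the design as a whole.
best_combo = max(product(links, labels), key=lambda combo: completion_rate[combo])
print("whole-design optimum:        ", best_combo, completion_rate[best_combo])

With these numbers the element-by-element choice lands on the blue link with the short label (0.62), while the best combination overall is the red link with the short label (0.74).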

Note: I'm in the process of writing a post/article on how you can carry out this style of analysis with some degree of statistical rigour.
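In the meantime, here's a flavour of what such an analysis might look like for a conversion-style metric: a two-proportion z-test comparing the two buckets. This is a generic statistical sketch with made-up numbers, not a preview of that post:

from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between buckets.

    conv_a / n_a: conversions and visitors in bucket A (likewise for B).
    Returns the z statistic and an approximate two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical bucket results: 1,000 visitors per bucket.
z, p = two_proportion_z_test(conv_a=112, n_a=1000, conv_b=141, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")

With these invented figures the difference sits right on the edge of conventional significance, which is exactly the kind of result a purely eyeballed comparison of percentages would gloss over.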