Friday, December 22, 2006

Season's Greetings

As the year winds down to a semi-close, I wanted to wish you all a very happy & safe holiday season.

See you in 2007.

Wednesday, December 13, 2006

webblast, webjam 2006 - awesome!

My sense of euphoria and positivity about my chosen industry has been bolstered in the past few weeks by two really cool, emotionally uplifting events hosted by different segments of the Web community here in Australia.

Since '97 I've seen the Web industry in Australia go through a period of 'golden child'-hood, market darling, crash victim and pariah. We wandered in the wilderness for 3 or 4 depressing years after 2001 where we hardly talked to one another, got into petty flame-wars on lists, and struggled to regain our sense of purpose, place, and direction.

Last night I had the pleasure of attending the first webjam event in Sydney. The concept is simple, yet novel: 20 speakers get 3 minutes each to present to an audience of Web industry professionals about something cool they've worked on in 2006. The audience got to vote for their favourite presentation via an SMS voting tool, and prizes were awarded for the most popular speaker, and 'door prizes' for 7 lucky voters.

The presentations were mixed, from News Labs' new offerings to some neat JavaScript widgets, to social network mapping tools, mash-ups and a host of others. All were interesting - some very - but that wasn't what made the night special for me...

It was the sense of sharing, community, and mutual respect that emanated throughout the room as luminaries and lesser lights stood up under the supportive attention of their peers and strutted their stuff. It was the spontaneous applause; the laughter; the mingling and storytelling that went on as people reconnected with friends and colleagues.

Two weeks ago Webblast - a shared Christmas event bringing together industry pros from a range of groups (Web Standards, IA-Peers, PHP Users, to name a few) - was a resounding success. 180 people talking, chatting, meeting new & familiar faces. The event was oversubscribed within 36 hours of the announcement going out: I think we could have seen 300+ had the venue been able to accommodate them all.

I can only commend the efforts of the people who organised these two events, and the various sponsors who supported them, for putting on two very memorable and enjoyable events. I'm already looking forward to 2007, and looking back on 2006 a lot less jaded than I started the year.

Thank you.

Friday, November 10, 2006

4 seconds - Part II

OK, Akamai seem to have recovered from their little glitch & I now have a copy of the report. The detail of the report paints a very different picture from what's being reported in the press release and the media. This should serve as a lesson for you all not to rely on the media for your research - track down the primary source for the article and read it (carefully) for yourself.

Some questions I have for the report authors, journalists & PR people at the various companies:
  • on the headline finding of the report - that "Four seconds is the maximum length of time an average online shopper will wait for a Web page to load before potentially abandoning a retail site." - the report data indicates a very different picture: 80% of dial-up users, and 68% of broadband users, will wait longer than 4 seconds before leaving a Web site. Looking at the detailed data from the report, I can't see any way to arrive at an 'average' measure of 4 seconds. Neither the mean, median nor mode of the data comes out at 4 seconds. The lowest figure I can arrive at is at least 5 seconds, and possibly quite a bit higher. (A rough sketch of how such a figure can be worked out from bucketed data follows after this list.)
  • Broadband users start to consider abandonment after less than 1 second in some cases (1% of broadband respondents). Dial-up users show a little more patience, starting to abandon the site after 1-2 secs of waiting (3%). Wouldn't this be a better way to report the findings? "Online shoppers start abandoning sites after 1 second of waiting"
  • From the original report: "Roughly 75% of online shoppers who experience a site freezing or crashing, that is too slow to render, or that involves a convoluted checkout process would no longer buy from that site". The respondents were asked to indicate whether they would be 'less likely to buy from the retailer again online' - not whether they definitely would not. This is a significant difference in interpretation of the data, and leads to the sort of attention-seeking articles shown below.
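
If you want to sanity-check that kind of 'average' claim yourself, here's a rough sketch of how a mean and median can be worked out from bucketed survey responses. To be clear: the bucket boundaries and percentages below are invented for illustration only - they are not the report's figures.

    # Hypothetical bucketed survey data (NOT the report's actual figures):
    # share of respondents who say they abandon after waiting within each range (seconds).
    buckets = [
        ((0, 1), 0.01),    # abandon after less than 1 second
        ((1, 2), 0.03),    # 1-2 seconds
        ((2, 4), 0.16),    # 2-4 seconds
        ((4, 6), 0.30),    # 4-6 seconds
        ((6, 10), 0.30),   # 6-10 seconds
        ((10, 20), 0.20),  # longer than 10 seconds (capped at 20 for a midpoint)
    ]

    # Mean: weight each bucket's midpoint by its share of respondents.
    mean_wait = sum(((lo + hi) / 2) * share for (lo, hi), share in buckets)

    # Median: walk up the cumulative distribution until we pass 50%.
    cumulative, median_bucket = 0.0, None
    for (lo, hi), share in buckets:
        cumulative += share
        if cumulative >= 0.5:
            median_bucket = (lo, hi)
            break

    print(f"mean wait before abandoning: {mean_wait:.1f} secs")
    print(f"median falls in the {median_bucket[0]}-{median_bucket[1]} secs bucket")

Even with those made-up shares the mean comes out well above 4 seconds, which is the kind of cross-check I'd like to see run against the real survey data.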

How the article was reported elsewhere:
Slashdot: "Of course we all want webpages to load as fast as possible, but now research has finally shown it: four seconds loading time is the maximum threshold for websurfers. Akamai and JupiterResearch have conducted a study among 1,000 online shoppers and have found, among other results, that one third of respondents have, at one point, left a shopping website because of the overall 'poor experience.' 75% of them do not intend ever to come back to this website again. Online shopper loyalty also increases as loading time of webpages decreases. Will this study finally show developers of shopping websites the importance of the performance of their websites?"

What's wrong here:
  • four seconds is not the maximum threshold for websurfers. 80% of dial-up & 68% of broadband users indicated they would typically wait longer than this for a page to load;
  • 75% of respondents indicated they would be less likely to return to the site; not that they had no intention of returning;
  • The report does not correlate an increase in online shopper loyalty to a decrease in webpage loading time: it indicated the converse (slower loading time correlates to decreased loyalty), which is not necessarily the same thing.

InformationWeek: "The survey found that more than one-third of online shoppers abandoned sites entirely whenever they suffered a poor experience. Some 75% of the online shoppers polled said they wouldn't be likely to use the sites in question after they had a poor shopping experience."

What's wrong:
  • More than one third of dissatisfied online shoppers who also abandoned a site did so due to load times, errors or crashes. The percentage of abandonments due to all sources of dissatisfaction ('poor experience') was not reported, but is also presumably higher than one third.
  • Again with the 75% of online shoppers (see above)

Sydney Morning Herald: "According to a new report on consumer behaviour, four seconds is the longest that online shoppers are prepared to wait for a site to load before backing out of the transaction."

What's wrong:
  • Again, 68% of broadband and 80% of dial-up users indicated in their survey response they would wait longer than 4 seconds for a page to load;
  • Slow loading times were a source of dissatisfaction for 33% of respondents;
  • 18% indicated they had abandoned a transaction due to the slow page loading on the site.

Somewhat interestingly, the biggest recorded factor affecting the likelihood that a dissatisfied online shopper would also shop with that retailer off-line was a convoluted or confusing checkout process.


4 Seconds - part I

I've just read an article on the Sydney Morning Herald site claiming: "According to a new report on consumer behaviour, four seconds is the longest that online shoppers are prepared to wait for a site to load before backing out of the transaction."

This is followed about two sentences later by: "It found that the average shopper will abandon an online store if forced to wait more than four seconds for pages to load."

So, either 4 seconds is the longest time shoppers are willing to wait - representing the extreme upper fringe; or it's what the 'average' user is prepared to wait before abandoning a process. Since these are two very different things, I'm currently trying to get my hands on a copy of the full report available from the report's sponsor Akamai (they're in the business of providing 'Content delivery, Application Performance Management, and Streaming Media Services').

After filling in the obligatory form on the Akamai site so that I can lay hands on the free copy of the report, I'm currently just getting the following:

"Service Unavailable - DNS failure
The server is temporarily unable to service your request. Please try again later.

Reference #11.95088790.1163111838.a9fda45"

I'll provide more information on this whole saga when Akamai corrects this little fault and gives me access to the report.

Wednesday, November 08, 2006

User research: subjectivity and objectivity in practice

My latest article - and the first article in my new column - went live at UXMatters yesterday. Have a read of the article and let me know what you think.

Thursday, November 02, 2006

Since Oz-IA and other recent events...

Since the Oz IA retreat things have been rather hectic around here. We're in the final stages of developing a major Web-based business application that utilises AJAX throughout the interface to provide a more responsive experience for the users. My work on the project commenced in March 2004 with a high-level conceptual architecture and has progressed through various stages since then. Four more weeks should hopefully see us finished with the main body of the application, leaving integration into other systems as the last stage.

I've also been busily working on writing a column for UXMatters, which I hope you'll see in the near, near future. The column's about user research generally and starts with a review of some recent debate about the merits of user research; I'm hoping it isn't too abstract for the readers.

A couple of recent projects have finished off well, and been well-received. Our site for Tourism Queensland - Queensland Holidays - won an award recently, for which we were all pretty happy.

On a more personal note, the summer is almost upon us and with it a new season of international cricket!! I'm looking forward to the Ashes series, and if you look carefully you'll see me parked on a seat in the Ladies Stand for the duration.

I'm not sure how many of you may have attended my presentation at the Oz-IA conference, but the feedback I've received so far has been largely positive, including this from Zef in NZ who says I hurt his brain - but in a good way.

More soon...

Tuesday, October 03, 2006

Oz-IA presentation

If you've ever wondered how someone goes about planning a presentation... here's a photo of my presentation notes for Oz-IA 2006 as it came together over the past week.

Wednesday, September 27, 2006

Getting ready for Oz-IA

With four days to go until my presentation at Oz-IA I'm in the final stages of preparation for the conference. The presentation itself is all laid out neatly - written on the glass wall to my office :-) Now I just need to migrate that into an actual presentation document, add some examples and I'm done. And hope to God nobody decides to clean the scribbled gibberish off my walls!!

If you're attending the conference this weekend, be sure to say 'Hi'.

Wednesday, August 23, 2006

Jakob on advanced web traffic visualisations

I read this article the other day and found it interesting for a variety of reasons: I approve of people performing detailed analysis of their data; I like it when someone with access to large data sets takes the time to tell us about what they see in them; and I'm always interested in seeing nice visualisations of complex & large data sets.

I woke up this morning with an uneasy feeling swirling around the back of my (still clouded) mind and traced it to this article. Something was bugging me about it, so I went back and took another read.

Jakob advocates the use of an advanced log-log plot of traffic usage data as a way of highlighting the behaviour of site traffic with respect to page views at the low frequency end of the plot. He notes that a simple linear plot of the page view data seems to indicate that the traffic displays a Zipf distribution, but that, when viewed using the log-log plot, you can clearly see that it falls away from the predicted values at the low end.

And here's where he lost me: he then goes on to assert that this is evidence that the site in question is failing to meet the demand of the site's audience, based on the divergence from the Zipf distribution... 'So what?' I hear you ask. Well, at no stage has he shown - even loosely - that the Zipf distribution is ever a good model for site traffic, even if the site owner goes crazy with the content development. Is there even one example that can be used to show that this model is a good predictor of traffic patterns to a site? Anything?

So, based on the evidence provided (nil) I'd have to reject this advice and look for alternative explanations. Some exist already: the traffic usage distribution shown by Jakob in his example could actually be an occurrence of a lognormal distribution. It may be that the 'natural' distribution for page view data on a Web site is the lognormal, and not the Zipf at all.
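
To make the distinction concrete, here's a quick sketch - using synthetic data, not Jakob's - of what the two shapes look like on a log-log rank/page-views plot. A true Zipf (power-law) curve plots as a straight line; a lognormal one falls away at the unpopular end, which is the divergence Jakob is pointing to.

    # A minimal sketch (synthetic data): compare an ideal Zipf (power-law)
    # rank/page-views curve against a lognormal one on log-log axes.
    import numpy as np
    import matplotlib.pyplot as plt

    ranks = np.arange(1, 10_001)

    # Zipf: views proportional to 1 / rank^s -> a straight line on log-log axes.
    zipf_views = 1e6 / ranks ** 1.0

    # Lognormal: sample page-view counts and sort them to get a rank curve.
    rng = np.random.default_rng(42)
    lognormal_views = np.sort(rng.lognormal(mean=5, sigma=2, size=ranks.size))[::-1]

    plt.loglog(ranks, zipf_views, label="Zipf (power law)")
    plt.loglog(ranks, lognormal_views, label="lognormal")
    plt.xlabel("page rank (by popularity)")
    plt.ylabel("page views")
    plt.legend()
    plt.show()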

But what factors might contribute to the occurrence of the one versus the other? Jakob argues that the reason we are seeing something similar to the lognormal distribution is due to the scarcity of specialised, low-view content, which would populate the low end of the Zipf distribution. That is, in the presence of additional content, Web site visitors would 'naturally' view these pages and boost the number of views to match the values predicted by the Zipf distribution.

I'm not convinced. For additional reading you might like to peruse Chris Anderson's (of Long Tail fame) look at movie distribution figures for the US here. Chris attributes the presence of the lognormal distribution to the finite number of movie screens available in the US. Chris provides a much stronger argument for the Zipf distribution as the 'natural' curve of movie revenues in the absence of constraints.

For the Web page view analysis at least one question remains: is the constraint the scarcity of content, or the scarcity of visitors?

Monday, August 21, 2006

Oz-IA conference program & registration

This is a little bit late, but the Oz-IA conference registration is open for those wanting to attend. There's also a list of speakers & presentation overviews. There will be some very interesting ideas on display, so be sure to check it out.

More about user research tasks & techniques

Following on from my last post about the role of user research in user experience projects, I thought I'd quickly jot down some of the tasks that may fall into the category of 'Defining the problem':
  • ethnographic study
  • usage metrics from current system
  • task completion/abandonment rates
  • customer reliance on alternative channels
  • feedback / requests for assistance
  • customer surveys
  • business process mapping
What else would you put into this category?

Monday, July 31, 2006

The role(s) of user research

I've been reading a fair few articles and posts recently about various aspects of, and issues to do with, user research, and it has led me to put together the following proposition:

User research in information architecture, interaction design &/or user experience design is used to:
  • define the problem
  • inform the solution design
  • evaluate different solution options
  • validate and/or fine-tune the designed solution
  • test the implementation
When I have more time I'll put together some specific user research tasks that might typically be used in each of these ways, but if you have any ideas or thoughts feel free to send them through or add them as a comment.

Thursday, July 27, 2006

Brand Experience in User Experience - now at uxmatters.com

I'm very pleased to report that my first article for UXmatters - www.uxmatters.com - has been published!! The article - Brand Experience in User Experience - looks at the role of brand experience in the definition of project objectives for user experience projects, and how those objectives can flow through into the resultant solution design.

The article references previous work by Jared Spool (of UIE) and Dirk Knemeyer (of Involution Studios) - see the article for specific works.

But since writing the article I've also found this article by Dan Saffer, written in June 2002 for Boxes and Arrows. The article - Building Brand into Structure - takes a similar view of brand experience (in this case, brand values) as they apply to the design of sites - information architecture & interaction design. Well worth a read, particularly for the examples provided at a very nuts-and-bolts level. It's a pity I didn't find this article earlier as it's rich in material.

For those heading to the Sydney IA-Peers f2f tonight, I'll see you there.

Friday, July 21, 2006

What's wrong with the following...

Using the old form design, users took an average of 4mins 31secs to complete Task 6. In the re-designed form task completion time was reduced, with users taking as little as 48secs on the task.

Can you spot the problem? Answers welcome...

---------------------------------------------------------------------------------------------------------------------------

OK, I understand. Better things to do on a weekend than think about something like this...

The above is an example of statistical sleight of hand. In reading through the above you've probably come away with the impression that:
  • Two form designs have been tested;
  • The task completion time on the old form is longer than for the new form;
  • Based on the numbers given - 4mins 31secs & 48secs - that improvement from old to new is substantial.
Well, the second and third points are completely unsupported. Why?

The first part of the statement provides us with the average task completion time for the old design - 4mins 31secs. This is followed by an assertion: "In the re-designed form, task completion time was reduced". The only supporting evidence provided for this assertion is the low-end extreme measure of task completion time for the new form design.

But how does this compare to the low-end extreme measure of task completion for the old design? What about the mean task completion time for the new design? What about the variation in each sample?

We're simply not comparing the same thing in each case, but the 'discussion of results' clearly wants us to think that we are. This is a common type of analytical two-step seen in many forms of research, and one for which you should look out.
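
To see how the sleight of hand works, here's a toy example with completely fabricated numbers: the new form's single best time looks spectacular, yet the mean completion times of the two designs are essentially identical.

    # Hypothetical task-completion times (seconds) for the two form designs --
    # illustrative numbers only, not real test data.
    old_form = [271, 240, 300, 255, 290, 268]   # mean ~271 secs (about 4 min 31 secs)
    new_form = [48, 310, 295, 330, 280, 365]    # one fast outlier; mean ~271 secs

    def mean(xs):
        return sum(xs) / len(xs)

    print(f"old form: fastest {min(old_form)} secs, mean {mean(old_form):.0f} secs")
    print(f"new form: fastest {min(new_form)} secs, mean {mean(new_form):.0f} secs")

    # Quoting the old form's *mean* against the new form's *minimum* makes the
    # redesign look dramatically better, even though the means here are the same.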

User research smoke & mirrors - a 5-part series by Christopher Fahey

Over the past week I've really enjoyed reading (and re-reading) Christopher Fahey's 5-part series of blog posts on user research. This thought-provoking and provocative series sets out Christopher's issues with much of the research carried out in the interaction design/user experience space, providing examples, caveats, pros & cons.

The series is, I believe, a much-needed dose of scepticism and critical analysis of our discipline's love-hate relationship with research, and the results derived.

An absolute must-read for anyone involved in the design of Web sites and applications based around user research.

Make sure you take the time to read through the various comments & Christopher's responses. They contain some interesting perspectives on the issue from other practitioners (including myself).

Wednesday, July 19, 2006

Luke W on granular bucket testing

Carrying on the user research theme...

Just reading through an article on bucket testing over at Luke Wroblewski's blog - Functioning Form (thanks to Pabini @ UXmatters for the link). Bucket testing is the process wherein you test two versions of the same thing in parallel (page, form, design element etc) by channeling part of your site traffic through one version or the other and analysing the results (note).

Luke raises the issue that, with advances in the ease with which bucket testing can be undertaken, organisations are performing tests on increasingly isolated design elements. For example, the use of text colours in particular areas of a page. However, looking at the results of a very granular test, and then adjusting the design accordingly, does not result in an optimised design.

Any student of mathematics, finance, economics etc will tell you that the 'optimal' solution to a set of equations or conditions is rarely the combination of the optimal solution of each equation. This carries directly into the design of a page or form or screen of a Web site/application. It is invariably pointless carrying out a test only on a specific element and then adjusting the design based on those results without also verifying that the change produces a more optimal solution to the whole.

Luke puts it like this:
"A cohesive integration and layout of all the elements within an interface design is what enables effective communication with end users." [emphasis mine]

In other words (not that he needs my help), the optimal solution to a design problem is not the stitching together of individually-optimised components.

Note: I'm in the process of writing a post/article on how you can carry out this style of analysis with some degree of statistical rigour.
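
In the meantime, here's a bare-bones sketch of one common way to add that rigour: a two-proportion z-test on the conversion counts from each bucket. The visitor and conversion numbers are hypothetical, and note that this only tells you about the isolated element being tested - which is precisely the trap Luke is warning about - so you'd still want to check the effect on the page or process as a whole.

    # A two-proportion z-test comparing conversion rates from two buckets.
    # Hypothetical numbers; normal approximation, two-sided p-value.
    from math import erf, sqrt

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return the z statistic and two-sided p-value for H0: p_a == p_b."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # e.g. bucket A: 120 conversions from 2,400 visitors; bucket B: 150 from 2,380.
    z, p = two_proportion_z_test(120, 2400, 150, 2380)
    print(f"z = {z:.2f}, p = {p:.4f}")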

Thursday, June 29, 2006

Oz-IA Retreat coming together

The organisation of the Oz-IA retreat being held in the first weekend in October is coming along nicely. The venue (Mercure Hotel just adjacent to Central Railway Station in Sydney, Australia) has been lined up; the speaker list is taking shape; and the level of interest is building.

The retreat is planned to be a semi-formal series of practical sessions on information architecture and related topics, aimed at practising IAs and those within the broader industry interested in expanding their knowledge of the theory and practice. There'll be case study presentations; detailed how-to sessions; and general 'where are we going' discussions. And there'll be opportunities a-plenty to meet and mingle with peers and uber-IAs from Australia and around the world.

More information will be forthcoming in the next few weeks & months, but pencil in those dates (conveniently following on from this year's Web Directions conference). It should be a cracker!

Wednesday, June 28, 2006

Senior managers shouldn't care about the Web

Just reading through Gerry McGovern's piece titled "Senior managers: you can't keep ignoring the Web" and, like many of Gerry's articles, I agree with the practical upshot of the argument, but not the basic premise. That is, whilst I would agree that, in practice, a senior management team should include at least one executive whose responsibility it is to oversee the direction and operation of the company's Web presence(s), I disagree that they should be doing so because of some inherent special quality of the Web.

A senior management post in an organisation of any size should be driven by the desire to realise the strategic objectives of the firm. Typically, achieving these objectives will require activities that are well suited to the Web. I say 'typically' because this is not always the case. And so a senior manager who spends energy on a Web presence where that presence isn't directly contributing to the achievement of those strategic objectives is wasting their time, resources, and potentially damaging the performance of the company instead of helping it.

Senior managers - actually, anyone for that matter - can't afford to be enamoured of a technology to the point where they blindly implement initiatives without regard for actual benefit. This is relevant for the Web just as much as it is for an IT project, or a TV campaign, or a product release. They must retain their focus on the strategic objectives of the company and be open-minded enough to be able to select, implement, and operate the best (effective, efficient) initiatives towards those goals. In practical terms this will often include some form of Web presence.

As a side note, I think the history lesson in the conceptual framework of corporate Web sites is now fairly out of date. New companies no longer implement organisation-centric Web sites with anywhere near the prevalence that we saw 5-10 years ago. Instead, marketing teams and Web agencies are embracing customer-centric philosophies and representing themselves accordingly; and providing services that are similarly centred on the needs of the target customers. Sadly, the organisation-centric Web presence lives on, and probably will do for some time, but there is a definite shift towards customer-centricity occurring throughout the business world.

Monday, June 26, 2006

Jakob's latest Alertbox - How many users to test?

Jakob Nielsen's latest Alertbox, titled "Quantitative Studies: How Many Users to Test?" looks at the improvement in confidence intervals and margins of error to be had through increasing the number of users tested when measuring usability metrics. [Note: this is in contrast to his recommendation to test 5 users when looking qualitatively at usability.]

The article draws on large numbers of usability tests carried out by the NNg and provides some interesting points of note:
i) Usability metrics tend to follow a Normal (or Gaussian) distribution - which makes the statistical analysis that much more convenient;
ii) User time-on-task performance tends to show a standard deviation of 52% of the mean;
iii) 20 users offers a reasonable test sample size for most usability metrics.

A couple of counter-points worth keeping in mind:
i) Although the finding that usability metrics tend to follow a Normal distribution is useful, most statistical analysis techniques include methods whereby the Normal distribution is not a requirement. This is particularly the case when performing quantitative analysis on non-parametric data (e.g. ranks, categorisations or counts);
ii) Always calculate the standard deviation and margin of error based on the data that you've collected. Whilst NNg's insight provides a useful starting point for deciding the number of test subjects, you need to go through the process of calculating sd and e for your data set (a rough sketch of that calculation follows after this list);
iii) NNg use a 90% confidence interval as their baseline for determining the recommended number of test subjects: sometimes this level of confidence is insufficient, and so a greater number of test subjects would be required. (I tend to use 95% CI myself, particularly if the implementation cost is high.)
iv) Confidence intervals form part of the general set of summary statistics about a data set (along with mean, variance etc). They describe a characteristic of a particular sample, which allows some inference as to the nature of the general population - they don't provide a comparison between two populations. For example, is a time-on-task CI of 3 mins +/- 30secs better or worse than a time-on-task CI of 3:10mins +/- 15 secs?
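
For the curious, here's a rough sketch of the margin-of-error arithmetic that sits behind a recommendation like '20 users'. It assumes, per the article, a standard deviation of around 52% of the mean for time-on-task data - substitute the sd you actually measure.

    # Half-width of the confidence interval for mean time-on-task, expressed as a
    # fraction of the mean, assuming sd is about 52% of the mean (per the article).
    from scipy import stats

    def relative_margin_of_error(n, rel_sd=0.52, confidence=0.90):
        t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
        return t_crit * rel_sd / n ** 0.5

    for n in (5, 10, 20, 30, 50):
        moe90 = relative_margin_of_error(n, confidence=0.90)
        moe95 = relative_margin_of_error(n, confidence=0.95)
        print(f"n={n:2d}: +/-{moe90:.0%} of the mean at 90%, +/-{moe95:.0%} at 95%")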

Finally, this article feels like a response to the JUS article cited here previously. In particular, a response to Lewis and Sauro's references to use of 5 test subjects for usability studies, which may itself have been influenced by a previous Alertbox article.

Monday, June 19, 2006

Statistics without tears

That's the title of an interesting introductory book on Statistics that I'm reading through at the moment. Written by Derek Rowntree and published by Penguin (ISBN: 0-14-013632-0), the book provides an introduction to the theory and practice of statistical analysis without (much) recourse to formulae, calculations, charts, graphs or tables of figures.

Aimed at people who have to deal with statistics and statistical analysis, but who could go through their lives without seeing a Gaussian distribution and not view it as a loss, the 184-page text takes the reader on a fairly painless journey through descriptive and inferential statistics: their meaning, use, and calculation.

If you've ever found yourself struggling to make headway into the topic of statistics then this book may offer you an olive branch.

Alternate titles for the book were: Statistics without calculations; Statistics for the innumerate; Statistics in words & pictures; The underlying ideas of statistics; or How to think statistically.

PS: I'm reading this in preparation for a presentation on statistical analysis of usability and user research data to a largely non-mathematical audience later this year.

Tuesday, June 13, 2006

Australian football comes of age?

Un-f^%#ing-believable!!

I spent my childhood surrounded by kids of European and South American backgrounds, being taunted by them about Australia's (and, by implication, Australians') complete absence from the soccer World Cup. Whenever it rolled around the Italians, Croatians, Brazilians, Uruguayans, Chileans etc would cheer on their national sides, and laugh at our failure (again) to even qualify.

Earlier this morning Australia's Socceroos played their first World Cup game since '74. Behind for a good portion of the match, they put in a tremendous effort in the last 20 mins, finally scoring in the 84th minute of the game. A second goal in the 89th minute put us in front; and a sealer in the 92nd minute made it comfortable.

It may be an overstatement to say we've come of age, but we've certainly proven we belong.

Monday, June 12, 2006

Very OT: Socceroos World Cup appearance

A little bit excited in the lead-up to the Socceroos' first appearance in a football World Cup since 1974. The Aussies take the field in about 90 mins to meet Japan.

In their last appearance, also in Germany, the Socceroos failed to win a game - or score a goal - so I'm hopeful we'll see a much improved performance this time around.

Anyway, we'll know in about three hours whether this World Cup will be a different story.

Friday, May 26, 2006

When 100% isn't really 100% - updated!

The latest Journal of Usability Studies (Issue 3, Vol 1) includes an article by James Lewis and Jeff Sauro of IBM and Oracle, respectively, entitled "When 100% isn't really 100%: Improving the Accuracy of Small-Sample Estimates of Completion Rates". The article - which is very clearly written, and provides nice 'take-aways' for the non-mathematical - provides a very neat look at alternate ways of estimating task completion rates from small-sample usability tests.

The basic idea of the article is that usability tests:
  1. Typically involve small participant numbers
  2. Report task completion rates as a primary success measure
  3. Typically calculate task completion rates as the number of successes / number of attempts (x/n)
When faced with extremes - e.g. 0% or 100% - we have the difficult choice of producing an unlikely estimate: complete success or complete failure. Since, from experience, we know this is generally not the case, what alternative methods have we for estimating the likely rate of task completion?

The authors compare a number of different estimation methods - Laplace, Wilson, Jeffrey, MLE (x/n) and one of their own construction - 'Split-difference' - and recommend a particular alternative to the x/n method for various sample sizes and MLE values.

This article is well worth a read and should provide you with some extra depth to your analytical toolkit.
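
As a taster, here's a small sketch of two of the adjustments the article discusses - the Laplace point estimate and the Wilson score interval - applied to a small-sample completion rate. This is my own illustration rather than the authors' code, and different methods produce different intervals, so don't expect these figures to line up exactly with the ones quoted in the article or in the worked example below.

    # Small-sample completion-rate estimates: MLE (x/n), Laplace, and a Wilson
    # score interval. Illustrative only.
    from math import sqrt

    def laplace_estimate(successes, n):
        """Laplace 'add one success and one failure' point estimate."""
        return (successes + 1) / (n + 2)

    def wilson_interval(successes, n, z=1.96):
        """Wilson score interval (95% for z=1.96) for a binomial proportion."""
        p = successes / n
        denom = 1 + z ** 2 / n
        centre = (p + z ** 2 / (2 * n)) / denom
        half = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
        return centre - half, centre + half

    # e.g. 4 successful completions out of 6 attempts
    x, n = 4, 6
    lo, hi = wilson_interval(x, n)
    print(f"MLE (x/n):     {x / n:.1%}")
    print(f"Laplace:       {laplace_estimate(x, n):.1%}")
    print(f"Wilson 95% CI: {lo:.1%} - {hi:.1%}")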

----------------------------------------------------------------
As a follow on from this, let's assume you've run a usability test and have the following:
Task 1: 4/6 successful completions = 66.67% success
Task 2: 4/5 successful completions = 80% success
Task 3: 6/8 successful completions = 75% success
[Note: typically, yes, each task would have the same number of users, but I'm making this up, so I can say what I want.]

The journal article tells us that in reality we can say the following:

Task 1: the real completion rate at launch should be somewhere between 21% and 99.3%, but we expect it to be around 60%;
Task 2: the real completion rate at launch should be somewhere between 25.7% and 100%, but we expect it to be around 67%;
Task 3: the real completion rate at launch should be somewhere between 34.3% and 99.5%, but we expect it to be around 67%; and
we can be only 95% certain that even those ranges will be accurate.

Kind of depressing really, isn't it.

Note: if you use around 30 test subjects instead, and maintain the same success ratios for each task, then you could expect the following:

Task 1: 47.7% - 81.9% with an expected success ratio of 64.8% (30 users)
Task 2: 61.44% - 91.75% with an expected success ratio of 76.6% (30 users)
Task 3: 56.82% - 87.82% with an expected success ratio of 72.32% (32 users)

So you can get a much narrower range for your estimate, but 30+ users is a significant undertaking for a usability test.

Tuesday, May 23, 2006

Light-hearted aside: Wedding gifts can be way cool

You may all remember that I got married late last year. You may also recall that I'm a die-hard fan of the Sydney Swans. So you'll understand how thrilled I am at finally receiving our wedding gift from my wife's cousins, uncle & aunt - a Swans' player jumper signed by the entire 2005 Premiership-winning team!!!

Forget the toasters, folks, this is one awesome gift.

Wednesday, May 17, 2006

Multi-variate testing ready to burst forth....oh reeeaallly!?

Reading this just now, I'm bracing myself for a spate of useless statistical analysis from the field of Web analytics. My experience with the application of multi-variate testing goes back a decade and includes the fields of statistics, archaeology, marketing and, more recently, information architecture. Time and time again I see multi-variate testing wasted through a complete lack of multi-variate analysis.

Folks, it isn't enough to calculate the mean of several variables and pat yourself on the back for your multi-dimensional approach to research. Unless you're going to perform analysis that examines the relationships between variables, you are wasting your time. Even something as simple as cross-tabulation will provide you with insights not available through standard summary statistics on a single variable - despite calculating them for a series of variables.

For example:
Out of 100 users...
65 found the interface easy to understand, 25 found it confusing, 10 found it frustrating
50 were able to locate the information they required, 35 were unable to find the information, 15 found the information but didn't recognise it.

So, does that mean that a majority of users found the interface easy to understand and were able to locate the information they required?...

What if I told you that, of the 50 users able to locate their information, 25 of them were the ones that found the interface confusing? How about if, of the 65 that found the interface easy to understand, 35 of them were unable to locate their information?
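
For the record, here's one hypothetical cross-tabulation that is consistent with those summary figures, along with a simple chi-square check for independence. The individual cell counts are invented - the point is the shape of the analysis, not the numbers.

    # One made-up cross-tab consistent with the summary statistics above.
    import numpy as np

    #                       found   not found   found, not recognised
    crosstab = np.array([[   20,        35,            10],    # easy to understand
                         [   25,         0,             0],    # confusing
                         [    5,         0,             5]])   # frustrating

    row_tot = crosstab.sum(axis=1, keepdims=True)   # 65, 25, 10
    col_tot = crosstab.sum(axis=0, keepdims=True)   # 50, 35, 15
    total = crosstab.sum()

    # Half of the users who found their information are the ones who found the
    # interface confusing -- the insight the single-variable summaries hide.
    print(f"found info but confused: {crosstab[1, 0] / col_tot[0, 0]:.0%}")

    # Chi-square statistic for independence of the two variables. (With cell
    # counts this small, treat the statistic as indicative only.)
    expected = row_tot @ col_tot / total
    chi2 = ((crosstab - expected) ** 2 / expected).sum()
    print(f"chi-square = {chi2:.1f} on {(3 - 1) * (3 - 1)} degrees of freedom")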

Anyway, it bugs the bejeesus out of me when I see this sort of thing.

And if you're thinking this guy may not be representative of the standard within the Web analytics fraternity, pick up a book - any book - on the subject, and I challenge you to locate analysis that goes beyond this simplistic, superficial level.

If you find one let me know. I'll even buy a copy.

Tuesday, May 16, 2006

MMORPG - the role of the tutorial and self-help in complex systems

I recently started playing a new computer game - Eve Online - a massively multi-player online role-playing game, or MMORPG for short. The game is a space-based mixture of adventure, commerce, pirate-hunts, and character development, set in a galaxy far, far away. The game is rich, complex, and involves interacting with real players around the world to achieve common goals.

The game is FANTASTIC! I love this style of game. But that's not why I'm writing about it...

The complexity of the environment and the rules of engagement make it almost impossible to simply document in a user manual. The item database itself - the things you can buy, find, build, install etc - runs into the hundreds of pages, and a lot of the contents won't be relevant until months after you start playing.

The problem for the game designers, and new players, is that there's so much to know and learn and yet you can't force people to spend a couple of weeks poring over a user manual before they can start playing; you have to provide an 'in' to the early levels of the game.

The game designers (and I don't think this is unique to this particular game) have tackled the problem of how to introduce new players to such complexity in two interesting ways:
i) A fairly extensive tutorial that leads new players step-by-step into the environment. From how to configure a ship, to trading & commerce, to combat and moving through space.
ii) A rich online chat built into the game that provides general how-to support for new players, and the opportunity to communicate with fellow players in real time.

With the growing prominence of rich internet applications - now in three flavours - and the increasing richness (ha) that derives from these interaction environments, I'm starting to see the need for a similar approach (i.e. an introductory tutorial) to Web applications. Whilst user research will uncover the primary tasks and objectives of the audience, and usability testing will uncover barriers to use, sometimes it will be necessary to provide a step-by-step run-through of the complex processes before users will 'get it'.

Unlike computer games, however, Web applications lack the initial commitment from the user that would make such a personal investment likely prior to actual use. So is there a limit to the complexity we can introduce into the interaction design of our Web applications before the up-front investment in time will be prohibitive to use?

Wednesday, May 03, 2006

More about product design...

Just to balance my karma a little after my last gripe about product designers, I have to make mention of the FujiXerox printer that recently came into my 'possession'. A3, double-sided, colour laser printer. Networked (via Ethernet), three trays... all the things you want in a printer.

Installation and configuration took under 10 minutes from opening the box to first printing. That includes time spent exclaiming over the size!!

For those interested in such things, it's a DocuPrint C2428.

Lovely; easy; efficient.

Monday, April 17, 2006

Black is back

I've become aware of a rather strange anti-usability movement amongst consumer electronics manufacturers. It's an underground movement - you won't see these 'features' in their advertising - but apparently gaining momentum.

My wife and I recently purchased a new DVD player - a fairly middle-of-the-road replacement for our old one which had developed several glitches. The player is a Pioneer model and works extremely well except for one annoying feature: the open/close button on the player itself is a black button set into a black faceplate. Sitting here, about 12 feet from the machine, it's impossible to see it. It isn't labelled; it's the only black button on the player.

Of itself this wouldn't constitute a trend, but we have just finished setting up a new Canon photo-copier/scanner/printer. When we started setting up the printer it was late afternoon and the light was a little low. (The room we were in was suffering from several blown lightbulbs.) We reached a part of the instructions that spoke about the 'Open' button so we looked; and looked. I eventually changed the lightbulbs and, in the enhanced light, saw the open button: on the front of the machine, which is mostly black with silver trim, is an unlabelled black button, set flush on the black faceplate. This black button opens the output tray. It's the only unlabelled button on the printer.

When I was younger my parents, teachers &etc used to espouse the view that working for something would make us appreciate it all the more. It appears that the product designers at Canon and Pioneer hold the same views, which is kind of sad.

Friday, February 24, 2006

User requirements need to get precedence

Over the past five weeks we've been working on an information architecture and user experience project to design the interface for an internal business application for a client. They have their own development team and business analysts, but no IA, so they asked us to assist with the interface.

Our brief was to interview the end users of the proposed system and gain an understanding of their needs from the application; how they work; how they'd like to interact with the data; their needs for reports and the like; and to encapsulate those findings into a series of wireframes for the system. At the time we were engaged for the project the Business Requirements had just been finalised, so we appeared to be starting out on the right foot.

The day after we commenced our user research we received the first draft of the functional requirements for the system.

Now, generally I would expect a fair degree of user research to be carried out before any work commenced on functional requirements - particularly elements like workflows and the like - but it quickly became evident that our actual role in the project was more symbolic than it was central. Suggestions for changes to the workflows based on user feedback were dismissed; arguments about user work practices being in conflict with the functional requirements were bandied back and forth without ever actually agreeing that the user's perspective was inherently more valid than that of the internal business analyst who would never use the system.

My belief is that the functional requirements for a system should not be written until after initial user research has been completed so that user requirements can be integrated into the specification rather than overlaid onto an existing view of the system. If all the user experience professional is doing is modifying button labels and the placement of form elements from a pre-determined set of elements, then the impact of their knowledge, research and expertise is being severely undervalued and undermined.

In most cases the functional requirements should be driven primarily by the user requirements and balanced against the needs of the business to ensure project objectives are being met - not the other way around. The likely outcome of driving the functional requirements from the business requirements - with a dash of UED thrown in - is a system that dictates to users the functions they can and will carry out, and will be poorly-received by those users as a result.

I don't hold out much hope that the end result for this project will be a thrilling experience for the users, one in which they welcome the improvements it brings to their work practices. At this point it feels more like a backlash against head office prescriptiveness waiting to happen, and that's a shame given the amount of effort going in to eking out each small interface improvement we can. We're fighting small battles around the fringe rather than the major battles in the centre and that doesn't provide for good user experiences in the main.

Monday, January 30, 2006

How hard is it to change your name?

My new wife has spent the last two weeks researching what she needs to do to have all of her accounts, records &etc with insurance companies, banks, motor registry office and work updated to show her married name. She put together a very detailed spreadsheet for each company showing what information she needed to present, proofs etc and where she could go to make the change be it online or at one of the branch offices.

Last Friday she went about visiting those Web sites and branch offices attempting to get her name changed and it's been interesting to see just how easy or difficult it has been in each case. In some cases, the online forms have been so poorly designed and implemented that she gave up and called them. Vodafone, for example, presented an unsecured form, poorly labelled, which required the user's account security code for submission.

Mostly, it was pretty straight-forward. The motor registry (our RTA) visit was painful only for the number of people present, but in less than an hour she had a sparkling new licence. NRMA was also painful, but in their case because the counter staff chose to use their lack of systems knowledge as an excuse to chat to the support staff about their plans for the weekend and the weather. In a much longer period than should have been required, those accounts were updated as well. The staff also fully expected my wife to be lacking some form of documentation required, and so approached the whole exercise with the slow, methodical questioning aimed at discovering exactly what it was she'd forgotten.

St George, ING and the Teacher's Credit Union were all painless, quick and trouble-free, and only one of those was carried out online. Which goes to show that a good service process doesn't always have to be an online one.

'Doc'

Saturday, January 28, 2006

The Holidays are well and truly over!!

After a nice and mostly relaxing break over the Christmas holiday period, where I spent a good deal of time watching Australia take on South Africa in the cricket, I started back at work on the 9th. In the lead-up to the break there'd been signs of a build up in demand for Web development services - across the board: design, IA, strategy, development - and I was expecting a busy start to the year.

Instead, it's been more than simply busy. I haven't seen the year start off in this way for nearly five years. And I'm not alone. All across the local industry we're seeing the same thing: companies with more work than they know what to do with; and difficulty finding experienced staff.

Clients are expecting more than they have previously, but not without expecting to pay for the service. Happily, among the most frequently-asked-for 'extras' are information architecture and user experience services, a sign that the Australian market has well-and-truly caught up with the global trend we've been witnessing for the last few years.

The local IA professionals I've spoken with have uniformly seen the same increased demand for their services, which further supports the belief that this is neither isolated nor short-lived.

Closer to home I've been working on formalising an integrated UCD approach that places more emphasis on the characteristics of the clients' business as a means of counter-balancing the requirements derived from direct user research. Perhaps counter-balance is the wrong word. It's more a sense that those user requirements can be addressed in a variety of ways and the most appropriate way for a particular business is that which is most closely-aligned with the characteristics of the firm.

Anyway, you may get the opportunity to read an article on the subject in an up-coming issue of UX Matters (www.uxmatters.com) - if I can produce a draft worthy of being published!

That will have to be all for now, but if I can I'll post some excerpts for comment.

Bye for now.

'Doc'