Thursday, September 29, 2005

Fractal coasts and the Business/IA Boundary

Following on from my recent post about the discussion I started with Eric Scheid regarding the boundary between IA and BA roles, and the ways in which each can inform the work of the other, I've hit upon the notion that the closer you look at it, the more ill-defined the boundary becomes.

Or rather, the closer in you get, the more 'jagged' the separator becomes. This idea resonates with Mandelbrot's work on fractional dimensions and fractal coastlines: you can only provide a clear separation if you define the scale at which you're looking. So you end up using phrases like "in general, this would be done by...".

You can translate this into the rather more prosaic observation that there are many tasks along the edges of the boundary that can, and should, be carried out in collaboration between the two roles rather than being dished out to one or the other. A good example is the definition of technical requirements and business rules. [I'll come back to the 'why' of that at a later date.]

That said, I think there are some definite areas of the project that reside on the business side of the boundary rather than with the user experience/IA role. To my mind these include the interpretation of the brand into a set of experience requirements; the definition of the site's "purpose" or strategy; the definition of commercial rules for the site (aka business rules); the assurance of alignment between business direction and site strategy.

What you should try to avoid wherever possible is having the user experience/IA role making decisions about what the business needs, or the business strategist interpreting 'what the user wants'. When that starts to happen it's probably time for a round-table discussion.

At all costs, I think it's important that these two roles are handled by different individuals. They don't have to be specialists in the area, but I think it's spurious to argue that you can 'swap hats' and make meaningful decisions.

[Exactly what I mean by "interpretation of the brand into a set of experience requirements" is a topic all on its own. As is how you might go about assuring that the site strategy is in alignment with the business strategy.]

Saturday, September 24, 2005

Swans break 72-year drought

On a lighter note, the Sydney Swans broke their 72-year drought by winning the AFL Grand Final earlier today in an absolute nail-biter. I won't bore you all with the details, but it was great to see them finally win after so long on the losing side.

Go the Swans!!

Wednesday, September 21, 2005

Looking for a new IA...

I might as well mention that Red Square is looking for an Information Architect to join the team. The role is a senior one, and involves taking 'charge' of the IA/UCD discipline within the company. It's a combined role - hands-on work mixed with business and strategic thinking.

In the role the IA will be expected to contribute actively to the ongoing development of IA knowledge within Red Square, and provide critical input into the discipline with respect to the ways in which we offer IA and user-centred design services to our clients.

The IA is expected to review current literature and discussion on topics related to the area of expertise, and be able to apply that knowledge in a practical way to the projects undertaken at Red Square. Similarly, the role has a commercial dimension to it that involves both business development and client engagement.

Ultimately, you'll work with me to build a UED consulting discipline within Red Square as a self-contained entity within the overall project workflow - either as part of that workflow, or as a standalone project.

There'll be a formal job posting on the Red Square web site in the next few days, but if you think you might fit the bill, drop me a line.

IA-Peers get-together

Tonight's IA-Peers Sydney get together was an interesting one, with a bunch of new faces (for me at least) showing up to discuss general tools, methods & techniques.

I was happy to learn about a few new terms and hear about the benefits of eye-tracking from the guys at PTG. I must confess that I find the technique a little too refined for the bulk of the work that I do, although I appreciate the benefits of it.

It was also interesting to see the results of the survey Eric undertook as part of the preparation for the night, asking practitioners to list all of the methods, techniques or tools they use in their day-to-day IA work. I'll leave it to him to publish or not as he sees fit, but the overlap was curious - some 'standard' techniques turned out to be far less ubiquitous than you'd expect.

Anyway, these events continue to be of interest for a wide range of reasons, and it's always good to meet professionals who have to cope with the same issues as me day-in and day-out.

Tuesday, September 20, 2005

I'll buy that for $1

I've been doing a bit of digging around this afternoon, looking at what other IA and User experience consultancies have to say about themselves and the work that they do. And I have to say I've been struck by the number of times I've read this: "$1 spent up-front can save $100 fixing the problem later" - or some variation. I must have read this at least seven or eight times this afternoon and it started to irk me after a while.

For those who haven't come across this chestnut from the world of human-computer interaction: in his 1988 book "Principles of Software Engineering Management", Tom Gilb wrote: "Once a system is in development, correcting a problem costs 10 times as much as fixing the same problem in design. If the system has been released, it costs 100 times as much relative to fixing it in design."

Now you'll admit that this is a nice, chunky statistic to be able to throw around when someone's questioning whether more time on requirements and solution design is really worthwhile. And I don't doubt for one minute the underlying truth of the statement - money spent on up-front requirements and design is going to pay for itself many times over before the project's done and dusted.

But the 1:10:100 claim in the context of a Web site development project, and particularly as a means of justifying expenditure on information architecture, strikes me as being the worst kind of specious reasoning. Here's why:

1. Gilb was writing in 1988 and it won't come as a surprise to any of the punters out there that things have changed in the software development world since the book first went to press. Not only are the languages friendlier - if you don't believe me feel free to compare Java or C# to the likes of Pascal, Fortran, C or Cobol - but the development tools, software architectures and deployment environments are (literally) generations ahead of those in use in '88. And that doesn't take into account the fact that the data on which the book was based would, by necessity, have been even older.

2. Compiled languages - and they were all compiled languages in '88 - are generally more difficult to debug, correct, test and deploy than the vast majority of scripted languages used in your typical modern-day Web project.

3. Gilb refers to 'problems' - not usability issues, not deficiencies in the information structure, not process issues - so the step from Gilb's 'problem' to an IA consultant's usefulness is a long one.

4. Gilb refers to "fixing the same problem in design", which for him didn't mean visual design or information design or even interaction design - it meant everything that doesn't involve actual code being written. So, given the host of 'problems' that might be resolved during 'design', I again fail to see that the next logical step is to bring in the IA consultant.

5. There's no mention of the sort of 'fixing' that's needed. Was it a poorly-defined business rule? Or was the background colour out by a couple of shades?

6. Why should I believe that anyone using user-centred design can achieve a saving on the same scale? What has been done to justify that association?

I'm always happy to see people promoting the use of user-centred design techniques for the Web, and the need to prove a return on the investment is a central concern to us all. But there are far better examples to use than this one from software engineering, and the logic from service to benefit to engagement will flow much better if you use research that's closer to home.

Apologies for the soap-box, but we need better discipline in our discipline...

For alternatives, try this nice summation from Aaron Marcus & Associates (NY), written in 2002.

Sunday, September 18, 2005

The boundary between BA & IA

I had the pleasure of spending an evening in discussion with Eric Scheid - IA Peers convener and principal at Ironclad networks - at what I'm starting to think of as the Boomworks annual spring bash.

The topic of discussion was standard methods and procedures in the IA's toolkit and, whilst we covered this in some detail - and I'll leave it to Eric to steer the discussion after the next IA Peers meeting - of more interest to me was our discussion of what the BA/strategist contributes to the solution design process, and how best they can interact with the information architecture and user experience crowds.

I've only just begun to think about how the border between the two disciplines might look, so I'll have to come back at some future point with my thoughts. In the meantime, I'm already certain that, whilst there are certain things obviously falling into each camp, there are some components of the user-centred design process that would be more fruitful as areas of strict collaboration.

More on this later...

Monday, September 12, 2005

The role of 'value' in Web site design

The following three-part series of articles was published on the Australian edition of ZDNet's Builder site in February, 2004. These articles have since been syndicated internationally, to the UK and China (in Chinese).

Some of the concepts - and particularly the way they've been articulated - lack polish. However, the issue of business value of IA continues to attract interest, so I thought my own site should at least include a reference.

'Value-based approach to Web site design'

Part I
Part II
Part III

Feedback, comments on this are most welcome.

Wednesday, September 07, 2005

A statistical sidebar

The project from early 2005 that got me back into the saddle, statistically speaking, involved a large amount of methodological research, data collection, analysis and report writing. It's the single largest client project I've undertaken and resulted in a 20,000-word report.

The nice part of it was the range of analysis techniques I was able to employ, from simple descriptive and summary statistics (means, standard deviations, counts etc) to cross-tabulations and on to multi-variate analysis using k-means clustering, principal components and MANOVA tests.

To be honest, I think the client on the receiving end of the report was a little overwhelmed by that part of the document, which is why I buried it in an appendix.

Anyway, it blew the cobwebs off my statistical calculator, textbooks and probability tables!!

A little statistics with your Web site

I've recently re-discovered my fascination with things mathematical, and particularly things statistical. I have a degree in Applied Mathematics, majoring in applied statistics, so it's not idle tinkering. But since graduating in '94 I've tended towards more humanistic and social topics of study - archaeology, electronic commerce (masters degree in 2001) and business (MBA in 2004).

A project I undertook earlier this year (I'll write about the work in a later post) got me back in touch with my numerical side. That interest seems to have stuck for the time being, and I've just about completed a new project that carries on the theme.

Most people will be familiar with the idea of comparing two things to see whether they've changed. On a Web site we might want to see the effect that a change has had on the number of page views and so we count up the page views before the change (or calculate an average) and count them up after the change (or calculate the average) and compare the two. If one's higher than the other then we agree that there's been a change and congratulate ourselves.

You'll see it all the time. "Last week saw a 7% increase in traffic to the Web site" or "Today's sales are up 3% on yesterday's".

The problem with this type of comparison is that it ignores the fact that almost every natural process has some random fluctuation to it. If we fail to take that fluctuation into account then we can mistakenly assign a causal relationship when in reality the observed difference is completely random. For example, average daily page views might vary by as much as 6%. So a 7% increase for the week isn't very special at all. In fact, it's hardly even noteworthy.
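To make the point concrete, here's a quick simulation (the numbers are purely illustrative, not from any real site): generate pairs of weeks from exactly the same process, with daily page views fluctuating about 6% around a fixed baseline, and count how often the week-on-week 'change' exceeds 7% even though nothing has actually changed.

```python
import random
import statistics

random.seed(42)
BASELINE = 10_000  # illustrative average daily page views

def weekly_mean():
    # seven days of page views, each fluctuating ~6% around the baseline
    return statistics.mean(BASELINE * random.gauss(1, 0.06) for _ in range(7))

# 1,000 pairs of weeks drawn from the *same* unchanged process
changes = []
for _ in range(1000):
    before, after = weekly_mean(), weekly_mean()
    changes.append((after - before) / before * 100)

big_moves = sum(1 for c in changes if abs(c) >= 7)
```

On a typical run a small but non-trivial fraction of the pairs show a 'change' of 7% or more, despite there being no underlying shift at all - exactly the kind of headline figure that ends up being reported as a result.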

So, for the past few days I've been working with one of our Web application developers to put together a simple statistical test for 'significant change' in one of our clients' Web sites. This client spends a fair amount on advertising as a means of driving Web traffic, so I figured it would be useful for them to be able to determine - correctly - whether or not the advertising campaign was having a real impact.

The test we're using is a standard Mann-Whitney rank sum significance test. Since I can expect to have more than 20 sample points in either the "pre-" or the "post" sample, we've set the test up to use the approximation to the normal distribution.
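As a sketch of what that test involves - this is my own minimal illustration, not the code going into the client's system, and for simplicity it ignores the tie correction to the variance - the rank sum statistic and its normal approximation look like this:

```python
import math

def ranks(values):
    # assign 1-based ranks, giving tied values the average of their ranks
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

def mann_whitney_z(pre, post):
    # Mann-Whitney U with the normal approximation (valid for samples of 20+)
    n1, n2 = len(pre), len(post)
    r = ranks(list(pre) + list(post))
    r1 = sum(r[:n1])                      # rank sum of the "pre" sample
    u1 = r1 - n1 * (n1 + 1) / 2           # U statistic for "pre"
    mu = n1 * n2 / 2                      # mean of U under the null
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # std dev (no tie correction)
    z = (u1 - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p
```

If the "post" sample is systematically shifted relative to the "pre" sample, the z-score is large in magnitude and the p-value small; identical samples give z of zero and a p-value of one.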

There's a very nice lecture/book summary on the statistical theory available here:

The analysis program is being built into the administration system (content management etc) for the Web site, and we should be able to start carrying out analysis in the next few weeks (we need to collect some data first).

I'm feeling a little tickled at the idea of bringing statistical rigour to a Web site; it can only help (I think) make the work we do more seriously considered.