Wednesday, September 07, 2005

A little statistics with your Web site

I've recently re-discovered my fascination with things mathematical, and particularly things statistical. I have a degree in Applied Mathematics, majoring in applied statistics, so it's not idle tinkering. But after graduating in '94 I drifted towards more humanistic and social topics of study - archaeology, electronic commerce (a master's degree in 2001) and business (an MBA in 2004).

A project I undertook earlier this year (I'll write about the work in a later post) got me back in touch with my numerical side. That interest seems to have stuck for the time being, and I've just about completed a new project that carries on the theme.

Most people will be familiar with the idea of comparing two things to see whether they've changed. On a Web site we might want to see the effect that a change has had on the number of page views, so we count up the page views before the change (or calculate an average), count them up after the change, and compare the two. If one's higher than the other we conclude that there's been a change and congratulate ourselves.
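In code, that naive comparison amounts to little more than this (a quick Python sketch - the function and variable names are mine, not from any real system):

    def percent_change(before, after):
        """The naive before/after comparison: average the daily page
        views in each period and report the percentage difference."""
        mean_before = sum(before) / len(before)
        mean_after = sum(after) / len(after)
        return 100 * (mean_after - mean_before) / mean_before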

You'll see it all the time. "Last week saw a 7% increase in traffic to the Web site" or "Today's sales are up 3% on yesterday's".

The problem with this type of comparison is that it ignores the fact that almost every natural process has some random fluctuation to it. If we fail to take that fluctuation into account then we can mistakenly assign a causal relationship when in reality the observed difference is completely random. For example, average daily page views might vary by as much as 6%. So a 7% increase for the week isn't very special at all. In fact, it's hardly even noteworthy.
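If you want to convince yourself, here's a little simulation (purely illustrative: I'm assuming daily page views are normally distributed around 1,000 with a 6% day-to-day standard deviation):

    import random

    def apparent_weekly_changes(daily_mean=1000, rel_sd=0.06, trials=10):
        """Draw pairs of weeks from the SAME traffic distribution and
        print the apparent week-over-week change. Nothing changed
        between the weeks; every 'change' here is random noise."""
        for _ in range(trials):
            week1 = sum(random.gauss(daily_mean, rel_sd * daily_mean) for _ in range(7))
            week2 = sum(random.gauss(daily_mean, rel_sd * daily_mean) for _ in range(7))
            print(f"apparent change: {100 * (week2 - week1) / week1:+.1f}%")

    apparent_weekly_changes()

Run it a few times and you'll see apparent swings of a few percent in either direction when, by construction, nothing has changed at all.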

So, for the past few days I've been working with one of our Web application developers to put together a simple statistical test for 'significant change' in one of our clients' Web sites. This client spends a fair amount on advertising as a means of driving Web traffic, so I figured it would be useful for them to be able to determine - correctly - whether or not the advertising campaign was having a real impact.

The test we're using is a standard Mann-Whitney rank sum significance test. Since I can expect to have more than 20 sample points in both the "pre" and the "post" samples, we've set the test up to use the normal approximation.
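The mechanics of the test are simple enough to sketch. This isn't our production code, just a minimal Python illustration of the rank-sum statistic and its normal approximation (for simplicity it ignores the tie correction to the variance):

    import math

    def mann_whitney_z(pre, post):
        """Mann-Whitney rank sum test using the normal approximation.
        Returns (z, two-tailed p-value). Reasonable once each sample
        has 20 or more observations."""
        n1, n2 = len(pre), len(post)
        # Rank the pooled observations, averaging ranks across ties.
        pooled = sorted((value, idx) for idx, value in enumerate(pre + post))
        ranks = [0.0] * (n1 + n2)
        i = 0
        while i < len(pooled):
            j = i
            while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
                j += 1
            avg_rank = (i + j) / 2 + 1          # ranks are 1-based
            for k in range(i, j + 1):
                ranks[pooled[k][1]] = avg_rank
            i = j + 1
        r1 = sum(ranks[:n1])                    # rank sum of the "pre" sample
        u1 = r1 - n1 * (n1 + 1) / 2             # Mann-Whitney U statistic
        mu = n1 * n2 / 2                        # mean of U under the null
        sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # sd of U under the null
        z = (u1 - mu) / sigma
        p = math.erfc(abs(z) / math.sqrt(2))    # two-tailed p-value
        return z, p

You feed it the daily page-view counts from before and after the campaign starts; a small p-value (below 0.05, say) means the shift is bigger than day-to-day noise can plausibly explain.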

There's a very nice lecture/book summary on the statistical theory available here: http://faculty.vassar.edu/lowry/ch11a.html

The analysis program is being built into the administration system (content management, etc.) for the Web site, and we should be able to start carrying out analysis in the next few weeks (we need to collect some data first).

I'm feeling a little tickled at the idea of bringing statistical rigour to a Web site; it can only help (I think) in getting the work we do taken more seriously.
