How do you set KPIs and run tests on websites or features with low traffic volumes?

It is no secret that website traffic is a vital resource for any data-driven optimization effort. Most standard Conversion Rate Optimization (CRO) practices rely on data to produce conclusive, actionable results. This leaves low-traffic websites facing a daunting hurdle that makes experimentation cumbersome and difficult.

However, low-traffic websites don’t hold exclusive rights to the problems of testing with small sample sizes. It’s a big-business problem too.

Despite monthly traffic volumes in the millions, established giants like ASOS or eBay often face the same experimentation issues as websites with only a few thousand visitors each month.

Consider this: working for one of these digital giants, you are tasked with improving the UX of a feature used by 0.2% of visitors, on a website that sees 100 million users each month. Already your sample has been cut down to 200,000 users. Now add that the feature is only relevant to new users who visit the site's rarely used media centre section, and only on desktop devices. With all of these filters applied, your final sample size shrinks to a minuscule 2,000 or so users. Now you’re looking at the same small-sample problems as a low-traffic website.
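
To make the arithmetic concrete, here is a minimal Python sketch of how segmentation filters shrink an audience. The 0.2% feature-usage figure comes from the example above; the individual filter rates are illustrative assumptions chosen to land near the ~2,000-user outcome.

```python
# Illustrative arithmetic only: the feature-usage rate comes from the
# example above; the filter rates below are hypothetical assumptions.
monthly_users = 100_000_000   # overall site traffic
feature_usage_rate = 0.002    # 0.2% of users touch the feature

new_user_rate = 0.20          # share of feature users who are new (assumed)
media_centre_rate = 0.10      # share who reach the media centre (assumed)
desktop_rate = 0.50          # share on desktop devices (assumed)

sample = monthly_users * feature_usage_rate           # 200,000 users
sample *= new_user_rate * media_centre_rate * desktop_rate
print(f"Final monthly sample: ~{sample:,.0f} users")  # ~2,000
```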

This is an extreme example, but it’s not hard to find real cases where the data volume for a specific area of experimentation is far smaller than the website’s overall traffic.

So, what can you do about it?

Setting KPIs to experiment on a low-traffic website feature

Established digital giants like ASOS or eBay have no trouble acquiring relevant data from their millions of daily visitors. This allows their teams to run experiments with fresh targets across a broad spectrum of problems. But even they, like a small business trying to generate test ideas from its data, must be far more selective when working on a website feature that sees little to no traffic.

KPIs set for experimentation in these scenarios should be targeted, looking to optimize a niche or specific user behaviour. In other words, your KPIs need to be oriented towards micro-conversions.

What is a micro-conversion?

Micro-conversions are the incremental steps that signal a user's interest in your brand: an interaction with a certain Call to Action (CTA) or piece of content, progressing from one page to another, or moving further into your conversion funnel. Because micro-conversions happen more frequently than the final macro-conversion, your data pools, even on a low-traffic website, will be far larger and easier to experiment with.

An example of this: a white-label goods provider, big or small, will have more data on the add-to-cart rate for the stainless-steel water bottles they stock than on completed orders.

This specific user action gives them a larger data pool to work with than their completed-orders macro-conversion. Focussing on micro-conversions along the user journey optimizes the overall experience and moves your traffic further into the conversion funnel.
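
As a rough illustration of why the micro-conversion gives you more to work with, compare the monthly event counts each one yields; all figures below are hypothetical.

```python
# Hypothetical rates for the water-bottle example; real values would
# come from your analytics platform.
monthly_product_views = 10_000
add_to_cart_rate = 0.08    # micro-conversion: add to cart (assumed)
order_rate = 0.015         # macro-conversion: completed order (assumed)

add_to_carts = monthly_product_views * add_to_cart_rate
orders = monthly_product_views * order_rate
print(f"Add-to-cart events per month: {add_to_carts:.0f}")  # 800
print(f"Completed orders per month:   {orders:.0f}")        # 150
```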

Once KPIs are in place, how do you face the challenge of running tests with a small data pool?

Sometimes, even with the more targeted tactic of focussing on micro-conversions, you will still be left short-changed, with a smaller-than-ideal data pool to work with. So, what can you do to still run experiments that provide actionable insights to optimize your website?

What is often deemed best practice when running tests on a website is to reach a statistical significance of 95% with your experiment. For a website or feature with a small sample size, running a test to the full 95% significance will likely mean leaving it to run for an impractically long time. Under these circumstances, it is advisable to reduce the statistical significance to a figure more like 80-85%.
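
To see how much a lower significance threshold shortens a test, here is a minimal sketch using the standard sample-size approximation for a two-sided, two-proportion z-test; the baseline rate, uplift, and 80% power are assumptions for illustration.

```python
# Rough sample-size sketch for a two-sided, two-proportion z-test.
# Baseline rate, uplift, and power are illustrative assumptions.
from statistics import NormalDist

def n_per_variant(p1, p2, alpha, power=0.8):
    """Approximate users needed per variant to detect p1 -> p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

base, uplift = 0.05, 0.06          # 5% baseline vs 6% variant (assumed)
for conf in (0.95, 0.85, 0.80):    # significance levels from the text
    n = n_per_variant(base, uplift, alpha=1 - conf)
    print(f"{conf:.0%} significance: ~{n:,.0f} users per variant")
```

With these assumptions, the requirement drops from roughly 8,000 users per variant at 95% significance to around 5,000 at 80-85%, which on a low-traffic feature can mean weeks less runtime.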

For any business, reducing this figure from 95% to 85% increases the risk of implementing a false positive from 5% to 15%, meaning 3 in 20 tests could produce a false positive. However, in our experience, accepting this slight increase in risk as a smaller business enables further iterative testing that supports the optimization work carried out so far.
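
A quick A/A simulation makes the trade-off tangible: when there is no real difference between variants, the share of tests that falsely "win" should sit near the chosen threshold. The traffic and conversion figures below are illustrative.

```python
# A/A simulation: with no true difference, how often does a
# two-proportion z-test declare a winner at each threshold?
import random
from statistics import NormalDist

def aa_false_positive_rate(alpha, n=2000, p=0.05, trials=1000):
    hits = 0
    for _ in range(trials):
        # Two identical variants, n users each, true rate p (assumed)
        a = sum(random.random() < p for _ in range(n))
        b = sum(random.random() < p for _ in range(n))
        pooled = (a + b) / (2 * n)
        se = (2 * pooled * (1 - pooled) / n) ** 0.5
        if se == 0:
            continue
        z = abs(a - b) / (n * se)
        p_val = 2 * (1 - NormalDist().cdf(z))
        hits += p_val < alpha
    return hits / trials

for alpha in (0.05, 0.15):  # the 95% and 85% thresholds from the text
    print(f"alpha={alpha:.2f}: ~{aa_false_positive_rate(alpha):.1%} false positives")
```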

While a larger business will have the funding to offset these risks with supplementary usability testing and surveys (adding qualitative research as a safety blanket to support the results of their experiments), they too had to start without those resources to achieve what they have today.

Gone are the days of low-traffic websites missing out, unable to keep up with the big dogs at the top. By applying the tactics discussed here, anybody, large or small, can commit to a program of experimentation, whether you're just starting out or well into your optimization journey.

About the author

REO is a digital experience agency based in central London. They are an eclectic mix of bright and creative thinkers, embracing the best of research, strategy, design, and experimentation to solve their clients' toughest challenges. They work across a variety of sectors, with companies such as Amazon, M&S, Tesco, and Samsung. Whatever the challenge may be, they apply design thinking to identify and deliver big growth opportunities for their clients. Their mission is to transform their clients' businesses and reputations by evolving the customer experience.
To find out more, check out their website.