WANT TO RUN 50% MORE TESTS IN A GIVEN MONTH?

Myth: "We only run AB tests because they finish faster."

Sequential testing, that is, running AB tests one after the other, is not actually the fastest way to build testing velocity. It's just the easiest way to manage a given idea.

This post is most relevant to those who have road maps of ideas and test consistently throughout the year, as opposed to those who only run the occasional experiment.

The nature of an experiment is to compare your control group against a variation. When you have a road map of potential tests, you take the idea at the top of the road map, plan it, produce wireframes and mock-ups, get it built, tested and put live.

All very straightforward. In this scenario, your 2 weeks of traffic is divided 50/50.

By the time this test concludes, you have the next one ready to go. And that takes 2 weeks of data, again at 50/50.

What you end up with at the end of a month is 50% of your on-site traffic seeing the control experience for the whole month, and each variation getting 2 weeks at 50%.

In numbers - at 10,000 visitors a week, that's 20k visitors seeing control experiences and 10k seeing each variation.

From a different perspective - an ABn test with 3 variations (i.e. 4 distinct experiences) would get 10k visitors into each experience in 4 weeks, the same window in which we'd previously have run just 2 tests. That's 50% more tests in a given month.
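To make the arithmetic concrete, here's a minimal sketch in Python comparing the two approaches, using the post's example figures (10k visitors a week, a 10k sample per experience). The function and variable names are illustrative, not from any particular testing tool:

```python
WEEKLY_TRAFFIC = 10_000  # visitors per week, as in the example above

def visitors_per_experience(n_experiences: int, weeks: int) -> float:
    """Visitors each experience collects with an even traffic split."""
    return WEEKLY_TRAFFIC * weeks / n_experiences

# Sequential: two AB tests back to back, each running 2 weeks at 50/50.
ab_variation = visitors_per_experience(2, weeks=2)  # 10,000 per variation
ab_control = 2 * ab_variation                       # control shows for all 4 weeks: 20,000

# Bundled: one ABn test, control + 3 variations, running 4 weeks.
abn_each = visitors_per_experience(4, weeks=4)      # 10,000 per experience

print(f"Sequential: control {ab_control:,.0f}, each variation {ab_variation:,.0f} - 2 ideas tested")
print(f"ABn:        each experience {abn_each:,.0f} - 3 ideas tested")
```

Same month of traffic, same sample per experience, but three ideas validated instead of two.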

When you're testing at scale, building lots of tests and setting lots of tests live, this starts to become an inefficiency.

The industry is happy with the concept of ABn tests (i.e. more than 1 variation) but so often restricts these to adjustments of the same idea.

However, leveraging ABn tests where you bundle multiple test ideas together is much more traffic-efficient. Let's take the same example of 2 tests, but as an ABn.

At 10k visitors a week, split across 3 levels (control, plus 2 variations), getting your sample size of 10k visitors into each variation takes 3 weeks, not 4. Over 4 weeks, you get 13.33k visitors into each group.
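More generally, test duration scales with the number of experiences sharing the traffic. Here's a minimal sketch of that calculation, assuming an even split and a fixed per-group sample target (the figures are the post's examples, not universal requirements):

```python
import math

def weeks_to_sample(target_per_group: int, weekly_traffic: int, n_groups: int) -> int:
    """Whole weeks needed for every group to reach the target sample,
    assuming traffic is split evenly between the groups."""
    return math.ceil(target_per_group * n_groups / weekly_traffic)

# Two sequential AB tests (2 groups each): 2 weeks per test, 4 weeks for 2 ideas.
sequential_weeks = 2 * weeks_to_sample(10_000, 10_000, n_groups=2)  # -> 4

# One bundled test (control + 2 variations): 2 ideas in 3 weeks.
bundled_weeks = weeks_to_sample(10_000, 10_000, n_groups=3)         # -> 3

# Letting the bundled test run the full 4 weeks anyway:
per_group_over_month = 10_000 * 4 / 3                               # ~13,333 per group
```

The same pattern explains the 4-way example earlier: four experiences at 10k a week reach 10k each in exactly 4 weeks.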

Yes, this involves doing more work up front - a problem for slower testing programmes, common practice for faster ones - but the real change is reconsidering which tests to choose.

Commonly, you'll pick tests that don't clash with each other when building in parallel. But in the world of ABn testing, you do the opposite - go and find ideas which are on the same page, ideally testing the same components albeit for different reasons.

The efficiencies then start to shine through - familiarity with the page means QA is a bit easier. Building is a bit easier. On-page metrics can be shared, and so the work to validate them is reduced.

Perhaps you learn from metrics that aren't strictly needed for test A but that you wanted for test B. QAing the control and making sure metric capture is behaving itself happens once instead of twice.

If you're looking to accelerate an already fast-moving programme - here's your answer. Run more ABn tests, even if the variations have nothing to do with each other!