How can A/B test entry conditions make or break your CRO strategy?

Something we are constantly preaching is that Conversion Rate Optimisation comes in all shapes and sizes, and much of it depends on what our clients want to learn, achieve, and implement.

As a business, we are also constantly learning new things, which is one of the things we love about our industry and the clients we work with.

We are going to explore one of the most overlooked, and most powerful, parts of a test: who actually gets included in the experiment.

Our recent two-phase experiment on mobile checkout behaviour and decision-making is the perfect example of the importance of test entry conditions. If you get this wrong, your test insights can be very misleading and change the whole course of your testing strategy.

A Good Idea That Never Reached the Right Users

The first test we did with the client was to introduce a sticky ‘checkout’ button on the basket page for mobile users to help them reach checkout faster. But the results were disappointing.

The variant actually led to a 3% drop in customers reaching the confirmation page. At first, this seemed like a failed test idea. But the result was so unexpected that we dug into why it might have happened, and found an interesting insight: most users didn’t have enough items in their basket to trigger the sticky CTA.

This meant that whilst these users entered the test, they never actually saw the new feature, yet they were still being counted as part of the variant. Essentially, the variant behaved like the control for a huge percentage of visitors.
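To see why this dilution matters, here is a quick back-of-the-envelope sketch. The exposure rate used below is made up purely for illustration and is not the client's data; the point is simply that when only a fraction of bucketed visitors actually see the change, the measured effect shrinks towards zero and noise can easily tip it negative.

```typescript
// Illustration of dilution, with made-up numbers (not the client's data).
// Visitors bucketed into the variant who never trigger the sticky CTA behave
// like control, so the blended (measured) effect is roughly the true effect
// on exposed users scaled by the share of visitors who were actually exposed.

function measuredEffect(trueEffectOnExposed: number, exposureRate: number): number {
  return trueEffectOnExposed * exposureRate;
}

// e.g. a genuine +8% uplift for exposed users, but only 20% of bucketed
// visitors ever seeing the sticky CTA:
console.log(measuredEffect(0.08, 0.2)); // ≈ 0.016, i.e. only ~+1.6% measured,
// small enough to be swamped by noise or even read as a drop.
```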

Phase 2: Reaching the Right Users

Instead of abandoning the test, we wanted to run it again whilst rectifying the problem. This time, the test needed proper entry conditions, specifically around basket size.

In the next test we ran, we introduced a rule that accounted for the findings of the previous test: only users with 4 or more items in their basket could enter the test. This meant the variant with the sticky CTA was only shown to users who actually needed it, namely those with longer basket pages and more scrolling friction.
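As an illustration only, an entry condition like this might look something like the sketch below. The basket shape, function names, and `activateExperiment` hook are hypothetical, not the client's actual implementation or the Webtrends Optimize API; the point is that the qualification check runs before the visitor is bucketed into either side of the test.

```typescript
// Hypothetical entry-condition check, run before a visitor is bucketed.
// The basket shape and activation hook are illustrative assumptions,
// not a real Webtrends Optimize API.

interface BasketItem {
  sku: string;
  quantity: number;
}

const MIN_ITEMS_TO_QUALIFY = 4;

function basketItemCount(items: BasketItem[]): number {
  // Count total units rather than distinct lines, so "4 or more items"
  // matches what the user actually sees in their basket (an assumption).
  return items.reduce((total, item) => total + item.quantity, 0);
}

function shouldEnterStickyCtaTest(items: BasketItem[]): boolean {
  // Only visitors who would actually see the sticky CTA are bucketed.
  // Everyone else is kept out of both control and variant, so they
  // cannot dilute the result.
  return basketItemCount(items) >= MIN_ITEMS_TO_QUALIFY;
}

function activateExperiment(experimentId: string): void {
  // Placeholder: in a real setup this would hand off to your testing tool.
  console.log(`Visitor qualifies for experiment: ${experimentId}`);
}

// Example usage with a hypothetical basket:
const basketItems: BasketItem[] = [
  { sku: "tee-shirt", quantity: 2 },
  { sku: "wool-socks", quantity: 3 },
];

if (shouldEnterStickyCtaTest(basketItems)) {
  activateExperiment("mobile-sticky-checkout-cta");
}
```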

The results this time were greatly improved and aligned much more closely with our hypothesis, with an 8% uplift in customers reaching the confirmation page.

The difference was that the second time around, we made sure the experiment was more aligned with the context of the users. Learning from the results of the first test meant we could eradicate barriers users were facing. By being more strategic about the test entry conditions, we were helping a specific group of people, which is just as important as helping all users.

Strategic Learnings

Often when it comes to testing, we focus on the variation itself: the layout, the CTA, the copy, the design, and so on. But which users actually experience the test is just as important, and sometimes more so.

Digging into the reason behind the results was key to the success of this experiment. In any failed test situation, understanding the why will always improve your understanding of your users and help to inform the next steps in your strategy.

Webtrends Optimize allows you to set smart, behaviour-driven triggers so experiments are well thought out and give you the best chance of accurate learnings. We can ensure that experiments only run when they truly matter. Instead of showing a test to everyone, we can pause, wait for the right user action, and then reveal the variation we want, or block visitors entirely if they don’t qualify.
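To make that concrete, here is a minimal, generic sketch of a behaviour-driven trigger: it keeps checking a qualification condition and only activates the experiment once the visitor's behaviour is known, or gives up and blocks them if they never qualify. The function names and the `.basket-line-item` selector are assumptions made for illustration; this is not the actual Webtrends Optimize API, which handles this kind of trigger for you.

```typescript
// Minimal sketch of a behaviour-driven trigger: poll a condition and only
// activate the experiment once the visitor qualifies. The names and selector
// here are illustrative, not a real Webtrends Optimize API.

type TriggerResult = "activated" | "blocked";

function waitForQualification(
  qualifies: () => boolean,
  onActivate: () => void,
  { intervalMs = 500, timeoutMs = 30_000 } = {}
): Promise<TriggerResult> {
  return new Promise<TriggerResult>((resolve) => {
    const startedAt = Date.now();

    const timer = setInterval(() => {
      if (qualifies()) {
        // The right behaviour has happened: reveal the variation now.
        clearInterval(timer);
        onActivate();
        resolve("activated");
      } else if (Date.now() - startedAt > timeoutMs) {
        // The visitor never qualified: keep them out of the test entirely.
        clearInterval(timer);
        resolve("blocked");
      }
    }, intervalMs);
  });
}

// Example usage: only enter the sticky-CTA test once the basket shows 4+ lines
// (the ".basket-line-item" selector is a hypothetical placeholder).
waitForQualification(
  () => document.querySelectorAll(".basket-line-item").length >= 4,
  () => console.log("Reveal the sticky checkout CTA variation"),
).then((result) => console.log(`Trigger result: ${result}`));
```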