THE 3 KEY QUESTIONS YOU NEED TO ASK OF EVERY AB TEST

There are 3 key questions we should all be asking when reviewing the results of an AB test. Did the AB test win? If so, where did it win? And, probably most importantly, why? Let’s look at each in a little more detail…

Did my AB test win?

The key to answering this is understanding what you were trying to achieve in the first place.

Answering your initial hypothesis should go hand in hand with data from your KPI (Key Performance Indicator). Setting KPIs before running an AB test is essential: they give you concrete targets against which to measure progress towards your objectives.

For example, you may want to increase the Number of Purchases when someone visits your website. Your hypothesis might then be, "we've found an issue with X, so we think that changing Y will help, because it will show users how to do Z (or stop them doing Z)".

Simply put, the AB test will show whether the change you made produced an improvement in the Number of Purchases.
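As a quick illustration of the "did it win?" check, here is a minimal Python sketch of a two-proportion z-test on purchase conversion. All counts are invented, and your testing platform will normally report significance for you; this just shows the underlying arithmetic.

```python
# Minimal two-proportion z-test: did the variation's conversion rate
# genuinely beat control's? All counts below are invented.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (absolute uplift, two-sided p-value) for a difference in rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return p_b - p_a, p_value

uplift, p = two_proportion_z_test(conv_a=490, n_a=10_000,   # control
                                  conv_b=560, n_b=10_000)   # variation
print(f"Absolute uplift: {uplift:+.2%}, p-value: {p:.3f}")
```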

There may be times when you don't observe a difference, which is fine too. This could tell you that something you thought mattered doesn't, or that your approach to fixing it was wrong. Either way, you will be getting closer to resolving the issue and reaching your goals.

Moreover, deciding whether to implement the ‘winning’ variation isn’t always as straightforward as it seems.

For example, if the Number of Purchases goes up, but the AOV (Average Order Value) or Units Purchased goes down, then the business could lose money overall. So whilst you have, in theory, solved one of your problems, implementing the change is not always in the best interests of the overall business.
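To make that concrete, here is an entirely hypothetical worked example: purchases rise 10%, but AOV falls 12.5%, and the business takes less money overall.

```python
# Hypothetical trade-off: conversion improves, AOV drops, revenue falls.
visitors = 10_000  # same traffic to each variation

control_purchases,   control_aov   = 500, 80.00  # 5.0% conversion
variation_purchases, variation_aov = 550, 70.00  # 5.5% conversion

control_revenue   = control_purchases   * control_aov    # 40,000
variation_revenue = variation_purchases * variation_aov  # 38,500

print(f"Purchases: {variation_purchases / control_purchases - 1:+.1%}")  # +10.0%
print(f"AOV:       {variation_aov / control_aov - 1:+.1%}")              # -12.5%
print(f"Revenue:   {variation_revenue / control_revenue - 1:+.1%}")      # about -3.8%
```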

If you are faced with this decision, there are two additional questions you should ask:

  1. Did you really pick the right metric? (For example, if fewer people spending more is palatable, then perhaps revenue per onsite visitor, rather than purchase count, should have been the goal.)
  2. What's going on with the related metrics?

Where did it win?

When creating an AB test, you usually expect related metrics to move in the same direction.

So, if your AB test is encouraging more people to add to their bag, you expect more people to enter the checkout. If you are encouraging more people to buy, you expect the amount of money you make as a business to go up.

However, this is not always the case.

As an example, suppose your AB test pushes ‘Buy It Now’ harder than ‘Add to Bag & Continue Browsing’. More people buy, because you're priming them to act quickly, but they shop around less and therefore spend less. If yours is the type of site where users traditionally buy several things at a time, this can be detrimental.

It is extremely important to understand not only what happened to the KPI, but specifically where the AB test is doing well and where it isn't.
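One simple way to do this is to break the journey into funnel steps and compare step-to-step conversion for each variation. Here is a hypothetical sketch (all counts invented) where the variation wins at add-to-bag but leaks some of that gain further down the funnel:

```python
# Step-to-step funnel conversion, control vs variation (invented data).
funnel = ["product_view", "add_to_bag", "checkout", "purchase"]

control   = {"product_view": 10_000, "add_to_bag": 2_400,
             "checkout": 1_100, "purchase": 500}
variation = {"product_view": 10_000, "add_to_bag": 2_900,
             "checkout": 1_150, "purchase": 505}

for step_from, step_to in zip(funnel, funnel[1:]):
    c = control[step_to] / control[step_from]
    v = variation[step_to] / variation[step_from]
    print(f"{step_from} -> {step_to}: control {c:.1%} vs variation {v:.1%}")
```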

Webtrends Optimize always encourages collecting a broad set of metrics to aid this type of investigation.

Why?

Exploring the data to understand why and how your AB test won is hugely important.

Digging into the data can help you form new hypotheses, better understand key issues and avoid repeating mistakes.

Our game-changing new data pipeline and reporting suite, Discovery, makes this incredibly easy to do – directly within Webtrends Optimize.

Most experimentation tools are restrictive, offering no means of filtering and exploring the journeys users take. This makes it difficult to understand their behaviour.

For example, let’s say your AB test changes help/error messaging amongst other things. Can you see performance for the people who actually hit the error? Is that the reason this variation has performed in an unexpected way?

Webtrends Optimize offers a unique "Metrics Triggered" filter, which lets you easily filter specific journeys/events in or out of your data set, making questions like these considerably easier to answer.
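To illustrate the concept (this is not the Webtrends Optimize API, just the idea behind it), here is what such a filter boils down to if you had raw session data in pandas; all column names and values are invented:

```python
# Split sessions by whether an error event fired, then compare
# conversion per variation. All column names and data are invented.
import pandas as pd

sessions = pd.DataFrame({
    "variation":   ["control", "control", "control",
                    "variation", "variation", "variation"],
    "error_shown": [False, False, True, True, False, True],
    "purchased":   [True, False, False, False, True, False],
})

summary = (sessions
           .groupby(["variation", "error_shown"])["purchased"]
           .agg(sessions="count", conversion="mean"))
print(summary)
```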

This, alongside other checks you should always be doing, such as whether certain countries behave differently (especially if you're testing copy) or whether devices behave differently (perhaps there's a problem on the mobile view), all helps to build a picture of why.
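The same idea applies to those segment checks. A short, hypothetical pandas sketch of a device breakdown, looking for a segment (mobile, say) that drags the overall result down:

```python
# Conversion by device and variation, to spot a struggling segment.
# All data here is invented for illustration.
import pandas as pd

sessions = pd.DataFrame({
    "variation": ["control", "variation"] * 4,
    "device":    ["desktop"] * 4 + ["mobile"] * 4,
    "purchased": [1, 1, 0, 1, 1, 0, 0, 0],
})

conversion = (sessions
              .groupby(["device", "variation"])["purchased"]
              .mean()
              .rename("conversion"))
print(conversion)
```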

Discovery removes the need for separate manual tools (Tableau, analytics platform integrations, and so on) to get these sorts of answers, and makes reaching them far quicker!

This is what sets us apart from the rest. Our goal is to provide you with the why and then turn this into something which will change your customer experience forever.