
What’s the Optimal?

It’s exam season in the UK at the moment, with students receiving their results this week and last. So in that vein, a few weeks back I set our Optimisation services team a little test of their own…

I’ve seen a few clients get muddled between the various components of conversion rates, potentially leading to incorrect conclusions or an important insight being overlooked.

The purpose of this exercise was to reinforce understanding and, in particular, to draw out the care needed when using All Views as the rate denominator.

So first off, a couple of definitions…

All Views – This is the total number of views of a test page for the duration of the test. Five views of a test page by the same visitor increase the count by 5.

Unique Views – Only the initial view is counted; repeated views are not considered. So effectively, Unique Views is a visitor count.

All Conversions – This is the total number of conversions, including repeats by the same visitor. Two visits to a sales confirmation page would increase the count by 2.

Unique Conversions – Only the first conversion per visitor is counted. This removes the potential for heavy users to skew results.

For lead generation and many micro-conversions, there is only value in the first conversion: two enquiries from the same visitor with exactly the same details are counted once under Unique Conversions.

For Ecommerce, however, and in particular where visitors make multiple purchases in a test period, each conversion to sale represents additional revenue. On that basis I would advocate always exploring All Conversions in conjunction with AOV when determining an optimal.
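To make the four counts concrete, here is a minimal sketch in Python. The event-log structure and values are purely illustrative, not how any particular analytics tool stores its data:

```python
# Toy event log for illustration: (visitor_id, event) pairs over a test period.
events = [
    ("v1", "view"), ("v1", "view"), ("v1", "sale"),
    ("v2", "view"), ("v2", "sale"), ("v2", "sale"),
    ("v3", "view"),
]

views = [v for v, e in events if e == "view"]
sales = [v for v, e in events if e == "sale"]

all_views = len(views)                 # every page view counts: 4
unique_views = len(set(views))         # one per visitor, i.e. a visitor count: 3
all_conversions = len(sales)           # repeat purchases count again: 3
unique_conversions = len(set(sales))   # first conversion per visitor only: 2

print(all_views, unique_views, all_conversions, unique_conversions)  # 4 3 3 2
```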

The Question

So, let’s start with the exam question I posed the team:

After 3 weeks, Pete, a conversion rate analyst, sees the following data. Statistical significance is tested at 95% confidence, rates are stable, and the test has run for two business cycles. Pete wants to maximise revenue; which experiment is optimal (based on the data provided)? Please explain your answer…

Table 1

| | Visitors (Unique) | Conversions to Sale (Unique) | Conversion Rate | Confidence Interval | Lift | Significant? |
|---|---|---|---|---|---|---|
| Control | 1,000 | 250 | 25.0% | +/-2.68% | | |
| Experiment 2 | 1,000 | 300 | 30.0% | +/-2.84% | +20.00% | Yes |
| Experiment 3 | 1,000 | 290 | 29.0% | +/-2.81% | +16.00% | Yes |
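The post doesn’t say how these confidence intervals were produced, but the figures are consistent with the standard 95% normal-approximation interval, 1.96 × √(p(1−p)/n). A quick check in Python (illustrative only, not necessarily the team’s tooling):

```python
import math

def ci_95(conversions: int, n: int) -> float:
    """Half-width of a 95% normal-approximation confidence interval."""
    p = conversions / n
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Control row of Table 1: 250 unique conversions from 1,000 visitors.
print(f"+/-{ci_95(250, 1000):.2%}")   # +/-2.68%, matching the table

# Lift is the relative change in conversion rate vs the control.
p_control, p_exp2 = 250 / 1000, 300 / 1000
print(f"{(p_exp2 - p_control) / p_control:+.2%}")  # +20.00%
```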

Table 2

| | All Views | All Conversions to Sale | Conversion Rate | Confidence Interval | Lift | Significant? |
|---|---|---|---|---|---|---|
| Control | 3,000 | 340 | 11.3% | +/-1.13% | | |
| Experiment 2 | 2,300 | 302 | 13.1% | +/-1.38% | +15.86% | Yes |
| Experiment 3 | 2,500 | 329 | 13.2% | +/-1.33% | +16.12% | Yes |

Table 3

| | Average Order Value | Significant? |
|---|---|---|
| Control | £200 | |
| Experiment 2 | £160 | Yes |
| Experiment 3 | £178 | Yes |

Analysis: So, what’s optimal?

Table 1 (above) shows the number of unique views and unique conversions (sales) made by each experiment over the 3 weeks. Effectively, this is the customer conversion rate. Here, Experiments 2 and 3 have performed well, and significantly above the control.

But what we want is to “Optimise revenue”.

Table 2 shows All Views and All Conversions (to sale). At first sight, this suggests that Experiments 2 and 3 have performed really well.

However, take a look at the number of views made per visitor, calculated in Table A (below) from the data provided. Views per visitor are down significantly for Experiments 2 and 3. This might represent a UX improvement, or, more likely, less engagement and fewer return visits. Either way, it’s not a useful guide to establishing an optimal.

Table A

| | All Views | Views (Unique) | Views per Visitor (3 weeks) | % Difference |
|---|---|---|---|---|
| Control | 3,000 | 1,000 | 3.0 | |
| Experiment 2 | 2,300 | 1,000 | 2.3 | -23.33% |
| Experiment 3 | 2,500 | 1,000 | 2.5 | -16.67% |
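Table A falls straight out of the two view counts; for instance, for Experiment 2:

```python
# Views per visitor = All Views / Unique Views; % difference is vs the control.
control = 3000 / 1000   # 3.0 views per visitor
exp2 = 2300 / 1000      # 2.3 views per visitor
print(f"{exp2 / control - 1:+.2%}")  # -23.33%
```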

To evaluate revenue performance, we need to explore total sales, and ultimately revenue, per visitor.

Table B

| | Views (Unique) | All Conversions (to Sale) | Sales per Visitor Rate | Confidence Interval | Lift | Significant? |
|---|---|---|---|---|---|---|
| Control | 1,000 | 340 | 34.0% | +/-2.94% | | |
| Experiment 2 | 1,000 | 302 | 30.2% | +/-2.85% | -11.18% | Yes |
| Experiment 3 | 1,000 | 329 | 32.9% | +/-2.91% | -3.24% | No |

And here we can see that both experiments have performed negatively, to the point of significance for Experiment 2.
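The post doesn’t state which significance test was used. One common choice, a pooled two-proportion z-test, reproduces the table’s Yes/No calls under a one-sided test at 95% (|z| > 1.645). This sketch is an assumption for illustration, not necessarily the team’s method:

```python
import math

def z_two_proportions(x1: int, n1: int, x2: int, n2: int) -> float:
    """Pooled two-proportion z statistic for H0: the two rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error of the difference
    return (p2 - p1) / se

# Table B, control vs Experiment 2: 340/1,000 vs 302/1,000.
print(round(z_two_proportions(340, 1000, 302, 1000), 2))  # -1.82: significant
# Control vs Experiment 3: 340/1,000 vs 329/1,000.
print(round(z_two_proportions(340, 1000, 329, 1000), 2))  # -0.52: not significant
```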

Taking the calculated sales rates and AOV values we can now compare revenue by experiment over the test period.

Table C

| | Sales per Visitor Rate | AOV | Revenue per 1,000 Visitors | % Difference |
|---|---|---|---|---|
| Control | 34.0% | £200 | £68,000 | |
| Experiment 2 | 30.2% | £160 | £48,320 | -29% |
| Experiment 3 | 34.0%* | £178 | £60,520 | -11% |

*The conversion rate shift for Experiment 3 isn’t significant; it could simply be down to chance. On that basis the rate is assumed to be unchanged, and the control rate is applied. The AOV differences for Experiments 2 and 3 are both significant, so the observed values are applied.
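Putting Table C together in code, using the rule above of falling back to the control rate where the rate change wasn’t significant (a sketch over the figures already given):

```python
control_rate = 0.340  # control sales-per-visitor rate from Table B

# (observed rate, AOV in £, was the rate change significant?)
experiments = {
    "Control":      (0.340, 200, True),
    "Experiment 2": (0.302, 160, True),
    "Experiment 3": (0.329, 178, False),  # not significant -> control rate applied
}

base = control_rate * 1000 * experiments["Control"][1]  # £68,000
for name, (rate, aov, rate_significant) in experiments.items():
    effective_rate = rate if rate_significant else control_rate
    revenue = effective_rate * 1000 * aov  # revenue per 1,000 visitors
    print(f"{name}: £{revenue:,.0f} ({revenue / base - 1:+.0%})")
# Control: £68,000 (+0%), Experiment 2: £48,320 (-29%), Experiment 3: £60,520 (-11%)
```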

The Answer

On this measure, and on the information available, you should select the control as optimal.

Although this data was manufactured, it could plausibly arise in a scenario like the following:

A luxury Ecommerce site with high ticket prices (and a high AOV) tests the benefits of an ongoing promotion for a low-value item.

Visitors who would not ordinarily convert increasingly do so. Customer conversion increases, but it is concentrated mainly on the promoted item. In addition, the offer distracts established customers, so AOV drops significantly.

With no incentive to convert further, visits and repeat orders drop.

In summary

It’s important to look at all of the data available at the end of an experiment. What looks at first glance like a clear winner can mask the real picture, and can skew decisions that impact revenue.

We constantly challenge our Optimisation services team; in fact, every week they face a new challenge. Over the coming weeks I will look to create further blog posts on experiments that should question all of our thinking.

 

For more information about our Managed Service and how we can help please contact us via https://www.webtrends-optimize.com/about-us/contact-us/