What’s the Optimal?
It’s exam season in the UK at the moment: this week and last, students have been receiving their results. So in that vein, a few weeks back I set our Optimisation services team a little test of their own…
I’ve seen a few clients get muddled between the various components of conversion rates, potentially leading to incorrect conclusions or an important insight being overlooked.
The purpose of this exercise was to reinforce understanding and, in particular, to draw out the care needed when using All Views as the denominator of a conversion rate.
So first off, a couple of definitions…
All Views – This is the total number of views of a test page for the duration of the test. Five views of the test page by the same visitor increase the count by 5.
Unique Views – Only a visitor’s initial view is counted; repeat views are ignored. So effectively Unique Views is a visitor count.
All Conversions – This is the total number of conversions, including repeats by the same visitor. Two visits to a sales confirmation page by the same visitor would increase the count by 2.
Unique Conversions – Only the first conversion is counted. This removes the potential of heavy users skewing results.
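The four counts above can be sketched from a raw event log. Everything here – the visitor IDs, the log shape – is illustrative, not real test data:

```python
# Hypothetical event log for one test page over the test period:
# (visitor_id, event) pairs, where "view" is a test-page view and
# "sale" is a hit on the sales confirmation page.
events = [
    ("v1", "view"), ("v1", "view"), ("v1", "sale"),
    ("v2", "view"), ("v2", "sale"), ("v2", "sale"),
    ("v3", "view"), ("v3", "view"), ("v3", "view"),
]

views = [vid for vid, event in events if event == "view"]
sales = [vid for vid, event in events if event == "sale"]

all_views = len(views)                 # every page view counts
unique_views = len(set(views))         # one per visitor
all_conversions = len(sales)           # repeat sales counted
unique_conversions = len(set(sales))   # first sale per visitor only

print(all_views, unique_views, all_conversions, unique_conversions)  # 6 3 3 2
```

Note how visitor v2 contributes two All Conversions but only one Unique Conversion – exactly the heavy-user skew the Unique metric removes.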
For lead generation and many micro-conversions, there is only value in the first conversion: two enquiries from the same visitor with exactly the same details are counted once with Unique Conversions.
For Ecommerce, however, and in particular where visitors make multiple purchases in a test period, each conversion to sale represents additional revenue. On that basis I would advocate always exploring All Conversions in conjunction with AOV when determining an optimal.
So, let’s start with the exam question I posed the team:
After 3 weeks, Pete, a conversion rate analyst, sees the following data. Statistical significance is tested at 95% confidence, rates are stable, and the test has run for two business cycles. Pete wants to maximise revenue; which experiment is optimal (based on the data provided)? Please explain your answer…
| Visitors (Unique) | Conversion to Sale (unique) | Conversion Rate | Confidence Interval | Lift | Significant? |
|---|---|---|---|---|---|

| All Visits | All Conversions to Sale | Conversion Rate | Confidence Interval | Lift | Significant? |
|---|---|---|---|---|---|

| Average Order Value | Significant? |
|---|---|
Analysis: So, what’s optimal?
Table A (below) has been created and calculated from the data provided above. It shows the number of unique views and unique conversions (sales) recorded by each experiment over the 3 weeks; effectively, this is the customer conversion rate. Here, Experiments 2 and 3 have performed well, significantly above the control.
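Whether a difference in unique conversion rate like Table A’s is significant at 95% confidence can be checked with a standard two-proportion z-test. The figures below are hypothetical – they are not the numbers from this test:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_*: unique conversions; n_*: unique visitors (the denominator).
    Returns the absolute lift, the z statistic and the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided
    return p_b - p_a, z, p_value

# Hypothetical control vs experiment figures:
lift, z, p = two_proportion_z_test(conv_a=500, n_a=10_000, conv_b=580, n_b=10_000)
print(f"lift={lift:.4f}  z={z:.2f}  significant at 95%? {p < 0.05}")
```

With these made-up numbers an 0.8-point lift on a 5% base rate clears the 95% bar; with smaller samples the same lift would not.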
But what we want is to “Optimise revenue”.
Table B (below) shows All Views and All Conversions (to sale) by customer. At first sight, this suggests that Experiments 2 and 3 have performed really well.
However, take a look at the number of views made per visitor. These are down significantly for Experiments 2 and 3. This might represent a UX improvement, or, more likely, less engagement or fewer return visits. Either way, it’s not a useful guide to establishing an optimal.
| All Views | Views (unique) | Views/visitor (3 weeks) | % Difference |
|---|---|---|---|
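The views-per-visitor comparison is just All Views divided by Unique Views, compared against the control. The counts here are invented for illustration:

```python
# Hypothetical 3-week counts for the control and one experiment:
control = {"all_views": 60_000, "unique_views": 10_000}
exp_2   = {"all_views": 45_000, "unique_views": 10_000}

def views_per_visitor(d):
    return d["all_views"] / d["unique_views"]

vpv_c = views_per_visitor(control)   # control visitors averaged 6 views each
vpv_e = views_per_visitor(exp_2)     # experiment visitors averaged 4.5
pct_diff = (vpv_e - vpv_c) / vpv_c * 100
print(f"{vpv_c:.1f} vs {vpv_e:.1f} views/visitor ({pct_diff:+.0f}%)")
```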
To evaluate revenue performance, we need to explore total sales revenue per visitor.
| Views (unique) | All Conversions (to Sale) | Sales per visitor Rate | Confidence Interval | Lift | Significant? |
|---|---|---|---|---|---|
And here we can see that both experiments have performed negatively – to the point of significance for Experiment 2.
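Because All Conversions can exceed one per visitor, the sales-per-visitor rate is not a binomial proportion, so the z-test for conversion rates doesn’t apply directly. One common approach – an assumption on my part, not necessarily the method used for the table above – is to treat the conversion counts as approximately Poisson:

```python
from math import sqrt
from statistics import NormalDist

def sales_rate_test(sales_a, n_a, sales_b, n_b):
    """Compare sales-per-visitor rates (All Conversions / unique visitors).

    Counts can exceed one per visitor, so instead of a binomial z-test
    this uses a Poisson approximation: the variance of a count is taken
    to be the count itself.
    """
    r_a, r_b = sales_a / n_a, sales_b / n_b
    se = sqrt(sales_a / n_a**2 + sales_b / n_b**2)  # SE of the rate difference
    z = (r_b - r_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return r_b - r_a, p_value

# Hypothetical figures: the experiment's sales per visitor are down on control.
diff, p = sales_rate_test(sales_a=700, n_a=10_000, sales_b=620, n_b=10_000)
print(f"rate difference={diff:+.4f}  significant at 95%? {p < 0.05}")
```

A bootstrap over per-visitor sale counts would be a more robust alternative when heavy repeat buyers make the Poisson assumption shaky.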
Taking the calculated sales rates and AOV values we can now compare revenue by experiment over the test period.
| Sales per visitor Rate | AOV | Revenue/1000 visitors | % Difference |
|---|---|---|---|
*The conversion rate shift for Experiment 3 isn’t significant; the small change could be due to chance. On that basis the rate is assumed to be unchanged and the control rate is applied. The AOV differences for Experiments 2 & 3 are both significant, so the measured values are applied.
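Putting the pieces together: revenue per 1,000 visitors is the sales-per-visitor rate times AOV times 1,000, substituting the control rate wherever the rate shift isn’t significant, as the footnote above describes. All numbers below are invented for illustration – they are not the test’s data:

```python
# Hypothetical sales-per-visitor rates and AOVs:
control = {"rate": 0.070, "aov": 250.0}
experiments = {
    "Experiment 2": {"rate": 0.062, "rate_significant": True,  "aov": 210.0},
    "Experiment 3": {"rate": 0.066, "rate_significant": False, "aov": 220.0},
}

def revenue_per_1000(rate, aov):
    return rate * aov * 1000

base = revenue_per_1000(control["rate"], control["aov"])
print(f"Control: £{base:,.0f} per 1,000 visitors")
for name, e in experiments.items():
    # Non-significant rate shifts are treated as unchanged: use the control rate.
    rate = e["rate"] if e["rate_significant"] else control["rate"]
    rev = revenue_per_1000(rate, e["aov"])
    pct = (rev - base) / base * 100
    print(f"{name}: £{rev:,.0f} ({pct:+.1f}%)")
```

With these made-up inputs both experiments land below the control once the significant AOV drops are applied – the same shape of result as the exercise.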
On this measure and on the information available, you should select the control as being optimal.
Although this data was manufactured, the pattern could plausibly arise in a scenario like the following:
A luxury Ecommerce site with high ticket prices (and a high AOV) tests the benefits of an ongoing promotion for a low-value item.
Visitors who would not ordinarily convert increasingly do so. Customer conversion increases, but mostly on the promoted item. In addition, the offer distracts established customers, so AOV drops significantly.
With no incentive to convert further, visits and repeat orders drop.
It’s important to look at all of the data available at the end of an experiment. What looks at first glance like a clear winner can mask the real picture and skew decisions that impact revenue.
We constantly challenge our Optimisation services team; in fact, every week they face a new challenge. Over the coming weeks I will create further blog posts on exercises that should question all of our thinking.