How do you come up with test ideas?

You probably know us as the company where you can test without restrictions. No more tiers/tears, as I put it. Naturally, we spend a lot of time showing customers how to increase conversions and get maximum ROI. But I see a common thread among customers who fail to maximise that ROI: they invest their time and money in the wrong places.

Customers get half-way there, and then go through an exercise of “here’s what I don’t like about the page”. That list gets prioritised and becomes the testing backlog.

At Webtrends Optimize, we believe first and foremost in solving real problems, not just operating off gut feel. Whether through experimentation or quality research, good ROI is driven by your ability to first identify the problem, so that you don't waste weeks in design, development, QA and analysis only to find you've been chasing the wrong problem.

Story #1 – Recommendations Engine

A couple of years ago, I spoke to someone who said they wanted to test out our recommendations engine against the one from their Content Management System.

We spent a long time talking through features they wanted, functionality it should have, problems it would solve – all of the really interesting and quite fun things you get with testing.

Looking at the page though, there was one big problem I noticed: the product detail page (PDP) was full of images and was very, very long. Which naturally made me wonder – while loads of people get to the PDP, how many actually scroll far enough to reach that point in the page?

Imagine spending a month configuring an engine, fine-tuning it, testing it, then another few weeks running an AB test – all to find that the views of the carousel were super low and so the potential for improvement was tiny. That’d be a huge waste of time.

Instead, capturing a quick bit of data on how many people saw the carousel and how many interacted with it would tell us the room for improvement. So often, people stop their analytics research at "did they reach the page?" and end up solving the wrong problem.
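
Capturing that doesn't need to be a big piece of work. Here's a minimal sketch in TypeScript: it assumes the carousel renders with a hypothetical id of recs-carousel, and track() is a stand-in for whatever custom-event call your analytics or testing tool provides.

```typescript
// Minimal sketch: count how many visitors actually see and interact with the
// recommendations carousel. The id "recs-carousel" is a hypothetical example.
function track(eventName: string): void {
  // Replace with your analytics or testing tool's custom-event call.
  console.log("analytics event:", eventName);
}

const carousel = document.getElementById("recs-carousel");

if (carousel) {
  // Fires once, when at least half of the carousel enters the viewport.
  const observer = new IntersectionObserver(
    (entries) => {
      if (entries.some((entry) => entry.isIntersecting)) {
        track("carousel_viewed");
        observer.disconnect();
      }
    },
    { threshold: 0.5 }
  );
  observer.observe(carousel);

  // Any click inside the carousel counts as an interaction.
  carousel.addEventListener("click", () => track("carousel_clicked"), { once: true });
}
```

A couple of weeks of those two counts against PDP traffic tells you whether the carousel is even being seen, before you commit to the build.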

(Which is why testing is so important!)

Story #2 – Exit Intent popups

Just like EVERYONE else, we’ve run hundreds of Exit Intent modals over the years. A quick message to catch people on their way out of the site. Coupon codes often, sometimes just USPs, but tons of these have been put out there (I’m so sorry).

There are two aspects to consider here:

  1. When running the tests, making sure you only count users who trigger that exit behaviour, both in the control group and the variation, is absolutely key to running a fair test free of noise (see the sketch after this list).
  2. When gauging your sample size, again making sure you only count that behaviour is key to knowing if the test is even worth running.
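
For the first point, the gating can be as simple as only bucketing visitors at the moment exit intent fires. A minimal sketch follows; assignVariant() and showExitModal() are hypothetical stand-ins for your testing tool's bucketing and rendering calls, and the top-of-viewport mouse-out check is just one common way of detecting exit intent.

```typescript
// Minimal sketch: only count visitors into the experiment at the moment they
// show exit intent, so the control and variation populations match.
type Variant = "control" | "variation";

function assignVariant(): Variant {
  // In practice your testing tool assigns this; a coin flip stands in here.
  return Math.random() < 0.5 ? "variation" : "control";
}

function showExitModal(): void {
  // Render the exit-intent popup for the variation group.
}

let counted = false;

document.addEventListener("mouseout", (event: MouseEvent) => {
  // The cursor leaving through the top of the viewport is one common
  // exit-intent signal on desktop.
  const leavingViewport = event.relatedTarget === null && event.clientY <= 0;
  if (!leavingViewport || counted) return;

  counted = true;
  const variant = assignVariant(); // both groups are counted from this moment
  if (variant === "variation") {
    showExitModal();
  }
});
```

The same gating answers the second point too: counting how many visitors actually fire that event over a week or so tells you whether there's enough traffic to make the test worth running.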

I've often seen people fall short on both fronts, in which case their tests are a shot in the dark.

We rarely studied the form: who's coming in, which fields they interact with, which errors they're hitting, where they're getting stuck, and so on. Nothing that could point to a specific problem or group of users that needs help. Instead, these broad-brush changes were no different to closing your eyes with a bat in hand and swinging as hard as you can. Anything can happen, but in all likelihood, you won't be very successful.
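
That sort of field-level data is cheap to collect. A minimal sketch, assuming a form with a hypothetical id of checkout-form and the same kind of track() helper as before:

```typescript
// Minimal sketch: field-level form analytics, so you can see which fields
// people reach, where they stall and which errors they hit. The id
// "checkout-form" is a hypothetical example.
function track(eventName: string, detail: Record<string, string>): void {
  // Replace with your analytics or testing tool's custom-event call.
  console.log("analytics event:", eventName, detail);
}

const form = document.getElementById("checkout-form") as HTMLFormElement | null;

if (form) {
  for (const field of Array.from(form.elements)) {
    if (!(field instanceof HTMLInputElement)) continue;

    // Which fields people actually get to.
    field.addEventListener("focus", () =>
      track("field_focused", { field: field.name })
    );

    // Which fields fail validation when people leave them.
    field.addEventListener("blur", () => {
      if (!field.checkValidity()) {
        track("field_error", { field: field.name, error: field.validationMessage });
      }
    });
  }
}
```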

Solving the actual problem

Good research is great. Usability studies, lab sessions, interviews, recordings, crawling reviews – all great. But alongside these, there is absolutely no substitute for data backing up your ideas.

If people are highlighting challenges with some Guided Selling feature you have – can you support it with some sort of struggle score? What would be the potential for improvement?
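
A struggle score doesn't have to be anything sophisticated. The sketch below shows one illustrative way to put a number on it; the signals and weights are assumptions for the example, not a standard metric. The point is simply to quantify the problem, and how many sessions it touches, before you commit to building.

```typescript
// Minimal sketch of one way to turn session events into a "struggle score" for
// a feature. The signals and weights here are illustrative assumptions.
interface SessionSignals {
  errorsHit: number;      // validation or API errors inside the feature
  repeatedClicks: number; // rage-click style repeated clicks on one element
  backtracks: number;     // times the user stepped backwards in the flow
  abandoned: boolean;     // left the feature without completing it
}

function struggleScore(s: SessionSignals): number {
  return s.errorsHit * 2 + s.repeatedClicks + s.backtracks * 1.5 + (s.abandoned ? 5 : 0);
}

// Averaging across sessions that used the feature puts a number on how much
// pain it causes, and how many people it affects.
function averageStruggle(sessions: SessionSignals[]): number {
  if (sessions.length === 0) return 0;
  return sessions.reduce((sum, s) => sum + struggleScore(s), 0) / sessions.length;
}
```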

Being able to prove a problem with data should increase its priority in your roadmap, as you're more likely to find successes. And not being able to should perhaps deprioritise the test, for the same reason: risk vs. reward.

This isn't to be confused with confirmation bias, where you go looking for anything that proves your point. Setting clear criteria for how you'll measure success beforehand, and then letting the data speak for itself, should keep the process objective.

However you do it, always ask yourself: how do I know I'm solving the actual problem?