BEYOND UNCLEAR RESULTS OR SMALL GAINS: WHAT MAKES A WORTHWHILE TEST?

This week concludes a series of tests we’ve been conducting on our monthly customer newsletter. The first test changed the design and content strategy of the newsletter, and we saw a significant increase in both open rate and click-throughs. The second test built on that success and aimed to find the subject line that would entice even more opens. While the results for the winning subject line were somewhat inconclusive, we did still see a slight lift in open rate.

For the final test, we shifted gears away from the newsletter itself and tried to get more people to sign up to receive it by filling out a form on our site.

More interesting than the results are the questions this test raised for us. Although we were able to raise our number of conversions by 54%, the number of people who sign up for our newsletter every month is still small enough that a 54% lift isn’t earth-shattering. We found ourselves asking what these results meant for us, whether they were statistically significant, and ultimately, whether we should focus our efforts on testing other components of our site.
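To make the significance question concrete, here’s a rough sanity check you can run on any two-variant test. This is a minimal sketch, not our actual analysis: the visitor and conversion counts below are hypothetical stand-ins (the real figures aren’t in this post), and the calculation is a standard pooled two-proportion z-test in Python.

    import math

    # Hypothetical counts -- stand-ins, not our real data.
    # The variant shows roughly a 54% lift in conversion rate.
    control_visitors, control_conversions = 500, 13   # 2.6% conversion
    variant_visitors, variant_conversions = 500, 20   # 4.0% conversion

    p1 = control_conversions / control_visitors
    p2 = variant_conversions / variant_visitors

    # Pooled two-proportion z-test for the difference in rates.
    pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
    z = (p2 - p1) / se

    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))

    print(f"lift: {(p2 - p1) / p1:.0%}, z = {z:.2f}, p = {p_value:.3f}")

At these sample sizes the p-value comes out well above 0.05, so a 54% lift on modest traffic can easily fall short of significance; run the same rates at ten times the traffic and it passes comfortably. That’s exactly the doubt we were wrestling with.

The Webtrends Optimize marketing philosophy is that everything is worthy of a test (kittens on our 404 page included). But what makes a test truly valuable?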

1. Determine your goals. This sounds basic, but it’s a key part of any marketing experiment. And by goals, I don’t mean just “increase conversions.” I mean: what exactly are you trying to accomplish? In this test, we wanted to see if we could increase the number of new newsletter sign-ups relative to unsubscribes each month. We achieved that by a small margin.

2. Decide the benefit. Distinct from goals, this rule is about how much achieving them will matter to the rest of your marketing efforts. Our decided benefit was lead generation. We thought that by steadily increasing the ratio of new contacts to unsubscribes, we could eventually nurture our list into a growing, self-sufficient lead generation machine. Clearly the results of those efforts won’t be seen for a while, but it helped to have this benefit in mind while setting our test schedule. Which brings us to the third rule…

3. Set your priorities. Whether it’s ease of testing, end benefit to the company, or time sensitivity around a short-lived campaign, each test has its own priority. We knew we could achieve stable results on a sign-up box on our solutions pages within a month or less, so we decided to test the newsletter sign-up box before a solutions page redesign (a test that’s running now). By finding the sign-up box that performs best, we can use that winning design in both the variation and the control of the new solutions page test, so we’re bringing in the maximum number of sign-ups while that test runs.

We’d love to hear your thoughts. What makes a successful test in your opinion? What results do you look for? And would you have changed anything about this particular test?