In the early days of optimization, everything was about science and measurement. Solutions proudly advertised support for a/b/n, split, and multivariate testing (touting not only full-factorial designs but also Taguchi and fractional factorial approaches). Reporting offered multiple reports with the ability to export or extract that data for further analysis in Excel or a data visualization tool. In those days, the analytics teams and data analysts ran the show.

Today, the marketers are front and center. Taguchi? Fractional factorial? Forget it. Give ‘em an intuitive interface that mimics a CMS by loading content into a visual editor and making it simple to select variations and conversions. Then, provide basic reporting that simply identifies winning variations and lift so they can get on to testing the next campaign.

What’s been lost in this transition from analyst to marketer as the user and consumer of data is that none of this matters if the system doesn’t scale, the sciences aren’t predictive, or the architecture fails to support increasingly complex business needs.


It’s easy to think of scalability as a hardware problem. If the system slows, throw more servers at it. But that ignores the fact that there are also geography problems and efficiency problems. On geography, for example, Webtrends Optimize runs five data centers for global data collection and redundant CDNs for data distribution. In practical terms, this means visitors to our client sites both see and submit data more quickly than with competing solutions, so the effect of having a testing solution in the mix is imperceptible and their experience with the content is unaffected.

As for efficiency, what happens when you want to run dozens of tests simultaneously on your sites? Solutions have to scale with your program and support running multiple tests on a single page while providing guardrails that prevent collisions between those tests or their targeting rules.
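One common guardrail is deterministic bucketing: hash each visitor into a stable bucket and carve the bucket space up among a mutually exclusive group of tests, so no visitor lands in two conflicting experiments on the same page. Below is a minimal sketch of that idea; the test names, group name, and traffic splits are illustrative assumptions, not Webtrends Optimize internals.

```python
import hashlib
from typing import List, Optional

def bucket(visitor_id: str, salt: str, buckets: int = 100) -> int:
    """Map a visitor to a stable bucket in [0, buckets) for a given salt."""
    digest = hashlib.sha256(f"{salt}:{visitor_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def assign_exclusive(visitor_id: str, tests: List[str]) -> Optional[str]:
    """Assign the visitor to at most one test in a mutually exclusive group.

    The 100-bucket space is split evenly among the tests; any remainder
    buckets get no test, which slightly under-allocates traffic but keeps
    the groups strictly non-overlapping.
    """
    share = 100 // len(tests)
    index = bucket(visitor_id, "exclusive-group") // share
    return tests[index] if index < len(tests) else None

# A visitor is placed in exactly one of the two page tests (or neither),
# and repeat visits always yield the same assignment.
print(assign_exclusive("visitor-123", ["hero-banner-test", "cta-copy-test"]))
```

Because the assignment is a pure function of the visitor ID, it needs no coordination between servers and survives page reloads without extra state.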


Why do sciences matter? Not because “our algorithms are better than theirs” or “we have more algorithms than they do.” Sciences matter because you are making business decisions based on the outcomes of the tests you run. If you only care about which banner is better, you’ll be fine with any free solution you choose. However, if you are making decisions about which campaigns to fund for specific customer segments, then you want sciences that do a good job of predicting the gains you should expect to see. After all, if you go into a meeting to justify your campaign decisions, you are a lot better off projecting that those decisions will generate xx% gains in revenue, or even specific revenue amounts.
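To make that concrete, here is one simple way a predicted gain can be quoted with honest uncertainty: a relative-lift estimate with a confidence interval, using a normal approximation for the difference of two conversion rates. The counts below are made-up numbers for illustration, not a claim about any particular method a vendor uses.

```python
import math

def lift_interval(conv_a, n_a, conv_b, n_b, z=1.96):
    """Return (relative lift of B over A, CI low, CI high) at ~95% confidence.

    conv_a/n_a are conversions and visitors for the control,
    conv_b/n_b for the variation.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Standard error of the difference in conversion rates.
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    # Express everything relative to the control rate.
    return diff / p_a, (diff - z * se) / p_a, (diff + z * se) / p_a

lift, low, high = lift_interval(500, 10000, 560, 10000)
print(f"Expected lift: {lift:.1%} (95% CI: {low:.1%} to {high:.1%})")
```

A point estimate alone can look impressive while its interval still straddles zero; carrying the interval into the meeting is what separates a defensible revenue projection from a guess.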


Ah, the good old days when everyone had one site (www.company.com) designed in HTML. And, maybe, a separate site for mobile, but still just plain HTML. Testing was so easy then. Now, everyone is headed toward responsive design, using dynamically generated sites where URLs never change because they are essentially large apps, and mobile experiences are built as a combination of apps and responsive content. Oh, and relying on cookies to identify and engage visitors is getting harder as browsers and regulatory agencies respond to concerns about privacy and security. If your solution doesn’t design for and around these issues, then it is showing its age.

Don’t get me wrong. The trend toward an easy-to-use visual editing experience is valuable, and we have invested heavily in building that same kind of simplicity for our clients. But the best solutions are also handling very complex tests, global infrastructure, and dynamic content behind the scenes. If you are doing more than simple a/b tests, or testing content more complex than landing pages, look beyond a pretty interface. If there isn’t much there, it could be a short relationship. If you’d like to discuss your unique business challenges and how optimization might help, we’d love to talk with you.