Cut Costly A/B Testing Mistakes

Although it's most commonly used in reference to digital advertising campaigns and website landing pages, A/B testing is far from a new concept. Biologist and statistician Ronald Fisher was not the first to use A/B testing principles, but he was responsible for solidifying them in the 1920s. It was much later that the comparison testing process was adopted by marketers for optimizing their "snail mail" campaigns.

The long and varied history of the A/B test doesn't mean that it has been perfected – or even that using it properly is as simple as it sounds. In reality, there are many ways your split testing can go wrong, whether you're exploring the best copy for a landing page or the most effective ad creatives to drive conversions. Mistakes in your testing process not only provide you with inaccurate data but can also lead to further errors in future ad campaigns or landing pages.

These errors in the A/B testing process can add up to a significant cost for your business. You waste time, effort and expense not only on ineffective testing, but also on future ads and landing pages built on incomplete data. According to a recent study, the misuse of A/B testing costs online retailers up to $13 billion every year. The following are some of the most common mistakes that can prevent you from getting accurate results from your A/B tests – regardless of what you are testing.


1. The Sample Size Is Too Small

One of the easiest mistakes to make when running A/B tests is using the wrong sample size. Many marketers choose a test sample that is not large enough to produce reliable results, then act on that inaccurate data. The right sample size for your A/B test depends on a variety of factors, including your baseline conversion rate, the minimum change you want to be able to detect, and your threshold for statistical significance (95 percent is the standard). You can use an online sample size calculator to determine the sample size you need to make well-founded decisions.
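
If you want to see the arithmetic those calculators perform, the sketch below applies the standard two-proportion sample-size formula. The baseline rate, minimum detectable lift, significance level and power shown are hypothetical placeholders – swap in your own numbers.

```python
# A minimal sketch of the standard two-proportion sample-size formula that
# most online A/B test calculators use. All input numbers are hypothetical.
from math import sqrt, ceil
from scipy.stats import norm

baseline_rate = 0.05   # current conversion rate (5%)
target_rate = 0.06     # smallest lift worth detecting (5% -> 6%)
alpha = 0.05           # 95% significance threshold (two-sided)
power = 0.80           # 80% chance of detecting a real effect of this size

z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
z_beta = norm.ppf(power)            # critical value for the desired power

p_bar = (baseline_rate + target_rate) / 2
numerator = (
    z_alpha * sqrt(2 * p_bar * (1 - p_bar))
    + z_beta * sqrt(baseline_rate * (1 - baseline_rate) + target_rate * (1 - target_rate))
) ** 2
n_per_variant = ceil(numerator / (target_rate - baseline_rate) ** 2)

print(f"Visitors needed per variant: {n_per_variant}")
```

With these example numbers, the formula calls for roughly 8,000 visitors per variant – far more than many marketers assume, which is exactly why acting on a small sample is so risky.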


2. The Test Ended Too Soon

Another common A/B testing mistake that leaves testers with less-than-reliable results is ending the test too soon. Much like having a sample size that is too small, not letting your test run long enough can lead to optimization decisions based on incomplete information. Most commonly, marketers and advertisers stop an A/B test as soon as they see what seems to be a significant change – too soon to know whether that change will last.

According to researchers, you can see a "temporary significant effect" in up to half of the A/B tests that you perform. The initial rise in conversions that you see after changing your text or images may be a false positive. It's only by continuing past that point that you can determine whether the changes you've made will keep improving your results in the long term.
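
To see how stopping at the first "significant" reading inflates false positives, the simulation sketched below runs an A/A test – both variants share the same true conversion rate, so any "winner" is pure noise – and compares checking the result daily against waiting for a fixed horizon. The traffic and conversion figures are hypothetical.

```python
# Hypothetical simulation of the "peeking" problem: stopping an A/B test at
# the first significant interim result inflates the false positive rate.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value from a two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

true_rate = 0.05        # identical for both variants (an A/A test)
daily_visitors = 500    # per variant, per day
days = 20
trials = 2000

peeking_hits = 0        # tests that declared a "winner" at any daily peek
fixed_horizon_hits = 0  # tests significant at the planned end date only

for _ in range(trials):
    conv_a = conv_b = n_a = n_b = 0
    stopped_early = False
    for _ in range(days):
        conv_a += rng.binomial(daily_visitors, true_rate)
        conv_b += rng.binomial(daily_visitors, true_rate)
        n_a += daily_visitors
        n_b += daily_visitors
        if not stopped_early and p_value(conv_a, n_a, conv_b, n_b) < 0.05:
            stopped_early = True
    peeking_hits += stopped_early
    fixed_horizon_hits += p_value(conv_a, n_a, conv_b, n_b) < 0.05

print(f"False positives with daily peeking:   {peeking_hits / trials:.1%}")
print(f"False positives at the fixed horizon: {fixed_horizon_hits / trials:.1%}")
```

Since the two variants are identical, only about 5 percent of tests should come out "significant" at the fixed horizon, while the peeking strategy flags a false winner several times as often – the temporary significant effect described above.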


3. The Test Included Too Many Variables

One of the most important concepts behind A/B testing is that you should only change one element at a time. There's a good reason for this: if version A and version B have multiple differences, even if version B performs significantly better, you won't know which element or elements made the difference. While you've optimized that ad or landing page, you don't have data you can use to improve future marketing campaigns.

That's why it is critical that your A/B test changes only one element at a time, whether that is the headline, images, layout, call-to-action button, or how you describe the offer. If you want to test changes to multiple elements, you can – but each change needs to be a separate test, with only one difference from the control. This way, when you achieve a significant difference, you know exactly where the credit goes.


4. The Test Had Poor Timing

When you choose to run your A/B test can have just as much of an impact on the results as what you test, your sample size, and the duration of the test. Marketers often make the mistake of running a test during high-traffic periods, such as seasonal sales or holiday events. The theory is that the high volume of visitors will bring the test to statistical significance that much more quickly.

However, these spikes are evidence of irregular visitor behavior, which means the results of your test will not be representative of your brand's usual traffic patterns. For a more accurate A/B test, you need to avoid periods that historically have had unusual traffic – as well as promotions and offers that may affect traffic patterns in the future. That doesn't mean you can't do A/B testing on special promotions – done properly, these tests can provide valuable insights into customer behavior and preferences during your busiest seasons. Some keys to promotional A/B tests include segmenting your returning visitors, running longer promotions to gather more accurate data, and keeping promotional test data separate from your regular results.


Conclusion

A/B testing has remained a popular tool in digital marketing because of its power to provide insights into the small changes that can make a large difference in conversions and revenue. Although the mistakes mentioned above can distort your results – leading to further optimization errors – you are now equipped with the information you need to avoid them and get accurate results from your A/B tests.

