As marketers, we constantly strive to improve our results. And when it comes to learning – and proving – efficacy, something as simple as A/B testing is paramount. A/B testing, the process of running two versions of a marketing piece (web page, email, etc.) to see which performs better, lets us tweak small details in our programs and see incrementally larger results. But, like many marketing strategies, A/B testing can produce mixed or even inaccurate results if it is not properly executed.
A Scientific Approach
Thinking back to the scientific method we learned in high school, we need to remember to change only one variable at a time when we A/B test. There are many questions we want answered – e.g. whether links work better in bold or italic, whether images are more effective at different sizes, whether calls-to-action work better with different verbs or adjectives, whether more text is better or worse – but we can’t test all of those things with one A/B test. The two versions in each test should be identical except for that one variable. This way, you can be sure the results speak specifically to that one change, rather than to a tangle of variables.
A successful A/B test is one that teaches you something. You might find that a change didn’t have a measurable effect in a particular test, but that doesn’t make the test a failure. The lack of measurable change is a result in and of itself, and it can teach you just as much about what is not important to your customers as another test may teach you about what is important.
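To decide whether a difference between two versions is a real result or just noise, a common approach is a two-proportion z-test on the click (or conversion) counts. Here is a minimal sketch using only the Python standard library; the function name and the click/send numbers are hypothetical, purely for illustration:

```python
from math import erf, sqrt

def ab_test(clicks_a, sends_a, clicks_b, sends_b):
    """Two-proportion z-test: is variant B's click rate
    significantly different from variant A's?
    Returns (lift, two-sided p-value)."""
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    # Pooled click rate under the null hypothesis (no difference)
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# Hypothetical campaign: 5,000 sends per variant
lift, p = ab_test(clicks_a=120, sends_a=5000, clicks_b=150, sends_b=5000)
print(f"lift: {lift:.4f}, p-value: {p:.3f}")
```

In this made-up example the p-value comes out above the conventional 0.05 threshold, so you would treat the difference as inconclusive rather than a win – exactly the kind of "no measurable effect" result described above, which is still worth recording.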
A Singular View of the Customer
A/B testing can help you refine your messaging for particular customer segments. But if you don’t know who your customers are, you will misidentify their needs and preferences when you A/B test. If you’re testing different messaging cadences for two different segments, be sure you understand how those segments actually differ. Having a singular view of your customer – being able to identify them across channels and follow them on their journey to purchase – ensures your customer segments will provide meaningful insight.
Actually DO Something with the Information
So, you learned that customers in one segment prefer emails once a week, while customers in another segment click at a higher rate when sent emails twice a week. That’s not just good information – it’s actionable information. Put your results into practice and change the cadence of your marketing to meet the preferences of those segments. And then keep testing. Every insight should tell you what to do – or what not to do – to make your marketing efforts increasingly effective.