How to Do A/B Testing: 15 Steps for the Perfect Split Test
What is A/B testing?
A/B testing, also known as split testing, is a marketing experiment wherein you split your audience to test variations on a campaign and determine which performs better. In other words, you can show version A of a piece of marketing content to one half of your audience and version B to the other.
Why is A/B testing important?
A/B testing offers many benefits to a marketing team, depending on what you decide to test. The list of elements you can test is nearly limitless, and each test helps you measure the impact on your bottom line. You can also use A/B testing to find out exactly what your audience responds to best. Let’s learn more.
How does A/B testing work?
To run an A/B test, you need to create two different versions of one piece of content, with changes to a single variable.
Then, you’ll show these two versions to two similarly sized audiences and analyze which one performed better over a specific period. Keep in mind that the testing period should be long enough for you to draw accurate conclusions from your results.
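To make those mechanics concrete, here’s a minimal Python sketch that simulates a 50/50 split over a test period. The visitor count and underlying conversion rates are invented for illustration, not benchmarks.

```python
import random

random.seed(42)  # reproducible illustration

visitors = 10_000
true_rates = {"A": 0.10, "B": 0.12}  # made-up underlying conversion rates
results = {v: {"shown": 0, "converted": 0} for v in ("A", "B")}

for _ in range(visitors):
    version = random.choice(["A", "B"])  # each visitor sees A or B at random
    results[version]["shown"] += 1
    if random.random() < true_rates[version]:  # did this visitor convert?
        results[version]["converted"] += 1

for version, r in results.items():
    print(f"Version {version}: {r['shown']:,} visitors, "
          f"{r['converted'] / r['shown']:.2%} conversion rate")
```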
A/B testing helps marketers observe how one version of a piece of marketing content performs against another. The steps below walk through how to set up and run an A/B test that can increase your website’s conversion rate.
1. Pick one variable to test.
As you optimize your web pages and emails, you’ll find there are many variables you want to test. But to evaluate effectiveness, you’ll want to isolate one independent variable and measure its performance. Otherwise, you can’t be sure which variable was responsible for changes in performance. You can test more than one variable for a single web page or email — just be sure you’re testing them one at a time.
To determine your variable, look at the elements in your marketing resources and their possible alternatives for design, wording, and layout. You may also test email subject lines, sender names, and different ways to personalize your emails. Keep in mind that even simple changes, like changing the image in your email or the words on your CTA button, can drive big improvements. In fact, these sorts of changes are usually easier to measure than the bigger ones.
2. Identify your goal.
Although you’ll measure several metrics during any one test, choose a primary metric to focus on before you run the test. In fact, do it before you even set up the second variation. This is your dependent variable, which changes based on how you manipulate the independent variable. Think about where you want this dependent variable to be at the end of the split test. You might even state an official hypothesis and examine your results based on this prediction. If you wait until afterward to think about which metrics are important to you, what your goals are, and how the changes you’re proposing might affect user behavior, then you may not set up the test in the most effective way.
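One way to pin these decisions down before building anything is to write the test plan out explicitly. Here’s a hypothetical sketch; the structure, field names, and values are illustrative, not a standard.

```python
from dataclasses import dataclass

@dataclass
class ABTestPlan:
    independent_variable: str      # the one element you change
    dependent_variable: str        # the primary metric you judge by
    hypothesis: str                # your prediction, stated up front
    minimum_sample_per_group: int  # see the sample-size step below

plan = ABTestPlan(
    independent_variable="CTA button copy ('Download' vs. 'Get the guide')",
    dependent_variable="landing page conversion rate",
    hypothesis="Benefit-led CTA copy will lift the conversion rate",
    minimum_sample_per_group=3_839,
)
```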
3. Create a ‘control’ and a ‘challenger.’
You now have your independent variable, your dependent variable, and your desired outcome. Use this information to set up your control: the unaltered version of whatever you’re testing. If you’re testing a web page, this is the page as it already exists; if you’re testing a landing page, it’s the design and copy you would normally use.
From there, build a challenger — the altered website, landing page, or email that you’ll test against your control. For example, if you’re wondering whether adding a testimonial to a landing page would make a difference in conversions, set up your control page with no testimonials. Then, create your challenger with a testimonial.
4. Split your sample groups equally and randomly.
For tests where you have more control over the audience, like with emails, you need to test with two or more equal audiences to get conclusive results. How you do this will vary depending on the A/B testing tool you use.
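As an illustration of how a tool might produce an equal, random split, one common approach is to hash a stable user ID so each user is consistently bucketed into the same version. This is a generic sketch, not how any particular vendor implements it.

```python
import hashlib

def assign_variant(user_id: str, test_name: str = "cta_copy_test") -> str:
    """Deterministically bucket a user into version A or B.

    Hashing (test_name + user_id) gives a stable, roughly uniform
    50/50 split, and the same user always sees the same version.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # 0-99, approximately uniform
    return "A" if bucket < 50 else "B"

print(assign_variant("user-123"))  # stable across repeated calls
```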
5. Determine your sample size (if applicable).
How you determine your sample size will also vary depending on your A/B testing tool, as well as the type of A/B test you’re running. If you’re A/B testing an email, you’ll probably want to send the test to a subset of your list that’s large enough to achieve statistically significant results. Eventually, you’ll pick a winner and send it to the rest of the list.
If you’re testing something that doesn’t have a finite audience, like a web page, then how long you keep your test running will directly affect your sample size. You’ll need to let your test run long enough to obtain a substantial number of views. Otherwise, it will be hard to tell whether there was a statistically significant difference between variations.
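If you want a rough number before you start, the standard normal-approximation formula for comparing two proportions estimates the visitors needed per variation. Here’s a sketch, assuming a two-sided test at 95% confidence and 80% power; the baseline and expected rates are example inputs.

```python
import math

from scipy.stats import norm

def sample_size_per_group(p_baseline: float, p_expected: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variation for a two-proportion test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_power = norm.ppf(power)          # 0.84 for 80% power
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    effect = (p_expected - p_baseline) ** 2
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect)

# e.g., to detect a lift from a 10% to a 12% conversion rate:
print(sample_size_per_group(0.10, 0.12))  # ~3,839 visitors per variation
```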
How to Read A/B Testing Results
As a marketer, you know the value of automation. Given this, you likely use software that handles the A/B test calculations for you — a huge help. But, after the calculations are done, you need to know how to read your results. Let’s go over how.
1. Check your goal metric.
The first step in reading your A/B test results is looking at your goal metric, which is usually conversion rate. After you’ve plugged your results into your A/B testing calculator, you’ll get a conversion rate for each version you’re testing, along with an indication of whether the difference between them is statistically significant.
2. Compare your conversion rates.
By looking at your results, you’ll likely be able to tell if one of your variations performed better than the other. However, the true test of success is whether your results are statistically significant.
For example, suppose variation A had a 16.04% conversion rate, variation B had a 16.02% conversion rate, and your confidence level is 95%. Variation A has the higher conversion rate, but the difference is not statistically significant, meaning that variation A won’t meaningfully improve your overall conversion rate.
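You can check significance yourself with a two-proportion z-test. Here’s a sketch using the rates from the example above and the statsmodels library; the visitor counts are assumed, since the example doesn’t give them.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts matching 16.04% vs. 16.02% conversion rates:
conversions = [1604, 1602]   # variation A, variation B
visitors = [10_000, 10_000]  # assumed sample size per variation

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.3f}, p = {p_value:.3f}")
# The p-value is far above 0.05, so at a 95% confidence level
# the difference between the variations is not significant.
```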
3. Segment your audiences for further insights.
Regardless of significance, it’s valuable to break down your results by audience segment to understand how each key group responded to your variations (one way to compute segment-level rates is sketched after this list). Common variables for segmenting audiences are:
- Visitor type, or which version performed best for new visitors versus repeat visitors.
- Device type, or which version performed best on mobile versus desktop.
- Traffic source, or which version performed best based on where traffic to your two variations originated.
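If you have a per-visitor log of your test, a simple group-by gets you segment-level conversion rates. Here’s a sketch with pandas; the column names and data are hypothetical.

```python
import pandas as pd

# Hypothetical per-visitor test log:
df = pd.DataFrame({
    "variation": ["A", "A", "B", "B", "A", "B"],
    "device":    ["mobile", "desktop", "mobile", "desktop", "mobile", "mobile"],
    "converted": [1, 0, 1, 1, 0, 0],
})

# Conversion rate for each variation within each segment:
segment_rates = (df.groupby(["device", "variation"])["converted"]
                   .mean()
                   .unstack("variation"))
print(segment_rates)  # one row per device, one column per variation
```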
Start A/B Testing Today
A/B testing allows you to get to the truth of what content and marketing your audience wants to see. Read the full blog on our partner HubSpot’s Blog.
Contact Us today to optimize your email marketing.