Imagine you’re an online marketer for, I dunno, say a pet supply store that specializes in poodles.

And you just got the results back from an email A/B test you ran.
The numbers are freakin’ mind-blowing!
So you dash off a message to your boss proudly declaring:
Hey Fred,
Got the results back from that subject line A/B test we were talking about.
“How to teach poodles to dance” brought us a 47% higher open rate than “Want to teach poodles to dance?”
Boo-ya!
Thought you’d like to know.
OK, swell.
But what are you going to do with your newfound data?
What does that A/B test really tell you about the people who are on your list?
And how will those test results help you improve your email marketing overall?
It doesn’t matter if you’re selling poodle supplies, slingshots or SaaS solutions — randomly doing tests won’t earn you much in-depth insight about your audience.
So here’s a 3-step process for conducting systematic email A/B tests and interpreting what the data means.
1) Determine a goal and select a segment
What do you want to get out of email A/B testing?
Maybe you’d like to amp up your open rates.
Or boost click-throughs to a sales page that’s converting like crazy.
Whatever it is, you need to have a goal in mind before loading up that A/B campaign.
And of course, your goal will dictate exactly what you test. It could be things like:
- Subject lines
- Calls to action
- Plain-text emails versus HTML
- Sender names
- Headlines
- Personalization
- Layout, images and design
- Email length
Just make sure you test one element at a time. Otherwise, you won’t know what was really behind the results.
And finally, determine if you want to test an email for your entire list or just a segment of it.
People with small or mid-sized email lists may need to send to everyone in order for the A/B test to reach statistical significance (we’ll get to this in a moment).
But if you have a massive list, and you’ve segmented it well, your insight will be much richer if you’re able to test just a targeted chunk of the audience.
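Not sure how big that chunk needs to be? Here’s a rough back-of-the-envelope sketch in Python using the standard two-proportion sample size formula. The baseline open rate, the lift you hope to detect, and the 95% confidence / 80% power defaults are all assumptions — swap in your own numbers:

```python
import math

def sample_size_per_variant(baseline_rate, relative_lift,
                            z_alpha=1.96, z_beta=0.84):
    """Rough subscribers needed per variant to detect a relative lift
    in open rate at ~95% confidence and ~80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# Assumed numbers: a 20% baseline open rate, hoping to detect a 10% relative lift
print(sample_size_per_variant(0.20, 0.10))  # ~6,500 subscribers per variant
```

If that number is bigger than your whole list, that’s your cue to send the test to everyone rather than a segment.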
2) Develop a solid hypothesis

You first need a hypothesis. This is essentially an assumption (derived from facts you already have) that you’re basing your test on.
It’s the reason you’re doing the test in the first place. You want to find out if your assumptions about your email list are correct or not.
Your hypothesis could be around changing something already in place. Here’s an example:
Say your business uses a long-ish template for its newsletters. And you think the length of those emails could be hurting your click-through rates.
So now that you’ve identified a problem with the control (the long-ish template), you’d then come up with a solution. In this case, let’s say removing graphics to make the template shorter.
With that in mind, your hypothesis would simply be:
“Shortening the length of our email newsletters by removing the graphics will result in a higher click-through rate.”
Now, A/B testing subject lines for broadcast emails (probably the most common email test) is a little different.
It requires more of a systematic approach.
I recommend first creating a list of a few things you would like to learn from the A/B tests. This might include stuff like:
- Do emoticons in subject lines improve open rates for our audience?
- Do short subject lines work better than long subject lines?
- Do questions in subject lines result in more opens?
Then your hypothesis for each A/B test you run should be based on the above questions.
For example, let’s take the emoticon question. Your hypothesis for a test could simply be:
“Adding an emoticon to the email subject line will increase both open rates and click-through rates.”
That could be a nice, clean test where the only thing that changes is the emoticon. Something like:
SUBJECT LINE A: 5 grooming tips for new poodle owners
SUBJECT LINE B: 🐩 5 grooming tips for new poodle owners
Now, let’s go back to the poodle example. Let’s say you want to know whether “how-to” subject lines work for your audience. A hypothesis for that test might look like:
“A specific, how-to subject line will result in a higher open rate than using a question as a subject line.”
SUBJECT LINE A: How to teach poodles to dance
SUBJECT LINE B: Want to teach poodles to dance?
So let’s say subject line A won by a landslide. The problem here is you still don’t know exactly why it beat subject line B.
And that brings us to the final step…
3) Track & analyze your A/B test results
First, don’t do anything with your results until you’re confident your test has reached statistical significance (an online significance calculator can figure that out for you).
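If you’d rather see the math than trust a black box, most of those calculators run a two-proportion z-test under the hood. Here’s a minimal Python sketch — the open and send counts are invented, picked to mirror the 47% lift from the poodle example:

```python
import math

def ab_test_p_value(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test on open rates; returns the two-sided p-value."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Invented counts: A gets a 29.4% open rate, B gets 20% (that 47% relative lift)
p = ab_test_p_value(441, 1500, 300, 1500)
print(f"p = {p:.2e}")  # well below 0.05, so the lift looks real
```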
Once you’re sure you’ve got a valid result, it’s time to determine what it really means.
Don’t jump to conclusions based on one email test.
Just because a “how-to” subject line brought a massive lift in a single test doesn’t mean you should go crazy with “how-to” subject lines.
Think of all the other variables at play. For example, any of these factors could influence an open rate:
- Subject line length
- Emoticons in the subject line
- Emotional language
- The curiosity it evokes
- The benefits it emphasizes
- The tone of voice used
It’s very difficult to determine exactly what was behind that winning subject line when you examine the results in isolation.
So that’s why it’s important to keep running tests around what you’d like to learn while tracking and analyzing your data as a whole.
Look at what email A/B testing is teaching you about your customers
This comes down to what MECLABS calls “customer theory.” As they put it:
“To really drive sustainable returns, you must look past a test that simply tells you to use the red button instead of the blue button, and instead see what split testing is teaching you about your customers.”
I advocate applying insights from previous A/B tests to build a better understanding of your target buyer overall. But be careful when drawing conclusions across different tests. Once again, there are a lot of variables at play. If possible, set some rules for the emails you’re testing by:
- only testing on certain days
- only testing at certain times
- sending the test to the same list segment
But even just being aware of these variables — by noting when the emails were sent, for example — will help you avoid drawing conclusions that are misleading.
When you keep track of all your hypotheses and test results, you can start to see trends emerge in the data.
Do short subject lines almost always result in higher open rates than longer subject lines?
Is graphic-laden email content bringing fewer clicks every time?
Tracking your A/B test results can help you reliably answer these questions, which in turn, will help you steadily craft smarter email marketing campaigns.
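Once your log is structured, even a few lines of code can surface those trends. Here’s a hypothetical Python sketch — the tags and lift figures are made up, and in practice this data would live in the template below:

```python
from collections import defaultdict

# Made-up test log; each entry tags the hypothesis and records the relative lift
tests = [
    {"tag": "how-to subject line", "lift": 0.47},
    {"tag": "how-to subject line", "lift": 0.12},
    {"tag": "emoticon in subject", "lift": -0.03},
    {"tag": "emoticon in subject", "lift": 0.01},
    {"tag": "short subject line",  "lift": 0.08},
]

# Average the lift per hypothesis to see which patterns hold up over time
by_tag = defaultdict(list)
for test in tests:
    by_tag[test["tag"]].append(test["lift"])

for tag, lifts in sorted(by_tag.items()):
    avg = sum(lifts) / len(lifts)
    print(f"{tag}: {avg:+.0%} average lift across {len(lifts)} test(s)")
```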
Track your email A/B testing with this free template
To help make it easier to spot these trends, use this email A/B testing template to keep track of your hypotheses, test results, conclusions and other vital information. It’s a Google spreadsheet, so just be sure to save a copy to your own Google Drive before filling it out: