
How to Plan and Run an Effective A/B Split Test for Your B2B Email Campaign in 10 Easy Steps

Good email marketers test; great email marketers test, test, and test again. Consistent, regular testing shows you which aspects of your campaign (content, schedule, B2B email list, etc.) need fixing or improvement, and lets you see which strategies work before you roll them out. However, without a sound testing system, your tests and test results aren't worth acting on.

This post looks at a simple yet highly effective evaluation approach known as A/B split testing, and walks through the steps for planning and conducting these tests in your email marketing campaign.


An A/B split test compares two or more marketing tactics/strategies that have identical inputs except for one particular variable. Varying that single variable isolates which specific change leads to which result. For example, to find the best subject line, you create two emails identical in every respect except their subject lines. One version is sent to one segment of the email list and the other to a different segment, and you see which generates the higher open rate.
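To make the mechanics concrete, here is a minimal sketch of that subject-line example in Python. The send_email function and the list structure are hypothetical placeholders, not any specific email platform's API:

```python
import random

def send_email(to, subject):
    # Placeholder for your email platform's send call (hypothetical).
    print(f"sending '{subject}' to {to}")

def subject_line_split_test(recipients, subject_a, subject_b):
    """Randomly split the list in half and assign one subject line
    to each half; everything else about the emails stays identical."""
    pool = list(recipients)
    random.shuffle(pool)               # randomize to avoid ordering bias
    midpoint = len(pool) // 2
    segment_a, segment_b = pool[:midpoint], pool[midpoint:]
    for address in segment_a:
        send_email(to=address, subject=subject_a)
    for address in segment_b:
        send_email(to=address, subject=subject_b)
    return segment_a, segment_b        # keep assignments for later comparison
```

The only difference between the two sends is the subject line, which is exactly what makes the comparison valid.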

It’s a good idea to master the art and science of conducting A/B split tests in your email marketing campaigns. Besides being an excellent diagnostic tactic, these tests also prepare you for more sophisticated forms of email campaign evaluation. Here are the steps to follow to plan and run effective A/B split tests for your B2B emails.

1. Select the variable to test. The term “variable” in this context refers to the element of your email campaign that you want to change or evaluate. In the above example, the variable being tested is the subject line. Other variables of interest include send time/day, frequency, content type, calls to action, design/layout, opening line, email list segmentation technique, choice of opt-in email list company, etc.

2. Create the test emails. Ideally, you should develop multiple versions of the same email with exactly the same properties except for the variable being tested. The number of versions depends on (and is equal to) the number of unique values the test variable takes. The important thing here is that the variable tested should be the only distinct feature between the email versions.

3. Choose the test indicator. After you’ve determined what you’re testing, select the measurable outcome to use as your indicator. The indicator is what you’ll compare to decide which email version produces the desired result (the highest or lowest value). In the previous example, the resulting open rates serve as the test indicator.

4. Select a representative sample. The choice of whether to use the entire email list or a portion of it as your test sample depends on the statistical significance you’d like to achieve. The key idea here is that you’re selecting the recipients for your email versions.
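For a rough sense of how statistical significance constrains sample size, here is a sketch of the standard two-proportion sample-size formula. The z-values 1.96 and 0.84 correspond to the conventional 95% confidence level and 80% power; the open rates used are purely illustrative:

```python
import math

def sample_size_per_version(baseline_rate, expected_rate,
                            z_alpha=1.96, z_beta=0.84):
    """Recipients needed per email version to detect a lift from
    baseline_rate to expected_rate at ~95% confidence, ~80% power."""
    pooled_variance = (baseline_rate * (1 - baseline_rate)
                       + expected_rate * (1 - expected_rate))
    effect = expected_rate - baseline_rate
    return math.ceil((z_alpha + z_beta) ** 2 * pooled_variance / effect ** 2)

# e.g. to detect a lift in open rate from 20% to 25%:
print(sample_size_per_version(0.20, 0.25))  # 1090 recipients per version
```

Smaller expected differences require much larger samples, which is why testing tiny tweaks on a small list rarely produces trustworthy results.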

5. Split your test sample. After sampling, you need to divide the sample set of recipients into subsets to accommodate the email versions you’ve created. Generally, the number of subsets you need equals the number of email versions you have plus one; the extra subset will be used later in the test (step 9). The sizes of the subsets again depend on the statistical significance required.
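Here is a minimal sketch of that split, assuming your sample is a simple Python list of recipients: with k email versions you produce k + 1 randomized subsets, the last one being the holdout reserved for step 9.

```python
import random

def split_sample(recipients, num_versions, seed=None):
    """Shuffle the sample and divide it into num_versions + 1
    roughly equal subsets; the final subset is the holdout."""
    rng = random.Random(seed)
    pool = list(recipients)
    rng.shuffle(pool)
    num_subsets = num_versions + 1        # one extra for verification
    return [pool[i::num_subsets] for i in range(num_subsets)]

subsets = split_sample(range(1000), num_versions=2)
*test_groups, holdout = subsets
print(len(test_groups), len(holdout))     # 2 test groups, 333 in the holdout
```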

6. Assign one version for each subset. Each separate subset (except for the extra one) should be assigned to receive only one distinct email version.

7. Send the test emails. Before sending, make sure that you’ve correctly followed steps 1 through 6 and have clearly documented the test details, including the version-subset assignments. With that done, it’s time to press SEND.

8. Check your indicator. Wait a reasonable amount of time after your test blasts, then check the values of your chosen indicator for each subset of recipients (i.e., each email version). The decision criterion (highest or lowest indicator value) depends on the indicator you’ve chosen. In many cases you’ll look for the version that gives the highest open rate, click-through rate, etc., or the lowest bounce rate, unsubscribe rate, etc.
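Here is a sketch of how that comparison might look, assuming you can pull per-subset send and open counts from your reporting; the counts are hypothetical. Rather than simply picking the higher rate, a two-proportion z-test (computed by hand here) checks whether the difference is large enough to trust:

```python
import math

def z_test_two_proportions(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test; |z| > 1.96 means the difference in
    open rates is significant at roughly the 95% level."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (p_a - p_b) / se

# Hypothetical results: version A gets 120/500 opens, version B 90/500.
z = z_test_two_proportions(120, 500, 90, 500)
print(f"A: {120/500:.1%}, B: {90/500:.1%}, z = {z:.2f}")
# z ≈ 2.33, above 1.96, so A's lead is significant at roughly 95%.
```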

9. Verify the results. This is where the extra subset comes into play. Take the email version that gives you the best result (the one that meets your decision criteria) and send it to the remaining unassigned subset. Then check the indicator value and see whether it confirms your previous findings.
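As a quick illustration with hypothetical numbers, verification amounts to checking that the winner’s holdout performance lines up with its test-phase performance:

```python
# Hypothetical verification: the winning version scored 24.0% (120/500)
# opens in the test phase; on the 600-recipient holdout it gets 150 opens.
test_rate = 120 / 500
holdout_rate = 150 / 600
print(f"test {test_rate:.1%} vs holdout {holdout_rate:.1%}")
# The two rates are close (24.0% vs 25.0%), so the holdout confirms the
# test-phase winner; a large gap would suggest the original result was
# noise and the test should be rerun.
```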

10. Act on the findings. Your results aren’t worth anything unless you use them as the basis for changing or maintaining a strategy or tactic in your email campaign. A related point: carry out these tests regularly to generate timely results and keep refining your campaign accordingly.