11 A/B Testing Best Practices to Boost Your Leads and Subscribers

Are you following A/B testing best practices for your opt-in marketing campaigns? If you’re not, the chances are you’re missing out on major opportunities to grow subscriber numbers and improve lead generation.

Website split testing is an essential conversion optimization practice, but it’s not always easy to be sure you’re doing it right.

This guide will help you with those issues. We’ll share some A/B testing best practices to help you get reliable results from your split testing strategy so you can attract more leads and boost subscriber numbers.

Why You Should Split Test Your Marketing Campaigns

There are several reasons why split testing website marketing campaigns makes sense. Web A/B testing helps you make decisions based on data, rather than guesswork, so you know for sure when a particular marketing tactic or campaign is working.

It also helps you avoid the case study trap. That’s when people read case studies and copy the tactics mentioned, without knowing for sure if they’ll work for their own business. Here’s a hint: every business is different and you can’t assume that what worked for others will work for you.

When you A/B test a website marketing campaign, you can experiment with different ideas. This is useful because sometimes a small change can make a big difference to the results you get.

One of the key areas to test to increase lead generation and subscriber growth is your optin form. That’s because this is where people sign up to become subscribers or leads.

What Can You Split Test?

When testing forms, which areas should you focus on? You’ll want to pay attention to form layout best practices so you can create the best website forms. There are a number of key areas to test, including:

  • Headlines and subheadings
  • Copy
  • Form design
  • Call to action (CTA)
  • Images
  • Colors

Learn more in this guide on which split tests to run.

Now, let’s get to the A/B testing best practices. Whether you’re testing forms or split testing your web page, there are some best practices that always apply. Here’s the full list so you can navigate easily:

  1. Test the Right Items
  2. Pay Attention to Sample Size
  3. Make Sure Your Data’s Reliable
  4. Get Your Hypothesis Right
  5. Schedule Your Tests Correctly
  6. Get the Duration Right
  7. Don’t Make Mid-Test Changes
  8. Test One Element at a Time
  9. Keep Variations Under Control
  10. Pay Attention to the Data
  11. Always Be Testing

1. Test the Right Items

One of the most important website testing best practices is to test items that make a difference to the bottom line.

For example, HubSpot suggests you optimize the pages people visit most.

Or you may want to focus on your key lead generation pages and optimize the optin forms they contain.

To find your most visited pages in Google Analytics, go to Behavior » Site Content » All Pages.

[Screenshot: Google Analytics Behavior » Site Content » All Pages report]

Once you know what these are, you’ll know where to place email subscription forms and lead magnet optin forms.

2. Pay Attention to Sample Size

Another of the best practices for A/B testing is to get the sample size right.

If you don’t perform your test on enough people, you won’t get reliable results. That also means that any decisions you make on the basis of that data may be flawed.

One of the best ways to work out the ideal sample size is to use Optimizely’s sample size calculator. Enter your current conversion rate and the percentage increase you’d like to detect, and it will automatically calculate the number of visitors you’ll need for your A/B test.

[Screenshot: Optimizely’s sample size calculator]
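By the way, if you like to see the math, calculators like this are based on a standard statistical power calculation you can script yourself. Here’s a minimal Python sketch using the textbook two-proportion formula (a generic approximation, not Optimizely’s exact implementation):

```python
# Rough sample-size estimate per variation for a two-proportion A/B test,
# using only the Python standard library. This is the textbook power
# calculation, not Optimizely's exact formula, so treat it as a sanity check.
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, relative_lift,
                              alpha=0.05, power=0.80):
    p1 = baseline_rate                        # control conversion rate
    p2 = baseline_rate * (1 + relative_lift)  # rate you hope to detect
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided, 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 80% power
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p2 - p1) ** 2

# Example: 3% baseline conversion rate, hoping to detect a 20% relative lift
print(round(sample_size_per_variation(0.03, 0.20)))  # roughly 13,900 per variation
```

Notice how quickly the required sample grows as the lift you want to detect shrinks. That’s why tests on low-traffic pages take so long to finish.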

3. Make Sure Your Data’s Reliable

With website split testing, there’s another important measure of data reliability called statistical significance. In simple terms, this is a way of determining that your results aren’t due to random chance.

To identify statistical significance for your A/B test, use Visual Website Optimizer’s statistical significance tool.

Type in the number of visitors you’ve tested for your original marketing campaign (called the control) and the one you’ve changed (called the variation), then press the Calculate Significance button.

[Screenshot: VWO’s statistical significance tool]

You’ll get a result that shows the p-value (another measure of reliability) and tells you whether the test is statistically significant with a simple Yes or No.

[Screenshot: VWO statistical significance results]

Pay attention to the confidence rating, which is the likelihood that your results reflect a real effect rather than random chance. The industry standard confidence rating is 95%, though in the Optimizely tool shown in the last tip, you can adjust it to a level you’re comfortable with.

[Screenshot: sample size calculator confidence rating setting]
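If you’re curious what’s happening under the hood, significance calculators like this are typically built on a two-proportion z-test. Here’s a minimal Python sketch of that textbook test (VWO’s exact implementation may differ, and the visitor numbers are hypothetical):

```python
# Minimal significance check for an A/B test: a two-sided, two-proportion
# z-test using only the standard library. Calculators like VWO's are built
# on the same idea, though their exact implementations may differ.
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(control_visitors, control_conversions,
                    variation_visitors, variation_conversions):
    p1 = control_conversions / control_visitors      # control rate
    p2 = variation_conversions / variation_visitors  # variation rate
    pooled = ((control_conversions + variation_conversions)
              / (control_visitors + variation_visitors))
    se = sqrt(pooled * (1 - pooled)
              * (1 / control_visitors + 1 / variation_visitors))
    z = (p2 - p1) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

p = ab_test_p_value(5000, 150, 5000, 190)  # hypothetical test numbers
print(f"p-value: {p:.4f}")  # about 0.027
print("Significant at 95% confidence" if p < 0.05 else "Not significant")
```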

4. Get Your Hypothesis Right

When you start your testing without a hypothesis, you’re wasting time.

A hypothesis is a statement of what you need to test, why you’re testing it, and what results you expect to see once you make the change. With this structure in place, you’ll know the scope of your test and whether it has succeeded or failed. Without it, your testing is just a guessing game.

To form a hypothesis, use this template from Digital Marketer:

Because we observed [A] and feedback [B], we believe that changing [C] for visitors [D] will make [E] happen. We’ll know this when we see [F] and obtain [G].

Here’s how you could fill this in for your email newsletter optin form:

Because we observed a poor conversion rate and visitors reported that our optin form was too long, we believe that reducing the number of form fields for all visitors will increase newsletter signups. We’ll know this when we see an increase in newsletter signups over a two-week testing period and get customer feedback showing that people find the optin form less complicated.

5. Schedule Your Tests Correctly

Test scheduling is one of the most crucial A/B testing best practices.

Here’s why: if you’re not testing like against like, you can’t trust the results.

In order to get reliable results, you’ll need to run your A/B tests for comparable periods.

Don’t forget to account for seasonal peaks and troughs. Don’t test traffic on Black Friday against traffic on an ordinary day in mid-January.

To find out how your traffic performs over a couple of months, log in to Google Analytics. Go to Audience » Overview, and change the period to Last 30 days.

[Screenshot: Google Analytics Audience Overview set to Last 30 days]

Then click on Compare to and the previous 30 days will automatically be selected.

[Screenshot: Google Analytics Audience Overview compare period]

Click Apply and you’ll get a quick snapshot of traffic patterns.

[Screenshot: Google Analytics Audience Overview period comparison]

This will give you a better idea of traffic patterns so you can select an ideal period to run your A/B test.

6. Get the Duration Right

Test duration is another essential factor in determining the reliability of your results. A test with several variations and a target of 400 conversions will need to run much longer than one with just a control, a single variation, and a target of 100 conversions.

Use this chart from Digital Marketer to work out the ideal duration for split testing your website optin forms.

[Chart: Digital Marketer’s A/B testing duration guide]
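If you don’t have the chart handy, you can get a back-of-the-envelope duration estimate from your required sample size and daily traffic. A quick Python sketch (it assumes traffic is split evenly across variations, and the example numbers are hypothetical):

```python
# Back-of-the-envelope duration estimate, assuming traffic is split evenly
# across all variations (including the control). A planning aid, not a
# replacement for the duration chart above.
import math

def estimated_test_days(sample_per_variation, num_variations, daily_visitors):
    total_needed = sample_per_variation * num_variations
    return math.ceil(total_needed / daily_visitors)

# Example: ~13,900 visitors per variation (from the sample size sketch in
# tip #2), a control plus one variation, 2,000 daily visitors on the page
print(estimated_test_days(13_900, 2, 2_000))  # 14 days
```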

7. Don’t Make Mid-Test Changes

It’s easy to get so excited about the results you’re seeing during a test that you want to rush out and implement more changes.

Don’t do it.

If you interrupt the test before the end of the ideal testing period (see the previous tip), or introduce new elements that weren’t part of your original hypothesis (see tip #4), your results won’t be reliable. That means you’ll have no idea whether one of the changes you made is responsible for a lift in conversions.

8. Test One Element at a Time

One golden rule of A/B testing forms and web pages is: test one element at a time. If you’re testing an optin form for marketing, test the headline OR the CTA OR the number of form fields. That’s the only way you’ll know for sure if that ONE element makes a difference to lead generation or subscriber signups.


If you test more than one element, then you need multivariate testing. We explain the difference in our guide to split testing vs. multivariate testing.

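To see why multivariate tests need so much more traffic, just count the combinations: every extra element multiplies the number of versions, and each version needs its own sample. A quick illustration in Python (the element options here are made up):

```python
# Why multivariate testing needs more traffic: each extra element multiplies
# the number of combinations, and every combination needs its own sample.
# The element options below are hypothetical.
from itertools import product

headlines = ["Headline A", "Headline B"]
ctas = ["Subscribe", "Get the Guide"]
form_fields = [2, 4]

combos = list(product(headlines, ctas, form_fields))
print(len(combos))  # 8 combinations, vs. just 2 versions in a simple A/B test
```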

9. Keep Variations Under Control

Related to that, don’t test too many variations at once. That’s a classic split testing mistake. As you saw in the Digital Marketer chart, the more variations there are, the longer you have to run the tests to get reliable results.

A/B testing best practices suggest you test between 2 and 4 variations at the same time. That gives the best balance of test duration and efficiency.

10. Pay Attention to the Data

We’ve all got gut feelings about how our marketing is performing, but the great thing about split testing is that it gives you data to back up those feelings – or to show that you’re wrong.

Never ignore the data in favor of your gut. If you’ve followed our advice on how to create split tests, you’ll get reliable data that’ll help you to improve conversions.

11. Always Be Testing

Our last tip: always be testing. Once you’ve got enough data from your original campaign (say, at least a few weeks’ worth), you can start using A/B testing to improve your results.

Incremental changes can soon add up, as many OptinMonster customers have found. Escola EDTI used split testing to get a 500% boost in conversions.

[Screenshot: Escola EDTI case study]

And Logic Inbound got a whopping 1500% conversion boost by split testing its OptinMonster marketing campaigns.

[Screenshot: Logic Inbound case study]

How to Split Test Your Campaigns with OptinMonster

Want to A/B test your own marketing campaigns so you can aim for similar results? We’ll tell you how to do that in this section, with the A/B testing tool that’s built into OptinMonster.

Follow our instructions to create and publish your first campaign.

Check out this video tutorial on creating your split test, or keep reading for step-by-step instructions:

From the OptinMonster dashboard, go to the three-dot menu and select A/B Split Test.


This’ll bring up a box where you can name your test and add some notes about the change you plan to make. Remember, you’re only going to change a single element.

Click Create Split Test. That will take you to the campaign builder.


Make your change, then save and publish the campaign as normal.


OptinMonster will automatically segment your audience and collect conversion data, which you will see in the conversion analytics dashboard.

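If you’re wondering how tools like this keep each visitor seeing the same version, the usual technique is deterministic, hash-based bucketing. Here’s a generic Python illustration of that idea; it’s a sketch of the general approach, not OptinMonster’s actual implementation:

```python
# Generic illustration of deterministic, hash-based visitor bucketing: the
# common technique that keeps a returning visitor in the same variation.
# This is NOT OptinMonster's actual implementation.
import hashlib

def assign_variation(visitor_id: str, variations: list[str]) -> str:
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variations)  # stable bucket per visitor
    return variations[bucket]

print(assign_variation("visitor-42", ["control", "variation"]))
```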

When you have a winning variation, return to the three-dot menu in the OptinMonster dashboard, and select Make Primary to make that the main campaign.


You can also toggle the button at the side of the campaign to turn off the other variation.


That’s it!

Now that you know how to use the A/B testing best practices that’ll make a real difference to conversions, check out our guide to split testing your email newsletters. And get more inspiration from EczemaCompany.com, which increased conversions by 158% with split testing.

Finally, follow us on Facebook and Twitter for more tips and in-depth guides.

Sharon Hurley Hall has been a professional writer for more than 25 years, and is certified in content marketing and email marketing. Her career has included stints as a journalist, blogger, university lecturer, and ghostwriter.