A/B Testing: An Essential Part of Maximizing Your Website’s Sales Potential

Nothing is more important to online marketers than determining the best way to connect with customers. And few methods give a clearer or more persuasive snapshot of customer preferences than A/B testing. It’s a way to compare two versions of something — a website design, a product description, an email, a headline, even a CTA button — to figure out which one performs better.

Although A/B testing is widely used for websites and apps, the method is almost a century old, and it’s one of the simplest forms of a randomized controlled experiment. The test works by showing two sets of users different versions of the same element to determine which one makes the stronger impression and does more to drive business metrics.

A/B testing eliminates guesswork during website optimization and gives those evaluating or maintaining the site solid data to inform decisions. “A” refers to the “control” — the original testing variable; “B” refers to the “variation” — a new version of the original testing variable. The version that moves your business metrics needle the most is the winner.

How A/B Testing Began

In the 1920s, British statistician and biologist Ronald Fisher laid out the basic principles behind A/B testing. He wasn’t the first to run experiments like this, but he was the first to figure out the fundamentals and mathematics of A/B testing. Fisher applied it to agricultural experiments, seeking to answer questions such as, “What happens if I put more fertilizer on this land?”

A/B testing was eventually applied to other fields. In the early 1950s it began to appear in clinical trials. In the 1960s and 1970s marketers used it to evaluate direct response campaigns. Online A/B testing emerged with the growth of the internet in the 1990s.

How Does A/B Testing Work?

You start an A/B test by deciding what it is you want to evaluate. Let’s say your metric is the number of visitors who click on a CTA button. You’re curious to know if the size of the button makes any difference. To run the test, you show two sets of users (assigned at random without their knowledge when they visit the site) two different versions where the size of the CTA button is the only variable, then determine which button size causes more visitors to click.
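
To make the mechanics concrete, here is a minimal sketch of how a site might assign visitors to the two versions. It assumes each visitor has a stable identifier; the user ID and experiment name below are illustrative. Hashing the two together yields a random-looking but repeatable 50/50 split, so a returning visitor always sees the same version.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-button-size") -> str:
    """Deterministically assign a visitor to version A or B.

    Hashing the visitor's ID together with the experiment name gives a
    stable, roughly 50/50 split without storing any per-user state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor lands in the same group on every visit:
print(assign_variant("visitor-12345"))  # "A" or "B", stable across visits
```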

Sometimes a confounding variable can skew the results. For example, maybe mobile users of your website tend to click CTA buttons less often than desktop users do. Randomization could give you more mobile users in set A than in set B, which may cause set A to show a lower click rate regardless of the button size it sees. To reduce that bias, you can first divide the users into mobile and desktop groups and then randomly assign them to each version within those groups, resulting in two parallel A/B tests, one for each platform. This procedure is known as “blocking.”
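
Blocking is easy to express in code as well. One way, building on the sketch above and assuming the site can detect the visitor's platform, is to salt the assignment hash with the platform so that mobile and desktop each get their own independent split:

```python
import hashlib

def assign_with_blocking(user_id: str, platform: str) -> tuple[str, str]:
    """Block by platform, then randomize within each block.

    Salting the hash with the platform keeps the mobile and desktop
    tests independent: each block gets its own ~50/50 split, and the
    results are analyzed separately per platform.
    """
    digest = hashlib.sha256(f"cta-test:{platform}:{user_id}".encode()).hexdigest()
    variant = "A" if int(digest, 16) % 2 == 0 else "B"
    return platform, variant

# Two parallel A/B tests, one per platform (platform detection assumed):
print(assign_with_blocking("visitor-12345", "mobile"))
print(assign_with_blocking("visitor-12345", "desktop"))
```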

When Should You Use A/B Testing?

Here are some ways that A/B testing can improve metrics and ultimately get more conversions:

1. Reduce the bounce rate

Visitors to your website may face specific roadblocks when trying to access information. Perhaps it’s confusing copy, bad links, or a hard-to-find CTA button. (Some drop-offs, such as cart abandonment — visitors leaving unpurchased items on the shopping cart page — can be especially frustrating.) A/B testing can help pinpoint these damaging UX issues. You can then employ visitor behavior analysis tools such as heatmaps, Google Analytics, and website surveys to remedy your visitors’ drop-off points.

2. Improve your ROI

It’s difficult to attract the right kind of visitor to your site. A/B testing lets you maximize their value and increase conversions without burning budget to attract new traffic. Even small adjustments can boost conversions with existing visitors.

3. Make low-risk modifications

Minor, incremental changes let you achieve maximum results at minimal cost. For example, you can run an A/B test when you plan to remove or update your product descriptions to see whether the change improves conversions. Or say you’re planning a major feature change: launching it first as an A/B test can show whether the new feature will please your audience before you commit to a full rollout.

4. Eliminate guesswork

Since A/B testing is entirely data-driven, with no room for guesswork or gut feelings, you can quickly determine a “winner” and a “loser” based on informative metrics such as time spent on the page, number of demo requests, cart abandonment rate, and click-through rate. No more relying on instinct.

5. Redesign a website to increase future business gains

Redesigning can range from a minor CTA text or color tweak on specific web pages to completely reimagining the entire site. A/B testing should be used for every significant change. And don’t stop testing when the new site is launched. Test every element to make sure everything is working optimally.

How to Conduct A/B Testing

Take these steps before and during your A/B test:

  1. Pick only one variable to test. Remember, you can test more than one variable for a single web page or email — just be sure you're testing them one at a time.
  2. Identify your goal. Although you'll be measuring several metrics during any one test, choose a primary metric to focus on before you run the test. 
  3. Create a ‘control’ and a ‘challenger.’ At this point you should also formulate a hypothesis about the outcome, so you can later judge your results against your prediction.
  4. Split the sample groups evenly and randomly. A/B testing tools provided by companies such as HubSpot or Google will do this for you.
  5. Determine your sample size. It has to be large enough to yield statistically significant results; a quick way to estimate it is sketched after this list.
  6. Run only one A/B test on a site at any one time. Otherwise the results of each test can be altered in unpredictable ways. 
  7. Wait until the A/B test is over before analyzing results. No peeking or premature conclusions! Results can change surprisingly as the test progresses. 
  8. Ask for feedback from users. It can add insight and context to the results. Exit surveys or polls work well.
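
For step 5, the standard two-proportion formula gives a quick estimate of the sample size you need. The sketch below is a rough guide rather than a substitute for a proper testing tool, and the 5% baseline and 6% target click rates are made-up numbers:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base: float, p_target: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed in each variant to detect a lift from p_base to
    p_target with two-sided significance `alpha` and the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_target) ** 2
    return math.ceil(n)

# Detecting a lift from a 5% to a 6% click rate takes a lot of traffic:
print(sample_size_per_variant(0.05, 0.06))  # -> 8155 visitors per variant
```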

Take these steps after running your A/B test:

  1. Measure the significance of your results using an A/B testing calculator. It will report the confidence level your data produces for the winning variation; the basic math behind such a calculator is sketched after this list.
  2. Take action based on your results. If one variation is statistically better than the other, you have a winner. Complete your test by disabling the losing variation in your A/B testing tool. If neither variation is statistically better, the test was inconclusive; run another test, using what you learned from the inconclusive data to shape a new iteration.
  3. Plan your next A/B test. The A/B test you just finished may have helped you discover a new way to make your marketing content more effective — and given you ideas about other A/B tests to try. Keep up the work of optimizing until your testing ideas are exhausted. There’s always room for improvement!
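
As a companion to step 1, the math behind a basic A/B testing calculator is a two-proportion z-test. Here is a minimal sketch with made-up click and visitor counts; real tools add refinements, but the core calculation looks like this:

```python
from statistics import NormalDist

def ab_significance(clicks_a: int, visitors_a: int,
                    clicks_b: int, visitors_b: int) -> float:
    """Two-sided p-value for the difference between two click rates,
    using a standard pooled two-proportion z-test."""
    rate_a = clicks_a / visitors_a
    rate_b = clicks_b / visitors_b
    pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (rate_b - rate_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Made-up example: B converts at 6% vs. A's 5% over 8,000 visitors each.
p_value = ab_significance(400, 8000, 480, 8000)
print(f"p-value: {p_value:.4f}, confidence: {1 - p_value:.1%}")
# p-value ~0.0055, confidence ~99.4%: B is the statistically significant winner.
```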

