Podcast: Iterative Marketing Podcast
Episode: How To Run An Effective A-B Test
Category: Business
Duration: 00:28:49
Publish Date: 2017-05-09 11:14:39

Description:

Show Notes

A-B testing is the core of experimentation. With the right execution, it not only provides uplift in click-through rate and conversions, but also serves as an audience insight generator. This podcast explores how six things (sample size, random sample, controls, duration, statistical confidence, and testing for insight) can make an A-B test effective and beneficial to all departments in an organization.

What is an A-B Test (2:59 – 4:29)

  • The testing of two different versions of the same content to determine which results in a better outcome
  • A-B tests are important to Iterative Marketing because they are the core of experimentation
  • Can apply to any medium (print, banner ads, direct mail, email, etc.)
  • Tools for A-B testing (Optimizely, Convert, Google Optimize) are becoming more user-friendly. Many testing tools are embedded in platforms like Marketo and Pardot.

Why A-B Testing Is Important (4:30 – 6:06)

  • Grounds decisions about how to allocate marketing resources in data, rather than in gut feelings or personal preference
  • Helps multiple departments find out definitively what the audience prefers

Six Things That Make an A-B Test Work (6:07 – 7:06)

  • Sample size
  • Random sample
  • Controls
  • Duration
  • Statistical confidence
  • Testing for insight

1) Sample Size (7:07 – 9:42): The number of times you need to present version A or version B to determine a clear winner

  • Sample size calculators can help you determine how big of an audience you need to achieve 90% or 95% confidence.
  • Marketers should not attempt a test without a big enough sample. It's important to determine this BEFORE you start the A-B test so you do not waste resources (see the sketch below).
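For those who want to see the math, here is a minimal Python sketch of what a sample size calculator does, using the standard two-proportion formula (via SciPy). The baseline rate, target lift, and confidence/power settings below are hypothetical examples, not numbers from the episode.

    from math import ceil, sqrt
    from scipy.stats import norm

    def samples_per_variant(p_base, p_variant, confidence=0.95, power=0.8):
        """Visitors needed in EACH variant to detect p_base -> p_variant."""
        z_alpha = norm.ppf(1 - (1 - confidence) / 2)  # two-sided test
        z_beta = norm.ppf(power)                      # chance of catching a real lift
        p_bar = (p_base + p_variant) / 2              # pooled rate
        spread = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                  + z_beta * sqrt(p_base * (1 - p_base)
                                  + p_variant * (1 - p_variant))) ** 2
        return ceil(spread / (p_base - p_variant) ** 2)

    # Hypothetical: 3% baseline conversion rate, hoping to detect a lift to 4%.
    print(samples_per_variant(0.03, 0.04))  # about 5,300 visitors per variant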

2) Random Sample (9:43 – 11:42): The sample must not only be large enough; it must also be split between versions randomly.

  • Many tools do this for us; the sketch below shows the typical approach
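As a rough illustration of what those tools do under the hood, here is a minimal sketch of random-but-sticky assignment: hash a stable visitor ID so each person is split at random but always sees the same version. The experiment name and visitor ID are made up for the example.

    import hashlib

    def assign_variant(visitor_id: str, experiment: str = "headline-test") -> str:
        """Split visitors between A and B randomly but deterministically."""
        digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    print(assign_variant("visitor-1138"))  # same visitor, same version, every time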

Charity Break – American Foundation for Suicide Prevention – (11:43 – 12:30)

3) Control (12:32 – 17:10): The efforts put in place to make sure the thing being tested is the only thing that's different between the experience of those getting version A and those getting version B.

  • Test only one variable at a time so you know which change is producing the result
  • Design version A and version B as exact replicas in layout, font size, color, etc., except for the variable being changed, to isolate what is being tested
  • Run the test with version A and version B at the same time so that breaking news, weather, or other outside events do not change the outcome of the test
  • Make sure your audience has not seen either version before the test starts

4) Duration (17:11 – 18:49): How long to run an A-B test

  • In our experience, do not run a test longer than 90 days; beyond that, too many outside factors can impact the result
  • If the test relies on browser cookies, remember that cookies are not reliable for more than a few weeks
  • A test should run long enough to cover full business cycles (see the sketch below)
  • Ex: Running a test Thurs-Mon favors weekend habits, while running it Mon-Thurs favors weekday habits.
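Tying duration back to sample size, a back-of-the-envelope sketch (with hypothetical traffic numbers) shows how to round a test up to whole weeks so weekday and weekend habits are both represented:

    from math import ceil

    def test_duration_days(needed_per_variant: int, daily_visitors: int) -> int:
        """Days to reach the required sample, rounded up to full weeks."""
        days = ceil(2 * needed_per_variant / daily_visitors)  # traffic is split A/B
        return ceil(days / 7) * 7  # whole weeks cover weekday and weekend cycles

    print(test_duration_days(5300, 800))  # 14 days for this hypothetical site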

5) Statistical Confidence (18:50 – 21:35): A statistical calculation that helps us determine whether an A-B test result is repeatable or merely the result of chance

  • We have an easy-to-use A-B confidence calculator on our website. Simply plug in your impressions or sessions alongside your clicks or conversions to find out the statistical significance (a simplified version of the math is sketched below)
  • Usually expressed as a percentage representing the probability that the observed difference is real rather than chance.
  • Marketers usually strive for 95% confidence, although we have taken the results of a test with 90% confidence as usable information, or as a good working hypothesis until a better test can be run.
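The calculator on our site does this for you, but for the curious, here is a simplified sketch of the underlying math: a two-sided, two-proportion z-test on sessions versus conversions. This is not the calculator's actual code, and the counts below are hypothetical.

    from math import sqrt
    from scipy.stats import norm

    def ab_confidence(sessions_a, conv_a, sessions_b, conv_b):
        """Confidence that the difference between A and B is not chance."""
        p_a, p_b = conv_a / sessions_a, conv_b / sessions_b
        p_pool = (conv_a + conv_b) / (sessions_a + sessions_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / sessions_a + 1 / sessions_b))
        z = (p_b - p_a) / se
        return 1 - 2 * (1 - norm.cdf(abs(z)))  # two-sided confidence

    # Hypothetical: 5,000 sessions per version; 150 vs. 190 conversions.
    print(f"{ab_confidence(5000, 150, 5000, 190):.1%}")  # about 97%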

6) Testing for Insight (21:36 – 26:12): Learning more about our audience beyond gaining an increase in click-through rate or conversions.

  • The best A-B tests test the psychographics of an audience segment to gain insight that can be applied to multiple departments in an organization.
  • To get started, brainstorm a hypothesis for how you expect your audience to act and why. Then, build an A-B test to validate or invalidate that hypothesis.
    • Ex: A bad hypothesis would be "The headline 'Don't make these three massive mistakes' will result in more conversions than the headline 'Use these three tips to amp up your results.'"
    • This hypothesis tells us nothing about the audience and applies only to this one piece of content.
    • Ex: A good hypothesis would be "Mary (our persona) will be more likely to convert when presented with an offer that limits her risk, because Mary prefers avoiding risk over new opportunity."

Summary (26:14 – 28:49)

We hope you want to join us on our journey. Find us on IterativeMarketing.net, the hub for the methodology and community. Email us at podcast@iterativemarketing.net, follow us on Twitter at @iter8ive, or join The Iterative Marketing Community LinkedIn group.

The Iterative Marketing Podcast is a production of Brilliant Metrics, a consultancy helping brands and agencies rid the world of marketing waste.

Producer: Heather Ohlman
Transcription: Emily Bechtel
Music: SeaStock Audio

Onward and upward!

