We’re talking about A/B testing for marketing.
This is a topic we talk about a lot with our clients because we value it highly. I think everyone should be A/B testing their marketing assets all the time: to learn, to improve, and to get more conversions. It's one of the most reliable ways to consistently increase your conversions, which means more leads and sales.
Today, I’m going to go through what it is and how to do it in a practical way that you’ll be able to follow along with if you’re interested.
Why A/B Testing Matters
So, before we jump into what it is, let’s talk about why it matters.
As I've worked with many clients over the years, I've noticed a common thread. We always start with the messaging portion of the project, because we view marketing as building a bridge between you and your customers. You wouldn't just pick up a hammer and start hammering away without a plan.
So, we want to start by creating a blueprint: planning out the bridge between you and your customers. Part of that is the message we’re going to be using, and part of it is the design.
One of the objectives of marketing is to build and reinforce memories, and you do that by being consistent. If you don’t know what it is that you’re talking about in your marketing, you won’t be consistent. Therefore, you won’t build or reinforce memories.
The same applies to design: if you haven't settled on certain design elements (colors, fonts, and so on), the design will look different each time someone sees an ad or a post from you, and you won't be building or reinforcing memories.
The Challenge of Certainty
As we’re in the starting phase of a project, creating the blueprint with a client, I see the same pattern happen repeatedly. Clients want to be certain, the team wants to be certain, and it’s basically impossible on the front end.
In fact, I’ve seen that even the most confident marketers often get things wrong. That’s the fear we all have—what if I get this wrong? What if it’s the wrong color? The wrong font? The wrong message? What if we’re telling the story in the wrong way? Or if just one piece is wrong? How much money are we going to lose?
That’s a big, important question. What we do is acknowledge that these are opinions, and that’s okay. When we’re sitting in a room having a conversation, these are opinions—not data. Opinions are good and can be added to our data set, but they are not the complete picture.
If we’re after leads and sales, we need to track and ensure that our decisions are actually leading to those results. Opinions are a starting place, but we must validate them with data—customer behavior.
Opinions vs. Customer Behavior
Are these designs or messages actually getting the results we want? Because at the end of the day, we pay our bills with revenue. If an opinion turns out to be unvalidated by customer behavior, I need to know. I need to find out what moves our customers forward in their journey. If I’m wrong, that’s okay—I just need to know. We, as a company, need to be right. No individual needs to be right.
In fact, if you’re watching this thinking you need to be right, let’s pump the brakes. It’s not true.
If you’re more committed to your own opinions and the delusion that you’re right than to data, customer behavior, leads, and sales, you’re paying a high price—literally, lower revenue. So, we want to avoid that.
Enter A/B Testing
This is where A/B testing comes in. Since we're almost always getting something at least a little bit wrong, I'll share some examples of different results, how we can determine which variant wins through A/B testing, and how we can even assign a dollar amount to that.
The big question when it comes to A/B testing is: What is your opinion worth to you? Is it worth a million dollars? Ten thousand dollars? Because A/B testing is about putting that to the test and increasing revenue—but it also means you might be wrong. Just know that going in.
What is A/B Testing?
It’s where we take two variants—sometimes more, but usually two—of something: an asset, a design, a color, a message, or other elements. We put them against each other in a way that allows us to measure specific outcomes. The goal is to see which variant gets us to the outcome we want.
For example: We might test button colors—does green or orange get more clicks? That’s the variant, and the outcome is which gets clicked more.
We want to set up the test so that both variants are seen equally, ensuring accurate data. We track the outcomes to see which one wins and delivers the results we’re after.
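If it helps to see the mechanics, here's a minimal sketch in Python of what "show both variants equally and track the outcome" means in practice. The function names (`assign_variant`, `record_click`) and the in-memory tally are hypothetical; in reality your testing or analytics tool handles this bookkeeping for you.

```python
import random

# Hypothetical in-memory tally; a real test would live in your testing tool or analytics.
results = {
    "green":  {"views": 0, "clicks": 0},
    "orange": {"views": 0, "clicks": 0},
}

def assign_variant() -> str:
    # 50/50 random assignment, so both variants get seen equally over time.
    return random.choice(["green", "orange"])

def record_view(variant: str) -> None:
    results[variant]["views"] += 1

def record_click(variant: str) -> None:
    results[variant]["clicks"] += 1

def click_rate(variant: str) -> float:
    views = results[variant]["views"]
    return results[variant]["clicks"] / views if views else 0.0
```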
How to Run a Good A/B Test
When building an A/B test, we need to ensure we’re testing things we can control.
Think back to science class (say, biology): first you create a hypothesis, then you test it. I disagree with that approach here, because a hypothesis is a bias. If you say, "Orange will win," that's a prediction based on your expectation.
But in A/B testing, we don’t need a hypothesis. Instead, we ask a question: Which color will get more clicks? That’s the real purpose—finding out what actually works.
We test things we can control: make sure the button is a precise color, and define the outcome clearly.
Avoid testing vague questions like, "Will people like orange or green?" because that's not measurable. Instead, measure actual behaviors: clicks, conversions, scroll depth, and so on.
Measuring Results
You need enough data—patience is key. A good rule of thumb at ClearBrand is about 100 conversions of the specific outcome you’re tracking. It’s not about how many people see the test, but about the number of behaviors (like clicks or purchases).
Once you reach around 100 conversions, you can assess statistical significance. Tools will help with this—no need to do complex math yourself. They’ll tell you if the results are statistically significant and which variant wins.
If you get 100 conversions with close numbers (say, 49 on variant A and 51 on variant B), that means there's no clear winner. You can let it run longer or test something different.
The tools will show you the percentage difference and statistical significance, so you can make an informed decision.
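The underlying math most tools rely on is roughly a two-proportion test. Purely for the curious, here's a sketch in Python using statsmodels; the 49-vs-51 numbers are the hypothetical ones above, and the view counts are made up.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and views for variants A and B.
conversions = [49, 51]
views = [1000, 1000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=views)

# A common convention: p < 0.05 suggests the difference is unlikely to be chance.
if p_value < 0.05:
    print(f"Statistically significant (p = {p_value:.3f})")
else:
    print(f"No clear winner yet (p = {p_value:.3f}); keep the test running")
```

With numbers that close, the p-value comes out nowhere near 0.05, which is the math behind "let it run longer or test something different."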
Summary of the Basics
- Decide what you're testing (the variant).
- Decide what outcome you’re tracking (behavior).
- Wait until you have enough data for confidence.
How to Construct Variants
There are two main approaches:
- Test many things at once.
- Test one thing at a time.
1. Testing Many Things
A data scientist might advise against this because it's hard to identify which change caused the win. For example, if you run two ads with different images, copy, colors, and calls to action, and one wins, you won't know which element caused it.
However, at ClearBrand, we often do this when launching a new site. Before launching, we gather data from the current site—ideally at least 50 days of data. Then, we launch the new site and compare the pre-launch data to post-launch data.
This is effectively an A/B test of the entire site. It’s faster because we’re making multiple changes at once, but it’s also a big leap forward.
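As a rough illustration of that pre/post comparison (the numbers below are invented), the math is just comparing the two periods' conversion rates and the lift between them. You'd still want the same kind of significance check described above before declaring victory.

```python
# Hypothetical totals for ~50 days before and ~50 days after the relaunch.
old_conversions, old_visitors = 120, 9800
new_conversions, new_visitors = 210, 9500

old_rate = old_conversions / old_visitors
new_rate = new_conversions / new_visitors
lift = (new_rate - old_rate) / old_rate

print(f"Old site: {old_rate:.2%}  New site: {new_rate:.2%}  Lift: {lift:+.0%}")
```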
Once the new site is live and data stabilizes, we test one thing at a time to understand what truly works.
2. Testing One Thing at a Time
Once we have a new baseline—say, after doubling conversions—we refine further. We test one element—like button color—by splitting traffic equally (50/50 for two colors, 33/33/33 for three).
We let the test run until a clear winner emerges with statistical significance. This approach helps us understand cause-and-effect relationships.
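One common way tools split traffic is by bucketing each visitor deterministically, so a returning visitor always sees the same variant. A minimal sketch of that idea (the `bucket` function and visitor IDs are made up):

```python
import hashlib

def bucket(visitor_id: str, variants: list[str]) -> str:
    # Hash the visitor ID so the same person always lands in the same bucket,
    # and traffic splits roughly evenly across however many variants you test.
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# 50/50 for two colors, roughly 33/33/33 for three.
print(bucket("visitor-123", ["green", "orange"]))
print(bucket("visitor-123", ["green", "orange", "blue"]))
```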
Testing Ideas vs. Testing Specific Elements
For example, with running shoes, you might test two different headline ideas:
- "Run faster and win more races" vs.
- "The lightest running shoes on the planet."
Or you might test a single word change:
- "Lightest shoes" vs. "Brightest shoes."
The key is to keep tests focused—one idea at a time.
How to Know if Multiple Changes Are Responsible
If you change both button color and text, and one wins, you won’t know which change caused it.
To isolate effects, test one element at a time—small, specific changes.
If you want to test multiple elements, do so sequentially rather than simultaneously, to maintain clarity.
Additional Tips
- Always measure customer behavior—clicks, conversions, purchases.
- Use tools like Google Optimize (free) for website testing—requires some development knowledge.
- For low traffic, consider running ads, surveys, or sales calls to gather data faster.
- Heat maps can provide insights but shouldn’t replace direct testing or customer feedback.
- For branding or naming, crowdsourcing via survey sites can be effective.
In Summary
- A/B testing helps you move from opinions to data.
- Test one thing at a time for clarity.
- Gather enough data—about 100 conversions—to ensure confidence.
- Use tools to simplify analysis.
- When traffic is low, supplement with ads, surveys, or direct outreach.
Final Recap
- We want to A/B test because we're almost always getting something at least a little bit wrong.
- We aim to validate with customer data.
- Set up variants and measure specific outcomes.
- It’s okay to test multiple things initially, but then refine to one element at a time.
- Constantly learn and improve to increase leads and sales.
There are many tools out there—Google Optimize is a good free option if you have some web development experience. If traffic is low, get in front of people via ads, surveys, or calls to gather customer insights.
Thanks for listening to the Clear Brand Academy podcast, where we take the mystery out of marketing and help you get more leads and sales with a clear brand and proven tactics.
If you enjoyed this episode, please leave a review on Apple Podcasts or wherever you listen.