
Is A/B testing still relevant in B2B marketing?


Everyone has their go-to favorites when it comes to revenue marketing strategies and methods… but as the industry grows, have some of these strategies become less relevant or effective?

We’ve been hearing conversations about whether marketers focused on revenue growth still find A/B testing useful, particularly at larger companies. We want to keep that discussion going and explore the ins and outs of the method, including what A/B testing is, why we do it, what marketers on both sides of the debate have to say, and how to optimize your own tests.

What is A/B testing?

A/B testing, also known as split or bucket testing, is a statistical method for comparing two or more versions of a variable (like a blog post, web page, or advertisement) against each other to determine which performs best, and whether the difference between the versions is statistically significant.

Running an A/B test lets you directly compare changes and see whether they have a positive or negative impact on the consumer and, in turn, on conversion rates. It also stops you from guessing or assuming what your customers want, and instead puts data and factual insight at the forefront of your decision-making process.
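To make “statistically significant” concrete, here’s a minimal sketch of the kind of check an A/B testing tool runs under the hood: a standard two-proportion z-test comparing conversion rates between version A and version B. The visitor and conversion counts are hypothetical.

```python
# A minimal sketch of a two-proportion z-test for comparing the
# conversion rates of two variants. All numbers are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for the difference
    between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided test
    return z, p_value

# Hypothetical results: 120 conversions from 4,000 visitors on version A,
# 160 conversions from 4,000 visitors on version B.
z, p = two_proportion_z_test(120, 4000, 160, 4000)
print(f"z = {z:.2f}, p = {p:.3f}")
# z = 2.43, p = 0.015 -> significant at the usual 0.05 level
```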

Why do we do A/B tests?

A/B tests help you find different ways to improve a variable’s performance. The first is seeing whether changing certain aspects of a variable can improve the user experience. For example, perhaps moving a Call-To-Action (CTA) button or link from the middle of a web page to the top will improve the click-through rate.

To test this, you’d create an A/B test where the first, original version of the variable (A) stays the same, and versions B, C, D, and so on each change the position of the button on the web page.

You’d then put it into action by running the test against a predetermined number of site visitors and comparing the data to see which version had the best impact.
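As an illustration of the “bucket” in bucket testing, here’s a minimal sketch of one common way to assign visitors to variants: hashing each visitor ID so everyone gets a stable, evenly split assignment and a returning visitor always sees the same version. The variant names and the even split are assumptions for the example.

```python
# A minimal sketch of deterministic visitor bucketing for a split test.
# Variant names and the 50/50 split are illustrative assumptions.
import hashlib

VARIANTS = ["A_cta_middle", "B_cta_top"]

def assign_variant(visitor_id: str, experiment: str = "cta-position") -> str:
    """Hash the visitor + experiment name into a stable bucket."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)   # even split across variants
    return VARIANTS[bucket]

print(assign_variant("visitor-42"))  # same visitor always gets the same variant
```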

The second is testing whether the design of the variable has an impact on click-through rates. For example, would changing the size or color of the same button affect how many people notice it and are encouraged to click through?

To test this, you’d add the two different versions of the button to the same page, linking to the same place. You’d then see which version received the most clicks and adjust your design accordingly.

Opinions from marketers

We opened up a discussion on LinkedIn to see where marketers stood with A/B testing. This is what they had to say...

For A/B testing

Here’s what those who were ‘for’ A/B testing had to say:

“I’m for, but I think an organization needs to be savvy enough to know when A/B testing is and isn’t appropriate.
“I've worked at organizations where “let's A/B test it” was an excuse from leadership to bifurcate working through serious strategy and process issues and rush to action. Meaningful learnings never came from this approach.
“At a tactical level though, ICs should be empowered, especially with more top/top mid-of-the-funnel channels such as LP, email, paid ads to use A/B testing to continually test hypotheses and optimize performance.
“On a strategic note, I’ve used A/B testing to get buy-in from leadership on core positioning and messaging, so it's been quite useful.”
Jacob Swiss, Senior Product Marketing Manager at Grafana Labs

“A/B testing is a great methodology for making incremental changes and being sure of impact but it isn't great when a strategy needs to be deployed. It’s great when there is sizable data available, but not when there isn't enough traffic. A/B testing is great to ensure metrics maintain at the same level or grow, but not to disrupt the market.
“Everything can be A/B tested, but not everything should be. I've been able to demonstrate two changes that led to 150% growth for two organizations that have helped me grow too.”
Siddhartha Kathpalia, Associate Director (Marketing, Americas + Product) at VWO

“I agree that many B2B marketers are probably wasting their time with A/B testing things that don't matter or won't be leveraged anyway. It probably stems from a lack of experience in the weeds, which many "digital marketers" lack these days.
“In any case, I never launch a campaign without some sort of "test" happening. Usually multiple. On GDN, for example, in one Adwords campaign, I’ll launch with multiple audiences. That's an A/B test. Within each, I'll also throw out a few assets. That's an A/B/potentially C test. I'll even throw in multiple variants within each asset if volume allows.
“This often allows for quick pivots after some data is collected. Peep which audience is doing the best. Ok, let's hit that audience hard. What asset within that audience is best? Let's cut the others. Ok, we already have a head start on collecting ad variant performance for that asset in that audience. Now the campaign is killing it!
“Should people obsess about A/B testing? No. But I think it should be naturally ingrained.”
Ryan Osman, Digital Marketing Manager at Equinix

“Of course, there are more appropriate moments and contexts for A/B testing things. But not doing it with a minimum frequency can contribute to creating a culture of decisions based more on speculation than on data, which leads the company or team to bet too much on HiPPOs [Highest Paid Person’s Opinions] and ends up demotivating collaborators.”
Mariana Bonanomi, Content Strategist at Croct

“It depends on the situation, the volume you have to work with, and whether the experiment risks harming ongoing business. It can be particularly useful when trying out variations on a theme, such as just how much of a discount to offer, or which of three different adjectives gets clicks.”
Jesse Friedman, Director of Product Marketing at Tremendous

Against A/B testing

And here’s what those who were ‘against’ A/B testing had to say:

“There are several reasons why A/B tests don't work:
“1. They assume that you already have the right growth lever and you just need to tweak it to make it better. Most companies don't want to admit they have yet to find a meaningful growth lever to begin with.
“2. They're performed poorly and almost always for the wrong reasons. Most of the time they create outliers that get you stuck trying to do it again and again.
“3. Metrics are now misleading. Google, Facebook, and LinkedIn are not trying to get you the most clicks—which means most of the metrics marketers use like CTR, bounce rate, etc are worthless and oftentimes misleading.
“4. Improving the performance of your marketing by 2%-5% is not going to help you reach a revenue goal that's moving at 30%-100% a year. You're running at efficiency when you still need to uncover growth.”
Jay Baron, CEO at Elevate Demand

“Because A/B and other message testing seems 'free', many marketers have lost the skill of doing good in-depth market research on their target personas. At best, people toss out a SurveyMonkey questionnaire. Sadly, surveys only tell you the answers to questions you know to ask.  
“Optimizing ads is no replacement for knowing the humans behind the decisions. To learn what you don't know you don't know, you need solid qualitative research... yep! You heard me. Qualitative research over time.
“It's never really been easier to do good longitudinal qualitative research but no one does it because we live in a world obsessed with measurement.”
Katherine Chan, Freelance Technology Copywriter

“I think there are two big problems with A/B testing:
“1. It's often done as an afterthought. People do their entire landing page, email, etc, then randomly choose an element to test. There's no strategy to this, so lots of tests aren't giving you any real insight into your audience.
“2. It's often done on low-value variables. The in-built assumption is that you've nailed the high-level things.
“For example, if you're going to test button text ‘find out more’ vs ‘learn more', you're assuming that your copy is as good as it can be, you're assuming that your offer is rock solid, and you're assuming that you're sending to the exact right audience of people. All of these are really big rocks that should not be taken for granted!
“So if you're going to do an A/B test, do it strategically, with a clear hypothesis that you test over multiple projects. And also, test the big stuff first before you zero in on the small stuff.”
Sam Grover, Freelance Copywriter, and Content Strategist

“I'm not a fan of when A/B testing is used because it is clear you don't understand the audience you are targeting well enough. When that happens I focus efforts away from measuring incremental A/B tests to talking to your audience and understanding them better.”
Trenton Romph, Marketing at Clozd

How to optimize your A/B testing

The decision on whether A/B testing is suitable for your organization is yours to make. If it makes sense for you to keep using this revenue marketing strategy, we’ve put together a few ways to optimize the method so you get the best insights about your product or service and your audience.

Step one: Solidify your hypothesis

With a strong hypothesis in place, you’ll know exactly what you’re testing and why. You’ll be able to see precisely what failed and what succeeded, and it’ll be much easier to identify from the results which changes to make afterward.

Step two: Don’t test too many versions at once

It’s generally considered best practice in A/B testing to test two to four versions of a variable at a time. With too many versions, it takes far longer to gather enough results for an accurate read, and it’s harder to determine which version has the best impact on your audience.
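Some rough, hypothetical arithmetic shows why: your daily traffic gets divided across every variant, so each extra version stretches out how long the test takes to finish.

```python
# A rough illustration of why more versions stretch a test out: the same
# daily traffic is split across more buckets. All numbers are hypothetical.
daily_visitors = 2000
needed_per_variant = 5000  # e.g. from a sample-size calculation

for n_variants in (2, 3, 4, 6):
    days = needed_per_variant / (daily_visitors / n_variants)
    print(f"{n_variants} variants -> about {days:.0f} days to finish")
# 2 variants -> about 5 days ... 6 variants -> about 15 days
```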

Step three: Ensure you’re getting the right sample size

Make sure you run your test with a strong number of participants. Too small a sample can make the results unreliable.
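That “strong number” can be estimated before you launch. Here’s a minimal sketch using the standard normal-approximation formula for comparing two conversion rates; the baseline rate, minimum detectable lift, significance level, and power are assumptions you’d set for your own test.

```python
# A minimal sketch of a pre-test sample-size estimate for comparing two
# conversion rates, via the standard normal-approximation formula.
from statistics import NormalDist

def sample_size_per_variant(p_baseline, min_effect, alpha=0.05, power=0.8):
    """Visitors needed in each variant to detect an absolute lift of
    `min_effect` over `p_baseline` at the given significance and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = p_baseline, p_baseline + min_effect
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(((z_alpha + z_beta) ** 2 * variance) / min_effect ** 2) + 1

# e.g. a 3% baseline click-through rate, hoping to detect a lift to 4%
print(sample_size_per_variant(0.03, 0.01))  # roughly 5,300 visitors per variant
```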

Step four: Don’t make changes mid-test

You may be intrigued by the way your test is panning out and tempted to implement more changes mid-test to uncover further insights. Unfortunately, making changes in the middle of a test disrupts the results and makes them unreliable.

If there are other changes you’d like to test, make note of them and implement them once the initial test is complete. You can then always compare both tests to see which worked best.

Step five: Don’t ignore the data

It’s very easy to set the data aside in favor of what you feel is best for your customer. But you can be biased, and the data can’t lie: the results of your test will either back up your hypothesis or go against it, and either way the data is simply telling you what your customers respond to most.

So, take these insights into account when deciding which variation is best for your product or service.

Join the community

What do you think? Is A/B testing still relevant for marketers? Join the Revenue Marketing Alliance Slack channel today and let us know your thoughts.

Written by:

Charley Gale


Charley is the copywriter at Revenue Marketing Alliance. She has a passion for creating new content for the community. She's always open to new ideas, so would love to hear from you!
