Value-Add A/B Testing

Your goal in growth is to add as much value as you can to your assets. A/B testing is a frequently misunderstood vehicle for doing that. It’s not that A/B testing isn’t great; it’s one tool among many, with a time and a place.

Let’s start with what we shouldn’t do.

Don’t select a test that’s doomed to low traffic volumes. You need signal (read: traffic) to demonstrate a successful outcome.

Don’t test minor changes. You’re looking to demonstrate the impact of your variant: a stronger signal (whether positive or negative) is a powerful teacher.

Don’t test established best practices; implement them. Testing takes time and resources, and the full benefit of a gain isn’t realized until winning variants are implemented. Testing an obvious best practice is basically theater.

Avoid testing multiple variables unless you have the traffic to support the evaluation. Most marketing setups can’t carry a multivariate test to statistical significance within a short time frame.

So what should we test? The playing field is a lot clearer now.

Let’s run tests that we can adequately supply with traffic, that represent significant departures from the baseline, and that engage users in a way that better conveys the offer. Above all, choose a test that has a chance of succeeding!

If it’s impractical for your primary success metric to reach statistical significance in a reasonable period of time, pick an interim metric. Just because it isn’t the final metric doesn’t mean good-faith efforts at improving the UX won’t trickle down. Realize statistically significant wins against the interim metric, then validate that they haven’t degraded your primary KPI. Remember: testing needs to be impactful, but it doesn’t need to be perfect.

Research: Any testing should be preceded by extensive research. What problem are we trying to solve? Do we do a good job of solving it? How does that compare to others in the marketplace? Do we convey our solution’s merits effectively? Does the UX properly allow for a transition between education and transaction? Where are the bottlenecks in the current funnel?

High Volume: There’s no such thing as a worthwhile low-volume test! When identifying problems to solve with the current positioning, calculate how long you’ll need to run to conclusively prove the lift of any proposed variant, as sketched below. Velocity is crucial to the success of any testing strategy.
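
As a minimal sketch of that calculation, here’s the standard two-proportion sample-size formula in plain Python (standard library only). The baseline rate, expected lift, and daily traffic are hypothetical placeholders; substitute your own numbers.

```python
from statistics import NormalDist

def sample_size_per_arm(p1: float, p2: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per arm to detect a move from p1 to p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2) + 1

# Hypothetical inputs: 3% baseline, hoping for a 15% relative lift,
# 2,000 visitors per day split evenly across two arms.
baseline, lift, daily_visitors = 0.03, 0.15, 2_000
n = sample_size_per_arm(baseline, baseline * (1 + lift))
print(f"{n:,} visitors per arm, ~{2 * n / daily_visitors:.0f} days at current traffic")
```

If the answer comes back in months rather than weeks, that’s your cue to pick a bolder variant, a higher-traffic page, or an interim metric.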

Design Variance: Don’t test minor variances. If you’re introducing a design change, the variance should be meaningful, whether in the assets you use to convey the offer or in the workflow users navigate to transact. Button-color testing never amounts to much.

Offer Positioning: Don’t restrict yourself to design changes; the core offer itself and its positioning are key contributors to the conversion rate. If you don’t have the authority to change offer elements, consider experimenting with different mindsets: FOMO/scarcity, social proof, exclusivity, identity, and so on.

Stat Sig: Statistical significance is your key to confidence that implementing your test results will have positive outcomes. What’s the best way to achieve it? Two levers: sample size and the size of the gap in conversion rate between variants. The sketch below shows how the two combine.
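
Here’s a minimal sketch of the calculation behind that confidence: the classic two-proportion z-test, again in plain Python. The counts are hypothetical; notice that the p-value is driven entirely by the two levers above.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: control converts 300/10,000, variant 360/10,000.
print(f"{two_proportion_p_value(300, 10_000, 360, 10_000):.4f}")  # ~0.0175, significant at 0.05
```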

Proxies: Subscription services frequently struggle with A/B testing because their paid conversion rates land in the sub-1% range. Businesses typically resort to A/B testing against a trial proxy event, but discount the reality that a more efficient vehicle for converting users on a free offer doesn’t necessarily translate into a higher conversion rate on subsequent paid offers. Proxy A/B testing can still be useful, however, as long as UX improvements are made in good faith and conveyance of the value exchange isn’t discounted.
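
To see why sub-1% baselines are so punishing, run the sample-size arithmetic at two different baselines. A minimal, self-contained sketch with illustrative numbers:

```python
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors per arm to detect a move from p1 to p2 (normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return int(z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2) + 1

# Detecting a 10% relative lift at 80% power:
print(f"{n_per_arm(0.005, 0.0055):,}")  # 0.5% paid rate -> ~328,000 per arm
print(f"{n_per_arm(0.05, 0.055):,}")    # 5% trial proxy -> ~31,000 per arm
```

Roughly a tenth of the traffic buys the same confidence at the proxy event, which is exactly why teams reach for it; just keep validating the paid metric downstream.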

Promotions: Continual testing is fine, as long as you’re cycling through tests. Some marketing setups leave winners in distribution purgatory, serving 50/50 against the original control and never realizing the full benefit of a win. Why? Resource priorities. Maintaining velocity is not only about testing things, but also about implementing them when wins are identified.

Roadmaps: Roadmaps should be dynamic, always changing based on your latest test learnings. Successes and failures alike yield new insights that can leapfrog earlier plans. Stay flexible enough to incorporate them.
