This article is by Sleeknote, a user engagement software company. Check out the full article (linked at the bottom) for complete descriptions of each test. Below you’ll find the results, main findings, and the most interesting learnings.
This post is twenty-one months in the making. Back in January 2017, I wanted to write about A/B split-testing ideas.
But after a few Google searches, I found that many of the before-and-after images featured in the top posts were underwhelming, overused, or in many cases, both.
So, the marketing team and I began running our own A/B split tests, mainly on our blog, to get data we could use for a future post.
That future post is the one you’re reading now.
In this post, I’ll share my five favorite A/B split tests from twenty-one months of continuous testing. I’ll reveal our key learnings—including a few surprising findings—and give you concrete takeaways you can use to inform your future campaigns.
Experiment 1. Radio Buttons Vs. Drop-Down
Like many companies that rely on email marketing to engage potential buyers, we’re eager to segment new subscribers, particularly on our blog.
To enrich our lead data, we need to know as much as possible about new subscribers—their company size, annual revenue, number of employees, and more. The more data we have, the easier it is to lead-score and prioritize outreach accordingly.
What’s crucial in the beginning, though, is the subscriber’s industry. With many of our customers working in e-commerce, we know that a subscriber who belongs to the e-commerce industry is more likely to become a customer than, say, a beginner to marketing (which is one of our segmentation options).
When we first began segmenting subscribers, we invited them to click a link in our welcome email, as many marketers do. But with few subscribers segmenting themselves, we ended up with a hodgepodge of segmented and unsegmented subscribers.
To combat that, and to get more subscribers to segment themselves, we asked visitors to do so before they opted in, via an email popup. We noticed an improvement in the number of visitors who segmented themselves…
But we knew we could do better.
One hypothesis we wanted to test was whether the method visitors used to segment themselves affected conversions. Specifically, was having to choose an option from a drop-down causing friction and affecting conversions?
To test our hypothesis, we pitted our blog popup with a drop-down (our control) against one with radio buttons.
After running the test for two weeks, the winner was clear:
Website visitors were 41.18 percent more likely to segment themselves when opting in through a popup with radio buttons than one with a drop-down option.
Our hypothesis was correct. And, like many of the results you’ll read below, it informed how we collect leads on our blog today.
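A quick aside on the numbers: figures like the 41.18 percent above express relative uplift, i.e. how much better the variant’s conversion rate is than the control’s. Here’s a minimal sketch of that arithmetic in TypeScript; the example rates are hypothetical and chosen purely for illustration, since the raw counts aren’t published in this post.

```typescript
// Relative uplift: how much better the variant converts, relative to the control.
function relativeUplift(controlRate: number, variantRate: number): number {
  return ((variantRate - controlRate) / controlRate) * 100;
}

// Hypothetical example: a drop-down converting at 1.7% vs. radio buttons at 2.4%.
console.log(relativeUplift(0.017, 0.024).toFixed(2)); // "41.18"
```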
Key Learning
If you’re not getting results from segmenting new subscribers with a welcome email, invite website visitors to segment themselves through a website popup when they opt in, instead. In our experience, it’s the best way to get more enriched lead data.
Experiment 2. 30-Second Time Delay Vs. 30% Scroll Trigger
When creating a new campaign, knowing where to place popups is easy. If you’re driving traffic to, say, a product page, it makes perfect sense to test a campaign there.
But deciding when to show it is challenging without data to use as a starting point.
In our case, we knew we wanted to show a campaign on our blog as it gets the majority of our traffic. But we didn’t know when to show it.
Our next experiment, then, involved testing a popup with a 30-second time delay (control) against one that triggered when a visitor scrolled through 30 percent of a blog post. (Note: we chose 30 percent after running a heatmap on our blog and seeing that 30 percent was the average drop-off point.)
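If you want to try the same comparison on your own blog, here’s a minimal sketch of both triggers in browser TypeScript. It isn’t Sleeknote’s implementation (triggers are configured in the app, not hand-coded), and showPopup is just a placeholder for whatever renders your campaign.

```typescript
// Placeholder for whatever actually renders the campaign.
declare function showPopup(): void;

// In a real split test each visitor sees only one variant; here we pick at random.
const variant: "delay" | "scroll" = Math.random() < 0.5 ? "delay" : "scroll";

if (variant === "delay") {
  // Variant A (control): show the popup after a 30-second time delay.
  setTimeout(showPopup, 30_000);
} else {
  // Variant B: show the popup once the visitor has scrolled 30% of the page.
  const onScroll = (): void => {
    const scrollable = document.documentElement.scrollHeight - window.innerHeight;
    if (scrollable > 0 && window.scrollY / scrollable >= 0.3) {
      showPopup();
      window.removeEventListener("scroll", onScroll); // fire only once
    }
  };
  window.addEventListener("scroll", onScroll, { passive: true });
}
```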
After another two weeks of testing, we had another piece of the puzzle:
The slide-in with a scroll trigger outperformed the one with a time delay by 61.83 percent.
Experiment 3. Desktop Teaser Vs. No Desktop Teaser
One of the ways we distinguish ourselves from countless copycat competitors is our teaser feature.
A teaser, if you’re unfamiliar, is a preview of a popup’s content, often found in the bottom left- or right-hand part of the user’s screen.
Its job, when done right, is to make visitors curious enough to click through and opt in through the popup.
(Yes, that’s an emoji in the teaser. And, yes, we tested that too.)
After our previous test, we knew a 30 percent scroll trigger was better than a 30-second time delay. But we wanted to see if adding a teaser drove even more conversions.
To our surprise, it did:
Adding a teaser to our blog slide-in increased email sign-ups by 81.83 percent.
While it’s hard to determine exactly why it performed better, our belief is twofold:
- It created an information gap. Classic research by Russell Golman and George Loewenstein suggests that when we feel a gap between what we know and what we want to know, our curiosity drives our need to seek out new knowledge. When you read, “We’ve Got Something for You…,” you can’t help but click through to learn more.
- It uses a two-step opt-in. Playing on the foot-in-the-door technique and the commitment and consistency principle, visitors are more likely to opt in if they click a teaser. Why? Because they’ve already taken action. If anything, NOT opting in after doing so requires MORE effort. (A rough sketch of this two-step mechanic follows below.)
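To make that two-step mechanic concrete, here’s a minimal, hand-rolled sketch in browser TypeScript. The #teaser and #popup element IDs are assumptions for illustration only; in Sleeknote, the teaser is a built-in feature you enable rather than something you code yourself.

```typescript
// Assumed markup: a small #teaser bar and a #popup opt-in form somewhere on the page.
const teaser = document.getElementById("teaser");
const popup = document.getElementById("popup");

if (teaser && popup) {
  popup.hidden = true; // start with only the teaser visible

  teaser.addEventListener("click", () => {
    // Step 1: the visitor has already committed by clicking the teaser...
    teaser.hidden = true;
    // Step 2: ...so showing the full opt-in form feels like the natural next step.
    popup.hidden = false;
  });
}
```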
Increasing conversions from visitor to subscriber doesn’t have to take long. As you can see, it’s sometimes as quick as enabling a teaser.
Experiment 4. Minimalist Design Vs. Branded Design
“Will a custom-designed, branded popup outperform its basic, minimalistic counterpart?” That was the question we wanted to answer in our next test.
This was our (seemingly unbeatable) control, built using our drag-and-drop editor:
And this was the challenger, made by our in-house designer.
And the control won, big time!
Our minimalist-design popup increased email sign-ups by 137.25 percent.
Findings like the above might surprise design-savvy marketers (and relieve the inexperienced). But if you’re designing a campaign, remember: despite your best intentions, a campaign that’s easy on the eyes won’t always lighten a buyer’s wallet.
Key Learning
Contrary to popular belief, you don’t need a designer to create a high-converting website popup. Oftentimes, a basic campaign built with a drag-and-drop editor is more than enough to drive high conversions.
Experiment 5. Description of Offer With Bullets Vs. Image of Offer
It’s a common marketing practice to include an image in a popup, especially if you’re offering a freebie. And it makes sense: if, as a website visitor, you’re asked to opt in for a lead magnet, you want to know what you’re getting, right?
We thought so, too. But after several previous tests, we weren’t sure why our control was performing so well when it only described, rather than showed, what visitors would receive in exchange for their email. On reflection, we realized the bullets describing the offer played a bigger role than we’d initially thought.
To find out for certain, we ran an experiment testing it against a similar design that previewed the offer with an image instead:
After 49 days of testing, the results were clear:
A popup describing the offer with bullet points boosted our conversion rate by 167.21 percent.
“A picture is worth a thousand words,” goes the old adage. But in our experience, good, compelling bullets are worth far more.
Key Learning
Don’t write off good copywriting in a popup campaign. It’s often THE differentiating factor in a winning one.
Conclusion
Getting inspiration from some old A/B test results and case studies might be a good starting point. But nothing beats your own data and your own test results.
Analyze your own data and user behavior, focus on your visitors’ needs, and write good copy, even for elements as small as popups.
Full article: Got a Blog? Use These 5 Proven Split Test Winners to Boost Conversions (Case Study)
PS. If you can transform a rather intrusive element (e.g., a popup) into a curiosity element (as in Experiment 3), you’ll almost certainly have a winner.