Got a question about conversion optimization?
Chances are, you’re not alone!
So here is a compilation of 30 of your top conversion optimization questions, from how to get executive buy-in for experimentation, to the impact of CRO on SEO, to the power (or lack thereof) of personalization.
You don’t have to read all of it. Treat it more like a Conversion Wikipedia: look up only what you need, and leave the rest for later, when the need arises.
Who’s answering the questions? Experts, company owners, CEOs, CMOs, industry veterans. You can be sure each answer stems from tons of real-life experience.
Every question is answered and explained in detail in the full article. Below I list all the questions and a few interesting answers, to whet your appetite.
- What do you see as the most common mistake people make that has a negative effect on website conversion?
I think the most common mistake is a strategic one, where marketers don’t create or ensure they have a great process and team in place before starting experimentation.
I’ve seen many teams get really excited about conversion optimization and bring it into their company. But they are like kids in a candy store: grabbing at a bunch of ideas, chasing quick wins, making mistakes along the way, getting inconclusive results, not tracking properly, and looking foolish in the end.
- What are the most important questions to ask in the Explore phase?
- Is there such a thing as too much testing and/or optimizing?
A lot of people think that if they’re A/B testing and improving an experience, a landing page, or a website, they can’t keep improving forever. The question many marketers have is: how do I know how long to do this? Are there going to be diminishing returns? By putting in the same effort, will I get smaller and smaller results?
But we haven’t actually found this to be true. We have yet to find a company that we have over-A/B tested. And the reason is that visitor expectations continue to increase, your competitors don’t stop improving, and you continuously have new questions to ask about your business, business model, value proposition, etc.
So my answer is…yes, you will run out of opportunities to test, as soon as you run out of business questions. When you’ve answered all of the questions you have as a business, then you can safely stop testing.
- Do you see a difference in conversion optimization on mobile versus desktop?
You should be able to use the same conversion optimization process or approach for both your mobile and desktop experiences.
However, there ARE typically major differences within the analytics of your mobile and desktop sites (traffic levels, conversion rates), and also in HOW your visitors make use of the respective sites. So much so that, even though we use the same process for growth and insights, the two sites are typically analyzed separately from each other.
- Do you get better results with personalization or A/B testing or any other methods you have in mind?
- Is there such a thing as too much personalization? We have a client with over 40 personas, with a very complicated strategy, which makes reporting hard to justify.
- With the advance of personalization technology, will we see broader segments disappear? Will we go to 1:1 personalization, or will bigger segments remain relevant?
- How do you explain personalization to people who are still convinced that personalization is putting first and last name fields on landing pages?
SEO versus CRO
- How do you avoid harming organic SEO when doing conversion optimization?
Getting Buy-in for Experimentation
- When you are trying to solicit buy-in from leadership, do you recommend going for big wins to share with the higher ups or smaller wins?
- Who would you say are the key stakeholders you need buy-in from, not only in senior leadership but critical members of the team?
CRO for Low Traffic Sites
- Do you have any suggestions for success with lower traffic websites?
When a site’s traffic is low, the ability to test is decreased, and so we try to make up for it by increasing the time spent and work done in the Explore phase.
We take those yet-to-be-validated insights found in the Explore phase, build a larger, more impactful single variation, and test the cluster of changes. (This variation is generally more drastic than one we would create for a higher-traffic client, where we could validate each insight easily through multiple smaller tests.)
Because of the more drastic changes, the variation should have a larger impact on conversion rate (and hopefully gain statistical significance with lower traffic). And because we have researched evidence to support these changes, there is a higher likelihood that they will perform better than a standard re-design.
If a site does not have enough overall primary conversions, but you definitely, absolutely MUST test, then I would look for a secondary metric further ‘upstream’ to optimize for. These should be goals that indicate or lead to the primary conversion (e.g. clicks to form → form submission, add to cart → transaction). However, with this strategy, stakeholders have to be aware that increases in this secondary goal may not translate into increases in the primary goal at the same rate.
- What would you prioritize to test on a page that has lower traffic, in order to achieve statistical significance?
- How far can I go with funnel optimization and testing when it comes to small local business?
Tips from an In-House Optimization Champion
- How do you get buy-in from major stakeholders, like your CEO, to go with a conversion optimization strategy?
- What has surprised you or stood out to you while doing CRO?
Optimization Across Industries
- Do you have any tips for optimizing a website to conversion when the purchase cycle is longer, like 1.5 months?
- When you have a longer sales process, getting prospects to convert at all is the first step. We have softer conversions (eBooks) and urgent ones like demo requests. Do we need to pick ONE of these conversion options, or can ‘any’ conversion be valued?
- You’ve mainly covered websites that have a particular conversion goal, for example, purchasing a product, or making a donation. What would you say can be a conversion metric for a customer support website?
- Do you find that results from one client apply to other clients? Are you learning universal information, or information more specific to each audience?
- For companies that are not strictly e-commerce and have multiple business units with different goals, can you speak to any challenges with trying to optimize a visible page like the homepage so that it pleases all stakeholders? Is personalization the best approach?
- Do you find that testing strategies differ cross-culturally?
Experiment Design & Setup
- How do you recommend balancing the velocity of experimentation with quality, or more isolated design?
- I notice that you often have multiple success metrics, rather than just one. Does this ever lead to cherry-picking a metric to make sure the test you wanted to win seems like the winner?
- When do you make the call on statistical significance for A/B tests? We run into the issue of varying test results depending on the part of the week we’re running a test. Sometimes we even have to run a test multiple times.
- Is there a way to conclusively tell why a test lost or was inconclusive?
- How many visits do you need to get to statistically relevant data from any individual test?
- We are new to optimization. Looking at your Infinity Optimization Process, I feel like we are doing a decent job with exploration and validation – for this being a new program to us. Our struggle seems to be your orange dot… putting the two sides together – any advice?
- When test results are insignificant after lots of impressions, how do you know when to ‘call it a tie’, stop the test, and move on?
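None of these questions has a one-line answer, but the sample-size one can at least be estimated before a test starts. Here is a minimal sketch using the standard two-proportion sample-size formula in plain Python; the 3% baseline conversion rate and 20% relative lift are illustrative assumptions you would swap for your own numbers, not figures from any answer above.

```python
from math import sqrt, ceil
from statistics import NormalDist

def visitors_per_variation(baseline_rate, min_detectable_effect,
                           alpha=0.05, power=0.80):
    """Rough visitors needed per variation for a two-sided A/B test
    on conversion rate, via the classic two-proportion z-test formula."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_effect)  # relative lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)     # significance threshold
    z_beta = NormalDist().inv_cdf(power)              # desired statistical power
    pooled = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * pooled * (1 - pooled))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return ceil(n)

# Hypothetical example: 3% baseline, hoping to detect a 20% relative lift
print(visitors_per_variation(0.03, 0.20))
```

The takeaway matches the low-traffic advice earlier in this list: small baseline rates and small expected lifts demand tens of thousands of visitors per variation, which is exactly why low-traffic sites are pushed toward fewer, more drastic variations or upstream goals.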
Testing and Technology
- There are tools meant to increase testing velocity with pre-built widgets and even pre-built test variations. What are your thoughts on this approach?
Got any more questions? Drop me a line. I’ve got you.
P.S. You can test. Or you can ignore all this stuff. That’s an option too.