It feels good to be sure that your current version of the site is effective. You know it because you’ve tested it. And that’s your most effective version, until the next winning test.
But there are times when A/B testing is not advisable, or simply a very bad idea.
When else is A/B testing not a good idea?
When you don’t know why you’re testing
Testing because you’ve heard that testing is good is not a good enough reason. Without a hypothesis you want to test, it’s just a waste of time.
A hypothesis looks like this:
“Because [A THING HAPPENING ON THE SITE / AN IDEA UNCOVERED DURING RESEARCH] I want to do [AN ACTION TO TEST] and I expect [AN IMPORTANT KPI] to [CHANGE IN WHICH DIRECTION].”
If you aren’t able to formulate your idea like this, you probably just want to test for the sake of testing.
When you don’t have enough traffic
With low traffic, a test can take months to reach statistical significance, if it ever does. In this case, make bigger changes, implement them without A/B testing, and look for trends. Has the overall conversion rate increased? That’s a good sign to keep the new version. If there’s a big drop, quickly revert and try something else.
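To see how quickly traffic requirements blow up, here’s a rough sample-size sketch in Python using the standard two-proportion z-test formula. The 2% baseline conversion rate and 10% relative lift are illustrative assumptions, not benchmarks:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect the lift
    with a two-sided, two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)  # expected rate after the lift
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Illustrative: a 2% baseline and a hoped-for 10% relative lift already
# require tens of thousands of visitors per variant.
print(sample_size_per_variant(0.02, 0.10))
```

If your site gets a few thousand visitors a month, numbers like these make the waiting time impractical, which is exactly why shipping bigger changes and watching the trend is the better move.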
When you can’t afford it
A/B testing is expensive. The cost of the testing tool is negligible compared to the overall cost of the testing process.
You need to do research, design the test, implement it, troubleshoot it, wait for statistically significant results, analyze them, then either implement the winning variation or move on to the next test, repeating the cycle.
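As a rough illustration of the “wait for statistically significant results” step, here is a minimal two-proportion z-test sketch in Python. The visitor and conversion counts are made-up examples:

```python
from statistics import NormalDist

def is_significant(conversions_a, visitors_a, conversions_b, visitors_b, alpha=0.05):
    """Two-proportion z-test: does variant B convert differently from A?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return p_value < alpha, p_value

# Made-up data: 10,000 visitors per variant, 200 vs. 260 conversions.
significant, p = is_significant(200, 10_000, 260, 10_000)
print(significant, round(p, 4))
```

Until a check like this clears your significance threshold, the test keeps consuming traffic and team attention, which is a large part of the cost discussed above.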
All of this takes a lot of time, and the whole team could be doing something else with it.
Only test when the potential gains from a winning test significantly outweigh the cost of testing.
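One way to sanity-check that trade-off is a back-of-the-envelope expected-value calculation. Every number below is a hypothetical assumption you would replace with your own figures:

```python
def expected_test_value(monthly_revenue, expected_lift, win_probability,
                        horizon_months, testing_cost):
    """Back-of-the-envelope: expected gain from a test minus its total cost.
    All inputs are illustrative assumptions, not industry benchmarks."""
    expected_gain = monthly_revenue * expected_lift * win_probability * horizon_months
    return expected_gain - testing_cost

# Hypothetical numbers: $100k monthly revenue, 5% expected lift,
# 20% chance the variant wins, 12-month horizon, $20k total testing cost.
print(expected_test_value(100_000, 0.05, 0.20, 12, 20_000))  # -8000.0: not worth running
```

A negative result like this one suggests the team’s time is better spent elsewhere, which is the whole point of this section.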
When short-term exploitation is more important than long-term exploration
When you have a big launch with one to two weeks of concentrated marketing activity, it makes more sense to exploit the opportunity than to explore with testing. You may test some elements beforehand to make sure you’re launching an effective version, but once you’re live, it doesn’t make sense to keep testing.
When you’re changing user behavior
Test results, especially in the case of changing user behavior, can be influenced by two opposing elements:
- When customers are used to doing something one way and it changes, the novelty factor may temporarily increase conversions: they try the new thing just because it’s new. With time, they revert to their old habits, and what looked like a winner stops working.
- Conversely, customers may rebel against a change simply because it breaks their habits, even if the new way is significantly better. You may conclude the change didn’t work and stop the test, never discovering that once the rebellion stage passed, customers would have converted more and more as they figured out it really was the better option.
In both of these cases, instead of testing, it’s better to implement the change and wait patiently. Disregard the first overblown reactions in either direction and see where the results trend afterwards.
Testing is essential for long-term success, but it’s not always the best solution. Not everything is a nail, even if you’re a hammer.
Decide what’s worth testing, avoid things that shouldn’t be tested and build on top of your previous winners.