It’s easy to win any A/B test, with a practically 100% guarantee.
Here are a few examples:
Number of clicks
Make your homepage a huge button with “Click here” on it. You’ll get more clicks, guaranteed.
Number of sales
Slash your prices in half, leave everything else the same. You’ll get more sales, guaranteed.
Click-through rate
Turn your headlines into clickbait, promising far more than the content delivers. You’ll get better click-through rates, guaranteed.
Not all data is equal
While it’s always best to make your decisions based on data, don’t base them on stupid data.
All the examples above would result in winning tests. Based solely on the data, you’d be tempted to promote all of them.
But you wouldn’t.
It’s the same situation with newsletter subscription pop-ups. Practically any pop-up, from the subtle nudge to the most egregious full-page takeover, results in more newsletter sign-ups. But there’s an invisible line beyond which, while the results still show better CTRs and more sign-ups, long-term customer trust starts to erode. Your testing tool can show a win, but, with time, your bank account will lose.
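To make that trade-off concrete, here’s a toy back-of-the-envelope sketch in Python. Every number in it is invented for illustration, not a real benchmark: it just shows how a variant can win on sign-ups yet end the year behind once the trust it burned shows up as churn.

```python
# Toy model with made-up numbers: an aggressive pop-up "wins" the A/B test
# on sign-ups, yet loses once long-term trust (modeled as churn) is counted.

def retained_subscribers(visitors, signup_rate, monthly_churn, months=12):
    """Sign-ups from `visitors` that are still subscribed after `months`."""
    signups = visitors * signup_rate
    return signups * (1 - monthly_churn) ** months

# Variant A: subtle pop-up. Variant B: full-page takeover.
# All rates below are illustrative assumptions.
a = retained_subscribers(100_000, signup_rate=0.02, monthly_churn=0.03)
b = retained_subscribers(100_000, signup_rate=0.05, monthly_churn=0.12)

print(f"A: {a:.0f} subscribers still around after a year")
print(f"B: {b:.0f} subscribers still around after a year")
# B wins the test (5% vs 2% sign-up rate), but with these churn rates
# it finishes the year with fewer subscribers than A.
```

With these made-up rates, the takeover variant collects two and a half times the sign-ups but keeps fewer of them than the subtle one. The testing tool only ever sees the first number.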
It’s on you to see that invisible line and to stop before crossing it. That’s not easy; the line is invisible, after all. Spotting it comes with experience.
PS. I bet all the dark patterns on the web came from winning tests, too.