Lots of exciting stuff today!
An analysis of a single ad that took a campaign from $1.5k to $8k per day in revenue.
A thought-provoking and well-argued question about paid ads: what if performance advertising is just an analytics scam?
New UX design trends and best practices to take inspiration from.
And two articles going deep into technical details of testing, including whether it’s better to run tests in isolation or in parallel.
Sorry if too many of these articles sound interesting and you end up reading all of them. But at least you’ll learn a lot!
————
An ad that took a campaign from $1.5k to $8k/day in revenue [Twitter thread]
A great analysis of a video ad that grew the campaign’s revenue 5+ times.
Great messaging, great delivery, great shooting. A lot to learn from.

What if Performance Advertising is Just an Analytics Scam? [SparkToro]
“What the pandemic showed is we can take marketing down to zero and still have 95% of the same traffic as the year before. So we’re not going to forget that lesson.”
– Brian Chesky
A thought-provoking piece by Rand Fishkin about performance advertising and its effectiveness.
The core of the argument:
Somewhere between 60-99% of the people exposed to the ads would have purchased anyway. Yet, the sales are credited to paid channels, and everyone is happy: the business, the ad networks, the performance manager. Everyone reports green.
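To make that concrete, here’s a rough back-of-the-envelope calculation in Python (the numbers are mine and purely illustrative, not from the article):

```python
# Back-of-the-envelope incrementality math (illustrative numbers, not Rand's).
ad_spend = 10_000            # monthly paid-ads budget, in $
attributed_sales = 500       # conversions the ad platform claims credit for
would_buy_anyway = 0.80      # share who would have purchased without the ads

reported_cpa = ad_spend / attributed_sales
incremental_sales = attributed_sales * (1 - would_buy_anyway)
true_cpa = ad_spend / incremental_sales

print(f"Reported CPA: ${reported_cpa:.0f}")  # $20 -- looks great in the dashboard
print(f"True CPA:     ${true_cpa:.0f}")      # $100 -- 5x worse than reported
```

At 80% “would have bought anyway,” the dashboard overstates the ads’ effectiveness fivefold; at 99%, a hundredfold.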
Definitely give this article a read. It’s much more than just a controversial take: it’s backed by stats, case studies, and real numbers.
Embrace Overlapping A/B Tests and Avoid the Dangers of Isolating Experiments [Statsig]
“Our success is a function of how many experiments we do per year, per month, per week, per day.”
– Jeff Bezos
Isolating experiments slows you down. If you can, run experiments in parallel. Interaction effects are either overblown or cancel each other out, with minimal impact on the end results.
The world is full of noise. Stop worrying about it and start running as fast as you can to outpace the rest!
Best practices for overlapping experiments:
- Avoid severe experimental collisions.
- Prioritize directionality over accuracy. It’s far more important to know that an experiment was “good” than whether revenue was lifted by 2.0% or 2.8%.
- Use special strategies when precision matters, e.g., for long-term or multivariate testing.
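If you’re wondering how overlapping tests avoid contaminating each other in the first place: the standard trick is to randomize every experiment independently, typically by hashing the user ID with a per-experiment salt. A minimal sketch (the function and experiment names are made up; platforms like Statsig handle this for you):

```python
import hashlib

def assign(user_id: str, experiment: str, variants=("control", "treatment")) -> str:
    """Deterministically assign a user to a variant, independently per experiment."""
    # Salting the hash with the experiment name makes assignments in one
    # experiment statistically independent of assignments in any other.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# A user's variant in one test says nothing about their variant in another,
# so interaction effects spread evenly across arms and mostly cancel out:
print(assign("user_42", "checkout_copy"))  # e.g. "treatment"
print(assign("user_42", "pricing_page"))   # e.g. "control"
```

A nice side effect of deterministic hashing: a user sees a consistent variant across sessions without the platform storing anything.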
UX Design: 18 Novel Ways To Differentiate In 2022 [OpenGeeksLab]
As the whole world went digital during lockdowns, companies experimented with new ways of attracting and retaining customers, and UX design became a core part of those efforts.
Lots of inspiration to get from these trends and best practices:
- Dark Theme
- Glassmorphism
- Motion Graphics
- 3D Design and Animations
- Unusual Design
- 90s Retro Design
- Absurd Navigation
- Artificial Intelligence
- Virtual and Augmented Reality
- Occasional Changes
- Advanced Personalization
- Voice User Interface Interaction
- Motion Design Details
- Synchronization Across Devices
- Inclusive Design
- Performance Matters
- Human Writing
- UX Design Simplification
Interpreting A/B test results: false positives and statistical significance [Netflix Technology Blog]
Using a series of thought exercises based on flipping coins, the article builds up an intuition about false positives, statistical significance and p-values, rejection regions, confidence intervals, and the two decisions you can make based on test data.
If you’re into stats and want to understand all the intricate calculations behind testing, this one will be interesting for you.
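To get a feel for the coin-flip intuition, here’s a quick simulation of my own (not from the article): how often does a perfectly fair coin produce a result as lopsided as 60+ heads in 100 flips?

```python
import random

def heads(n=100):
    """Count heads in n flips of a fair coin."""
    return sum(random.random() < 0.5 for _ in range(n))

# Estimate the two-sided p-value: P(heads >= 60 or heads <= 40) for a FAIR coin.
trials = 100_000
extreme = sum(1 for _ in range(trials) if abs(heads() - 50) >= 10)
print(f"Estimated p-value: {extreme / trials:.3f}")  # ~0.057
```

A fair coin lands outside the 40–60 range about 6% of the time, just above the usual 5% significance cutoff. That cutoff is exactly what caps how often pure noise gets declared a winner.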
————
Until next Thursday!
Radek Sienkiewicz
PS. Similar to Rand’s question of whether performance advertising is a scam, I once posed the question of whether bidding on your own brand keywords makes any financial sense. That was maybe… 10 years ago, and I still stand by it.