Small Bets
Why Volume Beats Conviction in Product Development
There’s a common trap in product development, a seductive one. It’s the belief that if you just hire the smartest people, brainstorm long enough, and throw enough resources at a “big idea,” you’ll ship something revolutionary. One feature to rule them all. One magical launch that transforms the user experience and delights customers in a way that earns applause and maybe a TechCrunch headline.
But that’s not how it usually plays out.
In reality, most of the features we launch are… meh. Harmless at best. Actively harmful at worst. From years of experience, I’d estimate that 85% of new features fall into the neutral-or-worse bucket: they either do nothing or degrade the experience. That sounds harsh, but it’s not an anomaly. A 2017 Harvard Business Review article by Ron Kohavi and Stefan Thomke confirms it: at Google and Bing, only 10-20% of experiments show positive results. Microsoft? About a third succeed, a third flop, and a third just take up space.
This doesn’t mean product teams are failing. It means the work is harder than it looks. User behavior is messy. Context changes quickly. Good intentions rarely survive first contact with reality.
So if most of what we ship doesn’t work, what should we do?
The answer isn’t to stop experimenting. It’s to change the size and structure of the bets we’re making.
Betting Small with a Big Payoff
To understand the power of small bets, you don’t need to look at a startup or product org. Look at Wall Street. Specifically, at the most successful hedge fund in history: Renaissance Technologies. You might not know the name, but you probably know its founder, Jim Simons.
Simons was a mathematician and codebreaker who turned to finance and quietly built what became known as the “greatest money-making machine of all time.” His flagship Medallion Fund returned an average of 66% annually before fees. As you might expect with returns like that, the fund was eventually closed to outside investors.
But Simons didn’t start that way. In the early days, he behaved more like a classic trader, mixing mathematical models with gut instinct and bold directional bets. It worked… sometimes. But it was volatile and unpredictable.
His breakthrough came when he embraced the exact opposite strategy: an automated system making thousands of tiny trades, each backed by a slight statistical edge. On their own, these trades were forgettable. But aggregated? They were unstoppable. A consistent, compounding engine of profit.
Each trade was low risk. Each bet was reversible. No single position could sink the ship. The genius was in how the small edges stacked. It wasn’t sexy. It wasn’t dramatic. But it worked. And the same lesson applies to product development.
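To make the compounding intuition concrete, here’s a toy simulation in Python. It is my own illustration, not Renaissance’s actual model: every number in it (the 51% win rate, the stake sizes, the bet counts) is invented purely for the demonstration.

```python
import random

random.seed(42)

WIN_PROB = 0.51  # a slight statistical edge: barely better than a coin flip

def final_bankroll(n_bets: int, stake_fraction: float) -> float:
    """Play n_bets even-money bets, staking a fixed fraction of the bankroll."""
    bankroll = 1.0
    for _ in range(n_bets):
        stake = bankroll * stake_fraction
        bankroll += stake if random.random() < WIN_PROB else -stake
    return bankroll

# A few big, high-conviction bets: 1,000 bets risking 25% each.
print(f"big bets:   {final_bankroll(1_000, 0.25):.2e}")     # tends toward ruin

# Many tiny bets with the same edge: 100,000 bets risking 0.5% each.
print(f"small bets: {final_bankroll(100_000, 0.005):.2e}")  # compounds steadily
```

Same edge, wildly different outcomes: the big bettor’s variance eats the bankroll, while the small bettor’s thousands of forgettable wins stack into the kind of engine Simons built.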
Big Bets, Big Risks, and Often Big Regret
If you’ve worked in a product org long enough, you’ve seen it happen. The “must-win” roadmap item. The feature that consumes six months, twenty engineers, and the emotional energy of a small country. There’s a kickoff with excitement, a parade of demos, a slide deck with bold forecasts.
And then…crickets. Or worse, a dip in engagement. A spike in support tickets. Users don’t behave as expected. Or don’t care at all.
Take Google Wave. Heralded as the future of communication. It combined chat, email, documents, and real-time collaboration. Built by one of Google’s best teams. Launched with great fanfare and shut down just a year later.
Or Microsoft’s Clippy, a bold, user-facing AI bet made in the '90s. It was supposed to be helpful. It became a punchline.
The lesson? High-conviction, high-resource bets don’t guarantee high returns. They may feel meaningful because of their weight, but their success rate often isn’t any better than that of smaller, nimbler efforts.
A/B Tests as Micro-Trades: The Simons Strategy in Product Form
If we borrow Simons’ approach and translate it into product language, discovery sprints and A/B tests are the trades. Each iteration is a small bet. Every tweak to onboarding copy, every adjustment to a recommendation algorithm, every UX polish to a flow: these are low-cost, reversible experiments.
On their own, they may seem trivial. But over time, they compound.
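What does scoring one of these micro-trades look like? At its simplest, a two-proportion z-test against a control. Here’s a minimal sketch; the conversion counts and traffic numbers below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test: returns (relative lift, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b / p_a - 1, p_value

# Hypothetical onboarding-copy tweak, 10,000 users per arm.
lift, p = ab_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"lift: {lift:+.1%}, p-value: {p:.3f}")  # lift: +12.5%, p-value: 0.054
```

A +12.5% lift that isn’t quite significant yet is exactly the kind of small, cheap signal this approach runs on: extend the test, or move on to the next bet.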
Netflix is a master at this. Their product team is constantly running experiments, not just on recommendation engines but on thumbnails, playback speed, font colors, and episode previews. A single thumbnail change can increase engagement by 20%. Do that across thousands of titles and you’ve made a meaningful dent.
Amazon operates similarly. Every element on a product page (reviews, pricing display, button color, copy) has been tested and optimized in isolation. Jeff Bezos famously described Amazon as “the best place in the world to fail” because they could run hundreds of experiments at once, at scale, without betting the farm.
This is the real lesson: you don't need one big win, you need lots of small ones.
Kelly Criterion for Product: Bet Based on Signal, Not Hype
The Kelly Criterion, a concept from gambling and investing, tells us that bet size should match statistical edge. If the odds are barely in your favor, bet small. If you’re confident and the math backs you up, bet a bit more, but still not everything.
Most product teams operate in reverse. We bet big based on gut. On vision. On what the highest-paid person in the room believes. But if your product culture is driven by evidence, not ego, then your bet sizing changes:
You test the riskiest assumptions first, before committing full resources.
You kill ideas fast if the data says it’s not working.
You double down only when there’s proof, not when there’s pressure.
In this way, the Kelly Criterion becomes a cultural mindset: scale your investment with your confidence, and make sure that confidence is earned.
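For the curious, the formula itself is tiny. For a bet paying b-to-1 with win probability p, the optimal stake is f* = (bp - (1 - p)) / b. A sketch, with illustrative probabilities:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Optimal bankroll fraction to stake on a b-to-1 bet won with probability p."""
    return max(0.0, (b * p - (1 - p)) / b)  # never bet on a negative edge

# A barely-positive edge calls for a tiny position...
print(f"{kelly_fraction(p=0.51, b=1.0):.0%}")  # 2%
# ...and even strong, evidence-backed confidence never justifies going all-in.
print(f"{kelly_fraction(p=0.70, b=1.0):.0%}")  # 40%
```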
Cultivating a Culture of Smallness (In the Best Way)
Adopting a “small bets” approach isn’t just a change in process; it’s a change in company culture.
It means shipping smaller. More often. With less ceremony.
It means telling product managers that success isn’t measured in slide decks or story points, but in learning velocity.
It means designing features to be reversible by default: what Bezos calls “two-way doors.” Go through, test it out, walk back if it doesn’t work. No damage done; there’s a sketch of what this looks like in code below.
It means normalizing failure, not in the vague “fail fast” way that’s become a cliché, but in a measurable, intentional way. Run 10 experiments, expect 8 to fail, and still celebrate that you learned something valuable.
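In code, the simplest two-way door is a feature flag. Here’s a minimal sketch; the flag name, the 5% rollout, and the render functions are all hypothetical, and a real team would likely use a flag service rather than a hardcoded dict.

```python
import hashlib

FLAGS = {"new_onboarding_flow": 0.05}  # expose the experiment to 5% of users

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into a flag's rollout fraction."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF < FLAGS.get(flag, 0.0)

def render_old_flow(user_id: str) -> str:
    return "old onboarding"  # the proven path

def render_new_flow(user_id: str) -> str:
    return "new onboarding"  # the bet

def onboarding(user_id: str) -> str:
    if is_enabled("new_onboarding_flow", user_id):
        return render_new_flow(user_id)  # the experiment
    return render_old_flow(user_id)      # the door back out
```

Reversing the bet is a one-line config change, not an engineering fire drill.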
This is what Spotify did in its early growth days. Instead of launching full playlists, they tested snippets. Instead of overhauling the homepage, they shuffled tiles and observed behavior.
Their product team moved fast, stayed humble, and kept the stakes low. The results? A user base that exploded and a recommendation engine that today sets the industry standard.
We Don’t Need Bigger Bets, We Need Faster Feedback
Let’s zoom out. The product world has never been more complex. AI is changing user expectations. Competitors are launching weekly. Platforms shift overnight. User behavior can pivot with a single viral TikTok.
In this chaos, what wins? Not the team with the most dramatic roadmap. Not the one with the boldest CEO letter. The team that learns fastest.
And you don’t learn fast by betting big. You learn fast by running the Jim Simons playbook: thousands of small trades, each with a tiny edge. Measured, tested, aggregated.
You build a system. You let the data speak. And you stay humble enough to admit that you can’t predict the future, but you can test your way into it.
Closing Thought: One Small Bet
So here’s where I’ll leave you: this week, don’t ask, “What’s the next big thing we’re going to ship?” Instead, ask, “What’s the smallest possible bet we could make that teaches us something useful?”
Make that bet. Observe. Adjust. And then make the next one. Because in the end, the product teams that win aren’t those who bet big; they’re the ones who learn relentlessly.
And just like Simons proved, if you get that system right? You don’t need to gamble. You just need to run the machine.