Scientists Build Better Startups
Founders who learn how to scientifically test their assumptions build startups with 6x higher average revenue. Here’s how they do it…
In 2015, 116 founders of early-stage startups participated in a special training program in Italy. They were invited to 5 lectures and 5 mentorship sessions on building and scaling a tech company. But there was a hidden twist — half the participants were taught through the lens of the scientific method.
These “scientist” founders learned how to use first-principles thinking to examine ideas and identify the underlying assumptions. They were trained to design experiments to test their assumptions, define their criteria for each experiment’s success or failure beforehand, and collect evidence to inform their next decision.
One year after the training program, the “scientist” founders had 6x higher average revenue than the control group (with a median difference of 1.8x).
Why would the scientific method lead to more revenue? Because being better at designing and running experiments means you find the right answer faster. The researchers concluded that “entrepreneurs who were taught to formulate hypotheses and rigorously test them on carefully chosen samples of potential customers were more likely to acknowledge that an idea was bad, pivot from non-starters or pitfalls, and generate more revenue.”
This is part one of a two-part series covering everything a product team should know about assumption testing:
Part 1 → Test assumptions over ideas, ‘Riskiest Assumption Test’ examples, and How to plan your assumption tests (4 frameworks).
Part 2 → 15 assumption testing methods + Defining success and failure. {Next week}
If you’re extra eager, you can read parts 1 and 2 together as one blog post here.
Why the best teams focus on assumptions instead of ideas
Assumption testing forces us to acknowledge what we believe to be true about our idea and find ways to verify those beliefs. It’s not the same as idea testing: we’re not trying to figure out whether our entire idea is viable, we’re checking whether the pillars our idea will be built on are as solid as we assume them to be.
This distinction between idea testing and assumption testing is important. The more time you spend working on an idea, the more likely you are to fall in love with it. We don’t get the same emotional attachment to assumptions because they’re just one little piece of our overall idea. It’s much easier to disprove an individual assumption than to kill an entire idea in one go.
If we were to try to test an entire idea, we would have to build a Minimum Viable Product (MVP). The problem with MVPs is right there in the name — “product.” MVPs offer the perfect excuse to skip assumption testing and start building your product instead. While you may intend for that initial MVP to be small in scope, it often shifts in definition to “Minimum Loveable Product” as you add more and more risky untested features into your product. The more you build, the harder it will be for you to kill a bad idea.
When we shift away from testing our idea and focus instead on assumptions, we replace the MVP with the RAT — Riskiest Assumption Test — which follows this three-step process:
List the assumptions that must be true for your idea to succeed.
Identify which individual assumption is most important for this success.
Design and conduct a test to prove whether this assumption is true or not.
In my opinion, the best part about the RAT is that it doesn’t require a full prototype or launch-ready product (in fact, it explicitly encourages otherwise!).
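The three RAT steps above can be sketched as a tiny prioritization model. This is purely my own illustrative sketch: the importance/evidence scoring and the example assumptions are assumptions of mine, not part of the original framework.

```python
# A minimal sketch of a Riskiest Assumption Test (RAT) backlog.
# The scoring model (importance x lack of evidence) is illustrative,
# not an official part of the RAT framework.

from dataclasses import dataclass


@dataclass
class Assumption:
    statement: str
    importance: int  # 1-5: how critical is this to the idea's success?
    evidence: int    # 1-5: how much proof do we already have?

    @property
    def risk(self) -> int:
        # High importance combined with little evidence = riskiest.
        return self.importance * (6 - self.evidence)


def riskiest(assumptions: list[Assumption]) -> Assumption:
    """Step 2 of the RAT: pick the single assumption to test first."""
    return max(assumptions, key=lambda a: a.risk)


# Step 1: list the assumptions that must be true for the idea to succeed.
board = [
    Assumption("Customers will order food through a 3rd-party site", 5, 1),
    Assumption("Restaurants will pay a fee per delivery", 4, 3),
    Assumption("We can recruit enough drivers", 3, 2),
]

# Step 2: identify the riskiest one; step 3 is designing a test for it.
print(riskiest(board).statement)
```

The exact scoring scheme matters less than the ritual: write assumptions down, rank them, and test only the top one before building anything.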
—
Riskiest Assumption Test Example: DoorDash
In late 2012, the DoorDash founders were interviewing Chloe, the owner of a macaron shop in Palo Alto. At the very last moment of the interview, Chloe received a call from a customer requesting a delivery and turned the order down. The DoorDash founders were mystified: why would she turn business away like that? It turned out that Chloe had “pages and pages of delivery orders” and “no drivers to fulfill them”. Over the following months, they heard “deliveries are painful” over and over while interviewing 200+ other small business owners.
The supply side of the market had clear potential — restaurant owners needed someone to look after these deliveries for them. But the DoorDash founders decided not to build anything for restaurant owners at the start. In fact, when they first launched, they didn’t even tell any of the featured restaurant owners!
In January 2013, the team created paloaltodelivery.com in a single afternoon: a basic website with PDF menus from a handful of local restaurants and a phone number for placing orders. They charged a flat $6 fee for deliveries, had no minimum order size, and collected payments from customers manually in person.
The point of paloaltodelivery.com wasn’t to prove the potential of their idea to restaurant owners. The DoorDash founders knew their riskiest assumption was whether people would trust a third-party website for their food deliveries. This approach was unheard of at the time. If the assumption turned out to be incorrect, they would have to find a completely different way to solve restaurant owners’ delivery problem.
But that wasn’t how things went. The same day they created the website, they received their first order: someone searching Google for “Palo Alto Food Delivery” found their site through a cheap AdWords campaign they were running. Within weeks, the four founders were receiving so many orders, mostly from Stanford students, that they were struggling to keep up with demand. The experiment had proven their riskiest assumption to be true.
—
How the best product teams plan their assumption tests
1. Assumption Boards
If you’ve ever taken part in a hackathon or Startup Weekend, you’ve likely been given a Lean Canvas to map your assumptions. For teams that are just starting on a new idea, though, the Lean Canvas often pushes people to invent assumptions just to fill in boxes they hadn’t even considered before.
I personally prefer the more basic Assumption Board approach for early-stage teams: four boxes (“Hypothesis”, “Assumptions”, “Invalidated”, and “Confirmed”) for them to capture their assumptions, identify which are the riskiest, and track learnings from their tests as they derisk their idea.
Back when I was a Product Innovation Lead at Unilever, there were sprint weeks where we would complete a full iteration of the assumption board every day for five days straight: we’d start each morning by identifying our current riskiest assumption, by lunchtime we had launched an experiment to test it, and before 5pm we would update the board with the result. And we were inventing new types of ice cream, so you can’t claim this pace is too fast for software!
Assumption boards rely on your ability to identify your research hypothesis — here’s a great guide from Teresa Torres to help you with that.
2. Opportunity Solution Trees
The Opportunity Solution Tree (OST) is a visual framework that helps product teams map their research space and figure out what to prioritize working on next. Instead of fixating on specific customer problems or feature ideas in isolation, the OST helps teams brainstorm the various customer opportunities (unmet needs, pains, desires), solutions that could address each opportunity, and experiments that test these solutions.
There are two hidden superpowers built into Opportunity Solution Trees. The first is called reframing, which takes what we’re fixated on (e.g., a new feature idea), identifies what that will accomplish (the customer problem your idea solves), and then brainstorms other ways to accomplish that same objective (other feature ideas). Learn more about reframing in my problem-brainstorming guide.
The second superpower, compare-and-contrast decisions, is more relevant to assumption testing. Research shows that the more ideas you generate, the better your ideas tend to be. The same goes for product decisions: we’re better at picking which customer problem to focus on when we’ve identified a wider range of customer problems to consider. Teresa Torres, creator of the Opportunity Solution Tree, calls these “compare and contrast” decisions, as opposed to narrower “whether or not” decisions.
Using an Opportunity Solution Tree tool like Vistaly or Olta, teams can pick an opportunity or solution and add the underlying assumptions as a note on that node. Then, once they’ve picked which opportunity or solution they want to test, they have a paper trail of the assumptions made when that node was originally added.
—
3. Problem Assumptions vs Solution Assumptions
When most people think about assumption testing, they think about idea validation and proving that they’re solving a customer problem. But, as product management leader Marty Cagan says, “People don’t buy the problem, they buy your solution, so don’t spend a lot of time on the problem because you need as much time as possible to come up with the winning solution!”
This sounds like the opposite of most customer discovery advice, but there’s an important nuance in Marty’s words. When people validate a high-priority customer problem, they often assume this also validates their overall idea, when really they’re skipping over all the assumptions they’ve made about their intended solution. This is another reason the Opportunity Solution Tree is so useful: splitting opportunities (customer needs/pains) from solutions encourages us to map our assumptions for each separately. If you’re not using an Opportunity Solution Tree, you can still add a step to your workflow that identifies problem assumptions and solution assumptions separately, so neither set gets skipped.
—
4. The Four Product Risks
Marty doesn’t just tell us to consider solution assumptions, he also gives us a framework for doing it: The Four Big Product Risks.
• Value risk → will customers buy it (or choose to use it)?
• Usability risk → can users figure out how to use it?
• Feasibility risk → can we build it with the time, skills, and technology we have?
• Business viability risk → does this solution work for the other parts of our business?
A traditional product trio tends to divide ownership of each risk as follows:
• Product Manager → Valuable + Viable
• Product Designer → Usable
• Product Engineer → Feasible
As pointed out by Cesar Tapia, risks and assumptions are opposite sides of the same coin — risks hurt when they’re true, assumptions hurt when they’re false.
—
Using These Frameworks To Plan Your Assumption Tests
I like these frameworks because they make it 10x easier to plan good assumption tests. For example, the Opportunity Solution Tree forces us to ask questions about the links between the company’s desired outcome, the problems customers face, and the solutions we think might solve those problems. This raises questions like, “Is this the only way a customer would try to solve this problem?”, “Is this solution too complex for the type of customer experiencing this need?”, and “Will this experiment give me a new or clearer perspective on the proposed solution?”.
The Four Big Risks framework prompts even more targeted questions about our assumptions, especially those related to solution ideas. It encourages questions like “Do we have the time and resources to accomplish this proposed build?”, “Does this feature solve a problem that’s high enough on the customer’s list of priorities?”, and “Does this solution actually fit into the broader picture of how our company provides value to customers?”.
Just like in idea brainstorming, the more assumptions we can identify, the more likely we are to correctly diagnose which is the riskiest one in most need of testing.
Next week, I’ll be covering 15 assumption testing methods and how to define success/failure upfront. If you’re not a Full-Stack Researcher subscriber, add your email here to be notified.
If you want to continue reading part 2 today, check out the full version of this guide here.
Accidentally sent this to 8,000+ people without including the DoorDash example, smh 😔 Post updated now for anyone taking a look on Substack 👋