Iteration & Experimentation
Master the art of rapid experimentation and learn when to pivot vs. persevere on your path to product-market fit.
Why Iteration Matters
Finding product-market fit is not a single "eureka" moment—it's a series of experiments, learnings, and adjustments. The founders who find PMF fastest are those who can run high-quality experiments quickly and learn from them systematically.
"The only way to win is to learn faster than anyone else." — Eric Ries
The Experimentation Mindset
Shifting from execution to learning mode
Builder vs. Scientist Mode
Many founders get stuck in "builder mode"—shipping features they think users want. To find PMF, you need to operate in "scientist mode"—forming hypotheses and testing them.
Builder Mode (Avoid)
- • "Users will love this feature"
- • Builds before validating
- • Measures vanity metrics
- • Takes feedback personally
- • Ships and hopes
Scientist Mode (Embrace)
- • "I believe this will work because..."
- • Tests before building
- • Tracks leading indicators
- • Seeks disconfirming evidence
- • Ships to learn
The Experiment Culture
- Reward insights gained, even from failed experiments
- Run many small experiments rather than one big bet
- Better to disprove an idea in 1 week than in 3 months
- Track hypotheses, results, and learnings systematically
Designing Strong Hypotheses
Creating testable, falsifiable assumptions
The Hypothesis Template
We believe that [specific change/action]
For [target user segment]
Will result in [measurable outcome]
We'll know this is true when [success metric + threshold]
Hypothesis Examples
Good Hypothesis
"We believe that adding a 'quick start guide' video to onboarding for new users will result in 20% higher day-7 retention. We'll know this is true when we see retention improve from 35% to 42% over 2 weeks with 200+ new users."
Weak Hypothesis
"Making the onboarding better will improve retention."
Problem: Not specific, no measurable outcome, no success threshold
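Before committing to a threshold like "35% to 42%," it's worth checking whether your sample size can actually detect that lift. The sketch below reuses the example's numbers and assumes a hypothetical split of roughly 200 users per variant; it runs a one-sided two-proportion z-test using only the standard library.

```python
import math

def one_sided_z_test(p_baseline: float, n_baseline: int,
                     p_variant: float, n_variant: int) -> tuple[float, float]:
    """One-sided two-proportion z-test: is the variant's rate higher?"""
    pooled = (p_baseline * n_baseline + p_variant * n_variant) / (n_baseline + n_variant)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_baseline + 1 / n_variant))
    z = (p_variant - p_baseline) / se
    # Normal survival function expressed via the complementary error function
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

# 35% -> 42% day-7 retention, assuming ~200 users in each arm (hypothetical)
z, p = one_sided_z_test(0.35, 200, 0.42, 200)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")
```

With these assumed numbers the p-value lands just above 0.05, so a real 7-point lift could still look inconclusive at this sample size. That is useful to know before the experiment runs: either recruit more users or accept a weaker evidence bar.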
Hypothesis Prioritization
You'll have more hypotheses than you can test. Prioritize using ICE scoring:
- Impact: How much will this move the needle if true?
- Confidence: How confident are we that the hypothesis is correct?
- Ease: How quickly and cheaply can we run this experiment?
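The three ICE dimensions can be combined into a single sortable score. A minimal sketch, with hypothetical hypotheses and made-up 1-10 scores; here the dimensions are multiplied (which penalizes any weak dimension), though many teams average them instead.

```python
# (name, impact, confidence, ease) -- all scores are illustrative
hypotheses = [
    ("Quick-start video in onboarding", 8, 6, 7),
    ("Social sharing button", 5, 3, 9),
    ("Annual pricing plan", 7, 5, 4),
]

def ice_score(impact: int, confidence: int, ease: int) -> int:
    # Multiplicative scoring: one low dimension drags the whole score down
    return impact * confidence * ease

ranked = sorted(hypotheses, key=lambda h: ice_score(*h[1:]), reverse=True)
for name, i, c, e in ranked:
    print(f"{ice_score(i, c, e):4d}  {name}")
```

Run the ranking weekly as new hypotheses arrive, and re-score confidence as experiment results come in.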
MVP Experiment Types
Choosing the right experiment for your hypothesis
Smoke Test / Fake Door Test
Speed: Fast. Add a button or link for a feature that doesn't exist yet. Measure clicks to gauge demand.
Concierge MVP
Speed: Medium. Deliver the service manually to understand the problem deeply before automating.
Wizard of Oz
Speed: Medium. The product looks automated to users but is actually powered by humans behind the scenes.
Landing Page Test
Speed: Fast. Create a landing page describing your solution. Measure signups, waitlist joins, or pre-orders.
A/B Test
Speed: Requires traffic. Show different versions to different users and measure which performs better.
Single Feature MVP
Speed: Slower. Build only the core feature. Ship it to users and measure engagement.
Experiment Selection Rule
Always choose the fastest, cheapest experiment that can invalidate your hypothesis. If you can learn the same thing from a landing page test vs. building a feature, always start with the landing page.
The Pivot vs. Persevere Decision
Knowing when to change direction
Signs You Should Pivot
Time to Pivot
- Retention is flat after 3+ iterations
- NPS/satisfaction scores stuck below 30
- Can't find users who "love" it
- Growth requires constant paid acquisition
- Customer interviews reveal a different problem
Persevere Signals
- Metrics improving with each iteration
- A small group of passionate users
- Word-of-mouth starting to appear
- Users hacking the product to do more
- A clear pattern in "who loves it"
Types of Pivots
Zoom-in Pivot
One feature becomes the whole product. What users love most becomes the focus.
Zoom-out Pivot
Your product becomes a feature of a larger product needed to solve the problem.
Customer Segment Pivot
Same product, different target customer who values it more.
Customer Need Pivot
Same customer, different problem that you discovered is more urgent.
Platform Pivot
Change from application to platform or vice versa.
Business Model Pivot
Same product but different way of capturing value (pricing, monetization).
The Pivot Meeting
Schedule a regular "pivot or persevere" meeting every 4-6 weeks. Review all experiment data, customer feedback, and metrics. Make an explicit decision: pivot, persevere, or (rarely) stop. Don't let pivots happen by drift—make them intentional.
Maximizing Iteration Velocity
Learning faster than the competition
The Iteration Formula
Learning = (Speed × Quality) of Experiments
Optimize for both—fast but sloppy experiments teach nothing; slow but rigorous ones take forever
Speed Multipliers
Weekly release cycles
Ship every week instead of every month
Direct customer access
Talk to users daily, not monthly
Real-time analytics
See results immediately, not after a report
Pre-build validation
Validate before coding—use mockups, landing pages, prototypes
The Build Trap
Many teams confuse shipping features with making progress. Track these metrics to avoid the build trap:
Output Metrics (Avoid)
- Features shipped per sprint
- Lines of code written
- Story points completed
- Bugs fixed
Outcome Metrics (Track)
- User retention change
- Activation rate improvement
- NPS/satisfaction delta
- Learning velocity (experiments per week)
Building Learning Loops
Systematizing your experimentation process
The Build-Measure-Learn Loop
- Build: turn your hypotheses into an MVP experiment
- Measure: test with users and collect data
- Learn: analyze the results and extract insights
Experiment Tracking Template
| Experiment | Hypothesis | Metric | Target | Result | Learning |
|---|---|---|---|---|---|
| Onboarding video | Improves activation | Day-7 retention | 42% | 48% | Video > text guides |
| Social sharing | Drives referrals | K-factor | 0.3 | 0.1 | Users don't share yet |
| Email sequence | Re-engages churned | Reactivation % | 10% | 8% | Need better targeting |
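A tracking template like the one above can live in a spreadsheet, but it is also easy to keep in code. A minimal sketch using the same illustrative rows; the `Experiment` class and its field names are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Experiment:
    name: str
    hypothesis: str
    metric: str
    target: float
    result: Optional[float] = None
    learning: str = ""

    def hit_target(self) -> Optional[bool]:
        # None until the experiment has a recorded result
        return None if self.result is None else self.result >= self.target

# Rows mirror the table above (metrics expressed as fractions)
log = [
    Experiment("Onboarding video", "Improves activation", "Day-7 retention", 0.42, 0.48, "Video > text guides"),
    Experiment("Social sharing", "Drives referrals", "K-factor", 0.30, 0.10, "Users don't share yet"),
    Experiment("Email sequence", "Re-engages churned", "Reactivation %", 0.10, 0.08, "Need better targeting"),
]
wins = [e.name for e in log if e.hit_target()]
print(wins)
```

Logging misses alongside wins is the point: "Users don't share yet" is a learning you can only keep if the failed experiment is recorded.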
Weekly Rhythm
1. Review last week's experiments and extract learnings
2. Choose this week's experiments based on those learnings
3. Design and ship the experiments
4. Talk to users and gather qualitative insights
Practice Exercise
Apply what you've learned by designing an experiment for your product:
1. Identify your biggest assumption about why users aren't retaining or engaging
2. Write a complete hypothesis using the template provided
3. Choose the fastest experiment type that can test it
4. Define your success criteria and timeline
5. Run the experiment and document your learnings