Who this is for
This lesson is for Product Analysts and aspiring analysts who want to quickly pinpoint where users abandon key flows (signup, checkout, onboarding) and turn those insights into actions that improve conversion and revenue.
Why this matters
- Find friction fast: See exactly which step loses the most users and why.
- Prioritize impact: Fix the leakiest step first to unlock the biggest gains.
- Drive product decisions: Support UX, engineering, and growth with clear, quantified evidence.
- Real tasks you will do: size the drop-off, segment it (device, country, traffic source), estimate lift and revenue impact, and recommend experiments or fixes.
Concept explained simply
A funnel is a sequence of steps users take toward a goal. Drop-off analysis measures how many users fail to move from one step to the next, and why.
Mental model: think of a leaky bucket. Water poured in (users at step 1) leaks through holes (friction). Your job: find the biggest holes and patch them.
Key metrics and formulas
- Step conversion (i → i+1) = users at step i+1 / users at step i
- Step drop-off rate (i → i+1) = 1 − step conversion
- Cumulative conversion to step k = users at step k / users at step 1
- Absolute loss at step i = users at step i − users at step i+1
- Estimated impact of a fix = absolute loss × expected lift × downstream survival rate
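These formulas map directly to a few lines of code. A minimal Python sketch (the function name is illustrative; the sample counts are the checkout funnel used later in this lesson):

```python
def funnel_metrics(counts):
    """Compute funnel metrics from an ordered list of user counts per step.

    Returns (step_conversions, drop_off_rates, cumulative_conversions,
    absolute_losses)."""
    step_conv = [counts[i + 1] / counts[i] for i in range(len(counts) - 1)]
    drop_off = [1 - c for c in step_conv]
    cumulative = [c / counts[0] for c in counts]
    abs_loss = [counts[i] - counts[i + 1] for i in range(len(counts) - 1)]
    return step_conv, drop_off, cumulative, abs_loss

# Checkout funnel: viewed, add to cart, begin checkout, payment, confirmed
conv, drop, cum, loss = funnel_metrics([20_000, 10_000, 7_000, 4_900, 4_410])
# conv: 50%, 70%, 70%, 90%; biggest absolute loss is at the first step
```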
Notes on data choice
- Count users for multi-session journeys; count sessions for single-visit actions.
- Use a consistent lookback window (for example, 7 or 14 days) and require ordered steps.
- Exclude test traffic and bots; deduplicate repeated events in the same step.
Data preparation essentials
- Define the funnel clearly: exact events, order, and time window.
- Choose identity: unique users (user_id) or sessions (session_id) based on the flow.
- Normalize events: one row per user per step; handle retries carefully (keep the first completion within the journey).
- Check sample size: small counts lead to noisy rates; use rolling windows and aggregates.
Worked examples
Example 1: Checkout funnel
Steps and user counts in a week: 1 Product viewed: 20,000; 2 Add to cart: 10,000; 3 Begin checkout: 7,000; 4 Enter payment: 4,900; 5 Order confirmed: 4,410.
- Step conversions: 1→2 = 10,000/20,000 = 50%; 2→3 = 70%; 3→4 = 70%; 4→5 = 90%.
- Drop-off rates: 50%, 30%, 30%, 10%.
- Cumulative to purchase: 4,410/20,000 = 22.05%.
- Biggest hole: 1→2 (50% drop; 10,000 users lost).
What this tells you
The largest gain likely comes from making it easier to add to cart (price clarity, stock visibility, CTAs, page speed).
Example 2: Onboarding funnel
1 Sign up: 8,000; 2 Email verified: 6,400; 3 First project created: 3,200; 4 First share/invite: 960.
- Conversions: 1→2 = 80%; 2→3 = 50%; 3→4 = 30%.
- Biggest hole: 2→3 (users stall after verification).
What to try
Add an in-product checklist, a template gallery, and a prominent Create project button. Trigger a timely nudge if no project within 24 hours.
Example 3: Estimating impact
Continuing Example 1, suppose you can lift 1→2 by 6 percentage points (from 50% to 56%).
- Extra users reaching step 2: 20,000 × 6% = 1,200.
- Assuming the same downstream survival (0.7 × 0.7 × 0.9 = 44.1%), extra purchases ≈ 1,200 × 44.1% ≈ 529.
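The same impact estimate as a reusable helper (the function name is illustrative):

```python
def extra_conversions(base_users, lift_pp, downstream_rates):
    """Estimate extra final conversions from lifting one step's conversion
    by lift_pp percentage points (expressed as a fraction, e.g. 0.06)."""
    extra_at_next_step = base_users * lift_pp
    survival = 1.0
    for rate in downstream_rates:
        survival *= rate  # chance an extra user survives to the end
    return extra_at_next_step * survival

# Example 1: lift step 1→2 by 6 pp; downstream survival 0.7 × 0.7 × 0.9
extra = extra_conversions(20_000, 0.06, [0.7, 0.7, 0.9])  # ≈ 529 purchases
```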
Takeaway
A small upstream lift compounds into many extra conversions downstream.
How to find root causes
- Segment: device, browser, country, traffic source, new vs returning, ad campaign, page version.
- Time analysis: time to complete each step; spikes suggest latency or load issues.
- Error signals: form validation errors, failed payments, 4xx/5xx rates.
- Path digests: alternate paths users take before drop-off (loops, dead ends).
- Qualitative: session replays and support tickets to validate hypotheses.
5 Whys mini-guide
Keep asking why until you reach a fixable root (for example, high mobile drop-off → slow page → heavy images → compress images).
Prioritize fixes
Use a quick score: Impact × Confidence × Ease (ICE), where higher ease means lower effort. For impact, estimate extra conversions: absolute loss × expected lift × downstream rate, times value per conversion if useful.
- Start with the largest drop that is actionable this sprint.
- Prefer fixes that help high-value segments (for example, paying countries).
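A small sketch of ICE ranking, using the common convention Impact × Confidence × Ease (so lower-effort fixes score higher). The candidate fixes and their 1-10 scores below are illustrative, not data from this lesson:

```python
def ice_score(impact, confidence, ease):
    """Each input on a 1-10 scale; higher ease means less effort."""
    return impact * confidence * ease

# Hypothetical candidates: (name, impact, confidence, ease)
candidates = [
    ("Show shipping costs before add to cart", 8, 7, 6),
    ("Rebuild checkout as a single page", 9, 5, 2),
    ("Compress mobile product images", 6, 8, 9),
]
ranked = sorted(candidates, key=lambda c: ice_score(*c[1:]), reverse=True)
# The low-effort image fix outranks the risky checkout rebuild
```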
Exercises you will practice
These mirror the grading tasks below. Solve here, then check solutions in the collapsible sections.
Exercise 1: Compute drops and find the critical step
Data: 1 Product viewed: 20,000; 2 Add to cart: 10,000; 3 Begin checkout: 7,000; 4 Enter payment: 4,900; 5 Order confirmed: 4,410.
- Task: Calculate each step conversion, step drop-off, and cumulative conversion to the final step. Identify the critical drop and list two fix ideas.
Show solution
Step conversions: 50%, 70%, 70%, 90%. Step drop-offs: 50%, 30%, 30%, 10%. Cumulative to purchase: 22.05%. Critical drop: 1→2. Fix ideas: make CTAs clearer and above the fold; show shipping costs and stock before add to cart.
Exercise 2: Segment and size the opportunity
At step 3→4 (Begin checkout → Enter payment): Desktop started step 3: 4,000; reached step 4: 3,200. Mobile started step 3: 6,000; reached step 4: 3,000. Assume step 4→5 conversion remains 90% for both. If we lift mobile step 3→4 conversion from 50% to 65%, how many extra orders result? Assume a value per order of 50.
Show solution
Mobile current step 3→4: 3,000/6,000 = 50%. Improved to 65% gives 3,900 at step 4; delta = 900. Orders gained = 900 × 0.9 = 810. Revenue = 810 × 50 = 40,500. Prioritize mobile.
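The same arithmetic, written out in Python so you can swap in your own numbers (variable names are illustrative):

```python
mobile_step3 = 6_000
current_rate = 3_000 / mobile_step3   # 50% reach Enter payment today
improved_rate = 0.65                  # target after the fix
step4_to_5 = 0.90                     # payment → confirmed, unchanged
value_per_order = 50

extra_at_step4 = mobile_step3 * (improved_rate - current_rate)  # 900 users
extra_orders = extra_at_step4 * step4_to_5                      # 810 orders
extra_revenue = extra_orders * value_per_order                  # 40,500
```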
Checklist before you move on
- I can compute step and cumulative conversion correctly.
- I can identify the largest absolute and percentage drop.
- I can segment drop-off and compare segments fairly.
- I can estimate impact in conversions and revenue.
Common mistakes and self-check
- Mixing users and sessions. Self-check: Does your denominator match your journey type?
- Counting out-of-order events. Self-check: Are steps strictly ordered within the window?
- Overreacting to tiny samples. Self-check: Do segments meet a minimum sample threshold?
- Ignoring downstream effects. Self-check: Did you apply the downstream survival rate when sizing impact?
- Comparing different time windows. Self-check: Are all segments computed over the same date range?
Practical projects
- Checkout uplift: Build a weekly funnel report, segment by device, and propose one A/B test for the worst step. Include expected extra orders.
- Onboarding speed: Measure time to complete each onboarding step and correlate slow steps with drop-off. Propose UX changes.
- Campaign quality: Compare funnels by acquisition source and recommend budget shifts based on conversion quality, not just volume.
Learning path
- Master step math: practice conversions, drop-offs, and cumulative rates.
- Segment wisely: add device, country, source, and new vs returning.
- Add timing and errors: combine latency and error metrics with drop-offs.
- Prioritize: estimate impact with downstream survival and value per conversion.
- Validate: design small experiments or UX tweaks and re-measure.
Next steps
- Go deeper into experiment analysis to validate fixes rigorously.
- Layer in cohort analysis to see if drop-offs change by user age.
- Automate a weekly funnel with alerts when step rates deviate beyond thresholds.
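The alerting idea above can start as something very simple: compare this week's step rates to a baseline and flag large moves. A minimal sketch (the function name and the 10-point threshold are assumptions, not a prescribed standard):

```python
def check_step_rates(current, baseline, threshold=0.10):
    """Flag funnel steps whose conversion rate moved more than `threshold`
    (absolute percentage points, as a fraction) versus the baseline week.

    current/baseline: dicts mapping step label -> conversion rate."""
    alerts = []
    for step, rate in current.items():
        base = baseline.get(step)
        if base is not None and abs(rate - base) > threshold:
            alerts.append((step, base, rate))
    return alerts

# A 68% → 55% dip on one step exceeds the 10-point threshold and is flagged
alerts = check_step_rates({"2→3 mobile": 0.55}, {"2→3 mobile": 0.68})
```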
Mini challenge
You see that mobile 2→3 conversion dipped from 68% to 55% this week while desktop stayed stable. List your top three hypotheses and the first metric you would check for each.
Show example answer
- Hypothesis: Mobile page got slower. Check: median time to interactive and p95 load.
- Hypothesis: Validation errors increased. Check: rate of form error events per attempt.
- Hypothesis: A campaign sent low-intent traffic. Check: mobile traffic mix by source and their step 1→2 rates.
About the quick test
The quick test is available to everyone. If you are logged in, your progress will be saved automatically.