Why this matters
Stakeholders act on recommendations, not metrics. Making tradeoffs explicit and tying every assumption to a monitoring signal turns a model report into a decision others can own.
Common assumption categories
- Data: Stationarity, label quality, sampling bias, missingness patterns.
- Model: Calibration, generalization to new segments, fairness across groups.
- System: Latency budgets, memory limits, rollout and monitoring.
- Business: Volume forecasts, regulatory constraints, ops capacity.
Assumptions-to-monitor mapping
- Stationarity → Monitor PSI/feature drift weekly (see the PSI sketch after this list).
- Calibration → Reliability curves and Brier score monthly.
- Fairness → Group-wise precision/recall each release.
- Latency → P95 timing per model stage on each deploy.
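For the stationarity row, here is a minimal PSI sketch, assuming quantile bins taken from the training baseline; the bin count and the 0.1/0.25 alert levels are common rules of thumb, not fixed standards.

```python
import numpy as np

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between a baseline and a current sample."""
    # Bin edges come from the baseline (e.g., training) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values
    e_pct = np.histogram(expected, edges)[0] / len(expected) + eps
    a_pct = np.histogram(actual, edges)[0] / len(actual) + eps
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Example: compare this week's feature values against the training baseline.
rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)      # stand-in for the training sample
this_week = rng.normal(0.3, 1, 10_000)   # stand-in for a drifted production sample
print(f"PSI = {psi(baseline, this_week):.3f}")
# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 material drift.
```

Wired into a weekly job, any feature whose PSI crosses your action threshold becomes the trigger in the risk register described later.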
Pre-presentation checklist
- Is the goal stated in one sentence?
- Are 2–3 options compared on 2–3 key dimensions (not 10)?
- Are assumptions explicit and testable?
- Is the recommendation tied to the goal, not just the best metric?
- Do you have a monitoring/rollback plan?
Steps to prepare a clear tradeoff slide
1. Write the primary outcome and constraints (e.g., recall ≥ 0.6, latency ≤ 100 ms).
2. Pick 2–3 meaningful alternatives and remove dominated options (see the dominance-check sketch after these steps).
3. Show deltas on the top 2–3 dimensions: business metric, latency/cost, interpretability/fairness.
4. For each assumption, add a monitoring signal and a fallback.
5. End with one line: "I recommend [X] because [Y aligned with the goal]."
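Step 2 mentions removing dominated options; below is a minimal sketch of that check. The option names, metric values, and the sign flip that makes latency read "higher is better" are illustrative assumptions, not a fixed template.

```python
import pandas as pd

options = pd.DataFrame(
    {"recall": [0.62, 0.60, 0.58],
     "latency_ms": [80, 95, 120],        # lower is better
     "interpretability": [2, 3, 3]},     # 1-3 ordinal, higher is better
    index=["Model A", "Model B", "Model C"],
)

# Flip latency so every column reads "higher is better" for the dominance check.
scores = options.assign(latency_ms=-options["latency_ms"])

def dominated(df):
    """An option is dominated if another option is at least as good on every
    dimension and strictly better on at least one."""
    out = {}
    for i in df.index:
        out[i] = any(
            (df.loc[j] >= df.loc[i]).all() and (df.loc[j] > df.loc[i]).any()
            for j in df.index if j != i
        )
    return pd.Series(out)

print(options[~dominated(scores)])  # keep only non-dominated options on the slide
```

In this toy table, Model C loses to Model B on recall and latency without winning anywhere else, so it drops off the slide; Models A and B each win on a different dimension and stay as the real tradeoff.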
Exercises
Do these before the quick test, then compare your answers with the provided solutions.
Exercise 1: Rewrite with tradeoffs and assumptions
Original: “Model B is better; AUC is higher.” Rewrite it for a product manager, highlighting 2 tradeoffs and 2 assumptions, and end with a recommendation.
Exercise 2: One-slide decision for A/B test duration
Scenario: You need a test to detect a +1% conversion lift. Option A runs 2 weeks with 80% power; Option B runs 1 week with 60% power. Prepare a 4–6 bullet decision summary with assumptions and a fallback.
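A sketch of the sample-size arithmetic behind the two options, assuming a 10% baseline conversion rate and a two-sided α = 0.05 (both illustrative; substitute your real baseline):

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline, lift = 0.10, 0.01                   # detect 10% -> 11% conversion
effect = proportion_effectsize(baseline + lift, baseline)

for power in (0.80, 0.60):
    # Solve for the per-arm sample size needed at each power level.
    n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                     power=power, alternative="two-sided")
    print(f"power={power:.0%}: ~{n:,.0f} users per arm")
```

Whether one week of traffic actually reaches the 60%-power sample size is exactly the kind of assumption to state on the slide, with "extend the test" as the fallback.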
Checklist for your answers
- Goal stated in first line
- At least two tradeoffs quantified or clearly described
- Two assumptions and what happens if they break
- Clear recommendation and monitoring plan
Common mistakes and self-check
- Hiding assumptions: If a condition must hold, state it explicitly and say how you'll monitor it.
- Metric dumping: Three dimensions max on the main slide; put extras in backup slides.
- Vague risks: Tie each risk to a metric trigger and an action (see the example register below).
- No north star: Start with the business goal; avoid optimizing for a proxy.
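As an example of tying each risk to a trigger and an action (and a starting shape for the risk-register project below), here is a hypothetical register; every threshold and action is an illustrative assumption to adapt.

```python
# Hypothetical risk register: one trigger and one concrete action per risk.
RISK_REGISTER = [
    {"risk": "feature drift on key inputs",
     "trigger": "PSI > 0.25 on any top-5 feature (weekly job)",
     "action": "freeze rollout, retrain on the last 90 days"},
    {"risk": "calibration decay",
     "trigger": "Brier score up > 10% vs. launch baseline (monthly)",
     "action": "re-fit the calibration layer"},
    {"risk": "latency regression",
     "trigger": "P95 > 100 ms on the deploy canary",
     "action": "roll back to the previous model version"},
]
```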
Self-check prompt
Could a non-technical stakeholder repeat your recommendation, and the reason behind it, in under 20 seconds?
Practical projects
- Turn a past model report into a one-slide decision with options, tradeoffs, assumptions, and a monitoring plan.
- Run a simulated threshold sweep on a public classification dataset and present two thresholds as options (see the sketch after this list).
- Create a “risk register” for your team listing your top 5 assumptions and their monitoring signals.
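For the threshold-sweep project, a minimal sketch using scikit-learn's bundled breast-cancer dataset; the two candidate thresholds (0.3 and 0.7) are arbitrary operating points chosen for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
proba = LogisticRegression(max_iter=5000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

for threshold in (0.3, 0.7):             # two candidate operating points
    pred = proba >= threshold
    print(f"t={threshold}: precision={precision_score(y_te, pred):.2f}, "
          f"recall={recall_score(y_te, pred):.2f}")
```

Present the two thresholds as options with deltas, not as a metric dump: one favors recall, the other precision, and the recommendation should follow from the business goal.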
Learning path
- Start: Practice the 30-second talk track with a teammate.
- Next: Use the decision slide template in your next review.
- Then: Add assumptions-to-monitor mapping to your dashboards.
- Finally: Mentor a junior teammate through one tradeoff presentation.
Who this is for
- Data Scientists and ML Engineers presenting model choices to product and business stakeholders.
- Analysts supporting decision meetings with clear options and risks.
Prerequisites
- Basic understanding of model metrics (e.g., precision/recall, MAPE, CTR).
- Ability to compare models and read latency/cost metrics.
Next steps
- Apply the template to one active project this week.
- Schedule a 10-minute dry run with a peer to get clarity feedback.
- Set up one monitoring metric per key assumption.
Mini challenge
In 120 words, present two options for a churn model: one optimized for recall, one for precision. Include at least two assumptions and end with a recommendation.
Quick Test