Why this matters
A clear Jobs To Be Done (JTBD) framing keeps AI products focused on real customer progress, not features. For AI Product Managers, JTBD anchors discovery, scoping, metrics, and experimentation.
- Prioritize: Decide which customer struggles justify AI vs. simpler automation.
- Scope: Write solution-agnostic acceptance criteria and data requirements.
- Measure: Define outcome metrics before model selection.
- De-risk: Align stakeholders on the customer job, not the algorithm.
Concept explained simply
JTBD describes the progress a user wants to make in a situation, independent of your product. A simple job story template:
When [situation], I want to [motivation/struggle], so I can [desired outcome].
- Functional job: The core task (e.g., assess risk).
- Emotional job: How they want to feel (e.g., confident, not rushed).
- Social job: How they want to be perceived (e.g., competent to peers).
Desired Outcome Statements (DOS) make jobs measurable. Combine a direction + metric + object + context (a small sketch of this pattern follows the examples):
- Minimize time to draft a customer reply for high-priority tickets.
- Increase recall of relevant policies in compliance reviews.
- Reduce variance in forecasts for long-tail SKUs.
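A minimal sketch of the DOS pattern as a data structure (all names and fields are illustrative); rendering statements from the four parts keeps a backlog of DOS drafts consistent:

```python
from dataclasses import dataclass

@dataclass
class DesiredOutcome:
    """One DOS: direction + metric + object + context."""
    direction: str  # e.g., "Increase", "Minimize", "Reduce"
    metric: str     # e.g., "recall"
    obj: str        # e.g., "relevant policies"
    context: str    # e.g., "compliance reviews"

    def render(self) -> str:
        return f"{self.direction} {self.metric} of {self.obj} in {self.context}."

dos = DesiredOutcome("Increase", "recall", "relevant policies", "compliance reviews")
print(dos.render())
# Increase recall of relevant policies in compliance reviews.
```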
Mental model
Think of a pipeline:
1. Situation triggers → 2. Struggling moments → 3. Job story → 4. Desired outcomes (measurable) → 5. Acceptance criteria → 6. Data and constraints → 7. Candidate solutions (AI or not).
Quick self-check: Is your job story strong?
- Solution-agnostic (no references to models, features, or UI); a quick keyword check is sketched after this list.
- Specifies a situation, not just a persona.
- Contains a measurable outcome or a path to one.
- Connects to a business outcome you can track.
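The first check can even be automated: scan a draft job story for solution vocabulary. A minimal sketch; the keyword list is illustrative, not exhaustive:

```python
import re

# Illustrative red-flag terms; extend with your own stack's vocabulary.
SOLUTION_WORDS = {"llm", "chatbot", "classifier", "model", "gpt",
                  "dashboard", "dropdown", "button"}

def solution_words_found(job_story: str) -> list[str]:
    """Return solution words present in the story; empty means it passes."""
    words = set(re.findall(r"[a-z]+", job_story.lower()))
    return sorted(words & SOLUTION_WORDS)

story = "When a transaction is flagged, I want an LLM to score it."
print(solution_words_found(story))  # ['llm'] -> rewrite solution-agnostically
```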
Practical framework and template
- Define the situation
- When/where does the struggle happen? What triggers it?
- Who is present? What tools or data are available?
Mini task: Write one sentence starting with “When …”.
- Capture the struggle
- What slows them down or creates risk?
- What trade-offs are they making today?
Mini task: Write “I want to …” without referencing features.
- State the desired outcome
- What does “better” look like? How would they know?
Mini task: Finish “so I can …” with a measurable result.
- Write Desired Outcome Statements (DOS)
- Use verbs like minimize, reduce, increase, improve, avoid.
- Attach a metric, baseline, and target range when possible.
Mini task: Draft 3 DOS. Example: “Reduce average review time from 45m to <20m within 3 months.” (A sketch for checking a DOS like this against logged data follows this list.)
- Acceptance criteria (solution-agnostic)
- Define observable behaviors or thresholds, not model names.
- Include quality bars and guardrails (safety, compliance).
- Data and constraints
- Data needed, availability, quality checks, privacy rules.
- Latency, cost, and interpretability constraints.
- Only then: Candidate solutions
- Consider AI, rules, UI flows, or process changes.
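A DOS with a baseline and target is directly checkable against logged data. A minimal sketch, assuming you record the metric per case (names and sample values are hypothetical):

```python
from statistics import mean

def dos_met(review_minutes: list[float], target_minutes: float) -> bool:
    """Check 'Reduce average review time ... to <20m' against observations."""
    avg = mean(review_minutes)
    print(f"observed average: {avg:.1f}m (target: <{target_minutes}m)")
    return avg < target_minutes

samples = [18.0, 22.5, 15.0, 19.5, 21.0]  # minutes per review, from logs
print("DOS met:", dos_met(samples, target_minutes=20))
```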
Copy-paste JTBD template
Job story: When [situation], I want to [struggle/motivation], so I can [desired outcome].
Desired Outcome Statements (3–5):
- [Direction] [metric] of [object] in [context]
- [Direction] [risk/error] for [segment]
- [Direction] [time/cost] while maintaining [quality/safety]
Acceptance criteria:
- Success threshold(s): …
- Guardrails: …
- Observability: …
Data/constraints: Sources, freshness, privacy, latency, cost.
Candidate solutions: AI / rules / process / UI.
Worked examples
1) Customer Support: Response drafting
Job story: When I receive a high-priority ticket with a long conversation history, I want to quickly understand context and propose a correct reply, so I can resolve the case fast without missing policy details.
Desired outcomes:
- Reduce time to first draft from 10m to <2m.
- Reduce policy violations in replies to <1%.
- Increase customer satisfaction (CSAT) on these tickets by +0.3 points.
Acceptance criteria (solution-agnostic):
- Drafts reference correct order/account info 95%+ of the time in audit samples.
- Drafts include links to relevant policy sections or quoted policy text.
- Latency to draft under 5s at the 95th percentile (a p95 check is sketched after this example).
Data/constraints: Access to ticket history, policy corpus, PII protection, redaction for training data.
Candidate solutions: AI summarization + drafting; or rule-based templates + dynamic merge fields. JTBD helps compare both.
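The latency criterion is directly testable. A minimal p95 check using the standard library, assuming you collect per-draft latencies in seconds:

```python
import statistics

def p95(latencies_s: list[float]) -> float:
    """95th-percentile latency via statistics.quantiles (99 cut points)."""
    return statistics.quantiles(latencies_s, n=100)[94]

draft_latencies = [1.2, 0.9, 3.4, 2.1, 4.8, 1.7, 2.9, 3.1, 0.8, 2.2]
print(f"p95 = {p95(draft_latencies):.2f}s; passes: {p95(draft_latencies) < 5.0}")
```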
2) Fintech Risk: Transaction review
Job story: When a transaction is flagged as suspicious, I want to assess risk quickly with explainable evidence, so I can make a defensible decision and minimize false positives.
Desired outcomes:
- Reduce manual review time from 8m to <3m.
- Reduce false positives by 20% without letting the false negative rate exceed 2% (an error-rate check is sketched after this example).
- Increase proportion of reviews with documented rationale to >99%.
Acceptance criteria:
- Every recommendation includes top 3 contributing factors with human-readable reasons.
- Audit log captures inputs, versioning, and decision maker notes.
- Model suggestions must be overrideable with rationale.
Data/constraints: Transaction features, graph data, explainability requirement, latency < 2s, regulatory retention.
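Both error rates can be tracked from labeled review outcomes. A minimal sketch, assuming each flagged transaction is eventually labeled fraud or not (data is hypothetical):

```python
def error_rates(flagged: list[bool], fraud: list[bool]) -> tuple[float, float]:
    """Return (false positive rate, false negative rate) from labeled cases."""
    fp = sum(f and not y for f, y in zip(flagged, fraud))
    fn = sum(not f and y for f, y in zip(flagged, fraud))
    return fp / sum(not y for y in fraud), fn / sum(fraud)

flagged = [True, True, False, True, False, False, True, False]
actual  = [True, False, False, True, False, True, False, False]
fpr, fnr = error_rates(flagged, actual)
print(f"FPR={fpr:.0%}, FNR={fnr:.0%}  (guardrail: FNR <= 2%)")
```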
3) Retail: Demand planning
Job story: When planning inventory for seasonal items, I want reliable demand projections with uncertainty ranges, so I can place orders that avoid stockouts without overstock.
Desired outcomes:
- Reduce MAPE (mean absolute percentage error) for seasonal SKUs from 28% to <18% (a MAPE and interval-coverage sketch follows this example).
- Reduce stockouts by 30% for top 100 seasonal SKUs.
- Provide 80% prediction intervals per SKU-week.
Acceptance criteria:
- Forecasts include P10/P50/P90 and SKU-level feature importance.
- System flags data gaps and outliers before forecast generation.
- Weekly re-forecast completes within 30 minutes for 10k SKUs.
Data/constraints: Sales history, promotions, weather, holidays; cost ceiling; interpretability for planners.
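Both forecast outcomes reduce to simple computations over per-SKU-week arrays. A minimal sketch of MAPE and 80% interval coverage (all values hypothetical):

```python
def mape(actual: list[float], forecast: list[float]) -> float:
    """Mean absolute percentage error (assumes nonzero actuals)."""
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def coverage(actual: list[float], p10: list[float], p90: list[float]) -> float:
    """Share of actuals inside the 80% interval; should be near 0.80."""
    return sum(lo <= a <= hi for a, lo, hi in zip(actual, p10, p90)) / len(actual)

actual   = [120, 95, 180, 60]
forecast = [110, 100, 150, 70]
p10, p90 = [90, 80, 130, 45], [140, 115, 200, 85]
print(f"MAPE={mape(actual, forecast):.1%}, coverage={coverage(actual, p10, p90):.0%}")
```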
Exercises (hands-on)
Do these before the quick test. Keep outputs short and solution-agnostic.
- Exercise 1
Scenario: A B2B sales rep prepares for a call with a new lead after a long email thread and attachments.
- Write 1 job story.
- Write 3 Desired Outcome Statements with tentative metrics.
- Draft 3 solution-agnostic acceptance criteria.
- Exercise 2
Scenario: A healthcare claims reviewer must decide if a claim requires additional documentation.
- Write 1 job story.
- List required data and constraints (privacy, latency, explainability).
- Write 2 guardrail criteria.
Quality checklist for your answers
- Job story avoids feature/model mentions.
- Outcomes use clear direction verbs and metrics.
- Acceptance criteria are observable and testable.
- Data/constraints cover privacy, safety, and latency.
Common mistakes and how to self-check
- Jumping to solutions: Mentions of “LLM”, “classifier”, or “chatbot” in the job story are red flags.
- Persona-only framing: “For analysts” is not a situation. Add triggers and context.
- Vague outcomes: Replace “better/faster/smarter” with target ranges or proxy metrics.
- Metric tunnel vision: Balance speed with quality and safety guardrails.
- Ignoring data reality: Validate data access, freshness, and quality before committing.
Self-audit
- Can a stakeholder read your JTBD and imagine multiple solutions?
- Do outcomes connect to a business KPI you can measure?
- Are acceptance criteria independently checkable by QA?
Who this is for
- AI Product Managers and aspiring PMs shaping ML/AI features.
- Data Scientists and Designers collaborating on discovery.
- Founders and PMMs defining outcome-first product narratives.
Prerequisites
- Basic understanding of product discovery and stakeholder interviewing.
- Awareness of metrics and experimentation (A/B or offline evaluation).
- High-level knowledge of AI capabilities and limitations.
Learning path
- Learn JTBD basics and draft job stories.
- Translate to Desired Outcome Statements and acceptance criteria.
- Validate with users/stakeholders; refine metrics and constraints.
- Only then explore candidate solutions and feasibility.
- Set up measurement and guardrail monitoring (a minimal monitor sketch follows).
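A guardrail monitor can start as a threshold check over a rolling window of the metric. A minimal sketch (window size, threshold, and the metric itself are illustrative):

```python
from collections import deque

class GuardrailMonitor:
    """Alert when the rolling mean of a metric crosses a threshold."""
    def __init__(self, threshold: float, window: int = 50):
        self.threshold = threshold
        self.values: deque = deque(maxlen=window)

    def record(self, value: float) -> bool:
        """Record one observation; True means the guardrail is breached."""
        self.values.append(value)
        return sum(self.values) / len(self.values) > self.threshold

monitor = GuardrailMonitor(threshold=0.01)   # e.g., policy-violation rate < 1%
for violated in [0, 0, 1, 0]:                # 1 = violating reply observed
    if monitor.record(violated):
        print("Guardrail breached: pause and review recent replies")
```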
Practical projects
- Create a JTBD dossier for one workflow at your company: job story, outcomes, acceptance criteria, and data map.
- Run a 3-interview discovery sprint to validate the job and sharpen outcomes.
- Design a solution-agnostic experiment plan comparing AI vs. non-AI baselines against the same JTBD outcomes.
Next steps
- Complete the quick test to validate understanding.
- Apply the template to one real use case this week.
- Share your JTBD and outcomes with a partner for feedback using the checklist above.
Mini challenge
Pick any recurring decision in your product (e.g., prioritizing a backlog, triaging reports). Write:
- 1 job story,
- 3 Desired Outcome Statements with rough targets,
- 3 acceptance criteria and 2 guardrails.
Tip if you get stuck
Ask: What makes this task slow, risky, or frustrating today? What would a confident, repeatable result look like tomorrow?