Who this is for
Prompt Engineers, Data Scientists, Analysts, and Builders who want reliable, auditable LLM outputs for tasks like data extraction, analytics reasoning, code generation, and multi-step workflows.
Prerequisites
- Basic prompt writing (clear instructions, role, constraints)
- Comfort reading structured outputs (bullet lists, JSON)
- Optional: familiarity with your target domain (analytics, text processing, or coding)
Why this matters
Real work rarely fits in a single-shot prompt. Decomposing tasks into clear steps improves accuracy, reduces hallucinations, and makes results reproducible.
- Analytics: break vague questions into metric definitions, data checks, and interpretation.
- Data extraction: define fields, edge cases, and validation before producing final JSON.
- Code generation: outline plan, write code, run mental tests, then provide final snippet.
- Tool use: plan which tools to call and in what order, then verify outputs.
Concept explained simply
Decomposition is splitting a task into a short sequence of named mini-goals (3-7 is typical). Each step has a purpose and a concrete output that feeds the next step.
Mental model
Think of an assembly line: each station does one job, hands off a clean part to the next, and the final station checks quality.
- Understand: restate goals and constraints.
- Plan: outline steps or subproblems.
- Gather: extract facts, inputs, specs.
- Execute: produce the solution.
- Check: validate, test, or sanity-check.
- Format: return in the requested structure.
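A minimal prompt skeleton that strings these stations together (the task placeholder, step wording, and JSON keys are illustrative, not a fixed recipe):
Task: <describe the task and paste inputs here>
Work through these steps, labeling each one:
1) Understand: restate the goal and constraints as 2-3 bullets.
2) Plan: list the subproblems in order.
3) Gather: extract the facts, inputs, or specs each subproblem needs.
4) Execute: produce the solution using only what you gathered.
5) Check: test the solution against the constraints; list any fixes applied.
6) Format: return JSON with keys {answer, checks[]}. Return JSON only.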
Design rules that keep steps effective
- Keep 3-7 steps; merge or split to hit that range.
- Name each step and define its output (bullet list, JSON fields, or short text).
- Include a Check step (validation or test cases).
- Make each step observable (short, inspectable outputs).
- State formatting constraints (e.g., return JSON with specific keys).
- Pass state forward (step outputs feed the next step).
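For example, a step that follows these rules (wording illustrative):
3) Check: compare every extracted rule against the notes; output a bullet list of conflicts and missing cases, or "none".
It is named with a verb, single-purpose, observable, and its output feeds the Format step directly.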
Patterns you can reuse
Pattern 1: Plan → Solve → Check
- Plan: summarize task, assumptions, and approach.
- Solve: produce the core output using the plan.
- Check: validate with rules or test cases; revise if needed.
Pattern 2: Outline → Fill → Format
- Outline: define sections/fields and acceptance criteria.
- Fill: complete each section.
- Format: compile into final structure (e.g., JSON or markdown).
Pattern 3: Hypothesize → Gather Evidence → Decide
- Hypothesize: list plausible options.
- Evidence: cite textual signals or rules.
- Decide: choose with a brief rationale and confidence.
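A compact instance of Pattern 3 for an analytics question (the scenario and wording are illustrative):
Question: Did the March signup drop come from the pricing change or from seasonality?
1) Hypothesize: list 2-3 plausible causes.
2) Evidence: for each cause, cite signals from the data summary that support or contradict it.
3) Decide: pick one cause; give a one-sentence rationale and a confidence from 0 to 1.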
Worked examples
Example 1: Data cleaning rules from messy text
Task: From the notes below, produce cleaning rules for a CSV of product reviews.
Notes: Ratings are 1-5; remove duplicate reviews; strip emojis; map "N/A", "-", and blank values to null; keep languages: EN, ES.
Steps
1) Plan: Summarize goals and constraints.
2) Extract: List candidate rules as bullets.
3) Check: Spot conflicts or missing cases.
4) Format: Output JSON with fields: goals, rules[], gaps[].
Return JSON only.
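One plausible final output for these notes (values are illustrative, not the only correct answer):
{
  "goals": ["Clean a CSV of product reviews for analysis"],
  "rules": [
    "Validate ratings are integers between 1 and 5",
    "Remove duplicate reviews",
    "Strip emojis from review text",
    "Map 'N/A', '-', and blank values to null",
    "Keep only EN and ES reviews"
  ],
  "gaps": [
    "Duplicate is undefined: same text only, or same user and text?",
    "No rule for ratings outside 1-5 (drop, null, or flag?)"
  ]
}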
Why this works
Each step adds certainty: plan clarifies scope, extract gathers rules, check catches gaps, format makes it machine-usable.
Example 2: SQL generation for analytics
Schema (simplified):
orders(order_id, user_id, created_at, total_amount)
users(user_id, country)
Question: Monthly revenue for last 3 months, grouped by country. Handle null countries as 'Unknown'.
Steps
1) Plan: List tables, joins, filters, edge cases (null country, time zone).
2) Solve: Write SQL.
3) Check: Verify columns, date boundaries, and null handling; propose a small mental test.
Output fields: plan (bullets), sql (code), check (bullets).
Why this works
The plan prevents schema mistakes; the check forces date and null validation before finalizing.
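A sketch of what the Solve step might return, assuming PostgreSQL date functions and reading "last 3 months" as the last three full calendar months:
-- Revenue per month and country; missing or null countries become 'Unknown'.
SELECT
  date_trunc('month', o.created_at) AS month,
  COALESCE(u.country, 'Unknown') AS country,
  SUM(o.total_amount) AS revenue
FROM orders o
LEFT JOIN users u ON u.user_id = o.user_id
WHERE o.created_at >= date_trunc('month', CURRENT_DATE) - INTERVAL '3 months'
  AND o.created_at < date_trunc('month', CURRENT_DATE)
GROUP BY 1, 2
ORDER BY 1, 2;
The Check step would then confirm the date boundaries (full months only, current month excluded), that the LEFT JOIN keeps orders whose user has no country row, and that date_trunc follows the session time zone, which is exactly the edge case the Plan step flagged.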
Example 3: Classifying support tickets
Labels: Billing, Technical, Account, Other.
Ticket: "My card is charged twice after I updated my email."
Steps
1) Understand: Extract key signals from the text.
2) Decide: Pick one label; give 1-sentence reason.
3) Check: If two labels are equally plausible, prefer Billing > Account > Technical.
4) Format: JSON with {label, reason, confidence}.
Why this works
Signals-first improves consistency; an explicit tie-break rule reduces ambiguity.
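One plausible output for this ticket (values illustrative):
{
  "label": "Billing",
  "reason": "The core complaint is a duplicate charge; the email update is context, not the issue.",
  "confidence": 0.8
}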
Exercises
Draft your prompts for each exercise here, then run them in your preferred LLM.
Exercise 1: Turn messy meeting notes into an action plan
Prompt goal: Create a stepwise prompt template that takes messy notes and returns a clean action plan with owners and deadlines.
Input notes
- Launch prep: need final copy, QA landing page, email draft v2
- Alex: copy almost done; block: legal review
- QA found 3 issues on mobile
- Target launch: end of month; if late, announce next sprint
Your job: Write a 4-6 step prompt template with a final JSON output containing: goals[], tasks[{title, owner, deadline, blockers[]}], risks[], next_steps[]. Include a Check step that enforces ownership and deadlines for each task.
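A skeleton of the required output shape, to paste into your Format step (empty values are placeholders):
{
  "goals": [],
  "tasks": [{"title": "", "owner": "", "deadline": "", "blockers": []}],
  "risks": [],
  "next_steps": []
}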
Exercise 2: Plan → Solve → Check for SQL
Scenario
Schema (simplified):
events(event_id, user_id, event_name, occurred_at)
users(user_id, plan)
Task: Get daily active users for the last 7 full days, split by plan. Treat unknown plans as "free". Return SQL and a brief validation checklist.
Your job: Write a 3-step prompt that requests: plan (bullets), sql (code), check (bullets with edge cases).
Self-check checklist
- I kept 3-7 steps with clear names
- Each step has a concrete, inspectable output
- I included a Check/Validate step
- Final output format is explicit (e.g., JSON keys)
- Steps flow logically and pass state forward
Common mistakes and how to self-check
- Too many steps: merge overly granular ones; target 3-7.
- Vague steps: rename to verbs and define outputs.
- No validation step: add rules, tests, or acceptance criteria.
- Unspecified format: state exact keys/sections and return format.
- Leaky scope: keep each step single-purpose; move extras to a later step.
Quick self-audit
For each step, ask: What does this produce? How will the next step use it? If unclear, rewrite.
Practical projects
- Template library: Build 5 reusable stepwise prompts (analytics, extraction, classification, planning, code). Keep them under version control and record success notes.
- Validation pack: For each template, add 3 domain-specific checks (e.g., date ranges, null handling, label tie-breaks).
- Before/After study: Run the same task single-shot vs. stepwise; measure accuracy and formatting errors.
Next steps
- Apply stepwise prompts to a live task at work and collect 5 examples of successes/failures.
- Combine with other patterns: e.g., Few-shot examples inside the Solve step, or tool selection in the Plan step.
- Iterate: Adjust step names and outputs until errors drop and outputs are consistent.
Tip for taking the quick test
Re-read the design rules and the three patterns (Plan → Solve → Check; Outline → Fill → Format; Hypothesize → Gather Evidence → Decide) before starting.
Mini challenge
Pick one of your frequent tasks (e.g., summarizing weekly updates). Draft a 4-step prompt that includes a validation step and a strict final format. Run it on 3 different inputs and refine the steps until all outputs are consistent.