Why this matters
Large language models do better when they first plan the steps and only then execute them. Separating planning from execution reduces hallucinations, keeps outputs structured, and helps you review the approach before any work is done.
- Analytics: outline the analysis plan before calculating metrics.
- Engineering: sketch function design and test cases, then implement.
- Content: plan sections, tone, and audience, then draft.
- Operations: map the SOP, then generate messages, emails, or forms.
Concept explained simply
Two-stage prompting:
- Planning: ask the model to list steps, assumptions, risks, and output format.
- Execution: tell the model to do the task strictly following the approved plan.
Mental model
Think like a chef: write the recipe (plan: ingredients, steps, timing) and only then cook (execution). If the recipe looks wrong, fix it before cooking.
What improves when you separate stages?
- Fewer wrong turns: the plan exposes misunderstandings early.
- Consistency: the model sticks to a reviewed structure.
- Traceability: you can audit how results were produced.
Core pattern (copy-paste friendly)
Prompt the model to produce only a plan, not the final answer.
Role: Senior [domain] expert.
Task: Propose a plan to solve [problem].
Constraints:
- Do NOT execute the task yet.
Output:
- Assumptions
- Step-by-step plan (numbered)
- Risks and checks
- Output schema for execution
Return format: JSON-like with keys {"assumptions":[],"plan":[],"checks":[],"schema":{...}}
Read the plan. If anything is off, ask for a revision. When satisfied, say you are locking the plan.
Review notes: [approve or request changes]
Lock statement: "Plan v1 approved. Follow it exactly."
Plan ID: PLAN-001
Now instruct the model to execute strictly per the plan and schema.
Execute PLAN-001.
Rules:
- Follow steps in order.
- Use the approved schema.
- If a step is ambiguous, pause and ask before proceeding.
Deliverables: [list deliverables]
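If you drive the pattern from code rather than a chat window, the two stages become two separate model calls with a review gate between them. The sketch below is a minimal illustration in Python: call_llm is a hypothetical placeholder, not a real client API, and the prompts are condensed versions of the templates above.

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: wire this to whatever LLM client you use."""
    raise NotImplementedError

def run_two_stage(problem: str) -> str:
    planning_prompt = (
        "Role: Senior [domain] expert.\n"
        f"Task: Propose a plan to solve {problem}.\n"
        "Constraints: Do NOT execute the task yet.\n"
        'Return format: {"assumptions":[],"plan":[],"checks":[],"schema":{}}'
    )
    plan = call_llm(planning_prompt)

    # Review gate: execution must not start until the plan is approved.
    print(plan)
    input("Press Enter to lock the plan and execute... ")

    execution_prompt = (
        "Plan v1 approved (PLAN-001). Follow it exactly.\n"
        f"Approved plan:\n{plan}\n"
        "Rules: follow steps in order, use the approved schema, "
        "and ask before deviating."
    )
    return call_llm(execution_prompt)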
Quality guardrails you can add
- Ask for a self-check against the plan at the end.
- Require numbered steps and explicit references to the plan.
- Include a “deviations” section if any step needed adaptation.
Worked examples
Example 1 — Analytics investigation
Task: A weekly active users (WAU) drop of 8% happened last week. Investigate likely causes and propose checks.
Planning prompt
Role: Senior Product Analyst.
Task: Propose an investigation plan for an 8% WAU drop. Do NOT analyze yet.
Output keys: assumptions, plan (steps), checks, schema.
Schema for execution:
- summary
- hypotheses (list)
- required data pulls
- quick checks
- next actions
Possible plan (shortened)
{
  "assumptions": ["No tracking outage", "Seasonality possible"],
  "plan": [
    "Segment by platform, region, acquisition source",
    "Check release notes & incident logs",
    "Compare week-over-week funnel conversion",
    "Inspect paid traffic volume & quality",
    "Look for cohort-specific drops"
  ],
  "checks": ["Confirm metric definition unchanged", "Validate event counts"],
  "schema": {
    "summary": "string",
    "hypotheses": ["string"],
    "required_data_pulls": ["string"],
    "quick_checks": ["string"],
    "next_actions": ["string"]
  }
}
Execution prompt
Plan v1 approved (PLAN-001). Execute strictly per steps and schema. Deliverables: summary, hypotheses, required_data_pulls, quick_checks, next_actions.
Notes
The plan forces segmentation first and guards against ad-hoc guessing.
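A useful companion to the execution prompt is a mechanical schema check. Assuming the model returns the JSON object defined in the plan above, a small validator like this (a Python sketch; the key set is copied from the approved schema) catches structural drift before you read the content.

import json

# Key set taken from the approved schema in PLAN-001 above.
REQUIRED_KEYS = {"summary", "hypotheses", "required_data_pulls",
                 "quick_checks", "next_actions"}

def check_execution_output(raw: str) -> list[str]:
    """Return a list of schema violations; an empty list means it conforms."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    if not isinstance(data, dict):
        return ["top-level value is not an object"]
    problems = []
    missing = REQUIRED_KEYS - data.keys()
    extra = data.keys() - REQUIRED_KEYS
    if missing:
        problems.append(f"missing keys: {sorted(missing)}")
    if extra:
        problems.append(f"unexpected keys: {sorted(extra)}")
    return problems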
Example 2 — Coding a utility function
Task: Implement a function to normalize names (trim, fix spacing, title case, preserve certain particles).
Planning prompt
Role: Senior Software Engineer.
Task: Plan only for a function normalize_name(input: str) -> str.
Include: rules, edge cases, test cases, algorithm steps, signature.
No code yet.
Possible plan (shortened)
- Rules: trim, collapse spaces, title case except {"van","de","da"} when mid-name.
- Edge cases: multiple spaces, hyphenated names, apostrophes.
- Tests: [" van gogh " -> "Van Gogh"; "O'neill" -> "O'Neill"; "MARIA-de souza" -> "Maria de Souza" ]
- Algorithm: tokenize, lowercase, title case tokens except protected particles (unless first token), rejoin.Execution prompt
Execute PLAN-002: produce Python code, then run through the test cases and show outputs.
Format:
- code block
- tests
- outputs
- deviations (if any)
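For reference, an execution of PLAN-002 might produce code along these lines. This is a sketch under the shortened plan's rules: the particle set comes from the plan, and apostrophe/hyphen capitalization leans on Python's str.title(), which capitalizes after both characters.

# Particles kept lowercase when they appear mid-name (from the plan).
PARTICLES = {"van", "de", "da"}

def normalize_name(raw: str) -> str:
    # Trim and collapse runs of whitespace into single spaces.
    tokens = raw.strip().split()
    result = []
    for i, token in enumerate(tokens):
        low = token.lower()
        if i > 0 and low in PARTICLES:
            # Mid-name particle: keep lowercase ("Vincent van Gogh").
            result.append(low)
        else:
            # str.title() also capitalizes after apostrophes and hyphens,
            # so "o'neill" -> "O'Neill" and "anne-marie" -> "Anne-Marie".
            result.append(low.title())
    return " ".join(result)

assert normalize_name("  van  gogh ") == "Van Gogh"
assert normalize_name("O'neill") == "O'Neill"
assert normalize_name("MARIA de souza") == "Maria de Souza"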
Example 3 — Content draft
Task: Draft a one-page landing copy for a new budgeting app.
Planning prompt
Role: Senior Copywriter. Plan only.
Audience: busy professionals. Tone: clear, confident.
Include: key message, section outline, CTA variants, objection handling, style guide.
No copy yet.
Execution prompt
Execute PLAN-003. Produce:
- headline + subhead
- benefit bullets (5)
- social proof block
- CTA (2 variants)
- FAQ (3 items)
- 120-word hero paragraph
- Self-check: alignment with plan
Common mistakes and self-check
- Combining plan and execution in one message. Fix: explicitly forbid execution in the planning prompt.
- Vague plans without output schema. Fix: require a schema or bullet structure for execution.
- No review step. Fix: add a clear “Plan approved/locked” message and ID.
- Drift during execution. Fix: instruct to ask before deviating and to include a “deviations” section.
- Skipping assumptions and risks. Fix: request them in the planning stage.
Self-check checklist
- Did I see a distinct planning message and an execution message?
- Is there a plan ID or lock statement?
- Does execution follow the exact structure from the plan?
- Are deviations documented?
Exercises
Practice these. You can do them with any LLM. Aim to keep plan and execution as separate messages.
Exercise 1 — Support reply with action plan
Scenario: Customer writes, “I was charged twice this month and your chat bot is useless.”
- Plan only: list assumptions, response structure, empathy/tone rules, verification steps, and an action checklist for the team.
- Execute: produce the customer reply and a separate internal action checklist.
Hints
- Include a short identity statement: “Role: Senior Support Lead.”
- Schema: {reply, internal_actions[]} (one possible encoding is sketched below).
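If you want the execution output to be machine-checkable, the hint's schema maps to a structure like this Python TypedDict. The class name is illustrative; the field names come straight from the hint.

from typing import TypedDict

class SupportTicketOutput(TypedDict):
    reply: str                     # customer-facing message
    internal_actions: list[str]    # checklist for the team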
Exercise 2 — Data exploration starter
Scenario: Dataset of retail transactions with columns: order_id, customer_id, order_date, product, category, price, discount, channel, region.
- Plan only: define goals, slices, checks for data quality, and output schema.
- Execute: produce 3 insights, 2 chart recommendations (text description), and next steps.
Hints
- Start with a plan that segments by channel and region.
- Include an anomalies and outliers check (a minimal example follows).
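For the anomalies check, an IQR filter is a common first pass. The sketch below assumes the transactions are already loaded into a pandas DataFrame with the columns from the scenario.

import pandas as pd

def iqr_outliers(df: pd.DataFrame, col: str = "price") -> pd.DataFrame:
    """Flag rows outside 1.5 * IQR on one column, a common first-pass check."""
    q1, q3 = df[col].quantile([0.25, 0.75])
    iqr = q3 - q1
    mask = (df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)
    return df[mask]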
Progress tip
Keep your planning and execution prompts saved as a template for reuse.
Practical projects
- Incident postmortem assistant: plan investigation, then generate timeline, contributing factors, and action items.
- Experiment design helper: plan hypothesis, metrics, variants, sample size approach; then generate a ready-to-run spec.
- Team onboarding pack: plan outline and role matrix; then produce role-specific checklists and first-week tasks.
Learning path
- Before this: basic prompting, role prompting, instruction clarity.
- Now: planning–execution separation.
- Next: critique–revise loops, toolformer/tool-use prompts, and evaluation rubrics.
Who this is for
- Prompt engineers who need reliable, auditable outputs.
- Data/ML folks who want consistent analyses.
- Builders of internal assistants or documentation generators.
Prerequisites
- Comfort writing clear instructions and constraints.
- Basic familiarity with JSON-like structures for schemas.
Mini challenge
In two messages, have the model plan and then execute a “bug triage summary” for a given sprint. Require: assumptions, prioritization rules, summary bullets, and final risk rating. Keep plan and execution separate and include a lock statement.
Next steps
- Turn one of your recurring tasks into a two-stage template.
- Run the Quick Test below to check your understanding.