Why this matters
Clear structure and formatting turn vague prompts into reliable instructions the model can follow. As a Prompt Engineer, you will:
- Draft content-generation prompts that match brand tone and output format.
- Design extraction prompts that return clean JSON for downstream code.
- Create classification and analysis prompts with unambiguous labels and rules.
- Build reusable templates teammates can apply consistently.
Quick example: messy vs structured
Messy:
Write about our product and make it cool and short, like a tweet, maybe include a CTA.
Structured:
Role: Marketing writer
Task: Write a Twitter post about Product X's new AI summarizer.
Audience: Busy knowledge workers.
Constraints: 240 characters max; friendly tone; include 1 CTA.
Input:
- Key features: offline mode, 10x faster, privacy-first.
Output format:
- Plain text tweet. No hashtags except #ProductX.
Quality checks:
- CTA present; length <= 240; mentions privacy.
Who this is for
- Beginners learning prompt engineering fundamentals.
- Data analysts/engineers needing stable extraction and classification outputs.
- Writers and PMs designing repeatable content prompts.
Prerequisites
- Basic familiarity with LLMs and their capabilities/limits.
- Comfort reading JSON and plain-text specs.
Concept explained simply
A model follows text patterns. If you arrange your prompt like a good spec—with clear roles, tasks, inputs, constraints, and output format—you reduce ambiguity and improve consistency.
Mental model
- Contract: The prompt is a contract. You define scope and acceptance criteria.
- Parser: The model is a probabilistic parser. It mirrors structures you show.
- Determinism via formatting: Unclear formatting increases variance; explicit schemas reduce it.
Core components of a well-structured prompt
- Role — Optional persona to prime style/knowledge scope.
- Task — One-sentence imperative of what to do.
- Context — Relevant background; keep concise.
- Inputs — Explicitly delimited data blocks.
- Constraints — Rules, length, tone, forbidden content.
- Output format — Schema, examples, or template to fill.
- Evaluation checks — A short checklist that the model can self-check against.
Reusable template
Role:
Task:
Context:
Inputs:
Constraints:
Output format:
Quality checks:
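In code, the template can be filled programmatically so every teammate produces the same structure. A minimal Python sketch (build_prompt is an illustrative helper, not a library function):

def build_prompt(role="", task="", context="", inputs="",
                 constraints="", output_format="", quality_checks=""):
    """Assemble a structured prompt from labeled sections, skipping empty ones."""
    sections = [
        ("Role", role), ("Task", task), ("Context", context),
        ("Inputs", inputs), ("Constraints", constraints),
        ("Output format", output_format), ("Quality checks", quality_checks),
    ]
    return "\n".join(f"{name}: {value}" for name, value in sections if value)

# Example: reuse the same structure for any task.
print(build_prompt(
    role="Marketing writer",
    task="Write a Twitter post about Product X's new AI summarizer.",
    constraints="240 characters max; friendly tone; include 1 CTA.",
))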
Formatting patterns that work
- Clear delimiters: Use triple quotes or XML-like tags to separate sections.
<input>
{{ your data here }}
</input>
- Explicit schema: Provide exact keys and value types for JSON outputs (see the validation sketch after this list).
Output JSON schema:
{
"title": string,
"sentiment": one_of["positive","neutral","negative"],
"reasons": string[]
}
- Few-shot anchors: 1–3 short examples showing the exact format.
- Do/Don't lists: Compact guardrails to avoid common failure modes.
- Avoid chain-of-thought requests: Ask for final answers or brief justifications, not step-by-step private reasoning.
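Explicit schemas pay off because outputs can be checked mechanically. A minimal Python sketch for the sentiment schema above, assuming the model returns the raw JSON text and nothing else (validate_sentiment is an illustrative helper):

import json

ALLOWED_SENTIMENTS = {"positive", "neutral", "negative"}

def validate_sentiment(raw: str) -> dict:
    """Parse the model's output and enforce the declared schema."""
    data = json.loads(raw)  # raises json.JSONDecodeError on invalid JSON
    assert isinstance(data.get("title"), str), "title must be a string"
    assert data.get("sentiment") in ALLOWED_SENTIMENTS, "sentiment outside allowed labels"
    reasons = data.get("reasons")
    assert isinstance(reasons, list) and all(isinstance(r, str) for r in reasons), \
        "reasons must be a list of strings"
    return data

print(validate_sentiment('{"title": "Great launch", "sentiment": "positive", "reasons": ["fast"]}'))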
Worked examples
1) Content generation (product blurb)
Bad:
Write a short blurb about our app.
Better:
Role: Product marketer
Task: Write a 60–80 word product blurb for the landing page hero.
Context: App: FocusFlow. Helps freelancers track deep-work sessions.
Constraints:
- Tone: encouraging, not hypey
- Include one benefit-driven headline (max 7 words)
- Mention: offline tracking, weekly insights
Output format:
- Headline: <text>
- Blurb: <text 60–80 words>
Quality checks:
- Mentions both features
- No exclamation marks
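Because these quality checks are measurable, they can also be verified in code before the copy ships. A minimal sketch (check_blurb is illustrative; the thresholds mirror the prompt above):

def check_blurb(headline: str, blurb: str) -> list[str]:
    """Return the quality checks this FocusFlow blurb fails."""
    failures = []
    if len(headline.split()) > 7:
        failures.append("headline exceeds 7 words")
    if not 60 <= len(blurb.split()) <= 80:
        failures.append("blurb outside 60-80 words")
    if "!" in headline + blurb:
        failures.append("contains exclamation marks")
    for feature in ("offline tracking", "weekly insights"):
        if feature not in blurb.lower():
            failures.append(f"missing feature: {feature}")
    return failures

# A too-short blurb fails the length check and both feature checks:
print(check_blurb("Reclaim your focus", "FocusFlow tracks deep work for freelancers."))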
2) Information extraction (invoice fields)
Bad:
Extract data from this invoice.
Better:
Task: Extract fields from the invoice in <invoice>…</invoice>
Constraints: If a field is missing, use null. Do not add extra keys.
Output format (valid JSON):
{
"invoice_number": string|null,
"vendor": string|null,
"date_iso": string|null,
"total": number|null,
"currency": string|null
}
<invoice>
Invoice #A-193 | Vendor: PaperCo | Date: 2024-07-10 | Total: 312.50 USD
</invoice>
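Downstream code can then parse and sanity-check the result. A minimal Python sketch, assuming the model returns only the JSON object (parse_invoice is an illustrative helper; the key list mirrors the schema above):

import json

EXPECTED_KEYS = {"invoice_number", "vendor", "date_iso", "total", "currency"}

def parse_invoice(raw: str) -> dict:
    """Parse extracted invoice fields; reject extra keys, allow nulls."""
    data = json.loads(raw)
    if set(data) != EXPECTED_KEYS:
        raise ValueError(f"unexpected or missing keys: {set(data) ^ EXPECTED_KEYS}")
    if data["total"] is not None and not isinstance(data["total"], (int, float)):
        raise ValueError("total must be a number or null")
    return data

raw = '{"invoice_number": "A-193", "vendor": "PaperCo", "date_iso": "2024-07-10", "total": 312.5, "currency": "USD"}'
print(parse_invoice(raw))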
3) Classification (support triage)
Bad:
Categorize this ticket.
Better:
Task: Classify the support ticket into one label.
Labels (choose exactly one):
- billing
- bug
- how_to
- account_access
Decision rules:
- Payment failures → billing
- Feature not working/exception → bug
- Usage questions → how_to
- Login/2FA issues → account_access
Output format (JSON): { "label": "billing|bug|how_to|account_access" }
Input ticket:
"I can't sign in after changing my phone."
Self-check rubric
- Single task per prompt (or clearly separated subtasks).
- All inputs are delimited and labeled.
- Constraints are testable (length, tone, allowed labels).
- Output format is explicit and minimal (no extra prose).
- Includes a short quality checklist the model can follow.
Exercises
Do these now. They mirror the graded exercises below. Use the checklist above to review your work.
Exercise 1: Rewrite a messy prompt into a clean, formatted prompt for a landing-page headline and subheadline.
What to produce
- Sections: Role, Task, Context, Inputs, Constraints, Output format, Quality checks.
- Target: 1 headline (≤7 words) + 1 subheadline (15–25 words).
Exercise 2: Create an extraction prompt that returns valid JSON for job postings (title, company, location, seniority, salary_range).
What to produce
- Schema with keys and allowed values.
- Two short few-shot examples.
- One real input block for testing.
Common mistakes and how to fix them
- Too many goals at once: Split into separate prompts or steps.
- Vague constraints: Replace adjectives ("short") with measurable limits ("≤80 words").
- No schema: Always show the exact output shape and include null handling (see the retry sketch after this list).
- Leaky inputs: Instructions and data mixed together. Use clear delimiters for data blocks.
- Asking for chain-of-thought: Request final answers or brief rationale only.
- Hidden assumptions: Make audience, tone, and acceptance criteria explicit.
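Several of these fixes can be enforced in code rather than by hand. A minimal retry sketch for the "no schema" failure mode (call_model is a placeholder for your LLM client; any function that maps a prompt string to a text response will do):

import json

def call_with_retries(call_model, prompt: str, max_attempts: int = 3) -> dict:
    """Call the model, retrying with a reminder when the output is not valid JSON."""
    for _ in range(max_attempts):
        raw = call_model(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            prompt += "\nReminder: respond with valid JSON only, no extra prose."
    raise RuntimeError(f"no valid JSON after {max_attempts} attempts")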
Practical projects
- Build a prompt pack: generation, extraction, classification templates for one domain (e.g., e-commerce).
- Create a JSON extraction suite with 5 schemas and validation examples.
- Design a style guide prompt for brand tone with few-shot pairs (bad → revised).
Learning path
- Before: Basics of LLM capabilities and limitations.
- Now: Prompt Structure and Formatting (this page).
- Next: Prompt testing, evaluation, and iteration loops.
Next steps
- Turn your best prompt into a reusable template with placeholders.
- Add a short quality checklist to every production prompt.
- Run A/B tests by varying constraints and output schemas (a minimal harness sketch follows).
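A lightweight way to run those A/B tests is to score each variant against the same checklist. A minimal sketch (call_model and checks are placeholders for your client and quality checks):

def ab_test(call_model, variants: dict, checks, trials: int = 10) -> dict:
    """Return each prompt variant's pass rate over repeated trials.

    variants maps a variant name to a prompt string; checks(output)
    should return True when the output passes all quality checks.
    """
    return {
        name: sum(checks(call_model(prompt)) for _ in range(trials)) / trials
        for name, prompt in variants.items()
    }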
Mini challenge
Design a prompt that converts messy meeting notes into a structured project update (goals, risks, next actions). Include a JSON schema and one few-shot example. Keep it to 15 lines.