
Prompting Concepts

Learn Prompting Concepts for free with explanations, exercises, and a quick test (for AI Product Managers).

Published: January 7, 2026 | Updated: January 7, 2026

Why this matters

As an AI Product Manager, you won’t always write final prompts, but you will define product outcomes, constraints, and evaluation. Clear prompting transforms vague business needs into reliable model behavior. You will use prompting to prototype features, reduce hallucinations, enforce output formats, and align the model with your product’s tone, safety, and compliance needs.

  • Ship faster: Validate ideas with well-structured prompts before investing in custom models.
  • Reduce risk: Specify constraints and checks that lower hallucinations and off-brand outputs.
  • Measure quality: Turn prompts into testable specifications with objective criteria.

Concept explained simply

A prompt is a set of instructions and context that tells a model what to do, how to do it, with what constraints, and how to present the result.

  • Role: Who the model should act as (e.g., Support Agent).
  • Objective: What must be produced (e.g., Summarize a ticket).
  • Context: Relevant facts, data, or examples.
  • Constraints: Rules, tone, forbidden behaviors, safety boundaries.
  • Output format: The structure to return (e.g., valid JSON object with required fields).
  • Quality checks: Self-check or validation steps (e.g., If unsure, say unknown).

Common prompting patterns
  • Instruction-first: Direct, concise task instruction.
  • Role + Format: Add persona and strict output schema.
  • Few-shot: Provide 2 examples to anchor style/format.
  • Step-by-step: Ask for reasoning steps to improve correctness (you can request structured steps without exposing private reasoning to users).
  • Critique & revise: First draft, then a short self-review and correction pass.
  • RAG-style context: Insert retrieved snippets into the prompt and ask the model to answer only from those.
  • Tool-use guidance: Tell the model when and how to use tools/functions, and how to combine tool results.
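
The RAG-style pattern above is easy to prototype before any retrieval infrastructure exists. Below is a minimal sketch in Python; the delimiter names, snippet list, and helper name are illustrative assumptions, not a specific vendor API.

    def build_rag_prompt(question, snippets):
        # Label each retrieved snippet with an ID so the model can cite it.
        context = "\n".join(f"[doc_{i}: {text}]" for i, text in enumerate(snippets, start=1))
        return (
            "Role: You are a precise product documentation assistant.\n"
            "Instruction: Answer the question strictly using the provided context. "
            'If the answer is not present, reply: "Not found in docs."\n'
            f"Context: <docs>\n{context}\n</docs>\n"
            f"Question: {question}\n"
            "Output format: JSON with keys: answer (string), citations (array of doc IDs)."
        )

    print(build_rag_prompt("How do I reset my password?",
                           ["Passwords are reset under Settings > Security."]))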

Mental model

Use the RICCE-V checklist to design reliable prompts:

  • R  Role: Define who is speaking.
  • I  Instruction: State the task and success criteria.
  • C  Context: Provide only the data needed.
  • C  Constraints: Tone, safety, forbidden behaviors.
  • E  Examples: 1–2 concise demonstrations.
  • V  Validation: Output schema + self-check rules.

Think of the model as a talented but literal intern: give it the goal, relevant files, the format template, and a short checklist to self-verify before submitting.
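
To make the checklist concrete, here is a minimal sketch in Python; the field names simply mirror RICCE-V, and nothing here is a specific library. The builder refuses to render a prompt until every part of the checklist is filled in.

    from dataclasses import dataclass, fields

    @dataclass
    class RicceVPrompt:
        role: str          # R: who the model acts as
        instruction: str   # I: the task and success criteria
        context: str       # C: only the data needed
        constraints: str   # C: tone, safety, forbidden behaviors
        examples: str      # E: concise demonstrations
        validation: str    # V: output schema and self-check rules

        def render(self) -> str:
            # Fail fast if any part of the checklist was left empty.
            for f in fields(self):
                if not getattr(self, f.name).strip():
                    raise ValueError(f"RICCE-V field '{f.name}' is empty")
            return (
                f"Role: {self.role}\n"
                f"Instruction: {self.instruction}\n"
                f"Context: {self.context}\n"
                f"Constraints: {self.constraints}\n"
                f"Examples:\n{self.examples}\n"
                f"Validation: {self.validation}"
            )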

Worked examples

Example 1: JSON data extraction for support tickets

Goal: Extract fields from a support ticket message.

Role: You are a support triage assistant.
Task: Extract fields from the ticket text.
Context: The ticket text is delimited by <ticket> ... </ticket>.
Constraints:
- If a field cannot be determined with high confidence, set it to "unknown".
- Only use information from the ticket text.
Output format: Return valid JSON with keys: urgency (low|medium|high), issue_type (billing|bug|access|other), summary (string, max 25 words).
Self-check: Ensure JSON is valid and keys exist.

Examples:
Input:
<ticket>Cannot access my account since yesterday. Urgent for my deadline.</ticket>
Output:
{"urgency":"high","issue_type":"access","summary":"User cannot access account and needs urgent help."}

Now extract for this input:
<ticket>The invoice total seems off by $20 and I need a corrected bill.</ticket>

Expected model behavior: Uses only the ticket text; outputs valid JSON; sets issue_type to billing and urgency to medium unless the ticket states urgency explicitly.
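
Because the prompt promises a fixed JSON schema, the product can verify the output before it reaches downstream systems. A minimal check for this example (key names and allowed values are taken from the prompt above; the helper name is just illustrative):

    import json

    ALLOWED = {
        "urgency": {"low", "medium", "high", "unknown"},
        "issue_type": {"billing", "bug", "access", "other", "unknown"},
    }

    def validate_ticket_output(raw):
        # Returns (is_valid, reason) so failures can be logged or retried.
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            return False, "not valid JSON"
        for key in ("urgency", "issue_type", "summary"):
            if key not in data:
                return False, f"missing key: {key}"
        for key, allowed in ALLOWED.items():
            if data[key] not in allowed:
                return False, f"unexpected value for {key}: {data[key]!r}"
        if len(str(data["summary"]).split()) > 25:
            return False, "summary longer than 25 words"
        return True, "ok"

    print(validate_ticket_output(
        '{"urgency":"medium","issue_type":"billing",'
        '"summary":"Invoice appears off by $20; customer requests a corrected bill."}'
    ))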

Example 2: RAG-grounded answer with citation

Goal: Answer questions only from provided docs.

Role: You are a precise product documentation assistant.
Instruction: Answer the user question strictly using the provided context. If the answer is not present, reply: "Not found in docs." Include a citation field listing the file IDs used.
Context: <docs>[doc_12: ...] [doc_34: ...]</docs>
Output format: JSON with keys: answer (string), citations (array of file IDs).
Validation: If no doc supports the answer, answer="Not found in docs." and citations=[].

Expected model behavior: Avoids hallucinations and includes citations when evidence exists.
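
The validation rule can also be enforced outside the model. A sketch, assuming the answer arrives as JSON and the caller knows which doc IDs were supplied in the context:

    import json

    def check_grounded(raw, available_doc_ids):
        # A refusal must carry no citations; an answer must cite only supplied docs.
        data = json.loads(raw)
        answer = data.get("answer", "")
        citations = data.get("citations", [])
        if answer == "Not found in docs.":
            return citations == []
        return bool(citations) and all(c in available_doc_ids for c in citations)

    print(check_grounded('{"answer": "Exports run nightly.", "citations": ["doc_12"]}',
                         {"doc_12", "doc_34"}))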

Example 3: Summarization with tone and policy

Goal: Summarize a chat for a CRM note with tone and safety.

Role: Senior customer success analyst.
Instruction: Summarize the chat into a single CRM note.
Tone: Neutral, factual, no promises, no medical or legal advice.
Length: 60–70 words.
Output format: A single paragraph; no bullet points.
Self-check: Confirm no speculative claims and length within bounds.
Input chat: <chat> ... </chat>

Expected model behavior: Produces a compliant, concise summary within the length and tone constraints.
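
The length and format constraints are cheap to verify before the note is written to the CRM. A small sketch, with the word bounds passed in as parameters since your own limits may differ:

    def check_crm_note(note, min_words=60, max_words=70):
        # Returns a list of constraint violations; an empty list means the note passes.
        problems = []
        word_count = len(note.split())
        if not (min_words <= word_count <= max_words):
            problems.append(f"length {word_count} words, expected {min_words}-{max_words}")
        if len([line for line in note.splitlines() if line.strip()]) > 1:
            problems.append("more than one paragraph")
        if any(line.lstrip().startswith(("-", "*", "•")) for line in note.splitlines()):
            problems.append("contains bullet points")
        return problems

    print(check_crm_note("Customer reported a billing discrepancy of $20. " * 9))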

How to evaluate prompts

  • Define a rubric: correctness, groundedness (uses provided context), clarity, format validity, safety, and style adherence.
  • Create a small test set (10–20 cases) with expected outcomes or acceptability thresholds.
  • Measure: pass rate on schema validity, average score on rubric (e.g., 1–5 scale), latency, and cost.
  • Iterate: A/B prompts offline; promote the best to limited production; monitor real feedback.
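
A tiny offline harness makes the measurements above repeatable. The sketch below assumes a hypothetical call_model(prompt) function and a validate(output) checker such as the JSON check shown earlier; both are placeholders you would wire to your own stack, not a specific SDK.

    import time

    def run_offline_eval(prompt_template, cases, call_model, validate):
        # cases: list of dicts whose "input" fields fill the prompt template.
        valid_count = 0
        latencies = []
        for case in cases:
            prompt = prompt_template.format(**case["input"])
            start = time.perf_counter()
            output = call_model(prompt)          # hypothetical model call
            latencies.append(time.perf_counter() - start)
            ok, _reason = validate(output)
            if ok:
                valid_count += 1
        return {
            "schema_pass_rate": valid_count / len(cases),
            "avg_latency_s": sum(latencies) / len(latencies),
        }

Running the same harness on two prompt variants gives you the offline A/B comparison described in the iteration step.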

Quality checklist (use as you evaluate)
  • Is the goal explicit and testable?
  • Are constraints specific and minimal?
  • Is context necessary and sufficient?
  • Is the output schema unambiguous?
  • Are edge cases covered (unknowns, missing data)?
  • Are safety and tone policies encoded?

Common mistakes

  • Vague instructions: Leads to inconsistent results.
  • No output schema: Hard to parse or validate.
  • Overloading with irrelevant context: Increases confusion and cost.
  • Missing fallback rules: Forces the model to guess instead of saying unknown.
  • Too many or too long examples: Drowns the key instruction.
  • Ignoring evaluation: Shipping prompts without tests reduces reliability.

Self-check: debug your prompt
  • Can a teammate read your prompt and predict outputs?
  • Does it pass at least 80% of your rubric on 10 sample cases?
  • Does it produce valid output for malformed inputs?
  • Does it refuse to answer when context is missing (if required)?

Exercises

Complete these hands-on tasks. They mirror the exercises below and include checklists to guide you.

Exercise 1: Robust JSON extraction prompt

Scenario: For support ticket triage, extract urgency, issue_type, and a short summary from free-text tickets.

  • Write a prompt that includes role, instruction, tight constraints, and a JSON output schema.
  • Add 2 concise few-shot examples.
  • Include a self-check rule and a fallback to "unknown" when confidence is low.
  • Test your prompt on 3 different tickets, including one with missing urgency.

Checklist
  • Role defined
  • Instruction includes success criteria
  • Context delimiters used
  • Valid JSON schema with required keys
  • Few-shot examples present
  • Self-check and fallback rules included

Exercise 2: Evaluation plan for summarization

Scenario: You need a CRM-style summary from multi-turn chat.

  • Draft a prompt with tone, length, and safety constraints.
  • Create a rubric (correctness, groundedness, tone, length adherence, safety).
  • Write 5 test cases and specify expected outcomes or thresholds.
  • Define pass criteria (e.g., average rubric score of at least 4.5 on a 1–5 scale and 100% valid format).

Checklist
  • Prompt includes tone and safety policy
  • Rubric has 5 criteria with clear scales
  • 5 test cases cover normal and edge cases
  • Pass criteria defined

Practical projects

  • Build a prompt kit: A small library of reusable prompt templates (extraction, summarization, classification) with JSON schemas and brief rubrics.
  • RAG prototype: Insert retrieved snippets into a prompt and enforce the "Not found in docs." behavior; measure groundedness.
  • Prompt A/B study: Compare two prompts on 20 labeled cases; report accuracy, schema validity, and average latency.

Who this is for

  • AI Product Managers and PMs exploring LLM features.
  • Designers and Analysts collaborating on AI-assisted experiences.
  • Engineers who want product-ready prompt specs.

Prerequisites

  • Basic understanding of LLM capabilities and limitations.
  • Comfort with JSON and structured outputs.
  • Familiarity with your product's safety and tone guidelines.

Learning path

  • Start: This subskill (Prompting Concepts) to craft reliable prompts.
  • Next: Guardrails and evaluation to quantify quality and safety.
  • Then: RAG and tool use to ground answers and integrate external systems.
  • Finally: Experiment design and monitoring for production readiness.

Next steps

  • Refine your prompts using the RICCE-V checklist.
  • Create a 10-case test set for your most important prompt.
  • Run an offline A/B and document results and trade-offs.

Mini challenge

Write a production-ready prompt spec for a feature your team cares about (e.g., email triage, knowledge-based Q&A). Include role, instruction, context delimiters, constraints, 2 examples, JSON schema, and a short validation rule. Test on 5 real inputs. Aim for at least 80% rubric pass rate and 100% valid JSON.

Practice Exercises

2 exercises to complete

Instructions

Create a prompt for support ticket triage that extracts urgency (low|medium|high), issue_type (billing|bug|access|other), and a 25-word summary.

  • Include role, instruction, context delimiters, constraints, and a JSON output schema.
  • Add 2 few-shot examples.
  • Include a self-check and an unknown fallback rule.
  • Run your prompt on 3 sample tickets, including one with no explicit urgency.

Expected Output
A single, copy-pasteable prompt template that yields valid JSON for diverse tickets and sets fields to "unknown" when not inferable.

Prompting Concepts — Quick Test

Test your knowledge with 7 questions. Pass with 70% or higher.
