
RAG Architecture Basics

Learn RAG Architecture Basics for free with explanations, exercises, and a quick test (for NLP Engineers).

Published: January 5, 2026 | Updated: January 5, 2026


Who this is for

  • NLP engineers and ML practitioners building LLM apps that must answer from private or dynamic data.
  • Data engineers supporting vector search infrastructure.
  • Product-minded technologists validating LLM features quickly and safely.

Prerequisites

  • Basic Python and familiarity with HTTP APIs.
  • Understanding of embeddings (vector representations) and cosine similarity at a high level.
  • Comfort with prompt engineering basics (system/user messages, few-shots).

Why this matters

RAG (Retrieval-Augmented Generation) lets your LLM answer using the freshest, company-specific knowledge without retraining. In real projects you will:

  • Build chat assistants grounded in policy docs, runbooks, and knowledge bases.
  • Reduce hallucinations by surfacing relevant sources and citations.
  • Ship updates fast by re-indexing documents instead of fine-tuning models.

Concept explained simply

RAG connects two worlds: a search engine and a language model. First, you search your knowledge base for the most relevant snippets. Then, you give those snippets to the LLM as context so it can answer accurately.

Mental model

Think of RAG as a librarian plus a writer:

  • The librarian (retriever) quickly finds the best book paragraphs.
  • The writer (LLM) reads them and drafts a clear, grounded answer—ideally with citations.

Core components of a RAG system

  1. Ingestion & preprocessing: Collect data (docs, HTML, tickets). Clean text, remove boilerplate.
  2. Chunking: Split text into overlapping chunks (e.g., 200–400 tokens, 10–20% overlap) to balance recall and precision.
  3. Embedding: Convert chunks to vectors with an embeddings model. Store vectors with metadata (title, URL, section).
  4. Indexing: Save in a vector store supporting similarity search (cosine/inner product) and filters.
  5. Retrieval: For a user query, embed the query and fetch top-k chunks. Optionally add hybrid keyword+vector search.
  6. Reranking (optional): Use a cross-encoder reranker to improve top results.
  7. Prompt assembly: Build a prompt that includes instructions, the user question, and selected chunks as context.
  8. Generation: Call the LLM to produce an answer grounded in the context.
  9. Post-processing: Add citations, extract structured fields, or run safety checks.
  10. Feedback loop: Log queries, clicks, ratings; use them to improve chunking, retrieval, and prompts.
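
The sketch below wires these components together in plain Python. The embedder is a toy hash-based stand-in and the generation call is left as a comment, so treat it as an illustration of the data flow under those assumptions, not a production setup; in practice you would swap in a real embeddings model, vector store, and LLM client.

```python
# Minimal RAG pipeline sketch: chunk -> embed -> index -> retrieve -> prompt.
# embed() is a toy deterministic stand-in for a real embeddings model.
import hashlib
import math
from dataclasses import dataclass

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash character trigrams into buckets, then normalize."""
    vec = [0.0] * dim
    for i in range(max(len(text) - 2, 0)):
        bucket = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

@dataclass
class Chunk:
    text: str
    meta: dict
    vector: list[float]

def chunk_text(text: str, size: int = 300, overlap: int = 45) -> list[str]:
    """Split into overlapping word windows (words as a rough token proxy)."""
    words = text.split()
    step = max(size - overlap, 1)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

def build_index(docs: list[dict]) -> list[Chunk]:
    """Ingestion + chunking + embedding + (in-memory) indexing."""
    return [
        Chunk(piece, {"title": doc["title"]}, embed(piece))
        for doc in docs
        for piece in chunk_text(doc["text"])
    ]

def retrieve(index: list[Chunk], query: str, k: int = 5) -> list[Chunk]:
    q = embed(query)
    return sorted(index, key=lambda c: cosine(q, c.vector), reverse=True)[:k]

def assemble_prompt(question: str, chunks: list[Chunk]) -> str:
    context = "\n\n".join(f"[{c.meta['title']}] {c.text}" for c in chunks)
    return (
        "Answer only from the context below. If the answer is not in the "
        "context, say you don't know. Cite source titles.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

docs = [{"title": "Refund policy",
         "text": "Refunds are available within 30 days of purchase."}]
index = build_index(docs)
top = retrieve(index, "How long do I have to request a refund?", k=2)
prompt = assemble_prompt("How long do I have to request a refund?", top)
# Generation: send `prompt` to your LLM client of choice, then post-process.
print(prompt)
```
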
Design tips and defaults

  • Start with chunk size ~300 tokens and 40–60 tokens of overlap (roughly 15–20%); tune later with small experiments.
  • Use top-k=5–8 for retrieval as a baseline; adjust for longer/shorter contexts and domain complexity.
  • Turn on reranking when your corpus is noisy or long-tail; it often helps more than increasing k.
  • Always store source IDs and titles; you will want citations and filtering.
  • Prompt: include clear grounding instructions like “Answer only from the provided context; if missing, say you don’t know.”
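
As one concrete reading of that last tip, here is a sketch of a grounding prompt expressed as chat-style messages. The role/content dict format and the exact refusal wording are assumptions to adapt to your own LLM client.

```python
# Grounded prompt sketch as chat-style messages. The message format and
# refusal wording are assumptions; adapt them to your LLM client.
def build_messages(question: str, context_blocks: list[str]) -> list[dict]:
    system = (
        "You are a support assistant. Answer ONLY from the provided context. "
        "If the context does not contain the answer, reply: "
        '"I don\'t know based on the available documents." '
        "Cite the source title for every claim."
    )
    context = "\n\n".join(context_blocks)
    user = f"Context:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```
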

Worked examples

Example 1 — Internal policy QA assistant
  1. Goal: Answer HR policy questions with citations.
  2. Data: PDFs and docs from HR drive.
  3. Ingestion: Extract text; keep document, section, and page numbers.
  4. Chunking: 300 tokens, 15% overlap; separate by headings when possible.
  5. Embeddings: General-purpose multilingual embedding if policies vary by locale.
  6. Index: Vector store with metadata: {department: HR, country, doc_title, page}.
  7. Retrieval: top-k=6; filter by country if user specifies location.
  8. Rerank: Cross-encoder top-20 → top-6 (see the retrieval sketch after this example).
  9. Prompt: “Use only the context. If uncertain, say you don’t know. Cite titles and pages.”
  10. Generation: Model produces answer + citations.
  11. Post: Show expandable citation cards; log user rating.
  12. Metrics: Answer accuracy, citation click-through, abstention rate (when answer unknown).
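
A sketch of steps 6–8 (metadata filter, first-stage retrieval, rerank) is below. Both scoring functions are word-overlap stand-ins so the snippet runs on its own; a real setup would use embedding similarity for the first stage and a cross-encoder model for the rerank.

```python
# Toy sketch of Example 1's retrieval: metadata filter -> top-20 -> rerank
# to top-6. Only the control flow is the point; the scorers are stand-ins.
def filter_by_country(chunks: list[dict], country: str | None) -> list[dict]:
    if country is None:
        return chunks
    return [c for c in chunks if c["meta"].get("country") == country]

def first_stage_score(query: str, text: str) -> float:
    # Stand-in for embedding cosine similarity.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def rerank_score(query: str, text: str) -> float:
    # Stand-in for a cross-encoder; rewards exact phrase matches a bit more.
    bonus = 1.0 if query.lower() in text.lower() else 0.0
    return first_stage_score(query, text) + bonus

def retrieve_policy_chunks(chunks: list[dict], query: str,
                           country: str | None = None,
                           first_k: int = 20, final_k: int = 6) -> list[dict]:
    pool = filter_by_country(chunks, country)
    candidates = sorted(pool, key=lambda c: first_stage_score(query, c["text"]),
                        reverse=True)[:first_k]
    return sorted(candidates, key=lambda c: rerank_score(query, c["text"]),
                  reverse=True)[:final_k]
```
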
Example 2 — Developer troubleshooting helper
  1. Goal: Help devs resolve common errors fast.
  2. Data: Runbooks, incident notes, code snippets.
  3. Chunking: Smaller chunks (~200 tokens) because code and errors need precision.
  4. Embeddings: Code-aware embedding if available.
  5. Retrieval: Hybrid BM25 + vector search to capture exact error strings and semantics (see the fusion sketch after this example).
  6. Rerank: Yes; cross-encoder helps when many similar errors exist.
  7. Prompt: “Summarize likely root cause and steps. Provide commands verbatim from context.”
  8. Post: Extract step list; display risk notes if present in context.
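
One common way to combine the keyword and vector result lists is reciprocal rank fusion. The sketch below assumes you already have the two ranked lists of chunk ids from your keyword index and vector store; the ids shown are made up for illustration.

```python
# Hybrid retrieval sketch: fuse a BM25-style ranking and a vector ranking
# with reciprocal rank fusion (RRF). Inputs are lists of chunk ids, best
# first; k=60 is a conventional default for the RRF constant.
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, chunk_id in enumerate(ranking, start=1):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# The exact error string ranks high in keyword search, while the related
# runbook chunk ranks high in vector search; fusion surfaces both.
keyword_hits = ["chunk_err_511", "chunk_runbook_3", "chunk_faq_9"]
vector_hits = ["chunk_runbook_3", "chunk_incident_12", "chunk_err_511"]
print(reciprocal_rank_fusion([keyword_hits, vector_hits])[:3])
```
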
Example 3 — Product FAQs chatbot
  1. Goal: Answer user questions about features and pricing.
  2. Data: Public docs, release notes, pricing pages.
  3. Chunking: 250–350 tokens, overlap 10%; ensure each chunk is self-contained.
  4. Retrieval: top-k=5; boost recent release notes via metadata recency (see the scoring sketch after this example).
  5. Guardrails: “If pricing not found, ask clarifying questions or say you don’t know.”
  6. Evaluation: Weekly spot checks with synthetic queries from top support tickets.
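
A possible way to implement the recency boost in step 4 is to blend the retrieval score with an exponential decay on document age taken from an updated_at field. The half-life and blend weight below are illustrative assumptions, not tested values.

```python
# Recency-boosted scoring sketch: blend similarity with an exponential decay
# on document age. half_life_days and weight are illustrative defaults.
import math
from datetime import datetime, timezone

def recency_boosted_score(similarity: float, updated_at: datetime,
                          half_life_days: float = 90.0,
                          weight: float = 0.3) -> float:
    age_days = max((datetime.now(timezone.utc) - updated_at).days, 0)
    recency = math.exp(-math.log(2) * age_days / half_life_days)  # 1.0 today, 0.5 at half-life
    return (1 - weight) * similarity + weight * recency

print(recency_boosted_score(0.82, datetime(2025, 12, 1, tzinfo=timezone.utc)))
```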

Design choices and trade-offs

  • Chunk size: Larger chunks improve coverage but may add noise; smaller chunks increase precision but risk missing context.
  • Overlap: Avoid cutting concepts mid-sentence; modest overlap reduces fragmentation without bloating the index.
  • Embeddings model: Domain-specific embeddings can boost retrieval; test on your queries to confirm.
  • Hybrid search: Combine keyword and vector search to catch exact terms and semantics.
  • Reranking: Adds latency but often increases answer quality substantially in noisy corpora.
  • Prompt strictness: Strong grounding instructions reduce hallucinations; add an explicit refusal pattern.
  • k value: Too low k → missing facts; too high k → context dilution. Tune with offline benchmarks.
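
For that last point, a small offline benchmark makes the k trade-off visible. The sketch below assumes a curated query set with known relevant chunk ids and a hypothetical `search` function that returns chunk ids in ranked order.

```python
# Offline recall@k sketch for tuning k. `search` is a hypothetical function
# returning chunk ids ranked best-first; each query pairs a question with
# the set of chunk ids a correct answer needs.
from typing import Callable

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    if not relevant:
        return 0.0
    return len(set(retrieved[:k]) & relevant) / len(relevant)

def benchmark_k(queries: list[dict], search: Callable[[str, int], list[str]],
                k_values: tuple[int, ...] = (3, 5, 8)) -> dict[int, float]:
    results = {}
    max_k = max(k_values)
    for k in k_values:
        scores = [recall_at_k(search(q["question"], max_k), q["relevant_ids"], k)
                  for q in queries]
        results[k] = sum(scores) / len(scores)
    return results
```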

Common mistakes and self-check

  • No metadata: Without document titles or sections, you cannot show citations or filter. Self-check: Does every chunk store source and section?
  • Overlong context: Stuffing many chunks lowers relevance. Self-check: Does your assembled prompt often approach the model’s context limit?
  • Vague prompts: LLM invents details. Self-check: Does the prompt explicitly say “answer only from context”?
  • No evaluation: Hard to know if changes help. Self-check: Do you track correctness on a small, stable query set?
  • Ignoring latency: Rerankers and high k can slow UX. Self-check: Measure p95 latency before and after changes.
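
For the latency self-check, a nearest-rank p95 over logged request durations is enough to compare before and after a change; the snippet below assumes you log one end-to-end duration per request.

```python
# Nearest-rank p95 over logged end-to-end latencies (seconds). Assumes a
# non-empty list; compare the value before and after enabling reranking.
import math

def p95(latencies: list[float]) -> float:
    ordered = sorted(latencies)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[min(rank, len(ordered)) - 1]

print(p95([0.42, 0.47, 0.48, 0.49, 0.51, 0.52, 0.55, 0.58, 0.60, 1.90]))
```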

Practical projects

  • Build a policy QA bot that cites sources and abstains when unsure.
  • Create a troubleshooting assistant for a small codebase with hybrid retrieval and command extraction.
  • Implement a release-notes explorer that boosts the most recent content.

Exercises

Do these now. They mirror the graded exercises below.

Exercise 1 — Design a minimal RAG pipeline (mirrors Ex1)

Scenario: You need a support QA bot for your product docs (about 500 pages, updated monthly). Draft a minimal RAG design.

  1. Choose chunk size and overlap and justify briefly.
  2. Pick an embeddings model type and list metadata to store.
  3. Decide on retrieval: vector only or hybrid? Choose k.
  4. Decide if you’ll use a reranker; when and why.
  5. Write a 4–6 line prompt template with grounding instructions and citation requirement.
  6. Define two evaluation metrics you will track weekly.

Example solution

  • Chunking: 300 tokens, 15% overlap.
  • Embeddings: general-purpose; metadata: doc_title, section, url_id, updated_at.
  • Retrieval: hybrid, k=6.
  • Reranker: cross-encoder on top-20 → top-6 due to similar topics.
  • Prompt: “Use only the context; if missing, say you don’t know. Cite titles and sections. Keep answers under 150 words.”
  • Metrics: accuracy on 30 curated queries; abstention correctness rate.

Exercise 2 — Chunking thought experiment (mirrors Ex2)

Text: “Refunds are available within 30 days of purchase. To request a refund, contact support with your order ID. Refunds are not available for custom services.”

  1. Create 2–3 chunks with overlap so that each chunk remains meaningful.
  2. Explain how your overlap reduces boundary issues for queries like “How do I request a refund?” and “Are custom services refundable?”

Example solution

Chunk A: “Refunds are available within 30 days of purchase. To request a refund, contact support with your order ID.”
Chunk B (overlap): “To request a refund, contact support with your order ID. Refunds are not available for custom services.”
Reasoning: Overlap keeps the request steps visible while also capturing the exception. Both queries match at least one chunk fully.
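
A tiny sketch of how such overlapping chunks could be produced automatically: split on sentence boundaries and slide a two-sentence window with one sentence of overlap. The window and overlap sizes here are chosen to reproduce the chunks above, not as general defaults.

```python
# Sentence-level chunking with one-sentence overlap, applied to the exercise
# text. With two sentences per chunk it reproduces Chunk A and Chunk B.
import re

def sentence_chunks(text: str, sentences_per_chunk: int = 2,
                    overlap: int = 1) -> list[str]:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    step = max(sentences_per_chunk - overlap, 1)
    chunks = []
    for i in range(0, len(sentences), step):
        window = sentences[i:i + sentences_per_chunk]
        if window:
            chunks.append(" ".join(window))
        if i + sentences_per_chunk >= len(sentences):
            break
    return chunks

policy = ("Refunds are available within 30 days of purchase. "
          "To request a refund, contact support with your order ID. "
          "Refunds are not available for custom services.")
for chunk in sentence_chunks(policy):
    print("-", chunk)
```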

Exercise checklist

  • I justified chunk size and overlap in terms of recall vs precision.
  • I included metadata for citations and filters.
  • My prompt prohibits guessing and requests citations.
  • I selected k based on corpus size and context limit.
  • I defined simple, trackable evaluation metrics.

Mini challenge

Pick one of your current documents and write a 5-line prompt template plus a retrieval plan (k, filters, reranking). Keep total context under 1,200 tokens. Test on 3 real questions and note one improvement for tomorrow.

Learning path

  • Next: Advanced retrieval strategies (hybrid search, multi-query, query rewriting).
  • Then: Reranking, context compression, and response validation.
  • Finally: Evaluation frameworks and offline/online metrics.

Next steps

  • Complete the exercises below and take the Quick Test.
  • Instrument your RAG prototype with latency and correctness logging.
  • Schedule a weekly review to tune chunking, k, and prompts based on logs.

Practice Exercises

2 exercises to complete

Instructions

Scenario: Build a support QA bot for 500 pages of product docs updated monthly. Produce a one-page design.

  1. Choose chunk size and overlap; justify.
  2. Pick an embeddings model type; list metadata to store.
  3. Decide retrieval strategy (vector vs hybrid) and k.
  4. Decide on reranking and when to apply it.
  5. Write a 4–6 line prompt template enforcing grounding and citations.
  6. List two evaluation metrics to track weekly.
Expected Output
A concise spec describing chunking, embeddings, index fields, retrieval (k), reranking plan, prompt template, and evaluation metrics.

RAG Architecture Basics — Quick Test

Test your knowledge with 7 questions. Pass with 70% or higher.

