
Handling Data Consistency

Learn Handling Data Consistency for free with explanations, exercises, and a quick test (for Backend Engineers).

Published: January 20, 2026 | Updated: January 20, 2026

Why this matters

As a Backend Engineer, you must keep data correct even when many users hit your system at once, services fail, or retries happen. Real tasks include:

  • Preventing overselling inventory during flash sales.
  • Ensuring money transfers never create or lose funds.
  • Avoiding duplicate orders when clients retry requests.
  • Keeping related records in sync (e.g., order and shipment).

Consistency builds user trust and reduces costly production incidents.

Concept explained simply

Consistency means your data always obeys the rules you set (like “balance can’t be negative” or “one order per request”). You enforce those rules with constraints, transactions, and careful read/write patterns.
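
For instance, the "balance can't be negative" rule can live in the schema itself instead of only in application code. A minimal sketch (table and column names are illustrative):

SQL sketch
CREATE TABLE accounts (
  id BIGINT PRIMARY KEY,
  balance NUMERIC NOT NULL CHECK (balance >= 0)  -- the database rejects any write that breaks this rule
);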

Mental model

Think of your database as a ledger with a strict librarian:

  • The librarian only files complete, valid entries (transactions).
  • They reject entries that break rules (constraints).
  • They control who reads or edits a page at a time (isolation and locks).
  • They stamp each request with a unique ticket so the same job isn’t filed twice (idempotency keys).

Core tools you will use

  • Constraints: primary keys, unique, foreign keys, check constraints.
  • Transactions: all-or-nothing units of work.
  • Isolation levels: control read/write concurrency to avoid anomalies (dirty reads, lost updates, phantom reads).
  • Locks: row/table locks, SELECT ... FOR UPDATE.
  • Idempotency & dedup: request_id or operation_key with unique indexes.
  • Optimistic vs. pessimistic concurrency: version columns vs. explicit locks.
  • Ordering & time: server-side timestamps, monotonic IDs for event ordering.
  • Distributed flows: outbox + inbox patterns, sagas as sequences of local transactions.

Quick isolation cheatsheet
  • Read Uncommitted: allows dirty reads (rarely acceptable).
  • Read Committed: prevents dirty reads; common default.
  • Repeatable Read: rows you read stay stable during the transaction; prevents non-repeatable reads (phantoms may still occur).
  • Serializable: safest; behaves like one-by-one execution; slower, more conflicts.
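
Most relational databases let you choose the isolation level per transaction rather than globally. A minimal PostgreSQL-style sketch (syntax varies by database):

SQL sketch
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- applies only to this transaction
-- ... reads and writes ...
COMMIT;
-- On a serialization failure, roll back and retry the whole transaction.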

Worked examples

Example 1: Prevent overselling inventory (pessimistic lock)

  1. Start a transaction.
  2. Lock the row: SELECT stock FROM inventory WHERE sku = ? FOR UPDATE.
  3. If stock < qty: abort with error.
  4. Update: UPDATE inventory SET stock = stock - ? WHERE sku = ?.
  5. Insert order record with a unique request_id.
  6. Commit.

SQL sketch
BEGIN;
SELECT stock FROM inventory WHERE sku = $1 FOR UPDATE;
-- app check: stock >= $2
UPDATE inventory SET stock = stock - $2 WHERE sku = $1;
INSERT INTO orders (order_id, customer_id, sku, qty, request_id)
VALUES ($3, $4, $1, $2, $5);
COMMIT;

Add a unique index on (request_id) to stop duplicates during retries.
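
A minimal sketch of that index (or declare the column UNIQUE in the table definition):

SQL sketch
CREATE UNIQUE INDEX orders_request_id_uniq ON orders (request_id);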

Example 2: Idempotent order creation with upsert

Goal: the same request from the client should not create multiple orders.

SQL sketch
-- orders(request_id UNIQUE)
INSERT INTO orders (order_id, customer_id, sku, qty, request_id)
VALUES ($1, $2, $3, $4, $5)
ON CONFLICT (request_id) DO NOTHING;
-- Then fetch by request_id to return an existing order.
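
The follow-up read returns the same order whether the insert was fresh or deduplicated. A minimal sketch of that fetch:

SQL sketch
SELECT order_id, customer_id, sku, qty
FROM orders
WHERE request_id = $5;  -- one row per request_id, guaranteed by the unique constraint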

Example 3: Optimistic locking to prevent lost updates

Use a version column and verify it during updates. If version changed, retry with fresh data.

SQL sketch
-- profile(id PK, name, email, version INT)
UPDATE profile
SET name = $2, email = $3, version = version + 1
WHERE id = $1 AND version = $4;
-- If rowcount = 0, someone else updated first. Reload and retry.
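
The version value the client sends back ($4) comes from an earlier read. A minimal sketch of that read:

SQL sketch
SELECT name, email, version
FROM profile
WHERE id = $1;  -- keep this version and send it back with the update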

Exercises

Do these hands-on tasks. Compare your work to the solutions. Tip: use transactions and constraints, not just code checks.

Exercise 1 — Safe wallet transfer with idempotency (mirrors ex1)

Design a transaction that:

  • Debits one account and credits another.
  • Prevents negative balances.
  • Is idempotent using transfer_key (unique).

What you need to produce
  • SQL DDL for constraints.
  • SQL transaction steps to process transfer(from_id, to_id, amount, transfer_key).

Exercise 2 — Stop double-ordering (mirrors ex2)

Write SQL or pseudocode to create an order only if stock allows, avoiding duplicates on client retries.

What you need to produce
  • Inventory-safe decrement.
  • Unique request_id logic.
  • Behavior on retry.

Self-review checklist

  • All critical invariants are enforced by the database (constraints).
  • No update can silently overwrite another (locking or version checks).
  • Retries do not duplicate side effects (idempotency keys).
  • Transactions are minimal and only lock what is needed.
  • Errors roll back cleanly with no partial writes.

Common mistakes and self-check

  • Only validating in application code. Fix: add UNIQUE, FK, CHECK constraints.
  • Forgetting idempotency on endpoints that may be retried. Fix: store a request_id with a unique index.
  • Long-running transactions holding locks. Fix: keep them short; move slow work outside.
  • Using the strongest isolation everywhere. Fix: choose per-use-case; escalate only where needed.
  • Reading before locking. Fix: lock the rows you are about to change with SELECT ... FOR UPDATE.
  • Ignoring time/order. Fix: use server-generated timestamps or monotonic IDs when order matters.
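
For the time/order point, server-generated values are a minimal fix. A sketch (PostgreSQL-style; names are illustrative):

SQL sketch
CREATE TABLE events (
  id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- assigned by the server in increasing order
  payload JSONB,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()        -- server clock, not the client's
);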

Self-check prompt

For your last bug or feature, list the invariants it must keep. For each invariant, show the exact mechanism (constraint, unique index, transaction step) that guarantees it.

Practical projects

  • Double-entry ledger service: accounts, entries, and balances with constraints to ensure debits equal credits per transaction.
  • Order service with idempotent create and inventory decrement using SELECT ... FOR UPDATE.
  • Outbox-based email sender: write events to the outbox in the same transaction as the main change; a worker reads and sends reliably with idempotent processing (see the sketch after the steps below).

Suggested steps
  1. Define invariants as CHECK/UNIQUE/FK constraints.
  2. Design transactions that enforce ordering and locking.
  3. Add idempotency keys to write paths.
  4. Load test with concurrent clients; verify no duplicates or negative values.
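
For the outbox project, the key move is writing the event in the same transaction as the business change, as in this minimal sketch (table and column names are illustrative):

SQL sketch
BEGIN;
UPDATE orders SET status = 'paid' WHERE order_id = $1;
INSERT INTO outbox (event_id, topic, payload)
VALUES ($2, 'order.paid', $3);  -- committed atomically with the order update
COMMIT;
-- A worker then polls outbox, sends the email, and marks rows processed;
-- the receiver dedupes on event_id (the inbox side).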

Who this is for

Backend and platform engineers who build APIs and services that must remain correct under concurrency, failures, and retries.

Prerequisites

  • Basic SQL (SELECT/INSERT/UPDATE/DELETE).
  • Transactions and indexes basics.
  • Familiarity with your DB’s isolation levels.

Learning path

  1. Model invariants as constraints.
  2. Add transactions with correct isolation and locking.
  3. Implement idempotency on all externally triggered writes.
  4. Use optimistic locking for user-edited records; pessimistic locks for hot counters/inventory.
  5. Integrate outbox/inbox for cross-service reliability.

Next steps

  • Refactor one existing write path in your codebase to be idempotent.
  • Add missing constraints for top invariants.
  • Run a small concurrency test and verify no anomalies.

Mini challenge

You must ensure: “A customer cannot place more than one free trial order.” Stock is unlimited. Which minimal DB approach would you use?

Sample answer

Add UNIQUE(customer_id) on trial_orders. On create, INSERT ... ON CONFLICT DO NOTHING and return the existing trial if present. No locks or inventory needed.
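
A minimal sketch of that answer (table name is illustrative):

SQL sketch
-- trial_orders(customer_id UNIQUE)
INSERT INTO trial_orders (customer_id, created_at)
VALUES ($1, now())
ON CONFLICT (customer_id) DO NOTHING;
-- If no row was inserted, SELECT the existing trial by customer_id and return it.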

Practice Exercises

2 exercises to complete

Instructions

You have tables:

accounts(id PK, balance NUMERIC CHECK (balance >= 0))
transfers(id PK, from_id FK, to_id FK, amount NUMERIC CHECK (amount > 0),
          transfer_key TEXT UNIQUE, created_at TIMESTAMP)

Write SQL to:

  • Process a transfer(from_id, to_id, amount, transfer_key) atomically.
  • Prevent negative balances.
  • Be idempotent on the same transfer_key (no double debit/credit).

Expected Output
A transaction that either moves funds once or returns the existing transfer without changing balances. No negative balances. One transfer row per transfer_key.

Handling Data Consistency — Quick Test

Test your knowledge with 7 questions. Pass with 70% or higher.

