Testing And Quality

Learn Testing And Quality for API Engineers for free: roadmap, examples, subskills, and a skill exam.

Published: January 21, 2026 | Updated: January 21, 2026

Who this is for

You build or maintain APIs and want predictable releases, fast feedback in CI, and confidence that changes don’t break clients. Ideal for API Engineers, Backend Developers, and Platform teams responsible for reliability and performance.

Prerequisites

  • Comfort with one backend stack (e.g., Node, Python, Go, Java) and HTTP basics
  • Know how to run an API locally (env vars, migrations, seeds)
  • Familiarity with a test runner in your language (e.g., Jest, pytest, JUnit)
Quick readiness self-check
  • Can you start your API with a single command?
  • Do you know how to set environment variables for test vs. dev?
  • Can you write a simple test that asserts status code and JSON body?

Why this skill matters

High-quality API tests reduce outages, protect client contracts, and speed up releases. With good tests, you can refactor aggressively, catch regressions early, and keep SLAs stable. For API Engineers, strong testing unlocks safer change management, better developer experience, and trust from downstream consumers.

What this unlocks on your team
  • Confident schema evolution with contract tests
  • Stable integration pipelines with mockable dependencies
  • Measurable performance baselines and regression alarms
  • Automated security checks for common risks

Learning path

  1. Foundation: Write integration tests for core endpoints (happy paths + key edge cases). Add schema validation to catch shape and type drift.
  2. Contracts: Capture consumer-provider contracts and verify them in CI to prevent breaking changes.
  3. Isolation: Mock external services and flaky dependencies to keep tests fast and deterministic.
  4. Security checks: Add authentication, authorization, injection, and leakage tests.
  5. Performance guardrails: Establish baseline latencies and set regression thresholds.
  6. Data strategy: Use seed data, factories, and idempotent cleanup for reliable runs.
  7. CI automation: Parallelize tests, add caching, and enforce quality gates (fail on contract or perf regressions).
Milestones you can measure
  • T1: Every critical endpoint has at least one integration test
  • T2: Contract checks run on each PR
  • T3: Perf regression test fails if p95 exceeds baseline by N%
  • T4: CI run time under 10 minutes with stable pass rate

Worked examples

1) Integration test for an endpoint (status, body, headers)

Example in JavaScript with a Jest-style test runner and a Supertest-style HTTP assertion client (any superagent-compatible client will do).

// GET /v1/users/:id returns { id, email, role }

const request = require('supertest'); // or any superagent-style assertion client
const app = require('../app');        // your app's entry point (path will vary)

describe('GET /v1/users/:id', () => {
  it('returns 200 with expected shape', async () => {
    const res = await request(app)
      .get('/v1/users/123')
      .set('Accept', 'application/json');

    expect(res.status).toBe(200);
    expect(res.headers['content-type']).toMatch(/application\/json/);
    expect(res.body).toMatchObject({ id: 123, role: expect.any(String) });
    expect(res.body.email).toMatch(/@/); // basic sanity check
  });

  it('returns 404 for a missing user', async () => {
    const res = await request(app).get('/v1/users/999999');
    expect(res.status).toBe(404);
  });
});

2) Contract/schema validation

Capture the expected response schema and validate responses against it. If the provider changes fields or types, the test fails early.

const userSchema = {
  type: 'object',
  required: ['id', 'email', 'role'],
  properties: {
    id: { type: 'integer' },
    email: { type: 'string', format: 'email' },
    role: { type: 'string', enum: ['admin', 'member', 'viewer'] }
  },
  additionalProperties: false
};

describe('Contract: GET /v1/users/:id', () => {
  it('matches schema', async () => {
    const res = await request(app).get('/v1/users/123');
    expect(res.status).toBe(200);
    expect(res.body).toMatchSchema(userSchema); // e.g., the jest-json-schema matcher; see the Ajv sketch below
  });
});
Why contracts help

They protect clients by preventing silent changes, like renaming a field or adding unexpected properties that break strict parsers.
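
If your test framework lacks a schema matcher, a plain validator works just as well. Here is a minimal sketch using the Ajv library (assumes the ajv and ajv-formats packages; userSchema is the object defined above):

const Ajv = require('ajv');
const addFormats = require('ajv-formats');

const ajv = new Ajv({ allErrors: true });
addFormats(ajv); // enables "format: 'email'" in the schema
const validateUser = ajv.compile(userSchema);

it('matches schema (Ajv variant)', async () => {
  const res = await request(app).get('/v1/users/123');
  const valid = validateUser(res.body);
  expect(valid).toBe(true); // on failure, validateUser.errors lists every violation
});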

3) Mocking a dependency (cache + database)

Keep integration tests deterministic by simulating cache hits/misses.

describe('UserService.getUser', () => {
  it('uses cache when available', async () => {
    const cache = { get: jest.fn().mockResolvedValue({ id: 1, email: 'a@b.com' }) };
    const db = { findUserById: jest.fn() }; // should not be called

    const svc = new UserService({ cache, db });
    const user = await svc.getUser(1);

    expect(cache.get).toHaveBeenCalledWith('user:1');
    expect(db.findUserById).not.toHaveBeenCalled();
    expect(user.email).toBe('a@b.com');
  });

  it('falls back to db on cache miss', async () => {
    const cache = { get: jest.fn().mockResolvedValue(null), set: jest.fn() };
    const db = { findUserById: jest.fn().mockResolvedValue({ id: 2, email: 'x@y.com' }) };

    const svc = new UserService({ cache, db });
    const user = await svc.getUser(2);

    expect(db.findUserById).toHaveBeenCalledWith(2);
    expect(cache.set).toHaveBeenCalled();
    expect(user.id).toBe(2);
  });
});
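
For reference, here is a minimal UserService sketch consistent with the tests above (hypothetical; your real service will differ in constructor shape and cache key naming):

class UserService {
  constructor({ cache, db }) {
    this.cache = cache;
    this.db = db;
  }

  async getUser(id) {
    const key = `user:${id}`;
    const cached = await this.cache.get(key); // hit: skip the database entirely
    if (cached) return cached;

    const user = await this.db.findUserById(id); // miss: fall back to the database
    await this.cache.set(key, user);             // then populate the cache
    return user;
  }
}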

4) Security testing basics

// Auth required
it('rejects without token', async () => {
  const res = await request(app).get('/v1/admin/users');
  expect(res.status).toBe(401);
});

// Authorization required
it('rejects insufficient role', async () => {
  const res = await request(app)
    .get('/v1/admin/users')
    .set('Authorization', 'Bearer userToken');
  expect(res.status).toBe(403);
});

// Injection guard (simple example)
it('sanitizes query inputs', async () => {
  const res = await request(app)
    .get('/v1/users')
    .query({ search: '" OR 1=1 --' }); // .query() URL-encodes the payload safely
  expect(res.status).toBe(200);
  // assert results are constrained as expected, not all users
});

// Error leakage
it('does not leak stack traces', async () => {
  const res = await request(app).get('/v1/trigger-error');
  expect(res.status).toBeGreaterThanOrEqual(400);
  expect(JSON.stringify(res.body)).not.toMatch(/exception|stack|trace/i); // case-insensitive
});
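
To broaden the injection check, you can table-drive several payloads with Jest's it.each (the payload strings below are illustrative, not exhaustive):

it.each([
  '" OR 1=1 --',
  "'; DROP TABLE users; --",
  '{"$gt": ""}',
])('constrains results for payload: %s', async (payload) => {
  const res = await request(app)
    .get('/v1/users')
    .query({ search: payload }); // .query() URL-encodes each payload
  expect(res.status).toBe(200);
  // as above: assert the result set stays filtered, not the whole table
});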

5) Performance regression guard

Fail CI if typical latency worsens beyond a threshold. This uses a simple loop; replace with your language’s HTTP client and timing utilities.

async function measureP95(getFn, times = 50) {
  const samples = [];
  for (let i = 0; i < times; i++) {
    const t0 = Date.now();
    await getFn();
    samples.push(Date.now() - t0);
  }
  samples.sort((a, b) => a - b);
  // nearest-rank p95: ceil(0.95 * n) - 1 as a zero-based index
  const idx = Math.ceil(0.95 * samples.length) - 1;
  return samples[Math.max(idx, 0)];
}

describe('Performance regression: /v1/users/123', () => {
  it('p95 under 120ms (+15% budget)', async () => {
    const baselineMs = 120; // update when you intentionally improve/accept perf
    const budget = 1.15;    // 15% allowed

    const p95 = await measureP95(() => request(app).get('/v1/users/123'));
    expect(p95).toBeLessThanOrEqual(baselineMs * budget);
  });
});
Tip: Keeping perf tests stable
  • Warm up the service before timing (see the sketch below)
  • Use fixed data and controlled environment
  • Assert on p95 or median, not single-run latency
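
A warm-up helper can be this small. A sketch (the iteration count is arbitrary; tune it to your service):

async function warmUp(getFn, times = 10) {
  for (let i = 0; i < times; i++) {
    await getFn(); // responses intentionally discarded; this only primes caches and connection pools
  }
}

// usage, before measuring:
// await warmUp(() => request(app).get('/v1/users/123'));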

Drills & exercises

  • Add a 404 test for each read endpoint
  • Add schema validation for one frequently used response
  • Mock one external service (e.g., email, payments) in your tests
  • Add tests for 401, 403, and 422/400 scenarios on a write endpoint
  • Establish a p95 baseline for one endpoint and store it in code
  • Create a reusable test data factory for a core entity (see the sketch after this list)
  • Make your test suite run with a single command (including seeds)
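
A minimal factory sketch (the makeUser name and field set are illustrative; adapt them to your entity):

let seq = 0;

function makeUser(overrides = {}) {
  seq += 1;
  return {
    id: seq,                          // unique per test run, avoids collisions
    email: `user${seq}@example.test`, // deterministic, never a real address
    role: 'member',
    ...overrides,                     // tests override only what they assert on
  };
}

// usage:
// const admin = makeUser({ role: 'admin' });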

Common mistakes and debugging tips

  • Over-relying on E2E tests: They’re slow and flaky. Prefer integration + contract tests, add a few critical-path E2Es only.
  • Testing implementation, not behavior: Assert public API (status, schema, side effects), not internal calls.
  • Unstable test data: Use factories/seeds, and clean up after tests. Avoid reusing mutable global state.
  • Ignoring non-happy paths: Cover 4xx/5xx, timeouts, and partial failures.
  • Silent schema drift: Lock responses with schemas and fail fast on unexpected fields/types.
  • Perf tests that flap: Warm up, increase sample size, and assert on percentiles.
Debugging checklist
  • If a test is flaky, run it in isolation and add timings/logs to narrow external dependencies.
  • Seed deterministic data and freeze time where possible (see the sketch below).
  • Check parallelism issues (shared DB tables, ports, caches).
  • Record failing response payloads for quick diffing against schema.
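
For the freeze-time tip, Jest's modern fake timers can pin the clock. A sketch (note that faking timers can interfere with real network timeouts, so scope it to the tests that need it):

beforeAll(() => {
  jest.useFakeTimers();
  jest.setSystemTime(new Date('2026-01-21T00:00:00Z')); // any fixed instant works
});

afterAll(() => {
  jest.useRealTimers(); // always restore, or later tests inherit the frozen clock
});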

Mini project: Ship a CI-ready API test suite

Goal: take a small API and build a well-tested, automated pipeline around it with contract, integration, security, and performance checks.

Step 1: Map endpoints and risks. Pick 5 endpoints (mix of GET/POST).
Step 2: Add integration tests (200 + key 4xx/5xx) per endpoint.
Step 3: Introduce schema validation for 2 high-traffic responses.
Step 4: Mock one external dependency (payments, email, or cache) for determinism.
Step 5: Add security tests: 401/403, injection guard, no stacktrace leakage.
Step 6: Establish p95 baselines for 2 endpoints and set 15% regression budget.
Step 7: Wire all tests into CI (single command), fail build on contract or perf regression.
Hints
  • Keep test data in factories. Name them clearly (e.g., makeAdminUser, makeOrg)
  • Tag slow tests so you can run fast subsets on each commit and full suite nightly
  • Store baselines in code and update only after intentional changes
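
"Store baselines in code" can be as simple as a checked-in module that tests import (a sketch; routes and numbers are placeholders):

// baselines.js
module.exports = {
  'GET /v1/users/:id': { p95Ms: 120 },
  'GET /v1/orders':    { p95Ms: 200 },
};

// tests read these values instead of hard-coding thresholds inline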

Subskills

  • Contract Testing Basics: Capture and verify consumer-provider agreements to prevent breaking changes.
  • Integration Tests For Endpoints: Exercise routes, middleware, and persistence with realistic inputs.
  • Mocking Dependencies: Replace external services and flaky layers to keep tests fast and deterministic.
  • Schema Validation Tests: Lock response shapes and types to catch drift immediately.
  • Security Testing Basics: Validate auth, authz, injection resistance, and error hygiene.
  • Performance Regression Tests: Track baseline latency and fail builds on regressions.
  • Test Data Management: Use factories/seeds and cleanup to ensure reliable, idempotent runs.
  • CI Automation For API Tests: Run on each PR, parallelize, cache, and set clear quality gates.

Next steps

  • Pick one service and apply the mini project steps end-to-end
  • Add missing edge-case tests (timeouts, retries, partial failures)
  • Scale out: run critical tests on every PR and a fuller suite nightly
