
Automated Testing In CI

Learn Automated Testing In CI for free with explanations, exercises, and a quick test (for Backend Engineers).

Published: January 20, 2026 | Updated: January 20, 2026

Why this matters

Backend teams rely on CI to catch problems before they reach production. Automated tests in CI help you:

  • Block broken commits from merging.
  • Validate API behavior (contracts, performance thresholds, error handling).
  • Prevent regressions when refactoring or upgrading dependencies.
  • Ship faster with confidence by running tests on every push and pull request.

Real tasks you will do as a Backend Engineer:

  • Add unit/integration tests to a pipeline and make the pipeline fail if tests fail.
  • Split tests to run in parallel to reduce build time.
  • Quarantine flaky tests without disabling the rest of the suite.
  • Publish test reports and code coverage as build artifacts.

Quick mental check

If your CI is green, do you trust deploying now? If the answer is not a confident yes, improve your test gates.

Concept explained simply

Automated testing in CI means your tests run automatically on every change, and the pipeline blocks merges if they fail. Think of CI as a guard that checks code continuously so your main branch stays healthy.

Common test layers you can automate:

  • Static checks: formatters, linters, type checkers.
  • Unit tests: fast, isolated functions/classes.
  • Integration tests: your code with real dependencies (DB, message broker, external API stubs).
  • Contract tests: validate producer/consumer API expectations.
  • End-to-end (E2E) smoke: minimal happy-path flows across services.

Policies you can enforce in CI:

  • Fail on test failures or low coverage.
  • Run fast tests on every push; run heavier tests on PR or nightly (see the sketch after this list).
  • Quarantine flaky tests to a non-blocking job while you fix them.
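
For the "fast on every push, heavy on PR or nightly" policy, a minimal GitHub Actions sketch looks like this (the test:e2e script name is an assumption; substitute whatever runs your heavier suite):

name: ci
on:
  push:
  pull_request:
  schedule:
    - cron: '0 3 * * *'   # nightly run
jobs:
  fast_tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '20', cache: 'npm' }
      - run: npm ci
      - run: npm test
  heavy_tests:
    # Only on pull requests and the nightly schedule, not on every push
    if: github.event_name == 'pull_request' || github.event_name == 'schedule'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '20', cache: 'npm' }
      - run: npm ci
      - run: npm run test:e2e   # hypothetical script for the heavier suite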

Mental model

Imagine a quality conveyor belt. Each station (lint, unit, integration, E2E) is a filter that catches defects. Early filters are fast and broad; later filters are slower and more targeted. Any defect that slips past every filter ships to production, so set up your filters to stop defects as early and as fast as possible.

Worked examples

Example 1: GitHub Actions for Node.js (unit + coverage)

name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm test -- --coverage --watchAll=false
      - name: Coverage and reporting notes
        run: |
          # Coverage is enforced by Jest's coverageThreshold in jest.config.js;
          # the test step above exits non-zero when thresholds are not met.
          # Optionally emit JUnit XML via the jest-junit reporter.
          echo "Coverage gate handled by Jest thresholds"

Key points: cache dependencies, run tests, and enforce coverage via Jest coverageThreshold settings, which make the test command exit non-zero when coverage falls below the threshold.
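
To make those reports visible per build, you can append an upload step to the job above (a sketch assuming jest-junit writes junit.xml and Jest writes its coverage output to coverage/; adjust the paths to your configuration):

      - name: Upload test report and coverage
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: |
            junit.xml
            coverage/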

Example 2: GitLab CI for Python (pytest + coverage + report)

stages: [lint, test]

lint:
  stage: lint
  image: python:3.11
  script:
    - pip install ruff
    - ruff check .

unit_tests:
  stage: test
  image: python:3.11
  variables:
    PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"  # keep pip's cache inside the project so GitLab can cache it
  cache:
    key: pip
    paths: [ .cache/pip ]
  before_script:
    - pip install -U pip
    - pip install -r requirements.txt
    - pip install pytest pytest-cov
  script:
    - pytest -q --cov=app --cov-report=xml --cov-fail-under=80 --junitxml=report.xml
  artifacts:
    when: always
    reports:
      junit: report.xml
    paths:
      - coverage.xml

Key points: fail the job when coverage drops below 80%, publish JUnit and coverage artifacts, and keep linting in a separate stage from the tests.
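
Optionally (a sketch assuming GitLab 14.10 or later), register coverage.xml as a Cobertura coverage report so merge request diffs show line-level coverage annotations:

unit_tests:
  # ... same job as above, with an extra report entry
  artifacts:
    when: always
    reports:
      junit: report.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml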

Example 3: Jenkins Declarative Pipeline (Maven + JUnit + parallel)

pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'mvn -B -DskipTests package' }
    }
    stage('Test') {
      parallel {
        stage('Unit') {
          agent any   // separate workspace so parallel Maven runs do not clash on target/
          steps { sh 'mvn -B -Dtest=*UnitTest test' }
          post { always { junit 'target/surefire-reports/*.xml' } }
        }
        stage('Integration') {
          agent any   // separate workspace so parallel Maven runs do not clash on target/
          steps { sh 'mvn -B -Dtest=*IT test' }
          post { always { junit 'target/surefire-reports/*.xml' } }
        }
      }
    }
  }
  post {
    always { archiveArtifacts artifacts: 'target/**/*.jar', fingerprint: true }
    unsuccessful { echo 'Blocking merge: tests failed.' }
  }
}

Key points: run unit and integration tests in parallel, publish results, and block merges when tests fail.

Tip: managing test data

Prefer ephemeral resources: spin up a local DB container, run migrations, seed data, run tests, then tear down. This isolates tests and keeps them stable.
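
For example, GitHub Actions can start a throwaway PostgreSQL service container next to the test job (a sketch; the image tag, migration/seed script, and DATABASE_URL value are assumptions for illustration):

jobs:
  integration_tests:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: test
          POSTGRES_DB: app_test
        ports: ['5432:5432']
        options: >-
          --health-cmd "pg_isready -U postgres"
          --health-interval 5s
          --health-timeout 5s
          --health-retries 10
    env:
      DATABASE_URL: postgresql://postgres:test@localhost:5432/app_test
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.11' }
      - run: pip install -r requirements.txt pytest
      - run: python scripts/migrate_and_seed.py   # hypothetical migration + seed step
      - run: pytest -q tests/integration

The service container is created for the job and destroyed when the job finishes, so every run starts from a clean database.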

Step-by-step implementation

  1. Pick your test frameworks and reporters (e.g., JUnit XML).
  2. Run fast checks first: formatting, lint, unit tests.
  3. Add coverage with a fail-under threshold.
  4. Add integration tests using ephemeral services (containers) and seed data.
  5. Publish test results and coverage as artifacts for visibility.
  6. Split long suites and run them in parallel to reduce time.
  7. Mark flaky tests and move them to a non-blocking job; track and fix them.
  8. Protect the main branch by making the test job required.

Performance checklist
  • Cache dependencies.
  • Only run heavy tests when needed (PRs, nightly).
  • Shard tests by file or historical timing to balance parallel jobs (see the sketch below).
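
One way to shard is a GitHub Actions matrix (a sketch assuming the pytest-split plugin, which can balance groups using a committed .test_durations timing file):

# GitHub Actions job: four parallel shards of the same suite
jobs:
  test_shards:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        group: [1, 2, 3, 4]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.11' }
      - run: pip install -r requirements.txt pytest pytest-split
      - run: pytest -q --splits 4 --group ${{ matrix.group }}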

Exercises

Do these to cement the skill. The quick test at the end is available to everyone; log in to save your progress.

Exercise 1: Enforce coverage in CI

Create a CI job that runs your unit tests and fails if coverage is below 80%. Publish a JUnit test report artifact.

  • Expected output: pipeline fails when tests fail or coverage < 80%; passes otherwise.

Hints
  • Use a CLI flag (e.g., --cov-fail-under for pytest, Jest coverageThreshold).
  • Emit JUnit XML via your test runner or a plugin.

Solution
# Example (GitHub Actions + Python)
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.11' }
      - run: pip install -r requirements.txt pytest pytest-cov
      - run: pytest -q --cov=app --cov-report=xml --cov-fail-under=80 --junitxml=report.xml
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: junit-report
          path: report.xml

Exercise 2: Quarantine a flaky test

Split flaky tests into a separate, non-blocking CI job while keeping the main test job blocking. Tag flaky tests and exclude them from the blocking job.

  • Expected output: main test job passes (excluding flaky tests), flaky job runs separately and may fail without blocking merges.

Hints
  • Use markers/tags (e.g., pytest -m "not flaky").
  • Have two jobs: "tests_blocking" (required) and "tests_flaky" (allowed to fail).

Solution
# Example (GitLab CI)
stages: [test]

tests_blocking:
  stage: test
  image: python:3.11
  script:
    - pip install -r requirements.txt pytest
    - pytest -q -m "not flaky" --junitxml=report.xml
  artifacts:
    when: always
    reports: { junit: report.xml }

# Non-blocking job: allow_failure lets the pipeline pass even when this job fails
tests_flaky:
  stage: test
  image: python:3.11
  allow_failure: true
  script:
    - pip install -r requirements.txt pytest
    - pytest -q -m flaky --junitxml=flaky.xml
  artifacts:
    when: always
    reports: { junit: flaky.xml }
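
The same quarantine in GitHub Actions terms (a sketch): continue-on-error at the job level keeps the workflow from failing when the flaky job fails, while the blocking job stays required.

jobs:
  tests_blocking:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.11' }
      - run: pip install -r requirements.txt pytest
      - run: pytest -q -m "not flaky"
  tests_flaky:
    runs-on: ubuntu-latest
    continue-on-error: true   # failures here do not fail the workflow
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.11' }
      - run: pip install -r requirements.txt pytest
      - run: pytest -q -m flaky

In branch protection, mark only tests_blocking as a required status check.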

Checklist before you commit

  • Fast jobs first: lint and unit tests < 3 minutes.
  • Coverage threshold enforced (e.g., 80% start, adjust with team agreement).
  • Integration tests use ephemeral resources and clean up.
  • Flaky tests are tagged and separated from blocking jobs.
  • Test reports are visible in CI UI.
  • Main branch requires passing test job(s).

Common mistakes and self-check

  • Running only unit tests: add integration tests for DB/broker flows.
  • Letting flakiness block releases: quarantine while fixing root cause.
  • No coverage gate: quality erodes silently over time; add a fail-under threshold.
  • Slow pipeline: cache deps, parallelize, and skip heavy tests on trivial changes.
  • Shared, stateful test DB: prefer ephemeral per-job DB containers.

Self-check prompts
  • Can you show a failing commit that CI blocks due to tests?
  • Do you have at least one integration test touching a real dependency?
  • Can you open the coverage report artifact from the last run?

Practical projects

  • Add integration tests with a local database container, including migrations and seed data.
  • Split tests into parallel shards using timing data or file count.
  • Introduce contract tests for one producer-consumer API pair.

Learning path

  • Start: static checks and unit tests with coverage gate.
  • Next: integration tests with ephemeral services.
  • Then: contract tests and small E2E smoke.
  • Optimize: parallelization, caching, flaky-test quarantine, and reporting.

Who this is for

Backend Engineers, Platform Engineers, and DevOps-minded developers who need reliable CI pipelines that prevent regressions.

Prerequisites

  • Ability to run your project locally with tests.
  • Basic knowledge of a CI system (GitHub Actions, GitLab CI, Jenkins, etc.).
  • Familiarity with your language’s test and coverage tools.

Next steps

  • Harden your CI: add integration tests and make test jobs required.
  • Reduce flaky tests: add retries at the test level (sketched below) and better waiting/assertions.
  • Run the Quick Test below to check your understanding.
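
For test-level retries, here is a sketch assuming the pytest-rerunfailures plugin (treat retries as a stopgap while you fix root causes, not a permanent mask):

# GitLab CI job: rerun failing tests up to twice before reporting a failure
tests_with_retries:
  stage: test
  image: python:3.11
  script:
    - pip install -r requirements.txt pytest pytest-rerunfailures
    - pytest -q --reruns 2 --reruns-delay 1 --junitxml=report.xml
  artifacts:
    when: always
    reports:
      junit: report.xml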

Mini challenge

Turn one flaky test into a stable integration test by replacing time-based waits with event-based checks, and run it in the blocking job. Show green CI twice in a row.

Note on progress

The Quick Test is free for everyone. Log in to save your attempts and track progress over time.


Automated Testing In CI — Quick Test

Test your knowledge with 8 questions. Pass with 70% or higher.
