Docker Compose For Local Stacks

Learn Docker Compose For Local Stacks for free with explanations, exercises, and a quick test (for Machine Learning Engineers).

Published: January 1, 2026 | Updated: January 1, 2026

Who this is for

Machine Learning Engineers and Data Scientists who need reliable, reproducible local environments for APIs, training jobs, experiment tracking, and dependencies like databases, message brokers, and object storage.

Prerequisites

  • Basic Docker: images, containers, Dockerfile, volumes, networks
  • Command line comfort
  • Optional: Python/ML tooling (FastAPI, Jupyter, MLflow)

Why this matters

In real ML work you often run multiple services together: API + Postgres + Redis for feature caching, or Jupyter + MLflow + MinIO for experiments. Docker Compose lets you define and run these as a single, versioned stack so teammates can reproduce your setup with one command.

  • Spin up a local inference API with Redis and Postgres for integration tests
  • Run Jupyter with MLflow tracking and a database for experiments
  • Sandbox data pipelines with a message broker and workers

Concept explained simply

Docker Compose reads a YAML file that describes a group of containers (services), how they talk to each other (networks), where they store data (volumes), and how they start up (dependencies). One file, one command: consistent local stacks.
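
For instance, a minimal sketch with a single service (using redis:7, which also appears in the examples below):

services:
  cache:
    image: redis:7
    ports:
      - "6379:6379"

Save this as compose.yml and run docker compose up -d; if you have redis-cli installed, redis-cli -p 6379 ping answers PONG. docker compose down tears it down.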

Mental model

  • compose.yml = blueprint for your local mini-cloud
  • services = apps (api, db, cache)
  • networks = private wires between apps
  • volumes = hard drives that survive restarts
  • environment and env_file = knobs you twist without changing code
  • depends_on + healthcheck = boot order and readiness

Set up a reliable local stack (step-by-step)

1) Create a compose file

Name it compose.yml or docker-compose.yml at the project root; Compose picks up either name automatically.

2) Define services

Start with core services (api, db, cache). Add ports, environment, and volumes.

3) Add networks and volumes

Compose creates a default network for the stack automatically; declare named volumes for data that must survive restarts.
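
A sketch with an explicit network and a named volume (the names backend and dbdata are illustrative; even without a networks section, Compose attaches all services to a shared default network):

services:
  api:
    build: ./api
    networks:
      - backend
  db:
    image: postgres:15
    networks:
      - backend
    volumes:
      - dbdata:/var/lib/postgresql/data
networks:
  backend:
volumes:
  dbdata: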

4) Wire dependencies

Use depends_on and healthcheck so the app waits for databases to be ready.
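
The three dependency conditions, sketched below (migrate is a hypothetical one-shot service, and db needs a healthcheck for service_healthy to mean anything):

services:
  api:
    build: ./api
    depends_on:
      db:
        condition: service_healthy                # wait until the healthcheck passes
      cache:
        condition: service_started                # container started; readiness unknown
      migrate:
        condition: service_completed_successfully # one-shot container exited with code 0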

5) Use .env for secrets and toggles

Keep credentials and settings out of the compose file when possible.
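
A minimal sketch, assuming a .env file at the project root with placeholder values (never commit real credentials):

POSTGRES_USER=mluser
POSTGRES_PASSWORD=change-me
POSTGRES_DB=mldb

Reference it from the compose file; Compose also reads .env automatically for ${VAR} interpolation:

services:
  db:
    image: postgres:15
    env_file: .env
  api:
    build: ./api
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}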

6) Run and iterate

docker compose up -d, view logs, refine. Tear down with docker compose down.
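
A typical iteration loop:

docker compose up -d          # start (or update) the stack in the background
docker compose ps             # check container status and health
docker compose logs -f api    # follow one service's logs
docker compose down           # stop and remove containers (named volumes survive)
docker compose down -v        # also remove named volumes (destroys DB data)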

Worked examples

Example 1: Inference API + Postgres + Redis

Run a FastAPI inference service with a Postgres DB and Redis cache.

services:
  api:
    build: ./api
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://mluser:mlpass@db:5432/mldb
      - REDIS_URL=redis://cache:6379/0
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
  db:
    image: postgres:15
    environment:
      - POSTGRES_DB=mldb
      - POSTGRES_USER=mluser
      - POSTGRES_PASSWORD=mlpass
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U mluser"]
      interval: 5s
      timeout: 3s
      retries: 10
    volumes:
      - dbdata:/var/lib/postgresql/data
  cache:
    image: redis:7
    command: ["redis-server", "--save", "60", "1"]
volumes:
  dbdata:

Notes: The api can reach db and cache by service names (db, cache) thanks to the default network.

Example 2: Jupyter + MLflow + Postgres (tracking)

Experiment stack with persistent tracking.

services:
  jupyter:
    image: jupyter/scipy-notebook:2023-11-20
    ports: ["8888:8888"]
    volumes:
      - ./notebooks:/home/jovyan/work
    environment:
      - MLFLOW_TRACKING_URI=http://mlflow:5000
    command: ["start-notebook.sh", "--NotebookApp.token="]
  mlflow:
    image: ghcr.io/mlflow/mlflow:v2.8.0
    ports: ["5000:5000"]
    # The stock image does not bundle a Postgres driver, so one is installed at
    # startup; for anything beyond a throwaway local stack, bake psycopg2-binary
    # into a small custom image instead.
    command: >
      sh -c "pip install psycopg2-binary &&
             mlflow server --host 0.0.0.0 --port 5000
             --backend-store-uri postgresql://mlflow:mlflow@db:5432/mlflow
             --default-artifact-root /mlruns"
    volumes:
      - mlruns:/mlruns
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:15
    environment:
      - POSTGRES_DB=mlflow
      - POSTGRES_USER=mlflow
      - POSTGRES_PASSWORD=mlflow
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U mlflow -d mlflow"]
      interval: 5s
      timeout: 3s
      retries: 15
    volumes:
      - trackingdb:/var/lib/postgresql/data
volumes:
  mlruns:
  trackingdb:

Notes: Jupyter reaches the mlflow service by hostname; MLflow uses Postgres for the backend store and the mlruns volume for artifacts.

Example 3: Worker + RabbitMQ + MinIO (object storage)

Local pipeline: a worker pulls tasks from a queue and stores outputs in S3-compatible storage.

services:
  worker:
    build: ./worker
    environment:
      - BROKER_URL=amqp://guest:guest@queue:5672/
      - MINIO_ENDPOINT=http://minio:9000
      - MINIO_ACCESS_KEY=minio
      - MINIO_SECRET_KEY=minio123
    depends_on:
      queue:
        condition: service_healthy
      minio:
        condition: service_started
  queue:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "ping"]
      interval: 10s
      timeout: 5s
      retries: 10
  minio:
    image: minio/minio:RELEASE.2023-12-02T10-51-33Z
    command: ["server", "/data", "--console-address", ":9001"]
    environment:
      - MINIO_ROOT_USER=minio
      - MINIO_ROOT_PASSWORD=minio123
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - minio:/data
volumes:
  minio:

Notes: Service names act as DNS hostnames on the stack's network; you can scale workers with docker compose up -d --scale worker=3 because worker publishes no host ports.

Exercises

Work through the following exercise to make the concepts stick.

Exercise 1: Compose a local ML API + Redis + Postgres stack

Goal: Create docker-compose.yml that starts three services (api, db, cache), waits for the DB to be ready, and exposes the API on port 8000.

  1. Create docker-compose.yml in your project.
  2. Define services: api (build from ./api), db (postgres:15), cache (redis:7).
  3. Use environment variables to connect api to db and cache. Keep credentials in a .env file.
  4. Add a healthcheck to db and make api depend on db readiness.
  5. Mount a named volume for db data persistence.
  6. Run docker compose up -d and verify the API health endpoint.

Checklist

  • API reachable at localhost:8000
  • DB marked healthy in logs
  • Redis serving (try redis-cli ping if available)
  • Named volume created for Postgres data

Common mistakes and self-check

  • Only using depends_on without healthcheck: The app may start before the DB is ready. Fix: add healthcheck and depends_on with condition: service_healthy.
  • Hardcoding hostnames like localhost inside containers: Inside a compose network, use service names (db, cache), not localhost.
  • Confusing bind mounts and volumes: Bind mounts map local folders; volumes are managed by Docker. For databases, prefer volumes.
  • Exposing too many ports: Only publish what you need (e.g., API). Internal services can stay private on the network.
  • Secrets in compose.yml: Prefer env_file: .env for local work; do not commit real credentials.
  • Not pinning image tags: Use specific versions (postgres:15) to keep environments reproducible.

Self-check tips

  • docker compose ps shows healthy states and ports
  • docker compose logs -f service_name to watch readiness
  • docker compose exec db psql -U mluser -d mldb -c "SELECT 1" to verify DB
  • curl http://localhost:8000/health returns 200 OK

Practical projects

  • Feature Store Sandbox: API + Redis + Postgres with a nightly batch job container
  • Experiment Lab: Jupyter + MLflow + Postgres + MinIO for artifact storage
  • Inference Bench: API + Nginx reverse proxy + Redis, with a test client container for load tests

Learning path

  • Start: Single-service Dockerfile and docker run
  • Then: Two-service Compose (API + DB) with healthchecks
  • Add: Caching layer, volumes, and .env
  • Advance: Profiles, overrides (docker compose -f base.yml -f override.yml up), and scaling workers (see the sketch after this list)
  • Polish: Makefiles or scripts to wrap common compose commands
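
As a taste of overrides: Compose automatically merges compose.override.yml on top of compose.yml, so the base file stays clean while local tweaks live in the override (values below are illustrative):

compose.override.yml:

services:
  api:
    ports:
      - "8000:8000"
    environment:
      - DEBUG=1

For a non-default pair, name the files explicitly: docker compose -f compose.yml -f compose.prod.yml up -d.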

Next steps

  • Convert an existing manual setup into a compose stack
  • Add healthchecks and simplify local onboarding with one command
  • Introduce a test container that runs integration tests against the stack

Mini challenge

Add a fourth service named tester that waits for api:8000/health to return OK, then runs a simple HTTP check and exits with code 0. Make api depend only on db; tester should depend on api readiness. Hint: use a tiny curl image and a retry loop in the command.
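
A starting sketch for the tester (curlimages/curl is one tiny option; its default entrypoint is curl, hence the entrypoint override, and you should add a retry cap yourself):

  tester:
    image: curlimages/curl:8.5.0
    depends_on:
      api:
        condition: service_started
    entrypoint: ["/bin/sh", "-c"]
    # Poll until /health answers, then exit 0; bound the loop for a complete solution.
    command: ["until curl -fsS http://api:8000/health; do sleep 2; done"]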

Quick test and progress

You can take the Quick Test below to check your understanding. It’s available to everyone. Only logged-in users will have their progress saved.

Practice Exercises

1 exercise to complete: Exercise 1 above (API + Redis + Postgres stack).

Expected Output

API responds 200 on /health; Postgres shows healthy; Redis responds PONG; a named volume for Postgres exists.

Docker Compose For Local Stacks — Quick Test

Test your knowledge with 10 questions. Pass with 70% or higher.

