Why this matters
As an API Engineer, you need to package services so they run the same in dev, CI, and production. Containers give you:
- Consistent runtime: the same image runs everywhere.
- Fast onboarding: one command to run your API locally.
- Reproducible builds: versioned images, pinned dependencies.
- Deploy-ready artifacts: publish images to a registry and ship.
Who this is for
- API Engineers and Backend Developers new to containers.
- Engineers who can run services locally but want portable builds.
- Anyone preparing to deploy APIs with CI/CD and orchestration.
Prerequisites
- Basic command-line comfort (bash, PowerShell, or similar).
- Ability to run a simple API in one language (Node.js, Python, Go, etc.).
- Docker installed locally (or an equivalent OCI-compatible runtime).
Containerization explained simply
Think of a container image as a frozen recipe (ingredients + steps). A container is a running instance of that recipe. The image contains your app code, dependencies, and minimal OS libraries. You run many containers from the same image.
Mental model
- Layers: Each Dockerfile instruction produces a cached layer. Layers stack to make an image.
- Image: An immutable set of layers with a tag like my-api:1.0.0.
- Container: A lightweight runtime with an isolated process, a filesystem (from the image), a network namespace, and resource limits.
- Build context: The files you send to the build engine (usually your project folder). Manage it with .dockerignore.
Jargon buster
- Registry: A server that stores and serves images (e.g., Docker Hub or a company registry).
- Tag: A human-friendly label for an image version (avoid latest for production).
- Runtime: The engine that runs containers (e.g., Docker Engine, containerd).
Key concepts you will use
Images and tags
- Reference images as repository:tag, e.g., node:18-alpine.
- Pin base images to explicit versions for reproducibility.
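For stricter reproducibility you can also pin by digest; a minimal sketch, with the digest value left as a placeholder:
# Pin by tag (good) or by digest (strictest; the digest below is a placeholder)
FROM node:18-alpine
# FROM node:18-alpine@sha256:&lt;digest-of-the-image-you-verified&gt;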
Dockerfile basics
# Minimal example for a Node.js API
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
- FROM: the base image.
- WORKDIR: sets the working directory.
- COPY vs ADD: prefer COPY for clarity; ADD also fetches URLs and extracts archives.
- RUN: executes commands at build time (creates a new layer).
- CMD and ENTRYPOINT: the default runtime command. ENTRYPOINT is the executable; CMD provides the default args.
- ENV and ARG: set env vars (runtime) and build-time args (build only).
- HEALTHCHECK: tells the platform how to verify container health.
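To see how these instructions fit together, here is a small sketch that extends the minimal Dockerfile above; the port, entry file, and healthcheck URL are assumptions, not requirements:
# Sketch combining ARG/ENV, HEALTHCHECK, and ENTRYPOINT/CMD (values are illustrative)
FROM node:18-alpine
WORKDIR /app
# ARG is available only while building; ENV persists into the running container
ARG APP_VERSION=dev
ENV PORT=3000
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
# BusyBox wget ships with alpine, so it can probe the app without extra packages
HEALTHCHECK --interval=30s --timeout=3s CMD wget -qO- http://localhost:3000/ || exit 1
# ENTRYPOINT is the executable; CMD supplies the default argument
ENTRYPOINT ["node"]
CMD ["index.js"]
At docker run time, anything after the image name replaces CMD but still runs through the node ENTRYPOINT.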
Build context and .dockerignore
Everything in your build context is sent to the daemon. Use .dockerignore to exclude node_modules, test data, VCS folders, and secrets to keep builds fast and images clean.
# .dockerignore
node_modules
.git
.env
*.log
dist
coverage
Multi-stage builds
Build with heavy toolchains in one stage, copy only the final artifacts into a small runtime stage. This reduces image size and attack surface.
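A minimal sketch of the pattern for a Node.js project, assuming a build script that emits dist/ (Example 2 below shows a fuller version with a non-root user):
# Stage 1: build with the full toolchain (assumes "npm run build" outputs to dist/)
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only production dependencies and the built output
FROM node:18-alpine AS runner
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]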
Volumes and mounts
- Bind mount: map a host folder into the container (great for local dev hot-reload).
- Named volume: managed storage for persistent data (databases, caches).
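Both forms use the -v flag; a short sketch with the plain Docker CLI (the volume name and paths are only examples):
# Bind mount: the host's current directory appears at /app inside the container
docker run --rm -v "$PWD":/app -w /app node:18-alpine ls /app

# Named volume: Docker manages the storage, and data survives container removal
docker volume create api-data
docker run --rm -v api-data:/data alpine sh -c 'echo hello > /data/greeting.txt'
docker run --rm -v api-data:/data alpine cat /data/greeting.txt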
Networking
- Default bridge network: map ports with -p host:container (e.g., -p 8080:3000).
- EXPOSE documents ports but does not publish them.
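For example, using an image tag from the examples later in this lesson:
# EXPOSE alone does not publish a port; -p performs the actual mapping
docker run --rm -p 8080:3000 my-api:dev
curl http://localhost:8080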
Security essentials
- Use minimal base images when practical.
- Create a non-root user and run the app under it.
- Never bake secrets into images. Inject at runtime (env vars, files, or secret managers).
- Pin versions and update regularly.
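One way to inject secrets at runtime with plain docker run; the variable name and env file are hypothetical, and the file should be listed in .dockerignore:
# Pass secrets at run time instead of baking them into an image layer
# (API_KEY and secrets.env are illustrative names)
docker run --rm -p 8080:3000 \
  -e API_KEY="$API_KEY" \
  --env-file ./secrets.env \
  my-api:1.0.0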
Resource limits
- CPU/memory limits avoid noisy-neighbor issues: e.g., --cpus 1 --memory 512m.
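For example, reusing the image tag from the lifecycle commands below:
# Cap the container at one CPU and 512 MiB of memory
docker run --rm -p 8080:3000 --cpus 1 --memory 512m my-api:dev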
Lifecycle and logs
# Build, run, stop, inspect
docker build -t my-api:dev .
docker run --name myapi -p 8080:3000 my-api:dev
docker logs -f myapi
docker stop myapi && docker rm myapi
Worked examples
Example 1: Containerize a simple Express API
- Create files:
# package.json
{
"name": "hello-api",
"version": "1.0.0",
"main": "index.js",
"scripts": {"start": "node index.js"},
"dependencies": {"express": "^4.18.2"}
}
// index.js
const express = require('express');
const app = express();
app.get('/', (req, res) => res.json({status: 'ok'}));
const port = process.env.PORT || 3000;
app.listen(port, () => console.log('Listening on ' + port));
# Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
# Build and run
docker build -t hello-api:basic .
docker run --rm -p 8080:3000 hello-api:basic
# In another terminal
curl http://localhost:8080
# Expected: {"status":"ok"}
Example 2: Multi-stage build + non-root user
# Dockerfile (multi-stage)
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
FROM node:18-alpine AS runner
WORKDIR /app
# Create non-root user
RUN addgroup -S app && adduser -S app -G app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
USER app
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s --retries=3 CMD node -e "require('http').get('http://localhost:3000',res=>{process.exit(res.statusCode===200?0:1)})" || exit 1
CMD ["node", "index.js"]
docker build -t hello-api:secure .
docker run --rm -p 8080:3000 hello-api:secure
# Verify non-root
# (In another terminal)
docker ps
# Get container id and run:
docker exec -it <id> sh -c "id && whoami"
Example 3: Local dev with bind mounts
# Run with live code (container sees host changes)
docker run --rm -p 8080:3000 -v $PWD:/app -w /app node:18-alpine sh -c "npm i && npm start"
# Edit index.js on the host, then restart the command (or use a watcher such as nodemon).
# The container sees the change immediately; no image rebuild is needed.
Example 4: Smaller images and caching
- Order Dockerfile steps to maximize cache: copy only the package files before npm ci.
- Use .dockerignore to skip large folders.
- Prefer npm ci over npm install for deterministic installs.
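To check the effect of these changes, compare image sizes and per-layer costs; the tags assume the earlier examples were built:
# Compare tags of the same app
docker images hello-api

# Per-layer sizes show where the bytes come from
docker history hello-api:secure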
Practice: do it yourself
Complete these exercises. They match the items in the Exercises panel on this page.
- Exercise 1: Containerize a small Express API, expose port 3000, and return JSON from /.
- Exercise 2: Convert to a multi-stage build, run as a non-root user, and add a healthcheck.
- Checklist:
  - Build succeeds without sending unnecessary files (check your .dockerignore).
  - Container responds on the mapped host port.
  - Image tag is explicit (no implicit latest in production).
  - Container runs as a non-root user.
  - Healthcheck reports healthy.
Common mistakes & self-check
- Using latest everywhere: Pin versions. Self-check: Can you rebuild the exact same image next week?
- No .dockerignore: Bloats the build context. Self-check: Does docker build upload thousands of files?
- Root user: Higher risk. Self-check: Does id show uid=0 inside the container?
- Copying secrets into the image: Avoid. Self-check: Does the image history or any layer contain .env or keys?
- Misusing ADD: Prefer COPY unless you need extraction or URL fetch.
- No healthcheck: Harder to detect failures. Add HEALTHCHECK.
- Port confusion: EXPOSE does not publish. Use -p host:container.
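A few commands that answer these self-checks directly; the container and image names follow the earlier examples:
# Is the app running as root? uid=0 means yes
docker exec myapi id

# Did a secret or .env file end up in a layer?
docker history --no-trunc hello-api:basic | grep -i env

# Which ports are documented on the image vs actually published on the container?
docker inspect --format '{{json .Config.ExposedPorts}}' hello-api:basic
docker port myapi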
Practical projects
- Containerize an existing microservice with a deterministic build and non-root user.
- Create a multi-stage Dockerfile that compiles TypeScript to dist/ and ships only runtime files.
- Build a minimal healthcheck endpoint and wire a Docker HEALTHCHECK to it.
Learning path
- Containerization Basics (this lesson): images, Dockerfiles, running containers, healthchecks.
- Service composition: define multi-container local setups with a compose file.
- Registries and CI/CD: build, tag, push, and promote images across environments.
- Observability: structured logs and metrics from containers.
- Security and scanning: base image updates, vulnerability scanning, secrets handling.
Mini challenge
Choose a minimal base (e.g., node:18-alpine), convert your API to multi-stage, run as non-root, add a healthcheck, and keep the final image under a reasonable size for your stack. Compare build times and image size before vs after and note the biggest savings.
Next steps
- Take the Quick Test to confirm you can explain and apply the basics.
- Then implement the Practical projects on a real API.
Note: The test is available to everyone. If you are logged in, your progress is saved automatically.