Why this matters
As a Platform Engineer, you turn code into running services quickly and safely. Solid build-and-deploy pipelines mean:
- Fewer production incidents via repeatable, tested steps.
- Faster releases with caching, parallel jobs, and automated gates.
- Confidence to ship small changes often.
Real tasks you’ll do on the job
- Design a pipeline that builds, tests, and creates versioned Docker images.
- Promote artifacts across environments with approvals and checks.
- Roll back quickly when a deploy fails.
- Add security scans and secret-safe deployments.
- Optimize build time using dependency and Docker layer caching.
Who this is for
- Platform and DevOps engineers building CI/CD foundations.
- Backend engineers owning service delivery pipelines.
- Team leads defining release processes.
Prerequisites
- Comfortable with Git (branches, PRs).
- Basic Docker knowledge (build, tag, push).
- Familiar with one CI system (e.g., GitHub Actions, GitLab CI, Jenkins) and one runtime (Kubernetes or container platform).
- Access to a container registry and a non-prod cluster/environment for practice.
Concept explained simply
A build-and-deploy pipeline is a reliable assembly line for code:
- Build: compile the code and run tests.
- Package: produce an artifact (e.g., Docker image) with a clear version.
- Deploy: roll out the artifact to an environment using repeatable steps and checks.
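At its core, the whole flow is a handful of commands run in a fixed order; CI tooling adds the triggers, versioning, gates, and checks around them. A rough shell sketch, with the registry, namespace, and deployment names as placeholders:

# Build: install dependencies and run tests
npm ci && npm test
# Package: one immutable artifact per commit
docker build -t registry.example.com/service:"$GIT_SHA" .
docker push registry.example.com/service:"$GIT_SHA"
# Deploy: repeatable rollout plus a health check
kubectl -n app set image deploy/service service=registry.example.com/service:"$GIT_SHA"
kubectl -n app rollout status deploy/service --timeout=120s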
Mental model
Think of a pipeline as a guarded conveyor belt:
- Gates ensure only healthy items move ahead (tests, scans, approvals).
- Tags identify what’s moving (semantic version or commit SHA).
- Conveyors can run in parallel (matrix builds, concurrent jobs).
- Tracks are environments (dev → staging → prod) with promotion rules.
Core building blocks
- Triggers: when it runs (push/PR/tag/schedule/manual).
- Runners/agents: where steps execute.
- Stages: ordered groups like build, test, package, deploy.
- Artifacts: outputs passed to later stages (image, manifest, SBOM).
- Caching: reusing dependencies and Docker layers to speed builds (a layer-caching sketch follows this list).
- Environments: dev/staging/prod with gates and rollbacks.
- Secrets: credentials for registries, clusters, and cloud APIs.
- Observability: logs, metrics, and notifications on status.
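As a concrete example of Docker layer caching, buildx can persist layers between CI runs. A minimal sketch for GitHub Actions using the GitHub cache backend, assuming a registry login step has already run (the action versions and image tag are illustrative):

      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}/service:${{ github.sha }}
          cache-from: type=gha          # reuse layers cached by previous runs
          cache-to: type=gha,mode=max   # cache all layers, not just the final stage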
Worked example 1: GitHub Actions to Kubernetes
Goal: build, test, push a Docker image, deploy to staging using kubectl.
Pipeline YAML
name: service-ci-cd
on:
  push:
    branches: [ main ]
  pull_request:
concurrency:
  group: service-${{ github.ref }}
  cancel-in-progress: true
jobs:
  build-test:
    runs-on: ubuntu-latest
    outputs:
      image: ${{ steps.build.outputs.image }}
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node
        uses: actions/setup-node@v4
        with: { node-version: '20', cache: 'npm' }
      - run: npm ci
      - run: npm test -- --ci
      - name: Build image
        id: build
        run: |
          IMAGE=ghcr.io/${{ github.repository }}/service:${{ github.sha }}
          docker build -t "$IMAGE" .
          echo "IMAGE=$IMAGE" >> "$GITHUB_ENV"      # for later steps in this job
          echo "image=$IMAGE" >> "$GITHUB_OUTPUT"   # for the deploy job via job outputs
      - name: Login & Push
        run: |
          echo "${{ secrets.REGISTRY_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker push "$IMAGE"
  deploy-staging:
    needs: build-test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    environment:
      name: staging
    env:
      IMAGE: ${{ needs.build-test.outputs.image }}
    steps:
      - name: Set up kubeconfig
        run: |
          mkdir -p "$HOME/.kube"
          echo "${{ secrets.KUBE_CONFIG_STAGING }}" | base64 -d > "$HOME/.kube/config"
      - name: Deploy
        run: |
          kubectl -n app set image deploy/service service="$IMAGE"
          kubectl -n app rollout status deploy/service --timeout=120s
- Highlights: npm dependency caching, a unique image tag (commit SHA), concurrency to avoid overlapping runs, the image tag handed to the deploy job as a job output, and rollout status to fail fast.
- Roll back: if rollout status fails, the previous ReplicaSet remains in place; you can add an automatic rollback command in a post-failure step, as sketched below.
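A minimal sketch of such a step for the workflow above, assuming the same deployment and namespace:

      - name: Roll back on failed rollout
        if: failure()
        run: |
          # Revert to the previous ReplicaSet revision; ignore errors if there is nothing to undo
          kubectl -n app rollout undo deploy/service || true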
Worked example 2: GitLab CI with approvals
Goal: build once, promote the same artifact to staging after a manual approval.
.gitlab-ci.yml
stages: [build, test, package, deploy]

variables:
  IMAGE: "$CI_REGISTRY_IMAGE/service:$CI_COMMIT_SHORT_SHA"

build:
  stage: build
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin
    - docker build -t $IMAGE .
    - docker push $IMAGE
    - echo "IMAGE=$IMAGE" > image.env   # record the exact tag for later stages
  artifacts:
    reports:
      dotenv: image.env

test:
  stage: test
  needs: ["build"]
  script:
    - npm ci
    - npm test -- --ci

package-sbom:
  stage: package
  needs: ["build"]
  script:
    - syft $IMAGE -o json > sbom.json
  artifacts:
    paths: ["sbom.json"]

deploy-staging:
  stage: deploy
  needs: ["test", "package-sbom"]
  when: manual
  environment:
    name: staging
  script:
    - mkdir -p $HOME/.kube
    - echo "$KUBE_CONFIG_STAGING" | base64 -d > $HOME/.kube/config
    - kubectl -n app set image deploy/service service=$IMAGE
    - kubectl -n app rollout status deploy/service --timeout=120s
- Highlights: build once, promote with a manual gate, SBOM as an artifact, environment-specific kubeconfig via a secret.
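The same pattern extends to production: a further manual job reuses the exact same $IMAGE with no rebuild. A sketch, where the job name and the KUBE_CONFIG_PROD variable are assumptions:

deploy-prod:
  stage: deploy
  when: manual
  environment:
    name: production
  script:
    - mkdir -p $HOME/.kube
    - echo "$KUBE_CONFIG_PROD" | base64 -d > $HOME/.kube/config   # assumed CI/CD variable
    - kubectl -n app set image deploy/service service=$IMAGE      # same artifact, no rebuild
    - kubectl -n app rollout status deploy/service --timeout=120s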
Worked example 3: Jenkins declarative with parallel tests and rollback
Jenkinsfile
pipeline {
  agent any
  options { disableConcurrentBuilds() }
  environment {
    IMAGE = "registry.example.com/service:${env.GIT_COMMIT}"
    // Username/password credential from the Jenkins credential store (ID is an example);
    // exposes REGISTRY_CREDS_USR and REGISTRY_CREDS_PSW to shell steps.
    REGISTRY_CREDS = credentials('registry-creds')
  }
  stages {
    stage('Checkout') { steps { checkout scm } }
    stage('Build') { steps { sh 'docker build -t ${IMAGE} .' } }
    stage('Test') {
      parallel {
        stage('Unit') { steps { sh 'npm ci && npm test -- --ci' } }
        stage('Lint') { steps { sh 'npm run lint' } }
      }
    }
    stage('Push') {
      steps {
        sh 'echo "$REGISTRY_CREDS_PSW" | docker login registry.example.com -u "$REGISTRY_CREDS_USR" --password-stdin'
        sh 'docker push ${IMAGE}'
      }
    }
    stage('Deploy Staging') {
      when { branch 'main' }
      steps {
        // KUBE_CONFIG_STAGING is expected as a base64-encoded secret, e.g. injected via a credentials binding
        sh 'mkdir -p $HOME/.kube && echo "$KUBE_CONFIG_STAGING" | base64 -d > $HOME/.kube/config'
        sh 'kubectl -n app set image deploy/service service=${IMAGE}'
        sh 'kubectl -n app rollout status deploy/service --timeout=120s'
      }
      post {
        failure {
          echo 'Deploy failed - attempting kubectl rollout undo'
          sh 'kubectl -n app rollout undo deploy/service || true'
        }
      }
    }
  }
}
- Highlights: parallel test stages, serialized builds via disableConcurrentBuilds, registry login from stored Jenkins credentials, and explicit rollback on failure.
Common mistakes and how to self-check
- Using latest tags only: Self-check: Is every deploy tied to an immutable version (commit SHA or semver)?
- Building multiple times across stages: Self-check: Do later stages reuse the exact artifact from build?
- Storing secrets in plain text: Self-check: Are all secrets coming from a secure secret store/CI vault?
- No rollback plan: Self-check: Can you revert to the previous version with a single command?
- No caching: Self-check: Are dependency caches or Docker layer caches configured?
- Skipping health checks: Self-check: Does the deploy step wait for rollout/health before succeeding?
- Overlapping deployments: Self-check: Is concurrency or a deploy lock configured?
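The concurrency block in Worked example 1 is one way to enforce a deploy lock in GitHub Actions; in GitLab the equivalent is a resource_group on the deploy job. A minimal sketch (the group name is illustrative):

deploy-staging:
  resource_group: staging   # only one job in this resource group runs at a time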
Exercises
Do these to internalize the flow. They mirror the graded exercises.
Exercise 1 — Minimal build & deploy to staging
- Build and tag a Docker image with the commit SHA.
- Push to your container registry using CI-provided credentials.
- Update a Kubernetes Deployment image in a staging namespace.
- Wait for rollout and fail if unhealthy.
Success looks like: image exists in registry; staging deployment updated; rollout succeeded.
Exercise 2 — Add gates, caching, and rollback
- Add dependency caching to speed builds.
- Require a manual approval or environment gate before staging deploy.
- On deploy failure, automatically roll back to the previous ReplicaSet.
Success looks like: cache hits on subsequent runs; deploy waits for approval; failed rollout triggers rollback command.
Checklist: Pipeline readiness
- [ ] Immutable artifact versioning (SHA/semver)
- [ ] Build once, promote across environments
- [ ] Secrets pulled from secure store
- [ ] Caching enabled (deps and image layers)
- [ ] Health-checked deploy with timeouts
- [ ] Rollback commands scripted
- [ ] Concurrency control for deploys
- [ ] Notifications/visibility on outcomes
Learning path
- Start: Create a simple CI job that builds and tests on pull requests.
- Add packaging: produce and push a versioned Docker image.
- Introduce environments: deploy to staging automatically on main.
- Add gates: manual approval or policy checks before staging/prod.
- Harden: secrets management, SBOM, vulnerability scan, and rollback (a scan sketch follows this list).
- Optimize: caching, parallelism, and matrix builds.
- Scale: templates/shared libraries for multiple services.
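As one example of the hardening step, a vulnerability scan can gate the pipeline on findings. A minimal sketch using Trivy; the tool choice and severity threshold are assumptions, not requirements of this module:

# Fail the job if the pushed image contains HIGH or CRITICAL findings
trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE"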
Practical projects
- Monorepo microservice pipeline: matrix build for services/* with shared template, per-service caching, and conditional deploys.
- Blue/Green on staging: deploy the new ReplicaSet alongside the old, switch the Service on success, and roll back on failure (a selector-switch sketch follows this list).
- One-button promotion: tag a release to trigger a promotion job that reuses the exact artifact and posts deployment notes.
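For the Blue/Green project, the traffic switch usually comes down to repointing the Service selector once the new version is healthy. A rough kubectl sketch, assuming the Deployments carry a version label (the blue/green names and label keys are illustrative):

# Wait for the new "green" Deployment to become healthy alongside the old "blue" one
kubectl -n app rollout status deploy/service-green --timeout=120s
# Switch traffic by repointing the Service selector at the green pods
kubectl -n app patch service service -p '{"spec":{"selector":{"app":"service","version":"green"}}}'
# To roll back, patch the selector back to "version": "blue"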
Mini challenge
Modify a pipeline so that:
- It only deploys when a git tag v* is pushed.
- It attaches the SBOM as a release asset.
- It posts the image digest and environment to the job summary.
Hint
- Use tag-based triggers and a step to compute and store the image digest.
- Make the deploy job depend on the build job artifacts only.
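A rough sketch of those pieces in GitHub Actions terms, following the names from Worked example 1 (treat them as assumptions for other setups):

on:
  push:
    tags: [ 'v*' ]   # run the workflow only when a version tag is pushed

# ...inside the build job, after docker push:
      - name: Record image digest in the job summary
        run: |
          DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' "$IMAGE")
          echo "Image digest: $DIGEST (environment: staging)" >> "$GITHUB_STEP_SUMMARY"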
Next steps
- Add policy-as-code checks (format, lint, IaC scans) early in the pipeline.
- Package deploys with Helm or Kustomize to standardize rollouts (see the Helm sketch below).
- Instrument deployments with metrics (success/failure rate, lead time) to drive improvements.
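For instance, a Helm-based deploy step can replace the raw kubectl commands and gives you an automatic rollback when the release does not become healthy. A minimal sketch, assuming a chart at ./chart with an image.tag value (both assumptions):

# --atomic rolls the release back automatically if the upgrade fails or times out
helm upgrade --install service ./chart \
  --namespace app \
  --set image.tag="$GIT_SHA" \
  --atomic --wait --timeout 2m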