Who this is for
Platform engineers, DevOps practitioners, and backend engineers who need to run apps reliably on Kubernetes, ship zero-downtime updates, and expose services inside and outside the cluster.
Prerequisites
- Comfort with containers (images, ports) and basic CLI
- kubectl installed and access to any Kubernetes cluster (local or remote)
- Basic YAML reading/writing
Why this matters
Real platform work includes:
- Deploying and updating workloads without outages
- Scaling up/down under load
- Routing traffic inside the cluster and to the outside world
- Debugging failing pods quickly
Pods, Deployments, and Services are the foundation for all of this.
Concept explained simply
- Pod: the smallest runnable unit in Kubernetes. Think of it as a single "app instance" (possibly with helper sidecars) sharing the same network namespace and storage volumes.
- Deployment: a manager that maintains the desired number of Pod replicas and handles rolling updates and rollbacks.
- Service: a stable virtual IP and DNS that load-balances traffic to matching Pods via a label selector. Types: ClusterIP (internal), NodePort (node-level port), LoadBalancer (cloud/external LB).
Mental model
Imagine a restaurant:
- Pods are kitchen stations (they prepare dishes). If a station goes down (a Pod dies), it is recreated.
- Deployment is the kitchen manager who ensures a certain number of stations are staffed, and swaps staff gradually during changes (rolling update).
- Service is the front desk routing each order to any station that can cook it (label selector).
Core objects in 5 minutes
- Labels and selectors connect everything. If labels don’t match, Services won’t find Pods and Deployments won’t manage them (a minimal sketch follows this list).
- Deployments own ReplicaSets which own Pods. You rarely manage ReplicaSets directly.
- Services route to Pod IPs. DNS names like my-svc.my-namespace.svc.cluster.local resolve to the Service.
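To make the first point concrete, here is a minimal sketch (the demo names are illustrative, not part of the worked examples below): the Service's selector must equal the labels in the Deployment's Pod template, or the Service finds nothing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo            # must match the template labels below
  template:
    metadata:
      labels:
        app: demo          # Pods are created with this label
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo              # must equal the Pod labels above, or the Service has no endpoints
  ports:
    - port: 80
      targetPort: 80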
Worked examples
Example 1: A simple Pod
Create a Pod running nginx and verify it responds.
pod.yaml
{"apiVersion":"v1","kind":"Pod","metadata":{"name":"hello-pod","labels":{"app":"hello"}},"spec":{"containers":[{"name":"web","image":"nginx:1.25","ports":[{"containerPort":80}],"readinessProbe":{"httpGet":{"path":"/","port":80},"initialDelaySeconds":5,"periodSeconds":5}}]}}Commands
kubectl apply -f pod.yaml kubectl get pods -o wide kubectl describe pod hello-pod kubectl logs hello-pod # Test locally via port-forward kubectl port-forward pod/hello-pod 8080:80 # Open http://localhost:8080
Key checks: STATUS should be Running; READY should show 1/1; port-forward should serve the default nginx page.
Example 2: Deployment with rolling update
deployment.yaml
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"name":"hello-deploy","labels":{"app":"hello"}},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"hello"}},"template":{"metadata":{"labels":{"app":"hello"}},"spec":{"containers":[{"name":"web","image":"nginx:1.25","ports":[{"containerPort":80}],"resources":{"requests":{"cpu":"50m","memory":"64Mi"},"limits":{"cpu":"200m","memory":"128Mi"}}}]}}}}Commands
kubectl apply -f deployment.yaml kubectl get deploy,rs,pods -l app=hello # Update image with a change-cause for history kubectl annotate deployment hello-deploy kubernetes.io/change-cause="Upgrade to nginx 1.27" --overwrite kubectl set image deployment/hello-deploy web=nginx:1.27 kubectl rollout status deployment/hello-deploy kubectl rollout history deployment/hello-deploy # Roll back if needed kubectl rollout undo deployment/hello-deploy
Result: Pods update gradually; the Deployment maintains desired replicas.
Example 3: Service types
ClusterIP service
{"apiVersion":"v1","kind":"Service","metadata":{"name":"hello-svc"},"spec":{"type":"ClusterIP","selector":{"app":"hello"},"ports":[{"port":80,"targetPort":80}]}}NodePort service
{"apiVersion":"v1","kind":"Service","metadata":{"name":"hello-nodeport"},"spec":{"type":"NodePort","selector":{"app":"hello"},"ports":[{"port":80,"targetPort":80,"nodePort":30080}]}}Commands
kubectl apply -f service-clusterip.yaml # if you saved the first as service-clusterip.yaml kubectl get svc # Internal test: run a temporary pod as a client kubectl run tmp --rm -it --image=busybox:1.36 -- sh -c "wget -qO- http://hello-svc" # NodePort test (from your machine if nodes reachable): curl http://:30080
LoadBalancer works similarly but requires an external/cloud LB integration.
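For reference, a LoadBalancer Service differs only in its type. A minimal sketch, assuming your cluster has an LB integration (hello-lb is an illustrative name):
apiVersion: v1
kind: Service
metadata:
  name: hello-lb           # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
After applying it, kubectl get svc hello-lb should eventually show an EXTERNAL-IP; on clusters without an LB integration it stays <pending>.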
Basic workflow you will repeat
- Write: Create YAML for a Pod/Deployment/Service.
- Apply: kubectl apply -f file.yaml
- Verify: kubectl get/describe, watch rollout status.
- Test: port-forward or curl from a test pod.
- Update: change the image or replicas, apply again.
- Rollback: kubectl rollout undo if something breaks.
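Using the names from Example 2, that loop condenses to a handful of commands (a sketch; substitute your own files, names, and image tags):
kubectl apply -f deployment.yaml                           # write + apply
kubectl rollout status deployment/hello-deploy             # verify
kubectl port-forward deployment/hello-deploy 8080:80       # test locally
kubectl set image deployment/hello-deploy web=nginx:1.27   # update
kubectl rollout undo deployment/hello-deploy               # rollback if something breaks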
Exercises
Do these on any cluster. Keep your YAMLs in a folder for reuse.
Exercise 1: Echo app behind a ClusterIP
- Create a Deployment with 2 replicas using the image hashicorp/http-echo:0.2.3, serving "hello from k8s" on port 5678.
- Expose it with a ClusterIP Service on port 80 with targetPort 5678.
- Verify via a temporary client pod.
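If you get stuck, one possible starting point is sketched below (echo-deploy and echo-svc are illustrative names; hashicorp/http-echo takes its text via the -text argument and listens on 5678 by default):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-deploy        # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo:0.2.3
          args: ["-text=hello from k8s"]
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: echo-svc           # illustrative name
spec:
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 5678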
Exercise 2: Rolling update and rollback
- Update the echo image tag to an invalid one and watch the rollout fail.
- Inspect what broke and roll back safely.
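One possible sequence, assuming the echo-deploy Deployment sketched for Exercise 1 (the image tag below is deliberately invalid):
kubectl set image deployment/echo-deploy echo=hashicorp/http-echo:does-not-exist
kubectl rollout status deployment/echo-deploy    # stalls: new Pods never become Ready
kubectl get pods -l app=echo                     # look for ErrImagePull / ImagePullBackOff
kubectl describe pod POD                         # Events show the failed image pull
kubectl rollout undo deployment/echo-deploy      # back to the last working revision
kubectl rollout status deployment/echo-deploy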
Checklist for both exercises:
- Deployment READY shows desired/available match
- Service Endpoints populated (not empty)
- Client requests return expected text
Common mistakes and self-check
- Selector/label mismatch: Service has no endpoints. Self-check: kubectl get endpoints hello-svc -o yaml; ensure labels in template.metadata.labels match service.spec.selector (a command sketch follows this list).
- Forgetting targetPort: Traffic reaches Service but not Pods. Self-check: kubectl describe svc and verify targetPort equals containerPort.
- Rolling update stuck: New Pods not Ready. Self-check: kubectl describe pod to inspect readiness/liveness, and kubectl rollout status deployment/NAME.
- Testing only from your laptop: ClusterIP isn’t reachable externally. Self-check: use kubectl run tmp --rm -it to curl inside the cluster.
- Editing Pods managed by a Deployment: Changes get reverted. Self-check: make changes in the Deployment template, not the Pods.
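A quick way to run the selector self-check from the first bullet, using the hello-svc example from above:
kubectl get endpoints hello-svc                           # empty ENDPOINTS means no matching Pods
kubectl get svc hello-svc -o jsonpath='{.spec.selector}'  # what the Service selects
kubectl get pods --show-labels                            # compare against the Pod labels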
Quick debug commands
kubectl get all -l app=YOUR_LABEL
kubectl describe deployment/NAME
kubectl describe pod/POD
kubectl logs POD -c CONTAINER --previous
kubectl get endpoints NAME -o wide
kubectl get events --sort-by=.lastTimestamp
Practical projects
- Canary rollout: Two Deployments (v1 and v2) with distinct labels (version=v1/v2). Use two Services (svc-v1, svc-v2) and switch an Ingress or a routing layer later; for now, validate both respond via separate Services.
- Blue/Green switch: Keep green idle, then change the selector of a single Service to point from version=blue to version=green and measure switch time.
- Multi-port app: One Pod with two containers (web and metrics). Expose only the web via Service; verify metrics stays internal.
Learning path
- Next core objects: Namespaces, ConfigMaps, Secrets
- Health: liveness/readiness/startup probes
- Traffic: Ingress and IngressClass
- Workload patterns: StatefulSets and DaemonSets
- Packaging config: Helm/Kustomize
- Security: RBAC basics, ServiceAccounts
Next steps
- Finish the exercises and mini challenge below
- Take the quick test to check gaps
- Convert your YAMLs into Kustomize or Helm charts
Mini challenge
Create two Deployments (blue and green) serving different text. Create one ClusterIP Service. Start by selecting version=blue. Verify traffic. Then switch the Service selector to version=green and confirm traffic changes without downtime. Measure the switchover by curling 10 times during the change.
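A sketch of the switch and the measurement, assuming the Service is named hello-svc and both Deployments carry app=hello plus a version label (adjust names to whatever you used):
# Switch the Service selector from blue to green
kubectl patch service hello-svc -p '{"spec":{"selector":{"app":"hello","version":"green"}}}'
# Measure: curl the Service 10 times from inside the cluster during the change
kubectl run tmp --rm -it --image=busybox:1.36 -- sh -c \
  'for i in $(seq 1 10); do wget -qO- http://hello-svc; sleep 1; done'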
Quick test
Take the test below to lock in the concepts. Everyone can take it for free; if you are logged in, your progress will be saved automatically.