Ingress And Networking Basics

Learn Ingress And Networking Basics for free with explanations, exercises, and a quick test (for Platform Engineers).

Published: January 23, 2026 | Updated: January 23, 2026

Why this matters

As a Platform Engineer, you make applications reachable, secure, and reliable. You will:

  • Expose internal Services to the outside world with stable URLs and TLS.
  • Route traffic by host or path to multiple backends.
  • Control what can talk to what with basic network policies.
  • Preserve client IPs when needed for logging and rate limiting.
  • Debug 404/502 errors that often come from routing or Service misconfigurations.

Concept explained simply

Core pieces:

  • Pod: One or more containers running together with a single Pod IP; that IP is not stable across restarts.
  • Service: Stable virtual IP that routes to Pods. Types:
    • ClusterIP (default): Internal-only. Used with Ingress for HTTP(S).
    • NodePort: Opens a port on each node. Useful for simple external access.
    • LoadBalancer: Gets a cloud load balancer. Direct external access.
  • Ingress: HTTP(S) rules (host/path) that route to Services. Needs an Ingress Controller running in your cluster (e.g., NGINX, Traefik).
  • Ingress Controller: The actual proxy that reads Ingress objects and does the routing (a quick install check follows this list).
  • NetworkPolicy: Rules that allow or deny Pod-to-Pod/namespace traffic (L3/L4).
  • TLS: Usually terminates at the Ingress Controller, which then talks to Services over HTTP or HTTPS.
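
An Ingress object is inert without a controller, so it is worth confirming one is installed and noting its class name before writing rules. A quick check might look like this (the ingress-nginx namespace is a common default, not a guarantee):

# List installed IngressClasses; the NAME column is what goes in spec.ingressClassName
kubectl get ingressclass

# Confirm the controller Pods are running (the namespace may differ in your cluster)
kubectl get pods -n ingress-nginx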

Mental model

Think of a layered path:

  1. Client resolves a DNS name (e.g., app.example.com) to your external load balancer.
  2. Load balancer forwards to the Ingress Controller running in your cluster.
  3. Ingress Controller matches host/path rules and picks a Service.
  4. Service picks a healthy Pod endpoint and sends traffic.

If something breaks, walk this path from outside-in and verify each hop.
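
As a sketch, that outside-in walk maps to a few commands (the web namespace, demo Ingress, and blue-svc Service reuse names from the worked examples below; <LB_IP> is a placeholder for the controller's external address):

# 1. Does the name resolve to your external load balancer?
nslookup app.example.com

# 2. Does the Ingress exist, reference the right class, and have an address assigned?
kubectl get ingress -n web
kubectl describe ingress demo -n web

# 3. Does the Service have ready endpoints? An empty list points to a selector or readiness problem.
kubectl get endpoints blue-svc -n web

# 4. Can you reach the app through the controller with the expected Host header?
curl -k -H "Host: app.example.com" https://<LB_IP>/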

Worked examples

Example 1: Path-based routing with TLS

Goal: Route /blue and /green on the same host to different Services with TLS.

# Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: web
---
# Deployments
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
  namespace: web
spec:
  replicas: 2
  selector:
    matchLabels: { app: blue }
  template:
    metadata:
      labels: { app: blue }
    spec:
      containers:
        - name: app
          image: hashicorp/http-echo
          args: ["-text=blue"]
          ports: [{ containerPort: 5678 }]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
  namespace: web
spec:
  replicas: 2
  selector:
    matchLabels: { app: green }
  template:
    metadata:
      labels: { app: green }
    spec:
      containers:
        - name: app
          image: hashicorp/http-echo
          args: ["-text=green"]
          ports: [{ containerPort: 5678 }]
---
# Services
apiVersion: v1
kind: Service
metadata:
  name: blue-svc
  namespace: web
spec:
  selector: { app: blue }
  ports:
    - name: http
      port: 80
      targetPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: green-svc
  namespace: web
spec:
  selector: { app: green }
  ports:
    - name: http
      port: 80
      targetPort: 5678
---
# Ingress (assumes a controller & IngressClass exist)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  namespace: web
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts: ["demo.local"]
      secretName: demo-local-tls
  rules:
    - host: demo.local
      http:
        paths:
          - path: /blue
            pathType: Prefix
            backend:
              service:
                name: blue-svc
                port: { number: 80 }
          - path: /green
            pathType: Prefix
            backend:
              service:
                name: green-svc
                port: { number: 80 }

Create a TLS secret named demo-local-tls with your cert and key, and point local DNS (e.g., a hosts-file entry) at the Ingress load balancer IP. Then test (see the sketch after this list):

  • https://demo.local/blue returns "blue"
  • https://demo.local/green returns "green"
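
A minimal sketch of the secret creation and a test that does not depend on a hosts-file entry (the certificate/key paths and <INGRESS_LB_IP> are placeholders):

# Create the TLS secret referenced by spec.tls[].secretName
kubectl create secret tls demo-local-tls \
  --cert=demo.local.crt --key=demo.local.key -n web

# Pin demo.local to the controller's IP for this request only; -k skips verification for self-signed certs
curl -k --resolve demo.local:443:<INGRESS_LB_IP> https://demo.local/blue
curl -k --resolve demo.local:443:<INGRESS_LB_IP> https://demo.local/green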

Example 2: Host-based routing and health checks

Goal: Route api.example.local to api-svc and ui.example.local to ui-svc. Add readiness probes to avoid sending traffic to unready Pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: web
spec:
  replicas: 2
  selector:
    matchLabels: { app: api }
  template:
    metadata:
      labels: { app: api }
    spec:
      containers:
        - name: api
          image: nginx
          ports: [{ containerPort: 80 }]
          readinessProbe:
            httpGet: { path: /, port: 80 }
            initialDelaySeconds: 3
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: api-svc
  namespace: web
spec:
  selector: { app: api }
  ports: [{ name: http, port: 80, targetPort: 80 }]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
  namespace: web
spec:
  replicas: 2
  selector:
    matchLabels: { app: ui }
  template:
    metadata:
      labels: { app: ui }
    spec:
      containers:
        - name: ui
          image: nginx
          ports: [{ containerPort: 80 }]
          readinessProbe:
            httpGet: { path: /, port: 80 }
            initialDelaySeconds: 3
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: ui-svc
  namespace: web
spec:
  selector: { app: ui }
  ports: [{ name: http, port: 80, targetPort: 80 }]
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-routing
  namespace: web
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend: { service: { name: api-svc, port: { number: 80 } } }
    - host: ui.example.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend: { service: { name: ui-svc, port: { number: 80 } } }

Requests carrying the Host header api.example.local are routed to the API Service; requests for ui.example.local go to the UI Service. Readiness probes keep traffic away from Pods that are still starting.
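
If DNS for these hosts is not set up yet, you can still exercise both rules by sending the Host header explicitly (<INGRESS_LB_IP> is a placeholder for the controller's external address):

# Should be answered by api-svc
curl -H "Host: api.example.local" http://<INGRESS_LB_IP>/

# Should be answered by ui-svc
curl -H "Host: ui.example.local" http://<INGRESS_LB_IP>/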

Example 3: Restrict traffic with NetworkPolicy

Goal: Allow only the Ingress Controller to reach the api Pods on port 80.

# Label your ingress namespace if needed (example label)
# kubectl label namespace ingress-nginx role=ingress --overwrite
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress
  namespace: web
spec:
  podSelector:
    matchLabels: { app: api }
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: ingress
      ports:
        - protocol: TCP
          port: 80

With a CNI that enforces NetworkPolicy, only Pods from namespaces labeled role=ingress can reach api Pods on port 80.
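
One way to spot-check the policy is to call the Service from throwaway Pods in different namespaces (names reuse the earlier examples; busybox's wget is given a short timeout so the blocked case fails fast):

# From a namespace without the role=ingress label, the request should time out
kubectl run np-test --rm -it --restart=Never --image=busybox -n default \
  -- wget -qO- -T 5 http://api-svc.web.svc.cluster.local

# From the labeled ingress namespace, the same request should return the app's response
kubectl run np-test --rm -it --restart=Never --image=busybox -n ingress-nginx \
  -- wget -qO- -T 5 http://api-svc.web.svc.cluster.local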

Exercises and practice

Do these hands-on tasks. Then take the quick test.

  1. Exercise 1: Create two Deployments and Services in a namespace. Add an Ingress with TLS that routes /v1 to svc1 and /v2 to svc2. Verify both endpoints return different text.
    Hints
    • Use ClusterIP Services; Ingress handles external access.
    • Match Ingress backend port to the Service port (not containerPort).
    • Create a TLS secret referenced in spec.tls[].secretName.
    • Ensure you use the correct ingressClassName for your controller.
  2. Exercise 2: Add a NetworkPolicy that only allows traffic to svc1 Pods from your Ingress Controller's namespace.
    Hints
    • Use podSelector to select the target Pods (e.g., app: svc1).
    • Use namespaceSelector with a label that identifies the ingress namespace.
    • Specify the allowed port under ingress.ports.

Self-check checklist

  • I can explain the difference between Service types (ClusterIP, NodePort, LoadBalancer).
  • I know that an Ingress needs a running Ingress Controller to work.
  • I can route by host and by path to different Services.
  • I can set up basic TLS termination at the Ingress.
  • I can write a NetworkPolicy that restricts incoming traffic sources.
  • I can trace a request from DNS to Pod and find where it breaks.

Common mistakes and how to self-check

  • Forgetting the Ingress Controller: Create an Ingress but nothing works. Self-check: Is the controller running? Does your Ingress reference the right ingressClassName?
  • Port mismatch: The Ingress backend points at a port that doesn't exist on the Service. Self-check: In the Ingress, the backend port must match the Service's port number or name (not the Pod's containerPort).
  • Wrong Host header: Testing with curl to an IP without the expected Host. Self-check: Send the Host header that matches spec.rules[].host.
  • No TLS secret: Ingress references a secret that doesn't exist. Self-check: kubectl get secret in the same namespace as the Ingress.
  • Readiness not considered: Traffic sent to not-ready Pods. Self-check: Use readinessProbe and verify Endpoints show only ready addresses.
  • NetworkPolicy blocks health checks: Ingress or kubelet probes blocked. Self-check: Add rules for required sources/ports, test from a busybox Pod.

Practical projects

  • Multi-tenant demo: One Ingress, three paths (/team-a, /team-b, /team-c) to three Services. Add TLS and a rate-limit annotation if your controller supports it.
  • Blue/green switch: Two versions behind one host. Use paths or separate subdomains, then switch the default path to the new version.
  • Hardened API: API behind Ingress with NetworkPolicy allowlist for only Ingress namespace and a monitoring namespace. Add readiness probes.

Who this is for

  • Platform and DevOps engineers enabling app teams on Kubernetes.
  • Backend engineers who need to expose services safely and reliably.
  • Anyone preparing for on-call ownership of Kubernetes workloads.

Prerequisites

  • Basic Kubernetes objects: Pod, Deployment, Service.
  • Comfort with kubectl apply/get/describe/logs.
  • Understanding of HTTP, DNS, and TLS fundamentals.

Learning path

  1. Containers and Pods: images, health probes, resource requests.
  2. Services: ClusterIP, selectors, readiness and Endpoints.
  3. Ingress basics: host/path routing, TLS termination (this lesson).
  4. NetworkPolicy basics: default deny, namespace scoping.
  5. Ingress advanced: canary, sticky sessions, headers, timeouts.
  6. Production hardening: observability, retries, rate limits, WAF.

Next steps

  • Explore advanced Ingress annotations and timeouts for your controller.
  • Automate TLS certificate management with an operator in your cluster.
  • Evaluate Gateway API as a next-gen alternative to Ingress.
  • Add dashboards and alerts for 4xx/5xx rates, latency, and saturation.

Mini challenge

Design an Ingress for shop.local that routes:

  • /api to api-svc (port 8080)
  • / to web-svc (port 80)

Requirements:

  • TLS termination at the Ingress
  • Preserve client IP at the app layer if possible
  • Ensure only the Ingress can reach api Pods

Considerations
  • Ingress with two paths and a TLS secret.
  • If you use NodePort/LoadBalancer for the controller, set externalTrafficPolicy: Local to preserve the client IP (see the sketch below).
  • NetworkPolicy to allow only the ingress namespace to reach api Pods on 8080.
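
If your controller is exposed through a LoadBalancer or NodePort Service, one way to apply the client-IP setting is a patch like the following sketch (the Service name and namespace match a typical ingress-nginx install and may differ in your cluster):

# Preserve the client source IP; traffic is then only delivered to nodes that run controller Pods
kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'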

Practice Exercises

Instructions

Create two Deployments (v1 and v2) each exposing a simple HTTP response. Create two ClusterIP Services, then an Ingress that:

  • Terminates TLS with a secret you create
  • Routes /v1 to svc-v1 and /v2 to svc-v2

Assume your Ingress controller class is nginx and your host is app.local. Point your local DNS to the controller’s external IP to test.

Expected Output

HTTPS requests to https://app.local/v1 and https://app.local/v2 return distinct bodies (e.g., 'v1' vs 'v2').

Ingress And Networking Basics — Quick Test

Test your knowledge with 8 questions. Pass with 70% or higher.