Why this matters
As a Platform Engineer, you often host multiple teams or applications on the same Kubernetes cluster. Namespaces and multi-tenancy controls help you:
- Prevent noisy neighbors by placing limits on CPU/memory per team.
- Isolate network traffic so tenants can’t accidentally reach each other.
- Apply access control so developers see and change only what they own.
- Standardize defaults (limits, policies) to avoid outages and surprise bills.
Concept explained simply
Namespaces are like folders inside a cluster. Each team or app gets a folder plus rules about what it can do there. Multi-tenancy is the overall approach to safely hosting many teams in one cluster.
Mental model
- Namespace = a scoped workspace for names, policies, and quotas.
- RBAC = keys that define who can open doors inside that workspace.
- ResourceQuota + LimitRange = the namespace budget and per-container spending caps.
- NetworkPolicy = walls and doors for network traffic.
- Pod Security Standards = safety checks on what pods are allowed to do.
What namespaces do NOT do
- They don’t hard-isolate kernel resources like a hypervisor would.
- They don’t prevent all cross-namespace communication by default.
- They don’t set resource limits automatically (you must add quotas/limits).
Core building blocks
- Namespaces: Organize and scope most Kubernetes objects.
- RBAC (Roles, RoleBindings): Grant least-privilege access per namespace.
- ResourceQuota: Cap total CPU/memory and object counts per namespace.
- LimitRange: Set default and max per-container requests/limits.
- NetworkPolicy: Control traffic; default deny is a safe baseline.
- Pod Security Standards (baseline/restricted): Enforce safer pod configs.
- Labels/Selectors: Group resources for policies and access control.
Safe defaults to start multi-tenancy
- Create one namespace per team/application.
- Attach a ResourceQuota and LimitRange to every namespace.
- Apply a default-deny NetworkPolicy, then open only what’s needed.
- Use Role + RoleBinding for least-privilege per tenant.
- Adopt restricted Pod Security where possible.
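The last default can be enforced with the built-in Pod Security Admission labels on the namespace; a minimal sketch, using the same namespace name as the examples below:

```shell
# Turn on the "restricted" Pod Security Standard for the namespace.
# "team-a" is an illustrative namespace name.
kubectl label namespace team-a \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=latest
```

With the enforce label set, pods that violate the restricted profile are rejected at admission; you can also add audit/warn labels first for a softer rollout.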
Worked examples
Example 1: Team namespace + RBAC
Create a namespace and give a team service account read/write access only inside it.
kubectl create namespace team-a
# ServiceAccount for the team
kubectl -n team-a create serviceaccount dev
# Role: allow typical developer actions on common resources
cat <<'EOF' | kubectl -n team-a apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-developer
rules:
- apiGroups: [""]
  resources: ["pods","services","configmaps","secrets"]
  verbs: ["get","list","watch","create","update","patch","delete"]
- apiGroups: ["apps"]
  resources: ["deployments","statefulsets","daemonsets","replicasets"]
  verbs: ["get","list","watch","create","update","patch","delete"]
EOF
# Bind Role to ServiceAccount
kubectl -n team-a create rolebinding team-a-dev-rb \
  --role=team-a-developer --serviceaccount=team-a:dev
Why this works
The Role is namespace-scoped and allows common CRUD on app resources. The RoleBinding ties it to team-a’s dev ServiceAccount only.
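You can confirm the scope by impersonating the ServiceAccount with `kubectl auth can-i`; a quick sketch, assuming no other bindings grant the account wider access:

```shell
# Verify the binding by impersonating the ServiceAccount
kubectl auth can-i create deployments -n team-a \
  --as=system:serviceaccount:team-a:dev   # should print "yes"
kubectl auth can-i create deployments -n kube-system \
  --as=system:serviceaccount:team-a:dev   # should print "no"
```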
Example 2: ResourceQuota + LimitRange
Prevent noisy neighbors by setting per-namespace caps and defaults.
# ResourceQuota: cap total compute and objects
cat <<'EOF' | kubectl -n team-a apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "50"
    services: "20"
    configmaps: "50"
    secrets: "50"
EOF
# LimitRange: set per-container defaults and max
cat <<'EOF' | kubectl -n team-a apply -f -
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-limits
spec:
  limits:
  - type: Container
    default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "250m"
      memory: "256Mi"
    max:
      cpu: "2000m"
      memory: "2Gi"
EOF
Result
Pods without explicit resources get reasonable defaults. The namespace can’t exceed the quota, preventing cluster-level contention.
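To see the quota in action, inspect usage and watch for admission rejections; a sketch, with resource names matching the example above:

```shell
# Show current usage against the hard caps defined above
kubectl -n team-a describe resourcequota team-a-quota

# Pods that would exceed the quota are rejected at admission. For pods created
# by a Deployment, the failure surfaces as FailedCreate events on its ReplicaSet.
kubectl -n team-a get events --field-selector reason=FailedCreate
```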
Example 3: Default-deny NetworkPolicy + allow same-namespace traffic
# Default deny all ingress
cat <<'EOF' | kubectl -n team-a apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
EOF
# Allow app-to-db within namespace by labels
cat <<'EOF' | kubectl -n team-a apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-to-db
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: app
    ports:
    - protocol: TCP
      port: 5432
EOF
Tip
Start with default deny, then explicitly allow only the flows you expect.
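One way to verify the flows is a throwaway client pod; a sketch, assuming the database is exposed by a Service named `db` (adjust names and labels to your setup):

```shell
# Launch a short-lived pod labeled role=app and probe the db Service.
# busybox's nc is used here; some minimal images may lack the -z flag.
kubectl -n team-a run np-test --rm -it --restart=Never \
  --image=busybox:1.36 --labels=role=app -- \
  nc -zv -w 2 db 5432
# Re-run without --labels (or with a different label) to confirm the
# connection is now blocked by the default-deny policy.
```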
Who this is for
- Platform Engineers and SREs responsible for shared clusters.
- Backend engineers deploying microservices into Kubernetes.
- Engineering managers defining safe multi-team practices.
Prerequisites
- Basic kubectl usage and access to a test cluster or local Kubernetes (such as kind or minikube).
- Familiarity with Pods, Deployments, and Services.
- Comfort with YAML manifests.
Learning path
- Create a namespace per tenant with RBAC for least privilege.
- Add ResourceQuota and LimitRange to cap usage and set defaults.
- Apply default-deny NetworkPolicy; open only necessary flows.
- Adopt Pod Security Standards (restricted) where feasible.
- Automate with templates or GitOps for consistency.
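The last step can start as small as a parameterized heredoc before graduating to Helm, Kustomize, or a GitOps repository; a minimal sketch (the TENANT value and labels are illustrative):

```shell
# Stamp out a tenant namespace from a single variable; extend the template
# with the Role, quota, and policy manifests from the worked examples.
TENANT=team-alpha
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: ${TENANT}
  labels:
    tenant: ${TENANT}
    pod-security.kubernetes.io/enforce: restricted
EOF
```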
Exercises
Complete these tasks. Check the solutions below only if you get stuck.
Exercise ex1: Team namespace with basic controls
- Create namespace team-alpha and ServiceAccount dev.
- Bind a Role allowing CRUD on Deployments, Pods, Services, ConfigMaps, Secrets.
- Attach a ResourceQuota and LimitRange similar to the examples.
- Apply a default-deny ingress NetworkPolicy.
Self-check checklist
- kubectl get ns shows team-alpha.
- kubectl auth can-i create deployments --as=system:serviceaccount:team-alpha:dev -n team-alpha returns yes.
- Attempting to exceed quotas is denied.
- New Pods without resources receive defaults.
Exercise ex2: Isolate two tenants and allow only app-to-db inside each
- Create namespaces team-a and team-b.
- Deploy an app (role=app) and db (role=db) in each.
- NetworkPolicies: default deny; allow app to db on TCP 5432 inside its own namespace only.
- Prove cross-namespace app cannot reach other namespace db.
Self-check checklist
- app in team-a can reach db in team-a.
- app in team-a cannot reach db in team-b.
- Policy YAML uses labels, not hard-coded pod IPs.
Common mistakes and how to self-check
- No default deny: Traffic is open by default. Self-check: Create a test pod and attempt cross-namespace curl; if it works unintentionally, add default deny.
- Missing limits: Pods without requests/limits can starve others. Self-check: kubectl describe pod shows no limits; add LimitRange.
- Overly broad RBAC: Using ClusterRoleBinding for tenant devs. Self-check: List RoleBindings in the namespace; avoid cluster-wide bindings.
- Ignoring labels: Policies break when labels are inconsistent. Self-check: Ensure Deployments set the same labels your policies select.
- Quota too strict: Deployments won’t schedule. Self-check: Events show quota exceeded; adjust ResourceQuota or per-pod limits.
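These self-checks can be scripted as a quick spot-check of one namespace; a sketch, where `team-a` is illustrative:

```shell
# Each command should return at least one object in a well-guarded namespace
kubectl -n team-a get networkpolicy        # expect a default-deny policy
kubectl -n team-a get resourcequota,limitrange
kubectl -n team-a get rolebindings         # tenant access should live here,
                                           # not in ClusterRoleBindings
```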
Practical projects
- Tenant Starter Kit: A reusable YAML bundle (Namespace, Role/RoleBinding, ResourceQuota, LimitRange, default-deny NetworkPolicy) parameterized by tenant name.
- Traffic Showcase: Two tenants with visualized allowed/blocked flows using a tiny HTTP echo app; document expected vs. actual traffic results.
- Guardrails Audit: Script that scans all namespaces and reports any missing quota, limit range, or default-deny policy.
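The Guardrails Audit could begin as a small shell loop; a sketch, assuming kubectl access to all namespaces:

```shell
# Report namespaces missing any of the three baseline guardrails
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  kubectl -n "$ns" get resourcequota -o name | grep -q . || echo "$ns: missing ResourceQuota"
  kubectl -n "$ns" get limitrange    -o name | grep -q . || echo "$ns: missing LimitRange"
  kubectl -n "$ns" get networkpolicy -o name | grep -q . || echo "$ns: missing NetworkPolicy"
done
```

You would likely want to exclude system namespaces (kube-system and friends) before treating the report as actionable.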
Mini challenge
In a single namespace, create three tiers: frontend (role=fe), backend (role=be), and database (role=db). Allow only fe->be on TCP 8080 and be->db on TCP 5432. Everything else must be denied. Provide the minimal set of NetworkPolicies to achieve this.
Next steps
- Automate tenant creation via templates or GitOps to reduce drift.
- Adopt restricted Pod Security where viable to harden workloads.
- Measure capacity usage per namespace to inform quotas and costs.