
Kubernetes for Django: Deployments, HPA, Ingress, ConfigMaps, Secrets, and PgBouncer

A pragmatic Kubernetes setup for a real Django app: Deployment + Service + Ingress, ConfigMaps and Secrets done right, liveness/readiness probes that work, HPA on the right metric, and PgBouncer in front of PostgreSQL.

DjangoZen Team · Apr 25, 2026 · 20 min read

Kubernetes is overkill for one app on one VPS. It pays back the moment you have multiple services, blue-green deploys, or autoscaling needs. This tutorial walks through the smallest k8s setup that runs Django the way you'd actually want it in production, not the toy "deployment.yaml" you find in 3-minute videos.

When k8s makes sense for Django

Use it if you have at least two of: multiple environments (dev/staging/prod), multiple services (web + workers + scheduler + cron), traffic that varies 5×+ across the day, or a team that already runs k8s. Otherwise stick with systemd + nginx + a shell script.

The Dockerfile (recap)

FROM python:3.12-slim AS deps
WORKDIR /app
COPY requirements.txt .
# Install into the user site-packages so the runtime stage can copy just these
RUN pip install --no-cache-dir --user -r requirements.txt

FROM python:3.12-slim
WORKDIR /app
RUN useradd -m app
# Only the installed packages reach the final image
COPY --from=deps /root/.local /home/app/.local
COPY --chown=app:app . .
ENV PATH=/home/app/.local/bin:$PATH PYTHONUNBUFFERED=1
USER app
EXPOSE 8000
CMD ["gunicorn","--bind","0.0.0.0:8000","--workers","3","djzen.wsgi:application"]

Deployment with proper probes

apiVersion: apps/v1
kind: Deployment
metadata: {name: djzen-web}
spec:
  replicas: 3
  selector: {matchLabels: {app: djzen-web}}
  strategy:
    type: RollingUpdate
    rollingUpdate: {maxSurge: 1, maxUnavailable: 0}
  template:
    metadata: {labels: {app: djzen-web}}
    spec:
      containers:
      - name: web
        image: registry.example.com/djzen:abc123
        ports: [{containerPort: 8000}]
        envFrom:
        - configMapRef: {name: djzen-config}
        - secretRef:    {name: djzen-secrets}
        readinessProbe:
          httpGet: {path: /healthz/, port: 8000}
          periodSeconds: 5
          failureThreshold: 3
        livenessProbe:
          httpGet: {path: /healthz/, port: 8000}
          initialDelaySeconds: 30
          periodSeconds: 30
          failureThreshold: 5
        resources:
          requests: {cpu: 200m, memory: 256Mi}
          limits:   {cpu: 1000m, memory: 512Mi}

Probes that actually work: /healthz/ should be a fast, dependency-light endpoint that returns 200 if the process is alive. A separate /readyz/ can check DB+cache. Don't make liveness check the DB — a brief DB blip will kill all your pods at once.
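A minimal sketch of those two endpoints (view names and URL wiring are illustrative):

# health.py
from django.http import HttpResponse, JsonResponse
from django.db import connections

def healthz(request):
    # Liveness: the process is up. Deliberately touches no dependencies.
    return HttpResponse("ok")

def readyz(request):
    # Readiness: check the DB before this pod takes traffic.
    try:
        with connections["default"].cursor() as cursor:
            cursor.execute("SELECT 1")
    except Exception:
        return JsonResponse({"db": "down"}, status=503)
    return JsonResponse({"db": "ok"})

If you add /readyz/, point the readinessProbe at it and keep the livenessProbe on /healthz/.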

ConfigMap vs Secret — and how to manage them

apiVersion: v1
kind: ConfigMap
metadata: {name: djzen-config}
data:
  DJANGO_SETTINGS_MODULE: djzen.settings
  ALLOWED_HOSTS: djangozen.com,www.djangozen.com
  REDIS_URL: redis://redis:6379/0
---
apiVersion: v1
kind: Secret
metadata: {name: djzen-secrets}
type: Opaque
stringData:
  SECRET_KEY: "..."
  DATABASE_URL: "postgresql://..."

Secrets are base64-encoded, NOT encrypted at rest by default. Enable encryption at rest (an EncryptionConfiguration for the API server), or use Sealed Secrets / External Secrets Operator (pulling from Vault/AWS Secrets Manager). Never commit a plaintext Secret manifest to git.
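On the Django side, envFrom turns those keys into plain environment variables. A settings.py sketch of consuming them, assuming dj-database-url for URL parsing:

# settings.py (excerpt)
import os
import dj_database_url

SECRET_KEY = os.environ["SECRET_KEY"]                   # from the Secret
ALLOWED_HOSTS = os.environ["ALLOWED_HOSTS"].split(",")  # from the ConfigMap
DATABASES = {"default": dj_database_url.parse(os.environ["DATABASE_URL"])}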

Ingress and TLS

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: djzen
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-rps: "20"
spec:
  ingressClassName: nginx
  tls:
  - hosts: [djangozen.com, www.djangozen.com]
    secretName: djzen-tls
  rules:
  - host: djangozen.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend: {service: {name: djzen-web, port: {number: 80}}}

The backend assumes a ClusterIP Service named djzen-web that maps port 80 to the pods' port 8000 (not shown here). Pair with cert-manager for automatic Let's Encrypt issuance and renewal. The limit-rps annotation applies to the whole Ingress, so give auth endpoints their own Ingress object with a stricter limit.

Horizontal Pod Autoscaler — on the right metric

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata: {name: djzen-web}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: djzen-web
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target: {type: Utilization, averageUtilization: 70}

CPU is fine for many Django apps, but if your bottleneck is I/O (DB, external APIs), CPU sits at 30% while latency spikes. Better: scale on requests per pod, fed to the HPA from Prometheus via the prometheus-adapter, with a target around 100 requests/s per pod.
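On the app side you need a per-pod request counter that Prometheus can scrape. A minimal sketch with prometheus_client (the metric name, and the adapter rule it would feed, are assumptions, not a fixed convention):

# middleware.py
from prometheus_client import Counter

HTTP_REQUESTS = Counter(
    "django_http_requests_total",
    "HTTP requests handled by this pod",
    ["method", "status"],
)

class RequestCounterMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        HTTP_REQUESTS.labels(request.method, str(response.status_code)).inc()
        return response

The prometheus-adapter then maps something like rate(django_http_requests_total[2m]) to a per-pod custom metric, and the HPA targets it with a type: Pods metric instead of the Resource block above.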

PgBouncer — non-negotiable above ~5 pods

Each gunicorn worker holds its own DB connection. 3 pods × 3 workers × 2 (web+celery) = 18 connections. At 20 pods you're at 120, already past stock PostgreSQL's default max_connections of 100, and every server connection costs real memory. Solution: PgBouncer in transaction-pooling mode sits between your apps and PG, multiplexing thousands of client connections onto a small pool of server connections.

# Point Django at PgBouncer instead of Postgres directly:
DATABASE_URL=postgresql://djzen@pgbouncer:6432/djzen

# pgbouncer.ini (PgBouncer itself runs as a small Deployment + Service)
pool_mode = transaction
max_client_conn = 2000
default_pool_size = 25

Caveats with transaction pooling: no session-level state (SET LOCAL only), no advisory locks, no LISTEN/NOTIFY, and no server-side cursors. 99% of Django code doesn't care, but you do have to tell Django about the cursors.
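Continuing the settings sketch from above, the Django side of transaction pooling:

# settings.py (excerpt)
db = dj_database_url.parse(os.environ["DATABASE_URL"])  # now points at pgbouncer:6432
db["CONN_MAX_AGE"] = 0                    # let PgBouncer do the pooling
db["DISABLE_SERVER_SIDE_CURSORS"] = True  # .iterator() would break in transaction mode
DATABASES = {"default": db}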

Migrations as a Job, not on container start

Don't run python manage.py migrate in your container's entrypoint. Two pods will race; one wins. Run a one-shot Job per release:

apiVersion: batch/v1
kind: Job
metadata: {name: djzen-migrate-abc123}
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: registry.example.com/djzen:abc123
        command: ["python","manage.py","migrate","--noinput"]
        envFrom:
        - configMapRef: {name: djzen-config}
        - secretRef:    {name: djzen-secrets}

Trigger it from your CI before rolling the Deployment.
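For example, a hypothetical CI step (file names and tag handling are placeholders):

# ci_migrate.py
import subprocess, sys

tag = sys.argv[1]  # image tag being released, e.g. abc123
subprocess.run(["kubectl", "apply", "-f", f"migrate-job-{tag}.yaml"], check=True)
subprocess.run(
    ["kubectl", "wait", f"job/djzen-migrate-{tag}",
     "--for=condition=complete", "--timeout=300s"],
    check=True,
)
# Only after the Job succeeds, roll the Deployment to the new image
subprocess.run(
    ["kubectl", "set", "image", "deployment/djzen-web",
     f"web=registry.example.com/djzen:{tag}"],
    check=True,
)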

Production checklist

  • Probes that don't depend on the DB.
  • Resource requests set (or HPA never scales). Limits set so one pod can't OOM-kill the node.
  • PodDisruptionBudget with minAvailable: 1.
  • Secrets encrypted at rest or pulled from Vault — never plaintext in git.
  • Network policies restricting pod-to-pod traffic.
  • Cluster autoscaler if HPA can outgrow your nodes.
  • Backups — a k8s cluster doesn't back up your DB. PostgreSQL needs separate WAL archiving.

Summary

k8s for Django is mostly: a sensible Deployment with proper probes, ConfigMap/Secret separation, an Ingress with TLS, HPA on the right metric, and PgBouncer between you and PostgreSQL. Skip the operators and CRDs until you actually need them. Boring k8s is the goal.