
Celery and Redis for Django: Background Tasks, Beat Schedules, Retries, and Flower

Move slow work off the request/response cycle. Architect Celery + Redis for production: workers, queues, retries with exponential backoff, periodic jobs with Beat, and live monitoring with Flower.

DjangoZen Team · Apr 25, 2026 · 20 min read

If a request takes longer than a second, your users feel it. Email sending, PDF generation, third-party API calls, image processing, daily exports — all of these belong off the request/response cycle. Celery is the de facto answer for Django, and with Redis as the broker it scales from one worker on a $5 VPS to hundreds across a Kubernetes cluster.

Architecture in 60 seconds

Celery has three moving parts: your Django app (the producer), the broker (Redis or RabbitMQ — a queue that holds task messages), and one or more workers (separate processes that pull tasks and run them). A result backend (also Redis) stores task return values if you need them. Beat is a single scheduler process that emits periodic tasks on a cron-like schedule.

Minimal setup

pip install "celery[redis]==5.4.*" django-celery-beat django-celery-results

Create djzen/celery.py:

import os

from celery import Celery

# Set the default settings module before the Celery app is created.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "djzen.settings")

app = Celery("djzen")
# Read every CELERY_*-prefixed setting from Django's settings.py.
app.config_from_object("django.conf:settings", namespace="CELERY")
# Discover tasks.py modules in every installed app.
app.autodiscover_tasks()

In djzen/__init__.py:

from .celery import app as celery_app
__all__ = ("celery_app",)

In settings.py:

CELERY_BROKER_URL = "redis://127.0.0.1:6379/1"
CELERY_RESULT_BACKEND = "django-db"   # django-celery-results: add "django_celery_results" to INSTALLED_APPS and migrate
CELERY_TASK_TIME_LIMIT = 60 * 5        # hard kill at 5 min
CELERY_TASK_SOFT_TIME_LIMIT = 60 * 4   # raise SoftTimeLimitExceeded at 4 min
CELERY_TASK_ACKS_LATE = True            # ack after success, not on receipt
CELERY_WORKER_PREFETCH_MULTIPLIER = 1   # one task at a time per worker
CELERY_TIMEZONE = "UTC"
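
With the broker running, start a worker from the project root and it will discover and execute tasks immediately:

celery -A djzen worker -l info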

Defining and calling tasks

from celery import shared_task
from django.core.mail import EmailMessage

from orders.models import Order  # assuming the model lives in the "orders" app

@shared_task(bind=True, max_retries=5, autoretry_for=(IOError,),
             retry_backoff=True, retry_backoff_max=600, retry_jitter=True)
def send_invoice_email(self, order_id: int) -> None:
    order = Order.objects.get(pk=order_id)
    pdf = render_invoice(order)  # project helper that returns the PDF as bytes
    msg = EmailMessage("Your invoice", "Your invoice is attached.", to=[order.email])
    msg.attach("invoice.pdf", pdf, "application/pdf")
    msg.send()

Call it from a view:

send_invoice_email.delay(order.id)              # fire and forget
send_invoice_email.apply_async(args=[order.id], countdown=30)   # in 30s
send_invoice_email.apply_async(args=[order.id], queue="email")  # specific queue

Always pass IDs, never ORM instances. Task arguments are serialized (JSON by default), and the task runs in another process, so a passed object could be stale by the time it executes; re-fetch it inside the task.

Retries that don't melt your downstream

Use autoretry_for + retry_backoff=True + retry_jitter=True. The delay doubles on each attempt (1s, 2s, 4s, 8s, …), and jitter randomizes it to avoid thundering herds. For unexpected failures, log and let the task die; don't blindly retry on Exception, or you'll DoS yourself and your downstream.
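
autoretry_for covers the common case. When you need finer control over which failures retry, bind the task and call self.retry yourself. A minimal sketch, with a hypothetical partner endpoint standing in for the real URL:

import requests
from celery import shared_task

@shared_task(bind=True, max_retries=3)
def sync_with_partner(self, payload: dict) -> None:
    try:
        # Hypothetical endpoint; swap in the real one.
        resp = requests.post("https://partner.example.com/sync",
                             json=payload, timeout=10)
        resp.raise_for_status()
    except (requests.Timeout, requests.ConnectionError) as exc:
        # Transient network trouble: manual exponential backoff (2s, 4s, 8s).
        raise self.retry(exc=exc, countdown=2 ** (self.request.retries + 1))
    # Anything else (4xx logic errors, bugs) propagates and fails loudly.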

Periodic tasks with Beat

django-celery-beat stores schedules in the database, editable from Django admin:

CELERY_BEAT_SCHEDULER = "django_celery_beat.schedulers:DatabaseScheduler"

Add a CrontabSchedule in admin and bind it to a PeriodicTask pointing at tasks.cleanup_expired_carts. Run beat as a single, separate process: celery -A djzen beat -l info. Never run more than one beat — duplicates fire twice.
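
Schedules can also be created in code (in a data migration, say) instead of clicking through admin. A sketch, assuming the tasks.cleanup_expired_carts task exists:

from django_celery_beat.models import CrontabSchedule, PeriodicTask

# Every day at 03:00 in the CELERY_TIMEZONE configured above.
schedule, _ = CrontabSchedule.objects.get_or_create(
    minute="0", hour="3",
    day_of_week="*", day_of_month="*", month_of_year="*",
)
PeriodicTask.objects.get_or_create(
    name="Clean up expired carts",       # unique, human-readable label
    task="tasks.cleanup_expired_carts",  # dotted path to the task
    crontab=schedule,
)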

Queues and routing

One queue is fine until it isn't. Split high-priority email from slow PDF rendering:

CELERY_TASK_ROUTES = {
    "orders.tasks.send_invoice_email": {"queue": "email"},
    "orders.tasks.render_pdf":         {"queue": "render"},
}
# Run two worker pools:
# celery -A djzen worker -Q email  -c 8 -n email@%h
# celery -A djzen worker -Q render -c 2 -n render@%h

Monitoring with Flower

pip install flower
celery -A djzen flower --port=5555 --basic_auth=admin:strongpass

Flower shows live workers, in-flight tasks, history, retries, and per-task throughput. Lock it behind nginx with auth — never expose 5555 publicly.

Production checklist

  • systemd units for worker and beat (auto-restart on crash).
  • Idempotent tasks: design every task so running it twice is harmless. Use a unique task_id or lock key derived from the object + action; see the sketch after this list.
  • acks_late=True + prefetch=1: tasks aren't lost if a worker is killed mid-execution.
  • Result expiry: CELERY_RESULT_EXPIRES = 3600 — don't grow the results table forever.
  • Sentry: install sentry-sdk[celery] and every task failure is captured automatically.
  • Don't run heavy work in the web request waiting on .get() from a task — that's just synchronous code with extra steps.
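
One simple idempotency guard is a cache lock: Django's cache.add only succeeds if the key is absent (atomic on Redis and Memcached backends). A sketch, with do_export as a hypothetical stand-in for the real work:

from celery import shared_task
from django.core.cache import cache

@shared_task(bind=True)
def export_order(self, order_id: int) -> None:
    lock_key = f"export-order:{order_id}"  # derived from object + action
    # add() returns False if the key already exists,
    # i.e. another run of this task is already in flight.
    if not cache.add(lock_key, "running", timeout=600):
        return
    try:
        do_export(order_id)  # hypothetical: the actual export goes here
    finally:
        cache.delete(lock_key)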

Summary

Celery + Redis is the boring, battle-tested choice. Master the four primitives — task, retry, queue, schedule — and you'll never again block a request on something that could happen in the background. Use Flower in dev, Sentry in prod, and design every task to be safe to run twice.