A pragmatic guide to caching in Django: choose the right level (site, view, fragment, low-level), avoid the stampede, and solve the only hard problem — invalidation — with versioned keys and signal-driven busts.
"There are only two hard things in computer science: cache invalidation and naming things." — Phil Karlton. This tutorial is about the first one. We'll skip the trivial bits (yes, install Redis) and focus on what trips production teams up: choosing the right cache level, avoiding stampedes, and invalidating without forgetting.
```python
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/2",
        "TIMEOUT": 300,  # default TTL: 5 minutes
        "KEY_PREFIX": "djzen",
        "VERSION": 1,
    }
}
```
Use a dedicated Redis DB index per cache (here `/2`) so you can `FLUSHDB` the cache without touching Celery's broker on `/1`.
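Concretely, a split like this keeps the cache, sessions, and the Celery broker on separate DB indexes. A sketch only: the `sessions` alias and the exact indexes are illustrative choices, not requirements.

```python
# settings.py (sketch): cache on DB 2, sessions on DB 3, Celery's broker
# on DB 1. FLUSHDB against DB 2 now wipes only the cache.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/2",
    },
    "sessions": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/3",
    },
}
CELERY_BROKER_URL = "redis://127.0.0.1:6379/1"
```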
Caching at the wrong level is worse than no cache at all (stale pages, weird auth bugs, "why am I seeing someone else's order?").
- Per-view (`@cache_page`) — good for fully anonymous pages with no personalization (blog index, public docs).
- Template fragment ({% raw %}`{% cache %}`{% endraw %}) — cache the expensive parts of a page; render the personalized parts live.
- Low-level (`cache.get/set`) — cache function results, querysets, computed values. Most flexibility, most responsibility.

```python
from django.shortcuts import render
from django.views.decorators.cache import cache_page
from django.views.decorators.vary import vary_on_headers

@cache_page(60 * 15)
@vary_on_headers("Accept-Language", "X-Theme")
def public_pricing(request):
    return render(request, "pricing.html", {"plans": Plan.objects.published()})
```
Always pair `@cache_page` with `@vary_on_headers` if anything in the response depends on a header. Forgetting `Accept-Language` is how you ship the German version to French users.
{% raw %}
```django
{% load cache %}
{% cache 600 product_card product.id product.updated_at %}
  <article class="card">{{ product.title }} — €{{ product.price }}</article>
{% endcache %}
```
{% endraw %}
The trick: include `updated_at` in the key. When the product is saved, `updated_at` changes, the key changes, the old fragment is irrelevant. You never invalidate — you just stop pointing at the stale entry. This is the single most useful caching pattern in Django.
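The same trick works in low-level code: derive the key from the timestamp yourself. A minimal sketch, where `product_card_key` is a hypothetical helper, not Django API:

```python
from datetime import datetime

def product_card_key(product_id: int, updated_at: datetime) -> str:
    # Embed the last-modified timestamp in the key: saving the product
    # changes updated_at, which changes the key, which orphans the old
    # entry instead of requiring an explicit delete.
    return f"product_card:{product_id}:{updated_at.isoformat()}"
```

The orphaned entry simply expires on its own TTL; no invalidation code ever runs.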
```python
from django.core.cache import cache

def featured_products():
    return cache.get_or_set(
        "featured_products:v3",
        lambda: list(Product.objects.filter(is_featured=True).select_related("category")[:12]),
        timeout=600,
    )
```
```python
from django.core.cache import cache
from django.db.models.signals import post_delete, post_save
from django.dispatch import receiver

@receiver(post_save, sender=Product)
@receiver(post_delete, sender=Product)
def bust_featured(sender, instance, **kwargs):
    # Delete unconditionally: checking instance.is_featured here would miss
    # the save that just *un-featured* a product, leaving a stale list cached.
    cache.delete("featured_products:v3")
```
Bump the version suffix (`:v3` → `:v4`) when you change the function's shape (added a field, changed the slice). That's a deploy-time atomic invalidation — no race, no stale entries.
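This is the same scheme Django itself uses when composing keys: the default key function joins `KEY_PREFIX`, `VERSION`, and your key with colons. A pure-Python sketch of that scheme (`make_cache_key` is an illustrative name, not the actual Django function):

```python
def make_cache_key(key: str, key_prefix: str = "djzen", version: int = 1) -> str:
    # Mirrors Django's "<prefix>:<version>:<key>" composition. Bumping the
    # VERSION setting (or a per-key :vN suffix) retires every old entry at
    # once without deleting anything.
    return f"{key_prefix}:{version}:{key}"
```

Old entries under the previous version simply sit in Redis until their TTL expires, which is why this is race-free.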
A popular cached query expires at 14:00:00. At 14:00:00.001, 2,000 requests all miss simultaneously, all run the SQL, and your DB goes down. This is the "thundering herd."
Solutions, easiest first:
- Probabilistic early refresh — with probability *p* per request, refresh as if the entry is expired even when it isn't.
- A lock — use Redis `SET NX EX` as a mutex; the loser waits and re-reads.

```python
import time

from django.core.cache import cache

def cached_with_lock(key, producer, timeout=600, lock_timeout=10):
    val = cache.get(key)
    if val is not None:
        return val
    lock_key = f"lock:{key}"
    # cache.add is SET NX under the hood: only the first request wins.
    if cache.add(lock_key, "1", lock_timeout):
        try:
            val = producer()
            cache.set(key, val, timeout)
        finally:
            cache.delete(lock_key)
        return val
    # Losers back off briefly and retry; the lock's TTL bounds the wait.
    time.sleep(0.05)
    return cached_with_lock(key, producer, timeout, lock_timeout)
```
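The probabilistic option can be sketched as an early-expiration rule in the spirit of "x-fetch": each request has a small, randomized chance of refreshing before the real expiry, so recomputation is spread out instead of synchronized. All names and parameters below are assumptions for illustration.

```python
import math

def should_refresh_early(now: float, expiry: float, delta: float, beta: float, r: float) -> bool:
    # Treat the entry as expired when  now - delta * beta * ln(r) >= expiry,
    # where delta is how long recomputation takes, beta tunes eagerness, and
    # r is uniform in (0, 1]. -ln(r) is occasionally large, so a lone request
    # refreshes *before* the real expiry instead of 2,000 after it.
    return now - delta * beta * math.log(r) >= expiry
```

In production you would call it with `r = random.random()` and store the value, its recomputation time, and its expiry together in one cache entry.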
Track hit rate per key prefix in Prometheus. A 60% hit rate on a "hot" key means you're caching wrong (TTL too short, key cardinality too high, or stampedes). Aim for >90% on intentionally cached values.
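A minimal sketch of that instrumentation, with plain in-process counters standing in for Prometheus ones (in production these would be `prometheus_client.Counter` objects; all names here are assumptions):

```python
from collections import Counter

hits = Counter()
misses = Counter()

def instrumented_get(cache, key: str, prefix: str):
    # Count hits and misses per key prefix so hit rate can be graphed
    # and alerted on per cache family, not just globally.
    val = cache.get(key)
    if val is None:
        misses[prefix] += 1
    else:
        hits[prefix] += 1
    return val

def hit_rate(prefix: str) -> float:
    total = hits[prefix] + misses[prefix]
    return hits[prefix] / total if total else 0.0
```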
Cache the smallest unit that helps. Put updated_at in fragment keys so invalidation is automatic. Solve stampedes before they happen, not after. And always — always — measure hit rate. A cache you can't observe is a bug factory.