Django Advanced

Django ORM Optimization: Eliminating N+1 Queries and Performance Pitfalls

Deep dive into Django ORM performance. Learn how to identify and fix N+1 queries, use select_related, prefetch_related, only(), defer(), and Django Debug Toolbar to measure the real impact.

DjangoZen Team Apr 17, 2026 18 min read

Django's ORM is powerful, but its convenience can hide serious performance bombs. One of the biggest is the N+1 query problem, and it's likely lurking in your app right now.

What is the N+1 Problem?

Given this simple code:

posts = Post.objects.all()
for post in posts:
    print(post.author.name)

If you have 100 posts, Django runs 101 queries — one to fetch the posts, then one per post to fetch its author. With 10,000 posts, this pattern grinds your database to a halt.

Identifying N+1 Queries

Install Django Debug Toolbar:

pip install django-debug-toolbar

Add to settings:

INSTALLED_APPS = [..., 'debug_toolbar']
MIDDLEWARE = ['debug_toolbar.middleware.DebugToolbarMiddleware', ...]
INTERNAL_IPS = ['127.0.0.1']

Now every page shows the SQL panel with exact query counts. If you see something like "50 similar queries", you have an N+1 problem.
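The toolbar also needs its URLconf wired in. A minimal sketch of the standard setup, assuming a typical project-level urls.py:

```python
# urls.py — the toolbar serves its panels from this prefix (DEBUG only)
from django.urls import include, path

urlpatterns = [
    # ... your existing routes ...
    path('__debug__/', include('debug_toolbar.urls')),
]
```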

Fix #1: select_related

Use select_related for forward ForeignKey and OneToOne relationships. It adds a SQL JOIN:

# Bad: 1 + N queries
posts = Post.objects.all()

# Good: 1 query with JOIN
posts = Post.objects.select_related('author', 'category')

You can chain: select_related('author__profile', 'category').

Fix #2: prefetch_related

For reverse ForeignKey or ManyToMany relationships, use prefetch_related. It runs a second query per relation and joins the results in Python:

# Each post has many tags
posts = Post.objects.prefetch_related('tags', 'comments__author')
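Conceptually, prefetch_related issues one extra query per relation with an IN clause, then stitches the rows together in Python. A simplified pure-Python sketch of that grouping step (the dicts here are stand-ins, not the real ORM internals):

```python
from collections import defaultdict

# Pretend these rows came back from two separate queries:
posts = [{'id': 1, 'title': 'A'}, {'id': 2, 'title': 'B'}]
# Second query: SELECT ... FROM tag WHERE post_id IN (1, 2)
tag_rows = [
    {'post_id': 1, 'name': 'django'},
    {'post_id': 1, 'name': 'orm'},
    {'post_id': 2, 'name': 'sql'},
]

# Group tags by post_id in Python — the "join" happens here, not in SQL
tags_by_post = defaultdict(list)
for row in tag_rows:
    tags_by_post[row['post_id']].append(row['name'])

for post in posts:
    post['tags'] = tags_by_post[post['id']]

print(posts[0]['tags'])  # ['django', 'orm']
```

This is why prefetch_related works where a JOIN can't: a JOIN against a many-to-many would duplicate each post row once per tag, while the two-query approach keeps result sets flat.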

Fix #3: Prefetch with custom querysets

When you need filtered prefetches:

from django.db.models import Prefetch

active_comments = Comment.objects.filter(is_active=True)
posts = Post.objects.prefetch_related(
    Prefetch('comments', queryset=active_comments, to_attr='active_comments')
)

for post in posts:
    for comment in post.active_comments:  # no extra query
        print(comment.body)

Fix #4: only() and defer()

Load only the columns you need:

# Load only title and slug
Post.objects.only('title', 'slug')

# Load everything EXCEPT large text fields
Post.objects.defer('content', 'html_body')

Be careful — accessing a deferred field triggers an extra query.

Fix #5: values() and values_list()

When you don't need model instances:

# Returns dicts — no Python model instance overhead
Post.objects.filter(published=True).values('id', 'title', 'author__name')

# Returns a flat list
Post.objects.values_list('id', flat=True)

Aggregation and Annotation

Push work to the database:

from django.db.models import Count, Avg, Sum

# Bad: one COUNT query per author — still N+1
authors = Author.objects.all()
for a in authors:
    print(a.name, a.comment_set.count())  # N+1!

# Good: single query with COUNT
authors = Author.objects.annotate(comment_count=Count('comment'))
for a in authors:
    print(a.name, a.comment_count)

QuerySet Caching Gotchas

QuerySets are lazy and cached — but subtle rules apply:

qs = Post.objects.filter(published=True)

# Evaluates + caches
list(qs)

# Slicing before evaluation returns a new uncached QuerySet
first_five = qs[:5]
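The caching rule can be modeled in a few lines of plain Python — a toy stand-in for a QuerySet, assuming nothing about the real implementation:

```python
class LazySeq:
    """Toy model of QuerySet caching: the 'query' runs once, on first full iteration."""
    def __init__(self, fetch):
        self._fetch = fetch   # callable standing in for the SQL query
        self._cache = None

    def __iter__(self):
        if self._cache is None:
            self._cache = list(self._fetch())  # evaluate + cache
        return iter(self._cache)

calls = []
qs = LazySeq(lambda: calls.append('query') or range(3))

list(qs)   # first evaluation: runs the "query"
list(qs)   # served from the cache: no second "query"
print(len(calls))  # 1
```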

Use exists() instead of if queryset: when you only need to know whether matches exist — it issues a cheap query with LIMIT 1 instead of fetching and caching every row.

Bulk Operations

Avoid the loop-and-save anti-pattern:

# Bad: N queries, N transactions
for data in source:
    Product.objects.create(**data)

# Good: a single bulk INSERT (one per batch of 1000)
Product.objects.bulk_create([Product(**d) for d in source], batch_size=1000)

# Bulk update
Product.objects.bulk_update(products, ['price', 'stock'])
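batch_size caps how many rows go into a single INSERT; the splitting itself is simple chunking, sketched here in plain Python:

```python
def chunked(items, batch_size):
    """Yield successive batches, mirroring how batch_size splits bulk_create inserts."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

rows = list(range(2500))
batches = list(chunked(rows, 1000))
print([len(b) for b in batches])  # [1000, 1000, 500]
```

So bulk_create with batch_size=1000 on 2,500 rows issues three INSERT statements rather than one — still vastly cheaper than 2,500 individual saves.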

Raw SQL When Needed

For complex analytics, raw SQL is sometimes clearer:

from django.db import connection

with connection.cursor() as cursor:
    cursor.execute('''
        SELECT author_id, COUNT(*), AVG(word_count)
        FROM blog_post
        WHERE published_at > %s
        GROUP BY author_id
    ''', [last_month])
    results = cursor.fetchall()
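cursor.fetchall() returns plain tuples. The Django docs' dictfetchall recipe maps them back to dicts using cursor.description; shown here with a minimal fake cursor so the sketch runs standalone:

```python
def dictfetchall(cursor):
    """Return all rows from a cursor as dicts keyed by column name."""
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in cursor.fetchall()]

class FakeCursor:
    # DB-API cursors expose column metadata in .description
    description = [('author_id',), ('post_count',), ('avg_words',)]
    def fetchall(self):
        return [(1, 42, 830.5), (2, 7, 1210.0)]

rows = dictfetchall(FakeCursor())
print(rows[0])  # {'author_id': 1, 'post_count': 42, 'avg_words': 830.5}
```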

Measuring the Impact

Use connection.queries to count queries in tests:

from django.db import connection, reset_queries
from django.test.utils import override_settings

@override_settings(DEBUG=True)
def test_post_list_queries():
    reset_queries()
    list(Post.objects.select_related('author').all())
    assert len(connection.queries) == 1

Or use assertNumQueries:

with self.assertNumQueries(2):
    response = self.client.get('/posts/')

Production Monitoring

Log slow queries with a middleware or use tools like:

  • Django Silk — per-request query inspection in production
  • Sentry Performance — traces and span-level query analysis
  • pg_stat_statements (PostgreSQL) — aggregate slow query stats

Summary Checklist

  • [ ] Use select_related for ForeignKey/OneToOne
  • [ ] Use prefetch_related for reverse/M2M
  • [ ] Use only()/defer() to reduce column loads
  • [ ] Use values() when you don't need model instances
  • [ ] Use annotate() with aggregates to avoid N+1
  • [ ] Use bulk_create/bulk_update for batch ops
  • [ ] Test query counts with assertNumQueries
  • [ ] Install Django Debug Toolbar in development
  • [ ] Monitor slow queries in production

The ORM is fast when used correctly. Know your tools, measure your queries, and keep your request times under 100ms.