
Odoo Performance Tuning: From 50K Records to 50M

Odoo can scale further than most people think — we run a deployment with 28 million sale lines and sub-second list views. This is the performance playbook: Postgres tuning, ORM optimization, worker config, and caching patterns we've used in production.

[Figure: database tier serving a 50M-record Odoo cluster.]

The conventional wisdom is that Odoo doesn't scale. The reality is that Odoo's defaults don't scale, but Odoo itself scales further than most teams realise. We run a deployment for a logistics client with 28 million sale order lines, 6 million stock moves, and 14 million accounting entries, serving 600 concurrent users with sub-second list view loads. The architecture is the same Odoo you would install from a Docker image — but every layer has been tuned.

This is the playbook we apply, in roughly the order we apply it.

Step 1: Measure Before You Tune

Two tools dominate. pg_stat_statements on Postgres surfaces the queries eating your IO and CPU. Odoo's --log-level=debug_sql (in dev) or the queries profiler decorator (in any environment) tells you which ORM calls produced which queries. Wire them together. Optimising blindly is the most common failure pattern we see.

-- Top 20 queries by total time
SELECT
  substring(query, 1, 80) AS query_snippet,
  calls,
  round(total_exec_time::numeric, 2) AS total_ms,
  round(mean_exec_time::numeric, 2) AS mean_ms,
  round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 2) AS pct
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;

Run this weekly. The top five queries usually account for 60–80% of database load — and they're often fixable with an index or an ORM tweak.
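
If the extension isn't enabled yet, it has to be preloaded at server start and created once per database. Minimal setup, assuming superuser access:

-- In postgresql.conf (requires a restart):
--   shared_preload_libraries = 'pg_stat_statements'

-- Then, once per database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Reset counters after a deploy so the numbers reflect current code:
SELECT pg_stat_statements_reset();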

Step 2: Indexing Strategy

Odoo creates indexes for primary keys, many2one foreign keys, and fields declared with index=True. That's it. Every other field you filter on in a domain falls back to sequential scans, which gets painful once tables reach millions of rows.

The pattern we use: identify the top 10 slow queries from pg_stat_statements, run EXPLAIN ANALYZE on each, and add indexes for the predicates that cause seq scans on large tables.

-- Example: list view filtering sale orders by state + date range
EXPLAIN ANALYZE
SELECT id FROM sale_order
WHERE state IN ('sale', 'done')
  AND date_order >= '2025-01-01'
  AND company_id = 1
ORDER BY date_order DESC
LIMIT 80;

-- Composite index targeting the filter + sort
CREATE INDEX CONCURRENTLY idx_sale_order_company_state_date
  ON sale_order (company_id, state, date_order DESC)
  WHERE state IN ('sale', 'done');

Two things matter: CONCURRENTLY avoids locking the table during index creation (essential in production), and the partial index (WHERE state IN ...) keeps the index small by covering only the rows the view actually queries.
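
Indexes also carry write cost, so verify each one earns its keep. A quick check against pg_stat_user_indexes; an index still at zero scans after a representative week is a candidate for dropping:

-- Index usage and size per index on a table
SELECT indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE relname = 'sale_order'
ORDER BY idx_scan ASC;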

In the ORM, declare index=True on fields you filter on frequently:

from odoo import fields, models

class SaleOrder(models.Model):
    _inherit = "sale.order"

    region_id = fields.Many2one("res.region", index=True)
    customer_tier = fields.Selection(
        selection=[("gold", "Gold"), ("silver", "Silver"), ("bronze", "Bronze")],
        index=True,
    )

Step 3: ORM Optimisation

The N+1 query is the single most common performance killer in Odoo code. Looping over records and accessing relations one at a time generates one SQL query per record. With 10,000 records, that's 10,000 queries.

# BAD: N+1 — one query per order to fetch partner
for order in orders:
    print(order.partner_id.name)

# GOOD: prefetch — single query for all partners
orders.mapped("partner_id").read(["name"])
for order in orders:
    print(order.partner_id.name)

Odoo's prefetch mechanism is usually smart enough to batch automatically when you iterate over a recordset that was fetched in bulk. The trap is when you mix iteration with operations that break the prefetch context — calls into other models, write operations, or invalidation events.
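
One common trap, sketched below (the field and value are illustrative): a per-record write() inside the loop invalidates cached fields and turns the batched read back into per-record queries.

# Trap: per-record write inside the loop, cache is invalidated each pass
for order in orders:
    order.write({"note": "reviewed"})

# Fix: one batched write over the whole recordset, a single UPDATE
orders.write({"note": "reviewed"})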

Read Group Performance

The read_group method powers pivot tables, dashboards, and kanban view groupings. It can be devastating at scale. With the default lazy=True, only the first groupby is applied and the remaining levels are expanded through follow-up calls; and without an explicit orderby and limit, Odoo returns every group even though the view only displays a handful. For aggregations over millions of rows, do the whole thing in one bounded Postgres pass.

# SLOW: unbounded — returns every group, no ordering or limit
result = self.env["sale.order.line"].read_group(
    domain=[("date_order", ">=", "2025-01-01")],
    fields=["price_subtotal:sum"],
    groupby=["product_id"],
)

# FAST: explicit lazy=False with proper aggregations and ordering
result = self.env["sale.order.line"].read_group(
    domain=[("date_order", ">=", "2025-01-01")],
    fields=["price_subtotal:sum", "product_uom_qty:sum"],
    groupby=["product_id"],
    lazy=False,
    orderby="price_subtotal desc",
    limit=100,
)

lazy=False applies all the groupbys in a single SQL GROUP BY instead of deferring levels to follow-up calls. Combined with a sensible limit and a covering index on (date_order, product_id, price_subtotal), what used to take 12 seconds runs in under 200ms.
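
The covering index mentioned above, sketched with the PG11+ INCLUDE clause so the aggregation can be served from the index alone (column names assume this deployment's schema):

CREATE INDEX CONCURRENTLY idx_sol_date_product_amounts
  ON sale_order_line (date_order, product_id)
  INCLUDE (price_subtotal, product_uom_qty);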

Custom Search Methods for Computed Fields

When a filter can't be expressed efficiently through stored columns, give the computed field its own search method. We do this for a "high-value customer" flag; computing it for every partner in Python and filtering afterwards would be far too slow, so the search method pushes the whole filter into a SQL subquery.

from odoo import fields, models

class ResPartner(models.Model):
    _inherit = "res.partner"

    is_high_value = fields.Boolean(
        compute="_compute_is_high_value",
        search="_search_is_high_value",
    )

    def _compute_is_high_value(self):
        # Mirror the search logic: confirmed sales above the threshold.
        groups = self.env["sale.order"].read_group(
            [("partner_id", "in", self.ids), ("state", "in", ["sale", "done"])],
            ["amount_total:sum"],
            ["partner_id"],
        )
        totals = {g["partner_id"][0]: g["amount_total"] for g in groups}
        for partner in self:
            partner.is_high_value = totals.get(partner.id, 0.0) > 100000

    def _search_is_high_value(self, operator, value):
        # Push the filter into a subquery rather than computing in Python
        query = """
            SELECT partner_id
            FROM sale_order
            WHERE state IN ('sale', 'done')
            GROUP BY partner_id
            HAVING SUM(amount_total) > 100000
        """
        self.env.cr.execute(query)
        ids = [row[0] for row in self.env.cr.fetchall()]
        op = "in" if (operator == "=") == bool(value) else "not in"
        return [("id", op, ids)]

Step 4: Worker and Longpolling Configuration

Odoo uses multi-process workers. The defaults assume a small deployment. For production we tune three values: workers (HTTP request workers), longpolling_port (real-time/chat — runs as a separate process; renamed gevent_port in Odoo 16+), and max_cron_threads.

Rule of thumb for a dedicated app server: workers = (2 × cpu_cores) + 1. For a 16-core machine, that's 33 HTTP workers plus 2 cron threads plus the longpolling worker. Each worker needs roughly 200–400MB of RAM under load, so budget accordingly: at the high end, 33 × 400MB is about 13GB before Postgres gets anything.

# odoo.conf
[options]
workers = 33
max_cron_threads = 2
longpolling_port = 8072

limit_memory_soft = 671088640    # 640MB; worker recycled after the current request
limit_memory_hard = 805306368    # 768MB
limit_request = 8192
limit_time_cpu = 600
limit_time_real = 1200
limit_time_real_cron = 3600

db_maxconn = 64

db_maxconn caps connections per Odoo worker process: with 33 workers and the default of 64, peak demand is 2,112 Postgres connections. Postgres typically can't handle that directly — which brings us to connection pooling.

Step 5: Connection Pooling With PgBouncer

Postgres connections are expensive — each one forks a backend process with its own memory. Beyond ~200 concurrent connections, performance degrades sharply. PgBouncer in transaction-pooling mode multiplexes your Odoo workers' connection demand onto a smaller pool of real Postgres connections.

; pgbouncer.ini
[databases]
odoo_prod = host=db.internal port=5432 dbname=odoo_prod pool_size=40

[pgbouncer]
pool_mode = transaction
max_client_conn = 2000
default_pool_size = 40
reserve_pool_size = 10
server_idle_timeout = 600

Watch out: Odoo relies on some session-level Postgres features (LISTEN/NOTIFY for the bus, session-scoped SET, advisory locks) that only work in session pooling mode, not transaction mode. We work around this with two PgBouncer pools — one in transaction mode for general queries, one in session mode for the worker that handles long-running cron jobs.
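
On the Odoo side, pointing workers at the pooler is a host/port change in odoo.conf. A sketch, assuming PgBouncer listens on its default port 6432 at a host we'll call pgbouncer.internal:

# odoo.conf
[options]
db_host = pgbouncer.internal
db_port = 6432
db_name = odoo_prod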

Step 6: Postgres Tuning

The defaults in postgresql.conf are conservative. On a dedicated 32GB Postgres host, we run:

shared_buffers = 8GB                 # 25% of RAM
effective_cache_size = 24GB          # 75% of RAM
maintenance_work_mem = 1GB
work_mem = 32MB                      # per-operation, careful
wal_buffers = 16MB
checkpoint_completion_target = 0.9
random_page_cost = 1.1               # SSD
effective_io_concurrency = 200
max_wal_size = 4GB
min_wal_size = 1GB

# Autovacuum tuning for high-write tables
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.025
autovacuum_naptime = 30s
autovacuum_max_workers = 6

The autovacuum tuning is critical for Odoo. The default autovacuum_vacuum_scale_factor of 0.2 means a table won't be vacuumed until 20% of its rows are dead tuples — on a 28M-row table, that's 5.6M dead rows of accumulated bloat. We drop it to 5%.
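
For the hottest tables, a per-table override is often better than pushing the global default lower. The thresholds here are illustrative:

-- Vacuum sale_order_line after ~2% dead rows instead of the global 5%
ALTER TABLE sale_order_line SET (
  autovacuum_vacuum_scale_factor = 0.02,
  autovacuum_vacuum_threshold = 10000
);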

Step 7: Async Work via Cron and Queue Jobs

Anything that takes more than 500ms should be async. Odoo's built-in cron is fine for scheduled batch work, but for event-driven async tasks (sending emails, generating reports, posting to external APIs), we use the OCA queue_job module.

from odoo import models
from odoo.addons.queue_job.job import job

class SaleOrder(models.Model):
    _inherit = "sale.order"

    @job(default_channel="root.invoicing")
    def generate_invoice_async(self):
        self.ensure_one()
        invoice = self._create_invoices()
        invoice.action_post()
        self.message_post(body=f"Invoice {invoice.name} generated")

    def action_generate_invoices(self):
        for order in self:
            order.with_delay(priority=10).generate_invoice_async()

with_delay() enqueues the job in a Postgres-backed queue; the jobrunner then dispatches queued jobs to Odoo workers as capacity allows. Channels let you cap concurrency per task type — useful when external APIs have rate limits.
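
Channel capacities are set in the server config, and the jobrunner must be loaded server-wide. A sketch for the channel used above (note: recent queue_job releases replace the @job decorator with queue.job.function records; the decorator shown matches older releases):

# odoo.conf
[options]
server_wide_modules = web,queue_job

[queue_job]
channels = root:4,root.invoicing:2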

Key Takeaways

  • Measure first with pg_stat_statements and EXPLAIN ANALYZE. Don't optimise blind.
  • Add partial composite indexes for top-N slow queries. Use CREATE INDEX CONCURRENTLY.
  • Avoid N+1; use read_group with lazy=False for aggregations; give computed fields a SQL-backed search method when the default plan is bad.
  • Tune workers to 2 × cores + 1; budget RAM per worker.
  • Put PgBouncer in front of Postgres in transaction-pooling mode for general traffic.
  • Tune Postgres autovacuum aggressively — 5% scale factor on high-churn tables.
  • Push slow operations into queue_job; channel them to control external API rate.