Odoo · Performance · PostgreSQL · Linux

The Odoo Workers Formula That Stopped Our Server From Crashing Under Load

2026-02-01 · 8 min read

The Odoo workers configuration is one of the most cargo-culted settings in the DevOps world. Everyone sets it to some number they found in a forum post, crosses their fingers, and wonders why their server crashes under load. Here is what actually happens — and the formula that fixed our crashing servers.

What "Workers" Actually Means

When you set workers = N in your Odoo configuration file, you are telling Odoo to run N worker processes instead of the default single-threaded mode. Each worker process can handle exactly one HTTP request at a time. If all workers are busy and a new request arrives, it queues — or times out.

The single-threaded mode (workers = 0) is fine for development. For production with multiple concurrent users, it is a performance cliff. Set workers and you get real process-level concurrency. But set it too high and you will starve PostgreSQL of connections, exhaust RAM, and crash your server under exactly the kind of load spikes you need to handle gracefully.

The Formula

The formula we use after tuning across dozens of production servers:

workers = (vCPU x 2) + 1
limit_memory_hard = 2684354560   # 2.5 GB per worker
limit_memory_soft = 2147483648   # 2.0 GB per worker
limit_time_cpu = 600
limit_time_real = 1200

For a 4 vCPU server this gives 9 workers. The soft limit causes a worker that exceeds 2 GB to be recycled gracefully once it finishes its current request; the hard limit at 2.5 GB kills it immediately, and Odoo restarts it automatically. These numbers are not arbitrary — they come from Odoo's own documentation combined with production observation.
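The arithmetic above can be sketched as a small shell snippet; in practice you would replace the hard-coded vCPU count with the output of nproc on the target host:

```shell
# Sketch: derive worker count and memory limits from the host's CPU count,
# using the (vCPU x 2) + 1 formula above. Adjust for your workload.
vcpus=4                                              # e.g. vcpus=$(nproc)
workers=$(( vcpus * 2 + 1 ))                         # 4 vCPUs -> 9 workers
limit_memory_soft=$(( 2 * 1024 * 1024 * 1024 ))      # 2.0 GB per worker
limit_memory_hard=$(( 5 * 1024 * 1024 * 1024 / 2 ))  # 2.5 GB per worker
echo "workers = $workers"
echo "limit_memory_soft = $limit_memory_soft"
echo "limit_memory_hard = $limit_memory_hard"
```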

The db_maxconn Trap

Here is the setting that causes silent death on high-traffic servers: db_maxconn. The default is 64 connections per worker. With 9 workers, that is potentially 576 PostgreSQL connections. If your PostgreSQL max_connections is set to the out-of-the-box default of 100, you have a time bomb.

When Odoo workers attempt to open more connections than PostgreSQL allows, you get cryptic errors: "too many connections" and "FATAL: remaining connection slots are reserved for non-replication superuser connections". These errors appear under load, at the worst possible moment, and are often misdiagnosed as Odoo bugs.

Our rule: max_connections in postgresql.conf must always be at least (workers x db_maxconn) + 10. In practice we set db_maxconn = 8 and max_connections = 200 for a standard 9-worker setup. PgBouncer is the right long-term solution for connection pooling, but correctly sizing these settings eliminates the immediate problem.
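The sizing rule is easy to check mechanically. A minimal sketch, using the 9-worker numbers from this article:

```shell
# Sketch: verify max_connections against the (workers x db_maxconn) + 10 rule.
workers=9
db_maxconn=8
max_connections=200
required=$(( workers * db_maxconn + 10 ))   # 9 * 8 + 10 = 82
if [ "$max_connections" -ge "$required" ]; then
  echo "OK: max_connections=$max_connections covers required=$required"
else
  echo "DANGER: raise max_connections to at least $required"
fi
```

With the default db_maxconn of 64, the same check demands at least 586 connections, which is how the time bomb described above arms itself.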

PostgreSQL shared_buffers: The Most Ignored Setting

PostgreSQL's shared_buffers defaults to 128 MB. For any Odoo installation with real data and real users, this is criminally low. A practical starting point: 25% of total RAM.

On a server with 16 GB RAM, a production-tuned postgresql.conf looks like this:

shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 64MB
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
random_page_cost = 1.1        # for SSD storage
default_statistics_target = 100

The effective_cache_size is not memory you are allocating — it is a hint to the PostgreSQL query planner about how much OS-level cache is available. Setting it correctly changes query plan choices in ways that dramatically affect query speed for large Odoo datasets.
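The 25%/75% split used in the config above can be derived the same way for any RAM size; this sketch hard-codes 16 GB, but on a real host you would read MemTotal from /proc/meminfo:

```shell
# Sketch: size shared_buffers at 25% of RAM and effective_cache_size at 75%,
# matching the 16 GB example above. These are starting points, not laws.
total_ram_gb=16
shared_buffers_gb=$(( total_ram_gb / 4 ))            # 25% -> 4GB
effective_cache_size_gb=$(( total_ram_gb * 3 / 4 ))  # 75% -> 12GB
echo "shared_buffers = ${shared_buffers_gb}GB"
echo "effective_cache_size = ${effective_cache_size_gb}GB"
```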

Diagnosing Before You Tune

Before changing any of these settings, collect baseline data. Run SELECT * FROM pg_stat_activity WHERE state = 'active' during peak load to see actual connection counts. Use htop to measure real memory consumption per Odoo worker process. Look at pg_stat_bgwriter to see if you have a checkpoint tuning problem.
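The diagnostic queries above, spelled out for running via psql during peak load (note that the checkpoint counters shown here moved from pg_stat_bgwriter to pg_stat_checkpointer in PostgreSQL 17):

```sql
-- Active connections per database during peak load
SELECT datname, count(*) AS active
FROM pg_stat_activity
WHERE state = 'active'
GROUP BY datname;

-- Checkpoint behaviour: a high ratio of requested (checkpoints_req) to
-- scheduled (checkpoints_timed) checkpoints suggests checkpoint tuning is needed
SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;
```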

Tune based on measurements, not guesses. The formula is a starting point, not a finished product. Every server has a different workload profile.

The Result

After applying these settings on a steel industry client's 11-server Odoo fleet, page load times dropped by 35% and we eliminated the weekly "server is slow" support tickets that had been ongoing for months. The changes took 2 hours to implement and test across the fleet. The understanding of why they work took years of production experience to accumulate. Now you have it in 8 minutes.
