
PostgreSQL Performance Tuning for Indian Scale Applications

TechZunction Team · 20 November 2025 · 12 min read

PostgreSQL is our database of choice for every production application, but its default configuration is deliberately conservative — sized so the server can start on modest hardware, not so it can serve real traffic. When your application starts handling millions of rows and thousands of concurrent connections — common for Indian consumer applications during festive sales or flash promotions — the defaults will buckle. This post covers the tuning strategies we apply to every client project, from connection management to query-level optimisation.

Connection pooling is the first bottleneck most teams hit. Each PostgreSQL connection consumes roughly 10MB of memory, and the default max_connections of 100 is exhausted quickly in a serverless or auto-scaling environment where each function instance opens its own connection. We deploy PgBouncer in transaction pooling mode between the application and the database, allowing thousands of application connections to share a pool of 50 to 100 actual database connections. For a Supabase-hosted database, we use their built-in connection pooler with similar settings.
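A minimal `pgbouncer.ini` along these lines illustrates the setup — the database name, host, and pool sizes here are placeholders, not values from any specific project:

```ini
[databases]
; application connects to PgBouncer on 6432; PgBouncer forwards to Postgres
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

; transaction pooling: a server connection is assigned only for the
; duration of a transaction, so thousands of clients can share the pool
pool_mode = transaction
max_client_conn = 2000
default_pool_size = 80
```

One caveat with transaction pooling: session-level state (prepared statements, `SET` commands, advisory locks held across transactions) does not survive between transactions, so the application must avoid relying on it.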

Query optimisation starts with understanding EXPLAIN ANALYZE output. We run every query that touches user-facing endpoints through EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) and look for sequential scans on large tables, nested loop joins where hash joins would be more efficient, and excessive buffer reads. The most common fix is adding the right composite index — not just indexing individual columns, but creating indexes that match the exact column order and direction used in WHERE and ORDER BY clauses. We have seen single-query performance improve by 100x simply by adding a well-designed composite index.
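As an illustration, a hypothetical `orders` table makes the pattern concrete — the table, columns, and index name below are examples, not from a real client schema:

```sql
-- Inspect the plan for a user-facing query: filter + sort + limit.
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT id, total_amount
FROM orders
WHERE user_id = 42 AND status = 'paid'
ORDER BY created_at DESC
LIMIT 20;

-- A composite index matching the exact columns and sort direction lets the
-- planner satisfy both the WHERE clause and the ORDER BY from the index,
-- avoiding a sequential scan and an explicit sort step.
-- CONCURRENTLY builds the index without blocking writes to the table.
CREATE INDEX CONCURRENTLY idx_orders_user_status_created
    ON orders (user_id, status, created_at DESC);
```

Column order matters: equality predicates (`user_id`, `status`) come first, then the sort column (`created_at`), so the index entries for a given user and status are already in the order the query wants.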

For tables that grow beyond 50 million rows — transaction logs, event streams, audit trails — we implement range partitioning by date. With PostgreSQL's native declarative partitioning, rows are routed to the correct child table automatically on insert (the partitions themselves are created ahead of time, either by a scheduled job or a tool like pg_partman), and queries that include a date range in the WHERE clause are pruned down to the relevant partitions, skipping months or years of irrelevant data. Combined with a retention policy that detaches and archives old partitions, this keeps query performance consistent regardless of total data volume. One client's analytics queries went from 12 seconds to 200 milliseconds after we partitioned their 800-million-row events table by month.
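The DDL shape is straightforward — the table and partition names below are a generic sketch of the pattern, not the client's actual schema:

```sql
-- Parent table: no rows live here; it only defines the partitioning scheme.
CREATE TABLE events (
    id          bigint GENERATED ALWAYS AS IDENTITY,
    occurred_at timestamptz NOT NULL,
    payload     jsonb
) PARTITION BY RANGE (occurred_at);

-- One child table per month; inserts are routed here automatically,
-- and queries filtering on occurred_at prune to matching partitions.
CREATE TABLE events_2025_11 PARTITION OF events
    FOR VALUES FROM ('2025-11-01') TO ('2025-12-01');

-- Retention: detach an old partition (it becomes a standalone table
-- that can be archived or dropped without touching live data).
ALTER TABLE events DETACH PARTITION events_2024_11;
```

The partition key must appear in the query's WHERE clause for pruning to kick in; a query without a date filter still scans every partition.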
