# Connection Pooling — Reuse DB Connections
Maintain a pool of reusable DB connections — opening a connection per request is expensive and kills DB performance at scale.
## When to use
- Always, for any persistent DB connection in a server application (PostgreSQL, MySQL, Redis)
## Tradeoffs
- Pool too small → requests queue and time out; pool too large → DB connection exhaustion across all instances
- Idle connections still hold DB resources (file descriptors, memory)
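The pattern itself is small: a bounded collection of live connections that callers check out and return, instead of opening a new one per request. A minimal stdlib-only sketch of the idea (the `Pool` class and its `factory` argument are illustrative, not a real driver API):

```python
import queue

class Pool:
    """Minimal connection pool: a bounded queue of reusable connections."""

    def __init__(self, factory, size, timeout=3.0):
        self._timeout = timeout            # seconds to wait for a free connection
        self._conns = queue.Queue(maxsize=size)
        for _ in range(size):              # pre-open all connections up front
            self._conns.put(factory())

    def acquire(self):
        # Blocks until a connection is free; raises queue.Empty on timeout
        # (the "pool too small -> requests queue and time out" tradeoff).
        return self._conns.get(timeout=self._timeout)

    def release(self, conn):
        self._conns.put(conn)              # return the connection for reuse

# Demo: a fake "connection" factory that records how many were opened.
opened = []
pool = Pool(lambda: opened.append(1) or object(), size=2)

c1 = pool.acquire()
c2 = pool.acquire()   # pool is now empty; a third acquire would block
pool.release(c1)
c3 = pool.acquire()   # reuses c1's connection; nothing new is opened
```

Real pools (pgxpool, SQLAlchemy's QueuePool below) add health checks, lazy creation, and idle reaping on top of this core loop, but the checkout/return discipline is the same.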
### Go
```go
import (
	"context"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

// NewPool builds a pgx connection pool with explicit size and timeout limits.
func NewPool(ctx context.Context, dsn string) (*pgxpool.Pool, error) {
	cfg, err := pgxpool.ParseConfig(dsn)
	if err != nil {
		return nil, err
	}
	cfg.MinConns = 2                                // keep warm connections ready
	cfg.MaxConns = 10                               // hard cap per app instance
	cfg.MaxConnIdleTime = 5 * time.Minute           // release idle connections
	cfg.ConnConfig.ConnectTimeout = 3 * time.Second // fail fast on a down DB
	return pgxpool.NewWithConfig(ctx, cfg)
}
```
### Python

```python
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:pass@host/db",
    pool_size=10,        # max persistent connections
    max_overflow=5,      # extra connections under burst
    pool_timeout=3,      # seconds to wait for a connection
    pool_recycle=300,    # recycle connections after 5 minutes
    pool_pre_ping=True,  # validate connection before use
)
```
Gotcha: Total DB connections = N app instances × max pool size. PostgreSQL defaults to `max_connections = 100`, so 10 instances with a pool size of 20 already demand 200 connections, double the default cap. Put PgBouncer in front of the database as a connection proxy to avoid exhausting it.
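The budget math is worth making explicit, because SQLAlchemy's `max_overflow` counts against it too. A quick check (`total_connections` is an illustrative helper, not a library function):

```python
# Worst-case server-side connections a fleet of app instances can open.
def total_connections(instances, pool_size, overflow=0):
    # Each instance may open up to pool_size + overflow connections.
    return instances * (pool_size + overflow)

max_connections = 100  # PostgreSQL default

demand = total_connections(instances=10, pool_size=20)
print(demand > max_connections)  # 200 demanded: over the cap

# The SQLAlchemy config above: pool_size=10 plus max_overflow=5 per instance.
burst = total_connections(instances=10, pool_size=10, overflow=5)
print(burst > max_connections)   # 150 demanded: still over the cap
```

Size pools so the fleet-wide worst case stays under `max_connections` (minus headroom for superuser and maintenance connections), or route everything through PgBouncer.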