
Connection Pooling — Reuse DB Connections

Maintain a pool of reusable DB connections — opening a new connection per request is expensive (TCP, TLS, and auth handshakes) and kills DB performance at scale.

When to use

  • Always, for any persistent DB connection in a server application (PostgreSQL, MySQL, Redis)

Tradeoffs

  • Pool too small → requests queue and time out; pool too large → DB connection exhaustion across all instances
  • Idle connections still hold DB resources (file descriptors, memory)
import (
	"context"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

func NewPool(ctx context.Context, dsn string) (*pgxpool.Pool, error) {
	cfg, err := pgxpool.ParseConfig(dsn)
	if err != nil {
		return nil, err
	}

	cfg.MinConns = 2                                // keep warm connections to avoid cold-start latency
	cfg.MaxConns = 10                               // hard cap per app instance
	cfg.MaxConnIdleTime = 5 * time.Minute           // close idle connections to free DB resources
	cfg.ConnConfig.ConnectTimeout = 3 * time.Second // fail fast if the DB is unreachable

	return pgxpool.NewWithConfig(ctx, cfg)
}

Gotcha: Total DB connections = N app instances × max pool size. PostgreSQL defaults to max_connections=100, so 10 instances with a pool size of 20 means 200 connections — double the default limit. Put PgBouncer in front of the DB as a connection proxy to multiplex app connections onto a small server-side pool.
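A minimal sketch of that PgBouncer setup, using real pgbouncer.ini section and key names (the database name, host, and sizes are illustrative assumptions):

```ini
; pgbouncer.ini — sketch, not production-tuned
[databases]
appdb = host=db.internal port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction   ; share server connections at transaction granularity
default_pool_size = 20    ; server-side connections per database/user pair
max_client_conn = 1000    ; app-side clients multiplexed onto that pool
```

Apps then point their DSN at PgBouncer on port 6432 instead of PostgreSQL directly; note that transaction pooling breaks session-level features such as prepared statements and advisory locks, so check client compatibility first.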