Object Pooling

Memory buffer reuse for reduced allocations

Bingsan uses sync.Pool from Go's standard library to reduce memory allocation pressure in hot paths.

Overview

Two types of pools are implemented:

Pool         Purpose                      Default Size   Max Size
BufferPool   JSON serialization buffers   4 KB           64 KB
BytePool     OAuth token generation       32 bytes       32 bytes

How It Works

BufferPool

The BufferPool provides reusable bytes.Buffer instances for JSON serialization:

Request 1 ──► Get buffer ──► Serialize JSON ──► Return buffer ──► Pool
                  │                                  ▲
                  └──────────────────────────────────┘
                              Reused

Key characteristics:

  • Initial capacity: 4 KB (typical JSON metadata size)
  • Maximum size: 64 KB (oversized buffers are discarded)
  • Thread-safe via sync.Pool
  • Automatic reset on get
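
The following is a minimal sketch of how such a pool can be built on sync.Pool. It is an illustration of the technique under the sizes listed above, not Bingsan's actual source; the package layout and internals are assumptions.

package pool

import (
    "bytes"
    "sync"
)

const (
    DefaultBufferSize = 4096  // initial buffer capacity in bytes
    MaxBufferSize     = 65536 // maximum buffer size before discard
)

var bufferPool = sync.Pool{
    New: func() any {
        // Pre-size new buffers to the typical JSON metadata size.
        return bytes.NewBuffer(make([]byte, 0, DefaultBufferSize))
    },
}

// GetBuffer hands out an empty buffer, reusing one from the pool when possible.
func GetBuffer() *bytes.Buffer {
    buf := bufferPool.Get().(*bytes.Buffer)
    buf.Reset() // automatic reset on get
    return buf
}

// PutBuffer returns a buffer to the pool, discarding it if it grew too large.
func PutBuffer(buf *bytes.Buffer) {
    if buf.Cap() > MaxBufferSize {
        return // oversized buffers are discarded rather than pooled
    }
    bufferPool.Put(buf)
}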

BytePool

The BytePool provides fixed-size 32-byte slices for OAuth access token generation.
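
A comparable sketch for a fixed-size byte pool is shown below; GetTokenBytes/PutTokenBytes are illustrative names, not Bingsan's API. Pooling *[]byte rather than []byte avoids an extra allocation when the slice is stored in the pool's interface value.

const TokenSize = 32 // fixed size for token byte slices

var bytePool = sync.Pool{
    New: func() any {
        b := make([]byte, TokenSize)
        return &b
    },
}

// GetTokenBytes returns a pooled 32-byte slice for token generation.
func GetTokenBytes() *[]byte { return bytePool.Get().(*[]byte) }

// PutTokenBytes returns the slice to the pool for reuse.
func PutTokenBytes(b *[]byte) { bytePool.Put(b) }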

Usage Patterns

In API Handlers

func (h *Handler) GetTable(ctx *fiber.Ctx) error {
    buf := pool.GetBuffer()
    defer pool.PutBuffer(buf)  // Always return!

    encoder := json.NewEncoder(buf)
    if err := encoder.Encode(table); err != nil {
        return err
    }

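    // If ctx.Send retains a reference to buf's backing array after the
    // handler returns, copy the bytes before the deferred PutBuffer runs
    // (see "Don't Hold References" below).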
    return ctx.Send(buf.Bytes())
}

Configuration

Constant            Value   Description
DefaultBufferSize   4096    Initial buffer capacity in bytes
MaxBufferSize       65536   Maximum buffer size before discard
TokenSize           32      Fixed size for token byte slices

Best Practices

Always Use defer

buf := pool.GetBuffer()
defer pool.PutBuffer(buf)  // Guaranteed return

Don't Hold References

// Wrong: Reference escapes
data := buf.Bytes()
pool.PutBuffer(buf)
return data  // data is now invalid!

// Correct: Copy if needed
data := make([]byte, buf.Len())
copy(data, buf.Bytes())
pool.PutBuffer(buf)
return data

Metrics

Pool performance is exposed via Prometheus:

Metric                        Type      Description
bingsan_pool_gets_total       Counter   Total Get() operations
bingsan_pool_returns_total    Counter   Total Put() operations
bingsan_pool_discards_total   Counter   Oversized items discarded
bingsan_pool_misses_total     Counter   New allocations (pool empty)
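
The metric names above could be registered with the Prometheus Go client roughly as follows; the variable names and package layout are assumptions, and only two of the four counters are shown for brevity.

package pool

import (
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

var (
    poolGets = promauto.NewCounter(prometheus.CounterOpts{
        Name: "bingsan_pool_gets_total",
        Help: "Total Get() operations",
    })
    poolDiscards = promauto.NewCounter(prometheus.CounterOpts{
        Name: "bingsan_pool_discards_total",
        Help: "Oversized items discarded",
    })
)

// GetBuffer would call poolGets.Inc() on every Get(); PutBuffer would call
// poolDiscards.Inc() when it drops an oversized buffer.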

Benchmarks

go test -bench=BenchmarkPool -benchmem ./tests/benchmark/...

Expected results:

Benchmark               Time      Allocs
BufferPool.Get/Put      ~50 ns    0
BufferPool.Concurrent   ~100 ns   0
BytePool.Get/Put        ~30 ns    0
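
A benchmark covering the Get/Put round trip might look roughly like this; the benchmark names, payload, and import path are assumptions rather than the contents of ./tests/benchmark.

package benchmark

import (
    "testing"

    "bingsan/pool" // import path is an assumption
)

func BenchmarkBufferPoolGetPut(b *testing.B) {
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        buf := pool.GetBuffer()
        buf.WriteString(`{"name":"example"}`)
        pool.PutBuffer(buf)
    }
}

func BenchmarkBufferPoolConcurrent(b *testing.B) {
    b.ReportAllocs()
    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            buf := pool.GetBuffer()
            buf.WriteString(`{"name":"example"}`)
            pool.PutBuffer(buf)
        }
    })
}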

Troubleshooting

High Discard Rate

If bingsan_pool_discards_total is increasing rapidly:

  1. Cause: Many large responses exceeding the 64 KB maximum buffer size
  2. Impact: Reduced pool effectiveness, since discarded buffers must be freshly allocated on the next Get()
  3. Solution: Consider increasing MaxBufferSize for schemas with 100+ columns
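
To gauge how often discards happen relative to normal returns, a rate-based Prometheus query over the counters above can help, for example:

rate(bingsan_pool_discards_total[5m]) / rate(bingsan_pool_returns_total[5m])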
