perf: batch INSERT for PostgreSQL append_events #360

@jwilger-ai-bot

Description

Problem

PostgresEventStore::append_events() runs a separate query().execute() for each event inside the transaction. For 100 events, that's 100 individual SQL round-trips rather than a single multi-row INSERT.

Benchmark data:

  • Single event append: 3.9 ms (254 ops/sec)
  • 100-event append: 16.7 ms (~6,000 events/sec)
  • Transaction commit/fsync is ~3.5 ms fixed cost; per-event overhead is ~130 µs/event
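As a sanity check, the numbers above fit a simple fixed-plus-linear cost model (the function name and constants here are just an illustration of that model, not project code):

```rust
// Illustrative latency model for the current per-event INSERT loop:
// a fixed commit/fsync cost plus a per-event round-trip cost.
fn append_latency_ms(n_events: u64) -> f64 {
    let commit_fsync_ms = 3.5; // fixed transaction commit/fsync cost
    let per_event_ms = 0.13;   // ~130 µs per-event round-trip overhead
    commit_fsync_ms + per_event_ms * n_events as f64
}

fn main() {
    // 100 events: 3.5 + 100 * 0.13 = 16.5 ms, close to the measured 16.7 ms
    println!("{:.1} ms", append_latency_ms(100));
}
```

Batching should mostly eliminate the linear term, leaving the ~3.5 ms fsync floor plus one round-trip.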

Marten (.NET) achieves 3.6 ms for a 100-event batch on the same PostgreSQL setup.

Proposed Solution

Replace the per-event INSERT loop with a single multi-row INSERT statement (or use PostgreSQL's COPY protocol for larger batches). All events in a single append_events() call share the same transaction, so they can be batched into one statement.
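A minimal sketch of the multi-row INSERT construction (table and column names here are illustrative, not taken from eventcore-postgres; with sqlx, `QueryBuilder::push_values` does this placeholder bookkeeping for you and also binds the values):

```rust
// Build one multi-row INSERT with numbered placeholders ($1..$N)
// instead of issuing one INSERT statement per event.
// Schema names below are hypothetical.
fn build_batch_insert(n_events: usize) -> String {
    let columns = ["stream_id", "version", "event_type", "payload"];
    let mut sql = format!("INSERT INTO events ({}) VALUES ", columns.join(", "));
    let mut placeholder = 1;
    for i in 0..n_events {
        if i > 0 {
            sql.push_str(", ");
        }
        // One ($k, $k+1, ...) tuple per event row.
        let row: Vec<String> = (0..columns.len())
            .map(|_| {
                let p = format!("${placeholder}");
                placeholder += 1;
                p
            })
            .collect();
        sql.push_str(&format!("({})", row.join(", ")));
    }
    sql
}

fn main() {
    // Two events collapse into a single statement with 8 placeholders.
    println!("{}", build_batch_insert(2));
}
```

Note that PostgreSQL caps bind parameters at 65,535 per statement, so very large batches would need chunking or the COPY protocol mentioned above.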

Expected Impact

100-event append: 16.7 ms → ~5-6 ms (2-3x improvement), bringing PostgreSQL throughput closer to Marten-competitive numbers.

Location

eventcore-postgres/src/lib.rs, in the append_events() method (the per-event INSERT loop).

Benchmark Baseline

Run cargo bench -p eventcore-bench --bench store_operations -- 'store/append/postgres' to measure before/after.
