
perf: offset-sorted mmap reads + disk read concurrency limiter#204

Merged
JustMaier merged 1 commit into main from ivy/doc-read-optimization
Apr 13, 2026

Conversation

@JustMaier
Contributor

Summary

  • Sort DataSilo get_many_timed keys by data.bin offset before reading — turns random mmap page faults into sequential access with kernel readahead
  • Add 16-permit tokio::sync::Semaphore on the spawn_blocking doc read path — caps concurrent disk readers from 78 (observed) to 16, preventing I/O contention collapse

Problem

Prod metrics show 0.35% of disk fetches taking 5-10s and 2.2% taking 2.5-5s. Root cause: 78 concurrent readers issuing random mmap page faults across multi-GB data.bin files holding 109M records. The I/O scheduler thrashes under this concurrency, inflating P99 to 2.5s (from 479ms at best steady-state).

Changes

  • crates/datasilo/src/lib.rs: Look up each key's offset in the hash index, sort by offset, read data.bin in sequential order, scatter results back to original positions
  • src/server.rs: Static LazyLock<Semaphore> with 16 permits acquired before spawn_blocking — excess readers queue in async context instead of all faulting simultaneously
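The sort-then-scatter step in the lib.rs change can be sketched roughly as follows. This is illustrative only: `read_at`, the flat `data` slice, and the `HashMap` index shape are stand-ins, not the actual DataSilo API.

```rust
use std::collections::HashMap;

// Stand-in for the real mmap slice lookup at a data.bin offset.
fn read_at(data: &[u8], off: usize) -> u8 {
    data[off]
}

// Sketch of get_many_timed's new read order: resolve offsets, sort,
// read sequentially, then scatter results back to the caller's order.
fn get_many_sorted(keys: &[u64], index: &HashMap<u64, usize>, data: &[u8]) -> Vec<Option<u8>> {
    // Pair each key with its original slot and its data.bin offset.
    let mut order: Vec<(usize, usize)> = keys
        .iter()
        .enumerate()
        .filter_map(|(slot, k)| index.get(k).map(|&off| (slot, off)))
        .collect();
    // Sort by file offset so reads walk the mmap sequentially,
    // letting kernel readahead and request merging kick in.
    order.sort_by_key(|&(_, off)| off);

    // Scatter results back to the original key positions.
    let mut out = vec![None; keys.len()];
    for (slot, off) in order {
        out[slot] = Some(read_at(data, off));
    }
    out
}

fn main() {
    let data = vec![10u8, 20, 30];
    let index = HashMap::from([(100u64, 2usize), (200, 0), (300, 1)]);
    // Keys arrive in arbitrary order; results come back in that order.
    let got = get_many_sorted(&[300, 100, 200], &index, &data);
    println!("{:?}", got); // → [Some(20), Some(30), Some(10)]
}
```

Keys missing from the index simply stay `None` in the output, which keeps the scatter step trivial.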

Test plan

  • All 36 datasilo tests pass
  • Server binary compiles clean (release)
  • Deploy to prod, compare query_doc_disk_fetch_seconds tail buckets (>500ms, >2.5s, >5s)
  • Monitor bitdex_docstore_concurrent_reads gauge — should cap at ~16 instead of ~78

🤖 Generated with Claude Code

Two changes to fix 5-10s doc read tail latency:

1. Sort keys by data.bin offset before reading (datasilo get_many_timed).
   Turns random mmap page faults into sequential access, enabling kernel
   readahead and I/O request merging. At 109M records across multi-GB
   data.bin, random access causes page fault storms under concurrency.

2. Add 16-permit semaphore on the spawn_blocking doc read path (server.rs).
   Caps concurrent disk readers from 78 (observed in prod) to 16. Excess
   readers queue in async context (no thread consumed) instead of all
   doing simultaneous page faults.

Prod data: 0.35% of disk fetches took 5-10s, 2.2% took 2.5-5s.
Expected: sequential reads reduce per-batch latency; semaphore eliminates
the I/O contention that amplifies individual read times.
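The capping mechanism in change 2 uses tokio's async `Semaphore` behind a `LazyLock`, so queued readers park as tasks without holding a thread. Since that depends on the tokio runtime, the sketch below is a std-only synchronous analogue of the same bounded-concurrency idea (4 permits instead of 16, all names hypothetical); the key property demonstrated is that in-flight readers never exceed the permit count.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Minimal counting semaphore on Mutex + Condvar. Unlike this analogue,
// the real tokio::sync::Semaphore parks waiting *async tasks*, so no
// OS thread is consumed while queued.
struct Semaphore {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Semaphore { permits: Mutex::new(n), cv: Condvar::new() }
    }
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        // Loop guards against spurious wakeups.
        while *p == 0 {
            p = self.cv.wait(p).unwrap();
        }
        *p -= 1;
    }
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    let sem = Arc::new(Semaphore::new(4)); // the PR uses 16 permits
    let peak = Arc::new(Mutex::new((0usize, 0usize))); // (current, max seen)

    let handles: Vec<_> = (0..32)
        .map(|_| {
            let sem = Arc::clone(&sem);
            let peak = Arc::clone(&peak);
            thread::spawn(move || {
                sem.acquire();
                {
                    let mut p = peak.lock().unwrap();
                    p.0 += 1;
                    p.1 = p.1.max(p.0);
                }
                thread::sleep(Duration::from_millis(5)); // simulated disk read
                peak.lock().unwrap().0 -= 1;
                sem.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let max_seen = peak.lock().unwrap().1;
    println!("peak concurrent readers: {}", max_seen);
    assert!(max_seen <= 4); // the cap holds regardless of offered load
}
```

The counter is only incremented while a permit is held, so the peak can never exceed the permit count, which is exactly the invariant the prod gauge (`bitdex_docstore_concurrent_reads` capping at ~16) is expected to confirm.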

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@JustMaier JustMaier merged commit 3442a53 into main Apr 13, 2026
1 check failed
