Summary
`readInbox` in `src/core/operations/messaging.ts:139` reads ALL discussion contributions from the store and filters in JavaScript. When filtering by recipient, the store-level limit is disabled entirely (line 150: `storeLimit = needsRecipientFilter ? undefined : (query?.limit ?? 50) * 3`) and the function pulls every discussion into memory before slicing to the requested limit.
The author flagged this as a known workaround in the comment at lines 143-146:
> When filtering by recipient(s), we must fetch all discussions so post-fetch filtering doesn't miss older messages buried under unrelated traffic.
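To make the hot path concrete, here is a minimal reconstruction of the behavior described above. Only `readInbox`'s limit expression is quoted from the source; the type shape, `computeStoreLimit`, and `filterInbox` are hypothetical names introduced for illustration.

```typescript
// Hypothetical shape of a stored contribution; not the actual source type.
type Contribution = { kind: string; recipients: string[]; createdAt: number };

// Mirrors line 150 of messaging.ts: the store-level limit is dropped
// entirely whenever a recipient filter is present.
function computeStoreLimit(query?: { recipient?: string; limit?: number }): number | undefined {
  const needsRecipientFilter = query?.recipient !== undefined;
  return needsRecipientFilter ? undefined : (query?.limit ?? 50) * 3;
}

// Post-fetch filtering: every row has already been materialized in memory
// before the recipient filter and the slice to `limit` are applied.
function filterInbox(
  all: Contribution[],
  query?: { recipient?: string; limit?: number },
): Contribution[] {
  const limit = query?.limit ?? 50;
  const matching = query?.recipient
    ? all.filter((c) => c.recipients.includes(query.recipient!))
    : all;
  return matching.slice(0, limit);
}
```

The `undefined` store limit is what turns a bounded read into a full-table scan: the slice happens only after everything is in memory.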
Impact
Scaling math: with 50K total discussions where 1K are addressed to @bob, calling `readInbox({ recipient: '@bob', limit: 50 })` pulls 50K rows, allocates 50K Contribution objects, and then keeps 50.
- Nexus: every inbox poll fetches the entire discussion table over the network. With multiple agents each polling their own inbox, these full-table scans multiply.
- SQLite: still costly; each inbox call materializes thousands of rows to produce a constant-size result.
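A rough model of the scan-vs-keep ratio from the scaling example above (50K total discussions, 1K addressed to @bob, limit 50); all names here are illustrative:

```typescript
const TOTAL = 50_000;
const TO_BOB = 1_000;
const LIMIT = 50;

// Synthetic table: the first 1K rows are addressed to @bob.
const rows = Array.from({ length: TOTAL }, (_, i) => ({
  recipients: i < TO_BOB ? ['@bob'] : ['@other'],
}));

let scanned = 0;
const kept = rows
  .filter((r) => {
    scanned++; // every row is visited before any limit applies
    return r.recipients.includes('@bob');
  })
  .slice(0, LIMIT);
// scanned === 50_000, kept.length === 50: a 1000:1 scan-to-result ratio.
```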
Recommended fix
Add an indexed query on the contribution store: `list({ kind, recipientHandle?: string })` that uses a recipient index in SQLite (a `discussion_recipients(cid, handle)` junction table populated on write) and a server-side filter in Nexus.
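The access pattern the fix enables can be sketched with an in-memory model: a write-time recipient index so `list` only touches rows addressed to the handle, never the whole table. This is an illustrative sketch, not the proposed schema; in the real fix the `byRecipient` map would be the `discussion_recipients` junction table plus an index on `handle`.

```typescript
type Contribution = { cid: string; kind: string; createdAt: number };

class IndexedStore {
  private byId = new Map<string, Contribution>();
  // Write-time index: handle -> cids, the in-memory analogue of a
  // discussion_recipients(cid, handle) junction table.
  private byRecipient = new Map<string, string[]>();

  add(c: Contribution, recipients: string[]): void {
    this.byId.set(c.cid, c);
    for (const h of recipients) {
      const ids = this.byRecipient.get(h) ?? [];
      ids.push(c.cid);
      this.byRecipient.set(h, ids);
    }
  }

  // Looks up only the rows addressed to recipientHandle, then sorts
  // newest-first and applies the limit — O(matching), not O(table).
  list(q: { kind: string; recipientHandle?: string; limit?: number }): Contribution[] {
    const limit = q.limit ?? 50;
    const pool = q.recipientHandle
      ? (this.byRecipient.get(q.recipientHandle) ?? []).map((id) => this.byId.get(id)!)
      : [...this.byId.values()];
    return pool
      .filter((c) => c.kind === q.kind)
      .sort((a, b) => b.createdAt - a.createdAt)
      .slice(0, limit);
  }
}
```

In SQLite the same shape is a `JOIN` against the junction table with `WHERE r.handle = ?` hitting the index; in Nexus it is a server-side filter parameter, so the wire payload is bounded by `limit` rather than table size.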
Alternatives considered (and rejected):
- Per-process recipient cache: cache invalidation across processes is wrong for multi-MCP setups.
- 24h scan window: changes user-visible inbox semantics ("your inbox" → "your inbox from the last 24h").
Context
Surfaced during the #228 review (Issue 14, "defer + file follow-up"). The fix scope is bigger than the rest of the #228 PR combined (~4-6h of schema migration work), so it's intentionally split out.
References
- `src/core/operations/messaging.ts:139` — `readInbox` workaround
- `src/core/operations/messaging.ts:143-146` — workaround comment