## Summary
The `read_memory` tool already performs semantic search across the knowledge graph (facts, topics, people, quotes) — personas use it automatically, and coding tools get it via CLI/MCP. The human sitting in the web UI has never had access to it. This is the feature that replaces Ctrl+F with something qualitatively better.
## The Problem with Ctrl+F
Virtualized message lists (added in feat/virtual-scroll) make native browser search impossible — only rendered DOM nodes are searchable. But Ctrl+F was always a bandage. If you have 500 messages with Lena and want "that thing she said about startups," you're doing archaeology. What you actually want is "find the Quote Lena has about startups, and what Topics it connects to."
## The Opportunity
Messages are the raw material. The real search surface is:
| Entity | What you find |
| --- | --- |
| Quotes | Memorable things said, with source message attribution |
| Facts | Things Ei knows about you, with confidence scores |
| Topics | Subjects with perspective, stake, sentiment |
| People | Friends, family, colleagues Ei has learned about |
These are already extracted, stored, and semantically indexed. We just never gave the human a way to query them.
## UX Model
Reddit-style scope toggle:
```
🔍 [search query________________]
● Search Lena's memory    ○ Search all of Ei
```
Scoped to a persona = the existing `read_memory` with the `persona` filter.
Global = `read_memory` across the full human entity.
Entry point: Global keyboard shortcut (Cmd+K / Ctrl+F intercept when chat is focused).
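The interception logic can be kept testable by separating the decision from the DOM wiring. A minimal sketch, assuming the names `shouldOpenMemorySearch`, `isChatFocused`, and `openMemorySearchModal` (all hypothetical — only the Cmd+K / Ctrl+F behavior comes from this doc):

```typescript
// Minimal shape of the keydown info we care about.
interface KeyInfo {
  key: string;
  metaKey: boolean;
  ctrlKey: boolean;
}

// Cmd/Ctrl+K opens memory search from anywhere;
// Ctrl+F is intercepted only while a chat panel has focus.
function shouldOpenMemorySearch(e: KeyInfo, chatFocused: boolean): boolean {
  const mod = e.metaKey || e.ctrlKey;
  if (!mod) return false;
  if (e.key.toLowerCase() === 'k') return true;
  if (e.key.toLowerCase() === 'f') return chatFocused;
  return false;
}

// Browser-side wiring (illustrative):
// window.addEventListener('keydown', (e) => {
//   if (shouldOpenMemorySearch(e, isChatFocused())) {
//     e.preventDefault();           // suppress native find
//     openMemorySearchModal();      // hypothetical UI entry point
//   }
// });
```

Keeping the predicate pure makes the "intercept or not" question (still open below) a one-line change rather than event-handler surgery.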
Results: Cards grouped by entity type — Quotes first (most conversational), then Facts, Topics, People. Each card shows match context and source persona.
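The grouping described above can be sketched as a small pure function. The result shape here is an assumption — field names like `personaId` are illustrative, not the real extraction schema; only the display order (Quotes, Facts, Topics, People) comes from this doc:

```typescript
type EntityType = 'quote' | 'fact' | 'topic' | 'person';

interface SearchResult {
  type: EntityType;
  text: string;       // match context shown on the card
  personaId: string;  // source persona badge
}

// Quotes first (most conversational), then Facts, Topics, People.
const DISPLAY_ORDER: EntityType[] = ['quote', 'fact', 'topic', 'person'];

// Returns non-empty groups in display order, ready to render as card sections.
function groupResults(results: SearchResult[]): [EntityType, SearchResult[]][] {
  return DISPLAY_ORDER
    .map((t): [EntityType, SearchResult[]] => [t, results.filter(r => r.type === t)])
    .filter(([, items]) => items.length > 0);
}
```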
## Technical Path
Backend — no new work for V1. `StateManager.searchHumanData()` is already the entry point. The `read_memory` tool wraps it today. Exposing it to the frontend is a direct `processor.searchMemory(query, personaId?)` call.
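The wrapper could look roughly like this. The hit shape and the exact `searchHumanData` options are assumptions for the sketch; `processor.searchMemory(query, personaId?)` and the `persona` filter are from this doc:

```typescript
// Illustrative hit shape — not the real return type.
interface MemoryHit { type: string; text: string; score: number }

// Structural stand-in for StateManager's search surface.
interface StateManagerLike {
  searchHumanData(query: string, opts?: { persona?: string }): Promise<MemoryHit[]>;
}

class Processor {
  constructor(private state: StateManagerLike) {}

  // personaId scopes the search to one persona's memory; omit it for global.
  searchMemory(query: string, personaId?: string): Promise<MemoryHit[]> {
    return this.state.searchHumanData(
      query,
      personaId ? { persona: personaId } : undefined,
    );
  }
}
```

The point of the sketch: the frontend call and the `read_memory` tool converge on the same entry point, so scoped and global search differ only in whether the filter is passed.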
Jump to message — cleaner than expected. Quote entities already carry `message_id`, `start`, and `end` for source attribution. Since the virtualizer uses `msg.id` as the row key:
```typescript
// ChatPanelHandle gets a new method:
scrollToMessageId: (messageId: string) => void;

// Implementation (inside ChatPanel):
const scrollToMessageId = (messageId: string) => {
  const index = messages.findIndex(m => m.id === messageId);
  if (index !== -1) rowVirtualizer.scrollToIndex(index, { align: 'center' });
};
```
TanStack Virtual's `scrollToIndex` handles all the virtual positioning. The "jump to source" feature is not a workaround — it's a first-class virtualizer API.
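The glue between a quote card and the panel handle is then a few lines. A sketch, assuming the name `jumpToQuote` (hypothetical); the `message_id`/`start`/`end` attribution fields and `scrollToMessageId` are from this doc:

```typescript
// Attribution a Quote entity already carries.
interface QuoteResult {
  message_id: string;
  start: number;  // character offsets within the source message
  end: number;
}

interface ChatPanelHandle {
  scrollToMessageId: (messageId: string) => void;
}

// On card click: jump if the source persona's panel is mounted;
// return false so the caller can fall back (see Open Questions below).
function jumpToQuote(quote: QuoteResult, panel: ChatPanelHandle | null): boolean {
  if (!panel) return false;  // persona not active
  panel.scrollToMessageId(quote.message_id);
  return true;
}
```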
## V1 Scope
## Open Questions
- Should Ctrl+F be intercepted when chat is focused, or is this a parallel entry point?
- Modal overlay vs. dedicated search panel (sidepanel vs. full modal)?
- V1: text/semantic match only (`searchHumanData` already does this) — or LLM-assisted query rewriting?
- "Jump to message" requires the source persona to be active — should the search result auto-switch persona, or prompt the user?
- Should results show a "preview pane" (messages surrounding the source message) as a fallback when jumping isn't possible (e.g., from a global search result without an active chat)?
## Notes
This is the natural next step after virtualizing the chat panels. The message list is no longer the interface for finding things — it's the interface for having conversations. The knowledge graph is the interface for finding things.
Not near-term — planting the flag for when the extraction pipeline and UI architecture support it cleanly.