Context
The 22-point gap between graph-native systems (Zep 71.2%) and vector-primary systems (Mem0 49%) on LongMemEval shows that graph structure is essential for complex reasoning — multi-hop queries, relational questions, entity-centric retrieval.
Current state
db0 has typed edges between memories: `related`, `derived`, `contradicts`, `supports`, `supersedes`. These are created during:
- Superseding (automatic `supersedes` edge)
- Contradiction detection in `context().ingest()` (automatic `contradicts` edge)
- Manual `memory().addEdge()` calls

The entity extraction in `extraction/entities.ts` detects people, dates, and places and adds them as tags (`entity:PERSON:alice`).
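To make the current behavior concrete, here is a minimal sketch of rules-based entity detection producing db0-style tags. The names (`DetectedEntity`, `detectPeople`, `entityTag`) and the single regex rule are illustrative assumptions, not the actual logic in extraction/entities.ts:

```typescript
// Hypothetical sketch of rules-based entity tagging; the real
// extraction/entities.ts is more involved. Names here are illustrative.
type EntityKind = "PERSON" | "DATE" | "PLACE";

interface DetectedEntity {
  kind: EntityKind;
  text: string;
}

// Format a detected entity as a tag like "entity:PERSON:alice".
function entityTag(e: DetectedEntity): string {
  return `entity:${e.kind}:${e.text.toLowerCase()}`;
}

// Toy rule: a capitalized word after "with"/"by"/"from" is treated as a person.
function detectPeople(text: string): DetectedEntity[] {
  const matches = text.match(/\b(?:with|by|from)\s+([A-Z][a-z]+)/g) ?? [];
  return matches.map((m) => ({
    kind: "PERSON" as const,
    text: m.split(/\s+/).pop()!,
  }));
}

const tags = detectPeople("Reviewed the auth service with Alice").map(entityTag);
// tags → ["entity:PERSON:alice"]
```

The key point: entities end up as flat tags on a memory, not as nodes of their own, which is exactly the gap described next.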
The gap
db0 doesn't build an entity knowledge graph from conversations. It doesn't:
- Create entity nodes ("Alice", "TypeScript", "the auth service")
- Create relationship edges between entities ("Alice → manages → auth service")
- Support graph traversal retrieval (start from a seed entity, walk N hops)
Systems like Zep's Graphiti and Cognee build this automatically during ingestion, enabling queries like "Who is responsible for the service that handles login?" to be answered by traversing: login → auth service → managed by → Alice.
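The traversal pattern those systems rely on can be sketched as a plain breadth-first walk over labeled edges. The `Edge` shape and `walk` helper below are assumptions for illustration, not db0's storage model:

```typescript
// Minimal sketch of N-hop graph traversal over an entity graph.
// Edge shape and function names are illustrative assumptions.
interface Edge {
  from: string;
  to: string;
  label: string;
}

// Collect all nodes reachable from a seed within maxHops,
// following edges in either direction (breadth-first).
function walk(edges: Edge[], seed: string, maxHops: number): Set<string> {
  const seen = new Set<string>([seed]);
  let frontier = [seed];
  for (let hop = 0; hop < maxHops && frontier.length > 0; hop++) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const e of edges) {
        const neighbor = e.from === node ? e.to : e.to === node ? e.from : null;
        if (neighbor && !seen.has(neighbor)) {
          seen.add(neighbor);
          next.push(neighbor);
        }
      }
    }
    frontier = next;
  }
  return seen;
}

const graph: Edge[] = [
  { from: "login", to: "auth service", label: "handled by" },
  { from: "auth service", to: "Alice", label: "managed by" },
];
const reachable = walk(graph, "login", 2);
// reachable = {"login", "auth service", "Alice"}: Alice is found in 2 hops
```

Starting from the query entity "login", the walk reaches "Alice" in two hops, which is what answers the "who is responsible" question above.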
Proposed approach
- During `context().ingest()`, extract entities and relationships as graph nodes/edges
- Use the existing `memoryAddEdge` backend method for storage
- Add entity resolution (fuzzy matching to prevent duplicate nodes for "Alice", "alice", "Alice Chen")
- Add graph traversal as a retrieval strategy in `context().pack()`: seed from query entities, walk 1-2 hops
- Profile-configurable: knowledge-base and agent-context would enable graph construction; minimal and conversational would skip it
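The entity-resolution step can stay cheap. A hedged sketch, assuming a simple in-memory registry (the `resolveEntity` helper and token-subset heuristic are illustrative, not a proposed db0 API):

```typescript
// Lightweight entity resolution sketch: normalize names and fold a mention
// into an existing canonical node when its tokens are a subset of that
// node's tokens (so "alice" matches "alice chen"). Illustrative only.
function normalize(name: string): string {
  return name.trim().toLowerCase();
}

// Return the canonical id for a mention, registering a new node on miss.
function resolveEntity(mention: string, registry: Map<string, string>): string {
  const norm = normalize(mention);
  if (registry.has(norm)) return registry.get(norm)!;
  for (const [key, id] of registry) {
    const a = new Set(norm.split(/\s+/));
    const b = new Set(key.split(/\s+/));
    const smaller = a.size <= b.size ? a : b;
    const larger = a.size <= b.size ? b : a;
    if ([...smaller].every((t) => larger.has(t))) {
      registry.set(norm, id); // remember the alias for next time
      return id;
    }
  }
  registry.set(norm, norm);
  return norm;
}

const registry = new Map<string, string>();
resolveEntity("Alice Chen", registry); // creates canonical node "alice chen"
const id = resolveEntity("alice", registry);
// id is "alice chen": the duplicate mention folds into the existing node
```

A token-subset heuristic like this handles the "Alice" / "alice" / "Alice Chen" cases listed above without any model calls; genuinely ambiguous names would still collide, which is where LLM-based resolution earns its cost.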
Complexity consideration
Full graph construction (like Graphiti) requires LLM calls for entity extraction and resolution, which adds latency and cost during ingestion. A lighter approach: extend the existing rules-based entity extraction to create graph nodes, and use the existing typed edges for relationships. This wouldn't match Graphiti's depth but would enable basic graph traversal without LLM ingestion costs.
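The lighter approach can be sketched as pure co-occurrence linking: memories that share an entity tag get a `related` edge, with no LLM in the loop. The `Memory` shape and `linkByEntity` helper are assumptions for illustration; in db0 the resulting edges would be persisted via the existing `memoryAddEdge` backend method:

```typescript
// Sketch of LLM-free graph construction from existing entity tags:
// memories sharing an "entity:" tag are linked with a "related" edge.
// Memory shape and linkByEntity are illustrative assumptions.
interface Memory {
  id: string;
  tags: string[];
}

interface ProposedEdge {
  from: string;
  to: string;
  type: "related";
}

function linkByEntity(memories: Memory[]): ProposedEdge[] {
  // Group memory ids by entity tag.
  const byEntity = new Map<string, string[]>();
  for (const m of memories) {
    for (const tag of m.tags) {
      if (!tag.startsWith("entity:")) continue;
      const ids = byEntity.get(tag) ?? [];
      ids.push(m.id);
      byEntity.set(tag, ids);
    }
  }
  // Emit one edge per co-occurring pair.
  const out: ProposedEdge[] = [];
  for (const ids of byEntity.values()) {
    for (let i = 0; i < ids.length; i++) {
      for (let j = i + 1; j < ids.length; j++) {
        out.push({ from: ids[i], to: ids[j], type: "related" });
      }
    }
  }
  return out;
}

const proposed = linkByEntity([
  { id: "m1", tags: ["entity:PERSON:alice"] },
  { id: "m2", tags: ["entity:PERSON:alice", "entity:PLACE:berlin"] },
  { id: "m3", tags: ["topic:auth"] },
]);
// proposed: one "related" edge m1 ↔ m2 via the shared alice entity
```

This yields an untyped-relationship graph (edges say "co-mentions Alice", not "manages"), so it supports entity-centric retrieval but not the labeled traversal that Graphiti-style extraction enables.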
References
- packages/core/src/extraction/entities.ts
- packages/core/src/types.ts (`MemoryEdge`)