Releases: orkait/graphstore
v0.4.0 - retrieval observability triangle
Highlights
The retrieval pipeline is now self-describing end-to-end: observe, dry-run, synthesize.
Features
- REMEMBER signal telemetry (#150). Every node returns per-signal scores (`_remember_score`, `_vector_sim`, `_bm25_score`, `_recency_score`, `_graph_score`, `_co_bonus`, `_recall_boost`, `_rank_stage`). `Result.meta["signals"]` carries fusion weights, per-stage candidate counts, reranker state, and nucleus state. Always on, additive.
- SYS EXPLAIN REMEMBER (#151). Dry-runs the pipeline without materializing nodes, running rerank, expanding the nucleus, or mutating recall counts. Returns the candidate plan with per-signal scores plus the same telemetry. Safe for iterative tuning.
- ANSWER verb (#152). `ANSWER "q" [USING "reader"]` + `q.answer(...)`. Retrieves via REMEMBER and hands context to a user-configured reader LLM. Returns `{answer, cited_slots, candidates, reader}`. Graphstore ships zero LLM deps; the reader is a plain callable wired at construction. Named-reader registry for A/B.
- Skill-guided LLM ingest adapter (#149, #154). Bench harness for Mem0-style LLM-at-ingest experiments with cross-session fact memory and a raw LLM dump per session.
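The signal fusion the telemetry exposes can be pictured as a weighted sum over per-signal scores. A minimal sketch, with made-up weights and a plain dict standing in for a node; the real fusion weights are whatever `Result.meta["signals"]` reports, not these values:

```python
# Illustrative fusion of per-signal scores into one ranking score.
# The signal names match the telemetry fields; the weights are
# invented for demonstration only.
SIGNALS = ["_vector_sim", "_bm25_score", "_recency_score", "_graph_score"]

def fuse(node_scores: dict, weights: dict) -> float:
    """Weighted sum over whichever signals are present on a node."""
    return sum(weights.get(s, 0.0) * node_scores.get(s, 0.0) for s in SIGNALS)

weights = {"_vector_sim": 0.5, "_bm25_score": 0.3,
           "_recency_score": 0.1, "_graph_score": 0.1}
node = {"_vector_sim": 0.8, "_bm25_score": 0.6,
        "_recency_score": 1.0, "_graph_score": 0.2}
score = fuse(node, weights)  # 0.40 + 0.18 + 0.10 + 0.02 = 0.70
```

With per-signal scores attached to every node, a tuning loop can recompute fused scores offline under candidate weight sets without re-running retrieval.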
Infrastructure
- Skills split (#148). `graphstore-dsl` (runtime DSL emission) + `graphstore-builder` (Python adapter authors) replace the old 9k-token monolith.
- Docusaurus docs (#142-147). Landing at https://graphstore-docs.orkait.com.
- LoCoMo bench CLI flags (#154). `--adapter {graphstore,skill}` and `--use-raw-turns` for proper A/B control.
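Both the adapter flags above and the named-reader registry boil down to selecting an implementation by name at runtime. A self-contained sketch of that pattern; every name here is illustrative, not part of graphstore's API:

```python
from typing import Callable, Dict

# A reader is just a plain callable: prompt in, answer text out.
Reader = Callable[..., str]

def reader_a(prompt: str, max_tokens: int = 1000) -> str:
    # Stand-in for a call to LLM backend A.
    return f"A:{prompt}"

def reader_b(prompt: str, max_tokens: int = 1000) -> str:
    # Stand-in for a call to LLM backend B.
    return f"B:{prompt}"

# Name-to-callable registry: the A/B switch is just a key lookup.
READERS: Dict[str, Reader] = {"a": reader_a, "b": reader_b}

def answer_with(name: str, prompt: str) -> str:
    return READERS[name](prompt)
```

Keeping readers as bare callables is what lets the library ship with zero LLM dependencies: the registry never imports a backend, only holds references to user-supplied functions.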
Breaking changes
None.
Reader example
```python
def my_reader(prompt: str, max_tokens: int = 1000) -> str:
    ...  # call any LLM backend

gs = GraphStore(reader=my_reader)
r = gs.execute('ANSWER "What is the capital of France?" LIMIT 3')
r.data["answer"]       # "Paris"
r.data["cited_slots"]  # ["mem:paris", "mem:eiffel", ...]
r.meta["signals"]      # full pipeline telemetry
```
Verified
- 1766 tests pass, 101 skipped
- Docusaurus builds clean
- Em dashes: zero (Rule 9)
What's NOT in this release
- Full LoCoMo A/B of baseline vs skill-adapter (abandoned; LoCoMo ships pre-distilled observations so re-distilling isn't a real test)
- Cloudflare Pages CI deploy workflow (manual-deploy only)
Both land in a follow-up once real user signal arrives.
🤖 Generated with Claude Code