MentisDB is a durable semantic memory engine and versioned skill registry for AI agents — a persistent, hash-chained brain that survives context resets, model swaps, and team turnover.
It stores semantically typed thoughts in an append-only, hash-chained memory log through a swappable storage adapter layer. The skill registry is a git-like immutable version store for agent instruction bundles — every upload is a new version, history is never overwritten, and every version is cryptographically signable.
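The hash-chain idea behind the memory log can be sketched in a few lines. This is an illustrative model only, assuming SHA-256 over a sorted JSON payload; MentisDB's real record format and hashing scheme are richer than this:

```python
import hashlib
import json

def append_thought(chain, content, agent_id):
    """Append a record whose hash covers the previous entry's hash,
    so any later tampering breaks every downstream link."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"index": len(chain), "agent_id": agent_id,
              "content": content, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    """Recompute every link; True only if no record was altered."""
    for i, rec in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in rec.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["prev_hash"] != expected_prev or \
           hashlib.sha256(payload).hexdigest() != rec["hash"]:
            return False
    return True

chain = []
append_thought(chain, "Adopted binary storage adapter", "planner-1")
append_thought(chain, "REST port fixed at 9472", "planner-1")
assert verify(chain)
chain[0]["content"] = "tampered"   # any edit invalidates the chain
assert not verify(chain)
```

Because each record commits to its predecessor's hash, verifying the tip transitively verifies the whole history.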
Harness Swapping — the same durable memory works across every AI coding environment. Connect Claude Code, OpenAI Codex, GitHub Copilot CLI, Qwen Code, Cursor, VS Code, or any MCP-capable host to the same mentisdbd daemon and your agents share one brain, regardless of which tool you picked up today.
Zero Knowledge Loss Across Context Boundaries — when an agent's context window fills, it writes a Summary checkpoint to MentisDB, compacts, reloads mentisdb_recent_context, and continues without losing a single decision. Chat history is ephemeral. MentisDB is permanent.
Fleet Orchestration at Scale — one project manager agent decomposes work, dispatches a parallel fleet of specialists, each pre-warmed with shared memory, and synthesizes results wave by wave. MentisDB is the coordination substrate: every agent reads from the same chain and writes its lessons back. The fleet's collective intelligence compounds.
Versioned Skill Registry — skills are not just stored, they are versioned like a git repository. Every upload to an existing skill_id creates a new immutable version (stored as a unified diff). Any historical version is reconstructable. Skills can be deprecated or revoked while full audit history is preserved. Uploading agents with registered Ed25519 keys must cryptographically sign their uploads — provenance is verifiable, not assumed.
Session Resurrection — any agent can call mentisdb_recent_context and immediately know exactly where the project stands, what decisions were made, what traps were already hit, and what comes next — without re-reading code, re-running exploratory searches, or asking the human to re-explain context that was earned through hours of work.
Self-Improving Agent Fleets — agents upload updated skill files after learning something new. A skill checked in at the start of a project is better by the end of it. Combine with Ed25519 signing to create a verifiable, tamper-evident record of which agent authored which version of institutional knowledge.
Multi-Agent Shared Brain — multiple agents, multiple roles, multiple owners can write to the same chain key simultaneously. Every thought carries a stable agent_id. Queries filter by agent identity, thought type, role, tags, concepts, importance, and time windows. The chain represents the full collective intelligence of an entire orchestration system, not just one session.
Lessons That Outlive Models — architectural decisions, hard constraints, non-obvious failure modes, and retrospectives written to MentisDB survive chat loss, model upgrades, and team changes. The knowledge compounds instead of evaporating. A new engineer or a new agent boots up, loads the chain, and inherits everything the team learned.
Install and run the daemon:
```sh
cargo install mentisdb
mentisdbd
```

Run persistently after closing your SSH session:

```sh
nohup mentisdbd &
```

Connect your AI coding tool to the running daemon:

```sh
# Claude Code
claude mcp add --transport http mentisdb http://127.0.0.1:9471

# OpenAI Codex
codex mcp add mentisdb --url http://127.0.0.1:9471

# Qwen Code
qwen mcp add --transport http mentisdb http://127.0.0.1:9471

# GitHub Copilot CLI — use /mcp add in interactive mode,
# or write ~/.copilot/mcp-config.json manually (see below)
```

`mentisdb/` contains:
- the standalone `mentisdb` library crate
- server support for HTTP MCP and REST, enabled by default
- the `mentisdbd` daemon binary
- dedicated tests under `mentisdb/tests`
A Makefile is included at the repository root. All common workflows have a target:
```sh
make build            # fmt + release build
make build-mentisdbd  # build only the daemon binary
make release          # fmt, check, clippy, build, test, doc in sequence
make fmt              # cargo fmt
make check            # cargo check (lib + binary)
make clippy           # cargo fmt + clippy --all-targets -D warnings
make test             # cargo test
make bench            # Criterion benchmarks, output tee'd to /tmp/mentisdb_bench_results.txt
make doc              # cargo doc --all-features
make install          # cargo install --path . --bin mentisdbd
make publish          # cargo publish
make publish-dry-run
make clean
make help             # list all targets with descriptions
```

Build:

```sh
make build
```

Or directly with Cargo:

```sh
cargo build --release
```

Build only the library without the default daemon/server stack:

```sh
cargo build --no-default-features
```

Test:

```sh
make test
```

Or directly:

```sh
cargo test
```

Run tests for the library-only build:

```sh
cargo test --no-default-features
```

Run rustdoc tests:

```sh
cargo test --doc
```

MentisDB ships a Criterion benchmark suite and a harness-free HTTP concurrency benchmark:
```sh
make bench
```

Or directly:

```sh
cargo bench
```

Results are also written to /tmp/mentisdb_bench_results.txt so numbers persist across terminal sessions.
Benchmark coverage:
- `benches/thought_chain.rs` — 10 benchmarks: append throughput, query latency, traversal patterns
- `benches/skill_registry.rs` — 12 benchmarks: skill upload, search, delta reconstruction, lifecycle
- `benches/http_concurrency.rs` — starts `mentisdbd` in-process on a random port; measures write and read throughput at 100 / 1k / 10k concurrent Tokio tasks with p50/p95/p99 latency reporting
Baseline numbers from the DashMap concurrent chain lookup refactor: 750–930 read req/s at 10k concurrent tasks, compared to a sequential bottleneck on the previous RwLock<HashMap> implementation.
```sh
make doc
```

Or directly:

```sh
cargo doc --no-deps
```

Generate docs for the library-only build:

```sh
cargo doc --no-deps --no-default-features
```

The standalone daemon binary is `mentisdbd`.
Run it from source:
```sh
cargo run --bin mentisdbd
```

Install it from the crate directory:

```sh
make install
# or
cargo install --path . --locked
```

When it starts, it serves both:
- an MCP server
- a REST server
Before serving traffic, it:
- migrates or reconciles discovered chains to the current schema and default storage adapter
- verifies chain integrity and attempts repair from valid local sources when possible
- migrates the skill registry from V1 to V2 format if needed (idempotent; safe to run repeatedly)
Once startup completes, it prints:
- the active chain directory, default chain key, and bound MCP/REST addresses
- a catalog of all exposed HTTP endpoints with one-line descriptions
- a per-chain summary with version, adapter, thought count, and per-agent counts
mentisdbd is configured with environment variables:
- `MENTISDB_DIR`: Directory where MentisDB storage adapters store chain files.
- `MENTISDB_DEFAULT_KEY`: Default `chain_key` used when requests omit one. Default: `borganism-brain`
- `MENTISDB_DEFAULT_STORAGE_ADAPTER`: Default storage backend for newly created chains. Supported values: `binary`, `jsonl`. Default: `binary`
- `MENTISDB_STORAGE_ADAPTER`: Optional short alias for `MENTISDB_DEFAULT_STORAGE_ADAPTER`.
- `MENTISDB_VERBOSE`: When unset, verbose interaction logging defaults to `true`. Supported explicit values: `1`, `0`, `true`, `false`.
- `MENTISDB_LOG_FILE`: Optional path for interaction logs. When set, MentisDB writes interaction logs to that file even if console verbosity is disabled. If `MENTISDB_VERBOSE=true`, the same lines are also mirrored to the console logger.
- `MENTISDB_BIND_HOST`: Bind host for both HTTP servers. Default: `127.0.0.1`
- `MENTISDB_MCP_PORT`: MCP server port. Default: `9471`
- `MENTISDB_REST_PORT`: REST server port. Default: `9472`
- `MENTISDB_AUTO_FLUSH`: Controls per-write durability of the `binary` storage adapter. `true` (default): every `append_thought` flushes to disk immediately. Full durability. `false`: writes are batched and flushed every 16 appends (`FLUSH_THRESHOLD`). Up to 15 thoughts may be lost on a hard crash or power failure, but write throughput increases significantly for multi-agent hubs with many concurrent writers. Supported values: `1`, `0`, `true`, `false`. Has no effect on the `jsonl` adapter.
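The `MENTISDB_AUTO_FLUSH` trade-off can be sketched in a few lines. This is an illustrative model only, not MentisDB's actual adapter code; the threshold value comes from the description above:

```python
import os
import tempfile

FLUSH_THRESHOLD = 16  # batching interval described above

class BatchedWriter:
    """Sketch of the MENTISDB_AUTO_FLUSH trade-off. With auto_flush=False,
    appends are buffered and only fsync'd every FLUSH_THRESHOLD records,
    so a hard crash can lose up to FLUSH_THRESHOLD - 1 buffered appends."""

    def __init__(self, path, auto_flush=True):
        self.f = open(path, "ab")
        self.auto_flush = auto_flush
        self.pending = 0

    def append(self, record: bytes):
        self.f.write(record + b"\n")
        self.pending += 1
        if self.auto_flush or self.pending >= FLUSH_THRESHOLD:
            self.f.flush()
            os.fsync(self.f.fileno())  # durable on disk from here on
            self.pending = 0

fd, path = tempfile.mkstemp()
os.close(fd)
w = BatchedWriter(path, auto_flush=False)
for _ in range(15):
    w.append(b"thought")   # buffered: lost if the process dies now
assert w.pending == 15
w.append(b"thought")       # 16th append triggers flush + fsync
assert w.pending == 0
```

The default `auto_flush=True` path pays one fsync per append in exchange for zero-loss durability.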
Example — full durability (production default):

```sh
MENTISDB_DIR=/tmp/mentisdb \
MENTISDB_DEFAULT_KEY=borganism-brain \
MENTISDB_DEFAULT_STORAGE_ADAPTER=binary \
MENTISDB_VERBOSE=true \
MENTISDB_LOG_FILE=/tmp/mentisdb/mentisdbd.log \
MENTISDB_BIND_HOST=127.0.0.1 \
MENTISDB_MCP_PORT=9471 \
MENTISDB_REST_PORT=9472 \
MENTISDB_AUTO_FLUSH=true \
cargo run --bin mentisdbd
```

Example — high-throughput write mode (multi-agent hub):

```sh
MENTISDB_DIR=/var/lib/mentisdb \
MENTISDB_AUTO_FLUSH=false \
MENTISDB_BIND_HOST=0.0.0.0 \
mentisdbd
```

MCP endpoints:
- `GET /health`
- `POST /`
- `POST /tools/list`
- `POST /tools/execute`
REST endpoints:
- `GET /health`
- `GET /mentisdb_skill_md`
- `GET /v1/skills`
- `GET /v1/skills/manifest`
- `GET /v1/chains`
- `POST /v1/bootstrap`
- `POST /v1/agents`
- `POST /v1/agent`
- `POST /v1/agent-registry`
- `POST /v1/agents/upsert`
- `POST /v1/agents/description`
- `POST /v1/agents/aliases`
- `POST /v1/agents/keys`
- `POST /v1/agents/keys/revoke`
- `POST /v1/agents/disable`
- `POST /v1/thought`
- `POST /v1/thoughts/genesis`
- `POST /v1/thoughts`
- `POST /v1/thoughts/traverse`
- `POST /v1/retrospectives`
- `POST /v1/search`
- `POST /v1/recent-context`
- `POST /v1/memory-markdown`
- `POST /v1/skills/upload`
- `POST /v1/skills/search`
- `POST /v1/skills/read`
- `POST /v1/skills/versions`
- `POST /v1/skills/deprecate`
- `POST /v1/skills/revoke`
- `POST /v1/head`
The daemon currently exposes 29 MCP tools:
- `mentisdb_bootstrap`: Create a chain if needed and write one bootstrap checkpoint when it is empty.
- `mentisdb_append`: Append a durable semantic thought with optional tags, concepts, refs, and signature metadata.
- `mentisdb_append_retrospective`: Append a retrospective memory intended to prevent future agents from repeating a hard failure.
- `mentisdb_search`: Search thoughts by semantic filters, identity filters, time bounds, and scoring thresholds.
- `mentisdb_list_chains`: List known chains with version, storage adapter, counts, and storage location.
- `mentisdb_list_agents`: List the distinct agent identities participating in one chain.
- `mentisdb_get_agent`: Return one full agent registry record, including status, aliases, description, keys, and per-chain activity metadata.
- `mentisdb_list_agent_registry`: Return the full per-chain agent registry.
- `mentisdb_upsert_agent`: Create or update a registry record before or after an agent writes thoughts.
- `mentisdb_set_agent_description`: Set or clear the description stored for one registered agent.
- `mentisdb_add_agent_alias`: Add a historical or alternate alias to a registered agent.
- `mentisdb_add_agent_key`: Add or replace one public verification key on a registered agent.
- `mentisdb_revoke_agent_key`: Revoke one previously registered public key.
- `mentisdb_disable_agent`: Disable one agent by marking its registry status as revoked.
- `mentisdb_recent_context`: Render recent thoughts into a prompt snippet for session resumption.
- `mentisdb_memory_markdown`: Export a `MEMORY.md`-style Markdown view of the full chain or a filtered subset.
- `mentisdb_get_thought`: Return one stored thought by stable id, chain index, or content hash.
- `mentisdb_get_genesis_thought`: Return the first thought ever recorded in the chain, if any.
- `mentisdb_traverse_thoughts`: Traverse the chain forward or backward in append order from a chosen anchor, in chunks, with optional filters.
- `mentisdb_skill_md`: Return the official embedded `MENTISDB_SKILL.md` Markdown file.
- `mentisdb_list_skills`: List versioned skill summaries from the skill registry.
- `mentisdb_skill_manifest`: Return the versioned skill-registry manifest, including searchable fields and supported formats.
- `mentisdb_upload_skill`: Upload a new immutable skill version from Markdown or JSON.
- `mentisdb_search_skill`: Search skills by indexed metadata such as ids, names, tags, triggers, uploader identity, status, format, schema version, and time window.
- `mentisdb_read_skill`: Read one stored skill as Markdown or JSON. Responses include trust warnings for untrusted or malicious skill content.
- `mentisdb_skill_versions`: List immutable uploaded versions for one skill.
- `mentisdb_deprecate_skill`: Mark a skill as deprecated while preserving all prior versions.
- `mentisdb_revoke_skill`: Mark a skill as revoked while preserving audit history.
- `mentisdb_head`: Return head metadata, the latest thought at the current chain tip, and integrity state.
The detailed request and response shapes for the MCP surface live in `MENTISDB_MCP.md`. The REST equivalents live in `MENTISDB_REST.md`.
MentisDB distinguishes three different read patterns:
- `head` means the newest thought at the current tip of the append-only chain
- `genesis` means the very first thought in the chain
- traversal means sequential browsing by append order, forward or backward, in chunks
That traversal model is deliberately different from graph/context traversal through refs and typed relations. Graph traversal answers "what is connected to this thought?" Sequential traversal answers "what came before or after this thought in the ledger?"
Lookup and traversal support:
- direct thought lookup by `id`, `hash`, or `index`
- logical `genesis` and `head` anchors
- `forward` and `backward` traversal directions
- `include_anchor` control for inclusive vs exclusive paging
- chunked pagination, including `chunk_size = 1` for next/previous behavior
- optional filters reused from thought search, such as agent identity, thought type, role, tags, concepts, text, importance, confidence, and time windows
- numeric time windows expressed as `start + delta` with `seconds` or `milliseconds` units for MCP/REST callers
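Putting those options together, a traversal request body might look like the sketch below. The field names here are assumptions modeled on the option names listed above; `MENTISDB_REST.md` is the authoritative schema:

```json
{
  "chain_key": "borganism-brain",
  "anchor": "head",
  "direction": "backward",
  "include_anchor": false,
  "chunk_size": 1,
  "agent_id": "planner-1",
  "time_window": { "start": 1700000000, "delta": 3600, "unit": "seconds" }
}
```

With `chunk_size = 1` and `direction = "backward"`, repeated calls step one thought at a time from the tip toward genesis.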
MentisDB includes a versioned skill registry stored alongside chain data in a binary file. Skills are ingested through adapters:
- Markdown -> `SkillDocument`
- JSON -> `SkillDocument`
- `SkillDocument` -> Markdown
- `SkillDocument` -> JSON
Each uploaded skill version records:
- registry file version
- skill schema version
- upload timestamp
- responsible `agent_id`
- optional agent display name and owner from the MentisDB agent registry
- source format
- integrity hash
Uploaders must already exist in the agent registry for the referenced chain. Reusing an existing `skill_id` creates a new immutable version; it does not overwrite history.
`read_skill` responses include explicit safety warnings because `SKILL.md` content can be malicious. Treat every skill as advisory until provenance, trust, and requested capabilities are validated.
Each upload to an existing `skill_id` creates a new immutable version rather than overwriting history:
- The first upload stores the full content (`SkillVersionContent::Full`).
- Subsequent uploads store a unified diff patch against the previous version (`SkillVersionContent::Delta`), keeping storage efficient for iteratively improved skills.
- Each version receives a monotone `version_number` (0-based, assigned in append order).
- Pass a `version_id` to `read_skill` / `mentisdb_read_skill` to retrieve any historical version. The system reconstructs it by replaying patches forward from version 0.
- `skill_versions` / `mentisdb_skill_versions` lists all versions with their ids, numbers, and timestamps.
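The Full-plus-Delta replay model can be sketched with a toy in-memory registry. It stores `difflib` opcodes where MentisDB stores unified diffs, but the reconstruct-by-replaying-forward logic is the same idea; everything here is illustrative, not the crate's actual code:

```python
from difflib import SequenceMatcher

def make_delta(old: str, new: str):
    """Record edit operations against the previous version (MentisDB
    stores a unified diff; opcodes play the same role here)."""
    sm = SequenceMatcher(a=old, b=new)
    return [(tag, i1, i2, new[j1:j2]) for tag, i1, i2, j1, j2 in sm.get_opcodes()]

def apply_delta(old: str, delta) -> str:
    """Rebuild the newer version from the older one plus its delta."""
    return "".join(old[i1:i2] if tag == "equal" else repl
                   for tag, i1, i2, repl in delta)

versions = ["# Skill v0\nDo X."]   # version 0 is stored in full
deltas = []                        # versions 1..n are stored as deltas

def reconstruct(version_number: int) -> str:
    """Replay patches forward from version 0, like read_skill with a version_id."""
    content = versions[0]
    for d in deltas[:version_number]:
        content = apply_delta(content, d)
    return content

def upload(new_content: str):
    """Each upload appends an immutable delta; history is never rewritten."""
    deltas.append(make_delta(reconstruct(len(deltas)), new_content))

upload("# Skill v1\nDo X carefully.")
upload("# Skill v2\nDo X carefully, then Y.")
assert reconstruct(0) == "# Skill v0\nDo X."
assert reconstruct(2) == "# Skill v2\nDo X carefully, then Y."
```

Storing version 0 full and later versions as deltas keeps the registry compact when a skill is improved incrementally over many uploads.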
Agents that have registered Ed25519 public keys in the agent registry must sign their uploads.
Required fields when the uploading agent has active keys:
- `signing_key_id` — the `key_id` registered via `POST /v1/agents/keys` or `mentisdb_add_agent_key`
- `skill_signature` — 64-byte Ed25519 signature over the raw skill content bytes
Agents without registered public keys may upload without signatures.
Upload flow for signing agents:
- Register a public key:

  `POST /v1/agents/keys { agent_id, key_id, algorithm: "ed25519", public_key_bytes }`

  or via MCP: `mentisdb_add_agent_key`
- Sign the raw content bytes with the corresponding private key (Ed25519).
- Include `signing_key_id` and `skill_signature` in the upload request:

  `POST /v1/skills/upload { agent_id, skill_id, content, signing_key_id, skill_signature }`

  or via MCP: `mentisdb_upload_skill` with the same fields.
mentisdbd exposes both:
- a standard streamable HTTP MCP endpoint at `POST /`
- the legacy CloudLLM-compatible MCP endpoints at `POST /tools/list` and `POST /tools/execute`
That means you can:
- use native MCP clients such as Codex and Claude Code against `http://127.0.0.1:9471`
- keep using direct HTTP calls or `cloudllm`'s MCP compatibility layer when needed
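For direct HTTP callers, a `POST /tools/execute` request body plausibly pairs a tool name with its arguments, along the lines of the sketch below. This shape is an assumption; `MENTISDB_MCP.md` documents the authoritative request format:

```json
{
  "name": "mentisdb_append",
  "arguments": {
    "chain_key": "borganism-brain",
    "agent_id": "planner-1",
    "content": "Adopted the binary storage adapter for all new chains."
  }
}
```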
Codex CLI expects a streamable HTTP MCP server when you use --url:
```sh
codex mcp add mentisdb --url http://127.0.0.1:9471
```

Useful follow-up commands:

```sh
codex mcp list
codex mcp get mentisdb
```

This connects Codex to the daemon's standard MCP root endpoint.
Qwen Code uses the same HTTP MCP transport model:
```sh
qwen mcp add --transport http mentisdb http://127.0.0.1:9471
```

Useful follow-up commands:

```sh
qwen mcp list
```

For user-scoped configuration:

```sh
qwen mcp add --scope user --transport http mentisdb http://127.0.0.1:9471
```

Claude Code supports MCP servers through its `claude mcp` commands and project/user MCP config. For a remote HTTP MCP server, the configuration shape is transport-based:
```sh
claude mcp add --transport http mentisdb http://127.0.0.1:9471
```

Useful follow-up commands:

```sh
claude mcp list
claude mcp get mentisdb
```

Claude Code also supports JSON config files such as `.mcp.json`. A MentisDB HTTP MCP config looks like this:

```json
{
  "mcpServers": {
    "mentisdb": {
      "type": "http",
      "url": "http://127.0.0.1:9471"
    }
  }
}
```

Important:

- `/mcp` inside Claude Code is mainly for managing or authenticating MCP servers that are already configured
- the server itself must already be running at the configured URL
GitHub Copilot CLI can also connect to mentisdbd as a remote HTTP MCP
server.
From interactive mode:
- Run `/mcp add`
- Set `Server Name` to `mentisdb`
- Set `Server Type` to `HTTP`
- Set `URL` to `http://127.0.0.1:9471`
- Leave headers empty unless you add auth later
- Save the config
You can also configure it manually in ~/.copilot/mcp-config.json:
```json
{
  "mcpServers": {
    "mentisdb": {
      "type": "http",
      "url": "http://127.0.0.1:9471",
      "headers": {},
      "tools": ["*"]
    }
  }
}
```

MentisDB supports a dedicated retrospective workflow for lessons learned.
- Use `mentisdb_append` for ordinary durable facts, constraints, decisions, plans, and summaries.
- Use `mentisdb_append_retrospective` after a repeated failure, a long snag, or a non-obvious fix when future agents should avoid repeating the same struggle.
The retrospective helper:
- defaults `thought_type` to `LessonLearned`
- always stores the thought with `role = Retrospective`
- still supports tags, concepts, confidence, importance, and `refs` to earlier thoughts such as the original mistake or correction
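A retrospective append over REST might look like the sketch below. The exact field names are assumptions drawn from the options above, so check `MENTISDB_REST.md` before relying on them; the body would go to `POST /v1/retrospectives`:

```json
{
  "chain_key": "borganism-brain",
  "agent_id": "builder-2",
  "content": "Do not run migrations while MENTISDB_AUTO_FLUSH=false; a crash mid-batch can drop buffered thoughts.",
  "tags": ["migration", "durability"],
  "concepts": ["auto-flush", "crash-safety"],
  "importance": 0.9,
  "refs": ["<id of the original mistake thought>"]
}
```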
Multiple agents can write to the same `chain_key`. Each stored thought carries a stable `agent_id`.
Agent profile metadata now lives in the per-chain agent registry instead of being duplicated into every thought record. Registry records can store:
- `display_name`
- `agent_owner`
- `description`
- `aliases`
- `status`
- `public_keys`
- per-chain activity counters such as `thought_count`, `first_seen_index`, and `last_seen_index`
That allows a shared chain to represent memory from:
- multiple agents in one workflow
- multiple named roles in one orchestration system
- multiple tenants or owners writing to the same chain namespace
Queries can filter by:
- `agent_id`
- `agent_name`
- `agent_owner`
Administrative tools can also inspect and mutate the agent registry directly, so agents can be documented, disabled, aliased, or provisioned with public keys before they start writing thoughts.
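For example, provisioning an agent before its first write could look like this sketch of a `POST /v1/agents/upsert` body. The field names are assumed from the registry fields listed above; see `MENTISDB_REST.md` for the authoritative shape:

```json
{
  "chain_key": "borganism-brain",
  "agent_id": "researcher-3",
  "display_name": "Researcher",
  "agent_owner": "platform-team",
  "description": "Reads papers and writes summaries into the shared chain.",
  "aliases": ["researcher-old"],
  "status": "active"
}
```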
At the repository root:
- `MENTISDB_MCP.md`
- `MENTISDB_REST.md`
- `mentisdb/WHITEPAPER.md`
- `mentisdb/changelog.txt`