A production-quality SQLite-backed MCP Memory stack with WAL concurrent safety (10+ sessions), FTS5 BM25 search, session tracking, task management, bridge sync, collaboration workflows, and a native system tray task manager.
Drop-in compatible with @modelcontextprotocol/server-memory for the core 9 knowledge-graph tools, with 41 additional tools split across companion FastMCP micro-servers for sessions, tasks, bridge sync, collaboration, entity linking, and intelligence workflows (50 tools total). Includes a PyQt6 desktop app for visual task management and standalone automation scripts.
Existing MCP memory servers use JSONL files, cloud APIs, or heavyweight databases. Each has trade-offs that hurt real-world Claude Code usage:
- JSONL files (official MCP memory) -- file locks break with 2+ concurrent sessions. Data corruption is a matter of time.
- Cloud APIs (Mem0, Supabase) -- latency, API keys, privacy concerns, vendor lock-in.
- Heavy databases (Neo4j, ChromaDB, Qdrant) -- Docker, config files, resource overhead for what is essentially a key-value store with search.
SQLite hits the sweet spot:
- Single file -- `memory.db` is the entire database. Back it up with `cp`.
- Zero config -- No server process, no Docker, no API keys.
- ACID transactions -- Writes never corrupt, even on power loss.
- WAL mode -- Multiple concurrent readers and writers. 10+ Claude Code sessions, no conflicts.
- FTS5 -- Full-text search with BM25 ranking, built into SQLite itself.
- stdlib -- `sqlite3` ships with Python. No additional binary dependencies.
- WAL mode -- 10+ concurrent Claude Code sessions with no file locking conflicts
- Hybrid search (BM25 + semantic) -- FTS5 keyword search fused with optional sqlite-vec cosine similarity via Reciprocal Rank Fusion, then re-ranked with 6 contextual signals (recency, project affinity, graph proximity, observation richness, canonical facts, active session)
- Session tracking -- Save and recall session snapshots for context continuity across restarts
- Task management -- Structured task CRUD with typed queries, priorities, sections, due dates, and recurring tasks
- Kanban board -- Optional HTML report generator for visual task overview via GitHub Pages
- Cross-project sharing -- Optional `project` field scopes entities; omit it to share across all projects
- Cross-machine sync -- Bridge tools push/pull shared entities between machines via a private git repo
- Drop-in compatible core -- All 9 tools from `@modelcontextprotocol/server-memory` work identically in `sqlite_memory`, with 41 more tools available from companion servers
- Zero required dependencies beyond stdlib -- Only `fastmcp` is required for the MCP protocol; `sqlite3` is Python stdlib. Optional `orjson`, `sqlite-vec`, and `sentence-transformers` add speed and semantic search
- Automatic FTS sync -- The full-text index stays in sync with every write operation
- JSONL migration -- Optionally import existing `memory.json` knowledge graphs on first run
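For intuition on the fusion step of hybrid search, here is a minimal Reciprocal Rank Fusion sketch. The `k = 60` constant is the commonly used default and the document IDs are made up; the project's actual weights and the six contextual re-ranking signals are not shown here.

```python
# Reciprocal Rank Fusion: each result list contributes 1/(k + rank) per
# document, so documents that rank well in both lists rise to the top.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["fts5-notes", "wal-pattern", "task-digest"]       # keyword ranking
vector_hits = ["fts5-notes", "session-recall", "wal-pattern"]  # semantic ranking
fused = rrf([bm25_hits, vector_hits])
print(fused[0])  # "fts5-notes" -- top of both input lists
```

Because RRF only consumes ranks, it fuses BM25 scores (unbounded, corpus-dependent) with cosine similarities (bounded) without any score normalization.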
| Feature | sqlite-memory-mcp | Official MCP Memory | claude-mem0 | @pepk/sqlite | simple-memory | mcp-memory-service | memsearch | memory-mcp | MemoryGraph |
|---|---|---|---|---|---|---|---|---|---|
| Storage | SQLite | JSONL file | Mem0 Cloud | SQLite | JSON file | ChromaDB | Qdrant | SQLite | Neo4j |
| Concurrent 10+ sessions | WAL mode | file locks | cloud | no WAL | file locks | yes | yes | no | yes |
| Hybrid search (BM25 + vector) | yes (RRF fusion) | substring | no | no | no | vector only | vector only | no | Cypher only |
| Session tracking | built-in | no | no | no | no | no | no | no | no |
| Task management | built-in | no | no | no | no | no | no | no | no |
| Cross-project sharing | project field | no | no | no | no | no | no | no | no |
| Drop-in compatible | 9/9 tools | baseline | no | partial | no | no | no | partial | no |
| Setup effort | pip install | npx | API key + pip | pip | npx | Docker + pip | Docker + pip | pip | Docker + Neo4j |
| Dependencies | sqlite3 (stdlib) | Node.js | Cloud API | sqlite3 | Node.js | ChromaDB | Qdrant | sqlite3 | Neo4j |
```bash
# Clone
git clone https://github.com/rmanov/sqlite-memory-mcp.git
cd sqlite-memory-mcp

# Install from source
pip install -e .

# Optional extras
# pip install -e ".[gui,vector,speed]"

# Add the core drop-in server
claude mcp add sqlite_memory python /path/to/server.py

# Add companion servers for the full 50-tool stack
claude mcp add sqlite_tasks python /path/to/task_server.py
claude mcp add sqlite_session python /path/to/session_server.py
claude mcp add sqlite_bridge python /path/to/bridge_server.py
claude mcp add sqlite_collab python /path/to/collab_server.py
claude mcp add sqlite_entity python /path/to/entity_server.py
claude mcp add sqlite_intel python /path/to/intel_server.py
```

If you install the package instead of running from a checkout, the same servers are available as console scripts:
```bash
claude mcp add sqlite_memory sqlite-memory-core
claude mcp add sqlite_tasks sqlite-memory-tasks
claude mcp add sqlite_session sqlite-memory-session
claude mcp add sqlite_bridge sqlite-memory-bridge
claude mcp add sqlite_collab sqlite-memory-collab
claude mcp add sqlite_entity sqlite-memory-entity
claude mcp add sqlite_intel sqlite-memory-intel
```

Add these server/file pairs to your `~/.claude/settings.json` under `mcpServers`:
| MCP server name | Python entry file | Purpose |
|---|---|---|
| `sqlite_memory` | `server.py` | Core 9 drop-in memory tools |
| `sqlite_tasks` | `task_server.py` | Task CRUD, digest, archive, overdue bump |
| `sqlite_session` | `session_server.py` | Session recall, project search, health, resume |
| `sqlite_bridge` | `bridge_server.py` | Cross-machine bridge sync, sharing review |
| `sqlite_collab` | `collab_server.py` | Collaborator and public-knowledge workflows |
| `sqlite_entity` | `entity_server.py` | Task-entity linking and merge helpers |
| `sqlite_intel` | `intel_server.py` | Context assessment and enrichment tools |
Each server should share the same environment values:
```json
"env": {
  "SQLITE_MEMORY_DB": "/home/user/.claude/memory/memory.db",
  "BRIDGE_REPO": "/home/user/.claude/memory/bridge"
}
```

The `SQLITE_MEMORY_DB` environment variable controls where the database is stored. If omitted, it defaults to `~/.claude/memory/memory.db`. `BRIDGE_REPO` is only needed for bridge/collab flows.
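The default-path behavior described above amounts to a single environment lookup; a sketch (the variable name matches the docs, but the helper itself is illustrative, not the project's code):

```python
import os
from pathlib import Path

# Resolve the database path as described: honor SQLITE_MEMORY_DB if set,
# otherwise fall back to ~/.claude/memory/memory.db.
def resolve_db_path() -> Path:
    default = Path.home() / ".claude" / "memory" / "memory.db"
    return Path(os.environ.get("SQLITE_MEMORY_DB", default))

print(resolve_db_path())
```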
The system is intentionally split into micro-servers because Claude Code exposes only a limited number of tools per MCP server.
- `server.py` exposes the 9 drop-in knowledge-graph tools.
- `task_server.py`, `session_server.py`, `bridge_server.py`, `collab_server.py`, `entity_server.py`, and `intel_server.py` expose the remaining 41 tools.
- All MCP servers, the Task Tray GUI, and the automation scripts share the same `memory.db`.
- `db_utils.py` and `schema.py` are the shared source of truth for connections, migrations, and common helpers.
- SQLite WAL mode handles concurrency across all of these processes.
The core schema includes the tables below, plus additional tables for task field-version tracking, bridge sync metadata, collaborators, public knowledge review, context packing, ratings, and entity/task links:
```sql
PRAGMA journal_mode=WAL;
PRAGMA foreign_keys=ON;
PRAGMA busy_timeout=5000;

-- Core entity storage
CREATE TABLE IF NOT EXISTS entities (
    id INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL,
    entity_type TEXT NOT NULL,
    project TEXT DEFAULT NULL,
    created_at TEXT NOT NULL,
    updated_at TEXT NOT NULL
);

-- Observations attached to entities
CREATE TABLE IF NOT EXISTS observations (
    id INTEGER PRIMARY KEY,
    entity_id INTEGER NOT NULL REFERENCES entities(id) ON DELETE CASCADE,
    content TEXT NOT NULL,
    created_at TEXT NOT NULL,
    UNIQUE(entity_id, content)
);

-- Directed relations between entities
CREATE TABLE IF NOT EXISTS relations (
    id INTEGER PRIMARY KEY,
    from_id INTEGER NOT NULL REFERENCES entities(id) ON DELETE CASCADE,
    to_id INTEGER NOT NULL REFERENCES entities(id) ON DELETE CASCADE,
    relation_type TEXT NOT NULL,
    created_at TEXT NOT NULL,
    UNIQUE(from_id, to_id, relation_type)
);

-- Session snapshots for context continuity
CREATE TABLE IF NOT EXISTS sessions (
    id INTEGER PRIMARY KEY,
    session_id TEXT UNIQUE NOT NULL,
    project TEXT DEFAULT NULL,
    summary TEXT DEFAULT NULL,
    active_files TEXT DEFAULT NULL, -- JSON array
    started_at TEXT NOT NULL,
    ended_at TEXT DEFAULT NULL
);

-- Structured task management
CREATE TABLE IF NOT EXISTS tasks (
    id TEXT PRIMARY KEY,
    title TEXT NOT NULL,
    description TEXT DEFAULT NULL,
    status TEXT NOT NULL DEFAULT 'not_started',
    priority TEXT DEFAULT 'medium',
    section TEXT DEFAULT 'inbox',
    due_date TEXT DEFAULT NULL,
    project TEXT DEFAULT NULL,
    parent_id TEXT DEFAULT NULL REFERENCES tasks(id),
    notes TEXT DEFAULT NULL,
    recurring TEXT DEFAULT NULL,
    created_at TEXT NOT NULL,
    updated_at TEXT NOT NULL
);

-- Full-text search index (BM25 ranked)
CREATE VIRTUAL TABLE IF NOT EXISTS memory_fts USING fts5(
    name, entity_type, observations_text,
    tokenize = "unicode61 remove_diacritics 2"
);
```

Design notes:

- `entities.name` is `UNIQUE` -- one entity per name, enforced at the database level.
- `observations` uses `UNIQUE(entity_id, content)` -- duplicate observations are silently ignored via `INSERT OR IGNORE`.
- `relations` uses `UNIQUE(from_id, to_id, relation_type)` -- same deduplication pattern.
- `ON DELETE CASCADE` on foreign keys ensures deleting an entity cleans up all its observations and relations.
- `memory_fts` is a virtual table that concatenates entity name, type, and all observations into a single searchable document. It is synced on every write.
- `tasks.id` is a UUID (TEXT), not an integer -- tasks are identified by UUID for stability across machines.
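The dedup and cascade behavior from the design notes can be verified in a few lines of stdlib `sqlite3`. This self-contained sketch uses a trimmed-down version of the schema, not the project's actual tables:

```python
import sqlite3

# UNIQUE(entity_id, content) + INSERT OR IGNORE dedupes observations, and
# ON DELETE CASCADE cleans them up when the parent entity is deleted.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    PRAGMA foreign_keys=ON;
    CREATE TABLE entities (id INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL);
    CREATE TABLE observations (
        id INTEGER PRIMARY KEY,
        entity_id INTEGER NOT NULL REFERENCES entities(id) ON DELETE CASCADE,
        content TEXT NOT NULL,
        UNIQUE(entity_id, content)
    );
""")
conn.execute("INSERT INTO entities (name) VALUES ('server')")
for _ in range(3):  # insert the same observation three times
    conn.execute(
        "INSERT OR IGNORE INTO observations (entity_id, content) VALUES (1, 'uses WAL')"
    )
count = conn.execute("SELECT COUNT(*) FROM observations").fetchone()[0]
print(count)  # 1 -- duplicates silently ignored

conn.execute("DELETE FROM entities WHERE id = 1")
orphans = conn.execute("SELECT COUNT(*) FROM observations").fetchone()[0]
print(orphans)  # 0 -- cascade removed the observations
```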
The 50 tools are grouped by MCP server:
| MCP server | Tool count | Tools |
|---|---|---|
| `sqlite_memory` | 9 | create_entities, add_observations, create_relations, delete_entities, delete_observations, delete_relations, read_graph, search_nodes, open_nodes |
| `sqlite_session` | 5 | session_save, session_recall, search_by_project, knowledge_health, resume_context |
| `sqlite_tasks` | 6 | create_task_or_note, update_task, query_tasks, task_digest, archive_done_tasks, bump_overdue_priority |
| `sqlite_bridge` | 6 | bridge_push, bridge_pull, bridge_status, assign_task, review_shared_tasks, process_recurring_tasks |
| `sqlite_collab` | 9 | manage_collaborators, share_knowledge, review_shared_knowledge, request_publish, cancel_publish, search_public_knowledge, rate_public_knowledge, get_knowledge_ratings, update_verification |
| `sqlite_entity` | 7 | link_task_entity, unlink_task_entity, get_task_links, get_entity_tasks, suggest_task_links, find_entity_overlaps, merge_entities |
| `sqlite_intel` | 8 | assess_context, queue_clarification, record_human_answer, extract_candidate_claims, promote_candidate, build_context_pack, explain_impact, enrich_context |
Share knowledge graph entities between machines (e.g., personal laptop + work computer) via a private git repo.
- Tag entities for sharing by setting `project` to any value starting with `"shared"` (e.g., `"shared"`, `"shared:trading"`, `"shared:hooks"`)
- `bridge_push()` first runs a bridge repo safety preflight, then exports shared data to `shared.json`, `shared.js`, `index.json`, `tasks/`, and `entities/`, and finally commits and pushes. The v2 payload also includes shared tasks.
- `bridge_pull()` on the other machine runs the same repo safety preflight, does `git pull`, and imports new entities/observations/relations. Task metadata comes from `index.json`, while `description` and `notes` are hydrated from per-task files before the LWW merge.
- `bridge_status()` shows what's in sync vs. only-local vs. only-remote
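For intuition on the LWW ("last-writer-wins") merge mentioned above, here is a toy sketch. The record shape, ISO timestamp format, and whole-record granularity are illustrative assumptions, not the bridge's actual merge code:

```python
# Last-writer-wins: whichever side carries the newer updated_at timestamp
# wins. ISO-8601 timestamps compare correctly as plain strings.
def lww_merge(local, remote):
    return remote if remote["updated_at"] > local["updated_at"] else local

local = {"title": "Fix WAL pragma", "updated_at": "2026-03-01T10:00:00"}
remote = {"title": "Fix WAL pragma ordering", "updated_at": "2026-03-02T09:00:00"}
merged = lww_merge(local, remote)
print(merged["title"])  # the remote edit is newer, so it wins
```

Ties go to the local record, which keeps the merge deterministic on each machine.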
Auto-sync only overwrites bridge-generated artifacts (`shared.json`, `index.json`, `tasks/`, `entities/`, `public_knowledge/`, `shared.js`). If the bridge repo contains user-managed dirty files such as `index.html`, sync blocks instead of discarding them.
```bash
# One-time setup on each machine
mkdir -p ~/.claude/memory/bridge
cd ~/.claude/memory/bridge
git init

# Create a private GitHub repo
gh repo create memory-bridge --private
git remote add origin https://github.com/YOUR_USER/memory-bridge.git

# Initialize
echo '{}' > shared.json
git add shared.json
git commit -m "init: bridge repo"
git push -u origin main
```

On the second machine, clone instead of init:

```bash
git clone https://github.com/YOUR_USER/memory-bridge.git ~/.claude/memory/bridge
```

Add `BRIDGE_REPO` to the MCP servers that participate in sharing (`sqlite_bridge`, `sqlite_collab`, and usually the rest of the stack so they all see the same paths):
```json
"sqlite_bridge": {
  "command": "python",
  "args": ["/path/to/bridge_server.py"],
  "env": {
    "SQLITE_MEMORY_DB": "/home/user/.claude/memory/memory.db",
    "BRIDGE_REPO": "/home/user/.claude/memory/bridge"
  }
}
```

```python
# Tag an entity for sharing
create_entities([{
    "name": "WAL-mode-pattern",
    "entityType": "TechnicalInsight",
    "project": "shared:sqlite",
    "observations": ["SQLite WAL mode enables concurrent readers + writers"]
}])

# Push to bridge repo
bridge_push()    # pushes all project LIKE 'shared%'

# On another machine: pull
bridge_pull()    # imports new entities with dedup

# Check sync status
bridge_status()
```

SQLite's Write-Ahead Logging (WAL) mode is the key enabler for concurrent Claude Code sessions:
- Without WAL (default journal mode): Readers block writers, writers block readers. A single file lock means only one process can write at a time, and reads are blocked during writes.
- With WAL: Readers never block writers. Writers never block readers. Multiple readers can proceed concurrently. Only one writer at a time, but writers don't wait for readers.
This server sets three PRAGMAs at every connection:
```sql
PRAGMA journal_mode=WAL;    -- Enable write-ahead logging
PRAGMA foreign_keys=ON;     -- Enforce referential integrity
PRAGMA busy_timeout=5000;   -- Wait up to 5 seconds for the write lock
```

The `busy_timeout` is critical: if two sessions try to write simultaneously, the second one waits up to 5 seconds instead of failing immediately. In practice, MCP tool calls are fast enough that contention is rare.
Result: 10+ concurrent Claude Code sessions can read and write the same memory.db without corruption or blocking.
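The reader/writer behavior above is easy to demonstrate with two plain `sqlite3` connections. This sketch mirrors the three PRAGMAs but is not the project's actual connection helper:

```python
import os
import sqlite3
import tempfile

# Open a connection with the same three PRAGMAs the server sets.
def open_wal(path):
    conn = sqlite3.connect(path, timeout=5.0)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA foreign_keys=ON")
    conn.execute("PRAGMA busy_timeout=5000")
    return conn

db = os.path.join(tempfile.mkdtemp(), "memory.db")
writer = open_wal(db)
writer.execute("CREATE TABLE entities (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
writer.execute("INSERT INTO entities (name) VALUES ('WAL-mode-pattern')")
writer.commit()

# A second "session" reads while the first has an uncommitted write in
# flight; in WAL mode the reader is neither blocked nor able to see it.
reader = open_wal(db)
writer.execute("INSERT INTO entities (name) VALUES ('FTS5-search')")
rows = reader.execute("SELECT name FROM entities ORDER BY id").fetchall()
print(rows)  # only the committed row is visible
writer.commit()
```

In rollback-journal mode the reader's SELECT would instead wait on the writer's lock (and eventually fail once `busy_timeout` expires).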
The `search_nodes` tool uses SQLite FTS5 with BM25 ranking. Queries support the standard FTS5 syntax:

```python
# Simple term search
search_nodes("fastmcp")

# Phrase search
search_nodes('"WAL mode"')

# Boolean AND (implicit)
search_nodes("sqlite concurrency")

# Boolean OR
search_nodes("sqlite OR postgres")

# Prefix search
search_nodes("bug*")

# Negation
search_nodes("memory NOT cache")

# Column-specific search
search_nodes("name:server")
search_nodes("entity_type:BugFix")
```
Results are ranked by BM25 relevance score. The FTS5 index covers entity names, entity types, and the full text of all observations concatenated together.
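The same query syntax works against any FTS5 table from stdlib `sqlite3` (assuming your Python build ships FTS5, which CPython's bundled SQLite normally does). A self-contained sketch mirroring the `memory_fts` shape, with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE VIRTUAL TABLE memory_fts USING fts5(
        name, entity_type, observations_text,
        tokenize = "unicode61 remove_diacritics 2"
    )
""")
conn.executemany(
    "INSERT INTO memory_fts VALUES (?, ?, ?)",
    [
        ("WAL-mode-pattern", "TechnicalInsight",
         "SQLite WAL mode enables concurrent readers and writers"),
        ("FTS5-notes", "TechnicalInsight",
         "BM25 ranking is built into FTS5"),
    ],
)
# bm25() returns lower (more negative) scores for better matches, so
# ORDER BY bm25(memory_fts) puts the most relevant rows first.
rows = conn.execute(
    "SELECT name FROM memory_fts WHERE memory_fts MATCH ? ORDER BY bm25(memory_fts)",
    ('"WAL mode"',),
).fetchall()
print(rows)  # only the row containing the phrase "WAL mode" matches
```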
Session tracking lives on the sqlite_session MCP server and enables context continuity across Claude Code restarts.
At the end of a session (or periodically), save a snapshot:
```python
session_save(
    session_id="abc-123",
    project="sqlite-memory-mcp",
    summary="Implemented FTS5 search with BM25 ranking. Fixed WAL pragma ordering.",
    active_files=[
        "server.py",
        "README.md"
    ]
)
```
At the start of a new session, recall what happened recently:
```python
session_recall(last_n=3)
```
Returns the 3 most recent sessions with their summaries, projects, active files, and timestamps.
You can extend your Claude Code session hook (`~/.claude/hooks/session_context.py`) to automatically recall recent sessions and inject them into the system prompt. See `examples/session_context_hook.py` for a reference implementation.
Structured task tracking lives on the sqlite_tasks MCP server. No external service required.
Tasks are organized into five sections following a GTD-style workflow:
| Section | Purpose |
|---|---|
| `inbox` | Unprocessed tasks (default) |
| `today` | Tasks to complete today |
| `next` | Next actions queue |
| `someday` | Deferred / maybe |
| `waiting` | Blocked on someone else |
Four priority levels: `low`, `medium` (default), `high`, `critical`. The `query_tasks` and `task_digest` tools always sort by priority descending, then by `due_date` ascending.

Five task statuses: `not_started` (default), `in_progress`, `done`, `archived`, `cancelled`.
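That sort order can be sketched as a Python sort key. The `PRIORITY_RANK` mapping below is an assumption mirroring the four levels, not necessarily the project's exact constant:

```python
# Sort tasks by priority descending, then due_date ascending; tasks with no
# due date sort last within their priority band.
PRIORITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}

def task_sort_key(task):
    rank = PRIORITY_RANK.get(task.get("priority", "medium"), 1)
    return (-rank, task.get("due_date") or "9999-12-31")

tasks = [
    {"title": "a", "priority": "low", "due_date": "2026-01-01"},
    {"title": "b", "priority": "critical", "due_date": None},
    {"title": "c", "priority": "critical", "due_date": "2026-02-01"},
]
ordered = [t["title"] for t in sorted(tasks, key=task_sort_key)]
print(ordered)  # critical tasks first; within critical, the dated task leads
```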
```python
# Create a task
create_task_or_note(
    title="Review pull request #42",
    section="today",
    priority="high",
    due_date="2026-03-05",
    project="sqlite-memory-mcp"
)

# Query pending tasks for today
query_tasks(section="today", status="not_started")

# Mark a task in progress
update_task(task_id="<uuid>", status="in_progress")

# Get a session-start digest
task_digest(sections=["today", "inbox"], include_overdue=True)

# Archive done tasks older than 3 days
archive_done_tasks(older_than_days=3)

# Escalate overdue tasks to high priority
bump_overdue_priority(target_priority="high")
```

Link a task to a parent via `parent_id`:
```python
parent = create_task_or_note(title="Implement feature X")
# parent returns {"task_id": "<parent-uuid>", ...}

create_task_or_note(
    title="Write tests for feature X",
    parent_id="<parent-uuid>"
)
```

Query subtasks with `query_tasks(parent_id="<parent-uuid>")`.
Pass a JSON recurrence config in the `recurring` field:

```python
create_task_or_note(
    title="Weekly review",
    section="today",
    recurring='{"every": "week", "day": "monday"}'
)
```

The automation script `recurring_tasks.py` reads this field and recreates tasks on schedule.
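For intuition, here is one way a weekly config could be expanded into the next due date; the actual logic in `recurring_tasks.py` may differ:

```python
import json
from datetime import date, timedelta

DAYS = ["monday", "tuesday", "wednesday", "thursday",
        "friday", "saturday", "sunday"]

# Expand a {"every": "week", "day": ...} config into the next matching date,
# always strictly after `today` so a task due today still recurs next week.
def next_occurrence(recurring_json, today):
    cfg = json.loads(recurring_json)
    if cfg.get("every") != "week":
        raise ValueError("only weekly recurrence sketched here")
    delta = (DAYS.index(cfg["day"]) - today.weekday()) % 7 or 7
    return today + timedelta(days=delta)

# Wednesday 2026-03-04 -> next Monday is 2026-03-09
print(next_occurrence('{"every": "week", "day": "monday"}', date(2026, 3, 4)))
```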
Four scripts automate routine task hygiene:
| Script | Function |
|---|---|
| `daily_digest.py` | Sends a formatted task digest at session start |
| `auto_archive.py` | Archives done tasks older than 7 days |
| `overdue_bump.py` | Escalates overdue tasks to high priority |
| `recurring_tasks.py` | Recreates recurring tasks on schedule |
All scripts are pure stdlib Python operating directly on memory.db via SQL -- zero external dependencies.
`task_report.py` generates a static HTML kanban board from the tasks table:

```bash
python task_report.py
# Writes: index.html
```

The generated `index.html` shows tasks grouped by section as kanban columns, with priority color-coding. Commit it to the bridge repo to publish via GitHub Pages.
```bash
# Publish to GitHub Pages
cp index.html ~/.claude/memory/bridge/
cd ~/.claude/memory/bridge
git add index.html
git commit -m "chore: update kanban board"
git push
```

Enable GitHub Pages on the bridge repo (Settings > Pages > Branch: main) to get a live URL.
`task_tray.py` is a native PyQt6 system tray application for visual task management:
- System tray icon with overdue badge counter
- Compact popup (left-click) -- Today + Overdue tasks, checkbox toggle, quick-add
- Full window (right-click > Open Full Window) -- tabbed view with Today / Inbox / Next / All
- Auto-refresh every 30 seconds when visible
- Window geometry persisted via QSettings
```bash
# Install PyQt6 (one-time)
pip install PyQt6

# Run
python3 task_tray.py
```

The tray app reads and writes directly to `memory.db` via `db_utils.py`, so changes are immediately visible in Claude Code sessions and vice versa.
All Python files share constants and helpers via `db_utils.py`:

```python
from db_utils import (
    DB_PATH, BRIDGE_REPO,
    TASK_SECTIONS, TASK_PRIORITIES, TASK_STATUSES,
    PRIORITY_RANK, PRIORITY_COLORS,
    get_conn, now_iso, parse_iso_date, is_overdue,
    build_priority_order_sql, priority_sort_key,
)
```

This eliminates duplication of DB connection setup, task constants, and timestamp helpers across `server.py`, `task_tray.py`, and the utility scripts.
MIT License. See LICENSE for details.