
# memtomem

Official website & docs: https://memtomem.com


🚧 Alpha: APIs, defaults, and on-disk config surfaces may still change between 0.1.x releases. Feedback and issue reports are especially welcome: Issues · Discussions.

Give your AI agent a long-term memory.

memtomem turns your markdown notes, documents, and code into a searchable knowledge base that any AI coding agent can use. Write notes as plain .md files; memtomem indexes them and makes them searchable by both keywords and meaning.

```mermaid
flowchart LR
    A["Your files\n.md .json .py"] -->|Index| B["memtomem"]
    B -->|Search| C["AI agent\n(Claude Code, Cursor, etc.)"]
```

First time here? Follow the Getting Started guide; you'll have a working setup in under 5 minutes.


## Why memtomem?

| Problem | How memtomem solves it |
| --- | --- |
| AI forgets everything between sessions | Index your notes once, search them in every session |
| Keyword search misses related content | Hybrid search: exact keywords + meaning-based similarity |
| Notes scattered across tools | One searchable index for markdown, JSON, YAML, Python, JS/TS |
| Vendor lock-in | Your .md files are the source of truth; the DB is a rebuildable cache |
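The hybrid-search row above can be sketched in a few lines: run keyword (BM25) and dense retrieval separately, then merge the two rankings with reciprocal rank fusion (RRF). The helper below illustrates RRF itself and is not memtomem's API; the `k=60` constant is the value commonly used in the RRF literature.

```python
# Illustrative sketch of reciprocal-rank fusion (RRF); not memtomem's API.

def rrf_fuse(rankings, k=60):
    """Merge several ranked lists of doc IDs into one fused ranking.

    Each document scores sum(1 / (k + rank)) over the lists it appears in,
    so items ranked highly by either BM25 or dense search rise to the top.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["deploy.md", "ops.md", "notes.md"]      # keyword ranking
dense = ["ops.md", "deploy.md", "infra.md"]     # semantic ranking
fused = rrf_fuse([bm25, dense])
# Documents near the top of both lists outrank documents
# that only one retriever found.
```

Because a document's contributions are summed across lists, agreement between the two retrievers beats a high rank in just one of them.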

## Quick Start

### 1. Install

```bash
uv tool install 'memtomem[all]'       # or: pipx install 'memtomem[all]'
mm --version                          # verify install
```

[all] bundles the features the sections below describe: ONNX dense embeddings, Korean tokenizer, Ollama/OpenAI providers, code chunker, and the Web UI. For a BM25-only install without those downloads (~40 MB vs ~250 MB), see the minimal install option in the Getting Started guide.

If mm --version shows an older version than the latest release, uv is likely serving cached PyPI metadata. Re-run with uv tool install 'memtomem[all]' --refresh, or clear the cache first: uv cache clean memtomem.

mm: command not found? uv tool install drops the shim into ~/.local/bin, which isn't on $PATH in fresh shells on macOS/Linux. Run uv tool update-shell, then open a new shell and re-run mm --version.

### 2. Setup

```bash
mm init                               # preset picker, then memory_dir + MCP
```

The interactive picker starts with three presets: Minimal (BM25, no downloads), English (Recommended) (ONNX bge-small-en-v1.5 + English reranker + auto-discover providers), and Korean-optimized (ONNX bge-m3 + kiwipiepy tokenizer + multilingual reranker), plus an Advanced entry that opens the full 10-step wizard. Preset paths only ask about the memory directory and MCP registration; everything else is set from the preset.

For automation / CI:

```bash
mm init -y                            # minimal preset, same as before
mm init --preset korean -y            # Korean-optimized bundle, no prompts
mm init --advanced                    # force the full 10-step wizard
```

See Embeddings for the full model/provider matrix.

### 3. Use

```text
"Call the mem_status tool"   →  confirms the server is connected
"Index my notes folder"      →  mem_index(path="~/notes")
"Search for deployment"      →  mem_search(query="deployment checklist")
"Remember this insight"      →  mem_add(content="...", tags=["ops"])
```

Prefer the terminal? mm status is a CLI mirror of mem_status: same output, no editor needed.
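For clients that take raw JSON rather than a registration command, most MCP clients accept an entry along these lines, reusing the memtomem-server entry point that the uvx install option uses. This is a generic sketch; the exact file location and schema depend on your client, so follow MCP Client Setup for editor-specific instructions.

```json
{
  "mcpServers": {
    "memtomem": {
      "command": "uvx",
      "args": ["--from", "memtomem", "memtomem-server"]
    }
  }
}
```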

### 4. Web UI (optional)

```bash
mm web                # polished dashboard on http://127.0.0.1:8080
mm web --dev          # maintainer surface (adds opt-in pages)
```

mm web shows the polished page set by default. Pass --dev (or set MEMTOMEM_WEB__MODE=dev in your shell profile) to expose maintainer pages like Namespaces, Sessions, Working Memory, and Health Report.

## Other install options

Minimal (BM25-only, ~40 MB):

```bash
uv tool install memtomem             # no extras: dense search, web UI, Korean tokenizer unavailable until you add them
```

Opt in later per-feature: uv tool install --reinstall 'memtomem[onnx,web]' (see the extras table in Getting Started).

Project-scoped (per-project isolation):

```bash
uv add 'memtomem[all]' && uv run mm init    # all commands need `uv run` prefix
```

No install (uvx on demand):

```bash
claude mcp add memtomem -s user -- uvx --from memtomem memtomem-server
```

See MCP Client Setup for Cursor / Windsurf / Claude Desktop / Gemini CLI.


## Key Features

- Hybrid search: BM25 keyword + dense vector + RRF fusion in one query
- Semantic chunking: heading-aware Markdown, AST-based Python, tree-sitter JS/TS, structure-aware JSON/YAML/TOML
- Incremental indexing: chunk-level SHA-256 diff; only changed chunks get re-embedded
- Namespaces: organize memories into scoped groups, auto-derived from folder names
- Maintenance: near-duplicate detection, time-based decay, TTL expiration, auto-tagging
- Web UI: visual dashboard for search, sources, tags, timeline, dedup, and more (mm web --dev for the full maintainer surface)
- MCP tools: in core mode, the mem_do meta-tool routes all non-core actions to keep context usage minimal
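The incremental-indexing bullet can be pictured with a small sketch: hash each chunk, compare against the digests recorded on the previous run, and re-embed only the chunks whose hash changed. The function and data shapes here are illustrative, not memtomem's internals.

```python
import hashlib

def chunk_digest(text: str) -> str:
    """SHA-256 of a chunk's text: a stable identity for change detection."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def diff_chunks(old_index: dict[str, str], new_chunks: dict[str, str]) -> list[str]:
    """Return the IDs of chunks whose content hash changed or is new.

    old_index maps chunk_id -> previous digest; new_chunks maps
    chunk_id -> current text. Unchanged chunks keep their embeddings.
    """
    changed = []
    for chunk_id, text in new_chunks.items():
        if old_index.get(chunk_id) != chunk_digest(text):
            changed.append(chunk_id)
    return changed

old = {"notes.md#0": chunk_digest("# Ops\nDeploy steps...")}
new = {"notes.md#0": "# Ops\nDeploy steps...",       # unchanged -> skipped
       "notes.md#1": "## Rollback\nNew section"}     # new -> re-embedded
print(diff_chunks(old, new))  # only the new/changed chunk IDs
```

Diffing at chunk granularity rather than file granularity is what keeps re-embedding cheap when only one section of a large note changes.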

## Ecosystem

| Package | Description |
| --- | --- |
| memtomem | Core: MCP server, CLI, Web UI, hybrid search, storage |
| memtomem-stm | STM proxy: proactive memory surfacing via tool interception |

## Documentation

| Guide | Description |
| --- | --- |
| Getting Started | Install, setup wizard, first use |
| Reference | Complete feature reference for all tools and patterns |
| Configuration | All MEMTOMEM_* environment variables |
| Embeddings | ONNX, Ollama, and OpenAI embedding providers |
| LLM Providers | Ollama, OpenAI, Anthropic, and compatible endpoints |
| MCP Client Setup | Editor-specific configuration |
| Uninstalling memtomem | Clean removal steps |

## Contributing

See CONTRIBUTING.md for setup instructions and the contributor guide.

## License

Apache License 2.0. Contributions are accepted under the terms of the Contributor License Agreement.
