Persistent memory for Claude, Cursor, Codex, LangChain, CrewAI, and any MCP or REST tool. Ebbinghaus decay scoring, contradiction detection, knowledge graphs, agent-scoped memory, vector search. Up to 96% cheaper than mem0, Zep, and Letta.
- Ebbinghaus decay scoring — memories fade naturally over time unless reinforced, just like human memory. No other memory API does this. Your agent stops hallucinating stale facts.
- One memory, every tool — tell Claude Code you prefer Python, and Cursor already knows. Cross-platform memory via MCP or REST.
- $19/mo instead of $249+ — mem0 Pro costs $249/mo. Zep costs $475/mo. Smara Developer costs $19/mo with 200K memories. Same features, fraction of the price.
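As a rough sketch of how Ebbinghaus-style scoring behaves (Smara's exact formula isn't published; the base half-life and the reinforcement multiplier below are illustrative assumptions):

```python
import math

def decay_score(days_since_access: float, reinforcements: int,
                base_half_life: float = 7.0) -> float:
    """Ebbinghaus-style retention: recently accessed, frequently
    reinforced memories score near 1.0; stale ones decay toward 0."""
    # Each reinforcement stretches the half-life, flattening the curve.
    half_life = base_half_life * (1 + reinforcements)
    return math.exp(-days_since_access * math.log(2) / half_life)
```

With these assumed parameters, an untouched memory halves in weight after a week, while a reinforced one decays half as fast.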
| Feature | Smara | mem0 | Zep | Letta | agentmemory |
|---|---|---|---|---|---|
| Recall accuracy (top-5) | 94% | 91% | 89% | 88% | 82% |
| Search latency (p95) | 38ms | 65ms | 72ms | 95ms | 110ms |
| Cost / month (200K memories) | $19 | $249 | $475 | $200 | self-host |
| Ebbinghaus memory decay | Yes | No | No | No | No |
| Contradiction detection | Yes | No | Partial | No | No |
| Knowledge graph | Yes | No | No | Yes | No |
| MCP native | Yes | No | No | No | No |
| Agent-scoped memory | Yes | No | Yes | Yes | No |
| Self-hostable | Yes | No | No | Yes | Yes |
| Cross-platform (Claude + Cursor + Codex) | Yes | No | No | No | No |
Benchmarks were run on a 50K-memory dataset with Voyage AI embeddings and a pgvector HNSW index. Methodology →
```python
from smara import Smara

client = Smara(api_key="smara_...")
client.store("user_42", "Prefers Python for backend, TypeScript for frontend")
results = client.search("user_42", "language preferences")
```

Add to your MCP config (`~/.claude/mcp_config.json`, `.cursor/mcp.json`, etc.):
```json
{
  "smara": {
    "command": "npx",
    "args": ["-y", "@smara/mcp-server"],
    "env": { "SMARA_API_KEY": "smara_your_key_here" }
  }
}
```

Restart your tool. Memory loads automatically at conversation start; new facts are stored silently.
- Ebbinghaus decay — memories ranked by recency + reinforcement, not just similarity
- Contradiction detection — "prefers Python" vs "switched to Rust" flagged automatically
- Knowledge graph — connect related memories with typed, weighted edges and traverse them
- Agent-scoped memory — each AI agent gets its own namespace with attached skills
- Semantic vector search — pgvector HNSW with Voyage AI embeddings, sub-40ms p95
- Cross-platform memory — same memory across Claude Code, Cursor, Codex, LangChain, CrewAI
- MCP native — first-class Model Context Protocol support, one config line
- Namespace isolation — partition memories by project, team, or environment
- Source tagging — know which tool stored each fact
- REST API — three endpoints, works with any language or framework
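The knowledge-graph feature connects memories with typed, weighted edges. A toy in-memory traversal showing the idea (the actual `/v1/graph` request shape is not reproduced here):

```python
from collections import deque

# memory_id -> list of (related_id, edge_type, weight)
edges = {
    "m1": [("m2", "implies", 0.9)],
    "m2": [("m3", "contradicts", 0.8)],
    "m3": [],
}

def related(start, max_depth=2):
    """Breadth-first traversal over typed, weighted edges,
    collecting every memory reachable within max_depth hops."""
    seen, out, queue = {start}, [], deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't expand past the hop limit
        for nxt, edge_type, weight in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                out.append((nxt, edge_type, weight))
                queue.append((nxt, depth + 1))
    return out
```

Server-side, the architecture section notes this traversal is done with a recursive CTE in Postgres rather than in application code.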
All `/v1/*` endpoints require an `Authorization: Bearer <API_KEY>` header.
```bash
curl -X POST https://api.smara.io/v1/memories \
  -H "Authorization: Bearer $SMARA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "user_id": "user_42",
    "fact": "Prefers Python over TypeScript for backend work",
    "importance": 0.8,
    "source": "claude-code",
    "namespace": "default"
  }'
```

Smart storage: near-duplicates (cosine similarity >= 0.985) are merged into the existing memory, and contradictions (0.94–0.985) replace the old fact. The response reports `"stored"`, `"duplicate"`, or `"replaced"`.
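Those write-time thresholds can be read as a small routing function. A sketch mirroring the documented cutoffs (the merge and replace mechanics themselves are assumptions):

```python
def classify_write(cosine_sim: float) -> str:
    """Route a new fact by similarity to its nearest stored memory."""
    if cosine_sim >= 0.985:
        return "duplicate"   # merged into the existing memory
    if cosine_sim >= 0.94:
        return "replaced"    # contradiction: new fact supersedes the old one
    return "stored"          # unrelated: inserted as a new memory
```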
```bash
curl "https://api.smara.io/v1/memories/search?user_id=user_42&q=editor+preferences&limit=5" \
  -H "Authorization: Bearer $SMARA_API_KEY"
```

Results are ranked by a blend of semantic similarity and Ebbinghaus decay: relevant and recent beats relevant but old.
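In effect, a fresher memory can outrank a stale near-match. A toy sketch of the blended ranking (the weight `w` is an assumption, not Smara's published formula):

```python
hits = [
    {"fact": "Prefers Python", "similarity": 0.92, "decay": 0.85},
    {"fact": "Prefers Perl",   "similarity": 0.93, "decay": 0.10},
]

def rank_score(hit, w=0.7):
    # Weighted blend of semantic similarity and decay; w is illustrative.
    return w * hit["similarity"] + (1 - w) * hit["decay"]

hits.sort(key=rank_score, reverse=True)
# "Prefers Python" ranks first: slightly less similar, but far fresher.
```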
```bash
curl "https://api.smara.io/v1/users/user_42/context?q=preferences&top_n=5" \
  -H "Authorization: Bearer $SMARA_API_KEY"
```

Returns a pre-formatted context string ready to drop into a system prompt.
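The context endpoint slots straight into a chat request. A minimal stdlib sketch (the plain-text response body and message layout are assumptions; check the API docs for the exact shape):

```python
import os
import urllib.parse
import urllib.request

def load_context(user_id: str, query: str, top_n: int = 5) -> str:
    # GET /v1/users/{user_id}/context returns a pre-formatted context string.
    params = urllib.parse.urlencode({"q": query, "top_n": top_n})
    req = urllib.request.Request(
        f"https://api.smara.io/v1/users/{user_id}/context?{params}",
        headers={"Authorization": f"Bearer {os.environ['SMARA_API_KEY']}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode()

def build_messages(context: str, user_message: str) -> list:
    # Prepend the memory context to the system prompt.
    return [
        {"role": "system", "content": f"Known user context:\n{context}"},
        {"role": "user", "content": user_message},
    ]
```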
Full API docs: api.smara.io/docs
| | Free | Developer | Pro |
|---|---|---|---|
| Memories | 10,000 | 200,000 | 2,000,000 |
| Teams | 1 team / 3 members | 3 teams / 10 members | Unlimited / 50 members |
| AI Agents | 2 | 10 + 5 skills | Unlimited |
| Price | $0 | $19/mo | $99/mo |
| Provider | Plan | Memories | Price/mo |
|---|---|---|---|
| Smara | Developer | 200K | $19 |
| mem0 | Pro | 200K | $249 |
| Letta | Max | 200K | $200 |
| Zep | Flex+ | 200K | $475 |
Sign up free — no credit card required. Pricing calculator →
```python
# Before (mem0)
from mem0 import Memory

m = Memory()
m.add("Prefers dark mode", user_id="u1")

# After (Smara) — same pattern, lower cost
from smara import Smara

s = Smara(api_key="smara_...")
s.store("u1", "Prefers dark mode")
```

mem0 stores facts but never forgets stale ones. Smara's Ebbinghaus decay scoring means outdated preferences fade naturally, so your agent always has the freshest context.
| Tool | Method | Status |
|---|---|---|
| Claude Code | MCP server | Live |
| Cursor | MCP server | Live |
| Windsurf | MCP server | Live |
| OpenAI Codex | REST API | Live |
| CrewAI | Python SDK | smara-crewai |
| LangChain | Python SDK | smara-langchain |
| LlamaIndex | REST API | Live |
| Paperclip | Plugin | paperclip-plugin |
```bash
git clone https://github.com/smara-io/api.git
cd api
VOYAGE_API_KEY=your-key docker compose up -d
```

The API listens on localhost:3011, Postgres on localhost:5433. Database migrations run automatically on startup.
Manual setup: Node.js 20+, PostgreSQL 15+ with pgvector, Voyage AI key. See self-hosting docs.
Client (Claude Code / Cursor / Codex / LangChain / REST)
│
▼
Fastify API (Node.js)
│
├── Auth middleware (API key + tenant isolation)
├── /v1/memories — store, search, delete (vector + Ebbinghaus decay)
├── /v1/graph — connect, traverse, related (recursive CTE)
├── /v1/agents — CRUD agents, attach skills, agent-scoped memory
└── /v1/users — formatted LLM context
│
▼
PostgreSQL + pgvector
├── memories (embedding column, HNSW index)
├── memory_edges (knowledge graph)
├── agents, skills, agent_skills
└── Ebbinghaus decay computed at query time
│
Voyage AI (embeddings)
- Website: smara.io
- API Docs: api.smara.io/docs
- MCP Server: @smara/mcp-server on npm
- Python SDK: smara on PyPI
- Blog: How Ebbinghaus Forgetting Curves Make AI Agents Smarter
- Pricing Calculator: smara.io/#pricing
See CONTRIBUTING.md for development setup and guidelines.
MIT — see LICENSE.
Built by @parallelromb | smara.io | Twitter @SmaraMemo