Lossless Context Management plugin for OpenClaw, based on the LCM paper. Replaces OpenClaw's built-in sliding-window compaction with a DAG-based summarization system that preserves every message while keeping active context within model token limits.
Two ways to learn: read on below, or check out the animated visualization.
When a conversation grows beyond the model's context window, OpenClaw, like most agent frameworks, normally truncates older messages. LCM instead:
- Persists every message in a SQLite database, organized by conversation
- Summarizes chunks of older messages into leaf summaries using your configured LLM
- Condenses summaries into higher-level nodes as they accumulate, forming a DAG (directed acyclic graph)
- Assembles context each turn by combining summaries + recent raw messages
- Provides tools (`lcm_grep`, `lcm_describe`, `lcm_expand`) so agents can search and recall details from compacted history
Nothing is lost. Raw messages stay in the database. Summaries link back to their source messages. Agents can drill into any summary to recover the original detail.
It feels like talking to an agent that never forgets. Because it doesn't. In normal operation, you'll never need to think about compaction again.
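Conceptually, each turn's context is built from the active summary nodes plus a protected tail of recent raw messages. A minimal sketch of that assembly, with hypothetical types and names (the plugin's real logic lives in `src/assembler.ts` and is more involved):

```typescript
// Hypothetical sketch of DAG-based context assembly, not the plugin's code.
interface Message { id: number; text: string; }
interface Summary { text: string; sourceIds: number[]; } // links back to raw messages

function assembleContext(
  summaries: Summary[],   // active summary nodes, oldest first
  messages: Message[],    // full raw history, oldest first
  freshTailCount: number, // recent messages protected from compaction
): string[] {
  const tail = messages.slice(-freshTailCount);
  return [
    ...summaries.map(
      (s) => `[summary of messages ${s.sourceIds[0]}-${s.sourceIds[s.sourceIds.length - 1]}] ${s.text}`,
    ),
    ...tail.map((m) => m.text),
  ];
}

const msgs: Message[] = [1, 2, 3, 4, 5].map((id) => ({ id, text: `msg ${id}` }));
const sums: Summary[] = [{ text: "early discussion", sourceIds: [1, 2, 3] }];
console.log(assembleContext(sums, msgs, 2));
// → ["[summary of messages 1-3] early discussion", "msg 4", "msg 5"]
```

The key property is that a summary carries pointers back to its source messages, which is what makes `lcm_expand`-style drill-down possible.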
- OpenClaw with plugin context engine support
- Node.js 22+
- An LLM provider configured in OpenClaw (used for summarization)
Use OpenClaw's plugin installer (recommended):
```shell
openclaw plugins install @martian-engineering/lossless-claw
```

If you're running from a local OpenClaw checkout, use:

```shell
pnpm openclaw plugins install @martian-engineering/lossless-claw
```

For local plugin development, link your working copy instead of copying files:

```shell
openclaw plugins install --link /path/to/lossless-claw
# or from a local OpenClaw checkout:
# pnpm openclaw plugins install --link /path/to/lossless-claw
```

The install command records the plugin, enables it, and applies compatible slot selection (including `contextEngine` when applicable).
In most cases, no manual JSON edits are needed after openclaw plugins install.
If you need to set it manually, ensure the context engine slot points at `lossless-claw`:
```json
{
  "plugins": {
    "slots": {
      "contextEngine": "lossless-claw"
    }
  }
}
```

Restart OpenClaw after configuration changes.
LCM is configured through a combination of plugin config and environment variables. Environment variables take precedence for backward compatibility.
Add a `lossless-claw` entry under `plugins.entries` in your OpenClaw config:
```json
{
  "plugins": {
    "entries": {
      "lossless-claw": {
        "enabled": true,
        "config": {
          "freshTailCount": 32,
          "contextThreshold": 0.75,
          "incrementalMaxDepth": -1
        }
      }
    }
  }
}
```

| Variable | Default | Description |
|---|---|---|
| `LCM_ENABLED` | `true` | Enable/disable the plugin |
| `LCM_DATABASE_PATH` | `~/.openclaw/lcm.db` | Path to the SQLite database |
| `LCM_CONTEXT_THRESHOLD` | `0.75` | Fraction of context window that triggers compaction (0.0–1.0) |
| `LCM_FRESH_TAIL_COUNT` | `32` | Number of recent messages protected from compaction |
| `LCM_LEAF_MIN_FANOUT` | `8` | Minimum raw messages per leaf summary |
| `LCM_CONDENSED_MIN_FANOUT` | `4` | Minimum summaries per condensed node |
| `LCM_CONDENSED_MIN_FANOUT_HARD` | `2` | Relaxed fanout for forced compaction sweeps |
| `LCM_INCREMENTAL_MAX_DEPTH` | `0` | How deep incremental compaction goes (0 = leaf only, -1 = unlimited) |
| `LCM_LEAF_CHUNK_TOKENS` | `20000` | Max source tokens per leaf compaction chunk |
| `LCM_LEAF_TARGET_TOKENS` | `1200` | Target token count for leaf summaries |
| `LCM_CONDENSED_TARGET_TOKENS` | `2000` | Target token count for condensed summaries |
| `LCM_MAX_EXPAND_TOKENS` | `4000` | Token cap for sub-agent expansion queries |
| `LCM_LARGE_FILE_TOKEN_THRESHOLD` | `25000` | File blocks above this size are intercepted and stored separately |
| `LCM_LARGE_FILE_SUMMARY_PROVIDER` | `""` | Provider override for large-file summarization |
| `LCM_LARGE_FILE_SUMMARY_MODEL` | `""` | Model override for large-file summarization |
| `LCM_SUMMARY_MODEL` | (from OpenClaw) | Model for summarization (e.g. `anthropic/claude-sonnet-4-20250514`) |
| `LCM_SUMMARY_PROVIDER` | (from OpenClaw) | Provider override for summarization |
| `LCM_AUTOCOMPACT_DISABLED` | `false` | Disable automatic compaction after turns |
| `LCM_PRUNE_HEARTBEAT_OK` | `false` | Retroactively delete HEARTBEAT_OK turn cycles from LCM storage |
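To build intuition for how the fanout and depth settings interact, here is a toy model of the condensation cascade. This is an assumed simplification for illustration, not the plugin's actual algorithm (`src/compaction.ts`):

```typescript
// Toy model: whenever a level accumulates at least `minFanout` nodes,
// they collapse into parent nodes at the next level, up to `maxDepth`
// levels above the leaves (-1 = unlimited, mirroring
// LCM_INCREMENTAL_MAX_DEPTH). Returns the node count at each level.
function condense(leafCount: number, minFanout: number, maxDepth: number): number[] {
  const levels: number[] = [leafCount];
  let depth = 0;
  while (maxDepth < 0 || depth < maxDepth) {
    const current = levels[levels.length - 1];
    if (current < minFanout) break; // not enough nodes to condense further
    levels.push(Math.floor(current / minFanout));
    depth++;
  }
  return levels;
}

console.log(condense(32, 4, -1)); // → [32, 8, 2]: the DAG cascades two levels up
console.log(condense(32, 4, 0));  // → [32]: leaf-only, no condensation above leaves
```

With `maxDepth = -1` the cascade stops on its own once a level has fewer than `minFanout` nodes, which is why unlimited depth stays cheap in practice.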
A recommended starting point via environment variables:

```shell
LCM_FRESH_TAIL_COUNT=32
LCM_INCREMENTAL_MAX_DEPTH=-1
LCM_CONTEXT_THRESHOLD=0.75
```
- `freshTailCount=32` protects the last 32 messages from compaction, giving the model enough recent context for continuity.
- `incrementalMaxDepth=-1` enables unlimited automatic condensation after each compaction pass: the DAG cascades as deep as needed. Set to `0` (the default) for leaf-only compaction, or a positive integer for a specific depth cap.
- `contextThreshold=0.75` triggers compaction when context reaches 75% of the model's window, leaving headroom for the model's response.
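The threshold math is simple enough to sketch directly (hypothetical helper, not plugin code):

```typescript
// Compaction triggers once used tokens reach threshold * context window.
function shouldCompact(usedTokens: number, contextWindow: number, threshold = 0.75): boolean {
  return usedTokens >= contextWindow * threshold;
}

// With a 200k-token window and the default 0.75 threshold,
// compaction kicks in at 150k used tokens.
console.log(shouldCompact(150_000, 200_000)); // → true
console.log(shouldCompact(120_000, 200_000)); // → false
```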
LCM preserves history through compaction, but it does not change OpenClaw's core session reset policy. If sessions are resetting sooner than you want, increase OpenClaw's session.reset.idleMinutes or use a channel/type-specific override.
```json
{
  "session": {
    "reset": {
      "mode": "idle",
      "idleMinutes": 10080
    }
  }
}
```

- `session.reset.mode: "idle"` keeps a session alive until the idle window expires.
- `session.reset.idleMinutes` is the actual reset interval in minutes.
- OpenClaw does not currently enforce a maximum `idleMinutes`; in source it is validated only as a positive integer.
- If you also use daily reset mode, `idleMinutes` acts as a secondary guard: the session resets when either the daily boundary or the idle window is reached, whichever comes first.
- Legacy `session.idleMinutes` still works, but OpenClaw prefers `session.reset.idleMinutes`.
Useful values:
- `1440` = 1 day
- `10080` = 7 days
- `43200` = 30 days
- `525600` = 365 days
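These values are just days converted to minutes:

```typescript
// idleMinutes is expressed in minutes, so multiply days by 24 * 60.
const daysToIdleMinutes = (days: number): number => days * 24 * 60;

console.log(daysToIdleMinutes(7));   // → 10080
console.log(daysToIdleMinutes(365)); // → 525600
```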
For most long-lived LCM setups, a good starting point is:
```json
{
  "session": {
    "reset": {
      "mode": "idle",
      "idleMinutes": 10080
    }
  }
}
```

- Configuration guide
- Architecture
- Agent tools
- TUI Reference
- lcm-tui
- Optional: enable FTS5 for fast full-text search
```shell
# Run tests
npx vitest

# Type check
npx tsc --noEmit

# Run a specific test file
npx vitest test/engine.test.ts
```

```
index.ts                      # Plugin entry point and registration
src/
  engine.ts                   # LcmContextEngine — implements ContextEngine interface
  assembler.ts                # Context assembly (summaries + messages → model context)
  compaction.ts               # CompactionEngine — leaf passes, condensation, sweeps
  summarize.ts                # Depth-aware prompt generation and LLM summarization
  retrieval.ts                # RetrievalEngine — grep, describe, expand operations
  expansion.ts                # DAG expansion logic for lcm_expand_query
  expansion-auth.ts           # Delegation grants for sub-agent expansion
  expansion-policy.ts         # Depth/token policy for expansion
  large-files.ts              # File interception, storage, and exploration summaries
  integrity.ts                # DAG integrity checks and repair utilities
  transcript-repair.ts        # Tool-use/result pairing sanitization
  types.ts                    # Core type definitions (dependency injection contracts)
  openclaw-bridge.ts          # Bridge utilities
db/
  config.ts                   # LcmConfig resolution from env vars
  connection.ts               # SQLite connection management
  migration.ts                # Schema migrations
store/
  conversation-store.ts       # Message persistence and retrieval
  summary-store.ts            # Summary DAG persistence and context item management
  fts5-sanitize.ts            # FTS5 query sanitization
tools/
  lcm-grep-tool.ts            # lcm_grep tool implementation
  lcm-describe-tool.ts        # lcm_describe tool implementation
  lcm-expand-tool.ts          # lcm_expand tool (sub-agent only)
  lcm-expand-query-tool.ts    # lcm_expand_query tool (main agent wrapper)
  lcm-conversation-scope.ts   # Conversation scoping utilities
  common.ts                   # Shared tool utilities
test/                         # Vitest test suite
specs/                        # Design specifications
openclaw.plugin.json          # Plugin manifest with config schema and UI hints
tui/                          # Interactive terminal UI (Go)
  main.go                     # Entry point and bubbletea app
  data.go                     # Data loading and SQLite queries
  dissolve.go                 # Summary dissolution
  repair.go                   # Corrupted summary repair
  rewrite.go                  # Summary re-summarization
  transplant.go               # Cross-conversation DAG copy
  prompts/                    # Depth-aware prompt templates
.goreleaser.yml               # GoReleaser config for TUI binary releases
```
MIT