Experience-driven continuous evolution for AI agents — turning every interaction into a lasting improvement.
Experience → Skill → Improvement → Consolidation — The complete evolution loop for AI agents.
Adapted from Hermes Agent (Nous Research), localized for OpenClaw.
| Mechanism | Description |
|---|---|
| Auto Retrospective | After complex tasks, automatically review and extract reusable patterns |
| Layered Memory | Instant / Working / Experience / Session retrieval — structured knowledge layers |
| Error-Driven Improvement | When corrected, immediately update related knowledge (inspired by KEPA) |
| Skill Audit Cycle | Periodic review of accumulated experience → upgrade to formal Skills |
| Progressive Disclosure | Load only what's needed, when it's needed — save tokens |
Most AI agents repeat the same mistakes. Hermes Learning Loop breaks that cycle:
- Agents learn from corrections — not just from scratch
- Errors become permanent improvements — not forgotten after the session
- Experience compounds — each task makes the next one better
- No bloat — only meaningful knowledge is retained
```shell
# Via ClawHub (recommended)
clawhub install hermes-learning-loop

# Or manually
cp -r hermes-learning-loop/ ~/.openclaw/workspace/skills/
```

Then restart OpenClaw and the skill activates automatically.
- No shortcuts — complex tasks must be reviewed, no skipping
- No fabrication — do not pretend to know what you have not learned
- No hoarding — distill promptly; do not let MEMORY.md bloat
- No silos — new knowledge must connect to existing knowledge
- No overkill — simple operations do not need to be captured
| Skill | Approach |
|---|---|
| self-learning | Config file updates + learning record system |
| self-improving-agent-cn | Error capture + best practice logging |
| hermes-learning-loop | Experience distillation + skill audit + progressive loading |
- Language: Python + Markdown
- Platform: OpenClaw
- Inspiration: Hermes Agent by Nous Research, KEPA error learning
- Author: 林烬 (Lin Jin) — 学来学去学习社
MIT License — see LICENSE file.