Aionis is a memory kernel that records agent execution traces and compiles them into replayable workflows.
Instead of asking the model to reason through the same task repeatedly, Aionis allows agents to reuse successful executions.
Most memory systems store text:
- conversation history
- embeddings
- entity memory
But they do not remember how work gets done.
Agents still re-reason every task.
Aionis records execution history.
```
Agent Run
    ↓
Execution Trace
    ↓
Compile Playbook
    ↓
Replay Execution
```
Once a workflow succeeds, it becomes reusable.
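The trace-to-playbook loop above can be sketched in a few lines. This is an illustrative sketch only; `Step`, `Trace`, `compile_playbook`, and `replay` are hypothetical names, not the Aionis API:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    tool: str
    args: dict

@dataclass
class Trace:
    steps: list = field(default_factory=list)

    def record(self, tool, args):
        self.steps.append(Step(tool, args))

def compile_playbook(trace):
    # A playbook keeps only the ordered actions, not the model's reasoning.
    return [(s.tool, s.args) for s in trace.steps]

def replay(playbook, tools):
    # Re-execute the recorded actions against the live tools.
    return [tools[tool](**args) for tool, args in playbook]

# Record one successful run, then replay it without re-reasoning.
trace = Trace()
trace.record("add", {"a": 1, "b": 2})
playbook = compile_playbook(trace)
print(replay(playbook, {"add": lambda a, b: a + b}))  # [3]
```

The point of the compile step is that the playbook carries only actions, so replay never pays the reasoning cost again.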
Aionis implements a three-mode execution model:
| Mode | Description |
|---|---|
| simulate | audit-only validation |
| strict | deterministic execution |
| guided | execution with repair suggestions |
Replay focuses on actions, not LLM token streams.
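A minimal sketch of the three-mode dispatch, assuming a playbook is an ordered list of `(tool, args)` actions; the function and return shapes here are illustrative assumptions, not the Aionis API:

```python
def execute(playbook, tools, mode="strict"):
    """Run a playbook of (tool, args) actions under one of three modes."""
    results = []
    for tool, args in playbook:
        if mode == "simulate":
            # audit-only: check the step is valid, produce no side effects
            results.append(("would-run", tool) if tool in tools else ("unknown-tool", tool))
        elif mode == "strict":
            # deterministic: any failure aborts the whole replay
            results.append(tools[tool](**args))
        elif mode == "guided":
            # execute, but emit a repair suggestion instead of failing
            try:
                results.append(tools[tool](**args))
            except Exception as exc:
                results.append(("repair-suggestion", tool, str(exc)))
    return results
```

Strict mode gives deterministic pass/fail semantics; guided mode converts failures into reviewable repair suggestions rather than aborting.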
Aionis follows an audit-first design:
```
guided run
    ↓
repair suggestion
    ↓
human review
    ↓
shadow validation
    ↓
promotion
```
By default:
- repairs require review
- shadow validation runs first
- playbooks are not auto-promoted
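The defaults above amount to a two-gate promotion check: a repair advances only after human review and a shadow-validation pass. A minimal sketch with hypothetical names (not the Aionis API):

```python
from dataclasses import dataclass

@dataclass
class Repair:
    description: str
    approved: bool = False          # set by human review
    shadow_validated: bool = False  # set by a passing shadow run

def promote(repair):
    """Gate promotion on both review and shadow validation."""
    if not repair.approved:
        return "pending-review"
    if not repair.shadow_validated:
        return "pending-shadow-validation"
    return "promoted"
```

Because both flags default to `False`, a freshly suggested repair can never be auto-promoted.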
Aionis now exposes a bounded automation layer above replay.
- sequential automation DAG execution
- repair approval and rejection controls
- explicit shadow validation before activation
- reverse-order compensation retry controls
- telemetry, alerting, and operator recovery surfaces
Automation remains a thin orchestrator, not a general-purpose workflow engine.
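Sequential DAG execution with reverse-order compensation can be sketched as a saga-style runner. This is an illustrative sketch under assumed names, not the automation layer's actual implementation:

```python
def run_dag(steps, compensations):
    """Run (name, fn) steps in order; if one fails, run compensations
    for the already-completed steps in reverse order, then re-raise."""
    done = []
    try:
        for name, fn in steps:
            fn()
            done.append(name)
    except Exception:
        for name in reversed(done):
            comp = compensations.get(name)
            if comp is not None:
                comp()
        raise
    return done
```

Reverse order matters: later steps may depend on earlier ones, so their effects must be unwound first.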
See the public Automation API docs for the current surface:
- English: doc.aionisos.com/public/en/api-reference/01-automation-api-reference
- Chinese: doc.aionisos.com/public/zh/api-reference/01-automation-api-reference
Real workflow benchmark (100 runs):
- Baseline success rate: 98%
- Replay success rate: 98%
- Replay stability: 98%

Latency improvement:
- 9.21x faster on replay 1
- 19.29x faster on replay 2
| Capability | Memory Plugins | Aionis |
|---|---|---|
| Conversation recall | ✓ | ✓ |
| Vector search | ✓ | ✓ |
| Execution trace | ✗ | ✓ |
| Workflow replay | ✗ | ✓ |
| Policy loop | ✗ | ✓ |
| Governed repair | ✗ | ✓ |
Most systems store information.
Aionis stores how work gets done.
```
LLM
    ↓
Agent Planner
    ↓
Aionis Memory Kernel
    ↓
Tools / Environment
```
Aionis acts as the execution memory layer of the agent stack.
Aionis turns successful agent runs into governed, replayable workflows.
```shell
git clone https://github.com/Cognary/Aionis.git
cd Aionis
cp .env.example .env
make stack-up
curl -fsS http://localhost:3001/health
```

Minimal write + recall:
```shell
export BASE_URL="http://localhost:3001"

curl -sS "$BASE_URL/v1/memory/write" \
  -H 'content-type: application/json' \
  -d '{
    "tenant_id":"default",
    "scope":"default",
    "input_text":"Customer prefers email follow-up",
    "memory_lane":"shared",
    "nodes":[{"type":"event","memory_lane":"shared","text_summary":"Customer prefers email follow-up"}]
  }'

curl -sS "$BASE_URL/v1/memory/recall_text" \
  -H 'content-type: application/json' \
  -d '{"tenant_id":"default","scope":"default","query_text":"preferred follow-up channel","limit":5}'
```

- TypeScript SDK: @aionis/sdk
- Python SDK: aionis-sdk
- Docker image: ghcr.io/cognary/aionis:0.2.17
- Standalone image: ghcr.io/cognary/aionis:standalone-v0.2.17
- Integration guides: MCP / OpenWork / LangGraph / OpenClaw
TypeScript SDK example:
```typescript
import { AionisClient } from "@aionis/sdk";

const client = new AionisClient({
  base_url: "https://api.your-domain.com",
  api_key: process.env.AIONIS_API_KEY,
});

await client.write({
  scope: "default",
  input_text: "Customer prefers email follow-up",
  memory_lane: "shared",
  nodes: [{ type: "event", memory_lane: "shared", text_summary: "Customer prefers email follow-up" }],
});

const out = await client.recallText({ query_text: "preferred follow-up channel", limit: 5, scope: "default" });
console.log(out.request_id);
```

Python SDK example:
```python
from aionis_sdk import AionisClient

client = AionisClient(
    base_url="https://api.your-domain.com",
    api_key="<your-api-key>",
)

client.write({
    "scope": "default",
    "input_text": "Customer prefers email follow-up",
    "memory_lane": "shared",
    "nodes": [{"type": "event", "memory_lane": "shared", "text_summary": "Customer prefers email follow-up"}],
})

out = client.recall_text({"scope": "default", "query_text": "preferred follow-up channel", "limit": 5})
print(out.get("request_id"))
```

Run weekly strict evidence:

```shell
npm run -s evidence:weekly -- --scope default --window-hours 168 --strict
```

Run production core gate:

```shell
npm run -s gate:core:prod -- --base-url "http://localhost:3001" --scope default
```

Replay-learning regression coverage:

```shell
# validate replay_learning_projection fatal vs retryable classification
npm run -s e2e:replay-learning-fault-smoke

# validate replay-learning episode archival by TTL and rule stabilization
npm run -s e2e:replay-learning-retention-smoke
```

Public benchmark snapshot and reproduction commands:
- Get Started
- Build Memory Workflows
- Control and Policy
- Operate and Production
- Integrations
- Reference
- Benchmarks
Licensed under the Apache License 2.0. See LICENSE.