Agent / Team framework for Go with local-first AI capabilities.
“AgentGo? It's useless and it consumes a lot of tokens.” -- some guy on the internet
Chinese Documentation · API Reference · Architecture
AgentGo is a Go framework for building Agent / Team based systems. Team, Agent, Task, Memory, MCP/tools, Skills, and PTC form the core runtime; CLI and UI are optional adapters. RAG is an optional knowledge-retrieval plugin for external documents when embeddings are configured.
You do not need an embedding model for the default experience. Basic agent runtime, MCP, skills, file-backed memory, tasks, and PTC work without vector search. Configure embeddings only when you explicitly want RAG, semantic vector recall, or vector-heavy retrieval.
```shell
go get github.com/liliang-cn/agent-go/v2
```

| Capability | Details |
|---|---|
| Agent | Multi-turn reasoning loop with planning, Auto-continuation, and Asynchronous Context Forking |
| Memory | Cognitive Layer: Evolution (Fact → Observation) + LLM-as-a-Judge Retrieval + Subconscious Worker |
| Tools | MCP (Model Context Protocol), Skills (YAML+Markdown), custom inline tools |
| PTC | LLM writes JavaScript; tools run in a Goja sandbox — cuts round-trips |
| Streaming | Token-by-token channel; Low-latency Streaming Tool Execution and Tombstone recovery |
| Providers | OpenAI, Anthropic, Azure, DeepSeek, Ollama — switchable at runtime |
| Teams | Persistent orchestrators + specialists, Actor-model subagent IPC, async task queues, cross-process tracking |
| Operator | Built-in execution agent with filesystem/web tools plus PTY and coding-agent session tooling |
AgentGo is easiest to understand as a small runtime core plus optional knowledge plugins:
LLM is the execution core. Runtime capabilities are built around it.
- It provides the base generation interface used by agents, PTC, tool selection, and optional RAG answers.
- Providers are runtime-selectable through the global pool.
- Standalone agents, orchestrators, specialists, and built-in agents all eventually run on the same LLM abstraction.
Think of it as: prompt + tools + policy -> model call.
Task is the first-class execution unit.
- Team is the process, Agent is the thread, and Task is the function invocation / activation frame.
- A task can span multiple LLM/tool frames while remaining one logical call.
- Task state is persisted with frames, events, continuation, awaiting state, and queue class (`task` or `microtask`).
- Tasks follow a Finish-Or-Block contract: they should end as `completed`, `blocked`, `failed`, or `yielded`, not as vague “next steps” or “would do” text.
Think of it as: input -> frames/events -> output.
Finish-Or-Block is part of the default runtime prompt and built-in agent policy. Agents are expected to execute until verified completion, call task_blocked with a concrete blocker when execution cannot continue, fail with traceable errors, or yield when external input is genuinely required.
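A minimal sketch of what the contract implies, using hypothetical status names that mirror the four terminal states above (this type and helper are illustrative, not the actual AgentGo Task API):

```go
package main

import "fmt"

// TaskStatus mirrors the Finish-Or-Block terminal states described above.
// The type and helper below are illustrative, not part of the AgentGo API.
type TaskStatus string

const (
	StatusCompleted TaskStatus = "completed"
	StatusBlocked   TaskStatus = "blocked"
	StatusFailed    TaskStatus = "failed"
	StatusYielded   TaskStatus = "yielded"
)

// isTerminal reports whether a task honored the Finish-Or-Block contract:
// it must land in one of the four terminal states, never in vague limbo.
func isTerminal(s TaskStatus) bool {
	switch s {
	case StatusCompleted, StatusBlocked, StatusFailed, StatusYielded:
		return true
	}
	return false
}

func main() {
	fmt.Println(isTerminal(StatusBlocked))          // a concrete blocker was reported
	fmt.Println(isTerminal(TaskStatus("thinking"))) // "next steps" text is not a terminal state
}
```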
Memory is the durable internal context layer.
- It stores facts, preferences, observations, and other reusable knowledge learned from interaction.
- It is separate from cache and separate from RAG documents.
- File memory works even when no embedder is available.
Think of it as: what the system has learned over time.
MCP is the tool transport layer.
- It standardizes tool access, whether the tool is built-in or external.
- AgentGo always includes built-in filesystem and websearch servers.
- MCP is how agents interact with files, web pages, and other process-like capabilities without hardcoding every operation into the model prompt.
Think of it as: how agents touch the outside world.
Skills are reusable workflows expressed as Markdown/YAML.
- They are higher-level than raw tools.
- They encode domain-specific procedures, instructions, and reusable operator playbooks.
- They can be user-invocable or model-invocable depending on configuration.
Think of them as: portable expert workflows.
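For illustration, a skill file pairs YAML frontmatter with Markdown instructions. The exact frontmatter schema is not specified here, so every key and value in this sketch is hypothetical:

```markdown
---
name: release-notes            # hypothetical key
description: Draft release notes from merged changes   # hypothetical key
---

# Release Notes Skill

1. Collect merged changes since the last tag.
2. Group them by area and draft workspace/release_notes.md.
```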
PTC (Programmatic Tool Calling) is the structured orchestration layer.
- Instead of emitting one tool call at a time, the model writes JavaScript to coordinate tools.
- This reduces round-trips for multi-step logic and data shaping.
- It is best for tool-heavy workflows where the model needs procedural control.
Think of it as: LLM-authored tool orchestration code.
An Agent is the basic runtime unit.
- It has instructions, tool access, optional RAG/memory/PTC/skills, and a session-aware execution loop.
- Agents can be built-in or user-defined.
- Built-in standalone agents include `Dispatcher`, `Responder`, `Operator`, and `Evaluator`.
Key standalone patterns:
- use `Responder` for general-purpose direct work
- use `Operator` for execution, validation, PTY sessions, and coding-agent invocation
- use `Evaluator` for product/business framing
- use `Dispatcher` for intake and orchestration
A Team is the persistent team layer on top of agents.
- A team has one `orchestrator` and multiple `specialists`.
- The orchestrator is still an agent, but with team-oriented orchestration rules.
- Orchestrators prefer async team work for implementation-heavy tasks.
- Team task state is persisted, so new CLI processes can inspect or continue work.
Think of it as: persistent multi-agent coordination with queueing and status.
RAG is not part of the default runtime path. It is an optional knowledge-retrieval plugin for external/project documents.
- It ingests documents, chunks them, embeds them, and stores them in SQLite/vector storage.
- It requires embeddings for the useful vector retrieval path.
- Use it for external documents and project knowledge, not for durable internal agent memory.
Think of it as: documents + embeddings -> retrieval context.
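The "chunk" step of that ingest pipeline can be sketched in a few lines. This is a generic fixed-window chunker with overlap, not AgentGo's actual implementation; the real chunker's sizes and boundary rules are internal details:

```go
package main

import (
	"fmt"
	"strings"
)

// chunk splits a document into word windows of `size` words that overlap
// by `overlap` words (size must exceed overlap). A minimal sketch of the
// chunking step in a RAG ingest pipeline.
func chunk(text string, size, overlap int) []string {
	words := strings.Fields(text)
	step := size - overlap
	var chunks []string
	for start := 0; start < len(words); start += step {
		end := start + size
		if end > len(words) {
			end = len(words)
		}
		chunks = append(chunks, strings.Join(words[start:end], " "))
		if end == len(words) {
			break
		}
	}
	return chunks
}

func main() {
	doc := "one two three four five six seven"
	for _, c := range chunk(doc, 4, 1) {
		fmt.Println(c)
	}
}
```

Each chunk would then be embedded and stored; overlap keeps a sentence that straddles a boundary retrievable from either side.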
AgentGo can be mapped to modern operating system concepts:
- Team = Process: A resource-isolated boundary with its own shared memory and task queue.
- Agent = Thread: The execution entity with a specific role and prompt, sharing the team's resources.
- Task = Function call: A first-class invocation frame with input, output, frames, events, and continuation state.
- SubAgent = Coroutine: Lightweight context forks dynamically spawned by the Agent for asynchronous, parallel tasks.
- Memory = Virtual Memory: LLM-based intelligent retrieval acts as `Page In`, while auto-compaction acts as `Page Out`.
- Subconscious = Daemon: A background worker pool that silently extracts and consolidates memories after a session ends.
At a high level the APIs map to those concepts like this:
- LLM / Agent runtime: `Ask`, `Chat`, `Run`, `RunStream`
- Memory: `WithMemory`, `memory_save`, `memory_recall`
- MCP: `WithMCP`, built-in filesystem/websearch tools, external MCP servers
- Skills: `WithSkills`, skill registration and invocation
- PTC: `WithPTC`, `execute_javascript`, `callTool()`
- Task: `manager.Tasks().Get/List/Await/Yield/Resume/Cancel`, `agentgo task trace`, `agentgo task inspect`
- Team: `CreateTeam`, `JoinTeam`, `DispatchTask`, `SubmitTeamTask`, `GetTask`
- Optional RAG: `WithRAG`, `rag_query`, document ingest/query flows
The practical layering is:
LLM -> tools/PTC -> Agent -> Team
with Memory, MCP, and Skills as core attachable capabilities; RAG is an optional external-knowledge plugin when embeddings are configured.
```go
svc, _ := agent.New("assistant").
	WithPrompt("You are a helpful assistant.").
	Build()
defer svc.Close()

reply, _ := svc.Ask(ctx, "What is Go?")
fmt.Println(reply)
```

```go
svc, _ := agent.New("assistant").
	WithPrompt("Answer questions based on the provided documents.").
	WithRAG().
	WithDBPath("~/.agentgo/data/agent.db").
	Build()
defer svc.Close()

// Ingest once
svc.Run(ctx, "Ingest ./docs/")

// Query
reply, _ := svc.Ask(ctx, "What does the spec say about error handling?")
```

```go
svc, _ := agent.New("assistant").
	WithMemory().
	Build()
defer svc.Close()

svc.Chat(ctx, "My name is Alice and I work on the Go team.")
reply, _ := svc.Chat(ctx, "What team am I on?")
// → "You're on the Go team, Alice." (Recall via hybrid vector/index search)
```

Run the interactive chat with memory visibility:
```shell
# Start interactive chat showing retrieved memories and reasoning
go run ./cmd/agentgo-cli chat --show-memory

# Enable JavaScript sandbox for complex logic
go run ./cmd/agentgo-cli chat --with-ptc
```

Run team workflows from the CLI:
```shell
# Create a standalone agent
agentgo agent add Scout --description "Independent field agent" \
  --instructions "Work independently, answer directly, and use tools when needed."

# Inspect or update that agent
agentgo agent show Scout
agentgo agent update Scout --model openai/gpt-5-mini

# Run a stored agent directly
agentgo agent run --agent Scout "Summarize the current repo structure"

# Built-in standalone agents are always available
agentgo agent show Dispatcher
agentgo agent show Operator

# Create a team (a default orchestrator is created automatically)
agentgo team add "Docs Team" --description "Documentation and release notes"

# Join the standalone agent to a team
agentgo agent join Scout --team "Docs Team" --role specialist

# Run a task through the default orchestrator and a specialist
agentgo team go "@Orchestrator @Scout summarize the UI/backend relationship and write workspace/ui_backend_overview.md"

# Inspect runtime task state; follows while tasks are still running or queued
agentgo team status "Docs Team"

# Run direct execution work through the built-in Operator
agentgo agent run --agent Operator "Write workspace/operator_probe.txt with the text: OPERATOR_OK"

# Leave the team again
agentgo agent leave Scout

# Delete the team when you're done
agentgo team delete "Docs Team"
```

AgentGo implements an evolving memory layer inspired by Hindsight (Cognitive Hierarchy) and PageIndex (Structural Navigation).
| Concept | Description |
|---|---|
| Facts | Raw atomic data points extracted from conversations (e.g., "User likes Go"). |
| Observations | LLM-consolidated insights synthesized from multiple facts via Reflect(). |
| Hierarchical Index | A _index/ directory with Markdown summaries for lightning-fast reasoning navigation. |
| Hybrid Search | Parallel Vector Search (similarity) + Index Navigator (reasoning) fused via RRF. |
| Traceability | Every observation tracks its EvidenceIDs, providing a clear audit trail of why the agent "knows" something. |
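The RRF fusion step behind hybrid search can be sketched in a few lines. This is a generic Reciprocal Rank Fusion implementation, not AgentGo's actual code; `k = 60` is the conventional constant, and the `fact-*` ids are made up:

```go
package main

import (
	"fmt"
	"sort"
)

// rrf fuses ranked result lists with Reciprocal Rank Fusion: each list
// contributes 1/(k + rank) per document, and documents are re-sorted by
// the summed score. Ties fall back to lexicographic order for determinism.
func rrf(rankings [][]string, k float64) []string {
	scores := map[string]float64{}
	for _, ranking := range rankings {
		for i, id := range ranking {
			scores[id] += 1.0 / (k + float64(i+1))
		}
	}
	ids := make([]string, 0, len(scores))
	for id := range scores {
		ids = append(ids, id)
	}
	sort.Slice(ids, func(a, b int) bool {
		if scores[ids[a]] != scores[ids[b]] {
			return scores[ids[a]] > scores[ids[b]]
		}
		return ids[a] < ids[b]
	})
	return ids
}

func main() {
	vectorHits := []string{"fact-1", "fact-2", "fact-3"} // similarity ranking
	indexHits := []string{"fact-1", "fact-3", "fact-4"}  // reasoning-navigation ranking
	fmt.Println(rrf([][]string{vectorHits, indexHits}, 60))
	// → [fact-1 fact-3 fact-2 fact-4]
}
```

A document that appears high in both rankings (`fact-1`) outranks one that is high in only one, which is exactly why RRF is a good fit for fusing vector similarity with index navigation.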
- Extraction: Agent identifies a fact during chat.
- Indexing: Fact is stored in a Markdown file with YAML metadata (Confidence, SourceType).
- Reflection: Periodically (e.g., every 5 facts), a background worker triggers `Reflect()` to merge facts into high-level Observations.
- Superseded: When information changes, old memories are marked as `stale` and linked to new ones via `SupersededBy`.
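For illustration only, a stored fact might look roughly like this. The file layout is internal; apart from the Confidence, SourceType, and SupersededBy metadata named above, every key in this sketch is hypothetical:

```markdown
---
id: fact-0042        # hypothetical identifier scheme
confidence: 0.9      # Confidence metadata
source_type: chat    # SourceType metadata
superseded_by: null  # set when a newer memory replaces this one
---

User likes Go.
```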
```go
// Implement your own module
type Module interface {
	ID() string
	RegisterTools(registry *ToolRegistry) error
}

svc, _ := agent.New("agent").
	WithModule(NewMyCustomModule()).
	Build()
```

AgentGo tools can now declare execution semantics directly. This lets the runtime make better decisions about batching, cancellation, permissions, and streaming state updates.
```go
svc, _ := agent.New("assistant").Build()
defer svc.Close()

svc.Register(
	agent.BuildTool("workspace_summary").
		Description("Return a compact summary of the active workspace.").
		ReadOnly(true).
		InterruptBehavior(agent.InterruptBehaviorCancel).
		Handler(func(ctx context.Context, args map[string]interface{}) (interface{}, error) {
			return map[string]any{
				"workspace": "current project",
				"mode":      "demo",
			}, nil
		}).
		Build(),
)

svc.AddToolWithMetadata(
	"write_release_note",
	"Write a release note file.",
	map[string]interface{}{
		"type": "object",
		"properties": map[string]interface{}{
			"path": map[string]interface{}{"type": "string"},
			"body": map[string]interface{}{"type": "string"},
		},
		"required": []string{"path", "body"},
	},
	func(ctx context.Context, args map[string]interface{}) (interface{}, error) {
		return "ok", nil
	},
	agent.ToolMetadata{
		Destructive:       true,
		InterruptBehavior: agent.InterruptBehaviorBlock,
	},
)
```

| Method | Returns | Session | Use case |
|---|---|---|---|
| `Ask(ctx, prompt)` | `(string, error)` | no | one-shot Q&A |
| `Chat(ctx, prompt)` | `(*ExecutionResult, error)` | yes (auto UUID) | conversational |
| `Run(ctx, goal, ...opts)` | `(*ExecutionResult, error)` | optional | full agent loop |
| `Stream(ctx, prompt)` | `<-chan string` | no | live token output |
| `ChatStream(ctx, prompt)` | `<-chan string` | yes | conversational + live |
| `RunStream(ctx, goal)` | `(<-chan *Event, error)` | optional | full event visibility |
```go
result, _ := svc.Run(ctx, "goal",
	agent.WithMaxTurns(20),
	agent.WithTemperature(0.7),
	agent.WithSessionID("my-session"),
	agent.WithStoreHistory(true),
)

result.Text()       // final answer as string
result.Err()        // non-nil if agent reported an error
result.HasSources() // true when RAG chunks were used
```

`RunStream()` now emits richer `state_update` events, including: `turn_stage`, `loop_transition`, `transition_reason`, `tool_state`, `preferred_agent`, `requires_tools`, `transition`.
At the manager level, standalone agents are persistent named runtimes:
- `CreateAgent`, `UpdateAgent`, `DeleteAgent`
- `GetAgentByName`, `ListAgents`, `ListStandaloneAgents`
- `GetAgentService`
- `GetAgentStatus`, `ListAgentStatuses`
Built-in standalone agents (Dispatcher, Responder, Operator, Evaluator) are seeded automatically and can be treated like normal named agents.
User-created standalone agents automatically receive a small built-in delegation surface:
- `list_builtin_agents`
- `delegate_builtin_agent`
- `submit_builtin_agent_task`
- `get_delegated_task_status`
This is the primary way a custom agent can keep its own role while offloading execution to Operator, general work to Responder, or business clarification to Evaluator.
With WithPTC(), the LLM generates JavaScript instead of JSON tool calls. The code runs in a Goja sandbox where callTool() is available:
```go
svc, _ := agent.New("analyst").
	WithPTC().
	WithTool(teamDef, teamHandler, "data").
	WithTool(expenseDef, expenseHandler, "data").
	Build()

// The LLM can now write:
//   const team = callTool("get_team", { dept: "eng" });
//   return team.members.map(m => ({
//     name: m.name,
//     spend: callTool("get_expenses", { id: m.id }).total
//   }));
```

When to use PTC: multiple dependent tool calls in one shot, data transformation before it hits the context window, conditional tool logic.
Memory and cache are different subsystems:
| Subsystem | Storage | What for |
|---|---|---|
| Memory | Markdown/YAML files or SQLite + vectors | Durable facts, observations, preferences, and reasoning context |
| Cache | In-memory or file-backed JSON entries | Disposable acceleration artifacts for query/vector/LLM/chunk reuse |
```go
// Enable cognitive memory
svc, _ := agent.New("agent").WithMemory().Build()

// LongRun agents share the same memory automatically
lr, _ := agent.NewLongRun(svc).
	WithInterval(5 * time.Minute).
	WithWorkDir("~/.agentgo/longrun").
	Build()
```

For file-backed memory stores, AgentGo now exposes prompt-friendly entrypoints and session helpers:
```go
fileStore, _ := store.NewFileMemoryStore("./memory")

_ = fileStore.WriteSessionMemory("session-123", "Current draft: keep the tone concise.")
sessionNote, _ := fileStore.ReadSessionMemory("session-123")

entrypoint, _ := fileStore.ReadEntrypoint() // reads MEMORY.md
headers, _ := fileStore.SelectRelevantHeaders(context.Background(), "Go backend concise tone", 3)

fmt.Println(sessionNote)
fmt.Println(entrypoint)
fmt.Println(headers)
```

Memory degrades gracefully:
- no embedder -> file memory still works
- file-backed memory uses Markdown + YAML frontmatter and PageIndex-style retrieval
- file-backed memory now maintains `MEMORY.md`, `_session/*.md`, and header selection APIs
- `remember:` prompts can be written directly to memory
- ordinary dialogue can also be extracted into memory via `StoreIfWorthwhile`
Cache is separate from memory:
- use `agentgo cache status|put|get|delete|clear`
- configure `cache.store_type = "memory"` or `cache.store_type = "file"`
LongRun runs an agent on a schedule with a persistent task queue:
```go
lr, _ := agent.NewLongRun(svc).
	WithInterval(10 * time.Minute).
	WithMaxActions(5).
	Build()

lr.AddTask(ctx, "Monitor RSS feeds and summarize new entries", nil)
lr.Start(ctx)
// ...
lr.Stop()
```

Features: SQLite task queue, heartbeat file, cron-style scheduling, shared DB memory with the parent agent.
```go
// Handoffs — specialist agents
orchestrator.RegisterAgent(researchAgent)
orchestrator.RegisterAgent(writerAgent)
// The LLM routes to the right agent via transfer_to_* tool calls

// SubAgents — scoped delegation
coordinator := agent.NewSubAgentCoordinator()
resultChan := coordinator.RunAsync(ctx, subAgent)
results := coordinator.WaitAll(ctx)
```

AgentGo has three layers of agent concepts:
- Standalone agents: long-lived named agents with their own role and tool budget
- Teams: a persistent team with one `orchestrator` and multiple `specialists`
- Built-in agents: system-provided standalone agents that are always available
The default built-ins are:
- `Dispatcher`: intake/orchestration for `agentgo chat`
- `Responder`: general-purpose direct worker
- `Operator`: execution/validation agent
- `Evaluator`: product/business representative
Inspect them directly:
```shell
agentgo agent show Dispatcher
agentgo agent show Responder
agentgo agent show Operator
agentgo agent show Evaluator
```

AgentGo now supports two delegation axes:
- Team delegation
  - `orchestrator -> specialists`
  - supports synchronous dispatch and persisted async team tasks
- Built-in delegation
  - `custom agent -> Responder / Operator / Evaluator`
  - useful when the custom agent should keep its own role but offload execution or business clarification
Conceptually:
- use Responder when you want a general-purpose built-in doer
- use Operator when the task is about execution, validation, files, PTY-backed commands, or coding-agent invocation
- use Evaluator when the task is about requirements, scope, tradeoffs, or acceptance criteria
Operator is the built-in execution layer.
At a concept level, it provides two API families:
- PTY session APIs
- start a command session
- send more input
- inspect output/status
- interrupt or stop the session
- Coding-agent APIs
  - start or inspect provider-aware sessions for `claude`, `gemini`, `codex`, and `opencode`
  - run one-shot coding-agent calls without forcing the model to guess shell commands
In practice, Operator is what QA, PM, or custom agents should delegate to when they need actual execution instead of just reasoning.
Simple CLI examples:
```shell
agentgo agent run --agent Operator "Write workspace/operator_probe.txt with the text: OPERATOR_OK"
agentgo agent run --agent Operator "Call codex and make it output exactly: RES_FROM_CODEX"
```

User-created standalone agents automatically get a small built-in delegation API:

- `list_builtin_agents`
- `delegate_builtin_agent`
- `submit_builtin_agent_task`
- `get_delegated_task_status`
This means a custom agent can keep its own role and capabilities, but still delegate:
- execution to `Operator`
- general work to `Responder`
- product/business clarification to `Evaluator`
AgentGo exposes a team-oriented manager API for standalone agents and team agents. An orchestrator is just an agent role inside a team.
```go
store, err := agent.NewStore(filepath.Join(cfg.DataDir(), "agent.db"))
if err != nil {
	panic(err)
}

manager := agent.NewTeamManager(store)
if err := manager.SeedDefaultMembers(); err != nil {
	panic(err)
}

scout, err := manager.CreateAgent(ctx, &agent.AgentModel{
	Name:         "Scout",
	Kind:         agent.AgentKindAgent,
	Description:  "Independent field agent",
	Instructions: "Work independently and answer directly.",
})
if err != nil {
	panic(err)
}

docsTeam, err := manager.CreateTeam(ctx, &agent.Team{
	Name:        "Docs Team",
	Description: "Documentation and release notes",
})
if err != nil {
	panic(err)
}

writer, err := manager.JoinTeam(ctx, scout.Name, docsTeam.ID, agent.AgentKindSpecialist)
if err != nil {
	panic(err)
}

result, err := manager.DispatchTask(ctx, writer.Name, "Write workspace/ui_backend_overview.md")
if err != nil {
	panic(err)
}
fmt.Println(result)
```

- A custom team created via `CreateTeam()` or `agentgo team add` automatically gets a default orchestrator.
- The orchestrator receives team roster and role summaries in its system prompt.
- Orchestrators prefer async team work for implementation-heavy tasks.
- Shared team tasks are persisted and can be inspected from new CLI processes.
- Orchestrators do not use generic `delegate_to_subagent` by default.
- `CreateAgent`, `UpdateAgent`, `DeleteAgent`, `GetAgentByName`, `ListAgents`, `ListStandaloneAgents`
- `JoinTeam`, `LeaveTeam`, `GetAgentService`
- `CreateTeam`, `ListTeams`, `GetTeamByName`
- `AddTeamAgent`, `CreateTeamAgent`, `ListTeamAgents`, `GetTeamAgentByName`
- `AddOrchestrator`, `AddSpecialist`, `ListOrchestrators`, `ListSpecialists` (role-specific helpers)
- `DispatchTask`, `DispatchTaskStream`
- `EnqueueSharedTask`, `ListSharedTasks`
- `SubmitAgentTask`, `SubmitTeamTask`, `GetTask`, `ListSessionTasks`
For runtime orchestration and monitoring:
- `GetTeamStatus`, `ListTeamStatuses`
- `GetAgentStatus`, `ListAgentStatuses`
- `GetLeadAgentForTeam`
- `SubscribeTask` for async task progress streams
- `DispatchTaskStreamWithOptions`, `ChatWithMemberStream`, `ChatWithMemberStreamWithOptions`
In practice, the API layers look like this:
- Standalone agent APIs: create, run, inspect, update
- Team APIs: create teams, join agents, dispatch tasks, track async work
- Built-in delegation APIs: let a custom agent explicitly call `Responder`, `Operator`, or `Evaluator`
```go
plan, _ := svc.Plan(ctx, "Deploy the new service")
// inspect plan.Steps, edit if needed
result, _ := svc.Execute(ctx, plan.ID)
```

Runtime layout is derived from `AGENTGO_HOME` (default: `~/.agentgo`).
Structured runtime config lives in data/agentgo.db.
```
~/.agentgo/
├── mcpServers.json     ← MCP server definitions
├── data/
│   ├── agentgo.db      ← Control plane: providers, runtime config, agent/team metadata
│   ├── cortex.db       ← Brain store: memory, vectors, graph, knowledge
│   └── memories/       ← File memory store (Markdown + YAML frontmatter)
├── skills/             ← SKILL.md files
├── intents/            ← Intent YAML files
└── workspace/          ← Agent working directory
```
| File | Default path | Purpose |
|---|---|---|
| `agentgo.db` | `$home/data/agentgo.db` | Runtime config, providers, MCP/skills paths, agent/team data |
| `cortex.db` | `$home/data/cortex.db` | Brain store for memory, vectors, graph, and knowledge |
| `history.db` (opt) | via `WithHistoryDBPath()` | Detailed tool-call logs — only created when `WithStoreHistory(true)` |
| `store_type` | Storage | Best for | Embedder requirement |
|---|---|---|---|
| `file` (default) | `data/memories/` | Most reliable default, local debugging, human-readable memory | Not required |
| `cortex` | `data/cortex.db` | Database-backed memory buckets and production-style local persistence | Optional; without one it uses lexical/text fallback |
| `memoryflow` | `data/cortex.db` | CortexDB MemoryFlow diary/session workflow and agent memory lifecycle | Optional; works without embeddings |
| `graphflow` | `data/cortex.db` | Memory that should also become an entity/relation graph | Optional; current graph extraction is deterministic |
Recommendation:
| Situation | Recommended store |
|---|---|
| No embedding model configured | file — works out of the box, human-readable, easiest to debug |
| Embedding model configured | graphflow — semantic vector recall + entity/relation graph, best overall recall quality |
- `file` is the default. It requires no embedding model and is fully transparent.
- `graphflow` is the recommended upgrade once an embedding provider is set. It stores memories in CortexDB (`cortex.db`) and combines vector search with deterministic graph extraction for higher-quality recall.
- Use `cortex` if you want database-backed storage without graph extraction.
- Use `memoryflow` for diary-style or session-lifecycle-aware memory workflows.
Set the runtime type through the CLI/UI or by persisting memory.store_type in agentgo.db. The current CLI runtime configuration is DB-backed; agentgo.toml is not the source of truth once agentgo.db has a value.
| `store_type` | Storage | Purpose |
|---|---|---|
| `memory` (default) | in-process memory | Fast ephemeral cache |
| `file` | `data/cache/<namespace>/*.json` | Restart-friendly cache persistence |
AgentGo derives the runtime storage layout automatically from AGENTGO_HOME:
- workspace: `$home/workspace`
- MCP filesystem allowlist: `$home/workspace`
- brain database: `$home/data/cortex.db`
- memory store: `$home/data/memories` for `file`, or `$home/data/cortex.db` for `cortex`, `memoryflow`, and `graphflow`
- cache directory: `$home/data/cache`
The remaining structured runtime values live in agentgo.db, including:
- LLM providers and pool strategy
- embedding providers and `rag.embedding_model`
- MCP config paths
- skills load paths
- per-agent preferred provider/model
```shell
agentgo cache status
agentgo cache put query my-key my-value --ttl 5m
agentgo cache get query my-key
agentgo cache clear query
```

Providers are configured in `agentgo.db` and managed through the CLI/UI/runtime APIs.
Supported: OpenAI · Anthropic · Azure OpenAI · DeepSeek · Ollama (local)
```
examples/
├── quickstart/                      — simplest possible agent
├── agent/
│   ├── agent_usage/                 — builder patterns, tool registration
│   ├── multi_agent_orchestration/   — handoffs + streaming
│   ├── longrun/                     — autonomous scheduled agent
│   └── realtime_chat/               — WebSocket session
├── rag/                             — document ingestion + Q&A
├── memory/
│   ├── chat_with_memory/            — DB memory + chat
│   └── smart_fusion/                — memory merging
├── ptc/
│   ├── custom_tools/                — JS sandbox tool orchestration
│   └── memory_chat/                 — PTC + memory
├── skills/                          — Skill files
└── mcp/                             — MCP tool servers
```
MIT — Copyright (c) 2024–2026 AgentGo Authors