English | 简体中文
EasyAgent is a lightweight agent system built around a small set of core abstractions:
- `BaseLLM` for model access
- `BaseLoop` for execution strategy
- `BaseMemory` for full conversation history
- `BaseContext` for model-facing context assembly
- `BaseCapability` for optional features such as tools, skills, and sandbox resources
The project is intentionally incremental: each layer is usable on its own, and higher-level features are built directly on top of lower-level ones.
```
LLM -> Loop -> Memory / Context -> Capability -> Agent
```
Core ideas:
- `Agent` is a thin orchestrator
- `AgentSession` owns run-time state
- `Memory` stores full history
- `Context` decides what the model sees
- `Capability` adds optional behavior without creating more agent subclasses
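The memory/context split above can be sketched in a few lines of plain Python. The class names mirror the library's, but this is an illustration of the pattern, not the real implementation: memory is an append-only record of everything, while the context policy decides which slice the model actually sees.

```python
from dataclasses import dataclass, field


@dataclass
class Memory:
    """Stores the full conversation history, append-only."""
    messages: list = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})


@dataclass
class SlidingWindowContext:
    """Decides what the model sees: only the last N messages."""
    max_messages: int

    def build(self, memory: Memory) -> list:
        return memory.messages[-self.max_messages:]


memory = Memory()
for i in range(10):
    memory.add("user", f"message {i}")

context = SlidingWindowContext(max_messages=3)
window = context.build(memory)
print(len(memory.messages))                 # full history is kept: 10
print([m["content"] for m in window])       # the model sees only the last 3
```

Because the policy lives in the context object, swapping `SlidingWindowContext` for a summarizing or full-history variant changes what the model sees without touching stored history.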
- Multi-model support through LiteLLM
- ReAct and single-turn loop abstractions
- Memory / context split
- Tool calling via `ToolManager`
- Skills with progressive disclosure
- Sandbox support through capability composition
- Local and Docker sandbox implementations
Install from PyPI:

```bash
pip install easy-agent-sdk
```

Optional extras:

```bash
pip install easy-agent-sdk[sandbox]
pip install easy-agent-sdk[web]
pip install easy-agent-sdk[all]
```

From source:
```bash
git clone https://github.com/SNHuan/EasyAgent.git
cd EasyAgent
pip install -e ".[dev]"
```

Create a config file such as `config.yaml`:
```yaml
debug: true

models:
  gpt-4o-mini:
    api_type: openai
    base_url: https://api.openai.com/v1
    api_key: sk-xxx
    kwargs:
      temperature: 0.7
      max_tokens: 4096
```

Then point `EA_DEFAULT_CONFIG` to it:
```bash
export EA_DEFAULT_CONFIG=/path/to/config.yaml
```

Minimal `ReactAgent`:
```python
import asyncio

from easyagent import InMemoryMemory, LiteLLMModel, ReactAgent, SlidingWindowContext


async def main() -> None:
    model = LiteLLMModel(model="gpt-4o-mini")
    agent = ReactAgent(
        model=model,
        system_prompt="You are a concise assistant.",
        memory=InMemoryMemory(),
        context=SlidingWindowContext(max_messages=12),
        max_iterations=5,
    )
    result = await agent.run("Introduce EasyAgent in one sentence.")
    print(result)


asyncio.run(main())
```

There is also a runnable example:
```bash
python examples/simple_react_agent.py
```

Define a tool with `@register_tool`:
```python
from easyagent.tool import register_tool


@register_tool
class GetWeather:
    name = "get_weather"
    type = "function"
    description = "Get weather for a city."
    parameters = {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    }

    def init(self) -> None:
        pass

    def execute(self, city: str, **kwargs) -> str:
        return f"The weather in {city} is sunny."
```

Use it with `ReactAgent`:
```python
agent = ReactAgent(
    model=LiteLLMModel(model="gpt-4o-mini"),
    tools=["get_weather"],
)
```

Skills are markdown-based capability packages loaded on demand.
Directory layout:

```
./skills/
  my-skill/
    SKILL.md
```

Example `SKILL.md`:
```markdown
---
name: my-skill
description: One-line summary shown before loading.
allowed-tools:
  - get_weather
---

# Full instructions
```

Usage:
```python
agent = ReactAgent(
    model=LiteLLMModel(model="gpt-4o-mini"),
    skills=["my-skill"],
    skill_dir="./skills",
)
```

The model only sees the skill summary at first. When it decides to load the skill, `SkillCapability` returns the full body and activates the declared tools for the current session.
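The progressive-disclosure flow can be sketched as a small registry. This is an illustrative pattern, not the library's `SkillCapability` code: summaries are always visible, while the full body and the `allowed-tools` activation only happen when a skill is explicitly loaded.

```python
from dataclasses import dataclass


@dataclass
class Skill:
    name: str
    description: str    # one-line summary, always visible to the model
    body: str           # full instructions, returned only on load
    allowed_tools: list


class SkillRegistry:
    """Sketch of progressive disclosure: expose summaries up front,
    return the full body and activate tools only when requested."""

    def __init__(self, skills) -> None:
        self._skills = {s.name: s for s in skills}
        self.active_tools = set()

    def summaries(self) -> str:
        # What the model sees before loading anything.
        return "\n".join(f"- {s.name}: {s.description}" for s in self._skills.values())

    def load(self, name: str) -> str:
        # Loading a skill activates its declared tools for the session.
        skill = self._skills[name]
        self.active_tools.update(skill.allowed_tools)
        return skill.body


registry = SkillRegistry([
    Skill("my-skill", "One-line summary shown before loading.",
          "# Full instructions", ["get_weather"]),
])
print(registry.summaries())      # only the summary line
body = registry.load("my-skill")
print(body)                      # now the full instructions
print(registry.active_tools)     # get_weather is active for this session
```

Keeping activation session-scoped means a skill's tools never leak into runs that did not load it.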
`SandboxAgent` is now a thin preset built from:

- `SandboxCapability`
- `ToolCapability`
- `ReActLoop`
Example:
```python
import asyncio

from easyagent import LiteLLMModel, SandboxAgent
from easyagent.sandbox import LocalSandbox


async def main() -> None:
    model = LiteLLMModel(model="gpt-4o-mini")
    agent = SandboxAgent(
        model=model,
        sandbox=LocalSandbox(),
    )
    result = await agent.run("Run a short Python command and tell me the output.")
    print(result)


asyncio.run(main())
```

Built-in sandbox tools:
- `bash`
- `write_file`
- `read_file`
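A minimal sketch of what such a local sandbox tool surface could look like. The method names mirror the built-in tools above, but this is an assumption-laden illustration, not the library's `LocalSandbox`: commands and file access are confined to a throwaway working directory.

```python
import subprocess
import tempfile
from pathlib import Path


class LocalSandboxSketch:
    """Illustrative local sandbox: every tool operates inside a
    temporary working directory created for the session."""

    def __init__(self) -> None:
        self.workdir = Path(tempfile.mkdtemp(prefix="sandbox-"))

    def bash(self, command: str) -> str:
        # Run a shell command with the sandbox directory as cwd.
        result = subprocess.run(
            command, shell=True, cwd=self.workdir,
            capture_output=True, text=True, timeout=30,
        )
        return result.stdout + result.stderr

    def write_file(self, path: str, content: str) -> None:
        target = self.workdir / path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)

    def read_file(self, path: str) -> str:
        return (self.workdir / path).read_text()


sandbox = LocalSandboxSketch()
sandbox.write_file("hello.txt", "hi from the sandbox")
print(sandbox.read_file("hello.txt"))
print(sandbox.bash("ls"))  # the file is visible inside the working directory
```

A Docker-backed variant would keep the same three-method surface and swap `subprocess.run` for execution inside a container, which is why sandbox support composes cleanly as a capability.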
```
easyagent/
├── agent/       # Agent, ReactAgent, SandboxAgent, AgentSession
├── capability/  # BaseCapability, Tool/Skill/Sandbox capabilities
├── context/     # FullContext, SlidingWindowContext, SummaryContext
├── loop/        # BaseLoop, ReActLoop, SingleTurnLoop
├── memory/      # BaseMemory, InMemoryMemory
├── model/       # BaseLLM, LiteLLMModel, Message, ToolCall
├── sandbox/     # BaseSandbox, DockerSandbox, LocalSandbox
├── skill/       # Skill, SkillManager, SKILL.md loader
├── tool/        # Tool protocol, ToolManager, built-in tools
├── prompt/      # Prompt templates
├── config/      # Config loading
└── debug/       # Logging helpers
```
The current codebase has already been migrated to the new architecture:
- session-owned runtime state
- memory/context split
- capability-based feature composition
MCP integration and broader documentation cleanup are still future work.
MIT License © 2025 Yiran Peng