Interactive CLI for bootstrapping Warden agents with support for both A2A and LangGraph protocols.
Install the CLI globally:
```bash
npm install -g warden-code
```

Then run it with:

```bash
warden
```

Or just run it directly:

```bash
npx warden-code
```

This launches an interactive CLI where you can create new agents.
| Command | Description |
|---|---|
| `/new [path]` | Create a new agent interactively (optionally provide a path) |
| `/build [path]` | Enter AI-powered build mode to modify your agent via chat |
| `/chat <url>` | Chat with a running agent via A2A or LangGraph |
| `/help` | Show available commands |
| `/clear` | Clear the terminal |
| `/exit` | Exit the CLI |
Run /new to start the agent creation wizard:
- Agent name - a name for your agent
- Description - what your agent does
- Model - Echo (just a demo that echoes input) or OpenAI (GPT-powered)
- Capability - Streaming or Multi-turn conversations
- Skills - Define agent capabilities (optional)
After generation, your agent will be ready at `src/agent.ts`.
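What the generated `src/agent.ts` contains depends on the template you pick. As a rough sketch only (the exact handler signature comes from `@wardenprotocol/agent-kit` and is an assumption here), an Echo + Streaming handler might look like:

```ts
// src/agent.ts — hypothetical sketch; the real generated file and the
// exact agent-kit handler signature may differ.

// Echo + Streaming: stream the user's message straight back.
export async function* handler(input: { message: string }) {
  yield `You said: ${input.message}`;
}
```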
| Template | Description |
|---|---|
| Echo + Streaming | Minimal streaming agent that echoes input |
| Echo + Multi-turn | Minimal multi-turn conversation agent |
| OpenAI + Streaming | GPT-powered agent with streaming responses |
| OpenAI + Multi-turn | GPT-powered agent with conversation history |
All templates use `AgentServer` from `@wardenprotocol/agent-kit`, which exposes both:
- A2A Protocol
- LangGraph Protocol
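Concretely, the generated server wires your handler into both protocol surfaces. A minimal sketch, assuming `AgentServer` takes the handler in its constructor and exposes a `listen()` method (both assumptions, not the confirmed API):

```ts
// src/server.ts — hypothetical sketch; AgentServer is named in this
// README, but the options object and listen() call are assumptions.
import { AgentServer } from "@wardenprotocol/agent-kit";
import { handler } from "./agent";

// One server, two protocol surfaces: A2A and LangGraph.
const server = new AgentServer({ handler });
server.listen(3000);
```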
A generated project has the following structure:

```
my-agent/
├── src/
│   ├── agent.ts        # Your agent logic (handler function)
│   └── server.ts       # Server setup and configuration
├── agent-card.json     # Agent identity, capabilities, and skills (A2A protocol)
├── package.json
├── tsconfig.json
├── Dockerfile
├── .env.example
└── .gitignore
```
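Build mode's `/chat` resolves the agent URL from `AGENT_URL` in your `.env` (see below), so `.env.example` plausibly looks something like this. Everything except `AGENT_URL` is an assumption:

```
# Hypothetical .env.example — only AGENT_URL is referenced elsewhere in
# this README; the other variable names are assumptions.
PORT=3000
AGENT_URL=http://localhost:3000
OPENAI_API_KEY=your-key-here
```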
Build and start the agent:

```bash
cd my-agent
npm run build
npm run agent
```

Your agent will be available at http://localhost:3000.
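As a quick smoke test, you can fetch the agent card. The A2A protocol serves it from a well-known path; whether agent-kit uses exactly this path is an assumption:

```bash
# Assumes the default port above and the A2A well-known agent-card path.
curl http://localhost:3000/.well-known/agent.json
```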
Run /build inside a scaffolded project to enter an AI-powered chat session. Describe the changes you want and the LLM will modify your agent code directly. Requires an OpenAI or Anthropic API key (configured on first run).
While in build mode you can type /chat to talk to your running agent without leaving the session. The URL is resolved automatically from your project's .env (AGENT_URL), or you can pass it explicitly (e.g. /chat http://localhost:3000). Type /exit inside the chat sub-session to return to build mode.
Run /chat http://localhost:3000 to interact with a running agent. The CLI auto-detects whether the agent supports A2A, LangGraph, or both, and prompts you to choose when multiple protocols are available.
To host your agent on a server, you can use a cloud provider like AWS, Google Cloud, or Azure, or a platform such as Render.com that deploys your agent as a Docker container using the included Dockerfile.
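Since the scaffold ships a Dockerfile, a minimal container workflow could look like this (assuming the image listens on port 3000, as in the local example):

```bash
docker build -t my-agent .
docker run --env-file .env -p 3000:3000 my-agent
```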
License: Apache-2.0