A memory-minimal AI coding agent for the terminal. Written in Rust. MIT licensed. Early development.
The name means "wind" in Japanese.
Most AI coding tools carry significant memory overhead:
| Tool | Idle | Peak |
|---|---|---|
| Claude Code (Node.js) | ~300 MB | 8.5 GB+ (known leaks) |
| OpenCode (TypeScript) | 40-80 MB | ~400 MB |
| kaze (Rust) | <25 MB | <80 MB |
kaze is built on rig-core for LLM abstraction and runs on a single-threaded tokio runtime. The release binary is optimized for size with `opt-level = "z"`, LTO, and symbol stripping.
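Those size settings likely correspond to a Cargo release profile along these lines (a sketch; only the three options named above are confirmed by the text):

```toml
# Sketch of a size-optimized release profile in Cargo.toml.
[profile.release]
opt-level = "z"   # optimize for binary size rather than speed
lto = true        # link-time optimization across the whole program
strip = true      # strip symbols from the final binary
```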
```sh
# Install
cargo install --path .

# Set an API key (pick your provider)
export ANTHROPIC_API_KEY="your-key-here"

# One-shot question
kaze ask "explain ownership in rust"

# Interactive chat
kaze chat

# Use a different provider or model
kaze ask --provider openai "hello"
kaze ask --model ollama/llama3 "hello"

# List available models
kaze models
```

- `kaze ask "question"` for one-shot streaming responses
- `kaze chat` for an interactive multi-turn REPL
- `kaze session list`, `kaze session resume`, `kaze session delete`, `kaze session new`
- `kaze config show` to view the current configuration
- `kaze models` to list available models per provider (marks the default)
- `--provider` and `--model` flags on `ask` and `chat`
- `provider/model` shorthand (e.g., `openai/gpt-4.1`, `ollama/llama3`)
- Anthropic (default)
- OpenAI
- OpenRouter (supports many models, including free tiers)
- Ollama (local, no API key needed)
- Token counting with BPE tokenization (tiktoken-rs), displayed after each response
- Per-model context window limits, with a warning at 80% usage and auto-truncation at 95%
- Context compaction: LLM-based summarization of old messages, triggered manually (`/compact`) or automatically at 90% usage
- Built-in tools: `read_file`, `write_file`, `edit`, `glob`, `grep`, `bash`
  - `read_file`: path validation, size limits, binary detection
  - `write_file`: full-file writes with parent directory creation
  - `edit`: search-and-replace with exact text matching and diff output
  - `glob`: pattern matching, contained to the project root
  - `grep`: regex content search with file filtering and match limits
  - `bash`: shell execution with timeout, output cap, and environment variable filtering
- Tool framework: `Tool` trait and `ToolRegistry` with JSON Schema definitions for LLM function calling
- Agent loop: `kaze ask` autonomously calls tools in a multi-turn cycle, executing tool calls and feeding results back until the LLM produces a final answer
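The trait-plus-registry design can be sketched as follows. This is a hypothetical outline, not kaze's actual API: the real `Tool` trait and `ToolRegistry` likely differ in signatures and error types.

```rust
use std::collections::HashMap;

// Hypothetical sketch of a tool framework like kaze's.
trait Tool {
    fn name(&self) -> &str;
    /// JSON Schema describing the tool's parameters, handed to the LLM
    /// so it can emit well-formed function calls.
    fn schema(&self) -> &str;
    fn call(&self, args: &str) -> Result<String, String>;
}

struct Glob;

impl Tool for Glob {
    fn name(&self) -> &str { "glob" }
    fn schema(&self) -> &str {
        r#"{"type":"object","properties":{"pattern":{"type":"string"}},"required":["pattern"]}"#
    }
    fn call(&self, args: &str) -> Result<String, String> {
        Ok(format!("(would match files against {args})"))
    }
}

// The registry maps tool names to implementations, since the LLM
// requests tool calls by name.
struct ToolRegistry {
    tools: HashMap<String, Box<dyn Tool>>,
}

impl ToolRegistry {
    fn new() -> Self {
        Self { tools: HashMap::new() }
    }
    fn register(&mut self, tool: Box<dyn Tool>) {
        self.tools.insert(tool.name().to_string(), tool);
    }
    fn dispatch(&self, name: &str, args: &str) -> Result<String, String> {
        self.tools
            .get(name)
            .ok_or_else(|| format!("unknown tool: {name}"))?
            .call(args)
    }
}

fn main() {
    let mut registry = ToolRegistry::new();
    registry.register(Box::new(Glob));
    assert!(registry.dispatch("glob", "src/**/*.rs").is_ok());
    assert!(registry.dispatch("bash", "ls").is_err());
}
```

In the agent loop, each tool call the LLM emits would be routed through `dispatch`, with the result appended to the conversation before the next turn.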
- Conversations saved as JSONL files at `~/.local/share/kaze/sessions/`
- Sessions survive restarts
- Resume by full or partial ID (git-style short IDs)
- `kaze chat --session {id}` to resume directly
- TOML config at `~/.config/kaze/config.toml` (XDG paths)
- Per-project override via `kaze.toml` in the project root
- Environment variable resolution with `{env:VAR}` syntax
- Configurable system prompt via `system_prompt`
- Readline support: arrow keys, history recall, Ctrl+R search
- Persistent readline history across sessions
- Slash commands: `/history`, `/clear`, `/compact`, `/help`
- Markdown-lite formatting for responses (bold, inline code, fenced code blocks)
- Streaming token-by-token output
- Per-tool permissions: `allow`, `ask`, or `deny` via `[permissions]` in config
- Interactive prompts for sensitive tools (`bash` defaults to `ask`)
- Session-level "always allow" option
- Wildcard matching for bash commands (e.g., allow `git *`, deny `rm *`)
- Colored unified diffs shown before file writes and edits, with confirm/reject when permission is `ask`
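The wildcard rules can be sketched as trailing-`*` prefix matching. This is a hypothetical illustration; kaze's real matcher and rule-ordering semantics may differ.

```rust
// Sketch of wildcard matching for bash permission rules: a rule ending
// in '*' matches any command sharing its prefix; otherwise the rule
// must match the command exactly.
fn rule_matches(rule: &str, command: &str) -> bool {
    match rule.strip_suffix('*') {
        Some(prefix) => command.starts_with(prefix),
        None => rule == command,
    }
}

/// Resolve a command against ordered (rule, decision) pairs; first match wins.
fn resolve<'a>(rules: &[(&'a str, &'a str)], command: &str) -> Option<&'a str> {
    rules
        .iter()
        .find(|(rule, _)| rule_matches(rule, command))
        .map(|(_, decision)| *decision)
}

fn main() {
    let rules = [("git *", "allow"), ("rm *", "deny")];
    assert_eq!(resolve(&rules, "git status"), Some("allow"));
    assert_eq!(resolve(&rules, "rm -rf target"), Some("deny"));
    // No rule matched: fall back to the bash tool's own setting ("ask").
    assert_eq!(resolve(&rules, "cargo build"), None);
}
```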
Global config at `~/.config/kaze/config.toml`. Per-project override with `kaze.toml` in your project root.
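The `api_key` values in the example below use `{env:VAR}` placeholders resolved at load time. A minimal sketch of that expansion, under the assumption (not confirmed by the source) that missing variables resolve to an empty string:

```rust
// Hypothetical sketch of {env:VAR} expansion; kaze's actual resolver may
// behave differently (e.g., erroring on missing variables). `lookup`
// abstracts std::env::var so the logic is deterministic to test.
fn expand_placeholders(value: &str, lookup: impl Fn(&str) -> Option<String>) -> String {
    let mut out = String::new();
    let mut rest = value;
    while let Some(start) = rest.find("{env:") {
        out.push_str(&rest[..start]);
        let after = &rest[start + "{env:".len()..];
        match after.find('}') {
            Some(end) => {
                // Missing variables expand to the empty string in this sketch.
                out.push_str(&lookup(&after[..end]).unwrap_or_default());
                rest = &after[end + 1..];
            }
            None => {
                // Unterminated placeholder: keep the remainder verbatim.
                out.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    let fake = |name: &str| (name == "ANTHROPIC_API_KEY").then(|| "sk-demo".to_string());
    assert_eq!(expand_placeholders("{env:ANTHROPIC_API_KEY}", &fake), "sk-demo");
    assert_eq!(expand_placeholders("key={env:MISSING}", &fake), "key=");
    // In kaze itself this would presumably be driven by std::env::var:
    //   expand_placeholders(raw, |name| std::env::var(name).ok());
}
```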
```toml
model = "claude-sonnet-4-6"
system_prompt = "You are a senior Rust developer. Be concise and precise."

[provider.anthropic]
api_key = "{env:ANTHROPIC_API_KEY}"

[provider.openai]
api_key = "{env:OPENAI_API_KEY}"

[provider.openrouter]
api_key = "{env:OPENROUTER_API_KEY}"

[provider.ollama]
base_url = "http://localhost:11434"

[compaction]
auto = true
keep_recent = 4
reserved = 10000

[permissions.tools]
read_file = "allow"
glob = "allow"
grep = "allow"
write_file = "allow"
edit = "allow"
bash = "ask"

[permissions.bash_commands]
"git status" = "allow"
"cargo build" = "allow"
"rm *" = "deny"
```

Built incrementally across 8 phases.
| Phase | Description | Status |
|---|---|---|
| 0 | Project scaffold | Done |
| 1 | Core (ask, streaming, config) | Done |
| 2 | Multi-turn chat + sessions | Done |
| 3 | Multi-provider (OpenAI, OpenRouter, Ollama) | Done |
| 4 | Context management (token counting, compaction) | Done |
| 5 | Tools (read, write, edit, grep, bash) | Done |
| 6 | Agent loop | Done |
| 7 | TUI (ratatui) | In Progress |
| 8 | Advanced (MCP, custom agents, rules) | Planned |
- TUI sidebar is empty (no session navigation yet)
- TUI trust level defaults to Safe (bash tool disabled in TUI mode)
- `config set` uses `toml::Value` internally (comments are not preserved when rewriting config files)