A Claude Code-like thingy, vibed with GPT-5.4 in C, just for fun.
goosecode is a local AI coding agent with a Go-based TUI (terminal user interface) and C backend. It features an OpenAI-compatible API client, tool loop, slash commands, sessions, subagents, MCP support, and a terminal-first workflow.
It works with:
- local OpenAI-compatible servers such as Ollama, vLLM, llama.cpp, ik-llama, LM Studio, text-generation-webui proxies, or custom gateways
- hosted OpenAI-compatible providers such as OpenAI, Together, Fireworks, Groq, or self-hosted proxies
- TUI Mode (default): Interactive terminal UI with bubbletea
- Color-coded Plan/Build mode toggle (Tab key)
- Scrollable chat history
- Visible cursor with mode-specific colors
- REPL Mode (fallback): Legacy interactive REPL with multiline input
- one-shot prompt mode from the shell
- file editing and shell execution
- task tracking and plan mode
- resumable subagents and optional git worktrees
- MCP resource listing/reading
- LSP queries
- local git workflow commands like /branch, /commit, and /review
- provider presets and first-run provider setup
- Tools: 29
- Slash commands: 17
Main tools include:
bash, read_file, write_file, edit_file, glob_search, grep_search, web_fetch, web_search, todo_write, task_create, task_get, task_list, task_update, ask_user_question, enter_plan_mode, exit_plan_mode, agent, list_mcp_resources, read_mcp_resource, lsp, repl, powershell
Main slash commands include:
/help, /model, /provider, /session, /compact, /plan, /config, /tasks, /branch, /commit, /review, /subagents, /permissions, /tools, /exit
Requirements:
- gcc with C11 support
- libcurl
- pthread support from libc
The project vendors cJSON, so you do not need to install it separately.
Debian/Ubuntu:

sudo apt update
sudo apt install build-essential libcurl4-openssl-dev

macOS:

brew install curl

If Homebrew curl is not on your default compiler path, set the include/library flags yourself before building.
make # Build goosecode (TUI) and goosecode-backend
make tui # Build only the TUI
make clean # Clean build artifacts
make install
make uninstall

Build output:
- ./goosecode - TUI launcher (symlink to goosecode-tui)
- ./goosecode-tui - TUI binary (Go)
- ./goosecode-backend - C backend
User-local install path by default:
~/.local/bin/goosecode
If ~/.local/bin is on your PATH, you can then start goosecode from anywhere with:
goosecode

Custom install path example:

make install INSTALL_BINDIR=/usr/local/bin

./goosecode

TUI features:
- Press Tab to toggle between PLAN and BUILD mode
- Cursor color changes: yellow (PLAN), green (BUILD)
- Separator line color matches mode
- Type messages or use slash commands (e.g., /help, /exit)
./goosecode --repl
./goosecode "explain the architecture of this project"

Flags:

--provider <provider>
--model <model>
--base-url <url>
--permission <mode>
--max-turns <n>
--session <id>
--help
# interactive
./goosecode
# one-shot
./goosecode "write a fibonacci function in C"
# override model for one run
./goosecode --model gpt-4o-mini "summarize this repository"
# choose a provider preset for one run
./goosecode --provider ollama
# resume a saved session
./goosecode --session 1775092052_390207107
# set permissive mode for a local sandbox session
./goosecode --permission allow

goosecode talks to any server that exposes an OpenAI-compatible /v1 API.
You can switch providers directly inside the REPL:
/provider list
/provider set openai
/provider set ollama
/provider set vllm
/provider set llama.cpp
/provider set ik-llama
/provider test
/model list
/model set <name>
/provider set ... now prompts for:
- base URL
- model name
- an API key, where required (optional for many local servers)
Provider settings are saved per provider, so switching back to a previous provider restores its last saved base_url, model, and api_key instead of overwriting everything.
First-run behavior:
- if no environment variables or settings files are present, goosecode opens a guided provider setup flow before the REPL starts
- the REPL banner also shows the active provider, model, and base URL so it is obvious what you are connected to
export OPENAI_BASE_URL=...
export OPENAI_MODEL=...
export OPENAI_API_KEY=...

OPENAI_API_KEY is optional for many local servers.
Use this when your model server is on the same machine or LAN.
Examples:
# local gateway on port 8083
export OPENAI_BASE_URL=http://localhost:8083/v1
export OPENAI_MODEL=cyankiwi/Qwen3.5-122B-A10B-AWQ-8bit
./goosecode
# Ollama
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3
./goosecode
# vLLM
export OPENAI_BASE_URL=http://localhost:8000/v1
export OPENAI_MODEL=your-model-name
./goosecode
# llama.cpp or ik-llama
export OPENAI_BASE_URL=http://localhost:8080/v1
export OPENAI_MODEL=your-model-name
./goosecode
# or use the built-in provider presets interactively
./goosecode
# then run /provider set ollama or /provider set vllm

Notes:
- many local servers ignore OPENAI_API_KEY
- the base URL should usually end in /v1
- model names must match what your server exposes
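To see exactly which model names your server exposes, query the standard models endpoint (the port here assumes Ollama's default; adjust for your server):

```shell
# an OpenAI-compatible server answers with a JSON object whose
# "data" array lists the model IDs you can use for OPENAI_MODEL
curl -s http://localhost:11434/v1/models || echo "server not reachable"
```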
Use this when talking to hosted services.
Examples:
# OpenAI
export OPENAI_BASE_URL=https://api.openai.com/v1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
./goosecode
# Together / Fireworks / Groq / any compatible host
export OPENAI_BASE_URL=https://your-provider.example/v1
export OPENAI_API_KEY=...
export OPENAI_MODEL=provider-model-name
./goosecode

Notes:
- if requests fail, first confirm the endpoint is OpenAI-compatible
- if streaming behaves oddly, test a simple non-tool prompt first
- hosted providers usually require OPENAI_API_KEY
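If a hosted endpoint misbehaves, a bare curl check separates goosecode problems from provider problems (the URL and key are whatever you exported above; the fallback base URL is just a placeholder):

```shell
# a 200 with a JSON model list confirms compatibility; a 401 points
# at the key; a 404 usually means the base URL is not a /v1 root
curl -s -H "Authorization: Bearer $OPENAI_API_KEY" \
  "${OPENAI_BASE_URL:-https://api.openai.com/v1}/models" \
  || echo "request failed: check network, base URL, and key"
```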
Configuration is loaded from:
- ~/.goosecode/settings.json (user)
- .goosecode/settings.json (project)
Project settings override user settings where applicable.
Example project config:
{
"provider": "vllm",
"base_url": "http://localhost:8083/v1",
"model": "cyankiwi/Qwen3.5-122B-A10B-AWQ-8bit",
"permission_mode": "allow",
"max_turns": 64
}

User settings also keep a provider_profiles map internally so each provider can remember its own last-used values.
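A plausible shape for that map, built from the per-provider fields mentioned above (base_url, model, api_key); the exact schema is internal and may differ:

```json
{
  "provider_profiles": {
    "ollama": {
      "base_url": "http://localhost:11434/v1",
      "model": "llama3",
      "api_key": ""
    },
    "openai": {
      "base_url": "https://api.openai.com/v1",
      "model": "gpt-4o",
      "api_key": "sk-..."
    }
  }
}
```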
Supported permission modes:
- read-only
- workspace-write
- danger-full-access
- prompt
- allow
Examples:
./goosecode --permission read-only
./goosecode --permission allow

Environment override:

export GOOSECODE_PERMS=allow

Editor controls:
- Left / Right: move cursor
- Up / Down: history recall
- Tab: complete slash commands
- Ctrl+A: start of line
- Ctrl+E: end of line
- Ctrl+J: insert newline into the current prompt
Examples:
/provider list
/provider set ollama
/provider test
/model list
/model set llama3
/tasks create investigate parser failure
/plan set
1. Reproduce
2. Fix
.
/review
The bash tool supports a configurable timeout in seconds.
That matters for commands like:
- make
- cargo build
- docker build
- long-running test suites
Example tool call shape:
{
"command": "docker build -t app .",
"timeout": 1800
}

Notes:
- default timeout: 120 seconds
- maximum timeout: 7200 seconds
- both numeric and numeric-string timeout values are accepted
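The numeric-string variant of the same call is also accepted:

```json
{
  "command": "docker build -t app .",
  "timeout": "1800"
}
```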
Examples:
/session
/tasks
/tasks create add logging around API failures
/subagents
Stored state lives under:
- ~/.goosecode/sessions
- ~/.goosecode/subagents
- ~/.goosecode/worktrees
- ~/.goosecode/todos.json
Configure MCP servers in settings:
{
"mcp_servers": [
{
"name": "test",
"command": "/usr/bin/python3",
"args": ["/path/to/mcp_server.py"]
}
]
}

Supported MCP tools today:
- list_mcp_resources
- read_mcp_resource
Supported LSP actions today:
- hover
- definition
- document_symbols
The lsp tool can use:
- default server selection for supported file types
- explicit server_command and server_args
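A hypothetical call shape for an explicit server, mirroring the bash tool-call example above; the action and file fields are illustrative assumptions, not a documented schema, while server_command and server_args come from the bullet above:

```json
{
  "action": "hover",
  "file": "src/main.c",
  "server_command": "clangd",
  "server_args": ["--background-index"]
}
```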
src/
├── main.c # Entry point and CLI setup
├── agent.c/h # REPL + turn loop + tool execution
├── api.c/h # OpenAI-compatible API client
├── config.c/h # Env + settings file loading
├── session.c/h # Session persistence
├── permissions.c/h # Permission checks
├── prompt.c/h # System prompt assembly
├── commands/ # Slash commands
├── tools/ # Tool implementations
└── util/ # JSON, SSE, terminal, markdown, buffers, HTTP
Helpful commands:
make test
./goosecode --help
./goosecode --permission allow

When testing interactively, prefer a sandbox working directory instead of the main source tree.
MIT