Connect VS Code to any AI provider through a single extension. One chat interface for Grok, Claude, Gemini, ChatGPT, OpenAI Codex, OpenCode, Pi, and local models, powered by conduit-bridge.
Current version: 0.7.0
Status: Active development. All core features implemented and tested (283 tests). Requires conduit-bridge running locally.
- Quick Start
- Features
- Background Agents
- Supported Models
- Chat Interface
- Chat Modes
- Agent Mode
- Model Selection
- Session Management
- Slash Commands
- Context Mentions
- Code Intelligence
- Inline Chat
- Custom Instructions
- Keyboard Shortcuts
- Configuration
- Provider Setup
- Testing
- Troubleshooting
- Development
- Roadmap
- VS Code 1.90 or newer
- conduit-bridge installed and running
Option A - From .vsix file:
```
code --install-extension conduit-vscode-0.7.0.vsix
```
Or in VS Code: Extensions > ... > Install from VSIX...
Option B - From source:
```
git clone https://github.com/elvatis/conduit-vscode
cd conduit-vscode
npm install --include=dev
npx @vscode/vsce package --no-dependencies
code --install-extension conduit-vscode-0.7.0.vsix
```
- Start the bridge: `conduit-bridge start`
- Open VS Code, the extension activates automatically
- The Conduit AI panel appears in the bottom panel area
- Drag the Conduit AI tab to the secondary sidebar (right side) to place it next to Copilot / Claude Code
- Click the model name in the toolbar to select your preferred model
- Start chatting
Note: After first install, you must fully close and reopen VS Code (not just reload) for the Sessions panel to appear. VS Code only reads new view registrations on startup.
- Streaming responses with full Markdown rendering (headings, code blocks, tables, lists, blockquotes, bold, italic, links)
- Copy and insert-code actions on every response
- Per-message model tag showing which model generated each response
- Automatic context window management, trims conversation history to fit model limits
- Token usage tracking via `/cost`
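The context window management mentioned above can be sketched roughly as follows. Everything here is illustrative, not the extension's actual code: `trimHistory`, the `Message` shape, and the 4-characters-per-token estimate are assumptions.

```typescript
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Rough token estimate: ~4 characters per token (a common heuristic).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Drop the oldest non-system messages until the conversation fits the budget.
function trimHistory(messages: Message[], maxTokens: number): Message[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");
  const total = (msgs: Message[]) =>
    msgs.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  while (rest.length > 1 && total(system) + total(rest) > maxTokens) {
    rest.shift(); // discard the oldest turn first
  }
  return [...system, ...rest];
}
```

The key design point is that system instructions survive trimming; only old conversational turns are sacrificed.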
- In Agent mode, responses are structured as collapsible step cards
- Each step shows an animated spinner while streaming, then a green checkmark when complete
- Steps are click-to-expand/collapse for easy navigation
- Models are instructed to use the `### Step N: Title` format for structured output
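Splitting a response into step cards on those headers is straightforward to sketch. This is an illustrative parser, not the extension's actual one:

```typescript
interface StepCard {
  n: number;
  title: string;
  body: string;
}

// Split an agent response into step cards on "### Step N: Title" headers.
function parseSteps(response: string): StepCard[] {
  const re = /^### Step (\d+): (.+)$/gm;
  const matches = [...response.matchAll(re)];
  return matches.map((m, i) => {
    const start = m.index! + m[0].length;
    const end = i + 1 < matches.length ? matches[i + 1].index! : response.length;
    return {
      n: Number(m[1]),
      title: m[2],
      body: response.slice(start, end).trim(),
    };
  });
}
```

While streaming, the last card in the array is the one still "in progress", which is what drives the spinner-then-checkmark rendering.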
- Native VS Code tree view panel (like GitHub Copilot's Sessions)
- Persistent across VS Code restarts (up to 50 sessions)
- Click any session to reload the full conversation
- New Session button (+) and Refresh button in the panel title bar
- Delete button (trash icon) on hover for each session
- Sessions auto-save after each message exchange
- Native VS Code QuickPick, opens a full-width, searchable picker at the top of the editor
- Models grouped by provider (WEB-GROK, WEB-CLAUDE, WEB-GEMINI, etc.)
- Tier icons: star for flagship models, half-star for mid-tier
- Context window size shown next to each model (131K, 200K, 1M)
- Friendly display names with version numbers (e.g. "Claude Sonnet 4.6", "Grok Expert")
- Auto mode, automatically selects the best model based on task complexity
- Model-mode compatibility warnings when a model doesn't support the current chat mode
- Inline completions (ghost text), language-aware for 20+ languages
- Inline Chat (`Ctrl+I`), describe a change at the cursor, review as a diff
- Explain / Refactor / Generate Tests, right-click context menu on selections
- Fix diagnostics, send all file errors/warnings to the AI
- Terminal command suggestions, describe what you want, get a shell command
- Commit message generation (`Ctrl+Shift+M`), generates from staged git diff
- Health dashboard (`Conduit: Health Dashboard`), real-time provider status
- Bridge manager, start/stop/restart conduit-bridge from VS Code
- Per-provider login, Grok, Claude, Gemini, ChatGPT login commands
- Auto-start bridge, starts automatically if not running on activation
- Consolidated status bar, bridge status, model count, and current model in one item
Conduit supports spawning background coding agents that work independently while you continue coding.
Conduit: Spawn Agent - select a model and enter a task. The agent runs in the background with its own output channel.
Conduit: Fix Issue - enter a GitHub issue number. Conduit:
- Creates a git worktree on a dedicated branch (`fix/issue-<N>`)
- Spawns an agent in the isolated worktree
- The agent reads the codebase, implements the fix, and commits
Parallel safety: Multiple Fix Issue commands can run simultaneously. Worktree creation is serialized via a file lock ($REPO/.git/worktree-create.lock) with 2.5s stagger to prevent .git/config.lock contention. (Credit: @m13v)
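The serialization described above can be sketched as follows. This is a simplified illustration: it uses a directory lock (atomic `mkdir`) in place of the actual file lock, and `withWorktreeLock` is a made-up name.

```typescript
import { mkdirSync, rmdirSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Serialize a critical section (worktree creation) across parallel agents.
// mkdir is atomic, so it can stand in for a file lock in this sketch.
async function withWorktreeLock<T>(
  lockDir: string,
  fn: () => Promise<T>,
  staggerMs = 2500, // illustrative default, matching the 2.5s stagger above
): Promise<T> {
  for (;;) {
    try {
      mkdirSync(lockDir); // succeeds only if no one else holds the lock
      break;
    } catch {
      await new Promise((r) => setTimeout(r, 50)); // lock busy, poll again
    }
  }
  try {
    const result = await fn();
    await new Promise((r) => setTimeout(r, staggerMs)); // stagger the next creator
    return result;
  } finally {
    rmdirSync(lockDir);
  }
}
```

The stagger happens while the lock is still held, so two `git worktree add` calls can never touch `.git/config.lock` within the stagger window.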
When removing worktrees, Conduit checks branch merge status:
- Merged branches: safe to remove
- Unmerged + recently active (within 24h): removal blocked (prevents losing review feedback)
- Unmerged + abandoned (>24h idle): allowed
- Force flag: overrides all safety checks
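The rules above boil down to a small decision function. A sketch, with hypothetical names:

```typescript
type Decision = "allow" | "block";

const DAY_MS = 24 * 60 * 60 * 1000;

// Decide whether a worktree's branch is safe to remove, per the rules above.
function canRemoveWorktree(opts: {
  merged: boolean;
  lastActivity: Date;
  force?: boolean;
  now?: Date;
}): Decision {
  if (opts.force) return "allow"; // force overrides all safety checks
  if (opts.merged) return "allow"; // merged branches are safe
  const idleMs = (opts.now ?? new Date()).getTime() - opts.lastActivity.getTime();
  return idleMs > DAY_MS ? "allow" : "block"; // unmerged: only if abandoned >24h
}
```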
Background agents have access to these workspace tools:
| Tool | Permission | Description |
|---|---|---|
| `readFile` | safe | Read file contents (with optional line range) |
| `writeFile` | destructive | Create or overwrite files |
| `listFiles` | safe | List files matching a glob pattern |
| `searchCode` | safe | Regex search across workspace files |
| `runCommand` | destructive | Execute shell commands |
| `readDiagnostics` | safe | Read VS Code errors and warnings |
| `applyDiff` | destructive | Search-and-replace within a file |
| `createWorktree` | destructive | Create a git worktree for parallel work |
| `removeWorktree` | destructive | Remove a worktree (merge-status aware) |
Conduit: Batch Fix Issues - fix multiple GitHub issues in parallel:
- Enter a label filter (e.g. `bug,good-first-issue`) or leave empty for all open issues
- Set max issues to process
- Select a model
- Conduit fetches matching issues via `gh`, shows a confirmation dialog, then spawns agents with worktree isolation
- Max 3 concurrent agents (serialized worktree creation via lock)
- Progress and summary shown in a dedicated "Conduit Batch Fix" output channel
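The 3-agent cap is a standard bounded-concurrency pattern. A minimal sketch (the `runLimited` helper is illustrative, not the extension's code):

```typescript
// Run tasks with at most `limit` executing concurrently, preserving result order.
async function runLimited<T>(
  tasks: (() => Promise<T>)[],
  limit: number,
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++; // synchronous claim, so no two workers share an index
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker),
  );
  return results;
}
```

With `limit = 3`, a batch of ten issue-fix tasks would run three at a time, each new agent starting as soon as a slot frees up.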
The Sessions tree view shows all background agents:
- Running: animated spinner icon (blue)
- Completed: checkmark icon (green)
- Failed: error icon (red)
- Click to view output, right-click to kill running agents
When a model fails with a transient error (rate limit, timeout, capacity, auth failure), Conduit automatically retries with the next model in the fallback chain:
- Gemini 2.5 Pro → Gemini 2.5 Flash
- Gemini 3 Pro Preview → Gemini 3 Flash Preview
- Claude Opus 4.6 → Claude Sonnet 4.6 → Claude Haiku 4.5
Fallback is triggered by: 429/503 errors, rate limits, capacity issues, timeouts, and auth failures. Non-transient errors (syntax, missing args) are not retried. The RouteResult metadata tracks which model was actually used and why.
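Conceptually, the failover loop looks like the sketch below. The chain table, the transient-error regex, and the function names here are illustrative simplifications of the behavior described above, not the actual implementation:

```typescript
const FALLBACK_CHAINS: Record<string, string[]> = {
  "claude-opus-4.6": ["claude-sonnet-4.6", "claude-haiku-4.5"],
  "gemini-2.5-pro": ["gemini-2.5-flash"],
};

// Transient failures that justify retrying with the next model.
const TRANSIENT = /(429|503|rate limit|capacity|timeout|auth)/i;

interface RouteResult {
  model: string; // the model that actually answered
  fallbackReason?: string; // why the original model was skipped
  text: string;
}

// Try the requested model, then walk its fallback chain on transient errors.
async function routeWithFallback(
  model: string,
  call: (model: string) => Promise<string>,
): Promise<RouteResult> {
  const chain = [model, ...(FALLBACK_CHAINS[model] ?? [])];
  let lastError = "";
  for (const m of chain) {
    try {
      const text = await call(m);
      return lastError
        ? { model: m, text, fallbackReason: lastError }
        : { model: m, text };
    } catch (e) {
      const msg = String(e);
      if (!TRANSIENT.test(msg)) throw e; // non-transient errors are not retried
      lastError = msg;
    }
  }
  throw new Error(`all models in chain failed: ${lastError}`);
}
```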
Models are served by conduit-bridge. The extension displays whatever the bridge reports via /v1/models. Available models depend on which providers are logged in.
| Provider | Models | Context |
|---|---|---|
| Grok | Grok Expert, Grok Fast, Grok Heavy, Grok 4.20 Beta | 131K |
| Claude | Claude Sonnet 4.6, Claude Opus 4.6, Claude Haiku 4.5 | 200K |
| Gemini | Gemini 3 Fast, Gemini 3 Thinking, Gemini 3.1 Pro | 1M |
| ChatGPT | GPT-5.4 Pro, GPT-5.4 Thinking, GPT-5.3 Instant, GPT-5 Thinking Mini, o3 | 128K |
| Provider | Models | Context |
|---|---|---|
| Claude CLI | Claude Sonnet 4.6, Claude Opus 4.6, Claude Haiku 4.5 | 200K |
| Gemini CLI | Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 3.0 Pro Preview, Gemini 3.0 Flash Preview | 1M |
| OpenCode | Default model (auto-detected) | varies |
| Pi | Default model (configurable provider/model) | varies |
| Provider | Models | Context |
|---|---|---|
| OpenAI Codex | GPT-5.4, GPT-5.3 Codex, GPT-5.3 Codex Spark, GPT-5.2 Codex, GPT-5.1 Codex Mini | 200K |
| Provider | Models | Context |
|---|---|---|
| BitNet | BitNet 1.58 2B (CPU inference) | 4K |
The Conduit Chat panel lives in the bottom panel area by default. For the best experience, drag it to the secondary sidebar (right side) where it sits alongside GitHub Copilot and Claude Code.
The toolbar at the bottom of the chat has:
- + - Attach context (current selection or file from disk)
- Mode button (Ask/Edit/Agent/Plan) - click to switch chat mode via QuickPick
- Model button (e.g. "Claude Sonnet 4.6") - click to switch model via QuickPick
- Settings - open Conduit extension settings
- Send - send your message (or press Enter)
Click the mode button in the toolbar to switch between modes:
| Mode | Purpose | System Behavior |
|---|---|---|
| Ask | Answer questions about code | Conversational, explains concepts, provides examples |
| Edit | Modify and refactor code | Focuses on producing code changes, minimal explanation |
| Agent | Plan and build features | Multi-step reasoning with collapsible step cards |
| Plan | Create implementation plans | Produces structured plans with steps, file lists, and considerations |
Models are classified into tiers that determine which modes they support:
| Tier | Modes | Examples |
|---|---|---|
| Tier 1 (flagship) | Ask, Edit, Agent, Plan | Claude Opus 4.6, GPT-5.4 Pro, Gemini 3.1 Pro |
| Tier 2 (mid-tier) | Ask, Edit, Plan | Claude Haiku 4.5, Grok Fast, GPT-5.3 Instant |
| Tier 3 (fast) | Ask only | GPT-5 Thinking Mini, BitNet 2B |
If you select a mode that your current model doesn't support, Conduit shows a warning with a suggested alternative model.
In Agent mode, models produce structured output with step-by-step reasoning. Each step is rendered as a collapsible card in the chat:
- While the model streams a step, it shows an animated spinner
- When a step finishes (the next step begins), it shows a green checkmark
- When the full response is done, all steps show checkmarks
- Click any step header to expand/collapse its contents
This gives you a clear overview of the model's reasoning process without being overwhelmed by long responses.
Click the model name in the toolbar (or use `/model`) to open the model picker.
The QuickPick shows all available models grouped by provider:
```
Auto                  best for task
--- WEB-GROK ---
* Grok Expert         131K context - Ask, Edit, Agent, Plan
  Grok Fast           131K context - Ask, Edit, Plan
--- WEB-CLAUDE ---
* Claude Sonnet 4.6   200K context - Ask, Edit, Agent, Plan
  Claude Opus 4.6     200K context - Ask, Edit, Agent, Plan
...
```
- Type to search/filter models
- The checkmark shows the currently selected model
- Star icons indicate model tier (full star = tier 1, half star = tier 2)
- Supported modes are shown next to each model
- Auto mode picks the best model per message based on complexity
When set to Auto, Conduit analyzes your message to determine complexity:
- Simple (short questions, "explain this", "fix typo") → fast models like Grok Fast, Gemini 3 Fast
- Moderate (code changes, debugging) → mid-tier models like Gemini 3 Thinking, Claude Sonnet
- Complex (architecture, multi-file, "build a system") → flagship models like Claude Opus, GPT-5.4 Pro
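In spirit, the Auto classifier is a heuristic that maps message features to a tier. This sketch uses made-up keywords and length thresholds, so treat it as an illustration of the idea rather than the actual scoring:

```typescript
type Tier = "fast" | "mid" | "flagship";

// Crude complexity heuristic: keyword hints and message length decide the tier.
function classifyComplexity(message: string): Tier {
  const text = message.toLowerCase();
  const complexHints = ["architecture", "multi-file", "build a system", "design"];
  const moderateHints = ["debug", "fix", "change", "implement", "refactor"];
  if (complexHints.some((h) => text.includes(h)) || message.length > 600) {
    return "flagship";
  }
  if (moderateHints.some((h) => text.includes(h)) || message.length > 150) {
    return "mid";
  }
  return "fast";
}
```

The tier then maps onto concrete models: `fast` → Grok Fast / Gemini 3 Fast, `mid` → mid-tier models, `flagship` → Claude Opus / GPT-5.4 Pro.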
The Sessions panel appears below the Chat panel (or as a separate collapsible section). It works like GitHub Copilot's session history.
- New Session (+) - start a fresh conversation (saves the current one)
- Refresh - reload the session list
- Click a session - load that conversation into the chat
- Rename (edit icon on hover) - give the session a custom name
- Delete (trash icon on hover) - remove a session permanently
You can rename any session to keep track of what you were working on:
- Use `/rename My Feature Work` in the chat input
- Or click the edit icon next to a session in the Sessions panel
- Custom names persist and are shown in the session list
When models change mid-conversation (either via Auto mode or manual switch), Conduit automatically injects a compressed summary of the previous context. This means the new model understands what was discussed before and can continue seamlessly.
Each session stores a "working summary", a compressed snapshot of what you were working on. When you switch between sessions, this summary is restored so no context is lost.
Sessions are stored in VS Code's global state and persist across restarts. Up to 50 sessions are kept.
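Keeping "up to 50 sessions" amounts to a bounded most-recently-saved list. A sketch (names and the `Session` shape are illustrative):

```typescript
interface Session {
  id: string;
  savedAt: number; // epoch millis of the last save
}

const MAX_SESSIONS = 50;

// Insert or refresh a session, then drop the oldest entries beyond the cap.
function saveSession(store: Session[], session: Session): Session[] {
  const rest = store.filter((s) => s.id !== session.id); // de-duplicate by id
  return [session, ...rest]
    .sort((a, b) => b.savedAt - a.savedAt) // newest first
    .slice(0, MAX_SESSIONS);
}
```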
To place Conduit next to GitHub Copilot and Claude Code in the secondary sidebar (right side), run the command "Conduit: Move to Secondary Sidebar" from the command palette (Ctrl+Shift+P).
Type `/` in the chat input to see autocomplete suggestions.
| Command | Description |
|---|---|
| `/help` | Show all available commands, context mentions, and keyboard shortcuts |
| `/fix` | Fix errors and warnings in the current file |
| `/explain` | Explain the selected code |
| `/tests` | Generate tests for the selected code |
| `/refactor [instruction]` | Refactor selected code (optional instruction) |
| `/plan [task]` | Create a structured implementation plan |
| `/commit` | Generate a commit message from staged git changes |
| `/clear` | Clear the current chat (without saving) |
| `/new` | Save current chat and start a new session |
| `/cost` | Show estimated token usage for this session |
| `/model [name]` | Switch model by name, or list all available |
| `/rename [name]` | Rename the current session |
| `/mode [ask\|edit\|agent\|plan]` | Switch chat mode |
Add context to your messages by typing `#` followed by a mention:

| Mention | What it attaches |
|---|---|
| `#file:src/main.ts` | Full content of the file |
| `#file:src/main.ts:10-20` | Lines 10-20 of the file |
| `#selection` | Current editor selection |
| `#problems` | All errors/warnings in the current file |
| `#workspace` | Lightweight workspace folder structure overview |
| `#codebase` | Deep search: file tree + contents of up to 30 source files (~80K chars) |
| `#terminal` | Terminal output (select text in terminal first) |
```
Explain this function #file:src/utils/parser.ts:45-80
Fix the errors in this file #problems
How does authentication work? #codebase
Where are the API routes? #workspace
Refactor #selection based on patterns in #file:src/helpers.ts
```
- `#workspace` is lightweight: just the folder structure. Use it when you need a quick overview of where things are.
- `#codebase` is deep: it includes the folder structure plus the actual contents of up to 30 prioritized source files. Use it when the model needs to understand how your code works. Files are prioritized: config files first, then entry points, then by directory depth.
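Extracting `#file:path[:start-end]` mentions from a message is a small parsing job. An illustrative regex sketch, not the extension's real parser:

```typescript
interface FileMention {
  path: string;
  start?: number;
  end?: number;
}

// Extract #file:path mentions with an optional :start-end line range.
function parseFileMentions(message: string): FileMention[] {
  const re = /#file:([^\s:]+)(?::(\d+)-(\d+))?/g;
  return [...message.matchAll(re)].map((m) => ({
    path: m[1],
    ...(m[2] ? { start: Number(m[2]), end: Number(m[3]) } : {}),
  }));
}
```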
Select code in the editor and right-click to access:
- Conduit: Explain Selected Code - get a detailed explanation
- Conduit: Refactor Selected Code - suggest improvements
- Conduit: Generate Tests for Selection - create unit tests
- Conduit: Inline Edit - edit code with a prompt
Conduit provides inline code suggestions as you type, similar to GitHub Copilot:
- Works for 20+ programming languages
- Respects language-specific conventions
- Configurable delay (`conduit.inlineTriggerDelay`)
- Toggle on/off with `Conduit: Toggle Inline Suggestions`
Press `Ctrl+I` (or `Cmd+I` on Mac) anywhere in the editor to open the inline chat:
- Type a description of what you want to change (e.g. "add error handling", "convert to async")
- Conduit generates the code change
- A diff view opens showing the proposed changes
- Click Accept to apply or Reject to discard
Customize Conduit's behavior for your project by creating instruction files:
- `.conduit/instructions.md` - Conduit-specific instructions
- `CLAUDE.md` - also recognized (Claude Code compatible)
- `.github/copilot-instructions.md` - also recognized (Copilot compatible)
Global instructions can also be placed in `~/.conduit/instructions.md`.
```markdown
## Project Context
This is a React 19 + TypeScript project using Tailwind CSS.
The backend is a Node.js Express API with PostgreSQL.

## Coding Conventions
- Use functional components with hooks
- Prefer named exports
- Use descriptive variable names
- Always add JSDoc comments to exported functions
- Tests use vitest with React Testing Library

## Important
- Never use `any` type
- Always handle loading and error states
- API calls go through the `/lib/api` module
```

| Shortcut | Action |
|---|---|
| `Ctrl+Shift+G` | Open/focus the Conduit Chat panel |
| `Ctrl+I` | Inline Chat - edit code at cursor |
| `Ctrl+Shift+I` | Inline Edit - edit selected code with a prompt |
| `Ctrl+Shift+E` | Explain selected code |
| `Ctrl+Shift+M` | Generate commit message from staged changes |
| `Enter` | Send message in chat |
| `Shift+Enter` | Insert a new line in chat |
Open settings with `Ctrl+,` and search for "conduit", or use the gear icon in the chat toolbar.

| Setting | Default | Description |
|---|---|---|
| `conduit.proxyUrl` | `http://127.0.0.1:31338` | Base URL of the conduit-bridge proxy |
| `conduit.apiKey` | `cli-bridge` | API key for the proxy |
| `conduit.defaultModel` | `cli-gemini/gemini-2.5-pro` | Default model (full ID from the model picker) |
| `conduit.inlineSuggestions` | `true` | Enable inline ghost-text completions |
| `conduit.inlineTriggerDelay` | `600` | Delay in ms before requesting an inline suggestion |
| `conduit.contextLines` | `80` | Lines of surrounding code to include as context |
| `conduit.includeOpenFiles` | `true` | Include other open editor tabs as additional context |
| `conduit.maxOpenFilesContext` | `3` | Maximum number of open files to include |
| `conduit.terminalIntegration` | `true` | Enable terminal command suggestions |
| `conduit.autoStatusBar` | `true` | Show connection status in the status bar |
Each AI provider needs to be authenticated through the bridge. The extension provides login commands for each.
These use browser session cookies, no API keys needed.
- Run the login command: `Ctrl+Shift+P` → `Conduit: Login - Grok` (or Claude, Gemini, ChatGPT)
- A browser window opens to the provider's website
- Log in with your account
- The bridge captures the session and the models become available
These require the respective CLI tools to be installed:
- Claude CLI: Install Claude Code and authenticate
- Gemini CLI: Install the Gemini CLI and authenticate
- OpenCode: Install OpenCode (auto-detected)
- Pi: Install Pi with `--provider` and `--model` flags for different backends
The bridge automatically detects installed CLIs.
Requires the Codex CLI with OAuth tokens:
- Install the Codex CLI
- Run `codex login` to authenticate
- Run `openclaw models auth login --provider openai-codex` and select "Codex CLI (existing login)"
- The Codex models (including GPT-5.4) appear in the model picker
Local CPU inference, no authentication needed:
- Install BitNet runtime
- The bridge auto-detects and serves the model
Use `Ctrl+Shift+P` → `Conduit: Health Dashboard` to see which providers are connected and which models are available.
Conduit has a comprehensive test suite with 283 tests across 17 test files.
```
npm test              # run all tests
npm run test:coverage # run with coverage report
```

| Test File | Tests | What it covers |
|---|---|---|
| `agent-parser.test.ts` | 25 | Agent output parsing, step card extraction |
| `agent-tools.test.ts` | 18 | Tool execution (readFile, writeFile, applyDiff, etc.) |
| `worktree-tools.test.ts` | 17 | Worktree lock serialization, merge-status safety |
| `cli-runner-failover.test.ts` | 7 | Model failover chain, fallback pattern matching |
| `aahp-context.test.ts` | 10 | AAHP v3 context detection, loading, block building |
| `agent-backends.test.ts` | 20 | Shared backend: prompt formatting, env, CLI config |
| `llm-tool-validation.test.ts` | 14 | Tool catalog schema, LLM tool-call validation |
| `model-registry.test.ts` | 39 | Model capabilities, tiers, auto-selection |
| `sessions-tree-provider.test.ts` | 19 | Session tree, background agent status |
| `proxy-client.test.ts` | 19 | HTTP streaming, error handling |
| `chat-view-provider.test.ts` | 17 | Chat webview, slash commands |
| `context-builder.test.ts` | 16 | Editor context collection |
| `mention-parser.test.ts` | 11 | #file, #selection, #codebase parsing |
| `custom-instructions.test.ts` | 8 | Instruction file loading |
| `utils.test.ts` | 17 | Shared utilities |
| `config.test.ts` | 4 | Settings reader |
| `inline-provider.test.ts` | 7 | Ghost-text completions |
The llm-tool-validation.test.ts suite includes a live LLM test that sends a sample prompt to a model and validates the response contains correct tool calls. Run against a specific model:
```
LLM_MODEL=cli-claude/claude-sonnet-4-6 npx vitest run src/__tests__/llm-tool-validation.test.ts
LLM_MODEL=cli-gemini/gemini-2.5-flash npx vitest run src/__tests__/llm-tool-validation.test.ts
LLM_MODEL=openai-codex/gpt-5.3-codex npx vitest run src/__tests__/llm-tool-validation.test.ts
```

The test validates: correct tool names, required args present, expected values, no hallucinated tools.
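That validation is essentially a schema check against the tool catalog. A simplified sketch (catalog entries, tool names, and the `validateToolCalls` helper are illustrative):

```typescript
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

interface ToolSpec {
  name: string;
  required: string[]; // argument names that must be present
}

// Check model tool calls against a catalog: known names, required args present.
function validateToolCalls(calls: ToolCall[], catalog: ToolSpec[]): string[] {
  const specs = new Map(catalog.map((s) => [s.name, s]));
  const errors: string[] = [];
  for (const call of calls) {
    const spec = specs.get(call.name);
    if (!spec) {
      errors.push(`hallucinated tool: ${call.name}`);
      continue;
    }
    for (const arg of spec.required) {
      if (!(arg in call.args)) errors.push(`${call.name}: missing arg ${arg}`);
    }
  }
  return errors;
}
```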
Fully close and reopen VS Code. VS Code only reads new view registrations on startup, not on extension reload.
The model list comes from the bridge (`/v1/models`). Check:
- Is the bridge running? Run `Conduit: Check Proxy Status`
- Is the provider logged in? Run the login command for that provider
- Check the Health Dashboard for provider status
- The provider may not be authenticated; run the login command
- The model may not support your request type
- Check the bridge logs: `Conduit: Show Bridge Logs`
- Open the command palette (`Ctrl+Shift+P`)
- Run `Conduit: Open Chat`
- If the panel appears at the bottom, right-click the tab and select "Move to Secondary Side Bar"
- Check that `conduit.inlineSuggestions` is enabled in settings
- Ensure the bridge is running and at least one model is available
- Try increasing `conduit.inlineTriggerDelay` if suggestions are too slow
The extension tries to auto-start the bridge on activation. If it fails:
- Check if conduit-bridge is installed: `conduit-bridge --version`
- Check if port 31338 is already in use
- Start manually: `conduit-bridge start`
- Check bridge logs: `Conduit: Show Bridge Logs`
```
git clone https://github.com/elvatis/conduit-vscode
cd conduit-vscode
npm install --include=dev
```

```
npm run dev      # watch mode with source maps
npm run build    # production build (minified)
npm run lint     # eslint
npm test         # run tests (vitest, 283 tests)
```

Press F5 in VS Code to launch the Extension Development Host for debugging.
Two GitHub Actions workflows run automatically:
- CI (`.github/workflows/ci.yml`): Runs on every push/PR to `main`. Builds, runs all tests, uploads coverage artifacts.
- LLM Validation (`.github/workflows/llm-validation.yml`): Weekly smoke test (Mondays 06:00 UTC) that runs `llm-tool-validation.test.ts` against Claude Sonnet, Gemini Flash, and GPT-5.3 Codex. Can also be triggered manually via `workflow_dispatch`. Requires `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, and `GOOGLE_APPLICATION_CREDENTIALS` secrets.
```
npx @vscode/vsce package --no-dependencies
# produces conduit-vscode-0.7.0.vsix
```

```
conduit-vscode/
  src/
    extension.ts              - activation, command registration
    commands.ts               - command palette registrations (spawn, fix issue, etc.)
    chat-view-provider.ts     - main chat webview (sidebar), slash commands, markdown
    chat-panel.ts             - chat panel webview host
    sessions-tree-provider.ts - native sessions tree view + background agent sessions
    models-tree-provider.ts   - model picker tree view
    model-registry.ts         - model capabilities, display names, tiers, auto-selection
    proxy-client.ts           - HTTP/streaming client for the bridge
    embedded-proxy.ts         - embedded proxy server (bridgeless mode)
    agent-backends.ts         - shared agent backend abstraction (CLI detection, env, prompt, spawn)
    cli-runner.ts             - CLI subprocess routing (Claude, Gemini, Codex, OpenCode, Pi)
    cost-tracker.ts           - token usage parsing and cost estimation per agent session
    agent-loop.ts             - multi-turn agent loop with tool execution
    agent-parser.ts           - agent output parsing (step cards, tool calls)
    agent-tools.ts            - workspace tools (read/write/search/worktree/diff)
    agent-types.ts            - shared type definitions for the agent system
    aahp-context.ts           - AAHP v3 context auto-detection and injection
    mention-parser.ts         - #file, #selection, #workspace, #codebase parsing
    context-builder.ts        - editor context collection
    bridge-manager.ts         - bridge lifecycle management
    bridge-panel.ts           - bridge manager webview
    browser-session.ts        - browser automation for web providers
    inline-provider.ts        - ghost-text inline completions
    inline-chat.ts            - Ctrl+I inline chat with diff
    custom-instructions.ts    - .conduit/instructions.md loader
    commit-message.ts         - git commit message generation
    config.ts                 - settings reader
    health-panel.ts           - health dashboard webview
    status-bar.ts             - consolidated status bar item
    utils.ts                  - shared utilities
  src/__tests__/              - 17 test files, 283 tests (vitest)
  dist/
    extension.js              - bundled output (esbuild)
  media/
    icon.png                  - extension icon
    icon.svg                  - extension icon (vector)
    sidebar-icon.svg          - panel icon
```
| Project | Description | Overlap |
|---|---|---|
| aahp-orchestrator | VS Code extension for AAHP v3 context injection | Context building for coding agents |
| aahp-runner | Autonomous CLI agent runner | CLI agent spawning, backend routing |
| aahp-cron | Pipeline orchestrator for multi-repo agent runs | Batch agent execution pattern |
Open issues tracking planned features:
| # | Feature | Status |
|---|---|---|
| #6 | Model failover chain for agent sessions | ✅ Done |
| #7 | AAHP context integration for agent sessions | ✅ Done |
| #8 | Multi-issue batch mode (aahp-cron pattern) | ✅ Done |
| #9 | Agent output streaming to session panel | ✅ Done |
| #10 | Shared agent backend abstraction with aahp-runner | ✅ Done |
| #11 | LLM tool-call validation CI (multi-model smoke test) | ✅ Done |
| #12 | Agent session persistence and resume | ✅ Done |
| #13 | Cost tracking per agent session | ✅ Done |
- CI workflow (build + test on push/PR) and weekly LLM validation workflow (#11)
- Shared agent backend abstraction, refactored cli-runner.ts (#10)
- Session persistence and resume: sessions survive VS Code restarts (#12)
- Cost tracking per agent session with token parsing and budget limits (#13)
- Resume, Remove, Clear commands for session management (#12)
- Cost summary command with per-model breakdown (#13)
- Branch protection enabled on main
- 295+ tests across 19 test files
- Agent backends: Claude CLI, Gemini CLI, OpenAI Codex, OpenCode, Pi
- Background agent sessions with spawn/monitor/kill
- Git worktree isolation for parallel agent work
- Worktree lock serialization (prevents .git/config.lock contention)
- Merge-status aware worktree cleanup
- Fix Issue command (auto-worktree + agent spawn)
- Model fallback chain definitions
- Live agent output streaming to session panel (#9)
- 277 tests across 17 test files
- Reliable agent loop with tool execution
- Multi-model auto-selection
- Local model support (BitNet)
- Smart fallback on model errors
- Per-provider sessions with model-aware completions
- Streaming metadata display (timestamps, duration, tokens)
- Inline chat with diff preview
- Initial release: chat interface, model picker, inline completions
- Session management with persistence
- Custom instructions support