A smart, fast, and interactive proxy server that enables the Claude Code CLI to work with any OpenAI-compatible API provider.
- 🚀 Simple & Fast: Minimal dependencies, near-instant startup.
- 🤖 Interactive Model Selection: If you don't specify a model, the bridge will automatically fetch the available models from your provider and present you with an interactive list to choose from.
- 🧠 Smart Token Auto-Detection: Automatically detects the `max_tokens` limit for your chosen model and sets a safe cap, preventing common errors.
- 🛠️ Full Tool Support: Seamlessly converts function-calling and tool-use formats between Claude and OpenAI.
- 🌐 Universal Compatibility: Works with OpenAI, Azure OpenAI, Ollama, OpenRouter, and any other OpenAI-compatible API.
- 🛡️ Robust & Type-Safe: Built with TypeScript for reliability and includes graceful error handling.
- 🐳 Production Ready: Full Docker support, health checks, and graceful shutdown.
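The token auto-detection can be pictured as a simple cap. This is a minimal sketch, not the bridge's actual implementation: the `contextLength` field and the fallback value are illustrative assumptions about what a provider's model listing might report.

```typescript
// Illustrative sketch of max_tokens capping. The ModelInfo shape and the
// 4096 fallback are assumptions, not the bridge's real code.
interface ModelInfo {
  id: string;
  contextLength?: number; // hypothetical limit reported by the provider
}

// Pick a safe max_tokens: never exceed the model's known limit,
// and fall back to a conservative default when the limit is unknown.
function safeMaxTokens(model: ModelInfo, requested: number, fallback = 4096): number {
  const limit = model.contextLength ?? fallback;
  return Math.min(requested, limit);
}

console.log(safeMaxTokens({ id: "gpt-4o", contextLength: 16384 }, 100000)); // 16384
console.log(safeMaxTokens({ id: "unknown-model" }, 100000)); // 4096
```

The `--max-tokens` flag described below would simply bypass a cap like this.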
The easiest way to get started is with `npx`.
1. Run the Bridge: Provide your API provider's URL and your API key. You can omit the model name to get an interactive selector.

   ```bash
   # Run without a model to get an interactive prompt
   npx claude-bridge -u https://api.your-provider.com/v1 -k sk-your-key

   # Or, specify a model directly
   npx claude-bridge -u https://api.your-provider.com/v1 -k sk-your-key -m gpt-4o

   # Full example with OpenRouter
   npx claude-bridge -u https://openrouter.ai/api/v1/ -k <your_api_key> -m deepseek/deepseek-chat-v3-0324:free

   # Example with PPInfra
   npm run dev -- -u https://api.ppinfra.com/v3/openai/ -k <your_api_key> -m deepseek/deepseek-v3-0324
   ```
2. Configure the Claude CLI: In another terminal, point the Claude CLI to the bridge:

   ```bash
   export ANTHROPIC_BASE_URL=http://localhost:8000
   export ANTHROPIC_AUTH_TOKEN="dummy"
   claude
   ```
That's it! The Claude CLI will now work through the bridge.
| Flag | Alias | Description | Default |
|---|---|---|---|
| `--url <string>` | `-u` | (Required) Target API base URL. | |
| `--key <string>` | `-k` | (Required) Your API key. | |
| `--model <string>` | `-m` | Model name. If omitted, an interactive selector will appear. | |
| `--port <number>` | `-p` | The port for the bridge server to run on. | `8000` |
| `--host <string>` | | The host address for the bridge server. | `localhost` |
| `--max-tokens <number>` | | Manually override the maximum tokens for a model. | (auto-detected) |
| `--timeout <number>` | | Request timeout in seconds. | `30` |
| `--log-level <level>` | | Set the log level (`debug`, `info`, `warn`, `error`). | `info` |
| `--help` | `-h` | Show the help message. | |
| `--version` | `-v` | Show the version number. | |
The bridge acts as an intelligent middleman:
1. Receives a request from the Claude Code CLI.
2. Converts the Claude API format (including tool-use) to the standard OpenAI format.
3. Forwards the request to your chosen API provider.
4. Converts the OpenAI response back to the Claude format.
5. Returns the final response to the Claude Code CLI.
```
Claude Code CLI  →  Claude Bridge  →  Any OpenAI-compatible API
(Claude Format)     (Conversion)      (OpenAI Format)
```
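The conversion step above can be sketched as a pure mapping between the two request shapes. This is a simplified illustration only (text-only messages, no tool-use or streaming), not the bridge's actual code; the shapes loosely follow the public Anthropic and OpenAI request schemas.

```typescript
// Simplified sketch of the Claude → OpenAI request mapping.
// Text-only messages; real requests also carry tool-use blocks, etc.
interface ClaudeRequest {
  model: string;
  system?: string;
  max_tokens: number;
  messages: { role: "user" | "assistant"; content: string }[];
}

interface OpenAIRequest {
  model: string;
  max_tokens: number;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
}

function claudeToOpenAI(req: ClaudeRequest, targetModel: string): OpenAIRequest {
  const messages: OpenAIRequest["messages"] = [];
  // Claude carries the system prompt as a top-level field;
  // OpenAI expects it as the first message in the list.
  if (req.system) messages.push({ role: "system", content: req.system });
  messages.push(...req.messages);
  // The bridge also swaps in the model name you configured for the provider.
  return { model: targetModel, max_tokens: req.max_tokens, messages };
}

const out = claudeToOpenAI(
  { model: "claude-3", system: "Be brief.", max_tokens: 1024, messages: [{ role: "user", content: "Hi" }] },
  "gpt-4o"
);
console.log(out.messages[0]); // { role: "system", content: "Be brief." }
```

The reverse direction (OpenAI response back to Claude format) is the mirror image of this mapping.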
```bash
# Clone and install dependencies
git clone https://github.com/loulin/claude-bridge.git
cd claude-bridge
npm install

# Run the development server
# This will first build the project, then run it with nodemon.
npm run dev

# Manually build the project
npm run build

# Run tests
npm test
```

MIT