A proxy server that enables Claude Code to work with OpenAI-compatible API providers. It converts Claude API requests to OpenAI API calls, allowing you to use various LLM providers through the Claude Code CLI.
- Full Claude API Compatibility: Complete `/v1/messages` endpoint support.
- Multiple Provider Support: OpenAI, Azure OpenAI, local models (Ollama), and any OpenAI-compatible API.
- Web UI for Configuration: Easy-to-use web interface to manage multiple configuration profiles.
- Smart Model Mapping: Configure BIG and SMALL models via the UI.
- Function Calling: Complete tool use support with proper conversion.
- Streaming Responses: Real-time SSE streaming support.
- Image Support: Base64 encoded image input.
- Error Handling: Comprehensive error handling and logging.
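For example, tool definitions arrive in Claude's native `tools` format and are converted to OpenAI function calling before being forwarded. A minimal sketch, assuming the proxy is already running on its default port 8082 (the `get_weather` tool is made up for illustration):

```python
import httpx

# Claude-format tool definition; the proxy translates this into an
# OpenAI function-calling request. The tool itself is hypothetical.
response = httpx.post(
    "http://localhost:8082/v1/messages",
    json={
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 256,
        "tools": [{
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        }],
        "messages": [{"role": "user", "content": "Weather in Paris?"}],
    },
)
print(response.json())
```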
```bash
# Using UV (recommended)
uv sync

# Or using pip
pip install -r requirements.txt
```

```bash
# Direct run
python start_proxy.py

# Or with UV
uv run claude-code-proxy

# Or with Docker
docker run -d -p 8082:8082 zimpel1/claude-code-proxy-enhance:latest

# Persistent configuration
docker run -d -p 8082:8082 -v ~/configs:/app/configs zimpel1/claude-code-proxy-enhance:latest
```

After starting the server, open your browser and go to http://localhost:8082 (or your configured URL).
- The server will create a default configuration file at `configs/profiles.json` on first run.
- Use the web interface to create, edit, and switch between configuration profiles.
- Changes are applied instantly without needing to restart the server.
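For illustration, a profile file might look like the sketch below. The exact schema is an assumption here, so treat the `configs/profiles.json` the server generates as authoritative:

```json
{
  "active_profile": "openai",
  "profiles": {
    "openai": {
      "OPENAI_API_KEY": "sk-your-openai-key",
      "OPENAI_BASE_URL": "https://api.openai.com/v1",
      "BIG_MODEL": "gpt-4o",
      "MIDDLE_MODEL": "gpt-4o",
      "SMALL_MODEL": "gpt-4o-mini"
    }
  }
}
```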
```bash
# If ANTHROPIC_API_KEY is not set in the proxy:
ANTHROPIC_BASE_URL=http://localhost:8082 ANTHROPIC_API_KEY="any-value" claude

# If ANTHROPIC_API_KEY is set in the proxy:
ANTHROPIC_BASE_URL=http://localhost:8082 ANTHROPIC_API_KEY="exact-matching-key" claude
```

Configuration is now managed through a web interface, which saves settings to `configs/profiles.json`.
- Profiles: You can create multiple configuration profiles (e.g., one for OpenAI, one for Azure, one for local models).
- Dynamic Reloading: Activating a new profile applies the settings immediately without a server restart.
- Editable Fields: All major settings, including API keys, base URLs, model names, and server settings, are editable through the UI.
Environment variables from your .env file are used only on the very first run to create the initial default profile. After that, all configuration is managed through the UI.
The proxy maps Claude model requests to your configured models:
| Claude Request | Mapped To | Default |
|---|---|---|
| Models with "haiku" | `SMALL_MODEL` | `gpt-4o-mini` |
| Models with "sonnet" | `MIDDLE_MODEL` | value of `BIG_MODEL` |
| Models with "opus" | `BIG_MODEL` | `gpt-4o` |
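The mapping is a simple substring match on the requested model name. A minimal sketch of the rule, not the proxy's actual code (the fallback to `MIDDLE_MODEL` for unknown names is an assumption):

```python
def map_model(requested: str, big: str, middle: str, small: str) -> str:
    """Map a Claude model name to a configured backend model."""
    name = requested.lower()
    if "haiku" in name:
        return small    # SMALL_MODEL
    if "sonnet" in name:
        return middle   # MIDDLE_MODEL
    if "opus" in name:
        return big      # BIG_MODEL
    return middle       # assumption: unknown names fall back to MIDDLE_MODEL
```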
OPENAI_API_KEY="sk-your-openai-key"
OPENAI_BASE_URL="https://api.openai.com/v1"
BIG_MODEL="gpt-4o"
MIDDLE_MODEL="gpt-4o"
SMALL_MODEL="gpt-4o-mini"OPENAI_API_KEY="your-azure-key"
OPENAI_BASE_URL="https://your-resource.openai.azure.com/openai/deployments/your-deployment"
BIG_MODEL="gpt-4"
MIDDLE_MODEL="gpt-4"
SMALL_MODEL="gpt-35-turbo"OPENAI_API_KEY="dummy-key" # Required but can be dummy
OPENAI_BASE_URL="http://localhost:11434/v1"
BIG_MODEL="llama3.1:70b"
MIDDLE_MODEL="llama3.1:70b"
SMALL_MODEL="llama3.1:8b"Any OpenAI-compatible API can be used by setting the appropriate OPENAI_BASE_URL.
You can also call the proxy's `/v1/messages` endpoint directly:

```python
import httpx

response = httpx.post(
    "http://localhost:8082/v1/messages",
    json={
        "model": "claude-3-5-sonnet-20241022",  # Maps to MIDDLE_MODEL
        "max_tokens": 100,
        "messages": [
            {"role": "user", "content": "Hello!"}
        ],
    },
)
```
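Streaming works on the same endpoint. A minimal sketch, assuming the standard Anthropic-style SSE wire format when `"stream": true` is set:

```python
import httpx

# Stream the response as server-sent events; the exact event
# payloads are assumed to follow Anthropic's SSE format.
with httpx.stream(
    "POST",
    "http://localhost:8082/v1/messages",
    json={
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 100,
        "stream": True,
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
) as response:
    for line in response.iter_lines():
        if line.startswith("data:"):
            print(line)  # raw SSE event line
```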
This proxy is designed to work seamlessly with Claude Code CLI:

```bash
# Start the proxy
python start_proxy.py

# Use Claude Code with the proxy
ANTHROPIC_BASE_URL=http://localhost:8082 claude

# Or set permanently
export ANTHROPIC_BASE_URL=http://localhost:8082
claude
```

Test the proxy functionality:
```bash
# Run comprehensive tests
python src/test_claude_to_openai.py
```

For development:

```bash
# Install dependencies
uv sync

# Run server
uv run claude-code-proxy

# Format code
uv run black src/
uv run isort src/

# Type checking
uv run mypy src/
```

Project structure:

```
claude-code-proxy/
├── src/
│   ├── main.py                    # Main server
│   ├── test_claude_to_openai.py   # Tests
│   └── [other modules...]
├── start_proxy.py                 # Startup script
├── .env.example                   # Config template
└── README.md                      # This file
```
- Async/await for high concurrency
- Connection pooling for efficiency
- Streaming support for real-time responses
- Configurable timeouts and retries
- Smart error handling with detailed logging
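As an illustration of the pooling and timeout ideas, not the proxy's actual source, a shared `httpx` client could be configured like this (all values are made up):

```python
import httpx

# One shared AsyncClient reuses connections across requests (pooling)
# and applies consistent timeouts; limits and timeouts are illustrative.
client = httpx.AsyncClient(
    base_url="https://api.openai.com/v1",
    limits=httpx.Limits(max_connections=100, max_keepalive_connections=20),
    timeout=httpx.Timeout(90.0, connect=5.0),
)
```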
MIT License

