zimplexing/claude-code-proxy-enhance
Claude Code Proxy Enhance

A proxy server that enables Claude Code to work with OpenAI-compatible API providers. It converts Claude API requests into OpenAI API calls, allowing you to use various LLM providers through the Claude Code CLI.

Features

  • Full Claude API Compatibility: complete /v1/messages endpoint support
  • Multiple Provider Support: OpenAI, Azure OpenAI, local models (Ollama), and any OpenAI-compatible API
  • Web UI for Configuration: easy-to-use web interface for managing multiple configuration profiles
  • Smart Model Mapping: configure BIG, MIDDLE, and SMALL models via the UI
  • Function Calling: complete tool-use support with proper request/response conversion
  • Streaming Responses: real-time SSE streaming support
  • Image Support: base64-encoded image input
  • Error Handling: comprehensive error handling and logging
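
As an illustration of the function-calling support, a tool-use request follows the Claude API tools schema on the /v1/messages endpoint. The sketch below is illustrative only; the get_weather tool and its schema are made up, not part of this project:

```python
# Hypothetical tool definition for illustration; the proxy translates this
# Claude-style "tools" schema into the OpenAI function-calling format.
tool_use_request = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 200,
    "tools": [
        {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}
```

POST this payload to http://localhost:8082/v1/messages (see the Basic Chat example under Usage Examples); the response will contain a tool_use content block when the model decides to call the tool.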

Quick Start

1. Install Dependencies

# Using UV (recommended)
uv sync

# Or using pip
pip install -r requirements.txt

2. Start Server

# Direct run
python start_proxy.py

# Or with UV
uv run claude-code-proxy

# Or with docker
docker run -d -p 8082:8082 zimpel1/claude-code-proxy-enhance:latest

# Persistent configuration
docker run -d -p 8082:8082 -v ~/configs:/app/configs zimpel1/claude-code-proxy-enhance:latest 

3. Configure via Web UI

After starting the server, open your browser and go to http://localhost:8082 (or your configured URL).

  • The server will create a default configuration file at configs/profiles.json on first run.
  • Use the web interface to create, edit, and switch between configuration profiles.
  • Changes are applied instantly without needing to restart the server.

4. Use with Claude Code

# If ANTHROPIC_API_KEY is not set in the proxy:
ANTHROPIC_BASE_URL=http://localhost:8082 ANTHROPIC_API_KEY="any-value" claude

# If ANTHROPIC_API_KEY is set in the proxy:
ANTHROPIC_BASE_URL=http://localhost:8082 ANTHROPIC_API_KEY="exact-matching-key" claude

Configuration

Configuration is managed through the web interface, which saves settings to configs/profiles.json.

Web UI Configuration

  • Profiles: You can create multiple configuration profiles (e.g., one for OpenAI, one for Azure, one for local models).
  • Dynamic Reloading: Activating a new profile applies the settings immediately without a server restart.
  • Editable Fields: All major settings, including API keys, base URLs, model names, and server settings, are editable through the UI.

Environment Variables (for first run)

Environment variables from your .env file are used only on the very first run to create the initial default profile. After that, all configuration is managed through the UI.

Model Mapping

The proxy maps Claude model requests to your configured models:

Claude Request         Mapped To       Default
Models with "haiku"    SMALL_MODEL     gpt-4o-mini
Models with "sonnet"   MIDDLE_MODEL    the value of BIG_MODEL
Models with "opus"     BIG_MODEL       gpt-4o
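
The mapping rule above amounts to a substring check on the requested model name. This is a sketch of the rule, not the proxy's actual implementation; the defaults match the table:

```python
# Illustrative defaults; in practice these come from the active profile.
SMALL_MODEL = "gpt-4o-mini"
MIDDLE_MODEL = "gpt-4o"  # defaults to the value of BIG_MODEL
BIG_MODEL = "gpt-4o"

def map_model(claude_model: str) -> str:
    """Map a Claude model name to the configured backend model."""
    name = claude_model.lower()
    if "haiku" in name:
        return SMALL_MODEL
    if "sonnet" in name:
        return MIDDLE_MODEL
    if "opus" in name:
        return BIG_MODEL
    return claude_model  # pass unrecognized names through unchanged

print(map_model("claude-3-5-haiku-20241022"))  # gpt-4o-mini
```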

Provider Examples

OpenAI

OPENAI_API_KEY="sk-your-openai-key"
OPENAI_BASE_URL="https://api.openai.com/v1"
BIG_MODEL="gpt-4o"
MIDDLE_MODEL="gpt-4o"
SMALL_MODEL="gpt-4o-mini"

Azure OpenAI

OPENAI_API_KEY="your-azure-key"
OPENAI_BASE_URL="https://your-resource.openai.azure.com/openai/deployments/your-deployment"
BIG_MODEL="gpt-4"
MIDDLE_MODEL="gpt-4"
SMALL_MODEL="gpt-35-turbo"

Local Models (Ollama)

OPENAI_API_KEY="dummy-key"  # Required but can be dummy
OPENAI_BASE_URL="http://localhost:11434/v1"
BIG_MODEL="llama3.1:70b"
MIDDLE_MODEL="llama3.1:70b"
SMALL_MODEL="llama3.1:8b"

Other Providers

Any OpenAI-compatible API can be used by setting the appropriate OPENAI_BASE_URL.
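
For example, OpenRouter exposes an OpenAI-compatible endpoint; the model slugs below are illustrative and should be checked against the provider's catalog:

```shell
OPENAI_API_KEY="sk-or-your-openrouter-key"
OPENAI_BASE_URL="https://openrouter.ai/api/v1"
BIG_MODEL="anthropic/claude-3.5-sonnet"
MIDDLE_MODEL="anthropic/claude-3.5-sonnet"
SMALL_MODEL="anthropic/claude-3-haiku"
```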

Usage Examples

Basic Chat

import httpx

response = httpx.post(
    "http://localhost:8082/v1/messages",
    json={
        "model": "claude-3-5-sonnet-20241022",  # Maps to MIDDLE_MODEL
        "max_tokens": 100,
        "messages": [
            {"role": "user", "content": "Hello!"}
        ]
    }
)
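
Streaming works the same way with "stream": True, which makes the proxy return Server-Sent Events. The client sketch below assumes Anthropic-style content_block_delta events on the same endpoint; it is an illustration, not code from this project:

```python
import json

def iter_text_deltas(lines):
    """Yield text fragments from Anthropic-style SSE 'data:' lines."""
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip 'event:' lines and keep-alives
        event = json.loads(line[len("data:"):].strip())
        if event.get("type") == "content_block_delta":
            yield event["delta"].get("text", "")

def stream_chat(prompt: str, base_url: str = "http://localhost:8082"):
    # httpx imported here so the parsing helper above stays dependency-free.
    import httpx

    with httpx.stream(
        "POST",
        f"{base_url}/v1/messages",
        json={
            "model": "claude-3-5-sonnet-20241022",
            "max_tokens": 100,
            "stream": True,
            "messages": [{"role": "user", "content": prompt}],
        },
    ) as response:
        for text in iter_text_deltas(response.iter_lines()):
            print(text, end="", flush=True)
```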

Integration with Claude Code

This proxy is designed to work seamlessly with the Claude Code CLI:

# Start the proxy
python start_proxy.py

# Use Claude Code with the proxy
ANTHROPIC_BASE_URL=http://localhost:8082 claude

# Or set permanently
export ANTHROPIC_BASE_URL=http://localhost:8082
claude

Testing

Test the proxy functionality:

# Run comprehensive tests
python src/test_claude_to_openai.py

Development

Using UV

# Install dependencies
uv sync

# Run server
uv run claude-code-proxy

# Format code
uv run black src/
uv run isort src/

# Type checking
uv run mypy src/

Project Structure

claude-code-proxy/
├── src/
│   ├── main.py                     # Main server
│   ├── test_claude_to_openai.py    # Tests
│   └── [other modules...]
├── start_proxy.py                  # Startup script
├── .env.example                    # Config template
└── README.md                       # This file

Performance

  • Async/await for high concurrency
  • Connection pooling for efficiency
  • Streaming support for real-time responses
  • Configurable timeouts and retries
  • Smart error handling with detailed logging

License

MIT License
