A powerful Python recreation of Claude Code with enhanced real-time visualization, cost management, and Model Context Protocol (MCP) server capabilities. This tool provides a natural language interface for software development tasks with support for multiple LLM providers.
- Multi-Provider Support: Works with OpenAI, Anthropic, and other LLM providers
- Model Context Protocol Integration:
- Run as an MCP server for use with Claude Desktop and other clients
- Connect to any MCP server with the built-in MCP client
- Multi-agent synchronization for complex problem solving
- Real-Time Tool Visualization: See tool execution progress and results in real-time
- Cost Management: Track token usage and expenses with budget controls
- Comprehensive Tool Suite: File operations, search, command execution, and more
- Enhanced UI: Rich terminal interface with progress indicators and syntax highlighting
- Context Optimization: Smart conversation compaction and memory management
- Agent Coordination: Specialized agents with different roles can collaborate on tasks
- Clone this repository
- Install dependencies:
```
pip install -r requirements.txt
```

- Create a .env file with your API keys:

```
# Choose one or more providers
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here

# Optional model selection
OPENAI_MODEL=gpt-4o
ANTHROPIC_MODEL=claude-3-opus-20240229
```
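Since the CLI picks a default provider from whichever API keys are present, the selection logic can be sketched roughly as follows (the function name and preference order are illustrative assumptions, not the project's actual code):

```python
import os

def detect_provider(env=os.environ):
    """Pick a default LLM provider based on which API keys are set.

    Illustrative sketch only: the real CLI may use a different
    preference order or configuration mechanism.
    """
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    raise RuntimeError("No API key found; set OPENAI_API_KEY or ANTHROPIC_API_KEY")
```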
Run the CLI with the default provider (determined from available API keys):

```
python claude.py chat
```

Specify a provider and model:

```
python claude.py chat --provider openai --model gpt-4o
```

Set a budget limit to manage costs:

```
python claude.py chat --budget 5.00
```

Run as a Model Context Protocol server:

```
python claude.py serve
```

Start in development mode with the MCP Inspector:

```
python claude.py serve --dev
```

Configure host and port:

```
python claude.py serve --host 0.0.0.0 --port 8000
```

Specify additional dependencies:

```
python claude.py serve --dependencies pandas numpy
```

Load environment variables from file:

```
python claude.py serve --env-file .env
```

Connect to an MCP server using Claude as the reasoning engine:

```
python claude.py mcp-client path/to/server.py
```

Specify a Claude model:

```
python claude.py mcp-client path/to/server.py --model claude-3-5-sonnet-20241022
```

Try the included example server:
```
# In terminal 1 - start the server
python examples/echo_server.py

# In terminal 2 - connect with the client
python claude.py mcp-client examples/echo_server.py
```

Launch a multi-agent client with synchronized agents:

```
python claude.py mcp-multi-agent path/to/server.py
```

Use a custom agent configuration file:

```
python claude.py mcp-multi-agent path/to/server.py --config examples/agents_config.json
```

Example with the echo server:

```
# In terminal 1 - start the server
python examples/echo_server.py

# In terminal 2 - launch the multi-agent client
python claude.py mcp-multi-agent examples/echo_server.py --config examples/agents_config.json
```

- View: Read files with optional line limits
- Edit: Modify files with precise text replacement
- Replace: Create or overwrite files
- GlobTool: Find files by pattern matching
- GrepTool: Search file contents using regex
- LS: List directory contents
- Bash: Execute shell commands
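The file tools in the list above follow a simple "take parameters, return results" shape. As one example, GlobTool's pattern matching can be sketched with the standard library (this is a minimal illustration, not the project's implementation):

```python
import fnmatch
import os

def glob_tool(pattern, root="."):
    """Minimal sketch of a GlobTool: walk the tree under `root` and
    collect files whose path matches the glob pattern.

    The real tool likely adds result limits, ignore rules, and sorting
    by modification time; those details are assumptions left out here.
    """
    matches = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if fnmatch.fnmatch(path, pattern):
                matches.append(path)
    return sorted(matches)
```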
- /help: Show available commands
- /compact: Compress conversation history to save tokens
- /version: Show version information
- /providers: List available LLM providers
- /cost: Show cost and usage information
- /budget [amount]: Set a budget limit
- /quit, /exit: Exit the application
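The /cost and /budget commands imply a running tally of token spend checked against a limit. A minimal sketch of that bookkeeping (class name, rate units, and method names are illustrative, not the project's actual API):

```python
class BudgetTracker:
    """Track approximate LLM spend and enforce an optional budget cap.

    Rates are assumed to be USD per 1K tokens; real provider pricing
    varies by model and should be looked up, not hard-coded.
    """

    def __init__(self, limit=None):
        self.limit = limit     # None means no cap
        self.spent = 0.0       # running total in USD

    def add(self, input_tokens, output_tokens, in_rate, out_rate):
        """Record one request's cost and return the running total."""
        self.spent += input_tokens / 1000 * in_rate
        self.spent += output_tokens / 1000 * out_rate
        return self.spent

    def exceeded(self):
        return self.limit is not None and self.spent >= self.limit
```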
Claude Code Python Edition is built with a modular architecture:
```
/claude_code/
  /lib/
    /providers/    # LLM provider implementations
    /tools/        # Tool implementations
    /context/      # Context management
    /ui/           # UI components
    /monitoring/   # Cost tracking & metrics
  /commands/       # CLI commands
  /config/         # Configuration management
  /util/           # Utility functions
claude.py          # Main CLI entry point
mcp_server.py      # Model Context Protocol server
```
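The providers/ layer suggests a common adapter interface that each LLM backend implements. One plausible shape, sketched here with illustrative names (the actual classes and registry in /lib/providers/ may differ):

```python
from abc import ABC, abstractmethod

class Provider(ABC):
    """Hypothetical base class for an LLM provider adapter."""

    name: str

    @abstractmethod
    def complete(self, messages, model, **kwargs):
        """Send a chat request and return the assistant's reply text."""

# A registry lets the CLI look providers up by name (e.g. --provider openai).
PROVIDERS = {}

def register(provider):
    PROVIDERS[provider.name] = provider

class EchoProvider(Provider):
    """Stand-in provider used here only to demonstrate the interface."""
    name = "echo"

    def complete(self, messages, model, **kwargs):
        # Echo the last user message instead of calling a real API.
        return messages[-1]["content"]
```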
Once the MCP server is running, you can connect to it from Claude Desktop or other MCP-compatible clients:
- Install and run the MCP server:

```
python claude.py serve
```

- Open the configuration page in your browser: http://localhost:8000

- Follow the instructions to configure Claude Desktop, including:
  - Copy the JSON configuration
  - Download the auto-configured JSON file
  - Step-by-step setup instructions
To connect to any MCP server using Claude Code:
- Ensure you have your Anthropic API key in the environment or .env file
- Start the MCP server you want to connect to
- Connect using the MCP client:

```
python claude.py mcp-client path/to/server.py
```
- Type queries in the interactive chat interface
For complex tasks, the multi-agent mode allows multiple specialized agents to collaborate:
- Create an agent configuration file or use the provided example
- Start your MCP server
- Launch the multi-agent client:

```
python claude.py mcp-multi-agent path/to/server.py --config examples/agents_config.json
```
- Use the command interface to interact with multiple agents:
- Type a message to broadcast to all agents
- Use /talk Agent_Name message for direct communication
- Use /agents to see all available agents
- Use /history to view the conversation history
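The broadcast-versus-direct routing described above can be sketched as a small dispatcher (the function and its return shape are illustrative assumptions, not the client's actual code):

```python
def route(command, agents):
    """Route a console command to agents.

    '/talk Name message' goes to one named agent; any other text is
    broadcast to every agent. Returns a list of (agent, message) pairs.
    Illustrative sketch only.
    """
    if command.startswith("/talk "):
        _prefix, name, message = command.split(" ", 2)
        # Unknown agent names are dropped; a real client would warn the user.
        return [(name, message)] if name in agents else []
    return [(agent, command) for agent in agents]
```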
- Fork the repository
- Create a feature branch
- Implement your changes with tests
- Submit a pull request
MIT
This project is inspired by Anthropic's Claude Code CLI tool, reimplemented in Python with additional features for enhanced visibility, cost management, and MCP server capabilities.

# OpenAI Code Assistant
A powerful command-line and API-based coding assistant that uses OpenAI APIs with function calling and streaming.
- Interactive CLI for coding assistance
- Web API for integration with other applications
- Model Context Protocol (MCP) server implementation
- Replication support for high availability
- Tool-based architecture for extensibility
- Reinforcement learning for tool optimization
- Web client for browser-based interaction
- Clone the repository
- Install dependencies:

```
pip install -r requirements.txt
```

- Set your OpenAI API key:

```
export OPENAI_API_KEY=your_api_key
```
Run the assistant in interactive CLI mode:
```
python cli.py
```

Options:

- --model, -m: Specify the model to use (default: gpt-4o)
- --temperature, -t: Set temperature for response generation (default: 0)
- --verbose, -v: Enable verbose output with additional information
- --enable-rl/--disable-rl: Enable/disable reinforcement learning for tool optimization
- --rl-update: Manually trigger an update of the RL model
Run the assistant as an API server:
```
python cli.py serve
```

Options:

- --host: Host address to bind to (default: 127.0.0.1)
- --port, -p: Port to listen on (default: 8000)
- --workers, -w: Number of worker processes (default: 1)
- --enable-replication: Enable replication across instances
- --primary/--secondary: Whether this is a primary or secondary instance
- --peer: Peer instances to replicate with (host:port); can be specified multiple times
Run the assistant as a Model Context Protocol (MCP) server:
```
python cli.py mcp-serve
```

Options:

- --host: Host address to bind to (default: 127.0.0.1)
- --port, -p: Port to listen on (default: 8000)
- --dev: Enable development mode with additional logging
- --dependencies: Additional Python dependencies to install
- --env-file: Path to .env file with environment variables
Connect to an MCP server using the assistant as the reasoning engine:
```
python cli.py mcp-client path/to/server.py
```

Options:

- --model, -m: Model to use for reasoning (default: gpt-4o)
- --host: Host address for the MCP server (default: 127.0.0.1)
- --port, -p: Port for the MCP server (default: 8000)
For easier deployment, use the provided script:
```
./deploy.sh --host 0.0.0.0 --port 8000 --workers 4
```

To enable replication:

```
# Primary instance
./deploy.sh --enable-replication --port 8000

# Secondary instance
./deploy.sh --enable-replication --secondary --port 8001 --peer 127.0.0.1:8000
```

To use the web client, open web-client.html in your browser. Make sure the API server is running.
- POST /conversation: Create a new conversation
- POST /conversation/{conversation_id}/message: Send a message to a conversation
- POST /conversation/{conversation_id}/message/stream: Stream a message response
- GET /conversation/{conversation_id}: Get conversation details
- DELETE /conversation/{conversation_id}: Delete a conversation
- GET /health: Health check endpoint
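A small helper makes the conversation endpoint paths concrete. The base URL matches the server's defaults; request payload fields are not documented here, so only the paths are shown (sending would be e.g. `requests.post(endpoint(), json=...)`):

```python
BASE = "http://127.0.0.1:8000"  # default host/port from the serve command

def endpoint(conversation_id=None, stream=False):
    """Build the conversation API paths listed above.

    With no arguments: the path for creating a conversation.
    With an id: the message (or streaming message) path for it.
    """
    if conversation_id is None:
        return f"{BASE}/conversation"
    suffix = "/message/stream" if stream else "/message"
    return f"{BASE}/conversation/{conversation_id}{suffix}"
```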
- GET /: Health check (MCP protocol)
- POST /context: Get context for a prompt template
- GET /prompts: List available prompt templates
- GET /prompts/{prompt_id}: Get a specific prompt template
- POST /prompts: Create a new prompt template
- PUT /prompts/{prompt_id}: Update an existing prompt template
- DELETE /prompts/{prompt_id}: Delete a prompt template
The replication system allows running multiple instances of the assistant with synchronized state. This provides:
- High availability
- Load balancing
- Fault tolerance
To set up replication:
- Start a primary instance with --enable-replication
- Start secondary instances with --enable-replication --secondary --peer [primary-host:port]
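The primary-to-secondary state synchronization can be modeled in miniature as follows. This is a toy in-memory sketch under the assumption that the primary pushes each state update to its peers; the real system presumably replicates over HTTP between instances:

```python
class Instance:
    """Toy model of a replicated assistant instance (illustrative only)."""

    def __init__(self, primary=True):
        self.primary = primary
        self.state = {}   # e.g. conversation_id -> conversation data
        self.peers = []   # other Instance objects to replicate to

    def apply(self, key, value):
        """Apply a state change locally, then push it to all peers.

        In the real system this push would be a network call to each
        peer given via --peer, with retries and conflict handling.
        """
        self.state[key] = value
        for peer in self.peers:
            peer.state[key] = value
```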
The assistant includes various tools:
- Weather: Get current weather for a location
- View: Read files from the filesystem
- Edit: Edit files
- Replace: Write files
- Bash: Execute bash commands
- GlobTool: File pattern matching
- GrepTool: Content search
- LS: List directory contents
- JinaSearch: Web search using Jina.ai
- JinaFactCheck: Fact checking using Jina.ai
- JinaReadURL: Read and summarize webpages
- /help: Show help message
- /compact: Compact the conversation to reduce token usage
- /status: Show token usage and session information
- /config: Show current configuration settings
- /rl-status: Show RL tool optimizer status (if enabled)
- /rl-update: Update the RL model manually (if enabled)
- /rl-stats: Show tool usage statistics (if enabled)
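The RL tool optimizer behind /rl-status and /rl-update is not specified here; one common approach for this kind of tool selection is an epsilon-greedy bandit, sketched below purely as an illustration of the idea (the project's actual RL scheme may be entirely different):

```python
import random

class ToolOptimizer:
    """Epsilon-greedy bandit over tools: explore occasionally, otherwise
    pick the tool with the best average observed reward.

    Illustrative sketch; not the assistant's actual optimizer.
    """

    def __init__(self, tools, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {t: 0 for t in tools}   # times each tool was used
        self.values = {t: 0.0 for t in tools} # running mean reward

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))  # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, tool, reward):
        # Incremental mean: v += (r - v) / n
        self.counts[tool] += 1
        n = self.counts[tool]
        self.values[tool] += (reward - self.values[tool]) / n
```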
