This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Maple Proxy is a lightweight OpenAI-compatible proxy server that forwards requests to Maple/OpenSecret's TEE (Trusted Execution Environment) infrastructure. It acts as a translation layer between OpenAI client libraries and the OpenSecret backend, enabling secure AI processing in trusted enclaves.
- `just run` - Start the development server (loads config from `.env`)
- `just run-local` - Run pointing to the local backend (http://localhost:3000)
- `just run-prod` - Run pointing to the production backend (https://enclave.trymaple.ai)
- `just build` - Build a debug binary
- `just release` - Build an optimized release binary
- `cargo run` - Run directly with cargo
- `just test` - Run all tests
- `just fmt` or `just format` - Format code with rustfmt
- `just lint` or `just clippy` - Run clippy lints with strict warnings
- `just check` - Run format, lint, and test in sequence
- `just docker-build` - Build the Docker image locally
- `just docker-run` - Run the container interactively
- `just docker-run-detached` - Run the container in the background
- `just compose-up` - Start with docker-compose
- `just compose-down` - Stop docker-compose services
- `main.rs` - Entry point that initializes the server with configuration and starts the Axum web server on the configured host/port.
- `lib.rs` - Library root that exports the main `create_app` function, which builds the Axum router with:
  - Health check endpoints (`/`, `/health`)
  - OpenAI-compatible endpoints (`/v1/models`, `/v1/chat/completions`)
  - Optional CORS support
  - Request tracing
- `config.rs` - Configuration management using clap for CLI args and environment variables:
  - Server settings (host, port)
  - Backend URL configuration
  - API key management
  - Debug and CORS flags
  - OpenAI-compatible error types
- `proxy.rs` - Core proxy logic that:
  - Extracts API keys from `Authorization` headers or falls back to the default
  - Creates an OpenSecret client and performs the attestation handshake
  - Forwards requests to the TEE backend
  - Handles streaming responses for chat completions
  - Transforms responses to OpenAI format
1. Client sends an OpenAI-compatible request to the proxy
2. Proxy extracts the API key (from the header or default config)
3. Proxy creates an OpenSecret client and performs TEE attestation
4. Proxy forwards the request to the Maple backend (enclave.trymaple.ai or the configured URL)
5. Proxy streams the response back to the client in OpenAI format
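The request flow above can be sketched from the client's side. This is a minimal illustration, assuming the proxy listens on its default bind address of 127.0.0.1:8080; the helper function and model name are hypothetical, not part of the proxy itself:

```python
import json

def build_chat_request(api_key: str, prompt: str):
    """Hypothetical helper: assemble (url, headers, body) for the proxy's
    OpenAI-compatible chat completions endpoint."""
    url = "http://127.0.0.1:8080/v1/chat/completions"  # assumed default bind
    headers = {
        "Authorization": f"Bearer {api_key}",  # per-request key (step 2)
        "Content-Type": "application/json",
    }
    body = {
        "model": "example-model",  # placeholder; query /v1/models for real names
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,  # the proxy streams completions back (step 5)
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_chat_request("sk-example", "Hello")
print(payload)
```

Because the proxy speaks the OpenAI wire format, any OpenAI client library pointed at the proxy's base URL should work the same way.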
The proxy supports two authentication modes:
- Default API Key: Set via the `MAPLE_API_KEY` environment variable
- Per-Request: Clients provide an `Authorization: Bearer <key>` header
For public deployments, avoid setting a default API key so that per-request authentication is required.
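The fallback behavior can be sketched as follows — a simplified illustration of the header handling, not the actual `proxy.rs` implementation:

```python
def resolve_api_key(auth_header, default_key):
    """Pick the per-request key from an 'Authorization: Bearer <key>' header,
    falling back to the configured default (MAPLE_API_KEY), if any."""
    prefix = "Bearer "
    if auth_header and auth_header.startswith(prefix):
        return auth_header[len(prefix):]
    return default_key  # None here means the request cannot be authenticated

# The per-request key wins over the default:
assert resolve_api_key("Bearer client-key", "default-key") == "client-key"
# No header: fall back to the default, when one is configured:
assert resolve_api_key(None, "default-key") == "default-key"
# Public deployment with no default key: nothing to authenticate with:
assert resolve_api_key(None, None) is None
```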
Environment variables (can be set in a `.env` file):
- `MAPLE_HOST` - Server bind address (default: 127.0.0.1)
- `MAPLE_PORT` - Server port (default: 8080)
- `MAPLE_BACKEND_URL` - OpenSecret backend URL (default: https://enclave.trymaple.ai)
- `MAPLE_API_KEY` - Default API key (optional)
- `MAPLE_DEBUG` - Enable debug logging
- `MAPLE_ENABLE_CORS` - Enable CORS for web clients
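For example, a `.env` for local development against a local backend might look like this (values are illustrative):

```shell
# Illustrative .env for local development; adjust values to your setup.
MAPLE_HOST=127.0.0.1
MAPLE_PORT=8080
MAPLE_BACKEND_URL=http://localhost:3000
# Omit MAPLE_API_KEY on public deployments to force per-request auth.
MAPLE_API_KEY=sk-example
MAPLE_DEBUG=true
MAPLE_ENABLE_CORS=true
```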
Tests are located in the `tests/` directory. Currently includes:
- `health_test.rs` - Tests for the health check endpoints

Run tests with `just test` or `cargo test`.
Key dependencies:
- `opensecret` (0.2.0) - Official OpenSecret SDK for TEE communication
- `axum` - Web framework for the HTTP server
- `tokio` - Async runtime
- `tower` / `tower-http` - Middleware for CORS and tracing
- `clap` - CLI argument parsing
- `dotenvy` - `.env` file support