Universal LLM interface with multi-provider support, real-time chat, and conversation management. Built with React/Next.js frontend and FastAPI backend.
Production-ready BYOK (Bring Your Own Key) AI chat platform.
- Multi-Provider Support: OpenAI, Anthropic Claude, Google Gemini, and more
- Real-Time Chat: Interactive conversations with context preservation
- Conversation Management: Organize, search, archive, and manage chat history
- Modern UI: Responsive design with Tailwind CSS and Radix UI
- High Performance: FastAPI backend with async operations
- Type Safety: Full TypeScript frontend and Python type hints
- Auto Documentation: Swagger/OpenAPI documentation generation
- Developer Friendly: Rich CLI, comprehensive logging, and debugging tools
- User Privacy: Users control their own API keys and data
- No Platform Rate Limits: Usage is bounded only by each provider's own limits
- Secure Storage: API keys stored securely in the user's keyring/database
- Simple Pricing: Charge for platform access, not API usage
- Easy Setup: Users add API keys through an intuitive web interface
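To make the BYOK model concrete, here is a minimal illustrative sketch (names and storage are hypothetical; the real platform keeps keys in a keyring or database, not in memory): each user's request is served with that user's own provider key, never a shared platform key.

```python
# Illustrative BYOK key lookup; class and method names are hypothetical.
class KeyStore:
    def __init__(self):
        # (user_id, provider) -> api_key; a real store would be a keyring/DB
        self._keys = {}

    def add_key(self, user_id: str, provider: str, api_key: str) -> None:
        """Called when a user adds a key through the web interface."""
        self._keys[(user_id, provider)] = api_key

    def key_for(self, user_id: str, provider: str) -> str:
        """Resolve the key to use for this user's request."""
        try:
            return self._keys[(user_id, provider)]
        except KeyError:
            raise LookupError(f"{user_id} has not added a key for {provider}")

store = KeyStore()
store.add_key("alice", "openai", "sk-...")
print(store.key_for("alice", "openai"))  # Alice's own key, not a platform key
```

Because every call is billed to the key's owner, the platform itself imposes no rate limits and never holds a shared provider quota.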
```
Leaf AI/
├── frontend/           # Next.js 15 + TypeScript + Tailwind CSS
│   ├── src/app/        # Next.js App Router
│   ├── src/components/ # React components
│   └── src/lib/        # API client and utilities
├── leaf_ai/            # Python backend package
│   ├── adapters/       # AI provider integrations
│   ├── core/           # Business logic
│   ├── web/            # FastAPI web server
│   └── cli/            # Command-line interface
└── docs/               # Documentation (planned)
```
- Copy the production environment template:

  ```bash
  cp env.production.template .env
  ```

- Configure environment variables:

  - `SUPABASE_URL`, `SUPABASE_KEY` - Database configuration
  - `FRONTEND_URL` - Your frontend domain (for CORS)

- Database setup:

  ```bash
  # No additional database setup required
  # Users will add their own API keys through the web interface
  ```
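For reference, the backend can read these variables at startup. A minimal sketch, assuming plain `os.environ` access (the real app may load settings differently, e.g. via a settings library):

```python
import os

# Illustrative settings loading; variable names match the template above.
SUPABASE_URL = os.environ.get("SUPABASE_URL", "")
SUPABASE_KEY = os.environ.get("SUPABASE_KEY", "")
FRONTEND_URL = os.environ.get("FRONTEND_URL", "http://localhost:3000")  # allowed CORS origin

# Fail loudly if required database settings are absent
missing = [name for name in ("SUPABASE_URL", "SUPABASE_KEY") if not os.environ.get(name)]
if missing:
    print("Warning: missing required env vars:", ", ".join(missing))
```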
The `render.yaml` configuration is production-ready. Simply:
- Connect your GitHub repository to Render
- Add environment variables in the Render dashboard (all variables from the template)
- Deploy automatically: Render will use the configuration in `render.yaml`
```bash
# Install dependencies
pip install -r requirements.txt

# Start production server
python -m leaf_ai web --host 0.0.0.0 --port 8000
```

- Frontend: Node.js 18+, npm/yarn/pnpm
- Backend: Python 3.9+, pip/poetry
- API Keys: OpenAI, Anthropic, and/or Google API keys
```bash
# Install backend dependencies
pip install -e .

# Set up environment variables
cp .env.example .env
# Edit .env with your API keys

# Start the backend server
python -m leaf_ai web --host 127.0.0.1 --port 8000 --reload
```

```bash
# Navigate to frontend directory
cd frontend

# Install dependencies
npm install

# Set up environment variables
cp .env.example .env.local
# Configure NEXT_PUBLIC_API_URL=http://127.0.0.1:8000

# Start the development server
npm run dev
```

- Frontend: http://localhost:3000
- Backend API Docs: http://127.0.0.1:8000/docs
- Health Check: http://127.0.0.1:8000/health
- Frontend README - Next.js application setup and development
- Backend README - FastAPI server and CLI usage
- Interactive Docs: http://127.0.0.1:8000/docs (when server is running)
- Alternative Docs: http://127.0.0.1:8000/redoc
```
# Chat with AI providers
POST /api/chat

# Manage conversations
GET  /api/conversations
POST /api/conversations
GET  /api/conversations/{id}

# Provider information
GET  /api/providers

# WebSocket for live updates
WS   /ws/{client_id} (with JWT)
```

Backend (`.env`):

```bash
# API Keys
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_claude_key
GOOGLE_API_KEY=your_gemini_key

# Server Settings
HOST=127.0.0.1
PORT=8000
DEBUG=true
```

Frontend (`.env.local`):

```bash
NEXT_PUBLIC_API_URL=http://127.0.0.1:8000
```

Each AI provider can be configured with:
- API keys and authentication
- Default models and parameters
- Rate limiting and timeouts
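As an illustration of those settings grouped together (the field names below are hypothetical; the actual configuration schema may differ), a per-provider configuration might look like:

```python
from dataclasses import dataclass

@dataclass
class ProviderConfig:
    """Hypothetical per-provider settings; the real schema may differ."""
    api_key: str                     # authentication
    default_model: str               # model used when none is specified
    temperature: float = 0.7         # default sampling parameter
    timeout_s: float = 30.0          # per-request timeout
    max_requests_per_min: int = 60   # client-side rate limit

openai_cfg = ProviderConfig(
    api_key="your_openai_key",
    default_model="gpt-4o",
)
print(openai_cfg.default_model)
```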
| Provider | Models | Features | Status |
|---|---|---|---|
| OpenAI | GPT-4o, GPT-4, GPT-3.5-turbo | Function calling, JSON mode | ✅ Active |
| Anthropic | Claude-3.5-sonnet, Claude-3-haiku | Long context, safety focus | ✅ Active |
| Google | Gemini-1.5-pro, Gemini-1.5-flash | Multimodal, large context | ✅ Active |
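The `/api/chat` payload can also be built from Python using only the standard library. A sketch under stated assumptions: the endpoint and JSON fields match the curl example in this section, while the helper function name is ours; actually sending the request requires a running backend.

```python
import json
import urllib.request

API_URL = "http://127.0.0.1:8000/api/chat"  # local dev server

def build_chat_request(message: str, provider: str, model: str) -> urllib.request.Request:
    """Build a POST request with the same payload shape as the curl example."""
    payload = json.dumps({
        "message": message,
        "provider": provider,
        "model": model,
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hello, how are you?", "openai", "gpt-4o")
# Sending requires the backend to be running:
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read()))
print(req.get_method(), req.full_url)
```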
```bash
# Send a message
curl -X POST "http://127.0.0.1:8000/api/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Hello, how are you?",
    "provider": "openai",
    "model": "gpt-4o"
  }'
```

```bash
# Start web server
python -m leaf_ai web --port 8000

# Future CLI features (planned)
python -m leaf_ai chat "Explain quantum computing"
python -m leaf_ai conversations list
python -m leaf_ai config show
```

```typescript
import apiClient from '@/lib/api';

// Send a chat message
const response = await apiClient.sendMessage('Hello!', 'openai', 'gpt-4o');

// Get conversations
const conversations = await apiClient.getConversations('active');
```

```bash
# Clone repository
git clone <repository-url>
cd leaf-ai

# Backend development
pip install -e ".[dev]"
python -m pytest

# Frontend development
cd frontend
npm install
npm run dev
npm run lint
```

- Backend: Black, isort, mypy, type hints
- Frontend: ESLint, Prettier, TypeScript strict mode
- Documentation: Comprehensive docstrings and comments
```bash
# Backend tests
python -m pytest leaf_ai/tests/

# Frontend tests (planned)
cd frontend
npm run test
```

- Multi-provider AI chat integration
- Real-time conversation interface
- WebSocket-based live updates
- Conversation management (CRUD)
- REST API with OpenAPI docs
- Type-safe frontend and backend
- Responsive web interface
- WebSocket support for real-time updates
- Configuration management UI
- Error tracking and diagnostics
- Conversation export/import
- Local model support (Ollama, LocalAI)
- Voice chat capabilities
- File upload and processing
- Team collaboration features
- Mobile application
- Docker deployment
- Database integration
- Advanced prompt management
```bash
# Backend
python -m leaf_ai web --reload

# Frontend
cd frontend && npm run dev
```

```bash
# Backend
uvicorn leaf_ai.web.api:app --host 0.0.0.0 --port 8000 --workers 4

# Frontend
cd frontend
npm run build
npm run start
```

```bash
# Build and run with Docker Compose
docker-compose up --build

# Individual services
docker build -t leaf-ai-backend .
docker build -t leaf-ai-frontend ./frontend
```

We welcome contributions! Please see our contributing guidelines:
- Fork the repository and create a feature branch
- Follow code style guidelines for your target component
- Add tests for new functionality
- Update documentation for any API changes
- Submit a pull request with clear description
- Check existing issues or create a new one
- Fork and clone the repository
- Set up development environment
- Make changes with proper testing
- Ensure all linters pass
- Submit pull request
Leaf AI - Making AI conversations more accessible and manageable.