Leaf AI

Universal LLM interface with multi-provider support, real-time chat, and conversation management. Built with React/Next.js frontend and FastAPI backend.

🔑 Production-ready BYOK (Bring Your Own Key) AI chat platform.

🌟 Features

Core Platform

  • 🤖 Multi-Provider Support: OpenAI, Anthropic Claude, Google Gemini, and more
  • 💬 Real-Time Chat: Interactive conversations with context preservation
  • 📊 Conversation Management: Organize, search, archive, and manage chat history
  • 🎨 Modern UI: Responsive design with Tailwind CSS and Radix UI
  • ⚡ High Performance: FastAPI backend with async operations
  • 🔒 Type Safety: Full TypeScript frontend and Python type hints
  • 📚 Auto Documentation: Swagger/OpenAPI documentation generation
  • 🛠️ Developer Friendly: Rich CLI, comprehensive logging, and debugging tools

🔑 BYOK Business Model

  • 🔒 User Privacy: Users control their own API keys and data
  • ⚡ No Platform Rate Limits: Usage is bounded only by each provider's own limits
  • 🛡️ Secure Storage: API keys stored securely in the user's keyring/database
  • 🎯 Simple Pricing: Charge for platform access, not API usage
  • 🚀 Easy Setup: Users add API keys through an intuitive web interface
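
The "session key first, environment variable fallback" lookup can be sketched as follows. `ProviderKeyStore` and its methods are illustrative assumptions, not the actual Leaf AI classes; a real implementation would persist to the user's keyring or database instead of a dict:

```python
import os
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ProviderKeyStore:
    """Hypothetical per-user key store sketch (not the actual Leaf AI class)."""
    _keys: Dict[str, str] = field(default_factory=dict)

    def set_key(self, provider: str, api_key: str) -> None:
        # In a real deployment this would write to the user's keyring or database.
        self._keys[provider] = api_key

    def get_key(self, provider: str) -> Optional[str]:
        # Prefer a key the user added through the UI; fall back to an
        # environment variable such as OPENAI_API_KEY for local development.
        return self._keys.get(provider) or os.environ.get(f"{provider.upper()}_API_KEY")
```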

๐Ÿ—๏ธ Architecture

Leaf AI/
โ”œโ”€โ”€ frontend/              # Next.js 15 + TypeScript + Tailwind CSS
โ”‚   โ”œโ”€โ”€ src/app/          # Next.js App Router
โ”‚   โ”œโ”€โ”€ src/components/   # React components
โ”‚   โ””โ”€โ”€ src/lib/          # API client and utilities
โ”œโ”€โ”€ leaf_ai/              # Python backend package
โ”‚   โ”œโ”€โ”€ adapters/         # AI provider integrations
โ”‚   โ”œโ”€โ”€ core/             # Business logic
โ”‚   โ”œโ”€โ”€ web/              # FastAPI web server
โ”‚   โ””โ”€โ”€ cli/              # Command-line interface
โ””โ”€โ”€ docs/                 # Documentation (planned)
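
The adapters/ package isolates each provider behind a common interface. A minimal sketch of what such an adapter contract could look like; the names here are illustrative, not the actual Leaf AI API:

```python
from abc import ABC, abstractmethod

class ProviderAdapter(ABC):
    """Hypothetical base class for provider integrations in leaf_ai/adapters/."""
    name: str

    @abstractmethod
    def chat(self, message: str, model: str) -> str:
        """Send a message to the provider and return the reply text."""

class EchoAdapter(ProviderAdapter):
    """Toy adapter for demonstration: echoes the input instead of calling an API."""
    name = "echo"

    def chat(self, message: str, model: str) -> str:
        return f"[{model}] {message}"
```

A real adapter (e.g. for OpenAI or Anthropic) would implement `chat` with the provider's SDK while the rest of the app only sees `ProviderAdapter`.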

🚀 Production Deployment

Environment Setup

  1. Copy the production environment template:

    cp env.production.template .env
  2. Configure environment variables:

    • SUPABASE_URL, SUPABASE_KEY - Database configuration
    • FRONTEND_URL - Your frontend domain (for CORS)
  3. Database setup:

    # No additional database setup required
    # Users will add their own API keys through the web interface

Deploy to Render

The render.yaml configuration is production-ready. Simply:

  1. Connect your GitHub repository to Render
  2. Add environment variables in Render dashboard (all variables from the template)
  3. Deploy automatically - Render will use the configuration in render.yaml

Manual Deployment

# Install dependencies
pip install -r requirements.txt

# Start production server
python -m leaf_ai web --host 0.0.0.0 --port 8000

🚀 Quick Start (Development)

Prerequisites

  • Frontend: Node.js 18+, npm/yarn/pnpm
  • Backend: Python 3.9+, pip/poetry
  • API Keys: OpenAI, Anthropic, and/or Google API keys

1. Backend Setup

# Install backend dependencies
pip install -e .

# Set up environment variables
cp .env.example .env
# Edit .env with your API keys

# Start the backend server
python -m leaf_ai web --host 127.0.0.1 --port 8000 --reload

2. Frontend Setup

# Navigate to frontend directory
cd frontend

# Install dependencies
npm install

# Set up environment variables
cp .env.example .env.local
# Configure NEXT_PUBLIC_API_URL=http://127.0.0.1:8000

# Start the development server
npm run dev

3. Access the Application

With both servers running, open http://localhost:3000 for the web interface and http://127.0.0.1:8000/docs for the interactive API documentation (the default Next.js and backend ports; adjust if you changed them).

📖 Documentation

Component Documentation

API Documentation

Key Endpoints

# Chat with AI providers
POST /api/chat

# Manage conversations
GET  /api/conversations
POST /api/conversations
GET  /api/conversations/{id}

# Provider information
GET  /api/providers

# WebSocket for live updates
WS   /ws/{client_id} (with JWT)

🔧 Configuration

Environment Variables

Backend (.env):

# API Keys
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_claude_key
GOOGLE_API_KEY=your_gemini_key

# Server Settings
HOST=127.0.0.1
PORT=8000
DEBUG=true

Frontend (.env.local):

NEXT_PUBLIC_API_URL=http://127.0.0.1:8000

Provider Configuration

Each AI provider can be configured with:

  • API keys and authentication
  • Default models and parameters
  • Rate limiting and timeouts
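
A per-provider configuration like the bullets above could be modeled as a small dataclass. The field names and defaults here are illustrative assumptions, not the actual Leaf AI schema:

```python
from dataclasses import dataclass

@dataclass
class ProviderConfig:
    """Hypothetical per-provider settings: auth, defaults, and limits."""
    api_key: str
    default_model: str
    timeout_s: float = 30.0            # per-request timeout
    max_requests_per_minute: int = 60  # client-side rate limit

# Example: an OpenAI entry with default limits
openai_cfg = ProviderConfig(api_key="sk-demo", default_model="gpt-4o")
```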

🤖 Supported AI Providers

| Provider  | Models                            | Features                    | Status    |
|-----------|-----------------------------------|-----------------------------|-----------|
| OpenAI    | GPT-4o, GPT-4, GPT-3.5-turbo      | Function calling, JSON mode | ✅ Active |
| Anthropic | Claude-3.5-sonnet, Claude-3-haiku | Long context, safety focus  | ✅ Active |
| Google    | Gemini-1.5-pro, Gemini-1.5-flash  | Multimodal, large context   | ✅ Active |

💡 Usage Examples

Chat API

# Send a message
curl -X POST "http://127.0.0.1:8000/api/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Hello, how are you?",
    "provider": "openai",
    "model": "gpt-4o"
  }'
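
The same request can be built in Python with only the standard library. `build_chat_request` is a hypothetical helper for illustration; the URL and payload mirror the curl example above:

```python
import json
import urllib.request

def build_chat_request(message: str, provider: str, model: str,
                       base_url: str = "http://127.0.0.1:8000") -> urllib.request.Request:
    """Build a POST request for the /api/chat endpoint."""
    payload = json.dumps({
        "message": message,
        "provider": provider,
        "model": model,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (with the backend running):
# with urllib.request.urlopen(build_chat_request("Hello!", "openai", "gpt-4o")) as resp:
#     print(resp.read().decode())
```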

Python CLI

# Start web server
python -m leaf_ai web --port 8000

# Future CLI features (planned)
python -m leaf_ai chat "Explain quantum computing"
python -m leaf_ai conversations list
python -m leaf_ai config show

Frontend Integration

import apiClient from '@/lib/api';

// Send a chat message
const response = await apiClient.sendMessage('Hello!', 'openai', 'gpt-4o');

// Get conversations
const conversations = await apiClient.getConversations('active');

🛠️ Development

Setup Development Environment

# Clone repository
git clone <repository-url>
cd leaf-ai

# Backend development
pip install -e ".[dev]"
python -m pytest

# Frontend development
cd frontend
npm install
npm run dev
npm run lint

Code Style

  • Backend: Black, isort, mypy, type hints
  • Frontend: ESLint, Prettier, TypeScript strict mode
  • Documentation: Comprehensive docstrings and comments

Testing

# Backend tests
python -m pytest leaf_ai/tests/

# Frontend tests (planned)
cd frontend
npm run test

📊 Project Status

Current Features ✅

  • Multi-provider AI chat integration
  • Real-time conversation interface
  • WebSocket-based live updates
  • Conversation management (CRUD)
  • REST API with OpenAPI docs
  • Type-safe frontend and backend
  • Responsive web interface

In Progress 🔄

  • Configuration management UI
  • Error tracking and diagnostics
  • Conversation export/import

Planned Features 🔮

  • Local model support (Ollama, LocalAI)
  • Voice chat capabilities
  • File upload and processing
  • Team collaboration features
  • Mobile application
  • Docker deployment
  • Database integration
  • Advanced prompt management

🚀 Deployment

Development

# Backend
python -m leaf_ai web --reload

# Frontend
cd frontend && npm run dev

Production

# Backend
uvicorn leaf_ai.web.api:app --host 0.0.0.0 --port 8000 --workers 4

# Frontend
cd frontend
npm run build
npm run start

Docker (Planned)

# Build and run with Docker Compose
docker-compose up --build

# Individual services
docker build -t leaf-ai-backend .
docker build -t leaf-ai-frontend ./frontend
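
Since Docker support is still planned, there is no compose file yet; a minimal docker-compose.yml sketch, assuming the backend/frontend layout and default ports used above, might look like:

```yaml
services:
  backend:
    build: .
    ports:
      - "8000:8000"
    env_file: .env
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    environment:
      - NEXT_PUBLIC_API_URL=http://backend:8000
```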

๐Ÿค Contributing

We welcome contributions! Please see our contributing guidelines:

  1. Fork the repository and create a feature branch
  2. Follow code style guidelines for your target component
  3. Add tests for new functionality
  4. Update documentation for any API changes
  5. Submit a pull request with clear description

Development Workflow

  1. Check existing issues or create a new one
  2. Fork and clone the repository
  3. Set up development environment
  4. Make changes with proper testing
  5. Ensure all linters pass
  6. Submit pull request

Leaf AI - Making AI conversations more accessible and manageable.
