A modular, high-performance AI chatbot ecosystem built on a modern service-oriented architecture. The system features a powerful Core API Service that handles all AI processing and data management, with support for multiple client platforms including Discord, Telegram, and a Web Dashboard.
- Service-Oriented Architecture: Clean separation between backend logic and client interfaces
- Core API Service (`ryuuko-api`): Standalone FastAPI backend managing all business logic, database operations (MongoDB), and LLM provider communication
- Multi-Platform Clients: Lightweight clients for Discord, Telegram, and Web that communicate with the Core API
- 3-Level Hierarchical Memory with intelligent context management:
  - Level 1 - Sliding Window: the 10 most recent messages for immediate context
  - Level 2 - RAG Retrieval: semantic search retrieving the 10 most relevant past conversations using vector embeddings
  - Level 3 - Contextual Summarization: high-level conversation summaries, automatically generated and updated
- Semantic Search: Uses sentence-transformers for vector embeddings and similarity matching
- Automatic Summarization: Conversation summaries updated every 10 messages
- Modular Provider System: Seamless switching between multiple LLM backends:
  - Google Gemini (via AI Studio)
  - PolyDevs custom models
  - ProxyVN (GPT models)
- Multimodal Conversations: Full support for contextual image analysis with text prompts
- Custom System Prompts: Per-user AI personas and behavior customization
- Account Linking: Unified user accounts across Discord, Telegram, and Web Dashboard
- Credit System: Granular usage tracking and credit management
- Access Levels: Multi-tier access control (Basic, Advanced, Ultimate)
- Web Dashboard: Full-featured React dashboard for account management and monitoring
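The 3-level memory described above can be sketched in a few lines. This is a minimal illustration of the idea, not the project's actual `memory_manager.py`; the hash-based `embed()` below is a stand-in for the sentence-transformers model the Core API uses.

```python
import hashlib
import math


def embed(text: str, dim: int = 16) -> list[float]:
    """Stand-in embedding; the real system uses sentence-transformers."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [byte / 255.0 for byte in digest[:dim]]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def build_context(history: list[str], summary: str, query: str,
                  window: int = 10, top_k: int = 10) -> dict:
    """Assemble all three memory levels for a single LLM request."""
    # Level 1: sliding window of the most recent messages
    recent = history[-window:]
    # Level 2: RAG retrieval over messages older than the window
    older = history[:-window] if len(history) > window else []
    query_vec = embed(query)
    ranked = sorted(older, key=lambda m: cosine(embed(m), query_vec), reverse=True)
    # Level 3: the running conversation summary (regenerated every 10 messages)
    return {"summary": summary, "relevant": ranked[:top_k], "recent": recent}
```

Something like `build_context` would run before each LLM call, with the summary itself refreshed every 10 messages as described above.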
- Python 3.11+ (recommended: Python 3.13)
- Git
- MongoDB database instance (local or cloud-hosted like MongoDB Atlas)
- Node.js 18+ (only if running the Web Dashboard)
- Clone the repository:

  ```bash
  git clone https://github.com/zvwgvx/ryuuko-chatbot
  cd ryuuko-chatbot
  ```

- Set up a virtual environment:

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- Install the packages you need in editable mode:

  ```bash
  # Core API (required)
  pip install -e ./packages/ryuuko-api

  # Discord bot client (optional)
  pip install -e ./packages/discord-bot

  # Telegram bot client (optional)
  pip install -e ./packages/telegram-bot
  ```
Each component requires its own .env file. See docs/SETUP.md for detailed configuration instructions.
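A service can fail fast at startup when a required variable is missing. The helper below is a hedged sketch with assumed variable names; in the real services, python-dotenv's `load_dotenv()` would populate `os.environ` from the component's `.env` file before this check runs.

```python
import os

# Assumed names; each service's actual required set may differ.
REQUIRED_VARS = ("MONGODB_CONNECTION_STRING", "CORE_API_KEY")


def missing_env(environ=os.environ, required=REQUIRED_VARS) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not environ.get(name)]
```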
Required environment variables for the Core API (`packages/ryuuko-api/.env`):

```env
# MongoDB Connection
MONGODB_CONNECTION_STRING=mongodb://localhost:27017/ryuuko

# API Security
CORE_API_KEY=your-secure-random-key-here

# LLM Provider API Keys (configure at least one)
GEMINI_API_KEY=your-gemini-api-key
POLYDEVS_API_KEY=your-polydevs-api-key
PROXYVN_API_KEY=your-proxyvn-api-key

# JWT Secret for Dashboard Authentication
SECRET_KEY=your-jwt-secret-key
```

Discord bot (`packages/discord-bot/.env`):

```env
DISCORD_TOKEN=your-discord-bot-token
CORE_API_URL=http://127.0.0.1:8000
CORE_API_KEY=your-secure-random-key-here  # Must match Core API
```

Telegram bot (`packages/telegram-bot/.env`):

```env
TELEGRAM_TOKEN=your-telegram-bot-token
CORE_API_URL=http://127.0.0.1:8000
CORE_API_KEY=your-secure-random-key-here  # Must match Core API
```

Web Dashboard (`packages/dashboard/.env`):

```env
VITE_API_URL=http://localhost:8000
```

The ecosystem consists of independent services. Run the ones you need:
Core API Service (required):

```bash
python3 -m ryuuko_api
```

Discord Bot (optional):

```bash
python3 -m discord_bot
```

Telegram Bot (optional):

```bash
python3 -m telegram_bot
```

Web Dashboard (optional):

```bash
cd packages/dashboard
npm install
npm run dev
```

Build and run individual services using Docker:

```bash
# Build Core API
docker build --build-arg PACKAGE_NAME=ryuuko-api -t ryuuko-api .

# Build Discord Bot
docker build --build-arg PACKAGE_NAME=discord-bot -t ryuuko-discord-bot .

# Run with environment variables
docker run -d --env-file packages/ryuuko-api/.env ryuuko-api
```

Users interact with the bot through Discord commands (prefix: `.`):
- `.help` - Display available commands
- `.ping` - Check bot latency
- `.model <name>` - Switch AI model
- `.models` - List available models
- `.profile` - View your profile and settings
- `.clearmemory` - Clear conversation history
Admin commands (owner only):
- `.auth <user>` - Authorize a user
- `.addcredit <user> <amount>` - Add credits to a user
- `.addmodel <name> <cost> <level>` - Add a new AI model
See docs/COMMANDS.md for the complete command reference.
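Prefix command routing of the kind listed above can be illustrated without any Discord specifics. This is a sketch of the general technique, not the bot's actual handler code; `parse_command` and its signature are hypothetical.

```python
from __future__ import annotations


def parse_command(message: str, prefix: str = ".") -> tuple[str, list[str]] | None:
    """Split '.model gemini-pro' into ('model', ['gemini-pro']).

    Returns None for messages that are not commands, so ordinary
    chat messages fall through to the AI conversation path.
    """
    if not message.startswith(prefix):
        return None
    body = message[len(prefix):].strip()
    if not body:
        return None
    name, *args = body.split()
    return name, args
```

The bot would look up `name` in a registry of handlers and pass `args` along, with admin commands additionally checking that the caller is the owner.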
Users can chat directly with the bot on Telegram:
- Send text messages for AI conversations
- Send images with captions for multimodal analysis
- Use `/clearmemory` to reset conversation history
- Use `/profile` to view account information
Access the dashboard at http://localhost:5173 (dev mode):
- Register/Login - Create an account or sign in
- Link Accounts - Connect your Discord/Telegram accounts
- Manage Settings - Configure AI model, system prompts, view credits
- View Memory - Browse conversation history across all platforms
```
ryuuko-chatbot/
├── packages/
│   ├── ryuuko-api/              # Core API Service (FastAPI)
│   │   ├── src/
│   │   │   ├── api/             # API endpoints (auth, users, admin, memory)
│   │   │   ├── models/          # Database models
│   │   │   ├── providers/       # LLM provider integrations
│   │   │   ├── memory_manager.py  # Hierarchical memory system
│   │   │   ├── database.py      # MongoDB operations
│   │   │   └── main.py          # FastAPI application
│   │   ├── config/              # Configuration files
│   │   └── instructions/        # System prompts (English/Vietnamese)
│   │
│   ├── discord-bot/             # Discord client
│   │   └── src/
│   │       ├── commands/        # Command handlers
│   │       ├── events/          # Event listeners
│   │       └── utils/           # Queue, logging, embeds
│   │
│   ├── telegram-bot/            # Telegram client
│   │   └── src/
│   │       ├── commands/        # Command handlers
│   │       └── api_client.py
│   │
│   └── dashboard/               # Web Dashboard (React + Vite)
│       └── src/
│           └── components/      # React components
│
├── docs/                        # Documentation
│   ├── ARCHITECTURE.md          # System architecture details
│   ├── SETUP.md                 # Detailed setup guide
│   ├── COMMANDS.md              # Command reference
│   └── DEPLOYMENT.md            # Production deployment guide
│
├── scripts/                     # Utility scripts
└── Dockerfile                   # Multi-stage Docker build
```
The system uses a service-oriented architecture with clear separation of concerns:
- Core API - Centralized backend handling:
  - AI processing with multiple LLM providers
  - Hierarchical memory management (sliding window + RAG + summarization)
  - User authentication and authorization
  - Credit and access control
  - Database operations (MongoDB)
- Client Applications - Platform-specific interfaces:
  - Discord bot (discord.py)
  - Telegram bot (python-telegram-bot)
  - Web dashboard (React)
- Communication - All clients communicate with the Core API via:
  - RESTful HTTP endpoints
  - API key authentication
  - Streaming responses for real-time AI output
See docs/ARCHITECTURE.md for detailed architecture documentation.
- Setup Guide - Detailed installation and configuration instructions
- Architecture - System design and data flow
- Commands - Complete command reference for Discord bot
- Deployment - Production deployment with systemd and Docker
- Contributing - Guidelines for contributing to the project
- Security - Security policies and best practices
- FastAPI - Modern, high-performance web framework
- MongoDB - NoSQL database for flexible data storage
- Sentence Transformers - Semantic embedding for RAG
- OpenAI SDK - Unified interface for LLM providers
- Google Generative AI - Gemini model integration
- discord.py - Discord bot framework
- python-telegram-bot - Telegram bot framework
- React + Vite - Modern web dashboard
- httpx - Async HTTP client for API communication
- Docker - Containerization for easy deployment
- Uvicorn - ASGI server for FastAPI
- python-dotenv - Environment variable management
We welcome contributions! Please see CONTRIBUTING.md for guidelines on:
- Reporting bugs
- Suggesting features
- Setting up development environment
- Code style and testing requirements
- Submitting pull requests
This project is licensed under the MIT License - see the LICENSE file for details.
For security concerns, please refer to our Security Policy. Do not report security vulnerabilities through public GitHub issues.
- Documentation: Check the docs/ directory for detailed guides
- Issues: Report bugs or request features via GitHub Issues
- Discussions: Ask questions in GitHub Discussions
- Built with FastAPI
- Powered by various LLM providers (Google Gemini, PolyDevs, ProxyVN)
- UI components inspired by modern dashboard designs
- Memory system inspired by hierarchical memory research
Current Version: v2.0+ (Service-Oriented Architecture)
Maintainer: Zang Vũ (zvwgvx@polydevs.uk)