# Lumi Bot

A Telegram bot integrated with a local LLM (LM Studio). It responds to messages, remembers chat history, and speaks in different moods. Triggered by the name "Lumi". User-facing messages are in Russian.

Features:

- Local AI integration
- Smart memory system
- Multiple personalities
- Management commands
## Installation

```bash
# Clone repository
git clone <repo_url>
cd lumi-bot

# Set up virtual environment
python -m venv .venv

# Activate environment
# Linux/macOS:
source .venv/bin/activate
# Windows:
.venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

Copy `.example.env` to `.env` and configure:
```env
BOT_TOKEN="1234567890:ABCDEF..."  # get from @BotFather
OWNER_ID=1212121212               # Telegram user ID allowed to use developer commands
HISTORY_MAX=10                    # number of messages kept in chat history
```

Run the bot:

```bash
python bot.py
```

Important: LM Studio must be running at http://localhost:1234.
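The `HISTORY_MAX` setting bounds how many past messages the bot keeps per chat. A minimal sketch of how such a bounded history could work (the names `histories` and `remember` are hypothetical, not taken from the bot's source):

```python
from collections import deque

HISTORY_MAX = 10  # mirrors the HISTORY_MAX value from .env

# One bounded history per chat; deque(maxlen=...) silently drops the
# oldest entry once the limit is reached.
histories: dict = {}

def remember(chat_id: int, role: str, text: str) -> None:
    """Append a message to the chat's history, keeping at most HISTORY_MAX entries."""
    history = histories.setdefault(chat_id, deque(maxlen=HISTORY_MAX))
    history.append({"role": role, "content": text})

# Example: after 12 messages only the last 10 remain
for i in range(12):
    remember(42, "user", f"message {i}")
print(len(histories[42]))           # 10
print(histories[42][0]["content"])  # message 2
```

Using `deque(maxlen=...)` keeps the trimming automatic, so the context sent to the model never grows beyond the configured window.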
Recommended: install the `mcp/web-search` tool for enhanced model capabilities.
## Commands

| Command | Description |
|---|---|
| `/lumi` | Info and project links |
| `/commands` | Command reference |
| `/ping` | Check response time |
| `/model` | Show active model |
| `/prompt` | Show system prompt |
## Memory

| Command | Description |
|---|---|
| `/memorize <text>` | Save a note |
| `/show_memory` | List saved notes |
| `/forget` | Delete all notes |
| `/forget <number>` | Delete a specific note |
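The memory commands map naturally onto a small note store. A sketch of the idea (function and variable names are hypothetical, and replies are shown in English here for readability, while the bot itself answers in Russian):

```python
from typing import Optional

# Hypothetical in-memory note store mirroring /memorize, /show_memory, /forget.
notes: list = []

def memorize(text: str) -> str:
    notes.append(text)
    return f"Saved note #{len(notes)}"

def show_memory() -> str:
    if not notes:
        return "No notes saved"
    # Notes are numbered from 1, matching /forget <number>
    return "\n".join(f"{i}. {note}" for i, note in enumerate(notes, start=1))

def forget(number: Optional[int] = None) -> str:
    # /forget with no argument clears everything; /forget <number> deletes one note
    if number is None:
        notes.clear()
        return "All notes deleted"
    if 1 <= number <= len(notes):
        removed = notes.pop(number - 1)
        return f"Deleted note: {removed}"
    return "No such note"
```

A persistent bot would back this with a file or database keyed by chat ID, but the command semantics stay the same.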
## Moods

| Command | Description |
|---|---|
| `/mood` | Show current mood |
| `/mood <mood>` | Set mood |
| `/mood list` | List available moods |

Available moods: `friendly`, `sarcastic`, `formal`, `funny`, `aggressive`, `horny`, `uncensored`, `shy`
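Each mood presumably corresponds to a different system prompt sent to the model. A minimal sketch of such a mapping (the prompt texts here are placeholders, not the bot's actual prompts, which `/prompt` displays):

```python
# Hypothetical mood-to-system-prompt mapping; real prompt texts live in the bot's source.
MOOD_PROMPTS = {
    "friendly": "You are Lumi, a warm and helpful assistant.",
    "sarcastic": "You are Lumi; answer with dry sarcasm.",
    "formal": "You are Lumi; keep a strictly formal tone.",
    "funny": "You are Lumi; sprinkle jokes into your answers.",
    "aggressive": "You are Lumi; be blunt and combative.",
    "horny": "You are Lumi; be flirtatious.",
    "uncensored": "You are Lumi; answer without content restrictions.",
    "shy": "You are Lumi; be timid and brief.",
}

DEFAULT_MOOD = "friendly"

def get_system_prompt(mood: str) -> str:
    # Unknown moods fall back to the default instead of failing
    return MOOD_PROMPTS.get(mood, MOOD_PROMPTS[DEFAULT_MOOD])
```

Falling back to a default mood keeps `/mood <mood>` forgiving of typos.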
## Reset

| Command | Description |
|---|---|
| `/reset` | Clear chat memory, history, and reset mood |
## Configuration

- API endpoint: `http://localhost:1234/v1/chat/completions`
- Default model: `llama-3.1-8b-instruct`
- Request timeout: 60 seconds
- Temperature: 0.7
- Max tokens: 300

## Localization and customization

- User-facing messages: Russian
- Developer resources: English
- `HISTORY_MAX`: adjustable in `.env` for context window size
- `PROJECT_LINKS`: customizable in source code for your own references
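Putting the documented defaults together, a request to the LM Studio endpoint could look like this. The endpoint, model, temperature, max tokens, and timeout come from the configuration above; the function names are illustrative, not the bot's actual API:

```python
import json
import urllib.request

API_URL = "http://localhost:1234/v1/chat/completions"

def build_payload(messages: list) -> dict:
    """Assemble an OpenAI-compatible chat completion request with the documented defaults."""
    return {
        "model": "llama-3.1-8b-instruct",
        "messages": messages,
        "temperature": 0.7,
        "max_tokens": 300,
    }

def ask_lumi(messages: list) -> str:
    """Send the chat history to the local LM Studio server and return the reply text."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(messages)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # 60-second timeout, matching the documented default
    with urllib.request.urlopen(request, timeout=60) as response:
        data = json.load(response)
    return data["choices"][0]["message"]["content"]
```

Usage: pass the mood's system prompt followed by the trimmed chat history, e.g. `ask_lumi([{"role": "system", "content": prompt}, {"role": "user", "content": "привет, Lumi"}])`. This only works while LM Studio is serving a model on port 1234.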