feat: Melvin AI — production-grade rewrite with Ollama agents, encrypted dataset, and Rich TUI #3
Draft
Conversation
- config.py: centralised config with 8 Ollama models tuned for a 24 GB / 650 GB machine
- melvin/monitoring/system.py: background system monitor (CPU/RAM/Disk/GPU)
- melvin/core/dataset.py: Fernet-encrypted append-only interaction dataset
- melvin/core/memory.py: sliding-window conversation memory
- melvin/core/agent.py: Ollama-backed agents with streaming + dataset persistence
- melvin/core/router.py: keyword + override-prefix intelligent model router
- melvin/tools/registry.py: 107-tool catalogue with examples and install methods
- melvin/tools/manager.py: apt/pip/snap tool install, check, and run
- melvin/ui/layout.py: Rich TUI layout with ThoughtLogger and ChatDisplay
- melvin/ui/menus.py: all menu screens as Rich renderables
- main.py: full application entry point with Live display and input threading
- pyproject.toml: package metadata and entry point
- requirements.txt: pinned production dependencies
- tests/: 35 passing unit tests (dataset, memory, router, registry)
- .gitignore

Co-authored-by: marcusjenkinscode <264086284+marcusjenkinscode@users.noreply.github.com>
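The "keyword + override prefix" routing mentioned above can be sketched as follows. This is a minimal illustration, not the PR's actual router: the category names, keyword lists, and function signature are assumptions.

```python
import re

# Illustrative categories and keywords -- the real router's tables differ.
CATEGORY_KEYWORDS = {
    "code": ["python", "function", "bug", "compile"],
    "vision": ["image", "photo", "screenshot"],
}

def route(prompt: str, default: str = "general") -> tuple[str, str]:
    """Return (category, cleaned_prompt)."""
    # Explicit override: "!code explain this traceback" forces the category
    # and strips the prefix before the prompt reaches the model.
    m = re.match(r"^!(\w+)\s+(.*)$", prompt, re.DOTALL)
    if m and m.group(1) in CATEGORY_KEYWORDS:
        return m.group(1), m.group(2)
    # Otherwise fall back to word-boundary keyword matching, so "images"
    # matches "image" but "pilgrimage" does not accidentally match.
    for category, words in CATEGORY_KEYWORDS.items():
        if any(re.search(rf"\b{re.escape(w)}\b", prompt, re.IGNORECASE) for w in words):
            return category, prompt
    return default, prompt
```

The override prefix gives the user a deterministic escape hatch when the heuristics guess wrong.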
- tools/manager.py: correct Callable type annotation syntax
- config.py: lowercase quantisation suffix in Ollama model tag
- core/memory.py: add public system_prompt property (encapsulation)
- core/agent.py: use public system_prompt property instead of private attr
- ui/layout.py: fix flush_stream deadlock (release lock before calling add_assistant)
- pyproject.toml: use modern setuptools.build_meta backend

Co-authored-by: marcusjenkinscode <264086284+marcusjenkinscode@users.noreply.github.com>
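The flush_stream deadlock fix follows a standard pattern: copy state out under the lock, then release the lock before calling any method that acquires it again. The class bodies below are a hypothetical stand-in that mirrors the PR's names (ChatDisplay, add_assistant, flush_stream), not its actual implementation.

```python
import threading

class ChatDisplay:
    def __init__(self):
        self._lock = threading.Lock()  # non-reentrant: re-acquiring deadlocks
        self._stream_buffer: list[str] = []
        self.messages: list[tuple[str, str]] = []

    def add_assistant(self, text: str) -> None:
        with self._lock:
            self.messages.append(("assistant", text))

    def flush_stream(self) -> None:
        # Drain the buffer while holding the lock, then RELEASE it before
        # calling add_assistant, which takes the same (non-reentrant) lock.
        # Calling add_assistant inside the `with` block would deadlock.
        with self._lock:
            text = "".join(self._stream_buffer)
            self._stream_buffer.clear()
        if text:
            self.add_assistant(text)  # safe: lock already released
```

An alternative is threading.RLock, but restructuring so the lock is never held across the call keeps the locking discipline obvious.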
Copilot AI changed the title from "[WIP] Add JSON dataset support for AI learning" to "feat: Melvin AI — production-grade rewrite with Ollama agents, encrypted dataset, and Rich TUI" on Mar 11, 2026
The original codebase was a single incomplete script with simulated AI responses, no real LLM integration, no encryption, and a truncated UI implementation. This replaces it with a fully modular, production-ready application.
Architecture
- `config.py` — Centralised config; 8 Ollama models selected for a 24 GB RAM / 650 GB Debian machine (llama3.1:8b through llama3.1:70b-q4, codellama:13b, deepseek-coder:6.7b, phi3:medium, llava:13b)
- `melvin/monitoring/system.py` — Daemon-threaded `SystemMonitor` via psutil + nvidia-smi fallback
- `melvin/core/dataset.py` — Fernet-encrypted append-only JSON-lines store; supports export/import/substring search. Key created with `chmod 600`.
- `melvin/core/memory.py` — Sliding-window `ConversationMemory` per agent session (character-approximate token trimming)
- `melvin/core/agent.py` — `Agent` wraps an Ollama model with streaming token callbacks and auto-persists to the dataset; `AgentRegistry` checks live availability
- `melvin/core/router.py` — Keyword heuristics (word-boundary regex) + `!category` override prefix; respects available RAM when selecting model size
- `melvin/tools/registry.py` — 107-tool catalogue across 20 categories with apt/pip/snap/manual install commands and pre-baked usage examples
- `melvin/tools/manager.py` — Tool lifecycle: check / install (apt → pip → snap → manual notice) / run as subprocess
- `melvin/ui/layout.py` — Full-screen Rich TUI: `ThoughtLogger`, streaming `ChatDisplay`, `MelvinLayout` (3-pane: stats / thoughts / main)
- `melvin/ui/menus.py` — All menu screens as Rich renderables (main, agents, tools, dataset, settings, sysinfo)
- `main.py` — `Live` display loop + daemon input thread; signal handling for clean shutdown

Notable design decisions
Tests
35 unit tests covering dataset (encryption, persistence, export/import, search, wrong-key resilience), conversation memory (trimming, clear, last-n-turns), router (category inference, override prefix stripping), and tool registry (find, search, install method generation).
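The memory-trimming tests, for instance, might look like the sketch below. Both the `ConversationMemory` stand-in and the test are hypothetical illustrations of the described behaviour (character-approximate trimming of oldest turns), not the PR's actual code.

```python
class ConversationMemory:
    """Minimal stand-in: drops oldest turns once a character budget is exceeded."""

    def __init__(self, max_chars: int = 4000):
        self.max_chars = max_chars
        self.turns: list[tuple[str, str]] = []

    def add_turn(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))
        # Character count approximates token count; trim from the front so
        # the most recent context is always preserved.
        while sum(len(u) + len(a) for u, a in self.turns) > self.max_chars:
            self.turns.pop(0)

def test_trimming_drops_oldest_turns():
    mem = ConversationMemory(max_chars=20)
    mem.add_turn("aaaaa", "bbbbb")   # 10 chars
    mem.add_turn("ccccc", "ddddd")   # 20 chars total: still fits
    mem.add_turn("eeeee", "fffff")   # 30 chars: oldest turn is dropped
    assert mem.turns[0] == ("ccccc", "ddddd")
    assert len(mem.turns) == 2
```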
Original prompt
#!/usr/bin/env python3
"""
╔══════════════════════════════════════════════════════════════╗
║ M E L V I N A I ║
║ Self‑Learning AI · Universal Toolkit · Hacker CLI ║
║ 🔥 Debian Edition 🔥 ║
╚══════════════════════════════════════════════════════════════╝
Melvin combines multiple AI models, continuously learns from interactions,
and provides a massive toolkit with on‑demand installation wizards.
All learning is stored in a JSON dataset for lifelong adaptation.
"""
import os
import sys
import json
import random
import subprocess
import shutil
import platform
import time
import threading
import queue
import signal
from datetime import datetime
from typing import Dict, List, Optional, Any, Callable
from pathlib import Path
# ----------------------------------------------------------------------
# DEPENDENCIES – install if missing
# ----------------------------------------------------------------------
try:
    import psutil
except ImportError:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "psutil"])
    import psutil
try:
    from rich.console import Console
    from rich.layout import Layout
    from rich.panel import Panel
    from rich.text import Text
    from rich.live import Live
    from rich.table import Table
    from rich.progress import Progress, SpinnerColumn, BarColumn, TextColumn, TimeElapsedColumn
    from rich import box
    from rich.columns import Columns
    from rich.markdown import Markdown
    from rich.syntax import Syntax
    from rich.traceback import install
    install()
    RICH_AVAILABLE = True
except ImportError:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "rich"])
    from rich.console import Console
    from rich.layout import Layout
    from rich.panel import Panel
    from rich.text import Text
    from rich.live import Live
    from rich.table import Table
    from rich.progress import Progress, SpinnerColumn, BarColumn, TextColumn, TimeElapsedColumn
    from rich import box
    from rich.columns import Columns
    # Re-import the remaining modules too, so both branches expose the same names
    from rich.markdown import Markdown
    from rich.syntax import Syntax
    RICH_AVAILABLE = True
# ----------------------------------------------------------------------
# CONFIGURATION
# ----------------------------------------------------------------------
DATA_DIR = os.path.expanduser("~/.melvin")
DATASET_FILE = os.path.join(DATA_DIR, "dataset.json")
TOOLS_DB_FILE = os.path.join(DATA_DIR, "tools_db.json")
MODELS_DIR = os.path.join(DATA_DIR, "models")
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(MODELS_DIR, exist_ok=True)
console = Console()
# ----------------------------------------------------------------------
# SYSTEM MONITOR (runs in background)
# ----------------------------------------------------------------------
class SystemMonitor:
"""Continuously update system stats."""
def init(self):
self.cpu_percent = 0
self.memory = (0, 0) # used, total
self.disk = (0, 0) # used, total
self.gpu_info = "No GPU detected"
self._stop = threading.Event()
self._thread = threading.Thread(target=self._update_loop, daemon=True)
self._thread.start()