feat: Melvin AI — production-grade rewrite with Ollama agents, encrypted dataset, and Rich TUI#3

Draft
Copilot wants to merge 3 commits into main from copilot/add-json-dataset-support
Conversation

Copilot AI commented Mar 11, 2026

The original codebase was a single incomplete script with simulated AI responses, no real LLM integration, no encryption, and a truncated UI implementation. This replaces it with a fully modular, production-ready application.

Architecture

  • config.py — Centralised config; 8 Ollama models selected for a 24 GB RAM / 650 GB Debian machine (llama3.1:8b through llama3.1:70b-q4, codellama:13b, deepseek-coder:6.7b, phi3:medium, llava:13b)
  • melvin/monitoring/system.py — Daemon-threaded SystemMonitor via psutil + nvidia-smi fallback
  • melvin/core/dataset.py — Fernet-encrypted append-only JSON-lines store; supports export/import/substring search. Key created with chmod 600.
  • melvin/core/memory.py — Sliding-window ConversationMemory per agent session (character-approximate token trimming)
  • melvin/core/agent.py — Agent wraps an Ollama model with streaming token callbacks and auto-persists to dataset; AgentRegistry checks live availability
  • melvin/core/router.py — Keyword heuristics (word-boundary regex) + !category override prefix; respects available RAM when selecting model size
  • melvin/tools/registry.py — 107-tool catalogue across 20 categories with apt/pip/snap/manual install commands and pre-baked usage examples
  • melvin/tools/manager.py — Tool lifecycle: check/install (apt → pip → snap → manual notice) / run as subprocess
  • melvin/ui/layout.py — Full-screen Rich TUI: ThoughtLogger, streaming ChatDisplay, MelvinLayout (3-pane: stats / thoughts / main)
  • melvin/ui/menus.py — All menu screens as Rich renderables (main, agents, tools, dataset, settings, sysinfo)
  • main.py — Live display loop + daemon input thread; signal handling for clean shutdown
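The router's word-boundary keyword matching plus `!category` override can be sketched as below. This is a minimal illustration, not the actual `melvin/core/router.py`; the keyword map, function name, and return shape are assumptions.

```python
import re

# Hypothetical category -> keyword map; the real table lives in the router module
CATEGORY_KEYWORDS = {
    "code": ["code", "function", "rust", "python", "debug"],
    "vision": ["image", "screenshot", "photo"],
}

def route(prompt: str, default: str = "general"):
    """Return (category, cleaned_prompt).

    An explicit "!category " prefix wins and is stripped; otherwise the first
    category whose keywords match on a word boundary is chosen.
    """
    m = re.match(r"^!(\w+)\s+(.*)$", prompt)
    if m and m.group(1) in CATEGORY_KEYWORDS:
        return m.group(1), m.group(2)
    lowered = prompt.lower()
    for category, words in CATEGORY_KEYWORDS.items():
        if any(re.search(rf"\b{re.escape(w)}\b", lowered) for w in words):
            return category, prompt
    return default, prompt
```

Word-boundary matching (`\b`) avoids false hits such as "decode" triggering the "code" category.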

Notable design decisions

# Agent routing: keyword heuristics with explicit override
agent, clean_prompt = router.route("!code write a binary search in Rust")
# → selects best available code-category agent, strips prefix

# Encrypted dataset: each line is an independent Fernet token
ds.append(input_text="...", response="...", model="codellama:13b", agent="code")
records = ds.load_all()   # decrypts on read; corrupted lines surfaced not crashed

# Streaming response forwarded token-by-token to TUI
agent.chat(prompt, stream_cb=lambda token: chat_display.append_stream_chunk(token))
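The character-approximate token trimming in `melvin/core/memory.py` could look roughly like this sketch; the 4-chars-per-token heuristic, class shape, and keep-the-latest-turn policy are illustrative assumptions, not the shipped implementation.

```python
class ConversationMemory:
    """Sliding-window turn history, trimmed to an approximate token budget."""

    CHARS_PER_TOKEN = 4  # rough heuristic: ~4 characters per token

    def __init__(self, max_tokens: int = 2048):
        self.max_tokens = max_tokens
        self.turns = []  # list of (role, text) tuples, oldest first

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))
        self._trim()

    def _trim(self) -> None:
        # Drop oldest turns until within budget; always keep the newest turn
        def cost(turn):
            return len(turn[1]) // self.CHARS_PER_TOKEN + 1
        while len(self.turns) > 1 and sum(cost(t) for t in self.turns) > self.max_tokens:
            self.turns.pop(0)
```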

Tests

35 unit tests covering dataset (encryption, persistence, export/import, search, wrong-key resilience), conversation memory (trimming, clear, last-n-turns), router (category inference, override prefix stripping), and tool registry (find, search, install method generation).
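The wrong-key resilience tests lean on Fernet being authenticated encryption: decrypting a token with the wrong key raises `InvalidToken` rather than yielding garbage, so each dataset line can be skipped cleanly. A minimal sketch (the `decrypt_line` helper is hypothetical):

```python
import json
from cryptography.fernet import Fernet, InvalidToken

key_a, key_b = Fernet.generate_key(), Fernet.generate_key()
record = {"input": "hi", "model": "codellama:13b"}
line = Fernet(key_a).encrypt(json.dumps(record).encode())

def decrypt_line(token: bytes, key: bytes):
    """Return the decoded record, or None if the token is corrupt or keyed wrong."""
    try:
        return json.loads(Fernet(key).decrypt(token))
    except (InvalidToken, ValueError):
        return None  # surfaced to the caller, never a crash
```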

Original prompt

#!/usr/bin/env python3
"""
╔══════════════════════════════════════════════════════════════╗
║                     M E L V I N   A I                        ║
║      Self‑Learning AI · Universal Toolkit · Hacker CLI       ║
║                   🔥 Debian Edition 🔥                       ║
╚══════════════════════════════════════════════════════════════╝

Melvin combines multiple AI models, continuously learns from interactions,
and provides a massive toolkit with on‑demand installation wizards.
All learning is stored in a JSON dataset for lifelong adaptation.
"""

import os
import sys
import json
import random
import subprocess
import shutil
import platform
import time
import threading
import queue
import signal
from datetime import datetime
from typing import Dict, List, Optional, Any, Callable
from pathlib import Path

# ----------------------------------------------------------------------
# DEPENDENCIES – install if missing
# ----------------------------------------------------------------------

try:
    import psutil
except ImportError:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "psutil"])
    import psutil

try:
    from rich.console import Console
    from rich.layout import Layout
    from rich.panel import Panel
    from rich.text import Text
    from rich.live import Live
    from rich.table import Table
    from rich.progress import Progress, SpinnerColumn, BarColumn, TextColumn, TimeElapsedColumn
    from rich import box
    from rich.columns import Columns
    from rich.markdown import Markdown
    from rich.syntax import Syntax
    from rich.traceback import install
    install()
    RICH_AVAILABLE = True
except ImportError:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "rich"])
    from rich.console import Console
    from rich.layout import Layout
    from rich.panel import Panel
    from rich.text import Text
    from rich.live import Live
    from rich.table import Table
    from rich.progress import Progress, SpinnerColumn, BarColumn, TextColumn, TimeElapsedColumn
    from rich import box
    from rich.columns import Columns
    RICH_AVAILABLE = True

# ----------------------------------------------------------------------
# CONFIGURATION
# ----------------------------------------------------------------------

DATA_DIR = os.path.expanduser("~/.melvin")
DATASET_FILE = os.path.join(DATA_DIR, "dataset.json")
TOOLS_DB_FILE = os.path.join(DATA_DIR, "tools_db.json")
MODELS_DIR = os.path.join(DATA_DIR, "models")
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(MODELS_DIR, exist_ok=True)

console = Console()

# ----------------------------------------------------------------------
# SYSTEM MONITOR (runs in background)
# ----------------------------------------------------------------------

class SystemMonitor:
    """Continuously update system stats."""

    def __init__(self):
        self.cpu_percent = 0
        self.memory = (0, 0)  # used, total
        self.disk = (0, 0)    # used, total
        self.gpu_info = "No GPU detected"
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._update_loop, daemon=True)
        self._thread.start()

    def _update_loop(self):
        while not self._stop.is_set():
            self.cpu_percent = psutil.cpu_percent(interval=1)
            mem = psutil.virtual_memory()
            self.memory = (mem.used, mem.total)
            disk = psutil.disk_usage('/')
            self.disk = (disk.used, disk.total)
            # Try to get GPU info (nvidia-smi)
            try:
                result = subprocess.run(
                    ['nvidia-smi', '--query-gpu=utilization.gpu,memory.used,memory.total', '--format=csv,noheader,nounits'],
                    capture_output=True, text=True, timeout=2
                )
                if result.returncode == 0:
                    lines = result.stdout.strip().split('\n')
                    if lines and lines[0]:
                        gpu_util, mem_used, mem_total = lines[0].split(',')
                        self.gpu_info = f"GPU: {gpu_util.strip()}% | VRAM: {mem_used.strip()}/{mem_total.strip()} MB"
                else:
                    self.gpu_info = "GPU not available"
            except (FileNotFoundError, subprocess.TimeoutExpired, OSError):
                self.gpu_info = "GPU not available"
            time.sleep(2)

    def stop(self):
        self._stop.set()

    def get_stats_panel(self) -> Panel:
        """Return a Rich Panel with current stats."""
        mem_used_gb = self.memory[0] / (1024**3)
        mem_total_gb = self.memory[1] / (1024**3)
        disk_used_gb = self.disk[0] / (1024**3)
        disk_total_gb = self.disk[1] / (1024**3)

        text = Text()
        text.append(f"CPU: {self.cpu_percent}%", style="bold green")
        text.append(f"\nRAM: {mem_used_gb:.1f}/{mem_total_gb:.1f} GB", style="bold cyan")
        text.append(f"\nDisk: {disk_used_gb:.1f}/{disk_total_gb:.1f} GB", style="bold magenta")
        text.append(f"\n{self.gpu_info}", style="bold yellow...


Copilot AI and others added 2 commits March 11, 2026 17:19
- config.py: centralised config with 8 Ollama models tuned for 24GB/650GB machine
- melvin/monitoring/system.py: background system monitor (CPU/RAM/Disk/GPU)
- melvin/core/dataset.py: Fernet-encrypted append-only interaction dataset
- melvin/core/memory.py: sliding-window conversation memory
- melvin/core/agent.py: Ollama-backed agents with streaming + dataset persistence
- melvin/core/router.py: keyword + override prefix intelligent model router
- melvin/tools/registry.py: 107-tool catalogue with examples and install methods
- melvin/tools/manager.py: apt/pip/snap tool install, check, and run
- melvin/ui/layout.py: Rich TUI layout with ThoughtLogger and ChatDisplay
- melvin/ui/menus.py: all menu screens as Rich renderables
- main.py: full application entry point with Live display and input threading
- pyproject.toml: package metadata and entry point
- requirements.txt: pinned production dependencies
- tests/: 35 passing unit tests (dataset, memory, router, registry)
- .gitignore

Co-authored-by: marcusjenkinscode <264086284+marcusjenkinscode@users.noreply.github.com>
- tools/manager.py: correct Callable type annotation syntax
- config.py: lowercase quantisation suffix in Ollama model tag
- core/memory.py: add public system_prompt property (encapsulation)
- core/agent.py: use public system_prompt property instead of private attr
- ui/layout.py: fix flush_stream deadlock (release lock before calling add_assistant)
- pyproject.toml: use modern setuptools.build_meta backend

Co-authored-by: marcusjenkinscode <264086284+marcusjenkinscode@users.noreply.github.com>
Copilot AI changed the title [WIP] Add JSON dataset support for AI learning feat: Melvin AI — production-grade rewrite with Ollama agents, encrypted dataset, and Rich TUI Mar 11, 2026