A private, GPU-accelerated personal AI brain that runs entirely on hardware you own.
A fully local, multi-agent personal AI assistant running on an NVIDIA Jetson Orin Nano, with a Raspberry Pi 5 voice edge and a Mac dashboard. It combines the OpenClaw multi-channel messaging gateway, LangGraph agent orchestration, Qdrant vector memory, and Ollama inference: all on-premise, zero cloud, zero subscription.

Most "personal AI" setups are either:
- Cloud-dependent (ChatGPT, Claude.ai, etc.): your data leaves your machine
- Single-model chatbots: no specialization, no memory, no autonomy

Maximus-X Sentinel is different:
| Feature | What it does |
|---|---|
| OpenClaw Gateway | Talk to Maximus via WhatsApp, Telegram, Signal, Discord, or iMessage; every channel routes to your Jetson |
| LangGraph Supervisor | Intelligent routing to specialized sub-agents (Research, ChemBiz, Home, Schedule) |
| Context Membrane | Auto-ingesting RAG layer that pulls from your local notes, emails, and docs nightly |
| Jetson Inference Core | GPU-accelerated local LLM: no API keys, no token costs, no privacy leaks |
| Pi 5 Voice Edge | Wake word + faster-whisper STT + Kokoro TTS, runs on Pi hardware |
| Self-improving | OpenClaw can write and install its own new skills when you ask for new capabilities |
```
┌─────────────────────────────────────────────────────────────┐
│                        YOUR DEVICES                         │
│      WhatsApp / Telegram / Signal / iMessage / Discord      │
└──────────────────────────────┬──────────────────────────────┘
                               │
┌──────────────────────────────▼──────────────────────────────┐
│               Mac / Laptop (OpenClaw Gateway)               │
│   • openclaw gateway process (Node.js)                      │
│   • Open WebUI dashboard :3000                              │
│   • Routes all channels → Jetson agent                      │
└──────────────────────────────┬──────────────────────────────┘
                               │  LAN / Wi-Fi
┌──────────────────────────────▼──────────────────────────────┐
│              NVIDIA Jetson Orin Nano (AI Brain)             │
│                                                             │
│   ┌───────────────────────────────────────────────────┐     │
│   │            LangGraph Supervisor Agent             │     │
│   │    Routes to: Research | ChemBiz | Home | Sched   │     │
│   └──────────┬───────────────────────┬────────────────┘     │
│              │                       │                      │
│   ┌──────────▼───────┐    ┌──────────▼───────┐              │
│   │  Ollama (LLM)    │    │  Qdrant (RAG)    │              │
│   │  llama3.2:3b     │    │  Context Membrane│              │
│   │  GPU accel.      │    │  Auto-ingestion  │              │
│   └──────────────────┘    └──────────────────┘              │
│                                                             │
│   FastAPI server :8000 ← all agent traffic                  │
└──────────────────────────────┬──────────────────────────────┘
                               │
┌──────────────────────────────▼──────────────────────────────┐
│                 Raspberry Pi 5 (Voice Edge)                 │
│   • openWakeWord (wake: "Hey Maximus")                      │
│   • faster-whisper STT (CTranslate2 backend)                │
│   • Kokoro-82M TTS (HF TTS Arena #1, offline)               │
│   • Home/work presence triggers                             │
└─────────────────────────────────────────────────────────────┘
```
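To make the inference path concrete, here is a minimal sketch of how a gateway or agent process might call Ollama's HTTP API on the Jetson. The host address and the `ask_maximus` name are assumptions for illustration; the `/api/generate` endpoint and payload shape are Ollama's documented defaults.

```python
import json
import urllib.request

# Assumed Jetson address on the LAN; adjust to your network.
OLLAMA_URL = "http://192.168.1.100:11434/api/generate"

def build_generate_payload(prompt: str, model: str = "llama3.2:3b") -> dict:
    """Build the request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_maximus(prompt: str) -> str:
    """Send one prompt to the Jetson's Ollama server and return the reply text."""
    body = json.dumps(build_generate_payload(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]
```

With `stream=False`, Ollama returns one JSON object whose `response` field holds the full completion, which keeps the relay logic trivial.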
- Jetson Orin Nano running JetPack 6.4+
- NVMe SSD strongly recommended (SD card I/O is the bottleneck for model loading)
- Raspberry Pi 5 (4GB or 8GB)
- Mac/Linux laptop for OpenClaw gateway
- Ollama model pulled: `ollama pull llama3.2:3b`
```bash
git clone https://github.com/shehanmakani/MaximusX
cd MaximusX
cp .env.example .env
# Edit .env: set OPENCLAW_TOKEN, TELEGRAM_BOT_TOKEN, etc.
docker compose up -d
```

Install and start the OpenClaw gateway on your Mac/laptop:

```bash
npm install -g @openclaw/openclaw
openclaw init
openclaw start
```

On the Pi 5, start the voice edge:

```bash
cd pi-voice
pip install -r requirements.txt
python3 voice_edge.py
```

This repo now includes a local self-prompting loop that uses future-self-emulator
to rank likely next actions from calendar and repo context, then stages the winning
task for approval instead of executing it blindly.
```bash
python3 self_prompting_agent.py \
  --profile "Shehan: founder, engineer, builder" \
  --calendar sample_calendar.json \
  --repo .
```

That writes a staged task into `autonomous_outputs/task_<timestamp>.json`.
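For orientation, a staged task file in `autonomous_outputs/` might look roughly like this; the field names here are illustrative assumptions, not the repo's exact schema:

```json
{
  "task_id": "task_20260101T063000",
  "action": "draft_summary",
  "rationale": "Next calendar block is the IntelliForm sync",
  "status": "pending_approval"
}
```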
Approve a staged task locally:
```bash
python3 approve_task.py <task_id>
```

Or through the WhatsApp gateway:

- `GO`: generate the latest predicted task
- `STATUS`: show the latest pending task
- `APPROVE <task_id>`: execute the allowlisted action
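The approval gate is the key safety property here: `APPROVE` only ever executes actions from a fixed allowlist. A toy sketch of that check, assuming hypothetical action names and a simplified task shape (not the repo's actual code):

```python
# Hypothetical allowlist: only these action names may ever be executed.
ALLOWED_ACTIONS = {"draft_summary", "run_tests", "update_notes"}

def approve(staged_task: dict) -> str:
    """Execute a staged task only if its action is allowlisted."""
    action = staged_task.get("action")
    if action not in ALLOWED_ACTIONS:
        return f"rejected: '{action}' is not allowlisted"
    # A real implementation would dispatch to a handler here.
    return f"approved: {action}"
```

Rejecting anything outside the allowlist means even a badly ranked or hallucinated task can never run arbitrary commands.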
This repo also includes a first pass at the overnight clone workflow:
- Log daytime activity and repo state:
```bash
python3 activity_logger.py --repo .
python3 activity_logger.py --event-type meeting_prep --summary "Prepared architecture notes before the IntelliForm sync"
```

- Enter sleep mode before bed:
```bash
python3 sleep_mode.py \
  --profile "Shehan: founder, engineer, builder" \
  --calendar sample_calendar.json \
  --repo . \
  --cycles 2
```

- Wake up and review the digest:
```bash
python3 wake_mode.py
python3 morning_digest.py --json
```

Core files:

- `activity_logger.py`: captures daytime behavior into `memory/activity_log.jsonl`
- `pattern_model.py`: learns recurring priorities from those events
- `reasoning_loop.py`: critiques and ranks options using strategic, practical, protective, and identity lenses
- `overnight_planner.py`: converts those patterns into a ranked overnight task queue
- `sleep_mode.py`: stages tasks while you are "asleep"
- `wake_mode.py`: gives you the morning approval summary
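Append-only JSONL is a natural fit for the activity log: each event is one self-contained JSON object per line, so concurrent writers and nightly readers never need to parse the whole file. A sketch of the write path (the field names are assumptions; the real `activity_logger.py` may differ):

```python
import json
import time
from pathlib import Path

def log_event(log_path: str, event_type: str, summary: str) -> dict:
    """Append one activity event as a JSON line and return the record."""
    record = {"ts": time.time(), "event_type": event_type, "summary": summary}
    path = Path(log_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```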
Important design note: This system should not just copy previous behavior. The newer overnight flow now includes a reasoning loop so it can adapt when priorities change, critique its own options, and prefer work that matches both current context and the user's style rather than blindly repeating patterns.
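One concrete way to read the "lenses" idea: score each candidate task under several weighted perspectives and rank by the combined score. This is a toy sketch of that ranking step, not the actual `reasoning_loop.py` logic; the lens weights and score inputs are assumptions.

```python
# Assumed lens weights; a real loop might adapt these from feedback.
LENS_WEIGHTS = {"strategic": 0.4, "practical": 0.3, "protective": 0.2, "identity": 0.1}

def rank_options(options: list[dict]) -> list[dict]:
    """Rank candidate tasks by a weighted sum over per-lens scores (0..1)."""
    def combined(opt: dict) -> float:
        return sum(w * opt["scores"].get(lens, 0.0) for lens, w in LENS_WEIGHTS.items())
    return sorted(options, key=combined, reverse=True)
```

Because the weights live outside the scoring loop, changing priorities means changing one dict rather than retraining anything.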
| Agent | Trigger keywords | Capabilities |
|---|---|---|
| Research | "find", "search", "what is", "summarize" | Web search, RAG over your docs |
| ChemBiz | "chemrich", "intelliform", "formulation", "lead" | Chemical domain Q&A, business context |
| Home | "lights", "temperature", "lock", "scene" | Home Assistant REST API |
| Schedule | "remind", "meeting", "calendar", "when" | Google Calendar + cron reminders |
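The trigger-keyword routing in the table above can be sketched as a first-match scan. The fallback agent and function name are assumptions, and the real LangGraph supervisor likely uses the LLM itself to route rather than raw substring matching:

```python
# Trigger keywords per agent, mirroring the table above.
AGENT_TRIGGERS = {
    "Research": ["find", "search", "what is", "summarize"],
    "ChemBiz": ["chemrich", "intelliform", "formulation", "lead"],
    "Home": ["lights", "temperature", "lock", "scene"],
    "Schedule": ["remind", "meeting", "calendar", "when"],
}

def route(message: str, default: str = "Research") -> str:
    """Return the first agent whose trigger keyword appears in the message."""
    text = message.lower()
    for agent, keywords in AGENT_TRIGGERS.items():
        if any(kw in text for kw in keywords):
            return agent
    return default
```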
- `chembiz-context`: loads ChemRich/ChemeNova domain knowledge into every ChemBiz query
- `nightly-ingest`: cron job that pulls new emails/notes into the Qdrant Context Membrane at 2 am
- `voice-relay`: bridges Pi STT output → OpenClaw → Jetson and back to Pi TTS
- `presence-trigger`: detects home/work arrival via a Pi sensor and fires a contextual briefing
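Before documents can land in Qdrant, the nightly ingest has to split them into embeddable chunks. A hedged sketch of that chunking step, with the sizes and overlap as assumptions; the embedding model and the `qdrant-client` upsert call are deliberately omitted:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for embedding."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Step forward by less than chunk_size so adjacent chunks overlap.
        start += chunk_size - overlap
    return chunks
```

The overlap preserves context that would otherwise be cut at chunk boundaries, which noticeably improves retrieval quality for RAG.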
| Layer | Technology |
|---|---|
| LLM inference | Ollama + llama3.2:3b (Jetson GPU) |
| Agent orchestration | LangGraph + LangChain-core |
| Vector DB | Qdrant v1.13.0 (arm64) |
| Messaging gateway | OpenClaw (self-hosted) |
| Dashboard | Open WebUI |
| Voice STT | faster-whisper (CTranslate2) |
| Wake word | openWakeWord |
| TTS | Kokoro-82M |
| Container runtime | Docker + NVIDIA runtime |
| API server | FastAPI + Uvicorn |
- SSH key exchange: make sure the Pi 5 can SSH into the Jetson without a password: `ssh-copy-id shehan@192.168.1.100`
- Ngrok setup: on the Pi, run `ngrok http 5000`, then copy the URL into the Twilio Console under "Sandbox Settings" > "When a message comes in."
- Zero cloud inference: all LLM calls stay on the Jetson
- OpenClaw stores config/memory as local Markdown on your Mac
- Qdrant vector data stays on the Jetson's NVMe
- The only outbound traffic is your chosen messaging app (Telegram, etc.) for delivery
If you're interested in decision intelligence, check out Future-Self-Emulator, an agentic AI system that simulates multiple life-path versions of yourself.