Merged
21 commits
f2ffd3b
Add self-healing setup inventory
thatcooperguy Apr 28, 2026
829b82b
Make setup wizard repairs one click
thatcooperguy Apr 28, 2026
3c1d6a5
Add nvWizard compatibility preflight
thatcooperguy Apr 28, 2026
26f45f6
Run nvWizard boot preflight on startup
thatcooperguy Apr 28, 2026
61a5872
Add nvWizard mission control autopilot
thatcooperguy Apr 28, 2026
9f035d9
Add beginner launcher and resilience advisor
thatcooperguy Apr 28, 2026
9e3b008
Add Claw agent wizard packs
thatcooperguy Apr 28, 2026
7067056
Improve setup wizard responsive layout
thatcooperguy Apr 28, 2026
60c6f79
Prefer persistent block mounts in setup wizard
thatcooperguy Apr 28, 2026
e05b29f
Add GitHub connect and game engine wizard packs
thatcooperguy Apr 28, 2026
6fd2878
Simplify setup wizard with software emblems
thatcooperguy Apr 28, 2026
f41b46b
Use real software icons in setup wizard
thatcooperguy Apr 28, 2026
2ffd7cd
Simplify setup wizard lab chooser
thatcooperguy Apr 28, 2026
83105f3
Add NVIDIA and GitHub icons to lab cards
thatcooperguy Apr 28, 2026
8c22295
Add music producer mission to setup wizard
thatcooperguy Apr 28, 2026
8b9cef7
Harden rootless setup wizard
thatcooperguy Apr 28, 2026
8945378
Map underdog student advocate role
thatcooperguy Apr 28, 2026
b3ce7f6
Clarify setup install cards
thatcooperguy Apr 28, 2026
09ddbb7
Add NVIDIA Omni Agent starter path
thatcooperguy Apr 28, 2026
0134433
Add production readiness and diagnostics gates
thatcooperguy Apr 28, 2026
12b7ac3
Refresh rootless AI lab README
thatcooperguy Apr 28, 2026
9 changes: 9 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -8,6 +8,14 @@ build/
.eggs/
*.so
.pytest_cache/
.pytest_tmp/
pytest_tmp*/
.tmp-pytest*/
.pytest-*/
.codex-pytest-tmp/
.codex-test-tmp*/
pytest-of-*/
tmp*/
.mypy_cache/
.ruff_cache/
*.db
@@ -32,3 +40,4 @@ docs/DEVTO_ARTICLE.md
docs/DEVTO_FINAL.md
benchmark_results.md
.coverage
/setup-*.png
44 changes: 44 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,49 @@
# Changelog

## [0.34.1] - 2026-04-28

### Added
- Music Producer Studio mission and `nvh studio --install music -y` bundle
with ACE-Step, Demucs/WhisperX/audio lab tooling, and Audacity/LMMS
AppImage helpers.
- Setup wizard mission cards for AI Starter, Graphics Creator Studio, Game
Dev Lab, Music Producer Studio, Agent Builder, Local LLM Lab, and Power
User Workstation.
- GPU detection diagnostics that distinguish CPU-only hosts from rootless
sessions where NVIDIA devices exist but NVML or `nvidia-smi` are blocked.
- Boot preflight tracking for GPU architecture, compute capability, framebuffer
memory, Node/npm versions, storage capacity, storage write probes, and the
selected PyTorch CUDA profile.

### Changed
- Setup model recommendations now use the same GPU inventory path as the
system API, including aggregate multi-GPU framebuffer totals.
- Persistent storage checks now perform a real write/fsync/delete probe instead
of relying only on `os.access`.
- The setup wizard surfaces real software icons and shorter mission-first
language to reduce first-run wall-of-text fatigue.
- Setup wizard mission cards stay mission-first while storage, GPU, catalog,
and compatibility scans continue in the background.
- `nvh webui` now builds and starts the optimized production WebUI by default,
with `--dev` reserved for contributors editing the frontend.
- Pip/binary WebUI bootstrap now falls back to downloading the GitHub source
archive when `git` is missing, so fresh rootless desktops have one less
prerequisite.
- WebUI bootstrap now prefers the installed release tag before falling back to
`main`, keeping PyPI/binary installs aligned with their shipped version.
- `nvh workstation --all -y` now passes the non-interactive yes flag through to
the rootless Node installer instead of pausing for confirmation.

### Fixed
- Sorted the generated fallback catalog import block so the Python CI lint
matrix can pass.
- Model fit reports now check the total recommended model queue against
available persistent storage, not only one model at a time.
- WebUI API health and storage preflight now retry during slow API startup
instead of leaving the setup page stuck offline until a manual reload.
- API CORS now allows localhost, 127.0.0.1, nvhive, and IPv6 loopback WebUI
fallback ports so dynamic local previews can reach the API.

## [0.34.0] - 2026-04-28

### Added
190 changes: 150 additions & 40 deletions README.md
@@ -1,16 +1,28 @@
# nvHive

**One command. Every AI model you have. Automatically assembled into the best team for each task.**
**A rootless NVIDIA AI lab for students, creators, agents, ComfyUI, and local models.**

![version](https://img.shields.io/badge/version-0.34.0-blue) ![python](https://img.shields.io/badge/python-3.11%2B-blue) ![license](https://img.shields.io/badge/license-MIT-green) ![ci](https://img.shields.io/badge/CI-Linux%20%7C%20Windows%20%7C%20macOS-blue)
![version](https://img.shields.io/badge/version-0.34.1-blue) ![python](https://img.shields.io/badge/python-3.11%2B-blue) ![license](https://img.shields.io/badge/license-MIT-green) ![ci](https://img.shields.io/badge/CI-Linux%20%7C%20Windows%20%7C%20macOS-blue)

```bash
nvh "What is a binary search tree?" # → answers (single best advisor)
nvh "Fix the timeout bug in council.py" # → auto-detects coding task → agent mode
nvh "Should we use Redis or Postgres?" # → auto-detects debate → council (3+ advisors)
nvh "take a screenshot and describe my desktop" # → desktop agent (vision + tools)
nvh "setup comfyui" # → agent installs, configures, launches
```
nvHive turns a fresh cloud Linux GPU desktop into a ready-to-use AI workstation
without `sudo`: it finds persistent storage, installs into user-owned paths,
opens a setup wizard, recommends models for the detected GPU, and gives students
one-click paths for local LLMs, ComfyUI, agents, creative tools, game-dev tools,
and music production.

What you get on the happy path:

- A desktop launcher and WebUI setup wizard.
- Persistent `NVH_HOME` storage for models, ComfyUI, apps, logs, jobs, and config.
- GPU-aware model recommendations and disk estimates.
- Mission cards for AI Starter, Graphics Creator, Game Dev, Music Producer, and Agent Builder.
- Self-healing checks for storage, Python, Node, CUDA, drivers, boot drift, and install receipts.
- Redacted error reports with request IDs when something needs debugging.

Release status: CI is green across Linux, Windows, and macOS. Treat nvHive as a
production candidate, not yet production-ready, until the
[target NVIDIA Linux VM checklist](docs/PRODUCTION_READINESS.md) passes on the
actual no-root GPU desktop.

<p align="center">
<img src="docs/screenshots/terminal-demo-v2.gif" alt="nvHive CLI" width="640">
@@ -20,22 +32,54 @@ nvh "setup comfyui" # → agent installs, configure

## Install

Three ways to get nvHive — pick the one that matches your setup. No Docker, no container runtime, no root required.
Start with the Linux GPU desktop path if you are on a cloud workstation or
GeForce NOW-style session. Other install paths are below for existing Python
environments, local laptops, and single-file binary installs. No Docker, no
container runtime, and no root access are required for nvHive itself.

For cloud desktops, large downloads should live on the persistent block-backed
mount, not the read-only OS disk. The launcher finds the best candidate
automatically. If you already know the mount path, set `NVH_HOME` first:

### Option 1 — One-line installer (recommended for GPU VMs)
```bash
export NVH_HOME=/mnt/persist/nvhive
```

### Recommended - Launch a Linux GPU desktop lab

This is the easiest path for cloud Linux GPU sessions where only a mounted file
volume survives reconnects:

```bash
curl -sSL https://raw.githubusercontent.com/thatcooperguy/nvHive/main/start-linux.sh | bash
```

The launcher auto-detects a likely persistent block-backed mount, sets
`NVH_HOME`, installs nvHive rootlessly if needed, creates the desktop launcher,
starts the API/WebUI, and opens the setup wizard. If Python is missing, set
`NVH_USE_BINARY=1` and the same launcher downloads the single-file Linux binary
instead of creating a venv.

```bash
curl -sSL https://raw.githubusercontent.com/thatcooperguy/nvHive/main/start-linux.sh | NVH_USE_BINARY=1 bash
```

### General Linux installer

```bash
curl -sSL https://raw.githubusercontent.com/thatcooperguy/nvHive/main/install.sh | bash
```

Works on any Linux box with no root. The installer:

- installs to `NVH_HOME` when set, otherwise `~/.nvh/` for new installs;
- uses Python `venv` + `pip` by default, with a rootless micromamba fallback only when the cloud image needs it;
- pulls Ollama if you have an NVIDIA GPU;
- writes a sensible default config.

If `NVH_HOME` is not set, the installer now checks common persistent mount roots such as `/mnt`, `/media/$USER`, `/workspace`, `/data`, `/persistent`, and `/storage` before falling back to `~/.nvh`.
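
That fallback search can be sketched in shell. This is an illustration only; the real `install.sh` may order and validate the candidates differently:

```bash
# Hypothetical sketch of the persistent-mount search described above;
# the shipped installer's ordering and checks may differ.
candidate=""
for root in /mnt "/media/$USER" /workspace /data /persistent /storage; do
  # A root only qualifies if it exists and the current user can write to it.
  if [ -d "$root" ] && [ -w "$root" ]; then
    candidate="$root/nvhive"
    break
  fi
done
# Fall back to the home directory when no persistent root is found.
: "${candidate:=$HOME/.nvh}"
echo "install target: $candidate"
```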

Windows: `iwr -useb https://raw.githubusercontent.com/thatcooperguy/nvHive/main/install.ps1 | iex`
macOS: `curl -sSL https://raw.githubusercontent.com/thatcooperguy/nvHive/main/install-mac.sh | bash`

### Option 2 — Single-file binary (no Python needed)
### Single-file binary (no Python needed)

Fully standalone. No Python install, no pip, no venv. Click your OS:
Fully standalone. No Python install, no pip, no venv. Download the asset, make it executable, and run `nvh workstation --launch`. Click your OS:

<p align="center">
<a href="https://github.com/thatcooperguy/nvHive/releases/latest/download/nvh-linux-x86_64">
@@ -51,9 +95,21 @@
</a>
</p>

On Linux/macOS after download: `chmod +x nvh-* && ./nvh-*`. Full asset list (wheel, sdist, checksums) lives on the [Releases page](https://github.com/thatcooperguy/nvHive/releases/latest).
Linux terminal path:

```bash
mkdir -p "$HOME/.local/bin"
curl -fL https://github.com/thatcooperguy/nvHive/releases/latest/download/nvh-linux-x86_64 -o "$HOME/.local/bin/nvh"
chmod +x "$HOME/.local/bin/nvh"
NVH_HOME=/mnt/persist/nvhive "$HOME/.local/bin/nvh" workstation --launch -y
```

### Option 3 — pip from PyPI (for existing Python environments)
On Linux/macOS after a browser download:
`chmod +x nvh-* && NVH_HOME=/mnt/persist/nvhive ./nvh-* workstation --launch -y`.
Full asset list (wheel, sdist, checksums) lives on the
[Releases page](https://github.com/thatcooperguy/nvHive/releases/latest).

### pip from PyPI (for existing Python environments)

```bash
pip install nvhive # core
@@ -62,14 +118,17 @@ pip install "nvhive[browser]" # + headless browser (playwright)
pip install "nvhive[all]" # everything
```

`pip install` installs the Python package only. For large local models,
ComfyUI, and Studio packs, launch the workstation with a persistent `NVH_HOME`
so assets survive reconnects.

### First run

```bash
nvh # guided setup GPU detect, provider keys, local model pulls
nvh # guided setup: GPU detect, provider keys, local model pulls
nvh workstation --all -y # Linux GPU desktop: launcher + WebUI + ComfyUI + studio packs
nvh webui # Setup > Models lets you choose exact local downloads
nvh studio --install starter -y # rootless LLMs + agents + ComfyUI nodes + game-dev tools
nvh "your question" # just ask — nvHive figures out the rest
nvh webui # open the dashboard and setup wizard
nvh "your question" # just ask - nvHive figures out the rest
```

For a fresh Linux cloud desktop where only a mounted file volume persists,
@@ -82,7 +141,7 @@ source "$NVH_HOME/nvh-env.sh"
nvh workstation --home-dir "$NVH_HOME" --all -y
```

`nvh workstation --all -y` creates a desktop launcher, starts the WebUI, prepares rootless local model tooling, installs ComfyUI with nvHive starter workflow examples, and adds AI Studio packs for LLMs, agents, ComfyUI nodes, and Linux game projects.
`nvh workstation --all -y` creates a desktop launcher, starts the WebUI, prepares rootless local model tooling, installs ComfyUI with nvHive starter workflow examples, and adds the beginner AI Starter packs. Creative, game, Claw, and music missions stay one click away in the WebUI or can be installed directly with `nvh studio --install creative|game|claw|music -y`.

Use packs directly when you want a specific no-root lab:

@@ -92,41 +151,91 @@ nvh studio --models
nvh studio --install-models recommended -y
nvh studio --install llms -y
nvh studio --install agents -y
nvh studio --install claw -y # OpenClaw + NemoClaw when Docker/OpenShell is usable
nvh studio --install comfy -y
nvh studio --install game -y
nvh studio --install creative -y # Blender LTS + game/asset workspace
nvh studio --install music -y # ACE-Step, Demucs, WhisperX, Audacity/LMMS AppImages
nvh studio --install python-runtime-fallback -y # optional rescue pack, not the default path
```

The WebUI setup wizard includes a model picker with GPU-fit badges, disk
estimates, installed status, and persistent install jobs saved under
`$NVH_HOME/jobs`. ComfyUI, AI Studio packs, and local model downloads keep
showing progress after a browser refresh and can be canceled from the wizard.
For pip installs, the downloaded WebUI, npm cache, and no-root Node fallback
also live under `$NVH_HOME`, so reconnecting to a cloud desktop does not mean
starting the frontend toolchain from scratch.

The wizard also includes a local setup helper that ranks the next storage,
runtime, model, ComfyUI, and creative-tool actions before any local LLM is
installed.

The ComfyUI step lets students select workflow examples and save a model
download plan with source links, folder targets, and a helper checklist script,
because many image/video weights are large or require upstream terms.

On first run, `nvh` launches a guided 3-step setup — GPU detection, provider keys, local model pulls. Works immediately with local models (no signup needed). Every step is skippable. Run `nvh setup` anytime to reconfigure.
### Pick Your Mission

The setup wizard starts with one simple question: **what do you want to make?**
Pick a mission and nvWizard handles storage, GPU checks, Python/Node runtime
checks, model recommendations, rootless installers, and background jobs.

The wizard does six things before heavy installs:

1. Finds the best user-writable persistent storage path.
2. Checks GPU, driver, CUDA, VRAM, Python, Node, and npm health.
3. Recommends local models that fit the detected hardware and disk.
4. Installs selected tools into `NVH_HOME` without root.
5. Tracks long downloads as resumable jobs with logs and receipts.
6. Offers **Fix My Setup** and **Copy Error Report** when something breaks.
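
The write probe in step 1 matters because permission bits alone can lie on cloud mounts. A rough shell equivalent of the write/fsync/delete idea; the real check lives in nvHive's Python code, so this is only an approximation:

```bash
# Approximation of the write/fsync/delete probe; the shipped check is part
# of nvHive's Python code and may differ in details.
probe_storage() {
  probe="$1/.nvh-write-probe.$$"
  # conv=fsync forces data to disk, so read-only or full mounts fail here
  # even when permission bits (an os.access-style check) look fine.
  if dd if=/dev/zero of="$probe" bs=4096 count=1 conv=fsync 2>/dev/null; then
    rm -f "$probe"
    echo writable
  else
    rm -f "$probe" 2>/dev/null
    echo not-writable
  fi
}
probe_storage "${NVH_HOME:-$HOME}"
```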

| Mission | What it sets up |
| --- | --- |
| AI Starter | Local chat and coding models, Ollama, a GitHub helper, and local agent tooling |
| Graphics Creator Studio | ComfyUI, Blender, image/video workflow examples, and model download plans |
| Game Dev Lab | Godot helpers, Blender assets, GitHub, Linux game tooling, and Unity/Unreal guidance |
| Music Producer Studio | AI music generation, stem splitting, transcription, audio apps, and notebooks |
| Agent Builder | OpenClaw by default, with NemoClaw unlocked only when Docker already works without sudo |

Beginner Mode shows one recommended action, a **Fix My Setup** repair path, and
mission cards. Advanced Details stays available for storage, driver, CUDA,
Python, Node, logs, receipts, boot drift, and release-readiness diagnostics.

ComfyUI, AI Studio packs, and local model downloads run as persistent jobs under
`$NVH_HOME/jobs`, so setup progress survives browser refreshes and cloud desktop
reconnects. The model picker shows GPU-fit badges, disk estimates, installed
status, and a selected download queue.
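
From a reconnected terminal you can inspect those jobs directly. Only the `$NVH_HOME/jobs` location is documented here; the file layout inside it is install-specific, so this just lists whatever is there:

```bash
# Only the $NVH_HOME/jobs location is documented; entry names inside it
# are install-specific.
JOBS_DIR="${NVH_HOME:-$HOME/.nvh}/jobs"
mkdir -p "$JOBS_DIR"
# Newest entries first, so the download that was running before a browser
# refresh shows up at the top.
ls -1t "$JOBS_DIR" | head -n 5
```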

When the VM image changes between sessions, nvWizard compares the new boot
fingerprint with `$NVH_HOME/config/boot-preflight.json` and recommends rootless
repairs before launching large installs. If something still fails, **Copy Error
Report** creates a redacted report with request IDs and log locations. See
[Production Readiness](docs/PRODUCTION_READINESS.md) for the release gates and
target NVIDIA Linux VM acceptance checklist.
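
The drift check can be sketched in shell. This is an illustration only: the shipped preflight stores richer JSON in `$NVH_HOME/config/boot-preflight.json`, while this stand-in tracks just the kernel and driver version in plain text:

```bash
# Sketch only: the real preflight keeps richer JSON state; this tracks a
# minimal plain-text fingerprint of kernel + NVIDIA driver version.
state="${NVH_HOME:-$HOME/.nvh}/config/boot-fingerprint.txt"
driver="$(nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null || echo no-gpu)"
current="$(uname -r) $driver"
mkdir -p "$(dirname "$state")"
if [ -f "$state" ] && [ "$(cat "$state")" = "$current" ]; then
  echo "boot fingerprint unchanged"
else
  echo "boot drift detected: image changed since last session"
  printf '%s' "$current" > "$state"
fi
```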

If setup gets stuck:

```bash
nvh webui # reopen the wizard
nvh doctor --storage --home-dir "$NVH_HOME" # verify the persistent mount
nvh doctor --fix # try safe local repairs
tail -n 80 "$NVH_HOME/logs/nvhive.log" # inspect rootless logs
```

When the local API is running, Advanced Details can copy the same redacted report
from the UI, or you can call `GET /v1/setup/diagnostics` directly.
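
A minimal shell helper for that endpoint. The API port below is an assumption, since the README pins only the WebUI at `localhost:3000`; set `NVH_API_URL` to whatever address your install actually uses:

```bash
# The API base URL is an assumption; override NVH_API_URL if your install
# serves the API elsewhere.
fetch_diagnostics() {
  api="${NVH_API_URL:-http://127.0.0.1:8000}"
  curl -fsS "$api/v1/setup/diagnostics" && return 0
  echo "API not reachable at $api; start it with 'nvh webui' first" >&2
  return 1
}
fetch_diagnostics > diagnostics.json || true
```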

<p align="center">
<img src="docs/screenshots/rootless-runtime.svg" alt="Rootless NVIDIA cloud desktop layout" width="900">
</p>

On first run, `nvh` launches a guided setup helper for GPU detection,
provider keys, local model pulls, and rootless app installs. Works immediately
with local models when available. Every advanced step is skippable. Run
`nvh setup` anytime to reconfigure.

<p align="center">
<img src="docs/screenshots/setup-flow.svg" alt="nvHive 3-Step Setup Flow" width="900">
</p>

### WebUI

`nvh webui` launches a full-screen dashboard at `localhost:3000` — chat, council mode, advisor status, analytics, system stats, and setup flows. NVIDIA corporate theme, keyboard-first (Ctrl+K command palette, Ctrl+B collapse sidebar).
`nvh webui` launches the local dashboard at `localhost:3000`: setup wizard,
chat, council mode, advisor status, analytics, and system health. First run
installs frontend dependencies under persistent `NVH_HOME`; later launches use
the production server for faster startup. Use `nvh webui --dev` only when
editing the frontend.

<p align="center">
<img src="docs/screenshots/webui-walkthrough.gif" alt="nvHive WebUI walkthrough" width="900">
</p>

**GPU tier model recommendations:**
**GPU tier model recommendations:**

| VRAM | Text Model | Vision Model | Behavior |
|------|-----------|-------------|----------|
@@ -302,8 +411,11 @@ Results vary by hardware and workload — run `nvh bench` to measure on your set

| Guide | Description |
|-------|-------------|
| [Getting Started](docs/GETTING_STARTED.md) | First-time setup |
| [Student GPU Cloud / Linux Desktop](docs/LINUX_DESKTOP.md) | No-root NVIDIA Linux workstation and ComfyUI guide |
| [Production Readiness](docs/PRODUCTION_READINESS.md) | Release gates and target NVIDIA Linux VM acceptance checklist |
| [Deploy Without Root](docs/DEPLOY_NO_ROOT.md) | No-root install on servers |
| [Windows Troubleshooting](docs/TROUBLESHOOTING_WINDOWS.md) | Encoding, segfaults, port issues |
| [Getting Started](docs/GETTING_STARTED.md) | General CLI/provider setup after the no-root workstation path |
| [Commands](docs/COMMANDS.md) | Full CLI reference (50+ commands) |
| [Providers](docs/PROVIDERS.md) | 23 providers, rate limits, free tiers |
| [Council System](docs/COUNCIL.md) | Multi-LLM consensus with confidence scoring |
@@ -313,8 +425,6 @@ Results vary by hardware and workload — run `nvh bench` to measure on your set
| [Agent Tools](docs/TOOLS.md) | Agent tools and capabilities |
| [Configuration](docs/CONFIGURATION.md) | Configuration reference |
| [Web UI](docs/WEBUI.md) | Web dashboard |
| [Deploy Without Root](docs/DEPLOY_NO_ROOT.md) | No-root install on servers |
| [Windows Troubleshooting](docs/TROUBLESHOOTING_WINDOWS.md) | Encoding, segfaults, port issues |
| [Releasing](docs/RELEASING.md) | Release runbook |

---
9 changes: 8 additions & 1 deletion docs/DEPLOY_NO_ROOT.md
@@ -37,6 +37,11 @@ nvh nvidia # nvHive's GPU detection

## Ollama Without Root

For cloud desktops with persistent block storage, prefer the nvHive workstation
or Studio pack flow so model state is tied to `NVH_HOME`. The manual Ollama path
below is useful on generic no-root servers, but the default `~/.ollama` location
may be ephemeral on some managed desktops.

The standard Ollama installer (`curl | sh`) needs root. User-space
alternative:

@@ -62,7 +67,9 @@ ollama pull qwen2.5-coder:32b # ~34 GB, reviewer
nvh agent --setup
```

Models are stored in `~/.ollama/models/` — no root needed.
Models are stored in `~/.ollama/models/` by default; no root needed. On cloud
desktops, confirm that `$HOME` is persistent, or point the model path at your
durable `NVH_HOME` volume before pulling large models.
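
Ollama honors the `OLLAMA_MODELS` environment variable for its model directory, so one way to do that redirection (paths here are examples; substitute your actual persistent mount):

```bash
# OLLAMA_MODELS redirects where Ollama stores pulled weights. Replace the
# fallback below with your persistent mount, e.g. /mnt/persist/nvhive.
export NVH_HOME="${NVH_HOME:-$HOME/.nvh}"
export OLLAMA_MODELS="$NVH_HOME/ollama/models"
mkdir -p "$OLLAMA_MODELS"
# Make the setting survive new shells on the same desktop:
grep -q OLLAMA_MODELS "$HOME/.bashrc" 2>/dev/null || \
  echo "export OLLAMA_MODELS=\"$OLLAMA_MODELS\"" >> "$HOME/.bashrc"
# Subsequent pulls now land on the persistent volume:
# ollama pull llama3.1:8b
```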

## API Keys Without Keyring
