Full Documentation: https://bazzite.ai/
Bazzite AI provides two distinct approaches for AI/ML development:
| Approach | What It Is | Who It's For |
|---|---|---|
| Bazzite AI OS | Immutable Linux desktop with 450+ pre-installed dev & ML packages | Users wanting a complete AI/ML desktop environment |
| Bazzite AI Pods | OCI container images for Docker, Podman, Kubernetes, and HPC | Users wanting ML tools on ANY Linux distribution |
Both approaches are equally valuable: choose the OS for a full desktop experience, or use Pods on any existing Linux system.
```mermaid
graph TB
  subgraph os["Bazzite AI OS"]
    direction TB
    fedora[Fedora 43 Atomic]
    ublue[Universal Blue]
    bazzite_base[Bazzite NVIDIA Open]
    bazzite_ai[Bazzite AI<br/>450+ Dev Packages]
    fedora --> ublue --> bazzite_base --> bazzite_ai
  end

  subgraph pods["Bazzite Pods"]
    direction TB
    base[pod-base<br/>Fedora 43 + Dev Tools]
    base --> nvidia[pod-nvidia<br/>CUDA + cuDNN]
    base --> devops[pod-devops<br/>AWS + kubectl + Helm]
    base --> runner[pod-githubrunner<br/>CI/CD Runner]
    nvidia --> python[pod-nvidia-python<br/>PyTorch ML]
    nvidia --> ollama[pod-ollama<br/>LLM Inference]
    python --> jupyter[pod-jupyter<br/>JupyterLab]
  end

  subgraph deploy["Deployment Methods"]
    bazzite_os[Bazzite AI OS<br/>Rebase / QEMU / Bootc]
    docker[Docker / Podman]
    devcontainer[Dev Containers]
    quadlet[Podman Quadlets]
    k8s[Kubernetes]
    hpc[HPC / Apptainer]
  end

  os --> bazzite_os
  pods --> docker
  pods --> devcontainer
  pods --> quadlet
  pods --> k8s
  pods --> hpc
  pods --> bazzite_os

  style bazzite_ai fill:#4CAF50,color:#fff
  style python fill:#4CAF50,color:#fff
  style jupyter fill:#4CAF50,color:#fff
  style ollama fill:#4CAF50,color:#fff
  style devops fill:#4CAF50,color:#fff
```
Bazzite AI OS requirements:

| Component | Requirement |
|---|---|
| Desktop | KDE Plasma |
| Storage | 12GB (OS) + 7-26GB per pod |
| GPU | NVIDIA RTX 20+, AMD RX 5000+, or Intel Gen 7+ |
Bazzite AI Pods requirements:

| Component | Requirement |
|---|---|
| Container Runtime | Docker 20+ or Podman 4+ |
| GPU (Optional) | NVIDIA Container Toolkit for GPU access |
| Storage | 7-26GB per pod variant |
A complete AI/ML development desktop built on Bazzite and Universal Blue.
- Atomic updates with instant rollback
- KDE Plasma desktop environment
- 450+ pre-installed Dev & ML packages
- Full GPU support (NVIDIA RTX 20+, AMD, Intel)
- Pre-configured container runtimes (Podman, Docker)
Choose how to run Bazzite AI OS:
| Method | Best For | GPU Support |
|---|---|---|
| Rebase | Primary workstation | Full native GPU |
| QEMU VM | Development/testing with persistence | PCI passthrough (complex) |
| bcvk | Quick testing, CI/CD | Not supported |
Prerequisites: First, install Bazzite KDE NVIDIA Open by following the Bazzite Installation Guide.
Bazzite is a gaming-focused Fedora Atomic Desktop with:
- Immutable OS with atomic updates
- Pre-configured GPU drivers (NVIDIA, AMD, Intel)
- Built on Universal Blue
Rebase to Bazzite AI:
Once you have Bazzite KDE installed, rebase to Bazzite AI:
```bash
rpm-ostree rebase ostree-image-signed:docker://ghcr.io/atrawog/bazzite-ai:stable
systemctl reboot
```

Available tags:
| Tag | Description |
|---|---|
| stable | Latest stable release (recommended) |
| latest | Most recent build from main branch |
| testing | Testing branch with latest features |
Rebase Back to Bazzite:
To return to base Bazzite KDE NVIDIA Open:
```bash
rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/bazzite-nvidia-open:stable
systemctl reboot
```

Rollback:
If something goes wrong, rollback to the previous deployment:
```bash
rpm-ostree rollback
systemctl reboot
```

After rebasing to Bazzite AI, configure NVIDIA GPU support (RTX 20+ series):
Step 1: Apply kernel arguments
```bash
ujust config-nvidia kargs
```

Step 2: Reboot (required)

```bash
systemctl reboot
```

Step 3: Configure GPU containers

```bash
ujust config gpu setup
```

Step 4: Verify setup

```bash
# Check GPU detection
nvidia-smi

# Test CUDA in containers
ujust config-nvidia test-cuda
```

AMD and Intel GPUs: work automatically via /dev/dri passthrough; no configuration required.
See: GPU Compatibility Guide for troubleshooting.
Run Bazzite AI OS as a virtual machine with cloud-init configuration.
GPU Note: GPU passthrough in VMs requires PCI passthrough setup (IOMMU, driver isolation). For simpler GPU access, use Pods with Docker/Podman.
Step 1: Download QCOW2 Image
```bash
# Download stable release
curl -LO https://r2.bazzite.ai/qcow2/bazzite-ai-stable.qcow2

# Or testing branch
curl -LO https://r2.bazzite.ai/qcow2/bazzite-ai-testing.qcow2
```

Step 2: Create Cloud-Init Configuration
Create user-data:
```yaml
#cloud-config
timezone: UTC
users:
  - name: bazzite
    groups: wheel, docker
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... your-key-here
```

Create meta-data:
```yaml
instance-id: bazzite-ai-vm
local-hostname: bazzite-ai
```

Cloud-Init Options:
| Option | Description |
|---|---|
| timezone | VM timezone (e.g., UTC, Europe/Vienna) |
| users[].name | Username for login |
| users[].groups | Group membership (wheel for sudo, docker for containers) |
| users[].ssh_authorized_keys | Your SSH public keys for passwordless login |
| local-hostname | VM hostname |
Step 3: Create Seed ISO
```bash
# Using genisoimage
genisoimage -output seed.iso -V cidata -r -J user-data meta-data

# Or using cloud-localds (if available)
cloud-localds seed.iso user-data meta-data
```

Step 4: Create VM Disk

```bash
qemu-img create -f qcow2 -b bazzite-ai-stable.qcow2 -F qcow2 vm-disk.qcow2 100G
```

Step 5: Run VM
```bash
qemu-system-x86_64 \
  -enable-kvm \
  -machine type=q35,accel=kvm \
  -cpu host \
  -smp 4 \
  -m 8G \
  -drive file=vm-disk.qcow2,format=qcow2 \
  -drive file=seed.iso,format=raw \
  -nic user,hostfwd=tcp::2222-:22 \
  -display none \
  -daemonize
```

Step 6: Connect

```bash
ssh -p 2222 bazzite@localhost
```

bcvk runs OCI container images directly as VMs without pre-built disk images, making it ideal for rapid testing and CI/CD.
GPU Note: bcvk does not support GPU passthrough. For GPU workloads, use Option 1 (Rebase) or Bazzite AI Pods with Docker/Podman.
Install bcvk:
```bash
VERSION="v0.9.0"
mkdir -p ~/.local/bin
curl -sSL "https://github.com/bootc-dev/bcvk/releases/download/${VERSION}/bcvk-x86_64-unknown-linux-gnu.tar.gz" \
  | tar -xzf - -C ~/.local/bin/
chmod +x ~/.local/bin/bcvk
```

Prerequisites:
```bash
# Fedora/RHEL
sudo dnf install qemu-kvm libvirt
sudo systemctl enable --now libvirtd

# Ubuntu/Debian
sudo apt install qemu-kvm libvirt-daemon
sudo systemctl enable --now libvirtd
```

Ephemeral VM (auto-cleanup on exit):

```bash
bcvk ephemeral run-ssh ghcr.io/atrawog/bazzite-ai:testing
# SSH session starts automatically on port 2222
# VM is destroyed when you exit
```

Persistent VM:
```bash
# Create VM
bcvk libvirt run \
  --name bazzite-dev \
  --cpus 4 \
  --memory 8192 \
  --disk-size 100G \
  -p 2222:22 \
  ghcr.io/atrawog/bazzite-ai:testing

# Manage VM
bcvk libvirt start bazzite-dev
bcvk libvirt stop bazzite-dev
bcvk libvirt list --all
bcvk libvirt rm bazzite-dev

# Connect via SSH
ssh -p 2222 root@localhost
```

Bazzite AI Pods are OCI container images that run on any Linux distribution with Docker or Podman; no Bazzite AI OS installation is required.
```bash
# NVIDIA GPU (use --gpus all)
docker run -it --rm --gpus all -v $(pwd):/workspace \
  ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

# AMD/Intel GPU (use --device=/dev/dri)
docker run -it --rm --device=/dev/dri -v $(pwd):/workspace \
  ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

# CPU-only
docker run -it --rm -v $(pwd):/workspace \
  ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

# JupyterLab (port 8888)
docker run -it --rm --gpus all -p 8888:8888 -v $(pwd):/workspace \
  ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable

# Ollama LLM server (port 11434)
docker run -it --rm --gpus all -p 11434:11434 \
  ghcr.io/atrawog/bazzite-ai-pod-ollama:stable
```

| Pod | Size | GPU | Use Case | Docker Command |
|---|---|---|---|---|
| nvidia-python | ~14GB | Yes | ML/AI with PyTorch + CUDA | docker run --gpus all ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable |
| jupyter | ~17GB | Yes | Interactive notebooks | docker run --gpus all -p 8888:8888 ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable |
| ollama | ~11GB | Yes | LLM inference server | docker run --gpus all -p 11434:11434 ghcr.io/atrawog/bazzite-ai-pod-ollama:stable |
| devops | ~10GB | No | AWS, kubectl, Helm, OpenTofu | docker run ghcr.io/atrawog/bazzite-ai-pod-devops:stable |
| base | ~7GB | No | Development foundation | docker run ghcr.io/atrawog/bazzite-ai-pod-base:stable |
| nvidia | ~8GB | Yes | Custom CUDA applications | docker run --gpus all ghcr.io/atrawog/bazzite-ai-pod-nvidia:stable |
| githubrunner | ~8GB | No | CI/CD runner | docker run ghcr.io/atrawog/bazzite-ai-pod-githubrunner:stable |
| comfyui | ~26GB | Yes | AI image generation | docker run --gpus all -p 8188:8188 ghcr.io/atrawog/bazzite-ai-pod-comfyui:stable |
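For repeatable local runs, the `docker run` one-liners above can be captured in a Compose file. Below is a minimal sketch for the jupyter pod; the service name and volume layout are illustrative assumptions, and the `devices` reservation follows the Compose specification's syntax for NVIDIA GPUs (drop it on CPU-only hosts):

```yaml
# docker-compose.yml (illustrative sketch, not an official file from this project)
services:
  jupyter:
    image: ghcr.io/atrawog/bazzite-ai-pod-jupyter:stable
    ports:
      - "8888:8888"
    volumes:
      - ./:/workspace          # mount the current directory as the workspace
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia   # requires NVIDIA Container Toolkit on the host
              count: 1
              capabilities: [gpu]
```

Start it with `docker compose up -d` and open http://localhost:8888.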
Pods work across multiple platforms:
- Docker/Podman - Linux, macOS, Windows
- Kubernetes - AWS EKS, Google GKE, Azure AKS, on-prem
- HPC/Apptainer - Slurm, PBS, SGE clusters
- Dev Containers - VS Code, GitHub Codespaces, JetBrains IDEs
- Podman Quadlets - Systemd services with auto-start
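As an illustration of the Quadlet route, a pod can be declared as a systemd-managed container unit. The sketch below is an assumption of how one might wire up the ollama pod (unit name, description, and device line are not taken from this project), saved as e.g. `~/.config/containers/systemd/ollama.container`:

```ini
[Unit]
Description=Ollama LLM inference server

[Container]
Image=ghcr.io/atrawog/bazzite-ai-pod-ollama:stable
PublishPort=11434:11434
# AMD/Intel GPU access; with NVIDIA CDI configured, AddDevice=nvidia.com/gpu=all may work instead
AddDevice=/dev/dri

[Install]
WantedBy=default.target
```

Run `systemctl --user daemon-reload`, then `systemctl --user start ollama.service` to launch it.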
See Pod Deployment Guide for detailed instructions.
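For the Dev Containers route, a pod image can be referenced directly from a `devcontainer.json`. A minimal sketch (the name is an arbitrary choice; `runArgs` applies only when an NVIDIA GPU should be exposed):

```jsonc
{
  // Illustrative sketch: the name is an arbitrary choice
  "name": "bazzite-ai-ml",
  "image": "ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable",
  // Drop runArgs on CPU-only hosts; use "--device=/dev/dri" for AMD/Intel GPUs
  "runArgs": ["--gpus", "all"]
}
```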
Kubernetes:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ml-training
spec:
  template:
    spec:
      containers:
        - name: pytorch
          image: ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable
          resources:
            limits:
              nvidia.com/gpu: 1
      restartPolicy: OnFailure
```

HPC/Apptainer:

```bash
# Pull image once
apptainer pull docker://ghcr.io/atrawog/bazzite-ai-pod-nvidia-python:stable

# Run in a Slurm job (--nv enables GPU passthrough)
srun --gres=gpu:1 apptainer exec --nv bazzite-ai-pod-nvidia-python_stable.sif bash
```

For GPU access with Docker/Podman on any Linux distribution, install the NVIDIA Container Toolkit:
```bash
# Fedora/RHEL
sudo dnf install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Verify
docker run --rm --gpus all nvidia/cuda:12.0.0-base-ubuntu22.04 nvidia-smi
```

| Resource | Description |
|---|---|
| Full Documentation | Complete user guide and reference |
| Pod Deployment Guide | Docker, Kubernetes, HPC deployment |
| Bazzite Installation | Base Bazzite installation guide |
| CLAUDE.md | AI assistant development guide |
| AGENTS.md | Commands and conventions |
```bash
# Clone and setup
git clone https://github.com/atrawog/bazzite-ai.git
cd bazzite-ai/bazzite-ai
./scripts/post-clone-setup.sh  # Installs pre-commit hooks

# Build
just build          # OS image
just pod build all  # All pods
just docs-build     # Documentation
```

Full guide: CONTRIBUTING.md
