Current version: 0.1.16
Containerized AI agent CLIs (cursor-agent and claude) with a persistent /root volume and your project mounted at /work.
- Agent CLI(s): `cursor-agent` and `claude` (Claude Code) installed into the persistent `/root` volume so they survive rebuilds
- Host auth reuse: mounts your host Cursor config (`$HOME/.config/cursor`) into the container (read-only); Claude Code credentials live in the persistent `/root` volume — authenticate once inside the container
- Dev tools: `git`, `gh`, `curl`, etc. (installed at image build time via family-specific Dockerfile templates; `.ai-shell/bootstrap-tools.sh` is an optional empty hook for extra runtime installs on each `ai-shell up`)
- Persistent state: `/root` is a named Docker volume; `/work` is your bind-mounted project directory
- Per-project customization: each project gets its own `.ai-shell/` directory with Dockerfile and scripts
Host OS note: this setup is currently documented for a Linux host (for example, it mounts host Cursor credentials from ~/.config/cursor). So far, ai-shell has only been tested on Ubuntu 24.04.
Cursor Agent CLI reads credentials from your host's Cursor installation. Make sure you're signed in on the host and that $HOME/.config/cursor is populated.
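A quick host-side sanity check can confirm the config directory exists and is non-empty before you start the container (a sketch; the `cursor_config_ok` helper is illustrative, not part of ai-shell):

```shell
# Check that the host Cursor config dir exists and contains files.
cursor_config_ok() {
  dir="${1:-$HOME/.config/cursor}"
  [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]
}

if cursor_config_ok; then
  echo "host Cursor config found"
else
  echo "no Cursor config yet; sign in with the Cursor app on the host first" >&2
fi
```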
On first container creation, `ai-shell up` tries to install `cursor-agent` inside the container automatically (best-effort) by running:

```bash
curl https://cursor.com/install -fsSL | bash
```

If the install step fails, `ai-shell up` will warn but still succeed. To skip: `ai-shell up --no-install-cursor`.
Claude Code CLI stores credentials in ~/.claude inside the container, which lives in the persistent /root named volume. Authenticate once inside the container and credentials will persist across rebuilds.
On first container creation, `ai-shell up` tries to install `claude` inside the container automatically (best-effort) by running:

```bash
curl -fsSL https://claude.ai/install.sh | bash
```

If the install step fails, `ai-shell up` will warn but still succeed. To skip: `ai-shell up --no-install-claude`.
If auto-install fails for either agent, you can install manually:
```bash
ai-shell up --no-install   # skip all agent installs
ai-shell enter
# inside the container, install manually:
curl https://cursor.com/install -fsSL | bash     # cursor-agent
curl -fsSL https://claude.ai/install.sh | bash   # claude code
export PATH="$HOME/.local/bin:$PATH"
```

The container's `/root` is a named volume, so once agents are installed they persist across rebuilds/recreates.
If you provide GH_TOKEN to the container, gh will authenticate non-interactively inside the container.
Global `.env` (optional, recommended): `ai-shell up` will look for an env file in:

- `--env-file <path>` (explicit; empty string disables)
- `AI_SHELL_ENV_FILE=<path>`
- `$XDG_CONFIG_HOME/ai-shell/.env`
- `~/.config/ai-shell/.env`
If no env file is found, ai-shell up still works; GitHub SSH bootstrap may be deferred until you authenticate gh interactively inside the container.
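The lookup above can be sketched as a small shell function (an illustrative sketch, not ai-shell's actual code; the `--env-file` flag case is omitted, and `find_env_file` is a hypothetical name):

```shell
# Sketch of the env-file lookup: AI_SHELL_ENV_FILE wins, then the XDG path,
# then the ~/.config fallback. Prints nothing if no env file is configured.
find_env_file() {
  if [ -n "${AI_SHELL_ENV_FILE:-}" ]; then
    printf '%s\n' "$AI_SHELL_ENV_FILE"
  elif [ -n "${XDG_CONFIG_HOME:-}" ] && [ -f "$XDG_CONFIG_HOME/ai-shell/.env" ]; then
    printf '%s\n' "$XDG_CONFIG_HOME/ai-shell/.env"
  elif [ -f "$HOME/.config/ai-shell/.env" ]; then
    printf '%s\n' "$HOME/.config/ai-shell/.env"
  fi
}

find_env_file || true
```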
Example `.env`:

```bash
GH_TOKEN=github_pat_your_token_here
```

Quick smoke test (inside the container):

```bash
gh auth status
gh api user --jq .login
```

On the first `ai-shell up` (when the container is created), ai-shell runs `.ai-shell/setup-git-ssh.sh` inside the container.
- It generates `~/.ssh/id_ed25519` inside the container (stored in the persistent `/root` volume).
- It adds the public key to your GitHub account via `gh ssh-key add`.
- It configures git to use SSH for GitHub (`url."git@github.com:".insteadOf`).
- If `gh` is not authenticated and no env file was provided, `ai-shell up` will skip SSH bootstrap and print next steps (you can authenticate with `gh auth login` inside the container, then re-run `ai-shell up`).
Recommended: create a global env file with GH_TOKEN (or authenticate once interactively; the persistent /root volume keeps your gh login).
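The SSH rewrite step can be reproduced in isolation in a throwaway git config, which is a safe way to see what the `insteadOf` setting does. The value `https://github.com/` is the conventional rewrite target, an assumption here; check `.ai-shell/setup-git-ssh.sh` for exactly what ai-shell writes:

```shell
# Demonstrate the url.insteadOf rewrite in a throwaway config.
# GIT_CONFIG_GLOBAL (git >= 2.32) redirects "global" config to a temp file,
# so your real ~/.gitconfig is untouched.
export GIT_CONFIG_GLOBAL="$(mktemp)"
git config --global 'url.git@github.com:.insteadOf' 'https://github.com/'
# git now rewrites https GitHub URLs to SSH on clone/fetch/push:
git config --global --get 'url.git@github.com:.insteadOf'
```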
The following BASE_IMAGE values (Docker images) have been tested with the per-family Dockerfile templates. Node.js 22 LTS is installed at build time from official binaries; both AI agents install and run successfully.
| Base image | Family | Node.js | git | gh | ssh | Claude Code | Cursor Agent |
|---|---|---|---|---|---|---|---|
| `ubuntu:24.04` | apt | 22.22.1 | 2.43.0 | 2.88.1 | 9.6p1 | 2.1.81 ✅ | 2026.03.20 ✅ |
| `fedora:40` | dnf | 22.22.1 | 2.49.0 | 2.65.0 | 9.6p1 | 2.1.81 ✅ | 2026.03.20 ✅ |
| `opensuse/leap:15.6` | zypper | 22.22.1 | 2.51.0 | 2.88.1 | 9.6p1 | 2.1.81 ✅ | 2026.03.20 ✅ |
| `opensuse/tumbleweed` | zypper | 22.22.1 | 2.53.0 | 2.88.1 | 10.2p1 | 2.1.81 ✅ | 2026.03.20 ✅ |
| `archlinux:latest` | pacman | 22.22.1 | 2.53.0 | 2.88.1 | 10.2p1 | 2.1.81 ✅ | 2026.03.20 ✅ |
Note: Alpine Linux is not supported. Both Cursor Agent and Claude Code bundle glibc-linked Node.js binaries that are incompatible with Alpine's musl libc.
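A quick way to check which libc a candidate base image uses (run inside the container; glibc answers this `getconf` query, musl-based images like Alpine do not):

```shell
# glibc prints something like "glibc 2.39"; on musl the query fails.
getconf GNU_LIBC_VERSION 2>/dev/null || echo "no glibc detected (likely musl)"
```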
Notes:
- Versions vary by distro (these are the observed versions from the test run).
- Tumbleweed and Arch are rolling-release; versions will change frequently.
- For openSUSE distros, `gh` is installed from official GitHub CLI releases (not available in repos).
Before first use, run the one-time global setup:

```bash
ai-shell setup
```

This creates the global config (`~/.config/ai-shell/config.toml`) and env file (`~/.config/ai-shell/.env`).
Alternatively, configure the container runtime manually:

```bash
ai-shell config set-mode <docker|podman>
```

Optional (but recommended): set a default base image (an alias name) and define aliases:

```bash
ai-shell config set-default-base-image ubu
ai-shell config alias set ubuntu24 ubuntu:24.04 apt
ai-shell config show
```

System-wide install:

```bash
make install
```

User install:

```bash
make install PREFIX="$HOME/.local"
```

Build without installing:

```bash
make build
./bin/ai-shell --help
```

First-time setup (one-time per machine):

```bash
ai-shell setup
```

This creates:

- Global config (`~/.config/ai-shell/config.toml`) with mode (docker/podman)
- Global env file (`~/.config/ai-shell/.env`) for `GH_TOKEN`
Initialize a project (per-project):

```bash
ai-shell init
```

This creates:

- Per-project `.ai-shell/` directory with `Dockerfile`, `docker-compose.yml` (auto-generated), `docker-compose.override.yml` (user-editable), and scripts
Build and start the container:

```bash
ai-shell up
```

This command will:

- Build the Docker image from `.ai-shell/Dockerfile`
- Create a container (optionally using a global env file if present)
- Mount your project directory to `/work`
- Create a persistent volume for `/root` (home directory)
- Mount your Cursor credentials from `$HOME/.config/cursor` to `/root/.config/cursor` (read-only)
- Run `.ai-shell/bootstrap-tools.sh` (empty by default; optional place for extra runtime setup)
ai-shell builds its image from .ai-shell/Dockerfile. The Dockerfile requires a base image (the FROM image) via build-arg BASE_IMAGE.
ai-shell up, ai-shell init, and ai-shell regen accept only alias names defined in your global config (base-image-aliases). Bare image references like ubuntu:24.04 are rejected — define an alias first, then pass that name.
You can provide a base image for up/recreate either by flag or as an optional positional arg:
```bash
ai-shell config alias set ubuntu24 ubuntu:24.04 apt
ai-shell up ubuntu24
# or
ai-shell up --base-image ubuntu24
```

Note: changing the base image affects builds. Existing containers need `--recreate` to pick up the new image.
Each workdir gets its own container + /root volume. The container/volume names are derived from the canonical (absolute) workdir path.
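The derivation itself is internal to ai-shell (use `ai-shell instance` to see the real names), but conceptually it is a pure function of the canonical path. A hypothetical sketch, with an invented hash scheme:

```shell
# Hypothetical naming sketch: hash the canonical workdir path into a short
# instance id, then prefix container and volume names with it. The real
# derivation may differ; the ai_agent_shell_home_* volume prefix is the
# documented default naming scheme.
workdir=$(realpath .)
iid=$(printf '%s' "$workdir" | sha256sum | cut -c1-12)
echo "container: ai-agent-shell-$iid"
echo "volume:    ai_agent_shell_home_$iid"
```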
Examples:
```bash
# Create/start an instance for some other project folder
ai-shell up --workdir /path/to/project

# List all managed instances
ai-shell ls
```

Or manually:
```bash
# Build the image (from a project with .ai-shell/ scaffolded).
# The image name is per-instance: ai-agent-shell-<iid>, where <iid> is shown by `ai-shell instance`.
# Use a literal image ref here; `ai-shell up` uses alias names from your global config instead.
docker build -t ai-agent-shell-<iid> --build-arg BASE_IMAGE=python:3.12-slim .ai-shell
```

Important: ai-shell "metadata" is implemented as container labels (e.g. `net.datatheory.ai-shell.managed=true`).
A plain docker run ... ai-agent-shell-<iid> creates a usable container, but it will not be detected/managed by ai-shell
commands like ai-shell ls/start/stop/rm unless you add the expected labels.
ai-shell discovers the workdir dynamically from the container's /work bind mount rather than storing it as a label. This provides:
- Portability: `docker-compose.yml` files can be committed to source control without machine-specific paths
- Single source of truth: the `/work` bind mount is the canonical identity
- Consistency: both `ai-shell up` and `docker compose` use the same mechanism
Every ai-shell container mounts the workdir to `/work`:

- Via `docker-compose.yml`: `..:/work` (relative to `.ai-shell/`)
- Via `ai-shell up`: `-v workdir:/work`
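You can recover the workdir yourself the same way, using docker's Go-template inspect format (the container name in the comment is illustrative; take a real one from `ai-shell ls`):

```shell
# Select the Source of the mount whose Destination is /work.
tmpl='{{range .Mounts}}{{if eq .Destination "/work"}}{{.Source}}{{end}}{{end}}'
# Against a running container this prints the host workdir path, e.g.:
#   docker inspect -f "$tmpl" ai-agent-shell-abc123
printf '%s\n' "$tmpl"
```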
If you really want to create the container manually and have it be manageable by ai-shell, use ai-shell instance
to print the correct derived names + labels for your workdir, then pass them to docker run:
```bash
# Print the derived container/volume names and labels for this workdir:
ai-shell instance --workdir "$(pwd)"

# Then use the printed values in your docker run. Example shape:
docker run -d \
  --name "<container_from_ai_shell_instance>" \
  --label net.datatheory.ai-shell.managed=true \
  --label net.datatheory.ai-shell.schema=1 \
  --label "net.datatheory.ai-shell.instance=<iid_from_ai_shell_instance>" \
  --label "net.datatheory.ai-shell.volume=<volume_from_ai_shell_instance>" \
  -v "$(pwd)":/work \
  -v "<volume_from_ai_shell_instance>":/root \
  -v "$HOME/.config/cursor":/root/.config/cursor:ro \
  --env-file "$HOME/.config/ai-shell/.env" \
  ai-agent-shell-<iid>
```

Note: the workdir is NOT stored as a label; ai-shell discovers it from the `/work` bind mount at runtime.
Tip: When using the ai-shell CLI (not manual docker run), the container name is usually ai-agent-shell-<id> (derived from the workdir). Run ai-shell ls to see the SHORT id and use ai-shell enter <short> / ai-shell stop <short> without typing the full container name.
Note: TARGET prefixes must be unique; if a prefix matches multiple instances, ai-shell will error with “ambiguous target” and print candidates.
The ai-shell CLI provides a convenient way to start/stop/recreate and enter the container.
Usage:

```bash
ai-shell --help
ai-shell status   # affects current directory's instance
ai-shell stop     # affects current directory's instance
ai-shell start    # affects current directory's instance
ai-shell stop --workdir /path/to/project

# Or target an instance by SHORT/IID/container prefix:
ai-shell ls
ai-shell enter <short>
ai-shell stop <short>
```

To remove all ai-shell managed containers, their associated `/root` volumes, and any images those containers use:

```bash
ai-shell rm --nuke
```

This also attempts to remove orphaned volumes matching the default naming scheme `ai_agent_shell_home_*`.
- By default, `--nuke` prompts for confirmation (`Type NUKE to continue:`).
- If no TTY is available, it refuses unless you pass `--yes`:

```bash
ai-shell rm --nuke --yes
```

Features:
- Checks if the container exists before attempting to start/stop
- Verifies container state to avoid unnecessary operations
- Provides clear error messages if the container doesn't exist or operations fail
- Respects the `AI_SHELL_CONTAINER` environment variable for custom container names
Note: If the container doesn't exist, run ai-shell setup (once per machine), then ai-shell init (per project), then ai-shell up to create it.
Each project has its own .ai-shell/ directory (created by ai-shell init) containing:
- `docker-compose.override.yml` - edit this to add volume mounts, environment variables, ports, services; never overwritten by ai-shell
- `Dockerfile` - tool installation for the chosen distro family (`apt`, `dnf`, `zypper`, or `pacman`) is emitted between marker comments (`# === AI-SHELL AUTO-GENERATED — DO NOT EDIT BELOW THIS LINE ===` … `# === AI-SHELL AUTO-GENERATED — DO NOT EDIT ABOVE THIS LINE ===`). Put your own `RUN`/`COPY`/etc. below the closing marker; that user section is kept when you run `ai-shell init --force`, which only replaces the marked auto-generated block.
- `bootstrap-tools.sh` - optional runtime hook (ships empty); runs on each `ai-shell up` if you need installs or setup that cannot live in the image
- `setup-git-ssh.sh` - customize SSH/git setup
Note: docker-compose.yml is auto-generated — do not edit it by hand. Use docker-compose.override.yml for customizations. Both files are automatically merged by docker compose / podman-compose.
To regenerate `docker-compose.yml` with a new random instance ID (e.g. to get a fresh container/volume name):

```bash
ai-shell regen --base-image ubuntu24
```

You can commit `.ai-shell/` to version control so team members get the same container setup.
Using Docker Compose directly:

```bash
cd .ai-shell
docker compose up -d --build
docker compose exec ai-shell bash
```

Or with podman-compose:

```bash
cd .ai-shell
podman-compose up -d --build
podman-compose exec ai-shell bash
```

Verify the agents:

```bash
ai-shell ls
ai-shell enter <short>
# then inside the container:
cursor-agent --help
claude --version
```

- No API key required
- Uses credentials from host machine
- Credentials mounted from `$HOME/.config/cursor` on host to `/root/.config/cursor` in container
- Requires Cursor to be installed and configured on your host machine
- Credentials are mounted read-only (changes in the container don't affect the host)
- Credentials live in `/root/.claude` inside the container (part of the persistent `/root` named volume)
- Authenticate once inside the container: `claude` will prompt for login on first use; credentials persist across rebuilds
- You can also set `ANTHROPIC_API_KEY` in your env file for API-key-based auth
`git` is available in the image. `gh` (GitHub CLI) is installed in the image, but it still needs authentication for API operations (typically `gh auth login` inside the container or setting `GH_TOKEN`).
The container persists data in two locations:
- Project directory (`/work`): files created here appear in your local directory
- Docker volume (`ai_agent_shell_home` → `/root`): home directory, configs, and installed packages
ai-shell setup creates global configuration:
- `config.toml` (mode + seeded base image aliases)
- a global `.env` file (optionally containing `GH_TOKEN`)
Interactive (TTY):

```bash
ai-shell setup
```

Non-interactive (scripts/CI; defaults to docker):

```bash
ai-shell setup --yes
```

Where it writes (defaults):

- `config.toml`: `$XDG_CONFIG_HOME/ai-shell/config.toml` or `~/.config/ai-shell/config.toml`
- `.env`: `$XDG_CONFIG_HOME/ai-shell/.env` or `~/.config/ai-shell/.env`
Seeded base image aliases (each maps an image and a family used to pick the Dockerfile template: apt, dnf, zypper, or pacman):
- `ubu` → `ubuntu:24.04` (apt)
- `deb` → `debian:12-slim` (apt)
- `fed` → `fedora:40` (dnf)
- `suse` → `opensuse/leap:15.6` (zypper)
- `tw` → `opensuse/tumbleweed` (zypper)
- `arch` → `archlinux:latest` (pacman)
If you have an older config.toml from before aliases included a family field, it will not parse; run ai-shell setup again to regenerate configuration.
GH_TOKEN behavior:
- Interactive: choose to (1) run a host command (default `gh auth token`), (2) enter a token manually (input hidden), or (3) skip.
- Non-interactive: attempts `gh auth token`; if unavailable or it fails, writes a placeholder comment instead.
ai-shell init scaffolds per-project configuration:
- `.ai-shell/Dockerfile`
- `.ai-shell/docker-compose.yml`
- `.ai-shell/bootstrap-tools.sh` (empty runtime hook)
- `.ai-shell/setup-git-ssh.sh`
- `.ai-shell/README.md`
```bash
ai-shell init
```

Initialize a specific workdir:

```bash
ai-shell init --workdir /path/to/project
```

Where it writes:

- `.ai-shell/`: in the current workdir (or `--workdir`)
Environment variables (can be set in .env or as container env vars):
- `AI_SHELL_CONTAINER`: container base name (default: `ai-agent-shell`)
- `AI_SHELL_IMAGE`: image name (default: `ai-agent-shell`)
- `AI_SHELL_VOLUME`: volume base name for `/root` (default: `ai_agent_shell_home`)
- `GH_TOKEN`: optional token for GitHub CLI (`gh`) authentication
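For example, a global `.env` overriding the defaults might look like this (values are illustrative, not recommendations):

```shell
# Example global .env using the variables above (values are illustrative):
AI_SHELL_CONTAINER=my-agent-shell
AI_SHELL_IMAGE=my-agent-shell
AI_SHELL_VOLUME=my_agent_home
GH_TOKEN=github_pat_your_token_here
```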
- Nushell installation: automatically install Nushell (`nu`) in the container (likely via Dockerfile customization or the optional `bootstrap-tools.sh` hook).
- Nushell OpenAI plugins: automatically install/configure Nushell plugins like `gpt2099.nu` and `nu.ai`.
- OpenAI credentials: provide a first-class way to configure OpenAI credentials (for example `OPENAI_API_KEY`) for tools that need them, ideally integrating with the existing env-file discovery (`$XDG_CONFIG_HOME/ai-shell/.env` or `$HOME/.config/ai-shell/.env`).
- Mistral via Continue (optional): add guidance and/or integration work to connect Mistral through Continue; it should be a good option for OCaml-focused work.