Fastest way to deploy Physical AI on your hardware
Simple CLI for Physical AI: Fine-tune and serve models in the physical world; optimized for edge & on-device operations
📢 IMPORTANT: Package Renamed
This package has been renamed from solo-server to solo-cli! If you're upgrading from solo-server, please see the Migration Guide for upgrade instructions.
- Old: pip install solo-server
- New: pip install solo-cli
- Config Migration: ~/.solo_server → ~/.solo
- CLI Command: Still solo (unchanged) ✅
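In practice, the upgrade amounts to swapping the package and renaming the config directory. A minimal sketch, assuming a plain pip environment and that the config directory only needs a rename (the Migration Guide is the authoritative reference):

# Swap packages, then carry over the existing configuration
pip uninstall solo-server          # remove the old package
pip install solo-cli               # install the renamed package
mv ~/.solo_server ~/.solo          # assumes a simple directory rename is enough
solo --help                        # the CLI entry point is unchanged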
Solo-CLI powers Physical AI inference by providing access to efficiency-tuned AI models in the real world. From language to vision to action models, Solo-CLI lets you interact with cutting-edge, on-device AI directly from the terminal. It is tailored for context-aware intelligence, specialized for mission-critical tasks, and tuned for the edge.
Upgrading from solo-server? See the Migration Guide first.
First, install the uv package manager and set up a virtual environment as explained in prereq.md.
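For reference, a typical uv setup looks roughly like this (a sketch, not a substitute for prereq.md; the installer URL is uv's official one, and the solo_venv name matches the environment used later in this README):

# Install uv, then create and activate a virtual environment
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv solo_venv
source solo_venv/bin/activate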
# Choose one of the following to install solo-cli
# 1. Install solo-cli from PyPI
uv pip install solo-cli
# 2. Install solo-cli from source
git clone https://github.com/GetSoloTech/solo-cli.git
cd solo-cli
uv pip install -e .
# Solo commands
solo --help
For the full installation demo, click here to watch on YouTube.
For Mac users, we provide an automated installation script that handles all the setup steps:
# Clone the repository
git clone https://github.com/GetSoloTech/solo-cli.git
cd solo-cli
# Make the installation script executable
chmod +x install_mac.sh
# Run the automated installation
./install_mac.sh

The script will automatically:
- Install uv package manager (version 0.9.3)
- Create a virtual environment with Python 3.12.12
- Set up environment variables for dependencies
- Install solo-cli in development mode with fallback handling for mujoco dependencies
After installation, activate the virtual environment:
source solo_venv/bin/activate

To see the full video, click here to watch on YouTube.
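After activation, a quick sanity check can confirm the result (the version numbers are the ones the script pins, per the list above):

python --version    # expect Python 3.12.12
uv --version        # expect uv 0.9.3
which solo          # should resolve inside solo_venv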
solo --help
╭─ Commands ──────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ setup     Set up Solo CLI environment with interactive prompts and saves configuration to config.json.               │
│ robo      Robotics operations: motor setup, calibration, teleoperation, data recording, training, and inference      │
│ serve     Start a model server with the specified model.                                                             │
│ status    Check running models, system status, and configuration.                                                    │
│ list      List all downloaded models available in HuggingFace cache and Ollama.                                      │
│ test      Test if the Solo CLI is running correctly. Performs an inference test to verify server functionality.      │
│ stop      Stops Solo CLI services. You can specify a server type with 'ollama', 'vllm', or 'llama.cpp'.              │
│           Otherwise, all Solo services will be stopped.                                                               │
│ download  Downloads a Hugging Face model using the huggingface repo id.                                              │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
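Models can also be fetched ahead of time with download and verified with list. A sketch (the repo id is illustrative, and the positional-argument form is an assumption; check solo download --help for the exact syntax):

# Pull a model from Hugging Face, then confirm it shows up in the local cache
solo download Qwen/Qwen2.5-0.5B-Instruct    # repo id is illustrative
solo list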
# Note that you will need Docker for solo serve
solo setup
solo serve --server ollama --model llama3.2:1b
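Once the server is running, the management subcommands from the table above apply directly. For instance (bare invocations shown; per the command table, solo stop with no server type stops all Solo services):

solo status    # check the running model and system status
solo test      # run an inference test against the server
solo stop      # stop all Solo services when done

Find more details here: Solo Robo Documentation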
# Motors (both) โ Calibrate (both) โ Teleop
solo robo --motors all
solo robo --calibrate all
solo robo --teleop
# Record a new local dataset with prompts
solo robo --record
# Train ACT or SmolVLA Policy on a recorded dataset and push to Hub
solo robo --train
# Inference with a hub model id (with optional Teleop override)
solo robo --inference

Find more details here:
- OpenAI → OpenAI API Docs
- Ollama → Ollama API Docs
# Chat request (OpenAI-compatible endpoint)
curl http://localhost:5070/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "llama3.2",
"messages": [{"role": "user", "content": "Analyze sensor data"}],
"tools": [{"type": "mcp", "name": "VitalSignsMCP"}]
}'

# Chat request (Ollama native endpoint)
curl http://localhost:5070/api/chat -d '{
"model": "llama3.2",
"messages": [
{
"role": "user",
"content": "why is the sky blue?"
}
]
}'
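Beyond chat, Ollama's native API also exposes a one-shot generate route, which should be reachable on the same port. A sketch (assumes Ollama's standard /api/generate endpoint is served unchanged on port 5070):

# One-shot generation request (non-streaming)
curl http://localhost:5070/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

Navigate to the config file: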
~/.solo/config.json
{
"hardware": {
"use_gpu": false,
"cpu_model": "Apple M3",
"cpu_cores": 8,
"memory_gb": 16.0,
"gpu_vendor": "None",
"gpu_model": "None",
"gpu_memory": 0,
"compute_backend": "CPU",
"os": "Darwin"
},
"user": {
"domain": "Software",
"role": "Full-Stack Developer"
},
"server": {
"type": "ollama",
"ollama": {
"default_port": 5070
}
},
"active_model": {
"server": "ollama",
"name": "llama3.2:1b",
"full_model_name": "llama3.2:1b",
"port": 5070,
"last_used": "2025-10-09 11:30:06"
}
}
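The saved configuration can be inspected or scripted against directly, for example (assumes jq is installed):

# Show which model is currently active and on which port
jq '.active_model' ~/.solo/config.json

To contribute: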
- Fork the repository
- Create a feature branch (git checkout -b feature/name)
- Commit changes (git commit -m 'Add feature')
- Push to the branch (git push origin feature/name)
- Open a Pull Request




