This repository extends the LeRobot framework by introducing an ONNXRuntime-based inference backend for the SmolVLA policy model. It enables execution on both standard CPU and custom hardware backends such as ETSoC through the ETGlow provider.
The implementation preserves LeRobot’s dataset and policy interface while adding an interchangeable runtime layer for deterministic and hardware-specific inference.
LeRobot provides a unified API for robot learning and policy evaluation. This repository focuses exclusively on inference, providing an ONNXRuntime integration layer that mirrors LeRobot’s PyTorch-based inference flow.
- ONNXRuntime backend compatible with LeRobot’s SmolVLA policy
- Support for CPU and ETGlow execution providers
- Reference PyTorch implementation for output comparison
- Deterministic behavior for reproducible inference
- Lightweight testing utilities to validate backend equivalence
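As a sketch of how an interchangeable runtime layer can pick its execution provider, the mapping from a `--device` flag to an ONNXRuntime provider list might look like the following. Note that `providers_for` is a hypothetical helper, and `EtGlowExecutionProvider` is an assumed spelling of the ETGlow provider name; check `onnxruntime.get_available_providers()` on your build.

```python
def providers_for(device: str) -> list[str]:
    """Map a --device flag to an ONNXRuntime provider list.

    "EtGlowExecutionProvider" is an assumed provider name; verify it
    against onnxruntime.get_available_providers() on your installation.
    """
    if device.upper() == "ET":
        # Prefer the ETSoC backend, with CPU as fallback for unsupported ops.
        return ["EtGlowExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]

# A session would then be created roughly as:
# session = onnxruntime.InferenceSession("smolvla.onnx",
#                                        providers=providers_for("ET"))
```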
- Enter the development environment.

  For ETSoC execution:

  ```shell
  ./dev-env-et.bash
  ```

  For CPU-only execution:

  ```shell
  ./dev-env-cpu.bash
  ```
- Install virtual environment tools:

  ```shell
  apt update && apt install python3.10-venv
  ```

- Create and activate a virtual environment:

  ```shell
  python3 -m venv venv
  source venv/bin/activate
  ```

- Install dependencies:

  ```shell
  apt-get update && apt-get install -y libosmesa6 libosmesa6-dev git cmake libglib2.0-0
  pip install --upgrade-strategy only-if-needed -r requirements.txt
  ```
This ensures a minimal, dependency-controlled Python environment suitable for inference or testing.
To run ONNXRuntime inference on CPU:

```shell
python -m src.inference.smolvla --device CPU
```

or, for ETSoC hardware:

```shell
python -m src.inference.smolvla --device ET
```

To evaluate the model in simulation, use the eval script:

```shell
python -m src.inference.eval --device CPU
```

If a display is available, this command launches a small visualization of the evaluation process. Otherwise, it saves the current frame to `sim_frame.png` and later compiles a video in the `outputs` folder.

To run the reference Torch-based implementation:

```shell
python -m src.inference.smolvla_torch
```

Each script loads a dataset sample and performs a single policy `sample_action` step, returning one action vector.
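That one-shot flow can be sketched as below. The names `load_sample` and `OnnxPolicy` are hypothetical stand-ins for the repo's dataset loading and session wrapper, not its actual API:

```python
# Hypothetical sketch of the single-step inference flow (names are
# illustrative; the real scripts load a LeRobot dataset sample and run
# an ONNXRuntime session).

def load_sample() -> dict:
    # Stand-in for loading one observation from a dataset.
    return {"observation.state": [0.0, 0.1, 0.2]}

class OnnxPolicy:
    def __init__(self, device: str = "CPU") -> None:
        # Real code would build an onnxruntime.InferenceSession here,
        # choosing the execution provider from `device` ("CPU" or "ET").
        self.device = device

    def sample_action(self, obs: dict) -> list[float]:
        # Real code would feed `obs` to the ONNX session; here we return
        # a fixed-size placeholder action vector.
        return [0.0] * len(obs["observation.state"])

action = OnnxPolicy("CPU").sample_action(load_sample())
```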
To verify consistency between ONNXRuntime and Torch-based outputs:

```shell
PYTHONPATH=. pytest -s src/tests/test_compare_policies.py
```

This test executes both inference paths and performs a numerical comparison of their outputs.
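The kind of element-wise tolerance check such a comparison performs can be sketched with the standard library. The tolerances and the `actions_close` helper below are illustrative, not the test's actual thresholds:

```python
import math

def actions_close(a, b, rtol=1e-4, atol=1e-5):
    """Compare two action vectors element-wise within tolerances."""
    if len(a) != len(b):
        return False
    return all(math.isclose(x, y, rel_tol=rtol, abs_tol=atol)
               for x, y in zip(a, b))

# e.g. comparing an ONNXRuntime action against the Torch reference:
assert actions_close([0.12, -0.30], [0.12000001, -0.30000002])
```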
To run a testing simulation:

```shell
./run_eval_cpu.bash   # CPU backend
./run_eval_et.bash    # ETSoC backend
```
```
src/
├── inference/
│   ├── eval_torch.py             # Reference Torch-based evaluation script
│   ├── eval.py                   # ONNXRuntime evaluation script (CPU / ET)
│   ├── smolvla_torch.py          # Reference Torch-based inference
│   └── smolvla.py                # ONNXRuntime inference entrypoint (CPU / ET)
└── tests/
    └── test_compare_policies.py  # Cross-backend consistency test
```
- This repository is an extension, not a fork, of LeRobot.
- The primary purpose is backend experimentation and ONNXRuntime integration.
- Designed for controlled inference evaluation rather than model training.
- Deterministic setup is provided to ensure reproducible CPU runs.
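Determinism of this kind typically comes from seeding every random number generator up front. A minimal stdlib illustration follows; the actual scripts would also need to seed NumPy and Torch for fully reproducible runs:

```python
import random

def set_seed(seed: int = 0) -> None:
    """Seed the stdlib RNG. Real inference code would additionally call
    numpy.random.seed(seed) and torch.manual_seed(seed)."""
    random.seed(seed)

set_seed(42)
first = [random.random() for _ in range(4)]
set_seed(42)
second = [random.random() for _ in range(4)]
assert first == second  # identical draws after re-seeding
```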