SunFounder PiCar-X v2.0 chassis with Ollama AI on Raspberry Pi 5 (8 GB).
- Hardware: SunFounder PiCar-X v2.0 chassis — Adafruit PCA9685, ADS1115, TB6612 control boards — wiring guide
- AI: Ollama local models (vision: llava, text: any)
- Language: Python 3.10+, async-first, TDD
The SunFounder Robot HAT v4 has been replaced with discrete Adafruit components. The Robot HAT ships with closed, non-auditable firmware which presents an unacceptable supply-chain risk for a device with direct motor and GPIO control. The Adafruit stack is fully open source and auditable.
| Component | Role |
|---|---|
| Adafruit PCA9685 — 16-ch PWM/Servo Driver (I2C) | Steering, camera pan/tilt servos |
| Adafruit ADS1115 — 16-bit ADC, 4-ch (I2C) | Grayscale/line sensors, battery voltage |
| Adafruit TB6612 — 1.2A DC Motor Driver | Left and right drive motors |
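As an illustration of the ADC's role, turning a raw ADS1115 reading into a battery voltage is simple arithmetic. The divider ratio below is a placeholder assumption for illustration, not a measured value from this build:

```python
# Convert a raw ADS1115 reading to battery voltage.
# The ADS1115 is a signed 16-bit ADC: at its default +/-4.096 V gain,
# full scale (32767) corresponds to 4.096 V at the input pin.
# ASSUMPTION: divider_ratio depends on the actual resistor divider
# between the battery and the ADC input; 3.0 is a placeholder.

def battery_voltage(raw: int, divider_ratio: float = 3.0) -> float:
    """Return the battery voltage implied by a raw ADS1115 reading."""
    pin_voltage = raw * 4.096 / 32767   # volts at the ADC input pin
    return pin_voltage * divider_ratio  # volts at the battery terminals
```

With a 3:1 divider, a full-scale reading maps to 12.288 V at the battery; in practice the gain setting and divider must match the real wiring in docs/hardware-setup.md.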
See docs/hardware-setup.md for full wiring diagrams,
pin assignments, and the pre-power-on checklist before connecting hardware.
```
picar-x-custom/
├── src/picarx_custom/
│   ├── config.py            # Pydantic settings (env-driven)
│   ├── hardware/
│   │   ├── car.py           # CarProtocol + AdafruitCar implementation
│   │   └── camera.py        # OpenCV camera wrapper
│   ├── behaviors/
│   │   └── base.py          # Abstract async Behavior base class
│   └── ai/
│       └── client.py        # Ollama client (chat + vision)
├── tests/                   # pytest suite — mirrors src/
│   ├── conftest.py          # MockCar fixture
│   ├── hardware/
│   ├── behaviors/
│   └── ai/
├── docs/
│   └── hardware-setup.md    # Wiring guide, pin assignments, verification steps
├── scripts/
│   └── health_check.py      # Verify Ollama + hardware connectivity
├── prompts/
│   └── vision_describe.txt  # Versioned prompt templates
├── pyproject.toml
├── Makefile
└── .env.example
```
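The hardware layer centers on `CarProtocol` in `car.py`, which lets `AdafruitCar` and the `MockCar` test double be used interchangeably. A minimal sketch of how that pairing might look; the method names (`forward`, `stop`) are assumptions for illustration, not the project's confirmed interface:

```python
from typing import Protocol


class CarProtocol(Protocol):
    """Structural interface shared by AdafruitCar and test doubles."""

    def forward(self, speed: int) -> None: ...
    def stop(self) -> None: ...


class MockCar:
    """Records calls so behavior tests can assert on motion commands."""

    def __init__(self) -> None:
        self.calls: list[tuple[str, int | None]] = []

    def forward(self, speed: int) -> None:
        self.calls.append(("forward", speed))

    def stop(self) -> None:
        self.calls.append(("stop", None))


def drive(car: CarProtocol) -> None:
    # Any object matching the protocol works — no hardware needed in tests.
    car.forward(30)
    car.stop()
```

Because `Protocol` is structural, `MockCar` needs no inheritance from the hardware class, which keeps the test suite free of Adafruit imports.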
```bash
make install            # creates .venv and installs dev deps
cp .env.example .env    # adjust values as needed
make install-hw         # adds Adafruit hardware libs to the venv
```

Note: `adafruit-blinka`, `adafruit-circuitpython-pca9685`, and related packages require Raspberry Pi GPIO/I2C and will not install on a standard Linux/macOS machine. All other code runs and tests pass without hardware present.

Install Ollama on the Raspberry Pi, then pull a vision model:

```bash
ollama pull llava:7b
```

Set `OLLAMA_HOST` in `.env` if Ollama runs on a different machine.
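For reference, a vision request to Ollama's `/api/chat` endpoint carries the camera frame as base64 inside the message. A standard-library sketch of building that request body; the helper name is hypothetical, though the payload shape follows Ollama's chat API:

```python
import base64
import json


def build_vision_payload(prompt: str, jpeg_bytes: bytes,
                         model: str = "llava:7b") -> str:
    """Return the JSON body for an Ollama /api/chat vision request."""
    message = {
        "role": "user",
        "content": prompt,
        # Ollama expects raw base64 strings (no data: URI prefix) in "images".
        "images": [base64.b64encode(jpeg_bytes).decode("ascii")],
    }
    return json.dumps({"model": model, "messages": [message], "stream": False})
```

The resulting string can be POSTed to `{OLLAMA_HOST}/api/chat`; the real client in `src/picarx_custom/ai/client.py` would do this asynchronously.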
```bash
make test       # run pytest with coverage
make lint       # ruff check
make typecheck  # mypy
make check      # lint + typecheck + test
```
```bash
# On the Pi, verify everything is connected:
.venv/bin/python scripts/health_check.py
```

To add a new behavior:

- Create `src/picarx_custom/behaviors/my_behavior.py`
- Subclass `Behavior` and implement `async def step(self) -> None`
- Write tests in `tests/behaviors/test_my_behavior.py` using the `mock_car` fixture
- Wire it up in a script under `scripts/`
```python
from picarx_custom.behaviors.base import Behavior
from picarx_custom.hardware.car import CarProtocol


class ForwardBehavior(Behavior):
    async def step(self) -> None:
        self._car.forward(30)
```

| Variable | Default | Description |
|---|---|---|
| `OLLAMA_HOST` | `http://localhost:11434` | Ollama server URL |
| `OLLAMA_MODEL` | `llava:7b` | Model name for chat and vision |
| `CAMERA_INDEX` | `0` | OpenCV camera device index |
| `CAMERA_WIDTH` | `640` | Capture width in pixels |
| `CAMERA_HEIGHT` | `480` | Capture height in pixels |
| `LOG_LEVEL` | `INFO` | Python logging level |
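The table above maps one-to-one onto the settings object in `config.py`. A rough sketch of the same env-driven loading, shown with a plain dataclass for a dependency-free illustration (the real module uses Pydantic settings, and the field names here are assumptions):

```python
import os
from dataclasses import dataclass, field


def _env(name: str, default: str) -> str:
    """Read an environment variable, falling back to the documented default."""
    return os.environ.get(name, default)


@dataclass(frozen=True)
class Settings:
    """Env-driven settings mirroring the defaults in .env.example."""

    ollama_host: str = field(
        default_factory=lambda: _env("OLLAMA_HOST", "http://localhost:11434"))
    ollama_model: str = field(
        default_factory=lambda: _env("OLLAMA_MODEL", "llava:7b"))
    camera_index: int = field(
        default_factory=lambda: int(_env("CAMERA_INDEX", "0")))
    camera_width: int = field(
        default_factory=lambda: int(_env("CAMERA_WIDTH", "640")))
    camera_height: int = field(
        default_factory=lambda: int(_env("CAMERA_HEIGHT", "480")))
    log_level: str = field(
        default_factory=lambda: _env("LOG_LEVEL", "INFO"))
```

Because defaults are resolved at construction time, each `Settings()` call reflects the current environment, which is convenient for tests that monkeypatch variables.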