CARLA verifiers environment for autonomous-driving evaluation and RL.
| Field | Value |
|---|---|
| Environment ID | carla-env |
| Version | 0.2.0 |
| Type | StatefulToolEnv |
Scenario families:
- `trolley_micro_*`: benchmark trolley dilemmas
- `action_bias_*`, `bias_*`: action-vs-inaction trolley variants
- `maze`: hidden-goal text navigation
- `navigation*`: text-first open navigation with optional NPC vehicles/pedestrians
- `navigation_vision*`: vision-first navigation with RGB camera access
Observation modes:
- Text-first: `trolley`, `action_bias`, `maze`, `navigation*`
- Vision-first: `navigation_vision*`
```bash
uv pip install -e .
```

```bash
# Local CARLA, maze scenario
uv run vf-eval carla_env -m "openai/gpt-4.1-mini" \
  -a '{"scenario": "maze", "sandbox": {"mode": "disabled"}}' -n 1 -r 1

# Text-first navigation
uv run vf-eval carla_env -m "openai/gpt-4.1-mini" \
  -a '{"scenario": "navigation_Town10HD_v1_p1", "sandbox": {"mode": "disabled"}}' -n 1 -r 1

# Vision-first navigation
uv run vf-eval carla_env -m "qwen/qwen3-vl-8b-instruct" \
  -a '{"scenario": "navigation_vision_Town10HD_v1_p1", "sandbox": {"mode": "disabled"}}' -n 1 -r 1
```

| Argument | Default | Description |
|---|---|---|
| `scenario` | `"action_bias_saves"` | Scenario identifier |
| `host` | `$CARLA_HOST` or `127.0.0.1` | CARLA host |
| `port` | `$CARLA_PORT` or `2000` | CARLA port |
| `dataset_path` | `None` | Custom JSONL dataset |
| `trolley_micro_scoring` | `"expected"` | `"expected"` or `"actual"` |
| `sandbox` | `{"mode": "prime"}` | Prime sandbox config or local mode |
| `traffic_manager_enabled` | `None` | Explicit TrafficManager opt-in/opt-out |
| Scenario | Description |
|---|---|
| `maze` | Hidden-goal text navigation |
| `navigation` | Text-first open navigation |
| `navigation_<Map>_v<N>_p<M>` | Text-first navigation with map, vehicles, and pedestrians |
| `navigation_vision` | Vision-first open navigation |
| `navigation_vision_<Map>_v<N>_p<M>` | Vision-first navigation with map, vehicles, and pedestrians |
| `action_bias_saves` | Swerving avoids all pedestrians |
| `action_bias_less` | Swerving hits fewer pedestrians |
| `action_bias_equal` | Equal harm either way |
| `bias_<C>v<S>` | Custom action-bias setup |
| `trolley_micro_<id>` | Benchmark trolley scenario |
Common tools:
- `control_vehicle(throttle, steer)`
- `brake_vehicle(intensity)`
- `emergency_stop()`
- `lane_change(direction)`
- `init_navigation_agent(behavior)`
- `set_destination(x, y, z)`
- `follow_route(steps)`
- `get_goal_info()`
Text-first only:
- `observe()`

Vision-first only:
- `capture_image()`
`get_goal_info()` behavior:
- `maze` and `navigation*`: `distance_to_goal_m=... direction=... improving=...`
- `navigation_vision*`: `distance_to_goal_m=... improving=...`
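The `key=value` text returned by `get_goal_info()` can be consumed with a minimal parser. This helper is a sketch, not part of the environment, and the exact value formats (e.g. `improving=true` in lowercase) are assumptions:

```python
def parse_goal_info(text: str) -> dict:
    """Parse space-separated key=value tokens from a get_goal_info() string."""
    out = {}
    for token in text.split():
        key, _, value = token.partition("=")
        if key == "distance_to_goal_m":
            out[key] = float(value)       # numeric distance in meters
        elif key == "improving":
            out[key] = value.lower() == "true"  # assumed boolean encoding
        else:
            out[key] = value              # e.g. direction, kept as a string
    return out

print(parse_goal_info("distance_to_goal_m=12.5 direction=north improving=true"))
```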
- `navigation*` defaults to `traffic_manager_enabled=False`; TrafficManager is opt-in only.
- Sandbox startup defaults are mode-aware:
  - text-only scenarios: `./CarlaUnreal.sh -nullrhi -nosound`
  - vision scenarios: `./CarlaUnreal.sh -RenderOffScreen -nosound`
- `navigation_vision*` does not expose normal text observations.
- `navigation_vision*` does not include destination coordinates in the prompt.
- The CARLA Python client comes from `carla-ue5-api==0.10.0`.
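The mode-aware startup defaults above can be expressed as a small selector; `carla_launch_cmd` is a hypothetical name used only for illustration:

```python
def carla_launch_cmd(vision: bool) -> list[str]:
    """Pick CARLA server flags matching the mode-aware sandbox defaults."""
    if vision:
        # Vision scenarios need rendering, but headless (off-screen).
        return ["./CarlaUnreal.sh", "-RenderOffScreen", "-nosound"]
    # Text-only scenarios skip rendering entirely via the null RHI.
    return ["./CarlaUnreal.sh", "-nullrhi", "-nosound"]

print(" ".join(carla_launch_cmd(vision=False)))  # -> ./CarlaUnreal.sh -nullrhi -nosound
```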
Current codebase includes:
- navigation scenarios
- RGB camera capture
- rubric tracking for RL
Not included in this stage:
- depth camera
- video recording
- CARLA version-compat layer