
Update upstream#2

Open
mgagvani wants to merge 18 commits into mgagvani:dev from
autorope:main

Conversation

@mgagvani
Owner

@mgagvani mgagvani commented Feb 15, 2026

Note

Medium Risk
Touches core inference/camera plumbing and dependency resolution (TensorFlow/TFLite and new camera path), which can affect runtime behavior on edge devices, though most changes are additive and guarded.

Overview
Adds Luxonis OAK-D camera support: new parts/oak_d.py, new CAMERA_TYPE="OAKD" wiring in templates/complete.py (including optional depth recording), and config template updates to expose OAKD_* settings.

Makes TensorFlow an optional dependency for inference/training code by guarding imports in parts/interpreter.py and parts/keras.py and adding a get_tflite_interpreter() fallback chain (tflite-runtime, then ai_edge_litert, then full TensorFlow). Also fixes Keras model loading to initialize input_keys/output_keys, and updates KerasPilot inference to use the interpreter's input_keys directly.
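A minimal sketch of such a fallback chain (the actual helper in parts/interpreter.py may differ in detail; only the three runtimes named above are assumed):

```python
def get_tflite_interpreter():
    """Return a TFLite Interpreter class from the first available runtime."""
    try:
        from tflite_runtime.interpreter import Interpreter  # lightweight runtime
        return Interpreter
    except ImportError:
        pass
    try:
        from ai_edge_litert.interpreter import Interpreter  # LiteRT (TFLite successor)
        return Interpreter
    except ImportError:
        pass
    try:
        import tensorflow as tf  # full TensorFlow as a last resort
        return tf.lite.Interpreter
    except ImportError:
        raise ImportError(
            "No TFLite runtime found. Install tflite-runtime or tensorflow.")
```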

Improves hardware/runtime robustness: disables Picamera2.align_configuration on the Pi5 and allows RoboHATDriver to reuse an existing serial port. Also refreshes and expands the config templates, updates dependencies (setup.cfg: prefer tflite-runtime, relax matplotlib/pytest pins, add tensorflow-metal for macOS), and strengthens tests (MQTT client fully mocked, websocket tests wait deterministically, tubs closed after use, pytest reruns enabled).

Adds TrackSpeedPlanner utility: a Tornado-based CSV path speed editor with a bundled single-page UI and sample CSV assets under utilities/TrackSpeedPlanner.

Written by Cursor Bugbot for commit 9d34462.

DocGarbanzo and others added 17 commits June 30, 2025 22:24
* Fixed augmentation imports and test_train.py to use the new import paths.

* Add reruns in pytest ini to fix flaky web socket tests.
* Add Oak D Part (credit a6a547a)

* address review, black-ed everything, bump version
* add new folder with files

* Add TrackSpeedPlanner as regular folder

* Updated for Tornado

* Cleaned up to remove Node.js and Docker

* Fixed some bugs

CSVs now load properly without a header and file dialog has option to load from either Pi or local/laptop

* Deleting unneeded files

* Removed unnecessary files

* Fixed local save

* Make sure that exiting the program doesn't stop the Tornado server

* Got rid of unnecessary shell script

* Fix routing issue and create comprehensive documentation (#1210)

* Fix routing issue and create comprehensive documentation

- Fixed MainHandler routing conflict that prevented print statements from showing
- Replaced inaccurate documentation files with single comprehensive README.md
- Removed obsolete files (package.json, start.sh, .gitignore)
- Changed server binding to localhost for security
- Added debug print statement to MainHandler for troubleshooting

The new README accurately documents the actual functionality including:
- Interactive canvas-based path visualization with speed color coding
- Dual file loading (Pi directory browser + upload)
- Individual point speed editing with real-time visual feedback
- Proper API endpoints and CSV format specifications
- macOS port conflict troubleshooting

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Reverted to original IP address as overwrite was incorrect.

---------

Co-authored-by: Claude <noreply@anthropic.com>

---------

Co-authored-by: DocGarbanzo <47540921+DocGarbanzo@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
…ith mocks

The original test relied on external MQTT broker connection which caused intermittent failures. This change:
- Replaces real MQTT client with mocked instances to eliminate network dependency
- Adds comprehensive test coverage for both success and error scenarios
- Removes timing-dependent sleep calls and retry logic that caused flakiness
- Ensures consistent test execution across all environments

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Replace fixed sleep() calls with proper async polling to eliminate race conditions
in WebSocket calibration tests. The tests were failing on macOS CI due to
insufficient wait time for WebSocket message processing.

Changes:
- Add wait_for_attribute_value() helper method with 5-second timeout
- Replace all sleep(SLEEP) calls with async polling in 7 tests
- Remove unused imports (tornado.ioloop, time.sleep, SLEEP constant)
- Add proper timeout error messages for better debugging

Fixes intermittent test failures on slower CI environments while maintaining
fast execution on local machines.
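The helper described above can be sketched roughly like this (an asyncio version for illustration; the actual tests use Tornado's loop and a wait_for_attribute_value() method on the test class):

```python
import asyncio

async def wait_for_attribute_value(obj, attr, expected, timeout=5.0, interval=0.01):
    """Poll getattr(obj, attr) until it equals expected; fail after timeout."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while loop.time() < deadline:
        if getattr(obj, attr) == expected:
            return
        await asyncio.sleep(interval)  # yield so the code under test can run
    raise TimeoutError(f"{attr!r} never reached {expected!r} within {timeout}s")
```

Compared to a fixed sleep(SLEEP), this returns as soon as the condition holds, so it is both faster on local machines and tolerant of slow CI.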

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Comment out camera alignment to fix resolution issues on Pi5.
Just missing an "a"
* Reorganized and cleaned up config file

* Fix comment formatting in cfg_complete.py

Corrected comment formatting for web control port.
Copilot AI review requested due to automatic review settings February 15, 2026 18:03

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 8 potential issues.


)

depth_frame = self.get_frame(self.depth_queue)
rgb_frame = self.get_frame(self.rgb_queue)


OAK-D polling ignores stream toggles

High Severity

OakD._poll() always opens and reads both depth and rgb queues, even when enable_depth or enable_rgb is disabled. When one stream is off, the pipeline does not create that output, so queue access fails and the camera loop can stop.
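A minimal sketch of the guarded polling being asked for here (illustrative names; `device` stands in for the depthai.Device, and the real fix would live inside OakD._poll()):

```python
def poll_frames(device, enable_rgb, enable_depth):
    """Read only the queues whose streams were actually created."""
    rgb_frame, depth_frame = None, None
    if enable_depth:
        depth_queue = device.getOutputQueue(name="depth", maxSize=1, blocking=False)
        depth_frame = depth_queue.get().getCvFrame()
    if enable_rgb:
        rgb_queue = device.getOutputQueue(name="rgb", maxSize=1, blocking=False)
        rgb_frame = rgb_queue.get().getCvFrame()
    return rgb_frame, depth_frame
```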


cam = OakD(
    enable_rgb=cfg.OAKD_RGB,
    enable_depth=cfg.OAKD_DEPTH,
    device_id=cfg.OAKD_ID)


OAK-D ignores configured image size

Medium Severity

add_camera() creates OakD without passing cfg.IMAGE_W and cfg.IMAGE_H, so it always uses OakD defaults. This bypasses configured camera dimensions and can produce unexpected input shapes for models and heavier-than-expected processing.


# Save to Pi directory
file_path = os.path.join(os.getcwd(), filename)
with open(file_path, 'w', newline='') as f:
    f.write(csv_content)


Track editor saves to wrong directory

Medium Severity

CSVSaveHandler writes files to os.getcwd(), but file listing/loading uses Path(__file__).parent. Saving can target a different folder than the one shown in the UI, so edits may appear unsaved or disappear from the selectable file list.

Additional Locations (1)


from donkeycar.parts.robohat import RoboHATDriver
V.add(RoboHATDriver(cfg), inputs=['steering', 'throttle'])
# Share serial port with controller to avoid opening the same port twice
V.add(RoboHATDriver(cfg, serial_port=ctr.serial), inputs=['steering', 'throttle'])


MM1 startup assumes joystick serial exists

Medium Severity

RoboHATDriver is now always created with serial_port=ctr.serial. When DRIVE_TRAIN_TYPE is MM1 but the active controller is not RoboHATController, ctr can be LocalWebController and has no serial, causing runtime failure before driving starts.


return tf.lite.Interpreter
raise ImportError("No TFLite runtime found. Install tflite-runtime or tensorflow.")



TensorRT support check crashes without TensorFlow

Medium Severity

When TensorFlow imports fail, trt is set to None, but has_trt_support() still calls trt.TrtGraphConverterV2(). The function only catches RuntimeError, so it raises AttributeError instead of returning False.
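A hedged sketch of the guard (the real has_trt_support() presumably takes no argument and reads a module-level trt; the parameter here only exists to make the fallback testable):

```python
def has_trt_support(trt):
    """Return True only when a usable TensorRT converter is actually present.

    `trt` is whatever the guarded import produced: the TF-TRT module on
    success, or None when TensorFlow/TensorRT could not be imported.
    """
    if trt is None:  # the guarded import failed earlier; no TRT available
        return False
    try:
        trt.TrtGraphConverterV2()  # probe the converter
        return True
    except (RuntimeError, AttributeError, TypeError):
        # RuntimeError: TRT installed but unusable on this machine.
        # AttributeError/TypeError: a stand-in object without the expected API.
        return False
```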

Additional Locations (1)


# Save to Pi directory
file_path = os.path.join(os.getcwd(), filename)
with open(file_path, 'w', newline='') as f:
    f.write(csv_content)

Track editor allows arbitrary file overwrite

High Severity

CSVSaveHandler writes using the client-provided filename with os.path.join(os.getcwd(), filename) and no path validation. A crafted filename containing traversal segments (e.g. ../) can write outside the intended directory and overwrite arbitrary server files.
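One possible sanitization, shown as a sketch (helper name and policy are illustrative, not the actual patch):

```python
import os

def safe_save_path(base_dir, filename):
    """Neutralize traversal in a client-supplied filename before writing."""
    name = os.path.basename(filename)        # drop any directory components
    if not name or name.startswith("."):     # reject empty and hidden names
        raise ValueError(f"invalid filename: {filename!r}")
    base = os.path.realpath(base_dir)
    path = os.path.realpath(os.path.join(base, name))
    if os.path.dirname(path) != base:        # belt and braces after realpath
        raise ValueError(f"path escapes {base_dir!r}: {filename!r}")
    return path
```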


if serial_port is not None:
    self.pwm = serial_port
else:
    self.pwm = serial.Serial(cfg.MM1_SERIAL_PORT, 115200, timeout=1)


RoboHAT constructor breaks positional debug argument

Medium Severity

RoboHATDriver.__init__ inserted serial_port before debug, so existing calls like RoboHATDriver(cfg, True) now treat True as serial_port. self.pwm becomes a boolean, and later self.pwm.write(...) crashes at runtime.
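A sketch of one way to keep the old positional contract: make serial_port keyword-only, so RoboHATDriver(cfg, True) still binds True to debug. Class name and body are illustrative; the real driver opens serial.Serial(cfg.MM1_SERIAL_PORT, ...) when no port is shared.

```python
class RoboHATDriverSketch:
    """Signature sketch: serial_port is keyword-only to protect old callers."""

    def __init__(self, cfg, debug=False, *, serial_port=None):
        self.debug = debug
        if serial_port is not None:
            self.pwm = serial_port  # reuse the controller's already-open port
        else:
            # The real driver would open its own port here:
            # self.pwm = serial.Serial(cfg.MM1_SERIAL_PORT, 115200, timeout=1)
            self.pwm = None
```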


time.sleep(2) # give thread enough time to shutdown

# done running
self.oak_d_device.close()


OAK-D shutdown misses device existence check

Low Severity

shutdown() unconditionally calls self.oak_d_device.close(), but self.oak_d_device is only created when enable_rgb or enable_depth is true. With both disabled, shutdown raises AttributeError instead of exiting cleanly.
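A sketch of the guarded shutdown (the real part's 2-second thread-stop sleep is omitted here; names are illustrative):

```python
class OakDShutdownSketch:
    """Only close the device if it was ever created (both streams may be off)."""

    def __init__(self, device=None):
        self.running = True
        if device is not None:
            # In the real part, oak_d_device only exists when a stream is enabled.
            self.oak_d_device = device

    def shutdown(self):
        self.running = False
        device = getattr(self, "oak_d_device", None)  # attr may never have been set
        if device is not None:
            device.close()
```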



@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9d34462d9f


from donkeycar.parts.robohat import RoboHATDriver
V.add(RoboHATDriver(cfg), inputs=['steering', 'throttle'])
# Share serial port with controller to avoid opening the same port twice
V.add(RoboHATDriver(cfg, serial_port=ctr.serial), inputs=['steering', 'throttle'])


P1 Badge Stop referencing undefined controller in MM1 drivetrain setup

This branch now passes ctr.serial into RoboHATDriver, but add_drivetrain has no ctr variable in scope, so selecting DRIVE_TRAIN_TYPE == "MM1" raises a NameError during startup and prevents MM1 cars from launching at all. Pass the controller into add_drivetrain (or keep the previous constructor usage) before dereferencing it here.



class ImageAugmentation:
    def __init__(self, cfg, key, prob=0.5, always_apply=False):
    def __init__(self, cfg, key, prob=0.5):


P1 Badge Preserve ImageAugmentation always_apply parameter compatibility

Removing the always_apply parameter from ImageAugmentation.__init__ breaks existing callers that still pass it (for example donkeycar/management/ui/pilot_screen.py calls ImageAugmentation(..., always_apply=True)), which now throws TypeError: unexpected keyword argument 'always_apply' when the pilot screen updates augmentations. Keep a compatible signature or update all call sites in the same change.
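One compatibility-preserving option, sketched (class name is illustrative; whether to warn or silently ignore the old keyword is a project decision):

```python
import warnings

class ImageAugmentationCompat:
    """Accept and ignore always_apply so older callers keep working."""

    def __init__(self, cfg, key, prob=0.5, always_apply=None):
        if always_apply is not None:
            warnings.warn(
                "always_apply is ignored; albumentations >= 1.0 dropped it",
                DeprecationWarning, stacklevel=2)
        self.prob = prob  # the real class would build its A.Compose here
```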


Comment on lines +192 to +193
self.depth_queue: DataOutputQueue = self.oak_d_device.getOutputQueue(
    name="depth", maxSize=1, blocking=False


P2 Badge Guard OAK-D queue reads by enabled stream flags

In _poll, the code always fetches both depth and rgb output queues whenever either stream is enabled, so configurations like enable_rgb=True, enable_depth=False (or vice versa) still try to read a queue that was never created and fail at runtime. Queue creation and frame reads should be conditioned per-stream to match enable_rgb/enable_depth.



Copilot AI left a comment


Pull request overview

This pull request updates dependencies, adds new hardware support (OAK-D camera), introduces a web-based Track Speed Planner utility, improves test reliability, and makes the codebase more resilient to different TensorFlow/TFLite runtime environments.

Changes:

  • Updates Python dependencies by removing version constraints on matplotlib and pytest, adds tflite-runtime and tensorflow-metal support
  • Adds comprehensive OAK-D camera driver and configuration support
  • Introduces a new Tornado-based Track Speed Planner web utility for visualizing and editing path data CSV files
  • Improves test reliability by replacing sleep-based waits with proper async polling and adding connection error tests
  • Refactors TensorFlow imports to be optional with graceful fallbacks, supporting tflite-runtime, ai_edge_litert, and full TensorFlow

Reviewed changes

Copilot reviewed 22 out of 22 changed files in this pull request and generated 11 comments.

Summary per file:

  • setup.cfg: Updates dependency versions, adds tflite-runtime and tensorflow-metal for Pi and macOS
  • donkeycar/utilities/TrackSpeedPlanner/trackeditor.py: New Tornado web server for path data visualization and editing
  • donkeycar/utilities/TrackSpeedPlanner/static/index.html: Complete web interface with interactive canvas for path editing
  • donkeycar/utilities/TrackSpeedPlanner/test_path.csv: Sample CSV data file with 296 path points
  • donkeycar/utilities/TrackSpeedPlanner/README.md: Comprehensive documentation for the Track Speed Planner utility
  • donkeycar/tests/test_web_socket.py: Replaces sleep-based timing with async polling for more reliable tests
  • donkeycar/tests/test_train.py: Adds proper cleanup by closing tub files after use
  • donkeycar/tests/test_telemetry.py: Replaces integration tests with mocked unit tests and adds connection error handling
  • donkeycar/tests/pytest.ini: Adds pytest reruns configuration for flaky test handling
  • donkeycar/templates/complete.py: Adds OAK-D camera support and MM1 serial port sharing
  • donkeycar/templates/cfg_simulator.py: Adds OAK-D configuration parameters
  • donkeycar/templates/cfg_complete.py: Major reorganization with improved comments and OAK-D support
  • donkeycar/templates/cfg_basic.py: Updates camera type list to include OAK-D
  • donkeycar/pipeline/augmentations.py: Removes deprecated 'always_apply' parameter for albumentations compatibility
  • donkeycar/parts/robohat.py: Adds serial_port parameter to RoboHATDriver to enable port sharing
  • donkeycar/parts/oak_d.py: New driver for OAK-D depth camera with RGB and depth support
  • donkeycar/parts/keras.py: Makes TensorFlow imports optional with graceful fallback
  • donkeycar/parts/interpreter.py: Adds support for tflite-runtime and ai_edge_litert interpreters
  • donkeycar/parts/camera.py: Comments out problematic align_configuration call for Pi5 compatibility
  • donkeycar/__init__.py: Version bump to 5.2.dev6
  • README.md: Minor grammar fix


Comment on lines +15 to +22
def __init__(self, cfg, key, prob=0.5):
aug_list = getattr(cfg, key, [])
augmentations = [ImageAugmentation.create(a, cfg, prob, always_apply)
augmentations = [ImageAugmentation.create(a, cfg, prob)
for a in aug_list]
self.augmentations = A.Compose(augmentations)

@classmethod
def create(cls, aug_type: str, config: Config, prob, always) -> \
def create(cls, aug_type: str, config: Config, prob) -> \

Copilot AI Feb 15, 2026


The removal of the 'always_apply' parameter from ImageAugmentation may cause issues if any calling code explicitly passes this parameter. However, since this parameter was removed from albumentations library starting from version 1.0, this change aligns with the library's API. Ensure that the project's albumentations version is compatible with this change.


log_cli = True
log_cli_level = INFO
reruns = 3

Copilot AI Feb 15, 2026


The 'reruns = 3' configuration requires the pytest-rerunfailures plugin to be installed. This plugin is not listed in the dev dependencies in setup.cfg. Add 'pytest-rerunfailures' to the dev extras_require section to ensure this configuration works.

Suggested change
reruns = 3
# The following option requires the pytest-rerunfailures plugin, which is not
# declared as a dev dependency. Uncomment and ensure the plugin is installed
# if rerun functionality is desired.
# reruns = 3

RPi.GPIO
flatbuffers==24.3.*
tensorflow-aarch64==2.15.*
tflite-runtime

Copilot AI Feb 15, 2026


The change from 'tensorflow-aarch64' to 'tflite-runtime' is a significant shift. The tflite-runtime package only provides inference capabilities and does not include the full TensorFlow library. Ensure that the Pi installation does not require full TensorFlow capabilities for training or other operations. Additionally, verify that the new 'get_tflite_interpreter()' function in interpreter.py properly handles this change by falling back to ai_edge_litert or tensorflow when tflite-runtime is not available.

Suggested change
tflite-runtime
tflite-runtime
ai-edge-litert


camera = None
try:
    camera = OakDLite(

Copilot AI Feb 15, 2026


The class name 'OakDLite' is used in the test code but the actual class defined is 'OakD' (line 34). This will cause a NameError at runtime when executing this test code.

Suggested change
camera = OakDLite(
camera = OakD(

Comment on lines +1112 to +1113
# Share serial port with controller to avoid opening the same port twice
V.add(RoboHATDriver(cfg, serial_port=ctr.serial), inputs=['steering', 'throttle'])

Copilot AI Feb 15, 2026


The variable 'ctr' is not defined in the scope of 'add_drivetrain' function. The function signature is 'def add_drivetrain(V, cfg)' but the code tries to access 'ctr.serial' which requires 'ctr' to be passed as a parameter or made available globally. This will cause a NameError when MM1 drive train type is used.

Suggested change
# Share serial port with controller to avoid opening the same port twice
V.add(RoboHATDriver(cfg, serial_port=ctr.serial), inputs=['steering', 'throttle'])
# Share serial port with controller to avoid opening the same port twice,
# but fall back to default behavior if no controller is available.
try:
    serial_port = ctr.serial  # use existing controller serial if available
except NameError:
    serial_port = None  # let RoboHATDriver open its own serial port
V.add(RoboHATDriver(cfg, serial_port=serial_port), inputs=['steering', 'throttle'])

config = self.camera.create_preview_configuration(
config_dict, transform=transform)
self.camera.align_configuration(config)
# self.camera.align_configuration(config) # this created issues with the libcamera2 library on the Pi5, which automatically changes the resolution to 128x120 in an attempt to align with native sensor resolution
Copy link

Copilot AI Feb 15, 2026


The comment has a spelling error: 'libcamera2' should likely be 'libcamera' (without the '2'). The Picamera2 library uses libcamera (not libcamera2) as its underlying camera system.

Suggested change
# self.camera.align_configuration(config) # this created issues with the libcamera2 library on the Pi5, which automatically changes the resolution to 128x120 in an attempt to align with native sensor resolution
# self.camera.align_configuration(config) # this created issues with the libcamera library on the Pi5, which automatically changes the resolution to 128x120 in an attempt to align with native sensor resolution


# Stack both images horizontally (i.e. side by side).
images = None
if enable_rgb:
Copy link

Copilot AI Feb 15, 2026


This bare 'except' clause catches all exceptions including SystemExit and KeyboardInterrupt, which is discouraged. Consider catching specific exceptions like SerialException.

val = input("Which DepthAI Device you want to use: ")
try:
    return device_infos[int(val)]
except:
Copy link

Copilot AI Feb 15, 2026


This bare 'except' clause at line 116 is overly broad and may hide unexpected errors. Consider catching specific exceptions such as ValueError or IndexError to handle the expected error scenarios.

Suggested change
except:
except (ValueError, IndexError):

Comment on lines +34 to +255
class OakD(object):
    """
    Donkeycar part for the Oak-D camera
    Intel Movidius based depth sensing camera
    https://docs.luxonis.com/projects/hardware/en/latest/pages/DM9095.html
    https://www.kickstarter.com/projects/opencv/opencv-ai-kit-oak-depth-camera-4k-cv-edge-object-detection
    https://shop.luxonis.com/
    """

    def __init__(
        self,
        width=WIDTH,
        height=HEIGHT,
        enable_rgb=True,
        enable_depth=True,
        device_id=None,
    ):
        self.device_id = device_id  # "18443010C1E4681200" # serial number of device to use|None to use default|"list" to list devices and exit
        self.enable_rgb = enable_rgb
        self.enable_depth = enable_depth

        self.width = width
        self.height = height

        # TODO: Accommodate using device native resolutions to avoid resizing.
        self.resize = (width != WIDTH) or (height != HEIGHT)
        if self.resize:
            print(
                f"The output images will be resized from {(WIDTH, HEIGHT)} to {(self.width, self.height)} using OpenCV. Device resolution in use is 640x480."
            )

        self.pipeline = None
        if self.enable_depth or self.enable_rgb:
            self.pipeline = depthai.Pipeline()

            device_info = self.get_depthai_device_info(device_id)

            if self.enable_depth:
                self.setup_depth_camera(WIDTH, HEIGHT)

            if self.enable_rgb:
                self.setup_rgb_camera(WIDTH, HEIGHT)

            self.oak_d_device = depthai.Device(self.pipeline, device_info)

        # initialize frame state
        self.color_image = None
        self.depth_image = None
        self.frame_count = 0
        self.start_time = time.time()
        self.frame_time = self.start_time

        self.running = True

    # Taken from the demo application.
    def get_depthai_device_info(self, device_id: string):
        device_infos = depthai.Device.getAllAvailableDevices()
        if len(device_infos) == 0:
            raise RuntimeError("No DepthAI (Oak-D-Lite) device (camera) found!")
        else:
            print("Available devices:")
            for i, deviceInfo in enumerate(device_infos):
                print(f"[{i}] {deviceInfo.getMxId()} [{deviceInfo.state.name}]")

        # Set the deviceId to "list" in order to list the connected devices' ids.
        if device_id == "list":
            raise SystemExit(0)
        elif device_id is not None:
            matching_device = next(
                filter(lambda info: info.getMxId() == device_id, device_infos), None
            )
            if matching_device is None:
                raise RuntimeError(
                    f"No DepthAI device found with id matching {device_id} !"
                )
            return matching_device
        elif len(device_infos) == 1:
            return device_infos[0]
        else:
            val = input("Which DepthAI Device you want to use: ")
            try:
                return device_infos[int(val)]
            except:
                raise ValueError(f"Incorrect value supplied: {val}")

    def setup_depth_camera(self, width, height):
        # Set up left and right cameras
        mono_left = self.get_mono_camera(self.pipeline, True)
        mono_right = self.get_mono_camera(self.pipeline, False)

        # Combine left and right cameras to form a stereo pair
        stereo: depthai.node.StereoDepth = self.get_stereo_pair(
            self.pipeline, mono_left, mono_right
        )

        # Define and name output depth map
        xout_depth = self.pipeline.createXLinkOut()
        xout_depth.setStreamName("depth")

        stereo.depth.link(xout_depth.input)

    def setup_rgb_camera(self, width, height):
        cam_rgb = self.pipeline.create(depthai.node.ColorCamera)

        res = depthai.ColorCameraProperties.SensorResolution.THE_1080_P

        cam_rgb.setResolution(res)
        cam_rgb.setVideoSize(width, height)

        xout_rgb = self.pipeline.create(depthai.node.XLinkOut)
        xout_rgb.setStreamName("rgb")

        cam_rgb.video.link(xout_rgb.input)

    def get_mono_camera(self, pipeline: Pipeline, is_left: bool):
        # Configure mono camera
        mono = pipeline.createMonoCamera()

        # Set camera resolution
        mono.setResolution(depthai.MonoCameraProperties.SensorResolution.THE_480_P)

        if is_left:
            # Get left camera
            mono.setBoardSocket(depthai.CameraBoardSocket.LEFT)
        else:
            # Get right camera
            mono.setBoardSocket(depthai.CameraBoardSocket.RIGHT)

        return mono

    def get_stereo_pair(self, pipeline: Pipeline, mono_left, mono_right):
        # Configure the stereo pair for depth estimation
        new_stereo = pipeline.createStereoDepth()
        # Checks occluded pixels and marks them as invalid
        new_stereo.setLeftRightCheck(True)

        # Configure left and right cameras to work as a stereo pair
        mono_left.out.link(new_stereo.left)
        mono_right.out.link(new_stereo.right)

        return new_stereo

    def get_frame(self, queue: DataOutputQueue):
        # Get frame from queue
        new_frame: ImgFrame = queue.get()
        # Convert to OpenCV format
        return new_frame.getCvFrame()

    def _poll(self):
        last_time = self.frame_time
        self.frame_time = time.time() - self.start_time
        self.frame_count += 1

        #
        # convert camera frames to images
        #
        if self.enable_rgb or self.enable_depth:

            self.depth_queue: DataOutputQueue = self.oak_d_device.getOutputQueue(
                name="depth", maxSize=1, blocking=False
            )
            self.rgb_queue: DataOutputQueue = self.oak_d_device.getOutputQueue(
                "rgb", maxSize=1, blocking=False
            )

            depth_frame = self.get_frame(self.depth_queue)
            rgb_frame = self.get_frame(self.rgb_queue)

            self.depth_image = depth_frame
            self.color_image = rgb_frame

        if self.resize:
            if self.width != WIDTH or self.height != HEIGHT:
                import cv2

                self.color_image = (
                    cv2.resize(
                        self.color_image, (self.width, self.height), cv2.INTER_NEAREST
                    )
                    if self.enable_rgb
                    else None
                )
                self.depth_image = (
                    cv2.resize(
                        self.depth_image, (self.width, self.height), cv2.INTER_NEAREST
                    )
                    if self.enable_depth
                    else None
                )

    def update(self):
        """
        When running threaded, update() is called from the background thread
        to update the state. run_threaded() is called to return the latest state.
        """
        while self.running:
            self._poll()

    def run_threaded(self):
        """
        Return the latest state read by update(). This will not block.
        All 4 states are returned, but may be None if the feature is not enabled when the camera part is constructed.
        For gyroscope, x is pitch, y is yaw and z is roll.
        :return: (rbg_image: nparray, depth_image: nparray, acceleration: (x:float, y:float, z:float), gyroscope: (x:float, y:float, z:float))
        """
        return self.color_image, self.depth_image

    def run(self):
        """
        Read and return frame from camera. This will block while reading the frame.
        see run_threaded() for return types.
        """
        self._poll()
        return self.run_threaded()

    def shutdown(self):
        self.running = False
        time.sleep(2)  # give thread enough time to shutdown

        # done running
        self.oak_d_device.close()


Copilot AI Feb 15, 2026


The new OakD camera driver does not have corresponding test coverage. Consider adding tests for the OakD class to ensure the camera initialization, configuration, and data retrieval work correctly.

# SIMULATION (DONKEY GYM)
#Only on Ubuntu linux, you can use the simulator as a virtual donkey and
#issue the same python manage.py drive command as usual, but have them control a virtual car.
#This enables that, and sets the path to the simualator and the environment.
Copy link

Copilot AI Feb 15, 2026


Corrected spelling of 'simualator' to 'simulator'

Suggested change
#This enables that, and sets the path to the simualator and the environment.
#This enables that, and sets the path to the simulator and the environment.
