Conversation
* Fixed augmentation imports and test_train.py to use the new import paths. * Add reruns in pytest ini to fix flaky web socket tests.
* Add Oak D Part (credit a6a547a) * address review, black-ed everything, bump version
* add new folder with files
* Add TrackSpeedPlanner as regular folder
* Updated for Tornado
* Cleaned up to remove Node.js and Docker
* Fixed some bugs: CSVs now load properly without a header, and the file dialog has an option to load from either the Pi or local/laptop
* Deleted unneeded files
* Removed unnecessary files
* Fixed local save
* Made sure that exiting the program doesn't stop the Tornado server
* Got rid of unnecessary shell script
* Fix routing issue and create comprehensive documentation (#1210)
  - Fixed MainHandler routing conflict that prevented print statements from showing
  - Replaced inaccurate documentation files with a single comprehensive README.md
  - Removed obsolete files (package.json, start.sh, .gitignore)
  - Changed server binding to localhost for security
  - Added a debug print statement to MainHandler for troubleshooting

  The new README accurately documents the actual functionality, including:
  - Interactive canvas-based path visualization with speed color coding
  - Dual file loading (Pi directory browser + upload)
  - Individual point speed editing with real-time visual feedback
  - Proper API endpoints and CSV format specifications
  - macOS port conflict troubleshooting

  🤖 Generated with [Claude Code](https://claude.ai/code)
* Reverted to original IP address as the overwrite was incorrect.

Co-authored-by: DocGarbanzo <47540921+DocGarbanzo@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
…ith mocks

The original test relied on an external MQTT broker connection, which caused intermittent failures. This change:
- Replaces the real MQTT client with mocked instances to eliminate the network dependency
- Adds comprehensive test coverage for both success and error scenarios
- Removes timing-dependent sleep calls and retry logic that caused flakiness
- Ensures consistent test execution across all environments

Co-Authored-By: Claude <noreply@anthropic.com>
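The mocking approach described above can be sketched with `unittest.mock`. The `Telemetry` class below is a hypothetical stand-in for illustration, not donkeycar's actual telemetry part; the point is that injecting a `MagicMock` client removes the broker dependency entirely:

```python
from unittest import mock

class Telemetry:
    """Hypothetical stand-in for an MQTT-publishing part."""
    def __init__(self, client_factory):
        self.client = client_factory()
        self.client.connect("localhost", 1883)

    def publish(self, topic, payload):
        self.client.publish(topic, payload)

# Inject a mock instead of a real MQTT client: no network, no sleeps.
mock_client = mock.MagicMock()
t = Telemetry(client_factory=lambda: mock_client)
t.publish("telemetry/speed", "1.5")

mock_client.connect.assert_called_once_with("localhost", 1883)
mock_client.publish.assert_called_once_with("telemetry/speed", "1.5")
```

Error scenarios follow the same pattern, e.g. `mock_client.connect.side_effect = ConnectionRefusedError` to exercise the failure path deterministically.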
…s install for apple silicon (#1203)
Replace fixed sleep() calls with proper async polling to eliminate race conditions in WebSocket calibration tests. The tests were failing on macOS CI due to insufficient wait time for WebSocket message processing.

Changes:
- Add wait_for_attribute_value() helper method with a 5-second timeout
- Replace all sleep(SLEEP) calls with async polling in 7 tests
- Remove unused imports (tornado.ioloop, time.sleep, the SLEEP constant)
- Add proper timeout error messages for better debugging

Fixes intermittent test failures on slower CI environments while maintaining fast execution on local machines.

Co-Authored-By: Claude <noreply@anthropic.com>
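The polling helper described in this commit could be sketched as follows. The name and 5-second default come from the commit message; the body is an assumption about how such a helper would work, not the actual test code:

```python
import asyncio
import time

async def wait_for_attribute_value(obj, attr, expected, timeout=5.0, interval=0.01):
    """Poll obj.attr until it equals `expected`, or raise after `timeout` seconds.
    Replaces a fixed sleep() with deterministic, fast-on-success waiting."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if getattr(obj, attr) == expected:
            return True
        await asyncio.sleep(interval)
    raise TimeoutError(f"{attr} never reached {expected!r} within {timeout}s")

# Minimal demonstration: a background task flips the value after 50 ms.
class _State:
    value = 0

async def _demo():
    state = _State()
    async def _setter():
        await asyncio.sleep(0.05)
        state.value = 42
    setter = asyncio.ensure_future(_setter())
    ok = await wait_for_attribute_value(state, "value", 42, timeout=2.0)
    await setter
    return ok

result = asyncio.run(_demo())
```

On a fast machine the helper returns almost immediately after the value changes, which is why this keeps local runs fast while tolerating slow CI.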
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Comment out camera alignment to fix resolution issues on Pi5.
After merging PR #1222
Just missing an "a"
* Reorganized and cleaned up the config file
* Fixed comment formatting for the web control port in cfg_complete.py
Cursor Bugbot has reviewed your changes and found 8 potential issues.
```python
depth_frame = self.get_frame(self.depth_queue)
rgb_frame = self.get_frame(self.rgb_queue)
```
```python
cam = OakD(
    enable_rgb=cfg.OAKD_RGB,
    enable_depth=cfg.OAKD_DEPTH,
    device_id=cfg.OAKD_ID)
```
```python
# Save to Pi directory
file_path = os.path.join(os.getcwd(), filename)
with open(file_path, 'w', newline='') as f:
    f.write(csv_content)
```
Track editor saves to wrong directory
Medium Severity
CSVSaveHandler writes files to os.getcwd(), but file listing/loading uses Path(__file__).parent. Saving can target a different folder than the one shown in the UI, so edits may appear unsaved or disappear from the selectable file list.
Additional Locations (1)
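A fix for the mismatch described above would anchor both listing and saving to the same base directory, so saved files show up in the UI's selectable list. This is an illustrative sketch: `BASE_DIR` and the function names are assumptions, not the actual TrackSpeedPlanner code:

```python
from pathlib import Path

# Assumed single source of truth for where CSVs live: the script's own folder,
# matching the Path(__file__).parent convention the listing code already uses.
BASE_DIR = Path(__file__).resolve().parent

def list_csv_files():
    """List the CSV files the UI offers for loading."""
    return sorted(p.name for p in BASE_DIR.glob("*.csv"))

def save_csv(filename, csv_content):
    """Save next to the files we list, instead of into os.getcwd()."""
    file_path = BASE_DIR / filename
    with open(file_path, "w", newline="") as f:
        f.write(csv_content)
    return file_path
```

With both handlers resolving paths through `BASE_DIR`, a save is immediately visible to a subsequent listing regardless of the server's working directory.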
```diff
 from donkeycar.parts.robohat import RoboHATDriver
-V.add(RoboHATDriver(cfg), inputs=['steering', 'throttle'])
+# Share serial port with controller to avoid opening the same port twice
+V.add(RoboHATDriver(cfg, serial_port=ctr.serial), inputs=['steering', 'throttle'])
```
MM1 startup assumes joystick serial exists
Medium Severity
RoboHATDriver is now always created with serial_port=ctr.serial. When DRIVE_TRAIN_TYPE is MM1 but the active controller is not RoboHATController, ctr can be a LocalWebController, which has no serial attribute, causing a runtime failure before driving starts.
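A defensive lookup for this failure mode could be sketched as follows; the controller classes here are hypothetical stand-ins used only to demonstrate the guard:

```python
def resolve_serial_port(ctr):
    """Return the controller's serial port if it has one, else None so the
    driver can open its own port from the config."""
    return getattr(ctr, "serial", None)

class LocalWebControllerStub:
    """Stand-in for a web controller with no serial attribute."""

class RoboHATControllerStub:
    """Stand-in for a controller that owns a serial port."""
    serial = "/dev/ttyS0"

# Web controller: fall back to None -> driver opens its own port.
assert resolve_serial_port(LocalWebControllerStub()) is None
# RoboHAT controller: reuse the already-open port.
assert resolve_serial_port(RoboHATControllerStub()) == "/dev/ttyS0"
```

`getattr` with a default keeps MM1 cars bootable with any controller type while still sharing the port when one exists.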
```python
    return tf.lite.Interpreter
raise ImportError("No TFLite runtime found. Install tflite-runtime or tensorflow.")
```
TensorRT support check crashes without TensorFlow
Medium Severity
When TensorFlow imports fail, trt is set to None, but has_trt_support() still calls trt.TrtGraphConverterV2(). The function only catches RuntimeError, so it raises AttributeError instead of returning False.
Additional Locations (1)
```python
if serial_port is not None:
    self.pwm = serial_port
else:
    self.pwm = serial.Serial(cfg.MM1_SERIAL_PORT, 115200, timeout=1)
```
```python
time.sleep(2)  # give thread enough time to shutdown

# done running
self.oak_d_device.close()
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9d34462d9f
```diff
 from donkeycar.parts.robohat import RoboHATDriver
-V.add(RoboHATDriver(cfg), inputs=['steering', 'throttle'])
+# Share serial port with controller to avoid opening the same port twice
+V.add(RoboHATDriver(cfg, serial_port=ctr.serial), inputs=['steering', 'throttle'])
```
Stop referencing undefined controller in MM1 drivetrain setup
This branch now passes ctr.serial into RoboHATDriver, but add_drivetrain has no ctr variable in scope, so selecting DRIVE_TRAIN_TYPE == "MM1" raises a NameError during startup and prevents MM1 cars from launching at all. Pass the controller into add_drivetrain (or keep the previous constructor usage) before dereferencing it here.
```diff
 class ImageAugmentation:
-    def __init__(self, cfg, key, prob=0.5, always_apply=False):
+    def __init__(self, cfg, key, prob=0.5):
```
Preserve ImageAugmentation always_apply parameter compatibility
Removing the always_apply parameter from ImageAugmentation.__init__ breaks existing callers that still pass it (for example donkeycar/management/ui/pilot_screen.py calls ImageAugmentation(..., always_apply=True)), which now throws TypeError: unexpected keyword argument 'always_apply' when the pilot screen updates augmentations. Keep a compatible signature or update all call sites in the same change.
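A compatibility shim along the lines this comment suggests could look like the following. This is an illustrative sketch, not the actual donkeycar class: accept the removed argument, warn, and ignore it, so older call sites don't raise TypeError:

```python
import warnings

class ImageAugmentationCompat:
    """Hypothetical shim keeping the removed albumentations argument accepted."""
    def __init__(self, cfg, key, prob=0.5, always_apply=None):
        if always_apply is not None:
            warnings.warn(
                "always_apply is deprecated and ignored "
                "(removed from albumentations)", DeprecationWarning)
        self.cfg = cfg
        self.key = key
        self.prob = prob

# An old-style call keeps working instead of raising TypeError:
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    aug = ImageAugmentationCompat(cfg=None, key="AUGMENTATIONS", always_apply=True)
```

The alternative, as the comment notes, is updating all call sites (e.g. the pilot screen) in the same change.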
```python
self.depth_queue: DataOutputQueue = self.oak_d_device.getOutputQueue(
    name="depth", maxSize=1, blocking=False
```
Guard OAK-D queue reads by enabled stream flags
In _poll, the code always fetches both depth and rgb output queues whenever either stream is enabled, so configurations like enable_rgb=True, enable_depth=False (or vice versa) still try to read a queue that was never created and fail at runtime. Queue creation and frame reads should be conditioned per-stream to match enable_rgb/enable_depth.
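The per-stream guard this comment asks for could be sketched as follows; `device` is a stand-in for the depthai device object, and the queue API mirrors the snippet quoted above:

```python
from unittest import mock

def poll_frames(device, enable_rgb, enable_depth):
    """Create and read a queue only for streams enabled at construction."""
    rgb_frame = depth_frame = None
    if enable_rgb:
        rgb_queue = device.getOutputQueue(name="rgb", maxSize=1, blocking=False)
        rgb_frame = rgb_queue.get().getCvFrame()
    if enable_depth:
        depth_queue = device.getOutputQueue(name="depth", maxSize=1, blocking=False)
        depth_frame = depth_queue.get().getCvFrame()
    return rgb_frame, depth_frame

# Exercise with a mock device: depth disabled means no depth queue is touched.
device = mock.MagicMock()
device.getOutputQueue.return_value.get.return_value.getCvFrame.return_value = "frame"
assert poll_frames(device, enable_rgb=True, enable_depth=False) == ("frame", None)
device.getOutputQueue.assert_called_once_with(name="rgb", maxSize=1, blocking=False)
```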
Pull request overview
This pull request updates dependencies, adds new hardware support (OAK-D camera), introduces a web-based Track Speed Planner utility, improves test reliability, and makes the codebase more resilient to different TensorFlow/TFLite runtime environments.
Changes:
- Updates Python dependencies by removing version constraints on matplotlib and pytest, adds tflite-runtime and tensorflow-metal support
- Adds comprehensive OAK-D camera driver and configuration support
- Introduces a new Tornado-based Track Speed Planner web utility for visualizing and editing path data CSV files
- Improves test reliability by replacing sleep-based waits with proper async polling and adding connection error tests
- Refactors TensorFlow imports to be optional with graceful fallbacks, supporting tflite-runtime, ai_edge_litert, and full TensorFlow
Reviewed changes
Copilot reviewed 22 out of 22 changed files in this pull request and generated 11 comments.
Show a summary per file
| File | Description |
|---|---|
| setup.cfg | Updates dependency versions, adds tflite-runtime and tensorflow-metal for Pi and macOS |
| donkeycar/utilities/TrackSpeedPlanner/trackeditor.py | New Tornado web server for path data visualization and editing |
| donkeycar/utilities/TrackSpeedPlanner/static/index.html | Complete web interface with interactive canvas for path editing |
| donkeycar/utilities/TrackSpeedPlanner/test_path.csv | Sample CSV data file with 296 path points |
| donkeycar/utilities/TrackSpeedPlanner/README.md | Comprehensive documentation for the Track Speed Planner utility |
| donkeycar/tests/test_web_socket.py | Replaces sleep-based timing with async polling for more reliable tests |
| donkeycar/tests/test_train.py | Adds proper cleanup by closing tub files after use |
| donkeycar/tests/test_telemetry.py | Replaces integration tests with mocked unit tests and adds connection error handling |
| donkeycar/tests/pytest.ini | Adds pytest reruns configuration for flaky test handling |
| donkeycar/templates/complete.py | Adds OAK-D camera support and MM1 serial port sharing |
| donkeycar/templates/cfg_simulator.py | Adds OAK-D configuration parameters |
| donkeycar/templates/cfg_complete.py | Major reorganization with improved comments and OAK-D support |
| donkeycar/templates/cfg_basic.py | Updates camera type list to include OAK-D |
| donkeycar/pipeline/augmentations.py | Removes deprecated 'always_apply' parameter for albumentations compatibility |
| donkeycar/parts/robohat.py | Adds serial_port parameter to RoboHATDriver to enable port sharing |
| donkeycar/parts/oak_d.py | New driver for OAK-D depth camera with RGB and depth support |
| donkeycar/parts/keras.py | Makes TensorFlow imports optional with graceful fallback |
| donkeycar/parts/interpreter.py | Adds support for tflite-runtime and ai_edge_litert interpreters |
| donkeycar/parts/camera.py | Comments out problematic align_configuration call for Pi5 compatibility |
| donkeycar/\_\_init\_\_.py | Version bump to 5.2.dev6 |
| README.md | Minor grammar fix |
```diff
     def __init__(self, cfg, key, prob=0.5):
         aug_list = getattr(cfg, key, [])
-        augmentations = [ImageAugmentation.create(a, cfg, prob, always_apply)
+        augmentations = [ImageAugmentation.create(a, cfg, prob)
                          for a in aug_list]
         self.augmentations = A.Compose(augmentations)

     @classmethod
-    def create(cls, aug_type: str, config: Config, prob, always) -> \
+    def create(cls, aug_type: str, config: Config, prob) -> \
```
The removal of the 'always_apply' parameter from ImageAugmentation may cause issues if any calling code explicitly passes this parameter. However, since this parameter was removed from albumentations library starting from version 1.0, this change aligns with the library's API. Ensure that the project's albumentations version is compatible with this change.
```ini
log_cli = True
log_cli_level = INFO
reruns = 3
```
The 'reruns = 3' configuration requires the pytest-rerunfailures plugin to be installed. This plugin is not listed in the dev dependencies in setup.cfg. Add 'pytest-rerunfailures' to the dev extras_require section to ensure this configuration works.
Suggested change:
```diff
-reruns = 3
+# The following option requires the pytest-rerunfailures plugin, which is not
+# declared as a dev dependency. Uncomment and ensure the plugin is installed
+# if rerun functionality is desired.
+# reruns = 3
```
```diff
 RPi.GPIO
 flatbuffers==24.3.*
-tensorflow-aarch64==2.15.*
+tflite-runtime
```
The change from 'tensorflow-aarch64' to 'tflite-runtime' is a significant shift. The tflite-runtime package only provides inference capabilities and does not include the full TensorFlow library. Ensure that the Pi installation does not require full TensorFlow capabilities for training or other operations. Additionally, verify that the new 'get_tflite_interpreter()' function in interpreter.py properly handles this change by falling back to ai_edge_litert or tensorflow when tflite-runtime is not available.
Suggested change:
```diff
 tflite-runtime
+ai-edge-litert
```
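The fallback chain discussed in this thread could be sketched as follows. The function name comes from the review text; the exact import paths are assumptions about the `tflite-runtime` and `ai-edge-litert` packages, not the actual donkeycar implementation:

```python
def get_tflite_interpreter():
    """Return a TFLite Interpreter class from whichever runtime is installed.
    Preference order mirrors the review: tflite-runtime, then ai-edge-litert,
    then full TensorFlow."""
    try:
        from tflite_runtime.interpreter import Interpreter
        return Interpreter
    except ImportError:
        pass
    try:
        from ai_edge_litert.interpreter import Interpreter
        return Interpreter
    except ImportError:
        pass
    try:
        import tensorflow as tf
        return tf.lite.Interpreter
    except ImportError:
        raise ImportError(
            "No TFLite runtime found. Install tflite-runtime or tensorflow.")
```

On a Pi with only `tflite-runtime`, the first branch wins and full TensorFlow is never imported, which is the point of the dependency change.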
```python
camera = None
try:
    camera = OakDLite(
```
The class name 'OakDLite' is used in the test code but the actual class defined is 'OakD' (line 34). This will cause a NameError at runtime when executing this test code.
Suggested change:
```diff
-    camera = OakDLite(
+    camera = OakD(
```
```python
# Share serial port with controller to avoid opening the same port twice
V.add(RoboHATDriver(cfg, serial_port=ctr.serial), inputs=['steering', 'throttle'])
```
The variable 'ctr' is not defined in the scope of 'add_drivetrain' function. The function signature is 'def add_drivetrain(V, cfg)' but the code tries to access 'ctr.serial' which requires 'ctr' to be passed as a parameter or made available globally. This will cause a NameError when MM1 drive train type is used.
Suggested change:
```diff
-# Share serial port with controller to avoid opening the same port twice
-V.add(RoboHATDriver(cfg, serial_port=ctr.serial), inputs=['steering', 'throttle'])
+# Share serial port with controller to avoid opening the same port twice,
+# but fall back to default behavior if no controller is available.
+try:
+    serial_port = ctr.serial  # use existing controller serial if available
+except NameError:
+    serial_port = None  # let RoboHATDriver open its own serial port
+V.add(RoboHATDriver(cfg, serial_port=serial_port), inputs=['steering', 'throttle'])
```
```diff
 config = self.camera.create_preview_configuration(
     config_dict, transform=transform)
-self.camera.align_configuration(config)
+# self.camera.align_configuration(config) # this created issues with the libcamera2 library on the Pi5, which automatically changes the resolution to 128x120 in an attempt to align with native sensor resolution
```
The comment has a spelling error: 'libcamera2' should likely be 'libcamera' (without the '2'). The Picamera2 library uses libcamera (not libcamera2) as its underlying camera system.
Suggested change:
```diff
-# self.camera.align_configuration(config) # this created issues with the libcamera2 library on the Pi5, which automatically changes the resolution to 128x120 in an attempt to align with native sensor resolution
+# self.camera.align_configuration(config) # this created issues with the libcamera library on the Pi5, which automatically changes the resolution to 128x120 in an attempt to align with native sensor resolution
```
```python
# Stack both images horizontally (i.e. side by side).
images = None
if enable_rgb:
```
This bare 'except' clause catches all exceptions including SystemExit and KeyboardInterrupt, which is discouraged. Consider catching specific exceptions like SerialException.
```python
val = input("Which DepthAI Device you want to use: ")
try:
    return device_infos[int(val)]
except:
```
This bare 'except' clause at line 116 is overly broad and may hide unexpected errors. Consider catching specific exceptions such as ValueError or IndexError to handle the expected error scenarios.
Suggested change:
```diff
-            except:
+            except (ValueError, IndexError):
```
```python
class OakD(object):
    """
    Donkeycar part for the Oak-D camera
    Intel Movidius based depth sensing camera
    https://docs.luxonis.com/projects/hardware/en/latest/pages/DM9095.html
    https://www.kickstarter.com/projects/opencv/opencv-ai-kit-oak-depth-camera-4k-cv-edge-object-detection
    https://shop.luxonis.com/
    """

    def __init__(
        self,
        width=WIDTH,
        height=HEIGHT,
        enable_rgb=True,
        enable_depth=True,
        device_id=None,
    ):
        self.device_id = device_id  # "18443010C1E4681200" # serial number of device to use|None to use default|"list" to list devices and exit
        self.enable_rgb = enable_rgb
        self.enable_depth = enable_depth

        self.width = width
        self.height = height

        # TODO: Accommodate using device native resolutions to avoid resizing.
        self.resize = (width != WIDTH) or (height != HEIGHT)
        if self.resize:
            print(
                f"The output images will be resized from {(WIDTH, HEIGHT)} to {(self.width, self.height)} using OpenCV. Device resolution in use is 640x480."
            )

        self.pipeline = None
        if self.enable_depth or self.enable_rgb:
            self.pipeline = depthai.Pipeline()

        device_info = self.get_depthai_device_info(device_id)

        if self.enable_depth:
            self.setup_depth_camera(WIDTH, HEIGHT)

        if self.enable_rgb:
            self.setup_rgb_camera(WIDTH, HEIGHT)

        self.oak_d_device = depthai.Device(self.pipeline, device_info)

        # initialize frame state
        self.color_image = None
        self.depth_image = None
        self.frame_count = 0
        self.start_time = time.time()
        self.frame_time = self.start_time

        self.running = True

    # Taken from the demo application.
    def get_depthai_device_info(self, device_id: string):
        device_infos = depthai.Device.getAllAvailableDevices()
        if len(device_infos) == 0:
            raise RuntimeError("No DepthAI (Oak-D-Lite) device (camera) found!")
        else:
            print("Available devices:")
            for i, deviceInfo in enumerate(device_infos):
                print(f"[{i}] {deviceInfo.getMxId()} [{deviceInfo.state.name}]")

        # Set the deviceId to "list" in order to list the connected devices' ids.
        if device_id == "list":
            raise SystemExit(0)
        elif device_id is not None:
            matching_device = next(
                filter(lambda info: info.getMxId() == device_id, device_infos), None
            )
            if matching_device is None:
                raise RuntimeError(
                    f"No DepthAI device found with id matching {device_id} !"
                )
            return matching_device
        elif len(device_infos) == 1:
            return device_infos[0]
        else:
            val = input("Which DepthAI Device you want to use: ")
            try:
                return device_infos[int(val)]
            except:
                raise ValueError(f"Incorrect value supplied: {val}")

    def setup_depth_camera(self, width, height):
        # Set up left and right cameras
        mono_left = self.get_mono_camera(self.pipeline, True)
        mono_right = self.get_mono_camera(self.pipeline, False)

        # Combine left and right cameras to form a stereo pair
        stereo: depthai.node.StereoDepth = self.get_stereo_pair(
            self.pipeline, mono_left, mono_right
        )

        # Define and name output depth map
        xout_depth = self.pipeline.createXLinkOut()
        xout_depth.setStreamName("depth")

        stereo.depth.link(xout_depth.input)

    def setup_rgb_camera(self, width, height):
        cam_rgb = self.pipeline.create(depthai.node.ColorCamera)

        res = depthai.ColorCameraProperties.SensorResolution.THE_1080_P

        cam_rgb.setResolution(res)
        cam_rgb.setVideoSize(width, height)

        xout_rgb = self.pipeline.create(depthai.node.XLinkOut)
        xout_rgb.setStreamName("rgb")

        cam_rgb.video.link(xout_rgb.input)

    def get_mono_camera(self, pipeline: Pipeline, is_left: bool):
        # Configure mono camera
        mono = pipeline.createMonoCamera()

        # Set camera resolution
        mono.setResolution(depthai.MonoCameraProperties.SensorResolution.THE_480_P)

        if is_left:
            # Get left camera
            mono.setBoardSocket(depthai.CameraBoardSocket.LEFT)
        else:
            # Get right camera
            mono.setBoardSocket(depthai.CameraBoardSocket.RIGHT)

        return mono

    def get_stereo_pair(self, pipeline: Pipeline, mono_left, mono_right):
        # Configure the stereo pair for depth estimation
        new_stereo = pipeline.createStereoDepth()
        # Checks occluded pixels and marks them as invalid
        new_stereo.setLeftRightCheck(True)

        # Configure left and right cameras to work as a stereo pair
        mono_left.out.link(new_stereo.left)
        mono_right.out.link(new_stereo.right)

        return new_stereo

    def get_frame(self, queue: DataOutputQueue):
        # Get frame from queue
        new_frame: ImgFrame = queue.get()
        # Convert to OpenCV format
        return new_frame.getCvFrame()

    def _poll(self):
        last_time = self.frame_time
        self.frame_time = time.time() - self.start_time
        self.frame_count += 1

        #
        # convert camera frames to images
        #
        if self.enable_rgb or self.enable_depth:

            self.depth_queue: DataOutputQueue = self.oak_d_device.getOutputQueue(
                name="depth", maxSize=1, blocking=False
            )
            self.rgb_queue: DataOutputQueue = self.oak_d_device.getOutputQueue(
                "rgb", maxSize=1, blocking=False
            )

            depth_frame = self.get_frame(self.depth_queue)
            rgb_frame = self.get_frame(self.rgb_queue)

            self.depth_image = depth_frame
            self.color_image = rgb_frame

            if self.resize:
                if self.width != WIDTH or self.height != HEIGHT:
                    import cv2

                    self.color_image = (
                        cv2.resize(
                            self.color_image, (self.width, self.height), cv2.INTER_NEAREST
                        )
                        if self.enable_rgb
                        else None
                    )
                    self.depth_image = (
                        cv2.resize(
                            self.depth_image, (self.width, self.height), cv2.INTER_NEAREST
                        )
                        if self.enable_depth
                        else None
                    )

    def update(self):
        """
        When running threaded, update() is called from the background thread
        to update the state. run_threaded() is called to return the latest state.
        """
        while self.running:
            self._poll()

    def run_threaded(self):
        """
        Return the latest state read by update(). This will not block.
        All 4 states are returned, but may be None if the feature is not enabled when the camera part is constructed.
        For gyroscope, x is pitch, y is yaw and z is roll.
        :return: (rbg_image: nparray, depth_image: nparray, acceleration: (x:float, y:float, z:float), gyroscope: (x:float, y:float, z:float))
        """
        return self.color_image, self.depth_image

    def run(self):
        """
        Read and return frame from camera. This will block while reading the frame.
        see run_threaded() for return types.
        """
        self._poll()
        return self.run_threaded()

    def shutdown(self):
        self.running = False
        time.sleep(2)  # give thread enough time to shutdown

        # done running
        self.oak_d_device.close()
```
The new OakD camera driver does not have corresponding test coverage. Consider adding tests for the OakD class to ensure the camera initialization, configuration, and data retrieval work correctly.
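One low-cost way to get such coverage without camera hardware is to register a fake `depthai` module before the part is imported, then drive the queue API with mocks. This is a hedged sketch of the pattern; the stub shape follows the code quoted above, and the names are stand-ins:

```python
import sys
import types
from unittest import mock

# Install a fake depthai module so OakD-style code can be imported and
# exercised on machines without the SDK or the camera attached.
fake_depthai = types.ModuleType("depthai")
fake_depthai.Pipeline = mock.MagicMock(name="Pipeline")
fake_depthai.Device = mock.MagicMock(name="Device")
sys.modules["depthai"] = fake_depthai

import depthai  # resolves to the stub registered above

device = depthai.Device(depthai.Pipeline(), None)
queue = device.getOutputQueue(name="rgb", maxSize=1, blocking=False)
queue.get.return_value.getCvFrame.return_value = "fake-rgb-frame"

# A test for _poll-style logic can now assert on frame retrieval:
assert queue.get().getCvFrame() == "fake-rgb-frame"
```

From here, tests could assert that `shutdown()` closes the device or that disabled streams never create a queue, all without hardware in the loop.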
```python
# SIMULATION (DONKEY GYM)
#Only on Ubuntu linux, you can use the simulator as a virtual donkey and
#issue the same python manage.py drive command as usual, but have them control a virtual car.
#This enables that, and sets the path to the simualator and the environment.
```
Corrected spelling of 'simualator' to 'simulator'
Suggested change:
```diff
-#This enables that, and sets the path to the simualator and the environment.
+#This enables that, and sets the path to the simulator and the environment.
```


Note
Medium Risk
Touches core inference/camera plumbing and dependency resolution (TensorFlow/TFLite and new camera path), which can affect runtime behavior on edge devices, though most changes are additive and guarded.
Overview
Adds Luxonis OAK-D camera support: new `parts/oak_d.py`, new `CAMERA_TYPE="OAKD"` wiring in `templates/complete.py` (including optional depth recording), and config template updates to expose `OAKD_*` settings.

Makes TensorFlow an optional dependency for inference/training code by guarding imports in `parts/interpreter.py` and `parts/keras.py`, adding a `get_tflite_interpreter()` fallback chain (tflite-runtime / ai_edge_litert / TF), and fixing Keras model loading to initialize `input_keys`/`output_keys`; also updates KerasPilot inference to use interpreter `input_keys` directly.

Improves hardware/runtime robustness (disables `Picamera2.align_configuration` on the Pi5; allows `RoboHATDriver` to reuse an existing serial port), refreshes and expands config templates, updates dependencies (`setup.cfg`: prefer `tflite-runtime`, relax `matplotlib`/`pytest`, add `tensorflow-metal` for macOS), and strengthens tests (MQTT client fully mocked, websocket tests wait deterministically, tubs are closed, pytest reruns enabled).

Adds the TrackSpeedPlanner utility: a Tornado-based CSV path speed editor with a bundled single-page UI and sample CSV assets under `utilities/TrackSpeedPlanner`.

Written by Cursor Bugbot for commit 9d34462. This will update automatically on new commits.