Helios is a real-time SLAM (Simultaneous Localization and Mapping) system designed for autonomous vehicles, consisting of a Python-based agent for sensor processing and navigation, and a Next.js-based supervisor for 3D visualization and monitoring.
The project is under active development; while it already offers core functionality, it is by no means complete.
The system is divided into two main components:
- Agent: the autonomous navigation and SLAM processing engine that runs on the vehicle hardware.
- Supervisor: a web-based 3D visualization dashboard for real-time monitoring and point cloud visualization.
The agent is a multi-threaded Python application that integrates sensor data, performs SLAM calculations, and controls vehicle navigation. It processes data from LiDAR, gyroscope, and optical flow sensors to build a real-time 3D map of the environment.
PointHandler (Argus)
- Central sensor fusion hub that coordinates all sensors
- Manages synchronized data collection from multiple sensors using threading barriers
- Supports two operational modes:
- Static Mode: For stationary platforms (LiDAR + Gyro)
- Moving Mode: For moving platforms (LiDAR + Gyro + Optical Flow)
- Uses multi-threading with queues for high-frequency data processing (supports up to 50,000 Hz)
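The barrier-and-queue pattern described above might look roughly like the following sketch. This is a minimal illustration, not the actual Argus code; `sensor_loop` and the random stand-in readings are hypothetical:

```python
import queue
import random
import threading
import time

NUM_SENSORS = 2                          # e.g. LiDAR + Gyro in static mode
barrier = threading.Barrier(NUM_SENSORS)
readings = queue.Queue()                 # thread-safe hand-off to fusion

def sensor_loop(name):
    # Illustrative per-sensor thread: the barrier releases all sensor
    # threads at once, so readings from the same instant stay together.
    while True:
        barrier.wait()
        readings.put((name, random.random()))   # stand-in for a real read
        time.sleep(0.01)                        # ~100 Hz stand-in

for name in ("lidar", "gyro"):
    threading.Thread(target=sensor_loop, args=(name,), daemon=True).start()

# Fusion loop: collect one reading per sensor into a synchronized sample.
for _ in range(5):
    sample = dict(readings.get() for _ in range(NUM_SENSORS))
    print(sample)
```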
Supported Sensors:
- LidarSensor: TFmini-S distance measurements via serial (115200 baud)
- GyroSensor: Roll, pitch, yaw orientation data via MSP protocol
- OpticalFlowSensor: Position tracking for moving platforms
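For illustration, a minimal TFmini-S read loop over pyserial could look like the sketch below. The 9-byte frame layout (0x59 0x59 header, little-endian distance and strength, trailing checksum) follows the sensor's datasheet; the function itself is a sketch, not the project's LidarSensor implementation:

```python
import serial  # pyserial

def read_tfmini_frame(ser):
    """Read one 9-byte TFmini-S frame: 0x59 0x59 header, then
    distance (cm) and signal strength as little-endian uint16."""
    while True:
        if ser.read(1) == b"\x59" and ser.read(1) == b"\x59":
            payload = ser.read(7)
            dist = payload[0] | (payload[1] << 8)
            strength = payload[2] | (payload[3] << 8)
            # Checksum is the low byte of the sum of the first 8 bytes.
            checksum = (0x59 + 0x59 + sum(payload[:6])) & 0xFF
            if checksum == payload[6]:
                return dist, strength

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
print(read_tfmini_frame(ser))
```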
Point Calculation (calculations.py)
- calc_point_static(): Transforms sensor readings into 3D coordinates for stationary platforms
- calc_point_moving(): Accounts for vehicle motion during point calculation
- Uses rotation matrices and trigonometric calculations with Numba JIT compilation for performance
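A hedged sketch of what the static case boils down to, assuming the LiDAR distance plus gyro yaw/pitch are treated as spherical coordinates (the real calculations.py may use a different convention):

```python
import math
from numba import njit

@njit(cache=True)
def calc_point_static(dist, yaw_deg, pitch_deg):
    # Treat (distance, yaw, pitch) as spherical coordinates and
    # convert to Cartesian; angles come from the gyro in degrees.
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = dist * math.cos(pitch) * math.cos(yaw)
    y = dist * math.cos(pitch) * math.sin(yaw)
    z = dist * math.sin(pitch)
    return x, y, z

print(calc_point_static(120.0, 45.0, 10.0))
```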
Nereus Navigator
- Manages autonomous exploration and path planning
- Implements two operational modes:
- Auto Mode: Autonomous exploration with random movement patterns
- Manual Mode: Interactive user control
Key Features:
- Point Clustering: Groups nearby points using distance-based algorithms
- 360° Scanning: Rotational scanning for comprehensive environment mapping
- Slice Analysis: Analyzes point cloud slices for navigation decisions
- Real-time Processing: Converts incoming sensor points to navigable clusters
Navigation Algorithms (nav_math.py)
- Distance calculations and angular placement checking
- Polar coordinate transformations
- Group detection and clustering algorithms
- Optimized with Numba for real-time performance
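As an illustration of the distance-based grouping idea, here is a minimal Numba-compiled clustering sketch; the greedy strategy, threshold, and names are assumptions, not the actual nav_math.py code:

```python
import numpy as np
from numba import njit

@njit(cache=True)
def cluster_points(points, max_dist):
    # Greedy distance-based grouping: a point joins the first existing
    # group that contains a member within max_dist, else starts a new one.
    n = points.shape[0]
    groups = np.full(n, -1, dtype=np.int64)
    next_group = 0
    for i in range(n):
        for j in range(i):
            d = np.sqrt(((points[i] - points[j]) ** 2).sum())
            if d <= max_dist:
                groups[i] = groups[j]
                break
        if groups[i] == -1:
            groups[i] = next_group
            next_group += 1
    return groups

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
print(cluster_points(pts, 0.5))   # -> [0 0 1]
```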
Ground Vehicle (Car)
- Dual motor control using PWM and GPIO pins
- Motor driver integration (TB6612FNG compatible)
- Speed control: -100 to +100 range
- Braking and coasting modes
- Interactive control mode for manual operation
Hardware Pins:
- Motor A: PWM=12, IN1=17, IN2=22
- Motor B: PWM=13, IN1=5, IN2=6
- Standby: GPIO 27
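Using the pin map above, a minimal gpiozero sketch of the TB6612FNG wiring might look like this (illustrative only; the project's ground.py may structure it differently):

```python
from gpiozero import DigitalOutputDevice, PWMOutputDevice

standby = DigitalOutputDevice(27, initial_value=True)  # STBY high = driver on

class MotorChannel:
    """One TB6612FNG channel: IN1/IN2 pick direction, PWM sets speed."""
    def __init__(self, pwm, in1, in2):
        self.pwm = PWMOutputDevice(pwm)
        self.in1 = DigitalOutputDevice(in1)
        self.in2 = DigitalOutputDevice(in2)

    def drive(self, speed):
        # speed in -100..+100; the sign selects direction
        self.in1.value = speed > 0
        self.in2.value = speed < 0
        self.pwm.value = min(abs(speed), 100) / 100

    def brake(self):
        # IN1 = IN2 = high short-brakes the motor on the TB6612FNG
        self.in1.on()
        self.in2.on()
        self.pwm.value = 1

motor_a = MotorChannel(pwm=12, in1=17, in2=22)
motor_b = MotorChannel(pwm=13, in1=5, in2=6)
motor_a.drive(60)    # forward at 60%
motor_b.drive(-60)   # reverse at 60%
```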
WebSocket Server
- Real-time bidirectional communication with supervisor
- Streams point cloud data in binary format (big-endian floats)
- Supports multiple client connections
- Chunked data transmission (10 points/chunk, 10ms interval)
- Auto-reconnection with configurable retry logic
Data Protocol:
- Points transmitted as (x, y, z, group_id) tuples
- Binary packed using struct.pack() for efficiency
- JSON messages for commands and control
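Putting the protocol together, a minimal sketch of packing and chunked streaming might look like this. The `>4f` layout and 10-points/10 ms pacing follow the description above; the handler and exact framing are assumptions:

```python
import asyncio
import struct

import websockets

def pack_points(points):
    # Each point becomes four big-endian float32s: x, y, z, group_id.
    return b"".join(struct.pack(">4f", x, y, z, float(g))
                    for x, y, z, g in points)

async def stream(ws, points, chunk_size=10, interval=0.01):
    # Send 10 points per binary frame, 10 ms apart.
    for i in range(0, len(points), chunk_size):
        await ws.send(pack_points(points[i:i + chunk_size]))
        await asyncio.sleep(interval)

async def main():
    points = [(float(i), 0.0, 0.0, 0) for i in range(100)]
    async with websockets.serve(lambda ws: stream(ws, points),
                                "0.0.0.0", 8765):
        await asyncio.Future()   # serve forever

asyncio.run(main())
```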
Voxel Grid System (voxel.py)
- Efficient spatial data structure for point cloud storage
- Chunked voxel implementation for memory optimization
- Configurable resolution and bounds
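A minimal sketch of the chunked idea: allocate small occupancy arrays only for chunks that actually contain points. Resolution, chunk size, and class names here are illustrative, not voxel.py's actual interface:

```python
import numpy as np

class ChunkedVoxelGrid:
    """Sparse voxel grid: only chunks that contain points are allocated,
    each as a small boolean occupancy array."""
    def __init__(self, resolution=0.05, chunk_size=16):
        self.res = resolution        # edge length of one voxel
        self.n = chunk_size          # voxels per chunk edge
        self.chunks = {}             # chunk index -> bool occupancy array

    def insert(self, point):
        voxel = np.floor(np.asarray(point) / self.res).astype(int)
        chunk_key = tuple(voxel // self.n)
        local = tuple(voxel % self.n)
        chunk = self.chunks.setdefault(
            chunk_key, np.zeros((self.n,) * 3, dtype=bool))
        chunk[local] = True

    def occupied(self, point):
        voxel = np.floor(np.asarray(point) / self.res).astype(int)
        chunk = self.chunks.get(tuple(voxel // self.n))
        return chunk is not None and bool(chunk[tuple(voxel % self.n)])

grid = ChunkedVoxelGrid()
grid.insert((1.23, 0.04, -0.51))
print(grid.occupied((1.23, 0.04, -0.51)))   # True
```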
Visualization Tools (visualization.py)
- Matplotlib-based 3D plotting
- Scatter and voxel rendering modes
- Real-time point cloud display
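The basic Matplotlib pattern, with synthetic points standing in for real sensor data (not the project's visualization.py):

```python
import matplotlib.pyplot as plt
import numpy as np

# Fake point cloud: 500 points with a group id used for colouring.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
groups = rng.integers(0, 4, size=500)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], c=groups, cmap="tab10", s=4)
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
plt.show()
```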
All settings are managed through config.yaml:

```yaml
network:
  socket_uri: "ws://0.0.0.0:8765"
  reconnect_interval: 5

navigation:
  enabled: true
  mode: "auto"            # auto/manual

vehicle:
  enabled: true

sensors:
  tfmini_s:               # LiDAR
    enabled: true
    port: "/dev/ttyUSB0"
    frequency: 100
  gyro:
    enabled: true
    port: "/dev/ttyACM0"
    frequency: 100
  optical_flow:
    enabled: false
    height: 3             # cm from ground
```
Data flows through the system as follows:

```
┌─────────────┐
│   Sensors   │  (LiDAR, Gyro, OptFlow)
└──────┬──────┘
       │
       ▼
┌─────────────┐
│    Argus    │  (Point Handler)
│ Calculates  │
│  3D Points  │
└──────┬──────┘
       │
   ┌───┴─────────────┐
   ▼                 ▼
┌───────────┐  ┌──────────────┐
│ Navigator │  │  WebSocket   │
│ Clusters  │  │ Communicator │
│  Points   │  └──────┬───────┘
└─────┬─────┘         │
      │               │
      ▼               ▼
┌───────────┐  ┌──────────────┐
│  Vehicle  │  │  Supervisor  │
│  Control  │  │   (Web UI)   │
└───────────┘  └──────────────┘
```
- Main Thread: Initialization and coordination
- Sensor Threads: One per enabled sensor (LiDAR, Gyro, OptFlow)
- Point Calculation Thread: Processes sensor data into 3D coordinates
- Navigation Thread: Clustering and path planning
- WebSocket Server Thread: Client communication
- Vehicle Control Thread: Motor control and movement execution
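A condensed sketch of this thread layout, with a shared stop event standing in for the real coordination logic:

```python
import threading
import time

stop = threading.Event()   # main thread signals shutdown to all workers

def worker(name, period):
    # Stand-in for a real loop (sensor read, point calc, nav step, ...).
    while not stop.is_set():
        time.sleep(period)

threads = [threading.Thread(target=worker, args=(n, 0.01), daemon=True, name=n)
           for n in ("lidar", "gyro", "point-calc",
                     "navigation", "websocket", "vehicle")]
for t in threads:
    t.start()

time.sleep(0.1)     # main thread: initialization and coordination
stop.set()          # coordinated shutdown
for t in threads:
    t.join()
```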
```bash
cd agent
pip install -r requirements.txt
```

Key Dependencies:
- numpy - Numerical computations
- numba - JIT compilation for performance
- websockets - Real-time communication
- pyserial - Sensor communication
- gpiozero - GPIO control for motors
- matplotlib - Visualization
- PyYAML - Configuration management
```bash
cd agent
python -m src.main
```

The system will:
- Initialize sensors based on config.yaml
- Start sensor data collection
- Begin point cloud generation
- Connect to WebSocket clients
- Start autonomous navigation (if enabled)
A Next.js-based web application providing real-time 3D visualization of the SLAM point cloud data using React Three Fiber.
- Next.js 16: React framework with server-side rendering
- React 19: UI library
- React Three Fiber: Three.js React wrapper for 3D rendering
- @react-three/drei: Helpers and abstractions for R3F
- Three.js: 3D graphics library
- TailwindCSS: Styling framework
- Real-time 3D Point Cloud Rendering: Displays incoming sensor data as 3D boxes
- Instanced Rendering: Efficient rendering of thousands of points using InstancedMesh
- Group Color Coding: Different colors for different point clusters
- Interactive Camera: OrbitControls for navigation
- Grid and Environment: Visual reference frame
```bash
cd supervisor
pnpm install
pnpm dev
```

Access the dashboard at http://localhost:3000.
The supervisor connects to the agent's WebSocket server to receive:
- Real-time point cloud data
- Point clustering information
- Vehicle status updates
Start the agent:

```bash
cd agent
# Edit config.yaml for your hardware setup
python -m src.main
```

Then start the supervisor:

```bash
cd supervisor
pnpm install
pnpm dev
```

Open http://localhost:3000 in your browser to see real-time 3D visualization.
Enable test mode in config.yaml:

```yaml
app:
  test_mode: true
  debug_mode: true
```

This uses pre-generated test data instead of reading from hardware sensors.
Debug mode captures additional diagnostic data for analysis:
- Raw sensor readings
- Calculated point coordinates
- Timing information
```
helios/
├── agent/                  # Python SLAM agent
│   ├── config.yaml         # Configuration file
│   ├── requirements.txt    # Python dependencies
│   └── src/
│       ├── main.py         # Entry point
│       ├── communicator.py # WebSocket server
│       ├── sensors/        # Sensor drivers
│       │   ├── PointHandler.py     # Sensor fusion
│       │   ├── LidarSensor.py
│       │   ├── GyroSensor.py
│       │   ├── OpticalFlowSensor.py
│       │   └── calculations.py     # Point math
│       ├── nereus/         # Navigation
│       │   ├── navigation.py       # Main navigator
│       │   └── nav_math.py         # Math utilities
│       ├── vehicle/        # Vehicle control
│       │   ├── ground.py   # Motor control
│       │   └── vehicle.py
│       └── ouranos/        # Visualization
│           ├── voxel.py
│           └── visualization.py
│
└── supervisor/             # Next.js dashboard
    ├── package.json
    └── src/
        └── app/
            ├── page.tsx    # Main 3D viewer
            └── layout.tsx
```
- TFmini-S LiDAR: Serial connection (USB / UART)
- Gyroscope: MSP protocol (USB/Serial)
- Optical Flow Sensor (optional but recommended): SPI connection
- Motor Driver: TB6612FNG or compatible
- GPIO: Raspberry Pi or compatible SBC
- Power: Appropriate voltage for motors and sensors
- The system supports frequencies up to 50kHz for sensor data processing
- Point calculation is optimized with Numba JIT compilation
- WebSocket communication uses binary format for efficiency
- The voxel grid system provides memory-efficient spatial storage
- Navigation runs autonomously when enabled in auto mode