BlindNav is a browser-based assistive tool designed to help visually impaired users navigate their surroundings using real-time object detection, voice feedback, and haptic alerts, all powered by in-browser AI.
- Real-Time Object Detection via webcam (COCO-SSD + TensorFlow.js)
- Voice Feedback for detected obstacles (Web Speech API)
- Haptic Feedback using the Vibration API (on supported devices)
- Voice Commands: start, stop, and resume detection hands-free
- Dark/Light Mode Toggle for an accessible UI
- Customizable Speech Rate & Volume
- Animated Bounding Boxes around detected objects
- Mini Map Navigation Panel
- Fully Responsive UI: works on desktops, tablets, and smartphones
- 100% Client-Side: no installation or server needed
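The core loop behind the first two features can be sketched as follows. This is a minimal illustration, not the repository's actual code: it assumes the TensorFlow.js and COCO-SSD `<script>` bundles are already loaded (exposing the global `cocoSsd`), and that a `<video>` element is showing the webcam feed. `cocoSsd.load()` and `model.detect(video)` are the real COCO-SSD API; `describeDetections` and its score threshold are illustrative helpers.

```javascript
// Pure helper: turn COCO-SSD predictions ({class, score, bbox}) into a
// short spoken phrase, keeping only reasonably confident detections.
function describeDetections(predictions, minScore = 0.6) {
  const names = predictions
    .filter(p => p.score >= minScore)
    .map(p => p.class);
  return names.length ? `Detected ${names.join(', ')}` : '';
}

// Detection loop: run the model on each frame and speak what it finds.
async function runDetectionLoop(video) {
  const model = await cocoSsd.load();            // pre-trained COCO-SSD model
  async function tick() {
    const predictions = await model.detect(video);
    const phrase = describeDetections(predictions);
    if (phrase) {
      const utterance = new SpeechSynthesisUtterance(phrase);
      utterance.rate = 1.0;                      // user-adjustable speech rate
      utterance.volume = 1.0;                    // user-adjustable volume
      speechSynthesis.speak(utterance);          // Web Speech API feedback
    }
    requestAnimationFrame(tick);                 // schedule the next frame
  }
  tick();
}
```

The `rate` and `volume` fields on `SpeechSynthesisUtterance` are the standard hooks for the "Customizable Speech Rate & Volume" feature.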
| Technology | Purpose |
|---|---|
| HTML/CSS/JS | UI, interactivity |
| TensorFlow.js | In-browser object detection |
| COCO-SSD | Pre-trained object detection |
| Web Speech API | Voice feedback + commands |
| Vibration API | Haptic feedback on mobile |
| WebRTC (getUserMedia) | Webcam access |
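Webcam access through `getUserMedia` typically looks like the sketch below. The constraint values and the idea of preferring the rear camera on phones are assumptions for illustration, not taken from the repository; `navigator.mediaDevices.getUserMedia` is the standard WebRTC call.

```javascript
// Pure helper: build getUserMedia constraints. On phones, 'environment'
// requests the rear-facing camera, which points at the user's surroundings.
function cameraConstraints(preferRear = true) {
  return {
    video: preferRear ? { facingMode: 'environment' } : true,
    audio: false,
  };
}

// Attach the webcam stream to a <video> element so the model can read frames.
async function startCamera(videoElement) {
  const stream = await navigator.mediaDevices.getUserMedia(cameraConstraints());
  videoElement.srcObject = stream;
  await videoElement.play();
  return videoElement;
}
```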
- Clone the repository:
  `git clone https://github.com/your-username/blindnav.git`
- Open `index.html` in a browser.
- Allow camera and microphone permissions when prompted.

Works best in Chrome or Firefox over HTTPS (e.g., GitHub Pages).
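Because browser support varies (the Vibration API, in particular, is mobile-only), haptic alerts are best wrapped in a feature check. The sketch below is illustrative; `navigator.vibrate` is the standard Vibration API call, while the function name and pattern are assumptions.

```javascript
// Fire a haptic alert if the Vibration API is available; return whether
// the device could vibrate so the caller can fall back to audio alone.
function hapticAlert(pattern = [200, 100, 200]) {  // buzz-pause-buzz, in ms
  if (typeof navigator !== 'undefined' && 'vibrate' in navigator) {
    navigator.vibrate(pattern);
    return true;
  }
  return false;  // unsupported browser or desktop device
}
```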
Visit the Live Demo
- Your Name (@yourusername)
MIT License