ASLigator is a mobile app designed to bridge the communication gap between ASL signers and people who do not sign. Combining computer vision (CV), machine learning (ML), and natural language processing (NLP), ASLigator lets a user point their camera at someone signing in ASL and see the gestures translated into readable English text in real time, so that ASL signers can communicate effectively and naturally in their preferred language.
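At a high level, the translation pipeline has three stages: landmark extraction (CV), gesture classification (ML), and gloss-to-English generation (NLP). Below is a minimal Python sketch of how those stages might compose on a frame-by-frame basis; every function name, the `GesturePrediction` type, and the 0.8 confidence threshold are illustrative placeholders rather than the project's final API, and the stage bodies are left as stubs.

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional


@dataclass
class GesturePrediction:
    gloss: str          # e.g. "THANK-YOU"
    confidence: float   # classifier probability in [0, 1]


def extract_landmarks(frame) -> Optional[list]:
    """CV stage: detect hand landmarks in one camera frame (e.g. with MediaPipe)."""
    raise NotImplementedError  # stub for this sketch


def classify_gesture(landmarks) -> GesturePrediction:
    """ML stage: map a landmark set to an ASL gloss with a trained classifier."""
    raise NotImplementedError  # stub for this sketch


def glosses_to_english(glosses: List[str]) -> str:
    """NLP stage: turn a sequence of ASL glosses into fluent English text."""
    raise NotImplementedError  # stub for this sketch


def translate(frames: Iterable) -> str:
    """End-to-end translation: camera frames in, English sentence out."""
    glosses: List[str] = []
    for frame in frames:
        landmarks = extract_landmarks(frame)
        if landmarks is None:
            continue  # no hands visible in this frame
        prediction = classify_gesture(landmarks)
        if prediction.confidence >= 0.8:  # threshold chosen only for illustration
            glosses.append(prediction.gloss)
    return glosses_to_english(glosses)
```

Keeping the stages this loosely coupled would let the gesture classifier and the NLP model be retrained independently as user feedback arrives.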
- Team Lead - Amanda Yu
- Backend Developer - Eric Wang
- Frontend Developer - Alvin Bang
- Machine Learning Specialist - Robert Meekins
- Real-time ASL gesture recognition
- Context-aware ASL translation using NLP
- Intuitive and accessible user interface
- Support for accessibility features like voice commands and large buttons
- Continuous learning and improvement with user feedback
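As one way the feedback loop above could be collected, the sketch below shows a minimal Flask endpoint for submitting correction reports. The `/api/feedback` route, the request fields, and the in-memory list are assumptions made for illustration; the actual backend would presumably persist reports to Firebase for later retraining.

```python
from datetime import datetime, timezone

from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for a real datastore; reports would normally go to Firebase.
_feedback_reports = []


@app.post("/api/feedback")  # hypothetical route, not a finalized API
def submit_feedback():
    """Accept a correction report: what the app predicted vs. what was actually signed."""
    data = request.get_json(silent=True) or {}
    required = {"predicted_text", "corrected_text"}
    if not required.issubset(data):
        missing = sorted(required - data.keys())
        return jsonify({"error": f"missing fields: {missing}"}), 400

    report = {
        "predicted_text": data["predicted_text"],
        "corrected_text": data["corrected_text"],
        "submitted_at": datetime.now(timezone.utc).isoformat(),
    }
    _feedback_reports.append(report)
    return jsonify({"status": "received", "count": len(_feedback_reports)}), 201


if __name__ == "__main__":
    app.run(debug=True)
```

Collected reports like these could then be reviewed and folded back into the training data for the gesture classifier and the NLP model.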
- Deaf and hard-of-hearing individuals who rely on ASL for communication.
- ASL learners who want to reinforce their understanding of sign language.
- Hearing individuals who wish to communicate with ASL users but lack sign language proficiency.
- Educational institutions and teachers looking for tools to support deaf and hard-of-hearing students.
- Employers and workplaces aiming to foster inclusive environments by improving communication between deaf and hearing colleagues.
- As a user, I want the app to interpret ASL into spoken or written language so that I can communicate seamlessly with others.
- As a user, I want the app to facilitate two-way communication between ASL users and non-ASL users so that both parties can understand each other without barriers.
- As a user, I want the app to include accessibility features (e.g., voice commands, large buttons) so that I can use it comfortably.
- As a user, I want the app to provide highly accurate interpretations of ASL and spoken language so that misunderstandings are minimized.
- As a user, I want to be able to provide feedback or report errors in interpretation so that the app can improve over time.
- Frontend: JavaScript, React Native, Expo
- Backend: Python, Flask, Firebase
- Computer Vision: OpenCV, MediaPipe (see the landmark-extraction sketch after this list)
- Machine Learning: Keras, TensorFlow
- Natural Language Processing: context-aware models for translating recognized signs into English text
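To make the CV and ML layers of the stack concrete, the sketch below uses MediaPipe's Hands solution to pull 21 hand landmarks out of each OpenCV camera frame and feeds the flattened coordinates to a Keras classifier. The model path, the label set, the single-hand assumption, and the confidence threshold are placeholders for illustration, not project decisions.

```python
import cv2
import mediapipe as mp
import numpy as np
from tensorflow import keras

mp_hands = mp.solutions.hands


def landmarks_from_frame(frame, hands):
    """Return a flat (63,) array of one hand's landmarks, or None if no hand is found."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    hand = result.multi_hand_landmarks[0]  # single-hand assumption for this sketch
    return np.array([[lm.x, lm.y, lm.z] for lm in hand.landmark], dtype=np.float32).flatten()


def run_camera_loop(model_path="asl_classifier.h5", labels=("A", "B", "C")):
    """Classify the visible hand shape frame by frame and overlay the predicted gloss."""
    model = keras.models.load_model(model_path)  # placeholder path to a trained classifier
    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            features = landmarks_from_frame(frame, hands)
            if features is not None:
                probs = model.predict(features[np.newaxis, :], verbose=0)[0]
                gloss = labels[int(np.argmax(probs))]  # labels is a placeholder label set
                cv2.putText(frame, gloss, (10, 40),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
            cv2.imshow("ASLigator prototype", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()
```

A real-time build would likely classify short windows of landmark sequences rather than single frames, so that motion-based signs can be recognized as well as static hand shapes.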
- Prototype Development (February 2025)
  - Computer vision integration with OpenCV
  - Initial ASL dataset for training
  - Basic UI and backend setup
- Core Feature Development (March 2025)
  - Enhanced ASL gesture recognition
  - Improved UI/UX
  - Backend infrastructure for data processing
- Testing & Refinement (April 2025)
  - Unit testing and bug fixes
  - Neural network optimization
  - User and developer documentation finalization