An AI-powered American Sign Language (ASL)-to-text translation system that helps bridge communication gaps between the deaf and hard-of-hearing community and hearing people.
- Real-time ASL hand sign recognition
- Support for the 24 static ASL letters (A-Y; J and Z are excluded because they require motion)
- High accuracy (99.80% on test set)
- User-friendly web interface
- Confidence score display for predictions
- Frontend: Streamlit
- Backend: Python, TensorFlow/Keras
- Data Processing: OpenCV, NumPy, Pandas
- Model: Convolutional Neural Network (CNN)
- Clone the repository:

```bash
git clone [your-repository-url]
cd signvision
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Run the application:

```bash
streamlit run app.py
```

The application will open in your default web browser. Upload an image of an ASL hand sign, and the model will predict the corresponding letter.
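Under the hood, the prediction step might look like the following minimal sketch. The model file name `model.h5`, the `predict_letter` helper, and the exact preprocessing are illustrative assumptions, not necessarily what `app.py` does:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# The 24 static letters; J is skipped and Z is absent because both require motion
LETTERS = list("ABCDEFGHIKLMNOPQRSTUVWXY")

model = load_model("model.h5")  # assumed model file name

def predict_letter(image_path: str):
    # Match the training format: 28x28 grayscale, pixel values scaled to [0, 1]
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (28, 28)).astype("float32") / 255.0
    batch = img.reshape(1, 28, 28, 1)
    probs = model.predict(batch)[0]
    idx = int(np.argmax(probs))
    # Return the predicted letter and its confidence score for display
    return LETTERS[idx], float(probs[idx])
```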
- Input Layer: 28x28x1 (grayscale images)
- Multiple Convolutional Layers with BatchNormalization and MaxPooling
- Dense Layers with Dropout for regularization
- Output Layer: 24 classes (ASL letters A-Y, excluding J and Z)
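A minimal Keras sketch matching this description; the filter counts, kernel sizes, dense width, and dropout rate are illustrative assumptions rather than the exact trained configuration:

```python
from tensorflow.keras import layers, models

def build_model(num_classes: int = 24) -> models.Sequential:
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),                   # 28x28 grayscale input
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.4),                               # regularization
        layers.Dense(num_classes, activation="softmax"),   # 24 ASL letters
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```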
The model is trained on the Sign Language MNIST dataset from Kaggle, which contains 27,455 training and 7,172 test images.
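The dataset is distributed as CSV files, each row holding a label plus 784 (28x28) pixel values. A loading sketch under those assumptions, with file names as they appear in the Kaggle download and a label remap reflecting the dataset's convention of skipping 9 (J):

```python
import numpy as np
import pandas as pd
from tensorflow.keras.utils import to_categorical

def load_split(csv_path: str):
    df = pd.read_csv(csv_path)
    labels = df["label"].to_numpy()
    # Labels run 0-24 with 9 (J) absent; remap to a dense 0-23 range
    labels = np.where(labels > 9, labels - 1, labels)
    pixels = df.drop(columns=["label"]).to_numpy(dtype="float32") / 255.0
    images = pixels.reshape(-1, 28, 28, 1)
    return images, to_categorical(labels, num_classes=24)

x_train, y_train = load_split("sign_mnist_train.csv")  # 27,455 rows
x_test, y_test = load_split("sign_mnist_test.csv")     # 7,172 rows
```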
- Test Accuracy: 99.80%
- Inference is fast enough for real-time use
- Robust to variation in hand size and position within the frame
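The reported test accuracy would come from an evaluation run along these lines, using the hypothetical `build_model` and `load_split` helpers sketched above (epoch count and batch size are illustrative):

```python
# Train on the Kaggle training split and measure held-out accuracy
model = build_model()
model.fit(x_train, y_train, epochs=10, batch_size=128, validation_split=0.1)
loss, acc = model.evaluate(x_test, y_test, verbose=0)
print(f"Test accuracy: {acc:.2%}")
```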
- Support for the complete ASL alphabet, including the motion-based letters J and Z
- Gesture recognition capabilities
- Sentence-level translation
- Mobile application development
- Enhanced UI/UX features
- [Team Member 1] - [Role]
- [Team Member 2] - [Role]
- [Team Member 3] - [Role]
- [Team Member 4] - [Role]
[Choose an appropriate license]
- Sign Language MNIST dataset from Kaggle
- [Add any other acknowledgments]