The Sign Language Translator is a real-time system that captures and translates sign language gestures into text. Utilizing computer vision and deep learning, it enables seamless communication between sign language users and non-signers by identifying hand, face, and body movements. The system processes video input, detects keypoints using MediaPipe Holistic, and predicts gestures with an LSTM-based neural network trained on sign language datasets.
- Detects and translates sign language gestures instantly.
- Recognizes hand, face, and body movements using MediaPipe Holistic (see the keypoint-extraction sketch after this list).
- Utilizes LSTM neural networks for accurate classification of gestures.
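
The detection pipeline described above can be sketched roughly as follows. This is a minimal illustration assuming the standard `mp.solutions.holistic` API; the helper name `extract_keypoints`, the confidence thresholds, and the window title are placeholders rather than the project's exact code.

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    """Flatten pose, face, and hand landmarks into one feature vector per frame."""
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility] for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z] for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[lm.x, lm.y, lm.z] for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z] for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])  # 1662 values per frame

cap = cv2.VideoCapture(0)  # webcam input
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB, while OpenCV captures BGR
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        keypoints = extract_keypoints(results)  # fed to the LSTM as part of a frame sequence
        cv2.imshow("Sign Language Translator", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```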
- Python: Main programming language for data processing and deep learning.
- OpenCV: Handles real-time video processing and UI display.
- MediaPipe: Detects and tracks body keypoints for gesture analysis.
- NumPy: Efficient numerical computations and data manipulation.
- TensorFlow & Keras: Used to build and train the LSTM-based neural network (a minimal model sketch follows this list).
- Scikit-learn: Provides utilities for model evaluation and data processing.
- Webcam: Captures video input for real-time processing.
- Jupyter Notebook: Used for developing and testing the model.
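
Putting TensorFlow/Keras and Scikit-learn together, the gesture classifier can be sketched as below. The layer sizes, the 30-frame sequence length, the 1662-feature frames, and the example labels are assumptions for illustration; the project's actual configuration may differ.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.utils import to_categorical

actions = ["hello", "thanks", "iloveyou"]      # example gesture labels
sequence_length, num_features = 30, 1662       # frames per sample, keypoints per frame

# Stacked LSTM layers read the keypoint sequence; dense layers classify the gesture
model = Sequential([
    Input(shape=(sequence_length, num_features)),
    LSTM(64, return_sequences=True, activation="relu"),
    LSTM(128, return_sequences=True, activation="relu"),
    LSTM(64, return_sequences=False, activation="relu"),
    Dense(64, activation="relu"),
    Dense(32, activation="relu"),
    Dense(len(actions), activation="softmax"),  # one probability per gesture
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])

# Placeholder arrays stand in for the recorded keypoint sequences and labels
X = np.random.rand(90, sequence_length, num_features).astype("float32")
y = to_categorical(np.random.randint(len(actions), size=90), num_classes=len(actions))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
```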
Developed by a team of 3 members in 2024.
If you find this project useful, consider starring it on GitHub. Feel free to DM me if you’d like to collaborate with us.

