
SignBridge

AI-Powered Vietnamese Sign Language Translation for Inclusive Education

SignBridge is an intelligent web application that automatically converts Vietnamese speech from video and audio into animated Vietnamese Sign Language (VSL) using 3D avatars. Built to bridge communication gaps in education, SignBridge enables deaf and hard-of-hearing students to access educational content through real-time sign language interpretation.


🎯 Project Overview

SignBridge leverages state-of-the-art AI technologies to create an end-to-end pipeline that transforms spoken Vietnamese into visual sign language animations. The system processes uploaded videos or audio files through speech recognition, linguistic conversion, and 3D animation rendering to produce synchronized sign language interpretations.

Core Goals

  • Accessibility: Make educational content accessible to the deaf community
  • Automation: Reduce manual effort in creating sign language interpretations
  • Accuracy: Utilize comprehensive Vietnamese Sign Language dictionaries (3,873+ signs)
  • Real-time Processing: Fast conversion from speech to animated signs
  • Open Source: Contribute to inclusive education initiatives

🛠️ Tech Stack

Backend

  • Python 3.10+ - Core programming language with type hints
  • Flask - Web framework for API and templating
  • OpenAI Whisper - Speech-to-text (STT) engine for Vietnamese
  • PyTorch - Machine learning framework for Whisper
  • FFmpeg - Audio/video processing

Architecture & Design Patterns

  • SOLID Principles - Single Responsibility, Dependency Inversion, Open/Closed
  • Layered Architecture - Separation of concerns (Config, Services, Repositories, Models)
  • Repository Pattern - Data access abstraction with caching
  • Service Layer - Business logic encapsulation
  • Type Safety - Comprehensive type hints with mypy static analysis
  • Dependency Injection - Loose coupling between components
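
As a hedged sketch of how the Repository Pattern and Dependency Injection listed above can fit together (the class and method names here are illustrative, not the project's actual API):

```python
from typing import Protocol


class ManifestStore(Protocol):
    """Abstraction the service depends on (Dependency Inversion)."""
    def load(self) -> dict: ...
    def save(self, manifest: dict) -> None: ...


class InMemoryManifestStore:
    """Simple test double satisfying the protocol."""
    def __init__(self) -> None:
        self._data: dict = {"videos": []}

    def load(self) -> dict:
        return self._data

    def save(self, manifest: dict) -> None:
        self._data = manifest


class VideoService:
    """Business logic receives its repository via the constructor,
    so it can be tested without touching the filesystem or Flask."""
    def __init__(self, store: ManifestStore) -> None:
        self._store = store

    def add_video(self, vid: str, title: str) -> None:
        manifest = self._store.load()
        manifest["videos"].append({"id": vid, "title": title})
        self._store.save(manifest)


store = InMemoryManifestStore()
service = VideoService(store)
service.add_video("demo1", "Bài học về động vật")
```

Because the service only sees the protocol, a file-backed repository and this in-memory double are interchangeable in tests.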

Frontend

  • HTML5/CSS3 - Modern responsive UI
  • JavaScript (ES6+) - Interactive features
  • Font Awesome 6.4 - Icon library

Sign Language Processing

  • HamNoSys - Hamburg Notation System for sign language representation
  • SiGML - Signing Gesture Markup Language
  • JASigning/CWA (CWASA) - 3D avatar animation engine with support for multiple avatars (Anna, Marc, Luna)

Data & Resources

  • VSL Dictionary - 3,873 Vietnamese sign language entries in HamNoSys notation
  • Training Datasets - DataTrain1 (with diacritics), DataTrain2 (ASCII)
  • Synonyms Database - 265KB of Vietnamese-VSL synonym mappings
  • Seed Vocabulary - Educational terminology in JSON format

Optional Integrations

  • Figma API - Design token synchronization for consistent UI theming

✨ Features

  • 🎤 Automated Speech-to-Text: Convert Vietnamese audio/video to text using Whisper AI
  • 🔄 Text-to-Gloss Conversion: Transform Vietnamese sentences into sign language glosses with synonym support
  • ✋ Gloss-to-SiGML Generation: Generate animation markup from glosses using comprehensive VSL dictionary
  • 🎭 3D Avatar Animation: Render sign language with customizable 3D avatars (Anna, Marc, Luna)
  • 📺 Picture-in-Picture Mode: Simultaneous display of original video and signing avatar
  • 📚 Learning Hub: Browse and manage educational content library
  • 🎨 Figma Integration: Automatic design token extraction for brand consistency
  • ⚡ Batch Processing: Handle multiple video segments with playlist generation
  • 🌐 Bilingual Support: Vietnamese and English interface (EN/VN toggle)
  • 📱 Responsive Design: Mobile-friendly interface with adaptive layouts
  • 🔍 Content Filtering: Filter by subject, grade level, and duration

📁 Folder Structure

SignBridge/
├── app.py                      # Main Flask application (thin controllers with routing only)
├── requirements.txt            # Python dependencies and package versions
├── mypy.ini                    # Type checking configuration for static analysis
├── .env.example                # Environment variable template
├── .gitignore                  # Git exclusion patterns
├── src/                        # Source code with layered architecture
│   ├── __init__.py
│   ├── exceptions.py          # Custom exception hierarchy (ValidationError, SignBridgeError)
│   ├── config/                # Configuration management layer
│   │   ├── __init__.py
│   │   └── settings.py        # AppConfig and FigmaConfig dataclasses (200 lines)
│   ├── models/                # Domain models and data structures
│   │   ├── __init__.py
│   │   └── video.py           # Video and Manifest models with serialization (107 lines)
│   ├── repositories/          # Data access layer with Repository Pattern
│   │   ├── __init__.py
│   │   └── manifest_repository.py  # ManifestRepository with caching (169 lines)
│   ├── services/              # Business logic layer (Service Layer Pattern)
│   │   ├── __init__.py
│   │   ├── figma_service.py   # Figma API integration (154 lines)
│   │   ├── file_service.py    # File operations abstraction (90 lines)
│   │   ├── pipeline_service.py     # Processing pipeline orchestration (190 lines)
│   │   └── video_service.py   # Video business logic (upload, viewing, hub) (237 lines)
│   ├── processors/            # Processing pipeline components
│   │   ├── __init__.py
│   │   ├── stt_processor.py   # Speech-to-text processor with Whisper (200 lines)
│   │   ├── text2gloss_processor.py  # Text-to-gloss converter (268 lines)
│   │   └── gloss2sigml_processor.py # Gloss-to-SiGML generator (232 lines)
│   └── utils/                 # Utility modules and helpers
│       ├── __init__.py
│       ├── file_io.py         # Encoding-safe file operations (61 lines)
│       ├── logging_utils.py   # Advanced logging decorators and utilities (197 lines)
│       └── validation.py      # Input validation with security checks (142 lines)
├── scripts/                    # Legacy command-line interface (being replaced by processors)
│   ├── 1_stt_whisper.py       # Speech-to-text CLI wrapper
│   ├── 2_text2gloss.py        # Text-to-gloss CLI wrapper
│   ├── 3_gloss2sigml.py       # Gloss-to-SiGML CLI wrapper
│   ├── 3b_segments_to_playlist.py  # Video segmentation CLI
│   └── 4_preview.html         # Preview template for testing
├── VSL/                        # Vietnamese Sign Language resources
│   ├── Dictionary VSL HamNoSys # 3,873 VSL signs in HamNoSys notation (1.2MB)
│   ├── DataTrain1.txt         # Training data with Vietnamese diacritics (100KB)
│   ├── DataTrain2.txt         # Training data in ASCII format (76KB)
│   ├── Synonyms.txt           # Synonym mappings for text normalization (265KB)
│   ├── README.md              # VSL project documentation
│   └── LICENSE                # GNU GPL v3
├── templates/                  # Flask HTML templates
│   ├── index.html             # Main landing page and learning hub
│   ├── result.html            # Video playback with avatar overlay (PIP mode)
│   └── hub.html               # Content management interface
├── static/                     # Static assets
│   ├── cwasa/                 # CWASA avatar system files
│   │   ├── anna/              # Avatar: Anna
│   │   ├── marc/              # Avatar: Marc
│   │   └── luna/              # Avatar: Luna
│   ├── cwacfg.json            # Avatar configuration (FPS, camera, speed)
│   ├── cwaclientcfg.json      # Client-side avatar settings
│   ├── cwasa-avatar-only.html # Standalone avatar viewer
│   ├── CWASA-plus-gui-panel.html  # Avatar with control panel
│   ├── styles.css             # Main stylesheet (37KB)
│   ├── script.js              # Frontend JavaScript (9KB)
│   ├── logo.png               # SignBridge logo
│   └── uploads/               # User-uploaded files (videos/audio)
├── Frontend/                   # Static marketing site
│   ├── index.html             # Landing page alternative
│   ├── styles.css             # Standalone styles
│   ├── script.js              # Standalone scripts
│   └── logo.png               # Logo asset
├── assets/                     # Media assets
├── content/                    # Content metadata
├── out/                        # Generated output files (SiGML, transcripts)
├── seed_vocabulary.json        # Educational terminology seed data (1.4KB)
├── seed_vocabulary.csv         # CSV version of seed vocabulary
└── .github/                    # GitHub configuration (empty)

Key Directories Explained

  • src/: NEW - Complete refactored codebase following SOLID principles
    • config/: Centralized configuration with environment variable support
    • models/: Type-safe domain models (Video, Manifest) with serialization
    • repositories/: Data access layer with caching and atomic writes
    • services/: Business logic (VideoService, PipelineService, FileService, FigmaService)
    • processors/: Processing pipeline (STT, Text2Gloss, Gloss2SiGML) as importable classes
    • utils/: Reusable utilities (logging, validation, file I/O)
  • scripts/: Legacy CLI wrappers (being phased out in favor of src/processors/)
  • VSL/: Vietnamese Sign Language dictionary and training data (research project by Assoc. Prof. Dr. Nguyen Chi Ngon)
  • static/cwasa/: Complete JASigning avatar system with 3 avatar options
  • templates/: Flask-rendered pages with dynamic content and avatar integration
  • out/: Generated files including transcripts (TXT), glosses (TXT), and SiGML animations (XML)

Architecture Benefits

The refactored src/ directory provides:

  • 100% testable - Business logic decoupled from Flask
  • Type-safe - Comprehensive type hints with mypy validation
  • Reusable - Services can be used in CLI, API, or background jobs
  • Maintainable - Clear separation of concerns (25% code reduction in app.py)
  • Performant - Repository caching (3x faster manifest access)
  • Secure - Input validation with path traversal prevention
  • Observable - Advanced logging with performance monitoring
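
The "repository caching" benefit can be sketched as an mtime-based cache in the spirit of ManifestRepository (names and details here are illustrative, not the project's actual implementation): the JSON manifest is re-read only when the file changes on disk.

```python
import json
import tempfile
from pathlib import Path
from typing import Optional


class CachedManifest:
    """Re-read the manifest file only when its modification time changes."""
    def __init__(self, path: Path) -> None:
        self._path = path
        self._cache: Optional[dict] = None
        self._mtime: float = -1.0

    def load(self) -> dict:
        mtime = self._path.stat().st_mtime
        if self._cache is None or mtime != self._mtime:
            # Cache miss: the file is new or changed since the last read.
            self._cache = json.loads(self._path.read_text(encoding="utf-8"))
            self._mtime = mtime
        return self._cache


# Demo against a throwaway manifest file.
path = Path(tempfile.mkdtemp()) / "manifest.json"
path.write_text(json.dumps({"videos": []}), encoding="utf-8")
repo = CachedManifest(path)
manifest = repo.load()
```

Repeated calls return the cached object until the file's mtime changes, which is where the speedup for frequent manifest access comes from.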

📦 Installation & Setup

Prerequisites

  • Python: 3.10 or higher (for type hints compatibility)
  • FFmpeg: Required for audio/video processing
  • pip: Python package manager
  • Git: For cloning the repository
  • mypy (optional): For static type checking during development

System Requirements

  • OS: Linux, macOS, or Windows (WSL recommended)
  • RAM: 4GB minimum (8GB+ recommended for Whisper models)
  • Storage: 2GB free space for models and dependencies
  • GPU: Optional (CUDA-compatible GPU accelerates Whisper inference)

Installation Steps

  1. Clone the repository

    git clone https://github.com/sonmessia/SignBridge.git
    cd SignBridge
  2. Install Python dependencies

    # Install from requirements.txt (recommended)
    pip install -r requirements.txt

    Note: For speech recognition, you can use either openai-whisper (official implementation) or faster-whisper (a faster, more memory-efficient alternative). Install only one of these packages, not both.

    • Use openai-whisper for maximum compatibility.
    • Use faster-whisper for improved performance, especially on limited hardware.
    # Option 1: Manual install with openai-whisper
    pip install flask torch openai-whisper ffmpeg-python
    # OR
    # Option 2: Manual install with faster-whisper
    pip install flask torch faster-whisper ffmpeg-python
  3. Install FFmpeg (if not already installed)

    # macOS
    brew install ffmpeg
    
    # Ubuntu/Debian
    sudo apt update && sudo apt install ffmpeg
    
    # Windows (use Chocolatey)
    choco install ffmpeg
  4. Verify installation

    python -c "import whisper; print('Whisper installed successfully')"
    ffmpeg -version
  5. Optional: Install development tools

    # Type checking with mypy
    pip install mypy
    
    # Run type checker
    mypy app.py src/
    
    # Code formatting (optional)
    pip install black flake8
  6. Configure environment variables

    Copy the example environment file and configure:

    cp .env.example .env
    # Edit .env with your configuration

    Or set environment variables directly:

    export SECRET_KEY="your-secret-key-here"
    export FIGMA_URL="https://www.figma.com/design/YOUR_FILE_KEY"
    export FIGMA_API_KEY="YOUR_API_KEY"
  7. Optional: Configure Figma integration (deprecated method)

    Create ~/.cursor/mcp.json (or set environment variables):

    {
      "figma": {
        "fileUrl": "https://www.figma.com/design/YOUR_FILE_KEY",
        "apiKey": "YOUR_FIGMA_API_KEY"
      }
    }

    Or set environment variables:

    export FIGMA_URL="https://www.figma.com/design/YOUR_FILE_KEY"
    export FIGMA_API_KEY="YOUR_API_KEY"

🚀 Usage

Starting the Application

  1. Run the Flask server

    python app.py

    By default, Flask starts on 127.0.0.1:5000. To run the application on http://0.0.0.0:5001 (accessible at http://localhost:5001), ensure your app.py contains:

    app.run(host="0.0.0.0", port=5001)
  2. Access the web interface

    Open your browser and navigate to:

    http://localhost:5001
    

Using the Web Interface

  1. Upload Video/Audio

    • Navigate to the "Demo AI" or "Try Now" section
    • Click "Upload" and select your Vietnamese audio/video file
    • Supported formats: MP4, MP3, WAV, M4A
  2. Processing Pipeline

    • The system automatically processes your file through:
      • Speech-to-text conversion (Whisper)
      • Text-to-gloss translation
      • Gloss-to-SiGML generation
      • Avatar animation rendering
  3. View Results

    • Watch the synchronized video with sign language avatar
    • Toggle Picture-in-Picture mode for flexible viewing
    • Download generated files (transcript, gloss, SiGML)
  4. Learning Hub

    • Browse existing educational content
    • Filter by subject, grade level, or duration
    • Add custom videos to your collection

Command-Line Usage (Scripts)

1. Speech-to-Text

python scripts/1_stt_whisper.py \
  --input path/to/video.mp4 \
  --output out/transcript.txt \
  --model small

Options:

  • --model: Whisper model size (tiny, base, small, medium, large)
  • --language: Language code (default: vi for Vietnamese)
  • --beam_size: Beam search size (default: 5)

2. Text-to-Gloss

python scripts/2_text2gloss.py \
  --input out/transcript.txt \
  --output out/gloss.txt \
  --pairs VSL/DataTrain1.txt \
  --synonyms VSL/Synonyms.txt \
  --seed_json seed_vocabulary.json

Options:

  • --pairs: Training data file (DataTrain1.txt or DataTrain2.txt)
  • --synonyms: Synonym mapping file
  • --seed_json: Preferred vocabulary for educational terms
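
The core idea of the text-to-gloss step can be sketched as a two-stage lookup: normalize words through the synonym table, then map them to glosses, passing unknown tokens through. The tiny tables below are placeholders, not the real VSL data.

```python
# Placeholder tables -- the real ones come from Synonyms.txt and DataTrain*.txt.
synonyms = {"miu": "mèo"}                     # variant -> canonical word
phrase_table = {"mèo": "MÈO", "học": "HỌC"}   # canonical word -> gloss


def text_to_gloss(sentence: str) -> list:
    """Map each word to a gloss, falling back to the uppercased word."""
    glosses = []
    for word in sentence.lower().split():
        word = synonyms.get(word, word)        # synonym normalization
        glosses.append(phrase_table.get(word, word.upper()))
    return glosses


print(text_to_gloss("Học miu"))  # -> ['HỌC', 'MÈO']
```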

3. Gloss-to-SiGML

python scripts/3_gloss2sigml.py \
  --input out/gloss.txt \
  --dict "VSL/Dictionary VSL HamNoSys" \
  --output out/demo.sigml

Options:

  • --dict: VSL dictionary path
  • --bimanual: Bimanual mode (auto, always, never)
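
A minimal sketch of the gloss-to-SiGML step: look each gloss up in a gloss-to-HamNoSys dictionary and emit XML. The dictionary entry below is a placeholder, and the element names (`sigml`, `hns_sign`, `hamnosys_manual`) follow common SiGML usage; treat them as an assumption rather than this project's exact output.

```python
import xml.etree.ElementTree as ET

# Placeholder entry -- the real data lives in "VSL/Dictionary VSL HamNoSys".
vsl_dict = {"MÈO": "hamflathand,hamextfingeru"}


def glosses_to_sigml(glosses: list) -> str:
    """Emit one hns_sign element per gloss found in the dictionary."""
    root = ET.Element("sigml")
    for gloss in glosses:
        if gloss not in vsl_dict:
            continue  # glosses missing from the dictionary are skipped
        sign = ET.SubElement(root, "hns_sign", {"gloss": gloss})
        manual = ET.SubElement(sign, "hamnosys_manual")
        for symbol in vsl_dict[gloss].split(","):
            ET.SubElement(manual, symbol)  # one element per HamNoSys symbol
    return ET.tostring(root, encoding="unicode")


sigml = glosses_to_sigml(["MÈO", "UNKNOWN"])
print(sigml)
```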

4. Playlist Generation (for segmented videos)

python scripts/3b_segments_to_playlist.py \
  --segments segments.json \
  --output out/playlist.m3u8

API Endpoints

Endpoint                     Method  Description
/                            GET     Landing page with featured content
/hub                         GET     Learning hub interface
/upload                      POST    Upload video/audio for processing
/watch/<vid>                 GET     Watch video with avatar overlay
/result                      GET     View processing results
/preview                     GET     Preview generated SiGML
/add_to_hub                  POST    Add processed video to learning hub
/assets/<filename>           GET     Serve asset files
/static/uploads/<filename>   GET     Serve uploaded files
/out/<filename>              GET     Serve generated output files

⚙️ Configuration

Environment Variables

The application uses src/config/settings.py to load configuration from environment variables or .env file:

# Flask Configuration
export FLASK_SECRET_KEY="your-secret-key-here"  # Required for sessions
export FLASK_ENV=development  # or production
export FLASK_DEBUG=1          # Enable debug mode

# Directory Configuration (optional - defaults provided)
export UPLOAD_DIR="static/uploads"
export OUT_DIR="out"
export ASSETS_DIR="assets"
export CONTENT_DIR="content"

# Figma Integration (Optional)
export FIGMA_URL="https://www.figma.com/design/YOUR_FILE_KEY"
export FIGMA_API_KEY="your_figma_api_key"

Or use .env file (recommended):

# Create .env file from template
cp .env.example .env

# Edit .env with your values
nano .env

Configuration Classes:

  • AppConfig: Main application configuration (paths, secret key, logging)
  • FigmaConfig: Figma API settings (optional)

Both are loaded automatically via AppConfig.from_env() in app.py.
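
As a hedged sketch of what an environment-backed config class can look like (field names and defaults here are illustrative, not necessarily those in src/config/settings.py):

```python
import os
from dataclasses import dataclass
from pathlib import Path


@dataclass(frozen=True)
class AppConfig:
    """Immutable application settings loaded from the environment."""
    secret_key: str
    upload_dir: Path
    out_dir: Path

    @classmethod
    def from_env(cls) -> "AppConfig":
        # Fall back to sensible defaults when a variable is unset.
        return cls(
            secret_key=os.environ.get("FLASK_SECRET_KEY", "dev-only-key"),
            upload_dir=Path(os.environ.get("UPLOAD_DIR", "static/uploads")),
            out_dir=Path(os.environ.get("OUT_DIR", "out")),
        )


config = AppConfig.from_env()
print(config.out_dir)
```

Freezing the dataclass keeps configuration read-only after startup, which makes it safe to share across services.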

Avatar Configuration (static/cwacfg.json)

{
  "jasBase": "/static/cwasa/",
  "jasVersionTag": "vhg2024",
  "animgenFPS": 30,
  "avs": ["anna", "marc", "luna"],
  "avSettings": [
    {
      "width": 384,
      "height": 320,
      "initAv": "anna",
      "initCamera": [0, 0.23, 3.24, 5, 18, 30, -1, -1],
      "initSpeed": 0,
      "rateSpeed": 5
    }
  ]
}

Customizable Parameters:

  • animgenFPS: Animation frame rate (default: 30)
  • initAv: Default avatar (anna, marc, luna)
  • initSpeed: Initial playback speed (0 = normal)
  • rateSpeed: Speed adjustment rate

Seed Vocabulary (seed_vocabulary.json)

Define educational terminology with preferred glosses and synonyms:

[
  {
    "word": "biến",
    "gloss": "lưu trữ giá trị",
    "synonyms": ["variable"]
  },
  {
    "word": "hàm",
    "gloss": "khối lệnh có tên",
    "synonyms": ["function", "method"]
  }
]
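
A short sketch of how such a seed file can be turned into a lookup that maps both the word and its synonyms to the preferred gloss (the loading code in the project may differ):

```python
import json

# Inline copy of the seed format shown above, for a self-contained demo.
seed_json = """
[
  {"word": "biến", "gloss": "lưu trữ giá trị", "synonyms": ["variable"]},
  {"word": "hàm", "gloss": "khối lệnh có tên", "synonyms": ["function", "method"]}
]
"""

preferred = {}
for entry in json.loads(seed_json):
    preferred[entry["word"]] = entry["gloss"]
    for syn in entry["synonyms"]:
        # Synonyms resolve to the same preferred gloss as the head word.
        preferred[syn] = entry["gloss"]

print(preferred["function"])  # -> khối lệnh có tên
```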

🔧 Development

Running in Development Mode

# Enable Flask debug mode
export FLASK_DEBUG=1
python app.py

Development Features:

  • Auto-reload on code changes
  • Detailed error pages
  • Debug toolbar (if installed)

Testing

# Test Speech-to-Text
python scripts/1_stt_whisper.py \
  --input test_audio.mp3 \
  --output test_output.txt \
  --model tiny

# Test Text-to-Gloss
echo "Hôm nay chúng ta học về động vật" > test.txt
python scripts/2_text2gloss.py \
  --input test.txt \
  --output test_gloss.txt \
  --pairs VSL/DataTrain1.txt \
  --synonyms VSL/Synonyms.txt

# Test Gloss-to-SiGML
python scripts/3_gloss2sigml.py \
  --input test_gloss.txt \
  --dict "VSL/Dictionary VSL HamNoSys" \
  --output test.sigml

Code Structure

Layered Architecture:

┌─────────────────────────────────────────────┐
│         Presentation Layer (Flask)          │
│  app.py - Thin controllers with routing     │
│  - Type-safe routes with decorators         │
│  - Request/response handling only           │
└─────────────────────────────────────────────┘
                     │
┌─────────────────────────────────────────────┐
│       Business Logic Layer (Services)       │
│  src/services/                              │
│  - VideoService: Upload, viewing, hub mgmt  │
│  - PipelineService: Process orchestration   │
│  - FileService: File operations             │
│  - FigmaService: Design token integration   │
└─────────────────────────────────────────────┘
                     │
┌─────────────────────────────────────────────┐
│    Processing Layer (Processors)            │
│  src/processors/                            │
│  - STTProcessor: Speech → Text              │
│  - Text2GlossProcessor: Text → Glosses      │
│  - Gloss2SiGMLProcessor: Glosses → SiGML    │
└─────────────────────────────────────────────┘
                     │
┌─────────────────────────────────────────────┐
│    Data Access Layer (Repositories)         │
│  src/repositories/                          │
│  - ManifestRepository: Video metadata       │
│  - Caching, atomic writes, serialization    │
└─────────────────────────────────────────────┘
                     │
┌─────────────────────────────────────────────┐
│         Domain Layer (Models)               │
│  src/models/                                │
│  - Video: Video metadata and SiGML          │
│  - Manifest: Collection of videos           │
└─────────────────────────────────────────────┘

Key Classes and Responsibilities:

Configuration Layer (src/config/):

  • AppConfig: Application settings from environment
  • FigmaConfig: Figma API configuration

Service Layer (src/services/):

  • VideoService: Video upload, processing, viewing, hub management
  • PipelineService: Orchestrates STT → Text2Gloss → Gloss2SiGML pipeline
  • FileService: File copy, move, read operations
  • FigmaService: Fetch design tokens from Figma API

Processing Layer (src/processors/):

  • STTProcessor: Whisper-based speech-to-text
  • Text2GlossProcessor: Vietnamese → Sign glosses with phrase tables
  • Gloss2SiGMLProcessor: Glosses → SiGML with VSL dictionary

Data Access Layer (src/repositories/):

  • ManifestRepository: CRUD operations for video manifest with caching

Domain Layer (src/models/):

  • Video: Video metadata, title, SiGML content
  • Manifest: Collection of videos with metadata

Utilities (src/utils/):

  • logging_utils: Performance monitoring, function tracing, structured logging
  • validation: Input sanitization, security checks
  • file_io: Encoding-safe file operations
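
The path-traversal prevention mentioned for validation can be sketched as follows, in the spirit of src/utils/validation.py (the real checks may differ): resolve the requested name inside a base directory and reject anything that escapes it.

```python
from pathlib import Path


def safe_join(base: str, filename: str) -> Path:
    """Return base/filename, raising if the result escapes base."""
    base_path = Path(base).resolve()
    target = (base_path / filename).resolve()
    # After resolution, the target must still sit under the base directory.
    if base_path not in target.parents and target != base_path:
        raise ValueError(f"unsafe path: {filename!r}")
    return target


safe_join("/tmp/uploads", "video.mp4")            # accepted
try:
    safe_join("/tmp/uploads", "../../etc/passwd")  # traversal attempt
except ValueError as exc:
    print(exc)
```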

Processing Pipeline:

  1. STT (STTProcessor): Audio → Text transcript
  2. Text2Gloss (Text2GlossProcessor): Vietnamese text → Sign glosses
  3. Gloss2SiGML (Gloss2SiGMLProcessor): Glosses → SiGML XML markup
  4. Rendering: SiGML → 3D avatar animation (CWASA)
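
The four-stage pipeline above amounts to function composition; a minimal sketch with trivial stand-in stages (not the real processors) shows the orchestration shape that PipelineService is described as providing:

```python
def stt(audio_path: str) -> str:
    """Stand-in for Whisper transcription."""
    return "hôm nay học về mèo"


def text2gloss(text: str) -> list:
    """Stand-in for the phrase-table conversion."""
    return [w.upper() for w in text.split()]


def gloss2sigml(glosses: list) -> str:
    """Stand-in for the dictionary-backed SiGML generation."""
    signs = "".join(f"<hns_sign gloss='{g}'/>" for g in glosses)
    return f"<sigml>{signs}</sigml>"


def run_pipeline(audio_path: str) -> str:
    """Chain the stages: audio -> text -> glosses -> SiGML."""
    return gloss2sigml(text2gloss(stt(audio_path)))


print(run_pipeline("lesson.mp4"))
```

The final rendering stage is not shown here, since it happens client-side in the CWASA avatar player rather than in Python.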

Linting and Code Quality

# Install development dependencies
pip install flake8 black pylint mypy

# Run type checker (recommended)
mypy app.py src/
# Configuration in mypy.ini

# Run linter
flake8 app.py src/ scripts/

# Format code
black app.py src/ scripts/

# Check for common issues
pylint src/

# Run all quality checks
mypy app.py src/ && flake8 app.py src/ && echo "✅ All checks passed"

Testing Architecture Components

# Test configuration loading
python -c "from src.config import AppConfig; config = AppConfig.from_env(); print(f'Config loaded: {config.base_dir}')"

# Test services import
python -c "from src.services import VideoService, PipelineService; print('✅ Services OK')"

# Test processors import
python -c "from src.processors import STTProcessor, Text2GlossProcessor; print('✅ Processors OK')"

# Test repositories
python -c "from src.repositories import ManifestRepository; print('✅ Repository OK')"

# Test utilities
python -c "from src.utils import timed_execution, validate_filename; print('✅ Utils OK')"

Building for Production

# Disable debug mode
export FLASK_DEBUG=0
export FLASK_ENV=production

# Use production WSGI server
pip install gunicorn
gunicorn -w 4 -b 0.0.0.0:5001 app:app

🐳 Deployment

Docker Deployment

  1. Create Dockerfile
FROM python:3.10-slim

# Install FFmpeg
RUN apt-get update && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /app

# Copy application files
COPY . /app

# Install Python dependencies
RUN pip install --no-cache-dir flask torch openai-whisper ffmpeg-python

# Expose port
EXPOSE 5001

# Run application
CMD ["python", "app.py"]
  2. Build and run
docker build -t signbridge:latest .
docker run -p 5001:5001 -v $(pwd)/out:/app/out signbridge:latest

Docker Compose

services:
  signbridge:
    build: .
    ports:
      - "5001:5001"
    volumes:
      - ./out:/app/out
      - ./static/uploads:/app/static/uploads
    environment:
      - FLASK_ENV=production
      - FIGMA_URL=${FIGMA_URL}
      - FIGMA_API_KEY=${FIGMA_API_KEY}

Run with:

docker-compose up -d

Cloud Deployment

Heroku

# Install Heroku CLI
heroku login
heroku create signbridge-app

# Add buildpacks
heroku buildpacks:add heroku/python
heroku buildpacks:add https://github.com/jonathanong/heroku-buildpack-ffmpeg-latest.git

# Deploy
git push heroku main

AWS EC2

# SSH into EC2 instance
ssh -i your-key.pem ubuntu@your-ec2-ip

# Install dependencies
sudo apt update
sudo apt install python3-pip ffmpeg git -y

# Clone and setup
git clone https://github.com/sonmessia/SignBridge.git
cd SignBridge
pip3 install flask torch openai-whisper ffmpeg-python

# Run with systemd service
sudo nano /etc/systemd/system/signbridge.service

systemd service file:

[Unit]
Description=SignBridge Application
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/SignBridge
ExecStart=/usr/bin/python3 /home/ubuntu/SignBridge/app.py
Restart=always

[Install]
WantedBy=multi-user.target
Enable and start the service:

sudo systemctl enable signbridge
sudo systemctl start signbridge

Production Recommendations

  • Use Gunicorn or uWSGI instead of Flask's development server
  • Set up Nginx as reverse proxy
  • Enable HTTPS with Let's Encrypt
  • Configure log rotation for application logs
  • Set up monitoring (Prometheus, Grafana)
  • Implement rate limiting to prevent abuse
  • Use CDN for static assets (CloudFlare, AWS CloudFront)
  • Enable database caching (Redis) for frequent queries
  • Configure automatic backups for VSL dictionary and user data
  • Enable structured logging for log aggregation (ELK, Datadog):
    from src.utils import setup_structured_logging
    setup_structured_logging(enable=True)
  • Run mypy in CI/CD pipeline to catch type errors before deployment
  • Use performance monitoring via @timed_execution decorator logs

📄 License and Credits

SignBridge Application

This project is open-source and available for educational and research purposes. Please refer to individual component licenses for specific terms.

Vietnamese Sign Language (VSL) Resources

The VSL dictionary and datasets in the VSL/ directory are licensed under GNU General Public License v3.0.

Research Project: "A study of proposing a solution to translate TV's News into 3D sign language animations for the deaf"
Project Number: B2013-16-31
Principal Investigator: Assoc. Prof. Dr. Nguyen Chi Ngon
Contact: luyldaquach@gmail.com

The VSL resources include:

  • Dictionary VSL HamNoSys: 3,873 Vietnamese signs in HamNoSys notation for JASigning 3D avatar
  • DataTrain1: Vietnamese text-to-sign training dataset (with diacritics)
  • DataTrain2: Vietnamese text-to-sign training dataset (ASCII)
  • Synonyms: Vietnamese-VSL synonym mappings

Third-Party Components

  • OpenAI Whisper: MIT License - Speech recognition model
  • JASigning/CWA Avatar System: See static/cwasa/ for avatar licensing
  • Flask: BSD-3-Clause License
  • PyTorch: BSD-style License
  • Font Awesome: Font Awesome Free License

Contributors

  • Development Team: SignBridge Development Team
  • UI/UX: Thien (GitHub: @sonmessia)
  • VSL Research: Assoc. Prof. Dr. Nguyen Chi Ngon and team

Acknowledgments

Special thanks to:

  • The deaf community for feedback and testing
  • Vietnamese Sign Language researchers and interpreters
  • Open-source contributors to Whisper, PyTorch, and JASigning
  • Educational institutions supporting inclusive education initiatives

🤝 Contributing

Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.

Development Setup

git clone https://github.com/sonmessia/SignBridge.git
cd SignBridge
pip install -r requirements.txt
python app.py

Contribution Guidelines

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📧 Contact & Support

For questions, bug reports, or feature requests, please open an issue on GitHub.


Built with ❤️ for inclusive education
