A comprehensive assistive device interface for LLM streaming using the T.140 real-time text protocol. This application helps users with disabilities, particularly those who are deaf or hard-of-hearing, by providing real-time text communication from large language models (LLMs) to their assistive devices.
- Multiple Transport Protocols: WebSocket, RTP, SRTP, Unix sockets (STREAM and SEQPACKET)
- LLM Provider Support: OpenAI GPT models and Anthropic Claude models
- Advanced T.140 Features:
  - Forward Error Correction (FEC)
  - Redundancy (RED) for reliability
  - Configurable character rate limiting
  - Backspace processing
- Device Management: Easy-to-use web interface for managing assistive devices
- Conversation History: Persistent conversation tracking and management
- Real-time Streaming: Native t140llm integration for optimal performance
- Security: SRTP support for encrypted communications
- Hearing devices (for deaf or hard-of-hearing users)
- Visual devices (for blind or low vision users)
- Mobility devices (for users with mobility impairments)
- Cognitive devices (for users with cognitive disabilities)
- Multi-purpose devices
Manage all your assistive devices, view connection status, and control device connections from a single interface.
Send prompts to one or multiple devices simultaneously, with real-time streaming from OpenAI or Anthropic LLMs.
Configure API keys, default providers, and global system settings.
- Quick Start
- Prerequisites
- Installation
- Configuration
- User Guides
- API Documentation
- Troubleshooting
- Cloud Deployment
- Contributing
Get up and running in 5 minutes:
```bash
# 1. Clone and install
git clone https://github.com/agrathwohl/assistive-llm.git
cd assistive-llm
npm install

# 2. Configure (add your API keys)
cp .env.example .env
nano .env  # Add your ANTHROPIC_API_KEY or OPENAI_API_KEY

# 3. Build and run
npm run build
npm start

# 4. Open browser
open http://localhost:3000
```

- Node.js >= 10.18.1
- npm >= 6.13.4
- OpenAI and/or Anthropic API keys (at least one is required)
- An assistive device that supports T.140 protocol (for testing)
- Clone the repository:

  ```bash
  git clone https://github.com/agrathwohl/assistive-llm.git
  cd assistive-llm
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Create environment file:

  ```bash
  cp .env.example .env
  ```

- Edit `.env` and add your API keys:

  ```bash
  # Required: At least one LLM provider API key
  ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxx
  # OR
  OPENAI_API_KEY=sk-xxxxxxxxxxxxx
  ```

- Build the application:

  ```bash
  npm run build
  ```

- Start the server:

  ```bash
  npm start
  ```

The server will be available at http://localhost:3000.

For development with auto-reload:

```bash
npm run dev
```

Create a `.env` file with the following variables:
```bash
# Server Configuration
PORT=3000                                   # Server port (default: 3000)
HOST=localhost                              # Server host (default: localhost)

# LLM Provider Configuration
DEFAULT_LLM_PROVIDER=anthropic              # Default provider: 'openai' or 'anthropic'
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxx      # Your Anthropic API key
ANTHROPIC_MODEL=claude-3-5-sonnet-20241022  # Claude model to use
OPENAI_API_KEY=sk-xxxxxxxxxxxxx             # Your OpenAI API key
OPENAI_MODEL=gpt-4                          # GPT model to use

# Logging Configuration
LOG_LEVEL=info                              # Logging level: debug, info, warn, error
LOG_FILE=assistive-llm.log                  # Log file path

# Storage Configuration
DB_PATH=./data                              # Data directory for device configs and conversations
```

- Visit Anthropic Console
- Sign up or log in
- Navigate to API Keys
- Create a new key
- Copy and add to `.env` as `ANTHROPIC_API_KEY`
- Visit OpenAI Platform
- Sign up or log in
- Go to API Keys section
- Create new secret key
- Copy and add to `.env` as `OPENAI_API_KEY`
After starting the server, open your web browser and navigate to:
http://localhost:3000
You'll see the admin interface with three main sections:
- Devices: Manage assistive devices
- LLM Chat: Stream responses to devices
- Settings: Configure system settings
- Click on the LLM Chat tab
- Check that at least one provider shows as available
- If no providers are available, verify your API keys in `.env`
Before connecting real devices, you can test the API endpoints:
```bash
# Check available providers
curl http://localhost:3000/api/llm/providers

# Should return:
# [
#   {"provider":"openai","available":true,"model":"gpt-4"},
#   {"provider":"anthropic","available":true,"model":"claude-3-5-sonnet-20241022"}
# ]
```

- Navigate to Devices Page
- Click on "Devices" in the navigation menu
- Click the "Add Device" button (blue button, top right)
- Fill in Basic Information

  ```
  Device Name: My Test Device
  Device Type: Hearing Device (or appropriate type)
  IP Address:  192.168.1.100 (your device's IP)
  Port:        5004 (your device's T.140 port)
  Protocol:    RTP (or websocket/srtp depending on device)
  ```

- Configure Advanced Settings (Optional)
- Character Rate Limit: 30 characters/second (default)
- Lower for slower devices
- Higher for capable devices (max 100)
- Process Backspaces: Checked (recommended)
- Processes backspace characters in stream
- Uncheck for raw output
- Enable FEC: Unchecked (default)
- Check for unreliable networks
- Adds forward error correction
- Enable RED: Unchecked (default)
- Check for lossy connections
- Sends redundant data (1-3 generations)
- Save Device
- Click "Save Device"
- Device appears in "Registered Devices" list
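The "Process Backspaces" setting determines whether erasures are applied before text reaches the device. As an illustrative sketch (not the t140llm implementation): T.140 signals erasure with the BACKSPACE character (U+0008), and processing it means removing the previously buffered character instead of forwarding the control code.

```typescript
// Illustrative only: apply T.140 backspace (U+0008) characters
// to a buffered stream, so erasures never reach the display.
function processBackspaces(stream: string): string {
  const out: string[] = [];
  for (const ch of stream) {
    if (ch === "\u0008") {
      out.pop(); // erase the last buffered character, if any
    } else {
      out.push(ch);
    }
  }
  return out.join("");
}
```

With the option unchecked, the raw stream (including U+0008) is passed through for devices that handle erasure themselves.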
WebSocket - Best for:
- Modern web-based assistive devices
- Devices behind NAT/firewalls
- Testing and development
- Browser-based clients
RTP - Best for:
- Traditional T.140 devices
- Low-latency requirements
- Direct device connections
- VoIP-integrated systems
SRTP - Best for:
- Secure communications required
- Medical/healthcare applications
- Privacy-sensitive scenarios
- Encrypted channels needed
Unix Sockets - Best for:
- Local device communication
- Inter-process communication
- Development and testing
- High-performance local apps
- From Devices Page:
- Find your device in "Registered Devices"
- Click the "Connect" button
- Status changes to "Connecting..." then "Online"
- Verify Connection:
- Device appears in "Active Connections" section
- Status badge shows green "Online"
- Connection timestamp displayed
- Troubleshooting Connection Issues:
Device stays in "Connecting" status:
- Verify device is powered on and reachable
- Check IP address and port are correct
- Ensure no firewall blocking connection
- Test network connectivity: `ping <device-ip>`
Connection shows "Error" status:
- Check device logs for errors
- Verify protocol matches device expectations
- For SRTP: Verify keys/passphrase
- For Unix sockets: Verify socket path exists
```bash
# Connect to device via API
curl -X POST http://localhost:3000/api/devices/{device-id}/connect \
  -H "Content-Type: application/json"

# Response:
# {
#   "message": "Connected to device successfully",
#   "connection": { ... }
# }
```

- Navigate to LLM Chat Page
- Click "LLM Chat" in navigation
- Select Target Devices
- Click in "Select Devices" dropdown
- Choose one or more connected devices
- Only online devices are selectable
- Choose LLM Provider
- Select "OpenAI" or "Anthropic" from dropdown
- Ensure provider shows as available
- Enter Your Prompt

  Example prompts:
  - "Explain quantum computing in simple terms"
  - "What are the symptoms of the flu?"
  - "Tell me a short story about a robot"

- Send to Devices
- Click "Send to Devices"
- Watch status display for streaming info
- Response streams in real-time to selected devices
Real-time Display: Text appears character-by-character on assistive device as LLM generates it.
Multi-device Broadcasting: Send same response to multiple devices simultaneously.
Conversation Context: System maintains conversation history for context-aware responses.
Stream to Single Device:
```bash
curl -X POST http://localhost:3000/api/llm/stream/{device-id} \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "What is the weather forecast?",
    "provider": "anthropic"
  }'

# Response includes conversationId for follow-ups:
# {
#   "message": "Started streaming...",
#   "conversationId": "abc-123",
#   "messageId": "msg-456",
#   "provider": "anthropic"
# }
```

Stream to Multiple Devices:
```bash
curl -X POST http://localhost:3000/api/llm/stream-multiple \
  -H "Content-Type: application/json" \
  -d '{
    "deviceIds": ["device-1", "device-2"],
    "prompt": "Tell me about accessibility technology",
    "provider": "openai"
  }'
```

Continue Conversation:
```bash
curl -X POST http://localhost:3000/api/llm/stream/{device-id} \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Tell me more about that",
    "provider": "anthropic",
    "conversationId": "abc-123"
  }'
```

Via API:
```bash
# Get specific conversation
curl http://localhost:3000/api/llm/conversations/{conversation-id}

# Get device's recent conversations
curl http://localhost:3000/api/llm/devices/{device-id}/conversations?limit=10
```

Response Format:
```json
{
  "conversationId": "abc-123",
  "messages": [
    {
      "id": "msg-1",
      "role": "user",
      "content": "What is quantum computing?",
      "timestamp": "2025-01-04T12:00:00Z",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20241022"
    },
    {
      "id": "msg-2",
      "role": "assistant",
      "content": "Quantum computing is...",
      "timestamp": "2025-01-04T12:00:05Z"
    }
  ]
}
```

```bash
# Clear specific conversation
curl -X DELETE http://localhost:3000/api/llm/conversations/{conversation-id}
```

Use Cases:
- Privacy: Clear sensitive conversations
- Testing: Reset conversation state
- Maintenance: Clean up old data
When to Enable:
- Unreliable network connections
- High packet loss environments
- Long-distance connections
- Critical communications
How to Enable:
- Edit device settings
- Check "Enable FEC"
- Save device
- Reconnect to device
Technical Details:
- Adds parity data to T.140 stream
- Recovers from packet loss
- Slight latency increase
- Recommended for <5% packet loss
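For intuition about what FEC buys you: schemes in this family (for example, the XOR parity used by RFC 5109-style RTP FEC) send one parity packet per group, letting the receiver rebuild any single lost packet in that group. The sketch below is purely illustrative, not the t140llm wire format:

```typescript
// Illustrative XOR parity: one parity packet protects a group of
// packets; a single lost packet can be rebuilt by XOR-ing the
// parity with the surviving packets.
function xorParity(packets: Uint8Array[]): Uint8Array {
  const len = Math.max(...packets.map((p) => p.length));
  const parity = new Uint8Array(len);
  for (const p of packets) {
    for (let i = 0; i < p.length; i++) parity[i] ^= p[i];
  }
  return parity;
}

// Recover one lost packet from the parity plus the survivors.
function recover(parity: Uint8Array, survivors: Uint8Array[]): Uint8Array {
  return xorParity([parity, ...survivors]);
}
```

This is also why FEC suits mild loss: one parity packet per group recovers only one loss per group, which is the "<5% packet loss" guidance above.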
When to Enable:
- Very lossy networks (>5% loss)
- Mission-critical applications
- Emergency communications
- Unreliable connections
Configuration:
- Edit device settings
- Check "Enable RED"
- Choose generations (1-3)
- Save and reconnect
Generation Guide:
- 1 generation: Mild packet loss (5-10%)
- 2 generations: Moderate loss (10-20%)
- 3 generations: High loss (>20%)
Trade-offs:
- Higher redundancy = more bandwidth
- Increased reliability
- Higher latency
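As a rough back-of-the-envelope for the bandwidth trade-off (an approximation that ignores RTP/RED header overhead): each RED packet repeats the previous `generations` payloads alongside the new text, so payload bandwidth scales by roughly (1 + generations).

```typescript
// Approximate payload bytes per second for RED: ~1 byte per ASCII
// character, repeated once per redundancy generation. Header
// overhead is deliberately ignored in this estimate.
function approxRedPayloadBytesPerSecond(cps: number, generations: number): number {
  return cps * (1 + generations);
}
```

At the default 30 cps, 2 generations costs roughly 90 B/s of payload instead of 30 B/s, which is negligible on most links; the added latency usually matters more than the bandwidth.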
Setup Method 1: Passphrase (Simple)
- Select "SRTP" protocol
- Device settings: Enter passphrase
- Keys generated automatically
- Save device
Setup Method 2: Manual Keys (Advanced)
- Generate keys:

  ```bash
  # Using OpenSSL
  openssl rand -base64 30  # Master key
  openssl rand -base64 14  # Salt
  ```
- Add to device settings:
- SRTP Key: (base64 master key)
- SRTP Salt: (base64 salt)
- Save device
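If you prefer not to shell out to OpenSSL, equivalent key material can be generated with Node's built-in crypto module (shown as a sketch; the byte lengths match the openssl commands above):

```typescript
import { randomBytes } from "node:crypto";

// 30-byte SRTP master key and 14-byte salt, base64-encoded to
// match the format the device settings expect.
const masterKey = randomBytes(30).toString("base64");
const salt = randomBytes(14).toString("base64");
console.log("SRTP Key: ", masterKey);
console.log("SRTP Salt:", salt);
```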
Security Notes:
- Use SRTP for sensitive data
- Keys stored locally only
- Change keys periodically
- Use strong passphrases
Optimization Guide:
| Device Type | Recommended Rate | Notes |
|---|---|---|
| Standard TTY | 30 cps | Default, works well |
| Fast display | 50-60 cps | Modern devices |
| Slow display | 15-20 cps | Older devices |
| Braille | 10-15 cps | Reading speed |
| Testing | 100 cps | Development only |
Tuning Process:
- Start with 30 cps
- Test with device
- Adjust based on:
- Display refresh rate
- User comfort
- Network capacity
- Save optimal setting
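To see what a given rate means in practice, the arithmetic is simple (illustrative helpers, not part of the application):

```typescript
// At `cps` characters per second, characters are spaced about
// 1000 / cps milliseconds apart, and a full response of `chars`
// characters takes chars / cps seconds to finish displaying.
function interCharDelayMs(cps: number): number {
  return 1000 / cps;
}

function displaySeconds(chars: number, cps: number): number {
  return chars / cps;
}
```

A 600-character answer finishes in about 20 seconds at the default 30 cps, but takes a full minute at a Braille-friendly 10 cps, which is worth keeping in mind when choosing prompt and response length.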
Get All Devices
`GET /api/devices`

Returns an array of all registered devices.
Get Specific Device

`GET /api/devices/:id`

Create Device
```
POST /api/devices
Content-Type: application/json

{
  "name": "My Device",
  "type": "hearing",
  "ipAddress": "192.168.1.100",
  "port": 5004,
  "protocol": "rtp",
  "settings": {
    "characterRateLimit": 30,
    "backspaceProcessing": true,
    "enableFEC": false,
    "enableRED": false
  }
}
```

Update Device
```
PUT /api/devices/:id
Content-Type: application/json

{
  "name": "Updated Name",
  "settings": { ... }
}
```

Delete Device
`DELETE /api/devices/:id`

Connect to Device

`POST /api/devices/:id/connect`

Disconnect from Device

`POST /api/devices/:id/disconnect`

Get Active Connections

`GET /api/devices/connections/active`

Get Available Providers
`GET /api/llm/providers`

Response:

```json
[
  {
    "provider": "openai",
    "available": true,
    "model": "gpt-4"
  },
  {
    "provider": "anthropic",
    "available": true,
    "model": "claude-3-5-sonnet-20241022"
  }
]
```

Stream to Single Device
```
POST /api/llm/stream/:deviceId
Content-Type: application/json

{
  "prompt": "Your question here",
  "provider": "anthropic",
  "conversationId": "optional-conversation-id"
}
```

Response:

```json
{
  "message": "Started streaming LLM response to device",
  "provider": "anthropic",
  "conversationId": "abc-123",
  "messageId": "msg-456"
}
```

Stream to Multiple Devices
```
POST /api/llm/stream-multiple
Content-Type: application/json

{
  "deviceIds": ["device-1", "device-2"],
  "prompt": "Your question here",
  "provider": "openai"
}
```

Get Conversation History

`GET /api/llm/conversations/:conversationId`

Get Device Conversations

`GET /api/llm/devices/:deviceId/conversations?limit=10`

Clear Conversation

`DELETE /api/llm/conversations/:conversationId`

All endpoints return a standard error format:
```json
{
  "error": "Description of error",
  "details": "Additional information (optional)"
}
```

Common HTTP status codes:

- `200`: Success
- `400`: Bad request (validation error)
- `404`: Resource not found
- `500`: Server error
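Client code can rely on that error shape. A minimal TypeScript helper (illustrative; only the field names are taken from the format above):

```typescript
// The standard error body returned by all endpoints.
interface ApiError {
  error: string;
  details?: string;
}

// Turn the error body plus HTTP status into a readable message.
function formatApiError(status: number, body: ApiError): string {
  const extra = body.details ? ` (${body.details})` : "";
  return `${status}: ${body.error}${extra}`;
}
```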
Cause: No valid API keys in .env file
Solution:
- Check `.env` file exists
- Verify API key format is correct
- Ensure no extra spaces or quotes
- Restart server after changing `.env`
```bash
# Correct format:
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxx

# Incorrect (no quotes needed):
ANTHROPIC_API_KEY="sk-ant-xxxxxxxxxxxxx"
```

WebSocket Connection Issues:
```bash
# Test WebSocket connectivity
wscat -c ws://device-ip:port

# If it fails, check:
# 1. Device is powered on
# 2. Network is reachable
# 3. Firewall allows WebSocket connections
```

RTP Connection Issues:
```bash
# Test UDP connectivity
nc -u device-ip port

# Check:
# 1. Device listens on the UDP port
# 2. No firewall blocking UDP
# 3. IP address is correct
```

SRTP Connection Issues:
- Verify passphrase is correct
- Check key/salt are valid base64
- Ensure device supports SRTP
- Try RTP first to isolate encryption issues
Check 1: Device Connected

```bash
curl http://localhost:3000/api/devices/connections/active
# Device should appear in the list
```

Check 2: Provider Available

```bash
curl http://localhost:3000/api/llm/providers
# Provider 'available' should be true
```

Check 3: API Key Valid

- Test key directly with provider
- Check for expired keys
- Verify billing/credits available

Check 4: Server Logs

```bash
tail -f assistive-llm.log
# Look for error messages during streaming
```

Issue: TypeScript compilation errors
Solution:

```bash
# Clean and reinstall
rm -rf node_modules dist
npm install
npm run build
```

Issue: Missing dependencies

Solution:

```bash
npm install --save t140llm
npm install
```

Slow streaming:
- Check network latency to device
- Reduce character rate limit
- Disable FEC/RED if not needed
- Check server CPU/memory usage
High latency:
- Test network: `ping device-ip`
- Check LLM provider status
- Reduce redundancy generations
- Use faster LLM model
Log Files:

```bash
# View recent logs
tail -f assistive-llm.log

# Search for errors
grep ERROR assistive-llm.log
```

Debug Mode:

```bash
# In .env file
LOG_LEVEL=debug
```

Community Support:
- GitHub Issues: Report bugs and request features
- Documentation: Check docs for detailed guides
- t140llm Library: Check t140llm documentation
See CLOUD_DEPLOYMENT.md for detailed instructions on deploying to cloud providers including:
- Vercel
- Cloudflare Workers
- AWS
- Google Cloud Platform
- Azure
```
assistive-llm/
├── src/
│   ├── config/              # Configuration management
│   │   └── config.ts        # App configuration
│   ├── controllers/         # API controllers
│   │   ├── device.controller.ts
│   │   └── llm.controller.ts
│   ├── interfaces/          # TypeScript interfaces
│   │   ├── device.interface.ts
│   │   └── api.interface.ts
│   ├── middleware/          # Express middleware
│   │   └── error.middleware.ts
│   ├── public/              # Static web files
│   │   ├── css/             # Stylesheets
│   │   ├── js/              # Client JavaScript
│   │   └── index.html       # Admin interface
│   ├── routes/              # API routes
│   │   ├── device.routes.ts
│   │   ├── llm.routes.ts
│   │   └── index.ts
│   ├── services/            # Business logic
│   │   ├── device.service.ts
│   │   ├── llm.service.ts
│   │   └── conversation.service.ts
│   └── utils/               # Utility functions
│       └── logger.ts
├── dist/                    # Compiled JavaScript (generated)
├── data/                    # Data storage (generated)
│   ├── devices.json
│   └── conversations.json
├── assets/                  # Documentation assets
├── .env.example             # Example environment file
├── package.json
├── tsconfig.json
└── README.md
```
Run tests:

```bash
npm test
```

Run in development mode (changes trigger automatic rebuild and restart):

```bash
npm run dev
```

Build (compiles TypeScript to JavaScript in the dist/ directory):

```bash
npm run build
```
Contributions are welcome! Please follow these guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests if applicable
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Follow existing code style
- Add TypeScript types for new code
- Update documentation for new features
- Write clear commit messages
- Test thoroughly before submitting PR
MIT License - see LICENSE file for details
- Built with t140llm by agrathwohl
- Supports OpenAI and Anthropic LLM providers
- Designed for accessibility and real-time communication
- Implements ITU-T T.140 real-time text protocol
For issues, questions, or suggestions:
- GitHub Issues: Report bugs or request features
- Documentation: Check this README and linked docs
- t140llm Library: t140llm documentation
- Initial release with comprehensive t140llm integration
- Support for WebSocket, RTP, SRTP, and Unix socket transports
- OpenAI and Anthropic provider support
- Conversation history and management
- Advanced T.140 features (FEC, RED, redundancy)
- Web-based admin interface
- Device management and connection handling
- RESTful API with comprehensive endpoints
- Real-time streaming with character rate limiting
- Secure SRTP communications
- Comprehensive documentation and user guides