Stream any web page to YouTube Live, Twitch, or any other RTMP-compatible service. This project uses Playwright to capture a target web page in a headless browser and streams it 24/7 using FFmpeg. It's designed for reliable, long-running cloud deployment in a Docker container.
- Live Stream Any Web Page: Capture and stream dynamic web content.
- Cloud-Ready: Optimized for 24/7 deployment on services like Railway, Render, Heroku, AWS, and GCP.
- Dockerized: Includes a production-ready `Dockerfile` with Playwright and FFmpeg pre-installed.
- Remote Control API: Start, stop, and check the stream status via a simple REST API.
- Secure Access: Protect API endpoints with an optional access keyword.
- Highly Configurable: Adjust resolution, FPS, and bitrate using environment variables.
- Silent Audio Track: Includes a synthetic silent audio track, a requirement for many streaming platforms such as YouTube Live.
- Auto-Restart: Can automatically restart the stream on container startup (if configured).
- Koa Web Server: A lightweight server starts and exposes the API endpoints.
- Playwright Browser: When the `/start` endpoint is called, Playwright launches a headless Chromium browser.
- Page Navigation: The browser navigates to the `TARGET_URL` you provide.
- Screenshot Loop: The application takes screenshots of the page at the configured `STREAM_FPS`.
- FFmpeg Pipe: Each screenshot is piped directly to an FFmpeg process.
- RTMP Streaming: FFmpeg encodes the screenshots into an H.264 video stream, adds a silent AAC audio track, and sends the result to your specified RTMP server.
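Conceptually, the FFmpeg invocation behind this pipeline looks roughly like the sketch below. This is an illustration, not the project's exact command; the flag values and the placeholder RTMP URL are assumptions.

```bash
# Screenshots arrive on stdin as an image stream; silent stereo audio is
# synthesized with the anullsrc filter; output is H.264 + AAC in an FLV
# container, pushed over RTMP. YOUR_STREAM_KEY is a placeholder.
ffmpeg \
  -f image2pipe -framerate 30 -i - \
  -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
  -c:v libx264 -preset veryfast -b:v 8000k -pix_fmt yuv420p \
  -c:a aac -b:a 128k \
  -f flv "rtmp://a.rtmp.youtube.com/live2/YOUR_STREAM_KEY"
```

The `image2pipe` input is what lets the Node.js process write each screenshot buffer straight into FFmpeg's stdin without touching disk.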
- Docker and Docker Compose
- A YouTube, Twitch, or other account with RTMP stream details.
1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/url-to-rtmp.git
   cd url-to-rtmp
   ```

2. Create an environment file: Copy `env.example` to a new file named `.env`:

   ```bash
   cp env.example .env
   ```

3. Configure your stream: Edit the `.env` file with your details:

   - `TARGET_URL`: The full URL of the web page you want to stream.
   - `YOUTUBE_RTMP_URL`: The RTMP ingest URL from your streaming provider.
   - `YOUTUBE_STREAM_KEY`: Your unique stream key.
   - `ACCESS_KEYWORD`: A secret password to protect the API endpoints.

4. Build and run with Docker Compose:

   ```bash
   docker-compose up --build
   ```

   The service will be running and available at `http://localhost:3000`.

5. Start the stream: Use a tool like `curl` or Postman to send a POST request to the `/start` endpoint:

   ```bash
   curl -X POST http://localhost:3000/start \
     -H "Content-Type: application/json" \
     -d '{"keyword": "your-secret-keyword"}'
   ```
All configuration is managed through environment variables, making the app easy to deploy. See `env.example` for the full list of available options and detailed comments.
| Variable | Description | Default |
|---|---|---|
| `TARGET_URL` | **Required.** The web page to stream. | - |
| `YOUTUBE_RTMP_URL` | **Required.** The RTMP ingest URL. | - |
| `YOUTUBE_STREAM_KEY` | **Required.** Your stream key. | - |
| `ACCESS_KEYWORD` | A secret keyword to protect the API. | `""` (none) |
| `STREAM_WIDTH` | Stream width in pixels. | `1920` |
| `STREAM_HEIGHT` | Stream height in pixels. | `1080` |
| `STREAM_FPS` | Stream frames per second. | `30` |
| `SCREENSHOT_INTERVAL` | How often to capture screenshots, in seconds. | `1.0` |
| `STREAM_BITRATE` | Stream video bitrate (minimums auto-enforced). | `8000k` |
| `PORT` | The port for the API server. | `3000` |
| `HEADLESS` | Run the browser in headless mode. | `true` |
| `AUTO_START` | Automatically start streaming on container launch. | `false` |
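When choosing a bitrate, make sure your connection's upload capacity comfortably exceeds it. A back-of-the-envelope check, using illustrative numbers (the 2x headroom factor and 128 kbps audio figure are rules of thumb, not values from this project):

```shell
VIDEO_KBPS=8000   # matches the default STREAM_BITRATE
AUDIO_KBPS=128    # a typical AAC audio bitrate
HEADROOM=2        # rule of thumb: have ~2x the stream bitrate in upload capacity
NEEDED_KBPS=$(( (VIDEO_KBPS + AUDIO_KBPS) * HEADROOM ))
echo "$NEEDED_KBPS kbps upload recommended"
```

So the default 1080p settings call for roughly a 16 Mbps upload link to stream without buffering.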
- `POST /start`: Starts the web capture and RTMP stream.
- `POST /stop`: Stops the stream and closes the browser.
- `GET /status`: Gets the current status of the streamer.
- `GET /`: A simple health-check endpoint.

Request body for the protected endpoints (`/start`, `/stop`):

```json
{
  "keyword": "your-secret-keyword"
}
```

This application is designed for flexible deployment. You can run it on modern Platform-as-a-Service (PaaS) providers, or on your own Virtual Machine (VM) for more control and potentially lower cost for 24/7 operation.
For the easiest, most managed deployment experience, we recommend using a cloud platform that directly supports Docker containers. This approach minimizes infrastructure management.
See our detailed Cloud Deployment Guide for instructions on:
- Railway
- Render
- Heroku
- Google Cloud Run, AWS App Runner, and more.
For 24/7 streaming, running the application on a dedicated Virtual Machine can be significantly more cost-effective. Below are the detailed steps for deploying to a Google Compute Engine (GCE) VM. This guide can be adapted for any Linux-based VM (e.g., from AWS, DigitalOcean, or a home server).
Estimated Cost: A 24/7 e2-medium instance (2 vCPU, 4GB RAM) costs approximately $25-30/month.
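Note that the VM price covers compute only; a 24/7 stream also generates continuous network egress, which most cloud providers bill separately. A quick sanity check of the data volume at the default bitrate:

```shell
# Data volume of a 24/7 stream at the default 8000k video bitrate
BITRATE_KBPS=8000
GB_PER_DAY=$(( BITRATE_KBPS * 86400 / 8 / 1000000 ))  # kbit/s * s/day -> GB (decimal)
GB_PER_MONTH=$(( GB_PER_DAY * 30 ))
echo "~${GB_PER_DAY} GB/day, ~${GB_PER_MONTH} GB/month of egress"
```

That is on the order of 2.5 TB/month, so check your provider's egress pricing before committing to a bitrate.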
Architecture: We will create a GCE instance that runs a startup script to install Docker and clone this repository. A systemd service will be configured to automatically run the streamer via Docker Compose and ensure it restarts on boot or if it fails.
This script will run once when the VM is first created to set up the environment. Create a file named `gce-startup-script.sh` on your local machine.
```bash
#!/bin/bash
# URL-to-RTMP GCE Startup Script
# This script is intended to be run as root by GCE. 'sudo' is included so the
# commands can also be run manually by a user with sudo privileges.

# 1. Uninstall old, conflicting Docker packages
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove -y $pkg; done

# 2. Set up Docker's official repository
sudo apt-get update
sudo apt-get install -y ca-certificates curl git
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# 3. Install Docker Engine, CLI, and the Compose plugin
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

# 4. Clone the application repository into /opt
# IMPORTANT: Replace with your repository URL
sudo git clone https://github.com/mathiasbc/url-to-RTMP.git /opt/url-to-rtmp

# 5. Create the systemd service file to manage the application
sudo tee /etc/systemd/system/url-to-rtmp.service > /dev/null <<EOF
[Unit]
Description=URL to RTMP Docker Compose Service
Requires=docker.service
After=docker.service

[Service]
Restart=always
RestartSec=10
WorkingDirectory=/opt/url-to-rtmp
# Use the Docker Compose V2 plugin (docker compose)
ExecStart=/usr/bin/docker compose up
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
EOF

# 6. Enable the service so it starts on boot
sudo systemctl enable url-to-rtmp.service
# Note: The service will not start successfully until the user creates the .env file.
```

Run the following `gcloud` command from your local terminal to create the VM instance.
```bash
gcloud compute instances create url-to-rtmp-vm \
  --project=YOUR_GCP_PROJECT_ID \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --boot-disk-size=20GB \
  --tags=http-server \
  --metadata-from-file=startup-script=./gce-startup-script.sh
```

- Replace `YOUR_GCP_PROJECT_ID` with your project ID.
- This command creates a VM, attaches our startup script, and tags it to allow HTTP traffic.
The default e2-medium machine type is cost-effective, but it may not be powerful enough for complex, frequently updating web pages (e.g., live charts, dashboards). If you experience low frame rates (check the logs) or your stream is buffering, you have three options:
1. Upgrade the VM instance (recommended for high quality): For a smoother 1080p @ 30 fps stream on demanding sites, use a more powerful machine. An `n1-standard-2` (2 vCPU, 7.5 GB RAM) or even an `n1-standard-4` (4 vCPU, 15 GB RAM) is a good choice. To use one, replace `--machine-type=e2-medium` with `--machine-type=n1-standard-2` in the `gcloud` command above. This will increase the monthly cost.

2. Optimize for slowly changing content (recommended for dashboards): If your target page updates infrequently (like a Bitcoin dashboard that refreshes every 60 seconds), you can dramatically reduce CPU usage by increasing the screenshot interval. Edit the `.env` file on your VM (`sudo nano /opt/url-to-rtmp/.env`) and add:

   - `SCREENSHOT_INTERVAL=30` (takes a screenshot every 30 seconds)
   - Or even `SCREENSHOT_INTERVAL=60` for very static content

   FFmpeg will automatically duplicate frames to maintain smooth 30 fps video output while using much less CPU.

3. Lower the stream quality (most economical): To keep costs down, reduce the CPU load by lowering the stream quality. Edit the `.env` file on your VM (`sudo nano /opt/url-to-rtmp/.env`) and change these values:

   - `STREAM_WIDTH=1280`
   - `STREAM_HEIGHT=720`
   - `STREAM_FPS=20` (or even 15)
   - `STREAM_BITRATE=4000k`
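The frame duplication works out as follows: with a long screenshot interval, each captured image is simply held on screen for many output frames while FFmpeg keeps the video at the full `STREAM_FPS`:

```shell
# Frames FFmpeg emits per captured screenshot (illustrative values)
STREAM_FPS=30
SCREENSHOT_INTERVAL=30
FRAMES_PER_CAPTURE=$(( STREAM_FPS * SCREENSHOT_INTERVAL ))
echo "each screenshot is held for $FRAMES_PER_CAPTURE output frames"
```

Duplicating an already-encoded-once frame is far cheaper than capturing and encoding a fresh screenshot, which is where the CPU savings come from.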
After editing the `.env` file, restart the service with `sudo systemctl restart url-to-rtmp`.
Allow external traffic to reach the API server on port 3000.
```bash
gcloud compute firewall-rules create allow-streamer-api \
  --allow tcp:3000 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=http-server
```

1. SSH into your new VM:

   ```bash
   gcloud compute ssh url-to-rtmp-vm --zone=us-central1-a
   ```

2. Create your environment file: The repository was cloned to `/opt/url-to-rtmp`. Navigate there and create your `.env` file:

   ```bash
   cd /opt/url-to-rtmp
   sudo cp env.example .env
   sudo nano .env
   ```

   Fill in your `TARGET_URL`, `YOUTUBE_RTMP_URL`, `YOUTUBE_STREAM_KEY`, and `ACCESS_KEYWORD`, then save the file (`Ctrl+X`, `Y`, `Enter`).

3. Start the service: The `systemd` service is already enabled, but it may have failed to start because the `.env` file was missing. Now you can start it manually:

   ```bash
   sudo systemctl start url-to-rtmp
   ```
- Check status: See whether the service is running correctly.

  ```bash
  sudo systemctl status url-to-rtmp
  ```

- View logs: Watch the real-time output of your Docker container.

  ```bash
  sudo journalctl -fu url-to-rtmp.service
  ```

- Restart the service:

  ```bash
  sudo systemctl restart url-to-rtmp
  ```
Your service is now deployed and will run 24/7, automatically restarting if the VM reboots or the process crashes.
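For extra resilience you could also add an external watchdog, such as a root cron entry that probes the health-check endpoint and restarts the service when it stops responding. This is a hypothetical sketch, not part of the repository; `curl -f` fails on non-2xx responses, and whether an unresponsive `/` endpoint reliably indicates a broken stream is an assumption you should verify for your setup:

```
*/5 * * * * curl -fsS --max-time 10 http://localhost:3000/ > /dev/null || systemctl restart url-to-rtmp
```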
Symptoms: YouTube shows buffering warnings, or the stream gets stuck loading.

Solutions:
1. Increase the bitrate (most common fix):

   ```bash
   STREAM_BITRATE=8000k  # For 1080p
   STREAM_BITRATE=6000k  # Minimum for 720p
   ```

2. Optimize the screenshot interval for your content:

   ```bash
   # For dashboards that update every 60s
   SCREENSHOT_INTERVAL=30
   # For dynamic content
   SCREENSHOT_INTERVAL=1
   # For real-time content
   SCREENSHOT_INTERVAL=0.5
   ```

3. Check the FFmpeg encoding speed in the logs:

   - Look for `Speed: 1.0x` or higher.
   - If the speed is below 1.0x, reduce the quality or increase the interval.
Symptoms: The logs show screenshots taking 700 ms or more to capture.

Solutions:

1. Optimize browser performance: The app automatically blocks unnecessary resources; check the logs for the "✅" (fast) vs. "⚠️" (slow) screenshot indicators.

2. Increase the screenshot interval:

   ```bash
   SCREENSHOT_INTERVAL=2  # Reduce capture frequency
   ```

3. Upgrade the VM instance (for cloud deployments): Use `n1-standard-2` or `n1-standard-4` instead of `e2-medium`.
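To quantify how often captures are slow, you can count the `SLOW` markers the streamer writes to its logs. The sketch below uses simplified sample lines in the log format shown in this README; against a live deployment you would pipe `journalctl` output into the same `grep`:

```shell
# Sample lines standing in for real journalctl output
printf '%s\n' \
  'Screenshot 45 captured in 180ms (next in 1.0s)' \
  'Screenshot 46 captured in 850ms (SLOW - next in 1.0s)' \
  > /tmp/streamer-sample.log
SLOW_COUNT=$(grep -c 'SLOW' /tmp/streamer-sample.log)
echo "$SLOW_COUNT slow capture(s)"
```

On the VM: `sudo journalctl -u url-to-rtmp.service | grep -c SLOW`. A steadily growing count means the interval or quality settings need adjusting.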
Symptoms: FFmpeg logs show `FPS: 12` instead of the target FPS.

Root cause: Screenshot capture is too slow for the configured interval.

Solutions:

1. Increase the screenshot interval:

   ```bash
   SCREENSHOT_INTERVAL=1.0  # Instead of 0.5
   ```

2. Temporarily reduce the stream quality:

   ```bash
   STREAM_WIDTH=1280
   STREAM_HEIGHT=720
   STREAM_FPS=24
   ```
Solutions by content type:

Static dashboards (Bitcoin prices, etc.):

```bash
SCREENSHOT_INTERVAL=60  # 99.4% CPU reduction
STREAM_BITRATE=6000k    # Adequate for slow-changing content
```

Dynamic content:

```bash
SCREENSHOT_INTERVAL=2   # 93% CPU reduction
STREAM_BITRATE=8000k    # Higher quality for frequent changes
```

Real-time content:

```bash
SCREENSHOT_INTERVAL=0.5 # 83% CPU reduction
STREAM_BITRATE=10000k   # Maximum quality
```

What healthy and unhealthy log output looks like:

```text
# Good performance indicators:
✅ Screenshot 45 captured in 180ms (next in 1.0s)
Stream status - FPS: 30, Bitrate: 8000.0kbits/s, Speed: 1.2x

# Performance issues:
⚠️ Screenshot 45 captured in 850ms (SLOW - next in 1.0s)
Stream status - FPS: 12, Bitrate: 1500.0kbits/s, Speed: 0.8x
```

Target benchmarks:

- Screenshot capture: < 300 ms (good), < 200 ms (excellent)
- FFmpeg speed: ≥ 1.0x (real-time), > 1.2x (excellent)
- Stream bitrate: should match your `STREAM_BITRATE` setting
- Output FPS: should match your `STREAM_FPS` setting
```bash
# Check container resources
docker stats

# View detailed logs
sudo journalctl -fu url-to-rtmp.service

# Test the API endpoints
curl -X GET http://localhost:3000/status
```

- "Connection refused": Check `YOUTUBE_RTMP_URL` and `YOUTUBE_STREAM_KEY`.
- "Bitrate too low": Increase `STREAM_BITRATE`.
- "Non-monotonous DTS": Usually resolves automatically with the new encoding settings.
```bash
# For Google Compute Engine, upgrade the instance type:
gcloud compute instances stop url-to-rtmp-vm --zone=us-central1-a
gcloud compute instances set-machine-type url-to-rtmp-vm \
  --machine-type=n1-standard-2 --zone=us-central1-a
gcloud compute instances start url-to-rtmp-vm --zone=us-central1-a
```

The application includes a comprehensive memory-leak test suite to help diagnose and prevent memory issues.
```bash
# Run all memory tests
npm run test:memory

# Run specific tests
npm run test:screenshot  # Test for screenshot buffer leaks
npm run test:ffmpeg      # Test for FFmpeg process leaks
npm run test:browser     # Test for browser context leaks
npm run test:cycles      # Test start/stop cycles
npm run test:long        # Long-running stability test

# Run with garbage collection enabled (recommended)
npm run start:gc         # Start the app with GC support
```

The application also includes built-in memory monitoring:
- Real-time tracking: Memory usage logged every 5 minutes
- Leak detection: Automatic warnings for excessive growth
- Periodic cleanup: Browser context refresh every 100 screenshots
- Garbage collection: Forced GC when available
- Performance metrics: Screenshot count and timing statistics
- Screenshot buffer management: Immediate buffer clearing after use
- FFmpeg stderr limiting: Circular buffer prevents data accumulation
- Browser context refresh: Periodic page reloads to clear memory
- Event listener cleanup: Proper removal of all event handlers
- Resource cleanup: Comprehensive cleanup on stream stop
```text
✅ Test passed - RSS growth < 50MB, Heap growth < 25MB
🚨 Memory leak detected - Excessive memory growth found
📊 Memory Stats - Current usage and growth patterns
⚠️ Warning - High memory usage detected
```

Test reports are saved as JSON files for detailed analysis.
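You can replicate the suite's pass criterion by hand: sample the process RSS before and after a long run and compare the growth against the 50 MB budget. The sample values below are hypothetical; substitute readings from `ps -o rss= -p <pid>`:

```shell
# Hypothetical RSS samples in kB; the suite's pass criterion is < 50 MB growth
RSS_BEFORE_KB=210000
RSS_AFTER_KB=245000
GROWTH_MB=$(( (RSS_AFTER_KB - RSS_BEFORE_KB) / 1024 ))
if [ "$GROWTH_MB" -lt 50 ]; then
  echo "PASS: ${GROWTH_MB} MB RSS growth"
else
  echo "LEAK: ${GROWTH_MB} MB RSS growth"
fi
```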
This project is not affiliated with YouTube or any other streaming platform.