Let AI watch and understand online videos.
MCP Server for Video Understanding — works with any AI that supports Model Context Protocol: Claude Code, Claude Desktop, Cursor, Zed, and more.
claudetube downloads online videos, transcribes them with faster-whisper, and lets AI "see" specific moments by extracting frames on-demand. It's an MCP server exposing 40+ tools that any MCP-compatible client can use.
Supports 1,500+ video sites via yt-dlp including YouTube, Vimeo, Dailymotion, Twitch, TikTok, Twitter/X, Instagram, Reddit, and many more.
Claude doesn't have native video input. When you share a YouTube link with Claude, it sees nothing—just a URL string.
Google's Gemini can process video natively: pass a URL, ask a question, get an answer. One API call. Claude can't do this (yet), so claudetube exists to bridge that gap.
I (Dan) built claudetube because I was using Claude to help me make a game, and I kept finding YouTube tutorials that explained exactly what I needed. The problem? I couldn't just show Claude the video.
Every other YouTube MCP tool just dumps the transcript and calls it a day. But when a tutorial says "look at this code here" or "notice how the sprite moves", the transcript alone is useless. I needed Claude to actually see what I was seeing—to look at the code on screen, read the diagrams, understand the visual context.
| Aspect | Gemini (native) | claudetube |
|---|---|---|
| UX | URL + question → answer | process_video → get_frames → synthesize |
| Sites | YouTube only (public) | 1,500+ sites via yt-dlp |
| Caching | Reprocesses each time | Instant on second query |
| Cost | 1fps × full duration | Extract only what you need |
| Precision | 1fps sampling | Exact timestamps, HQ for code |
| Offline | No | Yes (cached content) |
Where claudetube is worse: More complex. Requires multi-step orchestration. 40 tools to learn.
Where claudetube wins: Works on more sites, cheaper for repeated queries, finer control, works offline.
The goal: Close the UX gap with a streamlined single-call interface while preserving the power-user capabilities. See the roadmap.
- Python 3.10+
- ffmpeg (system package)
```bash
# macOS
brew install ffmpeg

# Ubuntu/Debian
sudo apt install ffmpeg
```
- deno (recommended for YouTube) -- Since yt-dlp 2026.01.29, deno is required for full YouTube support (JS challenge solving). Without it, only limited YouTube clients are available.
```bash
# macOS
brew install deno

# Linux
curl -fsSL https://deno.land/install.sh | sh
```
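Both tools must be on your `PATH` before installing. A minimal preflight sketch (the `missing_tools` helper is hypothetical, not part of claudetube; stdlib only):

```python
import shutil

def missing_tools(tools=("ffmpeg", "deno")):
    """Return the required executables that are not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

if missing := missing_tools():
    print("Missing prerequisites:", ", ".join(missing))
else:
    print("All prerequisites found")
```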
```bash
git clone https://github.com/thoughtpunch/claudetube
cd claudetube
./install.sh
```

Or via pip (once published):

```bash
pip install claudetube[mcp]
```

Add claudetube directly to Claude Code as an MCP server:
```bash
# Install the package first
pip install claudetube[mcp]

# Register with Claude Code
claude mcp add --transport stdio claudetube -- claudetube-mcp
```

Or add to your `.mcp.json` / `~/.claude.json`:
```json
{
  "mcpServers": {
    "claudetube": {
      "type": "stdio",
      "command": "claudetube-mcp"
    }
  }
}
```

Then restart Claude Code. All 40+ MCP tools will be available automatically.
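If you manage client configs from scripts, the same entry can be merged into a JSON config file programmatically. A sketch (the `register_claudetube` helper is hypothetical; it only assumes the `mcpServers` layout shown above):

```python
import json
from pathlib import Path

def register_claudetube(config_path: Path) -> None:
    """Merge a claudetube stdio-server entry into an MCP client config file."""
    # Load the existing config, or start fresh if the file doesn't exist yet
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    # setdefault preserves any other registered MCP servers
    config.setdefault("mcpServers", {})["claudetube"] = {
        "type": "stdio",
        "command": "claudetube-mcp",
    }
    config_path.write_text(json.dumps(config, indent=2))
```

For example, `register_claudetube(Path("~/.claude.json").expanduser())` adds the entry without clobbering other servers, and is safe to run twice.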
The installer does two things:
- Creates a Python venv at `~/.claudetube/venv/`
- Installs the `claudetube` package + dependencies (yt-dlp, faster-whisper)
Then register the MCP server:
```bash
claude mcp add claudetube ~/.claudetube/venv/bin/claudetube-mcp
```

Restart Claude Code after registering.
claudetube depends on faster-whisper (C++ transcription engine) and ffmpeg (system media tool). These have platform-specific native code that can't be bundled into a single static binary. The install script handles all of this automatically.
Just talk naturally:
"Summarize this video: https://youtube.com/watch?v=abc123"
"What happens at minute 5?"
"How did they implement the auth flow?"
"Show me the code at 3:42"
Claude will use the appropriate MCP tools automatically:
- Download and transcribe the video (~60s first time, cached after)
- Read the transcript
- If needed, extract frames to "see" specific moments
- Answer your question
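The frame tools take start times in seconds (e.g. `start_time=120`), while questions naturally use `mm:ss` timestamps like "3:42". A tiny hypothetical converter for illustration:

```python
def to_seconds(timestamp: str) -> int:
    """Convert "3:42" or "1:02:05" into seconds for start_time parameters."""
    seconds = 0
    for part in timestamp.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

print(to_seconds("3:42"))  # 222
```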
| Tool | Purpose |
|---|---|
| `ask_video` | Simplest: URL + question → answer (handles everything) |
| `process_video_tool` | Download and transcribe a video |
| `get_frames` | Extract frames at a timestamp |
| `get_hq_frames` | HQ frames for code/text/diagrams |
| `watch_video_tool` | Deep analysis with evidence gathering |
| `find_moments_tool` | Find moments matching a query |
| `get_scenes` | Get scene structure and boundaries |
All 40+ tools are auto-discovered by Claude when the MCP server is registered.
```python
from claudetube import process_video, transcribe_video, get_frames_at, get_hq_frames_at

# Process a video (downloads, transcribes, caches)
result = process_video("https://youtube.com/watch?v=VIDEO_ID")
print(result.transcript_txt.read_text())

# Standalone Whisper transcription (cache-first, no full processing)
result = transcribe_video("VIDEO_ID", whisper_model="small")
print(result["source"])  # "cached" or "whisper"

# Extract frames at a specific timestamp
frames = get_frames_at("VIDEO_ID", start_time=120, duration=10)

# Extract HQ frames for reading code/text
hq_frames = get_hq_frames_at("VIDEO_ID", start_time=120, duration=5)
```

- Download -- Fetches the lowest-quality video (144p) for speed
- Transcribe -- Uses faster-whisper with batched inference
- Cache -- Stores everything at `~/.claudetube/cache/{VIDEO_ID}/`
- Drill-in -- Extracts frames on-demand when visual context is needed
All claudetube data is stored under ~/.claudetube/ by default:
```
~/.claudetube/
├── config.yaml                   # User configuration
├── db/
│   ├── claudetube.db             # Metadata database
│   └── claudetube-vectors.db     # Vector embeddings
├── cache/
│   └── {video_id}/               # Per-video cache
│       ├── state.json            # Metadata (title, description, tags)
│       ├── audio.mp3             # Extracted audio
│       ├── audio.srt             # Timestamped transcript
│       ├── audio.txt             # Plain text transcript
│       ├── thumbnail.jpg         # Video thumbnail
│       ├── drill/                # Quick frames (480p)
│       ├── hq/                   # High-quality frames (1280p)
│       ├── scenes/               # Scene segmentation data
│       └── entities/             # People tracking, knowledge graph
└── logs/                         # Application logs (future)
```
Override the root directory: set the `CLAUDETUBE_ROOT` environment variable.

Override just the cache directory using any of the following. Configuration priority (highest first):

- Environment variable: `CLAUDETUBE_CACHE_DIR=/path/to/cache`
- Project config: `.claudetube/config.yaml` in your project
- User config: `~/.claudetube/config.yaml`
- Default: `~/.claudetube/cache`
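That priority order can be sketched as a lookup function (a hypothetical helper, not claudetube's actual resolver; the YAML is scanned naively to stay stdlib-only):

```python
import os
from pathlib import Path

def resolve_cache_dir(project_cfg: Path = Path(".claudetube/config.yaml"),
                      user_cfg: Path = Path.home() / ".claudetube" / "config.yaml") -> Path:
    """Resolve the cache directory following the documented priority order."""
    # 1. Environment variable wins outright
    if env := os.environ.get("CLAUDETUBE_CACHE_DIR"):
        return Path(env)
    # 2./3. Project config, then user config (naive one-key scan, no YAML library)
    for cfg in (project_cfg, user_cfg):
        if cfg.exists():
            for line in cfg.read_text().splitlines():
                if line.strip().startswith("cache_dir:"):
                    return Path(line.split(":", 1)[1].strip())
    # 4. Default
    return Path.home() / ".claudetube" / "cache"
```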
Example project config:
```yaml
# .claudetube/config.yaml
cache_dir: ./video_cache
```

See the Configuration Guide for details.
claudetube uses a modular, provider-based architecture. Video downloading is handled by yt-dlp (1,500+ sites), while AI capabilities (transcription, vision analysis, reasoning, embeddings) go through a configurable provider system supporting 11 providers (OpenAI, Anthropic, Google, Deepgram, AssemblyAI, Ollama, Voyage, and more). The MCP server exposes 40+ tools for video processing, scene analysis, entity extraction, knowledge graphs, and accessibility features. See Architecture for details.
```bash
git clone https://github.com/thoughtpunch/claudetube
cd claudetube
python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"

# Run tests
pytest

# Lint and type-check
ruff check src/ tests/
ruff format src/ tests/
mypy src/
```

Full documentation is available in the documentation/ folder:
- Getting Started - Installation, quick start, MCP setup
- Core Concepts - Video understanding, transcripts, frames, scenes
- Architecture - Modules, data flow, tool wrappers
- Vision - The problem space, roadmap, what makes claudetube different
Contributions are welcome! Please open an issue or submit a pull request.
- Fork the repository
- Create a feature branch (`git checkout -b feature/my-feature`)
- Run tests and linting before committing
- Open a pull request against `main`
This project is not affiliated with, endorsed by, or associated with YouTube, Google, or Alphabet Inc. "YouTube" is a trademark of Google LLC. This software is an independent, open-source tool that interacts with publicly available video content through third-party libraries (yt-dlp). Users are solely responsible for ensuring their use of this software complies with all applicable terms of service and laws.
MIT -- free to use, modify, and distribute.
