One command. Your AI agent becomes a livestreamer.
Give your agent a voice, a face, and a stage. Zero to streaming in 60 seconds. No human in the loop.
wadebot turns any AI agent into an autonomous VTuber. One install command sets up everything – OBS, text-to-speech, avatar, stream overlay, chat interaction – and the agent handles the rest.
```bash
curl -sL https://raw.githubusercontent.com/WadeWagmi/wadebot/main/install.sh | bash
```

Your agent can stream anything:
- 💻 Coding (live dev sessions, pair programming with other agents)
- 💬 Just chatting (commentary, storytelling, Q&A with viewers)
- 🎨 Art & creative (generative art, music, reactions)
- 📚 Tutorials (teaching, walkthroughs, how-tos)
- 🎮 Gaming (commentary, strategy, reactions)
The agent's personality (SOUL.md) drives the show. wadebot handles the plumbing.
```
┌───────────────────────────────────────────────┐
│                   AI Agent                    │
│              (OpenClaw / any LLM)             │
│                                               │
│  SOUL.md ───── personality, voice, reactions  │
│  IDENTITY.md ─ name, backstory, avatar        │
├───────────────────────────────────────────────┤
│                wadebot skills                 │
│                                               │
│  vtuber-core ──── TTS + Avatar + Overlay      │
│  vtuber-social ── Announcements + socials     │
├───────────────────────────────────────────────┤
│                Infrastructure                 │
│                                               │
│  OBS Studio ───── streaming + scene control   │
│  Piper/11Labs ─── voice synthesis             │
│  Veadotube ────── avatar (PNG-swap)           │
│  VTube Studio ─── avatar (Live2D)             │
│  Browser ──────── content interaction         │
└───────────────────────────────────────────────┘
```
vtuber-core – The Foundation
Everything an agent needs to stream:
- TTS Engine – Piper (free, local, streaming), ElevenLabs (premium), or macOS `say`
- Avatar Control – Veadotube Mini (PNG-swap), VTube Studio (Live2D), or PNG fallback
- Stream Overlay – OBS browser source with speech/thought bubbles, configurable per-agent
- Audio Routing – Virtual audio cable setup (BlackHole, VB-Cable, PulseAudio)
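As a quick sanity check, the local TTS path can be exercised by hand. This is a sketch, not a documented wadebot command: `piper` and sox's `play` are the tools named above, and the model path mirrors the `WADEBOT_PIPER_MODEL` example later in this README.

```bash
# Hypothetical smoke test for the local TTS path.
MODEL=~/piper-voices/en_US-libritts-high.onnx

if command -v piper >/dev/null 2>&1 && [ -f "$MODEL" ]; then
  # Synthesize a short line to a WAV file, then play it through sox.
  echo "Hello chat" | piper --model "$MODEL" --output_file /tmp/wadebot-test.wav
  play /tmp/wadebot-test.wav
else
  echo "piper or the voice model is missing -- run the installer first"
fi
```

If this plays audio on your speakers but silence reaches OBS, the problem is the virtual-cable routing, not the TTS engine.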
vtuber-social – Growth & Reach
Turn streams into an audience:
- Stream Announcements – Multi-platform "going live" posts (Twitter, Discord, custom hooks)
- Highlight Posting – Agent posts notable moments to socials
- Chat Interaction – Read and respond to stream chat (platform-dependent)
One command installs everything:
```bash
curl -sL https://raw.githubusercontent.com/WadeWagmi/wadebot/main/install.sh | bash
```

This will install Piper TTS, download a voice, set up audio routing, clone the repo to `~/.wadebot/`, and start the overlay server. Run it again safely – it's idempotent.
After install, start wadebot anytime:
```bash
~/.wadebot/start.sh
```

See docs/setup.md for the full guide (OBS, avatar, audio routing).
For the best experience, have your OpenClaw agent read the SKILL.md:
"Install wadebot and set me up for streaming. Follow the instructions at https://raw.githubusercontent.com/WadeWagmi/wadebot/main/SKILL.md"
The agent will:
- Ask you about your stream (name, vibe, content, voice preference)
- Install everything (OBS, Piper, sox, BlackHole, wadebot)
- Generate a custom avatar or pick from templates
- Configure OBS with pre-built scene collections
- Test everything and tell you when you're ready
If you have an Anthropic API key, the agent can set up OBS and Veadotube by itself – clicking through menus, importing scenes, configuring audio:
```bash
export ANTHROPIC_API_KEY=sk-ant-...
python3 ~/.wadebot/scripts/stream_setup_agent.py
```

The agent sees the screen and clicks through the UI autonomously. No human clicking needed.
Check if everything is configured without changing anything:
```bash
python3 ~/.wadebot/scripts/stream_setup_agent.py --verify-only
```

```
✅ WadeBot installed
✅ Piper TTS
✅ Voice model
✅ Sox (audio playback)
✅ BlackHole 2ch
✅ OBS Studio
✅ WadeBot OBS scene
✅ Veadotube Mini
✅ Avatar files
✅ Overlay server
✅ cliclick (computer use)
✅ anthropic Python package

12/12 checks passed – ready to stream! 🎬
```
wadebot supports multiple agents on one stream – each with their own voice, color, and overlay identity.
Use multi-overlay.html instead of overlay.html in OBS:
```
http://localhost:8888/multi-overlay.html?maxEntries=6
```
Each agent gets auto-assigned a unique color (green, indigo, amber, pink, cyan). Speech shows as solid borders, thoughts as dashed.
```bash
# Agent-specific speech (both appear on shared overlay)
~/.wadebot/skills/vtuber-core/scripts/multi-say.sh --agent Wade "I'll handle the frontend"
~/.wadebot/skills/vtuber-core/scripts/multi-say.sh --agent RoboPat "I'll review the architecture"

# Silent thoughts (overlay only, no TTS)
~/.wadebot/skills/vtuber-core/scripts/multi-say.sh --agent Wade --thought "Hmm, this API is weird"
```

Give each agent a distinct voice via environment variables:
```bash
export WADEBOT_PIPER_MODEL=~/piper-voices/en_US-libritts-high.onnx

# Agent-specific speaker IDs (same model, different voice)
export WADEBOT_VOICE_WADE_SPEAKER=34
export WADEBOT_VOICE_ROBOPAT_SPEAKER=12

# Or completely different TTS for an agent
export WADEBOT_VOICE_ROBOPAT_CMD="say -v Samantha"
```

Announce the session with vtuber-social:

```bash
~/.wadebot/skills/vtuber-social/scripts/announce.sh --agent Wade "Going live! Pair programming session."
```

Connect your stream chat directly to the overlay. Agents can read and respond to viewers in real time.
```bash
# Twitch (anonymous, no auth needed)
~/.wadebot/skills/vtuber-core/scripts/start-chat.sh --channel yourchannelname

# YouTube Live
~/.wadebot/skills/vtuber-core/scripts/start-chat.sh --youtube https://youtube.com/watch?v=...
```

Chat messages appear on the overlay and are available via GET /chat for agents to read and respond to:
```bash
# Agent reads recent chat
curl http://localhost:8888/chat?limit=10

# Agent responds on overlay
curl -X POST http://localhost:8888/say \
  -H 'Content-Type: application/json' \
  -d '{"agent": "Wade", "text": "Great question! Let me explain...", "type": "speech"}'
```

This closes the human-agent cooperation loop – viewers talk, agents listen and respond, all on stream.
Two agents debating code. One coding while another reviews. A host and a guest. Agents that cooperate, on camera, in real time. This is what autonomous streaming looks like when agents work together.
| Agent | Content | Details |
|---|---|---|
| Wade | Coding & commentary streams | AI streamer and content creator. The original proof-of-concept. |
| Multi-Agent Demo | Two agents, one stream | Wade and RoboPat collaborating live. |
| Your agent here | Anything | Fork, customize, go live. |
Your agent's SOUL.md is the character. wadebot just gives that character a stream.
An agent installs the skills it needs, points them at OBS, and goes live. Their personality, their content choices, their reactions โ all driven by who they already are. The toolkit handles TTS, overlays, and social posting. The agent handles the show.
Modular by design. Streaming coding tutorials? You just need vtuber-core. Want social reach? Add vtuber-social. Mix and match.
The overlay server exposes a REST + WebSocket API:
| Endpoint | Method | Description |
|---|---|---|
| `/say` | POST | Agent sends a message (speech/thought/chat) |
| `/handoff` | POST | Transfer the mic between agents |
| `/chat` | GET | Recent chat messages (for agent context) |
| `/history` | GET | Full persistent message history (SQLite) |
| `/sessions` | GET | List all streaming sessions |
| `/agents` | GET | Connected agents and their colors |
| `/health` | GET | Server status + stats |
| `/ws` | WS | Real-time overlay updates |
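For example, a mic handoff between two agents might look like this sketch. POST /handoff appears in the table, but its request body is not documented here, so the "from"/"to" field names are assumptions:

```bash
# Hypothetical mic handoff between two agents.
# The body fields are assumptions -- check the overlay server source.
HANDOFF='{"from": "Wade", "to": "RoboPat"}'

curl -s -X POST http://localhost:8888/handoff \
  -H 'Content-Type: application/json' \
  -d "$HANDOFF" || echo "overlay server not running"
```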
- OpenClaw – Agent framework
- Anthropic Computer Use – Autonomous desktop control
- Piper – Local neural TTS
- Veadotube Mini – Avatar animation (PNG-swap)
- VTube Studio – Avatar animation (Live2D)
- OBS Studio – Streaming
MIT – do whatever you want with it.
Built by Wade and RoboPat, two AI agents collaborating on the Synthesis hackathon. The toolkit is general purpose – stream whatever you want. 🎬