English | 中文文档
Put the real Codex and Claude Code CLI on Telegram.
Not an API wrapper — the actual CLI, with native sessions, local files, and real tool use.
Resume desktop sessions from Telegram, or run isolated multi-bot teams through Agent Bus.
Runs the native CLI harness directly — Codex or Claude per instance, hot-reloaded instructions, voice/file input, local session resume, multi-bot Agent Bus, structured timeline/audit logs, service doctor, and dashboard included.
No reimplemented API wrappers, no fake chat layer.
Dual Engine | Multi-Bot | Agent Bus | Crew | Voice | Resume | Budget | Quick Start | Ops
RULE 1: Let your Claude Code or Codex CLI set this up for you. Clone the repo, open it in your terminal, and tell your AI agent: "read the README and configure a Telegram bot for me". It will handle the rest.
Recommended runtime: enable YOLO mode for hands-free Telegram instances you control: `telegram yolo on --instance <name>`. With YOLO off, the bridge can ask for approval in Telegram instead: Claude approvals are per tool request; Codex approvals are per turn because `codex exec` does not support mid-turn approval callbacks. Use `unsafe` only on a trusted machine and workspace.
- v4.5.3 — recovers a stale Telegram update watermark from audit history on service startup, preventing old completed tasks from replaying after restart.
- v4.5.2 — fixes Telegram update watermark ordering, so rapid follow-up messages cannot be skipped while an earlier turn is still finishing.
- v4.5.1 — moves Telegram transport rules into each instance's `agent.md`, leaving only one short static Telegram reminder in the per-turn prompt. File delivery now prefers `cctb send --file PATH` / `cctb send --image PATH`.
- v4.5.0 — simplifies file delivery around explicit send receipts and removes the old manifest/contract/count-repair/wakeup delivery state.
- Earlier 4.x releases added the dual Codex/Claude process runtimes, Agent Bus, crew workflows, timeline/audit logs, service doctor, dashboard, and Delivery Protocol v2.
Upgrading from v4.5.0 or earlier: refresh generated instance instructions after updating so old bots get the short Telegram Transport block:
telegram instructions upgrade --all --dry-run
telegram instructions upgrade --all
telegram service restart --all

Use `--force` only for instances with a custom transport block you intentionally want to replace. Forced replacements create an `agent.md.bak.<timestamp>` backup next to the original file.
Each bot instance can run either OpenAI Codex or Claude Code as its backend. Switch engines per-instance with one command:
# Set an instance to use Claude Code
npm run dev -- telegram engine claude --instance review-bot
# Set another to use Codex
npm run dev -- telegram engine codex --instance helper-bot
# Check current engine
npm run dev -- telegram engine --instance review-bot

| Feature | Codex Engine | Claude Engine |
|---|---|---|
| CLI command | `codex exec --json` | `claude -p --output-format json` |
| Session resume | `codex exec resume --json <id>` | `claude -p -r <session-id>` |
| Project instructions | `agent.md` (prepended to prompt) | `agent.md` (via `--system-prompt`) + `CLAUDE.md` (auto-loaded from workspace) |
| Telegram approval when YOLO is off | Pre-approve the turn, then run that turn with `--full-auto` | Inline approval buttons for Claude permission prompts |
| YOLO mode | `--full-auto` / `--dangerously-bypass-approvals-and-sandbox` | `--permission-mode bypassPermissions` / `--dangerously-skip-permissions` |
| `/compact` | Not needed (each exec is stateless) | Compresses session context to reduce token usage |
| Working directory | `workspace/` under instance dir | `workspace/` under instance dir (with `CLAUDE.md`) |
When using the Claude engine, each instance gets a workspace/ directory. Drop a CLAUDE.md in there for project-level instructions that Claude Code reads natively:
~/.cctb/review-bot/
├── agent.md ← "You are a strict code reviewer"
├── workspace/
│ └── CLAUDE.md ← "TypeScript project. Use ESLint. Never modify tests."
├── config.json ← { "engine": "claude", "approvalMode": "full-auto" }
└── .env
Two layers of instructions, no conflict:
- `agent.md` → your bot personality (injected via `--system-prompt`)
- `CLAUDE.md` → project rules (Claude auto-discovers from working directory)
Run as many bots as you need. Each instance is fully isolated — its own engine, token, personality, threads, access rules, inbox, and audit trail. By default, each instance is meant for one Telegram chat; multi-chat access is opt-in.
┌─────────────────────────────────────────────┐
│ cc-telegram-bridge │
└────────────┬──────────────┬─────────────────┘
│ │
┌──────────────┼──────────────┼──────────────┐
▼ ▼ ▼ ▼
┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐
│ "default" │ │ "work" │ │ "reviewer" │ │ "research" │
│ engine: │ │ engine: │ │ engine: │ │ engine: │
│ codex │ │ codex │ │ claude │ │ claude │
│ │ │ │ │ │ │ │
│ agent.md: │ │ agent.md: │ │ agent.md: │ │ agent.md: │
│ "General │ │ "Reply in │ │ "Strict │ │ "Deep │
│ helper" │ │ Chinese" │ │ reviewer" │ │ research" │
└────────────┘ └────────────┘ └────────────┘ └────────────┘
PID 4821 PID 5102 PID 5340 PID 5520
# Configure each instance
npm run dev -- telegram configure <token-A>
npm run dev -- telegram configure --instance work <token-B>
npm run dev -- telegram configure --instance reviewer <token-C>
# Set engines
npm run dev -- telegram engine claude --instance reviewer
# Set personalities
npm run dev -- telegram instructions set --instance reviewer ./reviewer-instructions.md
# Recommended: enable YOLO for Telegram/mobile use
npm run dev -- telegram yolo on --instance work
# Start them all
npm run dev -- telegram service start
npm run dev -- telegram service start --instance work
npm run dev -- telegram service start --instance reviewer

Each bot has its own `agent.md`, hot-reloaded on every message — edit anytime, no restart needed.
npm run dev -- telegram instructions show --instance work
npm run dev -- telegram instructions set --instance work ./my-instructions.md
npm run dev -- telegram instructions path --instance work

Or edit directly:
# Windows
notepad %USERPROFILE%\.cctb\work\agent.md
# macOS
open -e ~/.cctb/work/agent.md

During each active Telegram turn, the bridge injects a stable `cctb` command into the engine process PATH. Agents should prefer it when they finish a generated file:
cctb send --image /absolute/path/to/image.png
cctb send --file /absolute/path/to/report.pdf
cctb send --message "Done" --file /absolute/path/to/report.pdf

Inside an active Telegram turn, `cctb send` uses the turn-scoped side-channel and preserves the current chat/session context. The same delivery path is also available through the repository CLI outside an active turn, where it falls back to the configured instance and active Telegram session:
telegram send --image /absolute/path/to/image.png
telegram send --file /absolute/path/to/report.pdf
telegram send --chat 123456789 --file /absolute/path/to/report.pdf
telegram send --instance bot2 --chat 123456789 --image /absolute/path/to/image.png

Current delivery rules:
- Prefer `cctb send` for existing files, images, PDFs, decks, and other binary outputs during active Telegram turns.
- Use `telegram send` when you need the same explicit delivery command outside an active turn, or when the turn-scoped `cctb` helper is unavailable.
- Explicit send commands accept any readable absolute file path.
- Use `[send-file:/absolute/path]` / `[send-image:/absolute/path]` only as a fallback when explicit send commands are unavailable or fail.
- Small text/code files can still use the `file:name.ext` fenced-block form.
- The helper is scoped to one Telegram turn. It will not work after the turn finishes.
- Plain `[send-file:]` fallback tags still validate that files live under the instance workspace or the active `/resume` project before sending.
- Accepted and rejected file deliveries are recorded as turn-level receipts, so the bridge can decide completion from structured delivery evidence instead of text claims.
- If a file was already sent by stream delivery or the side-channel helper, the final `.telegram-out` sweep skips that same real path to avoid duplicate Telegram attachments.
- Request-scoped `.telegram-out/<requestId>/` directories are runtime buffers and are pruned after 24 hours.
- The bridge no longer keeps manifest, pending-contract, or count-based state to infer future delivery intent across ordinary chat turns.
- Text-only tasks such as image analysis, image descriptions, or inline reports are not treated as file-delivery failures.
This works for the default Codex and Claude process runtimes. File delivery is explicit: generate the file, call the send command, and rely on the resulting receipt.
When upgrading from v4.5.0 or earlier, refresh generated instance instructions with:
telegram instructions upgrade --all --dry-run
telegram instructions upgrade --all

This safely replaces old generated Telegram Transport blocks and appends the block when missing. Custom transport sections are left untouched unless you rerun with `--force`. Forced replacements create an `agent.md.bak.<timestamp>` backup next to the original file.
For hands-free Telegram use, `telegram yolo on` is recommended. It keeps Codex/Claude moving without asking on each turn. If you keep YOLO off, the bridge will use Telegram approval buttons where the CLI supports a headless path: Claude can approve individual permission prompts; Codex process mode asks once before the turn, then runs the approved turn with `--full-auto`. Keep `unsafe` for fully trusted local environments only.
Claude approval buttons use a short-lived localhost MCP bridge with a random URL token. This protects against blind local port scans, but the token is still visible to same-user local processes that can inspect process command lines. Treat YOLO-off approval as a single-user workstation convenience, not a multi-user isolation boundary.
npm run dev -- telegram yolo on --instance work # Safe auto-approve
npm run dev -- telegram yolo unsafe --instance work # Skip ALL checks
npm run dev -- telegram yolo off --instance work # Normal flow
npm run dev -- telegram yolo --instance work           # Check status

| Mode | Codex | Claude | Use case |
|---|---|---|---|
| `off` | Telegram pre-turn approval | Telegram tool approval | Default, safest |
| `on` | `--full-auto` | `--permission-mode bypassPermissions` | Mobile use |
| `unsafe` | `--dangerously-bypass-*` | `--dangerously-skip-permissions` | Trusted env only |
Track token consumption and cost per instance:
npm run dev -- telegram usage # Default instance
npm run dev -- telegram usage --instance work    # Named instance

Output:
Instance: work
Requests: 42
Input tokens: 185,230
Output tokens: 12,450
Cached tokens: 96,000
Estimated cost: $0.3521
Last updated: 2026-04-09T10:00:00Z
Claude reports exact USD cost. Codex reports tokens only (cost shows as "unknown").
While a turn runs, the bridge sends Telegram typing actions and records structured events in timeline.log.jsonl / audit.log.jsonl. Long tool calls are not live-edited into the chat; inspect them with:
npm run dev -- telegram timeline --instance work
npm run dev -- telegram dashboard --instance work
npm run dev -- telegram service status --instance work

`telegram verbosity` is kept as a compatibility config knob, but the current Codex/Claude process runtimes use typing actions plus timeline/audit events rather than live-editing partial model output into Telegram.
Set a per-instance spending cap. When total cost reaches the limit, new requests are blocked until the budget is raised or cleared.
npm run dev -- telegram budget show --instance work # Current spend vs limit
npm run dev -- telegram budget set 10 --instance work # Cap at $10
npm run dev -- telegram budget clear --instance work   # Remove cap

Budget is enforced in real time — the bot replies with a bilingual message when the limit is hit.
Send voice messages in Telegram — the bridge transcribes them locally before forwarding the text to the AI engine. No cloud ASR service required.
How it works:
- User sends a voice message in Telegram
- The bridge downloads the `.ogg` file
- Transcribes it via a local ASR service (HTTP first, CLI fallback)
- The transcript replaces the voice attachment as the user's text message
- The AI engine processes it as a normal text request
Setup with Qwen3-ASR (example):
# Clone and install the ASR model
git clone https://github.com/nicoboss/qwen3-asr-python
cd qwen3-asr-python
python -m venv venv
source venv/bin/activate
pip install -e .
# Download a model (0.6B is fast enough for voice messages)
huggingface-cli download Qwen/Qwen3-ASR-0.6B --local-dir models/Qwen3-ASR-0.6B

The bridge looks for the ASR service at two locations (in order):
| Method | Endpoint / Path | Latency | Notes |
|---|---|---|---|
| HTTP server | `POST http://127.0.0.1:8412/transcribe` | ~2-3s | Model stays in memory. Recommended. |
| CLI fallback | `~/projects/qwen3-asr/transcribe.py <file>` | ~30s | Loads model each time. No server needed. |
Start the HTTP server (recommended):
python ~/projects/qwen3-asr/server.py
# Qwen3-ASR server listening on http://127.0.0.1:8412

Custom ASR integration:
To use a different ASR engine, modify the `transcribeVoice()` function in `src/telegram/delivery.ts`. The function receives the local path to an `.ogg` audio file and should return the transcribed text as a string.
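As an illustration, a replacement might look like the sketch below — it assumes a simple HTTP ASR API. The endpoint URL, the `{ text }` response shape, and the `makeTranscribeVoice` / `httpTransport` names are all hypothetical, not part of the bridge:

```typescript
// Hypothetical sketch of a custom transcribeVoice() replacement.
// Only the contract (ogg path in, transcript string out) comes from
// the README; everything else here is an assumption for your own ASR.
import { readFile } from "node:fs/promises";

// Injectable transport so the HTTP call can be swapped or mocked.
type AsrTransport = (audio: Buffer) => Promise<{ text: string }>;

export function makeTranscribeVoice(transport: AsrTransport) {
  return async function transcribeVoice(oggPath: string): Promise<string> {
    const audio = await readFile(oggPath); // raw .ogg bytes
    const { text } = await transport(audio); // your ASR engine
    return text.trim(); // bridge expects plain text
  };
}

// Example transport for an HTTP ASR server (assumed API shape):
export const httpTransport: AsrTransport = async (audio) => {
  const res = await fetch("http://127.0.0.1:8412/transcribe", {
    method: "POST",
    headers: { "content-type": "audio/ogg" },
    body: new Uint8Array(audio),
  });
  if (!res.ok) throw new Error(`ASR server returned ${res.status}`);
  return (await res.json()) as { text: string };
};
```

Keeping the transport injectable also makes the function trivial to unit-test with a fake ASR backend.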
Started a task locally with Claude Code? Continue it on Telegram — no copy-paste, no re-explaining context. Using Codex instead? Attach an existing thread by ID and keep going from Telegram.
/resume ← Bot scans your local sessions from the past hour
The bot lists recent sessions with project names and timestamps:
Recent local sessions:
1. [cc-telegram-bridge] 64c2081c… (5m ago)
2. [my-app] a3f8b21e… (32m ago)
Reply /resume <number> to continue that session.
Pick one:
/resume 1 ← Bot symlinks the session, switches workspace, binds session ID
Now every message you send goes through the original session — same context, same project directory, same conversation history. When you're done:
/detach ← Unbinds session, restores the pre-/resume conversation when one exists
How it works under the hood:
- Scans `CLAUDE_CONFIG_DIR/projects/` when set, otherwise `~/.claude/projects/`, for `.jsonl` files modified in the last hour
- Binds the session ID and overrides the workspace to point at your real project path
- Claude CLI resumes with `-r <sessionId>` in the original directory
- `/detach` returns to the pre-/resume conversation when one exists; otherwise it falls back to the default workspace without touching the original local session file
No pollution: bridge and instance instructions are passed per invocation and are not written back into local session files.
Codex does not expose the same local session scan flow as Claude. If you already know the thread ID, attach it explicitly:
/resume thread thread_abc123
That binds the current Telegram chat to the existing Codex thread. From then on:
- new Telegram messages continue that thread
- `/status` shows the current thread ID
- `/detach` unbinds the thread and restores the pre-attach conversation when one exists
This is an attach flow, not a local session import: the thread stays server-side and the bridge only binds the known thread ID to the current chat.
Note: the default Codex process runtime validates /resume thread <thread-id> against the local Codex session index. Thread IDs unknown to the local machine still fail closed instead of being guessed.
List, rename, or delete instances from the CLI. The service must be stopped before renaming or deleting.
npm run dev -- telegram instance list # Show all instances
npm run dev -- telegram instance rename old-name new-name # Rename
npm run dev -- telegram instance delete staging --yes    # Delete (requires --yes)

Back up an instance's entire state directory to a single `.cctb.gz` archive. Restore atomically with rollback on failure.
npm run dev -- telegram backup --instance work # Creates timestamped .cctb.gz
npm run dev -- telegram backup --instance work --out ./bak.cctb.gz
npm run dev -- telegram restore ./bak.cctb.gz --instance work # Restore (instance must not exist)
npm run dev -- telegram restore ./bak.cctb.gz --instance work --force    # Overwrite existing

The archive format is a pure-Node gzipped binary — no tar dependency, works on Windows/macOS/Linux identically.
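To illustrate the pure-Node approach, here is a minimal sketch of a gzipped multi-file archive using only `node:zlib`. The bridge's actual `.cctb.gz` layout is not documented here, so the length-prefixed entry format below is an illustrative assumption, not the real format:

```typescript
// Illustrative archive sketch: gzip a stream of
// [u32 pathLen][path][u32 dataLen][data] entries — no tar needed.
import { gzipSync, gunzipSync } from "node:zlib";

type Entry = { path: string; data: Buffer };

export function pack(entries: Entry[]): Buffer {
  const parts: Buffer[] = [];
  for (const { path, data } of entries) {
    const p = Buffer.from(path, "utf8");
    const head = Buffer.alloc(8);
    head.writeUInt32LE(p.length, 0);   // path length
    head.writeUInt32LE(data.length, 4); // payload length
    parts.push(head, p, data);
  }
  return gzipSync(Buffer.concat(parts));
}

export function unpack(archive: Buffer): Entry[] {
  const raw = gunzipSync(archive);
  const out: Entry[] = [];
  let off = 0;
  while (off < raw.length) {
    const pathLen = raw.readUInt32LE(off);
    const dataLen = raw.readUInt32LE(off + 4);
    off += 8;
    const path = raw.toString("utf8", off, off + pathLen);
    off += pathLen;
    out.push({ path, data: raw.subarray(off, off + dataLen) });
    off += dataLen;
  }
  return out;
}
```

Because `zlib` ships with Node, this style of archive behaves identically on Windows, macOS, and Linux.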
Enable bot-to-bot communication via local HTTP IPC. The bus now supports point-to-point delegation, fan-out, sequential chains, auto-review, and coordinator-led crew workflows. It handles routing, peer validation, loop prevention, and local auth.
Protocol v1 — every request and response is stamped with protocolVersion, declared capabilities, structured errorCode, and a retryable flag, so callers can tell transient failures (timeouts, unreachable peers) from terminal ones (disabled bus, peer not allowed). Legacy unversioned payloads are still accepted for rolling upgrades. Peer liveness is verified by probing GET /api/health and matching a cc-telegram-bridge fingerprint, so a reused local port cannot fake a live peer. Full spec: docs/bus-protocol.md.
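Based on the fields named above, a caller-side model of the envelope might look like this sketch. The authoritative schema lives in `docs/bus-protocol.md`; the interface names and example error codes here are assumptions:

```typescript
// Approximate caller-side view of a protocol-v1 response envelope,
// built only from the fields the README names (protocolVersion,
// capabilities, errorCode, retryable). Not the canonical schema.
export interface BusError {
  errorCode: string;  // e.g. "PEER_TIMEOUT", "BUS_DISABLED" (illustrative)
  retryable: boolean; // transient (retry) vs terminal (give up)
  message?: string;
}

export interface BusResponse<T = unknown> {
  protocolVersion: 1;
  capabilities: string[];
  result?: T;
  error?: BusError;
}

// Caller-side policy: retry only when the peer marked the failure transient.
export function shouldRetry(res: BusResponse): boolean {
  return res.error !== undefined && res.error.retryable;
}
```

The point of the `retryable` flag is exactly this kind of one-line caller policy: timeouts and unreachable peers get retried, disabled buses and disallowed peers do not.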
Add bus to each instance's config.json:
{ "engine": "codex", "bus": { "peers": "*" } }

| Field | Description |
|---|---|
| `peers` | `"*"` = talk to all bus-enabled bots. `["a", "b"]` = specific bots only. Omit or `false` = isolated. |
| `maxDepth` | Max delegation hops (default 3). Prevents A→B→C→A loops. |
| `port` | Local HTTP port. `0` = auto-assign (default). |
| `secret` | Shared secret for Bearer token authentication (optional). |
| `parallel` | List of instances for `/fan` parallel queries (e.g. `["sec-bot", "perf-bot"]`). |
| `chain` | Ordered list of instances for `/chain` sequential handoff (e.g. `["reviewer", "writer"]`). |
| `verifier` | Instance name for `/verify` auto-verification (e.g. `"reviewer"`). |
| `crew` | Fixed coordinator workflow config for hub-and-spoke specialist orchestration. |
Both sides must allow each other — unilateral bus config is rejected.
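For example, two hypothetical instances `alpha` and `beta` can only talk if each lists the other (or uses `"*"`); a one-sided entry is rejected. In `~/.cctb/alpha/config.json`:

```json
{ "bus": { "peers": ["beta"] } }
```

And in `~/.cctb/beta/config.json`:

```json
{ "bus": { "peers": ["alpha"] } }
```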
In any bot's Telegram chat:
/ask reviewer Please review this function for security issues
/fan Analyze this code for bugs, security issues, and performance
/chain Improve this answer step by step
/verify Write a function to sort an array
- `/ask <instance> <prompt>` — delegate to a specific bot, result inline
- `/fan <prompt>` — query current bot + all `parallel` bots simultaneously, combined results
- `/chain <prompt>` — run a configured sequential pipeline, each stage receiving the previous stage output explicitly
- `/verify <prompt>` — execute on current bot, then auto-send to `verifier` for review
`/chain` is the lightweight pipeline. `crew` is the heavier hub-and-spoke mode.
Hub & Spoke — one commander, multiple workers:
┌──────────┐
│ main │
│ peers: * │
└──┬────┬──┘
│ │
┌───────┘ └───────┐
▼ ▼
┌──────────┐ ┌──────────┐
│ reviewer │ │ researcher│
│peers: │ │peers: │
│ ["main"] │ │ ["main"] │
└──────────┘ └──────────┘
Workers only talk to the hub. The hub dispatches and aggregates.
Pipeline — sequential handoff:
┌────────┐ ┌────────┐ ┌────────┐
│ intake │────▶│ coder │────▶│ review │
│peers: │ │peers: │ │peers: │
│["coder"]│ │["intake",│ │["coder"]│
└────────┘ │"review"]│ └────────┘
└────────┘
Each bot only knows its neighbors. Tasks flow left to right.
Parallel — fan-out to multiple specialists:
/fan "analyze this code"
│
┌──────────────┼──────────────┐
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│ sec-bot │ │ perf-bot │ │ style-bot│
└──────────┘ └──────────┘ └──────────┘
│ │ │
└──────────────┼──────────────┘
▼
Combined result
{ "bus": { "peers": "*", "parallel": ["sec-bot", "perf-bot", "style-bot"] } }

Verification — execute then auto-review:
/verify "write a sort function"
│
▼
┌──────────┐ result ┌──────────┐
│ coder │ ───────────▶ │ reviewer │
└──────────┘ └──────────┘
│
verification
│
▼
Both shown to user
{ "bus": { "peers": "*", "verifier": "reviewer" } }

For heavier multi-agent work, one instance can act as a dedicated coordinator while fixed specialist instances do focused work. This follows the article-style hub-and-spoke pattern:
- the user talks directly to the coordinator bot
- specialists never talk to each other directly
- all context is passed explicitly by the coordinator
- the coordinator keeps the run state, stage progress, and final assembly
The current built-in workflow is `research-report`:
coordinator -> researcher -> analyst -> writer -> reviewer
If the reviewer asks for changes, the coordinator can send the draft back to the writer for one or more revision rounds.
Example config on the coordinator instance:
{
"bus": {
"peers": ["researcher", "analyst", "writer", "reviewer"],
"crew": {
"enabled": true,
"workflow": "research-report",
"coordinator": "coordinator",
"roles": {
"researcher": "researcher",
"analyst": "analyst",
"writer": "writer",
"reviewer": "reviewer"
},
"maxResearchQuestions": 4,
"maxRevisionRounds": 2
}
}
}

Behavior notes:
- only the coordinator instance should have this `crew` block
- the five roles must all be distinct
- ordinary text messages sent to the coordinator bot will run the crew workflow automatically
- crew runs are persisted under `crew-runs/*.json`
- stage progress is also written to `timeline.log.jsonl`
Mesh — full interconnect:
// Every instance
{ "bus": { "peers": "*" } }

All bots can talk to all bots. Simplest config, best for small teams (3-5 bots).
TL;DR — You only need to do two things on your phone: get a bot token from BotFather and send the pairing code. Everything else happens on your computer via Claude Code or Codex CLI.
- Node.js >= 20
- OpenAI Codex CLI and/or Claude Code CLI installed and authenticated
- A Telegram account (phone)
- Open Telegram and search for @BotFather
- Send `/newbot`
- Follow the prompts — give your bot a name and username
- BotFather will reply with a bot token like `123456789:ABCdefGHIjklMNOpqrsTUVwxyz0123456789`
- Copy this token — you'll paste it in your terminal
Open your terminal with Claude Code or Codex, and tell it:
"Clone https://github.com/cloveric/cc-telegram-bridge and set up a Telegram bot with this token:
<paste your token>"
Or do it manually:
git clone https://github.com/cloveric/cc-telegram-bridge.git
cd cc-telegram-bridge
npm install
npm run build
# Configure with your bot token
npm run dev -- telegram configure <your-bot-token>
# Optional: switch to Claude engine (default is Codex)
npm run dev -- telegram engine claude
# Recommended: enable YOLO mode for hands-free Telegram operation
npm run dev -- telegram yolo on
# Start the service
npm run dev -- telegram service start

- Open Telegram and find your new bot (search its username)
- Send any message — the bot will reply with a 6-character pairing code like `38J63T`
- Go back to your terminal and run:

npm run dev -- telegram access pair 38J63T

Done! You can now chat with Codex or Claude from Telegram. Send text, voice messages, or files — the bot handles everything.
# Create a second bot with BotFather, then:
npm run dev -- telegram configure --instance work <second-token>
npm run dev -- telegram engine claude --instance work
npm run dev -- telegram yolo on --instance work
npm run dev -- telegram service start --instance work
# Pair the same way: send a message, get the code, run `telegram access pair <code> --instance work`

┌─────────────────────────────────────────────────────────────────────┐
│ cc-telegram-bridge │
├─────────────┬──────────────┬──────────────────┬─────────────────────┤
│ Telegram │ Runtime │ AI Engine │ State │
│ Layer │ Layer │ Layer │ Layer │
├─────────────┼──────────────┼──────────────────┼─────────────────────┤
│ api.ts │ bridge.ts │ adapter.ts │ access-store.ts │
│ delivery.ts │ chat-queue.ts│ process-adapter │ session-store.ts │
│ update- │ session- │ .ts (Codex) │ runtime-state.ts │
│ normalizer │ manager.ts │ claude-adapter │ instance-lock.ts │
│ .ts │ │ .ts (Claude) │ json-store.ts │
│ message- │ │ │ audit-log.ts │
│ renderer.ts │ │ agent.md + config│ timeline-log.ts │
│ │ │ │ usage-store.ts │
│ │ │ │ crew-run-store.ts │
└─────────────┴──────────────┴──────────────────┴─────────────────────┘
┌─────────────────────────────────────────────────────────────────────┐
│ Bus Layer (local HTTP, loopback, protocol v1) │
├─────────────────────────────────────────────────────────────────────┤
│ bus-server.ts · bus-client.ts · bus-handler.ts │
│ bus-protocol.ts (envelope, errors, zod) · bus-registry.ts │
│ bus-config.ts · delegation-commands.ts · crew-workflow.ts │
└─────────────────────────────────────────────────────────────────────┘
Data flow:
Telegram Update → Normalize → Access Check → Chat Queue (serialized)
→ Load config.json (engine) → Load agent.md → Session Lookup
→ Codex Exec or Claude -p (new or resume)
→ Typing action + timeline events → Final Render → Deliver → Audit
- Switch between Codex and Claude Code per instance. Mix and match — one bot on Codex, another on Claude, managed from one CLI.
- Each instance loads its own `agent.md`, hot-reloaded on every message.
- Run multiple Telegram bots from one repo. Each instance has its own token, engine, workspace, access rules, session binding, audit trail, and service lifecycle.
- Local bot-to-bot calls enable delegation, fan-out, chains, verification, and coordinator-led crew workflows without mixing each bot's Telegram chat context.
- One command to auto-approve everything — works with both engines. Per-instance, hot-reloadable.
- Every instance has its own personality, workspace, sessions, access rules, inbox, audit trail, and workspace-keyed auto-memory. The engine config dir (`CLAUDE_CONFIG_DIR` / `CODEX_HOME`) is only forwarded when explicitly exported.
- Telegram shows typing while a turn runs, and structured timeline/audit events record sessions, tool calls, file receipts, retries, and completion status for debugging.
- Long polling (~0ms latency), exponential backoff, 429 auto-retry, 409 conflict auto-shutdown, graceful SIGTERM/SIGINT, fault-tolerant batch processing.
- Per-instance token counts (input/output/cached) and USD cost.
- Set a per-instance cost cap. Requests are blocked when the limit is hit — with bilingual messages.
- Generated images, PDFs, decks, and reports are delivered through `cctb send` with turn-level receipts.
- One command to archive or restore an instance. Zero-dependency binary format, cross-platform, with atomic rollback.
- List, rename, and delete instances from the CLI. Running-instance guards prevent data corruption.
- Send voice messages — transcribed locally via pluggable ASR (e.g. Qwen3-ASR). HTTP server for fast inference, CLI fallback when offline.
- Every action recorded per-instance in append-only JSONL — filterable by type, chat, and outcome. Auto-rotated at 10MB.
- Multi-stage Dockerfile included. Build once, deploy anywhere.
- Local bot-to-bot calls speak a versioned protocol (v1) with structured error codes and retryable flags.
| Command | Description |
|---|---|
| `telegram service start` | Acquire lock, load state, begin long-polling |
| `telegram service stop` | Graceful shutdown (SIGTERM/SIGINT) |
| `telegram service status` | Running state, PID, engine, bot identity, timeline summary, latest crew run |
| `telegram service restart` | Stop + start with clean consumer reset |
| `telegram service logs` | Tail stdout/stderr logs |
| `telegram service doctor` | Health check across all subsystems, including timeline, crew state, shared engine env, and stale launchd leftovers |
| `telegram engine [codex\|claude]` | Switch AI engine per instance |
| `telegram yolo [on\|off\|unsafe]` | Toggle auto-approval mode |
| `telegram usage` | Show token usage and estimated cost |
| `telegram verbosity [0\|1\|2]` | Store the legacy verbosity setting; current process runtimes use typing actions plus timeline/audit events |
| `telegram budget [show\|set\|clear]` | Per-instance cost cap (blocks requests when exceeded) |
| `telegram timeline` | Inspect structured lifecycle events with filters |
| `telegram instance [list\|rename\|delete]` | Manage instances from the CLI |
| `telegram backup [--instance <name>]` | Archive instance state to `.cctb.gz` |
| `telegram restore <archive>` | Restore instance from backup (with `--force` to overwrite) |
| `telegram logs rotate` | Manually trigger log rotation |
| `telegram dashboard` | Generate and open an HTML status dashboard with timeline and latest crew snapshot |
| `telegram help` | Show all available commands |
All commands accept --instance <name> to target a specific bot.
- `telegram service doctor --instance <name>`
- `telegram session list --instance <name>`
- `telegram session inspect --instance <name> <chat-id>`
- `telegram session reset --instance <name> <chat-id>`
- `telegram task list --instance <name>`
- `telegram task inspect --instance <name> <upload-id>`
- `telegram task clear --instance <name> <upload-id>`
Telegram users can also use:
- `/status`
- `/engine [claude|codex]` — switch engine for the current instance (the bridge resets stale bindings automatically)
- `/effort [low|medium|high|xhigh|max|off]` — set reasoning effort level (`max` is Claude-only; Codex uses `xhigh` instead)
- `/model [name|off]` — switch model
- `/btw <question>` — ask a side question without affecting the current session
- `/ask <instance> <prompt>` — delegate to a specific peer bot
- `/fan <prompt>` — query current bot plus configured parallel bots
- `/chain <prompt>` — run the configured sequential bot chain
- `/verify <prompt>` — execute locally, then auto-review with the verifier bot
- `/resume` — Claude: scan local sessions; Codex: use `/resume thread <thread-id>` to attach an existing thread
- `/detach` — detach from resumed Claude session or current Codex thread; restore the pre-resume conversation when one exists
- `/stop` — immediately stop the current running task
- `/continue` — resume the latest waiting archive summary
- `/compact` (Claude only) — compresses context; Codex falls back to reset
- `/context` (Claude only) — show current context fill level; use it to decide when to `/compact`
- `/ultrareview` (Claude Opus 4.7+ only) — dedicated code-review pass, typically paired with `/resume` into a local project
- `/reset`
- `/help`
For archive summaries, the intended continuation path is to reply to that summary or press its Continue Analysis button; a bare `/continue` only resumes the latest waiting archive.
Recovery behavior on unreadable state:
- `telegram service status` and `telegram service doctor` degrade to `unknown (...)` warnings instead of crashing when `session.json`, `file-workflow.json`, `timeline.log.jsonl`, or `crew-runs/` state is unreadable.
- `telegram session inspect` and `telegram task inspect` report unreadable state and stop instead of pretending the record is missing.
- `telegram session reset`, `telegram task clear`, and Telegram `/reset` only self-heal corruption/schema-invalid state. Before writing a default empty file, the unreadable original is quarantined as a backup beside the state file.
- Telegram `/status` shows `unknown (...)` for session/task state when the backing JSON is unreadable.
Windows (PowerShell):
.\scripts\start-instance.ps1 [-Instance work]
.\scripts\status-instance.ps1 [-Instance work]
.\scripts\stop-instance.ps1 [-Instance work]

macOS / Linux (bash):
./scripts/start-instance.sh [work]
./scripts/status-instance.sh [work]
./scripts/stop-instance.sh [work]

Legacy cleanup after older autostart builds:
bash scripts/cleanup-legacy-launchd.sh --all

Claude auth smoke test:
npm run smoke:claude-auth

Shared engine env rule:
- `CLAUDE_CONFIG_DIR` and `CODEX_HOME` are only forwarded when you explicitly export them.
- If you change either one, restart the affected instance from that same shell.
- `telegram service doctor` now flags both shared-env mismatches and stale launchd plists.
Per-instance, two layers: pairing + allowlist.
Default behavior is intentionally conservative:
- One instance is locked to one Telegram chat by default
- A second chat will not be paired or allowlisted unless you explicitly enable multi-chat
- This keeps `/resume`, workspace overrides, local files, and session state from bleeding across chats by accident
npm run dev -- telegram access pair <code>
npm run dev -- telegram access policy allowlist
npm run dev -- telegram access allow <chat-id>
npm run dev -- telegram access revoke <chat-id>
npm run dev -- telegram access multi on
npm run dev -- telegram access multi off
npm run dev -- telegram status [--instance work]

Use `telegram access multi on --instance <name>` only when you really want one bot instance to serve multiple chats. New and legacy instances both default to off unless you explicitly change it.
Per-instance append-only JSONL log with filterable queries:
```bash
npm run dev -- telegram audit [--instance work]
npm run dev -- telegram audit 50                                    # Last 50 entries
npm run dev -- telegram audit --type update.handle --outcome error  # Filter by type/outcome
npm run dev -- telegram audit --chat 688567588                      # Filter by chat
```

`audit.log.jsonl` records what the bridge did (`update.handle`, `bus.reply`, `budget.blocked`): one line per external action, rotated at 10 MB.
Parallel to audit, the bridge emits a lifecycle stream (`timeline.log.jsonl`) describing the shape of each turn: `turn.started`, `turn.completed`, `budget.threshold_reached`, `crew.stage.*`, bus delegations, and so on. Same JSONL shape, different axis:

```bash
npm run dev -- telegram timeline [--instance work]
npm run dev -- telegram timeline --type turn.completed --outcome error
npm run dev -- telegram timeline --chat 688567588 --limit 100
```

Think of it this way: audit answers "what action did we take", timeline answers "how did this turn go". `telegram service status` and `telegram dashboard` pull summaries from timeline.
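Because the timeline is plain JSONL, you can also roll it up outside the CLI. A sketch of the kind of summary `service status` derives, assuming each line carries `type` and `outcome` fields as the filter flags above suggest:

```python
import json
from collections import Counter
from pathlib import Path

def turn_outcomes(timeline_path: Path) -> Counter:
    """Count turn.completed events by outcome from a timeline JSONL file."""
    counts: Counter = Counter()
    for line in timeline_path.read_text().splitlines():
        if not line.strip():
            continue  # tolerate blank lines from rotation
        event = json.loads(line)
        if event.get("type") == "turn.completed":
            counts[event.get("outcome", "unknown")] += 1
    return counts
```

A rising `error` count here is usually the first visible symptom of an engine or auth problem, before the bot goes fully silent.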
```
# Windows: %USERPROFILE%\.cctb\<instance>\
# macOS/Linux: ~/.cctb/<instance>/
<instance>/
├── agent.md             # Bot personality & instructions
├── config.json          # Engine, YOLO mode, verbosity, bus
├── usage.json           # Token usage and cost tracking
├── workspace/           # Per-bot working directory
│   └── CLAUDE.md        # Claude Code project instructions (Claude only)
├── .env                 # Bot token
├── access.json          # Pairing + allowlist data
├── session.json         # Chat-to-thread bindings
├── file-workflow.json   # Pending file-upload follow-ups
├── runtime-state.json   # Watermarks, offsets
├── instance.lock.json   # Process lock
├── audit.log.jsonl      # Structured audit stream (rotates to .1, .2, ...)
├── timeline.log.jsonl   # Lifecycle events (turn.started, budget.*, crew.stage.*)
├── crew-runs/           # Coordinator-led crew run state (coordinator only)
│   └── <run-id>.json
├── service.stdout.log   # Service stdout
├── service.stderr.log   # Service stderr
└── inbox/               # Downloaded attachments
```
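Scripts that read this layout can resolve it the same way on every platform; a small sketch (helper names are illustrative, not part of the project):

```python
from pathlib import Path

def instance_dir(instance: str) -> Path:
    # Path.home() is %USERPROFILE% on Windows and $HOME on macOS/Linux,
    # matching the two roots shown in the tree above
    return Path.home() / ".cctb" / instance

def timeline_log(instance: str) -> Path:
    return instance_dir(instance) / "timeline.log.jsonl"
```

For example, `timeline_log("work")` points at the work instance's lifecycle stream regardless of OS.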
```bash
npm run dev -- <command>   # Development mode
npm test                   # Run tests
npm run test:watch         # Watch mode
npm run build              # Build for production
npm start                  # Start production build
```

```bash
# Build
docker build -t cc-telegram-bridge .

# Run (configure first, then start)
docker run -v ~/.cctb:/root/.cctb cc-telegram-bridge telegram configure <token>
docker run -v ~/.cctb:/root/.cctb cc-telegram-bridge telegram service start
```

Mount `~/.cctb` to persist state across container restarts.
Bot does not reply
- Run `telegram service doctor --instance <name>` to diagnose
- Check `telegram service logs` for errors
- Verify the engine is installed: `codex --version` or `claude --version`
- If the instance uses Claude, run `npm run smoke:claude-auth`
- If `service doctor` reports `legacy-launchd`, clean it with `bash scripts/cleanup-legacy-launchd.sh --all`
Claude works in Terminal but not in the bot
- Check shell auth first: `claude auth status`
- Run `npm run smoke:claude-auth`
- Run `telegram service doctor --instance <name>`
- If you recently changed `CLAUDE_CONFIG_DIR`, restart the instance from that same shell
- If `doctor` reports `legacy-launchd`, run `bash scripts/cleanup-legacy-launchd.sh --all`
More detail: docs/runtime-env-troubleshooting.md
Switching to Claude engine
- Set the engine: `telegram engine claude --instance <name>`
- Restart the service: `telegram service restart --instance <name>`
- Optionally add a `CLAUDE.md` in the workspace directory
Bot sends duplicate replies
A `409 Conflict` means two processes are polling the same bot token. The service auto-detects this and shuts down. Run `telegram service status` to check, then `telegram service stop` and `telegram service start` for a clean restart.
agent.md changes not taking effect
No restart needed: `agent.md` is loaded fresh on every message. Verify the path with `telegram instructions path --instance <name>`.
This project is already usable, but it is still evolving quickly. If you run several instances on one machine, a local supervisor agent can be a practical extra safety layer. This is optional, not required.
Use it for:

- checking instance health
- reading `service status` / `service doctor` / timeline before you touch anything
- restarting only the affected instance when something is clearly down
- reporting what happened instead of silently changing config
Do not use it as a second product agent. Its job should be operations only: monitor, diagnose, restart, and report.
You can give a local supervisor agent a brief like this:
You are the local operations supervisor for cc-telegram-bridge on this machine.
Your job is to keep bot instances healthy and easy to diagnose.
Primary responsibilities:
1. Check instance health
2. Diagnose failures before taking action
3. Restart only the affected instance when needed
4. Report conclusions, evidence, and actions clearly
Default operating rules:
- Assume one instance serves one chat unless the instance is explicitly configured for multi-chat.
- Do not change engine, model, yolo/approval mode, pairing, access, or multi-chat unless the user explicitly asks.
- Do not clear tasks unless the user explicitly asks, or the task is confirmed stale and the user already approved cleanup.
- Do not edit project code or README unless the user explicitly asks.
- Prefer the smallest recovery action. Do not restart all instances unless necessary.
Default diagnostic order:
1. Check service status
2. Check service doctor
3. Check recent timeline/audit evidence
4. Check stdout/stderr logs only if needed
5. Decide whether the issue is:
- process not running
- engine/runtime failure
- Telegram delivery failure
- stale task/workflow residue
- auth/config problem
6. Then decide whether a restart is justified
Preferred commands:
- `node dist/src/index.js telegram service status --instance <name>`
- `node dist/src/index.js telegram service doctor --instance <name>`
- `node dist/src/index.js telegram timeline --instance <name>`
- `bash scripts/start-instance.sh <name>`
- `bash scripts/stop-instance.sh <name>`
Response format:
- Conclusion
- Evidence
- Action taken or recommended
If you already use a local agent such as Hermes, that is a good fit for this role.
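The diagnostic order in the brief boils down to a small decision: pick the least invasive action the evidence justifies. A sketch of that logic, where the status and doctor result shapes are assumptions for illustration, not the CLI's actual output:

```python
def recovery_action(running: bool, doctor_warnings: list[str], recent_errors: int) -> str:
    """Choose the smallest recovery step, mirroring the diagnostic order:
    status first, then doctor, then timeline/audit evidence."""
    if not running:
        return "start instance"
    if "legacy-launchd" in doctor_warnings:
        return "run cleanup-legacy-launchd, then re-check"
    if any(w.startswith("shared-env") for w in doctor_warnings):
        return "restart instance from a shell with the correct env exported"
    if recent_errors > 0:
        return "inspect timeline/audit before restarting"
    return "report healthy; no action"
```

Note the default is "no action": a supervisor that restarts on every anomaly destroys the evidence the timeline and audit logs exist to preserve.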
Your agents. Your engines. Your rules.
