CC Telegram Bridge


Put the real Codex and Claude Code CLI on Telegram.
Not an API wrapper — the actual CLI, with native sessions, local files, and real tool use.
Resume desktop sessions from Telegram, or run isolated multi-bot teams through Agent Bus.

Runs the native CLI harness directly — Codex or Claude per instance, hot-reloaded instructions, voice/file input, local session resume, multi-bot Agent Bus, structured timeline/audit logs, service doctor, and dashboard included.
No reimplemented API wrappers, no fake chat layer.

Dual Engine  |  Multi-Bot  |  Agent Bus  |  Crew  |  Voice  |  Resume  |  Budget  |  Quick Start  |  Ops

RULE 1: Let your Claude Code or Codex CLI set this up for you. Clone the repo, open it in your terminal, and tell your AI agent: "read the README and configure a Telegram bot for me". It will handle the rest.

Recommended runtime: for hands-free Telegram instances you control, enable YOLO mode with telegram yolo on --instance <name>. With YOLO off, the bridge can instead ask for approval in Telegram: Claude approvals are per tool request; Codex approvals are per turn, because codex exec does not support mid-turn approval callbacks. Use unsafe only on a trusted machine and workspace.

What Changed Recently

  • v4.5.3 — recovers a stale Telegram update watermark from audit history on service startup, preventing old completed tasks from replaying after restart.
  • v4.5.2 — fixes Telegram update watermark ordering, so rapid follow-up messages cannot be skipped while an earlier turn is still finishing.
  • v4.5.1 — moves Telegram transport rules into each instance's agent.md, leaving only one short static Telegram reminder in the per-turn prompt. File delivery now prefers cctb send --file PATH / cctb send --image PATH.
  • v4.5.0 — simplifies file delivery around explicit send receipts and removes the old manifest/contract/count-repair/wakeup delivery state.
  • Earlier 4.x releases added the dual Codex/Claude process runtimes, Agent Bus, crew workflows, timeline/audit logs, service doctor, dashboard, and Delivery Protocol v2.

Upgrading from v4.5.0 or earlier: refresh generated instance instructions after updating so old bots get the short Telegram Transport block:

telegram instructions upgrade --all --dry-run
telegram instructions upgrade --all
telegram service restart --all

Use --force only for instances with a custom transport block you intentionally want to replace. Forced replacements create an agent.md.bak.<timestamp> backup next to the original file.


Dual Engine: Codex + Claude Code

Each bot instance can run either OpenAI Codex or Claude Code as its backend. Switch engines per-instance with one command:

# Set an instance to use Claude Code
npm run dev -- telegram engine claude --instance review-bot

# Set another to use Codex
npm run dev -- telegram engine codex --instance helper-bot

# Check current engine
npm run dev -- telegram engine --instance review-bot

| Feature | Codex Engine | Claude Engine |
| --- | --- | --- |
| CLI command | codex exec --json | claude -p --output-format json |
| Session resume | codex exec resume --json <id> | claude -p -r <session-id> |
| Project instructions | agent.md (prepended to prompt) | agent.md (via --system-prompt) + CLAUDE.md (auto-loaded from workspace) |
| Telegram approval when YOLO is off | Pre-approve the turn, then run that turn with --full-auto | Inline approval buttons for Claude permission prompts |
| YOLO mode | --full-auto / --dangerously-bypass-approvals-and-sandbox | --permission-mode bypassPermissions / --dangerously-skip-permissions |
| /compact | Not needed (each exec is stateless) | Compresses session context to reduce token usage |
| Working directory | workspace/ under instance dir | workspace/ under instance dir (with CLAUDE.md) |

Claude Engine: CLAUDE.md Support

When using the Claude engine, each instance gets a workspace/ directory. Drop a CLAUDE.md in there for project-level instructions that Claude Code reads natively:

~/.cctb/review-bot/
├── agent.md              ← "You are a strict code reviewer"
├── workspace/
│   └── CLAUDE.md         ← "TypeScript project. Use ESLint. Never modify tests."
├── config.json           ← { "engine": "claude", "approvalMode": "full-auto" }
└── .env

Two layers of instructions, no conflict:

  • agent.md → Your bot personality (injected via --system-prompt)
  • CLAUDE.md → Project rules (Claude auto-discovers from working directory)

Multi-Bot Setup

Run as many bots as you need. Each instance is fully isolated — its own engine, token, personality, threads, access rules, inbox, and audit trail. By default, each instance is meant for one Telegram chat; multi-chat access is opt-in.

          ┌─────────────────────────────────────────────┐
          │             cc-telegram-bridge              │
          └────────────┬──────────────┬─────────────────┘
                       │              │
        ┌──────────────┼──────────────┼──────────────┐
        ▼              ▼              ▼              ▼
 ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐
 │  "default" │ │   "work"   │ │ "reviewer" │ │ "research" │
 │  engine:   │ │  engine:   │ │  engine:   │ │  engine:   │
 │   codex    │ │   codex    │ │   claude   │ │   claude   │
 │            │ │            │ │            │ │            │
 │ agent.md:  │ │ agent.md:  │ │ agent.md:  │ │ agent.md:  │
 │ "General   │ │ "Reply in  │ │ "Strict    │ │ "Deep      │
 │  helper"   │ │  Chinese"  │ │  reviewer" │ │  research" │
 └────────────┘ └────────────┘ └────────────┘ └────────────┘
   PID 4821       PID 5102       PID 5340       PID 5520

Deploy in 30 Seconds

# Configure each instance
npm run dev -- telegram configure <token-A>
npm run dev -- telegram configure --instance work <token-B>
npm run dev -- telegram configure --instance reviewer <token-C>

# Set engines
npm run dev -- telegram engine claude --instance reviewer

# Set personalities
npm run dev -- telegram instructions set --instance reviewer ./reviewer-instructions.md

# Recommended: enable YOLO for Telegram/mobile use
npm run dev -- telegram yolo on --instance work

# Start them all
npm run dev -- telegram service start
npm run dev -- telegram service start --instance work
npm run dev -- telegram service start --instance reviewer

Agent Instructions

Each bot has its own agent.md. Hot-reloaded on every message — edit anytime, no restart needed.

npm run dev -- telegram instructions show --instance work
npm run dev -- telegram instructions set --instance work ./my-instructions.md
npm run dev -- telegram instructions path --instance work

Or edit directly:

# Windows
notepad %USERPROFILE%\.cctb\work\agent.md

# macOS
open -e ~/.cctb/work/agent.md
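Hot reload here simply means the file is re-read on every message rather than cached at startup. A minimal sketch of that idea (the function name and behavior are illustrative, not the bridge's actual code):

```typescript
import { existsSync, readFileSync } from "node:fs";

// No startup cache: each turn reads agent.md as it exists right now,
// so edits apply to the very next message without a restart.
function loadInstructions(agentMdPath: string): string {
  return existsSync(agentMdPath) ? readFileSync(agentMdPath, "utf8") : "";
}
```

The trade-off is one small file read per message, which is negligible next to a model turn.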

File Delivery From Agent Tasks

During each active Telegram turn, the bridge injects a stable cctb command into the engine process PATH. Agents should prefer it when they finish a generated file:

cctb send --image /absolute/path/to/image.png
cctb send --file /absolute/path/to/report.pdf
cctb send --message "Done" --file /absolute/path/to/report.pdf

Inside an active Telegram turn, cctb send uses the turn-scoped side-channel and preserves the current chat/session context. The same delivery path is also available through the repository CLI outside an active turn, where it falls back to the configured instance and active Telegram session:

telegram send --image /absolute/path/to/image.png
telegram send --file /absolute/path/to/report.pdf
telegram send --chat 123456789 --file /absolute/path/to/report.pdf
telegram send --instance bot2 --chat 123456789 --image /absolute/path/to/image.png

Current delivery rules:

  • Prefer cctb send for existing files, images, PDFs, decks, and other binary outputs during active Telegram turns.
  • Use telegram send when you need the same explicit delivery command outside an active turn, or when the turn-scoped cctb helper is unavailable.
  • Explicit send commands accept any readable absolute file path.
  • Use [send-file:/absolute/path] / [send-image:/absolute/path] only as fallback when explicit send commands are unavailable or fail.
  • Small text/code files can still use the file:name.ext fenced-block form.
  • The helper is scoped to one Telegram turn. It will not work after the turn finishes.
  • Plain [send-file:] fallback tags still validate that files live under the instance workspace or the active /resume project before sending.
  • Accepted and rejected file deliveries are recorded as turn-level receipts, so the bridge can decide completion from structured delivery evidence instead of text claims.
  • If a file was already sent by stream delivery or the side-channel helper, the final .telegram-out sweep skips that same real path to avoid duplicate Telegram attachments.
  • Request-scoped .telegram-out/<requestId>/ directories are runtime buffers and are pruned after 24 hours.
  • The bridge no longer keeps manifest, pending-contract, or count-based state to infer future delivery intent across ordinary chat turns.
  • Text-only tasks such as image analysis, image descriptions, or inline reports are not treated as file-delivery failures.

This works for the default Codex and Claude process runtimes. File delivery is explicit: generate the file, call the send command, and rely on the resulting receipt.
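The duplicate-suppression rule above can be pictured as a per-turn ledger keyed by resolved real paths (the class and method names here are hypothetical, not the bridge's internals):

```typescript
import { resolve } from "node:path";

// Hypothetical sketch: remember the resolved path of every file already
// delivered in this turn, and let the final .telegram-out sweep skip any
// path that matches — even if it was referenced via a different spelling.
class TurnDeliveryLedger {
  private sent = new Set<string>();

  recordSent(path: string): void {
    this.sent.add(resolve(path));
  }

  alreadySent(path: string): boolean {
    return this.sent.has(resolve(path));
  }
}
```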

When upgrading from v4.5.0 or earlier, refresh generated instance instructions with:

telegram instructions upgrade --all --dry-run
telegram instructions upgrade --all

This safely replaces old generated Telegram Transport blocks and appends the block when missing. Custom transport sections are left untouched unless you rerun with --force. Forced replacements create an agent.md.bak.<timestamp> backup next to the original file.


YOLO Mode

For hands-free Telegram use, telegram yolo on is recommended. It keeps Codex/Claude moving without asking on each turn. If you keep YOLO off, the bridge will use Telegram approval buttons where the CLI supports a headless path: Claude can approve individual permission prompts; Codex process mode asks once before the turn, then runs the approved turn with --full-auto. Keep unsafe for fully trusted local environments only.

Claude approval buttons use a short-lived localhost MCP bridge with a random URL token. This protects against blind local port scans, but the token is still visible to same-user local processes that can inspect process command lines. Treat YOLO-off approval as a single-user workstation convenience, not a multi-user isolation boundary.
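The random-URL-token idea can be illustrated with node:crypto; the bridge's actual token length and URL shape are not specified here, so treat this as an assumption for illustration:

```typescript
import { randomBytes } from "node:crypto";

// Hypothetical sketch: a 128-bit random token embedded in the approval URL
// makes the endpoint unguessable to blind local port scans, though the URL
// remains visible to same-user processes that can read command lines.
function approvalUrl(port: number): string {
  const token = randomBytes(16).toString("hex");
  return `http://127.0.0.1:${port}/approve/${token}`;
}
```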

npm run dev -- telegram yolo on --instance work      # Safe auto-approve
npm run dev -- telegram yolo unsafe --instance work   # Skip ALL checks
npm run dev -- telegram yolo off --instance work      # Normal flow
npm run dev -- telegram yolo --instance work          # Check status

| Mode | Codex | Claude | Use case |
| --- | --- | --- | --- |
| off | Telegram pre-turn approval | Telegram tool approval | Default, safest |
| on | --full-auto | --permission-mode bypassPermissions | Mobile use |
| unsafe | --dangerously-bypass-* | --dangerously-skip-permissions | Trusted env only |

Usage Tracking

Track token consumption and cost per instance:

npm run dev -- telegram usage                    # Default instance
npm run dev -- telegram usage --instance work    # Named instance

Output:

Instance: work
Requests: 42
Input tokens: 185,230
Output tokens: 12,450
Cached tokens: 96,000
Estimated cost: $0.3521
Last updated: 2026-04-09T10:00:00Z

Claude reports exact USD cost. Codex reports tokens only (cost shows as "unknown").


Turn Activity And Timeline

While a turn runs, the bridge sends Telegram typing actions and records structured events in timeline.log.jsonl / audit.log.jsonl. Long tool calls are not live-edited into the chat; inspect them with:

npm run dev -- telegram timeline --instance work
npm run dev -- telegram dashboard --instance work
npm run dev -- telegram service status --instance work

telegram verbosity is kept as a compatibility config knob, but the current Codex/Claude process runtimes use typing actions plus timeline/audit events rather than live-editing partial model output into Telegram.


Budget Control

Set a per-instance spending cap. When total cost reaches the limit, new requests are blocked until the budget is raised or cleared.

npm run dev -- telegram budget show --instance work     # Current spend vs limit
npm run dev -- telegram budget set 10 --instance work   # Cap at $10
npm run dev -- telegram budget clear --instance work    # Remove cap

Budget is enforced in real-time — the bot replies with a bilingual message when the limit is hit.
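The enforcement rule reduces to a simple gate checked before each request (a sketch; the names are illustrative):

```typescript
// Requests are allowed only while accumulated spend is below the cap;
// a null cap means budgeting is disabled for the instance.
function budgetAllows(spentUsd: number, capUsd: number | null): boolean {
  if (capUsd === null) return true;
  return spentUsd < capUsd;
}
```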


Voice Input (ASR)

Send voice messages in Telegram — the bridge transcribes them locally before forwarding the text to the AI engine. No cloud ASR service required.

How it works:

  1. User sends a voice message in Telegram
  2. The bridge downloads the .ogg file
  3. The bridge transcribes it via a local ASR service (HTTP first, CLI fallback)
  4. The transcript replaces the voice attachment as the user's text message
  5. The AI engine processes it as a normal text request

Setup with Qwen3-ASR (example):

# Clone and install the ASR model
git clone https://github.com/nicoboss/qwen3-asr-python
cd qwen3-asr-python
python -m venv venv
source venv/bin/activate
pip install -e .

# Download a model (0.6B is fast enough for voice messages)
huggingface-cli download Qwen/Qwen3-ASR-0.6B --local-dir models/Qwen3-ASR-0.6B

The bridge looks for the ASR service at two locations (in order):

| Method | Endpoint / Path | Latency | Notes |
| --- | --- | --- | --- |
| HTTP server | POST http://127.0.0.1:8412/transcribe | ~2-3s | Model stays in memory. Recommended. |
| CLI fallback | ~/projects/qwen3-asr/transcribe.py <file> | ~30s | Loads model each time. No server needed. |

Start the HTTP server (recommended):

python ~/projects/qwen3-asr/server.py
# Qwen3-ASR server listening on http://127.0.0.1:8412

Custom ASR integration:

To use a different ASR engine, modify the transcribeVoice() function in src/telegram/delivery.ts. The function receives the local path to an .ogg audio file and should return the transcribed text as a string.


Session Resume & Codex Thread Attach

Started a task locally with Claude Code? Continue it on Telegram — no copy-paste, no re-explaining context. Using Codex instead? Attach an existing thread by ID and keep going from Telegram.

Claude local session resume

/resume          ← Bot scans your local sessions from the past hour

The bot lists recent sessions with project names and timestamps:

Recent local sessions:
1. [cc-telegram-bridge] 64c2081c… (5m ago)
2. [my-app] a3f8b21e… (32m ago)

Reply /resume <number> to continue that session.

Pick one:

/resume 1        ← Bot symlinks the session, switches workspace, binds session ID

Now every message you send goes through the original session — same context, same project directory, same conversation history. When you're done:

/detach          ← Unbinds session, restores the pre-/resume conversation when one exists

How it works under the hood:

  1. Scans CLAUDE_CONFIG_DIR/projects/ when set, otherwise ~/.claude/projects/, for .jsonl files modified in the last hour
  2. Binds the session ID and overrides the workspace to point at your real project path
  3. Claude CLI resumes with -r <sessionId> in the original directory
  4. /detach returns to the pre-/resume conversation when one exists; otherwise it falls back to the default workspace without touching the original local session file

No pollution: bridge and instance instructions are passed per invocation and are not written back into local session files.
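The scan in step 1 can be sketched like this, using the directory layout described above (helper names are illustrative):

```typescript
import { readdirSync, statSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// Step 1's filter: a session file counts as "recent" if it is a .jsonl
// file modified within the last hour.
function isRecentSession(name: string, mtimeMs: number, now: number): boolean {
  return name.endsWith(".jsonl") && mtimeMs >= now - 60 * 60 * 1000;
}

// Walk CLAUDE_CONFIG_DIR/projects when set, else ~/.claude/projects,
// collecting recent session files per project directory.
function recentSessionFiles(now = Date.now()): string[] {
  const root = process.env.CLAUDE_CONFIG_DIR
    ? join(process.env.CLAUDE_CONFIG_DIR, "projects")
    : join(homedir(), ".claude", "projects");
  const hits: string[] = [];
  for (const project of readdirSync(root)) {
    const dir = join(root, project);
    if (!statSync(dir).isDirectory()) continue;
    for (const file of readdirSync(dir)) {
      const full = join(dir, file);
      if (isRecentSession(file, statSync(full).mtimeMs, now)) hits.push(full);
    }
  }
  return hits;
}
```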

Codex thread attach

Codex does not expose the same local session scan flow as Claude. If you already know the thread ID, attach it explicitly:

/resume thread thread_abc123

That binds the current Telegram chat to the existing Codex thread. From then on:

  • new Telegram messages continue that thread
  • /status shows the current thread ID
  • /detach unbinds the thread and restores the pre-attach conversation when one exists

This is an attach flow, not a local session import: the thread stays server-side and the bridge only binds the known thread ID to the current chat.

Note: the default Codex process runtime validates /resume thread <thread-id> against the local Codex session index. Thread IDs unknown to the local machine still fail closed instead of being guessed.


Instance Management

List, rename, or delete instances from the CLI. The service must be stopped before renaming or deleting.

npm run dev -- telegram instance list                          # Show all instances
npm run dev -- telegram instance rename old-name new-name      # Rename
npm run dev -- telegram instance delete staging --yes          # Delete (requires --yes)

Backup & Restore

Back up an instance's entire state directory to a single .cctb.gz archive. Restore atomically with rollback on failure.

npm run dev -- telegram backup --instance work                 # Creates timestamped .cctb.gz
npm run dev -- telegram backup --instance work --out ./bak.cctb.gz
npm run dev -- telegram restore ./bak.cctb.gz --instance work  # Restore (instance must not exist)
npm run dev -- telegram restore ./bak.cctb.gz --instance work --force  # Overwrite existing

The archive format is a pure-Node gzipped binary — no tar dependency, works on Windows/macOS/Linux identically.
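The actual .cctb.gz layout is not documented here; as an illustration of the "pure-Node, no tar" idea, this sketch packs files as a length-prefixed JSON manifest plus concatenated contents and gzips the result with node:zlib. The layout below is an assumption — treat real backups as opaque archives:

```typescript
import { gzipSync, gunzipSync } from "node:zlib";

type Entry = { path: string; data: Buffer };

// Layout (illustrative): [4-byte manifest length][JSON manifest][file bytes...]
function pack(entries: Entry[]): Buffer {
  const manifest = entries.map((e) => ({ path: e.path, size: e.data.length }));
  const header = Buffer.from(JSON.stringify(manifest), "utf8");
  const headerLen = Buffer.alloc(4);
  headerLen.writeUInt32BE(header.length, 0);
  return gzipSync(Buffer.concat([headerLen, header, ...entries.map((e) => e.data)]));
}

function unpack(archive: Buffer): Entry[] {
  const raw = gunzipSync(archive);
  const headerLen = raw.readUInt32BE(0);
  const manifest = JSON.parse(raw.subarray(4, 4 + headerLen).toString("utf8"));
  let offset = 4 + headerLen;
  return manifest.map((m: { path: string; size: number }) => {
    const data = Buffer.from(raw.subarray(offset, offset + m.size));
    offset += m.size;
    return { path: m.path, data };
  });
}
```

Everything here is in the Node standard library, which is why the same archive works identically on Windows, macOS, and Linux.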


Agent Bus

Enable bot-to-bot communication via local HTTP IPC. The bus now supports point delegation, fan-out, sequential chains, auto-review, and coordinator-led crew workflows. It handles routing, peer validation, loop prevention, and local auth.

Protocol v1 — every request and response is stamped with protocolVersion, declared capabilities, structured errorCode, and a retryable flag, so callers can tell transient failures (timeouts, unreachable peers) from terminal ones (disabled bus, peer not allowed). Legacy unversioned payloads are still accepted for rolling upgrades. Peer liveness is verified by probing GET /api/health and matching a cc-telegram-bridge fingerprint, so a reused local port cannot fake a live peer. Full spec: docs/bus-protocol.md.
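A rough TypeScript sketch of what a v1 envelope could look like, based only on the fields named above (protocolVersion, capabilities, errorCode, retryable); the other field names are illustrative, and docs/bus-protocol.md remains the authoritative spec:

```typescript
interface BusEnvelope {
  protocolVersion: 1;
  capabilities: string[];   // e.g. ["ask", "fan", "chain", "verify"]
  ok: boolean;              // illustrative success flag
  errorCode?: string;       // structured error, e.g. "PEER_UNREACHABLE"
  retryable?: boolean;      // transient (retry) vs terminal (give up)
  payload?: unknown;
}

// Callers branch on the retryable flag instead of parsing error text.
function shouldRetry(e: Pick<BusEnvelope, "ok" | "retryable">): boolean {
  return !e.ok && e.retryable === true;
}
```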

Enable

Add bus to each instance's config.json:

{ "engine": "codex", "bus": { "peers": "*" } }

| Field | Description |
| --- | --- |
| peers | "*" = talk to all bus-enabled bots. ["a", "b"] = specific bots only. Omit or false = isolated. |
| maxDepth | Max delegation hops (default 3). Prevents A→B→C→A loops. |
| port | Local HTTP port. 0 = auto-assign (default). |
| secret | Shared secret for Bearer token authentication (optional). |
| parallel | List of instances for /fan parallel queries (e.g. ["sec-bot", "perf-bot"]). |
| chain | Ordered list of instances for /chain sequential handoff (e.g. ["reviewer", "writer"]). |
| verifier | Instance name for /verify auto-verification (e.g. "reviewer"). |
| crew | Fixed coordinator workflow config for hub-and-spoke specialist orchestration. |
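The maxDepth loop-prevention rule can be sketched as a hop check, assuming each delegated request carries the chain of instance names it has already visited (the field names are illustrative, not the bridge's actual wire format):

```typescript
// Refuse a delegation that would revisit a peer (A→B→C→A) or exceed
// the configured hop budget (default 3, per the table above).
function mayDelegate(hops: string[], next: string, maxDepth = 3): boolean {
  if (hops.includes(next)) return false;   // loop detected
  if (hops.length >= maxDepth) return false; // hop budget exhausted
  return true;
}
```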

Both sides must allow each other — unilateral bus config is rejected.

Usage

In any bot's Telegram chat:

/ask reviewer Please review this function for security issues
/fan Analyze this code for bugs, security issues, and performance
/chain Improve this answer step by step
/verify Write a function to sort an array
  • /ask <instance> <prompt> — delegate to a specific bot, result inline
  • /fan <prompt> — query current bot + all parallel bots simultaneously, combined results
  • /chain <prompt> — run a configured sequential pipeline, each stage receiving the previous stage output explicitly
  • /verify <prompt> — execute on current bot, then auto-send to verifier for review

/chain is the lightweight pipeline. crew is the heavier hub-and-spoke mode.

Topology Patterns

Hub & Spoke — one commander, multiple workers:

              ┌──────────┐
              │  main    │
              │ peers: * │
              └──┬────┬──┘
                 │    │
         ┌───────┘    └───────┐
         ▼                    ▼
   ┌──────────┐        ┌────────────┐
   │ reviewer │        │ researcher │
   │ peers:   │        │ peers:     │
   │ ["main"] │        │ ["main"]   │
   └──────────┘        └────────────┘

Workers only talk to the hub. The hub dispatches and aggregates.

Pipeline — sequential handoff:

┌──────────┐     ┌───────────┐     ┌──────────┐
│  intake  │────▶│   coder   │────▶│  review  │
│ peers:   │     │ peers:    │     │ peers:   │
│["coder"] │     │["intake", │     │["coder"] │
└──────────┘     │ "review"] │     └──────────┘
                 └───────────┘

Each bot only knows its neighbors. Tasks flow left to right.

Parallel — fan-out to multiple specialists:

                    /fan "analyze this code"
                           │
            ┌──────────────┼──────────────┐
            ▼              ▼              ▼
      ┌──────────┐  ┌──────────┐  ┌──────────┐
      │ sec-bot  │  │ perf-bot │  │ style-bot│
      └──────────┘  └──────────┘  └──────────┘
            │              │              │
            └──────────────┼──────────────┘
                           ▼
                   Combined result

{ "bus": { "peers": "*", "parallel": ["sec-bot", "perf-bot", "style-bot"] } }

Verification — execute then auto-review:

/verify "write a sort function"
         │
         ▼
   ┌──────────┐    result    ┌──────────┐
   │  coder   │ ───────────▶ │ reviewer │
   └──────────┘              └──────────┘
                                  │
                             verification
                                  │
                                  ▼
                        Both shown to user

{ "bus": { "peers": "*", "verifier": "reviewer" } }

Crew Workflows (Hub and Spoke)

For heavier multi-agent work, one instance can act as a dedicated coordinator while fixed specialist instances do focused work. This follows the article-style hub-and-spoke pattern:

  • the user talks directly to the coordinator bot
  • specialists never talk to each other directly
  • all context is passed explicitly by the coordinator
  • the coordinator keeps the run state, stage progress, and final assembly

Current built-in workflow is research-report:

coordinator -> researcher -> analyst -> writer -> reviewer

If the reviewer asks for changes, the coordinator can send the draft back to the writer for one or more revision rounds.

Example config on the coordinator instance:

{
  "bus": {
    "peers": ["researcher", "analyst", "writer", "reviewer"],
    "crew": {
      "enabled": true,
      "workflow": "research-report",
      "coordinator": "coordinator",
      "roles": {
        "researcher": "researcher",
        "analyst": "analyst",
        "writer": "writer",
        "reviewer": "reviewer"
      },
      "maxResearchQuestions": 4,
      "maxRevisionRounds": 2
    }
  }
}

Behavior notes:

  • only the coordinator instance should have this crew block
  • the five roles must all be distinct
  • ordinary text messages sent to the coordinator bot will run the crew workflow automatically
  • crew runs are persisted under crew-runs/*.json
  • stage progress is also written to timeline.log.jsonl

Mesh — full interconnect:

// Every instance
{ "bus": { "peers": "*" } }

All bots can talk to all bots. Simplest config, best for small teams (3-5 bots).


Quick Start

TL;DR — You only need to do two things on your phone: get a bot token from BotFather and send the pairing code. Everything else happens on your computer via Claude Code or Codex CLI.

Prerequisites

  • Node.js >= 20
  • OpenAI Codex CLI and/or Claude Code CLI installed and authenticated
  • A Telegram account (phone)

Step 1: Create a Telegram Bot (on your phone)

  1. Open Telegram and search for @BotFather
  2. Send /newbot
  3. Follow the prompts — give your bot a name and username
  4. BotFather will reply with a bot token like 123456789:ABCdefGHIjklMNOpqrsTUVwxyz0123456789
  5. Copy this token — you'll paste it in your terminal

Step 2: Install & Configure (on your computer)

Open your terminal with Claude Code or Codex, and tell it:

"Clone https://github.com/cloveric/cc-telegram-bridge and set up a Telegram bot with this token: <paste your token>"

Or do it manually:

git clone https://github.com/cloveric/cc-telegram-bridge.git
cd cc-telegram-bridge
npm install
npm run build

# Configure with your bot token
npm run dev -- telegram configure <your-bot-token>

# Optional: switch to Claude engine (default is Codex)
npm run dev -- telegram engine claude

# Recommended: enable YOLO mode for hands-free Telegram operation
npm run dev -- telegram yolo on

# Start the service
npm run dev -- telegram service start

Step 3: Pair Your Phone (on your phone)

  1. Open Telegram and find your new bot (search its username)
  2. Send any message — the bot will reply with a 6-character pairing code like 38J63T
  3. Go back to your terminal and run:
npm run dev -- telegram access pair 38J63T

Done! You can now chat with Codex or Claude from Telegram. Send text, voice messages, or files — the bot handles everything.

Multiple Bots

# Create a second bot with BotFather, then:
npm run dev -- telegram configure --instance work <second-token>
npm run dev -- telegram engine claude --instance work
npm run dev -- telegram yolo on --instance work
npm run dev -- telegram service start --instance work
# Pair the same way: send a message, get the code, run `telegram access pair <code> --instance work`

Architecture

┌─────────────────────────────────────────────────────────────────────┐
│                         cc-telegram-bridge                          │
├─────────────┬──────────────┬──────────────────┬─────────────────────┤
│  Telegram   │   Runtime    │     AI Engine    │      State          │
│  Layer      │   Layer      │     Layer        │      Layer          │
├─────────────┼──────────────┼──────────────────┼─────────────────────┤
│ api.ts      │ bridge.ts    │ adapter.ts       │ access-store.ts     │
│ delivery.ts │ chat-queue.ts│ process-adapter  │ session-store.ts    │
│ update-     │ session-     │   .ts (Codex)    │ runtime-state.ts    │
│ normalizer  │ manager.ts   │ claude-adapter   │ instance-lock.ts    │
│   .ts       │              │   .ts (Claude)   │ json-store.ts       │
│ message-    │              │                  │ audit-log.ts        │
│ renderer.ts │              │ agent.md + config│ timeline-log.ts     │
│             │              │                  │ usage-store.ts      │
│             │              │                  │ crew-run-store.ts   │
└─────────────┴──────────────┴──────────────────┴─────────────────────┘

┌─────────────────────────────────────────────────────────────────────┐
│  Bus Layer  (local HTTP, loopback, protocol v1)                     │
├─────────────────────────────────────────────────────────────────────┤
│  bus-server.ts  · bus-client.ts  · bus-handler.ts                   │
│  bus-protocol.ts (envelope, errors, zod)  · bus-registry.ts         │
│  bus-config.ts  · delegation-commands.ts  · crew-workflow.ts        │
└─────────────────────────────────────────────────────────────────────┘

Data flow:

Telegram Update → Normalize → Access Check → Chat Queue (serialized)
    → Load config.json (engine) → Load agent.md → Session Lookup
    → Codex Exec or Claude -p (new or resume)
    → Typing action + timeline events → Final Render → Deliver → Audit
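The "Chat Queue (serialized)" step above can be sketched as one promise tail per chat, so turns in the same chat run strictly in order while different chats proceed independently (class and method names are illustrative):

```typescript
class ChatQueue {
  private tails = new Map<number, Promise<void>>();

  enqueue(chatId: number, task: () => Promise<void>): Promise<void> {
    const tail = this.tails.get(chatId) ?? Promise.resolve();
    // Chain the new turn after the previous one, running it even if the
    // previous turn rejected; keep the stored tail from propagating errors.
    const run = tail.then(task, task);
    this.tails.set(chatId, run.catch(() => {}));
    return run;
  }
}
```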

Highlights

Dual Engine

Switch between Codex and Claude Code per instance. Mix and match — one bot on Codex, another on Claude, managed from one CLI.

Per-Bot Personality

Each instance loads its own agent.md on every message. Claude instances also get CLAUDE.md project rules.

Multi-Bot Support

Run multiple Telegram bots from one repo. Each instance has its own token, engine, workspace, access rules, session binding, audit trail, and service lifecycle.

Agent Bus

Local bot-to-bot calls enable delegation, fan-out, chains, verification, and coordinator-led crew workflows without mixing each bot's Telegram chat context.

YOLO Mode

One command to auto-approve everything — works with both engines. Per-instance, hot-reloadable.

Per-Bot Isolation

Every instance has its own personality, workspace, sessions, access rules, inbox, audit trail, and workspace-keyed auto-memory. The engine config dir (~/.claude/ / ~/.codex/) is shared with your main CLI so OAuth refresh tokens don't race across instances — the trade-off is that settings, plugins, and MCP state live in your real home, and full-auto / bypass mode can touch it.

Session Resume

/resume picks up existing Claude Code local sessions, and /resume thread <thread-id> attaches Codex threads, so you can continue desktop work from Telegram without losing context.

Runtime Visibility

Telegram shows typing while a turn runs, and structured timeline/audit events record sessions, tool calls, file receipts, retries, and completion status for debugging.

Production Resilience

Long polling (~0ms latency), exponential backoff, 429 auto-retry, 409 conflict auto-shutdown, graceful SIGTERM/SIGINT, fault-tolerant batch processing.

Safe Detach

/detach returns to the pre-resume conversation when possible. Bridge instructions are injected per turn and are not written back into your local Claude or Codex session files.

Usage Tracking

Per-instance token counts (input/output/cached) and USD cost. telegram usage to check spend anytime.

Timeline & Dashboard

telegram timeline, telegram service status, and telegram dashboard expose current turn state, recent failures, file receipts, and crew snapshots.

Budget Control

Set a per-instance cost cap. Requests are blocked when the limit is hit — with bilingual messages.

File Delivery

Generated images, PDFs, decks, and reports are delivered through cctb send during active turns, telegram send outside turns, or [send-file:] / [send-image:] as a fallback.

Backup & Restore

One command to archive or restore an instance. Zero-dependency binary format, cross-platform, with atomic rollback.

Instance Management

List, rename, and delete instances from the CLI. Running-instance guards prevent data corruption.

Voice Input

Send voice messages — transcribed locally via pluggable ASR (e.g. Qwen3-ASR). HTTP server for fast inference, CLI fallback when offline.

Full Audit Trail

Every action recorded per-instance in append-only JSONL — filterable by type, chat, and outcome. Auto-rotated at 10MB.

Docker Ready

Multi-stage Dockerfile included. Build once, deploy anywhere.

Structured Bus Protocol

Local bot-to-bot calls speak a versioned v1 protocol — protocolVersion, capabilities, structured errorCode, and a retryable flag so callers can tell transient failures from terminal ones. Peer liveness is a real /api/health probe, not just a PID check. See docs/bus-protocol.md.


Service Operations

| Command | Description |
| --- | --- |
| telegram service start | Acquire lock, load state, begin long-polling |
| telegram service stop | Graceful shutdown (SIGTERM/SIGINT) |
| telegram service status | Running state, PID, engine, bot identity, timeline summary, latest crew run |
| telegram service restart | Stop + start with clean consumer reset |
| telegram service logs | Tail stdout/stderr logs |
| telegram service doctor | Health check across all subsystems, including timeline, crew state, shared engine env, and stale launchd leftovers |
| telegram engine [codex\|claude] | Switch AI engine per instance |
| telegram yolo [on\|off\|unsafe] | Toggle auto-approval mode |
| telegram usage | Show token usage and estimated cost |
| telegram verbosity [0\|1\|2] | Store the legacy verbosity setting; current process runtimes use typing actions plus timeline/audit events |
| telegram budget [show\|set\|clear] | Per-instance cost cap (blocks requests when exceeded) |
| telegram timeline | Inspect structured lifecycle events with filters |
| telegram instance [list\|rename\|delete] | Manage instances from the CLI |
| telegram backup [--instance <name>] | Archive instance state to .cctb.gz |
| telegram restore <archive> | Restore instance from backup (with --force to overwrite) |
| telegram logs rotate | Manually trigger log rotation |
| telegram dashboard | Generate and open an HTML status dashboard with timeline and latest crew snapshot |
| telegram help | Show all available commands |

All commands accept --instance <name> to target a specific bot.

Stable Beta Commands

  • telegram service doctor --instance <name>
  • telegram session list --instance <name>
  • telegram session inspect --instance <name> <chat-id>
  • telegram session reset --instance <name> <chat-id>
  • telegram task list --instance <name>
  • telegram task inspect --instance <name> <upload-id>
  • telegram task clear --instance <name> <upload-id>

Telegram users can also use:

  • /status
  • /engine [claude|codex] — switch engine for the current instance (the bridge resets stale bindings automatically)
  • /effort [low|medium|high|xhigh|max|off] — set reasoning effort level (max is Claude-only; Codex uses xhigh instead)
  • /model [name|off] — switch model
  • /btw <question> — ask a side question without affecting the current session
  • /ask <instance> <prompt> — delegate to a specific peer bot
  • /fan <prompt> — query current bot plus configured parallel bots
  • /chain <prompt> — run the configured sequential bot chain
  • /verify <prompt> — execute locally, then auto-review with the verifier bot
  • /resume — Claude: scan local sessions; Codex: use /resume thread <thread-id> to attach an existing thread
  • /detach — detach from resumed Claude session or current Codex thread; restore the pre-resume conversation when one exists
  • /stop — immediately stop the current running task
  • /continue — resume the latest waiting archive summary
  • /compact (Claude only — compresses context; Codex falls back to reset)
  • /context (Claude only) — show current context fill level; use it to decide when to /compact
  • /ultrareview (Claude Opus 4.7+ only) — dedicated code-review pass, typically paired with /resume into a local project
  • /reset
  • /help

For archive summaries, the intended continuation path is to reply to that summary or press its Continue Analysis button; bare /continue only resumes the latest waiting archive.

Recovery behavior on unreadable state:

  • telegram service status and telegram service doctor degrade to unknown (...) warnings instead of crashing when session.json, file-workflow.json, timeline.log.jsonl, or crew-runs/ state is unreadable.
  • telegram session inspect and telegram task inspect report unreadable state and stop instead of pretending the record is missing.
  • telegram session reset, telegram task clear, and Telegram /reset only self-heal corruption/schema-invalid state. Before writing a default empty file, the unreadable original is quarantined as a backup beside the state file.
  • Telegram /status shows unknown (...) for session/task state when the backing JSON is unreadable.
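
The quarantine-then-default pattern above can be sketched as follows. The file name session.json is real, but the recovery code itself is an illustrative stand-in for the bridge's implementation, and python3 here merely stands in for its JSON parser:

```shell
# Simulate a corrupt state file, then self-heal it the way the bridge does:
# quarantine the unreadable original beside the state file, then write a default.
echo 'not-json{' > session.json                      # corrupt state
if ! python3 -c 'import json; json.load(open("session.json"))' 2>/dev/null; then
  mv session.json session.json.corrupt.bak           # quarantine the original
  echo '{}' > session.json                           # write a default empty file
fi
cat session.json
```

The point of the quarantine step is that self-healing never destroys evidence: the unreadable original survives as a backup for later inspection.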

Shell Helpers

Windows (PowerShell):

.\scripts\start-instance.ps1 [-Instance work]
.\scripts\status-instance.ps1 [-Instance work]
.\scripts\stop-instance.ps1 [-Instance work]

macOS / Linux (bash):

./scripts/start-instance.sh [work]
./scripts/status-instance.sh [work]
./scripts/stop-instance.sh [work]

Legacy cleanup after older autostart builds:

bash scripts/cleanup-legacy-launchd.sh --all

Claude auth smoke test:

npm run smoke:claude-auth

Shared engine env rule:

  • CLAUDE_CONFIG_DIR and CODEX_HOME are only forwarded when you explicitly export them.
  • If you change either one, restart the affected instance from that same shell.
  • telegram service doctor now flags both shared-env mismatches and stale launchd plists.
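
For example — the path below is illustrative, not a default — export the variable in the shell you restart from, so the restarted service inherits it:

```shell
# CODEX_HOME (like CLAUDE_CONFIG_DIR) is only forwarded when exported explicitly.
export CODEX_HOME="$HOME/.codex-alt"    # illustrative path, not a project default
echo "CODEX_HOME=$CODEX_HOME"
# Then restart the affected instance from this same shell, e.g.:
# telegram service restart --instance work
```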

Access Control

Per-instance, two layers: pairing + allowlist.

Default behavior is intentionally conservative:

  • One instance is locked to one Telegram chat by default
  • A second chat will not be paired or allowlisted unless you explicitly enable multi-chat
  • This keeps /resume, workspace overrides, local files, and session state from bleeding across chats by accident

Manage pairing and the allowlist from the CLI:

npm run dev -- telegram access pair <code>
npm run dev -- telegram access policy allowlist
npm run dev -- telegram access allow <chat-id>
npm run dev -- telegram access revoke <chat-id>
npm run dev -- telegram access multi on
npm run dev -- telegram access multi off
npm run dev -- telegram status [--instance work]

Use telegram access multi on --instance <name> only when you really want one bot instance to serve multiple chats. New and legacy instances both default to off unless you explicitly change it.


Audit Trail

Per-instance append-only JSONL log with filterable queries:

npm run dev -- telegram audit [--instance work]
npm run dev -- telegram audit 50                                    # Last 50 entries
npm run dev -- telegram audit --type update.handle --outcome error  # Filter by type/outcome
npm run dev -- telegram audit --chat 688567588                      # Filter by chat

audit.log.jsonl records what the bridge did — update.handle, bus.reply, budget.blocked — one line per external action, rotated at 10MB.
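
Because each entry is one JSON object per line, plain text tools work on the log directly. A minimal sketch — the sample lines below are invented for illustration, not real bridge output:

```shell
# Build a small sample in the audit JSONL shape, then count error outcomes.
printf '%s\n' \
  '{"type":"update.handle","outcome":"ok"}' \
  '{"type":"update.handle","outcome":"error"}' \
  '{"type":"bus.reply","outcome":"ok"}' > audit.sample.jsonl
grep -c '"outcome":"error"' audit.sample.jsonl   # prints 1
```

The built-in telegram audit filters do the same job with proper JSON parsing; this is only useful for quick offline inspection of a copied log.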

Timeline

Parallel to audit, the bridge emits a lifecycle stream (timeline.log.jsonl) describing the shape of each turn — turn.started, turn.completed, budget.threshold_reached, crew.stage.*, bus delegations, etc. Same JSONL shape, different axis:

npm run dev -- telegram timeline [--instance work]
npm run dev -- telegram timeline --type turn.completed --outcome error
npm run dev -- telegram timeline --chat 688567588 --limit 100

Think of it this way: audit answers "what action did we take", timeline answers "how did this turn go". telegram service status and telegram dashboard pull summaries from timeline.
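
A rough offline equivalent of a timeline summary — counting events per type with standard tools — can be sketched like this (sample lines invented, not real bridge output):

```shell
# Build a small sample in the timeline JSONL shape, then tally events by type.
printf '%s\n' \
  '{"type":"turn.started","chat":1}' \
  '{"type":"turn.completed","chat":1}' \
  '{"type":"turn.started","chat":2}' > timeline.sample.jsonl
sed 's/.*"type":"\([^"]*\)".*/\1/' timeline.sample.jsonl | sort | uniq -c
```

telegram service status and telegram dashboard compute richer summaries than this, but the underlying data is just as greppable.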


State Layout

# Windows: %USERPROFILE%\.cctb\<instance>\
# macOS/Linux: ~/.cctb/<instance>/

<instance>/
├── agent.md                # Bot personality & instructions
├── config.json             # Engine, YOLO mode, verbosity, bus
├── usage.json              # Token usage and cost tracking
├── workspace/              # Per-bot working directory
│   └── CLAUDE.md           # Claude Code project instructions (Claude only)
├── .env                    # Bot token
├── access.json             # Pairing + allowlist data
├── session.json            # Chat-to-thread bindings
├── file-workflow.json      # Pending file-upload follow-ups
├── runtime-state.json      # Watermarks, offsets
├── instance.lock.json      # Process lock
├── audit.log.jsonl         # Structured audit stream (rotates to .1, .2, ...)
├── timeline.log.jsonl      # Lifecycle events (turn.started, budget.*, crew.stage.*)
├── crew-runs/              # Coordinator-led crew run state (coordinator only)
│   └── <run-id>.json
├── service.stdout.log      # Service stdout
├── service.stderr.log      # Service stderr
└── inbox/                  # Downloaded attachments

Development

npm run dev -- <command>     # Development mode
npm test                     # Run tests
npm run test:watch           # Watch mode
npm run build                # Build for production
npm start                    # Start production build

Docker

# Build
docker build -t cc-telegram-bridge .

# Run (configure first, then start)
docker run -v ~/.cctb:/root/.cctb cc-telegram-bridge telegram configure <token>
docker run -v ~/.cctb:/root/.cctb cc-telegram-bridge telegram service start

Mount ~/.cctb to persist state across container restarts.


Troubleshooting

Bot does not reply
  1. Run telegram service doctor --instance <name> to diagnose
  2. Check telegram service logs for errors
  3. Verify the engine is installed: codex --version or claude --version
  4. If the instance uses Claude, run npm run smoke:claude-auth
  5. If service doctor reports legacy-launchd, clean it with bash scripts/cleanup-legacy-launchd.sh --all

Claude works in Terminal but not in the bot
  1. Check shell auth first: claude auth status
  2. Run npm run smoke:claude-auth
  3. Run telegram service doctor --instance <name>
  4. If you recently changed CLAUDE_CONFIG_DIR, restart the instance from that same shell
  5. If doctor reports legacy-launchd, run bash scripts/cleanup-legacy-launchd.sh --all

More detail: docs/runtime-env-troubleshooting.md

Switching to Claude engine
  1. telegram engine claude --instance <name>
  2. Restart the service: telegram service restart --instance <name>
  3. Optionally add a CLAUDE.md in the workspace directory

Bot sends duplicate replies

A 409 Conflict means two processes are polling the same bot token. The service detects this automatically and shuts down. Run telegram service status to check, then telegram service stop followed by telegram service start for a clean restart.

agent.md changes not taking effect

No restart needed — agent.md is reloaded on every message. Verify the path with telegram instructions path --instance <name>.


Optional: Run a Local Supervisor Agent

This project is already usable, but it is still evolving quickly. If you run several instances on one machine, a local supervisor agent can be a practical extra safety layer. This is optional, not required.

Use it for:

  • checking instance health
  • reading service status / service doctor / timeline before you touch anything
  • restarting only the affected instance when something is clearly down
  • reporting what happened instead of silently changing config

Do not use it as a second product agent. Its job should be operations only: monitor, diagnose, restart, and report.

Suggested Brief

You can give a local supervisor agent a brief like this:

You are the local operations supervisor for cc-telegram-bridge on this machine.

Your job is to keep bot instances healthy and easy to diagnose.

Primary responsibilities:
1. Check instance health
2. Diagnose failures before taking action
3. Restart only the affected instance when needed
4. Report conclusions, evidence, and actions clearly

Default operating rules:
- Assume one instance serves one chat unless the instance is explicitly configured for multi-chat.
- Do not change engine, model, yolo/approval mode, pairing, access, or multi-chat unless the user explicitly asks.
- Do not clear tasks unless the user explicitly asks, or the task is confirmed stale and the user already approved cleanup.
- Do not edit project code or README unless the user explicitly asks.
- Prefer the smallest recovery action. Do not restart all instances unless necessary.

Default diagnostic order:
1. Check service status
2. Check service doctor
3. Check recent timeline/audit evidence
4. Check stdout/stderr logs only if needed
5. Decide whether the issue is:
   - process not running
   - engine/runtime failure
   - Telegram delivery failure
   - stale task/workflow residue
   - auth/config problem
6. Then decide whether a restart is justified

Preferred commands:
- `node dist/src/index.js telegram service status --instance <name>`
- `node dist/src/index.js telegram service doctor --instance <name>`
- `node dist/src/index.js telegram timeline --instance <name>`
- `bash scripts/start-instance.sh <name>`
- `bash scripts/stop-instance.sh <name>`

Response format:
- Conclusion
- Evidence
- Action taken or recommended

If you already use a local agent such as Hermes, that is a good fit for this role.


License

MIT


Your agents. Your engines. Your rules.
