feat: streaming drafts, rich formatting, follow-up interrupts, and UX improvements#152

Open
Fr4nzz wants to merge 2 commits into RichardAtCT:main from Fr4nzz:feat/all-improvements

Conversation


@Fr4nzz Fr4nzz commented Mar 15, 2026

Summary

  • Streaming draft responses via sendMessageDraft — real-time token streaming in private chats with fallback to editMessageText for groups. Draft cleared before final response to prevent stale bubbles; draft text reset after each 💬 message to avoid accumulation
  • Rich HTML formatting — parse Claude's markdown into Telegram HTML (bold, italic, code, links) with smart message splitting
  • Follow-up message interrupts — send a new message while Claude is working to redirect, like vanilla Claude Code
  • Per-event verbose messages — tool calls, thinking, and assistant text shown as individual messages with configurable verbosity (0/1/2)
  • Image sending via MCP — intercept send_file_to_user tool calls, validate paths, send as Telegram photos
  • SDK integration — use setting_sources=["project"] to avoid plugin MCP conflicts, support SSE-based MCP servers
  • Keep progress messages visible after response with elapsed time
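The streaming behavior described above can be sketched as follows — a minimal illustration only, not the PR's actual `draft_streamer.py`. It assumes an injectable `send` callable that wraps `sendMessageDraft` for private chats (or `editMessageText` as the group fallback); the class and parameter names are hypothetical:

```python
import time

class DraftStreamer:
    """Accumulates streamed tokens and periodically re-sends the full
    accumulated text via a send callable (assumed to wrap sendMessageDraft
    or the editMessageText fallback). Throttled, since Telegram rate-limits
    message edits."""

    def __init__(self, send, min_interval: float = 0.5, now=time.monotonic):
        self._send = send              # callable(text) -> None
        self._min_interval = min_interval
        self._now = now                # injectable clock, eases testing
        self._buf: list[str] = []
        self._last_flush = 0.0

    def feed(self, token: str) -> None:
        self._buf.append(token)
        if self._now() - self._last_flush >= self._min_interval:
            self.flush()

    def flush(self) -> None:
        # Drafts show cumulative text, so each flush re-sends the whole buffer.
        if self._buf:
            self._send("".join(self._buf))
            self._last_flush = self._now()

    def reset(self) -> None:
        # Maps to the bullet above: draft text is reset after each 💬
        # message so it does not accumulate across responses.
        self._buf.clear()
```

`reset()` is what prevents the accumulation the summary mentions; the throttle interval is a guess, not a value from the PR.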

Test plan

  • Send a text message and verify streaming draft appears in real-time
  • Send a follow-up message while Claude is working — verify it interrupts
  • Test /verbose 0, /verbose 1, /verbose 2 — verify output levels
  • Test image sending via Playwright screenshot + MCP send_file_to_user
  • Verify draft bubble clears before final response appears
  • Test in group chat (editMessageText fallback)
  • Run pytest tests/unit/ -o "addopts=" — all tests pass

🤖 Generated with Claude Code

Fr4nzz and others added 2 commits March 15, 2026 21:01
… improvements

Major improvements to the Telegram bot UX:

**Streaming draft responses**
- Real-time token streaming via sendMessageDraft (private chats)
- Falls back to editMessageText for group chats
- Draft cleared before final response to prevent stale bubbles
- Draft text reset after each 💬 message to avoid accumulation

**Rich HTML formatting**
- Parse Claude's markdown into Telegram HTML (bold, italic, code, links)
- Smart message splitting for long responses
- Proper escaping of HTML entities
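A minimal sketch of the conversion and splitting described above — illustrative only, not the PR's `html_format.py`. It escapes entities first so user text cannot inject tags, handles a few inline patterns without nesting, and splits on newline boundaries below Telegram's 4096-character limit:

```python
import html
import re

TG_LIMIT = 4096  # Telegram's per-message character limit

def md_to_telegram_html(text: str) -> str:
    """Convert a small markdown subset to Telegram HTML (b/i/code/a).
    Entities are escaped before any tags are inserted."""
    out = html.escape(text, quote=False)
    out = re.sub(r"`([^`]+)`", r"<code>\1</code>", out)
    out = re.sub(r"\*\*([^*]+)\*\*", r"<b>\1</b>", out)
    out = re.sub(r"(?<!\*)\*([^*]+)\*(?!\*)", r"<i>\1</i>", out)
    out = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r'<a href="\2">\1</a>', out)
    return out

def split_message(text: str, limit: int = TG_LIMIT) -> list[str]:
    """Naive 'smart split': prefer the last newline under the limit.
    (A real splitter must also avoid cutting inside an HTML tag pair.)"""
    parts = []
    while len(text) > limit:
        cut = text.rfind("\n", 0, limit)
        if cut <= 0:
            cut = limit
        parts.append(text[:cut])
        text = text[cut:].lstrip("\n")
    parts.append(text)
    return parts
```

Nested formatting (e.g. bold inside a link) is exactly the kind of edge case the review's test-gap section flags for `html_format.py`.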

**Follow-up message interrupts**
- Send a new message while Claude is working to interrupt and redirect
- Like vanilla Claude Code's behavior

**Per-event verbose messages**
- Tool calls, thinking, and assistant text shown as individual messages
- Configurable verbosity (0=quiet, 1=normal, 2=detailed)
- Progress timer shows elapsed seconds

**Image sending via MCP**
- Intercept send_file_to_user MCP tool calls
- Validate image paths (approved directory + /tmp for Playwright screenshots)
- Send as Telegram photos with optional captions
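The validation step above can be sketched like this (the directory list and function name are assumptions for illustration; `Path.resolve()` collapses `..` and symlinks, so traversal out of the approved directories fails):

```python
from pathlib import Path

# Hypothetical allow-list: an approved directory plus /tmp for
# Playwright screenshots, as described in the bullet above.
APPROVED_DIRS = [Path("/home/bot/files"), Path("/tmp")]

def is_allowed_image_path(raw: str) -> bool:
    """Resolve the candidate path and require it to live under an
    approved directory (sketch, not the PR's actual validator)."""
    p = Path(raw).resolve()
    resolved_dirs = (d.resolve() for d in APPROVED_DIRS)
    return any(p == d or d in p.parents for d in resolved_dirs)
```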

**SDK integration improvements**
- Use setting_sources=["project"] to avoid plugin MCP conflicts
- Support SSE-based MCP servers (persistent Playwright)
- Improved error handling and stderr capture

**Other**
- Keep progress messages visible after response
- Telegram MCP server improvements (file sending, better error handling)
- Updated tests for image path validation

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Move 🧠 thinking and 💬 assistant text processing BEFORE tool calls
  so messages appear in chronological order (💬 before 💻/✏️)
- Remove draft_streamer.clear() calls that caused "Deleted message"
  ghost and empty [] messages — upstream never clears drafts, the
  bubble disappears naturally when the final message arrives
- Only use the LAST AssistantMessage text for the final response,
  preventing intermediate reasoning from being repeated at the end

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@RichardAtCT
Owner

Big feature PR with some genuinely useful additions — streaming drafts and interrupt support are quality-of-life wins. Several correctness issues and a process concern need to be addressed before this can land.


🔴 Blockers

1. /stop is a no-op — running_claude_task is never set

agentic_stop reads context.user_data["running_claude_task"] to cancel the task, but nowhere in the diff is that key actually written. The command will always hit the else branch and say "Nothing running." You need something like:

    task = asyncio.create_task(self._run_agentic(...))
    context.user_data["running_claude_task"] = task

before any await in the message handler. Without this, the entire interrupt mechanism is non-functional.
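The fix the review asks for can be sketched as two small pieces (key and handler names follow the review's snippet; everything else — the wrapper shape, the `stop` signature — is assumed for illustration):

```python
import asyncio

async def run_with_registration(user_data: dict, coro) -> None:
    """Wrap the agentic run so /stop can find and cancel it.
    The task is stored under the key agentic_stop reads, BEFORE
    any await, then cleaned up when the run finishes."""
    task = asyncio.create_task(coro)
    user_data["running_claude_task"] = task
    try:
        await task
    except asyncio.CancelledError:
        pass  # interrupted by /stop or a follow-up message
    finally:
        user_data.pop("running_claude_task", None)

async def stop(user_data: dict) -> bool:
    """agentic_stop sketch: cancel the stored task if one is running,
    otherwise report that nothing was running."""
    task = user_data.get("running_claude_task")
    if task and not task.done():
        task.cancel()
        return True
    return False
```

With this wiring in place, `concurrent_updates(True)` (blocker 2 below) becomes safe to enable, because a follow-up handler has an abort path for the in-flight run.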

2. concurrent_updates(True) makes the race worse without the fix above

Enabling concurrent updates means multiple messages process simultaneously — right intent for interrupt support. But if /stop can't actually cancel the running task, you now have concurrent handlers stepping on each other with no abort path. Don't ship concurrent_updates(True) until the task lifecycle is wired up correctly.


🟡 Consistency / Design Issues

3. user_data vs chat_data — conflicts with PR #165

agentic_cleanup reads last_tool_message_ids and last_tool_chat_id from context.user_data. PR #165 (also open, also touches orchestrator.py) moved session state to chat_data. These two PRs will conflict on merge and produce inconsistent state. Agree on chat_data as the canonical store before either lands.

4. tool_log used as a mixed-type container

tool_log.append({"_tool_message_ids": tool_message_ids})

Injecting a sentinel dict into a list of tool call entries is a code smell — any downstream iterator has to defensively check for this magic key. Track tool_message_ids as a separate variable in the closure scope instead.


🟡 Documentation Drift

5. Verbose level 3 undocumented / CLAUDE.md out of date

CLAUDE.md config docs apparently still document verbose range as 0–2. Update to reflect level 3 and describe what it enables.


🟡 Process

6. Overlap with PR #148 (same author)

PR #148 is already open with very similar orchestrator.py changes from the same author. These will be nearly impossible to merge independently. Recommend either closing #148 in favor of this PR or rebasing this branch on top of #148 once it lands.

As-is, whichever merges second will have significant conflicts.


🟢 Test gaps

No tests visible for:

  • agentic_stop / agentic_cleanup handlers
  • draft_streamer.py (private chat vs group fallback)
  • html_format.py markdown→HTML edge cases (nested formatting, split threshold)

Minor

  • except Exception: pass in _send_tool_msg and agentic_cleanup silently eats errors — log at WARNING with structlog
  • agentic_stop and agentic_cleanup missing type hints on update/context — mypy strict violation

Summary: Streaming draft and HTML formatting look solid in concept, but /stop is broken as written (task never stored), and concurrent_updates(True) actively makes things worse without it. Fix the task lifecycle, resolve the user_data/chat_data conflict with #165, and sort out the #148 overlap before merging.

Friday, AI assistant to @RichardAtCT (posted as @RichardAtCT — FridayOpenClawBot access pending)
