feat: streaming drafts, rich formatting, follow-up interrupts, and UX improvements #152

Fr4nzz wants to merge 2 commits into RichardAtCT:main from
Conversation
… improvements

Major improvements to the Telegram bot UX:

**Streaming draft responses**
- Real-time token streaming via sendMessageDraft (private chats)
- Falls back to editMessageText for group chats
- Draft cleared before final response to prevent stale bubbles
- Draft text reset after each 💬 message to avoid accumulation

**Rich HTML formatting**
- Parse Claude's markdown into Telegram HTML (bold, italic, code, links)
- Smart message splitting for long responses
- Proper escaping of HTML entities

**Follow-up message interrupts**
- Send a new message while Claude is working to interrupt and redirect
- Like vanilla Claude Code's behavior

**Per-event verbose messages**
- Tool calls, thinking, and assistant text shown as individual messages
- Configurable verbosity (0=quiet, 1=normal, 2=detailed)
- Progress timer shows elapsed seconds

**Image sending via MCP**
- Intercept send_file_to_user MCP tool calls
- Validate image paths (approved directory + /tmp for Playwright screenshots)
- Send as Telegram photos with optional captions

**SDK integration improvements**
- Use setting_sources=["project"] to avoid plugin MCP conflicts
- Support SSE-based MCP servers (persistent Playwright)
- Improved error handling and stderr capture

**Other**
- Keep progress messages visible after response
- Telegram MCP server improvements (file sending, better error handling)
- Updated tests for image path validation

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
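The "proper escaping of HTML entities" step above is easy to get wrong if formatting tags are inserted before escaping. A minimal sketch of the safe ordering (escape first, then map markdown spans onto Telegram's HTML tags); the function name and the two spans handled here are illustrative assumptions, not the PR's actual parser:

```python
import html
import re

def md_to_telegram_html(text: str) -> str:
    """Escape HTML entities FIRST, then convert a couple of markdown
    spans (**bold**, `code`) into Telegram-supported HTML tags."""
    escaped = html.escape(text, quote=False)  # escapes & < > only
    escaped = re.sub(r"\*\*(.+?)\*\*", r"<b>\1</b>", escaped)
    escaped = re.sub(r"`([^`]+)`", r"<code>\1</code>", escaped)
    return escaped
```

Doing the escaping after tag insertion would mangle the generated `<b>`/`<code>` tags, which is why the order matters.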
- Move 🧠 thinking and 💬 assistant text processing BEFORE tool calls so messages appear in chronological order (💬 before 💻/✏️)
- Remove draft_streamer.clear() calls that caused "Deleted message" ghosts and empty [] messages; upstream never clears drafts, and the bubble disappears naturally when the final message arrives
- Only use the LAST AssistantMessage text for the final response, preventing intermediate reasoning from being repeated at the end

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
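The last bullet (use only the last AssistantMessage text) can be sketched as a small selection helper. The dict-shaped message structure here is an assumption for illustration, not the SDK's actual message type:

```python
def final_response(messages: list[dict]) -> str:
    """Return only the LAST assistant text so intermediate reasoning
    is not repeated in the final reply (message shape is assumed)."""
    texts = [m["text"] for m in messages
             if m.get("role") == "assistant" and m.get("text")]
    return texts[-1] if texts else ""
```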
Big feature PR with some genuinely useful additions: streaming drafts and interrupt support are quality-of-life wins. Several correctness issues and a process concern need addressing before this can land.

🔴 Blockers

1.
   ```
   task = asyncio.create_task(self._run_agentic(...))
   context.user_data["running_claude_task"] = task
   ```
   before any …

2. Enabling concurrent updates means multiple messages process simultaneously, which is the right intent for interrupt support. But if …

🟡 Consistency / Design Issues

3. …
4. `tool_log.append({"_tool_message_ids": tool_message_ids})` injects a sentinel dict into a list of tool-call entries, which is a code smell; any downstream iterator has to defensively check for this magic key. Track …

🟡 Documentation Drift

5. Verbose level 3 undocumented / CLAUDE.md out of date. The CLAUDE.md config docs apparently still document the verbose range as 0–2. Update them to reflect level 3 and describe what it enables.

🟡 Process

6. Overlap with PR #148 (same author). PR #148 is already open with very similar … As-is, whichever merges second will have significant conflicts.

🟢 Test gaps

No tests visible for:
Minor
Summary: Streaming draft and HTML formatting look solid in concept, but …

— Friday, AI assistant to @RichardAtCT (posted as @RichardAtCT; FridayOpenClawBot access pending)
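The blocker about storing the task handle ties directly into the interrupt design: the reference must be stored before any await so a follow-up message can always find something to cancel. One way the cancel-and-redirect flow could look (a hypothetical helper; `state` stands in for `context.user_data`, and `make_run` for a factory producing the agentic coroutine):

```python
import asyncio

async def handle_message(state: dict, make_run):
    """On each incoming message: cancel any in-flight task, then
    start and store the new one BEFORE awaiting it, so a follow-up
    message can interrupt and redirect the running work."""
    old = state.get("running_claude_task")
    if old is not None and not old.done():
        old.cancel()
        try:
            await old  # wait for the cancellation to settle
        except asyncio.CancelledError:
            pass
    task = asyncio.create_task(make_run())
    state["running_claude_task"] = task
    return await task
```

With this shape, the stale-task cleanup and the store-before-await ordering live in one place instead of being repeated per handler.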
Summary
- `sendMessageDraft`: real-time token streaming in private chats with fallback to `editMessageText` for groups. Draft cleared before final response to prevent stale bubbles; draft text reset after each 💬 message to avoid accumulation
- Intercept `send_file_to_user` tool calls, validate paths, send as Telegram photos
- `setting_sources=["project"]` to avoid plugin MCP conflicts; support SSE-based MCP servers

Test plan
- `/verbose 0`, `/verbose 1`, `/verbose 2`: verify output levels
- `send_file_to_user`
- `pytest tests/unit/ -o "addopts="`: all tests pass

🤖 Generated with Claude Code
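The image-path validation described above (approved directory plus /tmp for Playwright screenshots) could look roughly like this. A sketch only: `APPROVED_DIR` is a made-up stand-in for whatever directory the bot actually configures, and the real check may differ:

```python
from pathlib import Path

APPROVED_DIR = Path("/home/bot/approved")  # hypothetical configured directory
TMP_DIR = Path("/tmp")                     # allowed for Playwright screenshots

def is_allowed_image_path(raw: str) -> bool:
    """Accept only files inside the approved directory or /tmp,
    resolving symlinks and '..' so traversal cannot escape."""
    path = Path(raw).resolve()
    for root in (APPROVED_DIR, TMP_DIR):
        try:
            path.relative_to(root.resolve())
            return True
        except ValueError:
            continue
    return False
```

Resolving before the `relative_to` check is the important part; a naive string-prefix comparison would let `/tmp/../etc/passwd` through.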