
feat(messaging): support chunkMode for outbound text messages #58

Open
zhonghe0615 wants to merge 1 commit into Tencent:main from zhonghe0615:main

Conversation

@zhonghe0615

Summary

This change adds chunkMode support to outbound text messages in the
deliver callback, so long AI replies can be split into multiple WeChat
messages instead of always arriving as one.

Changes

  • import chunkMarkdownTextWithMode and ChunkMode from openclaw/plugin-sdk/reply-runtime
  • add DEFAULT_TEXT_CHUNK_LIMIT = 4000 constant (mirrors core default)
  • replace the single sendMessageWeixin call in the deliver callback with
    a chunking loop driven by chunkMode and textChunkLimit from channel config
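The chunking loop described above can be sketched roughly as follows. This is an illustrative sketch, not the plugin's actual code: `chunkText` stands in for the SDK's `chunkMarkdownTextWithMode` (whose real signature is not assumed here), and `send` stands in for `sendMessageWeixin`.

```typescript
// Illustrative sketch of the new deliver path: split the reply per
// chunkMode, then send each piece as its own WeChat message.
type ChunkMode = "length" | "newline";

const DEFAULT_TEXT_CHUNK_LIMIT = 4000; // mirrors the core default

function chunkText(text: string, mode: ChunkMode, limit: number): string[] {
  if (mode === "newline") {
    // Split at paragraph boundaries (\n\n), dropping empty fragments.
    return text.split(/\n{2,}/).map((p) => p.trim()).filter(Boolean);
  }
  // "length": one message unless the reply exceeds the limit.
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += limit) {
    chunks.push(text.slice(i, i + limit));
  }
  return chunks.length > 0 ? chunks : [text];
}

// Inside the deliver callback (sender stubbed for illustration):
async function deliver(
  reply: string,
  send: (msg: string) => Promise<void>,
  mode: ChunkMode = "length",
  limit: number = DEFAULT_TEXT_CHUNK_LIMIT,
): Promise<void> {
  for (const chunk of chunkText(reply, mode, limit)) {
    await send(chunk);
  }
}
```

With `mode` defaulting to `"length"` and replies under 4000 characters producing a single chunk, the sketch preserves the existing single-message behavior when no config is set.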

Why

The existing chunking logic lived in sendText in channel.ts, but sendText
is never called on the normal message path. The deliver callback in
processOneMessage called sendMessageWeixin directly, so chunkMode config
had no effect at runtime.

With this change, chunking is applied where messages are actually sent:

  • "length" (default): sends as one message unless the reply exceeds
    textChunkLimit (4000 chars), preserving existing behavior
  • "newline": splits at paragraph boundaries (\n\n), delivering each
    paragraph as a separate WeChat message

Config keys:
channels.openclaw-weixin.chunkMode: "length" | "newline"
channels.openclaw-weixin.textChunkLimit: number (default 4000)
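Assuming a JSON-style gateway config (the actual file format may differ), the two keys would sit under the channel entry like this:

```json
{
  "channels": {
    "openclaw-weixin": {
      "chunkMode": "newline",
      "textChunkLimit": 4000
    }
  }
}
```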

Validation

  • installed the plugin from local path
  • set channels.openclaw-weixin.chunkMode = "newline" in config
  • restarted gateway
  • sent a multi-paragraph AI reply via WeChat
  • confirmed each paragraph arrived as a separate message
  • removed chunkMode from config and confirmed single-message behavior
    (default "length" mode) still works correctly

