
feat(feishu): add model-aware context guard defaults for long chats#283

Open
slicenferqin wants to merge 2 commits into ValueCell-ai:main from slicenferqin:feat/feishu-model-aware-context-guard

Conversation


@slicenferqin slicenferqin commented Mar 4, 2026

Summary

This PR addresses #279 by adding a practical context-control path for Feishu sessions in ClawX.

  • Add optional historyLimit field to Feishu channel settings UI.
  • Normalize historyLimit as a non-negative integer when saving config.
  • Auto-apply safe defaults for long-running Feishu conversations (only when missing):
    • agents.defaults.compaction.mode = safeguard
    • model-aware reserveTokensFloor (≈5% of context window, with sane bounds)
    • memoryFlush.enabled = true + model-aware softThresholdTokens
    • contextPruning defaults (cache-ttl, softTrim, hardClear)
  • Keep manual overrides intact (existing user config is not overwritten).
  • Add i18n labels/descriptions for the new Feishu setting in zh/en/ja.
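The "only when missing" behavior and the model-aware floor described above can be sketched roughly as follows. This is an illustrative sketch, not the actual ClawX code: the names (`CompactionConfig`, `computeReserveFloor`, `applyGuardDefaults`) and the clamp bounds are assumptions for demonstration.

```typescript
// Hypothetical types/names for illustration; not the real ClawX API.
interface CompactionConfig {
  mode?: string;
  reserveTokensFloor?: number;
}

// Model-aware floor: reserve roughly 5% of the model's context window,
// clamped to sane bounds (bounds here are assumed, not from the PR).
export function computeReserveFloor(
  contextWindowTokens: number,
  minFloor = 1024,
  maxFloor = 8192,
): number {
  const fivePercent = Math.round(contextWindowTokens * 0.05);
  return Math.min(maxFloor, Math.max(minFloor, fivePercent));
}

// Fill compaction defaults without overwriting values the user already set,
// mirroring the "keep manual overrides intact" rule above.
export function applyGuardDefaults(
  existing: CompactionConfig,
  contextWindowTokens: number,
): CompactionConfig {
  return {
    mode: existing.mode ?? "safeguard",
    reserveTokensFloor:
      existing.reserveTokensFloor ?? computeReserveFloor(contextWindowTokens),
  };
}
```

With a 128k-token context window, 5% gives a 6400-token floor; a small 4k window clamps up to the minimum, and very large windows clamp down to the maximum, so the reserve never becomes disproportionate.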

Why

Issue #279 reports token explosions (prompt sizes ballooning to 30k–50k tokens) in Feishu-integrated flows, which can crash local models. This change adds both:

  1. an explicit user option (historyLimit), and
  2. a no-touch auto-protection baseline for users who never tune advanced settings.
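The historyLimit normalization mentioned above ("non-negative integer when saving config") could look roughly like this. The function name and the choice to return `undefined` for invalid or negative input (leaving the field unset rather than coercing it to 0) are assumptions for illustration, not the actual implementation.

```typescript
// Hypothetical sketch: normalize a raw UI value to a non-negative integer,
// or undefined when the input is empty/invalid so the setting stays unset.
export function normalizeHistoryLimit(raw: unknown): number | undefined {
  // Treat empty/whitespace-only strings as "not set".
  if (typeof raw === "string" && raw.trim() === "") return undefined;
  const n = typeof raw === "string" ? Number(raw.trim()) : Number(raw);
  // Reject NaN, Infinity, and negatives; truncate fractional values.
  if (!Number.isFinite(n) || n < 0) return undefined;
  return Math.floor(n);
}
```

Returning `undefined` instead of 0 matters here: 0 would plausibly mean "keep no history", which is a different user intent than "never configured".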

Validation

  • pnpm install
  • pnpm -s typecheck

Fixes #279

@slicenferqin slicenferqin force-pushed the feat/feishu-model-aware-context-guard branch from 13e5d73 to b4ff8fc on March 4, 2026 at 05:35
