chore: sync upstream - CONFLICTS need manual review #8
Conversation
Add a `timeout` frontmatter field (in seconds) to job files, allowing long-running jobs to override the global 5-minute session timeout. Also fix job session isolation: jobs now pass `job.name` as the threadId so each job runs in its own persistent session rather than sharing the main conversation session.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…sion When job names are used as threadIds for session isolation, they should not be persisted to sessions.json. Non-Discord IDs (e.g. "podcast-curator") cause 400 errors during rejoinThreads on every gateway reconnect.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Remove the snowflake guard from createThreadSession so job-named sessions are persisted in sessions.json across daemon restarts
- Add the snowflake guard to rejoinThreads instead, so non-Discord IDs (job names) are skipped when rejoining Discord thread membership
- Validate timeout frontmatter: reject zero, negative, and non-numeric values rather than passing them through to the runner
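A minimal sketch of the guard described above. The helper names and the exact digit range are assumptions (Discord snowflakes are purely numeric IDs, 17 to 20 digits in practice), not the repository's actual code:

```typescript
// Assumption: a Discord snowflake is a purely numeric string, 17-20 digits.
// Job names like "podcast-curator" fail this test and are skipped when
// rejoining Discord thread membership, while staying persisted on disk.
function isSnowflake(id: string): boolean {
  return /^\d{17,20}$/.test(id);
}

function rejoinableThreadIds(sessionIds: string[]): string[] {
  // Only call the Discord thread-members API for real snowflakes.
  return sessionIds.filter(isSnowflake);
}
```

The key point is where the guard lives: persisting everything but filtering only at the Discord API boundary keeps job sessions durable without triggering 400 errors on reconnect.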
…y in Discord

- Add sessionTimeoutMs to the Settings interface and DEFAULT_SETTINGS (30 min default)
- Parse sessionTimeoutMs in parseSettings() so the setting is no longer silently dropped; runner.ts used (settings as any).sessionTimeoutMs, which always fell through to the 5-minute CLAUDE_TIMEOUT_MS constant
- Replace (settings as any).sessionTimeoutMs with settings.sessionTimeoutMs in runner.ts (two call sites)
- Fix Discord error display to check stdout before stderr: Claude writes human-readable error messages to stdout on exit 1, not stderr, so the previous order always produced 'Unknown error'

(cherry picked from commit 588e10ab5252861feefdbf4eb539e0f6dd1bbaf7)
Extract DEFAULT_SESSION_TIMEOUT_MS (30 min) from config.ts and import it in runner.ts, removing the redundant local CLAUDE_TIMEOUT_MS constant. Previously the 30-minute value was duplicated in two places in config.ts, and runner.ts had a separate 5-minute constant that served the same role as the default parameter for runClaudeOnce(). Now there is a single source of truth for the timeout default.

(cherry picked from commit ef506988a5edaba13b50a2e799c9643d115fa0c7)
feat(jobs): per-job timeout override and session isolation
Adds a second job location scanned by loadJobs(): agents/<name>/jobs/*.md at the project root. This path is outside .claude/ and is therefore not subject to Claude Code's hardcoded write protection on .claude/ directories, which blocks writes even with --dangerously-skip-permissions.

- jobs.ts: scan agents/*/jobs/*.md after the legacy .claude/claudeclaw/jobs/; add agent, label, enabled fields to the Job interface; directory location is authoritative (overrides any frontmatter agent: value)
- sessions.ts: add an optional agentName param to all session functions; agent sessions are stored at agents/<name>/session.json (outside .claude/)
- runner.ts: thread agentName through execClaude and run(); session calls branch on agentName the same way existing code branches on threadId
- commands/start.ts: pass job.agent to run() at the cron fire site

Backwards compatible: the legacy .claude/claudeclaw/jobs/ is still scanned and works unchanged. No new dependencies. No new processes.

(cherry picked from commit 53e3af311ffbdd9b177d21254aa18745e634ee30)
clearJobSchedule() now resolves agent job names (agentName/label format) to agents/<name>/jobs/<label>.md instead of the legacy JOBS_DIR path. Non-recurring agent-scoped jobs can now correctly clear their schedule.

status.ts now scans both the legacy JOBS_DIR and agents/*/jobs/ so agent-scoped jobs appear in claudeclaw status output.

(cherry picked from commit 3db5931d62b2ce46d54b12e73854fc9c698cf860)
Adds a lightweight watchdog that stops auto-compact retries when a session has timed out too many consecutive times, preventing a stuck session from draining the user's Claude Code allowance. Both limits (maxConsecutiveTimeouts, maxRuntimeSeconds) default to null, so there is zero behaviour change without explicit opt-in in settings.json.

- src/watchdog.ts: new 155-LOC module, no runtime dependencies
- src/config.ts: WatchdogSettings type + parse wired into loadSettings
- src/runner.ts: 26-line integration block before the auto-compact retry
- src/__tests__/watchdog.test.ts: 17 bun tests (17 pass, 0 fail)
- version bump: 1.0.4 → 1.0.5

(cherry picked from commit 983e76c)
- fix(runner): compactCurrentSession now accepts agentName so agent sessions can actually be compacted (previously it always operated on the global session)
- fix(start): agent jobs use 'agent:<name>' as the queue key instead of undefined, preserving per-agent serialisation without blocking the global queue
- fix(runner): remove clearSession on non-timeout failures — leaving it meant the maxRuntimeSeconds clock reset on any exit-1, making the wall-clock guard weak
- fix(config): add getAgentsDir() as the single source of truth for the agents/ directory path; update jobs.ts, sessions.ts, status.ts to use it
- docs(watchdog): document that startSession tracking is in-memory only and resets on daemon restart (maxRuntimeSeconds cannot span restarts)
- test: update the jobs.test.ts assertion to match the getAgentsDir() rename
- chore: bump plugin and marketplace versions to 1.0.7
Cherry-picks the non-conflicting aspects of #132 (per-job retry logic) into this PR, superseding it. Conflicts in start.ts are resolved by preserving this PR's agent-scoped dispatch (agent:<name> queue key).

Added:
- retry and retryDelay frontmatter fields on the Job interface (jobs.ts)
- runJob() helper in start.ts encapsulating dispatch + retry + one-shot clear
- Per-job in-memory retry state (jobRetryState Map)

Bug fixed vs #132: retry state is no longer deleted before the retry fires. The original PR did jobRetryState.delete(job.name) before calling runJob(), so if the retry failed again, runJob() saw no existing state and started failCount at 0 — meaning a permanently failing job never exhausted its configured retry limit. Fix: preserve state across the retry; runJob reads and increments failCount naturally from the existing entry.
When a retry fires from the cron tick, runJob() is async/fire-and-forget. retryAt stayed in the past during execution, so every subsequent 60s tick also satisfied the condition and stacked another runJob call — effectively burning through the retry budget without honoring retryDelay.

Fix: set retryAt = Number.MAX_SAFE_INTEGER before firing runJob, acting as an in-flight sentinel. runJob's .then() handler overwrites it with the real next-retry time (or deletes state on success/exhaustion), so the sentinel is always replaced before the next cron decision.
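A minimal sketch of the sentinel pattern under the assumptions above (the state shape and field names retryAt/failCount are modelled on the description, not copied from the repository):

```typescript
// Per-job retry state, keyed by job name. retryAt is a wall-clock ms value.
interface RetryState { failCount: number; retryAt: number; }
const jobRetryState = new Map<string, RetryState>();

function onCronTick(name: string, now: number, fire: () => Promise<boolean>) {
  const state = jobRetryState.get(name);
  if (!state || now < state.retryAt) return;
  // In-flight sentinel: park retryAt in the far future so subsequent 60s
  // ticks do not stack another run while this one is still executing.
  state.retryAt = Number.MAX_SAFE_INTEGER;
  fire().then((ok) => {
    if (ok) {
      jobRetryState.delete(name); // success: no further retries needed
    } else {
      state.failCount += 1;
      state.retryAt = now + 60_000; // real next-retry time replaces the sentinel
    }
  });
}
```

Because the sentinel is written synchronously before `fire()` is awaited, a tick arriving one second later already sees the job as "not due" and cannot double-dispatch it.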
feat: integrate agent-scoped jobs and watchdog
Adds a /plugin and /claudeclaw:plugin wizard across Discord, Telegram and the Web UI — a thin wrapper over the local `claude plugin ...` CLI, no separate installer or registry.

New files:
- src/commands/plugin-cli.ts: builds argv and spawns `claude plugin` subcommands (marketplace add/list/update/remove, install, list, enable, disable, uninstall). 30s timeout, captures stdout/stderr.
- src/commands/plugin-wizard.ts: in-memory multi-step wizard (10min TTL) keyed by interface+scopeId. Handles add marketplace, update, remove, install (with scope prompt + capability confirmation), list, enable, disable, uninstall. Cancel at any step.

Adapter touchpoints (3 × ~10 LOC, no runner changes):
- discord.ts: intercepts before resolveSkillPrompt
- telegram.ts: intercepts before resolveSkillPrompt
- start.ts: intercepts before streamUserMessage in the web onChat handler

Install scope: `-s user` (~/.claude/plugins/) or `-s project` (.claude/plugins/). Per-agent runtime isolation is deferred to Phase 2 — it requires runner spawn-cwd scoping. Noted in the wizard help text.

Closes #116
…mal chat

Memory leak: add a 15-minute setInterval sweep that removes expired sessions from the Map. The timer is unref'd so it doesn't prevent daemon shutdown.

Message intercept: update the MENU text to explicitly tell users that all messages route to the wizard while it is open, and to send 'cancel' to return to normal chat. Improve the unrecognised-input reply to repeat that escape.
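A sketch of the sweep-plus-unref pattern described above; the map name, entry shape, and TTL value are assumptions for illustration:

```typescript
// Hypothetical wizard-session store: key is "interface:scopeId".
const TTL_MS = 10 * 60 * 1000; // 10-minute wizard TTL, per the description
const wizards = new Map<string, { startedAt: number }>();

function sweepExpired(now: number = Date.now()): void {
  for (const [key, w] of wizards) {
    if (now - w.startedAt > TTL_MS) wizards.delete(key); // reclaim memory
  }
}

// unref() means a pending sweep timer never keeps the daemon process alive.
const sweepTimer = setInterval(sweepExpired, 15 * 60 * 1000);
sweepTimer.unref?.();
```

Without `unref()`, an idle daemon asked to shut down would wait for the next sweep tick; with it, the event loop can drain and exit while the interval is still scheduled.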
An active /plugin wizard could be bypassed by messages that classify as "hire" or "fire" intents, causing unintended Discord thread mutations mid-wizard. Move the wizard check to before the typing indicator and AI thread intent block — after auth and non-empty content checks but before any thread management or Claude routing. Reported by Codex review of 81bc145.
feat(plugin): expose Claude Code plugin marketplace via /plugin wizard
runner.ts:
- Add cwd parameter to runClaudeOnce; passes to Bun.spawn
- Add ensureAgentDir(agentName) — creates agents/<name>/ and returns the path
- execClaude computes spawnCwd = agents/<name>/ when agentName is set
- Named agent Claude spawns now use the agent directory as cwd, so
.claude/plugins/ and .claude/skills/ resolve per-agent
plugin-cli.ts:
- runPluginCli accepts optional cwd to scope the subprocess
plugin-wizard.ts:
- Import ensureAgentDir from runner
- buildScopePrompt(agentName) shows agent-specific path for project scope
- install-confirm: passes agents/<name>/ as cwd when scope=project and
agentName is set, installing into that agent's directory
- Confirmation message reports actual install location
discord.ts:
- knownThreads now stores { parentId, agentName? }
- agentName populated from thread name at: hire, THREAD_CREATE,
THREAD_UPDATE, THREAD_LIST_SYNC
- wizardCtx receives agentName from knownThreads for the active channel
Addresses two issues from the Codex review of #137:

High: runUserMessage now accepts agentName and forwards it to run() → execClaude, so Discord thread messages use agents/<name>/ as cwd and resolve agent-local plugins/skills. discord.ts passes knownThreads.get(channelId)?.agentName to runUserMessage.

Medium: runCompact gains a cwd parameter and passes it to runClaudeOnce. compactCurrentSession computes the agent cwd and passes it. The auto-compact in execClaude passes spawnCwd to both runCompact and the post-compact retry runClaudeOnce, preserving isolation through the full timeout/compact/retry cycle.
… /compact scope

Security: safeAgentSlug() normalises raw thread names to [a-z0-9_-] before using them as path segments. ensureAgentDir() resolve-checks that the result stays under PROJECT_DIR/agents/. Exported so the wizard and discord.ts use the same canonical form for display and storage.

Correctness (recovery): upsertThread() replaces all direct knownThreads.set() calls. It preserves any existing agentName when a new one is not provided, and sanitizes raw names via safeAgentSlug(). All recovery paths now populate agentName: rejoinThreads() fetches the name from the channel API, the handleMessageCreate fallback recovery does the same, GUILD_CREATE uses upsertThread with thread.name, and the THREAD_CREATE/UPDATE/LIST_SYNC handlers all route through upsertThread.

Correctness (/compact): handleInteractionCreate looks up knownThreads.get(interaction.channel_id) and passes agentName to compactCurrentSession(), so /compact in an agent thread compacts that agent's session rather than the global session.
/compact thread session fix: Add compactCurrentThreadSession(threadId, agentName?) to runner.ts. It looks up the session via getThreadSession(threadId) — the same store used by normal Discord thread messages — so /compact inside an agent thread now compacts the correct session. agentName is still used for cwd isolation only. handleInteractionCreate routes to compactCurrentThreadSession when the interaction is in a known thread, and falls back to compactCurrentSession() for global channels.

Slug collision fix: upsertThread() now stores agentName as "<safeSlug>-<threadId>" instead of just "<safeSlug>". Discord thread IDs are unique snowflakes, so two threads with similar display names (e.g. "Agent One" vs "agent-one") get distinct directory keys and never share agents/<slug>/.claude/plugins/.
Introduce agentDirKey(rawName, threadId), which truncates the display slug to max(1, 64 - suffix.length) chars before appending -<threadId>, so the unique suffix is never truncated away on long thread names.

ensureAgentDir() now accepts a pre-normalised key directly (validating only [a-z0-9_-] characters) and does NOT call safeAgentSlug() again, eliminating the double truncation that caused the collision. upsertThread() calls agentDirKey() instead of building the key manually. plugin-wizard.ts displays ctx.agentName directly (already normalised) instead of re-slugging it, removing the last second-pass truncation site.
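A sketch of the truncate-before-suffix logic described above; the slug rules and the 64-character budget follow the commit message, but the exact implementation is an assumption:

```typescript
// Normalise a raw display name to [a-z0-9_-]; fall back to "agent" if
// nothing survives. (Assumed helper, modelled on safeAgentSlug.)
function safeAgentSlug(raw: string): string {
  const slug = raw.toLowerCase().replace(/[^a-z0-9_-]+/g, "-").replace(/^-+|-+$/g, "");
  return slug || "agent";
}

// Truncate the slug FIRST, then append the unique -<threadId> suffix,
// so long thread names can never truncate the suffix away.
function agentDirKey(rawName: string, threadId: string): string {
  const suffix = `-${threadId}`;
  const maxSlug = Math.max(1, 64 - suffix.length);
  return safeAgentSlug(rawName).slice(0, maxSlug) + suffix;
}
```

The ordering is the whole fix: slicing the combined string after concatenation (the old behaviour) could cut off the snowflake and recreate the collision.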
The [a-z0-9_-]+ validation broke existing named-agent jobs whose agent directories have uppercase letters, spaces, or other valid filesystem characters. The resolve-under-agents/ check is the correct and sufficient security boundary — it blocks path traversal regardless of character set. Discord-generated keys are already safe via agentDirKey; filesystem-derived job agent names pass through as-is, same as before.
path.resolve() is a lexical operation, so the containment check passed even for symlinks pointing outside the repo. Replace the post-mkdir containment check with realpath() on both the dir and agentsRoot, which resolves symlinks before comparing. The lexical pre-check is kept as a fast path to reject obvious traversal without touching the filesystem.
…realpath

The previous check only verified that agents/<name> resolves under agents/. If agents/ itself is a symlink outside the project, realpath(agentsRoot) is already outside and the child check passes erroneously. Add a realpath(agentsRoot).startsWith(realpath(PROJECT_DIR)) check so a symlinked agents/ root is rejected before comparing child directories.
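The two commits above together give a three-step containment check. A self-contained sketch under stated assumptions (function name and error messages are illustrative, not the repository's):

```typescript
import { mkdirSync, realpathSync } from "node:fs";
import * as path from "node:path";

function ensureContainedDir(projectDir: string, key: string): string {
  const agentsRoot = path.join(projectDir, "agents");
  const dir = path.resolve(agentsRoot, key);
  // 1) Fast lexical pre-check: rejects obvious traversal like "../../etc"
  //    without touching the filesystem.
  if (!dir.startsWith(agentsRoot + path.sep)) throw new Error("path escapes agents/");
  mkdirSync(dir, { recursive: true });
  // 2) agents/ itself must resolve under the project root after symlink
  //    resolution (a symlinked agents/ root is rejected here).
  const realProject = realpathSync(projectDir);
  const realRoot = realpathSync(agentsRoot);
  if (!realRoot.startsWith(realProject + path.sep)) throw new Error("agents/ escapes project");
  // 3) The child dir must resolve under the (real) agents/ root.
  const realDir = realpathSync(dir);
  if (!realDir.startsWith(realRoot + path.sep)) throw new Error("agent dir escapes agents/");
  return realDir;
}
```

Step 1 alone is defeated by symlinks because path.resolve never reads the filesystem; steps 2 and 3 close that gap at the cost of two realpath calls, which is why the lexical check stays as a cheap early reject.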
…tion feat(plugin): Phase 2 — per-agent runtime isolation for plugin spawns
…l) compatibility

Fixes #2. When Claude uses the Task tool it spawns child claude processes that keep the parent alive until all agents finish. The previous blocking approach (claude -p --output-format json/text + await proc.exited) caused 15+ minute hangs on any agentic workflow.

Replace runClaudeOnce in execClaude with a new runClaudeStream that:
- uses --output-format stream-json --verbose
- reads NDJSON events line by line as they arrive
- captures session_id from the system/init or result event
- returns the final answer from the result event

Key fixes over a naive streaming port:
- the auto-compact retry path also uses runClaudeStream (it was erroneously left on runClaudeOnce, which would have dumped raw NDJSON to the user)
- the session is persisted whenever the init event delivers a session_id, regardless of exit code, so a timed-out first message is resumable
- stderr is surfaced to the user when no result event arrives (abort/error)

runClaudeOnce is unchanged and still used by runCompact (/compact runs a single blocking command, no subagents involved). streamClaude (web UI path) is unchanged.
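A sketch of the NDJSON event handling described above, separated from the subprocess plumbing. The event shapes (`session_id` on init/result, `result` holding the final answer) follow the commit message; treating them as plain JSON objects is an assumption:

```typescript
interface StreamOutcome { sessionId?: string; answer?: string; }

// Consume stream-json lines one at a time as they arrive.
function consumeStreamJson(lines: string[]): StreamOutcome {
  const out: StreamOutcome = {};
  for (const line of lines) {
    if (!line.trim()) continue;
    let ev: { type?: string; session_id?: string; result?: string };
    try { ev = JSON.parse(line); } catch { continue; } // tolerate partial lines
    // session_id may arrive on the system/init event or the result event;
    // capturing it as soon as it appears makes a timed-out run resumable.
    if (ev.session_id) out.sessionId = ev.session_id;
    // The final answer comes from the result event, not from raw stdout.
    if (ev.type === "result") out.answer = ev.result;
  }
  return out;
}
```

Because the session ID is recorded on the init event rather than only at exit, a caller can persist it even when the process is later killed by a timeout, which is exactly the resumability fix named above.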
Three issues raised in review:

1. streamClaude() rotation gap: streamClaude() (web chat path) did not check needsRotation() before resuming the global session, letting over-threshold sessions continue indefinitely via the web UI. Added the same rotation guard that execClaude() has, gated on autoRotate.

2. Summary generation timeout: generateSummary() spawned claude with no timeout. A hung subprocess would block rotation and queue all subsequent messages. Added a 60s deadline: if the subprocess does not complete in time it is killed and rotation proceeds without a summary.

3. messageCount counter placement: messageCount was incremented inside getSession(), a generic lookup helper used for reads and session-state checks as well as message delivery. This caused non-message code paths (e.g. claudeclaw send checking for an active session) to pollute the counter.
- Removed the increment from getSession().
- Added incrementMessageCount() to sessions.ts.
- execClaude() calls it after each invocation (global session only).
- streamClaude() calls it after each invocation.

(cherry picked from commit ec4040c1e63bcca89687c24f83e95396c3909d0c)
… and stream paths After rotateSession() fires inline, load the generated summary via loadLatestSummary() and prepend it to appendParts so the next Claude invocation receives the previous session context in --append-system-prompt. Applies to both execClaude() (non-streaming) and streamClaude() (web chat). (cherry picked from commit 077906a6c723a66e9a584362f4be74eaf5b41523)
…teSummary Instead of re-reading the latest *.md file after rotation (which could pick up a stale summary from a prior rotation on timeout/failure), generateSummary() now returns the content it just wrote, or null on timeout/failure. rotateSession() threads that value through and returns it. Both execClaude() and streamClaude() use the returned value directly — stale summaries are never injected. (cherry picked from commit c11f5500ff741fc569209637c8abb5a09c0a7a24)
This is a clean port of the Slack integration from PR #83 onto the current master rather than a rebase. The PR branch was ~25 commits behind, and its config.ts/runner.ts/discord.ts changes would have regressed watchdog, agent isolation, memory-leak fixes, and the snowflake-precision Discord user ID handling. Only the Slack-specific content was ported; all shared-file changes from the PR branch were discarded.

Files added/changed vs master:
- src/commands/slack.ts — new (ported from PR #83, with all fixes below)
- src/__tests__/slack.test.ts — new (6 tests covering pure helpers)
- src/config.ts — additive only: SlackConfig interface + slack field
- src/index.ts — wire the `slack` command to dispatch
- src/sessions.ts — carry forward fallback session functions (from #143)
- src/messaging.ts — carry forward extractErrorDetail (from #143)

Fixes applied to slack.ts during the port:
- Bug (#1): assistantThreadChannels was a Set<channelId>; once any assistant thread opened in a channel, ALL messages in that channel bypassed the @mention requirement. Fixed: changed to assistantThreadKeys keyed by "${channelId}:${threadTs}" so the bypass is scoped to the specific thread.
- Bug (#2): the [read_channel:...] directive ran a follow-up runUserMessage() but discarded the result — users received no visible summary. Fixed: the result is captured and posted back to Slack via the same directive/postMessage path.
- Fix (#4): removed the unused streamUserMessage import.
- Fix (#5): replaced the hardcoded "正在處理中..." (Chinese for "Processing...") with "Processing...".
- Fix (#6): the message send path had a double-chunking bug — postMessage() sent the full text, then a loop re-sent overlapping trailing chunks. Fixed: a single loop over postMessage() per MAX_LEN chunk, tracking the final ts for edit/delete tracking.
- Fix (#7): the recentlyProcessed dedup map grew without bound during idle periods (isDuplicate only pruned on call). Added a setInterval eviction that prunes expired entries every 60s and caps the map size at 5000.
- Fix (#8): wire agent isolation — derive the per-thread agentName via agentDirKey() and pass it as the 4th arg to runUserMessage(), mirroring discord.ts.
- Fix (#9): injected thread history could contain directive patterns from prior messages, enabling injection attacks. Fixed: each fetched thread message is passed through sanitizeUserInput() before being added to the prompt context.

Additional: extractErrorDetail() wired into all error response paths; resetFallbackSession() called alongside resetSession() in the /reset handler.

Closes #83
(cherry picked from commit c2eb20b95df1322f7e88d91a0cb0a2e735584668)
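The double-chunking bug in Fix (#6) comes down to splitting the text exactly once. A minimal sketch (MAX_LEN and the helper name are assumptions; Slack's actual limit is not taken from the commit message):

```typescript
const MAX_LEN = 4000; // assumed per-message length cap

// Split text into non-overlapping MAX_LEN slices. The caller posts each
// chunk with one postMessage() call and keeps only the final ts for
// later edit/delete tracking; there is no separate "send the whole text
// first" step, which was the source of the overlapping re-sends.
function chunkMessage(text: string, maxLen: number = MAX_LEN): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxLen) {
    chunks.push(text.slice(i, i + maxLen));
  }
  return chunks.length ? chunks : [""];
}
```

The invariant worth testing is that concatenating the chunks reproduces the input with no overlap, which the original two-pass code violated.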
Three fixes:
- upload_file: reject absolute paths and validate that relative paths stay within process.cwd() before uploading; prevents model output from exfiltrating arbitrary host files to Slack
- read_channel: restrict fetched channels to the current conversation channel plus explicitly configured listenChannels; prevents prompt injection from reading arbitrary private channels the bot token can access
- handleBlockAction: pass agentName derived from (channelId, threadTs) to runUserMessage, matching the isolation pattern used in the message handler; fixes loss of agent working-directory context on button/select interactions

(cherry picked from commit 9de84263dc4042cb15648202d6a1511f8f18498c)
- downloadSlackFile: check Content-Length before reading the body (25 MB cap), validate the file extension against a safe allowlist instead of trusting attacker-controlled file.name, and double-check the body size after arrayBuffer() in case the header was absent or wrong
- extractUploadDirectives / upload loop: add a realpath() check after the existing lexical resolve() guard so symlinks cannot escape the project root
- fetchChannelHistory: apply sanitizeUserInput() to every message text before writing to disk; change the follow-up prompt framing from [System] to [Channel transcript — untrusted external content] to prevent channel messages being treated as privileged instructions

(cherry picked from commit 7946b853131e44dda9cddf3320d690709d0dfe3c)
…state

Upload security (addresses Terry's second CHANGES_REQUESTED):
- Restrict [upload_file:...] directives to .claude/claudeclaw/outbox/slack/ only; whole-project-root access allowed exfiltrating .env, settings.json, etc. — the outbox is created on bot startup
- Tighten the realpath() symlink check to validate against the outbox dir, not the project root

Correctness:
- Wrap runUserMessage() in try/finally in handleMessage and handleBlockAction so statusRefreshInterval is always cleared even if the call throws
- fetchBotMessages: filter on botUserId only, not bot_id (any bot) — prevents edit/delete directives from touching other bots' messages
- Convert assistantThreadKeys and threadHistoryLoaded from unbounded Sets to Maps with timestamps; evict entries older than 7 days in the existing 60s eviction interval
- /reset: also remove all Slack thread sessions from disk and clear the in-memory Maps (threadHistoryLoaded, assistantThreadKeys, lastBotMessageTs)

(cherry picked from commit 1491ff6e46cd4890769952bea62ede4f22074908)
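The Set-to-Map conversion above is a small, reusable pattern: store a first-seen timestamp instead of bare membership, then evict on the existing interval. A sketch with assumed names (the 7-day window matches the description):

```typescript
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

// Was: Set<string>. Now: key -> first-seen timestamp (ms), so entries
// can be aged out instead of accumulating forever.
const assistantThreadKeys = new Map<string, number>();

function markThread(key: string, now: number = Date.now()): void {
  if (!assistantThreadKeys.has(key)) assistantThreadKeys.set(key, now);
}

// Called from the existing 60s eviction interval.
function evictStale(now: number = Date.now()): void {
  for (const [key, seenAt] of assistantThreadKeys) {
    if (now - seenAt > SEVEN_DAYS_MS) assistantThreadKeys.delete(key);
  }
}
```

Membership checks stay O(1) (`assistantThreadKeys.has(key)` replaces `set.has(key)`), so the fix costs nothing on the hot path and bounds memory on long-lived daemons.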
(cherry picked from commit 8fc5159caa4f17bc331b7d1efae02af3b94a2f38)
Clean port of PR #85 (saravmajestic) with all issues from squirblej's review applied.

Server (src/ui/services/sessions.ts, new):
- GET /api/sessions: lists global, per-agent, Discord thread, and orphan JSONL sessions. Agent lookup uses getAgentsDir() (agents/<name>/), not .claude/agents/. Channel uses Discord snowflake detection, not a fabricated slk: prefix. Single JSONL read per session for the preview. UUID validation on sessionId before path construction (path traversal fix).
- GET /api/sessions/:id/messages: the full user message is kept after prefix-strip, not truncated to the last line. limit/offset clamped.
- GET /api/agents: reads subdirectory names from getAgentsDir().
- getProjectDir() sanitizer replaces /, backslash, and . to match Claude Code.
- All error responses use single-arg json() matching the existing http.ts signature.

UI:
- Session sidebar with agent name, channel badge, preview, timestamp, turns.
- Click a session to load its transcript; a "Viewing history" banner shows while browsing.
- "Load older" pagination with scroll-position preservation.
- "+ New" clears the history view and returns to live chat.
- loadSessions() wired to the chat-tab click directly, not via monkey-patch.
- Dead agent-selector sidebar dropdown removed.

Not included (requires extending streamClaude):
- Session resume (new messages into a selected session)
- Agent targeting from the web UI

Co-authored-by: saravmajestic <saravmajestic@users.noreply.github.com>
Co-authored-by: squirblej <squirblej@users.noreply.github.com>
(cherry picked from commit 7f2f28e48aa3f8729cd3af08bb40c9cc0064c154)
Eliminates the separate count-fetch the client used to infer transcript
length. The previous approach fetched with limit=1000000 but the server
clamped to 200, causing "Load older" to land at message ~180 instead of
the page immediately before the visible tail for transcripts >200 messages.
readSessionMessages() now returns { messages, total } — total is the full
parsed count before slicing. The client reads total from the initial
offset=-1 response to compute browseOffset correctly in one round-trip.
Server limit ceiling raised from 200 to 2000 (no longer needed for the
count hack, but raised to a sensible cap for future use).
(cherry picked from commit 1aee87830ed71b7ac1dbedd82e6cb4eb622ad32b)
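The `{ messages, total }` contract above can be sketched as a pure slicing helper. The offset=-1 "last page" convention and the 2000 ceiling follow the commit message; the function shape is an assumption:

```typescript
// Return one page of a parsed transcript plus the full count, so the
// client can compute browseOffset for "Load older" in one round-trip.
function readSessionMessages<T>(
  all: T[],
  limit: number,
  offset: number,
): { messages: T[]; total: number } {
  const clampedLimit = Math.min(Math.max(limit, 1), 2000); // server ceiling
  // offset = -1 means "the tail page": start so the slice ends at the end.
  const start = offset < 0 ? Math.max(0, all.length - clampedLimit) : offset;
  return {
    messages: all.slice(start, start + clampedLimit),
    total: all.length, // full parsed count BEFORE slicing
  };
}
```

With total available, the client's previous-page offset is simply `max(0, total - 2 * pageSize)`, so "Load older" lands exactly one page before the visible tail instead of at a clamped guess.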
…51-153 chore: bulk cherry-pick approved contributor PRs
Ports PR #41 (chetan-guevara) targeting master's existing streamClaude() implementation:
- AgentStreamEvent interface { type: "spawn"|"done", id, description, result? }
- streamClaude() now accepts optional onAgentEvent callback
- pendingAgents Map tracks Agent tool_use_id → description across the stream
- Detects Agent tool spawns in assistant content blocks; fires "spawn" event
- Detects tool_result completions in user message blocks; fires "done" event
- streamUserMessage() passes onAgentEvent through to streamClaude()
- types.ts: re-exports AgentStreamEvent; adds onAgentEvent to onChat signature
- server.ts: maps AgentStreamEvent → agent_spawn/agent_done SSE events
- start.ts: threads onAgentEvent from onChat into streamUserMessage()
- script.ts: renders agent spawn/done as pill bubbles in chat history;
filters running agents from localStorage persistence
- styles.ts: chat-msg-agent, chat-msg-agent-running, chat-msg-agent-done,
chat-agent-spinner CSS classes
Attribution: original concept and implementation by @chetan-guevara (PR #41)
(cherry picked from commit 8afc24ce80ed36a765a3348e815f40d1731a525e)
…_persist for Agent tool_use blocks (cherry picked from commit dd947cac94c856baf07b969924e5e5d1ca7f4e91)
Ports PR #38 (dmitryanchikov) with fixes for master compatibility:
- TimeoutsConfig interface: per-name timeout settings (telegram/heartbeat/job/default) configurable in settings.json; hot-reloads within the existing 30s cycle
- resolveTimeoutMs(name): maps the invocation name to minutes from settings (telegram→5m, heartbeat→15m, everything else→job→30m)
- execClaude() now uses resolveTimeoutMs(name) instead of a flat sessionTimeoutMs (timeoutMsOverride still works for explicit overrides)
- Rate limit state: parseRateLimitResetTime(), isRateLimited(), getRateLimitResetAt(), wasRateLimitNotified(), markRateLimitNotified() — exported for use by heartbeat/jobs
- When a rate limit is detected, rateLimitResetAt is set globally; the daemon can check isRateLimited() to pause heartbeats/jobs until the quota resets
- Telegram: exit code 124 (master's timeout code) → user-friendly ⏱ message

Preserves discord.listenChannels and listenGuilds (not removed as in the original PR). RATE_LIMIT_PATTERN tightened to /you(?:'|')ve hit your limit/i (no false positives on "out of extra usage", which appears in non-rate-limit contexts).

Attribution: original concept and implementation by @dmitryanchikov (PR #38)
(cherry picked from commit 4adcc031e135b4d86a08fac2027740518c9dce9e)
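A sketch of the per-name resolution (telegram→5m, heartbeat→15m, everything else falls through to job→default→30m). The field names mirror the TimeoutsConfig description; the fallback chain for unnamed categories is an assumption:

```typescript
// Per-name timeout minutes, configurable in settings.json.
interface TimeoutsConfig {
  telegram?: number;
  heartbeat?: number;
  job?: number;
  default?: number;
}

function resolveTimeoutMs(name: string, cfg: TimeoutsConfig = {}): number {
  let minutes: number;
  if (name === "telegram") minutes = cfg.telegram ?? 5;
  else if (name === "heartbeat") minutes = cfg.heartbeat ?? 15;
  else minutes = cfg.job ?? cfg.default ?? 30; // everything else → job → default
  return minutes * 60_000;
}
```

Keeping the mapping in one pure function is what makes hot-reload cheap: the 30s settings cycle just swaps the cfg object and the next execClaude() call picks up the new values.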
…to timeouts.default (cherry picked from commit 00961ef91ea324302064cb753b5800357b2a6c39)
…ass "job" category (cherry picked from commit 045ee48b2188aa836a505d94e4f7a639aa0ba4ad)
…ting pattern (cherry picked from commit be5d150d5449587f633e8dcd5e072474718682a2)
Ports moazbuilds/claudeclaw#19 by dathtd119.
- Extend the json() helper to accept an optional HTTP status code
- Add src/ui/auth.ts with checkBearer() for Bearer token validation
- Add an apiToken field to Settings (parsed from settings.json)
- Add a receiveEnabled flag to TelegramConfig (default true); when false, Telegram polling is skipped so the token is used send-only
- Add a POST /api/inject endpoint protected by Bearer auth; on success, optionally forwards the response to the first allowed Telegram user

Co-authored-by: dathtd119 <dathtd119@users.noreply.github.com>
(cherry picked from commit f4fec0f93a726d472b0034803433aaa26612d72c)
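A sketch of what a checkBearer() like the one above might look like. The timing-safe comparison is an extra hardening choice added here for illustration, not something claimed by the port:

```typescript
import { timingSafeEqual } from "node:crypto";

// Validate an Authorization header against the configured apiToken.
// An empty/missing apiToken disables the endpoint entirely.
function checkBearer(header: string | undefined, apiToken: string): boolean {
  if (!apiToken || !header?.startsWith("Bearer ")) return false;
  const presented = Buffer.from(header.slice("Bearer ".length));
  const expected = Buffer.from(apiToken);
  // timingSafeEqual requires equal lengths; a length mismatch is an
  // immediate (and safe) reject.
  return presented.length === expected.length && timingSafeEqual(presented, expected);
}
```

The empty-token guard matters for an opt-in endpoint: without it, an unset apiToken would make `Bearer ` (empty credential) a valid login.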
… hot-reload

- buildTechnicalInfo now strips apiToken, telegram.token, discord.token, slack.botToken, and slack.appToken before returning (both the raw settingsJson and the snapshot.settings copy)
- Add stopPolling() to telegram.ts (sets running=false, isPolling=false) so polling can be stopped at runtime; also reset running=true in startPolling so it is restartable after a stop
- Track telegramReceiveEnabled in start.ts; hot-reload now handles the case where the Telegram token is unchanged but receiveEnabled toggles: it starts or stops polling without needing a token change

(cherry picked from commit cb58c292af7b796628946c902b82b18475dc02ea)
…op/start race

stopPolling() increments pollingGeneration (invalidating the current loop) before clearing isPolling. startPolling() captures the new generation value and passes it to poll(). The poll loop checks pollingGeneration === generation after every getUpdates await, so a stale loop exits as soon as its in-flight 30s long-poll request returns, even if running was reset to true by a concurrent startPolling() call. The standalone telegram() entry point also uses the generation counter.

(cherry picked from commit c1769e26239c5d3a6ec65dc8e4ddf462f8fc8cac)
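The generation-counter pattern described above, reduced to its core. Names follow the commit message; the loop body is a simplified assumption (the real loop does Telegram getUpdates long-polling):

```typescript
let pollingGeneration = 0;

// Bumping the generation invalidates every loop started with an older value.
function stopPolling(): void {
  pollingGeneration++;
}

async function pollLoop(
  getUpdates: () => Promise<void>, // stands in for the 30s long-poll request
  onTick: () => void,
): Promise<void> {
  const generation = ++pollingGeneration; // each start claims a fresh generation
  while (pollingGeneration === generation) {
    await getUpdates(); // the generation may change while this is in flight
    if (pollingGeneration !== generation) break; // stale loop: exit quietly
    onTick();
  }
}
```

Compared to a plain `running` boolean, the counter survives the stop-then-start race: a concurrent restart bumps the generation again, so the old loop can never mistake the new "running" state for its own.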
…ng-block session corruption (cherry picked from commit d20c9ce3bb44704e62d3ecc59d7d5593e7dff25e)
…ssion ID after retry

When a thinking-block signature mismatch is detected:
- Global sessions: backupSession() (renames to session_N.backup)
- Agent sessions: resetSession(agentName) (removes the scoped session file)
- Thread sessions: removeThreadSession(threadId) (removes from sessions.json)

After the fresh retry, if exec.sessionId is set, persist it via createSession/createThreadSession so subsequent calls resume the correct session rather than creating a new one again.

Also imports: resetSession from sessions, removeThreadSession from sessionManager

(cherry picked from commit 98c03dcffd967a0e75e333c65251fb317064bcf6)
…te primary recovery

- Gate the primary SIGNATURE_ERROR block with !usedFallback to prevent it from resetting the primary/thread/agent session when the error actually came from the fallback provider
- Inside the fallback block, detect SIGNATURE_ERROR on the fallback's exec when a fallback session was resumed: reset the fallback session via resetFallbackSession(), retry fresh with the fallback provider, and persist the recovered session ID via createFallbackSession()
- Import resetFallbackSession from sessions

(cherry picked from commit 0cfa840664a81249b5fcb7f042ff33e5fafb8bff)
…ion tool

Adds an stt.delegateTool config field. When set to an MCP tool name or CLI command (e.g. "mcp__whisper__transcribe" or "faster-whisper"), whisper is skipped and Claude is asked to call that specific tool with the audio file path. When unset (default), the existing whisper path is unchanged.

This allows users to swap in any transcription backend without patching claudeclaw — set the tool name in settings.json and Claude orchestrates the call.

Ported from: PR #35 by @moazbuilds-claudeclaw contributor
(cherry picked from commit 0f2ec254532aa1da7650ff73ec56b05d1260b670)
…159-160 Bulk cherry-pick PRs 155, 156, 157, 159 and 160
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1f5373bb23
<<<<<<< HEAD
import { getSession, createSession, incrementTurn, markCompactWarned, resetSession } from "./sessions";
=======
Remove unresolved merge markers from executable TS code
This commit leaves raw merge-conflict markers in src/runner.ts (and other runtime files), which makes the file invalid TypeScript. Any command path that loads this module (for example bun run src/index.ts ...) will fail to parse before runtime logic executes, so the daemon cannot start until these markers are resolved.
<<<<<<< HEAD
"name": "claudeclaw-plus",
"version": "2.0.2",
Resolve conflict markers in plugin manifest JSON
The plugin manifest contains unresolved merge markers, so .claude-plugin/plugin.json is not valid JSON. Any tooling that reads this manifest (plugin validation, packaging, or marketplace ingestion) will fail to parse it, blocking release or distribution workflows for this plugin.
Superseded by #11 (today's sync).
Automated sync from moazbuilds/claudeclaw:master has merge conflicts. Resolve before merging.