Conversation
- prepare-release.yml: fires on PR merge to dev (not main); version bump PR targets dev instead of main - release.yml: triggers on dev→main PR merge instead of commit message on push; adds sync-back step to keep dev aligned with main after release Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(ui): apply ASCII logo gradient by X column, not string index ink-gradient maps colors by character index across the whole string, so the p descender (last two lines) always got the tail/pink color regardless of its leftward visual position. Fix: render each logo line separately with its own <Gradient>, padded to logoWidth so column X maps to the same gradient fraction on every line. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> * fix(ui): remove Static windowing that caused messages to disappear Ink's <Static> tracks rendered items by array INDEX, not React key. It stores the last array length and slices from that index on each render. When the array stops growing (constant length), the index overshoots and nothing new is printed — causing streamed messages to vanish. PR #45 introduced two patterns that broke this invariant: 1. STATIC_HISTORY_WINDOW=200 in MainContent.tsx — sliding window kept the array at a constant 204 items (3 fixed + 200 history + banner), so after the 201st history item nothing was ever printed by Static. 2. MAX_HISTORY_ITEMS=500 in useHistoryManager.ts — pruning the front of the array kept it at exactly 500 items, same effect. 3. Same AGENT_STATIC_HISTORY_WINDOW=200 windowing in AgentChatView.tsx. Fix: pass all history items to Static (array only ever grows). Remove TruncatedHistoryBanner from within Static (it can't update once committed to the terminal anyway, and its conditional insertion shifted existing indices on first appearance). Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> --------- Co-authored-by: Automaker <automaker@localhost> Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…rotection) Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
… failure
- schemaValidator: add Array.isArray guard so array tool params return
'Value of params must be an object' immediately instead of reaching AJV
- openai converter: return plain string content for text-only tool messages
instead of [{type:'text',...}] array — LiteLLM and most OpenAI-compatible
local providers only accept string content and crash on array content parts
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Single-text tool responses (validation errors, simple outputs) now return
content as a plain string instead of [{type:'text',text:'...'}] array.
Many OpenAI-compatible providers (LiteLLM, local models) only accept string
content in tool messages and crash with 'Can only get item pairs from a
mapping' on array content.
Multi-part responses (text+media, multi-text blocks, unsupported media
placeholders) keep array format to preserve all content parts.
Reverts the overly broad Array.isArray guard in schemaValidator — AJV already
rejects arrays for object-typed schemas, and the guard incorrectly blocked
valid array inputs for 2020-12 prefixItems schemas.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…on cascade When a weak/local model hits max_tokens and produces empty responses, tool errors accumulate in context causing subsequent calls to also fail with NO_RESPONSE_TEXT. Add trimToolErrorsFromContext() to strip trailing model-tool-call + user-tool-error pairs (up to 6 pairs), then attempt one final recovery call with the cleaned context before giving up. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
When a model hits max_tokens mid-tool-call (producing Shell {}), the
truncation error gets re-added to context making the next turn also
overflow. At the start of each sendMessageStream, detect the cascade
(truncation-guidance marker in the last user turn) and pre-trim:
1. Remove the error tool-call pairs (trimToolErrorsFromContext)
2. Cap any large preceding tool responses to 10K chars
This prevents the Shell {} → error → Shell {} loop that affected both
weak models and frontier models (Claude Sonnet) on large tool outputs.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
… history When dev→main is squash-merged, the tag-to-tag git log only shows "chore: release" commits which get filtered, silently skipping the Discord post. Add a fallback that checks origin/dev (which retains the individual commits at release time) and a post-discord.yml workflow for manual backfill. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
ACP tests were hardcoded to use `methodId: 'openai'` and the e2e workflow passed OPENAI_API_KEY, which is not configured in CI. Since protoCLI uses Anthropic as its primary provider, update everything to use Anthropic auth: - authMethods.ts: expose USE_ANTHROPIC instead of USE_OPENAI - acp-integration.test.ts: change authenticate to methodId 'anthropic', update openaiModel selector to anthropicModel, skip qwen-oauth test (Qwen-specific model type, no equivalent in protoCLI) - acp-cron.test.ts: same authenticate change - e2e.yml: pass ANTHROPIC_API_KEY instead of OpenAI secrets Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
The LiteLLM gateway uses USE_OPENAI auth (OPENAI_API_KEY + OPENAI_BASE_URL + OPENAI_MODEL). The v0.25.17 change to Anthropic auth was incorrect. Reverts all test and workflow changes back to openai methodId and OPENAI_* secrets. The actual fix required is adding the three gateway secrets (OPENAI_API_KEY, OPENAI_BASE_URL, OPENAI_MODEL) to GitHub repository secrets. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
ACP tests require gateway credentials that are not configured in CI. Since ACP is not currently in use, skip these tests automatically when OPENAI_API_KEY is absent rather than failing the E2E job. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
WalkthroughThis pull request migrates from Anthropic to OpenAI API across CI/CD workflows and integration tests. Environment variables in E2E jobs are updated from Changes
Estimated code review effort🎯 3 (Moderate) | ⏱️ ~20 minutes 🚥 Pre-merge checks | ✅ 1 | ❌ 2❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches📝 Generate docstrings
🧪 Generate unit tests (beta)
Comment |
There was a problem hiding this comment.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@integration-tests/cli/acp-integration.test.ts`:
- Line 20: The current SKIP_ACP flag only checks OPENAI_API_KEY and lets tests
run when only the key is present; instead ensure the suite is skipped unless the
full OpenAI config is available by using the same logic as getAuthTypeFromEnv():
update the SKIP_ACP computation to return true when IS_SANDBOX is true OR when
getAuthTypeFromEnv() !== 'openai' (or equivalently when any of OPENAI_API_KEY,
OPENAI_MODEL, or OPENAI_BASE_URL are missing), so the ACP suites are only
executed when getAuthTypeFromEnv() selects 'openai'.
- Around line 501-505: The test currently finds an "openai" model by substring
which is wrong because availableModels' modelId is the configured OPENAI_MODEL;
update the lookup to directly match the configured model id (use
newSession.models.availableModels.find(m => m.modelId === OPENAI_MODEL or the
test's config variable) and then assert it's defined (openaiModel) before using
it (references: newSession.models.availableModels and openaiModel).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d616cd80-04ab-4e6d-a921-3aaac8d14607
⛔ Files ignored due to path filters (1)
package-lock.jsonis excluded by!**/package-lock.json
📒 Files selected for processing (10)
.github/workflows/e2e.ymlintegration-tests/cli/acp-cron.test.tsintegration-tests/cli/acp-integration.test.tspackage.jsonpackages/cli/package.jsonpackages/cli/src/acp-integration/authMethods.tspackages/core/package.jsonpackages/test-utils/package.jsonpackages/web-templates/package.jsonpackages/webui/package.json
| process.env['QWEN_SANDBOX'] && | ||
| process.env['QWEN_SANDBOX']!.toLowerCase() !== 'false'; | ||
|
|
||
| const SKIP_ACP = IS_SANDBOX || !process.env['OPENAI_API_KEY']; |
There was a problem hiding this comment.
Skip ACP suites only when the full OpenAI config is available.
getAuthTypeFromEnv() only selects OpenAI when OPENAI_API_KEY, OPENAI_MODEL, and OPENAI_BASE_URL are all populated. With the current gate, CI can still enter these suites with only the key set and then fail later on authenticate('openai') or the OpenAI model assertions instead of cleanly skipping.
🩹 Proposed fix
-const SKIP_ACP = IS_SANDBOX || !process.env['OPENAI_API_KEY'];
+const HAS_OPENAI_ENV =
+ !!process.env['OPENAI_API_KEY'] &&
+ !!process.env['OPENAI_MODEL'] &&
+ !!process.env['OPENAI_BASE_URL'];
+const SKIP_ACP = IS_SANDBOX || !HAS_OPENAI_ENV;📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const SKIP_ACP = IS_SANDBOX || !process.env['OPENAI_API_KEY']; | |
| const HAS_OPENAI_ENV = | |
| !!process.env['OPENAI_API_KEY'] && | |
| !!process.env['OPENAI_MODEL'] && | |
| !!process.env['OPENAI_BASE_URL']; | |
| const SKIP_ACP = IS_SANDBOX || !HAS_OPENAI_ENV; |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@integration-tests/cli/acp-integration.test.ts` at line 20, The current
SKIP_ACP flag only checks OPENAI_API_KEY and lets tests run when only the key is
present; instead ensure the suite is skipped unless the full OpenAI config is
available by using the same logic as getAuthTypeFromEnv(): update the SKIP_ACP
computation to return true when IS_SANDBOX is true OR when getAuthTypeFromEnv()
!== 'openai' (or equivalently when any of OPENAI_API_KEY, OPENAI_MODEL, or
OPENAI_BASE_URL are missing), so the ACP suites are only executed when
getAuthTypeFromEnv() selects 'openai'.
| // Use openai model to avoid auth issues | ||
| const openaiModel = newSession.models.availableModels.find((model) => | ||
| model.modelId.includes('openai'), | ||
| ); | ||
| expect(anthropicModel).toBeDefined(); | ||
| expect(openaiModel).toBeDefined(); |
There was a problem hiding this comment.
Look up the configured model ID directly.
availableModels is sourced from OPENAI_MODEL, so modelId is the configured model name, not a provider label that necessarily contains "openai". This lookup can return undefined, which then blows up at Line 510 and makes the assertion on Line 527 fail for the wrong reason.
🩹 Proposed fix
- // Use openai model to avoid auth issues
- const openaiModel = newSession.models.availableModels.find((model) =>
- model.modelId.includes('openai'),
- );
+ const configuredOpenAIModel = process.env['OPENAI_MODEL'];
+ expect(configuredOpenAIModel).toBeTruthy();
+ const openaiModel = newSession.models.availableModels.find(
+ (model) => model.modelId === configuredOpenAIModel,
+ );📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Use openai model to avoid auth issues | |
| const openaiModel = newSession.models.availableModels.find((model) => | |
| model.modelId.includes('openai'), | |
| ); | |
| expect(anthropicModel).toBeDefined(); | |
| expect(openaiModel).toBeDefined(); | |
| const configuredOpenAIModel = process.env['OPENAI_MODEL']; | |
| expect(configuredOpenAIModel).toBeTruthy(); | |
| const openaiModel = newSession.models.availableModels.find( | |
| (model) => model.modelId === configuredOpenAIModel, | |
| ); | |
| expect(openaiModel).toBeDefined(); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@integration-tests/cli/acp-integration.test.ts` around lines 501 - 505, The
test currently finds an "openai" model by substring which is wrong because
availableModels' modelId is the configured OPENAI_MODEL; update the lookup to
directly match the configured model id (use
newSession.models.availableModels.find(m => m.modelId === OPENAI_MODEL or the
test's config variable) and then assert it's defined (openaiModel) before using
it (references: newSession.models.availableModels and openaiModel).
Code Coverage Summary
CLI Package - Full Text ReportCore Package - Full Text ReportFor detailed HTML reports, please see the 'coverage-reports-22.x-ubuntu-latest' artifact from the main CI run. |
Release v0.25.18 — skip ACP integration tests when OPENAI_API_KEY is not set, fixing E2E CI failures.
Summary by CodeRabbit
Changes
Chores