Implement Bun/Elysia MVP and Bun-first validation #1
Conversation
Caution: Review failed. Pull request was closed or merged during review.

📝 Walkthrough

Replaces Elixir CI/validation with Bun commands and adds a full Bun-based Symphony implementation: CLI, service, orchestrator, agent runner, codex app-server client, dynamic tools, trackers, workspace manager, HTTP server, config/workflow loading, utilities, scripts, documentation, and extensive tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI
    participant Svc as SymphonyService
    participant WF as WorkflowStore
    participant Orch as Orchestrator
    participant Tracker as Tracker (Linear)
    participant Agent as AgentRunner
    participant WS as WorkspaceManager
    participant AppSvr as AppServerClient
    participant Codex as Codex Process
    rect rgba(200,200,255,0.5)
        CLI->>Svc: start(workflowPath, port?)
        Svc->>WF: start()
        Svc->>Orch: start()
    end
    rect rgba(200,255,200,0.5)
        Orch->>Tracker: fetchCandidateIssues()
        Tracker-->>Orch: issues[]
        Orch->>Agent: dispatch(issue)
        Agent->>WS: createForIssue(issue)
        WS-->>Agent: workspacePath
        Agent->>WS: runBeforeRunHook(workspace, issue)
        Agent->>AppSvr: startSession(workspace)
        AppSvr->>Codex: spawn subprocess (stdin/stdout JSON lines)
        AppSvr-->>Agent: session
        Agent->>AppSvr: runTurn(prompt, issue)
        AppSvr->>Codex: write turn_start
        Codex-->>AppSvr: events / messages
        AppSvr-->>Agent: turn_result / events
        Agent->>Tracker: fetchIssueStatesByIds([id])
        Agent->>WS: runAfterRunHook(workspace, issue)
        Agent-->>Orch: complete/update
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 12
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
🟡 Minor comments (9)
bun/scripts/mvp-smoke.sh (1)

11-12: ⚠️ Potential issue | 🟡 Minor

Validate `STARTUP_TIMEOUT_SECONDS` before arithmetic use. Line 94 assumes a numeric timeout. A non-numeric env override can fail the readiness loop setup unexpectedly.
Proposed fix
```diff
 if ! [[ "$PORT" =~ ^[0-9]+$ ]]; then
   echo "PORT must be a non-negative integer. Got: $PORT" >&2
   exit 1
 fi
+
+if ! [[ "$STARTUP_TIMEOUT_SECONDS" =~ ^[0-9]+$ ]]; then
+  echo "STARTUP_TIMEOUT_SECONDS must be a non-negative integer. Got: $STARTUP_TIMEOUT_SECONDS" >&2
+  exit 1
+fi
```

Also applies to: 94-94
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/scripts/mvp-smoke.sh` around lines 11 - 12: Validate and coerce STARTUP_TIMEOUT_SECONDS to a numeric value before any arithmetic in the readiness loop: read STARTUP_TIMEOUT_SECONDS (default to 20), check it with a numeric regex like [[ "$STARTUP_TIMEOUT_SECONDS" =~ ^[0-9]+$ ]], and if it fails set it to the default (20) or an integer fallback, then use the validated integer variable in the arithmetic at the readiness loop (where STARTUP_TIMEOUT_SECONDS is used around line 94); this prevents non-numeric env overrides from breaking the arithmetic and readiness logic.

bun/src/utils/errors.ts (1)
20-29: ⚠️ Potential issue | 🟡 Minor

`asErrorMessage` violates its return type contract. Line 26 may return `undefined` (when passed `undefined`, functions, or symbols), breaking the declared `string` return type. This occurs because `JSON.stringify` can return `undefined` at runtime, even though TypeScript's standard library definitions type it as returning `string`.

Proposed fix
```diff
 export const asErrorMessage = (error: unknown): string => {
   if (error instanceof Error) {
     return error.message;
   }
   try {
-    return JSON.stringify(error);
+    const serialized = JSON.stringify(error);
+    return serialized ?? String(error);
   } catch {
     return String(error);
   }
 };
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@bun/src/utils/errors.ts` around lines 20 - 29: The asErrorMessage function can return undefined because JSON.stringify may return undefined at runtime; update asErrorMessage to ensure it always returns a string by capturing the JSON.stringify result into a variable (e.g., const json = JSON.stringify(error)), then if json === undefined return String(error) (or a fallback like "undefined"), otherwise return json; keep the existing Error instance check and the try/catch fallback but ensure the try branch converts an undefined stringify result to a string before returning.

bun/docs/mvp-manual-test.md (1)
11-12: ⚠️ Potential issue | 🟡 Minor

Use portable paths in manual test instructions.
These snippets currently hard-code a local user path, which makes the checklist less reusable.
Suggested fix
```diff
-cd /Users/vorcigernix/Dev/symphony/bun
+cd /path/to/symphony/bun
 bun install
```

```diff
-cd /Users/vorcigernix/Dev/symphony
+cd /path/to/symphony
 ./bun/scripts/mvp-smoke.sh --workflow ./bun/WORKFLOW.test.md --port 8789
```

Also applies to: 18-20, 28-29, 86-87
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/docs/mvp-manual-test.md` around lines 11 - 12: Replace hard-coded absolute user paths in the manual test steps (e.g., the "cd /Users/vorcigernix/Dev/symphony/bun" command) with portable alternatives such as using a relative path (cd ./bun), a repository-root placeholder (cd <repo-root>/bun), or an environment-variable-based path (cd $HOME/Dev/symphony/bun) and ensure all repeated occurrences (the other spots noted around the same commands) are updated; keep the following commands like "bun install" unchanged but documented to be run from the portable path so the checklist is reusable across machines.

bun/README.md (1)
45-47: ⚠️ Potential issue | 🟡 Minor

Replace machine-specific absolute path in command examples.
Line 45 hard-codes a local filesystem path, which breaks reproducibility for others.
Suggested fix
```diff
-cd /Users/vorcigernix/Dev/symphony
+cd /path/to/symphony
 ./bun/scripts/mvp-smoke.sh --workflow ./bun/WORKFLOW.test.md --port 8789
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/README.md` around lines 45 - 47: The example command in the README hard-codes a local absolute path; update the example to use a generic, reproducible form such as using a relative path or variable (e.g., "cd $(pwd)" or simply "cd symphony" or omit the cd and run "./bun/scripts/mvp-smoke.sh --workflow ./bun/WORKFLOW.test.md --port 8789") so the invocation of the mvp-smoke.sh script and reference to WORKFLOW.test.md do not rely on a user-specific filesystem path.

bun/test/orchestrator-conformance.test.ts (1)
78-88: ⚠️ Potential issue | 🟡 Minor

Test title contradicts the asserted behavior. Line 78 says "not eligible," but Line 86 asserts `isCandidateIssue(...)` is `true`. Please rename the test to match the actual expectation.

Suggested fix
```diff
-  test("todo issue with non-terminal blocker is not eligible", () => {
+  test("todo issue with non-terminal blocker remains a candidate but is flagged as blocked", () => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/test/orchestrator-conformance.test.ts` around lines 78 - 88: The test name "todo issue with non-terminal blocker is not eligible" contradicts the assertions; rename the test to reflect that the issue is considered a candidate and is blocked by a non-terminal blocker — e.g., change the test description string in the test that calls isCandidateIssue and isTodoBlockedByNonTerminal to something like "todo issue with non-terminal blocker is eligible and detected as blocked"; update the test declaration (the string passed to test(...)) so it matches the asserted behavior involving isCandidateIssue(...) and isTodoBlockedByNonTerminal(...).

bun/src/orchestrator/orchestrator.ts (1)
916-919: ⚠️ Potential issue | 🟡 Minor

Top-level `usage` is skipped due to `asRecord(...) || asRecord(...)`. `asRecord(...)` always returns an object, so the right-hand `payload.usage` path is never reached when `total_token_usage` is absent.

💡 Proposed fix
```diff
-  const direct = asRecord(payload.total_token_usage) || asRecord(payload.usage);
-  if (Object.keys(direct).length > 0) {
-    return direct;
-  }
+  const totalUsage = asRecord(payload.total_token_usage);
+  if (Object.keys(totalUsage).length > 0) {
+    return totalUsage;
+  }
+  const usage = asRecord(payload.usage);
+  if (Object.keys(usage).length > 0) {
+    return usage;
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/src/orchestrator/orchestrator.ts` around lines 916 - 919: The current assignment to `direct` uses `asRecord(payload.total_token_usage) || asRecord(payload.usage)` which never evaluates the right side because `asRecord` always returns an object; update the logic in the `direct` assignment so it prefers `payload.total_token_usage` only when that field is actually present/non-empty and falls back to `payload.usage` otherwise (e.g., check `payload.total_token_usage` for null/undefined or check `Object.keys(asRecord(payload.total_token_usage)).length > 0`), keeping the rest of the flow that returns `direct` unchanged; refer to `asRecord`, `payload.total_token_usage`, `payload.usage`, and the `direct` variable to locate and modify the code.

bun/src/config/config.ts (1)
355-359: ⚠️ Potential issue | 🟡 Minor

Fix `~` expansion for non-`~/` values. `value.slice(2)` drops one character for inputs like `~workspace` (`workspace` becomes `orkspace`).

💡 Proposed fix
```diff
-  const suffix = value === "~" ? "" : value.slice(2);
+  const suffix =
+    value === "~"
+      ? ""
+      : value.startsWith("~/") || value.startsWith("~\\")
+        ? value.slice(2)
+        : value.slice(1);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/src/config/config.ts` around lines 355 - 359: The tilde-expansion logic in the block handling value.startsWith("~") incorrectly uses value.slice(2) for all non-"~" values, which mangles inputs like "~workspace"; update the suffix calculation in config.ts so it uses "" when value === "~", value.slice(2) when the second character is "/" (i.e., value.startsWith("~/")), and value.slice(1) otherwise, then pass that suffix into resolve(home, suffix) so "~", "~/path", and "~workspace" all expand correctly; refer to the variables value, home and the resolve(...) call to locate the change.

bun/src/http/presenter.ts (1)
64-65: ⚠️ Potential issue | 🟡 Minor

Use the same workspace-key normalization as workspace creation. `workspace.path` is built from raw `issueIdentifier`, but workspaces are created using `sanitizeWorkspaceKey(...)`, so these can diverge.

💡 Proposed fix
```diff
 import type { RuntimeSnapshot } from "../types";
+import { sanitizeWorkspaceKey } from "../utils/normalize";
@@
     workspace: {
-      path: `${workspaceRoot.replace(/[\\/]$/, "")}/${issueIdentifier}`,
+      path: `${workspaceRoot.replace(/[\\/]$/, "")}/${sanitizeWorkspaceKey(issueIdentifier)}`,
     },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/src/http/presenter.ts` around lines 64 - 65: workspace.path is constructed from the raw issueIdentifier which can diverge from how workspaces are created; update the presenter to normalize the key with the same sanitizeWorkspaceKey(...) used during workspace creation (i.e., call sanitizeWorkspaceKey(issueIdentifier) and use that result when building path and any workspace-key related values) so the path uses the canonical workspace key and stays consistent with workspace creation logic.

bun/src/tracker/linear-client.ts (1)
141-145: ⚠️ Potential issue | 🟡 Minor

`fetchIssueStatesByIds` truncates when ID count exceeds page size. This call only requests one page (`first: min(ids.length, 50)`), so IDs beyond the first page are dropped.

💡 Proposed fix
```diff
-  const body = await this.graphql(ISSUE_BY_IDS_QUERY, {
-    ids,
-    first: Math.min(ids.length, ISSUE_PAGE_SIZE),
-    relationFirst: ISSUE_PAGE_SIZE,
-  });
-
-  const nodes =
-    asRecord(body.data)?.issues && asRecord(asRecord(body.data).issues).nodes;
+  const nodeBatches: unknown[] = [];
+  for (let i = 0; i < ids.length; i += ISSUE_PAGE_SIZE) {
+    const chunk = ids.slice(i, i + ISSUE_PAGE_SIZE);
+    const body = await this.graphql(ISSUE_BY_IDS_QUERY, {
+      ids: chunk,
+      first: chunk.length,
+      relationFirst: ISSUE_PAGE_SIZE,
+    });
+    const nodes = asRecord(asRecord(asRecord(body.data).issues).nodes);
+    nodeBatches.push(...(Array.isArray(nodes) ? (nodes as unknown[]) : []));
+  }
+  const nodes = nodeBatches;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/src/tracker/linear-client.ts` around lines 141 - 145, The current fetchIssueStatesByIds uses a single GraphQL call with first: Math.min(ids.length, ISSUE_PAGE_SIZE) which truncates IDs beyond ISSUE_PAGE_SIZE; modify fetchIssueStatesByIds to paginate the ids array into chunks of size ISSUE_PAGE_SIZE, call this.graphql(ISSUE_BY_IDS_QUERY, { ids: chunk, first: chunk.length, relationFirst: ISSUE_PAGE_SIZE }) for each chunk, collect and merge all returned issue nodes into a single result (preserving mapping to the original ids), and return the aggregated set so no IDs are dropped.
🧹 Nitpick comments (8)
bun/scripts/mvp-smoke.sh (1)
6-6: Drop unused `ROOT_DIR` assignment. Line 6 is currently unused and can be removed to reduce script noise.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/scripts/mvp-smoke.sh` at line 6: Remove the unused ROOT_DIR assignment in bun/scripts/mvp-smoke.sh: delete the line that sets ROOT_DIR="$(cd "$BUN_DIR/.." && pwd)" so the script no longer defines an unused variable; if any future logic needs project root, replace this with an explicit reference or a small helper function, otherwise simply remove the ROOT_DIR declaration to reduce noise.

.github/workflows/pr-description-lint.yml (1)
15-18: Pin Bun version instead of `latest` for deterministic CI behavior.

Line 18 uses `bun-version: latest`, which can cause non-deterministic builds when new Bun releases ship. Use one of these approaches:

- `bun-version-file: ".bun-version"` (preferred; supports `.tool-versions` and `package.json`)
- `bun-version: "1.3.3"` (explicit pinned version)
- `packageManager: "bun@1.3.3"` in `package.json` (the action falls back to this)

Also ensure `bun.lock` is committed for reproducible dependency resolution with `bun ci`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/pr-description-lint.yml around lines 15 - 18: The workflow currently sets the setup-bun action with bun-version: latest which causes non-deterministic CI; update the GitHub Actions step that uses oven-sh/setup-bun@v2 (the block containing the bun-version field) to pin Bun by either specifying bun-version-file: ".bun-version" (preferred), or replace bun-version: latest with an explicit bun-version: "1.3.3" (or similar pinned version), or ensure packageManager: "bun@1.3.3" is present in package.json so the action can fall back; also commit bun.lock to the repo so bun ci has reproducible dependency resolution.

bun/test/app-server-protocol.test.ts (1)
33-61: Guarantee `stopSession` in `finally` for spawned session tests.

If a throw happens before explicit `stopSession`, the subprocess may leak and affect subsequent tests.

Suggested refactor pattern
```diff
-    try {
+    let session: Awaited<ReturnType<AppServerClient["startSession"]>> | null = null;
+    try {
       const config = buildConfig(env.workspaceRoot, env.logPath, "success");
       const client = new AppServerClient(() => config);
       const events: string[] = [];
-      const session = await client.startSession(env.workspacePath);
+      session = await client.startSession(env.workspacePath);
       const result = await client.runTurn(session, "Hello", baseIssue, {
         onMessage: (event) => {
           events.push(event.event);
         },
       });
-      client.stopSession(session);
       expect(result.threadId).toBe("thread-1");
       // ...
     } finally {
+      if (session) {
+        client.stopSession(session);
+      }
       await env.cleanup();
     }
```

Also applies to: 67-87, 93-113, 119-136
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/test/app-server-protocol.test.ts` around lines 33 - 61: The test may leak a spawned subprocess if an error occurs before client.stopSession is called; ensure each block that calls client.startSession (the session variable returned by AppServerClient.startSession) always calls client.stopSession in the finally: capture the session in a variable outside the try, wrap the existing logic in try, and in finally check if session is defined then await client.stopSession(session) before calling env.cleanup(); apply the same pattern to the other test blocks that call startSession (the blocks around lines with startSession/runTurn pairs) so sessions are always stopped even on failure.

bun/test/fixtures/fake-codex-app-server.ts (1)
138-142: Fail fast on malformed JSON input in the fixture. Silently ignoring parse errors can mask protocol regressions and lead to hanging tests.
🧪 Proposed fixture hardening
```diff
   try {
     msg = JSON.parse(trimmed) as Record<string, unknown>;
   } catch {
-    return;
+    log(`invalid_json:${trimmed}`);
+    process.stderr.write("invalid JSON received by fake codex app server\n");
+    process.exit(1);
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/test/fixtures/fake-codex-app-server.ts` around lines 138 - 142: The current try/catch around msg = JSON.parse(trimmed) swallows malformed JSON and returns silently; change the catch to fail fast so tests surface protocol regressions. Replace the empty catch with logic that either rethrows the parse error (e.g., throw new SyntaxError with context including the trimmed payload) or logs the error and closes the connection/stream (ensure you call the socket/ws close path used elsewhere in this fixture), so malformed JSON no longer silently returns; update any tests that expect a connection close accordingly.

bun/src/tracker/memory-tracker.ts (1)
6-12: Defensively copy issue collections to avoid external mutation leaks.

`MemoryTracker` currently exposes internal mutable state by reference. A caller mutating the returned array/objects can unintentionally alter tracker behavior later.

♻️ Proposed hardening
```diff
   constructor(issues: Issue[]) {
-    this.issues = issues;
+    this.issues = issues.map((issue) => ({ ...issue }));
   }

   async fetchCandidateIssues(): Promise<Issue[]> {
-    return this.issues;
+    return this.issues.map((issue) => ({ ...issue }));
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/src/tracker/memory-tracker.ts` around lines 6 - 12: MemoryTracker currently stores and returns the same Issue[] reference; to prevent external mutation leaks, make defensive copies: in the constructor (MemoryTracker constructor) clone the incoming issues array and its elements (shallow copy at minimum) into this.issues, and in fetchCandidateIssues() return a new array (and cloned issue objects if needed) instead of returning this.issues directly; update references to this.issues accordingly (preserve Issue type) so callers cannot mutate internal state.

bun/src/config/workflow-store.ts (1)
28-39: Make `start()` idempotent to avoid duplicate pollers.

Calling `start()` twice currently creates multiple `setInterval` loops.

💡 Proposed fix
```diff
   async start(): Promise<void> {
+    if (this.timer) {
+      return;
+    }
+
     const loaded = await this.loadState(this.workflowPath);
     this.workflow = loaded.workflow;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/src/config/workflow-store.ts` around lines 28 - 39: The start() method can create duplicate pollers if called multiple times; update WorkflowStore.start to be idempotent by checking the existing this.timer (or this.loadedPath) before creating a new setInterval: if a timer already exists return immediately (or clear the existing timer and replace it if intended), then proceed to load state and set this.timer as currently done; reference the start method, this.timer, this.workflowPath, and WORKFLOW_POLL_INTERVAL_MS when implementing the guard so repeated calls do not spawn multiple intervals.

bun/src/config/config.ts (2)
105-107: Simplify redundant endpoint fallback expression. Both ternary branches return the same value.
💡 Proposed fix
```diff
   endpoint:
     normalizeNonEmptyString(tracker.endpoint) ??
-    (normalizeTrackerKind(tracker.kind) === "linear" ? DEFAULT_LINEAR_ENDPOINT : DEFAULT_LINEAR_ENDPOINT),
+    DEFAULT_LINEAR_ENDPOINT,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/src/config/config.ts` around lines 105 - 107, The endpoint assignment uses a redundant ternary that returns DEFAULT_LINEAR_ENDPOINT in both branches; replace the whole fallback expression with a single DEFAULT_LINEAR_ENDPOINT, i.e. set endpoint to normalizeNonEmptyString(tracker.endpoint) ?? DEFAULT_LINEAR_ENDPOINT and remove the normalizeTrackerKind(tracker.kind) ? ... : ... check so the symbols involved are normalizeNonEmptyString, tracker.endpoint, normalizeTrackerKind, tracker.kind and DEFAULT_LINEAR_ENDPOINT.
367-373: Relative path preservation misses common relative forms.

With `allowRelativePreservation = true`, values like `./workspaces` or `foo/bar` still get absolutized. That conflicts with the "relative by design" intent in this function.

💡 Proposed fix
```diff
-  if (value.includes(sep) || value.includes("/") || value.includes("\\")) {
-    return resolve(value);
-  }
-
-  if (allowRelativePreservation) {
+  if (allowRelativePreservation && !/^(?:[A-Za-z]:[\\/]|\/)/.test(value)) {
     return value;
   }

   return resolve(value);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/src/config/config.ts` around lines 367 - 373, The code currently absolutizes any value containing path separators before honoring allowRelativePreservation; move the allowRelativePreservation check earlier and treat common relative forms as preserved: if allowRelativePreservation is true and the value startsWith "./" or "../" or is not path.isAbsolute(value) but contains "/" or "\" (i.e., "foo/bar" or "./workspaces"), return the original value; otherwise fall back to resolve(value). Use path.isAbsolute (or equivalent) and the existing symbols allowRelativePreservation, value, sep, resolve to implement this change in the function.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bun/scripts/check-pr-body.ts`:
- Around line 78-80: Replace raw substring searches in headingPosition and
captureHeadingSection with line-anchored matching: split content into lines,
scan for a line that matches the heading at the start of the line (use a regex
like /^\s*(#{1,6}\s+)?<heading>\s*$/ or /^\s*<heading>\s*$/ depending on how
headings are provided), and return the character index of that line start (or -1
if not found). Apply the same line-based search to locate the next heading
boundary in captureHeadingSection so both functions only match headings that
appear alone at the start of a line, preventing false positives inside
paragraphs.
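As a sketch of the line-anchored matching described above (the `headingPosition` name comes from the review; the heading format and the escaping helper are assumptions, not the actual `check-pr-body.ts` code):

```typescript
// Hypothetical sketch: locate a heading only when it occupies its own line,
// so the same text inside a paragraph never counts as a section heading.
const escapeRegExp = (s: string): string =>
  s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

const headingPosition = (content: string, heading: string): number => {
  // Optional "#" prefix, then the heading text alone on the line.
  const pattern = new RegExp(`^\\s*(?:#{1,6}\\s+)?${escapeRegExp(heading)}\\s*$`);
  let offset = 0;
  for (const line of content.split("\n")) {
    if (pattern.test(line)) {
      return offset; // character index of the matching line's start
    }
    offset += line.length + 1; // +1 for the "\n" removed by split
  }
  return -1;
};
```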
In `@bun/scripts/mvp-smoke.sh`:
- Around line 27-37: When parsing the flags in mvp-smoke.sh, don't directly use
"$2" for --workflow, --port, or --issue; instead validate that a next argument
exists and isn't another flag before assigning WORKFLOW_PATH, PORT, or
ISSUE_IDENTIFIER (e.g., check that "$#" >= 2 and that "$2" doesn't start with
"-"), and if the value is missing print the usage/error and exit non-zero;
update the case arms handling --workflow, --port, and --issue to perform this
validation before doing the current assignments and shifts.
In `@bun/src/cli.ts`:
- Around line 56-65: The port parsing branch for arg "--port" should reject
malformed and out-of-range values: validate rawPort (from args[index + 1]) is
present, trim whitespace, ensure it matches a strict digits-only pattern (e.g.,
/^\d+$/) so things like "123abc" or " 123 " are rejected, parse to an integer
(port) and then enforce 0 <= port <= 65535, throwing clear errors (e.g., "--port
requires a non-negative integer between 0 and 65535" or "--port requires a
numeric value") when checks fail; update the checks around rawPort, port,
Number.parseInt, and the error messages accordingly.
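A minimal sketch of that stricter parsing (function name and error messages are illustrative, not the actual `cli.ts` code):

```typescript
// Hypothetical strict port parser: rejects "123abc", "12.5", and out-of-range
// values that a bare Number.parseInt would otherwise accept or truncate.
const parsePort = (rawPort: string | undefined): number => {
  if (rawPort === undefined) {
    throw new Error("--port requires a value");
  }
  const trimmed = rawPort.trim();
  if (!/^\d+$/.test(trimmed)) {
    throw new Error(`--port requires a numeric value, got: ${rawPort}`);
  }
  const port = Number.parseInt(trimmed, 10);
  if (port > 65535) {
    throw new Error(`--port must be between 0 and 65535, got: ${port}`);
  }
  return port;
};
```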
In `@bun/src/codex/app-server.ts`:
- Around line 335-340: The call to executeDynamicTool can throw and currently
skips the sendMessage reply for the request id; wrap the executeDynamicTool call
in a try/catch around executeDynamicTool(toolName, argumentsPayload,
this.configProvider()) and ensure sendMessage(session.process, { id, result })
is always invoked: on success send the normal result, on error send a structured
error response (e.g., { id, error: { message: err.message, name: err.name } } or
similar) so the item/tool/call always gets a reply; reference
executeDynamicTool, sendMessage, session.process, id, argumentsPayload and
this.configProvider() when making the change.
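The always-reply idea can be sketched like this (the reply shape and helper name are assumptions; the real code sends via `sendMessage(session.process, ...)`):

```typescript
// Sketch: run a tool handler and always produce a JSON-RPC style reply for
// the request id, turning thrown errors into a structured error payload.
type JsonRpcReply =
  | { id: number; result: unknown }
  | { id: number; error: { message: string; name: string } };

const replyForToolCall = async (
  id: number,
  run: () => Promise<unknown>,
): Promise<JsonRpcReply> => {
  try {
    return { id, result: await run() };
  } catch (err) {
    const e = err instanceof Error ? err : new Error(String(err));
    return { id, error: { message: e.message, name: e.name } };
  }
};
```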
- Around line 397-400: awaitResponse currently discards any non-matching
JSON-RPC messages when it does const payload = await bus.next(remaining); if
(payload.id !== requestId) continue; — instead, preserve those messages so
notifications/other requests aren’t lost: when payload.id !== requestId, push
the payload into a pending buffer or map (e.g., pendingMessages keyed by
payload.id or a notifications queue) or re-enqueue it onto remaining, then
continue; update awaitResponse to consult pendingMessages first before awaiting
bus.next and ensure symbols mentioned (awaitResponse, bus.next, remaining,
requestId, payload) are used to locate and implement the buffering/re-enqueue
logic.
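A rough sketch of that buffering, with the bus reduced to a `next()` function (the message shape and class name are assumptions based on the comment, not the real `app-server.ts` API):

```typescript
// Sketch: buffer non-matching JSON-RPC messages instead of discarding them,
// and consult the buffer before awaiting new messages.
type RpcMessage = { id?: number; [key: string]: unknown };

class ResponseRouter {
  private pending: RpcMessage[] = [];

  constructor(private next: () => Promise<RpcMessage>) {}

  async awaitResponse(requestId: number): Promise<RpcMessage> {
    // Consult previously buffered messages first.
    const bufferedIndex = this.pending.findIndex((m) => m.id === requestId);
    if (bufferedIndex >= 0) {
      return this.pending.splice(bufferedIndex, 1)[0];
    }
    for (;;) {
      const payload = await this.next();
      if (payload.id === requestId) {
        return payload;
      }
      this.pending.push(payload); // preserve notifications / other replies
    }
  }
}
```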
In `@bun/src/codex/dynamic-tool.ts`:
- Around line 95-123: The current logic around response.json() swallows JSON
parse failures into a fake body and then computes hasErrors so a 2xx non-JSON
response can return success: true; fix this by detecting parse failures
explicitly (e.g., set a parseFailed flag when (await response.json().catch(...))
falls back) and return a failureResponse or set success: false when parseFailed
and response.ok is true; update the code that computes hasErrors / the returned
object (referencing response.json(), the parse fallback, hasErrors,
failureResponse, and the success/contentItems return) so non-JSON 2xx responses
are treated as failures with an appropriate error message and body included.
In `@bun/src/config/workflow.ts`:
- Around line 15-22: The current catch blocks wraps all readFile failures as
SymphonyError("missing_workflow_file"); instead add a distinct IO error code:
extend WorkflowErrorCode with workflow_read_error, then in the readFile catch
check (error && (error as NodeJS.ErrnoException).code === "ENOENT") => throw
SymphonyError("missing_workflow_file", ... , { path, cause: error }); otherwise
throw SymphonyError("workflow_read_error", `Failed to read workflow file:
${path}`, { path, cause: error }); do not reuse workflow_parse_error for IO
errors so YAML parse errors remain distinct.
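A sketch of that ENOENT split, using a plain `Error` in place of `SymphonyError` (whose signature lives in the repo and is not shown here):

```typescript
// Sketch: distinguish a missing workflow file from other read failures so
// YAML parse errors and IO errors keep separate error codes.
import { readFile } from "node:fs/promises";

type WorkflowErrorCode = "missing_workflow_file" | "workflow_read_error";

const readWorkflowFile = async (path: string): Promise<string> => {
  try {
    return await readFile(path, "utf8");
  } catch (error) {
    const code = (error as NodeJS.ErrnoException).code;
    const kind: WorkflowErrorCode =
      code === "ENOENT" ? "missing_workflow_file" : "workflow_read_error";
    throw new Error(`${kind}: ${path}`);
  }
};
```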
In `@bun/src/http/server.ts`:
- Around line 67-79: The /api/v1/state route handler currently returns an error
payload with a 2xx status when orchestrator.snapshot() throws; update the
app.get("/api/v1/state") handler so that on catch it sends the error JSON with
an appropriate failure HTTP status (e.g., 500) instead of a successful 2xx code
— locate the route in server.ts and adjust the response semantics (where
statePayload and orchestrator.snapshot() are used) to set the non-2xx status for
the error branch.
- Around line 90-101: The current handler incorrectly maps any thrown/internal
error to a 404 by catching all exceptions and returning
errorEnvelope("issue_not_found"), which hides real failures; change the error
handling so that only a non-ok result from issuePayload leads to set.status =
404 and errorEnvelope("issue_not_found"), while exceptions from issuePayload,
orchestrator.snapshot(), or configProvider() are not translated into 404 —
instead catch the exception, log the error, set an appropriate 5xx status (e.g.,
set.status = 500) and return an internal error envelope (e.g.,
errorEnvelope("internal_error", "Internal server error")) or rethrow to let
upstream middleware handle it; update the block around
issuePayload/orchestrator.snapshot/configProvider().workspace.root and the catch
branch accordingly and preserve the successful return of result.payload when
result.ok is true.
In `@bun/src/orchestrator/orchestrator.ts`:
- Around line 291-342: fetchIssueStatesByIds may return fewer Issue objects than
requested, leaving workers for missing IDs running; after awaiting refreshed =
await tracker.fetchIssueStatesByIds(issueIds) compute the set of returned IDs
(from refreshed.map(i=>i.id)), find any missingIds from the original issueIds
input, log each missing ID (and identifier if available), and call
this.terminateRunningIssue(missingId, false) to stop those orphaned workers
before proceeding with the existing loop; use the existing running map and
logger to locate relevant entries and provide context.
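The missing-ID computation above can be sketched as a small pure helper (the name is illustrative; wiring it to `terminateRunningIssue` stays in the orchestrator):

```typescript
// Sketch: compute which requested IDs the tracker response dropped, so the
// caller can terminate workers for those orphaned issues.
const findMissingIssueIds = (
  requestedIds: string[],
  refreshed: Array<{ id: string }>,
): string[] => {
  const returned = new Set(refreshed.map((issue) => issue.id));
  return requestedIds.filter((id) => !returned.has(id));
};
```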
In `@bun/src/service.ts`:
- Around line 48-75: The start sequence can leave components running if a later
step fails; wrap the post-workflowStore startup steps
(validateDispatchConfig(this.currentConfig()), this.orchestrator.start(), and
this.httpServer.start(...)) in a try/catch and on any error call the
corresponding stop/cleanup for components that were already started (call
this.orchestrator.stop() if orchestrator started, this.httpServer.stop() if
httpServer.start() returned a port, and this.workflowStore.stop() for the
initial store), ensure this.started remains false on failure, rethrow the
original error, and only set this.started = true after all steps succeed.
In `@bun/src/utils/normalize.ts`:
- Around line 19-24: The current string-to-integer branch (in
parseInteger/normalize logic) uses Number.parseInt(value.trim(), 10) which
allows partial parses like "10ms"; change it to perform a strict full-match
numeric check before converting: trim the string, validate it matches the
integer regex /^-?\d+$/ (or /^\d+$/ if negatives disallowed), then call
Number.parseInt on the whole string and return it only when the regex matches;
otherwise fall through to rejection/undefined so malformed strings like "100px"
are not accepted.
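A sketch of the strict full-match parser described above (assuming negative values are allowed; the real `normalize.ts` helper may differ):

```typescript
// Sketch: full-match integer parsing, so "10ms" and "100px" are rejected
// rather than partially parsed, while " -5 " still parses after a trim.
const parseStrictInteger = (value: string): number | undefined => {
  const trimmed = value.trim();
  if (!/^-?\d+$/.test(trimmed)) {
    return undefined;
  }
  return Number.parseInt(trimmed, 10);
};
```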
---
Minor comments:
In `@bun/docs/mvp-manual-test.md`:
- Around line 11-12: Replace hard-coded absolute user paths in the manual test
steps (e.g., the "cd /Users/vorcigernix/Dev/symphony/bun" command) with portable
alternatives such as using a relative path (cd ./bun), a repository-root
placeholder (cd <repo-root>/bun), or an environment-variable-based path (cd
$HOME/Dev/symphony/bun) and ensure all repeated occurrences (the other spots
noted around the same commands) are updated; keep the following commands like
"bun install" unchanged but documented to be run from the portable path so the
checklist is reusable across machines.
In `@bun/README.md`:
- Around line 45-47: The example command in the README hard-codes a local
absolute path; update the example to use a generic, reproducible form such as
using a relative path or variable (e.g., "cd $(pwd)" or simply "cd symphony" or
omit the cd and run "./bun/scripts/mvp-smoke.sh --workflow
./bun/WORKFLOW.test.md --port 8789") so the invocation of the mvp-smoke.sh
script and reference to WORKFLOW.test.md do not rely on a user-specific
filesystem path.
In `@bun/scripts/mvp-smoke.sh`:
- Around line 11-12: Validate and coerce STARTUP_TIMEOUT_SECONDS to a numeric
value before any arithmetic in the readiness loop: read STARTUP_TIMEOUT_SECONDS
(default to 20), check it with a numeric regex like [[
"$STARTUP_TIMEOUT_SECONDS" =~ ^[0-9]+$ ]], and if it fails set it to the default
(20) or an integer fallback, then use the validated integer variable in the
arithmetic at the readiness loop (where STARTUP_TIMEOUT_SECONDS is used around
line 94); this prevents non-numeric env overrides from breaking the arithmetic
and readiness logic.
In `@bun/src/config/config.ts`:
- Around line 355-359: The tilde-expansion logic in the block handling
value.startsWith("~") incorrectly uses value.slice(2) for all non-"~" values,
which mangles inputs like "~workspace"; update the suffix calculation in
config.ts so it uses "" when value === "~", value.slice(2) when the second
character is "/" (i.e., value.startsWith("~/")), and value.slice(1) otherwise,
then pass that suffix into resolve(home, suffix) so "~", "~/path", and
"~workspace" all expand correctly; refer to the variables value, home and the
resolve(...) call to locate the change.
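The three-way suffix rule can be sketched as below. The `home` parameter is injected here for testability; in `config.ts` it presumably comes from the environment, and the function name is illustrative.

```typescript
import { resolve } from "node:path";

// Illustrative tilde expansion covering "~", "~/path", and "~workspace".
function expandTilde(value: string, home: string): string {
  if (!value.startsWith("~")) {
    return value;
  }
  // "~" alone -> home itself; "~/path" -> strip "~/"; "~workspace" -> strip
  // only the "~" so the remainder is joined under home instead of mangled.
  const suffix =
    value === "~" ? "" : value.startsWith("~/") ? value.slice(2) : value.slice(1);
  return resolve(home, suffix);
}
```

An unconditional `value.slice(2)` would turn `"~workspace"` into `"orkspace"`, which is the bug the review flags.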
In `@bun/src/http/presenter.ts`:
- Around line 64-65: workspace.path is constructed from the raw issueIdentifier
which can diverge from how workspaces are created; update the presenter to
normalize the key with the same sanitizeWorkspaceKey(...) used during workspace
creation (i.e., call sanitizeWorkspaceKey(issueIdentifier) and use that result
when building path and any workspace-key related values) so the path uses the
canonical workspace key and stays consistent with workspace creation logic.
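A minimal sketch of sharing one sanitizer between creation and presentation follows. Both function bodies are assumptions for illustration; the real `sanitizeWorkspaceKey` rules live in the workspace manager.

```typescript
// Hypothetical sanitizer: the actual normalization rules are defined
// once in the workspace manager and must be reused by the presenter.
function sanitizeWorkspaceKey(identifier: string): string {
  return identifier.toLowerCase().replace(/[^a-z0-9-]+/g, "-");
}

// The presenter derives the path from the same canonical key used at
// workspace creation, so the two can never diverge.
function workspacePathFor(root: string, issueIdentifier: string): string {
  return `${root}/${sanitizeWorkspaceKey(issueIdentifier)}`;
}
```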
In `@bun/src/orchestrator/orchestrator.ts`:
- Around line 916-919: The current assignment to `direct` uses
`asRecord(payload.total_token_usage) || asRecord(payload.usage)` which never
evaluates the right side because `asRecord` always returns an object; update the
logic in the `direct` assignment so it prefers `payload.total_token_usage` only
when that field is actually present/non-empty and falls back to `payload.usage`
otherwise (e.g., check `payload.total_token_usage` for null/undefined or check
`Object.keys(asRecord(payload.total_token_usage)).length > 0`), keeping the rest
of the flow that returns `direct` unchanged; refer to `asRecord`,
`payload.total_token_usage`, `payload.usage`, and the `direct` variable to
locate and modify the code.
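The key-count check can be sketched as below; `asRecord` here is a stand-in for the orchestrator's helper, reproducing the relevant behavior (it always returns an object, so `||` never falls through).

```typescript
// Stand-in for the helper in orchestrator.ts: non-objects collapse to {}.
function asRecord(value: unknown): Record<string, unknown> {
  return typeof value === "object" && value !== null
    ? (value as Record<string, unknown>)
    : {};
}

// Prefer total_token_usage only when it actually carries data; an empty
// object is truthy, so `asRecord(a) || asRecord(b)` would never reach b.
function pickUsage(payload: Record<string, unknown>): Record<string, unknown> {
  const total = asRecord(payload.total_token_usage);
  return Object.keys(total).length > 0 ? total : asRecord(payload.usage);
}
```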
In `@bun/src/tracker/linear-client.ts`:
- Around line 141-145: The current fetchIssueStatesByIds uses a single GraphQL
call with first: Math.min(ids.length, ISSUE_PAGE_SIZE) which truncates IDs
beyond ISSUE_PAGE_SIZE; modify fetchIssueStatesByIds to paginate the ids array
into chunks of size ISSUE_PAGE_SIZE, call this.graphql(ISSUE_BY_IDS_QUERY, {
ids: chunk, first: chunk.length, relationFirst: ISSUE_PAGE_SIZE }) for each
chunk, collect and merge all returned issue nodes into a single result
(preserving mapping to the original ids), and return the aggregated set so no
IDs are dropped.
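The chunk-and-merge pattern can be sketched generically as below. `pageSize` stands in for `ISSUE_PAGE_SIZE`, and the `fetchPage` callback stands in for the `this.graphql(ISSUE_BY_IDS_QUERY, ...)` call; both are assumptions for illustration.

```typescript
// Split an ID list into pages of at most `size` elements.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Fetch each page sequentially and merge the results, so no IDs beyond
// the first page are silently dropped.
async function fetchAllByIds<T>(
  ids: string[],
  pageSize: number,
  fetchPage: (pageIds: string[]) => Promise<T[]>,
): Promise<T[]> {
  const results: T[] = [];
  for (const page of chunk(ids, pageSize)) {
    results.push(...(await fetchPage(page)));
  }
  return results;
}
```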
In `@bun/src/utils/errors.ts`:
- Around line 20-29: The asErrorMessage function can return undefined because
JSON.stringify may return undefined at runtime; update asErrorMessage to ensure
it always returns a string by capturing the JSON.stringify result into a
variable (e.g., const json = JSON.stringify(error)), then if json === undefined
return String(error) (or a fallback like "undefined"), otherwise return json;
keep the existing Error instance check and the try/catch fallback but ensure the
try branch converts an undefined stringify result to a string before returning.
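A sketch of the always-string version follows; this mirrors the described fix rather than the file's exact current shape.

```typescript
// Always returns a string: JSON.stringify yields undefined at runtime for
// inputs like bare undefined or functions, so that case is coerced.
function asErrorMessage(error: unknown): string {
  if (error instanceof Error) {
    return error.message;
  }
  try {
    const json = JSON.stringify(error);
    return json === undefined ? String(error) : json;
  } catch {
    // Circular structures and similar land here.
    return String(error);
  }
}
```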
In `@bun/test/orchestrator-conformance.test.ts`:
- Around line 78-88: The test name "todo issue with non-terminal blocker is not
eligible" contradicts the assertions; rename the test to reflect that the issue
is considered a candidate and is blocked by a non-terminal blocker — e.g.,
change the test description string in the test that calls isCandidateIssue and
isTodoBlockedByNonTerminal to something like "todo issue with non-terminal
blocker is eligible and detected as blocked"; update the test declaration (the
string passed to test(...)) so it matches the asserted behavior involving
isCandidateIssue(...) and isTodoBlockedByNonTerminal(...).
---
Nitpick comments:
In @.github/workflows/pr-description-lint.yml:
- Around line 15-18: The workflow currently sets the setup-bun action with
bun-version: latest which causes non-deterministic CI; update the GitHub Actions
step that uses oven-sh/setup-bun@v2 (the block containing the bun-version field)
to pin Bun by either specifying bun-version-file: ".bun-version" (preferred), or
replace bun-version: latest with an explicit bun-version: "1.3.3" (or similar
pinned version), or ensure packageManager: "bun@1.3.3" is present in
package.json so the action can fall back; also commit bun.lock to the repo so
bun ci has reproducible dependency resolution.
In `@bun/scripts/mvp-smoke.sh`:
- Line 6: Remove the unused ROOT_DIR assignment in bun/scripts/mvp-smoke.sh:
delete the line that sets ROOT_DIR="$(cd "$BUN_DIR/.." && pwd)" so the script no
longer defines an unused variable; if any future logic needs project root,
replace this with an explicit reference or a small helper function, otherwise
simply remove the ROOT_DIR declaration to reduce noise.
In `@bun/src/config/config.ts`:
- Around line 105-107: The endpoint assignment uses a redundant ternary that
returns DEFAULT_LINEAR_ENDPOINT in both branches; replace the whole fallback
expression with a single DEFAULT_LINEAR_ENDPOINT, i.e. set endpoint to
normalizeNonEmptyString(tracker.endpoint) ?? DEFAULT_LINEAR_ENDPOINT and remove
the normalizeTrackerKind(tracker.kind) ? ... : ... check so the symbols involved
are normalizeNonEmptyString, tracker.endpoint, normalizeTrackerKind,
tracker.kind and DEFAULT_LINEAR_ENDPOINT.
- Around line 367-373: The code currently absolutizes any value containing path
separators before honoring allowRelativePreservation; move the
allowRelativePreservation check earlier and treat common relative forms as
preserved: if allowRelativePreservation is true and the value startsWith "./" or
"../" or is not path.isAbsolute(value) but contains "/" or "\" (i.e., "foo/bar"
or "./workspaces"), return the original value; otherwise fall back to
resolve(value). Use path.isAbsolute (or equivalent) and the existing symbols
allowRelativePreservation, value, sep, resolve to implement this change in the
function.
In `@bun/src/config/workflow-store.ts`:
- Around line 28-39: The start() method can create duplicate pollers if called
multiple times; update WorkflowStore.start to be idempotent by checking the
existing this.timer (or this.loadedPath) before creating a new setInterval: if a
timer already exists return immediately (or clear the existing timer and replace
it if intended), then proceed to load state and set this.timer as currently
done; reference the start method, this.timer, this.workflowPath, and
WORKFLOW_POLL_INTERVAL_MS when implementing the guard so repeated calls do not
spawn multiple intervals.
In `@bun/src/tracker/memory-tracker.ts`:
- Around line 6-12: MemoryTracker currently stores and returns the same Issue[]
reference; to prevent external mutation leaks, make defensive copies: in the
constructor (MemoryTracker constructor) clone the incoming issues array and its
elements (shallow copy at minimum) into this.issues, and in
fetchCandidateIssues() return a new array (and cloned issue objects if needed)
instead of returning this.issues directly; update references to this.issues
accordingly (preserve Issue type) so callers cannot mutate internal state.
In `@bun/test/app-server-protocol.test.ts`:
- Around line 33-61: The test may leak a spawned subprocess if an error occurs
before client.stopSession is called; ensure each block that calls
client.startSession (the session variable returned by
AppServerClient.startSession) always calls client.stopSession in the finally:
capture the session in a variable outside the try, wrap the existing logic in
try, and in finally check if session is defined then await
client.stopSession(session) before calling env.cleanup(); apply the same pattern
to the other test blocks that call startSession (the blocks around lines with
startSession/runTurn pairs) so sessions are always stopped even on failure.
In `@bun/test/fixtures/fake-codex-app-server.ts`:
- Around line 138-142: The current try/catch around msg = JSON.parse(trimmed)
swallows malformed JSON and returns silently; change the catch to fail fast so
tests surface protocol regressions. Replace the empty catch with logic that
either rethrows the parse error (e.g., throw new SyntaxError with context
including the trimmed payload) or logs the error and closes the
connection/stream (ensure you call the socket/ws close path used elsewhere in
this fixture), so malformed JSON no longer silently returns; update any tests
that expect a connection close accordingly.
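The fail-fast variant can be sketched as below; in the real fixture the catch branch would additionally close the connection via whatever socket-close path the fixture already uses, which is omitted here.

```typescript
// Fail-fast JSON line parsing for a test fixture: malformed input raises
// with context instead of being silently dropped, so protocol regressions
// surface in test output.
function parseLineStrict(trimmed: string): unknown {
  try {
    return JSON.parse(trimmed);
  } catch (error) {
    throw new SyntaxError(
      `malformed JSON line: ${trimmed.slice(0, 80)} (${String(error)})`,
    );
  }
}
```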
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d352aa2d-504d-462e-b25b-5645ffc90be6
⛔ Files ignored due to path filters (1)
bun/bun.lock is excluded by !**/*.lock
📒 Files selected for processing (40)
.codex/skills/push/SKILL.md
.github/pull_request_template.md
.github/workflows/make-all.yml
.github/workflows/pr-description-lint.yml
.gitignore
README.md
bun/README.md
bun/WORKFLOW.test.md
bun/docs/mvp-manual-test.md
bun/package.json
bun/scripts/check-pr-body.ts
bun/scripts/mvp-smoke.sh
bun/src/agent/agent-runner.ts
bun/src/cli.ts
bun/src/codex/app-server.ts
bun/src/codex/dynamic-tool.ts
bun/src/config/config.ts
bun/src/config/workflow-store.ts
bun/src/config/workflow.ts
bun/src/http/presenter.ts
bun/src/http/server.ts
bun/src/orchestrator/orchestrator.ts
bun/src/prompt/prompt-builder.ts
bun/src/service.ts
bun/src/tracker/linear-client.ts
bun/src/tracker/memory-tracker.ts
bun/src/tracker/tracker.ts
bun/src/types.ts
bun/src/utils/errors.ts
bun/src/utils/logger.ts
bun/src/utils/normalize.ts
bun/src/workspace/workspace-manager.ts
bun/test/app-server-protocol.test.ts
bun/test/config.test.ts
bun/test/fixtures/fake-codex-app-server.ts
bun/test/http-server.test.ts
bun/test/orchestrator-conformance.test.ts
bun/test/workflow.test.ts
bun/test/workspace.test.ts
bun/tsconfig.json
function headingPosition(content: string, heading: string): number {
  return content.indexOf(heading);
}
Use line-anchored heading matching instead of raw indexOf.
Line 79 and Line 87 can match heading text inside normal paragraph content, which can corrupt missing/order/section checks.
Proposed fix direction
-function headingPosition(content: string, heading: string): number {
- return content.indexOf(heading);
-}
+function headingPosition(content: string, heading: string): number {
+ const escaped = heading.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
+ const match = new RegExp(`^${escaped}$`, "m").exec(content);
+ return match?.index ?? -1;
+}

Also apply the same anchored approach in captureHeadingSection when locating the current and next heading boundaries.
Also applies to: 87-105
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/scripts/check-pr-body.ts` around lines 78 - 80, Replace raw substring
searches in headingPosition and captureHeadingSection with line-anchored
matching: split content into lines, scan for a line that matches the heading at
the start of the line (use a regex like /^\s*(#{1,6}\s+)?<heading>\s*$/ or
/^\s*<heading>\s*$/ depending on how headings are provided), and return the
character index of that line start (or -1 if not found). Apply the same
line-based search to locate the next heading boundary in captureHeadingSection
so both functions only match headings that appear alone at the start of a line,
preventing false positives inside paragraphs.
--workflow)
  WORKFLOW_PATH="$2"
  shift 2
  ;;
--port)
  PORT="$2"
  shift 2
  ;;
--issue)
  ISSUE_IDENTIFIER="$2"
  shift 2
Handle missing values for option flags before using $2.
At Line 27 / Line 31 / Line 35, passing --workflow, --port, or --issue without a value can crash under set -u instead of showing a friendly usage error.
Proposed fix
while [[ $# -gt 0 ]]; do
case "$1" in
--workflow)
+ if [[ $# -lt 2 || "$2" == --* ]]; then
+ echo "Missing value for --workflow" >&2
+ usage
+ exit 1
+ fi
WORKFLOW_PATH="$2"
shift 2
;;
--port)
+ if [[ $# -lt 2 || "$2" == --* ]]; then
+ echo "Missing value for --port" >&2
+ usage
+ exit 1
+ fi
PORT="$2"
shift 2
;;
--issue)
+ if [[ $# -lt 2 || "$2" == --* ]]; then
+ echo "Missing value for --issue" >&2
+ usage
+ exit 1
+ fi
ISSUE_IDENTIFIER="$2"
shift 2
;;📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
--workflow)
  if [[ $# -lt 2 || "$2" == --* ]]; then
    echo "Missing value for --workflow" >&2
    usage
    exit 1
  fi
  WORKFLOW_PATH="$2"
  shift 2
  ;;
--port)
  if [[ $# -lt 2 || "$2" == --* ]]; then
    echo "Missing value for --port" >&2
    usage
    exit 1
  fi
  PORT="$2"
  shift 2
  ;;
--issue)
  if [[ $# -lt 2 || "$2" == --* ]]; then
    echo "Missing value for --issue" >&2
    usage
    exit 1
  fi
  ISSUE_IDENTIFIER="$2"
  shift 2
  ;;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/scripts/mvp-smoke.sh` around lines 27 - 37, When parsing the flags in
mvp-smoke.sh, don't directly use "$2" for --workflow, --port, or --issue;
instead validate that a next argument exists and isn't another flag before
assigning WORKFLOW_PATH, PORT, or ISSUE_IDENTIFIER (e.g., check that "$#" >= 2
and that "$2" doesn't start with "-"), and if the value is missing print the
usage/error and exit non-zero; update the case arms handling --workflow, --port,
and --issue to perform this validation before doing the current assignments and
shifts.
if (arg === "--port") {
  const rawPort = args[index + 1];
  if (!rawPort) {
    throw new Error("--port requires an integer value");
  }

  const port = Number.parseInt(rawPort, 10);
  if (!Number.isInteger(port) || port < 0) {
    throw new Error("--port requires a non-negative integer value");
  }
🧩 Analysis chain
🏁 Script executed:
# First, let's verify the code at the specified lines and surrounding context
cat -n bun/src/cli.ts | head -80 | tail -40

Repository: meiroio/symphony
Length of output: 1301
🏁 Script executed:
# Also run the provided verification script to see current parsing logic
rg -n -C4 'if \(arg === "--port"\)|Number\.parseInt|port < 0|65535' bun/src/cli.ts

Repository: meiroio/symphony
Length of output: 512
🏁 Script executed:
# Check if the file exists and its size
wc -l bun/src/cli.ts

Repository: meiroio/symphony
Length of output: 78
Harden --port parsing to reject malformed and out-of-range values.
The current implementation accepts non-canonical input and does not enforce the valid TCP port range (0-65535). Number.parseInt() accepts values with leading/trailing whitespace and partial numeric strings (e.g., "123abc" parses to 123). Additionally, the validation only checks for negative values but allows ports above 65535.
🔧 Proposed fix
if (arg === "--port") {
const rawPort = args[index + 1];
if (!rawPort) {
throw new Error("--port requires an integer value");
}
- const port = Number.parseInt(rawPort, 10);
- if (!Number.isInteger(port) || port < 0) {
- throw new Error("--port requires a non-negative integer value");
+ if (!/^\d+$/.test(rawPort)) {
+ throw new Error("--port requires an integer value");
+ }
+
+ const port = Number(rawPort);
+ if (!Number.isInteger(port) || port < 0 || port > 65_535) {
+ throw new Error("--port requires an integer value in the range 0-65535");
}
portOverride = port;
index += 1;
continue;📝 Committable suggestion
if (arg === "--port") {
  const rawPort = args[index + 1];
  if (!rawPort) {
    throw new Error("--port requires an integer value");
  }
  if (!/^\d+$/.test(rawPort)) {
    throw new Error("--port requires an integer value");
  }

  const port = Number(rawPort);
  if (!Number.isInteger(port) || port < 0 || port > 65_535) {
    throw new Error("--port requires an integer value in the range 0-65535");
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/src/cli.ts` around lines 56 - 65, The port parsing branch for arg
"--port" should reject malformed and out-of-range values: validate rawPort (from
args[index + 1]) is present, trim whitespace, ensure it matches a strict
digits-only pattern (e.g., /^\d+$/) so things like "123abc" or " 123 " are
rejected, parse to an integer (port) and then enforce 0 <= port <= 65535,
throwing clear errors (e.g., "--port requires a non-negative integer between 0
and 65535" or "--port requires a numeric value") when checks fail; update the
checks around rawPort, port, Number.parseInt, and the error messages
accordingly.
const result = await executeDynamicTool(toolName, argumentsPayload, this.configProvider());

await this.sendMessage(session.process, {
  id,
  result,
});
Always reply to item/tool/call, even on tool exceptions.
At Line 335, a thrown error skips the reply for the request ID (Line 338), which can stall/desync protocol flow.
Suggested fix
- const result = await executeDynamicTool(toolName, argumentsPayload, this.configProvider());
+ let result: Record<string, unknown>;
+ try {
+ result = await executeDynamicTool(toolName, argumentsPayload, this.configProvider());
+ } catch (error) {
+ result = {
+ success: false,
+ error: {
+ code: "tool_execution_error",
+ message: error instanceof Error ? error.message : "tool execution failed",
+ },
+ };
+ }
await this.sendMessage(session.process, {
id,
result,
});📝 Committable suggestion
let result: Record<string, unknown>;
try {
  result = await executeDynamicTool(toolName, argumentsPayload, this.configProvider());
} catch (error) {
  result = {
    success: false,
    error: {
      code: "tool_execution_error",
      message: error instanceof Error ? error.message : "tool execution failed",
    },
  };
}
await this.sendMessage(session.process, {
  id,
  result,
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/src/codex/app-server.ts` around lines 335 - 340, The call to
executeDynamicTool can throw and currently skips the sendMessage reply for the
request id; wrap the executeDynamicTool call in a try/catch around
executeDynamicTool(toolName, argumentsPayload, this.configProvider()) and ensure
sendMessage(session.process, { id, result }) is always invoked: on success send
the normal result, on error send a structured error response (e.g., { id, error:
{ message: err.message, name: err.name } } or similar) so the item/tool/call
always gets a reply; reference executeDynamicTool, sendMessage, session.process,
id, argumentsPayload and this.configProvider() when making the change.
const payload = await bus.next(remaining);
if (payload.id !== requestId) {
  continue;
}
awaitResponse currently drops unrelated JSON-RPC messages.
Line 398 consumes and discards non-matching payloads. If notifications or other requests arrive first, they are lost instead of being handled later.
Suggested fix
private async awaitResponse(
bus: JsonLineBus,
requestId: number,
timeoutMs: number,
): Promise<JsonRpcResponse> {
const deadline = Date.now() + timeoutMs;
+ const deferred: JsonRpcResponse[] = [];
- while (true) {
- const remaining = deadline - Date.now();
- if (remaining <= 0) {
- throw new Error("response_timeout");
- }
-
- const payload = await bus.next(remaining);
- if (payload.id !== requestId) {
- continue;
- }
-
- if (payload.error !== undefined) {
- throw new Error(`response_error:${JSON.stringify(payload.error)}`);
- }
-
- return payload;
- }
+ try {
+ while (true) {
+ const remaining = deadline - Date.now();
+ if (remaining <= 0) {
+ throw new Error("response_timeout");
+ }
+
+ const payload = await bus.next(remaining);
+ if (payload.id !== requestId) {
+ deferred.push(payload);
+ continue;
+ }
+
+ if (payload.error !== undefined) {
+ throw new Error(`response_error:${JSON.stringify(payload.error)}`);
+ }
+
+ return payload;
+ }
+ } finally {
+ bus.prepend(deferred);
+ }
}

class JsonLineBus {
+ prepend(payloads: JsonRpcResponse[]): void {
+ if (payloads.length === 0) {
+ return;
+ }
+ this.queue.unshift(...payloads);
+ }📝 Committable suggestion
private async awaitResponse(
  bus: JsonLineBus,
  requestId: number,
  timeoutMs: number,
): Promise<JsonRpcResponse> {
  const deadline = Date.now() + timeoutMs;
  const deferred: JsonRpcResponse[] = [];
  try {
    while (true) {
      const remaining = deadline - Date.now();
      if (remaining <= 0) {
        throw new Error("response_timeout");
      }

      const payload = await bus.next(remaining);
      if (payload.id !== requestId) {
        deferred.push(payload);
        continue;
      }

      if (payload.error !== undefined) {
        throw new Error(`response_error:${JSON.stringify(payload.error)}`);
      }

      return payload;
    }
  } finally {
    bus.prepend(deferred);
  }
}

// Add to JsonLineBus class:
prepend(payloads: JsonRpcResponse[]): void {
  if (payloads.length === 0) {
    return;
  }
  this.queue.unshift(...payloads);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/src/codex/app-server.ts` around lines 397 - 400, awaitResponse currently
discards any non-matching JSON-RPC messages when it does const payload = await
bus.next(remaining); if (payload.id !== requestId) continue; — instead, preserve
those messages so notifications/other requests aren’t lost: when payload.id !==
requestId, push the payload into a pending buffer or map (e.g., pendingMessages
keyed by payload.id or a notifications queue) or re-enqueue it onto remaining,
then continue; update awaitResponse to consult pendingMessages first before
awaiting bus.next and ensure symbols mentioned (awaitResponse, bus.next,
remaining, requestId, payload) are used to locate and implement the
buffering/re-enqueue logic.
app.get("/api/v1/state", () => {
  try {
    return statePayload(orchestrator.snapshot());
  } catch {
    return {
      generated_at: new Date().toISOString(),
      error: {
        code: "snapshot_unavailable",
        message: "Snapshot unavailable",
      },
    };
  }
});
Return a failure status when snapshot generation fails.
The error payload currently comes back with a 2xx status, which misrepresents endpoint health to clients and monitors.
📡 Proposed fix
- app.get("/api/v1/state", () => {
+ app.get("/api/v1/state", ({ set }) => {
try {
return statePayload(orchestrator.snapshot());
} catch {
+ set.status = 503;
return {
generated_at: new Date().toISOString(),
error: {
code: "snapshot_unavailable",
message: "Snapshot unavailable",📝 Committable suggestion
app.get("/api/v1/state", ({ set }) => {
  try {
    return statePayload(orchestrator.snapshot());
  } catch {
    set.status = 503;
    return {
      generated_at: new Date().toISOString(),
      error: {
        code: "snapshot_unavailable",
        message: "Snapshot unavailable",
      },
    };
  }
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/src/http/server.ts` around lines 67 - 79, The /api/v1/state route handler
currently returns an error payload with a 2xx status when
orchestrator.snapshot() throws; update the app.get("/api/v1/state") handler so
that on catch it sends the error JSON with an appropriate failure HTTP status
(e.g., 500) instead of a successful 2xx code — locate the route in server.ts and
adjust the response semantics (where statePayload and orchestrator.snapshot()
are used) to set the non-2xx status for the error branch.
try {
  const result = issuePayload(identifier, orchestrator.snapshot(), configProvider().workspace.root);
  if (!result.ok) {
    set.status = 404;
    return errorEnvelope("issue_not_found", "Issue not found");
  }

  return result.payload;
} catch {
  set.status = 404;
  return errorEnvelope("issue_not_found", "Issue not found");
}
Do not map internal lookup failures to 404 issue_not_found.
This catch-all converts unexpected internal errors into a false “not found,” obscuring operational failures.
🛠️ Proposed fix
app.get("/api/v1/:issue_identifier", ({ params, set }) => {
const identifier = params.issue_identifier;
try {
const result = issuePayload(identifier, orchestrator.snapshot(), configProvider().workspace.root);
if (!result.ok) {
set.status = 404;
return errorEnvelope("issue_not_found", "Issue not found");
}
return result.payload;
} catch {
- set.status = 404;
- return errorEnvelope("issue_not_found", "Issue not found");
+ set.status = 500;
+ return errorEnvelope("issue_lookup_failed", "Issue lookup failed");
}
});

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/src/http/server.ts` around lines 90 - 101, The current handler
incorrectly maps any thrown/internal error to a 404 by catching all exceptions
and returning errorEnvelope("issue_not_found"), which hides real failures;
change the error handling so that only a non-ok result from issuePayload leads
to set.status = 404 and errorEnvelope("issue_not_found"), while exceptions from
issuePayload, orchestrator.snapshot(), or configProvider() are not translated
into 404 — instead catch the exception, log the error, set an appropriate 5xx
status (e.g., set.status = 500) and return an internal error envelope (e.g.,
errorEnvelope("internal_error", "Internal server error")) or rethrow to let
upstream middleware handle it; update the block around
issuePayload/orchestrator.snapshot/configProvider().workspace.root and the catch
branch accordingly and preserve the successful return of result.payload when
result.ok is true.
let refreshed: Issue[];

try {
  refreshed = await tracker.fetchIssueStatesByIds(issueIds);
} catch (error) {
  logger.warn("Running issue state refresh failed; keeping workers running", {
    reason: String(error),
  });
  return;
}

for (const issue of refreshed) {
  const issueId = issue.id;
  if (!issueId) {
    continue;
  }

  if (this.isTerminalState(issue.state)) {
    logger.info("Issue moved to terminal state; stopping run", {
      issue_id: issue.id,
      issue_identifier: issue.identifier,
      state: issue.state,
    });
    this.terminateRunningIssue(issueId, true);
    continue;
  }

  if (!issue.assignedToWorker) {
    logger.info("Issue reassigned away; stopping run", {
      issue_id: issue.id,
      issue_identifier: issue.identifier,
    });
    this.terminateRunningIssue(issueId, false);
    continue;
  }

  if (!this.isActiveState(issue.state)) {
    logger.info("Issue moved to non-active state; stopping run", {
      issue_id: issue.id,
      issue_identifier: issue.identifier,
      state: issue.state,
    });
    this.terminateRunningIssue(issueId, false);
    continue;
  }

  const runningEntry = this.running.get(issueId);
  if (runningEntry) {
    runningEntry.issue = issue;
  }
}
Handle missing refreshed running issues explicitly.
If fetchIssueStatesByIds omits a running issue ID (deleted/inaccessible), that worker is never reconciled and keeps running.
💡 Proposed fix
- for (const issue of refreshed) {
+ const refreshedById = new Map(
+ refreshed
+ .filter((issue) => typeof issue.id === "string" && issue.id.length > 0)
+ .map((issue) => [issue.id as string, issue]),
+ );
+
+ for (const issueId of issueIds) {
+ const issue = refreshedById.get(issueId);
+ if (!issue) {
+ logger.info("Running issue no longer visible; stopping run", { issue_id: issueId });
+ this.terminateRunningIssue(issueId, false);
+ continue;
+ }
+
const issueId = issue.id;
if (!issueId) {
continue;
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/src/orchestrator/orchestrator.ts` around lines 291 - 342,
fetchIssueStatesByIds may return fewer Issue objects than requested, leaving
workers for missing IDs running; after awaiting refreshed = await
tracker.fetchIssueStatesByIds(issueIds) compute the set of returned IDs (from
refreshed.map(i=>i.id)), find any missingIds from the original issueIds input,
log each missing ID (and identifier if available), and call
this.terminateRunningIssue(missingId, false) to stop those orphaned workers
before proceeding with the existing loop; use the existing running map and
logger to locate relevant entries and provide context.
await this.workflowStore.start();

const validation = validateDispatchConfig(this.currentConfig());
if (!validation.ok) {
  throw new Error(validation.message ?? validation.errorCode ?? "dispatch_validation_failed");
}

await this.orchestrator.start();

const config = this.currentConfig();
const port = this.serverPortOverride ?? config.server.port;

let httpPort: number | null = null;

if (typeof port === "number" && port >= 0) {
  const host = config.server.host;
  httpPort = this.httpServer.start(port, host);

  logger.info("HTTP observability server started", {
    host,
    port: httpPort,
  });
}

this.started = true;

return { httpPort };
Add rollback on partial startup failure.
If validation/orchestrator/HTTP startup fails after workflowStore.start(), the method throws without stopping already-started components.
💡 Proposed fix
```diff
   async start(): Promise<{ httpPort: number | null }> {
@@
-    await this.workflowStore.start();
-
-    const validation = validateDispatchConfig(this.currentConfig());
-    if (!validation.ok) {
-      throw new Error(validation.message ?? validation.errorCode ?? "dispatch_validation_failed");
-    }
-
-    await this.orchestrator.start();
+    await this.workflowStore.start();
+    let orchestratorStarted = false;
+    try {
+      const validation = validateDispatchConfig(this.currentConfig());
+      if (!validation.ok) {
+        throw new Error(validation.message ?? validation.errorCode ?? "dispatch_validation_failed");
+      }
+
+      await this.orchestrator.start();
+      orchestratorStarted = true;
@@
-    this.started = true;
-
-    return { httpPort };
+      this.started = true;
+      return { httpPort };
+    } catch (error) {
+      if (orchestratorStarted) {
+        this.orchestrator.stop();
+      }
+      this.httpServer.stop();
+      this.workflowStore.stop();
+      throw error;
+    }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/src/service.ts` around lines 48 - 75, The start sequence can leave
components running if a later step fails; wrap the post-workflowStore startup
steps (validateDispatchConfig(this.currentConfig()), this.orchestrator.start(),
and this.httpServer.start(...)) in a try/catch and on any error call the
corresponding stop/cleanup for components that were already started (call
this.orchestrator.stop() if orchestrator started, this.httpServer.stop() if
httpServer.start() returned a port, and this.workflowStore.stop() for the
initial store), ensure this.started remains false on failure, rethrow the
original error, and only set this.started = true after all steps succeed.
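The rollback-on-partial-failure shape this prompt describes is a general pattern, independent of the service's real API. A minimal standalone sketch (step names here are illustrative placeholders, not the actual `SymphonyService` members):

```typescript
// Sketch of start-with-rollback: each successfully started step is
// recorded, and a failure unwinds the recorded steps in reverse order
// before rethrowing the original error.
type Step = { name: string; start: () => void; stop: () => void };

function startAll(steps: Step[]): void {
  const started: Step[] = [];
  try {
    for (const step of steps) {
      step.start();
      started.push(step);
    }
  } catch (error) {
    // Best-effort cleanup of already-started components, newest first.
    for (const step of [...started].reverse()) {
      step.stop();
    }
    throw error;
  }
}

const log: string[] = [];
const mk = (name: string, fail = false): Step => ({
  name,
  start: () => {
    if (fail) throw new Error(`${name} failed`);
    log.push(`start:${name}`);
  },
  stop: () => log.push(`stop:${name}`),
});

try {
  startAll([mk("workflowStore"), mk("orchestrator"), mk("httpServer", true)]);
} catch {
  // httpServer failed to start; the two earlier steps were stopped in reverse order.
}
// log: ["start:workflowStore", "start:orchestrator", "stop:orchestrator", "stop:workflowStore"]
```

The key property is that a caller never observes a half-started service: either every step is up, or everything that started has been stopped and the error propagates.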
```typescript
  if (typeof value === "string") {
    const parsed = Number.parseInt(value.trim(), 10);
    if (Number.isInteger(parsed)) {
      return parsed;
    }
  }
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's examine the actual implementation
cat -n bun/src/utils/normalize.ts
```

Repository: meiroio/symphony
Length of output: 3485

🏁 Script executed:

```shell
# Run the verification script from the review comment to find all call sites
rg -n --type ts '\bparseInteger\s*\(' bun/src bun/test
rg -n --type ts '\bparsePositiveInteger\s*\(' bun/src bun/test
rg -n --type ts '\bparseNonNegativeInteger\s*\(' bun/src bun/test
```

Repository: meiroio/symphony
Length of output: 1321

🏁 Script executed:

```shell
# Examine the config.ts file to understand the context
head -n 160 bun/src/config/config.ts | tail -n 90
```

Repository: meiroio/symphony
Length of output: 3670

🏁 Script executed:

```shell
# Check if there are tests for the parsing functions
rg -n 'parseInteger|parsePositiveInteger|parseNonNegativeInteger' bun/test -A 3
```

Repository: meiroio/symphony
Length of output: 42

🏁 Script executed:

```shell
# Check the data source - where do these config values come from?
rg -n "agent\.max_concurrent_agents|polling\.interval_ms|codex\.stall_timeout_ms" bun/src -B 5 -A 1 | head -60
```

Repository: meiroio/symphony
Length of output: 2340

🏁 Script executed:

```shell
# Test the actual behavior of Number.parseInt to confirm the issue
node <<'EOF'
console.log("Number.parseInt('10ms', 10):", Number.parseInt('10ms', 10));
console.log("Number.isInteger(10):", Number.isInteger(Number.parseInt('10ms', 10)));
console.log("Number.parseInt('ms10', 10):", Number.parseInt('ms10', 10));
console.log("Number.parseInt('100px', 10):", Number.parseInt('100px', 10));
console.log("Number.parseInt(' 42 ', 10):", Number.parseInt(' 42 ', 10));
EOF
```

Repository: meiroio/symphony
Length of output: 219

🏁 Script executed:

```shell
# Look for any input validation or schema validation
rg -n "schema|zod|joi|validate.*config" bun/src/config -i
```

Repository: meiroio/symphony
Length of output: 702
parseInteger silently accepts malformed numeric strings due to partial parsing.
Number.parseInt() extracts leading digits and ignores trailing characters. Strings like "10ms" or "100px" parse as 10 and 100 respectively, instead of being rejected. This is problematic for configuration parsing where invalid formats should fail strictly.
Affected config values include timeouts, counts, and port numbers. Users providing strings like "100ms" would silently receive 100 instead of a validation error.
🔧 Proposed fix
```diff
 export const parseInteger = (value: unknown): number | null => {
   if (typeof value === "number" && Number.isInteger(value)) {
     return value;
   }
   if (typeof value === "string") {
-    const parsed = Number.parseInt(value.trim(), 10);
-    if (Number.isInteger(parsed)) {
+    const trimmed = value.trim();
+    if (!/^[+-]?\d+$/.test(trimmed)) {
+      return null;
+    }
+    const parsed = Number.parseInt(trimmed, 10);
+    if (Number.isSafeInteger(parsed)) {
       return parsed;
     }
   }
   return null;
 };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
  if (typeof value === "string") {
    const trimmed = value.trim();
    if (!/^[+-]?\d+$/.test(trimmed)) {
      return null;
    }
    const parsed = Number.parseInt(trimmed, 10);
    if (Number.isSafeInteger(parsed)) {
      return parsed;
    }
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/src/utils/normalize.ts` around lines 19 - 24, The current
string-to-integer branch (in parseInteger/normalize logic) uses
Number.parseInt(value.trim(), 10) which allows partial parses like "10ms";
change it to perform a strict full-match numeric check before converting: trim
the string, validate it matches the integer regex /^-?\d+$/ (or /^\d+$/ if
negatives disallowed), then call Number.parseInt on the whole string and return
it only when the regex matches; otherwise fall through to rejection/undefined so
malformed strings like "100px" are not accepted.
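The partial-parse pitfall and the strict fix are easy to demonstrate outside the codebase. This standalone sketch mirrors the shape of the suggested fix above (the name `parseIntegerStrict` is illustrative, not the repo's exported function):

```typescript
// Strict integer parsing: reject strings like "10ms" or "100px" that
// Number.parseInt would silently truncate to their leading digits.
export const parseIntegerStrict = (value: unknown): number | null => {
  if (typeof value === "number" && Number.isInteger(value)) {
    return value;
  }
  if (typeof value === "string") {
    const trimmed = value.trim();
    // Require the whole string to be an optionally signed run of digits.
    if (!/^[+-]?\d+$/.test(trimmed)) {
      return null;
    }
    const parsed = Number.parseInt(trimmed, 10);
    if (Number.isSafeInteger(parsed)) {
      return parsed;
    }
  }
  return null;
};

console.log(parseIntegerStrict(" 42 "));  // 42
console.log(parseIntegerStrict("10ms"));  // null (Number.parseInt alone would yield 10)
console.log(parseIntegerStrict("100px")); // null
```

For config values like timeouts and ports, failing loudly on `"100ms"` is the desired behavior: the user either fixes the unit suffix or gets an explicit validation error instead of a silently wrong number.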
Actionable comments posted: 5
🧹 Nitpick comments (1)

bun/package.json (1)

1-23: Package configuration looks good overall. The structure is valid with the correct ESM module type, appropriate scripts referencing `src/cli.ts`, and reasonable dependency choices. Consider adding an `engines` field to document the minimum Bun version requirement for reproducibility:

```json
"engines": { "bun": ">=1.0.0" }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bun/package.json` around lines 1 - 23, Add an "engines" field to package.json at the top level to document the minimum Bun runtime required (e.g., bun >=1.0.0) so builds and contributors know the supported Bun version; update the package.json object that contains "name", "type", "scripts" (e.g., "dev", "prod", "start") and dependencies, ensuring the new "engines" key appears alongside those top-level fields and does not break the existing ESM "type": "module" or script references to src/cli.ts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bun/README.md`:
- Around line 32-35: The README contains a hardcoded absolute path
"/Users/vorcigernix/Dev/symphony/bun" in the install snippet; update the command
block in bun/README.md to use a relative path (e.g., "cd bun" or "./bun") so it
works for other contributors and remove any developer-specific paths in that
code block.
- Around line 39-42: Replace the hardcoded absolute path in the README example
with a relative path so users can run the commands from the repo root;
specifically update the bash snippet that currently shows "cd
/Users/vorcigernix/Dev/symphony/bun" to use "cd bun" before "bun dev" in the
README.md example so it works for all contributors.
- Around line 56-60: Replace the hardcoded absolute path in the README bash
snippet—change the line containing "cd /Users/vorcigernix/Dev/symphony/bun" to a
portable relative path such as "cd bun" (or remove the cd entirely if
instructions assume current repo root) so the example using the "bun dev" and
"bun prod" commands works for other users; update the snippet around the "bun
dev" and "bun prod" lines accordingly.
- Around line 182-185: The README currently shows a hardcoded absolute path
before running the smoke script; update the example to run the script from the
repository root using a relative command so the path is portable — replace the
two-line sequence that cd's to /Users/vorcigernix/Dev/symphony and then runs
./bun/scripts/mvp-smoke.sh with a single invocation (e.g., run
./bun/scripts/mvp-smoke.sh --workflow ./bun/WORKFLOW.test.md --port 8789) and
add a brief comment like "From repository root" to clarify the working
directory; ensure the example references the script path
./bun/scripts/mvp-smoke.sh and workflow path ./bun/WORKFLOW.test.md exactly as
in the diff.
- Around line 213-217: Update the README.md command example that currently uses
a hardcoded absolute path (/Users/vorcigernix/Dev/symphony/bun) to a portable
relative path; change that line to "cd bun" (or remove the cd step if redundant)
and keep the subsequent commands ("bun run typecheck" and "bun test") so local
validation instructions work for all contributors.
---
Nitpick comments:
In `@bun/package.json`:
- Around line 1-23: Add an "engines" field to package.json at the top level to
document the minimum Bun runtime required (e.g., bun >=1.0.0) so builds and
contributors know the supported Bun version; update the package.json object that
contains "name", "type", "scripts" (e.g., "dev", "prod", "start") and
dependencies, ensuring the new "engines" key appears alongside those top-level
fields and does not break the existing ESM "type": "module" or script references
to src/cli.ts.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a76a4247-526f-4981-aae1-6c0957a2a834
📒 Files selected for processing (4)
- README.md
- bun/README.md
- bun/WORKFLOW.md
- bun/package.json
```bash
cd /Users/vorcigernix/Dev/symphony/bun
bun install
```
Replace hardcoded absolute paths with relative paths.
The documentation contains developer-specific paths (/Users/vorcigernix/Dev/symphony/bun) that won't work for other contributors. Use relative paths instead.
📝 Suggested fix
1. Install dependencies.

```diff
-cd /Users/vorcigernix/Dev/symphony/bun
+cd bun
 bun install
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @bun/README.md around lines 32 - 35, The README contains a hardcoded absolute
path "/Users/vorcigernix/Dev/symphony/bun" in the install snippet; update the
command block in bun/README.md to use a relative path (e.g., "cd bun" or
"./bun") so it works for other contributors and remove any developer-specific
paths in that code block.
```bash
cd /Users/vorcigernix/Dev/symphony/bun
bun dev
```
Same hardcoded path issue.
📝 Suggested fix
2. Start in development mode using the default workflow at `./WORKFLOW.md`.

```diff
-cd /Users/vorcigernix/Dev/symphony/bun
+cd bun
 bun dev
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/README.md` around lines 39 - 42, Replace the hardcoded absolute path in
the README example with a relative path so users can run the commands from the
repo root; specifically update the bash snippet that currently shows "cd
/Users/vorcigernix/Dev/symphony/bun" to use "cd bun" before "bun dev" in the
README.md example so it works for all contributors.
```bash
cd /Users/vorcigernix/Dev/symphony/bun
bun dev  # NODE_ENV=development
bun prod # NODE_ENV=production
```
Same hardcoded path issue.
📝 Suggested fix
Run modes:

```diff
-cd /Users/vorcigernix/Dev/symphony/bun
+cd bun
 bun dev  # NODE_ENV=development
 bun prod # NODE_ENV=production
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @bun/README.md around lines 56 - 60, Replace the hardcoded absolute path in
the README bash snippet—change the line containing "cd
/Users/vorcigernix/Dev/symphony/bun" to a portable relative path such as "cd
bun" (or remove the cd entirely if instructions assume current repo root) so the
example using the "bun dev" and "bun prod" commands works for other users;
update the snippet around the "bun dev" and "bun prod" lines accordingly.
```bash
cd /Users/vorcigernix/Dev/symphony
./bun/scripts/mvp-smoke.sh --workflow ./bun/WORKFLOW.test.md --port 8789
```
Same hardcoded path issue for smoke test.
📝 Suggested fix
Fast smoke test (memory tracker + HTTP contract checks):

```diff
-cd /Users/vorcigernix/Dev/symphony
+# From repository root
 ./bun/scripts/mvp-smoke.sh --workflow ./bun/WORKFLOW.test.md --port 8789
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bun/README.md` around lines 182 - 185, The README currently shows a hardcoded
absolute path before running the smoke script; update the example to run the
script from the repository root using a relative command so the path is portable
— replace the two-line sequence that cd's to /Users/vorcigernix/Dev/symphony and
then runs ./bun/scripts/mvp-smoke.sh with a single invocation (e.g., run
./bun/scripts/mvp-smoke.sh --workflow ./bun/WORKFLOW.test.md --port 8789) and
add a brief comment like "From repository root" to clarify the working
directory; ensure the example references the script path
./bun/scripts/mvp-smoke.sh and workflow path ./bun/WORKFLOW.test.md exactly as
in the diff.
```bash
cd /Users/vorcigernix/Dev/symphony/bun
bun run typecheck
bun test
```
Same hardcoded path issue for local validation.
📝 Suggested fix
## Local Validation

```diff
-cd /Users/vorcigernix/Dev/symphony/bun
+cd bun
 bun run typecheck
 bun test
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @bun/README.md around lines 213 - 217, Update the README.md command example
that currently uses a hardcoded absolute path
(/Users/vorcigernix/Dev/symphony/bun) to a portable relative path; change that
line to "cd bun" (or remove the cd step if redundant) and keep the subsequent
commands ("bun run typecheck" and "bun test") so local validation instructions
work for all contributors.
Summary:
- Enforced strict runtime dispatch validation for Linear-only tracker kind, including required Linear API token and project slug checks.
- Aligned defaults/docs/workflows with Linear runtime usage, including a local Linear workflow script and an ignored local token workflow file.
- Hardened app-server behavior for user-input requests and rate-limit telemetry extraction, and fixed workspace hook timeout semantics so before_remove is best-effort.
- Added broad conformance tests for workflow reload, prompt strictness, Linear client query/error behavior, workspace safety hooks, protocol handling, and HTTP/orchestrator contracts.

Rationale:
- The implementation now matches the strict conformance requirement that runtime dispatch supports tracker.kind=linear in this spec version.
- Expanded tests reduce regression risk in protocol, config-reload, and workspace safety paths that are operationally critical.
- Documentation and smoke/manual test guides now match real runtime behavior and expected warnings in dummy-credential smoke mode.

Tests:
- cd bun && bun run typecheck && bun test
- ./bun/scripts/mvp-smoke.sh --workflow ./bun/WORKFLOW.test.md --port 8789

Co-authored-by: Codex <codex@openai.com>
Context
TL;DR
Summary
Alternatives
Test Plan
make -C elixir all

Summary by CodeRabbit
New Features
Documentation
Build & Testing
Chores