release: v0.8.0-alpha.1 — OpenBrain integration, fleet pipeline, AX-10, hardening #5
Conversation
Co-Authored-By: Virgil <virgil@lethean.io>
Replace all forge.lthn.ai import paths in Go source files with dappco.re equivalents. Update go.mod deps to dappco.re versions as specified. Co-Authored-By: Virgil <virgil@lethean.io>
All RegisterTools and internal register*Tool methods updated from *mcp.Server to *coremcp.Service. Tool registration calls updated to use svc.Server() for SDK AddTool calls. Monitor subsystem updated to store *coremcp.Service and access Server() for Sessions/ResourceUpdated. Tests updated to create coremcp.Service via New() instead of raw SDK server. Co-Authored-By: Virgil <virgil@lethean.io>
…andard
- Replace broken registerMCPService with mcp.Register (fixes nil ServiceRuntime panic)
- Remove dead mcp_service.go, update tests to use mcp.Register directly
- Add setTestWorkspace() helper to clear workspaceRootOverride between tests
- Fix 40+ test files with workspace state poisoning from loadAgentConfig
- Fix forge.lthn.ai → dappco.re in findConsumersList test
- Fix BranchWorkspaceCount test to use isolated temp dir
- Add CLI test standard: 32 tests across 19 subsystems (tests/cli/)
- All 9 packages pass, 0 failures
Co-Authored-By: Virgil <virgil@lethean.io>
- Add isLEMProfile(): codex:lemmy/lemer/lemma/lemrd use --profile, not --model
- Add isNativeAgent(): Claude agents run natively (not in Docker)
- Update localAgentCommandScript for LEM profile support
- 12 new tests (Good/Bad/Ugly for profiles, native agent, codex variants)
Co-Authored-By: Virgil <virgil@lethean.io>
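The profile/flag split above can be sketched as follows; `lemProfiles` and `modelArgs` are illustrative stand-ins based on the commit message, not the repo's actual helpers:

```go
package main

import "fmt"

// lemProfiles mirrors the codex profile names listed in the commit message.
var lemProfiles = map[string]bool{
	"lemmy": true, "lemer": true, "lemma": true, "lemrd": true,
}

func isLEMProfile(name string) bool { return lemProfiles[name] }

// modelArgs picks --profile for LEM profiles and --model for everything else.
func modelArgs(name string) []string {
	if isLEMProfile(name) {
		return []string{"--profile", name}
	}
	return []string{"--model", name}
}

func main() {
	fmt.Println(modelArgs("lemmy")) // a LEM profile takes --profile
	fmt.Println(modelArgs("opus"))  // any other name takes --model
}
```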
Gate non-essential MCP tools behind CORE_MCP_FULL=1 env var. Core factory tools (dispatch, status, plan, issue, PR, scan, mirror, watch, brain, files) always registered. Extended tools (session, sprint, state, phase, task, template, message, content, platform, epic, remote, review-queue, setup, metrics, RAG, webview) only when full mode enabled. 189 → 82 tools in default mode. Fixes slow MCP startup and tool registration timeout in Claude Code. Co-Authored-By: Virgil <virgil@lethean.io>
…re primitives Replaced fmt, strings, sort, os, io, sync, encoding/json, path/filepath, errors, log, reflect with core.Sprintf, core.E, core.Contains, core.Trim, core.Split, core.Join, core.JoinPath, slices.Sort, c.Fs(), c.Lock(), core.JSONMarshal, core.ReadAll and other CoreGO v0.8.0 primitives. Framework boundary exceptions preserved where stdlib types are required by external interfaces (Gin, net/http, CGo, Wails, bubbletea). Co-Authored-By: Virgil <virgil@lethean.io>
Co-Authored-By: Virgil <virgil@lethean.io>
…gaps
- paths.go: resolve relative workspace_root against $HOME/Code so workspaces land in the conventional location regardless of launch cwd (MCP stdio vs CLI)
- dispatch.go: container mounts use /home/agent (matches DEV_USER), plus runtime-aware dispatch (apple/docker/podman) with GPU toggle per RFC §15.5
- queue.go / runner/queue.go: DispatchConfig adds Runtime/Image/GPU fields; AgentIdentity parsing for the agents: block (RFC §10/§11)
- pr.go / commands_forge.go / actions.go: agentic_delete_branch tool + branch/delete CLI (RFC §7)
- brain/tools.go / provider.go: Org + IndexedAt fields on Memory (RFC §4)
- config/agents.yaml: document new dispatch fields, fix identity table
- tests: dispatch_runtime_test.go (21), expanded pr_test.go + queue_test.go, new CLI fixtures for branch/delete and pr/list
Co-Authored-By: Virgil <virgil@lethean.io>
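The runtime-aware dispatch can be sketched roughly as below; the `DispatchConfig` field names come from the commit message, while `runtimeArgs` and the `--gpus` flag choice are assumptions for illustration:

```go
package main

import "fmt"

// DispatchConfig fields follow the commit message (Runtime/Image/GPU).
type DispatchConfig struct {
	Runtime string // "apple", "docker", or "podman"
	Image   string
	GPU     bool
}

// runtimeArgs builds an illustrative run command; the --gpus flag is a
// docker/podman convention and is skipped for the apple runtime.
func runtimeArgs(c DispatchConfig) []string {
	args := []string{c.Runtime, "run"}
	if c.GPU && c.Runtime != "apple" {
		args = append(args, "--gpus", "all")
	}
	return append(args, c.Image)
}

func main() {
	fmt.Println(runtimeArgs(DispatchConfig{Runtime: "docker", Image: "dev:latest", GPU: true}))
}
```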
- sync.go: syncBackoffSchedule (1s/5s/15s/60s/5min) with per-push Attempts and NextAttempt honoured on retry (RFC §16.5)
- runSyncFlushLoop: ticks every minute from OnStartup when API key present, drains the queue without re-scanning workspaces
- SyncPushInput.QueueOnly: lets the flush loop drain without triggering a full workspace scan (prevents duplicate pushes)
- Sync ledger at .core/sync/ledger.json: fingerprints keyed by workspace name + (updated_at, runs); skips already-synced workspaces until fresh activity
- docs/RFC-AGENT.md: synced from plans/code/core/agent/RFC.md with latest AgentPlan status enum, complete capability, pr.close/branch.delete, indexed_at/org brain fields
Co-Authored-By: Virgil <virgil@lethean.io>
Introduce an optional go-store persistence layer for the three state groups described in RFC §15.3 — queue, concurrency, registry — plus runtime_state and dispatch_history used by the sync pipeline.
- statestore.go: lazily opens `.core/db.duckdb` via go-store when available; nil-safe helpers return cleanly so in-memory/file-based fallbacks survive when the store cannot open (graceful degradation, RFC §15.6)
- prep.go: tracks the store reference on the subsystem and closes it on shutdown; hydrateWorkspaces now consults the registry group before the filesystem scan so ghost agents are marked failed across restarts, and TrackWorkspace mirrors updates back into the cache
- runtime_state.go: persists backoff + fail-count snapshots into the go-store runtime group so dispatch backoff survives restarts even when the JSON file rotates
- commit.go: writes the completed dispatch record into dispatch_history for RFC §16.3 sync push to drain without rescanning workspaces
- statestore_test.go: covers lazy-once init, restore/delete round trip, ghost-agent failure marking, and runtime-state replay across subsystem instances
Co-Authored-By: Virgil <virgil@lethean.io>
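The nil-safe helper pattern that lets fallbacks survive a failed store open can be sketched as follows; `stateStore` here is a stand-in, not the go-store API:

```go
package main

import "fmt"

// stateStore is a stand-in for the lazily-opened go-store handle.
type stateStore struct{ data map[string]string }

// get is nil-safe: a nil receiver means the store could not be opened,
// and callers fall back to the in-memory/file path (RFC §15.6).
func (s *stateStore) get(key string) (string, bool) {
	if s == nil {
		return "", false
	}
	v, ok := s.data[key]
	return v, ok
}

func main() {
	var s *stateStore // store failed to open: helpers still return cleanly
	_, ok := s.get("registry")
	fmt.Println(ok) // false, no panic
}
```

In Go a method with a pointer receiver may be called on a nil pointer, so the guard inside the method is all the graceful degradation needs.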
…store
- prep.go TrackWorkspace mirrors into queue + concurrency store groups
(previously only registry); hydrateWorkspaces reaps filesystem ghosts
(dead PID → failed, persisted back to status.json) so cmdStatus and
out-of-process consumers see coherent state (RFC §15.3)
- sync.go queue read/write goes through go-store first per RFC §16.5
("Queue persists across restarts in db.duckdb"), file remains fallback
for graceful degradation
- statestore.go stateStoreGet helper for go-store-first reads
- tests/cli/restart — new CLI test for RFC §15.7 "dispatch → kill →
restart → no ghost agents" dead-PID reap flow
- 4 new statestore tests: queue group mirror, concurrency refresh,
sync queue persistence, fs ghost reap with disk write-back
Co-Authored-By: Virgil <virgil@lethean.io>
Implements `core login CODE` — exchanges a 6-digit pairing code generated at app.lthn.ai/device for an AgentApiKey, persisted to ~/.claude/brain.key. The pairing code is the proof, so the POST is unauthenticated.
- auth.go: AuthLoginInput/Output DTOs + handleAuthLogin handler
- commands_platform.go: login / auth/login / agentic:login CLI commands with cmdAuthLogin persisting the returned key
- prep.go: registered agentic.auth.login / agent.auth.login actions
- auth_test.go / commands_platform_test.go / prep_test.go: Good/Bad/Ugly triads per repo convention, including key persistence verification
Co-Authored-By: Virgil <virgil@lethean.io>
The runQA handler now captures every lint finding, tool run, build, vet
and test result into a go-store workspace buffer and commits the cycle
to the journal. Intelligence survives in the report and the journal per
RFC §7 Completion Pipeline.
- qa.go: QAFinding / QAToolRun / QASummary / QAReport DTOs mirroring
lint.Report shape; DispatchReport struct written to .meta/report.json;
runQAWithReport opens NewWorkspace("qa-<workspace>"), invokes
core-lint run --output json via c.Process().RunIn(), records every
finding + tool + stage result, then commits
- runQALegacy preserved for graceful degradation when go-store is
unavailable (RFC §15.6)
- dispatch.go: runQA now delegates to runQAWithReport, bool contract
unchanged for existing call sites
- qa_test.go: Good/Bad/Ugly triads per repo convention
Poindexter clustering from RFC §7 Post-Run Analysis remains open —
needs its own RFC pass for the package boundary.
Co-Authored-By: Virgil <virgil@lethean.io>
Extends DispatchReport with the three RFC §7 diff lists (New, Resolved, Persistent) and a Clusters list that groups findings by tool/severity/category/rule_id. runQAWithReport now queries the SQLite journal for up to persistentThreshold previous cycles of the same workspace, computes the diff against the current cycle, and populates .meta/report.json before ws.Commit(). The full findings payload is also pushed to the journal via CommitToJournal so later cycles have findings-level data to compare against (workspace.Commit only stores aggregated counts). Matches RFC §7 Post-Run Analysis without pulling in Poindexter as a direct dependency — uses straightforward deterministic clustering so the agent stays inside the core/go-* dependency tier. Co-Authored-By: Virgil <virgil@lethean.io>
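The deterministic tool/severity/category/rule_id grouping can be sketched as a composite-key count; the `finding` struct is illustrative, not the qa.go DTOs:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

type finding struct{ Tool, Severity, Category, RuleID string }

// clusterKey is the composite grouping key named in the commit message.
func clusterKey(f finding) string {
	return strings.Join([]string{f.Tool, f.Severity, f.Category, f.RuleID}, "/")
}

// cluster counts findings per composite key; the result is deterministic
// because the key is a pure function of the finding's fields.
func cluster(fs []finding) map[string]int {
	out := map[string]int{}
	for _, f := range fs {
		out[clusterKey(f)]++
	}
	return out
}

func main() {
	c := cluster([]finding{
		{"lint", "warning", "style", "S1000"},
		{"lint", "warning", "style", "S1000"},
		{"vet", "error", "correctness", "nilness"},
	})
	keys := make([]string, 0, len(c))
	for k := range c {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Println(k, c[k])
	}
}
```

Because the key needs no model or heuristics, two runs over the same findings always produce the same clusters, which is what keeps the diff lists stable across cycles.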
Adds `.core/workspace/db.duckdb` — the permanent record of dispatch cycles described in RFC §15.5. Stats rows persist BEFORE workspace directories are deleted, so "what happened in the last 50 dispatches" queries survive cleanup and sync drain.
- `workspace_stats.go` — lazy go-store handle for the parent stats DB; build/record/filter/list helpers; report payload projection
- `commit.go` — writes a stats row as part of the completion pipeline so every committed dispatch carries forward into the permanent record
- `commands_workspace.go` — `workspace/clean` captures stats before deleting; new `workspace/stats` command + `agentic.workspace.stats` action answer the spec's "query on the parent" use case
Co-Authored-By: Virgil <virgil@lethean.io>
Adds `recoverStateOrphans` per RFC §15.5 — startup scans `.core/state/`
for leftover QA workspace buffers from dispatches that crashed before
commit, and discards them so partial cycles do not poison the diff
history described in RFC §7.
- `statestore.go` — new `recoverStateOrphans` wrapper around go-store's
`RecoverOrphans("")` so the agent inherits the store's configured
state directory
- `prep.go` — wires the recovery into OnStartup immediately after
`hydrateWorkspaces` so the registry, queue, and buffers all come back
into a consistent state on restart
- `statestore_test.go` — Good/Bad/Ugly coverage, includes the cwd
redirect guard so the go-store default relative path cannot leak test
artefacts into the package working tree
Co-Authored-By: Virgil <virgil@lethean.io>
runWorkspaceLanguagePrep now appends `GOWORK=` (empty) to the env passed to `go work sync` so an inherited `GOWORK=off` from a test runner or CI environment doesn't short-circuit the workspace lookup. The extracted workspace template includes a go.work referencing ./repo; without this override the sync fails even though the file is right there. Convergence pass — no new features found in this sample. Co-Authored-By: Virgil <virgil@lethean.io>
Replace os/exec.LookPath with process.Program.Find() — keeps dispatch runtime detection in line with the repo's process-execution convention and removes the os/exec import from pkg/agentic/dispatch.go. Convergence-pass from spark-medium — no new features found on this sample, confirms core/agent and go-store RFC parity is complete. Co-Authored-By: Virgil <virgil@lethean.io>
go-process's OnStartup re-registers process.start/run/kill with string-ID variants, clobbering the agent's custom handlers that return *process.Process. This broke pid/queue helpers and 7 tests that need the rich handle (TestPid_ProcessAlive_Good, TestQueue_CanDispatchAgent_Bad_AgentAtLimit, etc). Register a Startable override service that reapplies the agent handlers after every service finishes OnStartup — since services run in registration order, "agentic.process-overrides" always runs last and wins. Co-Authored-By: Virgil <virgil@lethean.io>
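The ordering fix can be sketched as: services start in registration order, so an override registered last reapplies the handler and wins. All names below are illustrative stand-ins for the real service types:

```go
package main

import "fmt"

type registry struct{ handlers map[string]string }

type startable interface{ OnStartup(r *registry) }

type processSvc struct{}

// Stands in for go-process re-registering process.start with its
// string-ID variant during OnStartup.
func (processSvc) OnStartup(r *registry) { r.handlers["process.start"] = "string-id" }

type overrideSvc struct{}

// Stands in for "agentic.process-overrides": registered last, it runs
// last and restores the rich *process.Process handler.
func (overrideSvc) OnStartup(r *registry) { r.handlers["process.start"] = "rich-handle" }

func main() {
	r := &registry{handlers: map[string]string{}}
	for _, s := range []startable{processSvc{}, overrideSvc{}} {
		s.OnStartup(r)
	}
	fmt.Println(r.handlers["process.start"])
}
```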
…0, hardening
Consolidates the v0.8.0-alpha.1 release work for core/agent. Major
themes: OpenBrain memory and search wired through the agent surface,
fleet/dispatch pipeline cleanup, AX-10 + module path migration, and
defensive hardening across the agentic layer.
* Module path migration: forge.lthn.ai/core/agent -> dappco.re/go/agent.
All sibling dappco.re/go/* deps pinned to v0.8.0-alpha.1.
* AX-10 scaffold: tests/cli/agent/Taskfile.yaml + driver coverage for
the agent CLI surface.
* OpenBrain integration (PHP + Go):
- GET /v1/brain/search — Elasticsearch full-text endpoint
- GET /v1/brain/tags + /v1/brain/scopes
- /v1/brain/recall extended with org filter, keywords, boost_keywords
- artisan commands: brain:reindex, brain:clean, brain:prune
- BrainService::remember() async via EmbedMemory job
- Elasticsearch indexing in BrainService + DeleteFromIndex job
- org filter at BuildQdrantFilter boundary
- Postgres portability for brain connection + migrations
- Qdrant api-key header wired from BRAIN_QDRANT_API_KEY
- org scoping + index cleanup + reindex flags + MCP schemas +
circuit layer for fleet readiness
* Fleet + dispatch pipeline:
- MetaReader contract with Forgejo-backed implementation
- ScanForWork + ManagePullRequest routed through MetaReader
- WorkspaceState workflow progress updated on dispatch push
- SessionArtifact passes description as metadata array
- fleet task vs AgentSession separation documented
- HTTPS cert regression tests + fleet sync audit
* Plugin restructure:
- core-go / core-php / infra plugin directories scaffolded
- dappcore -> core rename completed; Gemini stub added
- hermes plugins: openbrain_memory.py MemoryProvider,
openbrain_context.py ContextEngine, plus SKILL.md docs
* Stdio MCP wrappers:
- cmd/openbrain-mcp (Claude Code)
- hermes_runner_mcp
- camofox_mcp
* §14A primitives migration:
- sync.Once purged from pkg/agentic via core.Once
* Defensive hardening across the agentic layer:
- tool execution + template validation
- rate limiting + api key identifiers
- prep workspace writes validated; fail-closed on specs copy
- persistence failure surfacing
- corrupt-dispatch-report preservation with timestamped backup
- empty MCP session id rejection
- core reference edge cases
- append-write failure checks
- composer autoload corrected to php/ tree layout
- codex CLI 0.122+ config.toml compatibility
* Coverage: 18 documented gosec false-positives recorded in
.gitleaksignore. Test triad expanded for monitor harvest, agentic
message + dispatch sync contracts.
* Repo hygiene:
- removed 79 previously-tracked .DS_Store entries across the tree
- dropped empty .core/TODO.md
Refs: RFC-CORE-008-AGENT-EXPERIENCE.md (AX-1, AX-6, AX-10)
RFC §9 (agentic_auth_login MCP tool)
RFC §14A (Once primitive)
Co-authored-by: Athena <athena@lthn.ai>
Co-authored-by: Hephaestus <hephaestus@lthn.ai>
Co-authored-by: Cladius Maximus <cladius@lthn.ai>
📝 Walkthrough
This pull request reorganises plugin families, refactors MCP tool registration architecture, updates Go and PHP agent backends with expanded features (org scoping, async indexing, metadata extraction), and adds comprehensive test coverage across multiple subsystems.
Actionable comments posted: 1
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
php/Mcp/Tools/Agent/Brain/BrainList.php (1)
1-3: ⚠️ Potential issue | 🟡 Minor
Missing SPDX license header.
The file is missing the required `// SPDX-License-Identifier: EUPL-1.2` header before the opening `<?php` tag or immediately after it.
As per coding guidelines: "Include `// SPDX-License-Identifier: EUPL-1.2` header on every source file"
🔧 Suggested fix
 <?php
+// SPDX-License-Identifier: EUPL-1.2
+
 declare(strict_types=1);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Mcp/Tools/Agent/Brain/BrainList.php` around lines 1 - 3, Add the required SPDX license header to this source file by inserting the exact comment string "// SPDX-License-Identifier: EUPL-1.2" either immediately before the opening "<?php" tag or directly after it; update BrainList.php (the file containing the BrainList class) so the header appears at the top of the file to satisfy the codebase guideline.
docs/audits/pipeline-verify-20260423.md (1)
233-254: ⚠️ Potential issue | 🟡 Minor
Ticket #7 in this audit looks stale against the current PR content.
The report still lists “Fix `session_artifact` MCP metadata typing and add regression coverage” as open, but this PR includes dedicated regression coverage for that path. Please either mark it resolved here or explicitly label this section as historical snapshot output.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/audits/pipeline-verify-20260423.md` around lines 233 - 254, The audit entry claiming ticket `#7` is still open is stale: update the audit summary to either mark the “Fix session_artifact MCP metadata typing and add regression coverage” ticket as resolved or annotate that the section is a historical snapshot; reference the SessionArtifact class (SessionArtifact) and the new regression test added in AgentSessionTest (the MCP artefact tool path coverage) so reviewers can verify the fix rather than leaving the ticket listed as pending.
pkg/agentic/dispatch.go (1)
379-405: ⚠️ Potential issue | 🟠 Major
Do not mount Codex state into every containerised agent.
The base container args always mount `~/.codex`, so any non-Codex containerised agent inherits files it does not need. That unnecessarily broadens credential and config exposure across agents.
Suggested fix
 containerArgs = append(containerArgs,
     "-v", core.Concat(workspaceDir, ":/workspace"),
     "-v", core.Concat(metaDir, ":/workspace/.meta"),
     "-w", "/workspace/repo",
-    "-v", core.Concat(core.JoinPath(home, ".codex"), ":/home/agent/.codex"),
     "-e", "OPENAI_API_KEY",
     "-e", "ANTHROPIC_API_KEY",
     "-e", "GEMINI_API_KEY",
@@
     "-e", "GOFLAGS=-mod=mod",
 )
+
+if command == "codex" {
+    containerArgs = append(containerArgs,
+        "-v", core.Concat(core.JoinPath(home, ".codex"), ":/home/agent/.codex"),
+    )
+}

 if command == "claude" {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agentic/dispatch.go` around lines 379 - 405, The base containerArgs currently always mounts the host ~/.codex (via core.Concat(core.JoinPath(home, ".codex"), ":/home/agent/.codex")), exposing credentials to every agent; remove that mount from the default append and instead append it conditionally only for agents that need Codex state (e.g., check the command value or agent capability and when it indicates the Codex agent, call containerArgs = append(containerArgs, "-v", core.Concat(core.JoinPath(home, ".codex"), ":/home/agent/.codex:ro"))); update the code paths around the containerArgs construction (symbols: containerArgs, home, core.JoinPath, command) to ensure the mount is read-only and only applied for the specific agent(s) that require it (e.g., command == "codex" or an explicit requiresCodex flag).
docs/RFC-AGENT.md (1)
208-215: ⚠️ Potential issue | 🟡 Minor
The Brain API table is missing the new read endpoints.
This RFC section still documents only remember / recall / forget / list, so consumers following the contract will miss the new Brain endpoints added in this release for search and aggregations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/RFC-AGENT.md` around lines 208 - 215, The Brain API table under "Brain (`/v1/brain/*`)" is missing the new read endpoints; update the markdown table to include rows for the newly added endpoints (e.g., /v1/brain/search and /v1/brain/aggregations or whichever exact paths were implemented) and specify the correct HTTP method and brief Action (e.g., "Search" or "Aggregations") for each row so the RFC matches the runtime contract; ensure the endpoint names and methods you add match the actual handler names/paths in the codebase (search handler, aggregation handler) before committing.
🟠 Major comments (23)
google/gemini-cli/.gemini-plugin/plugin.json-6-6 (1)
6-6: ⚠️ Potential issue | 🟠 Major
Update manifest repository URL to the migrated canonical path.
Line 6 still points to `forge.lthn.ai`, which conflicts with the release migration to `dappco.re/*` and can misdirect plugin metadata consumers.
Suggested fix
-    "repository": "https://forge.lthn.ai/core/agent.git",
+    "repository": "https://dappco.re/go/agent.git",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@google/gemini-cli/.gemini-plugin/plugin.json` at line 6, Update the "repository" field in plugin.json to point to the migrated canonical path (replace the forge.lthn.ai URL with the new dappco.re/* repository URL) so plugin metadata consumers resolve to the release-migrated location; edit the "repository" property value in .gemini-plugin/plugin.json accordingly and ensure the new URL follows the same scheme (https://dappco.re/...) used by other migrated manifests.
.core/reference/runtime.go-156-158 (1)
156-158: ⚠️ Potential issue | 🟠 Major
Startup nil-guard is good, but shutdown can still panic on the same nil receiver.
Line 156 now allows `(*Runtime)(nil).ServiceStartup(...)` to no-op safely, but `ServiceShutdown` still dereferences `r` at Line 164. That means lifecycle code can pass startup and then panic during shutdown in the same nil-runtime scenario.
Suggested fix
 func (r *Runtime) ServiceShutdown(ctx context.Context) Result {
-    if r.Core != nil {
-        return r.Core.ServiceShutdown(ctx)
-    }
-    return Result{OK: true}
+    if r == nil || r.Core == nil {
+        return Result{OK: true}
+    }
+    return r.Core.ServiceShutdown(ctx)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.core/reference/runtime.go around lines 156 - 158, ServiceShutdown can still panic when called on a nil receiver because it dereferences r/Core; add the same nil-guard logic used in ServiceStartup to the start of ServiceShutdown so that if r == nil || r.Core == nil it returns Result{OK: true} without accessing fields. Update the ServiceShutdown method on Runtime to check the receiver and its Core before any dereference to safely no-op during shutdown in nil-runtime scenarios.
php/Services/AgentToolRegistry.php-286-289 (1)
286-289: ⚠️ Potential issue | 🟠 Major
Include the tool name in the execution-rate cache key.
This counter is keyed only by API key, so one noisy tool can exhaust the budget for every other tool on that key. That makes the limiter global-per-key, not per-tool.
Suggested fix
-    private function executionRateCacheKey(ApiKey $apiKey): string
+    private function executionRateCacheKey(ApiKey $apiKey, string $toolName): string
     {
-        return 'agent_api_key_tool_rate:'.$this->apiKeyIdentifier($apiKey);
+        return 'agent_api_key_tool_rate:'.$this->apiKeyIdentifier($apiKey).':'.$toolName;
     }
@@
-    $cacheKey = $this->executionRateCacheKey($apiKey);
+    $cacheKey = $this->executionRateCacheKey($apiKey, $toolName);
Also applies to: 342-350
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Services/AgentToolRegistry.php` around lines 286 - 289, The execution-rate cache key is currently only keyed by ApiKey (method executionRateCacheKey), causing rate limits to be global per API key instead of per-tool; change executionRateCacheKey to accept the tool name (string) or a Tool object and include that tool identifier in the returned key (e.g. append ':' . $toolName or $tool->getName()), update all callers (including the other similar routine referenced around the 342-350 block) to pass the tool name, and keep the key format consistent across both places so each (apiKey, tool) pair has its own cache entry.
claude/camofox_mcp/server.py-382-394 (1)
382-394: ⚠️ Potential issue | 🟠 Major
Handle framing failures inside the serve loop.
Line 384 sits outside the `try`, so a bad `Content-Length` header from `read_stdio_message()` currently tears down the fallback server instead of returning a JSON-RPC parse error and continuing.
Suggested fix
 def serve(self, reader: BinaryIO, writer: BinaryIO) -> None:
     while True:
-        payload, framing = read_stdio_message(reader)
+        try:
+            payload, framing = read_stdio_message(reader)
+        except JsonRpcError as exc:
+            write_stdio_message(
+                writer,
+                jsonrpc_error(None, exc.code, exc.message, exc.data),
+                framing=self._output_framing,
+            )
+            continue
+        except UnicodeDecodeError as exc:
+            write_stdio_message(
+                writer,
+                jsonrpc_error(None, -32700, f"Parse error: {exc}"),
+                framing=self._output_framing,
+            )
+            continue
         if payload is None:
             break
         self._output_framing = framing or self._output_framing
Verify each finding against the current code and only fix it if needed. In `@claude/camofox_mcp/server.py` around lines 382 - 394, The serve loop currently calls read_stdio_message(reader) outside the try/except so a malformed framing (bad Content-Length) can raise and tear down the server; wrap the call to read_stdio_message and the subsequent update of self._output_framing in the same try block that handles JSON decoding so framing/parsing errors are caught and converted to a JSON-RPC error response via jsonrpc_error and written with write_stdio_message, then continue the loop; look for the serve method, read_stdio_message, self._output_framing, json.loads, jsonrpc_error, and write_stdio_message to implement this change.
claude/camofox_mcp/server.py-549-575 (1)
549-575: ⚠️ Potential issue | 🟠 Major
Parse the full header block before deciding the framing.
This only accepts `Content-Length` when it is the first non-empty line. If a client sends `Content-Type` or any extension header first, Line 575 treats that header as line-delimited JSON, desynchronises the stream, and breaks interoperability with an otherwise valid framed peer.
Suggested fix
 def read_stdio_message(reader: BinaryIO) -> tuple[str | None, str | None]:
     while True:
         line = reader.readline()
         if line == b"":
             return None, None
         if line in (b"\n", b"\r\n"):
             continue
-        if line.lower().startswith(b"content-length:"):
-            try:
-                content_length = int(line.split(b":", 1)[1].strip())
-            except (IndexError, ValueError) as exc:
-                raise JsonRpcError(-32700, f"Invalid Content-Length header: {line!r}") from exc
-
-            while True:
-                header = reader.readline()
-                if header == b"":
-                    return None, "content-length"
-                if header in (b"\n", b"\r\n"):
-                    break
-
-            payload = reader.read(content_length)
-            if len(payload) != content_length:
-                return None, "content-length"
-            return payload.decode("utf-8"), "content-length"
+        if line[:1] in (b"{", b"["):
+            return line.strip().decode("utf-8"), "line"
+
+        headers = [line]
+        while True:
+            header = reader.readline()
+            if header == b"":
+                return None, "content-length"
+            if header in (b"\n", b"\r\n"):
+                break
+            headers.append(header)
+
+        content_length: int | None = None
+        for header in headers:
+            try:
+                name, value = header.split(b":", 1)
+            except ValueError as exc:
+                raise JsonRpcError(-32700, f"Invalid header line: {header!r}") from exc
+            if name.strip().lower() == b"content-length":
+                try:
+                    content_length = int(value.strip())
+                except ValueError as exc:
+                    raise JsonRpcError(-32700, f"Invalid Content-Length header: {header!r}") from exc
+
+        if content_length is None:
+            raise JsonRpcError(-32700, "Missing Content-Length header")
+
+        payload = reader.read(content_length)
+        if len(payload) != content_length:
+            return None, "content-length"
+        return payload.decode("utf-8"), "content-length"
         return line.strip().decode("utf-8"), "line"
Verify each finding against the current code and only fix it if needed. In `@claude/camofox_mcp/server.py` around lines 549 - 575, The function read_stdio_message currently assumes Content-Length is the first non-empty header and treats any other first header as a line; update read_stdio_message to read and parse the entire header block (loop reading headers until an empty CRLF separator) into a headers map, handling end-of-stream as it does now, then check headers case-insensitively for "content-length" and, if present, parse it and read exactly that many bytes into payload (raising JsonRpcError on invalid value and returning the same (None, "content-length") sentinel on short reads), otherwise treat the payload as a single line-delimited JSON by returning the trimmed first-line string with the "line" tag; keep existing decode/return semantics and error handling in functions read_stdio_message and its Content-Length/line branches.
.core/reference/error.go-378-378 (1)
378-378: ⚠️ Potential issue | 🟠 Major
Do not log raw crash-report bytes.
Including `raw=` with the full payload can leak sensitive metadata and stack content into logs. Keep logs to path/error/size and preserve raw bytes only in the `.corrupt` artefact.
🔒 Safer logging patch
-Default().Error(Concat("crash report file corrupted path=", h.filePath, " err=", err.Error(), " raw=", string(data)))
+Default().Error(Concat("crash report file corrupted path=", h.filePath, " err=", err.Error(), " bytes=", Sprint(len(data))))
Verify each finding against the current code and only fix it if needed. In @.core/reference/error.go at line 378, The log call currently includes raw crash bytes via Default().Error(Concat(... " raw=", string(data))) which can leak sensitive info; remove the raw payload from the log and instead log only path, error, and payload size (use len(data)) using the existing Concat call and h.filePath and err; keep writing the raw bytes only to the .corrupt artifact as before (ensure the .corrupt write code still uses data) so raw content is preserved off‑log but never printed.
.core/reference/error.go-380-380 (1)
380-380: 🛠️ Refactor suggestion | 🟠 Major
Replace `os.WriteFile()` with `fs.WriteMode()`.
Line 380 uses the standard library directly, bypassing the project's file-I/O abstraction. Use `fs.WriteMode(backupPath, data, 0600)` instead, per project coding guidelines.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.core/reference/error.go at line 380, The code currently calls os.WriteFile(...) when writing the backup (the expression creating backupErr using os.WriteFile(backupPath, data, 0600)); replace that call with the project abstraction fs.WriteMode(backupPath, data, 0600) and ensure the fs package is imported/used where error backupErr is assigned so the rest of the error handling remains unchanged (keep the backupErr variable and existing error path logic).
php/Actions/Sync/PushDispatchHistory.php-73-76 (1)
73-76: ⚠️ Potential issue | 🟠 Major
Scope plan resolution to the current workspace before writing sync state.
`resolvePlan()` can currently match plans outside the active workspace (by `agent_plan_id` or `plan_slug`), which risks cross-workspace state updates.
Suggested fix
-    $planUpdate = $this->resolvePlanUpdate($dispatch, $status);
+    $planUpdate = $this->resolvePlanUpdate($workspaceId, $dispatch, $status);

-    private function resolvePlanUpdate(array $dispatch, string $status): ?array
+    private function resolvePlanUpdate(int $workspaceId, array $dispatch, string $status): ?array
     {
-        $plan = $this->resolvePlan($dispatch);
+        $plan = $this->resolvePlan($workspaceId, $dispatch);
         if (! $plan instanceof AgentPlan) {
             return null;
         }
@@
-    private function resolvePlan(array $dispatch): ?AgentPlan
+    private function resolvePlan(int $workspaceId, array $dispatch): ?AgentPlan
     {
         $planId = (int) ($dispatch['agent_plan_id'] ?? 0);
         if ($planId > 0) {
-            $plan = AgentPlan::find($planId);
+            $plan = AgentPlan::query()
+                ->whereKey($planId)
+                ->where('workspace_id', $workspaceId)
+                ->first();
             if ($plan instanceof AgentPlan) {
                 return $plan;
             }
         }
@@
-        return AgentPlan::where('slug', $planSlug)->first();
+        return AgentPlan::query()
+            ->where('workspace_id', $workspaceId)
+            ->where('slug', $planSlug)
+            ->first();
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Actions/Sync/PushDispatchHistory.php` around lines 73 - 76, `resolvePlan()`/`resolvePlanUpdate()` can match plans from other workspaces, so restrict lookups to the current workspace before writing sync state: update `resolvePlan` (and usages in `resolvePlanUpdate` and the block handling lines ~127-163) to include the active workspace id/instance check (e.g., match `workspace_id` or use `$this->workspace` context) when querying by `agent_plan_id` or `plan_slug`, and only return a planUpdate/plan_id if the plan belongs to the current workspace; ensure all branches that build `$planUpdates[$planUpdate['plan_id']]` perform this workspace-scoped validation first.

go.mod-138-138 (1)
138-138: ⚠️ Potential issue | 🟠 Major

Remove the local `replace` directive from the committed `go.mod`.

A relative replacement (`../mcp`) is environment-specific and will break consumers and CI environments that do not have that sibling directory available. The module already declares a versioned requirement (`dappco.re/go/mcp v0.8.0-alpha.1`), which is sufficient for consumers. Keep local overrides in a workspace-level `go.work` file instead, as described in the development guide. This ensures the module can be used by external consumers and in CI environments without additional directory structure assumptions.

Suggested fix
```diff
-replace dappco.re/go/mcp => ../mcp
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@go.mod` at line 138, remove the local replace directive `replace dappco.re/go/mcp => ../mcp` from `go.mod` because it is environment-specific; keep the declared requirement `dappco.re/go/mcp v0.8.0-alpha.1` intact and, if you need a local override during development, add the replace to a workspace-level `go.work` instead (or use `go work use ../mcp`) so CI and external consumers are not broken by a committed relative replace.

php/tests/Feature/Sync/PushDispatchHistoryTest.php-19-134 (1)
19-134: ⚠️ Potential issue | 🟠 Major

Use Pest syntax for this test suite.
This file is written in PHPUnit class style, but the project standard requires Pest in `php/tests/**`. Please convert these tests to Pest (`it(...)`/`test(...)`) before merge.

As per coding guidelines:
`php/tests/**/*.php`: Use Pest for all testing instead of PHPUnit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/tests/Feature/Sync/PushDispatchHistoryTest.php` around lines 19 - 134, convert the PHPUnit class `PushDispatchHistoryTest` (methods `test_PushDispatchHistory_handle_Good_updatesWorkspaceStateForAgentPlanId`, `test_PushDispatchHistory_handle_Bad_resolvesPlanSlugForWorkspaceState`, `test_PushDispatchHistory_handle_Ugly_skipsWorkspaceStateWhenPlanIsMissing`, plus setUp/tearDown and the helper `assertSyncState`) to Pest style: replace setUp/tearDown with beforeEach/afterEach, move `$workspace` and `$plan` into the shared closure scope, register RefreshDatabase via `uses(RefreshDatabase::class)`, and convert each `test_*` method into a `test(...)` or `it(...)` closure that calls `PushDispatchHistory::run` and performs the same assertions; port the private `assertSyncState` method into a local helper closure or inline assertions within tests to preserve the same checks (`WorkspaceState::forPlan`, `assertCount`, `assertSame`, etc.).

php/Migrations/2026_04_24_000001_add_org_to_brain_memories.php-23-25 (1)
23-25: ⚠️ Potential issue | 🟠 Major

Guard `after('project')` for schema portability.

If `brain_memories.project` does not exist in a target environment, this migration can fail while adding `org`. Please gate the `after('project')` modifier behind a prior column check.

Proposed fix
```diff
 public function up(): void
 {
     $schema = Schema::connection($this->getConnection());

     if (! $schema->hasTable('brain_memories') || $schema->hasColumn('brain_memories', 'org')) {
         return;
     }

-    $schema->table('brain_memories', function (Blueprint $table): void {
-        $table->string('org', 128)->nullable()->after('project')->index();
+    $hasProjectColumn = $schema->hasColumn('brain_memories', 'project');
+
+    $schema->table('brain_memories', function (Blueprint $table) use ($hasProjectColumn): void {
+        $column = $table->string('org', 128)->nullable();
+        if ($hasProjectColumn) {
+            $column->after('project');
+        }
+        $column->index();
     });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Migrations/2026_04_24_000001_add_org_to_brain_memories.php` around lines 23 - 25, the migration adds an `org` column using `->after('project')`, which will fail if the `brain_memories.project` column doesn't exist; update the `up()` migration to check `Schema::hasColumn('brain_memories', 'project')` and only chain `->after('project')` when true, otherwise add the column without `->after`; keep the column definition (`$table->string('org', 128)->nullable()->index()`) and ensure the `down()` still drops `org` as before.

pkg/agentic/commands_platform.go-165-187 (1)
165-187: ⚠️ Potential issue | 🟠 Major

Harden key persistence and only report success when it was actually saved.
`brain.key` is a credential, but this uses the generic write path and then prints `saved to:` even when the warning branches say the file was not persisted. Please write it with restrictive permissions and gate the success message on the write succeeding.

As per coding guidelines, "Use `WriteMode(path, content, 0600)` for sensitive files".

🔐 Proposed fix
```diff
 // Persist the raw key so the agent authenticates on the next invocation.
 keyPath := core.JoinPath(HomeDir(), ".claude", "brain.key")
+saved := false
 if r := fs.EnsureDir(core.PathDir(keyPath)); !r.OK {
     core.Print(nil, "warning: could not create %s — key not persisted", core.PathDir(keyPath))
-} else if r := fs.Write(keyPath, output.Key.Key); !r.OK {
+} else if r := fs.WriteMode(keyPath, output.Key.Key, 0600); !r.OK {
     core.Print(nil, "warning: could not write %s — key not persisted", keyPath)
 } else {
     s.brainKey = output.Key.Key
+    saved = true
 }
 core.Print(nil, "logged in")
 if output.Key.Prefix != "" {
     core.Print(nil, "key prefix: %s", output.Key.Prefix)
@@
 if len(output.Key.Permissions) > 0 {
     core.Print(nil, "permissions: %s", core.Join(",", output.Key.Permissions...))
 }
-core.Print(nil, "saved to: %s", keyPath)
+if saved {
+    core.Print(nil, "saved to: %s", keyPath)
+}
 return core.Result{OK: true}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agentic/commands_platform.go` around lines 165 - 187, the code currently attempts to write the credential to `keyPath` but prints "saved to:" even when `EnsureDir` or `fs.Write` failed, and uses the generic write path; change the flow so that after calling `fs.EnsureDir(core.PathDir(keyPath))` you call `fs.WriteMode(keyPath, output.Key.Key, 0600)` (or the equivalent `WriteMode` function provided by `fs`) and only on its OK result set `s.brainKey = output.Key.Key` and print the "saved to: %s" message; keep printing warnings when `EnsureDir` or the write fail and do not print the success message unless `WriteMode` returned OK, and still print prefix/name/expires/permissions unchanged when present.

pkg/agentic/pr.go-500-518 (1)
500-518: ⚠️ Potential issue | 🟠 Major

Block deletion of non-agent branches.
This forwards any branch name straight to Forge. A bad tool call can remove `dev`, `main`, or another shared branch just as easily as an ephemeral agent branch.

🔒 Proposed guard
```diff
 func (s *PrepSubsystem) deleteBranch(ctx context.Context, _ *mcp.CallToolRequest, input DeleteBranchInput) (*mcp.CallToolResult, DeleteBranchOutput, error) {
     if s.forgeToken == "" {
         return nil, DeleteBranchOutput{}, core.E("deleteBranch", "no Forge token configured", nil)
     }
     if s.forge == nil {
         return nil, DeleteBranchOutput{}, core.E("deleteBranch", "forge client is not configured", nil)
     }
     if input.Repo == "" || input.Branch == "" {
         return nil, DeleteBranchOutput{}, core.E("deleteBranch", "repo and branch are required", nil)
     }
+    if !core.HasPrefix(input.Branch, "agent/") || input.Branch == "dev" || input.Branch == "main" || input.Branch == "master" {
+        return nil, DeleteBranchOutput{}, core.E("deleteBranch", "only agent branches may be deleted", nil)
+    }
     org := input.Org
     if org == "" {
         org = "core"
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agentic/pr.go` around lines 500 - 518, the `deleteBranch` method in `PrepSubsystem` currently forwards any branch name to `s.forge.Branches.DeleteBranch`, which allows deletion of protected branches; add validation in `deleteBranch` (operating on `DeleteBranchInput.Branch`) to block protected names (e.g., "main", "master", "dev", "release", or any org-wide canonical branches) and only allow deletion when the branch matches an explicit ephemeral pattern (for example a configurable prefix like "agent/" or a UUID pattern); return a descriptive error via `core.E("deleteBranch", ...)` when validation fails and do not call `s.forge.Branches.DeleteBranch` unless the branch passes the check. Ensure tests or callers that expect the previous behavior are updated to use ephemeral names.

pkg/agentic/deps.go-14-16 (1)
14-16: ⚠️ Potential issue | 🟠 Major

Do not suppress real `go.mod` read failures.

This now returns `nil` for every `fs.Read` failure. An unreadable or malformed `go.mod` will be treated like "nothing to do", so dependency cloning is skipped and the later failure is much harder to diagnose.

🩹 Proposed fix
```diff
 r := fs.Read(goModPath)
 if !r.OK {
-    return nil
+    if !fs.Exists(goModPath) {
+        return nil
+    }
+    if err, ok := r.Value.(error); ok {
+        return core.E("cloneWorkspaceDeps", "read go.mod", err)
+    }
+    return core.E("cloneWorkspaceDeps", "read go.mod", nil)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agentic/deps.go` around lines 14 - 16, the code currently swallows all `fs.Read(goModPath)` failures by returning nil when `r.OK` is false; change this to propagate the read error instead: when `r := fs.Read(goModPath)` yields `!r.OK`, return the underlying error (wrapped with context, e.g., "reading go.mod") so callers can see the actual failure, and surface the real error rather than treating it as a no-op.

php/tests/Feature/BrainServiceTest.php-10-58 (1)
10-58: 🛠️ Refactor suggestion | 🟠 Major

Please migrate this test file to Pest style.
This file currently uses PHPUnit class-based tests, but test files in this repo are standardised on Pest.
Example Pest-style rewrite
```diff
-use PHPUnit\Framework\TestCase;
-
-class BrainServiceTest extends TestCase
-{
-    public function test_BrainService_buildQdrantFilter_Good_FiltersByOrgOnly(): void
-    {
-        $filter = (new BrainService)->buildQdrantFilter([
-            'org' => 'core',
-        ]);
-
-        $this->assertSame([
-            'must' => [
-                ['key' => 'org', 'match' => ['value' => 'core']],
-            ],
-        ], $filter);
-    }
-
-    public function test_BrainService_buildQdrantFilter_Bad_CombinesOrgAndProject(): void
-    {
-        $filter = (new BrainService)->buildQdrantFilter([
-            'org' => 'core',
-            'project' => 'agent',
-        ]);
-
-        $this->assertSame([
-            'must' => [
-                ['key' => 'org', 'match' => ['value' => 'core']],
-                ['key' => 'project', 'match' => ['value' => 'agent']],
-            ],
-        ], $filter);
-    }
-
-    public function test_BrainService_buildQdrantFilter_Ugly_CombinesWorkspaceOrgAndProject(): void
-    {
-        $filter = (new BrainService)->buildQdrantFilter([
-            'workspace_id' => 42,
-            'org' => 'core',
-            'project' => 'agent',
-        ]);
-
-        $this->assertSame([
-            'must' => [
-                ['key' => 'workspace_id', 'match' => ['value' => 42]],
-                ['key' => 'org', 'match' => ['value' => 'core']],
-                ['key' => 'project', 'match' => ['value' => 'agent']],
-            ],
-        ], $filter);
-    }
-}
+test('BrainService_buildQdrantFilter_Good_filters_by_org_only', function (): void {
+    $filter = (new BrainService)->buildQdrantFilter(['org' => 'core']);
+
+    expect($filter)->toBe([
+        'must' => [
+            ['key' => 'org', 'match' => ['value' => 'core']],
+        ],
+    ]);
+});
+
+test('BrainService_buildQdrantFilter_Bad_combines_org_and_project', function (): void {
+    $filter = (new BrainService)->buildQdrantFilter([
+        'org' => 'core',
+        'project' => 'agent',
+    ]);
+
+    expect($filter)->toBe([
+        'must' => [
+            ['key' => 'org', 'match' => ['value' => 'core']],
+            ['key' => 'project', 'match' => ['value' => 'agent']],
+        ],
+    ]);
+});
+
+test('BrainService_buildQdrantFilter_Ugly_combines_workspace_org_and_project', function (): void {
+    $filter = (new BrainService)->buildQdrantFilter([
+        'workspace_id' => 42,
+        'org' => 'core',
+        'project' => 'agent',
+    ]);
+
+    expect($filter)->toBe([
+        'must' => [
+            ['key' => 'workspace_id', 'match' => ['value' => 42]],
+            ['key' => 'org', 'match' => ['value' => 'core']],
+            ['key' => 'project', 'match' => ['value' => 'agent']],
+        ],
+    ]);
+});
```

As per coding guidelines
`php/tests/**/*.php`: "Use Pest for all testing instead of PHPUnit".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/tests/Feature/BrainServiceTest.php` around lines 10 - 58, convert the PHPUnit class-based tests to Pest-style function tests: replace the `BrainServiceTest` class and its methods with three Pest `test()` calls that call `(new BrainService)->buildQdrantFilter` with the same inputs and assert the returned array using Pest's expectation helpers (e.g., `expect($filter)->toBe([...])` or `expect($filter)->toEqual([...])`). Keep the same test scenarios and names (Good_FiltersByOrgOnly, Bad_CombinesOrgAndProject, Ugly_CombinesWorkspaceOrgAndProject) and reference the `BrainService::buildQdrantFilter` invocation and expected arrays exactly as in the original assertions. Remove the PHPUnit TestCase import and class declaration so the file is plain Pest tests.

pkg/agentic/dispatch.go-272-285 (1)
272-285: ⚠️ Potential issue | 🟠 Major

Keep explicit runtime choices deterministic.
If `preferred` is set but unavailable, this falls through the auto-probe loop and can return a different backend entirely. That makes an invalid `apple`/`podman` setting silently execute on another runtime instead of the documented Docker fallback.

Suggested fix
```diff
 func resolveContainerRuntime(preferred string) string {
-    switch preferred {
-    case RuntimeApple, RuntimeDocker, RuntimePodman:
-        if runtimeAvailable(preferred) {
-            return preferred
-        }
-    }
-    for _, candidate := range []string{RuntimeApple, RuntimeDocker, RuntimePodman} {
-        if runtimeAvailable(candidate) {
-            return candidate
-        }
-    }
-    return RuntimeDocker
+    switch preferred {
+    case "", RuntimeAuto:
+        for _, candidate := range []string{RuntimeApple, RuntimeDocker, RuntimePodman} {
+            if runtimeAvailable(candidate) {
+                return candidate
+            }
+        }
+        return RuntimeDocker
+    case RuntimeApple, RuntimeDocker, RuntimePodman:
+        if runtimeAvailable(preferred) {
+            return preferred
+        }
+        return RuntimeDocker
+    default:
+        return RuntimeDocker
+    }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/agentic/dispatch.go` around lines 272 - 285, the `resolveContainerRuntime` function currently probes other backends when a specified preferred runtime (the `preferred` parameter; RuntimeApple/RuntimeDocker/RuntimePodman) is unavailable, causing a non-deterministic swap; change the logic so that if `preferred` is set to a known runtime and `runtimeAvailable(preferred)` is false, the function immediately returns RuntimeDocker (the explicit documented fallback) instead of falling through to the probe loop; only when `preferred` is empty or unrecognized should you iterate the candidate list with `runtimeAvailable` to pick a runtime.

php/Services/BrainService.php-181-188 (1)
181-188: ⚠️ Potential issue | 🟠 Major

Keyword recall bypasses `agent_id` scoping in the Elasticsearch path.

`recall()` merges Elasticsearch hits into `$scoreMap`, but the indexed document does not store `agent_id` and `buildElasticFilters()` never applies it. With `keywords` plus an `agent_id` filter, cross-agent hits can be reintroduced after Qdrant has already been scoped.

🩹 Include `agent_id` in the ES document and filters
```diff
 private function buildElasticDocument(BrainMemory $memory): array
 {
     return [
         'id' => $memory->id,
+        'agent_id' => $memory->agent_id,
         'content' => $memory->content,
         'type' => $memory->type,
         'tags' => $memory->tags ?? [],
         'project' => $memory->project,
         'workspace_id' => $memory->workspace_id,
@@
-    foreach (['workspace_id', 'org', 'project', 'type'] as $field) {
+    foreach (['workspace_id', 'org', 'project', 'type', 'agent_id'] as $field) {
         if (! isset($filters[$field])) {
             continue;
         }
```

Also applies to: 199-230, 523-536, 565-590
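The filter-building rule can be sketched in Go as well. This is a minimal illustration, not the project's code: the field list is taken from the review, the map-based term clauses are an assumption about the Elasticsearch query shape, and the real implementation is the PHP `buildElasticFilters()` above.

```go
package main

import "fmt"

// buildFilters emits one term clause per known field, including agent_id so
// keyword recall stays agent-scoped even on the Elasticsearch path.
func buildFilters(in map[string]any) []map[string]any {
	var out []map[string]any
	// Iterating a fixed field list keeps clause order deterministic.
	for _, field := range []string{"workspace_id", "org", "project", "type", "agent_id"} {
		v, ok := in[field]
		if !ok {
			continue
		}
		out = append(out, map[string]any{"term": map[string]any{field: v}})
	}
	return out
}

func main() {
	filters := buildFilters(map[string]any{"org": "core", "agent_id": 42})
	// org and agent_id are both present, so both term clauses are applied.
	fmt.Println(len(filters))
}
```

The key property is that `agent_id` sits in the same allowlist as the other scoping fields; omitting it there is exactly the bypass the finding describes.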
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Services/BrainService.php` around lines 181 - 188, the Elasticsearch keyword recall bypasses agent scoping because ES docs lack `agent_id` and `buildElasticFilters()` isn't applying it; update the ES indexing code to store `agent_id` on each document and modify `buildElasticFilters()` to add a term filter for `agent_id` when one is present, then ensure calls to `elasticSearch(...)` (used by `recall()`, `scoreElasticResults()`, and the other similar blocks around the ranges noted) pass that filter so only hits matching the agent are returned before merging into `$scoreMap`; audit the other affected sites (the blocks around 199-230, 523-536, 565-590) and apply the same change.

php/Pipeline/ForgejoMetaReader.php-377-395 (1)
377-395: ⚠️ Potential issue | 🟠 Major

Do not mix issue numbers with database IDs in the child lookup path.

`buildIssueLookup()` indexes by `number`, but `extractIssueId()` falls back to `$child['id']`. When a child payload lacks `number`, `$issueLookup[$issueId]` will miss and the later state / linked-PR enrichment quietly degrades to defaults.

🩹 Safer fix
```diff
 private function extractIssueId(array $child): ?int
 {
     return $this->intOrNull(
         $child['issue_id']
             ?? $child['number']
             ?? $child['issue']['number']
-            ?? $child['id']
             ?? null,
     );
 }
```

Also applies to: 401-409
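The dual-key lookup the AI-agent prompt below suggests can be sketched in Go. The `Issue` shape here is illustrative, not the actual Forgejo payload; the point is indexing each record under both its external number and its database id so either key resolves.

```go
package main

import "fmt"

// Issue is a hypothetical stand-in for the Forgejo payload fields discussed
// in the review: an external issue number and an internal database id.
type Issue struct {
	Number int
	ID     int
	Title  string
}

// buildIssueLookup indexes each issue under both keys, so a child payload
// that carries either form still finds the record.
func buildIssueLookup(issues []Issue) map[int]Issue {
	lookup := make(map[int]Issue)
	for _, is := range issues {
		if is.Number > 0 {
			lookup[is.Number] = is
		}
		if is.ID > 0 {
			lookup[is.ID] = is
		}
	}
	return lookup
}

func main() {
	lookup := buildIssueLookup([]Issue{{Number: 12, ID: 3407, Title: "fleet pipeline"}})
	fmt.Println(lookup[12].Title, lookup[3407].Title)
}
```

One caveat: numbers and database ids share a single key space in this sketch, so a collision between an issue number and another issue's id would overwrite an entry; that trade-off exists in the PHP version of the fix as well.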
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Pipeline/ForgejoMetaReader.php` around lines 377 - 395, `buildIssueLookup` currently only keys issues by the external `number` while `extractIssueId` may fall back to a database `id`, causing lookups to miss; modify `buildIssueLookup` (using `intOrNull`) to add entries for both numeric keys when present: compute `$number = $this->intOrNull($issue['number'] ?? null)` and `$dbId = $this->intOrNull($issue['id'] ?? null)` and set `$lookup[$number] = $issue` and `$lookup[$dbId] = $issue` (only when each is non-null) so `extractIssueId` and the later state/linked-PR enrichment can find the record regardless of which id form is used.

php/Services/BrainService.php-238-246 (1)
238-246: ⚠️ Potential issue | 🟠 Major

Treat missing index documents as a successful forget.

`forget()` now queues index cleanup for every deleted row, including memories that may still have `indexed_at = null`. In that race, Elasticsearch has nothing to delete yet, `elasticDelete()` throws on 404, and the queue retries a cleanup that is already logically complete.

🩹 One low-impact fix
```diff
 public function elasticDelete(string $id): void
 {
     $response = $this->http(10)
         ->delete($this->elasticDocumentUrl($id));

+    if ($response->status() === 404) {
+        return;
+    }
+
     if (! $response->successful()) {
         Log::error("Elasticsearch delete failed: {$response->status()}", ['id' => $id, 'body' => $response->body()]);

         throw new \RuntimeException("Elasticsearch delete failed: {$response->status()}");
     }
 }
```

Also applies to: 315-324
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@php/Services/BrainService.php` around lines 238 - 246, the `forget()` method queues `DeleteFromIndex` for every deleted memory, which causes the job to fail/retry when Elasticsearch returns 404 for missing docs; update the `DeleteFromIndex` job (its `handle()` / `elasticDelete` logic) to treat a 404 Not Found as a successful deletion (catch the ES exception/error code 404 and return/ack without rethrowing) and optionally avoid dispatching when `BrainMemory->indexed_at` is null in `BrainService::forget` to reduce race windows; ensure `DeleteFromIndex` still rethrows or fails for other error codes so real errors surface.

hermes/plugins/openbrain_context.py-72-80 (1)
72-80: ⚠️ Potential issue | 🟠 Major

`compress()` can still return an over-budget history.

The first/last turns are always kept, and the early `len(ordered_turns) <= 2` return skips trimming entirely. If those anchors alone exceed `token_budget`, this helper still returns them unchanged, so the caller can still overflow the model context window.

Also applies to: 101-110, 399-416
hermes/plugins/openbrain_memory.py-275-282 (1)
275-282: ⚠️ Potential issue | 🟠 Major

`brain_remember` is not scoped to the current workspace.

The tool schema does not expose `workspace_id`, and this branch disables the provider default with `include_workspace=False`, so remember calls are created without workspace scoping even though recall/list/sync_turn all rely on it.

🩹 Minimal fix
```diff
 if name == "brain_remember":
-    payload = self._with_context_defaults(request_args, include_workspace=False)
+    payload = self._with_context_defaults(request_args)
     return self._request_json(
         "POST",
         self._brain_endpoint("remember"),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hermes/plugins/openbrain_memory.py` around lines 275 - 282, the `brain_remember` branch is building a payload without workspace scoping by calling `_with_context_defaults(..., include_workspace=False)`; change it to include the workspace (remove the `include_workspace=False` or pass `include_workspace=True`) so the payload contains `workspace_id`, ensuring `brain_remember` requests sent via `_request_json` to `_brain_endpoint("remember")` are scoped the same way as recall/list/sync_turn; update the call site in the `brain_remember` branch to use `_with_context_defaults(request_args)` (or explicitly `include_workspace=True`) and keep the existing `_auth_headers()` usage.

hermes/plugins/openbrain_memory.py-608-639 (1)
608-639: ⚠️ Potential issue | 🟠 Major

Restrict the urllib fallback to HTTP(S) schemes only.

This fallback will happily open any scheme carried in `brain_url`/`qdrant_url`. If the richer clients are unavailable, a misconfigured or hostile value can trigger `file:` access or other unintended handlers instead of an HTTP request.

🔐 Minimal guard for the fallback transport
```diff
 def _urllib_request(
     self,
     method: str,
     url: str,
@@
 ) -> tuple[int, str]:
     request_headers = dict(headers or {})
     request_url = url
@@
     if json_body is not None:
         request_headers.setdefault("Content-Type", "application/json")
         data = json.dumps(json_body).encode("utf-8")

+    parsed = urlparse(request_url)
+    if parsed.scheme not in {"http", "https"}:
+        raise OSError(f"Unsupported URL scheme: {parsed.scheme or '<empty>'}")
+
     request = Request(request_url, data=data, headers=request_headers, method=method)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hermes/plugins/openbrain_memory.py` around lines 608 - 639, the urllib fallback in `_urllib_request` currently allows opening arbitrary URL schemes; parse the provided url (`request_url`) with `urllib.parse.urlparse` and validate that the scheme is either "http" or "https" before constructing the `Request` and calling `urlopen`, otherwise raise a clear exception (e.g., `ValueError` or `OSError`) to block non-HTTP schemes. Update `_urllib_request` to perform this scheme check early (before building `Request` or using `urlopen`) so `brain_url`/`qdrant_url` cannot trigger `file:` or other handlers when richer HTTP clients are unavailable.

hermes/plugins/openbrain_context.py-630-661 (1)
630-661: ⚠️ Potential issue | 🟠 Major

Restrict the urllib fallback to HTTP(S) schemes only.

`_urllib_request()` accepts whatever scheme is present in `brain_url`/`qdrant_url`. In environments without `requests` or `httpx`, a bad or attacker-controlled URL can turn this into local file access or an unexpected handler invocation.
🔐 Minimal guard for the fallback transport
```diff
 def _urllib_request(
     self,
     method: str,
     url: str,
@@
 ) -> tuple[int, str]:
     request_headers = dict(headers or {})
     request_url = url
@@
     if json_body is not None:
         request_headers.setdefault("Content-Type", "application/json")
         data = json.dumps(json_body).encode("utf-8")

+    parsed = urlparse(request_url)
+    if parsed.scheme not in {"http", "https"}:
+        raise OSError(f"Unsupported URL scheme: {parsed.scheme or '<empty>'}")
+
     request = Request(request_url, data=data, headers=request_headers, method=method)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hermes/plugins/openbrain_context.py` around lines 630 - 661, the fallback transport `_urllib_request` should only allow HTTP(S) URLs to avoid accidental local file or other scheme handling; inside `_urllib_request` (use the `request_url` or incoming `url` param) parse the URL (e.g., via `urllib.parse.urlparse`) and validate that the scheme is "http" or "https", and if not raise a clear exception (`OSError` or `ValueError`) before constructing the `Request` or calling `urlopen`, so non-HTTP schemes are rejected early.
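The scheme allowlist is small enough to show end-to-end; here is a Go rendering of the same guard (function name is hypothetical, the real fix lives in the Python `_urllib_request`). The rule is identical: parse first, accept only plain HTTP(S), and refuse before any transport sees the URL.

```go
package main

import (
	"fmt"
	"net/url"
)

// checkScheme rejects anything that is not plain HTTP(S) before a fallback
// transport ever sees the URL, mirroring the urllib guard proposed above.
func checkScheme(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	if u.Scheme != "http" && u.Scheme != "https" {
		return fmt.Errorf("unsupported URL scheme: %q", u.Scheme)
	}
	return nil
}

func main() {
	fmt.Println(checkScheme("https://brain.example/api") == nil) // normal HTTPS passes
	fmt.Println(checkScheme("file:///etc/passwd") != nil)        // file: is refused
}
```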
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e99f9e7f-cb71-44f6-a04c-37459659158c
⛔ Files ignored due to path filters (2)
- `go.sum` is excluded by `!**/*.sum`
- `google/gemini-cli/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (298)
(full list of 298 paths collapsed; the changes span `.core/reference`, `claude/`, `cmd/core-agent/`, `docs/`, `google/gemini-cli/`, `hermes/`, `php/`, `pkg/agentic/`, `pkg/brain/`, `pkg/lib/`, `pkg/monitor/`, `pkg/runner/`, and `tests/cli/`, plus `go.mod`, `composer.json`, `config/agents.yaml`, and the Claude plugin manifests)
CodeRabbit on PR #5 flagged 9 suppression entries (lines 32-48 of the old file) referencing two commit SHAs (e58986a..., ecd47fe3...) and a third (e2d1d32...) that do not exist in the public dAppCore/agent history. Because those commits are absent from the squashed mirror, gitleaks cannot apply the suppressions at scan time, so the previously suppressed false positives reappear as findings on every public-CI scan. This release branch removes the unreachable entries.

The remaining four fingerprints (pkg/agentic/prep_test.go x3 and pkg/orchestrator/security_test.go) are reachable from the public release branch and continue to suppress correctly. If gitleaks running against the squashed public history still surfaces the same false positives, fresh suppressions with current-SHA fingerprints can be added in a follow-up.

Closes Mantis #929 on PR #5 dAppCore/agent.

Co-authored-by: Cerberus <cerberus@lthn.ai>
Co-authored-by: Hephaestus <hephaestus@lthn.ai>
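For context, a `.gitleaksignore` entry is a finding fingerprint, and the commit-pinned form is exactly what breaks when the referenced commit is rewritten out of history. A sketch of the two forms — the SHA, path, rule ID, and line number below are placeholders, not the real suppressed findings:

```
# Commit-pinned fingerprint: suppresses the finding only at that exact commit,
# so it goes stale when history is squashed or rewritten.
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa:pkg/agentic/prep_test.go:generic-api-key:42

# Path-only fingerprint: survives history rewrites.
pkg/agentic/prep_test.go:generic-api-key:42
```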




Summary
Consolidates the v0.8.0-alpha.1 release work for core/agent: OpenBrain memory and search wired through the agent surface, fleet/dispatch pipeline cleanup, the AX-10 and module-path migration, and defensive hardening across the agentic layer.
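The module-path migration is mechanical; a minimal sketch of the rewrite on a scratch file, assuming a plain sed pass over the Go sources (the real change also re-pins the sibling `dappco.re/go/*` deps in go.mod):

```shell
# Demonstrate the import-path rewrite on a throwaway file.
demo=$(mktemp -d)
cat > "$demo/example.go" <<'EOF'
package agentic

import "forge.lthn.ai/core/agent/pkg/brain"
EOF

# Rewrite the old module root to the new one (-i.bak works on GNU and BSD sed).
sed -i.bak 's|forge\.lthn\.ai/core/agent|dappco.re/go/agent|g' "$demo/example.go"
grep import "$demo/example.go"
```

The same substitution applies to the `module` line in go.mod (`go mod edit -module dappco.re/go/agent` does that part directly).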
Headline changes
- `forge.lthn.ai/core/agent` → `dappco.re/go/agent`; all sibling `dappco.re/go/*` deps pinned to `v0.8.0-alpha.1`.
- `tests/cli/agent/Taskfile.yaml` + driver coverage.
- `GET /v1/brain/search` — Elasticsearch full-text endpoint.
- `GET /v1/brain/tags` + `/v1/brain/scopes`.
- `/v1/brain/recall` extended with org filter, keywords, and boost_keywords.
- `brain:reindex`, `brain:clean`, `brain:prune`.
- `BrainService::remember()` made async via the `EmbedMemory` job.
- `BrainService` + `DeleteFromIndex` job.
- `BuildQdrantFilter` boundary.
- `BRAIN_QDRANT_API_KEY`.
- `MetaReader` contract with a Forgejo-backed implementation.
- `ScanForWork` + `ManagePullRequest` routed through `MetaReader`.
- `WorkspaceState` workflow progress updated on dispatch push.
- `SessionArtifact` passes the description as a metadata array.
- `AgentSession` separation documented.
- `dappcore` → `core` rename completed; Gemini stub added; hermes plugins (`openbrain_memory.py`, `openbrain_context.py`) plus `SKILL.md` docs.
- `cmd/openbrain-mcp` (Claude Code), `hermes_runner_mcp`, `camofox_mcp`.
- `sync.Once` purged from `pkg/agentic` via `core.Once`.
- `php/tree`; codex CLI 0.122+ `config.toml` compatibility.
- `.gitleaksignore` cleanup. Test triad expanded for monitor harvest, agentic message + dispatch sync contracts.
- Removed `.DS_Store` entries across the tree; dropped empty `.core/TODO.md`.

Refs
- `RFC-CORE-008-AGENT-EXPERIENCE.md` (AX-1, AX-6, AX-10)
- `agentic_auth_login` MCP tool

Test plan
- `go test ./...` passes
- `tests/cli/agent` Taskfile drivers exercise the CLI surface

Co-authored-by: Athena athena@lthn.ai
Co-authored-by: Hephaestus hephaestus@lthn.ai
Co-authored-by: Cladius Maximus cladius@lthn.ai
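The new brain endpoints are plain HTTP GETs; a hedged sketch of how they might be exercised — the host, port, and the `q` parameter name are assumptions, while the org/keywords/boost_keywords filters are the ones named above for `/v1/brain/recall`:

```shell
# Compose example requests against a locally running agent (host/port assumed).
BASE=${BRAIN_BASE:-http://localhost:8080}

# Full-text search (Elasticsearch-backed); 'q' is an assumed parameter name.
echo "curl -s $BASE/v1/brain/search?q=dispatch"

# Recall with the org filter and keyword boosting named in the release notes.
echo "curl -s '$BASE/v1/brain/recall?org=lethean&keywords=fleet&boost_keywords=dispatch'"

# Tag and scope listings.
echo "curl -s $BASE/v1/brain/tags"
echo "curl -s $BASE/v1/brain/scopes"
```

The block only prints the commands; run them against a live instance once the agent is up.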
Summary by CodeRabbit

Release Notes
- New Features: `core login CODE`.
- Improvements
- Bug Fixes