Extend build pipeline update documentation on docker hub #5
Conversation
📝 Walkthrough

Adds a CLI to generate per-agent Docker Hub descriptions (short and overview with a generated Releases section), tests and helper exports, ignores generated outputs, and extends the auto-release GitHub Actions workflow with an optional dispatch to generate and PATCH Docker Hub repository descriptions.
Sequence Diagram

```mermaid
sequenceDiagram
    participant GHA as GitHub Actions
    participant Script as generate-hub-descriptions.mjs
    participant Catalog as scripts/lib/catalog.mjs
    participant FS as File System
    participant DH as Docker Hub API
    GHA->>Script: run with --catalog & --descriptions (or job clones wiki)
    Script->>FS: read catalog JSON
    Script->>Script: iterate agents
    Script->>FS: check descriptions/<agent>/short-description.txt & overview.md
    alt inputs present
        Script->>FS: read short-description.txt & overview.md
        Script->>Catalog: call dockerTagLink() & markdownTable()
        Script->>Script: build "## Releases" section (filter & sort)
        Script->>FS: write output/short-description.txt and output/overview.md
    else missing
        Script->>Script: warn and skip or exit (required)
    end
    GHA->>DH: authenticate (login) -> receive token
    GHA->>FS: read descriptions/<agent>/output files
    alt output files present
        GHA->>DH: PATCH /repositories/<agent> with description/full_description
        alt HTTP 200
            DH-->>GHA: success
        else HTTP !=200
            GHA->>GHA: fail job
        end
    else missing output
        GHA->>GHA: skip agent
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Greptile Summary

This PR introduces a … Key observations:
Confidence Score: 3/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant T as Trigger (schedule / dispatch)
    participant D as detect job
    participant P as publish job
    participant UC as update-catalog job
    participant SH as sync-hub-descriptions job
    participant DH as Docker Hub API
    participant W as Wiki repo
    T->>D: run
    D->>W: clone wiki, read agent-versions.json
    D-->>P: changed == true
    D-->>UC: changed == true
    P->>DH: docker push images
    UC->>W: clone wiki, apply-agent-updates.mjs
    UC->>UC: generate-hub-descriptions.mjs
    note over UC: ⚠ no zero-output guard here
    UC->>DH: PATCH /repositories/vibepod/{agent}/
    UC->>W: commit & push wiki state
    T->>SH: update_descriptions == true (workflow_dispatch)
    SH->>W: clone wiki
    SH->>SH: generate-hub-descriptions.mjs
    note over SH: ✓ guards against 0 generated files
    SH->>DH: PATCH /repositories/vibepod/{agent}/
    note over UC,SH: ⚠ no needs dependency — race condition if both run
```
Reviews (4). Last reviewed commit: "Extend build pipeline update documentati..."
Actionable comments posted: 1
🧹 Nitpick comments (1)
.github/workflows/auto-release.yml (1)
175-196: Avoid hardcoding agent targets in the Docker Hub sync loop.

Line 175 duplicates agent identifiers. If catalog entries change, this step can silently drift and miss description updates.

♻️ Proposed refactor

```diff
-          for agent in auggie claude codex copilot devstral gemini opencode; do
+          mapfile -t agents < <(jq -r '.agents[].target' wiki/automation/agent-versions.json)
+          for agent in "${agents[@]}"; do
             output_dir="descriptions/${agent}/output"
             if [[ ! -f "${output_dir}/short-description.txt" || ! -f "${output_dir}/overview.md" ]]; then
               echo "Skipping ${agent}: output files not found"
               continue
             fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/auto-release.yml around lines 175 - 196, The loop currently hardcodes agent names in the "for agent in auggie claude codex ..." statement which will drift when catalog entries change; change the loop to dynamically discover agents by iterating the subdirectories under the descriptions folder (use the directory names as agent IDs), preserve the existing checks using output_dir (SHORT, FULL) and the checks for short-description.txt and overview.md, and keep the same curl PATCH behavior and HTTP_STATUS check so missing or new agents are picked up automatically without editing the workflow.
ℹ️ Review info

⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d7fb089f-b86a-4da9-b818-c58284d6f8ea

📒 Files selected for processing (5)
- .github/workflows/auto-release.yml
- .gitignore
- scripts/generate-hub-descriptions.mjs
- scripts/generate-hub-descriptions.test.mjs
- scripts/lib/catalog.mjs
```js
const releases = (agent.release_history || [])
  .filter((r) => r.image_tag !== "next" && r.image_tag !== "latest")
  // ISO 8601 strings sort lexicographically — newest first
  .sort((a, b) => (b.released_at || "").localeCompare(a.released_at || ""));

if (releases.length === 0) {
  return "## Releases\n\nNo releases recorded.\n";
}

const rows = releases.map((r) => [
  dockerTagLink(agent, r.image_tag || ""),
  r.agent_version || "",
  r.released_at || "",
]);
```
Harden release filtering against malformed entries.
Line 22 assumes every release_history item is an object; a null/undefined entry will throw and abort description generation.
🛡️ Proposed fix

```diff
-  const releases = (agent.release_history || [])
-    .filter((r) => r.image_tag !== "next" && r.image_tag !== "latest")
+  const releases = (agent.release_history || [])
+    .filter((r) => {
+      const tag = r?.image_tag || "";
+      return tag !== "" && tag !== "next" && tag !== "latest";
+    })
     // ISO 8601 strings sort lexicographically — newest first
-    .sort((a, b) => (b.released_at || "").localeCompare(a.released_at || ""));
+    .sort((a, b) => (b?.released_at || "").localeCompare(a?.released_at || ""));

   if (releases.length === 0) {
     return "## Releases\n\nNo releases recorded.\n";
   }

   const rows = releases.map((r) => [
-    dockerTagLink(agent, r.image_tag || ""),
-    r.agent_version || "",
-    r.released_at || "",
+    dockerTagLink(agent, r?.image_tag || ""),
+    r?.agent_version || "",
+    r?.released_at || "",
   ]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/generate-hub-descriptions.mjs` around lines 21 - 34, The release
filtering assumes every item in agent.release_history is an object and will
throw on null/undefined; update the pipeline that builds releases (the const
releases = ... chain) to first filter out non-objects (e.g., r == null or typeof
r !== 'object'), then apply the existing image_tag exclusion and sort; also
guard the mapping that produces rows (the releases.map that calls
dockerTagLink(agent, r.image_tag || "") and accesses
r.agent_version/r.released_at) so it safely handles missing r properties by
using empty-string fallbacks as already intended. Ensure you modify the release
filtering and the rows mapping (references: releases, dockerTagLink, rows) to
prevent exceptions from malformed entries.
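The lexicographic-sort claim in the comment holds because ISO 8601 timestamps are fixed-width with the most significant field first. A minimal runnable sketch of the hardened filter (sample data invented for illustration):

```javascript
// ISO 8601 timestamps compare correctly as plain strings because the
// fields run from most to least significant (year, month, day, ...).
const history = [
  { image_tag: "1.0.0", released_at: "2024-03-01T00:00:00Z" },
  null, // malformed entry that would crash an unguarded filter
  { image_tag: "latest", released_at: "2024-06-01T00:00:00Z" },
  { image_tag: "1.1.0", released_at: "2024-05-15T00:00:00Z" },
];

const releases = history
  .filter((r) => {
    const tag = r?.image_tag || "";
    return tag !== "" && tag !== "next" && tag !== "latest";
  })
  // newest first: ["1.1.0", "1.0.0"]
  .sort((a, b) => (b?.released_at || "").localeCompare(a?.released_at || ""));
```

The optional chaining turns the `null` entry into an empty tag, which the filter then drops, so the sort and any later mapping only ever see well-formed objects.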
Actionable comments posted: 1
🧹 Nitpick comments (3)
scripts/generate-hub-descriptions.test.mjs (1)
87-89: Consider cleaning up temporary directories after tests.

The makeTmpDir helper creates directories in os.tmpdir() but tests don't clean them up. While the OS eventually clears these, accumulated test runs can leave many orphan directories.

♻️ Optional: Add cleanup helper and use it in tests

```diff
 function makeTmpDir() {
   return fs.mkdtempSync(path.join(os.tmpdir(), "hub-desc-test-"));
 }
+
+function cleanupTmpDir(dir) {
+  fs.rmSync(dir, { recursive: true, force: true });
+}
```

Then in each test, wrap with try/finally:

```js
test("processAgent: writes output files for valid agent", () => {
  const tmp = makeTmpDir();
  try {
    makeAgentFixture(tmp, "testagent");
    processAgent(SIMPLE_AGENT, tmp);
    // assertions...
  } finally {
    cleanupTmpDir(tmp);
  }
});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/generate-hub-descriptions.test.mjs` around lines 87 - 89, The tests create temporary directories via makeTmpDir but never remove them; add a cleanup helper (e.g., cleanupTmpDir(dir) that recursively removes the directory) and update tests that call makeTmpDir to wrap test body in try/finally so cleanupTmpDir(tmp) is always called; locate uses of makeTmpDir in tests (and helper makeAgentFixture/processAgent calls) and replace with the try/finally pattern to ensure removal of tmp dirs after each test.

.github/workflows/auto-release.yml (2)
184-184: Hardcoded agent list may drift from catalog.

The agent list auggie claude codex copilot devstral gemini opencode is hardcoded here but the generate script reads from the catalog. If agents are added/removed from the catalog, this list must be manually updated. Consider reading the agent list dynamically from the catalog to avoid maintenance drift.

♻️ Proposed fix to read agents dynamically

```diff
+          AGENTS=$(jq -r '.agents[].target' wiki/automation/agent-versions.json)
+
-          for agent in auggie claude codex copilot devstral gemini opencode; do
+          for agent in $AGENTS; do
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/auto-release.yml at line 184, The hardcoded for-loop "for agent in auggie claude codex copilot devstral gemini opencode; do" should be replaced with a dynamic invocation that reads the agent names from the same catalog the generate script uses; update the workflow to call the generate/catalog reader (the same script or command used by the repo's generate script) to produce a whitespace/newline-separated list and iterate over that output instead of the hardcoded list, ensuring the workflow uses that command substitution in place of the literal "auggie claude codex copilot devstral gemini opencode" so added/removed agents in the catalog are picked up automatically.
192-203: Capture API error response for better diagnostics.

When the PATCH fails, the current code only logs the HTTP status. Capturing and displaying the response body would help diagnose issues (e.g., permission errors, rate limits, invalid payload).

♻️ Proposed enhancement

```diff
-          HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" \
+          RESPONSE_FILE=$(mktemp)
+          HTTP_STATUS=$(curl -s -o "$RESPONSE_FILE" -w "%{http_code}" \
             -X PATCH "https://hub.docker.com/v2/repositories/vibepod/${agent}/" \
             -H "Authorization: JWT ${TOKEN}" \
             -H "Content-Type: application/json" \
             --data-binary "$(jq -n \
               --arg d "$SHORT" \
               --arg f "$FULL" \
               '{description: $d, full_description: $f}')")

           if [[ "${HTTP_STATUS}" != "200" ]]; then
-            echo "error: failed to update ${agent} (HTTP ${HTTP_STATUS})"
+            echo "error: failed to update ${agent} (HTTP ${HTTP_STATUS}):"
+            cat "$RESPONSE_FILE"
+            rm -f "$RESPONSE_FILE"
             exit 1
           fi
+          rm -f "$RESPONSE_FILE"
           echo "updated: ${agent}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/auto-release.yml around lines 192 - 203, The PATCH block that sets HTTP_STATUS with curl only captures the status code; modify the curl invocation that updates the Docker Hub repository (the command assigning HTTP_STATUS) to capture both the response body and status (e.g., store response body in a variable and status in HTTP_STATUS) and then, in the failure branch where you currently echo "error: failed to update ${agent} (HTTP ${HTTP_STATUS})", include the captured response body alongside the HTTP status to aid diagnostics (referencing the variables HTTP_STATUS, the curl call that uses TOKEN and agent, and the payload built from SHORT and FULL).
🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:

In @.github/workflows/auto-release.yml:
- Around line 179-182: The Docker Hub login step sets TOKEN via the curl + jq pipeline but does not validate the result or mask the secret; update the step that defines TOKEN (the curl -X POST ... | jq -r .token pipeline) to: 1) check the extracted TOKEN for emptiness or the literal "null" and fail the job with a clear error message if invalid, 2) ensure the raw DOCKERHUB_TOKEN is not echoed and use the workflow mask feature to hide the extracted TOKEN (e.g., call the GitHub Actions mask command immediately after obtaining TOKEN), and 3) bail out before making any PATCH requests when TOKEN is invalid so subsequent requests don't run with an empty token. Ensure you reference the TOKEN variable produced by that curl/jq command and add the mask step that runs right after it.
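A sketch of the validation step that comment asks for, under its assumptions (the `validate_token` helper name is invented here; `jq -r .token` really does print the literal string `null` when the key is absent, and `::add-mask::` is the real GitHub Actions workflow command for hiding a value in logs):

```shell
# Validate a token extracted via `jq -r .token`: jq prints the literal
# string "null" when the key is absent, so reject both empty and "null".
validate_token() {
  local token="$1"
  if [[ -z "${token}" || "${token}" == "null" ]]; then
    return 1
  fi
  return 0
}

# In the workflow step this would follow the login call, e.g.:
#   TOKEN=$(curl -s -X POST https://hub.docker.com/v2/users/login ... | jq -r .token)
#   validate_token "${TOKEN}" || { echo "error: Docker Hub login failed" >&2; exit 1; }
#   echo "::add-mask::${TOKEN}"   # hide the token in later workflow logs
```

Failing here, before the loop, ensures no PATCH request ever runs with an empty `Authorization: JWT` header.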
ℹ️ Review info

⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: daed1aee-d0b2-4be5-b25c-d5f1fe328c53

📒 Files selected for processing (5)
- .github/workflows/auto-release.yml
- .gitignore
- scripts/generate-hub-descriptions.mjs
- scripts/generate-hub-descriptions.test.mjs
- scripts/lib/catalog.mjs
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🚧 Files skipped from review as they are similar to previous changes (2)
- scripts/lib/catalog.mjs
- scripts/generate-hub-descriptions.mjs
Actionable comments posted: 1
🧹 Nitpick comments (1)
scripts/generate-hub-descriptions.test.mjs (1)
87-89: Consider adding temp-directory cleanup per test.

makeTmpDir() allocates real filesystem dirs, but none are removed. A small t.after(() => fs.rmSync(tmp, { recursive: true, force: true })) pattern would keep local and CI environments cleaner.

Also applies to: 119-226
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/generate-hub-descriptions.test.mjs` around lines 87 - 89, The tests create temp dirs using makeTmpDir() but never remove them; update the test files that call makeTmpDir (including the blocks around lines 119-226) to register cleanup with the test harness by calling t.after(() => fs.rmSync(tmp, { recursive: true, force: true })) for each tmp directory created, or change makeTmpDir() to return { path: tmp, registerCleanup: (t) => t.after(() => fs.rmSync(tmp, { recursive: true, force: true })) } and invoke that registerCleanup in each test; ensure you reference the makeTmpDir return value in tests and remove directories with fs.rmSync(..., { recursive: true, force: true }) to avoid leftover temp dirs in CI/local runs.
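A minimal runnable sketch of the cleanup idea (the try/finally variant; node:test's `t.after` achieves the same thing when the test harness is in play; file names here are illustrative):

```javascript
import fs from "node:fs";
import os from "node:os";
import path from "node:path";

function makeTmpDir() {
  return fs.mkdtempSync(path.join(os.tmpdir(), "hub-desc-test-"));
}

function cleanupTmpDir(dir) {
  // force: true makes removal idempotent; recursive handles nested output dirs
  fs.rmSync(dir, { recursive: true, force: true });
}

// try/finally guarantees removal even if the test body throws.
const tmp = makeTmpDir();
let existedDuringTest = false;
try {
  fs.writeFileSync(path.join(tmp, "short-description.txt"), "demo");
  existedDuringTest = fs.existsSync(tmp);
} finally {
  cleanupTmpDir(tmp);
}
const existsAfter = fs.existsSync(tmp);
```

Because `rmSync` with `force: true` ignores missing paths, the helper is safe to call even when a test failed before creating anything inside the directory.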
ℹ️ Review info

⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a421b71b-44c4-4dbd-8c78-889ce2ac4efa

📒 Files selected for processing (5)
- .github/workflows/auto-release.yml
- .gitignore
- scripts/generate-hub-descriptions.mjs
- scripts/generate-hub-descriptions.test.mjs
- scripts/lib/catalog.mjs
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🚧 Files skipped from review as they are similar to previous changes (3)
- scripts/lib/catalog.mjs
- .github/workflows/auto-release.yml
- scripts/generate-hub-descriptions.mjs
```js
let exitCode;
const origExit = process.exit;
process.exit = (code) => { exitCode = code; throw new Error("exit"); };

try {
  processAgent(SIMPLE_AGENT, tmp, { required: true });
} catch {
  // swallow the thrown Error("exit")
} finally {
  process.exit = origExit;
}

assert.ok(exitCode !== 0, "exits non-zero");
});
```
Required-mode exit test can pass without actually verifying process.exit was called.
Because exitCode starts as undefined (Line 170), the broad catch (Line 176) hides unrelated exceptions, and exitCode !== 0 (Line 182) still passes for undefined.
Proposed fix

```diff
-  let exitCode;
+  let exitCode = null;
   const origExit = process.exit;
   process.exit = (code) => { exitCode = code; throw new Error("exit"); };

   try {
     processAgent(SIMPLE_AGENT, tmp, { required: true });
-  } catch {
-    // swallow the thrown Error("exit")
+  } catch (err) {
+    assert.equal(err.message, "exit", "should only throw sentinel exit error");
   } finally {
     process.exit = origExit;
   }

-  assert.ok(exitCode !== 0, "exits non-zero");
+  assert.equal(exitCode, 1, "should call process.exit(1)");
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
let exitCode = null;
const origExit = process.exit;
process.exit = (code) => { exitCode = code; throw new Error("exit"); };

try {
  processAgent(SIMPLE_AGENT, tmp, { required: true });
} catch (err) {
  assert.equal(err.message, "exit", "should only throw sentinel exit error");
} finally {
  process.exit = origExit;
}

assert.equal(exitCode, 1, "should call process.exit(1)");
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/generate-hub-descriptions.test.mjs` around lines 170 - 183, The test
currently can pass without ensuring process.exit was invoked because exitCode is
undefined and the catch swallows all errors; change the test so exitCode is
initialized to a sentinel (e.g. null), and replace the broad catch with one that
only swallows the specific thrown Error("exit") (or checks error.message ===
"exit") and rethrows other exceptions; after the try/catch restore process.exit
and assert that exitCode is neither null/undefined and is non-zero to guarantee
processAgent (called with processAgent, SIMPLE_AGENT, tmp, { required: true })
actually invoked the mocked process.exit.
Force-pushed 6a6f78e to b3654d9 (compare)
```js
export function processAgent(agent, descriptionsRoot, { required = false } = {}) {
  const agentDir = path.join(descriptionsRoot, agent.target);

  if (!fs.existsSync(agentDir)) {
    if (required) {
      console.error(`error: no descriptions directory for agent "${agent.target}"`);
      process.exit(1);
    }
    console.warn(`warn: no descriptions directory for agent "${agent.target}", skipping`);
    return false;
  }

  const shortDescPath = path.join(agentDir, "short-description.txt");
  const overviewPath = path.join(agentDir, "overview.md");

  if (!fs.existsSync(shortDescPath) || !fs.existsSync(overviewPath)) {
    console.warn(`warn: missing source files for agent "${agent.target}", skipping`);
    return false;
  }
```
required mode silently exits 0 when source files are missing

The required flag only guards against a missing agent directory (lines 55–62), but it is completely ignored when the directory exists yet one of the source files (short-description.txt / overview.md) is absent. In that case the function emits a warn: message and returns false — and main() (line 118) discards the return value entirely, so the process exits with code 0 even though nothing was generated.
Concretely, running:

```
node scripts/generate-hub-descriptions.mjs --catalog ... --descriptions ... --agent auggie
```
with descriptions/auggie/overview.md deleted will print a warning and exit 0, giving callers (CI or a developer) a false "success" signal.
The missing-files guard should also honour required:
```diff
   if (!fs.existsSync(shortDescPath) || !fs.existsSync(overviewPath)) {
+    if (required) {
+      console.error(`error: missing source files for agent "${agent.target}"`);
+      process.exit(1);
+    }
     console.warn(`warn: missing source files for agent "${agent.target}", skipping`);
     return false;
   }
```
There is also no test that exercises processAgent(agent, root, { required: true }) when the directory exists but a source file is missing — that edge case should be covered alongside the existing required-mode test.
```yaml
      - name: Generate Docker Hub descriptions
        shell: bash
        run: |
          set -euo pipefail
          node scripts/generate-hub-descriptions.mjs \
            --catalog wiki/automation/agent-versions.json \
            --descriptions descriptions
```
No zero-output guard in update-catalog's description generation step

The sync-hub-descriptions job (lines 248–262) guards against the script completing but generating zero output files — it counts the generated files and exits 1 if none were produced. The equivalent step inside update-catalog lacks this check entirely.
If all agents are silently skipped here (e.g., the descriptions/ source files were not yet committed when the image pipeline ran), the "Update Docker Hub descriptions" step that follows will print Skipping <agent>: output files not found for every agent and exit 0, giving no indication that descriptions were not refreshed after the new release.
Consider adding the same guard used in sync-hub-descriptions:

```bash
generated=$(find descriptions -path "*/output/overview.md" | wc -l)
if [[ "${generated}" -eq 0 ]]; then
  echo "warn: no description output files were generated — descriptions were not updated"
fi
echo "Generated descriptions for ${generated} agent(s)"
```
(A warn rather than a hard exit 1 is appropriate here since the update-catalog job's primary purpose is publishing images, not updating descriptions.)
Introduces automation for updating Docker Hub repository descriptions for multiple agents. It adds a new script to generate and format these descriptions, integrates the script into the GitHub Actions release workflow, and provides comprehensive tests to ensure correctness. Additionally, it makes minor exports in a shared library to support the new script.
Summary by CodeRabbit
New Features
Tests
Chores