Conversation
📝 Walkthrough

This pull request transitions from simple, single-addon CI workflows to a comprehensive multi-addon orchestration framework. It introduces new orchestrator workflows that replace the legacy automation, adds an extensive PowerShell-based verification suite, includes numerous automation scripts for add-on management, and updates the fanexid add-on configuration and structure.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant PR as Pull Request
    participant Orchestrator as Orchestrator CI
    participant Filter as Filter Engine
    participant Tests as Test Matrix
    participant Report as Report & Label
    PR->>Orchestrator: Trigger on push/PR
    Orchestrator->>Filter: Detect changed addons & types
    Filter->>Orchestrator: Emit filters (docs_only, needs_build)
    Orchestrator->>Tests: Run lint/compliance/build per addon
    Tests->>Tests: Conditional execution based on change type
    Tests->>Report: Aggregate results
    Report->>Report: Generate dynamic labels (addon/*, version-bump, new-addon)
    Report->>PR: Post comment with status & link
    Report->>Orchestrator: Trigger auto-merge on success
```

```mermaid
sequenceDiagram
    participant Master as Master Branch
    participant AutoFix as Orchestrator Auto-Fix
    participant Format as Formatters (Black/Shfmt/Prettier)
    participant Commit as Git Auto-Commit
    participant Edge as Edge Branch
    Master->>AutoFix: Push detected
    AutoFix->>AutoFix: Run OCI labels & enforce architectures
    AutoFix->>Format: Format Python/Shell/Markdown
    Format->>AutoFix: Format complete
    AutoFix->>Commit: Commit changes with PAT
    Commit->>Master: Push auto-fix commit
    AutoFix->>Edge: Trigger edge sync
    Edge->>Edge: Bump versions & prepare local builds
```

```mermaid
sequenceDiagram
    participant Supervisor as Supervisor Mock
    participant Test as Supervisor Test
    participant Addon as Add-on Container
    participant Health as Health Check
    Test->>Supervisor: Start mock Supervisor env
    Test->>Addon: Build & run addon from docker
    Addon->>Supervisor: Register with mock API
    Test->>Health: Check addon health/status
    Health->>Addon: Query health endpoint
    Addon->>Health: Return status
    Health->>Test: Report result (PASS/WARN/FAIL)
    Test->>Addon: Cleanup container
```
Estimated Code Review Effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 3 passed
This PR is being reviewed by Cursor Bugbot
Referenced code:

```bash
      [ -z "$ADDONS" ] && ADDONS="$item_name" || ADDONS="$ADDONS,$item_name"
    fi
  fi
done
```
Missing SKIP_ADDONS loop prevents addon exclusion
Medium Severity
In the "release(all)" push handler, the code defines `SKIP_ADDONS` but never checks against it. The inner `for skip_item in $SKIP_ADDONS` loop that exists in the correct implementation (lines 140-145) is missing here. Instead, `skip=false` is set twice (a duplicated line) and immediately checked without iteration. Addons like "homeassistant-test-instance" and "sap-abap-cloud-dev" that should be excluded will be incorrectly included in releases.
Referenced code:

```yaml
# Addons that should get auto-releases on dependency updates
AUTO_RELEASE_ADDONS: |
AUTO_RELEASE_ADDONS: |
  fanexid
```
Duplicate AUTO_RELEASE_ADDONS environment variable definition
Low Severity
The AUTO_RELEASE_ADDONS environment variable is defined twice consecutively. The first definition is empty, and the second contains the actual value "fanexid". While YAML will use the second value, this appears to be a copy-paste error that could cause confusion or unexpected behavior if editing this section later.
Actionable comments posted: 4
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
fanexid/rootfs/etc/s6-overlay/s6-rc.d/init-fanexid/run (2)
224-232: Verify frontend directory handling on build failure.

The code removes `/app/frontend/*` (line 226) before checking whether `dist` exists (line 227). If the build succeeds but `dist` is unexpectedly missing, the frontend directory will be left empty, potentially breaking the service.

Consider checking for `dist` existence before removing the frontend contents, or ensure the placeholder HTML remains available on failure.

🛡️ Defensive ordering suggestion

```diff
 bashio::log.info "Running 'npm run build'..."
 if npm run build; then
     bashio::log.info "Frontend build successful. Installing..."
-    # Clear placeholder
-    rm -rf /app/frontend/*
     if [ -d "dist" ]; then
+        # Clear placeholder
+        rm -rf /app/frontend/*
         cp -r dist/* /app/frontend/
         bashio::log.info "Frontend successfully installed!"
     else
         bashio::log.error "'dist' directory not found after build!"
+        return 1
     fi
 else
     bashio::log.error "Frontend build failed!"
+    return 1
 fi
```
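The suggestion above relies on shell functions propagating failure through return codes, so that a deferred `if build_frontend; then ...` check can react. A minimal standalone sketch of that mechanism (the function name and paths are illustrative, not the add-on's actual code):

```shell
#!/bin/sh
# Sketch: a non-zero return inside a build helper propagates to the caller.
# The directory argument stands in for the add-on's real dist/ path.
build_frontend() {
    dist_dir="$1"
    if [ -d "$dist_dir" ]; then
        echo "installed"
    else
        echo "dist missing" >&2
        return 1   # without this, the function exits 0 and the failure is masked
    fi
}

if build_frontend "/nonexistent/dist"; then
    RESULT="success"
else
    RESULT="failure"
fi
echo "$RESULT"
```

Without the explicit `return 1`, the function's exit status is that of its last command (the `echo`), which is 0, and the caller sees success.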
233-237: Missing return statement causes build failure to be ignored.

When `npm run build` fails (line 234) or the `dist` directory is not found (line 231), the function logs an error but does not return a non-zero exit code. This causes the deferred build check at line 500 to incorrectly treat the failure as success, leading to cleanup of build artifacts and loss of debugging information.

🐛 Add missing return statements

```diff
     bashio::log.info "Frontend build successful. Installing..."
     # Clear placeholder
     rm -rf /app/frontend/*
     if [ -d "dist" ]; then
         cp -r dist/* /app/frontend/
         bashio::log.info "Frontend successfully installed!"
     else
         bashio::log.error "'dist' directory not found after build!"
+        return 1
     fi
 else
     bashio::log.error "Frontend build failed!"
+    return 1
 fi
 else
     bashio::log.error "npm install failed!"
+    return 1
 fi
```

fanexid/Dockerfile (1)
26-55: Fix broken arm64-musl Node.js URL for v22.12.0.

The arm64-musl binary for Node.js v22.12.0 is not available at https://unofficial-builds.nodejs.org/download/release/v22.12.0/node-v22.12.0-linux-arm64-musl.tar.xz (the URL returns HTTP 404). This will cause the Dockerfile build to fail on ARM64 systems. The x86_64-musl binary is accessible.

Either upgrade to a version where the arm64-musl binary is available (e.g., the latest LTS v24.12.0), or use an alternative distribution source that provides musl binaries for the specified version.
🤖 Fix all issues with AI agents
In @.github/workflows/orchestrator-release.yaml:
- Around line 508-520: The block that attempts to "Sort addons by priority"
assigns ADDONS="$SORTED_ADDONS" but SORTED_ADDONS is never defined, which clears
ADDONS for multi-addon releases; update the block so you either populate
SORTED_ADDONS before assignment or guard the assignment with a check (e.g., only
set ADDONS if SORTED_ADDONS is non-empty) and remove duplicate assignment, or
comment out the sorting block until real sorting logic is added; key symbols to
modify are SORTED_ADDONS, ADDONS and the surrounding ADDON_COUNT conditional.
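The guard option described above can be sketched in shell (values are illustrative; in the real workflow `ADDONS` comes from the earlier filter step):

```shell
#!/bin/sh
# Only overwrite ADDONS when SORTED_ADDONS is non-empty, so an
# unimplemented sorting step cannot clear the addon list.
ADDONS="fanexid,apache2"   # example input from the filter step
SORTED_ADDONS=""           # sorting logic not yet implemented

if [ -n "$SORTED_ADDONS" ]; then
  ADDONS="$SORTED_ADDONS"
fi
echo "$ADDONS"
```

With the guard in place, an empty `SORTED_ADDONS` leaves the original list untouched instead of wiping it.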
- Around line 60-65: Remove the duplicate AUTO_RELEASE_ADDONS mapping and keep a
single key with the intended multi-line value; specifically, delete the first
empty "AUTO_RELEASE_ADDONS: |" entry and ensure the remaining
AUTO_RELEASE_ADDONS uses the block scalar (|) with the indented value "fanexid"
so the env mapping has only one AUTO_RELEASE_ADDONS key.
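For clarity, the intended single-key env block would look roughly like this (a sketch; other env keys omitted):

```yaml
env:
  # Addons that should get auto-releases on dependency updates
  AUTO_RELEASE_ADDONS: |
    fanexid
```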
In @.scripts/verify_addons/custom/solumati/test.ps1:
- Around line 1-18: The script calls Add-Result before the common helper is
loaded; after the param(...) block in test.ps1, source the common module using
the $RepoRoot variable so Add-Result is available (i.e., load the
verify_addons/lib/common.ps1 file referenced by $RepoRoot). Update the Solumati
test (and the other 17 affected custom test scripts) to include the same
sourcing line immediately after their parameter blocks so Add-Result resolves at
runtime.
In @.scripts/verify_addons/custom/wordpress/test.ps1:
- Around line 1-18: The script calls Add-Result but never imports the shared
verification helper; fix by dot-sourcing the common verification module before
any Add-Result calls so the function is available at runtime (e.g., add a
dot-source of the shared "common" module near the top of the script); ensure the
import happens before Write-Host and that Add-Result is used unchanged
(references: Add-Result, $Addon, $ContainerName, $logs, $procCheck).
🟠 Major comments (30)
.scripts/retract_version.py-63-64 (1)

63-64: Add timeout to DELETE request.

Same issue as the GET requests—this DELETE call needs a timeout.

🔧 Proposed fix

```diff
- del_resp = requests.delete(delete_url, headers=headers)
+ del_resp = requests.delete(delete_url, headers=headers, timeout=30)
```

.scripts/toggle_support.py-72-75 (1)
72-75: Add timeouts to prevent indefinite hangs.

These `requests.get()` calls lack timeouts and could hang indefinitely.

🔧 Proposed fix

```diff
- res = requests.get(url, headers=headers)
+ res = requests.get(url, headers=headers, timeout=30)
  if res.status_code == 404:
      url = f"https://api.github.com/users/{owner}/packages/container/{pkg}/versions"
-     res = requests.get(url, headers=headers)
+     res = requests.get(url, headers=headers, timeout=30)
```

.scripts/retract_version.py-98-112 (1)
98-112: Remove unused `manifests` list (dead code).

The `manifests` list is defined but never used. The comments suggest potential future use for multi-architecture cleanup, but currently it's dead code. Either remove it or implement the intended logic.

🗑️ Proposed fix (remove dead code)

```diff
- manifests = [
-     f"{addon}-amd64",
-     f"{addon}-aarch64",
-     f"{addon}-armv7",
-     f"{addon}-armhf",
-     f"{addon}-i386",
- ]
- # We try the slug itself first (multi-arch manifest) and then arch-specifics?
- # Or just the specific images.
-
- # Simple approach: Check exact name passed first.
  retract_version(addon, ver, token, owner)
-
- # If we want to be smart, we could loop variants.
- # retract_version(f"{addon}-amd64", ver, token, owner)
```

.scripts/retract_version.py-33-44 (1)
33-44: Add timeouts to prevent indefinite hangs.

These `requests.get()` calls lack timeouts. If the GHCR API is slow or unresponsive, the script will hang indefinitely. This is especially problematic in CI/CD pipelines.

🔧 Proposed fix

```diff
- resp = requests.get(base_url, headers=headers)
+ resp = requests.get(base_url, headers=headers, timeout=30)
  if resp.status_code == 404:
      # Try Org
      base_url = (
          f"{GITHUB_API}/orgs/{owner}/packages/container/{package_name}/versions"
      )
-     resp = requests.get(base_url, headers=headers)
+     resp = requests.get(base_url, headers=headers, timeout=30)
```

.scripts/toggle_support.py-90-100 (1)
90-100: Fix unused variables and add error handling for deletions.

Several issues here:

- `to_keep` is assigned but never used
- `del_url` is computed but not used—the actual delete uses `url` directly, which is correct, but the variable is dead code
- The DELETE response is discarded—failures are silently ignored
- Missing timeout on DELETE

🔧 Proposed fix

```diff
- to_keep = versions[:1]
  to_delete = versions[1:]
  for v in to_delete:
      vid = v["id"]
      print(f" - Deleting version {vid} of {pkg}")
-     # Delete
-     del_url = f"{url}/{vid}"  # this url base is .../versions, so + /id works? No, url var was set to base.
-     # actually we constructed url above.
-     # verify url construction
-     requests.delete(f"{url}/{vid}", headers=headers)
+     del_resp = requests.delete(f"{url}/{vid}", headers=headers, timeout=30)
+     if del_resp.status_code not in (200, 204):
+         print(f" ⚠️ Failed to delete {vid}: {del_resp.status_code}")
```

.scripts/toggle_support.py-4-4 (1)
4-4: Remove unused import.
`subprocess` is imported but never used in this file.

🗑️ Proposed fix

```diff
  import argparse
  import os
  import shutil
- import subprocess
  import sys
```

Committable suggestion skipped: line range outside the PR's diff.
.scripts/restore_backups.py-27-35 (1)
27-35: Add error handling to prevent partial restorations.

The restoration loop lacks error handling around file operations. If `shutil.copy` encounters an error (permissions, disk space, I/O issues), the script will crash mid-loop, potentially leaving some addons restored and others not—creating an inconsistent state.

Additionally, consider adding success confirmation messages and wrapping the logic in a function for better testability.

🛡️ Proposed fix with error handling

```diff
+def restore_addon_backups():
+    """Restore README.md files from their .bak backups for each addon."""
+    restored = []
+    failed = []
+
-for addon in ADDONS:
-    bak_path = os.path.join(addon, "README.md.bak")
-    readme_path = os.path.join(addon, "README.md")
-
-    if os.path.exists(bak_path):
-        print(f"Restoring {addon}...")
-        shutil.copy(bak_path, readme_path)
-    else:
-        print(f"⚠️ No backup for {addon}")
+    for addon in ADDONS:
+        bak_path = os.path.join(addon, "README.md.bak")
+        readme_path = os.path.join(addon, "README.md")
+
+        if os.path.exists(bak_path):
+            try:
+                print(f"Restoring {addon}...")
+                shutil.copy(bak_path, readme_path)
+                restored.append(addon)
+                print(f"✅ Restored {addon}")
+            except (OSError, IOError) as e:
+                print(f"❌ Failed to restore {addon}: {e}")
+                failed.append(addon)
+        else:
+            print(f"⚠️ No backup for {addon}")
+
+    print(f"\nSummary: {len(restored)} restored, {len(failed)} failed")
+    return len(failed) == 0
+
+
+if __name__ == "__main__":
+    success = restore_addon_backups()
+    exit(0 if success else 1)
```

.scripts/verify_addons/custom/apache2-minimal/test.ps1-35-41 (1)
35-41: Fix the multiple-network IP concatenation bug.

The Go template `{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}` concatenates all IP addresses without separators when the container is on multiple networks. For example, if the container is on both the bridge network (172.17.0.2) and a custom network (172.18.0.3), the result will be "172.17.0.2172.18.0.3", which is an invalid IP address. This causes the HTTP check to fail silently.

🔧 Proposed fix to select the first network IP

```diff
- $containerIp = docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $ContainerName 2>$null
+ $containerIp = docker inspect -f '{{(index .NetworkSettings.Networks 0).IPAddress}}' $ContainerName 2>$null
```

Alternatively, to be more robust, retrieve the first non-empty IP:

```diff
- $containerIp = docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $ContainerName 2>$null
+ $containerIp = (docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $ContainerName 2>$null) -split "`n" | Where-Object { $_ } | Select-Object -First 1
```

Or use a more explicit Go template to get the first network:

```diff
- $containerIp = docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $ContainerName 2>$null
+ $containerIp = docker inspect -f '{{with (index .NetworkSettings.Networks)}}{{.IPAddress}}{{end}}' $ContainerName 2>$null
```

Committable suggestion skipped: line range outside the PR's diff.
.scripts/verify_addons/tests/11-docker-build-run.ps1-49-56 (1)
49-56: Docker filter conditions always evaluate to true.

The `if (docker ps -a -q -f name=$mockName)` pattern doesn't work as expected in PowerShell. When no container matches, it returns an empty string which is still truthy. Similarly for network checks.

🐛 Proposed fix

```diff
  # Clean previous
  Write-Host " > Cleaning up previous mock environment..." -ForegroundColor Gray
- if (docker ps -a -q -f name=$mockName) {
+ $existingContainer = docker ps -a -q -f name=$mockName
+ if ($existingContainer) {
      docker rm -f $mockName 2>&1 | Out-Null
      Start-Sleep -Milliseconds 500
  }
- if (docker network ls -q -f name=$networkName) {
+ $existingNetwork = docker network ls -q -f name=$networkName
+ if ($existingNetwork) {
      docker network rm $networkName 2>&1 | Out-Null
      Start-Sleep -Milliseconds 500
  }
```

.scripts/verify_addons/tests/11-docker-build-run.ps1-533-537 (1)
533-537: Undefined variable `$Tests` causes custom tests to never run.

The variable `$Tests` is referenced but never defined in the script parameters or body. This means `$willRunCustom` will always be `$false`, and custom tests will never be preserved/executed.

🐛 Proposed fix - add parameter

```diff
  param(
      [Parameter(Mandatory)][array]$Addons,
      [Parameter(Mandatory)][hashtable]$Config,
      [Parameter(Mandatory)][string]$OutputDir,
      [Parameter(Mandatory)][string]$RepoRoot,
      [bool]$DockerAvailable = $false,
      [bool]$RunTests = $true,
      [bool]$ChangedOnly = $false,
      [hashtable]$ChangedAddons = @{},
-     [bool]$CacheImages = $false
+     [bool]$CacheImages = $false,
+     [array]$Tests = @("all")
  )
```

Committable suggestion skipped: line range outside the PR's diff.
.scripts/verify_addons/tests/11-docker-build-run.ps1-192-200 (1)
192-200: Same docker filter pattern issue.

Lines 192 and 198 have the same truthy evaluation issue flagged earlier. Apply the same fix: capture output to a variable before the conditional check.

```diff
- if (docker ps -a -q -f name=$contName) {
+ $existing = docker ps -a -q -f name=$contName
+ if ($existing) {
      docker rm -f $contName 2>&1 | Out-Null
      Start-Sleep -Milliseconds 200
  }
  # Ensure network exists (resiliency against previous crashes)
- if (-not (docker network ls -q -f name=$networkName)) {
+ $netExists = docker network ls -q -f name=$networkName
+ if (-not $netExists) {
      docker network create $networkName 2>$null | Out-Null
  }
```

.scripts/verify_addons/custom/switch_lan_play/test.ps1-10-15 (1)
10-15: Add explicit failure reporting for both checks.

Both the BannerCheck and ProcessCheck only report PASS when conditions are met, but don't report FAIL when conditions aren't met. This creates silent failures where the test appears incomplete rather than explicitly failed.

Compare with the apache2/test.ps1 pattern (lines 15-19) which includes explicit ELSE branches with FAIL status.

🔧 Proposed fix to add failure reporting

```diff
  $logs = docker logs "$ContainerName" 2>&1
  if ($logs -match "FaserF's Addon Repository") {
      Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found."
+ } else {
+     Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "FAIL" -Message "Banner NOT found in logs."
  }
  if ($logs -match "lan-play") {
      Add-Result -Addon $Addon.Name -Check "ProcessCheck" -Status "PASS" -Message "lan-play detected in logs."
+ } else {
+     Add-Result -Addon $Addon.Name -Check "ProcessCheck" -Status "WARN" -Message "lan-play not detected in logs."
  }
```

fanexid/rootfs/etc/s6-overlay/s6-rc.d/svc-backend/run-12-14 (1)
12-14: Add validation for uvicorn and application module.

The script attempts to execute uvicorn without verifying it's installed or that the `main:app` module exists. If either is missing, the service will fail at runtime without a clear error message.

🛡️ Proposed validation checks

```diff
  bashio::log.info "Starting faneX-ID Backend (Uvicorn)..."
  cd /app/backend || exit 1
+
+ # Validate uvicorn is available
+ if ! command -v uvicorn &> /dev/null; then
+     bashio::log.error "uvicorn command not found. Cannot start backend."
+     exit 1
+ fi
+
+ # Validate main.py exists
+ if [ ! -f "main.py" ]; then
+     bashio::log.error "main.py not found in /app/backend"
+     exit 1
+ fi
+
  exec uvicorn main:app --host 127.0.0.1 --port 8001 --log-level "$(echo $LOG_LEVEL | tr '[:upper:]' '[:lower:]')"
```

.scripts/verify_addons/custom/whatsapp/test.ps1-35-46 (1)
35-46: Fix curl flags for proper HTTP status code capture.

The `-f` flag makes curl fail silently on HTTP errors (4xx, 5xx) without outputting the status code. However, lines 40-46 check for 404 and 401 codes, which won't be captured due to the `-f` flag. Remove `-f` to properly capture all HTTP status codes.

🔧 Proposed fix to remove -f flag

```diff
- $curlResult = docker exec $ContainerName curl -f -s -o /dev/null -w "%{http_code}" http://127.0.0.1:8066/health 2>$null
+ $curlResult = docker exec $ContainerName curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:8066/health 2>$null
  $curlResult = $curlResult.Trim()
```

This allows curl to return the actual HTTP status code regardless of whether it's a success (2xx) or error (4xx, 5xx) response.
.scripts/auto_update.py-1-1 (1)
1-1: Remove unused import.
`argparse` is imported but never used in this script.

♻️ Fix

```diff
- import argparse
  import os
  import subprocess
```

.scripts/fix_oci_labels.py-1-1 (1)
1-1: Remove unused import.
`datetime` is imported but never used. The `BUILD_DATE` in the labels dict is a shell variable placeholder (`${BUILD_DATE}`), not computed in Python.

♻️ Fix

```diff
- import datetime
  import os
  import sys
```

.github/workflows/orchestrator-automerge.yaml-19-22 (1)
19-22: Fix the branch name condition to match the actual default branch.

Line 22 checks `head_branch != 'master'`, but the repository's default branch is `main`. This condition will not work as intended—PRs from `main` will not be prevented from auto-merging because `'main' != 'master'` evaluates to true. Change `'master'` to `'main'` on line 22 to correctly prevent auto-merge of PRs directly from the default branch.

.scripts/verify_addons/tests/16-ingress-check.ps1-78-81 (1)
78-81: Remove or track temporary CI skip.

The hard-coded CI skip with "Temporary" comment (line 79) has been in place since this file was added. If this is still needed, create a tracking issue and reference it in the comment. Otherwise, remove this workaround.
Do you want me to open a new issue to track this temporary workaround?
.scripts/verify_addons/custom/solumati/test.ps1-9-12 (1)
9-12: BannerCheck logic is incomplete.

The BannerCheck only reports PASS when the banner is found but never reports FAIL when it's missing. This creates inconsistency with the ProcessCheck logic (lines 14-18) which correctly handles both PASS and FAIL cases.

🔧 Proposed fix to add FAIL branch

```diff
  $logs = docker logs "$ContainerName" 2>&1
  if ($logs -match "FaserF's Addon Repository") {
      Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found."
+ } else {
+     Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "FAIL" -Message "Banner NOT found."
  }
```

.scripts/verify_addons/custom/wordpress/test.ps1-9-12 (1)
9-12: BannerCheck logic is incomplete.

The BannerCheck only reports PASS when the banner is found but never reports FAIL when it's missing. This creates inconsistency with the ProcessCheck logic (lines 14-18) which correctly handles both PASS and FAIL cases.

This is the same issue present in `.scripts/verify_addons/custom/solumati/test.ps1`.

🔧 Proposed fix to add FAIL branch

```diff
  $logs = docker logs "$ContainerName" 2>&1
  if ($logs -match "FaserF's Addon Repository") {
      Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found."
+ } else {
+     Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "FAIL" -Message "Banner NOT found."
  }
```

.scripts/trigger_edge_updates.py-12-22 (1)
12-22: Missing `.unsupported` directory scanning.

The function only scans top-level directories but doesn't check the `.unsupported` directory for edge addons. Other similar scripts in the codebase (e.g., `trigger_all_releases.py`, `prepare_edge_branch.py`) scan both locations.

🔧 Suggested fix to include .unsupported directory

```diff
  def get_all_addons():
      addons = []
      for item in os.listdir("."):
          if (
              os.path.isdir(item)
              and not item.startswith(".")
              and os.path.exists(os.path.join(item, "config.yaml"))
          ):
              if item.endswith("-edge") or item in EDGE_ADDONS:
                  addons.append(item)
+
+     # Check .unsupported directory
+     unsupported_dir = ".unsupported"
+     if os.path.exists(unsupported_dir):
+         for item in os.listdir(unsupported_dir):
+             item_path = os.path.join(unsupported_dir, item)
+             if os.path.isdir(item_path) and os.path.exists(
+                 os.path.join(item_path, "config.yaml")
+             ):
+                 if item.endswith("-edge") or item in EDGE_ADDONS:
+                     addons.append(f".unsupported/{item}")
      return addons
```

.scripts/trigger_edge_updates.py-39-53 (1)
39-53: Change `message` parameter to `changelog_message`.

The `orchestrator-release.yaml` workflow accepts `changelog_message` as an input, not `message`. Passing an unsupported parameter will cause the workflow run to fail. Either rename it to `changelog_message` or remove it if not needed.

Workflow inputs reference — the workflow accepts:

- `addon` (string, required)
- `version` (string, required)
- `skip_verification` (boolean, optional)
- `changelog_only` (boolean, optional)
- `changelog_message` (string, optional)

.scripts/verify_addons/tests/00-autofix.ps1-78-84 (1)
78-84: Path separator may cause issues on non-Windows platforms.

Using backslash (`\`) in the path pattern will fail on Linux/macOS where PowerShell Core is used. GitHub Actions runners are typically Ubuntu-based.

Proposed cross-platform fix

```diff
  try {
-     $prettierTargets = $FixPaths | ForEach-Object { "$_\**\*.{json,js,md,yaml}" }
+     $prettierTargets = $FixPaths | ForEach-Object { "$_/**/*.{json,js,md,yaml}" }
      $ignorePath = Join-Path $RepoRoot ".prettierignore"
      npx prettier --write $prettierTargets --ignore-path "$ignorePath"
  } catch {
      Write-Host "Skipping Prettier (failed/missing)" -ForegroundColor DarkGray
  }
```

.github/workflows/orchestrator-edge-sync.yaml-120-130 (1)
120-130: Duplicate checkout steps - likely copy/paste error.

There are two consecutive checkout steps at lines 120-130. The first has a conditional `if` clause, but the second one (lines 125-130) runs unconditionally and would override the first. This appears to be a bug.

Proposed fix - remove duplicate

```diff
  - name: ⤵️ Checkout master
    if: |
      github.event_name == 'workflow_dispatch' ||
      steps.check_changes.outputs.only_workflows != 'true'
    uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6
- - name: ⤵️ Checkout master
-   uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6
    with:
      ref: master
      fetch-depth: 0
      token: ${{ secrets.PAT || secrets.GITHUB_TOKEN }}
```

.scripts/prepare_edge_branch.py-166-207 (1)
166-207: Main README path is case-sensitive and may not match the actual file name.

In `add_edge_notice_to_readme` and `main()` you use:

```python
main_readme = "README.MD"
if add_edge_notice_to_readme(main_readme):
    print("📝 Added edge notice to main README.MD")
```

On case-sensitive filesystems (Linux runners, most dev environments) this will not match `README.md`, which is what this repo currently uses. As a result the main README will never receive the edge notice.

Recommend either:

- Switch to `"README.md"`, or
- Check both names:

```python
for candidate in ("README.md", "README.MD"):
    if add_edge_notice_to_readme(candidate):
        print(f"📝 Added edge notice to main {candidate}")
        break
```

This keeps behavior robust across platforms and naming conventions.
Also applies to: 295-303
.github/workflows/orchestrator-release.yaml-840-851 (1)
840-851: Delete-existing-image logic is tied to `ghcr.io/faserf/` and out of sync with the image format.

Here you try to derive a package name from `image:`:

```bash
IMAGE_FROM_CONFIG=...
if [ -z "$IMAGE_FROM_CONFIG" ]; then
  ...
else
  PACKAGE_NAME=$(echo "$IMAGE_FROM_CONFIG" | sed 's|ghcr.io/faserf/||' | sed 's/-{arch}$//' | sed 's/-amd64$//' ...)
fi
```

But earlier in the release job you normalize the image field to:

```bash
CORRECT_IMAGE="ghcr.io/${REPO_OWNER}/$(basename "${{ github.repository }}")-${SLUG}-{arch}"
```

So in this repo the `image:` will be under `ghcr.io/${REPO_OWNER}/...`, not `ghcr.io/faserf/...`. The hardcoded `faserf` here means:

- `PACKAGE_NAME` will still contain the `ghcr.io/...` prefix, and
- The GitHub Packages API call will not match the actual container package name.

Result: the "delete existing image version" step effectively never finds the old version.

Suggested fix: strip the actual owner prefix instead of a hardcoded one, e.g.:

```diff
- PACKAGE_NAME=$(echo "$IMAGE_FROM_CONFIG" | sed 's|ghcr.io/faserf/||' | sed 's/-{arch}$//' ...)
+ OWNER_LOWER=$(echo "${{ github.repository_owner }}" | tr '[:upper:]' '[:lower:]')
+ PACKAGE_NAME=$(echo "$IMAGE_FROM_CONFIG" \
+   | sed "s|ghcr.io/${OWNER_LOWER}/||" \
+   | sed 's/-{arch}$//' \
+   | sed 's/-amd64$//' \
+   | sed 's/-aarch64$//' \
+   | sed 's/-armhf$//' \
+   | sed 's/-armv7$//' \
+   | sed 's/-i386$//')
```

This keeps the delete logic aligned with however you format the `image:` field above.

.scripts/bump_version.py-316-391 (2)
316-391: Dev version format (`-dev-<sha>`) is incompatible with the version regex.

The version regex in `bump_version`:

```python
version_pattern = r'^(version: ["\']?)([0-9]+\.[0-9]+\.[0-9]+(?:-dev)?(?:\+[a-f0-9]+)?)(["\']?)'
```

supports:

- `1.2.3`
- `1.2.3-dev`
- `1.2.3+deadbeef`
- `1.2.3-dev+deadbeef`

But when `set_dev` is true you now generate:

```python
new_version = f"{major}.{minor}.{patch}-dev-{commit_sha}"
```

That form does not match the pattern, so the next time you run `bump_version` on that add-on, `re.search` will fail and you'll get "Could not find version in config file".

Two options:

- Either revert to a `+` suffix (and avoid using the dev version as an image tag), or
- Extend the regex and parser to understand `-dev-<sha>`:

```diff
- (?:-dev)?(?:\+[a-f0-9]+)?
+ (?:-dev(?:-[a-f0-9]+)?)?(?:\+[a-f0-9]+)?
```

and update `parse_version` to strip a trailing `-dev-<sha>` when determining `clean_version`.

Without this alignment, any future dev bumps on an already-dev version will fail.
Also applies to: 690-715
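The mismatch can be checked directly. A small sketch using the version portion of the patterns quoted above (simplified to drop the `version:` prefix group; the extended alternative is the one suggested in the comment):

```python
import re

# Version part of the pattern currently in bump_version
current = re.compile(r"^[0-9]+\.[0-9]+\.[0-9]+(?:-dev)?(?:\+[a-f0-9]+)?$")
# Extended pattern that also accepts the new -dev-<sha> form
extended = re.compile(r"^[0-9]+\.[0-9]+\.[0-9]+(?:-dev(?:-[a-f0-9]+)?)?(?:\+[a-f0-9]+)?$")

print(bool(current.match("1.2.3-dev-deadbeef")))   # current pattern rejects the new dev format
print(bool(extended.match("1.2.3-dev-deadbeef")))  # extended pattern accepts it
print(bool(extended.match("1.2.3+deadbeef")))      # previously supported forms still match
```

This makes the failure mode concrete: the first match returns `None`, which is exactly why a second `bump_version` run on an already-dev version cannot find the version line.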
316-391: Image tag logic hardcoded to `ghcr.io/faserf/hassio-addons-...` conflicts with the release workflow.

The image naming convention is hardcoded in two files:

`.scripts/bump_version.py` (line 357):

```python
correct_image_line = (
    f"image: ghcr.io/faserf/hassio-addons-{slug.lower()}-{{arch}}"
)
```

`.scripts/ensure_image_fields.py` (line 24):

```python
IMAGE_TEMPLATE = "ghcr.io/faserf/hassio-addons-{slug}-{{arch}}"
```

However, `.github/workflows/orchestrator-release.yaml` (lines 640-644) derives the image dynamically:

```bash
REPO_OWNER=$(echo "${{ github.repository_owner }}" | tr '[:upper:]' '[:lower:]')
CORRECT_IMAGE="ghcr.io/${REPO_OWNER}/$(basename "${{ github.repository }}")-${SLUG}-{arch}"
```

For this repository (faneX-ID/homeassistant-addon), this creates a mismatch:

- orchestrator-release.yaml produces: `ghcr.io/fanexid/homeassistant-addon-{slug}-{arch}`
- Both Python scripts override with: `ghcr.io/faserf/hassio-addons-{slug}-{arch}`

This causes the two tools to overwrite each other's image tags and can prevent registry cleanup from working correctly.

Both `.scripts/bump_version.py` and `.scripts/ensure_image_fields.py` should derive the image format from environment variables (matching the workflow's approach) rather than hardcoding the owner and repo name.
117-168: Inline PEM key/cert will keep tripping secret scanners.

The embedded certificate and private key plus the hardcoded `SUPERVISOR_TOKEN` and passwords are clearly test-only, but tools like gitleaks will continue to flag them as real secrets.

Consider instead:

- Generating the TLS keypair at runtime with `openssl` (as you already do in the orchestrator-supervisor-test workflow), and
- Moving test-only tokens/passwords to obvious "dummy" env vars or constants (e.g., `TEST_SUPERVISOR_TOKEN`), or documenting them in a gitleaks allowlist.

This keeps the script behavior the same but greatly reduces false-positive secret alerts.
.scripts/verify_addons/tests/17-supervisor-test.ps1-538-639 (1)
538-639: `$shouldRunRuntimeTests` is never actually honored.

Even when `$shouldRunRuntimeTests` is set to `$false` ("Install Only Mode"), the code still proceeds to start the add-on and poll for runtime state further down in the same `try` block.

As written, you mark the test as `PASS (Install Only)` but then still run `ha addons start` and subsequent checks, which can flip `$testPassed` to `false` and turn the test into a failure.

If you really want an "install only" mode, gate the entire start/poll logic behind `$shouldRunRuntimeTests`, e.g.:

Proposed guard around start/poll phase

```diff
  if (-not $shouldRunRuntimeTests) {
      Write-Host " > Skipping Start/Config phase (Install Only Mode)" -ForegroundColor Yellow
      Add-Result -Addon $addon.Name -Check "SupervisorTest" -Status "PASS" -Message "PASS (Install Only)"
+     # Skip the remaining start/runtime checks entirely
+     continue
  }
  else {
      # Configure add-on if needed (for non-apache2 addons)
      ...
  }
@@
- # Start add-on
+ # Start add-on (only reached when shouldRunRuntimeTests is $true)
```

Right now this is a logic bug that can cause unexpected failures in what is supposed to be an install-only path.
🟡 Minor comments (20)
.scripts/verify_addons/tests/13-workflow-checks.ps1-38-52 (1)
**38-52: Use cross-platform temp path.**

`$env:TEMP` is Windows-specific. On Linux/macOS, this environment variable doesn't exist, causing the log path to be incomplete or fail.

🔧 Proposed fix using .NET cross-platform temp path

```diff
-    $zizmorLog = Join-Path $env:TEMP "zizmor-report.txt"
+    $zizmorLog = Join-Path ([System.IO.Path]::GetTempPath()) "zizmor-report.txt"
```

.scripts/verify_addons/tests/13-workflow-checks.ps1-16-17 (1)
**16-17: Missing `.yml` workflow files.**

GitHub Actions supports both `.yaml` and `.yml` extensions for workflow files. The current filter only captures `*.yaml`, potentially missing workflows that use the `.yml` extension.

🔧 Proposed fix

```diff
-$workflowDir = Join-Path $RepoRoot ".github\workflows"
-$workflows = Get-ChildItem -Path $workflowDir -Filter "*.yaml"
+$workflowDir = Join-Path $RepoRoot ".github" "workflows"
+$workflows = Get-ChildItem -Path $workflowDir -Include "*.yaml", "*.yml" -File
```

.scripts/generate_ci_success_message.js-9-15 (1)
**9-15: Add newline separators before optional notes.**

The optional notes are appended directly to the body without leading newlines. Since the main message ends with "ready for review." without a trailing newline, the notes will be concatenated on the same line, breaking markdown formatting.

📝 Proposed fix for formatting

```diff
 body += 'This PR meets high quality standards and is ready for review.';
 if (previousErrorsNote) {
-  body += previousErrorsNote;
+  body += '\n\n' + previousErrorsNote;
 }
 if (autoMergeNote) {
-  body += autoMergeNote;
+  body += '\n\n' + autoMergeNote;
 }
 return body;
```

.scripts/verify_addons/custom/whatsapp/test.ps1-27-29 (1)
**27-29: Fix port number inconsistency.**

The comment on line 28 states "port 8099" but line 29 uses port 8066. Verify which port is correct and update the comment accordingly.

📝 Proposed comment fix (assuming 8066 is correct)

```diff
 # 2. Test: Verify Health Endpoint
-# The addon starts a web server on port 8099
+# The addon starts a web server on port 8066
 Write-Host " - Verifying health endpoint (http://127.0.0.1:8066/)..." -ForegroundColor Gray
```

.scripts/workflow_monitor.js-174-176 (1)
**174-176: Sanitize values for markdown table.**

If `err.job`, `err.step`, `err.type`, or `err.fix` contain pipe characters (`|`) or newlines, the markdown table will break. Consider escaping or replacing these characters.

🛠️ Suggested sanitization helper

```js
// Add helper function at the top of the module
const sanitize = (str) => String(str).replace(/\|/g, '\\|').replace(/\n/g, ' ');

// Then use in table row construction:
body += `| ${sanitize(err.job)} | ${sanitize(err.step)} | ${sanitize(err.type)} | ${sanitize(err.fix)} |\n`;
```

.scripts/workflow_monitor.js-187-192 (1)
**187-192: YAML values should be escaped.**

If `err.job`, `err.step`, etc. contain double quotes, the generated YAML will be malformed. Consider escaping quotes or using a YAML library.

🛠️ Suggested fix

```js
const escapeYaml = (str) => String(str).replace(/"/g, '\\"');

// Then:
body += `  - job: "${escapeYaml(err.job)}"\n`;
body += `    step: "${escapeYaml(err.step)}"\n`;
body += `    error_type: "${escapeYaml(err.type)}"\n`;
body += `    fix: "${escapeYaml(err.fix)}"\n`;
```

.scripts/workflow_monitor.js-118-134 (1)
**118-134: `infraSteps` check overwrites more specific error type.**

When a step matches both an `infraPatterns` pattern and an `infraSteps` substring, line 133 overwrites `errorInfo.type` with the generic `"Setup/Infrastructure"`, losing the more specific type (e.g., `"Docker Error"`, `"Permission Error"`) that was set at line 111.

🛠️ Suggested fix

```diff
 // Known infrastructure steps
 const infraSteps = [
   'Check out code', 'Set up', 'Setup', 'Install', 'Cache',
   'Download', 'Upload', 'Docker', 'Login', 'Checkout',
 ];
-if (infraSteps.some((s) => stepName.includes(s))) {
+if (!isInfraError && infraSteps.some((s) => stepName.includes(s))) {
   isInfraError = true;
   errorInfo.type = 'Setup/Infrastructure';
 }
```

.scripts/verify_addons/custom/sap-abap-cloud-dev/test.ps1-38-55 (1)
**38-55: Potential socket resource leak on exceptions.**

If `BeginConnect` succeeds but `WaitOne` returns false, `EndConnect` is not called before `Close`, which can leave the async operation dangling. Also, if an exception occurs after socket creation (line 42), the socket may not be closed properly since line 51 is inside the try block.

🛠️ Suggested fix with proper cleanup

```diff
 try {
     $containerIp = docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $ContainerName 2>$null
     if ($containerIp) {
-        # Quick check if port is open (TCP), not HTTP 200 as that takes forever
         $socket = New-Object System.Net.Sockets.TcpClient
-        $connect = $socket.BeginConnect($containerIp, 8443, $null, $null)
-        $success = $connect.AsyncWaitHandle.WaitOne(1000)
-        if ($success) {
-            Add-Result -Addon $Addon.Name -Check "PortCheck" -Status "PASS" -Message "Port 8443 is open."
-            $socket.EndConnect($connect)
-        } else {
-            Add-Result -Addon $Addon.Name -Check "PortCheck" -Status "INFO" -Message "Port 8443 not yet open (startup ongoing, expected)."
+        try {
+            $connect = $socket.BeginConnect($containerIp, 8443, $null, $null)
+            $success = $connect.AsyncWaitHandle.WaitOne(1000)
+            if ($success -and $socket.Connected) {
+                Add-Result -Addon $Addon.Name -Check "PortCheck" -Status "PASS" -Message "Port 8443 is open."
+            } else {
+                Add-Result -Addon $Addon.Name -Check "PortCheck" -Status "INFO" -Message "Port 8443 not yet open (startup ongoing, expected)."
+            }
+        } finally {
+            $socket.Close()
         }
-        $socket.Close()
     }
 } catch {
     Add-Result -Addon $Addon.Name -Check "PortCheck" -Status "INFO" -Message "Port check skipped/failed (network or startup timing)."
 }
```

.scripts/verify_addons/custom/tado_aa/test.ps1-10-12 (1)
**10-12: Missing FAIL case for BannerCheck when banner is not found.**

Same issue as the pterodactyl-wings test: the BannerCheck only reports PASS when the banner is found but doesn't report FAIL when it is missing.

🐛 Proposed fix to add FAIL case

```diff
 if ($logs -match "FaserF's Addon Repository") {
     Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found."
+} else {
+    Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "FAIL" -Message "Banner not found in logs."
 }
```

.scripts/verify_addons/custom/pterodactyl-wings/test.ps1-10-12 (1)
**10-12: Missing FAIL case for BannerCheck when banner is not found.**

The BannerCheck only reports PASS when the banner is found but doesn't report FAIL when it's missing. This leaves the check result undefined for failed cases.

🐛 Proposed fix to add FAIL case

```diff
 if ($logs -match "FaserF's Addon Repository") {
     Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found."
+} else {
+    Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "FAIL" -Message "Banner not found in logs."
 }
```

.scripts/verify_addons/custom/wiki.js3/test.ps1-10-12 (1)
**10-12: Missing FAIL case for BannerCheck.**

Same issue as the nginx test: if the banner is not found, no result is recorded.

🐛 Proposed fix

```diff
 if ($logs -match "FaserF's Addon Repository") {
     Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found."
+} else {
+    Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "WARN" -Message "Banner not found in logs."
 }
```

.scripts/verify_addons/tests/12-coderabbit.ps1-65-68 (1)
**65-68: HEALTHCHECK timing logic may not work as intended.**

The condition requires both `--interval` AND `--timeout` to be missing before warning. If only one is missing, no warning is issued. Consider using `-or` to catch cases where either parameter is missing.

🐛 Proposed fix

```diff
-    if ($content -match 'HEALTHCHECK' -and $content -notmatch 'HEALTHCHECK\s+NONE' -and ($content -notmatch '--interval' -and $content -notmatch '--timeout')) {
+    if ($content -match 'HEALTHCHECK' -and $content -notmatch 'HEALTHCHECK\s+NONE' -and ($content -notmatch '--interval' -or $content -notmatch '--timeout')) {
         Add-Result -Addon $a.Name -Check "CR-HealthcheckTiming" -Status "WARN" -Message "HEALTHCHECK lacks explicit --interval and --timeout parameters."
     }
```

.scripts/verify_addons/custom/nginx/test.ps1-10-12 (1)
**10-12: Missing FAIL case for BannerCheck.**

If the banner is not found, no result is recorded. This makes it impossible to distinguish between "banner check passed" and "banner check was not run" in the results. Add an `else` branch for consistency with the ProcessCheck below.

🐛 Proposed fix

```diff
 if ($logs -match "FaserF's Addon Repository") {
     Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found."
+} else {
+    Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "WARN" -Message "Banner not found in logs."
 }
```

.scripts/verify_addons/tests/12-coderabbit.ps1-84-87 (1)
**84-87: BUILD_DATE regex check has a logic flaw.**

The pattern `ARG BUILD_DATE=.` uses `.`, which matches any single character. Combined with `-notmatch`, this means "ARG BUILD_DATE=" followed by any character would NOT trigger the condition. The intent appears to be checking for a non-empty default, but `ARG BUILD_DATE=x` (any single char) would pass incorrectly.

🐛 Proposed fix using a more explicit pattern

```diff
-    if ($content -match 'ARG BUILD_DATE' -and ($content -notmatch 'ARG BUILD_DATE=.' -or $content -match 'ARG BUILD_DATE=""')) {
+    if ($content -match 'ARG BUILD_DATE\s*$' -or $content -match 'ARG BUILD_DATE\s*=\s*$' -or $content -match 'ARG BUILD_DATE\s*=\s*["'']?["'']?\s*$') {
         Add-Result -Addon $a.Name -Check "CR-BuildDateDefault" -Status "WARN" -Message "ARG BUILD_DATE has empty or missing default. Use '1970-01-01T00:00:00Z' for local builds."
     }
```

Alternatively, use a simpler positive match for valid BUILD_DATE defaults.
.scripts/verify_addons/tests/03-hadolint.ps1-36-39 (1)
**36-39: Misleading error message in catch block.**

The catch block at line 38 reports "Docker unavailable," but this code only executes when `$DockerAvailable` is `$true` (checked at line 19). If the docker command fails at this point, it's more likely a runtime issue (image pull failure, command syntax error, etc.) rather than Docker being unavailable.

Consider updating the message to accurately reflect the error condition, such as "Hadolint execution failed" or including the actual exception details.

📝 Improve error message accuracy

```diff
 try {
     $out = (Get-Content $df | docker run --rm -i hadolint/hadolint hadolint - --ignore DL3018 2>&1)
     if ($LASTEXITCODE -ne 0) {
         Add-Result -Addon $a.Name -Check "Hadolint" -Status "FAIL" -Message $out
     } else {
         Add-Result -Addon $a.Name -Check "Hadolint" -Status "PASS" -Message "OK"
     }
 } catch {
-    Add-Result -Addon $a.Name -Check "Hadolint" -Status "SKIP" -Message "Docker unavailable"
+    Add-Result -Addon $a.Name -Check "Hadolint" -Status "SKIP" -Message "Hadolint execution failed: $_"
 }
```

.scripts/verify_addons/custom/ShieldDNS/test.ps1-9-12 (1)
**9-12: Add FAIL case for banner check.**

The banner check only reports PASS when the banner is found but doesn't report FAIL when it's missing. This creates an incomplete test result and an inconsistency with the ProcessCheck logic below.

🔧 Proposed fix

```diff
 $logs = docker logs "$ContainerName" 2>&1
 if ($logs -match "FaserF's Addon Repository") {
     Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found."
+} else {
+    Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "FAIL" -Message "Banner not found."
 }
```

.github/workflows/orchestrator-autofix.yaml-243-261 (1)
**243-261: Force-push in retry logic could overwrite concurrent changes.**

The `--force-with-lease` is safer than `--force`, but this retry block could still cause issues if another workflow pushes between the reset and push. Consider whether this complexity is necessary or whether failing gracefully would be preferable.

The logic also assumes the last commit contains workflow files when it might have failed for other reasons (network, permissions). The `else` branch at lines 258-260 handles this, but the error message could be more descriptive about what actually failed.

.scripts/check_compliance.py-77-82 (1)
**77-82: Potential `IndexError` when parsing FROM instruction.**

If a `FROM` line exists but has no image specified (e.g., just `FROM` with trailing whitespace), `line.split()[1]` will raise an `IndexError`.

Proposed defensive fix

```diff
 if not detected_base:
     # Fallback to Dockerfile FROM check
     for line in content.splitlines():
         if line.startswith("FROM"):
-            image = line.split()[1]
-            validate_base_image(image, "Dockerfile")
+            parts = line.split()
+            if len(parts) > 1:
+                image = parts[1]
+                validate_base_image(image, "Dockerfile")
```

.scripts/verify_addons/tests/15-custom-addon-tests.ps1-56-61 (1)
**56-61: Accessing global variable without null check.**

`$global:Results` is accessed without verifying it exists, which could cause issues if the global results array hasn't been initialized.

Proposed fix

```diff
 # Check if DockerRun failed for this addon
-$dockerRunResult = $global:Results | Where-Object { $_.Addon -eq $a.Name -and $_.Check -eq "DockerRun" }
+$dockerRunResult = if ($global:Results) {
+    $global:Results | Where-Object { $_.Addon -eq $a.Name -and $_.Check -eq "DockerRun" }
+} else { $null }
```

.github/workflows/orchestrator-release.yaml-440-459 (1)
**440-459: Push-mode `release(all)` logic has dead/miswired pieces.**

In the push branch:

```bash
COMMIT_MSG="${github.event.head_commit.message}"
if echo "$COMMIT_MSG" | grep -qi "release(all)"; then
  ...
  SKIP_ADDONS="homeassistant-test-instance sap-abap-cloud-dev"
  ...
  for item in */; do
    ...
    # Skip test add-ons
    skip=false
    skip=false
    if [ "$skip" == "false" ]; then
      ADDONS=...
    fi
  done
```

Issues:

- `COMMIT_MSG="${github.event.head_commit.message}"` is a literal string, not the commit message. If you still need this, it should use `${{ }}` interpolation; otherwise it can be removed since the top-level `if:` already filters on `contains(github.event.head_commit.message, 'release(all)')`.
- `SKIP_ADDONS` is never used; `skip` is set to `false` twice and never updated, so the "Skip test add-ons" comment does not match reality, and `homeassistant-test-instance`/`sap-abap-cloud-dev` will be released with `release(all)`.

If you intend to actually skip those add-ons, reuse the more complete logic you already have in the `workflow_dispatch` all-branch (including the `SKIP_ADDONS` loop) instead of this incomplete copy.
```powershell
param(
    [Parameter(Mandatory)]$Addon,
    [Parameter(Mandatory)]$Config,
    [Parameter(Mandatory)]$OutputDir,
    [Parameter(Mandatory)]$RepoRoot,
    [Parameter(Mandatory)]$ContainerName
)
Write-Host " > [Custom] Verifying Solumati..." -ForegroundColor Gray
$logs = docker logs "$ContainerName" 2>&1
if ($logs -match "FaserF's Addon Repository") {
    Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found."
}
$procCheck = docker exec "$ContainerName" ps aux 2>&1
if ($procCheck -match "python" -or $procCheck -match "solumati") {
    Add-Result -Addon $Addon.Name -Check "ProcessCheck" -Status "PASS" -Message "Solumati process detected."
} else {
    Add-Result -Addon $Addon.Name -Check "ProcessCheck" -Status "FAIL" -Message "Solumati process NOT found."
}
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check which custom addon test scripts use Add-Result without sourcing common.ps1
echo "=== Custom test scripts that call Add-Result ==="
rg -l "Add-Result" .scripts/verify_addons/custom/ --type=ps1
echo -e "\n=== Custom test scripts that source common.ps1 ==="
rg -l "common\.ps1" .scripts/verify_addons/custom/ --type=ps1
echo -e "\n=== Scripts using Add-Result WITHOUT sourcing common.ps1 ==="
for script in $(rg -l "Add-Result" .scripts/verify_addons/custom/ --type=ps1); do
if ! grep -q "common\.ps1" "$script"; then
echo "$script"
fi
done

Repository: faneX-ID/homeassistant-addon
Length of output: 335
🏁 Script executed:
# Examine the specific files mentioned in the review
echo "=== Checking .scripts/verify_addons/custom/solumati/test.ps1 ==="
cat -n .scripts/verify_addons/custom/solumati/test.ps1
echo -e "\n=== Checking .scripts/verify_addons/custom/antigravity-server/test.ps1 (first 20 lines) ==="
head -20 .scripts/verify_addons/custom/antigravity-server/test.ps1 | cat -n

Repository: faneX-ID/homeassistant-addon
Length of output: 1854
🏁 Script executed:
#!/bin/bash
# Search for custom test scripts using standard commands (rg with glob pattern)
echo "=== All custom test .ps1 files ==="
find .scripts/verify_addons/custom -name "test.ps1" -type f
echo -e "\n=== Which ones use Add-Result ==="
find .scripts/verify_addons/custom -name "test.ps1" -type f -exec grep -l "Add-Result" {} \;
echo -e "\n=== Which ones source common.ps1 ==="
find .scripts/verify_addons/custom -name "test.ps1" -type f -exec grep -l "common\.ps1" {} \;
echo -e "\n=== Detailed: Scripts using Add-Result but NOT sourcing common.ps1 ==="
for script in $(find .scripts/verify_addons/custom -name "test.ps1" -type f); do
if grep -q "Add-Result" "$script" && ! grep -q "common\.ps1" "$script"; then
echo "MISSING SOURCE: $script"
elif grep -q "Add-Result" "$script" && grep -q "common\.ps1" "$script"; then
echo "HAS SOURCE: $script"
fi
done

Repository: faneX-ID/homeassistant-addon
Length of output: 4020
Source the common module before using Add-Result.
The script calls Add-Result (lines 11, 15, 17) without sourcing the required module. Add this at the top of the script after the parameters:
```powershell
. "$RepoRoot/.scripts/verify_addons/lib/common.ps1"
```

Note: This issue exists in 18 out of 21 custom test scripts and should be fixed across all of them to ensure `Add-Result` is available at runtime.
🤖 Prompt for AI Agents
In @.scripts/verify_addons/custom/solumati/test.ps1 around lines 1 - 18, The
script calls Add-Result before the common helper is loaded; after the param(...)
block in test.ps1, source the common module using the $RepoRoot variable so
Add-Result is available (i.e., load the verify_addons/lib/common.ps1 file
referenced by $RepoRoot). Update the Solumati test (and the other 17 affected
custom test scripts) to include the same sourcing line immediately after their
parameter blocks so Add-Result resolves at runtime.
```powershell
param(
    [Parameter(Mandatory)]$Addon,
    [Parameter(Mandatory)]$Config,
    [Parameter(Mandatory)]$OutputDir,
    [Parameter(Mandatory)]$RepoRoot,
    [Parameter(Mandatory)]$ContainerName
)
Write-Host " > [Custom] Verifying Wordpress..." -ForegroundColor Gray
$logs = docker logs "$ContainerName" 2>&1
if ($logs -match "FaserF's Addon Repository") {
    Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found."
}
$procCheck = docker exec "$ContainerName" ps aux 2>&1
if ($procCheck -match "nginx") {
    Add-Result -Addon $Addon.Name -Check "ProcessCheck" -Status "PASS" -Message "Nginx process detected."
} else {
    Add-Result -Addon $Addon.Name -Check "ProcessCheck" -Status "FAIL" -Message "Nginx process NOT found."
}
```
Missing common module import.
The script uses `Add-Result` without sourcing the common module that provides this function. This is the same issue as in .scripts/verify_addons/custom/solumati/test.ps1.
🔧 Proposed fix to source common module

```diff
 param(
     [Parameter(Mandatory)]$Addon,
     [Parameter(Mandatory)]$Config,
     [Parameter(Mandatory)]$OutputDir,
     [Parameter(Mandatory)]$RepoRoot,
     [Parameter(Mandatory)]$ContainerName
 )
+
+# Source common module for Add-Result
+. "$RepoRoot/.scripts/verify_addons/lib/common.ps1"
+
 Write-Host " > [Custom] Verifying Wordpress..." -ForegroundColor Gray
```
Write-Host " > [Custom] Verifying Wordpress..." -ForegroundColor Gray📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| param( | |
| [Parameter(Mandatory)]$Addon, | |
| [Parameter(Mandatory)]$Config, | |
| [Parameter(Mandatory)]$OutputDir, | |
| [Parameter(Mandatory)]$RepoRoot, | |
| [Parameter(Mandatory)]$ContainerName | |
| ) | |
| Write-Host " > [Custom] Verifying Wordpress..." -ForegroundColor Gray | |
| $logs = docker logs "$ContainerName" 2>&1 | |
| if ($logs -match "FaserF's Addon Repository") { | |
| Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found." | |
| } | |
| $procCheck = docker exec "$ContainerName" ps aux 2>&1 | |
| if ($procCheck -match "nginx") { | |
| Add-Result -Addon $Addon.Name -Check "ProcessCheck" -Status "PASS" -Message "Nginx process detected." | |
| } else { | |
| Add-Result -Addon $Addon.Name -Check "ProcessCheck" -Status "FAIL" -Message "Nginx process NOT found." | |
| } | |
| param( | |
| [Parameter(Mandatory)]$Addon, | |
| [Parameter(Mandatory)]$Config, | |
| [Parameter(Mandatory)]$OutputDir, | |
| [Parameter(Mandatory)]$RepoRoot, | |
| [Parameter(Mandatory)]$ContainerName | |
| ) | |
| # Source common module for Add-Result | |
| . "$RepoRoot/.scripts/verify_addons/lib/common.ps1" | |
| Write-Host " > [Custom] Verifying Wordpress..." -ForegroundColor Gray | |
| $logs = docker logs "$ContainerName" 2>&1 | |
| if ($logs -match "FaserF's Addon Repository") { | |
| Add-Result -Addon $Addon.Name -Check "BannerCheck" -Status "PASS" -Message "Banner found." | |
| } | |
| $procCheck = docker exec "$ContainerName" ps aux 2>&1 | |
| if ($procCheck -match "nginx") { | |
| Add-Result -Addon $Addon.Name -Check "ProcessCheck" -Status "PASS" -Message "Nginx process detected." | |
| } else { | |
| Add-Result -Addon $Addon.Name -Check "ProcessCheck" -Status "FAIL" -Message "Nginx process NOT found." | |
| } |
🤖 Prompt for AI Agents
In @.scripts/verify_addons/custom/wordpress/test.ps1 around lines 1 - 18, The
script calls Add-Result but never imports the shared verification helper; fix by
dot-sourcing the common verification module before any Add-Result calls so the
function is available at runtime (e.g., add a dot-source of the shared "common"
module near the top of the script); ensure the import happens before Write-Host
and that Add-Result is used unchanged (references: Add-Result, $Addon,
$ContainerName, $logs, $procCheck).
```yaml
- name: 🐍 Set up Python (cached)
  uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
  with:
    python-version: '3.14'
```
Python 3.14 version may not be available in runners
Medium Severity
The `python-version` is set to `'3.14'`, a very recently released Python version that may not be available in all GitHub Actions runners or cached properly by the pinned `setup-python@v6.1.0` action. Other workflows in this same PR correctly use `"3.x"` (in orchestrator-autofix.yaml line 40) to get the latest stable version. This inconsistency could cause CI failures for the lint job.
Actionable comments posted: 14
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🤖 Fix all issues with AI agents
In @.github/workflows/orchestrator-ci.yaml:
- Line 330: Replace the floating GitHub Action reference to
aquasecurity/trivy-action@master with a pinned stable release tag (e.g., a
specific semver like v0.35.0 or the current stable release) so the workflow uses
a known immutable version; update the workflow step using "uses:
aquasecurity/trivy-action@master" to the chosen tag and optionally add a comment
noting the pinned version and why to simplify future upgrades.
In @.github/workflows/orchestrator-labeler.yaml:
- Around line 42-62: The fallback currently reads the filesystem from the PR
branch when git commands fail (baseRef/baseAddons), so either fail fast or
switch the checkout to the base branch before reading; update the fallback logic
to run a safe checkout of the target commit (e.g., git fetch origin ${baseRef}
and git checkout --detach origin/${baseRef}) and then use fs.readdirSync to
populate baseAddons, or instead abort with a clear error and non-zero exit if
base branch detection fails—modify the code around baseRef/baseAddons to
implement one of these two behaviors.
- Around line 20-22: The workflow job named "label" is missing a timeout, so add
a timeout-minutes setting (e.g., 10 or 15) to the job definition alongside
runs-on and steps to prevent indefinite hangs; update the "label" job block to
include timeout-minutes: 10 (or 15) so the job will be automatically cancelled
if it exceeds that duration.
- Around line 10-11: Update the GitHub Actions workflow trigger so it matches
the repo's default branch: replace the branches entry that currently lists
"master" with "main" in the orchestrator-labeler workflow (the branches array
under the workflow's on: configuration). Ensure both push and pull_request
triggers (if present) use "main" so the workflow will run for PRs targeting
main.
- Around line 23-26: The workflow references origin/${baseRef} in the script but
the checkout step may not create or fetch that remote/branch; update the job to
explicitly fetch the PR base before running git commands: after the
actions/checkout step run a git fetch that fetches the baseRef (or set up
origin) so origin/${baseRef} is available, or modify the script to use the pull
request base SHA (baseSha = context.payload.pull_request.base.sha) and call git
ls-tree against that SHA instead of origin/${baseRef}; ensure the checkout still
uses fetch-depth: 0 and that the workflow has permission to fetch the repo.
In @.scripts/enforce_architectures.py:
- Around line 137-148: Replace the single broad "except Exception as e:" handler
in enforce_architectures.py with two handlers: first catch only the expected,
recoverable exceptions (e.g., OSError, IOError, ValueError,
json.JSONDecodeError) and perform the existing logging/traceback.print_exc(),
write "modified=false" to github_output and exit(0); then add a separate broad
except Exception as e that logs the error (using traceback.print_exc() and the
existing print message) but immediately re-raises the exception so truly
unexpected errors propagate and fail the workflow. Ensure you keep the
github_output write logic only in the recoverable-exceptions branch and
reference the same variables (traceback.print_exc, github_output) when
implementing these handlers.
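The two-handler split described above can be sketched in Python. The function and file handling here are illustrative stand-ins, not the actual enforce_architectures.py code; the point is the ordering of a narrow recoverable handler before a broad re-raising one:

```python
import json
import traceback


def run_enforcement(config_path: str, github_output: str) -> int:
    """Sketch: recoverable errors soft-exit with modified=false; unknown errors propagate."""
    try:
        with open(config_path, encoding="utf-8") as f:
            json.load(f)  # stand-in for the real enforcement work
    except (OSError, ValueError, json.JSONDecodeError):
        # Expected, recoverable: log it, report modified=false, keep the workflow green
        traceback.print_exc()
        with open(github_output, "a", encoding="utf-8") as out:
            out.write("modified=false\n")
        return 0
    except Exception:
        # Truly unexpected: log it, then re-raise so the workflow fails loudly
        traceback.print_exc()
        raise
    return 0
```

Because Python matches `except` clauses top to bottom, the narrow tuple must come first; the trailing broad handler then only ever sees the genuinely unexpected cases.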
In @.scripts/fix_oci_labels.py:
- Around line 54-59: The current existence check uses a substring search ("if
key not in content") which yields false positives (matches in comments or
unrelated text); instead detect actual LABEL directives by parsing the
Dockerfile for LABEL instructions or using a regex that matches LABEL
lines/blocks and keys (e.g., match lines starting with optional whitespace then
"LABEL" and search for the key as a property name followed by "="), then only
skip adding when that regex/parser finds the exact key defined; update the logic
around label_block construction (the loop over labels and variables key/value)
to use this more precise check so labels are not incorrectly skipped.
- Line 46: The LABEL value "org.opencontainers.image.created" in
.scripts/fix_oci_labels.py currently emits the literal "${BUILD_DATE}" which
won't expand at build time unless ARG BUILD_DATE is declared and --build-arg
BUILD_DATE=... is passed; fix by either (A) adding ARG BUILD_DATE to generated
Dockerfiles and updating CI/build workflows to pass --build-arg
BUILD_DATE=<timestamp> so the label expands, or (B) since datetime is already
imported in the script, replace the literal with a generated ISO 8601 timestamp
from datetime.utcnow() when constructing the label in the script so the produced
Dockerfile contains an actual date string instead of "${BUILD_DATE}".
- Around line 37-46: The LABEL values in the labels dict are double-wrapped in
quotes (e.g. f'"{name}"'), producing malformed Dockerfile LABELs; remove the
embedded extra quotes so values are raw strings (e.g. use name, description,
url, os.path.relpath(...) result, and '${BUILD_DATE}' instead of f'"{...}"' or
'"${BUILD_DATE}"') so the code that emits LABEL directives can add the proper
outer quotes once.
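The fix_oci_labels.py points above can be sketched together. `has_label` and `created_label` are hypothetical helper names, and the line-walking parser is only one way to detect real LABEL keys (it checks the first key on each physical line, which is enough for the generated single-key-per-line label block):

```python
import re
from datetime import datetime, timezone


def has_label(dockerfile_text: str, key: str) -> bool:
    """True only if `key` starts a key=value pair inside a LABEL instruction.

    Walks lines and tracks backslash continuations, so a mention of the key
    in a comment or unrelated text no longer counts as "already defined".
    """
    in_label = False
    for raw in dockerfile_text.splitlines():
        line = raw.strip()
        if in_label:
            body = line
        elif line.upper().startswith("LABEL "):
            body = line[6:].lstrip()
        else:
            continue
        if re.match(re.escape(key) + r"\s*=", body):
            return True
        in_label = body.endswith("\\")
    return False


def created_label() -> str:
    """Option B for the created label: bake a real ISO 8601 timestamp into the
    generated Dockerfile instead of a literal "${BUILD_DATE}" that only expands
    when ARG BUILD_DATE is declared and passed via --build-arg."""
    created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f'org.opencontainers.image.created="{created}"'
```

With this check, a Dockerfile that merely mentions `org.opencontainers.image.title` in a comment is still treated as missing the label, while a key on a LABEL continuation line is correctly detected.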
In @.scripts/prepare_edge_branch.py:
- Around line 304-306: The code sets main_readme = "README.MD" which mismatches
the actual filename case; update the string to "README.md" so
add_edge_notice_to_readme(main_readme) can find the file on case-sensitive
filesystems, ensuring the edge notice is added.
In @.scripts/toggle_support.py:
- Around line 85-88: Add a timeout to every requests call in the script to avoid
indefinite hangs: define a single REQUEST_TIMEOUT constant (or read from env)
and pass it as timeout=REQUEST_TIMEOUT on all requests.get and requests.post
invocations (e.g., the calls matching "requests.get(url, headers=headers)" shown
and the call around line 113); update every occurrence to include the timeout
argument so all HTTP requests fail fast when the API is unresponsive.
- Around line 103-113: Remove the unused to_keep assignment and fix the delete
to use the precomputed del_url: drop the line assigning to_keep = versions[:1],
keep computing del_url = f"{url}/{vid}" inside the loop (or compute the correct
versions endpoint into a clearly named variable first) and replace
requests.delete(f"{url}/{vid}", headers=headers) with requests.delete(del_url,
headers=headers) so the delete uses the correct URL and no unused variables
remain; ensure url/versions endpoint used to build del_url is the one
constructed earlier (not the fallback reassigned value).
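The timeout change for toggle_support.py can be sketched as follows. `REQUEST_TIMEOUT`, the env-var name, and the `api_get` wrapper are assumptions for illustration; `fake_get` stands in for `requests.get` so the pattern is demonstrated without network access:

```python
import os

# One knob for every HTTP call in the script, overridable via the environment.
REQUEST_TIMEOUT = float(os.environ.get("API_TIMEOUT_SECONDS", "30"))


def api_get(get_func, url, headers):
    """Route every call site through here so no request can hang indefinitely.

    `get_func` stands in for requests.get; the same idea applies to
    requests.post and requests.delete.
    """
    return get_func(url, headers=headers, timeout=REQUEST_TIMEOUT)


# Demo with a stub in place of requests.get:
captured = {}

def fake_get(url, headers=None, timeout=None):
    captured.update(url=url, timeout=timeout)
    return {"ok": True}

assert api_get(fake_get, "https://api.example.com/items", {})["ok"]
assert captured["timeout"] == REQUEST_TIMEOUT
```

Funneling calls through one wrapper also makes it harder for a future call site to forget the `timeout=` argument than auditing each `requests.*` invocation individually.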
🟡 Minor comments (7)
.scripts/check_compliance.py-86-91 (1)
**86-91: Add defensive handling for malformed FROM lines.**

`line.split()[1]` will raise `IndexError` if the FROM line is malformed (e.g., just "FROM" with no image). Also, lines like `FROM --platform=linux/amd64 image:tag` would incorrectly extract the flag instead of the image.

🛠️ Suggested fix

```diff
 if not detected_base:
     # Fallback to Dockerfile FROM check
     for line in content.splitlines():
-        if line.startswith("FROM"):
-            image = line.split()[1]
-            validate_base_image(image, "Dockerfile")
+        if line.strip().upper().startswith("FROM "):
+            parts = line.split()
+            # Skip flags like --platform=...
+            image = next((p for p in parts[1:] if not p.startswith("--")), None)
+            if image:
+                validate_base_image(image, "Dockerfile")
```

.scripts/bump_version.py-323-324 (1)
**323-324: Incomplete implementation: dangling `pass` statements.**

These `pass` statements appear to be placeholders for incomplete logic. Lines 323-324 suggest merging existing content, and lines 340-341 suggest preserving release notes. Either implement the intended logic or remove these blocks with a comment explaining the decision.

Also applies to: 340-341
.github/workflows/orchestrator-release.yaml-61-63 (1)
**61-63: Duplicate comment line.**

Line 62 is a duplicate of line 61.

🐛 Proposed fix

```diff
 # Addons that should get auto-releases on dependency updates
-# Addons that should get auto-releases on dependency updates
 AUTO_RELEASE_ADDONS: |
```

.github/workflows/orchestrator-release.yaml-956-959 (1)
**956-959: Triplicate comment.**

The same comment appears three times in a row.

🐛 Proposed fix

```diff
 # Mount cache directory for faster builds (caching enabled, --no-cache removed)
-# Mount cache directory for faster builds (caching enabled, --no-cache removed)
-# Mount cache directory for faster builds (caching enabled, --no-cache removed)
```

.scripts/prepare_edge_branch.py-202-203 (1)
**202-203: Duplicate return statement: dead code.**

Line 203 contains a duplicate `return True` that is unreachable. This appears to be a copy-paste error.

🐛 Proposed fix

```diff
         with open(readme_path, "w", encoding="utf-8") as f:
             f.write("\n".join(lines))
         return True
-        return True
     except (OSError, ValueError) as e:
```

.scripts/prepare_edge_branch.py-240-241 (1)
**240-241: Duplicate return statement: dead code.**

Same issue as above; line 241 is unreachable.

🐛 Proposed fix

```diff
         with open(repo_json_path, "w", encoding="utf-8") as f:
             json.dump(data, f, indent=2)
             f.write("\n")
         return True
-        return True
     except (OSError, ValueError, json.JSONDecodeError) as e:
```

.github/workflows/orchestrator-release.yaml-756-757 (1)
**756-757: Duplicate `git rebase --abort` commands.**

Line 757 is a duplicate of line 756.

🐛 Proposed fix

```diff
 git merge --abort 2>/dev/null || true
 git rebase --abort 2>/dev/null || true
-git rebase --abort 2>/dev/null || true
```

Committable suggestion skipped: line range outside the PR's diff.
🧹 Nitpick comments (28)
.scripts/check_compliance.py (2)
83-84: Avoid catching broad `Exception`. Catching a blind `Exception` can mask unexpected errors. Catch specific exceptions for better error visibility.

♻️ Suggested fix
- except Exception as e:
-     errors.append(f"Failed to parse build.yaml: {e}")
+ except (yaml.YAMLError, OSError) as e:
+     errors.append(f"Failed to parse build.yaml: {e}")
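For illustration, the same principle applies to any parser: catch the parser's own error type so genuine bugs still surface. A minimal standalone sketch using stdlib `json` (the `yaml.YAMLError` case is analogous; the function name here is hypothetical):

```python
import json

def load_config(text):
    """Parse config text, collecting expected errors without masking bugs."""
    errors = []
    try:
        return json.loads(text), errors
    except (json.JSONDecodeError, OSError) as e:  # specific, not bare Exception
        errors.append(f"Failed to parse config: {e}")
        return None, errors
```

A `TypeError` from passing a non-string (a programming bug) still propagates instead of being silently recorded as a parse failure.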
142-147: Minor: Misleading comment. The comment says "Skip if not a directory" but `os.path.exists()` returns `True` for files too. The logic works correctly since the subsequent `config.yaml` check filters out files, but the comment could be clarified.

📝 Suggested clarification
  for addon in args:
-     if not os.path.exists(addon):
-         # Skip if not a directory (e.g. file passed by mistake)
+     if not os.path.isdir(addon):
+         # Skip if not a valid directory
          continue
      if not os.path.exists(os.path.join(addon, "config.yaml")):
          # Not an addon directory
          continue

.scripts/enforce_architectures.py (2)
10-41: Consider adding type hints for better IDE support. The docstring declares `file_path` as type `Path`, but the function signature lacks type hints. Adding them would improve IDE support and catch type-related issues earlier.

📝 Suggested enhancement
-def enforce_config(file_path):
+def enforce_config(file_path: Path) -> bool:
     """
     Remove unsupported architectures from config.yaml.
43-92: Consider adding type hints for consistency. Similar to `enforce_config`, this function would benefit from explicit type hints matching the docstring.

📝 Suggested enhancement
-def enforce_build(file_path):
+def enforce_build(file_path: Path) -> bool:
     """
     Remove unsupported architectures from build.yaml or build.json.

.scripts/toggle_support.py (5)
1-5: Remove unused imports. The `subprocess` and `sys` imports are not used anywhere in this script.

♻️ Proposed fix
 import argparse
 import os
 import shutil
-import subprocess
-import sys
40-40: Fix typo in comment. "Patern" should be "Pattern".
📝 Proposed fix
- # Patern: ](unsupported/addon) -> ](addon)
+ # Pattern: ](unsupported/addon) -> ](addon)
70-70: Move the `requests` import to the module level. Importing `requests` inside the function is unconventional. Move it to the top of the file with other imports for better clarity and to follow Python conventions.

♻️ Proposed fix
At the top of the file:
 import argparse
 import os
+import requests
 import shutil

Then remove the import from inside the function:
  if not token or not owner:
      print("⚠️ Missing GITHUB_TOKEN or OWNER. Skipping prune.")
      return
- import requests
-
  headers = {
89-91: Add error handling for API requests. The script silently continues on non-200 status codes without logging the actual error. This makes debugging difficult when the API call fails for reasons other than 404.
📝 Proposed enhancement
  if res.status_code != 200:
+     print(f"  ⚠️ Failed to fetch versions for {pkg}: HTTP {res.status_code}")
      continue
131-131: Consider using plain ASCII for info messages. The Unicode information source character (ℹ️) might not render correctly in all terminals or CI environments. Consider using plain text like "Info:" or "i" instead.
♻️ Proposed fix
  if os.path.exists(dst):
-     print(f"ℹ️ {addon} is already in {UNSUPPORTED_DIR}.")
+     print(f"ℹ {addon} is already in {UNSUPPORTED_DIR}.")
      return

  if os.path.exists(dst):
-     print(f"ℹ️ {addon} is already supported (in root).")
+     print(f"ℹ {addon} is already supported (in root).")
      return

Also applies to: 151-151
.scripts/fix_oci_labels.py (6)
1-1: Remove unused import. The `datetime` module is imported but never used in the script.

♻️ Proposed fix
-import datetime
 import os
 import sys
24-29: Catch specific exceptions instead of bare `Exception`. Catching a bare `Exception` can mask programming errors and unexpected failures. Specify the expected exceptions (e.g., `yaml.YAMLError`, `IOError`).

♻️ Proposed fix
  try:
      with open(config_path, "r", encoding="utf-8") as f:
          config = yaml.safe_load(f)
- except Exception as e:
+ except (yaml.YAMLError, IOError, OSError) as e:
      print(f"Failed to read config.yaml in {addon_path}: {e}")
      return False
62-62: Consider using ASCII characters instead of Unicode emoji. The ➕ emoji might cause encoding issues in some terminal environments or CI systems, though it's generally safe in modern systems.

♻️ Proposed fix
- print(f"➕ Adding missing OCI labels to {addon_path}/Dockerfile")
+ print(f"[+] Adding missing OCI labels to {addon_path}/Dockerfile")
73-79: Catch specific exceptions instead of bare `Exception`. As with the earlier comment, catching a bare `Exception` can mask unexpected errors. Specify the expected exceptions.

♻️ Proposed fix
  try:
      with open(dockerfile_path, "w", encoding="utf-8") as f:
          f.write(new_content)
      return True
- except Exception as e:
+ except (IOError, OSError) as e:
      print(f"Failed to write Dockerfile in {addon_path}: {e}")
      return False
63-72: Consider placing labels after the final FROM directive for better compatibility. The comments acknowledge that labels should ideally be placed after the FROM directive, but the implementation simply appends them to the end. For multi-stage builds, this approach may not work correctly if there are additional instructions after the final stage definition.
Consider enhancing the logic to:
- Find the last `FROM` directive in the Dockerfile
- Insert labels after that directive (or after any `ARG` directives that follow it)
- This ensures labels are properly scoped to the final stage
Example approach
# Find the position after the last FROM directive
lines = new_content.split('\n')
last_from_idx = -1
for i, line in enumerate(lines):
    if line.strip().upper().startswith('FROM '):
        last_from_idx = i

if last_from_idx >= 0:
    # Insert after the FROM line (or after subsequent ARG lines)
    insert_idx = last_from_idx + 1
    while insert_idx < len(lines) and lines[insert_idx].strip().upper().startswith('ARG '):
        insert_idx += 1
    lines.insert(insert_idx, "\n# Labels")
    # Insert at increasing offsets so the labels keep their original order
    for offset, label in enumerate(label_block, start=1):
        lines.insert(insert_idx + offset, label)
    new_content = '\n'.join(lines)
121-123: Consider consistent use of ASCII characters in output. As noted earlier, the Unicode emojis (✅, ✨) might cause encoding issues in some environments, though they're generally safe in modern systems.
♻️ Proposed fix
  if changed:
-     print("✅ OCI Labels generated.")
+     print("[✓] OCI Labels generated.")
  else:
-     print("✨ No OCI Labels needed generation.")
+     print("[*] No OCI Labels needed generation.")

.github/workflows/orchestrator-labeler.yaml (3)
80-80: Remove placeholder comment. The comment `// ... (abbreviated context) ...` appears to be a placeholder or artifact from code editing.

  if (!validAddonPattern.test(name)) {
-     // ... (abbreviated context) ...
      console.log(`Warning: Invalid addon name detected and filtered: ${name}`);
181-195: Consider removing redundant filesystem check. At line 187, the code checks `fs.existsSync(path.join(rootDir, 'config.yaml'))` after already verifying `tracking.hasConfig`. Since `hasConfig` is set when `config.yaml` appears in the changed files (lines 139-141), the filesystem check is redundant.

♻️ Simplify the condition
  const isNewAddon = !baseAddons.includes(rootDir) &&
      !BLOCKLIST.includes(rootDir.toLowerCase()) &&
      validAddonPattern.test(rootDir) &&
-     tracking.hasConfig &&
-     fs.existsSync(path.join(rootDir, 'config.yaml'));
+     tracking.hasConfig;
28-377: Consider extracting the inline script for better maintainability. The workflow contains 346 lines of inline JavaScript, which makes it harder to test, lint, and maintain. For complex logic like this, consider:
- Extracting the script to `.github/scripts/labeler.js` and invoking it with `node .github/scripts/labeler.js`
- Creating a custom composite action
- Breaking it into multiple jobs/steps if dependencies allow
This would improve testability, enable proper IDE support, and make future changes easier to review.
.scripts/prepare_edge_branch.py (2)
142-143: Port remapping may exceed valid port range. Adding 10000 to the host port can result in invalid ports (>65535) if the original port is already high (e.g., 56000 → 66000). Consider adding a bounds check.
♻️ Proposed fix
  original_port = int(host_port)
  new_port = original_port + 10000
+
+ # Ensure port stays within valid range
+ if new_port > 65535:
+     print(f"⚠️ Skipping port remap for {port_key}: {original_port} + 10000 exceeds 65535")
+     continue

  # Construct regex to find this specific line
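The bounds check can be factored into a small helper; this is an illustrative sketch, not the script's actual code:

```python
def remap_port(port: int, offset: int = 10000):
    """Return the remapped host port, or None if the result exceeds 65535."""
    new_port = port + offset
    if new_port > 65535:
        return None  # caller should skip the remap and keep the original mapping
    return new_port
```

Returning `None` (instead of raising) lets the caller log a warning and continue with the remaining port mappings.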
113-115: Broad exception handling loses diagnostic information. Catching a bare `Exception` swallows all errors, including unexpected ones, and discards the failure details. Consider catching more specific exceptions or at least logging the exception type.

Also applies to: 161-163
.scripts/bump_version.py (3)
495-496: File operations missing explicit encoding. Several `open()` calls lack an `encoding` parameter, so they fall back to the system default encoding. This can cause issues on Windows or with non-ASCII content. Use `encoding="utf-8"` for consistency with other file operations in this script.

♻️ Proposed fix for line 495
- with open(config_path, "r") as f:
+ with open(config_path, "r", encoding="utf-8") as f:
      content = f.read()

Also applies to: 571-572, 584-585
60-62: Silent exception swallowing hides git errors. The bare `except Exception: pass` pattern silently swallows all errors when fetching the git remote URL. At minimum, consider logging when falling back to the hardcoded `GITHUB_REPO`.

♻️ Suggested improvement
  except Exception:
-     pass
+     print("ℹ️ Could not detect git remote URL, using default")
  return GITHUB_REPO
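One way to make this fallback both observable and testable is to inject the process runner; a sketch under assumptions (the fallback URL and helper name are illustrative, not the script's actual values):

```python
import subprocess

GITHUB_REPO = "https://github.com/example/homeassistant-addon"  # hypothetical fallback

def detect_repo_url(run=subprocess.run):
    """Return the git remote URL, logging (not hiding) the fallback path."""
    try:
        result = run(
            ["git", "config", "--get", "remote.origin.url"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()
    except (OSError, subprocess.CalledProcessError) as e:
        # Log which failure triggered the fallback instead of swallowing it
        print(f"Info: could not detect git remote URL ({type(e).__name__}), using default")
        return GITHUB_REPO
```

Because `run` is injectable, the error path can be exercised in tests without needing a broken git checkout.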
329-329: Magic number for item limit. The `[:15]` slice limits changelog entries without explanation. Consider defining this as a named constant at the top of the file (e.g., `MAX_CHANGELOG_ENTRIES_PER_CATEGORY = 15`) for clarity and easier maintenance.

.scripts/auto_update.py (2)
1-1: Unused import. `argparse` is imported but never used in this script.

♻️ Proposed fix
-import argparse
 import os
 import subprocess
71-79: Consider logging the new version on success. After a successful version bump, it would be helpful to log which version was applied. The `bump_version` function returns the new version string, but it's not captured or logged here.

♻️ Suggested improvement
  try:
-     bump_version(
+     new_version = bump_version(
          addon, "patch", "Maintenance: Automated dependency/base update"
      )
+     print(f"✅ Bumped {addon} to {new_version}")
      # We need to output this for the workflow to know what to release
129-137: Regex pattern only matches double-quoted `scriptVersion`. The pattern `(scriptVersion:\s*")[^"]+(")` requires double quotes around the version. If the YAML uses single quotes or no quotes (e.g., `scriptVersion: 1.0.0`), the replacement will fail silently and the script will exit with an error on line 137.

Consider extending the pattern to handle all common YAML formats:
♻️ Proposed fix supporting multiple formats
- pattern = r'(scriptVersion:\s*")[^"]+(")'
- new_content = re.sub(pattern, rf"\g<1>{bumped_version}\g<2>", content)
+ # Try double quotes first, then single quotes, then unquoted.
+ # Note: \g<1> is used instead of \1 because the version starts with a digit.
+ patterns = [
+     (r'(scriptVersion:\s*")[^"]+(")', rf"\g<1>{bumped_version}\g<2>"),
+     (r"(scriptVersion:\s*')[^']+(')", rf"\g<1>{bumped_version}\g<2>"),
+     (r"(scriptVersion:\s*)[\d.]+", rf"\g<1>{bumped_version}"),
+ ]
+
+ new_content = content
+ for pattern, replacement in patterns:
+     new_content = re.sub(pattern, replacement, content)
+     if new_content != content:
+         break
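As a quick sanity check, the multi-pattern approach can be exercised against all three quoting styles in isolation; this sketch mirrors the idea above but stands alone (the function name is hypothetical):

```python
import re

def set_script_version(content: str, new_version: str) -> str:
    """Update scriptVersion whether it is double-quoted, single-quoted, or bare."""
    # \g<1> rather than \1: the version begins with a digit, which would
    # otherwise be parsed as part of the group number (e.g. \11).
    patterns = [
        (r'(scriptVersion:\s*")[^"]+(")', rf"\g<1>{new_version}\g<2>"),
        (r"(scriptVersion:\s*')[^']+(')", rf"\g<1>{new_version}\g<2>"),
        (r"(scriptVersion:\s*)[\d.]+", rf"\g<1>{new_version}"),
    ]
    for pattern, replacement in patterns:
        updated = re.sub(pattern, replacement, content)
        if updated != content:
            return updated
    return content  # no match; caller can treat this as a failure
```

The unquoted pattern cannot misfire on quoted values, because `[\d.]+` requires a digit or dot immediately after the whitespace, where a quote character would sit.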
250-252: Version restoration via `sed` may break YAML formatting. The `sed` command assumes versions are unquoted or will produce quoted output regardless of the original format. If the original `config.yaml` uses `version: 1.0.0` (unquoted), this will produce `version: "1.0.0-dev-abc1234"`, which changes the format.

Consider using a YAML-aware update or matching the original format:
♻️ Proposed alternative using Python
  if [ -n "$SAVED_VERSION" ] && [[ "$SAVED_VERSION" == *"-dev"* ]]; then
      echo " 📌 Restoring $addon to $SAVED_VERSION"
-     # Update version in config.yaml using sed
-     sed -i "s/^version:.*/version: \"$SAVED_VERSION\"/" "$addon/config.yaml"
+     # Update version preserving original format
+     python3 -c "
+ import re
+ with open('$addon/config.yaml', 'r') as f:
+     content = f.read()
+ # Preserve original quoting style
+ content = re.sub(r'^(version:\s*)([\"']?)[\d.]+-?[\w-]*([\"']?)\$', r'\g<1>\g<2>$SAVED_VERSION\g<3>', content, flags=re.MULTILINE)
+ with open('$addon/config.yaml', 'w') as f:
+     f.write(content)
+ "
  fi
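The quote-preserving idea can be tested in isolation. A minimal sketch (the regex is slightly adapted from the suggestion above: `[ \t]*` rather than `\s*` before `$`, so the greedy whitespace class cannot consume the trailing newline; the function name is hypothetical):

```python
import re

def restore_version(content: str, saved: str) -> str:
    """Rewrite the version line while keeping the original quoting style."""
    # Groups 2 and 3 capture whichever quote character (or nothing) was used,
    # so the replacement re-emits the same quoting style around the new version.
    pattern = r'^(version:\s*)(["\']?)[0-9][\w.-]*(["\']?)[ \t]*$'
    return re.sub(pattern, rf"\g<1>\g<2>{saved}\g<3>", content, flags=re.MULTILINE)
```

Both `version: 1.0.0` and `version: "1.0.0"` keep their formatting after the dev version is restored.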
1191-1197: Add error handling around the external script dependency. The workflow requires `.scripts/generate_ci_success_message.js`, which exists in the repository, but the `require()` call on line 1196 has no error handling. If the script develops a syntax error or other issues, the success_report job will fail without a meaningful error message. Consider wrapping the require and function call in a try-catch block to gracefully handle failures.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- .github/workflows/ci.yaml
- .github/workflows/deploy.yaml
- .github/workflows/orchestrator-autofix.yaml
- .github/workflows/orchestrator-ci.yaml
- .github/workflows/orchestrator-edge-sync.yaml
- .github/workflows/orchestrator-labeler.yaml
- .github/workflows/orchestrator-release.yaml
- .github/workflows/orchestrator-supervisor-test.yaml
- .scripts/auto_update.py
- .scripts/bump_script_version.py
- .scripts/bump_version.py
- .scripts/check_compliance.py
- .scripts/enforce_architectures.py
- .scripts/fix_configs.py
- .scripts/fix_oci_labels.py
- .scripts/prepare_edge_branch.py
- .scripts/toggle_support.py
💤 Files with no reviewable changes (2)
- .github/workflows/deploy.yaml
- .github/workflows/ci.yaml
🚧 Files skipped from review as they are similar to previous changes (3)
- .github/workflows/orchestrator-autofix.yaml
- .github/workflows/orchestrator-supervisor-test.yaml
- .scripts/fix_configs.py
🧰 Additional context used
🧬 Code graph analysis (2)
.scripts/enforce_architectures.py (2)
.scripts/check_compliance.py (1)
main (128-154)

.scripts/fix_configs.py (1)
main (118-139)
.scripts/auto_update.py (1)
.scripts/bump_version.py (1)
bump_version (464-810)
🪛 Ruff (0.14.10)
.scripts/toggle_support.py
85-85: Probable use of requests call without timeout
(S113)
88-88: Probable use of requests call without timeout
(S113)
103-103: Local variable to_keep is assigned to but never used
Remove assignment to unused variable to_keep
(F841)
110-110: Local variable del_url is assigned to but never used
Remove assignment to unused variable del_url
(F841)
113-113: Probable use of requests call without timeout
(S113)
131-131: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
151-151: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
.scripts/enforce_architectures.py
137-137: Do not catch blind exception: Exception
(BLE001)
.scripts/bump_script_version.py
21-21: subprocess call: check for execution of untrusted input
(S603)
.scripts/bump_version.py
48-48: Starting a process with a partial executable path
(S607)
60-61: try-except-pass detected, consider logging the exception
(S110)
60-60: Do not catch blind exception: Exception
(BLE001)
84-84: Starting a process with a partial executable path
(S607)
117-117: subprocess call: check for execution of untrusted input
(S603)
138-138: Consider moving this statement to an else block
(TRY300)
139-139: Do not catch blind exception: Exception
(BLE001)
371-371: Avoid specifying long messages outside the exception class
(TRY003)
403-403: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
420-420: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
445-445: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
540-540: subprocess call: check for execution of untrusted input
(S603)
541-541: Starting a process with a partial executable path
(S607)
548-548: Do not catch blind exception: Exception
(BLE001)
574-574: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
.scripts/fix_oci_labels.py
27-27: Do not catch blind exception: Exception
(BLE001)
62-62: String contains ambiguous ➕ (HEAVY PLUS SIGN). Did you mean + (PLUS SIGN)?
(RUF001)
76-76: Consider moving this statement to an else block
(TRY300)
77-77: Do not catch blind exception: Exception
(BLE001)
.scripts/prepare_edge_branch.py
51-51: Consider moving this statement to an else block
(TRY300)
111-111: Consider moving this statement to an else block
(TRY300)
113-113: Do not catch blind exception: Exception
(BLE001)
159-159: Consider moving this statement to an else block
(TRY300)
161-161: Do not catch blind exception: Exception
(BLE001)
203-203: Consider moving this statement to an else block
(TRY300)
241-241: Consider moving this statement to an else block
(TRY300)
.scripts/check_compliance.py
83-83: Do not catch blind exception: Exception
(BLE001)
.scripts/auto_update.py
18-18: Starting a process with a partial executable path
(S607)
78-78: Do not catch blind exception: Exception
(BLE001)
82-82: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
85-85: String contains ambiguous ℹ (INFORMATION SOURCE). Did you mean i (LATIN SMALL LETTER I)?
(RUF001)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cursor Bugbot
🔇 Additional comments (15)
.scripts/enforce_architectures.py (2)
1-8: LGTM! Imports and constant definitions are clear and appropriate for the task.
100-136: LGTM! Well-structured automation script. The directory scanning logic, modification tracking, and GitHub Actions integration are all well-implemented. The script correctly:
- Scans only top-level add-on directories
- Skips hidden directories
- Tracks modifications across multiple file types per add-on
- Provides appropriate output for workflow consumption
Also applies to: 151-152
.scripts/toggle_support.py (1)
164-172: LGTM! The command-line interface is well-structured with clear argument definitions and proper usage of argparse.
.github/workflows/orchestrator-labeler.yaml (5)
88-102: LGTM: Proper pagination implementation. The pagination logic correctly handles PRs with more than 100 changed files by iterating through all pages.
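The underlying page-until-short-page pattern is language-agnostic; sketched here in Python with an injected fetcher (the GitHub API caps `per_page` at 100, which is why paging is needed at all):

```python
def fetch_all(list_page, per_page=100):
    """Collect every item from a paged API by requesting pages until a short page."""
    items, page = [], 1
    while True:
        batch = list_page(page=page, per_page=per_page)
        items.extend(batch)
        if len(batch) < per_page:  # a short (or empty) page means we are done
            return items
        page += 1
```

Injecting `list_page` keeps the loop testable without real API calls; in the workflow the equivalent fetcher would be the changed-files endpoint.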
150-176: Good version bump detection with new addon filtering. The logic correctly distinguishes between version bumps in existing add-ons (which get the label) and initial versions in new add-ons (which don't), preventing false positives.
332-339: Verify automatic 'automerge' label for dependency bots is intended. The workflow automatically adds the 'automerge' label to all PRs from renovate or dependabot. Ensure your CI checks and automerge workflow (if present) are configured to safely handle automatic merges, including verifying that tests pass and no breaking changes are introduced.
Do you want to confirm this is the desired behavior, or should the automerge label require additional validation?
313-329: LGTM: Size labeling logic is correct. The thresholds and logic appropriately categorize PRs by size, and the if-else structure ensures exactly one size label is applied.
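The one-label-per-PR property comes from checking ascending thresholds and returning on the first match. A sketch with hypothetical thresholds (the workflow's actual cut-offs may differ):

```python
# Hypothetical thresholds: (upper bound, label), checked in ascending order.
SIZE_BUCKETS = [(10, "size/XS"), (100, "size/S"), (500, "size/M"), (1000, "size/L")]

def size_label(changed_lines: int) -> str:
    """Map a PR's total changed lines to exactly one size label."""
    for limit, label in SIZE_BUCKETS:
        if changed_lines < limit:
            return label
    return "size/XL"  # anything at or above the largest bound
```

Because each branch returns immediately, no PR can ever receive two size labels, which mirrors the if-else chain in the workflow.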
366-376: Verify error handling works as expected. The code uses `core.setFailed()` at line 375. While the `core` object should be available in the github-script context, it hasn't been referenced elsewhere in the script, so verify this works correctly in practice.

Consider testing this workflow with an intentional API failure to ensure errors are properly surfaced.
.scripts/bump_script_version.py (1)
10-25: Good error handling pattern. The `run_command` function properly handles subprocess errors with specific exception handling and returns `None` on failure, enabling graceful degradation. This is a clean pattern.

.github/workflows/orchestrator-edge-sync.yaml (1)
1-263: Well-structured edge sync workflow. The workflow properly handles:
- Skipping workflow-only changes
- Preserving dev versions for unchanged addons
- Force-pushing edge branch (appropriate for this use case)
- Detecting changed addons via git diff
The use of `[skip ci]` in the commit message prevents infinite loops.

.github/workflows/orchestrator-ci.yaml (3)
411-412: Report job depends on `supervisor-test` but may run before it completes. The `report` job lists `supervisor-test` in `needs` but uses `always()`. If `supervisor-test` hasn't started yet (due to its own conditions), the report job could potentially run prematurely. The condition logic looks correct, but verify the timing is as expected.
1026-1029: `core.summary` may not be available in the github-script context. The `core` object with `summary` is typically available in custom actions, not in `actions/github-script`. This call may fail. Consider using the `$GITHUB_STEP_SUMMARY` file directly or removing this block.

♻️ Proposed fix using fs module
  // Post General Report
  console.log("Posting General Failure Report...");
  try {
-     await core.summary.addRaw(reportBody).write();
+     const fs = require('fs');
+     const summaryFile = process.env.GITHUB_STEP_SUMMARY;
+     if (summaryFile) {
+         fs.appendFileSync(summaryFile, reportBody);
+     }
  } catch (err) {
      console.log("Could not write to Job Summary:", err);
  }

Likely an incorrect or invalid review comment.
284-285: No action needed. Python 3.14 was released as a stable version on October 7, 2025, and is safe to use in CI.

.scripts/auto_update.py (1)
45-50: The `unsupported` directory exclusion is not problematic; this directory doesn't exist in the repository. Neither `unsupported` nor `.unsupported` exists in the codebase, so the exclusion list at line 45 is harmless regardless of which spelling it uses. If the intent is to exclude an unsupported addons directory in the future, clarify the expected directory name and ensure it matches when added.

Likely an incorrect or invalid review comment.
.github/workflows/orchestrator-release.yaml (1)
720-763: Remove the "force-push pattern" claim; the workflow does not use force-push. The workflow uses `git push origin master` (a plain push, no `--force` flag) at line 818. The actual pattern is stash → hard reset → rebase → regular push. While a race condition could theoretically occur between the fetch and push steps when multiple jobs run in parallel, the retry logic (lines 773–820) mitigates this by running `git pull --rebase origin master` before committing and pushing. Additionally, each job modifies only its own addon directory (separate config.yaml files), so conflicts between concurrent jobs are unlikely. The workflow is defensively designed, not recklessly parallel.

Likely an incorrect or invalid review comment.
# Scanning the directory for vulnerabilities (fs mode)
# Note: Scanning the built image is better, but this scans deps/conf before build.
- name: 🛡️ Trivy Security Scan
  uses: aquasecurity/trivy-action@master
🧩 Analysis chain
🏁 Script executed:
# First, check the actual file at the mentioned line
git ls-files | grep orchestrator-ci.yaml

Repository: faneX-ID/homeassistant-addon
Length of output: 110
🌐 Web query:
aquasecurity trivy-action latest version releases github 2025
💡 Result:
Latest release: aquasecurity/trivy-action v0.33.1 — published Sep 3, 2025. [1][2]
Sources:
[1] GitHub Releases (aquasecurity/trivy-action) — v0.33.1.
[2] aquasecurity/trivy-action README showing usage with @0.33.1.
🏁 Script executed:
# Check the actual content around line 330 in the file
sed -n '325,335p' .github/workflows/orchestrator-ci.yaml

Repository: faneX-ID/homeassistant-addon
Length of output: 491
Pin trivy-action to a specific stable version instead of @master.
Using @master means the workflow uses whatever is currently on the master branch, which can break without warning. Pin to a stable release for reproducibility and reliability.
🔒 Proposed fix
- name: 🛡️ Trivy Security Scan
- uses: aquasecurity/trivy-action@master
+ uses: aquasecurity/trivy-action@0.33.1
   with:
     scan-type: "fs"

🤖 Prompt for AI Agents
In @.github/workflows/orchestrator-ci.yaml at line 330, Replace the floating
GitHub Action reference to aquasecurity/trivy-action@master with a pinned stable
release tag (e.g., a specific semver like v0.35.0 or the current stable release)
so the workflow uses a known immutable version; update the workflow step using
"uses: aquasecurity/trivy-action@master" to the chosen tag and optionally add a
comment noting the pinned version and why to simplify future upgrades.
branches:
  - master
Branch mismatch: workflow triggers on 'master' but PR targets 'main'.
The PR objectives indicate the target branch is "main", but this workflow is configured to run on "master". This workflow will never trigger for this PR or future PRs to the main branch.
🔧 Fix the branch name
branches:
- - master
+ - main

🤖 Prompt for AI Agents
In @.github/workflows/orchestrator-labeler.yaml around lines 10 - 11, Update the
GitHub Actions workflow trigger so it matches the repo's default branch: replace
the branches entry that currently lists "master" with "main" in the
orchestrator-labeler workflow (the branches array under the workflow's on:
configuration). Ensure both push and pull_request triggers (if present) use
"main" so the workflow will run for PRs targeting main.
label:
  runs-on: ubuntu-latest
  steps:
🛠️ Refactor suggestion | 🟠 Major
Add job timeout for safety.
The job doesn't specify a `timeout-minutes` value. Consider adding a reasonable timeout (e.g., 10-15 minutes) to prevent the job from hanging indefinitely if git commands or API calls fail.
⏱️ Add timeout
label:
runs-on: ubuntu-latest
+ timeout-minutes: 10
   steps:

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
label:
  runs-on: ubuntu-latest
  timeout-minutes: 10
  steps:
🤖 Prompt for AI Agents
In @.github/workflows/orchestrator-labeler.yaml around lines 20 - 22, The
workflow job named "label" is missing a timeout, so add a timeout-minutes
setting (e.g., 10 or 15) to the job definition alongside runs-on and steps to
prevent indefinite hangs; update the "label" job block to include
timeout-minutes: 10 (or 15) so the job will be automatically cancelled if it
exceeds that duration.
- name: ⤵️ Check out code
  uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
  with:
    fetch-depth: 0
Ensure base branch is available for git commands.
The script at lines 43 and 48 uses `origin/${baseRef}`, but the checkout action may not configure the "origin" remote. Additionally, the base branch may not be automatically fetched.
🔧 Add explicit base branch fetch
- name: ⤵️ Check out code
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
+ - name: 🔄 Fetch base branch
+ run: |
+        git fetch origin ${{ github.event.pull_request.base.ref }}

Also update lines 43 and 48 to use a more reliable reference:
-const lsTree = execSync(`git ls-tree --name-only origin/${baseRef}`, { encoding: 'utf8' });
+const lsTree = execSync(`git ls-tree --name-only ${{ github.event.pull_request.base.sha }}`, { encoding: 'utf8' });

Or alternatively, reference the base commit SHA directly in the script:
const baseSha = context.payload.pull_request.base.sha;
const lsTree = execSync(`git ls-tree --name-only ${baseSha}`, { encoding: 'utf8' });

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @.github/workflows/orchestrator-labeler.yaml around lines 23 - 26, The
workflow references origin/${baseRef} in the script but the checkout step may
not create or fetch that remote/branch; update the job to explicitly fetch the
PR base before running git commands: after the actions/checkout step run a git
fetch that fetches the baseRef (or set up origin) so origin/${baseRef} is
available, or modify the script to use the pull request base SHA (baseSha =
context.payload.pull_request.base.sha) and call git ls-tree against that SHA
instead of origin/${baseRef}; ensure the checkout still uses fetch-depth: 0 and
that the workflow has permission to fetch the repo.
try {
  const lsTree = execSync(`git ls-tree --name-only origin/${baseRef}`, { encoding: 'utf8' });
  baseAddons = lsTree.split('\n').filter(name => {
    if (!name || name.startsWith('.')) return false;
    // Check if this dir has config.yaml in base branch
    try {
      execSync(`git cat-file -e origin/${baseRef}:${name}/config.yaml`, { stdio: 'pipe' });
      return true;
    } catch {
      return false;
    }
  });
} catch (e) {
  console.log('Failed to get base branch addons, falling back to current checkout');
  const rootEntries = fs.readdirSync('.', { withFileTypes: true });
  baseAddons = rootEntries.filter(dirent => {
    if (!dirent.isDirectory()) return false;
    if (dirent.name.startsWith('.')) return false;
    return fs.existsSync(path.join(dirent.name, 'config.yaml'));
  }).map(d => d.name);
}
Fallback reads wrong branch state.
If the git commands fail (lines 43-53), the fallback (lines 56-61) reads from the current checkout, which is the PR branch, not the base branch. This defeats the stated purpose of "calculating against the target state, not the PR" (line 36).
Consider failing explicitly if base branch detection fails, or ensure the checkout is positioned at the base branch for the fallback:
💡 Suggested approach
} catch (e) {
- console.log('Failed to get base branch addons, falling back to current checkout');
- const rootEntries = fs.readdirSync('.', { withFileTypes: true });
- baseAddons = rootEntries.filter(dirent => {
- if (!dirent.isDirectory()) return false;
- if (dirent.name.startsWith('.')) return false;
- return fs.existsSync(path.join(dirent.name, 'config.yaml'));
- }).map(d => d.name);
+ console.error('Failed to get base branch addons: ' + e.message);
+ core.setFailed('Unable to determine base branch add-ons');
+ throw e;
 }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
try {
  const lsTree = execSync(`git ls-tree --name-only origin/${baseRef}`, { encoding: 'utf8' });
  baseAddons = lsTree.split('\n').filter(name => {
    if (!name || name.startsWith('.')) return false;
    // Check if this dir has config.yaml in base branch
    try {
      execSync(`git cat-file -e origin/${baseRef}:${name}/config.yaml`, { stdio: 'pipe' });
      return true;
    } catch {
      return false;
    }
  });
} catch (e) {
  console.error('Failed to get base branch addons: ' + e.message);
  core.setFailed('Unable to determine base branch add-ons');
  throw e;
}
🤖 Prompt for AI Agents
In @.github/workflows/orchestrator-labeler.yaml around lines 42 - 62, The
fallback currently reads the filesystem from the PR branch when git commands
fail (baseRef/baseAddons), so either fail fast or switch the checkout to the
base branch before reading; update the fallback logic to run a safe checkout of
the target commit (e.g., git fetch origin ${baseRef} and git checkout --detach
origin/${baseRef}) and then use fs.readdirSync to populate baseAddons, or
instead abort with a clear error and non-zero exit if base branch detection
fails—modify the code around baseRef/baseAddons to implement one of these two
behaviors.
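The base-branch filtering described above can be isolated into a pure function that is easy to unit-test. This is a sketch in Python rather than the workflow's JavaScript; `addon_dirs_from_ls_tree` and the `has_config` callback are illustrative names, with `has_config` standing in for the `git cat-file -e` probe:

```python
def addon_dirs_from_ls_tree(ls_tree_output: str, has_config) -> list:
    # Filter `git ls-tree --name-only origin/<base>` output down to
    # top-level entries that are not hidden and that contain a
    # config.yaml on the base branch (probed via `has_config`).
    names = [n for n in ls_tree_output.split("\n") if n and not n.startswith(".")]
    return [n for n in names if has_config(n)]

# Example: README.md has no config.yaml, .github is hidden.
output = ".github\nfanexid\nother-addon\nREADME.md\n"
print(addon_dirs_from_ls_tree(output, lambda n: n != "README.md"))
# → ['fanexid', 'other-addon']
```

Keeping the parsing separate from the `execSync` calls means the fail-fast behavior can live in one small `try/catch` around the git invocations.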
```python
"org.opencontainers.image.url": f'"{url}"',
"org.opencontainers.image.source": f'"{url}"',
"org.opencontainers.image.documentation": f'"{url}/blob/master/{os.path.relpath(addon_path, os.getcwd()).replace(os.sep, "/")}/README.md"',
"org.opencontainers.image.created": '"${BUILD_DATE}"',
```
🧩 Analysis chain

🏁 Script executed:

```shell
# Description: Check if BUILD_DATE is defined in Dockerfiles or build workflows
echo "=== Checking for BUILD_DATE ARG in Dockerfiles ==="
fd -e Dockerfile -x grep -l "ARG BUILD_DATE"
echo ""
echo "=== Checking for BUILD_DATE build-arg in workflows ==="
rg -l "build-arg.*BUILD_DATE" .github/workflows/ 2>/dev/null || echo "(No matches found)"
echo ""
echo "=== Examining the fix_oci_labels.py file ==="
cat -n .scripts/fix_oci_labels.py
echo ""
echo "=== Checking for any BUILD_DATE references in workflows ==="
rg "BUILD_DATE" .github/workflows/ -A 2 -B 2 2>/dev/null || echo "(No matches found)"
```

Repository: faneX-ID/homeassistant-addon
`${BUILD_DATE}` variable in LABEL will not expand—define `ARG BUILD_DATE` or pass `--build-arg` during builds.

The script generates `"${BUILD_DATE}"` as a literal string in Dockerfiles, but verification shows:

- No Dockerfiles declare `ARG BUILD_DATE`
- No build workflows pass `--build-arg BUILD_DATE=...`

Without these, the label `org.opencontainers.image.created` will contain the literal string `${BUILD_DATE}` instead of an actual date. Either add `ARG BUILD_DATE` to Dockerfiles and ensure build workflows provide the value, or use Python's `datetime` module (already imported) to set an actual date at generation time.
🤖 Prompt for AI Agents
In @.scripts/fix_oci_labels.py at line 46, The LABEL value
"org.opencontainers.image.created" in .scripts/fix_oci_labels.py currently emits
the literal "${BUILD_DATE}" which won't expand at build time unless ARG
BUILD_DATE is declared and --build-arg BUILD_DATE=... is passed; fix by either
(A) adding ARG BUILD_DATE to generated Dockerfiles and updating CI/build
workflows to pass --build-arg BUILD_DATE=<timestamp> so the label expands, or
(B) since datetime is already imported in the script, replace the literal with a
generated ISO 8601 timestamp from datetime.utcnow() when constructing the label
in the script so the produced Dockerfile contains an actual date string instead
of "${BUILD_DATE}".
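Option (B) from the prompt can be sketched as follows. The helper function name is our own; only the label key and the general structure of the `labels` dictionary come from the script:

```python
from datetime import datetime, timezone

def created_label_value() -> str:
    # Generate an ISO 8601 UTC timestamp at Dockerfile-generation time,
    # so the label never contains the unexpanded ${BUILD_DATE} placeholder.
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

labels = {
    "org.opencontainers.image.created": f'"{created_label_value()}"',
}
print(labels["org.opencontainers.image.created"])  # e.g. "2024-01-02T03:04:05Z"
```

The trade-off versus option (A): the timestamp reflects when the Dockerfile was generated, not when the image was built, which is usually acceptable for an auto-fix script that commits the Dockerfile anyway.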
```python
# Basic check if labels dictate adding
# We construct the LABEL block
label_block = []
for key, value in labels.items():
    if key not in content:
        label_block.append(f"LABEL {key}={value}")
```
Improve label existence check to avoid false positives.

The substring check `if key not in content` will match the label key anywhere in the Dockerfile, including in comments, other LABEL values, or unrelated strings. This could incorrectly skip adding a label.

Consider a more robust check:
♻️ Proposed fix

```diff
     # Basic check if labels dictate adding
     # We construct the LABEL block
     label_block = []
     for key, value in labels.items():
-        if key not in content:
+        # Check for LABEL directive with this key (with or without quotes/spaces)
+        if not any(line.strip().startswith(f"LABEL {key}") for line in content.split('\n')):
             label_block.append(f"LABEL {key}={value}")
```

Alternatively, use a regex pattern to match LABEL directives more precisely.
📝 Committable suggestion
Current code:

```python
# Basic check if labels dictate adding
# We construct the LABEL block
label_block = []
for key, value in labels.items():
    if key not in content:
        label_block.append(f"LABEL {key}={value}")
```

Suggested code:

```python
# Basic check if labels dictate adding
# We construct the LABEL block
label_block = []
for key, value in labels.items():
    # Check for LABEL directive with this key (with or without quotes/spaces)
    if not any(line.strip().startswith(f"LABEL {key}") for line in content.split('\n')):
        label_block.append(f"LABEL {key}={value}")
```
🤖 Prompt for AI Agents
In @.scripts/fix_oci_labels.py around lines 54 - 59, The current existence check
uses a substring search ("if key not in content") which yields false positives
(matches in comments or unrelated text); instead detect actual LABEL directives
by parsing the Dockerfile for LABEL instructions or using a regex that matches
LABEL lines/blocks and keys (e.g., match lines starting with optional whitespace
then "LABEL" and search for the key as a property name followed by "="), then
only skip adding when that regex/parser finds the exact key defined; update the
logic around label_block construction (the loop over labels and variables
key/value) to use this more precise check so labels are not incorrectly skipped.
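The regex alternative mentioned in the suggestion might look like the sketch below. `content` and `labels` mirror the script's variables; the `has_label` helper and the exact pattern are assumptions, and the pattern is simplified to one key per LABEL line, which matches what the script emits:

```python
import re

def has_label(content: str, key: str) -> bool:
    # Match a LABEL instruction that defines this exact key, e.g.
    #   LABEL org.opencontainers.image.url="..."
    # Key may be quoted; leading whitespace and case variations allowed.
    # Simplified: assumes one key per LABEL line.
    pattern = rf'(?im)^\s*LABEL\s+"?{re.escape(key)}"?='
    return re.search(pattern, content) is not None

content = (
    '# org.opencontainers.image.url mentioned in a comment\n'
    'LABEL org.opencontainers.image.source="https://example.com"\n'
)
print(has_label(content, "org.opencontainers.image.url"))     # False: only in a comment
print(has_label(content, "org.opencontainers.image.source"))  # True: real LABEL line
```

Unlike the plain substring check, a comment or a label *value* containing the key no longer counts as the key being defined.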
```python
main_readme = "README.MD"
if add_edge_notice_to_readme(main_readme):
    print("📝 Added edge notice to main README.MD")
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check actual README filename in repository root (case-insensitive search)
fd -t f -d 1 'readme' --ignore-case .
```

🏁 Script executed:

```shell
#!/bin/bash
# Find the add_edge_notice_to_readme function implementation
rg -A 20 'def add_edge_notice_to_readme' .scripts/prepare_edge_branch.py
```
Fix README filename case mismatch in main README reference.

Line 305 references `README.MD` (uppercase extension), but the actual file is `README.md`. On case-sensitive filesystems, `os.path.exists()` will fail to find the file, preventing the edge notice from being added. Change `"README.MD"` to `"README.md"`.
🤖 Prompt for AI Agents
In @.scripts/prepare_edge_branch.py around lines 304 - 306, The code sets
main_readme = "README.MD" which mismatches the actual filename case; update the
string to "README.md" so add_edge_notice_to_readme(main_readme) can find the
file on case-sensitive filesystems, ensuring the edge notice is added.
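Beyond hard-coding the correct case, a defensive variant could resolve the README name case-insensitively, so the script keeps working even if the file is later renamed. The `find_readme` helper below is hypothetical, not part of `prepare_edge_branch.py`:

```python
import os
import tempfile
from typing import Optional

def find_readme(root: str = ".") -> Optional[str]:
    # Return the actual on-disk name of the root README, whatever its case,
    # or None if no README.md variant exists.
    for entry in os.listdir(root):
        if entry.lower() == "readme.md" and os.path.isfile(os.path.join(root, entry)):
            return entry
    return None

# Demonstrate against a temporary directory:
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "README.md"), "w").close()
    print(find_readme(d))  # README.md
```

The simplest fix remains the one the review asks for; this lookup is only worth the extra lines if the repository's README casing is expected to drift.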
```python
res = requests.get(url, headers=headers)
if res.status_code == 404:
    url = f"https://api.github.com/users/{owner}/packages/container/{pkg}/versions"
    res = requests.get(url, headers=headers)
```
Add timeout to all requests calls to prevent indefinite hangs.
All HTTP requests should specify a timeout to prevent the script from hanging indefinitely if the API becomes unresponsive. This is a security and reliability concern.
🔒 Proposed fix

```diff
     # Get versions
     url = f"https://api.github.com/orgs/{owner}/packages/container/{pkg}/versions"
     # Fallback to user
-    res = requests.get(url, headers=headers)
+    res = requests.get(url, headers=headers, timeout=30)
     if res.status_code == 404:
         url = f"https://api.github.com/users/{owner}/packages/container/{pkg}/versions"
-        res = requests.get(url, headers=headers)
+        res = requests.get(url, headers=headers, timeout=30)
     if res.status_code != 200:
         continue
```

```diff
         print(f" - Deleting version {vid} of {pkg}")
         # Delete
         del_url = f"{url}/{vid}"  # this url base is .../versions, so + /id works? No, url var was set to base.
         # actually we constructed url above.
         # verify url construction
-        requests.delete(f"{url}/{vid}", headers=headers)
+        requests.delete(f"{url}/{vid}", headers=headers, timeout=30)
```

Also applies to: 113-113
🧰 Tools

🪛 Ruff (0.14.10)

85-85: Probable use of `requests` call without timeout (S113)

88-88: Probable use of `requests` call without timeout (S113)
🤖 Prompt for AI Agents
In @.scripts/toggle_support.py around lines 85 - 88, Add a timeout to every
requests call in the script to avoid indefinite hangs: define a single
REQUEST_TIMEOUT constant (or read from env) and pass it as
timeout=REQUEST_TIMEOUT on all requests.get and requests.post invocations (e.g.,
the calls matching "requests.get(url, headers=headers)" shown and the call
around line 113); update every occurrence to include the timeout argument so all
HTTP requests fail fast when the API is unresponsive.
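The convention the prompt describes, one shared timeout constant passed to every call, can also be centralized in a thin wrapper so no call site can forget it. The `REQUEST_TIMEOUT` constant and `http_get` helper below are illustrative, not part of `toggle_support.py`; the stub stands in for `requests.get` so the sketch stays dependency-free:

```python
import os

# Single knob for all HTTP calls; overridable via environment.
REQUEST_TIMEOUT = int(os.environ.get("REQUEST_TIMEOUT", "30"))

def http_get(get_func, url, **kwargs):
    # Delegate to e.g. requests.get, always supplying a timeout
    # unless the caller explicitly overrides it.
    kwargs.setdefault("timeout", REQUEST_TIMEOUT)
    return get_func(url, **kwargs)

# Demonstrate with a stub in place of requests.get:
captured = {}
def fake_get(url, **kwargs):
    captured.update(kwargs, url=url)
    return "response"

http_get(fake_get, "https://api.github.com", headers={})
print(captured["timeout"])  # 30 when REQUEST_TIMEOUT env is unset
```

Either approach works; the wrapper just makes the lint rule (Ruff S113) impossible to trip again in this script.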
```python
to_keep = versions[:1]
to_delete = versions[1:]

for v in to_delete:
    vid = v["id"]
    print(f" - Deleting version {vid} of {pkg}")
    # Delete
    del_url = f"{url}/{vid}"  # this url base is .../versions, so + /id works? No, url var was set to base.
    # actually we constructed url above.
    # verify url construction
    requests.delete(f"{url}/{vid}", headers=headers)
```
Fix URL construction bug and remove unused variables.

Two issues here:

- Critical bug: Line 113 uses the `url` variable, which may have been reassigned to the fallback user endpoint on line 87; the delete URL will be incorrect if the fallback was taken. Line 110 correctly computes `del_url` but then doesn't use it.
- Both `to_keep` (line 103) and `del_url` (line 110) are assigned but never used.
🐛 Proposed fix

```diff
     # Sort by updated_at desc
     versions.sort(key=lambda x: x["updated_at"], reverse=True)
-    # Keep 1 (latest usually)
-    # Logic: If unsupported, maybe user wants to keep NO images?
-    # User said: "delete all Docker Images except the newest".
-    to_keep = versions[:1]
-    to_delete = versions[1:]
+    # Delete all except the latest
+    to_delete = versions[1:]

     for v in to_delete:
         vid = v["id"]
         print(f" - Deleting version {vid} of {pkg}")
-        # Delete
-        del_url = f"{url}/{vid}"  # this url base is .../versions, so + /id works? No, url var was set to base.
-        # actually we constructed url above.
-        # verify url construction
-        requests.delete(f"{url}/{vid}", headers=headers)
+        # Construct the delete URL correctly
+        del_url = f"{url}/{vid}"
+        requests.delete(del_url, headers=headers, timeout=30)
```

Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools

🪛 Ruff (0.14.10)

103-103: Local variable `to_keep` is assigned to but never used; remove the assignment (F841)

110-110: Local variable `del_url` is assigned to but never used; remove the assignment (F841)

113-113: Probable use of `requests` call without timeout (S113)
🤖 Prompt for AI Agents
In @.scripts/toggle_support.py around lines 103 - 113, Remove the unused to_keep
assignment and fix the delete to use the precomputed del_url: drop the line
assigning to_keep = versions[:1], keep computing del_url = f"{url}/{vid}" inside
the loop (or compute the correct versions endpoint into a clearly named variable
first) and replace requests.delete(f"{url}/{vid}", headers=headers) with
requests.delete(del_url, headers=headers) so the delete uses the correct URL and
no unused variables remain; ensure url/versions endpoint used to build del_url
is the one constructed earlier (not the fallback reassigned value).
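The keep-newest selection from the proposed fix can be factored into a pure function, which makes the "delete all except the latest" rule testable without touching the GitHub API. The sample `versions` list mirrors the package-versions API shape (`id`, `updated_at`); the helper name is our own:

```python
def select_versions_to_delete(versions: list) -> list:
    # Sort newest-first by updated_at, keep the latest, delete the rest.
    ordered = sorted(versions, key=lambda v: v["updated_at"], reverse=True)
    return ordered[1:]

versions = [
    {"id": 1, "updated_at": "2024-01-01T00:00:00Z"},
    {"id": 3, "updated_at": "2024-03-01T00:00:00Z"},
    {"id": 2, "updated_at": "2024-02-01T00:00:00Z"},
]
print([v["id"] for v in select_versions_to_delete(versions)])  # [2, 1]
```

Note that ISO 8601 UTC timestamps sort correctly as strings, so no date parsing is needed here. The deletion loop then only needs the correct versions endpoint plus `/{vid}`.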
Note

Overhauls GitHub Actions by replacing legacy workflows with a unified, orchestrated pipeline for CI, testing, and releases.

- `orchestrator-ci.yaml` with dynamic add-on detection, linting, test builds, Trivy scan, compliance checks, and success/failure reporting
- `orchestrator-supervisor-test.yaml` to run real Supervisor integration tests for ingress-enabled add-ons
- `orchestrator-release.yaml`, plus auto-merge (`orchestrator-automerge.yaml`), auto-fix/format (`orchestrator-auto-fix.yaml`), labeler (`orchestrator-labeler.yaml`), edge sync (`orchestrator-edge-sync.yaml`), and workflow monitor (`orchestrator-workflow-monitor.yaml`)
- Removes legacy workflows (`auto-merge.yml`, `build-addon.yml`, `ci.yaml`, `deploy.yaml`, `fanex-id-bot.yml`, `hacs.yaml`, `housekeeping.yml`, `pr-assistant.yml`, `pr-autofix.yml`, `pr-thanks.yml`)

Written by Cursor Bugbot for commit 0460da9. This will update automatically on new commits. Configure here.
Summary by CodeRabbit
New Features
Changes
Refactor
✏️ Tip: You can customize this high-level summary in your review settings.