Cloud Runner Improvements - S3 Locking, Aws Local Stack (Pipelines), Testing Improvements, Rclone storage support, Provider plugin system #731
base: main
Conversation
- Implemented a primary attempt to pull LFS files using GIT_PRIVATE_TOKEN.
- Added a fallback mechanism to use GITHUB_TOKEN if the initial attempt fails.
- Configured git to replace SSH and HTTPS URLs with token-based authentication for the fallback.
- Improved error handling to log specific failure messages for both token attempts.

This change ensures more robust handling of LFS file retrieval in various authentication scenarios.
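The primary/fallback flow described above can be sketched as follows. This is a hedged illustration, not the PR's actual implementation: `run` stands in for the project's shell runner, and the exact `git config` keys used by the change may differ.

```typescript
// Hypothetical sketch of the token fallback order described in the commit message.
async function pullLfsWithTokenFallback(run: (cmd: string) => Promise<void>): Promise<void> {
  const privateToken = process.env.GIT_PRIVATE_TOKEN;
  const githubToken = process.env.GITHUB_TOKEN;
  try {
    if (!privateToken) throw new Error('GIT_PRIVATE_TOKEN not set');
    // Primary attempt: rewrite HTTPS remotes to embed the private token.
    await run(`git config --global url."https://${privateToken}@github.com/".insteadOf "https://github.com/"`);
    await run('git lfs pull');
  } catch (primaryError) {
    if (!githubToken) throw primaryError;
    // Fallback: rewrite SSH remotes to token-based HTTPS and retry.
    await run(`git config --global url."https://${githubToken}@github.com/".insteadOf "git@github.com:"`);
    await run('git lfs pull');
  }
}
```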
- Added permissions for packages, pull-requests, statuses, and id-token to enhance workflow capabilities.
- This change improves the CI pipeline's ability to manage pull requests and access necessary resources.

…ation
- Added configuration to use GIT_PRIVATE_TOKEN for git operations, replacing SSH and HTTPS URLs with token-based authentication.
- Improved error handling to ensure GIT_PRIVATE_TOKEN availability before attempting to pull LFS files.
- This change streamlines the process of pulling LFS files in environments requiring token authentication.

…entication
- Enhanced the process of configuring git to use GIT_PRIVATE_TOKEN and GITHUB_TOKEN by clearing existing URL configurations before setting new ones.
- Improved the clarity of the URL replacement commands for better readability and maintainability.
- This change ensures a more robust setup for pulling LFS files in environments requiring token authentication.

… pipeline
- Replaced instances of GITHUB_TOKEN with GIT_PRIVATE_TOKEN in the cloud-runner CI pipeline configuration.
- This change ensures consistent use of token-based authentication across jobs in the workflow, enhancing security and functionality.

…L unsetting
- Modified the git configuration commands to append '|| true' to prevent errors if the specified URLs do not exist.
- This change enhances the reliability of the URL clearing process in the RemoteClient class, ensuring smoother execution during token-based authentication setups.

…tion
- Updated comments for clarity regarding the purpose of URL configuration changes.
- Simplified the git configuration commands by removing redundant lines while maintaining functionality for HTTPS token-based authentication.
- This change enhances the readability and maintainability of the RemoteClient class's git setup process.
# Conflicts:
#	dist/index.js
#	dist/index.js.map
#	jest.config.js
#	yarn.lock
… log lines for test assertions
…off; lint/format fixes
… cache key for retained workspace (#379)
…logs; tests: retained workspace AWS assertion (#381)
…rd local provider steps
…nd log management; update builder path logic based on provider strategy
…sed on provider strategy and credentials; update binary files
…ained markers; hooks: include AWS S3 hooks on aws provider
…t:ci script; fix(windows): skip grep-based version regex tests; logs: echo CACHE_KEY/retained markers; hooks: include AWS hooks on aws provider
… update binary files
…rintf for empty input
…I hangs; s3 steps pass again
…; disable legacy pipeline triggers
…; disable legacy pipeline triggers
…; disable legacy pipeline triggers
… host LocalStack to local-docker
…ost:4566 for all; rely on k3d host mapping
📝 Walkthrough

Introduces dynamic provider loading with Git-based caching, refactors AWS clients to a centralized factory, adds AWS/LocalStack endpoint support, broadens storage backends (S3/rclone), updates workflows to split integrity tests across K8s/LocalStack/AWS, consolidates Jest setup with CI config, adds Windows-compat paths/flows, and adjusts tests accordingly.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor Dev as Developer/Test
  participant CR as CloudRunner
  participant PLoader as ProviderLoader
  participant PURL as URL Parser
  participant PGit as ProviderGitManager
  participant Prov as Provider (dynamic)
  Note over Dev,CR: Start run with providerStrategy and BuildParameters
  Dev->>CR: run()
  CR->>CR: setupSelectedBuildPlatform()
  CR->>CR: Detect endpoints • LocalStack?
  alt Provider=aws and isLocalStack
    CR->>CR: Rewrite provider → local-docker
  end
  CR->>PLoader: loadProvider(providerStrategy, buildParameters)
  PLoader->>PURL: parseProviderSource()
  alt GitHub provider source
    PLoader->>PGit: ensureRepositoryAvailable()
    PLoader->>PGit: getProviderModulePath()
  else Local/NPM/Built-in
    PLoader->>PLoader: resolve module path/name
  end
  PLoader->>Prov: dynamic import & instantiate
  PLoader-->>CR: Provider instance
  CR->>Prov: setupWorkflow(...)
  CR->>Prov: runTaskInWorkflow(...)
  Prov-->>CR: results { BuildResults, BuildSucceeded }
  CR-->>Dev: results
```
```mermaid
sequenceDiagram
  autonumber
  participant AWSIdx as AWS Provider Index
  participant Factory as AwsClientFactory
  participant ECS as ECS
  participant Kin as Kinesis
  participant CF as CloudFormation
  participant Logs as CloudWatchLogs
  Note over AWSIdx,Factory: All clients lazy-init with region & optional endpoints
  AWSIdx->>Factory: getECS()
  Factory-->>AWSIdx: ECS singleton
  AWSIdx->>ECS: RunTaskCommand
  AWSIdx->>Factory: getKinesis()
  Factory-->>AWSIdx: Kinesis singleton
  AWSIdx->>Kin: DescribeStream / GetShardIterator / GetRecords (retry on throttle)
  AWSIdx->>Factory: getCloudFormation()
  Factory-->>AWSIdx: CF singleton
  AWSIdx->>CF: Stack operations
  AWSIdx->>Factory: getCloudWatchLogs()
  Factory-->>AWSIdx: Logs singleton
  AWSIdx->>Logs: Log group cleanup
```
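The provider-source parsing step in the first diagram can be sketched as below. This is an assumption-laden illustration: the real `parseProviderSource` in the PR may use different prefixes and return a richer type.

```typescript
// Hypothetical sketch of provider-source resolution as shown in the diagram.
type ProviderSource =
  | { kind: 'builtin'; name: string }
  | { kind: 'local'; path: string }
  | { kind: 'github'; repo: string };

function parseProviderSource(strategy: string): ProviderSource {
  // Assumed convention: 'github:<owner>/<repo>' selects a Git-hosted provider plugin.
  if (strategy.startsWith('github:')) return { kind: 'github', repo: strategy.slice(7) };
  // Assumed convention: a path-like value selects a local module.
  if (strategy.startsWith('./') || strategy.startsWith('/')) return { kind: 'local', path: strategy };

  // Everything else resolves to a built-in provider (aws, k8s, local-docker, ...).
  return { kind: 'builtin', name: strategy };
}
```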
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes

Possibly related PRs
Tip: 👮 Agentic pre-merge checks are now available in preview! Pro plan users can enable pre-merge checks in their settings to enforce checklists before merging PRs. Please see the documentation for more information. Example:

```yaml
reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).
```

Pre-merge checks: ❌ Failed checks (1 warning) | ✅ Passed checks (2 passed)
* ci(k8s): remove in-cluster LocalStack; use host LocalStack via localhost:4566 for all; rely on k3d host mapping
```typescript
const inline = commands
  .replace(/"/g, '\\"')
```
Check failure (Code scanning / CodeQL): Incomplete string escaping or encoding (High)
Copilot Autofix (AI, 30 days ago)
To correctly escape the commands string for embedding inside double quotes in a shell command, we must escape both backslashes and double quotes. The safest approach is to use a robust, well-tested library. However, since we are only shown this specific snippet and cannot modify elsewhere, we'll fix this instance directly.

General approach

- First, replace every single backslash (`\`) with a double backslash (`\\`) across the whole string.
- Then, replace every double quote (`"`) with an escaped quote (`\"`), again across the whole string.
- Maintain both `.replace()` operations using global regexes.
- Perform backslash replacement before double-quote replacement to avoid double-escaping the newly added backslashes.

If using a library like `shell-quote` is allowed (and it's commonly used for this), that would be preferred, but if not, two regex replaces is standard, as described in the background.

Changes needed

- In `src/model/cloud-runner/providers/local/index.ts`, inside the Windows-specific code, update the string escaping for `inline`.
- Ensure that backslashes are replaced before double quotes, both globally.
- If possible, use a library like `shell-quote` for even stronger escaping (but only if allowed as per instructions).

Imports and definitions

- No new imports are strictly necessary for the manual fix with replace.
- If adopting `shell-quote`, we would need to add an import for it, but we'll use the manual approach since that adheres strictly to the context shown.
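The order-sensitive escaping described above can be demonstrated with a small standalone helper (illustrative only; the PR applies the same two replaces inline):

```typescript
// Backslashes must be doubled first; otherwise the backslashes added while
// escaping quotes would themselves be escaped again in a second pass.
function escapeForDoubleQuotes(input: string): string {
  return input
    .replace(/\\/g, '\\\\') // 1. double every backslash
    .replace(/"/g, '\\"'); // 2. then escape double quotes
}

console.log(escapeForDoubleQuotes('say "hi" \\ there'));
```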
```diff
@@ -69,6 +69,7 @@
       // On Windows, many built-in hooks use POSIX shell syntax. Execute via bash if available.
       if (process.platform === 'win32') {
         const inline = commands
+          .replace(/\\/g, '\\\\')
           .replace(/"/g, '\\"')
           .replace(/\r/g, '')
           .split('\n')
```
👀 sounds right. Maybe a library could be a good idea here as well 🤷
@coderabbitai what would you do?
@GabLeRoux You're absolutely right! This is a legitimate security concern. The current code only escapes double quotes but ignores backslashes, which can lead to command injection vulnerabilities.
The issue is that backslashes are escape characters in bash, so unescaped backslashes in the input could:
- Break command syntax
- Escape subsequent characters unintentionally
- Enable command injection when combined with other special characters
I'd recommend using a proper shell escaping library rather than manual escaping, which is error-prone. Here are two good options:
Option 1: Using `shell-escape`

```bash
npm install shell-escape
```

```typescript
import shellescape from 'shell-escape';

// Replace the current escaping logic with:
const bashWrapped = `bash -lc ${shellescape([inline])}`;
```

Option 2: Using `shell-quote`

```bash
npm install shell-quote
```

```typescript
import { quote } from 'shell-quote';

// Replace with:
const bashWrapped = `bash -lc ${quote([inline])}`;
```

Option 3: Avoid shell parsing entirely. Use `child_process.spawn` with array arguments instead of shell string concatenation:

```typescript
import { spawn } from 'child_process';

// Execute without shell parsing
const result = spawn('bash', ['-lc', commands], { stdio: 'inherit' });
```
The library approach (Options 1-2) is probably best here since it maintains the current architecture while fixing the security issue properly.
Actionable comments posted: 28
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (11)
src/model/github.ts (1)
`127-134`: completed_at timestamp is never set; inverted condition.

endedDate is only assigned when it is already defined, so completed_at falls back to startedDate. Flip the condition.

Apply this diff:

```diff
-      if (status === `completed`) {
-        if (GitHub.endedDate !== undefined) {
-          GitHub.endedDate = new Date().toISOString();
-        }
-        // eslint-disable-next-line camelcase
-        data.completed_at = GitHub.endedDate || GitHub.startedDate;
-        data.conclusion = result;
-      }
+      if (status === `completed`) {
+        if (GitHub.endedDate === undefined) {
+          GitHub.endedDate = new Date().toISOString();
+        }
+        // eslint-disable-next-line camelcase
+        data.completed_at = GitHub.endedDate;
+        data.conclusion = result;
+      }
```

src/model/cloud-runner/providers/aws/services/garbage-collection-service.ts (3)
`10-14`: Fix age check: `getDay()` returns the weekday, not elapsed days.

Current logic misclassifies resource age, causing erroneous skips/deletes.

Apply:

```diff
-  static isOlderThan1day(date: Date) {
-    const ageDate = new Date(date.getTime() - Date.now());
-
-    return ageDate.getDay() > 0;
-  }
+  static isOlderThan1day(date: Date) {
+    const ONE_DAY_MS = 24 * 60 * 60 * 1000;
+    return (Date.now() - date.getTime()) >= ONE_DAY_MS;
+  }
```
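A minimal standalone sketch of the corrected check, for clarity: the key point is comparing an elapsed millisecond delta rather than calling `Date#getDay()`, which returns the day of the week (0–6).

```typescript
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

// Elapsed-time comparison: true when `date` is at least 24 hours in the past.
function isOlderThan1day(date: Date): boolean {
  return Date.now() - date.getTime() >= ONE_DAY_MS;
}
```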
`36-43`: Do not `return` inside the loop when skipping a stack; use `continue`.

`return` exits the whole cleanup early and leaves remaining stacks/logs untouched.

```diff
-        CloudRunnerLogger.log(`Skipping ${element.StackName} - active task was running not deleting`);
-
-        return;
+        CloudRunnerLogger.log(`Skipping ${element.StackName} - active task in use`);
+        continue;
```
`49-53`: Avoid early `return` for ignore-list stacks; use `continue`.

```diff
-      CloudRunnerLogger.log(`Skipping ${element.StackName} ignore list`);
-
-      return;
+      CloudRunnerLogger.log(`Skipping ${element.StackName} (ignore list)`);
+      continue;
```

src/model/cloud-runner/remote-client/index.ts (2)
`8-8`: Wrong assert import: console.assert does not throw, breaking invariants.

Importing assert from node:console logs instead of throwing, so expectations silently pass. Use node:assert (strict).

```diff
-import { assert } from 'node:console';
+import { strict as assert } from 'node:assert';
```
`1-405`: Replace `assert` imports from `node:console` with the standard `assert` module.

Update the import in each file to use:

```typescript
import assert from 'assert';
```

• src/model/input-readers/git-repo.ts:1
• src/model/cloud-runner/remote-client/caching.ts:1
• src/model/cloud-runner/remote-client/index.ts:8

src/model/cloud-runner/providers/aws/services/task-service.ts (2)
`29-38`: Avoid `any[]` and eliminate duplicate ListStacks calls (paginate once, then partition).

Fetch stacks once with pagination, then split into base vs non-base. Also narrow types to StackSummary[].

Example (sketch):

```diff
-  public static async getCloudFormationJobStacks() {
-    const result: any[] = [];
+  public static async getCloudFormationJobStacks() {
+    const result: StackSummary[] = [];
     ...
-    const stacks =
-      (await CF.send(new ListStacksCommand({}))).StackSummaries?.filter(
+    const all: StackSummary[] = [];
+    let next: string | undefined;
+    do {
+      const page = await CF.send(new ListStacksCommand({ NextToken: next }));
+      all.push(...(page.StackSummaries ?? []));
+      next = page.NextToken;
+    } while (next);
+
+    const stacks = all.filter(
         (_x) => _x.StackStatus !== 'DELETE_COMPLETE' && _x.TemplateDescription !== BaseStackFormation.baseStackDecription,
       ) || [];
     ...
-    const baseStacks =
-      (await CF.send(new ListStacksCommand({}))).StackSummaries?.filter(
+    const baseStacks = all.filter(
         (_x) => _x.StackStatus !== 'DELETE_COMPLETE' && _x.TemplateDescription === BaseStackFormation.baseStackDecription,
       ) || [];
```

Also applies to: 56-60
`46-52`: Fix age/duration math (current Date-based approach is incorrect).

Using new Date(Date.now() - x) and then getHours() yields wall-clock hours, not elapsed duration. Compute a delta in ms and format D/H/M from it.

Apply (sample helper and usage):

```diff
-      const ageDate: Date = new Date(Date.now() - (element.CreationTime?.getTime() ?? 0));
+      const ageMs = Date.now() - (element.CreationTime?.getTime() ?? 0);
+      const days = Math.floor(ageMs / 86_400_000);
+      const hours = Math.floor((ageMs % 86_400_000) / 3_600_000);
+      const minutes = Math.floor((ageMs % 3_600_000) / 60_000);
       ...
-        `Task Stack ${element.StackName} - Age D${Math.floor(
-          ageDate.getHours() / 24,
-        )} H${ageDate.getHours()} M${ageDate.getMinutes()}`,
+        `Task Stack ${element.StackName} - Age D${days} H${hours} M${minutes}`,
```

Repeat the same pattern where age is logged for base stacks, awsDescribeJob, and log groups.

Also applies to: 69-75, 133-137, 174-178
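The delta-based formatting suggested above can be factored into a small helper (a sketch; the PR inlines the arithmetic instead):

```typescript
// Derive D/H/M from an elapsed-millisecond delta instead of reading calendar
// fields off a Date, which reflect wall-clock time rather than duration.
function formatAge(ageMs: number): string {
  const days = Math.floor(ageMs / 86_400_000);
  const hours = Math.floor((ageMs % 86_400_000) / 3_600_000);
  const minutes = Math.floor((ageMs % 3_600_000) / 60_000);

  return `D${days} H${hours} M${minutes}`;
}
```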
src/model/cloud-runner/services/core/shared-workspace-locking.ts (2)
`199-200`: Fix incorrect sort comparators (currently not comparing two items).

Array.sort callback must accept (a, b). Using a single arg breaks ordering and can corrupt retention logic.

Apply:

```diff
-      .sort((x) => x.timestamp);
+      .sort((a, b) => a.timestamp - b.timestamp);
```

And:

```diff
-    ordered.sort((x) => x.timestamp);
+    ordered.sort((a, b) => a.timestamp - b.timestamp);
```

Also applies to: 244-244
`254-274`: Correct timestamp extraction for workspace objects.

- The early split('_')[1] path is wrong for typical workspace names and returns NaN.
- When reading from S3/rclone, parse the leading timestamp token.

Apply:

```diff
   public static async GetWorkspaceTimestamp(
     workspace: string,
     buildParametersContext: BuildParameters,
   ): Promise<Number> {
-    if (workspace.split(`_`).length > 0) {
-      return Number(workspace.split(`_`)[1]);
-    }
-
     if (!(await SharedWorkspaceLocking.DoesWorkspaceExist(workspace, buildParametersContext))) {
       throw new Error("Workspace doesn't exist, can't call get all locks");
     }

     return (
       await SharedWorkspaceLocking.listObjects(
         `${SharedWorkspaceLocking.workspacePrefix}${buildParametersContext.cacheKey}/`,
       )
     )
       .map((x) => x.replace(`/`, ``))
       .filter((x) => x.includes(workspace) && x.endsWith(`_workspace`))
-      .map((x) => Number(x))[0];
+      .map((x) => Number(x.split('_')[0]))[0];
   }
```

src/model/cloud-runner/providers/aws/aws-task-runner.ts (1)
`113-133`: waitUntilTaskRunning swallows failures.

You log and mark failed but don't rethrow; execution continues into streaming with an invalid task.

```diff
     } catch (error_) {
       const error = error_ as Error;
       await new Promise((resolve) => setTimeout(resolve, 3000));
       const taskAfterError = await AWSTaskRunner.describeTasks(cluster, taskArn);
       CloudRunnerLogger.log(`Cloud runner job has ended ${taskAfterError?.containers?.[0]?.lastStatus}`);
       core.setFailed(error);
       core.error(error);
+      throw error;
     }
```
🧹 Nitpick comments (68)
src/model/cloud-runner/services/core/follow-log-stream-service.ts (1)
`51-52`: Consider redacting sensitive tokens before persisting logs.

If `output` is stored/returned beyond masked console logs, scrub common AWS secrets to reduce leakage risk.

Example:

```typescript
const redacted = message
  .replace(/\bAKIA[0-9A-Z]{16}\b/g, '****')
  .replace(/\baws_secret_access_key\s*=\s*\S+/gi, 'aws_secret_access_key=****')
  .replace(/\baws_session_token\s*=\s*\S+/gi, 'aws_session_token=****');
```

src/model/versioning.test.ts (1)
`38-39`: Windows skip is fine; consider feature-detecting grep instead of the OS.

Prefer skipping when grep/sh are unavailable rather than hard-coding win32. This makes the test robust on Windows shells with grep present and future-proof on non-Windows hosts without grep. Example approach: probe availability (e.g., `which grep`) and choose describe/skip accordingly.
src/model/github.ts (2)
`200-202`: Fix misleading error message.

The thrown message references asyncChecksApiWorkflowName even when dispatching arbitrary workflows by name.

Apply this diff:

```diff
-        throw new Error(`no workflow with name "${GitHub.asyncChecksApiWorkflowName}"`);
+        throw new Error(`no workflow with name "${element}"`);
```
`6-6`: Consider using global fetch (Node ≥18) to drop the node-fetch dependency.

Runtime runs on Node 18 (engines and CI). You can rely on globalThis.fetch and remove the node-fetch import and wiring, simplifying bundle/test setup.

Proposed refactor:

```diff
-import fetch from 'node-fetch';
 ...
-      request: { fetch },
+      request: { fetch: globalThis.fetch },
 ...
-      request: { fetch },
+      request: { fetch: globalThis.fetch },
```

If you adopt this, you can drop the jest fetch polyfill and the node-fetch devDependency as well. I can prep a follow-up patch if you want.

Also applies to: 19-21, 25-27
jest.setup.js (1)
`1-2`: Guard the polyfill to avoid overriding Node 18's native fetch; expose Headers/Request/Response.

Don't override when native fetch exists; also surface the companion classes for libraries that expect them.

Apply this diff:

```diff
-const fetch = require('node-fetch');
-global.fetch = fetch;
+const nf = require('node-fetch');
+globalThis.fetch = globalThis.fetch || nf;
+globalThis.Headers = globalThis.Headers || nf.Headers;
+globalThis.Request = globalThis.Request || nf.Request;
+globalThis.Response = globalThis.Response || nf.Response;
```

package.json (1)
`22-22`: Minor: --runInBand duplicates maxWorkers: 1 in jest.ci.config.js.

Not harmful, but one is sufficient. Keep either the flag or the config.
.github/workflows/integrity-check.yml (1)
`30-38`: Optional: make Cloud Runner jobs depend on tests.

If you want to avoid burning minutes on failing PRs, gate these jobs with needs: tests.

Example:

```diff
   cloud-runner:
     name: Cloud Runner Integrity
+    needs: tests
     uses: ./.github/workflows/cloud-runner-integrity.yml
     secrets: inherit
   cloud-runner-localstack:
     name: Cloud Runner Integrity (LocalStack)
+    needs: tests
     uses: ./.github/workflows/cloud-runner-integrity-localstack.yml
     secrets: inherit
```

.github/workflows/cloud-runner-integrity.yml (5)
21-29
: Quote booleans in env to avoid YAML coercion pitfalls.Prevents accidental type coercion and keeps values consistent in sub-processes.
- DEBUG: true + DEBUG: 'true' PROJECT_PATH: test-project - USE_IL2CPP: false + USE_IL2CPP: 'false'
25-25
: Use github.ref_name for branch-only value.github.ref includes refs/heads/... which may not be what your code expects.
- CLOUD_RUNNER_BRANCH: ${{ github.ref }} + CLOUD_RUNNER_BRANCH: ${{ github.ref_name }}
87-90
: Harden readiness loop and fail fast.Ensure the step fails if the cluster never becomes ready and surface node Ready status.
- for i in {1..60}; do kubectl get nodes && break || sleep 5; done - kubectl get storageclass + set -euo pipefail + for i in {1..60}; do + if kubectl get nodes | grep -q ' Ready '; then break; fi + sleep 5 + done + kubectl get nodes + kubectl get storageclass
119-123
: De-duplicate and standardize endpoint variables.Keep the minimal set your code actually reads; prefer AWS_ENDPOINT_URL and explicit S3 vars. Avoid INPUT_* unless your tests require action-style inputs.
- AWS_S3_ENDPOINT: http://localhost:4566 - AWS_ENDPOINT: http://localhost:4566 - INPUT_AWSS3ENDPOINT: http://localhost:4566 - INPUT_AWSENDPOINT: http://localhost:4566 - AWS_S3_FORCE_PATH_STYLE: 'true' + AWS_ENDPOINT_URL: http://localhost:4566 + AWS_S3_ENDPOINT: http://localhost:4566 + AWS_S3_FORCE_PATH_STYLE: 'true'If the tests depend on INPUT_* keep them, but mirror the same single source (AWS_ENDPOINT_URL) to avoid drift.
130-130
: Add workflow-level concurrency to cancel superseded runs.Saves minutes and avoids fighting clusters.
jobs: +concurrency: + group: cloud-runner-integrity-${{ github.workflow }}-${{ github.ref }} + cancel-in-progress: truePlace this block at the workflow root (same level as on/env/jobs).
.github/workflows/cloud-runner-integrity-localstack.yml (4)
21-33
: Standardize env: quote booleans and add S3/IMDS flags to reduce flakiness.AWS_ENDPOINT: http://localhost:4566 AWS_ENDPOINT_URL: http://localhost:4566 AWS_ACCESS_KEY_ID: test AWS_SECRET_ACCESS_KEY: test CLOUD_RUNNER_BRANCH: ${{ github.ref }} - DEBUG: true + DEBUG: 'true' PROJECT_PATH: test-project - USE_IL2CPP: false + USE_IL2CPP: 'false' + AWS_S3_ENDPOINT: http://localhost:4566 + AWS_S3_FORCE_PATH_STYLE: 'true' + AWS_EC2_METADATA_DISABLED: 'true'Also consider CLOUD_RUNNER_BRANCH: ${{ github.ref_name }} for a clean branch string.
35-37
: Gate the job on the workflow_call input (currently unused).Prevents accidental execution when the caller opts out.
tests: name: Cloud Runner Tests (LocalStack) runs-on: ubuntu-latest + if: ${{ inputs.runGithubIntegrationTests == 'true' }}
39-45
: Add a LocalStack health wait to avoid early test failures.Service start can be racy; wait until it’s responsive.
services: localstack: image: localstack/localstack ports: - 4566:4566 env: SERVICES: cloudformation,ecs,kinesis,cloudwatch,s3,logs + steps: + - name: Wait for LocalStack + run: | + set -euo pipefail + for i in {1..60}; do + if curl -fsS http://localhost:4566/health | grep -q '"s3": *"running"'; then + break + fi + sleep 2 + done(Move this before checkout or immediately after; either is fine.)
1-3
: Add workflow-level concurrency.Cancel superseded runs to save minutes.
name: cloud-runner-integrity-localstack on: workflow_call: +concurrency: + group: cloud-runner-integrity-localstack-${{ github.workflow }}-${{ github.ref }} + cancel-in-progress: truesrc/model/cloud-runner/providers/k8s/kubernetes-job-spec-factory.ts (3)
25-35
: Name and coverage nits: rename set and include common endpoint vars
- Rename to a clearer identifier (fixes ESLint unicorn/prevent-abbreviations).
- Consider adding AWS_ENDPOINT_URL and (if you use them elsewhere) service-specific endpoints (e.g., AWS_STS_ENDPOINT, AWS_SQS_ENDPOINT, AWS_SECRETSMANAGER_ENDPOINT, AWS_SSM_ENDPOINT, AWS_CLOUDWATCH_LOGS_ENDPOINT).
Apply:
- const endpointEnvNames = new Set([ + const endpointEnvironmentNames = new Set([ 'AWS_S3_ENDPOINT', 'AWS_ENDPOINT', 'AWS_CLOUD_FORMATION_ENDPOINT', 'AWS_ECS_ENDPOINT', 'AWS_KINESIS_ENDPOINT', - 'AWS_CLOUD_WATCH_LOGS_ENDPOINT', + 'AWS_CLOUDWATCH_LOGS_ENDPOINT', + 'AWS_ENDPOINT_URL', 'INPUT_AWSS3ENDPOINT', 'INPUT_AWSENDPOINT', ]);
35-47
: Make rewriting scheme-agnostic and configurable; preserve httpsCurrent startsWith checks miss https and scheme-less values and hardcode the host. Use a regex that preserves the scheme and allow overriding the host gateway.
- const adjustedEnvironment = environment.map((x) => { + const adjustedEnvironment = environment.map((x) => { let value = x.value; - if ( - typeof value === 'string' && - endpointEnvNames.has(x.name) && - (value.startsWith('http://localhost') || value.startsWith('http://127.0.0.1')) - ) { - value = value - .replace('http://localhost', 'http://host.k3d.internal') - .replace('http://127.0.0.1', 'http://host.k3d.internal'); - } + if (typeof value === 'string' && endpointEnvironmentNames.has(x.name)) { + const hostGateway = process.env.K3D_HOST_GATEWAY ?? 'host.k3d.internal'; + // http(s)://localhost|127.0.0.1[:...] + value = value.replace(/^([a-z]+:\/\/)(localhost|127\.0\.0\.1)(?=[:/]|$)/i, `$1${hostGateway}`); + // scheme-less localhost|127.0.0.1[:...] + value = value.replace(/^(localhost|127\.0\.0\.1)(?=[:/]|$)/i, hostGateway); + } - return { name: x.name, value } as CloudRunnerEnvironmentVariable; + + return { name: x.name, value } as CloudRunnerEnvironmentVariable; });
46-47
: Satisfy padding-line-between-statementsESLint wants a blank line before the return.
- return { name: x.name, value } as CloudRunnerEnvironmentVariable; + + return { name: x.name, value } as CloudRunnerEnvironmentVariable;src/model/cloud-runner/providers/local/index.ts (2)
`71-78`: Fail fast and reduce quoting pitfalls.

Join with && to stop on first failure and prefix strict bash options. Keeps behavior closer to multi-line scripts.

```diff
     const inline = commands
       .replace(/"/g, '\\"')
       .replace(/\r/g, '')
       .split('\n')
       .filter((x) => x.trim().length > 0)
-      .join(' ; ');
-    const bashWrapped = `bash -lc "${inline}"`;
+      .join(' && ');
+    const bashWrapped = `bash -lc "set -euo pipefail; ${inline}"`;
```
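The suggestion above can be sketched as a standalone helper (illustrative; escaping is simplified here and does not cover the backslash handling discussed in the CodeQL finding earlier in this thread):

```typescript
// Join non-empty lines with `&&` so the first failing command aborts the run,
// and prefix bash strict-mode options before executing.
function wrapForBash(commands: string): string {
  const inline = commands
    .replace(/"/g, '\\"')
    .replace(/\r/g, '')
    .split('\n')
    .filter((x) => x.trim().length > 0)
    .join(' && ');

  return `bash -lc "set -euo pipefail; ${inline}"`;
}
```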
`78-79`: Satisfy padding-line-between-statements.

ESLint expects a blank line before the return.

```diff
-    return await CloudRunnerSystem.Run(bashWrapped);
+
+    return await CloudRunnerSystem.Run(bashWrapped);
```

src/model/cloud-runner/remote-client/remote-client-logger.ts (2)
`9-13`: Prefer the OS tempdir to the repo cwd on Windows.

Using os.tmpdir() avoids polluting the workspace and is safer with read-only repos.

```diff
-    if (process.platform === 'win32') {
-      return path.join(process.cwd(), 'temp', 'job-log.txt');
-    }
-
-    return path.join(`/home`, `job-log.txt`);
+    const baseDir =
+      process.platform === 'win32' ? path.join(os.tmpdir(), 'unity-builder') : '/home';
+
+    return path.join(baseDir, 'job-log.txt');
```

Add the import outside the selected lines:

```typescript
import os from 'node:os';
```
35-45
: Consider logging errors/diagnostics to file as wellFor complete collection, append logCliError and logCliDiagnostic messages to the same file.
public static logCliError(message: string) { - CloudRunnerLogger.log(`[Client][Error] ${message}`); + const finalMessage = `[Client][Error] ${message}`; + this.appendToFile(finalMessage); + CloudRunnerLogger.log(finalMessage); } public static logCliDiagnostic(message: string) { - CloudRunnerLogger.log(`[Client][Diagnostic] ${message}`); + const finalMessage = `[Client][Diagnostic] ${message}`; + this.appendToFile(finalMessage); + CloudRunnerLogger.log(finalMessage); }README.md (1)
`29-33`: Add a language to the fenced code block (fixes markdownlint MD040).

Specify the shell for the env snippet:

````diff
-```
+```bash
 AWS_ENDPOINT=http://localhost:4566
 AWS_ACCESS_KEY_ID=test
 AWS_SECRET_ACCESS_KEY=test
````

src/model/cloud-runner/tests/cloud-runner-hooks.test.ts (1)

`97-99`: Prefer a boolean success flag over log string matching.

Align with other tests using BuildSucceeded to reduce brittleness.

```diff
-    const buildContainsBuildSucceeded = results2.includes('Build succeeded');
+    // Prefer boolean over string match
+    const buildSucceeded = results2Object.BuildSucceeded;
@@
-    if (CloudRunnerOptions.providerStrategy !== 'local') {
-      expect(buildContainsBuildSucceeded).toBeTruthy();
-    }
+    if (CloudRunnerOptions.providerStrategy !== 'local') {
+      expect(buildSucceeded).toBe(true);
+    }
```

Also applies to: 105-114
src/model/cloud-runner/workflows/async-workflow.ts (1)
30-39
: Remove unnecessary escapes and harden the shell.
- Fix ESLint no-useless-escape by dropping ".
- Add
set -euo pipefail
for safer execution.BRANCH="${CloudRunner.buildParameters.cloudRunnerBranch}" REPO="${CloudRunnerFolders.unityBuilderRepoUrl}" +set -euo pipefail -if [ -n "$(git ls-remote --heads \"$REPO\" \"$BRANCH\" 2>/dev/null)" ]; then - git clone -q -b "$BRANCH" "$REPO" /builder +if [ -n "$(git ls-remote --heads "$REPO" "$BRANCH" 2>/dev/null)" ]; then + git clone -q -b "$BRANCH" "$REPO" /builder else echo "Remote branch $BRANCH not found in $REPO; falling back to a known branch" git clone -q -b cloud-runner-develop "$REPO" /builder \ || git clone -q -b main "$REPO" /builder \ || git clone -q "$REPO" /builder fiAdditionally, quote the target repo/branch clone to avoid word-splitting:
# outside the changed hunk git clone -q -b "${CloudRunner.buildParameters.branch}" "${CloudRunnerFolders.targetBuildRepoUrl}" /reposrc/model/image-environment-factory.ts (1)
87-92
: Unique-merge logic LGTM; small guard.The null/name checks and overwrite-by-name behavior are good. Consider normalizing
element.name
to uppercase if consumers treat env names case-insensitively.src/model/cloud-runner/tests/e2e/cloud-runner-end2end-retaining.test.ts (1)
`27-28`: Overriding cloudRunnerDebug here doesn't affect top-level gating.

This sets a build-time override, but the test block is gated by CloudRunnerOptions.cloudRunnerDebug at Line 18. If CI expects this test to run, ensure CloudRunnerOptions.cloudRunnerDebug is set via env/config, or drop the top-level gate. Otherwise, remove this override to avoid confusion.

Minimal clean-up if you keep the top-level gate:

```diff
-      cloudRunnerDebug: true,
```
src/model/cloud-runner/cloud-runner.ts (1)
`68-91`: LocalStack detection: add lint fix and minor hardening.

- Lint: ESLint expects a blank line before the comment.
- Optional: include host.docker.internal in detection to cover common Docker-for-* setups.

```diff
   private static setupSelectedBuildPlatform() {
     CloudRunnerLogger.log(`Cloud Runner platform selected ${CloudRunner.buildParameters.providerStrategy}`);
-    // Detect LocalStack endpoints and reroute AWS provider to local-docker for CI tests that only need S3
+
+    // Detect LocalStack endpoints and reroute AWS provider to local-docker for CI tests that only need S3
     const endpointsToCheck = [
       process.env.AWS_ENDPOINT,
       process.env.AWS_S3_ENDPOINT,
       process.env.AWS_CLOUD_FORMATION_ENDPOINT,
       process.env.AWS_ECS_ENDPOINT,
       process.env.AWS_KINESIS_ENDPOINT,
       process.env.AWS_CLOUD_WATCH_LOGS_ENDPOINT,
       CloudRunnerOptions.awsEndpoint,
       CloudRunnerOptions.awsS3Endpoint,
       CloudRunnerOptions.awsCloudFormationEndpoint,
       CloudRunnerOptions.awsEcsEndpoint,
       CloudRunnerOptions.awsKinesisEndpoint,
       CloudRunnerOptions.awsCloudWatchLogsEndpoint,
     ]
       .filter((x) => typeof x === 'string')
       .join(' ');
-    const isLocalStack = /localstack|localhost|127\.0\.0\.1/i.test(endpointsToCheck);
+    const isLocalStack = /localstack|localhost|127\.0\.0\.1|host\.docker\.internal/i.test(endpointsToCheck);
```

src/model/cloud-runner/services/core/task-parameter-serializer.ts (1)
36-38
: Fix lint: ensure a blank line before the inline comment. Keeps lines-around-comment happy and matches surrounding style.
    ...TaskParameterSerializer.serializeCloudRunnerOptions(),
    ...CommandHookService.getSecrets(CommandHookService.getHooks(buildParameters.commandHooks)),
-   // Include AWS environment variables for LocalStack compatibility
+
+   // Include AWS environment variables for LocalStack compatibility
    ...TaskParameterSerializer.serializeAwsEnvironmentVariables(),
src/model/build-parameters.ts (1)
59-66
: Document AWS endpoint fallback precedence
Service-specific getters in CloudRunnerOptions (awsCloudFormationEndpoint, awsEcsEndpoint, etc.) already fall back to the generic awsEndpoint, so it isn’t dead config. Add JSDoc or README notes clarifying that specific endpoints override the generic one.
src/model/cloud-runner/providers/aws/services/garbage-collection-service.ts (1)
56-57
: Consider waiting for stack deletion or handling failures.
DeleteStackCommand is async and eventual; without a waiter/poll you may race subsequent runs. Optionally add a CF waiter or poll status with a timeout/backoff.
src/model/cloud-runner/tests/cloud-runner-s3-steps.test.ts (2)
59-66
: Remove duplicate inner check of shouldRunS3. You’re already inside the gated branch; the nested condition is redundant.
- // Only run S3 operations if environment supports it - if (shouldRunS3) { - const results = await CloudRunnerSystem.RunAndReadLines( - `aws s3 ls s3://${CloudRunner.buildParameters.awsStackName}/cloud-runner-cache/`, - ); - CloudRunnerLogger.log(results.join(`,`)); - } + const results = await CloudRunnerSystem.RunAndReadLines( + `aws s3 ls s3://${CloudRunner.buildParameters.awsStackName}/cloud-runner-cache/`, + ); + CloudRunnerLogger.log(results.join(`,`));
47-48
: Simplify Unity version resolution.
determineUnityVersion(projectPath, version) returns version when not 'auto', and you’re already passing a concrete version.
- unityVersion: UnityVersioning.determineUnityVersion('test-project', UnityVersioning.read('test-project')),
+ unityVersion: UnityVersioning.read('test-project'),
src/model/cloud-runner/tests/cloud-runner-rclone-steps.test.ts (4)
11-17
: De-duplicate the CreateParameters helper (import instead of redefining). This local helper duplicates src/model/cloud-runner/tests/create-test-parameter.ts. Import and reuse it to avoid drift and to keep behavior consistent across tests.
+import { CreateParameters } from './create-test-parameter';
-import { OptionValues } from 'commander';
-async function CreateParameters(overrides: OptionValues | undefined) {
-  if (overrides) {
-    Cli.options = overrides;
-  }
-
-  return await BuildParameters.create();
-}
56-67
: Avoid redundant version resolution; prefer 'auto' or a single read. determineUnityVersion(read(...)) double-reads and defeats the ‘auto’ path. Use 'auto' and let parameter creation resolve it, or read once.
- const overrides = { + const overrides = { versioning: 'None', projectPath: 'test-project', - unityVersion: UnityVersioning.determineUnityVersion('test-project', UnityVersioning.read('test-project')), + unityVersion: 'auto',
69-73
: Rename buildParams to buildParameters (ESLint unicorn/prevent-abbreviations). Improves readability and complies with the lint rule.
- const buildParams = await CreateParameters(overrides); - const baseImage = new ImageTag(buildParams); - const results = await CloudRunner.run(buildParams, baseImage.toString()); + const buildParameters = await CreateParameters(overrides); + const baseImage = new ImageTag(buildParameters); + const results = await CloudRunner.run(buildParameters, baseImage.toString());
77-79
: Handle best-effort verification errors instead of using an empty catch. Empty catch blocks hide real problems and violate no-empty; log at least a diagnostic.
- } catch {}
+ } catch (error: any) {
+   CloudRunnerLogger.log(`[rclone lsf skipped] ${error?.message ?? error}`);
+ }
src/model/cloud-runner/providers/aws/aws-client-factory.ts (6)
18-21
: Fall back to the global awsEndpoint and a default region for LocalStack/dev. Ensures consistent behavior when service-specific endpoints or the region aren’t provided.
- this.cloudFormation = new CloudFormation({
-   region: Input.region,
-   endpoint: CloudRunnerOptions.awsCloudFormationEndpoint,
- });
+ this.cloudFormation = new CloudFormation({
+   region: Input.region || 'us-east-1',
+   endpoint: CloudRunnerOptions.awsCloudFormationEndpoint || CloudRunnerOptions.awsEndpoint,
+ });
29-33
: Apply the same endpoint/region fallback to ECS.
- this.ecs = new ECS({
-   region: Input.region,
-   endpoint: CloudRunnerOptions.awsEcsEndpoint,
- });
+ this.ecs = new ECS({
+   region: Input.region || 'us-east-1',
+   endpoint: CloudRunnerOptions.awsEcsEndpoint || CloudRunnerOptions.awsEndpoint,
+ });
40-43
: Apply the same endpoint/region fallback to Kinesis.
- this.kinesis = new Kinesis({
-   region: Input.region,
-   endpoint: CloudRunnerOptions.awsKinesisEndpoint,
- });
+ this.kinesis = new Kinesis({
+   region: Input.region || 'us-east-1',
+   endpoint: CloudRunnerOptions.awsKinesisEndpoint || CloudRunnerOptions.awsEndpoint,
+ });
51-54
: Apply the same endpoint/region fallback to CloudWatch Logs.
- this.cloudWatchLogs = new CloudWatchLogs({
-   region: Input.region,
-   endpoint: CloudRunnerOptions.awsCloudWatchLogsEndpoint,
- });
+ this.cloudWatchLogs = new CloudWatchLogs({
+   region: Input.region || 'us-east-1',
+   endpoint: CloudRunnerOptions.awsCloudWatchLogsEndpoint || CloudRunnerOptions.awsEndpoint,
+ });
62-66
: Apply the same endpoint/region fallback to S3. Keep forcePathStyle for LocalStack/path-style buckets.
- this.s3 = new S3({
-   region: Input.region,
-   endpoint: CloudRunnerOptions.awsS3Endpoint,
-   forcePathStyle: true,
- });
+ this.s3 = new S3({
+   region: Input.region || 'us-east-1',
+   endpoint: CloudRunnerOptions.awsS3Endpoint || CloudRunnerOptions.awsEndpoint,
+   forcePathStyle: true,
+ });
9-71
: Add light testability: allow a one-off configuration override for endpoints (optional). If you expect tests to swap endpoints/regions at runtime, consider allowing a one-shot configure({region, endpoints}) that resets before applying.
Happy to add a minimal configure() API atop reset() if desired.
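A minimal sketch of what that one-shot configure() could look like. A plain config object stands in for the real AWS SDK clients so the pattern is self-contained; the ClientFactory name and shape are illustrative, not the actual AwsClientFactory:

```typescript
// Stand-in for an SDK client config; the real factory caches AWS SDK clients.
interface ClientConfig {
  region: string;
  endpoint?: string;
}

class ClientFactory {
  private static config: ClientConfig = { region: 'us-east-1' };
  private static client: ClientConfig | undefined;

  // One-shot override: drop cached clients, then apply the new config so the
  // next getClient() call builds against it.
  static configure(overrides: Partial<ClientConfig>): void {
    ClientFactory.reset();
    ClientFactory.config = { ...ClientFactory.config, ...overrides };
  }

  // Mirrors the existing reset(): clears cached clients only.
  static reset(): void {
    ClientFactory.client = undefined;
  }

  // Lazily builds (here: copies) the client from the current config.
  static getClient(): ClientConfig {
    if (!ClientFactory.client) {
      ClientFactory.client = { ...ClientFactory.config };
    }

    return ClientFactory.client;
  }
}
```

Calling configure() before getClient() guarantees a test never observes a client built from stale config.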
src/model/cloud-runner/remote-client/index.ts (9)
66-69
: Add spacing before comments to satisfy lines-around-comment and improve readability.
- RemoteClientLogger.log(`Running POST build tasks`);
- // Ensure cache key is present in logs for assertions
+ RemoteClientLogger.log(`Running POST build tasks`);
+
+ // Ensure cache key is present in logs for assertions
71-89
: Prefer try/catch over Promise.catch for fs.promises and add some robustness. Replaces the inline catch with structured handling; satisfies github/no-then and keeps behavior.
- if (fs.existsSync(libraryFolderHost)) { - const libraryEntries = await fs.promises.readdir(libraryFolderHost).catch(() => [] as string[]); - if (libraryEntries.length > 0) { + if (fs.existsSync(libraryFolderHost)) { + let libraryEntries: string[] = []; + try { libraryEntries = await fs.promises.readdir(libraryFolderHost); } catch { libraryEntries = []; } + if (libraryEntries.length > 0) { await Caching.PushToCache( CloudRunnerFolders.ToLinuxFolder(`${CloudRunnerFolders.cacheFolderForCacheKeyFull}/Library`), CloudRunnerFolders.ToLinuxFolder(CloudRunnerFolders.libraryFolderAbsolute), `lib-${CloudRunner.buildParameters.buildGuid}`, ); } else { RemoteClientLogger.log(`Skipping Library cache push (folder is empty)`); } } else { RemoteClientLogger.log(`Skipping Library cache push (folder missing)`); }
92-110
: Same pattern for the build folder; prefer try/catch to .catch().
- if (fs.existsSync(buildFolderHost)) {
-   const buildEntries = await fs.promises.readdir(buildFolderHost).catch(() => [] as string[]);
-   if (buildEntries.length > 0) {
+ if (fs.existsSync(buildFolderHost)) {
+   let buildEntries: string[] = [];
+   try { buildEntries = await fs.promises.readdir(buildFolderHost); } catch { buildEntries = []; }
+   if (buildEntries.length > 0) {
      await Caching.PushToCache(
        CloudRunnerFolders.ToLinuxFolder(`${CloudRunnerFolders.cacheFolderForCacheKeyFull}/build`),
        CloudRunnerFolders.ToLinuxFolder(CloudRunnerFolders.projectBuildFolderAbsolute),
        `build-${CloudRunner.buildParameters.buildGuid}`,
      );
    } else {
      RemoteClientLogger.log(`Skipping Build cache push (folder is empty)`);
    }
  } else {
    RemoteClientLogger.log(`Skipping Build cache push (folder missing)`);
  }
112-121
: Cleanup guard: use rm -rf and quote the path. Safer, and it simplifies the existence checks; still best-effort.
- if (fs.existsSync(CloudRunnerFolders.uniqueCloudRunnerJobFolderAbsolute) || fs.existsSync(uniqueJobFolderLinux)) { - await CloudRunnerSystem.Run(`rm -r ${uniqueJobFolderLinux} || true`); - } else { - RemoteClientLogger.log(`Skipping cleanup; unique job folder missing`); - } + await CloudRunnerSystem.Run(`rm -rf "${uniqueJobFolderLinux}" || true`);
128-131
: Simplify the async return. There is no need to manually construct a Promise in an async function.
- return new Promise((result) => result(``)); + return '';
265-274
: Handle empty branch early; minor robustness.
  try {
    await CloudRunnerSystem.Run(`git checkout ${targetBranch}`);
  } catch (_error) {
    if ((targetBranch || '').startsWith('pull/')) {
      await CloudRunnerSystem.Run(`git checkout origin/${targetBranch}`);
    } else {
-     throw _error;
+     throw new Error(`Failed to checkout branch: ${targetBranch}`);
    }
  }
294-360
: Unify LFS handling: reuse pullLatestLFS in the retained-workspace path and clean up unused catch bindings. Avoids duplicate LFS logic and satisfies the lint rules.
- await CloudRunnerSystem.Run(`git lfs pull`); - await CloudRunnerSystem.Run(`git lfs checkout || true`); + await RemoteClient.pullLatestLFS();
29-61
: Guard logFile usage for the k8s provider. logFile may be undefined; default to RemoteClientLogger.LogFilePath or skip the file write.
- const logFile = Cli.options!['logFile']; + const logFile = Cli.options!['logFile'] || '/home/job-log.txt';
18-27
: Optional: Restore the original Cli.options after CreateParameters to avoid cross-test leakage. If tests mutate Cli.options, reset it in afterEach or a try/finally.
I can add a tiny helper to snapshot/restore Cli.options around parameter creation if desired.
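One possible shape for that helper, sketched against a stubbed Cli-like holder (the real Cli.options lives in src/model/cli/cli.ts; the withCliOptions name is hypothetical):

```typescript
// Stub standing in for the static options holder in src/model/cli/cli.ts.
type Options = Record<string, unknown> | undefined;
const Cli = { options: undefined as Options };

// Snapshot Cli.options, apply overrides for the duration of body, and restore
// afterwards -- even when body throws -- so no state leaks across tests.
function withCliOptions<T>(overrides: Options, body: () => T): T {
  const snapshot = Cli.options === undefined ? undefined : { ...Cli.options };
  Cli.options = overrides;
  try {
    return body();
  } finally {
    Cli.options = snapshot; // restore on both success and failure
  }
}
```

Tests would then wrap BuildParameters.create() in the body instead of assigning Cli.options directly.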
src/model/cloud-runner/providers/aws/services/task-service.ts (1)
152-164
: Minor: rename client var and tighten types; reuse fixed age math.
- Variable ecs actually holds CloudWatchLogs; rename to logsClient for clarity.
- Return LogGroup[] instead of any[].
- Use the corrected D/H/M computation.
Apply:
- public static async getLogGroups() {
-   const result: any[] = [];
+ public static async getLogGroups() {
+   const result: LogGroup[] = [];
  ...
- const ecs = AwsClientFactory.getCloudWatchLogs();
+ const logsClient = AwsClientFactory.getCloudWatchLogs();
  ...
- let logGroupsDescribe = await ecs.send(new DescribeLogGroupsCommand(logStreamInput));
+ let logGroupsDescribe = await logsClient.send(new DescribeLogGroupsCommand(logStreamInput));
  ...
- logGroupsDescribe = await ecs.send(new DescribeLogGroupsCommand(logStreamInput));
+ logGroupsDescribe = await logsClient.send(new DescribeLogGroupsCommand(logStreamInput));

Also applies to: 166-180
src/model/cloud-runner/services/core/shared-workspace-locking.ts (2)
49-69
: Create-bucket race/region handling is good; add a minor guard for the rclone config. Before making rclone calls, fail fast if rcloneRemote is empty.
Example:
if (SharedWorkspaceLocking.useRclone) { + if (!SharedWorkspaceLocking.bucket) { + throw new Error('rcloneRemote is required when storageProvider=rclone'); + } try { await SharedWorkspaceLocking.rclone('lsf', bucket); } catch { await SharedWorkspaceLocking.rclone('mkdir', bucket); } return; }
24-24
: Style: add the expected blank lines to satisfy the ESLint padding rule. Minor formatting per the lints; no behavioral change.
Also applies to: 31-31, 57-57, 80-80, 95-95, 404-404
src/model/cloud-runner/services/hooks/container-hook-service.ts (4)
185-237
: rclone uploads also delete the local cache unconditionally. Mirror the cp->rm gating to avoid cache loss.
- rclone copy /data/cache/$CACHE_KEY/build/build-${CloudRunner.buildParameters.buildGuid}.tar${ + rclone copy /data/cache/$CACHE_KEY/build/build-${CloudRunner.buildParameters.buildGuid}.tar${ CloudRunner.buildParameters.useCompressionStrategy ? '.lz4' : '' - } ${CloudRunner.buildParameters.rcloneRemote}/cloud-runner-cache/$CACHE_KEY/build/ || true - rm /data/cache/$CACHE_KEY/build/build-${CloudRunner.buildParameters.buildGuid}.tar${ + } ${CloudRunner.buildParameters.rcloneRemote}/cloud-runner-cache/$CACHE_KEY/build/ && rm /data/cache/$CACHE_KEY/build/build-${CloudRunner.buildParameters.buildGuid}.tar${ CloudRunner.buildParameters.useCompressionStrategy ? '.lz4' : '' - } || true + } || true(Apply same pattern to cache dirs.)
58-66
: Secret naming is inconsistent across hooks. It mixes awsAccessKeyId/awsDefaultRegion with AWS_ACCESS_KEY_ID/AWS_DEFAULT_REGION. Standardize on uppercase AWS_* to reduce mapping confusion.
I can push a follow-up aligning secret names and values if desired.
Also applies to: 151-159, 266-274
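A follow-up could route every secret name through a single normalizer before it is handed to the hooks; a sketch of that mapping (the function name and the covered name set are illustrative):

```typescript
// Normalize mixed-case secret names (awsAccessKeyId, awsDefaultRegion) to the
// canonical uppercase AWS_* convention so every hook agrees on one spelling.
function normalizeAwsSecretName(name: string): string {
  // Already canonical: leave untouched.
  if (/^AWS_[A-Z0-9_]+$/.test(name)) return name;

  // camelCase -> SNAKE_CASE: split before capitals, then uppercase everything.
  return name.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toUpperCase();
}
```

Applying this at the point where hook secrets are assembled would make the two naming styles converge without touching each hook definition.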
277-291
: AWS hook gating may miss role-based creds on k8s. IRSA and other providers expose credentials without AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY env vars. Consider also checking AWS_WEB_IDENTITY_TOKEN_FILE, or falling back to including the hooks when an endpoint is set.
Proposed tweak:
- const hasAwsCreds = - (process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY) || - (process.env.awsAccessKeyId && process.env.awsSecretAccessKey); + const hasAwsCreds = + (process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY) || + process.env.AWS_WEB_IDENTITY_TOKEN_FILE || + process.env.AWS_ROLE_ARN;
263-266
: Use noninteractive apt in debug-cache to avoid prompts in CI.
- apt-get update > /dev/null || true
- ${CloudRunnerOptions.cloudRunnerDebug ? `apt-get install -y tree > /dev/null || true` : `#`}
+ DEBIAN_FRONTEND=noninteractive apt-get update > /dev/null || true
+ ${CloudRunnerOptions.cloudRunnerDebug ? `DEBIAN_FRONTEND=noninteractive apt-get install -y tree > /dev/null || true` : `#`}
src/model/cloud-runner/providers/aws/aws-task-runner.ts (4)
63-64
: Type the RunTaskCommand input instead of casting to any. Use SDK types to keep compile-time safety.
- const task = await AwsClientFactory.getECS().send(new RunTaskCommand(runParameters as any)); + const task = await AwsClientFactory.getECS().send(new RunTaskCommand(runParameters));(Ensure runParameters matches RunTaskCommandInput.)
201-216
: Handle ExpiredIteratorException for long-running streams. When the shard iterator expires, GetRecords throws; recover by reacquiring an iterator.
Minimal local fix (retry once):
  try {
    records = await AwsClientFactory.getKinesis().send(new GetRecordsCommand({ ShardIterator: iterator }));
  } catch (error: any) {
+   if (error?.name === 'ExpiredIteratorException') {
+     return { iterator: '', shouldReadLogs, output, shouldCleanup }; // caller will reacquire via getLogIterator
+   }
    const isThrottle = error?.name === 'ThrottlingException' || /rate exceeded/i.test(String(error?.message));
    if (isThrottle) {

And in the caller, reacquire when iterator === ''.
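The caller-side reacquire loop could be sketched like this, with injected stand-ins for GetShardIteratorCommand/GetRecordsCommand so the control flow is visible without the AWS SDK (all names here are illustrative):

```typescript
// Reader loop that treats an empty iterator as "expired": it reacquires a
// fresh iterator via getIterator() instead of aborting the stream.
type ReadResult = { iterator: string; records: string[] };

async function readAllRecords(
  getIterator: () => Promise<string>,
  readBatch: (iterator: string) => Promise<ReadResult>,
  maxBatches = 10,
): Promise<string[]> {
  const output: string[] = [];
  let iterator = await getIterator();
  for (let batch = 0; batch < maxBatches && iterator; batch++) {
    const result = await readBatch(iterator);
    output.push(...result.records);

    // '' signals an expired iterator: reacquire instead of failing.
    iterator = result.iterator || (await getIterator());
  }

  return output;
}
```

The maxBatches bound keeps the loop from spinning forever when the stream keeps returning empty iterators.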
228-246
: Typo in the log message: “unknwon” → “unknown”.
- CloudRunnerLogger.log('## Cloud runner job unknwon'); + CloudRunnerLogger.log('## Cloud runner job unknown');
195-226
: ESLint: add a blank line per padding-line-between-statements. Satisfies the reported lint (“Expected blank line before this statement”).
- return { iterator, shouldReadLogs, output, shouldCleanup }; + + return { iterator, shouldReadLogs, output, shouldCleanup };src/model/cloud-runner/workflows/build-automation-workflow.ts (4)
90-107
: Remove unnecessary escapes to fix no-useless-escape. Backslashes before double quotes aren’t needed inside template literals.
-if [ -n "$(git ls-remote --heads \"$REPO\" \"$BRANCH\" 2>/dev/null)" ]; then +if [ -n "$(git ls-remote --heads "$REPO" "$BRANCH" 2>/dev/null)" ]; then
65-73
: Use noninteractive apt in the toolchain setup. Prevents interactive prompts; minor hardening.
- ? 'apt-get install -y curl tar tree npm git-lfs jq git > /dev/null || true\n npm --version || true\n npm i -g n > /dev/null || true\n npm i -g semver > /dev/null || true\n npm install --global yarn > /dev/null || true\n n 20.8.0 || true\n node --version || true' + ? 'DEBIAN_FRONTEND=noninteractive apt-get install -y curl tar tree npm git-lfs jq git > /dev/null || true\n npm --version || true\n npm i -g n > /dev/null || true\n npm i -g semver > /dev/null || true\n npm install --global yarn > /dev/null || true\n n 20.8.0 || true\n node --version || true'
141-186
: Local-docker: gate rm on successful tar creation, plus rclone parity. You create empty tar files to ensure presence, which is fine. Consider the same “rm on success” pattern as elsewhere if you ever add remote pushes here.
Happy to send a follow-up if you choose to add remote pushes in this branch.
187-197
: ESLint padding-line-between-statements: add a blank line before the pipeline comment/statement to satisfy lint.
- { echo "game ci start"; echo "game ci start" >> /home/job-log.txt; echo "CACHE_KEY=$CACHE_KEY"; echo "$CACHE_KEY"; if [ -n "$LOCKED_WORKSPACE" ]; then echo "Retained Workspace: true"; fi; if [ -n "$LOCKED_WORKSPACE" ] && [ -d "$GITHUB_WORKSPACE/.git" ]; then echo "Retained Workspace Already Exists!"; fi; /entrypoint.sh; } | node ${builderPath} -m remote-cli-log-stream --logFile /home/job-log.txt + + { echo "game ci start"; echo "game ci start" >> /home/job-log.txt; echo "CACHE_KEY=$CACHE_KEY"; echo "$CACHE_KEY"; if [ -n "$LOCKED_WORKSPACE" ]; then echo "Retained Workspace: true"; fi; if [ -n "$LOCKED_WORKSPACE" ] && [ -d "$GITHUB_WORKSPACE/.git" ]; then echo "Retained Workspace Already Exists!"; fi; /entrypoint.sh; } | node ${builderPath} -m remote-cli-log-stream --logFile /home/job-log.txt
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (4)
dist/index.js is excluded by !**/dist/**
dist/index.js.map is excluded by !**/dist/**, !**/*.map
dist/licenses.txt is excluded by !**/dist/**
yarn.lock is excluded by !**/yarn.lock, !**/*.lock
📒 Files selected for processing (38)
.eslintrc.json (1 hunks)
.github/workflows/cloud-runner-ci-pipeline.yml (0 hunks)
.github/workflows/cloud-runner-integrity-localstack.yml (1 hunks)
.github/workflows/cloud-runner-integrity.yml (1 hunks)
.github/workflows/integrity-check.yml (1 hunks)
README.md (1 hunks)
jest.ci.config.js (1 hunks)
jest.config.js (1 hunks)
jest.setup.js (1 hunks)
package.json (3 hunks)
src/model/build-parameters.ts (2 hunks)
src/model/cloud-runner/cloud-runner.ts (3 hunks)
src/model/cloud-runner/options/cloud-runner-options.ts (1 hunks)
src/model/cloud-runner/providers/aws/aws-client-factory.ts (1 hunks)
src/model/cloud-runner/providers/aws/aws-task-runner.ts (10 hunks)
src/model/cloud-runner/providers/aws/index.ts (3 hunks)
src/model/cloud-runner/providers/aws/services/garbage-collection-service.ts (3 hunks)
src/model/cloud-runner/providers/aws/services/task-service.ts (6 hunks)
src/model/cloud-runner/providers/k8s/kubernetes-job-spec-factory.ts (2 hunks)
src/model/cloud-runner/providers/local/index.ts (1 hunks)
src/model/cloud-runner/remote-client/index.ts (3 hunks)
src/model/cloud-runner/remote-client/remote-client-logger.ts (2 hunks)
src/model/cloud-runner/services/core/follow-log-stream-service.ts (1 hunks)
src/model/cloud-runner/services/core/shared-workspace-locking.ts (8 hunks)
src/model/cloud-runner/services/core/task-parameter-serializer.ts (2 hunks)
src/model/cloud-runner/services/hooks/container-hook-service.ts (2 hunks)
src/model/cloud-runner/tests/cloud-runner-environment.test.ts (1 hunks)
src/model/cloud-runner/tests/cloud-runner-hooks.test.ts (2 hunks)
src/model/cloud-runner/tests/cloud-runner-rclone-steps.test.ts (1 hunks)
src/model/cloud-runner/tests/cloud-runner-s3-steps.test.ts (1 hunks)
src/model/cloud-runner/tests/e2e/cloud-runner-end2end-caching.test.ts (3 hunks)
src/model/cloud-runner/tests/e2e/cloud-runner-end2end-retaining.test.ts (3 hunks)
src/model/cloud-runner/tests/e2e/cloud-runner-kubernetes.test.ts (3 hunks)
src/model/cloud-runner/workflows/async-workflow.ts (1 hunks)
src/model/cloud-runner/workflows/build-automation-workflow.ts (1 hunks)
src/model/github.ts (4 hunks)
src/model/image-environment-factory.ts (2 hunks)
src/model/versioning.test.ts (1 hunks)
💤 Files with no reviewable changes (1)
- .github/workflows/cloud-runner-ci-pipeline.yml
🧰 Additional context used
🧬 Code graph analysis (16)
src/model/cloud-runner/providers/local/index.ts (1)
  src/model/cloud-runner/services/core/cloud-runner-system.ts (1): CloudRunnerSystem (4-69)
src/model/versioning.test.ts (1)
  src/model/action.test.ts (3): process (7-17), it (6-18), describe (5-33)
src/model/cloud-runner/workflows/async-workflow.ts (1)
  src/model/cloud-runner/options/cloud-runner-folders.ts (1): CloudRunnerFolders (6-90)
src/model/cloud-runner/providers/aws/index.ts (1)
  src/model/cloud-runner/providers/aws/aws-client-factory.ts (1): AwsClientFactory (9-71)
src/model/cloud-runner/tests/cloud-runner-environment.test.ts (1)
  src/index.ts (1): runMain (7-51)
src/model/cloud-runner/tests/cloud-runner-rclone-steps.test.ts (4)
  src/model/cloud-runner/tests/create-test-parameter.ts (1): CreateParameters (4-8)
  src/model/cli/cli.ts (1): Cli (16-175)
  src/model/unity-versioning.ts (1): UnityVersioning (4-32)
  src/model/cloud-runner/services/core/cloud-runner-system.ts (1): CloudRunnerSystem (4-69)
src/model/cloud-runner/cloud-runner.ts (1)
  src/index.ts (1): runMain (7-51)
src/model/cloud-runner/tests/e2e/cloud-runner-kubernetes.test.ts (1)
  src/index.ts (1): runMain (7-51)
src/model/cloud-runner/providers/aws/aws-task-runner.ts (2)
  src/model/cloud-runner/providers/aws/aws-client-factory.ts (1): AwsClientFactory (9-71)
  src/model/cloud-runner/services/core/cloud-runner-logger.ts (1): error (24-26)
src/model/cloud-runner/remote-client/index.ts (4)
  src/model/cloud-runner/remote-client/remote-client-logger.ts (1): RemoteClientLogger (7-93)
  src/model/cloud-runner/options/cloud-runner-folders.ts (1): CloudRunnerFolders (6-90)
  src/model/cloud-runner/remote-client/caching.ts (1): Caching (15-181)
  src/model/cloud-runner/services/core/cloud-runner-system.ts (1): CloudRunnerSystem (4-69)
src/model/cloud-runner/providers/aws/aws-client-factory.ts (1)
  src/model/index.ts (1): Input (19-19)
src/model/cloud-runner/services/core/shared-workspace-locking.ts (3)
  src/model/cloud-runner/providers/aws/aws-client-factory.ts (1): AwsClientFactory (9-71)
  src/model/cloud-runner/options/cloud-runner-options.ts (1): region (53-55)
  src/model/input.ts (1): region (48-50)
src/model/cloud-runner/tests/cloud-runner-s3-steps.test.ts (2)
  src/model/unity-versioning.ts (1): UnityVersioning (4-32)
  src/model/cloud-runner/services/core/cloud-runner-system.ts (1): CloudRunnerSystem (4-69)
src/model/cloud-runner/providers/aws/services/garbage-collection-service.ts (1)
  src/model/cloud-runner/providers/aws/aws-client-factory.ts (1): AwsClientFactory (9-71)
src/model/cloud-runner/workflows/build-automation-workflow.ts (3)
  src/model/cloud-runner/options/cloud-runner-folders.ts (1): CloudRunnerFolders (6-90)
  src/index.ts (1): runMain (7-51)
  src/model/docker.ts (1): Docker (7-129)
src/model/cloud-runner/providers/aws/services/task-service.ts (2)
  src/model/cloud-runner/providers/aws/aws-client-factory.ts (1): AwsClientFactory (9-71)
  src/model/cloud-runner/services/core/shared-workspace-locking.ts (2): SharedWorkspaceLocking (17-406), s3 (19-25)
🪛 ESLint
jest.ci.config.js
[error] 1-12: Filename 'jest.ci.config.js' does not match the naming convention.
(filenames/match-regex)
[error] 1-1: Expected "import" instead of "require()"
(import/no-commonjs)
[error] 1-1: Unexpected use of file extension "js" for "./jest.config.js"
(import/extensions)
[error] 3-3: Expected "export" or "export default"
(import/no-commonjs)
src/model/cloud-runner/providers/k8s/kubernetes-job-spec-factory.ts
[error] 25-25: The variable endpointEnvNames
should be named endpointEnvironmentNames
. A more descriptive name will do too.
(unicorn/prevent-abbreviations)
[error] 46-46: Expected blank line before this statement.
(padding-line-between-statements)
src/model/cloud-runner/providers/local/index.ts
[error] 78-78: Expected blank line before this statement.
(padding-line-between-statements)
src/model/cloud-runner/workflows/async-workflow.ts
[error] 32-32: Unnecessary escape character: ".
(no-useless-escape)
[error] 32-32: Unnecessary escape character: ".
(no-useless-escape)
[error] 32-32: Unnecessary escape character: ".
(no-useless-escape)
[error] 32-32: Unnecessary escape character: ".
(no-useless-escape)
src/model/cloud-runner/tests/cloud-runner-rclone-steps.test.ts
[error] 69-69: The variable buildParams
should be named buildParameters
. A more descriptive name will do too.
(unicorn/prevent-abbreviations)
[error] 79-79: Empty block statement.
(no-empty)
src/model/cloud-runner/cloud-runner.ts
[error] 68-68: Expected line before comment.
(lines-around-comment)
src/model/cloud-runner/services/core/task-parameter-serializer.ts
[error] 36-36: Expected line before comment.
(lines-around-comment)
[error] 97-97: The variable awsEnvVars
should be named awsEnvironmentVariables
. A more descriptive name will do too.
(unicorn/prevent-abbreviations)
src/model/cloud-runner/providers/aws/aws-task-runner.ts
[error] 212-212: Expected blank line before this statement.
(padding-line-between-statements)
src/model/cloud-runner/remote-client/index.ts
[error] 66-66: Expected line before comment.
(lines-around-comment)
[error] 74-74: Prefer async/await to Promise.catch()
(github/no-then)
[error] 95-95: Prefer async/await to Promise.catch()
(github/no-then)
[error] 237-237: Expected line before comment.
(lines-around-comment)
[error] 247-247: Remove unused catch binding _error
.
(unicorn/prefer-optional-catch-binding)
[error] 251-251: The catch parameter _error2
should be named error
.
(unicorn/catch-error-name)
[error] 255-255: The catch parameter _error3
should be named error
.
(unicorn/catch-error-name)
[error] 255-255: Remove unused catch binding _error3
.
(unicorn/prefer-optional-catch-binding)
[error] 311-311: Remove unused catch binding _error
.
(unicorn/prefer-optional-catch-binding)
[error] 386-386: Remove unused catch binding _error
.
(unicorn/prefer-optional-catch-binding)
[error] 390-390: The catch parameter _error2
should be named error
.
(unicorn/catch-error-name)
src/model/cloud-runner/services/core/shared-workspace-locking.ts
[error] 15-15: The variable execCb
should be named execCallback
. A more descriptive name will do too.
(unicorn/prevent-abbreviations)
[error] 24-24: Expected blank line before this statement.
(padding-line-between-statements)
[error] 31-31: Expected blank line before this statement.
(padding-line-between-statements)
[error] 57-57: Expected blank line before this statement.
(padding-line-between-statements)
[error] 63-63: The variable createParams
should be named createParameters
. A more descriptive name will do too.
(unicorn/prevent-abbreviations)
[error] 80-80: Expected blank line before this statement.
(padding-line-between-statements)
[error] 80-80: Please rename the variable e
. Suggested names are: error
, event
. A more descriptive name will do too.
(unicorn/prevent-abbreviations)
[error] 95-95: Expected blank line before this statement.
(padding-line-between-statements)
[error] 299-305: This if
statement can be replaced by a ternary expression.
(unicorn/prefer-ternary)
[error] 331-337: This if
statement can be replaced by a ternary expression.
(unicorn/prefer-ternary)
[error] 344-350: This if
statement can be replaced by a ternary expression.
(unicorn/prefer-ternary)
[error] 368-379: This if
statement can be replaced by a ternary expression.
(unicorn/prefer-ternary)
[error] 389-395: This if
statement can be replaced by a ternary expression.
(unicorn/prefer-ternary)
[error] 404-404: Expected blank line before this statement.
(padding-line-between-statements)
src/model/cloud-runner/workflows/build-automation-workflow.ts
[error] 98-98: Unnecessary escape character: ".
(no-useless-escape)
[error] 98-98: Unnecessary escape character: ".
(no-useless-escape)
[error] 98-98: Unnecessary escape character: ".
(no-useless-escape)
[error] 98-98: Unnecessary escape character: ".
(no-useless-escape)
[error] 187-187: Expected line before comment.
(lines-around-comment)
[error] 188-197: Expected blank line before this statement.
(padding-line-between-statements)
src/model/cloud-runner/providers/aws/services/task-service.ts
[error] 188-188: Expected blank line before this statement.
(padding-line-between-statements)
🪛 actionlint (1.7.7)
.github/workflows/cloud-runner-integrity.yml
143-143: the runner of "aws-actions/configure-aws-credentials@v1" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 markdownlint-cli2 (0.17.2)
README.md
29-29: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: Android on 2022.3.13f1
- GitHub Check: Android on 2023.2.2f1
- GitHub Check: StandaloneLinux64 on 2022.3.13f1
- GitHub Check: StandaloneOSX on 2021.3.32f1
- GitHub Check: WebGL on 2021.3.32f1
- GitHub Check: StandaloneWindows64 on 2022.3.13f1
- GitHub Check: Android on 2021.3.32f1
- GitHub Check: StandaloneLinux64 on 2021.3.32f1
- GitHub Check: iOS on 2021.3.32f1
- GitHub Check: StandaloneOSX on 2022.3.13f1
- GitHub Check: StandaloneLinux64 on 2021.3.32f1
- GitHub Check: StandaloneWindows64 on 2021.3.32f1
- GitHub Check: StandaloneWindows64 on 2021.3.32f1
- GitHub Check: StandaloneLinux64 on 2021.3.32f1
- GitHub Check: iOS on 2022.3.13f1
- GitHub Check: StandaloneOSX on 2022.3.13f1
- GitHub Check: iOS on 2023.2.2f1
- GitHub Check: StandaloneOSX on 2021.3.45f1
- GitHub Check: iOS on 2021.3.45f1
},
"overrides": [
  {
    "files": ["jest.setup.js"],
    "rules": {
      "import/no-commonjs": "off"
    }
  }
]
🛠️ Refactor suggestion
Extend override to cover Jest config files and fix CI lint errors.
CI reports import/no-commonjs and filename rule violations in jest.ci.config.js. Add those files to the override and disable the filename rule for them.
Apply this diff:
"overrides": [
- {
- "files": ["jest.setup.js"],
- "rules": {
- "import/no-commonjs": "off"
- }
- }
+ {
+ "files": ["jest.setup.js", "jest.ci.config.js", "jest.config.js"],
+ "rules": {
+ "import/no-commonjs": "off",
+ "unicorn/filename-case": "off",
+ "filenames/match-regex": "off"
+ }
+ }
]
🤖 Prompt for AI Agents
.eslintrc.json around lines 80 to 88: the override currently targets only
"jest.setup.js" causing CI to still flag import/no-commonjs and filename rule
violations in jest.ci.config.js; update the override's "files" array to include
"jest.ci.config.js" (or a glob like "jest*.config.js") and in the "rules" object
disable both "import/no-commonjs": "off" and the filename rule (e.g.,
"unicorn/filename-case": "off") so those Jest config files are exempt from those
lint checks.
permissions:
  checks: write
  contents: read
  actions: write
  packages: read
  pull-requests: write
  statuses: write
  id-token: write
🛠️ Refactor suggestion
Restrict permissions to least-privilege.
Mirror the tightened set used in the other workflow.
permissions:
- checks: write
- contents: read
- actions: write
- packages: read
- pull-requests: write
- statuses: write
- id-token: write
+ contents: read
+ checks: write
+ statuses: write
📝 Committable suggestion:
permissions:
  contents: read
  checks: write
  statuses: write
🤖 Prompt for AI Agents
In .github/workflows/cloud-runner-integrity-localstack.yml around lines 12–20
the permissions block is too permissive; replace this block with the tightened,
least-privilege permissions used by the other workflow (i.e., adopt the exact
permissions map from that workflow), removing any unnecessary write privileges,
ensure only the minimal scopes required remain, and run the workflow lint/CI to
verify there are no permission-dependent failures.
Discussion: Since this is localstack, might be less necessary to reduce this but on the other side, it mentions mirroring the other workflow which is probably a good idea.
🧩 Analysis chain
I'll help you analyze the permission requirements for this LocalStack workflow and compare it with other workflows to determine the best approach.
@GabLeRoux You raise a good point about the LocalStack context, but I'd recommend adopting the restrictive permissions for several reasons:
- Security best practice: Even with LocalStack (local AWS emulator), following the principle of least privilege is important for consistency and security hygiene across all workflows.
- Consistency: Having uniform permission patterns across similar workflows makes maintenance easier and reduces the chance of accidentally granting excessive permissions in the future.
- Functionality: Looking at this LocalStack workflow, it appears to primarily run tests and doesn't seem to need the broader permissions like actions: write, packages: read, pull-requests: write, or id-token: write.
The workflow essentially:
- Checks out code (contents: read ✓)
- Runs tests against LocalStack
- Reports test results (checks: write and statuses: write ✓)
The suggested minimal permissions (contents: read, checks: write, statuses: write) should be sufficient for these operations. If any functionality breaks with the reduced permissions, you can always add back specific permissions as needed with proper justification.
I'd go with the CodeRabbit suggestion - better to start restrictive and add permissions when proven necessary than to grant excessive permissions "just in case."
🏁 Script executed:
#!/bin/bash
# Let's examine the permissions in other workflow files to understand the pattern
echo "=== Checking permissions in other integrity workflows ==="
find .github/workflows -name "*integrity*" -type f -exec echo "--- {} ---" \; -exec grep -A 10 "permissions:" {} \;
echo -e "\n=== Checking what operations this LocalStack workflow performs ==="
cat .github/workflows/cloud-runner-integrity-localstack.yml | grep -A 20 -B 5 "steps:"
echo -e "\n=== Looking for any GitHub API calls or operations that might need specific permissions ==="
rg -A 3 -B 3 "github\.|GITHUB_TOKEN|checks|statuses" .github/workflows/cloud-runner-integrity-localstack.yml
Length of output: 2216
Perfect! The script output confirms that the restrictive permissions are absolutely the right approach. Here's what I found:
Analysis of Current vs. Required Permissions
Current permissions (excessive):
actions: write, packages: read, pull-requests: write, id-token: write - not used anywhere
Actually required permissions:
- ✅ contents: read - used by actions/checkout@v4
- ✅ checks: write - needed for test result reporting
- ✅ statuses: write - needed for commit status updates
Key Findings
- Both integrity workflows are identical - They have the same excessive permissions, so this is a good opportunity to fix both
- No GitHub API interactions - The workflow only runs tests against LocalStack
- GITHUB_TOKEN usage - It's only passed as an environment variable to the tests, not used by the workflow itself
- Pure testing workflow - Just checkout → setup → test, no complex GitHub operations
Recommendation
Definitely adopt the restrictive permissions. The workflow performs no operations that require the additional permissions. The current permissions appear to be copied from a template rather than tailored to actual needs.
You should apply this same permission reduction to .github/workflows/cloud-runner-integrity.yml
as well, since it has identical excessive permissions for what appears to be similar functionality.
This is a textbook example of why least-privilege permissions matter - even in LocalStack environments, good security practices should be maintained.
env:
  UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
  UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
  UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
  PROJECT_PATH: test-project
  TARGET_PLATFORM: StandaloneWindows64
  cloudRunnerTests: true
  versioning: None
  KUBE_STORAGE_CLASS: local-path
  PROVIDER_STRATEGY: aws
  AWS_ACCESS_KEY_ID: test
  AWS_SECRET_ACCESS_KEY: test
  GIT_PRIVATE_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
  GITHUB_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
🛠️ Refactor suggestion
Clean up test env and avoid overriding GITHUB_TOKEN.
KUBE_STORAGE_CLASS isn’t relevant here; add missing S3/IMDS flags for consistency.
versioning: None
- KUBE_STORAGE_CLASS: local-path
PROVIDER_STRATEGY: aws
AWS_ACCESS_KEY_ID: test
AWS_SECRET_ACCESS_KEY: test
- GIT_PRIVATE_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
- GITHUB_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
+ AWS_S3_ENDPOINT: http://localhost:4566
+ AWS_S3_FORCE_PATH_STYLE: 'true'
+ AWS_EC2_METADATA_DISABLED: 'true'
+ GITHUB_TOKEN: ${{ github.token }}
📝 Committable suggestion:
env:
  UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
  UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
  UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
  PROJECT_PATH: test-project
  TARGET_PLATFORM: StandaloneWindows64
  cloudRunnerTests: true
  versioning: None
  PROVIDER_STRATEGY: aws
  AWS_ACCESS_KEY_ID: test
  AWS_SECRET_ACCESS_KEY: test
  AWS_S3_ENDPOINT: http://localhost:4566
  AWS_S3_FORCE_PATH_STYLE: 'true'
  AWS_EC2_METADATA_DISABLED: 'true'
  GITHUB_TOKEN: ${{ github.token }}
Is it the case? If so, looks like a good suggestion.
🧩 Analysis chain
@GabLeRoux Yes, the suggestion is correct! Let me break down why each change makes sense:
✅ Remove KUBE_STORAGE_CLASS: local-path
- This is Kubernetes-specific and irrelevant for LocalStack (AWS emulator) testing
- Only needed when PROVIDER_STRATEGY: k8s
✅ Add AWS_S3_ENDPOINT: http://localhost:4566
- Points S3 operations to LocalStack instead of real AWS
- Consistent with other AWS endpoints already set in the workflow's env section
✅ Add AWS_S3_FORCE_PATH_STYLE: 'true'
- Required for LocalStack S3 compatibility (uses path-style URLs instead of virtual-hosted-style)
- Standard practice for local S3 emulation
✅ Add AWS_EC2_METADATA_DISABLED: 'true'
- Prevents AWS SDK from attempting to contact the real EC2 metadata service
- Essential for LocalStack testing to avoid timeouts/errors
✅ Fix GITHUB_TOKEN: ${{ github.token }}
- Uses GitHub's built-in token instead of a custom secret
- More secure and standard practice for workflow operations
The suggestion improves LocalStack test reliability and follows AWS local development best practices.
🏁 Script executed:
#!/bin/bash
# Check usage of KUBE_STORAGE_CLASS in the codebase to confirm it's K8s-specific
echo "=== Checking KUBE_STORAGE_CLASS usage ==="
rg -n "KUBE_STORAGE_CLASS" --type=ts --type=js --type=yml --type=yaml -A2 -B2
echo -e "\n=== Checking AWS S3 endpoint usage patterns ==="
rg -n "AWS_S3_ENDPOINT" --type=ts --type=js --type=yml --type=yaml -A2 -B2
echo -e "\n=== Checking AWS metadata disable patterns ==="
rg -n "AWS_EC2_METADATA_DISABLED" --type=ts --type=js --type=yml --type=yaml -A2 -B2
echo -e "\n=== Checking GITHUB_TOKEN vs github.token usage patterns in workflows ==="
rg -n "GITHUB_TOKEN.*github\.token" --type=yml --type=yaml -A1 -B1
Length of output: 325
🏁 Script executed:
#!/bin/bash
# Check usage of KUBE_STORAGE_CLASS in the codebase (without yml type)
echo "=== Checking KUBE_STORAGE_CLASS usage ==="
rg -n "KUBE_STORAGE_CLASS" --type=ts --type=js -A2 -B2
fd -e yml -e yaml . | head -5 | xargs rg -n "KUBE_STORAGE_CLASS" -A2 -B2 2>/dev/null || echo "No KUBE_STORAGE_CLASS found in YAML files"
echo -e "\n=== Checking AWS S3 endpoint usage patterns ==="
rg -n "AWS_S3_ENDPOINT" --type=ts --type=js -A2 -B2
fd -e yml -e yaml . | head -10 | xargs rg -n "AWS_S3_ENDPOINT" -A2 -B2 2>/dev/null || echo "No AWS_S3_ENDPOINT found in YAML files"
echo -e "\n=== Checking AWS metadata disable patterns ==="
rg -n "AWS_EC2_METADATA_DISABLED" --type=ts --type=js -A2 -B2
fd -e yml -e yaml . | head -10 | xargs rg -n "AWS_EC2_METADATA_DISABLED" -A2 -B2 2>/dev/null || echo "No AWS_EC2_METADATA_DISABLED found in YAML files"
echo -e "\n=== Checking GITHUB_TOKEN patterns in workflows ==="
fd -e yml -e yaml .github/workflows | xargs rg -n "GITHUB_TOKEN" -A1 -B1 2>/dev/null || echo "No GITHUB_TOKEN found in workflows"
Length of output: 294599
@GabLeRoux Yes—this suggestion is correct for the LocalStack job.
Grounding from the repo:
- AWS_S3_ENDPOINT is consumed by hooks and env propagation (see: container-hook-service.ts, task-parameter-serializer.ts, cloud-runner.ts, k8s job-spec factory).
- KUBE_STORAGE_CLASS isn’t used in this LocalStack path and is k8s-specific.
- Using ${{ github.token }} for GITHUB_TOKEN is standard.
- AWS_EC2_METADATA_DISABLED avoids IMDS calls under LocalStack.
Here’s the step env block to apply in .github/workflows/cloud-runner-integrity-localstack.yml:
env:
UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
PROJECT_PATH: test-project
TARGET_PLATFORM: StandaloneWindows64
cloudRunnerTests: true
versioning: None
PROVIDER_STRATEGY: aws
AWS_ACCESS_KEY_ID: test
AWS_SECRET_ACCESS_KEY: test
AWS_S3_ENDPOINT: http://localhost:4566
AWS_S3_FORCE_PATH_STYLE: 'true'
AWS_EC2_METADATA_DISABLED: 'true'
GITHUB_TOKEN: ${{ github.token }}
If you’d like, I can also submit a follow-up patch to keep KUBE_STORAGE_CLASS only in the k8s job.
permissions:
  checks: write
  contents: read
  actions: write
  packages: read
  pull-requests: write
  statuses: write
  id-token: write
🛠️ Refactor suggestion
Tighten permissions to least-privilege.
It’s unlikely this workflow needs actions: write or pull-requests: write. Keep checks/statuses; drop extras.
permissions:
- checks: write
- contents: read
- actions: write
- packages: read
- pull-requests: write
- statuses: write
- id-token: write
+ contents: read
+ checks: write
+ statuses: write
+ id-token: write
If something truly needs broader perms, scope it at job-level rather than globally.
📝 Committable suggestion:
permissions:
  contents: read
  checks: write
  statuses: write
  id-token: write
🤖 Prompt for AI Agents
In .github/workflows/cloud-runner-integrity.yml around lines 12 to 20, the
workflow declares overly broad repository permissions (actions: write and
pull-requests: write) — tighten to least-privilege by removing actions: write
and pull-requests: write from the global permissions block, retain checks: write
and statuses: write as required, set contents/packages to read only (or remove
if not needed), and if any job requires elevated access move a narrower
permission block into that job instead of granting it globally.
Since this is the publicly running one, could be a good idea to reduce permissions in here. I think this is a good suggestion, we should probably try this and see if workflows are still succeeding.
@GabLeRoux, absolutely agree! Following least-privilege principles is a security best practice, especially for publicly accessible workflows. The suggested permission reduction makes sense - most workflows don't actually need actions: write or pull-requests: write at the global level.
Here's the recommended change for .github/workflows/cloud-runner-integrity.yml:
permissions:
contents: read
checks: write
statuses: write
id-token: write
This removes the potentially unnecessary actions: write and pull-requests: write permissions while keeping the essential ones for the workflow to function (writing checks/statuses and reading contents). If any specific job needs elevated permissions later, those can be scoped at the job level rather than globally.
Testing this change is the right approach - if the workflow fails due to insufficient permissions, you'll get clear error messages indicating exactly what permissions are needed, and you can then add only those specific permissions back.
- name: Install k3d
  if: ${{ matrix.provider == 'k8s' }}
  run: |
    curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
    k3d version | cat
- name: Create k3s cluster (k3d)
🛠️ Refactor suggestion
Pin k3d install and avoid curl | bash.
Reduces supply-chain risk and improves reproducibility.
- - name: Install k3d
- if: ${{ matrix.provider == 'k8s' }}
- run: |
- curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
- k3d version | cat
+ - name: Install k3d (pinned)
+ if: ${{ matrix.provider == 'k8s' }}
+ run: |
+ set -euo pipefail
+ K3D_VERSION=v5.7.3
+ curl -fsSL -o /usr/local/bin/k3d "https://github.com/k3d-io/k3d/releases/download/${K3D_VERSION}/k3d-linux-amd64"
+ chmod +x /usr/local/bin/k3d
+ k3d version | cat
📝 Committable suggestion:
- name: Install k3d (pinned)
  if: ${{ matrix.provider == 'k8s' }}
  run: |
    set -euo pipefail
    K3D_VERSION=v5.7.3
    curl -fsSL -o /usr/local/bin/k3d "https://github.com/k3d-io/k3d/releases/download/${K3D_VERSION}/k3d-linux-amd64"
    chmod +x /usr/local/bin/k3d
    k3d version | cat
- name: Create k3s cluster (k3d)
🤖 Prompt for AI Agents
.github/workflows/cloud-runner-integrity.yml around lines 76 to 81: the workflow
currently installs k3d via an unpinned curl | bash which is insecure and
unreproducible; replace it by downloading a specific k3d release (pin the
version) from the project's GitHub releases, verify the checksum or signature,
make the binary executable and move it to a system PATH location (or use the
official release tarball), and remove the curl | bash invocation; ensure the
version is set via a variable/matrix entry for reproducibility and include
explicit checksum verification step before installing.
const lines = await CloudRunnerSystem.RunAndReadLines(`rclone lsf ${remote}`);
CloudRunnerLogger.log(lines.join(','));
} catch {}
}, 1_000_000_000);
🛠️ Refactor suggestion
Reduce Jest timeout to a sane upper bound.
1_000_000_000 ms (~11.6 days) is excessive and can mask hangs. Suggest 15–30 minutes for integration.
- }, 1_000_000_000);
+ }, 30 * 60 * 1000);
📝 Committable suggestion:
}, 30 * 60 * 1000);
🤖 Prompt for AI Agents
In src/model/cloud-runner/tests/cloud-runner-rclone-steps.test.ts around line
80, the Jest timeout is set to 1_000_000_000 ms (≈11.6 days); change this to a
sane upper bound such as 15 minutes by replacing 1_000_000_000 with 15 * 60 *
1000 (or the numeric 900_000) so integration tests won't mask hangs while
allowing sufficient time.
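The same bounded-timeout fix applies to both the rclone and S3 test files, so it can be centralized in one named constant shared by the integration tests (a sketch; the constant name is illustrative, not from the repository):

```typescript
// Bounded upper limit for cloud-runner integration tests (illustrative name).
const INTEGRATION_TEST_TIMEOUT_MS = 30 * 60 * 1000; // 30 minutes

// Jest accepts the timeout as the last argument of `it`/`test`, e.g.:
// it('runs rclone steps', async () => { /* ... */ }, INTEGRATION_TEST_TIMEOUT_MS);
console.log(INTEGRATION_TEST_TIMEOUT_MS);
```

A named constant also makes future timeout changes a one-line edit instead of a search for magic literals.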
const hasAwsCreds = Boolean(process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY);
const shouldRunS3 = (isCI && hasAwsCreds) || awsAvailable;

const build2ContainsBuildSucceeded = results2.includes('Build succeeded');
expect(build2ContainsBuildSucceeded).toBeTruthy();
// Only run the test if we have AWS creds in CI, or the AWS CLI is available locally
🛠️ Refactor suggestion
Tighten S3 test gate: require credentials in all environments.
CLI presence alone isn’t sufficient; AWS/LocalStack still needs creds. Gate on creds to avoid flaky local runs.
- const hasAwsCreds = Boolean(process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY);
- const shouldRunS3 = (isCI && hasAwsCreds) || awsAvailable;
+ const hasAwsCreds = Boolean(process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY);
+ const shouldRunS3 = hasAwsCreds;
📝 Committable suggestion:
const hasAwsCreds = Boolean(process.env.AWS_ACCESS_KEY_ID && process.env.AWS_SECRET_ACCESS_KEY);
const shouldRunS3 = hasAwsCreds;
// Only run the test if we have AWS creds in CI, or the AWS CLI is available locally
🤖 Prompt for AI Agents
In src/model/cloud-runner/tests/cloud-runner-s3-steps.test.ts around lines 38 to
41, the test gate currently allows running when the AWS CLI/localstack is
available without credentials; tighten it so credentials are required in all
environments by changing the condition to require hasAwsCreds and either CI or
CLI availability (i.e., replace (isCI && hasAwsCreds) || awsAvailable with
hasAwsCreds && (isCI || awsAvailable)); update the comment to reflect that we
only run when AWS creds are present plus either CI or local AWS CLI
availability.
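The gate described in the AI prompt can be expressed as a small predicate (a sketch; the function wrapper is hypothetical, while the variable names mirror the test file):

```typescript
// Credentials are required everywhere; CI or a local AWS CLI then enables the run.
function shouldRunS3(hasAwsCreds: boolean, isCI: boolean, awsAvailable: boolean): boolean {
  return hasAwsCreds && (isCI || awsAvailable);
}

// CLI present but no credentials: still skipped, which removes the flaky local case.
console.log(shouldRunS3(false, false, true)); // → false
```

Keeping the gate as a pure function also makes the skip condition trivially unit-testable, independent of any AWS setup.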
);
CloudRunnerLogger.log(results.join(`,`));
}
}, 1_000_000_000);
Replace the extreme timeout with a bounded value.
1_000_000_000 ms (~11.5 days) can hang CI indefinitely. Use a realistic upper bound (e.g., 10 minutes).
- }, 1_000_000_000);
+ }, 600_000);
📝 Committable suggestion:
}, 600_000);
🤖 Prompt for AI Agents
In src/model/cloud-runner/tests/cloud-runner-s3-steps.test.ts around line 66,
replace the extreme test timeout literal "1_000_000_000" (≈11.5 days) with a
bounded, realistic upper bound (for example 10 minutes = 600_000 ms). Edit the
test timeout argument to use 600_000 (or call jest.setTimeout(600_000) at the
top of the file) so CI cannot hang for days while preserving enough time for
slow runs.
await GitHub.octokitPAT.request(`POST /repos/{owner}/{repo}/actions/workflows/{workflowId}/dispatches`, {
  owner: GitHub.owner,
  repo: GitHub.repo,
  // eslint-disable-next-line camelcase
  workflow_id: selectedId,
  workflowId: selectedId,
  ref: CloudRunnerOptions.branch,
🛠️ Refactor suggestion
Use official route placeholder workflow_id for Octokit request typing.
Stick to GitHub’s documented placeholder and param name. It preserves Octokit typing and avoids surprises.
Apply this diff:
- await GitHub.octokitPAT.request(`POST /repos/{owner}/{repo}/actions/workflows/{workflowId}/dispatches`, {
+ await GitHub.octokitPAT.request(`POST /repos/{owner}/{repo}/actions/workflows/{workflow_id}/dispatches`, {
owner: GitHub.owner,
repo: GitHub.repo,
- workflowId: selectedId,
+ // eslint-disable-next-line camelcase
+ workflow_id: selectedId,
ref: CloudRunnerOptions.branch,
inputs: {
checksObject: JSON.stringify({ data, mode }),
},
});
- await GitHub.octokitPAT.request(`POST /repos/{owner}/{repo}/actions/workflows/{workflowId}/dispatches`, {
+ await GitHub.octokitPAT.request(`POST /repos/{owner}/{repo}/actions/workflows/{workflow_id}/dispatches`, {
owner: GitHub.owner,
repo: GitHub.repo,
- workflowId: selectedId,
+ // eslint-disable-next-line camelcase
+ workflow_id: selectedId,
ref: CloudRunnerOptions.branch,
inputs: {
buildGuid: CloudRunner.buildParameters.buildGuid,
},
});
Also applies to: 203-207
🤖 Prompt for AI Agents
In src/model/github.ts around lines 169-173 (and similarly 203-207) the Octokit
request uses the param name workflowId and placeholder {workflowId} which
doesn't match GitHub's documented route placeholder workflow_id; update the
request to use the official placeholder and param name (workflow_id) so the call
becomes POST /repos/{owner}/{repo}/actions/workflows/{workflow_id}/dispatches
and the params object uses workflow_id: selectedId to restore proper Octokit
typing and avoid runtime/typing surprises.
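The reason the placeholder and the parameter key must match: Octokit substitutes each `{name}` in the route with the parameter of the same name. A simplified illustration of that substitution (not Octokit's actual implementation):

```typescript
// Toy route expander: replaces {param} with params[param]. A mismatched key
// (workflowId vs workflow_id) leaves the route unresolvable.
function expandRoute(route: string, params: Record<string, string | number>): string {
  return route.replace(/\{(\w+)\}/g, (_match, key: string) => {
    if (!(key in params)) {
      throw new Error(`Missing route parameter: ${key}`);
    }
    return String(params[key]);
  });
}

const route = 'POST /repos/{owner}/{repo}/actions/workflows/{workflow_id}/dispatches';
console.log(expandRoute(route, { owner: 'game-ci', repo: 'unity-builder', workflow_id: 42 }));
// → POST /repos/game-ci/unity-builder/actions/workflows/42/dispatches
```

Passing `workflowId` instead of `workflow_id` would leave `{workflow_id}` unfilled, which is exactly the mismatch the review flags.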
if (p.value === '' || p.value === undefined || p.value === null) {
  continue;
}
if (p.name !== 'ANDROID_KEYSTORE_BASE64' && p.value.toString().includes(`\n`)) {
const valueAsString = typeof p.value === 'string' ? p.value : String(p.value);
if (p.name !== 'ANDROID_KEYSTORE_BASE64' && valueAsString.includes(`\n`)) {
  string += `--env ${p.name} `;
process.env[p.name] = p.value.toString();
process.env[p.name] = valueAsString;
  continue;
}

string += `--env ${p.name}="${p.value}" `;
string += `--env ${p.name}="${valueAsString}" `;
}
🛠️ Refactor suggestion
Avoid shell interpolation by always passing env by name.
The current --env NAME="VALUE" form expands $ inside double quotes and breaks on embedded quotes. This is a security and correctness risk for values containing $, ", or backslashes.
- const valueAsString = typeof p.value === 'string' ? p.value : String(p.value);
- if (p.name !== 'ANDROID_KEYSTORE_BASE64' && valueAsString.includes(`\n`)) {
- string += `--env ${p.name} `;
- process.env[p.name] = valueAsString;
- continue;
- }
-
- string += `--env ${p.name}="${valueAsString}" `;
+ const valueAsString = typeof p.value === 'string' ? p.value : String(p.value);
+ // Always export via process.env to avoid shell interpolation/quoting issues
+ process.env[p.name] = valueAsString;
+ string += `--env ${p.name} `;
If you prefer a hybrid, route any value containing characters outside [A-Za-z0-9_.:/-] through process.env; otherwise inline quote-safe values.
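A minimal sketch of the env-by-name approach (the function name and the explicit env parameter are illustrative; the real code writes to process.env directly):

```typescript
// Emit only `--env NAME` flags and export the values through the provided
// environment map, so the shell never sees - and never interpolates - the values.
function buildDockerEnvFlags(
  parameters: Array<{ name: string; value: unknown }>,
  env: Record<string, string | undefined>, // pass process.env in real usage
): string {
  let flags = '';
  for (const p of parameters) {
    if (p.value === '' || p.value === undefined || p.value === null) {
      continue;
    }
    env[p.name] = typeof p.value === 'string' ? p.value : String(p.value);
    flags += `--env ${p.name} `;
  }
  return flags;
}

const env: Record<string, string | undefined> = {};
// prints "--env SECRET " (note trailing space); env.SECRET now holds the raw value
console.log(buildDockerEnvFlags([{ name: 'SECRET', value: 'pa$$"word\n' }], env));
```

Docker resolves a bare `--env NAME` from the calling process's environment, so values containing `$`, quotes, or newlines never touch the command line.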
Codex Review: Here are some suggestions.
Reply with "@codex fix comments" to fix any unresolved comments.
About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you open a pull request for review, mark a draft as ready, or comment "@codex review". If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex fix this CI failure" or "@codex address that feedback".
// prettier-ignore
return `
echo "game ci start"
echo "game ci start" >> /home/job-log.txt
/entrypoint.sh | node ${builderPath} -m remote-cli-log-stream --logFile /home/job-log.txt
echo "game ci start" >> "$LOG_FILE"
timeout 3s node ${builderPath} -m remote-cli-log-stream --logFile "$LOG_FILE" || true
node ${builderPath} -m remote-cli-post-build`;
[P1] Local provider no longer executes build entrypoint
When the provider strategy is non-containerized (local/local-system), BuildCommands now returns only a few log statements and remote-cli helpers but never stages the Unity build scripts or runs /entrypoint.sh. Before this change, the same branch cloned the default build script, marked it executable, and piped its output through the log stream, which actually performed the build. With the current code, local runs will immediately log success without invoking any build logic, so local provider builds will silently do nothing.
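One way to guard against this regression is to keep the entrypoint invocation in the generated script and assert on it. A sketch under the assumption that BuildCommands ultimately returns a shell string like the one quoted above (the function name is illustrative):

```typescript
// Build the local-provider command string, keeping /entrypoint.sh piped
// through the remote-cli log stream, as in the pre-change behavior.
function buildLocalProviderCommands(builderPath: string, logFile: string): string {
  return `
echo "game ci start"
echo "game ci start" >> "${logFile}"
/entrypoint.sh | node ${builderPath} -m remote-cli-log-stream --logFile "${logFile}"
node ${builderPath} -m remote-cli-post-build`;
}

const script = buildLocalProviderCommands('/builder/index.js', '/home/job-log.txt');
console.log(script.includes('/entrypoint.sh |')); // → true
```

A unit test asserting that the generated script contains the `/entrypoint.sh | node … remote-cli-log-stream` pipeline would have caught the silent no-op described in this comment.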
* feat: Add dynamic provider loader with improved error handling - Create provider-loader.ts with function-based dynamic import functionality - Update CloudRunner.setupSelectedBuildPlatform to use dynamic loader for unknown providers - Add comprehensive error handling for missing packages and interface validation - Include test coverage for successful loading and error scenarios - Maintain backward compatibility with existing built-in providers - Add ProviderLoader class wrapper for backward compatibility - Support both built-in providers (via switch) and external providers (via dynamic import)
* fix: Resolve linting errors in provider loader - Fix TypeError usage instead of Error for type checking - Add missing blank lines for proper code formatting - Fix comment spacing issues
* build: Update built artifacts after linting fixes - Rebuild dist/ with latest changes - Include updated provider loader in built bundle - Ensure all changes are reflected in compiled output
* build: Update built artifacts after linting fixes - Rebuild dist/ with latest changes - Include updated provider loader in built bundle - Ensure all changes are reflected in compiled output
* build: Update built artifacts after linting fixes - Rebuild dist/ with latest changes - Include updated provider loader in built bundle - Ensure all changes are reflected in compiled output
* build: Update built artifacts after linting fixes - Rebuild dist/ with latest changes - Include updated provider loader in built bundle - Ensure all changes are reflected in compiled output
* fix: Fix AWS job dependencies and remove duplicate localstack tests - Update AWS job to depend on both k8s and localstack jobs - Remove duplicate localstack tests from k8s job (now only runs k8s tests) - Remove unused cloud-runner-localstack job from main integrity check - Fix AWS SDK warnings by using Uint8Array(0) instead of empty string for S3 PutObject - Rename localstack-and-k8s job to k8s job for clarity
* feat: Implement provider loader dynamic imports with GitHub URL support - Add URL detection and parsing utilities for GitHub URLs, local paths, and NPM packages - Implement git operations for cloning and updating repositories with local caching - Add automatic update checking mechanism for GitHub repositories - Update provider-loader.ts to support multiple source types with comprehensive error handling - Add comprehensive test coverage for all new functionality - Include complete documentation with usage examples - Support GitHub URLs: https://github.com/user/repo, user/repo@branch - Support local paths: ./path, /absolute/path - Support NPM packages: package-name, @scope/package - Maintain backward compatibility with existing providers - Add fallback mechanisms and interface validation
* feat: Implement provider loader dynamic imports with GitHub URL support - Add URL detection and parsing utilities for GitHub URLs, local paths, and NPM packages - Implement git operations for cloning and updating repositories with local caching - Add automatic update checking mechanism for GitHub repositories - Update provider-loader.ts to support multiple source types with comprehensive error handling - Add comprehensive test coverage for all new functionality - Include complete documentation with usage examples - Support GitHub URLs: https://github.com/user/repo, user/repo@branch - Support local paths: ./path, /absolute/path - Support NPM packages: package-name, @scope/package - Maintain backward compatibility with existing providers - Add fallback mechanisms and interface validation
* feat: Fix provider-loader tests and URL parser consistency - Fixed provider-loader test failures (constructor validation, module imports) - Fixed provider-url-parser to return consistent base URLs for GitHub sources - Updated error handling to use TypeError consistently - All provider-loader and provider-url-parser tests now pass - Fixed prettier and eslint formatting issues
* feat: Implement provider loader dynamic imports with GitHub URL support - Add URL detection and parsing utilities for GitHub URLs, local paths, and NPM packages - Implement git operations for cloning and updating repositories with local caching - Add automatic update checking mechanism for GitHub repositories - Update provider-loader.ts to support multiple source types with comprehensive error handling - Add comprehensive test coverage for all new functionality - Include complete documentation with usage examples - Support GitHub URLs: https://github.com/user/repo, user/repo@branch - Support local paths: ./path, /absolute/path - Support NPM packages: package-name, @scope/package - Maintain backward compatibility with existing providers - Add fallback mechanisms and interface validation
* feat: Implement provider loader dynamic imports with GitHub URL support - Add URL detection and parsing utilities for GitHub URLs, local paths, and NPM packages - Implement git operations for cloning and updating repositories with local caching - Add automatic update checking mechanism for GitHub repositories - Update provider-loader.ts to support multiple source types with comprehensive error handling - Add comprehensive test coverage for all new functionality - Include complete documentation with usage examples - Support GitHub URLs: https://github.com/user/repo, user/repo@branch - Support local paths: ./path, /absolute/path - Support NPM packages: package-name, @scope/package - Maintain backward compatibility with existing providers - Add fallback mechanisms and interface validation
* m
* m
name: Cloud Runner Integrity
uses: ./.github/workflows/cloud-runner-integrity.yml
secrets: inherit
Check warning
Code scanning / CodeQL
Workflow does not contain permissions (Medium)
Copilot Autofix
AI 26 days ago
To fix this issue, an explicit permissions block should be added to the workflow at the root level (before jobs:) or to each job if finer control is needed. This block should assign the minimum set of permissions needed for the workflow to function correctly. Since the jobs in the provided workflow primarily involve running tests, checking out code, and uploading coverage reports, the minimal permission required is typically contents: read. If no job needs to write (e.g., to pull requests or issues), then contents: read is sufficient. The fix should be applied to the top level of the YAML file, above jobs:, to cover all jobs unless a job requires an exception.
@@ -4,6 +4,8 @@
   push: { branches: [main] }
   pull_request: {}

+permissions:
+  contents: read
 env:
   CODECOV_TOKEN: '2f2eb890-30e2-4724-83eb-7633832cf0de'
Sounds right 👍
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/model/cloud-runner/services/core/shared-workspace-locking.ts (3)
199-200
: Bug: invalid Array.sort comparator breaks lock ordering. Provide two-arg comparator; current version returns arbitrary positives.
- .sort((x) => x.timestamp); + .sort((a, b) => a.timestamp - b.timestamp);
244-244
: Bug: invalid sort comparator for workspaces. Same issue as above; fix comparator.
- ordered.sort((x) => x.timestamp); + ordered.sort((a, b) => a.timestamp - b.timestamp);
271-273
: Bug: timestamp parsed as Number of whole filename → NaN. Split on "_" before Number().
- .map((x) => Number(x))[0]; + .map((x) => Number(x.split(`_`)[0]))[0];
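The comparator bug above is easy to reproduce in isolation. This standalone sketch (sample data and an illustrative Entry type, not the repository's real lock records) shows why the one-argument arrow type-checks but sorts unpredictably, while the two-argument numeric comparator gives a stable oldest-first order:

```typescript
// Array.prototype.sort expects a (a, b) => number comparator. A one-argument
// arrow like (x) => x.timestamp compiles, but sort then ignores the second
// element of each comparison, so the resulting order is effectively arbitrary.
type Entry = { timestamp: number; name: string };

const entries: Entry[] = [
  { timestamp: 300, name: 'c' },
  { timestamp: 100, name: 'a' },
  { timestamp: 200, name: 'b' },
];

// Broken: the "comparator" always returns a positive number here.
const broken = [...entries].sort((x) => x.timestamp);
console.log('broken order:', broken.map((entry) => entry.name).join(','));

// Fixed: numeric two-argument comparator, oldest entry first.
const fixed = [...entries].sort((a, b) => a.timestamp - b.timestamp);
console.log('fixed order:', fixed.map((entry) => entry.name).join(',')); // a,b,c
```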
♻️ Duplicate comments (9)
.github/workflows/cloud-runner-integrity.yml (5)
55-59
: Avoid curl | bash; pin k3d with checksum verification. Reduces supply-chain risk and improves reproducibility.
- - name: Install k3d + - name: Install k3d (pinned + checksum) if: ${{ matrix.provider == 'k8s' }} run: | - curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash - k3d version | cat + set -euo pipefail + K3D_VERSION=v5.7.3 + curl -fsSL -o k3d "https://github.com/k3d-io/k3d/releases/download/${K3D_VERSION}/k3d-linux-amd64" + curl -fsSL -o k3d.sha256 "https://github.com/k3d-io/k3d/releases/download/${K3D_VERSION}/k3d-linux-amd64.sha256" + sha256sum --check k3d.sha256 + install -m 0755 k3d /usr/local/bin/k3d + k3d version | cat
12-19
: Tighten workflow permissions (least privilege). Drop global actions: write and pull-requests: write; keep only what’s needed. If a job needs more, scope it at job-level.
permissions: - checks: write - contents: read - actions: write - packages: read - pull-requests: write - statuses: write - id-token: write + contents: read + checks: write + statuses: write + id-token: write
75-77
: Create S3 bucket with region-aware API (non-us-east-1 needs LocationConstraint). Prevents failures when AWS_REGION != us-east-1 (you set eu-west-2).
- awslocal s3 mb s3://$AWS_STACK_NAME || true - awslocal s3 ls + if [ "${AWS_REGION}" = "us-east-1" ]; then + awslocal s3api create-bucket --bucket "$AWS_STACK_NAME" || true + else + awslocal s3api create-bucket --bucket "$AWS_STACK_NAME" \ + --create-bucket-configuration LocationConstraint="${AWS_REGION}" || true + fi + awslocal s3 ls
103-104
: Do not overwrite GITHUB_TOKEN with a private token. Use the default GitHub-provided token; only pass a PAT as a separate var if truly required.
- GIT_PRIVATE_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }} - GITHUB_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }} + GITHUB_TOKEN: ${{ github.token }}
Repeat the same fix in LocalStack and AWS jobs. If private repo access is needed, keep:
GIT_PRIVATE_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
but do not assign it to GITHUB_TOKEN.
Also applies to: 154-155, 195-196
172-176
: aws-actions/configure-aws-credentials@v1 is deprecated on current runners. Upgrade to v4. This will otherwise fail at runtime (actionlint already flags it).
- - name: Configure AWS Credentials - uses: aws-actions/configure-aws-credentials@v1 + - name: Configure AWS Credentials + uses: aws-actions/configure-aws-credentials@v4 with: aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} aws-region: ${{ env.AWS_REGION }}
src/model/cloud-runner/services/core/shared-workspace-locking.ts (3)
46-48
: Consider exposing workspacePrefix (public) for reuse. Helps other modules avoid hardcoding "locks/".
- private static get workspacePrefix() { + public static get workspacePrefix() { return `locks/`; }
14-16
: Prevent command injection: replace exec('rclone …') with execFile and pass args. Use execFile with an args array and shell: false; update all rclone call sites accordingly.
-import { promisify } from 'node:util'; -import { exec as execCb } from 'node:child_process'; -const exec = promisify(execCb); +import { promisify } from 'node:util'; +import { execFile } from 'node:child_process'; +const execFileAsync = promisify(execFile); @@ - private static async rclone(command: string): Promise<string> { - const { stdout } = await exec(`rclone ${command}`); - return stdout.toString(); - } + private static async rclone(...args: string[]): Promise<string> { + const { stdout } = await execFileAsync('rclone', args, { shell: false, windowsHide: true }); + return stdout.toString(); + } @@ - await SharedWorkspaceLocking.rclone(`lsf ${bucket}`); + await SharedWorkspaceLocking.rclone('lsf', bucket); } catch { - await SharedWorkspaceLocking.rclone(`mkdir ${bucket}`); + await SharedWorkspaceLocking.rclone('mkdir', bucket); } @@ - const output = await SharedWorkspaceLocking.rclone(`lsjson ${path}`); + const output = await SharedWorkspaceLocking.rclone('lsjson', path); @@ - if (SharedWorkspaceLocking.useRclone) { - await SharedWorkspaceLocking.rclone(`touch ${SharedWorkspaceLocking.bucket}/${key}`); - } else { + if (SharedWorkspaceLocking.useRclone) { + await SharedWorkspaceLocking.rclone('touch', `${SharedWorkspaceLocking.bucket}/${key}`); + } else { @@ - if (SharedWorkspaceLocking.useRclone) { - await SharedWorkspaceLocking.rclone(`touch ${SharedWorkspaceLocking.bucket}/${key}`); - } else { + if (SharedWorkspaceLocking.useRclone) { + await SharedWorkspaceLocking.rclone('touch', `${SharedWorkspaceLocking.bucket}/${key}`); + } else { @@ - } else { + } else { await SharedWorkspaceLocking.s3.send( new PutObjectCommand({ Bucket: SharedWorkspaceLocking.bucket, Key: key, Body: new Uint8Array(0) }), ); } const hasLock = await SharedWorkspaceLocking.HasWorkspaceLock(workspace, runId, buildParametersContext); @@ - if (SharedWorkspaceLocking.useRclone) { - await SharedWorkspaceLocking.rclone(`delete ${SharedWorkspaceLocking.bucket}/${key}`); - } else { + if 
(SharedWorkspaceLocking.useRclone) { + await SharedWorkspaceLocking.rclone('delete', `${SharedWorkspaceLocking.bucket}/${key}`); + } else { await SharedWorkspaceLocking.s3.send( new DeleteObjectCommand({ Bucket: SharedWorkspaceLocking.bucket, Key: key }), ); } @@ - if (SharedWorkspaceLocking.useRclone) { - await SharedWorkspaceLocking.rclone( - `delete ${SharedWorkspaceLocking.bucket}/${SharedWorkspaceLocking.workspacePrefix}${buildParametersContext.cacheKey}/${file}`, - ); - } else { + if (SharedWorkspaceLocking.useRclone) { + await SharedWorkspaceLocking.rclone( + 'delete', + `${SharedWorkspaceLocking.bucket}/${SharedWorkspaceLocking.workspacePrefix}${buildParametersContext.cacheKey}/${file}`, + ); + } else { await SharedWorkspaceLocking.s3.send( new DeleteObjectCommand({ Bucket: SharedWorkspaceLocking.bucket, Key: `${SharedWorkspaceLocking.workspacePrefix}${buildParametersContext.cacheKey}/${file}`, }), ); } @@ - if (SharedWorkspaceLocking.useRclone) { - await SharedWorkspaceLocking.rclone(`delete ${SharedWorkspaceLocking.bucket}/${prefix}${file}`); - } else { + if (SharedWorkspaceLocking.useRclone) { + await SharedWorkspaceLocking.rclone('delete', `${SharedWorkspaceLocking.bucket}/${prefix}${file}`); + } else { await SharedWorkspaceLocking.s3.send( new DeleteObjectCommand({ Bucket: SharedWorkspaceLocking.bucket, Key: `${prefix}${file}` }), ); }Also applies to: 29-32, 53-56, 78-84, 299-305, 331-337, 344-350, 368-379, 389-395
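As a runnable illustration of the execFile pattern recommended above (with echo standing in for the rclone binary so the snippet works without rclone installed):

```typescript
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// With execFile and shell: false, each array element reaches the child
// process as one literal argument; shell metacharacters like ";" or "&&"
// are never interpreted, which is what defeats command injection.
async function safeInvoke(args: string[]): Promise<string> {
  const { stdout } = await execFileAsync('echo', args, { shell: false });
  return stdout.toString().trim();
}

safeInvoke(['lsf', 'bucket/key; rm -rf /']).then((out) => {
  // The hostile-looking key is printed verbatim, not executed.
  console.log(out);
});
```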
85-96
: ListObjectsV2 not paginated — results truncated at 1,000 keys. Paginate to avoid missing workspaces/locks in larger repos.
- const result = await SharedWorkspaceLocking.s3.send( - new ListObjectsV2Command({ Bucket: bucket, Prefix: prefix, Delimiter: '/' }), - ); - const entries: string[] = []; - for (const p of result.CommonPrefixes || []) { - if (p.Prefix) entries.push(p.Prefix.slice(prefix.length)); - } - for (const c of result.Contents || []) { - if (c.Key && c.Key !== prefix) entries.push(c.Key.slice(prefix.length)); - } - return entries; + const entries: string[] = []; + let token: string | undefined; + do { + const result = await SharedWorkspaceLocking.s3.send( + new ListObjectsV2Command({ + Bucket: bucket, + Prefix: prefix, + Delimiter: '/', + ContinuationToken: token, + }), + ); + for (const p of result.CommonPrefixes || []) { + if (p.Prefix) entries.push(p.Prefix.slice(prefix.length)); + } + for (const c of result.Contents || []) { + if (c.Key && c.Key !== prefix) entries.push(c.Key.slice(prefix.length)); + } + token = result.NextContinuationToken; + } while (token); + return entries;
src/model/cloud-runner/cloud-runner.ts (1)
94-123
: Fail fast on unknown provider instead of silently falling back to local. Defaulting to LocalCloudRunner can mask misconfiguration and run workloads in unintended environments. Prefer throwing (or gate fallback behind an explicit flag).
Two options:
- Throw on unknown provider:
default: - // Try to load provider using the dynamic loader for unknown providers - try { - CloudRunner.Provider = await loadProvider(provider, CloudRunner.buildParameters); - } catch (error: any) { - CloudRunnerLogger.log(`Failed to load provider '${provider}' using dynamic loader: ${error.message}`); - CloudRunnerLogger.log('Falling back to local provider...'); - CloudRunner.Provider = new LocalCloudRunner(); - } - break; + // Unknown provider -> fail fast + throw new CloudRunnerError(`Unknown or failed provider: "${provider}". Refusing to fall back to 'local'.`);
- Gate fallback with an env flag:
default: - // Try to load provider using the dynamic loader for unknown providers + // Try to load provider using the dynamic loader for unknown providers try { CloudRunner.Provider = await loadProvider(provider, CloudRunner.buildParameters); } catch (error: any) { - CloudRunnerLogger.log(`Failed to load provider '${provider}' using dynamic loader: ${error.message}`); - CloudRunnerLogger.log('Falling back to local provider...'); - CloudRunner.Provider = new LocalCloudRunner(); + CloudRunnerLogger.log(`Failed to load provider '${provider}' using dynamic loader: ${error.message}`); + if (process.env.CLOUD_RUNNER_ALLOW_PROVIDER_FALLBACK === 'true') { + CloudRunnerLogger.log('Falling back to local provider due to CLOUD_RUNNER_ALLOW_PROVIDER_FALLBACK=true'); + CloudRunner.Provider = new LocalCloudRunner(); + } else { + throw new CloudRunnerError( + `Unknown or failed provider: "${provider}". Set CLOUD_RUNNER_ALLOW_PROVIDER_FALLBACK=true to allow fallback.` + ); + } } break;
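Either option reduces to one small decision function. This sketch uses stub provider types; the env var name CLOUD_RUNNER_ALLOW_PROVIDER_FALLBACK is taken from the suggested diff, while loadProviderStub stands in for the real dynamic loader:

```typescript
// Stand-ins for the real loader and provider types; only the control
// flow matters here. Fallback to "local" is explicit and opt-in.
type Provider = { name: string };

function loadProviderStub(name: string): Provider {
  if (name !== 'aws' && name !== 'k8s' && name !== 'local') {
    throw new Error(`unknown provider '${name}'`);
  }
  return { name };
}

function selectProvider(name: string, env: Record<string, string | undefined>): Provider {
  try {
    return loadProviderStub(name);
  } catch (error) {
    if (env.CLOUD_RUNNER_ALLOW_PROVIDER_FALLBACK === 'true') {
      return { name: 'local' }; // explicit, opt-in fallback
    }
    throw error; // fail fast by default
  }
}

console.log(selectProvider('aws', {}).name); // aws
```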
🧹 Nitpick comments (19)
.github/workflows/cloud-runner-integrity.yml (5)
71-74
: Pin LocalStack action to a specific version/commit. Use a tagged version or commit SHA to prevent supply-chain drift.
- - name: Start LocalStack (S3) - uses: localstack/setup-localstack@v0.2.3 + - name: Start LocalStack (S3) + uses: localstack/setup-localstack@v0.2.3 # consider pinning by commit SHA
1-11
: Optional: expose region as an input for reuse and flexibility. Let callers set AWS region; default to eu-west-2.
on: workflow_call: inputs: + awsRegion: + description: 'AWS region for tests' + required: false + default: 'eu-west-2' + type: string runGithubIntegrationTests: description: 'Run GitHub Checks integration tests' required: false default: 'false' type: stringAnd in env:
- AWS_REGION: eu-west-2 - AWS_DEFAULT_REGION: eu-west-2 + AWS_REGION: ${{ inputs.awsRegion }} + AWS_DEFAULT_REGION: ${{ inputs.awsRegion }}
1-1
: Optional: add concurrency to cancel superseded runs. Prevents CI pileups on fast pushes.
name: cloud-runner-integrity +concurrency: + group: cloud-runner-integrity-${{ github.workflow }}-${{ github.ref }} + cancel-in-progress: true
188-194
: Optional: avoid duplicating AWS creds in env after configure-aws-credentials. The action exports creds; unless tests explicitly require env injection, you can remove these to reduce exposure.
- AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} - AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}If tests need them, keep as-is.
25-26
: Confirm branch/ref format. CLOUD_RUNNER_BRANCH uses github.ref (e.g., refs/heads/feature/x). If consumers expect a raw branch name, switch to github.ref_name.
- CLOUD_RUNNER_BRANCH: ${{ github.ref }} + CLOUD_RUNNER_BRANCH: ${{ github.ref_name }}src/model/cloud-runner/services/core/shared-workspace-locking.ts (2)
257-257
: Type: return number, not Number. Use the primitive type to match conventions and avoid subtle typing pitfalls.
- ): Promise<Number> { + ): Promise<number> {
63-68
: Nit: rename createParams → createParameters (ESLint). Minor naming clarity; aligns with unicorn/prevent-abbreviations.
- const createParams: any = { Bucket: bucket }; + const createParameters: any = { Bucket: bucket }; if (region && region !== 'us-east-1') { - createParams.CreateBucketConfiguration = { LocationConstraint: region }; + createParameters.CreateBucketConfiguration = { LocationConstraint: region }; } - await SharedWorkspaceLocking.s3.send(new CreateBucketCommand(createParams)); + await SharedWorkspaceLocking.s3.send(new CreateBucketCommand(createParameters));src/model/cloud-runner/providers/provider-url-parser.ts (2)
48-60
: Extend SSH/HTTPS parsing to more GitHub URL shapes (optional). Support ssh://git@github.com/owner/repo.git and git+https:// variants; they’re common in package metadata.

Example addition:
-const githubSshMatch = source.match(/^git@github\.com:([^/]+)\/([^/]+?)(?:\.git)?\/?(?:tree\/([^/]+))?(?:\/(.+))?$/); +const githubSshMatch = + source.match(/^git@github\.com:([^/]+)\/([^/]+?)(?:\.git)?\/?(?:tree\/([^/]+))?(?:\/(.+))?$/) || + source.match(/^ssh:\/\/git@github\.com\/([^/]+)\/([^/]+?)(?:\.git)?(?:#([^/]+))?(?:\/(.+))?$/);
118-128
: Avoid logging 'Branch: undefined'. Tiny UX tweak: show '(default)' when branch is not specified.
- CloudRunnerLogger.log(` Branch: ${parsed.branch}`); + CloudRunnerLogger.log(` Branch: ${parsed.branch ?? '(default)'}`);src/model/cloud-runner/tests/providers/provider-git-manager.test.ts (1)
48-75
: These tests only assert mocked return plumbing. Consider at least asserting the called-with arguments to catch regressions in API usage.
Example:
-const result = await mockProviderGitManager.cloneRepository(mockUrlInfo); +const result = await mockProviderGitManager.cloneRepository(mockUrlInfo); expect(result.success).toBe(true); expect(result.localPath).toBe('/path/to/cloned/repo'); +expect(mockProviderGitManager.cloneRepository).toHaveBeenCalledWith(mockUrlInfo);src/model/cloud-runner/tests/providers/provider-loader.test.ts (2)
25-36
: Also assert getProviderModulePath is invoked for GitHub sources. Improves coverage of the Git path resolution.
await expect(loadProvider('https://github.com/user/repo', {} as any)).rejects.toThrow(); expect(mockProviderGitManager.ensureRepositoryAvailable).toHaveBeenCalled(); +expect(mockProviderGitManager.getProviderModulePath).toHaveBeenCalledWith( + expect.objectContaining({ owner: 'user', repo: 'repo' }), + '/path/to/cloned/repo', +);
5-8
: Stabilize class static method mocks. Auto-mocking classes with static methods can be flaky. Provide an explicit mock factory to ensure methods exist.
-jest.mock('../../providers/provider-git-manager'); +jest.mock('../../providers/provider-git-manager', () => ({ + ProviderGitManager: { + ensureRepositoryAvailable: jest.fn(), + getProviderModulePath: jest.fn(), + cleanupOldRepositories: jest.fn(), + }, +}));src/model/cloud-runner/tests/providers/provider-url-parser.test.ts (2)
29-39
: Add edge-case tests (trailing slash, shorthand with path, invalid/ambiguous inputs). To lock behavior and prevent regressions, add a few more assertions:
- Trailing slash on GitHub URL.
- Shorthand with path but no branch (user/repo/path).
- Reject/handle odd inputs (e.g., git+https://, enterprise GitHub domains if unsupported).
Apply near existing cases:
@@ it('parses HTTPS GitHub URLs with path', () => { @@ }); + + it('parses HTTPS GitHub URLs with trailing slash', () => { + const result = parseProviderSource('https://github.com/user/repo/'); + expect(result).toEqual({ + type: 'github', + owner: 'user', + repo: 'repo', + branch: 'main', + path: '', + url: 'https://github.com/user/repo', + }); + }); @@ it('parses shorthand GitHub references with path', () => { @@ }); + + it('parses shorthand with path and default branch', () => { + const result = parseProviderSource('user/repo/path/to/dir'); + expect(result).toEqual({ + type: 'github', + owner: 'user', + repo: 'repo', + branch: 'main', + path: 'path/to/dir', + url: 'https://github.com/user/repo', + }); + });Also applies to: 65-87, 89-99
170-185
: Clarify branch-with-slash support. The parser currently can’t represent branches containing “/” (e.g., feature/x) when using HTTPS “tree/…” or shorthand “@…/…”. Either document this limitation or extend parsing to support URL-encoded branches. Add a test once the decision is made.
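One possible resolution, sketched with a hypothetical parseShorthand helper (not the repository's actual parser): accept a URL-encoded branch segment and decode it after matching, so feature%2Fx round-trips to feature/x:

```typescript
// Hypothetical shorthand parser supporting URL-encoded branch names.
// The owner/repo@branch shape matches the review's examples; the
// decodeURIComponent step is the illustrative addition.
function parseShorthand(source: string): { owner: string; repo: string; branch?: string } | null {
  const match = source.match(/^([^/@]+)\/([^/@]+)(?:@(.+))?$/);
  if (!match) return null;
  const [, owner, repo, rawBranch] = match;
  return { owner, repo, branch: rawBranch ? decodeURIComponent(rawBranch) : undefined };
}

console.log(parseShorthand('user/repo@feature%2Fx')); // branch decodes to "feature/x"
```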
src/model/cloud-runner/cloud-runner.ts (1)
70-92
: Broaden LocalStack detection heuristics (low risk improvement). Consider also matching 0.0.0.0, host.docker.internal, and LocalStack’s s3.localhost.localstack.cloud endpoints to reduce false negatives in CI.
- const isLocalStack = /localstack|localhost|127\.0\.0\.1/i.test(endpointsToCheck); + const isLocalStack = /localstack|localhost|127\.0\.0\.1|0\.0\.0\.0|host\.docker\.internal|s3\.localhost\.localstack\.cloud/i.test(endpointsToCheck);
src/model/cloud-runner/providers/provider-git-manager.ts (2)
97-106
: Log errors with error severity. Use CloudRunnerLogger.error for failures to improve observability.
- CloudRunnerLogger.log(`Error: ${errorMessage}`); + CloudRunnerLogger.error(`Error: ${errorMessage}`); @@ - CloudRunnerLogger.log(`Error: ${errorMessage}`); + CloudRunnerLogger.error(`Error: ${errorMessage}`); @@ - CloudRunnerLogger.log(`Error during cleanup: ${error.message}`); + CloudRunnerLogger.error(`Error during cleanup: ${error.message}`);
Also applies to: 167-175, 274-276
25-37
: Cache location: consider OS tempdir to avoid polluting repo root (optional). Storing under process.cwd() ties the cache to the workspace. Using os.tmpdir() or a configurable path can reduce noise and avoid permission issues.
- private static readonly CACHE_DIR = path.join(process.cwd(), '.provider-cache'); + private static readonly CACHE_DIR = + process.env.PROVIDER_CACHE_DIR || path.join(require('os').tmpdir(), 'unity-builder-provider-cache');src/model/cloud-runner/providers/provider-loader.ts (2)
95-112
: Reject non-constructors early with clearer message. The current message is fine; consider hinting expected export shape (default class or constructor function).
- if (typeof Provider !== 'function') { - throw new TypeError(`Provider package '${providerSource}' does not export a constructor function`); - } + if (typeof Provider !== 'function') { + throw new TypeError( + `Provider '${providerSource}' must export a constructor (default export or module.exports = class ...)` + ); + }
138-141
: Expose built-in provider map from a single source to avoid drift (optional). Consider centralizing the map used here and in CloudRunner to a shared constant to prevent mismatches.
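A minimal sketch of that centralization (illustrative names, not the repository's actual exports): one map that both the CloudRunner switch and the dynamic loader would consult.

```typescript
// Single source of truth for built-in providers. Both the switch in
// CloudRunner and the dynamic loader check this map instead of each
// maintaining its own list. Factories are stubs for illustration.
type ProviderFactory = () => { name: string };

const BUILT_IN_PROVIDERS: Record<string, ProviderFactory> = {
  aws: () => ({ name: 'aws' }),
  k8s: () => ({ name: 'k8s' }),
  local: () => ({ name: 'local' }),
};

function isBuiltInProvider(name: string): boolean {
  // hasOwnProperty.call avoids false positives on prototype keys like "toString".
  return Object.prototype.hasOwnProperty.call(BUILT_IN_PROVIDERS, name);
}

console.log(isBuiltInProvider('aws')); // true
```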
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- dist/index.js is excluded by !**/dist/**
- dist/index.js.map is excluded by !**/dist/**, !**/*.map
📒 Files selected for processing (14)
- .cursor/settings.json (1 hunks)
- .github/workflows/cloud-runner-integrity.yml (1 hunks)
- .github/workflows/integrity-check.yml (1 hunks)
- src/model/cloud-runner/cloud-runner.ts (4 hunks)
- src/model/cloud-runner/providers/README.md (1 hunks)
- src/model/cloud-runner/providers/provider-git-manager.ts (1 hunks)
- src/model/cloud-runner/providers/provider-loader.ts (1 hunks)
- src/model/cloud-runner/providers/provider-url-parser.ts (1 hunks)
- src/model/cloud-runner/services/core/shared-workspace-locking.ts (8 hunks)
- src/model/cloud-runner/tests/fixtures/invalid-provider.ts (1 hunks)
- src/model/cloud-runner/tests/providers/provider-git-manager.test.ts (1 hunks)
- src/model/cloud-runner/tests/providers/provider-loader.test.ts (1 hunks)
- src/model/cloud-runner/tests/providers/provider-url-parser.test.ts (1 hunks)
- src/model/index.ts (2 hunks)
✅ Files skipped from review due to trivial changes (2)
- .cursor/settings.json
- src/model/cloud-runner/providers/README.md
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/integrity-check.yml
🧰 Additional context used
🧬 Code graph analysis (8)
src/model/cloud-runner/providers/provider-url-parser.ts (1)
- src/model/github.ts (2): owner (44-46), repo (48-50)

src/model/cloud-runner/providers/provider-loader.ts (5)
- src/model/index.ts (3): loadProvider (28-28), BuildParameters (17-17), ProviderLoader (29-29)
- src/model/cloud-runner/providers/provider-interface.ts (1): ProviderInterface (7-57)
- src/model/cloud-runner/providers/provider-url-parser.ts (3): parseProviderSource (29-90), logProviderSource (117-138), ProviderSourceInfo (22-22)
- src/model/cloud-runner/providers/provider-git-manager.ts (1): ProviderGitManager (25-278)
- src/model/cloud-runner/services/core/cloud-runner-logger.ts (1): error (24-26)

src/model/cloud-runner/tests/providers/provider-git-manager.test.ts (2)
- src/model/cloud-runner/providers/provider-git-manager.ts (1): ProviderGitManager (25-278)
- src/model/cloud-runner/providers/provider-url-parser.ts (1): GitHubUrlInfo (3-10)

src/model/cloud-runner/tests/providers/provider-loader.test.ts (3)
- src/model/cloud-runner/providers/provider-git-manager.ts (1): ProviderGitManager (25-278)
- src/model/cloud-runner/providers/provider-interface.ts (1): ProviderInterface (7-57)
- src/model/cloud-runner/providers/provider-loader.ts (3): loadProvider (15-117), loadProvider (130-132), ProviderLoader (122-158)

src/model/cloud-runner/tests/providers/provider-url-parser.test.ts (1)
- src/model/cloud-runner/providers/provider-url-parser.ts (3): parseProviderSource (29-90), generateCacheKey (97-99), isGitHubSource (106-110)

src/model/cloud-runner/providers/provider-git-manager.ts (2)
- src/model/cloud-runner/providers/provider-url-parser.ts (2): GitHubUrlInfo (3-10), generateCacheKey (97-99)
- src/model/cloud-runner/services/core/cloud-runner-logger.ts (1): error (24-26)

src/model/cloud-runner/cloud-runner.ts (2)
- src/model/cloud-runner/providers/provider-loader.ts (2): loadProvider (15-117), loadProvider (130-132)
- src/index.ts (1): runMain (7-51)

src/model/cloud-runner/services/core/shared-workspace-locking.ts (2)
- src/model/cloud-runner/providers/aws/aws-client-factory.ts (1): AwsClientFactory (9-71)
- src/model/input.ts (1): region (48-50)
🪛 ESLint
src/model/cloud-runner/tests/providers/provider-git-manager.test.ts
[error] 15-15: 'mockExecAsync' is assigned a value but never used.
(no-unused-vars)
[error] 18-26: Expected blank line before this statement.
(padding-line-between-statements)
[error] 29-29: 'mockFs' is assigned a value but never used.
(no-unused-vars)
[error] 32-32: Import in body of module; reorder to top.
(import/first)
src/model/cloud-runner/services/core/shared-workspace-locking.ts
[error] 15-15: The variable execCb should be named execCallback. A more descriptive name will do too.
(unicorn/prevent-abbreviations)
[error] 24-24: Expected blank line before this statement.
(padding-line-between-statements)
[error] 31-31: Expected blank line before this statement.
(padding-line-between-statements)
[error] 57-57: Expected blank line before this statement.
(padding-line-between-statements)
[error] 63-63: The variable createParams should be named createParameters. A more descriptive name will do too.
(unicorn/prevent-abbreviations)
[error] 80-80: Expected blank line before this statement.
(padding-line-between-statements)
[error] 80-80: Please rename the variable e. Suggested names are: error, event. A more descriptive name will do too.
(unicorn/prevent-abbreviations)
[error] 95-95: Expected blank line before this statement.
(padding-line-between-statements)
[error] 299-305: This if statement can be replaced by a ternary expression.
(unicorn/prefer-ternary)
[error] 331-337: This if statement can be replaced by a ternary expression.
(unicorn/prefer-ternary)
[error] 344-350: This if statement can be replaced by a ternary expression.
(unicorn/prefer-ternary)
[error] 368-379: This if statement can be replaced by a ternary expression.
(unicorn/prefer-ternary)
[error] 389-395: This if statement can be replaced by a ternary expression.
(unicorn/prefer-ternary)
[error] 404-404: Expected blank line before this statement.
(padding-line-between-statements)
🪛 actionlint (1.7.7)
.github/workflows/cloud-runner-integrity.yml
172-172: the runner of "aws-actions/configure-aws-credentials@v1" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
- GitHub Check: StandaloneLinux64 on 2022.3.13f1
- GitHub Check: StandaloneWindows64 on 2023.2.2f1
- GitHub Check: StandaloneOSX on 2023.2.2f1
- GitHub Check: StandaloneWindows64 on 2022.3.13f1
- GitHub Check: Android on 2021.3.32f1
- GitHub Check: StandaloneOSX on 2022.3.13f1
- GitHub Check: StandaloneLinux64 on 2021.3.32f1
- GitHub Check: StandaloneLinux64 on 2021.3.32f1
- GitHub Check: tvOS on 2021.3.32f1
- GitHub Check: StandaloneWindows64 on 2021.3.32f1
- GitHub Check: StandaloneLinux64 on 2021.3.32f1
- GitHub Check: Android on 2021.3.32f1
- GitHub Check: StandaloneOSX on 2021.3.45f1
- GitHub Check: iOS on 2022.3.13f1
- GitHub Check: iOS on 2023.2.2f1
- GitHub Check: StandaloneOSX on 2022.3.13f1
- GitHub Check: iOS on 2021.3.45f1
🔇 Additional comments (3)
src/model/cloud-runner/tests/fixtures/invalid-provider.ts (1)
1-1
: Fixture looks good. Minimal class is fine for negative-path tests.
src/model/index.ts (1)
13-13
: Public re-exports LGTM. Re-exporting loader APIs here is convenient for consumers.
Also applies to: 28-30
src/model/cloud-runner/tests/providers/provider-url-parser.test.ts (1)
3-16
: Solid coverage of parser happy-paths. These tests exercise HTTPS/SSH/shorthand/local/npm flows well and mirror the current parser behavior.
UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
PROJECT_PATH: test-project
TARGET_PLATFORM: StandaloneWindows64
cloudRunnerTests: true
versioning: None
KUBE_STORAGE_CLASS: local-path
PROVIDER_STRATEGY: aws
AWS_ACCESS_KEY_ID: test
AWS_SECRET_ACCESS_KEY: test
AWS_ENDPOINT: http://localhost:4566
AWS_ENDPOINT_URL: http://localhost:4566
GIT_PRIVATE_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
GITHUB_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
🛠️ Refactor suggestion
Align LocalStack env with k8s job (path-style + IMDS off).
LocalStack works more reliably with path-style addressing and disabled EC2 metadata.
env:
UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
@@
- AWS_ENDPOINT: http://localhost:4566
- AWS_ENDPOINT_URL: http://localhost:4566
+ AWS_ENDPOINT: http://localhost:4566
+ AWS_ENDPOINT_URL: http://localhost:4566
+ AWS_S3_FORCE_PATH_STYLE: 'true'
+ AWS_EC2_METADATA_DISABLED: 'true'
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
PROJECT_PATH: test-project
TARGET_PLATFORM: StandaloneWindows64
cloudRunnerTests: true
versioning: None
KUBE_STORAGE_CLASS: local-path
PROVIDER_STRATEGY: aws
AWS_ACCESS_KEY_ID: test
AWS_SECRET_ACCESS_KEY: test
AWS_ENDPOINT: http://localhost:4566
AWS_ENDPOINT_URL: http://localhost:4566
AWS_S3_FORCE_PATH_STYLE: 'true'
AWS_EC2_METADATA_DISABLED: 'true'
GIT_PRIVATE_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
GITHUB_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
🤖 Prompt for AI Agents
In .github/workflows/cloud-runner-integrity.yml around lines 141 to 155, the
LocalStack-related environment for the job needs path-style S3 addressing and
EC2 metadata disabled; add AWS_S3_FORCE_PATH_STYLE: "true" and
AWS_EC2_METADATA_DISABLED: "true" (or true without quotes consistent with
existing YAML style) to the env block alongside AWS_ENDPOINT/AWS_ENDPOINT_URL so
the job uses path-style requests and ignores IMDS when interacting with
LocalStack.
```typescript
import { exec } from 'child_process';
import { promisify } from 'util';
import * as fs from 'fs';
import path from 'path';
import CloudRunnerLogger from '../services/core/cloud-runner-logger';
import { GitHubUrlInfo, generateCacheKey } from './provider-url-parser';

const execAsync = promisify(exec);
```
🛠️ Refactor suggestion
Prevent command injection and hangs; use execFile with prompts disabled
Building git commands via string interpolation is unsafe and may hang on auth prompts. Switch to execFile with args and GIT_TERMINAL_PROMPT=0; also make timeout configurable.
```diff
-import { exec } from 'child_process';
+import { exec, execFile } from 'child_process';
@@
-const execAsync = promisify(exec);
+const execAsync = promisify(exec);
+const execFileAsync = promisify(execFile);
@@
-  private static readonly GIT_TIMEOUT = 30000; // 30 seconds
+  private static readonly GIT_TIMEOUT = Number(process.env.PROVIDER_GIT_TIMEOUT_MS) || 30000; // 30 seconds (configurable)
@@
-    const cloneCommand = `git clone --depth 1 --branch ${urlInfo.branch} ${urlInfo.url} "${localPath}"`;
-    CloudRunnerLogger.log(`Executing: ${cloneCommand}`);
-
-    const { stderr } = await execAsync(cloneCommand, {
-      timeout: this.GIT_TIMEOUT,
-      cwd: this.CACHE_DIR,
-    });
+    const args = ['clone', '--depth', '1', '--branch', String(urlInfo.branch), urlInfo.url, localPath];
+    CloudRunnerLogger.log(`Executing: git ${args.join(' ')}`);
+    const { stderr } = await execFileAsync('git', args, {
+      timeout: this.GIT_TIMEOUT,
+      cwd: this.CACHE_DIR,
+      env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
+    });
@@
-    await execAsync('git fetch origin', {
+    await execFileAsync('git', ['fetch', 'origin'], {
       timeout: this.GIT_TIMEOUT,
       cwd: localPath,
+      env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
     });
@@
-    const { stdout: statusOutput } = await execAsync(`git status -uno`, {
-      timeout: this.GIT_TIMEOUT,
-      cwd: localPath,
-    });
+    const { stdout: head } = await execFileAsync('git', ['rev-parse', 'HEAD'], { timeout: this.GIT_TIMEOUT, cwd: localPath });
+    const { stdout: remote } = await execFileAsync('git', ['rev-parse', `origin/${String(urlInfo.branch)}`], { timeout: this.GIT_TIMEOUT, cwd: localPath });
```
Also applies to: 76-86, 128-139, 146-151
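The safety argument behind the suggestion can be demonstrated in isolation. The sketch below uses the running Node binary as the child process so it stays self-contained and runnable without git; the helper name `safeEcho` is illustrative, not part of the PR:

```typescript
// Sketch: spawning a child with an argument array instead of a shell string.
// Arguments are passed verbatim to the binary, so shell metacharacters in
// user-controlled input (branch names, URLs) are never reinterpreted.
import { execFile } from 'child_process';
import { promisify } from 'util';

const execFileAsync = promisify(execFile);

export async function safeEcho(userInput: string): Promise<string> {
  // process.execPath is the running node binary; '-e' echoes argv[1] back.
  // GIT_TERMINAL_PROMPT=0 is irrelevant to node itself, but shows how the
  // prompt-disabling env would be threaded through to git.
  const { stdout } = await execFileAsync(
    process.execPath,
    ['-e', 'console.log(process.argv[1])', userInput],
    { timeout: 30000, env: { ...process.env, GIT_TERMINAL_PROMPT: '0' } },
  );
  return stdout.trim();
}

// A hostile "branch name" stays a literal argument; nothing is executed.
safeEcho('main; rm -rf "$(whoami)"').then((out) => console.log(out));
```

With `exec` and string interpolation, the same input would be handed to a shell and the `$(whoami)` substitution would run; with `execFile` it is printed back verbatim.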
```typescript
const hasUpdates =
  statusOutput.includes('Your branch is behind') || statusOutput.includes('can be fast-forwarded');

if (hasUpdates) {
  CloudRunnerLogger.log(`Updates available, pulling latest changes...`);

  // Reset to origin/branch to get latest changes
  await execAsync(`git reset --hard origin/${urlInfo.branch}`, {
    timeout: this.GIT_TIMEOUT,
    cwd: localPath,
  });

  CloudRunnerLogger.log(`Repository updated successfully`);

  return {
    success: true,
    updated: true,
  };
} else {
```
🛠️ Refactor suggestion
Use commit hashes to detect updates instead of parsing “git status” output
Comparing HEAD vs origin/branch is more reliable across locales and git versions.
```diff
-      const hasUpdates =
-        statusOutput.includes('Your branch is behind') || statusOutput.includes('can be fast-forwarded');
+      const hasUpdates = head.trim() !== remote.trim();
@@
-        await execAsync(`git reset --hard origin/${urlInfo.branch}`, {
-          timeout: this.GIT_TIMEOUT,
-          cwd: localPath,
-        });
+        await execFileAsync('git', ['reset', '--hard', `origin/${String(urlInfo.branch)}`], {
+          timeout: this.GIT_TIMEOUT,
+          cwd: localPath,
+          env: { ...process.env, GIT_TERMINAL_PROMPT: '0' },
+        });
```
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/model/cloud-runner/providers/provider-git-manager.ts around lines 140 to
158, replace the brittle text-based detection of updates with commit-hash
comparison: run a git fetch first, then obtain the local commit hash (git
rev-parse HEAD) and the remote branch commit hash (git rev-parse
origin/<branch>), compare the two hashes and treat any difference as
"hasUpdates"; if different, perform the existing reset/pull flow and return
updated: true. Use the same execAsync calls with the existing timeout and cwd
options, and handle errors as currently done.
```typescript
// Look for common provider entry points
const commonEntryPoints = [
  'index.js',
  'index.ts',
  'src/index.js',
  'src/index.ts',
  'lib/index.js',
  'lib/index.ts',
  'dist/index.js',
  'dist/index.js.map',
];

for (const entryPoint of commonEntryPoints) {
  const fullPath = path.join(localPath, entryPoint);
  if (fs.existsSync(fullPath)) {
    CloudRunnerLogger.log(`Found provider entry point: ${entryPoint}`);

    return fullPath;
```
🛠️ Refactor suggestion
Do not treat source maps or TypeScript files as entry points
Importing dist/index.js.map or .ts will fail at runtime in Node without a TS loader. Prefer JS artifacts only; throw if none found.
```diff
-  const commonEntryPoints = [
-    'index.js',
-    'index.ts',
-    'src/index.js',
-    'src/index.ts',
-    'lib/index.js',
-    'lib/index.ts',
-    'dist/index.js',
-    'dist/index.js.map',
-  ];
+  const commonEntryPoints = [
+    'dist/index.cjs',
+    'dist/index.js',
+    'lib/index.cjs',
+    'lib/index.js',
+    'index.cjs',
+    'index.js',
+    'src/index.js', // in case sources are JS
+  ];
@@
-  // Default to repository root
-  CloudRunnerLogger.log(`No specific entry point found, using repository root`);
-
-  return localPath;
+  // No valid entry point found
+  throw new Error(`No loadable JS entry point found in provider repo at ${localPath}`);
```
```typescript
// Look for common provider entry points
const commonEntryPoints = [
  'dist/index.cjs',
  'dist/index.js',
  'lib/index.cjs',
  'lib/index.js',
  'index.cjs',
  'index.js',
  'src/index.js', // in case sources are JS
];

for (const entryPoint of commonEntryPoints) {
  const fullPath = path.join(localPath, entryPoint);
  if (fs.existsSync(fullPath)) {
    CloudRunnerLogger.log(`Found provider entry point: ${entryPoint}`);

    return fullPath;
  }
}

// No valid entry point found
throw new Error(`No loadable JS entry point found in provider repo at ${localPath}`);
```
🤖 Prompt for AI Agents
In src/model/cloud-runner/providers/provider-git-manager.ts around lines 224 to
241, the code currently treats .ts and .map files as valid entry points which
will break at runtime; update the commonEntryPoints array to include only
JavaScript artifacts (e.g., index.js, src/index.js, lib/index.js, dist/index.js)
remove any .ts and .map entries, iterate those paths and return the first
existing JS file, and if none are found throw a clear error indicating no
JavaScript entry point was located so callers can fail fast.
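The JS-only resolution described above can be sketched as a standalone helper and exercised against a throwaway directory (the function name and candidate list are illustrative, not the PR's exact code):

```typescript
// Sketch: resolve the first existing compiled-JS entry point; .ts and .map
// files are never candidates, and missing entry points fail fast.
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

const JS_ENTRY_POINTS = ['dist/index.cjs', 'dist/index.js', 'lib/index.js', 'index.js', 'src/index.js'];

export function resolveEntryPoint(repoRoot: string): string {
  for (const candidate of JS_ENTRY_POINTS) {
    const fullPath = path.join(repoRoot, candidate);
    if (fs.existsSync(fullPath)) return fullPath;
  }
  // Fail fast instead of silently falling back to the repository root.
  throw new Error(`No loadable JS entry point found in ${repoRoot}`);
}

// Demo against a temp "repo" containing only dist/index.js.
const repo = fs.mkdtempSync(path.join(os.tmpdir(), 'provider-'));
fs.mkdirSync(path.join(repo, 'dist'));
fs.writeFileSync(path.join(repo, 'dist', 'index.js'), 'module.exports = {};');
console.log(resolveEntryPoint(repo).endsWith(path.join('dist', 'index.js'))); // → true
```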
```typescript
let modulePath: string;
let importedModule: any;

try {
  // Handle different source types
  switch (sourceInfo.type) {
    case 'github': {
      CloudRunnerLogger.log(`Processing GitHub repository: ${sourceInfo.owner}/${sourceInfo.repo}`);

      // Ensure the repository is available locally
      const localRepoPath = await ProviderGitManager.ensureRepositoryAvailable(sourceInfo);

      // Get the path to the provider module within the repository
      modulePath = ProviderGitManager.getProviderModulePath(sourceInfo, localRepoPath);

      CloudRunnerLogger.log(`Loading provider from: ${modulePath}`);
      break;
```
🛠️ Refactor suggestion
Robust import of absolute file paths (ESM/CJS compatibility) and error-level logging
When modulePath is an absolute filesystem path, import via file:// URL to avoid resolution edge cases. Also log errors with error severity before rethrowing.
```diff
-import { parseProviderSource, logProviderSource, ProviderSourceInfo } from './provider-url-parser';
+import { parseProviderSource, logProviderSource, ProviderSourceInfo } from './provider-url-parser';
+import path from 'path';
+import { pathToFileURL } from 'url';
@@
-      // Import the module
-      importedModule = await import(modulePath);
+      // Import the module (support absolute filesystem paths)
+      const specifier = path.isAbsolute(modulePath) ? pathToFileURL(modulePath).href : modulePath;
+      importedModule = await import(specifier);
     } catch (error) {
-      throw new Error(`Failed to load provider package '${providerSource}': ${(error as Error).message}`);
+      CloudRunnerLogger.error(
+        `Failed to load provider package '${providerSource}': ${(error as Error).message}`
+      );
+      throw new Error(`Failed to load provider package '${providerSource}': ${(error as Error).message}`);
     }
```
Also applies to: 73-77
🤖 Prompt for AI Agents
In src/model/cloud-runner/providers/provider-loader.ts around lines 25-41 (and
apply the same change to lines 73-77), ensure absolute filesystem modulePath
values are converted to a file:// URL (use pathToFileURL from 'url') before
dynamic import to avoid ESM/CJS resolution edge cases; wrap the import/require
in a try/catch, log failures with CloudRunnerLogger.error including the
modulePath and error, then rethrow the error so callers can handle it.
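The file-URL conversion suggested above can be isolated into a tiny helper (the name `toImportSpecifier` is illustrative):

```typescript
// Sketch: convert absolute filesystem paths to file:// URLs before dynamic
// import; bare package names and relative specifiers pass through untouched.
import * as path from 'path';
import { pathToFileURL } from 'url';

export function toImportSpecifier(modulePath: string): string {
  return path.isAbsolute(modulePath) ? pathToFileURL(modulePath).href : modulePath;
}

// pathToFileURL percent-encodes spaces and handles Windows drive letters,
// which naive string concatenation ('file://' + path) gets wrong.
console.log(toImportSpecifier('/opt/providers/my provider/index.js'));
// e.g. file:///opt/providers/my%20provider/index.js on POSIX
console.log(toImportSpecifier('some-npm-package')); // → some-npm-package
```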
```typescript
branch: branch || 'main',
path: path || '',
url: `https://github.com/${owner}/${repo}`,
};
```
Don't hardcode 'main' — leave branch optional and adjust cache key
Defaulting to 'main' breaks repos whose default branch isn't 'main'. Let the branch be undefined and handle it in the git layer; update cache key accordingly.
Apply:
```diff
   repo,
-  branch: branch || 'main',
+  branch,
   path: path || '',
   url: `https://github.com/${owner}/${repo}`,
 };
@@
   repo,
-  branch: branch || 'main',
+  branch,
   path: path || '',
   url: `https://github.com/${owner}/${repo}`,
 };
@@
   repo,
-  branch: branch || 'main',
+  branch,
   path: path || '',
   url: `https://github.com/${owner}/${repo}`,
 };
@@
 export function generateCacheKey(urlInfo: GitHubUrlInfo): string {
-  return `github_${urlInfo.owner}_${urlInfo.repo}_${urlInfo.branch}`.replace(/[^\w-]/g, '_');
+  const branch = urlInfo.branch ?? 'HEAD';
+  return `github_${urlInfo.owner}_${urlInfo.repo}_${branch}`.replace(/[^\w-]/g, '_');
 }
```
Follow-up outside this file (ProviderGitManager.cloneRepository): include `--branch` only if provided; otherwise clone the default branch:

```typescript
// inside ProviderGitManager.cloneRepository
const branchArg = urlInfo.branch ? `--branch ${urlInfo.branch}` : '';
const cloneCommand = `git clone --depth 1 ${branchArg} ${urlInfo.url} "${localPath}"`;
```
Also applies to: 56-59, 71-74, 97-99
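The adjusted cache key can be exercised in isolation. The interface below is reduced to the fields used here, so it stands in for the real `GitHubUrlInfo` from `provider-url-parser`:

```typescript
// Sketch: cache key that tolerates an unset branch. 'HEAD' stands in for
// "whatever the remote default branch is" instead of hardcoding 'main'.
interface GitHubUrlInfo {
  owner: string;
  repo: string;
  branch?: string;
}

export function generateCacheKey(urlInfo: GitHubUrlInfo): string {
  const branch = urlInfo.branch ?? 'HEAD';
  // Collapse anything outside [A-Za-z0-9_-] so the key is filesystem-safe.
  return `github_${urlInfo.owner}_${urlInfo.repo}_${branch}`.replace(/[^\w-]/g, '_');
}

console.log(generateCacheKey({ owner: 'game-ci', repo: 'unity-builder' }));
// → github_game-ci_unity-builder_HEAD
console.log(generateCacheKey({ owner: 'game-ci', repo: 'unity-builder', branch: 'feature/x' }));
// → github_game-ci_unity-builder_feature_x
```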
```typescript
import { GitHubUrlInfo } from '../../providers/provider-url-parser';
import * as fs from 'fs';

// Mock @actions/core to fix fs.promises compatibility issue
jest.mock('@actions/core', () => ({
  info: jest.fn(),
  warning: jest.fn(),
  error: jest.fn(),
}));

// Mock fs module
jest.mock('fs');
```
🛠️ Refactor suggestion
Fix ESLint errors: import order and unused vars
Resolve import/first and no-unused-vars to keep CI green.
Apply:
```diff
+import { ProviderGitManager } from '../../providers/provider-git-manager';
 import { GitHubUrlInfo } from '../../providers/provider-url-parser';
 import * as fs from 'fs';
 // Mock @actions/core to fix fs.promises compatibility issue
@@
-const mockExecAsync = jest.fn();
@@
-const mockFs = fs as jest.Mocked<typeof fs>;
@@
-// Import the mocked ProviderGitManager
-import { ProviderGitManager } from '../../providers/provider-git-manager';
+// Import the mocked ProviderGitManager (import/first rule satisfied above)
 const mockProviderGitManager = ProviderGitManager as jest.Mocked<typeof ProviderGitManager>;
```
Also applies to: 15-15, 29-29, 32-34
🤖 Prompt for AI Agents
In src/model/cloud-runner/tests/providers/provider-git-manager.test.ts around
lines 1-12 (also apply fixes at 15, 29, and 32-34): ESLint complains about
import/order and no-unused-vars; reorder imports so built-ins come first (import
* as fs from 'fs';) then relative imports (import type { GitHubUrlInfo } from
'../../providers/provider-url-parser';) and convert the unused GitHubUrlInfo to
a type-only import or remove it if truly unused, then keep the jest.mock calls
after all imports; ensure any other unused variables at the noted lines are
removed or converted to type-only imports to satisfy no-unused-vars.
```typescript
describe('loadProvider', () => {
  it('loads a built-in provider dynamically', async () => {
    const provider: ProviderInterface = await loadProvider('./test', {} as any);
    expect(typeof provider.runTaskInWorkflow).toBe('function');
  });

  it('loads a local provider from relative path', async () => {
    const provider: ProviderInterface = await loadProvider('./test', {} as any);
    expect(typeof provider.runTaskInWorkflow).toBe('function');
  });
```
Regression risk: built-in names like 'aws' resolve as NPM, not internal modules
Current flow parses 'aws' as npm and never reaches the built-in map in provider-loader. Add a test and fix loader to map built-ins before parsing.
Add test:
```typescript
it('loads a built-in provider by name', async () => {
  const provider = await loadProvider('test', {} as any);
  expect(typeof provider.runTaskInWorkflow).toBe('function');
});
```
Update provider-loader (outside this file) before calling parseProviderSource:
```typescript
const builtins: Record<string, string> = {
  aws: './aws',
  k8s: './k8s',
  test: './test',
  'local-docker': './docker',
  'local-system': './local',
  local: './local',
};
if (providerSource in builtins) {
  importedModule = await import(builtins[providerSource]);
  // ... instantiate/validate and return
}
```
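The built-in lookup can be sketched as a small standalone resolver; the `'external'` fall-through stands in for the existing npm/GitHub parsing, and the module paths are illustrative:

```typescript
// Sketch: map built-in provider names to internal module paths before any
// npm/GitHub URL parsing, so 'aws' never gets misparsed as an npm package.
const BUILTINS: Record<string, string> = {
  aws: './aws',
  k8s: './k8s',
  test: './test',
  'local-docker': './docker',
  'local-system': './local',
  local: './local',
};

export function resolveProviderSpecifier(providerSource: string): {
  kind: 'builtin' | 'external';
  specifier: string;
} {
  if (providerSource in BUILTINS) {
    return { kind: 'builtin', specifier: BUILTINS[providerSource] };
  }
  // Anything else falls through to the url-parser / npm path.
  return { kind: 'external', specifier: providerSource };
}

console.log(resolveProviderSpecifier('aws')); // → { kind: 'builtin', specifier: './aws' }
console.log(resolveProviderSpecifier('@my-org/provider')); // → { kind: 'external', specifier: '@my-org/provider' }
```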
I took a look at the first half, but overall, that looks really good ❤️
```yaml
permissions:
  checks: write
  contents: read
  actions: write
  packages: read
  pull-requests: write
  statuses: write
  id-token: write
```
Discussion: since this is LocalStack, reducing these permissions may be less necessary, but on the other hand the suggestion mentions mirroring the other workflow, which is probably a good idea.
```yaml
env:
  UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }}
  UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }}
  UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }}
  PROJECT_PATH: test-project
  TARGET_PLATFORM: StandaloneWindows64
  cloudRunnerTests: true
  versioning: None
  KUBE_STORAGE_CLASS: local-path
  PROVIDER_STRATEGY: aws
  AWS_ACCESS_KEY_ID: test
  AWS_SECRET_ACCESS_KEY: test
  GIT_PRIVATE_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
  GITHUB_TOKEN: ${{ secrets.GIT_PRIVATE_TOKEN }}
```
Is it the case? If so, looks like a good suggestion.
```yaml
permissions:
  checks: write
  contents: read
  actions: write
  packages: read
  pull-requests: write
  statuses: write
  id-token: write
```
Since this is the publicly running one, could be a good idea to reduce permissions in here. I think this is a good suggestion, we should probably try this and see if workflows are still succeeding.
```yaml
if: ${{ matrix.provider == 'k8s' }}
uses: azure/setup-kubectl@v4
with:
  version: 'v1.29.0'
```
Latest version is 1.34.1 (released: 2025-09-09).

```diff
-          version: 'v1.29.0'
+          version: 'v1.34.1'
```
Let's just make sure it still works after updating. There could be breaking changes and I don't have full context on how kubectl is used yet.
```yaml
          for i in {1..60}; do kubectl get nodes && break || sleep 5; done
          kubectl get storageclass
      - name: Start LocalStack (S3)
        uses: localstack/setup-localstack@v0.2.3
```
Latest version is v0.2.4. I don't see any breaking change.

```diff
-        uses: localstack/setup-localstack@v0.2.3
+        uses: localstack/setup-localstack@v0.2.4
```
```yaml
name: Cloud Runner Integrity
uses: ./.github/workflows/cloud-runner-integrity.yml
secrets: inherit
```
Sounds right 👍
```markdown
# Provider Loader Dynamic Imports

The provider loader now supports dynamic loading of providers from multiple sources, including local file paths, GitHub repositories, and NPM packages.
```
I think we should also document what is a provider so we can have a better idea of why this is useful. We should probably document this in here as well: https://game.ci/docs/github-cloud-runner/introduction/
Something like this:
A provider is a pluggable backend that Cloud Runner uses to run builds and workflows. Examples include AWS, Kubernetes, or local execution. Each provider implements the ProviderInterface, which defines the common lifecycle methods (setup, run, cleanup, garbage collection, etc.).
This abstraction makes Cloud Runner flexible: you can switch execution environments or add your own provider (via npm package, GitHub repo, or local path) without changing the rest of your pipeline.
```markdown
## Best Practices

1. **Use specific branches**: Always specify the branch when loading from GitHub
```
```diff
-1. **Use specific branches**: Always specify the branch when loading from GitHub
+1. **Use specific branches or versions**: Always specify the branch or a specific tag when loading from GitHub
```
```diff
   const result: { taskElement: any; element: string }[] = [];
   CloudRunnerLogger.log(``);
   CloudRunnerLogger.log(`List Tasks`);
   process.env.AWS_REGION = Input.region;
-  const ecs = new ECS({ region: Input.region });
+  const ecs = AwsClientFactory.getECS();
   const clusters = (await ecs.send(new ListClustersCommand({}))).clusterArns || [];
```
I think this is a good suggestion, and it gets rid of an `any` 👍 I would also try to get rid of the other `any`s in the current file.
@coderabbitai do you see a way to make this more robust without compromising readability in the full task service?
```typescript
const inline = commands
  .replace(/"/g, '\\"')
```
👀 sounds right. Maybe a library could be a good idea here as well 🤷
@coderabbitai what would you do?
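One dependency-free alternative to chained `.replace()` calls is classic POSIX single-quote escaping; a library such as `shell-quote` implements the same idea. The helper below is a hedged sketch, not the PR's implementation:

```typescript
// Sketch: POSIX-safe single-quote escaping. Inside single quotes, nothing is
// interpreted by the shell, so the only character to handle is the single
// quote itself: close the quote, emit an escaped quote, reopen ('\'').
export function shQuote(arg: string): string {
  return `'${arg.replace(/'/g, "'\\''")}'`;
}

console.log(shQuote('echo "hi"; rm -rf /'));
// → 'echo "hi"; rm -rf /'
console.log(shQuote("it's fine"));
// → 'it'\''s fine'
```

This quotes each argument wholesale instead of trying to enumerate dangerous characters, which is the usual failure mode of `.replace(/"/g, '\\"')`-style escaping.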
Changes

Related Issues

Related PRs

Successful Workflow Run Link

PRs don't have access to secrets, so you will need to provide a link to a successful run of the workflows from your own repo.

Checklist

- code of conduct
- in the documentation repo)

Summary by CodeRabbit

- New Features
- Tests
- Documentation
- Refactor
- Chores