feat: add supply chain security check workflow for PRs #227
Conversation
Add a CI workflow that runs on PR creation to analyze dependency changes for supply chain attack risks and known vulnerabilities, posting results as a comment on the PR. https://claude.ai/code/session_01Vxfk5wpHFgVHZsof9pdVRb
Replace github-script with claude-code-action so Claude analyzes the supply chain check report and posts an intelligent, contextual comment. https://claude.ai/code/session_01Vxfk5wpHFgVHZsof9pdVRb
Claude now searches the web for each changed/added package to find recent CVEs, supply chain attack reports, and typosquatting incidents, providing more thorough analysis beyond static checks. https://claude.ai/code/session_01Vxfk5wpHFgVHZsof9pdVRb
Pull request overview
Adds an automated supply-chain security check for pull requests by generating a dependency-change + risk + npm audit markdown report and then invoking a Claude action to research and comment on the PR.
Changes:
- Introduces supply-chain check logic (`findDependencyChanges`, risk scoring, report generation, metadata fetch, `npm audit` integration).
- Adds a CI runner script to execute the check in GitHub Actions and emit a markdown report.
- Adds a new PR workflow that runs the check and posts a sticky PR comment (plus unit tests and a version bump).
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 7 comments.
Show a summary per file
| File | Description |
|---|---|
| `.github/workflows/supply-chain-check.yml` | New PR workflow to run the supply-chain report and Claude analysis. |
| `scripts/supply-chain-check.ts` | Core supply-chain analysis + report generation + npm registry/audit integration. |
| `scripts/supply-chain-check-runner.ts` | CLI runner used by CI to compare base/head package.json and print the report. |
| `tests/scripts/supply-chain-check.test.ts` | Vitest coverage for dependency diffing, risk scoring, and report output. |
| `package.json` | Version bump to satisfy repo PR validation/versioning policy. |
```yaml
id: check
run: |
  BASE_PKG=$(cat /tmp/base-package.json)
  npx tsx scripts/supply-chain-check-runner.ts "$BASE_PKG" > /tmp/supply-chain-report.md
```
The workflow runs `npx tsx ...`, but `tsx` is not listed in `package.json` dependencies/devDependencies, so this step may fail (or unexpectedly download tooling at runtime). Prefer using the already-present `ts-node` to run the runner script, or add/pin `tsx` as a devDependency and invoke it via `npx --no-install`.
Suggested change:

```diff
-npx tsx scripts/supply-chain-check-runner.ts "$BASE_PKG" > /tmp/supply-chain-report.md
+npx ts-node scripts/supply-chain-check-runner.ts "$BASE_PKG" > /tmp/supply-chain-report.md
```
```typescript
async function main() {
  const basePackageJsonStr = process.argv[2];
  if (!basePackageJsonStr) {
    console.error('Usage: supply-chain-check-runner.ts <base-package-json>');
    process.exit(1);
  }

  const basePackage = JSON.parse(basePackageJsonStr);
  const headPackage = JSON.parse(fs.readFileSync('package.json', 'utf-8'));
```
Passing the entire base branch `package.json` as a single CLI argument is brittle (ARG_MAX limits, embedded newlines) and makes failures hard to diagnose. Consider changing the runner to accept a file path (e.g., `/tmp/base-package.json`) or read the base JSON from stdin instead of `process.argv[2]`.
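A minimal sketch of the file-path variant of this suggestion (the function name and error text are illustrative, not the repo's actual code):

```typescript
import * as fs from 'node:fs';

// Hypothetical helper: read the base package.json from a file path passed
// as argv[2] instead of passing the JSON body itself on the command line.
// Reading from disk sidesteps ARG_MAX limits and shell-quoting issues with
// embedded newlines in the JSON.
function readBasePackage(path: string): Record<string, unknown> {
  if (!path) {
    throw new Error('Usage: supply-chain-check-runner.ts <base-package-json-path>');
  }
  return JSON.parse(fs.readFileSync(path, 'utf-8'));
}
```

The workflow step then shrinks to `npx ts-node scripts/supply-chain-check-runner.ts /tmp/base-package.json`, and a bad path fails with a readable ENOENT instead of a truncated-argument parse error.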
```typescript
} catch (error) {
  riskResults.push({
    pkg: pkg.name,
    risks: [
      {
        type: 'no-repository' as const,
        severity: 'high' as const,
        message: `Failed to fetch package metadata: ${error instanceof Error ? error.message : String(error)}`,
      },
    ],
  });
```
The catch block reports metadata fetch failures as a `no-repository` risk signal, which is misleading and will be interpreted as an actual package property. Add a distinct risk type for metadata fetch failures (e.g., `metadata-fetch-failed`), or represent fetch errors separately so the report remains semantically accurate.
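A sketch of the dedicated risk type this comment proposes; the type and field names mirror the diff above but are assumptions about the repo's actual `Risk` shape:

```typescript
// Hypothetical: extend the risk-type union with a dedicated fetch-failure
// variant so the report distinguishes "package has no repository" from
// "we couldn't reach the registry at all".
type RiskType = 'no-repository' | 'metadata-fetch-failed';

interface Risk {
  type: RiskType;
  severity: 'low' | 'medium' | 'high';
  message: string;
}

// Builds the risk entry for the catch block, preserving the error detail.
function fetchFailureRisk(error: unknown): Risk {
  return {
    type: 'metadata-fetch-failed',
    severity: 'high',
    message: `Failed to fetch package metadata: ${
      error instanceof Error ? error.message : String(error)
    }`,
  };
}
```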
`scripts/supply-chain-check.ts` (Outdated)
```typescript
const publishedDate = new Date(metadata.publishedAt);
const daysSincePublish = Math.floor(
  (Date.now() - publishedDate.getTime()) / (1000 * 60 * 60 * 24)
);
if (daysSincePublish <= NEW_VERSION_DAYS) {
  risks.push({
    type: 'very-new-version',
    severity: 'high',
    message: `Package version published ${daysSincePublish} days ago`,
  });
```
`analyzePackageRisk` doesn't handle invalid or future `publishedAt` values: `new Date(publishedAt)` can be invalid, producing NaN days, and future dates yield negative days (e.g., "published -2 days ago"). Consider validating the parsed date and clamping or handling negative/NaN values (and/or emitting a dedicated risk signal for invalid publish timestamps).
Suggested change:

```diff
-const publishedDate = new Date(metadata.publishedAt);
-const daysSincePublish = Math.floor(
-  (Date.now() - publishedDate.getTime()) / (1000 * 60 * 60 * 24)
-);
-if (daysSincePublish <= NEW_VERSION_DAYS) {
-  risks.push({
-    type: 'very-new-version',
-    severity: 'high',
-    message: `Package version published ${daysSincePublish} days ago`,
-  });
+const publishedTime = new Date(metadata.publishedAt).getTime();
+if (!Number.isNaN(publishedTime)) {
+  const diffMs = Date.now() - publishedTime;
+  if (diffMs >= 0) {
+    const daysSincePublish = Math.floor(
+      diffMs / (1000 * 60 * 60 * 24)
+    );
+    if (daysSincePublish <= NEW_VERSION_DAYS) {
+      risks.push({
+        type: 'very-new-version',
+        severity: 'high',
+        message: `Package version published ${daysSincePublish} days ago`,
+      });
+    }
+  }
```
```typescript
const versionData = data.versions?.[version] ?? {};
const timeData = data.time ?? {};

const downloadsUrl = `https://api.npmjs.org/downloads/point/last-week/${encodeURIComponent(packageName)}`;
let weeklyDownloads = 0;
try {
  const dlResponse = await fetch(downloadsUrl);
  if (dlResponse.ok) {
    const dlData = await dlResponse.json();
    weeklyDownloads = dlData.downloads ?? 0;
  }
} catch {
  // Ignore download count fetch failures
}

return {
  name: packageName,
  version,
  publishedAt: timeData[version] ?? timeData.created ?? new Date().toISOString(),
  maintainerCount: (data.maintainers ?? []).length,
  weeklyDownloads,
  hasTypes: !!versionData.types || !!versionData.typings,
  license: versionData.license ?? data.license,
  repositoryUrl: versionData.repository?.url ?? data.repository?.url,
  description: data.description,
};
```
`fetchPackageMetadata` assumes `version` is an exact key in `data.versions`/`data.time`. If the dependency spec is a range/tag/path (e.g., `^1.2.3`, `latest`, `file:`), `versionData` will be empty and `publishedAt` falls back to `new Date().toISOString()`, which will incorrectly flag the package as a very new version. Consider resolving the spec to an exact version (or failing fast/marking it as an unresolved version spec) before computing risk signals.
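One possible shape for that resolution step, assuming the standard npm registry document layout (`versions` keyed by exact version, `dist-tags` mapping tags like `latest` to versions); full range resolution would need a semver library, so this sketch only handles exact versions and tags and flags everything else:

```typescript
// Hypothetical helper: resolve a dependency spec to an exact version key
// before indexing data.versions / data.time. Returns null for specs it
// cannot resolve (ranges, file:, git:), which the caller can surface as an
// "unresolved version spec" signal instead of a fake publish date.
interface RegistryDoc {
  versions?: Record<string, unknown>;
  'dist-tags'?: Record<string, string>;
}

function resolveVersionKey(doc: RegistryDoc, spec: string): string | null {
  // Exact version, e.g. "1.2.3"
  if (doc.versions && spec in doc.versions) return spec;
  // Dist-tag, e.g. "latest" or "next"
  const tagged = doc['dist-tags']?.[spec];
  if (tagged && doc.versions && tagged in doc.versions) return tagged;
  // Range/path/git spec: caller should resolve via semver or flag it
  return null;
}
```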
```typescript
export async function fetchPackageMetadata(
  packageName: string,
  version: string
): Promise<PackageMetadata> {
  const registryUrl = `https://registry.npmjs.org/${encodeURIComponent(packageName)}`;
  const response = await fetch(registryUrl);
  if (!response.ok) {
    throw new Error(`Failed to fetch metadata for ${packageName}: ${response.status}`);
  }
```
Unit tests cover `findDependencyChanges`/`analyzePackageRisk`/`generateReport`, but `fetchPackageMetadata` isn't covered. Consider adding tests that mock `global.fetch` to verify parsing of registry responses and error handling (non-OK responses, missing version keys, download API failures).
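A framework-free sketch of the stubbing such tests would need. In the repo's Vitest suite this would typically be `vi.stubGlobal('fetch', vi.fn())`; the helper below shows the same idea with a plain async function, and the registry document shape is an assumption based on the diff above:

```typescript
// Hypothetical canned registry response used by the stub below.
const registryDoc = {
  'dist-tags': { latest: '1.0.0' },
  versions: { '1.0.0': { license: 'MIT' } },
  time: { '1.0.0': '2024-01-01T00:00:00.000Z' },
  maintainers: [{ name: 'alice' }],
};

// Swaps global fetch for a stub for the duration of fn, then restores it.
// Registry URLs get the canned document; the downloads API gets a 404 so
// the "download API failure" branch is exercised.
async function withStubbedFetch<T>(fn: () => Promise<T>): Promise<T> {
  const realFetch = globalThis.fetch;
  globalThis.fetch = (async (url: string | URL) => {
    const href = String(url);
    if (href.includes('api.npmjs.org')) {
      return new Response('not found', { status: 404 });
    }
    return new Response(JSON.stringify(registryDoc), { status: 200 });
  }) as typeof fetch;
  try {
    return await fn();
  } finally {
    globalThis.fetch = realFetch; // always restore the real fetch
  }
}
```

A test would then call `fetchPackageMetadata('some-pkg', '1.0.0')` inside `withStubbedFetch` and assert on the parsed fields (license, maintainer count, zeroed download count).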
```typescript
export async function runNpmAudit(): Promise<NpmAuditResult> {
  const { execSync } = await import('node:child_process');
  try {
    const output = execSync('npm audit --json 2>/dev/null', {
      encoding: 'utf-8',
      timeout: 60000,
    });
    const result = JSON.parse(output);
    const vuln = result.metadata?.vulnerabilities ?? {};
    return {
      vulnerabilities: {
        total: (vuln.critical ?? 0) + (vuln.high ?? 0) + (vuln.moderate ?? 0) + (vuln.low ?? 0),
        critical: vuln.critical,
        high: vuln.high,
        moderate: vuln.moderate,
        low: vuln.low,
      },
      advisories: Object.values(result.advisories ?? {}).map((a: Record<string, unknown>) => ({
        severity: a.severity as string,
        title: a.title as string,
        module_name: a.module_name as string,
        url: a.url as string,
      })),
    };
  } catch (error: unknown) {
    // npm audit exits with non-zero when vulnerabilities are found
    const err = error as { stdout?: string };
    if (err.stdout) {
      try {
        const result = JSON.parse(err.stdout);
        const vuln = result.metadata?.vulnerabilities ?? {};
        return {
          vulnerabilities: {
            total: (vuln.critical ?? 0) + (vuln.high ?? 0) + (vuln.moderate ?? 0) + (vuln.low ?? 0),
            critical: vuln.critical,
            high: vuln.high,
            moderate: vuln.moderate,
            low: vuln.low,
          },
          advisories: Object.values(result.advisories ?? {}).map((a: Record<string, unknown>) => ({
            severity: a.severity as string,
            title: a.title as string,
            module_name: a.module_name as string,
            url: a.url as string,
          })),
        };
      } catch {
        // Parse failure
      }
    }
    return { vulnerabilities: { total: 0 } };
  }
```
`runNpmAudit` is untested, including the important path where `npm audit` exits non-zero but still returns JSON on stdout. Consider adding tests that mock `child_process.execSync` for both success and throw-with-stdout cases to prevent regressions across npm versions.
Suggested change:

```diff
-export async function runNpmAudit(): Promise<NpmAuditResult> {
-  const { execSync } = await import('node:child_process');
-  try {
-    const output = execSync('npm audit --json 2>/dev/null', {
-      encoding: 'utf-8',
-      timeout: 60000,
-    });
-    const result = JSON.parse(output);
-    const vuln = result.metadata?.vulnerabilities ?? {};
-    return {
-      vulnerabilities: {
-        total: (vuln.critical ?? 0) + (vuln.high ?? 0) + (vuln.moderate ?? 0) + (vuln.low ?? 0),
-        critical: vuln.critical,
-        high: vuln.high,
-        moderate: vuln.moderate,
-        low: vuln.low,
-      },
-      advisories: Object.values(result.advisories ?? {}).map((a: Record<string, unknown>) => ({
-        severity: a.severity as string,
-        title: a.title as string,
-        module_name: a.module_name as string,
-        url: a.url as string,
-      })),
-    };
-  } catch (error: unknown) {
-    // npm audit exits with non-zero when vulnerabilities are found
-    const err = error as { stdout?: string };
-    if (err.stdout) {
-      try {
-        const result = JSON.parse(err.stdout);
-        const vuln = result.metadata?.vulnerabilities ?? {};
-        return {
-          vulnerabilities: {
-            total: (vuln.critical ?? 0) + (vuln.high ?? 0) + (vuln.moderate ?? 0) + (vuln.low ?? 0),
-            critical: vuln.critical,
-            high: vuln.high,
-            moderate: vuln.moderate,
-            low: vuln.low,
-          },
-          advisories: Object.values(result.advisories ?? {}).map((a: Record<string, unknown>) => ({
-            severity: a.severity as string,
-            title: a.title as string,
-            module_name: a.module_name as string,
-            url: a.url as string,
-          })),
-        };
-      } catch {
-        // Parse failure
-      }
-    }
-    return { vulnerabilities: { total: 0 } };
-  }
+function parseNpmAuditJson(json: string): NpmAuditResult | null {
+  try {
+    const result = JSON.parse(json);
+    const vuln = result.metadata?.vulnerabilities ?? {};
+    return {
+      vulnerabilities: {
+        total:
+          (vuln.critical ?? 0) +
+          (vuln.high ?? 0) +
+          (vuln.moderate ?? 0) +
+          (vuln.low ?? 0),
+        critical: vuln.critical,
+        high: vuln.high,
+        moderate: vuln.moderate,
+        low: vuln.low,
+      },
+      advisories: Object.values(result.advisories ?? {}).map(
+        (a: Record<string, unknown>) => ({
+          severity: a.severity as string,
+          title: a.title as string,
+          module_name: a.module_name as string,
+          url: a.url as string,
+        }),
+      ),
+    };
+  } catch {
+    // If parsing fails, signal to the caller that no valid result could be produced.
+    return null;
+  }
+}
+
+export async function runNpmAudit(): Promise<NpmAuditResult> {
+  const { execSync } = await import('node:child_process');
+  try {
+    const output = execSync('npm audit --json 2>/dev/null', {
+      encoding: 'utf-8',
+      timeout: 60000,
+    });
+    const parsed = parseNpmAuditJson(output);
+    if (parsed) {
+      return parsed;
+    }
+  } catch (error: unknown) {
+    // npm audit exits with non-zero when vulnerabilities are found
+    const err = error as { stdout?: string };
+    if (err.stdout) {
+      const parsed = parseNpmAuditJson(err.stdout);
+      if (parsed) {
+        return parsed;
+      }
+    }
+  }
+  // If we reach here, we either couldn't parse the output or had no usable stdout.
+  return { vulnerabilities: { total: 0 } };
```
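A self-contained sketch of the throw-with-stdout test case this review asks for. The mock mirrors the error shape `execSync` actually throws (it attaches the captured `stdout` to the error object); `mockExecSyncThrow` and `recoverAuditJson` are illustrative names, with the recovery logic following the catch branch in the suggestion above:

```typescript
// Hypothetical mock of execSync's failure mode: npm audit exits 1 when
// vulnerabilities are found, and execSync attaches captured stdout to the
// thrown error.
function mockExecSyncThrow(stdout: string): never {
  const err = new Error('Command failed: npm audit --json') as Error & {
    stdout?: string;
    status?: number;
  };
  err.stdout = stdout;
  err.status = 1;
  throw err;
}

// Mirrors runNpmAudit's recovery logic: parse the command output on
// success, or fall back to the JSON attached to the thrown error.
function recoverAuditJson(run: () => string): unknown | null {
  try {
    return JSON.parse(run());
  } catch (error) {
    const err = error as { stdout?: string };
    if (err.stdout) {
      try {
        return JSON.parse(err.stdout);
      } catch {
        return null; // stdout present but not valid JSON
      }
    }
    return null; // no usable stdout on the error
  }
}
```

A Vitest version would instead `vi.mock('node:child_process')` and point `execSync` at these behaviors, asserting that `runNpmAudit` returns the parsed vulnerability counts in both the success and throw-with-stdout cases.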
- Use ts-node instead of tsx (already in devDependencies) (#1)
- Accept file path instead of JSON CLI argument to avoid ARG_MAX (#2)
- Add dedicated metadata-fetch-failed risk type (#3)
- Validate publishedAt for NaN and future dates (#4)
- Extract parseNpmAuditJson for testability (#7)
- Add tests for fetchPackageMetadata with fetch mocks (#6)
- Add tests for parseNpmAuditJson (#7)

https://claude.ai/code/session_01Vxfk5wpHFgVHZsof9pdVRb