Conversation
- scripts/assets.mjs: processes source images → optimised originals + WebP
- source-assets/ directory with README for raw asset intake
- npm scripts: assets, assets:check, assets:clean
- sharp for compression (PNG level 9, MozJPEG q82, WebP q80)
- Content-hash manifest for idempotent re-runs

Co-authored-by: brianruggieri <2104689+brianruggieri@users.noreply.github.com>
…naming

Co-authored-by: brianruggieri <2104689+brianruggieri@users.noreply.github.com>
Pull request overview
Introduces a Node-based image asset pipeline to manage original (source) images separately from optimised, deployable outputs, with incremental processing via a manifest.
Changes:
- Added `scripts/assets.mjs` to hash, optimise, and emit images into `static/img/` (plus WebP variants) and track state in `source-assets/manifest.json`.
- Added `source-assets/README.md` documenting the workflow, formats, and directory mirroring.
- Updated `package.json`/`package-lock.json` to add `sharp` and new `npm run assets*` commands.
Reviewed changes
Copilot reviewed 3 out of 4 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| source-assets/README.md | Documents the intended asset workflow and directory layout. |
| scripts/assets.mjs | Implements the asset processing/check/clean CLI and manifest logic. |
| package.json | Adds assets scripts and sharp devDependency. |
| package-lock.json | Locks sharp and its transitive dependencies. |
```js
// Skip if manifest shows the same hash (already processed)
if (manifest[relPath]?.sourceHash === srcHash) {
  return { relPath, skipped: true };
}
```
The skip path only checks the stored source hash. If a generated output file (e.g., static/img/... .webp) is deleted or missing, this will still skip processing and leave the site without the expected assets. Consider verifying that every output listed in the manifest exists (and optionally matches expected size/hash) before returning skipped; otherwise re-run processing for that source.
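A minimal sketch of the suggested fix, assuming the manifest entry records its generated files as `{ sourceHash, outputs: [{ file }] }` relative to the output directory (shape inferred from the hunks quoted in this review, not confirmed against the full script):

```javascript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Treat a source as already processed only when the stored hash matches AND
// every output recorded in the manifest is still present on disk.
function isUpToDate(entry, srcHash, outputDir) {
  if (!entry || entry.sourceHash !== srcHash) return false;
  return (entry.outputs ?? []).every((o) => existsSync(join(outputDir, o.file)));
}
```

With a guard like this, deleting a generated `.webp` forces that source back through processing on the next run instead of being skipped.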
```js
for (const f of files) {
  const relPath = relative(SOURCE_DIR, f);
  const srcBuf = await readFile(f);
  const srcHash = hash(srcBuf);
  if (manifest[relPath]?.sourceHash !== srcHash) {
    console.log(`  ✗ ${relPath} (needs processing)`);
    stale++;
  }
}

if (stale === 0) {
  console.log("All source assets are up-to-date.");
} else {
  console.log(`\n${stale} asset(s) need processing. Run: npm run assets`);
  process.exitCode = 1;
}
```
assets:check only compares source hashes to the manifest. If generated outputs are missing or were accidentally removed, this command will still report success (and assets will also skip). Consider also validating that all manifest outputs for a source exist under static/img/ (or re-derive expected output paths and check for their presence).
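One hedged way to extend `assets:check` along these lines, again assuming the manifest stores generated files as `{ outputs: [{ file }] }` (shape taken from the hunks above, function name hypothetical):

```javascript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Walk every manifest entry and report outputs that no longer exist under
// the output directory, so assets:check can fail even when all hashes match.
function missingOutputs(manifest, outputDir) {
  const missing = [];
  for (const [src, entry] of Object.entries(manifest)) {
    for (const o of entry.outputs ?? []) {
      if (!existsSync(join(outputDir, o.file))) {
        missing.push({ src, file: o.file });
      }
    }
  }
  return missing;
}
```

The check command could then treat a non-empty result the same as `stale > 0` and set `process.exitCode = 1`.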
```js
import sharp from "sharp";

// ── Paths ────────────────────────────────────────────────────────────────────
const ROOT = resolve(import.meta.dirname, "..");
```
import.meta.dirname is only available in newer Node versions; this will throw on Node 18/early 20 even though sharp supports Node 18. If this repo isn't explicitly pinned to Node 20.11+/22 for local tooling, consider using the portable fileURLToPath(import.meta.url) pattern to compute the script directory.
Suggested change:

```diff
-import sharp from "sharp";
-// ── Paths ────────────────────────────────────────────────────────────────────
-const ROOT = resolve(import.meta.dirname, "..");
+import { fileURLToPath } from "node:url";
+import sharp from "sharp";
+// ── Paths ────────────────────────────────────────────────────────────────────
+const ROOT = resolve(dirname(fileURLToPath(import.meta.url)), "..");
```
```js
} else if (ext === ".bmp") {
  // Convert BMP → PNG for web
  optimised = await image.clone().png(PNG_OPTS).toBuffer();
  outExt = ".png";
}

const origOutName = basename(relPath, ext) + outExt;
const origOutRel = join(outRelDir, origOutName);
await writeFile(join(OUTPUT_DIR, origOutRel), optimised);
```
BMP sources are written out as PNG with the same basename. If both foo.bmp and foo.png exist under source-assets, they'll collide and overwrite the same static/img/.../foo.png and foo.webp. Consider either preserving the original extension in the output name for BMP-derived PNGs or detecting collisions and failing with a clear error.
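A sketch of the collision-detection option (helper name hypothetical): record each claimed output path and fail before overwriting.

```javascript
// Because BMP sources are re-extensioned to .png, foo.bmp and foo.png would
// both emit foo.png / foo.webp. Tracking claimed output paths turns that
// silent overwrite into a clear error.
const claimedOutputs = new Map(); // output path → source that produced it
function claimOutput(outRel, sourceRel) {
  if (claimedOutputs.has(outRel)) {
    throw new Error(
      `Output collision: ${outRel} produced by both ${claimedOutputs.get(outRel)} and ${sourceRel}`,
    );
  }
  claimedOutputs.set(outRel, sourceRel);
}
```

Calling `claimOutput(origOutRel, relPath)` before each `writeFile` would surface the `foo.bmp`/`foo.png` clash immediately instead of shipping whichever source was processed last.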
```js
const webpBuf = await image.clone().webp(WEBP_OPTS).toBuffer();
const webpName = basename(relPath, ext) + ".webp";
const webpRel = join(outRelDir, webpName);
await writeFile(join(OUTPUT_DIR, webpRel), webpBuf);
results.outputs.push({
  file: webpRel,
  format: "webp",
  width: meta.width,
  height: meta.height,
  size: webpBuf.length,
});
```
GIF handling keeps the original bytes but still generates a WebP variant from sharp(srcBuf) without enabling animated input/output. For animated GIFs this typically converts only the first frame, which is likely unexpected. Consider either excluding GIFs from WebP generation or explicitly supporting animated conversion (and documenting the chosen behavior).
Suggested change:

```diff
-const webpBuf = await image.clone().webp(WEBP_OPTS).toBuffer();
-const webpName = basename(relPath, ext) + ".webp";
-const webpRel = join(outRelDir, webpName);
-await writeFile(join(OUTPUT_DIR, webpRel), webpBuf);
-results.outputs.push({
-  file: webpRel,
-  format: "webp",
-  width: meta.width,
-  height: meta.height,
-  size: webpBuf.length,
-});
+if (ext !== ".gif") {
+  const webpBuf = await image.clone().webp(WEBP_OPTS).toBuffer();
+  const webpName = basename(relPath, ext) + ".webp";
+  const webpRel = join(outRelDir, webpName);
+  await writeFile(join(OUTPUT_DIR, webpRel), webpBuf);
+  results.outputs.push({
+    file: webpRel,
+    format: "webp",
+    width: meta.width,
+    height: meta.height,
+    size: webpBuf.length,
+  });
+}
```
```
├── achievements/
│   └── coin-clicker-gold.png
├── me-illustration.png
└── manifest.json          ← auto-generated, do not edit
```
This README states manifest.json is auto-generated and should not be edited, but the repository .gitignore currently doesn't ignore source-assets/manifest.json. Unless you intend to commit the manifest, it will create noisy diffs for each run. Consider adding it to .gitignore (and mentioning whether it should be committed).
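If the manifest is indeed not meant to be committed, the minimal ignore entry would be (path assumed from the layout above):

```
# .gitignore
source-assets/manifest.json
```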
```js
processed++;

const srcSize = (await stat(f)).size;
const outSize = res.outputs.reduce((s, o) => s + o.size, 0);
```
outSize is computed but never used, which makes the loop harder to follow and suggests leftover debug logic. Consider removing it or using it (e.g., for logging total output size) to avoid dead code.
Suggested change:

```diff
-const outSize = res.outputs.reduce((s, o) => s + o.size, 0);
```
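Alternatively, if the value is worth keeping, a small sketch of using it in the per-file log (function name and log format hypothetical):

```javascript
// Surface the otherwise-dead outSize as a compression summary per asset.
function summarise(relPath, srcSize, outputs) {
  const outSize = outputs.reduce((s, o) => s + o.size, 0);
  const pct = srcSize > 0 ? Math.round((outSize / srcSize) * 100) : 0;
  return `  ✓ ${relPath} (${srcSize} B → ${outSize} B, ${pct}%)`;
}
```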
Automated pipeline to ingest raw images, compress originals, generate WebP variants, and place them in `static/img/` matching the site's existing `<picture>` element conventions.

Pipeline (`scripts/assets.mjs`)
- `source-assets/` mirroring `static/img/` structure

npm scripts
- `npm run assets` — process new/changed source images
- `npm run assets:check` — verify all sources are up-to-date (CI gate)
- `npm run assets:clean` — remove generated outputs and manifest

Usage
Existing images and templates are untouched — this adds a new workflow alongside them.