feat: reliability improvements for mobile uploads #1
Conversation
Covers four improvements: days remaining fix, chunked uploads, client-side probe, and basic PWA support.
Previously, the dashboard displayed the static RetentionDays value instead of the actual remaining time until expiration.
- Add manifest.json with app metadata and icons
- Add minimal service worker for installability
- Register SW and link manifest in layout
- Remove /probe endpoint route (saves a double upload)
- Add client-side probe using HTMLVideoElement/HTMLAudioElement
- Shows duration, resolution, and file size instantly without an upload
- Add /upload/chunk endpoint for 5MB chunk uploads
- Add /upload/complete endpoint to assemble and process
- Client-side: split file, upload sequentially with 3 retries
- Exponential backoff on failures (1s, 2s, 4s)
- Progress shows chunk-by-chunk status
- Create static/app.js with @ts-check and full JSDoc types
- Add jsconfig.json for TypeScript checking without a build step
- Remove ~250 lines of inline JS from upload.templ
- Type-safe chunked upload, probe, and form handling
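For reference, a minimal `jsconfig.json` that enables type checking of plain JavaScript without a build step could look like the sketch below (illustrative; the PR's actual config is only partially shown later in this thread):

```json
{
  "compilerOptions": {
    "checkJs": true,
    "strict": true,
    "noEmit": true,
    "target": "ES2020"
  },
  "include": ["*.js"]
}
```

With `checkJs` enabled, editors backed by the TypeScript language service report type errors in `@ts-check`/JSDoc-annotated files with no compile step.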
@coderabbitai review
✅ Actions performed: review triggered.
Caution: Review failed. The pull request is closed.

📝 Walkthrough

Adds client-side probing and chunked uploads with server-side chunk assembly, CSRF and security-headers middleware, magic-bytes and filename validation, FFmpeg path hardening and tests, logging sanitization, Media.DaysRemaining(), PWA assets/service worker, and consolidates inline upload/confirm JS into static/app.js.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Browser as Client (static/app.js)
    participant Server as HTTP (handlers + CSRF + SecHeaders)
    participant FS as Filesystem (chunks dir)
    participant MediaSvc as Media Service
    User->>Browser: Select file
    Browser->>Browser: Probe file, split into chunks, gen uploadId
    loop per chunk
        Browser->>Server: POST /upload/chunk (uploadId, chunkIndex, chunk, CSRF)
        Server->>FS: Write chunk to chunks/{uploadId}/{chunkIndex}
        FS-->>Server: OK
        Server-->>Browser: 200 OK
    end
    Browser->>Server: POST /upload/complete (uploadId, metadata, CSRF)
    Server->>FS: Read & assemble chunks -> temp file
    Server->>Server: Validate assembled file (magic bytes, filename)
    Server->>MediaSvc: Upload assembled file + metadata
    MediaSvc-->>Server: Success
    Server->>FS: Delete chunks/{uploadId}
    Server-->>Browser: HX-Redirect / success response
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0): Error: unknown linters: 'modernize', run 'golangci-lint help linters' to see the list of supported linters
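A likely fix for the linter error is to drop the unknown entry from the golangci-lint config; a sketch, assuming linters are enabled explicitly in `.golangci.yml` (the surrounding entries are placeholders):

```yaml
version: "2"
linters:
  enable:
    - govet
    - staticcheck
    # 'modernize' removed: golangci-lint 2.5.0 reports it as an unknown linter
```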
Actionable comments posted: 10
🤖 Fix all issues with AI agents
In `@docs/plans/2026-02-04-reliability-improvements-design.md`:
- Around line 51-73: Update the markdown in
docs/plans/2026-02-04-reliability-improvements-design.md by adding explicit
language specifiers (e.g., "text") to the fenced code blocks and ensuring there
is a blank line before and after each fenced block; specifically update the API
snippet containing "POST /upload/chunk" and the "Server Storage" directory tree
snippet "{dataDir}/chunks/{uploadId}/" so both use ```text and are separated by
blank lines from surrounding paragraphs to satisfy markdownlint.
In `@internal/adapter/http/handler.go`:
- Around line 165-196: The unchecked error returns are from deferred
file.Close(), out.Close() and the ignored w.Write result in the chunk upload
handler; replace the simple defers and ignore-write with explicit error-handling
wrappers: change defer file.Close() and defer out.Close() to deferred anonymous
funcs that call Close(), log any non-nil error via logger.Error.Printf (include
context like uploadID/chunkIndex/chunkPath), and check the write to w (the _, _
= w.Write call) and handle/log its error before returning; use the existing
identifiers file, out, chunkDir, chunkPath and the handler's logger to implement
these checks.
- Around line 150-163: The handler reads uploadID and chunkIndex but only
validates uploadID, allowing path-traversal via chunkIndex; fix by parsing
chunkIndex to an integer (e.g., use strconv.Atoi or strconv.ParseInt), ensure
it's non-negative, and reject the request if parsing fails, then use the
sanitized integer (converted back with strconv.Itoa) when building the chunk
file path instead of the raw chunkIndex string; update the code paths around
variables chunkIndex and uploadID (and the place where the file path is
constructed) to use the validated numeric chunkIdx to eliminate traversal risk.
- Around line 238-268: In CompleteUpload there are unchecked error returns from
closing each chunk file: after opening chunk (variable chunk) and copying into
assembled, check the error returned by chunk.Close(), log it (e.g.,
logger.Error.Printf with uploadID and chunk index) and return an appropriate
HTTP error (500) if Close() fails; also ensure Close() is executed even if
io.Copy returns an error (use a defer or capture both copyErr and closeErr) so
you don't swallow Close errors from the chunk file.
- Around line 199-220: Add the same uploadID format validation used in
ChunkUpload to CompleteUpload by extracting the validation logic into a reusable
helper (e.g., validateUploadID or UploadIDIsValid) and calling it from both
Handlers. Ensure the helper enforces the same constraints as in ChunkUpload
(pattern/length checks used there) and that CompleteUpload returns an HTTP 400
when the helper fails; update both CompleteUpload and ChunkUpload to call the
new helper rather than duplicating validation logic.
In `@internal/adapter/http/templates/layout.templ`:
- Around line 398-402: The service worker registration currently calls
navigator.serviceWorker.register('/static/sw.js') guarded by the feature check;
add simple error handling around that call by attaching a .then/.catch (or async
try/catch) to the navigator.serviceWorker.register promise so failures are
logged (e.g., using console.error or an existing logger) and, optionally,
conditionalize detailed logs to development mode; update the script block that
contains the 'serviceWorker' in navigator check and the
navigator.serviceWorker.register call to include this error handling (a sketch follows this list).
In `@static/app.js`:
- Around line 317-319: The current code mutates the onchange attribute by
concatenating strings (origOnchange and window.handleFileSelect), which is
fragile; replace this with a proper event listener: remove the
attribute-concatenation logic around fileInput, and instead call
fileInput.addEventListener('change', handler) where handler invokes
window.handleFileSelect (e.g., handler receives the event and passes
event.target or the fileInput to window.handleFileSelect). Use a named handler
or a guard to avoid adding the listener multiple times if this code can run
repeatedly; reference fileInput, origOnchange, and window.handleFileSelect to
locate and update the logic.
- Around line 199-212: The retry loop for uploading chunks currently retries on
any non-ok response; change it so it only retries on network failures (caught
exceptions) or server errors (response.status >= 500) and immediately return
false for client errors (4xx); locate the loop around the fetch to
'/upload/chunk' that uses variables attempt, maxRetries and resp, and modify the
logic to check resp.ok → success, else if resp.status >= 500 then do the
exponential backoff and retry (if attempt < maxRetries), otherwise for 4xx
return false without retrying; keep the existing backoff awaits for retryable
cases.
- Around line 63-67: The formatSize function should handle zero and sub-kilobyte
values distinctly: update formatSize to return "0 B" when bytes === 0 and to
return the exact byte count with " B" for values less than 1024 (instead of
converting to "0.0 KB"); keep the existing GB/MB formatting for bytes >= 1048576
and >= 1073741824 respectively. Locate the formatSize function and add the zero
and <1024 branches before the existing KB/MB/GB checks so small values render as
raw bytes (a sketch follows this list).
- Around line 35-41: Replace the non-cryptographic Math.random-based generator
in generateUUID() with the built-in crypto.randomUUID() when available: call
crypto.randomUUID() inside generateUUID() and fall back to the existing
replacement-based implementation only if crypto.randomUUID is undefined or
throws; keep the original string-replace logic as the fallback and ensure
generateUUID() still returns the same UUID format for older browsers or
non-secure contexts.
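To make the service-worker, `formatSize`, and UUID items above concrete, here are minimal sketches (assumptions: the `/static/sw.js` path from the PR, and that the existing functions match the shapes described in the prompts):

```javascript
// Service worker registration with error handling (per the layout.templ item).
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/static/sw.js').catch((err) => {
    // Registration can fail (e.g. in a non-secure context); log instead of breaking the page.
    console.error('Service worker registration failed:', err);
  });
}

// formatSize with explicit zero and sub-kilobyte branches (per the formatSize item).
function formatSize(bytes) {
  if (bytes === 0) return '0 B';
  if (bytes < 1024) return bytes + ' B';
  if (bytes < 1048576) return (bytes / 1024).toFixed(1) + ' KB';
  if (bytes < 1073741824) return (bytes / 1048576).toFixed(1) + ' MB';
  return (bytes / 1073741824).toFixed(1) + ' GB';
}

// generateUUID preferring crypto.randomUUID() with the string-replace fallback (per the UUID item).
function generateUUID() {
  if (typeof crypto !== 'undefined' && typeof crypto.randomUUID === 'function') {
    try {
      return crypto.randomUUID();
    } catch (e) {
      // fall through to the non-crypto fallback below
    }
  }
  // Fallback for older browsers / non-secure contexts (not cryptographically strong).
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, (c) => {
    const r = (Math.random() * 16) | 0;
    const v = c === 'x' ? r : (r & 0x3) | 0x8;
    return v.toString(16);
  });
}
```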
Pull request overview
This PR implements reliability improvements for mobile uploads in the Sharm media sharing application. The changes address upload failures on unstable connections, improve user experience with accurate expiration displays, and add PWA support for Android home screen installation.
Changes:
- Implemented chunked file uploads (5MB chunks) with exponential backoff retry logic to handle unstable mobile connections
- Added client-side media probing using HTMLVideoElement/HTMLAudioElement to eliminate redundant server-side probe uploads
- Fixed days remaining display to show actual time until expiration rather than static retention value
- Added PWA support with manifest.json and minimal service worker for Android installability
- Extracted inline JavaScript to typed static/app.js with JSDoc annotations and @ts-check for type safety
Reviewed changes
Copilot reviewed 11 out of 12 changed files in this pull request and generated 21 comments.
| File | Description |
|---|---|
| static/sw.js | Minimal service worker for PWA installability (passthrough fetch handler) |
| static/manifest.json | PWA manifest with app metadata and icon references |
| static/jsconfig.json | TypeScript checking configuration for JavaScript files |
| static/app.js | Extracted and type-checked JavaScript with chunked upload, retry logic, and client-side probing |
| internal/domain/media.go | Added DaysRemaining() method to calculate actual days until expiration |
| internal/adapter/http/templates/upload.templ | Replaced inline JavaScript with external app.js reference |
| internal/adapter/http/templates/layout.templ | Added manifest link and service worker registration |
| internal/adapter/http/templates/dashboard.templ | Updated to display actual days remaining instead of retention period |
| internal/adapter/http/server.go | Added chunked upload endpoints, removed probe endpoint |
| internal/adapter/http/handler.go | Implemented ChunkUpload and CompleteUpload handlers for chunked upload flow |
| docs/plans/2026-02-04-reliability-improvements-design.md | Design documentation for reliability improvements |
| .gitignore | Added .worktrees/ exclusion for git worktree support |
internal/adapter/http/handler.go (outdated)
```go
chunkDir := filepath.Join(os.TempDir(), "sharm-chunks", uploadID)
if err := os.MkdirAll(chunkDir, 0755); err != nil {
	logger.Error.Printf("failed to create chunk dir: %v", err)
	http.Error(w, "Server error", http.StatusInternalServerError)
	return
}

chunkPath := filepath.Join(chunkDir, chunkIndex)
```
Potential path traversal vulnerability. The uploadID validation checks for alphanumeric characters and dashes, but doesn't prevent directory traversal sequences. An attacker could potentially craft an uploadID like "../../etc" that passes the character validation but could still access unintended directories. Use filepath.Clean on uploadID before constructing chunkDir, or better yet, validate that the uploadID doesn't contain path separators at all.
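A sketch of what stricter validation could look like before the path is built, combining the uploadID and chunkIndex concerns raised in this review (the regex bounds and identifier names are assumptions based on the excerpts in this thread):

```go
var validUploadID = regexp.MustCompile(`^[a-zA-Z0-9-]{1,64}$`)

// Inside the handler, before touching the filesystem:
if !validUploadID.MatchString(uploadID) {
	// No '.', '/', or '\' can pass this pattern, so traversal is impossible.
	http.Error(w, "Invalid uploadId", http.StatusBadRequest)
	return
}
chunkIdx, err := strconv.Atoi(r.FormValue("chunkIndex"))
if err != nil || chunkIdx < 0 {
	http.Error(w, "Invalid chunkIndex", http.StatusBadRequest)
	return
}
chunkPath := filepath.Join(chunkDir, strconv.Itoa(chunkIdx))
```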
```go
fps, _ := strconv.Atoi(r.FormValue("fps"))

chunkDir := filepath.Join(os.TempDir(), "sharm-chunks", uploadID)
```
The uploadID is used to construct file paths but lacks proper validation against path traversal. While there's character validation in ChunkUpload, this endpoint has no validation at all. An attacker could supply a malicious uploadID to read arbitrary chunks from the filesystem. Add the same validation as in ChunkUpload (checking for allowed characters) or use filepath.Clean and verify the result stays within the intended directory.
Suggested change:

```diff
-chunkDir := filepath.Join(os.TempDir(), "sharm-chunks", uploadID)
+baseChunkDir := filepath.Join(os.TempDir(), "sharm-chunks")
+chunkDir := filepath.Join(baseChunkDir, uploadID)
+cleanChunkDir := filepath.Clean(chunkDir)
+if cleanChunkDir != baseChunkDir && !strings.HasPrefix(cleanChunkDir, baseChunkDir+string(os.PathSeparator)) {
+	http.Error(w, "Invalid uploadID", http.StatusBadRequest)
+	return
+}
+chunkDir = cleanChunkDir
```
```go
func (h *Handlers) ChunkUpload() http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		r.Body = http.MaxBytesReader(w, r.Body, chunkSize+1024*1024) // chunk + overhead
```
The MaxBytesReader limit (chunkSize + 1MB overhead) could allow an attacker to send chunks larger than 5MB. If a client sends a 6MB chunk, it would be accepted, potentially causing memory issues or bypassing intended upload size restrictions. Consider using a stricter limit or adding explicit size validation after reading the chunk.
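One way to enforce a hard per-chunk cap, sketched against the `file` and `out` identifiers used in the excerpts (not the PR's actual fix):

```go
// Copy at most chunkSize+1 bytes; anything beyond chunkSize means the chunk is oversized.
n, err := io.Copy(out, io.LimitReader(file, chunkSize+1))
if err != nil {
	http.Error(w, "Server error", http.StatusInternalServerError)
	return
}
if n > chunkSize {
	http.Error(w, "Chunk too large", http.StatusRequestEntityTooLarge)
	return
}
```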
```javascript
const video = document.createElement('video');
video.preload = 'metadata';
video.onloadedmetadata = function () {
  URL.revokeObjectURL(url);
  renderProbeResult(container, {
    duration: video.duration,
    width: video.videoWidth,
    height: video.videoHeight,
    size: file.size,
  });
};
video.onerror = function () {
  URL.revokeObjectURL(url);
  container.innerHTML =
    '<div class="text-muted" style="font-size:var(--text-xs);">Unable to read video metadata</div>';
};
video.src = url;
```
The video element is created but not cleaned up if neither onloadedmetadata nor onerror fires (e.g., if the component is unmounted while loading). Consider adding a timeout to revoke the URL and clean up resources after a reasonable period, or store a reference that can be cleaned up when needed.
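A sketch of timeout-based cleanup for the probe element (the 10-second budget is an arbitrary choice; `renderProbeResult` and `container` follow the excerpt above):

```javascript
const url = URL.createObjectURL(file);
const video = document.createElement('video');
video.preload = 'metadata';

let settled = false;
function cleanup() {
  if (settled) return;
  settled = true;
  clearTimeout(timer);
  URL.revokeObjectURL(url);
  video.removeAttribute('src');
}
// Give up after 10s so the object URL is not leaked if neither event fires.
const timer = setTimeout(cleanup, 10000);

video.onloadedmetadata = function () {
  // Capture metadata before cleanup() detaches the source.
  const meta = { duration: video.duration, width: video.videoWidth, height: video.videoHeight, size: file.size };
  cleanup();
  renderProbeResult(container, meta);
};
video.onerror = function () {
  cleanup();
};
video.src = url;
```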
static/app.js (outdated)
```javascript
if (resp.ok) return true;
if (attempt < maxRetries) {
  await new Promise((r) => setTimeout(r, Math.pow(2, attempt) * 1000));
}
} catch (e) {
```
When fetch fails with a non-ok status, the error response body is not logged or displayed to the user. This makes debugging upload failures difficult. Consider logging resp.status and/or resp.text() to help diagnose issues, especially since network issues are the primary concern this chunked upload aims to address.
Suggested change:

```diff
 if (resp.ok) return true;
+// Log non-ok responses to aid debugging of upload failures
+let errorText = '';
+try {
+  errorText = await resp.text();
+} catch (readErr) {
+  errorText = '[failed to read error response body]';
+}
+console.error('Chunk upload failed with non-ok response', {
+  uploadId,
+  chunkIndex,
+  attempt,
+  status: resp.status,
+  statusText: resp.statusText,
+  body: errorText,
+});
 if (attempt < maxRetries) {
   await new Promise((r) => setTimeout(r, Math.pow(2, attempt) * 1000));
 }
 } catch (e) {
+  // Log exceptions during upload to aid debugging
+  console.error('Chunk upload encountered an error', {
+    uploadId,
+    chunkIndex,
+    attempt,
+    error: e,
+  });
```
internal/adapter/http/handler.go (outdated)
```go
chunk, err := os.Open(chunkPath)
if err != nil {
	logger.Error.Printf("missing chunk %d for upload %s: %v", i, uploadID, err)
	http.Error(w, fmt.Sprintf("Missing chunk %d", i), http.StatusBadRequest)
	return
}
_, err = io.Copy(assembled, chunk)
chunk.Close()
```
File descriptor leak on error path. If os.Open succeeds but io.Copy fails, the chunk file handle is closed, but if the next iteration's os.Open fails, the error path returns without closing that iteration's file handle. While this is a minor issue since the process will eventually close all handles, it's better practice to use defer chunk.Close() immediately after successful os.Open.
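The usual Go idiom is to scope each iteration in a closure so `defer` runs per chunk, on both success and error paths; a sketch using the identifiers from the excerpt:

```go
for i := 0; i < totalChunks; i++ {
	chunkPath := filepath.Join(chunkDir, strconv.Itoa(i))
	err := func() error {
		chunk, err := os.Open(chunkPath)
		if err != nil {
			return fmt.Errorf("missing chunk %d: %w", i, err)
		}
		defer chunk.Close() // runs before the next iteration opens a file
		_, err = io.Copy(assembled, chunk)
		return err
	}()
	if err != nil {
		logger.Error.Printf("assembling upload %s: %v", uploadID, err)
		http.Error(w, "Upload assembly failed", http.StatusBadRequest)
		return
	}
}
```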
```go
// DaysRemaining returns the number of days until expiration (rounded up).
// Returns 0 if already expired.
func (m *Media) DaysRemaining() int {
	remaining := time.Until(m.ExpiresAt).Hours() / 24
	if remaining <= 0 {
		return 0
	}
	return int(math.Ceil(remaining))
}
```
Missing test coverage for the new DaysRemaining method. Since internal/domain/media_test.go exists with comprehensive tests for other Media methods, this new method should also have test coverage to verify edge cases like: media about to expire (less than 1 day), media just created, media already expired, and the rounding-up behavior.
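A sketch of table-driven coverage for the cases named above (assuming `Media` exposes `ExpiresAt` as in the excerpt; `time.Until` makes exact boundaries slightly time-sensitive, so the cases keep a comfortable margin):

```go
func TestDaysRemaining(t *testing.T) {
	now := time.Now()
	cases := []struct {
		name      string
		expiresAt time.Time
		want      int
	}{
		{"already expired", now.Add(-24 * time.Hour), 0},
		{"less than one day rounds up", now.Add(2 * time.Hour), 1},
		{"just over one day rounds up", now.Add(25 * time.Hour), 2},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			m := &Media{ExpiresAt: tc.expiresAt}
			if got := m.DaysRemaining(); got != tc.want {
				t.Errorf("DaysRemaining() = %d, want %d", got, tc.want)
			}
		})
	}
}
```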
static/app.js (outdated)
```javascript
const origOnchange = fileInput.getAttribute('onchange') || '';
fileInput.setAttribute('onchange', origOnchange + ';window.handleFileSelect(this)');
```
The inline onchange handler is being concatenated with a string, which could cause issues if origOnchange is null or contains special characters. Use addEventListener instead to avoid potential issues with inline event handlers and to maintain cleaner separation of concerns.
Suggested change:

```diff
-const origOnchange = fileInput.getAttribute('onchange') || '';
-fileInput.setAttribute('onchange', origOnchange + ';window.handleFileSelect(this)');
+fileInput.addEventListener('change', function () {
+  // Preserve existing inline handlers (if any) and add our own logic
+  // eslint-disable-next-line @typescript-eslint/ban-ts-comment
+  // @ts-ignore - handleFileSelect is attached to window elsewhere
+  window.handleFileSelect(this);
+});
```
```go
}
defer file.Close()

chunkDir := filepath.Join(os.TempDir(), "sharm-chunks", uploadID)
```
Storage location discrepancy with design document. The design doc (docs/plans/2026-02-04-reliability-improvements-design.md:68-73) specifies chunks should be stored in {dataDir}/chunks/{uploadId}/, but the implementation uses os.TempDir()/sharm-chunks/{uploadId}/. While using temp directory is reasonable, this divergence should be documented or the design doc should be updated to reflect the actual implementation.
| "skipLibCheck": true | ||
| }, | ||
| "include": ["*.js"], | ||
| "exclude": ["*.min.js", "sw.js"] |
The jsconfig.json excludes sw.js from type checking with the @ts-nocheck comment in sw.js. However, sw.js is a simple file that could benefit from type checking. Consider removing it from the exclude list and removing the @ts-nocheck comment, or documenting why it needs to be excluded (e.g., if service worker types require additional configuration).
| "exclude": ["*.min.js", "sw.js"] | |
| "exclude": ["*.min.js"] |
FormData sends multipart/form-data which ParseForm() doesn't handle.
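In Go's net/http, `Request.ParseForm` parses only URL-encoded bodies, while the multipart payloads that `FormData` produces need `ParseMultipartForm` (`FormValue` also triggers it lazily). A sketch of the server-side handling, with the 32 MiB in-memory cap as an illustrative choice:

```go
// ParseMultipartForm parses a multipart/form-data body, buffering up to
// maxMemory bytes in RAM and spilling larger parts to temp files.
if err := r.ParseMultipartForm(32 << 20); err != nil {
	http.Error(w, "Invalid form", http.StatusBadRequest)
	return
}
uploadID := r.FormValue("uploadId")
```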
- Extract validateUploadID helper to prevent path traversal
- Parse chunkIndex as int to block malicious paths
- Add proper error handling for file Close() calls
- Use crypto.randomUUID() when available
- Only retry uploads on 5xx/network errors, not 4xx
- Fix formatSize for zero and small byte values
- Use addEventListener instead of onchange mutation
- Fix markdown lint in design doc
- Add SW registration error handling
- Limit totalChunks to 20000 to prevent DoS via huge values
- Use O_EXCL flag to prevent race conditions on chunk files
- Add .env.local and tmp/ to gitignore
Actionable comments posted: 10
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/adapter/http/templates/upload_templ.go (1)
78-86: ⚠️ Potential issue | 🟡 Minor: Add method/action/enctype for no-JS fallback.
With only `id="upload-form"`, a JS failure results in a GET submit and no upload. Add `method="post"`, `action="/upload"`, and `enctype="multipart/form-data"` so the legacy server upload path still works.

🛠️ Proposed change:
```diff
-<form id="upload-form">
+<form id="upload-form" method="post" action="/upload" enctype="multipart/form-data">
```
🤖 Fix all issues with AI agents
In `@docs/plans/2026-02-04-reliability-improvements-design.md`:
- Around line 49-65: Add a "Security Considerations" subsection after the New
Endpoints block that documents the implemented hardening: state that uploadId is
validated to prevent path traversal, totalChunks is capped at 20,000 to mitigate
DoS, and chunk files are created using the O_EXCL flag to prevent race
conditions; reference the endpoints POST /upload/chunk and POST /upload/complete
so maintainers can map protections to those handlers.
- Around line 162-167: Add a blank line immediately before the fenced JavaScript
code block that starts with "// sw.js" to satisfy markdownlint MD031; locate the
block containing self.addEventListener('fetch', (event) => { ... }) and insert a
single empty line above the triple-backtick fence so the code fence is separated
from the preceding paragraph.
- Line 124: The document contains an incorrect file reference string
"static/js/upload.js"; update the text in
docs/plans/2026-02-04-reliability-improvements-design.md to consistently
reference the actual implemented file "static/app.js" (replace any occurrence of
"static/js/upload.js" with "static/app.js") so the doc matches the PR summary
and AI-generated summary.
- Line 87: The document contains an incorrect file reference string
"static/js/upload.js" — update that occurrence in the design doc to the correct
implementation path "static/app.js" so the PR summary and AI-generated summary
match the doc; search for the literal "static/js/upload.js" in the docs/plans
file and replace it with "static/app.js".
- Around line 98-118: Add a blank line before the fenced code block to satisfy
markdownlint, and update the probeFile implementation to handle audio files by
creating either an HTMLAudioElement or HTMLVideoElement based on file.type (use
file.type.startsWith('audio/') ? document.createElement('audio') :
document.createElement('video')); assign it to a single variable (e.g.,
element), set element.preload = 'metadata', use element.onloadedmetadata to
resolve with duration: element.duration and width/height: element.videoWidth ||
0 and element.videoHeight || 0, revoke URL.revokeObjectURL(element.src) in that
handler, set element.onerror to reject with a clearer message like 'Cannot read
media metadata', and set element.src = URL.createObjectURL(file).
- Around line 137-159: Add a blank line immediately before the fenced code block
that starts with "```json" (the JSON manifest snippet containing "name":
"Sharm", "short_name": "Sharm", etc.) so the markdown has an empty line above
the code fence to satisfy MD031; edit the document so there is one newline
separating the preceding paragraph or heading and the ```json code fence.
In `@internal/adapter/http/handler.go`:
- Around line 200-217: The current chunk write uses os.O_EXCL which causes
retries to fail when a previous attempt created a partial file; change to write
to a temp file in the same directory and then atomically rename to chunkPath:
create a temp file (e.g. using ioutil.TempFile or os.Create with a unique suffix
in chunkDir), write the contents from file to that temp file (ensure you call
File.Sync and close it), then call os.Rename(tempPath, chunkPath) to atomically
put the chunk in place; on any write error remove the temp file and return the
error, and if os.Rename reports that chunkPath already exists treat that as a
successful idempotent write (or overwrite atomically if desired), referencing
the identifiers chunkDir, chunkIdx, chunkPath, out, and file to locate where to
implement the temp-write-then-rename change.
In `@internal/adapter/http/templates/dashboard_templ.go`:
- Around line 429-433: The rendered "days left" can be negative because
m.DaysRemaining() may return negative values; clamp the value to a non-negative
integer before formatting (preferred: adjust the DaysRemaining() method on the
model to return max(0, computedDays) so all callers benefit; alternatively,
clamp here immediately before fmt.Sprintf by computing d := m.DaysRemaining();
if d < 0 { d = 0 } and use d in the fmt.Sprintf call referenced around
templ.JoinStringErrs and templ_7745c5c3_Var15). Ensure you update the
DaysRemaining() function name (DaysRemaining) or the local clamping logic where
templ_7745c5c3_Var15 is produced.
In `@static/app.js`:
- Around line 376-418: The handleFileSelect function leaves hidden codec
checkboxes checked when switching file types; update it to clear or disable
irrelevant inputs whenever you hide them: specifically, when switching to
non-video clear/disable the AV1/H.264 inputs (elements with ids codec-av1 and
codec-h264) and when switching to non-audio clear/disable the OPUS input (id
codec-opus); keep using the existing codec-options and fps-options visibility
logic and call updateFpsVisibility() as before, and ensure
probeClientSide/probe-result behavior is unchanged.
- Around line 236-315: The chunkedUpload function lacks a guard for zero-length
files causing totalChunks=0 and a bad finalization request; inside chunkedUpload
(before computing totalChunks or starting the loop) check if file.size === 0
and, if so, set a user-visible error into the same element used elsewhere
(result.innerHTML with a consistent text-error message), call hideProgress(),
and return false to abort the upload; make sure to reference the existing result
variable and reuse the existing hideProgress() behavior so the rest of the flow
(including uploadId, CHUNK_SIZE, uploadChunk, and /upload/complete) is never
invoked for empty files.
**New Endpoints**:

```text
POST /upload/chunk
Form data:
- uploadId: string (client-generated UUID)
- chunkIndex: int
- totalChunks: int
- chunk: binary data

POST /upload/complete
Form data:
- uploadId: string
- filename: string
- retentionDays: int
- codecs: []string
```
🧹 Nitpick | 🔵 Trivial
Document security validations for chunked upload endpoints.
The commit messages indicate that important security hardening was implemented (uploadId validation to prevent path traversal, totalChunks limit of 20,000 for DoS protection, O_EXCL flag to prevent race conditions on chunk files), but these measures are not documented in this design spec. Consider adding a Security Considerations subsection to ensure maintainers understand these protections.
📋 Suggested addition
Add after line 65:
**Security Considerations**:
- `uploadId` is validated to prevent path traversal attacks
- `totalChunks` is limited to 20,000 to prevent DoS
- Chunk files are created with `O_EXCL` flag to prevent race conditions

🤖 Prompt for AI Agents
In `@docs/plans/2026-02-04-reliability-improvements-design.md` around lines 49 -
65, Add a "Security Considerations" subsection after the New Endpoints block
that documents the implemented hardening: state that uploadId is validated to
prevent path traversal, totalChunks is capped at 20,000 to mitigate DoS, and
chunk files are created using the O_EXCL flag to prevent race conditions;
reference the endpoints POST /upload/chunk and POST /upload/complete so
maintainers can map protections to those handlers.
**Changes**:
- `internal/adapter/http/handler.go`: Add `HandleChunkUpload`, `HandleCompleteUpload`
- `internal/adapter/http/routes.go`: Register new routes
- `static/js/upload.js`: New chunked upload logic with retry
Correct the file path reference.
The document references static/js/upload.js, but the PR summary and AI-generated summary indicate the implementation is in static/app.js. This inconsistency should be corrected for accuracy.
📝 Proposed fix

```diff
-- `static/js/upload.js`: New chunked upload logic with retry
+- `static/app.js`: New chunked upload logic with retry
```
🤖 Prompt for AI Agents
In `@docs/plans/2026-02-04-reliability-improvements-design.md` at line 87, The
document contains an incorrect file reference string "static/js/upload.js" —
update that occurrence in the design doc to the correct implementation path
"static/app.js" so the PR summary and AI-generated summary match the doc; search
for the literal "static/js/upload.js" in the docs/plans file and replace it with
"static/app.js".
```javascript
function probeFile(file) {
  return new Promise((resolve, reject) => {
    const video = document.createElement('video');
    video.preload = 'metadata';

    video.onloadedmetadata = () => {
      resolve({
        duration: video.duration,
        width: video.videoWidth,
        height: video.videoHeight,
        type: file.type
      });
      URL.revokeObjectURL(video.src);
    };

    video.onerror = () => reject(new Error('Cannot read video metadata'));
    video.src = URL.createObjectURL(file);
  });
}
```
Add blank line before code block and clarify audio file handling.

- The fenced code block at line 98 is missing a blank line above it, which triggers a markdownlint warning.
- The PR summary states "Client-side probe: use HTMLVideoElement/HTMLAudioElement to extract file metadata," but the example only demonstrates video probing. Consider adding a note or example showing how audio files are handled (e.g., using `HTMLAudioElement` when `file.type.startsWith('audio/')`), or clarifying that the video element handles both.
📝 Proposed fix

````diff
 **Client-Side Probe** (basic info):
+
 ```javascript
 function probeFile(file) {
   return new Promise((resolve, reject) => {
-    const video = document.createElement('video');
-    video.preload = 'metadata';
+    const element = file.type.startsWith('audio/')
+      ? document.createElement('audio')
+      : document.createElement('video');
+    element.preload = 'metadata';
-    video.onloadedmetadata = () => {
+    element.onloadedmetadata = () => {
       resolve({
-        duration: video.duration,
-        width: video.videoWidth,
-        height: video.videoHeight,
+        duration: element.duration,
+        width: element.videoWidth || 0,
+        height: element.videoHeight || 0,
         type: file.type
       });
-      URL.revokeObjectURL(video.src);
+      URL.revokeObjectURL(element.src);
     };
-    video.onerror = () => reject(new Error('Cannot read video metadata'));
-    video.src = URL.createObjectURL(file);
+    element.onerror = () => reject(new Error('Cannot read media metadata'));
+    element.src = URL.createObjectURL(file);
   });
 }
````

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 98-98: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In `@docs/plans/2026-02-04-reliability-improvements-design.md` around lines 98 -
118, Add a blank line before the fenced code block to satisfy markdownlint, and
update the probeFile implementation to handle audio files by creating either an
HTMLAudioElement or HTMLVideoElement based on file.type (use
file.type.startsWith('audio/') ? document.createElement('audio') :
document.createElement('video')); assign it to a single variable (e.g.,
element), set element.preload = 'metadata', use element.onloadedmetadata to
resolve with duration: element.duration and width/height: element.videoWidth ||
0 and element.videoHeight || 0, revoke URL.revokeObjectURL(element.src) in that
handler, set element.onerror to reject with a clearer message like 'Cannot read
media metadata', and set element.src = URL.createObjectURL(file).
- `internal/adapter/http/handler.go`: Remove `HandleProbeUpload`
- `internal/adapter/http/routes.go`: Remove `/probe` route
- `internal/adapter/http/templates/upload.templ`: Update UI to show client probe results
- `static/js/upload.js`: Add client-side probe function
Correct the file path reference.
The document references static/js/upload.js, but the PR summary and AI-generated summary indicate the implementation is in static/app.js. This inconsistency should be corrected for accuracy.
📝 Proposed fix

```diff
-- `static/js/upload.js`: Add client-side probe function
+- `static/app.js`: Add client-side probe function
```
🤖 Prompt for AI Agents
In `@docs/plans/2026-02-04-reliability-improvements-design.md` at line 124, The
document contains an incorrect file reference string "static/js/upload.js";
update the text in docs/plans/2026-02-04-reliability-improvements-design.md to
consistently reference the actual implemented file "static/app.js" (replace any
occurrence of "static/js/upload.js" with "static/app.js") so the doc matches the
PR summary and AI-generated summary.
```json
{
  "name": "Sharm",
  "short_name": "Sharm",
  "description": "Share media with expiring links",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#1a1a1a",
  "theme_color": "#1a1a1a",
  "icons": [
    {
      "src": "/static/icon-192.png",
      "sizes": "192x192",
      "type": "image/png"
    },
    {
      "src": "/static/icon-512.png",
      "sizes": "512x512",
      "type": "image/png"
    }
  ]
}
```
Add blank line before code block.
The fenced code block at line 137 is missing a blank line above it, which triggers a markdownlint warning (MD031).
📝 Proposed fix
**manifest.json**:
+
```json
{
"name": "Sharm",🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 137-137: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In `@docs/plans/2026-02-04-reliability-improvements-design.md` around lines 137 -
159, Add a blank line immediately before the fenced code block that starts with
"```json" (the JSON manifest snippet containing "name": "Sharm", "short_name":
"Sharm", etc.) so the markdown has an empty line above the code fence to satisfy
MD031; edit the document so there is one newline separating the preceding
paragraph or heading and the ```json code fence.
```javascript
// sw.js
self.addEventListener('fetch', (event) => {
  event.respondWith(fetch(event.request));
});
```
Add blank line before code block.
The fenced code block at line 162 is missing a blank line above it, which triggers a markdownlint warning (MD031).
📝 Proposed fix
**Service Worker** (minimal, for installability):
+
```javascript
// sw.js
self.addEventListener('fetch', (event) => {🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 162-162: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🤖 Prompt for AI Agents
In `@docs/plans/2026-02-04-reliability-improvements-design.md` around lines 162 -
167, Add a blank line immediately before the fenced JavaScript code block that
starts with "// sw.js" to satisfy markdownlint MD031; locate the block
containing self.addEventListener('fetch', (event) => { ... }) and insert a
single empty line above the triple-backtick fence so the code fence is separated
from the preceding paragraph.
```go
chunkPath := filepath.Join(chunkDir, strconv.Itoa(chunkIdx))
out, err := os.OpenFile(chunkPath, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
if err != nil {
	logger.Error.Printf("failed to create chunk file %s: %v", chunkPath, err)
	http.Error(w, "Server error", http.StatusInternalServerError)
	return
}
defer func() {
	if err := out.Close(); err != nil {
		logger.Error.Printf("failed to close output file %s: %v", chunkPath, err)
	}
}()

if _, err := io.Copy(out, file); err != nil {
	logger.Error.Printf("failed to write chunk: %v", err)
	http.Error(w, "Server error", http.StatusInternalServerError)
	return
}
```
Make chunk writes idempotent to support retries.
With O_EXCL, a retry after a network blip (or a partial-write failure) will hit “file exists” and permanently fail. This defeats the retry logic and can strand uploads. Consider writing to a temp file and renaming atomically, or allowing overwrite + cleanup on error.
🛠️ One possible fix (temp file + rename)
```diff
 chunkPath := filepath.Join(chunkDir, strconv.Itoa(chunkIdx))
-out, err := os.OpenFile(chunkPath, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
+tmpPath := chunkPath + ".tmp"
+out, err := os.OpenFile(tmpPath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
 if err != nil {
 	logger.Error.Printf("failed to create chunk file %s: %v", chunkPath, err)
 	http.Error(w, "Server error", http.StatusInternalServerError)
 	return
 }
 defer func() {
 	if err := out.Close(); err != nil {
 		logger.Error.Printf("failed to close output file %s: %v", chunkPath, err)
 	}
 }()
 if _, err := io.Copy(out, file); err != nil {
 	logger.Error.Printf("failed to write chunk: %v", err)
+	_ = os.Remove(tmpPath)
 	http.Error(w, "Server error", http.StatusInternalServerError)
 	return
 }
+if err := os.Rename(tmpPath, chunkPath); err != nil {
+	logger.Error.Printf("failed to finalize chunk %s: %v", chunkPath, err)
+	http.Error(w, "Server error", http.StatusInternalServerError)
+	return
+}
```
🤖 Prompt for AI Agents
In `@internal/adapter/http/handler.go` around lines 200 - 217, The current chunk
write uses os.O_EXCL which causes retries to fail when a previous attempt
created a partial file; change to write to a temp file in the same directory and
then atomically rename to chunkPath: create a temp file (e.g. using
ioutil.TempFile or os.Create with a unique suffix in chunkDir), write the
contents from file to that temp file (ensure you call File.Sync and close it),
then call os.Rename(tempPath, chunkPath) to atomically put the chunk in place;
on any write error remove the temp file and return the error, and if os.Rename
reports that chunkPath already exists treat that as a successful idempotent
write (or overwrite atomically if desired), referencing the identifiers
chunkDir, chunkIdx, chunkPath, out, and file to locate where to implement the
temp-write-then-rename change.
```diff
 var templ_7745c5c3_Var15 string
-templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%dd left", m.RetentionDays))
+templ_7745c5c3_Var15, templ_7745c5c3_Err = templ.JoinStringErrs(fmt.Sprintf("%dd left", m.DaysRemaining()))
 if templ_7745c5c3_Err != nil {
-	return templ.Error{Err: templ_7745c5c3_Err, FileName: `internal/adapter/http/templates/dashboard.templ`, Line: 153, Col: 104}
+	return templ.Error{Err: templ_7745c5c3_Err, FileName: `internal/adapter/http/templates/dashboard.templ`, Line: 153, Col: 106}
 }
```
Clamp negative “days left” values.
If DaysRemaining() can go negative after expiration, this renders “-1d left.” Consider clamping to 0 in DaysRemaining() (preferred) or before formatting.
🤖 Prompt for AI Agents
In `@internal/adapter/http/templates/dashboard_templ.go` around lines 429 - 433,
The rendered "days left" can be negative because m.DaysRemaining() may return
negative values; clamp the value to a non-negative integer before formatting
(preferred: adjust the DaysRemaining() method on the model to return max(0,
computedDays) so all callers benefit; alternatively, clamp here immediately
before fmt.Sprintf by computing d := m.DaysRemaining(); if d < 0 { d = 0 } and
use d in the fmt.Sprintf call referenced around templ.JoinStringErrs and
templ_7745c5c3_Var15). Ensure you update the DaysRemaining() function name
(DaysRemaining) or the local clamping logic where templ_7745c5c3_Var15 is
produced.
```javascript
async function chunkedUpload(file, form) {
  const uploadId = generateUUID();
  const totalChunks = Math.ceil(file.size / CHUNK_SIZE);
  const result = document.getElementById('result');

  for (let i = 0; i < totalChunks; i++) {
    const start = i * CHUNK_SIZE;
    const end = Math.min(start + CHUNK_SIZE, file.size);
    const chunk = file.slice(start, end);

    updateProgress(
      (i / totalChunks) * 90,
      'Uploading chunk ' + (i + 1) + '/' + totalChunks
    );

    const ok = await uploadChunk(uploadId, i, chunk, MAX_RETRIES);
    if (!ok) {
      if (result) {
        result.innerHTML =
          '<div class="text-error" style="font-size:var(--text-sm);">Upload failed at chunk ' +
          (i + 1) +
          '. Please try again.</div>';
      }
      hideProgress();
      return false;
    }
  }

  updateProgress(95, 'Finalizing...');

  const fd = new FormData();
  fd.append('uploadId', uploadId);
  fd.append('filename', file.name);
  fd.append('totalChunks', String(totalChunks));

  const retentionSelect = form.querySelector('[name="retention"]');
  if (retentionSelect instanceof HTMLSelectElement) {
    fd.append('retention', retentionSelect.value);
  }

  form.querySelectorAll('[name="codecs"]:checked').forEach((cb) => {
    if (cb instanceof HTMLInputElement) {
      fd.append('codecs', cb.value);
    }
  });

  const fpsInput = form.querySelector('[name="fps"]:checked');
  if (fpsInput instanceof HTMLInputElement) {
    fd.append('fps', fpsInput.value);
  }

  try {
    const resp = await fetch('/upload/complete', { method: 'POST', body: fd });
    if (resp.ok) {
      const redirect = resp.headers.get('HX-Redirect');
      if (redirect) {
        window.location.href = redirect;
      } else {
        updateProgress(100, 'Done!');
        window.location.href = '/';
      }
      return true;
    } else {
      const text = await resp.text();
      if (result) {
        result.innerHTML =
          text ||
          '<div class="text-error" style="font-size:var(--text-sm);">Upload failed</div>';
      }
      hideProgress();
      return false;
    }
  } catch (e) {
    if (result) {
      result.innerHTML =
        '<div class="text-error" style="font-size:var(--text-sm);">Upload failed. Please try again.</div>';
    }
    hideProgress();
    return false;
  }
}
```
Handle empty-file uploads before chunking.
file.size === 0 yields totalChunks = 0 and a completion request that will be rejected by the server. Add a client-side guard with a user-facing error.
🛠️ Proposed guard
```diff
 async function chunkedUpload(file, form) {
+  if (file.size === 0) {
+    const result = document.getElementById('result');
+    if (result) {
+      result.innerHTML =
+        '<div class="text-error" style="font-size:var(--text-sm);">Empty files cannot be uploaded</div>';
+    }
+    return false;
+  }
   const uploadId = generateUUID();
```
🤖 Prompt for AI Agents
In `@static/app.js` around lines 236 - 315, The chunkedUpload function lacks a
guard for zero-length files causing totalChunks=0 and a bad finalization
request; inside chunkedUpload (before computing totalChunks or starting the
loop) check if file.size === 0 and, if so, set a user-visible error into the
same element used elsewhere (result.innerHTML with a consistent text-error
message), call hideProgress(), and return false to abort the upload; make sure
to reference the existing result variable and reuse the existing hideProgress()
behavior so the rest of the flow (including uploadId, CHUNK_SIZE, uploadChunk,
and /upload/complete) is never invoked for empty files.
```javascript
function handleFileSelect(input) {
  const opts = document.getElementById('codec-options');
  const av1 = document.getElementById('codec-av1');
  const h264 = document.getElementById('codec-h264');
  const opus = document.getElementById('codec-opus');
  const fpsOpts = document.getElementById('fps-options');
  const probeResult = document.getElementById('probe-result');

  if (!input.files?.[0]) {
    if (opts) opts.style.display = 'none';
    if (fpsOpts) fpsOpts.style.display = 'none';
    return;
  }

  const name = input.files[0].name.toLowerCase();
  const videoExts = ['.mp4', '.webm', '.mov', '.avi', '.mkv', '.flv', '.wmv', '.m4v'];
  const audioExts = ['.mp3', '.wav', '.ogg', '.flac', '.aac', '.m4a', '.wma', '.opus'];
  const isVideo = videoExts.some((e) => name.endsWith(e));
  const isAudio = audioExts.some((e) => name.endsWith(e));

  if (isVideo) {
    if (opts) opts.style.display = 'block';
    if (av1) av1.style.display = 'flex';
    if (h264) h264.style.display = 'flex';
    if (opus) opus.style.display = 'none';
    updateFpsVisibility();
  } else if (isAudio) {
    if (opts) opts.style.display = 'block';
    if (av1) av1.style.display = 'none';
    if (h264) h264.style.display = 'none';
    if (opus) opus.style.display = 'flex';
    if (fpsOpts) fpsOpts.style.display = 'none';
  } else {
    if (opts) opts.style.display = 'none';
    if (fpsOpts) fpsOpts.style.display = 'none';
  }

  if (probeResult && (isVideo || isAudio)) {
    probeClientSide(input.files[0], probeResult);
  } else if (probeResult) {
    probeResult.innerHTML = '';
  }
}
```
Clear hidden codec selections when switching file types.
If a user selects video (checks AV1/H.264), then selects an image, those hidden checkboxes remain checked and will still be submitted. This can trigger incorrect conversions or server errors. Clear or disable irrelevant codec inputs when switching types.
🛠️ One possible fix
```diff
 if (isVideo) {
   if (opts) opts.style.display = 'block';
   if (av1) av1.style.display = 'flex';
   if (h264) h264.style.display = 'flex';
   if (opus) opus.style.display = 'none';
+  const opusInput = opus?.querySelector('input');
+  if (opusInput) opusInput.checked = false;
   updateFpsVisibility();
 } else if (isAudio) {
   if (opts) opts.style.display = 'block';
   if (av1) av1.style.display = 'none';
   if (h264) h264.style.display = 'none';
   if (opus) opus.style.display = 'flex';
+  const av1Input = av1?.querySelector('input');
+  if (av1Input) av1Input.checked = false;
+  const h264Input = h264?.querySelector('input');
+  if (h264Input) h264Input.checked = false;
   if (fpsOpts) fpsOpts.style.display = 'none';
 } else {
   if (opts) opts.style.display = 'none';
   if (fpsOpts) fpsOpts.style.display = 'none';
+  for (const el of [av1, h264, opus]) {
+    const input = el?.querySelector('input');
+    if (input) input.checked = false;
+  }
 }
```

(The additions above avoid the original suggestion's `el?.querySelector('input')?.checked = false` form, which is a syntax error: optional chains cannot be assignment targets.)
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
function handleFileSelect(input) {
  const opts = document.getElementById('codec-options');
  const av1 = document.getElementById('codec-av1');
  const h264 = document.getElementById('codec-h264');
  const opus = document.getElementById('codec-opus');
  const fpsOpts = document.getElementById('fps-options');
  const probeResult = document.getElementById('probe-result');
  if (!input.files?.[0]) {
    if (opts) opts.style.display = 'none';
    if (fpsOpts) fpsOpts.style.display = 'none';
    return;
  }
  const name = input.files[0].name.toLowerCase();
  const videoExts = ['.mp4', '.webm', '.mov', '.avi', '.mkv', '.flv', '.wmv', '.m4v'];
  const audioExts = ['.mp3', '.wav', '.ogg', '.flac', '.aac', '.m4a', '.wma', '.opus'];
  const isVideo = videoExts.some((e) => name.endsWith(e));
  const isAudio = audioExts.some((e) => name.endsWith(e));
  if (isVideo) {
    if (opts) opts.style.display = 'block';
    if (av1) av1.style.display = 'flex';
    if (h264) h264.style.display = 'flex';
    if (opus) opus.style.display = 'none';
    updateFpsVisibility();
  } else if (isAudio) {
    if (opts) opts.style.display = 'block';
    if (av1) av1.style.display = 'none';
    if (h264) h264.style.display = 'none';
    if (opus) opus.style.display = 'flex';
    if (fpsOpts) fpsOpts.style.display = 'none';
  } else {
    if (opts) opts.style.display = 'none';
    if (fpsOpts) fpsOpts.style.display = 'none';
  }
  if (probeResult && (isVideo || isAudio)) {
    probeClientSide(input.files[0], probeResult);
  } else if (probeResult) {
    probeResult.innerHTML = '';
  }
}
function handleFileSelect(input) {
  const opts = document.getElementById('codec-options');
  const av1 = document.getElementById('codec-av1');
  const h264 = document.getElementById('codec-h264');
  const opus = document.getElementById('codec-opus');
  const fpsOpts = document.getElementById('fps-options');
  const probeResult = document.getElementById('probe-result');
  if (!input.files?.[0]) {
    if (opts) opts.style.display = 'none';
    if (fpsOpts) fpsOpts.style.display = 'none';
    return;
  }
  const name = input.files[0].name.toLowerCase();
  const videoExts = ['.mp4', '.webm', '.mov', '.avi', '.mkv', '.flv', '.wmv', '.m4v'];
  const audioExts = ['.mp3', '.wav', '.ogg', '.flac', '.aac', '.m4a', '.wma', '.opus'];
  const isVideo = videoExts.some((e) => name.endsWith(e));
  const isAudio = audioExts.some((e) => name.endsWith(e));
  if (isVideo) {
    if (opts) opts.style.display = 'block';
    if (av1) av1.style.display = 'flex';
    if (h264) h264.style.display = 'flex';
    if (opus) opus.style.display = 'none';
    const opusInput = opus?.querySelector('input');
    if (opusInput) opusInput.checked = false;
    updateFpsVisibility();
  } else if (isAudio) {
    if (opts) opts.style.display = 'block';
    if (av1) av1.style.display = 'none';
    if (h264) h264.style.display = 'none';
    if (opus) opus.style.display = 'flex';
    const av1Input = av1?.querySelector('input');
    if (av1Input) av1Input.checked = false;
    const h264Input = h264?.querySelector('input');
    if (h264Input) h264Input.checked = false;
    if (fpsOpts) fpsOpts.style.display = 'none';
  } else {
    if (opts) opts.style.display = 'none';
    if (fpsOpts) fpsOpts.style.display = 'none';
    for (const el of [av1, h264, opus]) {
      const cb = el?.querySelector('input');
      if (cb) cb.checked = false;
    }
  }
  if (probeResult && (isVideo || isAudio)) {
    probeClientSide(input.files[0], probeResult);
  } else if (probeResult) {
    probeResult.innerHTML = '';
  }
}
🤖 Prompt for AI Agents
In `@static/app.js` around lines 376 - 418, The handleFileSelect function leaves
hidden codec checkboxes checked when switching file types; update it to clear or
disable irrelevant inputs whenever you hide them: specifically, when switching
to non-video clear/disable the AV1/H.264 inputs (elements with ids codec-av1 and
codec-h264) and when switching to non-audio clear/disable the OPUS input (id
codec-opus); keep using the existing codec-options and fps-options visibility
logic and call updateFpsVisibility() as before, and ensure
probeClientSide/probe-result behavior is unchanged.
The script was loaded in both layout.templ and upload.templ, causing a "Can't create duplicate variable: 'MAX_RETRIES'" error in the browser.
Using split('=')[1] truncated tokens containing = padding characters.
Use substring to get everything after the cookie name.
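For illustration, the pitfall and the fix sketched in Go; the actual fix lives in the JavaScript of static/app.js, and the cookie name here is hypothetical. Splitting on every '=' drops base64 padding, while splitting once, or taking the substring after the first '=', preserves it:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// A base64url CSRF token may end in '=' padding.
	pair := "csrf_token=abc123=="

	// Buggy: splitting on every '=' keeps only the middle segment,
	// silently dropping the padding characters.
	fmt.Println(strings.Split(pair, "=")[1]) // abc123

	// Correct: take everything after the first '=' so padding survives.
	fmt.Println(pair[strings.Index(pair, "=")+1:]) // abc123==

	// Equivalent: split at most once with strings.SplitN.
	fmt.Println(strings.SplitN(pair, "=", 2)[1]) // abc123==
}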
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/adapter/converter/ffmpeg/converter.go (1)
170-205: 🧹 Nitpick | 🔵 Trivial
Add timeouts to Thumbnail/Probe to avoid hung processes.
These still use exec.Command without a context.
Proposed change

 func (c *Converter) Thumbnail(inputPath, outputPath string) error {
@@
-	cmd := exec.Command("ffmpeg", args...)
+	ctx, cancel := context.WithTimeout(context.Background(), convertTimeout)
+	defer cancel()
+	cmd := exec.CommandContext(ctx, "ffmpeg", args...)
 	return cmd.Run()
 }
@@
 func (c *Converter) Probe(inputPath string) (*domain.ProbeResult, error) {
@@
-	cmd := exec.Command("ffprobe", args...)
+	ctx, cancel := context.WithTimeout(context.Background(), convertTimeout)
+	defer cancel()
+	cmd := exec.CommandContext(ctx, "ffprobe", args...)
 	output, err := cmd.Output()
🤖 Fix all issues with AI agents
In `@docs/plans/2026-02-04-security-hardening.md`:
- Around line 116-127: The custom contains and containsHelper functions are
redundant; replace their usage with the standard strings.Contains: remove the
contains and containsHelper definitions and update all callers to call
strings.Contains(s, substr), adding an import for the "strings" package where
needed (ensure any references to contains or containsHelper are updated to
strings.Contains to keep behavior identical).
- Around line 162-170: The CSP string assigned to csp currently includes
'unsafe-inline' for script-src and style-src; remove those tokens and replace
them with a nonce- or hash-based approach and only allow external assets (e.g.,
static/app.js and approved styles). Update the csp construction (variable csp)
to include a generated nonce value (or sha256 hashes) for script-src and
style-src instead of 'unsafe-inline', ensure the server injects that nonce into
inline <script> and <style> tags or move all inline handlers into static/app.js
and reference that file, and make sure the nonce generation and injection logic
is implemented where responses are rendered so the produced CSP matches the
nonces used in the HTML.
In `@internal/adapter/converter/ffmpeg/converter_test.go`:
- Around line 222-234: The tests define custom substring helpers containsString
and findSubstring that duplicate stdlib behavior; remove both functions, add an
import for "strings" if missing, and replace all calls to
containsString(err.Error(), tt.errMsg) (and any other uses) with
strings.Contains(err.Error(), tt.errMsg) to use the standard, idiomatic
implementation.
In `@internal/adapter/http/middleware/csrf.go`:
- Around line 60-81: The GenerateToken function currently fills randomBytes with
predictable values on crypto/rand failure; instead remove that predictable
fallback and on rand.Read error return an empty token (or propagate an error) so
the caller can fail the request; update CSRF middleware to detect an empty token
from CSRFProtection.GenerateToken (or handle the propagated error) and reject
the request (e.g., respond 500/deny) rather than issuing a predictable token.
Ensure references: CSRFProtection.GenerateToken and the CSRF middleware that
calls it are updated accordingly.
In `@internal/adapter/http/middleware/security_test.go`:
- Around line 279-290: The test TestSecurityHeaders_PreservesResponseBody
currently ignores the return values from w.Write inside the inline handler;
update the write to explicitly ignore the return values (e.g. assign to _, _ )
or handle the error so intent is explicit—locate the anonymous handler passed
into SecurityHeaders in TestSecurityHeaders_PreservesResponseBody and change the
w.Write([]byte("test response")) call accordingly.
In `@internal/adapter/http/middleware/security.go`:
- Around line 39-61: The isTLS and buildCSP functions need hardening: in isTLS,
make X-Forwarded-Proto parsing robust by splitting the header value on commas,
trimming spaces, doing a case-insensitive comparison for any entry equal to
"https" (and still fall back to r.TLS != nil); reference function isTLS to
implement this. In buildCSP, prepare for tightening CSP by removing
'unsafe-inline' from "script-src" and "style-src" once inline handlers are
removed—update buildCSP to omit those tokens (or add a TODO to remove them when
inline handlers are eliminated); reference function buildCSP for this CSP
change. Ensure comments reflect the behavior change.
In `@internal/adapter/http/server.go`:
- Around line 104-107: The ServeHTTP method currently rebuilds the middleware
chain on every request (middleware.SecurityHeaders(s.csrf.Middleware(s.mux))).
Add a pre-built http.Handler field to Server (e.g., handler http.Handler),
initialize it once during server construction/initialization by assigning
handler = middleware.SecurityHeaders(s.csrf.Middleware(s.mux)), and then change
ServeHTTP to call s.handler.ServeHTTP(w, r); ensure the new field is set after
s.csrf and s.mux are available.
In `@internal/adapter/http/validation/filetype.go`:
- Around line 14-33: Update the MIME allowlist and the MP4 brand detection: add
"audio/mp4" to the allowedMIMETypes map and modify the MP4/brand handling so
that when the ftyp/brand is "M4A " (or indicates m4a) the detector returns
"audio/mp4" instead of "video/mp4"; specifically update the allowedMIMETypes
variable and the function that maps MP4 ftyp brands to MIME (locate the MP4
brand branch that currently returns "video/mp4" and add a separate case for the
"M4A " brand to return "audio/mp4").
func contains(s, substr string) bool {
	return len(s) >= len(substr) && (s == substr || len(s) > 0 && containsHelper(s, substr))
}

func containsHelper(s, substr string) bool {
	for i := 0; i <= len(s)-len(substr); i++ {
		if s[i:i+len(substr)] == substr {
			return true
		}
	}
	return false
}
🧹 Nitpick | 🔵 Trivial
Use strings.Contains instead of custom contains helper.
The test snippet’s custom contains/containsHelper is redundant and makes the plan noisier. strings.Contains is simpler and idiomatic.
🤖 Prompt for AI Agents
In `@docs/plans/2026-02-04-security-hardening.md` around lines 116 - 127, The
custom contains and containsHelper functions are redundant; replace their usage
with the standard strings.Contains: remove the contains and containsHelper
definitions and update all callers to call strings.Contains(s, substr), adding
an import for the "strings" package where needed (ensure any references to
contains or containsHelper are updated to strings.Contains to keep behavior
identical).
// Note: 'unsafe-inline' needed for HTMX inline handlers and templ styles
csp := "default-src 'self'; " +
	"script-src 'self' 'unsafe-inline'; " +
	"style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; " +
	"font-src 'self' https://fonts.gstatic.com; " +
	"img-src 'self' data:; " +
	"media-src 'self'; " +
	"connect-src 'self'; " +
	"frame-ancestors 'none'"
🛠️ Refactor suggestion | 🟠 Major
Avoid CSP unsafe-inline; use nonce/hash and external JS.
The plan explicitly allows 'unsafe-inline' for scripts/styles, but later steps already move inline JS into static/app.js. Keeping 'unsafe-inline' weakens CSP without necessity. Prefer a nonce or hash-based CSP and external scripts only.
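For concreteness, a minimal sketch of a nonce-based approach in Go. The middleware name, the context key, and the idea of passing the nonce to templates via the request context are assumptions, not the project's current code:

package middleware

import (
	"context"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"net/http"
)

type ctxKey string

// CSPNonceKey is a hypothetical context key that templates would read
// to stamp nonce="..." onto their <script>/<style> tags.
const CSPNonceKey ctxKey = "csp-nonce"

// CSPNonce issues a fresh nonce per request and emits a CSP header that
// allows only same-origin assets plus nonce-tagged scripts and styles.
func CSPNonce(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		buf := make([]byte, 16)
		if _, err := rand.Read(buf); err != nil {
			http.Error(w, "internal server error", http.StatusInternalServerError)
			return
		}
		nonce := base64.RawURLEncoding.EncodeToString(buf)

		csp := fmt.Sprintf(
			"default-src 'self'; script-src 'self' 'nonce-%s'; style-src 'self' 'nonce-%s' https://fonts.googleapis.com",
			nonce, nonce,
		)
		w.Header().Set("Content-Security-Policy", csp)

		next.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), CSPNonceKey, nonce)))
	})
}

The rendering layer would read CSPNonceKey from the context for any inline tag that must remain, so the header and the markup always agree.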
🤖 Prompt for AI Agents
In `@docs/plans/2026-02-04-security-hardening.md` around lines 162 - 170, The CSP
string assigned to csp currently includes 'unsafe-inline' for script-src and
style-src; remove those tokens and replace them with a nonce- or hash-based
approach and only allow external assets (e.g., static/app.js and approved
styles). Update the csp construction (variable csp) to include a generated nonce
value (or sha256 hashes) for script-src and style-src instead of
'unsafe-inline', ensure the server injects that nonce into inline <script> and
<style> tags or move all inline handlers into static/app.js and reference that
file, and make sure the nonce generation and injection logic is implemented
where responses are rendered so the produced CSP matches the nonces used in the
HTML.
func containsString(s, substr string) bool {
	return len(s) >= len(substr) && (s == substr || len(substr) == 0 ||
		(len(s) > 0 && len(substr) > 0 && findSubstring(s, substr)))
}

func findSubstring(s, substr string) bool {
	for i := 0; i <= len(s)-len(substr); i++ {
		if s[i:i+len(substr)] == substr {
			return true
		}
	}
	return false
}
🧹 Nitpick | 🔵 Trivial
Replace custom substring helpers with strings.Contains.
The containsString and findSubstring functions reimplement standard library functionality. Using strings.Contains is cleaner and more idiomatic.
♻️ Proposed refactor
import (
"errors"
+ "strings"
"testing"
)
-func containsString(s, substr string) bool {
- return len(s) >= len(substr) && (s == substr || len(substr) == 0 ||
- (len(s) > 0 && len(substr) > 0 && findSubstring(s, substr)))
-}
-
-func findSubstring(s, substr string) bool {
- for i := 0; i <= len(s)-len(substr); i++ {
- if s[i:i+len(substr)] == substr {
- return true
- }
- }
- return false
-}

Then replace usages like containsString(err.Error(), tt.errMsg) with strings.Contains(err.Error(), tt.errMsg).
🤖 Prompt for AI Agents
In `@internal/adapter/converter/ffmpeg/converter_test.go` around lines 222 - 234,
The tests define custom substring helpers containsString and findSubstring that
duplicate stdlib behavior; remove both functions, add an import for "strings" if
missing, and replace all calls to containsString(err.Error(), tt.errMsg) (and
any other uses) with strings.Contains(err.Error(), tt.errMsg) to use the
standard, idiomatic implementation.
// GenerateToken creates a new CSRF token with HMAC signature.
// Token format: base64(32 random bytes + 32 bytes HMAC-SHA256 signature)
func (c *CSRFProtection) GenerateToken() string {
	randomBytes := make([]byte, tokenSize)
	if _, err := rand.Read(randomBytes); err != nil {
		// In case of crypto/rand failure, use less random but still functional
		// This should never happen in practice
		for i := range randomBytes {
			randomBytes[i] = byte(i)
		}
	}

	mac := hmac.New(sha256.New, c.secretKey)
	mac.Write(randomBytes)
	signature := mac.Sum(nil)

	token := make([]byte, tokenSize+len(signature))
	copy(token[:tokenSize], randomBytes)
	copy(token[tokenSize:], signature)

	return base64.URLEncoding.EncodeToString(token)
}
Fail hard on crypto/rand failure instead of using predictable fallback.
The fallback at lines 67-69 generates completely predictable tokens (0, 1, 2, ...), which would make CSRF protection useless. While crypto/rand failures are extremely rare, if they occur, it indicates a serious system issue. Failing the request is safer than serving predictable tokens.
🔒 Proposed fix: Fail the request on rand failure
func (c *CSRFProtection) GenerateToken() string {
randomBytes := make([]byte, tokenSize)
if _, err := rand.Read(randomBytes); err != nil {
- // In case of crypto/rand failure, use less random but still functional
- // This should never happen in practice
- for i := range randomBytes {
- randomBytes[i] = byte(i)
- }
+ // crypto/rand failure indicates a serious system issue
+ // Return empty string - caller should handle this as an error
+ return ""
 }

Then update Middleware to handle empty token:
if _, err := r.Cookie(csrfCookieName); err != nil {
token := c.GenerateToken()
+ if token == "" {
+ http.Error(w, "Internal server error", http.StatusInternalServerError)
+ return
+ }
c.setCSRFCookie(w, r, token)
 }

🤖 Prompt for AI Agents
In `@internal/adapter/http/middleware/csrf.go` around lines 60 - 81, The
GenerateToken function currently fills randomBytes with predictable values on
crypto/rand failure; instead remove that predictable fallback and on rand.Read
error return an empty token (or propagate an error) so the caller can fail the
request; update CSRF middleware to detect an empty token from
CSRFProtection.GenerateToken (or handle the propagated error) and reject the
request (e.g., respond 500/deny) rather than issuing a predictable token. Ensure
references: CSRFProtection.GenerateToken and the CSRF middleware that calls it
are updated accordingly.
func TestSecurityHeaders_PreservesResponseBody(t *testing.T) {
	handler := SecurityHeaders(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("test response"))
	}))

	req := httptest.NewRequest(http.MethodGet, "/", nil)
	rec := httptest.NewRecorder()

	handler.ServeHTTP(rec, req)

	assert.Equal(t, "test response", rec.Body.String())
}
🧹 Nitpick | 🔵 Trivial
Minor: Ignored write error.
The w.Write() return value is ignored on line 281. While acceptable in tests, using _, _ = w.Write(...) makes the intent explicit.
💡 Optional: Explicit error ignore
func TestSecurityHeaders_PreservesResponseBody(t *testing.T) {
handler := SecurityHeaders(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- w.Write([]byte("test response"))
+ _, _ = w.Write([]byte("test response"))
 }))

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func TestSecurityHeaders_PreservesResponseBody(t *testing.T) {
	handler := SecurityHeaders(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("test response"))
	}))
	req := httptest.NewRequest(http.MethodGet, "/", nil)
	rec := httptest.NewRecorder()
	handler.ServeHTTP(rec, req)
	assert.Equal(t, "test response", rec.Body.String())
}

func TestSecurityHeaders_PreservesResponseBody(t *testing.T) {
	handler := SecurityHeaders(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		_, _ = w.Write([]byte("test response"))
	}))
	req := httptest.NewRequest(http.MethodGet, "/", nil)
	rec := httptest.NewRecorder()
	handler.ServeHTTP(rec, req)
	assert.Equal(t, "test response", rec.Body.String())
}
🤖 Prompt for AI Agents
In `@internal/adapter/http/middleware/security_test.go` around lines 279 - 290,
The test TestSecurityHeaders_PreservesResponseBody currently ignores the return
values from w.Write inside the inline handler; update the write to explicitly
ignore the return values (e.g. assign to _, _ ) or handle the error so intent is
explicit—locate the anonymous handler passed into SecurityHeaders in
TestSecurityHeaders_PreservesResponseBody and change the w.Write([]byte("test
response")) call accordingly.
// buildCSP constructs the Content-Security-Policy header value.
func buildCSP() string {
	directives := []string{
		"default-src 'self'",
		"script-src 'self' 'unsafe-inline'",
		"style-src 'self' 'unsafe-inline' https://fonts.googleapis.com",
		"font-src 'self' https://fonts.gstatic.com",
		"img-src 'self' data:",
		"media-src 'self'",
		"connect-src 'self'",
		"frame-ancestors 'none'",
	}
	return strings.Join(directives, "; ")
}

// isTLS checks if the request is served over TLS.
// It checks both the TLS connection state and the X-Forwarded-Proto header
// (for requests behind a reverse proxy).
func isTLS(r *http.Request) bool {
	if r.TLS != nil {
		return true
	}
	return r.Header.Get("X-Forwarded-Proto") == "https"
🧹 Nitpick | 🔵 Trivial
Harden X-Forwarded-Proto parsing; consider phasing out unsafe-inline.
Handle comma-separated/case variants for proxies, and once inline handlers are removed you can tighten CSP by dropping unsafe-inline.
Proposed change (isTLS)
func isTLS(r *http.Request) bool {
if r.TLS != nil {
return true
}
- return r.Header.Get("X-Forwarded-Proto") == "https"
+ proto := r.Header.Get("X-Forwarded-Proto")
+ if proto == "" {
+ return false
+ }
+ proto = strings.TrimSpace(strings.Split(proto, ",")[0])
+ return strings.EqualFold(proto, "https")
 }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// buildCSP constructs the Content-Security-Policy header value.
func buildCSP() string {
	directives := []string{
		"default-src 'self'",
		"script-src 'self' 'unsafe-inline'",
		"style-src 'self' 'unsafe-inline' https://fonts.googleapis.com",
		"font-src 'self' https://fonts.gstatic.com",
		"img-src 'self' data:",
		"media-src 'self'",
		"connect-src 'self'",
		"frame-ancestors 'none'",
	}
	return strings.Join(directives, "; ")
}

// isTLS checks if the request is served over TLS.
// It checks both the TLS connection state and the X-Forwarded-Proto header
// (for requests behind a reverse proxy).
func isTLS(r *http.Request) bool {
	if r.TLS != nil {
		return true
	}
	return r.Header.Get("X-Forwarded-Proto") == "https"

// buildCSP constructs the Content-Security-Policy header value.
func buildCSP() string {
	directives := []string{
		"default-src 'self'",
		"script-src 'self' 'unsafe-inline'",
		"style-src 'self' 'unsafe-inline' https://fonts.googleapis.com",
		"font-src 'self' https://fonts.gstatic.com",
		"img-src 'self' data:",
		"media-src 'self'",
		"connect-src 'self'",
		"frame-ancestors 'none'",
	}
	return strings.Join(directives, "; ")
}

// isTLS checks if the request is served over TLS.
// It checks both the TLS connection state and the X-Forwarded-Proto header
// (for requests behind a reverse proxy).
func isTLS(r *http.Request) bool {
	if r.TLS != nil {
		return true
	}
	proto := r.Header.Get("X-Forwarded-Proto")
	if proto == "" {
		return false
	}
	proto = strings.TrimSpace(strings.Split(proto, ",")[0])
	return strings.EqualFold(proto, "https")
}
🤖 Prompt for AI Agents
In `@internal/adapter/http/middleware/security.go` around lines 39 - 61, The isTLS
and buildCSP functions need hardening: in isTLS, make X-Forwarded-Proto parsing
robust by splitting the header value on commas, trimming spaces, doing a
case-insensitive comparison for any entry equal to "https" (and still fall back
to r.TLS != nil); reference function isTLS to implement this. In buildCSP,
prepare for tightening CSP by removing 'unsafe-inline' from "script-src" and
"style-src" once inline handlers are removed—update buildCSP to omit those
tokens (or add a TODO to remove them when inline handlers are eliminated);
reference function buildCSP for this CSP change. Ensure comments reflect the
behavior change.
 func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
-	s.mux.ServeHTTP(w, r)
+	// Chain: SecurityHeaders -> CSRF -> mux
+	middleware.SecurityHeaders(s.csrf.Middleware(s.mux)).ServeHTTP(w, r)
 }
🧹 Nitpick | 🔵 Trivial
Middleware chain is recreated on every request.
The current implementation wraps the handler chain inside ServeHTTP, creating new handler instances per request. While functionally correct, this adds minor allocation overhead. Consider pre-building the chain once during server construction.
♻️ Optional: Pre-build middleware chain
type Server struct {
mux *http.ServeMux
+ handler http.Handler
handlers *Handlers
// ... other fields
}
func NewServer(...) *Server {
// ... existing setup ...
s := &Server{
// ... fields ...
}
s.registerRoutes()
s.registerStatic()
+ s.handler = middleware.SecurityHeaders(s.csrf.Middleware(s.mux))
return s
}
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
- middleware.SecurityHeaders(s.csrf.Middleware(s.mux)).ServeHTTP(w, r)
+ s.handler.ServeHTTP(w, r)
 }

🤖 Prompt for AI Agents
In `@internal/adapter/http/server.go` around lines 104 - 107, The ServeHTTP method
currently rebuilds the middleware chain on every request
(middleware.SecurityHeaders(s.csrf.Middleware(s.mux))). Add a pre-built
http.Handler field to Server (e.g., handler http.Handler), initialize it once
during server construction/initialization by assigning handler =
middleware.SecurityHeaders(s.csrf.Middleware(s.mux)), and then change ServeHTTP
to call s.handler.ServeHTTP(w, r); ensure the new field is set after s.csrf and
s.mux are available.
var allowedMIMETypes = map[string]bool{
	// Images
	"image/jpeg": true,
	"image/png":  true,
	"image/gif":  true,
	"image/webp": true,
	// Videos
	"video/mp4":       true,
	"video/webm":      true,
	"video/quicktime": true,
	// Audio
	"audio/mpeg":      true,
	"audio/ogg":       true,
	"application/ogg": true,
	"audio/wav":       true,
	"audio/wave":      true,
	"audio/x-wav":     true,
	"audio/flac":      true,
	"audio/x-flac":    true,
}
🧩 Analysis chain
🏁 Script executed:
cat -n internal/adapter/http/validation/filetype.go | head -150
Repository: bnema/sharm
Length of output: 5048
🏁 Script executed:
rg -n "ValidateMagicBytes" -A 5 -B 2 --type go
Repository: bnema/sharm
Length of output: 21536
🏁 Script executed:
sed -n '80,95p' internal/adapter/http/handler.go
sed -n '340,355p' internal/adapter/http/handler.go
Repository: bnema/sharm
Length of output: 1326
🏁 Script executed:
rg -n "DetectMediaType" -A 10 -B 2 --type go
Repository: bnema/sharm
Length of output: 2920
🏁 Script executed:
rg -n "M4A\|m4a" --type go
Repository: bnema/sharm
Length of output: 37
🏁 Script executed:
rg -n "brand\|ftyp" --type go
Repository: bnema/sharm
Length of output: 37
🏁 Script executed:
rg -n "case.*M4" --type go
Repository: bnema/sharm
Length of output: 199
🏁 Script executed:
rg -n "mp4Magic\|m4v\|m4a" -i --type go
Repository: bnema/sharm
Length of output: 37
🏁 Script executed:
rg -n "var.*Magic\|const.*Magic" --type go | head -30
Repository: bnema/sharm
Length of output: 37
🏁 Script executed:
head -100 internal/adapter/http/validation/filetype_test.go
Repository: bnema/sharm
Length of output: 3355
🏁 Script executed:
grep -n "m4\|M4" internal/adapter/http/validation/filetype_test.go
Repository: bnema/sharm
Length of output: 37
🏁 Script executed:
grep -n "audioExts\|imageExts\|videoExts" --type go -r internal/
Repository: bnema/sharm
Length of output: 172
🏁 Script executed:
rg -n "audioExts|imageExts|videoExts" --type go
Repository: bnema/sharm
Length of output: 278
🏁 Script executed:
sed -n '291,311p' internal/domain/media.go
Repository: bnema/sharm
Length of output: 642
🏁 Script executed:
rg -n "MediaType\|MediaTypeAudio\|MediaTypeVideo" --type go -A 3 | head -60
Repository: bnema/sharm
Length of output: 37
🏁 Script executed:
rg -n "type MediaType" --type go -A 5
Repository: bnema/sharm
Length of output: 361
Add audio/mp4 to allowlist and correctly classify M4A as audio, not video.
M4A is an audio format with the M4A brand in the ftyp box. Currently, it returns video/mp4 from magic byte detection, which is semantically incorrect and creates a mismatch with the filename-based detection that correctly identifies .m4a as audio. Add audio/mp4 to the allowedMIMETypes map and separate the M4A case to return audio/mp4 instead of video/mp4.
Proposed change
var allowedMIMETypes = map[string]bool{
@@
"audio/mpeg": true,
+ "audio/mp4": true,
"audio/ogg": true,
"application/ogg": true,
@@
if len(buf) >= 12 {
if buf[4] == 'f' && buf[5] == 't' && buf[6] == 'y' && buf[7] == 'p' {
// Check brand to distinguish MP4 variants
brand := string(buf[8:12])
switch brand {
- case "isom", "iso2", "iso3", "iso4", "iso5", "iso6", "mp41", "mp42", "avc1", "M4V ", "M4A ":
+ case "isom", "iso2", "iso3", "iso4", "iso5", "iso6", "mp41", "mp42", "avc1", "M4V ":
return "video/mp4"
+ case "M4A ":
+ return "audio/mp4"
case "qt ":
return "video/quicktime"
default:
// Default to MP4 for unknown ftyp brands
return "video/mp4"🤖 Prompt for AI Agents
In `@internal/adapter/http/validation/filetype.go` around lines 14 - 33, Update
the MIME allowlist and the MP4 brand detection: add "audio/mp4" to the
allowedMIMETypes map and modify the MP4/brand handling so that when the
ftyp/brand is "M4A " (or indicates m4a) the detector returns "audio/mp4" instead
of "video/mp4"; specifically update the allowedMIMETypes variable and the
function that maps MP4 ftyp brands to MIME (locate the MP4 brand branch that
currently returns "video/mp4" and add a separate case for the "M4A " brand to
return "audio/mp4").
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@internal/adapter/http/middleware/security_test.go`:
- Around line 13-63: The four tests (TestSecurityHeaders_XContentTypeOptions,
TestSecurityHeaders_XFrameOptions, TestSecurityHeaders_ReferrerPolicy,
TestSecurityHeaders_PermissionsPolicy) duplicate the same setup; replace them
with a single table-driven test (e.g., TestSecurityHeaders_StaticHeaders) that
creates the SecurityHeaders middleware once and iterates over a slice of {name,
header, expected} cases, calling handler.ServeHTTP for each case and asserting
rec.Header().Get(header) equals expected; keep the existing header
names/expected values (nosniff, DENY, strict-origin-when-cross-origin,
camera=(), microphone=(), geolocation=()) and use t.Run(tt.name, ...) for
subtests.
- Around line 191-249: Add a negative test to ensure HSTS is not set when
X-Forwarded-Proto explicitly equals "http": create a new test function (e.g.,
TestSecurityHeaders_HSTS_NotSetWithXForwardedProtoHTTP) that uses
SecurityHeaders with an http.HandlerFunc, builds a request via
httptest.NewRequest, sets req.Header.Set("X-Forwarded-Proto", "http"), uses
httptest.NewRecorder(), calls handler.ServeHTTP(rec, req) and asserts that
rec.Header().Get("Strict-Transport-Security") is empty; mirror the style of
existing tests (TestSecurityHeaders_HSTS_NotSetWithoutTLS /
_SetWithXForwardedProtoHTTPS) to ensure consistency.
- Around line 338-340: Replace the inconsistent assertion in the CSP test loop:
instead of using assert.True(t, strings.Contains(csp, directive), ...), use
assert.Contains(t, csp, directive, ...) so the test matches the other CSP
checks; locate the loop that iterates over directives and updates the assertion
that checks csp for each directive.
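For illustration, a minimal Go sketch of what the first two prompts describe. It assumes the existing SecurityHeaders middleware, net/http/httptest, and testify's assert are already available in the test package; the expected header values are taken from the prompts above:

func TestSecurityHeaders_StaticHeaders(t *testing.T) {
	tests := []struct {
		name     string
		header   string
		expected string
	}{
		{"nosniff", "X-Content-Type-Options", "nosniff"},
		{"frame options", "X-Frame-Options", "DENY"},
		{"referrer policy", "Referrer-Policy", "strict-origin-when-cross-origin"},
		{"permissions policy", "Permissions-Policy", "camera=(), microphone=(), geolocation=()"},
	}

	// Build the middleware once; each subtest issues its own request.
	handler := SecurityHeaders(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}))

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			req := httptest.NewRequest(http.MethodGet, "/", nil)
			rec := httptest.NewRecorder()
			handler.ServeHTTP(rec, req)
			assert.Equal(t, tt.expected, rec.Header().Get(tt.header))
		})
	}
}

func TestSecurityHeaders_HSTS_NotSetWithXForwardedProtoHTTP(t *testing.T) {
	handler := SecurityHeaders(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}))

	req := httptest.NewRequest(http.MethodGet, "/", nil)
	req.Header.Set("X-Forwarded-Proto", "http")
	rec := httptest.NewRecorder()

	handler.ServeHTTP(rec, req)

	// An explicitly non-HTTPS proxy hop must not trigger HSTS.
	assert.Empty(t, rec.Header().Get("Strict-Transport-Security"))
}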
Summary
static/app.js with JSDoc + @ts-check for type safety
Test plan
static/app.js types in IDE (should have no errors)
Summary by CodeRabbit
New Features
Bug Fixes
Security
Chores
UI