Conversation
…streaming waveform persistence

- Add @mediabunny/ac3 WASM decoder for Dolby Digital audio playback and export
- Introduce sourceFps field on timeline items for correct frame conversion between source-native and project FPS across trim, split, rate stretch, and composition rendering
- Rewrite waveform storage from single IndexedDB records to progressive binned persistence with streaming worker chunks (init → chunk → complete)
- Add CustomDecoderAudio component for AC-3/E-AC-3 preview playback via mediabunny decode with IndexedDB-backed decoded audio cache
- Upgrade mediabunny from 1.32.0 to 1.34.3
- Improve filmstrip/waveform skeleton states to avoid infinite loading on error
- Gracefully clamp clips exceeding source duration instead of rendering errors
…fer-style waveform rendering

Replace bar-based waveform with continuous mirrored path visualization, add per-clip amplitude normalization, and implement partial audio decode so playback can start from cached bins before full decode completes. Reduce bin duration for finer-grained streaming granularity.
…deo track color

Replace continuous wavesurfer-style path with spaced thin bars using top-2 peak blending for cleaner visual density. Restore video track color to original blue (oklch 0.62 0.17 250).
📝 Walkthrough

This PR introduces AC-3 audio codec support through mediabunny decoder registration, adds per-source frame-rate awareness via a sourceFps field on timeline items, and reworks waveform persistence into progressive, streamed bins.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant Mediabunny as mediabunny<br/>(format support)
    participant AC3Lib as `@mediabunny/ac3`<br/>(AC3 decoder)
    participant AudioCache as Audio Decode<br/>Cache
    participant IndexedDB as IndexedDB<br/>Storage
    participant Component as Playback<br/>Component

    App->>AC3Lib: import registerAc3Decoder()
    AC3Lib-->>App: decoder registration function
    App->>Mediabunny: import mediabunny
    Mediabunny-->>App: mediabunny module
    App->>Mediabunny: registerAc3Decoder()
    Mediabunny->>Mediabunny: register AC-3 codec handler
    App->>Component: render with unsupported codec
    Component->>AudioCache: getOrDecodeAudio(mediaId)
    AudioCache->>IndexedDB: check for cached meta/bins
    IndexedDB-->>AudioCache: cached bins + metadata
    alt Bins valid and complete
        AudioCache->>AudioCache: assemble from bins
        AudioCache-->>Component: AudioBuffer (fast path)
    else Needs decode
        AudioCache->>Mediabunny: decode audio with AC-3
        Mediabunny->>Mediabunny: decompress AC-3 stream
        Mediabunny-->>AudioCache: raw PCM samples
        AudioCache->>AudioCache: downmix to stereo
        AudioCache->>IndexedDB: save bins progressively
        IndexedDB-->>AudioCache: bin write confirmations
        AudioCache-->>Component: AudioBuffer
    end
    Component->>Component: play via Web Audio API
```
```mermaid
sequenceDiagram
    participant Timeline as Timeline<br/>System
    participant Clip as Clip<br/>Item
    participant SourceCalc as Source<br/>Calculations
    participant Render as Playback<br/>Renderer

    Timeline->>Clip: create item with media
    Clip->>Clip: extract sourceFps from media
    Clip->>Clip: store sourceFps on item
    Render->>Clip: retrieve item data
    Render->>SourceCalc: timelineToSourceFrames(timelineFrames, speed, timelineFps, sourceFps)
    SourceCalc->>SourceCalc: convert: timelineFrames / timelineFps = seconds
    SourceCalc->>SourceCalc: scale: seconds × sourceFps = sourceFrames
    SourceCalc-->>Render: sourceFrames
    Render->>Render: calculate seek position
    Render->>Render: render video/audio at source time
    Note over Timeline,Render: sourceFps enables accurate<br/>frame-rate conversions<br/>between timeline and source domains
```
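For concreteness, a minimal TypeScript sketch of the conversion the diagram describes, using the FPS-aware formula discussed later in this review; the rounding choice here is an illustrative assumption, not the project's actual implementation:

```ts
// Hedged sketch of timelineToSourceFrames — the real implementation lives in
// src/features/timeline/utils/source-calculations.ts; rounding is assumed.
function timelineToSourceFrames(
  timelineFrames: number,
  speed: number,
  timelineFps: number,
  sourceFps: number
): number {
  // timeline frames -> seconds, then seconds -> source frames, scaled by speed
  const seconds = timelineFrames / timelineFps;
  return Math.round(seconds * sourceFps * speed);
}

// Example: 90 timeline frames at 30 fps map to 72 source frames
// of 24 fps media at 1x speed: timelineToSourceFrames(90, 1, 30, 24) === 72
```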
```mermaid
sequenceDiagram
    participant WaveformWorker as Waveform<br/>Worker
    participant Mediabunny as mediabunny<br/>(decode)
    participant Cache as Waveform<br/>Cache
    participant IndexedDB as IndexedDB<br/>Bins
    participant UI as UI<br/>Component

    WaveformWorker->>Mediabunny: decode audio stream
    Mediabunny-->>WaveformWorker: PCM samples
    WaveformWorker->>WaveformWorker: extract peaks per chunk
    WaveformWorker->>Cache: emit WaveformInitResponse
    Cache-->>UI: init (duration, channels, sampleRate)
    loop for each decoded chunk
        WaveformWorker->>WaveformWorker: compute normalized peaks
        WaveformWorker->>Cache: emit WaveformChunkResponse(peaks)
        Cache->>IndexedDB: save peaks as bin
        Cache-->>UI: chunk (partial waveform)
        UI->>UI: render progressive waveform
    end
    WaveformWorker->>Cache: emit WaveformCompleteResponse(maxPeak)
    Cache->>IndexedDB: save meta record
    Cache-->>UI: complete
    UI->>UI: finalize waveform visualization
```
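The init → chunk → complete protocol above maps naturally onto a discriminated union. A hedged sketch follows — only the three message names come from the diagram; the payload fields are illustrative assumptions:

```ts
// Illustrative message shapes for the streaming waveform worker protocol.
type WaveformInitResponse = {
  type: 'init';
  requestId: string;
  duration: number;    // seconds of audio
  channels: number;
  sampleRate: number;
};

type WaveformChunkResponse = {
  type: 'chunk';
  requestId: string;
  peaks: Float32Array; // normalized peaks for one bin
};

type WaveformCompleteResponse = {
  type: 'complete';
  requestId: string;
  maxPeak: number;     // enables per-clip amplitude normalization
};

type WaveformWorkerResponse =
  | WaveformInitResponse
  | WaveformChunkResponse
  | WaveformCompleteResponse;

// Consumers can then switch exhaustively on msg.type.
```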
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/features/timeline/utils/source-calculations.ts (1)
138-148: 🛠️ Refactor suggestion | 🟠 Major

Duplicate JSDoc block for `clampSpeed`.

Lines 138–145 contain two consecutive JSDoc blocks — the first says "Clamp speed to valid range and round to 2 decimal places", while the second says "without rounding." The first block is stale and should be removed.
📝 Remove the stale JSDoc
```diff
-/**
- * Clamp speed to valid range and round to 2 decimal places.
- */
 /**
  * Clamp speed to valid range without rounding.
  * Speed is stored with full precision for accurate calculations.
  * UI should format to 2 decimal places for display only.
  */
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/timeline/utils/source-calculations.ts` around lines 138 - 148: There are two JSDoc blocks above the clampSpeed function causing contradictory docs; remove the stale/incorrect JSDoc that says "Clamp speed to valid range and round to 2 decimal places" and keep the accurate block that states "Clamp speed to valid range without rounding." Locate the duplicated comments immediately above export function clampSpeed and delete the first JSDoc block so only the single, correct doc remains; ensure the remaining JSDoc correctly describes clampSpeed and no other comment duplication remains.

src/lib/composition-runtime/components/item.tsx (1)
329-329: ⚠️ Potential issue | 🟡 Minor

`throw new Error` for unknown item type will crash the component tree.

Line 329 throws for an unknown `item.type`, which would propagate up and crash the nearest error boundary (or the entire app if none exists). Given that timeline items come from user data/project files, a corrupted or future-versioned item type should not crash the editor.

Consider returning `null` with a `console.error` instead, consistent with how other invalid states are handled in this component (lines 72–76, 206).

🛡️ Proposed defensive fallback
```diff
-  throw new Error(`Unknown item type: ${JSON.stringify(item)}`);
+  console.error(`[Composition Item] Unknown item type:`, item.type, item.id);
+  return null;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/composition-runtime/components/item.tsx` at line 329: Replace the throwing behavior for unknown item types: instead of throwing new Error(`Unknown item type: ${JSON.stringify(item)}`) when encountering an unexpected item.type, log the full item to console.error and return null so the component tree isn't crashed; follow the same defensive pattern used elsewhere in this module (see existing handling around the other invalid states) by replacing the throw with a console.error message including the serialized item and a clear context string and then returning null from the render function (identify the exact spot where item is checked and the throw occurs and update that branch).
🧹 Nitpick comments (20)
src/features/export/utils/canvas-audio.ts (1)
398-399: `registerAc3Decoder()` called on every decode invocation — consider guarding.

Unlike `proxy-generation-worker.ts` and `media-processor.worker.ts`, which use a one-time registration guard (an `ac3Registered` flag or module-level caching), this file calls `registerAc3Decoder()` on every `decodeAudioFromSource` invocation. If the function isn't idempotent, this could cause issues; even if it is, it's unnecessary overhead.

♻️ Suggested fix: add a module-level guard
```diff
+let ac3Registered = false;
+
 async function decodeAudioFromSource(
   src: string,
   itemId: string,
   startTime?: number,
   endTime?: number
 ): Promise<DecodedAudio> {
   // ... cache check ...
   try {
     const mb = await import('mediabunny');
-    const { registerAc3Decoder } = await import('@mediabunny/ac3');
-    registerAc3Decoder();
+    if (!ac3Registered) {
+      const { registerAc3Decoder } = await import('@mediabunny/ac3');
+      registerAc3Decoder();
+      ac3Registered = true;
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/export/utils/canvas-audio.ts` around lines 398 - 399: The code calls registerAc3Decoder() on every decodeAudioFromSource invocation; add a module-level boolean guard (e.g., ac3Registered) to ensure registerAc3Decoder() is only invoked once: introduce a top-level let ac3Registered = false, then inside decodeAudioFromSource check if (!ac3Registered) { const { registerAc3Decoder } = await import('@mediabunny/ac3'); registerAc3Decoder(); ac3Registered = true; } so subsequent calls skip re-registration; reference registerAc3Decoder and decodeAudioFromSource when making the change.

src/features/export/utils/canvas-render-orchestrator.ts (1)
98-101: AC-3 decoder registration lacks error handling, unlike MP3 encoder registration.

At lines 483–491 in `renderAudioOnly`, the MP3 encoder registration is wrapped in a `try/catch` so rendering can proceed if it fails. The AC-3 decoder registration here (and at lines 478–479) has no such guard — if `@mediabunny/ac3` fails to load or register, the entire render will throw unrecoverably even for compositions that don't use AC-3 audio.

Suggested fix
```diff
-  const { registerAc3Decoder } = await import('@mediabunny/ac3');
-  registerAc3Decoder();
+  try {
+    const { registerAc3Decoder } = await import('@mediabunny/ac3');
+    registerAc3Decoder();
+  } catch (err) {
+    log.warn('Failed to register AC-3 decoder, AC-3 audio will not be available', err);
+  }
```

Apply the same pattern at lines 478–479.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/export/utils/canvas-render-orchestrator.ts` around lines 98 - 101: The AC-3 decoder registration (the import('mediabunny') assignment to mediabunny and the registerAc3Decoder() call) must be wrapped in the same defensive try/catch used for MP3 encoder setup so a failing `@mediabunny/ac3` import/registration won't abort renders that don't need AC-3; update the block around mediabunny and registerAc3Decoder to catch and log the error (using the same logger or pattern as the MP3 encoder registration) and continue execution, referencing the mediabunny variable and registerAc3Decoder function and keeping behavior consistent with renderAudioOnly's MP3 encoder guard.

src/features/timeline/stores/actions/item-actions.ts (1)
359-376: Potential inconsistency between `sourceFps` and `mediaFps`.

Line 359 reads `sourceFps` from the timeline item (for the frame conversion), while line 375 reads `media.fps` from the media library metadata (for the timestamp calculation). These represent the same underlying value — the source media's frame rate — but are sourced independently. If they ever diverge, `sourceFrame / mediaFps` would yield an incorrect seek timestamp.

Consider using `sourceFps` consistently:

```diff
-    const mediaFps = media.fps || 30;
-    const timestampSeconds = sourceFrame / mediaFps;
+    const timestampSeconds = sourceFrame / sourceFps;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/timeline/stores/actions/item-actions.ts` around lines 359 - 376: The code reads sourceFps from the timeline item but later uses media.fps (mediaFps) to compute timestampSeconds, which can diverge; change the timestamp calculation to use the already-determined sourceFps (falling back to media.fps only if sourceFps is undefined) so the same FPS value is used for timelineToSourceFrames and timestampSeconds; update the computation around sourceFps, sourceFrame and timestampSeconds in insertFreezeFrame (and keep useMediaLibraryStore/mediaItems/media lookup intact) to prefer sourceFps || media.fps and compute timestampSeconds = sourceFrame / (sourceFps || media.fps).

src/features/timeline/stores/actions/source-edit-actions.ts (1)
74-78: Minor DRY concern: `sourceFps` default is computed identically in two places.

`ctx.media.fps || 30` appears both in `resolveSourceEditContext` (line 74) and again in `createTimelineItem` (line 128). Consider including `sourceFps` in the `SourceEditContext` interface to avoid the duplicated fallback logic.

Also applies to: 128-131
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/timeline/stores/actions/source-edit-actions.ts` around lines 74 - 78: The fallback for media fps is duplicated; add a sourceFps property to the SourceEditContext interface and set it once in resolveSourceEditContext using ctx.media.fps || 30, then remove the duplicate fallback logic in createTimelineItem and reference ctx.sourceFps when computing sourceDurationFrames (and any other places using media.fps fallback) so the default is centralized.
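A minimal sketch of the centralization, assuming a simplified context shape (the real `SourceEditContext` has more fields):

```ts
// Hedged sketch: resolve the fps fallback once, in one place.
interface SourceEditContext {
  media: { fps?: number; duration: number };
  sourceFps: number; // always populated: media.fps || 30
}

function resolveSourceEditContext(media: { fps?: number; duration: number }): SourceEditContext {
  return {
    media,
    sourceFps: media.fps || 30, // the only place the default lives
  };
}

// createTimelineItem then reads ctx.sourceFps instead of re-deriving it:
// const sourceDurationFrames = Math.round(ctx.media.duration * ctx.sourceFps);
```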
src/features/timeline/stores/items-store.ts (1)

259-273: `timelineFps` read could be hoisted above the `.map()` for clarity.

`useTimelineSettingsStore.getState().fps` is called inside the `set()` updater's `.map()` callback. While it only executes for the matching item, hoisting it before the `set()` call (alongside `_trimItemEnd`) would be slightly cleaner and make the external dependency explicit.

♻️ Suggested refactor
```diff
 _trimItemStart: (id, trimAmount) => {
+  const timelineFps = useTimelineSettingsStore.getState().fps;
-  return set((state) => ({
+  set((state) => ({
     items: state.items.map((item) => {
       if (item.id !== id) return item;
-      const timelineFps = useTimelineSettingsStore.getState().fps;
       const { clampedAmount } = clampTrimAmount(item, 'start', trimAmount, timelineFps);
```
_trimItemEndat lines 285-297.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/timeline/stores/items-store.ts` around lines 259 - 273: Hoist the timeline FPS lookup out of the per-item map by reading useTimelineSettingsStore.getState().fps once before calling set() and pass that value into the _trimItemStart and _trimItemEnd updater logic; update the functions _trimItemStart and _trimItemEnd to accept or close over a local timelineFps variable instead of calling useTimelineSettingsStore.getState().fps inside the .map() callback, and reuse that hoisted timelineFps when calling clampTrimAmount and calculateTrimSourceUpdate to make the external dependency explicit and avoid repeated lookups.

src/lib/storage/indexeddb/decoded-preview-audio.ts (1)
37-47: Same `getStore()` outside `try-catch` inconsistency as in `waveforms.ts`.

If `getStore()` throws, the error won't be logged before propagating. Consider wrapping it inside the `try` block for consistency with `getDecodedPreviewAudio`.

Suggested fix
```diff
 export async function saveDecodedPreviewAudio(
   data: DecodedPreviewAudio
 ): Promise<void> {
+  try {
     const db = await getStore();
-  try {
     await db.put('decodedPreviewAudio', data);
   } catch (error) {
     logger.error(`Failed to save decoded preview audio ${data.id}:`, error);
     throw error;
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/storage/indexeddb/decoded-preview-audio.ts` around lines 37 - 47: Move the await getStore() call into the try block inside saveDecodedPreviewAudio so any errors from getStore() are caught and logged the same way as in getDecodedPreviewAudio; e.g., start the try before calling getStore(), then call await db.put('decodedPreviewAudio', data) and keep the existing logger.error and rethrow behavior for consistency (referencing saveDecodedPreviewAudio and getStore).

src/lib/storage/indexeddb/waveforms.ts (1)
64-74: Inconsistent error handling: `getStore()` is outside `try-catch`.

In `saveWaveformRecord`, if `getStore()` throws (e.g., reconnection fails), the error propagates without logging. Other functions like `getWaveform` and `getWaveformBins` wrap `getStore()` inside the `try-catch`. Consider moving `getStore()` inside the try block for consistent error logging.

Suggested fix
```diff
 export async function saveWaveformRecord(
   data: WaveformRecord
 ): Promise<void> {
+  try {
     const db = await getStore();
-  try {
     await db.put('waveforms', data);
   } catch (error) {
     logger.error(`Failed to save waveform record ${data.id}:`, error);
     throw error;
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/storage/indexeddb/waveforms.ts` around lines 64 - 74: The saveWaveformRecord function calls getStore() outside the try-catch so any errors from getStore() escape without logging; move the await getStore() call inside the existing try block in saveWaveformRecord so failures are caught and logged consistently (use the same logger.error pattern as in getWaveform/getWaveformBins), keep the db.put('waveforms', data) call and rethrow the caught error after logging.

src/lib/composition-runtime/components/custom-decoder-buffered-audio.tsx (2)
234-249: Per-instance global event listeners for AudioContext resume.

Each `CustomDecoderBufferedAudio` instance registers its own `pointerdown`/`keydown` listeners to resume the shared AudioContext. With many clips on the timeline, this duplicates work. Consider lifting resume-on-interaction into the shared context initialization or a single top-level listener.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/composition-runtime/components/custom-decoder-buffered-audio.tsx` around lines 234 - 249, CustomDecoderBufferedAudio currently registers per-instance global event listeners in its useEffect to call resume() which resumes the shared AudioContext via getSharedAudioContext; this duplicates listeners when many instances exist. Move the pointerdown/keydown resume logic out of CustomDecoderBufferedAudio and into the shared AudioContext initialization (or a single top-level module) so only one listener calls ctx.resume() on user interaction; remove the useEffect listener registration from CustomDecoderBufferedAudio (the resume function and getSharedAudioContext are the key symbols to refactor) and ensure the centralized handler uses the same resume semantics and removes listeners on teardown if applicable.
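A hedged sketch of a single top-level resume listener, assuming a `getSharedAudioContext`-style accessor as described in the comment; the install function is hypothetical:

```ts
// One-time resume-on-interaction hook; resumeListenerInstalled guards
// against repeat installation from multiple component instances.
let resumeListenerInstalled = false;

export function installAudioResumeListener(getCtx: () => AudioContext): void {
  if (resumeListenerInstalled) return;
  resumeListenerInstalled = true;

  const resume = () => {
    const ctx = getCtx();
    if (ctx.state === 'suspended') {
      void ctx.resume();
    }
    // One successful interaction is enough; drop the global listeners.
    window.removeEventListener('pointerdown', resume);
    window.removeEventListener('keydown', resume);
  };

  window.addEventListener('pointerdown', resume);
  window.addEventListener('keydown', resume);
}
```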
18-31: Module-level `sharedCtx` singleton can leak across composition lifecycles.

`sharedCtx` persists across component unmounts. If a composition is torn down and a new one created, the stale `AudioContext` may be in a `suspended` state or accumulate dangling connections. The `state === 'closed'` check on Line 27 only handles explicit closure — `suspended` contexts are reused as-is.

This is likely acceptable for this application since there's typically one active composition, but consider adding `sharedCtx` cleanup to `clearPreviewAudioCache()` or documenting this as intentional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/composition-runtime/components/custom-decoder-buffered-audio.tsx` around lines 18 - 31: The module-level singleton sharedCtx can persist across composition lifecycles; fix by ensuring sharedCtx is cleaned up when preview audio is cleared: update clearPreviewAudioCache() to detect if sharedCtx exists and is not closed, call sharedCtx.close() (handle promise errors) and then set sharedCtx = null; optionally also adjust getSharedAudioContext() to recreate the context when sharedCtx.state is not 'running' (treat 'suspended' as stale) so a fresh AudioContext is used after teardown.
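A hedged sketch of the suggested teardown, using the `sharedCtx` and `clearPreviewAudioCache` names from the comment; the body is illustrative, not the module's actual code:

```ts
// Close the shared AudioContext during cache teardown so a fresh
// context is created for the next composition.
let sharedCtx: AudioContext | null = null;

export function clearPreviewAudioCache(): void {
  // ...existing cache eviction (assumed)...
  if (sharedCtx && sharedCtx.state !== 'closed') {
    sharedCtx.close().catch(() => {
      // close() can reject if the context is already closing; safe to ignore.
    });
  }
  sharedCtx = null;
}
```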
src/lib/composition-runtime/components/custom-decoder-audio.tsx (2)

177-219: Ref-counting in `getOrCreateDecodedWavUrl` is correct but fragile under concurrent error + release.

The cache entry is deleted on promise rejection (Line 214), which correctly prevents stale entries. However, any concurrent callers that incremented `refs` before the rejection will call `releaseDecodedWavUrl` on a now-deleted entry (no-op). This is safe today since no URL was created, but adding a comment documenting this invariant would help future maintainers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/composition-runtime/components/custom-decoder-audio.tsx` around lines 177 - 219, The rejection handler in getOrCreateDecodedWavUrl deletes the cache entry which means concurrent callers that previously incremented entry.refs may later call releaseDecodedWavUrl on a deleted (no-op) entry; add a clear comment next to the catch/delete (and near the DecodedWavEntry type and releaseDecodedWavUrl implementation) stating this invariant: on promise rejection we remove the entry, no URL was created, releaseDecodedWavUrl may be called and will safely be a no-op, and therefore we intentionally delete the entry instead of keeping a tombstone; reference getOrCreateDecodedWavUrl, decodedWavUrlCache, DecodedWavEntry, and releaseDecodedWavUrl in the comment so future maintainers understand the concurrent behavior.
132-175: Large single-allocation WAV encoding may pressure memory for long audio.

`audioBufferToWavBlob` allocates one contiguous `ArrayBuffer` for the entire WAV file. For a 10-minute stereo source at 22050 Hz, that's ~53 MB in a single allocation. This is the fallback path (used when stored bins aren't available), so it's infrequent, but on memory-constrained devices it could fail.

Consider using the same `Blob([header, ...chunks])` approach used in `tryBuildWavBlobFromStoredBins` (Lines 99-129) to avoid a monolithic allocation — reuse the `interleaveStereoInt16` helper per-chunk instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/composition-runtime/components/custom-decoder-audio.tsx` around lines 132 - 175: audioBufferToWavBlob currently creates one large ArrayBuffer for the full WAV which can OOM on long audio; change it to build the WAV as Blob parts like tryBuildWavBlobFromStoredBins: create a small header ArrayBuffer (44 bytes) then iterate frames in reasonable-sized chunks, call interleaveStereoInt16 (or equivalent) for each chunk to produce small ArrayBuffers for PCM data, and return new Blob([header, ...pcmChunks], { type: 'audio/wav' }) while ensuring the header's data length fields (RIFF/data sizes) are set to the total PCM length before assembling the Blob.
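A hedged sketch of the chunked approach for 16-bit stereo PCM; `interleaveStereoInt16` (or any per-chunk converter) would feed the `pcmChunks` array, and the helper names are illustrative:

```ts
// Build a canonical 44-byte WAV header; sizes cover the total PCM payload.
function makeWavHeader(dataSize: number, sampleRate: number, channels: number): ArrayBuffer {
  const header = new ArrayBuffer(44);
  const view = new DataView(header);
  const writeString = (offset: number, s: string) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + dataSize, true);              // RIFF chunk size
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);                        // fmt chunk size
  view.setUint16(20, 1, true);                         // PCM format
  view.setUint16(22, channels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * channels * 2, true); // byte rate (16-bit)
  view.setUint16(32, channels * 2, true);              // block align
  view.setUint16(34, 16, true);                        // bits per sample
  writeString(36, 'data');
  view.setUint32(40, dataSize, true);                  // data chunk size
  return header;
}

// Assemble from small per-chunk buffers; the Blob never requires one
// contiguous allocation for the whole file.
function wavBlobFromChunks(pcmChunks: Int16Array[], sampleRate: number, channels: number): Blob {
  const dataSize = pcmChunks.reduce((n, c) => n + c.byteLength, 0);
  return new Blob([makeWavHeader(dataSize, sampleRate, channels), ...pcmChunks], {
    type: 'audio/wav',
  });
}
```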
src/features/timeline/components/timeline-track.tsx (1)

406-427: Source-frame calculation is duplicated between multi-item and single-item drop paths.

Lines 406-412 and 541-548 contain identical logic for computing `sourceFps`, `actualSourceDurationFrames`, and `sourceFramesForItemDuration`. Consider extracting a small helper to reduce duplication, e.g.:

Suggested extraction
```ts
function computeSourceFields(
  media: { fps?: number; duration: number },
  itemDuration: number,
  timelineFps: number
) {
  const sourceFps = media.fps || timelineFps;
  const actualSourceDurationFrames = Math.round(media.duration * sourceFps);
  const sourceFramesForItemDuration = Math.min(
    actualSourceDurationFrames,
    Math.round(itemDuration * sourceFps / timelineFps)
  );
  return { sourceFps, actualSourceDurationFrames, sourceFramesForItemDuration };
}
```
Verify each finding against the current code and only fix it if needed. In `@src/features/timeline/components/timeline-track.tsx` around lines 406 - 427: Duplicate logic computing sourceFps, actualSourceDurationFrames, and sourceFramesForItemDuration is present in the multi-item and single-item drop paths; extract that logic into a helper (e.g., computeSourceFields) that accepts media, itemDuration, and timeline fps (fps) and returns { sourceFps, actualSourceDurationFrames, sourceFramesForItemDuration }, then replace the duplicated blocks that build baseItem (and the equivalent single-item construction) to call this helper and use its returned values for sourceStart/sourceEnd/sourceDuration/sourceFps.

src/lib/composition-runtime/utils/audio-decode-cache.ts (1)
44-51: `float32ToInt16` truncates; sibling `floatToInt16` in `custom-decoder-audio.tsx` rounds.

This function assigns float products directly to `Int16Array` (which truncates toward zero), while the parallel `floatToInt16` in `custom-decoder-audio.tsx:40-43` uses `Math.round`. The difference is inaudible in practice but could cause 1-LSB discrepancies between the two paths. Consider aligning them for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/composition-runtime/utils/audio-decode-cache.ts` around lines 44 - 51: The float32ToInt16 function currently assigns scaled float values directly into an Int16Array which truncates; align it with the sibling floatToInt16 in custom-decoder-audio.tsx by applying Math.round to the scaled sample before storing so both conversion paths produce consistent 1‑LSB behavior (update float32ToInt16 to round the s * 0x8000 / 0x7FFF result accordingly while preserving the clamping logic).
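A hedged sketch of the rounded conversion; the asymmetric scaling (0x8000 for negative samples, 0x7FFF for positive) follows the prompt above:

```ts
// Round instead of letting the Int16Array assignment truncate toward zero.
function float32ToInt16(input: Float32Array): Int16Array {
  const out = new Int16Array(input.length);
  for (let i = 0; i < input.length; i++) {
    const s = Math.max(-1, Math.min(1, input[i])); // clamp to [-1, 1]
    out[i] = Math.round(s < 0 ? s * 0x8000 : s * 0x7fff);
  }
  return out;
}
```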
src/lib/composition-runtime/components/video-content.tsx (1)

17-17: Hardcoded log level suppresses debug output at import time.

`videoLog.setLevel(2)` unconditionally overrides the default log level to WARN for this module. This is fine for reducing noise, but it can't be changed at runtime for debugging without modifying the source. Consider gating this behind a flag or environment check, or adding a comment with instructions for re-enabling.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/composition-runtime/components/video-content.tsx` at line 17: The module currently forces WARN by calling videoLog.setLevel(2) at import time, preventing changing log verbosity at runtime; change this to a conditional that respects an environment flag or runtime config (e.g. check process.env.DEBUG_VIDEO_LOGS or a runtime prop/feature flag) before calling videoLog.setLevel, or remove the call and add a clear comment explaining how to set videoLog level for debugging; update references around videoLog.setLevel(2) in video-content.tsx so the level is only lowered when the flag indicates production/noisy suppression and ensure the code path exposes a way to enable more verbose logs for debugging.
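A hedged sketch of the gating, with DEBUG_VIDEO_LOGS as a hypothetical flag name; only `videoLog.setLevel` comes from the comment above:

```ts
// videoLog.setLevel is the module's own call; the env flag is an assumption.
declare const videoLog: { setLevel(level: number): void };

const debugVideoLogs =
  typeof process !== 'undefined' && process.env.DEBUG_VIDEO_LOGS === 'true';

if (!debugVideoLogs) {
  videoLog.setLevel(2); // WARN; run with DEBUG_VIDEO_LOGS=true to keep verbose logs
}
```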
src/lib/composition-runtime/compositions/main-composition.tsx (1)

649-672: Conservative fallback: all legacy clips (no `mediaId`) route through custom WASM decode.

When `segment.mediaId` is absent or the media entry is missing, `shouldUseCustomDecoder` returns `true`. This is safe for AC-3 support but means legacy project clips with standard codecs (AAC, MP3) will also go through the heavier `CustomDecoderAudio` path — downloading, decoding via mediabunny, and playing from an AudioBuffer instead of native `<audio>`.

If legacy projects are common, consider checking whether the source plays natively (e.g., via a `canPlayType` probe on a hidden `<audio>` element) before falling back to the custom decoder. This could be a follow-up improvement.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/composition-runtime/compositions/main-composition.tsx` around lines 649 - 672: The current shouldUseCustomDecoder function unconditionally returns true for segments lacking mediaId or missing media, causing legacy clips with native-playable codecs to always use the heavy CustomDecoderAudio path; change this by probing native playback first: inside shouldUseCustomDecoder (and where VideoAudioSegment is handled), when segment.mediaId is missing or mediaById.get(...) is undefined, create a temporary HTMLAudioElement and call canPlayType with the segment's available MIME/codec info (e.g., segment.mimeType or segment.sourceUrl-derived type) and return false if canPlayType yields "probably" or "maybe", otherwise return true; keep the existing fallback to needsCustomAudioDecoder(media.audioCodec) when media is present. Ensure the probe is synchronous/cheap (use a shared cached hidden audio element) and guard for missing segment.mimeType by falling back to the original conservative true.
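A hedged sketch of a cached native-playback probe; `segment.mimeType` is hypothetical, per the prompt above:

```ts
// Shared hidden element so repeated probes stay cheap.
let probeEl: HTMLAudioElement | null = null;

function canPlayNatively(mimeType: string | undefined): boolean {
  if (!mimeType) return false; // unknown type: keep the conservative fallback
  probeEl ??= document.createElement('audio');
  const verdict = probeEl.canPlayType(mimeType);
  return verdict === 'probably' || verdict === 'maybe';
}

// In shouldUseCustomDecoder's legacy branch (illustrative):
// if (!segment.mediaId || !media) return !canPlayNatively(segment.mimeType);
```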
src/features/timeline/services/waveform-worker.ts (2)

254-255: Redundant consecutive progress messages (90 → 95) with no work in between.

Lines 254–255 emit progress 90 and 95 back-to-back with no intervening computation. This creates unnecessary message traffic.
♻️ Suggested simplification
```diff
 self.postMessage({ type: 'progress', requestId, progress: 90 } as WaveformProgressResponse);
-self.postMessage({ type: 'progress', requestId, progress: 95 } as WaveformProgressResponse);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/timeline/services/waveform-worker.ts` around lines 254 - 255, The two back-to-back progress posts using self.postMessage({ type: 'progress', requestId, progress: 90 } as WaveformProgressResponse) and progress 95 are redundant because there is no work between them; remove one of the messages (keep the later, more meaningful progress or consolidate into a single postMessage) in the waveform-worker.ts worker flow where WaveformProgressResponse messages are emitted to reduce unnecessary message traffic.
171-188: Type-casting `sample` to extract `sampleRate`/`timestamp`/`duration` is fragile.

The `AudioData` Web API type does define `timestamp`, `duration`, `sampleRate`, and `numberOfFrames` as standard properties. The cast to a plain object type at line 174 suggests the TypeScript types available in the worker context may be incomplete. This works but may silently break if the mediabunny `AudioSampleSink` changes the shape of yielded objects.

Consider adding a brief comment noting why the cast is needed (e.g., "Worker globals lack AudioData type definitions") so future maintainers don't remove it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/timeline/services/waveform-worker.ts` around lines 171 - 188: The cast of sample to the ad-hoc shape (assigned to sampleData) is fragile because worker TS globals may not include the AudioData type; add a concise comment above the sampleData assignment explaining why the cast is required (e.g., "Worker globals lack AudioData type definitions so we cast to access timestamp/duration/sampleRate"), reference the sample and sampleData variables in the comment, and keep the comment short so future maintainers don't remove the cast accidentally; optionally consider replacing the ad-hoc cast with a Partial<AudioData> if type defs become available later.

src/features/timeline/services/waveform-cache.ts (1)
680-691: Race condition between `workerRequestId` increment and `pendingRequests.set`.

On line 684, the `requestId` is constructed using `this.workerRequestId`, but `generateWaveform` (called on line 682) internally increments `workerRequestId` on line 424 inside `generateWaveformWithWorker`. If that call fails and falls back to `generateWaveformFallback`, the `requestId` stored in `pendingRequests` on line 684 won't match the one actually used by the worker.

More concretely: `generateWaveform` is called → `generateWaveformWithWorker` increments `workerRequestId` to N and uses `"waveform-N"` → then line 684 stores `"waveform-N"` in `pendingRequests`. This does match for the success case. But if the worker fails and fallback runs, `workerRequestId` is still N, and line 684 stores `"waveform-N"` — but the fallback path doesn't use requestIds at all. So `abort()` on line 725 would send a stale requestId to the worker, which is harmless since no worker request is active.

This is functionally safe but semantically misleading. The `requestId` in `pendingRequests` may not correspond to any active worker request when the fallback is in use.
Have
generateWaveformreturn{ promise, requestId }so the caller always stores the correct requestId, or move the pendingRequests tracking insidegenerateWaveformWithWorker.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/timeline/services/waveform-cache.ts` around lines 680 - 691: The pendingRequests entry uses a requestId built from this.workerRequestId but generateWaveform (specifically generateWaveformWithWorker) increments workerRequestId internally and fallback paths don't use that id, causing the stored requestId to potentially not correspond to an active worker; fix by having generateWaveform return both the promise and the actual requestId used (e.g., { promise, requestId }) or move pendingRequests.set into generateWaveformWithWorker so the function that increments workerRequestId is the same one that registers the pendingRequests entry; update callers (the caller that currently calls generateWaveform and later abort()) to use the returned requestId when storing pendingRequests and when calling abort().
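A hedged sketch of the first option — returning the requestId alongside the promise; the dependency shape is illustrative, not the cache's real API:

```ts
interface WaveformHandle {
  promise: Promise<Float32Array>;
  requestId: string; // the id the worker attempt actually used
}

function generateWaveform(deps: {
  nextRequestId(): string;
  runWorker(id: string): Promise<Float32Array>;
  runFallback(): Promise<Float32Array>;
}): WaveformHandle {
  const requestId = deps.nextRequestId();
  // A worker failure transparently falls back to main-thread decoding;
  // the caller still holds the id it can safely pass to abort().
  const promise = deps.runWorker(requestId).catch(() => deps.runFallback());
  return { promise, requestId };
}
```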
src/features/timeline/utils/source-calculations.ts (1)

3-14: File-level documentation is stale — formulas don't reflect FPS-aware conversions.

The relationships documented on lines 12–13 (`sourceFrames = timelineFrames * speed` and `timelineFrames = sourceFrames / speed`) are the simplified forms that only hold when `timelineFps == sourceFps`. With the new FPS-aware logic, the actual relationships are:

- `sourceFrames = (timelineFrames / timelineFps) * sourceFps * speed`
- `timelineFrames = (sourceFrames / sourceFps) * timelineFps / speed`

📝 Proposed documentation update
```diff
  * Relationships:
- * - sourceFrames = timelineFrames * speed
- * - timelineFrames = sourceFrames / speed
+ * - sourceFrames = (timelineFrames / timelineFps) * sourceFps * speed
+ * - timelineFrames = (sourceFrames / sourceFps) * timelineFps / speed
+ * When timelineFps == sourceFps, these simplify to:
+ * - sourceFrames = timelineFrames * speed
+ * - timelineFrames = sourceFrames / speed
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/timeline/utils/source-calculations.ts` around lines 3 - 14: Update the stale file-level documentation in the top comment of source-calculations.ts to use FPS-aware conversions instead of the simplified formulas: replace "sourceFrames = timelineFrames * speed" and "timelineFrames = sourceFrames / speed" with the corrected equations "sourceFrames = (timelineFrames / timelineFps) * sourceFps * speed" and "timelineFrames = (sourceFrames / sourceFps) * timelineFps / speed", and briefly note the meaning of timelineFps and sourceFps so readers understand the variables used in the conversions.

src/lib/composition-runtime/components/item.tsx (1)
64-67: `mediaItems.find()` in selector runs a linear scan on every store update.

The selector at lines 65–67 performs `s.mediaItems.find((m) => m.id === item.mediaId)` on every `useMediaLibraryStore` state change. If `mediaItems` is large (hundreds of items), this could cause micro-stutters during frequent store updates (e.g., during import/processing).

Consider maintaining a `Map<string, MediaItem>` index in the store, or memoizing the lookup outside the selector. Not blocking since the selector returns a primitive (so downstream rerenders are gated), but worth optimizing if the media library grows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/composition-runtime/components/item.tsx` around lines 64 - 67, The selector `useMediaLibraryStore` is doing a linear scan via `s.mediaItems.find(...)` each store update; replace that lookup with an O(1) index or move the lookup out of the selector: add/maintain a Map-like index in the media store (e.g., `mediaItemsById`) keyed by `id` and change the selector that computes `mediaSourceFps` to read `s.mediaItemsById[item.mediaId]?.fps` (or alternatively memoize the find outside the selector with `useMemo` keyed by `item.mediaId` and `s.mediaItems`), referencing `mediaItems`, `mediaItemsById`, `useMediaLibraryStore`, `mediaSourceFps`, and `item.mediaId`.
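A hedged sketch of the indexed store shape; the names follow the prompt above but the types are illustrative:

```ts
// Maintain an id-keyed index alongside mediaItems so selectors avoid
// linear scans.
interface MediaItem {
  id: string;
  fps?: number;
}

interface MediaLibraryState {
  mediaItems: MediaItem[];
  mediaItemsById: Record<string, MediaItem>;
}

function indexMediaItems(items: MediaItem[]): MediaLibraryState {
  const mediaItemsById: Record<string, MediaItem> = {};
  for (const item of items) mediaItemsById[item.id] = item;
  return { mediaItems: items, mediaItemsById };
}

// The selector becomes an O(1) lookup that still returns a primitive:
// const mediaSourceFps = useMediaLibraryStore(
//   (s) => s.mediaItemsById[item.mediaId]?.fps
// );
```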
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/features/timeline/components/clip-waveform/index.tsx`:
- Around line 292-294: renderVersion arithmetic can collide across fields;
change the computation in renderVersion to allocate non-overlapping numeric
ranges (e.g., use much larger multipliers) so progressBucket, peaks.length,
quantizedPPS and height cannot produce the same numeric value; for example
compute renderVersion = progressBucket * 1e12 + peaks.length * 1e6 +
quantizedPPS * 1e3 + height (ensuring the result stays <=
Number.MAX_SAFE_INTEGER) and replace the existing expression in the
renderVersion calculation referenced by renderVersion, progressBucket,
peaks.length, quantizedPPS and height so TiledCanvas.version no longer suffers
collisions.
In `@src/features/timeline/services/waveform-cache.ts`:
- Around line 434-438: The timeout handler currently removes listeners and
rejects the promise but doesn't stop the worker, causing a resource leak; update
the timeout callback in waveform-cache.ts so that after removing handleMessage
and handleError it also instructs the worker to stop (e.g., send an abort
message like worker.postMessage({ type: 'abort' }) or call worker.terminate())
before rejecting, and ensure any cleanup (clearTimeout, removing listeners)
remains intact so the worker doesn't continue decoding in the background.
In `@src/features/timeline/services/waveform-worker.ts`:
- Around line 222-231: The bin-flush logic (using processedEndTimeSec,
nextChunkStart, emitChunk, binSampleCount, samplesPerSecond, numOutputSamples)
assumes in-order sample delivery so late-arriving earlier samples can be
dropped; to mitigate, defer flushing by one extra bin as a safety margin:
compute completedOutputExclusive from processedEndTimeSec but subtract one bin
(binSampleCount) before emitting chunks (clamped to >=0 and <= numOutputSamples)
so you only flush bins that are at least one bin older, or alternatively add a
clear comment near this logic documenting the in-order delivery assumption and
why the one-bin defer is safe.
In `@src/features/timeline/theme.css`:
- Around line 16-18: The two CSS custom properties --color-timeline-video and
--color-timeline-image are identical (oklch(0.62 0.17 250)), which prevents
visual distinction between video and image timeline items; update one of them
(e.g., adjust hue or lightness of --color-timeline-video or
--color-timeline-image) to a slightly different oklch value so video and image
items are visually distinct, then verify timeline components that consume these
variables (e.g., CSS classes or components that reference --color-timeline-video
and --color-timeline-image) still render correctly and update any design tokens
or documentation if present.
In `@src/lib/composition-runtime/components/custom-decoder-buffered-audio.tsx`:
- Line 273: The targetTime calculation in CustomDecoderBufferedAudio (line with
const targetTime = (trimBefore / fps) + (frame * playbackRate / fps)) uses
timeline fps instead of the source clip's fps; add sourceFps to the
VideoAudioSegment interface in main-composition.tsx, populate segment.sourceFps
from item.sourceFps where segments are constructed, thread sourceFps through
CustomDecoderAudioProps and CustomDecoderBufferedAudioProps, and update the
targetTime formula to use (trimBefore / (sourceFps ?? fps)) and similarly use
(frame * playbackRate / (sourceFps ?? fps)) so it falls back to timeline fps
when sourceFps is missing.
In `@src/lib/composition-runtime/components/debug-overlay.tsx`:
- Around line 67-70: The destructured props for the DebugOverlay component
reference sourceFps = fps before fps is initialized, causing a TDZ; reorder the
destructuring so fps (with its default 30) is declared before sourceFps (so
sourceFps = fps can use that value), i.e., move the fps binding above sourceFps
in the props pattern (references: sourceFps, fps, DebugOverlay) so downstream
calculations (lines using sourceFps) get a valid number.
In `@src/lib/composition-runtime/utils/audio-decode-cache.ts`:
- Around line 528-547: After computing storedTotalFrames from persistedBins, add
a guard for storedTotalFrames === 0 and handle it before constructing an
OfflineAudioContext; specifically, in the block that uses
persistedBins/storedTotalFrames (symbols: persistedBins, storedTotalFrames,
OfflineAudioContext, combined), either return an appropriate empty result (e.g.,
a clear "no-audio" value or an empty buffer representation) or throw a
descriptive error, so we never call new OfflineAudioContext(2, 0,
STORAGE_SAMPLE_RATE) which triggers a NotSupportedError. Ensure the chosen
early-return/throw is consistent with surrounding API expectations.
In `@src/types/storage.ts`:
- Line 115: Update the stale comment on the binDurationSec field to reflect the
new target interval (5–10 seconds) instead of "typically 30"; locate the
binDurationSec property in the storage.types (symbol: binDurationSec) and change
its inline comment to indicate the new seconds-per-bin range and rationale
(e.g., "Seconds per bin (typically 5–10) for finer-grained streaming").
```ts
const progressBucket = Math.floor(progress);
// Include decode progress so tiles repaint as streaming chunks arrive.
const renderVersion = progressBucket * 10000000 + peaks.length * 10000 + quantizedPPS + height;
```
🧩 Analysis chain

🏁 Scripts executed:

```shell
# Find TiledCanvas component definition
fd tiled-canvas --type f -e ts -e tsx | head -5

# Search for TiledCanvas interface/class definition and version field
rg -n 'class TiledCanvas|interface TiledCanvas|version.*:' --type=ts --type=tsx src/features/timeline/components/ -A 3 | head -40

# Check the actual usage of renderVersion in the component
rg -n 'renderVersion' src/features/timeline/components/clip-waveform/ -B 2 -A 2

# Read the TiledCanvas component to check the version prop type
cat src/features/timeline/components/clip-filmstrip/tiled-canvas.tsx | head -100

# Check how peaks.length and quantizedPPS are computed
rg -n 'peaks\s*=|quantizedPPS\s*=' src/features/timeline/components/clip-waveform/index.tsx -B 2 -A 2

# Find where peaks is defined/used in clip-waveform
rg -n 'peaks\s*[=:]|const peaks' src/features/timeline/components/clip-waveform/index.tsx -B 3 -A 1

# Check the full context around renderVersion to understand variable types
sed -n '280,310p' src/features/timeline/components/clip-waveform/index.tsx

# Search for useWaveform to understand what peaks represents
rg -n 'useWaveform|const.*peaks' src/features/timeline/hooks/ -B 2 -A 5 | head -60

# Get the full useWaveform return type to see what peaks is
rg -n 'interface UseWaveformResult|peaks' src/features/timeline/hooks/use-waveform.ts -B 2 -A 2 | head -40

# Check what the actual return value of useWaveform shows about peaks
sed -n '1,100p' src/features/timeline/hooks/use-waveform.ts | grep -A 30 'interface UseWaveformResult'

# Look at the actual function to see how peaks is created/processed
sed -n '40,150p' src/features/timeline/hooks/use-waveform.ts

# Check if peaks is checked for null before renderVersion is called
sed -n '270,310p' src/features/timeline/components/clip-waveform/index.tsx

# Look at the earlier guard for peaks null check
sed -n '250,295p' src/features/timeline/components/clip-waveform/index.tsx

# Check what the actual ranges are for peaks.length, quantizedPPS, and height;
# look for any constants or constraints
rg -n 'WAVEFORM|pixelsPerSecond|sampleRate' src/features/timeline/components/clip-waveform/index.tsx | head -20

# Verify if there are any edge cases where the collision could occur;
# check the actual variable declarations
sed -n '1,50p' src/features/timeline/components/clip-waveform/index.tsx
```
renderVersion arithmetic creates collision risk for multi-state rerenders.
The version is computed as progressBucket * 10000000 + peaks.length * 10000 + quantizedPPS + height. For clips with high peak counts (e.g., a 10-minute clip at 100 samples/sec yields ~60,000 peaks), peaks.length * 10000 = 600,000,000—far exceeding the progressBucket stride of 10,000,000. Two different render states can produce identical version numbers:
- progress=0, peaks.length=1000, quantizedPPS=150, height=0 → 10,000,150
- progress=1, peaks.length=0, quantizedPPS=150, height=0 → 10,000,150
This causes TiledCanvas to skip necessary repaints when the version number happens to collide.
Since TiledCanvas.version must be numeric, use strides large enough that the fields occupy non-overlapping ranges. (Bitwise shifts are a poor fit here: JavaScript shift operators truncate to 32 bits, so progressBucket << 30 already overflows for progressBucket ≥ 2.) Example:
```diff
 const progressBucket = Math.floor(progress);
 // Include decode progress so tiles repaint as streaming chunks arrive.
-const renderVersion = progressBucket * 10000000 + peaks.length * 10000 + quantizedPPS + height;
+// Non-overlapping strides; assumes quantizedPPS and height each stay below 1000,
+// and the result remains far below Number.MAX_SAFE_INTEGER.
+const renderVersion = progressBucket * 1e12 + peaks.length * 1e6 + quantizedPPS * 1e3 + height;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/timeline/components/clip-waveform/index.tsx` around lines 292 -
294, renderVersion arithmetic can collide across fields; change the computation
in renderVersion to allocate non-overlapping numeric ranges (e.g., use much
larger multipliers) so progressBucket, peaks.length, quantizedPPS and height
cannot produce the same numeric value; for example compute renderVersion =
progressBucket * 1e12 + peaks.length * 1e6 + quantizedPPS * 1e3 + height
(ensuring the result stays <= Number.MAX_SAFE_INTEGER) and replace the existing
expression in the renderVersion calculation referenced by renderVersion,
progressBucket, peaks.length, quantizedPPS and height so TiledCanvas.version no
longer suffers collisions.
```diff
 const timeout = setTimeout(() => {
   worker.removeEventListener('message', handleMessage);
   worker.removeEventListener('error', handleError);
   reject(new Error('Worker timeout'));
-}, 30000);
+}, 90000);
```
Worker timeout rejects the promise but doesn't abort the worker — resource leak.
When the 90-second timeout fires, the handler/error listeners are removed and the promise is rejected, but no abort message is sent to the worker. The worker continues decoding in the background, consuming CPU and memory until it finishes or errors out.
🐛 Proposed fix: send abort on timeout

```diff
 const timeout = setTimeout(() => {
   worker.removeEventListener('message', handleMessage);
   worker.removeEventListener('error', handleError);
+  worker.postMessage({ type: 'abort', requestId });
   reject(new Error('Worker timeout'));
 }, 90000);
```

🤖 Prompt for AI Agents
}, 90000);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/timeline/services/waveform-cache.ts` around lines 434 - 438, The
timeout handler currently removes listeners and rejects the promise but doesn't
stop the worker, causing a resource leak; update the timeout callback in
waveform-cache.ts so that after removing handleMessage and handleError it also
instructs the worker to stop (e.g., send an abort message like
worker.postMessage({ type: 'abort' }) or call worker.terminate()) before
rejecting, and ensure any cleanup (clearTimeout, removing listeners) remains
intact so the worker doesn't continue decoding in the background.
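The main-thread message only helps if the worker cooperates; a hypothetical worker-side counterpart (not shown in the review) would check an abort flag between chunks:

```ts
// Inside the worker script: flip a flag on abort and poll it in the decode loop.
let aborted = false;

self.addEventListener('message', (event: MessageEvent) => {
  if (event.data?.type === 'abort') {
    aborted = true;
  }
});

// ...then, between decode iterations:
// if (aborted) return; // stop decoding; the main thread has already given up on this request
```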
```ts
// Flush full bins that can no longer change.
const completedOutputExclusive = Math.min(
  numOutputSamples,
  Math.floor(processedEndTimeSec * samplesPerSecond)
);
while (nextChunkStart + binSampleCount <= completedOutputExclusive) {
  const end = nextChunkStart + binSampleCount;
  emitChunk(nextChunkStart, end);
  nextChunkStart = end;
}
```
Bin flushing assumes in-order sample delivery — out-of-order samples would produce stale chunks.
processedEndTimeSec advances via Math.max, so if a sample with a later timestamp arrives before an earlier one, processedEndTimeSec advances and bins covering the earlier range may be flushed before the late-arriving sample's peaks are written into them. The flushed chunk (transferred to the main thread) would then be missing those peaks.
For most audio codecs samples arrive in presentation-time order, so this is unlikely to manifest in practice. However, if mediabunny's AudioSampleSink ever yields reordered samples (e.g., for certain codecs or seek scenarios), the streamed waveform would have visual gaps that only resolve on reload from storage.
Consider documenting this assumption or, if defensible, deferring bin flush by one extra bin as a safety margin.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/timeline/services/waveform-worker.ts` around lines 222 - 231,
The bin-flush logic (using processedEndTimeSec, nextChunkStart, emitChunk,
binSampleCount, samplesPerSecond, numOutputSamples) assumes in-order sample
delivery so late-arriving earlier samples can be dropped; to mitigate, defer
flushing by one extra bin as a safety margin: compute completedOutputExclusive
from processedEndTimeSec but subtract one bin (binSampleCount) before emitting
chunks (clamped to >=0 and <= numOutputSamples) so you only flush bins that are
at least one bin older, or alternatively add a clear comment near this logic
documenting the in-order delivery assumption and why the one-bin defer is safe.
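A sketch of the one-bin safety margin, reusing the worker-local names quoted above; only bins at least one full bin behind the high-water mark are flushed, and a final flush after decode completion emits the deferred tail:

```ts
// Deferred flush: hold back one bin so a slightly late sample can still
// land in its bin before the bin's buffers are transferred away.
const safeEndExclusive = Math.min(
  numOutputSamples,
  Math.max(0, Math.floor(processedEndTimeSec * samplesPerSecond) - binSampleCount)
);
while (nextChunkStart + binSampleCount <= safeEndExclusive) {
  const end = nextChunkStart + binSampleCount;
  emitChunk(nextChunkStart, end);
  nextChunkStart = end;
}
```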
```css
--color-timeline-video: oklch(0.62 0.17 250);
--color-timeline-audio: oklch(0.22 0.02 302);
--color-timeline-image: oklch(0.62 0.17 250);
```
--color-timeline-video is now identical to --color-timeline-image.
Both resolve to oklch(0.62 0.17 250). If these item types can coexist on the timeline, users won't be able to distinguish video clips from image clips by color alone. Is this intentional, or should one of them be differentiated (e.g., a slight hue or lightness shift)?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/timeline/theme.css` around lines 16 - 18, The two CSS custom
properties --color-timeline-video and --color-timeline-image are identical
(oklch(0.62 0.17 250)), which prevents visual distinction between video and
image timeline items; update one of them (e.g., adjust hue or lightness of
--color-timeline-video or --color-timeline-image) to a slightly different oklch
value so video and image items are visually distinct, then verify timeline
components that consume these variables (e.g., CSS classes or components that
reference --color-timeline-video and --color-timeline-image) still render
correctly and update any design tokens or documentation if present.
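If the overlap is unintentional, a small hue shift is enough to separate the two; the value below is illustrative, not a design decision:

```css
--color-timeline-video: oklch(0.62 0.17 250); /* original blue, unchanged */
--color-timeline-image: oklch(0.62 0.17 230); /* hypothetical: nudged hue for contrast */
```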
```ts
if (!ctx || !gain) return;

const isPremounted = frame < 0;
const targetTime = (trimBefore / fps) + (frame * playbackRate / fps);
```
🧩 Analysis chain: verification scripts traced trimBefore from getVideoTrimBefore (which returns source frames) through VideoAudioSegment construction in main-composition.tsx, confirmed that video-content.tsx already divides by sourceFps, confirmed that CustomDecoderAudioProps and CustomDecoderBufferedAudioProps do not currently receive sourceFps, and verified that item.sourceFps is available at the point where segments are constructed (script output elided).
trimBefore is converted to seconds using timeline fps instead of source FPS.
trimBefore originates from getVideoTrimBefore (source frames). Dividing by the timeline fps produces an incorrect offset when the source frame rate differs from the project frame rate. The same pattern was corrected in video-content.tsx line 75 to use sourceFps. This component requires the same fix.
To resolve this:
- Add `sourceFps` to the `VideoAudioSegment` interface in `main-composition.tsx`
- Extract `item.sourceFps` when constructing segments (around line 554) and include it in the segment object
- Thread `sourceFps` through `CustomDecoderAudioProps` and `CustomDecoderBufferedAudioProps`
- Use `sourceFps` (with fallback to `fps`) in the targetTime calculation:
Proposed fix for line 273:

```diff
-const targetTime = (trimBefore / fps) + (frame * playbackRate / fps);
+const effectiveSourceFps = sourceFps ?? fps;
+const targetTime = (trimBefore / effectiveSourceFps) + (frame * playbackRate / fps);
```

This requires threading sourceFps from the segment through the component props.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/composition-runtime/components/custom-decoder-buffered-audio.tsx` at
line 273, The targetTime calculation in CustomDecoderBufferedAudio (line with
const targetTime = (trimBefore / fps) + (frame * playbackRate / fps)) uses
timeline fps instead of the source clip's fps; add sourceFps to the
VideoAudioSegment interface in main-composition.tsx, populate segment.sourceFps
from item.sourceFps where segments are constructed, thread sourceFps through
CustomDecoderAudioProps and CustomDecoderBufferedAudioProps, and update the
targetTime formula to divide trimBefore by (sourceFps ?? fps) while keeping the
(frame * playbackRate / fps) term in timeline fps, since frame is a
timeline-domain value; the trim offset then falls back to timeline fps when
sourceFps is missing.
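A sketch of the threading, with the interface and prop names taken from the review and the optional-field shape as an assumption:

```ts
interface VideoAudioSegment {
  // ...existing fields
  sourceFps?: number; // populated from item.sourceFps at segment construction
}

interface CustomDecoderBufferedAudioProps {
  // ...existing props
  fps: number;        // timeline/project fps
  sourceFps?: number; // source-native fps; optional for legacy items
}

// In the playback sync effect, trimBefore (source frames) uses source fps,
// while the timeline frame term keeps project fps:
// const effectiveSourceFps = sourceFps ?? fps;
// const targetTime = trimBefore / effectiveSourceFps + (frame * playbackRate) / fps;
```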
```ts
sourceFps = fps,
isInvalidSeek,
exceedsSource,
fps = 30,
```
Bug: sourceFps default references fps before it is declared in the destructuring pattern.
In destructuring defaults, values are evaluated left-to-right. sourceFps = fps (line 67) references the fps binding that isn't initialized until line 70, causing a TDZ (temporal dead zone) error at runtime when sourceFps is omitted by the caller. This will crash the component or produce undefined/NaN in all downstream calculations (lines 123–125, 191, 201, 203, 205).
Fix: reorder so fps is destructured before sourceFps

```diff
 sourceDuration,
 durationInFrames,
 sourceFramesNeeded,
 sourceEndPosition,
-sourceFps = fps,
 isInvalidSeek,
 exceedsSource,
 fps = 30,
+sourceFps = fps,
 position = 'top-left',
```
🧰 Tools
🪛 Biome (2.3.14)
[error] 67-67: This variable is used before its declaration (lint/correctness/noInvalidUseBeforeDeclaration).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/composition-runtime/components/debug-overlay.tsx` around lines 67 -
70, The destructured props for the DebugOverlay component reference sourceFps =
fps before fps is initialized, causing a TDZ; reorder the destructuring so fps
(with its default 30) is declared before sourceFps (so sourceFps = fps can use
that value), i.e., move the fps binding above sourceFps in the props pattern
(references: sourceFps, fps, DebugOverlay) so downstream calculations (lines
using sourceFps) get a valid number.
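A two-line demonstration of the underlying rule:

```ts
// Destructuring defaults evaluate left to right, so a default may only
// reference bindings declared earlier in the same pattern.
const { d = 30, c = d } = {} as { c?: number; d?: number }; // OK: d === 30, c === 30

// The reverse order fails at runtime when the default is needed:
// const { c2 = d2, d2 = 30 } = {}; // ReferenceError: Cannot access 'd2' before initialization
```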
```ts
// Wait for all bins and assemble playback buffer from downsampled bins.
const totalBins = binIndex;
const persistedBins = await Promise.all(binFlushPromises);
persistedBins.sort((a, b) => a.binIndex - b.binIndex);

const storedTotalFrames = persistedBins.reduce((sum, b) => sum + b.frames, 0);
const outCtx = new OfflineAudioContext(2, storedTotalFrames, STORAGE_SAMPLE_RATE);
const combined = outCtx.createBuffer(2, storedTotalFrames, STORAGE_SAMPLE_RATE);
const outLeft = combined.getChannelData(0);
const outRight = combined.getChannelData(1);

let offset = 0;
for (const bin of persistedBins) {
  outLeft.set(int16ToFloat32(bin.left), offset);
  outRight.set(int16ToFloat32(bin.right), offset);
  offset += bin.frames;
}
if (offset !== storedTotalFrames) {
  throw new Error(`Decoded audio assembly mismatch: ${offset}/${storedTotalFrames} frames`);
}
```
OfflineAudioContext will throw if storedTotalFrames is 0.
If the audio track yields zero samples (e.g., an empty or corrupt AC-3 stream), persistedBins is empty, storedTotalFrames is 0, and new OfflineAudioContext(2, 0, STORAGE_SAMPLE_RATE) on Line 534 will throw a NotSupportedError since the spec requires length > 0.
Proposed guard:

```diff
 const storedTotalFrames = persistedBins.reduce((sum, b) => sum + b.frames, 0);
+if (storedTotalFrames === 0) {
+  throw new Error(`Decoded audio produced zero frames for media ${mediaId}`);
+}
 const outCtx = new OfflineAudioContext(2, storedTotalFrames, STORAGE_SAMPLE_RATE);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/composition-runtime/utils/audio-decode-cache.ts` around lines 528 -
547, After computing storedTotalFrames from persistedBins, add a guard for
storedTotalFrames === 0 and handle it before constructing an
OfflineAudioContext; specifically, in the block that uses
persistedBins/storedTotalFrames (symbols: persistedBins, storedTotalFrames,
OfflineAudioContext, combined), either return an appropriate empty result (e.g.,
a clear "no-audio" value or an empty buffer representation) or throw a
descriptive error, so we never call new OfflineAudioContext(2, 0,
STORAGE_SAMPLE_RATE) which triggers a NotSupportedError. Ensure the chosen
early-return/throw is consistent with surrounding API expectations.
```ts
sampleRate: number;     // Samples/sec in stored bins
totalSamples: number;   // Total peak sample count
binCount: number;       // Number of bins
binDurationSec: number; // Seconds per bin (typically 30)
```
Stale comment: bin duration described as "typically 30" but PR reduces it to 5–10s.
The PR objectives state bin duration is reduced from ~30s to 5–10s for finer-grained streaming. Consider updating the comment to match.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/types/storage.ts` at line 115, Update the stale comment on the
binDurationSec field to reflect the new target interval (5–10 seconds) instead
of "typically 30"; locate the binDurationSec property in the storage.types
(symbol: binDurationSec) and change its inline comment to indicate the new
seconds-per-bin range and rationale (e.g., "Seconds per bin (typically 5–10) for
finer-grained streaming").
Summary
Test plan
- `npm run build` to verify no TypeScript errors

Summary by CodeRabbit
New Features
Bug Fixes
Improvements