feat: implement TalkingHead renderer with image/audio props #189

SecurityQQ merged 1 commit into main from
Add a full rendering pipeline for the `<TalkingHead>` component:

- `image` prop accepts `VargElement<image>` or `ResolvedElement<image>`
- `audio` prop accepts `VargElement<speech>` or `ResolvedElement<speech>`
- Resolves image + speech in parallel, then generates the lipsync video
- Wired into the `clip.ts` switch statement as a video layer
- Made `TalkingHead` awaitable via `makeThenable` + `resolveTalkingHeadElement`
- Added a `resolution` prop (`480p`/`720p`/`1080p`) for lipsync generation
- 8 tests covering element creation, clip integration, error cases, pre-resolved elements, and lazy element rendering
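The awaitable-element behavior described above can be sketched as follows. `makeThenable` and `resolveTalkingHeadElement` are names taken from this PR; the bodies below are invented stand-ins for illustration, not the library's real implementation.

```typescript
// What the resolver ultimately produces: a resolved element backed by a file.
type Resolved = { type: "talking-head"; file: string };

// An element that is both a plain tree node and a PromiseLike, so callers
// can either place it inside a clip or `await` it directly.
interface ThenableElement extends PromiseLike<Resolved> {
  type: "talking-head";
  props: Record<string, unknown>;
}

// Hypothetical stand-in for the real resolver: resolves the element to a file.
async function resolveTalkingHeadElement(el: {
  props: Record<string, unknown>;
}): Promise<Resolved> {
  return { type: "talking-head", file: "/tmp/lipsync.mp4" };
}

// makeThenable attaches a `then` method so resolution is lazy: nothing
// renders until the element is actually awaited.
function makeThenable(el: {
  type: "talking-head";
  props: Record<string, unknown>;
}): ThenableElement {
  return {
    ...el,
    then(onFulfilled, onRejected) {
      return resolveTalkingHeadElement(el).then(onFulfilled, onRejected);
    },
  };
}
```

With this shape, `const { file } = await TalkingHead(...)` and `<Clip>{TalkingHead(...)}</Clip>` can share one element type.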
Walkthrough

A new `talking-head` React element is introduced with thenable async resolution. The updated props structure takes `image`/`audio` Varg elements instead of string inputs. Resolution coordinates the image + audio lipsync pipeline via the video model. Rendering and clip integration are added with comprehensive test coverage.
Sequence diagram:

```mermaid
sequenceDiagram
    participant App as App
    participant Elem as TalkingHead Element
    participant Resolve as resolveTalkingHeadElement
    participant ImageRes as resolveImageProp
    participant AudioRes as resolveAudioProp
    participant VideoRender as renderVideo
    participant Render as renderTalkingHead
    participant Clip as renderClipLayers
    App->>Elem: create talking-head with image/audio
    Elem->>Elem: return thenable element
    App->>Resolve: await element resolution
    Resolve->>ImageRes: resolve image prop
    ImageRes-->>Resolve: file (pre-resolved or rendered)
    Resolve->>AudioRes: resolve audio prop (concurrent)
    AudioRes-->>Resolve: file (pre-resolved or rendered)
    Resolve->>VideoRender: create lipsync video element<br/>(image+audio → prompt)
    VideoRender-->>Resolve: video file
    Resolve-->>App: ResolvedElement with video file
    App->>Clip: render clip with talking-head
    Clip->>Render: renderTalkingHead(element, ctx)
    Render->>VideoRender: delegate to video renderer
    VideoRender-->>Render: file path
    Render-->>Clip: resolved file
    Clip-->>App: VideoLayer (cover resize, mixVolume: 1)
```
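The concurrent image/audio step in the diagram can be sketched with `Promise.all`. The names `resolveImageProp`, `resolveAudioProp`, and `renderVideo` come from the diagram; their bodies here are stubs standing in for the real renderers.

```typescript
// Stub: a pre-resolved element would return its file directly;
// a lazy Varg element would be rendered first.
async function resolveImageProp(image: unknown): Promise<string> {
  return "/tmp/face.png";
}

async function resolveAudioProp(audio: unknown): Promise<string> {
  return "/tmp/speech.wav";
}

// Stub for the lipsync video model call.
async function renderVideo(imageFile: string, audioFile: string): Promise<string> {
  return "/tmp/lipsync.mp4";
}

// Image and audio resolve concurrently, then feed the lipsync model.
async function resolveTalkingHead(image: unknown, audio: unknown): Promise<string> {
  const [imageFile, audioFile] = await Promise.all([
    resolveImageProp(image),
    resolveAudioProp(audio),
  ]);
  return renderVideo(imageFile, audioFile);
}
```

Resolving the two props in parallel means the wall-clock cost is max(image, audio) rather than their sum before the video model is invoked.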
Summary
- Wires up the `<TalkingHead>` component renderer that was previously defined but never connected to the rendering pipeline
- New props: `image` (accepts `VargElement<"image">` or `ResolvedElement<"image">`) and `audio` (accepts `VargElement<"speech">` or `ResolvedElement<"speech">`)
- New `model` prop (e.g., `sync-v2-pro`)

Changes
- `src/react/types.ts`: replaced `character`/`src`/`voice`/`children` props with `image`/`audio` props on `TalkingHeadProps`
- `src/react/elements.ts`: made `TalkingHead` thenable/awaitable via `makeThenable`
- `src/react/renderers/talking-head.ts`: new renderer
- `src/react/renderers/clip.ts`: added a `case "talking-head":` handler in the clip switch
- `src/react/resolve.ts`: added `resolveTalkingHeadElement()` for standalone `await TalkingHead()`
- `src/react/renderers/talking-head.test.ts`: tests for the renderer
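The clip integration can be sketched as a switch over element types. The element and layer shapes below are invented for illustration; the PR's real types live in `src/react/types.ts`.

```typescript
// Hypothetical element union: a plain video plus the new talking-head case.
type ClipElement =
  | { type: "video"; file: string }
  | { type: "talking-head"; file: string };

// Hypothetical layer shape, mirroring the diagram's
// "VideoLayer (cover resize, mixVolume: 1)".
type VideoLayer = {
  kind: "video";
  file: string;
  resize: "cover";
  mixVolume: number;
};

function renderClipLayer(el: ClipElement): VideoLayer {
  switch (el.type) {
    case "video":
      return { kind: "video", file: el.file, resize: "cover", mixVolume: 1 };
    case "talking-head":
      // New case: the resolved lipsync output is treated as an
      // ordinary video layer, so downstream compositing is unchanged.
      return { kind: "video", file: el.file, resize: "cover", mixVolume: 1 };
  }
}
```

Reusing the video-layer path is what makes the fix small: once the element resolves to a file, the compositor does not need to know it came from a lipsync model.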
Fixes render jobs that use `<TalkingHead>`: previously the component was silently skipped during rendering, producing empty/black video output.

Usage
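A hypothetical usage sketch based on the props listed above; the `Image`, `Speech`, and `TalkingHead` factories are stubbed here, since the real ones come from the library itself.

```typescript
// Stubbed Varg element shape and factories (assumptions for this sketch).
type Varg<T extends string> = { type: T; props: Record<string, unknown> };

const Image = (props: Record<string, unknown>): Varg<"image"> => ({ type: "image", props });
const Speech = (props: Record<string, unknown>): Varg<"speech"> => ({ type: "speech", props });

// Stubbed TalkingHead factory with the prop names from the PR summary:
// image/audio Varg elements, optional resolution and model.
function TalkingHead(props: {
  image: Varg<"image">;
  audio: Varg<"speech">;
  resolution?: "480p" | "720p" | "1080p";
  model?: string;
}) {
  return { type: "talking-head" as const, props };
}

// A talking head driven by a generated portrait and a speech clip.
const head = TalkingHead({
  image: Image({ prompt: "studio portrait of a presenter" }),
  audio: Speech({ text: "Welcome to the demo." }),
  resolution: "720p",
  model: "sync-v2-pro",
});
```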