
Feature Request: Preprocessor prompt reflection in UI #462

@kfaist

Description


Is your feature request related to a problem?
When building a preprocessor plugin that generates prompts dynamically (e.g., voice-to-prompt via speech transcription), the injected prompts are never shown to the user in the UI prompt box. The preprocessor correctly injects prompts into the downstream pipeline via the output_dict → extra_params → next_processor.update_parameters() path, and those prompts do reach the generation pipeline — but the user has no way to see what prompt is actually driving generation.
This creates a blind spot in the creative workflow. For example, in a voice-controlled video generation setup:

User speaks "butterfly"
Preprocessor transcribes → extracts keyword → injects {"prompts": [{"text": "butterfly, cinematic", "weight": 1.0}]}
Downstream pipeline receives the prompt and generates accordingly
But the UI prompt box still shows the original typed prompt (e.g., "A panda walking in a park")

The user never sees confirmation that their voice was heard, what keyword was extracted, or what prompt is actually driving generation.
Describe the solution you'd like
Add a prompt_update (or similar) notification type that preprocessor pipelines can emit back to the frontend via the existing WebRTC data channel NotificationSender. The frontend would listen for this message type and update the prompt display accordingly.
Backend side:

Expose notification_callback to plugin pipelines (currently only FrameProcessor has access, and only uses it for stream_stopped)
Or allow preprocessors to include a notification key in their output dict that gets forwarded to the frontend

Example preprocessor output:
```python
return {
    "video": frames,
    "prompts": [{"text": "butterfly, cinematic", "weight": 1.0}],
    "notification": {
        "type": "prompt_update",
        "source": "audio-transcription",
        "prompts": [{"text": "butterfly, cinematic", "weight": 1.0}],
    },
}
```
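To make the proposed flow concrete, here is a minimal sketch of how the backend could pop the "notification" key from a preprocessor's output and forward it over the data channel, while letting the rest of the output continue downstream unchanged. The NotificationSender API shown here (a simple `send` wrapping the channel) and the `forward_preprocessor_output` helper are assumptions for illustration, not Scope's actual interface:

```python
# Hypothetical sketch only: the NotificationSender API and the
# forward_preprocessor_output helper are assumed names, not Scope's real code.
import json


class NotificationSender:
    """Stand-in for the existing data-channel sender (API assumed)."""

    def __init__(self, channel):
        self.channel = channel

    def send(self, message: dict) -> None:
        # Serialize and push the message over the WebRTC data channel.
        self.channel.send(json.dumps(message))


def forward_preprocessor_output(output: dict, sender: NotificationSender) -> dict:
    """Pop the optional "notification" key and emit it to the frontend.

    The remaining output (frames, prompts) continues downstream unchanged
    via the existing extra_params path.
    """
    notification = output.pop("notification", None)
    if notification is not None and notification.get("type") == "prompt_update":
        sender.send(notification)
    return output
```

The frontend side would then match on `"type": "prompt_update"` in its existing data channel onmessage handler, alongside the current stream_stopped case.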
Frontend side:

Handle prompt_update messages in the data channel onmessage handler (currently only stream_stopped is handled)
Display the active prompt in the UI — either by updating the prompt box or showing a secondary "Active Prompt" indicator so users can distinguish between their typed prompt and the preprocessor-injected one

Why this matters
Scope's plugin architecture is powerful — preprocessors can transform input and inject parameters into downstream pipelines. But without UI feedback, users can't tell if their preprocessor is working. This is especially important for:

Voice-controlled generation — users need to see what was transcribed
Automated prompt cycling — users need to see which prompt is active
Debugging — developers need visual confirmation that prompt injection is working
Live performance — performers need real-time feedback on what's driving the visuals

Context
This came up while building scope-audio-transcription, a voice-to-video preprocessor plugin that uses Whisper AI to transcribe speech into prompts for real-time video generation. The plugin works correctly at the pipeline level — prompts flow through pipeline_processor.py's extra_params forwarding into text_conditioning — but users have no visual indication in the UI.
Additional context

The NotificationSender infrastructure already exists and works (used for stream_stopped)
The data channel is bidirectional — backend → frontend messaging is already proven
This would benefit any preprocessor plugin that modifies prompts, not just audio transcription
A secondary "active prompt" display (rather than overwriting the user's typed prompt) might be the cleanest UX, since users may want to keep their base prompt visible while seeing preprocessor modifications alongside it
