Add ComfyUI custom workflow and UNet/Flux generation modes #314
Pento95 wants to merge 8 commits into AventurasTeam:master
Conversation
Code Review
This pull request introduces support for custom ComfyUI workflows and a new UNet-based text-to-image mode. Key additions include logic for validating and auto-detecting field paths in user-uploaded API-format workflows, a new UNet workflow definition, and UI components for managing these advanced settings. The review feedback focuses on improving efficiency by caching model lists and auto-detected CLIP/VAE names to prevent redundant network requests, as well as simplifying the Svelte template by merging adjacent conditional blocks with identical logic.
@coderabbitai review

✅ Actions performed: review triggered.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
📝 Walkthrough

Adds two ComfyUI modes — Custom Workflow and UNet Txt2Img — with UI for uploading/validating workflows and selecting UNet/CLIP/VAE models; provider/service changes to validate, introspect, cache, list, and execute workflows (including a new UNet workflow JSON); adds ComfyCustomWorkflow types and baseUrl/timeout-aware model caching and listing.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant UI as Settings UI
    participant Service as comfy.ts
    participant ComfyUI as ComfyUI API
    User->>UI: Upload/paste workflow JSON
    UI->>Service: validateApiWorkflow() & detectWorkflowFields()
    Service-->>UI: validation result + detected positive/seed/output (negative optional)
    alt multiple positive nodes
        UI->>User: prompt for positive node selection
        User->>UI: select node
        UI->>Service: confirm selection
    end
    UI->>Service: save providerOptions.customWorkflow
    User->>UI: request generation (custom workflow)
    UI->>Service: generate() with providerOptions including workflow
    Service->>ComfyUI: POST execute workflow
    ComfyUI-->>Service: image blob
    Service->>UI: return base64 image result
```
```mermaid
sequenceDiagram
    participant User
    participant UI as Settings UI
    participant Service as comfy.ts
    participant ComfyUI as ComfyUI API
    User->>UI: switch to UnetTxt2Img / request model lists
    UI->>Service: fetchModelList(baseUrl, type) requests
    Service->>ComfyUI: GET /models/diffusion_models, /models/text_encoders, /models/vae
    ComfyUI-->>Service: lists (cached per baseUrl)
    Service-->>UI: populate UNet / CLIP / VAE dropdowns
    User->>UI: start generation with chosen UNet/CLIP/VAE
    UI->>Service: generate() with unet-txt2img params
    Service->>Service: load unet workflow JSON, inject model names/params
    Service->>ComfyUI: POST execute workflow
    ComfyUI-->>Service: returned image blob
    Service->>UI: return base64 image result
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/services/ai/image/providers/comfy.ts`:
- Around line 278-285: The mapping over nodeErrors can throw if err.errors is
undefined or not an array; update the block that builds `details` (the
.map(([node, err]) => { ... }) closure) to guard `err.errors` with Array.isArray
before calling .map, and fall back to a safe string (e.g., use err.message or
JSON.stringify(err) or 'no errors') when it's absent; ensure the constructed
`nodeMsgs` uses the guarded value so `message = \`ComfyUI Validation Error:
${details}\`` never throws.
- Around line 39-55: The cached Promise in unetModelNames can remain rejected
and block retries; when inserting the Promise (from fetchModelList or the
wrapper that produces a Promise<Set<string>>), attach a .catch handler that
removes the map entry for that baseUrl (unetModelNames.delete(baseUrl)) and
rethrows or returns an empty Set so callers get a failure but subsequent calls
can retry; update the code paths that populate unetModelNames (and any wrapper
function used by listModels()) to perform this cleanup on rejection and return a
resolved Set<string> or rethrow as appropriate.
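The first prompt's guard can be sketched as follows; the error shape, `nodeErrors`, and the message format are assumptions inferred from the comment, not the actual comfy.ts code:

```typescript
// Hypothetical error shape; the real ComfyUI node_errors payload may differ.
type NodeError = { message?: string; errors?: unknown }

// Build the validation message without assuming err.errors is an array.
function buildValidationMessage(nodeErrors: Record<string, NodeError>): string {
  const details = Object.entries(nodeErrors)
    .map(([node, err]) => {
      // Guard: err.errors may be undefined or not an array.
      const nodeMsgs = Array.isArray(err.errors)
        ? err.errors.map((e) => JSON.stringify(e)).join('; ')
        : (err.message ?? JSON.stringify(err) ?? 'no errors')
      return `${node}: ${nodeMsgs}`
    })
    .join('\n')
  return `ComfyUI Validation Error: ${details}`
}
```

The `Array.isArray` check is the key point: it keeps the message builder total over malformed payloads instead of throwing inside error handling.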
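A retry-friendly version of the cached-promise fix in the second prompt might look like this; `unetModelNames` and `fetchModelList` are taken from the comment, while the wrapper itself is a hypothetical sketch:

```typescript
// Cache of in-flight/completed model fetches, keyed by ComfyUI base URL.
const unetModelNames = new Map<string, Promise<Set<string>>>()

async function getUnetModels(
  baseUrl: string,
  fetchModelList: (url: string) => Promise<Set<string>>,
): Promise<Set<string>> {
  let cached = unetModelNames.get(baseUrl)
  if (!cached) {
    cached = fetchModelList(baseUrl)
    // On rejection, evict the entry so the next call can retry instead of
    // being stuck behind a permanently rejected promise.
    cached.catch(() => unetModelNames.delete(baseUrl))
    unetModelNames.set(baseUrl, cached)
  }
  return cached
}
```

The first caller still sees the failure (the original promise is returned), but because the `.catch` eviction runs before any later call reads the map, subsequent calls get a fresh fetch.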
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: e3f615ae-31fc-476c-8db3-39d0de018c73
📒 Files selected for processing (4)

- src/lib/components/settings/tabs/images.svelte
- src/lib/services/ai/image/providers/comfy.ts
- src/lib/services/ai/image/providers/comfyWorkflows/unet-txt2img-workflow.json
- src/lib/services/ai/image/providers/types.ts
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/lib/services/ai/image/providers/registry.ts (1)
110-115: ⚠️ Potential issue | 🟠 Major
Passing timeoutMs here still doesn't enforce a timeout.
createComfyProvider() currently initializes its ComfyUI API wrapper without consuming config.timeoutMs, so these new config fields remain a no-op and the underlying requests are still effectively unbounded. Also applies to: 148-152, 199-203, 220-224.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/services/ai/image/providers/registry.ts` around lines 110 - 115, The ImageProviderConfig.timeoutMs field is being set but never used—update createComfyProvider to forward config.timeoutMs into the ComfyUI API client (or the underlying HTTP layer) so requests are actually bounded; modify the ComfyUI client initialization inside createComfyProvider to accept a timeoutMs parameter (or to construct an AbortController/axios instance with that timeout) and ensure fetch/request calls use it; apply the same change for the other providers where config.timeoutMs is built (the instances noted around the other occurrences) so timeoutMs is honored across all ComfyUI provider initializations.
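One way to actually honor `timeoutMs` is the AbortController route the prompt mentions, here via `AbortSignal.timeout()` (Node 17.3+/modern browsers). The config shape is taken from the comment; the helper name is invented for this sketch:

```typescript
interface ImageProviderConfig {
  apiKey: string
  baseUrl: string
  timeoutMs?: number
}

// Build fetch options that enforce config.timeoutMs. Without a signal the
// request stays unbounded, which is the issue the review flags.
function buildRequestInit(config: ImageProviderConfig): { signal?: AbortSignal } {
  return config.timeoutMs ? { signal: AbortSignal.timeout(config.timeoutMs) } : {}
}
```

A provider would then spread this into every `fetch(`${config.baseUrl}/prompt`, { ...buildRequestInit(config) })` call, so one change covers all ComfyUI endpoints.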
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/components/settings/tabs/images.svelte`:
- Around line 75-79: The UI exposes new ComfyUI modes (ComfyMode.CustomWorkflow
and ComfyMode.UnetTxt2Img) but the Comfy provider mapping and generate()
implementation in src/lib/services/ai/image/providers/comfy.ts still only handle
BasicTxt2Img and LoraTxt2Img; update the provider to map the new ComfyMode
entries and extend the generate() path to read and pass through customWorkflow,
clipName, vaeName, clipType, and weightDtype from the incoming request/options
into the Comfy API payload or processing logic (ensure any mode-specific fields
are validated and used when mode === ComfyMode.CustomWorkflow or mode ===
ComfyMode.UnetTxt2Img and fall back safely for older modes).
- Around line 461-465: The autosave and manual-save paths are currently
persisting profiles even when profileCustomWorkflow is set but no workflow has
been uploaded/confirmed; update the save logic that consumes
profileCustomWorkflow (the same code paths that read profileClipName,
profileVaeName, profileClipType, profileWeightDtype, etc.) to first verify that
profileCustomWorkflow is present and represents a confirmed/complete workflow
(e.g., a truthy uploaded/confirmed property or required content) and skip or
reject saving when that check fails; apply this guard in both the autosave
handler and the manual save function so no profile is persisted with mode:
CustomWorkflow until the workflow is actually confirmed.
- Around line 719-754: The in-flight loadUnetModels call can overwrite model
lists after the ComfyUI base URL changes; modify loadUnetModels to capture the
base URL at start (e.g., const callUrl = profileBaseUrl?.trim() ||
'http://localhost:8188') and before assigning availableClips/availableVaes
verify the current profileBaseUrl trimmed still equals callUrl and
profileProviderType is still 'comfyui'; if not, skip the assignment. Also update
the URL-change effect that runs when profileProviderType === 'comfyui' to
explicitly clear availableClips and availableVaes (set them to empty arrays)
when prevComfyBaseUrl differs from currentUrl so the empty-check guard triggers
a fresh reload.
In `@src/lib/services/ai/image/providers/registry.ts`:
- Around line 148-152: The cache key for model lists currently only uses
provider type and API key but provider.listModels() also depends on baseUrl;
update the cache key construction (where cacheKey is created for
provider.listModels calls in registry.ts) to include the provider config's
baseUrl (e.g. append config.baseUrl or a normalized empty-string fallback) so
different baseUrl values produce distinct cache entries while leaving TTL logic
unchanged.
---
Outside diff comments:
In `@src/lib/services/ai/image/providers/registry.ts`:
- Around line 110-115: The ImageProviderConfig.timeoutMs field is being set but
never used—update createComfyProvider to forward config.timeoutMs into the
ComfyUI API client (or the underlying HTTP layer) so requests are actually
bounded; modify the ComfyUI client initialization inside createComfyProvider to
accept a timeoutMs parameter (or to construct an AbortController/axios instance
with that timeout) and ensure fetch/request calls use it; apply the same change
for the other providers where config.timeoutMs is built (the instances noted
around the other occurrences) so timeoutMs is honored across all ComfyUI
provider initializations.
ℹ️ Review info: configuration as above; Run ID: 2a8d8edc-1014-44ce-90db-800114905c4d
📒 Files selected for processing (5)

- src/app.html
- src/lib/components/settings/tabs/images.svelte
- src/lib/services/ai/image/providers/comfy.ts
- src/lib/services/ai/image/providers/registry.ts
- src/lib/services/ai/image/providers/types.ts
✅ Files skipped from review due to trivial changes (1)
- src/app.html
🚧 Files skipped from review as they are similar to previous changes (2)
- src/lib/services/ai/image/providers/types.ts
- src/lib/services/ai/image/providers/comfy.ts
Sent comments on Discord, otherwise looks good to me.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/lib/components/settings/tabs/images.svelte (1)
582-592: ⚠️ Potential issue | 🟡 Minor
CustomWorkflow profiles can be saved without an uploaded workflow.
When a user creates a new profile (which defaults to CustomWorkflow mode) and saves it without uploading a workflow, the profile is saved with mode: 'custom-workflow' but no customWorkflow data. At generation time, comfy.ts will fall through to the hardcoded workflow paths (basic/lora/unet) since rawCustomWorkflow is undefined. Consider either:
- Disabling the save button until a workflow is confirmed, or
- Adding a warning message in the UI when saving without a workflow
💡 Suggested validation in handleSaveProfile

```diff
 async function handleSaveProfile() {
   if (!profileName.trim()) return
+
+  // Warn if CustomWorkflow mode but no workflow uploaded
+  if (profileMode === ComfyMode.CustomWorkflow && !profileCustomWorkflow) {
+    // Could show a toast/alert or block save
+    console.warn('Saving CustomWorkflow profile without a workflow - will use auto-detection at generation time')
+  }
   await settings.addImageProfile({
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/components/settings/tabs/images.svelte` around lines 582 - 592, handleSaveProfile currently allows saving a profile in CustomWorkflow mode without any uploaded workflow; update handleSaveProfile to validate that when profileProviderType (or mode) indicates CustomWorkflow (e.g., 'custom-workflow' or 'CustomWorkflow') there is an uploaded/confirmed workflow (check rawCustomWorkflow or the flag that indicates the uploaded workflow is present) and if not, prevent saving and show a clear UI warning/toast; additionally, add a derived boolean (e.g., canSaveProfile) used to disable the save button when profileName is empty or when CustomWorkflow has no workflow so users cannot click Save until a workflow is provided — adjust both handleSaveProfile and the Save button binding accordingly and keep settings.addImageProfile and buildProviderOptions usage unchanged when validation passes.
🧹 Nitpick comments (4)
src/lib/services/ai/image/providers/registry.ts (1)
183-187: Consider handling apiKey being undefined consistently.
When apiKey is undefined, the config will have apiKey: undefined rather than an empty string. Line 184 uses apiKey ?? '' but then line 189 passes the raw apiKey to provider.listModels(apiKey). This inconsistency could cause issues if providers expect a string.

♻️ Suggested fix for consistency

```diff
 try {
   const config: ImageProviderConfig = {
-    apiKey: apiKey ?? '',
+    apiKey: apiKey || '',
     baseUrl,
     timeoutMs: settings.apiSettings.llmTimeoutMs,
   }
   const provider = PROVIDER_FACTORIES[providerType](config)
-  const models = await provider.listModels(apiKey)
+  const models = await provider.listModels(apiKey || '')
   modelCaches.set(cacheKey, { models, timestamp: Date.now() })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/services/ai/image/providers/registry.ts` around lines 183 - 187, The code constructs config with apiKey: apiKey ?? '' but then calls provider.listModels(apiKey), causing inconsistent handling of undefined; normalize the API key into a local string (e.g., const normalizedApiKey = apiKey ?? '') or reuse config.apiKey and pass that normalized string to provider.listModels and any other provider calls so providers always receive a string; update references to use the normalized value (or config.apiKey) instead of the raw apiKey variable.

src/lib/services/ai/image/providers/comfy.ts (2)
432-444: Auto-detection caches persist even on generation failure.
If a user's first generation attempt fails (e.g., network issue after fetching clips/vaes but before completing), the auto-detected clipName/vaeName are cached. On retry, these cached values will be used even if the user wanted different behavior. This is minor since it's the same fallback logic, but worth noting.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/services/ai/image/providers/comfy.ts` around lines 432 - 444, The auto-detected clipName/vaeName are being cached immediately after fetchModels (autoClipCache.set/autoVaeCache.set) which can persist incorrect fallbacks if the subsequent generation fails; modify the flow so that you only write to autoClipCache and autoVaeCache after a successful generation (i.e., move the cache.set calls out of the detection block and into the success path of the generation function) or, alternatively, wrap the generation in try/catch and remove any cached entries (autoClipCache.delete(baseUrl)/autoVaeCache.delete(baseUrl)) on error; locate symbols clipName, vaeName, autoClipCache, autoVaeCache and fetchModels in comfy.ts and implement one of these approaches.
178-185: Only the first KSampler node is used for detection.
When a workflow contains multiple KSampler or KSamplerAdvanced nodes, only the first one encountered is used for graph traversal and seed path detection. This may not match user intent in complex workflows with multiple sampling stages. Consider documenting this limitation in the UI help text, or potentially using the KSampler connected to the SaveImage node.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/services/ai/image/providers/comfy.ts` around lines 178 - 185, Current logic in the KSampler/KSamplerAdvanced handling only captures the first encountered sampler (variables kSamplerNodeId, kSamplerNode, seedPath), which ignores other samplers in multi-stage workflows; update the selection to prefer the KSampler node that feeds into the SaveImage node (or otherwise is downstream of the image save) by inspecting node connection edges when multiple samplers exist: when encountering a KSampler/KSamplerAdvanced, if kSamplerNodeId already set, check whether this nodeId is connected to a SaveImage node (or is upstream of it) and if so replace kSamplerNodeId/kSamplerNode/seedPath with this connected sampler; alternatively, if implementing a UI change instead, add clear help text documenting the current limitation (mentioning KSampler/KSamplerAdvanced and seedPath) so users know only the first sampler is used.

src/lib/components/settings/tabs/images.svelte (1)
1625-1632: disabled attribute on button element may not work as expected.
The disabled={isLoadingUnetModels} attribute is on a <button> element with type="button". While this works, the button still receives hover styles and the onclick handler will still fire. Consider adding a guard in the handler or using a proper <Button> component.

♻️ Suggested fix

```diff
 <button
   type="button"
   class="text-muted-foreground hover:text-foreground text-xs underline"
   disabled={isLoadingUnetModels}
-  onclick={loadUnetModels}
+  onclick={() => !isLoadingUnetModels && loadUnetModels()}
 >
```

Or use the Button component:

```diff
-<button
-  type="button"
-  class="text-muted-foreground hover:text-foreground text-xs underline"
-  disabled={isLoadingUnetModels}
-  onclick={loadUnetModels}
->
-  {isLoadingUnetModels ? 'Loading…' : '↻ Refresh'}
-</button>
+<Button
+  variant="link"
+  size="sm"
+  disabled={isLoadingUnetModels}
+  onclick={loadUnetModels}
+>
+  {isLoadingUnetModels ? 'Loading…' : '↻ Refresh'}
+</Button>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/components/settings/tabs/images.svelte` around lines 1625 - 1632, The button currently uses disabled={isLoadingUnetModels} but still allows the onclick to fire; update the code so the click handler is guarded: in the loadUnetModels function add an early return if isLoadingUnetModels is true (or check the same flag at the top of the handler) and ensure the Svelte event binding uses on:click={loadUnetModels} instead of onclick, or replace the element with the shared Button component that respects disabled state; reference loadUnetModels and isLoadingUnetModels when making this change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/lib/components/settings/tabs/images.svelte`:
- Around line 582-592: handleSaveProfile currently allows saving a profile in
CustomWorkflow mode without any uploaded workflow; update handleSaveProfile to
validate that when profileProviderType (or mode) indicates CustomWorkflow (e.g.,
'custom-workflow' or 'CustomWorkflow') there is an uploaded/confirmed workflow
(check rawCustomWorkflow or the flag that indicates the uploaded workflow is
present) and if not, prevent saving and show a clear UI warning/toast;
additionally, add a derived boolean (e.g., canSaveProfile) used to disable the
save button when profileName is empty or when CustomWorkflow has no workflow so
users cannot click Save until a workflow is provided — adjust both
handleSaveProfile and the Save button binding accordingly and keep
settings.addImageProfile and buildProviderOptions usage unchanged when
validation passes.
---
Nitpick comments:
In `@src/lib/components/settings/tabs/images.svelte`:
- Around line 1625-1632: The button currently uses
disabled={isLoadingUnetModels} but still allows the onclick to fire; update the
code so the click handler is guarded: in the loadUnetModels function add an
early return if isLoadingUnetModels is true (or check the same flag at the top
of the handler) and ensure the Svelte event binding uses
on:click={loadUnetModels} instead of onclick, or replace the element with the
shared Button component that respects disabled state; reference loadUnetModels
and isLoadingUnetModels when making this change.
In `@src/lib/services/ai/image/providers/comfy.ts`:
- Around line 432-444: The auto-detected clipName/vaeName are being cached
immediately after fetchModels (autoClipCache.set/autoVaeCache.set) which can
persist incorrect fallbacks if the subsequent generation fails; modify the flow
so that you only write to autoClipCache and autoVaeCache after a successful
generation (i.e., move the cache.set calls out of the detection block and into
the success path of the generation function) or, alternatively, wrap the
generation in try/catch and remove any cached entries
(autoClipCache.delete(baseUrl)/autoVaeCache.delete(baseUrl)) on error; locate
symbols clipName, vaeName, autoClipCache, autoVaeCache and fetchModels in
comfy.ts and implement one of these approaches.
- Around line 178-185: Current logic in the KSampler/KSamplerAdvanced handling
only captures the first encountered sampler (variables kSamplerNodeId,
kSamplerNode, seedPath), which ignores other samplers in multi-stage workflows;
update the selection to prefer the KSampler node that feeds into the SaveImage
node (or otherwise is downstream of the image save) by inspecting node
connection edges when multiple samplers exist: when encountering a
KSampler/KSamplerAdvanced, if kSamplerNodeId already set, check whether this
nodeId is connected to a SaveImage node (or is upstream of it) and if so replace
kSamplerNodeId/kSamplerNode/seedPath with this connected sampler; alternatively,
if implementing a UI change instead, add clear help text documenting the current
limitation (mentioning KSampler/KSamplerAdvanced and seedPath) so users know
only the first sampler is used.
In `@src/lib/services/ai/image/providers/registry.ts`:
- Around line 183-187: The code constructs config with apiKey: apiKey ?? '' but
then calls provider.listModels(apiKey), causing inconsistent handling of
undefined; normalize the API key into a local string (e.g., const
normalizedApiKey = apiKey ?? '') or reuse config.apiKey and pass that normalized
string to provider.listModels and any other provider calls so providers always
receive a string; update references to use the normalized value (or
config.apiKey) instead of the raw apiKey variable.
ℹ️ Review info: configuration as above; Run ID: 1caf9fa5-19a3-4e2a-bf93-0306e9068af8
📒 Files selected for processing (3)

- src/lib/components/settings/tabs/images.svelte
- src/lib/services/ai/image/providers/comfy.ts
- src/lib/services/ai/image/providers/registry.ts
Actionable comments posted: 2
🧹 Nitpick comments (2)
src/lib/components/settings/ImageModelSelect.svelte (1)
119-120: Remove unreachable loading props in the empty-state branch.
At lines 119-120, isLoading is always false in this branch (loading is already handled by the #if isLoading block), so the disabled and spin condition are dead code.

Cleanup diff

```diff
-  <Button variant="ghost" size="icon" onclick={onRefresh} disabled={isLoading}>
-    <RefreshCw class="h-4 w-4 {isLoading ? 'animate-spin' : ''}" />
+  <Button variant="ghost" size="icon" onclick={onRefresh}>
+    <RefreshCw class="h-4 w-4" />
   </Button>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/components/settings/ImageModelSelect.svelte` around lines 119 - 120, In ImageModelSelect.svelte, inside the empty-state branch remove the unreachable loading-related props on the refresh control: delete the Button prop disabled={isLoading} and the conditional spin class {isLoading ? 'animate-spin' : ''} on the RefreshCw icon (these are dead because isLoading is already handled by the separate `#if` isLoading branch), leaving a plain clickable refresh Button and static RefreshCw class.

src/lib/services/ai/image/providers/registry.ts (1)
156-157: Consider short-lived negative caching for empty model lists.
At lines 156 and 190, empty results are never cached. For unavailable/empty providers this can repeatedly hit the backend and degrade UX. A short TTL for empty/error responses would reduce retry storms. Also applies to: 190-191.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/services/ai/image/providers/registry.ts` around lines 156 - 157, The current registry caching logic (modelCaches.set with cacheKey) skips caching when models is empty; change it to also cache empty or error results but with a short TTL—e.g., when models.length === 0 set modelCaches.set(cacheKey, { models, timestamp: Date.now(), ttl: SHORT_EMPTY_TTL_MS }) so consumers can check ttl/age and treat empty entries as short-lived negatives; apply the same change to the other spot referenced near lines 190-191 to ensure empty/error responses are stored with a short TTL rather than never cached.
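The short-TTL negative caching suggested above might be sketched like this; the entry shape and both TTL constants are assumptions, not registry.ts values:

```typescript
// Hypothetical TTLs: normal entries live long, empty ("negative") entries
// expire quickly so a recovering backend is retried soon, not hammered.
const CACHE_TTL_MS = 5 * 60_000
const SHORT_EMPTY_TTL_MS = 10_000

type CacheEntry = { models: string[]; timestamp: number; ttl: number }
const modelCaches = new Map<string, CacheEntry>()

function putModels(cacheKey: string, models: string[]): void {
  const ttl = models.length === 0 ? SHORT_EMPTY_TTL_MS : CACHE_TTL_MS
  modelCaches.set(cacheKey, { models, timestamp: Date.now(), ttl })
}

function getModels(cacheKey: string, now = Date.now()): string[] | undefined {
  const entry = modelCaches.get(cacheKey)
  // Treat expired entries as a miss so callers re-fetch.
  if (!entry || now - entry.timestamp > entry.ttl) return undefined
  return entry.models
}
```

Storing the TTL per entry keeps the lookup logic uniform: callers never need to know whether an entry was a negative result.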
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/components/settings/ImageModelSelect.svelte`:
- Around line 119-121: The refresh Button in ImageModelSelect.svelte is
icon-only and lacks an accessible label; update the Button (the element with
onclick={onRefresh} and disabled={isLoading} that renders <RefreshCw .../>) to
include an accessible name—e.g., add aria-label="Refresh models" (or aria-label
that matches the UI) or include a visually hidden text child—so screen readers
announce its purpose while preserving the existing isLoading spin behavior.
In `@src/lib/services/ai/image/providers/registry.ts`:
- Around line 60-63: The cache key generation in getCacheKey currently truncates
baseUrl with .slice(-24), which can cause collisions across different endpoints;
update getCacheKey (parameters providerType, apiKey, baseUrl) to use the full
normalized baseUrl instead of slicing — e.g., trim and remove trailing slashes
then use the entire resulting string in the returned
`${providerType}:${keyHash}:${urlKey}` (or, if length is a concern, compute and
use a stable hash of the full normalized baseUrl) so each endpoint yields a
unique cache key.
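A normalized, collision-free getCacheKey along the lines suggested above could look like this; the apiKey component is kept plain here for brevity, whereas the real code apparently hashes it:

```typescript
// Hypothetical rewrite: use the full normalized baseUrl instead of a
// truncated .slice(-24) suffix, so distinct endpoints can never collide.
function getCacheKey(providerType: string, apiKey: string, baseUrl: string): string {
  // Normalize: trim whitespace and strip trailing slashes so
  // "http://host:8188/" and "http://host:8188" share one cache entry.
  const urlKey = baseUrl.trim().replace(/\/+$/, '')
  return `${providerType}:${apiKey}:${urlKey}`
}
```

If key length matters, a stable hash of the normalized URL works equally well; the essential property is that the whole URL participates in the key.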
---
Nitpick comments:
In `@src/lib/components/settings/ImageModelSelect.svelte`:
- Around line 119-120: In ImageModelSelect.svelte, inside the empty-state branch
remove the unreachable loading-related props on the refresh control: delete the
Button prop disabled={isLoading} and the conditional spin class {isLoading ?
'animate-spin' : ''} on the RefreshCw icon (these are dead because isLoading is
already handled by the separate `#if` isLoading branch), leaving a plain clickable
refresh Button and static RefreshCw class.
In `@src/lib/services/ai/image/providers/registry.ts`:
- Around line 156-157: The current registry caching logic (modelCaches.set with
cacheKey) skips caching when models is empty; change it to also cache empty or
error results but with a short TTL—e.g., when models.length === 0 set
modelCaches.set(cacheKey, { models, timestamp: Date.now(), ttl:
SHORT_EMPTY_TTL_MS }) so consumers can check ttl/age and treat empty entries as
short-lived negatives; apply the same change to the other spot referenced near
lines 190-191 to ensure empty/error responses are stored with a short TTL rather
than never cached.
ℹ️ Review info: configuration as above; Run ID: 04f881e2-bfa1-4def-b80f-32a5c41e51dc
📒 Files selected for processing (2)

- src/lib/components/settings/ImageModelSelect.svelte
- src/lib/services/ai/image/providers/registry.ts
Actionable comments posted: 3
♻️ Duplicate comments (1)
src/lib/components/settings/tabs/images.svelte (1)
448-458: ⚠️ Potential issue | 🟠 Major
Block autosave while CustomWorkflow is incomplete.
handleSaveProfile() now respects canSaveProfile, but autoSaveProfile() still persists existing profiles when needsWorkflow is true. That means switching an existing profile to CustomWorkflow and then waiting for debounce or closing the collapsible can still save an unusable { mode: custom-workflow } payload with no confirmed workflow.

Suggested guard

```diff
 async function autoSaveProfile() {
-  if (isNewProfile || !editingProfileId || !profileName.trim()) return
+  if (isNewProfile || !editingProfileId || !profileName.trim() || needsWorkflow) return
   await settings.updateImageProfile(editingProfileId, {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/components/settings/tabs/images.svelte` around lines 448 - 458, autoSaveProfile currently saves existing profiles without checking whether a CustomWorkflow is fully configured; add the same guard used in handleSaveProfile by blocking persistence when needsWorkflow is true (or by checking canSaveProfile) before calling settings.updateImageProfile in autoSaveProfile (use the identifiers autoSaveProfile, canSaveProfile, needsWorkflow, editingProfileId, and settings.updateImageProfile to find the code), so switching an existing profile to CustomWorkflow won't auto-save an incomplete workflow payload.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/components/settings/tabs/images.svelte`:
- Around line 342-355: The finally block clears isLoadingProfileModels
unconditionally which lets stale requests hide the spinner; capture a request
identity before calling listImageModelsByProvider (e.g., const reqKey = {
providerType, baseUrl } or incrementing requestId), and in both the try/catch
and finally only mutate isLoadingProfileModels, profileModels, and
profileModelsError if the current request identity still matches (compare
profileProviderType/profileBaseUrl or the requestId); apply the same pattern to
the UNet loader (isLoadingUnetModels) to ensure only the latest request can
clear its loading state or write results/errors.
- Around line 766-777: When profileProviderType is not 'comfyui' the code only
clears prevComfyBaseUrl but leaves availableClips and availableVaes populated,
causing stale lists to prevent reload when returning; update the else branch
inside the $effect to also clear availableClips and availableVaes (and
optionally call clearComfyCacheForUrl(prevComfyBaseUrl) if prevComfyBaseUrl
exists) so that switching away from ComfyUI resets state for profileBaseUrl and
forces a fresh load when coming back; touch the variables prevComfyBaseUrl,
availableClips, availableVaes in the else branch in the component where $effect
uses profileProviderType/profileBaseUrl.
In `@src/lib/services/ai/image/providers/comfy.ts`:
- Around line 342-348: The custom workflow path currently runs whenever
providerOptions.customWorkflow exists, ignoring the selected mode; change
generate() to only parse and use rawCustomWorkflow when the resolved mode equals
CustomWorkflow (i.e., guard the block that reads rawCustomWorkflow and assigns
customWorkflow with an explicitMode === CustomWorkflow check), and remove/reuse
any later redeclaration of explicitMode in the subsequent mode-selection block
so that the same explicitMode value is used for choosing between custom workflow
vs Basic/LoRA/UNet flows (update references to rawCustomWorkflow,
customWorkflow, and explicitMode accordingly).
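One way to express the guard is a small helper that only parses the stored workflow JSON when the resolved mode is `CustomWorkflow`. This is a hypothetical sketch: the enum values and the return shape are illustrative assumptions, not the provider's actual API.

```typescript
enum ComfyMode {
  Basic = 'basic',
  Lora = 'lora',
  UNet = 'unet',
  CustomWorkflow = 'custom-workflow',
}

// Hypothetical helper: a leftover customWorkflow option is ignored unless the
// resolved mode actually selects the custom-workflow path.
function resolveWorkflowPath(
  explicitMode: ComfyMode,
  rawCustomWorkflow: string | undefined,
):
  | { path: 'custom'; workflow: Record<string, unknown> }
  | { path: 'builtin'; mode: ComfyMode } {
  if (explicitMode === ComfyMode.CustomWorkflow && rawCustomWorkflow) {
    return { path: 'custom', workflow: JSON.parse(rawCustomWorkflow) }
  }
  // Basic/LoRA/UNet flows reuse the same explicitMode value downstream.
  return { path: 'builtin', mode: explicitMode }
}
```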
---
Duplicate comments:
In `@src/lib/components/settings/tabs/images.svelte`:
- Around line 448-458: autoSaveProfile currently saves existing profiles without
checking whether a CustomWorkflow is fully configured; add the same guard used
in handleSaveProfile by blocking persistence when needsWorkflow is true (or by
checking canSaveProfile) before calling settings.updateImageProfile in
autoSaveProfile (use the identifiers autoSaveProfile, canSaveProfile,
needsWorkflow, editingProfileId, and settings.updateImageProfile to find the
code), so switching an existing profile to CustomWorkflow won't auto-save an
incomplete workflow payload.

ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: c3aaa455-d97c-4686-b752-3b503b39c108
📒 Files selected for processing (4)
- src/lib/components/settings/ImageModelSelect.svelte
- src/lib/components/settings/tabs/images.svelte
- src/lib/services/ai/image/providers/comfy.ts
- src/lib/services/ai/image/providers/registry.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/lib/services/ai/image/providers/registry.ts
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/lib/components/settings/tabs/images.svelte (1)
389-415: ⚠️ Potential issue | 🟡 Minor — Ignore stale test generations after the profile changes.
This tab clears the preview immediately, but the in-flight `generateImage()` call still writes `testImageResult`/`testError` back unconditionally. If the user switches `testProfileId` while a request is running, the debug UI can show the old result under the new profile.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/components/settings/tabs/images.svelte` around lines 389 - 415, handleTestGenerate currently lets an in-flight generateImage call overwrite testImageResult/testError even if testProfileId changed; capture the active profile id (e.g. const activeProfile = testProfileId or profile.id) before awaiting generateImage and after the await verify that testProfileId (or profile.id) still equals that captured id and isGeneratingTestImage is still true before assigning testImageResult or testError; alternatively use an AbortSignal/token passed to generateImage and abort it on profile change; ensure all writes to testImageResult/testError are gated by that identity check so stale results are ignored.
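The capture-and-compare fix for this finding can be sketched like so. The state variable names come from the review comment; `generateImage`'s signature and everything else are assumptions.

```typescript
// Assumed state mirroring the review comment's variable names.
let testProfileId = 'profile-a'
let testImageResult: string | null = null
let testError: string | null = null

async function handleTestGenerate(
  generateImage: (profileId: string) => Promise<string>,
): Promise<void> {
  const activeProfile = testProfileId // capture identity before awaiting
  try {
    const result = await generateImage(activeProfile)
    if (testProfileId !== activeProfile) return // user switched profiles mid-flight
    testImageResult = result
  } catch (err) {
    if (testProfileId !== activeProfile) return
    testError = String(err)
  }
}
```

An AbortSignal passed to `generateImage` and aborted on profile change would additionally cancel the network request instead of merely discarding its result.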
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/components/settings/tabs/images.svelte`:
- Around line 209-214: The current derived `needsWorkflow` does not consider
`pendingWorkflowData`, so when `processWorkflowJson()` sets
`pendingWorkflowData` but leaves `profileCustomWorkflow` truthy the UI still
thinks no workflow is needed and `canSaveProfile` stays true; update the
`needsWorkflow` derived (used to compute `canSaveProfile`) to treat a non-empty
`pendingWorkflowData` as requiring user confirmation by including a check like
`|| !!pendingWorkflowData`, or alternatively modify `processWorkflowJson()` to
clear `profileCustomWorkflow` when it populates `pendingWorkflowData` so
`needsWorkflow` becomes true; refer to the `needsWorkflow`, `canSaveProfile`,
`pendingWorkflowData`, and `processWorkflowJson` symbols when making the change.
In `@src/lib/services/ai/image/providers/comfy.ts`:
- Around line 47-63: fetchModelList currently collapses transport errors,
timeouts, and non-OK responses into an empty array, causing transient ComfyUI
outages to be cached as "no models"; change fetchModelList to surface failures
instead of returning [] for transport/timeouts/non-OK responses (only return []
for a genuine 404/no-models case if the API explicitly signals that), by
throwing an Error with context (include status and body or the AbortError) on
fetch failures or non-ok responses so callers in the UNet/text-encoder/VAE
detection paths can distinguish real empty lists from transport/system errors
and avoid caching bad results.
---
Outside diff comments:
In `@src/lib/components/settings/tabs/images.svelte`:
- Around line 389-415: handleTestGenerate currently lets an in-flight
generateImage call overwrite testImageResult/testError even if testProfileId
changed; capture the active profile id (e.g. const activeProfile = testProfileId
or profile.id) before awaiting generateImage and after the await verify that
testProfileId (or profile.id) still equals that captured id and
isGeneratingTestImage is still true before assigning testImageResult or
testError; alternatively use an AbortSignal/token passed to generateImage and
abort it on profile change; ensure all writes to testImageResult/testError are
gated by that identity check so stale results are ignored.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro Plus
Run ID: 8f2ea6fe-249a-44e5-a66a-931249a3365b
📒 Files selected for processing (4)
- src/lib/components/settings/ImageModelSelect.svelte
- src/lib/components/settings/tabs/images.svelte
- src/lib/services/ai/image/providers/comfy.ts
- src/lib/services/ai/image/providers/registry.ts
```ts
const needsWorkflow = $derived(
  profileProviderType === 'comfyui' &&
    profileMode === ComfyMode.CustomWorkflow &&
    !profileCustomWorkflow,
)
const canSaveProfile = $derived(!!profileName.trim() && !needsWorkflow)
```
Treat pendingWorkflowData as unsaved state.
When an existing custom workflow is replaced by an ambiguous upload, `processWorkflowJson()` fills `pendingWorkflowData` but leaves `profileCustomWorkflow` truthy. That keeps `needsWorkflow` false, so autosave/manual save can still persist the old workflow while the new one is waiting for confirmation.
Suggested fix

```diff
 const needsWorkflow = $derived(
   profileProviderType === 'comfyui' &&
   profileMode === ComfyMode.CustomWorkflow &&
-  !profileCustomWorkflow,
+  (!profileCustomWorkflow || pendingWorkflowData !== null),
 )
```
🤖 Prompt for AI Agents
)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/components/settings/tabs/images.svelte` around lines 209 - 214, The
current derived `needsWorkflow` does not consider `pendingWorkflowData`, so when
`processWorkflowJson()` sets `pendingWorkflowData` but leaves
`profileCustomWorkflow` truthy the UI still thinks no workflow is needed and
`canSaveProfile` stays true; update the `needsWorkflow` derived (used to compute
`canSaveProfile`) to treat a non-empty `pendingWorkflowData` as requiring user
confirmation by including a check like `|| !!pendingWorkflowData`, or
alternatively modify `processWorkflowJson()` to clear `profileCustomWorkflow`
when it populates `pendingWorkflowData` so `needsWorkflow` becomes true; refer
to the `needsWorkflow`, `canSaveProfile`, `pendingWorkflowData`, and
`processWorkflowJson` symbols when making the change.
```ts
export async function fetchModelList(
  baseUrl: string,
  type: string,
  timeoutMs?: number,
): Promise<string[]> {
  const controller = new AbortController()
  const timerId = timeoutMs ? setTimeout(() => controller.abort(), timeoutMs) : null
  try {
    const resp = await fetch(`${baseUrl}/models/${type}`, { signal: controller.signal })
    if (!resp.ok) return []
    return (await resp.json()) as string[]
  } catch {
    return []
  } finally {
    if (timerId !== null) clearTimeout(timerId)
  }
}
```
Don't collapse transport failures into an empty model list.
`fetchModelList()` returns `[]` for timeouts and non-OK responses, and downstream code caches that result in the UNet/text-encoder/VAE detection paths. A transient ComfyUI outage then gets treated as "this endpoint has no models" until the cache is explicitly cleared, which can route UNet models through the wrong workflow.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/services/ai/image/providers/comfy.ts` around lines 47 - 63,
fetchModelList currently collapses transport errors, timeouts, and non-OK
responses into an empty array, causing transient ComfyUI outages to be cached as
"no models"; change fetchModelList to surface failures instead of returning []
for transport/timeouts/non-OK responses (only return [] for a genuine
404/no-models case if the API explicitly signals that), by throwing an Error
with context (include status and body or the AbortError) on fetch failures or
non-ok responses so callers in the UNet/text-encoder/VAE detection paths can
distinguish real empty lists from transport/system errors and avoid caching bad
results.
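One way to apply this fix is sketched below, under the assumption that only a 404 should map to an empty list and every other failure should surface to the caller (so detection paths never cache a transient outage as "no models"):

```typescript
export async function fetchModelList(
  baseUrl: string,
  type: string,
  timeoutMs?: number,
): Promise<string[]> {
  const controller = new AbortController()
  const timerId = timeoutMs ? setTimeout(() => controller.abort(), timeoutMs) : null
  try {
    const resp = await fetch(`${baseUrl}/models/${type}`, { signal: controller.signal })
    // Only an explicit 404 is treated as "this endpoint has no such model folder".
    if (resp.status === 404) return []
    if (!resp.ok) {
      throw new Error(`ComfyUI /models/${type} returned ${resp.status}: ${await resp.text()}`)
    }
    return (await resp.json()) as string[]
  } catch (err) {
    // Rethrow timeouts/transport errors so callers don't cache a bogus [].
    throw err instanceof Error ? err : new Error(String(err))
  } finally {
    if (timerId !== null) clearTimeout(timerId)
  }
}
```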
```ts
let kSamplerNodeId: string | null = null
let kSamplerNode: WorkflowNode | null = null
let seedPath: string | null = null
let outputNodeId: string | null = null
let saveImageCount = 0

for (const [nodeId, node] of Object.entries(workflow) as [string, WorkflowNode][]) {
  switch (node.class_type) {
    case 'CLIPTextEncode': {
      const title = node._meta?.title ?? ''
      clipNodes.set(nodeId, {
        nodeId,
        title,
        path: `${nodeId}.inputs.text`,
        textPreview: textPreview(node.inputs.text),
      })
      break
    }
    case 'KSampler':
    case 'KSamplerAdvanced': {
      if (!kSamplerNodeId) {
        kSamplerNodeId = nodeId
        kSamplerNode = node
        seedPath = `${nodeId}.inputs.seed`
      }
```
Walk back from SaveImage instead of picking the first sampler.
`detectWorkflowFields()` currently stores the first `KSampler` / `KSamplerAdvanced` it sees and uses that node's seed/prompt links. In multi-stage workflows, object iteration order is unrelated to the branch that feeds the single `SaveImage`, so prompt/seed injection can target the wrong sampler.
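A walk-back could look roughly like this. It is a hypothetical sketch, not the PR's implementation; the link-tuple shape (`[sourceNodeId, outputSlot]` inside `inputs`) follows ComfyUI's API-format workflow JSON, and everything else is assumed.

```typescript
type WorkflowNode = {
  class_type: string
  inputs: Record<string, unknown>
}
type Workflow = Record<string, WorkflowNode>

const SAMPLER_TYPES = new Set(['KSampler', 'KSamplerAdvanced'])

// Breadth-first walk upstream from the SaveImage node, following input links
// until a sampler is reached; only the branch that feeds the output is visited.
function findFeedingSampler(workflow: Workflow, saveImageId: string): string | null {
  const queue = [saveImageId]
  const seen = new Set<string>()
  while (queue.length > 0) {
    const id = queue.shift()!
    if (seen.has(id)) continue
    seen.add(id)
    const node = workflow[id]
    if (!node) continue
    if (SAMPLER_TYPES.has(node.class_type)) return id
    // API-format links are [sourceNodeId, outputSlot] tuples.
    for (const value of Object.values(node.inputs)) {
      if (Array.isArray(value) && typeof value[0] === 'string') queue.push(value[0])
    }
  }
  return null
}
```

Because the walk starts at the output node, a sampler on an unrelated branch that happens to come first in object iteration order is never selected.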
Summary
- … heuristics. Ambiguous multi-node cases show a picker; writes to `profileCustomWorkflow` happen only on explicit confirm (via a `pendingWorkflowData` staging state that prevents premature auto-save).
- … generation time, cached per `baseUrl`.
Bug fixes