Hi, thanks for your work!
I am a bit confused about the exact workflow for attribute manipulation in the img2img generation process.
How can I insert a material/layout/content attribute from an image into the generation process, given that there is no style_dir parameter?
Do I need to finetune/train the model on a single image and then use prompts following the T2I prompts in the README (where * gets replaced by the learned embedding)? What is the general idea behind finetuning the model? To illustrate, here is the workflow I currently have in mind.
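Just to make my question concrete, this is a rough sketch of what I imagine the two steps look like, written against the generic diffusers textual-inversion API as a stand-in. The base model, the embedding file path, and the * token are all assumptions on my part; I expect the actual entry points in this repo differ:

```python
# Sketch of my current understanding -- written with the generic
# diffusers textual-inversion API as a placeholder, NOT this repo's API.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base model
    torch_dtype=torch.float16,
).to("cuda")

# Step 1 (?): load an embedding learned by finetuning on a single
# reference image, so that "*" resolves to that image's attribute
# (material/layout/content).
pipe.load_textual_inversion("path/to/learned_embeds.bin", token="*")

# Step 2 (?): use the placeholder in a T2I-style prompt from the README,
# while img2img conditions on the input image.
init_image = load_image("content.png")
result = pipe(
    prompt="a cat in the style of *",
    image=init_image,
    strength=0.6,  # how strongly to deviate from the input image
).images[0]
result.save("out.png")
```

Is this roughly the intended usage, or is there a dedicated script/parameter for injecting the attribute image directly?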