I managed to shave a few seconds off inference times for SD2.1 at 512x512 (50 steps) and 768x768 (50 steps), using just a few additions:
import torch
from diffusers import StableDiffusionPipeline

# Let cuDNN auto-tune convolution algorithms for the fixed input sizes
torch.backends.cudnn.benchmark = True
# Allow TF32 on matmuls (Ampere+ GPUs) for faster float32 math
torch.backends.cuda.matmul.allow_tf32 = True

pipe = StableDiffusionPipeline.from_pretrained(
    MODEL_ID,
    cache_dir=MODEL_CACHE,
    local_files_only=True,
)
pipe = pipe.to("cuda")
# Memory-efficient attention via xFormers
pipe.enable_xformers_memory_efficient_attention()
# Decode the VAE in slices to lower peak memory
pipe.enable_vae_slicing()
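For reference, timings can be measured with something like this (a minimal sketch; the warm-up run and the prompt are illustrative, not part of the original setup):

import time

prompt = "a photo of an astronaut riding a horse"  # example prompt

# Warm-up pass so cudnn.benchmark can select algorithms before timing
pipe(prompt, num_inference_steps=50, height=512, width=512)

start = time.perf_counter()
image = pipe(prompt, num_inference_steps=50, height=512, width=512).images[0]
print(f"50-step 512x512 run took {time.perf_counter() - start:.2f}s")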
Overall output quality didn't suffer because of this; I'm still getting crisp images. How do I create a PR to add these? And are there any tests around this?
Here are the inference results: