Minor improvements to avoid out of memory issues in stable_diffusion.ipynb notebook #4
jantic wants to merge 3 commits into fastai:master
Conversation
Removing unnecessary additional memory usage by replacing the separate Dreambooth db_pipe assignment with pipe, and then deleting that pipe before running the "Looking inside the pipeline" section. This allows me to run the notebook all the way through without out of memory issues on a 12 GB 3080TI card. I should note that this assumes that using the Dreambooth pipeline for the "What is Stable Diffusion" section is an acceptable substitution, and that the output example has changed as a result.
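In rough outline, the change looks like the sketch below. The checkpoint id and the surrounding cells are placeholders rather than the notebook's exact code; the point is just reusing the one `pipe` variable and then freeing it before the next section:

```python
import gc

import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth-fine-tuned checkpoint into the same `pipe` variable the
# earlier cells used, instead of a separate `db_pipe`.
pipe = StableDiffusionPipeline.from_pretrained(
    "your-dreambooth-checkpoint",  # placeholder id, not the notebook's actual model
    torch_dtype=torch.float16,
).to("cuda")

# ... generate the Dreambooth example images with `pipe` ...

# Free the pipeline before the "Looking inside the pipeline" section so the VAE,
# UNet, and text encoder loaded there fit on an 11-12 GB card.
del pipe
gc.collect()
torch.cuda.empty_cache()
```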
Check out this pull request on ReviewNB: see visual diffs & provide feedback on Jupyter Notebooks. Powered by ReviewNB
…ency. The "Looking inside the pipeline" section had some models instantiated that weren't being converted to fp16. They're now using fp16, and as a result I can now run this notebook in full on an 11GB 1080TI GPU without an out of memory error.
Added one more commit that sets the models in "Looking inside the pipeline" to fp16 to further improve memory efficiency and allow running the notebook from beginning to end on an 11GB 1080TI GPU.
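For reference, a minimal sketch of what the fp16 change looks like in that section. The model ids shown are the standard SD v1.4 components; the notebook's exact ids and arguments may differ:

```python
import torch
from diffusers import AutoencoderKL, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

# Tokenizer and text encoder for turning prompts into embeddings.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained(
    "openai/clip-vit-large-patch14", torch_dtype=torch.float16
).to("cuda")

# VAE for decoding latents back into images.
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-ema", torch_dtype=torch.float16
).to("cuda")

# UNet that predicts the noise at each denoising step.
unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet", torch_dtype=torch.float16
).to("cuda")

# The latents and text embeddings fed to these models need to be float16 too,
# otherwise the half-precision weights will raise dtype-mismatch errors.
```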
Thanks mate! Any chance you could reduce the size of this PR by not re-running the image outputs, and also not running that cell that gives you the …
Oh sure thing. Done. It was checked in with outputs originally, so that's what I assumed was wanted for the pull request. Otherwise I wouldn't normally do that, as yeah, that makes for a lot of extra data in git.
Sorry I didn't explain myself carefully enough. We do still want the outputs (so that people can see them in the notebook without running it, which takes some time), but I was hoping you could avoid re-running the notebook from the version in the repo. That way, the outputs aren't regenerated with slightly different pixels (which makes the PR and repo bigger). Does that make sense?
Yeah that's clear now. I'll have this done tomorrow here in the US. Sorry about that!
It seemed a bit overcomplicated to try to modify the existing pull request/branch to get rid of the junk history, so I made a new pull request here that I think should work: #5. Closing this version.