We should support users who want to run as much of the generation pipeline as possible locally. There are several reasons someone would want this:
- Save $$$ by not using paid APIs. From my tests, generating 12 prompts + images costs ~$1.50 USD.
- Generate content offline.
- Avoid content filters that deny generations.
This is an initial investigation that should be split into multiple issues, since it involves several steps, such as:
- Investigating different local LLMs.
- Updating the UI/UX to cleanly support choosing between ChatGPT and a local LLM.
  - This portion could get messy, since the prompting would also differ and we'd likely need a separate UI for each.
- (Optionally) doing the above plus refactoring to allow pluggability of additional backends in the future (see the sketch after this list).
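
To make the pluggability step concrete, here is a minimal sketch of what a common provider abstraction could look like, assuming a TypeScript codebase. All names (`LlmProvider`, `ChatGptProvider`, `LocalLlmProvider`) are hypothetical, and using Ollama as the local backend is only illustrative, not a decision on which local runtime we'd support.

```ts
// Hypothetical provider interface: the rest of the app would only depend on
// this, so per-backend differences in prompting/UI can live behind each
// implementation.
interface LlmProvider {
  /** Human-readable name shown in the UI picker. */
  readonly name: string;
  /** Generate a completion for the given prompt. */
  generate(prompt: string): Promise<string>;
}

// Hosted backend: wraps the OpenAI chat completions API.
class ChatGptProvider implements LlmProvider {
  readonly name = "ChatGPT";
  constructor(private apiKey: string) {}
  async generate(prompt: string): Promise<string> {
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({
        model: "gpt-4o-mini",
        messages: [{ role: "user", content: prompt }],
      }),
    });
    const data = await res.json();
    return data.choices[0].message.content;
  }
}

// Local backend: talks to a locally running inference server. Ollama's
// default port and /api/generate endpoint are assumed here; any local
// server with an HTTP API would work the same way.
class LocalLlmProvider implements LlmProvider {
  readonly name = "Local LLM";
  constructor(private baseUrl = "http://localhost:11434") {}
  async generate(prompt: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: "llama3", prompt, stream: false }),
    });
    const data = await res.json();
    return data.response;
  }
}

// Adding another backend later would be one class plus a registry entry.
const providers: Record<string, LlmProvider> = {
  chatgpt: new ChatGptProvider("sk-..."), // placeholder key
  local: new LocalLlmProvider(),
};

async function demo() {
  const provider = providers["local"]; // or "chatgpt", chosen from the UI
  console.log(await provider.generate("Write a one-line haiku about rain."));
}
```

The design point is that only the provider implementations know about backend-specific prompting, which keeps the messy parts called out above contained and makes future backends cheap to add.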