
Investigate (local) LLM support #16

@inFocus7

Description

We should support users who want to run as much as possible locally. There are several reasons someone might want this:

  1. Save $$$ by not using paid APIs. From my tests, generating 12 prompts + images costs ~$1.50 USD.
  2. Generate content offline.
  3. Avoid content filters that deny generations.

This is an initial investigation intended to spawn multiple follow-up issues, since the work involves several steps, such as:

  • Investigating different local LLMs.
  • Updating the UI/UX to cleanly support choosing between ChatGPT and a local LLM.
    • This portion could get messy, since the prompting would also differ and we'd likely need a separate UI for each.
  • (Optionally) doing the above plus refactoring to allow pluggability of additional providers in the future (see the sketch after this list).
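
To make that last bullet concrete, here is a minimal sketch of what a pluggable provider abstraction could look like. This is purely illustrative, not a proposed implementation: the class and model names (`LLMProvider`, `OpenAIProvider`, `OllamaProvider`, `gpt-4o-mini`, `llama3`) are hypothetical, and the local path assumes an Ollama server running on its default port.

```python
# Hypothetical sketch: one interface, two interchangeable backends.
from abc import ABC, abstractmethod

import requests
from openai import OpenAI


class LLMProvider(ABC):
    """Common interface so the UI only ever talks to one abstraction."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class OpenAIProvider(LLMProvider):
    """Hosted path via the OpenAI API."""

    def __init__(self, model: str = "gpt-4o-mini"):
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.model = model

    def generate(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


class OllamaProvider(LLMProvider):
    """Local path, assuming an Ollama server on its default port."""

    def __init__(self, model: str = "llama3", host: str = "http://localhost:11434"):
        self.model = model
        self.host = host

    def generate(self, prompt: str) -> str:
        resp = requests.post(
            f"{self.host}/api/generate",
            json={"model": self.model, "prompt": prompt, "stream": False},
        )
        resp.raise_for_status()
        return resp.json()["response"]
```

With something like this, the UI would only need a provider picker, and the provider-specific prompting differences mentioned above could live inside each implementation rather than leaking into the UI layer.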
