Ollama (self-hosted) support for AI Flow #13
Hey @thefoxman88! This is definitely on the roadmap. Current state:
Ollama support plan:
For a 12GB RTX 3060, you could run:
The task is pretty simple (parsing author/title from folder names), so even smaller models work well. I will add this in an upcoming release. Would you prefer:
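For a sense of how small that parsing task really is, here is a minimal sketch (not library-manager's actual code) of asking a local model to split a folder name into author and title via Ollama's standard /api/generate endpoint. The URL, model name, and prompt are purely illustrative:

```python
import json
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port; adjust for your setup
MODEL = "mistral:7b"                   # any small instruct model should handle this

def parse_folder_name(folder: str) -> dict:
    """Ask a local Ollama model to split a folder name into author and title."""
    prompt = (
        "Extract the author and title from this book folder name. "
        'Reply with JSON only, e.g. {"author": "...", "title": "..."}.\n'
        f"Folder name: {folder}"
    )
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        # "format": "json" asks Ollama to constrain the output to valid JSON
        json={"model": MODEL, "prompt": prompt, "stream": False, "format": "json"},
        timeout=120,
    )
    resp.raise_for_status()
    # with stream=False the whole completion comes back in the "response" field
    return json.loads(resp.json()["response"])

print(parse_folder_name("Brandon Sanderson - The Way of Kings (2010)"))
```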
This has been implemented! 🎉 Ollama support is now available on the develop branch. Features:
Recommended models for your RTX 3060 (12GB):
To try it now: docker pull ghcr.io/deucebucket/library-manager:develop
Then in Settings → AI Setup, select "Ollama (Self-hosted - No API costs)" and configure your server URL. Your spicy books stay private! 🌶️ This will be in the next stable release after some testing on develop.
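If library-manager can't see your models, or you just want to double-check the server URL you entered in AI Setup, a quick sanity check is to query Ollama's /api/tags endpoint, which lists the models installed on the server. A minimal sketch, assuming the default Ollama port:

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # the same URL you point library-manager at

# /api/tags is Ollama's endpoint for listing locally installed models
resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
resp.raise_for_status()

for model in resp.json().get("models", []):
    # each entry carries the model name/tag, e.g. "mistral:7b"
    print(model["name"])
```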
All Ollama models are showing up as undefined in library-manager. Tested with mistral:7b.
Can I request a self-hosted option for AI requests?
Hopefully others would also like to avoid sending book info for questionable spicy books to Gemini/OpenAI.
I have a simple setup with Ollama and Open-Webui and would love to share that resource with library-manager. Let's just hope the required models can run with less than 12GB of VRAM (RTX 3060).