The Large Language Models (LLMs) Plugin gives Obsidian users access to LLMs through cloud providers (OpenAI, Anthropic, Google, and Mistral) and locally via GPT4All and Ollama. You can interact with models in the sidebar, in the main window, or through a floating action button (FAB) that opens a popup chat window.
Installation: Download the plugin via the community plugin browser.
Using models from cloud-based providers:
- In the plugin settings menu, enter an API key from one of the supported model providers
- To interact with models, open one of the chat views using the newly added commands (see Commands section below)
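Before pasting a key into the plugin settings, you can sanity-check it from the command line. As an example for OpenAI (assuming the key is stored in an `OPENAI_API_KEY` environment variable; the other providers have analogous endpoints):

```shell
# Verify an OpenAI API key by listing the models it can access.
# A valid key returns a JSON model list; an invalid key returns a 401 error.
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```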
Using models locally (GPT4All):
- Download GPT4All
- Download a model through GPT4All's model browser
- In the settings menu of GPT4All, toggle on the "Enable Local Server" setting
- Models downloaded via GPT4All will be selectable via the model switcher in each chat view
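With "Enable Local Server" toggled on, GPT4All exposes an OpenAI-compatible API (on port 4891 by default). You can confirm the server is reachable before using it from the plugin:

```shell
# Check that GPT4All's local server is running and list the models it exposes.
# Port 4891 is the default; adjust if you changed it in GPT4All's settings.
curl -s http://localhost:4891/v1/models
```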
Using models locally (Ollama):
- Install Ollama and pull the models you want to use
- In the plugin settings, configure the Ollama host (default: http://localhost:11434)
- Click "Discover Models" to detect your locally available models
- Select an Ollama model from the model switcher in any chat view
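The Ollama steps above can be checked from a terminal (the model name here is just an example; pull whichever model you prefer):

```shell
# Pull a model, then confirm Ollama's API is reachable on the default host.
ollama pull llama3                       # example model; any Ollama model works
curl -s http://localhost:11434/api/tags  # lists models available locally
```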
Commands:
| Command | Description |
|---|---|
| Open modal | Opens the chat modal |
| Toggle FAB | Toggles the visibility of the Floating Action Button (FAB) used to open and close the chat popup window |
| Open chat in tab | Opens the chat window in a tab |
| Open chat in sidebar | Opens the chat window in the Sidebar |
Supported model providers:
Cloud-based:
| Model provider | Status |
|---|---|
| OpenAI | Supported |
| Anthropic | Supported |
| Google | Supported |
| Mistral | Supported |
Local:
| Model provider | Status |
|---|---|
| GPT4All | Supported |
| Ollama | Supported |
Contributors:
- Johnny ✨
- Ryan Mahoney
- Evan Harris