# [WIP] LM Studio support #1
---
summary: "Run OpenClaw with LM Studio"
read_when:
  - You want to run OpenClaw with open source models via LM Studio
  - You want to set up and configure LM Studio
title: "LM Studio"
---

# LM Studio

LM Studio is a friendly yet powerful app for running open-weight models on your own hardware. It can run llama.cpp (GGUF) models, or MLX models on Apple Silicon, and is available as a desktop GUI or as a headless daemon (`llmster`). For product and setup docs, see [lmstudio.ai](https://lmstudio.ai/).
## Quick start

1. Install LM Studio (desktop) or `llmster` (headless):

   ```bash
   curl -fsSL https://lmstudio.ai/install.sh | bash
   ```

2. Start the server.

   Either launch the desktop app, or bring up the headless daemon and its server:

   ```bash
   lms daemon up
   ```

   ```bash
   lms server start --port 1234
   ```

   If you are using the app, make sure JIT model loading is enabled for a smooth experience. Learn more in the [TTL and auto-evict docs](https://lmstudio.ai/docs/developer/core/ttl-and-auto-evict).
3. OpenClaw requires an LM Studio token value. Set `LM_API_TOKEN`:

   ```bash
   export LM_API_TOKEN="your-lm-studio-api-token"
   ```

   If you don't want to use LM Studio with authentication, use any non-empty placeholder value:

   ```bash
   export LM_API_TOKEN="placeholder-key"
   ```

   For LM Studio auth setup details, see [LM Studio Authentication](https://lmstudio.ai/docs/developer/core/authentication).
4. Run onboarding and choose `LM Studio`:

   ```bash
   openclaw onboard
   ```

5. In onboarding, use the `Default model` prompt to pick your LM Studio model.

   You can also set or change it later:

   ```bash
   openclaw models set lmstudio/qwen/qwen3.5-9b
   ```

LM Studio model keys follow an `author/model-name` format (e.g. `qwen/qwen3.5-9b`). OpenClaw model refs prepend the provider name: `lmstudio/qwen/qwen3.5-9b`. You can find the exact key for a model by running `curl http://localhost:1234/api/v1/models` and looking at the `key` field.
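The key-to-ref mapping can be sketched in a couple of lines of shell. The JSON below is a hypothetical sample of the response shape (only the `key` field is taken from the docs above); in practice you would pipe the real `curl` output instead:

```shell
# Turn LM Studio model keys into OpenClaw model refs (sketch, no jq required).
# Hypothetical sample of the /api/v1/models response shape; in practice:
#   curl -s http://localhost:1234/api/v1/models | grep -o ... | sed ...
sample='{"data":[{"key":"qwen/qwen3.5-9b"},{"key":"qwen/qwen3-coder-next"}]}'
# Pull out each "key" value and prepend the provider prefix:
echo "$sample" | grep -o '"key":"[^"]*"' | sed 's|"key":"\([^"]*\)"|lmstudio/\1|'
# → lmstudio/qwen/qwen3.5-9b
# → lmstudio/qwen/qwen3-coder-next
```

Each output line is ready to use as the argument to `openclaw models set`.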
## Non-interactive onboarding

Use non-interactive onboarding when you want to script setup (CI, provisioning, remote bootstrap):

```bash
openclaw onboard \
  --non-interactive \
  --accept-risk \
  --auth-choice lmstudio \
  --custom-base-url http://localhost:1234/v1 \
  --custom-api-key "$LM_API_TOKEN" \
  --custom-model-id qwen/qwen3.5-9b
```

`--custom-model-id` takes the model key as returned by LM Studio (e.g. `qwen/qwen3.5-9b`), without the `lmstudio/` provider prefix.

Even if your LM Studio server does not require authentication, non-interactive onboarding still requires `--custom-api-key` (or `LM_API_TOKEN` in the environment); pass any non-empty value.

This writes `models.providers.lmstudio`, sets the default model to `lmstudio/<custom-model-id>`, and writes the `lmstudio:default` auth profile.
||
| ## Configuration | ||
|
|
||
| ### Explicit configuration | ||
|
|
||
| ```json5 | ||
| { | ||
| models: { | ||
| providers: { | ||
| lmstudio: { | ||
| baseUrl: "http://localhost:1234/v1", | ||
| apiKey: "${LM_API_TOKEN}", | ||
| api: "openai-completions", | ||
| models: [ | ||
| { | ||
| id: "qwen/qwen3-coder-next", | ||
| name: "Qwen 3 Coder Next", | ||
| reasoning: false, | ||
| input: ["text"], | ||
| cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 }, | ||
| contextWindow: 128000, | ||
| maxTokens: 8192, | ||
| }, | ||
| ], | ||
| }, | ||
| }, | ||
| }, | ||
| } | ||
| ``` | ||
|
|
||
## Troubleshooting

### LM Studio not detected

Make sure LM Studio is running and that you set `LM_API_TOKEN` (or any non-empty placeholder for unauthenticated servers):

```bash
# Start via desktop app, or headless:
lms server start --port 1234
```

Verify the API is accessible:

```bash
curl http://localhost:1234/api/v1/models
```

### Authentication errors (HTTP 401)

If setup reports HTTP 401, verify your API key:

- Check that `LM_API_TOKEN` matches the key configured in LM Studio.
- For LM Studio auth setup details, see [LM Studio Authentication](https://lmstudio.ai/docs/developer/core/authentication).
- If your server does not require authentication, use any non-empty placeholder value for `LM_API_TOKEN`.
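Since a missing `LM_API_TOKEN` is a common cause of both 401s and failed detection, a wrapper script can fail fast before touching the server. A sketch; `require_token` is a hypothetical helper, not part of OpenClaw:

```shell
# Hypothetical pre-flight check: fail fast when LM_API_TOKEN is missing or empty.
require_token() {
  if [ -z "${LM_API_TOKEN:-}" ]; then
    echo "LM_API_TOKEN is not set; export it (any non-empty value for unauthenticated servers)" >&2
    return 1
  fi
}

# Usage (not run here):
# require_token && openclaw onboard
```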
### Just-in-time model loading

LM Studio supports just-in-time (JIT) model loading, where models are loaded on first request. Make sure this is enabled to avoid "Model not loaded" errors.
# LM Studio Provider

Bundled provider plugin for LM Studio discovery, auto-load, and setup.
> **Review comment:** Let's change "open-weight" to "open source" (in this and other files)
>
> **Reply:** ^ wondering what's the rationale for this?
```ts
import {
  LMSTUDIO_DEFAULT_API_KEY_ENV_VAR,
  LMSTUDIO_PROVIDER_LABEL,
} from "openclaw/plugin-sdk/lmstudio-defaults";
import {
  definePluginEntry,
  type OpenClawPluginApi,
  type ProviderAuthContext,
  type ProviderAuthMethodNonInteractiveContext,
  type ProviderAuthResult,
  type ProviderDiscoveryContext,
  type ProviderRuntimeModel,
} from "openclaw/plugin-sdk/plugin-entry";

const PROVIDER_ID = "lmstudio";
const cachedDynamicModels = new Map<string, ProviderRuntimeModel[]>();

/** Lazily loads setup helpers so provider wiring stays lightweight at startup. */
async function loadProviderSetup() {
  return await import("openclaw/plugin-sdk/lmstudio-setup");
}

export default definePluginEntry({
  id: PROVIDER_ID,
  name: "LM Studio Provider",
  description: "Bundled LM Studio provider plugin",
  register(api: OpenClawPluginApi) {
    api.registerProvider({
      id: PROVIDER_ID,
      label: "LM Studio",
      docsPath: "/providers/lmstudio",
      envVars: [LMSTUDIO_DEFAULT_API_KEY_ENV_VAR],
      auth: [
        {
          id: "custom",
          label: LMSTUDIO_PROVIDER_LABEL,
          hint: "Local/self-hosted LM Studio server",
          kind: "custom",
          run: async (ctx: ProviderAuthContext): Promise<ProviderAuthResult> => {
            const providerSetup = await loadProviderSetup();
            return await providerSetup.promptAndConfigureLmstudioInteractive({
              config: ctx.config,
              prompter: ctx.prompter,
              secretInputMode: ctx.secretInputMode,
              allowSecretRefPrompt: ctx.allowSecretRefPrompt,
            });
          },
          runNonInteractive: async (ctx: ProviderAuthMethodNonInteractiveContext) => {
            const providerSetup = await loadProviderSetup();
            return await providerSetup.configureLmstudioNonInteractive(ctx);
          },
        },
      ],
      discovery: {
        // Run after early providers so local LM Studio detection does not dominate resolution.
        order: "late",
        run: async (ctx: ProviderDiscoveryContext) => {
          const providerSetup = await loadProviderSetup();
          return await providerSetup.discoverLmstudioProvider(ctx);
        },
      },
      prepareDynamicModel: async (ctx) => {
        const providerSetup = await loadProviderSetup();
        cachedDynamicModels.set(
          ctx.providerConfig?.baseUrl ?? "",
          await providerSetup.prepareLmstudioDynamicModels(ctx),
        );
      },
      resolveDynamicModel: (ctx) =>
        cachedDynamicModels
          .get(ctx.providerConfig?.baseUrl ?? "")
          ?.find((model) => model.id === ctx.modelId),
      wizard: {
        setup: {
          choiceId: PROVIDER_ID,
          choiceLabel: "LM Studio",
          choiceHint: "Local/self-hosted LM Studio server",
          groupId: PROVIDER_ID,
          groupLabel: "LM Studio",
          groupHint: "Self-hosted open-weight models",
          methodId: "custom",
        },
        modelPicker: {
          label: "LM Studio (custom)",
          hint: "Detect models from LM Studio /api/v1/models",
          methodId: "custom",
        },
      },
    });
  },
});
```
```json
{
  "id": "lmstudio",
  "providers": ["lmstudio"],
  "providerAuthEnvVars": {
    "lmstudio": ["LM_API_TOKEN"]
  },
  "providerAuthChoices": [
    {
      "provider": "lmstudio",
      "method": "custom",
      "choiceId": "lmstudio",
      "choiceLabel": "LM Studio",
      "choiceHint": "Local/self-hosted LM Studio server",
      "groupId": "lmstudio",
      "groupLabel": "LM Studio",
      "groupHint": "Self-hosted open-weight models"
    }
  ],
  "configSchema": {
    "type": "object",
    "additionalProperties": false,
    "properties": {}
  }
}
```
```json
{
  "name": "@openclaw/lmstudio-provider",
  "version": "2026.3.14",
  "private": true,
  "description": "OpenClaw LM Studio provider plugin",
  "type": "module",
  "openclaw": {
    "extensions": [
      "./index.ts"
    ]
  }
}
```
```diff
@@ -1,6 +1,8 @@
 [
   "index",
   "core",
+  "lmstudio-defaults",
+  "lmstudio-setup",
   "ollama-setup",
   "provider-setup",
   "sandbox",
```
> **Review comment:** Would recommend adding a brief paragraph describing what LM Studio is:
>
> > LM Studio is a friendly application that makes it seamless to run open source models locally on your own hardware.
>
> I'd also add a sentence or two about how OpenClaw integrates with LM Studio (via our native api/v1 API endpoints) and why this integration makes the setup much simpler (what users don't need to do/think about).