4 changes: 4 additions & 0 deletions .github/labeler.yml
Original file line number Diff line number Diff line change
@@ -265,6 +265,10 @@
- changed-files:
- any-glob-to-any-file:
- "extensions/kilocode/**"
"extensions: lmstudio":
- changed-files:
- any-glob-to-any-file:
- "extensions/lmstudio/**"
"extensions: openai":
- changed-files:
- any-glob-to-any-file:
24 changes: 23 additions & 1 deletion docs/concepts/model-providers.md
@@ -465,6 +465,28 @@ MiniMax is configured via `models.providers` because it uses custom endpoints:

See [/providers/minimax](/providers/minimax) for setup details, model options, and config snippets.

### LM Studio

LM Studio ships as a bundled provider plugin that uses LM Studio's native API:

- Provider: `lmstudio`
- Auth: `LM_API_TOKEN` (if authentication is not enabled in LM Studio, any non-empty placeholder string works)
- Default inference base URL: `http://localhost:1234/v1`

Then set a model (replace with one of the IDs returned by `http://localhost:1234/api/v1/models`):

```json5
{
agents: {
defaults: { model: { primary: "lmstudio/openai/gpt-oss-20b" } },
},
}
```

OpenClaw uses LM Studio's native `/api/v1/models` and `/api/v1/models/load` endpoints
for discovery and auto-load, and `/v1/chat/completions` for inference by default.
See [/providers/lmstudio](/providers/lmstudio) for setup and troubleshooting.
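
With the default base URL, the endpoints above can be sketched as follows (paths assembled from the description; run them against a live server with `curl` to verify):

```bash
base="http://localhost:1234"
# Native endpoints used for model discovery and auto-load:
echo "$base/api/v1/models"       # list installed models and their keys
echo "$base/api/v1/models/load"  # load a model on demand
# OpenAI-compatible endpoint used for inference:
echo "$base/v1/chat/completions"
```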

### Ollama

Ollama ships as a bundled provider plugin and uses Ollama's native API:
@@ -563,7 +585,7 @@ Example (OpenAI‑compatible):
providers: {
lmstudio: {
baseUrl: "http://localhost:1234/v1",
apiKey: "LMSTUDIO_KEY",
apiKey: "LM_API_TOKEN",
api: "openai-completions",
models: [
{
Expand Down
1 change: 1 addition & 0 deletions docs/docs.json
@@ -1192,6 +1192,7 @@
"providers/huggingface",
"providers/kilocode",
"providers/litellm",
"providers/lmstudio",
"providers/minimax",
"providers/mistral",
"providers/modelstudio",
1 change: 1 addition & 0 deletions docs/providers/index.md
@@ -37,6 +37,7 @@ Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin))?
- [Hugging Face (Inference)](/providers/huggingface)
- [Kilocode](/providers/kilocode)
- [LiteLLM (unified gateway)](/providers/litellm)
- [LM Studio (local models)](/providers/lmstudio)
- [MiniMax](/providers/minimax)
- [Mistral](/providers/mistral)
- [Model Studio (Alibaba Cloud)](/providers/modelstudio)
146 changes: 146 additions & 0 deletions docs/providers/lmstudio.md
Would recommend adding a brief paragraph describing what LM Studio is

LM Studio is a friendly application that makes it seamless to run open source models locally on your own hardware.

I'd also add a sentence or two about how OpenClaw integrates with LM Studio (via our native api/v1 API endpoints) and why this integration makes the setup much simpler (what users don't need to do/think about)

@@ -0,0 +1,146 @@
---
summary: "Run OpenClaw with LM Studio"
read_when:
- You want to run OpenClaw with open source models via LM Studio
- You want to set up and configure LM Studio
title: "LM Studio"
---

# LM Studio

LM Studio is a friendly yet powerful app for running open-weight models on your own hardware. It runs llama.cpp (GGUF) models and, on Apple Silicon, MLX models, and is available as a GUI app or a headless daemon (`llmster`). For product and setup docs, see [lmstudio.ai](https://lmstudio.ai/).

## Quick start

1. Install LM Studio (desktop) or `llmster` (headless):

```bash
curl -fsSL https://lmstudio.ai/install.sh | bash
```

2. Start the server

Make sure the desktop app is running, or bring up the headless daemon:

```bash
lms daemon up
```

Then start the local API server:

```bash
lms server start --port 1234
```

If you are using the desktop app, enable just-in-time (JIT) model loading for a smooth experience; see [TTL and auto-evict](https://lmstudio.ai/docs/developer/core/ttl-and-auto-evict).

3. OpenClaw requires an LM Studio token value. Set `LM_API_TOKEN`:

```bash
export LM_API_TOKEN="your-lm-studio-api-token"
```

If authentication is not enabled in LM Studio, use any non-empty placeholder value:

```bash
export LM_API_TOKEN="placeholder-key"
```

For LM Studio auth setup details, see [LM Studio Authentication](https://lmstudio.ai/docs/developer/core/authentication).

4. Run onboarding and choose `LM Studio`:

```bash
openclaw onboard
```

5. In onboarding, use the `Default model` prompt to pick your LM Studio model.

You can also set or change it later:

```bash
openclaw models set lmstudio/qwen/qwen3.5-9b
```

LM Studio model keys follow an `author/model-name` format (e.g. `qwen/qwen3.5-9b`). OpenClaw
model refs prepend the provider name: `lmstudio/qwen/qwen3.5-9b`. You can find the exact key for
a model by running `curl http://localhost:1234/api/v1/models` and looking at the `key` field.
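
Putting the two formats together, a quick sketch of deriving the OpenClaw model ref from an LM Studio key:

```bash
# Example key as it appears in the `key` field of /api/v1/models
key="qwen/qwen3.5-9b"
# OpenClaw model ref = provider name + "/" + key
ref="lmstudio/$key"
echo "$ref"   # lmstudio/qwen/qwen3.5-9b
```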

## Non-interactive onboarding

Use non-interactive onboarding when you want to script setup (CI, provisioning, remote bootstrap):

```bash
openclaw onboard \
--non-interactive \
--accept-risk \
--auth-choice lmstudio \
--custom-base-url http://localhost:1234/v1 \
--custom-api-key "$LM_API_TOKEN" \
--custom-model-id qwen/qwen3.5-9b
```

`--custom-model-id` takes the model key as returned by LM Studio (e.g. `qwen/qwen3.5-9b`), without
the `lmstudio/` provider prefix.

If your LM Studio server does not require authentication, non-interactive onboarding still requires
`--custom-api-key` (or `LM_API_TOKEN` in the environment); pass any non-empty value.

This writes `models.providers.lmstudio`, sets the default model to
`lmstudio/<custom-model-id>`, and writes the `lmstudio:default` auth profile.
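
Concretely, the command above produces config along these lines (a sketch using the example model id; the auth profile is stored separately, and the exact field layout may differ):

```json5
{
  agents: {
    defaults: { model: { primary: "lmstudio/qwen/qwen3.5-9b" } },
  },
  models: {
    providers: {
      lmstudio: {
        baseUrl: "http://localhost:1234/v1",
        apiKey: "${LM_API_TOKEN}",
        api: "openai-completions",
      },
    },
  },
}
```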

## Configuration

### Explicit configuration

```json5
{
models: {
providers: {
lmstudio: {
baseUrl: "http://localhost:1234/v1",
apiKey: "${LM_API_TOKEN}",
api: "openai-completions",
models: [
{
id: "qwen/qwen3-coder-next",
name: "Qwen 3 Coder Next",
reasoning: false,
input: ["text"],
cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
contextWindow: 128000,
maxTokens: 8192,
},
],
},
},
},
}
```

## Troubleshooting

### LM Studio not detected

Make sure LM Studio is running and that you set `LM_API_TOKEN` (or any non-empty placeholder for unauthenticated servers):

```bash
# Start via desktop app, or headless:
lms server start --port 1234
```

Verify the API is accessible:

```bash
curl http://localhost:1234/api/v1/models
```

### Authentication errors (HTTP 401)

If setup reports HTTP 401, verify your API key:

- Check that `LM_API_TOKEN` matches the key configured in LM Studio.
- For LM Studio auth setup details, see [LM Studio Authentication](https://lmstudio.ai/docs/developer/core/authentication).
- If your server does not require authentication, use any non-empty placeholder value for `LM_API_TOKEN`.

### Just-in-time model loading

LM Studio supports just-in-time (JIT) model loading, where models are loaded on the first request. Make sure JIT is enabled to avoid "Model not loaded" errors.
1 change: 1 addition & 0 deletions docs/reference/api-usage-costs.md
@@ -68,6 +68,7 @@ Semantic memory search uses **embedding APIs** when configured for remote providers:
- `memorySearch.provider = "gemini"` → Gemini embeddings
- `memorySearch.provider = "voyage"` → Voyage embeddings
- `memorySearch.provider = "mistral"` → Mistral embeddings
- `memorySearch.provider = "lmstudio"` → LM Studio embeddings (local/self-hosted)
- `memorySearch.provider = "ollama"` → Ollama embeddings (local/self-hosted; typically no hosted API billing)
- Optional fallback to a remote provider if local embeddings fail

3 changes: 3 additions & 0 deletions extensions/lmstudio/README.md
@@ -0,0 +1,3 @@
# LM Studio Provider

Bundled provider plugin for LM Studio discovery, auto-load, and setup.
91 changes: 91 additions & 0 deletions extensions/lmstudio/index.ts
Let's change "open-weight" to "open source" (in this and other files)

^ wondering what's the rationale for this?

@@ -0,0 +1,91 @@
import {
LMSTUDIO_DEFAULT_API_KEY_ENV_VAR,
LMSTUDIO_PROVIDER_LABEL,
} from "openclaw/plugin-sdk/lmstudio-defaults";
import {
definePluginEntry,
type OpenClawPluginApi,
type ProviderAuthContext,
type ProviderAuthMethodNonInteractiveContext,
type ProviderAuthResult,
type ProviderDiscoveryContext,
type ProviderRuntimeModel,
} from "openclaw/plugin-sdk/plugin-entry";

const PROVIDER_ID = "lmstudio";
const cachedDynamicModels = new Map<string, ProviderRuntimeModel[]>();

/** Lazily loads setup helpers so provider wiring stays lightweight at startup. */
async function loadProviderSetup() {
return await import("openclaw/plugin-sdk/lmstudio-setup");
}

export default definePluginEntry({
id: PROVIDER_ID,
name: "LM Studio Provider",
description: "Bundled LM Studio provider plugin",
register(api: OpenClawPluginApi) {
api.registerProvider({
id: PROVIDER_ID,
label: "LM Studio",
docsPath: "/providers/lmstudio",
envVars: [LMSTUDIO_DEFAULT_API_KEY_ENV_VAR],
auth: [
{
id: "custom",
label: LMSTUDIO_PROVIDER_LABEL,
hint: "Local/self-hosted LM Studio server",
kind: "custom",
run: async (ctx: ProviderAuthContext): Promise<ProviderAuthResult> => {
const providerSetup = await loadProviderSetup();
return await providerSetup.promptAndConfigureLmstudioInteractive({
config: ctx.config,
prompter: ctx.prompter,
secretInputMode: ctx.secretInputMode,
allowSecretRefPrompt: ctx.allowSecretRefPrompt,
});
},
runNonInteractive: async (ctx: ProviderAuthMethodNonInteractiveContext) => {
const providerSetup = await loadProviderSetup();
return await providerSetup.configureLmstudioNonInteractive(ctx);
},
},
],
discovery: {
// Run after early providers so local LM Studio detection does not dominate resolution.
order: "late",
run: async (ctx: ProviderDiscoveryContext) => {
const providerSetup = await loadProviderSetup();
return await providerSetup.discoverLmstudioProvider(ctx);
},
},
prepareDynamicModel: async (ctx) => {
const providerSetup = await loadProviderSetup();
cachedDynamicModels.set(
ctx.providerConfig?.baseUrl ?? "",
await providerSetup.prepareLmstudioDynamicModels(ctx),
);
},
resolveDynamicModel: (ctx) =>
cachedDynamicModels
.get(ctx.providerConfig?.baseUrl ?? "")
?.find((model) => model.id === ctx.modelId),
wizard: {
setup: {
choiceId: PROVIDER_ID,
choiceLabel: "LM Studio",
choiceHint: "Local/self-hosted LM Studio server",
groupId: PROVIDER_ID,
groupLabel: "LM Studio",
groupHint: "Self-hosted open-weight models",
methodId: "custom",
},
modelPicker: {
label: "LM Studio (custom)",
hint: "Detect models from LM Studio /api/v1/models",
methodId: "custom",
},
},
});
},
});
24 changes: 24 additions & 0 deletions extensions/lmstudio/openclaw.plugin.json
@@ -0,0 +1,24 @@
{
"id": "lmstudio",
"providers": ["lmstudio"],
"providerAuthEnvVars": {
"lmstudio": ["LM_API_TOKEN"]
},
"providerAuthChoices": [
{
"provider": "lmstudio",
"method": "custom",
"choiceId": "lmstudio",
"choiceLabel": "LM Studio",
"choiceHint": "Local/self-hosted LM Studio server",
"groupId": "lmstudio",
"groupLabel": "LM Studio",
"groupHint": "Self-hosted open-weight models"
}
],
"configSchema": {
"type": "object",
"additionalProperties": false,
"properties": {}
}
}
12 changes: 12 additions & 0 deletions extensions/lmstudio/package.json
@@ -0,0 +1,12 @@
{
"name": "@openclaw/lmstudio-provider",
"version": "2026.3.14",
"private": true,
"description": "OpenClaw LM Studio provider plugin",
"type": "module",
"openclaw": {
"extensions": [
"./index.ts"
]
}
}
8 changes: 8 additions & 0 deletions package.json
@@ -45,6 +45,14 @@
"types": "./dist/plugin-sdk/core.d.ts",
"default": "./dist/plugin-sdk/core.js"
},
"./plugin-sdk/lmstudio-defaults": {
"types": "./dist/plugin-sdk/lmstudio-defaults.d.ts",
"default": "./dist/plugin-sdk/lmstudio-defaults.js"
},
"./plugin-sdk/lmstudio-setup": {
"types": "./dist/plugin-sdk/lmstudio-setup.d.ts",
"default": "./dist/plugin-sdk/lmstudio-setup.js"
},
"./plugin-sdk/ollama-setup": {
"types": "./dist/plugin-sdk/ollama-setup.d.ts",
"default": "./dist/plugin-sdk/ollama-setup.js"
2 changes: 2 additions & 0 deletions pnpm-lock.yaml


2 changes: 2 additions & 0 deletions scripts/lib/plugin-sdk-entrypoints.json
@@ -1,6 +1,8 @@
[
"index",
"core",
"lmstudio-defaults",
"lmstudio-setup",
"ollama-setup",
"provider-setup",
"sandbox",