
Add support for local LM Studio #33

Closed

Copilot wants to merge 6 commits into main from copilot/add-local-lm-studio-support

Conversation

Contributor

Copilot AI commented Feb 12, 2026

Description

Adds LM Studio as a third AI provider option, enabling completely local inference without API keys. LM Studio provides an OpenAI-compatible API endpoint running on localhost.
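
Because the endpoint speaks the standard OpenAI chat completions protocol, any compatible client can talk to it directly. As a standalone illustration (the prompt is a placeholder; "local-model" matches the default described below):

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local-model", "messages": [{"role": "user", "content": "How do I list files?"}]}'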

Type of Change

  • New feature (non-breaking change that adds functionality)
  • Bug fix (non-breaking change that fixes an issue)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Performance improvement
  • Code refactoring

Changes Made

Core Implementation

  • LMStudioProvider struct implementing Provider interface
  • Reuses OpenAIProvider client with custom base URL
  • Helper function resolveLMStudioConfig() for env/config/default resolution (see the sketch after this list)
  • Constants: defaultLMStudioBaseURL, defaultLMStudioModel, apiKeyNotNeeded
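
A minimal sketch in Go of the pieces named above. The constant values match the defaults listed under Configuration, while the Config struct shape and the apiKeyNotNeeded value are assumptions made for illustration; in the real change, the resolved base URL is handed to the reused OpenAIProvider client.

package provider

import "os"

// Constants named in this PR. The apiKeyNotNeeded value is a guess;
// it is only a placeholder, since LM Studio ignores the API key.
const (
    defaultLMStudioBaseURL = "http://localhost:1234/v1"
    defaultLMStudioModel   = "local-model"
    apiKeyNotNeeded        = "not-needed"
)

// Config mirrors the lmstudio_base_url / lmstudio_model file fields
// (the struct itself is assumed for this sketch).
type Config struct {
    LMStudioBaseURL string `yaml:"lmstudio_base_url"`
    LMStudioModel   string `yaml:"lmstudio_model"`
}

// resolveLMStudioConfig resolves each setting in env -> config -> default order.
func resolveLMStudioConfig(cfg Config) (baseURL, model string) {
    baseURL = firstNonEmpty(os.Getenv("LMSTUDIO_BASE_URL"), cfg.LMStudioBaseURL, defaultLMStudioBaseURL)
    model = firstNonEmpty(os.Getenv("LMSTUDIO_MODEL"), cfg.LMStudioModel, defaultLMStudioModel)
    return baseURL, model
}

// firstNonEmpty returns the first non-empty value in the fallback chain.
func firstNonEmpty(vals ...string) string {
    for _, v := range vals {
        if v != "" {
            return v
        }
    }
    return ""
}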

Configuration

  • Environment variables: LMSTUDIO_BASE_URL, LMSTUDIO_MODEL
  • Config file fields: lmstudio_base_url, lmstudio_model
  • Provider type: lmstudio (alongside anthropic, openai, chatgpt)
  • Defaults: http://localhost:1234/v1 and local-model (override example below)
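
Environment variables take precedence over the config file fields, which in turn fall back to the defaults (the env/config/default order noted above). For example, with an alternate port chosen purely for illustration:

export LMSTUDIO_BASE_URL=http://127.0.0.1:8080/v1   # overrides lmstudio_base_url from the config file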

User Experience

  • First-run setup wizard includes LM Studio as option 3
  • Verbose mode shows LM Studio base URL and model
  • Help text documents LM Studio environment variables
  • No API key required (placeholder used internally)

Documentation

  • README: LM Studio setup section with benefits (privacy, offline, free)
  • CLAUDE.md: Testing instructions and configuration details
  • CHANGELOG: All changes documented in [Unreleased] section

Testing

  • Tested locally
  • Tested on macOS
  • Tested on Linux
  • Tested on Windows
  • All pre-commit hooks pass

Checklist

  • My code follows the project's code style
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have updated the documentation (README.md, CLAUDE.md) if needed
  • I have updated CHANGELOG.md in the [Unreleased] section
  • My commit messages follow the Conventional Commits format
  • My changes generate no new warnings or errors
  • All pre-commit checks pass

Additional Context

Configuration example:

provider: lmstudio
lmstudio_base_url: http://localhost:1234/v1
lmstudio_model: local-model

Environment variable usage:

export HOWTFDOI_AI_PROVIDER=lmstudio
howtfdoi list files

LM Studio must be running with a loaded model before use.
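
To confirm the server is up before invoking the CLI, you can query the model-listing endpoint of LM Studio's OpenAI-compatible server:

curl http://localhost:1234/v1/models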

Original prompt

This section details the original issue you should resolve

<issue_title>Add support for local LM Studio</issue_title>
<issue_description>Add support for a local LM Studio AI and model.</issue_description>

Comments on the Issue (you are @copilot in this section)




Copilot AI left a comment


Copilot wasn't able to review any files in this pull request.

Copilot AI and others added 5 commits February 12, 2026 20:56
Co-authored-by: NeckBeardPrince <6558867+NeckBeardPrince@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Add support for local LM Studio AI and model" to "Add support for local LM Studio" on Feb 12, 2026
@NeckBeardPrince
Owner

@copilot Do we have to specify a model to download for this to work?

Contributor Author

Copilot AI commented Feb 12, 2026

@copilot Do we have to specify a model to download for this to work?

Yes, you need to download and load a model in LM Studio first. The good news is you can use any GGUF-format model that works for chat (Llama, Mistral, Qwen, etc.).

Steps:

  1. Open LM Studio app
  2. Go to the "Discover" tab and download any chat-capable model (e.g., "Llama 3.1 8B Instruct" or "Mistral 7B Instruct")
  3. Go to "Local Server" tab and load your downloaded model
  4. Start the server (it will run on http://localhost:1234)
  5. Then use howtfdoi with HOWTFDOI_AI_PROVIDER=lmstudio

The LMSTUDIO_MODEL config/env var is just a label for the API, not the actual model to download. LM Studio manages model selection through its UI.

@NeckBeardPrince deleted the copilot/add-local-lm-studio-support branch February 12, 2026 22:35