Feature Request: Support for Local LLMs (LM Studio) #2

@Darkcast

Description

Summary

It would be great to add support for using local language models through LM Studio.

Motivation

Many users prefer running models locally instead of relying on external APIs. This can help with:

  • Privacy (no data sent to third-party services)
  • Offline usage
  • Avoiding API costs
  • Greater control over models and configurations

LM Studio is a popular tool that makes it easy to run and manage local models, so supporting it would make LLMMap more flexible and accessible.

Proposed Idea

Add an option to use LM Studio as a backend for running queries. Since LM Studio exposes an OpenAI-compatible API, it should integrate smoothly with the existing structure.

A possible approach could be to introduce a dedicated loader (e.g., LLM_LMStudio) that:

  • Connects to LM Studio’s local server (typically http://localhost:1234/v1)
  • Uses the OpenAI-compatible API endpoints exposed by LM Studio
  • Supports both chat and completion-style requests
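To make the idea concrete, here is a minimal sketch of what such a loader could look like. The class name (LLM_LMStudio), method names, and the default model string are illustrative assumptions, not part of the existing LLMMap codebase; only the stdlib is used so there is no extra dependency:

```python
"""Hypothetical sketch of an LM Studio loader.

Assumptions: the class/method names (LLM_LMStudio, query) and the
"local-model" default are illustrative; the endpoint and request schema
follow LM Studio's OpenAI-compatible /v1/chat/completions API.
"""
import json
import urllib.request


class LLM_LMStudio:
    """Minimal client for LM Studio's OpenAI-compatible local server."""

    def __init__(self, base_url="http://localhost:1234/v1", model="local-model"):
        self.base_url = base_url.rstrip("/")
        self.model = model

    def _build_payload(self, prompt):
        # Chat-style request body, mirroring the OpenAI chat completions schema.
        return {
            "model": self.model,
            "messages": [{"role": "user", "content": prompt}],
        }

    def query(self, prompt, timeout=60):
        # POST to the local server; LM Studio requires no API key by default.
        req = urllib.request.Request(
            f"{self.base_url}/chat/completions",
            data=json.dumps(self._build_payload(prompt)).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            body = json.loads(resp.read().decode("utf-8"))
        # Return just the generated text from the first choice.
        return body["choices"][0]["message"]["content"]
```

Because the request/response shapes match OpenAI's, the same loader pattern could later be pointed at any OpenAI-compatible server (Ollama, vLLM, etc.) by changing base_url.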

Ideally, users could:

  • Select LM Studio as a backend
  • Configure a local endpoint
  • Run queries the same way they would with other providers

Benefits

  • Expands support for local-first workflows
  • Useful for research, lab environments, and privacy-focused users
  • Reduces dependency on external services

Additional Notes

Happy to help test or contribute if this is something you'd consider adding.
