truehot/LMLocal

🤖 LMLocal

LMLocal is a Visual Studio extension that integrates with LM Studio to provide a lightweight, local AI chat assistant directly inside your IDE.

Important

Preview Notice: This extension is currently in preview. Features, behavior, UI, and documentation are subject to change before the final release.

✨ Features

  • ☁️ In-IDE Chat UI – Tool window for LLM interaction without switching applications.
  • 🌊 Streaming Responses – Real-time token delivery for low-latency feedback.
  • 🧠 Thought/Reasoning Support – Support for reasoning models; "thoughts" are displayed in expandable blocks.
  • 📊 Live Stats – Status bar metrics: real-time speed (tokens/sec) and total token count.
  • 📝 Markdown & Highlighting – Full GFM rendering with syntax highlighting for code blocks.
  • 📋 Quick Copy – Dedicated Copy icon above code blocks for clipboard access.
  • 🔍 Active Window Context – + button to include active editor content or Output Pane text in the request.
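LMLocal's internals are not shown here, but LM Studio's local server streams responses in the standard OpenAI-compatible format: each chunk arrives as a server-sent-events line of the form `data: {...}`, ending with a `data: [DONE]` sentinel. A minimal sketch of extracting token deltas from such lines (the payload shape is the generic OpenAI streaming format, not code taken from LMLocal):

```python
import json

def extract_delta(sse_line: str):
    """Return the content token carried by one SSE line, or None.

    Expects the OpenAI-compatible streaming format that LM Studio's
    local server emits: lines like `data: {"choices":[{"delta":...}]}`,
    terminated by the sentinel `data: [DONE]`.
    """
    line = sse_line.strip()
    if not line.startswith("data:"):
        return None                     # blank keep-alive / comment lines
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None                     # end-of-stream sentinel
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")

# Example: two token chunks followed by the end sentinel
stream = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    'data: [DONE]',
]
print("".join(t for t in map(extract_delta, stream) if t))  # → Hello
```

Appending each non-empty delta to the chat view as it arrives is what produces the real-time, low-latency feel described above.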

🛠 Requirements

To use LMLocal, ensure you have:

  • Visual Studio 2022 or 2026
  • LM Studio installed and running
  • Local Server enabled in LM Studio at http://127.0.0.1:1234
  • A chat-capable LLM loaded

Tip

Make sure the LM Studio server is listening on port 1234. See the LM Studio Server Documentation for details.
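One quick way to confirm the server is reachable before opening the tool window is to query its models endpoint (`/v1/models` is part of LM Studio's OpenAI-compatible API). A small stdlib-only sketch:

```python
import urllib.request
import urllib.error

def server_reachable(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at base_url/v1/models."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False    # connection refused, timeout, bad host, ...

if server_reachable("http://127.0.0.1:1234"):
    print("LM Studio server is up")
else:
    print("No server on port 1234 - enable Local Server in LM Studio")
```

If this prints the second line, start (or restart) the Local Server in LM Studio and retry before troubleshooting the extension itself.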

🚀 Installation

Option 1: Visual Studio Marketplace (Recommended)

  1. Open Visual Studio.
  2. Go to Extensions > Manage Extensions.
  3. Search for LMLocal and click Download.
  4. Restart Visual Studio to complete the installation.

Option 2: Manual VSIX

  1. Download the .vsix file from the Marketplace.
  2. Double-click the file and follow the VSIX Installer prompts.

🏁 Getting Started

  1. Launch: Open the LMLocal tool window from the Extensions menu.
  2. Connect: The extension automatically attempts to connect to http://127.0.0.1:1234.
    • If the connection fails, a Refresh icon appears in the header so you can retry manually.
  3. Context (Optional): Click the + button to attach text from your active window.
  4. Chat: Type your message and click Send or hit Enter ⌨️.
  5. Monitor: During generation, check the bottom status bar for live performance metrics (tokens and speed).
  6. Control:
    • Use Stop ⏹️ to cancel a generation.
    • Use Clear 🗑️ to wipe the current session history.
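The speed figure from step 5 is simply token count divided by elapsed wall-clock time. A sketch of that arithmetic (the function name is illustrative, not LMLocal's actual code):

```python
def tokens_per_second(token_count: int, elapsed_s: float) -> float:
    """Average generation speed; guards against a zero-length interval."""
    return token_count / elapsed_s if elapsed_s > 0 else 0.0

# e.g. 96 tokens delivered over 3.2 seconds:
print(f"{tokens_per_second(96, 3.2):.1f} tok/s")  # → 30.0 tok/s
```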

🔧 Troubleshooting

  • No model shown – Ensure a model is fully loaded in the LM Studio "AI Chat" or "Server" tab.
  • Connection Error – Check that the LM Studio server is ON at http://127.0.0.1:1234, then click the Refresh icon to retry.
  • UI Lag – Restart the tool window or check your local machine resources (CPU/GPU).

📜 License & Third-Party

  • License: MIT License. See LICENSE.txt for details.
  • Components:
    • marked v18.0.0 (MIT)
    • highlight.js v11.9.0 (MIT) & GitHub Dark theme

🙌 Acknowledgments

Special thanks to the LM Studio team for their local inference platform and the open-source community for the libraries that make this extension possible.

About

LMLocal is a Visual Studio extension that adds a dedicated chat interface for interacting with local LLMs via LM Studio. It operates as a manual assistant for prompts and code generation within the IDE.
