LMLocal is a Visual Studio extension that integrates with LM Studio to provide a lightweight, local AI chat assistant directly inside your IDE.
> [!IMPORTANT]
> **Preview Notice:** This extension is currently in preview. Features, behavior, UI, and documentation are subject to change before the final release.
- ☁️ In-IDE Chat UI – Tool window for LLM interaction without switching applications.
- 🌊 Streaming Responses – Real-time token delivery for low-latency feedback.
- 🧠 Thought/Reasoning Support – Support for reasoning models; "thoughts" are displayed in expandable blocks.
- 📊 Live Stats – Status bar metrics: real-time speed (tokens/sec) and total token count.
- 📝 Markdown & Highlighting – Full GFM rendering with syntax highlighting for code blocks.
- 📋 Quick Copy – Dedicated Copy icon above code blocks for clipboard access.
- 🔍 Active Window Context – `+` button to include active editor content or Output Pane text in the request.
To use LMLocal, ensure you have:
- Visual Studio 2022 or 2026
- LM Studio installed and running
- Local Server enabled in LM Studio at `http://127.0.0.1:1234`
- A chat-capable LLM loaded
> [!TIP]
> Make sure the LM Studio server is listening on port 1234. See the LM Studio Server Documentation for details.
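If you want to verify the prerequisites from outside the IDE, a minimal sketch like the following can probe the server. It assumes LM Studio's OpenAI-compatible `/v1/models` endpoint at the default address; the exact response shape may vary by LM Studio version.

```python
import json
import urllib.error
import urllib.request


def lmstudio_ready(base_url: str = "http://127.0.0.1:1234") -> bool:
    """Return True if an OpenAI-compatible server answers at base_url
    and reports at least one loaded model."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=2) as resp:
            data = json.load(resp)
            # Loaded models are expected under the "data" key,
            # mirroring the OpenAI API that LM Studio follows.
            return bool(data.get("data"))
    except (urllib.error.URLError, OSError, ValueError):
        # Server off, port closed, or non-JSON reply.
        return False


if __name__ == "__main__":
    print("LM Studio reachable:", lmstudio_ready())
```

If this prints `False`, start the Local Server in LM Studio and load a model before launching the extension.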
- Open Visual Studio.
- Go to `Extensions > Manage Extensions`.
- Search for LMLocal and click Download.
- Restart Visual Studio to complete the installation.
- Download the `.vsix` file from the Marketplace.
- Double-click the file and follow the VSIX Installer prompts.
- Launch: Open the LMLocal tool window from the Extensions menu.
- Connect: The extension automatically attempts to connect to `http://127.0.0.1:1234`.
  - If the connection fails, a ↻ (Refresh) icon appears in the header for manual retry.
- Context (Optional): Click the `+` button to attach text from your active window.
- Chat: Type your message and click Send or hit `Enter` ⌨️.
- Monitor: During generation, check the bottom status bar for live performance metrics (tokens and speed).
- Control:
- Use Stop ⏹️ to cancel a generation.
- Use Clear 🗑️ to wipe the current session history.
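For the curious, the chat flow above can be sketched as a streaming request against LM Studio's OpenAI-compatible `/v1/chat/completions` endpoint. This is a hedged illustration of the protocol, not the extension's actual implementation; the `BASE_URL`, payload fields, and SSE framing follow the OpenAI API that LM Studio mirrors.

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:1234"  # LM Studio's default local server address


def build_chat_request(messages, stream=True):
    """Build an OpenAI-style chat-completions payload.

    With stream=True the server delivers tokens incrementally as
    server-sent events, which is what enables real-time display.
    """
    return {"messages": messages, "stream": stream}


def stream_chat(messages):
    """Send the request and yield content tokens as they arrive."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(build_chat_request(messages)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            line = raw.decode().strip()
            # SSE frames look like "data: {...}"; the stream ends
            # with a literal "data: [DONE]" sentinel.
            if not line.startswith("data: ") or line == "data: [DONE]":
                continue
            chunk = json.loads(line[len("data: "):])
            delta = chunk["choices"][0]["delta"].get("content")
            if delta:
                yield delta
```

Each yielded `delta` is one fragment of the reply; printing them as they arrive reproduces the streaming behavior you see in the tool window.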
| Issue | Solution |
|---|---|
| No model shown | Ensure a model is fully loaded in the LM Studio "AI Chat" or "Server" tab. |
| Connection Error | Check if the LM Studio Server is ON at http://127.0.0.1:1234. Click ↻ to retry. |
| UI Lag | Restart the tool window or check your local machine resources (CPU/GPU). |
- License: MIT License. See LICENSE.txt for details.
- Components:
  - `marked` v18.0.0 (MIT)
  - `highlight.js` v11.9.0 (MIT) & GitHub Dark theme
Special thanks to the LM Studio team for their local inference platform and the open-source community for the libraries that make this extension possible.