Vidvat Assistant is an AI-powered sidebar for Visual Studio 2022 with:
- Chat interface
- Code generation
- Code explanation
- Model selection (Qwen3-Coder, Codestral, CodeGemma)
- Local LLM inference using Ollama
- FastAPI backend
- WebView2 UI embedded inside VS
- A top-right corner button (like Copilot) opens a floating panel.
- Four tabs: Chat, Code, Explain, Settings.
Vidvat supports local, private, GPU-accelerated models:
- qwen3-coder
- codegemma
- codestral
Models are automatically downloaded when selected.
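For illustration, the automatic download could shell out to the Ollama CLI; a minimal Python sketch (the helper names and error handling here are assumptions, not the extension's actual code):

```python
import shutil
import subprocess

def build_pull_command(model: str) -> list[str]:
    """Build the CLI invocation that downloads a model via Ollama."""
    return ["ollama", "pull", model]

def pull_model(model: str) -> None:
    """Download a model, raising early if the Ollama CLI is missing."""
    if shutil.which("ollama") is None:
        raise RuntimeError("Ollama not found; install it from https://ollama.com")
    subprocess.run(build_pull_command(model), check=True)
```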
A lightweight backend provides:
- /set_model
- /chat
- /available_models
Full source included inside backend/.
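As a sketch of how a client might call these routes, the helpers below build request URLs and JSON bodies; the port, HTTP methods, and field names (`model`, `prompt`) are assumptions, not the backend's documented contract:

```python
import json

BASE_URL = "http://127.0.0.1:8000"  # assumed default port for the FastAPI backend

def set_model_request(model: str) -> tuple[str, bytes]:
    """(url, JSON body) for POST /set_model -- the 'model' field is an assumption."""
    return (f"{BASE_URL}/set_model", json.dumps({"model": model}).encode())

def chat_request(prompt: str) -> tuple[str, bytes]:
    """(url, JSON body) for POST /chat -- the 'prompt' field is an assumption."""
    return (f"{BASE_URL}/chat", json.dumps({"prompt": prompt}).encode())

def available_models_url() -> str:
    """URL for GET /available_models."""
    return f"{BASE_URL}/available_models"
```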
VidvatAssistant/
├──              # VSIX project in the root
├── backend/     # FastAPI + Ollama server
└── README.md
Requires:
- Visual Studio 2022
- Python 3.10+
- Ollama installed (https://ollama.com)
cd backend
pip install -r requirements.txt
python main.py
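Internally, the backend presumably forwards prompts to the local Ollama server. A hedged sketch of building a non-streaming request body for Ollama's `/api/generate` endpoint (the helper itself is illustrative, not the backend's actual code):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def ollama_generate_payload(model: str, prompt: str) -> bytes:
    """JSON body for a non-streaming generate call to the local Ollama server."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
```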
Download the .vsix from Releases and install it.
Click the V button in the top-right corner of Visual Studio.
Go to Settings → Select Model.
Choosing a model will:
- Download it via ollama pull
- Save the selection in ~/.vidvat/config.json
- Use it for all chat sessions
- Visual Studio Extensibility SDK
- WebView2
- WPF
- FastAPI
- Python
- Ollama
- Docker (optional)
Please report bugs and feature requests in the Issues tab.
Public license
