AI Proxy Management Plugin for JetBrains Rider IDE
ProxyMe is a comprehensive control panel plugin that lets you manage local AI proxy servers directly from JetBrains Rider IDE. Configure multiple AI models, control which models are available to Rider's AI Assistant, and manage API keys securely.
ALPHA SOFTWARE: This plugin is in active development and contains AI-generated code (Claude Sonnet 4.5). Expect bugs and rough edges.
ONLY TESTED WITH RIDER: This plugin has only been tested with JetBrains Rider IDE. Use with other JetBrains IDEs at your own risk.
KNOWN ISSUES:
- Occasional crashes when restarting proxy (improved in v2.1.0+)
- Code needs refactoring and optimization
- Security review needed for API key handling
WE NEED YOUR HELP: This project is now community-driven. Contributions welcome for code quality, security fixes, testing, and documentation.
- Only enabled models appear in Rider AI Assistant - No more cluttered model lists!
- Enable/disable models individually
- Configure which AI models Rider can actually use
- Auto-generates configuration file for the proxy
- Temperature control (0.0 - 2.0) - Set creativity level per model
- Streaming toggle - Enable/disable real-time responses
- Custom API endpoints
- Secure API key management
- Launch, stop, and restart proxy from IDE
- Real-time status monitoring (🟢🟠🔴 LED indicator)
- Custom port and host configuration
- Auto-launch on IDE startup
- API keys stored in `~/.proxyme/` (outside the project directory) - never committed to version control
- Environment files auto-generated
- Project-specific isolation
- Save configurations as reusable templates
- Built-in presets (DeepSeek, Perplexity, Claude)
- Import/export templates
- Quick setup for common scenarios
- View logs directly in Rider Terminal
- Dedicated log files in `~/.proxyme/logs/`
- Health check endpoints
- Debug and error tracking
Requirements:
- JetBrains Rider 2024.3 or later
- Node.js v18 or later
- Java 17+ (for building from source)
Installation:
1. Download the latest release:
   - Visit Releases
   - Download `ProxyMe-2.1.0.zip`
2. Install in Rider:
   - File → Settings → Plugins → ⚙️ → Install Plugin from Disk...
   - Select the downloaded ZIP file
   - Click OK and restart Rider
3. Verify installation:
   - Check the Tools menu for ProxyMe
Step 1: Install Dependencies
The main settings panel shows dependency status. Click "Reinstall" if needed to install Node.js dependencies.
Step 2: Configure Log Directory
Configure your local user directory for logs and set up the proxy server settings. The model configuration table shows your enabled AI models.
Step 3: Use Prebuilt Templates
Load prebuilt templates for quick setup or create your own custom templates. Preview shows the configuration before loading.
Step 4: Add Your Own Model (BYOK)
Bring Your Own Key (BYOK) - Add custom AI models with your API keys. Configure temperature and streaming options per model.
Step 5: Configure Rider AI Assistant
Configure Rider's AI Assistant to connect to ProxyMe proxy server at http://localhost:3000/v1. Click "Test Connection" to refresh available models.
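Once connected, Rider talks to the proxy using the OpenAI Chat Completions wire format. A sketch of the request shape it sends; the model name and prompt are placeholders:

```javascript
// Illustrative OpenAI-compatible request body, as Rider's AI Assistant
// would send it to the local proxy. Model name and prompt are placeholders.
const endpoint = 'http://localhost:3000/v1/chat/completions';

const request = {
  model: 'deepseek-chat',   // must be one of the models enabled in ProxyMe
  temperature: 0.3,         // per-model value configured in ProxyMe
  stream: true,             // streaming toggle configured in ProxyMe
  messages: [{ role: 'user', content: 'Explain this C# method.' }],
};

// No Authorization header is needed from Rider: the proxy injects the
// provider API key, which is why the API Key field is left empty.
console.log(endpoint, request.model);
```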
See BUILD.md for detailed instructions.
```shell
git clone https://github.com/native-apps/proxyme.git
cd proxyme
./gradlew buildPlugin
```
Quick Start:
1. Open ProxyMe Settings: Tools → ProxyMe
2. Add your first AI model:
   - Click Add Model
   - Enter model details (name, provider, endpoint, API key)
   - Set temperature (recommended: 0.3 for coding)
   - Enable streaming
   - Click OK
3. Enable the model:
   - Check the box in the Enabled column
   - Click Save
4. Launch the proxy:
   - Tools → ProxyMe → Launch Proxy Server
   - Status indicator turns green 🟢
5. Configure Rider AI Assistant:
   - Settings → Tools → AI Assistant → Models
   - Provider: OpenAI API
   - URL: http://localhost:3000/v1
   - API Key: (leave empty)
   - Click Test Connection
6. Assign models to features:
   - Core features: Select your preferred model
   - Instant helpers: Select a fast, focused model
   - Completion model: Select a precise model
Done! Start using AI features in Rider.
Temperature controls response creativity:
- 0.1 - 0.3 → Focused, precise, deterministic (recommended for code)
- 0.4 - 0.7 → Balanced, good for general chat
- 0.8 - 2.0 → Creative, exploratory (use sparingly)
Default: 0.3 for coding tasks
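The 0.0 - 2.0 range and the 0.3 default can be expressed as a small clamp; the helper name here is hypothetical, not part of ProxyMe's API:

```javascript
// ProxyMe exposes temperature in the 0.0 - 2.0 range; this (hypothetical)
// helper clamps out-of-range values and falls back to the coding default.
function normalizeTemperature(value, fallback = 0.3) {
  if (typeof value !== 'number' || Number.isNaN(value)) return fallback;
  return Math.min(2.0, Math.max(0.0, value));
}

console.log(normalizeTemperature(0.3)); // 0.3
console.log(normalizeTemperature(5));   // clamped to 2
console.log(normalizeTemperature(-1));  // clamped to 0
```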
| Feature | Recommended Model | Temperature | Why |
|---|---|---|---|
| Core features | deepseek-chat or sonar | 0.3 - 0.5 | Main coding and chat |
| Instant helpers | deepseek-chat | 0.1 - 0.3 | Quick edits, precise |
| Completion | deepseek-chat | 0.2 - 0.3 | Inline completion |
Avoid:
- ❌ Search models (Sonar) for Quick Edit - tends to edit multiple files
- ❌ High temperatures (>0.7) for code - causes unfocused edits
- ❌ Reasoning models for simple tasks - overkill and slower
- Installation Guide - Detailed installation steps
- Build Instructions - How to build from source
- Troubleshooting - Common issues and solutions
- Contributing Guidelines - How to contribute
- Roadmap - Future plans and versions
- Full Documentation - Complete documentation library
```
┌─────────────────────────────────────────┐
│         Rider IDE with ProxyMe          │
│                                         │
│  ┌───────────────────────────────────┐  │
│  │ ProxyMe Plugin (Settings UI)      │  │
│  │ - Configure models                │  │
│  │ - Manage API keys                 │  │
│  │ - Control proxy lifecycle         │  │
│  └──────────────┬────────────────────┘  │
│                 │                       │
│                 │ generates             │
│                 ▼                       │
│  ┌─────────────────────────────┐        │
│  │ ~/.proxyme/proxy/           │        │
│  │ - models.json               │        │
│  │ - .env (API keys)           │        │
│  └──────────────┬──────────────┘        │
└─────────────────┼───────────────────────┘
                  │
                  │ launches
                  ▼
      ┌───────────────────────┐
      │ Node.js Proxy         │
      │ (localhost:3000)      │
      │ - Reads models.json   │
      │ - Loads API keys      │
      │ - Handles requests    │
      └──────────┬────────────┘
                 │
                 │ forwards to
                 ▼
     ┌─────────────────────────┐
     │ AI Provider APIs        │
     │ - DeepSeek              │
     │ - Perplexity            │
     │ - Anthropic             │
     │ - OpenAI                │
     │ - Custom providers      │
     └─────────────────────────┘
```
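The "forwards to" step in the diagram can be sketched as a lookup from the requested model name to its upstream endpoint and API key. The `models.json` shape and the endpoint URLs below are illustrative, not ProxyMe's exact schema:

```javascript
// Sketch of the proxy's forwarding step: resolve a requested model name
// to its upstream endpoint and API-key variable. The models.json shape
// and endpoint URLs are illustrative assumptions.
const models = {
  'deepseek-chat': { endpoint: 'https://api.deepseek.com', keyEnv: 'DEEPSEEK_API_KEY' },
  'sonar': { endpoint: 'https://api.perplexity.ai', keyEnv: 'PERPLEXITY_API_KEY' },
};

function resolveUpstream(modelName) {
  const entry = models[modelName];
  if (!entry) throw new Error(`Model not enabled: ${modelName}`);
  return {
    url: `${entry.endpoint}/chat/completions`,
    apiKey: process.env[entry.keyEnv], // loaded from ~/.proxyme/proxy/.env
  };
}

console.log(resolveUpstream('deepseek-chat').url);
```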
ProxyMe's Role:
- Controls which models are available
- Manages API keys and endpoints
- Configures temperature and streaming
- Generates `models.json` for the proxy
Rider AI Assistant's Role:
- Controls which models are assigned to features
- Decides which model for chat vs. completion vs. helpers
- Native Rider UI and functionality
Clear Separation:
- Configure models in ProxyMe → Save → Generates config
- Restart proxy → Loads only enabled models
- Assign models in Rider AI Assistant → Use in IDE
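The "Save → Generates config" step above amounts to filtering the configured models down to the enabled ones before writing `models.json`. A sketch; the field names are assumptions, not ProxyMe's exact schema:

```javascript
// Only enabled models are written to models.json, which is why disabled
// models never appear in Rider's AI Assistant. Field names are assumed.
function buildModelsConfig(allModels) {
  return allModels
    .filter((m) => m.enabled)
    .map(({ name, provider, endpoint, temperature, streaming }) =>
      ({ name, provider, endpoint, temperature, streaming }));
}

const config = buildModelsConfig([
  { name: 'deepseek-chat', provider: 'deepseek', endpoint: 'https://api.deepseek.com',
    temperature: 0.3, streaming: true, enabled: true },
  { name: 'gpt-4o', provider: 'openai', endpoint: 'https://api.openai.com/v1',
    temperature: 0.5, streaming: true, enabled: false },
]);
console.log(JSON.stringify(config.map((m) => m.name))); // ["deepseek-chat"]
```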
ProxyMe stores configuration in your home directory (never in project files):
```
~/.proxyme/
├── proxy/
│   ├── .env          # API keys (NEVER committed)
│   ├── models.json   # Enabled models configuration
│   ├── proxy.js      # Proxy server code
│   └── package.json  # Node.js dependencies
├── logs/
│   └── proxyme.log   # Application logs
└── templates/
    ├── presets/      # Built-in templates
    └── user/         # Your custom templates
```
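For illustration, the `.env` file holds one key per provider; the variable names and key prefixes shown here are hypothetical:

```
# ~/.proxyme/proxy/.env (never commit this file)
DEEPSEEK_API_KEY=sk-xxxxxxxx
PERPLEXITY_API_KEY=pplx-xxxxxxxx
```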
- DeepSeek - Fast, efficient coding models
- Perplexity - Search-augmented AI (Sonar models)
- Anthropic - Claude models (3.5 Sonnet, Opus, etc.)
- OpenAI - GPT models (requires API key)
- Custom - Any OpenAI-compatible API
We need your help! This project contains AI-generated code and needs:
- 🐛 Bug fixes and stability improvements
- 🔒 Security review and hardening
- 🧪 Testing and quality assurance
- 📖 Documentation improvements
- ✨ New features and enhancements
- 🧹 Code refactoring and cleanup
See CONTRIBUTING.md for guidelines.
High Priority:
- Security review of API key handling
- Error handling and recovery
- Testing (unit and integration tests)
- Performance optimization
- Stability fixes
Code Quality:
- Refactor AI-generated code
- Add comprehensive comments
- Improve type safety
- Better logging
Features:
- Support for more AI providers
- Expanded preset library
- UI/UX improvements
- Better health monitoring
- 🐛 Bug Reports: GitHub Issues
- 💬 Discussions: GitHub Discussions
- 📖 Documentation: Full Docs
- 🔧 Troubleshooting: Guide
See ROADMAP.md for detailed plans.
Upcoming:
- Stability and bug fixes
- Security improvements
- Testing with other JetBrains IDEs
- More AI provider integrations
- UI/UX enhancements
- Plugin marketplace release
MIT License - See LICENSE file for details.
- Created with assistance from Claude Sonnet 4.5 (yes, AI helped build an AI tool!)
- Built for the JetBrains Rider community
- Inspired by the need for better AI model management in IDEs
Use at your own risk. This is alpha software with known issues. Always:
- Back up your work before using
- Review generated code carefully
- Test in non-production environments first
- Keep API keys secure
- Monitor usage and costs
Ready to get started?
📦 Download the latest release | 📖 Read the docs | 🤝 Contribute
Made with ❤️ by the community, for the community




