A VS Code-like IDE powered by an LLM Council (inspired by Andrej Karpathy's llm-council). Instead of using a single LLM, this IDE uses a council of multiple LLMs that review and rank each other's responses, with a Chairman LLM synthesizing the final answer.
- First Opinions: All council LLMs provide initial responses
- Review: Each LLM reviews and ranks other responses
- Final Response: Chairman LLM synthesizes the best insights
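The three stages above can be sketched roughly as follows. This is a minimal illustration, not the backend's actual implementation: `ask_model` stands in for a real OpenRouter chat-completion call and is injected as any `callable(model, prompt) -> str`.

```python
# Minimal sketch of the council flow. `ask_model` is a stand-in for a real
# OpenRouter call; the prompt wording here is illustrative only.

def run_council(question, council_models, chairman_model, ask_model):
    # Stage 1: First Opinions - every council model answers independently
    opinions = {m: ask_model(m, question) for m in council_models}

    # Stage 2: Review - each model ranks the (anonymized) answers of the others
    reviews = {}
    for m in council_models:
        others = "\n\n".join(
            f"Response {i + 1}:\n{ans}"
            for i, (name, ans) in enumerate(opinions.items())
            if name != m
        )
        reviews[m] = ask_model(
            m, f"Rank these responses to: {question}\n\n{others}"
        )

    # Stage 3: Final Response - the Chairman synthesizes opinions and reviews
    answers = "\n\n".join(f"{m}:\n{a}" for m, a in opinions.items())
    critiques = "\n\n".join(f"{m}:\n{r}" for m, r in reviews.items())
    return ask_model(
        chairman_model,
        f"Question: {question}\n\nAnswers:\n{answers}\n\n"
        f"Reviews:\n{critiques}\n\nSynthesize the best final answer.",
    )
```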
- Python 3.10+
- Node.js 18+ and npm
- OpenRouter API key
1. Clone or download this repository

2. Set up the backend:

   ```bash
   # Install Python dependencies (using uv or pip)
   pip install -r requirements.txt
   # OR if you have uv:
   uv sync
   ```
3. Set up the frontend:

   ```bash
   cd frontend
   npm install
   cd ..
   ```
4. Configure your API key:

   ```bash
   # Copy the example env file
   cp env.example .env
   # Edit .env and add your OpenRouter API key
   # OPENROUTER_API_KEY=sk-or-v1-your-actual-key-here

   # Optional: Adjust max tokens based on your credits
   # Lower values = fewer credits needed, but shorter responses
   # MAX_TOKENS=256    # Default (works with low credits)
   # MAX_TOKENS=512    # Medium
   # MAX_TOKENS=2048   # High (needs more credits)
   ```
5. Configure models (important):

   You must verify model names are available on OpenRouter!

   - Visit https://openrouter.ai/models to see available models
   - Check the exact model ID format (e.g., `openai/gpt-4o`, `anthropic/claude-3.5-sonnet`)
   - Edit `backend/config.py` to use models that exist:

   ```python
   COUNCIL_MODELS = [
       "openai/gpt-4o",                # Verify this exists
       "anthropic/claude-3.5-sonnet",  # Verify this exists
       "openai/gpt-3.5-turbo",         # Fallback option
   ]
   CHAIRMAN_MODEL = "openai/gpt-4o"    # Use a reliable model
   ```
Note: Model names change frequently. If you get 404 errors, check OpenRouter's model list and update the config.
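One way to catch stale model IDs before starting the backend is to diff your config against OpenRouter's public model list. A sketch, assuming the response shape documented by OpenRouter (`{"data": [{"id": ...}, ...]}` from `GET /api/v1/models`); the model IDs shown are examples:

```python
import json
import urllib.request


def missing_models(configured, available_ids):
    """Return configured model IDs absent from OpenRouter's list."""
    available = set(available_ids)
    return [m for m in configured if m not in available]


def fetch_openrouter_model_ids():
    # Public, keyless endpoint; field names assumed per OpenRouter's docs
    with urllib.request.urlopen("https://openrouter.ai/api/v1/models") as resp:
        payload = json.load(resp)
    return [entry["id"] for entry in payload["data"]]


if __name__ == "__main__":
    configured = ["openai/gpt-4o", "anthropic/claude-3.5-sonnet"]
    stale = missing_models(configured, fetch_openrouter_model_ids())
    print("Stale model IDs:", stale or "none")
```

Any ID this prints as stale is a likely source of the 404 errors mentioned above.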
```bash
chmod +x start.sh
./start.sh
```

Terminal 1 (Backend):
```bash
# Load environment variables and run backend
export $(cat .env | xargs)
python -m backend.main
# OR with uv:
uv run python -m backend.main
```

Terminal 2 (Frontend):
```bash
cd frontend
npm run dev
```

Then open http://localhost:5173 in your browser.
1. Chat with LLM Council:
   - Click the chat panel on the right (or the chat button if hidden)
   - Type your question and press Cmd/Ctrl + Enter to send
   - The council will discuss and provide a final answer
   - You can view individual LLM responses by clicking the tabs
2. View Individual Responses:
   - After receiving a response, click on the model tabs to see what each LLM said
   - The "Final Response" tab shows the Chairman's synthesized answer
```
better-cursor/
├── backend/
│   ├── __init__.py
│   ├── config.py          # Configuration (models, API keys)
│   └── main.py            # FastAPI backend with LLM Council logic
├── frontend/
│   ├── src/
│   │   ├── App.jsx        # Main React component
│   │   ├── App.css        # Styles
│   │   ├── main.jsx       # React entry point
│   │   └── index.css      # Global styles
│   ├── index.html
│   ├── package.json
│   └── vite.config.js
├── data/
│   └── conversations/     # Saved conversations (auto-created)
├── .env                   # Your API keys (create from env.example)
├── env.example            # Example environment file
├── pyproject.toml         # Python dependencies
├── README.md
└── start.sh               # Startup script
```
- Backend: FastAPI (Python), OpenRouter API, async httpx
- Frontend: React + Vite, Monaco Editor, react-markdown
- Storage: JSON files in `data/conversations/`
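Because conversations are plain JSON files, they are easy to inspect or post-process with standard tools. A minimal sketch of that storage pattern, assuming one file per conversation; the field names and helper functions here are illustrative, not the backend's exact schema:

```python
import json
from pathlib import Path

# Matches the project layout; auto-created on first save
DATA_DIR = Path("data/conversations")


def save_conversation(conv_id, messages, data_dir=DATA_DIR):
    # One JSON file per conversation, e.g. data/conversations/<id>.json
    data_dir.mkdir(parents=True, exist_ok=True)
    path = data_dir / f"{conv_id}.json"
    path.write_text(json.dumps({"id": conv_id, "messages": messages}, indent=2))
    return path


def load_conversation(conv_id, data_dir=DATA_DIR):
    return json.loads((data_dir / f"{conv_id}.json").read_text())
```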
Edit `backend/config.py`:

```python
COUNCIL_MODELS = [
    "openai/gpt-4o",
    "google/gemini-2.0-flash-exp",
    # Add your preferred models
]
```

Edit `backend/config.py`:

```python
CHAIRMAN_MODEL = "google/gemini-2.0-flash-exp"
```

- Backend: Edit `backend/config.py` (default: 8000)
- Frontend: Edit `frontend/vite.config.js` (default: 5173)
- Make sure you have credits on OpenRouter or have automatic top-up enabled
- Conversations are saved in `data/conversations/` as JSON files
- The backend loads environment variables from the `.env` file
This project is provided as-is for inspiration and learning purposes.
