Open-source AI code review for GitHub PRs. Works with any LLM provider.
- 🔓 Open Source - Self-hostable, no vendor lock-in
- 🔑 BYOK - Bring Your Own API Key
- 🌐 Multi-Provider - OpenAI, Groq, Mistral, DeepSeek, Gemini, and more
- ⚡ Token-Efficient - 50-70% cost reduction with TOON encoding
Works with any OpenAI-compatible API:
| Provider | Free Tier? | Base URL |
|---|---|---|
| OpenAI | No | (default) |
| Groq | ✅ Yes | https://api.groq.com/openai/v1 |
| DeepSeek | ✅ Yes | https://api.deepseek.com/v1 |
| Mistral | ✅ Yes | https://api.mistral.ai/v1 |
| Together AI | ✅ Yes | https://api.together.xyz/v1 |
| Fireworks | ✅ Yes | https://api.fireworks.ai/inference/v1 |
| OpenRouter | No | https://openrouter.ai/api/v1 |
| Google Gemini | ✅ Yes | https://generativelanguage.googleapis.com/v1beta/openai |
Pick any provider above. For free options, try Groq or DeepSeek.
Go to: Settings → Secrets → Actions → New repository secret
- Name: `LLM_API_KEY`
- Value: your provider's API key
Create `.github/workflows/ai-review.yml`:

```yaml
name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize]

permissions:
  contents: read
  pull-requests: write

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: tusharshah21/ai-code-reviewer@main
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          LLM_API_KEY: ${{ secrets.LLM_API_KEY }}
          LLM_MODEL: "gpt-4o"
```

That's it! PRs will now get AI reviews.
Only the settings under `with:` change per provider:

OpenAI:
```yaml
LLM_API_KEY: ${{ secrets.OPENAI_API_KEY }}
LLM_MODEL: "gpt-4o"
```

Groq:
```yaml
LLM_API_KEY: ${{ secrets.GROQ_API_KEY }}
LLM_MODEL: "llama-3.3-70b-versatile"
LLM_BASE_URL: "https://api.groq.com/openai/v1"
```

DeepSeek:
```yaml
LLM_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
LLM_MODEL: "deepseek-chat"
LLM_BASE_URL: "https://api.deepseek.com/v1"
```

Mistral:
```yaml
LLM_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
LLM_MODEL: "mistral-large-latest"
LLM_BASE_URL: "https://api.mistral.ai/v1"
```

Google Gemini:
```yaml
LLM_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
LLM_MODEL: "gemini-1.5-flash"
LLM_BASE_URL: "https://generativelanguage.googleapis.com/v1beta/openai"
```

Together AI:
```yaml
LLM_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
LLM_MODEL: "meta-llama/Llama-3.3-70B-Instruct-Turbo"
LLM_BASE_URL: "https://api.together.xyz/v1"
```

OpenRouter:
```yaml
LLM_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
LLM_MODEL: "anthropic/claude-3.5-sonnet"
LLM_BASE_URL: "https://openrouter.ai/api/v1"
```

| Input | Required | Default | Description |
|---|---|---|---|
| `GITHUB_TOKEN` | Yes | - | Auto-provided by GitHub |
| `LLM_API_KEY` | Yes | - | Your provider's API key |
| `LLM_MODEL` | No | `gpt-4o` | Model name |
| `LLM_BASE_URL` | No | OpenAI | Provider's API endpoint |
| `exclude` | No | - | Files to skip (glob patterns; see example below) |
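
Putting the optional inputs together, a non-OpenAI setup that also skips generated files might look like this (the comma-separated glob format for `exclude` is an assumption; check the action's documentation for the exact syntax):

```yaml
- uses: tusharshah21/ai-code-reviewer@main
  with:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    LLM_API_KEY: ${{ secrets.GROQ_API_KEY }}
    LLM_MODEL: "llama-3.3-70b-versatile"
    LLM_BASE_URL: "https://api.groq.com/openai/v1"
    # Assumed format: comma-separated glob patterns of files to skip
    exclude: "**/*.lock,dist/**,**/*.min.js"
```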
PR Opened → Fetch Diff → TOON Encode → LLM Review → Post Comments
- Triggers on PR open/update (the trigger events are standard GitHub Actions settings; see the snippet after this list)
- Fetches code diff from GitHub
- Encodes diff into token-efficient TOON format
- Sends to your chosen LLM
- Posts inline review comments
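
For example, to also re-review when a draft PR is marked ready or a closed PR is reopened, you can extend the workflow's trigger with standard `pull_request` activity types (a sketch of plain GitHub Actions configuration, not a feature specific to this action):

```yaml
on:
  pull_request:
    # Default pair plus two extra activity types
    types: [opened, synchronize, reopened, ready_for_review]
```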
TOON encoding reduces token usage by 50-70%. Approximate cost to review a 1,000-line diff:
| Provider | Model | Cost/Review |
|---|---|---|
| Groq | Llama 3.3 70B | FREE |
| DeepSeek | DeepSeek Chat | ~$0.001 |
| OpenAI | GPT-4o | ~$0.02 |
| OpenAI | GPT-4o-mini | ~$0.002 |
MIT - Free and open source