AI Code Reviewer

Open-source AI code review for GitHub PRs. Works with any LLM provider.

Features

  • 🔓 Open Source - Self-hostable, no vendor lock-in
  • 🔑 BYOK - Bring Your Own API Key
  • 🌐 Multi-Provider - OpenAI, Groq, Mistral, DeepSeek, Gemini, and more
  • Token-Efficient - TOON encoding cuts prompt tokens (and cost) by 50-70%

Supported Providers

Works with any OpenAI-compatible API:

| Provider | Free Tier? | Base URL |
|---|---|---|
| OpenAI | No | (default) |
| Groq | ✅ Yes | https://api.groq.com/openai/v1 |
| DeepSeek | ✅ Yes | https://api.deepseek.com/v1 |
| Mistral | ✅ Yes | https://api.mistral.ai/v1 |
| Together AI | ✅ Yes | https://api.together.xyz/v1 |
| Fireworks | ✅ Yes | https://api.fireworks.ai/inference/v1 |
| OpenRouter | No | https://openrouter.ai/api/v1 |
| Google Gemini | ✅ Yes | https://generativelanguage.googleapis.com/v1beta/openai |

Quick Start

1. Get an API Key

Pick any provider above. For free options, try Groq or DeepSeek.

2. Add Secret to Your Repo

Go to: Settings → Secrets and variables → Actions → New repository secret

  • Name: LLM_API_KEY
  • Value: Your API key

3. Create Workflow

Create .github/workflows/ai-review.yml:

name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize]

permissions:
  contents: read
  pull-requests: write

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: tusharshah21/ai-code-reviewer@main
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          LLM_API_KEY: ${{ secrets.LLM_API_KEY }}
          LLM_MODEL: "gpt-4o"

That's it! PRs will now get AI reviews.


Provider Examples

OpenAI (default)

LLM_API_KEY: ${{ secrets.OPENAI_API_KEY }}
LLM_MODEL: "gpt-4o"

Groq (FREE & Fast)

LLM_API_KEY: ${{ secrets.GROQ_API_KEY }}
LLM_MODEL: "llama-3.3-70b-versatile"
LLM_BASE_URL: "https://api.groq.com/openai/v1"

DeepSeek (FREE & Cheap)

LLM_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
LLM_MODEL: "deepseek-chat"
LLM_BASE_URL: "https://api.deepseek.com/v1"

Mistral AI

LLM_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
LLM_MODEL: "mistral-large-latest"
LLM_BASE_URL: "https://api.mistral.ai/v1"

Google Gemini

LLM_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
LLM_MODEL: "gemini-1.5-flash"
LLM_BASE_URL: "https://generativelanguage.googleapis.com/v1beta/openai"

Together AI

LLM_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
LLM_MODEL: "meta-llama/Llama-3.3-70B-Instruct-Turbo"
LLM_BASE_URL: "https://api.together.xyz/v1"

OpenRouter (100+ models)

LLM_API_KEY: ${{ secrets.OPENROUTER_API_KEY }}
LLM_MODEL: "anthropic/claude-3.5-sonnet"
LLM_BASE_URL: "https://openrouter.ai/api/v1"

Configuration

| Input | Required | Default | Description |
|---|---|---|---|
| GITHUB_TOKEN | Yes | - | Auto-provided by GitHub |
| LLM_API_KEY | Yes | - | Your provider's API key |
| LLM_MODEL | No | gpt-4o | Model name |
| LLM_BASE_URL | No | OpenAI | Provider's API endpoint |
| exclude | No | - | Files to skip (glob patterns) |
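
For example, to skip documentation and lockfiles, pass glob patterns to exclude. A sketch, assuming exclude accepts a comma-separated list of patterns (check the action's input docs if these files still get reviewed):

      - uses: tusharshah21/ai-code-reviewer@main
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          LLM_API_KEY: ${{ secrets.LLM_API_KEY }}
          LLM_MODEL: "gpt-4o"
          exclude: "**/*.md,**/package-lock.json"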

How It Works

PR Opened → Fetch Diff → TOON Encode → LLM Review → Post Comments
  1. Triggers on PR open/update
  2. Fetches code diff from GitHub
  3. Encodes the diff into the token-efficient TOON format (sketched after this list)
  4. Sends to your chosen LLM
  5. Posts inline review comments
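
The exact payload the action sends is not shown here, but the general idea of TOON (Token-Oriented Object Notation) is to declare field names once and list values as rows, rather than repeating every JSON key for every object. A rough, hypothetical illustration (not this action's actual encoding; the file names and fields are made up):

  Plain JSON repeats the keys for every entry:
    [{"file": "src/app.py", "hunk": 1, "added": 12, "removed": 3},
     {"file": "src/db.py", "hunk": 2, "added": 4, "removed": 9}]

  The same data in TOON-style tabular form:
    changes[2]{file,hunk,added,removed}:
      src/app.py,1,12,3
      src/db.py,2,4,9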

Cost Comparison

TOON encoding saves 50-70% of prompt tokens. Approximate cost to review a 1,000-line diff:

| Provider | Model | Cost/Review |
|---|---|---|
| Groq | Llama 3.3 70B | FREE |
| DeepSeek | DeepSeek Chat | ~$0.001 |
| OpenAI | GPT-4o | ~$0.02 |
| OpenAI | GPT-4o-mini | ~$0.002 |

License

MIT - Free and open source
