More than just a `.patch` reader. AI Review Agent is an autonomous, context-aware code review assistant designed natively for GitLab Merge Requests. It clones your repository, reads the codebase, checks historical conventions, and engages in technical debates, just like a Senior Developer.
Most open-source AI reviewers simply pipe your git diff into an LLM and spit out generic advice. We do things differently:
- 🧠 True Contextual Awareness: We don't just read the diff. The agent is equipped with tools (`read_file`, `search_code`, `multi_diff`) to explore the actual codebase. If you modify a function signature, it can search for every place it's used before commenting. No more "hallucinated" bugs.
- ♻️ Self-Improving Feedback Loop: The system learns your project's conventions. A background Cron job consolidates historical AI reviews and human feedback into a tailored "Repository Best Practices" rulebook.
- 🔀 Robust Multi-LLM Routing: Avoid vendor lock-in. Natively supports OpenAI (GPT-4o), Anthropic (Claude 3.7), and Google (Gemini 2.0) with load-balancing and fallback mechanisms. Mix and match models based on cost, rate limits, or language proficiency.
- 💻 Interactive Local CLI: Don't want to spam your team with AI comments? Run the agent locally via CLI. It performs a dry-run review, displays the findings in your terminal, and lets you interactively select exactly which comments to push to GitLab.
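The tool-driven exploration above can be sketched as a minimal Go type. The tool names (`read_file`, `search_code`) come from this README, but the signatures, the `Tool` struct, and the in-memory repository are illustrative assumptions, not the project's actual API:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Tool is one capability the review agent can invoke mid-review.
// Illustrative shape only; the real agent wires tools into the LLM loop.
type Tool struct {
	Name string
	Run  func(arg string) string
}

// newSearchCode builds a toy search_code tool over an in-memory
// "repository" mapping file paths to file contents.
func newSearchCode(repo map[string]string) Tool {
	return Tool{
		Name: "search_code",
		Run: func(query string) string {
			var hits []string
			for path, src := range repo {
				if strings.Contains(src, query) {
					hits = append(hits, path)
				}
			}
			sort.Strings(hits) // deterministic output order
			if len(hits) == 0 {
				return "no matches"
			}
			return strings.Join(hits, ", ")
		},
	}
}

func main() {
	repo := map[string]string{
		"cmd/server/main.go":      "func main() { StartServer() }",
		"internal/core/review.go": "func StartServer() {}",
	}
	search := newSearchCode(repo)
	// Before flagging a signature change, the agent can check call sites.
	fmt.Println(search.Run("StartServer"))
	// → cmd/server/main.go, internal/core/review.go
}
```

This is the difference from diff-only reviewers: the model can ask "where is this used?" and get a grounded answer before it comments.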
- Trigger & Initialization: A webhook catches a Merge Request event (create/update). The agent checks whether the MR carries the configured review trigger label (default: `ai-review`). If the label is present, a job is queued asynchronously and the repository-specific config (frameworks, model overrides) is loaded. A manual CLI trigger skips the label check and always proceeds.
- Smart Git Synchronization: The agent acquires a lock and shallow-fetches the target branch. It calculates a smart base SHA so that only incremental new commits are processed if the MR was reviewed previously, preventing noisy duplicate comments.
- Risk Scoring & Parsing: Modified files are scored for risk. Heavily modified or complex files are pre-loaded directly into the LLM context. Massive MRs (>150 files) are safely truncated and sampled by risk to protect your context window.
- Context Gathering (The Secret Sauce): External data is fetched:
- Repository Settings: Known frameworks/languages.
- Discussion History: Previous unresolved AI comments (it can auto-resolve them if the developer fixed the code!).
- Feedback Rules: Historical lessons learned specific to this repo.
- Agentic Code Analysis: The LLM runs in an agentic loop. Over multiple iterations, it navigates the codebase using tools (`read_file`, `search_code`) and verifies its assumptions against real code before drafting a comment.
- Publish & Auto-Resolution: Validated, structured comments are pushed as inline GitLab discussions. If a developer modified lines overlapping with a previous AI comment, the agent automatically recognizes the fix and resolves the old thread.
- Reply Loop: Developers can reply directly to the AI's thread in GitLab. A specialized `Replier Agent` wakes up, reads the thread history plus surrounding code context, and continues the technical debate.
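The agentic-loop step above can be condensed into a short Go sketch. Everything here (the `step` shape, `runAgent`, the scripted model) is invented for illustration; the real pipeline drives an actual LLM with tool-calling:

```go
package main

import "fmt"

// step is one LLM turn: either a tool request or a final comment.
type step struct {
	toolCall string // non-empty => the model wants to run a tool
	comment  string // non-empty => the model has a finding ready
}

// runAgent drives a bounded agentic loop: call the model, execute any
// requested tool, feed the observation back, stop on a final comment.
func runAgent(model func(history []string) step, tools map[string]func(string) string, maxIters int) string {
	var history []string
	for i := 0; i < maxIters; i++ {
		s := model(history)
		if s.comment != "" {
			return s.comment
		}
		history = append(history, tools[s.toolCall](s.toolCall))
	}
	return "review aborted: iteration budget exhausted"
}

func main() {
	// Scripted stand-in for the LLM: first verify an assumption, then comment.
	model := func(history []string) step {
		if len(history) == 0 {
			return step{toolCall: "read_file"}
		}
		return step{comment: "nil check missing in ParseDiff"}
	}
	tools := map[string]func(string) string{
		"read_file": func(string) string { return "func ParseDiff(d *Diff) {...}" },
	}
	fmt.Println(runAgent(model, tools, 8))
	// → nil check missing in ParseDiff
}
```

The iteration cap is the important design choice: it keeps an exploratory model from burning unbounded tokens on a single MR.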
- Go 1.25.5+
- A GitLab instance (or gitlab.com)
- Access Tokens: GitLab (Personal/Project Access Token) and at least one LLM Provider (OpenAI, Anthropic, Google).
The easiest way to get the agent running without managing Go environments is using Docker Compose.
```shell
git clone https://github.com/antlss/gitlab-review-agent.git
cd gitlab-review-agent

# Configure your environment
cp .env.example .env
# Edit .env to set your LLM tokens and GitLab credentials

# Start the server in the background
docker-compose up -d
```

Your webhook server will be live on port 8080. You can execute the interactive CLI directly inside the running container:
```shell
docker exec -it ai_review_agent ./cli review --project-id 123 --mr-id 45
```

To build and run from source instead:

```shell
git clone https://github.com/antlss/gitlab-review-agent.git
cd gitlab-review-agent

# Build the server and CLI binaries
go build -o server ./cmd/server
go build -o cli ./cmd/cli

# Configure your environment
cp .env.example .env
```

Edit `.env` to define your `GITLAB_BASE_URL`, `GITLAB_TOKEN`, `STORE_DRIVER` (`file` or `sqlite` are the easiest to start with), and your preferred `LLM_DEFAULT_PROVIDER`.
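A minimal `.env` sketch is shown below. The first four variable names come from this README; `OPENAI_API_KEY` and all values are assumptions for illustration, so check `.env.example` for the exact key names your providers need:

```shell
# .env (illustrative values only)
GITLAB_BASE_URL=https://gitlab.example.com
GITLAB_TOKEN=glpat-xxxx
STORE_DRIVER=file
LLM_DEFAULT_PROVIDER=openai
OPENAI_API_KEY=sk-xxxx
```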
Start the webhook handler and background worker pool:
```shell
./server
```

Point your GitLab Project Webhook to `http://<your-server>:8080/webhook/gitlab`.
By default the agent only reviews Merge Requests that carry the `ai-review` label. Add the label to an MR in GitLab and the next open/update webhook event will trigger a review automatically.
You can change the label name via the `REVIEW_TRIGGER_LABEL` environment variable:
```shell
# .env
REVIEW_TRIGGER_LABEL=ai-review  # default; change to any GitLab label you prefer
```

Note: The CLI (`./cli review`) bypasses the label check entirely; it always performs a review as long as the configured GitLab token has access to the target project.
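The trigger policy described above fits in a few lines of Go. This is a sketch of the rule, not the project's actual function (the name `shouldReview` is invented):

```go
package main

import "fmt"

// shouldReview applies the trigger policy: webhook-driven runs require
// the configured label; manual CLI runs always proceed.
func shouldReview(labels []string, trigger string, manualCLI bool) bool {
	if manualCLI {
		return true
	}
	for _, l := range labels {
		if l == trigger {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldReview([]string{"backend"}, "ai-review", false))  // false: label missing
	fmt.Println(shouldReview([]string{"ai-review"}, "ai-review", false)) // true: label present
	fmt.Println(shouldReview(nil, "ai-review", true))                    // true: CLI bypasses the check
}
```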
Trigger a review manually from your terminal and pick which comments to actually post:
```shell
./cli review --project-id 123 --mr-id 45
```

You can dynamically override the model for a specific run:

```shell
./cli review --project-id 123 --mr-id 45 --model claude-3-7-sonnet-20250219
```

Project layout:

- `cmd/server`: HTTP server handling GitLab webhooks, Cron jobs, and worker pools.
- `cmd/cli`: The command-line interface for interactive local reviews.
- `internal/core`: Heart of the logic (`review` and `reply` pipelines, `feedback` loops, `reviewer`/`replier` agents).
- `internal/pkg`: External port adapters (GitLab API, Git CLI wrapper, LLM drivers, SQL/File storage DAOs).
We strictly follow Standard Go Project Layout conventions and utilize Clean Architecture principles.
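One way to read the Clean Architecture split: `internal/core` defines ports (interfaces) and `internal/pkg` supplies the adapters. The interface and names below are an illustrative guess at that shape, not the project's actual types:

```go
package main

import "fmt"

// CommentPublisher is a port the core review pipeline depends on;
// the concrete GitLab adapter would live in internal/pkg.
type CommentPublisher interface {
	PublishInline(file string, line int, body string) error
}

// stubPublisher is a test double standing in for the GitLab API adapter.
type stubPublisher struct{ published []string }

func (s *stubPublisher) PublishInline(file string, line int, body string) error {
	s.published = append(s.published, fmt.Sprintf("%s:%d %s", file, line, body))
	return nil
}

// publishFindings is core logic: it only sees the port, never the adapter,
// so it can be unit-tested without touching GitLab.
func publishFindings(p CommentPublisher, findings map[string]string) error {
	for file, body := range findings {
		if err := p.PublishInline(file, 1, body); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	s := &stubPublisher{}
	publishFindings(s, map[string]string{"main.go": "consider wrapping this error"})
	fmt.Println(s.published[0])
	// → main.go:1 consider wrapping this error
}
```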
We welcome contributions! Please refer to the CONTRIBUTING.md for local development setup, coding standards, and our branch/PR workflow.
This project is licensed under the MIT License - see the LICENSE file for details.