Transform your code reviews with AI-powered intelligence
An intelligent, automated code review bot that revolutionizes your development workflow by providing instant, comprehensive analysis of every pull request. Powered by OpenAI's advanced language models, it delivers expert-level feedback on code quality, security vulnerabilities, performance optimizations, and documentation improvements - completely free and open source.
### Instant Expert Reviews
- Get comprehensive code analysis in seconds, not hours
- Catch bugs, security issues, and performance problems before they reach production
- Maintain consistent code quality across your entire team
### Security-First Approach
- Identifies potential security vulnerabilities automatically
- Validates input sanitization and authentication patterns
- Flags insecure configurations and credential exposure risks
### Performance Optimization
- Spots inefficient algorithms and data structures
- Suggests optimizations for better runtime performance
- Identifies memory leaks and resource management issues
### Documentation Excellence
- Recommends missing documentation for complex functions
- Suggests clearer variable and function naming
- Improves code readability and maintainability
### Zero Configuration Required
- Works out-of-the-box with any Bitbucket repository
- Simple webhook integration - setup in under 5 minutes
- Docker deployment for maximum portability
| Feature | Description | Impact |
|---|---|---|
| AI-Powered Analysis | Advanced GPT-based code review with context understanding | Catches issues human reviewers might miss |
| Inline Comments | Precise feedback posted directly on problematic code lines | Streamlines developer workflow |
| Real-time Integration | Automatic analysis triggered by pull request events | Zero manual intervention required |
| Security Scanning | Identifies vulnerabilities, injection risks, and auth issues | Prevents security breaches before deployment |
| Performance Insights | Spots algorithmic inefficiencies and optimization opportunities | Improves application speed and resource usage |
| Quality Metrics | Enforces coding standards and best practices | Maintains consistent codebase quality |
| Enterprise Security | Webhook signature verification and rate limiting | Production-ready security features |
| Docker Ready | One-command deployment with Docker Compose | Deploy anywhere in minutes |
**Option A: Docker (Recommended)**

```bash
# 1. Clone the repository
git clone https://github.com/torkian/ai-code-reviewer.git
cd ai-code-reviewer

# 2. Configure your environment
cp .env.example .env
# Edit .env with your API keys (see configuration section below)

# 3. Deploy with Docker
docker-compose up -d

# 4. Verify it's running
curl http://localhost:5000/test
```

**Option B: Manual installation**

```bash
# 1. Clone and set up
git clone https://github.com/torkian/ai-code-reviewer.git
cd ai-code-reviewer
pip install -r requirements.txt

# 2. Configure environment
cp .env.example .env
# Add your OpenAI API key and Bitbucket credentials

# 3. Run the application
python app.py

# 4. Test the installation
python test_installation.py
```

| Requirement | Where to Get It | Time Needed |
|---|---|---|
| OpenAI API Key | OpenAI Platform | 2 minutes |
| Bitbucket Token | Bitbucket Settings | 2 minutes |
| Public Deployment | Heroku, Railway, or Render (all free) | 5 minutes |
**Heroku (Easiest)**

```bash
# Install the Heroku CLI, then:
heroku create your-ai-reviewer
heroku config:set OPENAI_API_KEY=your_key_here
heroku config:set BITBUCKET_ACCESS_TOKEN=your_token_here
heroku config:set WEBHOOK_SECRET=your_secret_here
git push heroku main
```

**Railway (Modern)**

```bash
# Install the Railway CLI, then:
railway login
railway new
railway add
railway deploy
# Configure environment variables in the Railway dashboard
```

**Render (Simple)**

```bash
# 1. Connect your GitHub repo to Render
# 2. Set environment variables in the Render dashboard
# 3. Deploy automatically on git push
```

**Docker Compose (Recommended)**

```bash
cp .env.example .env
# Edit .env with your configuration
docker-compose up -d
```

**Docker CLI**

```bash
docker build -t ai-code-reviewer .
docker run -d -p 5000:5000 --env-file .env ai-code-reviewer
```

For testing webhooks locally before production deployment:
```bash
# Start your Flask application first
python app.py
# Server runs on http://localhost:5000

# Then, in another terminal, expose it publicly:

# Option 1: ngrok (most reliable)
# Download from https://ngrok.com, then:
./ngrok http 5000
# Use the https://xxxx.ngrok.io URL for the Bitbucket webhook

# Option 2: SSH tunnel (if you have access to a server)
ssh -R 80:localhost:5000 serveo.net
```

**Note:** Local tunnels are temporary and for development only. Always use a cloud deployment for production.
Create a `.env` file based on `.env.example`:

| Variable | Required | Description |
|---|---|---|
| `OPENAI_API_KEY` | Yes | Your OpenAI API key |
| `BITBUCKET_ACCESS_TOKEN` | Yes | Bitbucket access token |
| `WEBHOOK_SECRET` | Recommended | Secret for webhook signature verification |
| `PORT` | No | Server port (default: 5000) |
| `FLASK_DEBUG` | No | Enable debug mode (default: false) |
| `API_RATE_LIMIT` | No | Requests per hour limit (default: 60) |
- Visit the OpenAI API Platform
- Create a new API key
- Copy the key to your `.env` file
- Go to Bitbucket Settings → App passwords
- Click "Create app password"
- Give it a label (e.g., "AI Code Reviewer")
- Select these permissions:
  - Repositories: Read
  - Pull requests: Read, Write
- Click "Create" and copy the generated token to your `.env` file
1. Configure the webhook in your Bitbucket repository:
   - Go to Repository Settings → Webhooks
   - Add the webhook URL: `https://your-domain.com/webhook`
   - Select triggers: "Pull request created" and "Pull request updated"
   - Add the webhook secret (recommended for security)
2. Test the webhook:

   ```bash
   curl -X GET https://your-domain.com/test
   ```
- Webhook Reception: Bitbucket sends a webhook when PR events occur
- Event Filtering: The system processes only pull request events
- Diff Retrieval: Fetches the complete diff from the Bitbucket API
- AI Analysis: Sends the diff to OpenAI for comprehensive code review
- Comment Posting: Posts structured feedback as comments on the PR
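The pipeline above can be sketched as a single dispatch function. This is an illustrative outline, not the project's actual code: `handle_event` and the three injected callables are hypothetical names standing in for the real Bitbucket and OpenAI clients in `src/utils/`.

```python
# Sketch of the webhook pipeline: filter, fetch diff, review, comment.
# Event keys follow Bitbucket's "pullrequest:created" / "pullrequest:updated"
# convention; the real handler lives in app.py and may differ.
PR_EVENTS = {"pullrequest:created", "pullrequest:updated"}

def handle_event(event_key, payload, fetch_diff, review_diff, post_comments):
    """Run one webhook delivery through steps 2-5 of the pipeline."""
    # Step 2: event filtering - ignore everything but pull request events
    if event_key not in PR_EVENTS:
        return "ignored"
    pr_id = payload["pullrequest"]["id"]
    # Step 3: diff retrieval from the Bitbucket API
    diff = fetch_diff(pr_id)
    # Step 4: AI analysis of the diff
    feedback = review_diff(diff)
    # Step 5: post structured feedback as comments on the PR
    post_comments(pr_id, feedback)
    return "reviewed"
```

Injecting the clients as parameters keeps the control flow testable without network access.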
The AI reviewer analyzes code across these categories:
- Code Quality: Best practices, maintainability, code patterns
- Bugs & Logic: Runtime errors, edge cases, error handling
- Security: Vulnerabilities, injection risks, credential exposure
- Performance: Optimization opportunities, algorithmic efficiency
- Documentation: Missing documentation, unclear code
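One way to turn these categories into a single review request is a prompt template like the following. This is an illustrative sketch, not the prompt the bot actually sends; `build_prompt` is a hypothetical helper.

```python
# Hypothetical prompt template covering the five review categories above.
REVIEW_PROMPT = """You are an expert code reviewer. Review the diff below and
report findings in these categories:
- Code Quality: best practices, maintainability, code patterns
- Bugs & Logic: runtime errors, edge cases, error handling
- Security: vulnerabilities, injection risks, credential exposure
- Performance: optimization opportunities, algorithmic efficiency
- Documentation: missing documentation, unclear code

Diff:
{diff}
"""

def build_prompt(diff: str) -> str:
    """Fill the diff into the template before sending it to the model."""
    return REVIEW_PROMPT.format(diff=diff)
```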
- `GET /` - Health check endpoint
- `GET /test` - Detailed server status
- `POST /webhook` - Bitbucket webhook endpoint
Run the test suite:

```bash
python -m pytest tests/
```

Run locally with debug output:

```bash
export FLASK_DEBUG=true
python app.py
```

Project structure:

```
├── app.py                 # Main Flask application
├── wsgi.py                # WSGI entry point
├── requirements.txt       # Python dependencies
├── Dockerfile             # Docker configuration
├── docker-compose.yml     # Docker Compose setup
├── .env.example           # Environment template
└── src/
    └── utils/
        ├── bitbucket_client.py   # Bitbucket API integration
        ├── openai_client.py      # OpenAI API integration
        └── webhook_utils.py      # Webhook processing
```
- Webhook Signatures: Always configure `WEBHOOK_SECRET` for production
- API Keys: Store all API keys securely and never commit them to version control
- Rate Limiting: Built-in rate limiting protects against abuse
- Input Validation: All webhook payloads are validated before processing
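Webhook signature verification typically means recomputing an HMAC over the raw request body and comparing it to the header the sender supplies. A minimal sketch, assuming an HMAC-SHA256 scheme with a `sha256=` prefix (the exact header name and format depend on your webhook configuration):

```python
import hmac
import hashlib

def verify_signature(secret: str, body: bytes, signature: str) -> bool:
    """Return True if `signature` matches our HMAC of the raw body.

    `secret` is the WEBHOOK_SECRET shared with Bitbucket; `signature`
    is the value received in the webhook's signature header.
    """
    expected = "sha256=" + hmac.new(
        secret.encode(), body, hashlib.sha256
    ).hexdigest()
    # compare_digest runs in constant time, avoiding timing side channels
    return hmac.compare_digest(expected, signature)
```

Rejecting requests that fail this check is what stops arbitrary internet traffic from triggering (and paying for) OpenAI calls.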
1. **OpenAI API Errors**
   - Verify your API key is correct and has sufficient credits
   - Check the logs for detailed error messages
2. **Bitbucket Authentication Errors**
   - Ensure your access token/app password has the correct permissions
   - Verify the token hasn't expired
3. **Webhook Not Triggering**
   - Check that the webhook URL is accessible from the internet
   - Verify the webhook triggers are set to PR events
   - Test with the `/test` endpoint first
4. **Comments Not Posting**
   - Check Bitbucket API permissions
   - Verify the repository name format in the logs
   - Ensure the webhook payload contains valid PR information
Application logs are stored in the `logs/` directory with daily rotation. Check these files for debugging:

```bash
tail -f logs/ai_reviewer_$(date +%Y%m%d).log
```

We welcome contributions! This project thrives on community input and improvements.
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Make your changes with proper testing
4. Add tests for new functionality
5. Submit a pull request with a clear description
```bash
git clone https://github.com/torkian/ai-code-reviewer.git
cd ai-code-reviewer
pip install -r requirements.txt
cp .env.example .env
# Configure your development environment
python test_installation.py
```

Areas where contributions are especially welcome:

- Additional Git Platforms: GitLab and GitHub integration
- New AI Models: Claude and Gemini support
- Analytics Dashboard: Code quality metrics visualization
- Testing: Expand test coverage
- Documentation: Tutorials and examples
If this project helps you, please consider:
- ⭐ Starring the repository
- Reporting bugs and suggesting features
- Sharing with your team and on social media
- Contributing code or documentation
MIT License - Use it anywhere, modify freely, no strings attached!
See the LICENSE file for full details.
Created with ❤️ by Behzad Torkian
- Documentation: Start with SETUP.md for detailed instructions
- Issues: Open an issue for bugs or feature requests
- Discussions: Share ideas and ask questions in Discussions
| Problem | Solution |
|---|---|
| Webhook not triggering | Check URL accessibility with `curl https://your-domain.com/test` |
| OpenAI API errors | Verify your API key and account credits |
| Bitbucket auth issues | Ensure the token has "Pull requests: Write" permission |
| Comments not posting | Check the logs for detailed error messages |
Ready to revolutionize your code reviews?

⭐ Star this repo | Fork it | Read the docs

Built for developers, by developers. Made with AI and ❤️ for open source.