A modern, comprehensive AI-powered development toolkit that analyzes code quality across multiple languages and frameworks. Built with Laravel 12 and Vue 3, featuring both cloud and local AI providers for maximum flexibility, privacy, and cost control.
- Universal AI Analysis: Supports PHP/Laravel, JavaScript, React, Vue.js, Node.js, React Native
- Auto-Detection: Automatically identifies language/framework and applies specific best practices
- Quality Scoring: Get instant quality scores (1-10) with framework-specific criteria
- Actionable Suggestions: Receive detailed, code-example-rich improvement recommendations
- Security Analysis: Identify vulnerabilities across all supported languages
- Cloud AI: Google Gemini 2.0 Flash for high-quality, fast analysis
- Local AI: Ollama integration with Qwen2.5-Coder, DeepSeek-Coder, CodeLlama
- Cost Control: Choose between paid cloud AI or free local models
- Privacy Options: Keep sensitive code local with offline AI analysis
- Provider Comparison: Test same code with multiple AI providers
- Responsive Design: Beautiful Vue.js interface that works on all devices
- Real-time Analysis: Live code analysis with progress indicators
- Provider Selection: Easy switching between AI providers and models
- Syntax Highlighting: Framework-specific code highlighting and formatting
- Interactive Results: Expandable suggestions with executable code examples
- Save & Track: Store analyses with custom names and provider information
- History View: Browse all past analyses with scores, timestamps, and costs
- Detailed Modal: View complete analysis details including original code
- Provider Tracking: See which AI provider and model generated each analysis
- Cost Monitoring: Track analysis costs across different providers
- Laravel 12: Latest Laravel framework with Vue starter kit
- Modern Stack: Vue 3 + TypeScript + Tailwind CSS + Inertia.js
- Multi-AI Architecture: Extensible provider system with BaseProvider pattern
- Local AI Ready: Full Ollama integration for privacy-first development
- Database Storage: Enhanced schema tracking providers, models, and costs
- PHP 8.1+
- Composer
- Node.js 18+
- NPM/Yarn
1. Clone the repository

   ```bash
   git clone https://github.com/rabibsust/ai-toolkit.git
   cd ai-toolkit
   ```

2. Install PHP dependencies

   ```bash
   composer install
   ```

3. Install Node dependencies

   ```bash
   npm install
   ```

4. Environment setup

   ```bash
   cp .env.example .env
   php artisan key:generate
   ```

5. Configure Cloud AI (Optional)

   ```bash
   # Get your API key from https://aistudio.google.com/
   # Add it to your .env file:
   GEMINI_API_KEY=your_gemini_api_key_here
   ```

6. Set up Local AI (Recommended)

   ```bash
   # Install Ollama (macOS/Linux)
   curl -fsSL https://ollama.com/install.sh | sh

   # Start the Ollama service
   ollama serve

   # Download AI models for code analysis
   ollama pull qwen2.5-coder:7b    # Best overall performance
   ollama pull deepseek-coder:6.7b # Efficient and fast
   ollama pull codellama:7b        # Security-focused
   ```

7. Database setup

   ```bash
   php artisan migrate
   ```

8. Install the Gemini Laravel package (if using cloud AI)

   ```bash
   composer require google-gemini-php/laravel
   php artisan gemini:install
   ```

9. Start the development servers

   ```bash
   composer run dev
   ```
Visit http://127.0.0.1:8000 to start analyzing your code with AI!
- Navigate to the dashboard
- Select your preferred AI provider (Gemini Cloud or Ollama Local)
- Choose the AI model that best fits your needs
- Paste your code (any supported language/framework)
- The AI automatically detects the language and applies specific analysis
- Review quality scores, suggestions, and security recommendations
- Save analyses for future reference with cost tracking
| Language/Framework | Best AI Model | Specialization |
|---|---|---|
| PHP/Laravel | Qwen2.5-Coder 7B | Framework patterns, security, Eloquent |
| React | Qwen2.5-Coder 7B | Hooks, performance, modern patterns |
| Vue.js | DeepSeek-Coder 6.7B | Composition API, reactivity, Vue 3 |
| Node.js/Express | CodeLlama 7B | Security, async patterns, APIs |
| React Native | Qwen2.5-Coder 7B | Mobile patterns, cross-platform |
| JavaScript | DeepSeek-Coder 6.7B | ES6+, browser compatibility, performance |
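The mapping in the table above can be expressed as a small lookup helper. This is an illustrative sketch, not the toolkit's actual auto-detection code (which runs server-side); the lookup keys and the fallback choice are assumptions.

```typescript
// Sketch: map a detected language/framework to the recommended local model,
// mirroring the recommendation table above. Keys are illustrative assumptions.
const MODEL_BY_FRAMEWORK: Record<string, string> = {
  'php/laravel': 'qwen2.5-coder:7b',
  'react': 'qwen2.5-coder:7b',
  'vue.js': 'deepseek-coder:6.7b',
  'node.js/express': 'codellama:7b',
  'react native': 'qwen2.5-coder:7b',
  'javascript': 'deepseek-coder:6.7b',
};

function recommendedModel(framework: string): string {
  // Fall back to the best all-round model when the framework is unknown.
  return MODEL_BY_FRAMEWORK[framework.toLowerCase()] ?? 'qwen2.5-coder:7b';
}
```

For example, `recommendedModel('Vue.js')` returns `deepseek-coder:6.7b`, while an unrecognized framework falls back to `qwen2.5-coder:7b`.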
Laravel Controller:

```php
<?php

class UserController extends Controller
{
    public function index()
    {
        $users = DB::table('users')->get();

        return view('users.index', compact('users'));
    }

    public function store(Request $request)
    {
        $user = new User();
        $user->name = $request->name;
        $user->email = $request->email;
        $user->save();

        return redirect()->back();
    }
}
```

React Component:
```jsx
import React, { useState, useEffect } from 'react';

function UserList() {
  const [users, setUsers] = useState([]);

  useEffect(() => {
    fetch('/api/users')
      .then(response => response.json())
      .then(data => setUsers(data));
  }, []);

  return (
    <div>
      {users.map(user => (
        <div key={user.id}>{user.name}</div>
      ))}
    </div>
  );
}
```

Vue Component:
```vue
<template>
  <div>
    <h1>{{ title }}</h1>
    <ul>
      <li v-for="user in users" :key="user.id">
        {{ user.name }}
      </li>
    </ul>
  </div>
</template>

<script setup>
import { ref, onMounted } from 'vue'

const title = ref('User List')
const users = ref([])

onMounted(async () => {
  const response = await fetch('/api/users')
  users.value = await response.json()
})
</script>
```

- Detected Language: Auto-identified framework (e.g., "Laravel Controller", "React Component")
- Quality Score: 1-10 with framework-specific criteria
- Suggestions: Language-specific improvements with code examples
- Security Analysis: Framework-appropriate vulnerability detection
- Best Practices: Modern patterns and conventions
- Performance Tips: Optimization recommendations
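To make the pieces above concrete, here is what a single analysis result could look like as a typed record. This shape is an assumption for illustration; the actual response fields are defined by the API (see the saved fields `score`, `provider`, `model`, and `cost` in the database schema below).

```typescript
// Illustrative (assumed) shape for one analysis result. Only score, provider,
// model, and cost are confirmed by the schema; the other field names are
// hypothetical.
interface AnalysisResult {
  detectedLanguage: string;   // e.g. "Laravel Controller"
  score: number;              // 1-10, framework-specific criteria
  suggestions: string[];      // language-specific improvements
  securityFindings: string[]; // framework-appropriate vulnerabilities
  provider: string;           // which AI provider produced the analysis
  model: string;              // specific model used
  cost: number;               // USD; 0 for local Ollama models
}

const example: AnalysisResult = {
  detectedLanguage: 'Laravel Controller',
  score: 6,
  suggestions: ['Prefer Eloquent models over raw DB::table() queries'],
  securityFindings: ['Validate request input before persisting user data'],
  provider: 'ollama',
  model: 'qwen2.5-coder:7b',
  cost: 0,
};
```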
```
app/
├── Http/Controllers/
│   └── AiToolsController.php        # Main API controller
├── Services/
│   ├── LLMProviderFactory.php       # Multi-provider factory
│   └── Providers/
│       ├── BaseProvider.php         # Shared provider logic
│       ├── GeminiProvider.php       # Google Gemini integration
│       └── OllamaProvider.php       # Local Ollama integration
├── Contracts/
│   └── LLMProviderInterface.php     # Provider contract
└── Models/
    └── CodeAnalysis.php             # Enhanced analysis storage

resources/js/
├── pages/AiTools/
│   ├── Dashboard.vue                # Multi-provider analysis interface
│   └── History.vue                  # Enhanced history with provider info
├── components/ui/                   # Reusable UI components
└── layouts/
    └── AppLayout.vue                # Application layout
```
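The `LLMProviderInterface` contract and `BaseProvider` pattern above can be sketched as follows. The real contract is PHP; this TypeScript translation is for illustration only, and the method names are assumptions based on the structure, not the actual interface.

```typescript
// Hypothetical translation of the provider contract: every provider
// (Gemini, Ollama, ...) exposes the same analyze/cost surface, so the
// factory can swap them freely.
interface LLMProvider {
  name(): string;
  analyze(code: string, model: string): Promise<{ analysis: string; score: number }>;
  estimateCost(tokens: number): number; // USD
}

// A minimal stub in the spirit of BaseProvider/OllamaProvider, useful for
// testing code that depends on the contract without a live model.
class StubLocalProvider implements LLMProvider {
  name(): string {
    return 'ollama';
  }

  async analyze(code: string, model: string) {
    // A real provider would call the model here; the stub fabricates a result.
    return { analysis: `analyzed ${code.length} chars with ${model}`, score: 7 };
  }

  estimateCost(_tokens: number): number {
    return 0; // local models are free
  }
}
```

This is the usual factory-friendly shape: callers depend only on `LLMProvider`, and adding a new backend means implementing the interface, not touching the callers.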
```sql
-- code_analyses table with provider tracking
CREATE TABLE code_analyses (
    id BIGINT PRIMARY KEY,
    code TEXT NOT NULL,
    analysis TEXT NOT NULL,
    suggestions JSON,
    score INTEGER,
    file_name VARCHAR(255),
    provider VARCHAR(50),   -- NEW: AI provider used
    model VARCHAR(100),     -- NEW: Specific model used
    cost DECIMAL(8,6),      -- NEW: Analysis cost
    tokens_used INTEGER,    -- NEW: Token usage tracking
    created_at TIMESTAMP,
    updated_at TIMESTAMP
);
```

| Component | Technology | Purpose |
|---|---|---|
| Backend | Laravel 12 | API, routing, multi-provider logic |
| Frontend | Vue 3 + TypeScript | Reactive multi-language interface |
| Styling | Tailwind CSS | Responsive, modern design |
| SPA | Inertia.js | Seamless page transitions |
| Cloud AI | Google Gemini 2.0 | High-quality cloud analysis |
| Local AI | Ollama + Multiple Models | Privacy-first local analysis |
| Database | SQLite/PostgreSQL | Enhanced analysis storage |
| Testing | Pest | Modern PHP testing |
| Provider | Cost | Privacy | Speed | Quality | Best For |
|---|---|---|---|---|---|
| Gemini Cloud | $0.001/request | ⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Production, complex analysis |
| Ollama Local | Free | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐ | Privacy, development, learning |
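The trade-offs in the table above reduce to a simple decision rule, sketched below. The priority names and the mapping are assumptions for illustration; the toolkit itself lets you pick the provider manually.

```typescript
// Sketch: pick a provider from the comparison table above.
// Privacy and cost favor local Ollama; speed and quality favor Gemini Cloud.
type Priority = 'privacy' | 'cost' | 'speed' | 'quality';

function chooseProvider(priority: Priority): 'ollama' | 'gemini' {
  return priority === 'privacy' || priority === 'cost' ? 'ollama' : 'gemini';
}
```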
- Qwen2.5-Coder 7B: Best overall performance (4.7GB)
- DeepSeek-Coder 6.7B: Most efficient (3.4GB)
- CodeLlama 7B: Security-focused (4.0GB)
- CodeGemma 7B: Google-optimized (4.2GB)
```http
POST /api/analyze-code
Content-Type: application/json

{
  "code": "<?php class UserController...",
  "provider": "ollama",
  "model": "qwen2.5-coder:7b",
  "options": {
    "focus": "security",
    "detail": "detailed"
  }
}
```

```http
POST /api/save-analysis
Content-Type: application/json

{
  "code": "...",
  "analysis": "...",
  "suggestions": [...],
  "score": 8,
  "file_name": "React Component Analysis",
  "provider": "ollama",
  "model": "qwen2.5-coder:7b"
}
```

```http
GET /api/analysis/{id}
GET /api/providers    # Get available providers
```

Run the comprehensive test suite:
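A thin client for the analyze endpoint might look like the sketch below. The URL and body fields come from the API reference above; the injectable `fetchImpl` parameter and the error handling are assumptions added for testability.

```typescript
// Sketch: minimal client for POST /api/analyze-code. The fetch implementation
// is injected so the request construction can be tested without a server.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{ json(): Promise<unknown> }>;

async function analyzeCode(
  code: string,
  provider: string,
  model: string,
  fetchImpl: FetchLike,
): Promise<unknown> {
  const res = await fetchImpl('/api/analyze-code', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ code, provider, model }),
  });
  return res.json();
}
```

In the browser you would pass the global `fetch`; in tests, a stub that records the request and returns a canned response.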
```bash
php artisan test
```

Test local AI integration:
```bash
# Test Ollama connection
ollama run qwen2.5-coder:7b "Analyze this PHP code: <?php echo 'hello'; ?>"
```

Run frontend tests:
```bash
npm run test
```

1. Environment configuration

   ```bash
   APP_ENV=production
   APP_DEBUG=false
   GEMINI_API_KEY=your_production_key  # Optional
   OLLAMA_URL=http://127.0.0.1:11434   # For local AI
   ```

2. Database migration

   ```bash
   php artisan migrate --force
   ```

3. Asset optimization

   ```bash
   npm run build
   php artisan config:cache
   php artisan route:cache
   ```
```dockerfile
# Multi-stage build with Ollama support
FROM php:8.2-fpm as app
# ... Laravel setup ...

FROM ollama/ollama as ai-models
RUN ollama pull qwen2.5-coder:7b
RUN ollama pull deepseek-coder:6.7b

# Production image combining both
FROM app
COPY --from=ai-models /root/.ollama /root/.ollama
```

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Follow PSR-12 coding standards
- Write tests for new AI providers
- Update documentation for API changes
- Test with multiple AI models
- Use conventional commit messages
This project is licensed under the MIT License - see the LICENSE file for details.
- Multi-LLM provider architecture
- Cloud AI integration (Gemini)
- Local AI integration (Ollama)
- Multi-language support (PHP, JS, React, Vue, Node)
- Auto-detection and smart analysis
- Enhanced analysis history with provider tracking
- Intelligent AI router (auto-select best provider/model)
- Cost optimization algorithms
- Real-time provider performance monitoring
- Advanced security vulnerability scanning
- Code comparison across providers
- User authentication and multi-tenancy
- Team collaboration and shared analyses
- API rate limiting and quotas
- Advanced analytics dashboard
- Enterprise SSO integration
- GitHub integration for PR analysis
- VS Code extension
- CI/CD pipeline integration
- Automated test generation
- Performance monitoring integration
- Documentation: Wiki
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: rabib.sust@gmail.com
- Google Gemini AI for powerful cloud-based analysis
- Ollama Team for excellent local AI infrastructure
- Alibaba Qwen Team for Qwen2.5-Coder model
- DeepSeek AI for efficient code analysis models
- Meta for CodeLlama and React ecosystem
- Laravel Team for the excellent framework
- Vue.js Team for the reactive frontend framework
- Tailwind CSS for utility-first styling
- Inertia.js for seamless SPA functionality
Built with ❤️ by Ahmad Jamaly Rabib
Transform your development workflow with AI-powered insights across all major languages and frameworks
- Universal Language Support: One tool for PHP, JavaScript, React, Vue, Node.js, React Native
- Privacy-First: Run powerful AI models locally with zero cloud dependency
- Cost Flexible: Choose between free local AI or premium cloud AI based on your needs
- Smart Detection: Automatically identifies languages and applies framework-specific analysis
- Provider Comparison: Test the same code with multiple AI providers to get the best insights
- Security Focused: Framework-specific vulnerability detection across all supported languages