Enhanced fork of DeepSeek Pentest AI with local Ollama support
Original Author: Hernán Rodríguez
Original Project: DeepSeek-Pentest-AI
LinkedIn: https://www.linkedin.com/in/hernanrodriguez-/
This tool is intended for EDUCATIONAL and AUTHORIZED security testing purposes ONLY.
- Do NOT use against systems without explicit written permission
- The authors are NOT responsible for misuse of this extension
- Use ONLY on systems you own or have authorization to test
- Recommended for local testing against vulnerable apps like OWASP Juice Shop or DVWA
- USE AT YOUR OWN RISK - NO WARRANTY PROVIDED
This fork replaces the cloud-based DeepSeek API with local Ollama models, giving you:
- ✅ Privacy: All payload generation happens locally
- ✅ No API costs: Use uncensored models for offensive security testing
- ✅ Full control: Choose from multiple Ollama models
- ✅ Educational focus: Perfect for learning penetration testing techniques
| Feature | v1 (Basic) | v2 (Enhanced) |
|---|---|---|
| Local Ollama Support | ✅ | ✅ |
| Context-Aware Payloads | ❌ | ✅ |
| Risk Scoring (20-95%) | ❌ | ✅ |
| Color-Coded Risk Badges | ❌ | ✅ |
| Multi-Model Support | ✅ | ✅ |
| Safe Mode (No Destructive) | ❌ | ✅ |
| Session Save/Load | ❌ | ✅ |
| AI Summary Generation | ❌ | ✅ |
| Temperature Control | ❌ | ✅ |
Recommended Version: v2 for production use
- AI-Powered Payload Generation : Uses local Ollama models for intelligent payload creation
- Multiple Attack Types : SQL Injection, XSS, Command Injection, Path Traversal, LFI, SSRF, RCE, SSTI, XXE, NoSQL, GraphQL, Open Redirect, CRLF, CORS, Host Header Injection
- Smart Parameter Detection : Automatically finds parameters in GET, POST, JSON, XML, multipart, and headers
- Intelligent Fuzzing : Baseline comparison, differential analysis, confidence scoring
- Real-Time Metrics : Live vulnerability tracking with visual charts
- Context-Aware Payloads: Extracts actual parameter values for targeted payload generation
- Risk Badges: Color-coded severity indicators (Critical/High/Medium/Low)
- Safe Mode: Filters destructive SQL payloads (DROP, DELETE, UPDATE, etc.) with confirmation dialog
- Multiple Models: Choose from mistral, llama2, phi3, neural-chat, gdisney/mistral-uncensored
- Fine-Tuning: Adjust temperature and max tokens for optimal results
- Session Persistence: Save/load entire testing sessions as JSON
- AI Summaries: Generate executive reports with remediation recommendations
- Confidence Scoring: 20-95% confidence ratings based on evidence strength
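The parameter-detection step can be pictured with a short sketch. This is a simplified Python 3 illustration, not the extension's actual code (the extension runs on Jython/Python 2 and also covers XML, multipart bodies, and headers):

```python
import json
from urllib.parse import urlparse, parse_qsl

def extract_parameters(raw_request):
    """Collect parameter names/values from a raw HTTP request string.

    Simplified: handles query strings, urlencoded bodies, and flat
    JSON bodies only.
    """
    head, _, body = raw_request.partition("\r\n\r\n")
    request_line = head.splitlines()[0]
    path = request_line.split(" ", 2)[1]

    params = dict(parse_qsl(urlparse(path).query))  # GET-style parameters

    if body:
        try:  # JSON body first, urlencoded as a fallback
            data = json.loads(body)
            if isinstance(data, dict):
                params.update({k: str(v) for k, v in data.items()})
        except ValueError:
            params.update(parse_qsl(body))
    return params

raw = ("POST /search?page=2 HTTP/1.1\r\n"
       "Host: localhost\r\n"
       "Content-Type: application/json\r\n"
       "\r\n"
       '{"q": "admin", "limit": 10}')
print(extract_parameters(raw))  # {'page': '2', 'q': 'admin', 'limit': '10'}
```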
- Burp Suite Pro (Community edition has limited functionality)
- Ollama installed locally (Download)
- Jython standalone JAR configured in Burp Extender
- Python 2.7 environment (for Jython compatibility)
```bash
ollama pull gdisney/mistral-uncensored
```

Note: This is an uncensored model suitable for offensive security research. Use responsibly.

```bash
# macOS/Linux
curl https://ollama.ai/install.sh | sh

# Windows
# Download from https://ollama.ai/download
```

```bash
ollama pull gdisney/mistral-uncensored
# or
ollama pull mistral
ollama pull llama2
```
1. Load Jython:
   - Extender → Options → Python Environment
   - Set location of Jython standalone JAR
2. Add Extension:
   - Extender → Extensions → Add
   - Extension type: Python
   - Select `Ollama-Pentest-AI-localhost-v2.py` (recommended)
3. Check Burp Extender output for: `Plugin initialized`
Basic Setup:
- API Key: Enter any value (e.g., "localhost" - required by the UI but not used)
- Model: Select `gdisney/mistral-uncensored` or your preferred model
- Attack Type: Choose a vulnerability type or "CUSTOM PROMPT"
- Payload Count: 5-10 recommended
- Delay: 100ms (adjust based on target)

Advanced Settings (v2 only):
- Temperature: `0.4` (lower = more focused, higher = more creative)
- Max Tokens: `800` (longer for complex payloads)
- ✅ Safe Mode: ENABLED BY DEFAULT - prevents destructive SQL
- ✅ Smart Fuzzing: Enable for adaptive payload generation
- ✅ Capture all traffic: Monitor all Burp requests
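These settings map onto Ollama's OpenAI-compatible chat endpoint on port 11434. A minimal Python 3 sketch of the kind of request the extension builds (function names here are illustrative, not the extension's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(prompt, model="mistral", temperature=0.4, max_tokens=800):
    """Build the JSON body for Ollama's OpenAI-compatible chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def generate(prompt, **options):
    """POST the request to a locally running Ollama and return the reply text."""
    data = json.dumps(build_request(prompt, **options)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

body = build_request("Generate 5 XSS payloads. Return ONLY payload values, one per line.")
print(body["temperature"], body["max_tokens"])  # 0.4 800
```

Calling `generate()` requires `ollama serve` to be running locally; `build_request()` alone is enough to see how temperature and max tokens flow into the API call.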
- Use Burp Proxy to intercept a request
- Right-click → "Ollama: Analyze Request"
- Or manually paste a request into the extension tab
1. Select Attack Type (e.g., "SQL Injection")
2. Click "Analyze & Generate"
3. Review generated payloads in the "AI Analysis" tab
4. Parameters automatically detected and displayed
1. Click "Start Pentesting"
2. Watch real-time results in the "Pentest Live" tab
3. Color-coded vulnerabilities appear (v2: with risk badges)
4. Check "Results" tab for detailed findings
- View metrics in the "Metrics" tab
- Generate AI summary (v2): "AI Summary" → "Generate Summary"
- Export findings: "Export to CSV", with one of three filters:
  - Only vulnerabilities
  - Full history
  - Evidence-only
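The three export filters amount to filtering the findings list before writing CSV rows. A simplified sketch (the field names and finding structure are illustrative):

```python
import csv
import io

def export_findings(findings, mode="vulns"):
    """Render findings as CSV. mode: 'vulns', 'all', or 'evidence'."""
    rows = findings
    if mode == "vulns":
        rows = [f for f in findings if f["vulnerable"]]
    elif mode == "evidence":
        rows = [f for f in findings if f.get("evidence")]
    out = io.StringIO()
    writer = csv.DictWriter(
        out, fieldnames=["payload", "vulnerable", "confidence", "evidence"]
    )
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

findings = [
    {"payload": "' OR 1=1--", "vulnerable": True, "confidence": 85,
     "evidence": "mysql_fetch_array() expects"},
    {"payload": "<svg onload=alert(1)>", "vulnerable": False, "confidence": 20,
     "evidence": ""},
]
print(export_findings(findings, mode="vulns"))
```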
For specialized testing, use CUSTOM PROMPT mode:
Attack Type: CUSTOM PROMPT
Custom Prompt: "Generate advanced SQL injection payloads for boolean-based blind attacks that bypass ModSecurity WAF. Return ONLY payload values, one per line, no explanations."
Important: Always end custom prompts with:
"Return ONLY payload values, one per line, no explanations."
Safe Mode filters out destructive SQL payloads that could cause permanent data loss:
Blocked Operations:
- `DROP` (tables/databases)
- `DELETE` (row deletion)
- `UPDATE` (data modification)
- `INSERT` (data injection)
- `TRUNCATE` (table clearing)
- `ALTER` (schema changes)
- `CREATE`, `RENAME`, `GRANT`, `REVOKE`

Default: ✅ ENABLED (Recommended)
To Disable (not recommended):
- Uncheck "Safe Mode (no destructive)"
- Confirmation dialog appears with warning
- Click "Yes" to confirm (use extreme caution)
Best Practice: Keep Safe Mode enabled unless you have explicit authorization and understand the risks.
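Conceptually, Safe Mode is a keyword filter applied to generated payloads before anything is sent. A simplified sketch using the blocked list above (the matching logic is illustrative, not the extension's exact code):

```python
import re

# Destructive SQL verbs blocked by Safe Mode (from the list above)
BLOCKED = ("DROP", "DELETE", "UPDATE", "INSERT", "TRUNCATE",
           "ALTER", "CREATE", "RENAME", "GRANT", "REVOKE")
BLOCKED_RE = re.compile(r"\b(%s)\b" % "|".join(BLOCKED), re.IGNORECASE)

def safe_mode_filter(payloads):
    """Return only payloads containing no destructive SQL keyword."""
    return [p for p in payloads if not BLOCKED_RE.search(p)]

payloads = [
    "' OR 1=1--",
    "'; DROP TABLE users--",
    "' UNION SELECT password FROM users--",
]
print(safe_mode_filter(payloads))
# ["' OR 1=1--", "' UNION SELECT password FROM users--"]
```

Note that read-only injections (`UNION SELECT`) pass through: Safe Mode blocks data destruction, not exploitation.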
| Color | Severity | Vulnerability Types | Example |
|---|---|---|---|
| 🔴 Dark Red | Critical | RCE, SSRF, XXE | Remote code execution |
| 🟠 Orange | High | SQL Injection, XSS | Data exfiltration possible |
| 🟡 Yellow | Medium | Open Redirect, CORS | Limited impact |
| 🟢 Green | Low | Information Disclosure | Minimal risk |
- 20-50%: Low confidence, possible false positive
- 51-74%: Medium confidence, worth investigating
- 75-89%: High confidence, likely vulnerable
- 90-95%: Very high confidence, exploit confirmed
Findings marked with "Evidence:" show actual proof:
SQL Injection - Database error detected [high|85%] | Evidence: "mysql_fetch_array() expects"
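One way to picture how evidence strings could drive the confidence score and the finding line above (the signatures, weights, and band labels here are illustrative, not the extension's actual values):

```python
# Illustrative error signatures and confidence weights
SQL_ERROR_SIGNATURES = {
    "mysql_fetch_array() expects": 85,
    "You have an error in your SQL syntax": 80,
    "ORA-01756": 75,
    "sqlite3.OperationalError": 75,
}

def score_response(body):
    """Return (confidence, evidence) for the strongest matching signature."""
    best = (20, None)  # 20% floor: no evidence, possible false positive
    for signature, confidence in SQL_ERROR_SIGNATURES.items():
        if signature in body and confidence > best[0]:
            best = (confidence, signature)
    return best

def confidence_label(confidence):
    """Map a score onto the bands listed above."""
    if confidence >= 90:
        return "very high"
    if confidence >= 75:
        return "high"
    if confidence >= 51:
        return "medium"
    return "low"

conf, evidence = score_response("Warning: mysql_fetch_array() expects parameter 1")
print('SQL Injection [%s|%d%%] | Evidence: "%s"'
      % (confidence_label(conf), conf, evidence))
# SQL Injection [high|85%] | Evidence: "mysql_fetch_array() expects"
```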
Recommended Models:

1. `gdisney/mistral-uncensored` (Best for offensive security)
   - Uncensored, generates aggressive payloads
   - Great for educational testing
2. `mistral` (Balanced)
   - Good quality, moderate censorship
   - Suitable for general testing
3. `llama2` (Conservative)
   - More censored, safer outputs
   - Good for compliance testing
Temperature:
- 0.1-0.3: Focused, deterministic payloads
- 0.4-0.6: Balanced creativity (recommended)
- 0.7-0.9: Highly creative, diverse payloads
- 1.0+: Experimental, unpredictable
Request Delay:
- 0ms: Maximum speed (may trigger WAF)
- 100ms: Balanced (recommended)
- 500ms+: Stealth mode
Payload Count:
- 5: Quick scan
- 10: Standard test
- 20+: Comprehensive audit
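The delay setting amounts to a simple pause between requests. A minimal sketch (the `send` callback stands in for Burp's request machinery and is purely illustrative):

```python
import time

def run_payloads(payloads, send, delay_ms=100):
    """Send each payload via `send`, sleeping delay_ms between requests."""
    results = []
    for payload in payloads:
        results.append(send(payload))
        time.sleep(delay_ms / 1000.0)  # 0ms = max speed, 500ms+ = stealth
    return results

sent = []
run_payloads(["' OR 1=1--", "<svg onload=alert(1)>"], sent.append, delay_ms=0)
print(len(sent))  # 2
```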
Choose v1 if:
- ✅ You need basic functionality
- ✅ You're new to Burp extensions
- ✅ You prefer simplicity

Choose v2 if:
- ✅ You need production-ready features
- ✅ You want risk scoring and context-aware payloads
- ✅ You need session persistence
- ✅ You want AI-generated reports
- ✅ You need Safe Mode protection
Recommendation: Use v2 for all serious testing scenarios.
```
Ollama-Pentest-AI/
├── Ollama-Pentest-AI-localhost-v1.py   # Basic version
├── Ollama-Pentest-AI-localhost-v2.py   # Enhanced version (recommended)
├── DeepSeek-Pentest-AI.py              # Original DeepSeek version
├── README-v1.md                        # v1 documentation
├── README-v2.md                        # v2 documentation
└── README.md                           # This file
```
"API test failed"
bash
# Check Ollama is running
ollama list
# Start Ollama service
ollama serve
# Test with curl
curl http://localhost:11434/v1/chat/completions"No parameters detected"
- Ensure request starts with HTTP method (GET/POST/PUT/DELETE)
- Check Content-Type header is present
- Try capturing request through Burp Proxy
"Generate payloads first"
- Click "Analyze & Generate" before "Start Pentesting"
- Wait for payload generation to complete
- Check "AI Analysis" tab for generated payloads
Slow payload generation
- Reduce Max Tokens in the UI: `400` (instead of `800`)
- Use a faster model: `mistral` (instead of `llama2`)

Extension not loading
- Verify Jython is configured in Burp
- Check Burp Extender "Output" tab for errors
- Ensure Python file is not corrupted
This is an educational fork. Contributions welcome:
- Fork the repository
- Create a feature branch
- Test thoroughly on vulnerable apps (DVWA, Juice Shop)
- Submit pull request with detailed description
This project inherits the license from the original DeepSeek-Pentest-AI repository.
Original Author: Hernán Rodríguez
- GitHub: @HernanRodriguez1
- Original Project: DeepSeek-Pentest-AI
- LinkedIn: hernanrodriguez-
Ollama Adaptation: Educational fork for local AI-powered pentesting
EDUCATIONAL USE ONLY
This tool is designed for:
- ✅ Authorized penetration testing
- ✅ Security research in controlled environments
- ✅ Educational demonstrations
- ✅ Bug bounty programs with permission

NEVER USE FOR:
- ❌ Unauthorized access attempts
- ❌ Malicious attacks
- ❌ Testing without permission
- ❌ Any illegal activity
By using this tool, you agree:
- You have explicit authorization to test target systems
- You understand the legal implications
- You accept full responsibility for your actions
- You will use Safe Mode in production environments
Runs in Burp Suite Pro or Community (Community edition has limited functionality)
Happy (Legal) Hacking! 🎯
Remember: With great power comes great responsibility. Use this tool ethically.

