steveschofield/Ollama-Pentesting-AI

Ollama Pentest AI - Burp Suite Extension

Enhanced fork of DeepSeek Pentest AI with local Ollama support

Original Author: Hernán Rodríguez

Original Project: DeepSeek-Pentest-AI

LinkedIn: https://www.linkedin.com/in/hernanrodriguez-/


⚠️ Disclaimer

This tool is intended for EDUCATIONAL and AUTHORIZED security testing purposes ONLY.

  • Do NOT use against systems without explicit written permission
  • The authors are NOT responsible for misuse of this extension
  • Use ONLY on systems you own or have authorization to test
  • Recommended for local testing against vulnerable apps like OWASP Juice Shop or DVWA
  • USE AT YOUR OWN RISK - NO WARRANTY PROVIDED

🎯 What's New in the Ollama Version

This fork replaces the cloud-based DeepSeek API with local Ollama models, giving you:

  • ✅ Privacy: All payload generation happens locally
  • ✅ No API costs: Use uncensored models for offensive security testing
  • ✅ Full control: Choose from multiple Ollama models
  • ✅ Educational focus: Perfect for learning penetration testing techniques

Version Comparison

| Feature | v1 (Basic) | v2 (Enhanced) |
|---|---|---|
| Local Ollama Support | ✅ | ✅ |
| Context-Aware Payloads | ❌ | ✅ |
| Risk Scoring (20-95%) | ❌ | ✅ |
| Color-Coded Risk Badges | ❌ | ✅ |
| Multi-Model Support | ❌ | ✅ |
| Safe Mode (No Destructive) | ❌ | ✅ |
| Session Save/Load | ❌ | ✅ |
| AI Summary Generation | ❌ | ✅ |
| Temperature Control | ❌ | ✅ |

Recommended Version: v2 for production use


🚀 Key Features

Core Capabilities

  • AI-Powered Payload Generation: Uses local Ollama models for intelligent payload creation
  • Multiple Attack Types: SQL Injection, XSS, Command Injection, Path Traversal, LFI, SSRF, RCE, SSTI, XXE, NoSQL, GraphQL, Open Redirect, CRLF, CORS, Host Header Injection
  • Smart Parameter Detection: Automatically finds parameters in GET, POST, JSON, XML, multipart, and headers
  • Intelligent Fuzzing: Baseline comparison, differential analysis, confidence scoring
  • Real-Time Metrics: Live vulnerability tracking with visual charts
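To make the parameter-detection idea concrete, here is a minimal, hypothetical sketch (not the extension's actual code, which runs under Jython 2.7 inside Burp) that pulls candidate parameters from a query string and a JSON body:

```python
import json
from urllib.parse import urlparse, parse_qs

def find_parameters(url, body="", content_type=""):
    """Collect candidate parameter names/values from the URL query
    string and, for JSON requests, from top-level body keys."""
    params = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    if "json" in content_type.lower() and body:
        try:
            data = json.loads(body)
            if isinstance(data, dict):
                params.update({k: str(v) for k, v in data.items()})
        except ValueError:
            pass  # body was not valid JSON; skip it
    return params

query_params = find_parameters("http://target.local/search?q=test&page=1")
json_params = find_parameters("http://target.local/api/login",
                              '{"user": "admin", "pass": "x"}',
                              "application/json")
```

The same pattern extends to XML, multipart, and header parameters; each source just contributes more candidate names to fuzz.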

Enhanced v2 Features

  • 🎯 Context-Aware Payloads: Extracts actual parameter values for targeted payload generation
  • 🎨 Risk Badges: Color-coded severity indicators (Critical/High/Medium/Low)
  • 🛡️ Safe Mode: Filters destructive SQL payloads (DROP, DELETE, UPDATE, etc.) with a confirmation dialog
  • 🧠 Multiple Models: Choose from mistral, llama2, phi3, neural-chat, gdisney/mistral-uncensored
  • 🌡️ Fine-Tuning: Adjust temperature and max tokens for optimal results
  • 💾 Session Persistence: Save/load entire testing sessions as JSON
  • 📊 AI Summaries: Generate executive reports with remediation recommendations
  • 📈 Confidence Scoring: 20-95% confidence ratings based on evidence strength

📋 Prerequisites

Required

  • Burp Suite Pro (Community edition has limited functionality)
  • Ollama installed locally (Download)
  • Jython standalone JAR configured in Burp Extender
  • Python 2.7 environment (for Jython compatibility)

Recommended Ollama Model

```bash
ollama pull gdisney/mistral-uncensored
```

Note: This is an uncensored model suitable for offensive security research. Use responsibly.


🔧 Installation

1. Install Ollama

```bash
# macOS/Linux
curl https://ollama.ai/install.sh | sh

# Windows
# Download from https://ollama.ai/download
```

2. Pull a Model

```bash
ollama pull gdisney/mistral-uncensored
# or
ollama pull mistral
ollama pull llama2
```

3. Configure Burp Suite

  1. Load Jython:

    • Extender → Options → Python Environment
    • Set the location of the Jython standalone JAR
  2. Add Extension:

    • Extender → Extensions → Add
    • Extension type: Python
    • Select: Ollama-Pentest-AI-localhost-v2.py (recommended)
    • Check the Burp Extender output for: Plugin initialized

4. Configure the Extension

Basic Setup:

  • API Key: Enter any value (e.g., "localhost" - required by UI but not used)
  • Model: Select gdisney/mistral-uncensored or your preferred model
  • Attack Type: Choose vulnerability type or "CUSTOM PROMPT"
  • Payload Count: 5-10 recommended
  • Delay: 100ms (adjust based on target)

Advanced Settings (v2 only):

  • Temperature: 0.4 (lower = more focused, higher = more creative)
  • Max Tokens: 800 (longer for complex payloads)
  • ✅ Safe Mode: ENABLED BY DEFAULT - prevents destructive SQL
  • ✅ Smart Fuzzing: Enable for adaptive payload generation
  • ✅ Capture all traffic: Monitor all Burp requests
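Under the hood, the extension talks to Ollama's OpenAI-compatible endpoint at http://localhost:11434/v1/chat/completions. A hedged sketch of the kind of request body these settings control (field names follow the standard OpenAI chat schema; this is an illustration, not the extension's actual code):

```python
import json

def build_chat_request(model, prompt, temperature=0.4, max_tokens=800):
    """Assemble the JSON body for Ollama's OpenAI-compatible
    /v1/chat/completions endpoint."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,   # 0.4 = balanced creativity
        "max_tokens": max_tokens,     # larger limit for complex payloads
    })

body = build_chat_request(
    "gdisney/mistral-uncensored",
    "Generate 5 XSS payloads. Return ONLY payload values, one per line, no explanations.",
)
```

This also explains why the API Key field accepts any value: Ollama's local endpoint does not authenticate, so the key is never actually sent anywhere meaningful.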

📖 Usage Guide

Basic Workflow

1. Capture a Request

  • Use Burp Proxy to intercept a request
  • Right-click β†’ "Ollama: Analyze Request"
  • Or manually paste a request into the extension tab

2. Generate Payloads

1. Select Attack Type (e.g., "SQL Injection")
2. Click "Analyze & Generate"
3. Review generated payloads in the "AI Analysis" tab
4. Parameters are automatically detected and displayed

3. Start Pentesting

1. Click "Start Pentesting"
2. Watch real-time results in the "Pentest Live" tab
3. Color-coded vulnerabilities appear (v2: with risk badges)
4. Check "Results" tab for detailed findings

4. Review & Export

- View metrics in the "Metrics" tab
- Generate AI summary (v2): "AI Summary" β†’ "Generate Summary"
- Export findings: "Export to CSV"
  - Only vulnerabilities
  - Full history
  - Evidence-only
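Session save/load (v2) works by serializing the testing session to JSON. A minimal sketch of the idea (the field names here are hypothetical, not the extension's actual schema):

```python
import json

def session_to_json(settings, findings):
    """Serialize session state to a JSON string (hypothetical schema)."""
    return json.dumps({"settings": settings, "findings": findings}, indent=2)

def session_from_json(text):
    """Restore session state from a saved JSON string."""
    data = json.loads(text)
    return data["settings"], data["findings"]

saved = session_to_json(
    {"model": "mistral", "temperature": 0.4},
    [{"type": "SQL Injection", "confidence": 85}],
)
```

Because the saved file is plain JSON, sessions can also be inspected, diffed, or archived outside Burp.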

Custom Prompts

For specialized testing, use CUSTOM PROMPT mode:

Attack Type: CUSTOM PROMPT
Custom Prompt: "Generate advanced SQL injection payloads for boolean-based blind attacks that bypass ModSecurity WAF. Return ONLY payload values, one per line, no explanations."

Important: Always end custom prompts with: "Return ONLY payload values, one per line, no explanations."


🔒 Safe Mode Feature (v2)

What is Safe Mode?

Safe Mode filters out destructive SQL payloads that could cause permanent data loss:

Blocked Operations:

  • DROP (tables/databases)
  • DELETE (row deletion)
  • UPDATE (data modification)
  • INSERT (data injection)
  • TRUNCATE (table clearing)
  • ALTER (schema changes)
  • CREATE, RENAME, GRANT, REVOKE
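A minimal sketch of how such a keyword filter can work (illustrative only; the extension's actual filtering logic may differ):

```python
import re

# Keywords Safe Mode treats as destructive (from the list above)
DESTRUCTIVE = ("DROP", "DELETE", "UPDATE", "INSERT", "TRUNCATE",
               "ALTER", "CREATE", "RENAME", "GRANT", "REVOKE")
_DESTRUCTIVE_RE = re.compile(r"\b(" + "|".join(DESTRUCTIVE) + r")\b",
                             re.IGNORECASE)

def filter_destructive(payloads):
    """Drop any payload containing a destructive SQL keyword."""
    return [p for p in payloads if not _DESTRUCTIVE_RE.search(p)]

kept = filter_destructive([
    "' OR 1=1--",                 # read-only: kept
    "1'; DROP TABLE users--",     # destructive: filtered out
    "' UNION SELECT NULL--",      # read-only: kept
])
```

Note the word boundaries (`\b`) and case-insensitive match: `drop`, `DROP`, and `Drop` are all caught, while substrings inside longer words are not.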

Using Safe Mode

Default: ✅ ENABLED (Recommended)

To Disable (not recommended):

  1. Uncheck "Safe Mode (no destructive)"
  2. Confirmation dialog appears with warning
  3. Click "Yes" to confirm (use extreme caution)

Best Practice: Keep Safe Mode enabled unless you have explicit authorization and understand the risks.


📊 Understanding Results

Risk Ratings (v2)

| Color | Risk Level | Example Vulnerabilities | Impact |
|---|---|---|---|
| 🔴 Dark Red | Critical | RCE, SSRF, XXE | Remote code execution |
| 🟠 Orange | High | SQL Injection, XSS | Data exfiltration possible |
| 🟡 Yellow | Medium | Open Redirect, CORS | Limited impact |
| 🟢 Green | Low | Information Disclosure | Minimal risk |
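The badge assignment can be thought of as a simple lookup from attack type to severity. A hypothetical sketch reproducing the table above (the extension's internal mapping may differ):

```python
# Hypothetical severity lookup matching the table above
RISK_LEVELS = {
    "RCE": "Critical", "SSRF": "Critical", "XXE": "Critical",
    "SQL Injection": "High", "XSS": "High",
    "Open Redirect": "Medium", "CORS": "Medium",
    "Information Disclosure": "Low",
}
BADGE_COLORS = {"Critical": "dark red", "High": "orange",
                "Medium": "yellow", "Low": "green"}

def risk_badge(attack_type):
    """Return (risk level, badge color) for an attack type."""
    level = RISK_LEVELS.get(attack_type, "Medium")  # default for unlisted types
    return level, BADGE_COLORS[level]
```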

Confidence Scores (v2)

  • 20-50%: Low confidence, possible false positive
  • 51-74%: Medium confidence, worth investigating
  • 75-89%: High confidence, likely vulnerable
  • 90-95%: Very high confidence, exploit confirmed
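The bands above reduce to a simple threshold function; a sketch:

```python
def confidence_band(score):
    """Map a 20-95% confidence score to its band from the list above."""
    if score >= 90:
        return "very high"
    if score >= 75:
        return "high"
    if score >= 51:
        return "medium"
    return "low"
```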

Evidence Indicators

Findings marked with "Evidence:" show actual proof:

SQL Injection - Database error detected [high|85%] | Evidence: "mysql_fetch_array() expects"
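Evidence strings like this come from matching known error fingerprints in the response body. A sketch of the idea (the signatures below are an illustrative subset, not the extension's actual list):

```python
# A few well-known database error fingerprints (illustrative subset)
ERROR_SIGNATURES = [
    "mysql_fetch_array() expects",
    "You have an error in your SQL syntax",
    "unclosed quotation mark after the character string",
    "pg_query(): Query failed",
]

def find_evidence(response_body):
    """Return the first matching error signature, or None."""
    lowered = response_body.lower()
    for sig in ERROR_SIGNATURES:
        if sig.lower() in lowered:
            return sig
    return None
```

A matched signature is strong evidence, which is why findings carrying an "Evidence:" field land at the top of the confidence range.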

πŸŽ›οΈ Advanced Configuration

Model Selection

Recommended Models:

  1. gdisney/mistral-uncensored (Best for offensive security)

    • Uncensored, generates aggressive payloads
    • Great for educational testing
  2. mistral (Balanced)

    • Good quality, moderate censorship
    • Suitable for general testing
  3. llama2 (Conservative)

    • More censored, safer outputs
    • Good for compliance testing

Temperature Settings

0.1-0.3: Focused, deterministic payloads
0.4-0.6: Balanced creativity (recommended)
0.7-0.9: Highly creative, diverse payloads
1.0+: Experimental, unpredictable

Performance Tuning

Request Delay: 
  - 0ms: Maximum speed (may trigger WAF)
  - 100ms: Balanced (recommended)
  - 500ms+: Stealth mode

Payload Count:
  - 5: Quick scan
  - 10: Standard test
  - 20+: Comprehensive audit

πŸ” Comparison: v1 vs v2

Choose v1 If:

  • ✅ You need basic functionality
  • ✅ You're new to Burp extensions
  • ✅ You prefer simplicity

Choose v2 If:

  • ✅ You need production-ready features
  • ✅ You want risk scoring and context-aware payloads
  • ✅ You need session persistence
  • ✅ You want AI-generated reports
  • ✅ You need Safe Mode protection

Recommendation: Use v2 for all serious testing scenarios.


πŸ“ Project Structure

```
Ollama-Pentest-AI/
├── Ollama-Pentest-AI-localhost-v1.py  # Basic version
├── Ollama-Pentest-AI-localhost-v2.py  # Enhanced version (recommended)
├── DeepSeek-Pentest-AI.py             # Original DeepSeek version
├── README-v1.md                       # v1 documentation
├── README-v2.md                       # v2 documentation
└── README.md                          # This file
```

πŸ› Troubleshooting

Common Issues

"API test failed"

```bash
# Check that Ollama is running
ollama list

# Start the Ollama service if it is not
ollama serve

# Test the chat endpoint with curl (it expects a POST with a JSON body)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral", "messages": [{"role": "user", "content": "ping"}]}'
```

"No parameters detected"

  • Ensure request starts with HTTP method (GET/POST/PUT/DELETE)
  • Check Content-Type header is present
  • Try capturing request through Burp Proxy

"Generate payloads first"

  • Click "Analyze & Generate" before "Start Pentesting"
  • Wait for payload generation to complete
  • Check "AI Analysis" tab for generated payloads

Slow payload generation

```
# Reduce Max Tokens in the UI
Max Tokens: 400 (instead of 800)

# Or use a faster model
Model: mistral (instead of llama2)
```

Extension not loading

  • Verify Jython is configured in Burp
  • Check Burp Extender "Output" tab for errors
  • Ensure Python file is not corrupted

🤝 Contributing

This is an educational fork. Contributions welcome:

  1. Fork the repository
  2. Create a feature branch
  3. Test thoroughly on vulnerable apps (DVWA, Juice Shop)
  4. Submit pull request with detailed description

📜 License

This project inherits the license from the original DeepSeek-Pentest-AI repository.


πŸ™ Credits

Original Author: Hernán Rodríguez

Ollama Adaptation: Educational fork for local AI-powered pentesting


📚 Further Reading


βš–οΈ Legal Notice

EDUCATIONAL USE ONLY

This tool is designed for:

  • ✅ Authorized penetration testing
  • ✅ Security research in controlled environments
  • ✅ Educational demonstrations
  • ✅ Bug bounty programs with permission

NEVER USE FOR:

  • ❌ Unauthorized access attempts
  • ❌ Malicious attacks
  • ❌ Testing without permission
  • ❌ Any illegal activity

By using this tool, you agree:

  • You have explicit authorization to test target systems
  • You understand the legal implications
  • You accept full responsibility for your actions
  • You will use Safe Mode in production environments

Runs in Burp Pro or Community



Happy (Legal) Hacking! 🎯

Remember: With great power comes great responsibility. Use this tool ethically.

About

Burp Suite plugin doing Offensive Security using local Models via Ollama