AI-powered Docker security scanner that explains vulnerabilities in plain English
Quick Start • Features • Installation • Usage • Contributing
Officially recognized as an OWASP Incubator Project
Trusted by the global security community • 14,000+ downloads
DockSec is an OWASP Incubator Project that combines traditional Docker security scanners (Trivy, Hadolint, Docker Scout) with AI to provide context-aware security analysis. Instead of dumping 200 CVEs and leaving you to figure it out, DockSec:
- Prioritizes what actually matters
- Explains vulnerabilities in plain English
- Suggests specific fixes for YOUR Dockerfile
- Generates professional security reports
Think of it as having a security expert review your Dockerfiles.
Being recognized as an OWASP Incubator Project means:
- ✅ Peer-reviewed by security professionals
- ✅ Community-driven development and governance
- ✅ Trusted by enterprises and security teams worldwide
- ✅ Open source with transparent security practices
- ✅ Active maintenance and regular updates
Join thousands of developers using DockSec to secure their containers.
DockSec follows a simple pipeline:
- Scan - Runs Trivy, Hadolint, and Docker Scout on your images/Dockerfiles
- Analyze - AI processes all findings and correlates them with your setup
- Recommend - Get plain English explanations with specific line-by-line fixes
- Report - Export results in JSON, PDF, HTML, or Markdown formats
# Install
pip install docksec
# Scan your Dockerfile
docksec Dockerfile
# Scan with image analysis
docksec Dockerfile -i myapp:latest
# Scan without AI (no API key needed)
docksec Dockerfile --scan-only
- Smart Analysis: AI explains what vulnerabilities mean for your specific setup
- Multiple LLM Providers: Support for OpenAI, Anthropic Claude, Google Gemini, and Ollama (local models)
- Multiple Scanners: Integrates Trivy, Hadolint, and Docker Scout
- Security Scoring: Get a 0-100 score to track improvements
- Multiple Formats: Export reports as HTML, PDF, JSON, or CSV
- No AI Required: Works offline with --scan-only mode
- CI/CD Ready: Easy integration into build pipelines
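For CI/CD, a minimal GitHub Actions sketch using only the package name and flags shown above (the workflow file name and report path are illustrative, not part of DockSec):

```yaml
# .github/workflows/docksec.yml (illustrative)
name: docker-security
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install docksec
      # --scan-only needs no API key; drop it (and add a key secret) for AI analysis
      - run: docksec Dockerfile --scan-only -o docksec-report.json
      - uses: actions/upload-artifact@v4
        with:
          name: docksec-report
          path: docksec-report.json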
Requirements: Python 3.12+, Docker (for image scanning)
pip install docksec
For AI features, choose your preferred LLM provider:
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
export LLM_PROVIDER="anthropic"
export LLM_MODEL="claude-3-5-sonnet-20241022"
export GOOGLE_API_KEY="your-key-here"
export LLM_PROVIDER="google"
export LLM_MODEL="gemini-1.5-pro"
# First, install and run Ollama: https://ollama.ai
# Then pull a model: ollama pull llama3.1
export LLM_PROVIDER="ollama"
export LLM_MODEL="llama3.1"
# Optional: customize Ollama URL
export OLLAMA_BASE_URL="http://localhost:11434"
External tools (optional, for full scanning):
# Install Trivy and Hadolint
python -m docksec.setup_external_tools
# Or install manually:
# - Trivy: https://aquasecurity.github.io/trivy/
# - Hadolint: https://github.com/hadolint/hadolint
# Analyze Dockerfile with AI recommendations
docksec Dockerfile
# Scan Dockerfile + Docker image
docksec Dockerfile -i nginx:latest
# Get only scan results (no AI)
docksec Dockerfile --scan-only
# Scan image without Dockerfile
docksec --image-only -i nginx:latest
# Use specific LLM provider and model
docksec Dockerfile --provider anthropic --model claude-3-5-sonnet-20241022
# Use local Ollama model
docksec Dockerfile --provider ollama --model llama3.1

| Option | Description |
|---|---|
| dockerfile | Path to Dockerfile |
| -i, --image | Docker image to scan |
| -o, --output | Output file path |
| --provider | LLM provider (openai, anthropic, google, ollama) |
| --model | Model name (e.g., gpt-4o, claude-3-5-sonnet-20241022) |
| --ai-only | AI analysis only (no scanning) |
| --scan-only | Scanning only (no AI) |
| --image-only | Scan image without Dockerfile |
Create a .env file for advanced configuration:
# LLM Provider Configuration
LLM_PROVIDER=openai # Options: openai, anthropic, google, ollama
LLM_MODEL=gpt-4o # Model to use
LLM_TEMPERATURE=0.0 # Temperature (0-1)
# API Keys
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
GOOGLE_API_KEY=your-google-key
# Ollama Configuration (for local models)
OLLAMA_BASE_URL=http://localhost:11434
# Scanning Configuration
TRIVY_SCAN_TIMEOUT=600
DOCKSEC_DEFAULT_SEVERITY=CRITICAL,HIGH
See full configuration options.
🔍 Scanning Dockerfile...
⚠️ Security Score: 45/100
Critical Issues (3):
• Running as root user (line 12)
• Hardcoded API key detected (line 23)
• Using vulnerable base image
💡 AI Recommendations:
1. Add non-root user: RUN useradd -m appuser && USER appuser
2. Move secrets to environment variables or build secrets
3. Update FROM ubuntu:20.04 to ubuntu:22.04 (fixes 12 CVEs)
📄 Full report: results/nginx_latest_report.html
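Applied to a Dockerfile, the three recommendations above could look roughly like this (illustrative only; the user name, scripts, and secret id are placeholders, not DockSec output):

```dockerfile
# Recommendation 3: updated base image
FROM ubuntu:22.04

WORKDIR /app
COPY . .

# Recommendation 1: run as a non-root user
RUN useradd -m appuser
USER appuser

# Recommendation 2: pass secrets at build time instead of hardcoding, e.g.
#   docker build --secret id=api_key,src=./api_key.txt .
# and read them via: RUN --mount=type=secret,id=api_key ./configure.sh

CMD ["./start.sh"]
```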
Dockerfile → [Trivy + Hadolint + Scout] → AI Analysis → Reports
DockSec runs security scanners locally, then uses AI to:
- Combine and deduplicate findings
- Assess real-world impact for your context
- Generate actionable remediation steps
- Calculate security score
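The merge-and-score steps above can be sketched as follows. This is not DockSec's actual implementation; the severity weights and dedup key are assumptions chosen for illustration:

```python
from dataclasses import dataclass

# Assumed weights: how much each severity subtracts from a 100-point baseline.
SEVERITY_WEIGHTS = {"CRITICAL": 25, "HIGH": 10, "MEDIUM": 4, "LOW": 1}

@dataclass(frozen=True)
class Finding:
    scanner: str    # e.g. "trivy", "hadolint", "scout"
    rule_id: str    # CVE or lint-rule identifier
    severity: str   # CRITICAL / HIGH / MEDIUM / LOW
    location: str   # file:line or image layer

def deduplicate(findings):
    """Keep one finding per (rule_id, location), preferring the higher severity."""
    rank = {s: i for i, s in enumerate(SEVERITY_WEIGHTS)}  # CRITICAL ranks first
    merged = {}
    for f in findings:
        key = (f.rule_id, f.location)
        if key not in merged or rank[f.severity] < rank[merged[key].severity]:
            merged[key] = f
    return list(merged.values())

def security_score(findings):
    """Map the severity-weighted penalty onto a 0-100 score."""
    penalty = sum(SEVERITY_WEIGHTS.get(f.severity, 0) for f in findings)
    return max(0, 100 - penalty)

raw = [
    Finding("trivy", "CVE-2024-0001", "CRITICAL", "ubuntu:20.04"),
    Finding("scout", "CVE-2024-0001", "HIGH", "ubuntu:20.04"),  # duplicate report
    Finding("hadolint", "DL3002", "HIGH", "Dockerfile:12"),     # root user
]

unique = deduplicate(raw)
print(len(unique), security_score(unique))  # 2 65
```

Two scanners reporting the same CVE collapse to one finding, so the score reflects distinct issues rather than scanner overlap.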
Supported AI Providers:
- OpenAI: GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo
- Anthropic: Claude 3.5 Sonnet, Claude 3 Opus
- Google: Gemini 1.5 Pro, Gemini 1.5 Flash
- Ollama: Llama 3.1, Mistral, Phi-3, and other local models
All scanning happens on your machine. Only scan results (not your code) are sent to the AI provider when using AI features.
- Multiple LLM provider support (OpenAI, Anthropic, Google, Ollama)
- Docker Compose support
- Kubernetes manifest scanning
- GitHub Actions integration
- Custom security policies
See open issues or suggest features in discussions.
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
"No OpenAI API Key provided"
→ Set the appropriate API key for your provider (OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY) or use --scan-only mode
"Unsupported LLM provider"
→ Valid providers: openai, anthropic, google, ollama. Set with --provider flag or LLM_PROVIDER env var
"Hadolint not found"
→ Run python -m docksec.setup_external_tools
"Python version not supported"
→ DockSec requires Python 3.12+. Use pyenv install 3.12 to upgrade.
"Connection refused" with Ollama
→ Make sure Ollama is running (ollama serve) and the model is pulled (ollama pull llama3.1)
"Where are my scan results?"
→ Results are saved to the results/ directory in your DockSec installation
→ Customize the location: export DOCKSEC_RESULTS_DIR=/custom/path
For more issues, see Troubleshooting Guide.
MIT License - see LICENSE for details.
DockSec is proud to be an OWASP Incubator Project, recognized by the Open Web Application Security Project for its contribution to application security.
- Vetted by Security Experts: OWASP projects undergo rigorous review
- Community Trust: Join thousands of security professionals using OWASP tools
- Enterprise Ready: OWASP recognition provides confidence for enterprise adoption
- Long-term Sustainability: Backed by a global nonprofit foundation
- OWASP Project Page: https://owasp.org/www-project-docksec/
- PyPI: https://pypi.org/project/docksec/
- Issues: https://github.com/advaitpatel/DockSec/issues
- Discussions: https://github.com/advaitpatel/DockSec/discussions
If DockSec helps you, give it a ⭐ to help others discover it!
Built with ❤️ by Advait Patel
