## Supported Versions

We release patches for security vulnerabilities in the following versions:
| Version | Supported |
|---|---|
| 0.19.x | ✅ |
| 0.18.x | ✅ |
| < 0.18 | ❌ |
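The table above amounts to a simple version floor: 0.18 and later receive patches. A minimal sketch of that check, assuming plain `major.minor.patch` version strings (the helper is illustrative, not part of the llm-council API):

```python
def is_supported(ver: str) -> bool:
    """Return True if `ver` falls inside the supported range (>= 0.18)."""
    major, minor, *_ = (int(part) for part in ver.split("."))
    return (major, minor) >= (0, 18)
```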
## Reporting a Vulnerability

We take security seriously. If you discover a security vulnerability, please report it responsibly.

**Do NOT open a public GitHub issue for security vulnerabilities.** Instead, please report them via one of these methods:
1. **GitHub Security Advisories (preferred)**
   - Go to the Security tab
   - Click "Report a vulnerability"
   - Fill out the private security advisory form

2. **Email**
   - Send details to: security@amiable.dev
   - Use our PGP key for sensitive information (available upon request)
Please include:
- Description of the vulnerability
- Steps to reproduce
- Affected versions
- Potential impact
- Any suggested fixes (optional)
Response timeline:

- Initial Response: Within 48 hours
- Status Update: Within 7 days
- Resolution Target: Within 90 days (depending on severity)
What to expect:

- We will acknowledge your report within 48 hours
- We will provide a more detailed response within 7 days
- We will work with you to understand and resolve the issue
- We will credit you in the security advisory (unless you prefer anonymity)
- We ask that you give us reasonable time to address the issue before public disclosure
## Security Best Practices

API keys:

- Never commit API keys to version control
- Use environment variables or secure key storage
- Rotate keys periodically
- Use the built-in keychain storage:

  ```bash
  llm-council setup-key
  ```
- Keep `.env` files in `.gitignore`
- Use `LLM_COUNCIL_SUPPRESS_WARNINGS=false` in production
- Review webhook URLs before enabling (HTTPS required by default)
- Use HTTPS for all external communications
- Configure webhook HTTPS enforcement: `LLM_COUNCIL_WEBHOOK_HTTPS_ONLY=true`
- Review gateway configurations for sensitive data exposure
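Loading keys from the environment, as recommended above, can be as simple as the following sketch (the helper name and error handling are illustrative, not part of the llm-council API):

```python
import os

def load_api_key(env_var: str) -> str:
    """Fetch an API key from the environment; fail loudly rather than
    falling back to a hardcoded value."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; refusing to start without it")
    return key
```

Failing at startup when the variable is unset beats silently using an empty or stale key.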
The council uses XML sandboxing in Stage 2 to prevent prompt injection attacks during peer review. However, users should still:
- Sanitize user inputs before sending to the council
- Review synthesized outputs before automated actions
- Use binary verdict mode for security-critical decisions
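Sanitizing input before it enters an XML-delimited prompt mostly means escaping the metacharacters that could close the sandbox element. A minimal sketch, assuming a hypothetical tag name (this is not the council's actual Stage 2 wrapper):

```python
from xml.sax.saxutils import escape

def wrap_untrusted(text: str, tag: str = "user_input") -> str:
    """Escape XML metacharacters so untrusted text cannot break out of
    its sandbox element and inject instructions into the prompt."""
    return f"<{tag}>{escape(text)}</{tag}>"
```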
Data handling:

- Session data is stored locally by default
- Cross-session bias metrics require explicit consent
- Query hashing (for RESEARCH consent) uses HMAC with configurable secret
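Keyed query hashing of the kind described above can be sketched with the standard library (the function name and secret handling are illustrative, not the actual implementation):

```python
import hashlib
import hmac

def hash_query(query: str, secret: bytes) -> str:
    """HMAC-SHA256 digest of a query: stable enough for cross-session
    metrics, but the raw text cannot be recovered without the secret."""
    return hmac.new(secret, query.encode("utf-8"), hashlib.sha256).hexdigest()
```

Using HMAC rather than a plain hash means an attacker without the configured secret cannot confirm guesses against stored digests.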
LLM Council implements a multi-layered security scanning pipeline (see ADR-035):
- Gitleaks: Secret detection before commit
- Ruff: Python linting and formatting
- CodeQL: Semantic code analysis for Python vulnerabilities
- Semgrep: SAST with custom LLM-specific rules
- Dependency Review: License and vulnerability checking on PRs
- Snyk: Continuous dependency monitoring
- Trivy: Container and filesystem vulnerability scanning
- SonarCloud: Code quality and security analysis
- SBOM: CycloneDX Software Bill of Materials attached to releases
- SLSA Provenance: Level 3 build provenance attestations (Sigstore-signed)
- OpenSSF Scorecard: Automated security health metrics
- PyPI Attestations: Automatic attestations via Trusted Publisher
- Enables downstream vulnerability tracking and artifact verification
To run these checks locally, install the pre-commit hooks:

```bash
pip install pre-commit
pre-commit install
```

Security updates are released as patch versions. Subscribe to:
- GitHub Releases (Watch > Custom > Releases)
- Security Advisories
We thank the security researchers who have helped improve the security of LLM Council:
- (Your name could be here!)