If you discover a security vulnerability in this project, please report it by:
- Email: institutoconsciencia@proton.me
- GitHub Security Advisory: use the private reporting feature

Response time objective: 7 days for an initial response to vulnerability reports.
Please do NOT report security vulnerabilities through public GitHub issues.
This project uses automated dependency scanning to identify and address security vulnerabilities in dependencies.
The project runs weekly dependency health checks using:
- pip-audit: Scans for known security vulnerabilities in Python dependencies
- GitHub Dependabot: Monitors for security updates (if enabled)
- The `dependency-health.yml` workflow runs weekly on Wednesdays at 10:00 UTC
- It scans all dependencies listed in `requirements.txt`
- If vulnerabilities are found, an issue is automatically created with:
  - Details of vulnerable packages
  - Affected versions
  - Available fixes
  - Links to security advisories
- The workflow automatically closes false-positive issues when no real vulnerabilities exist
You can manually trigger a security check:
```shell
# Install pip-audit
pip install pip-audit

# Run security scan
pip-audit --desc --format json

# Or with a requirements file
pip-audit -r requirements.txt
```

We support the following Python versions:
- Python 3.11 (production standard)
- Python 3.12 (future-proofing)
When contributing to this project:
- Keep dependencies updated: Regularly check for security updates
- Review security advisories: Check the GitHub Advisory Database
- Follow secure coding practices:
  - Validate user inputs
  - Use parameterized queries
  - Avoid hardcoded credentials
  - Use secure random number generators
- Test security updates: Run the full test suite after updating dependencies
- Monitor security issues: Check automated security issues regularly
- Update promptly: Apply security patches as soon as possible
- Verify compatibility: Test updates with Python 3.11 and 3.12
- Document changes: Update CHANGELOG.md with security fixes
- Communicate: Notify users of critical security updates
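The "secure random number generators" point above can be sketched with Python's standard library: a minimal example (not code from this project) using the `secrets` module, which is designed for security-sensitive randomness, unlike `random`.

```python
import secrets

# Cryptographically strong, URL-safe token (e.g. for a session ID or nonce)
session_token = secrets.token_urlsafe(32)

# Constant-time comparison avoids timing side channels when checking secrets
def tokens_match(provided: str, expected: str) -> bool:
    return secrets.compare_digest(provided, expected)

print(tokens_match(session_token, session_token))  # True
```

`secrets.token_urlsafe(32)` draws 32 random bytes from the OS CSPRNG and Base64-encodes them, which is suitable wherever an unguessable string is needed.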
Issue: The dependency health workflow was creating false-positive security issues even when no vulnerabilities existed.
Root Cause: The workflow checked if the pip-audit JSON report file existed, but didn't verify if any packages actually had vulnerabilities. pip-audit generates a report file even when all packages are secure (with empty vulns arrays).
Fix:
- Added proper JSON parsing to check whether any package has a non-empty `vulns` array
- Enhanced issue creation to include detailed vulnerability summaries
- Added automatic closing of false-positive issues
- Improved reporting to clearly distinguish between packages with and without vulnerabilities
Impact: Reduces noise from false-positive security alerts and provides more actionable information when real vulnerabilities are detected.
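The core of the fix can be sketched as follows. This is an illustrative example, not the workflow's actual code; it assumes pip-audit's `--format json` layout, where each dependency entry carries a `vulns` list.

```python
import json

def has_real_vulnerabilities(report_json: str) -> bool:
    """True only if at least one dependency carries a non-empty vulns list."""
    report = json.loads(report_json)
    # Recent pip-audit versions nest results under a "dependencies" key;
    # falling back to a bare list for older layouts is an assumption here.
    deps = report["dependencies"] if isinstance(report, dict) else report
    return any(dep.get("vulns") for dep in deps)

# A report where every package is clean must NOT trigger an issue
clean = '{"dependencies": [{"name": "requests", "version": "2.32.0", "vulns": []}]}'
print(has_real_vulnerabilities(clean))  # False
```

Checking the contents of the report, rather than the mere existence of the report file, is exactly what eliminates the false positives described above.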
- Critical vulnerabilities: Fix within 24-48 hours
- High severity: Fix within 7 days
- Medium severity: Fix within 30 days
- Low severity: Fix in next regular update cycle
When we address a security vulnerability:
- We will acknowledge receipt of the report within 48 hours
- We will provide an estimated timeline for a fix
- We will release a patch and security advisory
- We will credit the reporter (unless anonymity is requested)
Policy: All authentication tokens and credentials MUST be provided via environment variables only. Command-line arguments and configuration files are explicitly forbidden for security reasons.
- Hugging Face Token (`HF_TOKEN`)
  - Used for: Llama 4 model downloads and inference
  - Required: only when using `use_llama4=True` in QCAL-LLM
  - Format: `hf_XXXXXXXXXXXXXXXXXXXXXXXXXXXX` (34+ characters)
  - Setup: `export HF_TOKEN=your_token_here`
  - Never: pass via `--token` or `--hf-token`, or store in code
- OpenAI API Key (`OPENAI_API_KEY`)
  - Used for: benchmark comparisons (optional)
  - Setup: `export OPENAI_API_KEY=your_key_here`
- Anthropic API Key (`ANTHROPIC_API_KEY`)
  - Used for: benchmark comparisons (optional)
  - Setup: `export ANTHROPIC_API_KEY=your_key_here`
The project includes automated tests that fail if token patterns are detected in the repository:
```shell
# Run token detection test
python tests/test_security_no_tokens.py
```

What it checks:
- Hugging Face tokens: `hf_[A-Za-z0-9]{30,}`
- OpenAI API keys: `sk-[A-Za-z0-9]{32,}`
- Generic secrets: high-entropy strings in suspicious contexts
CI/CD Integration: This test runs on every push and pull request. PRs containing tokens will be automatically rejected.
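The pattern checks listed above can be sketched with Python's `re` module. This is an illustrative example, not the project's actual test implementation.

```python
import re

# Patterns mirroring the checks described above
TOKEN_PATTERNS = {
    "huggingface": re.compile(r"hf_[A-Za-z0-9]{30,}"),
    "openai": re.compile(r"sk-[A-Za-z0-9]{32,}"),
}

def find_leaked_tokens(text: str) -> list[str]:
    """Return every substring of `text` that matches a known token pattern."""
    return [m.group(0)
            for pattern in TOKEN_PATTERNS.values()
            for m in pattern.finditer(text)]

print(find_leaked_tokens("clean config, nothing to see"))  # []
```

A real scanner would walk the repository tree and fail the test suite on any non-empty result; high-entropy detection for generic secrets needs a separate heuristic.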
If a token is accidentally committed:
- Immediately revoke the token at the service provider
- Generate a new token with appropriate scopes
- Remove it from history using `git filter-branch` or BFG Repo-Cleaner
- Report the incident to maintainers if in a public repository
✅ DO:
- Store tokens in environment variables
- Use `.env` files (ensure they're in `.gitignore`)
- Use secret management services (GitHub Secrets, AWS Secrets Manager, etc.)
- Rotate tokens regularly
- Use minimum required scopes/permissions
❌ DON'T:
- Hard-code tokens in source files
- Pass tokens via command-line arguments
- Commit `.env` files to version control
- Share tokens in issues, PRs, or discussions
- Use production tokens for testing
Create a `.env` file in the project root (never commit this file):

```shell
# Hugging Face token (optional, only for Llama 4 integration)
HF_TOKEN=hf_your_token_here

# OpenAI API key (optional, only for benchmarks)
OPENAI_API_KEY=sk-your_key_here

# Anthropic API key (optional, only for benchmarks)
ANTHROPIC_API_KEY=sk-ant-your_key_here
```

Load it with:

```python
from dotenv import load_dotenv

load_dotenv()  # Automatically loads the .env file
```

- Python Security Best Practices
- OWASP Top 10
- GitHub Security Features
- pip-audit Documentation
- Git Secrets - Prevent committing secrets
Last updated: 2025-01-01