| Version | Supported |
|---|---|
| 0.1.x | ✅ |
We take security seriously. If you discover a security vulnerability in ragnarok-ai, please report it responsibly.
Do NOT open a public GitHub issue for security vulnerabilities.
Instead, please use GitHub Security Advisories to report vulnerabilities privately. In your report, please include:
- Description of the vulnerability
- Steps to reproduce
- Potential impact
- Suggested fix (if any)
| Action | Timeline |
|---|---|
| Acknowledgment | Within 48 hours |
| Initial assessment | Within 7 days |
| Fix development | Depends on severity |
| Public disclosure | After fix is released |
- Acknowledgment: We will confirm receipt of your report within 48 hours
- Assessment: We will investigate and assess the severity
- Communication: We will keep you informed of our progress
- Fix: We will develop and test a fix
- Release: We will release the fix and credit you (unless you prefer anonymity)
- Disclosure: We will publish a security advisory
- Keep ragnarok-ai updated to the latest version
- Run Ollama locally — ragnarok-ai is designed for local-first operation
- Don't expose services — Keep Ollama and vector stores on localhost
- Review configurations — Check `ragnarok.yaml` for sensitive settings
- No hardcoded secrets — Use environment variables
- Validate inputs — Sanitize all user inputs
- Dependencies — Keep dependencies updated, review security advisories
- Code review — All PRs require review before merge
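To illustrate the no-hardcoded-secrets rule above, a minimal sketch of loading a credential from the environment instead of source code or config files. The variable name `RAGNAROK_API_KEY` and the helper `load_secret` are hypothetical, not part of ragnarok-ai:

```python
import os

def load_secret(name: str) -> str:
    """Fetch a secret from the environment; fail loudly if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"environment variable {name} is not set")
    return value

# Usage (RAGNAROK_API_KEY is a hypothetical name for illustration):
#   export RAGNAROK_API_KEY=...   before launching the tool
#   api_key = load_secret("RAGNAROK_API_KEY")
```

Failing loudly at startup is preferable to silently falling back to an empty credential, which can mask misconfiguration until much later.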
- ragnarok-ai core library (`src/ragnarok_ai/`)
- CLI tool
- Official adapters (Ollama, Qdrant, LangChain)
- Configuration handling
- Data processing pipelines
- Third-party dependencies (report to their maintainers)
- Ollama security issues (report to Ollama team)
- Vector store security (report to respective projects)
- User misconfiguration
ragnarok-ai is designed to run 100% locally. This means:
- No data is sent to external APIs by default
- LLM inference happens on your machine via Ollama
- Vector stores run locally
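If you want to sanity-check that configured endpoints really stay on the local machine, a small hypothetical helper (not part of ragnarok-ai) can validate URLs before use; 11434 is Ollama's default port:

```python
from urllib.parse import urlparse

def is_local(url: str) -> bool:
    """Return True if the URL points at the local machine."""
    host = urlparse(url).hostname
    return host in {"localhost", "127.0.0.1", "::1"}

# Example: is_local("http://localhost:11434") for an Ollama endpoint
```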
The tool reads and writes files for:
- Knowledge base documents
- Test sets
- Evaluation reports
- Checkpoints
Ensure appropriate file permissions in production environments.
When using LLM-as-judge for metrics (faithfulness, relevance), be aware that:
- Malicious content in documents could attempt prompt injection
- We implement basic sanitization, but no system is foolproof
- Review generated test sets before using in CI/CD
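As a rough idea of what screening documents for injection attempts can look like, here is an illustrative pattern check. This is not ragnarok-ai's actual sanitization logic, and phrase lists like this are easy to evade, so treat it as one defensive layer, not a guarantee:

```python
import re

# Common prompt-injection phrases (illustrative, far from exhaustive).
SUSPICIOUS = re.compile(
    r"(ignore (all )?(previous|prior) instructions|you are now|system prompt)",
    re.IGNORECASE,
)

def flag_suspicious(document: str) -> bool:
    """Return True if a document contains phrases often used in prompt injection."""
    return SUSPICIOUS.search(document) is not None
```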
Thank you for helping keep ragnarok-ai secure!