# Security Policy

Security is a top priority for llmgo. As an enterprise-grade LLM framework handling prompt execution, tool orchestration, and data ingestion, we take vulnerabilities very seriously.

## Supported Versions

We currently provide security updates for the following versions of llmgo:

| Version | Supported | Notes |
| ------- | --------- | ----- |
| 1.0.x   | ✅        | Active mainline development |
| < 1.0   | ❌        | Beta/experimental phases (not recommended) |

## Reporting a Vulnerability

**Do not** open a public GitHub issue to report a security vulnerability. Public disclosure can put the community and enterprise users at risk before a fix is available.

If you discover a security vulnerability within llmgo, please responsibly disclose it by sending an email directly to the maintainers at: 👉 [update]

### What to include in your report

- A detailed description of the vulnerability.
- Steps to reproduce the issue (including any specific LLM models, prompts, or Tool payloads used).
- Potential impact on the system (e.g., denial of service, remote code execution via Tools, memory leak).

### Response Timeline

- You should receive an acknowledgment of your report within 48 hours.
- We will keep you updated on the progress of the patch and coordinate a public disclosure date once the fix is safely released.

## LLM & AI Security Posture

llmgo is architected with several built-in guardrails to mitigate common AI threat vectors. Security researchers should be aware of the following design choices:

1. **Context Overflow & Memory Bloat (Anti-DoS):**
   - The `internal/pool` package restricts LLM response buffers. If a malicious prompt forces the LLM to emit a massive payload, the framework gracefully drops the buffer rather than suffering an out-of-memory (OOM) crash.
   - `BufferMemory` applies strict `PruneRatio` constraints to self-heal when token limits are breached.
2. **Tool Execution Isolation (Panic Recovery):**
   - All tools executed via `Registry.ExecuteParallel` run in isolated goroutines wrapped in `defer`/`recover()`. A malicious or malfunctioning tool payload parsed by the LLM cannot crash the Orchestrator Engine.
3. **Prompt Injection Mitigation:**
   - We strongly encourage using the `InputSanitizer` middleware, injected via `orchestrator.WithInputSanitizer()`. It provides a first line of defense against character-escaping attacks before the payload reaches the Router.
4. **Zero-Copy RAG Safety:**
   - The RAG `DirectoryLoader` is bounded by a strict semaphore (`maxConcurrency`) to prevent file-descriptor exhaustion when the framework is pointed at deeply nested or infinite-symlink directory trees.
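To make the `PruneRatio` self-healing behavior in guardrail 1 concrete, here is a minimal sketch. The pruning rule, the token-counting callback, and the function name are assumptions for illustration; the real `BufferMemory` internals may differ.

```go
package main

import "fmt"

// prune drops the oldest fraction of messages once the running token
// count exceeds maxTokens. The PruneRatio semantics here are assumed;
// the real BufferMemory may count tokens and select victims differently.
func prune(msgs []string, tokens func(string) int, maxTokens int, pruneRatio float64) []string {
	total := 0
	for _, m := range msgs {
		total += tokens(m)
	}
	if total <= maxTokens {
		return msgs // under the limit: nothing to do
	}
	drop := int(float64(len(msgs)) * pruneRatio)
	if drop < 1 {
		drop = 1 // always make progress
	}
	return msgs[drop:] // oldest messages are evicted first
}

func main() {
	byLen := func(s string) int { return len(s) } // toy tokenizer
	kept := prune([]string{"aaaa", "bbbb", "cccc", "dddd"}, byLen, 8, 0.5)
	fmt.Println(kept) // [cccc dddd]
}
```

The point of the ratio (rather than evicting one message at a time) is that a single prune pass frees a meaningful chunk of the window, so a flood of small messages cannot force an eviction on every turn.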
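The panic-recovery pattern behind guardrail 2 can be sketched as follows. The `ToolFunc` signature and function names are illustrative, not the actual `Registry.ExecuteParallel` API; what matters is the per-goroutine `defer`/`recover` converting a panic into an ordinary error.

```go
package main

import (
	"fmt"
	"sync"
)

// ToolFunc is a hypothetical tool signature for illustration.
type ToolFunc func(input string) (string, error)

// runIsolated executes one tool and converts any panic into an error,
// so a bad payload never unwinds past this frame.
func runIsolated(tool ToolFunc, input string) (out string, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("tool panicked: %v", r)
		}
	}()
	return tool(input)
}

// executeParallel fans tools out across goroutines, one recover each.
// A panicking tool fails in isolation; the others complete normally.
func executeParallel(tools []ToolFunc, input string) []error {
	errs := make([]error, len(tools))
	var wg sync.WaitGroup
	for i, t := range tools {
		wg.Add(1)
		go func(i int, t ToolFunc) {
			defer wg.Done()
			_, errs[i] = runIsolated(t, input)
		}(i, t)
	}
	wg.Wait()
	return errs
}

func main() {
	errs := executeParallel([]ToolFunc{
		func(s string) (string, error) { return s, nil },
		func(s string) (string, error) { panic("malicious payload") },
	}, "hello")
	fmt.Println(errs[0] == nil, errs[1] != nil) // true true
}
```

Note that the `recover` must live inside each spawned goroutine: a `recover` in the caller cannot catch a panic raised on another goroutine, which is exactly why per-goroutine wrapping is the isolation boundary here.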
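For guardrail 3, a minimal stand-in for what sanitization middleware might do is shown below. This is not the real `InputSanitizer` rule set (which we have not reproduced here); it only illustrates one common first-line measure, stripping non-printable control characters that are a frequent vehicle for character-escaping attacks.

```go
package main

import (
	"fmt"
	"strings"
)

// sanitize drops non-printable control characters from user input,
// keeping ordinary whitespace (newline, tab). This is an illustrative
// subset of what a sanitizer middleware might enforce.
func sanitize(s string) string {
	return strings.Map(func(r rune) rune {
		if r < 0x20 && r != '\n' && r != '\t' {
			return -1 // returning -1 removes the rune
		}
		return r
	}, s)
}

func main() {
	// An embedded ANSI escape (ESC, 0x1b) is stripped before routing.
	fmt.Printf("%q\n", sanitize("hello\x1b[2Jworld"))
}
```

Running it as middleware, before the payload reaches the Router, means every downstream consumer (prompt builder, tools, logs) sees only the cleaned input.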
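The semaphore bound in guardrail 4 is a standard Go idiom: a buffered channel whose capacity is the concurrency limit. The function and parameter names below are illustrative, not the `DirectoryLoader` API, but the mechanism is the same — at most `maxConcurrency` loads (and thus open file descriptors) are in flight at once.

```go
package main

import (
	"fmt"
	"sync"
)

// loadFiles processes paths with at most maxConcurrency concurrent
// loads. Acquiring a slot (send) blocks once the channel buffer is
// full; releasing (receive) frees the slot for the next path.
func loadFiles(paths []string, maxConcurrency int, load func(string)) {
	sem := make(chan struct{}, maxConcurrency)
	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		sem <- struct{}{} // acquire: blocks at the concurrency cap
		go func(p string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			load(p)
		}(p)
	}
	wg.Wait()
}

func main() {
	var mu sync.Mutex
	loaded := 0
	loadFiles([]string{"a.txt", "b.txt", "c.txt", "d.txt"}, 2, func(p string) {
		mu.Lock()
		loaded++
		mu.Unlock()
	})
	fmt.Println(loaded) // 4
}
```

Because the cap applies regardless of how many paths a directory walk yields, even an adversarial tree (deeply nested, or cyclic via symlinks) degrades into slow sequential progress rather than file-descriptor exhaustion.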

We welcome audits and improvements to these internal guardrails!
