History Poison Lab: Vulnerable LLM implementation demonstrating Chat History Poisoning attacks. Learn how attackers manipulate chat context and explore mitigation strategies for secure LLM applications.
ai-security vulnerability-testing prompt-injection llm-security agentic-ai secure-llm context-engineering chat-history-poisoning exploitation-demo history-poison-lab
-
Updated
Nov 27, 2025 - TypeScript