A reproducible adversarial-ML lab demonstrating TextFooler, BERT-Attack, and DeepWordBug attacks against transformer-based sentiment models, with Docker automation and adversarial security reporting.
adversarial-machine-learning ai-security distilbert textattack mitre-atlas llm-security nlp-security ai-security-toolkit ai-ml-redteam
Updated Mar 17, 2026 - Python