👋 Hi, I’m Frederick Baffour
AI Security Assurance Engineer | LLM Red Teaming | Model Supply-Chain Security
I specialize in AI Security Assurance, focusing on how AI models are evaluated, tested, and documented before use in real environments. My work covers the full lifecycle—from model intake and supply-chain verification to adversarial testing and structured reporting.
My background is in enterprise security engineering, and I apply the same discipline to AI systems: clear methodology, reproducible testing, and evidence-based conclusions.
- AI Security Assurance engineering
- LLM red teaming (Garak, Promptfoo, manual testing)
- Jailbreak, prompt-injection, and refusal-bypass evaluation
- Model supply-chain integrity (hashing, SBOMs, static analysis)
- Secure model execution and misuse analysis
- Garak, Promptfoo
- YARA, ClamAV, Sigcheck
- Syft / Grype
- Ollama, HuggingFace CLI
🔐 AI Security Assurance Labs
End-to-end portfolio demonstrating:
- Model intake & supply-chain verification
- Hashing, YARA, ClamAV, SBOM workflows
- LLM red teaming & behavioral evaluation
- Clear, reviewer-friendly documentation
👉 https://github.com/fred-ai-security/ai-security-assurance-labs
- AI Security Engineer
- LLM Red Team Engineer
- Model Evaluation & Assurance
- AI Systems Security
📬 Contact
Email: fbaffour@gmail.com
LinkedIn: https://www.linkedin.com/in/frederick-baffour