From da7040edd57acc73ec30765a2a272c175804bbcc Mon Sep 17 00:00:00 2001
From: starbuck100
Date: Thu, 12 Feb 2026 19:41:12 +0100
Subject: [PATCH] Add AgentAudit security badge

---
 README.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README.md b/README.md
index 4868393..a10687f 100644
--- a/README.md
+++ b/README.md
@@ -7,6 +7,9 @@
 # Agent Evaluation
 
+[![AgentAudit Security](https://img.shields.io/badge/AgentAudit-Safe-brightgreen?logo=data:image/svg%2Bxml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAyNCAyNCI+PHBhdGggZmlsbD0id2hpdGUiIGQ9Ik0xMiAxTDMgNXY2YzAgNS41NSAzLjg0IDEwLjc0IDkgMTIgNS4xNi0xLjI2IDktNi40NSA5LTEyVjVsLTktNHoiLz48L3N2Zz4=)](https://www.agentaudit.dev/skills/agent-evaluation)
+
+
 Agent Evaluation is a generative AI-powered framework for testing virtual agents. Internally, Agent Evaluation implements an LLM agent (evaluator) that will orchestrate conversations with your own agent (target) and evaluate the responses during the conversation.
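The badge added by this patch is ordinary shields.io markdown: the `logo` query parameter carries a base64 data-URI SVG, and the outer link points to the AgentAudit page for this repository. The README paragraph kept as context in the hunk describes the framework's evaluator/target split: an LLM-backed evaluator drives a conversation with the agent under test and judges each reply. A minimal sketch of that loop follows; the names (`Evaluator`, `Target`, `invoke`, `judge`) are illustrative assumptions, not the Agent Evaluation library's actual API.

```python
# Illustrative sketch only -- class and method names are assumptions,
# not the Agent Evaluation library's real API.
from dataclasses import dataclass, field


@dataclass
class Target:
    """The agent under test; replace `invoke` with a call to your own agent."""

    def invoke(self, prompt: str) -> str:
        return f"(target response to: {prompt})"


@dataclass
class Evaluator:
    """Driver that steers the conversation and scores each response."""

    test_steps: list[str]
    transcript: list[tuple[str, str]] = field(default_factory=list)

    def judge(self, step: str, response: str) -> bool:
        # In the real framework an LLM would grade the response against an
        # expected result; here we only check that something came back.
        return bool(response.strip())

    def run(self, target: Target) -> bool:
        # Orchestrate the conversation: send each step, record the reply,
        # and stop at the first failing judgment.
        for step in self.test_steps:
            response = target.invoke(step)
            self.transcript.append((step, response))
            if not self.judge(step, response):
                return False
        return True


if __name__ == "__main__":
    evaluator = Evaluator(test_steps=["Ask about store opening hours."])
    print("PASSED" if evaluator.run(Target()) else "FAILED")
```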