-
Notifications
You must be signed in to change notification settings - Fork 0
Home
p0bailey edited this page Jun 27, 2025
·
61 revisions

by Phillip Bailey
AI transforms how organisations operate—but it also introduces risk: attacks, bias, opacity, misuse, and regulatory failure.
This framework offers a practical, lean, and action-focused structure to govern, secure, monitor, and improve AI systems.
Built on NIST AI RMF, NIST CSF 2.0, EU AI Act, ISO 42001, and OWASP standards.
Supports LLMs, agentic AI, computer vision, and more.
It includes a built-in AI threat modeling workflow to:
- Map system-specific risks early
- Identify model, data, and infrastructure exposures
- Align mitigations to OWASP AI Top 10, MITRE ATLAS, and STRIDE
- Integrate threat insights into governance, monitoring, and response
Most frameworks are:
- Too abstract or academic
- Too focused on documentation over action
- Locked inside enterprise-only tools
This one is:
- Practical, not academic
- Risk-driven, not checklist-based
- MLSecOps-ready, not locked in legacy tooling
| Risk Theme | Description |
|---|---|
| Bias & Discrimination | Skewed outcomes, especially impacting protected groups |
| Opacity | Black-box models without auditability or explanation |
| Model Drift | Performance decay due to changing data or context |
| Adversarial Attacks | Prompt injection, model extraction, data poisoning |
| Privacy Violations | Leakage of sensitive or personal information via model outputs |
| Hallucinated Outputs | Confident but false or misleading responses |
| Off-Label Use | Models used outside of safe/intended purpose |
| Over-Reliance | Uncritical trust in AI without human checks |
- AI + Traditional Security Integration – Links AI-specific threats to standard security controls
- Regulatory Coverage – Aligns to EU AI Act, ISO 42001, and sector-specific rules
- Threat Protection – Maps to OWASP AI Top 10, MITRE ATLAS, and adversarial patterns
- Infrastructure Security – Secures CI/CD pipelines, APIs, supply chains, and deployments
- Application Security – Protects inference endpoints, web inputs, toolchains
- Living Threat Intelligence – Informed by MITRE, OWASP, and red teaming data
- Embedded Threat Modeling Workflow – Enables system-specific threat analysis across lifecycle
What It Covers
The AI GRC framework is structured around six lifecycle functions:
| Function | Purpose | Key Elements |
|---|---|---|
| Govern | Define roles, policies, and accountability for AI systems | Ownership, oversight, governance structure, third-party controls |
| Assess | Identify, map, and classify AI use cases and risks | AI threat modelling workflow to uncover threats early |
| Secure | Apply controls to protect models, data, and infrastructure | OWASP, NIST, ISO, and MLSecOps-aligned security controls |
| Monitor | Track trust metrics, drift, bias, incidents, and attacks | Telemetry, logs, audit trails, anomaly detection |
| Respond | Contain and remediate AI incidents | Response plans, escalation paths, transparency practices |
| Improve | Learn from failure and strengthen the AI system | Retrospectives, model updates, policy tuning, risk posture improvement |
Each function includes:
- Why it matters
- What to cover
- How to apply it
- Mapping to NIST CSF, AI RMF, EU AI Act, ISO 42001
- Templates and lean artefacts to accelerate adoption
| System Type | Example Use Cases | Key Risk Themes |
|---|---|---|
| Generative AI | Text, image, audio, code generation | Hallucination, misuse, IP leakage |
| Agentic AI | Autonomous planning/decision agents | Misalignment, autonomy abuse |
| Image Recognition | Tagging, OCR, anomaly detection | Surveillance, false positives, adversarial input |
| Predictive Modelling | Forecasting, scoring, fraud detection | Drift, unfair bias, lack of explainability |
| Reinforcement Learning | Robotics, gaming, control systems | Reward hacking, unsafe exploration |
| Recommendation Systems | Ranking, personalisation | Manipulation, filter bubbles, privacy |
| NLP Systems | Sentiment, translation, summarisation | Toxicity, context errors, bias |
| Speech Systems | Voice assistants, transcription | Misinterpretation, misactivation, privacy |
| Computer Vision | Object/face detection | Identity abuse, adversarial patches |
| Autonomous Systems | Drones, robotics, self-driving | Physical harm, control loss, liability |
| Decision Support Systems | Legal, clinical, financial | Over-reliance, data gaps, explainability |
| Federated & Edge AI | Local inference, edge learning | Data leakage, fragmented governance |
| AI for Cybersecurity | Threat detection, auto-response | False positives, adversarial evasion |
| Critical Infrastructure AI | Defence, transport, energy | Systemic risk, cascading failures |
This framework supports cross-functional adoption:
- CISOs & GRC Leaders – Ensure AI meets policy and regulatory standards
- AI/ML Engineers – Build secure, trustworthy models
- Platform & MLOps Teams – Enforce controls across the AI lifecycle
- Product Owners – Deliver compliant, high-impact AI in sensitive domains
Use the links below to explore each function in depth:
Use the links below to explore each function in depth: