MSc Computer Science (Artificial Intelligence), University of Wolverhampton (in progress)
Specialising in AI Governance, Accountability, Risk & Compliance.
I build AI governance frameworks that ensure:
- Compliance with GDPR Article 22, the EU AI Act, ISO/IEC 42001, and ISO/IEC 27001/27701
- Fair, explainable and accountable AI system behaviour
- Clear human oversight and escalation paths
- Protection against algorithmic harm
This profile includes hands-on examples demonstrating how I operationalise:
- Explainability (SHAP, LIME)
- Fairness testing (Aequitas, Fairlearn)
- Counterfactual accountability
- ISO 42001 lifecycle governance controls
- Continuous monitoring and drift detection
→ Turning responsible AI principles into auditable evidence.
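To make the explainability item concrete, here is a minimal pure-Python sketch of feature attribution by ablation: each feature's contribution is the score drop when that feature is reset to a baseline value. This is a crude cousin of the idea behind SHAP, which averages such effects over all feature orderings; the risk-score function and feature names below are hypothetical.

```python
def ablation_attribution(score_fn, instance, baseline):
    """Per-feature contribution: how much the score falls when one
    feature is reset to its baseline value (all others held fixed)."""
    full = score_fn(instance)
    contributions = {}
    for name in instance:
        perturbed = dict(instance, **{name: baseline[name]})
        contributions[name] = full - score_fn(perturbed)
    return contributions

# Hypothetical linear risk score over two made-up features
score = lambda x: 0.5 * x["age"] + 2.0 * x["defaults"]
inst = {"age": 40, "defaults": 3}
base = {"age": 30, "defaults": 0}
print(ablation_attribution(score, inst, base))  # {'age': 5.0, 'defaults': 6.0}
```

For a linear model these ablation effects coincide with exact Shapley values; for non-linear models SHAP's averaging over orderings is what keeps the attributions consistent.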
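For fairness testing, the core metric is easy to show without any framework. The sketch below computes the selection-rate gap between groups, i.e. the quantity Fairlearn reports as `demographic_parity_difference`; the toy decisions and group labels are illustrative only.

```python
def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction (selection) rates across groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Toy screening decisions: 1 = approved, 0 = declined
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(y_pred, groups)
print(f"Selection-rate gap: {gap:.2f}")  # 0.75 for A vs 0.25 for B -> 0.50
```

In an audit, a gap this size would trigger the escalation path defined in the governance framework rather than an automatic "fix"; the threshold itself is a documented policy decision.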
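Counterfactual accountability means being able to tell an affected person what minimal change would have flipped the decision. This is a deliberately simplified one-feature search under a hypothetical credit rule (real counterfactual tooling, e.g. DiCE, searches across features with distance and plausibility constraints).

```python
def nearest_counterfactual(applicant, approve_fn, feature, step, max_steps=100):
    """Smallest increase in one feature that flips a declined decision.
    Returns the modified applicant, or None if no flip within max_steps."""
    candidate = dict(applicant)
    for _ in range(max_steps):
        if approve_fn(candidate):
            return candidate
        candidate[feature] += step
    return None

# Hypothetical rule: approve when income - 2 * debt >= 30 (made-up units)
approve = lambda a: a["income"] - 2 * a["debt"] >= 30
applicant = {"income": 40, "debt": 10}  # declined: 40 - 20 = 20 < 30
cf = nearest_counterfactual(applicant, approve, "income", step=1)
print(f"Approve if income rises to {cf['income']}")  # 50
```

The counterfactual doubles as redress evidence: it documents exactly which facts drove the adverse outcome, which is what GDPR Article 22 meaningful-information duties ask for.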
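For continuous monitoring, a standard drift signal is the Population Stability Index over model scores. Below is a stdlib-only sketch; the common rule of thumb (PSI < 0.1 stable, 0.1 to 0.25 moderate, > 0.25 significant drift) is a convention, not a standard, and the reference and live samples are synthetic.

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between a reference sample (e.g.
    training-time scores) and a live sample (production scores)."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch live values above the reference max

    def frac(sample, i):
        n = sum(1 for x in sample if edges[i] <= x < edges[i + 1])
        return max(n / len(sample), 1e-6)  # avoid log(0) on empty bins

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

reference = [0.1 * i for i in range(100)]        # training-time scores
live = [0.1 * i + 3.0 for i in range(100)]       # shifted production scores
print(f"PSI: {psi(reference, live):.3f}")        # well above the 0.25 alarm line
```

Under ISO/IEC 42001-style lifecycle controls, a breached PSI threshold would open a logged incident and route to the human oversight path rather than silently retraining.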
| Case Study | What Went Wrong | Link |
|---|---|---|
| NI Child Benefit Failure | Broken oversight → families wrongly penalised | /modules/01_NI_ChildBenefit_Failure |
| Airport AI Decision Harm | Lack of explainability & redress pathways | /modules/case01_ni_airport_harm |
| Deloitte Hallucinated Report (2024) | Weak governance → fabricated regulatory evidence | /modules/governance-failure-analysis/deloitte-llm-hallucination-case |
Python · SHAP · LIME · Aequitas · Fairlearn · Streamlit · Pandas · NIST AI RMF · ISO/IEC 42001
- Responsible AI / AI Governance
- AI Risk & Compliance
- Model Explainability & Audit
If your organisation is preparing for the EU AI Act or working towards ISO/IEC 42001 readiness, let's connect.
