
/aiwatch

The AI governance world shifts every week. This skill keeps you informed.

/aiwatch researches any AI governance topic across UK Parliament, regulatory bodies, EU enforcement, academic literature, specialist press, and case law — then synthesises a structured intelligence briefing with real citations, a gap analysis, and a trajectory assessment.



Installation

Claude Code

/plugin marketplace add obra/superpowers-marketplace
/plugin install aiwatch@superpowers-marketplace

Manual

git clone https://github.com/obielin/aiwatch-skill.git ~/.claude/skills/aiwatch

Tell your agent:

Read ~/.claude/skills/aiwatch/SKILL.md and follow it when I use /aiwatch.

Usage

/aiwatch [topic]
/aiwatch [topic] --days=30          # Narrow to last 30 days (default: 90)
/aiwatch [topic] --quick            # Faster, 5 core sources
/aiwatch [topic] --uk-only          # UK regulatory sources only
/aiwatch [topic] --deep             # All 14 sources, maximum depth
/aiwatch [topic] --save             # Save to ~/Documents/AIWatch/
/aiwatch A vs B --compare           # Side-by-side comparison

What It Researches

Unlike general web search, /aiwatch targets 14 authoritative governance sources with weighted authority scoring:

| Source | Authority | What It Covers |
|---|---|---|
| 🏛️ UK Parliament (Hansard) | ⭐⭐⭐⭐⭐ | Parliamentary questions, debates, committee reports |
| 📋 ICO | ⭐⭐⭐⭐⭐ | Data protection enforcement, AI auditing guidance |
| 📋 FCA | ⭐⭐⭐⭐⭐ | Financial services AI regulation |
| 📋 CQC | ⭐⭐⭐⭐⭐ | Healthcare AI regulation |
| 📋 Ofcom | ⭐⭐⭐⭐½ | Online Safety Act, algorithmic accountability |
| 📋 MHRA | ⭐⭐⭐⭐⭐ | Medical device AI (SaMD) regulation |
| 🌐 GOV.UK / ATRS | ⭐⭐⭐⭐½ | Policy documents, transparency disclosures |
| 🇪🇺 EU AI Act | ⭐⭐⭐⭐⭐ | EU enforcement, implementation timeline |
| 📚 arXiv | ⭐⭐⭐⭐ | Peer-reviewed AI fairness & safety research |
| ⚖️ BAILII | ⭐⭐⭐⭐½ | UK case law involving AI |
| 🔍 AlgorithmWatch | ⭐⭐⭐⭐ | Investigative algorithm accountability journalism |
| 🔍 Ada Lovelace Institute | ⭐⭐⭐⭐ | UK AI policy research |
| 📚 Alan Turing Institute | ⭐⭐⭐⭐ | UK national AI research |
| 🌍 OECD AI Observatory | ⭐⭐⭐½ | International comparative governance |

Examples

Example 1: Live Facial Recognition (Police UK)

Query: /aiwatch live facial recognition police UK

Output excerpt:

⚡ Executive Summary

Live facial recognition (LFR) by UK police forces has moved from pilot to operational deployment without primary legislation, creating a significant accountability gap. The Metropolitan Police and South Wales Police have both conducted operational LFR deployments in 2025. The ICO issued updated guidance in Q3 2025 but stopped short of enforcement action. A judicial review brought by Liberty remains pending in the High Court. Parliament has held two debates but no committee inquiry has been announced.

🏛️ Parliamentary Activity

  • Lords Written Question (Lord Clement-Jones, 14 October 2025, HL3421): Asked whether the Government intends to legislate for LFR before further police deployments. Minister responded that existing legislation is "sufficient" — contested by three peers.
  • Commons Oral Questions (DSIT, 8 January 2025): MP asked about accuracy disparities across ethnic groups; Minister cited "ongoing review."
  • ⚠️ Gap: No select committee inquiry announced. The Home Affairs Committee has not taken evidence on LFR since 2022.

📋 Regulatory Developments

  • ICO published updated LFR guidance (September 2025) — recommends but does not mandate Data Protection Impact Assessments before deployment. Stops short of the enforcement action campaigners demanded.
  • ICO AI Auditing Framework (2025): LFR listed as a priority audit area for 2026.

📊 Governance Gap Analysis

Three critical gaps: (1) No primary legislation — police deploy LFR under common law powers with no statutory framework; (2) No mandatory accuracy reporting — forces are not required to publish false positive rates by demographic group; (3) No independent auditor — the College of Policing sets standards and the Home Office oversees compliance, but no independent body has statutory power to halt deployments.

🔮 Trajectory Assessment

Direction of travel: Weakening — deployment expanding faster than governance
Key risk: Metropolitan Police announcing permanent LFR deployment zones in 2026 without parliamentary authorisation
Watch for: Liberty judicial review judgment (expected Q1 2026), ICO audit programme commencement


Example 2: NHS AI Algorithmic Bias

Query: /aiwatch NHS AI algorithmic bias

Output excerpt:

⚡ Executive Summary

NHS algorithmic systems are under increasing parliamentary scrutiny following a series of high-profile bias incidents. The most significant: an NHS England dermatology AI trained predominantly on lighter skin tones showed significantly lower sensitivity for darker-skinned patients — documented in a BMJ paper (November 2025) and subsequently raised in a Commons health question. CQC has not yet developed AI-specific inspection criteria. The ATRS has 23 NHS entries as of March 2026, but coverage is inconsistent.

📚 Academic & Research Findings

  • "Skin tone bias in NHS dermatology AI" — Roberts et al., BMJ 2025. Key finding: 12.4 percentage point sensitivity gap between Fitzpatrick skin types I-II and V-VI across three deployed diagnostic tools. This is the most important empirical finding in this period.
  • "Algorithmic hiring in NHS recruitment" — arXiv:2501.09832. Documents gender and ethnicity disparities in automated CV screening used by 7 NHS trusts.

🏛️ Parliamentary Activity

  • Commons Health Oral Questions (12 November 2025): MP raised the BMJ dermatology AI study directly. Minister committed to "considering the findings" — no concrete action announced.
  • Lords Written Answer (Baroness Walmsley, November 2025): Government confirmed no mandatory bias testing requirement before NHS AI deployment.

📊 Governance Gap Analysis

Four critical gaps: (1) No mandatory pre-deployment bias testing for NHS AI systems — the AI Playbook recommends but does not require it; (2) ATRS coverage is inconsistent — 23 entries from 215 NHS trusts suggests widespread non-compliance; (3) CQC lacks AI inspection criteria — inspectors have no standardised framework for evaluating AI fairness; (4) No patient redress mechanism — a patient affected by a biased AI diagnostic tool has no clear route to challenge the decision.


Example 3: EU AI Act Enforcement -- What's Actually Happening

Query: /aiwatch EU AI Act enforcement 2025

Output excerpt:

⚡ Executive Summary

The EU AI Act's key provisions are entering force on a rolling timeline, but enforcement infrastructure across member states remains uneven. The European AI Office (EAIO) has issued the first GPAI model guidelines and opened two preliminary investigations into foundation model providers. Member states are establishing national AI authorities at different speeds — Germany and France are ahead; several smaller member states have not yet designated authorities. For UK organisations trading with the EU, the practical compliance window is closing.

🇪🇺 EU AI Act Watch

  • GPAI Model Guidance (EAIO, August 2025): Sets out technical requirements for general-purpose AI models above the 10^25 FLOPS threshold. Two major US foundation model providers are the subject of preliminary EAIO enquiries.
  • High-Risk AI Systems deadline (August 2026): Systems in Annex III (biometrics, critical infrastructure, employment, credit) must have conformity assessments. This is 5 months away.
  • Notified body designations: Only 8 of the 27 EU member states have designated conformity assessment bodies as of March 2026.

🌍 International Comparators

  • US: Executive Order on AI (2023) remains the primary framework; Congress has not passed AI legislation. State-level action (Colorado, Texas) creating patchwork.
  • Canada: AIDA (Artificial Intelligence and Data Act) passed 2024 — voluntary for low-risk, mandatory for high-impact. More prescriptive than UK approach.
  • Singapore: Model AI Governance Framework updated 2025 — voluntary but widely adopted by the financial sector.

UK Implication: The UK's principles-based, sector-by-sector approach is increasingly out of step with the EU's mandatory requirements. UK companies trading in the EU face dual compliance burdens.

Example 4: Comparative Mode — ICO vs FCA on AI Regulation

Query: /aiwatch ICO vs FCA --compare

Output excerpt:

⚖️ Comparative Analysis: ICO vs FCA

| Dimension | ICO | FCA |
|---|---|---|
| Primary AI focus | Data protection in AI systems | Algorithmic trading, robo-advice, model risk |
| Enforcement posture | Guidance-heavy, enforcement-light | Active enforcement; FCA fines have teeth |
| AI-specific framework | ICO AI Auditing Framework (2024) | FCA PS22/1 on AI in financial services |
| Parliamentary scrutiny | Regular — subject of multiple oral questions | Less frequent — appears in Treasury Committee |
| Proactive vs reactive | Mostly reactive to complaints | Mix: proactive thematic reviews + reactive |

Governance Verdict: The FCA's enforcement track record on algorithmic systems is stronger than the ICO's. The ICO has issued AI guidance without enforcement action on major algorithmic bias cases. The FCA has imposed fines on firms using flawed models for customer outcomes. However, the FCA's remit is narrower — it cannot act on NHS AI or police algorithms.


Example 5: Emerging Topic — Agentic AI Governance

Query: /aiwatch agentic AI governance UK --days=90

Output excerpt:

⚡ Executive Summary

Agentic AI systems — autonomous, goal-directed AI that can take actions without per-step human instruction — are arriving in UK workplaces faster than governance frameworks. As of March 2026, there is no UK-specific regulatory guidance on agentic AI. The EU AI Act's human oversight requirements (Article 14) do apply to high-risk agentic systems, but most current deployments fall outside formal risk classifications. The House of Lords AI Committee has flagged this in a report but no government response has been published.

📊 Governance Gap Analysis

This is an almost entirely ungoverned space in the UK. Five critical absences: (1) No definition of "agentic AI" in UK law or guidance; (2) No liability framework — when an agentic AI causes harm, who is responsible?; (3) No sector-specific guidance for the highest-risk deployments (healthcare, legal, financial advice); (4) No parliamentary inquiry announced despite rapid deployment; (5) No incident reporting mechanism — there is no mandatory requirement to report agentic AI failures.


Scoring Pipeline

Every result runs through a 4-factor composite score:

Score = Relevance×0.35 + Authority×0.25 + Recency×0.25 + Impact×0.15

Recency uses exponential decay — a result from 7 days ago scores 3× higher than one from 60 days ago, all else equal.

Impact detects governance signals: "landmark ruling", "select committee", "enforcement notice", "penalty notice", "judicial review" all boost the impact score.
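
The pipeline above can be sketched in a few lines of Python. This is an illustrative model only, not the skill's actual implementation: the decay constant is back-solved from the stated 7-day vs 60-day 3× ratio, and the `base`/`boost` values for the impact score are assumptions chosen to keep scores in [0, 1].

```python
import math

# Weights from the composite formula in the README
WEIGHTS = {"relevance": 0.35, "authority": 0.25, "recency": 0.25, "impact": 0.15}

# Calibrate exponential decay so a 7-day-old result scores 3x a 60-day-old one:
# exp(-k*7) / exp(-k*60) = 3  =>  k = ln(3) / 53
DECAY = math.log(3) / (60 - 7)

# Governance signal phrases listed in the README
IMPACT_SIGNALS = ("landmark ruling", "select committee", "enforcement notice",
                  "penalty notice", "judicial review")

def recency(age_days: float) -> float:
    """Exponential recency decay, 1.0 for a brand-new result."""
    return math.exp(-DECAY * age_days)

def impact(text: str, base: float = 0.5, boost: float = 0.1) -> float:
    """Base impact plus a boost per governance signal found (capped at 1.0)."""
    hits = sum(signal in text.lower() for signal in IMPACT_SIGNALS)
    return min(base + boost * hits, 1.0)

def composite(relevance: float, authority: float, age_days: float, text: str) -> float:
    """4-factor weighted score; all factors assumed normalised to [0, 1]."""
    return (WEIGHTS["relevance"] * relevance
            + WEIGHTS["authority"] * authority
            + WEIGHTS["recency"] * recency(age_days)
            + WEIGHTS["impact"] * impact(text))
```

With this calibration, `recency(7) / recency(60)` is exactly 3, and a result mentioning an "enforcement notice" outranks an otherwise identical one that mentions none of the signal phrases.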


Related Projects


Author

Linda Oraegbunam — AI & ML Engineer | PhD Candidate, Leeds Business School


30 days of governance chaos. 3 minutes of structured clarity.
