

p0bailey edited this page Jun 27, 2025 · 11 revisions

Govern

The Govern function establishes the foundation for managing AI risks by embedding accountability, oversight, and risk tolerance into the organization’s cybersecurity and enterprise governance structure.

This section aligns with:

  • NIST CSF 2.0: GOVERN
  • NIST AI RMF 1.0: GOVERN
  • EU AI Act: Chapter III (High-Risk AI Systems), Chapter VII (Governance), Chapter IX (Enforcement & Market Surveillance)

Objectives

  • Define and communicate AI-related roles, responsibilities, and authorities
  • Set organizational risk appetite and tolerance specific to AI systems
  • Align AI risk with broader enterprise and cybersecurity risk governance
  • Establish policies for responsible AI development, deployment, and oversight
  • Govern use of third-party AI systems and models (e.g., LLMs, APIs, datasets)
  • Ensure compliance with legal and regulatory requirements (e.g., EU AI Act, OECD AI Principles)

Outcomes

  • Clear accountability for AI risks across teams and roles
  • AI risks are visible, tracked, and aligned with business objectives
  • AI systems are governed consistently with documented policies
  • Trustworthy AI is enforced through both organizational culture and formal controls

Key Governance Elements

| Element | Description |
|---|---|
| AI Risk Ownership | Assign executive and operational responsibility for AI risk decisions |
| AI Risk Appetite & Tolerance | Define acceptable levels of AI risk across use cases (e.g., fairness, accuracy, explainability) |
| Policy Framework | Create and maintain policies on AI lifecycle governance, human oversight, model accountability, and decommissioning |
| AI Inventory | Maintain a live inventory of all AI systems and their risk classification |
| Governance Structures | Establish cross-functional AI Risk Committees (e.g., Legal, Security, Data Science, Ethics) |
| Third-Party Governance | Vet and monitor external AI vendors, models, data sources, and APIs |
| Ethical Oversight | Ensure AI aligns with societal values, legal norms, and internal codes of conduct |
| Regulatory Alignment | Classify systems under the EU AI Act and implement conformity assessments for high-risk AI |
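The AI Inventory element above can be made concrete as a small record type. The following Python sketch is illustrative only — field names such as `risk_tier` and `last_reviewed` are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical shape of one AI inventory record; the fields below are
# illustrative assumptions, not a mandated schema.
@dataclass
class AISystemRecord:
    name: str
    owner: str                 # accountable executive or team
    risk_tier: str             # e.g. "minimal", "limited", "high"
    third_party: bool          # external model/API vs. in-house
    last_reviewed: date
    data_sources: list = field(default_factory=list)

inventory = [
    AISystemRecord("support-chatbot", "Customer Ops", "limited", True,
                   date(2025, 6, 1), ["support-tickets"]),
    AISystemRecord("credit-scoring", "Risk Office", "high", False,
                   date(2025, 5, 15), ["loan-applications"]),
]

# A live inventory supports simple governance queries, e.g. listing
# high-risk systems that require conformity assessment.
high_risk = [r.name for r in inventory if r.risk_tier == "high"]
```

Keeping the inventory queryable like this is what makes the later practices (risk registers, regulatory triage) workable in practice.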

Example Practices

  • Maintain a centralized AI risk register integrated with GRC tooling
  • Require every AI project to submit a use-case description, data source justification, and risk classification
  • Convene an AI Ethics and Risk Review Board for high-impact AI deployments
  • Establish documented escalation paths for AI incidents and exceptions
  • Review AI policies regularly in response to regulation, audits, and incidents
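A centralized risk register with documented escalation paths (the first and fourth practices above) might look like the following minimal sketch. The `RISK_APPETITE` thresholds, severity scale, and entry fields are hypothetical:

```python
# Maximum tolerated severity per risk category, on an assumed 1-5 scale.
# These thresholds are illustrative, not recommended values.
RISK_APPETITE = {"fairness": 2, "accuracy": 3, "explainability": 2}

risk_register = [
    {"id": "AIR-001", "system": "credit-scoring", "category": "fairness",
     "severity": 4, "owner": "Risk Office", "status": "open"},
    {"id": "AIR-002", "system": "support-chatbot", "category": "accuracy",
     "severity": 2, "owner": "Customer Ops", "status": "open"},
]

def needs_escalation(entry):
    """Escalate any open risk whose severity exceeds the stated appetite."""
    return (entry["status"] == "open"
            and entry["severity"] > RISK_APPETITE.get(entry["category"], 0))

escalated = [e["id"] for e in risk_register if needs_escalation(e)]
```

In a real deployment these entries would live in GRC tooling rather than code; the point is that appetite thresholds become mechanically checkable once risks are recorded in a structured register.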

Framework Alignment

This function aligns with widely adopted governance and risk management frameworks used to ensure responsible AI development and oversight.

Common Governance Domains

  • Organizational context – Understand AI's role in business operations and risk posture
  • Roles & responsibilities – Assign accountability for AI-related decisions
  • Policy management – Define and maintain AI lifecycle policies
  • Risk appetite and tolerance – Set thresholds for acceptable AI risks (e.g., fairness, transparency)
  • Oversight and escalation – Ensure governance structures support timely decision-making
  • Third-party risk – Control exposure from external AI models, datasets, and APIs

These domains are reflected across:

  • NIST CSF 2.0 – GOVERN Function
  • NIST AI RMF 1.0 – GOVERN Function
  • ISO/IEC 42001 – AI Management System
  • EU AI Act – Chapter III (High-Risk AI Systems), Chapter VII (Governance), Chapter IX (Enforcement & Market Surveillance)

Artefacts

  • AI Risk Register template
  • Governance Policy example
  • AI System Inventory spreadsheet
  • Regulatory Alignment Checklist (EU AI Act)
  • Board-level AI Risk Oversight Briefing Pack
