# 02_Assess
Phillip Bailey edited this page Jun 25, 2025
The Assess function helps organizations identify, classify, and understand AI systems, their contexts, risks, and stakeholders across the full lifecycle.
This section aligns with:
- NIST CSF 2.0: IDENTIFY
- NIST AI RMF 1.0: MAP
- EU AI Act: Chapter III (Classification of High-Risk AI Systems), Annex III (High-Risk Use Cases)
## Objectives

- Identify all AI systems and their purpose, context, and expected outcomes
- Map data flows, dependencies, and lifecycle stages for each system
- Classify AI systems based on criticality, impact, and risk
- Identify affected stakeholders and potential harms
- Align AI inventory and classification with EU AI Act risk tiers
- Perform threat modeling to identify potential vulnerabilities and abuse scenarios
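The classification objective above can be sketched as a first-pass triage rule. The tier names follow the EU AI Act; the use-case markers and the `classify` function are illustrative assumptions, not a substitute for a legal reading of the Act's annexes:

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers, including the prohibited category."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

# Illustrative, non-exhaustive markers; real triage must follow the Act itself.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
ANNEX_III_AREAS = {"employment", "credit_scoring", "law_enforcement", "education"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}

def classify(use_case: str) -> RiskTier:
    """Map a declared use case to a risk tier (first-pass triage only)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in ANNEX_III_AREAS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A rule like this belongs at project intake; any system it flags as high-risk or prohibited should go to human review rather than be accepted as final.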
## Expected Outcomes

- A comprehensive and up-to-date AI inventory
- AI systems classified by risk level and regulatory exposure
- Documented AI lifecycle context and responsible owners
- Threat models and abuse cases documented for high-risk systems
- Proactive awareness of non-compliant or exploitable AI use cases
## Core Elements

| Element | Description |
|---|---|
| AI System Inventory | List and document all AI systems in use or development, including purpose and ownership |
| Use Case Mapping | Describe each system’s intended function, affected users, and deployment scope |
| Data Flow Diagrams | Outline input data, model processes, outputs, and dependencies |
| Risk Classification | Categorize AI systems as minimal, limited, high, or prohibited risk per EU AI Act and internal thresholds |
| Stakeholder Analysis | Identify users, impacted groups, and those involved in the lifecycle (build, deploy, audit) |
| Lifecycle Context | Document where the AI system sits within the organization’s broader digital and operational architecture |
| Threat Modeling | Use structured techniques (e.g., STRIDE, LINDDUN, OWASP Top 10 for LLM Applications) to identify vulnerabilities and attack surfaces |
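Several of the elements above (inventory, ownership, risk classification, stakeholder analysis) can live in one registration record. The schema below is a minimal sketch with illustrative field names, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of the AI system inventory (illustrative schema)."""
    name: str
    purpose: str                # intended function and deployment scope
    owner: str                  # responsible team or individual
    lifecycle_stage: str        # e.g. "design", "development", "production"
    risk_tier: str              # "minimal" | "limited" | "high" | "prohibited"
    stakeholders: list[str] = field(default_factory=list)
    data_sources: list[str] = field(default_factory=list)
    third_party_components: list[str] = field(default_factory=list)

def needs_governance_review(record: AISystemRecord) -> bool:
    """High-risk and prohibited systems are flagged for board review."""
    return record.risk_tier in {"high", "prohibited"}
```

In practice such records would be collected via the registration form and synced to the GRC platform, with the review flag driving the Governance or Ethics board queue.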
## Implementation Practices

- Maintain an AI system registration form integrated with the GRC platform
- Conduct risk triage at project intake using standardized templates
- Use architecture and data flow diagrams to contextualize system behavior
- Perform structured threat modeling during design and prior to major changes
- Flag high-risk systems for review by Governance or Ethics boards
- Continuously update inventory post-deployment as use or risk changes
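The structured threat modeling practice above can be organized as a simple checklist generator: cross each data-flow element with each STRIDE category and review every cell. The STRIDE taxonomy is standard; the example element names are placeholders:

```python
# STRIDE threat categories (Microsoft's threat-modeling taxonomy).
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

def threat_matrix(elements: list[str]) -> list[tuple[str, str]]:
    """Enumerate (element, threat category) pairs to review, one per cell."""
    return [(e, t) for e in elements for t in STRIDE]

# Illustrative data-flow elements for an LLM-backed service.
pairs = threat_matrix(["user prompt", "model API", "output filter"])
```

Running the matrix during design and again before major changes keeps the threat model aligned with the current data-flow diagram.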
## NIST CSF Mapping (IDENTIFY)

- ID.AM – Asset Management
- ID.BE – Business Environment
- ID.GV – Governance
- ID.RA – Risk Assessment
- ID.RM – Risk Management Strategy
- ID.SC – Supply Chain Risk Management

Note: ID.BE, ID.GV, ID.RM, and ID.SC are CSF 1.1 categories; in CSF 2.0 these concerns moved into the GOVERN function (e.g., GV.RM for risk management strategy, GV.SC for supply chain).
## NIST AI RMF Mapping (MAP)

- System purpose, scope, context, and goals
- Stakeholders and use environment
- Intended benefits and potential negative impacts
- Third-party data, models, and components
## Templates and Artifacts

- AI System Registration Form
- AI Inventory Dashboard or Spreadsheet
- Risk Classification Template (EU AI Act aligned)
- Stakeholder Impact Assessment Template
- Data Flow and Lifecycle Mapping Diagram
- Threat Modeling Template (STRIDE, OWASP LLMs, Agentic AI) (Google Sheet)