This repository was archived by the owner on Dec 18, 2025. It is now read-only.
@ronaldpetty @raravena80 As discussed in the AI WG, we are starting work on a white paper covering best practices and benchmarks for managing AI risks in AI platforms and workloads. This issue tracks that effort and the related documents and artifacts produced as part of this work.
AI technologies are becoming pervasive in our day-to-day activities and have the potential to transform our lives and society. However, improper use of AI technologies poses significant risks that can negatively affect individuals, communities, and the world. This has led to the development of AI risk management frameworks, such as the NIST AI RMF and the EU Artificial Intelligence Act, which specify a set of safeguards (controls) for the responsible development and use of AI systems.
In this whitepaper we will examine the NIST AI Risk Management Framework, with a focus on developing best practices for AI risk assessment and compliance for AI workloads and platforms. We will consider AI risks across several dimensions: data, AI models, AI applications and runtime, and people and governance. From this analysis, we will develop best practices and benchmarks for a generic AI platform.