NIST is developing a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and is designed to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
The Framework is being developed through a consensus-driven, open, transparent, and collaborative process that includes workshops and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others.
A partial draft of the companion NIST AI RMF Playbook was released for initial comment by September 29, 2022, and for discussion at the October workshop. Feedback on the Playbook may be submitted at any time; suggestions will be reviewed and integrated on a semi-annual basis.
Earlier, NIST issued a Concept Paper for the AI RMF for public review and comment on December 13, 2021.