NIST issued a second draft of the AI RMF for written comments by September 29, 2022, and for discussion at the October 18-19, 2022 workshop. NIST has also released a draft Playbook companion to the AI RMF for comment.
NIST is developing a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
The Framework is being developed through a consensus-driven, open, transparent, and collaborative process that includes workshops and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others. A second draft of the AI RMF was released, with written feedback due by September 29, 2022. Comments were also solicited and discussed at the October 18-19, 2022 workshop.
A partial draft of a companion NIST AI RMF Playbook was released for initial comment, with feedback due by September 29, 2022, and was also discussed at the October workshop. Feedback on the Playbook may be submitted at any time; suggestions will be reviewed and integrated on a semiannual basis.