NIST is developing a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and aims to improve organizations' ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
The Framework is being developed through a consensus-driven, open, transparent, and collaborative process that includes workshops and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others. An initial draft of the AI RMF is available for feedback and discussion at the March 29-31, 2022, workshop, as well as in writing: send comments to AIframework@nist.gov by April 29, 2022.