NIST aims to cultivate trust in the design, development, use, and evaluation of AI technologies and systems in ways that enhance economic security and improve quality of life. Congress directed NIST to collaborate with the private and public sectors to develop a voluntary AI RMF. The agency’s work on the AI RMF is consistent with recommendations by the National Security Commission on Artificial Intelligence and the Plan for Federal Engagement in Developing AI Technical Standards and Related Tools.
The Framework was developed in collaboration with the private and public sectors.
Will NIST provide additional guidance to help with using the AI RMF?
Yes. In collaboration with the private and public sectors, NIST produced a companion Playbook as a voluntary resource for organizations navigating the AI RMF functions. It contains actionable suggestions derived from industry best practices and research insights. Organizations seeking specific guidance on how to achieve AI RMF function outcomes may adopt as many, or as few, suggestions as apply to their industry use case or interests. Comments on the Playbook are welcome at any time; they will be reviewed and integrated on a semi-annual basis.
The Playbook is part of the NIST Trustworthy and Responsible AI Resource Center. NIST has already published a variety of documents and carries out measurement and evaluation projects that inform AI risk management. See: https://www.nist.gov/artificial-intelligence
The AI RMF aims to help designers, developers, deployers, users, and evaluators of AI systems better manage AI risks that could affect individuals, organizations, or society. The AI RMF is a sector- and use-case-agnostic framework for managing AI risks. The Blueprint for an AI Bill of Rights focuses specifically on one category of AI risks: the potential for meaningful impact on individuals' and communities' rights, opportunities, or access to critical resources or services. The Technical Companion of the Blueprint for an AI Bill of Rights also may inform those seeking to govern, map, measure, and manage such risks.
These documents are not contradictory. Both share the same goal: more trustworthy, responsible, rights-preserving technologies. The AI RMF provides the framework for mitigating AI risks in general, and the Blueprint provides details to help mitigate the AI risks that affect individuals' or communities' rights, opportunities, or access to critical resources or services.
AI is experiencing fast-paced changes with implications for innovation as well as for individuals' and communities' rights, opportunities, and access. NIST and OSTP will continue to work with the AI community to promote rights-preserving AI.