
AI Risk Management Framework FAQs

  1. What is the AI Risk Management Framework (AI RMF)?

    The Framework is intended to help developers, users, and evaluators of AI systems better manage AI risks that could affect individuals, organizations, society, or the environment. It aims to foster the development of innovative approaches to addressing the characteristics of trustworthiness: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
     
  2. When do these characteristics apply, and can organizations that apply them ensure that their AI systems will be trustworthy?

    Framework users and AI actors should consider trustworthiness characteristics throughout the pre-design, design and development, deployment, use, and test and evaluation of AI technologies and systems. These characteristics and principles are generally considered to contribute to the trustworthiness of AI technologies, systems, products, and services. Addressing AI trustworthiness characteristics individually will not ensure AI system trustworthiness: tradeoffs are often involved, rarely do all characteristics apply in every setting, and some will be more or less important in any given situation. Ultimately, trustworthiness depends on all of these characteristics, and on how they are perceived by different AI actors and affected individuals and communities.
     
  3. Why has NIST developed the Framework?

    NIST aims to cultivate trust in the design, development, use, and evaluation of AI technologies and systems in ways that enhance economic security and improve quality of life. Congress directed NIST to collaborate with the private and public sectors to develop a voluntary AI RMF. The agency’s work on the AI RMF is consistent with recommendations by the National Security Commission on Artificial Intelligence and the Plan for Federal Engagement in Developing AI Technical Standards and Related Tools.

    The Framework was developed in collaboration with the private and public sectors.
     

  4. For whom is the Framework intended?

    It should be useful for those who design, develop, use, or evaluate AI technologies. It uses language that is understandable by a broad audience, including senior executives and those who are not AI professionals, while still of sufficient technical depth to be useful to practitioners across many domains. The Framework should be scalable to organizations of all sizes, public or private, in any sector, and operating within or across domestic borders.
     
  5. Will private or public sector organizations be required to use the Framework?

    No. NIST is producing this as a voluntary Framework.
     
  6. How was the Framework developed and what was the timeline?

    NIST developed the Framework in collaboration with the private and public sectors, soliciting input from both. The Framework has been consensus-driven, developed and updated through an open, transparent, and collaborative process. On July 29, 2021, NIST issued a Request for Information to Help Develop an AI Risk Management Framework. A Summary Analysis of Responses to the NIST AI RMF Request for Information was issued on October 15, 2021. NIST issued a concept paper on December 13, 2021.

    A draft AI RMF released on March 17, 2022, received many comments. Discussions about the draft took place during a NIST workshop March 29-31, 2022. Based on those comments and discussions, a second draft was released on August 18, 2022, and another workshop was held October 18-19, 2022. AI RMF 1.0 was released on January 26, 2023.
     
  7. Will NIST provide additional guidance to help with using the AI RMF?

    Yes. In collaboration with the private and public sectors, NIST is producing a companion Playbook as a voluntary resource for organizations navigating the AI RMF functions. It contains actionable suggestions derived from industry best practices and research insights. Organizations seeking specific guidance on how to achieve AI RMF function outcomes may borrow as many – or as few – suggestions as apply to their industry use case or interests. The first full Playbook was issued for comments on January 26, 2023. NIST encourages feedback on the Manage and Measure functions of the Playbook by February 27, 2023. A revised version will be posted several months later. After that, suggestions will be welcome at any time for any part of the Playbook. They will be reviewed and integrated on a semi-annual basis.

    The Playbook will reside online only and will be updated regularly with contributions expected to come from many stakeholders. It will be part of a NIST Trustworthy and Responsible AI Resource Center that is being established. NIST already has published a variety of documents and carries out measurement and evaluation projects which inform AI risk management. See: https://www.nist.gov/artificial-intelligence

     

  8. Why is a separate risk management framework for AI needed, given that NIST and others already have produced frameworks to address related issues such as cybersecurity, privacy, and enterprise risk management?

    Each framework focuses on a specific set of challenges in managing risks; there are important similarities among them, and important differences as well. Many stakeholders involved in designing, developing, deploying, evaluating, and monitoring AI, as well as those affected by the use of AI products and services, have called for specific guidance to help ensure that AI is trustworthy and that related risks are properly addressed throughout the AI lifecycle.
     
  9. How does this fit in with other NIST AI research, standards, and other activities?

    NIST aims to cultivate trust in the design, development, use, and evaluation of AI technologies and systems in ways that enhance economic security and improve quality of life. The agency focuses on improving measurement science, standards, technology, and related tools, including evaluation and data. NIST is developing forward-thinking approaches that support innovation and confidence in AI systems. NIST’s work on the AI RMF is consistent with its AI Strategic Plan and the Plan for Federal Engagement in Developing AI Technical Standards and Related Tools published in 2019.
     
  10. How do I get involved in the Framework development effort?

    NIST especially encourages suggestions and contributions of guidance to help put AI RMF 1.0 into practice. That includes developing Profiles describing ways to tailor the Framework, along with suggesting information to be included in the forthcoming NIST Trustworthy and Responsible AI Resource Center. Check out our Engage page, watch this space for specific opportunities, sign up to receive email notifications about NIST’s AI activities here, or contact us at: AIframework [at] nist.gov.

  11. Who can answer additional questions regarding the Framework?

    Send us an email: AIframework [at] nist.gov.
Created July 13, 2021, Updated January 26, 2023