Perspectives about the NIST Artificial Intelligence Risk Management Framework

The success of the AI Risk Management Framework (AI RMF 1.0) depends upon its widespread use. Below are statements from interested organizations and individuals.

“As business leaders begin their AI journey, many are looking for a roadmap for how to develop and use AI in a way that is responsible and innovative. At Workday, we’ve found the NIST AI Risk Management Framework to be a concrete benchmark for mapping, measuring, and managing our approach to AI governance. We believe the Framework will help us maintain our customers’ trust and stay true to our company’s core values as we leverage it to innovate going forward.”

Jim Stratton, Chief Technology Officer, Workday 

See a description of how Workday has been using the AI RMF. 


“The automotive industry continues to leverage the power of artificial intelligence in driver support features, advanced safety technologies, and automated driving systems for consumers. The Artificial Intelligence Risk Management Framework (“AI RMF”) represents a proactive, comprehensive, and holistic approach, and we appreciate its development by NIST through a transparent, multistakeholder, and consensus-driven process. The AI RMF will be an important resource to the automotive industry in its ongoing efforts to maximize the benefits and positive impacts of artificial intelligence products, services, and systems and to effectively communicate with policymakers and other external stakeholders.”

The Alliance for Automotive Innovation


"On behalf of Amazon Web Services, I would like to commend the NIST for the recent publication of the AI Risk Management Framework. At a time when AI leadership is more important than ever, the AI RMF is both a vital statement of United States policy and a critical resource for organizations involved in the development, deployment, and use of AI. The AI RMF’s lifecycle-based approach to managing the risks and enhancing the benefits of AI aligns closely with AWS’s approach to responsible AI and the tools and technical guidance we provide customers. We see great promise in the AI RMF and look forward to supporting NIST’s efforts to develop testing, evaluation, and benchmarks that will be crucial as organizations seek to operationalize and measure the impacts of the Framework."

Amazon Web Services


“The Bipartisan Policy Center is pleased to see the launch of the Artificial Intelligence (AI) Risk Management Framework by the National Institute of Standards and Technology (NIST). This voluntary Framework is a key step forward in addressing some of the issues of bias, fairness, privacy, security, and more. It will be a valuable tool in managing AI's negative and positive impacts and promoting trustworthiness in the technology.

We support NIST’s approach to developing this Framework by opening a dialogue and gathering feedback from a multidisciplinary set of stakeholders. Its approach ensures the Framework is inclusive of all impacted stakeholders throughout the lifecycle of an AI system and considers the potential impact of AI on different groups of people.

BPC is committed to promoting and implementing this Framework with our network and stakeholders. We will complement the work by NIST in our continued work to promote responsible, ethical, and trustworthy AI systems. We look forward to working with NIST to ensure continuous improvement of the Framework.”

Bipartisan Policy Center


“The NIST AI Risk Management Framework (AI RMF) constructively builds upon the work BSA has done in the Framework to Build Trust in AI, together offering an important path forward for the responsible development and deployment of AI products and services. The AI RMF, like the BSA Framework, creates a lifecycle approach for addressing AI risks, identifies characteristics of Trustworthy AI, recognizes the importance of context-based solutions, and acknowledges the importance of impact assessments to identify, document, and mitigate risks.

This approach is well-aligned with BSA’s Framework to Build Trust in AI, which emphasizes the need to focus on high-risk uses of AI, highlights the value of impact assessments, and distinguishes between the obligations for those companies that develop AI, and those entities that deploy AI. 

We appreciate the opportunity to engage with NIST to help promote the responsible development of AI and look forward to working with BSA members to determine how best to promote the AI RMF and draw from other best practices to advance consistent approaches around the world.”

BSA | The Software Alliance


“The Center for Security and Emerging Technology (CSET), based at Georgetown University, has been actively involved in the development of the NIST AI Risk Management Framework (AI RMF 1.0). With confidence, CSET can affirm that one of the framework's strengths is that it provides sufficient details so that organizations can use it as a foundation to build more customized risk management approaches. This is important because AI risk management is highly context-dependent, and the implementation of the RMF is likely to be unique for each organization and each AI system. This balance allows organizations to begin devising steps for risk management and considering how the framework can be adapted to their purposes and context.

CSET is planning to use the Framework to manage AI risks and improve AI trustworthiness, as a part of its work on AI assessment, by developing high-level guidance for how organizations can customize and create contextually-appropriate RMF profiles. In addition, CSET intends to identify the high-level variables or features that affect RMF profile customization. These features may include the sector in which an AI system is applied or the features of the technology itself, such as its maturity or complexity. Applications of these features should allow organizations to more easily identify existing RMF profiles that are relevant to their needs and identify gaps in guidance that new profiles could fill. Finally, CSET's ongoing effort to create a taxonomy of AI incidents in an accessible database will allow organizations to identify specific vulnerabilities and issues that should be addressed in their custom RMF profiles.”

Center for Security and Emerging Technology (CSET)


“The Chamber appreciates the hard work that NIST put into the development of the congressionally directed AI Risk Management Framework. NIST established an open and transparent process where all AI stakeholders have the opportunity to provide substantive input on framing and mitigating risks associated with using Artificial Intelligence. This open and transparent model should serve as a template for others throughout the federal government.

Today’s announcement of the AI Risk Management Framework 1.0 is an essential step in developing Trustworthy AI. The Chamber looks forward to continuing to work with NIST to update the framework to meet the challenges of tomorrow and ensure that the United States leads in the development of trustworthy AI globally.”

U.S. Chamber of Commerce


 "Credo AI welcomes the release of the NIST Artificial Intelligence Risk Management Framework 1.0 - an important step in making Responsible AI governance a reality - and we applaud the NIST team for their open and extensive collaboration with stakeholders throughout its development. The AI RMF 1.0 is a critical step toward helping organizations effectively manage risks in designing, developing, and using AI products, services and systems. In particular, the AI RMF 1.0 provides valuable guidance to small and medium businesses, which promote a great deal of AI innovation, yet often lack the resources or extensive in-house expertise in AI governance and risk management. We have partnered closely with NIST to ensure that we are ready to operationalize the AI RMF 1.0 with companies of all sizes, across industries, in every stage of Responsible AI design. Credo AI helps organizations design, develop, and deploy AI systems that meet the highest ethical standards, and guiding our customers in adopting the NIST AI RMF is critical to this mission. We look forward to continuing to partner with NIST in helping to make the AI RMF 1.0 an accessible and easy-to-adopt reality for organizations at all stages of AI maturity."

Navrina Singh, Founder and CEO, Credo AI


“As the nation’s wireless industry association, CTIA applauds NIST for developing a risk management framework for AI that aims to be flexible, voluntary, and pro-innovation.  CTIA appreciates NIST’s consensus-based, collaborative approach that considered important stakeholder feedback, as it has done with its Cybersecurity Framework. CTIA looks forward to continuing to work with NIST on approaching AI risk management.”

CTIA


“We applaud the hard work by the tireless and committed staff and leadership at NIST and the Department of Commerce to bring this AI RMF to fruition. This Framework's release is a significant step in ensuring that the AI that permeates our daily activity and increasingly supports our pivotal functions is safer, more trustworthy and inclusive. Not only is the Framework notable but so was the process that led to its creation, throughout which NIST modeled best practices, including open, transparent, trust-building and inclusive engagement. We look forward to continuing to support this important work underway at NIST, including the operationalization of a framework that can be critical in our realization of responsible AI.”

Miriam Vogel, President & CEO, EqualAI


“The Federation of American Scientists is pleased to see the National Institute of Standards and Technology's release of the AI Risk Management Framework (AI RMF 1.0). We believe that this framework is an important step in promoting responsible and trustworthy use of AI. At FAS, we are dedicated to promoting the responsible use of science and technology, and we intend to review and advance the guidelines outlined in the AI RMF. Furthermore, we look forward to participating in future discussions and continuing our collaboration with NIST and other organizations to guarantee the safe and ethical use of AI. We acknowledge the effort and collaboration that went into the development of the AI RMF and are confident that it will serve as a valuable resource for organizations and individuals in the field.”

Federation of American Scientists


“The Future of Life Institute applauds NIST for spearheading a multi-year, multi-stakeholder initiative to improve the management of AI risks. As an active participant in the AI RMF process, we view this effort as a crucial step in fostering a culture of risk identification and mitigation within the US and abroad. Furthermore, we praise NIST's commitment to continuously update the RMF as our common understanding of AI's impact on society evolves. With this launch, it is our hope that all organizations charged with designing, developing, deploying, or using AI will actively consider the short, medium, and long-term implications of this technology on individuals, communities, and the planet. Failure to do so may have consequential and existential effects on the future of life as we know it.”

Future of Life Institute


“Google congratulates NIST on the release of its AI Risk Management Framework (RMF) and for its broad engagement across stakeholders. Within Google, we’ve built an AI and advanced technologies governance program that is aligned with the AI RMF approach and underpinned by industry-leading research and a growing library of resources, tools and recommended practices. Importantly, the NIST approach, which has been successful in cybersecurity and privacy, is flexible and can adapt as the AI ecosystem progresses. The AI RMF provides guidance to developers and deployers of AI systems on how to strike a practical balance between optimizing beneficial use cases and addressing potential risks.  We look forward to our continued  work with NIST as the AI ecosystem matures, new techniques and applications are developed, and further progress is made on benchmarks and metrics for AI systems.”

Google


“IBM applauds NIST on the release of the AI Risk Management Framework, and on the multi-year, inclusive, multistakeholder process that contributed to its development. We are happy to have contributed to that process throughout the last few years and gratified to see the Framework come to fruition.  This initiative lays important groundwork for advancing trustworthy AI and showcasing the United States’ commitment to the responsible development and deployment of this crucial technology. IBM commends NIST for its tireless effort in developing this framework and looks forward to helping to promote it as best-in-class practices for both AI developers and deployers.” 

IBM

Video remarks from IBM


“Building trustworthy Artificial Intelligence (AI) technology in the era of digital transformation is essential. As this technology evolves, we take seriously our responsibility to enable a world in which AI is used responsibly and develop solutions to address potential negative implications. The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework is a critical voluntary, flexible tool to help industry and other stakeholders understand, identify, and treat the potential for negative outcomes while leveraging opportunities associated with the use of AI systems. We appreciate NIST’s ongoing engagement with industry throughout the Framework’s development and encourage its adoption in the United States and around the world to help foster trustworthy AI systems and their responsible use.”

John Miller, Senior Vice President for Policy for Trust, Data, and Technology at Information Technology Industry Council


“The NIST AI Risk Management Framework represents a collaborative, thoughtful approach to addressing the socio-technical nature of AI systems. At Microsoft, we especially appreciate the focus on impact assessments and testing of AI systems, which will support organizations in identifying risk, developing mitigations, and validating capabilities. These are approaches we are already putting into practice through our Responsible AI Standard. We commend NIST’s efforts to align the AI Risk Management Framework with its Cybersecurity and Privacy Frameworks to further enable organizations to build upon existing frameworks.” 

Natasha Crampton, Chief Responsible AI Officer – Office for Responsible AI, Microsoft


"The National Fair Housing Alliance (NFHA) Applauds NIST on the launch of its AI Risk Management Framework (AI RMF 1.0) which helps manage risks to individuals, communities, organizations, society, businesses, and the environment. NFHA provided comments to NIST throughout the various stages of the development of AI RMF 1.0 and is pleased to see the framework addresses discrimination risks and stresses the importance of ensuring systems are fair and managed for bias. Technologies that manifest discrimination and bias can cause great harm which might be intractable. Technology is the new civil rights frontier. Thus, having the AI RMF 1.0 is critically important to help scientists, modelers, researchers, regulators, civil rights experts, compliance professionals, consumers, and other actors identify, assess, and manage risks in AI systems and to help ensure they are fair, beneficial, and equitable. NFHA used concepts from earlier iterations of the AI RMF 1.0 to develop our Purpose, Process, and Monitoring (PPM) framework, which represents what we believe is the gold standard for auditing AI systems. We will use NIST’s AI RMF 1.0, and future iterations, to help update the PPM framework as we apply it to a whole range of systems used in the housing and lending sectors including credit scoring, risk-based pricing, tenant screening, automated underwriting, and advertising models."

Lisa Rice, President & CEO, National Fair Housing Alliance (NFHA)


“As the Nation’s largest non-defense Federal funder of Artificial Intelligence research, NSF is dedicated to the proposition that research ideas be applied fairly, safely, and carefully to real world problems. We support the AI RMF and look forward to its continued development to benefit our research community, as well as for society as a whole.”

National Science Foundation


"The NIST Risk Management Framework will contribute to ensuring that AI systems are deployed safely, responsibly and securely. The Partnership on AI is pleased to have contributed to this critically important framework, building on our work with our Partners. We look forward to working with NIST to support its implementation to improve practice and policy globally."

Rebecca Finlay - CEO, Partnership on AI

Video remarks from Partnership on AI


“The NIST AI Risk Management Framework is the right document at the right time that seeks to provide American leadership in the AI governance space. While having a framework and its associated playbook is a critical first step to managing the potential risk AI can pose to people and organizations, the next step in the process is even more crucial. Now, policy makers, developers, deployers, and consumers must step up and help NIST develop use cases and populate its resource center. These efforts will help all those who build, deploy, and use AI systems to best identify and mitigate potential risk, while unleashing opportunities for the United States to innovate and remain a global AI leader.”

Yll Bajraktari, CEO, Special Competitive Studies Project


“The NIST AI Risk Management Framework provides a valuable tool for understanding the challenges and opportunities presented by different AI systems. The AI RMF has the potential to help organizations of all sizes to work towards trustworthy AI systems that benefit all.”

David Danks, Professor, Halıcıoğlu Data Science Institute (HDSI) and Department of Philosophy, UC San Diego


“Today’s launch of the NIST AI Framework is a major milestone. The Framework creates a common approach for how companies can map, measure, manage, and govern AI risk. In doing so, it promises to accelerate the adoption of trustworthy AI, build bridges with partners around the world, especially in Europe, and pave the way to unlocking the power of AI to help positively transform the way we live and work.  We applaud NIST for this achievement; Workday has been pleased to partner with NIST since the early days of the Framework's development, and we look forward to continued collaboration to leverage this valuable tool.” 

Sayan Chakraborty, Executive Vice President, Product & Technology, Workday
