
Biden-Harris Administration Announces New NIST Public Working Group on AI

The group will build on NIST’s Risk Management Framework to tackle risks of rapidly advancing generative AI.

Image: Composite graphic representing artificial intelligence, showing a human head surrounded by icons for healthcare, cybersecurity, transportation, energy, robotics and manufacturing.
Credit: N. Hanacek/NIST

WASHINGTON — Today, U.S. Secretary of Commerce Gina Raimondo announced that the National Institute of Standards and Technology (NIST) is launching a new public working group on artificial intelligence (AI) that will build on the success of the NIST AI Risk Management Framework to address this rapidly advancing technology. The Public Working Group on Generative AI will address the opportunities and challenges associated with AI that can generate content, such as code, text, images, videos and music, and will help NIST develop key guidance for organizations managing the special risks associated with generative AI technologies. The announcement comes on the heels of a meeting President Biden convened earlier this week with leading AI experts and researchers in San Francisco, part of the Biden-Harris administration’s commitment to seizing the opportunities and managing the risks posed by AI.

“President Biden has been clear that we must work to harness the enormous potential while managing the risks posed by AI to our economy, national security and society,” Secretary Raimondo said. “The recently released NIST AI Risk Management Framework can help minimize the potential for harm from generative AI technologies. Building on the framework, this new public working group will help provide essential guidance for those organizations that are developing, deploying and using generative AI, and who have a responsibility to ensure its trustworthiness.”

The public working group will draw on volunteer technical experts from the private and public sectors and will focus on risks related to this class of AI, which is driving fast-paced changes in technologies and marketplace offerings.

“This new group is especially timely considering the unprecedented speed, scale and potential impact of generative AI and its potential to revolutionize many industries and society more broadly,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “We want to identify and develop tools to better understand and manage those risks, and we hope to attract broad participation in this new group.” 

NIST has laid out short-term, midterm and long-term goals for the working group. Initially, it will serve as a vehicle for gathering input on guidance that describes how the NIST AI Risk Management Framework (AI RMF) may be used to support development of generative AI technologies. This type of guidance, called a profile, will support and encourage use of the AI RMF in addressing related risks. 

In the midterm, the working group will support NIST’s work on testing, evaluation and measurement related to generative AI. This will include support of NIST’s participation in the AI Village at the 2023 DEF CON, the longest-running and largest computer security and hacking conference.

Longer term, the group will explore specific opportunities to increase the likelihood that powerful generative AI technologies are productively used to address top challenges of our time in areas such as health, the environment and climate change. The group can help ensure that risks are addressed and managed before, during and after AI applications are developed and used. 

Those interested in joining the NIST Generative AI Public Working Group, which will be facilitated via a collaborative online workspace, should complete this form no later than July 9. Participants will have the opportunity to choose to help develop the generative AI profile for the AI RMF as part of their contributions to the group.

Generative AI is also the subject of the first two installments in a new series of NIST video interviews with leaders in AI, exploring issues critical to improving the trustworthiness of fast-paced AI technologies. Part 1 features Jack Clark, co-founder of Anthropic, and Navrina Singh, founder and CEO of Credo AI, interviewed by Elham Tabassi, associate director for emerging technologies in NIST’s Information Technology Laboratory. In Part 2, Rishi Bommasani, a Ph.D. student at Stanford University, and Irene Solaiman, policy director at Hugging Face, are interviewed by Reva Schwartz, principal investigator for AI bias at NIST. All videos in the “NIST Conversations on AI” series will be available on the NIST website.

Additionally, today, the National Artificial Intelligence Advisory Committee delivered its first report to the president and identified areas of focus for the committee for the next two years. The full report, including all of its recommendations, is available on the AI.gov website.

Questions about the public working group or NIST’s other work relating to generative AI may be sent to generativeAI [at] nist.gov.

Released June 22, 2023