
Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

“Expanding on our wide-ranging efforts in AI, NIST will work with private and public stakeholders to carry out its responsibilities under the executive order. We are committed to developing meaningful evaluation guidelines, testing environments, and information resources to help organizations develop, deploy, and use AI technologies that are safe and secure, and that enhance AI trustworthiness.” — Under Secretary of Commerce for Standards and Technology and NIST Director Laurie Locascio

NIST’s Responsibilities Under the October 30, 2023 Executive Order

Video: NIST's Responsibilities Under Executive Order 14110 on Safe, Secure, and Trustworthy AI. This video discusses NIST's responsibilities under the Executive Order.

Overview

The President’s Executive Order (EO) on Safe, Secure, and Trustworthy Artificial Intelligence (14110), issued on October 30, 2023, charges multiple agencies – including NIST – with producing guidelines and taking other actions to advance the safe, secure, and trustworthy development and use of Artificial Intelligence (AI).

The EO directs NIST to develop guidelines and best practices to promote consensus industry standards that help ensure the development and deployment of safe, secure, and trustworthy AI systems. Specifically, NIST is to:

  • Develop a companion resource to the AI Risk Management Framework focused on generative AI
  • Develop a companion resource to the Secure Software Development Framework to incorporate secure-development practices for generative AI and dual-use foundation models
  • Launch a new initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities that could cause harm
  • Establish guidelines and processes – except for AI used as a component of a national security system – to enable developers of generative AI, especially dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems. This includes:
    • Coordinating or developing guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models and related to privacy-preserving machine learning
    • In coordination with the Secretary of Energy and the Director of the National Science Foundation (NSF), developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies, as well as support the design, development, and deployment of associated privacy-enhancing technologies (PETs) 
  • Engage with industry and relevant stakeholders to develop and refine (for possible use by synthetic nucleic acid sequence providers): 
    • Specifications for effective nucleic acid synthesis procurement screening
    • Best practices, including security and access controls, for managing sequence-of-concern databases to support such screening
    • Technical implementation guides for effective screening
    • Conformity assessment best practices and mechanisms
  • Develop a report to the Director of OMB and the Assistant to the President for National Security Affairs identifying existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, for:
    • Authenticating content and tracking its provenance (a minimal illustration follows this list)
    • Labeling synthetic content (e.g., watermarking)
    • Detecting synthetic content
    • Preventing generative AI from producing Child Sexual Abuse Material or producing non-consensual intimate imagery of real individuals
    • Testing software used for the above purposes
    • Auditing and maintaining synthetic content
  • Create guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including for AI (see the sketch after this list).
  • Develop guidelines, tools, and practices to support agencies' implementation of minimum risk-management practices.
  • Assist the Secretary of Commerce in coordinating with key international partners and standards development organizations to drive the development and implementation of AI-related consensus standards, cooperation, and information sharing. The Secretary of Commerce, in coordination with the Secretary of State and the heads of other Federal agencies, will then establish a plan for global engagement to promote and develop AI standards.
    • These efforts are to be guided by principles set out in the NIST AI Risk Management Framework and the US Government National Standards Strategy for Critical and Emerging Technology, which is led by NIST.
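To make the content-authentication item above more concrete, the short sketch below shows one building block behind provenance tracking: a keyed cryptographic hash (HMAC-SHA256) that binds a piece of content to whoever holds the signing key, so that any later edit is detectable. This is a minimal illustration under assumed names and keys, not NIST guidance; production provenance standards (for example, C2PA-style signed manifests) carry far richer metadata.

    import hashlib
    import hmac

    # Hypothetical key for illustration only; a real provenance system would
    # use managed asymmetric keys, not a secret embedded in source code.
    SECRET_KEY = b"replace-with-a-managed-signing-key"

    def provenance_tag(content: bytes) -> str:
        """Bind content to the key holder by computing HMAC-SHA256 over it."""
        return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

    def verify_provenance(content: bytes, tag: str) -> bool:
        """Recompute the tag and compare in constant time."""
        return hmac.compare_digest(provenance_tag(content), tag)

    original = b"model-generated article text"
    tag = provenance_tag(original)
    print(verify_provenance(original, tag))        # True: content is unmodified
    print(verify_provenance(b"edited text", tag))  # False: any edit breaks the tag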
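Similarly, for the differential-privacy item, the sketch below illustrates the kind of guarantee such evaluation guidelines would assess: the Laplace mechanism, which releases a query answer with noise scaled to the query's sensitivity divided by a privacy budget epsilon. The dataset and parameter values are invented for the example.

    import numpy as np

    def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
        # Adding Laplace noise with scale sensitivity/epsilon gives the released
        # value an epsilon-differential-privacy guarantee.
        return true_answer + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Privately release a counting query: a count changes by at most 1 when one
    # person is added or removed, so its sensitivity is 1.
    ages = [34, 29, 51, 47, 62, 38]
    true_count = sum(1 for a in ages if a >= 40)
    print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))

A smaller epsilon means more noise and a stronger privacy guarantee; checking whether a deployed system actually delivers its claimed epsilon is the kind of efficacy question the EO asks NIST to address.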

In some assignments, NIST will be working on behalf of the Secretary of Commerce. NIST is to consult with other agencies in producing some of its guidance; in turn, several of those agencies are directed to consult NIST (directly or through the Secretary of Commerce) in carrying out their actions under the EO. Most of the EO's tasks for NIST have a 270-day deadline.

In addition to working with government agencies, NIST intends to engage with the private sector, academia, and civil society as it produces the guidance called for by the EO. NIST will build and expand on current efforts in several of these areas, including the Generative AI Public Working Group established in June 2023.

For EO-related questions, email ai-inquiries [at] nist.gov.

 
