
AI Policy Contributions

NIST provides scientific and technical expertise and contributions to policy development. NIST plays a key role as a neutral convener of organizations and individuals with disparate views about AI matters. The agency’s active participation in national and global discussions helps to shape the development of trustworthy and responsible AI.

NIST is heavily engaged with US and international AI policy efforts such as the US-EU Trade and Technology Council, OECD, Council of Europe, Quadrilateral Security Dialogue, and a host of bilateral initiatives in Asia, Europe, the Middle East, and North America. NIST partners with other agencies including the US Department of Commerce’s International Trade Administration and the US Department of State on many of these efforts.

US-EU Trade and Technology Council AI-Related Efforts

The US and EU are now reviewing comments received on the list of 65 key AI terms essential to understanding risk-based approaches to AI, along with their US and EU interpretations and shared US-EU definitions, which was released as part of TTC 4 in May 2023. Comments received will be made publicly available.

During their fourth ministerial meeting (May 2023), the US and EU TTC co-chairs reviewed progress and announced key initiatives, including advancing their joint AI Roadmap by launching three expert groups focused on AI terminology and taxonomy, standards and tools for trustworthy AI and risk management, and monitoring and measuring AI risks. The groups have (i) issued a list of 65 key AI terms essential to understanding risk-based approaches to AI, along with their US and EU interpretations and shared US-EU definitions, and (ii) mapped the respective involvement of the United States and the European Union in standardization activities with the goal of identifying relevant AI-related standards of mutual interest. The United States and the European Union decided to add special emphasis on generative AI, including its opportunities and risks, to the work on the Roadmap. This work will complement the G7 Hiroshima AI process. The joint statement is found here.

This Roadmap, shared at the TTC Ministerial Meeting (December 2022), aims to guide the development of tools, methodologies, and approaches to AI risk management and trustworthy AI. It advances the US and EU's shared interest in supporting international standardization efforts and promoting trustworthy AI, grounded in a common dedication to democratic values and human rights.

The US-EU Joint Statement of the TTC (May 2022) expressed an intention to develop a joint roadmap on evaluation and measurement tools for trustworthy AI and risk management. 

Contributing to Federal Engagements that Explore or Determine AI-Related Policies 

Participating in Major AI Forums

  • NIST engages with private, public, and non-profit organizations – directly and via international forums and other dialogues – about AI-related policies that align with NIST’s mission and technical contributions. NIST also convenes national and international stakeholders to ensure multi-way communications on select AI-related issues. 

Research Partnerships 

  • NIST also teams with other organizations to advance policy decision making. This includes co-funding, with the National Science Foundation (NSF), the Institute for Trustworthy AI in Law & Society (TRAILS), led by the University of Maryland. TRAILS aims to transform the practice of AI from one driven primarily by technological innovation to one that also attends to ethics, human rights, and bringing the voices of marginalized communities into mainstream AI. TRAILS is the first institute of its kind to integrate participatory design, technology, and governance of AI systems and technologies. It will investigate what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness.
Created July 26, 2021, Updated March 25, 2024