NIST's expertise in AI enables the agency to make important technical contributions to policy development. NIST also plays a key role as a neutral convener of organizations and individuals with disparate views on AI. The agency's active participation in national and global discussions helps shape the development of trustworthy and responsible AI.
NIST's work supports several U.S. activities initiated or addressed by the U.S.-EU Trade and Technology Council (TTC).
During their fourth ministerial meeting (May 2023), the U.S. and EU TTC co-chairs reviewed progress and announced key initiatives, including advancing the joint AI Roadmap by launching three expert groups focused on (1) AI terminology and taxonomy, (2) standards and tools for trustworthy AI and risk management, and (3) monitoring and measuring AI risks. The groups have since (i) issued a list of 65 key AI terms essential to understanding risk-based approaches to AI, along with their U.S. and EU interpretations and shared U.S.-EU definitions, and (ii) mapped the respective involvement of the United States and the European Union in standardization activities, with the goal of identifying AI-related standards of mutual interest. The United States and the European Union also decided to place special emphasis on generative AI, including its opportunities and risks, in the work on the Roadmap. This work will complement the G7 Hiroshima AI process.
This Roadmap, shared at the TTC Ministerial Meeting (December 2022), aims to guide the development of tools, methodologies, and approaches to AI risk management and trustworthy AI. It advances the shared U.S. and EU interest in supporting international standardization efforts and promoting trustworthy AI on the basis of a shared commitment to democratic values and human rights.
The U.S.-EU Joint Statement of the TTC (May 2022) expressed an intention to develop a joint roadmap on evaluation and measurement tools for trustworthy AI and risk management.