U.S. AI Safety Institute (USAISI) Workshop on Collaboration to Enable Safe and Trustworthy AI

Remarks as prepared.

Thank you all so much for coming today. The turnout is amazing and highlights the importance of the work we are setting out to do. We are excited to work with you on the future of safe and trustworthy artificial intelligence systems. 

As you know, NIST has a long history of supporting the digital economy and, specifically, machine learning and AI. In everything NIST does, our approach is to partner with industry and society, to understand the state of practice as well as the future challenges, to develop the measurement science to advance innovation and enable standards, and then to work together to build those standards and create new ecosystems for continued technology development and innovation.

Our touchstone for this today is the AI Risk Management Framework, launched in January of this year. The reception of this effort — which builds upon a collaborative effort of many in this room — has been tremendous, and is evidence of the community's confidence in our approach.

We believe in the value of effective partnership, and we are gathered today to put that belief into practice.

Your thoughts, your time, your data, models, challenges and learnings — they are all going to be a critical part of the U.S. AI Safety Institute Consortium that we’re discussing today. Together we can build a culture of safe and trustworthy AI systems, advance the science of measuring and evaluating these systems, and enable the growth of AI to help humanity. 

We know that developing these standards, guidelines, and test and evaluation methods is an ongoing process, and a key step forward. NIST and DOC are already working to make certain that the standards, guidelines, tools, and test methods we develop are not just for the U.S., but will be interoperable with those of our key international partners and allies.

The work ahead is hard, challenging stuff. There is no secret answer for safety and trust cracked by some research group or company somewhere. No one organization knows the “answer.” And the solution cannot be secret — it is the open, transparent knowledge of safety that engenders trust. 

We look forward to discussing today the ways we can continue this effort, and expand it through the U.S. AI Safety Institute and the Consortium we are gathering together now. 

Created November 28, 2023