
U.S. AI Safety Institute Consortium Holds First Plenary Meeting to Reflect on Progress in 2024 & Outline Research Priorities for 2025

Representatives from the consortium’s member companies, organizations and local governments met in person for the first time to discuss the group’s progress in supporting federal efforts to advance AI safety and spur continued American innovation.

On Dec. 7, 2024, members of the U.S. AI Safety Institute Consortium (AISIC) gathered in person for the first time at the University of Maryland to review the group’s work to date and to plan how the consortium can continue to serve as a bridge between the U.S. government and the technology industry, academia, and civil society on critical issues of AI safety research, evaluations and standards.

The consortium comprises more than 290 member companies and organizations, as well as various local and state governments. These organizations are on the front lines of creating and using the most advanced AI systems and of developing the foundational scientific research that will help us better understand how to fully harness the benefits of AI while mitigating potential risks.

Since its creation in February 2024, the consortium has been hard at work advancing scientific inquiry and collaborative research across five key issue areas: (1) generative AI risk management, (2) synthetic content, (3) evaluations, (4) red-teaming, and (5) model safety and security.

This work is carried out in close partnership with government scientists and technical experts from the National Institute of Standards and Technology’s (NIST) U.S. AI Safety Institute (US AISI) and AI Innovation Lab, with the goal of advancing the science of AI safety and informing the U.S. government’s efforts to enable trustworthy AI innovation.

“It’s rare to see such a broad swath of companies, academic institutions, and civil society organizations selflessly working with one another and with the government in the service of a common goal,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “It’s a testament to the promise of AI innovation and the common understanding that in order to harness the full potential of AI innovation, we must be able to scientifically identify, measure and mitigate its risks.”

At this week’s event, consortium members presented on several key developments, including:

  • Voluntary reporting approaches: Key findings on how organizations can share risk management data and analysis through a voluntary reporting template (VRT) for the NIST AI Risk Management Framework Generative AI Profile (NIST AI 600-1).
  • Chem-bio research: Identifying paths to advance the science of evaluating chemical-biological misuse risks of foundation models with key takeaways in three areas: capability assessment, threat modeling, and evaluation methodology.
  • Strengthening safeguards: Assessments of challenges and next steps in protecting models deployed as a service against intentional misuse, and of safeguards that aim to prevent adversarial misuse.

You can find more about AISIC here. The consortium is currently closed to new members but expects to open a new member application process in 2025.
