
Artificial Intelligence Safety Institute Consortium (AISIC)

In support of efforts to create safe and trustworthy artificial intelligence (AI), NIST has established the U.S. Artificial Intelligence Safety Institute (USAISI). To support this Institute, NIST has created the U.S. AI Safety Institute Consortium. The Consortium brings together more than 200 organizations to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world. This will help ready the U.S. to address the capabilities of the next generation of AI models or systems, from frontier models to new applications and approaches, with appropriate risk management strategies.

Overview 

On February 8, 2024, U.S. Secretary of Commerce Gina Raimondo announced the creation of the U.S. AI Safety Institute Consortium (AISIC). Housed under NIST, the Consortium unites AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy AI.

Building upon its long track record of working with the private and public sectors, and its history of reliable and practical measurement- and standards-oriented solutions, NIST works through the AISIC with research collaborators who can support this vital undertaking. Specifically, the Consortium will:

  • Establish a knowledge and data sharing space for AI stakeholders
  • Engage in collaborative and interdisciplinary research and development through the performance of the Research Plan
  • Prioritize research and evaluation requirements and approaches that may allow for a more complete and effective understanding of AI’s impacts on society and the U.S. economy
  • Identify and recommend approaches to facilitate the cooperative development and transfer of technology and data between and among Consortium Members
  • Identify mechanisms to streamline input from federal agencies on topics within their direct purviews
  • Enable assessment and evaluation of test systems and prototypes to inform future AI measurement efforts

To create a lasting approach for continued joint research and development, the work of the Consortium will be open and transparent, providing a hub for interested parties to work together in building and maturing a measurement science for trustworthy and responsible AI.

Consortium members’ contributions will support one of the following areas:

  1. Develop new guidelines, tools, methods, protocols and best practices to facilitate the evolution of industry standards for developing or deploying AI in safe, secure, and trustworthy ways
  2. Develop guidance and benchmarks for identifying and evaluating AI capabilities, with a focus on capabilities that could potentially cause harm 
  3. Develop approaches to incorporate secure-development practices for generative AI, with special considerations for dual-use foundation models, including:
    • Guidance related to assessing and managing the safety, security, and trustworthiness of models, and to privacy-preserving machine learning; and
    • Guidance to ensure the availability of testing environments
  4. Develop and ensure the availability of testing environments
  5. Develop guidance, methods, skills and practices for successful red-teaming and privacy-preserving machine learning
  6. Develop guidance and tools for authenticating digital content
  7. Develop guidance and criteria for AI workforce skills, including risk identification and management, test, evaluation, validation, and verification (TEVV), and domain-specific expertise
  8. Explore the complexities at the intersection of society and technology, including the science of how humans make sense of and engage with AI in different contexts
  9. Develop guidance for understanding and managing the interdependencies between and among AI actors along the lifecycle

Membership Process

Organizations had 75 days (between Nov. 2, 2023, and Jan. 15, 2024) to submit a letter of interest as described in the Federal Register.

NIST received over 600 Letters of Interest from organizations across the AI stakeholder community and the United States. As of February 8, 2024, the consortium includes more than 200 member companies and organizations. 

NIST will continue to onboard organizations that submitted Letters of Interest prior to the January 15, 2024, deadline. For questions, contact usaisi [at] nist.gov.

Participants not selected initially, or those that submitted a letter of interest after the selection process, may have continuing opportunities to participate in the Consortium even after initial activity commences. Selected participants will be required to enter into a consortium Cooperative Research and Development Agreement (CRADA) with NIST. At NIST’s discretion, entities that are not permitted by law to enter into CRADAs may be allowed to participate in the Consortium under a separate non-CRADA agreement.

NIST cannot guarantee that all submissions, or the products proposed by respondents, will be used in consortium activities. Each prospective participant will be expected to work collaboratively with NIST staff and other project participants under the terms of the Consortium CRADA.

An evaluation copy of the AISIC CRADA is now available.
