AISIC Working Groups

NIST hosted a workshop on November 17, 2023, to engage in a conversation about artificial intelligence (AI) safety. As a result of the workshop, NIST developed an initial list of working groups in which members may participate:

  • Working Group #1: Risk Management for Generative AI 
    • Develop a companion resource to the AI Risk Management Framework (AI RMF) for generative AI 
    • Develop minimum risk management guidance geared toward federal agencies
    • Operationalize the AI RMF
  • Working Group #2: Synthetic Content 
    • Identify the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, for: 
      • Authenticating content and tracking its provenance
      • Labeling synthetic content, such as using watermarking
      • Detecting synthetic content
      • Preventing generative AI from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals
      • Testing software used for the above purposes
      • Auditing and maintaining synthetic content
  • Working Group #3: Capability Evaluations 
    • Create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, including chemical, biological, radiological, and nuclear (CBRN) threats; cybersecurity; autonomous replication; and control of physical systems 
    • Develop and aid in ensuring the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies
  • Working Group #4: Red-Teaming 
    • Establish guidelines, including appropriate procedures and processes, to enable AI developers, especially developers of dual-use foundation models, to conduct AI red-teaming tests that support the deployment of safe, secure, and trustworthy systems
  • Working Group #5: Safety & Security 
    • Coordinate and develop guidelines related to managing the safety and security of dual-use foundation models

We anticipate that societal and technological considerations will be incorporated throughout the working group activities to ensure that safe and trustworthy AI systems can be developed effectively.

Initial Consortium activities include workshops that bring together stakeholders on these key topics; the creation of test environments, data sets, guidelines, and frameworks to lay the foundations of AI safety; and ongoing reporting that describes the progress and challenges as NIST and Consortium members work toward a safer AI future.

AI Safety Institute Consortium members will receive instructions from the AISIC team on how to participate in the working groups.

For questions, contact usaisi@nist.gov.
