NIST Information Technology Laboratory AI Webinar Series: The International AI Standards Landscape and ITL’s Role, Priorities, and Progress


NIST’s Information Technology Laboratory (ITL) AI Program held an interactive webinar on the international AI standards landscape and ITL’s role, priorities, and progress.

The webinar included an overview of the current state of the international AI standards ecosystem and of ITL's progress in accelerating and broadening participation in the standardization process. The session also highlighted specific opportunities for the community to get more involved in ITL's AI standards-related efforts and in the standards ecosystem more broadly.

This webinar is the first in a series hosted by ITL that will offer overviews of and updates on ITL's AI Program.

Webinar Materials: 

FAQs:

How does NIST's voluntary pre-standardization research, such as the AI Risk Management Framework, translate into application for organizations? What are some ways to distinguish between pre-standardization work and formal consensus-based standards?

The NIST AI RMF, produced by the NIST Information Technology Laboratory (ITL), is a set of voluntary guidelines, not a standard, though it can readily be used as the basis for standards. The AI RMF, along with complementary materials, provides a structural taxonomy (Govern, Map, Measure, Manage) to operationalize trustworthy AI characteristics into concrete practices.

AI standards are technical and procedural documents that establish shared rules, guidelines, and definitions for developing and evaluating artificial intelligence systems. These documents are created through a structured, international consensus process managed by independent Standards Developing Organizations (SDOs) like ISO/IEC JTC1 SC42. In the AI standards ecosystem, NIST, alongside a wide set of stakeholders, provides technical inputs and foundational pre-standardization research that SDOs use to strive for outputs that are technically rigorous and implementable. NIST staff draw on the agency's pre-standardization outputs, like the AI RMF, along with their own technical expertise when contributing to SDOs.

How does the international standards ecosystem address concerns regarding AI safety, and how do these standards translate into measurable engineering practices like red-teaming and Testing, Evaluation, Verification, and Validation (TEVV)?

Consensus standards development is a private sector-led process. A wide set of stakeholders work together to create technically rigorous standards for topics that stakeholders agree are ready for standardization.

In the AI standards ecosystem, topics such as AI safety are addressed through concrete risk management, use-case context, and robust measurement science. For example, NIST contributes to the ISO/IEC 42001 series, which specifies requirements for organizations providing or utilizing AI systems, and to other standards, such as ISO/IEC 25568, that provide technical techniques to assess and remediate AI risks.

NIST/ITL also actively contributes to ISO/IEC 42119 on AI red-teaming. Through the Zero Drafts Pilot Project, NIST/ITL has consulted with a wide range of stakeholders to create an outline on TEVV for AI, which will soon be submitted to ISO/IEC JTC 1/SC 42 as a candidate for standardization.

Why did NIST create the Zero Drafts Pilot Project? Are you allowed to use Generative AI tools in drafting these pre-standardization or standardized inputs?

The Zero Drafts Pilot Project is designed to simultaneously accelerate the standards development timeline and broaden multi-disciplinary participation before the lengthy, formal SDO consensus procedures begin. NIST offered a series of topics ready for standardization; stakeholders prioritized TEVV for AI and Documentation of AI Models and Datasets.

After consensus on topics was reached, NIST distilled community input into outlines and, based on outline feedback, is creating preliminary drafts. These drafts will then be handed over to an SDO, likely ISO/IEC JTC 1/SC 42, to undergo its formal consensus processes.

Regarding drafting mechanics, there are strict normative prohibitions. For example, within ISO/IEC JTC 1/SC 42, the use of generative AI to produce text is explicitly prohibited. Consequently, all Zero Drafts are hand-drafted by NIST subject matter experts, who review and adjudicate inputs gathered from the private sector, public sector, and academia.

What is NIST's role in the AI standards ecosystem? How does U.S. representation in international standardization bodies coordinate to facilitate global interoperability and prevent the fragmentation of AI standardization?

U.S. representation in the AI standards ecosystem spans the private sector, public sector, civil society, and more. NIST does not create AI standards, nor does NIST serve as an official U.S. representative to any SDOs with respect to AI standards. The American National Standards Institute (ANSI) holds that role, and INCITS/AI serves as the U.S. Technical Advisory Group to ISO/IEC JTC 1/SC 42. NIST's role is to provide technical inputs and support to these bodies.

International consensus standards are created with the input of a wide set of stakeholders across the world and thus are built with global interoperability in mind. National bodies and members of SDOs are often aware of each other's AI standardization work and might work to build upon existing standards or decide to pursue a different topic that also needs standardization.

What are the established mechanisms for multi-disciplinary stakeholders (from small startups to highly regulated, domain-specific entities such as financial institutions and local governments) to engage with NIST's AI Program and contribute to the broader standards ecosystem?
  • Developing technically rigorous and globally interoperable AI standards benefits immensely from the diverse expertise of the entire U.S. stakeholder ecosystem. The most effective pathway for an organization to contribute depends on whether its objective is to inform NIST's voluntary, pre-standardization research or to participate directly in the formal drafting of consensus-based standards.

    There are several potential avenues for engagement: 

    • Direct Pre-Standardization Engagement: Stakeholders can share technical views or express interest in potential topics by emailing us directly at ai-standards [at] nist.gov.
    • NIST AI Consortium: Organizations enrolled in NIST's AI Consortium can engage with its standards initiatives, such as the Zero Drafts effort, via workshops.
    • Formal SDO Participation: To contribute to the development of consensus standards, interested stakeholders should contact the SDOs directly. Specific domain expertise or interest will help guide where and how your organization might seek to engage, noting that the list below should not be considered as promoting a particular SDO:
      • Horizontal Standards: Foundational AI standards, applicable across all sectors, are created in bodies such as ISO/IEC JTC 1/SC 42 and IEEE.
      • Vertical Standards: Specialized vertical standards for AI are actively developed by SDOs such as AAMI, HL7, CTA, and ISO/TC 215 for healthcare; SAE International, RTCA, EUROCAE, and ISO/TC 22 for mobility and aviation; ASC X9 and ISO/TC 68 for financial services; and ITU-T and ATIS for telecommunications. 

About the ITL AI Program

The AI Program in NIST’s Information Technology Laboratory (ITL) accelerates and expands development and adoption of AI by strengthening trust in AI through vital measurement science, testing and evaluation, guidance, and standards.

Focus areas include:

  • Advancing Testing, Evaluation, Verification, and Validation (TEVV) to ensure that AI is deployed and used responsibly, reliably, and efficiently
  • Providing resources for managing AI benefits and risks, empowering industry, academia, non-profits, and government to make informed decisions about AI trustworthiness and use
  • Positioning the U.S. as preeminent in AI technical and governance standards, ensuring the U.S. leads global AI innovation
  • Enabling the U.S. to lead in applying AI to high-priority areas, including manufacturing and cybersecurity for critical infrastructure, via innovative approaches to address measurement challenges
Created February 5, 2026, Updated April 27, 2026