On January 15, 2026, NIST released A Possible Approach for Evaluating AI Standards Development (GCR-26-069). The report is intended to stimulate discussion of a potential approach to evaluating the effectiveness, utility, and relative value of AI standards development.
NIST welcomes public feedback on the potential approach outlined in the report. Input may be shared at any time via email to ai-standards [at] nist [dot] gov. Submissions, including attachments, will become part of the public record.
NIST intends to announce an online event to encourage dialogue. Those interested in joining the discussion should sign up for alerts to receive updates about AI-related news and events.
NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance, and governance is, and increasingly will be, a priority for trustworthy and responsible AI. NIST carries out its work consistent with the U.S. Government National Standards Strategy for Critical and Emerging Technology.
Global Engagement for AI Standards
NIST has developed a plan for global engagement on promoting and developing AI standards. The goal is to drive the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing. Reflecting public and private sector input, NIST released a draft plan on April 29, 2024. On July 26, 2024, after considering public comments on the draft, NIST released A Plan for Global Engagement on AI Standards (NIST AI 100-5).
Ensuring Awareness and Federal Coordination in AI Standards Efforts
- In its role as federal AI standards coordinator, NIST works across the government and with industry stakeholders to identify critical standards development activities, strategies, and gaps. Based on priorities outlined in the NIST-developed “Plan for Federal Engagement in AI Standards and Related Tools,” NIST seeks out AI standards development opportunities. It periodically collects and analyzes information about agencies’ AI standards-related priority activities and makes recommendations through the interagency process to optimize engagement.
- On March 1, 2022, NIST delivered to Congress a report summarizing federal agencies’ progress in implementing the recommendations of the U.S. Leadership in AI plan.
- NIST is facilitating federal agency coordination in the development and use of AI standards in part through the Interagency Committee on Standards Policy (ICSP), which it chairs. The ICSP AI Standards Coordination Working Group (AISCWG) aims to promote effective and consistent federal policies leveraging AI standards, raise awareness, and foster agencies’ use of AI to inform the development of standards. The group also helps to coordinate government and private sector positions on international AI standards activities.
Encouraging International Standards Incorporation of the AI Risk Management Framework (AI RMF 1.0)
- Incorporation of the AI RMF in international standards will further the Framework’s value as a resource to those designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.
- The AI RMF seeks to “Take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks....” AI RMF 1.0 takes into account and cites international standards and documents.
- As part of the AI RMF Roadmap, NIST is making it a priority to continue to align the AI RMF and related guidance with applicable international standards, guidelines, and practices. The roadmap specifically cites “Alignment with international standards and production crosswalks to related standards (e.g., ISO/IEC 5338, ISO/IEC 38507, ISO/IEC 22989, ISO/IEC 24028, ISO/IEC DIS 42001, and ISO/IEC NP 42005).”
- The first two crosswalks to the AI RMF created by NIST are a crosswalk to ISO/IEC FDIS 23894 (Information technology - Artificial intelligence - Guidance on risk management) and an illustration of how NIST AI RMF trustworthiness characteristics relate to the OECD Recommendation on AI, the proposed EU AI Act, and several other key documents. NIST has since posted additional AI RMF crosswalks in the NIST Trustworthy and Responsible AI Center.
Facilitating Dialogue on Approaches for Evaluating AI Standards Development
- On January 15, 2026, NIST released A Possible Approach for Evaluating AI Standards Development (GCR-26-069). The report, prepared by Dr. Julia Lane, a NIST Associate and Professor Emerita at New York University, is intended to stimulate discussion of a potential approach to evaluating the effectiveness, utility, and relative value of AI standards development.
- Recognizing the lack of formal or shared methods for measuring the impact of standards development on the goals of innovation and trust, the report sketches a conceptual structure for evaluating whether a given AI standard or set of standards meets those goals.
- The report draws on successful and well-tested evaluation approaches, tools, and metrics that are used for monitoring and assessing the effect of interventions in other domains.