NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance, and governance is – and increasingly will be – a priority for trustworthy and responsible AI.
Ensuring Awareness and Federal Coordination in AI Standards Efforts
- In its role as federal AI standards coordinator, NIST works across the government and with industry stakeholders to identify critical standards development activities, strategies, and gaps. Based on priorities outlined in the NIST-developed “Plan for Federal Engagement in AI Standards and Related Tools,” NIST is tracking AI standards development opportunities, periodically collecting and analyzing information about agencies’ AI standards-related priority activities and making recommendations through the interagency process to optimize engagement.
- On March 1, 2022, NIST delivered to Congress a report summarizing progress that federal agencies have made to implement the recommendations of the U.S. Leadership in AI plan.
- NIST is facilitating federal agency coordination in the development and use of AI standards in part through the Interagency Committee on Standards Policy (ICSP), which it chairs. The ICSP AI Standards Coordination Working Group (AISCWG) aims to promote effective and consistent federal policies leveraging AI standards, raise awareness of those standards, and foster agencies’ use of AI to inform standards development. The group also helps to coordinate government and private sector positions on international AI standards activities.
Encouraging International Standards Incorporation of the AI Risk Management Framework (AI RMF 1.0)
- Incorporation of the AI RMF in international standards will further the Framework’s value as a resource for those designing, developing, deploying, or using AI systems, helping them manage the many risks of AI and promoting trustworthy and responsible development and use of those systems.
- The AI RMF seeks to “Take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks....” AI RMF 1.0 takes into account and cites international standards and documents.
- As part of the AI RMF Roadmap, NIST is making it a priority to continue to align the AI RMF and related guidance with applicable international standards, guidelines, and practices. The roadmap specifically cites “Alignment with international standards and production crosswalks to related standards (e.g., ISO/IEC 5338, ISO/IEC 38507, ISO/IEC 22989, ISO/IEC 24028, ISO/IEC DIS 42001, and ISO/IEC NP 42005.)”
- NIST has created the first two crosswalks to the AI RMF: one to ISO/IEC FDIS 23894, Information technology – Artificial intelligence – Guidance on risk management, and one illustrating how the NIST AI RMF trustworthiness characteristics relate to the OECD Recommendation on AI, the Proposed EU AI Act, and several other key documents.