The Executive Order on Maintaining American Leadership in Artificial Intelligence (EO 13859) reflects the importance of technical standards in enabling innovation and in building public trust and confidence in AI technologies. The EO required a plan to guide Federal agencies’ engagement in the development of AI technical standards, with the goal of ensuring that “technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.”
The Secretary of Commerce, through the NIST Director, was directed by the EO to “issue a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.” NIST committed to fulfilling that responsibility in an open, transparent, inclusive and timely way, engaging the public and private sectors in producing a plan within 180 days of the February 11, 2019, assignment. The plan was delivered to the White House on August 10, 2019, completing the assignment.
By law and policy, NIST plays a key role in coordinating the Federal government’s technical standards activities with those of the private sector. NIST’s technical expertise in AI-related technologies, its extensive experience in standards development, and its reputation as a knowledgeable and unbiased Federal government convener make it well suited for this role.
This new assignment in the EO also aligns well with NIST’s research, which is focused on how to measure and enhance the security and trustworthiness of AI systems.
NIST worked with key stakeholders, including the White House Select Committee on Artificial Intelligence, other government agencies, the private sector, academia and nongovernmental entities, to develop the plan. NIST held a workshop on May 30, 2019, and a recorded webcast of the workshop is available. A Proposed Draft Outline of a Federal Engagement Plan was provided for discussion purposes at the workshop. Attended by more than 200 participants, the workshop gave attendees an opportunity to engage actively in facilitated discussions to advance the development of the plan. Based on the workshop discussion, NIST released a draft plan for public and agency review on July 5, 2019. Comments received were taken into account in the version submitted to the White House.
The plan identifies priority needs for standardization of AI systems development and deployment, progress on AI standards and standards-related tools, opportunities for and challenges to U.S. leadership in standardization related to AI technologies, and Federal priorities. The plan includes practical steps for Federal agencies as well as four recommendations on Federal government standards actions to advance U.S. AI leadership. Those recommendations can be found on page 22 of the plan.
Trustworthiness standards include guidance and requirements for accuracy, explainability, resiliency, safety, reliability, objectivity, and security.
It is important for those participating in AI standards development to be aware of, and to act consistently with, policies and principles, including those that address societal and ethical issues, governance, and privacy. While there is broad agreement that these issues must factor into AI standards, it is not clear how that should be done and whether there is yet sufficient scientific and technical basis to develop those standards provisions.
Many Federal agencies are major developers and users of AI technologies, with the goal of fulfilling their missions more effectively and efficiently. Accordingly, these agencies should be directly engaged in prioritizing and developing AI technical standards. The engagement plan spells out priorities and offers practical guidance on how agencies can proceed.
Artificial intelligence is a strategic priority for NIST. As AI-based technology advances, rigorous scientific testing and standards will be needed to ensure secure, trustworthy and safe AI.
NIST’s research program includes fundamental research to measure and enhance trustworthiness of AI systems by, for example, examining how to measure the security and transparency of AI systems. Moreover, NIST is advancing the application of AI to research programs across its broad laboratory portfolio, from autonomous discovery of advanced materials to robotic systems in manufacturing environments and more.
A broad spectrum of standards for AI data, performance, interoperability, usability, security and privacy will be a priority for trustworthy AI that is accurate, reliable, safe and secure. NIST is including AI considerations in discussions with industry partners at the National Cybersecurity Center of Excellence (NCCoE) to see if and how that center — which focuses on developing practical cybersecurity guidance — might address AI. The NIST privacy team is exploring the implications of AI in managing privacy risk.
The U.S. government’s role in the development and use of standards and conformity assessment is guided by key legislation and policy instruments, including provisions in the National Technology Transfer and Advancement Act (NTTA), OMB Circular A-119, Trade Agreements Act of 1979 (as amended) and other Federal laws, regulations and international agreements. U.S. government agencies are encouraged to participate in and adopt voluntary consensus standards wherever possible (in lieu of government-unique standards).