August 6, 2020 | AI Kickoff Webinar
This webinar kicks off a NIST initiative involving private and public sector organizations and individuals in discussions about building blocks for trustworthy AI systems and the associated measurements, methods, standards, and tools to implement those building blocks when developing, using, and overseeing AI systems. NIST’s effort will be informed by a series of workshops that will follow this initial session.
August 18, 2020 | Bias in AI Workshop
This workshop focuses on developing a shared understanding of bias in AI: what it is and how to measure it. This online event will consist of collaborative panels and breakout sessions and will bring together experts from the public and private sectors to engage in important discussions about bias in AI.
Artificial Intelligence (AI) is rapidly transforming our world. Remarkable surges in AI capabilities have led to a number of innovations including autonomous vehicles and connected Internet of Things devices in our homes. AI is even contributing to the development of a brain-controlled robotic arm that can help a paralyzed person feel again through complex direct human-brain interfaces. These new AI-enabled systems are revolutionizing everything from commerce and healthcare to transportation and cybersecurity.
AI has the potential to impact nearly all aspects of our society, including our economy, but the development and use of the new technologies it brings are not without technical challenges and risks. AI must be developed in a trustworthy manner to ensure reliability, safety and accuracy.
NIST has a long-standing reputation for cultivating trust in technology by participating in the development of standards and metrics that strengthen measurement science and make technology more secure, usable, interoperable and reliable. This work is critical in the AI space to ensure public trust in rapidly evolving technologies, so that we can benefit from all that this field promises.
AI systems typically make decisions based on data-driven models created by machine learning, or the system’s ability to detect and derive patterns. As the technology advances, we will need to develop rigorous scientific testing that ensures secure, trustworthy and safe AI. We also need to develop a broad spectrum of standards for AI data, performance, interoperability, usability, security and privacy.
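To make the idea of a data-driven model concrete, the sketch below shows one of the simplest pattern-detecting approaches, a nearest-neighbor classifier, which labels a new input by finding the most similar example it has seen before. The data points and labels are hypothetical and purely illustrative, not drawn from any NIST system.

```python
# Minimal sketch of a data-driven model: a 1-nearest-neighbor classifier
# that derives its behavior from labeled examples rather than hand-written rules.
# All data below is hypothetical, for illustration only.

def predict(training_data, point):
    """Label a new point with the label of its closest training example."""
    def distance(a, b):
        # Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(training_data, key=lambda example: distance(example[0], point))
    return nearest[1]

# Labeled examples: (features, label)
training_data = [
    ((1.0, 1.0), "benign"),
    ((1.2, 0.8), "benign"),
    ((5.0, 5.2), "anomalous"),
    ((4.8, 5.1), "anomalous"),
]

print(predict(training_data, (1.1, 0.9)))  # near the "benign" cluster
print(predict(training_data, (5.1, 5.0)))  # near the "anomalous" cluster
```

Even in this toy form, the model's decisions depend entirely on the examples it was given, which is why rigorous testing of training data and model behavior matters for trustworthy AI.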
NIST participates in interagency efforts to further innovation in AI. NIST Director and Under Secretary of Commerce for Standards and Technology Walter Copan serves on the White House Select Committee on Artificial Intelligence. Charles Romine, Director of NIST's Information Technology Laboratory, serves on the Machine Learning and AI Subcommittee.
A February 11, 2019, Executive Order on Maintaining American Leadership in Artificial Intelligence tasks NIST with developing “a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.” For more information, see: https://www.nist.gov/topics/artificial-intelligence/ai-standards.
NIST research in AI is focused on how to measure and enhance the security and trustworthiness of AI systems. This includes participation in the development of international standards that ensure innovation, public trust and confidence in systems that use AI technologies. In addition, NIST is applying AI to measurement problems to gain deeper insight into the research itself as well as to better understand AI’s capabilities and limitations.
The NIST AI program has two major goals: measuring and enhancing the security and trustworthiness of AI systems, and applying AI to NIST's own measurement science research.
The recently launched AI Visiting Fellow program brings nationally recognized leaders in AI and machine learning to NIST to share their knowledge and experience and to provide technical support.