Artificial Intelligence (AI) is rapidly transforming our world. Remarkable surges in AI capabilities have led to a wide range of innovations, including autonomous vehicles and connected Internet of Things devices in our homes. AI is even contributing to the development of a brain-controlled robotic arm that can help a paralyzed person feel again through complex direct human-brain interfaces. These new AI-enabled systems are revolutionizing and benefiting nearly all aspects of our society and economy, from commerce and healthcare to transportation and cybersecurity. But the development and use of these new technologies are not without technical challenges and risks.
On October 30, 2023, President Joseph R. Biden signed an Executive Order (EO) to build U.S. capacity to evaluate and mitigate the risks of AI systems in order to ensure safety, security, and trust, while promoting an innovative, competitive AI ecosystem that supports workers and protects consumers. Learn more about NIST's responsibilities under the EO and the creation of the U.S. Artificial Intelligence Safety Institute, including the new consortium being established.
NIST contributes to the research, standards and data required to realize the full promise of artificial intelligence as a tool that will enable American innovation, enhance economic security and improve our quality of life. Much of our work focuses on cultivating trust in the design, development, use and governance of AI technologies and systems. We pursue this through the activities described below.
NIST’s AI portfolio includes fundamental research into, and development of, AI technologies, including software, hardware, architectures, and human interaction and teaming, all of which are vital to computational trust in AI.
AI approaches are increasingly an essential component of new research. NIST scientists and engineers use various machine learning and AI tools to gain deeper insight into their research. At the same time, these laboratory experiences with AI are giving NIST a better understanding of AI’s capabilities and limitations.
With a long history of devising and revising metrics, measurement tools, standards and test beds, NIST is increasingly focusing on evaluating the technical characteristics of trustworthy AI.
NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance and governance is, and increasingly will be, a priority for the creation and use of trustworthy and responsible AI.
AI and machine learning (ML) are changing the way society addresses economic and national security challenges and opportunities. These technologies are being used in genomics, image and video processing, materials science, natural language processing, robotics, wireless spectrum monitoring and more. They must be trustworthy and developed for responsible practice and use. Trustworthy AI systems are demonstrated to be valid and reliable; safe, secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
Delivering the needed measurements, standards and other tools is a primary focus of NIST’s portfolio of AI efforts and an area in which NIST has special responsibilities and expertise. NIST relies heavily on stakeholder input, including via workshops, and issues most of its publications in draft form for public comment.