Artificial Intelligence (AI) is rapidly transforming our world. Remarkable surges in AI capabilities have led to a wide range of innovations, including autonomous vehicles and connected Internet of Things devices in our homes. AI is even contributing to the development of a brain-controlled robotic arm that can help a paralyzed person feel again through complex direct human-brain interfaces. These new AI-enabled systems are revolutionizing and benefiting nearly all aspects of our society and economy, from commerce and healthcare to transportation and cybersecurity. But the development and use of these new technologies are not without technical challenges and risks.
NIST contributes to the research, standards and data required to realize the full promise of AI as a tool that will enable American innovation, enhance economic security and improve our quality of life. Much of our work focuses on cultivating trust in the design, development, use and governance of AI technologies and systems. We are doing this by:
- Conducting fundamental research into and development of AI technologies, including software, hardware, architectures and human interaction and teaming, vital for AI computational trust.
- Applying machine learning and AI tools, increasingly essential components of new research, across NIST's laboratories, where scientists and engineers use them to gain deeper understanding of and insight into their research; at the same time, NIST laboratory experiences with AI are leading to a better understanding of AI's capabilities and limitations.
- Building on a long history of devising and revising metrics, measurement tools, standards and test beds to focus increasingly on evaluating the technical characteristics of trustworthy AI.
- Leading and participating in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance and governance are, and increasingly will be, a priority for the use and creation of trustworthy and responsible AI.
AI and machine learning (ML) are changing the way society addresses economic and national security challenges and opportunities. These technologies are being used in genomics, image and video processing, materials science, natural language processing, robotics, wireless spectrum monitoring and more. They must be developed and used in a trustworthy and responsible manner.
While answers to the question of what makes an AI technology trustworthy may differ depending on whom you ask, certain key characteristics support trustworthiness: accuracy; explainability and interpretability; privacy; reliability; robustness; safety; security (resilience); and mitigation of harmful bias. Principles such as transparency, fairness and accountability should also be considered, especially during deployment and use. Trustworthy data, standards, and evaluation, validation and verification are critical for the successful deployment of AI technologies.
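The characteristics listed above can serve as a simple assessment checklist for an AI system. The following is a purely illustrative sketch, not part of any NIST tool or framework; the names `TrustCharacteristic` and `unaddressed` are hypothetical.

```python
from enum import Enum


class TrustCharacteristic(Enum):
    """Key characteristics of trustworthy AI named in the text above."""
    ACCURACY = "accuracy"
    EXPLAINABILITY = "explainability and interpretability"
    PRIVACY = "privacy"
    RELIABILITY = "reliability"
    ROBUSTNESS = "robustness"
    SAFETY = "safety"
    SECURITY = "security (resilience)"
    BIAS_MITIGATION = "mitigation of harmful bias"


def unaddressed(assessed: set) -> set:
    """Return the characteristics a system has not yet been assessed for."""
    return set(TrustCharacteristic) - assessed


# Example: a system so far assessed only for accuracy and privacy
remaining = unaddressed({TrustCharacteristic.ACCURACY, TrustCharacteristic.PRIVACY})
print(len(remaining))  # 6 characteristics still to assess
```

A checklist like this makes the gap analysis explicit: any characteristic left in `remaining` flags a dimension of trustworthiness the system's developers have not yet evaluated.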
Delivering the needed measurements, standards and other tools is a primary focus of NIST's portfolio of AI efforts, an area in which NIST has special responsibilities and expertise. NIST relies heavily on stakeholder input, including via workshops, and issues most of its publications in draft form for public comment.
NIST Seeks Comments on AI Risk Management Framework Guidance, Workshop Date Set
NIST is seeking comments on a second draft of the NIST Artificial Intelligence Risk Management Framework (AI RMF). The AI RMF is intended for voluntary use in addressing risks in the design, development, use, and evaluation of AI products, services, and systems. The new draft builds on and reflects the discussions at the AI RMF Workshop #2 and incorporates feedback received on the initial draft.