Artificial Intelligence (AI) is rapidly transforming our world. Remarkable surges in AI capabilities have led to a number of innovations including autonomous vehicles and connected Internet of Things devices in our homes. AI is even contributing to the development of a brain-controlled robotic arm that can help a paralyzed person feel again through complex direct human-brain interfaces. These new AI-enabled systems are revolutionizing everything from commerce and healthcare to transportation and cybersecurity.
AI has the potential to impact nearly all aspects of our society, including our economy, but the development and use of the new technologies it brings are not without technical challenges and risks. AI must be developed in a trustworthy manner to ensure reliability, safety and accuracy.
Cultivating Trust in AI Technologies
NIST has a long-standing reputation for cultivating trust in technology by participating in the development of standards and metrics that strengthen measurement science and make technology more secure, usable, interoperable and reliable. This work is critical in the AI space to ensure public trust in rapidly evolving technologies, so that we can benefit from all that this field has to offer.
AI systems typically make decisions based on data-driven models created through machine learning, the system’s ability to detect and derive patterns from data. As the technology advances, we will need to develop rigorous scientific testing that ensures secure, trustworthy and safe AI. We also need to develop a broad spectrum of standards for AI data, performance, interoperability, usability, security and privacy.
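To make the "data-driven model" idea concrete, here is a minimal sketch in Python: the program is never told the rule behind the data; it derives that pattern from examples using ordinary least squares. The function name and data are our own illustrative choices, not part of any NIST tool.

```python
def fit_line(xs, ys):
    """Learn a slope and intercept from (x, y) example data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimate of the slope.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data secretly generated by the rule y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]

slope, intercept = fit_line(xs, ys)
print(slope, intercept)        # the model recovers the pattern: 2.0 1.0
print(slope * 10 + intercept)  # prediction for unseen input x = 10: 21.0
```

The same principle underlies far more complex AI systems: the model's behavior comes from the data it was trained on, which is exactly why standards for AI data quality, performance testing and security matter.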