AI Foundational Research - Security


NIST conducts foundational research on metrics and best practices to ensure AI applications are secure and free from vulnerabilities. The Information Technology Laboratory (ITL) and the National Cybersecurity Center of Excellence (NCCoE) have invested more than $2.5 million in projects related to securing AI, and plan to invest an additional $1.4 million in FY2021 to develop a reference architecture, example solutions, and best practices for securing AI.

Common Understanding

In October 2019, the NCCoE published draft NIST Internal Report (NISTIR) 8269, A Taxonomy and Terminology of Adversarial Machine Learning, as a step toward securing AI applications against Adversarial Machine Learning (AML). The report features a taxonomy of AML concepts and terminology, and can inform future standards and best practices for assessing and managing machine learning security by establishing a common language for the rapidly developing AML landscape. The public comment period for the first draft closed on January 30, 2020; a second draft review and comment period is planned, with the final document expected later this year.

For more information about the NISTIR, please visit the NCCoE’s page “Artificial Intelligence: Adversarial Machine Learning.”

The development of the NISTIR has been instrumental in building the AI Community of Interest and encouraging engagement with NIST.

Platform development and demonstration capability

The NCCoE is developing a platform intended to be a shared resource for furthering AI research across NIST. The Secure AI program is using the platform as a testbed to research and develop metrics and best practices for assessing the vulnerabilities of AI models. Initially, the platform will be used to demonstrate attacks against a machine learning environment and the effectiveness of defenses against those attacks, along the lines of the sketch below.
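
To illustrate the kind of experiment such a testbed could host, the following sketch crafts adversarial examples with the Fast Gradient Sign Method (FGSM), one widely studied AML attack, and compares a model's accuracy on clean inputs against its accuracy under attack. This is a minimal, hypothetical example assuming PyTorch; the model and data loader are placeholders, not part of the NCCoE platform.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon):
        # Perturb each pixel by epsilon in the direction that increases the loss.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        adv = images + epsilon * images.grad.sign()
        return adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

    def evaluate(model, loader, epsilon):
        # Compare accuracy on clean inputs vs. FGSM-perturbed inputs.
        model.eval()
        clean_correct = adv_correct = total = 0
        for images, labels in loader:
            with torch.no_grad():
                clean_correct += (model(images).argmax(1) == labels).sum().item()
            adv_images = fgsm_attack(model, images, labels, epsilon)  # needs gradients
            with torch.no_grad():
                adv_correct += (model(adv_images).argmax(1) == labels).sum().item()
            total += labels.size(0)
        return clean_correct / total, adv_correct / total

    # Hypothetical usage: model and test_loader are placeholders.
    # clean_acc, adv_acc = evaluate(model, test_loader, epsilon=0.1)

Sweeping epsilon upward from zero traces how quickly a model's accuracy degrades under perturbation, one concrete metric a vulnerability assessment of this kind can report.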

Working example and practical guidance

In FY2021, ITL plans to support an NCCoE project in collaboration with industry partners, academia, and the broader AI Community of Interest. This effort will produce practical guidance, including mappings to relevant best practices and standards, a reference architecture, and an example solution proven in a laboratory environment.

The results of this practical application will provide a stronger understanding of the tools and techniques for securing AI and will be published in a NIST 1800 series practice guide.

Completing this project will draw on resources from across NIST and ITL, including Federally Funded Research and Development Center resources from the MITRE Corporation, as well as collaborators from industry and other agencies.
