
AI Fundamental Research - Security

NIST conducts foundational research on metrics and best practices to ensure AI applications are secure and free from vulnerabilities. 

Common Understanding

In October 2019, the NCCoE published draft NIST Internal Report (NISTIR) 8269, A Taxonomy and Terminology of Adversarial Machine Learning, as a step toward securing applications of AI against Adversarial Machine Learning (AML). The report features a taxonomy of AML concepts and terminology, and can inform future standards and best practices for assessing and managing machine learning security by establishing a common language and understanding of the rapidly developing AML landscape. The first public comment period closed on January 30, 2020; a second draft review and comment period is planned, with the final document expected later this year.

For more information about the NISTIR, please visit the NCCoE’s page “Artificial Intelligence: Adversarial Machine Learning.”

The development of the NISTIR has been instrumental in building the AI Community of Interest and encouraging engagement with NIST.

Platform development and demonstration capability

The NCCoE is developing a platform intended to be a shared resource for furthering AI research across NIST. The Secure AI program is using the platform as a testbed to research and develop metrics and best practices for assessing the vulnerabilities of AI models. Initially, the platform will be used to demonstrate attacks against machine learning models and the effectiveness of defenses against them.
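To make the kind of demonstration described above concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), a well-known evasion attack of the sort a testbed like this could run against a machine learning model. This is an illustrative example only, assuming a PyTorch environment; the model, input, and epsilon value are hypothetical placeholders, not components of the NCCoE platform.

```python
# Illustrative FGSM sketch (hypothetical model and data, not the NCCoE platform).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb input x to increase the model's loss on the true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in the valid [0, 1] range

# Hypothetical usage: a toy classifier and a random 28x28 "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
x_adv = fgsm_attack(model, x, label)
print((x_adv - x).abs().max())  # perturbation magnitude is bounded by epsilon
```

A defense evaluation on such a testbed would then measure how much the model's accuracy drops on perturbed inputs like these, with and without a mitigation such as adversarial training.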

Working example and practical guidance

In FY2021, ITL plans to support an NCCoE project in collaboration with industry partners, academia, and our broader AI Community of Interest. This effort will result in practical guidance that includes mappings to relevant best practices and standards, a reference architecture, and an example solution proven in a laboratory environment.

The results of this practical application will provide a stronger understanding of the tools and techniques for securing AI and will be published in a NIST 1800-series practice guide in the near future.

Completing this project will draw on resources across NIST and ITL, including Federally Funded Research and Development Center resources from the MITRE Corporation, as well as collaborators from industry and other agencies.
