AI Fundamental Research - Security

The trustworthiness of AI technologies depends in part on how secure they are. The NIST AI Risk Management Framework (AI RMF) identifies "Secure and Resilient" as one of the primary characteristics of AI trustworthiness. NIST conducts foundational and applied research and provides guidance to help make AI applications more secure and free of vulnerabilities.

AI risks should not be considered in isolation. Treating AI risks along with other critical risks, such as cybersecurity, will yield more integrated outcomes and organizational efficiencies. Some risks related to AI systems are common across other types of software development and deployment. Overlapping risks include security concerns related to the confidentiality, integrity, and availability of the system and its training and output data, along with the general security of the underlying software and hardware for AI systems. Cybersecurity risk management considerations and approaches are applicable in the design, development, deployment, evaluation, and use of AI systems. NIST develops a wide array of cybersecurity standards, guidelines, best practices, and other resources. Those efforts complement and enhance its portfolio of AI activities to meet the needs of U.S. industry, federal agencies, and the broader public.

Much more work on security for AI technologies is needed, and both the challenges and the potential solutions are evolving rapidly. For example, existing frameworks and guidance are unable to comprehensively address security concerns related to evasion, model extraction, membership inference, availability, or other machine learning attacks. They also do not account for the complex attack surface of AI systems or other security abuses enabled by AI systems.
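As a concrete illustration of the evasion category, the sketch below applies the widely known fast gradient sign method (FGSM) to perturb an input so that a trained classifier is more likely to mislabel it. The model, data, and epsilon value are hypothetical placeholders chosen for illustration; this code is not drawn from any NIST publication or tool.

```python
# Illustrative sketch of an evasion attack (FGSM).
# The classifier, inputs, and epsilon are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, label, epsilon=0.03):
    """Perturb input x so that model is more likely to mislabel it."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Example usage with a hypothetical trained classifier `clf`:
# x_adv = fgsm_evasion(clf, x_batch, y_batch, epsilon=0.03)
# evasion_rate = (clf(x_adv).argmax(dim=1) != y_batch).float().mean()
```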

AI technologies also have the potential to transform cybersecurity. They offer the prospect of giving defenders new tools that can address security vulnerabilities and potentially mitigate cybersecurity workforce shortages – even as they can enhance the capabilities of those seeking to target organizations and individuals through information technology (IT) and operational technology (OT) attacks.

Several examples of AI-specific security efforts now underway by NIST follow:

Taxonomy and Terminology: Attacks and Mitigations

In March 2023, the National Cybersecurity Center of Excellence (NCCoE), managed by NIST, published a draft report, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2e2023). This draft report is a step toward securing applications of AI, specifically Adversarial Machine Learning (AML), and features a taxonomy of concepts and terminology. When final, the report can inform future standards and best practices for assessing and managing machine learning security by establishing a common language and understanding of the rapidly developing AML landscape. This version reflects public comments on a previous draft; the public comment period for the current draft closes on September 30, 2023.

Platform development and demonstration capability

Researchers at the NCCoE are developing a platform, Dioptra, intended to be a shared resource that furthers AI research across NIST. The Secure AI program is using the platform as a testbed to research and develop metrics and best practices for assessing vulnerabilities of AI models. Initially, the platform will be used to demonstrate attacks against a machine learning environment and the effectiveness of defenses against them.
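The sketch below illustrates the kind of measurement such a testbed might automate: comparing a model's accuracy on clean inputs with its accuracy on adversarially perturbed inputs. It is a generic illustration under assumed placeholder names and does not use Dioptra's actual interfaces; the attack routine could be any perturbation function, for example the FGSM sketch above.

```python
# Generic illustration of a robustness metric a testbed might report.
# This does not use Dioptra's actual API; `model`, `loader`, and `attack`
# are assumed placeholders.
import torch

def clean_vs_robust_accuracy(model, loader, attack, epsilon=0.03):
    """Return (clean accuracy, accuracy under the given attack)."""
    clean_correct, robust_correct, total = 0, 0, 0
    for x, y in loader:
        with torch.no_grad():
            clean_correct += (model(x).argmax(dim=1) == y).sum().item()
        # The attack itself may need gradients, so it runs outside no_grad.
        x_adv = attack(model, x, y, epsilon)
        with torch.no_grad():
            robust_correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return clean_correct / total, robust_correct / total
```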

Created April 20, 2020, Updated January 10, 2024