
CAISI Issues Request for Information About Securing AI Agent Systems

The Center for AI Standards and Innovation (CAISI) at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has published a Request for Information (RFI) seeking insights from industry, academia, and the security community on the secure development and deployment of AI agent systems.

AI agent systems are capable of planning and taking autonomous actions that impact real-world systems or environments. While these systems promise significant benefits for productivity and innovation, they present unique security challenges.

AI agent systems face a range of security threats and risks. Some risks overlap with other software systems, such as exploitable authentication or memory management vulnerabilities. This RFI, however, focuses on distinct risks that arise when combining AI model outputs with the functionality of software systems. This includes risks from models interacting with adversarial data (such as in indirect prompt injection), risks from the use of insecure models (such as models that have been subject to data poisoning), and risks that models may take actions that harm security even in the absence of adversarial inputs (such as models that exhibit specification gaming or otherwise pursue misaligned objectives). These security challenges not only hinder adoption today but may also pose risks for public safety and national security as AI agent systems become more widely deployed.

The RFI poses questions on topics including:

  • Unique security threats affecting AI agent systems, and how these threats may change over time.
  • Methods for improving the security of AI agent systems in development and deployment.
  • Promise of and possible gaps in existing cybersecurity approaches when applied to AI agent systems.
  • Methods for measuring the security of AI agent systems and approaches to anticipating risks during development.
  • Interventions in deployment environments to address security risks affecting AI agent systems, including methods to constrain and monitor the extent of agent access in the deployment environment.

Input from AI agent deployers, developers, and computer security researchers, among others, will inform future work on voluntary guidelines and best practices related to AI agent security. It will also contribute to CAISI’s ongoing research and evaluations of agent security. Respondents are encouraged to provide concrete examples, best practices, case studies, and actionable recommendations based on their experience with AI agent systems. The full RFI can be found here.

The comment period closes on March 9, 2026, at 11:59 PM Eastern Time. Comments can be submitted online at www.regulations.gov, under docket no. NIST-2025-0035.

Released January 12, 2026