Thank you to everyone who participated in the Cyber AI Profile Workshop NIST hosted this past April! This work is intended to support the cybersecurity and AI communities, and the input you provided during the workshop is critical. We are working to publish a Workshop Summary that captures themes and highlights from the event. In the interim, we would like to share a preview of what we heard.
Background on the Cyber AI Profile Workshop (watch the workshop introduction video)
As NIST began exploring the idea of a Cyber AI Profile and writing the Cybersecurity and AI Workshop Concept Paper leading up to this workshop, stakeholders told us that several cybersecurity topics are top of mind as businesses adopt AI. The Cyber AI Profile aims to offer practical guidance on those topics by applying the NIST Cybersecurity Framework to three Focus Areas: securing AI system components, conducting AI-enabled cyber defense, and thwarting AI-enabled cyber-attacks. The Cyber AI Profile will accelerate the development and adoption of AI by helping organizations understand the landscape, prioritize AI-related actions and investments, and implement cybersecurity risk management practices.
The aim of the April workshop was to gather community feedback on the concept paper to inform the development of the Cyber AI Profile. The public feedback received on the concept paper shaped the agenda and workshop discussions. During the workshop, NIST staff, NCCoE staff, and participants explored the scope and practical applications of the Cyber AI Profile, including its three Focus Areas: securing AI system components, conducting AI-enabled cyber defense, and thwarting AI-enabled cyber-attacks.
What We Heard During the Workshop
The workshop consisted of a presentation, fireside chat, and two panel discussions that were open to all in-person and livestream audiences, followed by three rounds of in-person breakout sessions.
During workshop discussions, stakeholders shared that securing AI systems is important, as is using AI to enhance and adapt cyber defenses as adversaries increasingly use AI in their attacks. Participants generally view the Cyber AI Profile as a useful tool and supported using the Cybersecurity Framework as its starting point, noting that key characteristics should include scalability for organizations of different sizes and risk tolerances, as well as reduced redundancy across the evolving landscape of frameworks, regulations, standards, and guidance.
Below are key themes we heard from participants during the workshop:
- Enterprise Risk Management and Governance: Participants generally agreed that AI risk is organizational risk. As such, it is critical to integrate AI risk management into existing enterprise risk management practices and governance structures.
- Multidisciplinary Collaboration and Education: AI use introduces considerations that span multiple aspects of organizational operations, including legal, technical, procurement/acquisitions, and governance teams. Collaboration across these areas was highlighted as essential for addressing AI-related cybersecurity risks. Participants called for integrating perspectives through multidisciplinary teams to ensure a comprehensive enterprise view. They also discussed the need for education and cross-team collaboration to bridge the knowledge gaps between AI and cybersecurity experts.
- Dual-Use Nature of AI: AI's dual-use nature, both as a tool for enhancing defenses and as a means for adversaries to amplify attacks, presents opportunities and risks alike. Cybersecurity considerations need to address these dual uses. AI can enhance cybersecurity tools, such as anomaly detection and incident response, and it also enables adversaries to scale and automate attacks, including phishing, data poisoning, and model inversion. Proactive defense measures, such as automated red teaming and zero-trust principles, were identified as essential.
- Effectiveness in Cyber-Defense: AI can improve efficiency in cybersecurity, for example through faster data analysis and reduced reliance on third-party services. However, participants stressed the importance of measurable benchmarks to assess AI performance and to mitigate challenges like false positives and false negatives, ensuring that AI-driven cyber-defense capabilities perform as expected (an illustrative benchmark sketch appears after this list).
- Supply Chain Security: The workshop underscored the importance of securing the AI supply chain, including the need to understand the origins of AI components (e.g., microservices, containers, libraries) and data, the new vulnerabilities they may introduce, and their potential impacts on cybersecurity. Discussions also covered open-source models, the training and use of third-party models, the impacts of model changes over time, and third-party dependencies, as well as the need for trust and provenance.
- Transparency: Participants called for greater transparency in AI systems used to support cybersecurity, including clear documentation of data provenance, model behavior, and decision-making processes. Concerns about data distributions leading to incorrect results were identified as an area requiring more focus to ensure trustworthiness in cybersecurity implementations.
- Human-in-the-Loop: Participants stressed the importance of human-in-the-loop processes and training to ensure effective implementation and oversight of AI systems used for cybersecurity. Discussions highlighted challenges in understanding AI behavior, including the variability of responses and the difficulty in interpreting AI-driven decisions. Bridging knowledge gaps between AI researchers and cybersecurity professionals was identified as critical for enabling the two communities to work together to build and maintain trustworthy AI systems.
- Framework Integration: Participants emphasized the need for a shared understanding and common taxonomy between the AI and cybersecurity communities and noted the importance of integrating AI-specific risks into existing frameworks, such as the NIST Cybersecurity Framework (CSF), NIST AI Risk Management Framework (AI RMF), NIST Privacy Framework, NIST Risk Management Framework (RMF), and Security and Privacy Controls for Information Systems and Organizations (NIST SP 800-53). Suggestions included mappings to international standards and regulations, such as the European Union's General Data Protection Regulation (GDPR) and the anticipated ISO/IEC 27090, and leveraging other resources like MITRE ATLAS™ for threat modeling. The need for sector-specific Community Profiles tailored to industries like healthcare, finance, and energy was also discussed.
- Implementation Guidance and Informative References: Multiple discussions stressed the need for guidance on implementing controls for AI systems and informative references to help organizations understand how to manage AI-related risks.
See this NIST blog post about a new initiative aligned to this area: Cybersecurity and AI: Integrating and Building on Existing NIST Guidelines.
- Data Governance: Data governance was identified as a critical topic, particularly for AI systems reliant on external or third-party data sources. Participants called for a separate Focus Area to address sensitive data and its use in AI systems, including privacy considerations.
See this related NIST effort: Data Governance and Management Profile.
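To illustrate the benchmarking point raised under Effectiveness in Cyber-Defense, below is a minimal sketch of how an organization might measure false positives and false negatives for an AI-assisted detection capability. This is not from the workshop or from NIST guidance; the function name and the sample data are hypothetical.

```python
# Illustrative only: benchmark an AI-assisted detector against labeled ground truth.

def detection_metrics(predictions, labels):
    """Compute basic benchmark metrics from parallel lists of booleans."""
    tp = sum(p and l for p, l in zip(predictions, labels))          # true positives
    fp = sum(p and not l for p, l in zip(predictions, labels))      # false positives
    fn = sum(not p and l for p, l in zip(predictions, labels))      # false negatives
    tn = sum(not p and not l for p, l in zip(predictions, labels))  # true negatives
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,  # 1 - false-negative rate
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

# Hypothetical evaluation run: alerts raised by the model vs. analyst-confirmed incidents.
model_alerts = [True, True, False, True, False, False]
ground_truth = [True, False, False, True, True, False]
print(detection_metrics(model_alerts, ground_truth))
```

Tracking metrics like these over time against a curated evaluation set gives a concrete basis for judging whether an AI-driven defense is performing as expected.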
Additionally, there was significant emphasis on the need to supplement traditional cybersecurity measures with measures tailored for AI. Examples of tailored approaches discussed during the workshop include:
- Cryptographic signing for models (an illustrative sketch appears after this list).
- Certification systems.
- Use of AI Software Bill of Materials (AI SBOM) to enhance transparency and accountability in AI components.
- Securing feedback loops, where model outputs are reused for further training or decision making, to prevent amplification of errors or adversarial manipulations.
- Robust governance strategies to address risks such as data poisoning and model manipulation.
- Adaptive identity management and access control strategies and policies to address the growing and complex role of non-human identities, such as AI agents and applications, in organizational environments.
- Incident response strategies and plans that distinguish attacks against AI models from traditional cybersecurity incidents and facilitate appropriate responses to attacks on AI systems (e.g., reverting to last known-good versions of models after model poisoning).
- Data security throughout its lifecycle in AI systems (e.g., model inputs and outputs).
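As a concrete illustration of the first item above, cryptographic signing for models, here is a minimal sketch assuming an Ed25519 key pair held by the model publisher. It uses the open-source Python cryptography package; the model contents and key handling shown are hypothetical placeholders, not a prescribed implementation.

```python
# Illustrative only: sign a model artifact so consumers can verify its
# integrity and origin before loading it. Requires: pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: hash the serialized model artifact and sign the digest.
private_key = Ed25519PrivateKey.generate()  # in practice, a managed signing key
public_key = private_key.public_key()

model_bytes = b"...serialized model weights..."  # hypothetical stand-in for a model file
digest = hashlib.sha256(model_bytes).digest()
signature = private_key.sign(digest)             # distributed alongside the model

# Consumer side: recompute the digest and verify the signature before loading.
try:
    public_key.verify(signature, hashlib.sha256(model_bytes).digest())
    print("Signature valid: model integrity and origin verified.")
except InvalidSignature:
    print("Signature check failed: do not load this model.")
```

In practice, the signature and public key would be distributed through a trusted channel alongside the model, and a failed verification could trigger the incident response measures noted above, such as reverting to a last known-good model version.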
These takeaways provide a foundation for future community engagement.
Next Steps
To continue this open and ongoing dialogue, NIST is running a series of virtual working sessions through the remainder of the summer and into early fall to further explore each of the three Focus Areas. The first virtual working session will be held on August 5, 2025. We hope to see you there!
We have also updated the Cyber AI Profile Roadmap; the latest version appears below. The COI Working Sessions outlined in yellow represent the virtual working sessions, and the information collected from these discussions will inform the content developed for the Cyber AI Profile Preliminary Draft.
[Image: Cyber AI Profile Roadmap. Credit: NIST]
We look forward to hearing from you during the upcoming COI working sessions!
As always, you are welcome to email us at CyberAIProfile [at] nist.gov. And if you have not already done so, please consider joining our COI.