Securing artificial intelligence (AI) systems can seem daunting given their complexity, unique risks, and rapidly evolving capabilities. However, organizations don’t have to start from scratch. By building on established cybersecurity frameworks and standards — particularly the NIST Risk Management Framework (RMF) and SP 800-53 — organizations can make securing AI systems more manageable and scalable, and can align that work with their broader existing cybersecurity, privacy, and C-SCRM risk management practices.
This webinar presents a new project to develop NIST security control overlays for AI systems. These overlays adapt, tailor, and supplement the SP 800-53 controls to address AI-specific concerns — such as model integrity, data provenance, adversarial robustness, and transparency — without reinventing the wheel.
On September 25, 2025, from 1:00 – 2:00 PM ET, join the project leads for this session to learn more about the project.
Whether you work in cybersecurity, AI development, risk management, or compliance, this session will show how familiar tools and standards can make AI security both possible and practical.
Space for this webinar is limited, and registration will close once capacity is reached. This webinar will be recorded and posted following the live event. By registering for or attending this event, you acknowledge and consent to being recorded. Questions should be directed to overlays-securing-ai [at] list.nist.gov.