
NIST Secure Software Development Framework for Generative AI and for Dual Use Foundation Models Virtual Workshop

NIST is hosting a workshop on Wednesday, January 17, 2024, from 9:00 AM to 1:00 PM EST to bring together industry, academia, and government to discuss secure software development practices for AI models. Attendees will gain insight into major cybersecurity challenges specific to developing and using AI models, as well as recommended practices for addressing those challenges. Feedback from various communities will inform NIST’s creation of SSDF companion resources to support both AI model producers and the organizations that adopt and incorporate those AI models into their own software and services.

Background

Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023, tasked NIST with “developing a companion resource to the SSDF to incorporate secure development practices for generative AI and for dual-use foundation models.” NIST’s SSDF version 1.1 describes a set of fundamental, sound practices for general secure software development. The SSDF focuses on outcomes rather than on tools and techniques, so it can be used for any type of software development, including the development of AI models.

To provide software producers and acquirers with more information on secure development for AI models, NIST is considering the development of one or more SSDF companion resources on generative AI models and dual-use foundation models. These companion resources would be similar in concept and content to the Profiles for the NIST Cybersecurity Framework, Privacy Framework, and AI Risk Management Framework.

During the workshop, NIST is seeking feedback on several topics to help inform the development of future SSDF Profiles, including:

  1. What changes, if any, need to be made to SSDF version 1.1 to accommodate secure development practices for generative AI and dual-use foundation models?
  2. What AI-specific considerations should NIST capture in its companion resource?
  3. What else should be captured in the SSDF Profiles?
  4. Is there an alternative to an SSDF Profile that would be more effective at accomplishing the EO 14110 requirement, while also providing flexibility and technology neutrality for software producers?
  5. What secure development resources specific to AI models do you find most valuable?
  6. What is unique about developing code for generative AI and dual-use foundation models?

Questions about the workshop or NIST’s SSDF work? Contact us at ssdf@nist.gov.


Agenda

Times | Speakers | Session Name/Information

9:00 AM – 9:15 AM | Michael Ogata, NIST; Kevin Stine, NIST | Introduction and Overview

9:15 AM | Martin Stanley, NIST | Session 1 - Secure Software Development Challenges with Large Language Models (LLMs) and Generative AI Systems
This session will discuss major cybersecurity challenges in the development and use of LLMs, dual-use foundation models, and generative AI systems. Attendees will identify and consider the biggest challenges and the potential impacts of not adequately addressing them.

9:15 AM – 9:30 AM | Jonathan Spring, CISA | CISA Presentation for NIST Secure Software Development Workshop for Generative AI

9:30 AM – 9:45 AM | Dave Schulker, CERT | Using System Theoretic Process Analysis to Advance Safety in LLM-enabled Software Systems

9:45 AM – 10:00 AM | Henry Young, BSA | Cybersecurity for Generative AI: Leveraging Existing Tools and Identifying New Challenges

10:00 AM – 10:15 AM | Martin Stanley, NIST; Jonathan Spring, CISA; Dave Schulker, CERT; Henry Young, BSA | Q&A

10:15 AM | Apostol Vassilev, NIST | Session 2 - Secure Development of LLMs and Generative AI Systems
This session will explore recommended security practices for the development of LLMs, such as dual-use foundation models with billions of parameters. The focus will be on security practices that are specific to the LLM development lifecycle, rather than on practices generally used for all other types of software development. Attendees will share and gain a better understanding of the practices in use and the gaps that remain to be addressed by users of LLMs.

10:15 AM – 10:30 AM | Nick Hamilton, OpenAI | Securing Large Language Model Development and Deployment: Navigating the Complexities of LLM Secure Development Practices to Align with the NIST Secure Development Framework

10:30 AM – 10:45 AM | Mark Ryland, AWS | Secure Development of GenAI Systems: An AWS Perspective

10:45 AM – 11:00 AM | Mihai Maruseac, Google | Secure AI Development @ Google

11:00 AM – 11:15 AM | Apostol Vassilev, NIST; Nick Hamilton, OpenAI; Mark Ryland, AWS; Mihai Maruseac, Google | Q&A

11:15 AM – 11:30 AM | Michael Ogata, NIST | Break

11:30 AM | Harold Booth, NIST | Session 3 - Secure Use of LLMs and Generative AI Systems
This session will explore recommended security practices for reusing existing LLMs and generative AI systems as components of traditional software deployed within an organization. It will focus on security practices specific to LLMs and generative AI models as components integrated into other software, and on the particular security challenges they bring, rather than on practices generally used for any traditional software reuse. Attendees will discuss recommendations and considerations for enhancing their existing secure software development practices, as well as additional security controls they may need to employ.

11:30 AM – 11:45 AM | Karthi Natesan Ramamurthy, IBM | Foundation Models and their Use in Software Systems - Trust and Governance

11:45 AM – 12:00 PM | David Beveridge, HiddenLayer | Secure Use of LLMs and GEN AI Systems

12:00 PM – 12:15 PM | Vivek Sharma, Microsoft | NIST Secure Use of LLMs and Generative AI Systems

12:15 PM – 12:30 PM | Harold Booth, NIST; Karthi Natesan Ramamurthy, IBM; David Beveridge, HiddenLayer; Vivek Sharma, Microsoft | Q&A

12:30 PM – 12:45 PM | Michael Ogata, NIST | Closing and Next Steps

1:00 PM | | Adjourn


Created January 4, 2024, Updated February 1, 2024