
A Call for Built-In Biosecurity Safeguards for Generative AI Tools

Published

Author(s)

Mengdi Wang, Zaixi Zhang, Amrit Singh Bedi, Alvaro Velasquez, Stephanie Guerra, Sheng Lin-Gibson, Le Cong, Megan Blewett, Yuanhao Qu, Jian Ma, Eric Xing, George Church, Souradip Chakraborty

Abstract

The rapid adoption of generative AI (GenAI) in biotechnology offers immense potential but also raises serious safety concerns. AI models for protein engineering, genome editing, and molecular synthesis can be misused to enhance viral virulence, design toxins, or modify human embryos, while ethical and policy discussions lag behind technological advances. This Correspondence calls for proactive, built-in, AI-native safeguards within GenAI tools. With further research and development, emerging AI safety technologies, including watermarking, alignment, anti-jailbreak methods, and unlearning, can complement governance policies and provide scalable biosecurity solutions. We also stress the global community's role in researching, developing, testing, and implementing these measures to ensure the responsible deployment of GenAI in biotechnology.
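As one concrete illustration of the kind of built-in safeguard the abstract names, the sketch below shows a toy green-list watermark for generated amino-acid sequences, in the style of logit-biasing watermarks used for language models. It is a minimal sketch only, not the method proposed in the Correspondence: the alphabet partitioning, GREEN_FRACTION, BIAS, and the secret-key handling are all assumptions made for this example, and a real generator would bias the logits of a trained sequence model rather than a uniform distribution.

# Illustrative sketch only: a toy "green-list" watermark for generated amino-acid
# sequences. Parameters and key handling are assumptions, not the authors' method.
import hashlib
import math
import random

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
GREEN_FRACTION = 0.5   # fraction of residues marked "green" at each step (assumed)
BIAS = 2.0             # log-probability boost given to green residues (assumed)

def green_set(prev_residue: str, key: str) -> set:
    """Pseudorandomly partition the alphabet from the previous residue and a secret key."""
    seed = int(hashlib.sha256((key + prev_residue).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = AMINO_ACIDS[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * GREEN_FRACTION)])

def generate_watermarked(length: int, key: str) -> str:
    """Sample a sequence while softly biasing each step toward its green set."""
    seq = ["M"]  # methionine start, purely for illustration
    for _ in range(length - 1):
        greens = green_set(seq[-1], key)
        # Uniform base weights, boosted for green residues; a real model would supply logits.
        weights = [math.exp(BIAS) if aa in greens else 1.0 for aa in AMINO_ACIDS]
        seq.append(random.choices(AMINO_ACIDS, weights=weights, k=1)[0])
    return "".join(seq)

def detect(seq: str, key: str) -> float:
    """Return a z-score for how often residues fall in their step's green set."""
    hits = sum(seq[i + 1] in green_set(seq[i], key) for i in range(len(seq) - 1))
    n = len(seq) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std

if __name__ == "__main__":
    s = generate_watermarked(200, key="lab-secret")
    print(f"z-score (watermarked): {detect(s, 'lab-secret'):.1f}")  # large positive
    print(f"z-score (wrong key):   {detect(s, 'other-key'):.1f}")   # near zero

In this toy setup, only a holder of the secret key can compute the green sets, so a high z-score flags sequences as machine-generated while the wrong key yields chance-level scores; production safeguards would need to address robustness to edits and synonymous substitutions.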

Keywords

generative AI, biosecurity

Citation

Wang, M., Zhang, Z., Bedi, A., Velasquez, A., Guerra, S., Lin-Gibson, S., Cong, L., Blewett, M., Qu, Y., Ma, J., Xing, E., Church, G. and Chakraborty, S. (2025), A Call for Built-In Biosecurity Safeguards for Generative AI Tools, Nature Biotechnology (Accessed December 10, 2025)

Issues

If you have any questions about this publication or are having problems accessing it, please contact [email protected].

Created December 9, 2025