Mitigating risks from synthetic content is a priority of the U.S. AI Safety Institute's work to advance the science of AI safety. It is also a focus of the first convening of the International Network of AI Safety Institutes on November 20-21, 2024.
Per the 2023 White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, synthetic content refers to “information, such as images, videos, audio clips, and text, that has been significantly modified or generated by algorithms, including by AI.” Though synthetic content is not inherently harmful and has many beneficial and benign use cases, its widespread production and distribution risks damaging information integrity and undermining trust. In addition, synthetic content can and does cause harm to individuals and populations, for example through AI-generated child sexual abuse material and through its use to facilitate fraud, impersonation, and copyright or intellectual property violations. The dangers of synthetic content stretch across modalities: text, audio, image, and video.
The AI technical and academic community has developed a variety of techniques for addressing these risks, including approaches to digital watermarking, content authentication and provenance, detection of synthetic material, digital personhood authentication, and safeguards to prevent models from generating harmful outputs. Despite these techniques, challenges remain: improving the robustness of watermarks, closing security gaps in the content authentication ecosystem, and understanding how the adoption of content authentication mechanisms may affect information ecosystems internationally. There is, furthermore, a pressing need for social science research on the systemic impact of the use and misuse of synthetic content, e.g., the effect of synthetic content on information integrity and trust; the effect of using generative AI in domains such as education, counseling, and caregiving; and how to address the use of generative AI for fraud and impersonation. The U.S. AI Safety Institute calls upon all stakeholders, including public and private funders as well as academic, civil society, and industry researchers, to support and pursue urgent, actionable research on mitigating the risks of synthetic content in order to improve AI safety and enable innovation that benefits all.
As part of the AI Safety Institute's efforts to support important and actionable research on mitigating risks from synthetic content, we are inviting broad input on identifying salient risks, assessing systemic impacts, supporting both novel approaches and improvements to existing techniques, and finding opportunities to direct human and financial capital toward advancing AI safety in the domain of synthetic content. We have particular, though not exclusive, interest in research related to the risks outlined in NIST AI 100-4, Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency.
Please reach out to usaisi@nist.gov to connect with a member of our team.