
Search Publications by: Yooyoung Lee (Fed)

Displaying 1 - 25 of 77

2025 NIST GenAI (Pilot) Evaluation Plan for Image Discriminators

March 14, 2025
Author(s)
George Awad, Hariharan Iyer, Seungmin Seo, Peter Fontana, Yooyoung Lee
In this NIST Generative AI (GenAI) program, we invite and encourage participating teams from academia, industry, and other research labs to support research in Generative AI. GenAI is an evaluation series that provides a platform for testing and evaluation

2025 NIST GenAI (Pilot) Evaluation Plan for Image Generators

March 14, 2025
Author(s)
George Awad, Hariharan Iyer, Seungmin Seo, Peter Fontana, Yooyoung Lee
In this NIST Generative AI (GenAI) program, we invite and encourage participating teams from academia, industry, and other research labs to support research in Generative AI. GenAI is an evaluation series that provides a platform for testing and evaluation

2024 NIST Generative AI (GenAI): Data Creation Specification for Text-to-Text (T2T) Generators

April 1, 2024
Author(s)
Yooyoung Lee, George Awad, Asad Butt, Lukas Diduch, Kay Peterson, Seungmin Seo, Ian Soboroff, Hariharan Iyer
Generator (G) teams will be tested on their system's ability to generate content that is indistinguishable from human-generated content. For the pilot study, the evaluation will help determine strengths and weaknesses in their approaches including insights

2024 NIST Generative AI (GenAI): Evaluation Plan for Text-to-Text (T2T) Discriminators

April 1, 2024
Author(s)
Yooyoung Lee, George Awad, Asad Butt, Lukas Diduch, Kay Peterson, Seungmin Seo, Ian Soboroff, Hariharan Iyer
Generator (G) teams will be tested on their system's ability to generate content that is indistinguishable from human-generated content. For the pilot study, the evaluation will help determine strengths and weaknesses in their approaches including insights

2022 OpenFAD Evaluation Plan (Open Fine-grained Activity Detection)

October 2, 2023
Author(s)
Yooyoung Lee, Jonathan Fiscus, Lukas Diduch, Jeffery Byrne
This document describes an evaluation of the 2022 Open Fine-grained Activity Detection (OpenFAD) challenge. The evaluation plan covers resources, task definitions, task conditions, file formats for system inputs and outputs, evaluation metrics, scoring

The 2022 NIST Language Recognition Evaluation

February 28, 2023
Author(s)
Yooyoung Lee, Craig Greenberg, Asad Butt, Eliot Godard, Elliot Singer, Trang Nguyen, Lisa Mason, Douglas Reynolds
In 2022, the U.S. National Institute of Standards and Technology (NIST) conducted a Language Recognition Evaluation (LRE), which was the latest in an ongoing series of language detection evaluations administered by NIST since 1996. The LREs measure how

OpenMFC 2022 Evaluation Program

January 3, 2023
Author(s)
Haiying Guan, Baptiste Chocot, Ilia Ghorbanian Bajgiran, Lukas Diduch, Yooyoung Lee, Christopher Tu

NIST 2022 Language Recognition Evaluation Plan

August 31, 2022
Author(s)
Yooyoung Lee, Craig Greenberg, Lisa Mason, Elliot Singer
The 2022 NIST language recognition evaluation (LRE22) is the 9th cycle in an on-going language recognition evaluation series that began in 1996. The objectives of the evaluation series are (1) to advance technologies in language recognition with innovative

Open Media Forensics Challenge 2022 Evaluation Plan

March 3, 2022
Author(s)
Haiying Guan, Yooyoung Lee, Lukas Diduch
This document describes the system evaluation tasks supported by the Open Media Forensics Challenge (OpenMFC) 2022. The evaluation plan covers resources, task definitions, task conditions, file formats for system inputs and outputs, evaluation metrics

Open Media Forensics Challenge (OpenMFC) 2020-2021: Past, Present, and Future

September 29, 2021
Author(s)
Haiying Guan, Yooyoung Lee, Lukas Diduch, Jesse Zhang, Ilia Ghorbanian Bajgiran, Timothee Kheyrkhah, Peter Fontana, Jonathan G. Fiscus
This document describes the online leaderboard public evaluation program, Open Media Forensics Challenge (OpenMFC) 2021-2022. In the report, first, the introduction, objectives, challenges, contributions, and achievements of the evaluation program are

User Guide for NIST Media Forensic Challenge (MFC) Datasets

July 6, 2021
Author(s)
Haiying Guan, Andrew Delgado, Yooyoung Lee, Amy Yates, Daniel Zhou, Timothee N. Kheyrkhah, Jonathan G. Fiscus
NIST released a set of Media Forensic Challenge (MFC) datasets developed in the DARPA MediFor (Media Forensics) project to the public over the past 5 years. More than 300 individuals and 150 organizations from 26 countries and regions worldwide use our datasets

2018 Multimedia Forensics Challenges (MFC18): Summary and Results

November 12, 2020
Author(s)
Yooyoung Lee, Amy Yates, Haiying Guan, Jonathan G. Fiscus, Daniel Zhou
Interest in forensic techniques capable of detecting many different manipulation types has been growing, and system developments with machine learning technology have been evolving in recent years. There has been, however, a lack of diverse data