
Publications by Yooyoung Lee (Fed)

Displaying 1 - 25 of 29

Open Media Forensics Challenge (OpenMFC) 2020-2021: Past, Present, and Future

September 29, 2021
Author(s)
Haiying Guan, Yooyoung Lee, Lukas Diduch, Jesse Zhang, Ilia Ghorbanian Bajgiran, Timothee Kheyrkhah, Peter Fontana, Jonathan G. Fiscus
This document describes the Open Media Forensics Challenge (OpenMFC) 2021-2022, an online leaderboard public evaluation program. The report first presents the introduction, objectives, challenges, contributions, and achievements of the evaluation program

User Guide for NIST Media Forensic Challenge (MFC) Datasets

July 6, 2021
Author(s)
Haiying Guan, Andrew Delgado, Yooyoung Lee, Amy Yates, Daniel Zhou, Timothee N. Kheyrkhah, Jonathan G. Fiscus
Over the past five years, NIST has released to the public a set of Media Forensic Challenge (MFC) datasets developed in the DARPA MediFor (Media Forensics) project. More than 300 individuals and 150 organizations from 26 countries and regions worldwide use our datasets

2018 Multimedia Forensics Challenges (MFC18): Summary and Results

November 12, 2020
Author(s)
Yooyoung Lee, Amy Yates, Haiying Guan, Jonathan G. Fiscus, Daniel Zhou
Interest in forensic techniques capable of detecting many different manipulation types has been growing, and systems built on machine learning technology have been evolving in recent years. There has been, however, a lack of diverse data

Media Forensics Challenge Image Provenance Evaluation and State-of-the-Art Analysis on Large-Scale Benchmark Datasets

October 26, 2020
Author(s)
Xiongnan Jin, Yooyoung Lee, Jonathan G. Fiscus, Haiying Guan, Amy Yates, Andrew Delgado, Daniel F. Zhou
With the development of storage, transmission, editing, and sharing tools, forged digital images are propagating rapidly. The need for image provenance analysis has never been more timely. Typical applications are content tracking, copyright enforcement

NIST Media Forensic Challenge (MFC) Evaluation 2020 - 4th Year DARPA MediFor PI meeting

July 15, 2020
Author(s)
Jonathan G. Fiscus, Haiying Guan, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah, Xiongnan Jin
These presentation slides summarize the NIST Media Forensic Challenge (MFC) evaluation and present the MFC20 evaluation reports at the DARPA MediFor PI meeting. The slides contain five parts: Overview, Image Tasks, Video Tasks, Camera ID Verification Tasks, Provenance

2018 MediFor Challenge

July 23, 2019
Author(s)
Jonathan G. Fiscus, Haiying Guan, Andrew Delgado, Timothee N. Kheyrkhah, Yooyoung Lee, Daniel F. Zhou, Amy Yates
Media forensics is the science and practice of determining the authenticity and establishing the integrity of audio and visual media. DARPA's Media Forensics (MediFor) program brings together world-class researchers to develop technologies for the

Manipulation Data Collection and Annotation Tool for Media Forensics

June 17, 2019
Author(s)
Eric Robertson, Haiying Guan, Mark Kozak, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah, Jeff Smith, Jonathan G. Fiscus
With the increasing diversity and complexity of media forensics techniques, the evaluation of state-of-the-art detectors is impeded by the lack of metadata and manipulation-history ground truth. This paper presents a novel image/video manipulation

MFC Datasets: Large-Scale Benchmark Datasets for Media Forensic Challenge Evaluation

January 11, 2019
Author(s)
Haiying Guan, Mark Kozak, Eric Robertson, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah, Jeff Smith, Jonathan G. Fiscus
We provide a benchmark for digital media forensic challenge evaluations. A series of datasets is used to assess progress and analyze in depth the performance of diverse systems on different media forensic tasks over the last two years. The benchmark data

MediFor Nimble Challenge Evaluation 2017

August 23, 2017
Author(s)
Jonathan G. Fiscus, Haiying Guan, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, David M. Joy, August L. Pereira
NIST presentation slides for DARPA MediFor Program One-Year PI Meeting

MediFor Nimble Challenge Evaluation

April 17, 2017
Author(s)
Jonathan G. Fiscus, Haiying Guan, Yooyoung Lee, Amy Yates, Andrew Delgado, Daniel F. Zhou, Timothee N. Kheyrkhah

Generalizing Face Quality and Factor Measures to Video

September 24, 2014
Author(s)
Yooyoung Lee, P. Jonathon Phillips, James Filliben, J. R. Beveridge, Hao H. Zhang
Methods for assessing the impact of factors and image-quality metrics for still face images are well-understood. The extension of these factors and quality measures to faces in video has not, however, been explored. We present a specific methodology for

Identifying Face Quality and Factor Measures for Video

May 21, 2014
Author(s)
Yooyoung Lee, P. Jonathon Phillips, James J. Filliben, J. R. Beveridge, Hao Zhang
This paper identifies important factors for face recognition algorithm performance in video. The goal of this study is to understand key factors that affect algorithm performance and to characterize the algorithm performance. We evaluate four factor

VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies

April 22, 2013
Author(s)
Yooyoung Lee, Ross J. Micheals, James J. Filliben, P J. Phillips
The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illumination, environment, and subject characteristics (e.g., distance, movement, face/body

Ocular and Iris Recognition Baseline Algorithm

November 7, 2011
Author(s)
Yooyoung Lee, Ross J. Micheals, James J. Filliben, P J. Phillips, Hassan A. Sahibzada
Due to its distinctiveness, the human eye is a popular biometric feature used to identify a person with high accuracy. The Grand Challenge in biometrics is to have an effective algorithm for subject verification or identification under a broad range of

Robust Iris Recognition Baseline for the Grand Challenge

May 17, 2011
Author(s)
Yooyoung Lee, Ross J. Micheals, James J. Filliben, P J. Phillips
Due to its distinctiveness, the human iris is a popular biometric feature used to identify a person with high accuracy. The “Grand Challenge” in iris recognition is to have an effective algorithm for subject verification or identification under a broad

Robust Iris Recognition Baseline for the Occular Challenge

January 20, 2011
Author(s)
Yooyoung Lee, Ross J. Micheals, James J. Filliben, P J. Phillips
Due to its distinctiveness, the human iris is a popular biometric feature used to identify a person with high accuracy. The Grand Challenge in iris recognition is to have an effective algorithm for subject verification or identification under a broad range