2018 Multimedia Forensics Challenges (MFC18): Summary and Results
Yooyoung Lee, Amy Yates, Haiying Guan, Jonathan G. Fiscus, Daniel Zhou
Interest in forensic techniques capable of detecting many different manipulation types has been growing, and system development using machine learning technology has evolved in recent years. There has been, however, a lack of diverse data collection and evaluation studies for advancing multimedia forensics technologies. For the forensics research community, a well-defined evaluation is necessary to rapidly examine systems' accuracy and robustness over diverse datasets collected under various environments. In this paper, we propose an evaluation protocol and associated performance metrics and apply them to the 2018 Multimedia Forensics Challenge (MFC18). The MFC18 evaluation consists of five tasks and two challenges. A large number of datasets were created to support each task and to conduct experiments for comparative analysis using a structured evaluation protocol. In summary, a total of 25 teams participated in the MFC18 evaluation, and we provide a ranked list of systems based on their performance on the five MFC18 tasks and the two challenges.
Lee, Y., Yates, A., Guan, H., Fiscus, J. and Zhou, D., 2018 Multimedia Forensics Challenges (MFC18): Summary and Results, NIST Interagency/Internal Report (NISTIR), National Institute of Standards and Technology, Gaithersburg, MD, [online], https://doi.org/10.6028/NIST.IR.8324, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=931153
(Accessed January 22, 2022)