The Media Forensics Challenge 2018 (MFC2018) Evaluation is the second annual evaluation to support research and help advance the state of the art for image and video forensics technologies – technologies that determine the region and type of manipulations in imagery (image/video data) and the phylogenic process that modified the imagery. The MFC2018 evaluation is currently being designed, building off of experience from the NC2017 Evaluation, and we expect to continue support for the NC2017 tasks.
Prospective MFC participants can subscribe to the MFC mailing list for announcements by sending a request to the contact below and can take part in the evaluation by completing the registration and license agreements below.
Many changes are planned for this evaluation cycle: additional development resources, a scoring server with leaderboard-style evaluation, manipulation-operation-specific system evaluations to enable manipulation-type-specific research, and detailed, automatic diagnostics of system successes and failures.
Dates | Development Resources |
---|---|
Now | |
August 2017 | |
September 13, 2017 | |
September 29, 2017 | |
December 1, 2017 | |
December 8, 2017 | |
December 15, 2017 | |
January 19, 2018 | |
March 21, 2018 | |
March 28, 2018 | |
April 4, 2018 | |
April 11, 2018 | |
April 18, 2018 | |
May 2, 2018 | |
May 4, 2018 | |
May 9, 2018 | |
May 16, 2018 | |
The MFC18 Evaluation Plan details the structure of the evaluation tasks, data, and metrics.
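As a rough, unofficial illustration of the kind of detection scoring the plan describes, the sketch below computes area under the ROC curve (AUC) for a hypothetical manipulation-detection system output. The pipe-delimited format and the column names (ProbeFileID, ConfidenceScore, IsTarget) are assumptions for illustration only; the evaluation plan and the NIST-provided scoring tools are authoritative.

```python
# Illustrative sketch only -- not the NIST scoring tool.
# Assumed (hypothetical) inputs: pipe-delimited CSVs with columns
# ProbeFileID, ConfidenceScore (system output) and ProbeFileID, IsTarget
# (reference), where IsTarget is "Y" for manipulated probes.
import csv

def load_scores(system_csv, reference_csv):
    """Join system confidence scores with reference labels by probe ID."""
    labels = {}
    with open(reference_csv, newline="") as f:
        for row in csv.DictReader(f, delimiter="|"):
            labels[row["ProbeFileID"]] = row["IsTarget"] == "Y"
    pairs = []
    with open(system_csv, newline="") as f:
        for row in csv.DictReader(f, delimiter="|"):
            pid = row["ProbeFileID"]
            if pid in labels:
                pairs.append((float(row["ConfidenceScore"]), labels[pid]))
    return pairs

def auc(pairs):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a target probe scores higher than a non-target probe."""
    pos = [s for s, t in pairs if t]
    neg = [s for s, t in pairs if not t]
    if not pos or not neg:
        raise ValueError("need both target and non-target probes")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

if __name__ == "__main__":
    print("AUC:", auc(load_scores("system_output.csv", "reference.csv")))
```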
By signing up for the evaluation, you will receive a wealth of data resources to conduct your media forensics research. The data are designated as: (1) development resources, accessible at signup; (2) past-year evaluation resources, provided after completing a dry-run evaluation; and (3) the MFC’18 evaluation resources, provided during the formal evaluation in April 2018. Here is a quick summary of the resources:
Data Set Type | Data Set Name | Number of Forensic Probes (true manipulations and non-manipulations) | World Data Set Size | Reference Annotations | Supported Tasks |
---|---|---|---|---|---|
Development | NC2016 – Both Nimble Science and Nimble Web | 624 | | Full | MDL |
Development | NC’17 Development Image Data | 3,500 | 100,000 | Full | All |
Development | NC’17 Development Video Data | 213 | | Full | All |
Development | MFC’18 Development Image and Video Data | TBD | TBD | Full | TBD |
Past Evaluations | NC’17 Evaluation Images | 10,000 | 1,000,000 | Full for 1/3 subset | All |
Past Evaluations | NC’17 Evaluation Videos | 1,000 | | Full for 1/3 subset | All |
MFC’18 Evaluation | MFC’18 Evaluation Images | 50,000 | 5,000,000 | | |
MFC’18 Evaluation | MFC’18 Evaluation Videos | 5,000 | | | |
NIST-provided tools are described in the Evaluation Infrastructure Setup Instructions.
Contact: mfc_poc [at] nist.gov