2013 TRECVID Multimedia Event Detection Track
The Multimedia Event Detection (MED) evaluation track is part of the TRECVID Evaluation. The 2013 evaluation will be the third MED evaluation, preceded by the 2012 and 2011 evaluations and the 2010 Pilot evaluation.
The goal of MED is to assemble core detection technologies into a system that can search multimedia recordings for user-defined events based on pre-computed metadata. The metadata stores developed by the systems are expected to be sufficiently general to permit re-use for subsequent user-defined events.
A user searching for events in multimedia material may be interested in a wide variety of potential events. Since it is intractable to build special-purpose detectors for each event a priori, technology is needed that can take as input a human-generated definition of the event that a system will use to search a collection of multimedia clips. The MED evaluation series defines events via an event kit, which consists of an event name, definition, explication (textual exposition of the terms and concepts), evidential descriptions, and illustrative video exemplars.
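The event-kit components named above can be pictured as a simple record. The sketch below is purely illustrative; the field names and example values are assumptions, not an official MED schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EventKit:
    """Illustrative container for the event-kit components.
    Field names are assumptions, not an official MED format."""
    name: str                    # event name
    definition: str              # concise statement of what the event is
    explication: str             # textual exposition of terms and concepts
    evidential_description: str  # audio/visual evidence a system might look for
    exemplars: List[str] = field(default_factory=list)  # illustrative video clips

# Hypothetical example (not taken from an actual 2013 event kit):
kit = EventKit(
    name="Birthday party",
    definition="A gathering to celebrate a birthday.",
    explication="Typically involves cake, candles, gifts, and singing.",
    evidential_description="Audio: singing, cheering; scene: indoors or outdoors.",
    exemplars=["clip_000123.mp4"],
)
```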
The major changes for the 2013 evaluation include:
MED Task Definitions
The 2013 evaluation will support two evaluation tasks:
The Pre-Specified event task is identical to the MED12 task. Participants must build a system for at least one of the test events in order to participate in the evaluation and TRECVID Conference.
NIST maintains an email discussion list to disseminate information. Send requests to join the list to med_poc at nist dot gov.
Evaluation Plan and Data Use Rules
MED system performance will be evaluated as specified in the evaluation plan. The evaluation plan contains the rules, protocols, metric definitions, scoring instructions, and submission instructions.
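The evaluation plan is the authoritative source for the official metrics and scoring protocol. As a rough illustration of the kind of detection scoring involved, the generic sketch below computes missed-detection and false-alarm rates at a fixed decision threshold; it is not the official scorer, and the function name and interface are assumptions:

```python
def miss_and_fa_rates(scores, labels, threshold):
    """Generic detection-error sketch (not the official MED scorer).

    scores: per-clip system confidence values
    labels: 1 if the event is present in the clip, 0 otherwise
    threshold: decide "event present" when score >= threshold
    Returns (P_miss, P_fa).
    """
    misses = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    false_alarms = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    return misses / n_pos, false_alarms / n_neg
```

For example, with scores [0.9, 0.2, 0.8, 0.4], labels [1, 1, 0, 0], and a threshold of 0.5, one positive clip is missed and one negative clip is falsely detected, giving rates of 0.5 each.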
A collection of Internet multimedia (i.e., clips containing both audio and video streams) will be provided to registered MED participants. The data, which was collected by the Linguistic Data Consortium, consists of publicly available, user-generated content posted to various Internet video hosting sites.
MED '13 participants will receive the following training resources:
The evaluation plan and license information will specify usage rules of the data resources in full detail.
2013 Pre-Specified Event Kits
The twenty Pre-Specified events are drawn from the 2012 test events. The table below lists the event names.
In order to obtain the corpora, ALL TRECVID-registered sites (including 2012 participants) must complete an evaluation license with the LDC. Each site that requires access to the data, whether as part of a team or as a standalone researcher, must complete a license.
To complete the evaluation license follow these steps:
The designated data contact person for each site will receive instructions from the LDC about the specific procedures for obtaining the data packages when they are released.
Dry Run Evaluation
The dry run data resources are available from http://www-nlpir.nist.gov/projects/tv2013/med.data/. The username and password were provided during TRECVID registration. There are two relevant files:
In addition to the data files, F4DE_3.0.1 contains a scoring primer for MED '13 (DEVA/doc/TRECVid-MED13-ScoringPrimer.html in the release) that demonstrates how to score a system and prepare a submission file.
Evaluation scripts to support the MED evaluation are within the NIST Framework for Detection Evaluations (F4DE) Toolkit Version 3.0.1 or later found on the NIST MIG tools page.
The distribution also contains an earlier MED evaluation primer in DEVA/doc/TRECVid-MED11-ScoringPrimer.html.
NIST has prepared a MER Workstation. The current release version is MERAPP13-v3.0.
Consult the TRECVID Master schedule.