The Multimedia Event Detection (MED) evaluation track is part of the TRECVID Evaluation. The 2012 evaluation will be the third in the series, preceded by the 2011 evaluation and the 2010 Pilot evaluation.
The goal of MED is to assemble core detection technologies into a system that can search multimedia recordings for user-defined events based on pre-computed metadata. The metadata stores developed by the systems are expected to be sufficiently general to permit re-use for subsequent user defined events.
A user searching for events in multimedia material may be interested in a wide variety of potential events. Since it is an intractable task to build special purpose detectors for each event a priori, technology is needed that can take as input a human-generated definition of the event that a system will use to search a collection of multimedia clips. The MED evaluation series will define events via an event kit which consists of an event name, definition, explication (textual exposition of the terms and concepts), evidential descriptions, and illustrative video exemplars.
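The event kit's components can be pictured as a simple data structure. This is an illustrative sketch only: the field names, example values, and clip identifiers below are assumptions, not the official kit schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EventKit:
    """Hypothetical container mirroring the event-kit parts named above."""
    name: str                   # event name, e.g. "Birthday party"
    definition: str             # concise statement of what the event is
    explication: str            # textual exposition of terms and concepts
    evidential_description: List[str] = field(default_factory=list)  # observable cues
    exemplar_clips: List[str] = field(default_factory=list)          # illustrative video IDs

# Example instance (contents are invented for illustration)
kit = EventKit(
    name="Birthday party",
    definition="An individual celebrates a birthday with other people.",
    explication="Typically involves a cake, candles, gifts, and singing.",
    evidential_description=["cake with candles", "gift wrapping", "group singing"],
    exemplar_clips=["CLIP000123", "CLIP000456"],
)
```

A system would consume such a kit as its only event-specific input, searching the pre-computed metadata store rather than re-processing the video collection.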
The major change for the 2012 evaluation is that it will support two evaluation tasks: the Pre-Specified event task and the Ad-Hoc event task.
The Pre-Specified event task is identical to the MED11 task. Participants must build a system for at least one of the test events in order to participate in the evaluation and TRECVID Conference.
NIST maintains an email discussion list to disseminate information. Send requests to join the list to med_poc at nist dot gov.
MED system performance will be evaluated as specified in the evaluation plan. The evaluation plan contains the rules, protocols, metric definitions, scoring instructions, and submission instructions. The latest final version is V02.
In addition, the MED '12 FAQ is provided to answer specific questions about the evaluation plan.
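MED scoring is based on detection error rates. As a hedged sketch of a normalized detection cost of the kind such evaluations use: the formula shape and the parameter values below are assumptions for illustration only; the evaluation plan (V02) is the authoritative definition of the MED12 metrics.

```python
def p_miss(misses: int, n_targets: int) -> float:
    """Fraction of target trials the system failed to detect."""
    return misses / n_targets

def p_fa(false_alarms: int, n_nontargets: int) -> float:
    """Fraction of non-target trials the system wrongly flagged."""
    return false_alarms / n_nontargets

def normalized_detection_cost(pmiss: float, pfa: float,
                              c_miss: float = 80.0,
                              c_fa: float = 1.0,
                              p_target: float = 0.001) -> float:
    # Expected cost weighted by the (assumed) target prior, normalized by
    # the cost of the best trivial system (always-yes or always-no).
    cost = c_miss * p_target * pmiss + c_fa * (1.0 - p_target) * pfa
    return cost / min(c_miss * p_target, c_fa * (1.0 - p_target))
```

With these illustrative parameters, a system with a 20% miss rate and a 5% false-alarm rate scores a normalized cost of about 0.82; a cost of 1.0 corresponds to the best trivial system.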
A collection of Internet multimedia (i.e., clips containing both audio and video streams) will be provided to registered MED participants. The data, collected by the Linguistic Data Consortium, consists of publicly available, user-generated content posted to various Internet video hosting sites.
MED12 participants will receive the following training resources:
The evaluation plan and license information will specify usage rules of the data resources in full detail.
Twenty events will be used in the 2012 Pre-Specified event task: 10 MED11 Testing events and 10 new events. The table below contains the event names.
| MED11 Test Event Names | MED12 New Event Names |
| --- | --- |
| Birthday party | Attempting a bike trick |
| Changing a vehicle tire | Cleaning an appliance |
| Flash mob gathering | Dog show |
| Getting a vehicle unstuck | Giving directions to a location |
| Grooming an animal | Marriage proposal |
| Making a sandwich | Renovating a home |
| Parade | Rock climbing |
| Parkour | Town hall meeting |
| Repairing an appliance | Winning a race without a vehicle |
| Working on a sewing project | Working on a metal crafts project |
Event names need to be interpreted in the full context of the event definitions that will be made available as part of the event kits.
In order to obtain the MED10 and MED11 corpora, TRECVID-registered sites must complete an evaluation license with the LDC. Each site that requires access to the data, whether as part of a team or as a standalone researcher, must complete a license.
To complete the evaluation license follow these steps:
The designated data contact person for each site will receive instructions from the LDC about the specific procedures for obtaining the data packages when they are released.
The dry run period for MED will run until July 30, 2012; dry run submissions will be accepted at any time during the period. The dry run is an opportunity for new developers to confirm that they can generate valid system output scorable with the NIST scoring tools, and it is required for any participant that has not completed a previous MED evaluation. As discussed in the Data Resources section, sites must complete the dry run and will not receive the Progress Set (the evaluation data for 2012) until they have done so.
Actual system performance is not of interest during the dry run, so developers may use any method to generate their system output, e.g., a random system or a system trained on the dry run data.
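A random system of the sort mentioned above can be produced with a few lines of code. This is a hedged sketch for the dry run only: the column layout and decision convention below are assumptions for illustration; the evaluation plan and the F4DE scoring primer define the actual required submission format.

```python
import csv
import random

def write_random_output(trial_ids, path, threshold=0.5, seed=0):
    """Write one random detection score and decision per trial (illustrative format)."""
    rng = random.Random(seed)  # fixed seed so the dry run output is reproducible
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["TrialID", "Score", "Decision"])  # assumed header
        for trial_id in trial_ids:
            score = rng.random()
            writer.writerow([trial_id, f"{score:.4f}",
                             "y" if score >= threshold else "n"])
```

For example, `write_random_output(["T0001", "T0002"], "randsys.csv")` writes a two-trial file (trial IDs and filename here are hypothetical) that can then be run through the scoring tools to check end-to-end validity.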
The procedure for participating in the dry run is as follows:
Evaluation scripts supporting the MED evaluation are included in the NIST Framework for Detection Evaluations (F4DE) Toolkit, Version 2.3.4 or later, found on the NIST MIG tools page.
The package contains an MED evaluation primer, found at DEVA/doc/TRECVid-MED11-ScoringPrimer.html within the distribution.
To obtain the TrialIndex used to validate Ad-Hoc submissions, download: PROGTEST-ADHOC12_20120507_TrialIndex.csv
Consult the TRECVID Master schedule.