The Multimedia Event Detection (MED) track is part of the TRECVID Evaluation. The 2015 evaluation will be the sixth MED evaluation, following the 2011, 2012, 2013, and 2014 evaluations and the 2010 Pilot evaluation. Unlike 2014, only the MED evaluation task is supported.
The goal of MED is to assemble core detection technologies into a system that can search multimedia recordings for user-defined events based on pre-computed metadata. The metadata stores developed by the systems are expected to be sufficiently general to permit re-use for subsequent user-defined events.
A user searching for events in multimedia material may be interested in a wide variety of potential events. Since it is intractable to build special-purpose detectors for each event a priori, technology is needed that can take as input a human-generated definition of the event, which a system will then use to search a collection of multimedia clips. The MED evaluation series defines events via an event kit, which consists of an event name, definition, explication (textual exposition of the terms and concepts), evidential descriptions, and illustrative video exemplars.
A number of major changes have been made for the 2015 evaluation; please consult the evaluation plan for details.
NIST maintains an email discussion list to disseminate information. Send requests to join the list to med_poc at nist dot gov.
MED system performance will be evaluated as specified in the evaluation plan. The current version of the MED15 Evaluation Plan is V2. For convenience, a comparison with V1 that highlights the changes is provided.
A collection of Internet multimedia (i.e., clips containing both audio and video streams) will be provided to registered MED participants. The data, which was collected by the Linguistic Data Consortium, consists of publicly available, user-generated content posted to various Internet video hosting sites. The evaluation plan and license information will specify the usage rules for the data resources in full detail.
In order to obtain the corpora, ALL TRECVID-registered sites (including previous participants) must complete an evaluation license with the LDC. Each site that requires access to the data, either as part of a team or as a standalone researcher, must complete a license.
To complete the evaluation license follow these steps:
- Download the MED15 Data License.
- Return the completed license to LDC's Membership Office via email at ldc at ldc dot upenn dot edu. Alternatively you may fax the completed license to the LDC at 215-573-2175.
- When you send the completed license to the LDC, include the following information:
- Registered TRECVID Team name
- Site/organization name
- Data contact person's name
- Data contact person's email
The designated data contact person for each site will receive instructions from the LDC about the specific procedures for obtaining the data packages when they are released.
Dry Run Evaluation
There are two relevant archives for the dry run evaluation:
- MED15DRYRUN_Files.tar.bz2 - Contains the index files for three events (E006, E009, and E013) tested against the MEDTest testing collection
- MED15_testTEAM_MED15DRYRUN_PS_1.tar.bz2 - A demonstration submission file for the Dry Run
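Both dry run archives are standard bzip2-compressed tarballs and can be unpacked with `tar` once downloaded. The sketch below illustrates the extraction command using a locally created stand-in archive (the directory and file contents are placeholders; the real archives are obtained as described above):

```shell
# Create a small stand-in archive purely for illustration; the real
# MED15DRYRUN_Files.tar.bz2 comes from the evaluation distribution.
mkdir -p demo/MED15DRYRUN_Files
echo "E006 index (placeholder)" > demo/MED15DRYRUN_Files/index.txt
tar -cjf MED15DRYRUN_Files.tar.bz2 -C demo MED15DRYRUN_Files

# Unpack a dry run archive; the same command applies to the real files.
tar -xjf MED15DRYRUN_Files.tar.bz2

# List the extracted contents.
ls MED15DRYRUN_Files/
```

The `-j` flag selects bzip2 compression, matching the `.tar.bz2` extension of the distributed archives.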
In addition to the data files, F4DE-3.2.3 contains a scoring primer for MED '15 in the file DEVA/doc/TRECVid-MED15-ScoringPrimer.html that demonstrates how to score a system and prepare a submission file.
Evaluation scripts to support the MED evaluation are within the NIST Framework for Detection Evaluations (F4DE) Toolkit; a link to the latest release can be found on the NIST MIG tools page.
MED15 requires F4DE version 3.2.3 or later. The list of F4DE releases can be found on the F4DE GitHub page.
Consult the TRECVID Master Schedule.
- July 29, 2015 - Added Dry Run Evaluation section, updated Evaluation Tools section.
- March 18, 2015 - Initial page created.