The Multimedia Event Detection (MED) evaluation track is part of the TRECVID Evaluation. The multi-year goal of MED is to assemble core detection technologies into a system that can quickly and accurately search a multimedia collection for user-defined events. An event for MED 2010 is "an activity-centered happening that involves people engaged in process-driven actions with other people and/or objects at a specific place and time".
A user searching for events in multimedia material may be interested in a wide variety of potential events. Since it is intractable to build special-purpose detectors for each event a priori, a technology is needed that can take as input a definition of the event that a human can use to search a collection of multimedia clips. The MED evaluation series will define events via an event kit, which consists of:
NIST maintains an email discussion list to disseminate information. Send requests to join the list to jfiscus at nist dot gov.
The MED task is: given an Event Kit, find all clips that contain the event in a video collection.
The MED task is a "multimedia" task in that systems will be expected to detect evidence of the event using either or both the audio and video streams (but not human-created textual metadata) of the clips.
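The task statement above can be sketched as a simple retrieval interface: a system assigns each clip a confidence that it contains the event, and clips crossing a detection threshold are returned. This is a minimal illustration only; all names and the threshold are hypothetical, and the official task input/output formats are defined in the evaluation plan.

```python
# Minimal sketch of the MED task interface (all names hypothetical):
# given per-clip confidences for one event, return the clips whose
# score crosses a detection threshold.

def detect_event(clip_scores, threshold=0.5):
    """clip_scores: dict mapping clip ID -> system confidence in [0, 1]
    that the clip contains the target event."""
    return sorted(cid for cid, s in clip_scores.items() if s >= threshold)

scores = {"HVC001": 0.91, "HVC002": 0.12, "HVC003": 0.67}
print(detect_event(scores))  # -> ['HVC001', 'HVC003']
```

In a real system the scores would come from audio/visual analysis of the clip streams, per the task definition above.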
Participants can choose to build systems for one, two, or all three 2010 events.
Three events will be used for the 2010 pilot evaluation. The events will be: "Making a cake", "Batting a run", and "Assembling a shelter". In order to begin a community-wide discussion, the Linguistic Data Consortium has prepared a web page containing event definitions and example instances for each of the three events. The linked examples are meant to inform discussions only and may not be incorporated in the distributed data resources. The actual illustrative examples will be included in the video data resources described below.
MED system performance will be evaluated as specified in the evaluation plan. The evaluation plan contains the metric definitions, scoring instructions, and submission instructions.
Current version: V05
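The metric definitions themselves live in the evaluation plan. Detection evaluations of this kind are typically summarized by miss and false-alarm rates, so the following is a hedged sketch of that style of scoring, not the official NIST scoring tool, and the function and variable names are illustrative.

```python
def miss_and_fa_rates(decisions, labels):
    """decisions: dict clip ID -> bool (system says the event is present).
    labels:    dict clip ID -> bool (clip truly contains the event).
    Returns (P_miss, P_fa): the fraction of target clips missed and the
    fraction of non-target clips falsely detected."""
    targets = [c for c in labels if labels[c]]
    nontargets = [c for c in labels if not labels[c]]
    misses = sum(1 for c in targets if not decisions.get(c, False))
    false_alarms = sum(1 for c in nontargets if decisions.get(c, False))
    return misses / len(targets), false_alarms / len(nontargets)

labels = {"a": True, "b": True, "c": False, "d": False}
decisions = {"a": True, "b": False, "c": True, "d": False}
print(miss_and_fa_rates(decisions, labels))  # -> (0.5, 0.5)
```

The official metric may combine these rates into a single cost with weights specified in the plan; consult the evaluation plan for the authoritative definition.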
A new collection of Internet multimedia (i.e., video clips containing both audio and video streams) will be provided to registered MED participants. The data, which was collected by the Linguistic Data Consortium, consists of publicly available, user-generated content posted to various Internet video hosting sites. Instances of the events were collected by specifically searching for target events using text-based Internet search engines. All included data has been reviewed for privacy and offensive material.
Video clips will be provided in MPEG-4 formatted files. The video will be encoded to the H.264 standard, and the audio will be encoded using MPEG-4's Advanced Audio Coding (AAC) standard.
The video data collection will be divided into two data sets:
- Development data consisting of 1,746 total clips (~56 hours). The development set includes nominally 50 instances of each of the three MED '10 events; the remaining clips do not contain any of the three events.
- Evaluation data consisting of 1,742 total clips (~59 hours). The evaluation set will include instances of the three events, but the actual number of instances will not be released until the evaluation submissions are complete.
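The clip counts and durations above imply fairly short clips on average, which is worth keeping in mind when budgeting processing time. A quick back-of-envelope check:

```python
# Average clip duration implied by the development-set figures above
# (1,746 clips, ~56 hours of video).
dev_hours, dev_clips = 56, 1746
avg_seconds = dev_hours * 3600 / dev_clips
print(round(avg_seconds))  # roughly 115 seconds per clip
```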
In order to obtain the MED '10 development and evaluation corpora, registered sites must complete an evaluation license with the LDC. Each site that requires access to the data, whether as part of a team or as a standalone researcher, should complete a license.
To complete the evaluation license follow these steps:
- Download the license MED-10 Evaluation License V2.
- Return the completed license to LDC's Membership Office via email at ldc [at] ldc.upenn.edu. Alternatively you may fax the completed license to LDC at 215-573-2175.
- When you send the completed license to LDC, include the following information:
- Registered TRECVID Team name
- Site/organization name
- Data contact person's name
- Data contact person's email
The designated data contact person for each site will receive automated web download instructions from LDC upon release of the data packages.
The dry run period for MED will run until September 8, 2010. Dry run submissions will be accepted at any time during the period. The dry run is an opportunity for developers to make sure they are able to generate valid system output that can be scored with the NIST scoring tools. The actual performance of the system is not of interest during the dry run so developers may feel free to use any method to generate their system output, e.g., a random system, training on the dry run data, etc.
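Since the dry run only checks that output is well-formed enough to score, even a random system suffices, as noted above. Below is a minimal sketch of such a system; the three-column (clip ID, score, decision) layout is illustrative only, and the actual submission format is defined in the evaluation plan.

```python
import random

def random_system(clip_ids, threshold=0.5, seed=0):
    """Emit (clip_id, score, decision) triples with random scores --
    enough to exercise a scoring pipeline end to end. The layout here
    is hypothetical, not the official MED submission format."""
    rng = random.Random(seed)
    rows = []
    for cid in clip_ids:
        score = rng.random()
        rows.append((cid, score, "y" if score >= threshold else "n"))
    return rows

for row in random_system(["HVC001", "HVC002", "HVC003"]):
    print("%s\t%.3f\t%s" % row)
```

Output like this can then be run through the NIST scoring tools to verify that the end-to-end submission process works before the real evaluation.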
The procedure for participating in the dry run is as follows:
Evaluation scripts to support the MED evaluation are within the NIST Framework for Detection Evaluations (F4DE) Version 2.2 Toolkit found on the NIST MIG tools page.
The package contains an MED evaluation primer, found at F4DE-2.2/DEVA/doc/TRECVid-MED10ScoringPrimer.html within the distribution.
Consult the TRECVID Master schedule.