The TREC Video Retrieval Evaluation (TRECVID) 2009 was a TREC-style video analysis and retrieval evaluation whose goal was to promote progress in content-based exploitation of digital video via open, metrics-based evaluation. 63 teams from various research organizations --- 28 from Europe, 24 from Asia, 10 from North America, and 1 from Africa --- completed one or more of four tasks: high-level feature extraction, search (fully automatic, manually assisted, or interactive), copy detection, or surveillance event detection. Test data for the search and feature tasks was about 280 hours of MPEG-1 video --- TV news magazine, science news, news reports, documentaries, educational programming, and archival video --- from the Netherlands Institute for Sound and Vision. About 100 hours of video were available for search/feature system development, and the combined 380 hours were used in the copy detection task. About 100 hours of airport surveillance video from the Imagery Library for Intelligent Detection Systems Multi-Camera Tracking Training (i-LIDS MCTTR) set, provided by the UK Home Office, served as training data for the 2009 surveillance event detection task; systems were tested on about 15 hours of a new 50-hour test set from the same source. NIST scored results for almost all tasks against human judgments: feature and search submissions were evaluated based on partial manual judgments of the pooled submissions; copy detection submissions were evaluated at NIST against ground truth created automatically using tools donated by the INRIA-IMEDIA group; and surveillance event detection results were evaluated using ground truth created manually under contract by the Linguistic Data Consortium.
Citation: TRECVID publications
Pub Type: Websites
Keywords: TRECVID, video analysis, video retrieval