
Face in Video Evaluation (FIVE)

Summary

The Face in Video Evaluation (FIVE) is being conducted to assess the capability of face recognition algorithms to correctly identify or ignore persons appearing in video sequences, i.e., the open-set identification problem. Both comparative and absolute accuracy measures are of interest, given the goals to determine which algorithms are most effective and whether any are viable for various operational use cases.

Description


2024-01-23  FIVE 2024 Announced!

The Face in Video Evaluation (FIVE) 2024 is being conducted to assess the capability of face recognition algorithms to correctly identify or ignore persons appearing in video sequences – i.e., the open-set identification problem.  For more information, please visit the FIVE 2024 webpage.

FIVE 2024 will include datasets and use cases involving degraded video imagery (low resolution, compressed, etc.) collected:

  • Outdoors with directional lighting
  • At long range (300m+) and potentially affected by atmospheric turbulence
  • From elevated platforms (large look-down pitch angles)
  • With multiple people in the scene

FIVE 2024 will also include datasets and use cases previously assessed in FIVE 2015, including:

  • High volume screening of persons in crowded spaces (e.g. an airport)
  • Low volume forensic examination of footage from a crime scene (e.g. a convenience store)
  • Persons in business meetings (e.g. for video-conferencing)
  • Persons appearing in television footage

2017-03-07  Report publication

The FIVE report “NIST Interagency Report 8173: Face In Video Evaluation (FIVE) Face Recognition of Non-Cooperative Subjects” is now available. [PDF, 47MB] [BIB]

2015-10-20  Phase 3 timeline

The final deadline for submission to FIVE is December 11, 2015. Participation proceeds as previously described in the API document linked below.

2014-11-19 Final API released

The final evaluation plan and API document, along with its C++ interface header, are now online. Implementers should conform to this API and submit Phase 1 algorithms to NIST by February 8, 2015. Participants must mail the properly completed participation agreement to NIST before the first algorithm is sent. An image of the mandatory operating system is also online.

2014-11-03 Final draft API for comment

The final draft API, the C++ interface header, and the participation agreement are now online. Comments on the API should be emailed to five AT nist DOT gov by November 11, 2014. 

2014-10-03 Second draft API for comment

The second of three draft API documents is now online. Comments should be emailed to five AT nist DOT gov by November 1. A final short comment period will follow.

2014-08-15 Draft API for comment

The first draft API for FIVE algorithms is now online. It is very similar to that used in the class V track of the last FRVT evaluation. Developers are specifically asked to comment, particularly on whether it supports measurement of the full capability of algorithms to detect, track, and recognize faces in video. Comments should be emailed to five AT nist DOT gov by September 1. Two further comment periods will follow.

2014-07-16  Program Announcement

Scope: The Face in Video Evaluation (FIVE) is being conducted to assess the capability of face recognition algorithms to correctly identify or ignore persons appearing in video sequences, i.e., the open-set identification problem. Both comparative and absolute accuracy measures are of interest, given the goals to determine which algorithms are most effective and whether any are viable for the following primary operational use cases: 1. High volume screening of persons in crowded spaces (e.g. an airport); 2. Low volume forensic examination of footage from a crime scene (e.g. a convenience store); 3. Persons in business meetings (e.g. for video-conferencing); and 4. Persons appearing in television footage. These applications differ in their tolerance of false positives, whether a human examiner will review outputs, the prior probabilities of mate vs. non-mate presence, and the cost of recognition errors.
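The open-set decision rule described above can be sketched in a few lines. This is an illustrative toy, not the FIVE evaluation protocol: the probe is compared against every gallery entry, and the best match is reported only if its similarity clears a threshold; otherwise the probe is declared a non-mate. All names here (`Candidate`, `identifyOpenSet`) are hypothetical.

```cpp
#include <string>
#include <vector>

// One gallery comparison result for a probe (hypothetical structure).
struct Candidate {
    std::string id;
    double similarity;  // higher means more alike
};

// Open-set identification: unlike closed-set identification, the probe may
// match no one enrolled, so the best score must also exceed a threshold.
// Returns the matched gallery identity, or "" to signal "not enrolled".
std::string identifyOpenSet(const std::vector<Candidate>& scores,
                            double threshold) {
    std::string best;
    double bestScore = threshold;  // scores at or below threshold are rejected
    for (const auto& c : scores) {
        if (c.similarity > bestScore) {
            bestScore = c.similarity;
            best = c.id;
        }
    }
    return best;
}
```

The threshold is where the use cases in the scope statement diverge: a high-volume screening deployment with low tolerance for false positives would set it high, while a forensic examination reviewed by a human examiner could set it lower.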

Out of scope: Gait, iris and voice recognition; Recognition across multiple views (e.g. via stereoscopic techniques); Tracking across sequential cameras (re-identification); anomaly detection; detection of evasion.

Relationship to FRVT: The Face Recognition Vendor Tests of 2000, 2002, 2006, 2010, and 2013 gave quantitative statements of accuracy and speed of mostly still-image face recognition algorithms. The last test included a video track (FRVT class V) – results from that work are being provided to participants. Our new FIVE program supersedes the FRVT work but proceeds in an almost identical manner.

Test progression: Software submitted to NIST will be evaluated on sequestered sets to quantify accuracy and speed. Algorithms must be implemented behind the formal C++ API to be published by NIST. This will be very similar to the API used in the prior FRVT evaluation.  The test will be conducted over at least three iterative cooperative test-report-test phases engaging algorithm developers. This process will culminate in the publication of reports on this website and in the open literature.
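To give a sense of what "implemented behind a formal C++ API" means in practice, here is a minimal sketch of the general shape such an interface could take: template generation from a video sequence plus template comparison. The names, types, and the toy mean-pixel implementation below are all hypothetical; the actual NIST header differs in structure and detail.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical frame and video types for illustration only.
struct Frame {
    uint16_t width = 0;
    uint16_t height = 0;
    std::vector<uint8_t> pixels;  // e.g. 8-bit grayscale, row-major
};
using Video = std::vector<Frame>;

// Hypothetical sketch of an evaluation-style interface: the test harness
// calls the implementation only through these virtual functions.
class VideoFaceRecognizer {
public:
    virtual ~VideoFaceRecognizer() = default;
    // Produce a template summarizing the faces seen in the sequence.
    virtual std::vector<float> makeTemplate(const Video& video) = 0;
    // Similarity between two templates; higher means more alike.
    virtual double compare(const std::vector<float>& a,
                           const std::vector<float>& b) = 0;
};

// Toy stand-in so the interface can be exercised: the "template" is just
// the mean pixel value of each frame. A real algorithm would detect,
// track, and encode faces instead.
class MeanPixelRecognizer : public VideoFaceRecognizer {
public:
    std::vector<float> makeTemplate(const Video& video) override {
        std::vector<float> t;
        for (const Frame& f : video) {
            double sum = 0;
            for (uint8_t p : f.pixels) sum += p;
            t.push_back(f.pixels.empty()
                            ? 0.0f
                            : static_cast<float>(sum / f.pixels.size()));
        }
        return t;
    }
    double compare(const std::vector<float>& a,
                   const std::vector<float>& b) override {
        // Crude similarity: negative distance between first components.
        if (a.empty() || b.empty()) return -1e9;
        return -std::abs(a[0] - b[0]);
    }
};
```

Keeping the submission behind an abstract interface like this is what lets NIST run every participant's algorithm on sequestered data with the same harness, without seeing or modifying the implementation.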

Test data: This program will leverage several archival video corpora that are sequestered at NIST. Each includes subjects who are generally neither cooperative nor actively uncooperative. The datasets have in-common that several subjects are usually present in any given sequence, that only one fixed camera observes them, and that frontal views are the exception rather than the norm. The datasets vary in terms of camera quality, video quality, compression and pedestrian motion. Video imagery will primarily be compared with enrolled still-image datasets (video-to-still) of varying size and quality, and for which one or more views of a subject will be available. In addition, still-to-video and video-to-video tests will be executed.

None of the test data can be provided to participants. Instead, prospective participants should leverage public domain and proprietary datasets as available. For the surveillance application, NIST is aware of a very suitable video corpus that has been made available to qualified developers; please contact five AT nist DOT gov for details.

Standardization: The FIVE activity is expected to give quantitative support to the development of the ISO/IEC 30137 multipart standard recently initiated in the SC37 Biometrics Subcommittee. In particular, Working Group 5 is developing a performance testing and reporting standard for video-surveillance systems. Working Group 4 is formulating recommendations for the design and specification of such systems. Finally, Working Group 3 is considering biometric data needs layered on top of existing video interchange standards, e.g. ISO/IEC 22311:2012.

Important Dates:

August 15 – November 17, 2014: API publication and public comment periods
November 17, 2014 – February 8, 2015: Phase 1 submission period
April 6 – June 12, 2015: Phase 2 submission period
August 10 – December 11, 2015: Phase 3 submission period

Contact
For further information please contact five AT nist DOT gov.

This work is funded in part by the Department of Homeland Security's Science and Technology Directorate with PM Patricia Wolfhope.


Created July 16, 2014, Updated February 15, 2024