
Face Recognition Vendor Test (FRVT)


Ongoing FRVT Activities

FRVT 1:1 Verification

The FRVT Ongoing activity is conducted on a continuing basis and will remain open indefinitely, so developers may submit their algorithms to NIST whenever they are ready. This approach aligns evaluation more closely with development schedules. The evaluation uses very large sets of facial imagery to measure the performance of face recognition algorithms developed in commercial and academic communities worldwide. Multiple evaluation tracks relevant to face recognition are conducted under this test. For more information, visit the FRVT Ongoing webpage.
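A 1:1 verification algorithm compares two face templates and decides whether they belong to the same person. As an illustrative sketch only (the cosine-similarity comparison, the fixed threshold, and the `verify` helper are assumptions for exposition, not any vendor's FRVT submission, which would use its own template format and comparison function):

```python
import numpy as np

def verify(template_a: np.ndarray, template_b: np.ndarray,
           threshold: float = 0.5) -> bool:
    """Toy 1:1 verification: accept if cosine similarity of the two
    face templates meets a fixed threshold. Purely illustrative."""
    sim = float(np.dot(template_a, template_b) /
                (np.linalg.norm(template_a) * np.linalg.norm(template_b)))
    return sim >= threshold

# Identical templates have similarity 1.0 and are accepted;
# opposite templates have similarity -1.0 and are rejected.
t = np.array([0.1, 0.9, 0.3])
print(verify(t, t))   # True
print(verify(t, -t))  # False
```

Moving the threshold trades false matches against false non-matches, which is exactly the trade-off the evaluation's error-rate curves characterize.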

FRVT 1:N 2018

FRVT 1:N 2018 will measure advancements in the accuracy and speed of one-to-many face identification algorithms searching enrolled galleries containing at least 10 million identities. The evaluation will primarily use standardized portrait images, and will quantify how accuracy depends on subject-specific demographics and image-specific quality factors. For more information, visit the FRVT 1:N 2018 webpage.
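A one-to-many (1:N) search ranks every enrolled gallery identity by its similarity to a probe image and returns the top candidates. The linear scan below is a toy sketch of that ranking step only; a real gallery of 10M+ identities would use vendor-specific fast search structures, and the names and cosine-similarity scoring here are assumptions:

```python
import numpy as np

def search(probe: np.ndarray, gallery: dict, top_k: int = 3) -> list:
    """Toy 1:N identification: rank gallery identities by cosine
    similarity to the probe template. Illustrative linear scan only."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Sort (score, name) pairs descending and keep the best top_k.
    ranked = sorted(((cos(probe, t), name) for name, t in gallery.items()),
                    reverse=True)
    return [(name, score) for score, name in ranked[:top_k]]

gallery = {
    "alice": np.array([1.0, 0.0]),
    "bob":   np.array([0.0, 1.0]),
}
print(search(np.array([0.9, 0.1]), gallery))  # "alice" ranks first
```

The candidate-list view makes clear why 1:N accuracy is reported differently from 1:1: the question is whether the true identity appears at (or near) rank one.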

FRVT MORPH

Facial morphing, and the ability to detect it, is an area of high interest to photo-credential issuance agencies and to those employing face recognition for identity verification. The FRVT MORPH test provides ongoing independent testing of prototype facial morph detection technologies.

FRVT Quality

NIST is establishing an evaluation of face image quality assessment algorithms: it will run these algorithms on large sets of images and relate their outputs to face recognition outcomes.
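One common way to relate quality scores to recognition outcomes is an error-versus-reject computation: discard the lowest-quality fraction of mated comparisons and see how far the false non-match rate falls. The sketch below illustrates that idea only; the arrays, the quality/outcome pairing, and the `fnmr_after_reject` helper are assumptions, not the FRVT Quality protocol:

```python
import numpy as np

def fnmr_after_reject(quality: np.ndarray, matched: np.ndarray,
                      reject_fraction: float) -> float:
    """Toy error-vs-reject: drop the lowest-quality fraction of mated
    comparisons, then return the false non-match rate (the share of
    remaining mated pairs the matcher failed to match)."""
    order = np.argsort(quality)                    # worst quality first
    n_reject = int(len(quality) * reject_fraction)
    kept = order[n_reject:]
    return float(1.0 - matched[kept].mean())

quality = np.array([0.1, 0.2, 0.8, 0.9])
matched = np.array([0.0, 0.0, 1.0, 1.0])  # the low-quality pairs failed
print(fnmr_after_reject(quality, matched, 0.0))  # 0.5: no rejection
print(fnmr_after_reject(quality, matched, 0.5))  # 0.0: worst half rejected
```

A useful quality algorithm is one whose scores, used this way, drive the error rate down quickly as images are rejected.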

Prior Tests and Activities

Face Challenges

NIST announced new open and sequestered challenge datasets, including for detection and recognition of individuals in social media imagery. These challenge datasets are available on the Face Challenges webpage.

FIVE

While not part of the FRVT series, the Face in Video Evaluation (FIVE), conducted from 2015 to 2016, will be of interest to the FRVT audience. The FIVE activity assessed face recognition capability in video sequences. The outcomes of FIVE were published in NIST Interagency Report 8173.

Face Recognition Prize Challenge (FRPC) 2017

In conjunction with IARPA, NIST ran its first Face Recognition Prize Challenge (FRPC) to assess the capability of the latest algorithms operating on unconstrained images. IARPA awarded cash prizes to the most accurate identification and verification algorithms.

Chexia Face Recognition

The Face Recognition Algorithm Independent Evaluation (CHEXIA-FACE) was conducted to assess the capability of face detection and recognition algorithms to correctly detect and recognize children's faces appearing in unconstrained imagery.

FRVT 2013

FRVT 2013 tested state-of-the-art face recognition performance. It used very large sets of facial imagery to measure the accuracy and computational efficiency of face recognition algorithms developed in commercial and academic communities worldwide. The test itself ran from July 2012 to the end of 2013. The detailed plans, procedures, and outcomes of the test are documented on the FRVT 2013 homepage.

FRVT 2010

Under the name MBE 2010, 2D face recognition algorithms were evaluated, yielding two reports. First, NIST Interagency Report 7709 gave results for both verification and identification algorithms. Second, NIST Interagency Report 7830 surveyed compression and resolution parameters for storing face images on identity credentials.

Moreover, MBE 2010 established the methodologies used in FRVT 2013.

FRVT 2006

The FRVT 2006 measured performance with sequestered data (data not previously seen by the researchers or developers). A standard dataset and test methodology were employed so that all participants were evaluated evenly. The government provided both the test data and the test environment to participants. The test environment, called the Biometric Experimentation Environment (BEE), was the FRVT 2006 infrastructure; it allowed the experimenter to focus on the experiment by simplifying test data management, experiment configuration, and the processing of results.

FRVT 2002

FRVT 2002 consisted of two tests: the High Computational Intensity (HCInt) Test and the Medium Computational Intensity (MCInt) Test. Both tests required the systems to be fully automatic; manual intervention was not allowed.

FRVT 2000

FRVT 2000 consisted of two components: the Recognition Performance Test and the Product Usability Test. The Recognition Performance Test was a technology evaluation. The goal of the Recognition Performance Test was to compare competing techniques for performing facial recognition. All systems were tested on a standardized database. The standard database ensured all systems were evaluated using the same images, which allowed for comparison of the core face recognition technology. The Product Usability Test examined system properties for performing access control.

FERET

The goal of the FERET program was to develop automatic face recognition capabilities that could be employed to assist security, intelligence, and law enforcement personnel in the performance of their duties. The task of the sponsored research was to develop face recognition algorithms. The FERET database was collected to support the sponsored research and the FERET evaluations. The FERET evaluations were performed to measure progress in algorithm development and identify future research directions.  

Created July 8, 2010, Updated November 27, 2019