In light of COVID-19 guidance from federal, state and local health authorities, NIST has made temporary adjustments to staffing and operations to protect the health and safety of NIST employees and the public. As a result of these changes, submissions to all FRVT evaluation tracks are temporarily suspended. Thank you for your understanding.
In this report, NISTIR 8280, NIST describes and quantifies demographic differentials for contemporary face recognition algorithms. NIST tested nearly 200 face recognition algorithms from nearly 100 developers, using four collections of photographs comprising more than 18 million images of more than 8 million people.
The FRVT Ongoing activity is conducted on a continuing basis and will remain open indefinitely such that developers may submit their algorithms to NIST whenever they are ready. This approach more closely aligns evaluation with development schedules. The evaluation will use very large sets of facial imagery to measure the performance of face recognition algorithms developed in commercial and academic communities worldwide. Multiple evaluation tracks relevant to face recognition will be conducted under this test. For more information, visit the FRVT Ongoing webpage.
The FRVT 1:N 2018 will measure advancements in the accuracy and speed of one-to-many face identification algorithms searching enrolled galleries containing at least 10 million identities. The evaluation will primarily use standardized portrait images and will quantify how accuracy depends on subject-specific demographics and image-specific quality factors. For more information, visit the FRVT 1:N 2018 webpage.
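As a conceptual illustration of the one-to-many search described above (this is not NIST's test harness), the sketch below assumes unit-normalized face embeddings, cosine similarity as the match score, and an illustrative decision threshold; the gallery size and embedding dimension are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    # Scale each row to unit length so dot products equal cosine similarity.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Enrolled gallery: one embedding per identity (toy size; FRVT 1:N galleries
# contain millions of identities).
gallery = normalize(rng.standard_normal((1000, 128)))

# A probe embedding; here we reuse gallery identity 42 with added noise,
# so the correct identity should rank first.
probe = normalize(gallery[42] + 0.05 * rng.standard_normal(128))

# 1:N search: score the probe against every enrolled identity and
# report the best candidate if it clears a decision threshold.
scores = gallery @ probe
best = int(np.argmax(scores))
if scores[best] >= 0.5:  # threshold is an illustrative choice
    print("candidate identity:", best)
else:
    print("no match")
```

A real 1:N system replaces the brute-force matrix product with indexed search and reports a ranked candidate list rather than a single hit, but the score-then-threshold structure is the same.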
Facial morphing, and the ability to detect it, is of high interest to a number of photo-credential issuance agencies and to organizations employing face recognition for identity verification. The FRVT MORPH test will provide ongoing independent testing of prototype facial morph detection technologies.
NIST is establishing an evaluation of face image quality assessment algorithms. NIST will run quality assessment algorithms on large sets of images and relate their outputs to face recognition outcomes.
NIST announced new open and sequestered challenge datasets, including detection and recognition of individuals in social media. These challenges are available here.
While not part of the FRVT series, the Face-in-Video-Evaluation (FIVE), conducted in 2015-2016, will be of interest to the FRVT audience. The FIVE activity assessed face recognition capability in video sequences. The outcomes of FIVE were published in NIST Interagency Report 8173.
In conjunction with IARPA, NIST ran its first Face Recognition Prize Challenge (FRPC) to assess capability of the latest algorithms operating on unconstrained images. IARPA awarded cash prizes to the most accurate identification and verification algorithms.
The Face Recognition Algorithm Independent Evaluation (CHEXIA-FACE) was conducted to assess the capability of face detection and recognition algorithms to correctly detect and recognize children's faces appearing in unconstrained imagery.
FRVT 2013 tested state-of-the-art face recognition performance. It used very large sets of facial imagery to measure the accuracy and computational efficiency of face recognition algorithms developed in commercial and academic communities worldwide. The test itself ran from July 2012 to the end of 2013. The detailed plans, procedures and outcomes of the test are documented on the FRVT 2013 homepage.
Under the name MBE 2010, 2D face recognition algorithms were evaluated, yielding two reports. First, NIST Interagency Report 7709 gave results for both verification and identification algorithms. Second, NIST Interagency Report 7830 surveyed compression and resolution parameters for storing face images on identity credentials.
Moreover, MBE 2010 instituted the methodologies subsequently used in FRVT 2013.
The FRVT 2006 measured performance with sequestered data (data not previously seen by the researchers or developers). A standard dataset and test methodology were employed so that all participants were evaluated evenly. The government provided both the test data and the test environment, called the Biometric Experimentation Environment (BEE), to participants. The BEE, the FRVT 2006 test infrastructure, allowed experimenters to focus on their experiments by simplifying test data management, experiment configuration, and the processing of results.
FRVT 2002 consisted of two tests: the High Computational Intensity (HCInt) Test and the Medium Computational Intensity (MCInt) Test. Both tests required the systems to be fully automatic; manual intervention was not allowed.
FRVT 2000 consisted of two components: the Recognition Performance Test and the Product Usability Test. The Recognition Performance Test was a technology evaluation. The goal of the Recognition Performance Test was to compare competing techniques for performing facial recognition. All systems were tested on a standardized database. The standard database ensured all systems were evaluated using the same images, which allowed for comparison of the core face recognition technology. The Product Usability Test examined system properties for performing access control.
The goal of the FERET program was to develop automatic face recognition capabilities that could be employed to assist security, intelligence, and law enforcement personnel in the performance of their duties. The task of the sponsored research was to develop face recognition algorithms. The FERET database was collected to support the sponsored research and the FERET evaluations. The FERET evaluations were performed to measure progress in algorithm development and identify future research directions.