Craig I. Watson, Gregory P. Fiumara, Elham Tabassi, Su L. Cheng, Patricia A. Flanagan, Wayne J. Salamon
FpVTE was conducted primarily to assess the current capabilities of fingerprint matching algorithms using operational datasets containing several million subjects. There were three classes of participation, examining finger combinations ranging from a single finger up to ten fingers. Enrollment sets varied in size from 10 000 to 5 million subjects. All data used were sequestered operational data that were not shared with any of the participants. The evaluation provided feedback to participants after the first two of three submissions, allowing them to evaluate their performance, adjust their algorithms, and resubmit for further testing. Each participant was allowed to submit two algorithms per class of participation during each round of submission.

The evaluation was conducted at NIST on NIST-owned hardware. Participants submitted software libraries compliant with the testing Application Programming Interface (API); these libraries were linked to a NIST-developed test driver and run by NIST employees. All participant libraries underwent validation testing to ensure that results obtained at NIST matched the results participants obtained on their own hardware.

This is the first large-scale one-to-many fingerprint evaluation since FpVTE 2003. In 2003, participants brought their own hardware to NIST to process the evaluation data. The 2003 datasets contained approximately 25 000 subjects and required millions of single subject-to-subject matches. The current FpVTE uses a testing model closer to real one-to-many identification systems by allowing the submitted software to control how it performs the one-to-many search and to return a candidate list of potential matches. The number of subjects is also significantly higher, with around 10 million subjects in the current testing datasets.
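To make the testing model concrete, the following is a minimal sketch, not the actual FpVTE API, of the interaction described above: the evaluation driver enrolls a gallery through the submitted library, then submits a probe, and the library decides internally how to conduct the one-to-many search and returns a ranked candidate list. All names here (`Matcher`, `enroll`, `search`, `Candidate`) and the toy similarity score are illustrative assumptions, not the NIST specification.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    subject_id: str   # identifier of a potentially matching enrolled subject
    score: float      # similarity score assigned by the matcher

@dataclass
class Matcher:
    # Hypothetical stand-in for a participant's library: it owns the gallery
    # and controls how the one-to-many search is performed.
    gallery: dict = field(default_factory=dict)  # subject_id -> template

    def enroll(self, subject_id: str, template: list) -> None:
        self.gallery[subject_id] = template

    def search(self, probe: list, max_candidates: int = 5) -> list:
        # Toy similarity: count of agreeing template elements. A real matcher
        # would use minutiae-based comparison and its own indexing strategy
        # rather than an exhaustive scan of the gallery.
        def score(template: list) -> float:
            return sum(1.0 for a, b in zip(probe, template) if a == b)
        ranked = sorted(
            (Candidate(sid, score(t)) for sid, t in self.gallery.items()),
            key=lambda c: c.score,
            reverse=True,
        )
        return ranked[:max_candidates]

# Driver-side usage: enroll a small gallery, then search with one probe.
m = Matcher()
m.enroll("S001", [1, 4, 2, 9])
m.enroll("S002", [1, 4, 2, 8])
m.enroll("S003", [7, 0, 3, 5])
candidates = m.search([1, 4, 2, 9], max_candidates=2)
```

This contrasts with the 2003 model, in which the test harness requested individual subject-to-subject comparisons; here the library returns the candidate list itself, as real identification systems do.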