International Biometric Performance Testing Conference
March 1-5, 2010
In cooperation with the National Physical Laboratory and Fraunhofer IGD, NIST held an international forum for the discussion of recent advances in biometric testing and biometric performance specification on March 2-4, 2010, with Satellite Workshops held on March 1 and March 5. The conference aimed to identify important new performance metrics and to document best practice for evaluation. New performance results were not themselves in scope; instead, the intention was to capture recent and best practice, to contrast it with the past, and to expose what is needed in the future. The overarching goal was to refine the concept of biometric performance and ultimately to elevate the adoption and effectiveness of biometric technologies.
All the IBPC Presentations are now online.
Satellite Workshop I, March 1, 2010
The future of the NIST Fingerprint Image Quality software (NFIQ) was discussed.
Contact : email@example.com
Satellite Workshop II, March 5, 2010
A meeting was held to solicit comments on metrics and testing methods for protecting biometric templates.
See Agenda for background information.
Contact : firstname.lastname@example.org
Satellite Workshop III, March 5, 2010
A session was held to discuss work on fingerprint feature markup and testing, as well as interoperability, reference datasets, and the possibilities for semantic conformance testing. Contact: email@example.com
In the context of multifactor authentication, biometrics fill the role of "something you are," and their utility rests on correct analog-to-digital conversion of the particular human characteristic or trait. This conversion is itself multidisciplinary, involving biological aspects, human factors, physical sensor technologies, and computer vision and signal detection functions. Thereafter, the algorithmic tasks leading up to correct recognition exploit capabilities in image and signal processing, machine learning, and pattern recognition and classification. These steps are often non-trivial and are potential sources of error. Performance, accordingly, is never perfect and is always subject to tradeoffs between acceptance at one stage and rejection at a later one.
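The acceptance/rejection tradeoff described above can be sketched numerically: sweeping a decision threshold over comparison scores trades false matches (impostors accepted) against false non-matches (genuine users rejected). The scores and thresholds below are invented purely for illustration; they do not come from any IBPC evaluation.

```python
# Hypothetical similarity scores for genuine and impostor comparisons
# (made up for this sketch; real evaluations use large trial sets).
genuine_scores = [0.91, 0.85, 0.78, 0.66, 0.95, 0.72, 0.88, 0.60]
impostor_scores = [0.12, 0.35, 0.48, 0.22, 0.55, 0.30, 0.41, 0.65]

def error_rates(threshold):
    """Return (FMR, FNMR) at a given similarity threshold.

    FMR  = fraction of impostor scores at or above the threshold
    FNMR = fraction of genuine scores below the threshold
    """
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return fmr, fnmr

# Raising the threshold lowers FMR but raises FNMR, and vice versa.
for t in (0.4, 0.5, 0.6, 0.7):
    fmr, fnmr = error_rates(t)
    print(f"threshold={t:.1f}  FMR={fmr:.3f}  FNMR={fnmr:.3f}")
```

The key point is that neither error rate can be driven to zero independently; a deployment chooses an operating point along this tradeoff curve.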
Prior NIST workshops, in 2006 and 2007, used the term "quality" as a proxy for performance. The goals of those forums were to distinguish quality-by-design from quality-by-practice and to identify measurable quantities that predict recognition outcome.