
Information Access Division Highlights - 2014


NOVEMBER – DECEMBER 2014

The Twenty-Second Text REtrieval Conference Proceedings (TREC 2013)

Ellen Voorhees, Editor
NIST Special Publication 500-302
September 2014

This report constitutes the proceedings of the Twenty-Second Text REtrieval Conference (TREC 2013) held in Gaithersburg, Maryland, on November 19-22, 2013. The conference was cosponsored by NIST and the Defense Advanced Research Projects Agency (DARPA).


IREX V: Instructional Material for Iris Image Collection

By George W. Quinn, James Matey, Elham Tabassi, and Patrick Grother
NISTIR 8013
July 2014

This document provides guidance for the proper collection of iris images. Problems that occur during image acquisition can lead to poor quality samples. If the subject was looking down or blinking at the moment of capture, the image should be rejected and a new one acquired. Such problems are straightforward to correct, but correcting them requires attentiveness on the part of the camera operator. If an image of a closed eye is accepted without scrutiny, no amount of post-capture processing can recover the lost information. For this reason, certain procedures should be followed to ensure that only good quality samples are collected.

Measurement Uncertainties of Three Score Distributions and Two Thresholds with Data Dependency

By Jin Chu Wu, Alvin F. Martin, Craig S. Greenberg, and Raghu N. Kacker
NISTIR 8025
September 2014

NIST conducts an ongoing series of Speaker Recognition Evaluations (SRE). Recently a new paradigm was adopted to evaluate the performance of speaker recognition systems, in which three distributions of scores (target, known non-target, and unknown non-target) and two decision thresholds are employed. The new detection cost function was defined as an average of two weighted sums of the probabilities of type I and type II errors, one sum for each of the two decision thresholds. In addition, data dependency arises from multiple uses of the same subjects. The measurement uncertainties, i.e., the standard errors of the detection cost function, improved as a result of taking account of this data dependency.
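
The cost function described above can be illustrated with a small sketch: it averages, over the two thresholds, a weighted sum of the miss and false-alarm rates estimated from the three score sets. The weights, the pairing of non-target score sets with thresholds, and the mapping of type I/II errors to false alarms and misses are illustrative assumptions here, not the definitions in NISTIR 8025, and the report's treatment of data dependency is not shown.

    import numpy as np

    def dcf_two_thresholds(target, known_nontarget, unknown_nontarget,
                           theta1, theta2, w_miss=1.0, w_fa=1.0):
        # Illustrative two-threshold detection cost: average of weighted error sums
        # at each threshold. Weights and threshold/non-target pairing are placeholders.
        target = np.asarray(target)

        def weighted_errors(theta, nontarget):
            p_miss = np.mean(target < theta)                # miss: target score below threshold
            p_fa = np.mean(np.asarray(nontarget) >= theta)  # false alarm: non-target score at/above threshold
            return w_miss * p_miss + w_fa * p_fa

        return 0.5 * (weighted_errors(theta1, known_nontarget)
                      + weighted_errors(theta2, unknown_nontarget))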

SEPTEMBER – OCTOBER 2014

Design and Testing of a Mobile Touchscreen Interface for Multi-Modal Biometric Capture

By Kristen K. Greene, Ross Micheals, Kayee Kwong, and Gregory P. Fiumara
NISTIR 8003
May 2014

This report describes the design and usability testing of a touchscreen interface for multi-modal biometric capture, an application called WSABI (Web Services for the Acquisition of Biometric Information). The application code is publicly available online at https://github.com/NIST-BWS/wsabi2. The interface is a tablet-based reference application for the Web Services for Biometric Devices (WS-BD) protocol. Just as WS-BD specifies a method of communication between clients and sensors (i.e., machine-to-machine communication), WSABI provides a consistent and modality-independent method of interaction between human operators and sensors (i.e., human-to-machine communication).
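
To make the machine-to-machine side concrete, the sketch below shows the general shape of a lock/capture/download exchange that a WS-BD-style client might perform over HTTP. The endpoint paths, field names, and base URL are illustrative assumptions for this sketch, not text quoted from the WS-BD specification or the WSABI code.

    import requests

    BASE = "http://sensor.example/ws"   # hypothetical sensor service URL

    def capture_sample(base=BASE):
        # Illustrative client flow: register a session, lock the sensor, trigger a
        # capture, then download the results. Endpoint names are assumptions only.
        session_id = requests.post(f"{base}/register").json()["sessionId"]
        requests.post(f"{base}/lock/{session_id}")          # obtain exclusive control of the sensor
        capture_ids = requests.post(f"{base}/capture/{session_id}").json()["captureIds"]
        data = [requests.get(f"{base}/download/{cid}").content for cid in capture_ids]
        requests.post(f"{base}/unlock/{session_id}")        # release the sensor for other clients
        return data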

Performance of Face Identification Algorithms

By Patrick Grother and Mei Ngan
NISTIR 8009
May 2014

This report documents performance of one-to-many face identification algorithms and compares it with performance in 2010. Performance in this context refers to recognition accuracy and computational resource usage as measured by executing those algorithms on massive sequestered datasets. These datasets consist of reasonable-quality law enforcement mugshot images; poor-quality webcam images collected in similar detention operations; and moderate-quality visa application images. The mugshot and visa images are used to approximate performance obtainable using high-quality ISO standardized images collected in passport, visa, and driving license duplicate detection operations.

A Measurement Metric for Forensic Latent Fingerprint Preprocessing

By Haiying Guan, Andrew Dienstfrey, Mary Theofanos, and Brian Stanton
NISTIR 8017
July 2014

This report describes a proposal to extend Spectral Image Validation and Verification (SIVV) to serve as a metric for latent fingerprint image quality measurement. ITL researchers implemented and tested the new SIVV-based metric for latent fingerprint image quality and used it to measure the performance of the forensic latent fingerprint preprocessing step. Preliminary results show that the new metric can provide positive indications of both latent fingerprint image quality and the performance of the fingerprint preprocessing.

JULY-AUGUST 2014

ITL Performs Multilingual Evaluation of Key Word Search Technology

ITL recently performed two additional multilingual evaluations of KeyWord Search (KWS) technology. These evaluations assessed the capability of software to detect a specific term, defined textually in the language's native orthography, within a conversational telephone speech recording. The first evaluation assessed the performance of systems developed by performers in year 2 of the Intelligence Advanced Research Projects Activity (IARPA) Babel Program. The second evaluation, OpenKWS, leveraged tools and data resources to assess the technology across a wider research community. These evaluations were the culmination of a joint effort by the Babel Program's testing and evaluation team, which included IARPA, ITL, MIT Lincoln Labs, the University of Maryland's Center for Advanced Study of Language, and MITRE, to develop language resources for KWS systems in five languages.

The OpenKWS evaluation was open to the entire research community. The evaluations assessed systems across different factors expected to affect performance, including the amount of transcribed training data, the size of language resources brought to bear, and system retraining with knowledge of the keywords. Twelve teams participated in OpenKWS this year, three of which were new to the evaluation. Four teams were members of IARPA's Babel Program (namely, the BABELON, LORELEI, RADICAL, and SWORDFISH teams) and the other eight teams represented various organizations from around the world, including China, Israel, Singapore, and the United States. Participating teams represented both industry and academia. The OpenKWS community of researchers will discuss the evaluation and the results in early July, after which results will be made public on the NIST OpenKWS14 Website: <http://www.nist.gov/itl/iad/mig/openkws14.cfm>.
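
KWS systems in evaluations of this kind are commonly scored with a term-weighted value: one minus the average, over keywords, of the miss probability plus a heavily weighted false-alarm probability. The sketch below is a simplified illustration of that idea; the false-alarm weight, the trial accounting, and the input layout are assumptions, not the official OpenKWS scoring protocol.

    def term_weighted_value(results, beta=999.9):
        # `results` maps each keyword to (n_correct, n_true, n_false_alarm, n_nontarget_trials).
        # The weight `beta` and the trial counts are illustrative assumptions.
        penalties = []
        for n_correct, n_true, n_fa, n_trials in results.values():
            p_miss = 1.0 - (n_correct / n_true) if n_true else 0.0
            p_fa = n_fa / n_trials if n_trials else 0.0
            penalties.append(p_miss + beta * p_fa)
        return 1.0 - sum(penalties) / len(penalties)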

Kevin Mangold was recently selected to participate in the 2014 International Electrotechnical Commission (IEC) Young Professionals Program. The three young professionals selected to represent the United States will attend the IEC 2014 General Meeting to be held in November 2014, in Tokyo, Japan, and participate in the workshop on IEC standardization strategies and conformity assessment.

Human Engineering Design Criteria Standards Part 1: Project Introduction and Existing Standards

By Susanne Furman, Mary Theofanos, and Hannah Wald
NISTIR 7889
April 2014

The Department of Homeland Security (DHS) requires general human systems integration (HSI) criteria for the design and development of human-machine interfaces for their technology, systems, equipment, and facilities. The goal of the DHS Human Systems Engineering Project was to identify, develop, and apply a standard process to enhance technology and system design, system safety, and operational efficiency. The project manager partnered with ITL's Visualization and Usability Group to advance this effort. The goal of this phase of the project was to identify and review the body of existing human factors and HSI standards, best practices, and guidelines in order to map these to potential DHS needs, technology, and processes.

Exploring the Methodology and Utility of Standardized Latent Fingerprint Matcher Scoring

By Vladimir N. Dvornychenko and George W. Quinn
NISTIR 7992
March 2014

Automated searches of fingerprints against a repository or database are important tools of the forensic community. Systems performing these searches are referred to as Automated Fingerprint Identification Systems (AFISs). The output of an AFIS is a fairly small set of prospective candidates with attendant matching scores. These scores indicate how likely it is that a particular candidate is a true mate of the search fingerprint. One difficulty in interpreting matching scores in practice is that there is no accepted standard for their range and exact meaning. This report proposes a standardization of the scoring system.
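
As a generic illustration of what standardizing matcher scores can look like (not the specific proposal in NISTIR 7992), the sketch below maps a raw AFIS score onto a fixed 0-100 scale by expressing it as a percentile of a reference non-mate score distribution. The mapping and function name are assumptions for illustration only.

    import numpy as np

    def standardize_score(raw_score, nonmate_scores):
        # Express a raw matcher score as a percentile of a reference non-mate
        # distribution, on a 0-100 scale. Illustrative mapping only.
        nonmate_scores = np.sort(np.asarray(nonmate_scores))
        rank = np.searchsorted(nonmate_scores, raw_score, side="right")
        return 100.0 * rank / len(nonmate_scores)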

Identifying Face Quality and Factor Measures for Video

By Yooyoung Lee, P. Jonathon Phillips, James J. Filliben, J. Ross Beveridge, and Hao Zhang
NISTIR 8004
May 2014

This paper identifies important factors for face recognition algorithm performance in video. The goal of this study is to understand key factors that affect algorithm performance and to characterize that performance. We evaluate four factor metrics for a single video as well as two comparative metrics for pairs of videos. The study investigated the effect of nine factors on three algorithms using the Point-and-Shoot Challenge (PaSC) video dataset.

MAY – JUNE 2014

Examination of the Impact of Fingerprint Spatial Area Loss on Matcher Performance in Various Mobile Identification Scenarios

By Shahram Orandi, Kenneth Ko, Stephen S. Wood, John D. Grantham, and Michael Garris
NISTIR 7950
March 2014

NIST conducted a study of the FBI Repository for Individuals of Special Concern (RISC) system using various gallery and Mobile ID [MOBID] acquisition profile combinations to examine the performance characteristics of the various profiles in terms of matching effectiveness and throughput. Results of the study showed that the predominant RISC operational case of Mobile ID FAP10 (fingerprint acquisition profile 10), using the left and right index fingers, is at a marked disadvantage in terms of matcher performance compared to the larger FAP20 and FAP30 cases using the same fingers. The study concludes that system false non-identification rates suffer a significant performance penalty in the typical operational case of FAP10 two-index-finger (fingers 2 and 7) capture.

Towards NFIQ II Lite: Self-Organizing Maps for Fingerprint Image Quality Assessment

By Elham Tabassi
NISTIR 7973
February 2014

Fingerprint quality assessment is a crucial task that needs to be conducted accurately at various phases of the biometric enrollment and recognition processes. We propose a computationally efficient means of predicting biometric performance based on a combination of unsupervised and supervised machine learning techniques. We train a self-organizing map (SOM) to cluster blocks of fingerprint images based on their spatial information content. The output of the SOM is a high-level representation of the finger image, which forms the input to a random forest trained to learn the relationship between the SOM output and biometric performance. The quantitative evaluation demonstrates that our proposed quality assessment algorithm is a reasonable predictor of performance.
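
A minimal sketch of the pipeline the abstract describes: cluster image blocks with a small self-organizing map, summarize each fingerprint as a histogram of block-to-node assignments, and feed that histogram to a random forest trained against a performance-related target. Block handling, map size, and the regression target are assumptions here, not the NFIQ II Lite configuration.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def train_som(blocks, n_nodes=16, epochs=10, lr=0.5, sigma=2.0, seed=0):
        # Tiny 1-D self-organizing map over flattened image blocks (illustrative sizes).
        rng = np.random.default_rng(seed)
        nodes = rng.normal(size=(n_nodes, blocks.shape[1]))
        grid = np.arange(n_nodes)
        for epoch in range(epochs):
            frac = 1.0 - epoch / epochs
            alpha, width = lr * frac, max(sigma * frac, 0.5)
            for x in rng.permutation(blocks):
                winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
                pull = np.exp(-((grid - winner) ** 2) / (2 * width ** 2))  # neighborhood function
                nodes += alpha * pull[:, None] * (x - nodes)
        return nodes

    def som_histogram(blocks, nodes):
        # High-level image representation: how often each SOM node "wins" over the blocks.
        winners = np.argmin(np.linalg.norm(blocks[:, None, :] - nodes[None, :, :], axis=2), axis=1)
        return np.bincount(winners, minlength=len(nodes)) / len(blocks)

    # Illustrative use: `block_sets` holds one block matrix per fingerprint and
    # `performance` a per-image performance-related target (both placeholders).
    # nodes = train_som(np.vstack(block_sets))
    # features = np.array([som_histogram(b, nodes) for b in block_sets])
    # model = RandomForestRegressor(n_estimators=100).fit(features, performance)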

IREX IV: Part 2, Compression Profiles for Iris Image Compression

By George W. Quinn, Patrick J. Grother, and Meilee L. Ngan
NISTIR 7978
January 2014

The IREX IV evaluation builds upon IREX III as a performance test of one-to-many iris recognition. This report is the second part of the IREX IV evaluation, which specifically evaluates the ability of automated iris recognition algorithms to match heavily compressed standard iris images and determines the optimal set of compression parameters for JP2 compression and the maximum capabilities for one-to-many matching.
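
For readers who want to experiment with the kind of compression the report studies, the sketch below compresses an iris image with JPEG 2000 at an approximate target compression ratio using Pillow. The parameter names reflect my understanding of Pillow's JP2 encoder options and the example ratio is arbitrary; neither represents the compression profiles recommended by IREX IV.

    from PIL import Image

    def compress_iris(path_in, path_out, ratio=20):
        # Compress a grayscale iris image with JPEG 2000 at roughly `ratio`:1.
        # Option names follow my understanding of Pillow's JP2 encoder; verify
        # against the Pillow documentation before relying on them.
        img = Image.open(path_in).convert("L")      # standard iris images are grayscale
        img.save(path_out, "JPEG2000",
                 quality_mode="rates",              # interpret quality_layers as compression ratios
                 quality_layers=[ratio],
                 irreversible=True)                 # lossy (9/7) wavelet transform

    # e.g. compress_iris("iris.png", "iris_20to1.jp2", ratio=20)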

Report: Authentication Diary Study

By Michelle P. Steves
NISTIR 7983
February 2014

Users have developed various coping strategies for minimizing or avoiding the friction and burden associated with managing and using their portfolios of user IDs and passwords or personal identification numbers (PINs). Many try to use the same password (or different versions of the same password) across different systems. Others use memory aids or technological assistants such as password management software. We were interested in these coping strategies and the friction points that prompt people to use them. More broadly, we wanted to address a pressing research need by gathering data for user-centered models of how people interact with security as part of their daily life, as empirical research in that area is currently lacking.

Integrating Electronic Health Records into Clinical Workflow: An Application of Human Factors Modeling Methods to Ambulatory Care

By Svetlana Lowry, Mala Ramaiah, E.S. Patterson, D. Brick, A.P. Gurses, A. Ozok, and M.C. Gibbons
NISTIR 7988
March 2014

The recommendations in this report provide a first step in moving from a billing-centered perspective (i.e., focusing on ensuring maximum and timely reimbursement) to a clinician-centered perspective where the electronic health record design supports clinical cognitive work. These recommendations point the way towards a "patient visit management system," which incorporates broader notions of supporting workload management, supporting the flexible flow of patients and tasks, and preventing common workarounds.

United States Federal Employees' Password Management Behaviors – a Department of Commerce Case Study

By Yee-Yin Choong, Mary Theofanos, and Hung-Kung Liu
NISTIR 7991
April 2014

We designed an on-line survey to collect data on end-users' password management and their attitudes toward computer security in a government work environment. This paper focuses on the data collected from employees of the Bureaus of the U.S. Department of Commerce between June 2010 and June 2011.

MARCH – APRIL 2014

ITL's Text REtrieval Conference Supports the Information Retrieval Research Community

ITL recently sponsored the 22nd Text REtrieval Conference (TREC) at the NIST Gaithersburg campus. ITL founded and directs the international TREC project, an effort that develops the infrastructure required to measure the effectiveness of information retrieval systems, e.g., search engines. Each TREC is organized around a set of focus areas called tracks. TREC participants use their own search engines and a common data set to perform a track's task. They submit their results to ITL researchers, who use the combined result sets to build evaluation resources that are then used to score each participant's submission. These resources are eventually made publicly available through the TREC website to support the larger retrieval research community. TREC 2013 contained eight tracks and received search result submissions from 60 research groups in 21 countries. The 2013 tracks investigated several topics including best practices in crowdsourcing for the development of search evaluation resources, the real-time nature of search in "microblogs" (e.g., Twitter tweets), and diversifying result sets in web search. Two of the tracks were new to TREC 2013. The Federated Web Search track investigates techniques for metasearching: selecting which sites to search and combining result sets to form a single coherent response from among a large set of independent search verticals. The Temporal Summarization track looks to develop systems that allow users to efficiently monitor the information associated with an event such as a natural disaster in real time. Proceedings of TREC 2013 will be posted on the TREC website. http://trec.nist.gov/
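
The evaluation resources mentioned above are, concretely, relevance judgments that can be used to score any ranked result list for a topic. As a small illustration of that scoring step (not the official trec_eval implementation), the sketch below computes average precision for one topic from a system's ranked list and the set of documents judged relevant.

    def average_precision(ranked_docs, relevant):
        # Mean of the precision values at the ranks where relevant documents are
        # retrieved. `ranked_docs` is the system's ranked list of document IDs;
        # `relevant` is the set of IDs judged relevant. Illustrative only.
        if not relevant:
            return 0.0
        hits, precision_sum = 0, 0.0
        for rank, doc in enumerate(ranked_docs, start=1):
            if doc in relevant:
                hits += 1
                precision_sum += hits / rank
        return precision_sum / len(relevant)

    # e.g. average_precision(["d3", "d7", "d1"], {"d3", "d1", "d9"}) -> (1/1 + 2/3) / 3 ≈ 0.56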

Jonathon Phillips, Information Access Division, received the inaugural Mark Everingham Prize from the Institute of Electrical and Electronics Engineers (IEEE) Pattern Analysis and Machine Intelligence (PAMI) Technical Committee. The prize recognized Phillips for his work on a series of datasets and challenges starting with the Face Recognition Technology (FERET) evaluations in the 1990s, the Face Recognition Grand Challenge in 2004-2005, and Face Recognition Vendor Tests in 2000, 2002, and 2006. These efforts were significant because they established the challenge paradigm as a key method to facilitate the development of new and improved algorithms in the computer vision and pattern recognition community.

Compression Guidance for 1000 ppi Friction Ridge Imagery

By Shahram Orandi, John Libert, John Grantham, Kenneth Ko, Stephen Wood, Frederick Byers, Bruce Bandini, Stephen Harvey, and Michael Garris
NIST Special Publication 500-289
February 2014

The criminal justice community has traditionally captured, processed, stored, and exchanged friction ridge imagery data at 500 ppi in the course of its operations. Modern biometric systems are trending towards operation on fingerprint images at 1000 ppi. This transition to 1000 ppi friction ridge imagery offers many benefits, notably greater fidelity to the original sample and better representation of Level 3 features. Both of these benefits are favorable since they may increase the probability of establishing a match/non-match decision by expert examiners or automated fingerprint matchers. The JPEG 2000 compression standard offers much flexibility in the types of images it can operate on as well as the way images can be compressed and encoded. This flexibility makes it a suitable compression algorithm for friction ridge imagery. A need exists for normative guidance that establishes a set of protocols for the compression of images by stakeholders. Adherence to this normative guidance provides assurances of compatibility between those stakeholders. This publication provides normative guidance for compression of grayscale friction ridge imagery at 1000 ppi.

A Spectral Analytic Method for Fingerprint Image Sample Rate Estimates

By John M. Libert, Shahram Orandi, John Grantham, and Michael Garris
NISTIR 7968
March 2014

This study examines the use of the NIST Spectral Image Validation and Verification (SIVV) metric for detecting the sample rate of a given fingerprint digital image. SIVV operates by reducing an input image to a 1-dimensional power spectrum that makes explicit the characteristic ridge structure of the fingerprint, which on a global basis differentiates it from most other images. The magnitude of the distinctive spectral feature, which is related directly to the distinctness of the level 1 ridge detail, provides a primary diagnostic indicator of the presence of a fingerprint image. The location of the detected peak corresponding to the level 1 ridge detail can be used to estimate the original sampling frequency of the image, by comparing the peak's known behavior at reference sampling frequencies with its calculated shift in an image of unknown sampling rate. A statistical model is fit to frequency measurements of a sample of images scanned at various sample rates from 10-print fingerprint cards, such that the model parameters can be applied to the SIVV frequency values of a digital fingerprint of unknown sample rate to estimate that rate. Uncertainty analysis is used to compute 95 % confidence intervals for predictions of sample rate from frequency. The model is tested against sets of cardscan and livescan images.
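
A rough sketch of the estimation idea: collapse the image's 2-D power spectrum to a 1-D radial profile, locate the ridge-frequency peak, and map the peak location through a calibration fit to predict sample rate. The radial reduction, the peak search window, and the linear calibration below are simplifying assumptions, not the SIVV implementation or the statistical model of NISTIR 7968.

    import numpy as np

    def radial_spectrum(img):
        # Collapse the 2-D power spectrum of the image into a 1-D radial profile.
        f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
        power = np.abs(f) ** 2
        cy, cx = np.array(power.shape) // 2
        y, x = np.indices(power.shape)
        r = np.hypot(y - cy, x - cx).astype(int)
        counts = np.bincount(r.ravel())
        return np.bincount(r.ravel(), weights=power.ravel()) / np.maximum(counts, 1)

    def ridge_peak_index(img, lo=10, hi=None):
        # Locate the dominant ridge-frequency peak; the search window [lo, hi) is an assumption.
        profile = radial_spectrum(img)
        hi = hi or len(profile) // 2
        return lo + int(np.argmax(profile[lo:hi]))

    def estimate_sample_rate(peak_index, slope, intercept):
        # Map peak location to an estimated sample rate through a calibration line
        # fitted beforehand on images of known sample rate (coefficients are placeholders).
        return slope * peak_index + intercept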

JANUARY – FEBRUARY 2014

AWARD WINNERS:

  • Brian Antonishek, Athanasios Karygiannis, Stephen Quirolgico, and Jeffrey Voas – Gold Medal for developing innovative techniques to secure and measure the performance of smartphones and applications.
  • Audrey Tong – Silver Medal for developing a novel evaluation framework/metrology and data supporting significant advances in automatic translation of foreign language handwriting.
  • FierceGovernmentIT announced its Second Annual Fierce 15 Winners, and the list of 15 creative and innovative federal employees included one ITL/IAD leader: Patrick Grother, Biometric Testing Project Leader in ITL's Information Access Division.

    FierceGovernmentIT is a trusted and growing online source for in-depth federal IT reporting. The benchmark for inclusion in the Fierce 15 list is high; the publication recognizes federal employees who have demonstrated groundbreaking creativity and innovation.

Effects of JPEG 2000 Lossy Image Compression on 1000 ppi Latent Fingerprint Casework

By Shahram Orandi, John M. Libert, John D. Grantham, Frederick R. Byers, Lindsay M. Petersen, and Michael D. Garris
NISTIR 7780 Rev. 1
October 2013

This paper presents the findings of a study conducted to measure the impact of JPEG 2000 lossy compression on the comparison of 1000 ppi latent fingerprint imagery and 1000 ppi exemplar fingerprint imagery. Combinations of image pairs that vary by the compression rate applied to one of the images in the pair are observed and analyzed. The impact of lossy compression on both Galton and non-Galton-based features of a fingerprint is measured by the professional judgment of expert fingerprint examiners. The impact of compression is analyzed by quantifying multiple decisions relative to different levels of loss incurred during image compression. In addition to measuring the perceived visual impact of compression on the aforementioned features of the fingerprint, the paper also looks at the impact of lossy compression on the examiners' ability to render correct identification decisions.

Fingerprint Scanner Affordances

By Michelle Steves, Brian Stanton, Mary Theofanos, Dana Chisnell, and Hannah Wald
NISTIR 7944
September 2013

This study examines the light emitting diode (LED) indicators and instructional icons on the fingerprint scanner type in place at U.S. ports of entry (at the time of the study) to learn whether people interpret these features as intended, whether the features guide them through the fingerprint collection process, and whether they can present usable fingerprint samples to the scanner without assistance. The results from this study will be used to inform the development of a "self-service" fingerprinting solution.
