Examination of Downsampling Strategies for Converting 1000 ppi Fingerprint Imagery to 500 ppi
Shahram Orandi, John M. Libert, John D. Grantham, Margaret Lepley, Bruce Bandini, Kenneth Ko, Lindsay M. Petersen, Stephen S. Wood, Stephen G. Harvey
Currently, the bulk of operational fingerprint data is captured, processed, and stored at 500 ppi in the WSQ compressed digital format. With the transition to 1000 ppi, some systems will unavoidably contain an overlap between 500 ppi and 1000 ppi operational pathways, whether because of legacy infrastructure limitations or other financial or logistical constraints. Additionally, newly collected 1000 ppi images will still need to be compared against legacy 500 ppi images, in both one-to-one and one-to-many scenarios. Bridging legacy and modern data therefore requires a pathway for interoperability that places both on equal footing by converting one image to the resolution of the other; downsampling the higher resolution 1000 ppi imagery to 500 ppi provides this pathway. This study compares several computational methods for downsampling modern fingerprint images from 1000 ppi to 500 ppi: pixel averaging, decimation, transcoding directly from the JPEG2000 codestream, Gaussian filtering, and spectral truncation. The fidelity of downsampled 1000 ppi images relative to corresponding prints scanned natively at 500 ppi is evaluated via scoring by expert fingerprint examiners, an automated machine matcher, RMS differences, and correlation of spectra. Gaussian low-pass filtering (σ = 0.8475, r = 4) combined with decimation emerged as the optimal downsampling strategy with respect to image fidelity under several different ranking strategies. Transcode methods that selectively decode JPEG2000 images, discarding the highest resolution decomposition level, yielded 500 ppi images of less than optimal quality but ran considerably faster owing to the greater computational efficiency of such transcode algorithms, making them attractive where sheer computational throughput outweighs image fidelity.
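The Gaussian-plus-decimation strategy identified above can be sketched as follows. This is a minimal illustration, not the study's implementation: the function name is hypothetical, the input is assumed to be a grayscale NumPy array, and `scipy.ndimage.gaussian_filter` is used as a stand-in low-pass filter. The σ = 0.8475 value and the 4-pixel kernel radius are the parameters quoted in the abstract; the 2:1 decimation reflects the 1000 ppi to 500 ppi conversion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def downsample_1000_to_500(img, sigma=0.8475, radius=4):
    """Downsample a 1000 ppi fingerprint image to 500 ppi (illustrative sketch).

    Applies a Gaussian low-pass filter (sigma = 0.8475, kernel extending
    `radius` = 4 pixels from center) to suppress frequencies above the
    new Nyquist limit, then decimates by keeping every other pixel.
    """
    # `truncate` is specified in multiples of sigma, so radius / sigma
    # yields a kernel that extends `radius` pixels from the center.
    smoothed = gaussian_filter(img.astype(np.float64), sigma=sigma,
                               truncate=radius / sigma)
    # Decimation: retain every second row and column (2:1 in each axis).
    return smoothed[::2, ::2]
```

Filtering before decimation is what distinguishes this method from plain decimation, which aliases ridge detail above the 500 ppi Nyquist frequency back into the output.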