For segmentation results with an unknown finger number (position), were these results automatically counted as no match? For the tables in the new addendum report, is it possible to break down each category (High, Marginal) into the percentage of unknowns and the percentage of wrong matches? This is very important for our real system design.
Additional tables showing the effect of finger position identification are available in an Addendum to the SlapSeg04 Report here (PDF).
Did you evaluate the relationship between "Segmentation Quality" and the rate of three matchable fingers? If so, could you share such results with us?
Additional charts showing the ability to detect segmentation failures for three highly matchable fingers per slap are available in an Addendum to the SlapSeg04 Report here (PDF).
The information in Figure B-8 is very important when we design systems using slap fingerprint images, especially for slap images with 3 matchable fingers or 2 matchable fingers. Could I request the raw data for Figure B-8?
The tables associated with Figure B-8 are available in an Addendum to the SlapSeg04 Report here (PDF).
Can we obtain the fingerprint databases used in the evaluation?
Unfortunately, no. NIST does not own or have the right to define who has access to the data. NIST is authorized to use the data with restrictions defined by the agencies that provided the data, notably that the fingerprints are considered SBU (Sensitive But Unclassified) data, and cannot be distributed.
If acceptable, we would like to resubmit our segmentation SDK with a new Segmentation Quality measure that includes the quality of the output finger images. We would appreciate it if you could reevaluate "Ability to Detect Problems" and share the results with us.
On the paper source images, will the rolled fingers, which are normally above the slap boxes but sometimes overlap the edges of the slap boxes, be totally eliminated or at least minimized?
If other fingerprints (such as rolls or plain thumbs) overlapped the margins of the slap image on the paper card, they will be included in the slap images used in the paper source evaluation data. On the Issues web page, see the "Thumb overlapping edge of image" example.
Will there be any laser-printed cards in the paper source testing set?
The overwhelming majority of the images labeled "paper" will be scanned from inked paper cards. However, in a few cases, agencies have taken livescan images, printed them onto paper fingerprint cards, and those cards were treated as if they were inked cards and rescanned. This process is not recommended, but it does occur in some operational systems. These rescanned cards are not differentiated from cards from inked sources in the operational databases. Since the SlapSeg04 evaluation data reflects the actual contents of the operational data, some small portion of the "paper" evaluation data in SlapSeg04 will be rescanned.
How is SlapSeg04 related to NIST's role under the PATRIOT Act?
SlapSeg04 will serve as part of NIST's statutory mandate under section 403c of the USA PATRIOT Act to certify those biometric technologies that may be used in U.S. VISIT.
The NIST WSQ code (with minor modification) requires almost 1.7 seconds to decompress a slap image on a 1 GHz CPU, while [a commercial vendor's] WSQ library requires 0.4 seconds. Although we would like to use the NIST WSQ code, this time difference cannot be disregarded if total processing time is part of the SlapSeg04 evaluation. I understand that SlapSeg04 does not intend to evaluate WSQ performance. Is it possible for you to decompress the test images and feed the uncompressed images to the segmenter? This preparation (WSQ decompression) would shorten the total evaluation time, because all participants would save the WSQ decompression time.
Are the examples in the Issues and Examples documents selected from the evaluation data? From the sample data?
In the sample data, and in the Issues and Examples documents:
- The paper slaps are from SD29 (AKA "Practice data"), which is reasonably representative of the paper evaluation data. SD29 will not be used as evaluation data.
- The livescan slaps are taken from volunteers, and attempt as much as possible to mimic characteristics seen in the operational/evaluation data. This is because all of the evaluation slaps we have are considered Sensitive data, and cannot be distributed.
Will there be test images in which the upper part of the palm (interdigital area) is visible on the slap image?
The slap images come from and are representative of operational government sources. The interdigital area is unlikely to appear except possibly in livescan images using a large (4 inch) platen. Most of the evaluation images will be no more than 2 inches tall. The Sample Dataset does not include any images of that type.
What assumptions can be made regarding the orientation of the slap? For example, +/-30° from vertical for most of the test images but can be up to +/-45° for some images.
The slap images come from and are representative of operational government sources. The orientation of slap images is affected by the dimensions of the rectangular space provided for image capture. These dimensions vary from source to source. Most slap images (except possibly those from large-platen livescans) are rotated. The amount of rotation varies, but typically averages 20-25 degrees. Fingers from the left hand are usually rotated clockwise, and those from the right hand are usually rotated counterclockwise. Few images are rotated more than 45 degrees, but this does occur. Please see the Issues page for more discussion of this.
What assumptions can be made regarding the rotation tolerance of the matching algorithm that will be used to test the segmentation?
The analysis methodology is being designed so that whether or not output images are rotated will not affect evaluation. Participants may assume that the matchers are tolerant of rotations. Segmenters may produce output images that preserve the original orientation, or they may rotate the images to the upright position. Multiple matchers will be used for scoring.
If the test image is identified with an unknown ("U") hand identifier, should the finger_pos code be set to unknown ("0") or is it expected that the application will attempt to "guess" as to the hand and therefore the finger position?
A segmenter capable of detecting switched hands has obvious operational advantages. One measure of interest is how often a segmenter can correctly segment all fingers in a slap from an unknown hand and correctly identify their position. An application should attempt to identify the finger positions if it is accurate at this task.
When the hand position is not specified, is there an advantage for a segmenter to specify the finger positions?
Part of the purpose of the subtest in which hands are not identified is to determine if segmenters can correctly identify which hand the image came from. Segmenters identify which hand the image came from by specifying the finger positions, if they can do so accurately.
How will the finger_pos values be evaluated and scored? In other words is it better for the algorithm to "guess" the correct finger position or to be more conservative and mark a finger image as unknown ("00") in the case that it is not clear?
An application should attempt to identify the finger positions if it can do so accurately. If finger positions cannot be accurately identified, the finger positions should be marked as unknown. In other words, correctly identifying the finger positions is better than leaving them unknown, but incorrectly identifying the finger positions is worse than leaving them unknown. Segmenters will be evaluated on this information.
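The scoring tradeoff above can be sketched as a simple decision rule. This is a hypothetical illustration, not part of the SlapSeg04 API: the confidence measure, threshold, and function names are all assumptions, and the two-digit finger position codes follow the ANSI/NIST convention (e.g., "02" for right index, "00" for unknown).

```python
# Hypothetical sketch: report a finger position only when the segmenter's
# own internal confidence is high; otherwise mark it unknown ("00").
# The confidence scale and threshold are illustrative assumptions,
# not defined by the SlapSeg04 API specification.

UNKNOWN = "00"

def label_finger(candidate_pos: str, confidence: float, threshold: float = 0.9) -> str:
    """Return the finger_pos code to report for one segmented finger."""
    if confidence >= threshold:
        # A correct identification scores better than unknown.
        return candidate_pos
    # Unknown scores better than a wrong identification.
    return UNKNOWN

# A confident right index finger (ANSI/NIST code "02") vs. an uncertain finger:
print(label_finger("02", 0.97))  # 02
print(label_finger("07", 0.40))  # 00
```

The threshold should be calibrated so that positions are reported only when the expected gain from correct identifications outweighs the penalty for occasional wrong ones.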
When the hand position is specified, should the segmenter look for switched hands or assume that the specification was correct?
Segmenters may assume that hand position is correctly specified. Only when hand position is specified as "Unknown" should segmenters make a left/right determination.
Will segmenters be evaluated on the accuracy of the ORIGINAL_ROTATION value in the meta-information files? If not, why is this information requested?
Segmenters will NOT be evaluated on this information. We do not plan to manually verify rotational information over a large dataset. However, if enough segmenters provide the ORIGINAL_ROTATION value, consensus and selective manual verification will make it possible to report on the distribution of rotation angles in the data, and to analyze the effect of rotation on segmentation accuracy. Please provide rotation information if it is calculated by your application.
The API Specification states, "If the output images are rotated relative to the input, OUTPUT_ROTATED should be set to TRUE." Is this required or optional?
If the output images are rotated relative to the input (e.g. the original slap was rotated 45 degrees, but the output fingerprints are rotated to be upright), the application should set OUTPUT_ROTATED to TRUE.
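As a concrete sketch, a segmenter might record these two fields in its meta-information output as follows. The KEY=VALUE layout and file name here are illustrative assumptions; the actual meta-information file format is defined by the SlapSeg04 API specification.

```python
# Illustrative sketch only: the real meta-information format is defined by
# the SlapSeg04 API spec, not by this code.

def write_meta(path: str, original_rotation_deg: int, output_rotated: bool) -> None:
    """Write rotation-related meta-information as KEY=VALUE lines."""
    with open(path, "w") as f:
        f.write(f"ORIGINAL_ROTATION={original_rotation_deg}\n")
        f.write(f"OUTPUT_ROTATED={'TRUE' if output_rotated else 'FALSE'}\n")

# Example: the original slap was rotated 45 degrees and the output
# fingerprints were rotated to upright, so OUTPUT_ROTATED is TRUE.
write_meta("slap001.meta", 45, True)
```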
Could you tell what proportion of the slap images have a known hand type and what proportion have an unknown hand type?
The vast majority of the evaluation data (95%+) will have the hands identified. A smaller subtest, which will be run using livescan slaps, will not differentiate between right and left hands.
The Test Plan says, "The ability to correctly identify finger positions, especially when hand positions are unidentified, may also be evaluated." I assume this applies only to slap images containing four fingers (or fewer), mostly from livescan, and not to slap images with "extra fingers" from paper cards. I want to confirm this because hand-type (left or right) recognition becomes more difficult (or less accurate) and more time consuming if we must handle the case of extra fingers from paper cards. I understand the need for left/right hand recognition in US-VISIT when four-finger slap scanning is introduced at ports of entry: illegal immigrants or terrorists may try to scan the wrong hand, and a good system needs the ability to detect such attempts. However, I do not think this capability is necessary for batch segmentation of large databases of paper slap fingerprints. Is my assumption correct?
Swapped hands are a problem (as you mention) with uncooperative subjects. This was true to a limited extent in the past even with paper cards. However, in cases in which rolled and slap fingerprints are collected at the same time, a sequence checking step can be used to verify that the slaps were not swapped at the same time the sequence of the rolls is verified. The bulk of the test will focus on the accuracy of segmentation applications when the hands are identified, but unidentified hand tests will be conducted using livescan slaps.
Is there a fee to participate in SlapSeg04?
There is no participation fee.
Can non-US companies participate?
Can we participate anonymously?
Do you evaluate based on time to complete as well as segmentation accuracy?
Processing speed will be noted but will not be a primary evaluation criterion.
What's the sampling resolution of the images?
All images are 500 pixels per inch, 8-bit grayscale images.
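At 500 ppi with one byte per pixel, pixel dimensions map directly to physical size and buffer size. The sketch below is illustrative only: the evaluation images are distributed WSQ-compressed, so this assumes an already-decompressed raw buffer.

```python
# Illustrative sketch: relate pixel dimensions to physical size at 500 ppi.
# Assumes a headerless, already-decompressed buffer of width*height bytes
# (8-bit grayscale); actual evaluation images are WSQ-compressed.

PPI = 500  # sampling resolution of all SlapSeg04 images

def image_size_inches(width_px: int, height_px: int) -> tuple:
    """Convert pixel dimensions to physical inches at 500 ppi."""
    return (width_px / PPI, height_px / PPI)

# A 1600 x 1000 pixel slap occupies exactly one byte per pixel when
# decompressed, and measures 3.2 x 2.0 inches.
buf = bytes(1600 * 1000)
assert len(buf) == 1600 * 1000
print(image_size_inches(1600, 1000))  # (3.2, 2.0)
```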
Do the images scanned from inked paper cards contain extraneous text, lines, or other marks?
Are there any publicly-available or open-source examples of slap segmentation software?
A reference implementation of slap segmentation software will be included in the next update of NIST Fingerprint Image Software (NFIS). Note that this software uses a slightly different API than is defined for SlapSeg04. This can be made available to registered participants upon request.
We plan to use the WSQ code in the NIST public domain for the SlapSeg04 test. This WSQ code does not seem to have a copyright notice. I understand it is OK to use it for the SlapSeg04 test. Is it OK to redistribute this WSQ code for other purposes under the same conditions as the copyright of the JPEG code?
The NIST WSQ code has no copyright restrictions of any kind and may be freely distributed.