Comparison of Confidence Intervals for Large Operational Biometric Data by Parametric and Non-parametric Methods
Su L. Cheng, Ross J. Micheals, John Lu
Receiver operating characteristic (ROC) and Detection Error Trade-off (DET) curves are used to measure the performance of a biometric verification or identification system. To go beyond the ROC/DET and enhance the evaluation of a verification system, we compute confidence intervals for the False Accept Rate (FAR) and False Reject Rate (FRR). In this paper, we validate the accuracy of variance estimators by comparing them to the variance computed over repeated experiments. We calculate confidence intervals for the error rates using both parametric and non-parametric methods. For the parametric approach, we calculate the confidence interval based on variance estimates from the survey sampling variance approach and from the binomial distribution model approach. For the non-parametric approach, we use the bootstrap method to compute the confidence intervals directly. Two different datasets and several authentication systems are tested in the evaluation process. We then report the confidence intervals from all three approaches at different sample sizes. We find no significant difference among the confidence intervals computed by the three methods. However, for very large data sets, the binomial model approach is the most computationally efficient of the three, and we argue that it can in theory be readily extended to evaluation problems with smaller data sets or extremely low error rates. To enhance readability, we use the familiar terms FAR and FRR rather than the more formal equivalent terms False Match Rate (FMR) and False Non-Match Rate (FNMR). Because we address the operation of the matcher rather than that of the complete verification or identification system, we do not include other types of system errors such as Failure To Acquire (FTA) or Failure To Enroll (FTE).
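The contrast between the two families of confidence intervals discussed above can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: it computes a normal-approximation interval under a binomial model for the error rate, and a percentile bootstrap interval from resampled trial outcomes. The sample counts (120 false accepts in 10,000 impostor comparisons) are hypothetical, chosen only for illustration.

```python
import math
import random

def binomial_ci(errors, n, z=1.96):
    """Normal-approximation 95% CI for an error rate under a binomial model."""
    p = errors / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

def bootstrap_ci(outcomes, n_boot=2000, alpha=0.05, seed=0):
    """Non-parametric percentile bootstrap CI for an error rate.

    Each bootstrap replicate resamples the 0/1 trial outcomes with
    replacement and recomputes the error rate; the CI endpoints are
    percentiles of the resulting distribution.
    """
    rng = random.Random(seed)
    n = len(outcomes)
    rates = sorted(sum(rng.choices(outcomes, k=n)) / n for _ in range(n_boot))
    lo = rates[int((alpha / 2) * n_boot)]
    hi = rates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical data: 120 false accepts out of 10,000 impostor comparisons.
outcomes = [1] * 120 + [0] * (10_000 - 120)
print("binomial model CI:", binomial_ci(120, 10_000))
print("bootstrap CI:     ", bootstrap_ci(outcomes))
```

Note the efficiency difference the abstract refers to: the binomial-model interval needs only the error count and trial count, while the bootstrap must resample and rescore the full set of outcomes many times, which grows costly for very large operational datasets.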