This paper describes aspects of work done at NIST, in concert with the latent fingerprint community, toward achieving a partial lights-out latent fingerprint processing capability. The initial steps are to quantify performance over a cross-section of latent fingerprint matchers using representative latent fingerprints, giving a clearer picture of the state of the art and of where the weaknesses lie. Analytic tools for modeling latent system performance are presented, covering the selection of test dataset size (both search and background), methods of projecting performance to a "fully populated" system, and modeling errors. It is shown that when modeling errors are taken into account, "bigger is better" may not always apply to dataset size.