With the recent signing of the Mutual Recognition Arrangement (MRA), National Metrology Institutes (NMIs) and Regional Metrology Organizations (RMOs) around the world have committed themselves to establishing the equivalence of their measurement standards. Currently, however, there is no wide agreement on the best statistical procedures for analyzing the international interlaboratory experiments, called Key Comparisons, that are used to establish such equivalence. To help build consensus on the best statistical procedures for this work, we propose to develop a unified statistical framework and detailed guidance for the Key Comparison process in collaboration with our colleagues at NIST and other NMIs.
Key Comparison testing is at its core a statistical process: data are collected, statistically analyzed, and the degrees of equivalence between the participating laboratories are estimated. For maximal effectiveness and efficiency, however, we believe that the data collection phase needs a statistically sound experimental design. This includes decisions about the number of transfer standards and the pattern of the comparison (i.e., how often the transfer standards travel back to the coordinating laboratory for monitoring). It further includes determination of the number of repetitions for each measurement at each lab. In complex experiments, it should also include the sequence of the experimental runs and the randomization of the experimental units at each lab. The analysis phase of the Key Comparison process deals with the summarization of the NMI measurements and the assessment of the corresponding uncertainty. The basic principles for this are well known and internationally accepted; however, there are several procedural alternatives, and it will be valuable to study these and set out general guidelines for their use. The final phase of the Key Comparison process is the determination and reporting of the degrees of equivalence among the participating labs.
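As an illustration of the analysis phase, the sketch below computes one common (but not the only) summary: an inverse-variance weighted mean of the laboratory results as a candidate reference value, together with unilateral degrees of equivalence. The function name and the example data are illustrative assumptions, not part of this project; the correlation correction for labs that contribute to the reference value is a standard result, but the choice of estimator is exactly the kind of procedural alternative the project proposes to study.

```python
import math

def kcrv_and_degrees_of_equivalence(values, uncs):
    """Candidate Key Comparison Reference Value (KCRV) as the
    inverse-variance weighted mean, with unilateral degrees of
    equivalence d_i = x_i - KCRV for each participating lab.

    values : lab measurement results x_i
    uncs   : standard uncertainties u_i reported by each lab
    """
    weights = [1.0 / u**2 for u in uncs]
    wsum = sum(weights)
    kcrv = sum(w * x for w, x in zip(weights, values)) / wsum
    u_kcrv = math.sqrt(1.0 / wsum)
    # A lab's result is correlated with the KCRV it contributed to,
    # so the variance of d_i is reduced: u(d_i)^2 = u_i^2 - u(KCRV)^2.
    does = [(x - kcrv, math.sqrt(u**2 - u_kcrv**2))
            for x, u in zip(values, uncs)]
    return kcrv, u_kcrv, does

# Hypothetical three-lab comparison:
kcrv, u_kcrv, does = kcrv_and_degrees_of_equivalence(
    [10.1, 9.9, 10.0], [0.1, 0.1, 0.2])
```

With these inputs the weighted mean is 10.0, and the first lab's degree of equivalence is 0.1 with a standard uncertainty slightly below its reported 0.1, reflecting the correlation correction.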
To address the design problems, we propose to draw on the expertise of scientists across NIST to study the issues of the experimental design phase, ultimately identifying a core set of principles against which a proposed experimental design can be judged adequate for its task. In the later stages of our work, we will develop a set of efficient experimental designs and, ultimately, an interactive tool that the coordinating laboratory can use to produce an efficient test protocol. We envision that some of this work will require original research in optimal experimental design as well as computing expertise. To solidify the statistical foundations for analyzing Key Comparison participants' data and establishing equivalence among the participating labs, we propose to study the alternatives, devise new methods where needed, and make statistical best practices available to the international metrological community.
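One small piece of the design phase described above can be sketched in code: generating a circulation schedule in which the transfer standard visits participant labs in randomized order but returns periodically to the coordinating laboratory so that drift in the standard can be monitored. This is a hypothetical illustration of the kind of protocol an interactive design tool might emit; the function name, parameters, and lab labels are assumptions, not an actual NIST design.

```python
import random

def circulation_schedule(pilot, labs, return_every=3, seed=0):
    """Sketch of a circulation pattern for one transfer standard:
    participant labs are visited in randomized order, with the
    standard returning to the coordinating (pilot) lab every
    `return_every` visits for drift monitoring.
    """
    rng = random.Random(seed)  # fixed seed so the protocol is reproducible
    order = labs[:]
    rng.shuffle(order)
    schedule = [pilot]  # the comparison starts at the pilot lab
    for i, lab in enumerate(order, start=1):
        schedule.append(lab)
        # Interleave monitoring visits, but avoid a double visit at the end.
        if i % return_every == 0 and i < len(order):
            schedule.append(pilot)
    schedule.append(pilot)  # final return for closing measurements
    return schedule

# Hypothetical comparison with six participant labs:
schedule = circulation_schedule("Pilot", ["A", "B", "C", "D", "E", "F"])
```

A real design would also have to weigh this circulation pattern against a star pattern (every standard returning after each lab), trading transit time against tighter drift control.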
Meeting the research challenges described above will provide a more detailed understanding of the issues surrounding Key Comparisons, new methods for designing efficient Key Comparison experiments, and appropriate methods for analyzing the resulting data. As a consequence, Key Comparison results will improve. Because scientists across NIST will be directly involved in this project, our results can be rapidly integrated into the institutional culture.
Beyond the statistical research needed for the optimal design and analysis of Key Comparisons, the main deliverable envisioned for this project is newly developed software implementing these statistical methods. We also plan to produce multimedia- or web-based tutorials for NIST staff and others on topics integral to Key Comparisons, such as data reporting and uncertainty computation.
Context of Project
International standards projects like this one typically have fundamental, but difficult to quantify, impacts. In the big picture, with the lowering of trade barriers around the world, this project will facilitate international trade. In a local context, it directly supports NIST's efforts to establish equivalence with other NMIs and RMOs under the MRA.