The Evaluation of the Scoring Systems: The fixed effects model under known variances
Hugo Gasca-Aragon, David L. Duewer
A number of scoring systems for proficiency testing and interlaboratory comparison are in use by the metrology community. The choice of scoring system for a given study is often based on the study coordinator's experience and anecdotal knowledge, perhaps attributable to a historical lack of detailed and formal explanation of the foundation of these systems. This has influenced the development of new scoring systems, some of them departing from well-established hypothesis-testing theory. Different scoring systems often give different results not because one is better than the others but because, as documented, they do not let the user control the confidence level of the test. We present a formal evaluation of seven of these systems under the fixed effects model assuming known variances. Under these assumptions, the systems analyzed all have the same statistical properties. Furthermore, these systems are all members of a family of systems based on strictly increasing functions under which the statistical decision problem is invariant. Under the fixed effects model with known variances, no scoring system can provide greater statistical power than the members of this family. We apply these results to the lead-content-of-water example provided in International Standard ISO 13528:2005(E) "Statistical methods for use in proficiency testing by interlaboratory comparisons."
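The invariance claim can be illustrated with a minimal sketch (the function names and numerical values below are illustrative assumptions, not taken from the paper): under the fixed effects model with known variance, a laboratory's standardized deviation is z = (x - mu) / sigma, and any score s = g(z) built from a strictly increasing odd function g yields the identical accept/reject decision once the cutoff c is transformed to g(c).

```python
import math

def z_score(x, mu, sigma):
    """Standardized deviation under the fixed effects model with known sigma."""
    return (x - mu) / sigma

def g(z):
    """An arbitrary strictly increasing, odd transform (illustrative choice)."""
    return math.tanh(z / 2.0)

def decide(score, threshold):
    """Two-sided decision rule: flag the result when |score| exceeds the cutoff."""
    return "action" if abs(score) > threshold else "acceptable"

# Illustrative reference value, known standard deviation, and z cutoff.
mu, sigma, c = 0.10, 0.02, 2.0
for x in (0.11, 0.16):
    z = z_score(x, mu, sigma)
    # The transformed score with the matched threshold g(c) must agree.
    agrees = decide(z, c) == decide(g(z), g(c))
    print(x, decide(z, c), agrees)
```

Because g is strictly increasing and odd, |g(z)| > g(c) exactly when |z| > c, so the two rules always coincide; this is the sense in which no member of the family can be more powerful than another.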
Gasca-Aragon, H. and Duewer, D.,
The Evaluation of the Scoring Systems: The fixed effects model under known variances, Accreditation and Quality Assurance, [online], https://doi.org/10.1007/s00769-016-1215-y
(Accessed March 4, 2024)