
Evaluating Inter-Laboratory Comparison Data

Published

Author(s)

Enrico Frahm, John D. Wright

Abstract

The primary purpose of inter-laboratory comparisons is to demonstrate that the uncertainty specifications of the calibration and measurement capabilities of the participating laboratories are correct. The most common criterion for assessing a participating laboratory's results is whether the normalized error |En_i| is ≤ 1. Most comparison reports we reviewed properly include uncertainty components related to the transfer standard (u_TS) and the repeatability of the calibrations (u_repeat_i) in the uncertainty of the value reported by a participant. Unfortunately, high values for either u_TS or u_repeat_i decrease |En_i|, making it easier to achieve passing results in a comparison that uses a poor transfer standard or for a participant that delivers unstable measurements. A review of past comparison reports shows that this problem occurs for many measurands, including flow, temperature, and pressure. Improved comparison criteria were proposed by [1] to counteract the flaws of the |En_i| ≤ 1 criterion by introducing the possibility of inconclusive results and a probability-based approach. In this paper, we define the comparison uncertainty u_comp as the root-sum-of-squares of u_TS and u_repeat_i and find it a better tool for assessing the power of the comparison than u_TS alone. We applied the comparison evaluation criteria to recent comparison results to illustrate their benefits over the |En_i| ≤ 1 criterion. In general, the newer criteria confirm prior determinations, but in some cases passing results under the |En_i| ≤ 1 criterion would be found inconclusive.
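The two quantities the abstract turns on can be sketched in a few lines. This is a minimal illustration, not code from the paper: the normalized-error formula below is the conventional one (difference of reported and reference values divided by the combined expanded uncertainty, as in ISO/IEC 17043), and u_comp follows the root-sum-of-squares definition stated in the abstract; the variable names are ours.

```python
from math import hypot

def normalized_error(x_i, x_ref, U_i, U_ref):
    """Conventional normalized error En_i (ISO/IEC 17043 form).

    x_i, x_ref : value reported by the participant and reference value
    U_i, U_ref : corresponding expanded (k = 2) uncertainties
    Passing under the common criterion means |En_i| <= 1.
    """
    return (x_i - x_ref) / hypot(U_i, U_ref)

def comparison_uncertainty(u_TS, u_repeat_i):
    """u_comp: root-sum-of-squares of the transfer-standard uncertainty
    u_TS and the participant's calibration repeatability u_repeat_i,
    as defined in the abstract."""
    return hypot(u_TS, u_repeat_i)
```

Note the failure mode the abstract describes: because U_i grows with u_TS and u_repeat_i, inflating either one shrinks |En_i| and makes the ≤ 1 criterion easier to satisfy.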
Proceedings Title
FLOMEKO 2022
Conference Dates
October 17-21, 2022
Conference Location
Chongqing, CN
Conference Title
FLOMEKO

Keywords

Inter-laboratory comparison, normalized error, inconclusive, probability based criterion

Citation

Frahm, E. and Wright, J. (2022), Evaluating Inter-Laboratory Comparison Data, FLOMEKO 2022, Chongqing, CN, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934985 (Accessed October 6, 2025)


Created November 3, 2022, Updated December 2, 2022