
Analysis of key comparisons

Working Group 8.42

Content

Key comparisons are interlaboratory comparisons carried out regularly between National Metrology Institutes (NMIs) within the framework of the CIPM Mutual Recognition Arrangement (MRA). The MRA has now been signed by more than 98 institutes. Key comparisons enable the mutual recognition of calibrations, measurements, and test certificates of the NMIs and mark a major step in supporting international trade, commerce and regulatory affairs. In order to ensure the compatibility of the measurement capabilities provided by NMIs, the MRA prescribes that key comparisons are carried out on a regular basis. Based on the analysis of the data from a key comparison, the corresponding calibration and measurement capabilities (CMCs) of the NMIs are validated. The final report and the supporting technical data of each key comparison are stored and made publicly available in the key comparison database (KCDB) of the Bureau International des Poids et Mesures (BIPM). Fig. 1 shows a typical example of key comparison data.

Fig. 1: Example data of a key comparison along with the key comparison reference value (KCRV). The blue results indicate control measurements made by the so-called pilot laboratory.

The goal of the analysis of KC data is to assess the results reported by the participating laboratories. According to the MRA, a so-called key comparison reference value (KCRV) is usually calculated. On the basis of the measurement results, including the stated measurement uncertainties, the KCRV is then used to calculate the degrees of equivalence (DoEs) as the differences between the results reported by the laboratories and the KCRV, along with the uncertainties associated with these differences. The DoEs quantify the extent to which the laboratories are compatible, and they can also be viewed as a measure for judging whether the laboratories measure as well as they claim. If a DoE differs significantly from zero, the corresponding laboratory (or its CMC) is considered not to be approved.
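For orientation, when the KCRV is taken to be the inverse-variance weighted mean of uncorrelated results x_j with standard uncertainties u(x_j), the DoEs and their uncertainties take the following standard form (the notation below is chosen only for this illustration):

```latex
x_{\mathrm{ref}} = \frac{\sum_j x_j / u^2(x_j)}{\sum_j 1 / u^2(x_j)}, \qquad
u^2(x_{\mathrm{ref}}) = \frac{1}{\sum_j 1 / u^2(x_j)}, \qquad
d_i = x_i - x_{\mathrm{ref}}, \qquad
u^2(d_i) = u^2(x_i) - u^2(x_{\mathrm{ref}})
```

The minus sign in u²(d_i) reflects the correlation between x_i and the KCRV, since each laboratory's result enters the weighted mean.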

More generally, the analysis of KCs can be seen as a meta-analysis in which the results reported by the participating laboratories are assessed. Methods employed for meta-analyses, such as fixed effects or random effects models, have also been proposed for the analysis of key comparisons. Simpler methods such as the mean, the median or the weighted mean have also been employed for the calculation of a KCRV. Methods that have been applied for the analysis of KCs also include approaches based on the explicit or implicit removal of outliers.
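As a small illustration (a sketch only, not the procedure prescribed by the MRA for any particular comparison; the numbers are made up), the following Python snippet computes an inverse-variance weighted mean as one possible KCRV, two simpler alternatives, and the resulting DoEs with their uncertainties:

```python
import numpy as np

# Hypothetical example data: reported values x_i and standard uncertainties u_i
x = np.array([10.02, 9.97, 10.05, 9.99, 10.10])
u = np.array([0.03, 0.05, 0.04, 0.02, 0.06])

# Inverse-variance weighted mean as one possible KCRV
w = 1.0 / u**2
kcrv = np.sum(w * x) / np.sum(w)
u_kcrv = np.sqrt(1.0 / np.sum(w))

# Simpler alternatives that have also been used as a KCRV
kcrv_mean = x.mean()
kcrv_median = np.median(x)

# Degrees of equivalence and their standard uncertainties
# (for the weighted mean, each x_i enters the KCRV, hence the minus sign)
d = x - kcrv
u_d = np.sqrt(u**2 - u_kcrv**2)

for i, (di, udi) in enumerate(zip(d, u_d), start=1):
    print(f"Lab {i}: d = {di:+.4f}, U(d) = {2 * udi:.4f} (k=2)")
```

Expanded uncertainties are printed with a coverage factor k = 2, as is common in key comparison reports.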

The "Guide to the Expression of Uncertainty in Measurement" (GUM) constitutes the main guideline for uncertainty evaluation in metrology, and its recent supplements approach the Bayesian point of view. Bayesian methods have also been suggested for the analysis of KCs. When applying a Bayesian approach, a so-called (posterior) distribution is derived for the unknown quantities such as the DoEs, cf. Fig. 2.
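A minimal sketch of such a Bayesian analysis is given below, assuming independent normal likelihoods with known standard uncertainties and a flat (noninformative) prior for the measurand; this is an illustration only and not necessarily the model underlying Fig. 2. Under these assumptions the posterior for the measurand is Gaussian, and posterior distributions for derived quantities such as the DoEs can be obtained from it by propagating posterior samples:

```python
import numpy as np

# Same hypothetical data as above
x = np.array([10.02, 9.97, 10.05, 9.99, 10.10])
u = np.array([0.03, 0.05, 0.04, 0.02, 0.06])

# Under x_i | mu ~ N(mu, u_i^2) with a flat prior on mu,
# the posterior for the measurand mu is Gaussian with
# mean equal to the weighted mean and variance 1 / sum(1/u_i^2).
w = 1.0 / u**2
post_mean = np.sum(w * x) / np.sum(w)
post_std = np.sqrt(1.0 / np.sum(w))

# Draw posterior samples, e.g. to visualize the density or to
# propagate them into posterior distributions for derived quantities
rng = np.random.default_rng(1)
mu_samples = rng.normal(post_mean, post_std, size=100_000)

print(f"Posterior for the measurand: mean = {post_mean:.4f}, std = {post_std:.4f}")
print(f"95 % credible interval: "
      f"[{np.quantile(mu_samples, 0.025):.4f}, {np.quantile(mu_samples, 0.975):.4f}]")
```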

Fig. 2: Example of posterior distributions for the degrees of equivalence (DoEs) obtained by a Bayesian inference of the data from Fig.1.

Current and future research in the analysis of KC data comprises the adequate selection of a prior distribution when employing Bayesian inference. This includes the elicitation of available prior knowledge, but also the choice of adequate noninformative priors. Other research directions include the use of non-normal distributions, which leads to more robust analysis procedures, and the optimal design of key comparisons.


Software

Bayesian hypothesis testing for key comparisons

The assessment of the calibration and measurement capabilities of a laboratory based on a key comparison can often be viewed as carrying out a classical hypothesis test. PTB Working Group 8.42 has developed an alternative Bayesian approach to hypothesis testing which has the advantage that it can include prior assessments of the capabilities of the laboratories participating in the key comparison. In order to ease the application of the proposed Bayesian hypothesis testing for key comparisons, corresponding MATLAB and R software is made available for download. The software is able to take into account correlations within the key comparison results as well as different prior probabilities for the laboratories. It also provides routines to enter the key comparison data as well as a graphical representation of the results.
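The following Python sketch is not the MATLAB/R software itself and does not reproduce the specific model of the publication cited below; it only illustrates the general idea of a Bayesian hypothesis test for a single laboratory, with all numbers and the prior scale under the alternative hypothesis chosen purely for illustration:

```python
import numpy as np
from scipy.stats import norm

# Generic illustration of a Bayesian hypothesis test for one laboratory
# (not the PTB MATLAB/R software; all numbers below are made up).

d_obs = 0.05    # observed difference of the lab's result to the KCRV
sigma = 0.03    # standard uncertainty associated with that difference

# H0: the lab performs as claimed (true DoE equals zero)
# H1: the true DoE is nonzero, modelled here by a zero-mean normal prior
tau = 0.10      # assumed prior scale for a nonzero DoE under H1
p_H0 = 0.5      # prior probability for H0 (could differ between labs)

# Marginal likelihoods of the observed difference under both hypotheses
m0 = norm.pdf(d_obs, loc=0.0, scale=sigma)
m1 = norm.pdf(d_obs, loc=0.0, scale=np.sqrt(sigma**2 + tau**2))

# Posterior probability of H0 via Bayes' theorem
post_H0 = p_H0 * m0 / (p_H0 * m0 + (1 - p_H0) * m1)
print(f"Bayes factor B01 = {m0 / m1:.2f}, posterior P(H0 | data) = {post_H0:.2f}")
```

A small posterior probability for H0 would correspond to not approving the laboratory's claimed capability, and the prior probability p_H0 offers one way to encode prior assessments of individual laboratories.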

Related Publication

G. Wübbeler, O. Bodnar and C. Elster (2016). Bayesian hypothesis testing for key comparisons. Metrologia 53(4). DOI: 10.1088/0026-1394/53/4/1131.


Publications


Article

Title: Model-based analysis of key comparisons applied to accelerometer calibrations
Author(s): C. Elster, A. Link and H.-J. von Martens
Journal: Measurement Science and Technology
Year: 2001
Volume: 12
Issue: 10
Pages: 1672
DOI: 10.1088/0957-0233/12/10/308
Web URL: http://stacks.iop.org/0957-0233/12/i=10/a=308
Tags: 8.42,KC
Abstract: The concept of a model-based analysis of key comparisons is proposed and illustrated by applying it to data from a regional key comparison of accelerometer calibrations on a scale of frequencies. A physical model of the frequency dependence of the accelerometers' sensitivities is used to calculate reference values. The parameters of the physical model are determined by weighted least squares, and the resulting model is shown to conform with the data. Uncertainties associated with the reference values calculated by the physical model are smaller than those associated with reference values obtained by standard analysis. This can lead to a more favourable assessment of the degree of equivalence of single laboratory measurement values as expressed by calculated E_n numbers. The degree of equivalence of single laboratory measurement values is quantitatively calculated by both model-based analysis and standard analysis, and the results obtained and their differences are discussed.

