
Analysis of key comparisons

Working Group 8.42

Content

Key comparisons are interlaboratory comparisons carried out regularly between National Metrology Institutes (NMIs) within the framework of the CIPM Mutual Recognition Arrangement (MRA). The MRA has meanwhile been signed by more than 98 institutes. Key comparisons enable the mutual recognition of calibrations, measurements and test certificates of the NMIs and mark a major step in supporting international trade, commerce and regulatory affairs. To ensure the compatibility of the measurement capabilities provided by NMIs, the MRA prescribes that key comparisons be carried out on a regular basis. Based on the analysis of the data from a key comparison, the corresponding calibration and measurement capabilities (CMCs) of the NMIs are validated. The final report and the supporting technical data of each key comparison are stored and made publicly available in the key comparison database (KCDB) of the Bureau International des Poids et Mesures (BIPM). Fig. 1 shows a typical example of key comparison data.

Fig. 1: Example data of a key comparison along with the key comparison reference value (KCRV). The blue results indicate control measurements made by the so-called pilot laboratory.

The goal of the analysis of KC data is to assess the results reported by the participating laboratories. According to the MRA, a so-called key comparison reference value (KCRV) is usually calculated. On the basis of the measurement results, including the stated measurement uncertainties, degrees of equivalence (DoEs) are then calculated as the differences between the results reported by the laboratories and the KCRV, along with the uncertainties associated with these differences. The DoEs quantify the extent to which the laboratories are compatible, and they can also be viewed as a measure of whether the laboratories measure as well as they claim. If a DoE differs significantly from zero, the corresponding CMC of the laboratory is not approved.
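As an illustration, a common choice of KCRV is the inverse-variance weighted mean. The following sketch computes such a KCRV and the resulting DoEs; the numbers are made up for illustration and are not the data of Fig. 1:

```python
import numpy as np

# Hypothetical example: results x_i and standard uncertainties u_i
# reported by five laboratories (illustrative values only).
x = np.array([10.02, 9.98, 10.05, 9.97, 10.01])
u = np.array([0.03, 0.02, 0.04, 0.03, 0.02])

# Weighted mean as KCRV (inverse-variance weights).
w = 1.0 / u**2
kcrv = np.sum(w * x) / np.sum(w)
u_kcrv = np.sqrt(1.0 / np.sum(w))

# Degrees of equivalence: differences from the KCRV and their uncertainties.
# For the weighted-mean KCRV and independent results, the correlation between
# x_i and the KCRV reduces the uncertainty: u^2(d_i) = u_i^2 - u^2(KCRV).
d = x - kcrv
u_d = np.sqrt(u**2 - u_kcrv**2)

# A DoE is commonly flagged as significant if |d_i| > 2 u(d_i) (approx. 95 %).
significant = np.abs(d) > 2 * u_d
```

For the weighted mean, the weighted sum of the DoEs vanishes by construction, which is a quick consistency check on any implementation.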

More generally, the analysis of KCs can be seen as a meta-analysis in which the results reported by the participating laboratories are assessed. Methods employed for meta-analyses, such as fixed effects or random effects models, have also been proposed for the analysis of key comparisons. Simpler methods such as the mean, the median or the weighted mean have also been employed for the calculation of a KCRV, as have approaches based on the explicit or implicit removal of outliers.
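As a sketch of the random effects model mentioned above, the classical DerSimonian-Laird moment estimator from meta-analysis can be applied to key comparison data: it estimates a between-laboratory variance and uses it to inflate the weights of the weighted mean. The numbers are illustrative only:

```python
import numpy as np

# Illustrative laboratory results and standard uncertainties.
x = np.array([10.02, 9.98, 10.05, 9.97, 10.01])
u = np.array([0.03, 0.02, 0.04, 0.03, 0.02])

# Fixed effects (weighted-mean) estimate.
w = 1.0 / u**2
mu_fe = np.sum(w * x) / np.sum(w)

# DerSimonian-Laird moment estimate of the between-laboratory variance tau^2,
# based on Cochran's heterogeneity statistic Q.
Q = np.sum(w * (x - mu_fe)**2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(x) - 1)) / c)

# Random effects estimate: the same weighted mean, but with weights
# 1 / (u_i^2 + tau^2), which down-weights very precise labs less strongly.
w_re = 1.0 / (u**2 + tau2)
mu_re = np.sum(w_re * x) / np.sum(w_re)
```

When the data are mutually consistent (Q at most its expectation of n−1), tau^2 is truncated to zero and the random effects estimate coincides with the weighted mean.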

The “Guide to the Expression of Uncertainty in Measurement“ (GUM) constitutes the main guideline for uncertainty evaluation in metrology, and its recent supplements follow a Bayesian point of view. Bayesian methods have also been suggested for the analysis of KCs. When a Bayesian approach is applied, a so-called (posterior) distribution is derived for the unknown quantities such as the DoEs, cf. Fig. 2.

Fig. 2: Example posterior distributions for the degrees of equivalence (DoEs) obtained by a Bayesian inference of the data from Fig. 1.
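A minimal sketch of such a Bayesian analysis, under strongly simplifying assumptions (normal model with known uncertainties, a flat prior on the measurand, and each reported value treated as fixed), could look as follows; the numbers are illustrative and not the data of Fig. 1:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative laboratory results and standard uncertainties.
x = np.array([10.02, 9.98, 10.05, 9.97, 10.01])
u = np.array([0.03, 0.02, 0.04, 0.03, 0.02])

# Under a normal model with known uncertainties and a flat prior, the
# posterior of the measurand is normal with the weighted mean as its
# mean and the weighted-mean uncertainty as its standard deviation.
w = 1.0 / u**2
mu_post = np.sum(w * x) / np.sum(w)
sd_post = np.sqrt(1.0 / np.sum(w))

# Monte Carlo samples of the measurand then yield samples from the
# posterior distributions of the DoEs d_i = x_i - mu.
mu_samples = rng.normal(mu_post, sd_post, size=100000)
doe_samples = x[:, None] - mu_samples  # shape: (labs, samples)

# Posterior probability that each DoE is positive.
p_pos = (doe_samples > 0).mean(axis=1)
```

Plotting histograms of the rows of `doe_samples` would give pictures analogous to Fig. 2; the full Bayesian treatments cited in the publications below are of course more elaborate.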

Current and future research in the analysis of KC data includes the adequate selection of a prior distribution when Bayesian inference is employed. This comprises the elicitation of available prior knowledge as well as the choice of suitable noninformative priors. Other research directions include the use of non-normal distributions, leading to more robust analysis procedures, and the optimal design of key comparisons.
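As a simple illustration of a more robust procedure, the median can serve as a KCRV that is largely insensitive to a single discrepant result, and a nonparametric bootstrap provides an approximate uncertainty for it. The numbers below are made up, with one deliberately discrepant result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data with one discrepant result (index 2).
x = np.array([10.02, 9.98, 10.31, 9.97, 10.01])
u = np.array([0.03, 0.02, 0.04, 0.03, 0.02])

# The outlier pulls the weighted mean upwards, while the median
# remains close to the bulk of the results.
w = 1.0 / u**2
kcrv_mean = np.sum(w * x) / np.sum(w)
kcrv_median = np.median(x)

# Bootstrap standard uncertainty of the median:
# resample the results with replacement and take the spread of the medians.
boot = np.median(rng.choice(x, size=(10000, x.size), replace=True), axis=1)
u_median = boot.std(ddof=1)
```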


Software

Bayesian hypothesis testing for key comparisons

The assessment of the calibration and measurement capabilities of a laboratory based on a key comparison can often be viewed as carrying out a classical hypothesis test. PTB Working Group 8.42 has developed an alternative Bayesian approach to hypothesis testing which has the advantage that it can include prior assessments of the capabilities of the laboratories participating in the key comparison. In order to ease the application of the proposed Bayesian hypothesis testing for key comparisons, corresponding MATLAB and R software is made available. The software can take into account correlations within the key comparison results as well as different prior probabilities for the laboratories. It also provides routines to enter the key comparison data and a graphical representation of the results.
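The following sketch is not the method of the publication below, but it illustrates the general idea of Bayesian hypothesis testing for a single laboratory: two hypotheses for the observed difference to the reference value are compared via their marginal likelihoods and combined with a prior probability. All numbers, including the bias scale of the alternative hypothesis, are hypothetical:

```python
from math import sqrt, pi, exp

def normal_pdf(z, sd):
    """Density of a zero-mean normal distribution with standard deviation sd."""
    return exp(-0.5 * (z / sd)**2) / (sd * sqrt(2.0 * pi))

# Hypothetical single-laboratory check: observed difference d to the
# reference value, its standard uncertainty u_d, and an assumed bias
# scale sigma_b under the alternative hypothesis.
d, u_d = 0.05, 0.02
sigma_b = 0.05

# H0: the lab measures as claimed,   d ~ N(0, u_d^2).
# H1: an unknown bias is present,    d ~ N(0, u_d^2 + sigma_b^2).
m0 = normal_pdf(d, u_d)
m1 = normal_pdf(d, sqrt(u_d**2 + sigma_b**2))

# Posterior probability of H0, here with equal prior probabilities;
# a different prior assessment of the lab simply changes prior_h0.
prior_h0 = 0.5
post_h0 = prior_h0 * m0 / (prior_h0 * m0 + (1 - prior_h0) * m1)
```

With these numbers the observed difference of 2.5 times its uncertainty shifts the posterior probability of H0 well below its prior value, which is the Bayesian analogue of rejecting H0 in a classical test.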

Related Publication

G. Wübbeler, O. Bodnar and C. Elster (2016). Bayesian hypothesis testing for key comparisons. Metrologia, 53(4), 1131–1138. [DOI: 10.1088/0026-1394/53/4/1131]


Publications

O. Bodnar, A. Link, B. Arendacká, A. Possolo and C. Elster, Statistics in Medicine, 39(2), 378–399, 2017.
O. Bodnar and C. Elster, AStA Advances in Statistical Analysis, published online, 1–20, 2016.
J. Wright, B. Toman, B. Mickan, G. Wübbeler, O. Bodnar and C. Elster, Metrologia, 53(6), 1243, 2016.
G. Wübbeler, O. Bodnar and C. Elster, Metrologia, 53(4), 1131–1138, 2016.
G. Wübbeler, O. Bodnar, B. Mickan and C. Elster, Metrologia, 52(2), 400–405, 2015.
L. Spinelli, M. Botwicz, N. Zolek, M. Kacprzak, D. Milej, P. Sawosz, A. Liebert, U. Weigel, T. Durduran, F. Foschum, A. Kienle, F. Baribeau, S. Leclair, J.-P. Bouchard, I. Noiseux, P. Gallant, O. Mermut, A. Farina, A. Pifferi, A. Torricelli, R. Cubeddu, H.-C. Ho, M. Mazurenka, H. Wabnitz, K. Klauenberg, O. Bodnar, C. Elster, M. Bénazech-Lavoué, Y. Bérubé-Lauzière, F. Lesage, D. Khoptyar, A. A. Subash, S. Andersson-Engels, P. Di Ninni, F. Martelli and G. Zaccanti, Biomedical Optics Express, 5(7), 2037–2053, 2014.
O. Bodnar and C. Elster, Metrologia, 51(5), 516–521, 2014.
K. Jousten, K. Arai, U. Becker, O. Bodnar, F. Boineau, J. A. Fedchak, V. Gorobey, W. Jian, D. Mari, P. Mohan, J. Setina, B. Toman, M. Vivcar and Y. H. Yan, Metrologia, 50(1A), 07001, 2013.
O. Bodnar, A. Link, K. Klauenberg, K. Jousten and C. Elster, Measurement Techniques, 56(6), 584–590, 2013.
C. Elster and B. Toman, Metrologia, 50(5), 549–555, 2013.
I. Lira, A. G. Chunovkina, C. Elster and W. Wöger, IEEE Transactions on Instrumentation and Measurement, 61(8), 2079–2084, 2012.
B. Toman, J. Fischer and C. Elster, Metrologia, 49(4), 567–571, 2012.
C. Elster and B. Toman, Metrologia, 47(3), 113–119, 2010.
C. Elster, A. G. Chunovkina and W. Wöger, Metrologia, 47(1), 96–102, 2010.
A. G. Chunovkina, C. Elster, I. Lira and W. Wöger, Measurement Techniques, 52(7), 788–793, 2009.
A. G. Chunovkina, C. Elster, I. Lira and W. Wöger, Metrologia, 45(2), 211–216, 2008.
H.-J. von Martens, C. Elster, A. Link, A. Täubner and T. Bruns, Metrologia, 43(1A), 09002, 2006.
C. Elster, W. Wöger and M. G. Cox, Measurement Techniques, 48(9), 883–893, 2005.
H.-J. von Martens, C. Elster, A. Link, W. Wöger and P. J. Allisy, Metrologia, 41(1A), 09002, 2004.
C. Elster, A. Link and W. Wöger, Metrologia, 40(4), 189, 2003.
C. Elster, A. Link and H.-J. von Martens, Measurement Science and Technology, 12(10), 1672, 2001.
C. Elster and A. Link, Measurement Science and Technology, 12(9), 1431, 2001.