This file was created by the TYPO3 extension
bib
--- Timezone: CET
Creation date: 2024-03-28
Creation time: 17:18:58
--- Number of references: 25
article
MarschallWBE2023
On modelling of artefact instability in interlaboratory comparisons
Metrologia
2023
6
26
8.4,8.42,KC,Messunsicherheit
accepted
10.1088/1681-7575/ace18f
MMarschall
GWübbeler
MBorys
CElster
article
WuebbelerBHE2018
Maintaining consensus for the redefined kilogram
Metrologia
2018
9
7
55
5
722
8.4,8.42,KC
10.1088/1681-7575/aadb6b
GWübbeler
HBettin
FHärtig
CElster
article
SchachtschneiderFSABBBBKKLLMPRSSWWSE2018
Interlaboratory comparison measurements of aspheres
Measurement Science and Technology
2018
4
9
29
5
055010
8.4,8.42,KC,Form
10.1088/1361-6501/aaae96
RSchachtschneider
IFortmeier
MStavridis
JAsfour
GBerger
R BBergmann
ABeutler
TBlümel
HKlawitter
KKubo
JLiebl
FLöffler
RMeeß
CPruss
DRamm
MSandner
GSchneider
MWendel
IWiddershoven
MSchulz
CElster
article
BodnarE2016
Assessment of vague and noninformative priors for Bayesian estimation of the realized random effects in random-effects meta-analysis
AStA Advances in Statistical Analysis
2018
1
31
102
1
1--20
8.42,KC,Unsicherheit
10.1007/s10182-016-0279-7
OBodnar
CElster
article
BodnarLAPE2017
Bayesian estimation in random effects meta-analysis using a non-informative prior
Statistics in Medicine
2017
2
1
39
2
378--399
8.4,8.42,KC,Unsicherheit
1097-0258
10.1002/sim.7156
OBodnar
ALink
BArendacká
APossolo
CElster
article
WrightTMWBE2016
Transfer standard uncertainty can cause inconclusive inter-laboratory comparisons
Metrologia
2016
10
20
53
6
1243
8.42,8.4,KC
10.1088/0026-1394/53/6/1243
JWright
BToman
BMickan
GWübbeler
OBodnar
CElster
article
WubbelerBE2016
Bayesian hypothesis testing for key comparisons
Metrologia
2016
7
18
53
4
1131--1138
8.42,KC
10.1088/0026-1394/53/4/1131
GWübbeler
OBodnar
CElster
article
Wubbeler2015
Explanatory power of degrees of equivalence in the presence of a random instability of the common measurand
Metrologia
2015
1
3
52
2
400--405
8.42,Unsicherheit,KC
http://iopscience.iop.org/article/10.1088/0026-1394/52/2/400
IOP Publishing
en
0026-1394
10.1088/0026-1394/52/2/400
GWübbeler
OBodnar
BMickan
CElster
article
Spinelli2014
Determination of reference values for optical properties of liquid phantoms based on Intralipid and India ink
Biomedical optics express
2014
5
7
2037--2053
A multi-center study has been set up to accurately characterize the optical properties of diffusive liquid phantoms based on Intralipid and India ink at near-infrared (NIR) wavelengths. Nine research laboratories from six countries adopting different measurement techniques, instrumental set-ups, and data analysis methods determined at their best the optical properties and relative uncertainties of diffusive dilutions prepared with common samples of the two compounds. By exploiting a suitable statistical model, comprehensive reference values at three NIR wavelengths for the intrinsic absorption coefficient of India ink and the intrinsic reduced scattering coefficient of Intralipid-20% were determined with an uncertainty of about 2% or better, depending on the wavelength considered, and 1%, respectively. Although this study focused on particular batches of India ink and Intralipid, the reference values determined here represent a solid and useful starting point for preparing diffusive liquid phantoms with accurately defined optical properties. Furthermore, due to the ready availability, low cost, long-term stability and batch-to-batch reproducibility of these compounds, they provide a unique fundamental tool for the calibration and performance assessment of diffuse optical spectroscopy instrumentation intended to be used in laboratory or clinical environments. Finally, the collaborative work presented here demonstrates that the accuracy level attained for the optical properties of diffusive phantoms is reliable.
Medical optics instrumentation,Photon migration,Turbid media
8.42,KC
http://www.osapublishing.org/viewmedia.cfm?uri=boe-5-7-2037&seq=0&html=true
Optical Society of America
EN
2156-7085
10.1364/BOE.5.002037
LSpinelli
MBotwicz
NZolek
MKacprzak
DMilej
PSawosz
ALiebert
UWeigel
TDurduran
FFoschum
AKienle
FBaribeau
SLeclair
J-PBouchard
INoiseux
PGallant
OMermut
AFarina
APifferi
ATorricelli
RCubeddu
H-CHo
MMazurenka
HWabnitz
KKlauenberg
OBodnar
CElster
MBénazech-Lavoué
YBérubé-Lauzière
FLesage
DKhoptyar
A ASubash
SAndersson-Engels
PDi Ninni
FMartelli
GZaccanti
article
Bodnar2014
On the adjustment of inconsistent data using the Birge ratio
Metrologia
2014
51
5
516--521
8.42,KC,Regression,Unsicherheit
http://iopscience.iop.org/article/10.1088/0026-1394/51/5/516
IOP Publishing
en
0026-1394
10.1088/0026-1394/51/5/516
OBodnar
CElster
article
Jousten2013
Final report of key comparison CCM.P-K12 for very low helium flow rates (leak rates)
Metrologia
2013
50
1A
07001--07001
8.42,KC
http://iopscience.iop.org/article/10.1088/0026-1394/50/1A/07001
IOP Publishing
en
1681-7575
10.1088/0026-1394/50/1A/07001
KJousten
KArai
UBecker
OBodnar
FBoineau
J AFedchak
VGorobey
WJian
DMari
PMohan
JSetina
BToman
MVivcar
Y HYan
article
Bodnar2013a
Application of Bayesian model averaging using a fixed effects model with linear drift for the analysis of key comparison CCM.P-K12
Measurement Techniques
2013
56
6
584--590
8.42,Bayes,KC
http://link.springer.com/10.1007/s11018-013-0249-3
0543-1972
10.1007/s11018-013-0249-3
OBodnar
ALink
KKlauenberg
KJousten
CElster
article
Elster2013
Analysis of key comparison data: critical assessment of elements of current practice with suggested improvements
Metrologia
2013
50
5
549--555
8.42,Bayes,KC
http://iopscience.iop.org/article/10.1088/0026-1394/50/5/549
IOP Publishing
en
0026-1394
10.1088/0026-1394/50/5/549
CElster
BToman
article
Lira2012
Analysis of Key Comparisons Incorporating Knowledge About Bias
IEEE Transactions on Instrumentation and Measurement
2012
61
8
2079--2084
A method is proposed for analyzing key comparison data. It is based on the assumption that each laboratory participating in the comparison exercise obtains independent and consistent estimates of the measurand and that, in addition, each laboratory provides an estimate of the quantity that collects all systematic effects that the laboratory took into account. The unknown value of the latter quantity, subtracted from its estimate, is defined as the laboratory's bias. The uncertainties associated with the estimates of the measurand and with the vanishing biases' estimates are also assumed to be reported. In this paper, we show that the information provided in this way may be of help for judging the performances of the laboratories in their correction of systematic effects. This is done by developing formulas for the final (consensus) estimates and uncertainties of the measurand and of the biases. Formulas for the final estimates and uncertainties of the pairwise differences between the biases are also developed. An example involving simulated key comparison data makes apparent the benefits of the proposed approach.
Atmospheric measurements,Bayesian methods,Bismuth,Gaussian distribution,Laboratories,Measurement uncertainty,Particle measurements,Systematics,Uncertainty,laboratory bias estimation,measurement errors,measurement uncertainty,performance evaluation,statistical analysis,systematic effect,vanishing bias estimation
8.42,KC
http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6189781
0018-9456
10.1109/TIM.2012.2193690
ILira
A GChunovkina
CElster
WWöger
article
Toman2012
Alternative analyses of measurements of the Planck constant
Metrologia
2012
49
4
567--571
8.42,Bayes,KC
http://iopscience.iop.org/article/10.1088/0026-1394/49/4/567
IOP Publishing
en
0026-1394
10.1088/0026-1394/49/4/567
BToman
JFischer
CElster
article
Elster2010
Analysis of key comparisons: estimating laboratories' biases by a fixed effects model using Bayesian model averaging
Metrologia
2010
47
3
113--119
8.42,Bayes,KC
http://iopscience.iop.org/article/10.1088/0026-1394/47/3/001
IOP Publishing
en
0026-1394
10.1088/0026-1394/47/3/001
CElster
BToman
article
Elster2010a
Linking of a RMO key comparison to a related CIPM key comparison using the degrees of equivalence of the linking laboratories
Metrologia
2010
47
1
96--102
8.42,KC
http://iopscience.iop.org/article/10.1088/0026-1394/47/1/011
IOP Publishing
en
0026-1394
10.1088/0026-1394/47/1/011
CElster
A GChunovkina
WWöger
article
Chunovkina2009
Evaluating systematic differences between laboratories in interlaboratory comparisons
Measurement Techniques
2009
52
7
788--793
8.42,KC
http://link.springer.com/10.1007/s11018-009-9340-1
0543-1972
10.1007/s11018-009-9340-1
A GChunovkina
CElster
ILira
WWöger
article
Chunovkina2008
Analysis of key comparison data and laboratory biases
Metrologia
2008
45
2
211--216
8.42,KC
http://iopscience.iop.org/article/10.1088/0026-1394/45/2/010
IOP Publishing
en
0026-1394
10.1088/0026-1394/45/2/010
A GChunovkina
CElster
ILira
WWöger
article
Martens2006
Final report on the key comparison EUROMET.AUV.V-K1
Metrologia
2006
43
1A
09002--09002
8.42,Dynamik,KC
IOP Publishing
10.1088/0026-1394/43/1A/09002
H-Jvon Martens
CElster
ALink
ATäubner
TBruns
article
Elster2005
Analysis of Key Comparison Data: Unstable Travelling Standards
Measurement Techniques
2005
48
9
883--893
8.42,KC
http://link.springer.com/10.1007/s11018-005-0239-1
0543-1972
10.1007/s11018-005-0239-1
CElster
WWöger
M GCox
article
Martens2004
Linking the results of the regional key comparison APMP.AUV.V-K1 to those of the CIPM key comparison CCAUV.V-K1
Metrologia
2004
41
1A
09002
During 1996 and 1997, eight national metrology institutes (NMIs) took part in a vibration accelerometer comparison, identifier APMP.AUV.V-K1 [http://www.bipm.org/utils/common/pdf/final_reports/AUV/V-K1/APMP.AUV.V-K1.pdf]. Two NMIs ultimately withdrew from the comparison, and the results of the remaining six NMIs have been approved by the CCAUV. Four NMIs subsequently took part in the 2001 CIPM key comparison for the same quantity, identifier CCAUV.V-K1. The results of these four CIPM participants have been used to link the results of the remaining two NMIs to the results in the CIPM key comparison using the reference frequency of 160 Hz. The CCAUV nominated the PTB to propose the methodology for the link and subsequently approved the linked results as presented in this report. The degrees of equivalence between each result and the key comparison reference value (KCRV), and between each pair of NMIs, have been calculated, and the results are given in the form of a matrix and graph for six NMIs. As two results from the APMP can now be linked to the published CCAUV.V-K1 comparison [http://www.iop.org/EJ/abstract/0026-1394/40/1A/09001], the updated graph for the key comparison database is also given. The final report [http://www.bipm.org/utils/common/pdf/final_reports/AUV/V-K1/CCAUV.V-K1_APMP.AUV.V-K1.pdf] has been peer-reviewed and approved for publication by the CCAUV, according to the provisions of the Mutual Recognition Arrangement (MRA).
8.42,KC
http://stacks.iop.org/0026-1394/41/i=1A/a=09002
10.1088/0026-1394/41/1A/09002
H-Jvon Martens
CElster
ALink
WWöger
PJ Allisy
article
Elster2003
Proposal for linking the results of CIPM and RMO key comparisons
Metrologia
2003
40
4
189
A procedure for linking the results of a Regional Metrology Organisation (RMO) key comparison to those of a related Comité International des Poids et Mesures (CIPM) key comparison is proposed. The RMO results are linked to the CIPM results by a factor which is determined as the ratio of the CIPM key comparison reference value and the weighted mean of the RMO results of the linking laboratories. Correlations of the results of the linking laboratories in the two comparisons are taken into account. The uncertainties associated with the linked RMO results and the degrees of equivalence (DOEs) are explicitly given. The influence of correlations of the results of the linking laboratories in both comparisons is examined. It is shown that these correlations can decrease the linking uncertainty, whereas DOEs are expected to be influenced less. The proposed linking procedure is illustrated by its application to linking the results of a recent CIPM key comparison on accelerometer calibrations to that of a corresponding RMO key comparison.
8.42,KC
http://stacks.iop.org/0026-1394/40/i=4/a=308
10.1088/0026-1394/40/4/308
CElster
AlfredLink
WWöger
article
Elster2001
Model-based analysis of key comparisons applied to accelerometer calibrations
Measurement Science and Technology
2001
12
10
1672
The concept of a model-based analysis of key comparisons is proposed and illustrated by applying it to data from a regional key comparison of accelerometer calibrations on a scale of frequencies. A physical model of the frequency dependence of the accelerometers' sensitivities is used to calculate reference values. The parameters of the physical model are determined by weighted least squares, and the resulting model is shown to conform with the data. Uncertainties associated with the reference values calculated by the physical model are smaller than those associated with reference values obtained by standard analysis. This can lead to a more favourable assessment of the degree of equivalence of single laboratory measurement values as expressed by calculated E_n-numbers. The degree of equivalence of single laboratory measurement values is quantitatively calculated by both model-based analysis and standard analysis, and the results obtained and their differences are discussed.
8.42,KC
http://stacks.iop.org/0957-0233/12/i=10/a=308
10.1088/0957-0233/12/10/308
CElster
ALink
H-Jvon Martens
article
Elster2001a
Analysis of key comparison data: assessment of current methods for determining a reference value
Measurement Science and Technology
2001
12
9
1431
The degree of equivalence of national measurement standards is established by means of key comparisons. The analysis of data from a key comparison requires the determination of a reference value, which is then used to express the degree of equivalence of the national measurement standards. Several methods for determining a reference value are available and these methods can lead to different results. In this study current methods for determining a reference value are compared. In order to quantitatively assess the quality of performance, the methods are applied to a large set of simulated key comparison data. The simulations refer to several realistic scenarios, including correlated measurements. Large differences in the results can occur and none of the methods performs best in every situation. We give some guidance for selecting an appropriate method when assumptions about the reliability of quoted uncertainties can be made.
8.42,KC
http://stacks.iop.org/0957-0233/12/i=9/a=308
10.1088/0957-0233/12/9/308
CElster
ALink