Here the mathematical and statistical procedures of most relevance to dynamic measurements are introduced briefly. We include material that is relevant to National Measurement Institutes (NMIs) and to providers of calibration services, as well as to industrial end-users of calibrations, both for the sake of completeness and to provide useful background information. In particular, it is of benefit to end-users of dynamic calibrations provided by NMIs if they have some understanding of the calibration methods employed by NMIs at primary and secondary level and of the nature of the calibration information that may be provided on calibration certificates for dynamically calibrated sensors and transducers.
Note that in general we advise that practitioners follow the uncertainty analysis methods advocated in the Guide to the expression of uncertainty in measurement, known colloquially as the GUM.
Guidance on the terminology and vocabulary of metrology can be found in the International Vocabulary of Metrology - Basic and general concepts and associated terms (VIM). The International Electrotechnical Vocabulary (IEC 60050) specialises in terminology for electrical and electronic applications, but much of its content is relevant to dynamic measurements, signal processing and mathematics. The on-line version is known as Electropedia.
Calibration can be regarded as a system identification problem. A known input is provided to the sensor or system to be calibrated and the output is recorded. The derivation of the mathematical relationship between the input and the output is the purpose of calibration and it is this mathematical relationship that is recorded on the calibration certificate. In the static case this relationship may be no more than a single number (and its associated uncertainty) that can be used to convert a voltage value, for example, to the physical quantity of interest. For dynamic cases, the calibration certificate is likely to provide information on the frequency response of the sensor, i.e., amplitude and phase as a function of frequency, or it may provide a transfer function (typically expressed in the form of a Laplace Transform).
In general the characterisation of a dynamic system from measured data (system identification) involves a number of basic stages: specification of a model linking the measured data with parameters describing the system characteristics, estimation of the model parameters, and validation of the fitted model.
Having specified a model linking the measured data with parameters describing the system characteristics, a number of statistical techniques may be employed for estimation of the parameters of interest. Among these techniques at least two distinct groups can be identified: one yields (point) estimates of the parameters and uncertainties associated with their estimation, whereas an alternative approach is to obtain a posterior probability distribution for the parameters that summarises the current state of knowledge about them. Estimates and associated uncertainties can then be calculated from this distribution. In general the GUM favours the second approach.
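As an illustration of the first of these approaches, the following sketch (all values invented for the example) fits a straight-line model to simulated data by ordinary least squares and obtains standard uncertainties for the parameters from the residual variance:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
X = np.column_stack([np.ones_like(x), x])      # design matrix for y = a + b*x
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.1, x.size)  # simulated observations

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)  # point estimates (a, b)
dof = x.size - 2
sigma2 = np.sum((y - X @ beta) ** 2) / dof     # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)          # parameter covariance matrix
u = np.sqrt(np.diag(cov))                      # standard uncertainties of a and b
```

In the alternative approach, a posterior distribution for (a, b) would be obtained instead, from which the same kind of estimates and uncertainties can be derived.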
Detailed information and advice about parameter estimation can be found in textbooks such as:
J. Schoukens and R. Pintelon, Identification of Linear Systems: A Practical Guide to Accurate Modelling, Pergamon Press, 1991
A. V. Oppenheim and R. W. Schafer, Discrete-time signal processing, Prentice Hall Signal Processing Series, 1999
L. Ljung, System identification: Theory for the user, Prentice Hall Information and System Science Series, 1999
Matlab users may also wish to investigate the use of Matlab’s System Identification Toolbox, which provides many tools for system identification based on both frequency domain and time domain data.
Other useful on-line resources on signal processing include:
Spectral Audio Signal Processing
The Scientist and Engineer's Guide to Digital Signal Processing
Least squares design of IIR filters
A number of choices of excitation signal exist for characterising dynamic systems. Those employed within this EMRP project have included stepped sine (narrow band sinusoidal) excitation and broad band shock or impulse excitation.
The mathematics and statistics team of the IND09 project modelled the systems studied by sets of linear ordinary differential equations or, equivalently, by a rational function in the Laplace domain (transfer function). Thus our approach was based on white box parametric models with physical interpretations of the parameters. The set of differential equations determines the formula for the transfer function uniquely.
Within this general framework we often studied a range of representations of the system under investigation, based on various modelling approaches or approximations that may be valid in a particular operating range, and then the necessity arises of choosing between the different models. An example of this is provided by the work done to support dynamic force calibrations using shock force methods. Here we investigated models of the experimental set-up that represented the system as three, four or five mass-spring-damper systems in series. We found that the quality of the models, as demonstrated by an analysis of residuals, varied depending on the sensor being employed.
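A minimal sketch of this kind of white box model, assuming a single mass-spring-damper stage with invented parameter values rather than the project's actual three-to-five stage set-ups, evaluates the amplitude and phase of the transfer function on a frequency grid:

```python
import numpy as np

# single mass-spring-damper stage: m*x'' + c*x' + k*x = F(t)
# Laplace-domain transfer function H(s) = X(s)/F(s) = 1 / (m*s^2 + c*s + k)
m, c, k = 0.1, 5.0, 1.0e6          # kg, N s/m, N/m (illustrative values only)

def H(s):
    return 1.0 / (m * s ** 2 + c * s + k)

f = np.linspace(10.0, 2000.0, 400)  # frequency grid, Hz
s = 2j * np.pi * f                  # evaluate on the imaginary axis
amp = np.abs(H(s))                  # amplitude response
phase = np.angle(H(s))              # phase response

f0 = np.sqrt(k / m) / (2 * np.pi)   # undamped natural frequency, ~503 Hz here
```

Series models with several stages are built in the same way, with the transfer function following uniquely from the coupled set of differential equations.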
Mathematical and statistical methods employed in the estimation of the unknown parameters and evaluation of their uncertainties should be chosen with the emphasis on achieving conformity with the GUM and its supplements. The fitness of the model is assessed, e.g., by an appropriate residual analysis, and, depending on the availability of measured data, the model may be validated using additional data sets from experiments. It may also be useful to compare results obtained for measuring systems with different sensors and amplifiers, or for different excitation signals.
For the physical quantities studied within this European project (force, torque and pressure), two methods of generating dynamic signals at the primary standard level have been employed. The first is the generation of narrow band sinusoidal signals as inputs to the calibration, i.e., the stepped sine methodology described briefly above. The second is the use of broad band or impulse methods that can generate a range of frequencies simultaneously, also described briefly above. To ensure traceability it is necessary to demonstrate the consistency of the results from the two methods at all frequencies that the two approaches have in common.
Consider first the narrow band sinusoidal signal case. For the dynamic quantities of interest in this project, the narrow band method is employed for the generation of a sequence of single-frequency signals (the stepped sine excitation) that spans a required calibration (frequency) interval. In such a case, the representation of the individual sinusoidal signals is typically in terms of estimates of their amplitude, frequency and phase together with the uncertainties associated with these estimates. Therefore, some pre-processing of the time domain signals has to be carried out to obtain the amplitude and phase (relative to the input signal) of the observed output signal. Uncertainties associated with the frequency domain values can then be obtained either by propagation from the time domain measurements through the pre-processing stage or by means of statistical modelling in the frequency domain.
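One common pre-processing step of this kind, sketched below under the assumption that the excitation frequency is known, is the three-parameter sine fit, which is linear in its parameters and yields amplitude and phase estimates directly:

```python
import numpy as np

rng = np.random.default_rng(0)
f0 = 50.0                        # known excitation frequency, Hz
fs = 5000.0                      # sampling rate, Hz
t = np.arange(0.0, 0.2, 1 / fs)  # 10 full periods of the excitation
amp_true, phi_true = 2.0, 0.7
y = amp_true * np.cos(2 * np.pi * f0 * t + phi_true) \
    + 0.01 * rng.normal(size=t.size)   # simulated noisy output signal

# three-parameter model y ~ A*cos(wt) + B*sin(wt) + C is linear in (A, B, C)
D = np.column_stack([np.cos(2 * np.pi * f0 * t),
                     np.sin(2 * np.pi * f0 * t),
                     np.ones_like(t)])
A, B, C = np.linalg.lstsq(D, y, rcond=None)[0]

amp_est = np.hypot(A, B)
phase_est = np.arctan2(-B, A)    # A*cos(wt) + B*sin(wt) = amp*cos(wt + phi)
```

Uncertainties associated with amp_est and phase_est can then be propagated from the least squares covariance matrix in the usual way.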
Now consider the use of a broad band signal as an input to a calibration. The input signal is likely to take a form that approximates an impulse, a step or other sharp change in signal value that occurs over a short period of time. The intention of such an approach is to generate a signal with broad frequency content, so that a single excitation generates a range of frequencies for calibration purposes. The outcome of a calibration of a broad band sensor or transducer is typically a frequency domain representation of the impulse response in which the amplitude and phase values (and their associated uncertainties) can be reported in the form of data tables and graphs.
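A minimal sketch of obtaining such a frequency domain representation, assuming a simple first-order system with an invented time constant, is to apply the discrete Fourier transform to a sampled impulse response:

```python
import numpy as np

fs = 10000.0                       # sampling rate, Hz
t = np.arange(0.0, 1.0, 1 / fs)
tau = 0.01                         # time constant, s (illustrative)
h = np.exp(-t / tau) / tau         # impulse response of a first-order system

H = np.fft.rfft(h) / fs            # scale so H approximates the continuous FT
f = np.fft.rfftfreq(t.size, 1 / fs)

amp = np.abs(H)                    # amplitude response; ~1 at DC for this h
phase = np.unwrap(np.angle(H))     # unwrapped phase response
```

Amplitude and phase (with associated uncertainties) can then be tabulated or plotted over the frequency range of interest.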
The output of a sensor or transducer that detects and responds to the physical quantity of interest is usually a time-varying voltage or electrical charge signal. This signal acts as the input to a signal conditioning and signal recording system that consists of a number of stages.
The measuring chain of sensor, filter, amplifier, display device and digitiser will all have some influence on the recorded signal so that they may also require a dynamic calibration. In fact, given that sensors are often employed with dedicated amplifiers and filters, it may be that the sensor/amplifier/filter chain is calibrated as one unit, so that one calibrates a measuring system rather than an isolated sensor or transducer.
A measuring system can usually be represented as a chain of filters in series so that the output signal of a measuring system is a convolution of the successive filters that represent each instrument or device in the measuring chain with the input signal. Consequently, the transfer function of such a measuring chain is the product of transfer functions of the individual components.
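The product rule can be illustrated with two invented first-order stages; the frequency response of the chain follows directly from the product of the individual responses:

```python
import numpy as np

f = np.linspace(1.0, 1000.0, 500)
s = 2j * np.pi * f

# two first-order stages in series, e.g. a sensor followed by an amplifier
tau_sensor, tau_amp = 1e-3, 1e-4         # illustrative time constants, s
H_sensor = 1.0 / (1.0 + tau_sensor * s)
H_amp = 10.0 / (1.0 + tau_amp * s)       # gain-10 amplifier stage

H_chain = H_sensor * H_amp               # product of the transfer functions
amp_db = 20 * np.log10(np.abs(H_chain))  # chain amplitude response in dB
phase = np.angle(H_chain)                # chain phase = sum of stage phases
```

In the time domain the same relationship appears as the convolution of the stage impulse responses with the input signal.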
It is important to recognise that there may not be a simple relationship between the models that are employed to describe the measuring systems and the information that appears on a calibration certificate. In practical terms the key issues are:
For all three physical quantities, the responses of the sensors that are used depend on the structure to which the sensor is connected. During the calibration process it is necessary to correct for the effects of the structural loading so that the data on the calibration certificate are independent of the structure and of other environmental effects. However, when the sensor is used to make a measurement in an industrial context it may experience a different structural load and a different form of signal from those it experienced during calibration. Thus it may not be straightforward to interpret and use calibration certificate data, and care should be taken when using a sensor in an environment that differs substantially from the environment in which it was calibrated.
How calibration information is to be used depends on the purpose of the measurement and on the bandwidth of the input signal to be estimated. A dynamic measurement necessarily implies that one is dealing with a time varying signal as input to the measuring system. If the aim of the measurement is to estimate the complete input time signal (and if the signal contains a range of frequencies) then in general it will be necessary to employ available calibration information in a deconvolution process. We therefore recommend study of the following papers:
S. Eichstädt, C. Elster, T. J. Esward and J. P. Hessling (2010) Deconvolution filters for the analysis of dynamic measurement processes: a tutorial Metrologia 47, 522-533
S. Eichstädt, A. Link, P. Harris and C. Elster (2012) Efficient implementation of a Monte Carlo method for uncertainty evaluation in dynamic measurements Metrologia 49, 401-410
The tutorial paper concludes that, if a continuous model of the LTI system is available or the frequency response of a system is known (the two cases considered earlier), the preferred approach is to apply least squares methods in the frequency domain to construct an approximate inverse filter for use in a deconvolution algorithm.
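A sketch of this least squares construction, assuming the frequency response of a simple first-order system and restricting the design to a band below the Nyquist frequency, is:

```python
import numpy as np

fs = 1000.0
f = np.linspace(0.0, 400.0, 200)            # design band, below Nyquist (500 Hz)
w = 2 * np.pi * f / fs                      # digital frequency, rad/sample

tau = 2e-3
H = 1.0 / (1.0 + 2j * np.pi * f * tau)      # system frequency response (assumed)

ntaps, n0 = 31, 15                          # FIR length and target delay, samples
E = np.exp(-1j * np.outer(w, np.arange(ntaps))) * H[:, None]
d = np.exp(-1j * w * n0)                    # target: compensated chain = pure delay

# complex least squares solved via stacked real and imaginary parts
A = np.vstack([E.real, E.imag])
b = np.concatenate([d.real, d.imag])
g = np.linalg.lstsq(A, b, rcond=None)[0]    # FIR compensation filter coefficients

G = E @ g                                   # achieved compensated response on grid
```

The fit is posed as a linear least squares problem in the FIR coefficients; the target is a pure delay, so the compensated chain approximates the identity apart from a known latency, and frequencies above the design band are left to a separate low-pass stage.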
The paper on Monte Carlo methods sets out methods for evaluating uncertainties in dynamic measurements that are carried out in line with Supplements 1 and 2 of the GUM.
In particular the paper recognises that a direct implementation of the Monte Carlo method can become computationally intractable owing to storage requirements and the limitation of computer memory and proposes two memory-efficient alternatives.
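One such memory-efficient strategy, sketched here in a simplified form (not necessarily the paper's exact scheme), is to update the pointwise mean and variance of the Monte Carlo signals recursively, so that only one realisation needs to be held in memory at a time:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 2000                  # Monte Carlo trials
n_time = 1000                     # signal length (could be millions in practice)
template = np.sin(np.linspace(0.0, 10.0, n_time))

# running accumulation of mean and squared deviations (Welford-style update):
# the full (n_samples x n_time) array is never stored
mean = np.zeros(n_time)
m2 = np.zeros(n_time)
for i in range(1, n_samples + 1):
    x = template + rng.normal(0.0, 0.5, n_time)  # one Monte Carlo realisation
    delta = x - mean
    mean += delta / i
    m2 += delta * (x - mean)

std = np.sqrt(m2 / (n_samples - 1))   # pointwise standard uncertainty
```

The estimate of the signal and its pointwise standard uncertainty are available after the loop without ever storing all the trials.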
If the input signal or measurand has a narrow bandwidth or consists of a single frequency component, it may be possible to treat the measurement as static or quasi-static if the available calibration information provides the amplitude and phase response (either directly or via a transfer function) and associated uncertainties at the frequency of interest. In such cases deconvolution is not needed and the observed output can be inverted straightforwardly using the available calibration information at the specific frequency of the measurement.
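A sketch of this single-frequency inversion, with invented certificate values, divides the complex output amplitude by the complex sensitivity taken from the calibration data:

```python
import numpy as np

# calibration certificate data at the frequency of the measurement (assumed values)
f0 = 100.0                     # Hz
amp_sens = 2.5e-3              # V per unit of the measurand, at f0
phase_sens = -0.12             # rad, at f0
S = amp_sens * np.exp(1j * phase_sens)   # complex sensitivity at f0

# observed single-frequency output (complex amplitude, e.g. from a sine fit)
V_out = 5.0e-3 * np.exp(1j * 0.30)

X_in = V_out / S               # estimated complex input amplitude
amp_in = np.abs(X_in)          # = |V_out| / |S|
phase_in = np.angle(X_in)      # = phase(V_out) - phase(S)
```

The associated uncertainty follows from combining the relative uncertainties of the observed amplitude and of the certificate sensitivity at f0.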
In many dynamic measurements the purpose is not to estimate the complete time history of the input signal but to evaluate specific features of a signal, such as a maximum or a minimum value, an r.m.s. value, or a specific frequency component or range of components and their amplitudes. Nevertheless, whatever the measurand, it is important to consider whether the bandwidth of the measurement and the available calibration information about the sensor lead to the need to deconvolve the system response from the observed output signal. A key factor in this decision is the magnitude of the measurement uncertainty that is acceptable to the user of the measurement result. A deconvolution process should be regarded as a means of correcting a signal prior to uncertainty evaluation and, as deconvolution is necessarily an imperfect process, an estimate of the uncertainty of the process should be included in the measurement uncertainty budget. This uncertainty may itself be frequency dependent, and this should be taken into account when estimating the measurement uncertainty of broadband measurands.
For end-users of calibration information the measurement problem is: based on an output signal measured at discrete time instances, the unknown input signal has to be estimated. Our advice is that the measurement device/measuring system is modelled as a linear time invariant system and the input signal is represented as a discrete-time sequence. This allows the application of the GUM Supplement 2 methodology[1].
To estimate the input signal, a compensation filter is designed consisting of the inverse of the linear time invariant system and a low-pass filter attenuating measurement noise beyond a certain frequency. Next, uncertainties of the estimated input signal are evaluated, taking into account both the uncertainty associated with the model of the measuring system and the effect of the measurement noise.
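A simplified frequency-domain version of such a compensation filter (the cited work designs time-domain digital filters, so this is illustrative only) combines the inverse of the system's frequency response with a smooth low-pass roll-off:

```python
import numpy as np

fs, n = 1000.0, 2048
t = np.arange(n) / fs

# simulated first-order measuring system and noisy observed output
tau = 5e-3
x_true = np.sin(2 * np.pi * 20 * t) * np.exp(-2.0 * t)   # input to be recovered
f = np.fft.rfftfreq(n, 1 / fs)
H = 1.0 / (1.0 + 2j * np.pi * f * tau)                   # system response

rng = np.random.default_rng(7)
y = np.fft.irfft(np.fft.rfft(x_true) * H, n) + 0.001 * rng.normal(size=n)

# compensation: inverse of H cascaded with a low-pass that suppresses the
# amplified high-frequency noise (here a simple raised-cosine roll-off)
f_cut, f_stop = 100.0, 200.0
W = np.clip((f_stop - f) / (f_stop - f_cut), 0.0, 1.0)   # 1 below f_cut, 0 above
W = 0.5 - 0.5 * np.cos(np.pi * W)                        # smooth transition
x_hat = np.fft.irfft(np.fft.rfft(y) * W / H, n)          # compensated signal
```

Without the low-pass weighting W, the inverse filter 1/H would amplify the measurement noise at high frequencies without bound.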
Steps leading to the desired compensation filter are described in detail in [2]. Formulas for the uncertainties, based on linearization of the model, can be found in the same publication. The paper also discusses in detail a Monte Carlo approach to uncertainty evaluation. Since the length of the signals is typically large, a sequential implementation of the Monte Carlo method is required; details can be found in [3].
Dynamic calibration of a transducer involves measuring a frequency response. This measured frequency response may be used together with a theoretical model for parameter identification, or it may be the final result reported from the calibration. In both cases, uncertainties associated with the measured amplitudes and phases, or real and imaginary parts of the frequency response are of interest. To this end, uncertainty budgets are developed. To validate the uncertainty budgets and check their plausibility a reproducibility experiment should be conducted.
In a long-term reproducibility experiment a transducer is mounted repeatedly into the measurement setup and the frequency response is repeatedly determined. The data collected (e.g., amplitudes and phases) display variation due to all the sources of uncertainty that change from one single measurement to another (repeatability), as well as all sources that change only from mounting to mounting, including external influences (whose impact should be minimal). Looking at the details of the reproducibility experiment, one defines groups of effects (e.g. those changing between mountings, those changing for each measurement, etc.) and these then represent sources of uncertainty in a (theoretical) mixed linear model. In the model, the identified sources of uncertainty are called random effects and the variability of these effects is estimated by fitting a mixed linear model to the measured data using, e.g., the method of restricted maximum likelihood. This paper[4] provides advice on the formulation of such models and the estimation of the variance components, as well as a short discussion of available software tools (the algorithms for calculating restricted maximum likelihood estimates in mixed linear models are part of many statistical software packages, and an implementation also exists in the Statistics Toolbox of MATLAB).
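For a balanced one-way layout (repeated measurements within repeated mountings), the variance components can also be estimated by the classical ANOVA moment estimators, a simpler alternative to REML that is convenient for a quick plausibility check; all numbers below are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)
g, n = 8, 5                       # mountings, repeats per mounting
sd_mount, sd_rep = 0.20, 0.05     # simulated between/within standard deviations

data = (10.0
        + rng.normal(0.0, sd_mount, (g, 1))   # mounting (random) effect
        + rng.normal(0.0, sd_rep, (g, n)))    # repeatability

# classical balanced one-way ANOVA estimators of the variance components
group_means = data.mean(axis=1)
msb = n * np.var(group_means, ddof=1)         # between-mounting mean square
msw = data.var(axis=1, ddof=1).mean()         # within-mounting mean square
var_rep = msw                                 # repeatability variance
var_mount = max((msb - msw) / n, 0.0)         # between-mounting variance
```

For unbalanced designs or more than one grouping factor, REML-based mixed-model fitting as discussed in [4] is the appropriate tool.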
The Guide to the expression of uncertainty in measurement includes much practical advice concerning the evaluation of uncertainties. Set out below are extracts from the Guide identified by the relevant paragraph number from the 2008 edition. The extracts have been chosen to reflect some of the key ideas that are relevant to analysing uncertainties in dynamic measurements.
3.1.6 The mathematical model of the measurement that transforms the set of repeated observations into the measurement result is of critical importance because, in addition to the observations, it generally includes various influence quantities that are inexactly known. This lack of knowledge contributes to the uncertainty of the measurement result, as do the variations of the repeated observations and any uncertainty associated with the mathematical model itself.
3.2.4 It is assumed that the result of a measurement has been corrected for all recognized significant systematic effects and that every effort has been made to identify such effects.
3.3.2 In practice, there are many possible sources of uncertainty in a measurement, including:
a) incomplete definition of the measurand;
b) imperfect realization of the definition of the measurand;
c) nonrepresentative sampling — the sample measured may not represent the defined measurand;
d) inadequate knowledge of the effects of environmental conditions on the measurement or imperfect measurement of environmental conditions;
e) personal bias in reading analogue instruments;
f) finite instrument resolution or discrimination threshold;
g) inexact values of measurement standards and reference materials;
h) inexact values of constants and other parameters obtained from external sources and used in the data-reduction algorithm;
i) approximations and assumptions incorporated in the measurement method and procedure;
j) variations in repeated observations of the measurand under apparently identical conditions.
These sources are not necessarily independent, and some of sources a) to i) may contribute to source j). Of course, an unrecognized systematic effect cannot be taken into account in the evaluation of the uncertainty of the result of a measurement but contributes to its error.
3.4.1 If all of the quantities on which the result of a measurement depends are varied, its uncertainty can be evaluated by statistical means. However, because this is rarely possible in practice due to limited time and resources, the uncertainty of a measurement result is usually evaluated using a mathematical model of the measurement and the law of propagation of uncertainty. Thus implicit in this Guide is the assumption that a measurement can be modelled mathematically to the degree imposed by the required accuracy of the measurement.
3.4.2 Because the mathematical model may be incomplete, all relevant quantities should be varied to the fullest practicable extent so that the evaluation of uncertainty can be based as much as possible on observed data. Whenever feasible, the use of empirical models of the measurement founded on long-term quantitative data, and the use of check standards and control charts that can indicate if a measurement is under statistical control, should be part of the effort to obtain reliable evaluations of uncertainty. The mathematical model should always be revised when the observed data, including the result of independent determinations of the same measurand, demonstrate that the model is incomplete. A well-designed experiment can greatly facilitate reliable evaluations of uncertainty and is an important part of the art of measurement.
3.4.3 In order to decide if a measurement system is functioning properly, the experimentally observed variability of its output values, as measured by their observed standard deviation, is often compared with the predicted standard deviation obtained by combining the various uncertainty components that characterize the measurement. In such cases, only those components (whether obtained from Type A or Type B evaluations) that could contribute to the experimentally observed variability of these output values should be considered.
3.4.4 In some cases, the uncertainty of a correction for a systematic effect need not be included in the evaluation of the uncertainty of a measurement result. Although the uncertainty has been evaluated, it may be ignored if its contribution to the combined standard uncertainty of the measurement result is insignificant. If the value of the correction itself is insignificant relative to the combined standard uncertainty, it too may be ignored.
3.4.8 Although this Guide provides a framework for assessing uncertainty, it cannot substitute for critical thinking, intellectual honesty and professional skill. The evaluation of uncertainty is neither a routine task nor a purely mathematical one; it depends on detailed knowledge of the nature of the measurand and of the measurement. The quality and utility of the uncertainty quoted for the result of a measurement therefore ultimately depend on the understanding, critical analysis, and integrity of those who contribute to the assignment of its value.
4.1.2 The input quantities X1, X2, ..., XN upon which the output quantity Y depends may themselves be viewed as measurands and may themselves depend on other quantities, including corrections and correction factors for systematic effects, thereby leading to a complicated functional relationship f that may never be written down explicitly. Further, f may be determined experimentally (see 5.1.4) or exist only as an algorithm that must be evaluated numerically. The function f as it appears in this Guide is to be interpreted in this broader context, in particular as that function which contains every quantity, including all corrections and correction factors, that can contribute a significant component of uncertainty to the measurement result.
7.1.4 Although in practice the amount of information necessary to document a measurement result depends on its intended use, the basic principle of what is required remains unchanged: when reporting the result of a measurement and its uncertainty, it is preferable to err on the side of providing too much information rather than too little. For example, one should a) describe clearly the methods used to calculate the measurement result and its uncertainty from the experimental observations and input data; b) list all uncertainty components and document fully how they were evaluated; c) present the data analysis in such a way that each of its important steps can be readily followed and the calculation of the reported result can be independently repeated if necessary; d) give all corrections and constants used in the analysis and their sources. A test of the foregoing list is to ask oneself “Have I provided enough information in a sufficiently clear manner that my result can be updated in the future if new information or data become available?”
[1] BIPM, IEC, IFCC, ISO, IUPAC, IUPAP and OIML (2011). Evaluation of Measurement Data - Supplement 2 to the 'Guide to the Expression of Uncertainty in Measurement' - Extension to any number of output quantities, Joint Committee for Guides in Metrology, Bureau International des Poids et Mesures, JCGM 102
[2] Eichstädt, Arendacká, Link, Elster. Evaluation of measurement uncertainty for time-dependent quantities, European Physical Journal, in press.
[3] Eichstädt, Link, Harris, Elster (2012). Efficient implementation of a Monte Carlo method for uncertainty evaluation in dynamic measurements, Metrologia 49, pp. 401-410
[4] Arendacká, Täubner, Eichstädt, Bruns, Elster (2014). Linear mixed models: GUM and beyond, Measurement Science Review 14, No. 2, pp. 52-61