# Mathematical Modelling and Simulation

Working Group 8.41

# Uncertainty Quantification

A core task in metrology is the determination of measurement uncertainties. By international agreement, measurement uncertainties are evaluated according to the “Guide to the expression of uncertainty in measurement” (GUM) and its supplements. There, the uncertain input quantities of nonlinear problems are treated as random variables, and the fluctuations of the output quantities are obtained by Monte Carlo (MC) sampling. Many applications in metrology, physics and engineering involve partial differential equations (PDEs), which are solved by finite element methods (FEM). Such methods are computationally expensive, so the determination of uncertainties according to the GUM is often infeasible. Our main goal is to develop methods for the determination of uncertainties for computationally expensive systems in compliance with the GUM.

# Introduction

To investigate the effect of uncertain parameters of a PDE on its solutions, a map is typically defined from these parameters to the PDE's solutions. Such a map is called a forward model. Uncertain parameters (input quantities) are described by probability distributions. The aim is to determine the probability distribution of the solutions of the PDE or of derived quantities (output quantities), see Fig. 1.

# Sampling methods

Sampling methods are characterized by drawing randomly from the input distributions and calculating the associated output quantities. To obtain a distribution of the output quantities from this procedure, a large number of repetitions is necessary. The choice of a specific sampling method can reduce the sample size, and in turn the number of forward calculations, significantly. For example, in many applications Latin hypercube sampling is more efficient than plain Monte Carlo sampling and therefore needs less computing time for the same precision/accuracy.
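As a minimal sketch of the idea, the following Python snippet builds a basic Latin hypercube design and a plain Monte Carlo sample of the same size; the stratification (exactly one point per equal-probability interval in each dimension) is what reduces the sampling variance. The function name and the two-dimensional unit-cube setting are illustrative assumptions, not part of the text above.

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """Latin hypercube design on (0,1)^dim: one point per stratum per dimension."""
    # stratified draws: row i lies in the interval [i/n, (i+1)/n)
    u = (np.arange(n)[:, None] + rng.random((n, dim))) / n
    # permute the strata independently per dimension to decouple the columns
    for d in range(dim):
        u[:, d] = rng.permutation(u[:, d])
    return u

rng = np.random.default_rng(0)
n, dim = 100, 2
lhs = latin_hypercube(n, dim, rng)   # stratified design
mc = rng.random((n, dim))            # plain Monte Carlo draws

# each LHS column covers every one of the n strata exactly once
counts = np.histogram(lhs[:, 0], bins=n, range=(0.0, 1.0))[0]
print(counts.min(), counts.max())  # -> 1 1
```

Plain Monte Carlo gives no such guarantee: some strata receive several points and others none, which is why more samples are needed for the same accuracy.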

# Polynomial chaos

Within the polynomial chaos approach, the output quantity of interest is approximated as a weighted sum of orthogonal polynomials of random variables. The weights are defined as integrals over the (multi-dimensional) space of input quantities, with the integrands being a product of the underlying deterministic model function and one of the polynomials. The integrals are evaluated by numerical quadrature, requiring model evaluations at the quadrature nodes.
This approach has very rapid convergence properties (in many cases, low-order polynomials give a good approximation of the model output), meaning that the method can outperform most sampling methods, at least as long as only a few random parameters are considered. Thus, the method is also applicable to computationally expensive systems as they occur, for example, in computational fluid dynamics.
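The procedure above can be sketched for a toy one-dimensional case: a standard normal input, probabilists' Hermite polynomials as the orthogonal basis, and Gauss-Hermite quadrature to compute the projection integrals. The model function `model` is a stand-in for an expensive deterministic forward model, not anything from the text.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def model(x):
    # stand-in for an expensive deterministic model, y = f(x)
    return np.exp(0.3 * x)

# Gauss-Hermite quadrature nodes/weights for a standard normal input X ~ N(0, 1)
nodes, weights = hermegauss(12)
weights = weights / np.sqrt(2.0 * np.pi)  # normalize to a probability measure

order = 4
# projection weights: c_k = E[f(X) He_k(X)] / k!, since E[He_k(X)^2] = k!
coeffs = np.array([
    np.sum(weights * model(nodes) * hermeval(nodes, np.eye(order + 1)[k]))
    / math.factorial(k)
    for k in range(order + 1)
])

# statistics of the output follow directly from the expansion coefficients
mean = coeffs[0]
var = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print(mean, var)  # close to the exact values exp(0.045) ≈ 1.046 and ≈ 0.103
```

Note the low cost: 12 model evaluations suffice here, whereas a Monte Carlo estimate of the variance to comparable accuracy would need many thousands of runs.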

# Surrogate models

Surrogate models approximate the forward model itself rather than reducing the number of forward evaluations. In the case of PDEs, solutions are interpolated within a certain parameter range by nonlinear functions. In particular, for certain input quantities (training points) the output quantities are calculated rigorously using the forward model. From these results a surrogate model for a certain parameter range is constructed. The surrogate model connects the input quantities to the output quantities by simple functions. With this method a speed-up of the computation by several orders of magnitude is possible. Fig. 3 shows a sketch of the determination of the diffraction pattern of a nano-grating (scatterometry). The upper path depicts the rigorous FEM calculation of the electromagnetic waves (computing time 2 min). The lower path depicts the replacement by the surrogate model (less than one second). The pre-calculation time to construct the surrogate model was about 20 h.
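The offline/online split can be illustrated with a small sketch: a smooth two-parameter function stands in for the FEM forward model, a radial basis function interpolant plays the role of the surrogate. The function `forward_model`, the parameter range and the kernel choice are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def forward_model(params):
    """Stand-in for an expensive FEM solve; params has shape (n, 2)."""
    x, y = params[:, 0], params[:, 1]
    return np.sin(3 * x) * np.cos(2 * y) + 0.1 * x * y

# offline stage: rigorous evaluations at a modest number of training points
rng = np.random.default_rng(1)
train_in = rng.uniform(-1.0, 1.0, size=(200, 2))
train_out = forward_model(train_in)
surrogate = RBFInterpolator(train_in, train_out, kernel="thin_plate_spline")

# online stage: cheap surrogate calls replace the forward model
test_in = rng.uniform(-1.0, 1.0, size=(1000, 2))
err = np.max(np.abs(surrogate(test_in) - forward_model(test_in)))
print(err)  # small interpolation error over the trained parameter range
```

The same pattern applies when each training evaluation takes minutes (as in the scatterometry example): all expensive solves happen once in the pre-calculation, after which every surrogate call is essentially free.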

# Inverse Problems

For indirect measurements, the determination of uncertainties requires the solution of a statistical inverse problem. A flexible method to solve a statistical inverse problem that allows for the inclusion of prior knowledge is the Bayesian approach. The inclusion of prior knowledge is advantageous in metrology, since it makes it possible to combine different measurements to reduce measurement uncertainties (hybrid metrology). Bayes' theorem provides the basis for this approach: prior knowledge about a measurand, given by the distribution $\pi_0$, is combined with the information from the measurement (likelihood function $\mathcal{L}$) to update the knowledge about the desired quantity (posterior distribution $\pi$), i.e.,

$\pi (\theta;{\bf y} ) = \frac{\mathcal{L}(\theta;{\bf y})\pi_0(\theta)}{\int \mathcal{L}(\theta;{\bf y})\pi_0(\theta) d\theta }.$

The calculation of the output distribution (posterior) requires a large number of evaluations of the forward model. For computationally expensive problems this is very time consuming, which makes the direct calculation of uncertainties infeasible. However, by using prior knowledge, output distributions can be approximated efficiently and uncertainties can be calculated.
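A minimal random-walk Metropolis sketch shows how the posterior above is sampled in practice. The toy forward model, the Gaussian likelihood and prior, and all numerical values are illustrative assumptions; in a real application, `forward` would be the (possibly surrogate-accelerated) PDE solver.

```python
import numpy as np

def forward(theta):
    # toy scalar forward model; a real one would solve a PDE
    return np.array([theta, theta ** 2])

def log_posterior(theta, y, sigma, prior_mean, prior_std):
    """Unnormalized log posterior: Gaussian likelihood times Gaussian prior."""
    log_like = -0.5 * np.sum((y - forward(theta)) ** 2) / sigma ** 2
    log_prior = -0.5 * ((theta - prior_mean) / prior_std) ** 2
    return log_like + log_prior

# synthetic measurement y with noise level sigma
rng = np.random.default_rng(2)
theta_true, sigma = 1.5, 0.05
y = forward(theta_true) + sigma * rng.normal(size=2)

# random-walk Metropolis: propose, then accept with probability min(1, ratio)
samples, theta = [], 0.0
lp = log_posterior(theta, y, sigma, prior_mean=1.0, prior_std=1.0)
for _ in range(20000):
    prop = theta + 0.1 * rng.normal()
    lp_prop = log_posterior(prop, y, sigma, prior_mean=1.0, prior_std=1.0)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5000:])  # discard burn-in
print(post.mean(), post.std())   # posterior mean near 1.5, small spread
```

Each iteration costs one forward evaluation, which is exactly why replacing the rigorous solver by a surrogate (previous section) is what makes such chains affordable for expensive models.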


# Flow

Since the polynomial chaos method requires only a small number of evaluations of the underlying deterministic problem, it is well-suited for computationally expensive problems like CFD (computational fluid dynamics).

In cooperation with Working Group 7.52, the influence of disturbed inflow profiles on the measurement result of a single-beam ultrasonic flow meter has been investigated using the polynomial chaos method in conjunction with CFD. The disturbed profiles were produced by an out-of-plane double bend. This case is especially important for metrology since, in practice, one bend usually follows another, which can lead to large measurement errors.

Fig. 1 shows the expected error of the volume flow measured by the simulated measurement device (cyan) as well as its standard deviation (red). Additionally, the maximal deviation of the prediction from the exact volume flow is shown (blue). One can see that, in the considered configuration, the flow rate is underestimated on average. With increasing distance between the double bend and the measurement device, the mean error decreases from around 4% to almost 0%.

# Scatterometry

The second application deals with the statistical inverse problem in scatterometry. Scatterometry is an indirect optical measurement method used to measure periodic nanostructures on surfaces. In particular, critical dimensions and sidewall angles of lines on a photomask can be measured nondestructively. Their precise production and measurement are of particular interest in the semiconductor industry.

Fig. 5 shows a cross-section of a realistic EUV mask used for the simulations. In our case the forward model is given by the map from the line geometry onto the diffraction pattern. Rigorous calculations are performed by a standard FEM solver (DIPOG-WIAS). For the geometry parameters line width, line height and sidewall angle, a prior distribution $\pi_0$ was chosen, and a surrogate model based on the polynomial chaos method was constructed. The associated posterior distribution could then be determined within several hours using a Markov chain Monte Carlo method (compared with a computing time of half a year if the rigorous method were used). Fig. 6 depicts the distributions of the geometry parameters; green dashed lines indicate reference values.


# Publications


### Article

S. Heidenreich, H. Gross and M. Bär: Bayesian approach to the statistical inverse problem of scatterometry: Comparison of three surrogate models. International Journal for Uncertainty Quantification, 2015, p. 511. DOI: 10.1615/Int.J.UncertaintyQuantification.2015013050
