Numerical Methods of Optimum Experimental Design Based on a Second-Order Approximation of Confidence Regions
Mathematik und Informatik
Summary: A successful application of model-based simulation and optimization of dynamic processes requires an accurate calibration of the underlying mathematical models.
Here, a fundamental task is the estimation of unknown, nature-given model coefficients from real observations. After an appropriate numerical treatment of the differential systems, the parameters can be estimated as the solution of a finite-dimensional nonlinear constrained parameter estimation problem. Because measurements always contain errors, the resulting parameter estimate cannot be regarded as definitive, and a sensitivity analysis is required to quantify its statistical accuracy.
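To make the setting concrete, the following sketch fits a simple exponential-decay model by the Gauss-Newton method and computes the linearized covariance matrix C = sigma^2 (J^T J)^(-1) used in the standard sensitivity analysis. The model, data, and function names are illustrative assumptions, not the test problems of the thesis, and constraints are omitted for brevity.

```python
import numpy as np

def model(p, t):
    # Hypothetical model response: y(t) = p0 * exp(-p1 * t)
    return p[0] * np.exp(-p[1] * t)

def jacobian(p, t):
    # Analytic sensitivities dy/dp, stacked as an (m x 2) matrix
    e = np.exp(-p[1] * t)
    return np.column_stack((e, -p[0] * t * e))

def gauss_newton(t, y, p0, tol=1e-10, max_iter=50):
    # Gauss-Newton for the unconstrained least-squares problem
    # min_p ||y - model(p, t)||^2; each increment solves a linearized problem.
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = y - model(p, t)                       # residual vector
        J = jacobian(p, t)
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + dp
        if np.linalg.norm(dp) < tol:
            break
    # Linearized covariance C = sigma^2 (J^T J)^{-1}, with sigma^2
    # estimated from the residual sum of squares at the solution.
    r = y - model(p, t)
    J = jacobian(p, t)
    sigma2 = (r @ r) / (len(y) - len(p))
    C = sigma2 * np.linalg.inv(J.T @ J)
    return p, C
```

With noisy data generated from known parameters, the estimate lands close to the truth, and the diagonal of C provides approximate parameter variances.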
The goal of the design of optimal experiments is to identify those measurement times and experimental conditions that allow a parameter estimate of maximal statistical accuracy. The design of optimal experiments can itself be formulated as an optimization problem, whose objective function is a suitable quality criterion based on the sensitivity analysis of the parameter estimation problem.
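One common quality criterion is the A-criterion, the trace of the linearized covariance matrix. The sketch below evaluates it over candidate measurement-time subsets of a hypothetical exponential model by brute force; this is only meant to illustrate the optimization viewpoint, since practical experimental design uses derivative-based solvers rather than enumeration.

```python
import numpy as np
from itertools import combinations

def sensitivities(p, t):
    # Sensitivities of the hypothetical model y = p0 * exp(-p1 * t)
    # with respect to the parameters, at a nominal parameter guess p.
    e = np.exp(-p[1] * t)
    return np.column_stack((e, -p[0] * t * e))

def a_criterion(J):
    # A-criterion: trace of the (unscaled) linearized covariance (J^T J)^{-1}.
    return np.trace(np.linalg.inv(J.T @ J))

def best_design(p, candidates, n_meas):
    # Brute-force search over all n_meas-subsets of candidate measurement
    # times, keeping the subset with the smallest A-criterion value.
    best, best_val = None, np.inf
    for subset in combinations(candidates, n_meas):
        val = a_criterion(sensitivities(p, np.array(subset)))
        if val < best_val:
            best, best_val = subset, val
    return best, best_val
```

Because the sensitivities depend on the nominal parameter guess, the resulting design is only locally optimal, which is precisely the motivation for the robustified criteria developed below.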
In this thesis, we develop a quadratic sensitivity analysis to enable a better assessment of the statistical accuracy of a parameter estimate in the case of highly nonlinear model functions. The newly introduced sensitivity analysis is based on a quadratically approximated confidence region, which extends the commonly used linearized confidence region. The quadratically approximated confidence region is analyzed extensively and adequate bounds are established. It is shown that exact bounds of the quadratic components can be obtained by solving symmetric eigenvalue problems. One main result of this thesis is that the quadratic part is essentially bounded by two Lipschitz constants, which also characterize the convergence properties of the Gauss-Newton method. This bound can also be used to estimate the approximation error, and hence the validity, of the linearized confidence regions. Furthermore, we compute a quadratic approximation of the covariance matrix, which offers another possibility for the statistical assessment of the solution of a parameter estimation problem. The good approximation properties of the newly introduced sensitivity analysis are illustrated in several numerical examples.
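The role of the symmetric eigenvalue problems can be illustrated with a generic quadratic form: the exact bounds of q(x) = x^T A x over the unit sphere are the extremal eigenvalues of the symmetrized matrix. The matrix below is a random stand-in, not the actual second-order tensor of the confidence-region expansion.

```python
import numpy as np

def quadratic_form_bounds(A):
    # Exact bounds of q(x) = x^T A x over the unit sphere ||x|| = 1 are the
    # smallest and largest eigenvalues of the symmetrized matrix, obtained
    # from a standard symmetric eigenvalue problem.
    w = np.linalg.eigvalsh(0.5 * (A + A.T))
    return w[0], w[-1]

# Demonstration on a random symmetric matrix: sampled values of the
# quadratic form never leave the computed interval [lo, hi].
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = 0.5 * (B + B.T)
lo, hi = quadratic_form_bounds(A)
for _ in range(200):
    x = rng.standard_normal(5)
    x /= np.linalg.norm(x)
    assert lo - 1e-10 <= x @ A @ x <= hi + 1e-10
```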
In order to robustify the design of optimal experiments, we develop a new objective function, the Q-criterion, based on the introduced sensitivity analysis. In addition to the trace of the linear approximation of the covariance matrix, the Q-criterion comprises the above-mentioned Lipschitz constants. Here, we focus in particular on the numerical computation of an adequate approximation of these constants. The robustness properties of the new objective function with respect to parameter uncertainties are investigated and compared to a worst-case formulation of the design of optimal experiments problem. It is revealed that the Q-criterion covers the worst-case approach to the design of optimal experiments based on the A-criterion. Moreover, the properties of the new objective function are examined in several examples, where it becomes evident that the Q-criterion leads to a drastic improvement of the Gauss-Newton convergence rate in the subsequent parameter estimation.
Furthermore, in this thesis we consider efficient and numerically stable methods of parameter estimation and of the design of optimal experiments for the treatment of multiple-experiment parameter estimation problems. For parameter estimation and sensitivity analysis, we propose a parallel computation of the Gauss-Newton increments and the covariance matrix based on orthogonal decompositions. Concerning the design of optimal experiments, we develop a parallel approach to compute the trace of the covariance matrix and its derivative.
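A minimal sketch of the orthogonal-decomposition idea for multiple experiments, under the simplifying assumption of an unconstrained problem: each experiment's block (J_i, r_i) is reduced independently by a QR factorization (the parallelizable step), after which one small QR of the stacked triangular factors yields the same Gauss-Newton increment as the full stacked least-squares problem, without ever forming normal equations.

```python
import numpy as np

def gn_increment_multi(J_list, r_list):
    # Gauss-Newton increment for the multiple-experiment problem
    # min_dp sum_i ||J_i dp - r_i||^2, using only orthogonal decompositions.
    # Step 1 (independent per experiment, hence parallelizable): reduce each
    # block (J_i, r_i) to a small triangular system via a thin QR.
    reduced = []
    for J, r in zip(J_list, r_list):
        Q, R = np.linalg.qr(J)            # shapes (m_i, n) and (n, n)
        reduced.append((R, Q.T @ r))
    # Step 2: one small QR on the stacked triangular factors; since
    # R_stack^T R_stack = sum_i J_i^T J_i, this reproduces the increment
    # of the combined problem.
    R_stack = np.vstack([R for R, _ in reduced])
    b_stack = np.concatenate([b for _, b in reduced])
    Q2, R2 = np.linalg.qr(R_stack)
    return np.linalg.solve(R2, Q2.T @ b_stack)
```

The result agrees with solving the fully stacked least-squares system directly, while the per-experiment reductions can run concurrently.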