Information Entropy-Based Decision Making in Optimization

Bibliographic details
Main author: Schmidt, Tobias Christian
Other authors: Ries, H. (Prof. Dr.) (supervisor, doctoral thesis)
Format: Dissertation
Language: English
Published: Philipps-Universität Marburg, 2009
Online access: PDF full text
Description
Summary: This thesis connects the optimization of stochastic functions with information theory in a new way. New methods for the optimization of stochastic functions are proposed. These methods have been developed with regard to the optimization of non-imaging optical systems whose performance must be evaluated via Monte Carlo ray tracing, but they can be applied to other stochastic merit functions. A function is stochastic if its values cannot be calculated directly; instead, probability distributions for the values must be derived from the results of random experiments. The elements of a stochastic function's domain are termed configurations. The core idea of this thesis is to base the decisions made during an optimization on a criterion derived from the concept of information entropy, which leads to very efficient optimization methods. Efficiency here is the ratio of the gain in information concerning the optimum to the effort invested. Information entropy is used to measure the information content of the data collected during the optimization, and decision criteria are based on this information measure. For each configuration, the number of random experiments is adjusted on demand according to these criteria: for each option under consideration, the expected information gain is evaluated, and the option with the largest expected information gain is chosen. Applying this principle, three methods for the optimization of stochastic functions are developed, each suitable for a specific class of optimization problems. The concept of information entropy is also applied to ranking and selection procedures, whose purpose is to guide the selection of a subset of good alternatives from a finite set of alternatives.
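The decision principle described in the abstract can be illustrated with a small, self-contained sketch. The following Python example is not the thesis's actual algorithm; it assumes a simplified setting in which each configuration's merit is a Bernoulli success probability (for instance, the fraction of traced rays that reach a target), models it with a Beta posterior, and allocates the next random experiment to the configuration whose additional trial gives the largest expected reduction in the entropy of the distribution over which configuration is best. All names, priors, and modeling choices below are illustrative assumptions.

    # Illustrative sketch (not the thesis's algorithm): one-step-lookahead
    # allocation of random experiments by expected information gain.
    import numpy as np

    rng = np.random.default_rng(0)

    def prob_best(successes, trials, n_draws=50_000):
        """Monte Carlo estimate of P(configuration i has the highest merit)."""
        a = 1.0 + successes              # Beta posterior parameters under a
        b = 1.0 + trials - successes     # uniform prior (assumption)
        draws = rng.beta(a[:, None], b[:, None], size=(len(a), n_draws))
        winners = np.argmax(draws, axis=0)
        return np.bincount(winners, minlength=len(a)) / n_draws

    def entropy(p):
        """Shannon entropy in nats, ignoring zero-probability entries."""
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def expected_information_gain(successes, trials, j):
        """Expected entropy reduction from one more random experiment at j."""
        h_now = entropy(prob_best(successes, trials))
        p_success = (1.0 + successes[j]) / (2.0 + trials[j])  # predictive mean
        h_after = 0.0
        for outcome, weight in ((1, p_success), (0, 1.0 - p_success)):
            s, t = successes.copy(), trials.copy()
            s[j] += outcome
            t[j] += 1
            h_after += weight * entropy(prob_best(s, t))
        return h_now - h_after

    # Toy data: observed successes / trials for three configurations.
    successes = np.array([42.0, 45.0, 30.0])
    trials = np.array([60.0, 60.0, 40.0])

    gains = [expected_information_gain(successes, trials, j)
             for j in range(len(trials))]
    print("expected information gain per configuration:", np.round(gains, 4))
    print("sample next at configuration:", int(np.argmax(gains)))

Because the entropies are themselves Monte Carlo estimates, the per-trial gains are small and noisy; in practice one would use more posterior draws, common random numbers, or batch allocations. The sketch only illustrates the general pattern the abstract describes: measure the information content of the data collected so far, evaluate the expected information gain of each available option, and choose the option with the largest expected gain.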
Physical description: 121 pages
DOI: 10.17192/z2010.0078