Information Entropy-Based Decision Making in Optimization


Full Description

Bibliographic Details
Main Author: Schmidt, Tobias Christian
Other Authors: Ries, H. (Prof. Dr.) (Thesis advisor)
Format: Dissertation
Language: English
Published: Philipps-Universität Marburg, 2009
Online Access: Full text PDF
Description
Summary: This thesis connects the optimization of stochastic functions and information theory in a new way. New methods for the optimization of stochastic functions are proposed. These methods were developed with regard to the optimization of non-imaging optical systems whose performance must be evaluated via Monte Carlo ray tracing, but they can be applied to other stochastic merit functions. A function is stochastic if its values cannot be calculated directly; instead, probability distributions for the values must be derived from the results of random experiments. The elements of a stochastic function's domain are termed configurations. The core idea of this thesis is to base decisions made during an optimization on a criterion derived from the concept of information entropy. This leads to very efficient optimization methods, where efficiency is the ratio of the gain in information concerning the optimum to the effort invested. The information entropy measures the information content of the data collected during the optimization, and decision criteria are based on this information measure. Using these criteria, the number of random experiments spent on each configuration is adjusted according to demand. For each option under consideration, the expected information gain is evaluated, and the option with the largest expected information gain is chosen. Applying this principle, three methods for the optimization of stochastic functions are developed, each suited to a specific class of optimization problems. The concept of information entropy is also applied to ranking and selection procedures, whose purpose is to guide the selection of a subset containing several good alternatives from a finite set of alternatives.
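The selection principle summarized above (spend the next random experiment on the option with the largest expected information gain) can be illustrated with a small sketch. The sketch below is not the author's algorithm; it assumes a deliberately simplified setting in which each configuration is evaluated by Bernoulli trials with Beta posteriors, and the Shannon entropy of the "which configuration is best" distribution serves as the information measure. All names, parameters, and the probabilistic model are assumptions made for this illustration.

    # Minimal sketch (assumed setting, not the thesis' actual method):
    # entropy-guided allocation of Monte Carlo trials over a finite set of
    # configurations, each modelled as a Bernoulli success rate with a Beta posterior.
    import numpy as np

    rng = np.random.default_rng(0)

    def p_best(alpha, beta, n_samples=5000):
        # Posterior probability that each configuration has the highest success
        # rate, estimated by sampling from the Beta posteriors.
        draws = rng.beta(alpha, beta, size=(n_samples, len(alpha)))
        winners = np.argmax(draws, axis=1)
        return np.bincount(winners, minlength=len(alpha)) / n_samples

    def entropy(p):
        # Shannon entropy (in bits) of a discrete distribution.
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def expected_info_gain(alpha, beta, i, h_now):
        # Expected drop in entropy of the "which configuration is best"
        # distribution if one more random experiment is spent on configuration i.
        p_success = alpha[i] / (alpha[i] + beta[i])   # posterior predictive probability
        gain = 0.0
        for outcome, prob in ((1, p_success), (0, 1.0 - p_success)):
            a, b = alpha.copy(), beta.copy()
            a[i] += outcome
            b[i] += 1 - outcome
            gain += prob * (h_now - entropy(p_best(a, b)))
        return gain

    true_rates = np.array([0.30, 0.45, 0.50])  # hypothetical, unknown to the optimizer
    alpha = np.ones(3)   # Beta(1, 1) priors: successes + 1
    beta = np.ones(3)    # failures + 1

    for _ in range(100):
        h_now = entropy(p_best(alpha, beta))
        # Spend the next trial where the expected information gain is largest.
        i = int(np.argmax([expected_info_gain(alpha, beta, j, h_now) for j in range(3)]))
        outcome = rng.random() < true_rates[i]
        alpha[i] += outcome
        beta[i] += 1 - outcome

    print("posterior P(best):", np.round(p_best(alpha, beta), 3))
    print("trials per configuration:", (alpha + beta - 2).astype(int))

In the thesis itself, the principle is applied to merit functions evaluated by Monte Carlo ray tracing and to ranking and selection procedures; the Bernoulli/Beta model here serves only to show how expected information gain can drive the allocation of random experiments.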
Physical Description: 121 pages
DOI: 10.17192/z2010.0078