Asymptotics for selected Risk Measures under general assumptions

Bibliographic Details
Main Author: Zwingmann, Tobias
Contributors: Holzmann, Hajo (Prof. Dr.) (doctoral thesis supervisor)
Format: Dissertation
Language: English
Published: Philipps-Universität Marburg, 2018
Online Access: PDF full text
Description
Summary: The first questions on reading the title might be: what is risk, and how can we measure it, especially in practice? It is widely accepted that, when considering a real-world mechanism, assertions about its future state have to be probabilistic in nature. There is thus the potential for deviations from the expected outcome of the mechanism, which we call risk. With the aid of risk measures these deviations can be quantified, for example to ground decision-making on that quantification. In practice, the desired values have to be estimated from observations of the mechanism under consideration. One question we address in this work is how certain estimators of risk measures behave in statistical terms. More precisely, we choose three useful risk measures and estimators thereof, for which we prove (functional) central limit theorems in non-standard situations, laying the basis for further statistical examination.

Considering applications in financial markets as an example, the properties of a chosen risk measure should reflect agreed principles of risk. First, if we hold no asset, there should be no risk: the risk measure should assign 0 to the empty position. This is called normalization. Second, if an asset has a guaranteed return, adding it to our portfolio should decrease the risk by that secure return; this is called translation invariance. Third, if one asset always yields better returns than another, the first ought to carry a lower risk. We call this monotonicity of the risk measure. These properties do not yet capture one of the most important principles in economics, the diversification principle: the risk of two assets held together should not exceed the sum of the risks of the individual ones. Mathematically this is the sub-additivity of the risk measure at hand. Additionally, if we buy a further share of an asset, the risk should scale with that proportion; the risk measure is then positively homogeneous. A risk measure having all five of these properties is called coherent. A last property we want to mention here is comonotonic additivity. Assume we hold two positions that always evolve in the same direction, meaning one gains value if and only if the other does (not necessarily by the same amount), and similarly for losing value. Then we should not be able to exploit the diversification principle, as the two evolve too much alike; the risk of the two positions combined should therefore equal the sum of their individual risks.

Perhaps one of the most common risk measures, besides the mean and the variance, is the so-called Value at Risk; historically it was probably the first in wide use. This measure returns a value beyond which losses are suffered only with a fixed probability; the latter is called the (confidence) level of the Value at Risk. From the mathematical point of view, the Value at Risk is a certain quantile of the profit-and-loss distribution. Its importance stems, among other things, from the fact that the Basel II and III frameworks explicitly incorporate it and regulate the calculation of Value at Risk models within banks. In addition, the Value at Risk can be characterized mathematically as the unique minimum of a deterministic function (it is elicitable), which opens the path to many statistical tools; this fact is also important for us in the course of the thesis.
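As a point of reference, the following is a standard textbook formulation in common notation, not quoted from the thesis; here X denotes the profit-and-loss variable with distribution function F, and sign conventions for profits and losses vary across the literature. The Value at Risk at level \(\alpha \in (0,1)\) is the lower \(\alpha\)-quantile of F and, at the same time, a minimizer of an expected asymmetric absolute loss:

\[
\operatorname{VaR}_\alpha(X) \;=\; \inf\{x \in \mathbb{R} : F(x) \ge \alpha\}
\;\in\; \operatorname*{arg\,min}_{x \in \mathbb{R}} \mathbb{E}\bigl[\rho_\alpha(X - x) - \rho_\alpha(X)\bigr],
\qquad
\rho_\alpha(u) = u\bigl(\alpha - \mathbf{1}\{u < 0\}\bigr).
\]

Since \(\rho_\alpha\) is Lipschitz, subtracting \(\rho_\alpha(X)\) inside the expectation keeps it finite without any moment assumption on X; the minimizer is unique exactly when the \(\alpha\)-quantile is.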
The function to be minimized is called a scoring function in general; for the Value at Risk, the apt name check function appears in the literature. On the other hand, there is a major drawback to the Value at Risk: it is not coherent, as it is not always sub-additive, and it could therefore discourage diversification.

This leads us to the second measure investigated in the thesis, the Expected Shortfall, which the Basel Committee on Banking Supervision also recommends. The Expected Shortfall at a chosen level is the average of the Value at Risk over all levels up to that fixed level. In some cases, for example for continuous distributions, it equals the expected loss of the profit-and-loss distribution given that the loss exceeds the Value at Risk at that level. This risk measure turns out to be coherent, but it is unfortunately not accessible by minimizing a scoring function as in the quantile case; comparative backtesting of the Expected Shortfall, in particular, is therefore questionable. The good news is that the situation changes when Value at Risk and Expected Shortfall are considered simultaneously: recently, a scoring function for the bivariate risk measure (Value at Risk, Expected Shortfall) was constructed.

A major part of the thesis works with this pair of risk measures and generalizations thereof, using the respective scoring function to deduce a central limit theorem for the empirical versions of the risk measures. For the pair (Value at Risk, Expected Shortfall) this is done under weak conditions on the first component; in particular, the standard assumption that the underlying distribution function has an existing, strictly positive derivative at the Value at Risk is dropped. The generalizations of (Value at Risk, Expected Shortfall) considered in the present work are twofold: first, the Expected Shortfall is a special case of so-called spectral risk measures; second, it can be seen as a Bayes risk. Both paths are detailed in the course of the thesis.

If we want to combine the benefits of the Value at Risk and the Expected Shortfall, we are led directly to the expectile at some chosen confidence level. This risk measure is, on the one hand, identifiable as the unique minimum of a scoring function and, on the other hand, coherent; in fact, the expectile is the only risk measure enjoying both properties. The expectile is the threshold at which the expected deviations of the distribution above and below it balance, with upward and downward deviations weighted differently. So while the Value at Risk does not take the size of losses beyond its threshold into account, and the Expected Shortfall considers only high losses, the expectile takes both high and low losses into account, giving them different weights. In practice it is not directly obvious why a risk measure should account for both high and low outcomes, which is the major criticism of the expectile; it can, however, be justified in financial terms by regarding high profits as subject to tax and high losses as a tax shield. In the thesis we consider the expectile at several levels simultaneously, showing a functional central limit theorem for the empirical estimator of the expectile curve under weak assumptions on the underlying profit-and-loss distribution.
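For concreteness, the following are again standard formulations rather than quotations from the thesis, with the same caveat on tail and sign conventions. The Expected Shortfall at level \(\alpha\) averages the Value at Risk over the tail, and the expectile at level \(\tau \in (0,1)\) minimizes an asymmetrically weighted squared loss:

\[
\operatorname{ES}_\alpha(X) = \frac{1}{\alpha} \int_0^{\alpha} \operatorname{VaR}_u(X)\,\mathrm{d}u,
\qquad
e_\tau(X) = \operatorname*{arg\,min}_{x \in \mathbb{R}} \mathbb{E}\bigl[\eta_\tau(X - x) - \eta_\tau(X)\bigr],
\quad
\eta_\tau(u) = \bigl\lvert \tau - \mathbf{1}\{u < 0\}\bigr\rvert\, u^2.
\]

The expectile objective is strictly convex, so its minimizer is unique; subtracting \(\eta_\tau(X)\) inside the expectation makes it finite as soon as X has a finite first moment.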
Physical Description: 188 pages
DOI: 10.17192/z2019.0233