
Friends or Foes: A Story of Value at Risk and Expected Tail Loss






Presentation Transcript


  1. Friends or Foes: A Story of Value at Risk and Expected Tail Loss, 14th Dubrovnik Economic Conference, June 25 - 29, 2008, Dubrovnik, Croatia, organized by the Croatian National Bank. Dr.sc. Saša Žiković, Faculty of Economics Rijeka

  2. Motivation • With the latest market turmoil stemming from the US sub-prime mortgage crisis, it is clear that there is a need for an approach that comes to terms with the problems posed by extreme event estimation. • VaR is not a “coherent” risk measure because it does not necessarily satisfy the sub-additivity condition. (Sub-additivity: a portfolio risks at most the sum of the amounts risked by its sub-portfolios.)

  3. Motivation • VaR provides no handle on the extent of the losses that might be suffered beyond a certain threshold. VaR is incapable of distinguishing between situations where losses in the tail are only a bit worse and those where they are overwhelming. • An alternative measure that is coherent and quantifies the losses that might be encountered in the tail is the expected tail loss (ETL). • Since the introduction of coherent risk measures by Artzner et al. (1999) and the new dawn of measuring extreme losses, it seems as if the academic community is willing to sacrifice all the advances made in the field of measuring VaR.

  4. Motivation • The field of ETL estimation and model comparison is only beginning to develop and there is an obvious lack of empirical research. • VaR and ETL are inherently connected: ETL figures can easily be calculated from the VaR surface of the tail. • Advances that have been made in VaR estimation should not be lost with the adoption of coherent risk measures into the regulatory framework. • Superior VaR techniques can be employed to yield superior ETL forecasts. • VaR and ETL should be regarded as partners, not rivals.

  5. Contribution of the paper • Add to currently scarce literature on ETL empirical testing and model comparison. • Validating risk measurement models based on both their VaR and ETL performance. • Connecting VaR and ETL models. • Developing a new hybrid ETL model based on advanced VaR modelling techniques. • Developing a new loss function for evaluating ETL forecasts.

  6. Literature review • Artzner et al. (1999) introduced the Expected Shortfall risk measure, which equals the expected value of the loss, given that a VaR violation has occurred. • Yamai and Yoshiba (2002) compared the two measures and argued that VaR is not reliable during market turmoil, whereas ETL can be a better choice overall. • Angelidis and Degiannakis (2007) test the performance of various parametric VaR and ETL models. They find that different volatility models are “optimal” for different assets. • Although ETL is a superior risk measure to VaR, it lacks the depth of theoretical and empirical research that VaR has. Investigation into the theoretical properties of ETL is still in its early stages.

  7. Value at Risk (VaR) VaR is usually defined as: “VaR is the maximum potential loss that a portfolio can suffer within a fixed confidence level (cl) during a holding period.” This definition can be misleading because VaR does not represent the “maximum” loss - a portfolio can lose much more than suggested by VaR, depending on the shape of the tail of the distribution.

  8. Value at Risk (VaR) A better definition of VaR: “VaR is the minimum potential loss that a portfolio can suffer in the 100(1-cl)% worst cases during a holding period.” OR “VaR is the maximum potential loss that a portfolio can suffer in the 100(1-cl)% best cases during a holding period.” Is VaR the most appropriate measure to describe the risks associated with holding a certain position?
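In quantile terms (a standard restatement, using the same loss convention as the ETL formula on slide 13), both readings describe the cl-quantile of the loss distribution of X:

\[
\mathrm{VaR}_{cl}(X) \;=\; \inf\{\, x \in \mathbb{R} : P(X \le x) \ge cl \,\}.
\]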

  9. Value at Risk (VaR) • Advantages of VaR over traditional measures of risk: - VaR applies to any financial instrument and can be expressed in any chosen unit of measure. More traditional measures, such as the “greeks”, are measures created ad hoc for specific instruments or risk variables and are expressed in different units. - VaR includes an estimate of future events and allows the risk of the portfolio to be expressed in a single number. Unlike VaR, the “greeks” amount to a “what if” risk measure that does not make any connection between the probability and severity of future events.

  10. Coherent risk measures • A coherent risk measure ρ assigns to each position X (its random payoff) a risk measure ρ(X) such that the following conditions are satisfied: ρ(tX) = tρ(X), t ≥ 0 (homogeneity) ρ(X) ≥ ρ(Y), if X ≤ Y (monotonicity) ρ(X + n) = ρ(X) - n (risk-free condition) ρ(X + Y) ≤ ρ(X) + ρ(Y) (sub-additivity) These conditions guarantee that the risk function is convex, which in turn corresponds to risk aversion: ρ(tX + (1 - t)Y) ≤ tρ(X) + (1 - t)ρ(Y)
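The convexity claim follows directly from sub-additivity and positive homogeneity, for 0 ≤ t ≤ 1:

\[
\rho\big(tX + (1-t)Y\big) \;\le\; \rho(tX) + \rho\big((1-t)Y\big) \;=\; t\,\rho(X) + (1-t)\,\rho(Y).
\]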

  11. Coherent risk measures • VaR is not a coherent risk measure because it does not necessarily satisfy the sub-additivity condition. VaR can only be guaranteed to be sub-additive under the usually implausible assumption that returns are normally distributed. • For a sub-additive measure, which ETL is, portfolio diversification always leads to risk reduction, while for VaR, diversification may produce an increase in its value even when partial risks are triggered by mutually exclusive events.

  12. Coherent risk measures Sub-additivity matters because: • adding risks together would give an overestimate of combined risk - a sum of risks can be used as a conservative estimate of combined risk. • if regulators use non-sub-additive risk measures to set capital requirements, a bank might be tempted to break itself up to reduce its regulatory capital requirements. • non-sub-additive risk measures can inspire traders to break up their accounts, with separate accounts for separate risks, in order to reduce their margin requirements.

  13. Coherent risk measures • VaR provides no handle on the extent of the losses that might be suffered beyond the threshold amount. • VaR is incapable of distinguishing between situations where losses in the tail are only a bit worse, and those where they are overwhelming. • ETL quantifies the losses that might be encountered in the tail. “ETL is the expected value of the loss of the portfolio in the 100(1-cl)% worst cases during a holding period.“ ETLcl(X) = E[X | X ≥ VaRcl(X)]
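For a continuous loss distribution this conditional expectation equals the average of all VaRs deeper in the tail - a standard identity, and the one exploited later by the “average-tail VaR” algorithm:

\[
\mathrm{ETL}_{cl}(X) \;=\; \frac{1}{1-cl}\int_{cl}^{1} \mathrm{VaR}_{u}(X)\, du .
\]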

  14. VaR and ETL

  15. Advantages of ETL over VaR • Example 1: • We are faced with two different portfolios, one of which is clearly riskier than the other. Despite this, VaR tells us that we face exactly the same risk when investing in these two portfolios. This is because VaR ignores extreme losses if they are rare enough. • Because ETL weights extreme losses it can differentiate between such portfolios and correctly identify the riskier one. ETL is far more sensitive to extreme events, no matter how rare they are.

  16. Advantages of ETL over VaR • Example 2: • VaR paradox: negative VaR values - are securities with low enough probabilities of losses truly risk free? • If a bond has a default probability of 2% and we are using a 95% VaR as our risk management tool, we are convinced that this bond can bring us only profit without any loss, because our VaR < 0! • By only looking at VaR we would be convinced that this bond is completely risk free.
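A minimal numerical sketch of this paradox (and of the sub-additivity failure from slides 10-12). The face value of 100, coupon of 5 and the independent 2% default probability are illustrative assumptions, not figures from the paper.

```python
# Sketch of the VaR paradox: a bond with a 2% default probability looks "risk free"
# under 95% VaR (VaR < 0), while ETL still reports the default risk. A book of three
# such independent bonds has a positive 95% VaR, illustrating the sub-additivity failure.
import numpy as np

rng = np.random.default_rng(0)
n_sims, p_default, coupon, principal = 1_000_000, 0.02, 5.0, 100.0

def simulated_losses(n_bonds):
    """Total loss of a book of independent bonds: -coupon if a bond pays, +principal if it defaults."""
    defaults = rng.random((n_sims, n_bonds)) < p_default
    return np.where(defaults, principal, -coupon).sum(axis=1)

def var_etl(losses, cl=0.95):
    losses = np.sort(losses)
    k = int(cl * len(losses))
    return losses[k], losses[k:].mean()   # VaR = cl-quantile, ETL = mean of the worst 100(1-cl)%

var1, etl1 = var_etl(simulated_losses(1))
var3, etl3 = var_etl(simulated_losses(3))
print(f"single bond: 95% VaR = {var1:6.1f}  ETL = {etl1:6.1f}")   # VaR < 0 ("pure profit"), ETL > 0
print(f"three bonds: 95% VaR = {var3:6.1f}  ETL = {etl3:6.1f}")   # VaR > 3 x single-bond VaR
```

The single bond's 95% VaR is negative while its ETL is clearly positive, and the three-bond book's VaR exceeds three times the single-bond VaR - exactly the sub-additivity problem described above.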

  17. VaR/ETL models using Extreme value theory (EVT) • Most VaR models rely on the Central limit theorem (assumption of normality), which is completely wrong for our purpose - we are interested in the distribution of the tails, not the central mass. • The key to estimating the distribution of such events is EVT, which governs the distribution of extreme values. • EVT provides a framework in which anticipated extreme outcomes can be estimated using historical data. • Extreme events are rare, meaning that their estimates are often required for levels of a process that are greater than those in the available data set. Potential problems: • EV models are developed using asymptotic arguments. • EV models are derived under idealized circumstances, which need not hold for the process being modeled.

  18. Fisher-Tippett theorem - as n gets large, the distribution of the tail of X converges to the Generalized extreme value (GEV) distribution: • If ξ > 0, the GEV distribution becomes a Fréchet distribution, meaning that F(x) is leptokurtic. • If ξ = 0, the GEV distribution becomes a Gumbel distribution, meaning that F(x) has normal kurtosis. • EV VaR: (Fréchet EV VaR, ξ > 0) (Gumbel EV VaR, ξ = 0)
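For reference, the standard GEV distribution to which the Fisher-Tippett theorem refers, and its cl-quantile - the usual basis of Fréchet and Gumbel EV VaR formulas - are given below in a generic location-scale parameterisation; the exact formulas on the original slide (e.g. any block-size adjustments) are not reproduced here.

\[
H_{\xi,\mu,\sigma}(x) =
\begin{cases}
\exp\!\left[-\left(1+\xi\,\dfrac{x-\mu}{\sigma}\right)^{-1/\xi}\right], & \xi \neq 0,\\[6pt]
\exp\!\left[-e^{-(x-\mu)/\sigma}\right], & \xi = 0,
\end{cases}
\]

\[
\mathrm{VaR}_{cl} = \mu + \frac{\sigma}{\xi}\left[(-\ln cl)^{-\xi} - 1\right] \quad (\text{Fréchet},\ \xi > 0),
\qquad
\mathrm{VaR}_{cl} = \mu - \sigma \ln(-\ln cl) \quad (\text{Gumbel},\ \xi = 0).
\]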

  19. There are no closed-form ETL formulas for the Fréchet and Gumbel distributions, but EV ETL can be derived from EV VaR estimates using the “average-tail VaR” algorithm. • It is easily shown that ETL is indeed estimable in a consistent way as the average of the 100(1-cl)% worst cases. • The hybrid historical simulation (HHS) ETL developed in this paper can be expressed as the average of the order statistics beyond the VaR threshold, where the order statistics come from a volatility-scaled bootstrapped series.
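A minimal sketch of the “average-tail VaR” idea: the ETL at level cl is approximated by averaging VaR estimates taken at a grid of confidence levels slicing through the tail. The var_forecast callable is a placeholder for any of the VaR models discussed here, and the Gumbel parameters in the example are purely illustrative.

```python
# Approximate ETL at level cl by averaging VaR estimates over confidence levels in the tail.
# `var_forecast` stands in for any VaR model (historical simulation, GARCH, EVT, ...).
import numpy as np

def average_tail_var_etl(var_forecast, cl=0.99, n_slices=1000):
    # Confidence levels between cl and 1, excluding the endpoint 1 itself
    tail_levels = cl + (1.0 - cl) * (np.arange(n_slices) + 0.5) / n_slices
    return np.mean([var_forecast(u) for u in tail_levels])

# Example with a hypothetical Gumbel EV VaR forecaster (mu and sigma are illustrative):
mu, sigma = 0.0, 1.2
gumbel_var = lambda u: mu - sigma * np.log(-np.log(u))
print(average_tail_var_etl(gumbel_var, cl=0.99))
```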

  20. Model comparison and backtesting • The Blanco-Ihle loss function compares VaR with tail losses, which makes little practical sense because VaR forecasts only the “best” scenario for the tail losses. The Blanco-Ihle loss function therefore measures the discrepancy between the lowest possible and the actual tail losses - not especially useful. • The Blanco-Ihle loss function can be modified to compare ETL with the actual value of the tail loss - exactly what the loss function should be measuring. • Suggested modification:
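The exact functional form of the modification is not reproduced here; the sketch below shows one natural reading of it, following the usual Blanco-Ihle recipe of scoring the relative excess on violation days, with the ETL forecast replacing the VaR forecast as the benchmark.

```python
# Hedged sketch of a Blanco-Ihle style loss function: score realised tail losses against a
# benchmark forecast on the days a VaR violation occurred. Original version: benchmark = VaR
# forecasts; described modification: benchmark = ETL forecasts.
import numpy as np

def blanco_ihle_score(losses, var_forecasts, benchmark):
    """Average relative discrepancy between realised tail losses and `benchmark`,
    taken over the days on which the loss exceeded the VaR forecast."""
    losses, var_forecasts, benchmark = map(np.asarray, (losses, var_forecasts, benchmark))
    hits = losses > var_forecasts                      # VaR violation days
    if not hits.any():
        return 0.0
    return np.mean((losses[hits] - benchmark[hits]) / benchmark[hits])

# e.g. score_var = blanco_ihle_score(realised_losses, var_99, var_99)   # original
#      score_etl = blanco_ihle_score(realised_losses, var_99, etl_99)   # modified
```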

  21. Backtesting results for VaR forecasts (DOW JONES index, cl = 0.99, period 22.3.2004 - 12.3.2008). Symbols * and ** denote significance at 5 and 10% levels

  22. Backtesting results for VaR forecasts (NASDAQ index, cl = 0.99, period 22.3.2004 - 12.3.2008)

  23. Backtesting results for VaR forecasts (S&P500 index, cl = 0.99, period 22.3.2004 - 12.3.2008)

  24. Backtesting results for ETL forecasts (DOW JONES index, ξ = 0.2, cl = 0.95, 0.99, period 22.3.2004 - 12.3.2008)

  25. Backtesting results for ETL forecasts (NASDAQ index, ξ = 0.31, cl = 0.95, 0.99, period 22.3.2004 - 12.3.2008)

  26. Backtesting results for ETL forecasts (S&P500 index, ξ = 0.24, cl = 0.95, 0.99, period 22.3.2004 - 12.3.2008)

  27. VaR backtesting results • Data: daily returns on the DOW JONES, NASDAQ, S&P500, CAC, DAX and FTSE indexes • Period: 01.01.2000 - 12.3.2008 • Backtesting: out-of-sample, latest 1,000 observations, 1-day holding period, confidence levels = 95 and 99% • 5 out of 6 tested VaR models continually failed the Basel criteria. • HHS was the only tested model that passed all of the tests - both the Basel criteria and the independence test of VaR failures. • Worst performers: the VCV, HS250 and RiskMetrics models. • Results are consistent with the results for transitional markets reported in Žiković (2007).
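For context, a minimal sketch of the standard Kupiec unconditional coverage test that underlies VaR backtests of this kind; the paper's actual test battery (Basel criteria, independence test of VaR failures) is not reproduced here.

```python
# Kupiec's unconditional coverage (POF) test: compare the observed number of VaR violations
# with the number implied by the confidence level via a likelihood ratio (chi-square, 1 df).
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, cl):
    """violations: boolean array, True on days the realised loss exceeded the VaR forecast."""
    n, x = len(violations), int(np.sum(violations))
    p, phat = 1.0 - cl, x / len(violations)            # expected vs observed violation rate
    ll_null = (n - x) * np.log(1 - p) + x * np.log(p)
    ll_alt = (n - x) * np.log(1 - phat) + x * np.log(phat) if 0 < x < n else 0.0
    lr = -2.0 * (ll_null - ll_alt)
    return lr, chi2.sf(lr, df=1)                       # test statistic and p-value

# Example: 1,000 out-of-sample days with 16 violations at cl = 0.99 (illustrative numbers)
rng = np.random.default_rng(1)
hits = np.zeros(1000, dtype=bool)
hits[rng.choice(1000, 16, replace=False)] = True
print(kupiec_pof(hits, cl=0.99))
```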

  28. ETL backtesting results • The bootstrapped HHS ETL approach was the best performing ETL measure across all of the tested indexes, with the exception of the NASDAQ index at the 99% cut-off level, where the bootstrapped HS500 ETL performed best. • Worst performers: the VCV approach based on the Fréchet distribution and the GARCH RM approach with the Fréchet distribution. • These models greatly overestimated the expected average tail loss. Models that used the Gumbel distribution performed far better than those with the Fréchet distribution - two possible reasons: 1) the tail indexes have been incorrectly calculated (they are too high); 2) the use of GEV distributions in ETL estimation provides overly conservative estimates of average tail losses.

  29. Tail losses and ETL for NASDAQ index (cl = 0.95, ξ = 0.31)

  30. Conclusion • Advances that have been made in VaR should not be lost with the (probable) adoption of coherent risk measures into the regulatory framework. Superior VaR techniques should yield superior ETL forecasts, showing that VaR and ETL should be regarded as partners, not rivals. • The weak points of risk measurement models cannot be ignored and will continually come back to haunt us even when we switch from one risk measure to another. The problems remain the same regardless of whether we are estimating VaR or ETL. • Overall, the results of the VaR model comparison obtained for the tested US and selected European stock indexes are in line with the results reported by Žiković (2007) for stock indexes from transitional markets.

  31. Conclusion • For both developing and developed stock markets, simpler VaR models consistently fail at their task - they provide risk managers with falsely optimistic data about the levels of risk that financial institutions are exposed to. GARCH-based volatility models, even at the lower confidence level, continually outperform VaR models based on simpler volatility specifications such as SMA and EWMA. • The bootstrapped HHS ETL approach was the best performing ETL measure across all of the tested indexes, with the exception of the NASDAQ index at the 99% cut-off level. • For the tested stock indexes, the use of GEV distributions in ETL estimation provides overly conservative estimates of ETL.

  32. Conclusion • The strong points and weaknesses of every model remain with it, which is why the knowledge obtained in developing VaR models must not be wasted. VaR techniques can easily be adapted to serve a new “superior” risk measure – ETL. • Research in VaR estimation should by no means be discouraged, because it can now serve a dual purpose – improving VaR estimates and also improving ETL estimates. • The focus of future research should be on: 1) improving both VaR and ETL techniques, 2) finding optimal combinations of VaR-ETL models. • Only complete information can serve as a solid basis for decision making in financial institutions and reveal actual risk exposure to both investors and regulators.

  33. Thank you for your attention! Questions?
