
Applied Business Statistics Case studies Market risk management



Mauro Bufano

Risk Management – Banca Mediolanum Spa

- Market risk can be defined as the risk that the value of a portfolio, either an investment portfolio or a trading portfolio, will decrease due to changes in asset prices
- Market risk is present in every investment portfolio:
- Treasuries’ portfolios
- Asset management (funds)
- Real Estate investments

- The concept of market risk was born in the late 1980s: the 1987 “Black Monday” market crash (in one day the Dow Jones Industrial Average dropped by more than 22%) showed the importance of a proper market risk quantification
- In the early 1990s, J.P. Morgan CEO Dennis Weatherstone famously called for a “4:15 report” that combined all firm risk on one page, available within 15 minutes of the market close
- Introduction of the most important market risk measure: Value at Risk (VaR)

Market risk is not unique, but there are different sources of risk:

- Equity risk
- Interest rate risk
- Exchange rate risk
- Liquidity risk
- Spread risk

- Equity risk: the best-known risk, it’s the risk deriving from changes in stock prices
- Interest rate risk: it’s the risk that changes in interest rates affect the value of an asset. It’s generally more complex than equity risk because we refer not to a single asset price, but to a different interest rate for each maturity: the yield curve. Yield curve movements can be summarized into 3 different sources of risk:
- Vertical translations: parallel movements of the yield curve (generally 80% of interest rate risk)
- Twist: changes in the slope of the yield curve
- Interest rate volatility

In the chart above we can see yield curves at different dates. Such movements can dramatically impact the price of interest-rate-related assets (e.g. bonds)

- Exchange rate risk: it’s the risk that assets in foreign currencies change their value due to movements in the exchange rate
- Exchange rates generally depend on the difference between the respective short-term interest rates
- Exchange rate risk is therefore very difficult to measure, also because exchange rates are strongly influenced by Central Bank decisions

- Liquidity risk is defined as the risk that assets become illiquid, i.e. there is little (or no) demand for those assets and it is difficult to sell them on the market
- It’s generally measured as the bid-ask spread listed on the market for that particular security
- It depends on:
- Market conditions (during the 2007-09 market crisis, the amount of liquidity on the market shrank and bid-ask spreads widened)
- Issuer quality (an EMU government bond is more liquid than a small firm’s bond)
- The amount of that instrument traded on the market
- The maturity of the asset (the longer the maturity, the lower the liquidity)

Illiquid bond: the bid-ask spread is high and there are few counterparties. Prices differ widely among them

Liquid bond: the bid-ask spread is low and there are many counterparties. Prices are very close to each other

- Spread risk can be defined as the risk that a decrease in the creditworthiness of an issuer determines a higher discount rate, and therefore a reduced value of the assets
- Spread risk has grown in importance with the introduction of Credit Default Swaps (CDS), i.e. insurance against default, which give a precise quantification of the credit spread of an issuer
- It can generally be divided into two sources of risk:
- Systemic risk: the risk that credit spreads in the market increase, e.g. due to a reduced risk appetite or a global recession (2008-09)
- Specific risk: the risk that the credit spread of a firm widens due to internal problems that increase its probability of default (PD)

- As we can see, credit spreads tend to move together, even for those companies whose creditworthiness remains high
- Systemic risk, especially in the financial industry, generally explains more than 80% of credit spread movements

- In risk measurement, the key parameter that allows risk managers to quantify risk is volatility
- It can be represented as the second moment of the asset price distribution (the first moment being the asset’s expected return) and it’s generally measured with the standard deviation (σ)
- Measuring it properly is crucial, because it’s not a fixed parameter but rather time-varying
- In financial markets we observe volatility clusters: high volatilities are generally associated with market crashes or economic crises, and dramatically increase the risk of huge losses

Volatility can be measured in different ways:

- Historical volatility: it’s defined as the standard deviation of an asset’s returns over a period of time (e.g. 1 year)
- Advantages: easy to measure and to observe
- Disadvantages: the past is not always the best proxy for the future!

- Implied volatility: it’s the volatility (σ) embedded in option prices (call or put), derived from the Black & Scholes (B&S) formula
- For a call option: C = S·N(d1) − K·e^(−rT)·N(d2)
- For a put option: P = K·e^(−rT)·N(−d2) − S·N(−d1)
where N(·) is the standard normal CDF, d1 = [ln(S/K) + (r + σ²/2)·T] / (σ·√T) and d2 = d1 − σ·√T

- Implied volatility cannot be derived analytically from B&S: an iterative method is necessary (many software packages, e.g. Matlab, provide one)
- In any case, the B&S formula assumes a log-normal distribution of the underlying price, which is not always observed in reality, and is derived under risk-neutrality assumptions
- The implied volatility derived from this formula is not a truly “observable” variable, but rather a proxy of the volatility expected by market operators
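The iterative inversion mentioned above can be sketched in Python. This is a minimal bisection over the B&S call formula (function names are illustrative; in practice a library root-finder such as scipy's would be used); it exploits the fact that the B&S price is monotonically increasing in σ:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black & Scholes price of a European call
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    # Bisection: the call price is monotonically increasing in sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Feeding back a price generated with a known σ recovers that σ, which is a convenient sanity check for the inversion.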

- The limitations of the B&S formula show up in empirical observations: there is not a unique volatility but rather a “volatility smile”, i.e. a different volatility for each strike (see chart below)

- Generally, the volatility smile is observed for short time horizons (less than 2 years). For longer horizons, the smile becomes a “skew”, i.e. volatility is a decreasing function of the strike
- The volatility surface is not directly observed, but has to be interpolated. In this case, a Local Polynomial Smoothing technique has been used

[Chart: Eurostoxx 50 implied volatility surface, by maturity and strike]

Even if it’s not a direct observable variable and it’s derived under strong assumptions, implied volatility is used (especially for pricing purposes) because:

- It’s a forward measure and it’s not directly related to past events
- It reflects market expectations
- The existence of a volatility surface gives the opportunity to derive “what-if” scenarios or simulation basing on the underlying price
- It can be used to calibrate time-varying volatility models (see next slides)

In some cases, it’s necessary to simulate volatility paths in order to get a distribution of future assets’ or portfolios’ values.

Different models are available:

- GARCH models: time-varying volatility simulations based on an autoregressive process for the variance, e.g. GARCH(1,1):
σ²(t) = ω + α·ε²(t−1) + β·σ²(t−1)

GARCH models are widely used for many applications (especially for VaR or scenario analysis), but is generally not a market model for derivative pricing

Its calibration is obtained via maximum likelihood estimation

A common GARCH(1,1) model is the Risk Metrics one, where:

ω = 0

α = 1- β

β = 0.94 for daily data, β = 0.97 for monthly data
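The RiskMetrics recursion above (ω = 0, α = 1 − β) reduces to an exponentially weighted moving average of squared returns. A minimal sketch, where the choice of initial variance seed is an assumption:

```python
import math

def riskmetrics_vol(returns, beta=0.94, sigma0=None):
    # RiskMetrics GARCH(1,1): omega = 0, alpha = 1 - beta, so
    # sigma2_t = (1 - beta) * r_{t-1}^2 + beta * sigma2_{t-1}
    # Seed: given sigma0, or the first squared return (an assumption)
    var = sigma0 ** 2 if sigma0 is not None else returns[0] ** 2
    for r in returns:
        var = (1.0 - beta) * r * r + beta * var
    return math.sqrt(var)
```

With β = 0.94 for daily data, roughly the last month of returns dominates the estimate; β = 0.97 gives a slower decay suited to monthly data.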

- Other models (mostly used for derivative pricing) are
- Dupire model or local volatility: it’s derived by re-expressing the volatility surface as a function of the underlying price and time
In this way the volatility depends on the path of the underlying and on time. For its calibration it’s necessary to know the first derivative of the option price with respect to time and the first- and second-order derivatives of the option price with respect to the strike. A volatility surface is therefore necessary
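For reference, a standard statement of the Dupire local-volatility formula (zero-dividend case, with r the risk-free rate), involving exactly the derivatives listed above:

```latex
\sigma_{loc}^2(K,T) \;=\; \frac{\dfrac{\partial C}{\partial T} + rK\,\dfrac{\partial C}{\partial K}}
{\tfrac{1}{2}\,K^2\,\dfrac{\partial^2 C}{\partial K^2}}
```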


- Heston model: a time-varying process for the volatility and the underlying in which the two processes are correlated (generally negatively):
dS(t) = μ·S(t)·dt + √v(t)·S(t)·dWS(t)
dv(t) = κ·(θ − v(t))·dt + ξ·√v(t)·dWV(t)
where WS and WV are correlated Brownian motions

Heston model has to be calibrated on the quotations of the options listed on the market. It’s generally used for complex derivatives’ pricing

- Another crucial risk measure is correlation, which can be interpreted either as asset correlation (in a portfolio) or as risk factors’ correlation
- Its importance has grown especially during the last crisis, in which risk factors’ correlations increased dramatically, causing huge losses
- In fact, while volatility was (in some cases) already embedded in many instruments and could be hedged, correlation risk was completely mispriced in some basket securities (e.g. CDOs)

- Until now, the most used correlation measure has been the historical one, with all the limits of this approach
- There are, anyway, some instruments that allow us to determine an “implied correlation”
- A mathematical tool widely used to calculate (and simulate) implied correlation is the “copula function”, based on Sklar’s theorem:
(1) H(x, y) = C(F(x), G(y))
(2) h(x, y) = c(F(x), G(y))·f(x)·g(y)
- Equation (1) is a cumulative distribution and (2) is a density: this result allows us to separate the marginal distributions from the “copula” C, which models the dependence between the two variables
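As an illustration of how a copula separates margins from dependence, pairs of correlated uniforms can be simulated from a bivariate Gaussian copula: draw correlated normals, then push each margin through the normal CDF (a stdlib-only sketch; names are illustrative):

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_sample(rho, n, seed=42):
    # Simulate n pairs (u, v) of uniforms whose dependence is a
    # Gaussian copula with parameter rho
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        out.append((norm_cdf(z1), norm_cdf(z2)))
    return out
```

Each margin is uniform on (0, 1) and can then be mapped to any desired marginal distribution via its inverse CDF, which is exactly the separation Sklar’s theorem guarantees.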

- Market risk management has developed a lot since the late 1980s. We can now distinguish two different aspects and uses of market risk management:
- Pricing: for some instruments a “fair” market price is not available, because of their illiquidity (e.g. small firms’ bonds or complex derivatives)
- Risk measurement: it involves the quantification of risks and the estimation of the potential losses of a portfolio. In this case it’s crucial to identify the main risk factors

Example of bond pricing:

- A bond price can be determined by discounting cash flows
- Let’s suppose to have a bond with N coupons Ci and with a nominal amount 100, to be paid at maturity (T)
- We have three variables
- r: the risk free rate, obtained by interpolation of the swap curve over the coupons’ maturities
- s: the credit spread, obtained by interpolation of the CDS spread (when it’s available) or comparables’ spreads (e.g. its rating average spread) over the coupons’ maturities
- l: the liquidity spread, representing the liquidity premium for the single issuance it’s not directly observable but has to be estimated
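A minimal sketch of the discounting just described, assuming the three curves have already been interpolated to the coupon dates and using discrete compounding (function name and conventions are illustrative):

```python
def bond_price(cash_flows, times, r, s, l):
    # Discount each cash flow at risk-free rate + credit spread +
    # liquidity spread for its maturity (discrete compounding).
    # cash_flows: coupons, with the nominal included in the last one
    # r, s, l: per-cash-flow rates, already interpolated to `times`
    price = 0.0
    for i, (c, t) in enumerate(zip(cash_flows, times)):
        y = r[i] + s[i] + l[i]
        price += c / (1.0 + y) ** t
    return price
```

For instance, a 3-year 5% annual bond with flat r = 3%, s = 1%, l = 0.5% discounts the flows [5, 5, 105] at 4.5% per year.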

- Simple derivative products have “closed formulas” that we can use to determine their price:
- e.g. for a call option on a listed stock or index we can use B&S formula

- For more complex derivatives it’s impossible to have a closed formula: their price has to be estimated via Monte Carlo simulation
- e.g. “barrier option”: the option pays a fixed coupon amount as long as the underlying stays above the barrier (generally 50% of its value at the issue date), otherwise it replicates the index performance
- where DFi is the discount factor of the i-th coupon

- Monte Carlo pricing: we simulate the underlying price with a Geometric Brownian Motion (GBM), under risk-neutrality assumption
- For every path, we get the payoff Pi
- We repeat the simulation several times (e.g. 100,000 times)
- The fair value (i.e. the market price) is represented by the average value of simulated payoffs
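The four steps above can be sketched for a plain European call, whose B&S price is known and therefore lets us sanity-check the simulation (path count and names are illustrative):

```python
import math
import random

def mc_call_price(S0, K, T, r, sigma, n_paths=100_000, seed=1):
    # Risk-neutral GBM: S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z)
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    payoff_sum = 0.0
    for _ in range(n_paths):
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(ST - K, 0.0)  # payoff of each path
    # Fair value: discounted average of the simulated payoffs
    return math.exp(-r * T) * payoff_sum / n_paths
```

For S0 = K = 100, T = 1, r = 5%, σ = 20% the B&S closed-form value is about 10.45, and the Monte Carlo estimate converges to it as the number of paths grows.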

As we have already seen, risk measurement can be divided into two different families:

- 1) Expected losses (or expected returns), in average market conditions
- 2) Unexpected losses, in stressed market conditions
We will focus on (2), which in market risk is quantified with the Value at Risk (VaR)

- As we have already seen, Value at Risk (VaR) can be defined as the maximum potential loss over a time horizon, given a confidence level. Analytically, given a confidence level α, and letting L be the random variable representing losses over the time horizon, we have:
P(L > VaRα) = 1 − α
The VaR is therefore a quantile of the loss distribution

- Time horizon of market VaR in trading portfolios is generally 1 day (or 10 days for Basel II capital requirements), with confidence level 99%
- We have 3 different measures of VaR:
- Parametric (or variance-covariance approach)
- Historical simulation
- Monte Carlo

- The parametric approach assumes that changes in risk factors are distributed according to a normal distribution with a mean of zero and stable volatility over time
- Changes in assets’ values are derived from those in risk factors through linear coefficients. More complex instruments are decomposed in simpler ones (e.g. a fixed coupon bond is decomposed in a series of cash flows)
- Changes in risk factors are determined by their correlation matrix
- VaR is obtained by multiplying the standard deviation by a scaling factor α which depends on the confidence level

For each asset we need to know the sensitivities to risk factors:

- Interest rate risk: modified duration
- Stock market risk: beta
- Derivatives: greeks
- Other risk factors: delta
Then, we need to know the variance-covariance matrix of the risk factors (to determine volatilities and correlations)

There are many databases (e.g. Risk Metrics) that provide on a daily basis the variance-covariance matrix of risk factors

Example:

Let’s consider a USD dollar bond with a market value (MV) of 1,000,000 €. We have (at least) 2 risk factors:

- Interest rate risk
- Exchange rate risk
Let’s suppose:

- Modified duration (MD) of 5 years
- Volatility of interest rate (σi=5%) and volatility of exchange rate (σe=10%)
- Correlation (ρ) between interest rates and exchange rates equal to 20%
- Confidence level 99% (α = 2.32)
We have:

σp = √[(MD·σi)² + σe² + 2·ρ·(MD·σi)·σe] = √(0.25² + 0.10² + 2·0.2·0.25·0.10) ≈ 0.287

The VaR of the asset p will be:

VaRp = α·MV·σp = 2.32 × 1,000,000 × 0.287

i.e. approximately 666,000 €

Volatility measures are generally annualized; if we want the daily VaR we can use the following transformation (square-root-of-time rule):

VaR(1 day) = VaR(1 year) / √252

If we consider 252 working days in a year, the daily VaR is therefore the annual VaR divided by √252 ≈ 15.87

N.B. The previous transformation, even if widely used, holds under the assumption that changes in market factors are independent through time (an assumption generally rejected by empirical studies!)
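The two-factor example above can be sketched in code (function name is illustrative). With the stated inputs (MD = 5, σi = 5%, σe = 10%, ρ = 20%, α = 2.32), the portfolio volatility is √0.0825 ≈ 0.287, so the annual VaR works out to about 666,000 € and the daily VaR to about 42,000 €:

```python
import math

def parametric_var(mv, exposures, vols, corr, alpha=2.32):
    # Variance-covariance VaR: portfolio variance x' * Sigma * x,
    # then scaled by the confidence multiple alpha and market value
    n = len(vols)
    variance = sum(exposures[i] * exposures[j] * vols[i] * vols[j] * corr[i][j]
                   for i in range(n) for j in range(n))
    return alpha * mv * math.sqrt(variance)

# USD bond of the example: exposure 5 (modified duration) to rates,
# exposure 1 to the exchange rate
annual_var = parametric_var(1_000_000, [5.0, 1.0], [0.05, 0.10],
                            [[1.0, 0.2], [0.2, 1.0]])
daily_var = annual_var / math.sqrt(252)  # square-root-of-time rule
```

The double loop is the explicit form of the quadratic form x'Σx; with many factors a matrix library would be used instead.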

Advantages:

- The quantiles of the distribution are already determined
- Once the parameters are known, there’s no need of huge datasets to work out the distribution
- Computationally very fast and simple (an Excel spreadsheet is generally sufficient to compute parametric VaR)
Disadvantages:

- The estimation of the parameters is sometimes difficult, and it depends critically on the estimation sample we choose
- The output of the model depends on the underlying distribution chosen (e.g. it’s empirically proven that stock market returns are leptokurtic, i.e. the Gaussian distribution does not fit!)
- The sensitivity of portfolio positions to changes in market factors is assumed to be linear, but in general this is not true
- The distribution of risk factors is assumed stable (it doesn’t allow changes over time) and serially independent

In historical simulation VaR, risk factor changes are assumed to be represented by their historical empirical distribution:

- We select a sample of risk factor changes, related to a given historical period (e.g. the last 500 working days)
- Every asset in the portfolio is revalued at each date, applying the relative changes to the risk factors
- In this way we obtain an empirical distribution of portfolio values
- The VaR is the difference between the current portfolio value and the percentile of the empirical distribution corresponding to the desired confidence level
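The procedure reduces to taking an empirical quantile of the scenario losses. A minimal sketch (real implementations interpolate the percentile more carefully; the name is illustrative):

```python
def historical_var(pnl_scenarios, confidence=0.99):
    # pnl_scenarios: portfolio P&L under each historical scenario.
    # Convert to losses (positive = loss), sort ascending, and take
    # the loss at the desired confidence-level percentile as the VaR.
    losses = sorted(-p for p in pnl_scenarios)
    idx = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[idx]
```

With 500 scenarios and a 99% confidence level, this picks the 5th-worst loss, i.e. the level exceeded on about 1% of historical days.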

Advantages:

- No need to choose a statistical distribution
- Easy to understand: we refer to an historical event, not to a quantile in a density distribution
- Does not require a variance-covariance matrix of risk factors
- Allows to capture non-linear sensitivities to risk factors
Disadvantages:

- We should have a robust dataset of events with several periods (e.g. in economic analysis, different business cycles with growth and recessions)
- Computationally heavy
- The choice of the historical depth of the dataset is far from trivial
- The past is not always the best predictor of the future
- Past variations of some risk factors also depend on their level, which could be very different from the current one (e.g. daily variations of volatility tend to be higher when volatility is high)

- Monte Carlo VaR consists in simulating the risk factors with a given stochastic process
- For each simulation we have a risk factors’ scenario, that determines a theoretical value of the assets (and of the portfolio)
- We repeat the simulation several times (e.g. 10,000 times) and we get a portfolio return (and losses) distribution
- The VaR is the difference between the current portfolio value and the percentile of simulated distribution corresponding to the desired confidence level

- Examples of stochastic processes:
- Equity: Geometric Brownian Motion (slide 26)
- Interest rates: Cox, Ingersoll and Ross (1985) – CIR
dr(t) = a·(b − r(t))·dt + σ·√r(t)·dW(t)
It’s a mean-reverting process: the interest rate r tends to its long-term average b with speed a

- Exchange rates: GBM with drift given by the difference between domestic interest rate and foreign interest rate
- Credit spread: martingale process (without drift) with changes depending on the level of the spread (higher spreads – higher spread volatility)
- Volatility: GARCH models, Heston model

- An accurate Monte Carlo VaR requires correlated processes: we have to generate correlated Brownian motions, using a Cholesky decomposition (but the estimation of correlations between different risk factors is sometimes very difficult – e.g. interest rate vs credit spread)
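For two factors the Cholesky step can be written explicitly: the lower-triangular factor L of the 2×2 correlation matrix turns independent normal draws into correlated ones (a sketch with illustrative names; for more factors a general routine such as numpy.linalg.cholesky would be used):

```python
import math
import random

def cholesky_2x2(rho):
    # Lower-triangular L with L L' = [[1, rho], [rho, 1]]
    return [[1.0, 0.0],
            [rho, math.sqrt(1.0 - rho * rho)]]

def correlated_normals(rho, n, seed=7):
    # Map independent standard normals z to correlated ones via x = L z
    rng = random.Random(seed)
    L = cholesky_2x2(rho)
    out = []
    for _ in range(n):
        z = [rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)]
        out.append((L[0][0] * z[0],
                    L[1][0] * z[0] + L[1][1] * z[1]))
    return out
```

The correlated draws then feed the Brownian increments of each factor’s stochastic process in the simulation.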

Advantages:

- It’s considered the most accurate VaR measurement
- The output portfolio distribution is not derived under an assumption of normality, but from the stochastic processes of the risk factors
- Allows to capture non-linear sensitivities to risk factors
- The distribution of the risk factors is based on market expectations and not on historical data
- It’s particularly fit for path dependent instruments (e.g. exotic options)
Disadvantages:

- Requires calibration of the models used to simulate the underlying risk factors (sometimes very complex and time consuming)
- Requires the estimation of risk factors’ variance-covariance matrix
- If we want a stable VaR measurement, we need to generate a lot of scenarios → it’s computationally heavy (it can take many hours or days) → simplifications are necessary, therefore reducing the accuracy of the VaR

- Before using a VaR methodology for management purposes, this should be back-tested
- Back-testing means evaluating VaR in the past, comparing realized (or simulated) losses with the potential losses of the VaR
- A VaR model passes the back-test if the percentage of violations (i.e. the days when the realized loss exceeds the VaR) is lower than the complement of the confidence level
- Example: let’s consider a VaR with 99% confidence level. If we test the VaR over 500 days, the methodology would pass the backtest if the violations are less than 5 (1% of the sample)

- The size of losses: VaR doesn’t provide any information about the size of extreme events. Let’s consider charts below.
- The two portfolios have the same VaR (approx. 240,000) but the second one is much more risky (tail events would cause huge losses!)

Therefore VaR cannot be considered a coherent measure of risk (in particular, it fails sub-additivity)

- An alternative measure of risk, without the problems listed above, is the expected shortfall (ES), or Conditional VaR (CVaR)
- It represents the average loss in excess of the VaR:
ES(α) = E[L | L > VaR(α)]
It is often expressed as a percentage of MV, where MV is the market value of the portfolio
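Both measures can be read off the same vector of simulated (or historical) losses: the VaR is the quantile, and the ES is the average of the tail beyond it. A simple empirical sketch (illustrative names):

```python
def var_and_es(losses, confidence=0.99):
    # VaR: empirical quantile of the loss distribution.
    # ES: average of the losses at or beyond the VaR quantile.
    s = sorted(losses)
    idx = min(int(confidence * len(s)), len(s) - 1)
    var = s[idx]
    tail = s[idx:]
    return var, sum(tail) / len(tail)
```

Because the ES averages the whole tail, two portfolios with identical VaR but different tail sizes get different ES values, which is exactly the information VaR misses.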

- Even if ES has more desirable properties than VaR, the latter is still widely used in banks for management purposes and for setting risk limits

- Resti, A. and Sironi, A. (2007), “Risk Management and Shareholders’ Value in Banking”, Wiley
- Hull, J. C., “Options, Futures and Other Derivatives”
- Cox, J. C., Ingersoll, J. E. and Ross, S. A. (1985), “A Theory of the Term Structure of Interest Rates”, Econometrica, 53, 385-407
- Heston, S. (1993), “A Closed-Form Solution for Options with Stochastic Volatility”, Review of Financial Studies, 6, 327-343
- Dupire, B. (1994), “Pricing with a Smile”, Risk Magazine
- Sklar, A. (1959), “Fonctions de répartition à n dimensions et leurs marges”, Publ. Inst. Statist. Univ. Paris, 8, 229-231
- Glasserman, P. (2004), “Monte Carlo Methods in Financial Engineering”, Springer