Noise sensitivity of portfolio selection under various risk measures



Collegium Budapest and Eötvös University, Budapest, Hungary

Risk Measurement and Management, Rome, June 9-17, 2005

Contents

- I. Preliminaries
the problem of noise, risk measures, noisy covariance matrices

- II. Random matrices
Spectral properties of Wigner and Wishart matrices

- III. Filtering of normal portfolios
optimization vs. risk measurement, model-simulation approach, random-matrix-theory-based filtering

- IV. Beyond the stationary, Gaussian world
non-stationary case, alternative risk measures (mean absolute deviation, expected shortfall, worst loss), their sensitivity to noise, the feasibility problem

Coworkers

- Szilárd Pafka and Gábor Nagy (CIB Bank, Budapest, a member of the Intesa Group), Marc Potters (Capital Fund Management, Paris)
- Richárd Karádi (Institute of Physics, Budapest University of Technology, now at Procter&Gamble)
- Balázs Janecskó, András Szepessy, Tünde Ujvárosi (Raiffeisen Bank, Budapest)
- István Varga-Haszonits (Eötvös University, Budapest)

Preliminary considerations

- Portfolio selection vs. risk measurement of a fixed portfolio
- Portfolio selection: a tradeoff between risk and reward
- There is a more or less general agreement on what we mean by reward in a finance context, but the status of risk measures is controversial
- For optimal portfolio selection we have to know what we want to optimize
- The chosen risk measure should respect some obvious mathematical requirements, must be stable, and easy to implement in practice

The problem of noise

- Even if returns formed a clean, stationary stochastic process, we could only observe finite time segments; therefore we never have sufficient information to reconstruct the underlying process completely. Our estimates will always be noisy.
- Mean returns are particularly hard to measure on the market with any precision
- Even if we disregard returns and go for the minimal risk portfolio, lack of sufficient information will introduce „noise”, i.e. error, into correlations and covariances, hence into our decision.
- The problem of noise is more severe for large portfolios (size N) and relatively short time series (length T) of observations, and different risk measures are sensitive to noise to a different degree.
- We have to know how the decision error depends on N and T for a given risk measure

Some elementary criteria on risk measures

- A risk measure is a quantitative characterization of our intuitive risk concept (fear of uncertainty and loss).
- Risk is related to the stochastic nature of returns. It is a functional of the pdf of returns.
- Any reasonable risk measure must satisfy:
- convexity

- invariance under addition of risk free asset

- monotonicity and assigning zero risk to a zero position

- The appropriate choice may depend on the nature of data (e.g. on their asymptotics) and on the context (investment, risk management, benchmarking, tracking, regulation, capital allocation)

A more elaborate set of risk measure axioms

- Coherent risk measures (P. Artzner, F. Delbaen, J.-M. Eber, D. Heath, Risk, 10, 33-49 (1997); Mathematical Finance, 9, 203-228 (1999)). Required properties: monotonicity, subadditivity, positive homogeneity, and translational invariance. Subadditivity and homogeneity imply convexity. (Homogeneity is questionable for very large positions. Multiperiod risk measures?)
- Spectral measures (C. Acerbi, in Risk Measures for the 21st Century, ed. G. Szegö, Wiley, 2004): a special subset of coherent measures, with an explicit representation. They are parametrized by a spectral function that reflects the risk aversion of the investor.

Convexity

- Convexity is extremely important.
- A non-convex risk measure:
- penalizes diversification (without convexity, risk can be reduced by splitting the portfolio into two or more parts)
- does not allow risk to be correctly aggregated
- cannot provide a basis for rational pricing of risk (the efficient set may not be convex)
- cannot serve as a basis for a consistent limit system

In short, a non-convex risk measure is really not a risk measure at all.

A classical risk measure: the variance

When we use variance as a risk measure we assume that the underlying statistics is essentially multivariate normal or close to it.

Portfolios

- Consider a linear combination of returns \( r_p = \sum_i w_i r_i \) with weights \( w_i \). The weights add up to unity: \( \sum_i w_i = 1 \). The portfolio's expectation value is \( \mu_p = \sum_i w_i \mu_i \), with variance \( \sigma_p^2 = \sum_{ij} w_i \sigma_{ij} w_j \),

where \( \sigma_{ij} \) is the covariance matrix, and \( \sigma_i \) the standard deviation of return \( i \).

Level surfaces of risk measured in variance

- The covariance matrix is positive definite. It follows that the level surfaces (iso-risk surfaces) of variance are (hyper)ellipsoids in the space of weights. The convex iso-risk surfaces reflect the fact that the variance is a convex measure.
- The principal axes are inversely proportional to the square root of the eigenvalues of the covariance matrix.
Small eigenvalues thus correspond to long axes.

- The risk free asset would correspond to an infinite axis, and the corresponding ellipsoid would be deformed into an elliptical cylinder.

The Markowitz problem

- According to Markowitz' classical theory the tradeoff between risk and reward can be realized by minimizing the variance \( \sigma_p^2 = \sum_{ij} w_i \sigma_{ij} w_j \) over the weights, for a given expected return \( \sum_i w_i \mu_i = \mu \) and budget \( \sum_i w_i = 1 \).

- Geometrically, this means that we have to blow up the risk ellipsoid until it touches the intersection of the two planes corresponding to the return and budget constraints, respectively. The point of tangency is the solution to the problem.
- As the solution is the point of tangency of a convex surface with a linear one, the solution is unique.
- There is a certain continuity or stability in the solution: a small mis-specification of the risk ellipsoid leads to a small shift in the solution.

- Covariance matrices corresponding to real markets tend to have mostly positive elements.
- A large, complicated matrix with nonzero average elements will have a large (Frobenius-Perron) eigenvalue, with the corresponding eigenvector having all positive components. This will be the direction of the shortest principal axis of the risk ellipsoid.
- Then the solution also will have all positive components. Even large fluctuations in the small eigenvalue sectors may have a relatively mild effect on the solution.

The minimal risk portfolio

- Expected returns are hardly possible to determine with any precision (on efficient markets, impossible).
- In order to get rid of the uncertainties in the returns, we confine ourselves to considering the minimal risk portfolio only, that is, for the sake of simplicity, we drop the return constraint.
- Minimizing the variance of a portfolio without considering return does not, in general, make much sense. In some cases (index tracking, benchmarking), however, this is precisely what one has to do.

Benchmark tracking

- The goal can be (e.g. in benchmark tracking or index replication) to minimize the risk (e.g. the standard deviation) relative to a benchmark
- Portfolio: \( r_P = \sum_i w_i r_i \)
- Benchmark: \( r_B \)
- „Relative portfolio”: \( r_P - r_B \)

- Therefore the relevant problems are of similar structure, but with returns relative to the benchmark:
- For example, to minimize risk relative to the benchmark means minimizing the standard deviation of \( r_P - r_B \)
with the usual budget constraint (no condition on expected returns!)

The weights of the minimal risk portfolio

- Analytically, the minimal variance portfolio corresponds to the weights for which \( \sigma_p^2 = \sum_{ij} w_i \sigma_{ij} w_j \) is minimal, given \( \sum_i w_i = 1 \).

The solution is: \( w_i^* = \frac{\sum_j \sigma_{ij}^{-1}}{\sum_{jk} \sigma_{jk}^{-1}} \).

- Geometrically, the minimal risk portfolio is the point of tangency between the risk ellipsoid and the plane of the budget constraint.
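This closed-form solution can be checked numerically; a minimal sketch (NumPy, with a toy covariance matrix whose values are chosen purely for illustration):

```python
import numpy as np

def min_variance_weights(cov):
    """Minimal-variance weights under the budget constraint sum(w) = 1:
    w* = (cov^-1 1) / (1' cov^-1 1)."""
    ones = np.ones(cov.shape[0])
    x = np.linalg.solve(cov, ones)  # cov^-1 1, without forming the inverse
    return x / x.sum()

# Toy 3-asset covariance matrix (illustrative values only)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
```

By construction the weights sum to one, and the resulting portfolio variance is no larger than that of any other fully invested portfolio, e.g. the equal-weight one.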

Empirical covariance matrices

- The covariance matrix has to be determined from measurements on the market. From the returns \( x_{it} \) observed at times \( t = 1, \dots, T \) we get the estimator: \( \sigma_{ij} = \frac{1}{T} \sum_{t=1}^{T} x_{it} x_{jt} \)
- For a portfolio of N assets the covariance matrix has O(N²) elements. The time series of length T for N assets contain NT data points. In order for the measurement to be precise, we need N << T. Bank portfolios may contain hundreds of assets, and it is hardly meaningful to use time series longer than 4 years (T ~ 1000). Therefore, N/T << 1 rarely holds in practice. As a result, there will be a lot of noise in the estimate, and the error will scale in N/T.
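The estimator above is easy to state in code; a small sketch (NumPy, with i.i.d. standard normal returns, so the true covariance is the identity and the estimation noise is directly visible):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 10, 500
X = rng.standard_normal((N, T))        # returns x_it; true covariance = identity
sigma = (X @ X.T) / T                  # estimator sigma_ij = (1/T) sum_t x_it x_jt
err = np.abs(sigma - np.eye(N)).max()  # largest elementwise deviation from the truth
```

Each element fluctuates with standard deviation of order 1/sqrt(T); shrinking T toward N makes these fluctuations, and hence the noise in any optimization built on sigma, much worse.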

Fighting the curse of dimensions

- Economists have been struggling with this problem for ages. Since the root of the problem is the lack of sufficient information, the remedy is to inject external information into the estimate. This means imposing some structure on σ. This introduces bias, but the beneficial effect of noise reduction may compensate for it.
- Examples:
- single-index models (β's)
- multi-index models
- grouping by sectors
- principal component analysis
- Bayesian shrinkage estimators, etc.
- All these help to various degrees. Most studies are based on empirical data.

An intriguing observation

- L. Laloux, P. Cizeau, J.-P. Bouchaud, M. Potters, PRL 83, 1467 (1999) and Risk 12, No. 3, 69 (1999), and

V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, H.E. Stanley, PRL 83, 1471 (1999)

noted that there is such a huge amount of noise in empirical covariance matrices that it may be enough to make them useless.

- A paradox: covariance matrices are in widespread use and banks still survive?!

Laloux et al. 1999

The spectrum of the covariance matrix obtained from the time series of S&P 500, with N=406, T=1308, i.e. N/T = 0.31, compared with that of a completely random matrix (solid curve). Only about 6% of the eigenvalues lie beyond the random band.

Remarks on the paradox

- The number of junk eigenvalues may not necessarily be a proper measure of the effect of noise: The small eigenvalues and their eigenvectors fluctuate a lot, indeed, but perhaps they have a relatively minor effect on the optimal portfolio, whereas the large eigenvalues and their eigenvectors are fairly stable.
- The investigated portfolio was too large compared with the length of the time series.
- Working with real, empirical data, it is hard to distinguish the effect of insufficient information from other parasitic effects, like nonstationarity.

A historical remark

- Random matrices first appeared in a finance context in G. Galluccio, J.-P. Bouchaud, M. Potters, Physica A 259, 449 (1998). In this paper they show that the optimization of a margin account (where, due to the obligatory deposit proportional to the absolute value of the positions, a nonlinear constraint replaces the budget constraint) is equivalent to finding the ground state configuration of what is called a spin glass in statistical physics. This task is known to be NP-complete, with an exponentially large number of solutions.
- Problems of a similar structure would appear if one wanted to optimize the capital requirement of a bond portfolio under the rules stipulated by the Capital Adequacy Directive of the EU (see below)

A filtering procedure suggested by RMT

- The appearance of random matrices in the context of portfolio selection triggered a lot of activity, mainly among physicists. Laloux et al. and Plerou et al. subsequently proposed a filtering method based on random matrix theory (RMT). This has been further developed and refined by many workers.
- The proposed filtering consists basically in discarding as pure noise that part of the spectrum that falls below the upper edge of the random spectrum. Information is carried only by the eigenvalues and their eigenvectors above this edge. Optimization should be carried out by projecting onto the subspace of large eigenvalues, and replacing the small ones by a constant chosen so as to preserve the trace. This would then drastically reduce the effective dimensionality of the problem.

- Interpretation of the large eigenvalues: The largest one is the „market”, the other big eigenvalues correspond to the main industrial sectors.
- The method can be regarded as a systematic version of principal component analysis, with an objective criterion on the number of principal components.
- In order to better understand this novel filtering method, we have to recall a few results from Random Matrix Theory (RMT)

II. RANDOM MATRICES

Origins of random matrix theory (RMT)

- Wigner, Dyson 1950’s
- Originally meant to describe (to a zeroth approximation) the spectral properties of (heavy) atomic nuclei
- on the grounds that something that is sufficiently complex is almost random

- fits into the picture of a complex system, as one with a large number of degrees of freedom, without symmetries, hence irreducible, quasi-random.

- markets, by the way, are considered stochastic for similar reasons

- Later found applications in a wide range of problems, from quantum gravity through quantum chaos, mesoscopics, random systems, etc. etc.

RMT

- Has developed into a rich field with a huge set of results for the spectral properties of various classes of random matrices
- They can be thought of as a set of „central limit theorems” for matrices

Wigner semi-circle law

- M: symmetric N×N matrix with i.i.d. elements (the distribution has 0 mean and finite second moment, \( \sigma^2 \))
- \( \lambda_k \): eigenvalues of M
- The density of the eigenvalues \( \lambda_k \) (normed by N) goes to the Wigner semi-circle for N → ∞ with probability 1: for the eigenvalues of \( M/\sqrt{N} \),

\( \rho(\lambda) = \frac{1}{2\pi\sigma^2} \sqrt{4\sigma^2 - \lambda^2}, \quad |\lambda| < 2\sigma \),

\( \rho(\lambda) = 0 \) otherwise.

Remarks on the semi-circle law

- Can be proved by the method of moments (as done originally by Wigner) or by the resolvent method (Marchenko and Pastur and countless others)
- Holds also for slightly dependent or non-homogeneous entries (e.g. for the association matrix in networks theory)
- The convergence is fast (believed to be ~1/N, but proved only at a lower rate), especially as concerns the support
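A quick numerical check of the semi-circle law (NumPy; GOE-style symmetrization and scaling, an illustrative choice, so that the spectrum converges to [−2, 2]):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
A = rng.standard_normal((N, N))
M = (A + A.T) / np.sqrt(2 * N)   # symmetric; off-diagonal variance 1/N -> edges at +-2
eig = np.linalg.eigvalsh(M)      # all eigenvalues of the symmetric matrix
```

For N = 1000 the eigenvalues already fill the interval [−2, 2] tightly, with only O(N^(-2/3)) fluctuations at the edges; the fraction of eigenvalues inside [−1, 1] approaches the semi-circle value of about 0.61.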

Convergence to the semi-circle as N increases

Figure: spectra for N = 20, 50, 100, 200, 500, 1000; the elements of M are distributed normally.

If the matrix elements are not centered but have a common mean, one large eigenvalue breaks away, the rest stay in the semi-circle

Figure: spectrum for N = 1000 with non-centered elements.

For fat-tailed (but finite variance) distributions the theorem still holds, but the convergence is slow

Sample from Student t (3 degrees of freedom) distribution

Figure: spectra for N = 20, 50, 100, 200, 500, 1000.

There is a lot of fluctuation, level crossing, random rotation of eigenvectors taking place in the bulk

Illustration of the instability of the eigenvectors, although the distribution of the eigenvalues is the same

Figure: samples 1, 2, …, k (matrix elements normally distributed, N = 1000); the scalar product of the eigenvectors assigned to the j-th eigenvalue of the different sample matrices is plotted.

The eigenvector belonging to the large eigenvalue (when there is one) is much more stable. The larger the eigenvalue, the more so.

Illustration of the stability of the largest eigenvector

Figure: samples 1, 2, …, k (matrix elements normally distributed, but the sum of the elements in the rows is not zero; N = 1000); the scalar product of the eigenvectors belonging to the largest eigenvalue of the matrix is plotted. The larger the first eigenvalue, the closer the scalar products to 1 or −1.

The eigenvector components

- A lot less is known about the eigenvectors.
- Those in the bulk have random components
- The one belonging to the large eigenvalue (when there is one) is completely delocalized

Wishart matrices – random sample covariance matrices

- Let A be an N×T matrix with i.i.d. elements (0 mean and finite second moment)
- σ = (1/T) AA′, where A′ is the transpose of A
- Wishart or Marchenko-Pastur spectrum (eigenvalue distribution, for elements of unit variance):

\( \rho(\lambda) = \frac{T}{N} \frac{\sqrt{(\lambda_{\max} - \lambda)(\lambda - \lambda_{\min})}}{2\pi\lambda} \), where \( \lambda_{\min,\max} = \left(1 \mp \sqrt{N/T}\right)^2 \).

Remarks

- The theorem also holds when E{A} is of finite rank
- The assumption that the entries are identically distributed is not necessary
- If T < N, the distribution is the same, with an extra point of mass 1 – T/N at the origin
- If T = N, the Marchenko-Pastur law is the squared Wigner semi-circle
- The proof extends to slightly dependent and inhomogeneous entries
- The convergence is fast, believed to be ~1/N, but proved only at a lower rate
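A numerical illustration of the Marchenko-Pastur band (NumPy; unit-variance i.i.d. entries, N/T = 0.5 chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 500, 1000                  # N/T = 0.5
A = rng.standard_normal((N, T))
sigma = (A @ A.T) / T             # random sample covariance (Wishart) matrix
eig = np.linalg.eigvalsh(sigma)

q = N / T
lam_min = (1 - np.sqrt(q)) ** 2   # lower edge of the Marchenko-Pastur band
lam_max = (1 + np.sqrt(q)) ** 2   # upper edge of the band
```

All eigenvalues fall essentially inside [lam_min, lam_max], even though the true covariance is the identity: the entire spread of the spectrum is pure estimation noise.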

Convergence in N, with T/N = 2 fixed

Figure: spectra for N = 20, 50, 100, 200, 500, 1000, each with T/N = 2; the red curve is the limiting Wishart distribution.

Evolution of the distribution with T/N, with N = 1000 fixed

Figure: spectra for T/N = 1 (the quadratic limit), 1.2, 2, 3, 5, and 10, with N = 1000.

Figure: scalar product of the eigenvectors belonging to the j-th eigenvalue of the matrices for different samples.

Eigenvector components

The same applies as in the Wigner case: the eigenvectors in the bulk are random, the one outside is delocalized

Distribution of the eigenvector components, if no dominant eigenvalue exists

Figure: N = 200, 500, 1000, each with T/N = 2. Scalar product of the eigenvectors belonging to the largest eigenvalue of the matrix: the larger the first eigenvalue, the closer the scalar products to 1.

Distribution of the eigenvector components, if one of the eigenvalues is not typical for random matrices

Figure: N = 1000, T/N = 2, ρ = 0.1.


The interval becomes narrower as the correlation increases.

Figure: N = 1000, T/N = 2, ρ = 0.9.

III. FILTERING OF NORMAL PORTFOLIOS

Some key points

Laloux et al. and Plerou et al. demonstrate the effect of noise on the spectrum of the correlation matrix C. This is not directly relevant for the risk in the portfolio. We wanted to study the effect of noise on a measure of risk.

Optimization vs. risk management eigenvalues is not typical for random matrixes.

- There is a fundamental difference between the two kinds of uses of the covariance matrix σ: for optimization and for risk measurement, respectively.
- Where do people use σ for portfolio selection at all?
- Goldman & Sachs technical document

- tracking portfolios, benchmarking, shrinkage

- capital allocation (EWRM)

- hidden in software

Optimization

- When σ is used for optimization, we need a lot more information, because we are comparing different portfolios.
- To get the optimal portfolio, we need to invert σ, and since it has small eigenvalues, the error gets amplified.

Risk measurement – management – regulatory capital calculation

Assessing risk in a given portfolio – no need to invert σ – the problem of measurement error is much less serious

A measure of the effect of noise

Assume we know the true covariance matrix \( \sigma^{(0)} \) and the noisy one \( \sigma \). Then a natural, though not unique, measure of the impact of noise is

\( q_0^2 = \frac{w^{*\top} \sigma^{(0)} w^*}{w^{(0)\top} \sigma^{(0)} w^{(0)}} \),

where \( w^* \) and \( w^{(0)} \) are the optimal weights corresponding to \( \sigma \) and \( \sigma^{(0)} \), respectively.

We will mostly use simulated data

The rationale behind this is that in order to be able to compare the efficiency of filtering methods (and later also the sensitivity of risk measures to noise), we had better get rid of other sources of uncertainty, like non-stationarity. This can be achieved by using artificial data, where we have total control over the underlying stochastic process.

The model-simulation approach

- Our strategy is to choose various model covariance matrices and generate N long simulated time series from them. Then we cut segments of length T from these time series, as if observing them on the market, and try to reconstruct the covariance matrices from them. We optimize a portfolio both with the „true” and with the „observed” covariance matrix and determine the measure \( q_0 \).

The models are chosen to mimic at least some of the characteristic features of real markets. Four simple models of slightly increasing complexity will be considered

Model 1: the unit matrix

Spectrum: λ = 1, N-fold degenerate. Noise will split this into a band.

(C is the unit matrix: 1's on the diagonal, 0's elsewhere.)

Model 2: single-index

Singlet: λ₁ = 1 + ρ(N−1) ~ O(N), eigenvector (1, 1, 1, …)

λ₂ = 1 − ρ ~ O(1), (N−1)-fold degenerate

(C has 1's on the diagonal and ρ everywhere off the diagonal.)

The economic content of the single-index model

Return: \( x_i = \beta_i x_M + \epsilon_i \), where \( x_M \) is the market return with standard deviation σ.

The covariance matrix implied by the above: \( \sigma_{ij} = \beta_i \beta_j \sigma^2 + \delta_{ij} \sigma_{\epsilon_i}^2 \).

The assumed structure reduces the # of parameters to O(N).

If nothing depends on i, then this is just the caricature Model 2.

Model 3: market + sectors

Spectrum: a singlet (the market), plus degenerate sector eigenvalues, plus a degenerate bulk. This structure has also been studied by economists.

Model 4: semi-empirical

Suppose we have very long time series (T’) for many assets (N’).

Choose N<N’ time series randomly and derive Cº from these data. Generate time series of length T<<T’ from Cº.

The error due to T is much larger than that due to T’.

How to generate time series?

Given independent standard normal variables \( z_t \) and a model covariance matrix \( \sigma^{(0)} \), define a real, lower triangular matrix L such that \( L L' = \sigma^{(0)} \) (Cholesky decomposition).

Get the correlated series as \( x_t = L z_t \).

The „empirical” covariance matrix \( \sigma \) determined from the generated series will be different from \( \sigma^{(0)} \). For fixed N, \( \sigma \to \sigma^{(0)} \) as T → ∞.

We look for the minimal risk portfolio for both the true and the empirical covariances and determine the measure \( q_0 \).
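The Cholesky-based generation can be sketched as follows (NumPy; the equicorrelation „true” matrix is an illustrative assumption, not one of the talk's models verbatim):

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 5, 100000
rho = 0.3
# Illustrative "true" covariance matrix: 1 on the diagonal, rho elsewhere
cov0 = (1 - rho) * np.eye(N) + rho * np.ones((N, N))

L = np.linalg.cholesky(cov0)       # L L' = cov0
Z = rng.standard_normal((N, T))    # independent standard normal series
X = L @ Z                          # correlated series with covariance cov0
cov_emp = (X @ X.T) / T            # "empirical" covariance from the sample
err = np.abs(cov_emp - cov0).max()
```

With T this large the empirical matrix sits close to the true one; cutting T down toward N is what produces the noise studied in the talk.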

We get numerically for Model 1 the following scaling result: \( q_0 \) depends on N and T only through their ratio N/T.

This confirms the expected scaling in N/T. The corresponding analytic result

\( q_0 = \frac{1}{\sqrt{1 - N/T}} \)

can easily be derived for Model 1. It is valid within O(1/N) corrections also for more general models.
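The 1/√(1 − N/T) result for Model 1 can be verified by direct simulation; a sketch (NumPy, averaging q₀ over a few samples; sample count and sizes are illustrative):

```python
import numpy as np

def q0(cov_true, cov_est):
    """q0^2 = (w*' cov_true w*) / (w0' cov_true w0), where w* and w0 are the
    minimal-variance weights computed from the estimated and true matrices."""
    ones = np.ones(cov_true.shape[0])
    w_est = np.linalg.solve(cov_est, ones); w_est /= w_est.sum()
    w_true = np.linalg.solve(cov_true, ones); w_true /= w_true.sum()
    return np.sqrt((w_est @ cov_true @ w_est) / (w_true @ cov_true @ w_true))

rng = np.random.default_rng(4)
N, T = 100, 400                        # N/T = 0.25
vals = []
for _ in range(20):                    # average over independent samples
    X = rng.standard_normal((N, T))    # Model 1: true covariance = identity
    vals.append(q0(np.eye(N), (X @ X.T) / T))
q0_mean = float(np.mean(vals))

pred = 1 / np.sqrt(1 - N / T)          # analytic result for Model 1
```

For N/T = 0.25 the prediction is about 1.155, i.e. the noisy optimum carries roughly 15% more true risk than the ideal one; the simulated average lands close to this.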

The same in a risk measurement context

Given fixed weights \( w_i \). Choose \( \sigma^{(0)} \) to generate data, and measure \( \sigma \) from finite-T time series.

Calculate the ratio of the risk measured with \( \sigma \) to the true risk of this fixed portfolio.

It can be shown that the resulting measurement error is far smaller than the corresponding error in the optimization context.

Filtering

Single-index filter:

Spectral decomposition of the correlation matrix: \( C_{ij} = \sum_k \lambda_k v_i^{(k)} v_j^{(k)} \). Keep the largest eigenvalue and replace the rest by a constant:

\( C^{(f)}_{ij} = \lambda_1 v_i^{(1)} v_j^{(1)} + \delta \left( \delta_{ij} - v_i^{(1)} v_j^{(1)} \right) \),

with \( \delta \) to be chosen so as to preserve the trace.

Random matrix filter

Keep the eigenvalues above the upper edge of the random band and replace those below it by a constant:

\( C^{(f)}_{ij} = \sum_{\lambda_k > \lambda_{\max}} \lambda_k v_i^{(k)} v_j^{(k)} + \delta \sum_{\lambda_k \le \lambda_{\max}} v_i^{(k)} v_j^{(k)} \),

where \( \delta \) is to be chosen to preserve the trace again, and \( \lambda_{\max} \) is the upper edge of the random band.
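A minimal sketch of the RMT filter described above (NumPy; the single-index-like test matrix is an illustrative assumption, not taken from the talk):

```python
import numpy as np

def rmt_filter(corr, N, T):
    """Keep eigenvalues above the random-band edge (1 + sqrt(N/T))^2 and
    replace the rest by their mean, which preserves the trace."""
    lam_plus = (1 + np.sqrt(N / T)) ** 2
    vals, vecs = np.linalg.eigh(corr)
    filt = vals.copy()
    noise = vals <= lam_plus
    if noise.any():
        filt[noise] = vals[noise].mean()   # constant chosen to preserve the trace
    return (vecs * filt) @ vecs.T          # = vecs @ diag(filt) @ vecs'

rng = np.random.default_rng(5)
N, T = 100, 300
rho = 0.2
corr0 = (1 - rho) * np.eye(N) + rho * np.ones((N, N))  # "true" matrix (assumed)
X = np.linalg.cholesky(corr0) @ rng.standard_normal((N, T))
corr_emp = (X @ X.T) / T
corr_f = rmt_filter(corr_emp, N, T)

err_raw = np.linalg.norm(corr_emp - corr0)  # Frobenius error before filtering
err_f = np.linalg.norm(corr_f - corr0)      # Frobenius error after filtering
```

The large (market-like) eigenvalue survives the filter, while the noisy bulk is flattened to a constant; for this kind of structured truth the filtered matrix is markedly closer to the true one.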

Covariance estimates

After filtering the correlation matrix we get the corresponding covariance estimates.

Similarly for the other models. We compare the results on the following figures.

Results for the market + sectors model

Results for the semi-empirical model

Comments on the efficiency of filtering techniques

- Results depend on the model used for Cº.
- Market model: still scales with T/N, singular at T/N = 1;
much improved by filtering (the filtering technique matches the structure), which can go even below T = N.

- Market + sectors: strong dependence on the parameters;
RMT filtering outperforms the other two.

- Semi-empirical: data are scattered, RMT wins in most cases.

- Filtering is very powerful in suppressing noise, particularly when it matches the underlying structure.
- Is there information buried in the random band?
With T increasing, more and more eigenvalues crawl out from below the upper random band edge.

- How to dig out information buried in the random band?
Promising steps by various groups (Z. Burda, A. Görlich, A. Jarosz and J. Jurkiewicz, cond-mat/0305627; and Z. Burda and J. Jurkiewicz, cond-mat/0312496, Jagiellonian University, Cracow; Th. Guhr, Lund University; P. Repetowicz, P. Richmond and S. Hutzler, Trinity College, Dublin; G. Papp, Sz. Pafka, M.A. Nowak, and I.K., Budapest and Cracow, etc.)

IV. BEYOND THE STATIONARY, GAUSSIAN WORLD

- Real-life time series are neither stationary (volatility clustering, changing economic or legal environment, etc.), nor Gaussian (fat tails)
- For long-tailed distributions the variance is not an appropriate risk measure (even when it exists): minimizing the variance may actually increase rather than decrease risk.

One step towards reality: the non-stationary case

- Volatility clustering → ARCH, GARCH, integrated GARCH → EWMA (Exponentially Weighted Moving Averages) in RiskMetrics
- t: actual time
- T: window
- α: attenuation factor (T_eff ~ −1/log α), the rate of forgetting

- RiskMetrics: α_optimal = 0.94;
memory of a few months; the total weight of data preceding the last 75 days is < 1%.
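The exponentially weighted estimator and the quoted 75-day figure can be sketched as follows (NumPy; the function name and the normalization over a finite window are assumptions, equivalent to the standard (1 − α)Σα^τ form up to truncation):

```python
import numpy as np

def ewma_cov(X, alpha):
    """Exponentially weighted covariance: observation of age tau gets a weight
    proportional to alpha**tau, normalized over the finite window."""
    N, T = X.shape
    w = alpha ** np.arange(T - 1, -1, -1, dtype=float)  # oldest ... newest
    w /= w.sum()
    return (X * w) @ X.T   # sum_t w_t x_it x_jt

alpha = 0.94                              # the RiskMetrics value
tail_weight = alpha ** 75                 # fraction of total weight older than 75 days

rng = np.random.default_rng(6)
X = rng.standard_normal((3, 200))
S = ewma_cov(X, alpha)
```

With α = 0.94 the weight of everything older than 75 days is 0.94^75 ≈ 1%, which is the figure quoted above.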

- Because of the short effective time cutoff, filtering is even more important than before. Carol Alexander applied standard principal component analysis.
- RMT helps choosing the number of principal components in an objective manner.
- For the application of RMT we need the upper edge of the random band for exponentially weighted random matrices

Exponentially weighted Wishart matrices

Sz. Pafka, M. Potters, and I.K.: submitted to Quantitative Finance, e-print: cond-mat/0402573

Density of eigenvalues:

where v is the solution to:

- The RMT filtering wins again – better than plain EWMA and better than plain MA.
- There is an optimal α (too long a memory will include nonstationary effects, too short a memory loses data).
The optimal α (for N = 100) is 0.996 >> RiskMetrics α.

Alternative risk measures

Risk measures in practice: VaR

- VaR (Value at Risk) is a high (95%, or 99%) quantile, a threshold beyond which a given fraction (5% or 1%) of the statistical weight resides.
- Its merits (relative to the Greeks, e.g.):
- universal: can be applied to any portfolio

- probabilistic content: associated to the distribution

- expressed in money

- Widespread across the whole industry and regulation. Has been promoted from a diagnostic tool to a decision tool.
- Its lack of convexity prompted the search for coherence.

Risk measures implied by regulation

- Banks are required to set aside capital as a cushion against risk
- Minimal capital requirements are fixed by international regulation (Basel I and II, Capital Adequacy Directive of the EEC) – the magic 8%
- Standard model vs. internal models
- Capital charges assigned to various positions in the standard model purport to cover the risk in those positions, therefore, they must be regarded as some kind of implied risk measures
- These measures are trying to mimic variance by piecewise linear approximants. They are quite arbitrary, sometimes concave and unstable

An example: better than plain MA.Specific risk of bonds


CAD, Annex I, §14:

The capital requirement of the specific risk (due to issuer) of bonds is

Iso-risk surface of the specific risk of bonds

Another example: foreign exchange

According to Annex III, §1 (CAD 1993, Official Journal of the European Communities, L14, 1-26), the capital requirement is given in terms of the gross and the net position of the portfolio.

The iso-risk surface of the foreign exchange portfolio

Mean absolute deviation (MAD)

Some methodologies (e.g. Algorithmics) use the mean absolute deviation rather than the standard deviation to characterize the fluctuation of portfolios. The objective function to minimize is then:

instead of:

The iso-risk surfaces of MAD are polyhedra again.
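The polyhedral iso-risk surfaces are what make MAD minimization a linear program. A sketch of the standard Konno-Yamazaki-style LP, assuming `scipy` is available (the helper name and the toy data are ours):

```python
import numpy as np
from scipy.optimize import linprog

def min_mad_portfolio(returns):
    """Minimize the mean absolute deviation of the portfolio return:
    min (1/T) sum_t u_t  with  u_t >= |d_t . w|,  sum(w) = 1,
    where d_t are the demeaned return vectors (Konno-Yamazaki LP)."""
    T, N = returns.shape
    d = returns - returns.mean(axis=0)
    c = np.concatenate([np.zeros(N), np.full(T, 1.0 / T)])
    # the two one-sided constraints:  d_t.w - u_t <= 0  and  -d_t.w - u_t <= 0
    A_ub = np.block([[d, -np.eye(T)], [-d, -np.eye(T)]])
    b_ub = np.zeros(2 * T)
    A_eq = np.concatenate([np.ones(N), np.zeros(T)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] * N + [(0, None)] * T)
    return res.x[:N], res.fun

rng = np.random.default_rng(1)
w, mad = min_mad_portfolio(rng.standard_normal((200, 4)))
print(np.round(w.sum(), 6))
```

The auxiliary variables u_t linearize the absolute values; short selling is allowed here (weights unbounded), which matters for the feasibility discussion later.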

Effect of noise on absolute deviation-optimized portfolios

We generate artificial time series (say iid normal), determine the true absolute deviation, and compare it to the "measured" one:

We get:

Noise sensitivity of MAD

- The result scales in T/N (same as with the variance). The optimal portfolio – other things being equal - is more risky than in the variance-based optimization.
- Geometrical interpretation: The level surfaces of the variance are ellipsoids. The optimal portfolio is found as the point where this risk ellipsoid first touches the plane corresponding to the budget constraint. In the absolute deviation case the ellipsoid is replaced by a polyhedron, and the solution occurs at one of its corners. A small error in the specification of the polyhedron makes the solution jump to another corner, thereby increasing the fluctuation in the portfolio.

Filtering for MAD (??)

The absolute deviation-optimized portfolios can be filtered, by associating a covariance matrix with the time series, then filtering this matrix (by RMT, say), and generating a new time series via this reduced matrix. This (admittedly fortuitous) procedure significantly reduces the noise in the absolute deviation.
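The RMT filtering step referred to here can be sketched as eigenvalue clipping at the Marchenko-Pastur upper edge; a minimal version assuming iid data and the trace-preserving convention (the function name is ours):

```python
import numpy as np

def rmt_filter(corr, T):
    """Clip the noise bulk of a correlation matrix's spectrum: eigenvalues
    below the Marchenko-Pastur upper edge (1 + sqrt(N/T))**2 are replaced
    by their common average, so that the trace is preserved."""
    N = corr.shape[0]
    lam_max = (1 + np.sqrt(N / T)) ** 2
    vals, vecs = np.linalg.eigh(corr)
    noise = vals < lam_max
    if noise.any():
        vals[noise] = vals[noise].mean()   # keep trace(corr) = N
    return vecs @ np.diag(vals) @ vecs.T

rng = np.random.default_rng(4)
x = rng.standard_normal((500, 50))
c = np.corrcoef(x, rowvar=False)
f = rmt_filter(c, T=500)
```

The filtered matrix can then be used to regenerate a reduced-noise time series, as described above.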

Note that this risk measure can be used in the case of non-Gaussian portfolios as well.

Expected shortfall (ES) optimization

ES is the mean loss beyond a high threshold defined in probability (not in money). For continuous pdfs it is the same as the conditional expectation beyond the VaR quantile. ES is coherent (in the sense of Artzner et al.) and as such it is strongly promoted by a group of academics. In addition, Uryasev and Rockafellar have shown that its optimization can be reduced to linear programming, for which extremely fast algorithms exist.

ES-optimized portfolios tend to be much noisier than either of the previous ones. One reason is the instability related to the (piecewise) linear risk measure, the other is that a high quantile sacrifices most of the data.

In addition, ES optimization is not always feasible!
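A sketch of the Rockafellar-Uryasev linear program mentioned above, assuming `scipy` (the helper name is ours). When the sample admits no solution, the LP is unbounded and the solver reports it, which is exactly the infeasibility phenomenon discussed here:

```python
import numpy as np
from scipy.optimize import linprog

def min_es_portfolio(returns, beta=0.95):
    """Rockafellar-Uryasev LP for minimizing expected shortfall:
    min  z + 1/((1-beta) T) * sum_t u_t
    s.t. u_t >= -r_t . w - z,  u_t >= 0,  sum(w) = 1.
    Returns None when the LP has no finite optimum for this sample."""
    T, N = returns.shape
    c = np.concatenate([np.zeros(N), [1.0], np.full(T, 1.0 / ((1 - beta) * T))])
    # constraint rows:  -r_t.w - z - u_t <= 0
    A_ub = np.hstack([-returns, -np.ones((T, 1)), -np.eye(T)])
    b_ub = np.zeros(T)
    A_eq = np.concatenate([np.ones(N), [0.0], np.zeros(T)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] * (N + 1) + [(0, None)] * T)
    return (res.x[:N], res.fun) if res.status == 0 else None

rng = np.random.default_rng(2)
out = min_es_portfolio(rng.standard_normal((500, 5)))  # T/N large: usually feasible
```

The auxiliary variable z converges to the VaR threshold at the optimum, so VaR and ES come out of the same program.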

Before turning to the discussion of the feasibility problem, let us compare the noise sensitivity of the following risk measures: standard deviation, absolute deviation and expected shortfall (at 95%). For the sake of comparison we use the same (Gaussian) input data of length T for each, determine the minimal risk portfolio under these risk measures and compare the error due to noise.

The next slides show:

- plots of wi (portfolio weights) as a function of i
- display of q0 (ratio of risk of optimal portfolio determined from time series information vs full information)
- results showing that the effect of estimation noise can be significant, and that more "advanced" risk measures are more demanding of information (in the portfolio optimization context)

- the suboptimality (q0) scales in T/N (for large N and T):

Risk measures in risk measurement (as opposed to portfolio optimization)

- In the context of risk measurement of given (fixed) portfolios, the estimation error is much smaller: it usually scales as 1/√T, independently of N!
- The next slides show the histogram of measured risk/true risk for different risk measures (T = 500, 1000); the mean is 1 and the estimation error is usually within 5-10%, i.e. negligible compared to the portfolio optimization context.

The essence of the feasibility problem

- For T < N, there is no solution to the portfolio optimization problem under any of the risk measures considered here.
- For T > N, there always is a solution under the variance and MAD, even if it is bad for T not large enough. In contrast, under ES (and WL to be considered later), there may or may not be a solution for T > N, depending on the sample. The probability of the existence of a solution goes to 1 only for T/N going to infinity.

- The problem does not appear if short selling is banned.

Feasibility of optimization under ES

Probability of the existence of an optimum under CVaR.

F is the standard normal distribution. Note the scaling in N/√T.

A pessimistic risk measure: worst loss

- In order to better understand the feasibility problem, select the worst return in time and minimize it over the weights: minimize maxt ( −Σi wi xit ) over w,
subject to Σi wi = 1.

- This risk measure is coherent, one of Acerbi’s spectral measures.
- For T < N there is no solution
- The existence of a solution for T > N is a probabilistic issue again, depending on the time series sample

Why is the existence of an optimum a random event?

- To get a feeling, consider N=T=2.
- The two planes
intersect the plane of the budget constraint in two straight lines. If one of these is decreasing and the other is increasing, then there is a solution; if both increase or both decrease, there is not. It is easy to see that for elliptical distributions the probability of there being a solution is ½.
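The N = T = 2 argument can be checked by simulation; a minimal sketch assuming iid standard normal returns (a special elliptical case):

```python
import numpy as np

# Monte Carlo check: for N = T = 2 and a symmetric (elliptical) return
# distribution, the minimax problem has a solution with probability 1/2.
# On the budget line w2 = 1 - w1 the loss in period t is
# -(w1 x_t1 + (1 - w1) x_t2), linear in w1 with slope -(x_t1 - x_t2);
# the max of the two lines is bounded below iff the slopes differ in sign.
rng = np.random.default_rng(3)
x = rng.standard_normal((100_000, 2, 2))   # x[trial, period, asset]
slopes = -(x[:, :, 0] - x[:, :, 1])        # one slope per period, per trial
p = np.mean(slopes[:, 0] * slopes[:, 1] < 0)
print(p)                                    # close to 0.5
```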

Probability of the feasibility of the minimax problem

- For T > N the probability of a solution (for an elliptical underlying pdf) is p = 1 − 2^(1−T) Σk=0..N−2 C(T−1, k).
(The problem is isomorphic to some problems in operations research and random geometry.)

- For N and T large, p goes over into the error function and scales in N/√T.
- For T→ infinity, p →1.
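The quoted limits (p = ½ for N = T = 2, the error-function form, p → 1 as T → ∞) are all reproduced by the Wendel/Cover-type counting formula p = 1 − 2^(1−T) Σk=0..N−2 C(T−1, k); the following is our reconstruction from that argument, to be checked against the paper:

```python
from math import comb

def p_solution(N, T):
    """Probability that the minimax (worst-loss) problem is feasible for T
    observations of N assets under an elliptical distribution, from the
    Wendel/Cover-type counting argument (our reconstruction):
    p = 1 - 2**(1 - T) * sum_{k=0}^{N-2} C(T-1, k)."""
    return 1.0 - sum(comb(T - 1, k) for k in range(N - 1)) / 2 ** (T - 1)

print(p_solution(2, 2))       # 0.5: matches the N = T = 2 slope argument
print(p_solution(100, 5000))  # essentially 1 for T >> N
```

For T < N the formula gives 0 (no solution), and for large N, T the binomial tail turns into the error function with the N/√T scaling stated above.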

Probability of the existence of a solution under maximum loss.

F is the standard normal distribution. Scaling is in N/√T again.

Concluding remarks

- Due to the large number of assets in typical bank portfolios and the limited amount of data, noise is an all pervasive problem in portfolio theory.
- It can be efficiently filtered by a variety of techniques from portfolios optimized under variance.
- RMT is (one of) the latest of these filtering or dimensional reduction techniques. It is quite competitive with existing alternatives already, shows enhanced performance when applied in conjunction with extra information about the structure of the market, and holds great promise for resolving the spectrum under the upper edge of the random band.
- Unfortunately, variance is not an adequate risk measure for fat-tailed pdf’s.
- Piecewise linear risk measures show instability (jumps) in a noisy environment.
- Risk measures focusing on the far tails show additional sensitivity to noise, due to loss of data.
- The two coherent measures we have studied display large sample-to-sample fluctuations and feasibility problems under noise. This may cast a shade of doubt on their applications.

Some references

- Physica A 299, 305-310 (2001)
- European Physical Journal B 27, 277-280 (2002)
- Physica A 319, 487-494 (2003)
- Physica A 343, 623-634 (2004)
- submitted to Quantitative Finance, e-print: cond-mat/0402573
