Statistical Information Risk


Presentation Transcript


  1. Statistical Information Risk – Gary G. Venter, Guy Carpenter Instrat

  2. Statistical Information Risk • Estimation risk • Uncertainty related to estimating parameters of distributions from data • Projection risk • Uncertainty arising from the possibility of future changes in the distributions • Both quantifiable to some extent

  3. Topics • Impact of information risk • Estimation risk 1 – asymptotic theory • Estimation risk 2 – Bayesian updating • Estimation risk 3 – both together • Projection risk

  4. Impact of Information Risk • Collective risk theory • L = ΣX_i, i = 1, …, N • E(L) = E(N)E(X) • Var(L) = E(N)Var(X) + E(X)²Var(N) • CV(L)² = Var(X)/[E(N)E(X)²] + Var(N)/E(N)² • = [CV(X)² + VM(N)] / E(N) • = [49 + 1] / E(N) when CV(X) = 7 and VM(N) = 1 • Goes to zero as frequency gets large • Add projection risk • K = JL, E(J) = 1, so E(K) = E(L) • CV(K)² = [1 + CV(J)²]CV(L)² + CV(J)² • Lower limit is CV(J)² (a numerical check follows below)
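
A minimal numerical check of the two CV formulas above, assuming the CV(X) = 7 and VM(N) = 1 values used in the next two exhibits and a hypothetical projection-risk multiplier with CV(J) = 0.05:

```python
# Numerical check of the aggregate-CV formulas above.
# CV(X) = 7 and VM(N) = 1 are from the slides; CV(J) = 0.05 is hypothetical.
cv_x, vm_n, cv_j = 7.0, 1.0, 0.05

for e_n in (100, 1_000, 10_000, 100_000):
    cv_l_sq = (cv_x**2 + vm_n) / e_n                 # process risk only
    cv_k_sq = (1 + cv_j**2) * cv_l_sq + cv_j**2      # adding projection risk
    print(f"E(N)={e_n:>7}  CV(L)={cv_l_sq**0.5:.4f}  CV(K)={cv_k_sq**0.5:.4f}")

# CV(L) goes to zero as E(N) grows, but CV(K) is floored at CV(J) = 0.05.
```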

  5. Impact of Projection Risk on Aggregate CV (CV(X) = 7, VM(N) = 1)

  6. Percentiles of Lognormal Loss Ratio (CV(X) = 7, VM(N) = 1, E(LR) = 65%, three values of E(N))
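
One way an exhibit like this can be built is to match a lognormal to the aggregate mean and CV from the previous slide and read off percentiles. The sketch below assumes E(LR) = 65% means 0.65 and picks three illustrative claim counts and percentile levels:

```python
# Match a lognormal to the aggregate mean and CV, then read off percentiles.
# The three E(N) values and the percentile levels are illustrative choices.
import numpy as np
from scipy.stats import norm

cv_x, vm_n, e_lr = 7.0, 1.0, 0.65

for e_n in (100, 1_000, 10_000):                     # three claim-count levels
    cv = np.sqrt((cv_x**2 + vm_n) / e_n)             # CV(L) from the collective model
    sigma2 = np.log(1 + cv**2)                       # lognormal matching mean and CV
    mu = np.log(e_lr) - sigma2 / 2
    pcts = {p: np.exp(mu + np.sqrt(sigma2) * norm.ppf(p)) for p in (0.05, 0.50, 0.95)}
    print(e_n, {p: round(v, 3) for p, v in pcts.items()})
```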

  7. Estimation Risk 1 – Asymptotic Theory • At its maximum, all partial derivatives of the log-likelihood function with respect to the parameters are zero • The matrix of 2nd partials there is negative definite • The matrix of the negatives of the expected values of the 2nd partials is called the "information matrix" • It is usually evaluated by plugging the maximizing values into the formulas for the 2nd partials • The matrix inverse of this is the covariance matrix of the parameters • The distribution of the parameters is asymptotically multivariate normal with this covariance

  8. Estimation Risk 1 – Asymptotic Theory • See Loss Models p. 63 for discussion • Simulation proceeds by simulating parameters from the normal, then simulating losses from those parameters (a sketch follows below) • See Loss Models p. 613 for how to simulate multivariate normals
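
A minimal sketch of that two-stage simulation, assuming a Pareto severity with hypothetical point estimates and covariance matrix standing in for the MLE output and inverted information matrix:

```python
# Two-stage simulation: parameters from the asymptotic multivariate normal,
# then Pareto (Lomax) losses from the sampled parameters. The point estimates
# and covariance matrix are hypothetical stand-ins for MLE output and the
# inverted information matrix.
import numpy as np

rng = np.random.default_rng(seed=1)

mle = np.array([1.8, 25_000.0])             # (alpha, theta) estimates (hypothetical)
cov = np.array([[0.09,   900.0],            # inverse information matrix
                [900.0, 4.0e7]])            # (hypothetical values)

n_sims, n_claims = 10_000, 50

# Multivariate normal draws via the Cholesky factor of the covariance matrix
chol = np.linalg.cholesky(cov)
params = mle + rng.standard_normal((n_sims, 2)) @ chol.T
params = np.maximum(params, [0.05, 1.0])    # guard against non-positive draws

# Pareto losses by inversion of F(x) = 1 - (theta/(x+theta))^alpha
alpha, theta = params[:, :1], params[:, 1:2]
u = rng.uniform(size=(n_sims, n_claims))
losses = theta * ((1 - u) ** (-1 / alpha) - 1)

agg = losses.sum(axis=1)                    # aggregate loss per simulated scenario
print(f"mean {agg.mean():,.0f}   CV {agg.std() / agg.mean():.3f}")
```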

  9. Information Matrix – Pareto Example
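
The exhibit on this slide is not reproduced in the transcript. For reference, assuming the two-parameter (Lomax) Pareto parameterization of Loss Models, F(x) = 1 - (θ/(x+θ))^α, the expected information matrix per observation, and hence the approximate parameter covariance for a sample of size n, is:

```latex
I(\alpha,\theta) =
\begin{pmatrix}
\dfrac{1}{\alpha^{2}} & -\dfrac{1}{\theta(\alpha+1)} \\[2ex]
-\dfrac{1}{\theta(\alpha+1)} & \dfrac{\alpha}{\theta^{2}(\alpha+2)}
\end{pmatrix},
\qquad
\operatorname{Cov}(\hat\alpha,\hat\theta) \approx \tfrac{1}{n}\, I(\alpha,\theta)^{-1}.
```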

  10. Estimation Risk 2 – Bayesian Updating • Bayes Theorem: • f(x|y)f(y) = f(x,y) = f(y|x)f(x) • f(x|y) = f(y|x)f(x)/f(y) • f(x|y) ∝ f(y|x)f(x) • Diffuse priors • f(x) ∝ 1 • f(x) ∝ 1/x

  11. Estimation Risk 2 – Bayesian Updating: Frequency Example • Poisson distribution with diffuse prior f(λ) ∝ 1/λ • f(k|λ) = e^(-λ)λ^k/k! • f(λ|k) ∝ f(k|λ)/λ ∝ e^(-λ)λ^(k-1) • That is a gamma distribution with parameters k, 1 • That's the posterior; the predictive distribution of the next observation is the Poisson mixed by that gamma, which is negative binomial • After n years of observation, the predictive distribution is negative binomial with the same mean as the sample and variance = mean·(1 + 1/n) • The parameter distribution for λ is gamma with mean equal to the sample mean and variance = mean/n • Can simulate from the negative binomial, or from the gamma and then the Poisson (see the sketch below)
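
A sketch of the two equivalent simulation routes, with an invented five-year claim-count history; it checks that both reproduce the predictive mean and the variance = mean·(1 + 1/n):

```python
# Two routes to the predictive distribution: negative binomial directly, or
# gamma posterior for lambda followed by a Poisson draw. Claim counts invented.
import numpy as np

rng = np.random.default_rng(seed=7)

observed = [12, 9, 15, 11, 13]             # hypothetical claim counts, one per year
n_years, total = len(observed), sum(observed)
m = 200_000

# Route 1: negative binomial predictive (r = total claims, p = n/(n+1))
nb = rng.negative_binomial(total, n_years / (n_years + 1), size=m)

# Route 2: gamma posterior for lambda (shape = total, scale = 1/n), then Poisson
lam = rng.gamma(total, 1.0 / n_years, size=m)
gp = rng.poisson(lam)

mean = total / n_years
print("target mean", mean, "target variance", mean * (1 + 1 / n_years))
print("NB route    ", nb.mean(), nb.var())
print("gamma+Pois  ", gp.mean(), gp.var())
```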

  12. Estimation Risk 3 – Bayes + Information • Tests developed by Rodney Kreps • The likelihood function is proportional to f(data|parameters) • Assume the prior for the parameters is proportional to 1 • Then the likelihood function is proportional to f(parameters|data), so the scaled likelihood can be read as the posterior distribution of the parameters

  13. Likelihood Function for Small-Sample Pareto Fit, Scaled to Max = 1, Log Scale

  14. Testing Assumptions of the Parameter Distribution • Use the information matrix to get the covariance matrix of the parameters • This is the quadratic term of the expansion of the log-likelihood function around its maximum • Compare bivariate normal and bivariate lognormal parameter distributions to the scaled likelihood function (a sketch of the comparison follows)
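
A sketch of the comparison for the normal case, assuming the Loss Models (Lomax) Pareto and an invented six-loss sample; it evaluates the scaled likelihood and the quadratic (bivariate normal) approximation built from the observed information matrix at a few parameter points:

```python
# Scaled likelihood of a small-sample Pareto fit vs. its normal approximation.
# The six losses and the starting values are invented for illustration.
import numpy as np
from scipy.optimize import minimize

x = np.array([1200., 3500., 8000., 15000., 42000., 110000.])
n = len(x)

def negll(p):
    a, t = p
    if a <= 0 or t <= 0:
        return np.inf
    # Lomax Pareto density: f(x) = a * t^a / (x + t)^(a + 1)
    return -np.sum(np.log(a) + a * np.log(t) - (a + 1) * np.log(x + t))

fit = minimize(negll, x0=[1.5, 20_000.], method="Nelder-Mead",
               options={"maxiter": 5000})
a_hat, t_hat = fit.x

# Observed information: negatives of the 2nd partials at the maximum (slide 7)
s1, s2 = np.sum(1 / (x + t_hat)), np.sum(1 / (x + t_hat) ** 2)
info = np.array([[n / a_hat**2,    -n / t_hat + s1],
                 [-n / t_hat + s1,  n * a_hat / t_hat**2 - (a_hat + 1) * s2]])

# Compare the scaled likelihood (max = 1) with the quadratic (normal) approximation
for da, dt in [(0.2, 0.0), (-0.2, 0.0), (0.0, 0.3), (0.3, 0.5), (-0.3, -0.4)]:
    p = np.array([a_hat * (1 + da), t_hat * (1 + dt)])
    scaled_lik = np.exp(fit.fun - negll(p))           # exp(loglik - max loglik)
    d = p - fit.x
    normal_approx = np.exp(-0.5 * d @ info @ d)
    print(f"alpha={p[0]:.3f} theta={p[1]:9.0f}  "
          f"likelihood={scaled_lik:.4f}  normal={normal_approx:.4f}")
```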

  15. Probability Contours of Likelihood

  16. Normal Approximation

  17. Lognormal Approximation Contours

  18. 95th Percentile Comparison

  19. 33rd Percentile Comparison

  20. Quantifying Projection Risk • Regression confidence intervals around the fitted trend • For a 100(1-q)% interval, t(q/2; N-2) is the upper 100(q/2)% point from a Student's t distribution with N-2 degrees of freedom (a sketch follows below)
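
A sketch of a trend projection with a regression prediction interval, using an invented loss-ratio series and a 90% level; scipy.stats.t supplies the critical value with N-2 degrees of freedom:

```python
# Trend projection with a regression prediction interval.
# The loss-ratio series, years, and 90% level are hypothetical.
import numpy as np
from scipy.stats import t as student_t

years = np.arange(1996, 2004)                                     # hypothetical years
lr = np.array([0.62, 0.66, 0.64, 0.70, 0.68, 0.73, 0.71, 0.76])   # loss ratios

N = len(years)
x = years - years.mean()                            # centered predictor
slope = (x * lr).sum() / (x * x).sum()
intercept = lr.mean()
fitted = intercept + slope * x
s2 = ((lr - fitted) ** 2).sum() / (N - 2)           # residual variance, N-2 df

q = 0.10                                            # two-sided 90% interval
t_crit = student_t.ppf(1 - q / 2, df=N - 2)         # upper 100(q/2)% point

x0 = 2005 - years.mean()                            # project two years ahead
y0 = intercept + slope * x0
# Prediction interval: reflects both parameter and process uncertainty
half = t_crit * np.sqrt(s2 * (1 + 1 / N + x0**2 / (x * x).sum()))
print(f"projected loss ratio {y0:.3f}, 90% interval ({y0 - half:.3f}, {y0 + half:.3f})")
```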

  21. Projection Risk Example

  22. finis
