
UNCERTAINTY AROUND MODELED LOSS ESTIMATES




Presentation Transcript


  1. UNCERTAINTY AROUND MODELED LOSS ESTIMATES CAS Annual Meeting New Orleans, LA November 10, 2003 Jonathan Hayes, ACAS, MAAA

  2. Agenda • Models • Model Results • Confidence Bands • Data • Issues with Data • Issues with Inputs • Model Outputs • Company Approaches • Role of Judgment • Conclusions

  3. Florida Hurricane (Amounts in Millions USD) [chart]

  4. Florida Hurricane (Amounts in Millions USD) [chart]

  5. Modeled Event Loss: Sample Portfolio, Total Event [chart]

  6. Modeled Event Loss: By State Distribution [chart]

  7. Modeled Event Loss: By County Distribution, State S [chart]

  8. Agenda • Models • Model Results • Confidence Bands • Data • Issues with Data • Issues with Inputs • Model Outputs • Company Approaches • Role of Judgment • Conclusions

  9. Types of Uncertainty (in Frequency & Severity) • Uncertainty (not randomness) • Sampling Error • Only ~100 years of hurricane record • Specification Error • FCHLPM sample dataset (1996): 1-in-100 OEP of 31m, 38m, 40m & 57m with 4 models • Non-sampling Error • El Niño Southern Oscillation • Knowledge Uncertainty • Time dependence, cascading, aseismic shift, Poisson/negative binomial • Approximation Error • Res Re cat bond: 90% confidence interval, process risk only, of +/- 20%, per modeling firm. Source: Major, Op. Cit.

  10. Frequency-Severity Uncertainty: Frequency Uncertainty (Miller) • Frequency uncertainty • Historical set: 96 years, 207 hurricanes • Sample mean is 2.16 storms per year • What is the range for the true mean? • Bootstrap method • New 96-yr sample sets: each sample set is 96 draws, with replacement, from the original • Review results

  11. Frequency Bootstrapping • Run 500 resamplings and graph the resampled means relative to the theoretical t-distribution (a code sketch of this resampling follows). Source: Miller, Op. Cit.
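
For concreteness, a minimal sketch of the resampling on slides 10-11, assuming a stand-in 96-year history drawn from a Poisson at the historical mean (the deck does not reproduce the actual annual storm counts):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Historical record (Miller): 96 years, 207 hurricanes, mean 207/96 ~ 2.16.
# The actual annual counts aren't given here, so draw a stand-in history.
n_years, n_storms = 96, 207
history = rng.poisson(n_storms / n_years, size=n_years)

# Bootstrap: 500 new 96-year sample sets, each 96 draws with replacement
# from the (stand-in) original; examine the spread of the sample means.
n_resamples = 500
boot_means = np.array([
    rng.choice(history, size=n_years, replace=True).mean()
    for _ in range(n_resamples)
])

print("bootstrap SE of the mean:", boot_means.std(ddof=1))
# Slide 12 quotes 0.159 for the historical SE vs. 0.150 theoretical.
```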

  12. Frequency Uncertainty Stats • Standard error (SE) of the mean: • 0.159 historical SE (from the bootstrap) • 0.150 theoretical SE, assuming Poisson, i.e., (lambda/n)^0.5 = (2.16/96)^0.5 ≈ 0.150

  13. Hurricane Freq. Uncertainty: Back of the Envelope • Frequency uncertainty only • 96 years, 207 events, 3,100 coast miles • 200-mile hurricane damage diameter • 0.139 is the avg. annual # of storms to a site, i.e., (207/96) x (200/3,100) • SE = 0.038, assuming Poisson frequency • 90% CI is loss +/- 45% • i.e., (1.645 x 0.038) / 0.139 (a worked check follows)
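
The arithmetic on this slide can be checked directly; a short script reproducing the numbers:

```python
import math

# Back-of-the-envelope check of slide 13's figures.
years, events = 96, 207
coast_miles, damage_diameter = 3_100, 200

lam = (events / years) * (damage_diameter / coast_miles)  # storms/yr at a site
se = math.sqrt(lam / years)                               # Poisson SE of the mean
half_width = 1.645 * se / lam                             # relative 90% CI half-width

print(f"lambda = {lam:.3f}")             # ~0.139
print(f"SE     = {se:.3f}")              # ~0.038
print(f"90% CI = +/- {half_width:.0%}")  # ~ +/- 45%
```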

  14. Frequency-Severity Uncertainty: Severity Uncertainty (Miller) • Parametric bootstrap • Take the cat model severity for some portfolio • Fit the cat model severity to a parametric model • Perform X draws of Y severities, where X is the number of frequency resamplings and Y is the number of historical hurricanes in the set • Re-parameterize each new sampled severity set • Compound with frequency uncertainty • Review confidence bands (a sketch follows)
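
A sketch of the parametric bootstrap, assuming a lognormal severity fit; the deck does not name the fitted distribution, so all parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

mu, sigma = 17.0, 1.5   # assumed lognormal fit to the cat model severities
n_storms = 207          # Y: historical hurricanes in the set
n_resamples = 500       # X: matches the number of frequency resamplings

# Parametric bootstrap: draw Y severities from the fitted model, re-fit
# (re-parameterize) each draw, and look at the spread of a tail quantile.
q99 = []
for _ in range(n_resamples):
    logs = np.log(rng.lognormal(mu, sigma, size=n_storms))
    mu_hat, sigma_hat = logs.mean(), logs.std(ddof=1)
    q99.append(np.exp(mu_hat + 2.326 * sigma_hat))  # 99th pct of refit lognormal

lo, mid, hi = np.percentile(q99, [5, 50, 95])
print(f"99th-pctile severity: {mid:.3g} (90% band {lo:.3g} to {hi:.3g})")
# Compounding these re-fit severities with the frequency bootstrap above
# is what produces the OEP confidence bands on slide 15.
```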

  15. OEP Confidence Bands [chart] Source: Miller, Op. Cit.

  16. OEP Confidence Bands • At the 80-to-1,000-year return periods, the range settles at roughly 50% to 250% of the best-estimate OEP • Confidence bands grow exponentially at the more frequent OEP points because the expected loss goes to zero • Notes • Assumes a stationary climate • Severity parameterization may introduce error • Modelers' "secondary uncertainty" may overlap here, thus reducing the range • Modelers' severity distributions are based on more than just the historical data set

  17. Agenda • Models • Model Results • Confidence Bands • Data • Issues with Data • Issues with Inputs • Model Outputs • Company Approaches • Role of Judgment • Conclusions

  18. Data Collection/Inputs • Is this all the subject data? • All/coastal states • Inland Marine, Builders Risk, APD, Dwelling Fire • Manual policies • General level of detail • County/zip/street • Aggregated data • Is this all the needed policy detail? • Building location/billing location • Multi-location policies/bulk data • Statistical Record vs. policy systems • Coding of endorsements • Sublimits, wind exclusions, IM • Replacement cost vs. limit

  19. More Data Issues • Deductible issues • Inuring/facultative reinsurance • Extrapolations & defaults • Blanket policies • HPR • Excess policies

  20. Model Output • Data Imported/Not Imported • Geocoded/Not Geocoded • Version • Perils Run • Demand Surge • Storm Surge • Fire Following • Defaults • Construction Mappings • Secondary Characteristics • Secondary Uncertainty • Deductibles
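
This checklist amounts to run metadata worth recording alongside every model run. A hypothetical record, with every field name illustrative rather than any vendor's API:

```python
# Hypothetical run-metadata record for one cat model run.
model_run = {
    "model_version": "X.Y",        # placeholder version string
    "data_imported_pct": None,     # share of exposures the model accepted
    "geocoded_pct": None,          # share resolved to street/zip/county
    "perils_run": ["wind"],
    "demand_surge": True,          # demand surge, storm surge, and fire
    "storm_surge": False,          #   following are toggled per run
    "fire_following": False,
    "defaults_applied": ["construction_mapping", "secondary_characteristics"],
    "secondary_uncertainty": True,
    "deductibles_modeled": True,
}
```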

  21. Agenda • Models • Model Results • Confidence Bands • Data • Issues with Data • Issues with Inputs • Model Outputs • Company Approaches • Role of Judgment • Conclusions

  22. Company Approaches: Available Choices • Output from: • 2-5 Vendor Models • Detailed & Aggregate Models • ECRA Factors • Experience, Parameterized • Select (weighted) Average

  23. Company Approaches: Loss Costs • Arithmetic average • Subject to change • Significant u/w flexibility • Weighted average (a sketch follows) • Weights by region, peril, class et al. • Weights determined by: • Model review • Consultation with modeling firms • Historical event analysis • Judgment • Weight changes require formal sign-off
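
A minimal sketch of the weighted-average selection, with illustrative loss costs and weights for a single region/peril cell; slide 23 says the weights come from model review, consultation with modeling firms, historical event analysis, and judgment:

```python
# Hypothetical per-model loss costs and weights for one region/peril cell.
loss_costs = {"ModelA": 1.20, "ModelB": 0.95, "ModelC": 1.40}
weights = {"ModelA": 0.5, "ModelB": 0.3, "ModelC": 0.2}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights should sum to 1

blended = sum(loss_costs[m] * w for m, w in weights.items())
print(f"blended loss cost: {blended:.3f}")  # 0.5*1.20 + 0.3*0.95 + 0.2*1.40 = 1.165
```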

  24. Conclusions • Cat Model Distributions Vary • More than one point estimate useful • Point estimates may not be significantly different • Uncertainty not insignificant but not insurmountable • What about uncertainty before cat models? • Data Inputs Matter • Not mechanical process • Creating model inputs requires many decisions • User knowledge and expertise critical • Loss Cost Selection Methodology Matters • # Models used more influential than weights used • Judgment Unavoidable • Actuaries already well-versed in its use

  25. References • Bove, Mark C., et al., "Effect of El Niño on U.S. Landfalling Hurricanes, Revisited," Bulletin of the American Meteorological Society, June 1998. • Efron, Bradley, and Robert Tibshirani, An Introduction to the Bootstrap, New York: Chapman & Hall, 1993. • Major, John A., "Uncertainty in Catastrophe Models," Financing Risk and Reinsurance, International Risk Management Institute, Feb/Mar 1999. • Miller, David, "Uncertainty in Hurricane Risk Modeling and Implications for Securitization," CAS Forum, Spring 1999. • Moore, James F., "Tail Estimation and Catastrophe Security Pricing: Can We Tell What Target We Hit If We Are Shooting in the Dark?", Wharton Financial Institutions Center, 99-14.
