A method for projecting individual large claims
A Method for Projecting Individual Large Claims

Casualty Loss Reserving Seminar

11-12 September 2006

Atlanta

Karl Murphy and Andrew McLennan


Overview

  • Rationale for considering individual claims

  • Outline of methodology

  • Examples

  • Data Requirements

  • Assumptions

  • Whole account variability

  • Case Study

  • Conclusion


Rationale for Considering Individual Claims

  • The last few years have seen a significant change in what is required of actuaries in terms of understanding the variability around results

  • This is driven partly by a greater understanding among board members that things can go wrong, and partly by the increased use of DFA models

  • Much work done based on aggregate triangles, but very little on stochastic individual claims development

  • Weaknesses in methods for deriving consistent gross and net results


Traditional Netting Down Methods

  • How do you net down gross reserves?

  • Could assume reinsurance ultimate reserves = reinsurance current reserves

    • Prudent if deficiencies in reserves

    • Optimistic if redundancies

  • Analyse net data, and calculate net results from this

  • Disadvantages:

    • Retentions may change

      • look at data on consistent retention

      • lots of triangles! Ensuring consistency between gross and the various net results is difficult

    • Indexation of retention

      • need assumption of payment pattern

    • Aggregate deductibles

      • need assumption of ultimate position of individual claims

  • Another option: model excess claims above a threshold and calculate the average deficiency of excess claims, i.e. the IBNER on claims above the threshold; apply the average IBNER loading to open claims to get the ultimate
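The average-IBNER option above can be sketched in a few lines. This is a minimal illustration, not the presenters' implementation; the claim values and the two helper functions are invented for the example.

```python
# Sketch of the average-IBNER option: estimate the mean development of
# historic large claims above the threshold, then load open claims.
# All figures are illustrative.

def average_ibner_factor(settled):
    """Mean ratio of ultimate to incurred-at-threshold for settled large claims."""
    return sum(ult / inc for inc, ult in settled) / len(settled)

def load_open_claims(open_incurred, factor):
    """Apply the average IBNER loading to each open claim's incurred."""
    return [inc * factor for inc in open_incurred]

settled = [(300_000, 450_000), (250_000, 200_000), (400_000, 600_000)]
factor = average_ibner_factor(settled)           # (1.5 + 0.8 + 1.5) / 3
ultimates = load_open_claims([500_000, 350_000], factor)
```

Note this produces a single deterministic loading per open claim, which is exactly the weakness the next slide illustrates.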


Deterministic Netting Down Methods Tend to Understate the Effect of Reinsurance

  • Example: excess IBNER of £0.5m, two claims each with incurred of £250k, and a retention of £500k

  • Deterministic development factor of 2, so gross up the claims to an ultimate of £500k each

  • Calculate reinsurance recoveries: £500k - £500k = £0, so no reinsurance recoveries

  • Net reserves = gross reserves


Deterministic Netting Down Methods Tend to Understate the Effect of Reinsurance

  • Because of the one-sided nature of reinsurance, this will understate the reinsurance recoveries:

  • Above example:

    • one claim settles for 250k, one for 750k

      • same gross result

      • Net reserves = gross reserves – 250k

  • Need method that allows for distribution of ultimate individual claims to allow for reinsurance correctly
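The two-claim example can be put in code to show why the deterministic approach loses recoveries: reinsurance recovery is a one-sided (convex) function of the ultimate, so applying it to the expected ultimate understates the expected recovery. A minimal sketch, using the figures from the slide:

```python
# Deterministic netting down applies the development factor first and the
# retention second, losing the recovery that claim-level outcomes generate.

RETENTION = 500_000

def recovery(ultimate, retention=RETENTION):
    """Excess-of-loss recovery on a single claim."""
    return max(ultimate - retention, 0)

# Deterministic: both £250k claims grossed up by a factor of 2 to £500k each,
# so neither exceeds the retention -> no recoveries.
deterministic = sum(recovery(250_000 * 2) for _ in range(2))   # 0

# Outcome scenario from the slide: one claim settles at £250k, one at £750k.
# Same gross total (£1m), but now there is a £250k recovery.
scenario = recovery(250_000) + recovery(750_000)               # 250_000
```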


Traditional Variability Methods

  • Traditional Methods:

    • Methods based on log(incremental data), i.e. lognormal models

    • Mack’s model – based on cumulative data

    • Provide mean and variance of outcomes only

  • Bootstrapping

    • Provides a full predictive distribution – not just first two moments

    • Bootstrap any well specified underlying model

      • Over-dispersed Poisson (England & Verrall)

      • Mack’s model

    • Characteristics

      • Usually applied to aggregate triangles

      • Works well with stable triangles

      • However, large claims can influence volatility unduly

  • Bayesian Methods:

    • Like Bootstrapping, provides a full predictive distribution

    • Ability to incorporate expert judgement with informative priors


Traditional Variability Methods

  • No allowance made for the number of large claims in an origin period, and no allowance made for the status (i.e. open/closed)

  • No linkage between variability of gross and net of reinsurance reserves

  • No information about the distribution of individual claims – will have same problems of netting down gross results as deterministic methods


Outline of Methodology

  • Our methodology simulates large claims individually

  • Separately simulate known claims (for IBNER) and IBNR claims

  • Consider dependencies between IBNER and IBNR claims

  • For non-large claims, use an aggregate “capped” triangle

    • when an individual claim reaches the capping level, ignore any development in excess of the cap

    • index the capping threshold at an appropriate level

    • use a “traditional” stochastic method

    • consider dependency between the run-off of capped and excess claims
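Building the "capped" triangle from individual claim histories can be sketched as follows. This is an illustration of the capping idea only (assuming a fixed, unindexed cap for brevity); the claim histories and the £100k cap are invented.

```python
# Minimal sketch of a capped triangle row: each claim's cumulative incurred is
# capped at the threshold before aggregation, so development above the cap
# flows to the excess (large-claim) model instead. Figures are illustrative.

CAP = 100_000

def capped_row(claim_histories):
    """Aggregate per-claim cumulative incurred, capped at CAP, by development period."""
    n = max(len(h) for h in claim_histories)
    row = [0.0] * n
    for history in claim_histories:
        for d, cumulative in enumerate(history):
            row[d] += min(cumulative, CAP)
    return row

claims = [
    [40_000, 60_000, 65_000],     # stays below the cap
    [80_000, 150_000, 250_000],   # capped from period 2 onwards
]
result = capped_row(claims)        # [120000.0, 160000.0, 165000.0]
```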


Outline of Methodology: IBNER

  • Take latest incurred position and status of claim

  • Simulate the next incurred position and status of the claim based on the movement of a similar historic claim

  • Allows for re-openings, to the extent they are in the historic data

  • Projects individual claims from the point they become “large”

  • Claims are considered “similar” by:

    • Status of claim (open / closed)

    • Number of years since a claim became large (development period)

    • Size of claim – e.g. a claim with incurred of £10m will behave differently to a claim with incurred of £1m – claims are banded into layers
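One IBNER simulation step along these lines can be sketched as below. The transition table, the £1m band boundary, and the factor pairs are all invented for illustration; in practice the table would be populated from the full history of large claims.

```python
import random

# Sketch of one IBNER step: the next incurred position and status of an open
# claim are drawn from historic movements of "similar" claims, keyed by
# status, development period and size band. Data and bands are illustrative.

def size_band(incurred, boundary=1_000_000):
    return "upper" if incurred >= boundary else "lower"

# Historic movements: (status, dev period, band) -> observed
# (development factor, resulting status) pairs one period later.
transitions = {
    ("open", 2, "lower"): [(0.53, "closed"), (1.50, "open")],
}

def simulate_step(incurred, status, dev, rng):
    """Move one claim forward one development period."""
    key = (status, dev, size_band(incurred))
    factor, new_status = rng.choice(transitions[key])
    return incurred * factor, new_status

rng = random.Random(2006)
new_incurred, new_status = simulate_step(250_000, "open", 2, rng)
```

Because closed claims can carry non-trivial transitions in the historic data, re-openings are picked up automatically, as the slide notes.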


Outline of Methodology: IBNR

  • IBNR large claims can be either genuine IBNR, or claims previously not reported as large

  • Apply “standard” stochastic methods to numbers triangles

  • Alternatively, simulate based on an assumed frequency per unit of exposure

  • For severity, can sample from the (simulated) known large claims, or simulate from an appropriately parameterised distribution


Example Data


Claim D

  • Need to simulate into development period 3

  • Open status as at development period 2

  • Similar to claims B and C, with development factors of 0.53 and 1.5


Claim D: Simulations


Claim E

  • Closed status as at development period 2

  • Similar to claim A, with no development


Claim F

  • Open status as at development period 1

  • For development into year 2, can consider any of A to E

  • Consider also the status


Claim F Simulations to Year 2


Claim F Simulations to Year 3


IBNR Claims

  • Two sources of IBNR claims:

    • True IBNR claims

    • Known claims which are not yet large

  • Triangle of claims that ever become large

  • Calculate frequency of large claims in development period

  • Simulate number of large claims going forward

  • Simulate IBNR claim costs from historic claims that became large in that period
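The IBNR steps above can be sketched as one function: a Poisson draw for the number of future large claims at an estimated frequency per unit of exposure, with each cost sampled from historic claims that became large in the relevant period. The frequency, exposure, and severity values are invented for illustration, and a Poisson distribution is one reasonable choice rather than the only one.

```python
import math
import random

def simulate_ibnr(frequency_per_unit, exposure, historic_costs, rng):
    """Return simulated IBNR large-claim costs for one origin period."""
    mean = frequency_per_unit * exposure
    # Poisson draw by inversion of the CDF (fine for small means).
    u, k, p = rng.random(), 0, math.exp(-mean)
    cum = p
    while u > cum:
        k += 1
        p *= mean / k
        cum += p
    # Severity: sample from historic claims that became large in this period.
    return [rng.choice(historic_costs) for _ in range(k)]

rng = random.Random(42)
costs = simulate_ibnr(0.002, 1_000, [150_000, 320_000, 600_000], rng)
total_ibnr = sum(costs)
```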


IBNR

  • The data below show the claim number triangle and the frequency of claims


IBNR

  • Result for one simulation


Data Requirements

  • Individual large claim information:

    • Full incurred and payment history

    • Historic open status of claims

    • Claims that were ever large, not just currently large

  • Accident year exposure

  • Definition of “large” depends on:

    • Historic retentions

    • Number of claims above threshold

    • Consider having two thresholds – e.g. all claims above $100k, but then calculate excess above $200k – allows for claims developing just below the layer


Assumptions

  • Historic claims provide the full distribution of possible chain ladder factors for claims

  • Development by year is independent

  • No significant changes to case estimation procedures

    • Can allow for this by standardising the historic chain ladder factors, as is done in aggregate modelling

  • Historic reopening and settlement experience is representative of future

  • The method cannot be applied blindly – it is not a replacement for gross aggregate best-estimate modelling, but rather a tool to analyse variability around the aggregate modelling and to net down results


Variability of Whole Account

  • Simulate variability of small claims via “capped” triangle, using existing methods

  • Capped triangles preferred to triangles which totally exclude large claims

    • if claims are taken out once they become large, we see negative development

    • if history of claim is taken out, then triangles change from analysis to analysis

    • becomes difficult to allow for IBNR large claims

  • Add gross excess claims from individual simulations for total gross results, with appropriate dependency structure

  • Add net excess claims for total net results
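Assembling one whole-account simulation can be sketched as below. All figures are invented, a simple per-claim retention stands in for the full reinsurance programme, and the dependency between capped and excess run-off that the slide calls for is ignored here for brevity.

```python
# Sketch of combining one simulation: a capped (attritional) reserve plus the
# individually simulated large claims, gross and net. Illustrative only.

RETENTION = 500_000

def whole_account(capped_reserve, large_ultimates, large_incurred):
    """Return (gross reserve, net reserve) for one simulation."""
    large_reserve = sum(large_ultimates) - sum(large_incurred)
    recoveries = sum(max(u - RETENTION, 0) for u in large_ultimates)
    gross = capped_reserve + large_reserve
    return gross, gross - recoveries

gross, net = whole_account(
    capped_reserve=3_000_000,
    large_ultimates=[750_000, 250_000],
    large_incurred=[250_000, 250_000],
)
# gross = 3_500_000; net = 3_250_000
```

Because the same simulated claim ultimates feed both numbers, the gross and net results are consistent by construction, which is the point of the method.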


Case Study

  • UK auto account

  • 16 years of data

  • Individual claims > £100k

  • 2 layers used to simulate IBNER claims, 80% in lower layer, 20% in upper layer


IBNER

  • Distribution of one individual claim, current incurred £125k

  • Expected ultimate of £300k

  • 90% of the time, ultimate cost of claim doesn’t exceed £700k


IBNER

  • Occasionally the claim can grow very large, however


IBNER

  • Progression of one claim that has been large for 4 years, and is still open

  • Still significant variability in ultimate cost


Ultimate Loss Development Factors

  • Graph shows the ultimate LDF (ultimate / latest incurred) for a “big” and a “little” claim from the same point in development

  • The probability of observing a large LDF (>4) is 60% higher for the small claim than for the large claim

  • Average LDF: 1.1 for the small claim, 0.87 for the big claim


Distribution of Capped Reserve


Comparison with Mack Method


2003 Distribution

  • Higher proportion of large claims

  • One claim of £6m

  • Greater uncertainty than implied by aggregate projection


2004 and 2005 Distributions

  • Distributions from the individual claims method are slightly heavier-tailed than those from the aggregate method

  • This is caused by an increase in the proportion of large claims over time, which is not adequately allowed for in aggregate methods


Netting Down


Reinsurance Structures

  • Even simple portfolios can have reinsurance structures that are difficult to model

    • Aggregate Deductibles

    • Loss Occurring During vs Risk Attaching coverages

    • Partial Placements

    • Indexation Clauses

  • By having individual claims, can explicitly allow for any structure


Example: Aggregate Deductible

  • Graph shows percentile chart of the usage of a £2.25m aggregate deductible attaching to layer £400k XS £600k
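The deductible mechanics behind that chart can be sketched directly, since the claims are simulated individually. A minimal illustration (claim amounts invented; layer and deductible taken from the slide):

```python
# Sketch of a £2.25m aggregate deductible attaching to the £400k xs £600k
# layer: per-claim losses to the layer are summed, the deductible absorbs the
# first £2.25m, and only the excess is recovered. Claim amounts illustrative.

LAYER_ATTACH, LAYER_SIZE, AGG_DED = 600_000, 400_000, 2_250_000

def layer_loss(claim):
    """Loss to the £400k xs £600k layer from one gross claim."""
    return min(max(claim - LAYER_ATTACH, 0), LAYER_SIZE)

def net_of_aggregate(claims):
    """Return (deductible used, recoveries after the aggregate deductible)."""
    to_layer = sum(layer_loss(c) for c in claims)
    used = min(to_layer, AGG_DED)
    return used, to_layer - used

claims = [700_000, 1_200_000, 950_000, 2_000_000,
          800_000, 1_500_000, 1_100_000, 900_000]
used, recovered = net_of_aggregate(claims)
# used = 2_250_000; recovered = 300_000
```

Running this across every simulation gives the percentile chart of deductible usage that the slide describes.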


Conclusion

  • Existing stochastic methods work well for homogeneous data, but some lines of business are dominated by a small number of large claims

  • Treating these claims separately allows existing methods to be used on the attritional claims, with our individual claims simulation technique allowing for variability in these large claims explicitly

  • This allows net and gross results to be calculated on a consistent basis, allowing explicitly for any reinsurance structures in place

