Experience Rating with Separate Frequency & Severity Distributions

Alice Underwood

Seminar on Reinsurance

June 7-8, 2004

XOL Experience Rating
  • Bring historical losses and exposures to an ultimate, “as-if” basis, considering
    • Loss Trend
    • Loss Development (IBNYR and IBNER)
    • Exposure Trend (e.g. Premium Onlevel)
  • Create a loss model based on these
  • Project results for the XOL layer
    • Behavior of losses in the layer
    • Effect of treaty features
Let’s Assume Perfect Data
  • Historical exposure information
    • E.g. premiums, payroll, bed count
    • Method for bringing to current levels
  • Credible historical claim detail
    • Individual losses xs appropriate threshold, each claim at consecutive evaluation dates
    • Accident dates (assume L.O. treaty)
    • Separate claim components (Pd, O/S, ALAE)
One Plan of Attack

1. Bring historical exposures to future AY level

2. Trend each loss to common future accident date

3. Apply policy limits, layer attachment & retention

4. Aggregate the trended layer losses by accident year

5. Create development triangle of trended AY layer loss

6. Select LDFs and ultimate AY trended loss

7. Calculate as-if AY burning cost ratios to exposure

8. Fit a distribution to the historical burning cost ratios; apply this to the projected exposure base

Voila! An aggregate loss model for the layer.

(assuming sufficient credibility, no need for cat load, etc)
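The eight steps above can be sketched end-to-end. Everything here is illustrative: the loss and exposure figures, the selected LDFs, and the choice of a lognormal for the burning cost ratios are stand-in assumptions, not values from the presentation.

```python
import numpy as np

# Hypothetical as-if history: trended layer losses aggregated by accident
# year (step 4) and on-level exposures (step 1); all figures illustrative.
ay_trended_layer_loss = np.array([1.2e6, 0.8e6, 1.5e6, 0.9e6, 1.1e6])
ay_onlevel_exposure = np.array([10.0e6, 11.0e6, 12.0e6, 12.5e6, 13.0e6])
ay_ldf = np.array([1.00, 1.05, 1.12, 1.25, 1.60])   # step 6: selected LDFs

ay_ultimate_loss = ay_trended_layer_loss * ay_ldf          # step 6
burning_cost = ay_ultimate_loss / ay_onlevel_exposure      # step 7

# Step 8: fit a lognormal to the burning cost ratios by method of moments,
# then apply it to the projected exposure base.
m = burning_cost.mean()
v = burning_cost.var(ddof=1)
sigma2 = np.log(1.0 + v / m**2)
mu = np.log(m) - sigma2 / 2.0

projected_exposure = 14.0e6
expected_layer_loss = projected_exposure * np.exp(mu + sigma2 / 2.0)
print(expected_layer_loss)
```

The fitted distribution (not just its mean) is what makes the model usable for quota-share-style aggregate features later.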

Aggregate Loss Distribution Function

Illustration 1: Aggregate losses to 1 xs 1 layer

Applying Treaty Features?
  • What would be the effect of, say, a limited number of reinstatements?
  • Could go back and construct as-if treaty losses on this basis…but…
    • Results will be less credible
    • Fitted model a less accurate reflection of exact treaty structure
    • A hassle to re-do this every time the underwriter wants to try a different structure

Loss-sensitive treaty features are a strong reason to model frequency & severity separately.
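The point can be made concrete in a small simulation. With frequency and severity modeled separately, a reinstatement cap is applied inside the simulation rather than by re-doing the experience analysis. The Poisson mean and Pareto shape below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 20_000

# Hypothetical frequency/severity assumptions, for illustration only.
freq_mean = 2.0            # expected number of claims hitting the layer
pareto_q = 1.5             # single-parameter Pareto shape
attach, limit = 1.0, 1.0   # a 1 xs 1 layer

# Aggregate annual layer loss with unlimited reinstatements.
agg = np.zeros(n_sims)
counts = rng.poisson(freq_mean, n_sims)
for k in range(n_sims):
    # Single-parameter Pareto ground-up losses above the attachment point.
    ground_up = attach * (1.0 - rng.random(counts[k])) ** (-1.0 / pareto_q)
    agg[k] = np.clip(ground_up - attach, 0.0, limit).sum()

# One reinstatement caps annual recoveries at (1 + 1) * limit; trying a
# different structure is one line, not a re-run of the experience analysis.
agg_one_reinst = np.minimum(agg, 2.0 * limit)

print(agg.mean(), agg_one_reinst.mean())
```

This ignores reinstatement premiums; the sketch only shows how a loss-sensitive cap enters the model.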

Typical Frequency & Severity Models
  • Frequency: Poisson, Negative Binomial
  • Severity: Single Parameter Pareto, Lognormal
  • Convolve to get the aggregate distribution
    • Numerically (e.g. Panjer recursion, Heckman-Meyers)
    • By simulation (e.g. Monte Carlo)
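As a sketch of the numerical route, here is Panjer's recursion for a compound Poisson distribution with a discretized severity. The Poisson mean and the severity pmf are illustrative, not taken from the slides.

```python
import numpy as np

lam = 3.0                            # Poisson mean (illustrative)
f = np.array([0.0, 0.5, 0.3, 0.2])   # severity pmf on 0, 1, 2, 3 units

# Panjer recursion for Poisson frequency (a = 0, b = lam):
#   g_0 = exp(-lam * (1 - f_0));  g_s = (lam / s) * sum_j j * f_j * g_{s-j}
n = 30                               # compute the aggregate pmf up to 30 units
g = np.zeros(n + 1)
g[0] = np.exp(-lam * (1.0 - f[0]))
for s in range(1, n + 1):
    js = np.arange(1, min(s, len(f) - 1) + 1)
    g[s] = (lam / s) * np.sum(js * f[js] * g[s - js])

print(g.sum())                        # close to 1 once n is large enough
print((np.arange(n + 1) * g).sum())   # close to lam * E[severity]
```

The recursion gives the exact aggregate distribution of the discretized model, with no simulation error; Monte Carlo trades that exactness for flexibility with treaty features.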
Aggregate Loss Distribution Function

Illustration 1: Aggregate losses to 1 xs 1 layer

Aggregate Loss Distribution Function

Illustration 1a: Layer 1 xs 1, with only one reinstatement

Aggregate Loss Distribution Function

Illustration 2: Layer 2 xs 2, with unlimited reinstatements

Aggregate Loss Distribution Function

Illustration 2a: Layer 2 xs 2, with only one reinstatement

OK, but HOW do you get these separate distributions? (1)
  • Could apply “method of moments” with single-parameter distributions:
    • Expected frequency to the layer (via experience analysis of claim counts)
      • gives the Poisson λ parameter
    • Expected severity to the layer = E(loss) / E(freq)
      • gives the Pareto q parameter
  • A viable method, but…
    • Ignores higher moments (e.g. Poisson / Pareto may not be the best model)
    • Deductibles/SIRs and policy limits should be applied to ultimate individual claim values
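The moment matching above can be sketched as follows. The layer, the expected layer count, and the expected layer loss are illustrative assumptions; the Pareto shape is backed out numerically because the limited expected severity has no closed-form inverse in the shape parameter.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative outputs of the experience analysis, not slide values.
expected_layer_count = 3.2
expected_layer_loss = 1.6e6
attach, limit = 1.0e6, 1.0e6   # e.g. a 1M xs 1M layer

# Poisson: one parameter, matched directly to the expected layer frequency.
lam = expected_layer_count

def layer_severity(q):
    """E[min(max(X - attach, 0), limit)] for a single-parameter Pareto(q)
    with survival function (attach / x)^q above the attachment."""
    top = attach + limit
    if abs(q - 1.0) < 1e-9:
        return attach * np.log(top / attach)
    return attach**q * (top**(1.0 - q) - attach**(1.0 - q)) / (1.0 - q)

# Pareto: match the expected severity to the layer = E(loss) / E(freq).
target = expected_layer_loss / expected_layer_count
q_hat = brentq(lambda q: layer_severity(q) - target, 0.1, 10.0)
print(lam, q_hat)
```

The bracketing interval (0.1, 10.0) is an assumption that the implied shape is moderate; a production version would widen or validate it.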
OK, but HOW do you get these separate distributions? (2)

Could try fitting distributions (via least squares, maximum likelihood, “eyeball method”, etc) directly to the actual trended claim data:

  • Historical frequency of trended losses
  • Historical severity of trended losses

Again do-able, but…

Ignores development!

(Which may be OK for short tail lines)
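The direct-fitting route can be sketched with scipy's maximum-likelihood fitter. Synthetic lognormal draws stand in for actual trended claim data, and the parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for trended (but undeveloped) claim severities.
claims = rng.lognormal(mean=12.0, sigma=1.2, size=400)

# Maximum-likelihood lognormal fit; floc=0 pins the location at zero so
# the usual two-parameter (mu, sigma) lognormal is fitted.
shape, loc, scale = stats.lognorm.fit(claims, floc=0)
mu_hat, sigma_hat = np.log(scale), shape
print(mu_hat, sigma_hat)   # near the generating 12.0 and 1.2
```

Least squares on the empirical CDF or a judgmental "eyeball" fit would slot into the same place; the caveat in the slide stands either way, since nothing here reflects development.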

OK, but HOW do you get these separate distributions? (3)
  • Separately reflect two kinds of development
    • Development on reported claims (IBNER)
    • Claims that have not yet been reported or have not yet exceeded the reporting threshold (IBNYR)
  • Will describe a method based on René Schnieper’s paper “Separating True IBNR and IBNER Claims” (ASTIN Bulletin Vol. 21, No. 1)
  • Provides an alternative / cross-check to aggregate methods for longer-tailed lines
IBNER Method: Data & Notation
  • Individual (layer) claims with development
    • For n accident and development years
    • Claim data can be paid or incurred
    • Claim data must be trended to future levels
    • Denote triangle of aggregated losses by Xi,j
  • Exposure Ei for each accident year
IBNER Method: Definitions
  • Ni,j : sum of losses of accident year i which have their first nonzero value in development year j
    • IBNYR, or "newly entering losses"
  • Di,j : defined by Di,j = 0 for j = 1, and Di,j = Xi,j-1 - Xi,j + Ni,j for j ≥ 2
    • IBNER, or "downward development on known losses"
  • Note Xi,j - Xi,j-1 = Ni,j - Di,j for j ≥ 2
  • Set of observed variables up to calendar year k: Hk = { Ni,j , Di,j | i + j ≤ k + 1 }
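The decomposition of a triangle into newly entering losses and downward development can be sketched on toy data. One accident year with three individual layer claims is enough to exercise the definitions; the figures are made up.

```python
import numpy as np

# Toy individual claims of one accident year at consecutive evaluations;
# 0 means the claim has not yet exceeded the reporting threshold.
claims = np.array([
    [100.0, 90.0, 85.0],   # reported in dev year 1, develops downward
    [  0.0, 80.0, 70.0],   # enters in dev year 2 (IBNYR)
    [  0.0,  0.0, 50.0],   # enters in dev year 3 (IBNYR)
])

X = claims.sum(axis=0)                 # aggregate losses X_{i,j} by dev year
first = (claims != 0).argmax(axis=1)   # 0-based index of first nonzero value

n_dev = claims.shape[1]
N = np.zeros(n_dev)                    # newly entering losses N_{i,j}
D = np.zeros(n_dev)                    # downward development D_{i,j}
for j in range(n_dev):                 # Python index j is dev year j + 1
    N[j] = claims[first == j, j].sum()
    if j > 0:                          # D_{i,1} = 0 by definition
        D[j] = X[j - 1] - X[j] + N[j]

print(N, D, X)
```

The check X_{i,j} - X_{i,j-1} = N_{i,j} - D_{i,j} holds by construction, which is a useful sanity test when building the N and D triangles from real claim detail.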

IBNER Method: Assumptions
  • General assumption: random variables pertaining to different accident years are stochastically independent
  • Assumptions for the N triangle (new losses)
    • (N1) E[Ni,j | Hi+j-2] = Ei λj for dev yr j = 1,…,n
    • (N2) Var[Ni,j | Hi+j-2] = Ei σj² for dev yr j = 1,…,n
  • Unbiased estimators (assuming Ei > 0): λ̂j = Σi Ni,j / Σi Ei over the observed cells of dev yr j

Recall Ei denotes exposure for accident year i

IBNER Method: Assumptions (cont’d)
  • Assumptions for the D triangle (downward development)
    • (D1) E[Di,j | Hi+j-2] = Xi,j-1 δj for dev yr j = 1,…,n
    • (D2) Var[Di,j | Hi+j-2] = Xi,j-1 τj² for dev yr j = 1,…,n
  • Unbiased estimators (assuming Xi,j-1 > 0): δ̂j = Σi Di,j / Σi Xi,j-1 over the observed cells of dev yr j

Recall Xi,j denotes the triangle of aggregate losses

The factors δj are called the IBNER factors.
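Schnieper's estimators, entry rates from the N triangle against exposure and IBNER factors from the D triangle against the prior aggregate, can be sketched on toy triangles. All figures are made up; np.nan marks future cells.

```python
import numpy as np

nan = np.nan
E = np.array([10.0, 11.0, 12.0])              # exposures E_i
N = np.array([[50.0, 20.0, 5.0],              # newly entering losses N_{i,j}
              [60.0, 25.0, nan],
              [55.0,  nan, nan]])
D = np.array([[0.0, 10.0, 3.0],               # downward development D_{i,j}
              [0.0, 12.0, nan],
              [0.0,  nan, nan]])
X = np.array([[50.0, 60.0, 62.0],             # aggregate losses X_{i,j}
              [60.0, 73.0, nan],
              [55.0,  nan, nan]])

n = len(E)
lam_hat = np.zeros(n)     # estimated entry rates lambda_j
delta_hat = np.zeros(n)   # estimated IBNER factors delta_j
for j in range(n):
    obs = ~np.isnan(N[:, j])
    lam_hat[j] = N[obs, j].sum() / E[obs].sum()
    if j > 0:
        obs_d = ~np.isnan(D[:, j])
        delta_hat[j] = D[obs_d, j].sum() / X[obs_d, j - 1].sum()

print(lam_hat, delta_hat)
```

The toy X triangle satisfies X_{i,j} = X_{i,j-1} + N_{i,j} - D_{i,j}, so the three triangles are mutually consistent.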

IBNER Method: Projection
  • Under these assumptions, the future losses can be estimated recursively: X̂i,j = X̂i,j-1 (1 - δ̂j) + Ei λ̂j
  • The estimators for the ultimate of accident year i (without tail) are then X̂i,n
  • Comment: this method can also be applied to the frequency triangle
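The recursive projection can be sketched with illustrative inputs: each future cell is the prior aggregate less expected downward development plus expected newly entering losses.

```python
import numpy as np

# Illustrative estimated parameters (e.g. from the N and D triangles).
lam_hat = np.array([5.0, 45.0 / 21.0, 0.5])   # entry rates lambda_j
delta_hat = np.array([0.0, 0.2, 0.05])        # IBNER factors delta_j
E = np.array([10.0, 11.0, 12.0])              # exposures E_i

X = np.array([[50.0, 60.0, 62.0],             # observed aggregate triangle;
              [60.0, 73.0, np.nan],           # np.nan marks future cells
              [55.0, np.nan, np.nan]])

n = len(E)
for i in range(n):
    for j in range(n - i, n):   # future development years of accident year i
        # X_{i,j} = X_{i,j-1} * (1 - delta_j) + E_i * lambda_j
        X[i, j] = X[i, j - 1] * (1.0 - delta_hat[j]) + E[i] * lam_hat[j]

ultimates = X[:, -1]            # per-AY ultimate, before any tail factor
print(ultimates)
```

Replacing the loss triangles with claim-count triangles applies the same recursion to frequency, as the comment above notes.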
Single Loss Development
  • If we assume that a similar development pattern applies to each claim, the formula E[Xi,j | Hi+j-2] = Xi,j-1 (1 - δj) + Ei λj implies that we can develop the single open claims to ultimate using the factors (1 - δj)
  • A severity distribution can then be fitted to these developed single claims
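Developing single open claims with the remaining (1 - δj) factors can be sketched as follows; the factors and claim values are illustrative, and no tail factor is applied.

```python
import numpy as np

delta = np.array([0.0, 0.2, 0.05, 0.02])   # IBNER factors delta_j, j = 1..4

# (current trended value, current development year) for some open claims.
open_claims = [(120.0, 2), (300.0, 1), (80.0, 3)]

def develop_to_ultimate(value, dev_year):
    """Apply the remaining (1 - delta_j) factors beyond dev_year;
    delta[j - 1] holds delta_j, so delta[dev_year:] is delta_{d+1}..delta_n."""
    return value * np.prod(1.0 - delta[dev_year:])

ultimates = [develop_to_ultimate(v, d) for v, d in open_claims]
print(ultimates)
```

The severity distribution is then fitted to these developed values together with the closed claims at their final amounts.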
SLD: Remarks
  • The assumptions are fairly strong
    • Comparable development of all claims
      • Might differ by claim size, cause of loss...
    • Linear dependence
      • But linearity assumptions of CL rarely verified either
  • Method best applied to very stable triangles
  • Still need to include a tail factor
  • Even “perfect” real world data likely to be biased
    • Large losses in excess of some threshold
    • For green (immature) AYs, fewer claims initially exceed the threshold
    • Those claims might tend to be more severe
    • Effect of policy limits
Application of SLD
  • Fundamental assumption: all claims have a similar development
    • Unreasonably strong assumption?
    • Partial remedy: divide the losses into classes
  • One approach: group losses by size
    • Sort claims by size at last report and divide into groups
    • Must confront credibility issues: each size group contains only part of the claim population
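Grouping losses by size at last report can be sketched with a simple bucketing; the breakpoints and synthetic sizes are illustrative, and the printed counts are the credibility check the slide warns about.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for claim sizes at last report.
size_at_last_report = rng.lognormal(mean=13.0, sigma=1.0, size=200)
breakpoints = [2.5e5, 1.0e6]   # small / medium / large (illustrative)

# np.digitize assigns each claim to a size group 0, 1, or 2.
group = np.digitize(size_at_last_report, breakpoints)
for g in range(len(breakpoints) + 1):
    members = (group == g).sum()
    print(g, members)   # is each group populated enough to fit factors?
```

Per-group IBNER factors would then be estimated separately within each bucket.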
Finally…
  • Severity distribution can be fitted to collection of trended ULTIMATE single claims
  • Frequency distribution based on claim count development
  • Use simulation to generate projected treaty results
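The final assembly can be sketched in one simulation. The negative binomial frequency, lognormal severity, layer, and reinstatement cap are all illustrative assumptions standing in for the fitted distributions and the actual treaty.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims = 20_000
attach, limit = 1.0e6, 1.0e6
max_annual = 2.0 * limit    # original limit plus one reinstatement

# Frequency from claim-count development; severity from the distribution
# fitted to developed single claims (both illustrative here).
counts = rng.negative_binomial(n=4, p=0.5, size=n_sims)
results = np.zeros(n_sims)
for k in range(n_sims):
    sev = rng.lognormal(mean=13.5, sigma=1.0, size=counts[k])
    layer = np.clip(sev - attach, 0.0, limit).sum()
    results[k] = min(layer, max_annual)   # treaty feature applied directly

print(results.mean(), np.quantile(results, 0.99))
```

Because frequency and severity are simulated separately, swapping the layer, the reinstatement count, or other loss-sensitive features needs only a change inside the loop, not a new experience study.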