  1. DISCUSSION OF MINIMUM DISTANCE ESTIMATION OF LOSS DISTRIBUTIONS BY STUART A. KLUGMAN AND A. RAHULJI PARSA. CLIVE L. KEATINGE

  2. KLUGMAN AND PARSA’S BASIC ASSERTION Maximum likelihood estimation is asymptotically optimal in the sense that parameter estimates have minimum asymptotic variance, BUT Minimum distance (weighted least squares) estimation can be tailored to reflect the goals of the analysis (e.g., obtaining a close fit in the tail by skewing the weights toward the tail).
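
A minimal sketch of how such a weighted least squares fit can be set up, assuming a two-parameter Pareto with F(x) = 1 - (theta/(x + theta))^alpha and grouped data; the boundaries, empirical CDF values, weights, and starting values below are illustrative, not Klugman and Parsa's:

```python
import numpy as np
from scipy.optimize import minimize

# Two-parameter Pareto CDF: F(x) = 1 - (theta / (x + theta))**alpha
def pareto_cdf(x, alpha, theta):
    return 1.0 - (theta / (x + theta)) ** alpha

# Minimum distance objective: weighted sum of squared differences between
# the empirical CDF at the group boundaries and the model CDF.
def md_objective(params, boundaries, emp_cdf, weights):
    alpha, theta = params
    if alpha <= 0 or theta <= 0:
        return np.inf
    return np.sum(weights * (emp_cdf - pareto_cdf(boundaries, alpha, theta)) ** 2)

# Illustrative grouped data (boundaries and empirical CDF values are made up).
boundaries = np.array([1000., 5000., 10000., 25000., 50000., 100000.])
emp_cdf = np.array([0.40, 0.70, 0.82, 0.92, 0.96, 0.985])

# Uniform weights give an ordinary least squares fit; weights that grow with
# the boundary skew the fit toward the tail, as Klugman and Parsa describe.
for weights in (np.ones(6), boundaries / boundaries.max()):
    fit = minimize(md_objective, x0=[1.5, 5000.0],
                   args=(boundaries, emp_cdf, weights), method="Nelder-Mead")
    print(fit.x)
```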

  3. KEATINGE’S RESPONSE Minimum distance estimation is a clumsy remedy for a model that is not flexible enough. ALTERNATIVES: • Fit a parametric distribution only to the upper section of the data and use the empirical distribution below that. • Use the semiparametric mixed exponential distribution.

  4. Minimum distance estimation can proceed using the cumulative distribution function or the limited expected value function. • With the cumulative distribution function, if one selects weights to minimize the asymptotic variance, one ends up with parameter estimates very close to or identical to the grouped maximum likelihood estimates.
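
For comparison, a sketch of the grouped maximum likelihood fit that the optimally weighted minimum distance estimates approach; the interval edges and claim counts are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def pareto_cdf(x, alpha, theta):
    return 1.0 - (theta / (x + theta)) ** alpha

# Grouped (interval-censored) negative log-likelihood: each claim contributes
# the log of the probability of the interval it fell in.
def grouped_nll(params, edges, counts):
    alpha, theta = params
    if alpha <= 0 or theta <= 0:
        return np.inf
    probs = np.diff(pareto_cdf(edges, alpha, theta))
    return np.inf if np.any(probs <= 0) else -np.sum(counts * np.log(probs))

# Illustrative interval edges (0 to infinity) and claim counts.
edges = np.array([0., 1000., 5000., 10000., 25000., 50000., np.inf])
counts = np.array([400, 300, 120, 100, 40, 40])

mle = minimize(grouped_nll, x0=[1.5, 5000.0],
               args=(edges, counts), method="Nelder-Mead")
print(mle.x)
```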

  5. Example 1: 6656 general liability claims fit to a Pareto distribution. [Slide shows the maximum likelihood asymptotic covariance matrix and the minimum distance (limited expected values) asymptotic covariance matrix, the latter under Klugman and Parsa’s weights.]
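
A sketch of where such asymptotic covariance matrices come from: at the maximum likelihood estimate, the inverse of the observed information (the Hessian of the negative log-likelihood) estimates the covariance of the parameter estimates. The Hessian is computed here by central differences, and all data values are again illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Grouped Pareto negative log-likelihood (same setup as the sketch above).
def pareto_cdf(x, alpha, theta):
    return 1.0 - (theta / (x + theta)) ** alpha

def grouped_nll(params, edges, counts):
    alpha, theta = params
    if alpha <= 0 or theta <= 0:
        return np.inf
    probs = np.diff(pareto_cdf(edges, alpha, theta))
    return np.inf if np.any(probs <= 0) else -np.sum(counts * np.log(probs))

# Central-difference Hessian with steps proportional to each parameter.
def hessian(f, x, h=1e-4):
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            di = np.zeros(n); di[i] = h * x[i]
            dj = np.zeros(n); dj[j] = h * x[j]
            H[i, j] = (f(x + di + dj) - f(x + di - dj)
                       - f(x - di + dj) + f(x - di - dj)) / (4.0 * di[i] * dj[j])
    return H

edges = np.array([0., 1000., 5000., 10000., 25000., 50000., np.inf])
counts = np.array([400, 300, 120, 100, 40, 40])
nll = lambda p: grouped_nll(p, edges, counts)
mle = minimize(nll, x0=[1.5, 5000.0], method="Nelder-Mead").x

# Inverting the observed information at the MLE gives the estimated
# asymptotic covariance matrix of (alpha, theta).
print(np.linalg.inv(hessian(nll, mle)))
```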

  6. Table 1 shows the weights that were used compared with optimized weights. • Note that the overweighting in the tail results in substantially higher asymptotic variances.
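
A quick Monte Carlo sketch of this variance penalty: estimating the same Pareto repeatedly by maximum likelihood and by minimum distance with weights skewed heavily toward the tail, then comparing the sampling variances of the alpha estimates. The sample size, evaluation points, and weights are arbitrary choices for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def pareto_cdf(x, alpha, theta):
    return 1.0 - (theta / (x + theta)) ** alpha

# Exact (ungrouped) Pareto negative log-likelihood, maximized numerically.
def mle_fit(sample):
    def nll(p):
        alpha, theta = p
        if alpha <= 0 or theta <= 0:
            return np.inf
        return -np.sum(np.log(alpha / theta)
                       + (alpha + 1.0) * np.log(theta / (sample + theta)))
    return minimize(nll, x0=[2.0, 1000.0], method="Nelder-Mead").x

# Minimum distance fit of the CDF with weights skewed toward the tail.
def md_fit(sample, points, weights):
    emp = np.array([np.mean(sample <= c) for c in points])
    def obj(p):
        if min(p) <= 0:
            return np.inf
        return np.sum(weights * (emp - pareto_cdf(points, *p)) ** 2)
    return minimize(obj, x0=[2.0, 1000.0], method="Nelder-Mead").x

rng = np.random.default_rng(1)
points = np.array([500., 1000., 2500., 5000., 10000.])
tail_weights = points ** 2               # heavily overweight the tail
alpha_mle, alpha_md = [], []
for _ in range(200):
    # Simulate Pareto(alpha=2, theta=1000) by inversion: X = theta*(U**(-1/alpha) - 1)
    u = rng.uniform(size=500)
    sample = 1000.0 * (u ** -0.5 - 1.0)
    alpha_mle.append(mle_fit(sample)[0])
    alpha_md.append(md_fit(sample, points, tail_weights)[0])

# The tail-weighted minimum distance estimates should show the larger variance.
print(np.var(alpha_mle), np.var(alpha_md))
```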

  7. Table 2 shows fits to Pareto and mixed exponential distributions. • Most modelers would probably prefer the minimum distance Pareto to the maximum likelihood Pareto, because it provides a much closer fit in the tail at a modest cost in terms of the fit low in the distribution. • This would be an implicit acknowledgement that the assumption that the data comes from a Pareto distribution is not appropriate. Otherwise, one would prefer the estimator with the smaller asymptotic variances. • The mixed exponential distributions fit very well over the entire range of the data. The minimum distance estimator provides a closer fit because it uses empirical limited expected values directly, whereas the maximum likelihood estimator uses the number of losses that fall in each interval.
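
A sketch of fitting a mixed exponential by minimum distance on limited expected values, using E[X ∧ d] = Σ w_i m_i (1 - e^(-d/m_i)). This fixes two components for simplicity; the semiparametric version lets the data determine the number of components and the means. The limits and empirical values below are made up:

```python
import numpy as np
from scipy.optimize import minimize

# Limited expected value of a mixed exponential with weights w and means m:
# E[X ^ d] = sum_i w_i * m_i * (1 - exp(-d / m_i)).
def mixed_exp_lev(d, w, m):
    return np.sum(w[:, None] * m[:, None]
                  * (1.0 - np.exp(-d[None, :] / m[:, None])), axis=0)

# Minimum distance fit of a two-component mixed exponential to empirical
# limited expected values (unweighted here; weights could be added as above).
def objective(params, limits, emp_lev):
    w1, m1, m2 = params
    if not 0.0 < w1 < 1.0 or m1 <= 0.0 or m2 <= 0.0:
        return np.inf
    return np.sum((emp_lev - mixed_exp_lev(limits, np.array([w1, 1.0 - w1]),
                                           np.array([m1, m2]))) ** 2)

# Illustrative limits and empirical limited expected values (not the paper's).
limits = np.array([1000., 5000., 25000., 100000.])
emp_lev = np.array([750., 2600., 6800., 10500.])

fit = minimize(objective, x0=[0.7, 2000., 30000.],
               args=(limits, emp_lev), method="Nelder-Mead")
print(fit.x)
```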

  8. Example 2: 463 medical malpractice claim report lags truncated from above and fit to a Burr distribution. [Slide shows the maximum likelihood asymptotic covariance matrix and the minimum distance (cumulative distribution function) asymptotic covariance matrix, the latter under Klugman and Parsa’s weights.]
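
With data truncated from above at a point T, the observable distribution function is F(x)/F(T), so a CDF-based minimum distance fit of the Burr can be sketched as follows. The truncation point matches the lag 168 in the example, but the evaluation points, empirical values, and starting parameters are placeholders:

```python
import numpy as np
from scipy.optimize import minimize

# Burr CDF, F(x) = 1 - (1 + (x/theta)**gamma)**(-alpha) (Loss Models form).
def burr_cdf(x, alpha, gamma, theta):
    return 1.0 - (1.0 + (x / theta) ** gamma) ** (-alpha)

# With truncation from above at T, the observable CDF is F(x) / F(T).
def truncated_md_objective(params, T, points, emp_cdf, weights):
    alpha, gamma, theta = params
    if min(alpha, gamma, theta) <= 0:
        return np.inf
    model = burr_cdf(points, *params) / burr_cdf(T, *params)
    return np.sum(weights * (emp_cdf - model) ** 2)

# Placeholder report-lag data; only the truncation point matches the example.
T = 168.0
points = np.array([12., 24., 48., 96., 144.])
emp_cdf = np.array([0.15, 0.40, 0.70, 0.90, 0.97])
weights = np.ones_like(points)

fit = minimize(truncated_md_objective, x0=[2.0, 1.5, 40.0],
               args=(T, points, emp_cdf, weights), method="Nelder-Mead")
print(fit.x)
```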

  9. Table 3 shows the weights that were used compared with optimized weights.

  10. Table 4 shows fits to Burr and Weibull distributions. • If one believes a Burr distribution is appropriate, one should prefer the maximum likelihood or minimum chi-square estimates, since they have smaller asymptotic variances. • None of the distributions provides a particularly good fit very low in the distribution. If one does not believe that a Burr distribution is appropriate over the entire range of the data, one could fit that distribution only above a certain point and use an empirical distribution below that. • The mixed exponential distribution always has a mode of zero, and since the data clearly shows a mode significantly greater than zero, the mixed exponential would not fit well over the entire range of the data. However, one could fit the mixed exponential to the section of the distribution to the right of the mode.
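
The first alternative on this slide amounts to splicing: an empirical CDF below a chosen point and a conditional parametric tail above it. A minimal sketch, with an exponential tail standing in for whatever parametric distribution was actually fitted, and all values illustrative:

```python
import numpy as np

# Spliced model: empirical CDF below the splice point s, conditional
# parametric tail above it; F_par is any fitted parametric CDF.
def spliced_cdf(x, sample, s, F_par):
    x = np.asarray(x, dtype=float)
    emp_at_s = np.mean(sample <= s)            # empirical mass below the splice
    emp = np.searchsorted(np.sort(sample), x, side="right") / len(sample)
    tail = emp_at_s + (1.0 - emp_at_s) * (F_par(x) - F_par(s)) / (1.0 - F_par(s))
    return np.where(x <= s, emp, tail)

# Illustrative use: an exponential tail (mean 5000) above a splice point of 10000.
rng = np.random.default_rng(0)
sample = rng.exponential(4000.0, size=500)
F_exp = lambda x: 1.0 - np.exp(-np.asarray(x, dtype=float) / 5000.0)
print(spliced_cdf([2000.0, 20000.0], sample, 10000.0, F_exp))
```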

  11. 95% confidence intervals for the number of claims that will be reported after Lag 168: • Burr MLE 72 +/- 57 • Burr MinDist 59 +/- 61 • Burr MinChiSq 102 +/- 89 • Weibull MLE 4 +/- 3 • Confidence intervals depend on the assumption that a particular distribution is appropriate over the entire range of the distribution, including the portion for which we do not have data. • Extrapolation is likely to lead to a very unreliable estimate.
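
A standard way to produce intervals like these from a fitted distribution and its asymptotic covariance matrix is the delta method applied to g(params) = n(1 - F(T))/F(T), the expected number of claims beyond the truncation point. A sketch; the parameter values and covariance matrix below are placeholders, not the ones behind the slide:

```python
import numpy as np

def burr_cdf(x, alpha, gamma, theta):
    return 1.0 - (1.0 + (x / theta) ** gamma) ** (-alpha)

# Point estimate: expected number of claims reported after the truncation
# point T, given n claims observed before it.
def unreported(params, n, T):
    F_T = burr_cdf(T, *params)
    return n * (1.0 - F_T) / F_T

# Delta method: variance of g(params) is approximately grad' Cov grad.
def delta_method_ci(params, cov, n, T, z=1.96, h=1e-5):
    params = np.asarray(params, dtype=float)
    grad = np.empty_like(params)
    for i in range(len(params)):        # central-difference gradient of g
        step = np.zeros_like(params)
        step[i] = h * params[i]
        grad[i] = (unreported(params + step, n, T)
                   - unreported(params - step, n, T)) / (2.0 * step[i])
    se = np.sqrt(grad @ cov @ grad)
    est = unreported(params, n, T)
    return est - z * se, est + z * se

# Placeholder Burr parameters and covariance matrix (not the paper's values).
params = [2.0, 1.5, 40.0]
cov = np.diag([0.04, 0.02, 9.0])
print(delta_method_ci(params, cov, n=463, T=168.0))
```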

  12. The main purported advantage of minimum distance estimation is that, through adjustment of the weights, it can provide a closer fit to the parts of the distribution that are of the most interest. • However, this leads to an estimator with a larger variance than the maximum likelihood estimator, and if one believes that the model one is using is appropriate, one should prefer the estimator with the smaller variance.

  13. Minimum distance estimation would be useful in situations where maximum likelihood estimation is not feasible, such as when limited expected values are the only data available. • However, in general, I see little reason to prefer it to maximum likelihood estimation.

  14. KEATINGE’S RESPONSE Minimum distance estimation is a clumsy remedy for a model that is not flexible enough. ALTERNATIVES: • Fit a parametric distribution only to the upper section of the data and use an empirical distribution below that. • Use the semiparametric mixed exponential distribution.
