
Using imprecise estimates for weights Alan Jessop Durham Business School


Presentation Transcript


  1. Using imprecise estimates for weights Alan Jessop Durham Business School

  2. Motivation In a weighted value function model, weights are inferred from judgements. Judgements are imprecise, so weight estimates must be imprecise too. Probabilistic weight estimates enable the usual inferential methods, such as confidence intervals, to be used to decide whether weights or alternatives may justifiably be differentiated.

  3. Testing & sensitivity
  - Single parameter: results easily shown and understood, but partial.
  - Multi-parameter: examine pairs (or more) to get a feel for interaction.
  - Global (e.g. Monte Carlo): comprehensive, but results may be hard to show simply.
  Using some familiar methods, uncertainty can be inferred from judgements and the effects of global imprecision can be shown: an analytical approach rather than a simulation.

  4. Sources of imprecision
  - Statements made by the judge are inexact. This is imprecise articulation: variance σa².
  - The same judgements may be made in different circumstances (using different methods or at different times, for instance). This is circumstantial imprecision: variance σc².

  5. Sources of imprecision
  - No redundancy: e.g. a simple rating.
  - Redundancy: e.g. ask at different times, or use a reciprocal matrix.
  Are the two sources related?

  6. 3-point estimate → σa²
  Beta distribution:
  μ = aM + (1−a)(L+H)/2
  σa = b(H−L)
  From previous studies for PERT analyses, generalise as
  a = 1.800×10⁻¹² c^5.751
  b = 1.066 − 0.00853c
  But because Σi wi = 1 the variances will be inconsistent. Solution: fit a Dirichlet distribution.
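As a sketch, the Beta-based 3-point moments above can be computed as follows. Treating c as the judge's stated confidence level in percent is an assumption (the slide does not say what c is); the regression coefficients are those given on the slide:

```python
def three_point(L, M, H, c=90.0):
    """Mean and articulation s.d. of a 3-point (low, mode, high) judgement.

    Uses the slide's PERT-derived regressions, with c *assumed* to be a
    stated confidence level in percent:
      a = 1.800e-12 * c**5.751,  b = 1.066 - 0.00853*c
    """
    a = 1.800e-12 * c ** 5.751
    b = 1.066 - 0.00853 * c
    mu = a * M + (1 - a) * (L + H) / 2.0   # mu = aM + (1-a)(L+H)/2
    sigma_a = b * (H - L)                  # sigma_a = b(H-L)
    return mu, sigma_a
```

For example, `three_point(0.1, 0.2, 0.4)` returns the implied mean and σa for a weight judged to lie between 0.1 and 0.4 with mode 0.2.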

  7. Dirichlet
  f(W) = k Πi wi^(ui−1) ; 0 < wi < 1, Σi wi = 1, ui > 0 ∀i
  where k = Γ(Σi ui) / Πi Γ(ui)
  which has Beta marginal distributions with
  mean μi = ui / v
  variance σi² = ui(v−ui) / v²(v+1) = μi(1−μi) / (v+1)
  and covariance σij = −ui uj / v²(v+1) = −μi μj / (v+1) ; i≠j
  where v = Σi ui
  Relative values of the parameters ui determine the means. Absolute values determine the variances via their sum, v.
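The Dirichlet moment formulas above translate directly into code; a minimal sketch using NumPy (the parameter vector u is supplied by the caller):

```python
import numpy as np

def dirichlet_moments(u):
    """Means and covariance matrix of Dirichlet(u) weights (slide 7 formulas).

    mean_i = u_i / v,  var_i = mu_i(1 - mu_i)/(v + 1),
    cov_ij = -mu_i mu_j / (v + 1),  where v = sum_i u_i.
    """
    u = np.asarray(u, dtype=float)
    v = u.sum()
    mu = u / v
    cov = -np.outer(mu, mu) / (v + 1.0)                  # off-diagonal covariances
    np.fill_diagonal(cov, mu * (1.0 - mu) / (v + 1.0))   # marginal variances
    return mu, cov
```

As the slide says, the relative sizes of u set the means while the sum v sets the precision; e.g. `dirichlet_moments([2, 3, 5])` and `dirichlet_moments([20, 30, 50])` have the same means but different variances.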

  8. Usually the Dirichlet is used by specifying parameters (e.g. in Monte Carlo simulation):
  set parameters → Dirichlet → weight values
  But it can also be used to ensure compatibility:
  judgements give marginal characteristics, mean ei and variance si², from which consistent variances σi² follow.
  Put μi = ei. Then get the least-squares best fit to minimise
  S = Σi (σi² − si²)²
  ∂S/∂v = 0 → v+1 = Σi [ei(1−ei)]² / Σi ei(1−ei)si²
  so σi² = ei(1−ei) / (v+1)
  The sums run over the available estimates si², so missing values can be tolerated.
  (NOTE: only the mean values and v need be known.)
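A sketch of this least-squares fit: given marginal means ei and judged variances si² (NaN where missing), recover (v+1) and the Dirichlet-consistent variances. Function and variable names are illustrative:

```python
import numpy as np

def fit_dirichlet_v(e, s2):
    """Least-squares (v+1) from marginal means e_i and judged variances s_i^2.

    Minimises S = sum_i (sigma_i^2 - s_i^2)^2 with sigma_i^2 = e_i(1-e_i)/(v+1),
    giving v+1 = sum[e_i(1-e_i)]^2 / sum e_i(1-e_i) s_i^2.
    NaN entries in s2 are treated as missing and simply left out of the sums.
    Returns (v+1) and the consistent variances e_i(1-e_i)/(v+1).
    """
    e = np.asarray(e, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    g = e * (1.0 - e)                    # e_i(1 - e_i)
    ok = ~np.isnan(s2)                   # tolerate missing s_i^2
    v_plus_1 = (g[ok] ** 2).sum() / (g[ok] * s2[ok]).sum()
    return v_plus_1, g / v_plus_1
```

Note that the returned variances are defined for every weight, including those whose si² was missing, since only the means and v are needed.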

  9. Experiment: FT full-time MBA ranking
  Seven variables were used in the experiment.

  10. Experiment: 3-point estimate → σa²
  Each 3-point judgement is scaled, and from the scaled values a mean and standard deviation are found; missing values are tolerated, and Dirichlet-consistent variances are obtained.

  11. Summarising discrimination between programmes
  y = Σi wi xi
  var(y) = Σi Σj σij xi xj = [ Σi wi(1−wi) xi² − 2 Σi Σj>i wi wj xi xj ] / (v+1)
  For two alternatives, replace the x values with the differences (xa − xb).
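The score mean and variance above can be computed without building the full covariance matrix; a sketch (to compare two alternatives, pass x = xa − xb):

```python
import numpy as np

def score_stats(w, x, v_plus_1):
    """Mean and variance of y = sum_i w_i x_i under the fitted Dirichlet.

    var(y) = [sum_i w_i(1-w_i) x_i^2 - 2 sum_{i<j} w_i w_j x_i x_j] / (v+1)
    """
    w = np.asarray(w, dtype=float)
    x = np.asarray(x, dtype=float)
    wx = w * x
    # sum_{i<j} w_i w_j x_i x_j = ((sum wx)^2 - sum (wx)^2) / 2
    cross = (wx.sum() ** 2 - (wx ** 2).sum()) / 2.0
    var_y = ((w * (1.0 - w) * x ** 2).sum() - 2.0 * cross) / v_plus_1
    return wx.sum(), var_y
```

This agrees with the quadratic form x'Σx built from the Dirichlet variances and covariances of slide 7.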

  12. Northern Europe: UK, France, Belgium, Netherlands, Ireland

  13. Summarising discrimination between programmes
  Summaries: (v+1) = 351.77
  Proportion of all pairwise differences significantly different at p = 0.1: discrimination = 81%
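A sketch of how such a discrimination figure could be computed from mean weights, alternative scores, and the fitted (v+1), using the score-variance formula of slide 11. The two-sided threshold z* = 1.645 for p = 0.1 is an assumption, consistent with z* = 0.67 at p = 50% on slide 27:

```python
import numpy as np
from itertools import combinations

def discrimination(W, X, v_plus_1, z_star=1.645):
    """Proportion of alternative pairs whose score difference is significant.

    W: mean weights, shape (m,); X: alternative scores, shape (n_alts, m).
    z_star = 1.645 is assumed to be the two-sided threshold at p = 0.1.
    """
    W = np.asarray(W, dtype=float)
    X = np.asarray(X, dtype=float)
    pairs = list(combinations(range(len(X)), 2))
    sig = 0
    for a, b in pairs:
        d = X[a] - X[b]                  # replace x values with differences
        wd = W * d
        cross = (wd.sum() ** 2 - (wd ** 2).sum()) / 2.0
        var = ((W * (1.0 - W) * d ** 2).sum() - 2.0 * cross) / v_plus_1
        z = abs(wd.sum()) / np.sqrt(var) if var > 0 else 0.0
        sig += z > z_star
    return sig / len(pairs)
```

Larger (v+1) means tighter weight distributions, so the same score differences become significant and discrimination rises.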

  14. Lines show indistinguishable pairs; p = 10%

  15. Lines show indistinguishable pairs; p = 1%

  16. Lines show indistinguishable pairs; p = 25%

  17. Weights. discrimination = 71%

  18. Weights; p = 10%

  19. Different circumstances → σc². Reciprocal matrix.

  20. a2 and c2. Reciprocal matrix. Give each judgement in a reciprocal matrix as a 3-point evaluation. Then treat each column as a separate 3-point evaluation and find Dirichlet compatible a2 as before. For each weight the mean of these variances is the value of a2 as in aggregating expert judgements (Clemen & Winkler, 2007). The mean of the column means is the weight and the variance of the means is c2.

  21. Results from 10 MBA students: standard deviations σ = [σa² + σc²]^½

  22. Are the two sources of uncertainty related? Consistently σc > σa. Mean r = 0.70; taken together, r = 0.33.

  23. Student G is representative

  24. Scores. (v+1) = 8.54. discrimination = 30%

  25. Lines show indistinguishable pairs; p = 10%

  26. Lines show indistinguishable pairs; p = 50%. Decide that 1 & 5 can be distinguished.

  27. A possible form of interaction
  Assume that new discrimination is due to increased precision rather than to a difference in scores: statistical significance rather than material significance. So change the precision by changing (v+1) and leave the weights unaltered. z is directly proportional to √(v+1). In this case:
  (v+1) = 8.54 → z1,5 = 0.55
  p = 50% → z* = 0.67
  (v+1)new = (z*/z1,5)² × (v+1) = (0.67 / 0.55)² × 8.54 = 12.67
  and so ...
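The what-if arithmetic above in one small helper (since z grows with √(v+1) when the weights are held fixed):

```python
def required_v_plus_1(v_plus_1, z, z_star):
    """New (v+1) needed so a pair with statistic z reaches threshold z*.

    With weights unaltered, z is proportional to sqrt(v+1), so
    (v+1)_new = (z*/z)^2 * (v+1).
    """
    return (z_star / z) ** 2 * v_plus_1
```

For the slide's numbers, `required_v_plus_1(8.54, 0.55, 0.67)` gives about 12.67.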

  28. Weights. discrimination = 14%

  29. Group aggregation Do this for all ten assessors.

  30. Scores. (v+1) = 7.18. discrimination = 16%

  31. Lines show indistinguishable pairs; p = 10%

  32. Lines show indistinguishable pairs; p = 50%

  33. Weights. discrimination = 0%

  34. Tentative conclusions
  - Even though results are imprecise, there may still be enough discrimination to be useful, as in forming a short list. May give an ordering of clusters.
  - Makes explicit what may justifiably be discriminated.
  - Choosing confidence levels and significance values is, as ever, sensible but arbitrary. Explore different values.
  - Once a short list is identified, further analysis is needed, probably using some form of what-if interaction to see the effect of greater precision.
  - Variation between circumstances seems to be consistently greater than self-assessed uncertainty. Does this matter? Do we want to justify one decision now, or to address circumstantial (temporal?) variation?

  35. end
