
Expert Judgment



Presentation Transcript


  1. Expert Judgment EMSE 280: Techniques of Risk Analysis and Management

  2. Expert Judgment • Why Expert Judgment? • Risk analysis deals with events with low intrinsic rates of occurrence → not much data is available. • Data sources not originally constructed with a risk analysis in mind can be in a form inadequate for the analysis. • Data sources can be fraught with problems, e.g. poor entry, bad data definitions, dynamic data definitions. • Cost, time, or technical considerations.

  3. Expert Judgment • Issues in the Use of Expert Judgment • Selection of Experts • Wide enough to encompass all facets of scientific thought on the topic • Qualifications/criteria need to be specified • Pitfalls in Elicitation – Biases • Mindsets – unstated assumptions that the expert uses • Structural Biases – arise from the level of detail or choice of background scales for quantification • Motivational Biases – the expert has a stake in the study outcome • Cognitive Biases • Overconfidence – manifested in uncertainty estimation • Anchoring – the expert subconsciously bases his judgment on some previously given estimate • Availability – events that are easy (difficult) to recall are likely to be overestimated (underestimated)

  4. Expert Judgment • Avoiding Pitfalls • Be aware • Carefully design the elicitation process • Perform a dry-run elicitation with a group of experts not participating in the study • Strive for uniformity in elicitation sessions • Never perform an elicitation session without the presence of a qualified analyst • Guarantee anonymity of experts • Combination of Expert Judgments • Technical and political issues

  5. Basic Expert Judgment for Priors • Method of Moments • The expert provides the most likely value of the parameter θ, say θ*, and a range [θL, θU] • For a distribution f(θ) we equate E[θ] = (θL + 4θ* + θU)/6 and Var[θ] = [(θU − θL)/6]² • and solve for the distribution parameters • Method of Range • The expert provides the maximum possible range for θ, say [θL, θU] • For a distribution f(θ) with CDF F(θ) we equate F(θU) = .95 and F(θL) = .05 • and solve for the distribution parameters
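The two fitting schemes above can be sketched in a few lines. Assuming the prior is taken to be normal (an illustrative choice; the slide leaves f(θ) generic), moment matching and quantile matching both reduce to closed-form expressions:

```python
import math

# Sketch of the two elicitation methods, assuming a normal prior f(theta).
# The constant 1.6449 is the standard normal 95th percentile, Phi^{-1}(0.95).

def moments_fit(q_low, q_star, q_up):
    """Method of moments: match E and Var from (low, most likely, high)."""
    mean = (q_low + 4.0 * q_star + q_up) / 6.0
    var = ((q_up - q_low) / 6.0) ** 2
    return mean, math.sqrt(var)              # normal parameters (mu, sigma)

def range_fit(q_low, q_up):
    """Method of range: set F(q_up) = .95 and F(q_low) = .05 for a normal CDF."""
    z95 = 1.6449
    mu = (q_low + q_up) / 2.0
    sigma = (q_up - q_low) / (2.0 * z95)
    return mu, sigma

mu1, sd1 = moments_fit(2.0, 5.0, 14.0)       # expert: range [2, 14], mode 5
mu2, sd2 = range_fit(2.0, 14.0)
```

For other distributional families the same two (or one) moment/quantile equations are set up and solved numerically for the family's parameters.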

  6. Combining Expert Judgment: Paired Comparison • Description • Paired comparison is the general name for a technique used to combine several experts’ beliefs about the relative probabilities (or rates of occurrence) of certain events. • Setup • e – number of experts • a1, …, an – objects to be compared • v1, …, vn – true values of the objects • v1,r, …, vn,r – internal values of the objects for expert r • Experts are asked a series (a total of n choose 2) of paired comparisons ai vs. aj • ai >> aj by expert r means r thinks P(ai) > P(aj)

  7. Combining Expert Judgment: Paired Comparison • Statistical Tests • Significance of Expert e’s Preferences (Circular Triad Test) H0: expert e answered randomly; Ha: expert e did not answer randomly. A circular triad is a set of preferences ai >> aj, aj >> ak, ak >> ai. Define Nr(i) as the number of times that expert r prefers ai to another object, giving expert data Nr(1), …, Nr(n), r = 1, …, e, and c(r) as the number of circular triads in expert r’s preferences. David (1963) gives c(r) = n(n−1)(2n−1)/12 − ½ Σi Nr(i)².

  8. Combining Expert Judgment: Paired Comparison • Significance of Expert e’s Preferences (Circular Triads) • Kendall (1962) • tabulated Pr{c(r) ≥ c*} under H0 (that the expert answered in a random fashion) for small n, up to n = 10 • developed the statistic c′(e) = (8/(n−4))[¼ C(n,3) − c(e) + ½] + df for comparing n items; when n > 7, this statistic has (approximately) a chi-squared distribution with df = n(n−1)(n−2)/(n−4)² • perform a standard one-tailed hypothesis test: if H0 cannot be rejected for an expert at the 5% level of significance, i.e. Pr{χ² ≥ c′(e)} > .05, that expert is dropped
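The circular-triad count above can be computed two ways, which is a useful cross-check: directly, by testing every triple for a 3-cycle, or via the score formula c = n(n−1)(2n−1)/12 − ½ Σ N(i)² (the standard closed form, assumed here since the slide's formula image was lost). A minimal sketch with an assumed complete, tie-free preference matrix:

```python
from itertools import combinations

# P[i][j] = 1 means the expert stated a_i >> a_j (complete, no ties).

def circular_triads(P):
    """Count circular triads by direct enumeration of all triples."""
    n = len(P)
    count = 0
    for i, j, k in combinations(range(n), 3):
        # a triad is circular iff the three preferences form a 3-cycle
        if (P[i][j] and P[j][k] and P[k][i]) or (P[j][i] and P[k][j] and P[i][k]):
            count += 1
    return count

def circular_triads_scores(P):
    """Same count via the score formula, N(i) = times a_i is preferred."""
    n = len(P)
    scores = [sum(row) for row in P]
    return n * (n - 1) * (2 * n - 1) / 12 - sum(s * s for s in scores) / 2

cyclic = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]   # a1>>a2, a2>>a3, a3>>a1
print(circular_triads(cyclic))               # 1
```

Either count feeds directly into the chi-squared test on the slide above.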

  9. Combining Expert Judgment: Paired Comparison • Statistical Tests • Agreement of Experts: coefficient of agreement Test H0: experts’ agreement is due to chance; Ha: experts’ agreement is not due to chance. Define N(i,j) as the number of times ai >> aj. The coefficient of agreement u = 2 Σi≠j C(N(i,j), 2) / [C(n,2) C(e,2)] − 1 attains a maximum of 1 for complete agreement.

  10. Combining Expert Judgment: Paired Comparison • Agreement of Experts: coefficient of agreement • Kendall (1962) tabulated distributions of u for small values of n and e under H0; these are used to test hypotheses concerning u • For large values of n and e, Kendall (1962) developed a statistic which under H0 has (approximately) a chi-squared distribution • We want to reject H0 at the 5% level, and we fail to do so if Pr{χ² ≥ u′} > .05
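The coefficient of agreement is straightforward to compute from the pairwise count matrix. This sketch uses the standard definition u = 2 Σ C(N(i,j), 2) / [C(n,2) C(e,2)] − 1 (an assumption here, since the slide's formula image was lost); the count matrix below is illustrative:

```python
from math import comb

# N[i][j] = number of experts preferring a_i to a_j, out of e experts.

def agreement_u(N, e):
    """Kendall's coefficient of agreement; 1 means complete agreement."""
    n = len(N)
    total = sum(comb(N[i][j], 2) for i in range(n) for j in range(n) if i != j)
    return 2.0 * total / (comb(n, 2) * comb(e, 2)) - 1.0

# three experts, three objects, complete agreement a1 >> a2 >> a3
N = [[0, 3, 3], [0, 0, 3], [0, 0, 0]]
print(agreement_u(N, 3))   # 1.0
```

When the experts split as evenly as possible on every pair, u drops to its minimum of −1/(e−1), so the statistic is not symmetric around zero.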

  11. Combining Expert Judgment: Paired Comparison • Statistical Tests • Agreement of Experts: coefficient of concordance Define R(i,r) as the rank of ai obtained from expert r’s responses, and R(i) = Σr R(i,r). With S = Σi [R(i) − e(n+1)/2]², the coefficient of concordance W = 12S / [e²(n³ − n)] again attains its maximum of 1 for complete agreement.

  12. Combining Expert Judgment: Paired Comparison • Agreement of Experts: coefficient of concordance • Tables of critical values for the distribution of S under H0 were developed by Siegel (1956) for 3 ≤ n ≤ 7 and 3 ≤ e ≤ 20 • For n > 7, Siegel (1956) provides the statistic e(n−1)W, which is approximately chi-squared with df = n − 1 • Again we reject at the 5% level of significance
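The concordance computation above can be sketched directly from a rank table. The rank matrix layout (experts in rows, objects in columns) is an assumption for illustration:

```python
# Kendall's coefficient of concordance W from the rank table R[r][i] =
# rank that expert r assigns to object a_i (1 = most preferred).

def concordance_w(R):
    e, n = len(R), len(R[0])
    totals = [sum(R[r][i] for r in range(e)) for i in range(n)]   # R(i)
    mean = e * (n + 1) / 2.0                                      # expected rank total
    S = sum((t - mean) ** 2 for t in totals)
    return 12.0 * S / (e ** 2 * (n ** 3 - n))

ranks = [[1, 2, 3], [1, 2, 3], [1, 2, 3]]   # three experts, full agreement
print(concordance_w(ranks))                  # 1.0
```

For n > 7 the test statistic is then simply `e * (n - 1) * concordance_w(ranks)`, compared against a chi-squared distribution with n − 1 degrees of freedom.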

  13. Paired Comparison: Thurstone Model • Assumptions vi,r ~ N(μi, σi²) with μi = vi and σi² = σ² [Figure: three normal densities with means μ1, μ2, μ3; the overlap illustrates the probability that object 3 beats object 2, i.e. that a3 is preferred to a2.] Think of this as tournament play.

  14. Paired Comparison: Thurstone Model • Assumptions vi,r ~ N(μi, σi²) with μi = vi and σi² = σ² • Implications vi,r − vj,r ~ N(μi − μj, 2σ²) = N(μi,j, 2σ²) (experts assumed independent) → ai is preferred to aj by expert r with probability Φ[(μi − μj)/(√2 σ)] • If pi,j is the fraction of experts that preferred ai to aj, then pi,j ≈ Φ[(μi − μj)/(√2 σ)]

  15. Paired Comparison: Thurstone Model • Establishing Equations We can establish the set of equations μi − μj = √2 σ Φ⁻¹(pi,j) and choose a scaling constant so that √2 σ = 1. As this is an over-specified system, we solve for the μi by least squares, which gives μi = (1/n) Σj Φ⁻¹(pi,j) with Σi μi = 0. Mosteller (1951) provides a goodness-of-fit test based on an approximate chi-squared value.
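The least-squares solution above is a one-liner per object: average the probit-transformed preference proportions across the comparison partners. A minimal sketch, assuming the scaling √2 σ = 1 and an illustrative proportion matrix:

```python
from statistics import NormalDist

# Thurstone Case V sketch: with sqrt(2)*sigma scaled to 1, the classical
# least-squares solution is mu_i = (1/n) * sum_j Phi^{-1}(p_ij), where
# p_ij is the fraction of experts preferring a_i to a_j.

def thurstone_mu(P):
    nd = NormalDist()
    n = len(P)
    mu = []
    for i in range(n):
        zs = [nd.inv_cdf(P[i][j]) for j in range(n) if j != i]
        mu.append(sum(zs) / n)   # dividing by n: the implicit z_ii term is 0
    return mu                    # scaled values, summing to (about) 0

# illustrative proportions for 3 objects (P[i][j] + P[j][i] = 1)
P = [[0.5, 0.8, 0.9],
     [0.2, 0.5, 0.7],
     [0.1, 0.3, 0.5]]
mu = thurstone_mu(P)
```

Because p_ij + p_ji = 1 makes the probits antisymmetric, the fitted scale values automatically sum to zero, matching the constraint Σ μi = 0.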

  16. Paired Comparison: Bradley-Terry Model • Assumptions P(ai >> aj) = vi/(vi + vj). Thus each paired comparison is the result of a Bernoulli random variable for a single expert, and a binomial random variable for the set of experts. The vi are determined up to a constant, so we can assume Σi vi = 1. Define N(i,j) as the number of experts preferring ai to aj and W(i) = Σj N(i,j); then the vi can be found as the solution to the maximum-likelihood equations vi = W(i) / Σj≠i [(N(i,j) + N(j,i))/(vi + vj)].

  17. Paired Comparison: Bradley-Terry Model Iterative solution: Ford (1957) notes that the estimate obtained is the MLE, that the solution is unique, and that the iteration converges, provided it is not possible to separate the n objects into two sets where all experts deem that no object in the first set is preferable to any object in the second set. Bradley (1957) developed a goodness-of-fit test based on a statistic (asymptotically) distributed as chi-squared with df = (n−1)(n−2)/2.
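The iterative solution can be sketched as a fixed-point scheme: repeatedly evaluate the maximum-likelihood equations and renormalize so the values sum to one. The count matrix below is an illustrative example, not data from the source:

```python
# Iterative Bradley-Terry fit (a sketch of the Ford-style iteration),
# from counts N[i][j] = number of experts preferring a_i to a_j.

def bradley_terry(N, iters=200):
    n = len(N)
    v = [1.0 / n] * n                       # uniform starting values
    for _ in range(iters):
        new = []
        for i in range(n):
            wins = sum(N[i][j] for j in range(n) if j != i)
            denom = sum((N[i][j] + N[j][i]) / (v[i] + v[j])
                        for j in range(n) if j != i)
            new.append(wins / denom)
        total = sum(new)
        v = [x / total for x in new]        # renormalize so sum v_i = 1
    return v

N = [[0, 8, 9], [2, 0, 7], [1, 3, 0]]       # 10 experts, illustrative counts
v = bradley_terry(N)
```

Note the counts satisfy Ford's connectivity condition (every object loses at least one comparison), so the iteration converges to the unique MLE.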

  18. Paired Comparison: NEL Model • Motivation • If Ti ~ Exponential(λi), then P(Ti < Tj) = λi/(λi + λj) • For a set of exponential random variables, we may ask experts which one will occur first • We can use all of the Bradley-Terry machinery to estimate the λi • We need only a separate estimate of one particular λ to anchor all the others
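The anchoring step above is just a rescaling: since P(Ti < Tj) = λi/(λi + λj) has exactly the Bradley-Terry form, the fitted vi are proportional to the λi, and one externally estimated rate fixes the scale. A sketch with assumed numbers (both the fitted values and the anchor rate are illustrative):

```python
# NEL-model anchoring sketch: v_i from a Bradley-Terry fit are
# proportional to the exponential rates lambda_i; one known rate
# (here lambda_1, an assumed external estimate) fixes the scale.

v = [0.5, 0.3, 0.2]          # Bradley-Terry values estimated from experts
known_lambda_1 = 2.0e-3      # assumed: separately estimated rate for T_1

scale = known_lambda_1 / v[0]
lambdas = [scale * vi for vi in v]   # all rates now on an absolute scale
```

The remaining rates follow from the ratios, e.g. λ2/λ1 = v2/v1.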

  19. Combination of Expert Judgment: Bayesian Techniques • Method of Winkler (1981) & Mosleh and Apostolakis (1986) • Set Up • X – an unknown quantity of interest • x1, …, xe – estimates of X from experts 1, …, e • p(x) – the DM’s prior density for X • Then p(x | x1, …, xe) ∝ L(x1, …, xe | x) p(x) • If the experts are independent, L(x1, …, xe | x) = Πi L(xi | x)

  20. Combination of Expert Judgment: Bayesian Techniques • Method of Winkler (1981) & Mosleh and Apostolakis (1986) • Approach: in the additive model xi = x + εi with εi ~ N(μi, σi²), where the parameters μi and σi are selected by the DM to reflect his/her opinion about the experts’ biases and accuracy • Under the assumptions of the linear (multiplicative) model, the likelihood is simply the value of the normal (lognormal) density with parameters x + μi and σi • Then for the additive model we have p(x | x1, …, xe) ∝ p(x) Πi φ[(xi − x − μi)/σi]

  21. Combination of Expert Judgment: Bayesian Techniques Note: i. the multiplicative model follows the same line of reasoning but with the lognormal distribution; ii. the DM acts as the (e+1)st expert (a role he or she may find uncomfortable)
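When the DM's prior is itself normal, the additive model above has a closed-form, precision-weighted posterior, which makes the mechanics concrete. This is a sketch under that normality assumption; all numbers are illustrative:

```python
import math

# Additive-model combination: DM prior N(m0, s0^2); expert i reports
# x_i = x + eps_i with eps_i ~ N(mu_i, sigma_i^2), experts independent.
# The posterior for x is normal with precision-weighted mean.

def combine(m0, s0, estimates):
    """estimates: list of (x_i, mu_i, sigma_i), mu_i the assumed bias."""
    prec = 1.0 / s0 ** 2                      # prior precision
    num = m0 / s0 ** 2
    for x_i, mu_i, sd_i in estimates:
        prec += 1.0 / sd_i ** 2
        num += (x_i - mu_i) / sd_i ** 2       # bias-corrected estimate
    return num / prec, math.sqrt(1.0 / prec)  # posterior mean, sd

# DM prior N(10, 16); two experts, the second assumed biased upward by 1
mean, sd = combine(10.0, 4.0, [(12.0, 0.0, 2.0), (9.0, 1.0, 2.0)])
```

The posterior standard deviation is always smaller than both the prior's and each expert's, which is exactly the sense in which the DM "learns" from the experts.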

  22. Combination of Expert Judgment:Bayesian Techniques

  23. Combination of Expert Judgment: The Classical Model • Overview • Experts are asked to assess their uncertainty distributions via specification of 5%, 50% and 95%-ile values, both for a set of seed variables (whose actual realization is known to the analyst alone) and for a set of variables of interest • The analyst determines the intrinsic range, or bounds, for the variable distributions • Expert weights are determined via a combination of calibration and information scores on the seed variable values • These weights can be shown to satisfy an asymptotically strictly proper scoring rule, i.e., experts achieve their maximum expected weight in the long run only by stating assessments corresponding to their actual beliefs

  24. Combination of Expert Judgment: The Classical Model [Figure: CDFs for Expert 1 and Expert 2, each a piecewise-linear interpolation through (ql, 0), (q5, .05), (q50, .5), (q95, .95), (qu, 1).] For a weighted combination of expert CDFs, take the weighted combination at all break points (i.e. the qi values for each expert) and then linearly interpolate.

  25. Combination of Expert Judgment: The Classical Model [Figure: for five variables, the positions of the realizations (x) relative to each of three experts’ distribution break points (|).] Expert 1 – calibrated but not informative; Expert 2 – informative but not calibrated; Expert 3 – informative and calibrated

  26. Combination of Expert Judgment: The Classical Model • Information • Informativeness is measured with respect to some background measure, in this context usually the uniform distribution F(x) = (x − l)/(h − l), l < x < h, • or the log-uniform distribution F(x) = [ln(x) − ln(l)]/[ln(h) − ln(l)], l < x < h • Probability densities are associated with the assessments of each expert for each query variable such that • the density agrees with the expert’s quantile assessments • the densities are minimally informative with respect to the background measure • When the background measure is uniform, for example, the expert’s distribution is uniform on its 0% to 5% quantile, 5% to 50% quantile, etc.

  27. Combination of Expert Judgment: The Classical Model • Information • The relative information for expert e on a variable is I(e) = Σi=1..4 pi ln(pi/ri), with (p1, …, p4) = (.05, .45, .45, .05) • and r1 = F(q5(e)) − F(ql(e)), …, r4 = F(qh(e)) − F(q95(e)) • The expert information score is the average information over all variables [Figure: an expert’s distribution plotted against the uniform background measure on (min, max).]
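The information score above rewards experts whose quantiles concentrate more mass than the background measure would. A sketch for a single variable under a uniform background measure (the quantile values below are illustrative):

```python
import math

# Relative information under a uniform background measure on [l, h]:
# r_i is the background mass in each inter-quantile interval and
# p = (.05, .45, .45, .05) the mass the expert assigns there.

def information(quantiles, l, h):
    """quantiles = (q_l, q5, q50, q95, q_h) for one variable."""
    p = (0.05, 0.45, 0.45, 0.05)
    F = lambda x: (x - l) / (h - l)               # uniform background CDF
    r = [F(quantiles[k + 1]) - F(quantiles[k]) for k in range(4)]
    return sum(pi * math.log(pi / ri) for pi, ri in zip(p, r))

wide = information((0, 5, 50, 95, 100), 0, 100)    # matches the background
tight = information((0, 48, 50, 52, 100), 0, 100)  # concentrated quantiles
print(tight > wide)   # True: tighter quantiles are more informative
```

An expert whose quantiles exactly reproduce the background measure scores zero; the score grows as the stated quantiles concentrate.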

  28. Combination of Expert Judgment: The Classical Model • Intrinsic Range for Each Seed Variable • Let qi(e) denote expert e’s i% quantile for seed variable X • Let seed variable X have realization r (unknown to the experts) • Determine the intrinsic range (assuming m experts) as l = min{q5(1), …, q5(m), r} and h = max{q95(1), …, q95(m), r} • Then, for k the overshoot percentage (usually k = 10%), • ql(e) = l − k(h − l) • qh(e) = h + k(h − l) • The expert distribution (CDF) for seed variable X is a linear interpolation between • (ql(e), 0), (q5(e), .05), (q50(e), .5), (q95(e), .95), (qh(e), 1)
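The intrinsic-range recipe above is a direct min/max plus overshoot computation. A minimal sketch with two assumed experts and an assumed realization:

```python
# Intrinsic range for one seed variable, with overshoot k = 10%.

def intrinsic_range(q5s, q95s, r, k=0.10):
    """q5s, q95s: the experts' 5% and 95% quantiles; r: the realization."""
    l = min(q5s + [r])
    h = max(q95s + [r])
    return l - k * (h - l), h + k * (h - l)

# two experts' (5%, 95%) quantiles, realization r = 30 (illustrative)
ql, qh = intrinsic_range([10.0, 20.0], [40.0, 60.0], 30.0)
print(ql, qh)   # 5.0 65.0
```

Including the realization r in the min/max guarantees the true value always falls inside every expert's extended distribution, so the calibration bins below are well defined.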

  29. Combination of Expert Judgment: The Classical Model • Calibration • By specifying the 5%, 50% and 95%-iles, the expert is specifying a 4-bin multinomial distribution with probabilities .05, .45, .45, and .05 for each seed variable response • For each expert, the seed variable outcome (realization) r is the result of a multinomial experiment, i.e. • r ∈ [ql(e), q5(e)) [interval 1], with probability 0.05 • r ∈ [q5(e), q50(e)) [interval 2], with probability 0.45 • r ∈ [q50(e), q95(e)) [interval 3], with probability 0.45 • r ∈ [q95(e), qh(e)] [interval 4], with probability 0.05 • Then, if there are N seed variables and assuming independence, si = [# of seed variables in interval i]/N gives an empirical estimate (s1, s2, s3, s4) of (p1, p2, p3, p4) = (.05, .45, .45, .05)

  30. Combination of Expert Judgment: The Classical Model • Calibration • We may test how well the expert is calibrated by testing the hypothesis H0: si = pi for all i vs. Ha: si ≠ pi for some i • This can be performed using the relative information I(s, p) = Σi si ln(si/pi)

  31. Combination of Expert Judgment: The Classical Model Note that this value is always nonnegative and only takes the value 0 when si = pi for all i. • If N (the number of seed variables) is large enough, 2N·I(s, p) is approximately chi-squared distributed with 3 degrees of freedom • Thus the calibration score for the expert is the probability of getting a relative information score worse than (greater than or equal to) what was obtained: C(e) = Pr{χ²₃ ≥ 2N·I(s, p)}
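Putting the two calibration slides together, the score is one tail probability. This sketch uses the closed-form chi-squared survival function for exactly 3 degrees of freedom (valid only for df = 3) so it needs no external libraries; the seed-variable counts are illustrative:

```python
import math

# Calibration score: compare 2N * I(s, p) against a chi-squared
# distribution with 3 degrees of freedom.

def chi2_sf_df3(x):
    """Survival function of chi-squared with df = 3 (closed form)."""
    return math.erfc(math.sqrt(x / 2.0)) + \
        math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

def calibration(counts):
    """counts[i] = number of the N seed realizations landing in bin i."""
    p = (0.05, 0.45, 0.45, 0.05)
    N = sum(counts)
    s = [c / N for c in counts]
    stat = 2.0 * N * sum(si * math.log(si / pi)
                         for si, pi in zip(s, p) if si > 0)
    return chi2_sf_df3(stat)

well = calibration([1, 9, 9, 1])   # 20 seeds, matching (.05,.45,.45,.05)
over = calibration([6, 4, 4, 6])   # overconfident: tails hit too often
print(well > over)                 # True
```

An overconfident expert, whose realizations land in the 5% tails far more often than 5% of the time, is punished with a calibration score near zero.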

  32. Combination of Expert Judgment: The Classical Model • Weights • Proportional to calibration score × information score • Don’t forget to normalize • Note • as the intrinsic range for a variable depends on the expert quantiles, dropping experts may cause the intrinsic range to be recalculated • changes in the intrinsic range and background measure have negligible to modest effects on scores

  33. Combination of Expert Judgment: The Classical Model • Optimal (DM) Weights • Choose a minimum cutoff α such that if C(e) < α, then w(e) = 0 (some experts will get 0 weight) • α is selected so that a fictitious expert, with a distribution equal to that of the weighted combination of expert distributions, would be given the highest weight among the experts

  34. Combination of Expert Judgment:The Classical Model

  35. Combination of Expert Judgment:The Classical Model

  36. Combination of Expert Judgment:The Classical Model

  37. Combination of Expert Judgment: The Classical Model [Figure: comparison of the combined distributions under performance-based weights, user-defined weights, and equal weights.]
