Combining Expert Judgement: A Review for Decision Makers
Simon French, simon.french@mbs.ac.uk
Valencia 2: Group Consensus Probability Distributions
[Diagram labels: Decision Maker; Group of decision makers; Experts; Group of experts; Issues and undefined decisions]
Different contexts, different appropriate assumptions
• Expert Problem
  • Expert judgements are data to the DM
  • OK to calibrate judgements; no assumption of equality
  • Many-to-one communication
• Group Decision Problem
  • Two-step process: learn, then vote
  • Learn from each other: mutual communication
  • Wrong to calibrate at the decision? Equal voting power?
• Textbook Problem
  • Need to think of later, as-yet unspecified decisions
  • Need to communicate to unspecified audiences
How do you question experts?
If the non-swimmer averages advice on depths … he drowns! If he were to ask the question, 'Will I drown if I wade across?', he would get a unanimous answer: yes!
Approaches to the expert problem (1): Bayesian
• Expert judgement is data, x
• The difficulty lies in defining the likelihood

p(θ | x) ∝ p(x | θ) p(θ)
posterior probability ∝ likelihood × prior probability

• p(θ): the DM's prior for the quantities of interest in the real problem
• p(x | θ): the DM's probability for the experts' judgements given the actual quantity of interest (correlations? elicitation errors? calibration?)
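A minimal sketch of the Bayesian "expert judgement as data" idea, assuming a simple normal model: each expert reports a point estimate of the unknown quantity θ, and the DM supplies a prior for θ plus a likelihood whose bias terms and covariance encode calibration, elicitation error and correlation between experts. The function name and all numbers below are illustrative, not from the talk.

```python
import numpy as np

def posterior_normal(prior_mean, prior_var, x, bias, cov):
    """Conjugate normal update: p(theta | x) proportional to p(x | theta) p(theta).

    x    : experts' point estimates of the unknown quantity theta
    bias : DM's assessed systematic bias of each expert (calibration)
    cov  : DM's covariance for the elicitation errors (off-diagonal terms
           model shared information / correlation between experts)
    """
    x = np.asarray(x, dtype=float) - np.asarray(bias, dtype=float)
    ones = np.ones_like(x)
    cov_inv = np.linalg.inv(cov)
    post_prec = 1.0 / prior_var + ones @ cov_inv @ ones
    post_mean = (prior_mean / prior_var + ones @ cov_inv @ x) / post_prec
    return post_mean, 1.0 / post_prec

# Illustrative example: three experts, two of whom share sources (correlated errors).
x = [4.2, 4.5, 6.0]
bias = [0.0, 0.3, -0.2]
cov = np.array([[1.0, 0.6, 0.0],
                [0.6, 1.0, 0.0],
                [0.0, 0.0, 2.0]])
mean, var = posterior_normal(prior_mean=5.0, prior_var=4.0, x=x, bias=bias, cov=cov)
print(f"posterior mean {mean:.2f}, variance {var:.2f}")
```

The point of the sketch is where the modelling burden falls: the DM must assess the bias and covariance terms, which is exactly the "difficulty in defining the likelihood" noted above.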
Approaches to the expert problem (2): Opinion Pools
• Expert judgements are taken as probabilities
• Essentially a weighted mean: arithmetic, geometric, …
• Weights defined from
  • the DM's judgement
  • equal weights (Laplace, equal pay)
  • social networks
• Cooke's Classical method
  • weights defined from calibration data
  • are there better scoring rules?
  • many applications: database of 45 studies
  • computationally easy
  • appears to discard poor assessors but actually finds a spanning set
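A small sketch of the two most common pools, linear (arithmetic) and logarithmic (geometric), over discrete distributions with user-supplied weights. In Cooke's Classical method the weights would instead be derived from calibration on seed questions, which is not shown here; all names and numbers are illustrative.

```python
import numpy as np

def linear_pool(probs, weights):
    """Weighted arithmetic mean: p_g(x) = sum_i w_i p_i(x)."""
    probs, weights = np.asarray(probs, float), np.asarray(weights, float)
    return weights @ probs

def log_pool(probs, weights):
    """Weighted geometric mean, renormalised: p_g(x) proportional to prod_i p_i(x)^w_i."""
    probs, weights = np.asarray(probs, float), np.asarray(weights, float)
    pooled = np.prod(probs ** weights[:, None], axis=0)
    return pooled / pooled.sum()

# Three experts' probabilities over the same three outcomes (illustrative).
probs = [[0.6, 0.3, 0.1],
         [0.5, 0.4, 0.1],
         [0.2, 0.3, 0.5]]
weights = [0.5, 0.3, 0.2]   # e.g. the DM's judgement, or equal weights
print("linear pool:", linear_pool(probs, weights))
print("log pool:   ", log_pool(probs, weights))
```

Note the characteristic difference: the logarithmic pool pulls towards outcomes on which the experts agree and penalises outcomes any expert rates near zero, while the linear pool simply averages.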
The decision process: formulate issues and structure the problem → analysis → decide and implement
But all this is the easy bit ….
• Expert advice on what might happen; expert input on models, parameters, probabilities
• cf. discussions of EDA followed by confirmatory statistics
• How do you elicit models and probabilities?
• Plausibility bias if it is the expert's model?
Group decision problem
Individuals hold (p1(.), u1(.)), (p2(.), u2(.)), …, (pi(.), ui(.)), …, (pn(.), un(.)). Many approaches:
• combine the individual pi(.) and ui(.) into a group pg(.) and ug(.), then form the group expected utility ranking from ∫ ug(x) pg(x) dx
• individuals rank using their own expected utility orderings, ∫ ui(x) pi(x) dx, then vote
• an altruistic Supra Decision Maker
• negotiation models
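To make the first two approaches concrete, here is a hedged sketch contrasting a pooled group expected-utility ranking with individual expected-utility rankings combined by a vote. The equal-weight pooling, the Borda-count voting rule and all numbers are illustrative choices, not the talk's prescription.

```python
import numpy as np

# utilities[i][a][s]: individual i's utility for action a in state s
# probs[i][s]       : individual i's probability for state s
utilities = np.array([[[1.0, 0.0], [0.6, 0.5], [0.2, 0.9]],
                      [[0.9, 0.1], [0.5, 0.6], [0.1, 1.0]],
                      [[1.0, 0.2], [0.7, 0.4], [0.3, 0.8]]])
probs = np.array([[0.7, 0.3],
                  [0.4, 0.6],
                  [0.5, 0.5]])

# Approach 1: equal-weight pools for p_g and u_g, then rank actions by the
# group expected utility.
p_g = probs.mean(axis=0)
u_g = utilities.mean(axis=0)
group_eu = u_g @ p_g
print("group EU ranking (best first):", np.argsort(-group_eu))

# Approach 2: each individual ranks by their own expected utility, then the
# rankings are combined by a Borda count (one possible voting rule).
indiv_eu = np.einsum('ias,is->ia', utilities, probs)
ranks = np.argsort(np.argsort(-indiv_eu, axis=1), axis=1)    # 0 = best
borda = (indiv_eu.shape[1] - 1 - ranks).sum(axis=0)          # points per action
print("Borda-count ranking (best first):", np.argsort(-borda))
```

The two approaches need not agree, and the choice of pooling rule or voting rule is itself a contested modelling decision; that is where the impossibility results discussed next bite.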
Group decision problem Arrow’ Theorem and similar results • combine individual pi(.) and ui(.) into group pg(.) and ug(.) then form group expected utility ranking. • individuals rank using their own expected utility ordering then vote • altruistic Supra Decision Maker • negotiation models Paradox and impossibility theorems abound in group decision making theory
Group decision problem Arrow and similar results • combine individual pi(.) and ui(.) into group pg(.) and ug(.) then form group expected utility ranking. • individuals rank using their own expected utility ordering then vote • altruistic Supra Decision Maker • negotiation models • social process which translates individual decisions into an implemented action • Decision conferences • Built around ‘reference’ decision or negotiation models • Decision analysis as much about communication as about supporting decision making • Might vote or might leave the actual decision to unspoken political/social processes
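As an illustration of the paradoxes referred to above, a minimal Condorcet cycle: three entirely reasonable individual rankings for which pairwise majority voting produces no coherent group ordering (the rankings are invented for the example).

```python
from itertools import combinations

rankings = [["A", "B", "C"],   # voter 1: A > B > C
            ["B", "C", "A"],   # voter 2: B > C > A
            ["C", "A", "B"]]   # voter 3: C > A > B

for x, y in combinations("ABC", 2):
    wins_x = sum(r.index(x) < r.index(y) for r in rankings)
    winner, loser = (x, y) if wins_x >= 2 else (y, x)
    print(f"majority prefers {winner} to {loser}")

# The output shows A beats B, B beats C, and C beats A: the group
# "preference" is intransitive, so no majority-based ranking exists.
```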
Group decision support systems
• The advent of readily available computing means that algorithmic solutions to the Group Decision Problem are attractive.
• Few software developers know any of the theory in this area, and ignorance of Arrow is rife.
The textbook problem
• How to present results to help with future, as yet unspecified decisions?
• How does one report with that in mind?
• Public participation and the web mean that many stakeholders in issues are seeking and using expert reports … whether or not they understand them.
Cooke's principles for scientific reporting of expert judgement studies
• Empirical control: quantitative expert assessments are subjected to empirical quality controls.
• Neutrality: the method for combining/evaluating expert opinion should encourage experts to state their true opinions, and must not bias results.
• Fairness: experts are not pre-judged prior to processing the results of their assessments.
• Scrutability/accountability: all data, including experts' names and assessments, and all processing tools are open to peer review, and results must be reproducible by competent reviewers.
Few reports satisfy these principles: 'Chatham House' style reporting conflicts with scrutability/accountability.
The Textbook Problem relates to exploring issues, formulating decision problems and developing prior distributions.
• So a report should anticipate meta-analyses and give calibration data, expert biographies, background information, etc.
• Since the precise decision problem is not known at the time of the expert studies, the reports will be used to build the prior distributions, not update them.
• Need meta-analytic approaches for expert judgement
  • little peer review
  • no publication bias
  • 'self' promotion of reports by pressure groups
  • Cooke's principles not even considered.
The textbook problem for public participation
• The public and stakeholders will need to develop their priors from the information available
• But they will not always be sophisticated DMs, nor will they be supported by an analyst
• Behavioural issues
  • probabilities versus frequencies (Gigerenzer)
  • risk communication
  • celebrity
  • observables versus parametric constructs
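A small worked illustration of the Gigerenzer point on probabilities versus frequencies: the same diagnostic question answered once in probability format (Bayes' theorem) and once in natural-frequency format, which lay audiences typically find easier to follow. The prevalence and error rates are invented for the example.

```python
prevalence, sensitivity, false_pos = 0.01, 0.90, 0.05

# Probability format: Bayes' theorem.
p_pos = prevalence * sensitivity + (1 - prevalence) * false_pos
p_disease_given_pos = prevalence * sensitivity / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.2f}")

# Natural-frequency format: "out of 1000 people, 10 have the disease and
# 9 of them test positive; of the 990 healthy, about 50 test positive;
# so roughly 9 of every 59 positives actually have the disease."
print(f"frequency version: 9 / (9 + 49.5) = {9 / (9 + 49.5):.2f}")
```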