
Linear Models of Judgment


Presentation Transcript


  1. Linear Models of Judgment • Judgment vs. choice • Multiattribute model of judgment • Actuarial model of the environment • Experts and computers • Bootstrapping models

  2. Judgment vs. Choice • Judgment = assign a score or category e.g., How much would you pay for a one-week trip to Aspen? How much do you like Bill Clinton? What will the price of Intel be in 6 months? • Choice = pick from a set of alternatives e.g., which car? investment? job?

  3. Multiattribute Choice Model • Choice = select most desirable • Desirability is judged from attributes • Attributes can be FACTS (e.g., price), COMPOSITES (e.g., safety), or subjective VALUES (e.g., prestige) • There is often a hierarchical structure to attributes and judgments • How to make tradeoffs, e.g., weight & add
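
As a concrete illustration of the weight-and-add rule, the sketch below scores two alternatives and picks the more desirable one. The attribute names, 0-10 scores, and importance weights are all invented for illustration.

```python
# Weight-and-add rule: desirability = sum over attributes of
# (importance weight x attribute score). All values below are hypothetical.
cars = {
    "Car A": {"price": 7, "safety": 9, "prestige": 4},   # 0-10 scores, higher = better
    "Car B": {"price": 5, "safety": 6, "prestige": 9},
}
weights = {"price": 0.5, "safety": 0.3, "prestige": 0.2}  # importance weights, sum to 1

def desirability(scores):
    """Weighted-additive desirability of one alternative."""
    return sum(weights[a] * scores[a] for a in weights)

# Choice = pick the alternative with the highest desirability.
for name, scores in sorted(cars.items(), key=lambda kv: -desirability(kv[1])):
    print(f"{name}: {desirability(scores):.1f}")
```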

  4. Desirability of a Car

  5. Lens Model • Judgment (Ys) is an attempt to represent or predict the environment from cues • There is a criterion (Ye) that allows us to estimate correctness of the judgment • [Diagram: a set of cues sits between the two sides of the lens, linking the subject’s judgment Ys to the environment’s criterion Ye]
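
A minimal simulation of the lens-model setup (cue weights and noise levels are invented): the criterion Ye and the judgments Ys are both built from the same cues, and accuracy ("achievement") is the correlation between them.

```python
import numpy as np

# Simulated lens-model data: three cues seen by both the environment
# and the judge. All coefficients below are hypothetical.
rng = np.random.default_rng(0)
n = 200
cues = rng.normal(size=(n, 3))
ye = cues @ [0.6, 0.3, 0.1] + rng.normal(0, 0.5, n)   # criterion (environment side)
ys = cues @ [0.5, 0.4, 0.1] + rng.normal(0, 0.7, n)   # judgments (subject side)

# Achievement: how well the judgments track the criterion.
print(f"achievement r(Ys, Ye) = {np.corrcoef(ys, ye)[0, 1]:.2f}")
```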

  6. Models of Decision Makers • Slovic’s study of two stockbrokers: A used near-term prospects, P/E, and quarterly earnings trend; B used yearly earnings trend, P/E, and profit-margin trend • Ratings of business schools: USNWR weights reputation with academics, reputation with CEOs, selectivity, and placement; BusWeek has recruiters rate analytics, teaming, and global focus, and graduates rate teaching, curriculum, and placement

  7. Actuarial Environmental Models • Major Retailer’s Credit Scoring Table (Capon, Journal of Marketing, 1982, 46, 82-91):
     Occupation:   clergy 46   executive 62   professional 62   student 46   teacher 46   unemployed 33   no answer 47
     Job tenure:   < 0.5 yrs 31   0.5-5.5 yrs 24   5.5-8.5 yrs 26   8.5-15.5 yrs 31   > 15.5 yrs 39
     • Older economists make more extreme forecasts

  8. Comparisons Using Models • How consistent are individuals? • How consensual are experts? • How accurate are judges? (Ye vs. Ys) • What are judges doing? (Ysm) • What predicts the criterion? (Yem) • How good are our models? • Do judges understand the environment? (Yem vs. Ysm)

  9. Graduate Admissions Example • Ys = judgment of admissions committee (1 to 5 scale) • Ye = faculty ratings of performance • Ysm = prediction model of judgments = -4.17 + 0.0032*GRE + 1.02*GPA + 0.0791*QI • Yem = actuarial model of performance = -0.71 + 0.0006*GRE + 0.76*GPA + 0.2518*QI
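
Plugging a hypothetical applicant into the two equations above shows how each model is applied; the GRE, GPA, and QI values are invented for illustration, and the two outputs are on different scales (committee ratings vs. faculty performance ratings), so the interesting comparison is between the coefficients, not the raw numbers.

```python
# The two regression equations from the slide, applied to one
# hypothetical applicant.
def ysm(gre, gpa, qi):
    """Ysm: model of the committee's judgments (1-5 admissions rating)."""
    return -4.17 + 0.0032 * gre + 1.02 * gpa + 0.0791 * qi

def yem(gre, gpa, qi):
    """Yem: actuarial model of later faculty performance ratings."""
    return -0.71 + 0.0006 * gre + 0.76 * gpa + 0.2518 * qi

gre, gpa, qi = 650, 3.5, 6.0       # hypothetical applicant
print(f"Ysm = {ysm(gre, gpa, qi):.2f}")   # what the committee would say
print(f"Yem = {yem(gre, gpa, qi):.2f}")   # what the environment model predicts
# Note how the actuarial model puts far more weight on QI than the judges do.
```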

  10. Admission and Job Interviews • Harvard Business School stopped conducting interviews • Are interviews accurate? • Are interviews overweighted? • What is their proper role? • HBS no longer uses the GMAT • HBS criteria: academics and character

  11. Advantages of Models • Make strategy explicit • See how experts vary • Train new judges • Learn about environments • Enhance or replace experts • Use the model after the expert is gone

  12. Judges vs. Environment • Which should be more accurate, expert judges or actuarial models? • Judges have experience and the ability to use cues in complex ways • Actuarial models are simple, typically linear in form, and consistent

  13. Judges vs. Actuarial Model

  14. Why Don’t Experts Do Better? • They have the wrong rules • They don’t use their rules consistently: distractions; fatigue and boredom; “exceptions”; inability to make tradeoffs

  15. Bootstrapping Models • If intuitive decision makers have good rules but fail to use them consistently, can we separate signal from noise? • Consensus of judges (see groups later) • Model of a judge (bootstrapping): Judgment = Linear + Nonlinear + Noise • Which wins: the judge or a linear model of the judge?
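
A simulated demonstration of bootstrapping (the judge’s policy, cue weights, and noise levels are all invented): regress the judge’s ratings on the cues, then let the fitted linear model judge in the judge’s place. Because the model keeps the judge’s policy (signal) and drops the trial-to-trial inconsistency (noise), it typically predicts the criterion better than the judge it was built from.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
cues = rng.normal(size=(n, 3))
criterion = cues @ [0.6, 0.3, 0.1]                        # what we want to predict
judge = cues @ [0.5, 0.35, 0.15] + rng.normal(0, 0.8, n)  # sensible policy + noise

# Bootstrapping: a linear model OF the judge (intercept + cue weights).
X = np.column_stack([np.ones(n), cues])
beta, *_ = np.linalg.lstsq(X, judge, rcond=None)
bootstrap = X @ beta                                      # the judge, minus the noise

r = lambda a, b: np.corrcoef(a, b)[0, 1]
print(f"judge vs criterion:     {r(judge, criterion):.2f}")
print(f"bootstrap vs criterion: {r(bootstrap, criterion):.2f}")  # typically higher
```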

  16. A Tale of Three Models

  17. Some Typical Results • Some tasks are much harder than others • Actuarial models almost always win • Bootstrapping works! • Linear models correlate well with any monotonic function, hold up when there is noise and cues are positively correlated, and perform well even with random or unit weights • To improve on linear models, you need lots of data
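
The robustness of unit weights can be seen in a small simulation (the data and weights are invented): standardize positively correlated cues, simply add them up, and compare against the optimally weighted regression composite.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
base = rng.normal(size=n)
# Four positively correlated cues (each shares the common "base" factor).
cues = np.column_stack([base + rng.normal(0, 1, n) for _ in range(4)])
criterion = cues @ [0.4, 0.3, 0.2, 0.1] + rng.normal(0, 1, n)

# Unit-weight composite: standardize the cues and add them with equal weight.
z = (cues - cues.mean(axis=0)) / cues.std(axis=0)
unit = z.sum(axis=1)

# Optimal (regression) weights, fit to the same sample.
X = np.column_stack([np.ones(n), cues])
beta, *_ = np.linalg.lstsq(X, criterion, rcond=None)
fitted = X @ beta

r = lambda a, b: np.corrcoef(a, b)[0, 1]
print(f"unit weights vs criterion:       {r(unit, criterion):.2f}")
print(f"regression weights vs criterion: {r(fitted, criterion):.2f}")  # usually only slightly better
```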

  18. Experts and Models • What do experts do best? • What do computers do best? • How can they be combined? • Should we give the model to the expert or give the expert to the model?

  19. Batterymarch Example • Stock portfolio management company • Manages $12 billion with 37 employees • Experts identify variables, suggest rules, design tests, and deal with clients • Computer keeps databases, runs tests of rules, and buys and sells stocks • 10-12 rules identify attractive stocks

  20. Working With the Political Lens: Separating Facts and Values • Selecting a bullet for Denver Police - police want to immobilize suspects - community concerned about injuries - experts testify on each side • What kinds of information are needed? • How should this decision be made?

  21. A Frame for Conflict Resolution • [Diagram: FACTS (speed, shape, etc.) feed into VALUES (injury potential, stopping power, threat to bystanders), and weights on the values combine them into an overall Desirability]

  22. Denver Bullet Resolution • [Plot: candidate bullets arrayed on axes of Stopping Power vs. Injury Potential; P marks the bullet proposed by the Police, C the bullet proposed by the Community] • Experts combine facts into judgments on each value • Constituencies compromise on how to weight the values into overall worth

  23. Role of Technical Experts • An executive whose daughter had a hip deformity consulted three doctors • One said, “Wait” • A second said, “Brace for 6 months” • The third said, “Operate” • How would you make this decision?

  24. Your Exercise #1: Job Selection • What attributes or objectives of jobs mattered to you? • How different were the rankings from intuition, the weighted linear model, and the unit-weighted model? • If the rankings differ, which do you trust? Why? • The value added is in the process, not the numbers
