
Decision making as a model


Presentation Transcript


  1. Decision making as a model. 2. Statistics and decision making.

  2. Bayesian statistics: p(H|D) ∝ p(H)·p(D|H). If H refers to possible values of θ: pdf(θ|D) ∝ pdf(θ)·L(θ|D). NB: L is the likelihood function! From about 1925 the Bayesian approach in inductive statistics was marginalised (it is now making a comeback).
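A minimal Python sketch of this updating rule, using a grid approximation; the uniform prior and the binomial data (7 successes in 10 trials) are illustrative assumptions, not from the slides:

import numpy as np

# Grid of candidate values for theta
theta = np.linspace(0, 1, 201)

# pdf(theta): a uniform prior (illustrative assumption)
prior = np.ones_like(theta)

# L(theta|D) for D = 7 successes in 10 trials (illustrative)
k, n = 7, 10
likelihood = theta**k * (1 - theta)**(n - k)

# pdf(theta|D) is proportional to pdf(theta) * L(theta|D); normalise over the grid
posterior = prior * likelihood
posterior /= posterior.sum() * (theta[1] - theta[0])

print(theta[np.argmax(posterior)])  # posterior mode, here 0.7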

  3. In “classical” statistics the frequentist interpretation of probability is preferred. Hypotheses are TRUE or FALSE (we don’t know which for certain; it is not a matter of probability) and are accepted or rejected based on data D and the likelihood p(D|H), e.g. in a test of significance.

  4. Fisher: formulate a null hypothesis H0 about some population parameter; compute pdf(S|H0), the probability density of a statistic S for a sample of size n; do the experiment, which yields the observed value Sx and its p-value. If p is small, reject H0; you could then accept some alternative.
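A sketch of this recipe in Python, with an assumed setup (H0: population mean 0 with known sigma 1, statistic S = the sample mean; the simulated data are made up for illustration):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(0.4, 1.0, size=25)   # 'do experiment' (simulated data)

# Under H0 the sample mean is distributed N(0, 1/sqrt(n))
n = len(sample)
S_x = sample.mean()
p = 2 * stats.norm.sf(abs(S_x), scale=1 / np.sqrt(n))   # two-sided p-value

print(S_x, p)   # if p is small, reject H0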

  5. Neyman & Pearson: specify H0 and H1 and their pdfs, pdf(S|H0) and pdf(S|H1). Decide on a criterion based on α = p(Type I error) and β = p(Type II error); do the experiment, compute Sx, and choose between H0 and H1.
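The same step in the Neyman-Pearson style, as a sketch: the two sampling distributions N(0, 1) and N(2, 1), alpha = .05, and the observed statistic are all illustrative assumptions:

from scipy import stats

# Assumed pdfs: S|H0 ~ N(0, 1), S|H1 ~ N(2, 1)
alpha = 0.05
crit = stats.norm.ppf(1 - alpha)      # criterion with p(Type I error) = alpha
beta = stats.norm.cdf(crit, loc=2)    # p(Type II error) under H1

S_x = 1.9                             # observed statistic (made up)
print(crit, beta, "choose H1" if S_x > crit else "choose H0")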

  6. Neyman & Pearson is more suitable for decision making than for science! For completeness, the likelihood approach without priors (Fisher, Royall): p(H|D) ∝ p(H)·p(D|H); irrespective of p(H), how strong is D’s support for H? Example: model selection (Akaike): AIC = −log(L) + k, BIC = −log(L) + k·log(n)/2.
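The model-selection formulas translate directly into code. Note that these are the slide's conventions (many texts use AIC = 2k − 2·log L instead), and the log-likelihoods below are made-up numbers:

import numpy as np

def aic(loglik, k):
    return -loglik + k                  # slide's convention

def bic(loglik, k, n):
    return -loglik + k * np.log(n) / 2  # slide's convention

# Illustrative: model B fits better (higher log-likelihood) but uses more parameters
n = 100
print(aic(-120.0, 2), aic(-115.0, 5))       # pick the smaller value
print(bic(-120.0, 2, n), bic(-115.0, 5, n))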

  7. Signal Detection Theory: an application of Neyman-Pearson to processing sonar or radar signals on a noisy background. Military technology (WW2).

  8. Hypothesis 0: there is no signal, only noise. Hypothesis 1: there is a signal and noise. NB 1: On the basis of some “evidence” I have to act, although I do not know which H is true! NB 2: This is typically a “classic” approach, but at the end Bayes will creep in by the back door!

  9. Fundamental assumptions of signal detection theory. [figure: probability density as a function of “evidence”] 1. The effect (= a value of “evidence”) of a signal is variable (according to a probability distribution). 2. The effect of noise is also variable. Problem: is this “evidence” (= a point on the x-axis) the effect of a signal (+ noise) or of noise only?

  10. [figure: overlapping noise and signal distributions with “No”/“Yes” response regions] 3. If the signal is weak, the distributions overlap and errors are unavoidable, whichever criterion is adopted.
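A quick simulation of these assumptions (the signal strength, the criterion, and the unit-variance normal distributions are illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
d_prime, c = 1.0, 0.5                       # assumed signal strength and criterion

noise  = rng.normal(0.0,     1.0, 10_000)   # evidence on noise-only trials
signal = rng.normal(d_prime, 1.0, 10_000)   # evidence on signal trials

# With overlapping distributions, any criterion produces both kinds of error
print("false alarms:", np.mean(noise  > c))
print("misses:      ", np.mean(signal < c))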

  11. Terminology:

                       “No”                  “Yes”
  Signal (+ noise):    miss                  hit
  (Only) noise:        correct rejection     false alarm
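The same 2x2 terminology as a small helper function (hypothetical, purely for illustration):

def outcome(signal_present: bool, said_yes: bool) -> str:
    if signal_present:
        return "hit" if said_yes else "miss"
    return "false alarm" if said_yes else "correct rejection"

print(outcome(True, True), outcome(False, True))   # hit, false alarm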

  12. The stronger the signal (or the better the detector), the further apart the distributions lie.

  13. Given some sensitivity (= a distribution for noise and one for signal), several response criteria can be adopted, depending on personal preference or the “pay-off” in this situation: How bad is a miss, how important is a hit? How bad is a false alarm, how important is a correct rejection? How often do signals occur? (Think of Bayes!)
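These pay-off questions have a standard answer in signal detection texts: the ideal observer says “yes” when the likelihood ratio exceeds an optimal value set by the prior and the pay-offs. A sketch, with the prior and all pay-off values chosen purely for illustration:

import numpy as np

p_signal = 0.1                      # how often do signals occur?
V_hit, C_miss = 10.0, 50.0          # how important is a hit, how bad a miss?
V_cr, C_fa = 1.0, 5.0               # correct rejection vs false alarm

beta_opt = ((1 - p_signal) / p_signal) * (V_cr + C_fa) / (V_hit + C_miss)

# For equal-variance normal distributions, the matching criterion on the
# evidence axis is the point where the likelihood ratio equals beta_opt
d_prime = 1.5
x_crit = np.log(beta_opt) / d_prime + d_prime / 2
print(beta_opt, x_crit)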

  14. Two types of applications: 1. Normative: the distributions are known; try to find the optimal criterion (for optimal behavior). • Is that a hostile plane? • Does this mammogram indicate a malignancy? • Is there a weapon in this suitcase? • Can we admit this student to this school? • What is the best cut-off score for this test?

  15. Two types of applications: 2. Descriptive: behavior is known; try to reconstruct the distributions and criterion as a rational model. How good is this person at detecting a v among u’s? Is this person inclined to say “yes” in a recognition test? How well are judges or juries able to distinguish between the guilty and the innocent? Do judges and lay juries differ in their bias for convicting or acquitting? How good is this test?

  16. An experiment with noise (blank) and signal (target) trials: a strict (“high”) criterion results in few hits and few false alarms. Hit rate = proportion of hits (of signal trials). False alarm rate = proportion of false alarms (of noise trials).

  17. A lax (“low”) criterion results in more hits and more false alarms, given the same sensitivity.

  18. The ROC (receiver operating characteristic) curve connects the points in a hit/false-alarm plot that result from adopting several criteria at the same sensitivity (= the same distributions). The ROC curve characterises detector sensitivity (or signal strength) independently of the criterion. Important: sensitivity and criterion are theoretically independent.

  19. Same sensitivity (for this signal), several criteria: each criterion yields one point on the ROC curve. Also known as: Receiver Operating Characteristic, Relative Operating Characteristic, isosensitivity curve.
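Tracing the ROC curve in code: sweep many criteria over the same pair of distributions (d′ = 1 and unit-variance normals are illustrative assumptions):

import numpy as np
from scipy import stats

d_prime = 1.0                          # assumed sensitivity
criteria = np.linspace(-3, 4, 50)      # several response criteria

fa_rate  = stats.norm.sf(criteria)                  # p("yes" | noise)
hit_rate = stats.norm.sf(criteria, loc=d_prime)     # p("yes" | signal)

# All (FA, hit) points lie on one isosensitivity curve
for f, h in list(zip(fa_rate, hit_rate))[::10]:
    print(f"FA = {f:.2f}, hit = {h:.2f}")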

  20. Greater sensitivity: the ROC curve lies further from the diagonal. (Perfection would be: all hits and no false alarms.)

  21. This suggests two types of measure for sensitivity (independent of the criterion): 1. The distance between the signal and noise distributions (e.g. d′). 2. The area under the ROC curve: A.
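Both measures can be computed from one (hit rate, false-alarm rate) pair under the equal-variance normal model; the rates below are illustrative:

import numpy as np
from scipy import stats

H, F = 0.80, 0.20                                  # illustrative hit/FA rates

d_prime = stats.norm.ppf(H) - stats.norm.ppf(F)    # 1. distance measure d'
A_z = stats.norm.cdf(d_prime / np.sqrt(2))         # 2. area under the normal ROC

print(d_prime, A_z)   # roughly 1.68 and 0.88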

  22. No distinction between signal and noise: A = .50 (the ROC curve then reflects only the bias for saying “yes” or “no”).

  23. Perfect distinction between signal and noise: A = 1.

  24. Types of measures for the criterion: 1. Position on the x-axis (e.g. c). 2. Likelihood ratio p(x ≥ c|S)/p(x ≥ c|N) = h/f (e.g. β). 3. Position in the ROC plot (lower left vs upper right). 4. Slope of the tangent to the ROC curve.
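Measures 1 and 2 from the same illustrative rates as above, using the standard equal-variance formulas c = −(z(H) + z(F))/2 and ln β = c·d′:

import numpy as np
from scipy import stats

H, F = 0.80, 0.20
zH, zF = stats.norm.ppf(H), stats.norm.ppf(F)

c = -0.5 * (zH + zF)              # 1. position on the evidence axis
beta = np.exp(c * (zH - zF))      # 2. likelihood ratio at the criterion

print(c, beta)   # c = 0 and beta = 1 here: an unbiased observer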

  25. Signal Detection Theory is applied in many contexts! Breast cancer?

  26. PSA indices for screening for prostate cancer. [figure: ROC curves, hit rate vs false-alarm rate]

  27. Psychodiagnosis: 1. How good is this test at distinguishing the relevant categories? 2. What is a good cut-off score (at which score should I hire the candidate / admit the student / send the client to a psychiatrist or an asylum)? [figure: test-score distributions for a control group and for patients]
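One way to pick a cut-off in code: sweep candidate scores and maximise a chosen index (here Youden’s J = hit rate − false-alarm rate; the two score distributions are illustrative assumptions):

import numpy as np
from scipy import stats

# Assumed test scores: controls ~ N(50, 10), patients ~ N(65, 10)
cutoffs = np.linspace(30, 90, 121)
hit = stats.norm.sf(cutoffs, loc=65, scale=10)   # patients above cut-off
fa  = stats.norm.sf(cutoffs, loc=50, scale=10)   # controls above cut-off

best = cutoffs[np.argmax(hit - fa)]
print(best)   # midway between the means (57.5) for equal variances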

  28. Comer & Kendall (2005): the Children’s Depression Inventory detects depression in a sample of anxious and anxious-plus-depressed children; several cut-off scores are compared.

  29. What are the costs of missing a weapon/explosive at an airport? What are the costs of a false alarm? What are the costs of screening (apparatus, personnel, delay)?
