
Some Basic Aspects of Perceptual Inference Under Uncertainty



Presentation Transcript


  1. Some Basic Aspects of Perceptual Inference Under Uncertainty
  Psych 209, Jan 9, 2013

  2. Example
  • H = “it has just been raining”
  • E = “the ground is wet”
  • What is the probability that H is true, given E?
  • Assume we already believe:
    • P(H) = .2; P(~H) = .8
    • P(E|H) = .9; P(E|~H) = .01
  • We want to calculate p(H|E)
  • Can we derive a formula to do so?

  3. Derivation of Bayes Formula
  • By the definition of conditional probability:
    • p(H|E) = p(H&E)/p(E)
    • p(E|H) = p(H&E)/p(H)
  • So p(E|H)p(H) = p(H&E)
  • Substituting in the first line, we obtain p(H|E) = p(E|H)p(H)/p(E) (1)
  • What is p(E)? p(E) = p(H&E) + p(~H&E) = p(E|H)p(H) + p(E|~H)p(~H)
  • Substitute the last expression into (1) and we have Bayes formula:
    p(H|E) = p(E|H)p(H) / [p(E|H)p(H) + p(E|~H)p(~H)]

  4. Example
  • Assumptions:
    • P(H) = .2; P(~H) = .8
    • P(E|H) = .9; P(E|~H) = .01
  • Then what is p(H|E), the probability that it has just been raining, given that the ground is wet?
    p(H|E) = (.9 × .2)/((.9 × .2) + (.01 × .8)) = .18/(.18 + .008) ≈ .96
  • Visualization (on board)
  • What happens if we change our beliefs about:
    • P(H)? P(E|H)? p(E|~H)?
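
A minimal Python sketch of this calculation, using Bayes' formula as derived on slide 3 (the function and argument names are illustrative, not from the slides):

```python
# Posterior probability for a binary hypothesis via Bayes' rule.
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    # p(E) by marginalization over H and ~H
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# Rain example from slide 2: P(H) = .2, P(E|H) = .9, P(E|~H) = .01
print(posterior(0.2, 0.9, 0.01))  # 0.957..., i.e. ~.96
```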

  5. Extension to N Alternatives

  6. Posterior Ratios
  • The ratio p(hi|e)/p(hj|e) can be expressed as:
    p(hi|e)/p(hj|e) = (p(hi)/p(hj)) (p(e|hi)/p(e|hj))
  • These ratios are indifferent to the number of alternatives
  • Taking logs:
    log(p(hi|e)/p(hj|e)) = log(p(hi)/p(hj)) + log(p(e|hi)/p(e|hj))
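
As a sketch, the log posterior ratio for the rain example can be computed directly as the sum of the log prior ratio and the log likelihood ratio (the names below are illustrative):

```python
import math

# log[p(hi|e)/p(hj|e)] = log prior ratio + log likelihood ratio
def log_posterior_ratio(p_hi, p_hj, p_e_hi, p_e_hj):
    return math.log(p_hi / p_hj) + math.log(p_e_hi / p_e_hj)

# hi = "it has just been raining", hj = "it has not", e = "the ground is wet"
print(log_posterior_ratio(0.2, 0.8, 0.9, 0.01))  # log(.25) + log(90) ≈ 3.11
```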

  7. Morton’s use of the logit

  8. Odds Ratio Version of Bayes Formula
  • For the 2-alternative case we can re-express p(hi|e):
    p(hi|e) = (p(hi)/p(hj)) (p(e|hi)/p(e|hj)) / [(p(hi)/p(hj)) (p(e|hi)/p(e|hj)) + 1]
  • Using logs and exponentials:
    p(hi|e) = exp[log(p(hi)/p(hj)) + log(p(e|hi)/p(e|hj))] / (exp[log(p(hi)/p(hj)) + log(p(e|hi)/p(e|hj))] + 1)
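
A small sketch of this log-odds form: exponentiating the summed log ratios and normalizing recovers the posterior from slide 4 (illustrative names, not the slides' notation):

```python
import math

def posterior_from_log_odds(p_hi, p_hj, p_e_hi, p_e_hj):
    log_odds = math.log(p_hi / p_hj) + math.log(p_e_hi / p_e_hj)
    # logistic of the log odds: exp(L) / (exp(L) + 1)
    return math.exp(log_odds) / (math.exp(log_odds) + 1)

print(posterior_from_log_odds(0.2, 0.8, 0.9, 0.01))  # ≈ .96, matching slide 4
```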

  9. How Should we Combine Two or More Sources of Evidence?
  • Two different sources of evidence E1 and E2 are conditionally independent given the state of H, iff
    p(E1&E2|H) = p(E1|H)p(E2|H) and p(E1&E2|~H) = p(E1|~H)p(E2|~H)
  • Suppose p(H), p(E1|H) and p(E1|~H) are as before and
    E2 = ‘The sky is blue’; p(E2|H) = .02; p(E2|~H) = .5
  • Assuming conditional independence we can substitute into Bayes’ rule to determine that:
    p(H|E1&E2) = (.9 × .02 × .2) / (.9 × .02 × .2 + .01 × .5 × .8) ≈ .47
  • In the case of N sources of evidence, all conditionally independent given the state of H, we get:
    p(E|H) = ∏j p(Ej|H)
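
A sketch of the two-cue calculation under the conditional-independence assumption (the function name is hypothetical; the values are from the slide):

```python
# p(H|E1&E2) when E1 and E2 are conditionally independent given H (and given ~H).
def posterior_two_cues(p_h, p_e1_h, p_e1_nh, p_e2_h, p_e2_nh):
    num = p_e1_h * p_e2_h * p_h
    return num / (num + p_e1_nh * p_e2_nh * (1 - p_h))

# E1 = "the ground is wet", E2 = "the sky is blue"
print(posterior_two_cues(0.2, 0.9, 0.01, 0.02, 0.5))  # ≈ .47
```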

  10. Conditional Independence in the Generative Model of Letter Feature Displays
  • A letter is chosen for display
  • Features are then chosen for display independently for each letter, but with noise
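
A toy sketch of this kind of generative process: a letter is chosen, then each feature is sampled independently given that letter, with some noise. The letters, feature probabilities, and noise level below are made up for illustration only.

```python
import random

FEATURE_PROBS = {           # p(feature present | letter), illustrative values
    'A': [0.9, 0.9, 0.1],
    'H': [0.9, 0.1, 0.9],
}

def sample_display(noise=0.05):
    letter = random.choice(list(FEATURE_PROBS))
    features = []
    for p in FEATURE_PROBS[letter]:
        present = random.random() < p      # feature chosen independently given the letter
        if random.random() < noise:        # occasional noise flips the feature
            present = not present
        features.append(int(present))
    return letter, features

print(sample_display())
```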

  11. How this relates to connectionist units (or populations of neurons)
  • We treat the activation of the unit as corresponding to the instantaneous normalized firing rate of a neural population.
  • The baseline activation of the unit is thought to depend on a constant background input called its ‘bias’.
  • When other units are active, their influences are combined with the bias to yield a quantity called the ‘net input’.
  • The influence of a unit j on another unit i depends on the activation of j and the weight or strength of the connection to i from j.
  • Connection weights can be positive (excitatory) or negative (inhibitory).
  • These influences are summed to determine the net input to unit i:
    neti = biasi + Σj aj wij
    where aj is the activation of unit j, and wij is the strength of the connection to unit i from unit j.
  [Figure: input from unit j reaches unit i via the connection weight wij]
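
A minimal sketch of the net input computation; the bias and weight values below are the log odds from the rain example, chosen only for illustration (see the next slide for why those particular values are interesting):

```python
import math

# net_i = bias_i + sum_j a_j * w_ij
def net_input(bias_i, activations, weights):
    return bias_i + sum(a_j * w_ij for a_j, w_ij in zip(activations, weights))

bias_i = math.log(0.2 / 0.8)        # log prior odds
w = [math.log(0.9 / 0.01)]          # log likelihood ratio for one evidence unit
a = [1.0]                           # that evidence unit is active
print(net_input(bias_i, a, w))      # ≈ 3.11
```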

  12. A Unit’s Activation can Reflect P(H|E)
  • The activation of unit i given its net input neti is assumed to be given by:
    ai = exp(neti) / (1 + exp(neti))
  • This function is called the ‘logistic function’. It is usually written in the numerically identical form:
    ai = 1/[1 + exp(-neti)]
  • In the reading we showed that ai = p(Hi|E) iff
    aj = 1 when Ej is present, or 0 when Ej is absent;
    wij = log(p(Ej|Hi)/p(Ej|~Hi));
    biasi = log(p(Hi)/p(~Hi))
  • This assumes the evidence is conditionally independent given the state of H.
  [Figure: plot of the logistic function, ai as a function of neti]
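
A quick numerical check of this claim for the rain example (variable names are illustrative):

```python
import math

# With a_j = 1 (the ground is wet), w_ij = log(p(E|H)/p(E|~H)), and
# bias_i = log(p(H)/p(~H)), the logistic of the net input equals p(H|E).
bias_i = math.log(0.2 / 0.8)
w_ij = math.log(0.9 / 0.01)
net_i = bias_i + 1.0 * w_ij
a_i = 1.0 / (1.0 + math.exp(-net_i))   # logistic function
print(a_i)                             # 0.957..., the posterior from slide 4
```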

  13. Choosing between N alternatives
  • Often we are interested in cases where there are several alternative hypotheses (e.g., different directions of motion of a field of dots). Here we have a situation in which the alternatives to a given H, say H1, are the other hypotheses, H2, H3, etc.
  • In this case, the probability of a particular hypothesis given the evidence becomes:
    P(Hi|E) = p(E|Hi)p(Hi) / Σi' p(E|Hi')p(Hi')
  • The normalization implied here can be performed by computing net inputs as before but now setting each unit’s activation according to:
    ai = exp(neti) / Σi' exp(neti')
  • This normalization effect is approximated by lateral inhibition mediated by inhibitory interneurons (shaded unit in illustration).
  [Figure: H units receiving input from E units, with a shaded inhibitory interneuron]
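
A minimal sketch of this normalization (the net input values are arbitrary illustrations):

```python
import math

# a_i = exp(net_i) / sum_i' exp(net_i')  -- the softmax normalization
def softmax(net_inputs):
    exps = [math.exp(n) for n in net_inputs]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([2.0, 1.0, 0.1]))  # activations are positive and sum to 1, like p(Hi|E)
```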

  14. ‘Cue’ Integration in Monkeys
  • Salzman and Newsome (1994) combined two ‘cues’ to the perception of motion:
    • Partially coherent motion in a specific direction
    • Direct electrical stimulation of neurons in area MT
  • They measured the probability of choosing each direction with and without stimulation at different levels of coherence (next slide).

  15. Model used by S&N
  • S&N applied a model that is structurally identical to the one we have been discussing:
    Pj = exp(yj) / Σj' exp(yj')
    yj = bj + mj zj + gd x
    • bj = bias for direction j
    • mj = effect of micro-stimulation
    • zj = 1 if stimulation was applied, 0 otherwise
    • gd = support for j when motion is in that direction (d=1) or other more disparate directions (d=2,3,4,5)
    • x = motion coherence
  • Open circles above show the effect of presenting visual stimulation in one direction (using an intermediate coherence) together with electrical stimulation favoring a direction 135° away from the visual stimulus. The dip between the peaks rules out simple averaging of the directions cued by visual and electrical stimulation, but is approximately consistent with the Bayesian model (filled circles).
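
A sketch of this choice model in Python. Only the form of the equations (a softmax over yj = bj + mj·zj + gd·x) comes from the slide; all parameter values below are made up for illustration, and gd is supplied per direction as the support appropriate to its disparity from the visual stimulus.

```python
import math

# P_j = exp(y_j) / sum_j' exp(y_j'), with y_j = b_j + m_j*z_j + g_d*x
def choice_probabilities(biases, micro_effects, z, supports, x):
    y = [b + m * z + g * x for b, m, g in zip(biases, micro_effects, supports)]
    exps = [math.exp(v) for v in y]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical example: 8 motion directions, stimulation on (z = 1), coherence x = 0.3
probs = choice_probabilities(
    biases=[0.0] * 8,
    micro_effects=[0, 0, 0, 2.0, 0, 0, 0, 0],              # stimulation favors direction 4
    z=1,
    supports=[3.0, 1.0, 0.2, 0.0, 0.0, 0.0, 0.2, 1.0],     # g_d falls off with disparity from the visual direction
    x=0.3,
)
print([round(p, 2) for p in probs])   # two peaks: one visual, one stimulation-driven
```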
