Presentation Transcript

Exploring subjective probability distributions using Bayesian statistics

Tom Griffiths

Department of Psychology

Cognitive Science Program

University of California, Berkeley


Perception is optimal

Körding & Wolpert (2004)

Bayesian models of cognition
  • What would an “ideal cognizer” do?
    • “rational analysis” (Anderson, 1990)
  • How can structured representations be combined with statistical inference?
    • graphical models, probabilistic grammars, etc.
  • What knowledge guides human inferences?
    • questions about priors and likelihoods
Exploring subjective distributions

Natural statistics in cognition

(joint work with Josh Tenenbaum)

Markov chain Monte Carlo with people

(joint work with Adam Sanborn)


Natural statistics

Prior distributions

Images of natural scenes

p(x)

Predicting the future

How often is Google News updated?

t = time since last update

ttotal = time between updates

What should we guess for ttotal given t?

Bayesian inference

p(ttotal | t) ∝ p(t | ttotal) p(ttotal)

posterior probability ∝ likelihood × prior

Bayesian inference

Assume t is a random sample from the interval 0 < t < ttotal, so the likelihood is p(t | ttotal) = 1/ttotal:

p(ttotal | t) ∝ p(t | ttotal) p(ttotal)

p(ttotal | t) ∝ (1/ttotal) p(ttotal)

posterior probability ∝ likelihood × prior
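As an illustration of this computation (a sketch added here, not taken from the talk), the posterior can be evaluated numerically: assume a power-law prior p(ttotal) ∝ ttotal^(-γ) with an illustrative exponent, multiply by the 1/ttotal likelihood on a grid of candidate values, and report the posterior median as the prediction.

```python
import numpy as np

def predict_ttotal(t, gamma=1.5, grid_max=1000.0, n=20000):
    """Posterior-median prediction of ttotal given an observed t.

    Assumes a power-law prior p(ttotal) ~ ttotal**(-gamma) and the
    random-sampling likelihood p(t | ttotal) = 1/ttotal for t < ttotal.
    gamma and grid_max are illustrative choices, not values from the talk.
    """
    ttotal = np.linspace(t, grid_max, n)       # ttotal must exceed the observed t
    prior = ttotal ** (-gamma)                 # power-law prior
    likelihood = 1.0 / ttotal                  # uniform-sampling likelihood
    posterior = prior * likelihood
    posterior /= posterior.sum()               # normalize on the grid
    cdf = np.cumsum(posterior)
    return ttotal[np.searchsorted(cdf, 0.5)]   # posterior median

print(predict_ttotal(t=30.0))   # predicted total span given t = 30 (arbitrary units)
```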

Evaluating human predictions
  • Different domains with different priors:
    • a movie has made $60 million [power-law]
    • your friend quotes from line 17 of a poem [power-law]
    • you meet a 78 year old man [Gaussian]
    • a movie has been running for 55 minutes [Gaussian]
    • a U.S. congressman has served for 11 years [Erlang]
  • Prior distributions derived from actual data
  • Use 5 values of t for each
  • People predict ttotal
[Results figure: people's predictions in each domain compared with Bayesian predictions based on the empirical prior, a parametric prior, and Gott's rule]

Exploring subjective distributions

Natural statistics in cognition

(joint work with Josh Tenenbaum)

Markov chain Monte Carlo with people

(joint work with Adam Sanborn)

Metropolis-Hastings algorithm (Metropolis et al., 1953; Hastings, 1970)

Step 1: propose a state (we assume a symmetric proposal)

Q(x(t+1) | x(t)) = Q(x(t) | x(t+1))

Step 2: decide whether to accept, with probability given by either the

Metropolis acceptance function: A = min(1, p(x(t+1)) / p(x(t)))

or the Barker acceptance function: A = p(x(t+1)) / (p(x(t+1)) + p(x(t)))
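A minimal sketch of the algorithm as described above (added here for illustration, not code from the talk), using a symmetric Gaussian proposal and either acceptance rule on an arbitrary one-dimensional target density:

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    """Toy unnormalized target density (an arbitrary Gaussian mixture)."""
    return np.exp(-0.5 * (x - 2) ** 2) + 0.5 * np.exp(-0.5 * (x + 2) ** 2)

def mh_sample(n_steps=5000, proposal_sd=1.0, rule="barker"):
    x = 0.0
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.normal(0.0, proposal_sd)   # symmetric proposal Q
        p_new, p_old = target(x_new), target(x)
        if rule == "metropolis":
            accept = min(1.0, p_new / p_old)       # Metropolis acceptance
        else:
            accept = p_new / (p_new + p_old)       # Barker acceptance
        if rng.random() < accept:
            x = x_new                              # move to the proposed state
        samples.append(x)
    return np.array(samples)

# Both acceptance rules leave the target distribution invariant.
print(mh_sample(rule="metropolis").mean(), mh_sample(rule="barker").mean())
```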

A task

Ask subjects which of two alternatives comes from a target category

Which animal is a frog?

Response probabilities

If people probability match to the posterior, the response probability is equivalent to the Barker acceptance function for the target distribution p(x|c).
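To make the connection concrete, here is a small simulation (an illustration under assumed values, not the actual experiment): a simulated respondent whose category distribution p(x|c) over a single stimulus parameter is Gaussian chooses between the current stimulus and a proposed alternative by probability matching, i.e., with the Barker probability; the sequence of chosen stimuli then behaves like MCMC samples from p(x|c).

```python
import numpy as np

rng = np.random.default_rng(1)

def p_category(x, mu=5.0, sd=1.0):
    """Simulated respondent's category distribution p(x | c) over one
    stimulus parameter; mu and sd are hypothetical values."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2)

def mcmc_with_people(n_trials=2000, proposal_sd=0.5):
    x = 0.0                                       # starting stimulus
    chain = []
    for _ in range(n_trials):
        x_prop = x + rng.normal(0.0, proposal_sd)  # alternative stimulus shown on this trial
        p_prop, p_cur = p_category(x_prop), p_category(x)
        # Probability matching: the proposal is chosen with the Barker probability
        if rng.random() < p_prop / (p_prop + p_cur):
            x = x_prop                             # chosen stimulus becomes the new state
        chain.append(x)
    return np.array(chain)

chain = mcmc_with_people()
print(chain[500:].mean(), chain[500:].std())  # should approach mu and sd after burn-in
```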

Collecting the samples

[Figure: a sequence of trials, each asking "Which is the frog?" (Trial 1, Trial 2, Trial 3, ...); the stimulus chosen on each trial becomes the state of the chain for the next trial]

Sampling from natural categories

Examined distributions for four natural categories: giraffes, horses, cats, and dogs

Stimuli were nine-parameter stick figures (Olman & Kersten, 2004)

Mean animals by subject

[Figure: mean stick-figure animals produced by each subject (S1-S8) for each category: giraffe, horse, cat, dog]

Marginal densities (aggregated across subjects)

Giraffes are distinguished by neck length, body height and body tilt

Horses are like giraffes, but with shorter bodies and nearly uniform necks

Cats have longer tails than dogs

Markov chain Monte Carlo with people
  • Probabilistic models can guide the design of experiments to measure psychological variables
  • Markov chain Monte Carlo can be used to sample from subjective probability distributions
    • category distributions (Metropolis-Hastings)
    • prior distributions (Gibbs sampling)
  • Effective for exploring large stimulus spaces in which the distribution is concentrated on a small region
Conclusion
  • Probabilistic models give us a way to explore the knowledge that guides people’s inferences
  • Basic problem for both cognition and perception: identifying subjective probability distributions
  • Two strategies:
    • natural statistics
    • Markov chain Monte Carlo with people