Presentation Transcript
Everyday inductive leaps: Making predictions and detecting coincidences

Tom Griffiths

Department of Psychology

Program in Cognitive Science

University of California, Berkeley

(joint work with Josh Tenenbaum, MIT)

[Figure: inferring a hypothesis (a cube) from data (a shaded hexagon)]
Inductive problems
  • Inferring structure from data
  • Perception
    • e.g. structure of 3D world from 2D visual data
[Figure: hypotheses (a fair coin vs. a two-headed coin) inferred from data (HHHHH)]
Inductive problems
  • Inferring structure from data
  • Perception
    • e.g. structure of 3D world from 2D visual data
  • Cognition
    • e.g. whether a process is random
Everyday inductive leaps
  • Inferences we make effortlessly every day
    • making predictions
    • detecting coincidences
    • evaluating randomness
    • learning causal relationships
    • identifying categories
    • picking out regularities in language
  • A chance to study induction in microcosm, and compare cognition to optimal solutions
Two everyday inductive leaps

Predicting the future

Detecting coincidences


Predicting the future

How often is Google News updated?

t = time since last update
t_total = time between updates

What should we guess for t_total given t?

Bayes' theorem

$$p(h \mid d) = \frac{p(d \mid h)\, p(h)}{\sum_{h' \in \mathcal{H}} p(d \mid h')\, p(h')}$$

where h is a hypothesis and d is data: the left-hand side is the posterior probability, p(d | h) the likelihood, p(h) the prior probability, and the denominator sums over the space of hypotheses.
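As a minimal sketch (with made-up numbers, not values from the talk), Bayes' theorem can be computed over a discrete hypothesis space exactly as the formula reads:

```python
# Bayes' theorem over a discrete hypothesis space:
# posterior(h) = p(d|h) p(h) / sum over h' of p(d|h') p(h')

def posterior(priors, likelihoods):
    """priors: {h: p(h)}, likelihoods: {h: p(d|h)} for the observed data d."""
    joint = {h: likelihoods[h] * priors[h] for h in priors}
    z = sum(joint.values())  # sum over the space of hypotheses
    return {h: p / z for h, p in joint.items()}

# Example with hypothetical priors: is a coin fair or two-headed,
# given data d = five heads in a row (HHHHH)?
priors = {"fair": 0.99, "two-headed": 0.01}
likelihoods = {"fair": 0.5 ** 5, "two-headed": 1.0}
post = posterior(priors, likelihoods)
```

Even with a strong prior on "fair", five heads in a row shift a noticeable amount of posterior probability toward "two-headed".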

Bayesian inference

p(t_total | t) ∝ p(t | t_total) p(t_total)
(posterior probability ∝ likelihood × prior)
Assuming t is a random sample from the interval 0 < t < t_total, the likelihood is p(t | t_total) = 1/t_total, giving

p(t_total | t) ∝ (1/t_total) p(t_total)
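As a numerical sketch (the discretization, grid bounds, and power-law exponent here are illustrative assumptions, not values from the talk), the posterior median prediction for t_total can be computed directly:

```python
import numpy as np

# Posterior over t_total given one observation t, with likelihood
# p(t | t_total) = 1/t_total for 0 < t < t_total and an assumed
# power-law prior p(t_total) proportional to t_total**(-gamma).

def predict_t_total(t, gamma=1.0, grid_max=10_000.0, grid_size=100_000):
    t_total = np.linspace(t, grid_max, grid_size)   # hypotheses consistent with t
    post = t_total ** (-1.0) * t_total ** (-gamma)  # likelihood times prior
    post /= post.sum()
    cdf = np.cumsum(post)
    return t_total[np.searchsorted(cdf, 0.5)]       # posterior median prediction

guess = predict_t_total(30.0)  # observe 30 units since the last update
```

With gamma = 1 (the uninformative case) the posterior median is approximately 2t, which matches Gott's rule.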

The effects of priors

Different kinds of priors p(t_total) are appropriate in different domains (e.g. wealth vs. height).
Evaluating human predictions
  • Different domains with different priors:
    • a movie has made $60 million[power-law]
    • your friend quotes from line 17 of a poem[power-law]
    • you meet a 78 year old man[Gaussian]
    • a movie has been running for 55 minutes[Gaussian]
    • a U.S. congressman has served 11 years[Erlang]
  • Prior distributions derived from actual data
  • Use 5 values of t for each
  • People predict t_total
[Figure: people's predictions compared with the empirical prior, Gott's rule, and a parametric prior]

Probability matching

[Figure: proportion of judgments below the predicted value vs. quantile of the Bayesian posterior distribution p(t_total | t_past)]

  • Average over all prediction tasks:
    • movie run times
    • movie grosses
    • poem lengths
    • life spans
    • terms in congress
    • cake baking times

Predicting the future
  • People produce accurate predictions for the duration and extent of everyday events
  • Strong prior knowledge
    • form of the prior (power-law or exponential)
    • distribution given that form (parameters)
  • Contrast with “base rate neglect”

(Kahneman & Tversky, 1973)

Two everyday inductive leaps

Predicting the future

Detecting coincidences


November 12, 2001: New Jersey lottery results were 5-8-7, the same day that American Airlines flight 587 crashed


"It could be that, collectively, the people in New York caused those lottery numbers to come up 911," says Henry Reed. A psychologist who specializes in intuition, he teaches seminars at the Edgar Cayce Association for Research and Enlightenment in Virginia Beach, VA.

"If enough people all are thinking the same thing, at the same time, they can cause events to happen," he says. "It's called psychokinesis."


The bombing of London

(Gilovich, 1991)


[Figure: successive returns of a comet separated by 76 and 75 years (Halley, 1752)]

The paradox of coincidences

How can coincidences simultaneously lead us to irrational conclusions and significant discoveries?

A common definition: Coincidences are unlikely events

“an event which seems so unlikely that it is worth telling a story about”

“we sense that it is too unlikely to have been the result of luck or mere chance”
Bayesian causal induction

Hypotheses: cause (a novel causal relationship exists) vs. chance (no such relationship exists)
Priors: p(cause), p(chance)
Data: d
Likelihoods: p(d | cause), p(d | chance)
Bayesian causal induction

                          Prior odds low    Prior odds high
Likelihood ratio high:    ?                 cause
Likelihood ratio low:     chance            ?

Bayesian causal induction

                          Prior odds low    Prior odds high
Likelihood ratio high:    coincidence       cause
Likelihood ratio low:     chance            ?

What makes a coincidence?

A coincidence is an event that provides evidence for causal structure, but not enough evidence to make us believe that structure exists: the likelihood ratio is high, but the prior odds are low, so the posterior odds are middling.

HHHHHHHHHH vs. HHTHTHTTHT

For HHHHHHHHHH: the prior odds are low, the likelihood ratio is high, and the posterior odds are middling.

Bayesian causal induction

Hypotheses: cause (C → E) vs. chance (C and E unrelated)
Priors: α (cause), 1 - α (chance)
Data: frequency of effect in presence of cause
Likelihoods: cause: 0 < p(E) < 1; chance: p(E) = 0.5
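A sketch of the likelihood ratio in this model, assuming (as one concrete option) a uniform prior over the unknown p(E) under the cause hypothesis:

```python
from math import comb

# chance: p(E) = 0.5, so a specific sequence of n flips has probability 0.5**n.
# cause: p(E) unknown in (0, 1); with a uniform prior over p(E), the marginal
# likelihood of a specific sequence with h heads in n flips is the integral of
# p**h * (1-p)**(n-h) over p, which equals 1 / ((n + 1) * comb(n, h)).

def likelihood_ratio(heads, tails):
    n = heads + tails
    p_cause = 1.0 / ((n + 1) * comb(n, heads))  # marginal likelihood under cause
    p_chance = 0.5 ** n                         # likelihood under chance
    return p_cause / p_chance

lr_all_heads = likelihood_ratio(10, 0)  # HHHHHHHHHH: high, favors cause
lr_mixed = likelihood_ratio(5, 5)       # HHTHTHTTHT: low, favors chance
```

Ten heads in a row give a likelihood ratio near 93, while the mixed sequence gives a ratio below 1, favoring chance.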

HHHHHHHHHH: prior odds are low, likelihood ratio is high, posterior odds are middling → coincidence

HHTHTHTTHT: prior odds are low, likelihood ratio is low, posterior odds are low → chance

Empirical tests
  • Is this definition correct?
    • from coincidence to evidence
  • How do people assess complex coincidences?
    • the bombing of London
    • coincidences in date
HHHHHHHHHH: prior odds are low, likelihood ratio is high, posterior odds are middling → coincidence

HHHHHHHHHHHHHHHHHHHHHH: prior odds are low, likelihood ratio is very high, posterior odds are high → cause

From coincidence to evidence
  • The transition from coincidence to evidence for a causal relation is produced by
    • an increase in the likelihood ratio (e.g., coin flipping)
    • an increase in the prior odds (e.g., genetics vs. ESP)

Testing the definition
  • Provide participants with data from experiments
  • Manipulate:
    • cover story: genetics vs. ESP (prior)
    • data: number of heads/males (likelihood)
    • task: “coincidence or evidence?” vs. “how likely?”
  • Predictions:
    • coincidences affected by prior and likelihood
    • relationship between coincidence and posterior
[Figure: proportion of “coincidence” responses and posterior probability as a function of the number of heads/males (47, 51, 55, 59, 63, 70, 87, 99); r = -0.98]

Empirical tests
  • Is this definition correct?
    • from coincidence to evidence
  • How do people assess complex coincidences?
    • the bombing of London
    • coincidences in date
Complex coincidences
  • Many coincidences involve structure hidden in a sea of noise (e.g., bombing of London)
  • How well do people detect such structure?
  • Strategy: examine correspondence between strength of coincidence and likelihood ratio
[Figure: people's judgments for changes in the number, ratio, location, and spread of points, relative to uniform]

Bayesian causal induction

Hypotheses: cause (locations drawn from uniform + regularity) vs. chance (uniform)
Priors: α (cause), 1 - α (chance)
Data: bomb locations
Likelihoods: uniform + regularity vs. uniform
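A sketch of this likelihood ratio for one-dimensional locations (the mixture weight, Gaussian spread, and target-location grid are illustrative assumptions, not the model's actual parameters):

```python
import math

# chance: locations uniform on [0, 1] (density 1 per point).
# cause: mixture of uniform and a Gaussian "regularity" centered on an
# unknown target location mu, marginalized over a grid of mu values.

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(points, sigma=0.05, weight=0.5, grid=200):
    p_chance = 1.0  # uniform likelihood of the data on [0, 1]
    p_cause = 0.0
    for i in range(grid):  # marginalize over the target location mu
        mu = (i + 0.5) / grid
        like = 1.0
        for x in points:
            like *= (1 - weight) + weight * normal_pdf(x, mu, sigma)
        p_cause += like / grid
    return p_cause / p_chance

clustered = [0.48, 0.49, 0.50, 0.51, 0.52]  # looks like a regularity
dispersed = [0.10, 0.30, 0.50, 0.70, 0.90]  # looks uniform
```

Clustered points yield a likelihood ratio well above 1; evenly spread points yield one below 1.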

[Figure: people vs. Bayes for changes in the number, ratio, location, and spread of points, relative to uniform; r = 0.98]

[Figure: successive returns of a comet separated by 76 and 75 years]

Coincidences in date

May 14, July 8, August 21, December 25

vs.

August 3, August 3, August 3, August 3

Bayesian causal induction

Hypotheses: cause (birthdays drawn from uniform + regularity, e.g. August) vs. chance (uniform)
Priors: α (cause), 1 - α (chance)
Data: birthdays of those present
Likelihoods: uniform + regularity vs. uniform

[Figure: people vs. Bayes for regularities in date: proximity in date, same day of month, same month]

Coincidences
  • Provide evidence for causal structure, but not enough to make us believe that structure exists
  • Intimately related to causal induction
    • an opportunity to revise a theory
    • a window on the process of discovery
  • Guided by a well calibrated sense of when an event provides evidence of causal structure
The paradox of coincidences

Status of current theory    Consequence
false                       significant discovery
true                        false conclusion

The utility of attending to coincidences depends upon how much you know already.

Two everyday inductive leaps

Predicting the future

Detecting coincidences

Subjective randomness
  • View randomness as an inference about generating processes behind data
  • Analysis similar (but inverse) to coincidences
    • randomness is evidence against a regular generating process

(Griffiths & Tenenbaum, 2003)

Other cases of causal induction

(Griffiths, Baraff, & Tenenbaum, 2004)

Aspects of language acquisition

(Goldwater, Griffiths, & Johnson, 2006)

Categorization

[Figure: probability distribution over a stimulus dimension x]

(Sanborn, Griffiths, & Navarro, 2006)

Conclusions
  • We can learn about cognition (and not just perception) by thinking about optimal solutions to computational problems
  • We can study induction using the inferences that people make every day
  • Bayesian inference offers a way to understand these inductive inferences
Magic tricks

Magic tricks are regularly used to identify infants’ ontological commitments

Can we use a similar method with adults?

(Wynn, 1992)

What's a better magic trick?

[Figure: ten objects ordered by applicable predicates: milk, water, a brick, a vase, a rose, a daffodil, a dove, a blackbird, a man, a girl]

  • Participants rate the quality of 45 transformations, 10 appearances, and 10 disappearances
    • direction of transformation is randomized between subjects
  • A second group rates similarity
  • Objects are chosen to lie at different points in a hierarchy


Ontological asymmetries

[Figure: asymmetries in rated trick quality across the object hierarchy]

Analyzing asymmetry
  • Build a regression model:
    • similarity
    • appearing object
    • disappearing object
    • contains people
    • direction in hierarchy (-1,0,1)
  • All factors significant
  • Explains 90.9% of variance
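The analysis can be sketched as an ordinary least-squares regression. The predictors and data below are synthetic stand-ins, not the study's (the real model also includes appearing- and disappearing-object factors):

```python
import numpy as np

# Synthetic illustration of a linear regression like the one described:
# rated trick quality predicted from similarity, a contains-people flag,
# and direction in the hierarchy (-1, 0, 1). All numbers are made up.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    np.ones(n),                 # intercept
    rng.uniform(0, 1, n),       # similarity of the two objects
    rng.integers(0, 2, n),      # contains people (0/1)
    rng.integers(-1, 2, n),     # direction in hierarchy (-1, 0, 1)
])
true_beta = np.array([3.0, -2.0, 1.5, 0.8])  # hypothetical coefficients
y = X @ true_beta + rng.normal(0, 0.1, n)    # synthetic ratings

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
```

With low noise the fit recovers the generating coefficients closely; the study reports that all factors are significant and the model explains 90.9% of the variance.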


Summary: magic tricks
  • Certain factors reliably influence the estimated quality of a magic trick
  • Magic tricks might be a way to investigate our ontological assumptions
    • inviolable laws that are otherwise hard to assess
  • A Bayesian theory of magic tricks?
    • strong evidence for a novel causal force
    • causal force is given low prior probability
A reformulation: unlikely kinds
  • Coincidences are events of an unlikely kind
    • e.g. a sequence with that number of heads
  • Deals with the obvious problem...

P(10 heads) < P(5 heads, 5 tails)

Problems with unlikely kinds
  • Defining kinds

August 3, August 3, August 3, August 3

January 12, March 22, March 22, July 19, October 1, December 8

Problems with unlikely kinds
  • Defining kinds
  • Counterexamples

HHHH > HHTT
P(4 heads) < P(2 heads, 2 tails)

HHHH > HHHHTHTTHHHTHTHHTHTTHHH
P(4 heads) > P(15 heads, 8 tails)
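These kind probabilities are easy to check directly (n fair flips, kind = number of heads):

```python
from math import comb

def p_kind(heads, n):
    """Probability of getting exactly `heads` heads in n fair coin flips."""
    return comb(n, heads) * 0.5 ** n

p_4h = p_kind(4, 4)       # P(4 heads in 4 flips)   = 1/16 = 0.0625
p_2h2t = p_kind(2, 4)     # P(2 heads, 2 tails)     = 6/16 = 0.375
p_15h8t = p_kind(15, 23)  # P(15 heads in 23 flips) is about 0.058
```

So HHHH belongs to a rarer kind than HHTT, but a more common kind than the 23-flip sequence, even though people judge HHHH the stronger coincidence in both comparisons.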

Sampling from categories

[Figure: frog distribution P(x|c)]

Markov chain Monte Carlo
  • Sample from a target distribution P(x) by constructing a Markov chain for which P(x) is the stationary distribution
  • The Markov chain converges to its stationary distribution, providing outcomes that can be used similarly to samples
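A minimal sketch of the Metropolis-Hastings version with a symmetric Gaussian proposal; the target here is an assumed standard normal, not a distribution from the talk:

```python
import math
import random

# Metropolis-Hastings with a symmetric proposal: propose x' ~ N(x, sd),
# accept with probability A(x, x') = min(1, p(x') / p(x)).

def metropolis_hastings(log_p, x0=0.0, steps=20_000, proposal_sd=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, proposal_sd)
        if rng.random() < math.exp(min(0.0, log_p(x_new) - log_p(x))):
            x = x_new          # accept the proposal
        samples.append(x)      # on rejection, the current state repeats
    return samples

# Target: standard normal, log p(x) = -x**2 / 2 up to an additive constant
samples = metropolis_hastings(lambda x: -0.5 * x * x)
```

After convergence, the chain's outputs behave like (correlated) samples from the target: their mean is near 0 and their variance near 1.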
Metropolis-Hastings algorithm

[Figure: proposals on a target density p(x), with example acceptance probabilities A(x(t), x(t+1)) = 0.5 and A(x(t), x(t+1)) = 1]

A task

Ask subjects which of two alternatives comes from a target category

Which animal is a frog?

Collecting the samples

[Figure: a sequence of trials (Trial 1, Trial 2, Trial 3, ...), each asking “Which is the frog?”]

Sampling from natural categories

Examined distributions for four natural categories: giraffes, horses, cats, and dogs

Presented stimuli with nine-parameter stick figures (Olman & Kersten, 2004)

Mean animals by subject

[Figure: mean giraffe, horse, cat, and dog stick figures for subjects S1–S8]

Markov chain Monte Carlo with people
  • Rational models can guide the design of psychological experiments
  • Markov chain Monte Carlo (and other methods) can be used to sample from subjective probability distributions
    • category distributions
    • prior distributions