Psyc 317: Cognitive Psychology
Lecture 8: Knowledge

Outline
  • Approaches to Categorization

– Definitions

– Prototypes

– Exemplars

• Is there a special level of category?

• Semantic Networks

• Connectionism

• Categories in the brain

Categorization is hierarchical

• So we have levels of categories

• How can all of this be represented in the mind?

• Semantic network approach

Collins & Quillian’s Model
  • Nodes are bits of information

• Links connect them together

(Figures: a semantic network template and a simple example network)
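A minimal sketch of this structure in Python (the node names, links, and properties are illustrative, not taken from the lecture's figures): each concept is a node with an "isa" link to its parent and a set of locally stored properties.

```python
# Toy Collins & Quillian-style semantic network: each node stores its own
# properties plus an "isa" link to the node one level up in the hierarchy.
network = {
    "animal":  {"isa": None,     "props": {"has skin", "can move"}},
    "bird":    {"isa": "animal", "props": {"can fly", "has feathers"}},
    "canary":  {"isa": "bird",   "props": {"can sing", "is yellow"}},
    "ostrich": {"isa": "bird",   "props": {"cannot fly", "is tall"}},
}
```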

Get more complicated!
  • Add properties to nodes (e.g., "canary" gets "can sing" and "is yellow")
How does the network work?
  • Example: Retrieve properties of canary
Why not store it all at the node?
  • To get "can fly" and "has feathers," you must travel up to bird

• Why not store it all at canary?

• Cognitive economy: storing shared properties at every node is inefficient, so each property is stored once, at the highest node it applies to

• Exceptions (e.g., the ostrich's "cannot fly") are stored at the exception node itself, as in the sketch below
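A hedged sketch of property retrieval under cognitive economy, reusing the `network` dict from the sketch above: the lookup checks the local node first (so a stored exception wins) and otherwise climbs the "isa" links, counting how far it had to travel.

```python
def lookup(network, node, prop):
    """Return (answer, links traversed) for a property query."""
    negation = prop.replace("can ", "cannot ", 1)  # e.g. "can fly" -> "cannot fly"
    steps = 0
    while node is not None:
        props = network[node]["props"]
        if negation != prop and negation in props:
            return False, steps        # exception stored at this node wins
        if prop in props:
            return True, steps
        node = network[node]["isa"]    # travel one link up the hierarchy
        steps += 1
    return None, steps                 # property not stored anywhere

print(lookup(network, "canary", "can fly"))   # (True, 1): stored one link up, at bird
print(lookup(network, "canary", "has skin"))  # (True, 2): stored at animal
print(lookup(network, "ostrich", "can fly"))  # (False, 0): local exception wins
```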

How do we know this works? Collins & Quillian (1969)
  • Ask participants to verify canary properties that require more link traversal (e.g., "A canary has skin," stored at animal) vs. properties stored at the canary node itself (e.g., "A canary can sing")

• Prediction: the more links traversed, the longer the verification time
Link Traversal Demo

Yes or no:

• A German Shepherd is a type of dog.

• A German Shepherd is a mammal.

• A German Shepherd barks.

• A German Shepherd has skin.
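A hypothetical sketch of the logic behind this demo: count the "isa" links between German Shepherd and the node where each fact is stored (the hierarchy and the property placements below are illustrative assumptions).

```python
# Where each node points in the hierarchy, and where each fact is stored
# under cognitive economy (as high up as it applies).
isa = {"German Shepherd": "dog", "dog": "mammal",
       "mammal": "animal", "animal": None}
stored_at = {"is a type of dog": "dog", "is a mammal": "mammal",
             "barks": "dog", "has skin": "animal"}

def links_to(start, target):
    """Count isa-links from start up to the node holding the fact."""
    steps, node = 0, start
    while node != target:
        node = isa[node]
        steps += 1
    return steps

for fact, node in stored_at.items():
    print(f"A German Shepherd {fact}: {links_to('German Shepherd', node)} link(s)")
# Verification should be fastest for facts stored fewer links away.
```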

Spreading Activation: Priming the Network
  • An activated node spreads activation along its links to connected nodes, priming them for faster retrieval
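A toy sketch of spreading activation (the graph and the decay rate are illustrative): activation starts at one node and weakens with every link it crosses, so nearby concepts end up primed while unrelated ones are untouched.

```python
# Toy undirected semantic graph.
graph = {
    "bread":  ["butter", "wheat"],
    "butter": ["bread", "cream"],
    "wheat":  ["bread"],
    "cream":  ["butter"],
    "nurse":  ["doctor"],
    "doctor": ["nurse"],
}

def spread(graph, source, start=1.0, decay=0.5):
    """Breadth-first spread: each link crossed halves the activation."""
    activation = {source: start}
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for neighbor in graph[node]:
                if neighbor not in activation:
                    activation[neighbor] = activation[node] * decay
                    nxt.append(neighbor)
        frontier = nxt
    return activation

print(spread(graph, "bread"))
# {'bread': 1.0, 'butter': 0.5, 'wheat': 0.5, 'cream': 0.25}
# "nurse" and "doctor" receive no activation: unrelated words are not primed.
```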
Spreading Activation Works: Meyer & Schvaneveldt (1971)
  • Lexical decision task: Are the two letter strings both words?

(Figure: example letter-string pairs, associated vs. unassociated)

Meyer & Schvaneveldt Results

  • Associated words prime each other: lexical decisions are faster for associated pairs

Collins & Quillian Criticisms
  • Typicality effect is not explained: ostrich and canary are both one link away from bird, yet typical members are verified faster

• Incongruent results (Rips et al., 1972): "animal" is farther from pig in the hierarchy than "mammal," yet it was verified faster:

– "A pig is a mammal": 1476 ms

– "A pig is an animal": 1268 ms

Collins & Loftus’ Model
  • No more hierarchy

• Shorter links between more connected concepts

(Dis)advantages of the model

“A fairly complicated theory with enough generality to apply to results from many different experimental paradigms.”

• This is bad. Why?

The model is unfalsifiable
  • The theory explains everything

– How long should the links between nodes be?

(Figure: Result A implies one arrangement of nodes and link lengths; Result B implies a different one.)

Everything is arbitrary
  • Cannot disprove theory: what does link length mean for the brain?

• You can make links as long or as short as you need to explain your results

Outline
  • Approaches to Categorization

– Definitions

– Prototypes

– Exemplars

• Is there a special level of category?

• Semantic Networks

• Connectionism

• Categories in the brain

Connectionism is a new version of semantic network theories
  • McClelland & Rumelhart (1986)

• Concepts are represented in networks with nodes and links

– But they function very differently from those in semantic networks

• Theory is biologically based

• A quick review of neurons…

Physiological Basis of Connectionism
  • Neural circuits: Processing happens between many neurons connected by synapses

• Excitatory and inhibitory connections: inputs can raise or lower the receiving neuron's firing rate

Physiological Basis of Connectionism
  • Strength of firing: the summed excitatory (+) and inhibitory (−) input onto a neuron determines its rate of firing

(Figure: a neuron receiving weighted inputs of +1.5, +0.2, and −0.75, and firing at 1.6.)
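A minimal sketch of this summation with illustrative numbers (the rectified-sum combination rule is an assumption; the slide does not specify exactly how inputs combine into a firing rate):

```python
def firing_rate(inputs):
    """Sum excitatory (+) and inhibitory (-) inputs; a neuron cannot
    fire at a negative rate, so clamp the sum at zero."""
    return max(0.0, sum(inputs))

print(firing_rate([1.5, 0.2, -0.75]))  # 0.95: net excitation, the unit fires
print(firing_rate([0.3, -1.0]))        # 0.0: inhibition wins, no firing
```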

Basics of Connectionism
  • Instead of nodes, you have units

– Units are “neuronlike processing units”

• Units are connected together

• Parallel Distributed Processing (PDP)

– Activation occurs in parallel

– Processing occurs in many units

Basic PDP network

(Figure: input units receive activation from the environment; hidden units do the processing; weighted links, e.g. a weight of 5.6, connect the layers; the output units carry the mental representation.)

how a pdp network works
How a PDP network works
  • Give the network stimuli via the input units
  • Information is passed through the network by hidden units

– Weights determine how much activation passes between units

  • Eventually, the stimulus is represented as a pattern of activation across the output units (sketched below)
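A hedged sketch of one forward pass through a tiny fully connected network (the sizes and random weights are made up; the sigmoid squashing function is a common choice in PDP models, not something the slide specifies):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: 4 input units, 3 hidden units, 2 output units.
W_hidden = rng.normal(size=(3, 4))   # weights: input -> hidden
W_output = rng.normal(size=(2, 3))   # weights: hidden -> output

stimulus = np.array([1.0, 0.0, 0.0, 1.0])  # activation from the environment
hidden = sigmoid(W_hidden @ stimulus)      # processing in the hidden units
output = sigmoid(W_output @ hidden)        # the output pattern
print(output)
```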
Example output
  • The brain represents things from the environment in its own code: the output pattern differs from the stimulus that produced it
PDP Learning: Stage 1
  • Give it input, get output
Learning: Error signals

• The output pattern is not the correct pattern

• Figure out what the difference is

– That difference is the error signal

• Use the error signal to fine-tune weights

• The error signal is sent backward through the network using backpropagation (see the sketch below)
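A toy sketch of the error signal and a single weight adjustment (the learning rate and the delta-rule update are assumptions; full backpropagation chains this same idea through the hidden layers):

```python
target = 1.0     # the correct output for this stimulus
output = 0.62    # what the network actually produced
error = target - output              # the error signal: 0.38

learning_rate = 0.1
incoming = 0.9                       # activation arriving over this connection
weight = 5.6
weight += learning_rate * error * incoming   # fine-tune the weight
print(round(weight, 4))              # 5.6342: a nudge in the right direction
```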

Learning: Stage 2
  • Back propagate error signal through network, adjust weights

(Figure: after backpropagation the connection weights from the earlier diagram have been adjusted, e.g. to 5.7 and 5.2.)
Learning: Stage 3, 4, 5… 1024
  • Now that weights are adjusted, give network same input
  • Lather, rinse, repeat until the error signal is 0 (the loop is sketched below)
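A hedged end-to-end sketch of the whole cycle on a one-layer network (patterns, learning rate, and the delta rule are illustrative; a real PDP model would backpropagate through hidden layers too):

```python
import numpy as np

stimulus = np.array([1.0, 0.0, 1.0])    # input pattern (illustrative)
target = np.array([1.0, 0.0])           # the correct output pattern

rng = np.random.default_rng(1)
W = rng.normal(scale=0.5, size=(2, 3))  # weights start out random
lr = 0.1

for trial in range(1, 2001):
    output = W @ stimulus                   # stage 1: forward pass
    error = target - output                 # compute the error signal
    W += lr * np.outer(error, stimulus)     # stage 2: adjust the weights
    if np.abs(error).max() < 1e-6:          # stages 3, 4, 5... repeat
        print(f"error signal ~ 0 after {trial} trials: network trained")
        break
```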
So this is learning?

• Repeated input and back propagation changes weights between units

• When error signal = 0, the network has learned the correct weights for that stimulus

– The network has been trained

So where is the knowledge?
  • Semantic networks

– One node has “canary” and is connected to “can fly” and “yellow”

• PDP networks

– A bunch of nodes together represent “canary” and another bunch represent “yellow”

– Distributed knowledge in neural circuits

PDP: The Good – Networks based on neurons
  • All nodes can do is fire (they are dumb)

• Knowledge is distributed amongst many nodes

• Sounds a lot like neurons and the brain!

• Emergence: Lots of little dumb things form one big smart thing

PDP: The Good – Networks are damage-resistant
  • “Lesion” the network by taking out nodes

• This damage does not totally take out the system

– Graceful degradation

• These networks can adapt to damage
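A toy illustration of graceful degradation (the network, stimulus, and lesion are all made up): zeroing out a few hidden units shifts the output a little instead of destroying it.

```python
import numpy as np

rng = np.random.default_rng(2)
W_hidden = rng.normal(size=(20, 10))      # 10 inputs -> 20 hidden units
W_output = rng.normal(size=(5, 20)) / 20  # 20 hidden -> 5 outputs
stimulus = rng.random(10)

def respond(lesion_mask):
    hidden = np.tanh(W_hidden @ stimulus) * lesion_mask
    return W_output @ hidden

intact = respond(np.ones(20))
mask = np.ones(20)
mask[:4] = 0                    # "lesion" 4 of the 20 hidden units
damaged = respond(mask)
print(np.abs(intact - damaged).mean())  # a modest shift, not total failure
```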

PDP: The Good – Learning can be generalized
  • Related concepts should activate many of the same nodes

– Robin and sparrow should share a lot of the same representation

• PDP networks capture this: similar inputs produce similar patterns of activation (see the sketch below)
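A quick illustration under made-up features and random weights: near-identical inputs (robin, sparrow) land on near-identical hidden patterns, while an unrelated input need not.

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(8, 6))     # 6 input features -> 8 hidden units

def hidden_pattern(x):
    return np.tanh(W @ x)

robin   = np.array([1.0, 1.0, 1.0, 0.9, 0.1, 0.0])  # illustrative features
sparrow = np.array([1.0, 1.0, 0.9, 1.0, 0.0, 0.1])
truck   = np.array([0.0, 0.1, 0.0, 0.0, 1.0, 1.0])

def similarity(a, b):
    a, b = hidden_pattern(a), hidden_pattern(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity(robin, sparrow))  # close to 1: largely shared representation
print(similarity(robin, truck))    # unrelated inputs need not overlap
```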

PDP: The Good – Successful computer models
  • Not just a theory, but can be programmed in a computer

• Computational modeling of the mind

– Object perception

– Recognizing words

PDP: The Bad – Cannot explain everything
  • More complex tasks cannot be explained

– Problem solving

– Language processing

• Limitation of computers?

– We have billions of neurons and trillions of connections

– PDP networks can’t support that many nodes (yet)

PDP: The Bad – Retroactive interference
  • Learning something new interferes with something already learned

• Example: Train network on “collie”

– Weights are perfectly adjusted for collie

• Give network “terrier”

– Network must change weights again for terrier

• Weights must change to accommodate both dogs (demonstrated below)
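A toy demonstration of the interference problem (made-up patterns, delta-rule training): after the network is retrained on "terrier" alone, its weights no longer reproduce "collie" correctly.

```python
import numpy as np

def train(W, stimulus, target, lr=0.5, trials=200):
    for _ in range(trials):
        W += lr * np.outer(target - W @ stimulus, stimulus)
    return W

collie  = (np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0]))  # (input, target)
terrier = (np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0]))

W = np.zeros((2, 3))
W = train(W, *collie)
print(W @ collie[0])    # [1. 0.]: collie is learned perfectly

W = train(W, *terrier)  # now train on terrier only
print(W @ terrier[0])   # [0. 1.]: terrier is learned...
print(W @ collie[0])    # [0.75 0.5]: ...but collie's output has drifted
```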

PDP: The Bad – Cannot explain rapid learning

• It does not take thousands of trials to remember that you parked in Lot K

– How does rapid learning occur?

• Two separate systems?

How the connectionists explain rapid learning

• Two separate systems

– Slow, PDP-style learning in the cortex

– Rapid learning in the hippocampus

Outline
  • Approaches to Categorization

– Definitions

– Prototypes

– Exemplars

• Is there a special level of category?

• Semantic Networks

• Connectionism

• Categories in the brain

Categories in the brain
  • Imaging studies have localized face and house areas

– Still not very exciting (“light-up” studies)

• Does this mean one brain area processes houses, another faces, another chairs, another technology, and so on?

Visual agnosia for categories
  • Damage to inferior temporal cortex causes inability to name certain objects

– Visual agnosia

• Double dissociation for living/nonliving things

Double Dissociation
  • Some patients cannot name living things but can name nonliving things; other patients show the reverse pattern
Living vs. Non-living?
  • fMRI studies have shown different brain areas for living and non-living things

• There is a lot of overlap for the two areas, though

• How damage produces category-specific deficits is not well understood

Category-specific neurons
  • Some neurons only respond to certain categories

• A “Bill Clinton” neuron? Probably not.

• A “Bill Clinton” neural circuit? More likely.

Not categories, but a continuum
  • There are probably no distinct face, house, chair, etc. areas in the brain

• But everything is not stored in one place, either

• A mix of overlapping areas and distributed processes

– Living vs. non-living is a big distinction
