
Psyc 317: Cognitive Psychology

Lecture 8: Knowledge



Outline

• Approaches to Categorization

  – Definitions

  – Prototypes

  – Exemplars

• Is there a special level of category?

• Semantic Networks

• Connectionism

• Categories in the brain



Categorization is hierarchical

• So we have levels of categories

• How can all of this be represented in the mind?

• Semantic network approach



Collins & Quillian’s Model

• Nodes are bits of information

• Links connect them together

[Figures: a semantic network template and a simple semantic network]



Get more complicated!

  • Add properties to nodes:



How does the network work?

  • Example: Retrieve properties of canary



Why not store it all at the node?

• To get “can fly” and “has feathers,” you must travel up to bird

• Why not put it all at canary?

• Cognitive economy: storing common properties at every node is inefficient

• More efficient to store exceptions like “cannot fly” only at the exception node (sketched below)
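A minimal Python sketch of this storage scheme, assuming a toy hierarchy (the concepts and properties below are illustrative, not taken from the original model). Properties live at the highest node they apply to, and exceptions stored lower down override them:

```python
# Toy Collins & Quillian-style network: each node stores only its
# distinctive properties plus an "isa" link to its parent category.
network = {
    "animal":  {"isa": None,     "props": {"has skin": True, "can move": True}},
    "bird":    {"isa": "animal", "props": {"has feathers": True, "can fly": True}},
    "canary":  {"isa": "bird",   "props": {"is yellow": True, "can sing": True}},
    "ostrich": {"isa": "bird",   "props": {"can fly": False, "is tall": True}},
}

def properties_of(concept):
    """Collect properties by traveling up the isa links (cognitive economy);
    values found lower down (exceptions) take precedence."""
    props = {}
    while concept is not None:
        for attr, value in network[concept]["props"].items():
            props.setdefault(attr, value)  # keep the more specific value
        concept = network[concept]["isa"]
    return props

print(properties_of("canary"))   # "can fly" inherited from bird, "has skin" from animal
print(properties_of("ostrich"))  # "can fly": False, stored at the exception node
```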



How do we know this works? Collins & Quillian (1969)

• Ask participants about canary properties that require more vs. less link traversal



Link Traversal Demo

Yes or no:

• A German Shepherd is a type of dog.

• A German Shepherd is a mammal.

• A German Shepherd barks.

• A German Shepherd has skin.
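The prediction behind this demo can be sketched by counting links. The isa chain and the levels at which the properties are stored below are illustrative guesses, not data:

```python
# Hypothetical isa chain for the demo statements.
isa = {"german shepherd": "dog", "dog": "mammal", "mammal": "animal"}

# Level at which each property is assumed to be stored.
property_level = {"barks": "dog", "has skin": "animal"}

def links_to(concept, target):
    """Count isa links from a concept up to a target category."""
    hops = 0
    while concept != target:
        concept = isa[concept]
        hops += 1
    return hops

# More links traversed -> slower predicted verification time.
print(links_to("german shepherd", "dog"))                       # 1
print(links_to("german shepherd", "mammal"))                    # 2
print(links_to("german shepherd", property_level["barks"]))     # 1
print(links_to("german shepherd", property_level["has skin"]))  # 3
```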



Collins & Quillian Results



Spreading Activation: Priming the Network

• An activated node spreads its activation along its links to connected nodes (sketched below)
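A rough sketch of this idea in Python. The link structure, decay factor, and cutoff threshold are illustrative assumptions, not parameters from the model:

```python
# Activating one node passes a decayed share of its activation along
# each link; activation fades with distance from the source.
links = {
    "bread":     ["butter", "wheat"],
    "butter":    ["bread", "margarine"],
    "wheat":     ["bread"],
    "margarine": ["butter"],
}

def spread(start, decay=0.5, threshold=0.1):
    activation = {start: 1.0}
    frontier = [start]
    while frontier:
        node = frontier.pop(0)
        passed = activation[node] * decay
        if passed < threshold:
            continue  # too weak to spread any further
        for neighbor in links.get(node, []):
            if passed > activation.get(neighbor, 0.0):
                activation[neighbor] = passed
                frontier.append(neighbor)
    return activation

# Nearby nodes end up primed: faster to respond to "butter" after "bread".
print(spread("bread"))  # {'bread': 1.0, 'butter': 0.5, 'wheat': 0.5, 'margarine': 0.25}
```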



Spreading Activation Works: Meyer & Schvaneveldt (1971)

  • Lexical decision task: Are the two letter strings both words?

[Figure: example word pairs, associated vs. unassociated]



Meyer & Schvaneveldt Results

• Associated words prime each other (faster lexical decision times)



Collins & Quillian Criticisms

• Typicality effect is not explained – ostrich and canary are both one link away from bird, yet typical members are verified faster

• Incongruent results (Rips et al., 1972):

  – “A pig is a mammal”: 1476 ms

  – “A pig is an animal”: 1268 ms

  – The model predicts the opposite: mammal is fewer links away than animal



Collins & Loftus’ Model

• No more hierarchy

• Shorter links connect more closely related concepts



(Dis)advantages of the model

“A fairly complicated theory with enough generality to apply to results from many different experimental paradigms.”

• This is bad. Why?



The model is unfalsifiable

• The theory explains everything – how long should links be between nodes?

[Figure: two different node layouts, each drawn post hoc to fit a different result]



Everything is arbitrary

• Cannot disprove the theory: what does link length mean for the brain?

• You can make links as long as you want/need to explain your results



Outline

• Approaches to Categorization

  – Definitions

  – Prototypes

  – Exemplars

• Is there a special level of category?

• Semantic Networks

• Connectionism

• Categories in the brain



Connectionism is a new version of semantic network theories

• McClelland & Rumelhart (1986)

• Concepts are represented in networks with nodes and links

  – But these function quite differently than in semantic networks

• Theory is biologically based

• A quick review of neurons…



Physiological Basis of Connectionism

• Neural circuits: processing happens between many neurons connected by synapses

• Excitatory and inhibitory connections:



Physiological Basis of Connectionism

• Strength of firing: the combined excitatory (+) and inhibitory (–) inputs onto a neuron determine its rate of firing

[Figure: a neuron summing excitatory (+1.5, +0.2) and inhibitory (–0.75) weighted inputs; the figure labels the resulting firing rate “Fires at 1.6”]
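A minimal sketch of this summation rule (the input values repeat the figure; the floor at a zero firing rate is an added illustrative assumption):

```python
# A unit's firing rate as the sum of its excitatory (+) and inhibitory (-)
# weighted inputs.
def firing_rate(inputs):
    net = sum(inputs)      # excitation adds, inhibition subtracts
    return max(0.0, net)   # a rate cannot go below zero

print(firing_rate([1.5, 0.2, -0.75]))  # 0.95
print(firing_rate([0.2, -0.75]))       # 0.0 (inhibition wins)
```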



Distributed Coding



Basics of Connectionism

• Instead of nodes, you have units

  – Units are “neuronlike processing units”

• Units are connected together

• Parallel Distributed Processing (PDP)

  – Activation occurs in parallel

  – Processing occurs in many units



Basic PDP network

[Figure: a basic PDP network – input units receive signals from the environment, hidden units do the processing, output units form the mental representation; connections between units carry weights (e.g., 5.6)]



How a PDP network works

• Give the network stimuli via the input units

• Information is passed through the network by hidden units

  – Weights affect the activation of units

• Eventually, the stimulus is represented as a pattern of activation across the output units (see the sketch below)
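A toy forward pass through such a network, sketched with NumPy. The layer sizes, random weights, and sigmoid activation are illustrative choices, not part of the original PDP specification:

```python
import numpy as np

# 3 input units -> 4 hidden units -> 2 output units.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(4, 3))  # input -> hidden weights
W_output = rng.normal(size=(2, 4))  # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

stimulus = np.array([1.0, 0.0, 1.0])   # input pattern from the environment
hidden = sigmoid(W_hidden @ stimulus)  # hidden units: weighted sums, in parallel
output = sigmoid(W_output @ hidden)    # output pattern = the representation
print(output)
```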



Example output

• Like the brain, the network represents different things from the environment with different activation patterns



PDP Learning: Stage 1

  • Give it input, get output



Learning: Error signals

• The output pattern is not the correct pattern

• Figure out what the difference is

– That difference is the error signal

• Use the error signal to fine-tune weights

• Error signal is sent back using back propagation



Learning: Stage 2

  • Back propagate error signal through network, adjust weights

[Figure: the network with connection weights adjusted (e.g., 5.7, 5.2)]



Learning: Stage 3, 4, 5… 1024

• Now that the weights are adjusted, give the network the same input

• Lather, rinse, repeat until the error signal is 0 (see the training-loop sketch below)
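The whole learning cycle can be sketched as a tiny training loop. The network size, learning rate, stopping threshold, and the stimulus/target patterns below are all arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(4, 3))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(2, 4))  # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

stimulus = np.array([1.0, 0.0, 1.0])  # e.g., "canary" as an input pattern
target = np.array([0.9, 0.1])         # the correct output pattern

for step in range(10_000):
    hidden = sigmoid(W1 @ stimulus)   # Stage 1: forward pass
    output = sigmoid(W2 @ hidden)
    error = target - output           # the error signal
    if np.max(np.abs(error)) < 0.01:  # error signal (nearly) 0: trained
        break
    # Back propagate: turn the error signal into weight adjustments.
    delta_out = error * output * (1 - output)
    delta_hid = (W2.T @ delta_out) * hidden * (1 - hidden)
    W2 += 0.5 * np.outer(delta_out, hidden)
    W1 += 0.5 * np.outer(delta_hid, stimulus)

print(step, output)  # how many repetitions it took, and the learned pattern
```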



So this is learning?

• Repeated input and back propagation change the weights between units

• When error signal = 0, the network has learned the correct weights for that stimulus

– The network has been trained



So where is the knowledge?

• Semantic networks

  – One node has “canary” and is connected to “can fly” and “yellow”

• PDP networks

  – A bunch of units together represent “canary” and another bunch represents “yellow”

  – Knowledge is distributed across neural circuits



PDP: The Good – Networks based on neurons

• All nodes can do is fire (they are dumb)

• Knowledge is distributed amongst many nodes

• Sounds a lot like neurons and the brain!

• Emergence: lots of little dumb things form one big smart thing



PDP: The Good – Networks are damage-resistant

• “Lesion” the network by taking out nodes

• This damage does not totally take out the system

  – Graceful degradation

• These networks can adapt to damage (see the sketch below)
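A sketch of such a “lesion,” under the assumption that knowledge is spread across many redundant hidden units: silencing a tenth of them blurs the output rather than destroying it. All sizes and weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n_hidden = 100
hidden = rng.uniform(0.4, 0.6, size=n_hidden)      # a distributed pattern
W_out = rng.normal(size=(2, n_hidden)) / n_hidden  # each unit contributes a little

intact = W_out @ hidden

lesioned = hidden.copy()
lesioned[:10] = 0.0          # "lesion": silence 10% of the units
damaged = W_out @ lesioned

# The output shifts a little but is not wiped out: graceful degradation.
print(intact, damaged)
```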



PDP: The Good – Learning can be generalized

• Related concepts should activate many of the same nodes

  – Robin and sparrow should share a lot of the same representation

• PDP networks can emulate this – similar inputs produce similar activation patterns



PDP: The Good – Successful computer models

• Not just a theory – PDP networks can be programmed on a computer

• Computational modeling of the mind

  – Object perception

  – Recognizing words



PDP: The Bad – Cannot explain everything

• More complex tasks cannot be explained

  – Problem solving

  – Language processing

• Limitation of computers?

  – The brain has billions of neurons (and trillions of synapses)

  – PDP networks can’t support that many nodes (yet)



PDP: The Bad – Retroactive interference

• Learning something new interferes with something already learned

• Example: Train the network on “collie”

  – Weights are perfectly adjusted for collie

• Give the network “terrier”

  – Network must change its weights again for terrier

• Weights must change to accommodate both dogs (see the sketch below)
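A sketch of this interference with a simple delta-rule network (the “collie” and “terrier” input patterns and target outputs are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(scale=0.1, size=(2, 3))  # one layer of adjustable weights

def train(stimulus, target, steps=200, lr=0.5):
    global W
    for _ in range(steps):
        error = target - W @ stimulus        # error signal
        W += lr * np.outer(error, stimulus)  # delta-rule weight update

collie = np.array([1.0, 1.0, 0.0]);  collie_target = np.array([1.0, 0.0])
terrier = np.array([1.0, 0.0, 1.0]); terrier_target = np.array([0.0, 1.0])

train(collie, collie_target)
print(W @ collie)   # ~[1, 0]: weights perfectly adjusted for collie

train(terrier, terrier_target)  # now learn the new dog...
print(W @ collie)   # collie's output has drifted: retroactive interference
```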



PDP: The Bad – Cannot explain rapid learning

• It does not take thousands of trials to remember that you parked in Lot K

– How does rapid learning occur?

• Two separate systems?



How the connectionists explain rapid learning

• Two separate systems:

  – PDP-style learning in the cortex

  – Rapid learning in the hippocampus



Outline

• Approaches to Categorization

  – Definitions

  – Prototypes

  – Exemplars

• Is there a special level of category?

• Semantic Networks

• Connectionism

• Categories in the brain



Categories in the brain

• Imaging studies have localized face and house areas

  – Still not very exciting (“light-up” studies)

• Does this mean one brain area processes houses, another faces, another chairs, another technology, etc.?



Visual agnosia for categories

• Damage to inferior temporal cortex causes an inability to name certain objects

  – Visual agnosia

• Double dissociation for living/nonliving things



Double Dissociation

• Double dissociation for living/nonliving things: some patients lose the ability to name only living things, others only non-living things



Living vs. Non-living?

  • fMRI studies have shown different brain areas for living and non-living things

    • There is a lot of overlap for the two areas, though

    • Damage for categories is not well understood



Category-specific neurons

• Some neurons only respond to certain categories

• A “Bill Clinton” neuron? Probably not.

• A “Bill Clinton” neural circuit? More likely.



Not categories, but continuum

• There are probably no distinct face, house, chair, etc. areas in the brain

• But everything’s not all stored in the same place, either

• A mix of overlapping areas and distributed processes

  – Living vs. non-living is a big distinction

