Psyc 317: Cognitive Psychology. Lecture 8: Knowledge.
Outline:
• Approaches to Categorization – Definitions – Prototypes – Exemplars
• Is there a special level of category?
• Semantic Networks
• Connectionism
• Categories in the brain
Categorization is hierarchical.
• So we have levels of categories
• How can all of this be represented in the mind?
• Semantic network approach
• Links connect them together
Semantic network template
Simple semantic network
• Why not put it all at canary?
• Cognitive economy: Repeating common properties at every node is inefficient – store them once at the highest node that applies
• More efficient to put “cannot fly” at exception nodes
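To make cognitive economy concrete, here is a minimal sketch (not from the lecture) of a Collins-and-Quillian-style network in Python: shared properties are stored once at the highest node that applies, exceptions are stored locally, and verifying a statement means walking up the “is-a” links. The node names, properties, and the “cannot X” convention are all illustrative.

# Minimal semantic network: each node lists its parent ("is-a" link)
# and the properties stored locally at that node.
NETWORK = {
    "animal":  {"isa": None,     "props": {"has skin", "can move", "eats"}},
    "bird":    {"isa": "animal", "props": {"has wings", "can fly", "has feathers"}},
    "canary":  {"isa": "bird",   "props": {"can sing", "is yellow"}},
    "ostrich": {"isa": "bird",   "props": {"is tall", "cannot fly"}},  # exception stored locally
}

def verify(concept, prop):
    """Return (True/False, links traversed) by walking up the is-a chain."""
    links, node = 0, concept
    while node is not None:
        entry = NETWORK[node]
        if prop in entry["props"]:
            return True, links
        # An explicit "cannot X" at a node overrides "can X" inherited from above.
        if prop.startswith("can ") and ("cannot " + prop[4:]) in entry["props"]:
            return False, links
        node = entry["isa"]
        links += 1
    return False, links

if __name__ == "__main__":
    print(verify("canary", "can sing"))   # (True, 0)  stored at the canary node
    print(verify("canary", "can fly"))    # (True, 1)  inherited from bird
    print(verify("canary", "has skin"))   # (True, 2)  inherited from animal
    print(verify("ostrich", "can fly"))   # (False, 0) the exception node wins

The number of links traversed stands in for verification time, which is why “A German Shepherd has skin” (stored up at the animal level) should take longer to verify than “A German Shepherd barks” (stored near the dog level), as in the quiz below.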
Yes or no:
• A German Shepherd is a type of dog.
• A German Shepherd is a mammal.
• A German Shepherd barks.
• A German Shepherd has skin.
• Associated words prime each other
• Results that contradict the hierarchical-distance prediction (Rips et al., 1972):
– A pig is a mammal 1476 ms
– A pig is an animal 1268 ms
• Shorter links connect more strongly associated concepts – explaining why “pig is an animal” is verified faster
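One hedged way to picture that fix is to let each link carry its own length (association strength) instead of counting hierarchy levels. In the sketch below the link lengths, intercept, and slope are invented numbers chosen only to reproduce the ordering in the Rips et al. data, not values from the lecture.

# Illustrative link "lengths": shorter = stronger association.
LINK_LENGTH = {
    ("pig", "mammal"): 1.4,   # pigs are rarely thought of as mammals -> long link
    ("pig", "animal"): 0.8,   # "pig is an animal" is a strong association -> short link
}

def predicted_rt(concept, category, base_ms=900, ms_per_unit=400):
    """Verification time grows with link length, not with hierarchy depth."""
    return base_ms + ms_per_unit * LINK_LENGTH[(concept, category)]

for pair in [("pig", "mammal"), ("pig", "animal")]:
    print(pair, predicted_rt(*pair), "ms")
# ('pig', 'mammal') 1460.0 ms  -> slower, matching the observed ordering
# ('pig', 'animal') 1220.0 ms  -> faster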
“A fairly complicated theory with enough generality to apply to results from many different experimental paradigms.”
• This is bad. Why?
Result A says nodes look like this; Result B says nodes look like that.
• You can make links as long or as short as you need to explain any result – which makes the theory hard to falsify
• Concepts are represented in networks with nodes and links
– But they function very differently from those in semantic networks
• Theory is biologically based
• A quick review of neurons…
• Excitatory and inhibitory connections:
Diagram: a unit that fires at 1.6.
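As a rough sketch of what excitatory and inhibitory connections do, the unit below sums positively and negatively weighted inputs and fires only above a threshold. The weights, inputs, and the reading of 1.6 as the summed input are illustrative assumptions, not numbers from the lecture.

def unit_output(inputs, weights, threshold=1.0):
    """Sum excitatory (+) and inhibitory (-) weighted inputs; fire if above threshold."""
    net = sum(i * w for i, w in zip(inputs, weights))
    return net if net > threshold else 0.0   # simple threshold-linear unit

# Two excitatory inputs and one inhibitory input (weights are made up):
inputs  = [1.0, 1.0, 1.0]
weights = [1.2, 0.9, -0.5]   # negative weight = inhibitory connection
print(unit_output(inputs, weights))  # ~1.6 -> the unit "fires at 1.6"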
– Units are “neuronlike processing units”
• Units are connected together
• Parallel Distributed Processing (PDP)
– Activation occurs in parallel
– Processing occurs in many units
– Input units receive activation from the environment
– Weights affect activation of nodes
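Putting those pieces together, here is a minimal sketch (illustrative only, not the lecture’s model) of activation arriving from the environment at input units and flowing in parallel, through made-up connection weights, to a layer of hidden units.

import math

def layer(activations, weights):
    """Propagate activation in parallel: every unit in the next layer
    sums its weighted inputs and squashes the result to 0..1."""
    return [
        1 / (1 + math.exp(-sum(a * w for a, w in zip(activations, row))))
        for row in weights
    ]

# Activation arriving "from the environment" at three input units:
input_units = [1.0, 0.0, 1.0]

# Made-up weights: one row per receiving unit, one column per sending unit.
weights_in_to_hidden = [
    [0.8, -0.3, 0.5],
    [-0.6, 0.9, 0.2],
]

print(layer(input_units, weights_in_to_hidden))  # activations of the two hidden units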
• At first, the output pattern is not the correct pattern
• Figure out what the difference is
– That difference is the error signal
• Use the error signal to fine-tune weights
• Error signal is sent back using back propagation
• Repeated input and back propagation changes weights between units
• When error signal = 0, the network has learned the correct weights for that stimulus
– The network has been trained
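The learning loop just described can be sketched as follows: compare the network’s output to the correct pattern, treat the difference as the error signal, and use it to fine-tune the weights until the error is near zero. This toy version trains a single layer with a plain delta rule rather than full multi-layer back propagation, and the input pattern, target, and learning rate are invented for the example; only the error-driven loop mirrors the slides.

# Toy error-driven learning: repeated input -> error signal -> weight changes.
input_pattern   = [1.0, 0.0, 1.0]        # e.g., features coding one stimulus
correct_pattern = [1.0, 0.0]             # the output pattern we want
weights = [[0.1, 0.1, 0.1],              # one row of weights per output unit
           [0.1, 0.1, 0.1]]
LEARNING_RATE = 0.2

for trial in range(200):
    # Forward pass: each output unit sums its weighted inputs.
    output = [sum(w * x for w, x in zip(row, input_pattern)) for row in weights]
    # Error signal: difference between the correct and the actual output.
    errors = [c - o for c, o in zip(correct_pattern, output)]
    if all(abs(e) < 1e-3 for e in errors):
        print(f"trained after {trial} trials")   # error signal ~ 0: network is trained
        break
    # Feed the error back to fine-tune each weight.
    for row, err in zip(weights, errors):
        for i, x in enumerate(input_pattern):
            row[i] += LEARNING_RATE * err * x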
• Semantic networks
– One node represents “canary” and is connected to “can fly” and “yellow”
• PDP networks
– A bunch of nodes together represent “canary” and another bunch represent “yellow”
– Distributed knowledge in neural circuits
• Knowledge is distributed amongst many nodes
• Sounds a lot like neurons and the brain!
• Emergence: Lots of little dumb things form one big smart thing
• Damage to some units does not totally take out the system
– Graceful degradation
• These networks can adapt to damage
– Robin and sparrow should share a lot of the same representation
• PDP networks can emulate this – similar inputs produce similar patterns of activation (see the sketch below)
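A hedged illustration of both points – similar concepts sharing much of a distributed representation, and graceful degradation when some units are knocked out – might look like this. The activation vectors are made up for the example.

import random

# Made-up distributed representations: each concept is a pattern of
# activation across the same eight units, not a single dedicated node.
PATTERNS = {
    "robin":   [0.9, 0.8, 0.1, 0.7, 0.2, 0.9, 0.1, 0.6],
    "sparrow": [0.8, 0.9, 0.2, 0.6, 0.1, 0.8, 0.2, 0.5],
    "collie":  [0.1, 0.2, 0.9, 0.1, 0.3, 0.1, 0.9, 0.8],
}

def similarity(a, b):
    """Cosine similarity between two activation patterns."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def damage(pattern, n_units, rng=random.Random(0)):
    """Knock out n randomly chosen units (set their activation to 0)."""
    out = list(pattern)
    for i in rng.sample(range(len(out)), n_units):
        out[i] = 0.0
    return out

# Robin and sparrow share most of their representation; collie does not.
print(similarity(PATTERNS["robin"], PATTERNS["sparrow"]))  # high
print(similarity(PATTERNS["robin"], PATTERNS["collie"]))   # low

# Damaging two of the eight units degrades, but does not destroy, the match.
damaged_robin = damage(PATTERNS["robin"], 2)
print(similarity(damaged_robin, PATTERNS["robin"]))         # still fairly high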
• Computational modeling of the mind
– Object perception
– Recognizing words
– Problem solving
– Language processing
• Limitation of computers?
– We have billions of neurons and trillions of synaptic connections
– PDP networks can’t support that many nodes (yet)
Example: Train network on “collie”
– Weights are perfectly adjusted for collie
• Give network “terrier”
– Network must change weights again for terrier
• Weights must change gradually, across many trials, to accommodate both dogs
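The collie/terrier problem can be demonstrated with the same kind of toy delta-rule learner as above (a simplification of real PDP training, with made-up feature patterns): training to perfection on one dog and then on the other disturbs the first dog’s weights.

# Toy demonstration of interference when one set of weights must serve two stimuli.
collie  = ([1.0, 0.0, 1.0, 0.0], 1.0)   # made-up feature pattern -> desired output
terrier = ([1.0, 1.0, 0.0, 0.0], 0.0)

def output(weights, x):
    return sum(w * xi for w, xi in zip(weights, x))

def train(weights, x, target, lr=0.2, trials=100):
    """Delta-rule training on a single stimulus until its error is tiny."""
    for _ in range(trials):
        err = target - output(weights, x)
        weights = [w + lr * err * xi for w, xi in zip(weights, x)]
    return weights

w = [0.0, 0.0, 0.0, 0.0]
w = train(w, *collie)                            # weights now nearly perfect for collie
print(abs(collie[1] - output(w, collie[0])))     # ~0.0

w = train(w, *terrier)                           # retrain the same weights on terrier
print(abs(collie[1] - output(w, collie[0])))     # collie performance has degraded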
• It does not take thousands of trials to remember that you parked in Lot K
– How does rapid learning occur?
• Two separate systems?
• Two separate systems
PDP in the Cortex: slow, gradual learning
Rapid learning in the hippocampus
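One way to picture the two-systems idea (a sketch under my own assumptions, not the lecture’s model): a fast, hippocampus-like store that can hold “parked in Lot K” after a single trial, alongside a slow, cortex-like PDP learner whose weights move only a little on each exposure. The class names and numbers are illustrative.

class FastMemory:
    """Hippocampus-like store: one exposure is enough (illustrative only)."""
    def __init__(self):
        self.episodes = {}
    def learn(self, cue, fact):
        self.episodes[cue] = fact          # learned in a single trial
    def recall(self, cue):
        return self.episodes.get(cue)

class SlowLearner:
    """Cortex-like learner: weights move a small step on every exposure."""
    def __init__(self, size, lr=0.05):
        self.weights = [0.0] * size
        self.lr = lr
    def learn(self, features, target):
        err = target - sum(w * f for w, f in zip(self.weights, features))
        self.weights = [w + self.lr * err * f
                        for w, f in zip(self.weights, features)]

fast = FastMemory()
fast.learn("where did I park today?", "Lot K")
print(fast.recall("where did I park today?"))   # "Lot K" after one trial

slow = SlowLearner(size=3)
for _ in range(500):                            # gradual, repeated exposure
    slow.learn([1.0, 0.0, 1.0], 1.0)
print(slow.weights)                             # only now close to the right values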
– Imaging results on their own are still not very exciting (“light-up” studies)
• Does this mean one brain area processes houses, another heads, another chairs, another technology, and so on?
– Visual agnosia
• Double dissociation for living/nonliving things
• There is a lot of overlap between the two areas, though
• How damage maps onto specific categories is not well understood
• A “Bill Clinton” neuron? Probably not.
• A “Bill Clinton” neural circuit? More likely.
• But knowledge is not all stored in one place, either
• A mix of overlapping areas and distributed processes
– Living vs. non-living is a big distinction