PDP: Motivation, basic approach

Cognitive psychology, or “How the Mind Works”

slide3

[Figure: information processing as perception / sensation → mental representations → transformations → action]

Key questions:
  • What are the mental representations?
  • What are the transformations and how do they work?
  • Where do these come from?
Answers circa 1979
  • Mental representations are data structures (symbols bound together in relational structures).
  • “Transformation” processes are rules, like the instructions in a computer program.
  • The rules and starting representations are largely innate.
  • Learning is a process of building up new structured representations from primitives and deriving (and storing) new rules.
slide7

Their application is highly intuitive, logical, maybe even obvious!

E.g., modular, serial processing in word recognition…

slide8

Explains generalization / generativity, central to human cognition!

E.g.: Today I will frump the car; yesterday I ________ the car.

E.g.: Colorless green ideas sleep furiously!

slide10

Explains modular cognitive impairments!

Pure alexia: Word representations gone!

Prosopagnosia: Face reps gone!

Category-specific impairment: Animals gone!

Broca’s aphasia: Phonological word forms gone!

And so on…

slide15

Cognitive impairments not so modular…

Pure alexia: Can’t see high spatial frequencies.

Prosopagnosia: Abnormal visual word recognition.

Category-specific impairment: Can still name half the animals.

Broca’s aphasia: Can still produce single concrete nouns, can’t do grammar.

And so on…

But these approaches also had a lot of other problems…

Some issues with modular, serial, symbolic approaches
  • Constraint satisfaction, context sensitivity
  • Quasi-regularity
  • Learning and developmental change
  • Graceful degradation
Rumelhart
  • Brains are different from serial digital computers.
  • Brain:
    • Neurons are like little information processors
    • They are slow and intrinsically noisy…
    • …but there are many of them and they behave in parallel, not in serial.
    • It turns out that noisy, parallel computing systems are naturally suited to some of the kinds of behavioral tasks that challenged symbolic theories of the time.
In other words…
  • Rather than thinking of the mind as some kind of unconstrained “universal computer,” maybe we should pay attention to the kinds of computations a noisy, parallel, brain-like system can naturally conduct.
  • Paying attention to the implementation might offer some clues about / constraints on theories about how the mind works.
  • Added bonus: Such theories might offer a bridge between cognition and neuroscience!
slide19

[Figure: Golgi-stained neuron with labels: cell body; dendrites (receive signals); axon (transmits signals); axon terminal]

slide20

[Figure: arrangement of positive (+) and negative (−) charges]

slide21

[Figure: arrangement of positive (+) and negative (−) charges, continued]

slide24

[Plot: p(firing a spike) as a function of membrane potential at the axon hillock, from hyperpolarized to depolarized]

slide25

Input: depends on the activation of the sending neurons and the efficacy of the synapses.

Output: a train of action potentials at a particular rate.

Weight: the effect on the downstream neuron.

Six elements of connectionist models:
  • A set of units
  • A weight matrix
  • An input function
  • A transfer function
  • A model environment
  • A learning rule
Six elements of connectionist models:
  • A set of units
    • Each unit is like a population of cells with a similar receptive field.
    • Think of all units in a model as a single vector, with each unit corresponding to one element of the vector.
    • At any point in time, each unit has an activation state analogous to the mean firing activity of the population of neurons.
    • These activation states are stored in an activation vector, with each element corresponding to one unit.
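
A minimal sketch (not from the slides) of how such an activation vector might be represented in code, using Python/NumPy; the 2-2-1 layout and the particular values mirror the example network on the next slide, and the bias unit is assumed to be clamped at 1:

import numpy as np

# Hypothetical 2-2-1 network: 2 input units, 2 hidden units, 1 output unit,
# plus a bias unit that is always fully active.  Each element is one unit's
# activation state (analogous to the mean firing rate of a population of cells).
activations = np.array([1.0, 0.0,    # input units
                        0.51, 0.52,  # hidden units
                        0.45,        # output unit
                        1.0])        # bias unit (clamped at 1)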
slide29

[Figure: two views of the network (input, hidden, output, and bias units); the units’ activation states form the vector [1 0 .51 .52 .45], or [1 0 .51 .52 .45 1] with the bias unit included]

Six elements of connectionist models:
  • A weight matrix
    • Each unit sends weighted connections to, and receives weighted connections from, some subset of the other units.
    • These weights are analogous to synapses: they are the means by which one unit transmits information about its activation state to another unit.
    • Weights are stored in a weight matrix.
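
As a sketch of how the weight matrix might be stored (continuing the hypothetical 2-2-1 network above): a 2-D array with one row per receiving unit and one column per sending unit. The orientation and the particular weight values are assumptions for illustration, not taken from the slides.

import numpy as np

n_units = 6  # 2 input + 2 hidden + 1 output + 1 bias, as in the sketch above

# W[r, s] holds the weight of the connection from sending unit s to
# receiving unit r; 0 means "no connection".  Values are arbitrary.
W = np.zeros((n_units, n_units))
W[2, 0], W[2, 1], W[2, 5] = 0.4, -0.3, 0.1    # hidden unit 1 <- inputs, bias
W[3, 0], W[3, 1], W[3, 5] = -0.2, 0.5, -0.1   # hidden unit 2 <- inputs, bias
W[4, 2], W[4, 3], W[4, 5] = 0.7, 0.6, -0.4    # output unit  <- hiddens, bias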
slide31

[Figure: the network and its weight matrix, with one axis labeled “Receiving” and the other “Sending” (input, hidden, output, and bias units), alongside the activation vector [1 0 .51 .52 .45 1]]

Six elements of connectionist models:
  • An input function
    • For any given receiving unit, there needs to be some way of combining the weights and the sending activations to determine that unit’s net input.
    • This is almost always the dot product (i.e., the weighted sum) of the sending activations and the weights.
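
Concretely, with the hypothetical activation vector and weight matrix sketched above, the net input is just a dot product; this is an illustrative sketch, not code from the slides:

import numpy as np

activations = np.array([1.0, 0.0, 0.51, 0.52, 0.45, 1.0])   # as above
W = np.zeros((6, 6))
W[4, 2], W[4, 3], W[4, 5] = 0.7, 0.6, -0.4   # weights into the output unit

# Net input to the output unit (index 4): the weighted sum of the
# activations of the units that send to it.
net_output = W[4] @ activations      # same as np.dot(W[4], activations)

# Net input to every receiving unit at once:
net_all = W @ activations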
slide33

[Figure: the network with only the input and bias activations specified, [1 0 ? ? ? ? ? ? 1]; the remaining activations have yet to be computed]

slide36

[Figure: the same network and partially specified activation vector, repeated]

Six elements of connectionist models:
  • A model environment
    • All the models do is compute activation states over units, given the preceding elements and some partial input.
    • The model environment specifies how events in the world are encoded in unit activation states, typically across a subset of units.
    • It consists of vectors that describe the input activations corresponding to different events, and sometimes the “target” activations that the network should generate for each input.
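
A sketch of what such a model environment might look like for the X-OR task shown on the next slide: one array of input patterns (activations for the two input units) and one array of target patterns (the activation the output unit should produce). The arrays are the standard X-OR truth table, written out as an assumption about how the slides’ example is encoded:

import numpy as np

# Input patterns: one row per event, one column per input unit.
inputs = np.array([[0.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 0.0],
                   [1.0, 1.0]])

# Target patterns: the activation the single output unit should reach.
targets = np.array([[0.0],
                    [1.0],
                    [1.0],
                    [0.0]])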
slide38

[Figure: the network (input, hidden, output, and bias units) alongside a table for the X-OR function with columns Input1, Input2, Hidden1, Hidden2, Output]

slide39

Note that the model environment is always theoretically important!

  • It amounts to a theoretical statement about the nature of the information available to the system from perception and action or prior cognitive processing.
  • Many models sink or swim on the basis of their assumptions about the nature of inputs / outputs.
Six elements of connectionist models:
  • A learning rule
    • Only specified for models that learn (obviously)
    • Specifies how the values stored in the weight matrix should change as the network processes patterns
    • Many different varieties that we will see:
      • Hebbian
      • Error-correcting (e.g. backpropagation)
      • Competitive / self-organizing
      • Reinforcement-based
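
As one concrete example, here is a minimal sketch of the Hebbian variety listed above: each weight changes in proportion to the product of the sending and receiving units’ activations. The function name, the learning rate, and the decision to update every weight are illustrative assumptions, not a rule taken from the slides:

import numpy as np

def hebbian_update(W, activations, lr=0.1):
    """Simple Hebbian rule: delta W[r, s] = lr * a[r] * a[s].

    W           -- weight matrix, W[r, s] from sending unit s to receiving unit r
    activations -- current activation vector over all units
    lr          -- learning rate
    """
    # In a full model one would typically update only the weights on
    # connections that actually exist.
    return W + lr * np.outer(activations, activations)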
slide41

[Figure: the X-OR network and table from slide 38, shown again]

Central challenge
  • Given just these elements, can we build little systems—models—that help us to understand human cognitive abilities, and their neural underpinnings?