LING 696B: Categorical perception, perceptual magnets and neural nets

From last time: quick overview of phonological acquisition
  • Prenatal to newborn
    • 4-day-old French babies prefer listening to French over Russian (Mehler et al, 1988)
  • Prepared to learn any phonetic contrast in human languages (Eimas, 71)
A quick overview of phonological acquisition
  • 6 months: Effects of experience begin to surface (Kuhl, 92)
  • 6 - 12 months: from a universal perceiver to a language-specific one (Werker & Tees, 84)
A quick overview of phonological acquisition
  • 6 - 9 months: know the sequential patterns of their language
    • English vs. Dutch (Jusczyk, 93)
      • “avoid” vs. “waardig”
    • Weird English vs. better English (Jusczyk, 94)
      • “mim” vs. “thob”
    • Impossible Dutch vs. Dutch (Friederici, 93)
      • “bref” vs. “febr”
A quick overview of phonological acquisition
  • 6 months: segment words out of the speech stream based on transition probabilities (Saffran et al, 96); see the sketch after this list
    • pabikutibudogolatupabikudaropi…
  • 8 months: remember words heard in stories read to them by graduate students (Jusczyk and Hohne, 97)
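A minimal sketch of the transition-probability idea (the word list, syllable stream, and threshold below are invented for illustration and are not Saffran et al's stimuli): compute how predictable each syllable is from the previous one, and cut the stream where predictability drops.

    from collections import Counter

    # Toy stream built by concatenating three "words"; values are made up.
    word_seq = ["pabiku", "tibudo", "golatu", "pabiku", "golatu",
                "tibudo", "pabiku", "tibudo"]
    stream = [w[i:i + 2] for w in word_seq for i in range(0, len(w), 2)]

    # Transition probability P(next | current) = count(current, next) / count(current)
    pairs = list(zip(stream, stream[1:]))
    bigram = Counter(pairs)
    unigram = Counter(a for a, _ in pairs)
    tp = {(a, b): bigram[(a, b)] / unigram[a] for (a, b) in bigram}

    # Posit a word boundary wherever the transition probability dips below a
    # (made-up) threshold: within-word transitions stay high, between-word ones drop.
    threshold = 0.75
    segmented, current = [], [stream[0]]
    for a, b in pairs:
        if tp[(a, b)] < threshold:
            segmented.append("".join(current))
            current = []
        current.append(b)
    segmented.append("".join(current))
    print(segmented)   # recovers chunks like 'pabiku', 'tibudo', 'golatu'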
A quick overview of phonological acquisition
  • From 12-18 months and above: vocabulary growth from 50 words to a “burst”
    • Lexical representation?
  • Toddlers: learning morpho-phonology
    • Example: U-shaped curve went --> goed --> wented/went
Next few weeks: learning phonological categories
  • The innovation of phonological structure requires us to think of the system of vocalizations as combinatorial, in that the concatenation of inherently meaningless phonological units leads to an intrinsically unlimited class of words ... It is a way of systematizing existing vocabulary items and being able to create new ones.

-- Ray Jackendoff (also citing C. Hockett in “Foundations of Language”)

Categorical perception
  • Categorization
  • Discrimination
Eimas et al (1971)
  • Enduring results: tiny (1-4 month) babies can hear almost anything
  • Stimuli: 3 pairs along a synthetic /p/-/b/ continuum, each pair differing by 20 ms of VOT
Eimas et al (1971)
  • Babies get
    • Very excited hearing a change that crosses the category boundary
    • Somewhat excited when the change stays within the boundary
    • Bored when there is no change
Eimas et al (1971)
  • Their conclusion: [voice] feature detection is innate
  • But similar things were shown for chinchillas (Kuhl and Miller, 78) and monkeys (Ramus et al)
  • So maybe this is a mammalian auditory system thing…
Sensitivity to distributions
  • Perceptual magnet effects (Kuhl, 91): tokens near prototypes are harder to discriminate
  • Perhaps a fancier term is “perceptual space warping” (a toy sketch follows)
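As a toy illustration only (not Kuhl's model; the prototype location, pull strength, and width below are made up), a one-dimensional "warping" function can pull percepts toward a prototype more strongly the closer they start to it, so that perceived distances shrink near the prototype.

    import numpy as np

    prototype = 300.0   # e.g. a hypothetical F1 value in Hz
    strength = 0.6      # how strongly percepts are pulled toward the prototype
    width = 100.0       # how far the magnet's influence extends (Hz)

    def perceive(f):
        # Pull decays with distance from the prototype (Gaussian falloff).
        pull = strength * np.exp(-((f - prototype) ** 2) / (2 * width ** 2))
        return f + pull * (prototype - f)

    stimuli = np.arange(250.0, 551.0, 50.0)
    percepts = perceive(stimuli)
    # Perceived spacing between adjacent stimuli: compressed near 300 Hz,
    # stretched or close to the physical 50 Hz spacing farther away.
    print(np.diff(percepts).round(1))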
Categorical perception vs. perceptual magnets
  • Gradient effects: Kuhl (91-95) tried hard to demonstrate these are not experimental errors, but systematic

[Figure: results plotted by stimulus position: Reference, 1, 2, 3, 4]

Categorical perception vs. perceptual magnets
  • PM is based on a notion of perceptual space
    • Higher dimensions
    • Non-category boundaries
Kuhl et al (1992)
  • American English has /i/, but no /y/
  • Swedish has /y/, but no /i/
  • Test whether infants can detect differences between prototype and neighbors

[Figure: percent of comparison tokens treated as the same as the reference]

Kuhl et al (92)’s interpretation
  • Linguistic experience changes perception of native and non-native sounds (also see Werker & Tees, 84)
  • Influence of input starts before any phonemic contrast is learned
  • Information stored as prototypes
Modeling attempt: Guenther and Gjaja (96)
  • Claim: CP/PM is a matter of auditory cortex, and does not have to involve higher-level cognition
  • Methodology: build a connectionist model inspired by neurons
Self-organizing map in Guenther and Gjaja
  • Input coding: for each (F1, F2) pair, use 4 input nodes (a hedged sketch follows this list)
  • This seems to be an artifact of the mechanism (with regard to scaling), but a standard practice in the field
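A minimal sketch of one such coding, under the assumption (not taken from the paper) that each formant drives an antagonistic pair of nodes, one rising and one falling with the normalized formant value; the formant ranges are invented for the example.

    import numpy as np

    # Hypothetical formant ranges, used only to normalize the toy inputs (Hz).
    F1_RANGE = (200.0, 1000.0)
    F2_RANGE = (500.0, 3000.0)

    def encode(f1, f2):
        """Code an (F1, F2) pair as 4 input activities in [0, 1].

        Assumption for this sketch: each formant is represented by a pair of
        nodes, one increasing and one decreasing with the normalized value.
        """
        def pair(f, lo, hi):
            x = min(max((f - lo) / (hi - lo), 0.0), 1.0)   # normalize to [0, 1]
            return np.array([x, 1.0 - x])                  # rising node, falling node
        return np.concatenate([pair(f1, *F1_RANGE), pair(f2, *F2_RANGE)])

    print(encode(300.0, 2300.0))   # an /i/-like token -> 4 input activities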
Self-organizing map in Guenther and Gjaja

[Figure: network diagram; mj is the activation of map cell j, driven by the input through its connection weights]

Self-organizing map: learning stage

[Figure: learning-stage equations; mj is the activation of cell j, the connection weights are initialized to be random, and the learning rule updates each weight by a learning rate times the mismatch between input and weight, gated by the activation]

Self-organizing map: learning stage
  • Crucially, only update connections to the N most active cells
    • The “inhibitory connection”
  • N goes down to 1 over time
  • Intuition: each cell selectively encodes a certain kind of input; otherwise the weights are pulled in different directions (a hedged sketch of the learning step follows this list)
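A minimal sketch of such a learning step, under stated assumptions rather than the paper's exact equations: activation is taken here to be the normalized dot product between the input and a cell's weight vector, and only the N most active cells move their weights toward the current input. Map size, learning rate, and the schedule for N are made-up choices.

    import numpy as np

    rng = np.random.default_rng(0)

    N_CELLS = 500                          # map size (cf. "size of model" below)
    DIM = 4                                # 4 input nodes, as in the coding sketch
    weights = rng.random((N_CELLS, DIM))   # connection weights, initialized at random

    def activations(x, w):
        # Assumed activation: normalized dot product between the input and each
        # cell's weight vector (the paper's exact formula may differ).
        return (w @ x) / (np.linalg.norm(w, axis=1) * np.linalg.norm(x) + 1e-12)

    def learn_step(x, weights, n_winners, rate):
        # Only the n_winners most active cells update (the "inhibitory connection");
        # the update is the learning rate times the input-weight mismatch.
        m = activations(x, weights)
        winners = np.argsort(m)[-n_winners:]
        weights[winners] += rate * (x - weights[winners])
        return weights

    # Toy training loop: the active neighborhood shrinks toward 1, the rate decays.
    for t in range(5000):
        f1, f2 = rng.uniform(200, 1000), rng.uniform(500, 3000)   # made-up formants
        x = np.array([(f1 - 200) / 800, 1 - (f1 - 200) / 800,
                      (f2 - 500) / 2500, 1 - (f2 - 500) / 2500])  # coding as above
        n_winners = max(1, int(20 * (1 - t / 5000)))
        rate = 0.1 * (1 - t / 5000)
        weights = learn_step(x, weights, n_winners, rate)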
Self-organizing map: testing stage
  • Population vector: take a weighted sum of individual votes (a generic sketch follows this list)
  • Inspired by neural firing patterns in monkey reaching
  • How to compute Fvote?
    • X+, X- must point in the same direction as Z+, Z-
    • Algebra exercise: derive a formula to calculate Fvote (homework Problem 1)
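As a point of reference only, a generic population-vector readout is often written as an activation-weighted average of each cell's preferred value; this generic form is an assumption here and not necessarily the formula Problem 1 asks you to derive.

    import numpy as np

    def population_readout(activations, preferred_formants):
        """Generic population vector: activation-weighted average of the formant
        value each cell 'votes' for. How the preferred values are obtained from
        the learned weights is what the homework asks you to work out."""
        a = np.asarray(activations, dtype=float)
        f = np.asarray(preferred_formants, dtype=float)
        return (a @ f) / a.sum()

    # Toy usage with made-up numbers: three cells voting for three F1 values.
    print(population_readout([0.1, 0.7, 0.2], [250.0, 300.0, 400.0]))   # 315.0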
The distribution of Fvote after learning

[Figure: distribution of Fvote after learning, plotted against the training stimuli; recall the activation equation]

Results shown in the paper

With some choice of parameters to fit the “percent generalization”

Local summary
  • Captures a type of pre-linguistic / pre-lexical learning from distributions
    • (next time) Distribution learning can happen in a short time (Maye&Gerken, 00)
  • G&G: No need to have categories in order to get PM effects
    • Also see Lacerda (95)’s exemplar model
  • Levels of view: cognition or cortex
    • Maybe Guenther is right …
Categorization / discrimination training (Guenther et al, 99)
  • 1. Space the stimuli according to JND
  • 2. Test whether they can hear the distinctions in control and training regions
  • 3. TRAIN in some way (45 minutes!)
  • 4. Test again as in 2.

[Figure: stimuli are band-passed white noise]

Guenther et al, 99
  • Categorization training:
    • Tell people that sounds from the training region belong to a “prototype”
    • Present them with sounds from the training region and band edges, and ask whether each is the prototype
  • Discrimination training:
    • People say “same” or “different”, and get feedback
    • Made harder by tighter spacing over time
Guenther et al, 99
  • Result of distribution learning depends on what people do with it

[Figure: post-training results for the control and training regions]

Guenther’s neural story for this effect
  • Propose to modify G&G’s architecture to simulate effects of different training procedures
  • Did some brain scans to verify the hypothesis (Guenther et al, 04)
  • New model forthcoming? (A+ guaranteed if you turn one in)
The untold story of Guenther and Gjaja
  • Size of model matters a lot
  • Larger models give more flexibility, but no free lunch:
    • Take longer to learn
    • Must tune the learning rate parameter
    • Must tune the activation neighborhood

[Figure: simulation outcomes for maps of 100, 500, and 1500 cells]

Randomness in the outcome
  • This is perhaps the nicest run
  • Some other runs
A technical view of G&G’s model
  • “Self-organization”: a type of unsupervised learning
    • Receives inputs with no information about which categories they belong to
    • Also referred to as “clustering” (next time)
  • “Emergent behavior”: spatial input patterns become coded in connection weights that realize the non-linear mapping
A technical view of G&G’s model
  • A non-parametric approximation to the input-percept non-linear mapping
    • Lots of parameters, but individually do not correspond to categories
    • Will look at a parametric approach to clustering next time
  • Difficulties: tuning of parameters, rates of convergence, formal analysis (more later)
Other questions for G&G
  • If G&G is a model of auditory cortex, then where do F1 and F2 come from? (see Adam’s presentation)
  • What if we expose G&G to noisy data?
  • How does it deal with speech segmentation?
  • More next time.
Homework problems related to perceptual magnets
  • Problem 2: warping in one dimension predicted by perceptual magnets
  • Problem 3: program a simplified G&G type of model or any other SOM to simulate the effect shown above

[Figure: distributions labeled “Before”, “Training data”, and “After?”]

Commercial: CogSci colloquium this year
  • Many big names in connectionism this semester
A digression to the history of neural nets
  • 40’s -- study of neurons (Hebb; Hodgkin & Huxley)
  • 50’s -- self-programming computers, biological inspirations
  • ADALINE from Stanford (Widrow & Hoff)
  • Perceptron (Rosenblatt, 60)
Perceptron
  • Input: x1, …, xn
  • Output: {0, 1}
  • Weights: w1,…,wn
  • Bias: a
  • How it works: output 1 if w1·x1 + … + wn·xn + a > 0, otherwise output 0
What can perceptrons do?
  • A linear machine: finding a plane that separates the 0 examples from the 1 examples
Perceptron learning rule
  • Start with random weights
  • Take an input and check the output; when there is an error
    • weight = weight + a*input*error  (here a is a learning rate, not the bias above)
    • bias = bias + a*error
  • Repeat until no more errors
  • Recall the G&G learning rule
  • See demo (a runnable sketch follows this list)
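A minimal runnable sketch of the rule above; the toy data, learning rate, and epoch count are made up for illustration, and the classroom demo is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(1)

    def train_perceptron(X, y, rate=0.1, epochs=100):
        """Perceptron learning rule as on the slide: on each error, nudge the
        weights by rate * input * error and the bias by rate * error."""
        w = rng.normal(size=X.shape[1])   # start with random weights
        b = 0.0
        for _ in range(epochs):
            errors = 0
            for x, target in zip(X, y):
                output = 1 if w @ x + b > 0 else 0   # "how it works" above
                error = target - output
                if error != 0:
                    w += rate * x * error
                    b += rate * error
                    errors += 1
            if errors == 0:                          # repeat until no more errors
                break
        return w, b

    # Toy linearly separable data: label is 1 when x1 + x2 > 1 (made up).
    X = rng.uniform(0, 1, size=(200, 2))
    y = (X.sum(axis=1) > 1).astype(int)
    w, b = train_perceptron(X, y)
    print(w, b)   # a separating line: "a plane that separates the 0s from the 1s"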
Criticism from traditional AI (Minsky & Papert, 69)
  • Simple perceptrons are not capable of learning some simple Boolean functions
  • More sophisticated machines hard to understand
  • Example: XOR (see demo)
Revitalization of neural nets: connectionism
  • Minsky and Papert’s book stopped the work on neural nets for 10 years …
  • But as computers got faster and cheaper…
  • CMU parallel distributed processing (PDP) group
  • UC San Diego institute of cognitive science
Networks get complicated: more layers
  • A 3-layer perceptron (one hidden layer) can solve the XOR problem, as the sketch below shows
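For example, one hidden layer of two threshold units is enough; the weights below are hand-picked (one of many possible choices), not learned.

    def step(z):
        return 1 if z > 0 else 0

    def xor_net(x1, x2):
        """XOR with one hidden layer of threshold units (hand-picked weights).

        h1 fires when at least one input is on (OR), h2 fires only when both
        are on (AND); the output fires when h1 is on but h2 is not, i.e. XOR.
        """
        h1 = step(x1 + x2 - 0.5)
        h2 = step(x1 + x2 - 1.5)
        return step(h1 - h2 - 0.5)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor_net(a, b))   # 00->0, 01->1, 10->1, 11->0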
Networks get complicated: combining non-linearities
  • Decision surface can be non-linear
Networks get complicated: more connections
  • Bi-directional
  • Inhibitory
  • Feedback loops
Key to the revitalization of connectionism
  • Back-propagation: an algorithm that can be used to train a variety of networks (a minimal sketch follows)
  • Variants developed for other architectures
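A minimal sketch of back-propagation for a tiny sigmoid network trained on XOR; the network size, learning rate, and epoch count are made-up choices for illustration.

    import numpy as np

    rng = np.random.default_rng(2)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # XOR data
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # A 2-2-1 network: weights initialized at random, biases at zero.
    W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
    W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)
    rate = 0.5

    for epoch in range(20000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the error from the output toward the input
        # (squared-error loss, sigmoid derivative s*(1-s)).
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Gradient-descent updates
        W2 -= rate * h.T @ d_out
        b2 -= rate * d_out.sum(axis=0)
        W1 -= rate * X.T @ d_h
        b1 -= rate * d_h.sum(axis=0)

    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    print(out.round(2).ravel())   # should approach [0, 1, 1, 0]; a different seed
                                  # may be needed if training lands in a local minimum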


Two kinds of learning with neural nets
  • Supervised
    • Classification: patterns --> {1,…,N}
    • Adaptive control: control variables --> observed measurements
    • Time series prediction: past events --> future events
  • Unsupervised
    • Clustering
    • Compression / feature extraction
Statistical insights from the neural nets
  • Overtraining / early-stopping
  • Local maxima
  • Model complexity -- training error tradeoff
  • Limits of learning: generalization bounds
  • Dimension reduction, feature selection
  • Many of these problems became standard topics in machine learning
Big question: neural nets and symbolic computing
  • Grammar: harmony theory (Smolensky, and later Optimality Theory)
  • Representation: Can neural nets discover phonemes?
Next week
  • Reading will be posted on the course webpage