Does a theory of language need a grammar? Evidence from the Obligatory Contour Principle

Iris Berent

Florida Atlantic University

The big question
  • How to account for linguistic productivity?
The generative account (Chomsky, 1957; Pinker, 1999; Prince & Smolensky, 1993)
  • Grammar: a symbolic computational mechanism that operates over variables
    • Abstract placeholders
    • Noun, verb
  • Hallmarks of operations on variables
    • Blind to specific instances
    • Generalize across the board, irrespective of item properties or familiarity
      • dog + s --> dogs
      • ktiv + s --> ktivs
    • The appeal to variables is critical for explaining productivity

[Diagram: Noun + s]
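To make the contrast concrete, here is a minimal sketch (my illustration, not from the talk) of an operation over a variable: the rule mentions only the variable `noun`, so it is blind to the specific instance that fills it and generalizes across the board.

```python
# A minimal sketch of a symbolic rule operating over a variable.
# The rule "Noun + s" never inspects the stem itself, so it applies
# equally to familiar and unfamiliar items.
def pluralize(noun: str) -> str:
    # `noun` is an abstract placeholder: the operation is blind to
    # the specific instance that fills it.
    return noun + "s"

print(pluralize("dog"))   # dogs  (familiar item)
print(pluralize("ktiv"))  # ktivs (novel item: same rule, no analogy needed)
```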

An associative account (Rumelhart & McClelland, 1986; Elman et al., 1996)
  • A grammatical component is obsolete
  • Speakers generalize by analogizing novel forms to similar lexical instances
  • Hallmarks of associative processes:
    • Generalizations are constrained by the statistical properties of lexical instances
      • Similarity
      • Familiarity
    • Such generalizations are inexplicable by a grammatical operation on variables (which is blind to instance properties)

[Diagram: the novel form gog is inflected by analogy to dog-dogs and log-logs]
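By contrast, a minimal instance-based analogizer might look like the sketch below (again my illustration; the toy lexicon and string-similarity metric are assumptions): the output depends entirely on which stored instances resemble the novel form.

```python
# A sketch of instance-based generalization: no rule, no variables.
# Novel items are inflected by copying the pattern of the most similar
# stored instance, so behavior depends on similarity and familiarity.
from difflib import SequenceMatcher

LEXICON = {"dog": "dogs", "log": "logs"}

def pluralize_by_analogy(novel: str) -> str:
    # Find the most similar known stem and reuse its suffixation pattern.
    nearest = max(LEXICON,
                  key=lambda stem: SequenceMatcher(None, stem, novel).ratio())
    return novel + LEXICON[nearest][len(nearest):]

print(pluralize_by_analogy("gog"))  # gogs, by analogy to dog/log
```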

Examples of instance-based generalizations
  • Generalizations in natural and artificial languages are guided by the co-occurrence of instances at various grain sizes (a toy illustration follows this list):
    • Morphemes (de Jong, Schreuder & Baayen, 2000)
    • Syllables (Saffran, Aslin & Newport, 1996)
    • Subsyllabic units (Frisch et al., 2000)
    • Segments (Dell, Reed, Adams & Meyer, 2000)
    • Features (Goldrick, 2002)
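A toy sketch of this kind of statistical learning, in the spirit of syllable-level transitional probabilities (Saffran et al., 1996); this is my construction, not their model:

```python
# Transitional probability TP(a -> b) = count(a, b) / count(a).
# High TPs mark recurring units; low TPs mark boundaries between them.
from collections import Counter

def transitional_probabilities(syllables: list[str]) -> dict:
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

stream = ["go", "la", "bu", "go", "la", "bu", "ti", "go", "la", "bu"]
print(transitional_probabilities(stream))
# TPs are 1.0 within the recurring unit "go-la-bu", lower at its edges.
```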
Agreement
Speakers are equipped with a powerful associative mechanism of statistical learning that generalizes from lexical instances


Debate

  • Is an associative lexicon sufficient to account for linguistic productivity?
    • Do some linguistic generalizations appeal to variables?
    • Does a theory of language need a grammar (a mechanism that operates on variables)?


How to sort it out? (see also Marcus, 2001)
  • Scope of linguistic generalizations
  • Learnability
The scope of linguistic generalizations
  • Agreement (all accounts): people can generalize
  • Debate: the scope of generalizations
    • Associative accounts: instance-based generalizations are sensitive to similar, familiar instances (gog-dog)
    • Symbolic account: operations over variables allow generalizations across the board, irrespective of the similarity of novel items to familiar items
  • Do people generalize in such a manner?
Do speakers generalize across the board?
  • No (strong associationist view):
    • The symbolic hypothesis has the empirical facts wrong: speakers don't generalize across the board
  • Yes (weak associationist view):
    • Speakers can generalize across the board (operate over variables)
    • The symbolic view is wrong about the innateness of the learning mechanism:
      • Symbolic view: prior to learning, speakers have the (innate) capacity to operate over variables
      • Associationist alternative: operations over variables are an emergent property of associative systems (the system is not innately equipped with them)
The learnability issue
  • Is the ability to operate over variables learnable by an associative system?
    • An associationist system has no capacity to operate on variables prior to learning
What is not relevant to this debate
  • The contents of the grammar
    • Rules vs. constraints
    • What is constrained (articulatory vs. acoustic entities)
    • Domain specificity
    • Innateness of specific constraints
  • The debate: Is a grammar required?
      • Grammar: a computational mechanism that is innately equipped with operations over variables
Does a theory of language need a grammar?
  • Most research: inflectional morphology
  • Current focus: (morpho)phonology
    • Phonology: an interface between the grammar and the perceptual system
    • Many phonological processes are governed by similarity--prone to an associative explanation
      • E.g., assimilation
    • The success of connectionist accounts of phonology in reading
  • Question: Does phonological knowledge appeal to variables?
Case study: Constraint on Hebrew root structure
  • Hebrew word formation

      Root    Word pattern    Outcome
      smm     CiCeC           SiMeM

  • Restriction on the position of identical consonants:
    • Identity is frequent root-finally: smm
    • Identity is rare root-initially: ssm
  • Speakers generalize the constraint on root structure to novel roots
How to account for the constraint on identical consonants?
  • Symbolic account:
    • Speakers constrain identity (OCP; McCarthy, 1986)

      *bbg

    • Identity is represented by a variable: XX
    • A constraint on identity implicates a grammatical operation on variables
  • Associative account (strong):
    • Variables are eliminated
    • Knowledge of root structure does not appeal to identity (variables)--it is explicable in terms of the statistical structure of root tokens and their constituents (phonemes, features)
      • bbg: the pair bb is rare root-initially
  • Both positions are sketched below
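A minimal sketch of the two positions for triconsonantal roots (toy code, not the authors'); the `pair_counts` table is a hypothetical stand-in for lexical statistics:

```python
# Symbolic (OCP): the constraint mentions identity via a variable (XX)
# and cares only about its position in the root.
def ocp_violation(root: str) -> bool:
    # Root-initial identical consonants (e.g. "ssm") are ill-formed;
    # root-final identity (e.g. "smm") is tolerated.
    return root[0] == root[1]

# Associative (strong): no identity variable; well-formedness is read off
# the attested frequency of specific segment pairs (e.g. how often "ss"
# begins a root), estimated from the lexicon.
def attested(root: str, pair_counts: dict[tuple[str, str], int]) -> int:
    return pair_counts.get((root[0], root[1]), 0)

print(ocp_violation("ssm"), ocp_violation("smm"))  # True False
```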
Does a constraint on identical C's require a grammar? An overview
  • The distinction between identical and nonidentical consonants is inexplicable by statistical knowledge of
    • Segment co-occurrence (Part 1)
    • Feature co-occurrence (Part 2)
  • The constraint on identical C's is observed in the absence of relevant statistical knowledge (Part 3):
    • Novel phonemes with novel feature values
    • Such generalizations may be unlearnable in the absence of innate operations over variables
  • The restriction on identity implicates a grammar
    • A computational mechanism that is innately equipped with operations on variables
Part 1
  • Is speakers' sensitivity to root identity explicable by the co-occurrence of segments?
    • Production

      Berent, I., Everett, D. & Shimron, J. (2001). Cognitive Psychology, 42(1), 1-60.

    • Lexical decision

      Berent, I., Shimron, J. & Vaknin, V. (2001). Journal of Memory and Language, 44(4), 644-665.

The production task

Exemplar    New root    New word
__________________________________
CaCaC       psm         PaSaM
CaCaC       sm          ?
How to seat 2 C’s on 3 slots?
  • An additional root segment is needed
  • Two possible solutions:
    • new segment: SaMaL
    • Identical segments:
      • final: SaMaM
      • initial: SaSaM
  • McCarthy (1986)
    • Speakers solve this problem routinely
    • Opt for root final identity
The restriction on consonant identity
  • McCarthy (1986)
    • OCP: adjacent identical elements are prohibited
      • The root SMM is prohibited
      • Verbs like SaMaM are stored as SM
    • Root identity emerges during word formation by rightwards spreading

[Diagram: root tier s-m linked to the skeleton C V C V C, with the final m spreading rightward to the last C slot; vowel tier: a]

  • Outcome: identity is well formed only root-finally
    • Reduplication: sm --> smm
Predictions
  • Speakers productively form identity from a biconsonantal input by "reduplication"
  • The location of identity is constrained:
    • smm
    • *ssm
  • The domain of the constraint is the root: root-initial identity is avoided irrespective of word position
    • CaCaC
    • maCCiCim
    • hitCaCaCtem
How is identity formed?
  • Symbolic view: reduplication--an operation on variables
    • X --> XX
  • Associationist view (strong):
    • Variables are eliminated--identity is not represented
    • All new segments (identical or not) are inserted by a single process: segment addition
      • sm --> smm
      • sm --> sml
    • The selection of the added segment reflects its frequency
  • Question: Is the production of identical consonants explicable by segment co-occurrence? (Both mechanisms are sketched below.)
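A sketch of the competing mechanisms for completing sm to a triconsonantal root (my illustration of the logic, not the authors' code; `segment_freq` is a hypothetical frequency table, not real Hebrew counts):

```python
# Symbolic view: reduplication copies whatever fills the variable X
# (X --> XX), so it yields root-final identity for any input.
def reduplicate(root: str) -> str:
    return root + root[-1]            # sm --> smm, gd --> gdd

# Associative view (strong): a segment is simply added, chosen by its
# frequency in the lexicon; identity enjoys no special status.
def add_segment(root: str, segment_freq: dict[str, int]) -> str:
    best = max(segment_freq, key=segment_freq.get)
    return root + best                # e.g. sm --> sml if "l" is most frequent

print(reduplicate("sm"))                        # smm
print(add_segment("sm", {"l": 500, "m": 120}))  # sml
```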
Expected vs. observed responses
(root-final identity: sm --> smm; addition: sm --> smX, sXm, Xsm)

[Chart: observed response rates for smm vs. sml against rates expected from segment frequencies; observed smm responses exceed their expected frequency]

Conclusion
  • The formation of identical consonants is inexplicable by their expected lexical frequency, implicating a grammatical mechanism
Additional questions
  • Do speakers constrain root identity on-line?
Lexical decision experiments
  • Words

      Final     DiMuM   (bleeding)
      No        DiShuN  (fertilization)

  • Nonwords: novel roots in existing word patterns

      Initial   KiKuS
      Final     SiKuK
      No        NiKuS

  • Are speakers sensitive to the location of identity?
Predictions for nonwords
  • ssm-type roots are ill-formed --> easier to reject (classify as nonwords) than smm
  • The representation of identity: SMM vs. PSM (frequency matched)
    • Associative account (strong): no distinction between root types when statistical properties are controlled for
    • Symbolic view:
      • Speakers distinguish between identity and nonidentity
      • If identity is formed by the grammar, identity roots may be more wordlike--more difficult to reject than roots with no identity
  • The domain of the constraint: root or word
The materials in Experiments 1-3

                Exp. 1     Exp. 2        Exp. 3
__________________________________________________

Nonwords
  Initial       Ki-KuS     Ki-KaS-tem    hit-Ka-KaS-ti
  Final         Si-KuK     Si-KaK-tem    hiS-ta-KaK-ti
  No            Ni-KuS     Ni-KaS-tem    hit-Na-KaS-ti

Words
  Final         Di-MuM     Si-NaN-tem    hit-Ba-SaS-ti
  No            Di-ShuN    Si-MaN-tem    hit-Ba-LaT-ti

  • Root vs. word:
      • Word domain: no consistency across word patterns
      • Root domain: consistent performance despite differences in word pattern
Conclusions
  • Speakers constrain the location of identical consonants in the root
  • The constraint is inexplicable by the statistical co-occurrence of segments
    • Inconsistent with a strong associative account
Part 2
  • Is the constraint on identical root consonants explicable by statistical properties of features?
  • Is the constraint on identity due to similarity?
    • Rating experiments
        • Berent, I. & Shimron, J. (2003). Journal of Linguistics, 39.1.
    • Lexical decision experiments
        • Berent, Vaknin & Shimron (in preparation)
The similarity explanation
  • General claim (e.g., Pierrehumbert, 1993):
      • Similarity among adjacent segments is undesirable
      • Identical consonants are maximally similar
      • The ban on identical consonants is due to their similarity: full segment identity is not independently constrained
  • Symbolic version (degree of feature overlap):
      • Similar segments are undesirable because the grammar constrains identical features
      • Appeals to variables: "any feature", "identity"
  • Associationist version (frequency of similar segments):
      • Similar segments are undesirable because they are rare
      • Appeals to specific instances (e.g., bb, labial), not variables
  • Either way: a single restriction on identical and similar consonants (a sketch of feature overlap follows)
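A toy sketch of similarity as feature overlap (the feature sets below are simplified placeholders, not a real segment inventory): on the symbolic version the grammar penalizes shared features via variables ("any feature"), while on the associationist version what matters is how rare specific pairs (e.g. velar-velar) are in the lexicon.

```python
# Similarity as proportion of shared features (Jaccard overlap);
# identical segments are maximally similar by construction.
FEATURES = {
    "k": {"velar", "stop", "voiceless"},
    "g": {"velar", "stop", "voiced"},
    "d": {"coronal", "stop", "voiced"},
}

def overlap(a: str, b: str) -> float:
    return len(FEATURES[a] & FEATURES[b]) / len(FEATURES[a] | FEATURES[b])

print(overlap("k", "k"))  # 1.0: identity = maximal similarity
print(overlap("k", "g"))  # 0.5: homorganic (same place of articulation)
print(overlap("k", "d"))  # 0.2: different place
```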
The identity account (McCarthy, 1986, 1994)
  • The constraint on full segment identity is irreducible to the restriction on similarity (homorganicity: same place of articulation)
  • A shared principle: adjacent identical elements are prohibited (OCP)
  • Different domains of application
    • Identity: full segment (root node)
    • Homorganicity: place
  • Different potential for violation
Predicted dissociations

[Diagram: in SKG, the adjacent velars k and g carry two [velar] specifications (*[velar][velar]): a homorganicity violation. In SKK, the final identical consonants are derived by spreading a single k across the last two C slots: no violation. Homorganic SKG: violation; identical SKK: no violation]

Comparing the identity and similarity views (root finally)

Assuming statistical properties are matched:
  • Identity view: SKK is better formed than SKG root-finally (SKK > SKG)
  • Similarity view: SKK and SKG are equally ill-formed (SKK = SKG)

Lexical decision experiments
  • Nonwords (novel roots + existing word patterns)

      Homorganicity   SiGuK
      Identity        RiGuG
      Control         GiDuN

  • Control for statistical properties (a sketch of the bigram estimate follows):
    • All trio members are matched for
      • Bigram frequency
      • Word pattern
    • Identical and homorganic members are matched for
      • Place of articulation
      • Co-occurrence of segments (bigrams) and of homorganic features (identity and homorganicity at the feature level)
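A sketch of the kind of bigram estimate such matching presupposes (my illustration; the lexicon is a toy placeholder):

```python
# Position-specific bigram counts over a root lexicon, so that trio
# members (SiGuK / RiGuG / GiDuN) can be matched on them.
from collections import Counter

def positional_bigrams(roots: list[str]) -> Counter:
    counts = Counter()
    for root in roots:
        for i in range(len(root) - 1):
            counts[(i, root[i:i + 2])] += 1   # bigram indexed by position
    return counts

toy_lexicon = ["sgk", "rgg", "gdn", "smm", "psm"]
counts = positional_bigrams(toy_lexicon)
print(counts[(1, "gg")])  # frequency of "gg" in root-final position
```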
The materials in Experiments 1-3

                Exp. 1     Exp. 2         Exp. 3
                nouns      verbs (suf)    verbs (pre+suf)
_______________________________________________________

Nonwords (novel roots + existing word patterns)
  Homorganicity SiGuK      SiGaKtem       hiStaGaKtem
  Identity      RiGuG      RiGaGtem       hitRaGaGtem
  Control       GiDuN      GiDaNtem       hitGaDaNtem

Words
  Identity      KiDuD      LiKaKtem       hitLaKaKtem
  No identity   KiShuT     LiMaDtem       hitLaMaDtem

Predictions (identity vs. similarity)

Assuming statistical properties are matched:
  • Identity view: identity nonwords are harder to reject than homorganic nonwords (RT: SKK > SKG)
  • Similarity view: identity and homorganic nonwords do not differ (RT: SKK = SKG)

Objections
  • Do speakers generalize across the board?
    • The absence of a statistical explanation is due to an inaccurate estimate of statistical properties
        • Type frequency
        • Token frequency
    • How far can speakers generalize?
  • Is a grammar implicated?
    • Suppose people can generalize "across the board"
    • Are such generalizations learnable by associative systems that are not innately equipped with operations over variables?
How to measure the scope of a generalization? (Marcus, 1998, 2001)
  • The training space: the space used to represent training items
  • Classification of novel items (sketched below):
    • Within the training space: described exhaustively using values of trained features
    • Outside the training space: represented by some untrained feature values

[Diagram: dog, log, and gog are built from trained feature values (within the training space); xog contains an untrained value (outside the training space)]
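A toy rendering of the criterion (my coding assumptions, not Marcus's implementation): an item lies within the training space iff all of its feature values were attested during training.

```python
# Items are represented over a fixed set of feature values; membership in
# the training space is exhaustive describability by trained values.
TRAINED_VALUES = {"d", "l", "g", "o"}   # values attested in dog, log, ...

def within_training_space(item: str) -> bool:
    return all(symbol in TRAINED_VALUES for symbol in item)

print(within_training_space("gog"))  # True:  only trained values
print(within_training_space("xog"))  # False: "x" is an untrained value
```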

Network’s architecture determines scope (Marcus, 1998, 2001)
  • Generalizations by connectionist networks that lack innate operations on variables (feed-forward networks, SRNs)
  • An identity mapping: X --> X
    • A dog is a dog
  • Within the training space:
    • Successful generalizations
    • A gog is a gog
  • Outside the training space:
    • No systematic generalizations!
    • A xog is a ?

Critics: Altmann & Dienes, 1999; Christiansen & Curtin, 1999; Christiansen, Conway & Curtin, 2000; Eimas, 1999; McClelland & Plaut, 1999; Negishi, 1999; Seidenberg & Elman, 1999, 1999b; Shastri, 1999

Implications
  • Generalizations over variables cannot be learned from training on instances
  • If speakers can generalize beyond their training space, then they possess a grammar (a mechanism operating on variables)
  • Question: do speakers generalize in such a fashion?
Existing evidence for exceeding the training space in natural language
  • Phonotactic restrictions extend to unattested clusters (Moreton, 2002): bw > dl
    • Inexplicable by segment co-occurrence
    • Are they explicable by feature co-occurrence?
  • Regular inflection generalizes to strange novel items (Prasada & Pinker, 1993; Berent, Pinker & Shimron, 1999)
    • Are "strange" words outside speakers' space?
Part 3
  • Does the constraint on root structure generalize beyond the phonological space of Hebrew?
    • Berent, I., Marcus, G., Shimron, J., & Gafos, A. (2002). Cognition, 83, 113-139.
Generalization to novel phonemes (e.g., jjr vs. rjj)

[Diagram: the novel phonemes th, ch, j, w have a wide Tongue Tip Constriction Area (Gafos, 1999); Hebrew phonemes have only narrow (s, z, ts) or mid (sh) TTCA values, so the novel phonemes carry a feature value absent from Hebrew]

Rationale
  • Identical novel phonemes never co-occur
    • Root-initially
    • Root-finally
  • A restriction on novel identical phonemes is inexplicable by
    • Statistical knowledge of phoneme co-occurrence
    • Statistical knowledge of feature co-occurrence (th has a novel place value)
  • Question: Can speakers generalize in the absence of relevant statistical knowledge?
Rating materials

Type       Root    Transparent     Opaque
____________________________________
Initial    jjr     ja-jar-tem      hij-ta-jar-tem
Final      rjj     ra-jaj-tem      hit-ra-jaj-tem
Controls   jkr     ja-kar-tem      hij-ta-kar-tem

Vocal lexical decision (say, then decide)

           Words               Nonwords
____________________________________
Initial    -----               hij-ta-jar-tem
Final      hit-pa-lal-tem      hit-ra-jaj-tem
Controls   hit-pa-lash-tem     hij-ta-kar-tem

Conclusion
  • The constraint on the location of identical root consonants generalizes to
    • Novel phonemes
    • Novel feature values
  • Speakers can extend phonological generalizations beyond the space of phonemes and feature values of their language
Objection
  • Must such generalizations exceed the training space?
  • Problem: for an associative model, generalization outside the feature space is unattainable
  • Solution: change the feature space to accommodate the novel phonemes
Can the novel phonemes be accommodated within the Hebrew feature space?
  • Probably yes!
  • Are these solutions motivated?
    • th is "more foreign"
      • In borrowings into Hebrew, many phonemes are maintained (job, check), but th is not (termometer, terapya)
      • Roots with th are rated lower than those with the other foreign phonemes
  • Will these solutions work?
    • The constraint on identical consonants is inexplicable by feature co-occurrence
    • It is unlikely that a model formulated at the feature level could capture the facts
Conclusions
  • Hebrew speakers generalize the constraint on root structure across the board
    • Irrespective of the statistical properties of novel items
    • Despite having no relevant statistical knowledge
  • Such generalizations may not be learnable by an associative system from the statistical properties of the lexicon (so far...)
  • An account of language in general, and phonology in particular, must incorporate a grammar--a mechanism innately equipped with operations on variables--that is irreducible to an associative lexicon.