Slide1 l.jpg

Perceptual Categories:

Old and gradient, young and sparse.

Bob McMurray

University of Iowa

Dept. of Psychology


Slide2 l.jpg

Collaborators

Richard Aslin

Michael Tanenhaus

David Gow

Joe Toscano

Cheyenne Munson

Meghan Clayards

Dana Subik

Julie Markant

Jennifer Williams

The students of the MACLab


Slide3 l.jpg

Categorization

Categorization occurs when:

1) discriminably different stimuli…

2) …are treated equivalently for some purposes…

3) …and stimuli in other categories are treated differently.


Slide4 l.jpg

Categorization

Perceptual Categorization

  • Continuous input maps to discrete categories.

  • Semantic knowledge plays a minor role.

  • Bottom-up learning processes important.


Slide5 l.jpg

Categorization

Perceptual Categorization

  • Continuous inputs map to discrete categories.

  • Semantic knowledge plays less of a role.

  • Categories include:

  • Faces

  • Shapes

  • Words

  • Colors

  • Exemplars include:

  • A specific view of a specific face

  • A variant of a shape.

  • A particular word in a particular utterance

  • Variation in hue, saturation, lightness


Slide6 l.jpg

Categorization occurs when:

1) Discriminably different stimuli…

2) …are treated equivalently for some purposes…

3) …and stimuli in other categories are treated differently.

Premise

For Perceptual Categories, this definition largely falls short.

and

this may be a good thing.

Approach

Walk through work on speech and category development.

Assess this definition along the way.


Slide7 l.jpg

Overview

1) Speech perception: discriminably different and categorical perception.

2) Word recognition: exemplars of the same word are not treated equivalently. (+Benefits)

3) Speech development: phonemes are not treated equivalently.

4) Speech development (model): challenging "other categories treated differently." (+Benefits)

5) Development of visual categories: challenging "other categories treated differently."


Slide8 l.jpg

Categorical Perception

[Figure: identification (% /pa/) and discrimination as a function of VOT, from B to P.]

  • Sharp identification of tokens on a continuum.

  • Discrimination poor within a phonetic category.

Subphonemic variation in VOT is discarded in favor of a discrete symbol (phoneme).
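The classic pattern above can be sketched numerically. A hedged illustration (all parameter values are invented for the sketch): if identification follows a steep logistic over VOT and listeners can compare only category labels, discrimination collapses within a category and succeeds across the boundary.

```python
# Sketch of categorical perception (illustrative numbers, not the study's data):
# discrimination predicted from identification labels alone.
import math

def p_voiceless(vot, boundary=17.0, slope=0.8):
    """Probability of a /p/ label at a given VOT (logistic ID function)."""
    return 1.0 / (1.0 + math.exp(-slope * (vot - boundary)))

def predicted_discrimination(vot_a, vot_b):
    """P(different) if listeners compare only category labels: the pair is
    'different' only when the two tokens get different labels."""
    pa, pb = p_voiceless(vot_a), p_voiceless(vot_b)
    return pa * (1 - pb) + pb * (1 - pa)

# Within-category pair (both /b/): near-chance discrimination.
within = predicted_discrimination(0, 10)
# Cross-boundary pair, the same 10 ms apart: much better discrimination.
across = predicted_discrimination(12, 22)
print(f"within-category: {within:.2f}, across-boundary: {across:.2f}")
```

The same 10 ms step in VOT is nearly invisible inside a category and nearly perfectly detectable across the boundary, which is the signature pattern the slide describes.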


Slide9 l.jpg

Categorical Perception

Categorical Perception: Demonstrated across wide swaths of perceptual categorization.

Line Orientation (Quinn, 2005)

Basic Level Objects (Newell & Bulthoff, 2002)

Facial Identity (Beale & Keil, 1995)

Musical Chords (Howard, Rosen & Broad, 1992)

Signs (Emmorey, McCollough & Brentari, 2003)

Color (Bornstein & Korda, 1984)

Vocal Emotion (Laukka, 2005)

Facial Emotion (Pollak & Kistler, 2002)

What’s going on?


Slide10 l.jpg

Categorical Perception

  • Across a category boundary, CP:

    • enhances contrast.

  • Within a category, CP yields

    • a loss of sensitivity

    • a down-weighting of the importance of within-category variation.

    • discarding continuous detail.


Slide11 l.jpg

Categorical Perception

  • Across a category boundary, CP:

    • enhances contrast.

  • Within a category, CP yields

    • a loss of sensitivity

    • a downweighting of the importance of within-category variation.

    • discarding continuous detail.

Categorization occurs when:

1) discriminably different stimuli…

2) …are treated equivalently for some purposes…

3) …and stimuli in other categories are treated differently

Stimuli are not discriminably different.

CP: Categorization affects perception.

Definition: Categorization independent of perception.

Need a more integrated view…


Slide12 l.jpg

Perceptual Categorization

Categorization occurs when:

1) discriminably different stimuli…
CP: perception not independent of categorization.

2) …are treated equivalently for some purposes…

3) …and stimuli in other categories are treated differently.


Slide13 l.jpg

Categorical Perception

  • Across a category boundary, CP:

    • enhances contrast.

  • Within a category, CP yields

    • a loss of sensitivity

    • a downweighting of the importance of within-category variation.

    • discarding continuous detail.

Is continuous detail really discarded?


Slide14 l.jpg

Is continuous detail really discarded?

Evidence against the strong form of Categorical Perception from psychophysical-type tasks:

Sidebar

This has never been examined with non-speech stimuli…

  • Goodness Ratings
    • Miller (1994, 1997…)
    • Massaro & Cohen (1983)

  • Discrimination Tasks
    • Pisoni & Tash (1974)
    • Pisoni & Lazarus (1974)
    • Carney, Widin & Viemeister (1977)

  • Training
    • Samuel (1977)
    • Pisoni, Aslin, Perey & Hennessy (1982)


    Slide15 l.jpg

Is continuous detail really discarded? No.

Why not?

Is it useful?


    Slide16 l.jpg

[Figure: as "ba…" unfolds, cohort candidates (basic, bakery, barrier, barricade, bait, baby) are winnowed; mismatching words are crossed out.]

  • Online Word Recognition

  • Information arrives sequentially.

  • At early points in time, the signal is temporarily ambiguous.

  • Later-arriving information disambiguates the word.


    Slide17 l.jpg

Input: b… u… tt… e… r (over time)

[Figure: lexical activation over time for beach, butter, bump, putter, dog.]


    Slide18 l.jpg

    These processes have been well defined for a phonemic representation of the input.

But there is considerably less ambiguity if we consider within-category (subphonemic) information.

    Example: subphonemic effects of motor processes.


    Slide19 l.jpg

    Coarticulation


    Any action reflects future actions as it unfolds.

Example: Coarticulation

Articulation (lips, tongue…) reflects current, future and past events.

Subtle subphonemic variation in speech reflects temporal organization.

Sensitivity to these perceptual details might yield earlier disambiguation.


    Slide20 l.jpg

    Experiment 1


    What does sensitivity to within-category detail do?

    Does within-category acoustic detail systematically affect higher level language?

    Is there a gradient effect of subphonemic detail on lexical activation?


    Slide21 l.jpg

    Experiment 1

    Gradient relationship: systematic effects of subphonemic information on lexical activation.

    If this gradiency is used it must be preserved over time.

Need a design sensitive to both systematic acoustic detail and the detailed temporal dynamics of lexical activation.

    McMurray, Tanenhaus & Aslin (2002)


    Slide22 l.jpg

    Acoustic Detail

Use a speech continuum: more steps yield a better picture of the acoustic mapping.

KlattWorks: generate synthetic continua from natural speech.

    • 9-step VOT continua (0-40 ms)

    • 6 pairs of words.

    • beach/peach bale/pale bear/pear

    • bump/pump bomb/palm butter/putter

    • 6 fillers.

    • lamp leg lock ladder lip leaf

    • shark shell shoe ship sheep shirt


    Slide24 l.jpg

    Temporal Dynamics

How do we tap on-line recognition? With an on-line task: eye-movements.

Subjects hear spoken language and manipulate objects in a visual world.

The visual world includes a set of objects with interesting linguistic properties: a beach, a peach and some unrelated items.

Eye-movements to each object are monitored throughout the task.

Tanenhaus, Spivey-Knowlton, Eberhard & Sedivy, 1995


    Slide25 l.jpg

Why use eye-movements and the visual world paradigm?

• Relatively natural task.

• Eye-movements generated very fast (within 200 ms of the first bit of information).

• Eye movements time-locked to speech.

• Subjects aren't aware of eye-movements.

• Fixation probability maps onto lexical activation.


    Slide26 l.jpg

    Task

    A moment to view the items


    Slide28 l.jpg

    Task

    Bear

    Repeat 1080 times


    Slide29 l.jpg

    Identification Results

[Figure: proportion /p/ responses as a function of VOT (ms), from B (0 ms) to P (40 ms).]

High agreement across subjects and items for the category boundary.

By subject: 17.25 +/- 1.33 ms
By item: 17.24 +/- 1.24 ms


    Slide30 l.jpg

    Task

[Figure: trial schematic; fixations aggregated across trials over time to yield % fixations.]

Target = Bear
Competitor = Pear
Unrelated = Lamp, Ship


    Slide31 l.jpg

    Task

[Figure: fixation proportions over time (0-2000 ms) for VOT = 0 ms (response B) and VOT = 40 ms (response P).]

More looks to competitor than unrelated items.
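The fixation-proportion curves in these plots come from aggregating looks across trials. A minimal sketch of that computation, with hypothetical data and function names (not the authors' analysis code):

```python
# Sketch: turning per-trial fixation records into fixation-proportion curves.
# Each trial is a list of (start_ms, end_ms, object) fixations.
from collections import defaultdict

def fixation_proportions(trials, bin_ms=200, t_max=2000):
    """For each time bin, the proportion of trials fixating each object."""
    bins = range(0, t_max, bin_ms)
    counts = {t: defaultdict(int) for t in bins}
    for fixations in trials:
        for t in bins:
            for start, end, obj in fixations:
                if start <= t < end:
                    counts[t][obj] += 1
                    break
    return {t: {obj: n / len(trials) for obj, n in c.items()}
            for t, c in counts.items()}

# Two toy trials: both end on the target; each starts elsewhere.
trials = [
    [(0, 400, "unrelated"), (400, 2000, "target")],
    [(0, 300, "competitor"), (300, 2000, "target")],
]
curves = fixation_proportions(trials)
print(curves[0])   # early bin: looks split between unrelated and competitor
print(curves[800]) # later bin: all looks on the target
```

Plotting one such curve per object (target, competitor, unrelated) gives the time-course panels shown in the slides.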


    Slide32 l.jpg

Task

• Given that the subject heard bear and clicked on "bear"…

How often was the subject looking at the "pear"?

[Schematic: predicted competitor-fixation curves under Categorical Results vs. a Gradient Effect.]


    Slide33 l.jpg

Results

[Figure: competitor fixations over time since word onset (ms), one curve per VOT step (0-40 ms), separately for B and P responses.]

Long-lasting gradient effect: seen throughout the timecourse of processing.


    Slide34 l.jpg

[Figure: competitor fixations (area under the curve) as a function of VOT (ms), with the category boundary marked; looks to the competitor by response (B or P).]

Area under the curve: clear effects of VOT.
B: p=.017*  P: p<.001***

Linear trend:
B: p=.023*  P: p=.002***


    Slide35 l.jpg

[Figure: competitor fixations as a function of VOT (ms), unambiguous stimuli only, with the category boundary marked.]

Unambiguous stimuli only: clear effects of VOT.
B: p=.014*  P: p=.001***

Linear trend:
B: p=.009**  P: p=.007**


    Slide36 l.jpg

    Summary

Subphonemic acoustic differences in VOT have a gradient effect on lexical activation.

    • Gradient effect of VOT on looks to the competitor.

    • Effect holds even for unambiguous stimuli.

    • Seems to be long-lasting.

    Consistent with growing body of work using priming (Andruski, Blumstein & Burton, 1994; Utman, Blumstein & Burton, 2000; Gow, 2001, 2002).

    Variants from the same category are not treated equivalently: Gradations in interpretation are related to gradations in stimulus.


    Slide37 l.jpg

    Extensions

Word recognition is systematically sensitive to subphonemic acoustic detail.

    • Voicing

    • Laterality, Manner, Place

    • Natural Speech

    • Vowel Quality


    Slide38 l.jpg

    Extensions

Word recognition is systematically sensitive to subphonemic acoustic detail.

• Voicing

• Laterality, Manner, Place

• Natural Speech

• Vowel Quality

• Metalinguistic Tasks


    Slide39 l.jpg

    Extensions

[Figure: competitor fixations (looks to B) as a function of VOT (ms), by response (B or P), with the category boundary marked.]




    Slide41 l.jpg

    Categorical Perception

Within-category detail survives to the lexical level.

    Abnormally sharp categories may be due to meta-linguistic tasks.

    There is a middle ground: warping of perceptual space (e.g. Goldstone, 2002)

    Retain: non-independence of perception and categorization.


    Slide42 l.jpg

    Perceptual Categorization

Categorization occurs when:

1) discriminably different stimuli…
CP: perception not independent of categorization.

2) …are treated equivalently for some purposes…
Exp 1: Lexical variants not treated equivalently (gradiency).

3) …and stimuli in other categories are treated differently.


    Slide43 l.jpg

    Perceptual Categorization

Categorization occurs when:

1) discriminably different stimuli…
CP: perception not independent of categorization.

2) …are treated equivalently for some purposes…
Exp 1: Lexical variants not treated equivalently (gradiency). WHY?

3) …and stimuli in other categories are treated differently.


    Slide44 l.jpg

    Progressive Expectation Formation

    Any action reflects future actions as it unfolds.

    • Can within-category detail be used to predict future acoustic/phonetic events?

    • Yes: Phonological regularities create systematic within-category variation.

      • Predicts future events.


    Slide45 l.jpg

Input: m… a… r… oo… ng… g… oo… s… (over time)

[Figure: lexical activation over time for maroon, goose, goat, duck.]

    Experiment 3: Anticipation

    Word-final coronal consonants (n, t, d) assimilate the place of the following segment.

    Maroong Goose

    Maroon Duck

Place assimilation -> ambiguous segments that anticipate upcoming material.


    Slide46 l.jpg

We should see faster eye-movements to "goose" after assimilated consonants.

Subject hears:

"select the maroon duck"
"select the maroon goose"
"select the maroong goose"
"select the maroong duck" *


    Slide47 l.jpg

[Figure: fixation proportion over time (ms) for looks to "goose", assimilated vs. non-assimilated, time-locked to the onset of "goose" plus the oculomotor delay.]

Results

Anticipatory effect on looks to the non-coronal.


    Slide48 l.jpg

[Figure: fixation proportion over time (ms) for looks to "duck", assimilated vs. non-assimilated.]

Inhibitory effect on looks to the coronal (duck, p=.024).


    Slide49 l.jpg

Experiment 3: Extensions

Green/m Boat
Eight/Ape Babies

Possible lexical locus: assimilation creates competition.


    Slide50 l.jpg

• Sensitivity to subphonemic detail:

  • Increase priors on likely upcoming events.
  • Decrease priors on unlikely upcoming events.
  • Active temporal integration process.

• Possible lexical mechanism…

NOT treating stimuli equivalently allows within-category detail to be used for temporal integration.


    Slide51 l.jpg

Adult Summary

• Lexical activation is exquisitely sensitive to within-category detail: gradiency.

• This sensitivity is useful to integrate material over time.

  • Progressive facilitation
  • Regressive ambiguity resolution (ask me about this)


    Slide52 l.jpg

Perceptual Categorization

Categorization occurs when:

1) discriminably different stimuli…
CP: perception not independent of categorization.

2) …are treated equivalently for some purposes…
Exp 1: Lexical variants not treated equivalently (gradiency).
Exp 2: Non-equivalence enables temporal integration.

3) …and stimuli in other categories are treated differently.


    Slide53 l.jpg

Development

Historically, work in speech perception has been linked to development.

Sensitivity to subphonemic detail must revise our view of development.

Use: Infants face additional temporal integration problems.

No lexicon is available to clean up noisy input: they must rely on acoustic regularities.

Extracting a phonology from the series of utterances.


    Slide54 l.jpg

Sensitivity to subphonemic detail:

For 30 years, virtually all attempts to address this question have yielded categorical discrimination (e.g. Eimas, Siqueland, Jusczyk & Vigorito, 1971).

• Exception: Miller & Eimas (1996).

  • Only at extreme VOTs.
  • Only when habituated to a non-prototypical token.


    Slide55 l.jpg

Use?

Nonetheless, infants possess abilities that would require within-category sensitivity.

• Infants can use allophonic differences at word boundaries for segmentation (Jusczyk, Hohne & Bauman, 1999; Hohne & Jusczyk, 1994).

• Infants can learn phonetic categories from distributional statistics (Maye, Werker & Gerken, 2002; Maye & Weiss, 2004).


    Slide56 l.jpg

Statistical Category Learning

Speech production causes clustering along contrastive phonetic dimensions.

E.g. voicing / Voice Onset Time:
B: VOT ~ 0
P: VOT ~ 40-50

Within a category, VOT forms a Gaussian distribution. Result: a bimodal distribution.

[Figure: bimodal distribution of VOT between 0 ms and 40 ms.]


    Slide57 l.jpg

To statistically learn speech categories, infants must:

• Record frequencies of tokens at each value along a stimulus dimension.

• Extract categories from the distribution.

This requires the ability to track specific VOTs.

[Figure: token frequency along VOT (0-50 ms), with +voice and -voice clusters.]

    Slide58 l.jpg

Experiment 4

Why no demonstrations of sensitivity?

• Habituation

  • Discrimination, not ID.
  • Possible selective adaptation.
  • Possible attenuation of sensitivity.

• Synthetic speech

  • Not ideal for infants.

• Single exemplar/continuum

  • Not necessarily a category representation.

Experiment 3: Reassess the issue with improved methods.


    Slide59 l.jpg

HTPP

• Head-Turn Preference Procedure (Jusczyk & Aslin, 1995)

• Infants exposed to a chunk of language:

  • Words in running speech.
  • A stream of continuous speech (a la the statistical learning paradigm).
  • A word list.

• Memory for exposed items (or abstractions) is assessed:

  • Compare listening time between consistent and inconsistent items.



    Slide61 l.jpg

Center light blinks.



    Slide63 l.jpg

One of the side-lights blinks.


    Slide64 l.jpg

When the infant looks at the side-light…

…he hears a word: Beach… Beach… Beach…


    Slide65 l.jpg

…as long as he keeps looking.


    Slide66 l.jpg

Methods

7.5-month-old infants exposed to either 4 b-words or 4 p-words (80 repetitions total): Bomb/Palm, Bear/Pear, Bail/Pail, Beach/Peach.

They form a category of the exposed class of words.

Measure listening time on…

• Original words: Bear / Pear
• Competitors: Pear / Bear
• VOT closer to boundary: Bear* / Pear*


    Slide67 l.jpg

B: M = 3.6 ms VOT
P: M = 40.7 ms VOT
B*: M = 11.9 ms VOT
P*: M = 30.2 ms VOT

B* and P* were judged /b/ or /p/ at least 90% consistently by adult listeners (B*: 97%; P*: 96%).

Stimuli constructed by cross-splicing naturally produced tokens of each end point.


    Slide68 l.jpg

Novelty or Familiarity?

Novelty/Familiarity preference varies across infants and experiments.

Infants were classified as novelty- or familiarity-preferring by performance on the endpoints (B: 36 novelty, 16 familiarity; P: 21 novelty, 12 familiarity).

We're only interested in the middle stimuli (B*, P*). Within each group, will we see evidence for gradiency?


    Slide69 l.jpg

Categorical vs. Gradient

After being exposed to bear… beach… bail… bomb…, infants who show a novelty effect will look longer for pear than bear.

What about in between?

[Schematic: listening time for Bear, Bear*, Pear under a categorical vs. a gradient hypothesis.]


    Slide70 l.jpg

Experiment 3: Results

Novelty infants (B: 36, P: 21)

[Figure: listening time (ms) for Target, Target*, and Competitor, by exposure (B or P).]

Target vs. Target*: p<.001
Competitor vs. Target*: p=.017


    Slide71 l.jpg

Familiarity infants (B: 16, P: 12)

[Figure: listening time (ms) for Target, Target*, and Competitor, by exposure (B or P).]

Target vs. Target*: p=.003
Competitor vs. Target*: p=.012


    Slide72 l.jpg

Infants exposed to /p/

[Figure: listening time (ms) for P, P*, and B, for Novelty (N=21) and Familiarity (N=12) infants; pairwise comparisons: .009**, .024*, .028*, .018*.]


    Slide73 l.jpg

Infants exposed to /b/

[Figure: listening time (ms) for B, B*, and P, for Novelty (N=36) and Familiarity (N=16) infants; pairwise comparisons: p>.1, p>.2, p<.001**, p=.06, p=.15.]


    Slide74 l.jpg

Experiment 3 Conclusions

Contrary to all previous work:

• 7.5-month-old infants show gradient sensitivity to subphonemic detail.

  • Clear effect for /p/.
  • Effect attenuated for /b/.


    Slide75 l.jpg

Null Effect?

Reduced effect for /b/… But:

[Schematic: expected result vs. obtained listening-time pattern for Bear, Bear*, Pear.]


    Slide76 l.jpg

Actual result.

[Schematic: listening time for Bear, Bear*, Pear.]

• Bear* ≈ Pear

• Category boundary lies between Bear and Bear*: between 3 ms and 11 ms [??]

• Within-category sensitivity in a different range?


    Slide77 l.jpg

Experiment 4

Same design as Experiment 3, with VOTs shifted away from the hypothesized boundary.

Train / Test:

• B- (Bomb, Bear, Beach, Bale): -9.7 ms
• B (Bomb*, Bear*, Beach*, Bale*): 3.6 ms
• P (Palm, Pear, Peach, Pail): 40.7 ms


    Slide78 l.jpg

Familiarity infants (34 infants)

[Figure: listening time (ms) for B-, B, and P; pairwise comparisons: p=.01**, p=.05*.]


    Slide79 l.jpg

Novelty infants (25 infants)

[Figure: listening time (ms) for B-, B, and P; pairwise comparisons: p=.002**, p=.02*.]


    Slide80 l.jpg

Experiment 4 Conclusions

• Within-category sensitivity in /b/ as well as /p/.

Infants do NOT treat stimuli from the same category equivalently: gradient.


    Slide81 l.jpg

Perceptual Categorization

Categorization occurs when:

1) discriminably different stimuli…
CP: perception not independent of categorization.

2) …are treated equivalently for some purposes…
Exp 1: Lexical variants not treated equivalently (gradiency).
Exp 2: Non-equivalence enables temporal integration.
Exp 3/4: Infants do not treat category members equivalently.

3) …and stimuli in other categories are treated differently.


    Slide82 l.jpg

Experiment 4 Conclusions

• Within-category sensitivity in /b/ as well as /p/.

Infants do NOT treat stimuli from the same category equivalently: gradient.

Remaining questions:

1) Why the strange category boundary?

2) Where does this gradiency come from?


    Slide83 l.jpg

Experiment 4 Conclusions

Remaining questions:

2) Where does this gradiency come from?

[Schematic: listening time at B-, B, B*, P*, and P along the VOT continuum.]


    Slide84 l.jpg

Remaining questions:

2) Where does this gradiency come from?

Results resemble half a Gaussian…

[Schematic: listening time at B-, B, B*, P*, and P along VOT, with half a Gaussian overlaid.]


    Slide85 l.jpg

Remaining questions:

2) Where does this gradiency come from?

Results resemble half a Gaussian… and the distribution of VOTs is Gaussian (Lisker & Abramson, 1964).

Statistical learning mechanisms?


    Slide86 l.jpg

Remaining questions:

1) Why the strange category boundary?

/b/ results are consistent with (at least) two mappings.

1) Shifted boundary

• Inconsistent with prior literature.

[Figure: category mapping strength over VOT for /b/ and /p/, with a shifted boundary.]


    Slide87 l.jpg

2) Sparse categories

[Figure: category mapping strength over VOT for /b/ and /p/, with unmapped space between them; the adult boundary is marked.]

HTPP is a one-alternative task. It asks: B or not-B, not: B or P.

Hypothesis: sparse categories are a by-product of efficient learning.


    Slide88 l.jpg

Remaining questions:

1) Why the strange category boundary?

2) Where does this gradiency come from?

Are both a by-product of statistical learning?

Can a computational approach contribute?


    Slide89 l.jpg

Computational Model

Mixture of Gaussians model of speech categories:

1) Models the distribution of tokens as a mixture of Gaussian distributions over a phonetic dimension (e.g. VOT).

2) Each Gaussian represents a category. The posterior probability of a VOT ~ activation.

3) Each Gaussian has three parameters (mixing weight ϕ, mean μ, standard deviation σ).


    Slide90 l.jpg

Statistical Category Learning

1) Start with a set of randomly selected Gaussians.

2) After each input, adjust each parameter to find the best description of the input.

3) Start with more Gaussians than necessary: the model doesn't innately know how many categories there are.

• ϕ -> 0 for unneeded categories.
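A minimal sketch of such a learner. The online update rule below is my own illustrative approximation (not the model's actual equations), and the fixed starting means stand in for the randomly selected Gaussians, to keep the sketch reproducible:

```python
# Sketch of an online mixture-of-Gaussians category learner (illustrative
# update rule and constants; a stand-in for the model described above).
import math, random

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def learn(tokens, start_mus=(10, 20, 30, 40), start_sigma=5.0, lr=0.05):
    # More categories than needed; the learner must discover how many to keep.
    cats = [{"phi": 1.0 / len(start_mus), "mu": m, "sigma": start_sigma}
            for m in start_mus]
    for x in tokens:
        # Posterior responsibility of each category for this token ("activation").
        likes = [c["phi"] * gaussian(x, c["mu"], c["sigma"]) for c in cats]
        total = sum(likes) or 1e-300
        for c, like in zip(cats, likes):
            r = like / total
            c["mu"] += lr * r * (x - c["mu"])                       # mean toward token
            c["sigma"] += lr * r * (abs(x - c["mu"]) - c["sigma"])  # track spread
            c["phi"] += lr * (r - c["phi"])                         # weight -> 0 if unused
        s = sum(c["phi"] for c in cats)
        for c in cats:
            c["phi"] /= s
    return cats

random.seed(0)
tokens = [random.gauss(0, 5) for _ in range(2000)] + \
         [random.gauss(45, 8) for _ in range(2000)]
random.shuffle(tokens)

cats = sorted(learn(tokens), key=lambda c: c["phi"], reverse=True)
survivors = [c for c in cats if c["phi"] > 0.1]
print([(round(c["phi"], 2), round(c["mu"], 1)) for c in survivors])
```

With bimodal input, the two categories nearest the clusters absorb the data while the extra categories' weights decay toward zero, which is the pruning behavior the slide describes.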


    Slide93 l.jpg

• Undergeneralization

  • small σ

  • not as costly: maintains distinctiveness.


    Slide94 l.jpg

[Figure: P(success) as a function of starting σ, for 2-category and 3-category models; 39,900 models run.]

• To increase the likelihood of successful learning:

  • err on the side of caution.
  • start with small σ.


    Slide95 l.jpg

Small σ

Sparseness coefficient: % of the space not strongly mapped to any category.

[Figure: average sparseness coefficient over training epochs for starting σ = .5-1; small σ leaves unmapped space along VOT.]
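One way to make the sparseness coefficient concrete (a sketch: the threshold, grid, and category parameters below are illustrative assumptions, not the model's):

```python
# Sketch: sparseness coefficient = fraction of the VOT range not strongly
# mapped to any category. Categories are (mu, sigma) Gaussians; "strength"
# is peak-normalized so it equals 1 at the category mean.
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def sparseness(cats, lo=-10, hi=60, step=0.5, threshold=0.1):
    """Fraction of grid points where no category's mapping strength
    exceeds the threshold."""
    unmapped = total = 0
    x = lo
    while x <= hi:
        strength = max(gaussian(x, mu, sigma) * sigma * math.sqrt(2 * math.pi)
                       for mu, sigma in cats)
        if strength < threshold:
            unmapped += 1
        total += 1
        x += step
    return unmapped / total

narrow = sparseness([(0, 2), (45, 2)])    # small sigma: much unmapped space
wide = sparseness([(0, 15), (45, 15)])    # large sigma: little unmapped space
print(f"narrow: {narrow:.2f}, wide: {wide:.2f}")
```

This captures the point of the figure: small starting σ yields categories that cover only a fraction of the phonetic space, leaving the rest unmapped.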


    Slide96 l.jpg

Start with large σ (20-40).

[Figure: average sparsity coefficient over training epochs for starting σ = .5-1 vs. 20-40.]


    Slide97 l.jpg

Intermediate starting σ (3-11, 12-17).

[Figure: average sparsity coefficient over training epochs for starting σ = .5-1, 3-11, 12-17, and 20-40.]


    Slide98 l.jpg

Model Conclusions

Continuous sensitivity is required for statistical learning.

Statistical learning enhances gradient category structure.

To avoid overgeneralization, it is better to start with small estimates for σ.

A small (or even medium) starting σ => sparse category structure during infancy: much of phonetic space is unmapped.

Tokens that are treated differently may not be in different categories.


    Slide99 l.jpg

Perceptual Categorization

Categorization occurs when:

1) discriminably different stimuli…
CP: perception not independent of categorization.

2) …are treated equivalently for some purposes…
Exp 1: Lexical variants not treated equivalently (gradiency).
Exp 2: Non-equivalence enables temporal integration.
Exp 3/4: Infants do not treat category members equivalently.
Model: Gradiency arises from statistical learning.

3) …and stimuli in other categories are treated differently.
Model: Tokens treated differently are not in different categories (sparseness).
Model: Sparseness is a by-product of optimal learning.


    Slide100 l.jpg

AEM Paradigm

Identification, not discrimination: treating stimuli equivalently vs. treating stimuli differently.

Existing infant methods (Habituation, Head-Turn Preference, Preferential Looking) mostly test discrimination.

Examination of the sparseness/completeness of categories needs a two-alternative task.


    Slide101 l.jpg

AEM Paradigm

Exception: Conditioned Head-Turn (Kuhl, 1979).

• Infant hears a constant stream of distractor stimuli (a a a a…).

• Conditioned to turn head in response to a target stimulus (i) using a visual reinforcer.

• After training, generalization can be assessed.

• Approximates a Go/No-Go task.


    Slide102 l.jpg

AEM Paradigm

• When detection occurs, this could be because:

  • The stimulus is perceptually equivalent to the target.

  • The stimulus is perceptually different but a member of the same category as the target.

• When no detection occurs, this could be because:

  • The stimuli are perceptually different.

  • The stimuli are in different categories.

A solution: the multiple-exemplar approach.


    Slide103 l.jpg

AEM Paradigm

• Multiple-exemplar methods (Kuhl, 1979; 1983):

  • Training: a single distinction, i/a.

  • Irrelevant variation gradually added (speaker & pitch).

  • Good generalization.

• Infants trained on a single exemplar did not generalize.

• This exposure may mask natural biases:

  • Infants are trained on the irrelevant dimension(s).

  • Infants are exposed to the expected variation along the irrelevant dimension.


    Slide104 l.jpg

AEM Paradigm

Is [dog A] a member of [dog B]’s category?

• Yes: both dogs; both mammals; both 4-legged animals.

• No: different breeds; different physical properties.

HTPP, Habituation, and Conditioned Head-Turn methods all rely on a single response: criterion effects.

How does the experimenter establish the decision criterion?


    Slide105 l.jpg

AEM Paradigm

Multiple responses: Is [the dog] a member of [category A] or [category B]?

Pug vs. poodle: decision criteria will be based on breed-specific properties (hair type, body shape).

• Two-alternative tasks specify criteria without explicitly teaching:

  • What the irrelevant cues are.

  • Their statistical properties (expected variance).


    Slide106 l.jpg

AEM Paradigm

• Conditioned Head-Turn provides the right sort of response, but cannot be adapted to two alternatives (Aslin & Pisoni, 1980):

  • Large metabolic cost in making a head movement.

  • Requires a 180º shift in attention.

• Could we use a different behavioral response in a similar conditioning paradigm?


    Slide107 l.jpg

AEM Paradigm

Eye movements may provide the ideal response.

• Smaller angular displacements are detectable with computer-based eye-tracking.

• Metabolically cheap: quick and easy to generate.

How can we train infants to make eye movements to target locations?


    Slide108 l.jpg

AEM Paradigm

Infants readily make anticipatory eye movements to regularly occurring visual events:

• Visual Expectation Paradigm (Haith, Wentworth & Canfield, 1990; Canfield, Smith, Breznyak & Snow, 1997)

• Movement under an occluder (Johnson, Amso & Slemmer, 2003)


    Slide109 l.jpg

AEM Paradigm

Anticipatory Eye-Movements (AEM): train infants to use anticipatory eye movements as a behavioral label for category identity.

• Two-alternative response (left/right).

• Arbitrary, identification response.

• Response to a single stimulus.

• Many repeated measures.


    Slide110 l.jpg

AEM Paradigm

    Each category is associated with the left or right side of the screen.

    Categorization stimuli followed by visual reinforcer.


    Slide111 l.jpg

AEM Paradigm

Delay between stimulus and reinforcer gradually increases throughout the experiment.

[Timeline: trial 1, STIMULUS immediately followed by REINFORCER; by trial 30, a long delay separates STIMULUS and REINFORCER.]

The delay provides an opportunity for infants to make anticipatory eye movements to the expected location.
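The slowly widening anticipation window can be written as a simple schedule. This is a sketch with invented numbers (the slides only say the delay grows between trial 1 and trial 30); `ramp_trials` and `max_delay_ms` are hypothetical parameters.

```python
def anticipation_delay_ms(trial, ramp_trials=30, max_delay_ms=2000):
    """Stimulus-to-reinforcer delay for a given trial (1-based).

    The delay ramps up linearly over the first `ramp_trials` trials,
    then holds at `max_delay_ms`, giving the infant an ever-larger
    window in which an anticipatory eye movement can occur."""
    frac = min(trial, ramp_trials) / ramp_trials
    return frac * max_delay_ms

# Early trials get a short window; by trial 30 the full window is open.
```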


    Slide112 l.jpg

AEM Paradigm


    Slide113 l.jpg

AEM Paradigm


    Slide114 l.jpg

AEM Paradigm

After training on the original stimuli, infants are tested on a mixture of:

• New generalization stimuli (unreinforced): examine category structure/similarity relative to the trained stimuli.

• Original trained stimuli (reinforced): maintain interest in the experiment; provide an objective criterion for inclusion.

    Slide115 l.jpg

AEM Paradigm

[Apparatus diagram: baby facing TV; remote eye-tracker (infrared video camera, eye-tracker control unit) feeding the eye-tracking computer; magnetic head-tracker (MHT) transmitter, receiver, and control unit.]

Gaze position is assessed with an automated, remote eye-tracker.

Gaze position is recorded on standard video for analysis.


    Slide116 l.jpg

Experiment 5

Multidimensional visual categories

Can infants learn to make anticipatory eye movements in response to visual category identity?

• What is the relationship between basic visual features in forming perceptual categories?

  • Shape

  • Color

  • Orientation


    Slide117 l.jpg

Experiment 5

Train: shape (yellow square vs. yellow cross).

Test: variation in color and orientation: yellow 0º (training values); orange 10º; red 20º.

If infants ignore irrelevant variation in color or orientation, performance should be good for the generalization stimuli.

If infants’ shape categories are sensitive to this variation, performance will degrade.


    Slide118 l.jpg

Experiment 5: Results

9/10 infants scored better than chance on the original stimuli (M = 68.7% correct).

No effect of color (p>.2); significant effect of angle (p<.05): a performance deficit due to orientation (p=.002).

[Chart: Percent Correct (0–80) for the training stimuli (yellow, 0°) and generalization stimuli (orange 10°, red 20°).]


    Slide119 l.jpg

Some stimuli are uncategorized (despite very reasonable responses): sparseness.

Sparse regions of the input space.


    Slide120 l.jpg

Perceptual Categorization

Categorization occurs when:

1) discriminably different stimuli…

CP: perception is not independent of categorization.

Exp 1: Lexical variants are not treated equivalently (gradiency).

Exp 2: Non-equivalence enables temporal integration.

2) …are treated equivalently for some purposes…

Exp 3/4: Infants do not treat category members equivalently.

Model: Gradiency arises from statistical learning.

3) …and stimuli in other categories are treated differently.

Model: Tokens treated differently are not in different categories (sparseness).

Model: Sparseness is a by-product of optimal learning.

Exp 5: Shape categories show similar sparse structure.


    Slide121 l.jpg

Occlusion-Based AEM

Infants do make eye movements to anticipate objects’ trajectories under an occluder (Johnson, Amso & Slemmer, 2003).

• AEM is based on an arbitrary mapping:

  • An unnatural mechanism drives anticipation.

  • Requires slowly changing the duration of the delay period.

Can infants associate anticipated trajectories (under the occluder) with target identity?


    Slide122 l.jpg

Red Square


    Slide123 l.jpg

Yellow Cross


    Slide124 l.jpg

Yellow Square


    Slide125 l.jpg

Experiment 6

Can AEM assess auditory categorization?

Can infants “normalize” for variations in pitch and duration?

or…

Are infants sensitive to acoustic detail during a lexical identification task?


    Slide126 l.jpg

“teak!”  “lamb!”

Training:

“Teak” -> rightward trajectory.

“Lamb” -> leftward trajectory.

Test: Lamb & Teak with changes in:

Duration: 33% and 66% longer.

Pitch: 20% and 40% higher.

If infants ignore irrelevant variation in pitch or duration, performance should be good for the generalization stimuli.

If infants’ lexical representations are sensitive to this variation, performance will degrade.



    Experiment 6 results 2 l.jpg
Experiment 6: Results

Pitch: p>.1. Duration: p=.002.

20 training trials; 11 of 29 infants performed better than chance.

[Chart: Proportion Correct Trials (0–0.9) for the training stimuli vs. duration (D1, D2) and pitch (P1, P2) generalization stimuli.]


    Slide129 l.jpg

Variation in pitch is tolerated for word categories.

Variation in duration is not, and the effect takes a gradient form.

Again, some stimuli are uncategorized (despite very reasonable responses): sparseness.


    Slide130 l.jpg

Perceptual Categorization

Categorization occurs when:

1) discriminably different stimuli…

CP: perception is not independent of categorization.

Exp 1: Lexical variants are not treated equivalently (gradiency).

Exp 2: Non-equivalence enables temporal integration.

2) …are treated equivalently for some purposes…

Exp 3/4: Infants do not treat category members equivalently.

Model: Gradiency arises from statistical learning.

Exp 6: Gradiency in infant response to duration.

3) …and stimuli in other categories are treated differently.

Model: Tokens treated differently are not in different categories (sparseness).

Model: Sparseness is a by-product of optimal learning.

Exp 5,6: Shape and word categories show similar sparse structure.


    Slide131 l.jpg

Exp 7: Face Categorization

Can AEM help us understand face categorization? Are facial variants treated equivalently?

Train: two arbitrary faces.

Test: the same faces at 0°, 45°, 90°, 180°.

Facial inversion effect.

    Slide133 l.jpg

Experiment 7: Results

22/33 infants successfully categorized the vertical faces.

• 90º vs. Vertical: p<.001

• 90º vs. 45º & 180º: p<.001

• 45º, 180º: at chance (p>.2)

• 90º: p=.111

[Chart: Percent Correct (0–1) at Vertical, 45º, 90º, and 180º.]


    Slide134 l.jpg

Experiment 7

AEM is useful with faces.

The facial inversion effect is replicated.

Generalization is not simple similarity (compare 90º vs. 45º): infants’ own category knowledge is reflected.

Resembles the VOT (b/p) results: within a dimension, some portions are categorized and others are not.

Again, some stimuli are uncategorized (despite very reasonable responses): sparseness.


    Slide135 l.jpg

Perceptual Categorization

Categorization occurs when:

1) discriminably different stimuli…

CP: perception is not independent of categorization.

Exp 1: Lexical variants are not treated equivalently (gradiency).

Exp 2: Non-equivalence enables temporal integration.

2) …are treated equivalently for some purposes…

Exp 3/4: Infants do not treat category members equivalently.

Model: Gradiency arises from statistical learning.

Exp 6: Gradiency in infant response to duration.

3) …and stimuli in other categories are treated differently.

Model: Tokens treated differently are not in different categories (sparseness).

Model: Sparseness is a by-product of optimal learning.

Exp 5,6,7: Shape, word, and face categories show similar sparse structure.


    Slide136 l.jpg

    Again, some stimuli are uncategorized (despite very reasonable responses): sparseness.

    Evidence for complex, but sparse categories: some dimensions (or regions of a dimension) are included in the category, others are not.


    Slide137 l.jpg

Infant Summary

• Infants show graded sensitivity to continuous speech cues.

• /b/ results: regions of unmapped phonetic space.

• The statistical approach provides support for sparseness:

  • Given current learning theories, sparseness results from optimal starting parameters.

• An empirical test requires a two-alternative task: AEM.

• Tests of the AEM paradigm also show evidence for sparseness in shapes, words, and faces.


    Slide138 l.jpg

Audience-Specific Conclusions

For speech people:

Gradiency: continuous information in the signal is not discarded and is useful during recognition.

Gradiency: infant speech categories are also gradient, a result of statistical learning.

For infant people:

Methodology: AEM is a useful technique for measuring categorization in infants (bonus: it works with undergrads too).

Sparseness: through the lens of a 2AFC task (or the interactions of categories), categories look more complex.


    Slide139 l.jpg

Perceptual Categorization

1) discriminably different stimuli…

CP: discrimination is not distinct from categorization.

Continuous feedback relationship between perception and categorization.

2) …are treated equivalently for some purposes…

Gradiency: Infants and adults do not treat stimuli equivalently. This property arises from learning processes as well as the demands of the task.

3) …and stimuli in other categories are treated differently.

Sparseness: Infants’ categories do not fully encompass the input. Many tokens are not categorized at all…


    Slide140 l.jpg

Conclusions

Categorization is an approximation of an underlyingly continuous system:

• Clumps of similarity in stimulus space.

• Reflecting underlying learning processes and the demands of online processing.

During development, categorization is not common across the complete perceptual space; small, specific clusters may grow into larger representations.

This is useful: it avoids overgeneralization.


    Slide141 l.jpg

Take-Home Message

Early, sparse regions of graded similarity space grow and gain structure, but retain their fundamental gradiency.


    Slide142 l.jpg

Perceptual Categories:

    Old and gradient, young and sparse.

    Bob McMurray

    University of Iowa

    Dept. of Psychology


    Slide143 l.jpg

IR Head-Tracker

[Apparatus diagram: IR head-tracker emitters and head-tracker camera above the monitor; two eye cameras on the head; subject computer and eye-tracker computer connected via Ethernet.]


    Slide144 l.jpg

Misperception: Additional Results


    Slide145 l.jpg

    20 Filler items (lemonade, restaurant, saxophone…)

    Option to click “X” (Mispronounced).

    26 Subjects

    1240 Trials over two days.


    Slide146 l.jpg

Identification Results

Significant target responses even at the extremes.

Graded effects of VOT on the correct-response rate.

[Charts: Response Rate (0–1.00) vs. VOT (0–35 ms) for voiced, voiceless, and nonword (NW) responses; one panel for Barakeet–Parakeet, one for Barricade–Parricade.]


    Slide147 l.jpg

Phonetic “Garden-Path”

“Garden-path” effect: the difference between looks to each target (b vs. p) at the same VOT.

[Charts: Fixations to Target (0–1) over time (ms) for Barricade and Parakeet, at VOT = 0 (/b/) and VOT = 35 (/p/).]
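The garden-path measure can be sketched as a small analysis function (my own construction; the trial layout and numbers below are hypothetical): for each VOT step, subtract the mean fixation proportion on "parakeet" trials from that on "barricade" trials.

```python
from collections import defaultdict

def garden_path_effect(trials):
    """Garden-path effect per VOT step: mean fixation proportion to the
    target on b-word trials ('barricade') minus that on p-word trials
    ('parakeet') at the same VOT.

    `trials` is a list of (vot_ms, word, fixation_proportion) tuples."""
    by_vot = defaultdict(lambda: {"barricade": [], "parakeet": []})
    for vot, word, fix in trials:
        by_vot[vot][word].append(fix)
    effect = {}
    for vot, d in by_vot.items():
        mean_b = sum(d["barricade"]) / len(d["barricade"])
        mean_p = sum(d["parakeet"]) / len(d["parakeet"])
        effect[vot] = mean_b - mean_p
    return effect

# Toy trials: at VOT = 0 ms listeners fixate the b-word target more;
# at 35 ms the pattern reverses, so the effect changes sign.
trials = [(0, "barricade", 0.8), (0, "parakeet", 0.6),
          (35, "barricade", 0.55), (35, "parakeet", 0.75)]
fx = garden_path_effect(trials)
```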


    Slide148 l.jpg

GP Effect: a gradient effect of VOT.

Target: p<.0001. Competitor: p<.0001.

[Charts: Garden-Path Effect (Barricade − Parakeet) vs. VOT (0–35 ms), for target and competitor fixations.]


    Slide149 l.jpg

Assimilation: Additional Results


    Slide150 l.jpg

When /p/ is heard, the bilabial feature can be assumed to come from assimilation (not an underlying /m/).

When /t/ is heard, the bilabial feature is likely to be from an underlying /m/.

    runm picks

    runm takes ***


    Slide151 l.jpg

Exp 3 & 4: Conclusions

• Within-category detail is used in recovering from assimilation: temporal integration.

  • Anticipate upcoming material.

  • Bias activations based on context.

  • As in Exp 2: within-category detail is retained to resolve ambiguity.

• Phonological variation is a source of information.


    Slide152 l.jpg

Subject hears:

“select the mud drinker”

“select the mudg gear”

“select the mudg drinker”

Critical pair.


    Slide153 l.jpg

Initial Coronal: Mud Gear vs. Initial Non-Coronal: Mug Gear

[Chart: Fixation Proportion (0–0.45) over time (0–2000 ms); markers at the onset of “gear” and the average offset of “gear” (402 ms).]

Mudg Gear is initially ambiguous, with a late bias towards “Mud”.


    Slide154 l.jpg

[Chart: Fixation Proportion (0–0.6) over time (0–2000 ms) for Initial Coronal: Mud Drinker vs. Initial Non-Coronal: Mug Drinker; markers at the onset of “drinker” and the average offset of “drinker” (408 ms).]

Mudg Drinker is also ambiguous, with a late bias towards “Mug” (the /g/ has to come from somewhere).


    Slide155 l.jpg

[Chart: Fixation Proportion (0–0.8) over time (0–600 ms) from the onset of “gear”, for assimilated vs. non-assimilated consonants.]

Looks to the non-coronal (gear) following an assimilated or non-assimilated consonant.

In the same stimuli/experiment there is also a progressive effect!


    Slide156 l.jpg

Non-parametric approach?

• Competitive Hebbian Learning (Rumelhart & Zipser, 1986). [Chart: categories along the VOT dimension.]

• Not constrained by a particular equation, so it can fill the space better.

• Similar properties in terms of starting σ and sparseness.
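The competitive-learning idea can be sketched as follows. This is a simplified, distance-based variant (Rumelhart & Zipser's formulation normalizes weights and picks winners by activation); the unit count, learning rate, and toy clusters are invented. Units that never win stay where they started, leaving regions of the space unmapped: the sparseness just described.

```python
import random

def competitive_learning(patterns, n_units=3, lr=0.1, epochs=50, seed=0):
    """Winner-take-all competitive learning: the unit whose weight
    vector is closest to the input moves toward that input."""
    rng = random.Random(seed)
    dim = len(patterns[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in patterns:
            # winner = unit with the smallest squared distance to the input
            win = min(range(n_units),
                      key=lambda i: sum((wi - xi) ** 2
                                        for wi, xi in zip(weights[i], x)))
            # Hebbian-style update: pull the winner toward the input
            weights[win] = [wi + lr * (xi - wi) for wi, xi in zip(weights[win], x)]
    return weights

# Two clusters of 2-D inputs; two units come to stand for the clusters,
# and a unit that never wins is left covering nothing.
rng = random.Random(1)
patterns = ([(rng.gauss(0.1, 0.05), rng.gauss(0.1, 0.05)) for _ in range(20)]
            + [(rng.gauss(0.9, 0.05), rng.gauss(0.9, 0.05)) for _ in range(20)])
rng.shuffle(patterns)
weights = competitive_learning(patterns)
```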

