
Presentation Transcript


Gesture and ASL Acquisition

Sarah Taub, Linguistics

Dennis Galvan, Psychology

Pilar Piñar, Foreign Languages, Cultures, and Literatures

Susan Mather, Linguistics


Why this study?

  • Taub, Piñar, and Galvan (2004) compared narratives in English, Spanish, and ASL.

  • They analyzed information expressed through spatial mapping in

    • ASL verbs, classifiers, role shift

    • gesture accompanying the spoken languages

  • For the spoken languages, a significant amount of spatial information was expressed through co-speech gesture.


Why this study? (continued)

  • Nearly all hearing subjects used co-speech gesture to express conceptual information.

  • Some subjects made bigger, clearer gestures than others:

    • Clearly established significant spatial locations

    • Used their whole bodies to show character actions

    • Changed their handshapes to show characters’ shapes

  • OUR QUESTION: Could the co-speech gestures of hearing non-signers predict their ability to learn sign language?


Overview of Presentation

  • Previous research on

    • ASL L2 acquisition

    • Spatial mapping in gesture and ASL

  • Hypotheses

  • Methods

  • Coding

  • Preliminary results and next steps


ASL L2 Acquisition Research

  • Locker McKee & McKee (1992): Students and teachers identified difficult aspects of ASL grammar:

    • Adapting to the visual modality

    • Dexterity of production

    • Spatial indexing, classifiers

  • Wilcox & Wilcox (1991) identified difficult aspects of ASL for learners:

    • Modality (creates production and perception difficulties)

    • Non-manual grammar features

    • Morphological inflection and classifiers


ASL L2 Acquisition Research (continued)

  • McIntire & Snitzer Reilly (1988): examined whether communicative facial expression in hearing learners transferred into ASL facial grammar.

    • Preexisting facial expression transfers into acquisition of ASL facial grammar, but only after a reanalysis stage.


Our study as a next step

  • We focus on the potential transfer of spatially mapped elements from gesture to ASL

    • First person “blends”

    • Third person “blends”

    • Establishment of spatial locations

  • We also investigate mental imagery skills

    • Potential effect on spatial mapping in gesture

    • Potential effect on spatial mapping in ASL


“Blends” in Gesture and ASL

  • The communicator creates a “blend” of an imagined space and the space surrounding him/her. (Fauconnier & Turner 1996, Liddell 2000)


First and Third Person Perspective

  • Tannen 1986

  • First person discourse or direct quotation

    • The cat said, “I want to catch the bird.”

    • High involvement style

  • Third person discourse or indirect quotation

    • The cat said that he wanted to catch the bird.

    • Low involvement style

  • Mather & Thibeault 2000

  • How does this apply to ASL and co-speech gestures?


First Person Blends – ASL

Imagined Space

Blend: Signer as Cat


First Person Blends - Gesture

Imagined Space

Blend: Gesturer as Cat


Third Person Blends – ASL

Imagined Space

Blend: Signer’s hands as cat and wall


Third Person Blends - Gesture

Imagined Space

Blend: Gesturer’s hands as cat and wall


Mental imagery skills: effect on gesture and ASL?

  • Mental imagery skills are known to be stronger in native signers than in non-signers (Emmorey, Kosslyn, & Bellugi 1993).

  • Mental imagery skills include:

    • Mental rotation

    • Image generation

  • Could mental imagery skills form the foundation for “blends” in gesture and ASL?


Mental Rotation Test

  • Subjects are asked whether a target shape matches a comparison shape that is either identical or a mirror image.

  • The comparison shape is rotated by up to 180 degrees.

  • Native signers answer more quickly than non-signers (Emmorey, Kosslyn, & Bellugi 1993)


Image Generation Test

  • Subjects are asked to remember a block-letter image and to generate it on demand.

  • Accurate image generation leads to an accurate response about whether an “X” probe lies on the letter shape.

  • Native signers form images faster than non-signers (Emmorey, Kosslyn, & Bellugi 1993).


Main Hypothesis

  • The quality of co-speech gesture in non-signers might predict successful acquisition of spatial mapping in ASL:

    • Use of role-shift in gesture → efficient learning of first person in ASL.

    • Use of classifier-like handshapes in gesture → efficient learning of third person in ASL.

    • Ability to set up the space in gesture → efficient use of spatial locations in ASL.


Other Possible Hypotheses

  • Mental rotation and image generation scores might predict spatial mapping in gesture

  • Mental rotation and image generation scores might predict acquisition of spatial mapping in ASL


Methods

  • This is a longitudinal study of second-language learners.

  • Subjects:

    • 35 native speakers of English about to start learning ASL. (Subjects were recruited at Gallaudet and CCBC.)

    • Control group of native speakers of English about to start learning Spanish (not yet analyzed) – data collected in partnership with Karen Emmorey of SDSU.


Procedure

  • Experimental group:

    • Collect gesture data in fall, before/at the beginning of ASL classes (35 subjects)

    • Collect ASL data in spring, after approximately 8 months of ASL classes (19 subjects)

  • Control group:

    • Collect gesture data in fall

    • Collect Spanish data in spring

  • Note: approximately half the subjects did not return for the second data collection


Procedure (continued)

  • Before learning ASL: subjects were filmed retelling 7 cartoon stories and 10 cartoon clips in English to a partner.

  • After 8 months of taking ASL: subjects were filmed retelling the same clips in ASL.

  • Both times: Language background questionnaire.

  • Both times: Mental rotation and image generation tests.

  • ASL grades (in the spring, after one semester of learning ASL).


Coding

  • The research team devised a coding sheet to account for:

    • Use of first person in co-speech gesture and ASL

    • Use of third person in co-speech gesture and ASL

    • Establishment of locations in co-speech gesture and ASL


First Person Blends

  • Does use of 1st person blends in gesture predict use of 1st person blends in ASL?

  • Measures:

    • eye gaze matches the character’s eye gaze

    • facial expression shows the character’s emotion

    • body part movements show the character’s actions


Third Person Blends

  • Does use of 3rd person blends in gesture predict use of 3rd person blends in ASL?

  • Measures:

    • handshape and orientation plausibly represent the entity’s shape (in gesture)

    • handshape/entity match is correct for ASL

    • separate handshapes are used for different entities


Establishment of Locations

  • Does establishment of locations in gesture predict establishment of locations in ASL?

  • Measures:

    • establishment of location

    • consistency of location

    • relative placement of 2 locations

    • size of signing space


Coding (continued)

  • Coders filled out their coding sheets separately.

  • Coding ratings were then compared and discussed until complete consensus was reached for each subject.

  • The coders watched and coded the retelling of two cartoon clips per subject.

    • SWING SMASH

    • SWING WAGGLE
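
The presentation does not reproduce the coding sheet itself. As a purely hypothetical sketch (the field names, the yes/no credit per measure, and the example values below are all invented for illustration, not the authors' instrument), one way to represent such a sheet and collapse it into per-category counts might look like this in Python:

    # Hypothetical coding-sheet record: one per subject per clip, with a yes/no
    # judgment for each measure listed on the preceding Coding slides.
    FIRST_PERSON = ["eye_gaze_matches_character", "facial_expression_shows_emotion",
                    "body_movements_show_actions"]
    THIRD_PERSON = ["handshape_plausible_for_entity", "handshape_correct_for_asl",
                    "separate_handshapes_per_entity"]
    LOCATIONS = ["location_established", "location_consistent",
                 "two_locations_placed_relative_to_each_other", "adequate_signing_space"]

    def category_score(sheet: dict, measures: list[str]) -> int:
        """Count how many measures in one category were credited for this retelling."""
        return sum(bool(sheet.get(m)) for m in measures)

    # Example: one subject's gesture retelling of the SWING WAGGLE clip.
    swing_waggle_gesture = {
        "eye_gaze_matches_character": True,
        "facial_expression_shows_emotion": True,
        "body_movements_show_actions": False,
        "handshape_plausible_for_entity": True,
        "handshape_correct_for_asl": False,
        "separate_handshapes_per_entity": False,
        "location_established": True,
        "location_consistent": True,
        "two_locations_placed_relative_to_each_other": False,
        "adequate_signing_space": True,
    }

    print(category_score(swing_waggle_gesture, FIRST_PERSON))   # 2
    print(category_score(swing_waggle_gesture, THIRD_PERSON))   # 1
    print(category_score(swing_waggle_gesture, LOCATIONS))      # 3

Per-clip category counts of this kind are one plausible way to obtain the per-subject scores that the correlation analyses reported later would require.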


Overview of results

  • Mental imagery tasks:

    • No correlation with gesture or ASL spatial mapping

    • Differences between those who returned for the second data collection and those who did not

    • Differences among different majors

  • Gesture hypotheses

    • First person blends: partly supported

    • Third person blends: not supported

    • Establishment of location: partly supported

    • Large differences between clips


Results for Image Generation and Mental Rotation Tasks


Fall Mental Rotation and Image Generation tasks: comparing those who returned for the second data collection with those who did not. Those who returned did better in the fall, but the differences were not significant.


18 of 19 subjects improved on the second Mental Rotation task (p < .001, a significant difference).
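
The slide does not say which paired test produced this p-value. As one illustration (an assumption, not necessarily the authors' analysis), a two-sided sign test on the 18-of-19 split alone already falls well below .001:

    # Illustrative sign test: under the null hypothesis that improvement and
    # decline are equally likely, how probable is a split this lopsided?
    from math import comb

    n, k = 19, 18
    upper_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n   # P(X >= 18 | Binomial(19, 0.5))
    p_two_sided = min(1.0, 2 * upper_tail)                           # double for a two-sided test
    print(f"two-sided sign-test p = {p_two_sided:.6f}")              # about 0.000076, i.e. p < .001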


11 of 19 subjects improved on the second Image Generation task (p = .109, not a significant difference).


Number of items correct, by education program, for the Mental Rotation task (p = .591, not a significant difference).


Number of items correct, by education program, for the Image Generation task (p = .021, a significant difference).


Hypothesis 1: First Person Blends in gesture will be correlated with First Person Blends in ASL.

  • True for Swing-Waggle clip, r = .622* (p=.004)

  • Not true for Swing-Smash clip, r = .342


Hypothesis 2: Third Person Blends in gesture will be correlated with Third Person Blends in ASL.

  • Not true for Swing-Waggle clip, r = -.066

  • Not true for Swing-Smash clip, r = -.014


Hypothesis 3: Establishment of locations in gesture will be correlated with establishment of locations in ASL.

  • True for Swing-Waggle clip, r = .511* (p=.036)

  • Not true for Swing-Smash clip, r = .007
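
The r and p values in Hypotheses 1-3 are correlation coefficients relating per-subject gesture scores to ASL scores on the same clip. The deck does not state which correlation statistic was used; the sketch below simply assumes a Pearson correlation, uses invented scores rather than the study's data, and assumes SciPy is available:

    # Illustration only: made-up per-subject scores for one clip.
    from scipy.stats import pearsonr

    gesture_scores = [2, 3, 1, 3, 2, 0, 3, 1, 2, 3]   # e.g., first person measures credited in gesture
    asl_scores     = [1, 3, 1, 2, 2, 1, 3, 0, 2, 3]   # the same subjects' first person measures in ASL

    r, p = pearsonr(gesture_scores, asl_scores)       # returns the correlation and its p-value
    print(f"r = {r:.3f}, p = {p:.3f}")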


Summary and New Questions

  • Scores on mental imagery tasks do not correlate with spatial mapping in gesture or in ASL

  • High scores on mental imagery tasks correlate with persistence in study

    • Are high scorers more likely to stay in ASL programs?

    • Does study of ASL improve mental rotation and image generation scores?

    • Do people with high image generation skills self-select into interpretation training programs?


Summary and New Questions (continued)

  • Partial support for correlation of gesture with ASL aptitude:

    • Establishment of locations, for the Swing-Waggle clip

    • First person blends, for the Swing-Waggle clip

    • No support for third person blends, in either clip

  • Why were the two clips different?

    • Perhaps they tested very different skills

    • Need to analyze additional clips


Next Steps

  • Add additional subjects (19 subjects tested in partnership with Karen Emmorey of SDSU)

    • Will correlations still hold? Will trends become significant?

  • Code additional clips

    • What patterns will appear, and why?

  • Analyze control group of Spanish learners

    • Does gesture correlate with learning any language, not just a signed language?


Thanks to

  • Gallaudet University Priority Research Grant for three years of support

  • Research assistants

    • Marisa Bennett, Jessica Bentley, Brett Best, Carla Colley, Angela Crawford, Will Garrow, Shannon Grady, Randall Hogue, Christy Lively, Kristina Parmenter, Rachel Rosenstock, David Warn


REFERENCES

  • Emmorey, K., S. Kosslyn, & U. Bellugi. 1993. Visual imagery and visual-spatial language: Enhanced imagery abilities in deaf and hearing signers. Cognition, 46, 139-181.

  • Fauconnier, G. and M. Turner. 1996. Blending as a central process of grammar. In A. Goldberg (ed.) Conceptual Structure, Discourse, and Language, pp.113-130. Stanford, CA: CSLI.

  • Liddell, S. 2000. Blended spaces and deixis in sign language discourse. In D. McNeill (ed.) Language and Gesture. Cambridge: Cambridge University Press.

  • Locker McKee, R. & D. McKee. 1992. What’s so hard about learning ASL?: students’ & teachers’ perceptions. Sign Language Studies, 75: 129-158.

  • McIntire, M. L. & J. Snitzer Reilly. 1988. Nonmanual behaviors in L1 & L2 learners of American Sign Language. Sign Language Studies, 61: 351-375.

  • Mather, S. & A. Thibeault. 2000. Creating an involvement-focused style in book reading with deaf and hard-of-hearing students: The visual way. In C. Chamberlain, J. Morford, & R. Mayberry (eds.) Language Acquisition by Eye. Hillsdale, NJ: Lawrence Erlbaum Associates.

  • Taub, S., P. Piñar & D. Galvan. 2004. The encoding of spatial information in speech/gesture and sign language. Paper presented to the 8th international conference on Theoretical Issues in Sign Language Research, Barcelona, Spain.

  • Tannen, D. 1986. That’s Not What I Meant! New York: Morrow.

  • Wilcox, S. & P. Wilcox. 1991. Learning to see: ASL as a second language. Center for Applied Linguistics. ERIC Clearinghouse on Languages and Linguistics. Englewood Cliffs, N.J.: Prentice Hall.

