Dynamic Cues Affect Children’s Understanding of Others’ Emotional States
Nicole L. Nelson* & James A. Russell
Boston College

Abstract
In a within-subjects design, children (2-4 years) were shown emotional displays (happiness, sadness, anger, fear) in two modes: dynamic audiovisual clips and static photographs. While 2-year-olds performed similarly across all modes, 3- and 4-year-olds’ performance was higher when labeling the audiovisual clips, particularly for displays of sadness.

Introduction
It is typically assumed that emotional expressions convey a specific emotional message that is universally understood. Given this assumption, one would expect even young children’s emotion-labeling performance to be high, but this is not the case: children’s labeling of emotional expressions is poor. While children’s low performance has often been dismissed as errors due to attention or vocabulary, a growing body of evidence shows that children’s performance on labeling other aspects of emotion (such as behavioral responses to emotion) is higher. Thus their low performance on facial expressions is not due to an overall deficit in emotion understanding. The current study explored the possibility that children’s poor labeling performance may be an artifact of the photographed facial expressions that are often used to test children’s emotion understanding. We pursue the idea that children’s performance will be higher when labeling emotions presented with multiple, dynamic cues to emotion.

Study Overview
Children’s performance labeling dynamic audiovisual clips and static photographs was compared. The emotions used in all three modes of presentation were happiness, sadness, anger, and fear.

Method

Participants
Participants were 68 preschoolers between 29 and 59 months of age (32 male, 36 female; mean age = 44 months, SD = 9.2 months). All children were fluent in English and were tested at daycares in the greater Boston area.

Materials
Photographs of facial expressions. Two sets of five photographs were presented on a computer screen, with female posers displaying prototypical facial expressions. One set consisted of standard black-and-white photographs (Ekman & Friesen, 1978). The other set consisted of color photographs of facial expressions modeled after the standard expressions and posed by the same actress shown in the emotion audiovisual clips. These photographs were created by taking the frame from an audiovisual clip at the point at which the actress’s facial expression reached its apex (see the sketch at the end of this section).

Audiovisual clips of emotion expressions. Four audiovisual clips were created, in which a professional actress simultaneously displayed three cues to an emotion: body posture (moving from a standing neutral posture to an emotional posture), facial expression (moving from a neutral face to an emotional face), and vocal characteristics (varying intonation and pitch, but with neutral word content).

Adult comparison group. All audiovisual clips used in this study were initially tested with an adult comparison group (N = 25). The majority of the adult group chose the target label for every video used, with participant agreement as follows: happiness = 100%, sadness = 92%, anger = 80%, fear = 84%.
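The study-created photographs were stills taken at the apex of each expression in the clips, as described above. The poster does not say what tooling was used for this step; the sketch below is only an illustration of the general approach, using OpenCV with a hypothetical clip file name and a hand-picked apex timestamp.

```python
# Illustration only: hypothetical file name and timestamp, not the authors' pipeline.
import cv2

CLIP_PATH = "sad_clip.mp4"   # hypothetical audiovisual clip
APEX_TIME_MS = 2500          # hypothetical apex time (ms), chosen by inspecting the clip

cap = cv2.VideoCapture(CLIP_PATH)
if not cap.isOpened():
    raise IOError(f"Could not open {CLIP_PATH}")

# Seek to the apex timestamp and grab a single frame.
cap.set(cv2.CAP_PROP_POS_MSEC, APEX_TIME_MS)
ok, frame = cap.read()
cap.release()

if not ok:
    raise RuntimeError("No frame could be read at the requested timestamp")

# Save the frame as a still color photograph, as in the study-created set.
cv2.imwrite("sad_apex_frame.png", frame)
```

In practice the apex frame would be chosen by stepping through the clip frame by frame rather than from a fixed timestamp.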
Procedure
All children first underwent an interactive priming procedure to ensure that the needed emotion terms were readily available and to establish their ability to produce a label when prompted; all children generated each target emotion label before the test trials began. Children then participated in two counterbalanced blocks of trials: the audiovisual clips were presented in one block and the photographs in the other. The order in which the two photograph sets were presented was also counterbalanced. After children watched each audiovisual clip, they were asked “How did Molly feel when she smelled the flower?” For each set of facial expressions, children were shown a face and asked “How does she feel?”

Results
• Children’s performance was higher when presented with the audiovisual clips or the study-created photographs than when presented with the standard photographs, F(2, 130) = 5.127, p = .007 (Table 1; see the analysis sketch after the figure captions).
• In labeling the sadness stimuli, children’s performance was significantly higher when labeling the audiovisual clip than when labeling either the standard photograph or the study-created photograph (Table 1).
• Children were more likely to provide an emotion term, as opposed to a non-emotion answer (e.g., “I don’t know” or “silly”), when viewing the audiovisual clips than when viewing either the standard photographs or the study-created photographs (Table 2).
• Two-year-olds’ performance was higher when shown the study-created photographs, while three- and four-year-olds’ performance was highest when shown the audiovisual clips (Figure 2).
• Children were more likely to provide an emotion label of incorrect valence when viewing the standard photographs than when viewing the audiovisual clips or the study-created photographs, particularly for scared stimuli (Figure 3).

Table 1. Proportion of Target Labels Given by Mode of Presentation

Target Label   Standard photographs   Study-created photographs   Audiovisual clips   Mean
Happy          .897a                  .926a                       .941a               .922
Sad            .602b                  .750c                       .794c               .716
Angry          .632b                  .661b                       .706b               .667
Scared         .250c                  .309d                       .221e               .260
Mean           .596                   .662                        .665

Note: Maximum possible is 1. LSD comparisons (alpha = .05) were calculated using cell means. Means in the same row that do not share subscripts differ at p < .04. Means in the same column that do not share subscripts differ at p < .04.

Table 2. Proportion of Non-Emotion Responses by Mode of Presentation

                     Standard photographs   Study-created photographs   Audiovisual clips
Non-emotion labels   .268a                  .188b                       .136c

Note: Maximum possible is 1. LSD comparisons (alpha = .05) were calculated using cell means. Means in the same row that do not share subscripts differ at p < .04.

Figure 1. Modes of presentation used: standard photos, study-created photos, and audiovisual clips.

Figure 2. Proportion of children, according to age group (two-, three-, and four-year-olds), giving the target label for the presented stimuli, by emotion category and mode of presentation (standard photograph, study-created photograph, audiovisual clip).

Figure 3. Children’s emotion terms categorized by discrete and valence correctness.
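The omnibus test above is a within-subjects comparison of the three modes, followed by LSD comparisons on cell means. The poster does not include analysis code; the sketch below shows, for a hypothetical trial-level data file (labeling_trials.csv, with child, mode, emotion, and correct columns, all assumed names), how per-child proportions of target labels could be aggregated and submitted to a repeated-measures ANOVA with uncorrected, LSD-style pairwise follow-ups.

```python
# Illustrative sketch on a hypothetical data layout; not the authors' analysis code.
# Expected columns: child (id), mode ("standard", "study_created", "audiovisual"),
# emotion ("happy", "sad", "angry", "scared"), correct (1 if the target label was given).
from itertools import combinations

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

trials = pd.read_csv("labeling_trials.csv")  # hypothetical file

# Proportion of target labels per child and mode (the cell means behind Table 1).
cells = (trials.groupby(["child", "mode"], as_index=False)["correct"]
               .mean()
               .rename(columns={"correct": "prop_target"}))

# One-way repeated-measures ANOVA on mode (assumes every child saw every mode).
anova = AnovaRM(cells, depvar="prop_target", subject="child", within=["mode"]).fit()
print(anova.anova_table)

# LSD-style follow-ups, approximated here as uncorrected paired t-tests between modes
# (interpreted only if the omnibus F test is significant).
wide = cells.pivot(index="child", columns="mode", values="prop_target")
for a, b in combinations(wide.columns, 2):
    t, p = stats.ttest_rel(wide[a], wide[b])
    print(f"{a} vs {b}: t = {t:.3f}, p = {p:.4f}")
```

A strict Fisher’s LSD would reuse the pooled error term from the ANOVA rather than each pair’s own variance, but after a significant omnibus test the uncorrected pairwise comparisons convey the same idea.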
Discussion
• This study suggests that the standard photographs are not the best stimuli for tapping children’s understanding of emotions. The stimuli created for this study were more effective than the standard photographs in communicating the intended emotion to the children. This raises the possibility that children’s knowledge of emotional facial expressions has been underestimated in some cases.
• Children have an easier time assigning emotion terms to the audiovisual clips than to the sets of photographs. The dynamic videos do not always encourage children to use the target label for the stimuli, but they do discourage errors of valence in their answers. By eliciting emotion labels from more children, we may gain insight into how they interpret emotional expressions and how they attribute emotion to others in a dynamic scenario, which may be more realistic than static photographs.
• Although we expected that the youngest age group would gain the most benefit from the dynamic stimuli, the opposite was true; in fact, even the older age groups saw only marginal improvement when shown the audiovisual clips.

References
1. Ekman, P., & Friesen, W. V. (1978). Facial Action Coding System. Palo Alto, CA: Consulting Psychologists Press.
2. Widen, S. C., & Russell, J. A. (2003). A closer look at preschoolers’ freely produced labels for facial expressions. Developmental Psychology, 39, 114–128.

Association for Psychological Science, 2007
Contact: Nicole Nelson at nicole.nelson.1@bc.edu
