
Combined Gesture-Speech Analysis and Synthesis



  1. Combined Gesture-Speech Analysis and Synthesis M. Emre Sargın, Ferda Ofli, Yelena Yasinnik, Oya Aran, Alexey Karpov, Stephen Wilson, Engin Erzin, Yücel Yemez, A. Murat Tekalp

  2. Outline • Project Objective • Technical Details • Preparation of Gesture-Speech Database • Determination of Gestural–Auditory Events • Detection of Gestural–Auditory Events • Gesture-Speech Correlation Analysis • Synthesis of Gestures Accompanying Speech • Resources • Concluding Remarks and Future Work • Demonstration

  3. Project Objective • The production of speech and gesture is interactive throughout the entire communication process. • Computer-Human Interaction systems should be interactive in the same way: in an edutainment application, for example, an animated person’s speech should be aided and complemented by its gestures. • Two main goals of this project: • Analysis and modeling of the correlation between speech and gestures. • Synthesis of correlated, natural gestures accompanying speech.

  4. Technical Details • Preparation of Gesture-Speech Database • Determination of Gestural–Auditory Events • Detection of Gestural–Auditory Events • Gesture-Speech Correlation Analysis • Synthesis of Gestures Accompanying Speech

  5. Preparation of Database • The gestures and speech of a single subject (Can-Ann) were investigated. • A 25-minute video of a native English speaker giving directions: 25 fps, 38,249 frames.

  6. Determination of Gestural–Auditory Events • The database is manually examined to find specific, repetitive gestural and auditory events. • Note that the events found for one specific subject are personal and can vary from culture to culture. For example, during refusal phrases: • Turkish style → upward movement of the head • European style → left-right movement of the head • Can-Ann does not use these gestural events at all. • Auditory events: • Semantic information (keywords): “Left”, “Right” and “Straight”. • Prosodic information: “Accent”. • Gestural events: • Head movements: “Down”, “Tilt”. • Hand movements: “Left”, “Right”, “Straight”.

  7. Correlation Results

  8. Detection of Gesture Elements • In this project, we consider arm and head gestures. • Gesture features are selected as: • Head gesture features: global motion parameters calculated within the head region. • Hand gesture features: hand center-of-mass position and its velocity. • Main tasks in the detection of gesture elements: • Tracking of the head region • Optical flow based • Tracking of the hand region (see the Kalman-filter sketch below) • Kalman filter based • Particle filter based • Extraction of gesture features • Recognition and labeling of gestures
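To make the Kalman-filter-based hand tracking concrete, here is a minimal Python sketch using OpenCV (listed in the Resources slide) with a constant-velocity state model over the hand's center of mass. The noise covariances, initial state and toy centroid sequence are illustrative assumptions, not values from the project.

    import numpy as np
    import cv2

    # Constant-velocity Kalman filter for the hand's center of mass.
    # State: [x, y, vx, vy]; measurement: [x, y] (hand blob centroid).
    kf = cv2.KalmanFilter(4, 2)
    dt = 1.0 / 25.0  # the database video runs at 25 fps
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3      # assumed
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # assumed
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    kf.statePost = np.array([[100.0], [200.0], [0.0], [0.0]],
                            dtype=np.float32)  # initialized at first detection

    def track(centroids):
        """Filter a sequence of (x, y) hand centroids (None = detection failed).
        Returns smoothed positions and velocities, i.e. the hand gesture
        features named on the slide."""
        features = []
        for c in centroids:
            pred = kf.predict()
            if c is not None:
                est = kf.correct(np.array([[c[0]], [c[1]]], dtype=np.float32))
            else:
                est = pred  # coast on the motion model when the hand is lost
            x, y, vx, vy = est.flatten()
            features.append((x, y, vx, vy))
        return features

    # Toy usage: a hand drifting right with a missed detection at frame 3.
    print(track([(100.0, 200.0), (104.0, 200.0), None, (112.0, 201.0)]))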

  9. Detection of Auditory Elements • In this project, we consider semantic and prosodic events. • Main tasks in the detection of auditory elements: • Extraction of speech features (see the sketch below): • MFCC • Pitch • Intensity • Keyword spotting • HMM based • Dynamic time warping based • Accent detection • HMM based • Sliding window based
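The slide names MFCC, pitch and intensity as the speech feature streams. Below is a minimal sketch of extracting all three in Python with the librosa library, used here as a stand-in for the HTK/Praat tooling actually listed in the Resources slide; the sample rate and frame sizes are illustrative assumptions.

    import numpy as np
    import librosa

    def speech_features(wav_path, sr=16000, frame_ms=25, hop_ms=10):
        """Extract the three streams named on the slide: MFCCs,
        pitch (F0) and intensity (frame energy in dB), all sharing
        one hop size so they can be aligned frame by frame."""
        y, sr = librosa.load(wav_path, sr=sr)
        n_fft = int(sr * frame_ms / 1000)
        hop = int(sr * hop_ms / 1000)

        # 13 Mel-frequency cepstral coefficients per frame
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                    n_fft=n_fft, hop_length=hop)

        # Fundamental frequency via probabilistic YIN; NaN on unvoiced frames
        f0, voiced, _ = librosa.pyin(y, sr=sr,
                                     fmin=librosa.note_to_hz('C2'),
                                     fmax=librosa.note_to_hz('C6'),
                                     frame_length=4 * n_fft, hop_length=hop)

        # Intensity as per-frame RMS energy, converted to dB
        rms = librosa.feature.rms(y=y, frame_length=n_fft, hop_length=hop)[0]
        intensity_db = librosa.amplitude_to_db(rms, ref=np.max)

        return mfcc, f0, intensity_db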

  10. Keyword Spotting (HMM Based): Training • A speaker-dependent speech recognition system: labels for keywords and training speech go in, a keyword spotter comes out; at test time it takes unknown speech and outputs labels for the keywords it finds. • The Hidden Markov Toolkit (HTK) was used as the base technology for developing the keyword spotter. • 20 minutes of speech were labelled manually and used for training; each keyword was pronounced at least 30 times in the training speech. • Grammar: the keywords “left”, “right” and “straight”, plus “silence” and “garbage” fillers (see the sketch below).
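For illustration only: in HTK's HParse notation, a spotting grammar built from the slide's word list might look roughly like the two lines below (a sketch, not the authors' actual grammar file). The angle brackets denote one or more repetitions, so the decoder loops freely over keywords, garbage and silence, and keyword hits are read off the recognized label sequence.

    $word = LEFT | RIGHT | STRAIGHT | GARBAGE | SIL;
    ( < $word > )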

  11. Keyword Spotting (HMM Based): Testing • 5.5 minutes of speech were used for testing. • The test fragment contains approximately 600 words, of which 35 are keywords. • First experiments: the keyword spotter was able to find almost all keywords in the test speech, but it produced many false alarms.

  12. Keyword Spotting (Dynamic Time Warping) • MFCC parameters are used for parameterization. • The dynamic time warping method finds an optimal match between two given sequences (e.g., time series); here, a stored keyword template is matched against segments of the test speech (see the sketch below). • Results: (figure)
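The numpy sketch below spells out the DTW distance such a spotter relies on: a stored keyword template (a sequence of MFCC frames) is aligned against a test segment, and a small normalized distance signals a match. The frame features and the length normalization are illustrative assumptions.

    import numpy as np

    def dtw_distance(template, test):
        """Dynamic time warping distance between two feature sequences
        of shape (n_frames, n_features). A keyword is 'spotted' when a
        template's distance to a sliding test segment is small."""
        n, m = len(template), len(test)
        # Pairwise Euclidean distances between frames
        cost = np.linalg.norm(template[:, None, :] - test[None, :, :], axis=2)
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                # Extend the cheapest of the three allowed warping moves
                acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],
                                                     acc[i, j - 1],
                                                     acc[i - 1, j - 1])
        # Normalize by path length so short and long templates compare fairly
        return acc[n, m] / (n + m)

    # Toy usage: a time-stretched copy still matches; a flipped one does not.
    a = np.sin(np.linspace(0, 3, 30))[:, None]
    b = np.sin(np.linspace(0, 3, 50))[:, None]
    print(dtw_distance(a, b))   # small
    print(dtw_distance(a, -b))  # large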

  13. Accent Detection (Sliding Window Based) • The following parameters are calculated over a sliding window: • Pitch contour • Number of local minima and maxima in the pitch contour • Intensity • Windows with high intensity values are selected. • Median filtering is used to remove short windows. • The candidate accent windows are grouped using connected-component analysis. • Candidate accent regions that contain too few or too many local minima and maxima are eliminated. • The remaining candidate regions are selected as accents (see the sketch below). • The proposed method detects 68% of accents with a 25% false-alarm rate.
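A Python sketch of this pipeline is given below, operating on frame-level pitch and intensity contours and using scipy for the median filter and the connected components. The thresholds and the extrema bounds are illustrative assumptions, not the project's tuned values.

    import numpy as np
    from scipy.signal import medfilt, argrelextrema
    from scipy.ndimage import label

    def detect_accents(intensity, pitch, int_thresh,
                       min_extrema=1, max_extrema=6):
        """Return (start, end) frame spans flagged as accents,
        following the five steps listed on the slide."""
        # 1. Select high-intensity frames
        high = intensity > int_thresh
        # 2. Median filtering removes isolated short windows
        high = medfilt(high.astype(float), kernel_size=5) > 0.5
        # 3. Group surviving frames into candidate regions
        regions, n = label(high)
        accents = []
        for k in range(1, n + 1):
            idx = np.where(regions == k)[0]
            seg = pitch[idx[0]:idx[-1] + 1]
            # 4. Count pitch extrema inside the candidate region
            n_ext = (len(argrelextrema(seg, np.greater)[0]) +
                     len(argrelextrema(seg, np.less)[0]))
            # 5. Keep regions with a plausible number of extrema
            if min_extrema <= n_ext <= max_extrema:
                accents.append((int(idx[0]), int(idx[-1])))
        return accents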

  14. Synthesis of Gestures Accompanying Speech • Based on the methodology used in the correlation analysis, given a speech signal: • Features will be extracted. • The most probable speech label will be assigned to each speech pattern. • The gesture pattern most correlated with that speech pattern will be used to animate a stick model of a person (a toy sketch follows below).
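At its simplest, the last step is a lookup from detected speech labels to their most correlated gesture patterns. The toy mapping below is purely illustrative; the real pairings would come from the correlation analysis, and the gesture names are hypothetical.

    # Hypothetical label-to-gesture table (values are placeholders):
    GESTURE_FOR_LABEL = {
        "left": "hand_left",
        "right": "hand_right",
        "straight": "hand_straight",
        "accent": "head_down",
    }

    def synthesize_gestures(speech_labels):
        """Map detected speech labels to the gesture patterns that
        would drive the stick-model animation."""
        return [GESTURE_FOR_LABEL.get(lab, "rest") for lab in speech_labels]

    print(synthesize_gestures(["straight", "accent", "left"]))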

  15. Hand Gesture Models • (Figures: original hand trajectories and trajectories generated from the trained HMMs; see the sampling sketch below.)
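To make "generated trajectories based on HMM" concrete: once a Gaussian HMM has been trained on the original hand trajectories, sampling it produces new synthetic ones. The sketch below uses the hmmlearn Python library with hand-set illustrative parameters standing in for learned ones.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    # A 3-state left-to-right HMM over 2-D hand positions; in the project
    # these parameters would be learned from the original trajectories.
    hmm = GaussianHMM(n_components=3, covariance_type="diag")
    hmm.startprob_ = np.array([1.0, 0.0, 0.0])
    hmm.transmat_ = np.array([[0.9, 0.1, 0.0],
                              [0.0, 0.9, 0.1],
                              [0.0, 0.0, 1.0]])
    hmm.means_ = np.array([[0.0, 0.0],    # rest position (illustrative)
                           [40.0, 10.0],  # mid-stroke
                           [80.0, 0.0]])  # stroke endpoint
    hmm.covars_ = np.ones((3, 2)) * 4.0

    # Sampling yields a synthetic trajectory like the "generated
    # trajectories" figure on the slide.
    trajectory, states = hmm.sample(25)  # one second at 25 fps
    print(trajectory[:5], states[:5])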

  16. Resources • Database Preparation and Labeling: • VirtualDub • Anvil • Praat • Image Processing and Feature Extraction: • Matlab Image Processing Toolbox • OpenCV Image Processing Library • Gesture-Speech Correlation Analysis: • HTK HMM Toolkit • Torch Machine Learning Library

  17. Concluding Remarks and Future Work • Database will be extended with new subjects. • Algorithms and methods will be tested using new databases. • HMM based accent detector will be implemented. • Keyword and event sets will be extended. • Database scenarios will be extended.

  18. Demonstration I

  19. Demonstration II
