
Presentation Transcript


  1. PRESENTATION ON “GESTURE RECOGNITION” SUBMITTED BY: SUBMITTED TO:

  2. PRESENTATION OUTLINE : • INTRODUCTION • STEPS OF GESTURE RECOGNITION • TRACKING TECHNOLOGIES • SPEECH WITH GESTURE • APPLICATIONS

  3. WHAT ARE GESTURES ??? Gestures are expressive, meaningful body motions – i.e., physical movements of the fingers, hands, arms, head, face, or body with the intent to convey information or interact with the environment.

  4. GESTURE RECOGNITION: • Mood and emotion are expressed by body language, facial expressions, and tone of voice. • Allows computers to interact with human beings in a more natural way. • Allows control without having to touch the device.

  5. Human Computer Interface using Gesture • Replace mouse and keyboard. • Pointing gestures. • Navigate in a virtual environment. • Pick up and manipulate virtual objects. • Interact with a 3D world. • No physical contact with computer. • Communicate at a distance.

  6. STEPS OF GESTURE RECOGNITION:
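The diagram for this slide did not survive the transcript. A common pipeline is capture → segmentation → feature extraction → classification; the sketch below is a minimal, invented illustration of those stages (all function names and the rule-based classifier are assumptions, not from the slides).

```python
# Toy gesture-recognition pipeline: segment -> extract features -> classify.
# The "frame" is a tiny grey-level image as nested lists; all thresholds
# and the rule-based classifier are illustrative stand-ins.

def segment(frame):
    """Keep only pixels brighter than a threshold (stand-in for skin detection)."""
    return [[1 if px > 128 else 0 for px in row] for row in frame]

def extract_features(mask):
    """A toy feature: the fraction of foreground pixels."""
    total = sum(len(row) for row in mask)
    fg = sum(sum(row) for row in mask)
    return fg / total

def classify(feature):
    """A toy rule standing in for a trained classifier."""
    return "open hand" if feature > 0.5 else "fist"

frame = [[200, 200, 50], [200, 200, 50], [200, 50, 50]]
gesture = classify(extract_features(segment(frame)))
print(gesture)   # -> open hand
```

A real system would replace each stage (skin-colour segmentation, shape features, an HMM or neural classifier) but keep this overall structure.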

  7. TRACKING TECHNOLOGIES: DATAGLOVES / CYBERGLOVES - Use of gloves equipped with sensors. - Use of fiber optic cables.

  8. SIGN LANGUAGE RECOGNITION • 5,000 gestures in the vocabulary. • Each gesture consists of a hand shape, a hand motion, and a location in 3D space. (Example hand shapes: A, C, F)

  9. Datagloves

  10. THE PROCESS: Colour Segmentation → Noise Removal → Scale by Area
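The three steps named on this slide can be sketched on a tiny grey-level "image" (nested lists). This is a hedged illustration, not the presenter's implementation; a real system would use OpenCV, and the colour range and noise rule here are invented for the example.

```python
# Colour segmentation, noise removal, and area measurement on a toy image.

def colour_segment(img, lo, hi):
    """Mark pixels whose value falls inside the target colour range."""
    return [[1 if lo <= px <= hi else 0 for px in row] for row in img]

def remove_noise(mask):
    """Drop isolated foreground pixels (no 4-connected foreground neighbour)."""
    h, w = len(mask), len(mask[0])
    def has_neighbour(y, x):
        return any(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                   for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)])
    return [[mask[y][x] if has_neighbour(y, x) else 0 for x in range(w)]
            for y in range(h)]

def area(mask):
    """Foreground area, used to scale the shape to a standard size."""
    return sum(sum(row) for row in mask)

img = [[90, 95, 10], [92, 10, 10], [10, 10, 94]]
mask = remove_noise(colour_segment(img, 80, 100))
print(area(mask))   # -> 3 (the isolated corner pixel was removed)
```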

  11. TRACKING TECHNOLOGIES: • COMPUTER-VISION TECHNOLOGY: USE OF CAMERAS • DEPTH CAMERAS • STEREO CAMERAS • NORMAL CAMERAS

  12. THE VIDEOPLACE: • Text is entered by pointing at the desired character. • The index finger is recognised and, when extended, becomes a drawing tool. • The index fingers and thumbs of the two hands are recognised and are used to control the shape of the object being defined.

  13. [Diagram: a sequence of yes/no decisions used to classify a hand shape among candidate letters such as Y, A, B, and C]

  14. Hierarchical Search • We need to search thousands of images. • How to do this efficiently? • We need to use a “coarse-to-fine” search strategy.

  15. [Images: the original image and versions blurred with factors 1, 2, and 3]

  16. Multi-scale Hierarchy: [Images of the scene at blurring factors 3.0, 2.0, and 1.0]
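The coarse-to-fine idea from the last three slides can be sketched in one dimension: match at half resolution first, then refine only near the coarse hit instead of searching everywhere. This is a hedged toy, assuming a sum-of-absolute-differences match cost; real systems do the same over a 2-D image pyramid.

```python
# Coarse-to-fine template search on a 1-D signal.

def downsample(xs):
    """Halve resolution by averaging adjacent pairs."""
    return [(xs[i] + xs[i + 1]) / 2 for i in range(0, len(xs) - 1, 2)]

def sad(a, b):
    """Sum of absolute differences: a simple match cost."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_offset(signal, template, lo, hi):
    """Best alignment of template within signal, offsets restricted to [lo, hi]."""
    hi = min(hi, len(signal) - len(template))
    return min(range(max(lo, 0), hi + 1),
               key=lambda o: sad(signal[o:o + len(template)], template))

def coarse_to_fine(signal, template):
    # Coarse pass: search the whole signal at half resolution...
    c = best_offset(downsample(signal), downsample(template), 0, len(signal))
    # ...fine pass: search only a small window around the coarse hit.
    return best_offset(signal, template, 2 * c - 2, 2 * c + 2)

signal   = [0, 0, 0, 0, 5, 9, 5, 0, 0, 0]
template = [5, 9, 5]
print(coarse_to_fine(signal, template))   # -> 4 (offset of the best match)
```

The fine pass examines only five offsets instead of eight; on real images the saving compounds across pyramid levels.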

  17. Motion Recognition • Hidden Markov Models (HMMs) are used to model the time sequence of images. • A feature sequence f is scored against each word model: P(f | HMM1), P(f | HMM2), and so on, for HMM1 (Hello), HMM2 (Good), HMM3 (Bad), HMM4 (House).
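The selection rule on this slide can be sketched directly: score the observed sequence under each word HMM with the forward algorithm and pick the model with the highest likelihood. The two tiny models below are invented for illustration (two states, two observable motion symbols), not taken from the presentation.

```python
# Score a feature sequence under each word HMM and pick the best.

def forward_likelihood(obs, start, trans, emit):
    """P(obs | HMM) for a discrete HMM, computed with the forward algorithm."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(len(alpha))) * emit[s][o]
                 for s in range(len(alpha))]
    return sum(alpha)

def recognise(obs, models):
    """Return the word whose HMM gives the highest likelihood for obs."""
    return max(models, key=lambda name: forward_likelihood(obs, *models[name]))

# Each model: (start probabilities, transition matrix, per-state emissions).
models = {
    "Hello": ([1.0, 0.0], [[0.5, 0.5], [0.0, 1.0]],
              [{"up": 0.9, "down": 0.1}, {"up": 0.1, "down": 0.9}]),
    "Good":  ([1.0, 0.0], [[0.5, 0.5], [0.0, 1.0]],
              [{"up": 0.1, "down": 0.9}, {"up": 0.9, "down": 0.1}]),
}
print(recognise(["up", "up", "down"], models))   # -> Hello
```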

  18. Prediction and Tracking • Given previous frames we can predict what will happen next. • This speeds up the search. • Occlusions remain a difficulty, since the tracked hand can be hidden from the camera.
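The prediction step can be sketched with the simplest possible motion model: assume roughly constant velocity between frames, predict the next position, and search only a small window around it rather than the whole image. The window radius and positions below are invented for the example; real trackers use a Kalman filter for the same purpose.

```python
# Constant-velocity prediction to narrow the tracking search.

def predict(prev, curr):
    """Predict the next (x, y) position assuming constant velocity."""
    return (2 * curr[0] - prev[0], 2 * curr[1] - prev[1])

def search_window(pred, radius=8):
    """Bounding box (x0, y0, x1, y1) to search around the prediction."""
    x, y = pred
    return (x - radius, y - radius, x + radius, y + radius)

prev, curr = (10, 20), (14, 23)    # hand positions in the last two frames
pred = predict(prev, curr)
print(pred)                        # -> (18, 26)
print(search_window(pred))         # -> (10, 18, 26, 34)
```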

  19. Co-articulation • In fluent dialogue, signs are modified by the preceding and following signs, producing intermediate forms between them (e.g. between signs A and B).

  20. Face recognition Single pose • Standard head-and-shoulders view with uniform background • Easy to find face within image

  21. Aligning Images • Faces in the training set must be aligned with each other to remove the effects of translation, scale, rotation, etc. • It is easy to find the position of the eyes and mouth and then shift and resize the images so that they are aligned with each other.

  22. Nearest Neighbour • Once the images have been aligned you can simply search for the member of the training set which is nearest to the test image. • There are a number of measures of distance including Euclidean distance, and the cross-correlation.
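Nearest-neighbour matching with Euclidean distance can be sketched in a few lines; each "image" here is just a flat list of pixel values, and the training data is invented for illustration.

```python
# Nearest-neighbour face matching with Euclidean distance.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(test, training):
    """Return the label of the training image closest to the test image."""
    return min(training, key=lambda label: euclidean(test, training[label]))

training = {"alice": [10, 10, 200], "bob": [200, 10, 10]}
print(nearest([12, 11, 190], training))   # -> alice
```

Cross-correlation would slot in as an alternative distance by swapping `euclidean` for a similarity score and `min` for `max`.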

  23. Principal Components Analysis • PCA reduces the number of dimensions, so the memory requirement is much reduced. • The search time is also reduced.
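The idea behind this slide can be sketched as follows: project each image onto a few principal directions so faces are stored and compared as short vectors instead of raw pixels. This toy finds the top component by power iteration on the covariance matrix; the 2-D data is invented for illustration (real eigenface systems work on thousands of pixel dimensions).

```python
# PCA sketch: find the top principal component and project onto it.

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def covariance(vectors, mu):
    d, n = len(mu), len(vectors)
    return [[sum((v[i] - mu[i]) * (v[j] - mu[j]) for v in vectors) / n
             for j in range(d)] for i in range(d)]

def top_component(cov, iters=100):
    """Dominant eigenvector of cov, via power iteration."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(vec, mu, comp):
    """One-number code for an image: its coordinate along the component."""
    return sum((vec[i] - mu[i]) * comp[i] for i in range(len(mu)))

# Data varying almost entirely along the first axis.
data = [[0, 0], [2, 0.1], [4, -0.1], [6, 0]]
mu = mean(data)
comp = top_component(covariance(data, mu))
codes = [project(v, mu, comp) for v in data]   # 1-D codes replacing 2-D points
```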

  24. Problems with PCA • The same person may sometimes appear differently due to • Beards, moustaches • Glasses, • Makeup • These have to be represented by different ellipsoids.

  25. Facial Expressions • There are six basic facial expressions: anger, fear, disgust, happiness, sadness, and surprise. • We could use PCA on the eyes and mouth – so we could have eigeneyes and eigenmouths.

  26. Multiple Poses • Heads must now be aligned in 3D world space. • Classes now form trajectories in feature space. • It becomes difficult to recognise faces because the variation due to pose is greater than the variation between people.

  27. Model-based Recognition • We can fit a model directly to the face image. • The model consists of a mesh which is matched to facial features such as the eyes, nose, mouth, and edges of the face. • We use PCA to describe the parameters of the model rather than the pixels.

  28. Speech with Gesture • Voice and gesture complement each other, forming a more powerful interface than either modality alone. • Speech and gesture together make a more interactive interface. • Combining gesture and voice increases recognition accuracy.

  29. MEDIA ROOM • Within the media room, the user can use gesture, speech, eye movements, or a combination of all three. • Example: one application allowed the user to manage colour-coded ships against a map of the Caribbean. The user simply points at a location and says “create a large blue tank”, and a blue tank appears at that location.

  30. Applications • Sign language recognition: gesture recognition software can transcribe the symbols represented through sign language into text. • Control through facial gestures: Controlling a computer through facial gestures is a useful application of gesture recognition for users who may not physically be able to use a mouse or keyboard. • Immersive game technology: Gestures can be used to control interactions within video games to try and make the game player's experience more interactive or immersive.

  31. [Images: a person playing a game, with the computer responding to the user's instructions; a girl instructing the computer through her body movements]

  32. Applications • Virtual controllers: For systems where the act of finding or acquiring a physical controller could require too much time, gestures can be used as an alternative control mechanism. • Affective computing: In affective computing, gesture recognition is used in the process of identifying emotional expression through computer systems. • Remote control: Through the use of gesture recognition, “remote control with the wave of a hand” of various devices is possible. The signal must indicate not only the desired response, but also which device is to be controlled.

  33. Future Work: • Occlusions (Atid). • Grammars in Irish Sign Language. • --- Sentence Recognition. • Body Language.

  34. References • Y. Wu and T. S. Huang, “Vision-Based Gesture Recognition: A Review”, Lecture Notes in Artificial Intelligence, 1999. • Wikipedia.

  35. THANK YOU !!!! ANY QUERIES ???
