
Learning sign language by watching TV




  1. Learning sign language by watching TV Shishir Agrawal Lei Fan

  2. Introduction and Goal • Television programs are now routinely broadcast with both subtitles and a person signing (usually as an overlay) to provide simultaneous ‘translation’ of the spoken words for deaf people • To learn the translation of English words to British Sign Language signs from these TV broadcasts using the supervisory information available from subtitles broadcast simultaneously with the signing.

  3. Goal

  4. Previous Research • Required manual training data to be generated for each sign, e.g. a signer ‘performing’ each sign in controlled conditions – a time-consuming and expensive procedure. • Many considered only constrained situations, for example requiring the use of data-gloves or coloured gloves to assist with image processing at training and/or test time.

  5. Training Data Set • The source material for learning consists of many hours of video with simultaneous signing and subtitles recorded from BBC digital television. • This supervisory information is WEAK and NOISY

  6. WHY WEAK? • It is weak due to the correspondence problem: the temporal offset between a sign and its subtitle is unknown, and signing does not follow the text order. • Polysemy: the same English word may have different meanings and therefore different signs, or the same sign may correspond to multiple English words.

  7. WHY NOISY? • The occurrence of a subtitle word does not imply the presence of the corresponding sign

  8. Noisy and Weak Data Set

  9. Approach: How to Learn • Find the set of subtitles which contain the target word to form the positive training set, and those which do not contain it to form the negative set • Build a visual descriptor for every sign window corresponding to these subtitles • Ideally, find the sign which is present in all positive sequences and in none of the negative ones • Because the training data are weak and noisy, we instead score candidate signs and choose the one with the best score
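
The positive/negative split described above can be sketched as follows. This is a hypothetical illustration, not the authors' pipeline: the subtitle representation as `(text, start_frame, end_frame)` tuples and the whitespace tokenisation are assumptions.

```python
# Hypothetical sketch: partition subtitles into a positive set (contain the
# target word) and a negative set (do not), for a given target English word.

def split_subtitles(subtitles, target_word):
    """subtitles: list of (text, start_frame, end_frame) tuples."""
    positive, negative = [], []
    for text, start, end in subtitles:
        words = text.lower().split()           # naive tokenisation (assumption)
        if target_word.lower() in words:
            positive.append((text, start, end))
        else:
            negative.append((text, start, end))
    return positive, negative

subs = [
    ("the snow fell all night", 0, 120),
    ("it was a cold morning", 121, 260),
    ("more snow is expected", 261, 400),
]
pos, neg = split_subtitles(subs, "snow")   # two positives, one negative
```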

  10. Example

  11. Method • Extract the subtitles from the video • Generate a feature vector for each frame of the video describing the position, shape and orientation of the hands • Find score of each sign (window)

  12. Extracting Data from Video • OCR methods are used to extract subtitles from the video. • Each subtitle instance consists of a short text and a start and end frame indicating when the subtitle is displayed. • Typically a subtitle is displayed for around 100–150 frames.

  13. By processing subtitles we obtain a set of video sequences labeled with respect to a given target English word as ‘positive’ (likely to contain the corresponding sign) or ‘negative’ (unlikely to contain the sign).

  14. +ve Sequence Frame Range • Due to the latency between subtitle and sign • Given the subtitle in which the target word appears, the frame range of the extracted positive sequence is defined as the start frame of the previous subtitle until the end frame of the next subtitle • Consequently, positive sequences are, on average, around 400 frames in length. In contrast, a sign is typically around 7–13 frames long
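
The frame-range rule on this slide can be sketched directly; the clamping at the sequence boundaries is an assumption for subtitles with no predecessor or successor.

```python
# Sketch of the positive-sequence frame range: from the start frame of the
# previous subtitle to the end frame of the next, absorbing sign/subtitle latency.

def positive_frame_range(subtitles, i):
    """subtitles: time-ordered list of (text, start_frame, end_frame);
    i: index of the subtitle containing the target word."""
    prev_start = subtitles[max(i - 1, 0)][1]
    next_end = subtitles[min(i + 1, len(subtitles) - 1)][2]
    return prev_start, next_end

subs = [("a", 0, 100), ("b", 101, 250), ("c", 251, 400)]
rng = positive_frame_range(subs, 1)   # spans all three subtitles: (0, 400)
```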

  15. -ve Sequence Frame Range • Similarly, negative sequences are determined by searching for subtitles where the target word does not appear. • For any target word an hour of video yields around 80,000 negative frames which are collected into a single negative set.
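
The pooling of negative frames into a single set could look like the toy sketch below; representing the negative set as a flat set of frame indices is an assumption for illustration.

```python
# Sketch: collect all frames covered by subtitles lacking the target word
# into one flat negative set of frame indices.

def negative_frames(negative_subtitles):
    """negative_subtitles: list of (text, start_frame, end_frame)."""
    frames = set()
    for _, start, end in negative_subtitles:
        frames.update(range(start, end + 1))   # inclusive frame range
    return frames
```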

  16. Visual Processing • A description of the signer’s actions for each frame in the video is extracted by tracking the hands via an articulated upper-body model.

  17. Upper Body Tracking • It tracks the head, torso, arms and hands of the signer, unlike traditional methods which track only the hands • It requires a few frames (around forty) of manual initialization to specify the size of the parts and learn their colour and shape; tracking then proceeds automatically for the length of the video. • A robust method able to track long videos, e.g. an hour, despite the complex and continuously-changing background

  18. Frame Descriptor • After extracting the hands from the frame, a descriptor is built for the left hand, the right hand and the hand pair as a whole, to handle overlapping or touching hands

  19. What Does the Descriptor Describe? • Position • Shape • Orientation

  20. Output of Tracker • Segmented parts such as the hands, represented by their HOG (histogram of oriented gradients) descriptors, which capture shape.

  21. Exemplar (visual word) • This HOG descriptor is converted into an ‘exemplar’ hand shape. • Exemplars are precomputed hand shapes. • Exemplars are learnt separately for the left hand, right hand, and hand pairs, using automatically chosen ‘clean’ images: the hands must not be in front of the face, and should be separate for individual hands or connected for hand pairs. K-means clustering of the corresponding HOG descriptors is used to determine the exemplar set. • 1,000 clusters for each of left/right hands and hand pairs are used
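
The clustering step above can be illustrated with a minimal Lloyd's k-means over toy "HOG" vectors (2-D points here instead of real HOG descriptors, and 2 clusters instead of 1,000); the deterministic initialisation from the first k points is a simplification.

```python
# Minimal Lloyd's k-means: alternate nearest-centre assignment and mean update.

def kmeans(points, k, iters=20):
    centers = points[:k]                       # deterministic init (assumption)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centre (squared Euclidean)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # recompute each centre as the mean of its cluster
        centers = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers

exemplars = kmeans([(0, 0), (0, 1), (10, 10), (10, 11)], k=2)
```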

  22. How it works • Given the exemplars, the segmented hands in each frame are then assigned to their nearest exemplar (as measured by Euclidean distance between HOG descriptors) using the position of the wrists in the frame and in the hand exemplar for approximate alignment.
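
The nearest-exemplar assignment can be sketched as a plain nearest-neighbour lookup under Euclidean distance; the wrist-based alignment mentioned on the slide is omitted here for brevity.

```python
import math

# Sketch: quantise a hand's HOG descriptor to the index of its nearest
# exemplar, measured by Euclidean distance.

def nearest_exemplar(hog, exemplars):
    """hog: descriptor vector; exemplars: list of descriptor vectors."""
    return min(range(len(exemplars)),
               key=lambda i: math.dist(hog, exemplars[i]))

idx = nearest_exemplar((0.9, 0.8), [(0, 0), (1, 1)])   # closest to (1, 1)
```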

  23. Example

  24. Frame and window descriptors • Frame descriptor (position, hand exemplar, hand pair exemplar) • Window descriptor is the concatenation of the per-frame descriptors
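
The window descriptor described above is just the concatenation of n consecutive per-frame descriptors, which can be sketched as:

```python
# Sketch: concatenate n consecutive per-frame descriptors into one
# window descriptor, starting at a given frame index.

def window_descriptor(frame_descriptors, start, n):
    """frame_descriptors: list of per-frame feature lists."""
    window = []
    for f in frame_descriptors[start:start + n]:
        window.extend(f)
    return window
```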

  25. Visual distance between signs • A distance between windows is learnt for each individual target sign. • Distances for the left and right hands are defined similarly; their weights are learnt offline.

  26. Position distance • Position relative to the torso. • Translation invariant: the maximum translation is learnt from training data and is set at 5 pixels. • Other transformations (scaling and rotation) were investigated and found to be slightly detrimental.
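
One way to realise the translation-invariant distance above is to slide one position track against the other by up to the maximum translation and keep the smallest summed squared difference. The ±5 pixel bound is from the slide; the exact distance form here is an assumption.

```python
# Sketch: translation-invariant position distance between two hand
# trajectories, searching integer shifts up to max_shift pixels.

def position_distance(pos_a, pos_b, max_shift=5):
    """pos_a, pos_b: equal-length lists of (x, y) positions relative to torso."""
    best = float("inf")
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            d = sum((ax - bx - dx) ** 2 + (ay - by - dy) ** 2
                    for (ax, ay), (bx, by) in zip(pos_a, pos_b))
            best = min(best, d)
    return best

# Two identical trajectories offset by 3 pixels have distance 0.
d = position_distance([(0, 0), (1, 0)], [(3, 0), (4, 0)])
```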

  27. Hand shape distance • Reliable whether the hands are apart or touching

  28. Hand orientation distance • The square of the angle needed to rotate one hand exemplar onto the other
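
Taking orientations as single angles, the squared-rotation distance can be sketched as below; wrapping to the smallest rotation (rather than the raw difference) is an assumption.

```python
import math

# Sketch: square of the smallest angle (radians) rotating one orientation
# onto the other, with wrap-around at 2*pi.

def orientation_distance(theta_a, theta_b):
    diff = (theta_a - theta_b) % (2 * math.pi)
    diff = min(diff, 2 * math.pi - diff)     # take the smaller rotation
    return diff ** 2
```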

  29. Each positive sequence in turn is used as the ‘driving sequence’, where each temporal window of length n within the sequence is considered as a template for the sign.

  30. Sliding window classifier • The classifier is used to determine if a temporal window matches the ‘template’ window

  31. MIL • For a given target word, each positive sequence forms a ‘bag’ of candidate windows: at least one window in each positive bag is assumed to contain the sign (Multiple Instance Learning), while negative sequences supply individual negative instances.

  32. Score function • The score function is maximized to estimate the parameters; it combines predictions on positive bags and negative instances with prior knowledge about the likely temporal location of target signs.

  33. Score function • The distribution of errors is modelled by fitting a parametric model to ground-truth training data.

  34. Temporal prior • Sign instances corresponding to a target word are more likely to be temporally located close to the centre of positive sequences • The prior takes one form for bags with negative output and another otherwise [equations shown on slides]

  35. Temporal prior • The (maximum-likelihood) prior over temporal locations, scaled to [-1, +1], is learnt from a subset of signs

  36. Maximizing the score • Given a template window, the score function is maximized by searching over candidate windows and a set of thresholds • The operation is repeated for all template windows; the template window that maximizes the score is deemed to be the sign corresponding to the target word.
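
The selection loop above can be sketched in a heavily simplified form: score each template against the positive bags and negative windows for a grid of thresholds, and keep the best template. The scoring rule here (bags hit minus negative false positives) is a stand-in for the paper's probabilistic score, not the authors' model.

```python
# Hedged sketch of template selection over bags, windows and thresholds.

def best_template(templates, positive_bags, negative_windows, dist, thresholds):
    """positive_bags: list of bags (lists of windows); dist: window distance."""
    def score(t, thr):
        # a positive bag counts as a hit if any window matches the template
        hits = sum(any(dist(t, w) < thr for w in bag) for bag in positive_bags)
        # matches in the negative set count against the template
        false_pos = sum(dist(t, w) < thr for w in negative_windows)
        return hits - false_pos
    return max(((t, thr) for t in templates for thr in thresholds),
               key=lambda p: score(*p))[0]

# Toy example: windows are scalars, distance is absolute difference.
best = best_template([1.0, 10.0],
                     [[0.9, 5.0], [1.1, 7.0]],
                     [9.8, 10.2, 5.0],
                     lambda a, b: abs(a - b),
                     [0.5, 2.0])
```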

  37. Experiment • Given an English word, the goal is to identify the corresponding sign. • Success if: • i. the selected template window shows the true sign (at least 50% overlap with ground truth) • ii. at least 50% of all windows in the matched sequence show the true sign.
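
Criterion (i) can be illustrated with a simple temporal-overlap check; measuring overlap as the fraction of the selected window's frames covered by ground truth is an assumption about the exact definition.

```python
# Sketch: fraction of a selected window's frames that overlap the
# ground-truth sign interval (inclusive frame ranges).

def overlap_fraction(window, truth):
    """window, truth: (start_frame, end_frame) inclusive intervals."""
    (s1, e1), (s2, e2) = window, truth
    inter = max(0, min(e1, e2) - max(s1, s2) + 1)
    return inter / (e1 - s1 + 1)

ok = overlap_fraction((10, 19), (15, 24)) >= 0.5   # exactly 50% overlap
```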
