
Database Construction for Speech to Lip-readable Animation Conversion


Presentation Transcript


  1. Database Construction for Speech to Lip-readable Animation Conversion Gyorgy Takacs, Attila Tihanyi, Tamas Bardi, Gergo Feldhoffer, Balint Srancsik Peter Pazmany Catholic University, Faculty of Information Technology 1083 Budapest, Práter u. 50/a. Hungary

  2. Specialities of the Faculty of Information Technology at Peter Pazmany Catholic University

  3. Main concept in the project for a communication aid for deaf people • Deaf people have fantastic abilities in speech understanding based purely on lip reading. • The gap between hearing and deaf persons can be eliminated with such everyday equipment as a high-end second- or third-generation mobile phone. • In our system we provide part of an animated human face on a display as output for the deaf user. • It is computed directly from the speech… • Sign-language interpreter • A rather promising feature of our system is its potentially language-independent operation

  4. The system

  5. System components • The input speech sound is sampled at 16 bit/48 kHz and then acoustic feature vectors based on Mel-Frequency Cepstrum Coefficients (MFCC) are extracted from the signal. • The feature vectors are sent to the neural network (NN), which computes a special weighting vector [w1,…w6] that is a compressed representation of the target frame of the animation. • The coordinates of our selected feature point set used to drive the animation are obtained by linear combination of our component vectors with the weights coming from the neural net. • The FP positions are computed in this way for 25 frames per second. • The final component in our system is a modified LUCIA talking head model.
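A minimal sketch of this processing chain, assuming librosa for the MFCC extraction and numpy for the linear algebra; nn_predict, components (the 6 PCA component vectors) and neutral (the neutral-face FP vector) are hypothetical placeholders for the trained network and the basis described on the following slides, not the authors' code:

    # Sketch of the speech-to-FP pipeline: MFCC frames -> NN -> 6 weights -> FP coordinates.
    # nn_predict, components (6x30) and neutral (30,) are assumed to be loaded elsewhere.
    import numpy as np
    import librosa

    def speech_to_feature_points(wav_path, nn_predict, components, neutral):
        # 48 kHz input speech; 16 MFCCs per 40 ms frame gives 25 animation frames per second
        y, sr = librosa.load(wav_path, sr=48000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=16, hop_length=int(sr * 0.040)).T

        frames = []
        for t in range(2, len(mfcc) - 2):
            # the NN sees 5 consecutive MFCC frames (80 inputs) and returns 6 weights
            w = nn_predict(mfcc[t - 2:t + 3].ravel())
            # linear combination of the PCA component vectors gives the 15 FP (x, y) pairs
            fp = neutral + w @ components
            frames.append(fp.reshape(15, 2))
        return np.array(frames)                      # shape (n_frames, 15, 2), 25 fps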

  6. Database

  7. We have decided to employ interpreters to record our audiovisual database. • The database contains two-digit numbers, names of months, and days. • The MPEG-4 standard describes the face with 86 feature points (FP). We selected 15 FPs around the mouth according to our preliminary results. • We used commercially available video cameras with 720x576 resolution, 25 fps PAL format video – which means 40 ms audio and video frames. • Automatic marker tracking. • Frames were binarized according to a statistically determined threshold value. To get a fused patch, a dilation process was performed. Step-by-step erosion then reduces each marker patch to a single pixel, which is the centre of the FP.
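A sketch of the marker-extraction steps just listed, assuming scipy.ndimage; for the patch centres it uses labelling and centre-of-mass rather than the literal step-by-step erosion of the slide, which amounts to the same single-pixel centre per marker:

    # Binarize a grey-scale frame, dilate to fuse each marker into one patch,
    # then take the centre of every patch as the FP position for that frame.
    import numpy as np
    from scipy import ndimage

    def track_markers(gray_frame, threshold):
        binary = gray_frame > threshold                  # statistically determined threshold
        fused = ndimage.binary_dilation(binary, iterations=2)
        labels, n = ndimage.label(fused)                 # one label per marker patch
        centres = ndimage.center_of_mass(fused, labels, range(1, n + 1))
        return np.array(centres)                         # (n_markers, 2) FP centres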

  8. Since each video frame has 15 FPs, one frame is a vector in a 30-dimensional space. Training the system is much more efficient when the 30 coordinate parameters are compressed into 6 weight parameters. • The first 6 PCA vectors were selected as a basis, and the database was encoded in 6 dimensions.
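A brief numpy sketch of this compression step (an assumed implementation, not the authors' code): the basis is fitted once on the whole database of 30-dimensional FP vectors, and each frame is then stored as 6 weights:

    import numpy as np

    def fit_pca_basis(frames, n_components=6):
        # frames: (n_frames, 30) array of FP coordinate vectors
        mean = frames.mean(axis=0)
        _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
        return vt[:n_components], mean                   # (6, 30) basis and 30-dim origin

    def encode(frame, basis, origin):
        return basis @ (frame - origin)                  # 6 PCA weights per frame

    def decode(weights, basis, origin):
        return origin + weights @ basis                  # back to 30 FP coordinates

    # The origin here is the database mean; the next slide uses the neutral face instead.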

  9. In the encoding formula on the slide, P is the matrix of PCA vectors (30x30) ordered by eigenvalue, B is the set of 30-dimensional FP vectors, and c is the chosen origin, corresponding to the neutral face. This data reduction causes 1-3% error (1-2 pixel variance in the FP x-y coordinates), which is an acceptable approximation.
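The formula itself did not survive in this transcript; a plausible reconstruction, assuming a standard PCA projection with the notation above and writing P_{1..6} for the first six rows of P, is

    w = P_{1..6}\,(B - c), \qquad \hat{B} \approx c + P_{1..6}^{\top}\, w

so each frame is stored as the 6-dimensional weight vector w, and \hat{B} is the reconstructed 30-dimensional FP vector whose 1-3% deviation from B is quoted above.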

  10. The FP positions expressed by the 1st, 2nd, 3rd and 4th principal components.

  11. NN • The Matrix Back-propagation algorithm (elaborated by David Anguita) was used to train the network [8]. The NN has 3 layers: 80 elements in the input layer, receiving 16 MFCCs from each of 5 frames; a hidden layer of 40 nodes; and an output layer of 6 nodes, providing 6 centered PCA weights to reproduce the FPs. • The training file contained 5450 frames. The network was trained for 100 000 epochs. This NN model uses the interval [-1, 1] for both inputs and outputs. MFCC and PCA vectors are scaled linearly by the same value to fit the appropriate range, except for the energy coefficient of the MFCC.
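An illustrative 80-40-6 feed-forward network with the layer sizes given above (a plain numpy sketch with tanh units and no bias terms, not the Matrix Back-propagation implementation of [8]):

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(80, 40))    # input (16 MFCC x 5 frames) -> hidden
    W2 = rng.normal(scale=0.1, size=(40, 6))     # hidden -> 6 centered PCA weights

    def forward(x):
        h = np.tanh(x @ W1)                      # 40 hidden nodes
        return np.tanh(h @ W2)                   # 6 outputs in [-1, 1]

    def train_step(x, y, lr=0.01):
        # one back-propagation step for a single (input, target) pair, squared-error loss
        global W1, W2
        h = np.tanh(x @ W1)
        out = np.tanh(h @ W2)
        d_out = (out - y) * (1 - out ** 2)       # gradient through the output tanh
        d_h = (d_out @ W2.T) * (1 - h ** 2)      # gradient through the hidden tanh
        W2 -= lr * np.outer(h, d_out)
        W1 -= lr * np.outer(x, d_h)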

  12. Lucia

  13. Conclusion • Visual word recognition, even in the case of a natural, professional speaker, has an error rate of about 3%. • The animated face model controlled by 15 FP parameters that accurately follow the FP parameters of the interpreter/lip-speaker's face resulted in about 42% errors. Discussions after the tests clarified that the visible parts of the tongue and the movement of parts of the face other than the mouth convey additional information that helps correct recognition. Probably the face model itself needs further improvements. • The fundamental result of our system seems to be that completely changing the face model control from natural parameters to calculated parameters decreases correct recognition by only about 7%.
