
Enhanced Speech Models for Robust Speech Recognition

Juan Arturo Nolazco-Flores

Dpto. de Ciencias Computacionales

ITESM, campus Monterrey

Talk Overview
  • Introduction
  • Enhanced-Speech Models
  • Comments and Conclusions
Introduction
  • Problem:
    • Automatic Speech Recognition performance is highly degraded when speech is corrupted by noise (additive noise, convolutional noise, etc.).
  • Fact:
    • In order to build practical speech recognisers, ASR systems must tackle this problem.
  • Knowledge:
    • ASR can be improved either by:
      • Enhancing speech before recognition, or
      • Training models in the same environment in which the ASR system is going to be used.
  • Challenge:
    • Find a simple and efficient technique to solve this problem.
Recognition using CD-HMM

[Diagram: the recogniser needs one model per unit of recognition (M1, M2, …, MQ). The input data is scored against every model, the probability of each model is computed, and the model with the highest probability gives the recognised word.]
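As a minimal illustration of the decision rule in the diagram, the sketch below scores an input feature sequence against every unit model and keeps the best one; the `log_likelihood` interface is a hypothetical stand-in for a trained CD-HMM, not an existing API.

```python
def recognise(features, unit_models):
    """Pick the recognition unit whose model assigns the input the highest
    log-likelihood (the M1 ... MQ boxes of the diagram).

    `unit_models` maps a label (e.g. a digit) to an object exposing a
    hypothetical `log_likelihood(features)` method, standing in for a
    trained CD-HMM.
    """
    scores = {label: model.log_likelihood(features)
              for label, model in unit_models.items()}
    return max(scores, key=scores.get)  # label of the most probable model
```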
Enhancing Speech
  • Features:
    • Models are trained with clean speech.
    • Corrupted speech is enhanced.
  • There are a number of well-studied techniques:
    • Subtracting an estimate of the noise obtained during non-speech activity.
    • Adaptive noise cancelling (ANC).
  • Successful for low to medium SNR (> 5 dB).
Problems:
    • Enhancers are not perfect, therefore
      • the speech is distorted, and
      • there is residual noise.
Training models in the same environment
  • ASR systems which use this technique can deal with low to high SNR (> 0 dB).
  • For example, for an isolated-digit recognition task where digits are corrupted by helicopter (Lynx) noise, the following performance can be obtained:
  • For TIMIT
  • Problem:
    • There are many possible environments (not practical).
However, using continuous HMMs it is possible to combine the clean-speech model and the noise model and obtain a noisy-speech model.
  • Techniques:
    • Model Decomposition
    • Parallel Model Combination (PMC) (Mark Gales, 1996).
    • Cepstrum-Domain Model Combination (CDMC) (Kim & Rose, 2002).
Changing to linear domain using PMC
  • Introduction
  • Scheme
  • Diagram
Introduction
  • It is an artificial way of simulating that the system has been trained in the adverse environment in which it is going to work.
  • The clean-speech CHMM and the noise CHMM (estimated from the noise captured before the word is uttered) are combined in the linear domain to obtain models adapted to the adverse environment.
  • The combination is based on the assumption that the pdfs of the state distributions are completely defined by their means and variances.
Scheme
  • For simplicity, it is convenient to combine these models in a linear domain.
  • Problem:
    • High-performance speech recognition is obtained in a non-linear domain (e.g. the mel-cepstral domain, auditory-based coefficients).
  • Solution:
    • Transform the coefficients to a linear domain (see the sketch below).
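A minimal sketch of that transformation, assuming MFCC-style features computed as a DCT of log filter-bank energies and Gaussian state distributions; the function names and the DCT convention are illustrative, not the exact formulation used in the talk.

```python
import numpy as np

def dct_matrix(n_ceps, n_chan):
    """DCT-II matrix C that maps log filter-bank energies to cepstral coefficients."""
    i = np.arange(n_ceps)[:, None]
    j = np.arange(n_chan)[None, :]
    return np.sqrt(2.0 / n_chan) * np.cos(np.pi * i * (j + 0.5) / n_chan)

def cepstral_to_linear(mu_c, sigma_c, C_inv):
    """Map a Gaussian (mean, full covariance) from the cepstral domain to the
    linear filter-bank domain: the C^-1() and exp() steps, using the standard
    log-normal moment formulas."""
    mu_log = C_inv @ mu_c                      # cepstral -> log-spectral domain
    sigma_log = C_inv @ sigma_c @ C_inv.T
    d = np.diag(sigma_log)
    mu_lin = np.exp(mu_log + 0.5 * d)          # E[exp(X)] for Gaussian X
    sigma_lin = np.outer(mu_lin, mu_lin) * (np.exp(sigma_log) - 1.0)
    return mu_lin, sigma_lin

# Example: take C_inv as the pseudo-inverse of a truncated DCT matrix.
# C = dct_matrix(13, 24); C_inv = np.linalg.pinv(C)
```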
Diagram

[PMC diagram: the clean-speech HMM and the noise HMM are each mapped to the linear domain via C⁻¹() and exp(), combined with '+' (PMC), and the result is mapped back through log() and C() to give the noise-adapted HMM. This simulates training in noise.]
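Continuing the sketch above, the combination step of the diagram could look roughly as follows under the same log-normal assumption; this is an illustrative reading of standard PMC (with `cepstral_to_linear` taken from the previous sketch), not the talk's exact implementation.

```python
import numpy as np

def pmc_combine(mu_s_c, sigma_s_c, mu_n_c, sigma_n_c, C, C_inv):
    """Combine a clean-speech Gaussian and a noise Gaussian (both given in the
    cepstral domain) into a noisy-speech Gaussian, following the boxes of the
    diagram: C^-1/exp, '+', log, C.  `cepstral_to_linear` is the helper from
    the previous sketch."""
    mu_s, sigma_s = cepstral_to_linear(mu_s_c, sigma_s_c, C_inv)
    mu_n, sigma_n = cepstral_to_linear(mu_n_c, sigma_n_c, C_inv)
    mu_y = mu_s + mu_n                          # additive noise in the linear domain
    sigma_y = sigma_s + sigma_n                 # speech and noise assumed independent
    # linear -> log-spectral: invert the log-normal moment mapping
    sigma_log = np.log(sigma_y / np.outer(mu_y, mu_y) + 1.0)
    mu_log = np.log(mu_y) - 0.5 * np.diag(sigma_log)
    return C @ mu_log, C @ sigma_log @ C.T      # back to the cepstral domain
```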

Enhanced Speech Models
  • Introduction
  • Hypothesis proof
  • Enhanced-Speech Models Combination
    • Changing to linear domain using PMC
    • Diagram
    • Results
Introduction
  • When we train in the same environment, we obtain the following upper-bound values:
  • Since PMC and CDMC (Cepstrum-Domain Model Combination) try to simulate recognition in the same environment, these are the best results that can be expected from this kind of technique.
Introduction
  • How can we improve recognition performance in adverse environments?
Fact:
    • The enhancer returns “cleaner” speech, but distorted.
  • Therefore the question is:
    • Is it possible to improve recognition performance if the models were trained with this enhanced speech?
Hypothesis
  • Enhanced-Speech models improve ASR performance in noisy environments.
In order to prove this hypothesis:
  • A signal enhancement scheme has to be selected.
  • Models have to be trained with the enhanced speech.
  • Observation vectors input to the recogniser have to be processed with the selected enhancement scheme (see the sketch below).
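A minimal sketch of this matched-enhancement recipe; every callable below is an illustrative placeholder (not an existing API), and spectral subtraction is only one possible choice of `enhance_fn`.

```python
def matched_enhancement_pipeline(train_utts, test_utt,
                                 enhance_fn, train_fn, recognise_fn):
    """Apply the SAME enhancement scheme to the training data and to the
    observation vectors fed to the recogniser, as the slide prescribes.
    All function arguments are hypothetical placeholders."""
    enhanced_train = [enhance_fn(u) for u in train_utts]   # train on enhanced speech
    models = train_fn(enhanced_train)
    return recognise_fn(models, enhance_fn(test_utt))      # enhance the test input too
```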
Hypothesis Proof
  • Introduction
  • Spectral Subtraction definition
  • Experiments and results
  • Conclusions
Introduction
  • Since it is a simple (and successful) scheme, Spectral Subtraction (SS) was selected.
Spectral Subtraction Definition
  • Before filterbank
  • After filterbank.
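The slide's exact equations are not preserved in the transcript; as a reminder of the technique, the sketch below shows a generic power spectral subtraction rule of the "before filterbank" kind, with illustrative over-subtraction and flooring parameters rather than the values used in the talk.

```python
import numpy as np

def spectral_subtraction(power_spec, noise_power, alpha=1.0, beta=0.01):
    """Generic power spectral subtraction: remove alpha times the estimated
    noise power spectrum and floor the result at a fraction beta of the noisy
    spectrum to avoid negative energies.  alpha and beta are illustrative
    defaults."""
    cleaned = power_spec - alpha * noise_power
    return np.maximum(cleaned, beta * power_spec)
```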
Experiments and Results
  • CHMMs were trained with speech enhanced by SS.
  • Recognition was performed on speech enhanced by SS under the same conditions.
Example 1
  • Task: isolated-digit recognition
  • Vocabulary size: 10
  • Training: Using enhanced speech
  • Noise: Helicopter (Lynx)
  • Database: NOISEX-92
  • Real noise is artificially added to clean speech, such that no Lombard effect can bias recognition performance.
[Results chart for Example 1 comparing the standard HMM, models trained in noise (PMC), and the enhanced-speech models (bPSS); the numeric values are not recoverable from the transcript.]
Example 2
  • Task: continuous-digit recognition
  • Vocabulary size: 30 words
  • Training: Using enhanced speech
  • Noise: White
  • White noise is artificially added to clean speech, such that no Lombard effect can bias recognition performance.
Results:

[Results chart for Example 2 comparing the standard HMM, the noisy-speech models (PMC), and the enhanced-speech models; the numeric values are not recoverable from the transcript.]
Example 3:
  • Task: continuous speech recognition
  • Vocabulary size: 6233 words
  • Training: Using enhanced speech
  • Noise: White
  • Database: TIMIT
  • Real noise is artificially added to clean speech, such that no Lombard effect can bias recognition performance.
Results:

[Results chart for Example 3 comparing the standard HMM, the noisy-speech models (PMC), and the enhanced-speech models; the numeric values are not recoverable from the transcript.]

Conclusions
  • The hypothesis was proven to be true.
  • Challenge:
    • Try these experiments using other databases.
    • How can we combine
      • the enhancement scheme,
      • the noise model,
      • and the clean models
    • such that we do not need to train for all enhancement conditions?
Conclusions
  • Are all the enhancement schemes suited for combination?
Conclusions
  • Now we know that ASR can be improved by:
      • Enhancing speech before recognition,
      • Training CHMMs in the same environment in which the ASR system is going to be used, or
      • Training CHMMs with the same enhancement technique that is used to get “cleaner” speech at recognition time.
  • Advantage:
    • Training with a better enhancement technique means potentially better recognition performance.
ES-SS Model Combination
  • Introduction
  • ES-Spectral Subtraction Scheme
Introduction
  • How can we combine CHMMs without having to train for each enhancement and noise condition?
  • Observation: for CHMMs the state pdfs are completely defined by their means and variances.
ES-Spectral Subtraction Scheme

Assuming Y and Y_D can be modelled as parametric distributions with means E[Y] and E[Y_D] and variances V[Y] and V[Y_D], it can be shown that these parameters are distorted as follows:

[The distortion equations and the plot of the pdf of Y are not recoverable from the transcript.]
Proof:

[The proof re-arranges the spectral-subtraction expression into a term A(α, P(Y)), assumes that Y is log-normal, and derives the distorted means and variances; the intermediate equations are not recoverable from the transcript.]
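As an illustrative reading (not the slide's exact derivation), the sketch below shows how the enhanced-speech moments might be combined in the linear domain for spectral subtraction, treating the subtracted noise estimate as deterministic and ignoring the spectral floor; all names and simplifications are assumptions.

```python
def es_ss_linear_moments(mu_s, var_s, mu_d, var_d, alpha=1.0):
    """Simplified ES-SS combination in the linear domain.

    Assumes the noisy observation is Y = S + D with S and D independent, and
    that spectral subtraction removes alpha times the noise mean, so the
    enhanced observation is roughly Y_SS = S + D - alpha * E[D].  The spectral
    floor and the A(alpha, P(Y)) term of the proof are ignored here."""
    mu_y = mu_s + (1.0 - alpha) * mu_d     # E[Y_SS]
    var_y = var_s + var_d                  # V[Y_SS]
    return mu_y, var_y
```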
ES-PMC Diagram

[ES-PMC diagram: the clean-speech HMM and the noise HMM are mapped from the cepstral to the log and then the linear domain (C→log, exp()), the adaptation calculations of the ES-SS scheme are applied together with the '+' combination of PMC, and the result is mapped back through log() and C() to give the ES-PMC HMM. Speech is pre-processed using SS.]

Results

[Four results charts (one per test condition) compare: no compensation scheme, spectral subtraction, PMC, and spectral subtraction combined with parallel model combination; the numeric values are not recoverable from the transcript.]

Comments and Conclusions
  • Since training and recognition with the same speech-enhancement scheme had not been tried before, a new area of research has been opened.
    • How can we combine CHMMs such that we do not need to train for all enhancement conditions?
    • Are all enhancement techniques suited for CHMM combination?
  • We showed how to combine the enhanced-speech, noise and clean CHMMs for the SS scheme.
  • It was shown that the equations for ES-PMC-SS are straightforward.
We expect that by training with a better enhancement technique we can also obtain better recognition performance.
  • Future work:
    • Develop equations and experiments for other enhancement techniques.
    • Obtain the optimal alpha for the SS scheme.
    • Compensate in the Cepstrum Domain.