
Recent Paper of Md. Akmal Haidar

Meeting before ICASSP 2013

Presenter: 郝柏翰


Outline

  • “Novel Weighting Scheme for Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation”, 2010

  • “Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation and Dynamic Marginals”, 2011

  • “Topic N-gram Count Language Model Adaptation for Speech Recognition”, 2012

  • “Comparison of a Bigram PLSA and a Novel Context-Based PLSA Language Model for Speech Recognition”, 2013


Novel Weighting Scheme for Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation

Md. Akmal Haidar and Douglas O’Shaughnessy

INTERSPEECH 2010


Introduction

  • Adaptation is required when the styles, domains, or topics of the test data are mismatched with the training data. It is also important because natural language is highly variable and topic information is highly non-stationary.

  • The idea of an unsupervised LM adaptation approach is to extract the latent topics from the training set, adapt the topic-specific LMs with proper mixture weights, and finally interpolate the result with the generic n-gram LM.

  • In this paper, we propose generating the weights of the topic models from the word counts of the topics produced by a hard-clustering method (the overall form is sketched below).
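
A generic sketch of this scheme (notation assumed here, not taken from the paper): the adapted model interpolates the background LM with a topic mixture,

  P_{adapted}(w \mid h) = \lambda\, P_B(w \mid h) + (1 - \lambda) \sum_{k=1}^{K} \theta_k\, P_{z_k}(w \mid h)

where P_B is the generic background n-gram LM, P_{z_k} is the LM trained on latent topic z_k, \theta_k are the topic mixture weights, and \lambda is an interpolation weight.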


Proposed Method

  • We have used the MATLAB topic modeling toolbox to get the word-topic matrix, WP, and the document-topic matrix, DP, using LDA (see the Python sketch after this list).

  • Hard-clustering:

    • For training, topic clusters are formed by assigning each document to its most probable topic, i.e., the topic with the maximum probability in its row of the document-topic matrix DP.

  • Adaptation:

    • We can create a dynamically adapted topic model for a test document by using a mixture of LMs from the different topics, of the form sketched in the Introduction.
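
The slides use the MATLAB topic modeling toolbox; purely as an illustrative Python analogue (gensim, with hypothetical toy documents), obtaining the WP/DP information and hard-clustering the documents might look like this:

    from gensim import corpora
    from gensim.models import LdaModel

    K = 4  # number of latent topics (an assumption; the papers tune this)

    # Hypothetical tokenized training documents.
    docs = [["stocks", "fell", "market"], ["team", "won", "game"],
            ["market", "rally", "stocks"], ["coach", "team", "season"]]

    dictionary = corpora.Dictionary(docs)
    bow = [dictionary.doc2bow(d) for d in docs]
    lda = LdaModel(bow, num_topics=K, id2word=dictionary, passes=10)

    # WP analogue: word-topic probabilities, shape (vocab size, K).
    WP = lda.get_topics().T

    # DP analogue plus hard clustering: assign each document to its
    # most probable (argmax) topic, forming K topic clusters.
    clusters = {k: [] for k in range(K)}
    for i, d in enumerate(bow):
        topic_dist = lda.get_document_topics(d, minimum_probability=0.0)
        k_star = max(topic_dist, key=lambda kv: kv[1])[0]
        clusters[k_star].append(i)

A topic-specific n-gram LM would then be trained on the documents in each cluster.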


Proposed Method

  • To find the topic mixture weights, we propose a scheme based on the unigram counts of the topics (see the sketch after this list).

  • TF_{ik} represents the number of times word w_i is drawn from topic z_k.

  • c_{ij} is the frequency of word w_i in document d_j.

  • The adapted topic model is then interpolated with the generic LM (sketched below).
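
A minimal sketch of one count-proportional reading of these bullets (the paper's exact formula may differ): each topic's weight is its share of the total unigram count,

  \theta_k = \frac{\sum_i TF_{ik}}{\sum_{k'} \sum_i TF_{ik'}}

and the adapted topic mixture is interpolated with the generic background LM P_B:

  P_{adapted}(w \mid h) = \mu \sum_k \theta_k\, P_{z_k}(w \mid h) + (1 - \mu)\, P_B(w \mid h)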


Experiment Setup

  • We evaluated the LM adaptation approach using the Brown Corpus and the WSJ1 transcription text data.


Experiments


Unsupervised Language Model Adaptation Using Latent Dirichlet Allocation and Dynamic Marginals

Md. Akmal Haidar and Douglas O’Shaughnessy

2011


Introduction

  • To overcome the mismatch problem, we introduce an unsupervised language model adaptation approach using latent Dirichlet allocation (LDA) and dynamic marginals: locally estimated (smoothed) unigram probabilities from in-domain text data.

  • We extend our previous work to find an adapted model using minimum discrimination information (MDI), which uses the KL divergence as the distance measure between probability distributions.

  • The final adapted model is formed by minimizing the KL divergence between it and the LDA-adapted topic model.


Proposed Method

  • Topic clustering

  • LDA adapted topic mixture model generation


Proposed Method

  • Adaptation using dynamic marginals

    • The adapted model using dynamic marginals is obtained by minimizing the KL divergence between the adapted model and the background model, subject to a marginalization constraint for each word w in the vocabulary (see the sketch after this list).

  • The constrained optimization problem has a close connection to the maximum entropy approach, which shows that the adapted model is a rescaled version of the background model (sketched below).
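
A sketch of the standard MDI / dynamic-marginals formulation these bullets appear to describe (notation assumed): the adapted model P_A minimizes the KL divergence to the background model P_B subject to unigram marginal constraints from the in-domain estimate P_{in},

  P_A = \arg\min_P D(P \,\|\, P_B) \quad \text{s.t.} \quad \sum_h P(h)\, P(w \mid h) = P_{in}(w) \;\; \forall w

whose solution is a rescaled background model,

  P_A(w \mid h) = \frac{\alpha(w)\, P_B(w \mid h)}{Z(h)}, \qquad \alpha(w) = \left( \frac{P_{in}(w)}{P_B(w)} \right)^{\beta}

with Z(h) a normalization term and 0 < \beta \le 1 a smoothing exponent.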


Proposed Method

  • The background model and the LDA-adapted topic model have a standard back-off structure and satisfy the above constraint, so the adapted LM follows a recursive back-off formula (sketched below).
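
A typical shape for such a recursion, under the usual back-off convention (P_base stands for the model being rescaled and h' for the reduced history; this is a sketch, not the paper's exact formula):

  P_A(w \mid h) =
    \begin{cases}
      \alpha(w)\, P_{base}(w \mid h) / Z(h) & \text{if the n-gram } (h, w) \text{ is seen} \\
      bo(h)\, P_A(w \mid h') & \text{otherwise}
    \end{cases}

where bo(h) is the back-off weight chosen so that P_A(\cdot \mid h) sums to one.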


Experiments


Topic N-gram Count Language Model Adaptation for Speech Recognition

Md. Akmal Haidar and Douglas O’Shaughnessy

2012


Introduction

  • In this paper, we propose a new LM adaptation approach by considering the features of the LDA model.

  • Here, we compute the topic mixture weights of each n-gram in the training set using the probability of a word given a topic, P(w | z), as a confidence measure for the words in the n-gram under different LDA latent topics.

  • The normalized topic mixture weights are then multiplied by the global count of the n-gram to determine the topic n-gram count for the respective topics.

  • The topic n-gram count LMs are then created from the respective topic n-gram counts and adapted using topic mixture weights obtained by averaging the confidence measures over the seen words of a development test set (a code sketch follows).
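
A minimal Python sketch of this pipeline under stated assumptions: the confidence of topic z_k for an n-gram is taken here as the product of P(w | z_k) over its words, normalized across topics (the paper defines two confidence measures; this is just one plausible choice), and the inputs ngrams and p_w_given_z are hypothetical:

    import numpy as np
    from collections import Counter

    def topic_ngram_counts(ngrams, p_w_given_z, K):
        """Split each global n-gram count into K topic-specific counts.

        ngrams: iterable of word tuples from the training text.
        p_w_given_z: dict mapping word -> length-K array of P(word | z_k).
        """
        global_counts = Counter(ngrams)
        topic_counts = [Counter() for _ in range(K)]
        uniform = np.full(K, 1.0 / K)
        for ng, c in global_counts.items():
            conf = np.ones(K)
            for w in ng:
                conf = conf * p_w_given_z.get(w, uniform)  # per-topic confidence
            if conf.sum() == 0.0:
                conf = uniform.copy()
            weights = conf / conf.sum()  # normalized topic mixture weights
            for k in range(K):
                topic_counts[k][ng] = c * weights[k]  # topic n-gram count
        return topic_counts

Each topic_counts[k] could then be fed to a standard n-gram toolkit to build the k-th topic n-gram count LM.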


Proposed Method

  • The probability of a word w_i under LDA latent topic z_k, P(w_i | z_k), is computed from the word-topic counts (see the sketch after this list).

  • Using these features of the LDA model, we propose two confidence measures to compute the topic mixture weights for each n-gram.

  • The topic n-gram language models generated from the topic n-gram counts are referred to as TNCLM (topic n-gram count LMs).
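
A sketch under assumed notation: with TF_{ik} the number of times word w_i is assigned to topic z_k, the word-topic probability is

  P(w_i \mid z_k) = \frac{TF_{ik}}{\sum_{i'} TF_{i'k}}

and the two confidence measures plausibly aggregate these probabilities over an n-gram's words (for example, their product or their average, normalized across topics); the exact definitions are given in the paper.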


Proposed Method

  • In the LDA model, a document can be generated by a mixture of topics. So, for a test document d, the dynamically adapted topic model is computed as a mixture of the LMs from different topics (see the sketch after this list).

  • The ANCLM (adapted n-gram count LMs) are then interpolated with the background n-gram model to capture local constraints, using linear interpolation.
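
A sketch of these two steps (notation assumed): mix the topic n-gram count LMs with document-level weights \gamma_k derived from the confidence measures, then interpolate with the background model P_B:

  P_{ANCLM}(w \mid h) = \sum_{k=1}^{K} \gamma_k\, P_{TNCLM_k}(w \mid h)

  P_{adapted}(w \mid h) = \lambda\, P_{ANCLM}(w \mid h) + (1 - \lambda)\, P_B(w \mid h)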


Experiments

