Text-classification using Latent Dirichlet Allocation - intro graphical model


1. Text-classification using Latent Dirichlet Allocation - intro graphical model. Lei Li (leili@cs)

2. Outline • Introduction • Unigram model and mixture • Text classification using LDA • Experiments • Conclusion

3. Text Classification: what class can you tell, given a doc? [Figure: two example documents - one containing words like "the New York Stock Exchange", "America's Nasdaq", "buy", "bank", "debt", "loan", "interest", "billion" (class: finance); one containing words like "Iraq", "war", "weapon", "army", "AK-47", "bomb" (class: military)]

4. Why do database people care? • The model can be adapted to other discrete random variables • Disk failures • User access patterns • Social networks, tags • Blogs

5. Document • "bag of words": word order is ignored • d = (w1, w2, … wN) • wi takes one value in 1…V (1-of-V scheme) • V: vocabulary size
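The bag-of-words representation above can be sketched in a few lines. The toy vocabulary below is an assumption for illustration; in general V is the full vocabulary size.

```python
from collections import Counter

# Toy vocabulary (illustrative); each word w_i takes one value in 1..V.
vocab = ["bank", "debt", "interest", "war", "army", "weapon"]

def bag_of_words(tokens):
    """Return a length-V count vector; word order is discarded."""
    counts = Counter(tokens)
    return [counts.get(w, 0) for w in vocab]

d = ["bank", "debt", "bank", "interest", "war"]
print(bag_of_words(d))  # [2, 1, 1, 1, 0, 0]
```

Note that the two documents ["bank", "war"] and ["war", "bank"] map to the same vector, which is exactly the "no order on words" assumption.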

6. Modeling Documents • Unigram: a simple multinomial distribution • Mixture of unigrams • LDA • Others: PLSA, bigram

7. Unigram Model for Classification • Y is the class label, d = {w1, w2, … wN} • Use Bayes' rule: P(Y|d) ∝ P(Y) P(d|Y) • Model the document given the class as a multinomial distribution, estimated from word frequencies [Plate diagram: Y → w, repeated N times]

8. Unigram: example • d = bank × 100, debt × 110, interest × 130, war × 1, army × 0, weapon × 0 • P(finance|d) = ? P(military|d) = ?
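A minimal sketch of the unigram classifier on this example. The per-class word counts and the uniform class prior below are made up for illustration (the slides do not give them); Laplace smoothing is added so that zero-count words cannot zero out a class posterior.

```python
import math

vocab = ["bank", "debt", "interest", "war", "army", "weapon"]
# Hypothetical per-class training counts (not from the slides), used to
# estimate the class-conditional multinomials by word frequency.
class_counts = {
    "finance":  {"bank": 300, "debt": 250, "interest": 280, "war": 2,   "army": 1,   "weapon": 0},
    "military": {"bank": 5,   "debt": 3,   "interest": 2,   "war": 310, "army": 280, "weapon": 260},
}

def log_posterior(doc_counts, y, alpha=1.0):
    """log P(y) + sum_w n_w * log P(w|y), with Laplace smoothing alpha."""
    counts = class_counts[y]
    total = sum(counts.values()) + alpha * len(vocab)
    lp = math.log(0.5)  # uniform class prior, assumed
    for w, n in doc_counts.items():
        lp += n * math.log((counts[w] + alpha) / total)
    return lp

d = {"bank": 100, "debt": 110, "interest": 130, "war": 1, "army": 0, "weapon": 0}
scores = {y: log_posterior(d, y) for y in class_counts}
print(max(scores, key=scores.get))  # finance: the single "war" barely matters
```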

9. Mixture of unigrams for classification • For each class, assume k topics • Each topic represents a multinomial distribution over words • Given its topic, each word is drawn from that topic's multinomial [Plate diagram: Y → z → w, repeated N times]
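Under the mixture of unigrams, the document likelihood sums over the class's topics: P(d|y) = Σ_k P(z=k|y) Π_n P(wn|z=k). A sketch with made-up two-topic parameters (the priors and topic-word probabilities below are illustrative assumptions):

```python
import math

def mixture_log_likelihood(doc_counts, topic_priors, topic_word_probs):
    """log P(d|y) = log sum_k P(z=k|y) * prod_w P(w|z=k)^n_w."""
    per_topic = []
    for pk, word_probs in zip(topic_priors, topic_word_probs):
        ll = math.log(pk) + sum(n * math.log(word_probs[w])
                                for w, n in doc_counts.items() if n > 0)
        per_topic.append(ll)
    # log-sum-exp over topics for numerical stability
    m = max(per_topic)
    return m + math.log(sum(math.exp(ll - m) for ll in per_topic))

# Hypothetical class with k = 2 topics over a two-word vocabulary:
priors = [0.6, 0.4]                      # P(z=k|y)
topics = [{"bank": 0.7, "war": 0.3},     # P(w|z=1)
          {"bank": 0.2, "war": 0.8}]     # P(w|z=2)
print(mixture_log_likelihood({"bank": 3, "war": 1}, priors, topics))
```

Unlike the plain unigram, the whole document is still explained by a single topic per mixture component; LDA relaxes exactly this by letting each word pick its own topic.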

10. Unigram: example • d = bank × 100, debt × 110, interest × 130, war × 1, army × 0, weapon × 0 • P(finance|d) = ? P(military|d) = ?

11. Bayesian Network • A DAG where nodes are random variables or parameters • Arrows represent conditional probability dependencies • Given observed values (or distributions) on some nodes, there are algorithms to infer the values of the other nodes

12. Latent Dirichlet Allocation • Model θ as a Dirichlet distribution with parameter α • For the n-th term wn: • Model the n-th latent variable zn as a multinomial distribution according to θ • Model wn as a multinomial distribution according to zn and β
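The generative process above can be sketched directly with NumPy; the vocabulary size, topic count, document length, and parameter values below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

V, K, N = 6, 2, 10                        # vocab size, topics, doc length (toy)
alpha = np.full(K, 0.5)                   # Dirichlet parameter alpha
beta = rng.dirichlet(np.ones(V), size=K)  # beta: per-topic word distributions

# Generative process for one document:
theta = rng.dirichlet(alpha)              # theta ~ Dirichlet(alpha)
z = rng.choice(K, size=N, p=theta)        # z_n ~ Multinomial(theta)
w = np.array([rng.choice(V, p=beta[k]) for k in z])  # w_n ~ Multinomial(beta_{z_n})
print(w)
```

Note each word wn gets its own topic zn, unlike the mixture of unigrams, where one topic is drawn per document.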

13. Variational inference for LDA • Exact inference in LDA is HARD (intractable) • Approximate with a variational distribution • Use a factorized distribution over variational parameters γ and φ to approximate the posterior distribution of the latent variables θ and z
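A sketch of the per-document coordinate-ascent updates from Blei et al. (φni ∝ βi,wn exp(Ψ(γi)), γ = α + Σn φn), assuming a dense topic-word matrix `beta` and a document given as a list of word ids; iteration count is fixed rather than checked for convergence.

```python
import numpy as np
from scipy.special import digamma

def variational_inference(doc, alpha, beta, iters=50):
    """Mean-field updates for one document:
       phi[n, k] ∝ beta[k, w_n] * exp(digamma(gamma[k]))
       gamma[k]  = alpha[k] + sum_n phi[n, k]"""
    K = beta.shape[0]
    N = len(doc)
    phi = np.full((N, K), 1.0 / K)        # initialize uniformly
    gamma = alpha + N / K
    for _ in range(iters):
        phi = beta[:, doc].T * np.exp(digamma(gamma))
        phi /= phi.sum(axis=1, keepdims=True)   # normalize over topics
        gamma = alpha + phi.sum(axis=0)
    return gamma, phi
```

The resulting γ is a document-level Dirichlet parameter and serves as the per-document "LDA features" used for classification on the next slide.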

14. Experiment • Data set: Reuters-21578, 8681 training documents, 2966 test documents • Classification task: "EARN" vs. "Non-EARN" • For each document, learn LDA features and classify with them using a discriminative classifier
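The "LDA features + discriminative classifier" pipeline can be sketched with scikit-learn as a stand-in (the slides do not say which implementation or classifier was used); the four toy documents and their labels below are assumptions, not the Reuters-21578 data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Toy stand-in corpus; label 0 ~ "EARN", 1 ~ "Non-EARN" (illustrative only).
docs = ["bank debt interest loan bank", "war army weapon bomb war",
        "interest loan bank buy", "army bomb weapon war army"]
labels = [0, 1, 0, 1]

X = CountVectorizer().fit_transform(docs)                 # bag-of-words counts
features = LatentDirichletAllocation(n_components=2,      # per-doc topic
                                     random_state=0).fit_transform(X)
clf = LogisticRegression().fit(features, labels)          # discriminative step
print(clf.predict(features))
```

The key design choice is that LDA is used only as a dimensionality reduction from V word counts to a handful of topic proportions; the actual class decision is made by the discriminative model on top.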

15. Result: most frequent words in each topic

16. Classification Accuracy

17. Comparison of Accuracy

18. Take-Away Message • LDA with few topics and little training data can produce relatively good results • Bayesian networks are useful for modeling multiple random variables, and nice inference algorithms exist for them • Potential uses of LDA: • Disk failures • Database access patterns • User preferences (collaborative filtering) • Social networks (tags)

19. Reference • Blei, D., Ng, A., Jordan, M.: Latent Dirichlet Allocation. Journal of Machine Learning Research 3 (2003) 993-1022