
Measure clustering approach to MWE extraction



  1. 25th International Conference on Computational Linguistics and Intellectual Technologies, May 29 – June 1, Moscow, RSUH. Measure clustering approach to MWE extraction. Petr Rossyaykin, Moscow State University, petrrossyaykin@gmail.com; Natalia Loukachevitch, Moscow State University, louk_nat@mail.ru

  2. Our goal • Provide an unsupervised, potentially resource- and language-independent method for extracting multiword expressions (namely, two-word noun phrases) of different semantic types in order to supplement lexical resources

  3. Outline • MWEs and their properties • Data & task • Measures • Statistical association measures • Context measures • Distributional measures • Clustering • Results • Discussion

  4. Multiword expressions • Multiword expressions (MWEs) are “lexical items that: (a) can be decomposed into multiple lexemes; and (b) display lexical, syntactic, semantic, pragmatic and/or statistical idiomaticity” (Baldwin & Kim 2010)

  5. Multiword expressions (MWEs) Examples: • idioms (to kick the bucket, to spill the beans), • terms (vowel harmony, extended projection principle), • proper names (South Korea, Lady Gaga), • lexicalized expressions (by and large, coat of arms), • etc.?

  6. Properties of MWEs • Non-compositionality (e.g. blue devils) • Statistical idiosyncrasy (e.g. law and order vs. order and law) • Morphological/syntactic idiosyncrasy (e.g. by and large) and rigidity (John kicked the bucket vs. #John kicked a bucket vs. #The bucket was kicked by John) + pragmatic idiomaticity (e.g. good morning) MWEs differ with regard to the prominence of these properties

  7. Overview of methods

  8. Task • The number of MWEs is comparable to that of single lexemes, and new ones constantly appear in language (Jackendoff 1997, Sag et al. 2002) • The task is to propose a method to supplement lexical resources with new MWEs (noun phrases) • The Russian language thesaurus RuThes (Loukachevitch et al. 2014) is used as a gold standard

  9. Data Corpus • Russian news texts from the Internet published in 2011 • 446 million tokens Pre-processing • lemmatised, • upper-cased, • POS-tagged

  10. Initial candidate expressions • N-N (e.g. глава государства ‘head of state’) and Adj-N (e.g. уголовное дело ‘criminal case’) bigrams • Observed frequency > 200 • 37,767 candidate bigrams in total
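A minimal sketch of this candidate-extraction step, assuming the corpus is available as lemmatised, POS-tagged sentences; the tag names and the function are illustrative, not the authors' actual pipeline:

```python
from collections import Counter

ALLOWED_PATTERNS = {("NOUN", "NOUN"), ("ADJ", "NOUN")}  # N-N and Adj-N
MIN_FREQ = 200  # observed-frequency threshold from the slide

def extract_candidates(tagged_sentences):
    """tagged_sentences: iterable of lists of (lemma, pos_tag) pairs."""
    counts = Counter()
    for sent in tagged_sentences:
        for (w1, t1), (w2, t2) in zip(sent, sent[1:]):
            if (t1, t2) in ALLOWED_PATTERNS:
                counts[(w1, w2)] += 1
    # Keep only bigrams above the frequency threshold
    return {bigram: f for bigram, f in counts.items() if f > MIN_FREQ}
```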

  11.–12. 20 most frequent expressions

  13. Measures

  14. Statistical association measures
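As a reference point for this slide, here is a sketch of three association measures named in the talk (PMI, cubic PMI, t-score), computed from bigram and unigram counts; the exact variants used in the paper are an assumption:

```python
import math

def pmi(f_xy, f_x, f_y, n, k=1):
    """PMI^k = log2(p(xy)^k / (p(x) * p(y))); k=1 is plain PMI, k=3 is PMI3."""
    p_xy, p_x, p_y = f_xy / n, f_x / n, f_y / n
    return math.log2(p_xy ** k / (p_x * p_y))

def t_score(f_xy, f_x, f_y, n):
    """t-score: (observed - expected) / sqrt(observed)."""
    expected = f_x * f_y / n
    return (f_xy - expected) / math.sqrt(f_xy)
```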

  15.–16. Frequency distribution [scatter plots] Red points – positive class; blue points – negative class; X – log(expected frequency); Y – log(observed frequency); black curve – PMI3 at the level of 30 (the best among statistical measures)

  17. Context measures Idea – compare sets of contexts of n-grams Measures: • gravity count • modified gravity count • type-LR • type-FLR • context intersection • independent context intersection + combinations

  18. type-LR We can model lexical rigidity (non-substitutability) using sets of internal contexts (neighboring words): r(x) is the set of unique words occurring in the corpus immediately to the right of the word x; l(x) is the set of unique words occurring in the corpus immediately to the left of the word x

  19. type-LR – example x = general y = director l(director) = { general (director), fired (director), former (director), claim (director), … } 17234 unique words in total r(general) = { (general) director, (general) secretary, (general) agreement, (general) cleaning, …} 671 unique words in total
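A sketch of type-LR under the reading suggested by this example: the fewer distinct words can follow x or precede y, the more rigid (and the higher-scoring) the bigram. The inverse geometric mean below is an assumed formalization, not necessarily the paper's exact formula:

```python
import math
from collections import defaultdict

def internal_context_sets(sentences):
    """Collect r(w) (unique right neighbors) and l(w) (unique left
    neighbors) for every word, from sentences given as lists of lemmas."""
    right, left = defaultdict(set), defaultdict(set)
    for sent in sentences:
        for w1, w2 in zip(sent, sent[1:]):
            right[w1].add(w2)
            left[w2].add(w1)
    return right, left

def type_lr(x, y, right, left):
    # Assumed scoring: inverse geometric mean of |r(x)| and |l(y)|;
    # smaller internal context sets mean higher rigidity, hence a
    # higher score.
    return 1.0 / math.sqrt(len(right[x]) * len(left[y]))
```

On this reading, type-FLR (listed on slide 17 and combined with frequency on slide 21) would additionally multiply such a score by the observed frequency f(xy).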

  20. Context intersection The following two measures compare: • external context sets of MWEs, • context sets of their single components, and • context sets of the single components occurring outside candidate expressions
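The CI/ICI formulas did not survive here; the sketch below is one plausible reading, in which set overlap between the expression's external contexts and its components' contexts is the signal, and ICI restricts component contexts to occurrences outside the candidate bigram. All names and the overlap statistic are assumptions:

```python
def jaccard(a, b):
    """Set overlap, used as a stand-in for the lost CI/ICI formulas."""
    return len(a & b) / len(a | b) if a | b else 0.0

def context_intersection(ctx_xy, ctx_x, ctx_y):
    # CI (assumed): overlap between contexts of the whole expression and
    # contexts of its components; low overlap suggests the expression is
    # used differently from its parts (non-compositionality).
    return (jaccard(ctx_xy, ctx_x) + jaccard(ctx_xy, ctx_y)) / 2

def independent_context_intersection(ctx_xy, ctx_x_out, ctx_y_out):
    # ICI (assumed): the same comparison, but component contexts are
    # collected only where x and y occur outside the candidate bigram.
    return (jaccard(ctx_xy, ctx_x_out) + jaccard(ctx_xy, ctx_y_out)) / 2
```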

  21. Combinations with frequency Combining context measures with the observed frequency makes it possible to exploit both the non-compositionality and the statistical idiosyncrasy of MWEs

  22. Distributional measures Loukachevitch & Parkhomenko (2018) achieved extremely high average precision (AP@100 = 0.95) on the same data with measures defined over (i) words w from the model vocabulary distinct from x and y, and (ii) thesaurus entries te (words or phrases)
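As a hedged illustration of a measure of this family (not necessarily the exact formulas of Loukachevitch & Parkhomenko 2018): compare the embedding of the bigram, trained as a single token, with the composition of its parts; low similarity points to non-compositionality. The additive composition is an assumption:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def compositionality(vec_xy, vec_x, vec_y):
    # vec_xy: vector of the bigram 'x_y' treated as one token during
    # training; vec_x, vec_y: vectors of the components. A low score
    # suggests the bigram's distribution differs from its parts'.
    return cosine(vec_xy, vec_x + vec_y)
```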

  23. Individual measures (best)

  24. Individual results (best) [two plots: recall up to 0.14 and recall up to 1]

  25.–26. Top-10 lists – type-FLR & DFthes Note the absence of matches between the two top-10 lists

  27. Clustering

  28. Combinational methods • Linear combinations (Zakharov 2017) • Machine learning • Neural networks (Pecina 2008) • Bayesian networks (Tsvetkov & Wintner 2011, Buljan & Šnajder 2017) • Clustering (Tutubalina 2015) • etc.

  29. Clustering Idea: • Use measures as dimensions • Cluster expressions • Provide a ranking function based on clustering results expression vector = <PMI value, t-score, LLR, …>

  30. 2 measures (PMI3 and type-LR)

  31. 2 measures (PMI3 and ICI)

  32. Clustering More ideas: • Measures of different types account for different properties of MWEs => they allow separating MWEs of different types from free phrases • Increasing the number of measures/dimensions gives more fine-grained clustering => yields better results

  33. Feature subset • The best individual measures are used as features

  34. Normalization • Values assigned by different measures vary considerably • values → ranks → (binary) logarithms of ranks
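A minimal sketch of that normalization chain (raw values → ranks → binary logarithms of ranks), assuming rank 1 goes to the highest-scoring bigram; tie handling and rank direction are not specified on the slide:

```python
import numpy as np

def log_rank_normalize(values):
    """values -> ranks -> log2(ranks), rank 1 for the highest value."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(-values)              # indices from best to worst
    ranks = np.empty(len(values), dtype=int)
    ranks[order] = np.arange(1, len(values) + 1)
    return np.log2(ranks)
```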

  35. Ranking function Since we are interested in ranking, rather than just classifying, bigrams, we used a centroid-oriented scoring function based on the Euclidean distance between the vector of an expression ‘x y’ and the centroids of the larger and the smaller cluster

  36. Clustering 247 feature subsets with more than 1 element 2 clustering algorithms (scikit-learn): • k-means (2 clusters) • agglomerative hierarchical clustering (2 clusters, Ward linkage) vs 2 variants of linear combination: • sum of ranks’ logarithms • product of ranks’ logarithms
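A minimal end-to-end sketch of this setup with scikit-learn. The feature matrix holds log-rank-normalized measure values; the lost formula from slide 35 is read here as "distance to the larger cluster's centroid minus distance to the smaller one's", so that bigrams near the smaller (presumably MWE-rich) cluster rank first. That sign convention, and which cluster holds the MWEs, are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def cluster_and_rank(X):
    """X: (n_bigrams, n_measures) matrix of log-rank-normalized values."""
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    # The alternative algorithm named on the slide:
    # labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)
    sizes = np.bincount(labels)
    c_large = X[labels == np.argmax(sizes)].mean(axis=0)
    c_small = X[labels == np.argmin(sizes)].mean(axis=0)
    # Assumed reading of the scoring function on slide 35:
    # score(xy) = d(v(xy), c_large) - d(v(xy), c_small)
    score = (np.linalg.norm(X - c_large, axis=1)
             - np.linalg.norm(X - c_small, axis=1))
    return np.argsort(-score)  # candidate indices, best first
```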

  37. Results

  38. Discussion • The best results are achieved with the clustering-based ranking function applied to relatively large subsets of features (from 4 to 7) • The two best variants use measures of all three types • Except for cubic PMI, all of the measures we tried to combine appear in the best setups at least twice

  39.–40. Top-20 list – best combination No matches with the lists extracted by type-FLR and DFthes

  41. Conclusion • Ranked lists extracted by measures of different types exhibit significant variation • The highest average precision is achieved with measures which utilize both frequency and context/distributional information => • The choice of measures is crucial for the efficiency of combinational MWE extraction • Our unsupervised method makes it possible to incorporate a relatively large number of measures of different types, yielding high AP

  42. References Baldwin T., Kim S. N. (2010), Multiword expressions. In Nitin Indurkhya and Fred J. Damerau, editors, Handbook of Natural Language Processing, pages 267–292. CRC Press, Taylor and Francis Group, Boca Raton, FL, USA, 2nd edition. Buljan M., Šnajder J. (2017), Combining Linguistic Features for the Detection of Croatian Multiword Expressions. In Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017), pages 194–199, Valencia. Jackendoff R. (1997), The Architecture of the Language Faculty. Number 28 in Linguistic Inquiry Monographs. MIT Press, Cambridge, MA, USA. 262 p. Loukachevitch N., Dobrov B., Chetviorkin I. (2014), RuThes-lite, a publicly available version of Thesaurus of Russian language RuThes. In Computational Linguistics and Intellectual Technologies: Papers from the Annual International Conference “Dialogue”, Bekasovo, Russia. Loukachevitch N., Parkhomenko E. (2018), Recognition of multiword expressions using word embeddings. In Artificial Intelligence, RCAI 2018, Vol. 934 of Communications in Computer and Information Science, pages 112–124. Springer, Cham.

  43. References Pecina P. (2008), A Machine Learning Approach to Multiword Expression Extraction. In Proceedings of the LREC 2008 Workshop Towards a Shared Task for Multiword Expressions (MWE 2008), pages 54–57, Marrakech. Sag I. A., Baldwin T., Bond F., Copestake A., Flickinger D. (2002), Multiword expressions: A pain in the neck for NLP. In Proceedings of the Third International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2002), pages 1–15, Mexico City, Mexico. Tsvetkov Y., Wintner S. (2011), Identification of multi-word expressions by combining multiple linguistic information sources. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 836–845. Association for Computational Linguistics. Tutubalina E. (2015), Clustering-based Approach to Multiword Expression Extraction and Ranking. In NAACL-HLT, pages 39–43, Denver, Colorado. Zakharov V. (2017), Automatic Collocation Extraction: Association Measures Evaluation and Integration. In Computational Linguistics and Intellectual Technologies: Papers from the Annual Conference “Dialogue”, Volume 1 of 2, pages 396–407, Moscow, RSUH.

  44. Thank you for your attention!
