
Latent Semantic Indexing and Beyond



  1. Latent Semantic Indexing and Beyond
  Leif Grönqvist (lgr@msi.vxu.se)
  School of Mathematics and Systems Engineering, the Swedish Graduate School of Language Technology

  2. Overview
  • My background
  • Introduction to vector space models and Latent Semantic Indexing
  • A toy example
  • Interpretation
  • Some applications
  • A concrete example and a small experiment
  • Improvements to the model
  • Various unsolved problems
  • Conclusion: things I have to do

  3. My Background
  • 1986-1989: "4-årig teknisk" (a four-year technical upper secondary programme, electrical engineering)
  • 1989-1993: MSc (official translation of "Filosofie Magister") in Computing Science, Göteborg University
  • 1989-1993: 62 points in mechanics, electronics, etc.
  • 1994-2001: Work at the linguistics department in Göteborg
  • Various projects related to corpus linguistics
  • Some teaching on statistical methods (Göteborg and Uppsala), and on corpus linguistics in Göteborg, Sofia, and Beijing
  • 1995: Consultant at Redwood Research, in Sollentuna, working on information retrieval in medical databases
  • 1995-1996: Work at the department of Informatics in Göteborg (the Internet Project)
  • 2001-2006: PhD student in Computer Science / Language Technology

  4. Vector Space Models
  • If we had a way to map any term to a vector in a high-dimensional space, such that similarity in meaning between terms is reflected in the distance between their vectors, then we could:
  • For a given term t, find an ordered list of the terms most similar to t
  • For any two terms, find the similarity between them

  5. Vector Space Models, cont.
  • And if meanings can be combined by adding the corresponding vectors, we can do some more things:
  • If we assume that it is possible to extract terms from a document, we can map documents to vectors too!
  • A set of terms (one or more) may be seen as a document as well

  6. Vector Space Models, cont.
  • Now it is possible, for any term or document d, to find an ordered list of the terms or documents most similar to d
  • Further, for any two terms or documents, we can find the similarity between them
  • It is therefore meaningful to regard a term as a special case of a document - a very short one (see the sketch below)
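
A minimal sketch of these operations in Python. The random `term_vectors` are a hypothetical stand-in for a trained model; the vocabulary and dimensionality are illustrative, not from the experiments:

```python
import numpy as np

# Hypothetical stand-in for a trained model: random vectors for a tiny vocabulary.
rng = np.random.default_rng(0)
vocabulary = ["computer", "system", "user", "graph", "trees"]
term_vectors = {t: rng.normal(size=100) for t in vocabulary}

def doc_vector(terms):
    """Map a document (a bag of terms) to the sum of its term vectors."""
    return np.sum([term_vectors[t] for t in terms], axis=0)

def cosine(u, v):
    """Similarity as the cosine of the angle between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def most_similar(query_vec, n=5):
    """Ordered list of the terms most similar to a term or document vector."""
    scored = [(cosine(query_vec, v), t) for t, v in term_vectors.items()]
    return sorted(scored, reverse=True)[:n]

# A term is just a very short document: both go through the same functions.
print(most_similar(doc_vector(["computer", "user"])))
```

Because a document vector is just a sum of term vectors, the same `most_similar` function serves terms, term sets, and whole documents alike, which is exactly the point of this slide.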

  7. [Figure: the term vectors plotted in two dimensions, showing clusters]

  8. Zoom into the blue cluster

  9. And the red one

  10. Alternative data sources
  • A thesaurus, a WordNet, or some other kind of knowledge database would be a useful source for similar information. But:
  • We don't have them for all languages
  • Most of them are not domain specific, so domain-specific terms are not covered
  • In such data sources most of the words are missing - especially names, compounds, technical terms, and numbers
  • My big newspaper corpus contains ~3,000,000 unique words
  • A vector space model, on the other hand, can be trained from raw, unannotated corpus data!

  11. Calculating a vector space
  • The training process needs a large set of documents - the bigger the better. My data set used for experiments contains roughly 1.2 million newspaper articles and 0.4 billion running words, but I will collect more...
  • Step 1: Create a word-by-document matrix - each element in the matrix is a (possibly weighted) frequency of a word type in a specific document (see the sketch below)
  • From here there are several ways to find a good vector space
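
A minimal sketch of Step 1, with a three-document stand-in for the 1.2 million articles (the raw counts could be replaced by any weighting scheme):

```python
from collections import Counter

# Stand-in corpus: each document is a list of word tokens.
documents = [
    "human interface computer".split(),
    "survey user computer system response time".split(),
    "eps user interface system".split(),
]

# Word-by-document matrix: matrix[word][j] = frequency of word in document j.
vocabulary = sorted({w for doc in documents for w in doc})
matrix = {w: [0] * len(documents) for w in vocabulary}
for j, doc in enumerate(documents):
    for w, freq in Counter(doc).items():
        matrix[w][j] = freq  # a weighting (tf-idf, ltc, ...) could be applied here

for w in vocabulary:
    print(f"{w:10s} {matrix[w]}")
```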

  12. Vector Space Algorithms
  • Singular Value Decomposition (SVD)
  • A mathematically involved method (based on eigenvalues) that finds an optimal vector space with a given number of dimensions
  • Computationally heavy - maybe 20 hours for my test set
  • Often uses the entire document as context
  • Random Indexing (RI) - see the sketch below
  • Selects the dimensions randomly
  • Not as heavy to calculate, but it is less clear (to me) why it works
  • Uses a small context, typically 1+1 to 5+5 words
  • Neural nets, Hyperspace Analogue to Language, etc.
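
Random Indexing is easy to sketch, which also hints at how it works: each word gets a sparse random "index vector", and a word's meaning vector accumulates the index vectors of the words seen within its context window. All parameters below (dimensionality, number of nonzeros, window size) are illustrative choices, not the ones used in any experiment here:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, NONZERO, WINDOW = 300, 6, 2  # illustrative settings

def index_vector():
    """A sparse random vector: a few +1/-1 entries, the rest zeros."""
    v = np.zeros(DIM)
    pos = rng.choice(DIM, size=NONZERO, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
    return v

def random_indexing(documents):
    """For each word, sum the index vectors of its context-window neighbours."""
    index, context = {}, {}
    for doc in documents:
        for w in doc:
            index.setdefault(w, index_vector())
            context.setdefault(w, np.zeros(DIM))
    for doc in documents:
        for i, w in enumerate(doc):
            for j in range(max(0, i - WINDOW), min(len(doc), i + WINDOW + 1)):
                if j != i:
                    context[w] += index[doc[j]]
    return context

vectors = random_indexing(["the user typed on the computer".split()])
```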

  13. The terminology I use
  Some people use these terms in a sloppy way. For me:
  • LSI = LSA: Latent Semantic Indexing / Analysis are used in roughly the same way by most people
  • SVD and RI are two ways to obtain the model used in LSA - they both find the latent information

  14. A toy example

  15. What SVD gives us: X = T0 S0 D0', where X, T0, S0, and D0 are matrices. X is the word-by-document matrix, T0 holds the term vectors, D0 holds the document vectors, and S0 is diagonal with the singular values on its diagonal.

  16. And our example: T0 (one row of 9 values per term)
  Human      .22 -.11  .29 -.41 -.11 -.34  .52 -.06 -.41
  Interface  .20 -.07  .14 -.55  .28  .50 -.07 -.01 -.11
  Computer   .24  .04 -.16 -.59 -.11 -.25 -.30  .06  .49
  User       .40  .06 -.34  .10  .33  .38  0    0    .01
  System     .64 -.17  .36  .33 -.16 -.21 -.17  .03  .27
  Response   .27  .11 -.43  .07  .08 -.17  .28 -.02 -.05
  Time       .27  .11 -.43  .07  .08 -.17  .28 -.02 -.05
  EPS        .30 -.14  .33  .19  .11  .27 -.03 -.02 -.17
  Survey     .21  .27 -.18 -.03 -.54  .08 -.47 -.04 -.58
  Trees      .01  .49  .23  .03 -.59 -.39 -.29  .25 -.23
  Graph      .04  .62  .22  0   -.07  .11  .16 -.68  .23
  Minors     .03  .65  .14 -.01 -.30  .28  .34  .68 -.18

  17. And our example: S0 (the singular values, in decreasing order)
  3.34 2.54 2.35 1.64 1.50 1.31 0.85 0.56 0.36

  18. And our example: D0 (printed transposed: rows are dimensions 1-9, columns are the documents C1-C5, M1-M4)
   .20  .61  .46  .54  .28  0    .01  .02  .08
  -.06  .17 -.13 -.23  .11  .19  .44  .62  .53
   .11 -.50  .21  .57 -.51  .10  .19  .25  .08
  -.95 -.03  .04  .27  .15  .02  .02  .01 -.03
   .05 -.21  .38 -.21  .33  .39  .35  .15 -.60
  -.08 -.26 -.72 -.37  .03 -.30 -.21  0    .36
   .18 -.43 -.24  .26  .67 -.34 -.15  .25  .04
  -.01  .05  .01 -.02 -.06  .45 -.76  .45 -.07
  -.06  .24  .02 -.08 -.26 -.62  .02  .52 -.45

  19. We can recalculate X keeping only the m=2 largest singular values:
              C1    C2    C3    C4    C5    M1    M2    M3    M4
  Human      .16   .40   .38   .47   .18  -.05  -.12  -.16  -.09
  Interface  .14   .37   .33   .40   .16  -.03  -.07  -.10  -.04
  Computer   .15   .51   .36   .41   .24   .02   .06   .09   .12
  User       .26   .84   .61   .70   .39   .03   .08   .12   .19
  System     .45  1.23  1.05  1.27   .56  -.07  -.15  -.21  -.05
  Response   .16   .58   .38   .42   .28   .06   .13   .19   .22
  Time       .16   .58   .38   .42   .28   .06   .13   .19   .22
  EPS        .22   .55   .51   .63   .24  -.07  -.14  -.20  -.11
  Survey     .10   .53   .23   .21   .27   .14   .44   .44   .42
  Trees     -.06   .23  -.14  -.27   .14   .24   .77   .77   .66
  Graph     -.06   .34  -.15  -.30   .20   .31   .98   .98   .85
  Minors    -.04   .25  -.10  -.21   .15   .22   .71   .71   .62
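
The numbers on these slides match the classic toy example from Deerwester et al. (1990), so the whole decomposition can be reproduced with NumPy's built-in SVD. (Signs of individual columns of T0 and D0 may differ between implementations, but the rank-2 reconstruction is the same.)

```python
import numpy as np

# The toy example's term-by-document matrix X (Deerwester et al. 1990):
# columns C1-C5 are HCI documents, M1-M4 are graph-theory documents.
X = np.array([
    [1,0,0,1,0,0,0,0,0],  # human
    [1,0,1,0,0,0,0,0,0],  # interface
    [1,1,0,0,0,0,0,0,0],  # computer
    [0,1,1,0,1,0,0,0,0],  # user
    [0,1,1,2,0,0,0,0,0],  # system
    [0,1,0,0,1,0,0,0,0],  # response
    [0,1,0,0,1,0,0,0,0],  # time
    [0,0,1,1,0,0,0,0,0],  # eps
    [0,1,0,0,0,0,0,0,1],  # survey
    [0,0,0,0,0,1,1,1,0],  # trees
    [0,0,0,0,0,0,1,1,1],  # graph
    [0,0,0,0,0,0,0,1,1],  # minors
])

T0, S0, D0t = np.linalg.svd(X, full_matrices=False)  # X = T0 @ diag(S0) @ D0t
print(np.round(S0, 2))  # [3.34 2.54 2.35 1.64 1.5 1.31 0.85 0.56 0.36]

m = 2                   # keep only the two largest singular values
X2 = T0[:, :m] @ np.diag(S0[:m]) @ D0t[:m, :]
print(np.round(X2, 2))  # reproduces the table on slide 19
```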

  20. What does the SVD give?
  • Susan Dumais 1995: "The SVD program takes the ltc transformed term-document matrix as input, and calculates the best 'reduced-dimension' approximation to this matrix."
  • Michael W. Berry 1992: "This important result indicates that Ak is the best k-rank approximation (in a least squares sense) to the matrix A."
  • Leif 2003: What Berry says is that SVD gives the best projection from n to k dimensions, that is, the projection that preserves distances as well as possible - so there are no problems with local maxima.

  21. The distance measure
  Three easy-to-calculate distance measures (see the sketch below):
  • Cosine: the cosine of the angle between the vectors
  • Euclidean distance: just the distance as we all know it
  • Manhattan distance: the distance if you walk only along the orthogonal axes
  All are just as easy to calculate in n dimensions, where n >> 3. The most commonly used is the cosine.
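
Each measure is a one-liner in NumPy, and costs the same in 300 dimensions as in 3:

```python
import numpy as np

def cosine(u, v):
    """Cosine of the angle between u and v (a similarity: 1 means same direction)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def euclidean(u, v):
    """The straight-line distance we all know."""
    return float(np.linalg.norm(u - v))

def manhattan(u, v):
    """Distance walking only along the orthogonal axes."""
    return float(np.abs(u - v).sum())

u, v = np.random.default_rng(2).normal(size=(2, 300))  # n >> 3 is no harder
print(cosine(u, v), euclidean(u, v), manhattan(u, v))
```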

  22. What does it really mean then?
  • The fact that a word w is represented by a specific vector v means exactly nothing!
  • If two words a and b are represented by vectors close to each other (the angle between them is small), then:
  • a and b are often found in the same document, and/or
  • a is often found together with some third word c, and c is often found together with b - and so on...

  23. A naive algorithm
  It is not trivial to see why SVD and RI work. Here is a naive but more intuitive algorithm that gives a result similar to SVD, though it is too slow for practical use (see the sketch below):
  1. For each unique word, select a random point in a space of the chosen dimensionality
  2. For each document D in the set: move the points corresponding to the words in D towards the center of mass of those points
  3. If any point made a "big" move since the last iteration, go back to step 2
  Steps 1-3 could be repeated several times to get a better chance of finding the global optimum.
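
A minimal sketch of this naive algorithm. The dimensionality, step size, and convergence threshold below are illustrative guesses, not tuned values:

```python
import numpy as np

def naive_embed(documents, dim=50, step=0.1, threshold=1e-3, seed=3):
    rng = np.random.default_rng(seed)
    # Step 1: a random point per unique word.
    points = {w: rng.normal(size=dim) for doc in documents for w in doc}
    while True:
        biggest_move = 0.0
        # Step 2: pull each word in D towards the center of mass of D's words.
        for doc in documents:
            center = np.mean([points[w] for w in doc], axis=0)
            for w in set(doc):
                move = step * (center - points[w])
                points[w] += move
                biggest_move = max(biggest_move, float(np.linalg.norm(move)))
        # Step 3: repeat until no point made a "big" move.
        if biggest_move < threshold:
            return points

vectors = naive_embed(["human interface computer".split(),
                       "eps user interface system".split()])
```

Restarting from fresh random points corresponds to the outer repetition of steps 1-3 mentioned on the slide.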

  24. Some applications
  • Automatic generation of a domain-specific thesaurus
  • Keyword extraction from documents
  • Find sets of similar documents in a collection
  • Find documents related to a given document or a set of terms

  25. Problems and questions
  • How can we interpret the similarities as different kinds of relations?
  • How can we include document structure and phrases in the model?
  • Terms are not really terms, but just words
  • Ambiguous terms pollute the vector space
  • How could we find the optimal number of dimensions for the vector space?

  26. An example based on 5,000 newspaper articles
  Closest terms to "pelle svensson":
  0.886 pelle
  0.886 svensson
  0.821 svenssons
  0.795 ödsligt
  0.789 skandal
  0.786 frikännande
  0.784 polismannens
  0.781 tjänstetid
  0.781 slutkörd
  0.781 munsex
  0.780 avstyra
  Closest terms to "bengt johansson":
  0.853 johansson
  0.752 bengt
  0.750 davidson
  0.746 folkpartiledaren
  0.737 kdsledaren
  0.734 öresundsbroprojektet
  0.728 centerledaren
  0.725 irhammar
  0.716 partiledarna
  0.715 avgaser
  0.709 lyckosamt

  27. Bengt Johansson is just Bengt + Johansson - something is missing!
  Closest terms to "bengt":
  1.000 bengt
  0.764 folkpartiledaren
  0.749 westerberg
  0.730 kdsledaren
  0.713 riksdagsledamot
  0.703 ändrats
  0.697 ingbritt
  0.692 irhammar
  0.685 tolkningen
  0.677 tolkar
  0.674 partiledarna
  Closest terms to "johansson":
  1.000 johansson
  0.789 olof
  0.768 miljödepartementets
  0.752 görel
  0.751 thurdin
  0.750 miljöminister
  0.749 brofrågan
  0.749 rosenbad
  0.746 miljödepartementet
  0.746 regeringssammanträdet
  0.745 avgaser

  28. A small experiment
  • I want the model to distinguish the phrase Bengt Johansson from the separate words Bengt and Johansson
  • Make a frequency list of all n-tuples up to n=5 with a frequency > 1
  • Keep all words in the bags, but add the tuples, with spaces replaced by _, as words (see the sketch below)
  • Run the LSI training again
  • Now Bengt_Johansson is a word, and Bengt_Johansson is NOT Bengt + Johansson
  • The number of terms grows from 34,238 to 104,783
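
A minimal sketch of the n-tuple step. The two-document corpus is a stand-in; the frequency cut-off and n ≤ 5 follow the slide:

```python
from collections import Counter

def ngrams(doc, max_n=5):
    """All n-tuples of adjacent words, 2 <= n <= max_n, joined by '_'."""
    for n in range(2, max_n + 1):
        for i in range(len(doc) - n + 1):
            yield "_".join(doc[i:i + n])

def add_phrase_terms(documents, max_n=5, min_freq=2):
    """Keep all words in the bags, but add the frequent n-tuples as extra 'words'."""
    counts = Counter(g for doc in documents for g in ngrams(doc, max_n))
    frequent = {g for g, f in counts.items() if f >= min_freq}
    return [doc + [g for g in ngrams(doc, max_n) if g in frequent]
            for doc in documents]

docs = [["bengt", "johansson", "coach"], ["bengt", "johansson", "handball"]]
print(add_phrase_terms(docs))
# [['bengt', 'johansson', 'coach', 'bengt_johansson'],
#  ['bengt', 'johansson', 'handball', 'bengt_johansson']]
```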

  29. New results
  Some distances:
  0.4371 bengt_johansson johansson
  0.2566 bengt_johansson bengt
  0.1014 bengt_johansson olof
  0.0994 bengt_johansson folkpartiledaren
  0.8014 johansson olof
  0.5376 johansson folkpartiledaren
  0.4850 johansson bengt
  0.8438 bengt folkpartiledaren
  0.4246 bengt olof
  0.5616 folkpartiledaren olof

  30. And the top list for Bengt_Johansson:
  1.000 bengt_johansson
  0.997 handbollslandslag
  0.995 gunnar_blombäck
  0.993 fyrnationsturneringen_i_östergötland
  0.991 fyrnationsturneringen
  0.991 förbundskapten_bengt_johansson
  0.991 förbundskapten_bengt
  0.990 blombäck
  0.974 carlen
  0.972 åtta_mål
  0.971 bänken
  0.957 magnus_wislander
  0.957 wislander
  0.953 målet_stod
  0.951 svenske_förbundskaptenen
  0.949 orutinerade
  0.948 vinna_den_här
  0.945 magnus_andersson
  0.945 matchen_spelades
  0.942 förbundskaptenen
  0.935 landskamp
  0.935 glädjeämnen
  0.933 vmlaget
  0.927 halvlek
  0.927 världsstjärnor
  0.926 bottenlaget
  0.924 brolin
  0.923 uppvisningen
  0.923 offensivt
  0.922 jörgensen
  0.921 landslag

  31. The new vector space model
  • It is clear that it is now possible to find terms closely related to Bengt Johansson - the handball coach
  • But is the model better for single words, or for document comparison? What do you think?
  • There are more "words" than before - hopefully this improves the result, just as more data does
  • At least there is no reason to expect a worse result... Or is there?

  32. An example document
  REGERINGSKRIS ELLER INTE PARTILEDARNA I SISTAMINUTEN ÖVERLÄGGNINGAR OM BRON Under onsdagskvällen satt partiledarna i regeringen i sista minutenöverläggningar om Öresundsbron Centerledaren Olof Johansson var den förste som lämnade överläggningarna På torsdagen ska regeringen ge ett besked Det måste dock enligt statsminister Carl Bildt inte innebära ett ja eller ett nej till bron ...
  (In English, roughly: "Government crisis or not - party leaders in last-minute talks about the bridge. On Wednesday evening the government's party leaders sat in last-minute talks about the Öresund bridge. Centre Party leader Olof Johansson was the first to leave the talks. On Thursday the government will give an answer, but according to Prime Minister Carl Bildt it need not mean a yes or a no to the bridge...")

  33. Closest terms to the example document in each model
  In the phrase model (note the _-joined terms):
  0.986 underkänner
  0.982 irhammar
  0.977 partiledarna
  0.970 godkände
  0.962 delade_meningar
  0.960 regeringssammanträde
  0.957 riksdagsledamot
  0.957 bengt_westerberg
  0.954 materialet
  0.952 diskuterade
  0.950 folkpartiledaren
  0.949 medierna
  0.947 motsättningarna
  0.946 vilar
  0.944 socialminister_bengt_westerberg
  In the basic model:
  0.967 partiledarna
  0.921 miljökrav
  0.921 underkänner
  0.918 tolkar
  0.897 meningar
  0.888 centerledaren
  0.886 regeringssammanträde
  0.880 slottet
  0.880 rosenbad
  0.877 planminister
  0.866 folkpartiledaren
  0.855 thurdin
  0.845 brokonsortiet
  0.839 görel
  0.826 irhammar

  34. Closest document in both models
  BILDT LOVAR BESKED OCH REGERINGSKRIS HOTAR Det blir ett besked under torsdagen men det måste inte innebära ett ja eller nej från regeringen till Öresundsbroprojektet Detta löfte framförde statsminister Carl Bildt under onsdagen i ett antal varianter Samtidigt skärptes tonen mellan honom och miljöminister Olof Johansson och stämningen tydde på annalkande regeringskris De båda har under den långa broprocessen undvikit att uttala sig kritiskt om varandra och därmed trappa upp motsättningarna Men nu menar Bildt att centern lämnar sned information utåt Johansson och planminister Görel Thurdin anser å andra sidan att regeringen bara kan säga nej till bron om man tar riktig hänsyn till underlaget för miljöprövningen ...
  (In English, roughly: "Bildt promises an answer as a government crisis looms. There will be an answer on Thursday, but it need not mean a yes or a no from the government to the Öresund bridge project, Prime Minister Carl Bildt promised on Wednesday. At the same time the tone between him and environment minister Olof Johansson sharpened, and the mood suggested an approaching government crisis...")

  35. [Figure]

  36. Documents with better ranking in the basic model
  (doc id; similarity and rank in the basic model; similarity and rank in the phrase model)
  2602: .848 (rank 4) vs .492 (rank 12)
  BRON KAN BLI VALFRÅGA SÄGER JOHANSSON ("The bridge may become an election issue, says Johansson") Om det lutar åt ett ja i regeringen av politiska skäl då är naturligtvis den här frågan en viktig valfråga ...
  2367: .804 (rank 10) vs .434 (rank 19)
  INTE EN KRITISK RÖST BLAND CENTERPARTISTERNA TILL BROBESKEDET ("Not one critical voice among the Centre Party members about the bridge decision") En etappseger för miljön och centern En eloge till Olof Johansson Görel Thurdin och Carl Bildt ...

  37. Documents with better ranking in the phrase model
  (doc id; similarity and rank in the basic model; similarity and rank in the phrase model)
  1567: .456 (rank 73) vs .601 (rank 5)
  ALF SVENSSON TOPPNAMN I STOCKHOLM ("Alf Svensson top name in Stockholm") Kds-ledaren Alf Svensson toppar kds riksdagslista för Stockholms stad och Michael Stjernström sakkunnig i statsrådsberedningen har en valbar andra plats ...
  1371: .456 (rank 74) vs .601 (rank 6)
  BENGT WESTERBERG BARNPORREN MÅSTE STOPPAS ("Bengt Westerberg: child pornography must be stopped") Folkpartiledaren Bengt Westerberg lovade på onsdagen att regeringen ska göra allt för att stoppa barnporren ...

  38. Hmm, adding n-grams was maybe too simple...
  • If the bad result is due to overtraining, it could help to remove the words the phrases are built from - but maybe not all of them
  • Another way would be to use a dependency parser to find more meaningful phrases, rather than just n-grams

  39. The interpretation of similarities
  I haven't tried to solve this problem at all yet, but one idea I have is to:
  • Calculate vector spaces for various dimensionalities and context widths
  • Check whether the different settings find different kinds of relations
  • With a data source like WordNet this could be done in a systematic way

  40. How to select the number of dimensions
  • Susan T. Dumais 1995: "In previous experiments we found that performance improves as the number of dimensions is increased up to 200 or 300 dimensions, and decreases slowly after that to the level observed for the standard vector EC-3 method (Dumais, 1991)."
  • Jason I. Hong 2000: "There does not seem to be a general consensus for an optimal number of dimensions; instead, the size of the concept space must be determined based on the specific collection of documents used."
  • Thomas K. Landauer 1997: "Near maximum performance of 45-53%, corrected for guessing, was obtained over a fairly broad region around 300 dimensions"
  • Leif 2003: "We should try to do experiments similar to those of Dumais and Landauer, but relate the optimal dimensionality to measures like the number of documents, terms, nonzero elements, etc., because these could give us a formula not relying on hand-tagged data sets"

  41. Performance of the SVD
  • Dumais 1995: "The SVD takes only about 2 minutes on a Sparc10 for a 2k x 5k matrix, but this time increases to about 18-20 hours for a 60k x 80k matrix."
  • Hong 2000: "The SVD algorithm is O(N² k³), where N is the number of terms plus documents, and k is the number of dimensions in the concept space", "However, if the collection is stable, SVD will only need to be performed once, which may be an acceptable cost."
  • Leif: So if a good computer today is 100 times faster than Dumais's 1995 machine, our data sets are 20 times bigger, and we use an optimized SVD function instead of a research prototype, it should still take around 20 hours (see the arithmetic below).
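
The back-of-envelope arithmetic behind that estimate, under my reading of its assumptions. The 4x gain for an optimized implementation is an assumption that makes the numbers come out at 20 hours; nothing here is measured:

```python
# Hong 2000: SVD is O(N^2 * k^3); keep k fixed and scale only the data size N.
base_hours  = 20    # Dumais 1995: a 60k x 80k matrix on a Sparc10
data_growth = 20    # today's collection: ~20x bigger
work_growth = data_growth ** 2   # the O(N^2) factor: 400x more work
hw_speedup  = 100   # assumed: today's machine is 100x faster
sw_speedup  = 4     # assumed: optimized SVD vs. the research prototype

print(base_hours * work_growth / (hw_speedup * sw_speedup))  # -> 20.0 hours
```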

  42. What I still have to do something about
  • Find a better LSI/SVD package than the one I have (old C code from 1990), or maybe write one myself...
  • Get the phrases into the model in some way
  When these things are done I could:
  • Try to interpret various relations from similarities in a vector space model
  • Try to solve the "optimal number of dimensions" problem
  • Explore what the length of the vectors means
