
Digital Libraries: Steps toward information finding


Presentation Transcript


  1. Digital Libraries: Steps toward information finding Many slides in this presentation are from Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. The book is available online at http://nlp.stanford.edu/IR-book/information-retrieval-book.html

  2. A brief introduction to Information Retrieval • Resource: Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press, 2008. • The entire book is available online, free, at http://nlp.stanford.edu/IR-book/information-retrieval-book.html • I will use some of the slides that the authors provide to go with the book.

  3. Author’s definition • Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers). • Note the use of the word “usually.” We have seen DL examples where the material is not documents, and not text.

  4. Examples and Scaling • IR is about finding a needle in a haystack: finding some particular thing in a very large collection of similar things. • Our examples are necessarily small so that we can comprehend them. Remember, though, that everything we say must scale to very large collections.

  5. Just how much information? • Libraries are about access to information. • What sense do you have about information quantity? • How fast is it growing? • Are there implications for the quantity and rate of increase?

  6. How much information is there? • The original slide (source: Jim Gray, Microsoft Research, modified) shows a logarithmic scale of data sizes running through kilo, mega, giga, tera, peta, exa, zetta and yotta, with examples placed along it: a book, a photo, a movie, all books (words), all books as multimedia, and finally everything ever recorded. • Soon most everything will be recorded and indexed. • Most bytes will never be seen by humans. • Data summarization, trend detection and anomaly detection are key technologies; these require algorithms, data and knowledge representation, and knowledge of the domain. • See Mike Lesk, How much information is there?: http://www.lesk.com/mlesk/ksg97/ksg.html • See Lyman & Varian, How much information?: http://www.sims.berkeley.edu/research/projects/how-much-info/

  7. Where does the information come from? • Many sources • Corporations • Individuals • Interest groups • News organizations • Accumulated through crawling

  8. Once we have a collection • How will we ever find the needle in the haystack? The one bit of information needed? • After crawling, or other resource acquisition step, we need to create a way to query the information we have • Next step: Index • Example content: Shakespeare’s plays

  9. Searching Shakespeare • Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia? • See http://www.rhymezone.com/shakespeare/ • One could grep all of Shakespeare’s plays for Brutus and Caesar, then strip out lines containing Calpurnia. • Why is that not the answer? • Slow (for large corpora) • NOT Calpurnia is non-trivial • Other operations (e.g., find the word Romans NEAR countrymen) are not feasible • Ranked retrieval (returning the best documents first) is not possible

  10. Term-document incidence • Query: Brutus AND Caesar BUT NOT Calpurnia • First approach: build a matrix with terms on one axis and plays on the other. Each row is a term, each column is a play, and an entry is 1 if the play contains the word, 0 otherwise.

  11. Incidence Vectors • So we have a 0/1 vector for each term. • To answer the query: take the vectors for Brutus, Caesar and Calpurnia (complemented) and combine them with bitwise AND. • 110100 AND 110111 AND 101111 = 100100.
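
A minimal sketch of the bitwise-AND query above, using Python integers as incidence vectors. The assignment of plays to bit positions is an assumption made just for this illustration:

```python
# Incidence vectors from slides 10-11: one bit per play,
# 1 if the play contains the term. The play-to-bit mapping is assumed.
vectors = {
    "Brutus":    0b110100,
    "Caesar":    0b110111,
    "Calpurnia": 0b010000,   # complement of 0b010000 is 0b101111
}

N_PLAYS = 6
MASK = (1 << N_PLAYS) - 1    # 0b111111, keeps the complement to 6 bits

# Brutus AND Caesar AND NOT Calpurnia
result = vectors["Brutus"] & vectors["Caesar"] & (~vectors["Calpurnia"] & MASK)
print(format(result, "06b"))  # 100100 -> two plays satisfy the query
```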

  12. Answer to query • Antony and Cleopatra,Act III, Scene ii Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus, When Antony found Julius Caesar dead, He cried almost to roaring; and he wept When at Philippi he found Brutus slain. • Hamlet, Act III, Scene ii Lord Polonius: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.

  13. Try another one • What is the vector for the query Antony AND mercy? • What would we do to find Antony OR mercy?

  14. Basic assumptions about information retrieval • Collection: a fixed set of documents • Goal: retrieve documents with information that is relevant to the user’s information need and helps the user complete a task

  15. The classic search model • Task: ultimately, there is some task to perform. • Info need: some information is required in order to perform the task. • Verbal form: the information need must be expressed in words (usually). • Query: the information need must be expressed in the form of a query that can be processed. • The search engine runs the query against the corpus and returns results; it may be necessary to rephrase the query and try again (query refinement).

  16. The classic search model: potential pitfalls between task and query results (misconception? mistranslation? misformulation?) • Task: get rid of mice in a politically correct way. • Info need: information about removing mice without killing them. • Verbal form: “How do I trap mice alive?” • Query: mouse trap. • The search engine runs this query against the corpus; the results will likely require query refinement.

  17. How good are the results? • Precision: what fraction of the retrieved results are actually relevant to the information need? • Recall: what fraction of the relevant documents in the collection were retrieved? • These are the basic concepts of information retrieval evaluation.
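
As a small illustration of these two measures, here is a sketch that computes precision and recall for a single query from sets of document IDs (the example numbers are made up):

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved documents that are relevant.
    Recall: fraction of relevant documents that were retrieved."""
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# 3 of the 4 retrieved documents are relevant (precision 0.75),
# but only 3 of the 6 relevant documents were found (recall 0.5).
print(precision_recall({1, 2, 3, 4}, {2, 3, 4, 7, 8, 9}))
```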

  18. Stop and think • If you had to choose between precision and recall, which would you choose? • Give an example when each would be preferable. Everyone, provide an example of each. Everyone, comment on an example of each provided by someone else. • Ideally, do this in real time. However, you may need to come back to do your comments after others have finished.

  19. Discuss • What was the best example of precision as being more important? • What was the best example of recall being more important? • Try to come to consensus in your discussion.

  20. Size considerations • Consider N = 1 million documents, each with about 1000 words. • Avg 6 bytes/word including spaces/punctuation • 6GB of data in the documents. • Say there are M = 500K distinct terms among these.

  21. The matrix does not work • A 500K x 1M matrix has half a trillion 0’s and 1’s. • But it has no more than one billion 1’s (at most one per word occurrence). • The matrix is extremely sparse. • What’s a better representation? • We only record the 1 positions. • i.e., we don’t need to know which documents do not contain a term, only those that do. Why?

  22. Inverted index • For each term t, we must store a list of all documents that contain t. • Identify each document by a docID, a document serial number. • Example postings: Brutus → 1 2 4 11 31 45 173 174; Caesar → 1 2 4 5 6 16 57 132; Calpurnia → 2 31 54 101. • Can we use fixed-size arrays for this? What happens if we add document 14, which contains “Caesar”?

  23. Inverted index: dictionary and postings • We need variable-size postings lists. • On disk, a continuous run of postings is normal and best. • In memory, we can use linked lists or variable-length arrays; there are tradeoffs in size and ease of insertion. • The dictionary holds the terms; each dictionary entry points to that term’s postings list, and each entry in that list is a posting. • Example: Brutus → 1 2 4 11 31 45 173 174; Caesar → 1 2 4 5 6 16 57 132; Calpurnia → 2 31 54 101. • Postings are sorted by docID (more later on why).
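
A minimal in-memory sketch of the dictionary-plus-postings structure, using a Python dict of lists (variable-length, as the slide requires); the insertion helper is an assumption added to show the size/ease-of-insertion tradeoff:

```python
import bisect

# Toy index from slides 22-23: term -> postings list sorted by docID.
index = {
    "Brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
    "Caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
    "Calpurnia": [2, 31, 54, 101],
}

def add_posting(index, term, doc_id):
    """Add doc_id to term's postings, keeping the list sorted by docID."""
    postings = index.setdefault(term, [])
    if not postings or postings[-1] < doc_id:
        postings.append(doc_id)          # common case: docIDs arrive in order
    elif doc_id not in postings:
        bisect.insort(postings, doc_id)  # out-of-order insert stays sorted

# The "document 14 contains Caesar" question from slide 22:
add_posting(index, "Caesar", 14)
print(index["Caesar"])  # [1, 2, 4, 5, 6, 14, 16, 57, 132]
```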

  24. Inverted index construction • Documents to be indexed: “Friends, Romans, countrymen.” • Tokenizer → token stream: Friends, Romans, Countrymen. • Linguistic modules (stop words, stemming, capitalization, cases, etc.) → modified tokens: friend, roman, countryman. • Indexer → inverted index: friend → 2, 4; roman → 1, 2; countryman → 13, 16.
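
A rough sketch of the first two stages of this pipeline. The tokenizer and the “linguistic modules” here are deliberately crude stand-ins (a real system would use a proper stemmer, such as Porter’s):

```python
import re

def tokenize(text):
    """Split raw text into a token stream (very simplified)."""
    return re.findall(r"[A-Za-z']+", text)

def normalize(tokens):
    """Toy linguistic modules: lowercase and strip a plural 's'."""
    out = []
    for t in tokens:
        t = t.lower()
        if t.endswith("s") and len(t) > 3:  # crude stand-in for stemming
            t = t[:-1]
        out.append(t)
    return out

print(normalize(tokenize("Friends, Romans, countrymen.")))
# ['friend', 'roman', 'countrymen'] -- a real stemmer would also
# map 'countrymen' to 'countryman', as on the slide
```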

  25. Stop and think • Look at the previous slide. • Describe exactly what happened at each stage. • What does tokenization do? • What did the linguistic modules do? • Why would we want these transformations?

  26. Indexer steps: token sequence • Produce a sequence of (modified token, document ID) pairs. • Doc 1: “I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.” • Doc 2: “So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious.” • Note that the words in the resulting table are exactly the same as in the two documents, and in the same order.

  27. Indexer steps: sort • Sort the pairs by term, and then by docID. • This is the core indexing step.

  28. Indexer steps: dictionary & postings • Multiple entries for the same term in a single document are merged. • The result is split into a dictionary and postings: the postings list the IDs of the documents that contain the term, and the dictionary records each term’s document frequency (the number of documents in which the term appears).
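
A compact sketch of slides 26-28 end to end, assuming the two toy documents from slide 26: collect (token, docID) pairs, sort them, then merge duplicates into postings lists with document frequencies:

```python
from collections import defaultdict

docs = {
    1: "I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious",
}

# Step 1: sequence of (modified token, docID) pairs (lowercased, punctuation stripped)
pairs = [(token.strip(".,;"), doc_id)
         for doc_id, text in docs.items()
         for token in text.lower().split()]

# Step 2: sort by term, then by docID -- the core indexing step
pairs.sort()

# Step 3: merge duplicate (term, docID) entries into postings lists
postings = defaultdict(list)
for term, doc_id in pairs:
    if not postings[term] or postings[term][-1] != doc_id:
        postings[term].append(doc_id)

# Dictionary with document frequency for each term
dictionary = {term: len(plist) for term, plist in postings.items()}
print(dictionary["caesar"], postings["caesar"])  # 2 [1, 2]
print(dictionary["brutus"], postings["brutus"])  # 2 [1, 2]
```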

  29. Spot check (see the separate assignment for this in Blackboard) • Complete the indexing for the following two “documents.” • Of course, the examples have to be very small to be manageable; imagine that you are indexing the entire news stories. • Construct the charts as seen on the previous slide. • Put your solution in the Blackboard discussion board “Indexing – Spot Check 1”; you will find it on the content homepage. • Document 1: Pearson and Google Jump Into Learning Management With a New, Free System • Document 2: Pearson adds free learning management tools to Google Apps for Education

  30. How do we process a query? • Using the index we just built, examine the terms in some order, looking for the terms in the query.

  31. Query processing: AND • Consider processing the query: Brutus AND Caesar • Locate Brutus in the dictionary and retrieve its postings: 2 4 8 16 32 64 128 • Locate Caesar in the dictionary and retrieve its postings: 1 2 3 5 8 13 21 34 • “Merge” (intersect) the two postings lists.

  32. The merge • Postings: Brutus → 2 4 8 16 32 64 128; Caesar → 1 2 3 5 8 13 21 34. • Walk through the two postings lists simultaneously, in time linear in the total number of postings entries. • If the list lengths are x and y, the merge takes O(x + y) operations. What does that mean? • Crucial: postings are sorted by docID.

  33. Spot Check • Let’s assume that • the term mercy appears in documents 1, 2, 13, 18, 24, 35, 54 • the term noble appears in documents 1, 5, 7, 13, 22, 24, 56 • Show the document lists, then step through the merge algorithm to obtain the search results.

  34. Stop and try it • Make up a new pair of postings lists (like the ones we just saw). • Post it on the discussion board. • Take a pair of postings lists that someone else posted and walk through the merge process. Post a comment on the posting saying how many comparisons you had to do to complete the merge.

  35. Intersecting two postings lists (a “merge” algorithm)
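
The slide shows this algorithm as a figure; here is a Python rendering of the merge walk described on slide 32 (a sketch, not the book’s exact pseudocode):

```python
def intersect(p1, p2):
    """Intersect two postings lists sorted by docID in O(x + y) time."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])   # docID in both lists: part of the result
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1                 # advance the pointer with the smaller docID
        else:
            j += 1
    return answer

brutus = [2, 4, 8, 16, 32, 64, 128]
caesar = [1, 2, 3, 5, 8, 13, 21, 34]
print(intersect(brutus, caesar))   # [2, 8]
```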

  36. Boolean queries: exact match • The Boolean retrieval model answers queries that are Boolean expressions: • Boolean queries use AND, OR and NOT to join query terms • It views each document as a set of words • It is precise: a document either matches the condition or it does not. • Perhaps the simplest model to build • The primary commercial retrieval tool for three decades. • Many search systems you still use are Boolean: email, library catalogs, Mac OS X Spotlight

  37. Query optimization • Consider a query that is an AND of n terms, n > 2 • For each of the terms, get its postings list, then AND them together • Example query: BRUTUS AND CALPURNIA AND CAESAR • What is the best order for processing this query?

  38. Query optimization • Example query: BRUTUS AND CALPURNIA AND CAESAR • Simple and effective optimization: process the terms in order of increasing frequency (postings-list length) • Start with the shortest postings list, then keep cutting the intermediate result further • In this example: first CAESAR, then CALPURNIA, then BRUTUS

  39. Optimized intersection algorithm for conjunctive queries
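
Again the slide shows the algorithm as a figure; the sketch below (reusing intersect() from the sketch after slide 35 and the toy index from slide 23) applies the “shortest postings list first” heuristic and stops early if an intermediate result becomes empty:

```python
def intersect_terms(index, terms):
    """AND together several terms, processing postings lists from
    shortest to longest and cutting the intermediate result down."""
    postings_lists = sorted((index[t] for t in terms), key=len)
    result = postings_lists[0]
    for plist in postings_lists[1:]:
        result = intersect(result, plist)
        if not result:
            break                  # empty intermediate result: we can stop
    return result

# BRUTUS AND CALPURNIA AND CAESAR; with the toy index from slide 23,
# CALPURNIA happens to have the shortest list, so it is processed first.
print(intersect_terms(index, ["Brutus", "Calpurnia", "Caesar"]))  # [2]
```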

  40. More general optimization • Example query: (MADDING OR CROWD) AND (IGNOBLE OR STRIFE) • Get frequencies for all terms • Estimate the size of each OR group by the sum of its terms’ frequencies (a conservative upper bound) • Process the groups in increasing order of estimated OR size
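
A sketch of this estimate, assuming the same dictionary-of-postings index shape as before (the grouping of the query into OR groups is written out by hand here):

```python
def estimated_or_size(index, or_terms):
    """Upper bound on an OR group's result size: the sum of the terms'
    document frequencies (postings-list lengths)."""
    return sum(len(index.get(t, [])) for t in or_terms)

def order_or_groups(index, groups):
    """Order OR groups so the one with the smallest estimate is processed first."""
    return sorted(groups, key=lambda g: estimated_or_size(index, g))

# (MADDING OR CROWD) AND (IGNOBLE OR STRIFE), written as two OR groups:
query_groups = [["madding", "crowd"], ["ignoble", "strife"]]
# order_or_groups(index, query_groups) returns the groups in increasing
# order of their estimated sizes, ready for the AND processing.
```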

  41. Scaling • These basic techniques are pretty simple • There are challenges • Scaling • as everything becomes digitized, how well do the processes scale? • Intelligent information extraction • I want information, not just a link to a place that might have that information.

  42. Problem with Boolean search: feast or famine • Boolean queries often result in either too few (= 0) or too many (thousands of) results. • A query that is too broad yields hundreds of thousands of hits • A query that is too narrow may yield no hits • It takes a lot of skill to come up with a query that produces a manageable number of hits: AND tends to give too few; OR tends to give too many.

  43. Ranked retrieval models • Rather than a set of documents satisfying a query expression, in ranked retrieval models, the system returns an ordering over the (top) documents in the collection with respect to a query • Free text queries: Rather than a query language of operators and expressions, the user’s query is just one or more words in a human language • In principle, these are different options, but in practice, ranked retrieval models have normally been associated with free text queries and vice versa

  44. Feast or famine: not a problem in ranked retrieval • When a system produces a ranked result set, large result sets are not an issue • Indeed, the size of the result set is not an issue • We just show the top k (≈ 10) results • We don’t overwhelm the user • Premise: the ranking algorithm works

  45. Scoring as the basis of ranked retrieval • We wish to return, in order, the documents most likely to be useful to the searcher • How can we rank-order the documents in the collection with respect to a query? • Assign a score – say in [0, 1] – to each document • This score measures how well document and query “match”.

  46. Query-document matching scores • We need a way of assigning a score to a query/document pair • Let’s start with a one-term query • If the query term does not occur in the document: score should be 0 • The more frequent the query term in the document, the higher the score (should be) • We will look at a number of alternatives for this.
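
One of the simplest alternatives, shown only as a starting point: score a one-term query by the raw frequency of the term in the document (later refinements, such as tf-idf weighting, improve on this):

```python
def score_one_term(query_term, document_tokens):
    """Raw term-frequency score for a one-term query:
    0 if the term does not occur, higher the more often it occurs."""
    return document_tokens.count(query_term)

doc = "caesar was ambitious and caesar was noble".split()
print(score_one_term("caesar", doc))  # 2
print(score_one_term("brutus", doc))  # 0
```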

  47. Take 1: Jaccard coefficient • A commonly used measure of the overlap of two sets A and B • jaccard(A, B) = |A ∩ B| / |A ∪ B| • jaccard(A, A) = 1 • jaccard(A, B) = 0 if A ∩ B = ∅ • A and B don’t have to be the same size. • The coefficient is always a number between 0 and 1.

  48. Jaccard coefficient: Scoring example • What is the query-document match score that the Jaccard coefficient computes for each of the two “documents” below? • Query: ides of march • Document 1: caesar died in march • Document 2: the long march

  49. Jaccard example, worked • Query: ides of march • Document 1: caesar died in march • Document 2: the long march • A = {ides, of, march} • B1 = {caesar, died, in, march} • B2 = {the, long, march} • jaccard(A, B1) = |{march}| / |A ∪ B1| = 1/6 ≈ 0.17 • jaccard(A, B2) = |{march}| / |A ∪ B2| = 1/5 = 0.2
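
The same computation as a small sketch (the tokenization into word sets is done by hand, as on the slide):

```python
def jaccard(a, b):
    """Jaccard coefficient |A intersect B| / |A union B| of two sets."""
    if not a and not b:
        return 1.0            # treat two empty sets as identical
    return len(a & b) / len(a | b)

query = {"ides", "of", "march"}
doc1 = {"caesar", "died", "in", "march"}
doc2 = {"the", "long", "march"}

print(jaccard(query, doc1))   # 1/6 ~ 0.167
print(jaccard(query, doc2))   # 1/5 = 0.2
```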

  50. Issues with Jaccard for scoring • It doesn’t consider term frequency (how many times a term occurs in a document). • Rare terms in a collection are more informative than frequent terms, but Jaccard doesn’t use this information. • It also needs a more sophisticated way of normalizing for length: in the previous example, document 2 “won” because it was shorter, not because it was a better match. We need a way to take document length into account so that longer documents are not penalized in calculating the match score.
