
The Nuts & Bolts of Hypertext Retrieval: Crawling; Indexing; Retrieval

Presentation Transcript


  1. The Nuts & Bolts of Hypertext Retrieval: Crawling; Indexing; Retrieval

  2. Indexing and Retrieval Issues

  3. Efficient Retrieval (1) • Document-term matrix, with normalization factors nf in the last column:

         t1   t2   ...  tj   ...  tm    nf
    d1   w11  w12  ...  w1j  ...  w1m   1/|d1|
    d2   w21  w22  ...  w2j  ...  w2m   1/|d2|
    ...
    di   wi1  wi2  ...  wij  ...  wim   1/|di|
    ...
    dn   wn1  wn2  ...  wnj  ...  wnm   1/|dn|

  • wij is the weight of term tj in document di • Most wij's will be zero (the matrix is sparse).

  4. Naïve retrieval Consider a query q = (q1, q2, …, qj, …, qm), with nf = 1/|q|. How do we evaluate q (i.e., compute the similarity between q and every document)? Method 1: Compare q with every document directly. • document data structure: di : ((t1, wi1), (t2, wi2), . . ., (tj, wij), . . ., (tm, wim), 1/|di|) • Only terms with positive weights are kept. • Terms are in alphabetical order. • query data structure: q : ((t1, q1), (t2, q2), . . ., (tj, qj), . . ., (tm, qm), 1/|q|)

  5. Naïve retrieval Method 1: Compare q with documents directly (cont.) • Algorithm:

    initialize all sim(q, di) = 0;
    for each document di (i = 1, …, n) {
      for each term tj (j = 1, …, m)
        if tj appears in both q and di
          sim(q, di) += qj * wij;
      sim(q, di) = sim(q, di) * (1/|q|) * (1/|di|);
    }
    sort documents in descending order of similarity and display the top k to the user;
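A minimal Python sketch of Method 1. The documents and weights here are invented toy data (not from the slides); each document is a list of (term, weight) pairs in alphabetical order, with only positive weights kept, as the previous slide describes.

```python
import math

# Toy data (invented): sorted (term, weight) lists, positive weights only.
docs = {
    'd1': [('apple', 2.0), ('pear', 1.0)],
    'd2': [('pear', 3.0), ('plum', 1.0)],
}
query = [('apple', 1.0), ('plum', 1.0)]

def norm(vec):
    # nf = 1 / |v|, the pre-computable normalization factor
    return 1 / math.sqrt(sum(w * w for _, w in vec))

def naive_sim(q, d):
    # Merge the two alphabetically sorted lists, accumulating qj * wij
    s, i, j = 0.0, 0, 0
    while i < len(q) and j < len(d):
        if q[i][0] == d[j][0]:
            s += q[i][1] * d[j][1]
            i += 1; j += 1
        elif q[i][0] < d[j][0]:
            i += 1
        else:
            j += 1
    return s * norm(q) * norm(d)

# Compare q with every document directly, then rank.
ranked = sorted(docs, key=lambda d: naive_sim(query, docs[d]), reverse=True)
print(ranked)  # → ['d1', 'd2']
```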

  6. Inverted Files Observation: Method 1 is not efficient: it compares q with every document, even documents that share no terms with the query, so most non-zero entries in the document-term matrix are accessed. Method 2: Use an Inverted File Index. Several data structures: • For each term tj, create a list (inverted file list) that contains all document ids that have tj: I(tj) = { (d1, w1j), (d2, w2j), …, (di, wij), …, (dn, wnj) } • di is the document id of the ith document. • Only entries with non-zero weights should be kept.

  7. Inverted files Method 2: Use Inverted File Index (continued) Several data structures: • Normalization factors of documents are pre-computed and stored in an array: nf[i] stores 1/|di|. • Create a hash table for all terms in the collection, mapping each term tj to a pointer to its inverted list I(tj). • Inverted file lists are typically stored on disk. • The number of distinct terms is usually very large.

  8. How Inverted Files are Created Dictionary Postings
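A sketch of the dictionary/postings construction named above. The three-document corpus is invented for illustration; each term in the dictionary maps to a postings list of the docIDs containing it.

```python
from collections import defaultdict

# Toy corpus (invented): docID -> text
corpus = {1: "new home sales top forecasts",
          2: "home sales rise in july",
          3: "increase in home sales in july"}

postings = defaultdict(list)   # term -> sorted list of docIDs (postings list)
for doc_id in sorted(corpus):
    # set() drops duplicate occurrences; each docID appears once per term
    for term in sorted(set(corpus[doc_id].split())):
        postings[term].append(doc_id)

dictionary = sorted(postings)  # the term dictionary, kept in memory
print(postings['home'])        # → [1, 2, 3]
print(postings['july'])        # → [2, 3]
```

In a real system the dictionary lives in memory (hashed or sorted) while the postings lists live on disk, as the preceding slide notes.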

  9. Retrieval using Inverted files Use something like this as part of your project. Algorithm:

    initialize all sim(q, di) = 0;
    for each term tj in q {
      find I(tj) using the hash table;
      for each (di, wij) in I(tj)
        sim(q, di) += qj * wij;
    }
    for each document di
      sim(q, di) = sim(q, di) * (1/|q|) * nf[i];
    sort documents in descending order of similarity and display the top k to the user;

  10. Retrieval using inverted indices Some observations about Method 2: • If a document d does not contain any term of a given query q, then d will not be involved in the evaluation of q. • Only non-zero entries in the columns of the document-term matrix corresponding to the query terms are used to evaluate the query. • The method computes the similarities of multiple documents simultaneously (w.r.t. each query word).

  11. Efficient Retrieval (8) Example (Method 2): Suppose
    q = { (t1, 1), (t3, 1) }, 1/|q| = 0.7071
    d1 = { (t1, 2), (t2, 1), (t3, 1) }, nf[1] = 0.4082
    d2 = { (t2, 2), (t3, 1), (t4, 1) }, nf[2] = 0.4082
    d3 = { (t1, 1), (t3, 1), (t4, 1) }, nf[3] = 0.5774
    d4 = { (t1, 2), (t2, 1), (t3, 2), (t4, 2) }, nf[4] = 0.2774
    d5 = { (t2, 2), (t4, 1), (t5, 2) }, nf[5] = 0.3333
    I(t1) = { (d1, 2), (d3, 1), (d4, 2) }
    I(t2) = { (d1, 1), (d2, 2), (d4, 1), (d5, 2) }
    I(t3) = { (d1, 1), (d2, 1), (d3, 1), (d4, 2) }
    I(t4) = { (d2, 1), (d3, 1), (d4, 1), (d5, 1) }
    I(t5) = { (d5, 2) }

  12. Efficient Retrieval (9) After t1 is processed:
    sim(q, d1) = 2, sim(q, d2) = 0, sim(q, d3) = 1, sim(q, d4) = 2, sim(q, d5) = 0
  After t3 is processed:
    sim(q, d1) = 3, sim(q, d2) = 1, sim(q, d3) = 2, sim(q, d4) = 4, sim(q, d5) = 0
  After normalization:
    sim(q, d1) = .87, sim(q, d2) = .29, sim(q, d3) = .82, sim(q, d4) = .78, sim(q, d5) = 0
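The whole Method 2 run can be checked in a few lines of Python, using exactly the example data from slide 11; the inverted lists and normalization factors are derived from the document weights rather than typed in by hand.

```python
import math

# Term weights per document, taken from the example slide.
docs = {
    'd1': {'t1': 2, 't2': 1, 't3': 1},
    'd2': {'t2': 2, 't3': 1, 't4': 1},
    'd3': {'t1': 1, 't3': 1, 't4': 1},
    'd4': {'t1': 2, 't2': 1, 't3': 2, 't4': 2},
    'd5': {'t2': 2, 't4': 1, 't5': 2},
}
query = {'t1': 1, 't3': 1}

# Build the inverted file lists I(tj) = [(doc_id, wij), ...]
inverted = {}
for doc_id, terms in docs.items():
    for t, w in terms.items():
        inverted.setdefault(t, []).append((doc_id, w))

# Pre-computed normalization factors: nf[i] = 1/|di|, and 1/|q|
nf = {d: 1 / math.sqrt(sum(w * w for w in ws.values()))
      for d, ws in docs.items()}
qnf = 1 / math.sqrt(sum(w * w for w in query.values()))

# Accumulate qj * wij term by term, touching only matching doclists
sim = {d: 0.0 for d in docs}
for t, qw in query.items():
    for doc_id, w in inverted.get(t, []):
        sim[doc_id] += qw * w
sim = {d: s * qnf * nf[d] for d, s in sim.items()}

ranked = sorted(sim.items(), key=lambda x: -x[1])
print([(d, round(s, 2)) for d, s in ranked])
```

This reproduces the slide's numbers: d1 = .87, d3 = .82, d4 = .78, d2 = .29, d5 = 0.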

  13. Efficiency versus Flexibility • Storing computed document weights is good for efficiency but bad for flexibility. • Recomputation is needed if the tf-weight or idf-weight formulas change and/or the tf and df information changes. • Flexibility is improved by storing raw tf and df information, but efficiency suffers. • A compromise: • Store pre-computed tf weights of documents. • Apply idf weights to the query term tf weights instead of to the document term tf weights.
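The compromise above can be sketched as follows: documents store only pre-computed tf weights, and the idf factor is folded into the query-side weight at query time, so changing the idf formula never forces re-indexing. All names and numbers here are invented toy data.

```python
import math

# Invented collection statistics: N documents, per-term document frequencies.
N = 1000
df = {'apple': 10, 'pear': 400}

def idf(term):
    # A common idf formula; swapping it in later needs no re-indexing
    return math.log(N / df[term])

doc_tf = {'apple': 2.0, 'pear': 1.0}   # stored per document: tf weights only
query_tf = {'apple': 1.0}              # raw query term frequencies

# The idf weight rides on the query side; the document side stays raw tf.
score = sum(qtf * idf(t) * doc_tf.get(t, 0.0)
            for t, qtf in query_tf.items())
```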

  14. Web Crawling (Search) Strategy • Starting location(s) • Traversal order • Depth first • Breadth first • Or ??? • Cycles? • Coverage? • Load? [Figure: example link graph with nodes b–j illustrating traversal order]
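The traversal-order and cycle questions above can be sketched in a few lines; the link graph is invented for illustration, with an edge that cycles back to the seed.

```python
from collections import deque

# Invented toy link graph: url -> outgoing links
links = {
    'a': ['b', 'c'], 'b': ['d', 'e'], 'c': ['f', 'g'],
    'd': [], 'e': ['h'], 'f': [], 'g': ['i'],
    'h': ['j'], 'i': ['a'],   # cycle back to the start
    'j': [],
}

def crawl_bfs(seed):
    seen = {seed}               # handles cycles: never enqueue a URL twice
    frontier = deque([seed])
    order = []
    while frontier:
        url = frontier.popleft()  # popleft = breadth first; pop() would be depth first
        order.append(url)
        for out in links.get(url, []):
            if out not in seen:
                seen.add(out)
                frontier.append(out)
    return order

print(crawl_bfs('a'))  # → ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
```

Coverage and load would add politeness delays and per-host queues on top of this skeleton; they are omitted here.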

  15. Interesting link: Commercial search-engine conference http://www.infonortics.com/searchengines Lecture of October 7th Today's Agenda: Complete crawling issues -- Discussion of Google & Mercator Topics in queue: Clustering; Collaborative filtering Next class: Clustering search results Read: paper by Vipin Kumar et al.

  16. Robot (2) Some specific issues: • What initial URLs to use? The choice depends on the type of search engine to be built. • For general-purpose search engines, use URLs that are likely to reach a large portion of the Web, such as the Yahoo home page. • For local search engines covering one or several organizations, use the URLs of the home pages of these organizations; in addition, use appropriate domain constraints.

  17. Robot (4) • How to extract URLs from a web page? Need to identify all possible tags and attributes that hold URLs. • Anchor tag: <a href="URL" …> … </a> • Option tag: <option value="URL" …> … </option> • Map: <area href="URL" …> • Frame: <frame src="URL" …> • Link to an image: <img src="URL" …> • Relative path vs. absolute path: <base href=…>
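A sketch of such an extractor using Python's standard-library HTML parser: it walks the tag/attribute pairs the slide lists and resolves relative paths against the page URL (or a `<base href>` if one appears). The page snippet and URLs are invented.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# Tag -> attribute that holds a URL, per the slide's list
URL_ATTRS = {'a': 'href', 'area': 'href', 'frame': 'src',
             'img': 'src', 'option': 'value'}

class LinkExtractor(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base = base_url
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'base' and 'href' in attrs:
            self.base = attrs['href']   # <base href=…> resets resolution
        elif tag in URL_ATTRS and URL_ATTRS[tag] in attrs:
            # urljoin handles both relative and absolute paths
            self.urls.append(urljoin(self.base, attrs[URL_ATTRS[tag]]))

page = '<a href="/docs/a.html">a</a> <img src="pic.gif">'
p = LinkExtractor('http://example.com/index.html')
p.feed(page)
print(p.urls)  # → ['http://example.com/docs/a.html', 'http://example.com/pic.gif']
```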

  18. Robot (7) Several research issues about robots: • Fetching more important pages first with limited resources. • Can use measures of page importance • Fetching web pages in a specified subject area such as movies and sports for creating domain-specific search engines. • Focused crawling • Efficient re-fetch of web pages to keep web page index up-to-date. • Keeping track of change rate of a page

  19. Focused Crawling • Classifier: is crawled page P relevant to the topic? • Algorithm that maps a page to relevant/irrelevant • Semi-automatic • Based on page vicinity. • Distiller: is crawled page P likely to lead to relevant pages? • Algorithm that maps a page to likely/unlikely • Could be just A/H computation, taking the HUBS • The distiller determines the priority of following links off of P
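A toy best-first rendering of the classifier/distiller split: a priority queue orders the frontier by a distiller score, and a classifier stub decides which fetched pages count as relevant. The graph, scores, and threshold are all invented stand-ins for real components.

```python
import heapq

# Invented link graph and distiller scores (priority of following links)
links = {'p0': ['p1', 'p2'], 'p1': ['p3'], 'p2': [], 'p3': []}
priority = {'p0': 0.9, 'p1': 0.2, 'p2': 0.8, 'p3': 0.95}

def relevance(page):
    # Stub classifier: relevant iff its score clears a made-up threshold
    return priority[page] >= 0.5

def focused_crawl(seed):
    heap = [(-priority[seed], seed)]   # max-heap via negated scores
    seen = {seed}
    relevant = []
    while heap:
        _, page = heapq.heappop(heap)  # best-first: highest priority next
        if relevance(page):
            relevant.append(page)
        for out in links.get(page, []):
            if out not in seen:
                seen.add(out)
                heapq.heappush(heap, (-priority[out], out))
    return relevant

print(focused_crawl('p0'))  # → ['p0', 'p2', 'p3']
```

Note how the low-priority page p1 is still expanded (its links may lead somewhere relevant, as the distiller question asks) but is not itself kept.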

  20. SPIDER CASE STUDY

  21. Anatomy of Google (circa 1999) Slides from http://www.cs.huji.ac.il/~sdbi/2000/google/index.htm

  22. The "Google" paper Discusses Google's architecture circa 1999. [Chart: search-engine size over time -- number of indexed pages, self-reported; Google: 50% of the web?]

  23. Google Search Engine Architecture • URL Server - provides URLs to be fetched • Crawler - distributed • Store Server - compresses and stores pages for indexing • Repository - holds pages for indexing (full HTML of every page) • Indexer - parses documents; records words, positions, font size, and capitalization • Lexicon - list of unique words found • HitList - efficient record of word locations + attributes • Barrels - hold (docID, (wordID, hitList*)*)*, sorted; each barrel holds a range of words • Anchors - keep information about links found in web pages • URL Resolver - converts relative URLs to absolute • Sorter - generates the Doc Index • Doc Index - inverted index of all words in all documents (except stop words) • Links - stores info about links to each page (used for PageRank) • PageRank - computes a rank for each page retrieved • Searcher - answers queries SOURCE: BRIN & PAGE

  24. Major Data Structures • BigFiles • virtual files spanning multiple file systems • addressable by 64-bit integers • handle allocation & deallocation of file descriptors, since the OS's support is not enough • support rudimentary compression

  25. Major Data Structures (2) • Repository • tradeoff between speed & compression ratio • chose zlib (3:1) over bzip (4:1) • requires no other data structure to access it

  26. Major Data Structures (3) • Document Index • keeps information about each document • fixed-width ISAM (indexed sequential access method) index • includes various statistics • a pointer into the repository and, if the page has been crawled, a pointer to its info lists • compact data structure • a record can be fetched in one disk seek during search

  27. Major Data Structures (4) • Lexicon • can fit in memory for reasonable price • currently 256 MB • contains 14 million words • 2 parts • a list of words • a hash table

  28. Major Data Structures (4) • Hit Lists • include position, font & capitalization • account for most of the space used in the indexes • 3 alternatives: simple, Huffman, hand-optimized • hand encoding uses 2 bytes for every hit

  29. Major Data Structures (4) • Hit Lists (2)

  30. Major Data Structures (5) • Forward Index • partially ordered • uses 64 barrels • each barrel holds a range of wordIDs • requires slightly more storage • each wordID is stored as a relative difference from the minimum wordID of the barrel • saves considerable time in the sorting
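The relative-difference trick above is easy to illustrate: storing each wordID as an offset from the barrel's minimum lets entries fit in fewer bits. The wordID values here are invented.

```python
# Invented barrel data: the minimum wordID and a few wordIDs in its range
barrel_min = 1_000_000
word_ids = [1_000_003, 1_000_017, 1_000_254]

# Encode as small offsets from the barrel minimum
encoded = [w - barrel_min for w in word_ids]     # [3, 17, 254]

# Decoding adds the minimum back, recovering the original ids
decoded = [barrel_min + off for off in encoded]
assert decoded == word_ids
```

The offsets (3, 17, 254) fit in one or two bytes each, whereas the raw wordIDs need several; the same idea underlies gap encoding of docIDs in postings lists.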

  31. Major Data Structures (6) • Inverted Index • 64 Barrels (same as the Forward Index) • for each wordID the Lexicon contains a pointer to the Barrel that wordID falls into • the pointer points to a doclist with their hit list • the order of the docIDs is important • by docID or doc word-ranking • Two inverted barrels—the short barrel/full barrel

  32. Major Data Structures (7) • Crawling the Web • fast distributed crawling system • URLserver & Crawlers are implemented in Python • each Crawler keeps about 300 connections open • at peak time, the rate is about 100 pages (600 KB) per second • uses an internal cached DNS lookup • synchronized I/O to handle events • a number of queues • robust & carefully tested

  33. Major Data Structures (8) • Indexing the Web • Parsing • must know how to handle errors • HTML typos • KBs of zeros in the middle of a tag • non-ASCII characters • HTML tags nested hundreds deep • Developed their own parser • involved a fair amount of work • did not cause a bottleneck

  34. Major Data Structures (9) • Indexing Documents into Barrels • turning words into wordIDs • in-memory hash table - the Lexicon • new additions are logged to a file • parallelization • shared lexicon of 14 million words • log of all the extra words

  35. Major Data Structures (10) • Indexing the Web • Sorting • creating the inverted index • produces two types of barrels • for titles and anchors (short barrels) • for full text (full barrels) • sorts every barrel separately • running sorters in parallel • the sorting is done in main memory • Ranking looks at the short barrels first, and then the full barrels

  36. Searching Algorithm: 1. Parse the query. 2. Convert words into wordIDs. 3. Seek to the start of the doclist in the short barrel for every word. 4. Scan through the doclists until there is a document that matches all of the search terms. 5. Compute the rank of that document. 6. If we're at the end of the short barrels, start at the doclists of the full barrel, unless we have enough results. 7. If we're not at the end of any doclist, go to step 4. 8. Sort the documents by rank and return the top k. (May jump to step 8 after 40k pages.)
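The scan in steps 3–4 is essentially a multi-way doclist intersection. A toy rendering, with invented doclists assumed sorted by docID; ranking and the short/full barrel split are omitted:

```python
# Invented doclists: word -> sorted list of docIDs containing it
doclists = {
    'car': [1, 4, 7, 9],
    'red': [2, 4, 9, 11],
}

def match_all(query_words):
    # One cursor per query word's doclist
    its = [iter(doclists[w]) for w in query_words]
    current = [next(it, None) for it in its]
    hits = []
    while all(d is not None for d in current):
        if len(set(current)) == 1:
            # Same docID under every cursor: a document matching all terms;
            # here is where its rank would be computed (step 5)
            hits.append(current[0])
            current = [next(it, None) for it in its]
        else:
            # Advance the cursor that is furthest behind
            i = current.index(min(current))
            current[i] = next(its[i], None)
    return hits

print(match_all(['car', 'red']))  # → [4, 9]
```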
