
Some crawling algorithms




Presentation Transcript


  1. Some crawling algorithms (5th March lecture)

  2. Robot (8) Efficient Crawling through URL Ordering [Cho 98]
  • Default ordering is based on breadth-first search.
  • Efficient crawling fetches important pages first.
  Importance definitions:
  • Similarity of a page to a driving query
  • Backlink count of a page
  • Forward link count of a page
  • PageRank of a page
  • Domain of a page (.edu is better than .com)
  • A weighted combination of the above, e.g. w1*Apples + w2*Oranges + …

  3. Robot (9) A method for fetching pages related to a driving query first [Cho 98].
  • Suppose the query is “computer”.
  • A page is related (hot) if “computer” appears in the title or appears ≥ 10 times in the body of the page.
  • Some heuristics for finding a hot page:
    • The anchor text of its URL contains “computer”.
    • Its URL contains “computer”.
    • Its URL is within 3 links of a hot page.
  A URL satisfying any of these heuristics is called a hot URL.

  4. Robot (10) Crawling Algorithm

    hot_queue = url_queue = empty;  /* initialization */
    /* hot_queue stores hot URLs; url_queue stores other URLs */
    enqueue(url_queue, starting_url);
    while (hot_queue or url_queue is not empty) {
        url = dequeue2(hot_queue, url_queue);  /* dequeue hot_queue first if it is not empty */
        page = fetch(url);
        if (page is hot) then hot[url] = true;
        enqueue(crawled_urls, url);
        url_list = extract_urls(page);
        for each u in url_list
            if (u not in url_queue and u not in hot_queue and u not in crawled_urls)
                /* u is a new URL */
                if (u is a hot URL) enqueue(hot_queue, u);
                else enqueue(url_queue, u);
    }
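
A minimal runnable Python sketch of this two-queue crawler, for concreteness. The helpers fetch_page and extract_urls are hypothetical placeholders (a real crawler would plug in an HTTP client and an HTML link extractor); the hotness tests follow the heuristics from the previous slide, except the within-3-links heuristic, which is omitted for brevity.

    from collections import deque

    QUERY = "computer"

    def is_hot_page(title, body):
        # Hot if the query appears in the title or >= 10 times in the body.
        return QUERY in title.lower() or body.lower().count(QUERY) >= 10

    def is_hot_url(url, anchor_text=""):
        # Heuristic: the URL or its anchor text contains the query term.
        return QUERY in url.lower() or QUERY in anchor_text.lower()

    def crawl(starting_url, fetch_page, extract_urls, max_pages=100):
        """fetch_page(url) -> (title, body); extract_urls(body) -> [(url, anchor)]."""
        hot_queue, url_queue = deque(), deque()
        seen, hot = {starting_url}, {}
        url_queue.append(starting_url)
        while (hot_queue or url_queue) and len(hot) < max_pages:
            # dequeue2: prefer the hot queue whenever it is non-empty
            url = hot_queue.popleft() if hot_queue else url_queue.popleft()
            title, body = fetch_page(url)
            hot[url] = is_hot_page(title, body)
            for u, anchor in extract_urls(body):
                if u not in seen:  # a new URL
                    seen.add(u)
                    (hot_queue if is_hot_url(u, anchor) else url_queue).append(u)
        return hot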

  5. Focused Crawling
  • Classifier: is crawled page P relevant to the topic?
    • An algorithm that maps a page to relevant/irrelevant
    • Semi-automatic
    • Based on the page's vicinity
  • Distiller: is crawled page P likely to lead to relevant pages?
    • An algorithm that maps a page to likely/unlikely
    • Could be just an A/H (authorities/hubs) computation, taking the hubs
    • The distiller determines the priority of following links off of P

  6. Measuring Crawler efficiency

  7. Indexing and Retrieval Issues

  8. Efficient Retrieval (1)
  • Document-term matrix:

           t1   t2   ...  tj   ...  tm    nf
      d1   w11  w12  ...  w1j  ...  w1m   1/|d1|
      d2   w21  w22  ...  w2j  ...  w2m   1/|d2|
      ...
      di   wi1  wi2  ...  wij  ...  wim   1/|di|
      ...
      dn   wn1  wn2  ...  wnj  ...  wnm   1/|dn|

  • wij is the weight of term tj in document di.
  • Most wij's will be zero.

  9. Naïve retrieval
  Consider a query q = (q1, q2, …, qj, …, qm), with nf = 1/|q|.
  How do we evaluate q, i.e., compute the similarity between q and every document?
  Method 1: compare q with every document directly.
  • Document data structure:
    di : ((t1, wi1), (t2, wi2), . . ., (tj, wij), . . ., (tm, wim), 1/|di|)
    • Only terms with positive weights are kept.
    • Terms are in alphabetic order.
  • Query data structure:
    q : ((t1, q1), (t2, q2), . . ., (tj, qj), . . ., (tm, qm), 1/|q|)

  10. Naïve retrieval
  Method 1: compare q with documents directly (cont.)
  • Algorithm:

    initialize all sim(q, di) = 0;
    for each document di (i = 1, …, n) {
        for each term tj (j = 1, …, m)
            if tj appears in both q and di
                sim(q, di) += qj * wij;
        sim(q, di) = sim(q, di) * (1/|q|) * (1/|di|);
    }
    sort documents in descending similarity and display the top k to the user;
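
As a concreteness check, here is a hedged Python sketch of Method 1. Documents and the query are dicts from term to weight, mirroring the only-positive-weights representation above; names like naive_retrieval are illustrative, not from the lecture.

    import math

    def norm(vec):
        # nf = 1/|v|: inverse Euclidean length of a weight vector
        return 1.0 / math.sqrt(sum(w * w for w in vec.values()))

    def naive_retrieval(query, docs, k=10):
        """query: {term: weight}; docs: {doc_id: {term: weight}}."""
        nf_q = norm(query)
        sims = {}
        for doc_id, doc in docs.items():
            # accumulate qj * wij over terms appearing in both q and di
            s = sum(qw * doc[t] for t, qw in query.items() if t in doc)
            sims[doc_id] = s * nf_q * norm(doc)
        return sorted(sims.items(), key=lambda x: -x[1])[:k]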

  11. Inverted Files
  Observation: Method 1 is not efficient, since most non-zero entries in the document-term matrix must be accessed.
  Method 2: use an inverted file index.
  Several data structures:
  • For each term tj, create a list (the inverted file list) that contains all document ids that have tj:
    I(tj) = { (d1, w1j), (d2, w2j), …, (di, wij), …, (dn, wnj) }
  • di is the document id number of the ith document.
  • Only entries with non-zero weights should be kept.

  12. Inverted files
  Method 2: use an inverted file index (continued)
  Several data structures:
  • Normalization factors of documents are pre-computed and stored in an array: nf[i] stores 1/|di|.
  • A hash table over all terms in the collection maps each term tj to a pointer to its inverted list I(tj).
  • Inverted file lists are typically stored on disk.
  • The number of distinct terms is usually very large.

  13. How Inverted Files are Created (figure: a dictionary of terms pointing to postings lists)

  14. Retrieval using Inverted files
  Use something like this as part of your project (a runnable sketch follows below).
  • Algorithm:

    initialize all sim(q, di) = 0;
    for each term tj in q {
        find I(tj) using the hash table;
        for each (di, wij) in I(tj)
            sim(q, di) += qj * wij;
    }
    for each document di
        sim(q, di) = sim(q, di) * (1/|q|) * nf[i];
    sort documents in descending similarity and display the top k to the user;
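
A small self-contained Python sketch of Method 2, using the same data as the worked example on the next two slides; build_index and retrieve are illustrative names, not the lecture's.

    import math
    from collections import defaultdict

    def build_index(docs):
        """docs: {doc_id: {term: weight}} -> inverted lists I(t) plus norms nf."""
        index, nf = defaultdict(list), {}
        for doc_id, doc in docs.items():
            nf[doc_id] = 1.0 / math.sqrt(sum(w * w for w in doc.values()))
            for term, w in doc.items():
                index[term].append((doc_id, w))  # only non-zero weights stored
        return index, nf

    def retrieve(query, index, nf, k=10):
        nf_q = 1.0 / math.sqrt(sum(w * w for w in query.values()))
        sims = defaultdict(float)
        for term, qw in query.items():
            for doc_id, w in index.get(term, []):  # only postings of query terms
                sims[doc_id] += qw * w
        return sorted(((d, s * nf_q * nf[d]) for d, s in sims.items()),
                      key=lambda x: -x[1])[:k]

    docs = {1: {"t1": 2, "t2": 1, "t3": 1},
            2: {"t2": 2, "t3": 1, "t4": 1},
            3: {"t1": 1, "t3": 1, "t4": 1},
            4: {"t1": 2, "t2": 1, "t3": 2, "t4": 2},
            5: {"t2": 2, "t4": 1, "t5": 2}}
    index, nf = build_index(docs)
    print(retrieve({"t1": 1, "t3": 1}, index, nf))
    # d1 ≈ .87, d3 ≈ .82, d4 ≈ .78, d2 ≈ .29 -- matches the worked example below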

  15. Retrieval using inverted indices
  Some observations about Method 2:
  • If a document d does not contain any term of a given query q, then d is not involved in the evaluation of q at all.
  • Only the non-zero entries in the columns of the document-term matrix that correspond to the query terms are used to evaluate the query.
  • It computes the similarities of multiple documents simultaneously (with respect to each query word).

  16. Efficient Retrieval (8)
  Example (Method 2): suppose q = { (t1, 1), (t3, 1) }, 1/|q| = 0.7071.

    d1 = { (t1, 2), (t2, 1), (t3, 1) },          nf[1] = 0.4082
    d2 = { (t2, 2), (t3, 1), (t4, 1) },          nf[2] = 0.4082
    d3 = { (t1, 1), (t3, 1), (t4, 1) },          nf[3] = 0.5774
    d4 = { (t1, 2), (t2, 1), (t3, 2), (t4, 2) }, nf[4] = 0.2774
    d5 = { (t2, 2), (t4, 1), (t5, 2) },          nf[5] = 0.3333

    I(t1) = { (d1, 2), (d3, 1), (d4, 2) }
    I(t2) = { (d1, 1), (d2, 2), (d4, 1), (d5, 2) }
    I(t3) = { (d1, 1), (d2, 1), (d3, 1), (d4, 2) }
    I(t4) = { (d2, 1), (d3, 1), (d4, 1), (d5, 1) }
    I(t5) = { (d5, 2) }

  17. Efficient Retrieval (9)
  After t1 is processed:
    sim(q, d1) = 2, sim(q, d2) = 0, sim(q, d3) = 1, sim(q, d4) = 2, sim(q, d5) = 0
  After t3 is processed:
    sim(q, d1) = 3, sim(q, d2) = 1, sim(q, d3) = 2, sim(q, d4) = 4, sim(q, d5) = 0
  After normalization:
    sim(q, d1) = .87, sim(q, d2) = .29, sim(q, d3) = .82, sim(q, d4) = .78, sim(q, d5) = 0

  18. Efficiency versus Flexibility
  • Storing computed document weights is good for efficiency but bad for flexibility.
  • Recomputation is needed if the tf-weight or idf-weight formulas change, and/or the tf and df information changes.
  • Flexibility is improved by storing raw tf and df information, but efficiency suffers.
  • A compromise:
    • Store pre-computed tf weights of documents.
    • Apply idf weights to the query term tf weights instead of to the document term tf weights.
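
One way to read the compromise: the postings keep per-document tf weights, and idf is folded into the query side at query time, so changing the idf formula never requires an index rebuild. A sketch under that assumption (the idf formula here is a common choice, not necessarily the lecture's):

    import math

    def idf(df, n_docs):
        # Swapping this formula requires no index rebuild: the postings
        # store tf weights only.
        return math.log(n_docs / df)

    def score(query_tf, postings, df, n_docs):
        """query_tf: {term: tf}; postings: {term: [(doc_id, tf_weight)]}."""
        sims = {}
        for term, qtf in query_tf.items():
            q_weight = qtf * idf(df[term], n_docs)  # idf applied on the query side
            for doc_id, d_tfw in postings.get(term, []):
                sims[doc_id] = sims.get(doc_id, 0.0) + q_weight * d_tfw
        return sims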

  19. 7th March Lecture: Indexing & Retrieval in Google
  Slides from http://www.cs.huji.ac.il/~sdbi/2000/google/index.htm

  20. System Anatomy
  • High-level overview

  21. Major Data Structures
  • BigFiles
  • virtual files spanning multiple file systems
  • addressable by 64-bit integers
  • handle allocation & deallocation of file descriptors, since the OS's facilities are not enough
  • support rudimentary compression

  22. Major Data Structures (2)
  • Repository
  • a tradeoff between speed & compression ratio
  • chose zlib (3 to 1) over bzip (4 to 1)
  • requires no other data structure to access it

  23. Major Data Structures (3)
  • Document Index
  • keeps information about each document
  • a fixed-width ISAM (index sequential access mode) index
  • includes various statistics
  • a pointer into the repository (if crawled) and pointers to info lists
  • a compact data structure
  • a record can be fetched in 1 disk seek during a search

  24. Major Data Structures (4)
  • URL-to-docID file
  • used to convert URLs to docIDs
  • a list of URL checksums with their docIDs
  • sorted by checksum
  • given a URL, a binary search is performed
  • conversion is done in batch mode
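
A minimal sketch of the checksum-sorted lookup using Python's bisect; the checksum function is a hypothetical stand-in (64 bits of SHA-1), since the actual checksum used is not specified here.

    import bisect
    import hashlib

    def url_checksum(url):
        # Stand-in for the real URL checksum: first 8 bytes of SHA-1.
        return int.from_bytes(hashlib.sha1(url.encode()).digest()[:8], "big")

    def build_url_index(url_to_docid):
        # Parallel arrays sorted by checksum, as in the docID file.
        pairs = sorted((url_checksum(u), d) for u, d in url_to_docid.items())
        return [c for c, _ in pairs], [d for _, d in pairs]

    def lookup_docid(url, checksums, docids):
        c = url_checksum(url)
        i = bisect.bisect_left(checksums, c)  # binary search over checksums
        if i < len(checksums) and checksums[i] == c:
            return docids[i]
        return None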

  25. Major Data Structures (5)
  • Lexicon
  • can fit in memory for a reasonable price
  • currently 256 MB
  • contains 14 million words
  • 2 parts:
  • a list of the words
  • a hash table

  26. Major Data Structures (6)
  • Hit Lists
  • include position, font & capitalization information
  • account for most of the space used in the indexes
  • 3 encoding alternatives: simple, Huffman, hand-optimized
  • the hand-optimized encoding uses 2 bytes for every hit

  27. Major Data Structures (7)
  • Hit Lists (2)

  28. Major Data Structures (8)
  • Forward Index
  • partially sorted
  • uses 64 barrels
  • each barrel holds a range of wordIDs
  • requires slightly more storage
  • each wordID is stored as a relative difference from the minimum wordID of its barrel
  • saves considerable time in the sorting
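
The relative-difference trick in miniature: store each wordID as an offset from its barrel's minimum wordID, so each entry needs fewer bits. The barrel size below is invented for illustration.

    BARREL_SIZE = 1 << 18  # hypothetical: each barrel covers 2**18 wordIDs

    def barrel_of(word_id):
        return word_id // BARREL_SIZE

    def encode(word_id):
        # Offset from the barrel's minimum wordID; fits in 18 bits here.
        return word_id - barrel_of(word_id) * BARREL_SIZE

    def decode(barrel, offset):
        return barrel * BARREL_SIZE + offset

    wid = 5_000_000
    assert decode(barrel_of(wid), encode(wid)) == wid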

  29. Major Data Structures (9)
  • Inverted Index
  • 64 barrels (same as the forward index)
  • for each wordID, the Lexicon contains a pointer to the barrel that the wordID falls into
  • the pointer points to a doclist of docIDs together with their hit lists
  • the order of the docIDs is important
  • by docID, or by doc word-ranking
  • two sets of inverted barrels: the short barrels and the full barrels

  30. Major Data Structures (10)
  • Crawling the Web
  • a fast distributed crawling system
  • the URLserver & crawlers are implemented in Python
  • each crawler keeps about 300 connections open
  • at peak times the rate is 100 pages, 600 KB per second
  • uses an internal cached DNS lookup
  • synchronized I/O to handle events
  • a number of queues
  • robust & carefully tested

  31. Major Data Structures (11)
  • Indexing the Web: Parsing
  • the parser must handle errors:
  • HTML typos
  • kilobytes of zeros in the middle of a tag
  • non-ASCII characters
  • HTML tags nested hundreds deep
  • they developed their own parser
  • it involved a fair amount of work
  • it did not cause a bottleneck

  32. Major Data Structures (12)
  • Indexing Documents into Barrels
  • turning words into wordIDs
  • an in-memory hash table: the Lexicon
  • new additions are logged to a file
  • parallelization:
  • a shared lexicon of 14 million words
  • a log of all the extra words

  33. Major Data Structures (13)
  • Indexing the Web: Sorting
  • creating the inverted index
  • produces two types of barrels:
  • for titles and anchors (short barrels)
  • for full text (full barrels)
  • sorts every barrel separately
  • runs the sorters in parallel
  • the sorting is done in main memory
  Ranking looks at the short barrels first, and then at the full barrels.

  34. Searching
  Algorithm:
  1. Parse the query.
  2. Convert words into wordIDs.
  3. Seek to the start of the doclist in the short barrel for every word.
  4. Scan through the doclists until there is a document that matches all of the search terms.
  5. Compute the rank of that document.
  6. If we're at the end of the short barrels, start at the doclists of the full barrel, unless we have enough documents.
  7. If we're not at the end of any doclist, go to step 4.
  8. Sort the matched documents by rank and return the top k. (May jump here after 40k pages.)
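
A toy sketch of the conjunctive scan in steps 3-4 and the final sort in step 8, assuming each doclist is already sorted by docID and rank() is supplied by the caller; the real system scans on-disk barrels, and the scan cap below only loosely mirrors the 40k-page early exit.

    def search(doclists, rank, k=10, scan_limit=40_000):
        """doclists: one sorted list of docIDs per query word."""
        iters = [iter(dl) for dl in doclists]
        current = [next(it, None) for it in iters]
        matches, scanned = [], 0
        while all(d is not None for d in current) and scanned < scan_limit:
            lo, hi = min(current), max(current)
            if lo == hi:  # this docID appears in every doclist
                matches.append((rank(lo), lo))
                current = [next(it, None) for it in iters]
            else:         # advance the lagging doclists
                current = [d if d == hi else next(it, None)
                           for d, it in zip(current, iters)]
            scanned += 1
        return [d for _, d in sorted(matches, reverse=True)[:k]]

    print(search([[1, 3, 5, 9], [3, 4, 5, 9]], rank=lambda d: 1.0 / d))  # [3, 5, 9]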

  35. The Ranking System
  • The information used:
  • position, font size, capitalization
  • anchor text
  • PageRank
  • Hit types:
  • title, anchor, URL, etc.
  • small font, large font, etc.

  36. The Ranking System (2)
  • Each hit type has its own weight.
  • Count-weights increase linearly with counts at first but quickly taper off; this is the IR score of the document.
  • (IDF weighting??)
  • The IR score is combined with PageRank to give the final rank.
  • For a multi-word query:
  • a proximity score for every set of hits, with a proximity-type weight
  • 10 grades of proximity
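
One simple way to realize count-weights that grow linearly at first and then taper off is a saturating function; this is illustrative only, not Google's actual formula, and the type weights below are assumptions.

    def count_weight(count, cap=8.0):
        # Roughly linear for small counts, saturating near `cap` for large ones.
        return cap * count / (count + cap)

    def ir_score(hit_counts, type_weights):
        """hit_counts/type_weights: {hit_type: value}, e.g. 'title', 'anchor'."""
        return sum(type_weights[t] * count_weight(c) for t, c in hit_counts.items())

    # A title hit counts for more than many plain-text hits (weights assumed).
    print(ir_score({"title": 1, "plain": 12}, {"title": 5.0, "plain": 1.0}))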

  37. Feedback
  • A trusted user may optionally evaluate the results.
  • The feedback is saved.
  • When modifying the ranking function, we can see the impact of the change on all previous searches that were ranked.

  38. Results
  • Produces better results than major commercial search engines for most searches.
  • Example: the query “bill clinton”
  • returns results from whitehouse.gov
  • email addresses of the president
  • all of the results are high-quality pages
  • no broken links
  • no Bill without Clinton & no Clinton without Bill

  39. Storage Requirements
  • Using compression on the repository:
  • about 55 GB for all the data used by the search engine
  • most queries can be answered using just the short inverted index
  • with better compression, a high-quality search engine could fit onto a 7 GB drive of a new PC

  40. Web Page Statistics / Storage Statistics (tables of collection and storage figures)

  41. System Performance
  • It took 9 days to download 26 million pages
  • 48.5 pages per second
  • the indexer & crawler ran simultaneously
  • the indexer runs at 54 pages per second
  • the sorters run in parallel using 4 machines; the whole sorting process took 24 hours

  42. Clustering

  43. Clustering in Web Search
  • Global clustering: generate clusters from the entire document database
    • offline
    • e.g. the Google approach
  • Local clustering: cluster just the search results for the current query
    • online
    • e.g. Manjara (uses LSI ideas), Grouper (uses suffix trees), Northern exposure (manual/collaborative)
  Notice the similarity with clustering-based query expansion.

  44. What is clustering?
  • The process of grouping a set of physical or abstract objects into classes of similar objects.
  • It is also called unsupervised learning.
  • It is a common and important task that finds many applications.

  45. Classical clustering methods
  • Partitioning methods
    • k-Means (and EM), k-Medoids
  • Hierarchical methods
    • agglomerative, divisive, BIRCH
  • Distance measures between clusters
    • minimum, maximum
    • mean
    • average

  46. Clustering -- Example 1
  • For simplicity, 1-dimensional objects and k = 2.
  • Objects: 1, 2, 5, 6, 7
  • k-Means:
    • randomly select 5 and 6 as centroids;
    • => two clusters {1,2,5} and {6,7}; meanC1 = 8/3, meanC2 = 6.5
    • => {1,2}, {5,6,7}; meanC1 = 1.5, meanC2 = 6
    • => no change.
  • Aggregate dissimilarity = 0.5^2 + 0.5^2 + 1^2 + 1^2 = 2.5
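
A compact sketch of 1-dimensional k-means reproducing this trace (initial centroids fixed at 5 and 6 instead of random, so the run is deterministic; assumes no cluster ever empties):

    def kmeans_1d(points, centroids, iters=10):
        for _ in range(iters):
            # assignment step: each point joins its nearest centroid
            clusters = [[] for _ in centroids]
            for p in points:
                i = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
                clusters[i].append(p)
            # update step: centroids move to their cluster means
            new = [sum(c) / len(c) for c in clusters]
            if new == centroids:
                break
            centroids = new
        return clusters, centroids

    clusters, centroids = kmeans_1d([1, 2, 5, 6, 7], [5.0, 6.0])
    print(clusters, centroids)  # [[1, 2], [5, 6, 7]] [1.5, 6.0]
    print(sum((p - c) ** 2
              for cl, c in zip(clusters, centroids) for p in cl))  # 2.5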

  47. Clustering -- Example 2
  • For simplicity, we still use 1-dimensional objects.
  • Objects: 1, 2, 5, 6, 7
  • Agglomerative clustering:
    • find the two closest objects/clusters and merge them;
    • => {1,2}, so we now have {1.5, 5, 6, 7};
    • => {1,2}, {5,6}, so we have {1.5, 5.5, 7};
    • => {1,2}, {{5,6},7}.
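
The same trace as a sketch of centroid-based agglomerative merging: each step merges the two closest clusters and replaces them by the mean of their centroids, matching the 1.5 and 5.5 in the example (1-dimensional points only, kept sorted so only adjacent clusters need comparing).

    def agglomerative_1d(points, k):
        # Each cluster is (centroid, members); merge until k clusters remain.
        clusters = [(float(p), [p]) for p in sorted(points)]
        while len(clusters) > k:
            # find the adjacent pair of centroids with the smallest gap
            i = min(range(len(clusters) - 1),
                    key=lambda j: clusters[j + 1][0] - clusters[j][0])
            (c1, m1), (c2, m2) = clusters[i], clusters[i + 1]
            clusters[i:i + 2] = [((c1 + c2) / 2, m1 + m2)]
        return clusters

    print(agglomerative_1d([1, 2, 5, 6, 7], k=2))
    # [(1.5, [1, 2]), (6.25, [5, 6, 7])] -- i.e. {1,2} and {{5,6},7}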

  48. Challenges
  • Scalability
  • Dealing with different types of attributes
  • Clusters with arbitrary shapes
  • Automatically determining input parameters
  • Dealing with noise (outliers)
  • Order insensitivity
  • High dimensionality
  • Interpretability and usability
