
CS246: Page Selection






Presentation Transcript


  1. CS246: Page Selection
  Junghoo "John" Cho (UCLA Computer Science)

  2. Page Selection
  • Infinite # of pages on the Web
  • E.g., infinite pages from a calendar site
  • How to select the pages to download?

  3. Challenges Due to Infinity
  • What does Web coverage mean? 8 billion vs 20 billion?
  • How much should I download? 8 billion? 100 billion?
  • How much have I covered? When can I stop?
  • How to maximize coverage? How can we define coverage?

  4. RankMass
  • Web coverage weighted by PageRank
  • Q: Why PageRank?
  • A: Primary ranking metric for search results; the user's visit probability under the random surfer model

  5. PageRank
  • A page is important if it is pointed to by many important pages
  • PR(p) = PR(p1)/c1 + … + PR(pk)/ck, where pi: a page pointing to p, ci: number of out-links on pi
  • PageRank of p is the sum of the PageRanks of its parents, each divided by that parent's number of out-links
  • One equation for every page: N equations, N unknown variables

  6. PageRank: Random Surfer Model
  • The probability that a Web surfer reaches the page after many clicks, following links at random

  7. Damping Factor and Trust Score
  • Users do not always follow links: they get distracted and "jump" to other pages
  • d: damping factor, the probability of following a link
  • ti: trust score, non-zero only for the pages that the user trusts and jumps to
  • PR(pi) = d [ PR(p1)/c1 + … + PR(pk)/ck ] + (1 − d) ti
  • Also known as "TrustRank" or "Personalized PageRank"
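To make the damping factor, trust score, and the personalized PageRank equation above concrete, here is a minimal power-iteration sketch in Python. The toy graph, variable names, iteration count, and convergence threshold are illustrative assumptions, not part of the slides.

    import numpy as np

    # Toy link graph (illustrative only): page index -> pages it links to.
    links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
    n = len(links)
    d = 0.85                             # damping factor: probability of following a link
    t = np.array([1.0, 0.0, 0.0, 0.0])   # trust vector: jump only to page 0 when "bored"

    pr = np.full(n, 1.0 / n)             # initial guess for the PageRank vector
    for _ in range(200):
        new_pr = (1 - d) * t             # mass arriving via a jump to a trusted page
        for i, outs in links.items():
            for j in outs:
                new_pr[j] += d * pr[i] / len(outs)   # mass arriving via a link from page i
        if np.abs(new_pr - pr).sum() < 1e-10:        # stop once the iteration has converged
            pr = new_pr
            break
        pr = new_pr

    print(pr, pr.sum())                  # entries form a probability distribution (sum to 1)

Note that page 3, which no page links to and which has zero trust, ends up with PageRank 0: the random surfer can never land on it.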

  8. RankMass: Definition
  • RankMass of a downloaded set DC: RM(DC) = Σ_{pi ∈ DC} PR(pi)
  • Assuming personalized PageRank
  • Now what? How can we use it for the crawling problem?

  9. Two Crawling Challenges
  • Coverage guarantee: given ε, make sure we download pages whose total RankMass is at least 1 − ε
  • Crawling efficiency: for a given |DC|, pick DC such that RM(DC) is maximized

  10. RankMass Guarantee
  • Q: How can we provide a RankMass guarantee when we stop?
  • Q: How do we calculate RankMass without downloading the whole Web?
  • Q: Any way to provide the guarantee without knowing the exact PageRank?

  11. RankMass Guarantee
  • We can't compute the exact PageRank, but we can lower-bound it
  • How? Let's start with a simple case

  12. Single Trusted Page
  • t1 = 1; ti = 0 (i ≠ 1): the surfer always jumps to p1 when bored
  • NL(p1): pages reachable from p1 in L links or fewer

  13. Single Trusted Page
  • Q: What is the probability of reaching a page L links away from p1?

  14. RankMass Lower Bound: Single Trusted Page
  • Assuming the trust vector T(1), the sum of the PageRank values of all L-neighbors of p1 is at least d^(L+1)-close to 1:
  • Σ_{pi ∈ NL(p1)} PR(pi) ≥ 1 − d^(L+1)
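Why d^(L+1): a short reconstruction of the reasoning, written in LaTeX notation (not copied from the slide). With a single trusted page, every jump lands on p1, so the surfer's position is determined by how many links it has followed since its last jump:

    \Pr[\text{at most } L \text{ links followed since the last jump}]
      = \sum_{l=0}^{L} (1-d)\, d^{l} = 1 - d^{L+1}

    % Such a surfer must sit on a page reachable from p_1 within L links, i.e. in N_L(p_1):
    \sum_{p_i \in N_L(p_1)} PR(p_i) \;\ge\; 1 - d^{L+1}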

  15. PageRank Linearity
  • Let r(i) be the PageRank vector computed with trust vector t(i)
  • Then PageRank is linear in the trust vector: for any weights αi, the PageRank vector for the trust vector Σi αi t(i) is Σi αi r(i)

  16. RankMass Lower Bound: General Case
  • The RankMass of the L-neighbors of the group of all trusted pages G, NL(G), is at least d^(L+1)-close to 1. That is: Σ_{pi ∈ NL(G)} PR(pi) ≥ 1 − d^(L+1)
  • Q: Given this result, how should we download to get a RankMass guarantee?

  17. The L-Neighbor Crawler
  L := 0
  N[0] = {pi | ti > 0}   // Start with trusted pages
  While (ε < d^(L+1)):
    Download all uncrawled pages in N[L]
    N[L+1] = {all pages linked to by a page in N[L]}
    L = L + 1
  • Essentially, a BFS (breadth-first search) crawling algorithm
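A minimal Python sketch of this BFS crawler. The trust dictionary and the download/extract_links helpers are illustrative stubs, not names from the slides; a real crawler would fetch pages and parse their links.

    d = 0.85                                      # damping factor
    epsilon = 0.01                                # allowed RankMass shortfall
    trust = {"http://p1.example/": 1.0}           # assumed input: page -> trust score

    def download(page): ...                       # hypothetical fetch (stub)
    def extract_links(page): return []            # hypothetical link extraction (stub)

    frontier = {p for p, t in trust.items() if t > 0}   # N[0]: the trusted pages
    crawled = set()
    L = 0
    while epsilon < d ** (L + 1):                 # stop once 1 - d^(L+1) >= 1 - epsilon
        next_frontier = set()
        for page in frontier:
            if page not in crawled:
                download(page)
                crawled.add(page)
            next_frontier |= set(extract_links(page))   # N[L+1]: pages linked from N[L]
        frontier = next_frontier
        L += 1

With d = 0.85 and epsilon = 0.01, the bound 1 − d^(L+1) reaches 0.99 around L ≈ 28, so the crawl covers roughly the 28-link neighborhood of the trusted pages.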

  18. Crawling Efficiency
  • For a given |DC|, pick DC such that RM(DC) is maximized
  • Q: Can we use L-Neighbor?
  • A: L-Neighbor is simple, but we need to further prioritize certain pages over others: page-level prioritization

  19. Page-Level Prioritization
  • Q: What page should we download first to maximize RankMass?
  • A: Pages with high PageRank
  • Q: How do we know which pages have high PageRank?
  • The idea: calculate a PageRank lower bound for undownloaded pages, and give high priority to pages with a high lower bound

  20. Calculating PageRank Lower Bound
  • PR(p): the probability that the random surfer is at p
  • Break down the surfer's path by "interrupts", i.e., jumps to a trusted page
  • Sum up, over all paths that start with an interrupt (a jump to a trusted page) and end at p, the probability of following that path
  • [Figure: one such path from trusted page pj through p1 … p5 to pi; its probability is (1 − d) tj times d/ci for each link followed, here (1 − d) tj (d·1/3)(d·1/4)(d·1/3)(d·1/5)(d·1/3)(d·1/3)]

  21. Calculating PageRank Lower Bound
  • Q: What if we sum up the probabilities of only a subset of the paths to p?
  • A: We get a "lower bound" on the PageRank of p
  • Basic idea:
    • Start with the set of trusted pages G
    • Enumerate paths to a page p as we discover links
    • Sum up the probability of each discovered path to p
  • Not every path is needed: only the ones that we have discovered so far

  22. RankMass Crawler: High Level
  • Dynamically update a lower bound on PageRank by enumerating paths to pages
  • Download the page with the highest lower bound
  • Sum of downloaded lower bounds = guaranteed RankMass coverage

  23. RankMass Crawler
  CRM = 0                              // CRM: crawled RankMass
  rmi = (1 − d) ti for each ti > 0     // rmi: RankMass (PageRank lower bound) of pi
  While (CRM < 1 − ε):
    Pick pi with the largest rmi
    Download pi if not downloaded yet
    CRM = CRM + rmi                    // we have downloaded pi
    For each pj linked to by pi:
      rmj = rmj + (d/ci) rmi           // update RankMass based on the discovered links from pi
    rmi = 0
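A minimal Python sketch of this loop, under the same illustrative assumptions as the L-Neighbor sketch (trust dictionary, stub download and extract_links helpers). Each round it credits the largest current lower bound and pushes the remaining mass one hop further.

    d, epsilon = 0.85, 0.01
    trust = {"http://p1.example/": 1.0}            # assumed input: page -> trust score

    def download(page): ...                        # hypothetical fetch (stub)
    def extract_links(page): return []             # hypothetical link extraction (stub)

    rm = {p: (1 - d) * t for p, t in trust.items() if t > 0}   # PageRank lower bounds
    crawled, CRM = set(), 0.0                      # CRM: RankMass known to be crawled

    while rm and CRM < 1 - epsilon:
        p = max(rm, key=rm.get)                    # page with the largest lower bound
        mass = rm.pop(p)                           # take its accumulated lower bound
        if p not in crawled:
            download(p)
            crawled.add(p)
        CRM += mass                                # this much RankMass is now covered
        links = extract_links(p)
        for q in links:                            # distribute d * mass over p's out-links
            rm[q] = rm.get(q, 0.0) + d * mass / len(links)

In practice the max() scan would be a priority queue, but the update rule is the one on the slide: add rmi to CRM, add (d/ci) rmi to each page that pi links to, then reset rmi to 0 (done here by popping p).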

  24. Experimental Setup
  • HTML files only
  • Algorithms simulated over a web graph crawled between Dec 2003 and Jan 2004
  • 141 million URLs spanning 6.9 million host names and 233 top-level domains

  25. Metrics of Evaluation
  • How much RankMass is actually collected during the crawl
  • How much RankMass is "known" to have been collected during the crawl

  26. L-Neighbor

  27. RankMass

  28. Algorithm Efficiency

  29. Summary
  • Web crawler and its challenges
  • Page selection problem
  • PageRank
  • RankMass guarantee
  • Computing PageRank lower bound
  • RankMass crawling algorithm
  • Any questions?
