
Antonio Gulli


Presentation Transcript


  1. Antonio Gulli

  2. AGENDA • Overview of Spidering Technology … hey, how can I get that page? • Overview of a Web Graph … tell me something about the arena • Google Overview • Reference: Mining the Web: Discovering Knowledge from Hypertext Data, Soumen Chakrabarti (Chap. 2), Morgan-Kaufmann Publishers, 352 pages, cloth/hard-bound, ISBN 1-55860-754-4

  3. Spidering • 24h a day, 7 days a week, "walking" over a graph, getting data • What about the graph? • Recall the sample of 150 sites (last lesson) • Directed graph G = (N, E) • N changes (insert, delete): ~ 4-6 × 10^9 nodes • E changes (insert, delete): ~ 10 links per node • Size: 10 × 4·10^9 = 4·10^10 non-zero entries in the adjacency matrix • EX: suppose a 64-bit hash per URL; how much space is needed to store the adjacency matrix?
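A back-of-envelope sketch of that exercise in Python (a minimal sketch; the 8-bytes-per-endpoint adjacency-list layout is an assumption, not the slide's official answer):

    # Sizing the Web graph (numbers from the slide; storage layout is an assumption).
    nodes = 4e9                      # ~4 * 10^9 pages
    links_per_node = 10
    edges = nodes * links_per_node   # 4 * 10^10 non-zero entries

    # Dense 0/1 adjacency matrix: one bit per (i, j) pair -- hopeless.
    dense_bytes = nodes * nodes / 8
    print(f"dense:  {dense_bytes / 1e18:.0f} EB")   # ~2 exabytes

    # Sparse adjacency list keyed by 64-bit URL hashes: 8 bytes per endpoint.
    sparse_bytes = edges * 2 * 8
    print(f"sparse: {sparse_bytes / 1e9:.0f} GB")   # ~640 GB

So the matrix is extremely sparse: storing only the non-zero entries shrinks the problem from exabytes to a few hundred gigabytes.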

  4. A Picture of the Web Graph [adjacency-matrix plot, indices i, j] • Q: sparse or not sparse? • 21 million pages, 150 million links [DelCorso, Gulli, Romani, WAW04]

  5. A Picture of the Web Graph [Broder, WWW9]

  6. A Picture of the Web Graph: Berkeley and Stanford • Q: what kind of sorting is this? [Haveliwala, WWW12]

  7. A Picture of the Web Graph [Raghavan, WWW9] READ IT!!!!!

  8. The Web's Characteristics • Size • Over a billion pages available • 5-10KB per page => tens of terabytes • Size doubles every 2 years • Change • 23% of pages change daily • Half-life of about 10 days • Bow-tie structure

  9. Search Engine Structure [architecture diagram] • Crawlers, driven by Crawl Control, feed the Page Repository • The Indexer and Collection Analysis modules build the Indexes (Text, Structure, Utility) • The Query Engine and Ranking module take Queries and return Results

  10. Bubble ???

  11. Crawler "life cycle"

  Link Extractor:
  while(<there are pages from which to extract links>){
    <take a page p from the page repository>
    <extract the links contained in <a href> tags>
    <extract the links contained in JavaScript>
    <extract …>
    <extract the links contained in framesets>
    <insert the extracted links into the priority queue, each with a priority
     depending on the chosen policy and: 1) subject to the applied filters
     2) after applying the normalization operations>
    <mark p as a page whose links have been extracted>
  }

  Downloaders:
  while(<there are URLs assigned by the crawler managers>){
    <extract the URLs from the assignment queue>
    <download the pages p_i associated with the URLs from the network>
    <send the p_i to the page repository>
  }

  Crawler Manager:
  <extract a bunch of URLs from the priority queue, in order>
  while(<there are URLs to assign>){
    <extract the URLs and assign them to S>
    foreach u ∈ S {
      if ( ( u ∉ "Already Seen Pages" ) ||
           ( u ∈ "Already Seen Pages" && <the page on the Web server is more recent> )
         ) && ( <u is a URL accepted by the site's robots.txt> ) {
        <resolve u via DNS>
        <send u to the downloaders' queue>
      }
    }
  }
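For concreteness, here is a minimal single-threaded Python sketch of the same loop (assumed simplifications: one process plays all three roles, links come from a naive href regex only, and the robots.txt, DNS-cache, and freshness checks from the pseudocode above are omitted):

    import heapq
    import re
    import urllib.request
    from urllib.parse import urljoin

    HREF = re.compile(r'href="([^"#]+)"', re.IGNORECASE)

    seen = set()                                  # "Already Seen Pages"
    frontier = [(0, "http://example.com/")]       # priority queue of (priority, URL)

    while frontier and len(seen) < 100:
        priority, url = heapq.heappop(frontier)   # Crawler Manager: pick next URL
        if url in seen:
            continue
        seen.add(url)
        try:                                      # Downloader: fetch the page
            with urllib.request.urlopen(url, timeout=5) as resp:
                page = resp.read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            continue                              # download failed; move on
        for link in HREF.findall(page):           # Link Extractor: normalize, enqueue
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                heapq.heappush(frontier, (priority + 1, absolute))

Using depth as the priority makes this a plain BFS; a real crawler would plug the policy-dependent priorities and filters in at the heappush.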

  12. Architecture of an Incremental Crawler [Gulli, 98] [diagram] • Software modules: Parallel Link Extractors, Parallel Crawler Managers, Parallel Downloaders, Parsers, Page Analysis, Indexers • Data structures: DNS Resolvers, DNS Cache, Priority Queue, Already Seen Pages, robots.txt Cache, Distributed Page Repository • The spiders face the Internet; the indexers consume the repository

  13. Crawling Issues • How to crawl? • Quality: "best" pages first • Efficiency: avoid duplication (or near duplication) • Etiquette: robots.txt, server load concerns (minimize load) • How much to crawl? How much to index? • Coverage: How big is the Web? How much do we cover? • Relative coverage: How much do competitors have? • How often to crawl? • Freshness: How much has changed? • How much has really changed? (why is this a different question?) • How to parallelize the process?

  14. Page selection • The crawler needs a method for choosing which page to download next • Given a page P, define how "good" that page is • Several metric types: • Interest driven • Popularity driven (PageRank, full vs. partial) • BFS, DFS, Random • Combined • Random Walk • Potential quality measures: • Final indegree • Final PageRank

  15. BFS • "…breadth-first search order discovers the highest quality pages during the early stages of the crawl" [Najork 01] • 328 million URLs in the testbed • Q: how is this related to the SCC, power laws, and the domain hierarchy in a Web graph? More on this when we get to PageRank

  16. Stanford WebBase (179K pages, 1998) [Cho98] [plots] • Percentage overlap with the best x% of pages by indegree, and by PageRank, when x% of pages have been crawled in order O(u)

  17. BFS & Spam (Worst case scenario) Start Page Start Page BFS depth = 2 Normal avg outdegree = 10 100 URLs on the queue including a spam page. Assume the spammer is able to generate dynamic pages with 1000 outlinks BFS depth = 3 2000 URLs on the queue 50% belong to the spammer BFS depth = 4 1.01 million URLs on the queue 99% belong to the spammer

  18. Can you trust words on the page? [screenshots] • auctions.hitsoffice.com/ (pornographic content) vs. www.ebay.com/ • Examples from July 2002

  19. A few spam technologies [cloaking diagram: "Is this a search engine spider?" Y → serve the SPAM page; N → serve the real doc] • Cloaking • Serve fake content to the search engine robot • DNS cloaking: switch IP address, impersonate • Example meta-keywords served to the robot: "… London hotels, hotel, holiday inn, hilton, discount, booking, reservation, sex, mp3, britney spears, viagra, …" • Doorway pages • Pages optimized for a single keyword that re-direct to the real target page • Keyword spam • Misleading meta-keywords, excessive repetition of a term, fake "anchor text" • Hidden text with colors, CSS tricks, etc. • Link spamming • Mutual admiration societies, hidden links, awards • Domain flooding: numerous domains that point or re-direct to a target page • Robots • Fake click stream • Fake query stream • Millions of submissions via Add-URL

  20. Parallel Crawlers • The Web is too big to be crawled by a single crawler; the work must be divided • Independent assignment • Each crawler starts with its own set of URLs • Follows links without consulting other crawlers • Reduces communication overhead • Some overlap is unavoidable

  21. Parallel Crawlers • Dynamic assignment • A central coordinator divides the Web into partitions • Crawlers crawl their assigned partition • Links to other URLs are handed to the central coordinator • Static assignment • The Web is partitioned and assigned to the crawlers up front • Each crawler only crawls its part of the Web

  22. URL-Seen Problem • Need to check whether a URL has already been parsed or downloaded • After 20 million pages, we have "seen" over 100 million URLs • Each URL is 50 to 75 bytes on average • Options: compress URLs in main memory, or use disk • Bloom filter (Archive) [we will discuss this later] • Disk access with caching (Mercator, Altavista)
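A minimal Bloom-filter sketch of the compress-in-memory option (the bit-array size, hash count, and salted-MD5 construction are assumptions for illustration, not the Archive's actual implementation):

    import hashlib

    class BloomFilter:
        """m bits, k salted hashes. May say 'seen' for a URL it never saw
        (false positive), but never says 'unseen' for a URL it did see."""
        def __init__(self, m_bits=1 << 30, k=4):   # 128 MB, ~10 bits/URL for 100M URLs
            self.m, self.k = m_bits, k
            self.bits = bytearray(m_bits // 8)

        def _positions(self, url):
            for i in range(self.k):
                digest = hashlib.md5(f"{i}:{url}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.m

        def add(self, url):
            for p in self._positions(url):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, url):
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(url))

    seen = BloomFilter()
    seen.add("http://example.com/")
    print("http://example.com/" in seen)   # True
    print("http://example.org/" in seen)   # False (with high probability)

At 50-75 bytes per URL, 100 million raw URLs need 5-7.5 GB of memory; the filter above spends roughly 10 bits per URL instead, at the price of about 1% false positives (a URL occasionally skipped as already seen).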

  23. Virtual Documents • P = "whitehouse.org", not yet reached • {P1…Pr} reached, and {P1…Pr} point to P • Insert the anchors' context into the index • "…George Bush, President of the U.S., lives at <a href=http://www.whitehouse.org>WhiteHouse</a>" • [figure: the Web page and its virtual document, indexed under terms such as "White House", "bush", "Washington"]

  24. Focused Crawling • Focused Crawler: selectively seeks out pages that are relevant to a pre-defined set of topics. • Topics specified by using exemplary documents (not keywords) • Crawl most relevant links • Ignore irrelevant parts. • Leads to significant savings in hardware and network resources.

  25. Focused Crawling • Pr[document is relevant | term t is present] • Pr[document is irrelevant | term t is present] • Pr[term t is present | document is relevant] • Pr[term t is present | document is irrelevant]
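The slide lists these four quantities without the step that connects them; Bayes' rule is that step (a standard identity, not something stated on the slide). From the exemplary documents the crawler estimates the last two, plus the prior Pr[relevant], and then scores a candidate by:

    Pr[relevant | t present] =
        Pr[t present | relevant] · Pr[relevant]
        / ( Pr[t present | relevant] · Pr[relevant]
            + Pr[t present | irrelevant] · Pr[irrelevant] )

This is what lets a focused crawler prioritize links whose anchor context contains terms that are much more likely in relevant documents than in irrelevant ones.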

  26. An example of a crawler: Polybot [Suel 02] • Crawl of 120 million pages over 19 days • 161 million HTTP requests • 16 million robots.txt requests • 138 million successful non-robots requests • 17 million HTTP errors (401, 403, 404, etc.) • 121 million pages retrieved • Slow during the day, fast at night • Peak about 300 pages/s over a T3 line • Many downtimes due to attacks, crashes, revisions • http://cis.poly.edu/polybot/

  27. Examples: Open Source • Nutch, also used by Overture • http://www.nutch.org • Heritrix, used by Archive.org • http://archive-crawler.sourceforge.net/index.html

  28. Where are we? • Spidering technologies • Web graph (a glimpse) • Now, some fun math on two crawling issues: • 1) Hashing for robust load balance • 2) Mirror detection

  29. Consistent Hashing • Items and servers get an ID (hash function of m bits) • Node identifiers are mapped onto a 2^m ring • Item k is assigned to the first server with ID ≥ k (wrapping around) • What if a downloader goes down? • What if a new downloader appears? • A mathematical tool for: • Spidering • Web caches • P2P • Router load balance • Distributed file systems
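A minimal Python sketch of the ring (the MD5-based IDs and the class API are assumptions for illustration):

    import bisect
    import hashlib

    def ring_id(key, m=32):
        """Map a key onto the 2^m ring with an m-bit hash."""
        digest = hashlib.md5(key.encode()).digest()
        return int.from_bytes(digest, "big") % (1 << m)

    class ConsistentHash:
        def __init__(self, servers):
            # Each server sits on the ring at ring_id(server); kept sorted.
            self.ring = sorted((ring_id(s), s) for s in servers)

        def lookup(self, item):
            """Item k goes to the first server with ID >= ring_id(k), wrapping."""
            i = bisect.bisect_left(self.ring, (ring_id(item),))
            return self.ring[i % len(self.ring)][1]

        def add(self, server):        # a new downloader appears
            bisect.insort(self.ring, (ring_id(server), server))

        def remove(self, server):     # a downloader goes down
            self.ring.remove((ring_id(server), server))

    ch = ConsistentHash(["downloader-1", "downloader-2", "downloader-3"])
    print(ch.lookup("http://example.com/page"))

The answers to the two questions on the slide fall out of the structure: when a downloader leaves (or joins), only the items between it and its predecessor on the ring change owner; every other assignment is untouched.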

  30. Duplicate/Near-Duplicate Detection • Duplication: Exact match with fingerprints • Near-Duplication: Approximate match • Overview • Compute syntactic similarity with an edit-distance measure • Use similarity threshold to detect near-duplicates • E.g., Similarity > 80% => Documents are “near duplicates” • Not transitive though sometimes used transitively

  31. Computing Near Similarity • Features: • Segments of a document (natural or artificial breakpoints) [Brin95] • Shingles (word N-grams) [Brin95, Brod98]: "a rose is a rose is a rose" => a_rose_is_a, rose_is_a_rose, is_a_rose_is • Similarity measure: • TFIDF [Shiv95] • Set intersection [Brod98] (specifically, Size_of_Intersection / Size_of_Union)

  32. Shingles + Set Intersection • Computing exact set intersection of shingles between all pairs of documents is expensive and infeasible • Approximate using a cleverly chosen subset of shingles from each (a sketch)

  33. Shingles + Set Intersection • Estimate size_of_intersection / size_of_union from a short sketch ([Brod97, Brod98]) • Create a "sketch vector" (e.g., of size 200) for each document • Documents that share more than t (say 80%) of corresponding vector elements are deemed near-duplicates • For doc D, sketch[i] is computed as follows: • Let f map all shingles in the universe to 0..2^m (e.g., f = fingerprinting) • Let pi be a specific random permutation on 0..2^m • Pick sketch[i] := MIN pi(f(s)) over all shingles s in D
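A runnable Python sketch of the whole pipeline, shingles included (one liberty to flag: XORing a random 32-bit salt into a CRC32 fingerprint stands in for the "specific random permutation" pi, a weaker construction than the min-wise independent families of [Brod98]):

    import random
    import zlib

    def shingles(text, n=4):
        """Word n-grams; n=4 matches the 'a rose is a rose' example above."""
        words = text.split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def sketch(text, num_perm=200, seed=42):
        """sketch[i] = MIN over shingles s of pi_i(f(s)), with f = CRC32."""
        rng = random.Random(seed)
        salts = [rng.getrandbits(32) for _ in range(num_perm)]
        fps = [zlib.crc32(s.encode()) for s in shingles(text)]
        return [min(fp ^ salt for fp in fps) for salt in salts]

    def similarity(s1, s2):
        """Fraction of matching positions estimates |A ∩ B| / |A ∪ B|."""
        return sum(a == b for a, b in zip(s1, s2)) / len(s1)

    d1 = "a rose is a rose is a rose"
    d2 = "a rose is a rose is a flower"
    print(similarity(sketch(d1), sketch(d2)))   # high, but below 1.0

With a threshold t = 80%, the pair above is flagged as near-duplicate or not depending on how many of the 200 sketch positions agree.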

  34. Computing Sketch[i] for Doc 1 [diagram] • Start with the document's 64-bit shingles on the number line 0..2^64 • Permute them with pi • Pick the min value

  35. Test if Doc1.Sketch[i] = Doc2.Sketch[i] [diagram: the minima A and B of the two permuted shingle sets on 0..2^64] • Are these equal? • Test for 200 random permutations: p1, p2, …, p200

  36. However… • A = B iff the shingle with the MIN value in the union of Doc1 and Doc2 is common to both (i.e., lies in the intersection) • This happens with probability: Size_of_intersection / Size_of_union

  37. Mirror Detection • Mirroring is the systematic replication of web pages across hosts • Single largest cause of duplication on the web • Host1/a and Host2/b are mirrors iff, for all (or most) paths p, whenever http://Host1/a/p exists, http://Host2/b/p exists as well with identical (or near-identical) content, and vice versa

  38. Mirror Detection example • http://www.elsevier.com/ and http://www.elsevier.nl/ • Structural Classification of Proteins • http://scop.mrc-lmb.cam.ac.uk/scop • http://scop.berkeley.edu/ • http://scop.wehi.edu.au/scop • http://pdb.weizmann.ac.il/scop • http://scop.protres.ru/

  39. Motivation • Why detect mirrors? • Smart crawling • Fetch from the fastest or freshest server • Avoid duplication • Better connectivity analysis • Combine inlinks • Avoid double counting outlinks • Redundancy in result listings • “If that fails you can try: <mirror>/samepath” • Proxy caching

  40. Bottom Up Mirror Detection[Cho00] • Maintain clusters of subgraphs • Initialize clusters of trivial subgraphs • Group near-duplicate single documents into a cluster • Subsequent passes • Merge clusters of the same cardinality and corresponding linkage • Avoid decreasing cluster cardinality • To detect mirrors we need: • Adequate path overlap • Contents of corresponding pages within a small time range

  41. Can we use URLs to find mirrors? [diagram: matching directory trees a/b/c/d on www.synthesis.org and synthesis.stanford.edu]
  www.synthesis.org/Docs/ProjAbs/synsys/synalysis.html
  www.synthesis.org/Docs/ProjAbs/synsys/visual-semi-quant.html
  www.synthesis.org/Docs/annual.report96.final.html
  www.synthesis.org/Docs/cicee-berlin-paper.html
  www.synthesis.org/Docs/myr5
  www.synthesis.org/Docs/myr5/cicee/bridge-gap.html
  www.synthesis.org/Docs/myr5/cs/cs-meta.html
  www.synthesis.org/Docs/myr5/mech/mech-intro-mechatron.html
  www.synthesis.org/Docs/myr5/mech/mech-take-home.html
  www.synthesis.org/Docs/myr5/synsys/experiential-learning.html
  www.synthesis.org/Docs/myr5/synsys/mm-mech-dissec.html
  www.synthesis.org/Docs/yr5ar
  www.synthesis.org/Docs/yr5ar/assess
  www.synthesis.org/Docs/yr5ar/cicee
  www.synthesis.org/Docs/yr5ar/cicee/bridge-gap.html
  www.synthesis.org/Docs/yr5ar/cicee/comp-integ-analysis.html
  synthesis.stanford.edu/Docs/ProjAbs/deliv/high-tech-…
  synthesis.stanford.edu/Docs/ProjAbs/mech/mech-enhanced…
  synthesis.stanford.edu/Docs/ProjAbs/mech/mech-intro-…
  synthesis.stanford.edu/Docs/ProjAbs/mech/mech-mm-case-…
  synthesis.stanford.edu/Docs/ProjAbs/synsys/quant-dev-new-…
  synthesis.stanford.edu/Docs/annual.report96.final.html
  synthesis.stanford.edu/Docs/annual.report96.final_fn.html
  synthesis.stanford.edu/Docs/myr5/assessment
  synthesis.stanford.edu/Docs/myr5/assessment/assessment-…
  synthesis.stanford.edu/Docs/myr5/assessment/mm-forum-kiosk-…
  synthesis.stanford.edu/Docs/myr5/assessment/neato-ucb.html
  synthesis.stanford.edu/Docs/myr5/assessment/not-available.html
  synthesis.stanford.edu/Docs/myr5/cicee
  synthesis.stanford.edu/Docs/myr5/cicee/bridge-gap.html
  synthesis.stanford.edu/Docs/myr5/cicee/cicee-main.html
  synthesis.stanford.edu/Docs/myr5/cicee/comp-integ-analysis.html

  42. Where are we? • Spidering • Web graph • Some nice mathematical tools • Many other fun algorithms for crawling issues… • Now, a glimpse of Google (thanks to Junghoo Cho, "3rd founder"…)

  43. Google: Scale • Number of pages indexed: 3B in November 2002 • Index refresh interval: Once per month ~ 1200 pages/sec • Number of queries per day: 200M in April 2003 ~ 2000 queries/sec • Runs on commodity Intel-Linux boxes [Cho, 02]

  44. Google: Other Statistics • Average page size: 10KB • Average query size: 40B • Average result size: 5KB • Average number of links per page: 10 • Total raw HTML data size: 3B pages × 10KB = 30 TB! • Inverted index roughly the same size as the raw corpus: another 30 TB for the index itself • With appropriate compression (3:1): ~20 TB of data residing on disk (and in memory!!!)

  45. Google: Data Size and Crawling • Efficient crawl is very important • At 1 page/sec → 1200 machines just for crawling • Parallelization through threads/event queues is necessary • Complex crawling algorithms: no, no! • A well-optimized crawler: • ~100 pages/sec (10 ms/page) • → ~12 machines for crawling • Bandwidth consumption: • 1200 × 10KB × 8 bits ~ 100Mbps • One dedicated OC3 line (155Mbps) for crawling ~ $400,000 per year
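The slide's numbers check out with simple arithmetic; a back-of-envelope sketch:

    pages = 3e9                      # pages indexed
    month = 30 * 24 * 3600           # refresh interval, in seconds
    rate = pages / month             # ~1160 pages/sec to refresh monthly

    print(rate / 1)                  # ~1200 machines at 1 page/sec
    print(rate / 100)                # ~12 machines at 100 pages/sec
    print(rate * 10e3 * 8 / 1e6)     # ~93 Mbps of crawl bandwidth (10KB pages)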

  46. Google: Data Size, Query Processing • Index size: 10TB → 100 disks • Typically fewer than 5 disks per machine • Potentially a 20-machine cluster to answer a query • If one machine goes down, the cluster goes down • A two-tier index structure can be helpful: • Tier 1: popular (high PageRank) page index • Tier 2: less popular page index • Most queries can be answered by the tier-1 cluster (with fewer machines)

  47. Google: Implication of Query Load • 2000 queries/sec • Rule of thumb: 1 query/sec per CPU • Depends on number of disks, memory size, etc. • → ~2000 machines just to answer queries • 5KB per answer page • 2000 × 5KB × 8 bits ~ 80 Mbps • Half a dedicated OC3 line (155Mbps) ~ $300,000

  48. Google: Hardware • 50,000-machine Intel-Linux cluster • Assuming 99.9% uptime (8 hours of downtime per machine per year): • 50 machines are always down • A nightmare for system administrators • Assuming a 3-year hardware replacement cycle: • Set up, replace, and dump ~50 machines every day • Heterogeneity is unavoidable

  49. Shingles computation (for Web Clustering) • s1 = a_rose_is_a_rose_in_the = w1 w2 w3 w4 w5 w6 w7 • s2 = rose_is_a_rose_in_the_garden = w1 w2 w3 w4 w5 w6 w7 • 0/1 representation (using ASCII codes) • Assumption: a word is ~8 bytes → length(si) = 7 × 8 bytes = 448 bits • This bit string represents S, a polynomial with coefficients in 0/1 (Z2) • Rabin fingerprint • Maps the 448 bits into a K = 40-bit space, with low collision probability • Generate an irreducible polynomial P of degree K-1 (see Galois fields over Z2) • F(S) = S mod P
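A bit-serial Python sketch of the reduction (the degree-8 AES polynomial x^8 + x^4 + x^3 + x + 1 stands in for the degree-40 irreducible polynomial, purely to keep the demo small):

    def rabin_fingerprint(data, poly, degree):
        """Reduce `data`, read as a polynomial over Z2, modulo the irreducible
        polynomial `poly` (given with its leading x^degree bit set)."""
        fp = 0
        for byte in data:
            for i in range(7, -1, -1):
                fp = (fp << 1) | ((byte >> i) & 1)   # shift in the next bit
                if fp >> degree:                     # overflow: subtract poly
                    fp ^= poly
        return fp

    # 0x11B = x^8 + x^4 + x^3 + x + 1, irreducible over Z2 (the AES polynomial).
    s1 = b"a rose is a rose in the"
    s2 = b"rose is a rose in the garden"
    print(hex(rabin_fingerprint(s1, 0x11B, 8)))
    print(hex(rabin_fingerprint(s2, 0x11B, 8)))

With a random irreducible P of degree K, the collision probability for two distinct strings is small (on the order of the string length divided by 2^K), which is why the slide can afford to shrink 448 bits down to 40.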
