
Crash Course



Presentation Transcript


  1. Crash Course: Web Graph, Spidering, Indexing, Ranking. Antonio Gulli, University of Pisa

  2. A technology with a large audience • Web search is used by more than 400M people/day • There are more than 8 billion pages • The Google IPO is valued at an estimated $10–15 billion • A LOT of Computer Science • Information Retrieval • Algorithms & Data Structures • Numerical Analysis • Parallel & Distributed Computation • ….. and many others

  3. AGENDA • Web Graph • Google Overview • Overview of Spidering Technology … hey, how can I get that page? • Overview of Indexing Technology … hey, how can I remember that page? • Overview of Ranking Technology … hey, how can I order those pages?

  4. A Picture of the Web Graph [BRODER, www9]

  5. A Picture of the Web Graph [Raghavan, www9]

  6. A Picture of the Web Graph (Berkeley, Stanford) [Haveliwala, www12]

  7. A Picture of the Web Graph (to be replaced with the final version) [Del Corso, Gulli, Romani … work in progress]

  8. The Web’s Characteristics • Size • Over a billion pages available • 5-10K per page => tens of terabytes • Size doubles every 2 years • Change • 23% of pages change daily • Half-life of about 10 days • A Poisson model for changes • Bowtie structure
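The Poisson change model mentioned above can be sketched in a few lines: under a Poisson process with change rate λ, the probability that a page has changed within t days is 1 − e^(−λt), and a half-life of about 10 days corresponds to λ = ln(2)/10 per day. A minimal sketch (the rate value is illustrative):

```python
import math

def prob_changed(lam, t):
    """Probability a page has changed within t days, Poisson model with rate lam."""
    return 1.0 - math.exp(-lam * t)

# A half-life of about 10 days corresponds to lam = ln(2) / 10 per day.
lam = math.log(2) / 10
```

With this rate, a crawler revisiting after 10 days expects a 50% chance the page has changed.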

  9. Search Engine Structure [architecture diagram: Crawlers, driven by a Crawl Control module, fill the Page Repository; the Indexer and Collection Analysis modules build the Text, Structure, and Utility indexes; the Query Engine and Ranking modules turn Queries into Results]

  10. Google: Scale • Number of pages indexed: 3B in November 2002 • Index refresh interval: Once per month ~ 1200 pages/sec • Number of queries per day: 200M in April 2003 ~ 2000 queries/sec • Runs on commodity Intel-Linux boxes [Cho, 02]

  11. Google:Other Statistics • Average page size: 10KB • Average query size: 40B • Average result size: 5KB • Average number of links per page: 10 • Total raw HTML data size 3G x 10KB = 30 TB! • Inverted index roughly the same size as raw corpus: 30 TB for index itself • With appropriate compression, 3:1 • 20 TB data residing in disk (and memory!!!)

  12. Google: Data Size and Crawling • Efficient crawl is very important • 1 page/sec → 1200 machines just for crawling • Parallelization through thread/event queue necessary • Complex crawling algorithm -- No, No! • Well-optimized crawler • ~ 100 pages/sec (10 ms/page) • ~ 12 machines for crawling • Bandwidth consumption • 1200 x 10KB x 8bit ~ 100Mbps • One dedicated OC3 line (155Mbps) for crawling ~ $400,000 per year

  13. Google: Data Size, Query Processing • Index size: 10TB → 100 disks • Typically less than 5 disks per machine • Potentially a 20-machine cluster to answer a query • If one machine goes down, the cluster goes down • A two-tier index structure can be helpful • Tier 1: Popular (high PageRank) page index • Tier 2: Less popular page index • Most queries can be answered by the tier-1 cluster (with fewer machines)

  14. Google: Implication of Query Load • 2000 queries / sec • Rule of thumb: 1 query / sec per CPU • Depends on number of disks, memory size, etc. • ~ 2000 machines just to answer queries • 5KB / answer page • 2000 x 5KB x 8bit ~ 80 Mbps • Half dedicated OC3 line (155Mbps) ~ $300,000

  15. Google: Query Load and Replication • Index replication necessary to handle the query load • Assuming 1TB tier-1 index, 100Mbit/sec transfer rate • 8 bits x 1TB / 100Mbps = 80,000 sec • About one day to refresh to a new index • Of course, need to verify the transferred data before using it…

  16. Google: Hardware • 50,000 Intel-Linux cluster • Assuming 99.9% uptime (8 hour downtime per year) • 50 machines are always down • Nightmare for system administrators • Assuming 3-year hardware replacement • Set up, replace and dump 50 machines every day • Heterogeneity is unavoidable

  17. ROADMAP • What we have seen so far • Web Graph • Search Engine Architecture • Google Overview • Next ? • We will focus on SPIDERING

  18. Crawling web pages • What pages to download • When to refresh • Minimize load on web sites • How to parallelize the process

  19. Bubble ???

  20. Crawler “cycle of life”
Downloaders:
while (<there are URLs assigned by the crawler managers>) {
  <extract the URLs from the assignment queue>
  <download the pages pi associated with the URLs from the network>
  <send the pi to the page repository>
}
Link Extractor:
while (<there are pages from which links must be extracted>) {
  <take a page p from the page repository>
  <extract the links contained in the a href tags>
  <extract the links contained in JavaScript>
  <extract …..>
  <extract the links contained in framesets>
  <insert the extracted links into the priority queue, each with a priority depending on the chosen policy and: 1) subject to the applied filters 2) after applying the normalization operations>
  <mark p as a page whose links have been extracted>
}
Crawler Manager:
<extract a bunch of URLs from the “priority queue”, in order>
while (<there are URLs to assign>) {
  <extract the URLs and assign them to S>
  foreach u ∈ S {
    if ( ( (u ∉ “Already Seen Pages”) ||
           ( (u ∈ “Already Seen Pages”) && (<the page on the Web server is more recent>) ) )
         && (<u is a URL accepted by the site’s robots.txt>) ) {
      <resolve u via DNS>
      <send u to the downloaders’ queue>
    }
  }
}
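The three loops above can be collapsed into a single-threaded toy sketch (the real system is parallel and distributed; `fetch`, `extract_links`, and `priority` are stand-ins for the downloader, the link extractor, and the chosen priority policy):

```python
import heapq

def crawl(seed_urls, fetch, extract_links, priority, max_pages=100):
    """Toy sketch of the downloader / link-extractor / manager cycle:
    pop the highest-priority URL, fetch it, extract links, re-prioritize."""
    frontier = [(-priority(u), u) for u in seed_urls]  # max-priority via negation
    heapq.heapify(frontier)
    seen = set(seed_urls)          # the "Already Seen Pages" structure
    repository = {}                # stands in for the page repository
    while frontier and len(repository) < max_pages:
        _, url = heapq.heappop(frontier)
        page = fetch(url)                     # downloader
        repository[url] = page
        for link in extract_links(page):      # link extractor
            if link not in seen:              # URL-seen test
                seen.add(link)
                heapq.heappush(frontier, (-priority(link), link))
    return repository
```

On a toy web graph where `fetch` returns the list of outlinks, the crawl visits every reachable page exactly once.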

  21. Architecture of an Incremental Crawler [Gulli, 98] [architecture diagram: software modules (Parallel Crawler Managers, Parallel Downloaders, Parallel Link Extractors, Parsers) and data structures (DNS Resolvers with DNS Cache, Robots.txt Cache, Priority Queue, Already Seen Pages, Distributed Page Repository) connect the SPIDERS to the INTERNET and to the INDEXERS (Indexer, Page Analysis)]

  22. Page selection • The crawler’s method for choosing which page to download next • Given a page P, define how “good” that page is • Several metric types: • Interest driven • Popularity driven • BFS, DFS, Random • Combined

  23. Interest Driven • Define a driving query Q • Find the textual similarity between P and Q • Define a word vocabulary W1…Wn • Define a vector for P and Q: • Vp, Vq = <W1,…,Wn> • Wi = 0 if Wi does not appear in the document • Wi = inverse document frequency otherwise • IDF(Wi) = 1 / number of appearances in the entire collection • Importance: IS(P) = Vp · Vq (cosine similarity) • Finding IDF requires going over the entire web • Estimate IDF from the pages already visited, to calculate IS’
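The IDF weighting and cosine similarity just defined can be sketched as follows, with pages represented simply as sets of words (a simplification; the IDF estimate would come from the pages crawled so far, as the slide notes):

```python
import math

def idf(term, collection):
    # IDF as defined on the slide: 1 / number of documents containing the term
    df = sum(1 for doc in collection if term in doc)
    return 1.0 / df if df else 0.0

def similarity(p, q, collection):
    """Cosine similarity between the IDF-weighted vectors of pages p and q."""
    vocab = sorted(set(p) | set(q))
    vp = [idf(w, collection) if w in p else 0.0 for w in vocab]
    vq = [idf(w, collection) if w in q else 0.0 for w in vocab]
    dot = sum(a * b for a, b in zip(vp, vq))
    norm_p = math.sqrt(sum(a * a for a in vp))
    norm_q = math.sqrt(sum(b * b for b in vq))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0
```

A page identical to the driving query scores 1; a page sharing no terms scores 0.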

  24. Popularity Driven • How popular a page is: • Backlink count • IB(P) – the number of pages containing a link to P • Estimate from previous crawls: IB’(P) • A more sophisticated metric: PageRank
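The PageRank metric mentioned above can be sketched as a plain power iteration (a textbook version with damping factor 0.85, not Google's production implementation):

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration sketch of PageRank; links maps each page to its outlinks."""
    n = len(links)
    rank = {p: 1.0 / n for p in links}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in links}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:          # p passes rank to the pages it links to
                    new[q] += share
            else:                       # dangling page: spread its rank uniformly
                for q in new:
                    new[q] += d * rank[p] / n
        rank = new
    return rank
```

On a symmetric two-page cycle both pages converge to rank 0.5, and ranks always sum to 1.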

  25. BFS • “…breadth-first search order discovers the highest quality pages during the early stages of the crawl” • 328 million URLs in the testbed [Najork 01]

  26. WebBase Results [Cho 01]

  27. Refresh Strategy • Crawlers can refresh only a certain amount of pages in a period of time. • The page download resource can be allocated in many ways • The proportional refresh policy allocates the resource proportionally to the pages’ change rates.
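The proportional policy described above can be sketched in one function: split a fixed download budget among pages in proportion to their estimated change rates (the rates themselves would come from a change model such as the Poisson one):

```python
def proportional_allocation(change_rates, budget):
    """Allocate a refresh budget (downloads per period) proportionally
    to each page's estimated change rate."""
    total = sum(change_rates.values())
    return {page: budget * rate / total for page, rate in change_rates.items()}
```

A page that changes three times as often as another receives three times the refreshes.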

  28. Focused Crawling • Focused Crawler: selectively seeks out pages that are relevant to a pre-defined set of topics. • Topics specified by using exemplary documents (not keywords) • Crawl most relevant links • Ignore irrelevant parts. • Leads to significant savings in hardware and network resources.

  29. Focused Crawling • Bayes’ theorem estimates the conditional probability that event Hi occurs given event E: • Pr[document is relevant | term t is present] • Pr[document is irrelevant | term t is present] • Pr[term t is present | the document is relevant] • Pr[term t is present | the document is irrelevant]
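A naive-Bayes relevance score built from these conditional probabilities can be sketched as follows (the per-term probability tables would be estimated from the exemplary documents; the smoothing constant `1e-6` is an arbitrary choice for unseen terms):

```python
import math

def relevance_log_odds(terms, p_term_rel, p_term_irr, prior_rel=0.5):
    """Naive-Bayes sketch: log odds that a document is relevant given its
    terms, from Pr[term | relevant] and Pr[term | irrelevant] tables."""
    log_odds = math.log(prior_rel / (1.0 - prior_rel))
    for t in terms:
        # Each term shifts the odds toward relevance or irrelevance.
        log_odds += math.log(p_term_rel.get(t, 1e-6) / p_term_irr.get(t, 1e-6))
    return log_odds
```

A focused crawler would follow links from pages whose log odds are positive and prune the rest.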

  30. Parallel Crawlers • Web is too big to be crawled by a single crawler, work should be divided • Independent assignment • Each crawler starts with its own set of URLs • Follows links without consulting other crawlers • Reduces communication overhead • Some overlap is unavoidable

  31. Parallel Crawlers • Dynamic assignment • Central coordinator divides web into partitions • Crawlers crawl their assigned partition • Links to other URLs are given to Central coordinator • Static assignment • Web is partitioned and divided to each crawler • Crawler only crawls its part of the web

  32. URL-Seen Problem • Need to check if file has been parsed or downloaded before - after 20 million pages, we have “seen” over 100 million URLs - each URL is 50 to 75 bytes on average • Options: compress URLs in main memory, or use disk - Bloom Filter (Archive) - disk access with caching (Mercator, Altavista)
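The Bloom-filter option for the URL-seen test can be sketched as below: a fixed bit array plus a few hash functions gives no false negatives and a small false-positive rate, at a fraction of the memory of storing 50-75-byte URLs (the sizes and MD5-based hashing are illustrative choices, not what the Archive crawler actually used):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for the URL-seen test."""
    def __init__(self, num_bits=1 << 20, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, url):
        # Derive num_hashes bit positions from salted MD5 digests of the URL.
        for i in range(self.num_hashes):
            digest = hashlib.md5((str(i) + url).encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, url):
        for pos in self._positions(url):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, url):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(url))
```

At 100 million URLs, a suitably sized filter needs on the order of hundreds of megabits rather than gigabytes of raw URL strings.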

  33. An example of crawler: Polybot • crawl of 120 million pages over 19 days • 161 million HTTP requests • 16 million robots.txt requests • 138 million successful non-robots requests • 17 million HTTP errors (401, 403, 404 etc) • 121 million pages retrieved • slow during day, fast at night • peak about 300 pages/s over T3 • many downtimes due to attacks, crashes, revisions • http://cis.poly.edu/polybot/ [Suel 02]

  34. Web Page and Virtual Document: indexing what has not been crawled • Suppose we have not reached the page P, “whitehouse.org”, but we have already reached and indexed a set of pages {P1….Pr} that point to P • Suppose that, from the link with which each Pi, 1<i<r, points to P, we extract a window of text. • …George Bush, President of U.S. lives at <a href=http://www.whitehouse.org> WhiteHouse</a> • … George Washington was at <a href=http://whitehouse.org> WhiteHouse</a> • Example: “Madonna” on Google

  35. Examples: Open Source • Nutch, also used by Overture • http://www.nutch.org • Heritrix, used by Archive.org • http://archive-crawler.sourceforge.net/index.html

  36. What we have seen so far • Web Graph • Search Engine Architecture • Google Overview • Spidering • Next ? • We will focus on INDEX DATA STRUCTURE

  37. The Indexer Module creates two indexes: • Text (content) index: uses “traditional” indexing methods like inverted indexing. • Structure (links) index: uses a directed graph of pages and links. Sometimes also creates an inverted graph.

  38. Text Inverted Index • A set of inverted lists, one per index term (word). • Inverted list of a term: a sorted list of locations in which the term appeared. • Posting: a pair (w,l) where w is a word and l is one of its locations. • Lexicon: holds all the index’s terms, with statistics about each term (not the postings)
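The inverted lists, postings, and lexicon just defined can be sketched in a few lines (here a posting carries the list of word positions within each document, and the lexicon stores document frequency as its per-term statistic):

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Build inverted lists of postings (doc_id, positions) per term,
    plus a lexicon mapping each term to its document frequency."""
    index = defaultdict(list)
    for doc_id, text in docs.items():
        positions = defaultdict(list)
        for pos, word in enumerate(text.split()):
            positions[word].append(pos)
        for word, locs in positions.items():
            index[word].append((doc_id, locs))  # one posting per (term, doc)
    lexicon = {word: len(postings) for word, postings in index.items()}
    return dict(index), lexicon
```

Looking up a term then costs one lexicon probe plus a scan of its (sorted) postings list.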

  39. Text Inverted Index • Lexicon and postings-list example (TF may be normalized by document size):
      Word     | IDF    | Document | TF
      Stanford | 1/3530 | D1       | 2
      UCLA     | 1/9860 | D14      | 30
      MIT      | 1/9378 | D376     | …
• Google sorts more than 100 B terms in its index.

  40. Google 98: Text Inverted Index • Lexicon: fits in memory; two different forms • Hit list: accounts for most of the space; uses 2 bytes per hit to save space • Forward index: barrels are sorted by wordID; inside a barrel, sorted by docID • Inverted index: same content as the forward index, but sorted by wordID; the doc list is sorted by docID

  41. Google 98: Text Inverted Index • Each docID is associated with a list of hits - these describe the verbal information in a page. • “Hand” optimized compact encoding • Plain Hits • word occurrences in the main page • relative font size, position(12 bits), capitalization • Fancy Hits • URL, title, anchor, META-tag • denoted by font size setting • plain’s 12 position bits used differently • 4 bits for the type of fancy hits

  42. Google 98: Text Inverted Index • Minimise disk seek bottlenecks • Repository • contains full HTML for each crawled page • time favoured over space for the compression algorithm • Document Index • holds document ids for all crawled and uncrawled URLs • feeds uncrawled URLS to the URL Server • batch conversion of URLS into DOCIDs to minimize disk seeks

  43. Text Index Partitioning A distributed text index can be built by: • Local inverted file (IFL) • Each node contains disjoint random pages. • The query is broadcast. • The result is the join of the per-node answers. • Global inverted file (IFG) • Each node is responsible only for a subset of the terms in the collection. • The query is sent only to the appropriate nodes BETTER??
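The routing difference between the two partitioning schemes can be sketched as follows (hash-based term assignment is one illustrative choice for the global scheme; real systems may assign terms by range or load):

```python
def route_query_local(terms, num_nodes):
    """Local (document-partitioned) index: every node holds some documents,
    so a query must be broadcast to all nodes."""
    return list(range(num_nodes))

def route_query_global(terms, num_nodes):
    """Global (term-partitioned) index: each term lives on one node,
    so a query touches only the nodes owning its terms."""
    return sorted({hash(t) % num_nodes for t in terms})
```

A two-term query hits every node under the local scheme but at most two nodes under the global one, which is the trade-off behind the slide's "BETTER??".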

  44. Link Index (Web Graph) [diagram: a table indexed by page id (0, 1, 2, 3, … n) storing, for each page, its #outbound links, #inbound links, and the list of inbound pages]
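The per-page link structure on this slide can be sketched as a pair of adjacency lists: the forward (outbound) graph and its inversion, the inbound graph, from which the #outbound and #inbound counts fall out as list lengths:

```python
from collections import defaultdict

def build_link_index(edges):
    """Link index sketch: per-page outbound adjacency lists plus the
    inverted (inbound) graph."""
    out_links = defaultdict(list)
    in_links = defaultdict(list)
    for src, dst in edges:
        out_links[src].append(dst)
        in_links[dst].append(src)
    return dict(out_links), dict(in_links)
```

The inbound lists are exactly what backlink-based metrics such as IB(P) and PageRank consume.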

  45. Challenges • Index build must be: • Fast • Economical (unlike traditional index building) • Incremental indexing must be supported • Storage: compression vs. speed

  46. Indexing, Conclusion • Web page indexing is complicated due to its scale (millions of pages, hundreds of gigabytes). • Challenges: incremental indexing and personalization.

  47. ROADMAP • What we have seen so far • Web Graph • Search Engine Architecture • Google Overview • Spidering • Index Data Structure • Next ? • We will focus on Ranking & Social Networks

  48. Traditional Ranking Faults • Many pages containing a term may be of poor quality or not relevant. • TFIDF (term frequency inverse document frequency) vectors and cosine similarity • Insufficient self-description vs. spamming. • No use of link analysis.

  49. Traditional Ranking Faults • TF (Term frequency): number of times that a word occurs in a document • IDF (Inverse document frequency): inverse of the number of documents containing the word

  50. Vector Space Model • Documents are feature vectors • Query: a set of weighted features • Similarity metric, e.g., using the tf*idf formula • The search engine ranks results by descending relevance
