The Anatomy of a Large-Scale Hypertextual Web Search Engine

Sergey Brin, Lawrence Page

Presented By: Paolo Lim

April 10, 2007

CS 331 - Data Mining

AKA: The Original Google Paper

Larry Page and Sergey Brin

Presentation Outline
  • Design goals of Google search engine
  • Link Analysis and other features
  • System architecture and major structures
  • Crawling, indexing, and searching the web
  • Performance and results
  • Conclusions
  • Final exam questions

Linear Algebra Background
  • PageRank involves knowledge of:
    • Matrix addition/multiplication
    • Eigenvectors and Eigenvalues
    • Power iteration
    • Dot product
  • Not discussed in detail in presentation
  • For reference:
    • http://cs.wellesley.edu/~cs249B/math/Linear%20Algebra/CS298LinAlgpart1.pdf
    • http://www.cse.buffalo.edu/~hungngo/classes/2005/Expanders/notes/LA-intro.pdf

Google Design Goals
  • Scaling with the web’s growth
  • Improved search quality
    • Number of documents increasing rapidly, but user’s ability to look at documents lags
    • Lots of “junk” results, little relevance
  • Academic search engine research
    • Development and understanding in academic realm
    • System that a reasonable number of people can actually use
    • Support novel research activities on large-scale web data by other researchers and students

Link Analysis Basics
  • PageRank Algorithm
    • A Top 10 IEEE ICDM data mining algorithm
    • Forms much of the basis of Google’s ranking system (discussed later)
    • Tries to incorporate ideas from academic community (publishing and citations)
  • Anchor Text Analysis
    • <a href=http://www.com> ANCHOR TEXT </a>
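Anchor text is easy to pull out of raw HTML. Below is a minimal sketch of such an extractor in Python; the regex and the sample page are illustrative only (Google's actual parser was a flex-generated lexer):

```python
import re

# Illustrative anchor-text extractor; not Google's actual parser.
ANCHOR_RE = re.compile(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
                       re.IGNORECASE | re.DOTALL)

def extract_anchors(html):
    """Return (target_url, anchor_text) pairs found in an HTML string."""
    return [(url, text.strip()) for url, text in ANCHOR_RE.findall(html)]

page = '<a href="http://www.example.com">ANCHOR TEXT</a>'
print(extract_anchors(page))  # [('http://www.example.com', 'ANCHOR TEXT')]
```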

Intuition: Why Links, Anyway?
  • Links represent citations
  • Quantity of links to a website makes the website more popular
  • Quality of links to a website also helps in computing rank
  • Link structure was largely unused before Larry Page proposed exploiting it to his thesis advisor

Naïve PageRank
  • Each link’s vote is proportional to the importance of its source page
  • If a page P with importance I has N outlinks, then each link carries I/N votes
  • Simple recursive formulation:
    • PR(A) = PR(p1)/C(p1) + … + PR(pn)/C(pn), where p1, …, pn are the pages linking to A
    • PR(X) = PageRank of page X
    • C(X) = number of links going out of page X


Naïve PageRank Model (from http://www.stanford.edu/class/cs345a/lectureslides/PageRank.pdf)

“The web in 1839”: three pages, Yahoo (y), Amazon (a), and M’soft (m). Yahoo links to itself and to Amazon; Amazon links to Yahoo and to M’soft; M’soft links back to Amazon. Each page splits its vote evenly among its outlinks, giving the flow equations:

  y = y/2 + a/2
  a = y/2 + m
  m = a/2

Solving the flow equations
  • 3 equations, 3 unknowns, no constants
    • No unique solution
    • All solutions equivalent modulo scale factor
  • Additional constraint forces uniqueness
    • y+a+m = 1
    • y = 2/5, a = 2/5, m = 1/5
  • Gaussian elimination method works for small examples, but we need a better method for large graphs
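As a quick check of the toy example, here is a sketch assuming numpy: drop the dependent third equation, substitute the normalization constraint y + a + m = 1, and let np.linalg.solve perform the Gaussian elimination:

```python
import numpy as np

# Flow equations rearranged into "Ax = b" form, with the dependent
# equation m = a/2 replaced by the constraint y + a + m = 1.
A = np.array([
    [ 0.5, -0.5,  0.0],   # y = y/2 + a/2  ->   y/2 - a/2     = 0
    [-0.5,  1.0, -1.0],   # a = y/2 + m    ->  -y/2 + a - m   = 0
    [ 1.0,  1.0,  1.0],   # normalization:      y + a + m     = 1
])
b = np.array([0.0, 0.0, 1.0])

y, a, m = np.linalg.solve(A, b)
print(y, a, m)  # 0.4 0.4 0.2, i.e. y = 2/5, a = 2/5, m = 1/5
```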

Matrix formulation
  • Matrix M has one row and one column for each web page
  • Suppose page j has n outlinks
    • If j → i (page j links to page i), then Mij = 1/n
    • Otherwise Mij = 0
  • M is a column stochastic matrix
    • Columns sum to 1
  • Suppose r is a vector with one entry per web page
    • ri is the importance score of page i
    • Call it the rank vector
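As a concrete sketch (assuming numpy; the page names and outlink lists are the toy web from these slides), M can be built directly from an adjacency list:

```python
import numpy as np

pages = ["yahoo", "amazon", "msoft"]
outlinks = {                       # the three-page toy web
    "yahoo":  ["yahoo", "amazon"],
    "amazon": ["yahoo", "msoft"],
    "msoft":  ["amazon"],
}

N = len(pages)
idx = {p: k for k, p in enumerate(pages)}
M = np.zeros((N, N))
for j, targets in outlinks.items():          # column j = source page
    for i in targets:                        # row i = target page
        M[idx[i], idx[j]] = 1.0 / len(targets)

print(M)
print(M.sum(axis=0))   # [1. 1. 1.], i.e. column stochastic
```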


Example (from http://www.stanford.edu/class/cs345a/lectureslides/PageRank.pdf)

Suppose page j links to 3 pages, one of which is page i. Then column j of M has 1/3 in each of the three rows that j links to (row i among them) and 0 elsewhere, so in r = Mr page i receives one third of page j’s rank.

Eigenvector formulation
  • The flow equations can be written

r = Mr

  • So the rank vector is an eigenvector of the stochastic web matrix
    • In fact, it is the first or principal eigenvector, with corresponding eigenvalue 1


Example (from http://www.stanford.edu/class/cs345a/lectureslides/PageRank.pdf)

For the three-page web (y = Yahoo, a = Amazon, m = M’soft), r = Mr reads:

           y    a    m
    y  [ 1/2  1/2   0  ]        y = y/2 + a/2
    a  [ 1/2   0    1  ]        a = y/2 + m
    m  [  0   1/2   0  ]        m = a/2

Power Iteration
  • Simple iterative scheme (aka relaxation)
  • Suppose there are N web pages
  • Initialize: r0 = [1, …, 1]^T
  • Iterate: rk+1 = M·rk
  • Stop when |rk+1 − rk|1 < ε
    • |x|1 = Σ1≤i≤N |xi| is the L1 norm
    • Can use any other vector norm, e.g., Euclidean
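A minimal power-iteration sketch for the toy matrix, assuming numpy (the tolerance eps is an arbitrary choice):

```python
import numpy as np

M = np.array([
    [0.5, 0.5, 0.0],   # y
    [0.5, 0.0, 1.0],   # a
    [0.0, 0.5, 0.0],   # m
])

r = np.ones(3)                     # r0 = [1, 1, 1]^T
eps = 1e-9
delta = np.inf
while delta >= eps:
    r_next = M @ r                 # r_{k+1} = M r_k
    delta = np.abs(r_next - r).sum()   # L1 norm of the change
    r = r_next

print(r)   # converges to [6/5, 6/5, 3/5] = [1.2, 1.2, 0.6]
```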


Power Iteration Example (from http://www.stanford.edu/class/cs345a/lectureslides/PageRank.pdf)

With the same matrix M:

           y    a    m
    y  [ 1/2  1/2   0  ]
    a  [ 1/2   0    1  ]
    m  [  0   1/2   0  ]

the iterates converge:

  [y, a, m] = [1, 1, 1] → [1, 3/2, 1/2] → [5/4, 1, 3/4] → [9/8, 11/8, 1/2] → … → [6/5, 6/5, 3/5]

Random Surfer
  • Imagine a random web surfer
    • At any time t, surfer is on some page P
    • At time t+1, the surfer follows an outlink from P uniformly at random
    • Ends up on some page Q linked from P
    • Process repeats indefinitely
  • Let p(t) be a vector whose ith component is the probability that the surfer is at page i at time t
    • p(t) is a probability distribution on pages

The stationary distribution
  • Where is the surfer at time t+1?
    • Follows a link uniformly at random
    • p(t+1) = Mp(t)
  • Suppose the random walk reaches a state such that p(t+1) = Mp(t) = p(t)
    • Then p(t) is called a stationary distribution for the random walk
  • Our rank vector r satisfies r = Mr
    • So it is a stationary distribution for the random surfer

Spider traps
  • A group of pages is a spider trap if there are no links from within the group to outside the group
    • Random surfer gets trapped
  • Spider traps violate the conditions needed for the random walk theorem


Microsoft becomes a spider trap (from http://www.stanford.edu/class/cs345a/lectureslides/PageRank.pdf)

M’soft now links only to itself, so its column of M changes:

           y    a    m
    y  [ 1/2  1/2   0  ]
    a  [ 1/2   0    0  ]
    m  [  0   1/2   1  ]

Power iteration drains all rank into the trap:

  [y, a, m] = [1, 1, 1] → [1, 1/2, 3/2] → [3/4, 1/2, 7/4] → [5/8, 3/8, 2] → … → [0, 0, 3]

Random teleports
  • The Google solution for spider traps
  • At each time step, the random surfer has two options:
    • With probability β, follow a link at random
    • With probability 1−β, jump to some page uniformly at random
    • Common values for β are in the range 0.8 to 0.9
  • Surfer will teleport out of spider trap within a few time steps

Matrix formulation
  • Suppose there are N pages
    • Consider a page j, with set of outlinks O(j)
    • We have Mij = 1/|O(j)| when j → i and Mij = 0 otherwise
    • The random teleport is equivalent to:
      • adding a teleport link from j to every other page with probability (1−β)/N
      • reducing the probability of following each outlink from 1/|O(j)| to β/|O(j)|
      • equivalently: taxing each page a fraction (1−β) of its score and redistributing it evenly

Page Rank
  • Construct the NxN matrix A as follows
    • Aij = βMij + (1−β)/N
  • Verify that A is a stochastic matrix
  • The page rank vector r is the principal eigenvector of this matrix
    • satisfying r = Ar
  • Equivalently, r is the stationary distribution of the random walk with teleports
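A sketch of the construction on the spider-trap version of the toy web, assuming numpy and β = 0.8:

```python
import numpy as np

beta = 0.8
M = np.array([            # spider trap: M'soft links only to itself
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.0],
    [0.0, 0.5, 1.0],
])
N = M.shape[0]

# A = beta*M + (1-beta)/N, i.e. teleport with probability 1-beta.
A = beta * M + (1 - beta) / N * np.ones((N, N))
assert np.allclose(A.sum(axis=0), 1.0)   # A is column stochastic

r = np.ones(N)
for _ in range(100):                     # power iteration
    r = A @ r

print(r)   # ~[0.636, 0.455, 1.909] = [7/11, 5/11, 21/11]
```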


Previous example with β = 0.8 (from http://www.stanford.edu/class/cs345a/lectureslides/PageRank.pdf)

          [ 1/2  1/2   0 ]         [ 1/3  1/3  1/3 ]      y [ 7/15  7/15   1/15 ]
  A = 0.8 [ 1/2   0    0 ]  + 0.2  [ 1/3  1/3  1/3 ]  =   a [ 7/15  1/15   1/15 ]
          [  0   1/2   1 ]         [ 1/3  1/3  1/3 ]      m [ 1/15  7/15  13/15 ]

Power iteration now escapes the spider trap:

  [y, a, m] = [1, 1, 1] → [1.00, 0.60, 1.40] → [0.84, 0.60, 1.56] → [0.776, 0.536, 1.688] → … → [7/11, 5/11, 21/11]

Dead ends
  • Pages with no outlinks are “dead ends” for the random surfer
    • Nowhere to go on next step


Microsoft becomes a dead end (from http://www.stanford.edu/class/cs345a/lectureslides/PageRank.pdf)

M’soft now has no outlinks, so its column of M is all zeros and A is non-stochastic (its third column sums to only 0.2):

          [ 1/2  1/2   0 ]         [ 1/3  1/3  1/3 ]      y [ 7/15  7/15  1/15 ]
  A = 0.8 [ 1/2   0    0 ]  + 0.2  [ 1/3  1/3  1/3 ]  =   a [ 7/15  1/15  1/15 ]
          [  0   1/2   0 ]         [ 1/3  1/3  1/3 ]      m [ 1/15  7/15  1/15 ]

Rank leaks out of the system at every step:

  [y, a, m] = [1, 1, 1] → [1, 0.6, 0.6] → [0.787, 0.547, 0.387] → [0.648, 0.430, 0.333] → … → [0, 0, 0]

Dealing with dead-ends
  • Teleport
    • Follow random teleport links with probability 1.0 from dead-ends (see the sketch after this list)
    • Adjust matrix accordingly
  • Prune and propagate
    • Preprocess the graph to eliminate dead-ends
    • Might require multiple passes
    • Compute page rank on reduced graph
    • Approximate values for dead ends by propagating values from reduced graph
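A sketch of the teleport fix, assuming numpy: a dead end shows up as an all-zero column of M, which gets replaced by a uniform column (teleport with probability 1.0):

```python
import numpy as np

def patch_dead_ends(M):
    """Replace each all-zero column (a dead end) with uniform 1/N."""
    M = M.copy()
    N = M.shape[0]
    dead = M.sum(axis=0) == 0      # columns with no outlinks
    M[:, dead] = 1.0 / N
    return M

M = np.array([                     # M'soft is now a dead end
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0],
])
print(patch_dead_ends(M))          # third column becomes [1/3, 1/3, 1/3]
```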

Anchor Text
  • Can be more accurate description of target site than target site’s text itself
  • Can point at non-HTTP or non-text
    • Images
    • Videos
    • Databases
  • Possible for non-crawled pages to be returned in the process

Other Features
  • List of occurrences of a particular word in a particular document (Hit List)
  • Location information and proximity
  • Keeps track of visual presentation details:
    • Font size of words
    • Capitalization
    • Bold/Italic/Underlined/etc.
  • Full raw HTML of all pages is available in repository

Google Architecture (from http://www.ics.uci.edu/~scott/google.htm)

Implemented in C and C++ on Solaris and Linux

Google Architecture (from http://www.ics.uci.edu/~scott/google.htm)

  • Crawlers: multiple crawlers run in parallel; each keeps its own DNS lookup cache and ~300 connections open at once.
  • URL Server: keeps track of URLs that have been and need to be crawled.
  • Store Server: compresses and stores fetched web pages.
  • Anchors file: stores each link and the text surrounding it.
  • URL Resolver: converts relative URLs into absolute URLs.
  • Indexer: uncompresses and parses documents; stores link information in the anchors file.
  • Repository: contains the full HTML of every web page; each document is prefixed by docID, length, and URL.

Google Architecture (from http://www.ics.uci.edu/~scott/google.htm)

  • Indexer: parses documents and distributes hit lists into “barrels.”
  • URL Resolver: maps absolute URLs into docIDs stored in the Doc Index; stores anchor text in barrels; generates the links database (pairs of docIDs).
  • Forward barrels: partially sorted forward indexes, sorted by docID; each barrel stores the hit lists for a given range of wordIDs.
  • Lexicon: in-memory hash table that maps words to wordIDs; each entry points to the doclist in the barrel that the wordID falls into.
  • Sorter: creates the inverted index, from which the document list (docIDs plus hit lists) can be retrieved given a wordID.
  • Doc Index: docID-keyed index where each entry includes a pointer to the doc in the repository, a checksum, statistics, status, etc.; it also contains URL info if the doc has been crawled, otherwise just the URL.

Google Architecture (from http://www.ics.uci.edu/~scott/google.htm)

  • Barrels: two kinds; short barrels, whose hit lists include title or anchor hits, and full (long) barrels for all hit lists.
  • Lexicon rebuild: the list of wordIDs produced by the Sorter, together with the lexicon created by the Indexer, is used to create the new lexicon used by the Searcher; the lexicon stores ~14 million words.
  • Searcher: answers queries using the new lexicon (keyed by wordID), the inverted doc index (keyed by docID), and the PageRanks.

Google Query Evaluation
  1. Parse the query.
  2. Convert words into wordIDs.
  3. Seek to the start of the doclist in the short barrel for every word.
  4. Scan through the doclists until there is a document that matches all the search terms.
  5. Compute the rank of that document for the query.
  6. If we are in the short barrels and at the end of any doclist, seek to the start of the doclist in the full barrel for every word and go to step 4.
  7. If we are not at the end of any doclist, go to step 4.
  8. Sort the documents that have matched by rank and return the top k.
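The following is a schematic of this loop over toy in-memory doclists; the names (lexicon, barrels, rank_fn) are placeholders, and the real system streams sorted doclists from on-disk short and full barrels rather than materializing sets:

```python
def evaluate(query_words, lexicon, barrels, rank_fn, k=10):
    """Toy query evaluation: intersect doclists, rank, return top k."""
    word_ids = [lexicon[w] for w in query_words]      # words -> wordIDs
    doclists = [barrels[wid] for wid in word_ids]     # one doclist per word
    # A document matches when it appears in every doclist.
    matching = set.intersection(*(set(dl) for dl in doclists))
    return sorted(matching, key=rank_fn, reverse=True)[:k]

lexicon = {"web": 0, "search": 1}
barrels = {0: [3, 7, 9], 1: [2, 7, 9, 11]}            # wordID -> docIDs
print(evaluate(["web", "search"], lexicon, barrels,
               rank_fn=lambda d: -d))                 # [7, 9]
```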

Single Word Query Ranking
  • Hitlist is retrieved for single word
  • Each hit can be one of several types: title, anchor, URL, large font, small font, etc.
  • Each hit type is assigned its own weight
  • Type-weights make up vector of weights
  • Number of hits of each type is counted to form count-weight vector
  • Dot product of type-weight and count-weight vectors is used to compute IR score
  • IR score is combined with PageRank to compute final rank
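A sketch of that computation, assuming numpy; the type weights, the cap used to taper the count-weights, and the way the IR score and PageRank are combined are all invented for illustration (the paper does not publish the actual weights):

```python
import numpy as np

# Hypothetical weights per hit type: title, anchor, URL, large font, plain.
TYPE_WEIGHTS = np.array([10.0, 8.0, 6.0, 3.0, 1.0])

def ir_score(hit_counts):
    # Count-weights grow with counts but taper off (here: a hard cap).
    count_weights = np.minimum(hit_counts, 8)
    return TYPE_WEIGHTS @ count_weights          # dot product

def final_rank(hit_counts, pagerank, w=0.5):
    # Hypothetical combination of IR score and PageRank.
    return w * ir_score(hit_counts) + (1 - w) * pagerank

print(final_rank(np.array([1, 2, 0, 0, 5]), pagerank=3.2))  # 17.1
```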

Multi-word Query Ranking
  • Similar to single-word ranking except now must analyze proximity of words in a document
  • Hits occurring closer together are weighted higher than those farther apart
  • Each proximity relation is classified into 1 of 10 bins ranging from a “phrase match” to “not even close”
  • Each type and proximity pair has a type-prox weight
  • Counts converted into count-weights
  • Take dot product of count-weights and type-prox weights to compute the IR score
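A sketch of the binning step; the paper does not give the bin boundaries, so the edges below are invented:

```python
# Distance between two query words' positions -> proximity bin 0..9,
# where bin 0 is a "phrase match" and bin 9 is "not even close".
BIN_EDGES = [1, 2, 3, 5, 8, 13, 21, 34, 55]   # hypothetical edges

def proximity_bin(pos_a, pos_b):
    gap = abs(pos_a - pos_b)
    for b, edge in enumerate(BIN_EDGES):
        if gap <= edge:
            return b
    return 9

print(proximity_bin(14, 15))   # 0: adjacent words, "phrase match"
print(proximity_bin(10, 90))   # 9: "not even close"
```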

Scalability
  • Cluster architecture combined with Moore’s Law makes for high scalability. At the time of writing:
    • ~ 24 million documents indexed in one week
    • ~518 million hyperlinks indexed
    • Four crawlers collected 100 documents/sec

Key Optimization Techniques
  • Each crawler maintains its own DNS lookup cache
  • Use flex to generate a lexical analyzer with its own stack for parsing documents
  • Parallelization of indexing phase
  • In-memory lexicon
  • Compression of repository
  • Compact encoding of hit lists to save space (see the packing sketch after this list)
  • Indexer is optimized to run just faster than the crawlers, so that crawling stays the bottleneck
  • Document index is updated in bulk
  • Critical data structures placed on local disk
  • Overall architecture designed to avoid disk seeks wherever possible
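The paper describes a “plain hit” as two bytes: one capitalization bit, three bits of (relative) font size, and twelve bits of word position. A sketch of that packing (the paper’s convention for positions beyond 4095 is simplified to a clamp here):

```python
def pack_plain_hit(capitalized, font_size, position):
    """Pack a plain hit into 16 bits: cap(1) | font(3) | position(12)."""
    position = min(position, 4095)             # clamp to fit 12 bits
    return (capitalized & 0x1) << 15 | (font_size & 0x7) << 12 | position

def unpack_plain_hit(hit):
    return (hit >> 15) & 0x1, (hit >> 12) & 0x7, hit & 0xFFF

hit = pack_plain_hit(capitalized=1, font_size=2, position=57)
print(f"{hit:#06x}", unpack_plain_hit(hit))    # 0xa039 (1, 2, 57)
```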

Storage Requirements (from http://www.ics.uci.edu/~scott/google.htm)

At the time of publication, Google’s storage broke down roughly as follows (Table 1 of the paper): 147.8 GB of fetched pages compressed into a 53.5 GB repository, a 37.2 GB full inverted index, a 4.1 GB short inverted index, a 9.7 GB document index, 6.6 GB of anchor data, a 3.9 GB links database, and a 293 MB lexicon. The total came to about 55.2 GB without the repository and 108.7 GB with it.

Conclusions
  • Search is far from perfect
    • Topic/Domain-specific PageRank
    • Machine translation in search
    • Non-hypertext search
  • Business potential
    • Brin and Page worth around $15 billion each… at 32 years old!
    • If you have a better idea than how Google does search, please remember me when you’re hiring software engineers! :)

Possible Exam Questions
  • Given a web/link graph, formulate a Naïve PageRank link matrix and do a few steps of power iteration.
    • Slides 14 – 16
  • What are spider traps and dead ends, and how does Google deal with these?
    • Spider Trap: Slides 19 – 21
    • Dead End: Slides 25 – 27
  • Explain difference between single and multiple word search query evaluation.
    • Slides 35 – 36

References
  • Brin, S., Page, L. The Anatomy of a Large-Scale Hypertextual Web Search Engine.
  • Page, L., Brin, S., Motwani, R., Winograd, T. The PageRank Citation Ranking: Bringing Order to the Web.
  • http://www.stanford.edu/class/cs345a/lectureslides/PageRank.pdf
  • http://www.cs.duke.edu/~junyang/courses/cps296.1-2002-spring/lectures/02-web-search.pdf
  • http://www.ics.uci.edu/~scott/google.htm

Thank you!
