
Advanced Algorithms

This article explains the PageRank algorithm and its relation to matrix operations. It covers matrix properties, determinants, eigenvalues, and eigenvectors, and discusses how these matrix operations are used to compute importance scores for web pages in search engines.

Presentation Transcript


  1. Advanced Algorithms Piyush Kumar (Lecture 5: PageRank) Welcome to COT5405

  2. Quick Recap: Linear Algebra • Matrices Source: http://www.phy.cuhk.edu.hk/phytalent/mathphy/

  3. 1.1 Matrices Square matrices • A matrix is square when m = n, i.e., it has the same number of rows as columns.

  4. 1.2 Operations of matrices Sums of matrices Example: for given matrices A and B, evaluate A + B and A - B.

  5. 1.2 Operations of matrices Scalar multiplication Example: for a given matrix A, evaluate 3A.

  6. 1.2 Operations of matrices Properties If matrices A, B and C are conformable: • A + B = B + A (commutative law) • A + (B + C) = (A + B) + C (associative law) • λ(A + B) = λA + λB, where λ is a scalar (distributive law) Can you prove them?

  7. 1.2 Operations of matrices Matrix multiplication • If A = [aij] is an m × p matrix and B = [bij] is a p × n matrix, then AB is defined as the m × n matrix C = AB, where C = [cij] with cij = ai1b1j + ai2b2j + … + aipbpj for 1 ≤ i ≤ m, 1 ≤ j ≤ n. Example: for the given A and B, let C = AB and evaluate c21.
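The matrices A and B on this slide appear only as images, so the sketch below uses hypothetical 2 × 3 and 3 × 2 matrices (the values are assumptions, not the slide's) to illustrate the definition of cij and the entry c21.

```python
import numpy as np

# Hypothetical matrices standing in for the A and B shown on the slide.
A = np.array([[1, 2, 3],
              [4, 5, 6]])            # 2 x 3
B = np.array([[ 7,  8],
              [ 9, 10],
              [11, 12]])             # 3 x 2

C = A @ B                            # the 2 x 2 product AB

# c21 (row 2, column 1) straight from the definition:
# c21 = a21*b11 + a22*b21 + a23*b31
c21 = sum(A[1, k] * B[k, 0] for k in range(A.shape[1]))

print(C)
print(c21, C[1, 0])                  # both are 139
```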

  8. 1.2 Operations of matrices Matrix multiplication Example: for given matrices A and B, evaluate C = AB.

  9. 1.2 Operations of matrices Properties If matrices A, B and C are conformable: • A(B + C) = AB + AC • (A + B)C = AC + BC • A(BC) = (AB)C • AB ≠ BA in general • AB = 0 does NOT necessarily imply A = 0 or B = 0 • AB = AC does NOT necessarily imply B = C (it does, however, if A is invertible).

  10. Identity Matrix Examples of identity matrices: the 2 × 2 and 3 × 3 identity matrices, which have 1s on the main diagonal and 0s everywhere else.

  11. 1.3 Types of matrices The transpose of a matrix • The matrix obtained by interchanging the rows and columns of a matrix A is called the transpose of A (written AT). • For a matrix A = [aij], its transpose is AT = [bij], where bij = aji.

  12. 1.3 Types of matrices The inverse of a matrix • If matrices A and B are such that AB = BA = I, then B is called the inverse of A (symbol: A-1) and A is called the inverse of B (symbol: B-1). Example: show that a given B is the inverse of a matrix A by verifying AB = BA = I. Can you show the details?
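A minimal sketch of the inverse check, assuming a hypothetical invertible matrix A (the slide's A and B are not reproduced in this transcript): B is the inverse of A exactly when AB = BA = I.

```python
import numpy as np

A = np.array([[2., 1.],
              [5., 3.]])             # hypothetical invertible matrix
B = np.linalg.inv(A)                 # candidate inverse

I = np.eye(2)
print(np.allclose(A @ B, I))         # True
print(np.allclose(B @ A, I))         # True
print(B)                             # [[ 3. -1.] [-5.  2.]]
```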

  13. 1.3 Types of matrices Symmetric matrix • A matrix A such that AT = A is called symmetric, i.e., aji = aij for all i and j. • A + AT must be symmetric. Why? • A matrix A such that AT = -A is called skew-symmetric, i.e., aji = -aij for all i and j. • A - AT must be skew-symmetric. Why?

  14. 1.4 Properties of matrices • (AB)-1 = B-1A-1 • (AT)T = A and (λA)T = λAT • (A + B)T = AT + BT • (AB)T = BTAT
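A quick numerical sanity check of these product rules, using hypothetical random matrices (kept invertible by boosting the diagonal); none of this data comes from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 3)) + 3 * np.eye(3)   # diagonally dominant, hence invertible
B = rng.random((3, 3)) + 3 * np.eye(3)

print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A)))  # (AB)^-1 = B^-1 A^-1
print(np.allclose((A @ B).T, B.T @ A.T))                                       # (AB)^T = B^T A^T
print(np.allclose((A + B).T, A.T + B.T))                                       # (A + B)^T = A^T + B^T
```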

  15. The determinant of a 2 × 2 matrix: |A| = a11a22 - a21a12. • Note: 1. For every square matrix, there is a real number associated with the matrix, called its determinant. 2. It is common practice to delete the matrix brackets and write the entries between vertical bars. Source: http://www.management.ntu.edu.tw/~jywang/course/

  16. Historically, the use of determinants arose from the recognition of special patterns that occur in the solutions of linear systems: for a11x1 + a12x2 = b1 and a21x1 + a22x2 = b2, the solution is x1 = (b1a22 - b2a12)/(a11a22 - a21a12) and x2 = (b2a11 - b1a21)/(a11a22 - a21a12). • Note: 1. This requires a11a22 - a21a12 ≠ 0. 2. x1 and x2 have the same denominator, and this quantity is called the determinant of the coefficient matrix A.
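A sketch of these solution formulas with hypothetical coefficients (the slide's system is not reproduced), showing the shared denominator a11a22 - a21a12 and cross-checking against a linear solver.

```python
import numpy as np

# Hypothetical system: 3*x1 + 1*x2 = 9, 1*x1 + 2*x2 = 8.
a11, a12, a21, a22 = 3.0, 1.0, 1.0, 2.0
b1, b2 = 9.0, 8.0

det_A = a11 * a22 - a21 * a12            # determinant of the coefficient matrix
assert det_A != 0                        # the formulas require a nonzero determinant

x1 = (b1 * a22 - b2 * a12) / det_A       # both unknowns share the denominator det_A
x2 = (b2 * a11 - b1 * a21) / det_A
print(x1, x2)                            # 2.0 3.0

# Cross-check with numpy's solver.
print(np.linalg.solve(np.array([[a11, a12], [a21, a22]]), np.array([b1, b2])))
```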

  17. Ex. 1: (The determinant of a matrix of order 2) • Note: The determinant of a matrix can be positive, zero, or negative

  18. 1.5 Determinants The following properties are true for determinants of any order: • If every element of a row (column) is zero, then |A| = 0. • |AT| = |A|, i.e., the determinant of a matrix equals that of its transpose. • |AB| = |A||B|.

  19. Eigenvalues and Eigenvectors Ax = λx for some nonzero vector x. Then (A - λI)x = 0, so (A - λI)-1 should not exist, i.e., det(A - λI) = 0. Fact: A and transpose(A) have the same eigenvalues. Why?
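A minimal sketch, assuming a small hypothetical matrix: it checks that each eigenpair satisfies Ax = λx and det(A - λI) = 0, and that A and its transpose yield the same eigenvalues (their characteristic polynomials agree because |M| = |MT|).

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])               # hypothetical example matrix

vals, vecs = np.linalg.eig(A)
vals_T = np.linalg.eigvals(A.T)
print(np.sort(vals), np.sort(vals_T))  # same eigenvalues for A and A^T

for lam, x in zip(vals, vecs.T):       # columns of vecs are the eigenvectors
    print(np.allclose(A @ x, lam * x),                          # A x = lambda x
          np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))  # det(A - lambda I) = 0
```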

  20. Page Ranking

  21. Task of search engines • Crawl. • Build indices so that one can search keywords efficiently. • Rate the importance of pages. • One example is the simple algorithm named PageRank.

  22. The basic idea • Mimic democracy! • Use the brains of all people collectively.

  23. The basic idea • Mimic democracy! • Use the brains of all people collectively for the ranking. What’s wrong with counting backlinks? Should page 1 be ranked above page 4?

  24. Voting using backlinks? But then we don't want an individual page to cast more than one vote in total. Normalize each page's votes?

  25. Normalized Voting?

  26. Link Matrix (for the given web):

  27. Link Matrix (for the given web): Most important node = 1?
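The web graph on this slide is shown only as a figure, so the sketch below builds the column stochastic link matrix for a hypothetical 4-page web (the links dictionary is an assumption): column j holds page j's votes, each equal to 1 divided by page j's number of outgoing links.

```python
import numpy as np

# Hypothetical web: each page links to the pages in its list.
links = {1: [2, 3, 4],
         2: [3, 4],
         3: [1],
         4: [1, 3]}
n = len(links)

A = np.zeros((n, n))
for j, outgoing in links.items():
    for i in outgoing:
        A[i - 1, j - 1] = 1.0 / len(outgoing)   # page j's vote for page i

print(A)
print(A.sum(axis=0))     # every column sums to 1, so A is column stochastic
```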

  28. Definition • A square matrix is called column stochastic if all of its entries are non-negative and the entries in each column sum to 1. • Lemma: Every column stochastic matrix has 1 as an eigenvalue. • Proof sketch: A and A' = transpose of A have the same eigenvalues (why?). Every row of A' sums to 1, so A'e = e for the all-ones vector e; hence 1 is an eigenvalue of A', and therefore of A.
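A numerical check of the lemma on the hypothetical link matrix built above: every row of A' = AT sums to 1, so A' applied to the all-ones vector e gives e back; hence 1 is an eigenvalue of A', and therefore of A.

```python
import numpy as np

# Column stochastic link matrix of the hypothetical 4-page web from the previous sketch.
A = np.array([[0.,   0.,  1., 0.5],
              [1/3., 0.,  0., 0. ],
              [1/3., 0.5, 0., 0.5],
              [1/3., 0.5, 0., 0. ]])

e = np.ones(4)
print(np.allclose(A.T @ e, e))                      # A' e = e, so 1 is an eigenvalue of A'
print(np.isclose(np.linalg.eigvals(A), 1.0).any())  # and 1 is indeed an eigenvalue of A
```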

  29. Two shortcomings • Nonunique rankings. • Dangling nodes: nodes with no outgoing edges. • With dangling nodes, the matrix is no longer column stochastic. • Can we transform it into one easily?
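The slide only asks whether the matrix can be transformed back into a column stochastic one; a common fix (an assumption here, not stated on the slide) is to let each dangling node vote for every page equally, i.e. to replace its all-zero column with a uniform column of 1/n.

```python
import numpy as np

def fix_dangling(A):
    """Replace every all-zero column (a dangling node that casts no votes)
    with a uniform column of 1/n, restoring column stochasticity."""
    A = A.copy()
    n = A.shape[0]
    dangling = A.sum(axis=0) == 0
    A[:, dangling] = 1.0 / n
    return A

# Hypothetical 3-page web in which page 3 has no outgoing links.
A = np.array([[0., 0.5, 0.],
              [1., 0.,  0.],
              [0., 0.5, 0.]])
print(fix_dangling(A).sum(axis=0))   # [1. 1. 1.]: every column sums to 1 again
```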

  30. Nonunique Rankings When the eigenspace for eigenvalue 1 has dimension greater than one (for example, when the web has disconnected pieces), it is not clear which linear combination of eigenvectors we should pick for the ranking.

  31. Nonunique Rankings

  32. Nonunique Rankings

  33. Modification of the Link Matrix The value of m used by Google (1998) was 0.15. For any m between 0 and 1, M is column stochastic. M can be used to compute unambiguous importance scores (in the absence of dangling nodes). For m = 1, what is the only normalized eigenvector with eigenvalue 1?
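The modified matrix on this slide appears only as an image; the sketch below assumes the standard formulation M = (1 - m)A + mS, where S is the n × n matrix whose entries are all 1/n, which is consistent with the m = 0.15 value quoted above.

```python
import numpy as np

def google_matrix(A, m=0.15):
    # Assumed form of the modification: M = (1 - m) * A + m * S, with S uniform.
    # A is taken to be column stochastic already (dangling nodes fixed).
    n = A.shape[0]
    S = np.full((n, n), 1.0 / n)
    return (1.0 - m) * A + m * S

# Link matrix of the hypothetical 4-page web used earlier.
A = np.array([[0.,   0.,  1., 0.5],
              [1/3., 0.,  0., 0. ],
              [1/3., 0.5, 0., 0.5],
              [1/3., 0.5, 0., 0. ]])
M = google_matrix(A, m=0.15)
print(M.sum(axis=0))     # each column still sums to 1 for any m between 0 and 1
```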

  34. Modification of the Link Matrix

  35. Example 1 • For our first example graph, m = 0.15.

  36. Example 2 • Still, m = 0.15.

  37. Towards the proof Proof by contradiction: let x be an eigenvector with mixed signs for the eigenvalue 1. For real numbers a and b with mixed signs, |a + b| < |a| + |b|; applying this strict inequality entrywise leads to a contradiction.

  38. Towards the proof

  39. A punchline

  40. The Algorithm (aka Power Method)
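The algorithm itself is shown on the slide as an image, so the following is a minimal power-method sketch under the usual assumptions: M is the modified, column stochastic (and strictly positive) matrix, the iteration starts from the uniform vector, and it stops once successive iterates are close.

```python
import numpy as np

def pagerank_power(M, tol=1e-10, max_iter=1000):
    """Repeatedly apply M to the uniform starting vector; for a positive,
    column stochastic M the iterates converge to the unique normalized
    eigenvector for eigenvalue 1, i.e. the vector of importance scores."""
    n = M.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        x_next = M @ x
        if np.abs(x_next - x).sum() < tol:
            return x_next
        x = x_next
    return x

# Modified matrix for the hypothetical 4-page web, with m = 0.15.
A = np.array([[0.,   0.,  1., 0.5],
              [1/3., 0.,  0., 0. ],
              [1/3., 0.5, 0., 0.5],
              [1/3., 0.5, 0., 0. ]])
M = 0.85 * A + 0.15 * np.full((4, 4), 0.25)
scores = pagerank_power(M)
print(scores, scores.sum())          # importance scores; they sum to 1
```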

  41. c ?

  42. One last lemma…

  43. Why does it converge?

  44. The main theorem For figure 2:

  45. First Example Do we need any modifications to A?

  46. Calculations

  47. Another Example

  48. Random Surfer Model • The 85-15 rule: • Assume that 85 percent of the time the random surfer clicks a random link on the current page (each link chosen with equal probability). • 15 percent of the time the random surfer goes directly to a random page (all pages on the web chosen with equal probability).
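As an illustration of this model (not code from the slides), the Monte Carlo sketch below follows the 85-15 rule on the same hypothetical 4-page web used earlier, re-indexed from 0; the long-run visit frequencies approximate the PageRank scores computed by the power method.

```python
import numpy as np

rng = np.random.default_rng(0)
links = {0: [1, 2, 3], 1: [2, 3], 2: [0], 3: [0, 2]}   # hypothetical 4-page web
n, m = 4, 0.15

visits = np.zeros(n)
page = 0
for _ in range(200_000):
    if rng.random() < m or not links[page]:
        page = int(rng.integers(n))          # 15%: jump to a random page
    else:
        page = int(rng.choice(links[page]))  # 85%: follow a random link on the current page
    visits[page] += 1

print(visits / visits.sum())                 # close to the power-method scores
```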

  49. Random Surfer Model • Cons • No one chooses links or pages with equal probability. • There is no real potential to surf directly to each page on the web. • The 85-15 (or any fixed) breakdown is just a guess. • Back Button? Bookmarks? • Despite these flaws, the model is good enough that we have learnt a great deal about the web using it.

  50. Related stuff to explore • Random walks and Markov Chains. • Random Graph construction using Random walks. • Absorbing Markov Chains. • Ranking with not too many similar items at the top. • Dynamical Systems point of view. • Equilibrium or Stationary Distributions. • Rate of convergence. • Perron-Frobenius Theorem • Intentional Surfer model. Markov Chain Slides: http://www.math.dartmouth.edu/archive/m20x06/public_html/Lecture13.pdf http://www.math.dartmouth.edu/archive/m20x06/public_html/Lecture14.pdf http://www.math.dartmouth.edu/archive/m20x06/public_html/Lecture15.pdf
