
Lecture 5: Mathematics of Networks (Cont)



  1. Lecture 5: Mathematics of Networks (Cont) CS 790g: Complex Networks Slides are modified from Networks: Theory and Application by Lada Adamic

  2. Characterizing networks: How far apart are things?

  3. Network metrics: paths • A path is any sequence of vertices such that every consecutive pair of vertices in the sequence is connected by an edge in the network. • In a directed network, the edges must be traversed in the correct direction. • A path may visit the same vertex or edge more than once. • Self-avoiding paths do not intersect themselves. • The path length r is the number of edges on the path; these edges are often called hops.

  4. Network metrics: paths

  5. Network metrics: shortest paths [Figure: an example graph on vertices A–E, annotated with path lengths of 1, 2, and 3 hops.]
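Shortest-path (geodesic) distances in an unweighted graph come from breadth-first search. A minimal Python sketch; the adjacency list below is a hypothetical stand-in for the slide's A–E figure, not its exact graph:

```python
from collections import deque

def shortest_path_lengths(adj, source):
    """Breadth-first search: hop counts from `source` to every reachable
    vertex of an unweighted graph. `adj` maps a vertex to its neighbors."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit = shortest distance
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Hypothetical graph loosely in the spirit of the slide's A-E example:
adj = {'A': ['B', 'D'], 'B': ['A', 'C'], 'C': ['B'],
       'D': ['A', 'E'], 'E': ['D']}
print(shortest_path_lengths(adj, 'A'))  # {'A': 0, 'B': 1, 'D': 1, 'C': 2, 'E': 2}
```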

  6. Structural metrics: average path length • For a connected network of N vertices with average shortest-path length L and diameter D: 1 ≤ L ≤ D ≤ N−1
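For reference, the quantities in this inequality are standardly defined as follows (the formulas themselves are implied but not shown on the slide):

```latex
L = \frac{1}{N(N-1)} \sum_{i \neq j} d(i,j),
\qquad
D = \max_{i,j} d(i,j)
```

where d(i, j) is the shortest-path (hop) distance between vertices i and j, and D, the diameter, is the longest such distance.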

  7. Eulerian Path • Euler's Seven Bridges of Königsberg • one of the first problems in graph theory • Is there a route that crosses each bridge only once and returns to the starting point? Source: http://en.wikipedia.org/wiki/Seven_Bridges_of_Königsberg Images 1–3 – GNU FDL v1.2: Bogdan, Booyabazooka, and Riojajar, Wikipedia; http://commons.wikimedia.org/wiki/Commons:GNU_Free_Documentation_License

  8. Eulerian and Hamiltonian paths • Eulerian path: traverses each edge exactly once • Hamiltonian path: visits each vertex exactly once (it is self-avoiding) • If the starting point and end point must be the same, an Eulerian path is possible only if no vertex has an odd degree, since the path must both enter and leave each land mass it visits • If the path need not return to its starting point, either 0 or 2 vertices may have an odd degree (an odd-degree vertex can serve as the start or the end); see the sketch below
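Euler's degree test is easy to automate. A minimal sketch, assuming a connected multigraph encoded by repeating a neighbor once per parallel edge; the Königsberg labels for the four land masses are my own:

```python
def eulerian_status(adj):
    """Classify a connected undirected (multi)graph by Euler's degree test.
    `adj` maps each vertex to a list of neighbors; parallel edges are
    repeated entries. Connectivity is assumed, not checked."""
    odd = [v for v, nbrs in adj.items() if len(nbrs) % 2 == 1]
    if len(odd) == 0:
        return "Eulerian circuit: can return to the starting point"
    if len(odd) == 2:
        return "Eulerian path only: must start and end at the odd-degree vertices"
    return "no Eulerian path"

# Königsberg: north bank, south bank, island, east bank; seven bridges.
konigsberg = {
    'north':  ['island', 'island', 'east'],
    'south':  ['island', 'island', 'east'],
    'island': ['north', 'north', 'south', 'south', 'east'],
    'east':   ['north', 'south', 'island'],
}
print(eulerian_status(konigsberg))  # all four vertices have odd degree -> no Eulerian path
```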

  9. Characterizing networks: Is everything connected?

  10. Network metrics: components • If there is a path from every vertex in a network to every other, the network is connected • otherwise, it is disconnected • Component: a subset of vertices such that there exists at least one path from each member of the subset to every other member, and there does not exist a vertex outside the subset that is connected to any vertex in the subset • i.e., a maximal connected subset • A singleton vertex that is not connected to any other forms a component of size one • Every vertex belongs to exactly one component (a sketch of component-finding follows)
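Component extraction is a flood fill. A minimal plain-Python sketch over a hypothetical adjacency list:

```python
def connected_components(adj):
    """Return the components of an undirected graph as sets of vertices."""
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]   # depth-first flood fill from `start`
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        components.append(comp)
    return components

adj = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4], 6: []}  # hypothetical graph
print(connected_components(adj))  # [{1, 2, 3}, {4, 5}, {6}] -- 6 is a singleton component
```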

  11. components in directed networks • Weakly connected component: every node can be reached from every other node by following links in either direction • Strongly connected component: each node within the component can be reached from every other node in the component by following the directed links [Figure: an example directed graph on vertices A–H, partitioned first into its weakly connected components and then into its strongly connected components.]
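With networkx available (an assumption; any graph library with component routines works), both notions fall out directly. The toy digraph is hypothetical:

```python
import networkx as nx  # assumes networkx is installed

# A 3-cycle A->B->C->A plus a one-way tail C->D->E.
G = nx.DiGraph([('A', 'B'), ('B', 'C'), ('C', 'A'), ('C', 'D'), ('D', 'E')])

print(list(nx.weakly_connected_components(G)))
# one weak component: [{'A', 'B', 'C', 'D', 'E'}]
print(list(nx.strongly_connected_components(G)))
# three strong components (order may vary): {'A', 'B', 'C'}, {'D'}, {'E'}
```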

  12. components in directed networks • Every strongly connected component of more than one vertex contains at least one cycle • Out-component: the set of all vertices that are reachable via directed paths starting at a specific vertex v • The out-components of all members of a strongly connected component are identical • In-component: the set of all vertices from which there is a directed path to a vertex v • The in-components of all members of a strongly connected component are likewise identical [Figure: the example directed graph on vertices A–H again.]
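In the same hypothetical networkx setting, out- and in-components are just reachability sets:

```python
import networkx as nx  # assumes networkx is installed

G = nx.DiGraph([('A', 'B'), ('B', 'C'), ('C', 'A'), ('C', 'D')])

# Out-component of v: v together with everything reachable from it.
print({'A'} | nx.descendants(G, 'A'))  # {'A', 'B', 'C', 'D'}
print({'B'} | nx.descendants(G, 'B'))  # identical: A and B share an SCC
# In-component of v: v together with everything that can reach it.
print({'D'} | nx.ancestors(G, 'D'))    # {'A', 'B', 'C', 'D'}
```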

  13. network metrics: size of giant component • if the largest component encompasses a significant fraction of the graph, it is called the giant component

  14. bowtie model of the web • The Web is a directed graph: • webpages link to other webpages • The connected components tell us which pages can be reached from which others just by surfing • no ‘jumping’ around by typing in a URL or using a search engine • Broder et al. 1999 – crawl of over 200 million pages and 1.5 billion links. • SCC – 27.5% • IN and OUT – 21.5% • Tendrils and tubes – 21.5% • Disconnected – 8%

  15. Independent paths • Edge-independent paths: paths that share no common edge • Vertex-independent paths: paths that share no common vertex except the start and end vertices • Vertex-independent ⇒ edge-independent • Also called disjoint paths • Such a set of paths is not necessarily unique • Connectivity of a pair of vertices: the maximal number of independent paths between them • Used to identify bottlenecks and resiliency to failures (see the sketch below)
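networkx (assumed available) exposes both connectivity numbers directly; the bottleneck graph below is hypothetical:

```python
import networkx as nx  # assumes networkx is installed

G = nx.cycle_graph(4)  # vertices 0-3 in a square
G.add_edge(3, 4)       # pendant vertex 4 hanging off vertex 3

print(nx.node_connectivity(G, 0, 2))  # 2: two vertex-independent paths around the cycle
print(nx.edge_connectivity(G, 0, 2))  # 2: vertex-independence implies edge-independence
print(nx.node_connectivity(G, 0, 4))  # 1: vertex 3 is a bottleneck on every path to 4
```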

  16. Cut Sets and Maximum Flow • A minimum cut set is the smallest cut set that will disconnect a specified pair of vertices • It need not be unique • Menger’s theorem: if there is no cut set of size less than n between a pair of vertices, then there are at least n independent paths between the same vertices. • This implies that the size of the minimum cut set equals the maximum number of independent paths (for both edge and vertex independence) • The maximum flow between a pair of vertices equals the number of edge-independent paths times the edge capacity (assuming all edges carry the same capacity).
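A sketch of the flow/cut correspondence with unit capacities, again assuming networkx; the small digraph is hypothetical:

```python
import networkx as nx  # assumes networkx is installed

# Unit capacities: max flow then equals the number of edge-independent paths,
# which by Menger's theorem equals the size of the minimum edge cut set.
G = nx.DiGraph()
G.add_edges_from([('s', 'a'), ('s', 'b'), ('a', 't'), ('b', 't'), ('a', 'b')],
                 capacity=1)

flow_value, _ = nx.maximum_flow(G, 's', 't')
print(flow_value)                        # 2: paths s-a-t and s-b-t
print(nx.minimum_edge_cut(G, 's', 't'))  # a cut set of the same size, 2 edges
```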

  17. Graph Laplacian
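For reference (a standard definition; the eigenvalue machinery introduced next is the main tool for studying L's spectrum): for an undirected graph with adjacency matrix A, vertex degrees k_i, and diagonal degree matrix D = diag(k_1, …, k_N),

```latex
L = D - A, \qquad
L_{ij} =
\begin{cases}
k_i & \text{if } i = j,\\
-1  & \text{if } i \neq j \text{ and } (i,j) \text{ is an edge},\\
0   & \text{otherwise.}
\end{cases}
```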

  18. Eigenvalues and eigenvectors • Eigenvalues and eigenvectors have their origins in physics, in particular in problems where motion is involved, although their uses extend from stress and strain problems to differential equations and quantum mechanics. • Eigenvectors are vectors that a transformation does not rotate: they point in directions that are only stretched or shrunk. Eigenvalues give the factor by which the eigenvector’s length changes. • The basic equation in eigenvalue problems is: Ax = λx (E.01) Slides from Fred K. Duennebier

  19. Eigenvalues and eigenvectors • In words, this deceptively simple equation (E.01) says that for the square matrix A there is a vector x such that the product Ax equals a SCALAR, λ, multiplied by x: the result points along x itself. Multiplying the vector x by a scalar constant is the same as stretching or shrinking its coordinates by a constant value.

  20. The vector x is called an eigenvector and the scalar λ is called an eigenvalue. • Do all matrices have real eigenvalues? • No; the matrix must be square, and the determinant of A − λI must equal zero. This is easy to show: Ax = λx implies (A − λI)x = 0 (E.02), which has a nonzero solution x only if det(A − λI) = |A − λI| = 0 (E.03). • Are eigenvectors unique? • No: if x is an eigenvector, then cx is also an eigenvector with the same eigenvalue λ, since A(cx) = cAx = cλx = λ(cx) (E.04).

  21. How do you calculate eigenvectors and eigenvalues? Expand equation (E.03), det(A − λI) = |A − λI| = 0, for a 2×2 matrix: (a11 − λ)(a22 − λ) − a12a21 = 0 (E.05). For a 2-dimensional problem such as this, the equation above is a simple quadratic equation with two solutions for λ. In fact, there is generally one eigenvalue for each dimension, but some may be zero, and some complex.

  22. The solution to E.05 comes from expanding it into the characteristic equation λ² − (a11 + a22)λ + (a11a22 − a12a21) = 0 (E.06) and applying the quadratic formula: λ = [(a11 + a22) ± √((a11 + a22)² − 4(a11a22 − a12a21))]/2 (E.07). This “characteristic equation” does not involve x, and the resulting values of λ can be used to solve for x. Consider the following example: A = [1 2; 2 4]. Eqn. E.07 doesn’t simplify matters here because a11a22 − a12a21 = 0, so we use E.06: λ² − 5λ = 0.

  23. We see that one solution to this equation is λ = 0, and dividing both sides of λ² = 5λ by λ yields λ = 5. Thus we have our two eigenvalues, and the eigenvectors for the first eigenvalue, λ = 0, satisfy Ax = 0: x + 2y = 0 and 2x + 4y = 0. These equations are multiples of x = −2y, so the smallest whole-number values that fit are x = 2, y = −1.

  24. For the other eigenvalue, λ = 5, the system (A − 5I)x = 0 gives −4x + 2y = 0 and 2x − y = 0, i.e. y = 2x, so x = 1, y = 2. This example is rather special: A⁻¹ does not exist, the two rows of A − λI are dependent, and thus one of the eigenvalues is zero. (Zero is a legitimate eigenvalue!) EXAMPLE: A more common case is A = [1.05 .05 ; .05 1], used in the strain exercise. Find the eigenvectors and eigenvalues for this A, and then calculate [V,D] = eig(A). The procedure is: Compute the determinant of A − λI. Find the roots of the polynomial given by |A − λI| = 0. Solve the system of equations (A − λI)x = 0. (A numpy sketch follows.)
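The same computation in Python, where numpy's eig plays the role of MATLAB's [V,D] = eig(A); a sketch, not part of the original exercise:

```python
import numpy as np

A = np.array([[1.05, 0.05],
              [0.05, 1.00]])

eigenvalues, eigenvectors = np.linalg.eig(A)  # columns of `eigenvectors` are the x's
print(eigenvalues)
for lam, x in zip(eigenvalues, eigenvectors.T):
    # Verify the defining equation A x = lambda x for each eigenpair
    print(np.allclose(A @ x, lam * x))        # True, True
```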

  25. What good are such things? Consider the matrix A = [0.8 0.3; 0.2 0.7]. What is A¹⁰⁰? We can get A¹⁰⁰ by multiplying matrices many, many times. Or we could find the eigenvalues of A and obtain A¹⁰⁰ very quickly using eigenvalues.

  26. For now, I’ll just tell you that there are two eigenvectors for A: the eigenvectors are x1 = [.6 ; .4] and x2 = [1 ; −1], and the eigenvalues are λ1 = 1 and λ2 = 0.5. Note that if we multiply x1 by A, we get x1. If we multiply x1 by A again, we STILL get x1. Thus x1 doesn’t change as we multiply it by Aⁿ.

  27. What about x2? When we multiply x2 by A, we get x2/2, and if we multiply x2 by A², we get x2/4. This number gets very small fast. Note that when A is squared, the eigenvectors stay the same, but the eigenvalues are squared! Back to our original problem: for A¹⁰⁰ the eigenvectors will be the same, and the eigenvalues are λ1 = 1 and λ2 = (0.5)¹⁰⁰, which is effectively zero. Each eigenvector is multiplied by its eigenvalue whenever A is applied, so A¹⁰⁰ preserves the x1 component and all but annihilates the x2 component (see the sketch below).
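A sketch of this diagonalization shortcut in numpy, using the matrix from slide 25; A¹⁰⁰ collapses everything onto the λ = 1 eigenvector:

```python
import numpy as np

A = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Diagonalize: A = P D P^{-1}, so A^100 = P D^100 P^{-1},
# and D^100 just raises each eigenvalue to the 100th power.
eigvals, P = np.linalg.eig(A)
A100 = P @ np.diag(eigvals**100) @ np.linalg.inv(P)

print(np.allclose(A100, np.linalg.matrix_power(A, 100)))  # True
print(A100)  # ~[[0.6, 0.6], [0.4, 0.4]]: only the lambda = 1 direction survives
```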
