
Lecture 11: All-pairs shortest paths







  1. Lecture 11: All-pairs shortest paths

  2. Dynamic programming • Comparing to divide-and-conquer: • Both partition the problem into sub-problems. • Divide-and-conquer partitions a problem into independent sub-problems; dynamic programming is applicable when the sub-problems are dependent (they overlap).

  3. Dynamic programming • Comparing to the greedy method: • Both are applicable when a problem exhibits optimal substructure. • A greedy algorithm uses optimal substructure in a top-down fashion: without first finding optimal solutions to sub-problems, it makes the choice that looks best at the time and then solves the resulting sub-problem. Dynamic programming uses optimal substructure in a bottom-up fashion: it first finds optimal solutions to sub-problems and, having solved them, combines them into an optimal solution to the whole problem.

  4. Dynamic programming • The development of a dynamic programming algorithm can be broken into four steps: • Characterize the structure of an optimal solution. • Recursively define the value of an optimal solution. • Compute the value of an optimal solution in a bottom-up fashion. • Construct an optimal solution from computed information. Steps 1-3 form the basis of a dynamic-programming solution to a problem. Step 4 can be omitted if only the value of an optimal solution is required. When we do perform step 4, we sometimes maintain additional information during the computation in step 3 to ease the construction of an optimal solution.

  5. [Figure: 0-1 knapsack example; value/weight ratios 1, 3, 3.6, 3.66, 4]

  6. [Figure: 0-1 knapsack example, continued; value 35]

  7. All-pairs shortest-paths problem • Problem: Given a directed graph G = (V, E) and a weight function w: E → R, for each pair of vertices u, v compute the shortest-path weight δ(u, v), and a shortest path if one exists. • Output: • A |V| × |V| matrix D = (dij), where dij contains the shortest-path weight from vertex i to vertex j. //Important! • A |V| × |V| matrix Π = (πij), where πij is NIL if either i = j or there is no path from i to j, and otherwise πij is the predecessor of j on some shortest path from i. // Not covered in class, but in Exercises!

  8. Methods • Application of single-source shortest-path algorithms • Direct methods to solve the problem: • Matrix multiplication • Floyd-Warshall algorithm • Johnson’s algorithm for sparse graphs • Transitive closure (Floyd-Warshall algorithm)

  9. Matrix multiplication (suppose there are no negative cycles) • A dynamic programming method: • Study the structure of an optimal solution • Solve the problem recursively • Compute the value of an optimal solution in a bottom-up manner • The operation of each loop is like a matrix multiplication.

  10. Matrix multiplication—structure of a shortest path • Suppose W = (wij) is the adjacency matrix, where wij = 0 if i = j; wij = the weight of edge (i, j) if i ≠ j and (i, j) ∈ E; and wij = ∞ if i ≠ j and (i, j) ∉ E. • Consider a shortest path P from i to j, and suppose that P has at most m edges. If i = j, P has weight 0 and no edges. If i ≠ j, we can decompose P into i ⇝ k → j, where the sub-path P' = i ⇝ k has at most m-1 edges and is itself a shortest path from i to k.

  11. Matrix multiplication—recursive solution • Let lij(m) be the minimum weight of any path from i to j that contains at most m edges. • lij(0) = 0 if i = j, and ∞ otherwise. • For m ≥ 1: lij(m) = min{lik(m-1) + wkj}, 1 ≤ k ≤ n (the term k = j covers keeping the old path, since wjj = 0). • The solution is lij(n-1), because with no negative cycles a shortest path has at most n-1 edges.

  12. Matrix Multiplication • Solve the problem stage by stage (dynamic programming) • L(1) = W • L(2) • … • L(n-1) • where L(m) contains the shortest-path weights over paths of length ≤ m.

  13. Matrix multiplication (pseudo-code)
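A minimal Python sketch of this step, assuming weight matrices are lists of lists with float('inf') for absent edges (the name extend_shortest_paths follows CLRS's EXTEND-SHORTEST-PATHS; it is illustrative, not from the slides). The triple loop mirrors matrix multiplication, with min playing the role of + and + the role of ×:

```python
INF = float('inf')

def extend_shortest_paths(L, W):
    """Given L (shortest-path weights using at most m-1 edges) and the
    edge-weight matrix W, return the weights using at most m edges."""
    n = len(W)
    L_new = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # relax: path i -> k (at most m-1 edges) plus edge k -> j
                L_new[i][j] = min(L_new[i][j], L[i][k] + W[k][j])
    return L_new
```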

  14. Matrix multiplication (pseudo-code)
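A sketch of the driver (SLOW-ALL-PAIRS-SHORTEST-PATHS in CLRS), reusing extend_shortest_paths from above: extend n-2 times to obtain L(n-1):

```python
def slow_all_pairs_shortest_paths(W):
    """Compute L(n-1) by repeated extension: n-2 passes of O(n^3)
    work each, O(n^4) overall."""
    n = len(W)
    L = W                    # L(1) = W
    for _ in range(2, n):    # compute L(2), ..., L(n-1)
        L = extend_shortest_paths(L, W)
    return L
```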

  15. Matrix multiplication (running time) • O(n4): n-2 extensions, each costing O(n3). Improving the running time: • No need to compute all the L(m) matrices for 1 ≤ m ≤ n-1. We are interested only in L(n-1), which is equal to L(m) for all integers m ≥ n-1, assuming that there are no negative cycles.

  16. Improving the running time Compute the sequence L(1) = W, L(2) = W2 = W·W, L(4) = W4 = W2·W2, L(8) = W8 = W4·W4, … (products taken in the min-plus sense above). We need only ⌈lg(n-1)⌉ matrix products • Time complexity: O(n3 lg n)

  17. Improving running time
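A sketch of the repeated-squaring driver (FASTER-ALL-PAIRS-SHORTEST-PATHS in CLRS), again reusing extend_shortest_paths; overshooting past n-1 is harmless because L(m) = L(n-1) for all m ≥ n-1 when there are no negative cycles:

```python
def faster_all_pairs_shortest_paths(W):
    """Repeated squaring: L(1), L(2), L(4), ... in ceil(lg(n-1))
    extensions, O(n^3 lg n) overall."""
    n = len(W)
    L, m = W, 1
    while m < n - 1:
        L = extend_shortest_paths(L, L)   # 'square': L(2m) from L(m)
        m *= 2
    return L
```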

  18. Floyd-Warshall algorithm (suppose there are no negative cycles)—structure of a shortest path

  19. Floyd-Warshall algorithm (idea) • dij(k): shortest-path weight from i to j with intermediate vertices (excluding i, j) drawn from the set {1, 2, …, k} • An intermediate vertex of a simple path p = <v1, v2, …, vl> is any vertex of p other than v1 or vl. • dij(0) = wij (no intermediate vertices at all) • How to compute dij(k) from D(r), for r < k?

  20. Floyd-Warshall algorithm—recursive solution • dij(0) = wij (no intermediate vertices at all) • dij(k) = min(dij(k-1), dik(k-1) + dkj(k-1)) if k ≥ 1 • Result: D(n) = (dij(n)) (because all intermediate vertices are in the set {1, 2, …, n})

  21. Floyd-Warshall algorithm—compute shortest-path weights • Solve the problem stage by stage: • D(0) • D(1) • D(2) • … • D(n) • where D(k) contains the shortest-path weights with all intermediate vertices from the set {1, 2, …, k}.

  22. Floyd-Warshall algorithm • k=1: d43(1) = -5; d42(1) = 5; d45(1) = -2 [Figure: the 5-vertex example graph]

  23. Floyd-Warshall algorithm • k=2: d43(2) = -5; d42(2) = 5; d45(2) = -2 [Figure: the 5-vertex example graph]

  24. Floyd-Warshall algorithm • k=3: d42(3) = -1; d43(3) = -5; d45(3) = -2 [Figure: the 5-vertex example graph]

  25. Floyd-Warshall algorithm (pseudo-code) Time complexity: O(n3) Space: O(n3) (if every D(k) is kept)
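A Python sketch under the same conventions as before, kept stage by stage so every D(k) is retained, matching the slide's O(n3) space (vertices are 0..n-1 here rather than 1..n):

```python
def floyd_warshall(W):
    """Stage-by-stage Floyd-Warshall: D[k] is the matrix D(k).
    O(n^3) time; O(n^3) space because all stages are retained."""
    n = len(W)
    D = [[row[:] for row in W]]            # D(0) = W
    for k in range(n):
        prev = D[k]
        D.append([[min(prev[i][j], prev[i][k] + prev[k][j])
                   for j in range(n)] for i in range(n)])
    return D[n]                            # final shortest-path weights
```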

  26. Floyd-Warshall algorithm (less space) Notice that we can write dij(k) directly over D(k-1): in stage k, the k-th row and k-th column do not change, since dik(k) = dik(k-1) and dkj(k) = dkj(k-1) (routing through k itself cannot improve a path that starts or ends at k, because dkk(k-1) = 0 when there are no negative cycles). So the entries dik and dkj that the update reads remain valid even after other entries of the matrix have been overwritten.

  27. Floyd-Warshall algorithm (less space)
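A sketch of the in-place variant, justified by the observation on the previous slide; only one n × n matrix is kept, so the extra space drops to O(n2):

```python
def floyd_warshall_in_place(W):
    """In-place Floyd-Warshall: D(k) overwrites D(k-1), which is safe
    because row k and column k are unchanged during stage k."""
    n = len(W)
    D = [row[:] for row in W]              # copy so W is not mutated
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```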

  28. Constructing a shortest path • For k = 0: πij(0) = NIL if i = j or wij = ∞, and πij(0) = i otherwise. • For k ≥ 1: πij(k) = πij(k-1) if dij(k-1) ≤ dik(k-1) + dkj(k-1) (the path avoiding k is no worse), and πij(k) = πkj(k-1) otherwise (the new path goes through k, so j's predecessor is its predecessor on the shortest path from k to j).
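A sketch combining the in-place distance update with this predecessor recurrence, assuming None plays the role of NIL (the name pi is illustrative):

```python
INF = float('inf')

def floyd_warshall_with_predecessors(W):
    """Return (D, pi): D[i][j] is the shortest-path weight and
    pi[i][j] the predecessor of j on a shortest i-to-j path."""
    n = len(W)
    D = [row[:] for row in W]
    pi = [[None if i == j or W[i][j] == INF else i   # the k = 0 case
           for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:      # the k >= 1 case
                    D[i][j] = D[i][k] + D[k][j]
                    pi[i][j] = pi[k][j]              # route through k
    return D, pi
```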

  29. Print all-pairs shortest paths
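The printing routine follows the usual recursion on the predecessor matrix (PRINT-ALL-PAIRS-SHORTEST-PATH in CLRS); a sketch:

```python
def print_all_pairs_shortest_path(pi, i, j):
    """Print the vertices of a shortest path from i to j in order,
    by first printing the path from i to j's predecessor."""
    if i == j:
        print(i)
    elif pi[i][j] is None:
        print(f"no path from {i} to {j} exists")
    else:
        print_all_pairs_shortest_path(pi, i, pi[i][j])
        print(j)
```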

  30. Example: [Figure: the 5-vertex directed example graph used in the following slides]

  31. [Figure: matrices D(0), Π(0), D(1), Π(1) for the example graph]

  32. [Figure: matrices D(2), Π(2), D(3), Π(3)]

  33. [Figure: matrices D(4), Π(4), D(5), Π(5)]

  34. [Figure: the shortest path from vertex 1 to vertex 2 in the example graph]

  35. Transitive closure (the problem) • Given a graph G = (V, E), compute the transitive closure G* = (V, E*), where E* = {(i, j): there is a path from i to j in G} [Figure: example]

  36. Transitive closure • One way: set wij = 1 for every edge and run the Floyd-Warshall algorithm; if dij(n) < ∞, then (i, j) ∈ E* • Running time O(n3)

  37. Transitive closure • Another way: substitute "+" and "min" by AND and OR in the Floyd-Warshall algorithm • Running time O(n3)

  38. Transitive closure (pseudo-code)
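A sketch of the Boolean variant: min becomes OR and + becomes AND, operating on a reachability matrix (True on the diagonal and wherever an edge exists):

```python
def transitive_closure(adj):
    """Boolean Floyd-Warshall. adj[i][j] is True iff i == j or
    (i, j) is an edge; returns t with t[i][j] True iff a path
    from i to j exists. O(n^3) time."""
    n = len(adj)
    t = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t
```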

  39. Transitive closure
