
Lecture 16. Shortest Path Algorithms


  1. Lecture 16. Shortest Path Algorithms
  • The single-source shortest path problem is the following: given a source vertex s and a sink vertex v, we'd like to find the shortest path from s to v. Here a shortest path means a sequence of directed edges from s to v with the smallest total weight.
  • There are some subtleties here. Should we allow negative edges? Of course, there are no negative distances; nevertheless, there are actually some cases where negative edges make logical sense. But then there may not be a shortest path, because if there is a cycle with negative weight, we could simply go around that cycle as many times as we want and reduce the cost of the path as much as we like. To avoid this, we might want to detect negative cycles. This can be done in the algorithm itself.
  • Non-negative cycles aren't helpful, either. Suppose our shortest path contains a cycle of non-negative weight. Then by cutting it out we get a path with the same weight or less, so we might as well cut it out.

  2. Weighted Graphs
  • In a weighted graph, each edge has an associated numerical value, called the weight of the edge
  • Edge weights may represent distances, costs, etc.
  • Example: in a flight route graph, the weight of an edge represents the distance in miles between the endpoint airports
  [Figure: flight route graph on the airports PVD, ORD, SFO, LGA, HNL, LAX, DFW and MIA, with edge weights giving the mileage between endpoints]
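  The slides draw the graph as a picture; in code, a weighted directed graph is commonly stored as an adjacency list mapping each vertex to its outgoing (neighbor, weight) pairs. Below is a minimal Python sketch of that representation (not from the slides; the vertex names, weights, and the path_weight helper are purely illustrative). The later sketches reuse this same format.

  import math

  # A weighted directed graph stored as an adjacency list:
  # each vertex maps to a list of (neighbor, edge weight) pairs.
  # Vertex names and weights below are made up for illustration.
  example_graph = {
      'A': [('B', 8), ('C', 2)],
      'B': [('D', 1)],
      'C': [('B', 3), ('D', 9)],
      'D': [],
  }

  def path_weight(graph, path):
      # Length of a path = sum of the weights of its edges (math.inf if an edge is missing).
      total = 0
      for u, z in zip(path, path[1:]):
          w = next((wt for nbr, wt in graph[u] if nbr == z), None)
          if w is None:
              return math.inf
          total += w
      return total

  # path_weight(example_graph, ['A', 'C', 'B', 'D'])  ->  2 + 3 + 1 = 6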

  3. Shortest Path Problem
  • Given a weighted graph and two vertices u and v, we want to find a path of minimum total weight between u and v
  • The length of a path is the sum of the weights of its edges
  • Example: the shortest path between Providence and Honolulu
  • Applications: Internet packet routing, flight reservations, driving directions
  [Figure: the same flight route graph, highlighting the shortest path from PVD to HNL]

  4. Shortest Path Properties
  • Property 1: A subpath of a shortest path is itself a shortest path
  • Property 2: There is a tree of shortest paths from a start vertex to all the other vertices
  • Example: the tree of shortest paths from Providence
  [Figure: the flight route graph with the tree of shortest paths rooted at PVD highlighted]

  5. Dijkstra’s Algorithm
  • The distance of a vertex v from a vertex s is the length of a shortest path between s and v
  • Dijkstra’s algorithm computes the distances of all the vertices from a given start vertex s
  • Assumptions: the graph is connected, the edges are undirected, and the edge weights are nonnegative
  • We grow a “cloud” of vertices, beginning with s and eventually covering all the vertices
  • We store with each vertex v a label d(v) representing the distance of v from s in the subgraph consisting of the cloud and its adjacent vertices
  • At each step:
  • We add to the cloud the vertex u outside the cloud with the smallest distance label d(u)
  • We update the labels of the vertices adjacent to u

  6. Edge Relaxation
  • Consider an edge e = (u,z) such that u is the vertex most recently added to the cloud and z is not in the cloud
  • The relaxation of edge e updates distance d(z) as follows: d(z) ← min{ d(z), d(u) + weight(e) }
  [Figure: before relaxation, d(u) = 50, d(z) = 75 and weight(e) = 10; after relaxing e, d(z) = 60]
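  In code, relaxation is a single comparison against the current label. A small sketch (not from the slides; the dist dictionary and function name are illustrative):

  def relax(dist, u, z, weight_e):
      # d(z) <- min{ d(z), d(u) + weight(e) }; returns True if the label improved.
      if dist[u] + weight_e < dist[z]:
          dist[z] = dist[u] + weight_e
          return True
      return False

  # Matching the slide's figure: dist = {'u': 50, 'z': 75}; after relax(dist, 'u', 'z', 10), dist['z'] == 60.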

  7. Example
  [Figure: successive snapshots of Dijkstra’s algorithm on a six-vertex graph A–F, showing the d(v) labels as vertices are added to the cloud in order of distance]

  8. Example (cont.)
  [Figure: the remaining snapshots of the same run, ending with the final shortest-path distances from the start vertex]

  9. Dijkstra’s Algorithm
  Algorithm DijkstraDistances(G, s)
    Q ← new heap-based priority queue
    for all v ∈ G.vertices()
      if v = s
        setDistance(v, 0)
      else
        setDistance(v, ∞)
      l ← Q.insert(getDistance(v), v)
      setLocator(v, l)
    while ¬Q.isEmpty()
      u ← Q.removeMin()
      for all e ∈ G.incidentEdges(u)   { relax edge e }
        z ← G.opposite(u, e)
        r ← getDistance(u) + weight(e)
        if r < getDistance(z)
          setDistance(z, r)
          Q.replaceKey(getLocator(z), r)
  • A priority queue stores the vertices outside the cloud
  • Key: distance; Element: vertex
  • Locator-based methods:
  • insert(k,e) returns a locator
  • replaceKey(l,k) changes the key (distance) of an item
  • We store two labels with each vertex: the distance d(v) and its locator in the priority queue
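  For reference, here is a runnable Python sketch of the same algorithm (not the slides' code). Python's heapq module has no replaceKey/decrease-key operation, so this version pushes a fresh entry whenever a label improves and skips stale entries when they are popped; the graph uses the adjacency-list format introduced earlier (for an undirected edge, list it in both endpoints' lists), and all names are illustrative.

  import heapq, math

  def dijkstra_distances(graph, s):
      # graph: {vertex: [(neighbor, non-negative weight), ...]}
      dist = {v: math.inf for v in graph}
      dist[s] = 0
      cloud = set()                        # vertices whose distance is final
      pq = [(0, s)]                        # (distance label, vertex)
      while pq:
          d, u = heapq.heappop(pq)
          if u in cloud:
              continue                     # stale entry left behind by a later improvement
          cloud.add(u)
          for z, w in graph[u]:            # relax each edge (u, z)
              r = d + w
              if r < dist[z]:
                  dist[z] = r
                  heapq.heappush(pq, (r, z))   # stand-in for replaceKey
      return dist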

  10. Analysis
  • Graph operations: method incidentEdges is called once for each vertex
  • Label operations: we set/get the distance and locator labels of a vertex z O(deg(z)) times; setting/getting a label takes O(1) time
  • Priority queue operations: each vertex is inserted once into and removed once from the priority queue, where each insertion or removal takes O(log n) time
  • The key of a vertex z in the priority queue is modified at most deg(z) times, where each key change takes O(log n) time
  • Dijkstra’s algorithm therefore runs in O((n + m) log n) time, provided the graph is represented by the adjacency list structure
  • Recall that Σv deg(v) = 2m
  • The running time can also be expressed as O(m log n), since the graph is connected (so m ≥ n − 1)

  11. Extension
  Algorithm DijkstraShortestPathsTree(G, s)
    …
    for all v ∈ G.vertices()
      …
      setParent(v, ∅)
      …
    for all e ∈ G.incidentEdges(u)   { relax edge e }
      z ← G.opposite(u, e)
      r ← getDistance(u) + weight(e)
      if r < getDistance(z)
        setDistance(z, r)
        setParent(z, e)
        Q.replaceKey(getLocator(z), r)
  • Using the template method pattern, we can extend Dijkstra’s algorithm to return a tree of shortest paths from the start vertex to all other vertices
  • We store with each vertex a third label: its parent edge in the shortest path tree
  • In the edge relaxation step, we update the parent label
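  A sketch of the same extension on top of the Python version above (again illustrative, not the slides' code): record a parent pointer whenever a relaxation succeeds, then walk the parents backwards to recover an actual shortest path.

  import heapq, math

  def dijkstra_tree(graph, s):
      # Returns (dist, parent), where parent[v] is v's predecessor in the shortest-path tree.
      dist = {v: math.inf for v in graph}
      parent = {v: None for v in graph}
      dist[s] = 0
      cloud, pq = set(), [(0, s)]
      while pq:
          d, u = heapq.heappop(pq)
          if u in cloud:
              continue
          cloud.add(u)
          for z, w in graph[u]:
              if d + w < dist[z]:
                  dist[z] = d + w
                  parent[z] = u            # update the parent label during relaxation
                  heapq.heappush(pq, (dist[z], z))
      return dist, parent

  def shortest_path(parent, s, v):
      # Walk parent pointers from v back to s, then reverse; None if v is unreachable.
      path = []
      while v is not None:
          path.append(v)
          v = parent[v]
      path.reverse()
      return path if path and path[0] == s else None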

  12. Why Dijkstra’s Algorithm Works
  • Dijkstra’s algorithm is based on the greedy method: it adds vertices in order of increasing distance
  • Suppose it didn’t find all shortest distances. Let F be the first wrong vertex the algorithm processed
  • When the previous vertex D on the true shortest path to F was processed, its distance was correct
  • But the edge (D,F) was relaxed at that time!
  • Thus, as long as d(F) ≥ d(D), F’s distance cannot be wrong; and d(F) ≥ d(D) does hold, because edge weights are nonnegative. That is, there is no wrong vertex
  [Figure: the six-vertex example graph A–F with its final distance labels]

  13. Why It Doesn’t Work for Negative-Weight Edges
  • Dijkstra’s algorithm is based on the greedy method: it adds vertices in order of increasing distance
  • If a vertex with a negative incident edge were to be added late to the cloud, it could mess up distances for vertices already in the cloud
  [Figure: a six-vertex graph A–F containing an edge of weight -8; C’s true distance is 1, but C is already in the cloud with d(C) = 5!]

  14. Bellman-Ford Algorithm
  • Works even with negative-weight edges
  • Must assume directed edges (otherwise a single negative undirected edge, traversed back and forth, would already be a negative-weight cycle)
  • Iteration i finds all shortest paths that use at most i edges
  • Running time: O(nm)
  • Can be extended to detect a negative-weight cycle if it exists
  Algorithm BellmanFord(G, s)
    for all v ∈ G.vertices()
      if v = s
        setDistance(v, 0)
      else
        setDistance(v, ∞)
    for i ← 1 to n - 1 do
      (*) for each directed edge e = (u, z)   { relax edge e }
        r ← getDistance(u) + weight(e)
        if r < getDistance(z)
          setDistance(z, r)
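  A runnable Python sketch of Bellman-Ford (illustrative, not the slides' code), including the negative-cycle check mentioned above: after n-1 relaxation passes, any edge that can still be relaxed witnesses a negative-weight cycle reachable from s.

  import math

  def bellman_ford(graph, s):
      # graph: {vertex: [(neighbor, weight), ...]} with directed edges; weights may be negative.
      dist = {v: math.inf for v in graph}
      dist[s] = 0
      n = len(graph)
      for _ in range(n - 1):                  # n-1 passes over every directed edge
          for u in graph:
              for z, w in graph[u]:
                  if dist[u] + w < dist[z]:   # relax edge (u, z)
                      dist[z] = dist[u] + w
      # One extra pass: any further improvement means a negative-weight cycle is reachable from s.
      for u in graph:
          for z, w in graph[u]:
              if dist[u] + w < dist[z]:
                  raise ValueError("graph contains a negative-weight cycle reachable from s")
      return dist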

  15. Correctness
  • Lemma. After i repetitions of the (*) "for" loop in Bellman-Ford, if there is a path from s to u with at most i edges, then d[u] is at most the length of a shortest path from s to u with at most i edges.
  • Proof. By induction on i. The base case i = 0 is trivial. For the induction step, consider a shortest path from s to u with at most i edges, and let v be the last vertex before u on this path. Then the part of the path from s to v is a shortest path from s to v with at most i-1 edges. By the inductive hypothesis, d[v] after i-1 executions of the (*) "for" loop is at most the length of this path. Therefore d[v] + w(v,u) is at most the length of the shortest path from s to u via v. In the i-th iteration, d[u] is compared with d[v] + w(v,u) and set equal to it if d[v] + w(v,u) is smaller. Therefore, after i iterations of the (*) "for" loop, d[u] is at most the length of a shortest path from s to u that uses at most i edges. So the lemma holds.

  16. Bellman-Ford Example
  Nodes are labeled with their d(v) values
  [Figure: successive snapshots of Bellman-Ford on a six-vertex graph whose edges include negative weights (-2); the d(v) labels, initially ∞ except 0 at the source, improve with each pass until the final distances are reached]

  17. DAG-based Algorithm
  Algorithm DagDistances(G, s)
    for all v ∈ G.vertices()
      if v = s
        setDistance(v, 0)
      else
        setDistance(v, ∞)
    perform a topological sort of the vertices
    for u ← 1 to n do   { in topological order }
      for each e ∈ G.outEdges(u)   { relax edge e }
        z ← G.opposite(u, e)
        r ← getDistance(u) + weight(e)
        if r < getDistance(z)
          setDistance(z, r)
  • Works even with negative-weight edges
  • Uses topological order
  • Doesn’t use any fancy data structures
  • Is much faster than Dijkstra’s algorithm
  • Running time: O(n + m)
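  A Python sketch of the DAG algorithm (illustrative names; assumes the input really is a directed acyclic graph). The topological order is computed here with Kahn's in-degree method, and then each vertex's outgoing edges are relaxed once in that order.

  import math
  from collections import deque

  def dag_distances(graph, s):
      # graph: {vertex: [(neighbor, weight), ...]}, assumed acyclic; weights may be negative.
      # Topological sort via Kahn's algorithm (repeatedly remove a vertex of in-degree 0).
      indeg = {v: 0 for v in graph}
      for u in graph:
          for z, _ in graph[u]:
              indeg[z] += 1
      queue = deque(v for v in graph if indeg[v] == 0)
      order = []
      while queue:
          u = queue.popleft()
          order.append(u)
          for z, _ in graph[u]:
              indeg[z] -= 1
              if indeg[z] == 0:
                  queue.append(z)
      # Relax each vertex's outgoing edges once, in topological order.
      dist = {v: math.inf for v in graph}
      dist[s] = 0
      for u in order:
          for z, w in graph[u]:
              if dist[u] + w < dist[z]:
                  dist[z] = dist[u] + w
      return dist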

  18. DAG Example
  Nodes are labeled with their d(v) values
  [Figure: snapshots of the DAG-based algorithm on a seven-vertex DAG with some negative edge weights, relaxing the outgoing edges of each vertex in topological order]

  19. All-Pairs Shortest Paths: Dynamic Programming
  • Find the distance between every pair of vertices in a weighted directed graph G
  • We can make n calls to Dijkstra’s algorithm (if there are no negative edges), which takes O(nm log n) time
  • Likewise, n calls to Bellman-Ford would take O(n^2 m) time
  • We can achieve O(n^3) time using dynamic programming (the Floyd-Warshall algorithm)
  Algorithm AllPair(G)   { assumes vertices 1,…,n }
    for all vertex pairs (i,j)
      if i = j
        D_0[i,i] ← 0
      else if (i,j) is an edge in G
        D_0[i,j] ← weight of edge (i,j)
      else
        D_0[i,j] ← +∞
    for k ← 1 to n do
      for i ← 1 to n do
        for j ← 1 to n do
          D_k[i,j] ← min{ D_{k-1}[i,j], D_{k-1}[i,k] + D_{k-1}[k,j] }
    return D_n
  • Here D_k[i,j] is the weight of a shortest path from i to j that uses only intermediate vertices numbered 1,…,k: it is either the best path avoiding k (D_{k-1}[i,j]) or the best path through k (D_{k-1}[i,k] + D_{k-1}[k,j]), each of which uses only intermediate vertices numbered 1,…,k-1
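  A compact Python sketch of Floyd-Warshall over the same adjacency-list format (illustrative, not the slides' code). It uses a single distance table keyed by vertex pairs instead of the numbered matrices D_0,…,D_n and updates it in place, which is safe when there are no negative-weight cycles because row k and column k do not change during round k.

  import math

  def floyd_warshall(graph):
      # graph: {vertex: [(neighbor, weight), ...]}; assumes no negative-weight cycles.
      # Returns D with D[(i, j)] = shortest-path distance from i to j.
      vertices = list(graph)
      D = {(i, j): (0 if i == j else math.inf) for i in vertices for j in vertices}
      for i in vertices:
          for j, w in graph[i]:
              D[(i, j)] = min(D[(i, j)], w)   # keep the lightest parallel edge
      for k in vertices:                      # allow k as an intermediate vertex
          for i in vertices:
              for j in vertices:
                  if D[(i, k)] + D[(k, j)] < D[(i, j)]:
                      D[(i, j)] = D[(i, k)] + D[(k, j)]
      return D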
