
Dijkstra's Algorithm & Minimum Spanning Trees

Learn about Dijkstra's algorithm for finding the shortest path in graphs and Kruskal's algorithm for minimum spanning trees. Explore their applications and understand the key concepts behind them.



Presentation Transcript


  1. CSE 326: Data Structures. Lecture #21: Graphs II.5. Alon Halevy, Spring Quarter 2001

  2. Outline • Dijkstra’s algorithm (finally!!) • Algorithms for minimum spanning trees • Searching huge graphs

  3. Single Source, Shortest Path Given a graph G = (V, E) and a vertex s ∈ V, find the shortest path from s to every vertex in V. Many variations: • weighted vs. unweighted • cyclic vs. acyclic • positive weights only vs. negative weights allowed • multiple weight types to optimize

  4. Dijkstra’s Algorithm for Single Source Shortest Path • Classic algorithm for solving shortest path in weighted graphs without negative weights • A greedy algorithm (irrevocably makes decisions without considering future consequences) • Intuition: • shortest path from source vertex to itself is 0 • cost of going to adjacent nodes is at most edge weights • cheapest of these must be shortest path to that node • update paths for new node and continue picking cheapest path

  5. Dijkstra’s Pseudocode (actually, our pseudocode for Dijkstra’s algorithm) Initialize the cost of each node to ∞ Initialize the cost of the source to 0 While there are unknown nodes left in the graph Select the unknown node with the lowest cost: n Mark n as known For each node a which is adjacent to n a’s cost = min(a’s old cost, n’s cost + cost of (n, a))

  6. Dijkstra’s Algorithm in Action [Figure: example weighted graph on vertices A through H with edge weights from 1 to 10]

  7. The Cloud Proof Suppose the path to G is the next shortest path leaving the known cloud, and consider a supposedly better path to G that leaves the cloud through some vertex P. Since the path to G is the next shortest, the path to P must be at least as long. So how can the path through P to G be shorter? [Figure: THE KNOWN CLOUD containing the Source, with the selected path to G and an alternative path through P]

  8. Inside the Cloud (Proof) Everything inside the cloud has the correct shortest path Proof is by induction on the # of nodes in the cloud: • initial cloud is just the source with shortest path 0 • inductive step: once we prove the shortest path to G is correct, we add it to the cloud Negative weights blow this proof away!

  9. Revenge of the Dijkstra Pseudocode Initialize the cost of each node to ∞ s.cost = 0; heap.insert(s); while (!heap.empty) n = heap.deleteMin() for (each node a which is adjacent to n) if (n.cost + edge[n,a].cost < a.cost) then a.cost = n.cost + edge[n,a].cost; a.path = n; if (a is in the heap) then heap.decreaseKey(a) else heap.insert(a)
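The heap-based pseudocode above can be sketched in Python. This is an illustrative sketch, not the course's code: Python's heapq has no decreaseKey, so this variant pushes duplicate entries and skips stale ones on pop (the adjacency-list format is an assumption).

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source.
    graph: {node: [(neighbor, edge_weight), ...]}."""
    cost = {v: float("inf") for v in graph}   # initialize each node to infinity
    cost[source] = 0                          # source costs 0
    heap = [(0, source)]
    known = set()
    while heap:
        c, n = heapq.heappop(heap)
        if n in known:        # stale entry: stands in for decreaseKey
            continue
        known.add(n)          # mark n as known
        for a, w in graph[n]:
            if c + w < cost[a]:               # found a cheaper path to a
                cost[a] = c + w
                heapq.heappush(heap, (cost[a], a))
    return cost
```

The lazy-deletion trick trades a slightly larger heap for not needing a decreaseKey operation; the asymptotic bound stays O(|E| log |V|).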

  10. Data Structures for Dijkstra’s Algorithm |V| times: select the unknown node with the lowest cost: findMin/deleteMin, O(log |V|). |E| times: a’s cost = min(a’s old cost, …): decreaseKey, O(log |V|); find by name, O(1). runtime: O(|V| log |V| + |E| log |V|) = O(|E| log |V|) for a connected graph

  11. Single Source & Goal • Suppose we only care about shortest path from source to a particular vertex g • Run Dijkstra to completion • Stop early? When? • When g is added to the priority queue • When g is removed from the priority queue • When the priority queue is empty

  12. Key Idea • Once g is removed from the priority queue, its shortest path is known • Otherwise, even if it has been seen, a shorter path may yet be found • Easy generalization: find the shortest paths from source to any of a set of goal nodes g1, g2, …

  13. Spanning Trees Spanning tree: a subset of the edges from a connected graph that… • touches all vertices in the graph (spans the graph) • forms a tree (is connected and contains no cycles) Minimum spanning tree: the spanning tree with the least total edge cost. [Figure: a small weighted graph with edge weights 1, 2, 4, 5, 7, 9]

  14. Applications of Minimum Spanning Trees • Communication networks • VLSI design • Transportation systems • Good approximation to some NP-hard problems (more later)

  15. Kruskal’s Algorithm for Minimum Spanning Trees A greedy algorithm: Initialize all vertices to unconnected While there are still unmarked edges Pick a lowest cost edge e = (u, v) and mark it If u and v are not already connected, add e to the minimum spanning tree and connect u and v

  16.–20. Kruskal’s Algorithm in Action (1/5 through 5/5) [Figures: the example weighted graph on vertices A through H; each slide marks the next lowest-cost edge, skipping any edge whose endpoints are already connected, until the minimum spanning tree is complete]

  21. Why Greediness Works The algorithm produces a spanning tree. Why? Proof by contradiction that Kruskal’s finds the minimum: Assume another spanning tree has lower cost than Kruskal’s Pick an edge e1 = (u, v) in that tree that’s not in Kruskal’s Kruskal’s alg. connects u’s and v’s sets with another edge e2 But, e2 must have at most the same cost as e1! So, swap e2 for e1 (at worst keeping the cost the same) Repeat until the tree is identical to Kruskal’s: contradiction! QED: Kruskal’s algorithm finds a minimum spanning tree.

  22. Data Structures for Kruskal’s Algorithm Once: initialize heap of edges… buildHeap, O(|E|). |E| times: pick the lowest cost edge… findMin/deleteMin, O(log |E|). |E| times: if u and v are not already connected… find; …connect u and v… union, amortized O(α(|E|,|V|)) with union-find (α is the inverse Ackermann function). runtime: O(|E| + |E| log |E| + |E| α(|E|,|V|)) = O(|E| log |E|)

  23. Prim’s Algorithm • Can also find minimum spanning trees using a variation of Dijkstra’s algorithm: Pick an initial node Until the graph is connected: Choose the edge (u,v) of minimum cost among edges where u is in the tree but v is not Add (u,v) to the tree • Same “greedy” proof, same asymptotic complexity
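Prim's variation can be sketched similarly (again an illustrative Python sketch, with an assumed adjacency-list format; the heap holds candidate edges leaving the tree):

```python
import heapq

def prim(graph, start):
    """Minimum spanning tree edges via Prim's algorithm.
    graph: {node: [(neighbor, weight), ...]}, assumed connected."""
    in_tree = {start}
    heap = [(w, start, v) for v, w in graph[start]]  # edges leaving the tree
    heapq.heapify(heap)
    mst = []
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)     # cheapest edge out of the tree
        if v in in_tree:
            continue                      # both endpoints already in tree
        in_tree.add(v)
        mst.append((u, v, w))
        for x, wx in graph[v]:            # new candidate edges from v
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return mst
```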

  24. Does Greedy Always Work? • Consider the following problem: • Given a graph G = (V,E) and a designated subset of vertices S, find a minimum cost tree that includes all of S • Exactly the same as a minimum spanning tree, except that it doesn’t have to include ALL the vertices, only the specified subset. • Does Kruskal or Prim work?

  25. Nope! • Greedy can fail to be optimal • Because different solutions may contain different “non-designated” vertices, the proof that you can convert one to the other doesn’t go through • This Minimum Steiner Tree problem has no known O(n^k) algorithm for any fixed k • NP-complete problems strike again! • Finding a spanning tree and then pruning it is a pretty good approximation

  26. Huge Graphs • Consider some really huge graphs… • All cities and towns in the World Atlas • All stars in the Galaxy • All ways 10 blocks can be stacked Huh???

  27. Implicitly Generated Graphs • A huge graph may be implicitly specified by rules for generating it on-the-fly • Blocks world: • vertex = relative positions of all blocks • edge = robot arm could stack one block stack(green,blue) stack(blue,red) stack(green,red)

  28. Blocks World • Source = initial state of the blocks • Goal = desired state of the blocks • Path from source to goal = sequence of actions (program) for the robot arm! • n blocks → roughly n^n states • 10 blocks → 10 billion states

  29. Problem: Branching Factor • Cannot search such huge graphs exhaustively. Suppose we know that the goal is only d steps away. • Dijkstra’s algorithm is basically breadth-first search (modulo arc weights) • If the out-degree of each node is 10, it potentially visits 10^d vertices • 10-step plan = 10 billion vertices!

  30. An Easier Case • Suppose you live in Manhattan; what do you do? [Figure: Manhattan street grid, 2nd to 10th Avenues by 50th to 52nd Streets, with source S and goal G several blocks apart]

  31. Best-First Search • The Manhattan distance (Δx + Δy) is an estimate of the distance to the goal • a heuristic value • Best-First Search • Order nodes in priority to minimize estimated distance to the goal • Compare: Dijkstra • Order nodes in priority to minimize distance from the start
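A minimal sketch of greedy best-first search, ordering the queue purely by the heuristic. The `neighbors` function and heuristic `h` are caller-supplied assumptions, not from the slides:

```python
import heapq

def best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expand the node with the
    smallest heuristic estimate h(n). Returns a path to goal,
    which is NOT guaranteed to be shortest."""
    heap = [(h(start), start, [start])]
    seen = {start}
    while heap:
        _, n, path = heapq.heappop(heap)   # node with lowest estimate
        if n == goal:
            return path
        for m in neighbors(n):
            if m not in seen:
                seen.add(m)
                heapq.heappush(heap, (h(m), m, path + [m]))
    return None                            # goal unreachable
```

On an open grid with the Manhattan heuristic this marches straight toward the goal; the next slides show how it can be led astray and return a suboptimal path.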

  32. Best First in Action • Suppose you live in Manhattan; what do you do? [Figure: the same Manhattan grid; best-first search heads straight toward the goal G]

  33. Problem 1: Led Astray • The heuristic can lead the search astray; it will eventually expand vertices to get back on the right track [Figure: the Manhattan grid with the search drawn down a misleading route before recovering]

  34. Problem 2: Optimality • With Best-First Search, are you guaranteed a shortest path is found when • goal is first seen? • when goal is removed from priority queue?

  35. Sub-Optimal Solution • No! The goal has estimated distance 0 by definition: it will be removed from the priority queue as soon as it is seen, even if a shorter path exists! [Figure: grid where best-first returns a 5-block route from S to G although a shorter path exists]

  36. Synergy? • Dijkstra / Breadth First guaranteed to find optimal solution • Best First often visits far fewer vertices, but may not provide optimal solution • Can we get the best of both?

  37. A* • Order vertices in the priority queue to minimize (distance from start) + (estimated distance to goal): f(n) = g(n) + h(n), where f(n) = priority of a node, g(n) = true distance from start, h(n) = heuristic distance to goal
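The f(n) = g(n) + h(n) ordering can be sketched as follows. An illustrative Python sketch, not a definitive implementation: `neighbors` yielding (successor, edge cost) pairs is an assumed interface, and stale queue entries are skipped rather than re-keyed:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: priority = g(n) + h(n). With an admissible h
    (h never overestimates), the goal's g-cost is optimal the
    first time the goal is dequeued. Returns that cost."""
    g = {start: 0}                        # true distance from start
    heap = [(h(start), start)]            # entries are (f, node)
    while heap:
        f, n = heapq.heappop(heap)
        if n == goal:
            return g[n]                   # optimal if h is a lower bound
        if f > g[n] + h(n):               # stale entry: a cheaper g was found
            continue
        for m, w in neighbors(n):
            if g[n] + w < g.get(m, float("inf")):
                g[m] = g[n] + w
                heapq.heappush(heap, (g[m] + h(m), m))
    return None
```

With h ≡ 0 this degenerates to Dijkstra's algorithm; a better lower bound prunes more of the frontier.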

  38. Optimality • Suppose the estimated distance h is ≤ the true distance to the goal • (the heuristic is a lower bound) • Then: when the goal is removed from the priority queue, we are guaranteed to have found a shortest path!

  39. Problem 2 Revisited [Figure: the grid again; the suboptimal route to G has f = 5 + 2 = 6, while the frontier node on the optimal route has f = 1 + 4 = 5, so it is expanded first. Dijkstra would have visited many more vertices!]

  40. Revised Cloud Proof • Suppose we have found a path of cost c to G which is not optimal • priority(G) = f(G) = g(G) + h(G) = c + 0 = c • Let N be the last vertex on an optimal path P to G which has been added to the queue but not yet dequeued • There must be such an N, otherwise the optimal path would already have been found • priority(N) = f(N) = g(N) + h(N) ≤ g(N) + actual cost from N to G = cost of path P < c • So N will be dequeued before G is dequeued • Repeat the argument to show the entire optimal path will be expanded before G is dequeued [Figure: S, N, and G with the cost-c path]

  41. A Little History • A* invented by Nils Nilsson &amp; colleagues in 1968 • or maybe some guy in Operations Research? • Cornerstone of artificial intelligence • still a hot research topic! • iterative deepening A*, automatically generating heuristic functions, … • Method of choice for searching large (even infinite) graphs when a good heuristic function can be found

  42. What About Those Blocks? • “Distance to goal” is not always physical distance • Blocks world: • distance = number of stacks to perform • heuristic lower bound = number of blocks out of place # out of place = 2, true distance to goal = 3

  43. Other Examples • Simplifying Integrals • vertex = formula • goal = closed form formula without integrals • arcs = mathematical transformations • heuristic = number of integrals remaining in formula

  44. DNA Sequencing • Problem: given chopped up DNA, reassemble • Vertex = set of pieces • Arc = stick two pieces together • Goal = only one piece left • Heuristic = number of pieces remaining - 1

  45. Solving Simultaneous Equations • Input: set of equations • Vertex = assignment of values to some of the variables • Edge = Assign a value to one more variable • Goal = Assignment that simultaneously satisfies all the equations • Heuristic = Number of equations not yet satisfied

  46. What ISN’T A*? Essentially, nothing.
