
Greedy Algorithms



  1. Greedy Algorithms CLRS CH. 16, 23, & 24 Z. Guo

  2. Overview
  • Greedy algorithms solve optimization problems.
  • Problems must have optimal substructure: an optimal solution must be made of subproblems that are solved optimally.
  • Problems must have the greedy-choice property: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice.
  • When we have a choice to make, we make the one that looks best right now.
  • To justify this, we demonstrate that a locally optimal choice can be part of a globally optimal solution.

  3. Activity-selection Problem
  • Input: Set S of n activities, a1, a2, …, an.
  • si = start time of activity i.
  • fi = finish time of activity i.
  • Output: Subset A ⊆ S with a maximum number of compatible activities.
  • Two activities are compatible if their intervals don’t overlap.
  • Example (figure omitted): activities in each line are compatible.

  4. Optimal Substructure
  • Let’s assume that activities are sorted by finish time: f1 ≤ f2 ≤ … ≤ fn.
  • Suppose an optimal solution includes activity ak. This generates two subproblems:
  • Selecting from a1, …, ak−1: activities compatible with one another that finish before ak starts (compatible with ak).
  • Selecting from ak+1, …, an: activities compatible with one another that start after ak finishes.
  • The solutions to the two subproblems must be optimal. Prove this using the cut-and-paste approach.
  • Thus, we could use dynamic programming…

  5. Recursive Solution
  • Let Sij = the subset of activities in S that start after ai finishes and finish before aj starts.
  • Subproblem: selecting a maximum number of mutually compatible activities from Sij.
  • Let c[i, j] = size of a maximum-size subset of mutually compatible activities in Sij.
  • Recursive solution: c[i, j] = 0 if Sij = ∅, and c[i, j] = max { c[i, k] + c[k, j] + 1 : ak ∈ Sij } otherwise.
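To make the recurrence concrete, here is a small memoized Python sketch (not part of the original slides; the function name max_compatible and the (start, finish) tuple representation are our own choices). Sentinel activities bound the problem so that c(0, n+1) asks about all of S:

    from functools import lru_cache

    def max_compatible(activities):
        # Sort by finish time and add sentinels: a_0 finishes before everything,
        # a_{n+1} starts after everything, so S_{0,n+1} is the whole input.
        acts = sorted(activities, key=lambda a: a[1])
        n = len(acts)
        acts = [(float("-inf"), float("-inf"))] + acts + [(float("inf"), float("inf"))]

        @lru_cache(maxsize=None)
        def c(i, j):
            # c(i, j) = max number of mutually compatible activities in S_ij:
            # those that start after a_i finishes and finish before a_j starts.
            best = 0
            for k in range(i + 1, j):
                if acts[k][0] >= acts[i][1] and acts[k][1] <= acts[j][0]:
                    best = max(best, c(i, k) + c(k, j) + 1)
            return best

        return c(0, n + 1)

For example, max_compatible([(1, 4), (3, 5), (0, 6), (5, 7), (8, 11)]) returns 3, matching the greedy answer {(1, 4), (5, 7), (8, 11)} computed by the greedy selector below.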

  6. Greedy-choice Property
  • The problem also exhibits the greedy-choice property: there is an optimal solution to subproblem Sij that includes the activity with the earliest finish time in Sij.
  • Proof: take any optimal solution; if its first activity is not the one with the earliest finish time, replace that first activity with the earliest-finishing one. The result is still compatible and has the same size.
  • Hence, there is an optimal solution to S that includes a1.
  • So, make this greedy choice without first evaluating the cost of subproblems.
  • Then solve only the subproblem that results from making this greedy choice.
  • Combine the greedy choice and the subproblem solution.

  7. Recursive Algorithm
  Recursive-Activity-Selector(s, f, i, j)
    m := i + 1
    while m < j and s[m] < f[i]
      do m := m + 1
    if m < j
      then return {a_m} ∪ Recursive-Activity-Selector(s, f, m, j)
      else return ∅
  Initial call: Recursive-Activity-Selector(s, f, 0, n+1)
  Complexity: Θ(n)
  This is just to make the algorithm look complicated; the iterative algorithm is even easier. (See CLRS Ch. 16.)
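The iterative version mentioned above, as a minimal Python sketch (ours, not CLRS’s text; it assumes the inputs are already sorted by finish time, as the slides do):

    def greedy_activity_selector(s, f):
        # s[i], f[i] = start and finish times, pre-sorted so f is nondecreasing.
        if not s:
            return []
        selected = [0]          # the earliest-finishing activity is always a safe choice
        last = 0                # index of the most recently selected activity
        for m in range(1, len(s)):
            if s[m] >= f[last]: # a_m is compatible with everything selected so far
                selected.append(m)
                last = m
        return selected         # indices of a maximum set of compatible activities

A single pass, so Θ(n) after the O(n lg n) sort.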

  8. Typical Steps
  • Cast the optimization problem as one in which we make a choice and are left with one subproblem to solve.
  • Prove that there exists an optimal solution that makes the greedy choice, so the greedy choice is always safe.
  • Show that greedy choice + optimal solution to the subproblem ⇒ optimal solution to the problem.
  • Make the greedy choice and solve top-down.
  • May have to preprocess the input to put it into greedy order. Example: sorting activities by finish time.

  9. Minimum Spanning Trees CLRS CH. 23

  10. Minimum Spanning Trees
  • Given: a connected, undirected, weighted graph G.
  • Find: a minimum-weight spanning tree T, i.e., an acyclic subset of the edges E that connects all vertices of G.
  • Example: [figure: a weighted graph on vertices a–f, with edge weights 7, 5, 1, −3, 3, 11, 0, 2, and its minimum spanning tree]

  11. Generic Algorithm
  “Grows” a set A. Invariant: A is a subset of some MST.
  An edge is “safe” if adding it to A maintains the invariant.
  A := ∅;
  while A is not a complete tree do
    find a safe edge (u, v);
    A := A ∪ {(u, v)}
  od

  12. Definitions
  • A cut partitions the vertices into two disjoint sets, S and V − S.
  • An edge crosses the cut if one endpoint is in S and the other is in V − S.
  • A cut respects an edge set if no edge in the set crosses the cut (e.g., the set {(a, b), (b, c)} in the figure).
  • A light edge crossing a cut is a crossing edge of minimum weight (there could be more than one).
  [figure: the example graph with a cut that the edge set {(a, b), (b, c)} respects]

  13. Theorem 23.1
  Theorem 23.1: Let (S, V − S) be any cut that respects A, and let (u, v) be a light edge crossing (S, V − S). Then (u, v) is safe for A.
  Proof: Let T be an MST that includes A.
  Case: (u, v) ∈ T. We’re done.
  Case: (u, v) ∉ T. The path in T from u to v must cross the cut somewhere, say at edge (x, y); since the cut respects A, (x, y) ∉ A.
  Let T′ = T − {(x, y)} ∪ {(u, v)}.
  Because (u, v) is light for the cut, w(u, v) ≤ w(x, y).
  Thus w(T′) = w(T) − w(x, y) + w(u, v) ≤ w(T).
  Hence T′ is also an MST, and it includes A ∪ {(u, v)}. So (u, v) is safe for A.
  [figure: T with the cut, showing the crossing edges (x, y) and (u, v)]

  14. Corollary In general, A will consist of several connected components. Corollary: If (u, v) is a light edge connecting one connected component in (V, A) to another connected component in (V, A), then (u, v) is safe for A.

  15. Kruskal’s Algorithm • Starts with each vertex in its own component. • Repeatedly merges two components into one by choosing a light edge that connects them (i.e., a light edge crossing the cut between them). • Scans the set of edges in monotonically increasing order by weight. • Uses a disjoint-set data structure to determine whether an edge connects vertices in different components.
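A compact Python sketch of Kruskal’s algorithm along these lines (the function and helper names are ours; the disjoint-set structure here uses plain path halving rather than the union-by-rank version CLRS analyzes, which is simpler but asymptotically a bit weaker):

    def kruskal(num_vertices, edges):
        # edges: list of (weight, u, v) tuples, vertices numbered 0..num_vertices-1.
        parent = list(range(num_vertices))

        def find(x):
            # Find the component representative, halving the path as we go.
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        mst = []
        for w, u, v in sorted(edges):     # scan edges in increasing order of weight
            ru, rv = find(u), find(v)
            if ru != rv:                  # light edge between two components: safe
                parent[ru] = rv           # merge the components
                mst.append((u, v, w))
        return mst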

  16. Prim’s Algorithm
  • Builds one tree, so A is always a tree.
  • Starts from an arbitrary “root” r.
  • At each step, adds a light edge crossing the cut (VA, V − VA) to A.
  • VA = the set of vertices that A is incident on.

  17. Prim’s Algorithm
  • Builds one tree, so A is always a tree.
  • Starts from an arbitrary “root” r.
  • At each step, adds a light edge crossing the cut (VA, V − VA) to A.
  • VA = the set of vertices that A is incident on.
  • Uses a priority queue Q to find a light edge quickly.
  • Each object in Q is a vertex in V − VA.
  • Key of v is the minimum weight of any edge (u, v) with u ∈ VA.
  • Then the vertex returned by Extract-Min is a v such that there exists u ∈ VA and (u, v) is a light edge crossing (VA, V − VA).
  • Key of v is ∞ if v is not adjacent to any vertex in VA.

  18. Prim’s Algorithm
  Q := V[G];
  for each u ∈ Q do key[u] := ∞ od;
  key[r] := 0; π[r] := NIL;
  while Q ≠ ∅ do
    u := Extract-Min(Q);
    for each v ∈ Adj[u] do
      if v ∈ Q and w(u, v) < key[v]
        then π[v] := u;
             key[v] := w(u, v)   ⇐ decrease-key operation
      fi
    od
  od
  Note: A = {(v, π[v]) : v ∈ V − {r} − Q}.
  Complexity:
  • Using binary heaps: O(E lg V). Initialization: O(V). Building the initial queue: O(V). V Extract-Mins: O(V lg V). E Decrease-Keys: O(E lg V).
  • Using Fibonacci heaps: O(E + V lg V). (See book.)
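The same algorithm as a Python sketch (names ours). Python’s heapq module has no decrease-key, so this version uses the common lazy-deletion workaround: push a new entry whenever a key drops, and skip stale entries at extraction:

    import heapq

    def prim(adj, r=0):
        # adj[u] = list of (v, weight) pairs for an undirected graph on 0..n-1.
        n = len(adj)
        in_tree = [False] * n
        key = [float("inf")] * n
        pi = [None] * n
        key[r] = 0
        pq = [(0, r)]                            # entries are (key, vertex)
        mst = []
        while pq:
            k, u = heapq.heappop(pq)             # Extract-Min
            if in_tree[u]:
                continue                         # stale entry: key decreased since push
            in_tree[u] = True
            if pi[u] is not None:
                mst.append((pi[u], u, k))        # edge (pi[u], u) joins the tree
            for v, w in adj[u]:
                if not in_tree[v] and w < key[v]:
                    key[v] = w                   # "decrease-key": push a fresh entry
                    pi[v] = u
                    heapq.heappush(pq, (w, v))
        return mst

Lazy deletion keeps up to E entries in the heap, giving O(E lg E) = O(E lg V) overall, the same bound as the binary-heap analysis above.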

  19. Example of Prim’s Algorithm
  Keys: a/0, b/∞, c/∞, d/∞, e/∞, f/∞. Q = ⟨a, b, c, d, e, f⟩; no vertex is in the tree yet.
  [figure: the example graph, each vertex labeled v/key]

  20. Example of Prim’s Algorithm
  Extract-Min returns a; its neighbors get keys b/5 and d/11. Q = ⟨b/5, d/11, c/∞, e/∞, f/∞⟩.

  21. Example of Prim’s Algorithm
  Extract-Min returns b; keys become c/7 and e/3. Q = ⟨e/3, c/7, d/11, f/∞⟩.

  22. Example of Prim’s Algorithm
  Extract-Min returns e; keys become c/1, d/0, f/2. Q = ⟨d/0, c/1, f/2⟩.

  23. Example of Prim’s Algorithm
  Extract-Min returns d; no keys change. Q = ⟨c/1, f/2⟩.

  24. Example of Prim’s Algorithm
  Extract-Min returns c; the key of f drops to f/−3. Q = ⟨f/−3⟩.

  25. Example of Prim’s Algorithm
  Extract-Min returns f. Q = ∅, so the algorithm terminates.

  26. Example of Prim’s Algorithm
  The resulting MST: edges (a, b)/5, (b, e)/3, (e, d)/0, (e, c)/1, (c, f)/−3, with total weight 6.

  27. Chapter 24: Single-Source Shortest Paths • Given: A single source vertex in a weighted, directed graph. • Want to compute a shortest path for each possible destination. • Similar to BFS. • We will assume either • no negative-weight edges, or • no reachable negative-weight cycles. • Algorithm will compute a shortest-path tree. • Similar to BFS tree.

  28. Outline • General Lemmas and Theorems. • CLRS now does this last. We’ll still do it first. • Bellman-Ford algorithm. • DAG algorithm. • Dijkstra’s algorithm. • We will skip Section 24.4.

  29. General Results (Relaxation)
  Lemma 24.1: Let p = ⟨v1, v2, …, vk⟩ be a shortest path from v1 to vk. Then pij = ⟨vi, vi+1, …, vj⟩ is a shortest path from vi to vj, where 1 ≤ i ≤ j ≤ k.
  So we have the optimal-substructure property. The Bellman-Ford algorithm uses dynamic programming; Dijkstra’s algorithm uses the greedy approach.
  Let δ(u, v) = the weight of a shortest path from u to v.
  Corollary: Let p be a shortest path from s to v, where p = s ↝ u → v. Then δ(s, v) = δ(s, u) + w(u, v).
  Lemma 24.10: Let s ∈ V. For all edges (u, v) ∈ E, we have δ(s, v) ≤ δ(s, u) + w(u, v).

  30. Relaxation
  Algorithms keep track of d[v] and π[v], initialized as follows:
  Initialize(G, s)
    for each v ∈ V[G] do
      d[v] := ∞; π[v] := NIL
    od;
    d[s] := 0
  These values are changed when an edge (u, v) is relaxed:
  Relax(u, v, w)
    if d[v] > d[u] + w(u, v)
      then d[v] := d[u] + w(u, v); π[v] := u
    fi
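The same two routines in Python, with dictionaries for d and π (a sketch; None stands in for NIL):

    def initialize(vertices, s):
        # d[v] = shortest-path estimate, pi[v] = predecessor on the tentative path.
        d = {v: float("inf") for v in vertices}
        pi = {v: None for v in vertices}
        d[s] = 0
        return d, pi

    def relax(u, v, w, d, pi):
        # Tighten d[v] if going through edge (u, v) of weight w is an improvement.
        if d[v] > d[u] + w:
            d[v] = d[u] + w
            pi[v] = u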

  31. Properties of Relaxation
  Consider any algorithm in which d[v] and π[v] are first initialized by calling Initialize(G, s) [s is the source], and are changed only by calling Relax. We have:
  Lemma 24.11: (∀ v :: d[v] ≥ δ(s, v)) is an invariant. This implies d[v] doesn’t change once d[v] = δ(s, v).
  Proof:
  • Initialize(G, s) establishes the invariant. If a call to Relax(u, v, w) changes d[v], then it establishes:
  • d[v] = d[u] + w(u, v)
  •   ≥ δ(s, u) + w(u, v), since the invariant holds before the call,
  •   ≥ δ(s, v), by Lemma 24.10.
  Corollary 24.12: If there is no path from s to v, then d[v] = δ(s, v) = ∞ is an invariant.

  32. Bellman-Ford Algorithm
  Can have negative-weight edges. Will “detect” reachable negative-weight cycles.
  Initialize(G, s);
  for i := 1 to |V[G]| − 1 do
    for each (u, v) ∈ E[G] do
      Relax(u, v, w)
    od
  od;
  for each (u, v) ∈ E[G] do
    if d[v] > d[u] + w(u, v) then return false fi
  od;
  return true
  Time complexity is O(VE).
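A direct Python transcription (a sketch; edges is assumed to be a list of (u, v, w) triples, with initialization and relaxation inlined so the function is self-contained):

    def bellman_ford(vertices, edges, s):
        # Returns (d, pi, ok); ok is False if a reachable negative cycle exists.
        d = {v: float("inf") for v in vertices}
        pi = {v: None for v in vertices}
        d[s] = 0
        for _ in range(len(vertices) - 1):   # |V| - 1 full passes of relaxation
            for u, v, w in edges:
                if d[u] + w < d[v]:
                    d[v] = d[u] + w
                    pi[v] = u
        for u, v, w in edges:                # an edge that can still be relaxed
            if d[u] + w < d[v]:              # witnesses a reachable negative cycle
                return d, pi, False
        return d, pi, True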

  33. Example
  Source z. Initially d = ⟨z/0, u/∞, v/∞, x/∞, y/∞⟩.
  [figure: the example digraph on z, u, v, x, y with edge weights 5, −2, 6, −3, 8, 7, −4, 2, 7, 9]

  34. Example
  After pass 1: d = ⟨z/0, u/6, v/∞, x/7, y/∞⟩.

  35. Example
  After pass 2: d = ⟨z/0, u/6, v/4, x/7, y/2⟩.

  36. Example
  After pass 3: d = ⟨z/0, u/2, v/4, x/7, y/2⟩.

  37. Example
  After pass 4: d = ⟨z/0, u/2, v/4, x/7, y/−2⟩.

  38. Another Look
  Note: this is essentially dynamic programming.
  Let d(i, j) = cost of the shortest path from s to i that uses at most j hops.
  d(i, j) =
  • 0, if i = s and j = 0
  • ∞, if i ≠ s and j = 0
  • min({d(k, j−1) + w(k, i) : i ∈ Adj(k)} ∪ {d(i, j−1)}), if j > 0
  j \ i:  z   u   v   x   y
  0:      0   ∞   ∞   ∞   ∞
  1:      0   6   ∞   7   ∞
  2:      0   6   4   7   2
  3:      0   2   4   7   2
  4:      0   2   4   7   −2

  39. Dijkstra’s Algorithm
  Assumes no negative-weight edges. Maintains a set S of vertices whose shortest path from s has been determined. Repeatedly selects the u ∈ V − S with the minimum shortest-path estimate (greedy choice). Stores V − S in a priority queue Q.
  Initialize(G, s);
  S := ∅;
  Q := V[G];
  while Q ≠ ∅ do
    u := Extract-Min(Q);
    S := S ∪ {u};
    for each v ∈ Adj[u] do
      Relax(u, v, w)
    od
  od
  Relax(u, v, w)
    if d[v] > d[u] + w(u, v)
      then d[v] := d[u] + w(u, v); π[v] := u
    fi
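A Python sketch of Dijkstra’s algorithm (names ours; assumes nonnegative weights, and uses the same lazy-deletion heap trick as the Prim sketch earlier in place of an explicit V − S queue with decrease-key):

    import heapq

    def dijkstra(adj, s):
        # adj: dict mapping each vertex to a list of (v, w) pairs, all w >= 0.
        d = {u: float("inf") for u in adj}
        pi = {u: None for u in adj}
        d[s] = 0
        done = set()                        # the set S of finished vertices
        pq = [(0, s)]                       # entries are (estimate, vertex)
        while pq:
            du, u = heapq.heappop(pq)       # Extract-Min: the greedy choice
            if u in done:
                continue                    # stale entry
            done.add(u)
            for v, w in adj[u]:
                if du + w < d[v]:           # relax edge (u, v)
                    d[v] = du + w
                    pi[v] = u
                    heapq.heappush(pq, (d[v], v))
        return d, pi

With a binary heap this runs in O((V + E) lg V), matching the complexity slide at the end.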

  40. Example
  Initially d = ⟨s/0, u/∞, v/∞, x/∞, y/∞⟩; S = ∅.
  [figure: the example digraph on s, u, v, x, y, each vertex labeled v/d[v]]

  41. Example
  Extract s and relax its edges: d[u] = 10, d[x] = 5. S = {s}.

  42. Example
  Extract x: d[u] drops to 8, d[v] = 14, d[y] = 7. S = {s, x}.

  43. Example
  Extract y: d[v] drops to 13. S = {s, x, y}.

  44. Example
  Extract u: d[v] drops to 9. S = {s, x, y, u}.

  45. Example
  Extract v: no changes. Final d = ⟨s/0, u/8, v/9, x/5, y/7⟩.

  46. Correctness
  Theorem 24.6: Upon termination, d[u] = δ(s, u) for all u ∈ V (assuming non-negative weights).
  Proof: By Lemma 24.11, once d[u] = δ(s, u) holds, it continues to hold. We prove: for each u ∈ V, d[u] = δ(s, u) when u is inserted into S.
  Suppose not. Let u be the first vertex such that d[u] ≠ δ(s, u) when inserted into S.
  Note that d[s] = δ(s, s) = 0 when s is inserted, so u ≠ s. Hence S ≠ ∅ just before u is inserted (in fact, s ∈ S).

  47. Proof (Continued)
  Note that there exists a path from s to u, for otherwise d[u] = δ(s, u) = ∞ by Corollary 24.12. Hence there exists a shortest path from s to u.
  The shortest path looks like this: s ↝(p1) x → y ↝(p2) u, where x is the last vertex on the path inside S and y is the first vertex outside S.
  [figure: the cut around S, with subpaths p1 and p2]

  48. Proof (Continued)
  • Claim: d[y] = δ(s, y) when u is inserted into S.
  • We had d[x] = δ(s, x) when x was inserted into S, and edge (x, y) was relaxed at that time.
  • By Lemma 24.14, this implies the claim.
  • Now we have: d[y] = δ(s, y), by the claim,
  •   ≤ δ(s, u), by nonnegative edge weights,
  •   ≤ d[u], by Lemma 24.11.
  • Because u was added to S before y, d[u] ≤ d[y].
  • Thus d[y] = δ(s, y) = δ(s, u) = d[u]. Contradiction.

  49. Complexity • Running time is • O(V²) using a linear array for the priority queue. • O((V + E) lg V) using a binary heap. • O(V lg V + E) using a Fibonacci heap. • (See book.)
