
Algorithms


  1. Algorithms Dynamic Programming

  2. Dynamic Programming • General approach – combine solutions to subproblems to get the solution to a problem • Unlike Divide and Conquer in that • subproblems are dependent rather than independent • bottom-up approach • save values of subproblems in a table and reuse them

  3. Usual Approach • Start with the smallest, simplest subproblems • Combine “appropriate” subproblem solutions to get a solution to the bigger problem • If you have a D&C algorithm that does a lot of duplicate computation, you can often find a DP solution that is more efficient

  4. A Simple Example • Calculating binomial coefficients • n choose k is the number of different combinations of n things taken k at a time • These are also the coefficients of the binomial expansion (x+y)^n

  5. Two Definitions • Closed form: C(n,k) = n! / (k! (n-k)!) • Recursive: C(n,k) = C(n-1, k-1) + C(n-1, k), with C(n,0) = C(n,n) = 1

  6. Algorithm for Recursive Definition
     function C(n,k)
         if k = 0 or k = n then return 1
         else return C(n-1, k-1) + C(n-1, k)
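The recursive definition translates directly into code; this Python sketch (mine, not part of the original deck) mirrors the pseudocode above:

```python
def C(n, k):
    # Base cases: exactly one way to choose none or all of the items.
    if k == 0 or k == n:
        return 1
    # Pascal's rule: item n is either in the combination or not.
    return C(n - 1, k - 1) + C(n - 1, k)
```

Calling C(5, 3) repeats work — C(3, 2) is evaluated twice, as the call tree on the next slide shows.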

  7. Recursion tree for C(5,3):
     C(5,3)
     ├─ C(4,2)
     │   ├─ C(3,1)
     │   └─ C(3,2)
     └─ C(4,3)
         ├─ C(3,2)
         └─ C(3,3)

  8. Complexity of D &amp; C Algorithm • Time complexity is exponential — the number of recursive calls is Θ(C(n,k)) • But we did a lot of duplicate computation • Dynamic programming approach • Store solutions to sub-problems in a table • Bottom-up approach

  9. k n k 0 1 2 3 0 1 2 3 4 5 n

  10. Analysis of DP Version • Time complexity O(n k) • Storage requirements • full table O(n k) • better solution: keep only the current row of the table, O(k)
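A bottom-up sketch (my implementation, not from the slides) using the reduced storage: one row of the table, updated in place right-to-left:

```python
def binomial(n, k):
    # row[j] holds C(i, j) for the current i; storing a single row
    # gives O(k) space instead of the full O(n*k) table.
    row = [1] + [0] * k
    for i in range(1, n + 1):
        # Walk right-to-left so row[j-1] still holds the previous row's value.
        for j in range(min(i, k), 0, -1):
            row[j] += row[j - 1]
    return row[k]
```
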

  11. Typical Problem • Dynamic Programming is often used for optimization problems that satisfy the principle of optimality • Principle of optimality • In an optimal sequence of decisions or choices, each subsequence must be optimal

  12. Optimization Problems • Problem has many possible solutions • Each solution has a value • Goal is to find the solution with the optimal value

  13. Steps in DP Solutions to Optimization Problems
      1. Characterize the structure of an optimal solution
      2. Recursively define the value of an optimal solution
      3. Compute the value of an optimal solution in a bottom-up manner
      4. Construct an optimal solution from computed information

  14. Matrix Chain Multiplication A1 A2 A3 … An • Matrix multiplication is associative • All ways that a sequence can be parenthesized give the same answer • But some are much less expensive to compute

  15. Matrix Chain Multiplication Problem • Given a chain &lt;A1, A2, . . ., An&gt; of n matrices, where for i = 1, 2, . . ., n matrix Ai has dimension p[i-1] × p[i], fully parenthesize the product A1 A2 … An in a way that minimizes the number of scalar multiplications

  16. Example
      Matrix  Dimensions
      A       13 × 5
      B       5 × 89
      C       89 × 3
      D       3 × 34
      M = A B C D is 13 × 34

  17. Parenthesization      Scalar multiplications
      1  ((A B) C) D        10,582
      2  (A B) (C D)        54,201
      3  (A (B C)) D        2,856
      4  A ((B C) D)        4,055
      5  A (B (C D))        26,418
      Cost of option 1: 13 × 5 × 89 = 5,785 to get (A B), a 13 × 89 result; 13 × 89 × 3 = 3,471 to get ((A B) C), a 13 × 3 result; 13 × 3 × 34 = 1,326 to get (((A B) C) D), 13 × 34. Total 10,582.

  18. Number of Parenthesizations
      T(1) = 1, and for n ≥ 2 the outermost split can fall after any of the first n-1 matrices:
      T(n) = Σ (k = 1 to n-1) T(k) · T(n-k)
      These are the Catalan numbers; T(n) grows as Ω(4^n / n^(3/2))

  19. T(n) ways to parenthesize
      n     1  2  3  4  5   10     15
      T(n)  1  1  2  5  14  4,862  2,674,440
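The row above satisfies T(1) = 1 and T(n) = Σ T(k)·T(n-k) for k = 1..n-1 (choose where the outermost split falls). A quick sketch (mine, not from the deck) that reproduces the table:

```python
def T(n):
    # t[m] = number of ways to fully parenthesize a chain of m matrices.
    t = [0] * (n + 1)
    t[1] = 1
    for m in range(2, n + 1):
        # A split after position k leaves sub-chains of length k and m - k.
        t[m] = sum(t[k] * t[m - k] for k in range(1, m))
    return t[n]
```
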

  20. Steps in DP Solutions to Optimization Problems
      1. Characterize the structure of an optimal solution
      2. Recursively define the value of an optimal solution
      3. Compute the value of an optimal solution in a bottom-up manner
      4. Construct an optimal solution from computed information

  21. Step 1 • Show that the principle of optimality applies • An optimal solution to the problem contains within it optimal solutions to sub-problems • Let Ai..j be the optimal way to parenthesize Ai Ai+1 . . . Aj • Suppose the optimal solution has its first split at position k: A1..k Ak+1..n • Each of these sub-problems must be optimally parenthesized

  22. Step 2 • Define value of the optimal solution recursively in terms of optimal solutions to sub-problems. • Consider the problem Ai..j • Let m[i,j] be the minimum number of scalar multiplications needed to compute matrix Ai..j • The cost of the cheapest way to compute A1..n is m[1,n]

  23. Step 2 continued • Define m[i,j]:
      If i = j, the chain has one matrix and m[i,i] = 0 for all i ≤ n
      If i &lt; j, assume the optimal split is at position k, where i ≤ k &lt; j:
          m[i,j] = m[i,k] + m[k+1,j] + p[i-1]·p[k]·p[j]
      where m[i,k] and m[k+1,j] are the costs to compute Ai..k and Ak+1..j

  24. Step 2 continued • Problem: we don’t know what the value for k is, but there are only j-i possibilities

  25. Step 3 • Compute optimal cost by using a bottom-up approach • Example: 4-matrix chain
      Matrix  Dimensions
      A1      100 × 1
      A2      1 × 100
      A3      100 × 1
      A4      1 × 100
      p0 p1 p2 p3 p4 = 100, 1, 100, 1, 100

  26. [Tables m and s, indexed i = 1..4 (rows) and j = 1..4 (columns), filled bottom-up for p = &lt;100, 1, 100, 1, 100&gt;]

  27. MATRIX-CHAIN-ORDER(p)
       1  n ← length[p] - 1
       2  for i ← 1 to n
       3      do m[i,i] ← 0
       4  for l ← 2 to n
       5      do for i ← 1 to n - l + 1
       6          do j ← i + l - 1
       7             m[i,j] ← ∞
       8             for k ← i to j - 1
       9                 do q ← m[i,k] + m[k+1,j] + p[i-1]·p[k]·p[j]
      10                    if q &lt; m[i,j]
      11                        then m[i,j] ← q
      12                             s[i,j] ← k
      13  return m and s
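A direct Python transcription of the pseudocode (a sketch of mine; indices stay 1-based to match the slides, so row/column 0 of the tables is unused):

```python
import math

def matrix_chain_order(p):
    # p lists the dimensions: matrix A_i is p[i-1] x p[i], so a chain
    # of n matrices needs n + 1 entries in p.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):              # l = length of the sub-chain
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):          # try every split position
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s
```

For the 13×5, 5×89, 89×3, 3×34 example from slide 16, m[1][4] comes out as 2,856 — the cost of parenthesization (A (B C)) D on slide 17.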

  28. Time and Space Complexity • Time complexity • Triply nested loops • O(n³) • Space complexity • n × n two dimensional table • O(n²)

  29. Step 4: Constructing the Optimal Solution • Matrix-Chain-Order determines the optimal number of scalar multiplications • Does not directly compute the product • Step 4 of the dynamic programming paradigm is to construct an optimal solution from computed information. • The s matrix contains the optimal split for every level

  30. MATRIX-CHAIN-MULTIPLY(A,s,i,j)
      1  if j &gt; i
      2      then X ← MATRIX-CHAIN-MULTIPLY(A,s,i,s[i,j])
      3           Y ← MATRIX-CHAIN-MULTIPLY(A,s,s[i,j]+1,j)
      4           return MATRIX-MULTIPLY(X,Y)
      5      else return Ai
      Initial call: MATRIX-CHAIN-MULTIPLY(A,s,1,n) where A = &lt;A1, A2, . . ., An&gt;
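The same recursion over the s table can print the optimal parenthesization instead of multiplying matrices. This sketch (function names are mine) bundles the order computation so it runs standalone:

```python
import math

def matrix_chain_order(p):
    # Builds the cost table m and split table s as on slide 27 (1-based).
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def optimal_parens(s, i, j):
    # Mirrors MATRIX-CHAIN-MULTIPLY, but builds a string rather than
    # calling a matrix-multiply routine.
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return f"({optimal_parens(s, i, k)} {optimal_parens(s, k + 1, j)})"
```
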

  31. Algorithms Dynamic Programming Continued

  32. Comments on Dynamic Programming • Useful when the problem can be reduced to several “overlapping” sub-problems • All possible sub-problems are computed • Computation is done by maintaining a large matrix • Usually has large space requirement • Running times are usually at least quadratic

  33. Memoization • Variation on dynamic programming • Idea–memoize the natural but inefficient recursive algorithm • As each sub-problem is solved, store values in table • Initialize table with values that indicate if the value has been computed

  34. MEMOIZED-MATRIX-CHAIN(p)
      1  n ← length[p] - 1
      2  for i ← 1 to n
      3      do for j ← i to n
      4          do m[i,j] ← ∞
      5  return LOOKUP-CHAIN(p,1,n)

  35. LOOKUP-CHAIN(p,i,j)
      1  if m[i,j] &lt; ∞
      2      then return m[i,j]
      3  if i = j
      4      then m[i,j] ← 0
      5      else for k ← i to j - 1
      6          do q ← LOOKUP-CHAIN(p,i,k) + LOOKUP-CHAIN(p,k+1,j) + p[i-1]·p[k]·p[j]
      7             if q &lt; m[i,j]
      8                 then m[i,j] ← q
      9  return m[i,j]
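A Python sketch of the memoized version (mine, not from the deck), with ∞ marking "not yet computed" exactly as in the pseudocode:

```python
import math

def memoized_matrix_chain(p):
    n = len(p) - 1
    # Initialize every entry to infinity: "value not yet computed".
    m = [[math.inf] * (n + 1) for _ in range(n + 1)]

    def lookup_chain(i, j):
        if m[i][j] < math.inf:       # already solved: reuse the stored value
            return m[i][j]
        if i == j:
            m[i][j] = 0
        else:
            for k in range(i, j):
                q = (lookup_chain(i, k) + lookup_chain(k + 1, j)
                     + p[i - 1] * p[k] * p[j])
                if q < m[i][j]:
                    m[i][j] = q
        return m[i][j]

    return lookup_chain(1, n)
```

It fills the same table as the bottom-up algorithm, just in the order the recursion demands it.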

  36. Space and Time Requirements of Memoized Version • Running time O(n³) • Storage Θ(n²)

  37. Longest Common Subsequence • Definition 1: Subsequence Given a sequence X = &lt;x1, x2, . . . , xm&gt;, another sequence Z = &lt;z1, z2, . . . , zk&gt; is a subsequence of X if there exists a strictly increasing sequence &lt;i1, i2, . . . , ik&gt; of indices of X such that for all j = 1, 2, . . . , k we have x_{i_j} = zj

  38. Example X = <A,B,D,F,M,Q> Z = <B, F, M> Z is a subsequence of X with index sequence <2,4,5>

  39. More Definitions • Definition 2: Common subsequence • Given 2 sequences X and Y, we say Z is a common subsequence of X and Y if Z is a subsequence of X and a subsequence of Y • Definition 3: Longest common subsequence problem • Given X = < x1, x2, . . . , xm> and Y = < y1, y2, . . . , yn> find a maximum length common subsequence of X and Y

  40. Example X = &lt;A,B,C,B,D,A,B&gt; Y = &lt;B,D,C,A,B,A&gt; An LCS is &lt;B,C,B,A&gt;, of length 4

  41. Brute Force Algorithm 1 for every subsequence of X 2 Is it a subsequence of Y? 3 If yes, is it longer than the longest common subsequence found so far? Complexity?
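To make the cost concrete: X of length m has 2^m subsequences, and each membership test against Y is a linear scan, so the brute force takes O(n·2^m) time. A sketch (function names are mine, not from the slides):

```python
from itertools import combinations

def is_subsequence(z, y):
    # Greedy scan: each symbol of z must appear in y, in order.
    # "c in it" advances the iterator, so order is enforced.
    it = iter(y)
    return all(c in it for c in z)

def lcs_brute_force(x, y):
    # Try subsequences of x from longest to shortest; first hit wins.
    for length in range(len(x), -1, -1):
        for idx in combinations(range(len(x)), length):
            z = "".join(x[i] for i in idx)
            if is_subsequence(z, y):
                return z
    return ""
```

Even for the six-element X on the next slides this already checks up to 2^6 = 64 candidates; the DP version replaces this with an m × n table.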

  42. Yet More Definitions • Definition 4: Prefix of a sequence If X = &lt;x1, x2, . . . , xm&gt;, the ith prefix of X for i = 0, 1, . . . , m is Xi = &lt;x1, x2, . . . , xi&gt; • Example • if X = &lt;A,B,C,D,E,F,H,I,J,L&gt; then X4 = &lt;A,B,C,D&gt; and X0 = &lt;&gt;

  43. Optimal Substructure • Theorem 16.1 Optimal Substructure of LCS Let X = &lt;x1, x2, . . . , xm&gt; and Y = &lt;y1, y2, . . . , yn&gt; be sequences and let Z = &lt;z1, z2, . . . , zk&gt; be any LCS of X and Y
      1. if xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1
      2. if xm ≠ yn and zk ≠ xm, then Z is an LCS of Xm-1 and Y
      3. if xm ≠ yn and zk ≠ yn, then Z is an LCS of Xm and Yn-1

  44. Sub-problem structure • Case 1 • if xm = yn then there is one sub-problem to solve: find an LCS of Xm-1 and Yn-1 and append xm • Case 2 • if xm ≠ yn then there are two sub-problems • find an LCS of Xm and Yn-1 • find an LCS of Xm-1 and Y • pick the longer of the two

  45. Cost of Optimal Solution • Cost is the length of the common subsequence • We want to pick the longest one • Let c[i,j] be the length of an LCS of the prefixes Xi and Yj • Base case: if either prefix is empty (i = 0 or j = 0), then c[i,j] = 0 because the LCS is empty

  46. The Recurrence
      c[i,j] = 0                          if i = 0 or j = 0
      c[i,j] = c[i-1,j-1] + 1             if i, j &gt; 0 and xi = yj
      c[i,j] = max(c[i,j-1], c[i-1,j])    if i, j &gt; 0 and xi ≠ yj

  47. Dynamic Programming Solution • Table c[0..m, 0..n] stores the length of an LCS of Xi and Yj c[i,j] • Table b[1..m, 1..n] stores pointers to optimal sub-problem solutions

  48. Y C B R F T S Q 0 1 2 3 4 5 6 7 0 A 1 B 2 D 3 F 4 M 5 Q 6 X Matrix C

  49. Y j C B R F T S Q 1 2 3 4 5 6 7 i A 1 B 2 D 3 F 4 M 5 Q 6 X Matrix B

  50. LCS-LENGTH(X,Y)
       1  m ← length[X]
       2  n ← length[Y]
       3  for i ← 1 to m
       4      do c[i,0] ← 0
       5  for j ← 0 to n
       6      do c[0,j] ← 0
       7  for i ← 1 to m
       8      do for j ← 1 to n
       9          do if xi = yj
      10              then c[i,j] ← c[i-1,j-1] + 1
      11                   b[i,j] ← “↖”
      12              else if c[i-1,j] ≥ c[i,j-1]
      13                  then c[i,j] ← c[i-1,j]
      14                       b[i,j] ← “↑”
      15                  else c[i,j] ← c[i,j-1]
      16                       b[i,j] ← “←”
      17  return c and b
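A Python sketch of LCS-LENGTH plus the arrow-following reconstruction (my code; `print_lcs` returns the string rather than printing, and indices into x and y are shifted to 0-based):

```python
def lcs_length(x, y):
    # c[i][j] = length of an LCS of the prefixes x[:i] and y[:j];
    # b[i][j] records which sub-problem supplied the value.
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "↖"
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "↑"
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "←"
    return c, b

def print_lcs(b, x, i, j):
    # Follow the "↖" arrows back from (i, j) to recover one LCS.
    if i == 0 or j == 0:
        return ""
    if b[i][j] == "↖":
        return print_lcs(b, x, i - 1, j - 1) + x[i - 1]
    if b[i][j] == "↑":
        return print_lcs(b, x, i - 1, j)
    return print_lcs(b, x, i, j - 1)
```

On X = ABCBDAB and Y = BDCABA this yields an LCS of length 4 (BCBA), matching the example on slide 40.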
