
Dynamic Programming


Presentation Transcript


  1. Dynamic Programming 2012/11/20

  2. Dynamic Programming (DP) • Dynamic programming is typically applied to optimization problems. • Problems that can be solved by dynamic programming satisfy the principle of optimality.

  3. Principle of optimality • Suppose that in solving a problem, we have to make a sequence of decisions D1, D2, ..., Dn-1, Dn. • If this sequence of decisions D1, D2, ..., Dn-1, Dn is optimal, then the last k decisions, 1 ≤ k ≤ n, must be optimal under the condition caused by the first n-k decisions.

  4. Dynamic method vs. greedy method • Comparison: in the greedy method, each decision is locally optimal. • The hope is that these locally optimal choices add up to a globally optimal solution, but this does not hold for every problem.

  5. The Greedy Method • E.g. Find a shortest path from v0 to v3. • The greedy method can solve this problem. • The shortest path: 1 + 2 + 4 = 7.

  6. The Greedy Method • E.g. find a shortest path from v0 to v3 in the multi-stage graph. • Greedy method: v0 → v1,2 → v2,1 → v3 = 23 • Optimal: v0 → v1,1 → v2,2 → v3 = 7 • The greedy method does not work for this problem. • This is because decisions at different stages influence one another.

  7. Multistage graph • A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k. • In addition, if <u, v> is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k. • The sets V1 and Vk are such that |V1| = |Vk| = 1. • The multistage graph problem is to find a minimum-cost path from s in V1 to t in Vk. • Each set Vi defines a stage in the graph.

  8. Greedy Method vs. Multistage graph • E.g. • The greedy method cannot be applied to this case: S → A → D → T, 1 + 4 + 18 = 23. • The shortest path is: S → C → F → T, 5 + 2 + 2 = 9.

  9. Dynamic Programming • Dynamic programming approach: • d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}

  10. Dynamic Programming • d(A, T) = min{4+d(D, T), 11+d(E, T)} = min{4+18, 11+13} = 22.

  11. Dynamic Programming • d(B, T) = min{9+d(D, T), 5+d(E, T), 16+d(F, T)} = min{9+18, 5+13, 16+2} = 18. • d(C, T) = min{ 2+d(F, T) } = 2+2 = 4 • d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)} = min{1+22, 2+18, 5+4} = 9.
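
A minimal Python sketch makes the stage-by-stage evaluation above explicit. The dictionary encoding of the graph and the name shortest_path are my own choices, not from the slides; the edge costs are the ones used in slides 9-11.

    # Edge costs of the example multistage graph (slides 9-11).
    graph = {
        'S': {'A': 1, 'B': 2, 'C': 5},
        'A': {'D': 4, 'E': 11},
        'B': {'D': 9, 'E': 5, 'F': 16},
        'C': {'F': 2},
        'D': {'T': 18},
        'E': {'T': 13},
        'F': {'T': 2},
    }

    def shortest_path(v, memo=None):
        """d(v, T): cost of the cheapest path from v to T, computed backward."""
        if memo is None:
            memo = {}
        if v == 'T':
            return 0
        if v not in memo:
            # d(v, T) = min over successors w of cost(v, w) + d(w, T)
            memo[v] = min(c + shortest_path(w, memo) for w, c in graph[v].items())
        return memo[v]

    print(shortest_path('S'))  # 9 = min{1+22, 2+18, 5+4}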

  12. Save computation • For example, we never calculate (as a whole) the length of the path S → B → D → T (namely, d(S,B) + d(B,D) + d(D,T)), because we have already found d(B,E) + d(E,T) < d(B,D) + d(D,T). • There are some more examples… • Compare with the brute-force method…

  13. The advantages of the dynamic programming approach • It avoids exhaustively searching the entire solution space (it eliminates some impossible solutions and saves computation). • It solves the problem stage by stage, systematically. • It stores intermediate solutions in a table (array) so that they can be retrieved in later stages of the computation.

  14. Comment • If a problem can be described by a multistage graph then it can be solved by dynamic programming.

  15. The longest common subsequence (LCS or LCSS) problem • A sequence of symbols A = b a c a d. • A subsequence of A is obtained by deleting 0 or more symbols (not necessarily consecutive) from A. • E.g., ad, ac, bac, acad, bacad, bcd. • Common subsequences of A = b a c a d and B = a c c b a d c b: ad, ac, bac, acad. • The longest common subsequence of A and B: a c a d.
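
The subsequence relation is easy to check mechanically. Below is a small Python helper (is_subsequence is a hypothetical name, not from the slides) that verifies the common subsequences listed above:

    def is_subsequence(s, t):
        """True iff s is obtained from t by deleting 0 or more symbols."""
        it = iter(t)
        return all(ch in it for ch in s)  # scans t left to right, consuming it

    A, B = "bacad", "accbadcb"
    for s in ["ad", "ac", "bac", "acad"]:
        assert is_subsequence(s, A) and is_subsequence(s, B)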

  16. DNA Matching • DNA = {A|C|G|T}* • S1 = ACCGGTCGAGTGCGGCCGAAGCCGGCCGAA • S2 = GTCGTTCGGAATGCCGTTGCTGTAAA • Are S1 and S2 similar DNA sequences? The question can be answered by finding their longest common subsequence.

  17. Networked virtual environments (NVEs) • Virtual worlds full of numerous virtual objects that simulate a variety of real-world scenes, • allowing multiple geographically distributed users to assume avatars and concurrently interact with each other via network connections. • E.g., MMOGs: World of Warcraft (WoW), Second Life (SL).

  18. Avatar Path Clustering • Because of similar personalities, interests, or habits, users may possess similar behavior patterns, which in turn lead to similar avatar paths within the virtual world. • We would like to group similar avatar paths as a cluster and find a representative path (RP) for them.

  19. How similar are two paths in Freebies island of Second Life?

  20. LCSS-DC: path-transfer sequence • SeqA: C60.C61.C62.C63.C55.C47.C39.C31.C32

  21. LCSS-DC: similar-path thresholds • SeqA: C60.C61.C62.C63.C55.C47.C39.C31.C32 • SeqB: C60.C61.C62.C54.C62.C63.C64 • LCSSAB: C60.C61.C62.C63

  22. Longest-common-subsequence problem: • We are given two sequences X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> and wish to find a maximum-length common subsequence of X and Y. • We define Xi = <x1, x2, ..., xi> and Yj = <y1, y2, ..., yj>.

  23. Brute Force Solution • Enumerate every subsequence of one sequence and check it against the other: m·2^n = O(2^n) time, or symmetrically n·2^m = O(2^m).
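
To see where the exponential bound comes from, here is a brute-force sketch in Python (lcs_brute_force is a hypothetical name): it enumerates all 2^n subsequences of Y, longest first, and tests each against X in O(m) time.

    from itertools import combinations

    def lcs_brute_force(x, y):
        """Try every subsequence of y, longest first; return the first
        that is also a subsequence of x -- about m * 2^n work."""
        n = len(y)
        for k in range(n, -1, -1):
            for idx in combinations(range(n), k):
                cand = ''.join(y[i] for i in idx)
                it = iter(x)
                if all(ch in it for ch in cand):  # O(m) subsequence test
                    return cand

    print(lcs_brute_force("bacad", "accbadcb"))  # acad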

  24. A recursive solution to subproblems • Define c[i, j] as the length of an LCS of Xi and Yj. Then
  c[i, j] = 0                               if i = 0 or j = 0
  c[i, j] = c[i-1, j-1] + 1                 if i, j > 0 and xi = yj
  c[i, j] = max{c[i, j-1], c[i-1, j]}       if i, j > 0 and xi ≠ yj

  25. Computing the length of an LCS
  LCS_LENGTH(X, Y)
  1 m ← length[X]
  2 n ← length[Y]
  3 for i ← 1 to m
  4   do c[i, 0] ← 0
  5 for j ← 1 to n
  6   do c[0, j] ← 0

  26. LCS_LENGTH (continued)
  7 for i ← 1 to m
  8   do for j ← 1 to n
  9     do if xi = yj
  10      then c[i, j] ← c[i-1, j-1] + 1
  11           b[i, j] ← "↖"
  12      else if c[i-1, j] ≥ c[i, j-1]
  13        then c[i, j] ← c[i-1, j]
  14             b[i, j] ← "↑"
  15        else c[i, j] ← c[i, j-1]
  16             b[i, j] ← "←"
  17 return c and b

  27. Complexity: O(mn), rather than the O(2^m) or O(2^n) of the brute-force method.

  28. PRINT_LCS
  PRINT_LCS(b, X, i, j)
  1 if i = 0 or j = 0
  2   then return
  3 if b[i, j] = "↖"
  4   then PRINT_LCS(b, X, i-1, j-1)
  5        print xi
  6 else if b[i, j] = "↑"
  7   then PRINT_LCS(b, X, i-1, j)
  8 else PRINT_LCS(b, X, i, j-1)
  Complexity: O(m+n)
  Call PRINT_LCS(b, X, length[X], length[Y]) to print the LCS.
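
The two procedures translate line for line into runnable Python. In this sketch, lcs_length and print_lcs mirror the pseudocode above; the 0-indexed strings are my own adjustment.

    def lcs_length(x, y):
        """Fill the length table c and the direction table b bottom-up."""
        m, n = len(x), len(y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        b = [[''] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if x[i - 1] == y[j - 1]:
                    c[i][j], b[i][j] = c[i - 1][j - 1] + 1, '↖'
                elif c[i - 1][j] >= c[i][j - 1]:
                    c[i][j], b[i][j] = c[i - 1][j], '↑'
                else:
                    c[i][j], b[i][j] = c[i][j - 1], '←'
        return c, b

    def print_lcs(b, x, i, j):
        """Follow the arrows back from (i, j) to recover one LCS."""
        if i == 0 or j == 0:
            return ''
        if b[i][j] == '↖':
            return print_lcs(b, x, i - 1, j - 1) + x[i - 1]
        if b[i][j] == '↑':
            return print_lcs(b, x, i - 1, j)
        return print_lcs(b, x, i, j - 1)

    x, y = "bacad", "accbadcb"
    c, b = lcs_length(x, y)
    print(c[len(x)][len(y)], print_lcs(b, x, len(x), len(y)))  # 4 acad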

  29. Matrix-chain multiplication • How to compute the product A1A2…An, where Ai is a matrix for every i?

  30. MATRIX_MULTIPLY
  MATRIX_MULTIPLY(A, B)
  1 if columns[A] ≠ rows[B]
  2   then error "incompatible dimensions"
  3   else for i ← 1 to rows[A]
  4     do for j ← 1 to columns[B]
  5       do C[i, j] ← 0
  6          for k ← 1 to columns[A]
  7            do C[i, j] ← C[i, j] + A[i, k]·B[k, j]
  8 return C

  31. Complexity: • Let A be a p×q matrix and B a q×r matrix. Then computing A×B by MATRIX_MULTIPLY takes p·q·r scalar multiplications, i.e. O(pqr) time.

  32. Example: • A1 is a 10×100 matrix, A2 is a 100×5 matrix, and A3 is a 5×50 matrix. Then (A1A2)A3 takes 10·100·5 + 10·5·50 = 7500 scalar multiplications. However, A1(A2A3) takes 100·5·50 + 10·100·50 = 75000 scalar multiplications.

  33. The matrix-chain multiplication problem: • Given a chain of n matrices A1, A2, …, An, where for i = 1, 2, …, n, matrix Ai has dimension pi-1×pi, fully parenthesize the product A1A2…An in a way that minimizes the number of scalar multiplications. • A product of matrices is fully parenthesized if it is either a single matrix or a product of two fully parenthesized matrix products, surrounded by parentheses.

  34. Counting the number of parenthesizations: • Let P(n) be the number of alternative parenthesizations of a chain of n matrices. Then P(1) = 1 and, for n ≥ 2, P(n) = Σk=1..n-1 P(k)·P(n-k). • P(n) = C(n-1), the (n-1)-st Catalan number, where C(n) = (1/(n+1))·C(2n, n) = Ω(4^n / n^{3/2}), so checking all parenthesizations is infeasible.
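
The recurrence for P(n) is only a few lines of Python (count_parens is a hypothetical name, not from the slides). The values it prints are exactly the Catalan numbers, confirming the growth rate.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def count_parens(n):
        """P(n): number of full parenthesizations of a chain of n matrices."""
        if n == 1:
            return 1
        # split the chain between position k and k+1 for every k
        return sum(count_parens(k) * count_parens(n - k) for k in range(1, n))

    print([count_parens(n) for n in range(1, 8)])  # [1, 1, 2, 5, 14, 42, 132]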

  35. Step 1: The structure of an optimal parenthesization • An optimal parenthesization of AiAi+1…Aj splits the product between Ak and Ak+1 for some k, i ≤ k < j. • Within it, the parenthesizations of Ai…Ak and Ak+1…Aj must themselves be optimal: if either could be computed more cheaply, substituting the cheaper parenthesization would improve the whole, a contradiction.

  36. Step 2: A recursive solution • Define m[i, j] = the minimum number of scalar multiplications needed to compute the matrix Ai..j = AiAi+1…Aj. • m[i, j] = 0 if i = j, and m[i, j] = min{ m[i, k] + m[k+1, j] + pi-1·pk·pj : i ≤ k < j } if i < j. • Goal: m[1, n].

  37. Step 3: Computing the optimal costs • Instead of computing the solution to the recurrence recursively, we compute the optimal cost by using a tabular, bottom-up approach. • The procedure uses an auxiliary table m[1..n, 1..n] for storing the m[i, j] costs and an auxiliary table s[1..n, 1..n] that records which index of k achieved the optimal cost in computing m[i, j].

  38. MATRIX_CHAIN_ORDER
  MATRIX_CHAIN_ORDER(p)
  1 n ← length[p] – 1
  2 for i ← 1 to n
  3   do m[i, i] ← 0
  4 for l ← 2 to n
  5   do for i ← 1 to n – l + 1
  6     do j ← i + l – 1
  7        m[i, j] ← ∞
  8        for k ← i to j – 1
  9          do q ← m[i, k] + m[k+1, j] + pi-1·pk·pj
  10           if q < m[i, j]
  11             then m[i, j] ← q
  12                  s[i, j] ← k
  13 return m and s
  Complexity: O(n^3)

  39. Example: • n = 6 matrices with dimension sequence p = <30, 35, 15, 5, 10, 20, 25>, i.e., A1 is 30×35, A2 is 35×15, A3 is 15×5, A4 is 5×10, A5 is 10×20, and A6 is 20×25.

  40. The m and s tables computed by MATRIX_CHAIN_ORDER for n = 6.

  41. m[2,5] = min{
    m[2,2] + m[3,5] + p1·p2·p5 = 0 + 2500 + 35·15·20 = 13000,
    m[2,3] + m[4,5] + p1·p3·p5 = 2625 + 1000 + 35·5·20 = 7125,
    m[2,4] + m[5,5] + p1·p4·p5 = 4375 + 0 + 35·10·20 = 11375
  } = 7125

  42. PRINT_OPTIMAL_PARENS
  PRINT_OPTIMAL_PARENS(s, i, j)
  1 if i = j
  2   then print "A"i
  3   else print "("
  4        PRINT_OPTIMAL_PARENS(s, i, s[i, j])
  5        PRINT_OPTIMAL_PARENS(s, s[i, j]+1, j)
  6        print ")"
  • Example: for the n = 6 instance above, calling PRINT_OPTIMAL_PARENS(s, 1, 6) prints ((A1(A2A3))((A4A5)A6)).
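
Both MATRIX_CHAIN_ORDER and PRINT_OPTIMAL_PARENS carry over directly to Python. The sketch below keeps the pseudocode's 1-based tables (padding row/column 0) and reproduces the m[2,5] = 7125 entry for the six-matrix example.

    import math

    def matrix_chain_order(p):
        """m[i][j]: min scalar multiplications for Ai..Aj; s[i][j]: best split k."""
        n = len(p) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]
        s = [[0] * (n + 1) for _ in range(n + 1)]
        for l in range(2, n + 1):            # l = chain length
            for i in range(1, n - l + 2):
                j = i + l - 1
                m[i][j] = math.inf
                for k in range(i, j):
                    q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if q < m[i][j]:
                        m[i][j], s[i][j] = q, k
        return m, s

    def print_optimal_parens(s, i, j):
        """Rebuild the optimal parenthesization string from the split table."""
        if i == j:
            return f"A{i}"
        return ("(" + print_optimal_parens(s, i, s[i][j])
                + print_optimal_parens(s, s[i][j] + 1, j) + ")")

    p = [30, 35, 15, 5, 10, 20, 25]          # dimensions for the n = 6 example
    m, s = matrix_chain_order(p)
    print(m[2][5])                           # 7125
    print(m[1][6], print_optimal_parens(s, 1, 6))  # 15125 ((A1(A2A3))((A4A5)A6))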

  43. Q&A
