
Dynamic Programming Technique



Presentation Transcript


  1. Dynamic Programming Technique Sources: textbook slides + older slides

  2. Dynamic Programming History • Bellman pioneered the systematic study of dynamic programming in the 1950s. • Etymology: dynamic programming = planning over time. • The Secretary of Defense at the time was hostile to mathematical research, so Bellman sought an impressive name to avoid confrontation: "it's impossible to use dynamic in a pejorative sense"; "something not even a Congressman could object to". Reference: Bellman, R. E., Eye of the Hurricane: An Autobiography.

  3. When is Dynamic Programming used? • Used for problems in which an optimal solution to the original problem can be built from optimal solutions to its subproblems. • Often a recursive algorithm can solve the problem, but that algorithm computes the optimal solution to the same subproblem more than once and is therefore slow. • The following two examples (Fibonacci numbers and the binomial coefficient) have such recursive algorithms. • Dynamic programming reduces the time by computing the optimal solution of each subproblem only once and saving its value; the saved value is then reused whenever the same subproblem appears again.

  4. Fibonacci's Series • Definition: S(0) = 0, S(1) = 1, S(n) = S(n-1) + S(n-2) for n > 1, giving 0, 1, 1, 2, 3, 5, 8, 13, 21, … • Applying the recursive definition directly:

fib(n)
1. if n < 2
2.   then return n
3.   else return (fib(n - 1) + fib(n - 2))
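A direct Python transcription of this recursive algorithm (a minimal sketch; only the name fib comes from the pseudocode above):

    def fib(n):
        # Naive recursion, exactly as in the pseudocode: exponential time,
        # because the same subproblems are recomputed many times.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print([fib(i) for i in range(9)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21]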

  5. fib(n)
1. if n < 2
2.   then return n
3.   else return (fib(n - 1) + fib(n - 2))

What is the recurrence equation? The run time can be shown to be very slow: T(n) = Θ(φ^n), where φ = (1 + sqrt(5)) / 2 ≈ 1.61803 (the golden ratio).

  6. Analysis using Substitution of Recursive Fibonacci • Let T(n) be the number of additions done by fib(n). • T(n) = T(n-1) + T(n-2) + 1 for n >= 2, with T(1) = T(0) = 0 (so T(2) = 1).
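The recurrence can be checked empirically; the sketch below counts the additions the naive algorithm performs (count_additions is a hypothetical helper, not from the slides):

    def count_additions(n):
        # T(n) = T(n-1) + T(n-2) + 1 for n >= 2, T(1) = T(0) = 0.
        if n < 2:
            return 0
        return count_additions(n - 1) + count_additions(n - 2) + 1

    # T grows like the Fibonacci numbers themselves: T(n) = fib(n+1) - 1,
    # e.g. T(5) = 7 while fib(6) = 8.
    print([count_additions(i) for i in range(7)])  # [0, 0, 1, 2, 4, 7, 12]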

  7. What does the Execution Tree look like? The tree of recursive calls for Fib(5):

Fib(5)
├─ Fib(4)
│  ├─ Fib(3)
│  │  ├─ Fib(2)
│  │  │  ├─ Fib(1)
│  │  │  └─ Fib(0)
│  │  └─ Fib(1)
│  └─ Fib(2)
│     ├─ Fib(1)
│     └─ Fib(0)
└─ Fib(3)
   ├─ Fib(2)
   │  ├─ Fib(1)
   │  └─ Fib(0)
   └─ Fib(1)

Note that Fib(3) is computed twice and Fib(2) three times.
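The tree above can be regenerated for any n with a short helper (trace is a hypothetical name):

    def trace(n, depth=0):
        # Print one line per recursive call, indented by call depth.
        print("  " * depth + f"Fib({n})")
        if n >= 2:
            trace(n - 1, depth + 1)
            trace(n - 2, depth + 1)

    trace(5)  # reproduces the execution tree shown above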

  8. The Main Idea of Dynamic Programming • In dynamic programming we usually reduce time by increasing the amount of space. • We solve the problem by solving subproblems of increasing size and saving each optimal solution in a table (usually). • The table is then used for finding the optimal solution to larger problems. • Time is saved since each subproblem is solved only once; a sketch of this idea follows below.
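One way to realize this idea in Python is top-down memoization; this is a sketch of the general technique, not the bottom-up algorithm on the next slide:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # Each distinct subproblem is solved once and cached in a table,
        # so the exponential recursion collapses to O(n) calls.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)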

  9. Dynamic Programming Solution for Fibonacci • Builds a table with the first n Fibonacci numbers.

fib(n)
1. A[0] ← 0
2. A[1] ← 1
3. for i ← 2 to n
4.   do A[i] ← A[i-1] + A[i-2]
5. return A

What is the run time? What are the space requirements? If we only need the nth number, can we save space?
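A Python sketch of this bottom-up algorithm, plus a two-variable variant answering the space question (fib_table and fib_two_vars are hypothetical names):

    def fib_table(n):
        # Fill A[0..n] left to right, as in the pseudocode:
        # Theta(n) time and Theta(n) space.
        A = [0] * (n + 1)
        if n >= 1:
            A[1] = 1
        for i in range(2, n + 1):
            A[i] = A[i - 1] + A[i - 2]
        return A

    def fib_two_vars(n):
        # If only the nth number is needed, the last two table entries
        # suffice: Theta(n) time, O(1) space.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a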

  10. The Binomial Coefficient • Definition: C(n, k) = n! / (k! (n-k)!) for 0 <= k <= n.

  11. The recursive algorithm

binomialCoef(n, k)
1. if k = 0 or k = n
2.   then return 1
3.   else return (binomialCoef(n-1, k-1) + binomialCoef(n-1, k))
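The same algorithm in Python (a direct transcription; slow for the same reason as naive Fibonacci):

    def binomial_coef(n, k):
        # Base cases: one way to choose nothing or everything.
        if k == 0 or k == n:
            return 1
        # Split on whether the nth element is chosen.
        return binomial_coef(n - 1, k - 1) + binomial_coef(n - 1, k)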

  12. The Call Tree • Each call C(n, k) with 0 < k < n of the algorithm on slide 11 spawns two recursive calls:

C(n, k)
├─ C(n-1, k-1)
│  ├─ C(n-2, k-2)
│  └─ C(n-2, k-1)
└─ C(n-1, k)
   ├─ C(n-2, k-1)
   └─ C(n-2, k)

and so on down through the C(n-3, ·) level; note that C(n-2, k-1) already appears twice.

  13. Dynamic Solution • Use a matrix B of n+1 rows and k+1 columns, where B[n, k] = C(n, k). • Establish a recursive property; rewritten in terms of matrix B: B[i, j] = B[i-1, j-1] + B[i-1, j] for 0 < j < i, and B[i, j] = 1 for j = 0 or j = i. • Solve all "smaller instances of the problem" in a bottom-up fashion by computing the rows of B in sequence, starting with the first row.
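The recursive property can be spot-checked against the factorial definition using Python's standard library (math.comb, available since Python 3.8):

    import math

    # B[i, j] = B[i-1, j-1] + B[i-1, j] should agree with i! / (j! (i-j)!).
    for i in range(1, 12):
        for j in range(1, i):
            assert math.comb(i, j) == math.comb(i - 1, j - 1) + math.comb(i - 1, j)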

  14. The B Matrix

    j →   0   1   2   3   4  ...  k
  i = 0   1
  i = 1   1   1
  i = 2   1   2   1
  i = 3   1   3   3   1
  i = 4   1   4   6   4   1
  ...
  i = n

Each entry combines the two entries above it: B[i, j] = B[i-1, j-1] + B[i-1, j].

  15. Compute B[4, 2] = C(4, 2) • Row 0: B[0,0] = 1 • Row 1: B[1,0] = 1, B[1,1] = 1 • Row 2: B[2,0] = 1, B[2,1] = B[1,0] + B[1,1] = 2, B[2,2] = 1 • Row 3: B[3,0] = 1, B[3,1] = B[2,0] + B[2,1] = 3, B[3,2] = B[2,1] + B[2,2] = 3 • Row 4: B[4,0] = 1, B[4,1] = B[3,0] + B[3,1] = 4, B[4,2] = B[3,1] + B[3,2] = 6

  16. Dynamic Program

bin(n, k)
1. for i = 0 to n                  // every row
2.   for j = 0 to minimum(i, k)
3.     if j = 0 or j = i           // column 0 or diagonal
4.       then B[i, j] = 1
5.       else B[i, j] = B[i-1, j-1] + B[i-1, j]
6. return B[n, k]

• What is the run time? • How much space does it take? • If we only need the last value, can we save space? (See the sketch below.)
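A Python sketch of the dynamic program, plus a single-row variant answering the space question (bin_dp and bin_dp_row are hypothetical names):

    def bin_dp(n, k):
        # Full (n+1) x (k+1) table, mirroring the pseudocode above.
        B = [[0] * (k + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            for j in range(min(i, k) + 1):
                if j == 0 or j == i:
                    B[i][j] = 1
                else:
                    B[i][j] = B[i - 1][j - 1] + B[i - 1][j]
        return B[n][k]

    def bin_dp_row(n, k):
        # Only the previous row is ever read, so one row of k+1 entries
        # suffices; updating j from high to low avoids overwriting values
        # still needed. O(k) space instead of O(nk).
        row = [1] + [0] * k
        for i in range(1, n + 1):
            for j in range(min(i, k), 0, -1):
                row[j] += row[j - 1]
        return row[k]

    assert bin_dp(4, 2) == bin_dp_row(4, 2) == 6  # matches slide 15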

  17. Dynamic programming • All values in column 0 are 1. • All values in the first k+1 diagonal cells are 1. • The conditions j ≠ i and 0 < j <= min{i, k} ensure we only compute B[i, j] for j < i and only in the first k+1 columns. • Elements above the diagonal (B[i, j] for j > i) are not computed, since C(i, j) is undefined for j > i.

  18. Number of iterations • Counting the inner-loop iterations of bin(n, k): rows 0 through k contribute i+1 iterations each, and rows k+1 through n contribute k+1 each, so the total is (k+1)(k+2)/2 + (n-k)(k+1), which is Θ(nk) for 1 <= k < n.

  19. Principle of Optimality (Optimal Substructure) • The principle of optimality applies to a problem (not an algorithm). • A large number of optimization problems satisfy this principle. • Principle of optimality: given an optimal sequence of decisions or choices, each subsequence must also be optimal.

  20. Principle of optimality - shortest path problem • Problem: given a graph G and vertices s and t, find a shortest path in G from s to t. • Theorem: a subpath P' (from s' to t') of a shortest path P is a shortest path from s' to t' in the subgraph G' induced by P'. (Subpaths are paths that start or end at an intermediate vertex of P.) • Proof: if P' were not a shortest path from s' to t' in G', we could replace the subpath from s' to t' in P with the shortest path in G' from s' to t'. The result would be a shorter path from s to t than P, contradicting the assumption that P is a shortest path from s to t.
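The standard form of this property (in G itself, assuming nonnegative edge weights) can be checked on a small example; the graph below is hypothetical, and Dijkstra's algorithm stands in for any shortest-path routine:

    import heapq

    def dijkstra(graph, s):
        # Single-source shortest-path distances (nonnegative weights assumed).
        dist = {v: float("inf") for v in graph}
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue  # stale queue entry
            for v, w in graph[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    # Hypothetical graph; the shortest s-to-t path is s -> a -> b -> t (cost 3).
    graph = {"s": [("a", 1), ("b", 5)], "a": [("b", 1), ("t", 6)],
             "b": [("t", 1)], "t": []}
    assert dijkstra(graph, "s")["t"] == 3
    # Its subpath a -> b is itself a shortest a-to-b path:
    assert dijkstra(graph, "a")["b"] == 1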

  21. A problem that does not satisfy the Principle of Optimality • Problem: what is the longest simple route between cities A and B? • Simple = never visit the same spot twice. • The longest simple route (the solid line in the slide's figure) has cities C and D as intermediate cities. • It does not consist of the longest simple route from A to D plus the longest simple route from D to B. [Figure: four cities A, B, C, D with the longest simple route drawn as a solid line]
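A brute-force check on a small hypothetical graph (the slide gives no edge weights, so these are made up) shows the failure concretely:

    def longest_simple(graph, s, t):
        # Enumerate every simple path from s to t; return (length, path).
        best = (float("-inf"), None)

        def dfs(u, seen, length, path):
            nonlocal best
            if u == t:
                if length > best[0]:
                    best = (length, list(path))
                return
            for v, w in graph[u]:
                if v not in seen:
                    seen.add(v)
                    path.append(v)
                    dfs(v, seen, length + w, path)
                    path.pop()
                    seen.remove(v)

        dfs(s, {s}, 0, [s])
        return best

    edges = [("A", "C", 1), ("C", "D", 1), ("D", "B", 1),
             ("A", "D", 4), ("C", "B", 4)]
    graph = {v: [] for v in "ABCD"}
    for u, v, w in edges:  # undirected graph
        graph[u].append((v, w))
        graph[v].append((u, w))

    print(longest_simple(graph, "A", "B"))  # (9, ['A', 'D', 'C', 'B'])
    print(longest_simple(graph, "A", "D"))  # (6, ['A', 'C', 'B', 'D'])
    # The optimal A-to-B route passes through D, but its A-to-D prefix
    # (length 4) is not the longest simple A-to-D route (length 6), so
    # optimal substructure fails.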
