Algorithms
PowerPoint Slideshow about 'algorithms' - daniel_millan

Algorithms

Dynamic Programming


Dynamic Programming

  • General approach – combine solutions to subproblems to get the solution to a problem

  • Unlike Divide and Conquer in that

    • subproblems are dependent rather than independent

    • bottom-up approach

    • save values of subproblems in a table and reuse them


Usual Approach

  • Start with the smallest, simplest subproblems

  • Combine “appropriate” subproblem solutions to get a solution to the bigger problem

  • If you have a D&C algorithm that does a lot of duplicate computation, you can often find a DP solution that is more efficient


A Simple Example

  • Calculating binomial coefficients

  • n choose k is the number of different combinations of n things taken k at a time

  • These are also the coefficients of the binomial expansion (x + y)^n



Algorithm for Recursive Definition

function C(n, k)
    if k = 0 or k = n then
        return 1
    else
        return C(n-1, k-1) + C(n-1, k)
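A minimal Python transcription of this recursive definition (an illustrative sketch, not part of the original slides):

```python
def binomial(n, k):
    """C(n, k) computed directly from the recursive definition.
    Re-solves the same subproblems many times."""
    if k == 0 or k == n:
        return 1
    return binomial(n - 1, k - 1) + binomial(n - 1, k)
```

For example, binomial(5, 3) returns 10, but only after recomputing C(3,2) twice, as the recursion tree below shows.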


The recursion tree for C(5,3) – note that C(3,2) is computed twice:

C(5,3)
├── C(4,2)
│   ├── C(3,1)
│   └── C(3,2)
└── C(4,3)
    ├── C(3,2)
    └── C(3,3)


Complexity of D & C Algorithm

  • Time complexity is Ω(C(n,k)) – exponential in general

  • But we did a lot of duplicate computation

  • Dynamic programming approach

    • Store solutions to sub-problems in a table

    • Bottom-up approach


Slide9 l.jpg

k

n k

0 1 2 3

0

1

2

3

4

5

n


Analysis of DP Version

  • Time complexity

    O(nk)

  • Storage requirements

    • full table: O(nk)

    • better solution: keep only the current row of the table, O(k)
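The "better solution" for storage can be sketched as follows: because row n of the table depends only on row n-1, a single row of k+1 entries suffices (a sketch, assuming the bottom-up scheme above):

```python
def binomial_dp(n, k):
    """Bottom-up C(n, k): O(nk) time, O(k) space.
    row holds one row of Pascal's triangle at a time."""
    row = [1] + [0] * k              # row for n = 0: C(0,0) = 1
    for _ in range(n):
        for j in range(k, 0, -1):    # right-to-left so row[j-1] is still the old value
            row[j] += row[j - 1]
    return row[k]
```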


Typical Problem

  • Dynamic Programming is often used for optimization problems that satisfy the principle of optimality

  • Principle of optimality

    • In an optimal sequence of decisions or choices, each subsequence must be optimal


Optimization Problems

  • Problem has many possible solutions

  • Each solution has a value

  • Goal is to find the solution with the optimal value


Steps in DP Solutions to Optimization Problems

1 Characterize the structure of an optimal solution

2 Recursively define the value of an optimal solution

3 Compute the value of an optimal solution in a bottom-up manner

4 Construct an optimal solution from computed information


Matrix Chain Multiplication

A1A2A3An

  • Matrix multiplication is associative

  • All ways that a sequence can be parenthesized give the same answer

  • But some are much less expensive to compute


Matrix Chain Multiplication Problem

  • Given a chain <A1, A2, . . ., An> of n matrices, where matrix Ai has dimension p_{i-1} x p_i for i = 1, 2, . . ., n, fully parenthesize the product A1 A2 . . . An in a way that minimizes the number of scalar multiplications


Example

Matrix   Dimensions
A        13 x 5
B        5 x 89
C        89 x 3
D        3 x 34

M = A B C D is 13 x 34


    Parenthesization    Scalar multiplications
1   ((A B) C) D         10,582
2   (A B) (C D)         54,201
3   (A (B C)) D          2,856
4   A ((B C) D)          4,055
5   A (B (C D))         26,418

Cost of option 1, step by step:
13 x 5 x 89 = 5,785 to get (A B), a 13 x 89 result
13 x 89 x 3 = 3,471 to get ((A B) C), a 13 x 3 result
13 x 3 x 34 = 1,326 to get (((A B) C) D), the 13 x 34 result



T(n) = number of ways to parenthesize a chain of n matrices

n      1   2   3   4   5    10      15
T(n)   1   1   2   5   14   4,862   2,674,440
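The counts in the table satisfy the recurrence T(n) = Σ_{k=1}^{n-1} T(k)·T(n-k) with T(1) = 1 (choose the outermost split after position k); these are the Catalan numbers. A sketch that reproduces the table:

```python
def num_parenthesizations(n):
    """T(n) = number of ways to fully parenthesize a chain of n matrices."""
    T = [0] * (n + 1)
    T[1] = 1
    for m in range(2, n + 1):
        # position k of the outermost split: (A1..Ak)(Ak+1..Am)
        T[m] = sum(T[k] * T[m - k] for k in range(1, m))
    return T[n]
```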



Steps in DP Solutions to Optimization Problems

1 Characterize the structure of an optimal solution

2 Recursively define the value of an optimal solution

3 Compute the value of an optimal solution in a bottom-up manner

4 Construct an optimal solution from computed information


Step 1

  • Show that the principle of optimality applies

    • An optimal solution to the problem contains within it optimal solutions to sub-problems

    • Let Ai..j be the result of the optimal way to parenthesize Ai Ai+1 . . . Aj

    • Suppose the optimal solution has the first split at position k

      A1..k Ak+1..n

    • Each of these sub-problems must be optimally parenthesized


Step 2

  • Define value of the optimal solution recursively in terms of optimal solutions to sub-problems.

  • Consider the problem Ai..j

  • Let m[i,j] be the minimum number of scalar multiplications needed to compute matrix Ai..j

  • The cost of the cheapest way to compute A1..n is m[1,n]


Step 2 continued

  • Define m[i,j]

    If i = j, the chain has one matrix and the cost m[i,i] = 0 for all i ≤ n

    If i < j:

    assume the optimal split is at position k, where i ≤ k < j

    m[i,j] = m[i,k] + m[k+1,j] + p_{i-1} p_k p_j

    where m[i,k] and m[k+1,j] are the costs to compute Ai..k and Ak+1..j, and p_{i-1} p_k p_j is the cost of multiplying the two results


Step 2 continued

  • Problem: we don’t know what the value for k is, but there are only j-i possibilities


Step 3

  • Compute optimal cost by using a bottom-up approach

  • Example: 4 matrix chain

    Matrix Dimensions

    A1 100 x 1

    A2 1 x 100

    A3 100 x 1

    A4 1 x 100

  • p0 p1 p2 p3 p4

    100 1 100 1 100


[Tables: the m table (minimum costs) and s table (optimal splits) for the 4-matrix chain with dimensions p0 p1 p2 p3 p4 = 100, 1, 100, 1, 100, indexed by i (rows 1..4) and j (columns 1..4)]

MATRIX-CHAIN-ORDER(p)
    n ← length[p] - 1
    for i ← 1 to n
        do m[i,i] ← 0
    for l ← 2 to n
        do for i ← 1 to n - l + 1
            do j ← i + l - 1
               m[i,j] ← ∞
               for k ← i to j - 1
                   do q ← m[i,k] + m[k+1,j] + p_{i-1} p_k p_j
                      if q < m[i,j]
                          then m[i,j] ← q
                               s[i,j] ← k
    return m and s
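A Python rendering of MATRIX-CHAIN-ORDER (a sketch; 1-based indexing is kept to match the pseudocode, and p is the dimension list, so Ai is p[i-1] x p[i]):

```python
def matrix_chain_order(p):
    """Bottom-up DP for the matrix-chain problem.
    Returns (m, s): m[i][j] = min scalar multiplications for A_i..A_j,
    s[i][j] = optimal split point k. Indices 1..n are used; row/col 0 unused."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                 # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s
```

For the A, B, C, D example, matrix_chain_order([13, 5, 89, 3, 34]) gives m[1][4] = 2856, matching the (A (B C)) D parenthesization.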


Time and Space Complexity

  • Time complexity

    • Triply nested loops

    • O(n^3)

  • Space complexity

    • n x n two-dimensional table

    • O(n^2)


Step 4: Constructing the Optimal Solution

  • Matrix-Chain-Order determines the optimal number of scalar multiplications

  • Does not directly compute the product

  • Step 4 of the dynamic programming paradigm is to construct an optimal solution from computed information.

  • The s matrix contains the optimal split for every sub-chain


MATRIX-CHAIN-MULTIPLY(A, s, i, j)
    if j > i
        then X ← MATRIX-CHAIN-MULTIPLY(A, s, i, s[i,j])
             Y ← MATRIX-CHAIN-MULTIPLY(A, s, s[i,j] + 1, j)
             return MATRIX-MULTIPLY(X, Y)
        else return Ai

Initial call: MATRIX-CHAIN-MULTIPLY(A, s, 1, n) where A = <A1, A2, . . ., An>
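A sketch of the same recursive walk that builds the parenthesization string instead of performing the multiplications (optimal_parens is a hypothetical helper, not from the slides; s is the split table from the matrix-chain DP, hand-filled here for the A, B, C, D example):

```python
def optimal_parens(s, i, j):
    """Walk the split table s the way MATRIX-CHAIN-MULTIPLY does,
    emitting the optimal parenthesization instead of a product."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return f"({optimal_parens(s, i, k)} {optimal_parens(s, k + 1, j)})"

# Split table for A(13x5) B(5x89) C(89x3) D(3x34), 1-based indices
s = [[0] * 5 for _ in range(5)]
s[1][2], s[1][3], s[1][4] = 1, 1, 3
s[2][3], s[2][4] = 2, 3
s[3][4] = 3
```

optimal_parens(s, 1, 4) yields "((A1 (A2 A3)) A4)", i.e. the (A (B C)) D parenthesization from the earlier example.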



Algorithms

Dynamic Programming Continued


Comments on Dynamic Programming

  • Useful when the problem can be reduced to several “overlapping” sub-problems

  • All possible sub-problems are computed

  • Computation is done by maintaining a large matrix

  • Usually has large space requirement

  • Running times are usually at least quadratic


Memoization

  • Variation on dynamic programming

  • Idea – memoize the natural but inefficient recursive algorithm

  • As each sub-problem is solved, store values in table

  • Initialize table with values that indicate if the value has been computed


MEMOIZED-MATRIX-CHAIN(p)
    n ← length[p] - 1
    for i ← 1 to n
        do for j ← i to n
            do m[i,j] ← ∞
    return LOOKUP-CHAIN(p, 1, n)


LOOKUP-CHAIN(p, i, j)
    if m[i,j] < ∞
        then return m[i,j]
    if i = j
        then m[i,j] ← 0
        else for k ← i to j - 1
            do q ← LOOKUP-CHAIN(p, i, k) + LOOKUP-CHAIN(p, k+1, j) + p_{i-1} p_k p_j
               if q < m[i,j]
                   then m[i,j] ← q
    return m[i,j]
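In Python the memoized pair collapses naturally: functools.lru_cache plays the role of the ∞-initialized table (a sketch under that substitution):

```python
from functools import lru_cache

def memoized_matrix_chain(p):
    """Top-down matrix-chain cost; the cache replaces the m table."""
    n = len(p) - 1

    @lru_cache(maxsize=None)
    def lookup(i, j):
        if i == j:
            return 0
        # try every split point k, exactly as in LOOKUP-CHAIN
        return min(lookup(i, k) + lookup(k + 1, j) + p[i - 1] * p[k] * p[j]
                   for k in range(i, j))

    return lookup(1, n)
```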


Space and Time Requirements of Memoized Version

  • Running time O(n^3)

  • Storage O(n^2)


Longest Common Subsequence

  • Definition 1: Subsequence

    Given a sequence

    X = < x1, x2, . . . , xm>

    then another sequence

    Z = < z1, z2, . . . , zk>

    is a subsequence of X if there exists a strictly increasing sequence <i1, i2, . . . , ik> of indices of X such that for all j = 1, 2, . . ., k we have x_{i_j} = z_j


Example

X = <A,B,D,F,M,Q>

Z = <B, F, M>

Z is a subsequence of X with index sequence <2,4,5>


More Definitions

  • Definition 2: Common subsequence

    • Given 2 sequences X and Y, we say Z is a common subsequence of X and Y if Z is a subsequence of X and a subsequence of Y

  • Definition 3: Longest common subsequence problem

    • Given X = < x1, x2, . . . , xm> and Y = < y1, y2, . . . , yn> find a maximum length common subsequence of X and Y


Example

X = <A,B,C,B,D,A,B>

Y = <B,D,C,A,B,A>

One LCS is <B,C,B,A>, of length 4


Brute Force Algorithm

1 for every subsequence of X

2     is it a subsequence of Y?

3     if yes, is it longer than the longest common subsequence found so far?

Complexity? X has 2^m subsequences and each check against Y takes O(n) time, so O(n 2^m)


Yet More Definitions

  • Definition 4: Prefix of a subsequence

    If X = < x1, x2, . . . , xm> , the ith prefix of X for i = 0,1,...,m is Xi = < x1, x2, . . . , xi>

  • Example

    • if X = <A,B,C,D,E,F,H,I,J,L> then

      X4 = <A,B,C,D> and X0 = <>


Optimal Substructure

  • Theorem 16.1 Optimal Substructure of LCS

    Let X = <x1, x2, . . . , xm> and Y = <y1, y2, . . . , yn> be sequences and let Z = <z1, z2, . . . , zk> be any LCS of X and Y

    1. if xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1

    2. if xm ≠ yn and zk ≠ xm, then Z is an LCS of Xm-1 and Y

    3. if xm ≠ yn and zk ≠ yn, then Z is an LCS of X and Yn-1


Sub-problem structure

  • Case 1

    • if xm = yn then there is one sub-problem to solve

      find an LCS of Xm-1 and Yn-1 and append xm

  • Case 2

    • if xm ≠ yn then there are two sub-problems

      • find an LCS of X and Yn-1

      • find an LCS of Xm-1 and Y

      • pick the longer of the two


Cost of Optimal Solution

  • Cost is length of the common subsequence

  • We want to pick the longest one

  • Let c[i,j] be the length of an LCS of the sequences Xi and Yj

  • Base case: an empty prefix (i = 0 or j = 0) – then c[i,j] = 0 because the LCS is empty



Dynamic Programming Solution

  • Table c[0..m, 0..n]: entry c[i,j] stores the length of an LCS of Xi and Yj

  • Table b[1..m, 1..n] stores pointers to optimal sub-problem solutions


[Table: matrix c for X = <A,B,D,F,M,Q> (rows i = 0..6) and Y = <C,B,R,F,T,S,Q> (columns j = 0..7), with row 0 and column 0 initialized to 0]

[Table: matrix b of back-pointers for the same X and Y, rows i = 1..6, columns j = 1..7]


LCS-LENGTH(X, Y)
    m ← length[X]
    n ← length[Y]
    for i ← 1 to m
        do c[i,0] ← 0
    for j ← 0 to n
        do c[0,j] ← 0
    for i ← 1 to m
        do for j ← 1 to n
            do if xi = yj
                then c[i,j] ← c[i-1,j-1] + 1
                     b[i,j] ← "↖"
                else if c[i-1,j] ≥ c[i,j-1]
                    then c[i,j] ← c[i-1,j]
                         b[i,j] ← "↑"
                    else c[i,j] ← c[i,j-1]
                         b[i,j] ← "←"
    return c and b


PRINT-LCS(b, X, i, j)
    if i = 0 or j = 0
        then return
    if b[i,j] = "↖"
        then PRINT-LCS(b, X, i-1, j-1)
             print xi
    else if b[i,j] = "↑"
        then PRINT-LCS(b, X, i-1, j)
        else PRINT-LCS(b, X, i, j-1)
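The two procedures combine into one Python sketch; instead of keeping a b matrix of arrows, the traceback re-derives each move from the c table (an assumed simplification, the result is the same):

```python
def lcs(X, Y):
    """Bottom-up LCS: build the c table of prefix LCS lengths,
    then walk it backwards to recover one longest common subsequence."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # traceback from c[m][n], playing the role of the b pointers
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:      # "diagonal" move: part of the LCS
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:   # "up" move
            i -= 1
        else:                               # "left" move
            j -= 1
    return "".join(reversed(out))
```

For the example above, lcs("ABCBDAB", "BDCABA") returns a common subsequence of length 4.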


Code Complexity

  • Time complexity: O(mn)

  • Space complexity: O(mn) for the c and b tables

  • Improved space complexity? Two rows of c suffice if only the length is needed: O(min(m,n))


Optimal Polygon Triangulation

  • Turns out to be very similar to matrix chain multiplication

  • Mapping one problem to another kind of problem for which you already have a good solution is a useful concept


Lots of definitions

  • A polygon is a piecewise-linear, closed curve in the plane.

  • The “pieces” are called sides.

  • A point joining two consecutive sides is called a vertex.

  • The set of points in the plane enclosed by a simple polygon forms the interior of the polygon.

  • The set of points on the polygon forms its boundary.

  • The set of points surrounding the polygon forms its exterior.


Convex Polygons

  • A simple polygon is convex if given any two points on its boundary or in its interior, all points on the line segment drawn between them are contained in the polygon’s boundary or interior.


Labeling a convex polygon

[Figure: a convex hexagon with vertices labeled in order v0, v1, . . . , v5, with v0 = vn]

P = <v0, v1, . . . , vn-1>


Chords

[Figure: the hexagon v0 . . . v5 drawn twice, illustrating chords – segments joining nonadjacent vertices]

Triangulation

  • A triangulation of a polygon is a set T of chords of the polygon that divide the polygon into disjoint triangles.

    • No chords intersect

    • The set of chords T is maximal; every chord not in T intersects some chord in T

  • Every triangulation of an n-vertex convex polygon

    • has n-3 chords

    • divides polygon into n-2 triangles


Optimal (polygon) triangulation problem

  • Given a convex polygon P = <v0, v1, . . . , vn-1> and a weight function w defined on triangles formed by sides and chords of P:

    • Find a triangulation that minimizes the sum of the weights of the triangles in the triangulation

    • One possible weight function: the sum of the lengths of a triangle's sides (its perimeter)


Correspondence of Parenthesization

  • We will show how to view both parenthesization and triangulation as parse trees and show the correspondence between the two views.

  • The result will be that a slight modification of the matrix chain problem can be used to solve the triangulation problem.


[Figure: parse tree of the parenthesization ((A1 (A2 A3)) (A4 (A5 A6))): the root splits into A1 (A2 A3) and A4 (A5 A6), which split further down to the leaves A1 . . . A6]


[Figure: a convex polygon with seven vertices v0 . . . v6]


[Figure: the triangulation of the seven-vertex polygon v0 . . . v6 that corresponds to the parenthesization ((A1 (A2 A3)) (A4 (A5 A6)))]


A1 A2 A3 A4 A5 A6 – 6 matrices, 7 dimensions p0 p1 p2 p3 p4 p5 p6

[Figure: the polygon v0 . . . v6 – 7 vertices, 6 sides]

Each matrix Ai corresponds to a side of the polygon, and each dimension pi to a vertex vi
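Putting the correspondence to work: the sketch below is matrix_chain_order with vertices in place of dimensions and triangle weight in place of scalar-multiplication cost (assumed weight function: the perimeter suggested earlier; t and v are illustrative names):

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def min_weight_triangulation(v):
    """Minimum-weight triangulation of a convex polygon v[0..n], with
    w(triangle a, b, c) = its perimeter. Same DP shape as matrix chain:
    t[i][j] = min over k of t[i][k] + t[k+1][j] + w(v[i-1], v[k], v[j])."""
    n = len(v) - 1                     # sides play the role of matrices
    t = [[0.0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):
        for i in range(1, n - l + 2):
            j = i + l - 1
            t[i][j] = min(
                t[i][k] + t[k + 1][j]
                + dist(v[i - 1], v[k]) + dist(v[k], v[j]) + dist(v[i - 1], v[j])
                for k in range(i, j)
            )
    return t[1][n]
```

For a unit square the two possible triangulations are symmetric, so the minimum weight is 4 + 2√2 (four unit sides plus one diagonal, counted in both triangles).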

