Dynamic Programming
叶德仕 [email protected]
History. Bellman pioneered the systematic study of dynamic programming in the 1950s.
Etymology. Dynamic programming = planning over time. The Secretary of Defense at the time was hostile to mathematical research, so Bellman chose a deliberately unmathematical-sounding name.
Knapsack problem. Given n items with weights w1, …, wn and values v1, …, vn, and a knapsack of capacity W, select a subset of items of total weight at most W that maximizes total value.
Example: W = 11; the optimal subset has value 40.
Input: n, W, w1, …, wn, v1, …, vn
for w = 0 to W
    M[0, w] = 0
for i = 1 to n
    for w = 1 to W
        if (wi > w)
            M[i, w] = M[i-1, w]
        else
            M[i, w] = max { M[i-1, w], vi + M[i-1, w - wi] }
return M[n, W]
The table M has (n + 1) rows and (W + 1) columns; for the example instance, OPT = M[n, W] = 40.
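The pseudocode above translates directly to Python. The item weights and values below are an assumption — the standard Kleinberg–Tardos instance whose optimum at W = 11 is 40:

```python
def knapsack(W, w, v):
    """0/1 knapsack: M[i][x] = best value using items 1..i with capacity x."""
    n = len(w)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for x in range(1, W + 1):
            if w[i - 1] > x:                       # item i does not fit
                M[i][x] = M[i - 1][x]
            else:                                  # skip item i, or take it
                M[i][x] = max(M[i - 1][x],
                              v[i - 1] + M[i - 1][x - w[i - 1]])
    return M[n][W]

# Assumed example instance (weights, values), capacity W = 11:
print(knapsack(11, [1, 2, 5, 6, 7], [1, 6, 18, 22, 28]))  # -> 40
```

The table costs Θ(nW) time and space — pseudo-polynomial, since W is exponential in the input size.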
Longest increasing subsequence. A subsequence is obtained by deleting zero or more elements, and an increasing subsequence is one in which the numbers are getting strictly larger.
Example: the sequence 5, 2, 8, 6, 3, 6, 9, 7 (an LIS is 2, 3, 6, 9, of length 4).
Let L(i) be the length of the longest increasing subsequence ending at position i. Computed by naive recursion, the same subproblems are solved over and over again: L(5) calls L(1), L(2), L(3), L(4), and L(4) in turn calls L(1), L(2), L(3).
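Computing L(i) iteratively avoids the repeated work — a minimal sketch, using the example sequence above:

```python
def lis_length(a):
    """L[i] = length of the longest increasing subsequence ending at a[i]."""
    n = len(a)
    L = [1] * n
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:              # a[i] can extend a subsequence ending at a[j]
                L[i] = max(L[i], L[j] + 1)
    return max(L) if a else 0

print(lis_length([5, 2, 8, 6, 3, 6, 9, 7]))  # -> 4  (e.g. 2, 3, 6, 9)
```

This runs in O(n²) time; each L(i) is computed exactly once.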
[Figure: three alignments of ocurrance with occurrence (x = x1 … x9 = ocurrance, y = y1 … y10 = occurrence; gaps and mismatches marked):
- o c u r r a n c e – against o c c u r r e n c e: 6 mismatches, 1 gap;
- o c – u r r a n c e against o c c u r r e n c e: 1 mismatch, 1 gap;
- o c – u r r a – n c e against o c c u r r – e n c e: 0 mismatches, 3 gaps.]
SequenceAlignment(m, n, x1 x2 … xm, y1 y2 … yn, δ, α) {
    for i = 0 to m
        M[i, 0] = i·δ
    for j = 0 to n
        M[0, j] = j·δ
    for i = 1 to m
        for j = 1 to n
            M[i, j] = min( α[xi, yj] + M[i-1, j-1],
                           δ + M[i-1, j],
                           δ + M[i, j-1] )
    return M[m, n]
}
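The same recurrence in Python — a sketch in which the mismatch cost α is passed as a function; the unit-cost demo (which makes this ordinary edit distance) is my own choice:

```python
def alignment_cost(x, y, delta, alpha):
    """Minimum total gap + mismatch cost of aligning x with y."""
    m, n = len(x), len(y)
    M = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        M[i][0] = i * delta                    # x[1..i] aligned entirely against gaps
    for j in range(n + 1):
        M[0][j] = j * delta
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            M[i][j] = min(alpha(x[i - 1], y[j - 1]) + M[i - 1][j - 1],  # match/mismatch
                          delta + M[i - 1][j],                          # gap in y
                          delta + M[i][j - 1])                          # gap in x
    return M[m][n]

# With unit gap and mismatch costs this is edit distance:
print(alignment_cost("ocurrance", "occurrence",
                     delta=1, alpha=lambda a, b: 0 if a == b else 1))  # -> 2
```

Running time and space are both Θ(mn).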
Update B[i, 0] = B[i, 1] for each i
end for
% B[i, 1] holds the value of OPT(i, n) for i = 1, …, m
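The space-saving idea in the fragment above — keeping only two columns B[·, 0] and B[·, 1] of the alignment table — can be sketched as follows. This computes the optimal value only, not the alignment itself; the function name and the unit-cost demo are my own:

```python
def alignment_value_linear_space(x, y, delta, alpha):
    """Compute OPT(m, n) keeping only the previous and current columns."""
    m, n = len(x), len(y)
    B = [[i * delta, 0] for i in range(m + 1)]     # B[i][0] = column j-1
    for j in range(1, n + 1):
        B[0][1] = j * delta
        for i in range(1, m + 1):
            B[i][1] = min(alpha(x[i - 1], y[j - 1]) + B[i - 1][0],
                          delta + B[i - 1][1],
                          delta + B[i][0])
        for i in range(m + 1):                     # the slide's update B[i, 0] = B[i, 1]
            B[i][0] = B[i][1]
    return B[m][0]

print(alignment_value_linear_space("ocurrance", "occurrence",
                                   delta=1, alpha=lambda a, b: 0 if a == b else 1))  # -> 2
```

Space drops from Θ(mn) to Θ(m); recovering the alignment itself in linear space is the divide-and-conquer step discussed next.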
Remark. The analysis is not tight because the two subproblems are of size (q, n/2) and (m − q, n/2). In the next slide, we save the log n factor.
Note: we ask for “a” longest common subsequence, not “the” one — it need not be unique. Brute force (checking every subsequence of x against y) = exponential time.
Notation: x = x1 x2 … xi … xm and y = y1 y2 … yj … yn.
LCS(x, y, i, j)
    if x[i] = y[j]
        then c[i, j] ← LCS(x, y, i-1, j-1) + 1
        else c[i, j] ← max{ LCS(x, y, i-1, j), LCS(x, y, i, j-1) }
The algorithm evaluates two subproblems, each with only one parameter decremented.
Recursion tree for m = 3, n = 4: the root (3,4) spawns (2,4) and (3,3); these spawn (1,4), (2,3) and (3,2), (2,3); the tree has height m + n. Thus the running time is potentially exponential.
Same subproblems: in the tree rooted at (3,4), the subproblem (2,3) already appears twice at the second level. Overlapping subproblems: a recursive solution contains a “small” number of distinct subproblems repeated many times.
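Memoization exploits exactly this: each of the (m+1)(n+1) distinct subproblems (i, j) is solved once and cached. A minimal sketch of the LCS recursion with memoization (the demo strings are an assumption, the classic CLRS pair):

```python
from functools import lru_cache

def lcs_length(x, y):
    """Length of a longest common subsequence of x and y, memoized."""
    @lru_cache(maxsize=None)
    def L(i, j):
        if i == 0 or j == 0:
            return 0
        if x[i - 1] == y[j - 1]:
            return L(i - 1, j - 1) + 1
        return max(L(i - 1, j), L(i, j - 1))
    return L(len(x), len(y))

print(lcs_length("ABCBDAB", "BDCABA"))  # -> 4  (e.g. BCBA)
```

The exponential tree collapses to O(mn) time and space.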
[Figure: the filled LCS table for the strings C A B B D A B B and D C A B A. An LCS is reconstructed by tracing backwards from c[m, n]; tracing a different path through the table yields another solution, since an LCS need not be unique.]
Matrix chain multiplication. Example: A is 50 × 20, B is 20 × 1, C is 1 × 10, D is 10 × 100. Evaluating A × B × C × D as (A × (B × C)) × D:
B × C is 20 × 10, so A × (B × C) is 50 × 10, and (A × B × C) × D is the final 50 × 100 result.
However, exhaustive search is not efficient. Let P(n) be the number of alternative parenthesizations of n matrices.
P(n) = 1, if n = 1
P(n) = ∑k=1 to n−1 P(k) P(n−k), if n ≥ 2
P(n) ≥ 4^(n−1) / (2n² − n). Ex. n = 20: this is > 2^28.
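The recurrence for P(n) (the Catalan numbers, shifted by one) is easy to evaluate directly, which lets us check the n = 20 claim:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n):
    """Number of parenthesizations of a chain of n matrices."""
    if n == 1:
        return 1
    return sum(P(k) * P(n - k) for k in range(1, n))

print(P(20))            # -> 1767263190
print(P(20) > 2 ** 28)  # -> True, consistent with the 4^(n-1)/(2n^2 - n) bound
```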
Running time: O(n³).
Algorithm matrixChain(P):
    Input: sequence P = p0, p1, …, pn of dimensions (matrix i is p_{i-1} × p_i)
    Output: number of operations in an optimal parenthesization
    n ← length(P) − 1
    for i ← 1 to n do
        C[i, i] ← 0
    for l ← 2 to n do
        for i ← 1 to n − l + 1 do
            j ← i + l − 1
            C[i, j] ← +∞
            for k ← i to j − 1 do
                C[i, j] ← min{ C[i, j], C[i, k] + C[k+1, j] + p_{i-1}·p_k·p_j }
    return C[1, n]
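The same algorithm in Python, run on the example instance from above (A 50×20, B 20×1, C 1×10, D 10×100):

```python
def matrix_chain(p):
    """p = dimension sequence: matrix i is p[i-1] x p[i].
    Returns the minimum number of scalar multiplications."""
    n = len(p) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                 # chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            C[i][j] = float("inf")
            for k in range(i, j):             # split between matrices k and k+1
                C[i][j] = min(C[i][j],
                              C[i][k] + C[k + 1][j] + p[i - 1] * p[k] * p[j])
    return C[1][n]

print(matrix_chain([50, 20, 1, 10, 100]))  # -> 7000, via (A x B) x (C x D)
```

Note the optimal split differs from the (A × (B × C)) × D evaluation shown earlier: (A × B) × (C × D) costs 1000 + 1000 + 5000 = 7000 multiplications.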