
Lower Bounds in Greedy Model

Sashka Davis

Advised by Russell Impagliazzo

(Slides modified by Jeff)

UC San Diego

October 6, 2006

Suppose you have to solve a problem Π…

No Greedy alg. exists? Or I didn’t think of one?

Is there a Dynamic Programming algorithm that solves Π?

Is there a Backtracking algorithm that solves Π?

Is there a Greedy algorithm that solves Π?

Eureka! I have a DP algorithm!

No Backtracking alg. exists? Or I didn’t think of one?

Is my DP algorithm optimal, or does a better one exist?

Suppose we have a formal model of each algorithmic paradigm

Is there a Dynamic Programming alg. that solves Π?

Is my algorithm optimal, or does a better DP algorithm exist?

No Greedy algorithm can solve Π exactly.

Is there a Greedy algorithm that solves Π?

Is there a Backtracking algorithm that solves Π?

No Backtracking algorithm can solve Π exactly.

DP helps!

Yes, it is! Because NO DP alg. can solve Π more efficiently.

The goal
  • To build a formal model of each of the basic algorithmic design paradigms that captures the strengths of the paradigm.
  • To develop a lower bound technique, for each formal model, that can prove negative results for all algorithms in the class.
Using the framework we can answer the following questions

1. When solving problems exactly:

What algorithmic design paradigm can help?

  • Either no algorithm within a given formal model can solve the problem exactly,
  • or we find an algorithm that fits a given formal model.

2. Is a given algorithm optimal?

  • Prove a lower bound matching the upper bound for all algorithms in the class.

3. Solving the problems approximately:

  • What algorithmic paradigm can help?
  • Is a given approximation scheme optimal within the formal model?
Some of our results

[Diagram: the hierarchy of formal models: Online; FIXED PRIORITY and ADAPTIVE PRIORITY (the Greedy models); pBT (Backtracking & Simple DP, tree); pBP (Dynamic Programming).]

On-line algorithms

Γ is a set of data items; Σ is a set of options

Input: instance I = {γ1, γ2, …, γn}, I ⊆ Γ

Output: solution S = {(γi, σi) | i = 1, 2, …, d}; σi ∈ Σ

1. Order: Data items arrive in a worst-case order chosen by the adversary.

2. Loop, considering each γi in that order:

  • Make an irrevocable decision σi ∈ Σ
Fixed priority algorithms

Γ is a set of data items; Σ is a set of options

Input: instance I = {γ1, γ2, …, γn}, I ⊆ Γ

Output: solution S = {(γi, σi) | i = 1, 2, …, d}; σi ∈ Σ

1. Order: The algorithm chooses a fixed priority function π: Γ → R+ without looking at I.

2. Loop, considering each γi in that order:

  • Make an irrevocable decision σi ∈ Σ
Adaptive priority algorithms

Γ is a set of data items; Σ is a set of options

Input: instance I = {γ1, γ2, …, γn}, I ⊆ Γ

Output: solution S = {(γi, σi) | i = 1, 2, …, d}; σi ∈ Σ

Loop:

- Order: The algorithm reorders, choosing a new π: Γ → R+ without looking at the rest of I.

- Consider the next γi in the current order.

  • Make an irrevocable decision σi ∈ Σ
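The models above share one loop; here is a minimal Python sketch of that shared template, assuming data items are opaque objects and that `decide` returns an option from Σ. The names `priority` and `decide` are illustrative, not part of the formal model; an online algorithm is the special case in which the adversary, not the algorithm, controls the order.

```python
def priority_algorithm(instance, priority, decide, adaptive=False):
    """Sketch of the priority-algorithm template (fixed vs. adaptive).

    priority(item, partial) -> real-valued key (smaller = considered earlier);
                               in the fixed model it must ignore `partial`.
    decide(item, partial)   -> an irrevocable option sigma from Sigma.
    """
    # Fixed model: the order is committed to once, before any decision is made.
    remaining = sorted(instance, key=lambda g: priority(g, []))
    solution = []
    while remaining:
        if adaptive:
            # Adaptive model: reorder the not-yet-seen items after each decision.
            remaining.sort(key=lambda g: priority(g, solution))
        item = remaining.pop(0)
        solution.append((item, decide(item, solution)))
    return solution
```

Kruskal's algorithm below is a fixed instance of this template (priority = edge weight, decision = accept iff the forest stays acyclic); Prim's and Dijkstra's are adaptive instances.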
Fixed priority “Back Tracking”

Γ is a set of data items; Σ is a set of options

Input: instance I = {γ1, γ2, …, γn}, I ⊆ Γ

Output: solution S = {(γi, σi) | i = 1, 2, …, d}; σi ∈ Σ

1. Order: The algorithm chooses π: Γ → R+ without looking at I.

2. Loop, considering each γi in that order:

  • Make a set of decisions σi ⊆ Σ (one of which will be the final decision).
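A hedged sketch of the branching idea: the order is still fixed up front, but for each data item the algorithm may keep several of the allowed decisions alive, growing a tree of partial solutions. The width cap `max_width` and the value-based pruning are illustrative choices in this sketch, not part of the formal pBT definition.

```python
def fixed_priority_bt(instance, priority, options, value, max_width=64):
    """Fixed-priority backtracking sketch: one fixed order, a tree of decisions.

    options(item, partial) -> candidate decisions for this item (a non-empty
                              subset of Sigma, assumed non-empty here)
    value(partial)         -> objective value of a (partial) solution
    """
    items = sorted(instance, key=priority)    # order fixed before any decision
    frontier = [[]]                           # current leaves of the decision tree
    for item in items:
        next_frontier = []
        for partial in frontier:
            for sigma in options(item, partial):
                next_frontier.append(partial + [(item, sigma)])
        # Keep the tree narrow; how to prune is a heuristic choice in this sketch.
        next_frontier.sort(key=value, reverse=True)
        frontier = next_frontier[:max_width]
    return max(frontier, key=value)
```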

Some of our results

[Diagram: problems and algorithms placed in the hierarchy (Online, FIXED PRIORITY, ADAPTIVE PRIORITY, pBT, pBP): Maximum Matching in bipartite graphs (Flow algorithms), Shortest Path in graphs with negative weights and no cycles (Bellman-Ford), Shortest Path with non-negative weights (Dijkstra’s), and Minimum Spanning Tree (Kruskal’s, Prim’s).]

Some of our results

[Diagram: Shortest Path with non-negative weights (Dijkstra’s) and Minimum Spanning Tree (Kruskal’s, Prim’s) placed in the hierarchy.]

Kruskal’s algorithm for MST is a fixed priority algorithm

Input (G=(V,E), ω: E →R)

  • Initialize empty solution T
  • L = Sorted list of edges in non-decreasing order according to their weight
  • while (L is not empty)
    • e = next edge in L
    • Add e to T as long as T remains a forest; remove e from L
  • Output T
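A runnable sketch of this fixed-priority reading of Kruskal's algorithm, assuming edges are `(u, v, weight)` triples; the union-find helper is an implementation detail used only to test that T stays a forest, not part of the slide's pseudocode.

```python
def kruskal(vertices, edges):
    """Fixed priority: sort the edges once by weight, then decide irrevocably."""
    parent = {v: v for v in vertices}              # union-find over the forest T

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]          # path halving
            x = parent[x]
        return x

    T = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):   # non-decreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                               # accept only if T stays a forest
            parent[ru] = rv
            T.append((u, v, w))
    return T

# Example: kruskal({'a', 'b', 'c'}, [('a', 'b', 1), ('b', 'c', 2), ('a', 'c', 3)])
# returns [('a', 'b', 1), ('b', 'c', 2)].
```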
Prim’s algorithm for MST is an adaptive priority algorithm

Prim’s algorithm

Input: G = (V, E), ω: E → R

  • Initialize an empty tree T ← ∅; S ← ∅
  • Pick a vertex u; S = {u}
  • for (i = 1 to |V| − 1)
    • (u, v) = argmin { ω(u, v) : (u, v) ∈ cut(S, V − S) }
    • S ← S ∪ {v}; T ← T ∪ {(u, v)}
  • Output T
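A sketch of Prim's algorithm written to mirror the adaptive loop above; `adj` is an assumed adjacency map from each vertex to a list of `(neighbour, weight)` pairs.

```python
import heapq

def prim(vertices, adj, start):
    """Adaptive priority: after each accepted vertex, the candidate edges
    crossing the cut (S, V - S) are reconsidered."""
    S = {start}
    T = []
    heap = [(w, start, v) for v, w in adj[start]]       # edges leaving S
    heapq.heapify(heap)
    while len(S) < len(vertices) and heap:
        w, u, v = heapq.heappop(heap)                   # cheapest edge over the cut
        if v in S:
            continue                                    # no longer crosses the cut
        S.add(v)
        T.append((u, v, w))
        for x, wx in adj[v]:
            if x not in S:
                heapq.heappush(heap, (wx, v, x))
    return T
```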
Dijkstra’s shortest paths algorithm is an adaptive priority algorithm
  • Dijkstra algorithm (G=(V,E), s  V)
  • T←∅; S←{s};
  • Until (S≠V)
  • Find e=(u,x) | e = mineCut(S, V-S){path(s, u)+ω(e)}
  • T← T{e}; S ← S {x}

Some of our results

ShortPath Problem: Given a graph G = (V, E), ω: E → R+, and s, t ∈ V, find a directed tree of edges rooted at s such that the total weight of the path from s to t is minimal.

  • Data items are edges of the graph
  • Decision options = {accept, reject}
  • Theorem: No Fixed priority algorithm can achieve any constant approximation ratio for the ShortPath problem
Fixed priority game

[Diagram: the Solver orders the universe Γ0 of data items (γ1, γ2, γ3, …, γd) up front; the Adversary then repeatedly shrinks the instance (Γ0 ⊇ Γ1 ⊇ Γ2 ⊇ Γ3 ⊇ … = ∅) while the Solver commits to a decision on each item it is shown, building S_sol = {(γi2, σi2)}, then S_sol = {(γi2, σi2), (γi4, σi4)}. At the end of the game the Adversary announces its own solution S_adv = {(γi2, σ*i2), (γi4, σ*i4)} and the Solver is awarded its payoff.]

Adversary selects Γ0

[Diagram: Γ0 is the set of edges of a small graph on vertices s, a, b, t, with edges and weights u(k), v(1), y(1), x(1), z(1), w(k).]
Solver selects an order on Γ0

If … then the Adversary presents:

[Diagram: the graph on s, a, b, t with edges u(k), y(1), x(1), z(1); the edges v(1) and w(k) are not presented.]

Adversary’s strategy

The Adversary waits until the Solver considers edge y(1). (The Solver will consider y(1) before z(1).)

  • Event 1: σy = accept
  • Event 2: σy = reject

Event 1: Solver accepts y(1)

[Diagram: the graph on s, a, b, t with edges u(k), y(1), x(1), z(1).]

The Solver constructs the path {u, y}.

The Adversary outputs the solution {x, z}.
Event 2: Solver rejects y(1)

[Diagram: the graph on s, a, b, t with edges u(k), y(1), x(1), z(1).]

The Solver fails to construct a path.

The Adversary outputs the solution {u, y}.

The outcome of the game:
  • The Solver either fails to output a solution or achieves an approximation ratio of (k+1)/2: its path {u, y} has weight k + 1, while the Adversary’s path {x, z} has weight 2.
  • The Adversary can make k arbitrarily large, and can therefore force the Algorithm into an arbitrarily large approximation ratio.
Some of our results

[Diagram: Shortest Path with non-negative weights in the hierarchy; Dijkstra’s is an adaptive priority algorithm, while by the theorem above no fixed priority algorithm achieves a constant approximation ratio.]

Some of our results

[Diagram: Interval Scheduling (value is width) in the hierarchy, with factor-of-3 bounds marked.]

Interval scheduling on a single machine
  • Instance: a set of intervals I = (i1, i2, …, in), where ij = [rj, dj]
  • Problem: schedule intervals on a single machine
  • Solution: S ⊆ I
  • Objective function: maximize Σij∈S (dj − rj)
A simple solution (LPT)

Longest Processing Time algorithm

Input: I = (i1, i2, …, in)

  • Initialize S ← ∅
  • Sort the intervals in decreasing order of length (dj − rj)
  • while (I is not empty)
    • Let ik be the next interval in the sorted order
    • If ik can be scheduled, then S ← S ∪ {ik}
    • I ← I \ {ik}
  • Output S
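A runnable sketch of LPT, assuming each interval is an `(r, d)` pair; "can be scheduled" is read as "does not overlap any interval already in S".

```python
def lpt(intervals):
    """Longest Processing Time: a fixed priority algorithm for interval
    scheduling with proportional profit on a single machine."""
    S = []
    # Fixed order: decreasing length d - r.
    for r, d in sorted(intervals, key=lambda iv: iv[1] - iv[0], reverse=True):
        # Irrevocable decision: accept iff it does not overlap anything scheduled.
        if all(d <= r2 or r >= d2 for r2, d2 in S):
            S.append((r, d))
    return S

# Example: lpt([(0, 4), (3, 6), (5, 9)]) schedules (0, 4) and (5, 9).
```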
LPT is a 3-approximation

[Diagram: a scheduled LPT interval [ri, di] and the OPT intervals that overlap it.]

  • LPT sorts the intervals in decreasing order according to their length
  • 3·LPT ≥ OPT: every interval of OPT that LPT rejects overlaps some already scheduled LPT interval that is at least as long, and the OPT intervals charged to a single LPT interval [ri, di] have total length at most 3(di − ri)

Example lower bound [BNR02]
  • Theorem 1: No adaptive priority algorithm can achieve an approximation ratio better than 3 for the interval scheduling problem with proportional profit on a single machine
Proof of Theorem 1

[Diagram: the Adversary’s initial set of intervals, with lengths including 1, 2, 3, q−1, q−1, and q.]

  • Adversary’s move: presents the initial set of intervals
  • Algorithm’s move: the Algorithm selects an ordering
  • Let i be the interval with highest priority
Adversary’s strategy

[Diagram: the remaining intervals.]

  • If the Algorithm decides not to schedule i:
  • During the next round the Adversary removes all remaining intervals and schedules interval i

Alg’s value = 0

Adv’s value = i

Adversary’s strategy

[Diagram: the restricted sequence of intervals.]

  • If i = (the interval shown in the diagram) and the Algorithm schedules i:
  • During the next round the Adversary restricts the sequence:

Alg’s value = i

Adv’s value = (i−1) + 3(i/3) + (i+1) = 3i

Adversary’s strategy

[Diagram: the restricted sequence of intervals.]

  • If i = (the interval shown in the diagram) and the Algorithm schedules i:
  • During the next round the Adversary restricts the sequence:

Alg’s value = 1

Adv’s value = 3(1/3) + 2 = 3

Adversary’s strategy

[Diagram: the restricted sequence of intervals.]

  • If i = (the interval shown in the diagram) and the Algorithm schedules i:
  • During the next round the Adversary restricts the sequence:

Alg’s value = q

Adv’s value = (q−1) + 3(q/3) + (q−1) = 3q − 1

But q is big

Adversary’s strategy

[Diagram: the restricted sequence of intervals.]

  • If i = (the interval shown in the diagram) and the Algorithm schedules i:
  • During the next round the Adversary restricts the sequence:

Alg’s value = i

Adv’s value = 3i

Some of our results

[Diagram: Interval Scheduling (value is width) in the hierarchy, with factor-of-3 bounds at two levels and a “?” at another.]

The algorithm was messed up before it got a chance to reorder things.

Some of our results

[Diagram: Weighted Vertex Cover in the hierarchy, with a factor-of-2 bound marked.]


Weighted Vertex Cover

[Joh74] greedy 2-approximation for WVC

Input: instance G with weights on nodes.

Output: a solution S ⊆ V that covers all edges and minimizes the total weight of the chosen nodes.

Repeat until all edges are covered:

  • Take v minimizing ω(v)/(# uncovered adj edges)
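A small sketch of this greedy rule, assuming `weights` maps vertices to ω(v) and `edges` is a list of vertex pairs; ties are broken arbitrarily, and the names are illustrative.

```python
def greedy_wvc(weights, edges):
    """[Joh74]-style greedy: repeatedly take the vertex minimizing
    weight / (number of incident edges still uncovered)."""
    uncovered = {frozenset(e) for e in edges}
    cover = set()
    while uncovered:
        def score(v):
            deg = sum(1 for e in uncovered if v in e)
            return weights[v] / deg if deg else float('inf')
        v = min(weights, key=score)
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover

# Example: greedy_wvc({'a': 1, 'b': 3, 'c': 1}, [('a', 'b'), ('b', 'c')])
# returns {'a', 'c'}.
```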

Weighted Vertex Cover

  • With Shortest Path, a data item is an edge of the graph:
    • γ = (⟨u, v⟩, ω(⟨u, v⟩))
  • With Weighted Vertex Cover, a data item is a vertex of the graph:
    • γ = (v, ω(v), adj_list(v))
  • (This is stronger than having the items be edges, because the algorithm gets more information from a node.)

Theorem: No adaptive priority algorithm can achieve an approximation ratio better than 2.

Adaptive priority game

[Diagram: in each round the Solver reorders the remaining data items (γ1, …, γ12), takes the highest-priority one, and commits to a decision, while the Adversary prunes the remaining instance (Γ0 ⊇ Γ1 ⊇ Γ2 ⊇ Γ3). The Solver’s solution grows: S_sol = {(γ7, σ7)}, then {(γ7, σ7), (γ4, σ4)}, then {(γ7, σ7), (γ4, σ4), (γ2, σ2)}.]

  • The game ends:
  • S_adv = {(γ7, σ*7), (γ4, σ*4), (γ2, σ*2)}
  • The Solver is awarded the payoff f(S_sol)/f(S_adv)
The Adversary chooses instances to be graphs Kn,n

[Diagram: a complete bipartite graph Kn,n with vertex weights 1 and n².]

The weight function is ω: V → {1, n²}

The game
  • Data items
    • Each node appears as two separate data items, one with weight 1 and one with weight n²
  • Solver’s move
    • Chooses a data item and commits to a decision
  • Adversary’s move
    • Removes from the next Γt the data item corresponding to the node just committed, and …
Adversary’s strategy: wait until one of the following events

[Diagram: the Kn,n instance with weight-1 nodes.]

  • Event 1: the Solver accepts a node of weight n²
  • Event 2: the Solver rejects a node of any weight
  • Event 3: the Solver has committed to all but one node on one side of the bipartition

Event 1: Solver accepts a node with ω(v) = n²

[Diagram: Kn,n with the accepted weight-n² node and weight-1 nodes elsewhere.]

The Adversary chooses side B of the bipartition as a cover, and incurs cost n.

The cost of a cover for the Solver is at least n² + n − 1.

Event 2: Solver rejects a node of any weight

[Diagram: Kn,n with weight-n² nodes.]

The Adversary chooses side A of the bipartition as a cover.

The Solver must choose side B of the bipartition as a cover.

Event 3: Solver commits to n−1 nodes with ω(v) = 1 on one side of Kn,n

[Diagram: Kn,n with the n−1 committed weight-1 nodes.]

The Adversary chooses side B of the bipartition as a cover, and incurs cost n.

The cost of a cover for the Solver is 2n − 1.
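For the two events whose costs the slides state explicitly, here is a tiny sketch of the Solver-to-Adversary cost ratios the Adversary forces; as n grows, the weaker of the two still approaches 2.

```python
def wvc_event_ratios(n):
    """Cost ratios (Solver / Adversary) in Events 1 and 3 above."""
    event1 = (n * n + n - 1) / n    # Solver >= n^2 + n - 1, Adversary pays n
    event3 = (2 * n - 1) / n        # Solver pays 2n - 1,    Adversary pays n
    return event1, event3

# wvc_event_ratios(100) == (100.99, 1.99); Event 3 alone forces 2 - 1/n -> 2.
```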


Some of our results

[Diagram: Facility Location in the hierarchy, with a factor-of-log n bound marked.]

Facility location problem
  • An instance is a set of cities and a set of facilities
    • The set of cities is C = {1, 2, …, n}
    • Each facility fi has an opening cost cost(fi) and connection costs for each city: {ci1, ci2, …, cin}
  • Problem: open a collection of facilities such that each city is connected to at least one open facility
  • Objective function: minimize the opening and connection costs, min( Σfi∈S cost(fi) + Σj∈C minfi∈S cij )
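As a concrete reading of the objective, a small sketch that evaluates one candidate set of open facilities; the argument names (`opening_cost`, `conn`) are illustrative, with connection costs given as `conn[f][j]`.

```python
def facility_location_cost(open_facilities, opening_cost, conn, cities):
    """Total cost = opening costs of the chosen facilities plus, for each
    city, the cheapest connection to some open facility."""
    opening = sum(opening_cost[f] for f in open_facilities)
    connection = sum(min(conn[f][j] for f in open_facilities) for j in cities)
    return opening + connection
```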
[AB02] result
  • Theorem: No adaptive priority algorithm can achieve an approximation ratio better than log(n) for facility location in arbitrary spaces
Adversary presents the instance:
  • Cities: C={1,2,…,n}, where n=2k
  • Facilities:
    • Each facility has opening cost n
    • City connection costs are 1 or ∞
    • Each facility covers exactly n/2 cities
    • cover(fj) = {i | i ∈ C, cji = 1}

Cu denotes the set of cities not yet covered by the solution of the Algorithm

Adversary’s strategy

At the beginning of each round t:

  • The Adversary chooses St to consist of the facilities f such that f ∈ St iff |cover(f) ∩ Cu| = n/2^t
  • The number of uncovered cities is |Cu| = n/2^(t−1)

Two facilities are complementary if together they cover all cities in C. For every round t, St consists of complementary facilities.

The game

[Diagram: the set of uncovered cities Cu, halving from round to round.]

End of the game
  • Either the Algorithm opened log(n) facilities or it failed to produce a valid solution
  • The cost of the Algorithm’s solution is n·log(n) + n
  • The Adversary opens two complementary facilities and incurs total cost 2n + n