Algorithms for Orienteering and Discounted-Reward TSP
Presentation Transcript



Algorithms for Orienteering and Discounted-Reward TSP

Shuchi Chawla

Carnegie Mellon University

Joint work with

Avrim Blum, Adam Meyerson, David Karger, Maria Minkoff and Terran Lane



The focus of our paper

[Figure: reward vs. time trade-off for Orienteering and Discounted-Reward TSP]

  • Given weighted graph G, root s, rewards π(v) on nodes v
  • Construct a path P rooted at s
  • High-level objective: collect large reward in little time
    • Orienteering
      Maximize reward collected with a path of length at most D
    • Discounted-Reward TSP
      Reward from node v, if reached at time t, is π(v)·γ^t

Shuchi Chawla, Carnegie Mellon University



The focus of our paper

  • Given weighted graph G, root s, rewards π(v) on nodes v
  • Construct a path P rooted at s
  • High-level objective: collect large reward in little time
    • Orienteering (no approximation algorithm known previously for the rooted non-geometric version)
      Maximize reward collected with a path of length at most D
    • Discounted-Reward TSP (new problem)
      Reward from node v, if reached at time t, is π(v)·γ^t
  • A related problem…
    • K-Traveling Salesperson (best known: (2+ε)-approximation [Garg], [Arora, Karpinski], …)
      Minimize length while collecting at least K in reward


Our contributions

  Problem                  Source/Reduction         Approximation
  K-path (CP)              [Chaudhuri et al.'03]    2+ε
  Min-Excess Path (EP)     1.5·CP − 0.5             2.5+ε (2+ε)
  Orienteering             1+⌈EP⌉                   4
  Discounted-Reward TSP    (1+EP)·(1+1/EP)^EP       8.12+ε (6.75+ε)

  (Parenthesized values use the improved Min-Excess factor EP = 2+ε.)
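As a quick check, the reduction formulas in the table can be evaluated numerically (a sketch of my own; epsilons are dropped, and `disc` is an illustrative name for the Discounted-Reward factor):

```python
import math

# Evaluate the reductions from the table above (epsilons dropped).
cp = 2.0                                     # K-path factor [Chaudhuri et al.'03]
ep = 1.5 * cp - 0.5                          # Min-Excess Path: 1.5*CP - 0.5 = 2.5
orienteering = 1 + math.ceil(ep)             # 1 + ceil(EP) = 4
disc = lambda e: (1 + e) * (1 + 1 / e) ** e  # Discounted-Reward TSP factor

print(ep, orienteering)          # 2.5 4
print(round(disc(ep), 2))        # 8.12
print(round(disc(2.0), 2))       # 6.75 (with the improved EP = 2)
```

Note that 1+⌈EP⌉ equals 4 for both EP = 2.5 and EP = 2, which is why only the Min-Excess and Discounted-Reward entries change.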


A Robot Navigation Problem

  • Task: deliver packages to locations in a building

  • Faster delivery => greater happiness

  • Classic formulation – Traveling Salesperson Problem

    Find the shortest tour covering all locations

  • Uncertainty in robot’s lifetime/behavior

    • battery failure; sensor error…

    • Robot may fail before delivering all packages

  • Deliver as many packages as possible

    • Some packages have higher priority than others


Robot Navigation: A probabilistic view

Discounted-Reward TSP:

  • At every time step, the robot has a fixed probability (1−γ) of failing
  • If a package with value π is delivered at time t, the expected reward is γ^t·π
  • Goal: construct a path such that the total “discounted reward” collected is maximized

Orienteering:

  Alternately, the robot has a fixed battery life D
  Goal: construct a path of length at most D that collects maximum reward
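The equivalence between per-step failure and exponential discounting is easy to verify by simulation (my own sketch, not from the talk; `gamma`, `t`, and `pi` are illustrative values):

```python
import random

# With failure probability (1 - gamma) per step, the robot survives t steps
# with probability gamma**t, so a package of value pi delivered at time t is
# worth gamma**t * pi in expectation.
gamma, t, pi = 0.9, 5, 10.0
random.seed(0)
trials = 100_000
survived = sum(all(random.random() < gamma for _ in range(t))
               for _ in range(trials))
estimate = survived / trials * pi
exact = gamma ** t * pi          # 0.9**5 * 10 = 5.9049
print(round(exact, 4))           # 5.9049
# estimate should land close to exact
```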


Rest of this talk

  • The Min-Excess problem

  • Using Min-Excess to solve Orienteering

  • Solving Min-Excess

  • Using Min-Excess to solve Discounted-Reward TSP

  • Extensions and open problems


Using K-path directly

  • First attempt – Use distance-based approximations to approximate reward

  • Let OPT(d) = max achievable reward with length d

  • A 2-approx for distance implies that ALG(d) ≥ OPT(d/2)

  • However, we may have OPT(d/2) << OPT(d)

  • Bad trade-off between distance and reward!

  • Same problem with Discounted-Reward TSP
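To see how extreme the trade-off can be, consider a hypothetical two-node instance: all the reward sits at distance exactly d from the root, so halving the budget erases the reward entirely.

```python
# Toy instance (hypothetical): a single edge of length d from root s to a
# node v that holds all the reward, so OPT(d/2) << OPT(d).
d = 10
reward = {"s": 0, "v": 100}
dist_from_s = {"v": d}

def opt(budget):
    # Best reward reachable from s within the length budget
    # (brute force on this tiny instance).
    best = reward["s"]
    if dist_from_s["v"] <= budget:
        best += reward["v"]
    return best

print(opt(d))       # 100
print(opt(d // 2))  # 0
```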


Approximating Orienteering

[Figure: optimal s–t path divided into segments]

  • Using a distance-based approx

    • Divide the optimal path into many segments

    • Approximate the max-reward segment using the distance saved by short-cutting the other segments

  • If min-distance between s and v is d, we spend at least d in going to v, regardless of the path


Approximating Orienteering

Min-Excess Path Problem

  • Using a distance-based approx

    • Divide the optimal path into many segments

    • Approximate the max-reward segment using the distance saved by short-cutting the other segments

  • If min-distance between s and v is d, we spend at least d in going to v, regardless of the path

  • Approximate the “extra” length taken by a path over the shortest path length

  • If OPT obtains reward k with length d + ε, ALG should obtain the same reward with length d + c·ε (approximating only the excess ε)


From Min-Excess to Orienteering

[Figure: s–t path with excess split into pieces of ε_t/3 and 2ε_t/3]

  • There exists a path from s to t that
    • collects reward at least Π
    • has length ≤ D
    • t is the farthest node from s among all nodes on the path
  • Excess at node v = ε(v) = extra time taken to reach v = d_P(v) − d(v)

[Annotations: ε_t ≤ D − d_t; new excess = ε_t/3; can afford an excess up to ε_t]


From Min-Excess to Orienteering

[Figure: s–t path with excess split into pieces of ε_t/3 and 2ε_t/3]

  • There exists a path from s to t that
    • collects reward at least Π
    • has length ≤ D
    • t is the farthest node from s among all nodes on the path
  • For any integer r, ∃ a path from s to v that
    • collects reward ≥ Π/r
    • has excess ≤ (D−d_t)/r ≤ (D−d_v)/r
  • Using an r-approximation for Min-Excess, we get an r-approximation
  • Note: if t is not the farthest node, a similar analysis gives an (r+1)-approximation

[Annotations: ε_t ≤ D − d_t; new excess = ε_t/3; can afford an excess up to ε_t]
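The averaging step in this argument can be sketched in code: cut the path into r bands of equal excess; by pigeonhole, some band carries at least a 1/r fraction of the reward. (The data and helper below are illustrative, not from the talk.)

```python
# Pigeonhole sketch: nodes listed along OPT's path as (reward, excess),
# with excess nondecreasing; split into r equal-excess bands, keep the best.
def best_band(nodes, r, total_excess):
    bands = [0.0] * r
    for reward, excess in nodes:
        i = min(int(excess / total_excess * r), r - 1)
        bands[i] += reward
    return max(bands)

nodes = [(5, 0.0), (3, 0.5), (4, 1.1), (2, 1.9), (6, 2.4)]
total = sum(rw for rw, _ in nodes)      # 20
best = best_band(nodes, 3, total_excess=3.0)
print(best)                              # 8.0
assert best >= total / 3                 # pigeonhole guarantee
```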


Solving Min-Excess

  • OPT = d + ε; k-path gives us ALG = c·(d + ε)
    We want ALG = d + c·ε
  • Note: when ε ≈ d, c·(d + ε) ≈ d + O(ε)
  • Idea: when ε is large, approximate using k-path
  • What if ε ≪ d?
  • Small ε ⇒ the path is almost like a shortest path,
    or “its distance from s mostly increases monotonically”


Solving Min-Excess

  • OPT = d + ε; k-path gives us ALG = c·(d + ε)
    We want ALG = d + c·ε
  • Note: when ε ≈ d, c·(d + ε) ≈ d + O(ε)
  • Idea: when ε is large, approximate using k-path
  • What if ε ≪ d?
  • Small ε ⇒ the path is almost like a shortest path,
    or “its distance from s mostly increases monotonically”
  • Idea: a completely monotone path ⇒ use dynamic programming!

  Patch segments together using dynamic programming

[Figure: s–t path decomposed into alternating monotone and wiggly segments]
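A sketch of the decomposition (my own illustration; the real algorithm works over the graph, but classifying a path's vertices by their shortest-path distance from s shows the idea):

```python
# Split a path, given each vertex's shortest-path distance from s, into
# maximal "monotone" (distance strictly increasing) and "wiggly" runs.
# Adjacent runs share their boundary vertex, as the segments on the slide do.
def decompose(dists):
    segments, run, kind = [], [dists[0]], None
    for prev, cur in zip(dists, dists[1:]):
        k = "monotone" if cur > prev else "wiggly"
        if kind in (None, k):
            kind, run = k, run + [cur]
        else:
            segments.append((kind, run))
            kind, run = k, [prev, cur]
    segments.append((kind or "monotone", run))
    return segments

print(decompose([0, 2, 5, 4, 6, 7, 9]))
# [('monotone', [0, 2, 5]), ('wiggly', [5, 4]), ('monotone', [4, 6, 7, 9])]
```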


Solving Discounted-Reward TSP

[Figure: s–t path; v is the first node reached with excess 1, i.e., one half-life]

  • WLOG γ = ½. Reward of node v at time t = π(v)·2^(−t)
  • An interesting observation:
    OPT collects half of its reward before the first node that has excess 1
  • Therefore, approximate the min-excess path from s to v
  • The new path has excess ≤ 3, so the reward decreases by a factor of at most 2³ = 8
  • ⇒ 16-approximation

’ = 2OPT(v,t) > OPT

reward ¸OPT/2

length of entire remaining path decreases by 1
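The constants in this slide's argument multiply out as follows (a restatement of the arithmetic, with γ = ½):

```python
# Losing the reward after v costs a factor of 2; approximating the excess
# (1 -> at most 3) costs a discount factor of 2**3 = 8; together, 16.
gamma = 0.5
excess_after_approx = 3
discount_loss = (1 / gamma) ** excess_after_approx  # 8.0
half_reward_loss = 2
print(half_reward_loss * discount_loss)             # 16.0
```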


A summary of our results

  Problem                  Source/Reduction         Approximation
  K-path (CP)              [Chaudhuri et al.'03]    2+ε
  Min-Excess Path (EP)     1.5·CP − 0.5             2+ε
  Orienteering             1+⌈EP⌉                   4
  Discounted-Reward TSP    (1+EP)·(1+1/EP)^EP       6.75+ε


Some extensions

  • Unrooted versions

  • Multiple robots

  • Max-reward Steiner tree of bounded size


Future work…

  • Improve the approximations

    • 2-approx for Orienteering?

  • Robot Navigation

    • A highly complex process with various kinds of uncertainty

    • Can we model the MDP as a simple graph problem?

  • Different deadlines for different packages
