Presentation Transcript
Multi-Vehicle Exploration and Routing in Adversarial Environments

Eric Feron

Farmey Joseph

Jerome le Ny

Program Review Meeting

The Stata Center at MIT

June 22, 2005

Multi-Vehicle Exploration and Routing in Adversarial Environments

  • Exploration for UAVs
  • Ambush-aware Routing
  • Exploration with stochastic agents

Routing
A common situation in hostile environments: a person must travel from an origin to a destination on a regular schedule. For example:

Military Police drive from base each day to school or hospital to provide security

Humanitarian convoy leaves supply base each day to deliver food, medicines, etc. to aid dispersal site

Ambassador makes daily trip from home to embassy

The routing must be randomized so as to minimize the probability of being ambushed by hostile forces.

Determine an optimal mixed strategy for choosing among the possible routes between the origin and the destination.
Previous Work
  • Exact solutions have been found for a number of specialized ambush games
    • [Ruckle, 1983], [Baston & Bostock, 1987] solve a continuous ambush game: Player 1 navigates from one side of a rectangle to the other, Player 2 sets finite-length barriers.
    • [Woodward, 2003] solves a discretized version of the above game.
  • These solutions are difficult to generalize to an arbitrary city road network
The VIP Transport Game
Game is played on a graph with directed edges

edges correspond to usable roads

nodes correspond to road intersections

Player 1 (VIP) chooses a path from origin node to destination node

Player 2 (hostile forces) simultaneously chooses k nodes as ambush locations

k typically small, dependent on enemy resources

If P1 is ambushed at node i, the game outcome is a_i > 0.

a_i corresponds to the level of exposure of node i to an ambush

If P1's path avoids all ambush nodes, the game outcome is 0.

Assumptions
  • The game is repeated an infinite number of times
  • Both players have complete information about the environment, including origin and destination nodes
  • If Player 1’s path passes through more than one ambush site, then the game outcome is equal to the sum of the outcomes of all ambushes.
  • All ambushes take place at nodes rather than along edges
  • Player 2 is intelligent and will place ambushes according to the optimal strategy in response to Player 1’s chosen mixed strategy
Problem
  • Given:
    • graph G = (N, E)
    • origin node n_o
    • destination node n_d
    • ambush outcome values a_i for each node
  • Find:
    • Player 1's optimal mixed strategy for choosing among all paths between origin and destination, such that the expected outcome of the game is minimized
Naive Approach
  • Determine all possible routes from origin to destination for Player 1
  • Determine all possible ambush strategies for Player 2
  • Form the game matrix A, where A_ij = game outcome if P1 chooses route i and P2 chooses ambush strategy j
  • Solve an LP for P1's optimal mixed strategy p* (a toy sketch of this LP follows the list)
  • Drawbacks:
    • The number of decision variables equals the number of routes, which grows exponentially with the size of the graph
    • Massive game matrix A
    • Excessive memory requirements
    • Not a practical formulation
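A minimal sketch of this matrix-game LP, assuming a tiny hypothetical game matrix and using scipy.optimize.linprog as the solver (none of this is from the presentation):

```python
import numpy as np
from scipy.optimize import linprog

def p1_optimal_mixed_strategy(A):
    """min z  s.t.  A^T p <= z * 1,  sum(p) = 1,  p >= 0."""
    n_routes, n_ambushes = A.shape
    c = np.zeros(n_routes + 1)
    c[-1] = 1.0                                           # minimize z
    A_ub = np.hstack([A.T, -np.ones((n_ambushes, 1))])    # A^T p - z <= 0
    A_eq = np.hstack([np.ones((1, n_routes)), np.zeros((1, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_ambushes),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n_routes + [(None, None)])
    return res.x[:n_routes], res.x[-1]                    # p*, expected game value

# toy game: routes 1 and 2 each pass one of two exposed nodes, route 3 passes both
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
p_star, value = p1_optimal_mixed_strategy(A)
print("p* =", p_star, "expected outcome =", value)        # roughly (0.5, 0.5, 0), 0.5
```

Even in this toy example the matrix has one row per route, which is exactly what makes the formulation impractical on a real road network.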
Network Flow Approach

Concept

flow along an edge = probability of traversing that edge

  • inflow of 1 at origin
  • outflow of 1 at destination
  • flow vector ↔ mixed strategy: p = (p_1, p_2, ...)
  • number of decision variables = number of edges << number of routes

for the k-ambush problem: minimize the maximum a_i-weighted flow into any set of k nodes

Network Flow Approach

Formulation

  • Minimize
    • z = max a-weighted flow into any set of k nodes
  • Subject to:
    • a-weighted flow into any set of k nodes <= z
    • flow conservation at each node
    • flow nonnegativity

min z

s.t. Dp − 1z ≤ 0
     Ap = b
     p ≥ 0

Solve for Player 1’s optimal strategy (p*) and the expected value of the game outcome (z*)
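A minimal runnable sketch of this LP on a tiny hypothetical graph (the graph, the exposure values a_i, and k = 1 are illustrative, not the Cambridge example; D is built by explicitly enumerating the k-node subsets, and scipy's linprog stands in for whatever solver the authors used):

```python
# Sketch of the LP above: min z  s.t.  Dp - 1z <= 0,  Ap = b,  p >= 0.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]   # hypothetical directed road graph
a = np.array([0.0, 1.0, 1.0, 0.0])                 # ambush exposure a_i of each node
n, m, k = 4, len(edges), 1                         # k = number of simultaneous ambushes
origin, dest = 0, 3

# node-arc incidence matrix A for the flow-conservation constraints Ap = b
A = np.zeros((n, m))
for j, (u, v) in enumerate(edges):
    A[u, j] -= 1.0                                 # flow leaves node u
    A[v, j] += 1.0                                 # flow enters node v
b = np.zeros(n)
b[origin], b[dest] = -1.0, 1.0                     # one unit from origin to destination

# flow into each node along its incoming edges
inflow = np.zeros((n, m))
for j, (u, v) in enumerate(edges):
    inflow[v, j] = 1.0

# D: one row per k-subset of nodes, giving the a-weighted inflow into that subset
D = np.array([sum(a[i] * inflow[i] for i in S)
              for S in combinations(range(n), k)])

# decision variables x = (p_1 .. p_m, z); minimize z
c = np.zeros(m + 1)
c[-1] = 1.0
A_ub = np.hstack([D, -np.ones((D.shape[0], 1))])   # Dp - 1z <= 0
A_eq = np.hstack([A, np.zeros((n, 1))])            # Ap = b
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(D.shape[0]),
              A_eq=A_eq, b_eq=b,
              bounds=[(0, None)] * m + [(None, None)])
print("edge traversal probabilities p*:", res.x[:m])
print("expected game value z*:", res.x[-1])
```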

Example: Cambridge, MA

Scenario

  • 50 nodes, 91 edges
  • Fresh Pond to downtown Boston
  • Major roads only
  • Estimate of enemy capability: 2 ambushes
  • Ambush outcome values: a_i = min {distance to n_o, distance to n_d}
Example: Cambridge, MA

Solution

  • Line thickness is proportional to the probability of traversing that edge
  • Generate a random path based on the edge probabilities (see the sampling sketch below)
  • Solution time: 1.5 sec (Pentium 4, 1.8 GHz, 256 MB RAM)
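One simple way to realize the mixed strategy, sketched below with hypothetical variable names and assuming the optimal flow is acyclic: walk from the origin and, at every node, pick an outgoing edge with probability proportional to its flow.

```python
import random

def sample_route(flow, origin, dest, rng):
    """flow: dict mapping directed edge (u, v) -> optimal edge flow p*_{uv}."""
    route = [origin]
    while route[-1] != dest:
        u = route[-1]
        # outgoing edges of u that carry positive flow
        out = [(v, f) for (a, v), f in flow.items() if a == u and f > 1e-9]
        vs, weights = zip(*out)
        route.append(rng.choices(vs, weights=weights, k=1)[0])
    return route

# tiny hypothetical flow (origin 0, destination 3), e.g. the output of the LP sketch above
flow = {(0, 1): 0.5, (0, 2): 0.5, (1, 3): 0.5, (2, 3): 0.5}
print(sample_route(flow, 0, 3, random.Random(0)))   # e.g. [0, 2, 3]
```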
Variations
  • Multiple VIPs, different origins and destinations, traveling in separate vehicles
    • multicommodity network flow problem
  • Multiple VIPs, different origins and destinations, traveling in one vehicle
    • adjust flow constraints to set inflow = 1 at each origin and destination
Multi-Agent Exploration
  • What kind of agents?
    • Uncertain/hostile environment
    • Agents can fail: trade-off cost/reliability
    • Minimization of budget allocated to mission (assume well defined cost per agent)
  • The number and type (i.e. cost) of the agents is, in general, a parameter under our control
Agents for Exploration: Examples
  • High-cost agents:

autonomous vehicles with engineered mobility (UAVs, cars, …). Reliable; few agents involved.

  • Low-cost agents:

carried by the environment, with limited control of mobility. Cheap (batch fabrication, e.g. MEMS), individually unreliable, but redundancy can compensate.

Naturally Mobile Sensors
  • Ex: rats controlled via brain electrodes

(ref: Talwar et al., Nature 417, 2002)

  • Potential Applications include search and rescue operations and mine clearance
Line Exploration Models

Some exploration scenarios in 1-D

  • Deterministic agents dropped randomly on a line.

N agents dropped at x_1, …, x_N. Minimize the sum of the distances traveled to completely explore the line. How should the respective exploration segments R_1, …, R_N be chosen?
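A coarse numerical sketch of this partition problem, with hypothetical drop points and a grid search over the cut points. The cost of sweeping a segment from a given start point is taken as the segment length plus the distance to the nearer segment end; this is an illustration, not the authors' derivation.

```python
from itertools import combinations_with_replacement
import numpy as np

def total_cost(x, cuts, L):
    """Total distance when sorted agent i sweeps the contiguous segment [ends[i], ends[i+1]]."""
    ends = [0.0, *cuts, L]
    cost = 0.0
    for i, xi in enumerate(x):
        l, r = ends[i], ends[i + 1]
        # go to the nearer end of the assigned segment, then sweep to the other end
        cost += (r - l) + min(abs(xi - l), abs(xi - r))
    return cost

L = 1.0
x = np.array([0.2, 0.5, 0.9])                    # hypothetical drop points, sorted
grid = np.linspace(0.0, L, 101)
best = min((total_cost(x, cuts, L), cuts)
           for cuts in combinations_with_replacement(grid, len(x) - 1))
print("best cuts:", best[1], "minimal total distance:", round(best[0], 3))
```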

Deterministic Agents
  • Only one agent should switch direction
  • Optimal cost is
  • For a uniform distribution of agents, with c_a = cost per agent:

optimal # of agents:

Probabilistic Agents

(ex: naturally mobile sensors.)

Continuous model: speed is uncertain

control u(t) ∈ [−1, 1]

  • Discrete Model: random walk, controlled drift

Go right (p>q):

Goal: explore the line with one probabilistic agent

The two models give equivalent results; the analysis is easier with Brownian motion

Control Policies
  • Closed-loop control: the agent has positioning capability; a bang-bang controller is optimal: always head towards the closest extremity
  • Open-loop control: the closest extremity is known at the outset; send the agent towards that point until it reaches it, then towards the other end. No positioning (beyond the starting point) is necessary. (A simulation comparing the two policies is sketched below.)
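A Monte-Carlo sketch comparing the two policies on a biased random walk over {0, …, B}; all parameters are hypothetical, and the walk moves in the commanded direction with probability p and opposite otherwise.

```python
import random

def explore(B, start, p, closed_loop, rng):
    """Steps needed to visit both extremities of {0, ..., B}, starting at `start`."""
    pos, steps = start, 0
    seen0, seenB = (start == 0), (start == B)
    first_target = 0 if start <= B - start else B        # open loop: nearer end first
    while not (seen0 and seenB):
        if closed_loop:
            # command toward the closest still-unvisited extremity
            target = 0 if (not seen0 and (seenB or pos <= B - pos)) else B
        else:
            # open loop: head to first_target until reached, then to the other end
            reached_first = seen0 if first_target == 0 else seenB
            target = (B - first_target) if reached_first else first_target
        cmd = 1 if target > pos else -1
        pos += cmd if rng.random() < p else -cmd
        pos = max(0, min(B, pos))
        steps += 1
        seen0 |= (pos == 0)
        seenB |= (pos == B)
    return steps

rng = random.Random(0)
B, start, p, trials = 20, 6, 0.7, 2000
for name, cl in (("closed-loop", True), ("open-loop", False)):
    mean = sum(explore(B, start, p, cl, rng) for _ in range(trials)) / trials
    print(name, "mean exploration time:", round(mean, 1))
```

On examples like this the two policies give very similar mean exploration times, consistent with the observation on the next slide.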
Control Policies
  • Independently of starting point:

(in expectation)

  • A positioning sensor and the optimal closed-loop controller give little improvement in performance
Exploration Time Distribution
  • Open-loop strategy
  • An agent starting at b reaches 0 with probability at least 1 − ε after at most a time equal to the deterministic result plus a deviation-from-expectation term
Agent Design Example
  • We have an infinite number of probabilistic agents, initially all at b, to explore the segment [0, b]
  • Open-loop policy
  • All agents have autonomy t_0; there is a cost for increasing t_0
  • Send the agents one at a time towards 0; if one fails to reach 0, send another (simulated below).
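A small Monte-Carlo sketch of this dispatch scheme (hypothetical parameters; the agent is modeled as a biased random walk toward 0 with a step budget of t_0):

```python
import random

def single_agent_time(b, p, t0, rng):
    """Return (steps_used, reached_zero) for one agent with autonomy t0 steps."""
    pos = b
    for step in range(1, t0 + 1):
        pos += -1 if rng.random() < p else 1   # step toward 0 with probability p
        if pos <= 0:
            return step, True
    return t0, False

def time_until_first_success(b, p, t0, rng):
    total = 0
    while True:
        steps, ok = single_agent_time(b, p, t0, rng)
        total += steps
        if ok:
            return total

rng = random.Random(0)
b, p, t0, trials = 10, 0.7, 40, 5000
succ = sum(single_agent_time(b, p, t0, rng)[1] for _ in range(trials)) / trials
mean_t = sum(time_until_first_success(b, p, t0, rng) for _ in range(trials)) / trials
print("per-agent success probability ~", round(succ, 3))
print("expected time until first success ~", round(mean_t, 1))
```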
Agent Design Example
  • Cost: a trade-off between the expected time until the first success and the cost of increasing autonomy/reliability

Conclusions on Exploration
  • From the 1-D example, we expect that the design and the number of agents used in multi-agent systems should be optimized:
    • Saturation effect with increasing # of agents
    • Trade-off between cost of technology and benefits
    • Optimum depends on specific scenario
Extensions
  • How does this generalize to 2D?
    • Need simple strategies that are easy to compute and scale up with # of agents.
    • Exploit random natural mobility as tool for exploration (random sensor networks).
UAV Routing
  • Path planner: one component of high-level mission scheduling. Complexity?
  • Ex: compute shortest tour for aircraft to visit n sites, with simplified dynamics:
    • Dubins Traveling Salesman Problem

(Savla, Frazzoli and Bullo, 2005)

Dubins TSP
  • Dubins' vehicle is a model for a fixed-wing aircraft: constant speed, minimum turning radius ρ
  • Given initial and terminal locations and velocity directions (headings), the shortest path is completely characterized and can be computed in constant time
Alternating Algorithm
  • Based on the solution of the Euclidean TSP: keep the odd-numbered edges (traversed as straight lines) and connect them with Dubins paths
Randomized Headings Algorithm
  • Approximation Algorithm for DTSP
  • Computes in O(n³) a tour within a guaranteed factor of the optimum

  • Choose headings at random (good against adversary!)
  • Compute the all-pairs Dubins distance matrix
  • Solve the TSP on the resulting asymmetric graph, using a (log n)-approximation algorithm (a sketch of this pipeline follows)
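A sketch of this pipeline, assuming the third-party `dubins` package (pydubins) for Dubins path lengths; a greedy nearest-neighbour heuristic stands in here for the (log n)-approximation ATSP solver cited above.

```python
import math
import random
import dubins   # assumption: the pydubins package (pip install dubins)

def randomized_headings_tour(points, rho, rng):
    # 1. assign every site a heading chosen uniformly at random
    confs = [(x, y, rng.uniform(0.0, 2 * math.pi)) for (x, y) in points]
    n = len(confs)
    # 2. all-pairs Dubins distance matrix (asymmetric)
    d = [[0.0 if i == j else
          dubins.shortest_path(confs[i], confs[j], rho).path_length()
          for j in range(n)] for i in range(n)]
    # 3. ATSP heuristic: greedy nearest neighbour (stand-in for the cited
    #    (log n)-approximation algorithm)
    tour, remaining = [0], set(range(1, n))
    while remaining:
        nxt = min(remaining, key=lambda j: d[tour[-1]][j])
        tour.append(nxt)
        remaining.remove(nxt)
    length = sum(d[tour[i]][tour[i + 1]] for i in range(n - 1)) + d[tour[-1]][tour[0]]
    return tour, length

rng = random.Random(0)
sites = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(8)]
print(randomized_headings_tour(sites, rho=1.0, rng=rng))
```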
Randomized Headings Algorithm
  • Euclidean metric not involved
  • D_min can be considered a fixed lower bound on the distance between any two points (the coverage radius of on-board sensors)
  • Better performance for dense sets of points
Improvements
  • The approximation-ratio analysis is not tight: it is based on a perturbation analysis of point-to-point paths, and a more global approach is needed. One possibility is to use the analysis of the bead-tiling algorithm of Savla et al.
  • Deviation from expectation? (This also needs a global analysis.)
Conclusion
  • Network flow formulation provides an efficient means to rigorously determine the optimal mixed strategy for navigating from origin to destination and avoiding pre-set ambushes
  • For scenarios where the enemy observes the motion of the VIP and makes a real-time decision on the ambush location, a multi-stage ambush game must be solved