
MURI Telecon, Update 7/26/2012



Presentation Transcript


1. MURI Telecon, Update 7/26/2012
Summary, Part I:
• Completed: proved and numerically validated the optimality conditions for the Distributed Optimal Control (DOC) problem; conservation-law analysis; direct method of solution for DOC problems; computational-complexity analysis; application to multi-agent path planning.
• Submitted a paper on the developments above to Automatica.
• Completed: modeling of maneuvering targets by Markov motion models (a minimal sketch is given after this slide); derivation of the corresponding multi-sensor performance function representing the probability of detection for multiple distributed sensors; application to multi-sensor placement.
• Submitted a paper on the developments above to IEEE TC.
• In progress: application of the methods above to multi-sensor trajectory optimization for tracking and detecting Markov targets, based on feedback from a Kalman-particle filter.
• Submitted a paper on the developments above to MSIT 2012; another journal paper on these developments is in preparation.
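The Markov motion model mentioned above is not spelled out in these slides; the following is a minimal, hypothetical sketch of a maneuvering target whose motion mode switches according to a Markov chain. The mode set, transition matrix, speed, and time step are assumptions for illustration, not the model used in the submitted papers.

```python
import numpy as np

# Hypothetical Markov motion model: the target switches among a small set of
# heading modes according to a Markov chain, then moves at constant speed.
rng = np.random.default_rng(0)

modes = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])  # assumed headings (rad)
P = np.array([[0.90, 0.05, 0.00, 0.05],                   # assumed mode-transition
              [0.05, 0.90, 0.05, 0.00],                   # matrix (rows sum to 1)
              [0.00, 0.05, 0.90, 0.05],
              [0.05, 0.00, 0.05, 0.90]])

def simulate_target(x0, n_steps=100, dt=0.1, speed=1.0):
    """Sample one target trajectory from the Markov motion model."""
    x = np.array(x0, dtype=float)
    mode = 0
    path = [x.copy()]
    for _ in range(n_steps):
        mode = rng.choice(len(modes), p=P[mode])   # draw the next motion mode
        x += speed * dt * np.array([np.cos(modes[mode]), np.sin(modes[mode])])
        path.append(x.copy())
    return np.array(path)

trajectory = simulate_target(x0=[0.0, 0.0])
print(trajectory.shape)  # (101, 2)
```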

2. MURI Telecon, Update 7/26/2012
Summary, Part II:
• Completed: comparison of information-theoretic functions for multi-sensor systems performing target classification (a minimal numerical sketch is given after this slide).
• Published a paper on the developments above in IEEE Transactions on Systems, Man, and Cybernetics, Part B, Vol. 42, No. 1, Feb. 2012.
• In progress: comparison of information-theoretic functions for multi-sensor systems performing (Markov) target tracking and detection.
• Submitted a paper on the developments above to SSP 2012; another journal paper on these developments is in preparation.
• Completed: derived new approximate dynamic relations for hybrid systems.
• Submitted a paper on the developments above to JDSM.
• In progress: integrating DOC for multiple tasks and distributions with the consensus-based bundle algorithm (CBBA); applying DOC to non-parametric Bayesian models of sensors/targets.
• In progress: developing DOC reachability proofs, in the presence of communication constraints, for decentralized DOC.
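As a hedged illustration of comparing information-theoretic functions for target classification, the snippet below scores two hypothetical sensors by the mutual information (expected entropy reduction) between the target class and the measurement. The class prior and confusion matrices are invented for illustration and are not the functions or sensor models evaluated in the published paper.

```python
import numpy as np

# Compare two hypothetical sensors by I(class; measurement), i.e., the expected
# reduction in classification entropy. All numbers below are assumed.

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(prior, likelihood):
    """likelihood[z, c] = P(measurement z | class c); prior[c] = P(class c)."""
    joint = likelihood * prior              # joint[z, c] = P(z | c) P(c)
    pz = joint.sum(axis=1)                  # P(z)
    h_post = sum(pz[z] * entropy(joint[z] / pz[z])
                 for z in range(len(pz)) if pz[z] > 0)
    return entropy(prior) - h_post          # prior entropy minus expected posterior entropy

prior = np.array([0.5, 0.3, 0.2])           # assumed class prior
sensor_A = np.array([[0.8, 0.1, 0.1],       # assumed confusion matrices,
                     [0.1, 0.8, 0.1],       # columns indexed by true class
                     [0.1, 0.1, 0.8]])
sensor_B = np.array([[0.6, 0.2, 0.2],
                     [0.2, 0.6, 0.2],
                     [0.2, 0.2, 0.6]])

for name, L in (("A", sensor_A), ("B", sensor_B)):
    print(name, round(mutual_information(prior, L), 3))  # sensor A gives the larger gain
```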

3. DOC Background
• Distributed systems: a system of multiple autonomous dynamic systems that communicate and interact with each other to achieve a common goal.
• Swarms: hundreds to thousands of systems; homogeneous; minimal communication and sensing capabilities. Decentralized control laws are stable but non-optimal, and do not meet a common goal.
• Multi-agent systems: a few to hundreds of systems; heterogeneous; advanced sensing and, possibly, communication capabilities. Centralized vs. decentralized control laws: path planning; obstacle avoidance; must meet one or more common goals, subject to agent constraints and dynamics.
• Classical optimal control: determines the optimal control law and trajectory for a single agent or dynamical system.
– Characterized by well-known optimality conditions and numerical algorithms.
– Applied to a single agent for trajectory optimization, pursuit-evasion, and feedback control (autopilots).
– Does not scale to systems of hundreds of agents.

4. Benchmark Problem: Multi-Agent Path Planning
The agent microscopic dynamics are given by the unicycle model with constant speed v, which amounts to the following system of ODEs for agent i:
dx_i/dt = v cos(θ_i)
dy_i/dt = v sin(θ_i)
dθ_i/dt = u_i
where (x_i, y_i) is the agent's planar position, θ_i is its heading angle, and u_i is the steering (turn-rate) control input. The number of components (m) in the Gaussian mixture is chosen by the user based on the complexity of the initial and goal PDFs. A simulation sketch of this setup is given after this slide.
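To make the benchmark setup concrete, here is a minimal simulation sketch of the constant-speed unicycle agent and a Gaussian-mixture PDF over agent positions. The control input is a constant-turn-rate placeholder rather than the DOC-optimal law, and the mixture parameters, speed, and time step are assumptions for illustration only.

```python
import numpy as np

V, DT = 1.0, 0.05     # assumed constant speed and integration step

def unicycle_step(state, u, v=V, dt=DT):
    """One Euler step of x' = v cos(theta), y' = v sin(theta), theta' = u."""
    x, y, theta = state
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + u * dt])

def gaussian_mixture_pdf(pos, means, covs, weights):
    """Evaluate a 2-D Gaussian mixture with m components at position pos."""
    total = 0.0
    for mu, S, w in zip(means, covs, weights):
        d = pos - mu
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(S)))
        total += w * norm * np.exp(-0.5 * d @ np.linalg.solve(S, d))
    return total

# Hypothetical initial PDF with m = 2 components.
means = [np.array([0.0, 0.0]), np.array([3.0, 1.0])]
covs = [0.5 * np.eye(2), 0.8 * np.eye(2)]
weights = [0.6, 0.4]

state = np.array([0.0, 0.0, 0.0])            # (x, y, theta)
for _ in range(100):
    state = unicycle_step(state, u=0.2)      # placeholder turn-rate input
print(state, gaussian_mixture_pdf(state[:2], means, covs, weights))
```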

5. Example with m = 4
[Figure: initial PDF p(xi, t0) and goal PDF h(xi, tf), plotted as Pr(xi) over the workspace; the shaded region is a fixed obstacle.]

6. Results: Optimal PDF (m = 4)
[Figure: optimal agent PDF Pr(xi); the shaded region is the fixed obstacle.]

7. Agents' Optimal Trajectories
Feedback control of the agents via DOC.
[Figure: optimal PDF Pr(xi) with individual agent (unicycle) trajectories and a sample agent control input; the shaded region is the fixed obstacle.]

8. Example with m = 6
[Figure: initial PDF p(xi, t0) and goal PDF h(xi, tf), plotted as Pr(xi) over the workspace; the shaded region is a fixed obstacle.]

9. Results: Optimal PDF (m = 6)
[Figure: optimal agent PDF Pr(xi); the shaded region is the fixed obstacle.]

10. Agents' Optimal Trajectories
Feedback control of N = 200 agents via DOC.
[Figure: optimal PDF Pr(xi) with individual agent (unicycle) trajectories and a sample agent control input; the shaded region is the fixed obstacle.]
