
Dynamic Sensor Planning and Control for Optimally Tracking Targets. By: Spletzer & Taylor



  1. Dynamic Sensor Planning and Control for Optimally Tracking Targets. By: Spletzer & Taylor. Notes by Aisha Walcott

  2. Outline • Approach and Algorithm • Experiments • Analysis and Discussion

  3. Problem • Active control of the configurations of a team of mobile agents equipped with omni-directional cameras, to optimize the quality of the estimates derived from their measurements • Assumptions • Noise model, given as P(ž | ρ, ω), predicts the distribution of measurements as a function of ρ and ω • A constant number N of particles for each target • Robot positions ρ are known a priori • There is a means to estimate the formation pose

  4. Notation • Cr: configuration space of the robots, the concatenation of the poses [x1, y1, θ1, …]T • Cw: configuration space of the target parameters, e.g. [xt, yt]T • Ž: set of all possible sensor measurements • ž ∈ Ž: vector of the 3 angles measured by the 3 robots, e.g. [ž1, ž2, ž3] • ρ ∈ Cr, ω ∈ Cw • P(ω): pdf on Cw; can be used to model prior info, e.g. prior measurements or a dynamic model of the target (I'm not sure what this distribution would look like)

  5. Functions • Est: Cr × Ž → Cw, produces an estimate of the target's position, ŵ, from noisy measurements and the robot configuration (the paper uses least squares and suggests others) • Disp: Cw × Cw → R+, an indication of the disparity between the estimated value ŵ and the actual ω. This paper uses Disp(ω, ŵ) = ||ω − ŵ||² (squared Euclidean error) • Quality Function
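The two functions can be sketched concretely. Below is a minimal Python sketch: Disp is exactly the squared Euclidean error the slide gives, while Est is one plausible least-squares choice (intersecting the bearing rays from the three robots); the function names, the bearing-only sensor model, and the ray-intersection formulation are illustrative assumptions, since the slide only says the paper uses least squares.

```python
import numpy as np

def disp(w, w_hat):
    """Disp(w, w_hat) = ||w - w_hat||^2, the squared Euclidean error."""
    return float(np.sum((np.asarray(w) - np.asarray(w_hat)) ** 2))

def est(rho, z):
    """One plausible least-squares Est: bearing-ray triangulation.

    rho: (n, 3) array of robot poses [x, y, theta].
    z:   (n,) array of bearings to the target in each robot's frame.
    Returns the estimated target position w_hat = [x_t, y_t].
    """
    rho = np.asarray(rho, dtype=float)
    z = np.asarray(z, dtype=float)
    phi = rho[:, 2] + z                       # bearings in the world frame
    # Each ray gives one linear constraint n . w = n . p_i,
    # where n = [-sin(phi), cos(phi)] is the ray's normal vector.
    A = np.column_stack((-np.sin(phi), np.cos(phi)))
    b = A[:, 0] * rho[:, 0] + A[:, 1] * rho[:, 1]
    w_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w_hat
```

With noiseless bearings the rays intersect exactly at the target, so Est recovers it and Disp is zero; noise spreads the intersection and Disp grows.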

  6. Quality Function • Q(ρ) = ∫Cw ∫Ž Disp(ω, Est(ρ, ž)) P(ž | ρ, ω) P(ω) dž dω • ŵ = Est(ρ, ž): estimate of the target position from the robot configuration ρ and the noisy measurement ž ∈ Ž, the set of all possible sensor measurements • P(ž | ρ, ω): noise model (given) • P(ω): pdf on the target configuration space • Disp(ω, ŵ): disparity between the estimated target position ŵ and the actual target position ω; the error metric is Euclidean distance (used in the paper) • Formulate the problem of choosing an appropriate configuration for the robots as an optimization problem • Goal: find a choice ρ* = argmin Q(ρ), where ρ ∈ C • C ⊆ Cr models constraints, excluding configurations that cannot be achieved due to obstacles, robot limitations, etc.

  7. Behavior of Quality Function

  8. Computational Approach • Approximate Q(ρ) using particle filtering • Particles (ωj, πj), where ωj is a sample from Cw and πj is a weight proportional to the likelihood of ωj representing the state ω; N is the number of samples • Samples are selected at random from P(ω) and P(ž | ρ, ω) • Q(ρ) ≈ (1/N) Σj Disp(ωj, Est(ρ, žj)) • With sufficiently large N, this sum reflects the integration in the quality function
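The sampling approximation above can be sketched as a generic Monte Carlo loop. All four callables, and the toy stand-ins used to exercise it (robots that observe the 2-D target position directly plus Gaussian noise, with Est as a simple average), are illustrative assumptions, not the paper's models.

```python
import numpy as np

def quality(rho, sample_prior, sample_noise, est, disp_fn, n_samples=1000, rng=None):
    """Monte Carlo approximation Q(rho) ~= (1/N) sum_j Disp(w_j, Est(rho, z_j)),
    with w_j ~ P(w) and z_j ~ P(z | rho, w_j)."""
    if rng is None:
        rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        w = sample_prior(rng)              # draw a target sample from P(w)
        z = sample_noise(rho, w, rng)      # draw a measurement from P(z | rho, w)
        w_hat = est(rho, z)                # estimate back into target space
        total += disp_fn(w, w_hat)         # accumulate the disparity
    return total / n_samples

# Toy stand-ins (hypothetical sensor model, for illustration only):
def sample_prior(rng):
    return rng.normal(0.0, 1.0, size=2)            # Gaussian prior P(w)

def sample_noise(rho, w, rng):
    return w + rng.normal(0.0, 0.1, size=(len(rho), 2))  # direct noisy observations

def est_mean(rho, z):
    return z.mean(axis=0)                          # Est = average of observations

def disp_fn(w, w_hat):
    return float(np.sum((w - w_hat) ** 2))         # squared Euclidean error

rho = np.zeros((4, 3))  # four robot poses (unused by the toy sensor model)
q = quality(rho, sample_prior, sample_noise, est_mean, disp_fn, n_samples=2000)
```

For this toy model the exact value is E||mean noise||² = 2σ²/4 = 0.005, and the Monte Carlo estimate converges to it as N grows, which is the "with sufficient N" claim on the slide.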

  9. Example: Evaluating Q(ρ) • Three robots 1, 2, 3 with poses [x1, y1, θ1], [x2, y2, θ2], [x3, y3, θ3] track a target T; the measurement vector is ž = [ž1, ž2, ž3], the angles to the target • (Note: ideally show a 3–5 step animation of the algorithm steps as the robots and target move, or at least place the algorithm here)

  10. Example (1/3): Evaluating Q(ρ) • 1. A sample ω is chosen at random from P(ω), the pdf on the configuration space of the target • 2. ω is projected from the target space into each robot's frame • (Figure: the sample ω, its projection onto each robot's frame, and the measurement ž = [ž1, ž2, ž3])

  11. Example (2/3): Evaluating Q(ρ) • 3. The image measurements that correspond to ω are corrupted with noise drawn from the sensor model: ži ~ P(ž | ρ, ω) • 4. Project back into the target space using ŵ = Est(ρ, ži), the noise-corrupted target position • (Recall Est: Cr × Ž → Cw produces an estimate of the target's position, ŵ, from noisy measurements and the robot configuration)

  12. Example (3/3): Evaluating Q(ρ) • 5. Compute the disparity between the new sample ŵ and the original sample ω, and accumulate the error for ŵ: error = error + Disp(ω, ŵ) = error + ||ω − ŵ||² • Repeat steps 1–5 for N samples ω, then • 6. Q(ρ) = error / N
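Steps 1–6 above can be put together end-to-end for the three-robot bearing setup in the example. Every concrete choice here, the Gaussian prior P(ω), the bearing-noise level, and least-squares ray intersection as Est, is an illustrative assumption rather than the paper's exact model.

```python
import numpy as np

def evaluate_q(rho, n_samples=500, sigma=0.02, seed=0):
    """Steps 1-6 of the example for three bearing-only robots.

    rho: (3, 3) array of robot poses [x, y, theta].
    sigma: assumed std-dev of the Gaussian bearing noise (radians).
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        w = rng.normal([1.0, 2.0], 0.5)                      # 1. sample w ~ P(w)
        bearings = np.arctan2(w[1] - rho[:, 1],
                              w[0] - rho[:, 0]) - rho[:, 2]  # 2. project into robot frames
        z = bearings + rng.normal(0.0, sigma, size=3)        # 3. corrupt: z ~ P(z | rho, w)
        phi = rho[:, 2] + z                                  # 4. Est: least-squares
        A = np.column_stack((-np.sin(phi), np.cos(phi)))     #    intersection of the
        b = A[:, 0] * rho[:, 0] + A[:, 1] * rho[:, 1]        #    three bearing rays
        w_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
        total += np.sum((w - w_hat) ** 2)                    # 5. accumulate Disp(w, w_hat)
    return total / n_samples                                 # 6. Q(rho) = error / N

rho = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0], [-5.0, -5.0, 0.0]])
```

With sigma = 0 the rays intersect exactly and Q(ρ) is (numerically) zero; increasing sigma increases Q(ρ), which is what makes Q(ρ) a useful objective for placing the robots.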

  13. Optimizing Robot Configuration • Approximate the gradient of the quality function (the paper uses finite-difference techniques) • Avoids local minima because the gradient is re-computed each time the estimated positions of the targets change • Generates piecewise-optimal trajectories, one per time step
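The finite-difference step can be sketched as follows. The central-difference scheme, the step size eps, and the fixed learning rate lr are my assumptions; the slide only says the paper uses finite-difference techniques.

```python
import numpy as np

def fd_gradient(Q, rho, eps=1e-3):
    """Central finite-difference gradient of a scalar quality function Q
    with respect to the stacked robot configuration rho."""
    rho = np.asarray(rho, dtype=float)
    g = np.zeros_like(rho)
    for i in range(rho.size):
        e = np.zeros_like(rho)
        e.flat[i] = eps
        # Perturb one coordinate of the formation at a time.
        g.flat[i] = (Q(rho + e) - Q(rho - e)) / (2.0 * eps)
    return g

def step(Q, rho, lr=0.1):
    """One piecewise-optimal step: move the formation downhill on Q."""
    return np.asarray(rho, dtype=float) - lr * fd_gradient(Q, rho)
```

Re-evaluating fd_gradient at every time step, after the target estimates move, is what the slide credits with avoiding stale local minima: the objective surface itself changes between steps.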

  14. Behavior of the Robot Configuration • The error represented by Q(ρ) can be used to determine the gradient (I think) • (Figure: the target position, the error Q(ρ), and the robot configuration that results when the target position error is that large)

  15. Outline • Approach and Algorithm • Experiments • Analysis and Discussion

  16. Outline • Approach and Algorithm • Experiments • Analysis and Discussion
