
AUTONOMOUS SYSTEMS

AUTONOMOUS SYSTEMS: PROJECT PRESENTATIONS. Instituto Superior Técnico, Instituto de Sistemas e Robótica, 23 September 2008.



Presentation Transcript


  1. AUTONOMOUS SYSTEMS PROJECT PRESENTATIONS. Instituto Superior Técnico, Instituto de Sistemas e Robótica, 23 September 2008

  2. COOPERATIVE VISUAL TRACKING OF A MOVING OBJECT Simulate the visual tracking of a moving object, using probabilistic models for the object motion, by a network of static sensors. The sensors’ information must be fused so as to better estimate the object position and velocity over time. The dynamics of the object motion can be assumed approximately known. Can be done in Matlab, with emphasis on realistic sensor observation and object motion models. Refs.: Durrant-Whyte paper followed in classes; W. Nisticò, M. Hebbel, T. Kerkhof, and C. Zarges, “Cooperative Visual Tracking in a Team of Autonomous Mobile Robots”, RoboCup 2006 Book, Springer-Verlag.
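A minimal sketch of the fusion idea, in Python/NumPy rather than Matlab: a constant-velocity target observed by two static position sensors with different noise levels, fused sequentially in one Kalman filter. All numbers (dt, Q, the sensor covariances) are illustrative assumptions, not values from the references.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1
# Constant-velocity motion model: state = [x, y, vx, vy]
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
Q = 0.01 * np.eye(4)                          # process noise (assumed)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)           # each sensor measures position only
sensor_R = [0.5 * np.eye(2), 0.2 * np.eye(2)] # two static sensors, different noise

x_true = np.array([0.0, 0.0, 1.0, 0.5])
x_est = np.zeros(4)
P = np.eye(4)

for _ in range(100):
    x_true = F @ x_true
    # Predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Sequentially fuse each static sensor's noisy measurement
    for R in sensor_R:
        z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + K @ (z - H @ x_est)
        P = (np.eye(4) - K @ H) @ P

err = np.linalg.norm(x_est[:2] - x_true[:2])  # final position error
```

Sequential updates are equivalent to a batch update with a stacked measurement vector when the sensor noises are independent, which is the usual assumption in this setting.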

  3. FORMATION CONTROL 2D Simulate at least 2 methods of formation control for non-holonomic robots moving in 2D (e.g., Leonard and Fiorelli’s method; the method of Kumar and co-workers). Study the methods comparatively regarding how they handle obstacle avoidance when moving the formation. Can be done in Matlab and/or a realistic robot simulator. Refs.: P. Ögren, E. Fiorelli, and N. E. Leonard, “Cooperative control of mobile sensor networks: Adaptive gradient climbing in a distributed environment”, IEEE Transactions on Automatic Control, 49(8), pp. 1292-1302, 2004. R. Fierro, P. Song, A. Das, and V. Kumar, “Cooperative Control of Robot Formations”, in R. Murphy and P. Pardalos, editors, Cooperative Control and Optimization, volume 66, pages 73-93, Kluwer Academic Press, Hingham, MA, 2002.
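As a toy illustration of potential-based formation control (a holonomic, single-integrator simplification, not the non-holonomic models of the references), four simulated robots descend a quadratic potential toward assumed formation slots while the formation centroid is attracted to a goal. Gains and offsets are invented for the sketch.

```python
import numpy as np

# Desired formation offsets relative to the centroid (a square, assumed)
offsets = np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]], float)
pos = np.random.default_rng(1).uniform(-5, 5, (4, 2))  # random initial positions
goal = np.array([10.0, 0.0])                           # target for the centroid
dt, k_form, k_goal = 0.05, 2.0, 0.5                    # assumed step and gains

for _ in range(2000):
    centroid = pos.mean(axis=0)
    # Each robot is pulled toward its formation slot; the whole
    # formation is pulled toward the goal through the centroid term
    u = k_form * ((centroid + offsets) - pos) + k_goal * (goal - centroid)
    pos = pos + dt * u

err = np.abs((pos - pos.mean(axis=0)) - offsets).max()  # formation shape error
```

Because the offsets sum to zero, the centroid dynamics decouple and converge to the goal while the shape error decays at rate `k_form`; obstacle avoidance would add repulsive potential terms to `u`.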

  4. FORMATION STATE ESTIMATION Mobile robots, acting as a formation, may need to know the full state (all positions and velocities) of the formation robots for guidance purposes. An approach to full formation state estimation with reduced communication among the formation members consists of using, at each vehicle, local measurements (e.g., of the distance to some teammates) plus the latest state estimate provided by a designated teammate, to update the global state estimate in a given frame attached to the formation. An (Extended) Kalman Filter can be used for this purpose but, due to correlations among the estimates, it needs to be endowed with a Covariance Intersection algorithm. The project consists of applying this method to a formation of 4-5 simulated omnidirectional vehicles, capable of measuring, with associated error, the distances to some of their teammates, and of communicating information to another subset of their teammates. Can be done in Matlab and/or a realistic robot simulator. Refs.: D. Dumitriu, S. Marques, P. Lima, J. C. Bastante, J. Araújo, L. F. Peñin, A. Caramagno, and B. Udrea, “Optimal Guidance and Decentralised State Estimation Applied to a Formation Flying Demonstration Mission in GTO”, IET Control Theory and Applications, Vol. 1, Issue 2, p. 443-552, March 2007. P. O. Arambel, C. Rago, and R. K. Mehra, “Covariance intersection algorithm for distributed spacecraft state estimation”, Proc. American Control Conf., Arlington, VA, 2001.
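The Covariance Intersection update itself is compact; this sketch fuses two 2-D estimates with unknown cross-correlation by a brute-force search over the mixing weight (the example means and covariances are made up).

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb):
    """Fuse two estimates whose cross-correlation is unknown.
    Picks the weight w minimizing the trace of the fused covariance
    by a simple grid search (an optimizer would do the same)."""
    best = None
    for w in np.linspace(0.01, 0.99, 99):
        info = w * np.linalg.inv(Pa) + (1 - w) * np.linalg.inv(Pb)
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * np.linalg.inv(Pa) @ xa
                     + (1 - w) * np.linalg.inv(Pb) @ xb)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Two estimates, each confident along a different axis (invented numbers)
xa, Pa = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
xb, Pb = np.array([1.2, 0.1]), np.diag([4.0, 1.0])
x_fused, P_fused = covariance_intersection(xa, Pa, xb, Pb)
```

Unlike a naive Kalman update, the fused covariance is guaranteed consistent for any degree of correlation between the inputs, which is why the slide pairs it with the EKF.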

  5. ACTIVE VISION USING REINFORCEMENT LEARNING Robots with non-omnidirectional vision often have to decide where to look, so as to perform their mission adequately. One such example is a robot trying to catch an object while simultaneously needing to know its own localization, using vision-based algorithms for both purposes. Most of the time the robot looks at (or for) the object, but once in a while it needs to take a look at landmarks to improve its localization estimate, which would otherwise degrade should it be based only on odometry while the robot moves. This project consists of using reinforcement learning methods to help the robot plan its observation sequences so as to keep the object location and self-localization uncertainties as low as possible. Can be done in Matlab and/or a realistic robot simulator. One possibility for interested students is to implement the work on a SONY AIBO ERS-7 robot. Ref.: R. Sutton and A. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998.
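A tiny tabular Q-learning sketch of the look-at-object vs. look-at-landmarks trade-off. The MDP is a made-up abstraction (state = discretized self-localization uncertainty, rewards invented), far simpler than the vision problem above, but it shows the learning mechanics.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy abstraction: state = discretized self-localization uncertainty (0 low .. 4 high)
# actions: 0 = look at the object (tracking reward, uncertainty grows),
#          1 = look at landmarks (no tracking reward, uncertainty resets)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    if a == 0:                                 # track the object
        s2 = min(s + 1, n_states - 1)
        r = 1.0 if s < n_states - 1 else -5.0  # penalty when too uncertain
    else:                                      # relocalize on landmarks
        s2, r = 0, 0.0
    return s2, r

s = 0
for _ in range(20000):
    # epsilon-greedy action selection
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
    s = s2

policy = np.argmax(Q, axis=1)  # expected: track while certain, relocalize when not
```

The learned policy tracks the object at low uncertainty and switches to landmarks at high uncertainty, the qualitative behavior the project asks for.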

  6. ACTIVE COOPERATIVE PERCEPTION IN STATIC+MOBILE SENSOR NETWORKS Develop probabilistic models for the sensors in a simple simulated sensor network consisting of 2 cameras and one omnidirectional mobile robot with on-board omnidirectional vision. Then fuse the information provided by the three sensors using different fusion methods, to better estimate the object position. Come up with a decision method that determines when to move the robot, so that the cost of moving is compensated by the gain in uncertainty reduction from changing the robot viewpoint w.r.t. the object. Odometry models can be used to estimate the robot location over time with associated uncertainty. Different situations should be studied, e.g., the sensors agree or disagree, or one sensor has a clearly better estimate and should prevail. Can be done in Matlab, with emphasis on realistic sensor observation and robot localization models. Ref.: Durrant-Whyte paper followed in classes.
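The move/stay decision can be prototyped as information-form fusion plus a threshold on the uncertainty gain. All covariances and the motion cost below are illustrative assumptions, not calibrated sensor models.

```python
import numpy as np

def fuse(covs):
    """Information-form fusion of independent Gaussian estimates."""
    info = sum(np.linalg.inv(C) for C in covs)
    return np.linalg.inv(info)

cam1 = np.diag([0.2, 1.0])        # each fixed camera is confident along one axis
cam2 = np.diag([1.0, 0.2])
robot_here = np.diag([0.8, 0.8])  # current viewpoint: mediocre everywhere
robot_moved = np.diag([0.1, 0.9]) # predicted covariance after repositioning

move_cost = 0.05                  # assumed uncertainty-equivalent cost of moving
gain = (np.trace(fuse([cam1, cam2, robot_here]))
        - np.trace(fuse([cam1, cam2, robot_moved])))
decision = "move" if gain > move_cost else "stay"
```

The trace of the fused covariance is one simple scalar uncertainty measure; entropy (log-determinant) is a common alternative and would slot into the same comparison.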

  7. COOPERATIVE PLAN REPRESENTATION AND EXECUTION USING PETRI NETS Represent a cooperative plan involving at least 2 robots using Petri nets (to model primitive actions and communications for synchronization) and implement it in simulated 2D robots. The main objective is to model the plan concerning the uncertainties (e.g., probability of success) of its composing actions, use Petri net analysis methods to estimate the plan success and robustness to higher-than-expected action failure rates, and verify the results in the realistic simulation. Can be done in Matlab and/or a realistic robot simulator. Ref.: N. Viswanadham and Y. Narahari, “Performance Modeling of Automated Manufacturing Systems”, Prentice Hall, 1992.
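A minimal stochastic Petri-net executor, sketched in Python, can already estimate plan success by Monte Carlo. The net below (two primitive actions synchronized by one transition; success probabilities invented) has analytical success probability 0.9 × 0.9 = 0.81.

```python
import random

def run_plan(transitions, p_success, marking, goal_place, rng):
    """Fire enabled transitions until none remain; each transition
    succeeds independently with its own probability (assumption),
    and a failure aborts the whole plan."""
    marking = dict(marking)
    fired = True
    while fired:
        fired = False
        for name, (pre, post) in transitions.items():
            if all(marking.get(p, 0) >= n for p, n in pre.items()):
                if rng.random() > p_success[name]:
                    return False                     # primitive action failed
                for p, n in pre.items():
                    marking[p] -= n
                for p, n in post.items():
                    marking[p] = marking.get(p, 0) + n
                fired = True
    return marking.get(goal_place, 0) > 0

# Two robots each complete a primitive action, then synchronize
transitions = {
    "r1_act": ({"r1_ready": 1}, {"r1_done": 1}),
    "r2_act": ({"r2_ready": 1}, {"r2_done": 1}),
    "sync":   ({"r1_done": 1, "r2_done": 1}, {"plan_done": 1}),
}
p_success = {"r1_act": 0.9, "r2_act": 0.9, "sync": 1.0}

rng = random.Random(3)
trials = 10000
wins = sum(run_plan(transitions, p_success,
                    {"r1_ready": 1, "r2_ready": 1}, "plan_done", rng)
           for _ in range(trials))
est = wins / trials   # analytically 0.9 * 0.9 = 0.81
```

Raising the failure rates of `r1_act`/`r2_act` and re-running gives the robustness curve the project asks to compare against analytical Petri-net results.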

  8. MONTE CARLO LOCALIZATION (I) Implement, in a Pioneer robot, a Monte Carlo localization algorithm, using odometry and the robot sonar ring. The working area comprises the LSDC4 room, the 5th floor corridors, and the elevator hall. The project will require intensive testing with the robot, and a laptop on top of the robot. Ref.: S. Thrun, W. Burgard, and D. Fox, “Probabilistic Robotics”, MIT Press, 2005.

  9. MONTE CARLO LOCALIZATION (II) • Implement, in simulation, a Monte Carlo localization procedure using a particle filter to estimate robot poses in a probabilistic, non-Gaussian framework. Consider that a map, in which landmarks are defined, is known a priori. The kidnapping problem will have to be solved. The adaptation of the number of particles over time will have to be discussed and implemented. • Can be done in Matlab and/or a realistic robot simulator. • Main references: • S. Thrun, W. Burgard, and D. Fox, “Probabilistic Robotics”, MIT Press, 2005 (Chap. 8). • D. Fox, S. Thrun, W. Burgard, and F. Dellaert, “Particle Filters for Mobile Robot Localization”. • I. M. Rekleitis, “A Particle Filter Tutorial for Mobile Robot Localization”, Technical Report TR-CIM-04-02, McGill University, 2002.
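A 1-D particle-filter localization sketch in Python/NumPy (toy map; the kidnapping problem and adaptive particle counts required above are left out): global localization from a uniform prior, with motion updates, landmark-range weighting, and systematic resampling.

```python
import numpy as np

rng = np.random.default_rng(4)
landmarks = np.array([2.0, 5.0, 9.0])   # known 1-D map (toy assumption)
sigma = 0.1                             # range-measurement noise std (assumed)

def measure(x):                         # noisy range to every landmark
    return np.abs(landmarks - x) + rng.normal(0.0, sigma, landmarks.size)

N = 1000
particles = rng.uniform(0.0, 10.0, N)   # uniform prior: global localization
x_true = 1.0

for _ in range(30):
    x_true += 0.2                                   # true commanded motion
    particles += 0.2 + rng.normal(0.0, 0.05, N)     # motion model + noise
    z = measure(x_true)
    # Weight particles by the Gaussian likelihood of the range vector
    pred = np.abs(landmarks[None, :] - particles[:, None])
    w = np.exp(-0.5 * np.sum(((z - pred) / sigma) ** 2, axis=1)) + 1e-300
    w /= w.sum()
    # Systematic resampling
    idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(N)) / N)
    particles = particles[np.minimum(idx, N - 1)]

est = particles.mean()                  # true pose is 7.0 after 30 steps
```

Kidnapping recovery would inject a small fraction of uniformly drawn particles each step, and KLD-sampling would adapt `N` to the spread of the posterior.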

  10. SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) • Implement, in simulation, the EKF-based Simultaneous Localization and Mapping algorithm. The problem of environment mapping using sensors carried by a mobile robot requires the correct localization of the sensors, and thus of the robot. On the other hand, a map is required for the correct localization of the robot. The original SLAM uses an EKF approach to simultaneously estimate the location of the robot (carrying the sensors) and of the landmarks used to characterize the environment. • Can be done in Matlab and/or a realistic robot simulator. • Main references: • T. Bailey and H. Durrant-Whyte, “Simultaneous Localization and Mapping (SLAM): Parts I and II”, IEEE Robotics and Automation Magazine, 2006. • M. Montemerlo and S. Thrun, “FastSLAM”, Springer Verlag, 2007.
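A 1-D toy of the joint-state idea (robot pose plus two landmarks in one filter, sketched in Python/NumPy). Since everything here is linear, the EKF degenerates to a plain Kalman filter; real EKF-SLAM linearizes range/bearing models at each step, and all noise values below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
# Joint state = [robot, landmark1, landmark2]; measurement z_i = l_i - robot
true_l = np.array([4.0, 8.0])
x_true = 0.0

mu = np.array([0.0, 3.0, 9.0])   # rough initial landmark guesses
P = np.diag([1e-6, 4.0, 4.0])    # robot pose initially well known
Qu, Rz = 0.01, 0.01              # motion / measurement variances (assumed)

for _ in range(50):
    u = 0.1
    x_true += u + rng.normal(0.0, np.sqrt(Qu))
    mu[0] += u
    P[0, 0] += Qu                # prediction: noise enters the pose only
    for i, l in enumerate(true_l):
        z = (l - x_true) + rng.normal(0.0, np.sqrt(Rz))
        H = np.zeros(3)
        H[0], H[1 + i] = -1.0, 1.0          # Jacobian of z = l_i - robot
        S = H @ P @ H + Rz
        K = P @ H / S
        mu = mu + K * (z - (mu[1 + i] - mu[0]))
        P = P - np.outer(K, H @ P)

# Relative geometry becomes well estimated even from relative-only measurements
spacing_err = abs((mu[2] - mu[1]) - (true_l[1] - true_l[0]))
rel_err = abs((mu[1] - mu[0]) - (true_l[0] - x_true))
```

The cross-covariances that build up in `P` between pose and landmarks are exactly what makes SLAM work (and what FastSLAM factorizes away).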

  11. FAST SIMULTANEOUS LOCALIZATION AND MAPPING (FASTSLAM) • Implement, in simulation, the FastSLAM algorithm. FastSLAM is a version of SLAM that overcomes drawbacks of the classical algorithm by sampling over potential robot paths (using a particle filter) instead of maintaining a parameterized distribution of solutions like the EKF version. • Can be done in Matlab and/or a realistic robot simulator. • Main references: • M. Montemerlo and S. Thrun, “FastSLAM”, Springer Verlag, 2007.

  12. SIMULTANEOUS LOCALIZATION AND MAPPING IN A REAL ROBOT • Implement, in a Pioneer robot, a Simultaneous Localization and Mapping (SLAM) algorithm of your choice. The working area comprises the LSDC4 room, the 5th floor corridors, and the elevator hall. • The project will require intensive testing with the robot, and a laptop on top of the robot. • Main references: • S. Thrun, W. Burgard, and D. Fox, “Probabilistic Robotics”, MIT Press, 2005.

  13. POLLUTANT CONCENTRATION MAPPING USING A COOPERATIVE UAV TEAM • Simulate a team of cooperative unmanned aerial vehicles (UAVs) carrying out the task of mapping the concentration of a pollutant over a defined 3D region. This project includes the coordination of the UAVs in order to get the best resolution for the pollutant concentration. The dynamics of the pollutant plume motion must also be simulated. • Can be done in Matlab and/or a realistic robot simulator. • Main references: • S. Thrun, W. Burgard, and D. Fox, “Probabilistic Robotics”, MIT Press, 2005. • J. C. Seco, C. Pinto-Ferreira, and L. Correia, “A Society of Agents in Environmental Monitoring”, Proc. SAB-5, 1998.
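A much-simplified sketch of the mapping part (static Gaussian plume instead of simulated plume dynamics, 2-D instead of 3-D, pre-assigned lawnmower strips instead of online coordination; all parameters invented): three UAVs average noisy samples into a shared grid map.

```python
import numpy as np

rng = np.random.default_rng(6)

def plume(p):
    """Ground-truth concentration: static Gaussian bump at (5, 5) (assumed)."""
    return np.exp(-np.sum((p - np.array([5.0, 5.0])) ** 2, axis=-1) / 8.0)

grid_sum = np.zeros((20, 20))   # 20x20 cells of 0.5 units over [0, 10]^2
grid_cnt = np.zeros((20, 20))

n_uav = 3
for k in range(n_uav):          # each UAV sweeps its own horizontal strip
    for t in np.linspace(0, 1, 400):
        x = 10.0 * t
        y = (k * 10.0 / n_uav) + (10.0 / n_uav) * (0.5 + 0.5 * np.sin(8 * np.pi * t))
        p = np.array([x, y])
        z = plume(p) + rng.normal(0.0, 0.05)      # noisy concentration sample
        i, j = min(int(x / 0.5), 19), min(int(y / 0.5), 19)
        grid_sum[i, j] += z
        grid_cnt[i, j] += 1

est = np.where(grid_cnt > 0, grid_sum / np.maximum(grid_cnt, 1), 0.0)
coverage = (grid_cnt > 0).mean()   # fraction of cells actually sampled
```

Coordination would replace the fixed strips with, e.g., sending UAVs toward high-variance cells; plume dynamics would make the map time-varying.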
