
A Neural Network Approach to UGV Reconnaissance in MOUT Environments



Presentation Transcript


  1. Brandon S. Perelman Department of Cognitive and Learning Sciences, Michigan Technological University A Neural Network Approach to UGV Reconnaissance in MOUT Environments

  2. Unmanned Ground Vehicles (UGVs) • Ground-based unmanned systems for transporting items • Cargo • Sensor packages • Communications packages • Weapons systems • Why UGVs? • “Dull, dirty, and dangerous” jobs • Size / weight constraints • Endurance requirements • Speed • Cost

  3. Sample Domain – Inspection & Assessment, Hazardous Environments • iRobot in Fukushima • Volcano Exploration Bot

  4. Sample Domain – Planetary Rovers

  5. Sample Domains – Military

  6. A Way Forward for UGVs • Current State: Nearly all military UGVs are tele-operated. Some autonomous functions. • Goal State: Autonomous UGVs. • Roles for UGVs • Logistics / load carriage • Combat • EOD: Explosive ordnance disposal • RSTA: Reconnaissance, Surveillance, and Target Acquisition

  7. Presentation Outline • Brief history of UGV development • Why autonomy in UGVs? • Current approaches to automation • A Way Forward: Neural Networks • Proposed System

  8. Brief History of UGV Development: 60s and 70s • Shakey (Nilsson, 1969) – DARPA & Stanford • Tele-operated via RF using LISP commands • Automated functions • A* Search algorithm • Sensor package • Video camera • Range finder • Touch sensors

  9. Brief History of UGV Development: 70s to 80s • Hans Moravec • Stanford → Carnegie Mellon University • “Cart” – autonomous UGV • Movable camera • Locomotion • Navigation & Obstacle Avoidance • Limitations • Incredibly slow (15 minutes per “move”) Source: Gage (1995)

  10. Brief History of UGV Development: 80s to 90s • DARPA Autonomous Land Vehicle • 8-wheel All-Terrain Vehicle • Road following / obstacle avoidance • RSTA • Ground Surveillance Robot • Automated M114 AFV • Tele-Operated Dune Buggy Q: What do these all have in common? A: They’re all huge!

  11. Brief History of UGV Development: 90s to 2000s • Smaller UGVs for RSTA Operations • Surrogate Teleoperated Vehicle • 6-wheeled ATV. Driveable. • Helicopter / HMMWV Transportable • RSTA-specific sensor package • FLIR • Stereo TV • GPS • Laser rangefinder / target designator • Chemical agents detector • Acoustic sensors

  12. Current UGV Systems (all tele-operated) • First use: 2000 • Afghanistan, 2002-

  13. Current UGV Systems Outside the United States IDF’s “Guardium” UGV

  14. Why Automation? Hold on bro, I just have to beat this boss

  15. Why Automation? • Reduce human operator burden (Nardi, 2009) and drag on human elements in tactical environments (Lif, Jander, & Borgvall, 2006) • Common to Unmanned Aerial domain • Allow rapid reaction to changing tactical situations (Mills, 2007) without taking a human operator “out of the fight.” • Allow highly aggressive maneuvers beyond human perceptual and control capabilities • Aggressive Quad Rotors (Mellinger, Michael, & Kumar, 2012)

  16. Why Automation? Tele-Operation Interfaces (Fong & Thorpe, 2001) • Direct Interface: UGV controlled from “inside out” • Multimodal / multisensor: Multiple sensors (e.g., instruments like a driver’s cockpit) • Supervisory Control: Partial automation. Operator selects waypoints.

  17. Why Automation? Problems with Tele-Operation • Sensory / Situation Awareness Limitations • Wide FOV sensors provide better SA but can induce motion sickness in human operators (Coovert, Prewett, Saboe, & Johnson, 2010) • Control Limitations • Operator Workload & Practical Considerations (e.g., using a touchscreen with gloves; Ögren, Svenmarck, Lif, Norberg, & Söderbäck, 2013) • Information transmission bottlenecks • Lag in video transmission to the operator, and in operator input transmission to the unit (Fong & Thorpe, 2001)

  18. Current Approaches to Automation • Environment Sensation • Long Range (~200 m) • Video cameras • LADAR (easily disrupted by environmental contaminants) • Short Range (< ~1 m) • SONAR • Proprioceptive, touch, and acoustic sensors • Localization • GPS (requires LOS to satellites) • Compass (easily disrupted by EM fields and lasers)

  19. Current Approaches to Automation Sonar (below) and LADAR (right)

  20. Current Approaches to Automation • Environment Perception & Modeling • Point cloud modeling & path optimization algorithms (similar to A* and Dijkstra’s algorithm; e.g., Whitty, Cossell, Dang, Guivant, & Katupitiya, 2010) • Road-following (Gray, Karlsen, DiBerardino, Mottern, & Kott, 2012) • Person-tracking (Navarro-Serment, Mertz, & Hebert, 2010) • Tactical Behavior • Rule-based systems (e.g., Advocates and Critics for Tactical Behavior; Hussain, Vidaver, & Berliner, 2005)

  21. Current Approaches to Automation Pathfinding Algorithms • A* • Developed for Shakey (60s) • Based on Dijkstra’s algorithm • Uses a heuristic to guide search toward the goal; finds optimal routes when the heuristic is admissible
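The A* bullets can be made concrete with a minimal grid-world version in Python (an illustrative sketch, not Shakey's implementation; the 4-connected grid, unit step cost, and Manhattan heuristic are assumptions for the example):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns a list of cells from start to goal, or None if unreachable."""
    def h(p):
        # Manhattan distance: admissible on a 4-connected grid,
        # so the first path popped at the goal is optimal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, cell, path)
    visited = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable
```

Like Dijkstra's algorithm, it always expands the cheapest frontier node first; the heuristic term simply biases expansion toward the goal.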

  22. Current Approaches to Automation Pathfinding Algorithms • Spreading Activation (Perelman & Mueller, 2013) • Biologically plausible • Often used in neural networks • Creates a topography navigable via hill-climbing
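A toy version of the idea (a sketch of the general mechanism only, not the published Perelman & Mueller model; the decay rate and grid are arbitrary): activation diffuses outward from the goal, decaying with distance, and the agent navigates the resulting topography by hill-climbing.

```python
def spread_activation(grid, goal, decay=0.9, iters=60):
    """Diffuse activation outward from the goal over free space.
    A cell at free-space distance d from the goal settles at decay**d,
    so the activation surface slopes monotonically toward the goal."""
    rows, cols = len(grid), len(grid[0])
    act = [[0.0] * cols for _ in range(rows)]
    act[goal[0]][goal[1]] = 1.0
    for _ in range(iters):
        new = [row[:] for row in act]
        for r in range(rows):
            for c in range(cols):
                if grid[r][c]:
                    continue  # obstacles carry no activation
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols:
                        new[r][c] = max(new[r][c], decay * act[nr][nc])
        act = new
        act[goal[0]][goal[1]] = 1.0  # goal stays fully active
    return act

def hill_climb(act, start, max_steps=50):
    """Follow the activation gradient uphill until no neighbor is higher."""
    path, (r, c) = [start], start
    for _ in range(max_steps):
        nbrs = [(nr, nc) for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                if 0 <= nr < len(act) and 0 <= nc < len(act[0])]
        best = max(nbrs, key=lambda p: act[p[0]][p[1]])
        if act[best[0]][best[1]] <= act[r][c]:
            break  # local peak: the goal, if it is reachable
        path.append(best)
        r, c = best
    return path
```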

  23. Current Approaches to Automation Pathfinding Algorithms • Occupancy grid-based cost-to-go functions (Whitty et al., 2010) • Robot generates point cloud using LADAR • Point cloud used to create occupancy grid
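A minimal sketch in the spirit of this approach (the square workspace, cell resolution, and breadth-first unit-cost metric are simplifying assumptions, not Whitty et al.'s actual pipeline): sensed points are quantised into an occupancy grid, then a cost-to-go field is grown outward from the goal for the robot to descend.

```python
import math
from collections import deque

def occupancy_grid(points, cell=0.5, size=10.0):
    """Quantise LADAR-style (x, y) returns into a square occupancy grid."""
    n = int(size / cell)
    grid = [[0] * n for _ in range(n)]
    for x, y in points:
        r, c = int(y // cell), int(x // cell)
        if 0 <= r < n and 0 <= c < n:
            grid[r][c] = 1  # at least one return in this cell: occupied
    return grid

def cost_to_go(grid, goal):
    """Breadth-first cost-to-go: each free cell gets its step distance
    to the goal; occupied or unreachable cells stay at infinity."""
    n = len(grid)
    cost = [[math.inf] * n for _ in range(n)]
    cost[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < n and 0 <= nc < n and not grid[nr][nc] \
                    and cost[nr][nc] == math.inf:
                cost[nr][nc] = cost[r][c] + 1
                queue.append((nr, nc))
    return cost
```

From any free cell, stepping to the neighbor with the lowest cost-to-go traces a shortest obstacle-free route to the goal.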

  24. Current Approaches to Automation Pathfinding Algorithms • Road-following algorithms (Gray, Karlsen, DiBerardino, Mottern, & Kott, 2012) • Determine (usually using LADAR or video) road location usually based on symmetries • Follow road

  25. Challenges to Automation • Sensor Challenges • GPS & Compass • LOS to satellites required. Unrealistic in urban environments. • Impaired by LADAR and movement (Maxwell, Larkin, & Lowrance, 2013) • Proprioceptive Sensors • Temperature-sensitive (Durst & Goodin, 2012) • LADAR (including other laser and radio rangefinders) • Heavily impaired by environmental contaminants like dust (Yamauchi, 2010)

  26. Challenges to Automation • Environmental Challenges • Stairs • Mud & Water (Rankin & Matthies, 2010; Rankin, Matthies, & Bellutta, 2011) • “Fixes” for these depend upon sky reflections detectable via video camera. Useless at night. HELP!!!

  27. A Way Forward • UGV Design Requirements (Mills, 2007) • Heavily armed & armored • Less-than-lethal options for force escalation • Automated, with manual control the exception rather than the rule • Navigation independent of GPS • Excellent IFF capability • Obstacle recognition • Quiet • Cheap • Lightweight • Reliable • Modular • Interoperable • Capable of tactical behavior (On the slide, each requirement is marked as addressed by the proposed system, a software problem beyond the scope of this presentation, or not applicable / already addressed.)

  28. A Way Forward: Neural Networks • Neural networks are: • Biologically plausible data structures • Matrices / arrays • Used to model animal and human behavior • Experiential learners – they learn by doing • Computationally inexpensive
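The "matrices / arrays" point is literal: a layer of a network is just a weight matrix, and a forward pass is a matrix product, which is why inference is computationally cheap. A minimal NumPy illustration (the sizes, seed, and Hebbian learning rate are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))            # the "network" is just this array
x = np.array([1.0, 0.0, 0.5, 0.2])     # input activations (e.g., sensor readings)
y = np.tanh(x @ W)                     # one layer of processing: a matrix product

# "Learning by doing": a Hebbian-style update strengthens weights between
# co-active inputs and outputs (local, cheap, and needing no training set)
W += 0.01 * np.outer(x, y)
```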

  29. A Way Forward: Neural Networks • Neural networks offer: • Simultaneous Reconnaissance Gathering, Data Representation, and Localization • A single mechanism is used to facilitate navigation and data representation for the operator. • Compatibility with highly robust & cheap short-range sensors • Neural networks are often used to model animal behavior. In these models, the perceptual systems have robust and cheap analogs in machine perception. • Inherent Propensity Toward Tactical Behavior • Since the animals that are often modeled are prey animals, navigation based on neural networks encourages stealthy behavior. • No Reliance on GPS • Neural Networks encode memory for the environment and objects in it. No GPS is required for localization or navigation. Now that I’ve promised you the world, you probably want to know how they work…

  30. Neural Networks: Representing the Environment • Cornu Ammonis regions of the hippocampus (CA1 and CA3) • Information enters through the dentate gyrus • CA3: “Cognitive Map” • CA1: Goal site representations (the stuff in particular locations)

  31. Neural Networks: Representing the Environment CA1 and CA3: Interconnected layers of pyramidal cells CA3 encodes possible locations in the environment CA1 encodes the contextual information associated with those locations Exploration increases weight of CA3-CA3 connections (environmental familiarity)

  32. Neural Networks: Purposive Navigation Model search trajectory (left) and spreading activation from goal sites generated then suppressed (right)

  33. Neural Net Powered UGV

  34. Proposed Use Scenario

  35. References • Apostolopoulos, D. (2014). Gladiator. Carnegie Mellon – The Robotics Institute. Retrieved from http://www.ri.cmu.edu/research_project_detail.html?project_id=566&menu_id=261, February, 2014. • Chen, J. Y. C. (2010). UAV-guided navigation for ground robot tele-operation in a military reconnaissance environment. Ergonomics, 53, 940-950. • Childers, M. A., Bodt, B. A., & Camden, R. (2011). Assessing unmanned ground vehicle tactical behaviors performance. International Journal of Intelligent Control and Systems, 16, 52-66. • Coovert, M. D., Prewett, M. S., Saboe, K. N., & Johnson, R. C. (2010). Development of Principles for Multimodal Displays in Army Human-Robot Operations (No. ARL-CR-651). University of Florida: Tampa • DARPA. (2012). DARPA’s four-legged robots walk out for capabilities demonstration. DARPA News. Retrieved from http://www.darpa.mil/NewsEvents/Releases/2012/09/10.aspx, February, 2014. • Durst, P. J. & Goodin, C. (2012). High fidelity modelling and simulation of inertial sensors commonly used by autonomous mobile robots. World Journal of Modelling and Simulation, 8, 172-184. • Fong, T. & Thorpe, C. (2001). Vehicle Teleoperation Interfaces. Autonomous Robots, 11, 9-18. • Gage, D. W. (1995). UGV History 101: A brief history of unmanned ground vehicle (UGV) development efforts. Unmanned Systems Magazine, 13. • Gorchetchnikov, A; Hasselmo, ME (2002). A model of hippocampal circuitry mediating goal-driven navigation in a familiar environment. Neurocomputing, 44, 423-427. doi: 10.1016/S0925-2312(02)00395-8 • Gray, J. P., Karlsen, R. E., DiBerardino, C., Mottern, E., & Kott, N. J. (2012). Challenges to autonomous navigation in complex urban terrain. In SPIE Defense, Security, and Sensing (pp. 83870B-83870B). International Society for Optics and Photonics. • Ha, C. & Lee, D. (2013). Vision-based teleoperation of unmanned aerial and ground vehicles. 
In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, May 6-10. • Hartley, T., Burgess, N., Lever, C., Cacucci, F., & O’Keefe, J. (2000). Modeling place fields in terms of the cortical inputs to the hippocampus. Hippocampus, 10, 369-379. • Hightower, J. D., Smith, D. C., & Wiker, S. F. (1986). Development of remote presence technology for teleoperator systems. In Proceedings of the 14th Meeting of the UJNR/MFP, September. • Hoffman, C. M., Timberlake, W., Leffel, J., & Gont, R. (1999). How is radial arm maze behavior in rats related to locomotor search tactics? Animal Learning & Behavior, 27, 426-444. • Hussain, T. S., Vidaver, G., & Berliner, J. (2005, May). Advocates and critics for tactical behaviors in UGV navigation. In Defense and Security (pp. 255-266). International Society for Optics and Photonics. • Irvin, C., Leo, S., & Kim, J. (2012). GunBot: Design concept of a semi-autonomous UGV with omni-directional mobility and auto-target tracking. In Proceedings of the Florida Conference on Recent Advances in Robotics, Boca Raton, Florida, May 2012. • Koene, R.A., Gorchetchnikov, A., Cannon, R.C. and Hasselmo M.E. (2003) Modeling goal-directed spatial navigation in the rat based on physiological data from the hippocampal formation. Neural Networks, 16, 577-84. • Kogut, G., Blackburn, M., & Everett, H. R. (2003). Using video sensor networks to command and control unmanned ground vehicles. SPACE AND NAVAL WARFARE SYSTEMS CENTER SAN DIEGO CA. • Levy, W.B. (1989). A computational approach to hippocampal function. In R. D. Hawkins & G. H. Bower (Eds.), Computational models of learning in simple neural systems (pp. 243-305). New York, NY: Academic Press. • Levy, W. B., Colbert, C. M., & Desmond, N. L. (1990). Elemental adaptive processes of neurons and synapses: a statistical/computational perspective. In M. A. Gluck & D. E. Rumelhart (Eds.), Neuroscience and connectionist models (pp. 187-235). 
Hillsdale, NJ: Lawrence Erlbaum Assoc., Inc. • Lif, P., Jander, H., & Borgvall, J. (2006). Tactical evaluation of unmanned ground vehicle during a MOUT exercise. In Proceedings of the Human Factors and Ergonomics Society 50th Annual Meeting, 2557-2561. • Maxwell, P., Larkin, D., & Lowrance, C. (2013). Turning remote-controlled military systems into autonomous force multipliers. IEEE Potentials, 32(6), 39-43. • Mellinger, D., Michael, N., & Kumar, V. (2012). Trajectory generation and control for precise aggressive maneuvers with quadrotors. The International Journal of Robotics Research, 31(5), 664-674. • Mills, M. E. (2007). Challenges to the acceptance and proliferation of tactical UGVs. RUSI Defence Systems, 10(2), 28-30. • Mueller, S. T., Perelman, B. S., & Simpkins, B. (2013). Pathfinding in the cognitive map: Network models of mechanisms for search and planning. Biologically Inspired Cognitive Architectures, 5, 94-111. • Nardi, G. J. (2009). Autonomy, unmanned ground vehicles, and the US Army: Preparing for the future by examining the past. Fort Leavenworth, KS: Army Command and General Staff College, School of Advanced Military Studies. • Navarro-Serment, L. E., Mertz, C., & Hebert, M. (2010). Pedestrian detection and tracking using three-dimensional LADAR data. The International Journal of Robotics Research, 29, 1516-1528. • Nilsson, N. J. (1969). A mobile automaton: An application of artificial intelligence techniques. In Proceedings of the First International Joint Conference on Artificial Intelligence, Washington, D.C., May, pp. 509-520. • Ögren, P., Svenmarck, P., Lif, P., Norberg, M., & Söderbäck, N. E. (2013). Design and implementation of a new teleoperation control mode for differential drive UGVs. Autonomous Robots, 1-9. • O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map (Vol. 3, pp. 483-484). Oxford: Clarendon Press. • Perelman, B. S. (2013). A simple feature based approach to context in neurocomputational models of navigation. 
Unpublished manuscript.

  36. References (cont.) • Perelman, B. S. & Mueller, S. T. (2013). A Neurocomputational Approach to Modeling Human Behavior in Simulated Unmanned Aerial Search Tasks. In Proceedings of the 2013 International Conference on Cognitive Modeling. • Peynot, T. & Kassir, A. (2010). Laser-camera data discrepancies and reliable perception in outdoor robotics. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, October 18-22. • Rankin, A. L. & Matthies, L. H. (2010). Passive sensor evaluation for unmanned ground vehicle mud detection. Journal of Field Robotics, 27, 473-490. • Rankin, A. L., Matthies, L. H., & Bellutta, P. (2011). Daytime water detection based on sky reflections. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, May 9-13. • Rolls, E.T., Robertson, R.G., & Georges-François, P. (1997). Spatial view cells in the primate hippocampus. European Journal of Neuroscience,9, 1789–94 • Sam, R. & Hattab, A. (2014). Improving the control behavior of unmanned ground vehicle (UGV) using virtual windows. Unpublished manuscript. Retrieved from http://www.ammarhattab.com/, January 31, 2014. • Samsonovich, A. V., & Ascoli, G. A. (2005). A simple neural network model of the hippocampus suggesting its pathfinding role in episodic memory retrieval. Learning & Memory, 12, 193-208. doi:10.1101/lm.85205 • Walker, A. M., Miller, D. P., & Ling, C. (2013). Spatial orientation aware smartphones for tele-operated robot control in military environments: A usability experiment. In Proceedings of the Human Factors and Ergonomics Society 57th Annual Meeting. • Whitty, M., Cossell, S., Dang, K. S., Guivant, J., & Katupitiya, J. (2010). Autonomous navigation using a real-time 3D point cloud. In Proceedings of the Australasian Conference on Robotics and Automation. • Yamauchi, B. (2004). PackBot: A versatile platform for military robotics. In Proceedings of the SPIE, 5422, 229. 
• Yamauchi, B. (2010). All-weather perception for man-portable robots using ultra-wideband radar. In 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, May 3-8. • Zhou, B. & Dai, X. Z. (2010). Set-membership based real-time terrain modeling of mobile robots with a laser scanner. In Proceedings of the 2010 IEEE International Conference on Mechatronics and Automation, Xi’an, China, August 4-7.
