
Intro



Presentation Transcript


  1. Intro Computational Neuroscience NSCI 492 Spring 2008

  2. Course organization
  • Syllabus at http://www.tulane.edu/~howard/CompNSCI/
  Harry Howard, NSCI 492, Tulane University

  3. Readings
  • Newman, D. Jay (2006) Linux Robotics: Programming Smarter Robots. §5 Behavioral programming.
  • Predko, Myke (2003) Programming Robot Controllers. §5 Bringing your robot to life.
  • Preston, Scott (2006) The Definitive Guide to Building Java Robots. §7 Navigation.

  4. General methods (Newman, Predko, Wikipedia)
  • Dedicated programming for a particular situation
  • Behavioral or reactive programming
  • Symbolic AI
  • Sub-symbolic AI and neural networks
  • Intelligent agents and hybrid approaches

  5. Dedicated programming
  • Program the robot to perform a small and specific set of actions.
  • This works fine in limited worlds, like welding a seam in a car factory, but it does not generalize to the real world.

  6. Behavioral or reactive programming

  7. Overview
  • Behaviors/reactions are triggered by sensor input: one or more sensors trigger a behavior.
  • The active behavior affects one or more actuators.
  • The robot has no map of the world -- "the world as seen through the sensors is its own map."
  • The robot has little to no memory, no concept of state, does not learn, and does not time anything.
  • Computation is easier and faster.

  8. Examples
  • The (Braitenberg) obstacle-avoidance controller program for the Kheperas.
  • Predko's robot 'shadow' application, pp. 350-5; this is an interesting project for Webots, and ultimately for a pair of Kheperas.
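A Braitenberg-style obstacle-avoidance controller like the Khepera one can be sketched in a few lines. This is a minimal illustration, not the actual Webots/Khepera API: the sensor layout (sensors 0-2 on the front left, 3-5 on the front right, 6-7 at the rear), the weight values, and the speed units are all assumptions made for the example.

```python
BASE_SPEED = 5.0

# Illustrative per-sensor weights (not real Khepera calibration values):
# a close obstacle on the left speeds up the left wheel and slows the
# right wheel, so the robot turns away to the right, and vice versa.
LEFT_WEIGHTS  = [ 0.4,  0.2,  0.1, -0.1, -0.2, -0.4, 0.0, 0.0]
RIGHT_WEIGHTS = [-0.4, -0.2, -0.1,  0.1,  0.2,  0.4, 0.0, 0.0]

def braitenberg_step(proximity):
    """Map 8 normalized proximity readings (0 = clear, 1 = very close)
    directly to wheel speeds: pure stimulus-response, with no map,
    no memory, and no state."""
    left = BASE_SPEED + sum(w * p for w, p in zip(LEFT_WEIGHTS, proximity))
    right = BASE_SPEED + sum(w * p for w, p in zip(RIGHT_WEIGHTS, proximity))
    return left, right
```

With nothing in range the robot drives straight at BASE_SPEED; an obstacle on the front left makes the left wheel faster than the right, steering the robot away from it.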

  9. Subsumption architecture
  • The robot has an ordered list of behaviors; the ones with lower numbers have higher priority than the ones with higher numbers.
  • Each behavior can choose to return either 0 (inactive) or 1 (active) for its activation.
  • Only one behavior is active at any given time: the highest-priority behavior that wants to be activated.
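The arbitration rule above can be sketched as a simple loop over behaviors held in priority order. The behavior names and the sensor dictionary below are hypothetical; the point is the 0/1 activation flag and the "first active behavior wins" rule.

```python
class Behavior:
    """One entry in the robot's ordered behavior list."""
    def __init__(self, name, wants_control, act):
        self.name = name
        self.wants_control = wants_control  # sensors -> 0 or 1
        self.act = act                      # sensors -> actuator command

def arbitrate(behaviors, sensors):
    """Run the highest-priority behavior that reports itself active
    (lower index = higher priority); only one behavior runs per cycle."""
    for behavior in behaviors:
        if behavior.wants_control(sensors):
            return behavior.name, behavior.act(sensors)
    return "idle", None

# Hypothetical two-behavior stack: avoidance subsumes wandering.
behaviors = [
    Behavior("avoid", lambda s: int(s["proximity"] > 0.8),
             lambda s: "turn_away"),
    Behavior("wander", lambda s: 1,
             lambda s: "drive_forward"),
]
```

When nothing is close, "wander" is the highest-priority active behavior; as soon as the proximity reading crosses the threshold, "avoid" takes over and "wander" never runs.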

  10. HH's response
  • From a biological perspective, the behaviors/reactions are reflexes.
  • Predko (p. 359, last lines) says the same thing, but about neural nets, not reactive programming.
  • Newman is right to compare reactive programming to insect rather than human behavior.

  11. Traditional symbolic AI

  12. Schools of good old fashioned AI or GOFAI
  • From Wikipedia: when access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation.
  • The research was centered in three institutions -- CMU, Stanford, and MIT -- and each one developed its own style of research.
  • John Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI".

  13. Cognitive simulation @ CMU
  • Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them.
  • Their research team performed psychological experiments to demonstrate the similarities between human problem solving and the programs (such as their "General Problem Solver") they were developing.
  • This tradition would eventually culminate in the development of the Soar architecture in the middle 80s.

  14. Logical AI @ Stanford
  • John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.
  • His laboratory (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning, and learning.
  • Work in logic led to the development of the programming language Prolog and the science of logic programming.

  15. "Scruffy" symbolic AI @ MIT
  • Marvin Minsky and Seymour Papert found that solving difficult problems in vision and natural language processing required ad-hoc solutions; they argued that there was no silver bullet, no simple and general principle (like logic) that would capture all the aspects of intelligent behavior.
  • An important realization was that AI required large amounts of commonsense knowledge, and that this had to be engineered one complicated concept at a time.
  • Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford), and this still forms the basis of research into commonsense knowledge, such as Doug Lenat's Cyc.

  16. Newman's overview
  • The programmer makes a model of the world in a symbolic language.
  • The model is compared to the sensor input, and the robot takes some action.
  • The problem is how to explicitly program a computer to recognize objects in its sensor input, like a sofa.

  17. Sub-symbolic AI and neural networks

  18. History
  • During the 1960s, symbolic (GOFAI) approaches had achieved great success at simulating high-level thinking in small demonstration programs.
  • Approaches based on cybernetics or neural networks were abandoned or pushed into the background.
  • By the 1980s, however, progress in symbolic AI seemed to stall, and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning, and pattern recognition.
  • A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.

  19. The beginning
  • Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.
  • Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 50s and reintroduced the use of control theory in AI.
  • These "bottom-up" approaches are known as behavior-based AI, situated AI, or Nouvelle AI, and are closely tied to embodied cognitive science.

  20. Neural networks
  • Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle 1980s.
  • These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.
  • In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific sub-problems.
  • These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes.
  • The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics, or operations research).

  21. Newman and Predko
  • Newman: "An ANN can do anything a traditional program can do." The problem is the high memory and computational cost.
  • Predko says some ridiculous things on pp. 356-7, yet his thumbnail sketch of neural networks on the following pages is OK.
  • He is right to say that the 'robot moth' NN on p. 358 acts more like a reflex than "true intelligence"; we want to investigate what the missing component is.
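Why a 'robot moth' network behaves like a reflex is easy to see in a sketch: with fixed, hand-set weights and no learning rule, a single neuron just maps light readings to a steering output, exactly like a hard-wired stimulus-response pair. The weights and the steering convention below are assumptions for illustration, not Predko's actual circuit.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Fixed, hand-set weights: nothing here ever changes, so the "network"
# is just a wired-in reflex, not a system that learns.
W_LEFT, W_RIGHT = 4.0, -4.0

def moth_turn(left_light, right_light):
    """One sigmoid unit mapping two light readings to a steering value
    in (0, 1): above 0.5 means steer left, below 0.5 means steer right."""
    return sigmoid(W_LEFT * left_light + W_RIGHT * right_light)
```

The missing component the slide asks about would be anything that changes the weights over time, e.g. a learning rule driven by experience.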

  22. Intelligent agents (Wikipedia on AI)

  23. Definition
  • An intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.
  • The simplest intelligent agents are programs that solve specific problems.
  • The most complicated intelligent agents would be rational, thinking human beings.

  24. History
  • The intelligent agent paradigm became widely accepted during the 1990s.
  • Although earlier researchers had proposed modular "divide and conquer" approaches to AI, the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell, and others brought concepts from decision theory and economics into the study of AI.
  • When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.

  25. In practice
  • An agent architecture or cognitive architecture allows researchers to build more versatile and intelligent systems out of interacting intelligent agents in a multi-agent system.
  • The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach.
  • The paradigm gives researchers a common language to describe problems and share their solutions with each other and with other fields -- such as decision theory -- that also use concepts of abstract agents.
  • An agent that solves a specific problem can use any approach that works: some agents are symbolic and logical, some are sub-symbolic neural networks, and some can be based on new approaches, without forcing researchers to reject old approaches that have proven useful.

  26. Overt integration
  • A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is AI systems integration.
  • Hierarchical control system theory is a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling.

  27. Summary

  28. Reactive paradigm (loop)
  Environment changes → [feedback] → Robot senses change (sensory cortex) → [feedforward] → Robot does something (motor cortex) → Environment changes …

  29. Agent paradigm (loop)
  Environment changes → [feedback] → Robot senses change (sensory cortex) → Robot chooses among alternatives (prefrontal cortex) → [feedforward] → Robot does something (motor cortex) → Environment changes …

  30. Human-like paradigm (loop)
  Environment changes → [feedback] → Robot senses change (sensory cortex) → Robot chooses among alternatives (prefrontal cortex) → Robot predicts result → [feedforward] → Robot does something (motor cortex) → Environment changes …

  31. Task 1: Dead reckoning
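For a differential-drive robot like the Khepera, dead reckoning boils down to integrating wheel travel into a pose estimate. Here is a minimal sketch, assuming incremental wheel distances from encoders and a straight-segment approximation per update; the function name, parameter names, and units are illustrative, not from any of the course texts.

```python
import math

def dead_reckon(x, y, theta, d_left, d_right, axle):
    """Update pose (x, y, heading in radians) from the distances the
    left and right wheels travelled since the last update; `axle` is
    the separation between the wheels."""
    d = (d_left + d_right) / 2.0        # distance moved by the robot's center
    dtheta = (d_right - d_left) / axle  # change in heading
    # Advance along the average heading over the step.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
</imports>```

Because every update adds a little encoder and slippage error, the estimate drifts without bound; dead reckoning alone eventually needs correction from landmarks or other sensing.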

  32. Next time
  • Intro to Webots
  • Maybe start on the hippocampus
