
Outline: Biological Metaphor


Presentation Transcript


  1. Outline: Biological Metaphor • Biological generalization • How AI applied this • Ramifications for HRI • How the resulting AI architecture relates to automation and control theory

  2. Biological Intelligence* • “Upper brain” or cortex: reasoning over information about goals • “Middle brain”: converting sensor data into information • Spinal cord and “lower brain”: skills and responses • *An amazingly sweeping generalization for the purpose of metaphor

  3. Programming Modules • PRIMITIVES: SENSE, PLAN, ACT, LEARN

  4. Early AI Robotics (1967-86) • SENSE → PLAN → ACT, executed in sequence • Seemed to capture cognitive notions such as the “action-perception cycle”
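
  The cycle on this slide can be made concrete with a toy example. Below is a minimal, runnable Python sketch of the sequential SENSE → PLAN → ACT loop on a hypothetical 1-D world; the world model, planner, and function names are invented for illustration and are not from Murphy's text or any robot middleware.

  ```python
  def sense(true_position):
      """SENSE: turn raw 'sensor data' into a world model (here, just a position)."""
      return {"position": true_position}

  def plan(world_model, goal):
      """PLAN: reason over the world model to produce a sequence of actions."""
      delta = goal - world_model["position"]
      step = 1 if delta > 0 else -1
      return [step] * abs(delta)             # e.g. [1, 1, 1] to move three cells right

  def act(position, action):
      """ACT: execute one action, which changes the world."""
      return position + action

  def sense_plan_act(start, goal):
      position = start
      while position != goal:                # every cycle: sense, then plan, then act
          world = sense(position)
          for action in plan(world, goal):
              position = act(position, action)
      return position

  print(sense_plan_act(start=0, goal=3))     # -> 3
  ```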

  5. Early Problem (1967-86) • SENSE → PLAN → ACT • In practice: the world model was intractable • In theory: ignored Gibson, reflexes, …

  6. Two Major “Loops”: 1 - Reflexes, Reactive, Direct Perception • Spinal cord and “lower brain”: skills and responses, behaviors • *An amazingly sweeping generalization for the purpose of metaphor

  7. Two Major “Loops”: 2 - Deliberative, Uses Symbols/Representations… • “Upper brain” or cortex: reasoning over information about goals • Spinal cord and “lower brain”: skills and responses • *An amazingly sweeping generalization for the purpose of metaphor

  8. Plus Perception to Symbols (Abstraction, Models, Explicit Representation) • “Upper brain” or cortex: reasoning over information about goals • “Middle brain”: converting sensor data into information • Spinal cord and “lower brain”: skills and responses • *An amazingly sweeping generalization for the purpose of metaphor

  9. AI Architecture Using the Biological Metaphor • Deliberative Layer: “upper brain” or cortex (reasoning over information about goals) and “middle brain” (converting sensor data into information) • Reactive (or Behavioral) Layer: spinal cord and “lower brain” (skills and responses)

  10. Behavioral Layer • SENSE-ACT couplings are “behaviors” • Behaviors are independent and run in parallel; the overall output is emergent
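
  To make “independent SENSE-ACT couplings whose combined output is emergent” concrete, here is a minimal Python sketch in the spirit of potential-fields behavior combination. The two behaviors, their gains, and the vector-summation rule are illustrative assumptions, not a prescribed implementation.

  ```python
  import math

  def move_to_goal(robot, goal):
      """Behavior: attractive 'pull' toward the goal."""
      return (goal[0] - robot[0], goal[1] - robot[1])

  def avoid_obstacle(robot, obstacle, repulse_range=3.0):
      """Behavior: repulsive 'push' away from a nearby obstacle."""
      dx, dy = robot[0] - obstacle[0], robot[1] - obstacle[1]
      dist = math.hypot(dx, dy)
      if dist == 0 or dist > repulse_range:
          return (0.0, 0.0)                      # obstacle too far: no response
      gain = (repulse_range - dist) / dist
      return (gain * dx, gain * dy)

  def behavioral_layer(robot, goal, obstacle):
      """Run all behaviors 'in parallel' and sum their output vectors."""
      outputs = [move_to_goal(robot, goal), avoid_obstacle(robot, obstacle)]
      return (sum(v[0] for v in outputs), sum(v[1] for v in outputs))

  # One control step: the resulting action is neither 'go to goal' nor 'avoid',
  # but emerges from their interaction.
  print(behavioral_layer(robot=(0.0, 0.0), goal=(5.0, 0.0), obstacle=(1.0, 0.5)))
  ```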

  11. Just Behavioral Layer…

  12. HRI Ramifications • Overall action is EMERGENT, a product of the interaction of multiple behaviors and their responses to stimuli • Not amenable to proofs or traditional guarantees of correctness/safety • Behaviors-only implementations aren’t optimal

  13. Deliberative Layer • PLAN, then instantiate and monitor SENSE-ACT behaviors • Reuse sensing channels but create task-specific representations (Introduction to AI Robotics, R. Murphy, MIT Press 2000, for second edition)
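
  A minimal Python sketch of what “PLAN, then instantiate and monitor SENSE-ACT behaviors” might look like. The mission, task names, behavior library, and monitoring test are hypothetical placeholders invented for illustration; this is not the book's architecture or code.

  ```python
  # Reactive layer: a library of behaviors, grouped by the task they serve.
  BEHAVIOR_LIBRARY = {
      "navigate": ["move_to_goal", "avoid_obstacle"],
      "dock":     ["align_to_marker", "creep_forward"],
  }

  def plan(mission):
      """Deliberative PLAN: decompose the mission into an ordered list of tasks."""
      return ["navigate", "dock"] if mission == "recharge" else ["navigate"]

  def instantiate(task):
      """Pick the subset of behaviors that accomplishes this task."""
      return BEHAVIOR_LIBRARY[task]

  def monitor(task, progress):
      """Deliberative monitoring: decide whether the active behaviors are working."""
      return progress >= 1.0                    # toy termination test

  def deliberative_layer(mission):
      for task in plan(mission):
          active = instantiate(task)            # task-specific behavior set
          progress = 0.0
          while not monitor(task, progress):    # the reactive layer would run here
              progress += 0.5                   # stand-in for sensed progress
          print(f"{task}: done (behaviors used: {active})")

  deliberative_layer("recharge")
  ```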

  14. Example

  15. We Don’t Know How to Do the Symbol Grounding Problem / World Models • “Upper brain” or cortex: reasoning over information about goals • “Middle brain”: converting sensor data into information • Spinal cord and “lower brain”: skills and responses • *An amazingly sweeping generalization for the purpose of metaphor

  16. HRI Ramifications • Robots are good at computational optimization and large-data-set problems • Planning and “search” • Allocation • Robots are not good at converting what is in the real world into symbols (which deliberative functions require) • Recognition is hard • Gesturing and giving directions relative to objects is hard because there is no perceptual common ground

  17. What About LEARN? • PRIMITIVES: SENSE, PLAN, ACT, LEARN

  18. What About Other People? *An amazingly sweeping generalization for the purpose of metaphor

  19. HRI Ramifications • Robots don’t learn; when they do, the learning is extremely limited and local to a particular activity or situation • Natural language understanding remains elusive, so robots are unlikely to communicate with any depth

  20. How AI Relates to Factory Automation • Deliberative Layer: • Upper level is mission generation & monitoring • But world modeling & monitoring is hard (situation awareness, SA) • Lower level is selection of behaviors to accomplish the task (instantiation) & local monitoring

  21. Control Theory is “Lower Level” But Doesn’t Necessarily Capture It All • Reactive (fly-by-wire, inner-loop control, behaviors): • Tightly coupled with sensing, so very fast • Many concurrent stimulus-response behaviors, strung together with simple scripting with an FSA • Action is generated by a sensed or internal stimulus • No awareness, no mission monitoring • Models are of the vehicle, not the “larger” world
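
  The “behaviors strung together with simple scripting with an FSA” bullet can be illustrated with a minimal finite state automaton sketch in Python. The states, events, and transition table below are invented for illustration only.

  ```python
  # (current state, sensed event) -> next state; each state names the active behavior.
  TRANSITIONS = {
      ("wander",   "beacon_seen"):  "approach",
      ("approach", "beacon_lost"):  "wander",
      ("approach", "contact_made"): "halt",
  }

  def fsa_step(state, event):
      """Advance the automaton; stay in the same state if no transition matches."""
      return TRANSITIONS.get((state, event), state)

  state = "wander"
  for event in ["nothing", "beacon_seen", "beacon_lost", "beacon_seen", "contact_made"]:
      state = fsa_step(state, event)
      print(event, "->", state)
  # nothing -> wander, beacon_seen -> approach, beacon_lost -> wander,
  # beacon_seen -> approach, contact_made -> halt
  ```

  Note that the automaton only reacts to sensed or internal events; there is no awareness or mission monitoring, which is the point of this slide.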

  22. Consider Time Scales/Horizon • Deliberative upper level (mission generating & monitoring): PRESENT + PAST + FUTURE, SLOW • Deliberative lower level (selecting & implementing behaviors): PRESENT + PAST, FAST • Reactive layer (SENSE-ACT behaviors): PRESENT, VERY FAST, PARALLEL

  23. Recap… • Automation is closed world, autonomy is open world • Automation fails in the open world, autonomy fails • Humans are the more adaptive member of the JCS • A simple biological analogy for AI in robotics: behaviors are easy, advanced cognition is easy, but connecting perception with symbols is hard

  24. Next: Case Studies • Teleoperation • Is always the backup control regime • Operator is mediated and only human • Robot manufacturers cheap out and assume the human can figure it out
