
A framework of safe robot planning

This presentation explores the concept of safe robot planning: goals must be explicitly given to the robot, changes in its environment must be explicitly permitted, and everything else is treated as possibly unsafe. The proposed solution enumerates only goals and permissions, and distinguishes irreversible from reversible actions. The implementation language is Prolog.





Presentation Transcript


  1. A framework of safe robot planning Roland Pihlakas, Institute of Technology, University of Tartu, August 2008

  2. What is safe • A safe action or state is: • a goal explicitly given to the robot; • an explicitly permitted change in the robot’s environment. • Everything else is unknown and possibly unsafe, and therefore should not be caused. • Some automatically calculated sub-goals can be unsafe too, unless they are permitted.
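
A minimal Prolog sketch of this definition (an illustration, not from the slides; goal/1 and permitted_change/1 stand in for the robot's actual configuration):

      % A state or change is safe only if it is an explicit goal or an
      % explicitly permitted change; everything else counts as unsafe.
      safe(X) :- goal(X).
      safe(X) :- permitted_change(X).

      % Example configuration facts (hypothetical):
      goal(room_clean).
      permitted_change(dust_in_bin).

      % For a ground X, anything not derivable as safe must not be caused.
      must_not_cause(X) :- \+ safe(X).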

  3. The problem • Why not enumerate bad states? • Too much work. • Humans are poor at systematically predicting things, especially when they have no direct stake in doing so. • Why not tell the robot all the sub-steps towards the goal? • Too much work. • Again, the given steps may have unexpected consequences.

  4. The problem • Opposing interests: • letting the human get the job of robot configuration / instruction done quickly; • still keeping control, with no surprises.

  5. Proposed solution • “Bad” is implicit. • Usually only the following are enumerated: • goals; • “okay” changes => permissions. • These are perhaps simpler to enumerate. • Analogy: public vs. private law.

  6. An analogy • Three laws, in order of priority: • Do not do anything that is not explicitly permitted. • Fulfill the goals that are given to you. • Optionally fulfill the optional goals.
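
In Prolog, clause order can encode this priority directly; a sketch with invented facts:

      % Law 1 is a filter (never choose an unpermitted action); laws 2 and 3
      % are encoded by clause order: goal-serving actions are tried first.
      choose_action(A) :- candidate(A), permitted(A), serves_goal(A).
      choose_action(A) :- candidate(A), permitted(A), serves_optional_goal(A).

      % Illustrative facts:
      candidate(vacuum_floor).
      candidate(play_music).
      candidate(paint_walls).
      permitted(vacuum_floor).
      permitted(play_music).
      serves_goal(vacuum_floor).
      serves_optional_goal(play_music).

      % ?- choose_action(A).
      % A = vacuum_floor ;     % law 2 before law 3
      % A = play_music.        % paint_walls is never chosen (law 1)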

  7. A useful concept • If you can undo something, it is usually safe, assuming that the current state is safe. • From this follow: • the principle of avoiding irreversible actions; • two special classes of actions and their corresponding results, called “irreversible actions” and “reversible actions”.
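
A sketch of how the two action classes might be used (the predicate names are assumptions, not from the slides):

      :- dynamic explicitly_permitted/1.

      % The two special classes of actions named above:
      reversible(move_chair).        % can be undone by moving it back
      reversible(open_window).
      irreversible(shred_document).  % cannot be undone

      % A reversible action is presumed safe to try from a safe state;
      % an irreversible action needs an explicit permission.
      safe_to_try(Action) :- reversible(Action), current_state_safe.
      safe_to_try(Action) :- irreversible(Action), explicitly_permitted(Action).

      current_state_safe.            % assumed for this example

      % ?- safe_to_try(move_chair).      % true
      % ?- safe_to_try(shred_document).  % false: no permission given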

  8. A few motivating examples • Street cleaning • Room cleaning • Making room on a hard disk • Littering

  9. About permissions • Permissions usually concern: • changes along given dimensions; • usually not specific actions.
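
A possible Prolog encoding of dimension-based permissions (the representation is an assumption; it anticipates the Allow and Reverse statements of the next slide):

      % A permission ranges over a dimension (a state variable), not over
      % a specific action; e.g. "Allow y = any;" permits any value of y.
      permitted_change(y, _Any).                           % Allow y = any;
      permitted_change(z, Old) :- previous_value(z, Old).  % z may only revert

      previous_value(z, 5).

      % Any action is acceptable as long as the changes it causes stay
      % within the permitted dimensions and values.
      change_ok(Dim, NewValue) :- permitted_change(Dim, NewValue).

      % ?- change_ok(y, 42).  % true: y may take any value
      % ?- change_ok(z, 9).   % false: z may only return to its old value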

  10. A simple language example • Goal x = 2; • Allow y = any; • Reverse z; • Dontdisturb w; • Guarantee for q1 = 44 is q2 = 37; • Context a = 177; • Askauth allow b = any;
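
One plausible parsed form of these statements as Prolog terms (the term shapes are an assumption; the slides do not show the internal representation):

      % Each configuration statement becomes one Prolog term:
      statement(goal(x = 2)).
      statement(allow(y = any)).
      statement(reverse(z)).
      statement(dontdisturb(w)).
      statement(guarantee(q1 = 44, q2 = 37)).
      statement(context(a = 177)).
      statement(askauth(allow(b = any))).

      % The configuration can then be queried directly:
      % ?- statement(allow(Var = Val)).
      % Var = y, Val = any.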

  11. Data flow • Inputs: sensors, certain functions calculating some value, etc. • The configuration: three data structures: • preconditions / context; • keep-always conditions; • goal conditions. • Causal relations / prediction module • Automatic plan generation • Precondition checking
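
A sketch of how the three data structures might drive precondition checking (holds/1 and would_violate/2 are hypothetical interfaces to the prediction module):

      :- dynamic would_violate/2.      % the prediction module would assert these

      % The three data structures from the configuration:
      precondition(battery_charged).   % preconditions / context
      keep_always(floor_clean).        % keep-always conditions
      goal_condition(dishes_washed).   % goal conditions

      % A planned step is admissible if every precondition holds and the
      % prediction module foresees no violated keep-always condition.
      admissible(Step) :-
          forall(precondition(P), holds(P)),
          forall(keep_always(K), \+ would_violate(Step, K)).

      holds(battery_charged).          % example world state

      % ?- admissible(wash_dishes).    % true under these facts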

  12. Adding optional goals to the language • An example:
      Obligatory { Goal robot.location = “in front of TV”; }
      Optional { Goal floor.still_clean = yes; }
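
A sketch of the intended semantics (achieve/1 stands in for the real planner; the names are assumptions): all obligatory goals must succeed, while optional goals are attempted but allowed to fail.

      % Obligatory goals must all be achieved; optional ones are attempted,
      % and failures among them are tolerated (ignore/1 swallows failure).
      execute_goals :-
          forall(obligatory(G), achieve(G)),
          forall(optional(G), ignore(achieve(G))).

      obligatory(location(robot, 'in front of TV')).
      optional(still_clean(floor)).

      % Placeholder for the real planner: here it just reports the goal.
      achieve(G) :- format("achieving ~w~n", [G]).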

  13. The protocol • When giving permissions, make sure that the context is correctly specified! • Opposing interests of the human user: • giving many permissions and getting rid of the job quickly; • giving only the necessary permissions and specifying their proper context. • SELinux analogy

  14. Robot learning • The sandbox • Levels of sandboxes

  15. “Passive” safety • The robot distinguishes the user’s commands from auto-generated ones and does not override the user: • it distinguishes clearly between the orders that were given to it and the sub-goals it has set for itself. • By default it avoids only its own mistakes. • Even more: the robot may refuse to act.
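
A sketch of this origin-tagging idea (the mechanism is assumed, not specified on the slide):

      % Every goal carries its origin: given by the user, or a sub-goal
      % the robot generated itself.
      goal_origin(clean_room, user).
      goal_origin(move_chair, self).   % auto-generated sub-goal

      % User orders are never overridden; the robot is conservative only
      % about its own sub-goals, which need an extra safety check.
      may_pursue(G) :- goal_origin(G, user).
      may_pursue(G) :- goal_origin(G, self), checked_safe(G).

      checked_safe(move_chair).        % e.g. established as reversible

      % ?- may_pursue(clean_room).  % true: a direct user order
      % ?- may_pursue(move_chair).  % true only because it passed the check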

  16. Errata • The robot may stop when encountering unexpected / unknown situations, unless instructed otherwise using context-specific goals.

  17. Implementation language • Prolog: • has a built-in parser (useful for configuration processing); • has a variable data type; • automatic memory management; • useful data types for expressing constraints or uncertainty; • conveniently short syntax for failing calls and resuming alternatives at upper levels (backtracking); • “scriptable”.
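
A tiny illustration of the backtracking point (the example predicates are invented): a failing call makes Prolog resume with the next alternative, with no explicit error-handling code.

      % Three alternative routes; a failing attempt automatically falls
      % through to the next clause.
      route(front_door).
      route(back_door).
      route(window).

      passable(window).                % only the window is passable today

      enter(Via) :- route(Via), passable(Via).

      % ?- enter(Via).
      % Via = window.    % front_door and back_door failed and were skipped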

  18. Future • Multiple contexts / goal sets • Online planning, partial plans • Online diagnostics and remedial action in case of danger, faults, etc. • Automatically finding unnecessary rights • A more powerful prediction module • Time constraints

  19. Future • Asking for authorisation during planning: Askauth allow x = any; • Asking the user to choose and authorise one plan from a set of automatically proposed alternatives • Different user levels • Understanding changes caused by external agents or natural forces
