
Imbuing Human-Robot Teams with Intention Recognition




Presentation Transcript


  1. Funder: DARPA CSSP
  Imbuing Human-Robot Teams with Intention Recognition
  Dr. Gita Sukthankar (gitars@eecs.ucf.edu)
  Students: Ken Laviers, Bennie Lewis

  2. Intelligent Agents Lab
  [Diagram: Robots (Mechanical Agents), Software Agents, Intention (Plan, Activity, Goal) Recognition, Humans (Biological Agents)]

  3. Research Problems
  • Improving plan, activity, and intention recognition
    • Fast
    • Sufficiently accurate
  • Acquiring training data
    • Making it sample-efficient
  • Determining when to act autonomously
    • Transfer-of-control
  • Identifying what to do
    • This can be the hardest problem!
  • Not annoying the human users!
    • Mutual predictability
    • The human must be able to infer the intentions of the agents/robots

  4. Domains
  • Improved game and simulation AI
  • Intelligent training and tutoring systems
  • Human-robot interfaces
  • Elder-care home monitoring systems

  5. Example: Adaptive Opponents
  Exploit adversarial models to improve team decision-making

  6. Online Play Recognition
  • Create a tree of team spatio-temporal traces
  • Combine output from multiple classifiers to reliably recognize plays
  • Modify the policy of key players to improve the play of the entire team
  • Adapt in real time to the strategy employed by the human player
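
Slide 6 mentions combining the output of multiple classifiers to recognize plays reliably but does not specify the fusion rule. Below is a minimal sketch of one common choice, weighted averaging of per-classifier posteriors; the play names, weights, and probability values are illustrative placeholders rather than details from the project.

```python
import numpy as np

# Placeholder play library; the actual play set is not listed on the slide.
PLAYS = ["run_left", "run_right", "pass_short", "pass_deep"]

def combine_play_posteriors(posteriors, weights=None):
    """Fuse class-probability outputs from several play classifiers
    (e.g., one per observation window of the spatio-temporal trace)
    by weighted averaging, and return the most likely play."""
    P = np.asarray(posteriors, dtype=float)         # (n_classifiers, n_plays)
    w = np.ones(len(P)) if weights is None else np.asarray(weights, dtype=float)
    fused = (w[:, None] * P).sum(axis=0) / w.sum()  # weighted mean posterior
    return int(np.argmax(fused)), fused

# Example: three classifiers score the same partial play trace.
outputs = [
    [0.10, 0.20, 0.60, 0.10],
    [0.05, 0.15, 0.70, 0.10],
    [0.25, 0.25, 0.30, 0.20],
]
best, fused = combine_play_posteriors(outputs, weights=[1.0, 1.5, 0.5])
print(PLAYS[best], fused.round(3))
```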

  7. Learning to Adapt
  • Divide and conquer the problem into several learning modules:
    • Play recognizer
    • Successor state estimator
    • Reward estimator
  • Individual modules are inaccurate but combine to learn an effective play adaptation
  • Use Monte Carlo search to rapidly evaluate a large number of play adaptations
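
Slide 7 says a Monte Carlo search is used to evaluate a large number of candidate play adaptations quickly, with the learned successor-state and reward estimators supplying the rollouts. The sketch below shows only that rollout-and-average pattern; the two model functions here are toy stand-ins for the learned modules.

```python
import random

def monte_carlo_select(candidate_adaptations, successor_model, reward_model,
                       start_state, rollouts=50, horizon=10):
    """Return the adaptation with the highest average rollout reward.

    successor_model(state, adaptation) -> next state (learned, approximate)
    reward_model(state)                -> scalar reward estimate (approximate)
    """
    best, best_value = None, float("-inf")
    for adaptation in candidate_adaptations:
        total = 0.0
        for _ in range(rollouts):
            state = start_state
            for _ in range(horizon):
                state = successor_model(state, adaptation)
            total += reward_model(state)
        value = total / rollouts
        if value > best_value:
            best, best_value = adaptation, value
    return best, best_value

# Toy stand-ins for the learned modules (illustrative only).
def successor_model(state, adaptation):
    return state + adaptation + random.gauss(0.0, 1.0)   # noisy progress

def reward_model(state):
    return state                                          # e.g., expected yardage

best, value = monte_carlo_select([0.5, 1.0, 2.0], successor_model,
                                 reward_model, start_state=0.0)
print(best, round(value, 2))
```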

  8. Results
  Adaptive agents improve the yardage gained in a play and reduce the number of interceptions over the standard game AI.

  9. Human-Agent-Robot Teams
  • Robots need humans to use their past experience and common-sense knowledge:
    • To process ambiguous sensor data
    • To solve complicated planning problems (e.g., figuring out the grasp points on objects)
  • Humans need help with repetitive tasks:
    • Monitoring multiple information streams (video or audio)
    • Toggling between multiple robots
  • Agents can facilitate HRI by:
    • Monitoring the humans to identify operator distraction
    • Remembering and propagating commands intelligently across teams of robots

  10. User Interface (view from an overhead camera)

  11. User Interface (gamepad control is popular with our student test subjects)

  12. RSARSim (video)

  13. Learn Models of User Distraction
  • Learn a model of user distraction by inserting artificial visual distractions into the simulation
  • Identify which of the three robots the user is paying attention to
  • Features based on robot motion trajectories
  • Use EM to fit the parameters of an HMM
  • The agent performs transfer-of-control when the distraction level goes over a certain threshold
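
Slide 13 describes fitting an HMM with EM to trajectory-based features and performing a transfer-of-control when the estimated distraction level crosses a threshold. The sketch below assembles that pipeline with the hmmlearn library as a stand-in; the feature dimensionality, the mapping from hidden states to "distracted", and the 0.7 threshold are assumptions for illustration, not details from the project.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # fits parameters with EM (Baum-Welch)

def fit_distraction_hmm(sessions, n_states=2):
    """Fit an HMM to per-time-step features derived from robot motion
    trajectories; the slide only says the features are trajectory-based,
    so their exact definition is left open here."""
    X = np.concatenate(sessions)
    lengths = [len(s) for s in sessions]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(X, lengths)
    return model

def should_transfer_control(model, recent_features, distracted_state=1,
                            threshold=0.7):
    """Trigger transfer-of-control when the posterior probability of the
    'distracted' state exceeds the threshold (state index and cutoff are
    illustrative assumptions)."""
    posterior = model.predict_proba(recent_features)
    return bool(posterior[-1, distracted_state] > threshold)

# Synthetic stand-in data: 5 operator sessions, 4 features per time step.
rng = np.random.default_rng(0)
sessions = [rng.normal(size=(200, 4)) for _ in range(5)]
hmm = fit_distraction_hmm(sessions)
print(should_transfer_control(hmm, sessions[0][-20:]))
```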

  14. Learn Models of User Distraction
  • Identification of the user's distraction level is more accurate than with models that don't remember past state
  • Two-state classification accuracy (control vs. no-control) shows our decision threshold
  • Statistically significant improvements (p < .05) in the time required to find the total number of victims in the urban rescue scenario

  15. Multi-Robot Manipulation
  • Sensors on the robot are insufficient for good grasp planning
  • Toggling rapidly between robots is complicated for users
  • Idea: leverage commands given by the user to one robot by propagating (and translating) them to the second robot
  • User study evaluating the command paradigm:
    • Follow Me: the 2nd robot joins the 1st robot
    • Mirror Me: the 2nd robot copies the 1st robot
  • The scenario involves moving piles of objects to a goal location, some of which require two robots to move
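
Slide 15 introduces two command-propagation primitives, Follow Me and Mirror Me, that let a command given to one robot also drive the second robot. The sketch below only illustrates the propagate-and-translate idea; the Command fields, the navigation-goal representation for Follow Me, and the turn-mirroring rule for Mirror Me are assumptions, not the interface's actual behavior.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Command:
    """One gamepad-style teleoperation command (fields are illustrative)."""
    linear: float    # forward velocity
    angular: float   # turn rate
    gripper: str     # "open" or "close"

def follow_me(leader_pose: Tuple[float, float]) -> dict:
    """Follow Me: the 2nd robot joins the 1st, expressed here as a
    navigation goal at the leader's current position."""
    return {"type": "navigate_to", "target": leader_pose}

def mirror_me(leader_cmd: Command) -> Command:
    """Mirror Me: the 2nd robot copies the 1st robot's command; reflecting
    the turn direction is an assumed interpretation of 'mirror'."""
    return Command(leader_cmd.linear, -leader_cmd.angular, leader_cmd.gripper)

# One user command drives robot 1; the agent propagates it to robot 2.
user_cmd = Command(linear=0.4, angular=0.2, gripper="close")
robot1_pose = (3.5, 1.2)
print(follow_me(robot1_pose))
print(mirror_me(user_cmd))
```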

  16. Human-Agent-Robot Teams
  • A user study with 20 users had very promising results
  • Introducing the two new primitives reduces both the time required to complete the task and the number of object drops in most of the scenarios
  • Favorable responses on the post-test questionnaire
  • Current work: incorporating a learning-by-demonstration mode that lets users teach the primitives rather than having them preprogrammed by the developer

  17. Conclusions
  Agents are well-positioned to serve as an enabler of mutual predictability through a combination of intention recognition and communication monitoring.
  [Diagram: Robots (Mechanical Agents), Software Agents, Humans (Biological Agents)]

  18. Multi-Robot Manipulation
