Funder: DARPA CSSP



Imbuing Human-Robot Teams with Intention Recognition

Dr. Gita Sukthankar (gitars@eecs.ucf.edu)

Students: Ken Laviers, Bennie Lewis

Intelligent Agents Lab


(diagram: Robots (Mechanical Agents), Software Agents, and Humans (Biological Agents), connected through Intention (Plan, Activity, Goal) Recognition)

Research Problems
  • Improving plan, activity, and intention recognition
    • Fast
    • Sufficiently accurate
    • Acquiring training data
    • Making it sample-efficient
  • Determining when to act autonomously
    • Transfer-of-control
  • Identifying what to do
    • This can be the hardest problem!
  • Not annoying the human users!
    • Mutual predictability
    • Human must be able to infer the intentions of the agents/robots
  • Applications:
    • Improved game and simulation AI
    • Intelligent training and tutoring systems
    • Human-robot interfaces
    • Elder-care home monitoring systems
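Plan, activity, and goal recognition of the kind listed above is often cast as Bayesian inference over candidate goals. A minimal sketch follows; the goals, actions, and probabilities are all hypothetical, not values from this project:

```python
# Minimal Bayesian intention recognition: maintain a posterior over
# candidate goals and update it after each observed action.
# All goals, actions, and probabilities are illustrative.

def update_posterior(posterior, likelihoods, action):
    """One Bayes update: P(goal | action) ∝ P(action | goal) * P(goal)."""
    unnorm = {g: p * likelihoods[g].get(action, 1e-6)
              for g, p in posterior.items()}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

# Hypothetical goals and per-goal action models.
likelihoods = {
    "fetch_object": {"approach_shelf": 0.7, "approach_door": 0.1},
    "leave_room":   {"approach_shelf": 0.1, "approach_door": 0.8},
}
posterior = {"fetch_object": 0.5, "leave_room": 0.5}  # uniform prior

for action in ["approach_shelf", "approach_shelf"]:
    posterior = update_posterior(posterior, likelihoods, action)

print(max(posterior, key=posterior.get))  # most likely intention so far
```

Sample efficiency then becomes the question of how many observed actions are needed before the posterior separates the candidate goals.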
Example: Adaptive Opponents

Exploit adversarial models to improve team decision-making

Online Play Recognition
  • Create tree of team spatio-temporal traces
  • Combine output from multiple classifiers to reliably recognize plays
  • Modify policy of key players to improve play of entire team
  • Adapt in real-time to the strategy employed by the human player
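The second bullet, combining output from multiple classifiers, can be sketched as a weighted vote over per-classifier play labels. The classifiers, features, and weights below are illustrative stand-ins, not the ones used in the system:

```python
# Sketch of a multi-classifier play recognizer: several weak classifiers
# each label the current window of spatio-temporal traces, and a weighted
# vote produces the team's answer. Features and weights are made up.
from collections import defaultdict

def recognize_play(window, classifiers):
    """classifiers: list of (weight, fn) where fn(window) -> play label."""
    votes = defaultdict(float)
    for weight, fn in classifiers:
        votes[fn(window)] += weight
    return max(votes, key=votes.get)

# Toy classifiers keyed on crude trace features.
def by_spread(w):
    return "pass" if w["receiver_spread"] > 5.0 else "run"

def by_backfield(w):
    return "run" if w["backfield_depth"] < 2.0 else "pass"

classifiers = [(0.6, by_spread), (0.4, by_backfield)]
window = {"receiver_spread": 7.2, "backfield_depth": 3.1}
print(recognize_play(window, classifiers))  # -> pass
```

The vote makes the recognizer tolerant of any single unreliable classifier, which is what "reliably recognize plays" demands.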
Learning to Adapt

Divide and conquer the problem into several learning modules:

  • Play recognizer
  • Successor state estimator
  • Reward estimator

Individual modules are inaccurate but combine to learn an effective play adaptation.

Use Monte Carlo search to rapidly evaluate a large number of play adaptations.



Adaptive agents improve the yardage gained per play and reduce the number of interceptions relative to the standard game AI.
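The Monte Carlo evaluation step above amounts to averaging many simulated rollouts per candidate adaptation and keeping the best. A sketch, where the rollout model, candidate names, and yardage numbers are fabricated for illustration:

```python
# Monte Carlo evaluation of candidate play adaptations: simulate each
# candidate many times with a noisy reward estimator and pick the best.
# The rollout model and numbers are stand-ins, not the project's.
import random

def evaluate(candidate, rollout, n, rng):
    """Average reward of `candidate` over n simulated rollouts."""
    return sum(rollout(candidate, rng) for _ in range(n)) / n

def best_adaptation(candidates, rollout, n=200, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    return max(candidates, key=lambda c: evaluate(c, rollout, n, rng))

# Hypothetical expected yardage per adaptation, observed with noise.
true_value = {"sweep_left": 3.0, "screen_pass": 5.5, "draw": 4.2}

def rollout(candidate, rng):
    return rng.gauss(true_value[candidate], 2.0)

print(best_adaptation(true_value.keys(), rollout))
```

With enough rollouts the sampling noise averages out, which is why inaccurate per-rollout estimates can still rank adaptations correctly.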

Human-Agent-Robot Teams
  • Robots need humans to use their past experience and common-sense knowledge:
    • To process ambiguous sensor data
    • To solve complicated planning problems (e.g., figuring out the grasp points on objects)
  • Humans need help with repetitive tasks:
    • Monitoring multiple information streams (video or audio)
    • Toggling between multiple robots
  • Agents can facilitate HRI by:
    • Monitoring the humans to identify operator distraction
    • Remembering and propagating commands intelligently across teams of robots
User Interface

(view from an overhead camera)

User Interface

(gamepad control is popular with our student test subjects)

Learn Models of User Distraction
  • Learn model of user distraction by inserting artificial visual distractions into simulation
  • Identify which of the three robots the user is paying attention to
  • Features based on robot motion trajectories
  • Use EM to fit the parameters of an HMM
  • Perform transfer-of-control when the distraction level exceeds a threshold
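The threshold-based transfer-of-control decision can be sketched with a two-state forward filter over trajectory-derived observations. The states match the slide; the transition and emission probabilities below are illustrative placeholders for the EM-fitted values:

```python
# Two-state HMM filter (attentive vs. distracted) over a discretized
# motion feature: track P(distracted | observations) and hand control
# to the agent when it crosses a threshold. All numbers are made up;
# in the actual work they are fit with EM.
STATES = ("attentive", "distracted")
TRANS = {"attentive":  {"attentive": 0.9, "distracted": 0.1},
         "distracted": {"attentive": 0.2, "distracted": 0.8}}
EMIT = {"attentive":  {"robot_idle": 0.2, "robot_moving": 0.8},
        "distracted": {"robot_idle": 0.7, "robot_moving": 0.3}}

def filter_step(belief, obs):
    """One forward-algorithm step: predict, then weight by the emission."""
    pred = {s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES}
    unnorm = {s: pred[s] * EMIT[s][obs] for s in STATES}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

def should_take_control(observations, threshold=0.6):
    belief = {"attentive": 0.5, "distracted": 0.5}
    for obs in observations:
        belief = filter_step(belief, obs)
    return belief["distracted"] > threshold

print(should_take_control(["robot_idle", "robot_idle", "robot_idle"]))
```

Because the belief carries over between steps, the filter remembers past state, which is the advantage claimed on the next slide over memoryless models.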


Learn Models of User Distraction
  • Identification of the user's distraction level is more accurate than with models that don't remember past state
  • Two-state (control vs. no-control) classification accuracy shows our decision threshold
  • Statistically significant improvements (p < .05) in the time required to find all victims in an urban rescue scenario
Multi-Robot Manipulation
  • Sensors on robot are insufficient for good grasp planning
  • Toggling rapidly between robots is complicated for users
  • Idea: leverage commands given by the user to one robot by propagating (and translating) them to the second robot
  • User study evaluating command paradigm:
    • Follow Me: 2nd robot joins the 1st robot
    • Mirror Me: 2nd robot copies the 1st robot
  • Scenario involves moving piles of objects to a goal location, some of which require two robots to move
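The two primitives can be sketched as transformations of the operator's commands. The `Command` type, the mirroring convention (flipping the turn direction), and the proportional controller are all assumptions made for illustration, not the study's implementation:

```python
# Sketch of the two command-propagation primitives:
#   Follow Me - the 2nd robot steers toward the 1st robot's position.
#   Mirror Me - the 2nd robot replays the 1st robot's command, mirrored.
# Types, mirroring axis, and controller gains are illustrative.
import math
from dataclasses import dataclass

@dataclass
class Command:
    vx: float     # forward speed
    omega: float  # turn rate (Follow Me abuses it as a desired heading)

def mirror_me(cmd: Command) -> Command:
    """Replay the leader's command with the turn direction flipped."""
    return Command(vx=cmd.vx, omega=-cmd.omega)

def follow_me(leader_pos, follower_pos, gain=1.0) -> Command:
    """Proportional steering of the follower toward the leader."""
    dx = leader_pos[0] - follower_pos[0]
    dy = leader_pos[1] - follower_pos[1]
    return Command(vx=gain * math.hypot(dx, dy), omega=math.atan2(dy, dx))

cmd = Command(vx=0.5, omega=0.3)
print(mirror_me(cmd))                    # same speed, opposite turn
print(follow_me((3.0, 4.0), (0.0, 0.0)))  # drive toward the leader
```

Either way the user issues commands to one robot only, which is the point: the second robot's behavior is derived rather than separately teleoperated.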
Human-Agent-Robot Teams
  • User study on 20 users had very promising results
  • Introducing these two new primitives reduces both the time required to complete the task and the number of object drops in most scenarios
  • Favorable responses on the post-test questionnaire
  • Current work:
    • Incorporating a learning-by-demonstration mode so users can teach the primitives by example rather than having them preprogrammed by the developer


(diagram: Robots (Mechanical Agents), Software Agents, and Humans (Biological Agents), connected through Intention (Plan, Activity, Goal) Recognition)

Agents are well-positioned to serve as an enabler of mutual predictability through a combination of intention recognition and communication monitoring.