
Human Robot Teams: Concepts, Constraints, and Experiments


Presentation Transcript


  1. Human Robot Teams: Concepts, Constraints, and Experiments Michael A. Goodrich, Dan R. Olsen Jr., Brigham Young University

  2. Research Agenda • Evaluation Technology • Neglect Tolerance • Behavioral Entropy • Fan-Out • Interface Design • Mixed Reality Displays • Principles • HF Experiments • Autonomy Design • Team-Based Autonomy • UAVs • Perceptual Learning

  3. The Presentation Agenda • The types of questions • Neglect tolerance: Is a team feasible? • How do we compute neglect tolerances? • Tradeoffs: workload and performance • Is a team optimal? • The problem with switch costs • Some limits, ideas, and proposals

  4. A Special Case: The Robotics Specialist • One soldier • Two UAVs • One UGV • Can one person manage all three assets? • At what level of performance? • At what level of engagement?

  5. A More General Case: Span of Control • How many “things” can be managed by a single human? • How many robots? • How do we measure Span of Control in HRI? • Relationships between neglect time (NT) and interaction time (IT) • How do we compare possible team configurations? • Evaluate performance-workload tradeoffs • Identify performance of feasible configurations

  6. The Most General Case: Multiple Robots & Multiple Humans • How many people are responsible for a single robot? • How many robots can provide information to a single human? [Diagram: Platoon Headquarters Organization, showing an ICV driver, vehicle commander, Robotics NCO, PLT LDR, and medic, with assets including two CL I UAV systems, an ARV-A (L), and ICVs]

  7. The Presentation Agenda • The types of questions • Neglect tolerance: Is a team feasible? • How do we compute neglect tolerances? • Tradeoffs: workload and performance • Is a team optimal? • The problem with switch costs • Some limits, ideas, and proposals

  8. Neglect Tolerance: Neglect Time and Interaction Time • Neglect Time (NT): How long can the robot “go” without needing human input? • Interaction Time (IT): How long does it take for a human to give guidance to the robot?

  9. Fan-Out (Olsen 2003, 2004): How many homogeneous robots? • How many interaction periods “fit” into one neglect period? • Two other robots can be handled while robot 1 is neglected • Fan-out = 3 [Timeline: while robot 1 is neglected for NT, the operator services robots 2, 3, and 4 for IT each]

  10. Can a human manage team T? Fan-out and Feasibility • Fan-out (homogeneous teams) • Feasibility (heterogeneous teams) • These are upper bounds
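
To make slides 9 and 10 concrete, here is a minimal Python sketch of both checks. The homogeneous fan-out formula (FO = NT/IT + 1) follows the fan-out definition cited above; the utilization-style feasibility test for heterogeneous teams, in which each robot claims IT_i out of every NT_i + IT_i seconds of operator time, is one plausible reading of the feasibility bound rather than the exact formulation used in this work.

```python
# Sketch under the assumptions stated above; names are illustrative.

def fan_out(neglect_time: float, interaction_time: float) -> float:
    """Upper bound on identical robots one operator can serve: FO = NT/IT + 1."""
    return neglect_time / interaction_time + 1.0

def is_feasible(robots: list[tuple[float, float]]) -> bool:
    """Heterogeneous team feasibility: each robot i, described by (NT_i, IT_i),
    demands IT_i out of every (NT_i + IT_i) seconds; the team is feasible only
    if the summed demand does not exceed one operator's time."""
    utilization = sum(it / (nt + it) for nt, it in robots)
    return utilization <= 1.0

if __name__ == "__main__":
    print(fan_out(neglect_time=60.0, interaction_time=20.0))          # 4.0
    print(is_feasible([(60.0, 20.0), (120.0, 15.0), (30.0, 30.0)]))   # True
```

Both numbers are upper bounds, as the slide notes: they ignore switch costs, which later slides treat separately.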

  11. The Presentation Agenda • The types of questions • Neglect tolerance: Is a team feasible? • How do we compute neglect tolerances? • Tradeoffs: workload and performance • Is a team optimal? • The problem with switch costs • Some limits, ideas, and proposals

  12. Neglect Impact Curves • A task is neglected if attention is elsewhere • Neglect impacts task performance: secondary tasks

  13. Not Neglect Tolerant Enough

  14. Too Neglect Tolerant • Old Glory Insurance

  15. Interface Efficiency Curves • Recovery from “zero” point • Imprecise switch costs

  16. Efficient Interfaces • PDA-based UAV control (versus command line)

  17. Efficient Interfaces • Phycon-based UAV control (versus command line)

  18. Finding NT and IT from the curves

  19. Example • Vary minimum performance level • Measure • Average performance • Neglect time • Interaction time
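
A minimal sketch of how NT and IT could be read off the curves, assuming each curve is sampled as (time, performance) pairs and a minimum acceptable performance level has been chosen as in the example above; the function names and sample data are illustrative, not the measurement code used in the studies.

```python
# Sketch assuming sampled (time, performance) curves; the data below is made up.

def neglect_time(neglect_curve, threshold):
    """Time the robot can be neglected before performance first drops below threshold."""
    for t, perf in neglect_curve:
        if perf < threshold:
            return t
    return neglect_curve[-1][0]        # performance never drops below threshold

def interaction_time(efficiency_curve, threshold):
    """Servicing time needed before performance first recovers to threshold."""
    for t, perf in efficiency_curve:
        if perf >= threshold:
            return t
    return efficiency_curve[-1][0]     # performance never recovers in the sample

if __name__ == "__main__":
    neglect_curve_samples = [(0, 1.0), (10, 0.9), (20, 0.7), (30, 0.4)]
    recovery_curve_samples = [(0, 0.2), (5, 0.5), (10, 0.8), (15, 0.95)]
    print(neglect_time(neglect_curve_samples, threshold=0.6))       # 30
    print(interaction_time(recovery_curve_samples, threshold=0.8))  # 10
```

Raising the minimum performance level shortens NT and lengthens IT, which is exactly the tradeoff the example varies.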

  20. Validation of Method: Complexity • As complexity goes up, NT goes down and IT goes up • Feasibility using NT/IT needs more work

  21. The Presentation Agenda • The types of questions • Neglect tolerance: Is a team feasible? • How do we compute neglect tolerances? • Tradeoffs: workload and performance • Is a team optimal? • The problem with switch costs • Some limits, ideas, and proposals

  22. Existing Tradeoffs [Plot: performance-workload tradeoff curves, annotated “Increasing Threshold” and “Ideal”]

  23. Types of Autonomy

  24. Using Tradeoffs to Select a Configuration [Figure: candidate configurations plotted on performance-workload axes, each panel marked with an “Ideal” point]

  25. The Presentation Agenda • The types of questions • Neglect tolerance: Is a team feasible? • How do we compute neglect tolerances? • Tradeoffs: workload and performance • Is a team optimal? • The problem with switch costs • Some limits, ideas, and proposals

  26. Predicting Performance of a Heterogeneous Team • Each robot may have multiple autonomy modes and interaction methods • Each interaction scheme yields NT, IT, and average performance values
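
As a rough illustration of such a prediction, the sketch below enumerates one interaction scheme per robot, discards combinations that fail the same utilization-style feasibility check used earlier, and scores each feasible combination by the mean of its schemes' average performances. The aggregation rule, names, and numbers are assumptions for illustration, not the prediction model validated in the experiment.

```python
# Illustrative sketch only; the (NT, IT, performance) values are made up.

from itertools import product

def predict_best_configuration(robots):
    """robots: list of {scheme_name: (NT, IT, avg_performance)} dicts.
    Returns (scheme names, predicted performance) for the best feasible mix."""
    best = None
    for combo in product(*(r.items() for r in robots)):
        # Reject combinations that would overload a single operator.
        if sum(it / (nt + it) for _, (nt, it, _) in combo) > 1.0:
            continue
        predicted = sum(p for _, (_, _, p) in combo) / len(combo)
        if best is None or predicted > best[1]:
            best = ([name for name, _ in combo], predicted)
    return best

if __name__ == "__main__":
    # Hypothetical robot offering a point-to-point (P) and a region-of-interest (R) scheme.
    robot = {"P": (40.0, 20.0, 0.70), "R": (90.0, 30.0, 0.85)}
    print(predict_best_configuration([robot, robot, robot]))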

  27. Accuracy of Predictions in a Three-Robot Team • Two interaction schemes • Point to point (P) • Region of Interest (R) • Three robots • Experiment • 23 subjects • 148 trials • 3 world complexities

  28. The Presentation Agenda • The types of questions • Neglect tolerance: Is a team feasible? • How do we compute neglect tolerances? • Tradeoffs: workload and performance • Is a team optimal? • The problem with switch costs • Some limits, ideas, and proposals

  29. What are switch costs? • The biggest unknown influence on span of control • They come in several flavors: • Time to regain situation awareness • Time to prepare for the switch • Errors and change blindness [Figure callout: “What really happens here?”]

  30. Before and After

  31. Getting a Feel for the Experiment

  32. Preliminary Results • 6 subjects, none naïve • 207 correct change detections • One-sided t-test, equal variances
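
The slide names the statistical test but not the tooling; below is a sketch of how a one-sided two-sample t-test with equal variances could be run in Python with SciPy (1.6 or later for the alternative argument). The detection-time arrays are placeholders, not the experiment's data.

```python
# Placeholder data; only the test itself mirrors the slide's analysis choice.

from scipy import stats

condition_a = [2.1, 2.4, 1.9, 2.6, 2.2, 2.5]   # e.g., detection times, blank condition
condition_b = [2.9, 3.1, 2.7, 3.4, 3.0, 2.8]   # e.g., detection times, Tetris condition

# H1: condition_a has the smaller mean (faster change detection).
t_stat, p_value = stats.ttest_ind(condition_a, condition_b,
                                  equal_var=True, alternative="less")
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}")
```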

  33. Important Trends • Differences not just from “time away” • Blank and Tetris have the same time • UAV and tone have the same time • Averages nearly identical • Differences not just from “counting” • UAV and tone both count • Differences not just from “motor channel” • UAV and tone both select • Tetris requires interaction • Probably spatial reasoning and changing perspectives

  34. The Presentation Agenda • The types of questions • Neglect tolerance: Is a team feasible? • How do we compute neglect tolerances? • Tradeoffs: workload and performance • Is a team optimal? • The problem with switch costs • Some limits, ideas, and proposals

  35. How Many Robots? • Assumptions • Goal: Gather battle-related information while minimizing risk • Media: Mostly camera/video information • Prediction • Interpreting camera information is difficult • High robot autonomy won’t help enough

  36. A Special Case: The Robotics Specialist • Can one person manage multiple robot assets? • At what level of performance? • Goal: gather information • Media: visual (camera/video) • Belief: autonomy will help, but not enough

  37. Mixed Reality Displays • Eliminate “the world through a soda straw” • Integrate vision with active sensors • Integrate display with autonomy • Include sensor uncertainty • Control pan and tilt • Study time-delay effects

  38. Real World Results • Objective • 51% Faster (p < .01) • 93% Less Safeguarding (p < .01) • 29% Lower Entropy (p < .05) • 10% Better on Memory Task (p < .05) • Subjective • 64% Less Workload / Effort (p < .001) • 70% More Learnable (p < .0001) • 46% More Confident (p < .05)

  39. Several Thousand Words

  40. Experiment Results

  41. Mixed Reality Displays (Pan and Tilt)

  42. Phlashlight Concept • What will the UAV see? • Control the information source, not the robot

  43. Semantic Maps and Change Highlighting • Video in context • Icon-based maps with semantic labels • “That was then, this is now” comparison: change highlighting • Information decay

  44. Information in Context

  45. Support Timely Shifts • Prompt prospective memory • Shift in a timely way • Give time to prepare [Diagram labels: “Neglect Tolerance”, “Situation Awareness”]

  46. Supporting Task Switching: Etc. • History trails: knowing the recent past helps • Tail on a map-based interface • Virtual descent into a video-based interface • Change highlighting/morphing • Plans: knowing intention helps • Planned path on a map-based interface • Predicted trajectory on a video-based interface • Quickened displays • Task relationships: knowing the relationship between two tasks helps • Relative spatial location on a map-based interface • Picture-in-picture on a video-based interface • Progress bar of task X on task Y’s display

  47. Improve Perception and Scene Interpretation (Olsen) • Use interaction and machine learning to make this robust

  48. Future Concept (Proposed) • Safe/unsafe occupancy grids • Evolutionary image classifier • Evolutionary integration of vision and lasers • Particle-based inverse perspective transform • Path planning • Uncertainty-based triggers for retraining • Learning interface mappings from implicit user cues

  49. Conclusions • We can evaluate team feasibility • We can predict team performance • We need to understand task switching better • We need to support realistic task switching • Via interfaces • Via autonomy

  50. Near-Term Future Work • Complete validation of task switching experiment paradigm • Compare “new and improved” interfaces against baseline • Compare effects of type and size of interface • Answer the questions for the special case
