
On the Use of Intelligent Agents as Partners in Training Systems for Complex Tasks*

Thomas R. Ioerger, Joe Sims, Richard Volz (Department of Computer Science, Texas A&M University); Judson Workman, Wayne Shebilske (Department of Psychology, Wright State University)


Presentation Transcript


  1. On the Use of Intelligent Agents as Partners in Training Systems for Complex Tasks*
     Thomas R. Ioerger, Joe Sims, Richard Volz (Department of Computer Science, Texas A&M University)
     Judson Workman, Wayne Shebilske (Department of Psychology, Wright State University)
     *Funds provided by a MURI grant through DoD/AFOSR.

  2. Complex Tasks, and the Need for New Training Methods
     - Complex tasks (e.g. operating machinery) involve multiple cognitive components (memory, perceptual, motor, reasoning/inference...)
     - Novices feel overwhelmed
     - Limitations of part-task training: automaticity vs. attention management
     - Role for intelligent agents? Agents can be placed in simulation environments, but guiding principles are needed to promote learning

  3. Previous Work: Partner-Based Training
     - AIM (Active Interlocked Modeling; Shebilske, 1992): trainees work in pairs (AIM-Dyad), each trainee performing part of the task together
     - Importance of context (integration of responses)
     - Can produce equal training with a 100% efficiency gain
     - Co-presence/social variables not required: trainees placed in separate rooms
     - Correlation with intelligence of partner
     - Bandura, 1986: "modeling"

  4. Automating the Partner with an Intelligent Agent
     - Hypothesis: Would the training be as effective if the partner were played by an intelligent agent?
     - Important prerequisite: a CTA (cognitive task analysis), i.e. a hierarchical task decomposition
     - Allows functions to be divided in a "natural" way between human and agent partners (see the sketch below)
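
As an illustration of how a hierarchical task decomposition can assign functions to either the human trainee or the agent partner, here is a minimal C sketch. The node layout, subtask names, and the particular human/agent split are assumptions for illustration only; they are not taken from the original CTA.

    #include <stddef.h>

    /* Minimal sketch of a task-decomposition tree in which each subtask is
     * assigned to the human trainee or the agent partner. All names and the
     * human/agent split below are illustrative assumptions. */

    typedef enum { BY_HUMAN, BY_AGENT } performer_t;

    typedef struct task_node {
        const char       *name;
        performer_t       performer;   /* who performs this (sub)task */
        struct task_node *children;    /* array of subtasks, or NULL for a leaf */
        int               n_children;
    } task_node_t;

    /* Leaf subtasks split along the joystick/mouse boundary (assumed split). */
    static task_node_t joystick_subtasks[] = {
        { "navigate ship",            BY_HUMAN, NULL, 0 },
        { "aim and fire at fortress", BY_HUMAN, NULL, 0 },
    };
    static task_node_t mouse_subtasks[] = {
        { "identify mines (IFF)", BY_AGENT, NULL, 0 },
        { "select bonuses",       BY_AGENT, NULL, 0 },
    };

    static task_node_t space_fortress_task[] = {
        { "joystick control", BY_HUMAN, joystick_subtasks, 2 },
        { "mouse control",    BY_AGENT, mouse_subtasks,    2 },
    };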

  5. Space Fortress: Laboratory Task
     - Representative of complex tasks: similar perceptual, motor, attention, memory, and decision-making demands as flying a fighter jet
     - Continuous control: navigation with joystick, 2nd-order thrust control
     - Discrete events: firing missiles, making bonus selections with mouse
     - Must learn rules for when to fire, boundaries...
     - Large body of previous studies/data: Multiple Emphasis on Components (MEC) protocol transfers to operational setting (attention management)

  6. [Space Fortress game display: ship and fortress, a mine, bonus indicator ($), mouse buttons (P/M/I), joystick, and a status panel showing PNTS, CNTRL, VLCTY, VLNER, IFF, INTRVL, SPEED, and SHOTS]

  7. Implementation of a Partner Agent
     - Implemented decision-making procedures for automating the mouse and joystick
     - Added if-then-else rules in C source code to emulate decision-making with rules (see the sketch below)
     - Agent is simple, but satisfies the criteria: situated, goal-oriented, autonomous
     - First version of the agent played too "perfectly"; made it play "realistically" by adding some delays and imprecision (e.g. in aiming)
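
To make the rule-based control concrete, here is a minimal C sketch of the kind of if-then-else firing rules described above, with some added imprecision so the agent does not play too "perfectly". The game_state_t fields, agent_decide_fire(), fire_missile(), and the numeric thresholds are illustrative assumptions, not the authors' actual source code.

    #include <stdlib.h>
    #include <math.h>

    typedef struct {
        double ship_heading;           /* degrees */
        double fortress_bearing;       /* degrees, direction from ship to fortress */
        int    fortress_vulnerability; /* the VLNER counter on the status panel */
        int    mine_on_screen;         /* nonzero if a mine is currently active */
    } game_state_t;

    /* Random aiming error, added so the agent does not aim perfectly. */
    static double aim_noise(double max_err_deg)
    {
        return ((double)rand() / RAND_MAX - 0.5) * 2.0 * max_err_deg;
    }

    void agent_decide_fire(const game_state_t *s, void (*fire_missile)(void))
    {
        double aim_err = fabs(s->ship_heading + aim_noise(5.0) - s->fortress_bearing);

        if (s->mine_on_screen) {
            /* rule: mines take priority; handled by a separate procedure */
            return;
        } else if (aim_err < 10.0 && s->fortress_vulnerability >= 10) {
            /* rule: well aimed and fortress vulnerable -> fire a double shot */
            fire_missile();
            fire_missile();
        } else if (aim_err < 10.0) {
            /* rule: well aimed -> fire a single shot to build up vulnerability */
            fire_missile();
        }
        /* otherwise keep steering; a separate joystick procedure adjusts heading */
    }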

  8. Agent Finite-State Diagrams
     [Two finite-state diagrams: one for handling the fortress, one for handling mines; a simplified code sketch of the mine-handling machine follows below]
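
As a rough illustration of the state-machine style of these diagrams, here is a minimal C sketch of a mine-handling machine. The state names, events, and transitions are assumptions based on the Space Fortress mine rules (IFF identification within an interval); they are not a transcription of the authors' diagram.

    typedef enum {
        MINE_IDLE,        /* no mine on screen */
        MINE_IDENTIFY,    /* mine appeared: check the IFF letter (friend or foe) */
        MINE_TAG_FOE,     /* foe mine: double-click IFF within the required interval */
        MINE_SHOOT,       /* turn toward the mine and fire */
        MINE_AVOID        /* friendly mine: evade until it disappears */
    } mine_state_t;

    typedef enum {
        EV_MINE_APPEARED, EV_IS_FOE, EV_IS_FRIEND, EV_TAGGED, EV_MINE_GONE
    } mine_event_t;

    /* One transition step: returns the next state for a given event. */
    mine_state_t mine_step(mine_state_t state, mine_event_t ev)
    {
        switch (state) {
        case MINE_IDLE:     return (ev == EV_MINE_APPEARED) ? MINE_IDENTIFY : MINE_IDLE;
        case MINE_IDENTIFY: return (ev == EV_IS_FOE)    ? MINE_TAG_FOE
                                 : (ev == EV_IS_FRIEND) ? MINE_AVOID : MINE_IDENTIFY;
        case MINE_TAG_FOE:  return (ev == EV_TAGGED)    ? MINE_SHOOT : MINE_TAG_FOE;
        case MINE_SHOOT:    /* fall through: both end when the mine is gone */
        case MINE_AVOID:    return (ev == EV_MINE_GONE) ? MINE_IDLE : state;
        }
        return state;
    }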

  9. Experiment 1
     - Hypothesis: Training with the agent improves final scores
     - Protocol: 10 sessions of 10 3-minute trials each (over 4 days); each half-hour session: 8 practice trials, 2 test trials
     - Groups: Control (standard instructions + practice); Partner Agent (instructions + practice, alternating mouse and joystick between trainee and agent)
     - Participants: 40 male undergraduates at WSU, <20 hrs/wk playing video games

  10. Results of Expt 1
     *Difference in final scores was significant at the p < 0.05 level by a paired t-test (df = 38): t = 2.33 > 2.04

  11. Breakdown of Scores

  12. Effect of Level of Simulated Expertise of Agent?
     - Results of Expt 1 raise a follow-up question: What is the effect of the level of expertise simulated by the agent? (The agent can be made more or less accurate.)
     - Recall: correlation with partner's intelligence
     - Is it better to train with an expert, or perhaps with a partner of matching skill level? Novices might have trouble comprehending experts' strategies while struggling to keep up

  13. Experiment 2
     - Hypothesis: Different skill levels of the agent affect trainees' performance improvement
     - Similar design as Expt 1, except 4 groups: Control, Novice agent, Intermediate agent, Expert agent
     - Skill level of the agent adjusted by fine-tuning randomness parameters (shot timing, aiming accuracy, IFF mistakes), gauged empirically to the skill levels of target groups (see the sketch below)
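
For concreteness, here is a minimal C sketch of how skill level could be expressed as a small set of randomness parameters of the kind listed above. The struct fields and the specific numeric values are assumptions for illustration; the actual parameter settings were tuned empirically against target groups.

    #include <stdlib.h>

    typedef struct {
        double shot_delay_ms;    /* extra reaction delay before firing */
        double aim_error_deg;    /* maximum random aiming error */
        double iff_mistake_prob; /* probability of misidentifying a mine's IFF */
    } skill_params_t;

    /* Example settings for the three agent skill levels (illustrative values). */
    static const skill_params_t NOVICE_AGENT       = { 600.0, 15.0, 0.30 };
    static const skill_params_t INTERMEDIATE_AGENT = { 350.0,  8.0, 0.10 };
    static const skill_params_t EXPERT_AGENT       = { 150.0,  3.0, 0.02 };

    /* Returns nonzero if the agent should (deliberately) misidentify this mine. */
    int make_iff_mistake(const skill_params_t *p)
    {
        return ((double)rand() / RAND_MAX) < p->iff_mistake_prob;
    }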

  14. Results of Expt 2
     Conclusion: Training with an expert partner agent is best.

  15. Lessons Learned for Future Applications
     - Principled approach to using agents in training systems as partners, for cognitive benefits
     - Requires a CTA; works best with a high degree of decoupling; with greater interaction, the agent might have to "cooperate" with the human by interpreting and responding to apparent strategies
     - Desiderata for the agent: correctness; consistency (necessary for modeling); realism (how to simulate human "errors"?); exploration (errors lead to unusual situations)
