
Predicting and Explaining Individual Performance in Complex Tasks



Presentation Transcript


  1. Predicting and Explaining Individual Performance in Complex Tasks Marsha Lovett, Lynne Reder, Christian Lebiere, John Rehling, Baris Demiral This project is sponsored by the Department of the Navy, Office of Naval Research

  2. Multi-Tasking • A single person can perform multiple tasks. A single model should be able to capture performance on those multiple tasks. • A single person brings to bear the same fundamental processing capacities to perform all those tasks. A single model should be able to predict that person’s performance across tasks from his/her capacities.

  3. A way to keep the multiple-constraint advantage offered by unified theories of cognition while making their development tractable is to do Individual Data Modeling. That is, to gather a large number of empirical/experimental observations on a single subject (or a few subjects analysed individually) using a variety of tasks that exercise multiple abilities (e.g., perception, memory, problem solving), and then to use these data to develop a detailed computational model of the subject that is able to learn while performing the tasks. — Gobet & Ritter, 2000

  4. ZERO PARAMETER PREDICTIONS!

  5. Basic Goals of Project • Combine best features of cognitive modeling • Study performance in a dynamic, multi-tasking situation (albeit less complex than real world) • Explain not only aggregate behavior but variation (using individual difference variables) • Predict (not fit/postdict) complex performance • Use cognitive architecture and fixed parameters • Employ off-the-shelf models whenever possible • Plug in individual difference params for each person

  6. How to predict task performance • Estimate each individual’s processing parameters • Measure individuals’ performance on “standard” tasks • Using models of these tasks, estimate participant’s corresponding architectural parameters (e.g., working memory capacity, perceptual/motor speed) • Build/refine model of target task • Select global parameters for model of target task (e.g., from previously collected data) • Plug into model of target task each individual’s parameters to predict his/her target task performance
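
As a rough illustration of this two-stage recipe, the Python sketch below fits an individual parameter on a standard task and then feeds it, together with fixed global parameters, into a stand-in target-task model. Every function, number, and formula here is an invented placeholder for exposition, not the project's actual code.

```python
# Minimal sketch of the prediction pipeline (all names and numbers are
# illustrative assumptions, not the project's actual code or parameter values).

def fit_w_from_standard_task(observed_accuracy, candidate_ws=(0.6, 0.8, 1.0, 1.2)):
    """Pick the W whose model-predicted accuracy best matches the individual."""
    def model_accuracy(w):            # stand-in for running the standard-task model at a given W
        return min(1.0, 0.5 + 0.4 * w)
    return min(candidate_ws, key=lambda w: abs(model_accuracy(w) - observed_accuracy))

def predict_target_task(w, pm, scenario_difficulty):
    """Stand-in target-task model: global parameters fixed, individual W and PM plugged in."""
    return {"holds": max(0.0, scenario_difficulty * pm - w),
            "mean_rt": 8.0 * pm + 2.0 / w}

w_hat = fit_w_from_standard_task(observed_accuracy=0.85)
prediction = predict_target_task(w=w_hat, pm=1.1, scenario_difficulty=3.0)
print(w_hat, prediction)   # zero-parameter prediction for this individual
```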

  7. Example: Memory Task Performance • Fit task A to estimate individuals’ parameters

  8. Zero-Parameter Predictions • Plug those parameters into model of task B (Lovett, Daily, & Reder, 2000)

  9. Challenges of Complex Tasks • Modeling the target task is harder • More than one individual difference variable likely impacting target task • Possibility of knowledge/strategy differences

  10. What about knowledge differences? • Develop tasks that reduce their relevance • Train participants on specific procedures • Measure skill/knowledge differences in another task and incorporate them in model • Use model to predict variation in relative use of strategies by way of estimates of individuals’ processing capacities

  11. Individual Differences in ACT-R • Most ACT-R models don’t account for impact of individual differences on performance, but the potential is there • There are many parameters with particular interpretations related to individual difference variables • Most ACT-R modelers set parameters to universal or global values, i.e., defaults or values that fit aggregate data

  12. ACT-R & Individual Differences P1, P2, P3, … M1, M2, M3, … W1, W2, W3, …

  13. Overview of Talk • Review tasks we are studying • Illustrate methodology • Highlight key results • Visual search vs. memory strategies trade off in final performance => complex task modeling offers best constraint with fine-grained analysis

  14. Modified Digit Span (MODS)

  15. Modified Digit Span (MODS)

  16. P/M Tasks • In our earlier studies, initial training phase of target task was used to collect data on individuals’ perceptual/motor speed. • e.g., Time to find object “A7” and click on it • In later studies, separate task used to measure perceptual and motor speed.

  17. How to predict task performance • Estimate each individual’s processing parameters • Measure individuals’ performance on MODS, PercMotor • Using models of these tasks, estimate participant’s corresponding architectural parameters (e.g., working memory capacity, perceptual/motor speed) • Build/refine model of target task • Select global parameters for model of target task (e.g., from previously collected data) • Plug into model of target task each individual’s parameters to predict his/her target task performance

  18. W affects Performance • W is the ACT-R parameter for source activation, which impacts the degree to which activation of goal-related facts rises above the sea of other facts’ activations • Higher W => goal-related facts relatively more activated => faster and more accurately retrieved => better MODS performance
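
A small sketch of how W enters the standard ACT-R activation equation, A_i = B_i + sum_j W_j * S_ji with W_j = W/n over the n elements of the current goal. The threshold, noise, and latency-factor values below are placeholders, not the estimates used in this work.

```python
import math

# Higher W -> goal-related facts get more spreading activation -> higher
# retrieval probability and shorter retrieval latency (standard ACT-R forms;
# tau, s, and F are placeholder values).

def activation(base_level, strengths, W):
    n = len(strengths)                     # goal elements spreading activation
    return base_level + sum((W / n) * s_ji for s_ji in strengths)

def retrieval_probability(A, tau=0.0, s=0.3):
    return 1.0 / (1.0 + math.exp(-(A - tau) / s))

def retrieval_latency(A, F=1.0):
    return F * math.exp(-A)

for W in (0.7, 1.0, 1.3):
    A = activation(0.2, [1.5, 1.5], W)
    print(W, retrieval_probability(A), retrieval_latency(A))
```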

  19. Estimating W • Model of MODS task is fit to individual’s MODS performance by varying W • Best fitting value of W is taken as estimate
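
A minimal sketch of that fitting step, assuming a simple grid search over candidate W values and a toy stand-in for the MODS model's accuracy predictions; the real fit would run the ACT-R model itself.

```python
# Grid-search the MODS model's W to minimize error against one individual's
# accuracy by memory-set size.  mods_model_accuracy is a toy stand-in.

def mods_model_accuracy(set_size, W):
    # toy stand-in: accuracy falls with set size, rises with W
    return max(0.0, min(1.0, 1.1 - 0.12 * set_size / W))

def estimate_w(observed, w_grid=tuple(0.5 + 0.05 * i for i in range(21))):
    def sse(W):
        return sum((mods_model_accuracy(k, W) - acc) ** 2 for k, acc in observed.items())
    return min(w_grid, key=sse)            # best-fitting W is the estimate

subject_accuracy = {3: 0.95, 4: 0.85, 5: 0.70, 6: 0.55}   # hypothetical data
print(estimate_w(subject_accuracy))
```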

  20. Estimating PM • For simplicity, we estimated a combined PM parameter directly from each individual’s perceptual/motor task performance. • This PM parameter was then used to scale the timing of the target task’s perceptual-motor productions.
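
A sketch of what such a combined PM parameter could look like, assuming it is estimated as the ratio of an individual's mean perceptual/motor action time to a reference time and then used to scale production durations; the reference value and production names are hypothetical.

```python
# Estimate PM from the perceptual/motor task, then scale the target-task
# model's perceptual-motor production durations by it (all numbers illustrative).

REFERENCE_PM_TIME = 0.95          # hypothetical group-mean time per P/M action (s)

def estimate_pm(mean_pm_action_time):
    return mean_pm_action_time / REFERENCE_PM_TIME

def scale_productions(base_times, pm):
    # base_times: default durations of the model's perceptual-motor productions
    return {name: t * pm for name, t in base_times.items()}

pm = estimate_pm(mean_pm_action_time=1.10)       # a slower-than-average individual
print(scale_productions({"find-target": 0.55, "move-click": 0.45}, pm))
```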

  21. Joint Distribution of W and P/M • W and P/M are tapping distinct characteristics

  22. ACT-R & Individual Differences P1, P2, P3, … M1, M2, M3, … W1, W2, W3, …

  23. Specifics of our Approach • Estimate each individual’s processing parameters • Measure individuals’ performance on modified digit span, spatial span, perceptual/motor speed • Using models of these tasks, estimate participant’s W, P, M • Build/refine model of air traffic control task–AMBR • Select global parameters for AMBR model • Plug in individuals’ parameters to predict performance across different AMBR scenarios

  24. AMBR: Air Traffic Control Task • Complex and dynamic task • Spatial and verbal aspects • Multi-tasking • Testbed for cognitive modeling architectures

  25. AMBR Task (AC = aircraft, ATC = air traffic controller) • As ATC, you communicate with AC and other ATCs to handle all AC in your airspace • Six commands with different triggers: • First ACCEPT, then WELCOME incoming AC (these two separated by a short interval) • First TRANSFER, then order a CONTACT message from outgoing AC (these two separated by a short interval) • Decide to OK or REJECT requests for speed increase • When a command is not handled before the AC reaches the zone boundary, this is a HOLD (error)
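
For exposition only, the commands and triggers above can be collected into a small data structure like the following; the trigger phrasings are paraphrases of the slide text, not the task's actual interface.

```python
# Six AMBR commands and when they are required (paraphrased from the slide).
AMBR_COMMANDS = {
    "ACCEPT":   "first command for an incoming AC",
    "WELCOME":  "follows ACCEPT after a short interval",
    "TRANSFER": "first command for an outgoing AC",
    "CONTACT":  "ordered from the outgoing AC a short interval after TRANSFER",
    "OK":       "one possible response to a speed-increase request",
    "REJECT":   "the other possible response to a speed-increase request",
}

def is_hold(command_handled_before_boundary):
    # A HOLD error is scored when the required command was not handled
    # before the AC reached the zone boundary.
    return not command_handled_before_boundary

print(AMBR_COMMANDS["ACCEPT"], is_hold(False))
```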

  26. Issuing an AMBR Command • Text message or radar cues a particular action • Click on Command Button • Click on Aircraft (in radar screen) • Click on Air Traffic Controller (if necessary) • Click on SEND Button

  27. General Methods • Empirical Methods • Day 1: Collect MODS and P/M data and train on AMBR plus AMBR practice • Day 2: Review AMBR instructions, battery of AMBR scenarios • Modeling Methods • Use MODS & PM data to estimate W and PM for each subject • Plug individual W and PM values into AMBR model • Compare individuals’ AMBR performance with model predictions

  28. Experiments 1 & 2 • AMBR Scenario Design • Experiment 1: alternating 5 easy, 5 hard • Experiment 2: 9 scenarios of varying difficulty • AMBR Dependent Measures • Total time to handle each command • Number of hold errors

  29. Off-the-shelf ACT-R Model of AMBR • Scan for something to do: Radar, Left, Right, Bottom text windows • When an action cue is noticed, determine if it has been handled or not: scan/remember • If the cue has not been handled, click command, AC, [ATC], SEND • Resume scanning
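
The actual model is a set of ACT-R productions, but its control loop can be sketched roughly in Python as follows; the window names follow the slide, while everything else is an assumed simplification.

```python
# Rough rendering of the off-the-shelf model's scan-notice-handle cycle.

WINDOWS = ["radar", "left_text", "right_text", "bottom_text"]

def issue_command(cue):
    print("handling", cue)                         # click command, AC, [ATC], SEND

def model_step(display, handled):
    for window in WINDOWS:                         # scan for something to do
        cue = display.get(window)
        if cue is None:
            continue
        if cue in handled:                         # already handled? (scan/remember)
            continue
        issue_command(cue)
        handled.add(cue)
        return                                     # resume scanning on the next cycle

model_step({"radar": None, "left_text": "Accept T6?",
            "right_text": None, "bottom_text": None}, handled=set())
```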

  30. Model Captures Range of Performance

  31. Model Predictions • Prediction of whether a subject commits an error in a scenario, based on scenario details and individual’s W & P/M

  32. Ind'l Diffs' Impact on Hold Errors • Hold errors only weakly dependent on W, more strongly on P/M and scenario difficulty [Figure: # Hold Errors vs. Parameter Value]

  33. Scenario Difficulty [Figure: difficulty by Scenario]

  34. Mean Errors by Scenario [Figure: mean errors by Scenario]

  35. Be Careful What (DM = dependent measure) You Model • Error data are too coarse to constrain the model • Even total RT/command data are insufficient • The model predicts that scanning strategy plays a large role in performance • This is consistent with participants' reports; participants may be doing any combination of visual search and memory retrieval

  36. Observable Behaviors
  Event              Subject (s)   Model (s)
  Cue: Accept T6?    0.0           0.0
  ACCEPT button      3.6           3.7
  AC "T6"            5.9           5.7
  ATC "EAST"         6.7           7.0
  SEND button        7.7           8.2
  Stochastic variation at the single-action level is part of both subject and model behavior.

  37. The Details Are Inside
  Model I/O (observable): T 0.0 Cue: Accept T6? • T 3.7 ACCEPT button • T 5.7 AC "T6" • T 7.0 ATC "EAST" • T 8.2 SEND button
  Model Trace (internal): T 1.5 Notice cue • T 2.5 Subgoal task • T 3.7 Mouse click • T 3.8 Start AC search • T 4.9 Find AC • T 5.7 Mouse click • T 7.0 Mouse click • T 8.2 Mouse click

  38. Conclusion thus far… • Visual search vs. memory strategies trade off in final performance => even when modeling a complex task, coarse dependent measures (accuracy, total RT) hide important details • Previous AMBR model fit group data well • Only by seeking extra constraint of modeling individual participants were important gaps in model fidelity revealed

  39. Modifications for Experiment 3 • Use more fine-grained measures: action RT & clicks • Modify the ATC task to increase memory demand • More interesting for our purposes • More realistic • Lengthen scenarios so the same planes stay in play • Hide AC names until clicked, and then reveal them only after a delay • Use model to bracket appropriate difficulty level

  40. Raw Characteristics of Data (Experiment 3) • Mean action RT 12.1 s, mean holds 3.3 per subject • Action RT correlates with W (r = -0.314) and P/M (r = 0.485) • Holds correlate with W (r = -0.444) and P/M (r = 0.508)

  41. Model Modifications • Search can give not only the answer sought (a specific AC's location) but also an additional rehearsal of that information • In slack times, a possible strategy of studying the radar screen to rehearse AC names (called "exploratory clicks")
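
A loose sketch of these two modifications, assuming a simple activation bump for a searched-for AC name and a probabilistic slack-time policy for exploratory clicks; the boost size and exploration probability are illustrative, not fitted values.

```python
import random

def search_for_ac(name, activations, rehearsal_boost=0.3):
    # A successful visual search also rehearses (re-activates) the AC name.
    activations[name] = activations.get(name, 0.0) + rehearsal_boost
    return "location-of-" + name        # the answer the search was after

def slack_time_behavior(visible_acs, activations, p_explore=0.5):
    # Possible strategy in slack time: an "exploratory click" that rehearses
    # a visible AC name without any command pending.
    if random.random() < p_explore:
        name = random.choice(visible_acs)
        search_for_ac(name, activations)

acts = {}
print(search_for_ac("T6", acts), acts)
slack_time_behavior(["T6", "B3", "K9"], acts)
print(acts)
```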

  42. Model Predicts Hold Errors • Predicts errors per subject, r = 0.81 • Hold errors depend more on W than in the previous version of the task, but are still mostly dependent on P/M and scenario difficulty • Move to modeling more fine-grained aspects of data…

  43. Model Predicts Number of Clicks

  44. W, P/M affect RT click by click • Set W-P/M parameters in model corresponding to participants (e.g., hi-hi & lo-lo) • Run model to produce RT predictions click by click (for 2 commands: Accept and Contact) [Figure panels: Hi-Hi Model & Subject; Lo-Lo Model & Subject]

  45. W, P/M affect RT click by click • Set W-P/M parameters in model corresponding to participants • Run model to produce RT predictions click by click (for 2 commands: Accept and Contact)

  46. Conclusion thus far • Modeling more fine-grained measures required task and model modifications, but this produced individual participant predictions that were very promising • The rate of clicking on the correct AC on the first try ranges from 69% to 96% across participants • Akin to remember vs. scan strategies • A higher rate -> more (accurate) remembering • This detailed aspect of performance relates to W

  47. Theoretical Interlude: Spatial vs. Verbal WM • Our working assumption (for parsimony) posits a single source activation parameter, W • W modulates the degree to which goal-relevant facts are activated above the sea of unrelated facts • …regardless of spatial/verbal representation • This perspective still allows for spatial/verbal distinctions in performance but explains them as a function of differences in spatial/verbal skills, etc.
