
Subjective Multidimensional Workload Index for Distributed Teams:


Presentation Transcript


  1. UTA
  Subjective Multidimensional Workload Index for Distributed Teams: Development Program for the Team Subjective Assessment of Workload (T-SAW)
  HFE DoD TAG, 21 May 2014, APG, MD
  Sandro Scielzo, Jennifer Riley, and Fleet Davis (SA Technologies, Inc., Marietta, GA)
  Shannon Scielzo (University of Texas at Arlington)
  SBIR DATA RIGHTS: Contract No. W911QX-11-C-0059, SA Technologies, Inc., 3750 Palladian Village Drive, Building 600, Marietta, GA 30066. Expiration of SBIR Data Rights Period: 10 September 2019, subject to SBIR Policy Directive of 24 September 2002.

  2. UTA OSD
  The military has an urgent need to conceptualize and operationalize the construct of team workload in order to create a domain-independent subjective scale for teams.

  3. UTA

  4. Problem Space UTA
  • Problem Significance
    • No consensus on how workload, much less team workload, should be characterized and operationalized
    • No clear theoretical framework leading to a satisfactory operationalization of the construct
    • No validated subjective workload measures for teams
  • Team Workload Limitations
    • Omit the critical steps of item development
    • Limited studies at the team level
    • Little understanding regarding individual, contextual, and team-level antecedents
    • No comprehensive theoretical model
    • Lack of validity
    • Limited applicability
    • Flawed assumptions regarding team member awareness of other team members’ workload

  5. T-SAW Solution
  Our Solution / Our Team: Complementary Strengths
  • Develop the first fully (a) validated, (b) domain-independent, (c) diagnostic, and (d) prescriptive subjective workload measure, sensitive to different team configurations, from intact co-located teams to ad hoc distributed teams

  6. Unified Theoretical Approach UTA
  Comprehensive ABC+ Theoretical Framework
  • Existing workload frameworks are insufficient
    • MRT is cognitive-centric
    • How to account for full coverage of individual and team workload-related factors?
  • Affective, Behavioral, Cognitive, and Team + Theoretical Framework
    • Ensures full coverage of the individual and team workload spectrum
    • Ensures proper construct classification under the ABC+ framework

  7. Phase I Model and Item Development

  8. Phase I Model and Item Development
  • Measurement Model
    • Manageable number of dimensions
    • Item generation for T-SAW constructs
    • Testable framework
  • Diagnostic Capability
    • Pinpoint high-workload areas
    • Team workload profile
  • Prescriptive Training Capability
    • Trainable competencies
    • Non-trainable traits / dispositions
  • Team + Component
    • Team workload moderators
    • Further enhance diagnosticity
  • Item Development
    • 500+ item pool
    • Initial card sort and bias analyses (see the sketch below)
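  As an illustration of the card sort step, here is a minimal Python sketch (with made-up data; placement_agreement is a hypothetical helper, not part of the actual analyses) of checking how often sorters placed a candidate item under its intended ABC+ dimension:

    from collections import Counter

    def placement_agreement(sorts, intended):
        """sorts: the dimension each sorter assigned to one candidate item."""
        counts = Counter(sorts)
        return counts[intended] / len(sorts)

    # Example: 4 of 5 sorters placed this item under "Cognitive" -> 0.8 agreement.
    print(placement_agreement(
        ["Cognitive", "Cognitive", "Affective", "Cognitive", "Cognitive"],
        "Cognitive"))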

  9. Phase II Validation: T-SAW First Iteration
  Year 1 Goal / Validation Plan and Methods
  • Validate a domain-independent T-SAW by down-selecting the best-behaving items that are predictive of performance, using participant samples from the general population and other environments. Provide T-SAW in paper/pencil and electronic formats.

  10. Phase II Validation: T-SAW First Iteration UTA
  SAR – Experiment Design and Setup
  • 4 VBS2 SAR scenarios
  • IVs
    • Number of victims (within-subjects)
    • Visual noise (between-subjects)
  • DVs
    • Real-time metrics
    • AAR metrics
    • Communications

  11. Phase II Validation: T-SAW First Iteration UTA
  NUWC – Experiment Design and Setup
  • NUWC experiment
    • Artemis starship simulator
    • 3-member teams (captain, helm, weapons)
    • 4 teams
  • IV
    • Collocated vs. distributed team
  • DVs
    • Mission performance
    • Communications

  12. Phase II Validation: T-SAW First Iteration UTA
  Correlational Analyses
  • 618 items analyzed
  • Best-behaving items matched against ‘archetype’ items by model dimension (see the sketch below)
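  A minimal sketch of the down-selection idea, assuming a pandas DataFrame of item responses and a performance criterion; the function name, top_n cutoff, and use of simple correlations are illustrative assumptions, not the actual T-SAW analyses:

    import pandas as pd

    def select_best_items(responses, item_map, performance, top_n=3):
        """Keep the top_n items per model dimension by absolute correlation
        with the performance criterion (illustrative rule only)."""
        selected = {}
        for dimension, items in item_map.items():
            # Correlate every candidate item in this dimension with performance.
            corrs = {item: responses[item].corr(performance) for item in items}
            # Rank by absolute correlation and keep the best-behaving items.
            ranked = sorted(corrs, key=lambda i: abs(corrs[i]), reverse=True)
            selected[dimension] = ranked[:top_n]
        return selected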

  13. Phase II Validation: T-SAW First Iteration
  • Experimental Efforts
    • 100s of data points collected between UTA and NUWC
    • Validation of model dimensions
  • Initial T-SAW measure (domain independent)
    • PDF file and electronic version
    • Full T-SAW and T-SAW for simulated environments (no physical items)
  • Validation support from NUWC
    • Team data from simulation environment
  • Novel approach for further T-SAW development
    • T-SAW core with branching items based on team characteristics

  14. Phase II Validation: Abridged T-SAW
  Year 2 Goal / Validation Plan and Methods
  • Validate the Abridged Army T-SAW using USMA and UTA ROTC cadets. Determine the utility of a “dynamic” T-SAW with item branching. Develop a T-SAW scoring sheet with automated diagnostic visualizations. Provide T-SAW in paper/pencil and electronic formats.

  15. Phase II Validation: Abridged T-SAW (Year 2)
  • Developed Abridged ‘Dynamic’ T-SAW
    • Item branching based on core item responses (see the sketch below)
    • Completion between 30 and 90 seconds (faster than NASA-TLX)
  • USMA Card Sort and Bias Analysis
    • Validated appropriateness of T-SAW items for the Army domain
  • Team Study: SAR Scenarios
    • Entire pool of UTA ROTC cadets
    • Presence of armed civilians
  • Team Indices and Algorithms
    • ABC+ indices and scoring algorithms
    • Indices predictive of performance
  • Diagnostic Scores and Visualizations
    • Automated diagnostic information
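  An illustrative sketch (not the actual T-SAW branching rules) of how core-item responses could gate follow-up items; the rating scale and threshold are assumptions:

    CORE_THRESHOLD = 4  # assumed cutoff on an assumed 1-7 rating scale

    def items_to_administer(core_responses, branch_items):
        """core_responses: factor -> core item rating;
        branch_items: factor -> follow-up item IDs shown only when the core item is elevated."""
        administered = []
        for factor, rating in core_responses.items():
            if rating >= CORE_THRESHOLD:
                # Elevated workload signal on the core item: branch into detailed items.
                administered.extend(branch_items.get(factor, []))
        return administered

    # Example: only the "Cognitive" branch is triggered.
    print(items_to_administer({"Affective": 2, "Cognitive": 6},
                              {"Affective": ["A1", "A2"], "Cognitive": ["C1", "C2"]}))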

  16. Phase II Validation: Abridged T-SAW UTA Many thanks to:

  17. T-SAW Diagnosticity and Applicability UTA
  T-SAW Algorithms and Visualizations
  • Goal and Advantages
    • Rapid data visualization (e.g., AAR)
    • Powerful diagnostics
    • No SW required other than Microsoft Excel
  • Excel Workbook
    • Input team parameters
    • Input team averages
    • Visualize team workload indices
    • Print diagnostic summary

  18. T-SAW Diagnosticity and Applicability UTA
  • Input Sheet
    • Team name, date, event type
    • Team averages for the 9 ABC factors and corresponding Team + dimensions
  • Team Averages (see the sketch below)
    • Compute manually, OR
    • Copy from SurveyMonkey descriptives
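  A minimal sketch of producing the team averages for the input sheet, assuming each member's ratings are available per factor (factor names and values here are made up):

    import pandas as pd

    member_ratings = pd.DataFrame(
        {"Cognitive": [5, 6, 4], "Affective": [2, 3, 2], "Behavioral": [4, 4, 5]},
        index=["captain", "helm", "weapons"])

    team_averages = member_ratings.mean()  # one value per factor, entered on the input sheet
    print(team_averages)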

  19. T-SAW Diagnosticity and Applicability UTA
  • Algorithms
    • Automatically computed
    • Protected formulas
    • Standardized 0-100 indices across ABC and Team+ factors (see the sketch below)
  • Indices based on
    • Factor relationship with performance
    • Factor relationship with Team+ factors
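  Because the workbook formulas are protected, the exact index computation is not reproduced here; the following is only a hedged sketch of one way a team-average factor score could be rescaled to 0-100 and weighted by its relationship with performance (scale range and weight are assumptions):

    def factor_index(team_average, scale_min=1.0, scale_max=7.0, performance_weight=1.0):
        """Rescale a team-average factor score to 0-100, weight it by an
        assumed factor-performance relationship, and clamp to 0-100."""
        rescaled = (team_average - scale_min) / (scale_max - scale_min) * 100.0
        return max(0.0, min(100.0, rescaled * performance_weight))

    print(factor_index(5.2))                          # unweighted index (70.0)
    print(factor_index(5.2, performance_weight=0.8))  # down-weighted factor (56.0)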

  20. T-SAW Diagnosticity and Applicability UTA
  Indices Visualizations
  • ABC visualizations
    • 9 ABC factor figures
    • Color-coded background indicating performance impact (see the sketch below)
  • Team+ Bottlenecks
    • Bottleneck breakdown by factor with factor index
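  A sketch in the spirit of the workbook figures, using matplotlib rather than Excel; the factor names, index values, and color thresholds are assumptions:

    import matplotlib.pyplot as plt

    factors = ["Affective", "Behavioral", "Cognitive"]
    indices = [35, 62, 88]  # made-up 0-100 T-SAW indices
    # Green/amber/red coding for an assumed low/moderate/high performance impact.
    colors = ["green" if i < 50 else "orange" if i < 75 else "red" for i in indices]

    plt.bar(factors, indices, color=colors)
    plt.ylim(0, 100)
    plt.ylabel("T-SAW index (0-100)")
    plt.title("Team workload indices by factor (illustrative)")
    plt.show()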

  21. UTA Print-out Format
  • Custom header
  • Sorted ABC factors
  • Top 3 factors with diagnostic table
  • Sorted Team + “bottleneck” factors
  • Top 3 bottlenecks with diagnostic table
  • Factors / bottlenecks matrix

  22. T-SAW Diagnosticity and Applicability
  • Military Applicability
    • Acquisition programs
    • HSI lifecycle
    • Manpower analyses support
    • Any team research
  • Industry Applicability
    • Medical teams
    • Sport teams
    • Organizations

  23. Acknowledgments UTA This research was supported by:

  24. T-SAW Research Team UTA
  Sandro Scielzo, Research Associate
  • Skills & Specialties: Metrics and scale development; development of user-centered multimedia interfaces to support training and decision-making; perceptual discrimination in complex visual search task environments; situation awareness measurement; human/robot team interactions
  • Years of Experience: 12 general, 8 specialized
  Jennifer Riley, Principal Research Associate
  • Skills & Specialties: Training for situation awareness in complex and dynamic domains; human-automation interaction; human-computer interface design; development of real-time situation awareness measures for virtual environment training; human interaction with unmanned systems
  • Years of Experience: 16 general, 12 specialized
  Fleet Davis, Research Associate
  • Skills & Specialties: Developing MOPs and MOEs; developing live performance measures for training and operational environments; conducting cognitive task analysis; situation awareness measurement; developing training content for complex skill acquisition
  • Years of Experience: 10 general, 10 specialized
  Shannon Scielzo, Assistant Professor, UTA; Director, TMT Lab
  • Skills & Specialties: Psychometric theory; statistics and research design; scale development; dyadic and team communications; computer-mediated communications; virtual team processes; training needs analysis; performance evaluation and assessment
  • Years of Experience: 10 general, 8 specialized

  25. Questions UTA
