
Games for Analysis of Technologies in Human-Intensive Systems - Dr. Tim Dasey, Informatics and Decision Support Group, MIT Lincoln Laboratory

Recognizing which technologies will be useful before prototyping is error-prone, and the result is higher-than-acceptable rejection rates during development. MIT Lincoln Laboratory (MIT LL) has been using serious games to aid technology assessment programs. The approach combines economic game theory with rapidly developed, rapid-play digital simulations to collect quantitative data, improve qualitative feedback, and crowdsource the ingenuity of human experts.



Presentation Transcript


  1. Games for Analysis of Technologies in Human-Intensive Systems. Timothy Dasey, Ph.D. Serious Play 2018, July 19, 2018. This material is based upon work supported under Air Force Contract No. FA8721-05-C-0002 and/or FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U.S. Air Force. © 2017 Massachusetts Institute of Technology.

  2. LL History of Games Development. [Timeline, 2001-2016: games for Missile Defense, ISR & Intelligence, Cyber Operations, Chem-Bio Defense, Emergency Management, and Air Traffic Control (holding flights, airborne flights).]

  3. Virtual Human-in-the-Loop Experimentation Purposes. Purposes span training, analysis, and discovery: expert decision analysis, experiential learning, crowdsourced discovery, integrated human-machine performance, requirements prediction & estimation, diagnostic/evaluation, guided brainstorming (red and blue), machine learning from human experts, and impact of future technology. MIT LL uses serious games for a wide range of purposes; here, our focus is on future technology & discovery.

  4. Serious Games for Technology Exploration. A game is a sensor for measuring human decision making. Serious games are a tool for system analysis that addresses the role of human decisions, failings, and creativity. Technology exploration games crowdsource the discovery of new tactics, clever combinations of technologies, and requirements estimates through iterative, experiential play.

  5. Technology Concept Lifecycle. Goals: catch issues as early as possible; be as systematic and quantitative as able. The lifecycle runs from a seed concept, emerging technology, or threat, through brainstorming, to concepts & con-ops (2^N possible combinations); triage based on anticipated utility and acceptance narrows these to ideas worth pursuing with more costly R&D; refinement and focused, immersive, detailed evaluation lead design trade-offs; field testing then feeds deployment & training.
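
The "2^N combinations" figure is the crux of the triage problem: every independent capability or design option doubles the space of concepts and con-ops to evaluate. A minimal sketch of that explosion, using a hypothetical capability list (the names are illustrative, not from the presentation):

```python
from itertools import combinations

# Hypothetical capability list: with N independent options there are 2**N
# possible loadouts, which is why exhaustive prototyping is impractical.
capabilities = ["thermal_sensor", "auto_pilot", "extended_battery",
                "targeting_aid", "swarm_behavior"]

loadouts = [set(combo) for r in range(len(capabilities) + 1)
            for combo in combinations(capabilities, r)]
print(len(loadouts))  # 2**5 = 32; at N = 20 the space already exceeds a million
```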

  6. Analyzing User-Facing Future Technology is Hard. Requirements are unknown; experts are bad at theorizing; everyone is a novice user; doctrine & tactics change. ...plus everyone is waiting on you before the project can start.

  7. Rapid-Play Serious Games. Short: 1-2 minute turns (see trends, shift priorities); 10-15 minute games (experiment with feedback, measure rate of learning); 1-2 hour sessions (low burden on participants, can be paired with other materials). Accessible: high level, no special equipment, play remotely. Flexible: easy to generate scenarios, easy to modify objectives. The payoff is many plays x many participants x many scenarios. Also, games can be created quickly by adapting existing templates.
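
That plays x participants x scenarios multiplication is what turns a lightweight game into a dataset. A back-of-the-envelope sketch with assumed session numbers (only the 36-participant figure appears later in the deck; the rest are illustrative):

```python
# Illustrative session arithmetic: a 1-2 hour session fits several 10-15
# minute games, each composed of 1-2 minute turns (all counts assumed).
participants = 36         # trial size reported later in the deck
games_per_session = 6     # assumed number of short games per session
turns_per_game = 8        # assumed number of 1-2 minute turns per game

decisions = participants * games_per_session * turns_per_game
print(decisions)  # 1728 logged decisions from a single afternoon of play
```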

  8. Homeland Security Applications: Rapid Play Games Informing Conventional Analysis. Naval missile defense; robotic support for first responders; public health response to bio-terrorism; citizen preparedness for radiological fallout; standoff security for transportation hubs.

  9. Defense Applications: Rapid Play Games Informing Conventional Analysis. Naval fleet composition; infantry squad tactics using small UAVs; drone-based aerial refueling; ISR team workflows & tool suites.

  10. HIVELET Approach: Combine Game Theory with Rapid-Play Digital Simulations. HIVELET stands for Human-Interactive Virtual Exploration for Lightweight Evaluation of Technologies. Rapid-play digital simulations ("games") provide concrete intuition about potential benefit, and many iterations allow for more exploration and data collection. Game theory / auction theory makes the player think critically about what is most beneficial, and lets the player choose custom combinations to execute a novel strategy. Players rapidly alternate between the two modes (player-driven capability selection, and mission simulation as a digital game) to build intuition, collect data, and explore novel tactics.
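
A minimal sketch of that alternation, under assumed capability names, prices, and a toy scoring model (none of which come from the presentation): each cycle, a player buys capabilities within a budget, runs the mission, sees a score, and re-selects, and the experiment logs one (selection, score) pair per cycle.

```python
import random

# Sketch of the selection-then-simulation loop; capability names, prices,
# and the scoring model below are illustrative assumptions.
PRICES = {"thermal_sensor": 40, "auto_pilot": 25, "extended_battery": 15,
          "targeting_aid": 30, "swarm_behavior": 50}
BUDGET = 100  # pricing forces players to reveal what they value most

def run_mission(loadout):
    """Stand-in for the digital mission simulator: loadout -> score."""
    base = sum(PRICES[c] for c in loadout) / 10.0  # toy utility model
    return base + random.gauss(0, 2)               # mission-to-mission noise

log = []  # data collection: one (selection, score) pair per cycle
for cycle in range(3):
    # A real player chooses deliberately; this sketch samples a random
    # affordable loadout just to keep the example self-contained.
    loadout = set()
    for cap in random.sample(sorted(PRICES), len(PRICES)):
        if sum(PRICES[c] for c in loadout) + PRICES[cap] <= BUDGET:
            loadout.add(cap)
    log.append((sorted(loadout), round(run_mission(loadout), 1)))

for selection, score in log:
    print(selection, score)
```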

  11. General Framework: Modular Components Support Different Experiments. The components: mission simulator; scenario/domain; available capabilities; selection/pricing mechanism; data collection; data analysis.
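
One way those components could be factored in code; the slide names the modules, so the interfaces below are assumptions rather than the Laboratory's actual API:

```python
from dataclasses import dataclass, field
from typing import Protocol

class SelectionMechanism(Protocol):
    """Selection / pricing mechanism (interface assumed for illustration)."""
    def select(self, capabilities: list[str], budget: int) -> list[str]: ...

class MissionSimulator(Protocol):
    """Mission simulator for one scenario / domain (interface assumed)."""
    def run(self, scenario: str, loadout: list[str]) -> float: ...

@dataclass
class Experiment:
    scenario: str                             # scenario / domain
    capabilities: list[str]                   # available capabilities
    selector: SelectionMechanism              # selection / pricing mechanism
    simulator: MissionSimulator               # mission simulator
    log: list = field(default_factory=list)   # data collection

    def cycle(self, budget: int) -> None:
        loadout = self.selector.select(self.capabilities, budget)
        score = self.simulator.run(self.scenario, loadout)
        self.log.append({"loadout": loadout, "score": score})  # for analysis
```

Swapping any one module (a new pricing rule, a new terrain) leaves the rest of the experiment unchanged, which is what lets a single template support many studies.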

  12. Case Study: UAV Integration into Infantry Squads (10 minutes per full cycle). Mission: find a downed Predator drone in an urban environment, retrieve its data, destroy the Predator, and extract safely. Capabilities: personal UAV sensors, control/flight systems, battery, targeting support, intelligent behaviors. Research question: which capabilities or combinations of capabilities are most valuable to this mission, and how do they alter tactics and doctrine?
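
Within a modular framework like the one above, a case study largely reduces to a data definition. A hypothetical configuration for this study (the structure and field names are invented for illustration; the values come from the slide):

```python
# Hypothetical experiment configuration; structure and field names assumed.
UAV_CASE_STUDY = {
    "mission": {
        "objectives": ["find downed predator drone", "retrieve data",
                       "destroy predator", "extract safely"],
        "environment": "urban",
        "minutes_per_full_cycle": 10,
    },
    "capabilities": ["personal UAV sensors", "control/flight systems",
                     "battery", "targeting support", "intelligent behaviors"],
    "research_question": ("Which capabilities or combinations are most "
                          "valuable, and how do they alter tactics and "
                          "doctrine?"),
}
```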

  13. Experimental Trials. 36 participants over 5 trials: 25 lab researchers, 9 military fellows, 3 admin/support. 2.5-hour play session: 1:00 training, 1:00 score competition, 0:30 discussion. Quantitative & qualitative results: positive feedback from experienced military fellows & lab UAV researchers; data and discussion supported the primary experimental questions; written report in progress. Goal: demonstrate the value of the data collected by this technique, not the realism of the technologies modeled or environment used for that validation.

  14. Alternate Missions & Environments. Five terrains: ruined desert city at dawn, urban sprawl, arctic expanse, rocky desert at night, island city. Each of the 5 terrains can be paired with either of 2 mission objectives: search for a crashed asset, or clear a convoy's path of threats.
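
The pairing yields a 10-cell scenario matrix that can be enumerated directly (terrain and objective names taken from the slide):

```python
from itertools import product

terrains = ["ruined desert city at dawn", "urban sprawl", "arctic expanse",
            "rocky desert at night", "island city"]
objectives = ["search for a crashed asset", "clear a convoy's path of threats"]

scenarios = list(product(terrains, objectives))
print(len(scenarios))  # 5 terrains x 2 objectives = 10 mission variants
```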

  15. Example HIVELET Output. Quantitative: player technology selection preferences; preferences correlate with performance and change after gameplay. Qualitative: strategy effectiveness, mapped on two axes (anticipated pre-game vs. not anticipated; used in game vs. not used) and rated high, moderate, ineffective, or unobserved; the unanticipated-but-effective strategies are the prize. Also: changes in technology utility perception; player perception of risk tolerance vs. scoring features; player rationale for observed strategy evolution; consequence by architecture; information required for a player to take action; player performance relative to benchmarks.

  16. Key Questions for Method Validation.
      • Convergence: do players quickly form and express consistent opinions?
      • Accuracy: are player opinions worth listening to? Do they match success?
      • Transferability: do lessons in the game map to the real world (e.g. risk)?
      • User impact: does this approach affect the opinions players express?
      • Novel lessons: do we learn anything new, or just confirm prior knowledge?

  17. Experimental Results: Convergence. Do players learn the game and technologies quickly? Does the game change their opinions? Was 1-2 hours enough play time? Lesson: we can collect consistent data in a short time.

  18. Experimental Results: Accuracy / Consistency. Do players make good choices for themselves? Can we trust their selections to reflect utility? Lesson: players make good choices for themselves.

  19. Experimental Results: Score Sensitivity / Realism. Should we believe that behaviors in the game match behaviors in the world? Will players be appropriately risk averse in a virtual environment? Can we control their level of risk taking? (Averages from post-game survey: 1 = low, 5 = high.) Lesson: we can control player risk aversion with scoring.
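
The lesson that scoring can steer risk appetite is easy to see in a toy model. A sketch with an assumed two-axis score, where a tunable casualty penalty decides whether a bold or a cautious plan is rational (all numbers are illustrative):

```python
# Toy two-axis scoring model; axes and weights are illustrative assumptions.
def score(objectives_met: int, casualties: int, risk_weight: float) -> float:
    return 10.0 * objectives_met - risk_weight * casualties

bold = (5, 3)      # more objectives met, more expected casualties
cautious = (3, 0)  # fewer objectives met, no expected casualties

for w in (1.0, 5.0, 20.0):
    better = "bold" if score(*bold, w) > score(*cautious, w) else "cautious"
    print(f"risk_weight={w:5.1f}: {better} play scores higher")
# At low risk_weight the bold plan wins; at 20.0 the cautious plan wins,
# which is how scoring features can control player risk aversion.
```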

  20. Experimental Results: Improving the Quality of Qualitative Feedback. Does playing a game change player opinions and their ability to discuss the topic? Does making in-game choices about capability selection change opinions? Lesson: customizing loadouts makes players more critical than merely experiencing the virtual model.

  21. Experimental Results: Crowdsourcing Ingenuity. Do players discover novel strategies? Do players discover surprises in the value (or lack of value) of technologies? Lesson: players discovered novel, effective strategies.

  22. Lessons Learnt for Rapid-Play Games.
      • Design: design the game around the data to be collected (sensor vs. simulation); focus on key decisions and abstract away detail when possible; balance experimental design with realistic models (are rare events rare?).
      • Feedback: use multiple scoring axes or variable scoring weights; clarify best outcome vs. best practice.
      • Scenarios: procedurally generate scenarios (faster, more insight; see the sketch after this list); use a 'detective'-style game when possible (cleaner analysis); use turn-based games when possible (clearer intent).
      • Complementary techniques: precede with open-ended tabletops to discover relevant information & actions; follow with more traditional detailed / immersive evaluations.
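
A minimal sketch of the procedural-scenario lesson (parameter names and value ranges are invented for illustration): seeding the generator makes every scenario reproducible, so different players can be compared on identical conditions while the scenario space is sampled far faster than hand authoring allows.

```python
import random

# Seeded procedural scenario generation; all parameters are illustrative.
TERRAINS = ["urban sprawl", "arctic expanse", "island city"]
OBJECTIVES = ["search for a crashed asset", "clear a convoy's path"]

def generate_scenario(seed: int) -> dict:
    rng = random.Random(seed)  # same seed -> same scenario, for clean analysis
    return {
        "seed": seed,
        "terrain": rng.choice(TERRAINS),
        "objective": rng.choice(OBJECTIVES),
        "threat_count": rng.randint(2, 8),
        "visibility_km": round(rng.uniform(0.5, 5.0), 1),
    }

# One batch per session; re-running with the same seeds regenerates it exactly.
for scenario in (generate_scenario(seed) for seed in range(5)):
    print(scenario)
```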

  23. Backup.

  24. Alternative Quantitative Analysis & Evaluation Mechanisms. Columns run from more virtual / early phase (left) to more physical / late phase (right):

                               Behavioral  Interactive  Immersive   Augmented  Large Scale
                               Model +     Digital      Chamber     Indoor     Outdoor
                               Simulation  Simulation   Simulation  Exercise   Exercise
      High-N Experiments       strong      strong       moderate    weak       weak
      Low Cost / Burden        strong      strong       moderate    moderate   weak
      Realistic Detail         weak        moderate     strong      strong     strong
      Req'ts Identification    moderate    moderate     strong      moderate   weak
      New Uses/Exploits        weak        strong       weak        moderate   moderate
      Technical Feasibility    weak        strong       strong      moderate   strong
      Current Availability     weak        weak         moderate    weak       strong

  25. Outline: Challenges and Gaps → Approach & Results → Other Applications.

  28. Experimental Validation: Convergence on Successful Strategies. All five key questions were supported experimentally:
      • Convergence: do players quickly form and express consistent opinions? Supported.
      • Accuracy: are player opinions worth listening to? Do they match success? Supported.
      • Transferability: do lessons in the game map to the real world (e.g. risk)? Supported.
      • User impact: does this approach affect the opinions players express? Supported; gameplay shatters rosy speculation.
      • Novel lessons: do we learn anything new, or just confirm prior knowledge? Supported.
