The Cognitive Engineering of Human-Agent-Robot Systems



  1. The Cognitive Engineering of Human-Agent-Robot Systems. Peter Benda, PhD Candidate, Department of Information Systems

  2. Key messages
  • Thinking in an 'ecological systems' sense can provide 'engineering/design leverage'
  • Cognitive systems engineering might provide representations or models that
    • can be shared by humans and robots at an 'interface' level
    • can be useful in designing 'resilient' work systems
  • Propose doctoral research that may provide leverage in designing 'resilient' HART systems

  3. What I want from HART • learn what interesting and relevant work is out there • guidance for next steps in the PhD • Please don’t tell me to quit! • Focus . . . • look for opportunities for collaboration

  4. My Background • BAppSc + MAppSc Ind. & Mech Engineering (Toronto) • ~10 years HCI & HF consulting, corporate work • 4 years research fellow: “Maximising the effectiveness of interactive automated programs for smoking cessation”

  5. Quit Smoking Support (Briefly) • expert system model: a 'basic messaging'/advice system based on a modified TTM (Transtheoretical Model) • 5 conditions (5 variations of coaching system/controls) • iterative development model with ethnographic studies • RCT with 3800+ participants (quitting smokers)

  6. Where to start? • Transition from work in HF (Cog Eng) and HCI to a desire to work with 'agents' • Initial reviews of the HF/Cog Eng literature and the Agent literature; is there any common ground? • Perspective is that of a 'systems design problem'

  7. Systems Engineering & Design Perspective • Broadly: "How do we design a human-agent-robot system?" . . . and, more specifically, what does it mean to design a 'robust' or 'resilient' HAR system?

  8. Agent Lit Generally Covered: Kaminka's meta-analyses of agent modelling approaches • R-CAST • Joint Intentions (Cohen & Levesque) • SharedPlans • social simulation • ACT-R/Soar • game theory • Wayne Gray • evolutionary 'ecological' agents • BRAHMS • John Yen • HRI • Bradshaw • BDI

  9. HF/CSE Lit Reviewed: Hutchins • Klein's RPD • Woods & Hollnagel's Joint Cognitive Systems • Perrow's Normal Accidents and complex systems • Lintern • human perception of automation • Parasuraman & Sheridan's levels of automation • Vicente & Rasmussen's Cognitive Work Analysis • Zieba on resilient H-A-R systems

  10. Perspectives on HF and Agent literature
  • HF literature typically deals with
    • understanding human behaviour with automation, and human 'perception' of automation
    • user acceptance issues
    • optimising human use of automation
    • analysis of system error → training outcomes, prevention, interface design
  • Agent literature
    • models upon which synthetic agents can be based
      • cognitive, decision-making, perceptual, behavioural, social psych, etc.
    • provides insight into 'human behaviours'
    • optimisation of multi-agent systems (typically synthetic)
    • problem-solving systems
    • etc.

  11. BUT

  12. Klein et al.: 10 key challenges • Klein et al.'s (2004) 10 key challenges facing such H-M systems (I'm looking at you, Bradshaw!)
    1. Basic Compact
    2. Adequate Models
    3. Predictability: human-agent team members must be mutually predictable.
    4. Directability: agents must be directable.
    5. Revealing Status and Intentions: agents must be able to make pertinent aspects of their status and intentions obvious to their team-mates.
    6. Interpreting Signals: agents must be able to observe and interpret pertinent signals of status and intentions.
    7. Goal Negotiation: agents must be able to engage in goal negotiation.
    8. Collaboration: support technologies for planning and autonomy must enable a collaborative approach.
    9. Attention Management: agents must be able to participate in managing attention.
    10. Cost Control: all team members must help control the costs of coordinated activity.

  13. Consider principles behind Distributed Cognitive Systems (Ed Hutchins + others, 1995+): • knowledge possessed by members of the cognitive system is both highly variable and redundant • individuals working together on a collaborative task possess different kinds of knowledge and will engage in interactions that allow them to pool their various resources to accomplish their tasks (ref. Rogers, 1997)

  14. Distributed Cognitive Systems (2) • Distributing and sharing access and knowledge enables the coordination of expectations to emerge, which in turn forms the basis of coordinated action (ref. Rogers, 1997)

  15. Building System Resilience • Three meanings (Zieba et al., 2009): • foresight and avoidance of events • reaction to events • recovery from the occurrence of events

  16. Recovery from occurrence of (unanticipated) events “[Affordances allow] for a common representation for the opportunities of action between the automated system and its environment.”

  17. Can Cog Sys provide leverage?
  • Rasmussen and Vicente developed and refined 'Cognitive Work Analysis' (CWA)
  • Focus was on the engineering of complex, time-critical H-M systems that exploit human decision making effectively during
    • 'normal' operation (i.e. predictable situations)
    • the occurrence of unpredictable events (often emergency situations)
  • Goal was to provide a framework for resilient systems design
    • analytical tools & design tools (e.g., EID)

  18. Cognitive Engineering • With Lintern's (2009) modifications to include RPD, the CWA 'outcomes' to be focussed on include • Abstraction-Decomposition Space (affordance and constraint mapping of a work system) • Contextual Activity Matrices (desired and potential spans of action) • Decision Ladder(s) (potential strategies). (A minimal data-structure sketch of the first of these follows below.)
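
  As a minimal, hypothetical sketch of the first artefact: the level names follow Rasmussen's abstraction hierarchy, but every class, field, and method name below is illustrative only, not something the slides (or CWA itself) prescribe.

    # Hypothetical sketch: an abstraction-decomposition space as a queryable structure.
    # Level names follow Rasmussen's abstraction hierarchy; everything else is illustrative.
    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Dict, List, Set

    class AbstractionLevel(Enum):
        FUNCTIONAL_PURPOSE = 1
        ABSTRACT_FUNCTION = 2
        GENERALISED_FUNCTION = 3
        PHYSICAL_FUNCTION = 4
        PHYSICAL_FORM = 5

    @dataclass
    class WorkDomainNode:
        name: str
        level: AbstractionLevel
        decomposition: str                                  # e.g. "system", "subsystem", "component"
        achieves: Set[str] = field(default_factory=set)     # means-ends links upward (node names)
        achieved_by: Set[str] = field(default_factory=set)  # means-ends links downward

    class AbstractionDecompositionSpace:
        """A flat store of work-domain nodes plus simple queries over means-ends links."""

        def __init__(self) -> None:
            self.nodes: Dict[str, WorkDomainNode] = {}

        def add(self, node: WorkDomainNode) -> None:
            self.nodes[node.name] = node

        def link(self, means: str, end: str) -> None:
            """Record that 'means' (lower level) is one way of achieving 'end' (higher level)."""
            self.nodes[means].achieves.add(end)
            self.nodes[end].achieved_by.add(means)

        def means_for(self, end: str) -> List[WorkDomainNode]:
            """All lower-level nodes that can currently serve the given higher-level function."""
            return [self.nodes[n] for n in sorted(self.nodes[end].achieved_by)]

  The value in unanticipated situations would come from queries such as means_for(...): surfacing alternative means to the same end is exactly the flexibility the abstraction hierarchy is meant to expose.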

  19. One potential approach • Utilise the Abstraction Hierarchy & related Work Domain Analysis as the basis for a shared 'system' model • Development of a shared system representation (interface) that • can be understood, interrogated, and acted upon efficiently by humans, agents, and robots

  20. In other words: develop a specification of a human-agent-robot shared representation (interface) supporting affordance-based communication that • can be directly ("efficiently") perceived by H-A-R • can be used as a basis for coordinated H-A-R action, as a means of collaboratively and dynamically 'resolving degrees of freedom in the work system' in unanticipated situations. (An illustrative interface sketch follows below.)
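
  One hedged, illustrative way to read that specification in code; the API, affordance names, and fields below are placeholders, since the actual specification is what the research would have to define and validate.

    # Hypothetical sketch of an affordance-based H-A-R shared representation.
    # The API and all names are placeholders, not a defined specification.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass(frozen=True)
    class Affordance:
        """An opportunity for action in the work system, stated independently of actor type."""
        name: str                                             # e.g. "clear_blockage" (illustrative)
        target: str                                           # the work-domain node it acts on
        available_when: Callable[[Dict[str, object]], bool]   # predicate over the shared state

    @dataclass
    class SharedRepresentation:
        """Shared interface: coarse world state, affordances, and announced intentions."""
        state: Dict[str, object] = field(default_factory=dict)
        affordances: List[Affordance] = field(default_factory=list)
        intentions: Dict[str, str] = field(default_factory=dict)   # actor id -> affordance name

        def available(self) -> List[Affordance]:
            """Affordances whose preconditions hold right now, perceivable by any actor."""
            return [a for a in self.affordances if a.available_when(self.state)]

        def announce(self, actor_id: str, affordance: Affordance) -> None:
            """Make an actor's intention visible to team-mates (cf. Klein et al. challenges 5 and 6)."""
            self.intentions[actor_id] = affordance.name

        def unclaimed(self) -> List[Affordance]:
            """Available affordances nobody has claimed: the open degrees of freedom."""
            claimed = set(self.intentions.values())
            return [a for a in self.available() if a.name not in claimed]

  The same calls would serve a GUI rendering options for a human, a software agent's planner, or a robot controller; that symmetry is the 'shared interface' property the slide is after.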

  21. 10 Challenges Redux • Potentially addresses: • Challenge 5 (Revealing Status and Intentions): agents must be able to make pertinent aspects of their status and intentions obvious to their team-mates • Challenge 6 (Interpreting Signals): agents must be able to observe and interpret pertinent signals of status and intentions

  22. Research approach (a toy harness sketch follows below)
  • Take a candidate H-A-R or H-A system
    • a human-in-the-loop simulation
    • e.g., the MIL-C2 DSS used in some of the R-CAST work
    • must be able to introduce unanticipated events
  • Develop the mechanics of integration and the human and agent interface(s)
  • Integrate the proposed model into said simulation
  • Run experiments versus a control (the original system)
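
  As a rough, self-contained sketch of that loop: every class, probability, and metric below is a placeholder (a real study would substitute a genuine micro-world such as BW4T, human participants, and proper measures); the point is only the shape of the comparison between the control and shared-representation conditions.

    # Hypothetical harness sketch: the same scenario runs under a control condition and a
    # shared-representation condition, with unanticipated events injected mid-run.
    # MicroWorld is a toy stand-in for a real testbed (e.g. a BW4T-style micro-world).
    import random
    from statistics import mean
    from typing import List

    class MicroWorld:
        """Toy world: a pool of tasks, some of which get knocked into a disrupted state."""

        def __init__(self, rng: random.Random, n_tasks: int = 20) -> None:
            self.rng = rng
            self.n_tasks = n_tasks
            self.pending: List[int] = list(range(n_tasks))
            self.disrupted: List[int] = []

        def inject_event(self) -> None:
            if self.pending:
                self.disrupted.append(self.pending.pop())   # an unanticipated disruption

        def step(self, shared_rep: bool) -> None:
            # Assumption under test (here simply hard-coded): a shared representation helps
            # the team notice and recover from disruptions more reliably.
            recover_p = 0.9 if shared_rep else 0.4
            if self.disrupted and self.rng.random() < recover_p:
                self.disrupted.pop()                        # recovery counts as completion
            elif self.pending:
                self.pending.pop()                          # routine task completion

        def completion(self) -> float:
            return 1.0 - (len(self.pending) + len(self.disrupted)) / self.n_tasks

    def run_trial(shared_rep: bool, seed: int, steps: int = 30) -> float:
        rng = random.Random(seed)
        world = MicroWorld(rng)
        for _ in range(steps):
            if rng.random() < 0.15:
                world.inject_event()
            world.step(shared_rep)
        return world.completion()

    if __name__ == "__main__":
        for condition in (False, True):                     # control vs shared representation
            scores = [run_trial(condition, seed) for seed in range(30)]
            print(f"shared_rep={condition}: mean completion = {mean(scores):.2f}")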

  23. Other ideas inspired by HART • BW4T – is that a possible candidate micro-world? • NIFTI search and rescue work • augments? user-centred development work

  24. What I want from HART Redux • learn what interesting and relevant work is out there • guidance for next steps in the PhD • Please don’t tell me to quit! • Focus . . . • look for opportunities for collaboration

  25. A Related Approach . . . ? • Johnson, Bradshaw et al. (2010): "Coactivity" & Interdependence • "Critical design feature of HR system is 'the underlying interdependence of joint activity'" • Closely following this work . . . I think there is 'common ground'

  26. Questions, comments?
