Models of Human Performance - revision

  1. Models of Human Performance - revision CSCI 4800/6800 Spring 2006 Kraemer

  2. Objectives • Introduce theory-based models for predicting human performance • Introduce competence-based models for assessing cognitive activity • Relate modelling to interactive systems design and evaluation

  3. Some Background Reading • Dix, A. et al., 1998, Human-Computer Interaction (chapters 6 and 7), London: Prentice Hall • Anderson, J.R., 1983, The Architecture of Cognition, Cambridge, MA: Harvard University Press • Card, S.K. et al., 1983, The Psychology of Human-Computer Interaction, Hillsdale, NJ: LEA • Carroll, J., 2003, HCI Models, Theories and Frameworks: Toward a Multidisciplinary Science (chapters 1, 3, 4, 5), San Francisco, CA: Morgan Kaufmann

  4. Why Model Performance? • Building models can help develop theory • Models make assumptions explicit • Models force explanation • Surrogate user: • Define ‘benchmarks’ • Evaluate conceptual designs • Make design assumptions explicit • Rationale for design decisions

  5. Why Model Performance? • Human-computer interaction as Applied Science • Theory from cognitive sciences used as basis for design • General principles of perceptual, motor and cognitive activity • Development and testing of theory through models

  6. Pros and Cons of Modelling • PROS • Consistent description through (semi) formal representations • Set of ‘typical’ examples • Allows prediction / description of performance • CONS • Selective (some things don’t fit into models) • Assumption of invariability • Misses creative, flexible, non-standard activity

  7. Generic Model Process? • Define system: {goals, activity, tasks, entities, parameters} • Abstract to semantic level • Define syntax / representation • Define interaction • Check for consistency and completeness • Predict / describe performance • Evaluate results • Modify model

  8. User Models in Design • Benchmarking • Human Virtual Machines • Evaluation of concepts • Comparison of concepts • Analytical prototyping

  9. Benchmarking • What times can users be expected to take to perform a task? • Training criteria • Evaluation criteria (under ISO 9241) • Product comparison
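The slides do not say how such benchmark times are produced; one common illustration, drawing on the Card et al. (1983) reading above, is a Keystroke-Level Model estimate. The sketch below is mine, and the operator times are approximate published values, not figures from these slides.

# A minimal Keystroke-Level Model benchmark estimate (illustrative sketch;
# operator times in seconds are approximate values from Card et al., 1983).
KLM_TIMES = {
    "K": 0.28,   # keystroke (average skilled typist)
    "P": 1.10,   # point at a target with the mouse
    "H": 0.40,   # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def klm_estimate(operator_sequence):
    """Sum the operator times for a task written as a string of KLM operators."""
    return sum(KLM_TIMES[op] for op in operator_sequence)

# Hypothetical benchmark task: think, point at a field, home to the keyboard,
# then type a five-character code.
task = "MPH" + "K" * 5
print(f"predicted task time: {klm_estimate(task):.2f} s")   # about 4.25 s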

  10. Human Virtual Machine • How might the user perform? • Make assumptions explicit • Contrast views

  11. Evaluation of Concepts • Which design could lead to better performance? • Compare concepts using models prior to building prototype • Use performance of existing product as benchmark

  12. Performance vs. Competence • Performance Models • Make statements and predictions about the time, effort or likelihood of error when performing specific tasks; • Competence Models • Make statements about what a given user knows and how this knowledge might be organised.

  13. Notions of Memory • Procedural • Knowing how • Described in ACT by Production Rules • Declarative • Knowing that • Described in ACT by ‘chunks’ • Goal Stack • A sort of ‘working memory’ • Holds chunks (goals) • Top goal pushed (like GOMS) • Writeable
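As a rough illustration of the goal stack described above (my own Python sketch, not ACT notation), goals can be treated as chunks held in a writable, last-in-first-out store, with the system always working on the top goal:

# Goal stack sketch: goals are declarative chunks (here, plain dictionaries)
# pushed onto and popped off the top of a writable stack.
goal_stack = []

def push_goal(goal):
    goal_stack.append(goal)          # new subgoal goes on top

def pop_goal():
    return goal_stack.pop()          # finished goal comes off the top

push_goal({"type": "add-column", "column": "units"})
push_goal({"type": "retrieve-fact", "addend1": 6, "addend2": 8})
print("current goal:", goal_stack[-1])
pop_goal()                            # fact retrieved, return to the column goal
print("current goal:", goal_stack[-1])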

  14. Production Systems • Rules = (Procedural) Knowledge • Working memory = state of the world • Control strategies = way of applying knowledge

  15. Production Systems • Architecture of a production system (diagram): a rule base and a working memory, linked by an interpreter

  16. The Problem of Control • Rules are useless without a useful way to apply them • Need a consistent, reliable, useful way to control the way rules are applied • Different architectures / systems use different control strategies to produce different results

  17. Production Rules • IF condition THEN action • e.g., IF ship is docked AND free-floating ships THEN launch ship • IF dock is free AND ship matches THEN dock ship
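To make the match-fire cycle of slides 14-18 concrete, here is a minimal forward-chaining production system in Python built around the ship/dock rules above. The fact names and the first-match conflict-resolution strategy are my own simplifications, not part of the slides.

# A minimal production system sketch: working memory is a set of facts, each
# rule pairs a condition (set of facts) with an action, and the interpreter
# repeatedly fires one matching rule per cycle.
def launch_ship(wm):
    """Action: free the dock by launching the docked ship."""
    wm.discard("ship docked")
    wm.add("dock free")

def dock_ship(wm):
    """Action: dock a waiting ship in the free dock."""
    wm.discard("dock free")
    wm.discard("ship waiting")
    wm.add("ship docked")

# Rule base: IF all condition facts are in working memory THEN run the action.
rules = [
    ({"ship docked", "ship waiting"}, launch_ship),   # cf. the "launch ship" rule
    ({"dock free", "ship waiting"}, dock_ship),       # cf. the "dock ship" rule
]

def run(working_memory, max_cycles=10):
    for cycle in range(max_cycles):
        # Match: collect rules whose conditions are satisfied.
        matched = [(cond, act) for cond, act in rules if cond <= working_memory]
        if not matched:
            break                       # no rule applicable: halt
        cond, act = matched[0]          # trivial conflict resolution: first match
        act(working_memory)             # execute the action, updating the state
        print(f"cycle {cycle}: fired {act.__name__}, WM = {working_memory}")

run({"ship docked", "ship waiting"})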

  18. The Parsimonious Production Systems Rule Notation • On any cycle, any rule whose conditions are currently satisfied will fire • Rules must be written so that a single rule will not fire repeatedly • Only one rule will fire on a cycle • All procedural knowledge is explicit in these rules rather than being implicit in the interpreter

  19. Why Cognitive Architecture? • Computer architectures: • Specify components and their connections • Define functions and processes • Cognitive Architectures could be seen as the logical conclusion of the ‘human-brain-as-computer’ hypothesis

  20. General Requirements • Integration of cognition, perception, and action • Robust behavior in the face of error, the unexpected, and the unknown • Ability to run in real time • Ability to Learn • Prediction of human behavior and performance

  21. Adaptive Control of Thought, Rational (ACT-R) http://act.psy.cmu.edu

  22. Adaptive Control of Thought, Rational (ACT-R) • ACT-R symbolic aspect realised over subsymbolic mechanism • Symbolic aspect in two parts: • Production memory • Symbolic memory (declarative memory) • Theory of rational analysis

  23. Theory of Rational Analysis • Evidence-based assumptions about environment (probabilities) • Deriving optimal strategies (Bayesian) • Assuming that optimal strategies reflect human cognition (either what it actually does or what it probably ought to do)
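As a toy illustration of the rational-analysis argument (my own example, not from the slides): treat "will this memory be needed?" as a Bayesian inference from environmental statistics, and act only when the expected gain outweighs the cost. All numbers below are made up.

# Bayes' rule applied to "is this memory item needed, given the current cue?"
def posterior_need(prior, p_cue_given_need, p_cue_given_not_need):
    """P(need | cue) from the prior and the cue likelihoods."""
    evidence = p_cue_given_need * prior + p_cue_given_not_need * (1 - prior)
    return p_cue_given_need * prior / evidence

p = posterior_need(prior=0.1, p_cue_given_need=0.8, p_cue_given_not_need=0.2)
gain, cost = 10.0, 1.0          # arbitrary payoff units for the example
print(f"P(needed | cue) = {p:.2f}; retrieve = {p * gain > cost}")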

  24. (Very simple) ACT • Network of propositions • Production rules selected via pattern matching. Production rules coordinate retrieval of chunks from symbolic memory and link to environment. • If information in working memory matches production rule condition, then fire production rule

  25. ACT* • Architecture diagram: declarative memory and procedural memory sit above a working memory; declarative memory connects to working memory via storage and retrieval, procedural memory via match and execution, and working memory connects to the outside world via encoding and performance

  26. Addition-Fact Knowledge Representation • Worked example: adding 16 + 18 = 34 column by column; the fact 6 + 8 = 14 is a chunk with slots addend1 (six), addend2 (eight) and sum (14: units 4, tens 1, hundreds 0) • Goal buffer: add numbers in right-most column • Visual buffer: 6, 8 • Retrieval buffer: 14
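A hedged sketch of how the chunk and buffer contents on this slide might be written down as plain Python data structures; real ACT-R chunk syntax differs, and the slot names follow the slide rather than any particular implementation.

# Declarative chunk encoding the arithmetic fact 6 + 8 = 14.
addition_fact = {
    "type": "addition-fact",
    "addend1": "six",
    "addend2": "eight",
    "sum": "fourteen",
}

# Buffer contents at the point shown on the slide: the goal is to add the
# right-most column of 16 + 18, the visual buffer holds the digits read from
# that column, and the retrieval buffer holds the fact fetched from memory.
buffers = {
    "goal": {"task": "add-column", "column": "right-most"},
    "visual": {"digit1": 6, "digit2": 8},
    "retrieval": addition_fact,
}

# A production could now write the units digit and carry into the tens column.
carry, units = divmod(6 + 8, 10)
print(f"write {units}, carry {carry}")   # -> write 4, carry 1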

  27. Symbolic / Subsymbolic levels • Symbolic level • Information as chunks in declarative memory, and represented as propositions • Rules as productions in procedural memory • Subsymbolic level • Chunks given parameters which are used to determine the probability that the chunk is needed • Base-level activation (relevance) • Context activation (association strengths)
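The subsymbolic parameters mentioned above can be illustrated with the standard ACT-R base-level learning equation, B_i = ln(sum_j t_j^(-d)), plus a spreading-activation sum over association strengths. The sketch below is mine and uses made-up parameter values.

import math

def base_level_activation(times_since_use, decay=0.5):
    """Base-level activation: higher when a chunk was used recently and often."""
    return math.log(sum(t ** -decay for t in times_since_use))

def context_activation(source_weights, association_strengths):
    """Spreading activation from the current context (association strengths)."""
    return sum(w * s for w, s in zip(source_weights, association_strengths))

# Chunk last used 2, 30 and 200 seconds ago; two active context sources.
activation = (base_level_activation([2.0, 30.0, 200.0])
              + context_activation([0.5, 0.5], [1.2, 0.4]))
print(f"total activation = {activation:.2f}")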

  28. Conflict resolution • Order production rules by preference • Select top rule in list • Preference defined by: • Probability that rule will lead to goal • Time associated with rule • Likely cost of reaching goal when using sequence involving this rule
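A small sketch of this preference ordering using the classic ACT-R-style expected utility E = P*G - C (probability of reaching the goal, times the goal's value, minus the expected cost). The rules and numbers below are invented for illustration.

def expected_utility(p_success, goal_value, cost):
    """P = probability the rule leads to the goal, G = value of the goal,
    C = likely cost of the path through this rule."""
    return p_success * goal_value - cost

candidates = {
    "rule-A": expected_utility(0.9, goal_value=20.0, cost=5.0),   # 13.0
    "rule-B": expected_utility(0.6, goal_value=20.0, cost=2.0),   # 10.0
}

# Order the conflict set by preference and fire the top rule.
winner = max(candidates, key=candidates.get)
print(f"fire {winner}")   # -> fire rule-A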

  29. Conclusions • ACT uses a simple production system • ACT provides some quantitative prediction of performance • Rationality = optimal adaptation to the environment

  30. States, Operators, And Reasoning (SOAR) http://www.isi.edu/soar/soar.html

  31. States, Operators, And Reasoning (SOAR) • Successor to the General Problem Solver (Newell and Simon, 1960) • SOAR seeks to apply operators to states within a problem space to achieve a goal • SOAR assumes that the actor uses all available knowledge in problem-solving

  32. Soar as a Unified Theory of Cognition • Intelligence = problem solving + learning • Cognition seen as search in problem spaces • All knowledge is encoded as productions: a single type of knowledge • All learning is done by chunking: a single type of learning

  33. Young, R.M., Ritter, F., Jones, G.  1998 "Online Psychological Soar Tutorial" available at: http://www.psychology.nottingham.ac.uk/staff/Frank.Ritter/pst/pst-tutorial.html

  34. SOAR Activity • Operators:  Transform a state via some action • State:  A representation of possible stages of progress in the problem • Problem space:  States and operators that can be used to achieve a goal. • Goal: Some desired situation.

  35. SOAR Activity • Problem solving = applying an Operator to a State in order to move through a Problem Space to reach a Goal.  • Impasse =   Where an Operator cannot be applied to a State, and so it is not possible to move forward in the Problem Space. This becomes a new problem to be solved. • Soar can learn by storing solutions to past problems as chunks and applying them when it encounters the same problem again
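A toy Python sketch (my own, not Soar code) of the cycle described on slides 34-35: apply operators to a state to move through a problem space, treat the inability to progress as an impasse, and cache solved problems as chunks for re-use. The operators and state names are invented.

chunks = {}   # learned (state, goal) -> operator mappings ("chunks")

def solve(state, goal, operators):
    """Apply operators until the goal state is reached, then chunk the path."""
    path = []
    while state != goal:
        op = chunks.get((state, goal))                 # re-use a learned chunk
        if op is None:
            applicable = [o for o in operators if o["from"] == state]
            if not applicable:
                raise RuntimeError(f"impasse at {state}: no operator applies")
            op = applicable[0]                         # pick an operator (no ties here)
        path.append((state, op))
        state = op["to"]                               # apply the operator
    for visited, chosen in path:                       # store the successful choices
        chunks[(visited, goal)] = chosen
    return state

ops = [{"name": "open-file", "from": "start", "to": "editing"},
       {"name": "save-file", "from": "editing", "to": "done"}]
solve("start", "done", ops)
print(chunks)   # the stored chunks are re-used the next time solve() runs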

  36. SOAR Architecture • Architecture diagram: production memory (pattern → action rules) and working memory (objects and preferences), connected through a working-memory manager, a conflict stack and a decision procedure, with a chunking mechanism feeding new rules into production memory

  37. Explanation • Working Memory • Data for current activity, organized into objects • Production Memory • Contains production rules • Chunking mechanism • Collapses successful sequences of operators into chunks for re-use

  38. 3 Levels in Soar • Symbolic – the programming level • Rules programmed into Soar that match circumstances and perform specific actions • Problem space – states & goals • The set of goals, states, operators, and context • Knowledge – embodied in the rules • The knowledge of how to act on the problem/world, how to choose between different operators, and any learned chunks from previous problem solving

  39. How does it work? • A problem is encoded as a current state and a desired state (goal) • Operators are applied to move from one state to another • There is success if the desired state matches the current state • Operators are proposed by productions, with preferences biasing choices in specific circumstances • Productions fire in parallel

  40. Impasses • If no operator is proposed, or if there is a tie between operators, or if Soar does not know what to do with an operator, there is an impasse • When there are impasses, Soar sets a new goal (resolve the impasse) and creates a new state • Impasses may be stacked • When one impasse is solved, Soar pops up to the previous goal
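A toy illustration (again mine, not Soar code) of stacked impasses as described above: an unresolved choice pushes a subgoal onto a goal stack, and resolving it pops back to the previous goal. The goal descriptions are invented strings.

goal_stack = ["edit document"]

def impasse(reason):
    subgoal = f"resolve impasse: {reason}"
    goal_stack.append(subgoal)           # impasses may be stacked
    print("pushed:", subgoal)

def resolved():
    done = goal_stack.pop()              # pop back to the previous goal
    print("resolved:", done, "-> current goal:", goal_stack[-1])

impasse("tie between operators 'save' and 'save-as'")
impasse("no operator proposed for choosing a file name")
resolved()   # inner impasse solved first
resolved()   # back to "edit document"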

  41. Conclusions • It may be too "unified" • Single learning mechanism • Single knowledge representation • Uniform problem state • It does not take neuropsychological evidence into account (cf. ACT-R) • There may be non-symbolic intelligence (e.g., neural nets) that cannot be abstracted to the symbolic level

  42. Comparison of Architectures
