
Models of Human Performance

CSCI 4800

Spring 2006

Kraemer



Objectives

  • Introduce theory-based models for predicting human performance

  • Introduce competence-based models for assessing cognitive activity

  • Relate modelling to interactive systems design and evaluation



What are we trying to model?


Seven Stage Action Model [Norman, 1990]

[Diagram: a cycle beginning from the goal of the person, through execution of actions on the world, to perception and evaluation of the outcome.]



Describing Problem Solving

  • Initial State

  • Goal State

  • All possible intervening states

    • Problem Space

  • Path Constraints

  • State Action Tree

  • Means-ends analysis
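The state-space view above (initial state, goal state, problem space, path constraints) can be sketched as a breadth-first search. This is a minimal sketch: the toy problem space, its state names, and the `successors` function are all invented for illustration.

```python
from collections import deque

def solve(initial, goal, successors):
    """Breadth-first search through a problem space:
    returns a path of states from initial to goal."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:          # path constraint: no revisiting states
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy problem space: each state maps to the states reachable from it
space = {"start": ["a", "b"], "a": ["c"], "b": ["c", "goal"],
         "c": ["goal"], "goal": []}
path = solve("start", "goal", lambda s: space[s])
```

The returned path is one branch of the state-action tree; means-ends analysis would instead pick moves that reduce the difference to the goal at each step.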



Problem Solving

  • A problem is something that doesn’t solve easily

  • A problem doesn’t solve easily because:

    • you don’t have the necessary knowledge or,

    • you have misrepresented part of the problem

  • If at first you don’t succeed, try something else

  • Tackle one part of the problem and other parts may fall into place



Conclusion

  • More than one solution

  • Solution limited by boundary conditions

  • Representation affects strategy

  • Active involvement and testing



Functional Fixedness

  • Strategy developed in one version of the problem

  • Strategy might be inefficient

    X ) XXXX

  • Convert numerals or just ‘see’ 4



Data-driven perception

Activation of neural structures of sensory system by pattern of stimulation from environment



Theory-driven perception

Perception driven by memories and expectations about incoming information.


KEYPOINT

PERCEPTION involves a set of active processes that impose STRUCTURE, STABILITY, and MEANING on the world.



Visual Illusions

http://www.genesishci.com/illusions2.htm

Rabbit or duck?

Old Woman or Young girl?



Interpretation

Knowledge of what you are “looking at” can aid in interpretation

JACKAN DJI

LLW ENTU PTH

EHILLTOFE

TCHAPAILO

FWATER

Organisation of information is also useful



Story Grammars

  • Analogy with sentence grammars

    • Building blocks and rules for combining

  • Break story into propositions

    “Margie was holding tightly to the string of her beautiful new balloon. Suddenly a gust of wind caught it, and carried it into a tree. It hit a branch, and burst. Margie cried and cried.”


Story Grammar

[Tree diagram for the Margie story: Story → Setting [1] + Episode; Episode → Event + Reaction; the initiating Event decomposes into Events [2]–[6], including a change of state [5]; the Reaction decomposes into an Internal response [sadness] and an Overt response.]



Inferences

  • Comprehension typically requires our active involvement in order to supply information that is not explicit in the text

    1. Mary heard the ice-cream van coming

    2. She remembered her pocket money

    3. She rushed into the house.



Inference and Recall

  • Thorndyke (1976): recall of sentences from ‘Mary’ story

    • 85% correct sentence

    • 58% correct inference –

      • sentence not presented

    • 6% incorrect inference



Mental Models

  • Van Dijk and Kintsch (1983)

    • Text processed to extract propositions, which are held in working memory;

    • When sufficient propositions in WM, then linking performed;

    • Relevance of propositions to linking proportional to recall;

    • Linking reveals ‘gist’


Semantic Networks

[Network diagram, Collins & Quillian, 1969:]

  • ANIMAL: has skin, can move, eats, breathes

  • BIRD (is an ANIMAL): can fly, has wings, has feathers

  • FISH (is an ANIMAL): has fins, can swim, has gills

  • CANARY (is a BIRD): is yellow, can sing
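The network lends itself to a small property-inheritance lookup. In this sketch the dictionary encoding is an assumption (not from the slides); the function returns how many ISA links it climbed before finding the property, a stand-in for verification time.

```python
# Hypothetical encoding of the Collins & Quillian network
network = {
    "animal": {"isa": None, "props": {"has skin", "can move", "eats", "breathes"}},
    "bird":   {"isa": "animal", "props": {"can fly", "has wings", "has feathers"}},
    "fish":   {"isa": "animal", "props": {"has fins", "can swim", "has gills"}},
    "canary": {"isa": "bird", "props": {"is yellow", "can sing"}},
}

def verify(concept, prop):
    """Return the number of ISA links climbed before the property is
    found (a proxy for reaction time), or None if the statement is false."""
    levels = 0
    node = concept
    while node is not None:
        if prop in network[node]["props"]:
            return levels
        node = network[node]["isa"]   # climb one level up the hierarchy
        levels += 1
    return None
```

"A canary can sing" is found at level 0, "can fly" at level 1 (bird), "has skin" at level 2 (animal), mirroring the reaction-time ordering on the next slide.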


Levels and Reaction Time

[Plot, Collins & Quillian, 1969: mean reaction time (0.9–1.5 s) against levels of sentences (0, 1, 2), with separate lines for property sentences ("A canary can sing" < "A canary can fly" < "A canary has skin") and category sentences ("A canary is a canary" < "A canary is a bird" < "A canary is an animal"); false sentences ("A canary has gills", "A canary is a fish") take longest.]



Canaries

  • Different times to verify the statements:

    • A canary is a bird

    • A canary can fly

    • A canary can sing

  • Time proportional to movement through network



Scripts, Schema and Frames

  • Schema = chunks of knowledge

    • Slots for information: fixed, default, optional

  • Scripts = action sequences

    • Generalised event schema (Nelson, 1986)

  • Frames = knowledge about the properties of things



Mental Models

  • Partial

  • Procedures, Functions or System?

  • Memory or Reconstruction?



Concepts

  • How do you know a chair is a chair?

A chair has four legs…does it? A chair has a seat…does it?



Prototypes, Typical Features, and Exemplars

  • Prototype

    • ROSCH (1973): people do not use feature sets, but imagine a PROTOTYPE for an object

  • Typical Features

    • ROSCH & MERVIS (1975): people use a list of features, weighted in terms of CUE VALIDITY

  • Exemplars

    • SMITH & MEDIN (1981): people use an EXAMPLE to imagine an object



    Representing Concepts

    • BARSALOU (1983)

      • TAXONOMIC

        • Categories that are well known and can be recalled consistently and reliably

          • E.g., Fruit, Furniture, Animals

        • Used to generate overall representation of the world

      • AD HOC

        • Categories that are invented for specific purpose

          • E.g., How to make friends, Moving house

        • Used for goal-directed activity within specific event frames



    Long Term Memory

    • Procedural

      • Knowing how

    • Declarative

      • Knowing that

    • Episodic vs. Semantic

      • Personal events

      • Language and knowledge of world



    Working Memory

    • Limited Capacity

      • 7 ± 2 items (Miller, 1956)

      • 4 ± 2 chunks (Broadbent, 1975)

      • Modality dependent capacity

  • Strategies for coping with limitation

    • Chunking

    • Interference

    • Activation of Long-term memory


    Baddeley's (1986) Model of Working Memory

    [Diagram: a Central Executive coordinates two slave systems: the visuo-spatial sketchpad (Visual Cache and Inner Scribe) and the phonological loop (Phonological Store and Articulatory Control Process). Auditory word presentation feeds the phonological store directly; visual word presentation reaches it via the articulatory control process.]



    Slave Systems

    • Articulatory loop

      • Memory Activation

      • Rehearsal capacity

        • Word length effect and Rehearsal speed

    • Visual cache

      • Visual patterns

      • Complexity of pattern, number of elements etc

    • Inner scribe

      • Sequences of movement

      • Complexity of movement



    Typing

    • Eye-hand span related to expertise

      • Expert = 9, novice = 1

  • Inter-key interval

    • Expert = 100ms

  • Strategy

    • Hunt & Peck vs. Touch typing

  • Keystroke

    • Novice = highly variable keystroke time

    • Novice = very slow on ‘unusual’ letters, e.g., X or Z



    Salthouse (1986)

    • Input

      • Text converted to chunks

    • Parsing

      • Chunks decomposed to strings

    • Translation

      • Strings into characters and linked to movements

    • Execution

      • Key pressed



    Rumelhart & Norman (1982)

    • Perceptual processes

      • Perceive text, generate word schema

    • Parsing

      • Compute codes for each letter

    • Keypress schemata

      • Activate schema for letter-keypress

    • Response activation

      • Press defined key through activation of appropriate hand / finger


    Schematic of Rumelhart and Norman's connectionist model of typing

    [Diagram: a word node ("jazz"), activated from a visual or auditory stimulus, activates keypress nodes that break the word into typed letters (j, a, z, z); keypress nodes excite and inhibit one another and drive the response system, which assigns each keypress to a finger (thumb, index, middle, ring, little) of the left or right hand.]



    Automaticity

    • Norman and Shallice (1980)

      • Fully automatic processing controlled by SCHEMATA

      • Partially automatic processing controlled by either Contention Scheduling or the Supervisory Attentional System (SAS)


    Supervisory Attentional System Model

    [Diagram: the Perceptual System activates control schemas via a trigger database; competition between schemas is resolved by contention scheduling before output to the Effector System; the Supervisory Attentional System biases schema activation from above.]



    Contention Scheduling

    • Gear changing when driving involves many routine activities but is performed ‘automatically’ – without conscious awareness

    • When routines clash, relative importance is used to determine which to perform – Contention Scheduling

      • e.g., right foot on brake or clutch



    SAS activation

    • Driving on roundabouts in France

      • Inhibit ‘look right’; Activate ‘look left’

      • SAS to over-ride habitual actions

    • SAS active when:

      • Danger, Choice of response, Novelty etc.



    Attentional Slips and Lapses

    • Habitual actions become automatic

    • SAS inhibits habit

    • Perseveration

      • When SAS does not inhibit and habit proceeds

  • Distraction

    • Irrelevant objects attract attention

    • Utilisation behaviour: patients with frontal lobe damage will reach for object close to hand even when told not to



    Performance Operating Characteristics

    • Resource-dependent trade-off between performance levels on two tasks

    • Task A and Task B performed several times, with instructions to allocate more effort to one task or the other



    Task Difficulty

    • Data limited processes

      • Performance related to quality of data and will not improve with more resource

  • Resource limited processes

    • Performance related to amount of resource invested in task and will improve with more resource


    [Diagram: performance operating characteristic (POC) curves plotting performance (P) on Task A against performance on Task B. For data-limited processes the curve stays flat as effort shifts between tasks; for resource-limited processes performance on one task trades off against the other, with M marking maximum joint performance and "cost" the drop on each axis.]



    Why Model Performance?

    • Building models can help develop theory

      • Models make assumptions explicit

      • Models force explanation

    • Surrogate user:

      • Define ‘benchmarks’

      • Evaluate conceptual designs

      • Make design assumptions explicit

    • Rationale for design decisions



    Why Model Performance?

    • Human-computer interaction as Applied Science

      • Theory from cognitive sciences used as basis for design

      • General principles of perceptual, motor and cognitive activity

      • Development and testing of theory through models



    Types of Model in HCI

    Whitefield, 1987



    Task Models

    • Researcher’s Model of User, in terms of tasks

    • Describe typical activities

    • Reduce activities to generic sequences

    • Provide basis for design



    Pros and Cons of Modelling

    • PROS

      • Consistent description through (semi) formal representations

      • Set of ‘typical’ examples

      • Allows prediction / description of performance

    • CONS

      • Selective (some things don’t fit into models)

      • Assumption of invariability

      • Misses creative, flexible, non-standard activity



    Generic Model Process?

    • Define system: {goals, activity, tasks, entities, parameters}

    • Abstract to semantic level

    • Define syntax / representation

    • Define interaction

    • Check for consistency and completeness

    • Predict / describe performance

    • Evaluate results

    • Modify model



    Device and Task Models


    Device Models

    • Buxton's 3-state device model

    [Diagram: State 0, State 1 and State 2, with transitions between adjacent states.]


    Application

    [Diagram, for a stylus: State 0 (out of range) ↔ State 1 (select) via "pen on" / "pen off"; State 1 ↔ State 2 (drag) via "button down" / "button up".]
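Buxton's model can be written as a plain transition table. The state numbering and event names follow the slide's stylus example; the `run` helper and its policy of ignoring impossible events are assumptions.

```python
# State 0 = out of range, 1 = tracking/select, 2 = dragging (stylus example)
TRANSITIONS = {
    (0, "pen on"):      1,
    (1, "pen off"):     0,
    (1, "button down"): 2,
    (2, "button up"):   1,
}

def run(events, state=0):
    """Feed a sequence of input events through the 3-state device model."""
    for e in events:
        state = TRANSITIONS.get((state, e), state)  # ignore impossible events
    return state
```

A mouse, by contrast, never leaves states 1–2: it has no "out of range" state, which is exactly the kind of device difference the model makes visible.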



    Different pointing devices



    Conclusions

    • Models abstract aspects of interaction

      • User, task, system

    • Models play a variety of roles in design



    Hierarchical Task Analysis

    • Activity assumed to consist of TASKS performed in pursuit of GOALS

    • Goals can be broken into SUBGOALS, which can be broken into tasks

    • Hierarchy (Tree) description



    Hierarchical Task Description



    The “Analysis” comes from plans

    • PLANS = conditions for combining tasks

    • Fixed Sequence

      • P0: 1 > 2 > exit

    • Contingent Fixed Sequence

      • P1: 1 > when state X achieved > 2 > exit

      • P1.1: 1.1 > 1.2 > wait for X time > 1.3 > exit

    • Decision

      • P2: 1 > 2 > If condition X then 3, elseif condition Y then 4 > 5 > exit



    Reporting

    • HTA can be constructed using Post-it notes on a large space (this makes it easy to edit and also encourages participation)

    • HTA can be difficult to present in a succinct printed form (it might be useful to take a photograph of the Post-it notes)

    • Typically a Tabular format is used:



    Redesigning the Interface to a medical imaging system



    Original Design

    Menu driven

    Menus accessed by first letter of command

    Menus arranged in hierarchy



    Problems with original design

    • Lack of consistency

      • D = DOS commands; Delete; Data file; Date

  • Hidden hierarchy

    • Only ‘experts’ could use

  • Inappropriate defaults

    • Setting up a scan required ‘correction’ of default settings three or four times



    Initial design activity

    • Observation of non-technology work

      • Cytogeneticists inspecting chromosomes

  • Developed model of task

    • Hierarchical task analysis

  • Developed design principles, e.g.,

    • Cytogeneticists as ‘picture people’

    • Task flow

    • Task mapping


    Task Model

    • Work flows between specific activities

    [Diagram: workflow linking Administration, Patient details, Cell sample, Microscope, Set up, Analysis and Reporting.]


    First "prototype"

    Layout related to task model. 'Sketch' very simple. Annotations show modifications.


    Second prototype

    Refined layout. 'Prototype' using HyperCard. Initial user trials compared this with a mock-up of the original design.


    Final Product

    Picture taken from company brochure. Initial concepts retained. Further modifications possible.



    Predicting Transaction Time



    Predicting Performance Time

    • Time and error are ‘standard’ measures of human performance

    • Predict transaction time for comparative evaluation

    • Approximations of human performance



    Unit Times

    • From task model, define sequence of tasks to achieve a specific goal

    • For each task, define ‘average time’



    Quick Exercise

    • Draw two parallel lines about 4cm apart and about 10cm long

    • Draw, as quickly as possible, a zig-zag line for 5 seconds

    • Count the number of lines and the number of times you have crossed the parallel lines



    Predicted result

    • About 70 lines

    • About 20 cross-overs



    Why this prediction?

    • Movement speed limited by biomechanical constraints

      • Motor subsystem changes direction every ~70ms

      • So: 5000 / 70 = 71 oscillations

    • Cognitive / Perceptual system cycles:

      • Perceptual @ 70ms

      • Cognitive @ 100ms

      • Correction takes 70+70+100 = 240ms

      • 5000/240 = 21



    Fitts’ Law

    • Paul Fitts 1954

    • Information-theoretic account of simple movements

    • Define the number of ‘bits’ processed in performing a given task


    Fitts' Tapping Task

    [Figure: two target strips of width W, their centres separated by amplitude A; the participant taps alternately between them as quickly as possible.]


    Fitts' Law

    • A = 62, W = 15

    • A = 112, W = 7

    • A = 112, W = 21

    Movement Time = a + b (log2 2A/W)

    [Plot: hits (21, 43, 54) against log2(2A/W) for the three conditions; the fitted line has intercept a = 10 and slope b = 27.5.]
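Plugging fitted constants into the law turns it into a predictor. This is a sketch only: a = 10 and b = 27.5 are the values read off the slide's plot, and the units of the result depend on how that plot was measured.

```python
import math

def movement_time(A, W, a=10.0, b=27.5):
    """Fitts' Law: MT = a + b * log2(2A / W).
    A = movement amplitude, W = target width; a, b are the fitted
    constants from the slide (units follow the original experiment)."""
    return a + b * math.log2(2 * A / W)
```

Note how the easy condition (A = 62, W = 15) yields a smaller predicted time than the hard one (A = 112, W = 7): the index of difficulty log2(2A/W) drives the whole prediction.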



    Alternate Versions

    MT = a + b log2 (2A/W)

    MT = b log2 (A/W + 0.5)

    MT = a + b log2 (A/W + 1)



    a and b are “constants”

    Data derived from plot

    Data as predictors?



    Potential Problems

    • Data-fitter rather than ‘law’

    • ‘Generic value’: a+b = 100

    • Variable predictive power for devices?

      • From ‘mouse data’ we get:

        (assume A = 5 and W = 10) log2(2A/W) ≈ 0.3

        339ms, 150.5ms and 34.9ms (!!)



    Hick – Hyman Law

    • William Hick 1952

    • Selection time, from a set of items, is proportional to the number of items

      T = k log2 (n+1), Where k = a constant (intercept+slope)

    • Approximately 150ms added to T for each item
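The law itself is one line of code; k = 0.15 s per bit is an assumption chosen to match the ~150 ms figure above.

```python
import math

def selection_time(n, k=0.15):
    """Hick-Hyman law: T = k * log2(n + 1), for choosing among n items.
    k (seconds per bit) is an assumed value matching the slide's
    150 ms estimate; the +1 accounts for the option of not responding."""
    return k * math.log2(n + 1)
```

Doubling the number of alternatives adds a constant increment to T, which is why broad-but-shallow menus can beat deep hierarchies.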


    Example of Hick-Hyman Law

    [Plot, Landauer and Nachbar, 1985: search time (0–4 s) against number of items (2–12), with separate lines for menus of words and of numbers.]



    Keystroke Level Models

    • Developed from 1950s ergonomics

    • Human information processor as linear executor of specified tasks

    • Unit-tasks have defined times

    • Prediction = summing of times for sequence of unit-tasks



    Building a KLM

    • Develop task model

    • Define task sequence

    • Assign unit-times to tasks

    • Sum times



    Example: cut and paste

    Task Model: Select line – Cut – Select insertion point – paste

    Task One: select line

    move cursor to

    start of line

    press (hold) button

    drag cursor to

    end of line

    release button



    Times for Movement

    • H: homing, e.g., hand from keyboard to mouse

      • Range: 214ms – 400ms

      • Average: 320ms

    • P: pointing, e.g., move cursor using mouse

      • Range: defined by Fitts’ Law

      • Average: 1100ms

    • B: button pressing, e.g., hitting key on keyboard

      • Range: 80ms – 700ms

      • Average: 200ms
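With the average unit times above (plus M = 1350 ms from the cognition slide), a KLM prediction is just a sum. The operator string below is an assumed reading of the earlier "select line" step: home to the mouse, think, point, press, drag, release.

```python
# Average unit times in ms, taken from the slides
UNIT_MS = {"H": 320, "P": 1100, "B": 200, "M": 1350}

def klm_time(operators):
    """Sum unit times for a Keystroke-Level Model operator sequence (ms)."""
    return sum(UNIT_MS[op] for op in operators)

# Assumed operator string for "select line":
# home, mental prep, point to line start, press button, drag to end, release
select_line = ["H", "M", "P", "B", "P", "B"]
```

This linear summing is exactly what the rules on a later slide refine, by deciding where M operators belong and when they can be deleted.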



    Times for Cognition / Perception

    • M: mental operation

      • Range: 990ms – 1760ms

      • Average: 1350ms

    • A: switch attention between parts of display

      • Average: 320ms

    • R: recognition of items

      • Range: 314ms – 1800ms

      • Average: 340ms

    • Perceive change:

      • Range: 50 – 300ms

      • Average: 100ms



    Rules for Summing Times

    • How to handle multiple Mental units:

      • M before Ks in new argument strings

      • M at start of ‘cognitive unit’

      • M before Ps that select commands

      • Delete M if K redundant terminator


    Alternative

    [Operator sequence: Pe M H P P' P]

    • What if we use ‘accelerated scrolling’ on the cursor keys?

      • Press and hold a cursor key and read scrolling numbers

      • Release key at or near number

      • Select correct number



    Critical Path Models

    • Used in project management

    • Map dependencies between tasks in a project

      • Task X is dependent on task Y if it is necessary to wait until the end of task Y before task X can commence



    Procedure

    • Construct task model, taking into account dependencies

    • Assign times to tasks

    • Calculate critical path and transaction time

      • Run forward pass

      • Run backward pass
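The forward and backward passes can be sketched as follows. The three-task example is invented for illustration, and the tasks are assumed to be listed in dependency (topological) order.

```python
def critical_path(tasks, deps):
    """tasks: {name: duration}; deps: {name: [predecessor names]}.
    Forward pass computes earliest starts, backward pass latest starts;
    tasks with zero slack form the critical path. Assumes the dict
    lists tasks in a valid dependency order."""
    early = {}
    for t in tasks:  # forward pass
        early[t] = max((early[p] + tasks[p] for p in deps.get(t, [])), default=0.0)
    finish = max(early[t] + tasks[t] for t in tasks)
    succs = {t: [s for s in tasks if t in deps.get(s, [])] for t in tasks}
    late = {}
    for t in reversed(list(tasks)):  # backward pass
        late[t] = min((late[s] for s in succs[t]), default=finish) - tasks[t]
    slack = {t: late[t] - early[t] for t in tasks}
    return finish, [t for t in tasks if abs(slack[t]) < 1e-9]

# Toy example: A and B run in parallel, C depends on both
finish, critical = critical_path({"A": 1.0, "B": 2.0, "C": 1.0},
                                 {"C": ["A", "B"]})
```

Here A has slack (it can "float", like R in the slide's example), so only B and C are critical.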


    Example

    [Diagram: operator sequence R, M, H, P, P', P across nodes 1–6, with unit times M = 1.35 s, H = 0.32 s, P = 0.2 s, R = 0.34 s; R runs in parallel with the main M-H-P-P'-P path, so it can float.]



    Critical Path Table



    Comparison

    • ‘Summing of times’ result:

      • 2.61s

    • ‘Critical path’ result:

      • 2.47s

    • R allowed to ‘float’


    Other time-based models

    • Task-network models

      • MicroSAINT

      • Unit-times and probability of transition

    [Network: Prompt (50 ms) → Speak word (300 ± 9 ms) → System response (1000 ± 30 ms); with probability p the transaction completes, with probability 1 − p it loops back to the prompt.]
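Task networks like this are usually evaluated by Monte Carlo simulation. In this sketch the normal distributions for the ± values, the loop-back reading of 1 − p, and p = 0.9 are all assumptions.

```python
import random

def simulate(p_success=0.9, trials=10_000, seed=1):
    """Monte Carlo estimate of mean transaction time (ms) for the toy
    speech network: prompt (50 ms), speak word (300 +/- 9 ms), system
    response (1000 +/- 30 ms); with probability 1 - p_success the
    dialogue loops back to the prompt. Distributions are assumptions."""
    rng = random.Random(seed)   # seeded for a reproducible estimate
    total = 0.0
    for _ in range(trials):
        t = 0.0
        while True:
            t += 50 + rng.gauss(300, 9) + rng.gauss(1000, 30)
            if rng.random() < p_success:
                break           # transaction completed
        total += t
    return total / trials
```

With p = 0.9 the expected number of passes is 1/0.9 ≈ 1.11, so the mean should land near 1350 / 0.9 = 1500 ms.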



    Models of Competence



    Performance vs. Competence

    • Performance Models

      • Make statements and predictions about the time, effort or likelihood of error when performing specific tasks;

    • Competence Models

      • Make statements about what a given user knows and how this knowledge might be organised.



    Sequence vs. Process vs. Grammar

    • Sequence Models

      • Define activity simply in terms of sequences of operations that can be quantified

    • Process Models

      • Simple model of mental activity but define the steps needed to perform tasks

    • Grammatical Models

      • Model required knowledge in terms of ‘sentences’



    Process Models

    • Production systems

    • GOMS



    Production Systems

    • Rules = (Procedural) Knowledge

    • Working memory = state of the world

    • Control strategies = way of applying knowledge


    Production Systems

    Architecture of a production system:

    [Diagram: a Rule base and a Working Memory, both read and updated by an Interpreter.]



    The Problem of Control

    • Rules are useless without a useful way to apply them

    • Need a consistent, reliable, useful way to control the way rules are applied

    • Different architectures / systems use different control strategies to produce different results


    Forward Chaining

    Rules: If A then B; If A and B then not C; If not C then GOAL

    [Worked trace, with working memory initially {A, C}: "If A then B" fires, adding B; "If A and B then not C" then fires, replacing C with not C; finally "If not C then GOAL" fires.]


    Backward Chaining

    Rules: If A then B; If A and B then not C; If not C then GOAL

    [Worked trace, reasoning back from the goal: Need GOAL → "If not C then GOAL" → need not C → "If A and B then not C" → need B → "If A then B" → need A, which is already in working memory.]



    Production Systems

    • A simple metaphor

    Ships

    Docks



    Production Systems

    • Ships must fit the correct dock

    • When one ship is docked, another can be launched





    Production Rules

    IF condition

    THEN action

    e.g.,

    IF ship is docked

    And free-floating ships

    THEN launch ship

    IF dock is free

    And Ship matches

    THEN dock ship



    The Parsimonious Production Systems Rule Notation

    • On any cycle, any rule whose conditions are currently satisfied will fire

    • Rules must be written so that a single rule will not fire repeatedly

    • Only one rule will fire on a cycle

    • All procedural knowledge is explicit in these rules rather than implicit in the interpreter
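A minimal interpreter consistent with these constraints (one rule fires per cycle, and no rule fires repeatedly because its additions must not already be in working memory) might look like this. The tuple encoding of rules and the use of list order for conflict resolution are assumptions.

```python
def run(rules, wm, limit=20):
    """Minimal production-system interpreter.
    rules: list of (name, conditions, additions, deletions) with sets;
    wm: working-memory set. On each cycle the first rule whose
    conditions hold, and whose additions are not already present,
    fires; the loop stops at quiescence."""
    trace = []
    for _ in range(limit):
        for name, cond, add, delete in rules:
            if cond <= wm and not add <= wm:
                wm = (wm - delete) | add
                trace.append(name)
                break
        else:
            break  # no rule fired: quiescence
    return wm, trace

# The forward-chaining example from the earlier slide
rules = [
    ("R3", {"not C"}, {"GOAL"}, set()),
    ("R2", {"A", "B"}, {"not C"}, {"C"}),
    ("R1", {"A"}, {"B"}, set()),
]
wm, trace = run(rules, {"A", "C"})
```

Note that list order acts as the control strategy here: putting R3 first gives goal-detection priority, which is one simple answer to "the problem of control" on the earlier slide.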


    Worked Example: The Tower of Hanoi

    [Figure: pegs A, B and C; discs 1–5 (smallest to largest) stacked on peg A.]

    Possible Steps 1

    Disc 1 from a to c

    Disc 2 from a to b

    Disc 1 from c to b

    Disc 3 from a to c

    Disc 1 from b to a

    Disc 2 from b to c

    Disc 1 from a to c


    Worked Example: The Tower of Hanoi

    [Figure: discs 4 and 5 remain on peg A; discs 1–3 now sit on peg C.]

    Possible Steps 2

    Disc 4 from a to b

    Disc 1 from c to b

    Disc 2 from c to a

    Disc 1 from b to a

    Disc 3 from c to b

    Disc 1 from a to c

    Disc 2 from a to b

    Disc 1 from c to b


    Worked Example: The Tower of Hanoi

    [Figure: disc 5 on peg A, discs 1–4 on peg B, peg C empty.]

    Possible Steps 3

    Disc 5 from a to c

    Disc 1 from b to a

    Disc 2 from b to c

    Disc 1 from a to c

    Disc 3 from b to a

    Disc 1 from c to b

    Disc 2 from c to a

    Disc 1 from b to a

    Disc 4 from b to c

    Disc 1 from a to c

    Disc 2 from a to b

    Disc 1 from c to b

    Disc 3 from a to c

    Disc 1 from b to a

    Disc 2 from b to c

    Disc 1 from a to c


    Simon's (1975) goal-recursive logic

    To get the 5-tower to Peg C, get the 4-tower to Peg B, then move the 5-disc to Peg C, then move the 4-tower to Peg C.

    To get the 4-tower to Peg B, get the 3-tower to Peg C, then move the 4-disc to Peg B, then move the 3-tower to Peg B.

    To get the 3-tower to Peg C, get the 2-tower to Peg B, then move the 3-disc to Peg C, then move the 2-tower to Peg C.

    To get the 2-tower to Peg B, move the 1-disc to Peg C, then move the 2-disc to Peg B, then move the 1-disc to Peg B.
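Simon's goal recursion translates directly into code: to move an n-tower, move the (n-1)-tower to the spare peg, move disc n, then move the (n-1)-tower on top of it.

```python
def move_tower(n, src, dst, spare, moves):
    """Simon-style recursion for the Tower of Hanoi: appends
    (disc, from_peg, to_peg) moves for shifting the n-tower."""
    if n == 0:
        return
    move_tower(n - 1, src, spare, dst, moves)  # clear the smaller tower
    moves.append((n, src, dst))                # move the largest disc
    move_tower(n - 1, spare, dst, src, moves)  # restack the smaller tower

moves = []
move_tower(5, "A", "C", "B", moves)
```

For 5 discs this yields the minimal 2^5 − 1 = 31 moves, matching the totals of the three "Possible Steps" slides.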


    Production Rule 1

    SUBGOAL_DISCS

    IF the goal is to achieve a particular configuration of discs

    And Di is on Px but should go to Py in the configuration

    And Di is the largest disc out of place

    And Dj is on Py

    And Dj is smaller than Di

    And Pz is clear OR has a disc larger than Dj

    THEN set a subgoal to move the Dj tower to Pz and Di to Py


    Production Rule 2

    SUBGOAL_MOVE_DISC

    IF the goal is to achieve a particular configuration of discs

    And Di is on Px but should go to Py in the configuration

    And Di is the largest disc out of place

    And Py is clear

    THEN move Di to Py


    Goals, Operators, Methods, Selection rules (GOMS) [Card, Moran and Newell, 1983]

    • Human activity modelled by Model Human Processor

    • Activity defined by GOALS

    • Goals held in ‘Stack’

    • Goals ‘pushed’ onto stack

    • Goals ‘popped’ from stack
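The goal stack can be sketched as a plain LIFO structure; the class and its method names are invented for illustration of the push/pop discipline.

```python
class GoalStack:
    """Minimal sketch of GOMS goal management: the model always works
    on the goal at the top of the stack; subgoals are pushed, and
    popped when satisfied, returning control to the parent goal."""
    def __init__(self):
        self._stack = []
    def push(self, goal):
        self._stack.append(goal)
    def pop(self):
        return self._stack.pop()
    def current(self):
        return self._stack[-1]

s = GoalStack()
s.push("GOAL: EDIT-MANUSCRIPT")     # top-level goal
s.push("GOAL: EDIT-UNIT-TASK")      # subgoal
s.push("GOAL: ACQUIRE-UNIT-TASK")   # get the next step
done = s.pop()                      # acquired: control returns to parent
```

After the pop, the model resumes work on EDIT-UNIT-TASK, exactly the behaviour the bullets above describe.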


    Goals

    • Symbolic structures to define desired state of affairs and methods to achieve this state of affairs

      GOAL: EDIT-MANUSCRIPT (top-level goal)

      GOAL: EDIT-UNIT-TASK (specific subgoal)

      GOAL: ACQUIRE UNIT-TASK (get next step)

      GOAL: EXECUTE UNIT-TASK (do next step)

      GOAL: LOCATE-LINE (specific step)



    Operators

    • Elementary perceptual, motor or cognitive acts needed to achieve subgoals

      Get-next-line

      Use-cursor-arrow-method

      Use-mouse-method


    Methods

    • Descriptions of procedures for achieving goals

    • Conditional upon contents of working memory and state of task

      GOAL: ACQUIRE-UNIT-TASK

      GET-NEXT-PAGE (if at end of manuscript)

      GET-NEXT-TASK



    Selection

    • Choose between competing Methods, if more than one

      GOAL:EXECUTE-UNIT-TASK

      GOAL:LOCATE-LINE

      [select:if hands on keyboard

      and less than 5 lines to move

      USE CURSOR KEYS

      else

      USE MOUSE]



    Example

    • Withdraw cash from ATM

      • Construct task model

      • Define production rules



    Task Model

    Method for goal: Obtain cash from ATM

    Step1: access ATM

    Step2: select ‘cash’ option

    Step3: indicate amount

    Step4: retrieve cash and card

    Step5: end task



    Production Rules

    ((GOAL: USE ATM TO OBTAIN CASH)

    ADD-UNIT-TASK (access ATM)

    ADD-WM-UNIT-TASK (access ATM)

    ADD-TASK-STEP (insert card in slot)

    SEND-TO-MOTOR(place card in slot)

    SEND-TO-MOTOR (eyes to slot)

    SEND-TO-PERCEPTUAL (check card in)

    ADD (WM performing card insertion)

    ADD-TASK-STEP (check card insertion)

    DELETE-UNIT-TASK (access ATM)

    ADD-UNIT-TASK (enter PIN)



    Problems with GOMS

    • Assumes ‘error-free’ performance

      • Even experts make mistakes

    • MHP grossly simplifies human information processing

    • Producing a task model of non-existent products is difficult



    Task Action Grammar

    • GOMS assumes ‘expert’ knows operators and methods for tasks

    • TAG assumes ‘expert’ knows simple tasks, i.e., tasks that can be performed without problem-solving



    TAG and competence

    • Competence

      • Defines what an ‘ideal’ user would know

    • TAG relies on ‘world knowledge’

      • up vs down

      • left vs right

      • forward vs backward



    Task-action Grammar

    • Grammar relates simple tasks to actions

    • Generic rule schema covering combinations of simple tasks


    TAG

    • A 'grammar' maps Simple Tasks onto Actions to form an interaction language, which can be used to investigate consistency



    Consistency

    • Syntactic: use of expressions

    • Lexical: use of symbols

    • Semantic-syntactic alignment: order of terms

    • Semantic: principle of completeness



    Procedure

    • Step 1: Write out commands and their structures

    • Step 2: Determine if commands have consistent structure

    • Step 3: Place command items into variable/feature relationship

    • Step 4: Generalise commands by separating into task features, simple tasks, task-action rule schema

    • Step 5: Expand parts of task into primitives

    • Step 6: Check to ensure all names are unique



    Example

    • Setting up a recording on a video-cassette recorder (VCR)

    • Assume that all controls via front panel and that the user can only use the up and down arrows



    Feature list [for a VCR]

    • Property: Date, Channel, Start, End

    • Value: number

    • Frequency: Daily, Weekly

    • Record: on, off



    Simple tasks

    SetDate [Property = Date, Value = US#, Frequency = Daily]

    SetDate [Property = Date, Value = US#, Frequency = Weekly]

    SetProg [Property = Prog, Value = US#]

    SetStart [Property = Start, Value = US#, Record = on]

    SetEnd [Property = End, Value = US#, Record = off]



    Rule Schema

    1. Task [Property = US#, Value] → SetValue [Value]

    2. Task [Property = Date, Value, Frequency = US#] → SetValue [Value] + press “↑/↓” until Frequency = US#

    3. Task [Property = Start, Value] → SetValue [Value] + press “Rec”

    4. SetValue [Value = US#] → press “↑/↓” until Value = US#

    5. SetValue [Value = US#] → use “↑/↓” until Value = US#
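Rule schemas of this kind can be read as rewrite functions from simple tasks to action sequences. A toy sketch (the key labels and the 19:30 value are hypothetical; the up/down key reflects the slide's assumption that only those arrows are available):

```python
# Sketch of a TAG rule schema as a rewrite from a simple task to
# concrete key presses. Illustrative only, not Payne and Green's
# actual notation.

def set_value(value, record=False):
    """Schema 4: set a value with the up/down keys; tasks that also
    switch recording on add a press of "Rec" (schema 3)."""
    actions = [f'press "up/down" until Value = {value}']
    if record:
        actions.append('press "Rec"')
    return actions

# SetStart [Property = Start, Value = 19:30, Record = on]
print(set_value("19:30", record=True))
# ['press "up/down" until Value = 19:30', 'press "Rec"']
```

Because every simple task expands through the same schema, inconsistencies (a task that needed a different key sequence) would show up as extra, task-specific rules.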



    Architectures for Cognition



    Why Cognitive Architecture?

    • Computers architectures:

      • Specify components and their connections

      • Define functions and processes

    • Cognitive Architectures could be seen as the logical conclusion of the ‘human-brain-as-computer’ hypothesis



    Why do this?

    • Philosophy: Provide a unified understanding of the mind

    • Psychology: Account for experimental data

    • Education: Provide cognitive models for intelligent tutoring systems and other learning environments

    • Human Computer Interaction: Evaluate artifacts and help in their design

    • Computer Generated Forces: Provide cognitive agents to inhabit training environments and games

    • Neuroscience: Provide a framework for interpreting data from brain imaging



    General Requirements

    • Integration of cognition, perception, and action

    • Robust behavior in the face of error, the unexpected, and the unknown

    • Ability to run in real time

    • Ability to Learn

    • Prediction of human behavior and performance



    Architectures

    • Model Human Processor (MHP)

      • Card, Moran and Newell (1983)

    • ACT-R

      • Anderson (1993)

    • EPIC

      • Meyer and Kieras (1997)

    • SOAR

      • Laird, Rosenbloom and Newell (1987)



    Model Human Processor

    • Three interacting subsystems:

      • Perceptual

        • Auditory image store

        • Visual image store

      • Cognitive

        • Working memory

        • Long-term memory

      • Motor



    Parameters of MHP



    Average data for MHP

    • Long-term memory:?

    • Working memory: 3 – 7 chunks, 7s

    • Visual image store: 17 letters, 200ms

    • Auditory image store: 5 letters, 1500ms

    • Perceptual processor: 100ms

    • Cognitive processor: 70ms

    • Motor processor: 70ms
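A rough sketch of how such standard times yield a prediction: sum the cycle times of the processors on the assumed path through the system. The cycle times below are Card, Moran and Newell's nominal MHP values; a real analysis would decompose the task more carefully and use the published parameter ranges.

```python
# Back-of-envelope MHP prediction: time for a simple reaction
# (perceive a stimulus, decide, press a key) as the sum of the
# nominal processor cycle times.

MHP_MS = {"perceptual": 100, "cognitive": 70, "motor": 70}

def predict_ms(steps):
    """Predicted time (ms) for a sequence of processor cycles."""
    return sum(MHP_MS[s] for s in steps)

# perceive stimulus -> decide -> press key
print(predict_ms(["perceptual", "cognitive", "motor"]))  # 240
```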



    Conclusions

    • Simple description of cognition

    • Uses ‘standard times’ for prediction

    • Uses production rules for defining and combining tasks (with GOMS formalism)



    Adaptive Control of Thought, Rational (ACT-R)

    http://act.psy.cmu.edu



    Adaptive Control of Thought, Rational (ACT-R)

    • ACT-R symbolic aspect realised over subsymbolic mechanism

    • Symbolic aspect in two parts:

      • Production memory

      • Symbolic memory (declarative memory)

    • Theory of rational analysis



    Theory of Rational Analysis

    • Evidence-based assumptions about environment (probabilities)

    • Deriving optimal strategies (Bayesian)

    • Assuming that optimal strategies reflect human cognition (either what it actually does or what it probably ought to do)



    Notions of Memory

    • Procedural

      • Knowing how

      • Described in ACT by Production Rules

    • Declarative

      • Knowing that

      • Described in ACT by ‘chunks’

    • Goal Stack

      • A sort of ‘working memory’

      • Holds chunks (goals)

      • Top goal pushed (like GOMS)

      • Writeable



    Production Rules

    • Knowing how to do X

      • Production rule = set of conditions and an action

        IF it is raining

        And you wish to go out

        THEN pick up your umbrella



    (Very simple) ACT

    • Network of propositions

    • Production rules selected via pattern matching. Production rules coordinate retrieval of chunks from symbolic memory and link to environment.

    • If information in working memory matches production rule condition, then fire production rule


    ACT*

    [Diagram: the ACT* architecture. Declarative memory and procedural memory both connect to working memory: retrieval and storage on the declarative side, match and execution on the procedural side. Working memory links to the outside world through encoding and performance.]


    Knowledge Representation

    [Diagram: a declarative ‘Addition-Fact’ chunk (addend1 = six, addend2 = eight, sum = fourteen) retrieved while solving the column sum 16 + 18 = 34: the units column gives 4 with a carry of 1.]

    Goal buffer: add numbers in right-most column

    Visual buffer: 6, 8

    Retrieval buffer: 14



    Symbolic / Subsymbolic levels

    • Symbolic level

      • Information as chunks in declarative memory, and represented as propositions

      • Rules as productions in procedural memory

    • Subsymbolic level

      • Chunks given parameters which are used to determine the probability that the chunk is needed

      • Base-level activation (relevance)

      • Context activation (association strengths)



    Conflict resolution

    • Order production rules by preference

    • Select top rule in list

    • Preference defined by:

      • Probability that rule will lead to goal

      • Time associated with rule

      • Likely cost of reaching goal when using sequence involving this rule
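One concrete reading of these preferences is ACT-R's classic expected-utility rule, utility = P·G − C, where P is the probability the rule leads to the goal, G is the value of the goal, and C is the expected cost (time) of getting there. A sketch with made-up numbers:

```python
# ACT-R-style conflict resolution sketch: rank candidate production
# rules by expected utility P*G - C and fire the top one. The rule
# names and parameter values are illustrative assumptions.

def select_rule(candidates, goal_value):
    ranked = sorted(candidates,
                    key=lambda r: r["p"] * goal_value - r["cost"],
                    reverse=True)
    return ranked[0]["name"]

candidates = [
    {"name": "type-command", "p": 0.95, "cost": 2.0},  # fast, slightly risky
    {"name": "search-menus", "p": 0.99, "cost": 6.0},  # safe but slow
]
print(select_rule(candidates, goal_value=20))  # type-command
```

With a higher goal value (or a lower menu cost), the ordering can flip, which is how the subsymbolic parameters shape behaviour.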



    Example

    • Activity: Find target and then use mouse to select target:

      Hunt_Feature

      IF goal = find target with feature F

      AND there is object X on screen

      THEN move attention to object X

      Found_target

      IF goal = find target with feature F

      AND target with F in location L

      THEN move mouse to L and click



    Example

    • Model reaction time to target

      • Assume switch attention linearly increases with each new position

      • Assume probability of feature X in location y = 0.53

      • Assume switch attention = 185ms

    • Therefore, reaction time = 185 × 0.53 ≈ 98ms per position

    • Empirical data has RT of 103ms per position



    Example

    • Assume target in field of distractors

      • P = 0.42

    • Therefore, 185 × 0.42 ≈ 78ms per position

    • Empirical data = 80ms per position
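Both predictions reduce to a single multiplication: attention-switch time times the probability that the feature occupies a given position.

```python
# Reaction-time prediction from the two ACT-R examples above:
# switch time (185ms) weighted by the feature probability.

SWITCH_MS = 185

def predicted_rt_per_position(p_feature):
    return SWITCH_MS * p_feature

print(round(predicted_rt_per_position(0.53)))  # 98 (observed: 103)
print(round(predicted_rt_per_position(0.42)))  # 78 (observed: 80)
```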



    Learning

    • Symbolic level

      • Learning defined by adding new chunks and productions

    • Subsymbolic level

      • Adjustment of parameters based on experience



    Conclusions

    • ACT uses simple production system

    • ACT provides some quantitative prediction of performance

    • Rationality = optimal adaptation to environment



    Executive Process Interactive Control (EPIC)

    ftp://ftp.eecs.umich.edu/people/kieras



    Executive Process Interactive Control (EPIC)

    • Focus on multiple task performance

    • Cognitive Processor runs production rules and interacts with perceptual and motor processors



    EPIC parameters

    • FIXED

      • Connections and mechanisms

      • Time parameters

      • Feature sets for motor processors

      • Task-specific production rules and perceptual encoding types

    • FREE

      • Production rules for tasks

      • Unique perceptual and motor processors

      • Task instance set

      • Simulated task environment


    EPIC

    [Diagram: the EPIC architecture. Long-term memory and production memory feed a production-rule interpreter coupled to working memory (the cognitive processor). Auditory and visual perceptual processors take input from the display and the simulated task environment; speech, manual, and tactile processors carry responses back to it.]



    Production Memory

    • Perceptual processors controlled by production rules

    • Production Rules held in Production Memory

    • Production Rule Interpreter applies rules to perceptual processes



    Working Memory

    • Limited capacity (items decay after about 4s); holds the items used by current production rules

    • Cognitive processor updates every 50ms

    • On update, perceptual input, item from production memory, and next action held in working memory



    Resolving Conflict

    • Production rules applied to executive tasks to handle resource conflict and scheduling

    • Conflict dealt with in production rule specification

      • Lockout

      • Interleaving

      • Strategic response deferment


    Example

    [Diagram: dual-task timeline. Task one runs stimulus one through perceptual and cognitive processes (response selection, memory process) to response one. The executive process enables task 1 + task 2, moves the eyes to stimulus two, and makes task two wait for task one to complete before granting permission; task two then runs the same pipeline to response two.]



    Conclusions

    • Modular structure supports parallelism

    • EPIC does not have a goal stack and does not assume sequential firing of goals

    • Goals can be handled in parallel (provided there is no resource conflict)

    • Does not support learning



    States, Operators, And Reasoning (SOAR)

    http://www.isi.edu/soar/soar.html



    States, Operators, And Reasoning (SOAR)

    • Successor to the General Problem Solver (Newell and Simon, 1960)

    • SOAR seeks to apply operators to states within a problem space to achieve a goal.

    • SOAR assumes that actor uses all available knowledge in problem-solving



    Soar as a Unified Theory of Cognition

    • Intelligence = problem solving + learning

    • Cognition seen as search in problem spaces

    • All knowledge is encoded as productions

       → a single type of knowledge

    • All learning is done by chunking

       → a single type of learning



    Young, R.M., Ritter, F., Jones, G.  1998 "Online Psychological Soar Tutorial" available at: http://www.psychology.nottingham.ac.uk/staff/Frank.Ritter/pst/pst-tutorial.html



    SOAR Activity

    • Operators:  Transform a state via some action

    • State:  A representation of possible stages of progress in the problem

    • Problem space:  States and operators that can be used to achieve a goal.

    • Goal: Some desired situation.



    SOAR Activity

    • Problem solving = applying an Operator to a State in order to move through a Problem Space to reach a Goal. 

    • Impasse =   Where an Operator cannot be applied to a State, and so it is not possible to move forward in the Problem Space. This becomes a new problem to be solved.

    • Soar can learn by storing solutions to past problems as chunks and applying them when it encounters the same problem again


    SOAR Architecture

    [Diagram: production memory (pattern → action rules) with a chunking mechanism above working memory (objects, preferences); a working-memory manager and a decision procedure operate on the conflict stack.]



    Explanation

    • Working Memory

      • Data for current activity, organized into objects

    • Production Memory

      • Contains production rules

    • Chunking mechanism

      • Collapses successful sequences of operators into chunks for re-use



    3 levels in soar

    • Symbolic – the programming level

      • Rules programmed into Soar that match circumstances and perform specific actions

    • Problem space – states & goals

      • The set of goals, states, operators, and context.

    • Knowledge – embodied in the rules

      • The knowledge of how to act on the problem/world, how to choose between different operators, and any learned chunks from previous problem solving



    How does it work?

    • A problem is encoded as a current state and a desired state (goal)

    • Operators are applied to move from one state to another

    • There is success if the desired state matches the current state

    • Operators are proposed by productions, with preferences biasing choices in specific circumstances

    • Productions fire in parallel



    Impasses

    • If no operator is proposed, or if there is a tie between operators, or if Soar does not know what to do with an operator, there is an impasse

    • When there are impasses, Soar sets a new goal (resolve the impasse) and creates a new state

    • Impasses may be stacked

    • When one impasse is solved, Soar pops up to the previous goal



    Learning

    • Learning occurs by chunking the conditions and the actions of the impasses that have been resolved

    • Chunks can be used immediately in further problem-solving behaviour
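A toy sketch of the impasse-then-chunk loop: when no production proposes an operator for a state, Soar treats the choice as a subgoal, solves it, and stores the result so the same impasse never recurs. The code is entirely illustrative, not Soar's actual machinery.

```python
# Impasse-and-chunking sketch: operator choice falls back to
# subproblem solving, and the solution is cached as a chunk.

chunks = {}  # learned state -> operator mappings

def solve_subproblem(state):
    # stand-in for deliberate search in a subsidiary problem space
    return f"operator-for-{state}"

def choose_operator(state, rules):
    if state in rules:   # a production proposes an operator directly
        return rules[state]
    if state in chunks:  # a chunk learned from an earlier impasse
        return chunks[state]
    chunks[state] = solve_subproblem(state)  # impasse -> subgoal -> chunk
    return chunks[state]

rules = {"s0": "op-a"}
choose_operator("s1", rules)         # impasse: solved, then chunked
print(choose_operator("s1", rules))  # second time: retrieved directly
```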



    The Switchyard video



    Conclusions

    • It may be too "unified"

      • Single learning mechanism

      • Single knowledge representation

      • Uniform problem state

    • It does not take neuropsychological evidence into account (cf. ACT-R)

    • There may be non-symbolic intelligence, e.g. neural nets etc not abstractable to the symbolic level



    Comparison of Architectures



    The Role of Models in Design



    User Models in Design

    • Benchmarking

    • Human Virtual Machines

    • Evaluation of concepts

    • Comparison of concepts

    • Analytical prototyping



    Benchmarking

    • What times can users expect to take to perform a task?

      • Training criteria

      • Evaluation criteria (under ISO9241)

      • Product comparison



    Human Virtual Machine

    • How might the user perform?

      • Make assumptions explicit

      • Contrast views



    Evaluation of Concepts

    • Which design could lead to better performance?

      • Compare concepts using models prior to building prototype

      • Use performance of existing product as benchmark



    Reliability of Models

    • Agreement of predictions with observations

    • Agreement of predictions by different analysts

    • Agreement of model with theory



    Comparison with Theory

    • Approximation of human information processing

    • Assumes linear, error-free performance

    • Assumes strict following of ‘correct’ procedure

    • Assumes the ‘correct’ procedure is the only way to do the task

    • Assumes actions can be timed



    KLM Validity

    Predicted values lie

    within 20% of

    observed values


    Comparison of KLM predicted with times from user trials

    [Chart: total task time (s) against trial number (1–7) for two interfaces, comparing predicted (P) with observed means:]

    CUI: P = 15.84s, mean = 15.37s, error = 2.9%

    GUI: P = 11.05s, mean = 8.64s, error = 22%
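The error figures follow from comparing predicted and observed times relative to the prediction:

```python
# KLM prediction error, taken relative to the predicted time:
# |P - observed mean| / P.

def prediction_error(predicted, observed_mean):
    return abs(predicted - observed_mean) / predicted

print(round(100 * prediction_error(15.84, 15.37), 1))  # CUI: approx 3%
print(round(100 * prediction_error(11.05, 8.64), 1))   # GUI: approx 22%
```

The CUI prediction comfortably meets the "within 20% of observed" validity criterion; the GUI prediction sits right at its edge.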



    Inter / Intra-rater Reliability

    • Inter-rater:

      • Correlation of several analysts

      • = 0.754

    • Intra-rater:

      • Correlation for same analysts on several occasions

      • =0.916

    • Validity:

      • correlation with actual performance

      • = 0.769

    Stanton and Young, 1992



    How compare single data points?

    • Models typically produce a single prediction

    • How can one value be compared against a set of data?

    • How can a null hypothesis be proved?


    Liao and Milgram (1991)

    [Diagram: decision thresholds for a derived value D on a number line around the actual mean A: A−D−α·sd, A−D, A−D+β·sd, A, A+D−β·sd, A+D, A+D+α·sd]



    Defining terms

    • A = Actual values, with observed standard deviation (sd)

    • D = Derived values

    • α = 5% (P < 0.05 to reduce Type I error)

    • β = 20% (P < 0.2 for Type II error)



    Acceptance Criteria

    Accept Ho if: A − D + β·sd < D < A + D − β·sd

    Reject Ho if: D < A − D − α·sd

    Reject Ho if: D > A + D + α·sd
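The criteria can be sketched as a three-way test. The bound structure is reconstructed from the slide (D doubles as the derived value and as the tolerated difference band), so treat this as an approximation of Liao and Milgram's procedure; `alpha_w` and `beta_w` stand in for the weights applied to the observed standard deviation.

```python
# Three-way acceptance test for comparing one model prediction
# against empirical data (reconstructed sketch, not the exact method).

def acceptance_test(derived, actual, sd, d, alpha_w, beta_w):
    """Return 'accept', 'reject', or 'inconclusive' for Ho: model fits."""
    if actual - d + beta_w * sd < derived < actual + d - beta_w * sd:
        return "accept"
    if derived < actual - d - alpha_w * sd or derived > actual + d + alpha_w * sd:
        return "reject"
    return "inconclusive"

# A KLM-style check: predicted 15.8s against an observed mean of 15.4s
print(acceptance_test(15.8, actual=15.4, sd=1.0, d=1.5,
                      alpha_w=0.05, beta_w=0.2))  # accept
```

The middle band between the accept and reject thresholds gives the "inconclusive" outcome that a plain point comparison cannot express.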



    Analytical Prototyping

    • Functional analysis

      • Define features and functions

      • Development of design concepts, e.g., sketches and storyboards

    • Scenario-based analysis

      • How people pursue defined goals

      • State-based descriptions

    • Structural analysis

      • Predictive evaluation

      • Testing to destruction



    Analytical Prototyping

    • Functional analysis

    • Scenario-based analysis

    • Structural analysis



    Rewritable Routines

    • Mental models

      • Imprecise, incomplete, inconsistent

    • Partial representations of product and procedure for achieving subgoal

    • Knowledge recruited in response to system image


    Simple Architecture

    [Diagram: rewritable routines take the current state, the goal state, and the set of possible and relevant states, and select an action to change the machine state, producing the next state.]



    Global Prototypical Routines

    • Stereotyped Stimulus-Response compatibilities

    • Generalisable product knowledge



    State-specific Routines

    • Interpretation of system image

      • Feature evolution

    • Expectation of procedural steps

    • Situated / Opportunistic planning



    Describing Interaction

    • State-space diagrams

    • Indication of system image

    • Indication of user action

    • Prediction of performance


    State-space Diagram

    [Diagram: state 0 of the player, waiting for: raise lid, play mode, enter, skip forward, skip back, play, stop, off. The transition ‘Press Play’ (time: 200ms, error: 0.0004) leads to state 1.]

    • State number

    • System image

    • Waiting for…

    • Transitions



    Defining Parameters


    Developing Models

    [Diagram: state-transition network for the task. Start (0ms) leads to ‘recall plan’ (1380ms, P = 0.997) or ‘wrong plan’ (1380ms, P = 0.003), with a further P = 0.74 / P = 0.26 branch. Subsequent steps include press Play (200ms), cycle through menu (800ms), press Playmode (200ms), press Enter (0ms), switch off (300ms), and press other key (200ms); each keying step succeeds with P = 0.9996 and fails with P = 0.0004.]
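A state-transition model of this kind turns into a time prediction as a probability-weighted sum over paths. The two-path structure below is a simplified, hypothetical reading of the diagram, reusing its step times and branch probabilities:

```python
# Expected task time from a state-transition model:
# sum over paths of P(path) * total time along that path.
# The path structure here is an illustrative simplification.

def expected_time(paths):
    """paths: list of (probability, [step times in ms])."""
    return sum(p * sum(times) for p, times in paths)

paths = [
    (0.997, [0, 1380, 200]),       # recall correct plan, press Play
    (0.003, [0, 1380, 800, 200]),  # wrong plan, cycle menu, recover
]
print(round(expected_time(paths)))  # 1582
```

Because the error branch is rare, the expected time sits only a few milliseconds above the error-free path, which is why KLM-style error-free models often predict well.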



    Results



    What is the point?

    • Are these models useful to designers?

    • Are these models useful to theorists?



    Task Models - problems

    • Task models take time to develop

      • They may not have high inter-rater reliability

      • They cannot deal easily with parallel tasks

      • They ignore social factors



    Task Models - benefits

    • Models are abstractions – you always leave something out

    • The insight gained from the process of creating a task model might outweigh the problems

    • Task models highlight task sequences and can be used to define metrics



    Task Models for Theorists

    • Task models are engineering approximations

      • Do they actually describe how human information processing works?

        • Do they need to?

      • Do they describe cognitive operations, or just actions?



    Some Background Reading

    Dix, A et al., 1998, Human-Computer Interaction (chapters 6 and 7) London: Prentice Hall

    Anderson, J.R., 1983, The Architecture of Cognition, Harvard, MA: Harvard University Press

    Card, S.K. et al., 1983, The Psychology of Human-Computer Interaction, Hillsdale, NJ: LEA

    Carroll, J., 2003, HCI Models, Theories and Frameworks: towards a multidisciplinary science (chapters 1, 3, 4, 5), San Francisco, CA: Morgan Kaufmann

