
Models of Human Performance

CSCI 4800

Spring 2006

Kraemer

Objectives
  • Introduce theory-based models for predicting human performance
  • Introduce competence-based models for assessing cognitive activity
  • Relate modelling to interactive systems design and evaluation
Describing Problem Solving
  • Initial State
  • Goal State
  • All possible intervening states
    • Problem Space
  • Path Constraints
  • State Action Tree
  • Means-ends analysis
Problem Solving
  • A problem is something that doesn’t solve easily
  • A problem doesn’t solve easily because:
    • you don’t have the necessary knowledge or,
    • you have misrepresented part of the problem
  • If at first you don’t succeed, try something else
  • Tackle one part of the problem and other parts may fall into place
Conclusion
  • More than one solution
  • Solution limited by boundary conditions
  • Representation affects strategy
  • Active involvement and testing
Functional Fixedness
  • Strategy developed in one version of the problem
  • Strategy might be inefficient

X ) XXXX

  • Convert numerals or just ‘see’ 4
Data-driven perception

Activation of neural structures of sensory system by pattern of stimulation from environment

Theory-driven perception

Perception driven by memories and expectations about incoming information.

KEYPOINT

PERCEPTION involves a set of active processes that impose STRUCTURE, STABILITY, and MEANING on the world.

Visual Illusions

http://www.genesishci.com/illusions2.htm

Rabbit or duck?

Old Woman or Young girl?

Interpretation

Knowledge of what you are “looking at” can aid in interpretation

JA CKAN DJI

LLW ENTU PTH

EHI LLT OFE

TCH APA ILO

FWA TER

Organisation of information is also useful

Story Grammars
  • Analogy with sentence grammars
    • Building blocks and rules for combining
  • Break story into propositions

“Margie was holding tightly to the string of her beautiful new balloon. Suddenly a gust of wind caught it, and carried it into a tree. It hit a branch, and burst. Margie cried and cried.”

Story Grammar

[Tree diagram: a Story consists of a Setting and an Episode; the Episode consists of Events [1]–[6] (including a Change of state) and a Reaction; the Reaction divides into an Internal response (sadness) and an Overt response.]

Inferences
  • Comprehension typically requires our active involvement in order to supply information that is not explicit in the text

1. Mary heard the ice-cream van coming

2. She remembered her pocket money

3. She rushed into the house.

Inference and Recall
  • Thorndyke (1976): recall of sentences from ‘Mary’ story
    • 85% correct sentence
    • 58% correct inference (sentence not presented)
    • 6% incorrect inference
Mental Models
  • Van Dijk and Kintsch (1983)
    • Text processed to extract propositions, which are held in working memory;
    • When sufficient propositions in WM, then linking performed;
    • Relevance of propositions to linking proportional to recall;
    • Linking reveals ‘gist’
Semantic Networks

[Network diagram (Collins & Quillian, 1969): ANIMAL (has skin, can move, eats, breathes) links down to BIRD (can fly, has wings, has feathers) and FISH (has fins, can swim, has gills); BIRD links down to CANARY (is yellow, can sing).]

Levels and Reaction Time

[Graph (Collins & Quillian, 1969): mean reaction time (0.9–1.5 s) plotted against level of sentence (0–2) for Property and Category statements — e.g., “A canary can sing” (level 0), “A canary can fly” / “A canary is a bird” (level 1), “A canary has skin” / “A canary is an animal” (level 2) — plus False statements (“A canary has gills”, “A canary is a fish”).]

Canaries
  • Different times to verify the statements:
    • A canary is a bird
    • A canary can fly
    • A canary can sing
  • Time proportional to movement through network
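The Collins & Quillian idea above can be sketched in code: verification time grows with the number of ISA links traversed before the property is found. The dictionary representation below is a hypothetical encoding of the network, not from the lecture.

```python
# Minimal sketch of a Collins & Quillian semantic network (assumed encoding).
links = {
    "canary": {"isa": "bird", "props": {"is yellow", "can sing"}},
    "bird":   {"isa": "animal", "props": {"can fly", "has wings", "has feathers"}},
    "animal": {"isa": None, "props": {"has skin", "can move", "eats", "breathes"}},
}

def levels_to_verify(concept, prop):
    """Count ISA links followed before the property is found (None if absent)."""
    hops = 0
    while concept is not None:
        if prop in links[concept]["props"]:
            return hops
        concept = links[concept]["isa"]
        hops += 1
    return None

print(levels_to_verify("canary", "can sing"))  # 0
print(levels_to_verify("canary", "can fly"))   # 1
print(levels_to_verify("canary", "has skin"))  # 2
```

Verification time would then be modelled as proportional to the number of hops, matching the reaction-time graph.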
Scripts, Schema and Frames
  • Schema = chunks of knowledge
    • Slots for information: fixed, default, optional
  • Scripts = action sequences
    • Generalised event schema (Nelson, 1986)
  • Frames = knowledge about the properties of things
Mental Models
  • Partial
  • Procedures, Functions or System?
  • Memory or Reconstruction?
Concepts
  • How do you know a chair is a chair?

A chair has four legs…does it? A chair has a seat…does it?

Prototypes, Typical Features, and Exemplars
  • Prototype
      • ROSCH (1973): people do not use feature sets, but imagine a PROTOTYPE for an object
  • Typical Features
      • ROSCH & MERVIS (1975): people use a list of features, weighted in terms of CUE VALIDITY
  • Exemplars
      • SMITH & MEDIN (1981): people use an EXAMPLE to imagine an object
Representing Concepts
  • BARSALOU (1983)
    • TAXONOMIC
      • Categories that are well known and can be recalled consistently and reliably
        • E.g., Fruit, Furniture, Animals
      • Used to generate overall representation of the world
    • AD HOC
      • Categories that are invented for specific purpose
        • E.g., How to make friends, Moving house
      • Used for goal-directed activity within specific event frames
Long Term Memory
  • Procedural
    • Knowing how
  • Declarative
    • Knowing that
  • Episodic vs. Semantic
    • Personal events
    • Language and knowledge of world
Working Memory
  • Limited Capacity
      • 7 ± 2 items (Miller, 1956)
      • 4 ± 2 chunks (Broadbent, 1975)
      • Modality dependent capacity
  • Strategies for coping with limitation
      • Chunking
      • Interference
      • Activation of Long-term memory
Baddeley’s (1986) Model of Working Memory

[Diagram: a Central Executive controls two slave systems — a visuo-spatial system (Visual cache and Inner scribe, fed by visual word presentation) and a phonological loop (Phonological store fed by auditory word presentation and refreshed by the Articulatory control process).]

Slave Systems
  • Articulatory loop
    • Memory Activation
    • Rehearsal capacity
      • Word length effect and Rehearsal speed
  • Visual cache
    • Visual patterns
    • Complexity of pattern, number of elements etc
  • Inner scribe
    • Sequences of movement
    • Complexity of movement
Typing
  • Eye-hand span related to expertise
      • Expert = 9, novice = 1
  • Inter-key interval
      • Expert = 100ms
  • Strategy
      • Hunt & Peck vs. Touch typing
  • Keystroke
      • Novice = highly variable keystroke time
      • Novice = very slow on ‘unusual’ letters, e.g., X or Z
Salthouse (1986)
  • Input
    • Text converted to chunks
  • Parsing
    • Chunks decomposed to strings
  • Translation
    • Strings into characters and linked to movements
  • Execution
    • Key pressed
Rumelhart & Norman (1982)
  • Perceptual processes
    • Perceive text, generate word schema
  • Parsing
    • Compute codes for each letter
  • Keypress schemata
    • Activate schema for letter-keypress
  • Response activation
    • Press defined key through activation of appropriate hand / finger
Schematic of Rumelhart and Norman’s connectionist model of typing

[Diagram: a word node (“jazz”), activated from a visual or auditory stimulus, activates keypress nodes that break the word into typed letters (j, a, z, z); keypress nodes excite and inhibit one another and feed a response system that assigns each letter to a finger (thumb, index, middle, ring, little) of the left or right hand.]

Automaticity
  • Norman and Shallice (1980)
      • Fully automatic processing controlled by SCHEMATA
      • Partially automatic processing controlled by either Contention Scheduling or the Supervisory Attentional System (SAS)
Supervisory Attentional System Model

[Diagram: the Perceptual System activates control schemata through a trigger database; Contention Scheduling selects among competing schemata to drive the Effector System; the Supervisory Attentional System biases this selection from above.]

Contention Scheduling
  • Gear changing when driving involves many routine activities but is performed ‘automatically’ – without conscious awareness
  • When routines clash, relative importance is used to determine which to perform – Contention Scheduling
      • e.g., right foot on brake or clutch
SAS activation
  • Driving on roundabouts in France
    • Inhibit ‘look right’; Activate ‘look left’
    • SAS to over-ride habitual actions
  • SAS active when:
      • Danger, Choice of response, Novelty etc.
Attentional Slips and Lapses
  • Habitual actions become automatic
  • SAS inhibits habit
  • Perseveration
      • When SAS does not inhibit and habit proceeds
  • Distraction
      • Irrelevant objects attract attention
      • Utilisation behaviour: patients with frontal lobe damage will reach for object close to hand even when told not to
Performance Operating Characteristics
  • Resource-dependent trade-off between performance levels on two tasks
  • Task A and Task B performed several times, with instructions to allocate more effort to one task or the other
Task Difficulty
  • Data limited processes
      • Performance related to quality of data and will not improve with more resource
  • Resource limited processes
      • Performance related to amount of resource invested in task and will improve with more resource
slide42
Data limited

Resource limited

POC

P

P

Cost

M

Cost

Task

A

Task

A

M

Task B

Task B

Why Model Performance?
  • Building models can help develop theory
    • Models make assumptions explicit
    • Models force explanation
  • Surrogate user:
    • Define ‘benchmarks’
    • Evaluate conceptual designs
    • Make design assumptions explicit
  • Rationale for design decisions
Why Model Performance?
  • Human-computer interaction as Applied Science
    • Theory from cognitive sciences used as basis for design
    • General principles of perceptual, motor and cognitive activity
    • Development and testing of theory through models
Types of Model in HCI

[Diagram: taxonomy of model types — Whitefield, 1987]

Task Models
  • Researcher’s Model of User, in terms of tasks
  • Describe typical activities
  • Reduce activities to generic sequences
  • Provide basis for design
Pros and Cons of Modelling
  • PROS
    • Consistent description through (semi) formal representations
    • Set of ‘typical’ examples
    • Allows prediction / description of performance
  • CONS
    • Selective (some things don’t fit into models)
    • Assumption of invariability
    • Misses creative, flexible, non-standard activity
Generic Model Process?
  • Define system: {goals, activity, tasks, entities, parameters}
  • Abstract to semantic level
  • Define syntax / representation
  • Define interaction
  • Check for consistency and completeness
  • Predict / describe performance
  • Evaluate results
  • Modify model
Device Models

  • Buxton’s 3-state device model

[Diagram: three states — State 0, State 1, State 2.]

Application

[Diagram: “Pen on” moves from State 0 (out of range) to State 1 (select); “Pen off” returns to State 0; “Button down” moves from State 1 to State 2 (drag); “Button up” returns to State 1.]
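Buxton’s 3-state model can be sketched as a transition table; the state numbers and event labels below follow the application diagram (a pen/stylus with a button), and the table encoding is an assumption for illustration.

```python
# Sketch of Buxton's 3-state device model as a (state, event) -> state table.
TRANSITIONS = {
    (0, "pen on"):      1,  # out of range -> select/tracking
    (1, "pen off"):     0,
    (1, "button down"): 2,  # select -> drag
    (2, "button up"):   1,
}

def run(events, state=0):
    for event in events:
        state = TRANSITIONS.get((state, event), state)  # ignore impossible events
    return state

print(run(["pen on", "button down"]))                          # 2 (dragging)
print(run(["pen on", "button down", "button up", "pen off"]))  # 0 (out of range)
```

Comparing which transitions a device can express (a mouse never leaves states 1–2; a touchscreen has no tracking state) is the point of the model.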

Conclusions
  • Models abstract aspects of interaction
    • User, task, system
  • Models play a variety of roles in design
Hierarchical Task Analysis
  • Activity assumed to consist of TASKS performed in pursuit of GOALS
  • Goals can be broken into SUBGOALS, which can be broken into tasks
  • Hierarchy (Tree) description
The “Analysis” comes from plans
  • PLANS = conditions for combining tasks
  • Fixed Sequence
    • P0: 1 > 2 > exit
  • Contingent Fixed Sequence
    • P1: 1 > when state X achieved > 2 > exit
    • P1.1: 1.1 > 1.2 > wait for X time > 1.3 > exit
  • Decision
    • P2: 1 > 2 > If condition X then 3, elseif condition Y then 4 > 5 > exit
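The plan types above can be sketched as small functions that control the order in which subtasks run; this representation is an assumption for illustration, not the lecture’s notation.

```python
# Sketch of HTA plans as functions ordering subtask execution (assumed encoding).
log = []
task = lambda name: (lambda: log.append(name))

def fixed_sequence(*tasks):                   # P0: 1 > 2 > exit
    for t in tasks:
        t()

def decision(t1, t2, t3, t4, t5, condition):  # P2: 1 > 2 > if X then 3, elif Y then 4 > 5 > exit
    t1(); t2()
    if condition == "X":
        t3()
    elif condition == "Y":
        t4()
    t5()

fixed_sequence(task("1"), task("2"))
print(log)  # ['1', '2']
log.clear()
decision(task("1"), task("2"), task("3"), task("4"), task("5"), condition="Y")
print(log)  # ['1', '2', '4', '5']
```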
Reporting
  • HTA can be constructed using Post-it notes on a large space (this makes it easy to edit and also encourages participation)
  • HTA can be difficult to present in a succinct printed form (it might be useful to take a photograph of the Post-it notes)
  • Typically a tabular format is used
Original Design

  • Menu driven
  • Menus accessed by first letter of command
  • Menus arranged in hierarchy

Problems with original design
  • Lack of consistency
      • D = DOS commands; Delete; Data file; Date
  • Hidden hierarchy
      • Only ‘experts’ could use
  • Inappropriate defaults
      • Setting up a scan required ‘correction’ of default settings three or four times
Initial design activity
  • Observation of non-technology work
      • Cytogeneticists inspecting chromosomes
  • Developed model of task
      • Hierarchical task analysis
  • Developed design principles, e.g.,
      • Cytogeneticists as ‘picture people’
      • Task flow
      • Task mapping
Task Model

  • Work flows between specific activities

[Flow diagram linking Administration, Patient details, Reporting, Cell sample, Set up, Analysis, and Microscope.]

First “prototype”

  • Layout related to task model
  • ‘Sketch’ very simple
  • Annotations show modifications

Second prototype

  • Refined layout
  • ‘Prototype’ using HyperCard
  • Initial user trials compared this with a mock-up of the original design

Final Product

  • Picture taken from company brochure
  • Initial concepts retained
  • Further modifications possible

Predicting Performance Time
  • Time and error are ‘standard’ measures of human performance
  • Predict transaction time for comparative evaluation
  • Approximations of human performance
Unit Times
  • From task model, define sequence of tasks to achieve a specific goal
  • For each task, define ‘average time’
Quick Exercise
  • Draw two parallel lines about 4cm apart and about 10cm long
  • Draw, as quickly as possible, a zig-zag line for 5 seconds
  • Count the number of lines and the number of times you have crossed the parallel lines
Predicted result
  • About 70 lines
  • About 20 cross-overs
Why this prediction?
  • Movement speed limited by biomechanical constraints
    • Motor subsystem changes direction @ 70ms
    • So: 5000 / 70 = 71 oscillations
  • Cognitive / Perceptual system cycles:
    • Perceptual @ 70ms
    • Cognitive @ 100ms
    • Correction takes 70+70+100 = 240ms
    • 5000/240 = 21
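The two predictions can be recomputed directly from the unit times given above:

```python
# Recompute the zig-zag exercise predictions from the slide's unit times.
duration_ms = 5000
lines = round(duration_ms / 70)        # motor system changes direction every 70 ms
correction_ms = 70 + 70 + 100          # perceptual + motor + cognitive cycle
crossovers = round(duration_ms / correction_ms)
print(lines, crossovers)  # 71 21
```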
Fitts’ Law
  • Paul Fitts 1954
  • Information-theoretic account of simple movements
  • Define the number of ‘bits’ processed in performing a given task
Fitts’ Law

  • A = 62, W = 15
  • A = 112, W = 7
  • A = 112, W = 21

Movement Time = a + b log2(2A/W)

[Plot: hits (0–60) against log2(2A/W) for the three conditions, with a fitted line giving a = 10, b = 27.5.]

Alternate Versions

MT = a + b log2(2A/W)

MT = b log2(A/W + 0.5)

MT = a + b log2(A/W + 1)

  • a and b are “constants”
  • Data derived from plot
  • Data as predictors?
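The three formulations can be compared numerically. The values a = 10 and b = 27.5 (ms) are the illustrative constants read off the plot above, not calibrated data, and applying a and b fitted for one formulation to the others is only for comparison.

```python
import math

# Sketch comparing the three Fitts' law formulations above (a, b in ms).
a, b = 10.0, 27.5

def mt_fitts(A, W):    return a + b * math.log2(2 * A / W)
def mt_welford(A, W):  return b * math.log2(A / W + 0.5)
def mt_shannon(A, W):  return a + b * math.log2(A / W + 1)

for A, W in [(62, 15), (112, 7), (112, 21)]:
    print(f"A={A:3} W={W:2}  Fitts={mt_fitts(A, W):6.1f}  "
          f"Welford={mt_welford(A, W):6.1f}  Shannon={mt_shannon(A, W):6.1f}")
```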

Potential Problems
  • Data-fitter rather than ‘law’
  • ‘Generic value’: a+b = 100
  • Variable predictive power for devices?
    • From ‘mouse data’ we get:

(assume A = 5 and W = 10) log2(2A/W) ≈ 0.3

339ms, 150.5ms and 34.9ms (!!)

Hick – Hyman Law
  • William Hick 1952
  • Selection time, from a set of items, is proportional to the logarithm of the number of items

T = k log2 (n+1), where k is a constant (intercept + slope)

  • Each additional bit of information adds approximately 150ms to T
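The law is a one-line computation; k = 0.15 s per bit is assumed here as the slide’s approximate value.

```python
import math

# Sketch of the Hick-Hyman law, T = k * log2(n + 1), with assumed k = 0.15 s/bit.
def choice_time(n, k=0.15):
    return k * math.log2(n + 1)

for n in (2, 4, 8):
    print(f"{n} alternatives: {choice_time(n):.2f} s")
```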
Example of Hick-Hyman Law

[Graph (Landauer and Nachbar, 1985): search time (0–4 s) plotted against number of alternatives (2–12) for words and for numbers.]

Keystroke Level Models
  • Developed from 1950s ergonomics
  • Human information processor as linear executor of specified tasks
  • Unit-tasks have defined times
  • Prediction = summing of times for sequence of unit-tasks
Building a KLM
  • Develop task model
  • Define task sequence
  • Assign unit-times to tasks
  • Sum times
Example: cut and paste

Task Model: Select line – Cut – Select insertion point – Paste

Task One: select line

  • Move cursor to start of line
  • Press (hold) button
  • Drag cursor to end of line
  • Release button

Times for Movement
  • H: homing, e.g., hand from keyboard to mouse
    • Range: 214ms – 400ms
    • Average: 320ms
  • P: pointing, e.g., move cursor using mouse
    • Range: defined by Fitts’ Law
    • Average: 1100ms
  • B: button pressing, e.g., hitting key on keyboard
    • Range: 80ms – 700ms
    • Average: 200ms
Times for Cognition / Perception
  • M: mental operation
    • Range: 990ms – 1760ms
    • Average: 1350ms
  • A: switch attention between parts of display
    • Average: 320ms
  • R: recognition of items
    • Range: 314ms – 1800ms
    • Average: 340ms
  • Perceive change:
    • Range: 50 – 300ms
    • Average: 100ms
Rules for Summing Times
  • How to handle multiple Mental units:
    • M before Ks in new argument strings
    • M at start of ‘cognitive unit’
    • M before Ps that select commands
    • Delete M if K redundant terminator
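A KLM estimate is just a sum over the operator sequence. The sketch below uses the average unit times from the slides; the operator sequence for “select line” is one plausible reading of the task model, not the lecture’s own worked answer.

```python
# KLM sketch: sum average unit times (seconds, from the slides) over operators.
UNIT = {"M": 1.35, "H": 0.32, "P": 1.10, "B": 0.20}

select_line = ["M",  # decide what to select
               "H",  # hand from keyboard to mouse
               "P",  # point at start of line
               "B",  # press (hold) button
               "P",  # drag to end of line
               "B"]  # release button

def klm_time(ops):
    return sum(UNIT[o] for o in ops)

print(f"{klm_time(select_line):.2f} s")  # 4.27 s
```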
Alternative

[Task sequence: Pe – M – H – P – P′ – P]
  • What if we use ‘accelerated scrolling’ on the cursor keys?
    • Press and hold the cursor key and read the scrolling numbers
    • Release key at or near number
    • Select correct number
Critical Path Models
  • Used in project management
  • Map dependencies between tasks in a project
    • Task X is dependent on task Y if it is necessary to wait until the end of task Y before task X can commence
Procedure
  • Construct task model, taking into account dependencies
  • Assign times to tasks
  • Calculate critical path and transaction time
    • Run forward pass
    • Run backward pass
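The forward/backward-pass procedure can be sketched over a small task graph. The graph below (P waiting on both H and R, so R can float) is an assumed example with the slides’ unit times, not the lecture’s own network.

```python
# Critical-path sketch: forward pass for earliest finishes, backward pass for slack.
times = {"R": 0.34, "M": 1.35, "H": 0.32, "P": 0.20}
order = ["R", "M", "H", "P"]                 # a topological order
deps  = {"R": [], "M": [], "H": ["M"], "P": ["H", "R"]}

# Forward pass: earliest finish times.
finish = {}
for t in order:
    start = max((finish[d] for d in deps[t]), default=0.0)
    finish[t] = start + times[t]
transaction_time = max(finish.values())
print(round(transaction_time, 2))  # 1.87

# Backward pass: latest finish times give each task's float (slack).
latest = {t: transaction_time for t in order}
for t in reversed(order):
    for d in deps[t]:
        latest[d] = min(latest[d], latest[t] - times[t])
slack = {t: round(latest[t] - finish[t], 2) for t in order}
print(slack["R"])  # R is off the critical path, so it can 'float'
```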
Example

[Task network over six steps: R → M → H → P → P′ → P, with unit times R = 0.34s, M = 1.35s, H = 0.32s, P = 0.2s.]

Comparison
  • ‘Summing of times’ result:
    • 2.61s
  • ‘Critical path’ result:
    • 2.47s
  • R allowed to ‘float’
Other time-based models

  • Task-network models
    • MicroSAINT
    • Unit-times and probability of transition

[Network diagram: Prompt (50ms) → Speak word (300 ± 9ms) → System response (1000 ± 30ms); the cycle repeats with probability p and exits with probability 1 − p.]

Performance vs. Competence
  • Performance Models
    • Make statements and predictions about the time, effort or likelihood of error when performing specific tasks;
  • Competence Models
    • Make statements about what a given user knows and how this knowledge might be organised.
Sequence vs. Process vs. Grammar
  • Sequence Models
    • Define activity simply in terms of sequences of operations that can be quantified
  • Process Models
    • Simple model of mental activity but define the steps needed to perform tasks
  • Grammatical Models
    • Model required knowledge in terms of ‘sentences’
Process Models
  • Production systems
  • GOMS
Production Systems
  • Rules = (Procedural) Knowledge
  • Working memory = state of the world
  • Control strategies = way of applying knowledge
Production Systems

Architecture of a production system:

[Diagram: an Interpreter matches rules in the Rule base against the contents of Working Memory and applies the actions of matching rules back to Working Memory.]

The Problem of Control
  • Rules are useless without a useful way to apply them
  • Need a consistent, reliable, useful way to control the way rules are applied
  • Different architectures / systems use different control strategies to produce different results
forward chaining

If A and B then not C

If A then B

If A then B

If not C then GOAL

C

A

If A and B then not C

B

B

A

Forward Chaining

C

A

If not C then GOAL
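The forward-chaining example can be run with a tiny interpreter; the tuple encoding of rules below is an assumption for illustration.

```python
# Tiny forward-chaining interpreter for the worked example.
# Each rule: (positive conditions, negative conditions, additions, deletions).
rules = [
    ({"A"},      set(),  {"B"},    set()),   # IF A THEN B
    ({"A", "B"}, set(),  set(),    {"C"}),   # IF A and B THEN not C
    (set(),      {"C"},  {"GOAL"}, set()),   # IF not C THEN GOAL
]

def forward_chain(memory):
    fired = True
    while fired:
        fired = False
        for pos, neg, add, delete in rules:
            new = (memory | add) - delete
            if pos <= memory and not (neg & memory) and new != memory:
                memory = new       # fire one rule per cycle
                fired = True
                break
    return memory

print(sorted(forward_chain({"A", "C"})))  # ['A', 'B', 'GOAL']
```

The `break` enforces “one rule fires per cycle”; the rule order here acts as a crude conflict-resolution strategy.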

Backward Chaining

[Worked example with the same rules, reasoning back from the goal: Need GOAL → “IF not C THEN GOAL” → need not C → “IF A and B THEN not C” → need B → “IF A THEN B” → need A, which is already in working memory.]

Production Systems

  • A simple metaphor: ships and docks

Production Systems
  • Ships must fit the correct dock
  • When one ship is docked, another can be launched
Production Rules

IF condition THEN action

e.g.,

IF ship is docked AND free-floating ships THEN launch ship

IF dock is free AND ship matches THEN dock ship

The Parsimonious Production Systems Rule Notation
  • On any cycle, any rule whose conditions are currently satisfied will fire
  • Rules must be written so that a single rule will not fire repeatedly
  • Only one rule will fire on a cycle
  • All procedural knowledge is explicit in these rules rather than implicit in the interpreter
Possible Steps 1

  • Disc 1 from a to c
  • Disc 2 from a to b
  • Disc 1 from c to a
  • Disc 3 from a to c
  • Disc 2 from b to c
  • Disc 1 from a to c

Possible Steps 2

  • Disc 4 from a to b
  • Disc 1 from c to b
  • Disc 2 from c to a
  • Disc 1 from b to a
  • Disc 2 from a to b
  • Disc 3 from a to b

Possible Steps 3

  • Disc 5 from a to c
  • Disc 1 from b to a
  • Disc 2 from b to c
  • Disc 1 from a to c
  • Disc 3 from b to a
  • Disc 1 from c to b
  • Disc 2 from c to a
  • Disc 4 from b to c
  • Disc 1 from a to c
  • Disc 2 from a to b
  • Disc 1 from c to b
  • Disc 3 from a to c
  • Disc 1 from b to a
  • Disc 2 from b to c
  • Disc 1 from a to c

Simon’s (1975) goal-recursive logic

To get the 5-tower to Peg C, get the 4-tower to Peg B, then move the 5-disc to Peg C, then move the 4-tower to Peg C.

To get the 4-tower to Peg B, get the 3-tower to Peg C, then move the 4-disc to Peg B, then move the 3-tower to Peg B.

To get the 3-tower to Peg C, get the 2-tower to Peg B, then move the 3-disc to Peg C, then move the 2-tower to Peg C.

To get the 2-tower to Peg B, move the 1-disc to Peg C, then move the 2-disc to Peg B, then move the 1-disc to Peg A.
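This goal recursion translates directly into code: to move an n-tower, move the (n−1)-tower aside, move disc n, then move the (n−1)-tower on top.

```python
# Simon's goal-recursive logic for the Tower of Hanoi as a recursive function.
def move_tower(n, src, dst, spare, moves):
    if n == 0:
        return
    move_tower(n - 1, src, spare, dst, moves)   # (n-1)-tower out of the way
    moves.append(f"Disc {n} from {src} to {dst}")
    move_tower(n - 1, spare, dst, src, moves)   # (n-1)-tower back on top

moves = []
move_tower(3, "a", "c", "b", moves)
print(len(moves))   # 7 moves for a 3-tower
print(moves[0])     # Disc 1 from a to c
```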

Production Rule 1

SUBGOAL_DISCS
IF the goal is to achieve a particular configuration of discs
AND Di is on Px but should go to Py in the configuration
AND Di is the largest disc out of place
AND Dj is on Py
AND Dj is smaller than Di
AND Pz is clear OR has a disc larger than Dj
THEN set a subgoal to move the Dj tower to Pz and Di to Py

Production Rule 2

SUBGOAL_MOVE_DISC
IF the goal is to achieve a particular configuration of discs
AND Di is on Px but should go to Py in the configuration
AND Di is the largest disc out of place
AND Py is clear
THEN move Di to Py

Goals, Operators, Methods, Selection rules (GOMS)
Card, Moran and Newell, 1983
  • Human activity modelled by Model Human Processor
  • Activity defined by GOALS
  • Goals held in ‘Stack’
  • Goals ‘pushed’ onto stack
  • Goals ‘popped’ from stack
Goals

  • Symbolic structures to define desired state of affairs and methods to achieve this state of affairs

GOAL: EDIT-MANUSCRIPT (top-level goal)
  GOAL: EDIT-UNIT-TASK (specific subgoal)
    GOAL: ACQUIRE-UNIT-TASK (get next step)
    GOAL: EXECUTE-UNIT-TASK (do next step)
      GOAL: LOCATE-LINE (specific step)

Operators

  • Elementary perceptual, motor or cognitive acts needed to achieve subgoals

Get-next-line
Use-cursor-arrow-method
Use-mouse-method

Methods

  • Descriptions of procedures for achieving goals
  • Conditional upon contents of working memory and state of task

GOAL: ACQUIRE-UNIT-TASK
  GET-NEXT-PAGE if at end of manuscript
  GET-NEXT-TASK

Selection

  • Choose between competing Methods, if more than one

GOAL: EXECUTE-UNIT-TASK
  GOAL: LOCATE-LINE
    [select: if hands on keyboard
             and less than 5 lines to move
             USE CURSOR KEYS
             else USE MOUSE]

Example
  • Withdraw cash from ATM
    • Construct task model
    • Define production rules
Task Model

Method for goal: Obtain cash from ATM
  Step 1: access ATM
  Step 2: select ‘cash’ option
  Step 3: indicate amount
  Step 4: retrieve cash and card
  Step 5: end task

Production Rules

((GOAL: USE ATM TO OBTAIN CASH)
  ADD-UNIT-TASK (access ATM)
  ADD-WM-UNIT-TASK (access ATM)
  ADD-TASK-STEP (insert card in slot)
  SEND-TO-MOTOR (place card in slot)
  SEND-TO-MOTOR (eyes to slot)
  SEND-TO-PERCEPTUAL (check card in)
  ADD (WM performing card insertion)
  ADD-TASK-STEP (check card insertion)
  DELETE-UNIT-TASK (access ATM)
  ADD-UNIT-TASK (enter PIN))

Problems with GOMS
  • Assumes ‘error-free’ performance
    • Even experts make mistakes
  • MHP grossly simplifies human information processing
  • Producing a task model of non-existent products is difficult
Task Action Grammar
  • GOMS assumes ‘expert’ knows operators and methods for tasks
  • TAG assumes ‘expert’ knows simple tasks, i.e., tasks that can be performed without problem-solving
TAG and competence
  • Competence
    • Defines what an ‘ideal’ user would know
  • TAG relies on ‘world knowledge’
    • up vs down
    • left vs right
    • forward vs backward
Task-action Grammar
  • Grammar relates simple tasks to actions
  • Generic rule schema covering combinations of simple tasks
TAG

  • A ‘grammar’ that maps simple tasks onto actions to form an interaction language, which can be used to investigate consistency
Consistency
  • Syntactic: use of expressions
  • Lexical: use of symbols
  • Semantic-syntactic alignment: order of terms
  • Semantic: principle of completeness
Procedure
  • Step 1: Write out commands and their structures
  • Step 2: Determine whether commands have consistent structure
  • Step 3: Place command items into variable/feature relationship
  • Step 4: Generalise commands by separating into task features, simple tasks, task-action rule schema
  • Step 5: Expand parts of task into primitives
  • Step 6: Check to ensure all names are unique
Example
  • Setting up a recording on a video-cassette recorder (VCR)
  • Assume that all controls are via the front panel and that the user can only use the up and down arrows
Feature list [for a VCR]
  • Property: Date, Channel, Start, End
  • Value: number
  • Frequency: Daily, Weekly
  • Record: on, off
Simple tasks

SetDate [Property = Date, Value = US#, Frequency = Daily]

SetDate [Property = Date, Value = US#, Frequency = Weekly]

SetProg [Property = Prog, Value = US#]

SetStart [Property = Start, Value = US#, Record = on]

SetEnd [Property = End, Value = US#, Record = off]

Rule Schema

1. Task[Property = US#, Value] → SetValue [Value]

2. Task[Property = Date, Value, Frequency = US#] → SetValue [Value] + press “↑↓” until Frequency = US#

3. Task[Property = Start, Value] → SetValue [Value] + press “Rec”

4. SetValue [Value = US#] → press “↑↓” until Value = US#

5. SetValue [Value = US#] → use “↑↓” until Value = US#

Why Cognitive Architecture?
  • Computer architectures:
    • Specify components and their connections
    • Define functions and processes
  • Cognitive Architectures could be seen as the logical conclusion of the ‘human-brain-as-computer’ hypothesis
Why do this?
  • Philosophy: Provide a unified understanding of the mind
  • Psychology: Account for experimental data
  • Education: Provide cognitive models for intelligent tutoring systems and other learning environments
  • Human Computer Interaction: Evaluate artifacts and help in their design
  • Computer Generated Forces: Provide cognitive agents to inhabit training environments and games
  • Neuroscience: Provide a framework for interpreting data from brain imaging
General Requirements
  • Integration of cognition, perception, and action
  • Robust behavior in the face of error, the unexpected, and the unknown
  • Ability to run in real time
  • Ability to Learn
  • Prediction of human behavior and performance
Architectures
  • Model Human Processor (MHP)
    • Card, Moran and Newell (1983)
  • ACT-R
    • Anderson (1993)
  • EPIC
    • Meyer and Kieras (1997)
  • SOAR
    • Laird, Rosenbloom and Newell (1987)
Model Human Processor
  • Three interacting subsystems:
    • Perceptual
      • Auditory image store
      • Visual image store
    • Cognitive
      • Working memory
      • Long-term memory
    • Motor
Average data for MHP
  • Long-term memory: ?
  • Working memory: 3 – 7 chunks, 7s
  • Auditory image store: 17 letters, 200ms
  • Visual image store: 5 letters, 1500ms
  • Cognitive processor: 100ms
  • Perceptual processor: 70ms
  • Motor processor: 70ms
Conclusions
  • Simple description of cognition
  • Uses ‘standard times’ for prediction
  • Uses production rules for defining and combining tasks (with GOMS formalism)
Adaptive Control of Thought, Rational (ACT-R)
http://act.psy.cmu.edu
Adaptive Control of Thought, Rational (ACT-R)
  • ACT-R’s symbolic aspect is realised over a subsymbolic mechanism
  • Symbolic aspect in two parts:
    • Production memory
    • Symbolic memory (declarative memory)
  • Theory of rational analysis
Theory of Rational Analysis
  • Evidence-based assumptions about environment (probabilities)
  • Deriving optimal strategies (Bayesian)
  • Assuming that optimal strategies reflect human cognition (either what it actually does or what it probably ought to do)
Notions of Memory
  • Procedural
    • Knowing how
    • Described in ACT by Production Rules
  • Declarative
    • Knowing that
    • Described in ACT by ‘chunks’
  • Goal Stack
    • A sort of ‘working memory’
    • Holds chunks (goals)
    • Top goal pushed (like GOMS)
    • Writeable
production rules149
Production Rules
  • Knowing how to do X
    • Production rule = set of conditions and an action

IF it is raining

And you wish to go out

THEN pick up your umbrella

(Very simple) ACT
  • Network of propositions
  • Production rules selected via pattern matching. Production rules coordinate retrieval of chunks from symbolic memory and link to environment.
  • If information in working memory matches production rule condition, then fire production rule
slide151
ACT*

[Diagram] ACT* architecture: declarative and procedural memory both connect to working memory (declarative via retrieval and storage, procedural via match and execution); working memory exchanges with the outside world through encoding (input) and performance (output).

slide152

[Diagram] Knowledge Representation: an Addition-Fact chunk (addend1 = six, addend2 = eight, sum = fourteen) applied to the column sum 16 + 18 = 34, with working digits U (4), T (1), H (0) and a carry of 1. Buffer contents for the first column: goal buffer: add numbers in right-most column; visual buffer: 6, 8; retrieval buffer: 14.

symbolic subsymbolic levels
Symbolic / Subsymbolic levels
  • Symbolic level
    • Information as chunks in declarative memory, and represented as propositions
    • Rules as productions in procedural memory
  • Subsymbolic level
    • Chunks given parameters which are used to determine the probability that the chunk is needed
    • Base-level activation (relevance)
    • Context activation (association strengths)
conflict resolution
Conflict resolution
  • Order production rules by preference
  • Select top rule in list
  • Preference defined by:
    • Probability that rule will lead to goal
    • Time associated with rule
    • Likely cost of reaching goal when using sequence involving this rule
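ACT-R expresses these three criteria as an expected gain, E = P·G − C (P = probability the rule leads to the goal, G = value of the goal, C = expected cost of reaching it). A minimal sketch with invented numbers:

```python
# Sketch: conflict resolution by expected gain E = P*G - C.
# All parameter values below are invented for illustration.
def expected_gain(p, goal_value, cost):
    return p * goal_value - cost

candidates = {
    "rule-A": expected_gain(0.9, goal_value=20.0, cost=4.0),  # 14.0
    "rule-B": expected_gain(0.5, goal_value=20.0, cost=1.0),  # 9.0
}
best = max(candidates, key=candidates.get)  # highest expected gain fires
print(best)  # rule-A
```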
example155
Example
  • Activity: Find target and then use mouse to select target:

Hunt_Feature

IF goal = find target with feature F

AND there is object X on screen

THEN move attention to object X

Found_target

IF goal = find target with feature F

AND target with F in location L

THEN move mouse to L and click

example156
Example
  • Model reaction time to target
    • Assume switch attention linearly increases with each new position
    • Assume probability of feature X in location y = 0.53
    • Assume switch attention = 185ms
  • Therefore, reaction time = 185 × 0.53 ≈ 98ms per position
  • Empirical data has RT of 103ms per position
example157
Example
  • Assume target in field of distractors
    • P = 0.42
    • Therefore, 185 × 0.42 ≈ 78ms per position
  • Empirical data = 80ms per position
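The arithmetic behind both estimates is the same; a sketch reproducing the slide's numbers:

```python
# Sketch: expected reaction time per position = attention-switch cost
# weighted by the probability charged to that position (values from the slides).
SWITCH_ATTENTION_MS = 185

def rt_per_position_ms(probability):
    return SWITCH_ATTENTION_MS * probability

print(round(rt_per_position_ms(0.53)))  # ~98 ms (empirical: 103 ms)
print(round(rt_per_position_ms(0.42)))  # ~78 ms (empirical: 80 ms)
```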
learning
Learning
  • Symbolic level
    • Learning defined by adding new chunks and productions
  • Subsymbolic level
    • Adjustment of parameters based on experience
conclusions159
Conclusions
  • ACT uses simple production system
  • ACT provides some quantitative prediction of performance
  • Rationality = optimal adaptation to environment
executive process interactive control epic ftp ftp eecs umich edu people kieras
Executive Process Interactive Control (EPIC)
ftp://ftp.eecs.umich.edu/people/kieras
executive process interactive control epic
Executive Process Interactive Control (EPIC)
  • Focus on multiple task performance
  • Cognitive Processor runs production rules and interacts with perceptual and motor processors
epic parameters
EPIC parameters
  • FIXED
    • Connections and mechanisms
    • Time parameters
    • Feature sets for motor processors
    • Task-specific production rules and perceptual encoding types
  • FREE
    • Production rules for tasks
    • Unique perceptual and motor processors
    • Task instance set
    • Simulated task environment
slide163
EPIC

[Diagram] EPIC architecture: the display and task environment feed auditory and visual perceptual processors; a production rule interpreter, drawing on production memory, long-term memory, and working memory, selects actions; speech and manual motor processors act on the task environment, with tactile feedback returning from the manual effectors.

production memory
Production Memory
  • Perceptual processors controlled by production rules
  • Production Rules held in Production Memory
  • Production Rule Interpreter applies rules to perceptual processes
working memory165
Working Memory
  • Limited capacity (or duration of 4s) and holds current production rules
  • Cognitive processor updates every 50ms
  • On update, perceptual input, item from production memory, and next action held in working memory
resolving conflict
Resolving Conflict
  • Production rules applied to executive tasks to handle resource conflict and scheduling
  • Conflict dealt with in production rule specification
    • Lockout
    • Interleaving
    • Strategic response deferment
example167
Example

[Diagram] Dual-task timeline: Task one runs stimulus one → perceptual process → cognitive process (response selection, memory process) → response one; Task two runs the same stages for stimulus two. An executive process moves the eye to S2, enables tasks 1 and 2, waits for task 1 to complete (task1 end), then grants task 2 permission to respond, through to trial end.

conclusions168
Conclusions
  • Modular structure supports parallelism
  • EPIC does not have a goal stack and does not assume sequential firing of goals
  • Goals can be handled in parallel (provided there is no resource conflict)
  • Does not support learning
states operators and reasoning soar http www isi edu soar soar html
States, Operators, And Reasoning (SOAR)
http://www.isi.edu/soar/soar.html
states operators and reasoning soar
States, Operators, And Reasoning (SOAR)
  • Successor to the General Problem Solver (Newell and Simon, 1960)
  • SOAR seeks to apply operators to states within a problem space to achieve a goal.
  • SOAR assumes that actor uses all available knowledge in problem-solving
soar as a unified theory of cognition
Soar as a Unified Theory of Cognition
  • Intelligence = problem solving + learning
  • Cognition seen as search in problem spaces
  • All knowledge is encoded as productions

→ a single type of knowledge

  • All learning is done by chunking

→ a single type of learning

slide172

Young, R.M., Ritter, F., Jones, G.  1998 "Online Psychological Soar Tutorial" available at: http://www.psychology.nottingham.ac.uk/staff/Frank.Ritter/pst/pst-tutorial.html

soar activity
SOAR Activity
  • Operators:  Transform a state via some action
  • State:  A representation of possible stages of progress in the problem
  • Problem space:  States and operators that can be used to achieve a goal.
  • Goal: Some desired situation.
soar activity174
SOAR Activity
  • Problem solving = applying an Operator to a State in order to move through a Problem Space to reach a Goal. 
  • Impasse = where an Operator cannot be applied to a State, so it is not possible to move forward in the Problem Space; this becomes a new problem to be solved
  • Soar can learn by storing solutions to past problems as chunks and applying them when it encounters the same problem again
soar architecture

SOAR Architecture

[Diagram] Production memory holds pattern → action rules, with a chunking mechanism writing new rules back into it. Working memory holds objects and preferences, maintained by the working memory manager. A decision procedure, using the conflict stack, selects what to do next.

explanation
Explanation
  • Working Memory
    • Data for current activity, organized into objects
  • Production Memory
    • Contains production rules
  • Chunking mechanism
    • Collapses successful sequences of operators into chunks for re-use
3 levels in soar
3 levels in soar
  • Symbolic – the programming level
    • Rules programmed into Soar that match circumstances and perform specific actions
  • Problem space – states & goals
    • The set of goals, states, operators, and context.
  • Knowledge – embodied in the rules
    • The knowledge of how to act on the problem/world, how to choose between different operators, and any learned chunks from previous problem solving
how does it work
How does it work?
  • A problem is encoded as a current state and a desired state (goal)
  • Operators are applied to move from one state to another
  • There is success if the desired state matches the current state
  • Operators are proposed by productions, with preferences biasing choices in specific circumstances
  • Productions fire in parallel
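This cycle (encode states, apply operators, test against the goal) can be sketched as search over a toy problem space; the numeric problem below is invented purely for illustration:

```python
# Sketch: problem-space search in the SOAR spirit: apply operators to
# states until the current state matches the desired (goal) state.
from collections import deque

def solve(start, goal, operators):
    """Breadth-first search; returns the sequence of operator names applied."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:          # success: desired state reached
            return path
        for name, op in operators.items():
            nxt = op(state)        # apply an operator to move to a new state
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Toy problem space: reach 4 from 0 using the operators +3 and -2.
ops = {"add3": lambda s: s + 3, "sub2": lambda s: s - 2}
print(solve(0, 4, ops))  # ['add3', 'add3', 'sub2']
```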
impasses
Impasses
  • If no operator is proposed, or if there is a tie between operators, or if Soar does not know what to do with an operator, there is an impasse
  • When there are impasses, Soar sets a new goal (resolve the impasse) and creates a new state
  • Impasses may be stacked
  • When one impasse is solved, Soar pops up to the previous goal
learning180
Learning
  • Learning occurs by chunking the conditions and the actions of the impasses that have been resolved
  • Chunks can be used immediately in further problem-solving behaviour
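Functionally, chunking resembles memoisation: once an impasse has been resolved, its solution is stored and retrieved directly next time. A sketch under that simplifying assumption:

```python
# Sketch: chunking as memoisation; names are illustrative.
chunks = {}

def solve_subgoal(problem, slow_solver):
    """Return a solution, learning a chunk on the first encounter."""
    if problem in chunks:
        return chunks[problem]     # a learned chunk applies immediately
    result = slow_solver(problem)  # deliberate problem solving (the impasse)
    chunks[problem] = result       # store the resolution as a chunk
    return result

print(solve_subgoal((6, 8), lambda p: p[0] + p[1]))  # 14, solved deliberately
print(solve_subgoal((6, 8), lambda p: p[0] + p[1]))  # 14, retrieved as a chunk
```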
conclusions182
Conclusions
  • It may be too "unified"
    • Single learning mechanism
    • Single knowledge representation
    • Uniform problem state
  • It does not take neuropsychological evidence into account (cf. ACT-R)
  • There may be non-symbolic intelligence (e.g. neural nets) that is not abstractable to the symbolic level
user models in design
User Models in Design
  • Benchmarking
  • Human Virtual Machines
  • Evaluation of concepts
  • Comparison of concepts
  • Analytical prototyping
benchmarking
Benchmarking
  • What times can users be expected to take to perform a task?
    • Training criteria
    • Evaluation criteria (under ISO9241)
    • Product comparison
human virtual machine
Human Virtual Machine
  • How might the user perform?
    • Make assumptions explicit
    • Contrast views
evaluation of concepts
Evaluation of Concepts
  • Which design could lead to better performance?
    • Compare concepts using models prior to building prototype
    • Use performance of existing product as benchmark
reliability of models
Reliability of Models
  • Agreement of predictions with observations
  • Agreement of predictions by different analysts
  • Agreement of model with theory
comparison with theory
Comparison with Theory
  • Approximation of human information processing
  • Assumes linear, error-free performance
  • Assumes strict following of ‘correct’ procedure
  • Assumes the ‘correct’ procedure is the only way
  • Assumes actions can be timed
klm validity
KLM Validity

Predicted values lie within 20% of observed values

comparison of klm predicted with times from user trials
Comparison of KLM predicted with times from user trials

[Chart] Total time (s) against trial number (1–7):

  • CUI: predicted P = 15.84s; observed mean = 15.37s; error = 2.9%
  • GUI: predicted P = 11.05s; observed mean = 8.64s; error = 22%
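A sketch of the 20% acceptance check applied to these figures (the slide's error percentages appear to use the predicted value as the base):

```python
# Sketch: is a KLM prediction within 20% of the observed mean?
def within_20_percent(predicted, observed):
    # error expressed relative to the predicted value, as on the slide
    return abs(predicted - observed) / predicted <= 0.20

print(within_20_percent(15.84, 15.37))  # CUI: True  (~2.9% error)
print(within_20_percent(11.05, 8.64))   # GUI: False (~22% error)
```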

inter intra rater reliability
Inter / Intra-rater Reliability
  • Inter-rater:
    • Correlation of several analysts
    • = 0.754
  • Intra-rater:
    • Correlation for same analysts on several occasions
    • =0.916
  • Validity:
    • correlation with actual performance
    • = 0.769

Stanton and Young, 1992

how compare single data points
How compare single data points?
  • Models typically produce a single prediction
  • How can one value be compared against a set of data?
  • How can a null hypothesis be proved?
liao and milgram 1991
Liao and Milgram (1991)

[Diagram] Decision bounds on a number line, against which the derived value D is plotted:

A−D−α·sd   A−D   A−D+β·sd   A   A+D−β·sd   A+D   A+D+α·sd

defining terms
Defining terms
  • A = Actual values, with observed standard deviation (sd)
  • D = Derived values
  • α = 5% (P < 0.05 to reduce Type I error)
  • β = 20% (P < 0.2 for Type II error)
acceptance criteria
Acceptance Criteria

Accept H0 if: A−D+β·sd < D < A+D−β·sd

Reject H0 if: D < A−D−α·sd

Reject H0 if: D > A+D+α·sd

analytical prototyping
Analytical Prototyping
  • Functional analysis
      • Define features and functions
      • Development of design concepts, e.g., sketches and storyboards
  • Scenario-based analysis
      • How people pursue defined goals
      • State-based descriptions
  • Structural analysis
      • Predictive evaluation
      • Testing to destruction
analytical prototyping199
Analytical Prototyping
  • Functional analysis
  • Scenario-based analysis
  • Structural analysis
rewritable routines
Rewritable Routines
  • Mental models
    • Imprecise, incomplete, inconsistent
  • Partial representations of product and procedure for achieving subgoal
  • Knowledge recruited in response to system image
simple architecture

Simple Architecture

[Diagram] Rewritable routines mediate between the goal state, the current state, the possible states, and the relevant state; from these the user selects an action to change the machine state, which yields the next state and updates the current state.
global prototypical routines
Global Prototypical Routines
  • Stereotyped Stimulus-Response compatibilities
  • Generalisable product knowledge
state specific routines
State-specific Routines
  • Interpretation of system image
    • Feature evolution
  • Expectation of procedural steps
  • Situated / Opportunistic planning
describing interaction
Describing Interaction
  • State-space diagrams
  • Indication of system image
  • Indication of user action
  • Prediction of performance
state space diagram

State-space Diagram

[Diagram] Example: state 0 of a CD player waits for one of: raise lid, play mode, enter, skip forward, skip back, play, stop, off. The task ‘Press Play’ (time: 200ms; error: 0.0004) moves the device to state 1.
  • State number
  • System image
  • Waiting for…
  • Transitions
developing models
Developing Models

[Diagram] Transition network annotated with step times and probabilities: from Start (0ms), recall plan (1380ms, P = 0.997) or wrong plan (1380ms, P = 0.003); subsequent branches (P = 0.74 / 0.26) lead through press Play (200ms), cycle through menu (800ms), press Playmode (200ms), press Enter (0ms), press Play (0ms), press Other Key (200ms), and switch off (300ms). Each keypress succeeds with P = 0.9996 and errs with P = 0.0004.
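From such a network, predictions fall out by accumulating step times and multiplying branch probabilities. A sketch for the error-free path (step values read off the diagram; the exact path structure is my simplifying assumption):

```python
# Sketch: expected time and success probability along an error-free path.
# (time_ms, p_success) per step; values taken from the slide's diagram.
steps = [
    (1380, 0.997),   # recall plan
    (200, 0.9996),   # press Playmode
    (800, 0.9996),   # cycle through menu
    (0, 0.9996),     # press Enter
    (200, 0.9996),   # press Play
]

def expected_time(path):
    """Total time (ms) along the path, ignoring error branches."""
    return sum(t for t, _ in path)

def path_probability(path):
    """Probability of completing every step without error."""
    p = 1.0
    for _, p_ok in path:
        p *= p_ok
    return p

print(expected_time(steps))               # 2580 ms
print(round(path_probability(steps), 4))  # ~0.9954
```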

what is the point
What is the point?
  • Are these models useful to designers?
  • Are these models useful to theorists?
task models problems
Task Models - problems
  • Task models take time to develop
    • They may not have high inter-rater reliability
    • They cannot deal easily with parallel tasks
    • They ignore social factors
task models benefits
Task Models - benefits
  • Models are abstractions – you always leave something out
  • The process of creating a task model might outweigh the problems
  • Task models highlight task sequences and can be used to define metrics
task models for theorists
Task Models for Theorists
  • Task models are engineering approximations
    • Do they actually describe how human information processing works?
      • Do they need to?
    • Do they describe cognitive operations, or just actions?
some background reading
Some Background Reading

Dix, A. et al., 1998, Human-Computer Interaction (chapters 6 and 7), London: Prentice Hall

Anderson, J.R., 1983, The Architecture of Cognition, Cambridge, MA: Harvard University Press

Card, S.K. et al., 1983, The Psychology of Human-Computer Interaction, Hillsdale, NJ: LEA

Carroll, J., 2003, HCI Models, Theories and Frameworks: Toward a Multidisciplinary Science (chapters 1, 3, 4, 5), San Francisco, CA: Morgan Kaufmann
