
Response Times and Their Use in the Cognitive Science of Choice


Presentation Transcript


  1. Response Times and Their Use in the Cognitive Science of Choice. Robin Thomas (1), Trish Van Zandt (2), Joe Houpt (3), Mario Fific (4), & Joe Johnson (1). (1) Miami University, Oxford, OH; (2) The Ohio State University, Columbus, OH; (3) Wright State University, Dayton, OH; (4) Grand Valley State University, MI

  2. Typical Tasks • Consider a signal detection experiment: one of two stimuli is presented, a standard (or noise) and a comparison (or signal), that differ in intensity on some dimension. The observer must determine which of the two occurred on each trial. • A decision maker is given two gambles that differ in value and probability of earnings. Gamble A = 40% chance of winning $10, 60% chance of losing $5. Gamble B = 60% chance of winning $6, 40% chance of losing $9. Which does he actually play? How long does it take him to decide?
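For orientation, the expected values of the two gambles can be worked out directly from the probabilities and payoffs on the slide; the point of the example is that observed choices and their response times carry information beyond what such a calculation predicts. A minimal sketch in Python:

```python
# Expected value of each gamble (probability-weighted payoff),
# using the probabilities and dollar amounts from the slide.
gamble_a = [(0.40, 10), (0.60, -5)]   # 40% win $10, 60% lose $5
gamble_b = [(0.60, 6), (0.40, -9)]    # 60% win $6,  40% lose $9

def expected_value(gamble):
    return sum(p * payoff for p, payoff in gamble)

print(expected_value(gamble_a))  # 0.4*10 + 0.6*(-5) = +1  (approx., up to float error)
print(expected_value(gamble_b))  # 0.6*6  + 0.4*(-9) =  0  (approx., up to float error)
```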

  3. Typical Tasks • A participant studies a list of items at time t0. Later, she is presented with another list of items, some old, some new. Her task is to indicate whether each item is old or new. • A learner trains on examples to discover which objects belong in one of two categories (e.g., friend or foe, poisonous or safe, malignant or benign). New examples are presented to the learner that need to be classified. • Which city is farther south, Paris or New York? How confident are you (on a scale from 0 – 100%)?

  4. In every case, we measure both the choice and the time required to make it.

  5. Typical summary measures • Mean response times and variance, choice proportions

  6. Typical summary measures • Mean response times and variance, choice proportions • RT densities and distributions (and functions of these)

  7. [Figures: histogram estimate of an RT density (from Van Zandt, 2000) and an empirical cumulative distribution function (from Ashby, et al., 1993)]
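A minimal sketch of the two estimators named on this slide, using a simulated response-time vector `rt` (the variable names and the gamma-distributed data are illustrative, not from the cited studies):

```python
import numpy as np

# Hypothetical RT sample in seconds (gamma-shaped RTs plus a 200 ms offset).
rt = np.random.gamma(shape=2.0, scale=0.3, size=500) + 0.2

# Histogram estimate of the RT density: bin counts normalized to unit area.
density, bin_edges = np.histogram(rt, bins=30, density=True)

# Empirical cumulative distribution function: F_hat(t) = proportion of RTs <= t.
t = np.sort(rt)
F_hat = np.arange(1, len(t) + 1) / len(t)
```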

  8. Overview • Approaches to using response times in cognitive science • Macro-process modeling/Mental architectures • Basic SFT paradigm & data variables • Dimensions of a Processing System • Architectures • Stopping Rules • Capacity • Dependence • Predictions & Statistical analysis issues • Empirical example worked out (Johnson, et al., 2010) • Micro-process modeling/models of RT and accuracy • Sequential Sampling Basics • Random walk • Race models • Diffusion • “Easy versions” • Beyond simple choices: multi-alternative decisions • Combining approaches • Neural evidence

  9. Mental Architectures Systems Factorial Technology (Townsend & Nozawa, 1995), the “double-factorial paradigm” (based on Sternberg, 1969; see also Schweickert, 1985; Dzhafarov & Schweickert, 1995)

  10. Mental Architectures Systems Factorial Technology (Townsend & Nozawa, 1995), the “double-factorial paradigm” (based on Sternberg, 1969; see also Schweickert, 1985; Dzhafarov & Schweickert, 1995) Divided attention task: one stimulus is presented on a trial, and the observer is asked “Is there an arrow somewhere in the stimulus?” = OR gate (an AND-gate version of the task can also be used; Houpt & Townsend, 2010, 2012) - from Johnson, et al. (2010)

  11. Mental Architectures Dependent Measure: RT, from which interaction contrasts are formed. Accuracy is not analyzed (it is often high) or is analyzed separately (Schweickert, 1985). Mean Interaction Contrast: MIC = (RT_LL - RT_LH) - (RT_HL - RT_HH), • where RT_ij refers to the mean response time in the condition in which factor A is at level i and factor B is at level j • in the global/local arrow search task, the salience of the local-level arrow relative to a dash is the first factor, and the salience of the global-level arrow relative to a dash is the second factor
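A minimal sketch of the MIC computation, assuming a hypothetical dictionary `rt` keyed by the (factor A, factor B) salience levels; the simulated data are only for illustration:

```python
import numpy as np

# rt[(i, j)] holds the RTs (in seconds) for the condition with factor A at level i
# and factor B at level j ('H' = high salience, 'L' = low salience).
rng = np.random.default_rng(0)
rt = {('H', 'H'): rng.gamma(2, 0.20, 200) + 0.2,
      ('H', 'L'): rng.gamma(2, 0.25, 200) + 0.2,
      ('L', 'H'): rng.gamma(2, 0.25, 200) + 0.2,
      ('L', 'L'): rng.gamma(2, 0.35, 200) + 0.2}

mean_rt = {cond: np.mean(x) for cond, x in rt.items()}

# MIC = (LL - LH) - (HL - HH); zero is consistent with serial processing,
# while positive/negative values point to different parallel or coactive architectures.
mic = (mean_rt[('L', 'L')] - mean_rt[('L', 'H')]) \
    - (mean_rt[('H', 'L')] - mean_rt[('H', 'H')])
print(mic)
```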

  12. Mental Architectures Dependent Measure: RT, from which interaction contrasts are formed. Survivor function: S(t) = P(T > t) = 1 - F(t), where F(t) is the cumulative distribution function. Survivor Interaction Contrast: SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)]

  13. How to calculate the survivor interaction contrast (SIC) function: form reaction time histograms for the four factorial salience conditions (HH, HL, LH, LL), convert them to survivor functions, and combine the survivor functions as SIC(t) = S_HH(t) - S_HL(t) - (S_LH(t) - S_LL(t))
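Following the recipe on this slide, a small sketch that turns raw RT samples from the four factorial conditions into an empirical SIC(t). The helper names `survivor` and `sic` are ours, and the plug-in survivor estimator is the simplest possible choice:

```python
import numpy as np

def survivor(sample, t_grid):
    """Empirical survivor function S_hat(t) = P(T > t), evaluated on a common time grid."""
    sample = np.asarray(sample)
    return np.array([(sample > t).mean() for t in t_grid])

def sic(rt_hh, rt_hl, rt_lh, rt_ll, t_grid):
    """SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)], as defined on slides 12-13."""
    s = {cond: survivor(data, t_grid) for cond, data in
         {'HH': rt_hh, 'HL': rt_hl, 'LH': rt_lh, 'LL': rt_ll}.items()}
    return (s['LL'] - s['LH']) - (s['HL'] - s['HH'])

# Example usage (with RT arrays rt_hh, rt_hl, rt_lh, rt_ll in seconds):
# sic_vals = sic(rt_hh, rt_hl, rt_lh, rt_ll, np.linspace(0.2, 2.0, 200))
```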

  14. Mental Architectures Dimensions of a processing model

  15. Mental Architectures [Figure: serial processing, parallel processing, and coactive architectures - from Johnson, et al., 2010]

  16. Mental Architectures Using the salience factorial conditions

  17. Mental Architectures • Capacity Coefficient: C_OR(t) = H_AB(t) / [H_A(t) + H_B(t)] • Uses the presence vs. absence factorial conditions • Indicates changes in processing resources due to an increase in workload (# items/channels) • where H_AB(t) is the integrated hazard function from the double-target condition and H_A(t), H_B(t) are the integrated hazard functions from the single-target conditions • Note that the integrated hazard function H(t) = ∫ h(s) ds = -log S(t), where h(t) = f(t)/S(t) is the hazard function, so H(t) is easy to estimate

  18. Mental Architectures • Capacity Coefficient: • Measured against a baseline model: UCIP with self-termination • Unlimited Capacity: no change in resources available for individual items due to increased overall workload • Independent: stochastic independence • Parallel: simultaneous processing of inputs • Self-terminating: stops at the first opportunity • C(t) = 1: unlimited capacity • C(t) > 1: supercapacity • C(t) < 1: limited capacity
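A rough sketch of estimating C_OR(t) against the UCIP baseline, using the plug-in estimate H(t) = -log S_hat(t) of the integrated hazard function. Houpt & Townsend's statistical treatment uses the Nelson-Aalen estimator; the function names and the numerical guards here are ours:

```python
import numpy as np

def integrated_hazard(sample, t_grid):
    """Crude plug-in estimate of H(t) = -log S(t) from the empirical survivor function."""
    s = np.array([(np.asarray(sample) > t).mean() for t in t_grid])
    s = np.clip(s, 1e-6, 1.0)            # avoid log(0) in the right tail
    return -np.log(s)

def capacity_or(rt_double, rt_single_a, rt_single_b, t_grid):
    """C_OR(t) = H_AB(t) / [H_A(t) + H_B(t)] for the OR (first-terminating) task."""
    h_ab = integrated_hazard(rt_double, t_grid)
    h_a = integrated_hazard(rt_single_a, t_grid)
    h_b = integrated_hazard(rt_single_b, t_grid)
    # Guard against division by zero at very early times, before any response has occurred.
    return h_ab / np.maximum(h_a + h_b, 1e-12)
```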

  19. Mental Architectures Statistical Issues: • Mean interaction contrast (MIC): can be assessed via the standard factorial ANOVA test of the interaction • Survivor interaction contrast: statistical test developed by Houpt & Townsend (2010) • Capacity coefficient: statistical test developed by Houpt & Townsend (2012) • The above are Fisherian; Houpt promises Bayesian approaches forthcoming

  20. Mental Architectures Empirical Example: Global-local processing in autism (Johnson, et al., 2010) • Participants: 10 ASD, 11 controls • Task: indicate whether an arrow is present • Measured response time and accuracy; RT analyses only • All MIC, SIC, and capacity analyses were performed on individual participants • In normal visual processing, global-level processing precedes and may interfere with local-level processing

  21. Mental Architectures Single-factor reversal (Townsend & Thomas, 1994) [Figure: SIC(t) results, with panels labeled "inhibitory parallel" and "facilitative parallel exhaustive"]

  22. Mental Architectures [Figure: results labeled "coactive or facilitative parallel" and "inhibitory parallel"]

  23. Mental Architectures [Figure: capacity coefficient results; some participants show supercapacity or near-unlimited capacity, while most show limited capacity]

  24. Models of RT and Accuracy • SFT uses only the RTs of correct responses – a weakness of the approach • Important information is also contained in error responses and in the probability of each response, especially in classification, recognition memory, and decision making • Predominant approach – sequential sampling • At each moment in time, evidence is accrued according to an underlying stochastic mechanism until enough has accumulated to determine a response or a time limit has expired

  25. Models of RT and Accuracy Phenomenon: Speed – accuracy tradeoff

  26. Sequential sampling models [Figure: a sample evidence-accumulation path; the evidence state evolves over deliberation time (0-500 ms) between an upper threshold for Option A and a lower threshold for Option B, with the decision time Td marked where a threshold is first crossed]

  27. Sequential sampling models [Figure: a second example evidence path with its decision time Td]
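As a concrete illustration of the evidence paths sketched in these figures, a minimal Euler-type simulation of one two-boundary diffusion trial; all parameter values are arbitrary placeholders:

```python
import numpy as np

def diffusion_trial(drift=0.25, boundary=1.0, start=0.0, dt=0.001, noise=1.0,
                    t0=0.200, rng=np.random.default_rng()):
    """Simulate a single Wiener diffusion trial between boundaries at +boundary
    (Option A) and -boundary (Option B); returns the chosen option and the RT,
    including a nondecision time t0. Parameters are illustrative, not fitted."""
    x, t = start, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ('A' if x >= boundary else 'B'), t + t0

choice, rt = diffusion_trial()
```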

  28. Models of RT and Accuracy Race (Counter) models (e.g., Merkle & Van Zandt, 2006) - from Merkle & Van Zandt (2006)
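A generic race-model sketch in the same spirit: two counters accumulate Poisson evidence events at their own rates, and the first to reach a criterion count determines the response. This is a textbook Poisson race for illustration, not Merkle & Van Zandt's exact specification:

```python
import numpy as np

def poisson_race_trial(rate_a=40.0, rate_b=30.0, threshold=20, t0=0.200,
                       rng=np.random.default_rng()):
    """One trial of a Poisson counter race: the time for a counter to collect
    `threshold` events is a sum of exponential inter-event times, i.e. gamma
    distributed; the faster counter determines the response."""
    finish_a = rng.gamma(shape=threshold, scale=1.0 / rate_a)
    finish_b = rng.gamma(shape=threshold, scale=1.0 / rate_b)
    response = 'A' if finish_a < finish_b else 'B'
    return response, t0 + min(finish_a, finish_b)
```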

  29. Models of RT and Accuracy Exemplar-based random walk model of classification learning (Nosofsky & Palmeri, 1997) - from Thomas (2006)
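A deliberately simplified sketch of the EBRW idea: retrieval probabilities are proportional to exemplar similarity to the probe, and each retrieval nudges a random walk toward the retrieved exemplar's category. The full model also derives step times from an exemplar race, which is omitted here; the parameters and example stimuli are hypothetical:

```python
import numpy as np

def ebrw_trial(probe, exemplars, labels, c=2.0, threshold=5,
               rng=np.random.default_rng()):
    """Simplified exemplar-based random walk: compute exponential similarity of the
    probe to each stored exemplar, take steps toward category A with probability
    equal to A's share of total similarity, and stop at +/- threshold."""
    exemplars = np.asarray(exemplars, dtype=float)
    sim = np.exp(-c * np.abs(exemplars - probe).sum(axis=1))   # city-block similarity
    p_cat_a = sim[np.asarray(labels) == 'A'].sum() / sim.sum()
    state, steps = 0, 0
    while abs(state) < threshold:
        state += 1 if rng.random() < p_cat_a else -1
        steps += 1
    return ('A' if state > 0 else 'B'), steps

# Example: a 2-D probe classified against two stored exemplars.
choice, n_steps = ebrw_trial(probe=[0.6, 0.4],
                             exemplars=[[0.9, 0.8], [0.2, 0.1]],
                             labels=['A', 'B'])
```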

  30. Models of RT and Accuracy Ratcliff’s Diffusion Model (1978, 2002) Drift rate distributions, one for each stimulus category

  31. Models of RT and Accuracy • “Easy” Versions • Offer closed-form solutions for response time and probability predictions - from Wagenmakers, et al., 2007
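The EZ-diffusion mapping of Wagenmakers et al. (2007) can be written in a few lines; this sketch follows their published closed-form equations with the conventional scaling parameter s = 0.1, and omits the edge corrections needed when accuracy is exactly 0, 0.5, or 1:

```python
import numpy as np

def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
    """EZ-diffusion: map proportion correct, correct-RT variance, and correct-RT
    mean onto drift rate v, boundary separation a, and nondecision time Ter."""
    L = np.log(prop_correct / (1 - prop_correct))               # logit of accuracy
    x = L * (L * prop_correct**2 - L * prop_correct + prop_correct - 0.5) / rt_var
    v = np.sign(prop_correct - 0.5) * s * x**0.25               # drift rate
    a = s**2 * L / v                                            # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))     # mean decision time
    ter = rt_mean - mdt                                         # nondecision time
    return v, a, ter

# Hypothetical summary statistics: 80% correct, RT variance 0.112 s^2, mean RT 0.723 s.
print(ez_diffusion(0.80, 0.112, 0.723))
```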

  32. Models of RT and Accuracy • “Easy” Versions • Offer closed-form solutions for response time and probability predictions • Linear Ballistic Accumulator - from Brown & Heathcote, 2008
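A simulation sketch of a single LBA trial in the spirit of Brown & Heathcote (2008): uniform start points, normally distributed drift rates, and linear noise-free accumulation to a common threshold. Parameter values are illustrative, and the guard against negative drifts is a crude simplification of the full model:

```python
import numpy as np

def lba_trial(drift_means=(1.0, 0.8), drift_sd=0.25, start_max=0.5, threshold=1.0,
              t0=0.200, rng=np.random.default_rng()):
    """One linear ballistic accumulator trial: each accumulator draws a uniform
    start point and a normal drift, rises linearly, and the first to reach the
    threshold determines the response and the decision time."""
    starts = rng.uniform(0.0, start_max, size=len(drift_means))
    drifts = rng.normal(drift_means, drift_sd)
    drifts = np.where(drifts > 0, drifts, 1e-6)     # crude guard against negative drifts
    finish = (threshold - starts) / drifts          # deterministic time to threshold
    winner = int(np.argmin(finish))
    return winner, t0 + finish[winner]
```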

  33. Models of RT and Accuracy • Beyond two choices: Decision Field Theory of Multi-alternative Decisions (Busemeyer & Townsend, 1993; Johnson & Busemeyer, 2005, 2008) • Attention shifts at each moment to a particular dimension of the decision problem • An evaluation of each choice alternative is based on relative values on the focal dimension • This evaluation is used to update the preference state from the previous moment • Preference updating continues until an alternative surpasses a decision threshold
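A simplified simulation sketch of the multi-alternative DFT accumulation just described: attention switches stochastically between attributes, each alternative is evaluated relative to the mean of the others, and preferences accumulate with decay and lateral inhibition until one crosses the threshold. The contrast and feedback matrices follow the general form of the theory, but the specific parameter values and the uniform-inhibition feedback matrix are simplifications of ours:

```python
import numpy as np

def dft_choice(M, threshold=1.0, w_attend=(0.5, 0.5), s_self=0.95, s_inhib=0.02,
               max_steps=5000, rng=np.random.default_rng()):
    """Simplified multi-alternative DFT: P(t+1) = S P(t) + C M W(t+1), where M holds
    subjective attribute values, W(t) marks the currently attended attribute, C is a
    contrast matrix, and S supplies decay plus lateral inhibition."""
    M = np.asarray(M, dtype=float)                                   # alternatives x attributes
    n = M.shape[0]
    C = np.eye(n) - (np.ones((n, n)) - np.eye(n)) / (n - 1)          # contrast matrix
    S = s_self * np.eye(n) - s_inhib * (np.ones((n, n)) - np.eye(n)) # feedback matrix
    P = np.zeros(n)                                                  # preference state
    for step in range(1, max_steps + 1):
        attended = rng.choice(M.shape[1], p=w_attend)                # attended attribute
        W = np.zeros(M.shape[1])
        W[attended] = 1.0
        P = S @ P + C @ M @ W                                        # valence updates preferences
        if P.max() >= threshold:
            return int(np.argmax(P)), step
    return int(np.argmax(P)), max_steps

# Example: three alternatives described on two attributes (hypothetical values).
choice, n_steps = dft_choice(M=[[0.6, 0.4], [0.4, 0.6], [0.5, 0.5]])
```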

  34. DFT Example: College choice • Attention shifting • Evaluation of relative values • Preference updating • Decision threshold [Figure: worked numerical example; values .923, .834, .732]

  35. DFT: Illustration [Figure: preference states P(t) for alternatives A, B, and C evolving over time t toward a decision threshold θ]

  36. Multialternative choice [Figure: alternatives X, Y, and Z located in a two-dimensional attribute space] • Alternative space • Dimension interpretations • Binary choices • Additional alternatives • Choice pair relations • {X,Y} vs. {X,Y,Z}

  37. Choice phenomena [Figure: alternatives X and Y with a similar option S, a decoy D, and a compromise option C in attribute space] Baseline: Pr(X|X,Y) = Pr(Y|X,Y) = 0.5 = Pr(X|X,C) = Pr(Y|Y,C) • Similarity: Pr(X|X,Y,S) < Pr(Y|X,Y,S) • Attraction (decoy): Pr(X|X,Y,D) > Pr(Y|X,Y,D) • Compromise: Pr(C|X,Y,C) > Pr(X|X,Y,C) = Pr(Y|X,Y,C)

  38. DFT: Account for the phenomena [Figure: predicted choice probabilities Pr(X), Pr(Y), and Pr(S), Pr(D), Pr(C) for the similarity, attraction, and compromise choice sets]

  39. Combining Approaches • Thomas (2006) simulated diffusion models and random walk models of choice (e.g., EBRW) in a factorial task to derive MIC predictions • characterized optimal responding in random walks and diffusion models in additive factor paradigms • provided a reinterpretation of previously paradoxical findings regarding the effects of stimulus probability on choice RT

  40. Combining Approaches

  41. Combining Approaches - Fific, et al., 2010

  42. Combining Approaches - Townsend, et al., 2012, “General recognition theory extended to include response times: Predictions for a class of parallel systems”, JMP

  43. Neural Evidence - Smith & Ratcliff (2004)

  44. Neural Evidence

  45. Neural Evidence - from Purcell, et al., 2012

  46. Summary & Conclusions • Two major approaches to understanding response times in choice • Axiomatic analysis of mental architecture in factorial paradigms • Parameter free, class-wide applicability • Accuracy information not generally taken into account (exception, Schweickert’s work) • Micro-process models of both accuracy and decision time – sequential sampling • Computationally complex – though some ‘EZ’ versions • Parametric • Some efforts to incorporate macro axiomatic logic into microprocess models • Neural evidence for information accumulation to a threshold assumption
