Learning fMRI-Based Classifiers for Cognitive States

Stefan Niculescu

Carnegie Mellon University

April, 2003

Our Group: Tom Mitchell, Luis Barrios, Rebecca Hutchinson, Marcel Just, Francisco Pereira, Xuerui Wang

fMRI and Cognitive Modeling

Have:

  • First generative models:
    • Task → Cognitive state sequence → average fMRI per ROI
    • Predict subject-independent, gross anatomical regions
    • Miss subject-to-subject variation, trial-to-trial variation

Want:

  • Much greater precision, and the reverse prediction:
    • fMRI of single subject, single trial → Cognitive state sequence
[Slides 3–5: diagram — a cognitive task produces a cognitive state sequence, which “virtual sensors” of cognitive state read out from the fMRI signal]

Does fMRI contain enough information?
  • Can we devise learning algorithms to construct such “virtual sensors”?

Preliminary Experiments: Learning Virtual Sensors
  • Machine learning approach: train classifiers
    • fMRI(t, …, t+d) → CognitiveState
  • Fixed set of possible states
  • Trained per subject, per experiment
  • Time interval specified
Approach
  • Learn fMRI(t, …, t+k) → CognitiveState
  • Classifiers:
    • Gaussian Naïve Bayes, SVM, kNN
  • Feature selection/abstraction
    • Select subset of voxels (by signal, by anatomy)
    • Select subinterval of time
    • Average activities over space, time
    • Normalize voxel activities
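As a rough sketch of this approach (not the authors' code), the classifier comparison can be set up in scikit-learn. The data here are synthetic and the sizes are illustrative: each row of X is one flattened fMRI window fMRI(t, …, t+k).

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_snapshots = 80, 50, 16           # toy sizes
X = rng.standard_normal((n_trials, n_voxels * n_snapshots))
y = rng.integers(0, 2, size=n_trials)                   # two cognitive states

# Normalize each voxel-time feature to [0, 1], as in the slides.
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Compare the three classifier families named above under leave-one-out.
for name, clf in [("GNB", GaussianNB()),
                  ("SVM", SVC(kernel="linear")),
                  ("5NN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"{name}: leave-one-out accuracy = {acc:.2f}")
```

With real data, X would come from the selected voxels and time subinterval rather than random draws; the labels here are random, so accuracy hovers near chance.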
Study 1: Pictures and Sentences

[Xuerui Wang and Stefan Niculescu]

  • Trial: read sentence, view picture, answer whether sentence describes picture
  • Picture presented first in half of trials, sentence first in the other half
  • Image every 500 msec
  • 12 normal subjects
  • Three possible objects: star, dollar, plus
  • Collected by Just et al.

[Slide 13: example stimulus symbols — plus, dollar, star]

Is Subject Viewing Picture or Sentence?
  • Learn fMRI(t, …, t+15) → {Picture, Sentence}
    • 40 training trials of each type (40 pictures and 40 sentences)
    • 7 ROIs
  • Training methods:
    • K Nearest Neighbor
    • Support Vector Machine
    • Naïve Bayes
Is Subject Viewing Picture or Sentence?
  • SVMs and GNB worked better than kNN
  • Results (leave-one-out) on picture-then-sentence data, sentence-then-picture data, and the two combined:
    • Random guessing = 50% accuracy
    • SVM using the pair of time slices at 5.0 and 5.5 sec after stimulus onset: 91% accuracy
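The pair-of-time-slices features behind that SVM result can be extracted as follows; a toy sketch, assuming one image every 500 msec so that the snapshots at 5.0 s and 5.5 s are indices 10 and 11 (array names and sizes are illustrative):

```python
import numpy as np

TR = 0.5                                   # seconds between fMRI images
# One trial: 16 snapshots x 300 voxels (synthetic stand-in data)
trial = np.random.default_rng(1).standard_normal((16, 300))

idx = [int(5.0 / TR), int(5.5 / TR)]       # snapshot indices 10 and 11
features = trial[idx].ravel()              # pair of time slices -> one vector
print(features.shape)                      # (600,)
```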
Error for Single-Subject Classifiers

Dataset \ Classifier | GNB  | SVM  | 1NN  | 3NN  | 5NN
SP                   | 0.10 | 0.11 | 0.13 | 0.12 | 0.10
PS                   | 0.20 | 0.17 | 0.38 | 0.31 | 0.26
SP + PS              | 0.29 | 0.32 | 0.43 | 0.41 | 0.37

  • 95% confidence intervals are 10%–15% wide
  • Accuracy of the default classifier is 50%
Training Cross-Subject Classifiers
  • Approach: define supervoxels based on anatomically defined regions of interest
    • Normalize per-voxel activity for each subject
      • Each value is now scaled to [0, 1]
    • Abstract to seven brain-region supervoxels
    • 16 snapshots for each supervoxel
  • Train on n−1 subjects, test on the nth
    • Leave-one-subject-out cross-validation
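The supervoxel abstraction and leave-one-subject-out evaluation above can be sketched as follows. Everything here is synthetic: the ROI assignments, subject counts, and labels are made up for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(2)
n_subj, trials_per_subj, n_voxels, n_snap, n_roi = 12, 20, 70, 16, 7
roi_of_voxel = rng.integers(0, n_roi, size=n_voxels)   # hypothetical ROI labels

X_rows, y, groups = [], [], []
for s in range(n_subj):
    data = rng.standard_normal((trials_per_subj, n_snap, n_voxels))
    # Normalize each voxel to [0, 1] within this subject
    lo, hi = data.min(axis=(0, 1)), data.max(axis=(0, 1))
    data = (data - lo) / (hi - lo)
    # Average voxels within each ROI -> (trials, 16 snapshots, 7 supervoxels)
    sv = np.stack([data[:, :, roi_of_voxel == r].mean(axis=2)
                   for r in range(n_roi)], axis=2)
    X_rows.append(sv.reshape(trials_per_subj, -1))      # 16 * 7 = 112 features
    y.extend(rng.integers(0, 2, size=trials_per_subj))
    groups.extend([s] * trials_per_subj)

X = np.vstack(X_rows)
# Leave-one-subject-out: each fold holds out all trials of one subject
acc = cross_val_score(GaussianNB(), X, np.array(y),
                      groups=np.array(groups), cv=LeaveOneGroupOut()).mean()
print(f"leave-one-subject-out accuracy = {acc:.2f}")
```

The per-subject normalization and ROI averaging are what make trials from different subjects comparable; the labels being random, the accuracy here stays near chance.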
Error for Cross-Subject Classifiers

Dataset \ Classifier | GNB  | SVM  | 1NN  | 3NN  | 5NN
SP                   | 0.14 | 0.13 | 0.15 | 0.13 | 0.11
PS                   | 0.20 | 0.22 | 0.26 | 0.24 | 0.21
SP + PS              | 0.30 | 0.25 | 0.36 | 0.33 | 0.32

  • 95% confidence intervals approximately 5% wide
  • Accuracy of the default classifier is 50%
Study 2: Word Categories

[Francisco Pereira]

  • Family members
  • Occupations
  • Tools
  • Kitchen items
  • Dwellings
  • Building parts
  • 4-legged animals
  • Fish
  • Trees
  • Flowers
  • Fruits
  • Vegetables
Word Categories Study
  • Ten neurologically normal subjects
  • Stimulus:
    • 12 blocks of words:
      • Category name (2 sec)
      • Word (400 msec), blank screen (1200 msec); answer
      • Word (400 msec), blank screen (1200 msec); answer
      • …
    • Subject answers whether each word is in the category
    • 32 words per block, nearly all in the category
    • Category blocks interspersed with 5 fixation blocks
Training Classifier for Word Categories

Learn fMRI(t) → word-category(t)

  • fMRI(t) = 8470 to 11,136 voxels, depending on subject

Training methods:

  • train ten single-subject classifiers
  • kNN (k = 1,3,5)
  • Gaussian Naïve Bayes: learns P(fMRI(t) | word-category)
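A minimal sketch of the Gaussian Naïve Bayes model named above, which fits an independent Gaussian per voxel per class. The data here are synthetic stand-ins; real fMRI(t) had 8,470 to 11,136 voxels per subject.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
n_classes, per_class, n_voxels = 12, 30, 200            # toy sizes
# Synthetic "fMRI(t)" vectors whose per-voxel means shift by class
X = np.vstack([rng.normal(loc=c * 0.1, size=(per_class, n_voxels))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

gnb = GaussianNB().fit(X, y)
# predict_proba gives P(category | fMRI(t)); sorting it ranks the categories
ranked = np.argsort(-gnb.predict_proba(X[:1])[0])
print(ranked[:3])
```

Because GNB outputs a full posterior over categories, it naturally produces the ranked list of classes used in the evaluation on the next slide.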
Study 2: Results

Dataset \ Classifier | GNB  | 1NN  | 3NN  | 5NN
Words                | 0.10 | 0.40 | 0.40 | 0.40

Classifier outputs a ranked list of classes.

Evaluate by the fraction of classes ranked ahead of the true class:

  • 0 = perfect, 0.5 = random, 1.0 = unbelievably poor
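The ranked-list metric above can be computed as follows; a sketch, with made-up class probabilities for twelve word categories:

```python
import numpy as np

def rank_error(probs, true_class):
    """Fraction of classes ranked strictly ahead of the true class:
    0 = perfect, 0.5 = random guessing, 1.0 = worst possible."""
    probs = np.asarray(probs)
    ahead = np.sum(probs > probs[true_class])
    return ahead / (len(probs) - 1)

# Hypothetical posterior over 12 categories; the true class is ranked 3rd
probs = np.array([0.30, 0.25, 0.15, 0.10, 0.05, 0.04,
                  0.03, 0.03, 0.02, 0.01, 0.01, 0.01])
print(rank_error(probs, true_class=2))   # 2 classes ahead of 11 -> about 0.18
```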
Study 3: Syntactic Ambiguity

[Rebecca Hutchinson]

Is subject reading ambiguous or unambiguous sentence?

  • “The experienced soldiers warned about the dangers conducted the midnight raid.”
  • “The experienced soldiers spoke about the dangers before the midnight raid.”
Study 3: Results
  • 10 examples, 4 subjects
  • Almost random results if no feature selection used
  • With feature selection:
    • SVM: 77% accuracy
    • GNB: 75% accuracy
    • 5NN: 72% accuracy
Feature Selection
  • Five feature selection methods:
    • All (all voxels available)
    • Active (n most active available voxels according to a t-test)
    • RoiActive (n most active voxels in each ROI)
    • RoiActiveAvg (average of the n most active voxels in each ROI)
    • Disc (n most discriminating voxels according to a trained classifier)
  • Active works best
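The "Active" method above can be sketched with a two-sample t-test of task blocks against fixation blocks, keeping the n voxels with the largest absolute t-statistic. All data and sizes here are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_task, n_fix, n_voxels, n_keep = 40, 20, 500, 50
task = rng.standard_normal((n_task, n_voxels))
task[:, :60] += 0.8                        # plant 60 truly "active" voxels
fixation = rng.standard_normal((n_fix, n_voxels))

# t-test each voxel: task activity vs. fixation baseline
t, _ = stats.ttest_ind(task, fixation, axis=0)
active = np.argsort(-np.abs(t))[:n_keep]   # indices of the n most active voxels
print(sorted(active[:5]))
```

The RoiActive and RoiActiveAvg variants would apply the same scoring within each anatomical ROI before selecting or averaging.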
Feature Selection

Dataset     | Feature Selection | GNB  | SVM  | 1NN  | 3NN  | 5NN
PictureSent | All               | 0.29 | 0.32 | 0.43 | 0.41 | 0.37
PictureSent | Active            | 0.16 | 0.09 | 0.20 | 0.18 | 0.19
Words       | All               | 0.10 | N/A  | 0.40 | 0.40 | 0.40
Words       | Active            | 0.08 | N/A  | 0.30 | 0.20 | 0.16
SyntAmb     | All               | 0.43 | 0.38 | 0.50 | 0.46 | 0.47
SyntAmb     | Active            | 0.25 | 0.23 | 0.29 | 0.29 | 0.28

Summary
  • Successful training of classifiers for instantaneous cognitive state in three studies
  • Cross subject classifiers trained by abstracting to anatomically defined ROIs
  • Feature selection and abstraction are essential
Research Opportunities
  • Learning temporal models
    • HMM’s, Temporal Bayes nets,…
  • Discovering useful data abstractions
    • ICA, PCA, hidden layers,…
  • Linking cognitive states to cognitive models
    • ACT-R, CAPS
  • Merging data from multiple sources
    • fMRI, ERP, reaction times, …