
Learning fMRI-Based Classifiers for Cognitive States

Stefan Niculescu

Carnegie Mellon University

April, 2003

Our Group: Tom Mitchell, Luis Barrios, Rebecca Hutchinson, Marcel Just, Francisco Pereira, Xuerui Wang


fMRI and Cognitive Modeling

Have:

  • First generative models:

    • Task → Cognitive state sequence → average fMRI(ROI)

    • Predict subject-independent, gross anatomical regions

    • Miss subject-to-subject variation, trial-to-trial variation

Want:

  • Much greater precision, and the reverse prediction:

    • <fMRI, behavioral data, stimulus> of a single subject, single trial → Cognitive state sequence


Cognitive task → Cognitive state sequence

“Virtual sensors” of cognitive state

Preliminary Experiments: Learning Virtual Sensors

  • Machine learning approach: train classifiers

    • fMRI(t, …, t+d) → CognitiveState

  • Fixed set of possible states

  • Trained per subject, per experiment

  • Time interval specified


Approach

  • Learn fMRI(t, …, t+k) → CognitiveState

  • Classifiers:

    • Gaussian Naïve Bayes, SVM, kNN

  • Feature selection/abstraction

    • Select subset of voxels (by signal, by anatomy)

    • Select subinterval of time

    • Average activities over space, time

    • Normalize voxel activities
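The slides list the classifiers but give no implementation. As a rough illustration only, the Gaussian Naïve Bayes variant named above can be sketched in a few lines of numpy; the feature matrix `X` (one flattened voxel-activity vector per trial) is a hypothetical stand-in for the real fMRI features:

```python
import numpy as np

def train_gnb(X, y):
    """Fit Gaussian Naive Bayes: per-class mean, variance, and prior.

    X: (n_trials, n_features) voxel-activity matrix; y: class labels.
    """
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # Small variance floor avoids division by zero for flat voxels
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, len(Xc) / len(X))
    return params

def predict_gnb(params, x):
    """Return the class with the highest log-posterior for feature vector x."""
    best, best_score = None, -np.inf
    for c, (mu, var, prior) in params.items():
        # Sum of per-feature Gaussian log-likelihoods plus log prior
        score = np.log(prior) - 0.5 * np.sum(
            np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        if score > best_score:
            best, best_score = c, score
    return best
```

The "naïve" independence assumption (one Gaussian per voxel, per class) is what makes this tractable with thousands of voxels and only tens of trials.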


Study 1: Pictures and Sentences

[Xuerui Wang and Stefan Niculescu]

  • Trial: read sentence, view picture, answer whether sentence describes picture

  • Picture presented first in half of trials, sentence first in other half

  • Image every 500 msec

  • 12 normal subjects

  • Three possible objects: star, dollar, plus

  • Collected by Just et al.

[Slide figure: example symbol stimulus (e.g. * and +) with a sentence-verification prompt]


Is Subject Viewing Picture or Sentence?

  • Learn fMRI(t, …, t+15) → {Picture, Sentence}

    • 40 training trials (40 picture and 40 sentence presentations)

    • 7 ROIs

  • Training methods:

    • K Nearest Neighbor

    • Support Vector Machine

    • Naïve Bayes


Is Subject Viewing Picture or Sentence?

  • SVMs and GNB worked better than kNN

  • Results (leave-one-out) on sentence-then-picture (SP), picture-then-sentence (PS), and combined data

    • Random guess = 50% accuracy

    • SVM using the pair of time slices at 5.0 and 5.5 sec after stimulus: 91% accuracy
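The leave-one-out protocol behind these numbers can be sketched as below. The nearest-class-mean classifier is only a hypothetical stand-in to keep the loop self-contained; it is not one of the classifiers from the study:

```python
import numpy as np

def train_ncm(X, y):
    """Nearest-class-mean 'model': one mean feature vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_ncm(means, x):
    """Predict the class whose mean is closest to x."""
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

def loo_accuracy(X, y, train_fn, predict_fn):
    """Leave-one-out: hold out each trial, train on the rest, test on it."""
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i          # all trials except trial i
        model = train_fn(X[mask], y[mask])
        correct += (predict_fn(model, X[i]) == y[i])
    return correct / len(X)
```

With only 40 trials per subject, leave-one-out wastes no data on a held-out split, at the cost of retraining once per trial.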


Error for Single-Subject Classifiers

Dataset \ Classifier    GNB     SVM     1NN     3NN     5NN
SP                      0.10    0.11    0.13    0.12    0.10
PS                      0.20    0.17    0.38    0.31    0.26
SP + PS                 0.29    0.32    0.43    0.41    0.37

  • 95% confidence intervals are 10%–15% wide

  • Accuracy of default classifier is 50%



Training Cross-Subject Classifiers

  • Approach: define supervoxels based on anatomically defined regions of interest

    • Normalize per-voxel activity for each subject

      • Each value now scaled into [0, 1]

    • Abstract to seven brain-region supervoxels

    • 16 snapshots for each supervoxel

  • Train on n−1 subjects, test on the nth

    • Leave-one-subject-out cross-validation
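The supervoxel abstraction described above can be sketched as follows, assuming a per-voxel ROI labeling; the array shapes and the `roi_of_voxel` argument are illustrative, not the study's actual data layout:

```python
import numpy as np

def to_supervoxels(data, roi_of_voxel):
    """Abstract one subject's voxel data to ROI supervoxels.

    data: (n_snapshots, n_voxels) activity, e.g. 16 snapshots per trial.
    roi_of_voxel: (n_voxels,) integer ROI label per voxel, e.g. 7 ROIs.
    Returns (n_snapshots, n_rois): each voxel first scaled into [0, 1]
    for this subject, then averaged over the voxels of each ROI.
    """
    lo = data.min(axis=0)
    span = data.max(axis=0) - lo
    span[span == 0] = 1.0                      # guard constant voxels
    scaled = (data - lo) / span                # per-voxel [0, 1] scaling
    rois = np.unique(roi_of_voxel)
    return np.column_stack(
        [scaled[:, roi_of_voxel == r].mean(axis=1) for r in rois])
```

Because the result lives in a fixed 7-ROI space rather than each subject's own voxel grid, classifiers trained on n−1 subjects can be applied directly to the nth.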


Error for Cross-Subject Classifiers

Dataset \ Classifier    GNB     SVM     1NN     3NN     5NN
SP                      0.14    0.13    0.15    0.13    0.11
PS                      0.20    0.22    0.26    0.24    0.21
SP + PS                 0.30    0.25    0.36    0.33    0.32

  • 95% confidence intervals approximately 5% wide

  • Accuracy of default classifier is 50%


Study 2: Word Categories

[Francisco Pereira]

Twelve word categories:

  • Family members

  • Occupations

  • Tools

  • Kitchen items

  • Dwellings

  • Building parts

  • 4-legged animals

  • Fish

  • Trees

  • Flowers

  • Fruits

  • Vegetables


Word Categories Study

  • Ten neurologically normal subjects

  • Stimulus:

    • 12 blocks of words:

      • Category name (2 sec)

      • Word (400 msec), Blank screen (1200 msec); answer

      • Word (400 msec), Blank screen (1200 msec); answer

    • Subject answers whether each word is in the category

    • 32 words per block, nearly all in the category

    • Category blocks interspersed with 5 fixation blocks


Training Classifier for Word Categories

Learn fMRI(t) → word-category(t)

  • fMRI(t) = 8470 to 11,136 voxels, depending on subject

    Training methods:

  • Train ten single-subject classifiers

  • kNN (k = 1, 3, 5)

  • Gaussian Naïve Bayes: P(fMRI(t) | word-category)
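The kNN variant above can be sketched in a few lines; Euclidean distance over the full voxel vector is an assumption here, since the slide does not name the distance metric:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify snapshot x by majority vote among its k nearest
    training snapshots in voxel space."""
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance per trial
    nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```

With 8,470 to 11,136 voxels per snapshot, distances are dominated by uninformative voxels, which is one plausible reason kNN trails GNB on this study and why the feature selection discussed later helps it most.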


Study 2: Results

Dataset \ Classifier    GNB     1NN     3NN     5NN
Words                   0.10    0.40    0.40    0.40

  • Classifier outputs a ranked list of classes

  • Evaluate by the fraction of classes ranked ahead of the true class

    • 0 = perfect, 0.5 = random, 1.0 = worst possible
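This rank-based error can be computed as below; representing the classifier output as a dict of per-class scores is an illustrative choice, not the study's actual interface:

```python
def rank_error(scores, true_class):
    """Fraction of classes ranked ahead of the true class.

    scores: dict mapping class -> classifier score (higher = more likely).
    Returns 0 if the true class is ranked first, 0.5 in expectation for
    random scores, 1.0 if the true class is ranked last.
    """
    others = [c for c in scores if c != true_class]
    ahead = sum(scores[c] > scores[true_class] for c in others)
    return ahead / len(others)
```

Unlike plain accuracy, this metric gives partial credit when the true category is near, but not at, the top of a 12-way ranking.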


Study 3: Syntactic Ambiguity

[Rebecca Hutchinson]

Is subject reading ambiguous or unambiguous sentence?

  • “The experienced soldiers warned about the dangers conducted the midnight raid.”

  • “The experienced soldiers spoke about the dangers before the midnight raid.”


Study 3: Results

  • 10 examples, 4 subjects

  • Almost random results if no feature selection used

  • With feature selection:

    • SVM - 77% accuracy

    • GNB - 75% accuracy

    • 5NN – 72% accuracy


Feature Selection

  • Five feature selection methods:

    • All (all voxels available)

    • Active (n most active available voxels according to a t-test)

    • RoiActive (n most active voxels in each ROI)

    • RoiActiveAvg (average of the n most active voxels in each ROI)

    • Disc (n most discriminating voxels according to a trained classifier)

  • Active works best
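The "Active" criterion can be sketched as follows, assuming a two-sample t statistic comparing task snapshots against fixation snapshots; the slide names a t-test but not its exact form, so the pooled-vs-unpooled variance choice here is an assumption:

```python
import numpy as np

def most_active_voxels(task, fixation, n):
    """'Active' selection: the n voxels whose task-vs-fixation t statistic
    is largest in absolute value.

    task: (n_task_snapshots, n_voxels) activity during the task.
    fixation: (n_fix_snapshots, n_voxels) activity during fixation.
    Returns the indices of the n selected voxels.
    """
    m1, m2 = task.mean(axis=0), fixation.mean(axis=0)
    v1 = task.var(axis=0, ddof=1)
    v2 = fixation.var(axis=0, ddof=1)
    # Welch-style t statistic per voxel; epsilon guards zero variance
    t = (m1 - m2) / np.sqrt(v1 / len(task) + v2 / len(fixation) + 1e-12)
    return np.argsort(-np.abs(t))[:n]
```

Note the criterion is activity, not class-discrimination; the slides report that this simple filter nevertheless beats the classifier-based "Disc" selection.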


Feature Selection

Dataset       Feature Selection    GNB     SVM     1NN     3NN     5NN
PictureSent   All                  0.29    0.32    0.43    0.41    0.37
PictureSent   Active               0.16    0.09    0.20    0.18    0.19
Words         All                  0.10    N/A     0.40    0.40    0.40
Words         Active               0.08    N/A     0.30    0.20    0.16
SyntAmb       All                  0.43    0.38    0.50    0.46    0.47
SyntAmb       Active               0.25    0.23    0.29    0.29    0.28


Summary

  • Successful training of classifiers for instantaneous cognitive state in three studies

  • Cross subject classifiers trained by abstracting to anatomically defined ROIs

  • Feature selection and abstraction are essential


Research Opportunities

  • Learning temporal models

    • HMMs, temporal Bayes nets, …

  • Discovering useful data abstractions

    • ICA, PCA, hidden layers,…

  • Linking cognitive states to cognitive models

    • ACT-R, CAPS

  • Merging data from multiple sources

    • fMRI, ERP, reaction times, …


End of Talk