
Preliminary Experiments: Learning Virtual Sensors



  1. Preliminary Experiments: Learning Virtual Sensors
  • Machine learning approach: train classifiers
  • fMRI(t, …, t+d) → CognitiveState
  • Fixed set of possible states
  • Trained per subject, per experiment
  • Time interval specified
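As a toy illustration of the "train a classifier from fMRI windows to a cognitive state" idea, the sketch below fits a Gaussian Naïve Bayes model (one of the classifiers used later in the deck). All data here is synthetic random noise standing in for fMRI(t, …, t+d) windows; the shapes, class count, and effect size are illustrative assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI windows: 40 trials x 50 voxels, two states.
n_per_class, n_voxels = 20, 50
X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_voxels)),
               rng.normal(0.8, 1.0, (n_per_class, n_voxels))])
y = np.array([0] * n_per_class + [1] * n_per_class)

def gnb_fit(X, y):
    """Per-class voxel means/variances for a Gaussian Naive Bayes model."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) + 1e-6 for c in classes])
    return classes, mu, var

def gnb_predict(model, X):
    """Pick the class maximizing the Gaussian log-likelihood (uniform priors)."""
    classes, mu, var = model
    ll = -0.5 * (((X[:, None, :] - mu) ** 2) / var
                 + np.log(2 * np.pi * var)).sum(axis=2)
    return classes[ll.argmax(axis=1)]

model = gnb_fit(X, y)
acc = (gnb_predict(model, X) == y).mean()
```

A real virtual sensor would be trained per subject and per experiment on labeled scan intervals, as the slide notes; this sketch only shows the classifier mechanics.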

  2. Approach
  • Learn fMRI(t, …, t+k) → CognitiveState
  • Classifiers: Gaussian Naïve Bayes, SVM, kNN
  • Feature selection/abstraction:
    • Select subset of voxels (by signal, by anatomy)
    • Select subinterval of time
    • Average activities over space, time
    • Normalize voxel activities
    • …
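The feature-abstraction steps above (time subinterval, averaging, per-voxel normalization, voxel selection) can be sketched on synthetic data as follows. Every shape and threshold here is an assumption for illustration; the deck does not specify these details.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trials: trials x timepoints x voxels; only voxels 0-4 carry signal.
X = rng.normal(size=(30, 8, 100))
y = rng.integers(0, 2, size=30)
X[y == 1, :, :5] += 1.0

# Abstract over time: average activity in a subinterval of each trial.
X_avg = X[:, 2:6, :].mean(axis=1)

# Normalize each voxel's activity to zero mean, unit variance.
X_z = (X_avg - X_avg.mean(axis=0)) / (X_avg.std(axis=0) + 1e-9)

# Select the n voxels with the strongest between-class contrast.
n = 5
contrast = np.abs(X_z[y == 0].mean(axis=0) - X_z[y == 1].mean(axis=0))
selected = np.argsort(contrast)[-n:]
```

With this contrast-based selection, the informative voxels (here, the first five by construction) dominate the chosen subset.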

  3. Study 1: Word Categories [Francisco Pereira]
  • Family members
  • Occupations
  • Tools
  • Kitchen items
  • Dwellings
  • Building parts
  • 4-legged animals
  • Fish
  • Trees
  • Flowers
  • Fruits
  • Vegetables

  4. Word Categories Study
  • Ten neurologically normal subjects
  • Stimulus: 12 blocks of words:
    • Category name (2 sec)
    • Word (400 msec), blank screen (1200 msec); answer
    • Word (400 msec), blank screen (1200 msec); answer
    • …
  • Subject answers whether each word is in the category
  • 32 words per block, nearly all in category
  • Category blocks interspersed with 5 fixation blocks

  5. Training Classifier for Word Categories
  Learn fMRI(t) → word-category(t)
  • fMRI(t) = 8,470 to 11,136 voxels, depending on subject
  Feature selection: select n voxels
  • Best single-voxel classifiers
  • Strongest contrast between fixation and some word category
  • Strongest contrast, spread equally over ROIs
  • Randomly
  Training method:
  • Train ten single-subject classifiers
  • Gaussian Naïve Bayes → P(fMRI(t) | word-category)
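The "best single-voxel classifiers" selection strategy can be illustrated as follows: score each voxel by how well a classifier using that voxel alone separates fixation from word images, then keep the top scorers. The data, the mean-midpoint threshold rule, and the responsive voxel index are all toy assumptions, not the study's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic word-category vs. fixation images; voxel 7 responds to words.
X_word = rng.normal(size=(50, 20))
X_fix = rng.normal(size=(50, 20))
X_word[:, 7] += 2.0

def single_voxel_accuracy(a, b):
    """Accuracy of a mean-midpoint threshold classifier, per voxel."""
    thr = (a.mean(axis=0) + b.mean(axis=0)) / 2
    acc = ((a > thr).mean(axis=0) + (b <= thr).mean(axis=0)) / 2
    return np.maximum(acc, 1 - acc)   # allow either sign of contrast

scores = single_voxel_accuracy(X_word, X_fix)
best = int(scores.argmax())
```

Ranking voxels by these scores and taking the top n yields the kind of signal-driven voxel subset the slide describes.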

  6. Learned Bayes Models - Means: P(BrainActivity | WordCategory = People)

  7. Learned Bayes Models - Means: P(BrainActivity | WordClass)
  • Accuracy: 85% (People words vs. Animal words)

  8. Results
  • Classifier outputs a ranked list of classes
  • Evaluate by the fraction of classes ranked ahead of the true class: 0 = perfect, 0.5 = random, 1.0 = unbelievably poor
  • Try abstracting 12 categories to 6 categories, e.g., combine “Family Members” with “Occupations”
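The rank-based evaluation measure described above is straightforward to compute; a minimal sketch (function name `rank_error` is my own):

```python
import numpy as np

def rank_error(scores, true_class):
    """Fraction of classes ranked ahead of the true class:
    0 = perfect, 0.5 = expected for random scores, 1.0 = worst possible."""
    order = np.argsort(-scores)                    # classes, best score first
    rank = int(np.where(order == true_class)[0][0])
    return rank / (len(scores) - 1)

err_best = rank_error(np.array([0.1, 0.9, 0.2]), true_class=1)   # 0.0
err_worst = rank_error(np.array([0.9, 0.5, 0.1]), true_class=2)  # 1.0
```

Unlike plain accuracy, this measure gives partial credit when the true class is ranked near, but not at, the top, which is useful with 12 fine-grained categories.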

  9. Impact of Feature Selection

  10. “Zero Signal” Learning Setting
  • Goal: learn f: X → Y, or P(Y|X)
  • Given:
    • Training examples <Xi, Yi>, where Xi = Si + Ni, with signal Si ~ P(S | Y=i) and noise Ni ~ Pnoise
    • Observed noise with zero signal: N0 ~ Pnoise
  • Class 1 observations: X1 = S1 + N1
  • Class 2 observations: X2 = S2 + N2
  • Zero-signal observations (fixation): N0
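One way to see why the zero-signal observations help: they let us estimate properties of Pnoise directly (here, just its mean) and remove them from the class observations. This simulation is my own minimal illustration of the setting, with arbitrary toy signal and noise parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 10                                     # toy voxel count

def noise(n):
    # Noise shared by all conditions, with a nonzero baseline of 0.5.
    return rng.normal(loc=0.5, scale=1.0, size=(n, d))

s1 = np.zeros(d)                           # class-1 signal (none)
s2 = np.full(d, 0.3)                       # class-2 signal (weak)
X1 = s1 + noise(100)                       # class-1 observations
X2 = s2 + noise(100)                       # class-2 observations
N0 = noise(500)                            # fixation: zero signal, pure noise

# The zero-signal data gives a direct estimate of the noise baseline,
# which we subtract to recover the per-class signal means.
baseline = N0.mean(axis=0)
S1_hat = (X1 - baseline).mean(axis=0)
S2_hat = (X2 - baseline).mean(axis=0)
```

Without N0, the noise baseline and the class signals would be confounded in the labeled examples alone.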

  11. [Haxby et al., 2001]

  12. Study 1: Summary
  • Able to classify a single fMRI image by word-category block
  • Feature selection important
  • Is the classifier learning word category, or something else related to time?
  • Accurate across ten subjects
  • Relevant voxels in similar locations across subjects
  • Locations compatible with earlier studies
  • New experimental data will answer definitively

  13. Study 2: Pictures and Sentences [Xuerui Wang and Stefan Niculescu]
  • Trial: read sentence, view picture, answer whether the sentence describes the picture
  • Picture presented first in half of trials, sentence first in the other half
  • Image every 500 msec
  • 12 normal subjects
  • Three possible objects: star, dollar, plus
  • Collected by Just et al.

  14. It is true that the star is above the plus?

  15. [Picture stimulus: the plus drawn above the star]

  16. Is Subject Viewing Picture or Sentence?
  • Learn fMRI(t, …, t+15) → {Picture, Sentence}
  • 40 training trials (40 pictures and 40 sentences)
  • 7 ROIs
  • Training methods:
    • k-Nearest Neighbor
    • Support Vector Machine
    • Naïve Bayes
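A minimal sketch of one of the listed training methods, k-Nearest Neighbor, evaluated leave-one-out as in the results that follow. The feature dimensionality (7 ROIs × 16 time points), trial counts, and class separation are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins for fMRI(t, …, t+15) features over 7 ROIs.
Xp = rng.normal(0.0, 1.0, (40, 7 * 16))   # 40 picture trials
Xs = rng.normal(1.0, 1.0, (40, 7 * 16))   # 40 sentence trials
X = np.vstack([Xp, Xs])
y = np.array([0] * 40 + [1] * 40)         # 0 = Picture, 1 = Sentence

def loo_knn_accuracy(X, y, k=3):
    """Leave-one-out accuracy of a k-nearest-neighbor classifier."""
    correct = 0
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf                  # hold out the test trial
        votes = y[np.argsort(dist)[:k]]
        correct += np.bincount(votes).argmax() == y[i]
    return correct / len(X)

acc = loo_knn_accuracy(X, y)
```

Leave-one-out evaluation matters here because only 80 labeled trials are available per subject.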

  17. Is Subject Viewing Picture or Sentence?
  • Support Vector Machine worked better on average
  • Results (leave-one-out) on picture-then-sentence and sentence-then-picture data
  • Random guess = 50% accuracy
  • SVM using a pair of time slices at 5.0 and 5.5 sec after stimulus: 91% accuracy

  18. Accuracy for Single-Subject Classifiers

  19. Can We Train Subject-Independent Classifiers?

  20. Training Cross-Subject Classifiers
  • Approach: define supervoxels based on anatomically defined regions of interest
  • Abstract to seven brain-region supervoxels
  • Train on n-1 subjects, test on the nth
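The supervoxel abstraction makes subjects with different voxel counts comparable: average the voxels inside each anatomical ROI, so every subject is described by the same seven features, then train on n-1 subjects and test on the held-out one. The sketch below uses synthetic subjects and a simple nearest-mean classifier of my own choosing; the deck does not specify the classifier used at this step.

```python
import numpy as np

rng = np.random.default_rng(5)
n_rois = 7

def make_subject(n_voxels):
    """Synthetic subject: trials x voxels, plus a voxel-to-ROI assignment."""
    roi_of_voxel = rng.integers(0, n_rois, n_voxels)
    y = rng.integers(0, 2, 60)
    X = rng.normal(size=(60, n_voxels))
    X[y == 1] += (roi_of_voxel < 3) * 1.0   # shared effect in ROIs 0-2
    return X, y, roi_of_voxel

def to_supervoxels(X, roi_of_voxel):
    """Average voxel activity within each ROI -> 7 supervoxel features."""
    return np.column_stack([X[:, roi_of_voxel == r].mean(axis=1)
                            for r in range(n_rois)])

subjects = [make_subject(n) for n in (800, 900, 1000)]
data = [(to_supervoxels(X, roi), y) for X, y, roi in subjects]

# Leave one subject out: train a nearest-mean classifier on the rest.
Xtr = np.vstack([d[0] for d in data[:-1]])
ytr = np.concatenate([d[1] for d in data[:-1]])
Xte, yte = data[-1]
means = np.array([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
pred = np.linalg.norm(Xte[:, None, :] - means, axis=2).argmin(axis=1)
acc = (pred == yte).mean()
```

Because the effect is expressed at the ROI level rather than at matched voxel coordinates, the classifier transfers to the unseen subject.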

  21. Accuracy for Cross-Subject GNB Classifier

  22. Accuracy for Cross-Subject and Cross-Context Classifier

  23. Possible ANN to discover an intermediate data abstraction across multiple subjects. Each bank of inputs (Subject 1, Subject 2, Subject 3) corresponds to the voxel inputs for a particular subject; the trained hidden layer provides a low-dimensional data abstraction (a learned cross-subject representation) shared by all subjects, feeding the output classification. We propose to develop new algorithms to train such networks to discover multi-subject classifiers.
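The proposed architecture can be sketched structurally: one input weight bank per subject (each with its own voxel dimensionality) projecting into a single shared hidden layer, followed by shared output weights. This is only a forward-pass skeleton with untrained random weights; the training algorithm is exactly what the slide proposes to develop, so none is shown.

```python
import numpy as np

rng = np.random.default_rng(6)
n_hidden, n_classes = 10, 2
voxels_per_subject = [800, 900, 1000]    # subjects differ in voxel count

# One input bank per subject maps that subject's voxels into a
# shared low-dimensional hidden representation.
input_banks = [rng.normal(scale=0.01, size=(v, n_hidden))
               for v in voxels_per_subject]
W_out = rng.normal(scale=0.1, size=(n_hidden, n_classes))   # shared

def forward(x, subject):
    """Classify one image from the given subject via the shared hidden layer."""
    hidden = np.tanh(x @ input_banks[subject])   # subject-specific encoder
    return hidden @ W_out                        # shared classifier

x = rng.normal(size=voxels_per_subject[1])       # one image from subject 2
logits = forward(x, subject=1)
```

The key design point is that only the input banks are per-subject; everything downstream of the hidden layer is shared, which is what forces the hidden layer to become a cross-subject abstraction.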
