
Emoscill Outline







  1. Emoscill Outline

  2. Overview • Motivations • The role of top-down processing during perception • Mood influences during perception • Study design and aims • Results so far • Mood induction check • Behavioral Data • ROI analysis

  3. Overview • Motivations • The role of top-down processing during perception • Mood influences during perception • Study design and aims • Results so far • Mood induction check • Behavioral Data • ROI analysis

  4. Preview/redux of this section • Top-down feedback conveying a priori knowledge can facilitate visual perception • We are examining a particular top-down mechanism in which low spatial frequency (LSF) information about a stimulus is rapidly conveyed to prefrontal regions. This LSF information triggers predictions about stimulus identity which guide ongoing processing in object recognition areas in inferior temporal cortex • The purpose of the present study is to examine if differences in participant mood facilitate or inhibit this top-down mechanism. We are also interested in any other mood effects on brain function during perception.

  5. Overview • Motivations • The role of top-down processing during perception • Mood influences during perception • Study design and aims • Results so far • Mood induction check • Behavioral Data • ROI analysis

  6. How Does the Brain Recognize Objects? One line of research addressing this question has focused on stimulus-driven analyses in the ventral visual stream

  7. How Does the Brain Recognize Objects? Basic features of the visual stimulus are extracted in V1. This information is refined to higher levels of abstraction along the ventral visual stream. A visual representation of the input is formed (Tanaka, 1996) and matched with a representation stored in memory. Poggio & Bizzi 2004

  8. How Does the Brain Recognize Objects? Basic features of the visual stimulus are extracted in V1. This information is refined to higher levels of abstraction along the ventral visual stream. A visual representation of the input is formed (Tanaka, 1996) and matched with a representation stored in memory. However, this stimulus-driven narrative is incomplete and cannot fully explain our sophisticated visual ability Poggio & Bizzi 2004

  9. Some feats achieved by the visual brain…. • The brain… • can distinguish salient ‘figures’ from ‘ground’ • can recognize occluded objects with ease • can identify novel objects as exemplars of previously learned categories • can do all of this in well under half a second

  10. Some feats achieved by the visual brain…. • The brain… • can distinguish salient ‘figures’ from ‘ground’ • can recognize occluded objects with ease • can identify novel objects as exemplars of previously learned categories • can do all of this in well under half a second These challenges would overwhelm a purely data-driven system operating without any a priori assumptions.

  11. The role of top-down processing • It has long been suggested that the brain must be using a priori knowledge (predictions) to guide perception. Behavioral paradigms have provided empirical support (e.g. Biederman 1973, Palmer 1975, many others) • The feedback connections found throughout the ventral stream provide an anatomical basis for this facilitation (e.g. Rempel-Clower & Barbas, 2000) • Recently, functional imaging techniques have shed light on specific mechanisms for various top-down facilitation processes (e.g. Engel, Fries, & Singer, 2001; Ruff…& Driver 2006; Kveraga et al., 2007)

  12. Top-down facilitation based on low spatial frequencies • Bar (2003) has proposed the existence of a specific top-down mechanism for facilitating object recognition based on low spatial frequencies (LSFs) extracted from visual stimuli • Subsequent empirical support described in Bar, Ghuman, Boshyan…, 2006; Kveraga, Boshyan, & Bar, 2007

  13. The Proposal • An illustration of the top-down facilitation model. A partially processed, LSF image of the visual input is rapidly projected to OFC from early visual regions, while detailed, slower analysis of the visual input is performed along the ventral visual stream. This ‘gist’ image activates predictions about candidate objects similar to the image in their LSF appearance, which are fed back to the ventral object recognition regions to facilitate bottom-up processing

  14. The Proposal Magnocellular cells of the dorsal visual stream may convey information from early visual areas to OFC, consistent with their sensitivity to coarse visual information and rapid conduction speeds. In contrast, the parvocellular cells of the ventral visual stream are known to convey detailed visual information more slowly.

  15. The Proposal • Magnocellular cells: coarse spatial detail, fast conduction speeds, achromatic, sensitive to luminance • Parvocellular cells: fine spatial detail, slower conduction speeds, color sensitive, insensitive to luminance

  16. Overview • Motivations • The role of top-down processing during perception • Mood influences during perception • Study design and aims • Results so far • Mood induction check • Behavioral Data • ROI analysis

  17. LSF-based Top-Down Facilitation and Mood • In the proposed LSF top-down model, the ability of the brain to engage in associative processing is key • When the global information extracted from the visual stimulus reaches OFC, the brain must essentially answer the question “what is this like?” i.e. (less anthropomorphically) LSF information reaching OFC may trigger the retrieval of associated object representations from memory • Previous research suggests that mood may affect such associative processing

  18. Mood and Associative Processing • Positive mood promotes and negative mood impairs associative processing (Storbeck & Clore, 2008; Bless et al., 1996; Isen et al., 1985; Challis & Krane, 1988) • This literature originally inspired the hypothesis that positive mood would enhance and negative mood inhibit the top-down facilitation described above by influencing the associative, predictive processing occurring in OFC • However, a recent study (Huntsinger, Clore, & Bar-Anan, 2010) suggests the proposed link between mood and associations isn’t so clear-cut. And so far our data suggest greater response amplitudes in OFC & ventral visual areas and greater OFC/ventral stream phaselocking under conditions of negative mood

  19. Overview • Motivations • The role of top-down processing during perception • Mood influences during perception • Study design and aims • Results so far • Mood induction check • Behavioral Data • ROI analysis

  20. Preview/redux of this section • Subjects performed a simple object recognition task in which they had to decide whether or not objects presented on a projector screen would fit in a typical shoebox. This task was chosen simply because it required subjects to pay attention to the stimuli. • Object stimuli were primed by a near-identical stimulus specially designed to stimulate either the magnocellular cells believed to transmit information to OFC (‘m-biased’ stimuli) or the parvocellular cells of the ventral stream (‘p-biased’ stimuli; details to follow) • Three variables were manipulated • Participant mood (positive or negative) • Prime type (m biased, p biased, or none) • Stimulus onset asynchrony (SOA; number of milliseconds between the onset of the m or p prime and the ‘ordinary’ stimulus, 50 or 100 ms)

  21. Trial Timecourse (stimuli modified for visibility) • Trial elements: fixation (‘+’), M- or P-biased prime image, unbiased image, response period, intertrial interval • Durations shown on the slide: 500 ms, 500-1500 ms, 500 ms, 100 or 50 ms, 1200 ms • 5 total trial types: M primed, 100 ms SOA; P primed, 100 ms SOA; M primed, 50 ms SOA; P primed, 50 ms SOA; control condition (no prime)
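The five trial types can be enumerated from the 2 (prime) × 2 (SOA) crossing plus the unprimed control; a small sketch with illustrative labels:

```python
from itertools import product

# Crossing the two prime types with the two SOAs, then adding the
# unprimed control, yields the five trial types listed on the slide.
primes = ["M primed", "P primed"]
soas_ms = [100, 50]

trial_types = [f"{p}, {soa} ms SOA" for p, soa in product(primes, soas_ms)]
trial_types.append("Control (no prime)")

print(len(trial_types))  # 5
```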

  22. M and P biased stimuli • M biased stimuli: target differs from background in luminance but not color. Should selectively stimulate (to an extent) the magnocellular cells thought to convey LSF information to OFC, and, by extension, top-down processing • P biased stimuli: target differs from background in color but not luminance. Should selectively stimulate the parvocellular cells that dominate the ventral stream, and, by extension, bottom-up processing • Full spectrum targets: target differs from background in both color and luminance. No m or p bias. *Subjective luminance and color contrast detection thresholds were determined for each subject prior to the main task
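As a concrete illustration of the P-biased construction, a color-defined target can be matched to the background in approximate luminance. This sketch assumes the standard Rec. 601 luma weights; the actual stimuli used per-subject psychophysical thresholds, as noted above:

```python
def luma(rgb):
    """Approximate perceived luminance from RGB using Rec. 601 weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

background = (128, 128, 128)   # neutral grey
# Reddish target: raise R, then choose G so the overall luma matches
# the background (values here are illustrative, not the study's stimuli)
target = (180, 101.5, 128)

# P-biased: color differs, luminance (approximately) does not
assert abs(luma(target) - luma(background)) < 0.1
```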

  23. The Proposal • Magnocellular cells: coarse spatial detail, fast conduction speeds, achromatic, sensitive to luminance • Parvocellular cells: fine spatial detail, slower conduction speeds, color sensitive, insensitive to luminance

  24. Task Schedule (abbreviated) • Subject inserted into the MEG • Luminance and color contrast thresholds determined • Subjects complete PANAS #1 (a standardized mood measure) • Main task begins, composed of 5 runs each consisting of: • Mood induction period (1 min 30 sec of positive or negative music and images) with valence and arousal ratings (see next slide) • 25 ‘shoebox’ trials • Steps 1 & 2 repeated 3 additional times for 100 trials per run (thus, the entire task consists of 500 trials, 100 for each condition) • PANAS #2 • End of experiment
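The schedule's trial counts are easy to verify arithmetically; the variable names below are just illustrative:

```python
runs = 5
blocks_per_run = 4        # initial induction/trial block + 3 repeats
trials_per_block = 25
n_conditions = 5          # 2 primes x 2 SOAs + control

total_trials = runs * blocks_per_run * trials_per_block
print(total_trials)                    # 500
print(total_trials // n_conditions)    # 100 trials per condition
```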

  25. Valence and Arousal Ratings • Before and after every mood induction period, participants rated their valence (subjective feelings of positivity or negativity, x-axis above) and arousal (subjective energy level, y-axis above) on a scale of 0-10

  26. MEG Acquisition Details • 306-channel Neuromag Vectorview whole-head system (Elekta Neuromag Oy) housed in a three-layer magnetically shielded room (Imedco AG). • Participant head position was monitored using four head-position indicator (HPI) coils affixed to the subject’s head. The positions of the HPI coils on the head, as well as those of multiple points on the scalp, were recorded with a magnetic digitizer (Polhemus FastTrack 3D) in a head coordinate frame defined by anatomical landmarks. • Eye blinks were monitored with four electrooculogram (EOG) electrodes positioned above and beside the subjects’ eyes. Data collected by the MEG, EOG, and HPI sensors were sampled at 600 Hz, band-pass filtered in the range of 0.1–200 Hz, and stored for offline analysis. • So far, MEG data have been analyzed with the MNE software package (Hämäläinen 2005).

  27. Overview • Motivations • The role of top-down processing during perception • Mood influences during perception • Study design and aims • Results so far • Mood induction check • Behavioral Data • ROI analysis

  28. Aim 1: Determine if mood affects top-down information flow • As outlined, positive mood may facilitate LSF-based TD predictions, while negative mood may hamper this mechanism (as mentioned, preliminary results suggest that the reverse may in fact be true) • To test this, we can begin by comparing current flow in OFC and ventral stream areas across mood conditions • Increased early OFC amplitude for positive mood would suggest an increased prefrontal role during recognition • Increased ventral stream activity with negative mood (with little prefrontal activity) would suggest greater processing demands in the absence of top-down facilitation • Ideal finding: greater early visual→OFC and OFC→ventral stream phaselocking for positive mood, and greater early visual→ventral stream phaselocking for negative mood

  29. Aim 2: Characterize the time course of m & p information flow • The model proposes that magno cells convey LSF information to OFC while parvo cells convey finer grained information along ventral stream • We have already used MEG to show that LSF stimuli elicit a rapid response in OFC (Bar et al, 2006). • Demonstrating similar early preferential activation of OFC by m-biased stimuli would support the proposed role of m cells in conveying information quickly to OFC

  30. Aim 3: Analyze the effect of mood on alpha power • EEG studies have shown that negative mood is associated with decreased alpha power across a range of electrode sites (Kuhbandner et al 2009, Everhart et al 2003) • Localizing the cortical generators of this effect with MEG would be interesting in its own right • We have observed a consistent (although nonsignificant) trend for subjects induced into a negative mood to display faster reaction times. Perhaps this is linked to alpha in some way?
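For context, alpha-band (8–12 Hz) power can be read off a discrete Fourier transform. A minimal, self-contained sketch on a synthetic signal (a real MEG analysis would use Welch's method or multitapers rather than this direct DFT):

```python
import cmath, math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of DFT power over bins whose frequency lies in [f_lo, f_hi]."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            x = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            power += abs(x) ** 2 / n
    return power

fs = 600  # sampling rate used in the study
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]  # 1 s of 10 Hz

alpha = band_power(sig, fs, 8, 12)
beta = band_power(sig, fs, 18, 30)
print(alpha > 100 * max(beta, 1e-12))  # True: energy sits in the alpha band
```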

  31. Overview • Motivations • Study design and aims • Results so far • Mood induction check • Behavioral Data • ROI analysis

  32. Overview • Motivations • Study design and aims • Results so far (N =21, pos subjs = 10, neg subjects = 11) • Mood induction check • Behavioral Data • ROI analysis

  33. Mean valence ratings by run • Positive MI subjects consistently reported more positive valence than did negative MI subjects (2 (mood) × 5 (run) ANOVA, main effect of mood: p < 1e-5). Analysis of PANAS scores confirmed the groups did not differ in valence prior to the experiment • Valence key: 10 = most positive, 5 = neutral, 0 = most negative

  34. Mean arousal ratings by run • Positive MI subjects consistently reported higher arousal than did negative MI subjects (2 (mood) × 5 (run) ANOVA, main effect of mood: p < .05) • Arousal key: 10 = highest arousal, 5 = neutral, 0 = lowest arousal

  35. Overview • Motivations • Study design and aims • Results so far (N =21, pos subjs = 10, neg subjects = 11) • Mood induction check • Behavioral Data • ROI analysis

  36. Reaction Time • Mixed-model ANOVA: 2 (mood: pos or neg) × 2 (prime: m or p) × 2 (SOA: 50 or 100 ms)

  37. Reaction Time • Mixed-model ANOVA: 2 (mood: pos or neg) × 2 (prime: m or p) × 2 (SOA: 50 or 100 ms) • Main effect of prime type (faster for m primes, p < .05)

  38. Reaction Time • Mixed-model ANOVA: 2 (mood: pos or neg) × 2 (prime: m or p) × 2 (SOA: 50 or 100 ms) • Main effect of prime type (faster for m primes, p < .05) • Main effect of SOA (faster for 100 ms SOA, p < 1e-6)

  39. Reaction Time • Mixed-model ANOVA: 2 (mood: pos or neg) × 2 (prime: m or p) × 2 (SOA: 50 or 100 ms) • Main effect of prime type (faster for m primes, p < .05) • Main effect of SOA (faster for 100 ms SOA, p < 1e-6) • No interactions or main effect of mood (although negative mood subjects tended to be faster across conditions)

  40. Overview • Motivations • Study design and aims • Results so far (N =21, pos subjs = 10, neg subjects = 11) • Mood induction check • Behavioral Data • ROI analysis

  41. Overview • Motivations • Study design and aims • Results so far (N =21, pos subjs = 10, neg subjects = 11) • Mood induction check • Behavioral Data • ROI analysis (evoked responses & phaselocking)

  42. ROI Selection – early visual area V1 • Selection steps: for the first pass, just used the automatic Freesurfer parcellation of V1 • Left hemisphere, medial view; right hemisphere, medial view

  43. ROI Selection – Ventral Stream • Selection steps: loaded the Freesurfer automatic fusiform parcellation; circled peak signal-to-noise ratio within the fusiform anatomical label between 100 and 200 ms post-stimulus (all subjects grouped together, not separated by mood) • Left hemisphere, ventral view; right hemisphere, ventral view

  44. ROI Selection – OFC • Selection steps: loaded the Freesurfer automatic OFC parcellation (medial and lateral areas); circled peak signal-to-noise ratio within the OFC anatomical label between 100 and 200 ms post-stimulus (all subjects grouped together, not separated by mood) • Left hemisphere, ventral view; right hemisphere, ventral view

  45. • Y axis = tesla/cm • Yellow shading = timepoints with significant uncorrected t-tests between groups • Cluster p val = p value from a Monte Carlo test (not discussed) • Red and blue shading = 95% confidence intervals • Only control trials shown for V1 and other ROIs. Intended as a broad overview of the sorts of analyses that have been done so far.

  46. *1 negative subject excluded because mean OFC amplitude over time window of interest was > 3 sd above the mean

  47. Functional OFC–fusiform phaselocking by hemisphere and mood: possibly greater phaselocking for negative mood subjects in the right hemisphere (stats pending)
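Phaselocking between two regions is commonly quantified as the phase-locking value (PLV): the magnitude of the mean unit phasor of the per-trial phase difference. A minimal sketch with synthetic phases (the study's actual phaselocking statistics come from the MNE pipeline, not this code):

```python
import cmath, math, random

def plv(phases_a, phases_b):
    """Phase-locking value across trials: magnitude of the mean unit phasor
    of the per-trial phase difference (1 = perfectly locked, ~0 = unrelated)."""
    n = len(phases_a)
    total = sum(cmath.exp(1j * (pa - pb))
                for pa, pb in zip(phases_a, phases_b))
    return abs(total) / n

rng = random.Random(0)
trial_phases = [rng.uniform(-math.pi, math.pi) for _ in range(200)]

locked = [p - 0.5 for p in trial_phases]   # constant lag -> PLV of 1
unrelated = [rng.uniform(-math.pi, math.pi) for _ in range(200)]

print(round(plv(trial_phases, locked), 3))  # 1.0
print(round(plv(trial_phases, unrelated), 3))
```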

  48. Present state of analysis • As of now, we can: • Define ROIs and extract trial-by-trial or averaged data • Do rudimentary time-series analysis by identifying response peaks and testing for amplitude or latency effects via ANOVA • Compute power and phaselocking statistics for individual ROIs and calculate p values using cluster-mass Monte Carlo tests (Maris & Oostenveld, 2007) • Create whole-brain ‘dSPM maps’ (essentially, signal-to-noise z scores) using MNE software, although the statistical interpretation of these maps seems to be up for debate within the community • Goals for the analysis: • Create whole-brain analyses identifying the effects of mood, prime, SOA, and their interactions on current amplitude or frequency power without being overwhelmed by the multiple comparisons over space and time • Analyze the ROI time-series data in a more sensitive manner
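The cluster-mass Monte Carlo procedure of Maris & Oostenveld (2007) can be sketched in a simplified one-dimensional (time-only) form: compute pointwise t statistics, sum |t| within contiguous supra-threshold runs, and compare the largest observed cluster mass against a null distribution built by shuffling group labels. All data below are toy values, not the study's:

```python
import random, statistics

def tvals(group_a, group_b):
    """Pointwise Welch-style two-sample t statistic across timepoints."""
    out = []
    for i in range(len(group_a[0])):
        a = [s[i] for s in group_a]
        b = [s[i] for s in group_b]
        se = (statistics.variance(a) / len(a)
              + statistics.variance(b) / len(b)) ** 0.5
        out.append((statistics.mean(a) - statistics.mean(b)) / se)
    return out

def max_cluster_mass(ts, thresh):
    """Largest sum of |t| over a contiguous supra-threshold run."""
    best = cur = 0.0
    for t in ts:
        cur = cur + abs(t) if abs(t) > thresh else 0.0
        best = max(best, cur)
    return best

def cluster_p(group_a, group_b, thresh=2.0, n_perm=200, seed=1):
    """Monte Carlo p value for the largest cluster via label shuffling."""
    observed = max_cluster_mass(tvals(group_a, group_b), thresh)
    pooled = group_a + group_b
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        null = max_cluster_mass(
            tvals(pooled[:len(group_a)], pooled[len(group_a):]), thresh)
        if null >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Toy data: two groups of 10 "time series"; a real difference is
# injected into timepoints 20-29 of group_b
rng = random.Random(0)
group_a = [[rng.gauss(0, 1) for _ in range(50)] for _ in range(10)]
group_b = [[rng.gauss(0, 1) + (3.0 if 20 <= t < 30 else 0.0)
            for t in range(50)] for _ in range(10)]

p = cluster_p(group_a, group_b)
print(p < 0.05)  # True: the injected cluster is detected
```

Controlling the family-wise error at the level of the largest cluster is what lets this test cover many timepoints (or, in the whole-brain case, space × time) without a per-point multiple-comparisons correction.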
