
Cortical Encoding of Natural Auditory Scenes


Presentation Transcript


  1. Cortical Encoding of Natural Auditory Scenes Brigid Thurgood

  2. “Cortical encoding of natural scenes emerges through the interactions between scene structure and on-going network activity”: a discussion of the paper by Chandramouli Chandrasekaran et al. (Princeton Neuroscience Institute, Princeton University, and the University of South Alabama)

  3. Objective: to investigate the relationship between the spectrotemporal modulations embedded in natural auditory scenes and on-going cortical network activity, and how this interaction shapes cortical output (spiking activity), using the local field potential (LFP) as an index of network activity.

  4. Based on the idea that: “The efficient encoding and processing of time-varying natural scenes is essential for an organism’s reproductive success and survival.”

  5. Background: “It has repeatedly been suggested that exploiting statistical regularities in the stimulus space may be evolutionarily adaptive and improve the efficiency of neural processing. The premise of such an argument is that sensory processing would encode incoming sensory information in the most efficient form possible by exploiting the redundancies and correlation structure of the input. In its simplest form, this principle would assert that neural systems would be optimized to process the statistical structure of sensory signals they encounter most often.”

  6. “Spectrotemporal modulations embedded in auditory scenes”: what are we talking about? Spectral structure: the pitch of the sound, i.e. what is tonotopically laid out in the brain, in the kHz range. Temporal structure: time constants, repetition rates, durations, etc., i.e. how many times a given sound is repeated per second, in the Hz range.

  7.–8. Nightingale Song (image slides)

  9. So, “spectrotemporal modulations” refers to changes in both spectral (pitch) and temporal (time pattern) structures within the greater auditory scene.
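
To make “spectrotemporal modulations” concrete, here is a minimal Python sketch (my own illustration, not the authors' analysis) of one common way to estimate them: compute a spectrogram of a scene and take the 2-D Fourier transform of its log amplitude, which separates temporal modulations (in Hz) from spectral modulations (in cycles/kHz). The file name and all parameters are illustrative assumptions.

```python
# Minimal sketch of a joint spectrotemporal modulation spectrum.
# The recording name and analysis parameters are illustrative assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, scene = wavfile.read("savannah_scene.wav")   # hypothetical 1-minute scene
scene = scene.astype(float)
if scene.ndim > 1:                               # collapse to mono if needed
    scene = scene.mean(axis=1)

# Time-frequency representation of the scene
f, t, S = spectrogram(scene, fs=fs, nperseg=1024, noverlap=768)
logS = np.log(S + 1e-12)                         # log amplitude, avoid log(0)

# 2-D FFT of the (mean-removed) spectrogram -> joint modulation spectrum
M = np.abs(np.fft.fftshift(np.fft.fft2(logS - logS.mean())))

# Modulation axes: temporal modulations in Hz, spectral modulations in cycles/kHz
dt = t[1] - t[0]                                 # spectrogram time step (s)
df = (f[1] - f[0]) / 1000.0                      # spectrogram frequency step (kHz)
temporal_mod = np.fft.fftshift(np.fft.fftfreq(logS.shape[1], d=dt))   # Hz
spectral_mod = np.fft.fftshift(np.fft.fftfreq(logS.shape[0], d=df))   # cycles/kHz

print("temporal modulations up to", temporal_mod.max(), "Hz")
print("spectral modulations up to", spectral_mod.max(), "cycles/kHz")
```

The slow (low-Hz) temporal-modulation side of this spectrum is what the later slides relate to LFP and spiking dynamics.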

  10. What is the relationship between the spectrotemporal structure of the stimulus, the LFP, and spiking activity?

  11. What did they find? The temporal structure of the auditory scene strongly modulated the dynamics of both LFP and spiking activity at most sites, and it predicted the degree of phase-locking between LFP and spiking activity at those strongly modulated sites. Robust spike-field (LFP) coherence was still found at cortical sites that showed little phase-locking to the scene. Whether the spike-field relationship was shaped by the scene or by intrinsic dynamics depended on the spectral selectivity of the cortical site.

  12. Stimulus: recordings from 3 natural habitats of Old World monkeys (riverine, savannah, rain forest); 3 exemplars of each habitat, each 1 minute long, presented 5 times.

  13. Measured: • spectrotemporal modulations of the stimulus • LFP • spiking activity • spectral tuning and tonotopy of cortical sites

  14.–19. Savannah (image slides)

  20.–21. Riverine (image slides)

  22.–23. Rain forest (image slides)

  24. Ok, we understand the stimulus! Now let’s see what’s going on in the auditory cortex while we’re listening…

  25. LFP & Scene Cortical site 1 Cortical site 2 Savannah scene

  26. LFP & Scene Visible phase-locking of LFP to stimulus.

  27. LFP & Scene To what degree does LFP dynamics phase-lock to the stimulus? Compute the phase coherence between the LFP and the temporal frequencyof the scene to find out.

  28. LFP & Scene Phase coherence of LFP to intact versus shuffled scene dynamics.

  29. LFP & Scene Cross-referencing sound structure and LFP response to ensure that phase-locking is not arbitrary.

  30. LFP & Scene We can see a significant difference between mean intact and mean shuffled phase coherence.

  31. LFP & Scene Significant differences between intact and shuffled phase-coherences: • 65% Savannah (39 out of 60 sites) • 35% Riverine (21 out of 60 sites) • 45% Rainforest (27 out of 60 sites)

  32. LFP & Scene So… LFPs are modulated by the temporal dynamics of the complex auditory stimulus Can be modulated up to 50 Hz - much higher than indicated by previous studies.

  33. Spiking & Scene Savannah scene Phase coherence is calculated between spiking and the temporal characteristics of the intact auditory scene and compared to coherence with the shuffled scene.

  34. Spiking & Scene Significant differences between intact and shuffled phase-coherences: • 36% - Savannah (22 out of 60 sites) • 20% - Riverine (12 out of 60 sites) • 33% - Rain forest (20 out of 60 sites)

  35. Spiking & Scene So… Spiking is modulated by the temporal dynamics of the complex auditory stimulus.

  36. Spike-field-scene 2 Questions: 1. How well can we predict spike-field phase-coherence as a function of how well each phase-locks to the stimulus? 2. If spiking and LFP do not phase-lock to the stimulus, will there still be spike-field phase-locking?

  37. LFP & Spiking Correlation coefficient of scene-LFP and spike-LFP averaged over 3 trials: Savannah = 0.81 Riverine = 0.66 Suggests scene-LFP coherence predicts spike-field coherence.

  38. Spike-field-scene The stronger the LFP phase-locks to the scene, the higher the likelihood that the spiking activity will also lock. So, the spike-field relationship is predicted by the scene structure (r² = 0.61, r² = 0.21).
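
The prediction itself boils down to a correlation across cortical sites. A small sketch, assuming per-site coherence values are already in hand (the arrays below are made-up placeholders; only the r and r² computation is the point):

```python
# Sketch: does scene-LFP coherence predict spike-field coherence across sites?
import numpy as np

rng = np.random.default_rng(0)
scene_lfp_coh = rng.uniform(0.05, 0.6, size=60)                   # one value per site
spike_field_coh = 0.8 * scene_lfp_coh + 0.05 * rng.standard_normal(60)

r = np.corrcoef(scene_lfp_coh, spike_field_coh)[0, 1]
print(f"r = {r:.2f}, r^2 = {r**2:.2f}")
```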

  39. LFP & Spiking Q: How well can we predict spike-field phase coherence as a function of how well each phase-locks to the stimulus? A: Spike-field phase coherence can be predicted by scene-LFP phase coherence in cortical sites that are strongly modulated by the scene.

  40. Spike-field-scene 2. If spiking and LFP do not phase-lock to the stimulus, will there still be spike-field phase-locking?

  41. Spike-field-scene • Took the lowest 20th percentile and compared intact-scene to shuffled-scene phase coherence. • 75% of Savannah sites and 66% of Riverine sites showed a significant difference.
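
A hedged sketch of this percentile analysis, using simulated per-site values and a paired Wilcoxon test as a stand-in for whatever significance test the authors actually used:

```python
# Sketch: among weakly scene-locked sites, is intact spike-field coherence
# still higher than the shuffled-scene control?
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
scene_locking = rng.uniform(0, 0.5, size=60)          # scene phase-locking per site
sf_intact = 0.3 + 0.05 * rng.standard_normal(60)      # spike-field coherence, intact
sf_shuffled = 0.2 + 0.05 * rng.standard_normal(60)    # spike-field coherence, shuffled

low = scene_locking <= np.percentile(scene_locking, 20)   # weakly scene-locked sites
stat, p = wilcoxon(sf_intact[low], sf_shuffled[low])
print(f"{low.sum()} low-locking sites, Wilcoxon p = {p:.3g}")
```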

  42. Spike-field-scene Q: If spiking and LFP do not phase-lock to the stimulus, will there still be spike-field phase-locking? A: We still see robust spike-field phase-locking in the absence of phase-locking to the temporal modulations of the scene.

  43. Selectivity Left: scene-LFP phase coherence as function of spectral and temporal frequencies. Middle: temporal frequency of scene coherence with both LFP and spiking activity. Right: spectral selectivity of the cortical area.

  44. Selectivity So… for a cortical site, the closer its spectral selectivity is to the dominant spectral modulation in the scene, the stronger its phase coherence with the temporal modulations of the natural scene will be.

  45. Conclusion “We observe that primary auditory cortex encodes natural scenes by employing a joint spatiotemporal code: to represent the spectrotemporal structure of natural acoustic scenes, auditory cortex encodes the spectral structure by position along the tonotopic axis and encodes temporal modulations by modulating LFP and spiking activity in the same temporal modulation frequencies.”
