
A new theoretical framework for multisensory integration
Michael W. Hadley and Elaine R. Reynolds



Presentation Transcript


  1. A new theoretical framework for multisensory integration
Michael W. Hadley and Elaine R. Reynolds, Neuroscience Program, Lafayette College, Easton PA 18042

Introduction
The multisensory integration (MSI) literature has focused on the superior colliculus (SC), the subcortical area responsible for gaze orientation, resulting in an understanding of the development, classes and computational principles of SC MSI. Cortical MSI lacks such an understanding, and hence the recent shift in views on cortical MSI (Figs. 1.1 to 1.2) has yet to be computationally modeled.
[Fig. 1.1: Traditional view of multisensory areas. Fig. 1.2: Revised view of multisensory areas. Adapted from [2].]
I took the important facets of computational models of SC MSI ([1], [5], [7] and [8]) and applied them to a cortical setting:
• [1], [5] and [7] used excitatory self-organizing maps (SOMs) to explain MSI
• [7] and [8] used layered, topographic architectures to model multisensory information processing
• [7] showed that SOMs can be used in a multilayer system
• [8] shows the importance of using inhibition and feedback

Martin, Meredith, Ahmad (MMA) SOM model
MMA's model [5] consists of m senses projecting to a 10x10 grid of neurons (the SC). The projections to the SC were trained with a SOM by presenting many examples of the different firing combinations. MMA found that the SC formed unisensory areas in the corners of the grid, with multisensory areas in between. The response of the network to multisensory stimuli showed a nonlinear increase compared to the component unisensory stimuli (multisensory enhancement, MSE).
[Fig. 2.1: MMA's architecture (visual, auditory and tactile inputs projecting to the superior colliculus). Fig. 2.2: MMA's results.]

Analysis of the use of SOMs to model MSI
Each example that a SOM is trained on maps to a location in the grid (simplified in Fig. 3.1). Similar examples are mapped to similar locations, forming unisensory and multisensory areas (denoted by the colors in Fig. 3.1). The key to the integration is the sigmoidal firing curve: the weights in the multisensory areas attend to each modality equally, so the unisensory response is subthreshold while the multisensory response is above threshold (see Fig. 3.2). SOMs are a solid foundation for cortical MSI:
• SOMs form a weight distribution that allows MSI through the sigmoidal firing curve
• Noise is essential to smooth map formation; the random variations act as micro-examples that fill in the gaps in the map (contrast Fig. 2.2 with Fig. 3.3)
• Evidence suggests that our sensory areas have a topographic organization, and [4] suggests this is the result of SOMs
[Fig. 3.1: How SOMs form. Fig. 3.2: Sigmoidal curve yields MSE. Fig. 3.3: No noise.]
I built extensions onto [5] to test the applicability of SOM-based models to the cortex and discovered that a SOM alone cannot explain cortical MSI. I propose an additional training rule and a hierarchy based on context to allow inhibition and feedback in a multilayer SOM. (A minimal sketch of the SOM-plus-sigmoid mechanism follows this section.)
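As a concrete illustration of the mechanism described above, the sketch below trains a Kohonen-style SOM on a 10x10 grid with two-modality inputs and reads it out through a sigmoidal firing curve. This is a minimal toy, not MMA's published implementation [5]: the learning rate, neighbourhood width, noise level, sigmoid gain and threshold are illustrative assumptions, chosen so that unisensory drive stays below threshold while bimodal drive crosses it.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 10                                   # 10x10 output grid, as in the MMA model [5]
N_MOD = 2                                   # one input unit per modality (toy simplification)
W = rng.random((GRID, GRID, N_MOD))         # projection weights onto the grid

def train_som(examples, epochs=20, lr0=0.5, sigma0=3.0):
    """Classic Kohonen update: pull the winner and its neighbours toward each example."""
    global W
    ys, xs = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                    # decaying learning rate
        sigma = 0.5 + sigma0 * (1.0 - epoch / epochs)        # shrinking neighbourhood
        for x in examples:
            d = np.linalg.norm(W - x, axis=2)                # distance of each unit's weights to x
            wy, wx = np.unravel_index(np.argmin(d), d.shape) # best-matching unit
            nb = np.exp(-((ys - wy) ** 2 + (xs - wx) ** 2) / (2 * sigma ** 2))
            W += lr * nb[..., None] * (x - W)                # move weights toward the example

def response(stimulus, threshold=1.2, gain=8.0):
    """Sigmoidal firing curve: drive below threshold stays near zero."""
    net = np.einsum("ijk,k->ij", W, stimulus)                # weighted input to each grid neuron
    return 1.0 / (1.0 + np.exp(-gain * (net - threshold)))

# Noisy unisensory and bimodal firing combinations (the noise fills in gaps in the map).
examples = []
for _ in range(300):
    base = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]][rng.integers(3)]
    examples.append(np.clip(np.array(base) + 0.1 * rng.standard_normal(2), 0.0, 1.0))
train_som(examples)

uni_a = response(np.array([1.0, 0.0])).max()                 # best unisensory response, modality A
uni_b = response(np.array([0.0, 1.0])).max()                 # best unisensory response, modality B
bimodal = response(np.array([1.0, 1.0])).max()               # best bimodal response
print(f"uni A {uni_a:.2f}  uni B {uni_b:.2f}  bimodal {bimodal:.2f}")
# In multisensory regions the weights attend to both modalities roughly equally, so each
# unisensory drive is subthreshold while the summed bimodal drive crosses threshold,
# giving a nonlinear increase over the component unisensory responses (MSE).
```

Run as-is, the bimodal response should come out well above the sum of the two unisensory responses, which is the nonlinear enhancement the poster describes.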
Moving SOMs from MMA to a Cortical Hierarchy
To move from MMA's virtual multisensory world with single receptive fields (RFs) to a cortical setting, the model adds larger "multi-neuron" modalities, multiple RFs and inhibition. Two training rule sets are used: feed-forward, excitatory connections trained with the traditional SOM rule, and a modified Hebb rule with inhibition to deal with the issues of multiple RFs and signal-to-noise.
[Fig. 4.1: A world with unisensory and multisensory space (visual area and auditory area).]

1) Unisensory extraction
The visual and auditory areas are trained with a SOM to store unisensory patterns (multiple RFs per area).

2) Cross-modal interaction
Interactions between the two sensory areas allow the alignment of sensory information, yielding bimodal MSE/MSD (a minimal sketch of this rule appears at the end of the transcript, after the references):
• If two neurons fire in response to the same input, increase the connection weight
• If one neuron fires but the other does not, decrease the connection weight
• The weights are capped to allow subthreshold influences that generate MSE

3) Multisensory integration
The projections from the primarily "unisensory" areas to the cortical area are trained with a SOM to extract a multisensory view of the world.

4) Cortical feedback
Cortical feedback is trained with the same scheme as step 2 to allow for top-down integration.

[Fig. 4.2: Extensions to larger senses decreased the signal-to-noise ratio (sensory size 2x2: 1:3, 3x3: 1:8, 4x4: 1:15; 4x4 adjusted), resulting in MSE of noise activity; adjusting parameters was not enough to fix the ratio.]
[Fig. 4.3: Inhibition increases signal-to-noise, allowing for larger grids and potentially both MSE (red) and MSD (green).]
[Fig. 4.4: 2D SOMs can only map one relationship: either the overlapping RFs within a sense or the overlapping RFs between senses, but not both.]

The flow of information in the hierarchy and the additional rule set address the problems of signal-to-noise and inhibition while conforming to the literature on cortical MSI. The literature has yet to suggest reasons for the existence of interconnections between low-level sensory areas and feedback from cortical areas. This model works by setting up a hierarchy of contexts: the visual, auditory and cortical areas each have their own view of the world, and the cross-talk and feedback allow these contexts to be enhanced and suppressed as needed to create a coherent view. This view of information flow through a hierarchy has been expressed in [3] and [5]; [3] has successfully implemented a contextual hierarchy to simulate advanced computer vision.

Acknowledgements
I would like to thank Dr. Elaine Reynolds for her continued advice and mentorship through the course of this research.

References
[1] Anastasio, T. J., & Patton, P. E. (2003). A two-stage unsupervised learning algorithm reproduces multisensory enhancement in a neural network model of the corticotectal system. Journal of Neuroscience, 23, 6713-6727.
[2] Ghazanfar, A. A., & Schroeder, C. E. (2006). Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10, 278-285.
[3] Hawkins, J., & Blakeslee, S. (2004). On Intelligence. New York: Holt.
[4] Kohonen, T., & Hari, R. (1999). Where the abstract feature maps of the brain might come from. Trends in Neurosciences, 22, 135-139.
[5] Martin, J. G., Meredith, M. A., & Ahmad, K. (2009). Modeling multisensory enhancement with self-organizing maps. Frontiers in Computational Neuroscience, 3.
[6] Meyer, K., & Damasio, A. (2009). Convergence and divergence in a neural architecture for recognition and memory. Trends in Neurosciences, 32, 376-382.
[7] Pavlou, A., & Casey, M. (2010). Simulating the effects of cortical feedback in the superior colliculus with topographic maps. Proceedings of the International Joint Conference on Neural Networks 2010, Barcelona, 18-23 July.
[8] Ursino, M., Cuppini, C., Magosso, E., Serino, A., & Pellegrino, G. (2009). Multisensory integration in the superior colliculus: a neural network model. Journal of Computational Neuroscience, 26, 55-73.
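The cross-modal rule of step 2 can be sketched as follows. This is an illustrative reading of the three bullets, not the poster's implementation: the firing threshold, learning rate and weight caps are assumed values, and the two flattened 3x3 areas in the usage loop are hypothetical.

```python
import numpy as np

def update_cross_modal(W, act_a, act_b, lr=0.05, w_max=0.4, w_min=-0.4, fire_thresh=0.5):
    """One update of the step-2 cross-modal rule (all parameter values are illustrative).

    W[i, j] is the connection from neuron i in area A to neuron j in area B.
    """
    fire_a = (act_a > fire_thresh).astype(float)     # did each A neuron fire?
    fire_b = (act_b > fire_thresh).astype(float)     # did each B neuron fire?
    both = np.outer(fire_a, fire_b)                  # 1 where both neurons fired
    mismatch = np.outer(fire_a, 1 - fire_b) + np.outer(1 - fire_a, fire_b)  # exactly one fired
    W = W + lr * both - lr * mismatch                # strengthen on agreement, weaken on mismatch
    return np.clip(W, w_min, w_max)                  # cap: keep cross-modal influence subthreshold

# Toy usage: two 3x3 areas, flattened to 9 activations each (hypothetical data).
rng = np.random.default_rng(1)
W_ab = np.zeros((9, 9))
for _ in range(200):
    multisensory = rng.random() > 0.5                # did the input drive both senses?
    act_a = rng.random(9) * (0.9 if multisensory else 0.2)
    act_b = rng.random(9) * (0.9 if multisensory else 0.2)
    W_ab = update_cross_modal(W_ab, act_a, act_b)
print(W_ab.min(), W_ab.max())                        # weights stay within the caps
```

Capping the weights is what keeps the cross-modal pathway a subthreshold influence, so enhancement appears only when it combines with feed-forward drive rather than firing an area on its own.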
