
Information Theoretic Characterization of Stimulus Relationships Encoded by Sensory Neurons






Presentation Transcript


  1. Information Theoretic Characterization of Stimulus Relationships Encoded by Sensory Neurons Robin Ince MSc Computational Neuroscience and Neuroinformatics University of Manchester

  2. Introduction [Diagram: stimulus features as inputs to a black-box system (the neuron), whose output is a spike or no spike]

  3. Maximum Entropy Method • a principled way to investigate interactions • the solution has the same marginals as the true distribution but no higher-order effects • obtained via information geometry (Amari, 2001) [Diagram: the true distribution projected onto the subspace of distributions with no higher-order interactions; the maximum entropy solution lies in the subset of distributions sharing the same pairwise marginals]
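The slide names the construction but not a procedure. As a rough sketch (iterative proportional fitting over a small binary state space, not the information-geometric construction of Amari, 2001, which the slide cites), a pairwise maximum entropy distribution can be approximated by repeatedly rescaling a candidate distribution to match each pairwise marginal of the true one:

```python
import numpy as np
from itertools import combinations

def pairwise_maxent(p, n, iters=200):
    """Approximate the maximum entropy distribution over n binary units
    that matches all pairwise marginals of the true distribution p
    (length 2**n), via iterative proportional fitting."""
    states = np.array([[(s >> i) & 1 for i in range(n)] for s in range(2 ** n)])
    q = np.full_like(p, 1.0 / p.size)  # start from the uniform distribution
    for _ in range(iters):
        for i, j in combinations(range(n), 2):
            for a in (0, 1):
                for b in (0, 1):
                    mask = (states[:, i] == a) & (states[:, j] == b)
                    target, current = p[mask].sum(), q[mask].sum()
                    if current > 0:
                        q[mask] *= target / current  # enforce this marginal
    return q

def entropy(p):
    """Shannon entropy in bits."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()
```

For a purely third-order interaction (e.g. three units uniform over even-parity words), every pairwise marginal is uniform, so the fitted model is the uniform distribution and its entropy exceeds that of the true distribution, exposing the higher-order structure.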

  4. Information Decomposition I(k;R) = I_S^(2)(k) + I^(1)(k;R) + I^(=2)(k;R) + I_HO(k;R), where • I_S^(2)(k) = H(k) − H^(2)(k) • I^(1)(k;R) = H^(1)(k) − H^(1)(k|R) • I^(=2)(k;R) = I^(2)(k;R) − I^(1)(k;R) = H^(2)(k) − H^(1)(k) + H^(1)(k|R) − H^(2)(k|R) • I_HO(k;R) = H^(2)(k|R) − H(k|R). Here H^(m) denotes the entropy of the order-m maximum entropy model; the four terms sum exactly to the total information I(k;R) = H(k) − H(k|R).
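As a hedged illustration of the decomposition, consider a hypothetical toy case (not from the slides) with two neurons and two equiprobable stimuli. With only two units the pairwise model is exact, so H^(2) = H, making I_S^(2) and I_HO vanish; the remaining first- and second-order terms can be computed directly from the conditional response distributions:

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a (flattened) probability array."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def H_ind(p2):
    """First-order model entropy H^(1): sum of marginal entropies of a
    joint distribution over two binary units, given as a 2x2 array."""
    return H(p2.sum(axis=1)) + H(p2.sum(axis=0))

# Hypothetical toy system: p(k1, k2 | r), indexed [k1, k2].
p_r0 = np.array([[0.4, 0.1], [0.1, 0.4]])  # responses correlated under r0
p_r1 = np.array([[0.1, 0.4], [0.4, 0.1]])  # anti-correlated under r1
p_k = 0.5 * (p_r0 + p_r1)                  # unconditional p(k1, k2)

I_total = H(p_k) - 0.5 * (H(p_r0) + H(p_r1))          # I(k;R)
I_1 = H_ind(p_k) - 0.5 * (H_ind(p_r0) + H_ind(p_r1))  # I^(1)(k;R)
I_eq2 = I_total - I_1                                 # I^(=2): H^(2) = H here
```

In this example every single-neuron marginal is stimulus-independent, so I^(1) is zero and all of the information sits in the second-order term: the stimulus is encoded purely in the sign of the pairwise correlation.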

  5. Results – Artificial Examples

  6. Results – Experimental Data

  7. Results – Experimental Data

  8. Results – Experimental Data

  9. Conclusions [Diagram: output features processed separately] • few underlying assumptions; widely applicable • accurate predictions; good temporal precision • can be difficult to interpret
