The free-energy principle: a rough guide to the brain? Karl Friston
Presented by: Gokrna Poudel

Guiding questions
Q1: Explain the following terms: KL divergence, entropy, ergodic, free energy, Bayesian surprise, generative model, recognition density, sufficient statistics.
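For reference, the standard definitions behind Q1, in notation reconstructed from the paper's usage: sensory data $\tilde{s}$, environmental causes $\vartheta$, model $m$, recognition density $q(\vartheta)$ (the brain's approximate posterior) and generative model $p(\tilde{s},\vartheta)$ (the joint density the brain embodies):

```latex
\begin{aligned}
&\text{KL divergence:} && D_{\mathrm{KL}}(q\,\|\,p) = \int q(\vartheta)\,\ln\frac{q(\vartheta)}{p(\vartheta)}\,d\vartheta \;\ge\; 0\\
&\text{Entropy of sensory states:} && H(\tilde{s}) = -\int p(\tilde{s})\,\ln p(\tilde{s})\,d\tilde{s}\\
&\text{Surprise (self-information):} && -\ln p(\tilde{s}\mid m)\\
&\text{Free energy (bounds surprise):} && F = D_{\mathrm{KL}}\!\big(q(\vartheta)\,\|\,p(\vartheta\mid\tilde{s})\big) - \ln p(\tilde{s}\mid m) \;\ge\; -\ln p(\tilde{s}\mid m)
\end{aligned}
```

An ergodic system is one whose time averages equal its ensemble averages; this is what licenses reading long-run average surprise as the entropy of sensory states. Bayesian surprise is the KL divergence between the recognition density after and before new data arrive. Sufficient statistics are the parameters (e.g. mean and covariance) that fully specify a density such as $q(\vartheta)$.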
Q2: Explain the free-energy principle of the brain, i.e. the idea that self-organizing biological agents resist a tendency to disorder and must therefore minimize the entropy of their sensory states. Give the various forms of free energy.
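The three equivalent forms of free energy discussed in the paper, written out here as a worked reference ($\langle\cdot\rangle_q$ is expectation under the recognition density):

```latex
\begin{aligned}
F &= \underbrace{-\big\langle \ln p(\tilde{s},\vartheta) \big\rangle_q}_{\text{energy}} \;-\; \underbrace{\big\langle -\ln q(\vartheta) \big\rangle_q}_{\text{entropy of } q}\\
F &= \underbrace{D_{\mathrm{KL}}\big(q(\vartheta)\,\|\,p(\vartheta\mid\tilde{s})\big)}_{\text{divergence}} \;+\; \underbrace{\big(-\ln p(\tilde{s}\mid m)\big)}_{\text{surprise}}\\
F &= \underbrace{D_{\mathrm{KL}}\big(q(\vartheta)\,\|\,p(\vartheta)\big)}_{\text{complexity}} \;-\; \underbrace{\big\langle \ln p(\tilde{s}\mid\vartheta) \big\rangle_q}_{\text{accuracy}}
\end{aligned}
```

Each form motivates a different reading: the divergence-plus-surprise form shows that $F$ upper-bounds surprise, and the complexity-minus-accuracy form shows what action can change, since only the accuracy term depends on the sensory samples.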
Q3: How can action reduce free energy? How can perception reduce free energy? How does active sampling of the sensorium contribute to reducing free energy?
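One standard formalization of the answer (a sketch following the paper's gradient-descent account; the notation is mine): perception optimizes the internal sufficient statistics $\mu$, while action $a$ changes which sensory samples are gathered, and both perform a descent on the same free energy:

```latex
\dot{\mu} = -\frac{\partial F}{\partial \mu},
\qquad
\dot{a} = -\frac{\partial F}{\partial a}
        = -\frac{\partial F}{\partial \tilde{s}}\,\frac{\partial \tilde{s}}{\partial a}
```

Because only the accuracy term of $F$ depends on sensations, action can lower free energy only by selectively sampling data that conform to predictions, which is exactly active sampling of the sensorium.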
Q4: Explain the neurobiological architecture for implementing the free-energy principle in Figure 1 in Box 1. Describe each of the modules in the figure and their functions as well as the quantities that define the free energy.
Q5: Describe the sufficient statistics representing a hierarchical dynamic model of the world in the brain in Figure 1 in Box 2. How are they related to one another? How are changes in synaptic activity, connectivity, and gain involved in perceptual inference, learning, and attention?
Q6: Formulate and describe the neuronal architecture for the hierarchical dynamic model in Figure 1 in Box 3. How are the forward prediction errors computed? How are the backward predictions made? What are the sources of the forward and backward connections in terms of brain anatomy?
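A minimal two-level sketch in Python of the message passing Q6 asks about, under a linear Gaussian assumption (the weights `W`, precisions `pi1`/`pi2`, and learning rate are illustrative choices of mine, not the paper's scheme): forward connections carry prediction errors up the hierarchy, backward connections carry predictions down.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(4, 2))     # backward (generative) weights: mu2 -> prediction of level-1 states
s = rng.normal(size=4)          # sensory input (lowest level data)
mu1 = np.zeros(4)               # level-1 expectation (directly predicts the senses here)
mu2 = np.zeros(2)               # level-2 expectation (cause of the level-1 states)
pi1, pi2 = 1.0, 1.0             # precisions (inverse variances) of the two error units

lr = 0.05
for _ in range(200):
    eps1 = pi1 * (s - mu1)        # sensory prediction error
    eps2 = pi2 * (mu1 - W @ mu2)  # forward prediction error: level 1 vs backward prediction
    mu1 += lr * (eps1 - eps2)     # level-1 update: explain the senses, honor the prior from above
    mu2 += lr * (W.T @ eps2)      # level-2 update driven only by the ascending error

print("remaining sensory error:", np.round(s - mu1, 3))
```

Both updates are exact gradient descents on the Gaussian free energy $F = \tfrac{1}{2}\pi_1\|s-\mu_1\|^2 + \tfrac{1}{2}\pi_2\|\mu_1 - W\mu_2\|^2$, which is the sense in which perception minimizes free energy in this toy model.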
Q7: A key implementational issue is how the brain encodes the recognition density. There are two forms of probabilistic neuronal code: free-form and fixed-form. Give examples of each form and explain them.
Q8: What kinds of optimization scheme does the brain use? Does it use a deterministic search on free energy to optimize action and perception, or a stochastic search? What is your opinion?
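Since Q8 invites an opinion, a toy experiment may help frame it. The sketch below (entirely illustrative: the landscape, step size, and noise level are arbitrary assumptions) contrasts deterministic gradient descent, which can stall in a local minimum of a rippled free-energy landscape, with a stochastic Langevin-style search that can escape:

```python
import numpy as np

F = lambda x: (x - 2.0) ** 2 + 0.5 * np.sin(5 * x)   # toy "free energy" with ripples
dF = lambda x: 2 * (x - 2.0) + 2.5 * np.cos(5 * x)   # its gradient

rng = np.random.default_rng(1)
x_det, x_sto, lr = -1.0, -1.0, 0.02
for _ in range(500):
    x_det -= lr * dF(x_det)                                          # deterministic descent
    x_sto -= lr * dF(x_sto) + np.sqrt(2 * lr * 0.1) * rng.normal()   # descent plus noise

print(f"deterministic: x={x_det:.3f}, F={F(x_det):.3f}")
print(f"stochastic:    x={x_sto:.3f}, F={F(x_sto):.3f}")
```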
Optimizing the sufficient statistics of the recognition density corresponds to three processes (formalized in the sketch after this list):
(i) perceptual inference on states of the world (i.e. optimizing synaptic activity);
(ii) perceptual learning of the parameters underlying causal regularities (i.e. optimizing synaptic efficacy); and
(iii) attention, or optimizing the expected precision of states in the face of random fluctuations and uncertainty (i.e. optimizing synaptic gain).
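A conventional summary of these three processes (my formulation): all are gradient descents on the same free energy, over sufficient statistics that change on different timescales:

```latex
\begin{aligned}
\dot{\mu} &\propto -\partial F/\partial \mu && \text{(fast; synaptic activity: perceptual inference)}\\
\Delta\theta &\propto -\partial F/\partial \theta && \text{(slow; synaptic efficacy: perceptual learning)}\\
\Delta\gamma &\propto -\partial F/\partial \gamma && \text{(synaptic gain, i.e. expected precision: attention)}
\end{aligned}
```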
In free-form codes (e.g. particle filtering), the recognition density is represented by the sample density of neuronal ensembles, whose activity encodes the location of particles in state-space.
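A minimal sketch of such a free-form code in Python (my illustration; the 1-D random-walk dynamics and noise levels are arbitrary assumptions): a particle filter whose ensemble of samples plays the role of the neuronal ensemble encoding $q$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma_x, sigma_s = 500, 0.1, 0.5
particles = rng.normal(0.0, 1.0, size=n)   # ensemble of samples encoding q(x)

def step(particles, s):
    """One assimilation step: predict, weight by likelihood, resample."""
    particles = particles + rng.normal(0.0, sigma_x, size=n)   # predict (random-walk dynamics)
    w = np.exp(-0.5 * ((s - particles) / sigma_s) ** 2)        # likelihood weights
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)                           # resample in proportion to weight
    return particles[idx]

for s in [0.2, 0.4, 0.5, 0.7]:              # a short stream of sensory samples
    particles = step(particles, s)

print("posterior mean ~", particles.mean(), "sd ~", particles.std())
```

Here the recognition density is never written down parametrically; it simply is the sample density of the particle positions.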
In fixed-form codes, the recognition density has a fixed parametric form and is encoded by its sufficient statistics. Multinomial forms assume the world is in one of several discrete states and are usually associated with hidden Markov models.
The Gaussian or Laplace assumption allows for continuous and correlated states.
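For contrast with the free-form case, a sketch of the Gaussian fixed-form code (my formulation of the standard Laplace approximation): the sufficient statistics are a mean and covariance, and under the Laplace assumption the covariance is fixed by the curvature of the log-joint at the mode, so only the mean needs to be encoded by neuronal activity:

```latex
q(\vartheta) = \mathcal{N}(\mu,\Sigma),
\qquad
\Sigma \approx \Big( -\,\partial^2_{\vartheta}\ln p(\tilde{s},\vartheta)\,\Big|_{\vartheta=\mu} \Big)^{-1}
```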