
580.691 Learning Theory. Reza Shadmehr. Bayesian Learning 2: Gaussian distribution & linear regression; causal inference.


Presentation Transcript


  1. 580.691 Learning Theory. Reza Shadmehr. Bayesian Learning 2: Gaussian distribution & linear regression. Causal inference.

  2. [Figure: the joint distribution p(w, y), evaluated at y(n); the prior p(w) along the w axis and the marginal p(y) along the y axis.] For today's lecture we will attack the problem of how to apply Bayes rule when both our prior $p(w)$ and our conditional $p(y \mid w)$ are Gaussian:

$$\underbrace{p(w \mid y^{(n)})}_{\text{posterior distr.}} \;=\; \frac{\overbrace{p(y^{(n)} \mid w)}^{\text{conditional distr.}}\ \overbrace{p(w)}^{\text{prior distr.}}}{p(y^{(n)})}$$

The numerator is just the joint distribution of $w$ and $y$, evaluated at a particular $y^{(n)}$. The denominator is the marginal distribution of $y$, evaluated at $y^{(n)}$; that is, it is just a number that makes the numerator integrate to one.

  3. Example: Linear regression with a prior
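A minimal sketch of this example, assuming the usual setup (not spelled out in the transcript): observations $y = Xw + \varepsilon$ with noise $\varepsilon \sim N(0, \sigma^2 I)$ and prior $w \sim N(0, \alpha^{-1} I)$. All names and values below are illustrative.

```python
import numpy as np

# Sketch of linear regression with a Gaussian prior (assumed setup:
# y = X w + noise, noise ~ N(0, sigma2 I), prior w ~ N(0, (1/alpha) I)).
rng = np.random.default_rng(0)

n, d = 50, 2
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])  # design matrix
w_true = np.array([0.5, -1.0])
sigma2 = 0.1
y = X @ w_true + rng.normal(0.0, np.sqrt(sigma2), n)

alpha = 1.0  # prior precision (illustrative value)

# The posterior over w is Gaussian:
#   covariance S = (alpha I + X^T X / sigma2)^(-1)
#   mean       m = S X^T y / sigma2
S = np.linalg.inv(alpha * np.eye(d) + X.T @ X / sigma2)
m = S @ X.T @ y / sigma2

print("posterior mean:", m)   # close to w_true, shrunk toward 0
print("posterior cov:\n", S)
```

As $\alpha \to 0$ the posterior mean reduces to the ordinary least-squares solution; the prior shrinks the estimate toward zero.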

  4. So the joint probability is Normally distributed. Now what we would like to do is to factor this expression so that we can write it as a conditional probability times a prior. If we can do this, then the conditional probability is the posterior that we are looking for.

  5. For the rest of the lecture we will try to solve this problem when our prior and the conditional distribution are both normally distributed. The multivariate Normal distribution is:

$$p(x) = \frac{1}{(2\pi)^{d/2}\,\lvert\Sigma\rvert^{1/2}} \exp\!\left(-\tfrac{1}{2}(x-\mu)^\top \Sigma^{-1} (x-\mu)\right)$$

where $x$ is a $d \times 1$ vector and $\Sigma$ is a $d \times d$ variance-covariance matrix. The distribution has two parts: the exponential part is a quadratic form that determines the shape of the Gaussian curve, and the factor in front is just a constant that makes the exponential part integrate to 1 (it does not depend on $x$). Now let's start with two variables that have a joint Gaussian distribution: $x_1$ is a $p \times 1$ vector and $x_2$ a $q \times 1$ vector, with cross-covariance $\Sigma_{12}$:

$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \sim N\!\left(\begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix},\ \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}\right), \qquad \Sigma_{11}: p \times p,\quad \Sigma_{12}: p \times q,\quad \Sigma_{21}: q \times p,\quad \Sigma_{22}: q \times q.$$
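As a quick concreteness check (my own sketch, not from the slides), the density formula can be evaluated directly and compared against scipy's implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Evaluate p(x) = (2*pi)^(-d/2) |Sigma|^(-1/2) exp(-0.5 (x-mu)^T Sigma^{-1} (x-mu))
# directly, then compare with scipy. All values are illustrative.
mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = np.array([0.3, 0.2])
d = len(mu)

diff = x - mu
quad = diff @ np.linalg.solve(Sigma, diff)            # (x-mu)^T Sigma^{-1} (x-mu)
const = (2 * np.pi) ** (-d / 2) * np.linalg.det(Sigma) ** (-0.5)
p = const * np.exp(-0.5 * quad)

print(p)
print(multivariate_normal(mean=mu, cov=Sigma).pdf(x))  # should match
```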

  6. How would we calculate $p(x_1 \mid x_2)$? The following calculation for Gaussians will be a little long, but it is worth it, because the result will be extremely useful: often the quantities we work with are Gaussian, and often we can use the Gaussian distribution as an approximation. To calculate the posterior probability, we need to know how to factorize the joint probability into a part that depends on $x_1$ and $x_2$ and a part that depends only on $x_2$. So we need to learn how to block-diagonalize the variance-covariance matrix. For a partitioned matrix

$$M = \begin{pmatrix} E & F \\ G & H \end{pmatrix}, \qquad \begin{pmatrix} I & -FH^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} E & F \\ G & H \end{pmatrix} \begin{pmatrix} I & 0 \\ -H^{-1}G & I \end{pmatrix} = \begin{pmatrix} E - FH^{-1}G & 0 \\ 0 & H \end{pmatrix}$$

where $M/H \equiv E - FH^{-1}G$ is called the Schur complement of the matrix $M$ with respect to $H$.

  7. Result 1: now let's take the determinant of the above equation. Remember that for square matrices $A$ and $B$, $\det(AB) = \det(A)\det(B)$, and that the determinant of a block-triangular matrix is the product of the determinants of its diagonal blocks. The two outer factors are block-triangular with identity blocks on the diagonal, so their determinants are 1, giving

$$\det(M) = \det(M/H)\,\det(H).$$

Result 2: as a second result, what is $M^{-1}$? Inverting the same factorization gives

$$M^{-1} = \begin{pmatrix} I & 0 \\ -H^{-1}G & I \end{pmatrix} \begin{pmatrix} (M/H)^{-1} & 0 \\ 0 & H^{-1} \end{pmatrix} \begin{pmatrix} I & -FH^{-1} \\ 0 & I \end{pmatrix}.$$
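Both results are easy to verify numerically; here is a sketch with an arbitrary positive-definite matrix and illustrative block sizes:

```python
import numpy as np

# Numeric check of the Schur-complement identities above
# (arbitrary symmetric positive-definite M; block sizes p=2, q=2 illustrative).
rng = np.random.default_rng(1)
p, q = 2, 2
A = rng.normal(size=(p + q, p + q))
M = A @ A.T + (p + q) * np.eye(p + q)   # symmetric positive definite

E, F = M[:p, :p], M[:p, p:]
G, H = M[p:, :p], M[p:, p:]
S = E - F @ np.linalg.inv(H) @ G        # Schur complement M/H

# Result 1: det(M) = det(M/H) * det(H)
print(np.isclose(np.linalg.det(M), np.linalg.det(S) * np.linalg.det(H)))

# Result 2: M^{-1} from the three-factor formula
Hi, Si = np.linalg.inv(H), np.linalg.inv(S)
left = np.block([[np.eye(p),        np.zeros((p, q))],
                 [-Hi @ G,          np.eye(q)]])
mid = np.block([[Si,                np.zeros((p, q))],
                [np.zeros((q, p)),  Hi]])
right = np.block([[np.eye(p),       -F @ Hi],
                  [np.zeros((q, p)), np.eye(q)]])
print(np.allclose(left @ mid @ right, np.linalg.inv(M)))
```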

  8. We use Result 1 to split the constant factor of the multivariate Gaussian into two factors:

$$(2\pi)^{-(p+q)/2}\,\lvert\Sigma\rvert^{-1/2} = \underbrace{(2\pi)^{-p/2}\,\lvert\Sigma/\Sigma_{22}\rvert^{-1/2}}_{(A)}\ \underbrace{(2\pi)^{-q/2}\,\lvert\Sigma_{22}\rvert^{-1/2}}_{(B)}$$

Now we can factorize the exponential part into two, using Result 2 (with $E = \Sigma_{11}$, $F = \Sigma_{12}$, $G = \Sigma_{21}$, $H = \Sigma_{22}$):

$$-\tfrac{1}{2}\begin{pmatrix} x_1-\mu_1 \\ x_2-\mu_2 \end{pmatrix}^{\!\top}\! \Sigma^{-1} \begin{pmatrix} x_1-\mu_1 \\ x_2-\mu_2 \end{pmatrix} = \underbrace{-\tfrac{1}{2}(x_1-m)^\top (\Sigma/\Sigma_{22})^{-1} (x_1-m)}_{(C)}\ \underbrace{-\tfrac{1}{2}(x_2-\mu_2)^\top \Sigma_{22}^{-1} (x_2-\mu_2)}_{(D)}$$

where $m = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2-\mu_2)$.

  9. Now see that parts A and C, and parts B and D, each combine to form a normal distribution. Thus we can write $p(x_1, x_2) = p(x_1 \mid x_2)\,p(x_2)$. In other words, if $x_1$ and $x_2$ are jointly normally distributed, with

$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \sim N\!\left(\begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix},\ \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}\right),$$

then $x_1$ given $x_2$ has a normal distribution with:

mean: $\mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2)$
variance: $\Sigma_{1|2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$

and $x_2 \sim N(\mu_2, \Sigma_{22})$.
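A short sketch of these formulas, including a numerical check of the factorization $p(x_1, x_2) = p(x_1 \mid x_2)\,p(x_2)$ at one point (all values illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Conditional distribution of x1 given x2 for a joint Gaussian
# (p = q = 1 here for brevity; the same code works for any block sizes).
mu1, mu2 = np.array([0.0]), np.array([1.0])
S11, S12 = np.array([[1.0]]), np.array([[0.6]])
S21, S22 = S12.T, np.array([[2.0]])

x2 = np.array([2.0])  # observed value of x2
cond_mean = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)
cond_cov = S11 - S12 @ np.linalg.solve(S22, S21)

# Check: joint density equals conditional times marginal at a test point.
x1 = np.array([0.5])
joint = multivariate_normal(np.concatenate([mu1, mu2]),
                            np.block([[S11, S12], [S21, S22]])
                            ).pdf(np.concatenate([x1, x2]))
fact = (multivariate_normal(cond_mean, cond_cov).pdf(x1)
        * multivariate_normal(mu2, S22).pdf(x2))
print(np.isclose(joint, fact))  # True
```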

  10. Linear regression with a prior, and the relationship to the Kalman gain. Applying the conditional-Gaussian result to the joint distribution of the weights and the data: with prior $w \sim N(\mu_w, \Sigma_w)$ and observation $y = x^\top w + \varepsilon$, $\varepsilon \sim N(0, \sigma^2)$, the posterior after observing $y^{(n)}$ is normal with

mean: $\mu_w + k\,\bigl(y^{(n)} - x^\top \mu_w\bigr)$
variance: $\Sigma_w - k\, x^\top \Sigma_w$

where $k = \Sigma_w x\,\bigl(x^\top \Sigma_w x + \sigma^2\bigr)^{-1}$ is the Kalman gain.
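A sketch of the sequential form of this update, processing one observation at a time (prior and noise values are illustrative):

```python
import numpy as np

# Kalman-gain form of the Bayesian regression update, one observation
# at a time (sketch; prior and noise settings are illustrative).
mu_w = np.zeros(2)      # prior mean of the weights
Sigma_w = np.eye(2)     # prior covariance of the weights
sigma2 = 0.1            # observation noise variance

def update(mu_w, Sigma_w, x, y):
    """Posterior mean/covariance of w after observing y = x.w + noise."""
    k = Sigma_w @ x / (x @ Sigma_w @ x + sigma2)   # Kalman gain
    mu_post = mu_w + k * (y - x @ mu_w)
    Sigma_post = Sigma_w - np.outer(k, x) @ Sigma_w
    return mu_post, Sigma_post

rng = np.random.default_rng(2)
w_true = np.array([0.5, -1.0])
for _ in range(100):
    x = np.append(1.0, rng.uniform(-1, 1))         # [bias, input]
    y = x @ w_true + rng.normal(0, np.sqrt(sigma2))
    mu_w, Sigma_w = update(mu_w, Sigma_w, x, y)

print(mu_w)  # approaches w_true as observations accumulate
```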

  11. Causal inference. Recall that in the hiking problem we had two GPS devices that measured our position, and we combined the readings from the two devices to form an estimate of our location. This approach makes sense if the two readings are close to each other. However, we can hardly be expected to combine the two readings if one of them is telling us that we are on the north bank of the river and the other is telling us that we are on the south bank; we know that we are not in the middle of the river! In this case the idea of combining the two readings makes little sense. Wallace and colleagues (2004) examined this question by placing people in a room where LEDs and small speakers were arranged around a semicircle (Fig. 1A). A volunteer was placed at the center of the semicircle and held a pointer. The experiment began with the volunteer fixating a location (fixation LED, Fig. 1A). An auditory stimulus was presented from one of the speakers, and then one of the LEDs was turned on 200, 500, or 800 ms later. The volunteer estimated the location of the sound by pointing (pointer, Fig. 1A), and then pressed a switch with their foot if they thought that the light and the sound came from the same location. The results of the experiment are plotted in Fig. 1B and C. The perception of unity was highest when the two events occurred in close temporal and spatial proximity. Importantly, when the volunteers perceived a common source, their perception of the location of the sound was highly affected by the location of the light: the estimate of the location of the sound was biased toward the location of the LED when the volunteer thought that there was a common source (Fig. 1C). This bias fell to near zero when the volunteer perceived the light and sound to originate from different sources.

  12. People were asked to report their perception of unity, i.e., whether the light and the sound came from the same location. Wallace et al. (2004) Exp Brain Res 158:252-258.

  13. When our various sensory organs produce reports that are temporally and spatially in agreement, we tend to believe that a single source was responsible for both observations. In this case, we combine the readings from the sensors to estimate the state of the source. On the other hand, if our sensory measurements are temporally or spatially inconsistent, then we view the events as having disparate sources, and we do not combine the readings. In general, our belief as to whether there was a common source is not black or white; rather, there is some probability that there was a common source, and this probability should have a lot to do with how we combine the information from the various sensors.
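To make the last point concrete, here is one simple way to formalize the idea (my own sketch, not from the lecture): compute the posterior probability of a common source for two noisy 1-D readings, and use it to weight the fused estimate against the separate one. The model structure, parameter names, and values are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Sketch: posterior probability of a common source for a pair of noisy
# 1-D measurements, used to weight fused vs. separate location estimates.
# All parameters (noise SDs, prior, spatial range L) are illustrative.
def estimate_sound(x_sound, x_light, sd_sound=4.0, sd_light=1.0,
                   p_common=0.5, L=40.0):
    # Likelihood of the pair under one source: with a flat prior of width L
    # over source location, integrating the source out leaves a Gaussian
    # in the difference of the two readings.
    like_1 = norm.pdf(x_sound - x_light, 0.0,
                      np.hypot(sd_sound, sd_light)) / L
    # Likelihood under two independent sources: each reading is roughly
    # uniform over the range L.
    like_2 = 1.0 / L**2
    post_common = (p_common * like_1
                   / (p_common * like_1 + (1 - p_common) * like_2))
    # One source: fuse by inverse-variance weighting.
    fused = ((x_sound / sd_sound**2 + x_light / sd_light**2)
             / (1 / sd_sound**2 + 1 / sd_light**2))
    # Two sources: the best estimate of the sound is the auditory reading.
    return post_common * fused + (1 - post_common) * x_sound

# Nearby events -> strong bias toward the light; far apart -> little bias.
print(estimate_sound(x_sound=2.0, x_light=0.0))   # pulled toward 0
print(estimate_sound(x_sound=20.0, x_light=0.0))  # stays near 20
```

This reproduces the qualitative pattern in Fig. 1C: the bias toward the light is large when a common source is probable and falls to near zero when it is not.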
