
The Multivariate Gaussian




  1. The Multivariate Gaussian Jian Zhao

  2. Outline • What is the multivariate Gaussian? • Parameterizations • Mathematical preparation • Joint distributions, marginalization, and conditioning • Maximum likelihood estimation

  3. What is the multivariate Gaussian? It is the distribution with the density p(x | μ, Σ) below, where x is an n×1 vector and Σ is an n×n symmetric, positive-definite matrix.
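
For reference, the standard form of the density is:

\[
p(x \mid \mu, \Sigma)
= \frac{1}{(2\pi)^{n/2}\,\lvert\Sigma\rvert^{1/2}}
\exp\!\Big(-\tfrac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\Big).
\]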

  4. Geometric interpretation Excluding the normalization factor and the exponential, we are left with the quadratic form Δ² = (x − μ)ᵀΣ⁻¹(x − μ). If we set it to a constant value c and consider the case n = 2, we obtain a level curve (sketched below).
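
A minimal sketch of the n = 2 case, assuming for concreteness a diagonal Σ = diag(σ₁², σ₂²) and μ = (μ₁, μ₂)ᵀ:

\[
(x-\mu)^{\top}\Sigma^{-1}(x-\mu)
= \frac{(x_1-\mu_1)^2}{\sigma_1^2} + \frac{(x_2-\mu_2)^2}{\sigma_2^2} = c,
\]

an axis-aligned ellipse; for a general (non-diagonal) Σ the ellipse is rotated.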

  5. Geometric interpretation (continued) This is an ellipse in the coordinates x1 and x2. Thus we can easily imagine that, as n increases, the ellipses become higher-dimensional ellipsoids.

  6. Geometric interpretation (continued) Thus we get a Gaussian bump in n+1 dimensions, where the level surfaces of the Gaussian bump are ellipsoids oriented along the eigenvectors of Σ.
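
Concretely, writing the eigendecomposition of the covariance with orthonormal eigenvectors u_i and eigenvalues λ_i:

\[
\Sigma = U\,\mathrm{diag}(\lambda_1,\dots,\lambda_n)\,U^{\top},
\qquad
(x-\mu)^{\top}\Sigma^{-1}(x-\mu) = \sum_{i=1}^{n}\frac{y_i^2}{\lambda_i} = c,
\quad y_i = u_i^{\top}(x-\mu),
\]

so the level surface at value c is an ellipsoid with semi-axes of length √(cλ_i) along the eigenvectors u_i.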

  7. Parameterization Another type of parameterization puts the density into exponential-family form.
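
A standard way to do this (matching the "canonical parameters" estimated on slide 24) defines the precision matrix Λ = Σ⁻¹ and η = Σ⁻¹μ, giving the canonical (information) form:

\[
p(x \mid \eta, \Lambda)
= \exp\!\Big( a + \eta^{\top}x - \tfrac{1}{2}x^{\top}\Lambda x \Big),
\qquad
a = -\tfrac{1}{2}\big( n\log 2\pi - \log\lvert\Lambda\rvert + \eta^{\top}\Lambda^{-1}\eta \big).
\]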

  8. Mathematical preparation • In order to derive the marginalization and conditioning of the partitioned multivariate Gaussian distribution, we need the theory of block-diagonalizing a partitioned matrix. • In order to maximize the likelihood, we need some properties of matrix traces.

  9. Partitioned matrices • Consider a general partitioned matrix M = (E F; G H). To zero out the upper-right-hand and lower-left-hand corners of M, we can premultiply and postmultiply by block-triangular matrices, as shown below.
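
Assuming H is invertible, the factorization in question is:

\[
\begin{pmatrix} I & -FH^{-1} \\ 0 & I \end{pmatrix}
\begin{pmatrix} E & F \\ G & H \end{pmatrix}
\begin{pmatrix} I & 0 \\ -H^{-1}G & I \end{pmatrix}
=
\begin{pmatrix} E - FH^{-1}G & 0 \\ 0 & H \end{pmatrix}.
\]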

  10. Partitioned matrices (continued) Define the Schur complement of the matrix M with respect to H, denoted M/H, as the term M/H = E − FH⁻¹G. Since the block-diagonal factorization can be inverted factor by factor, we obtain an explicit expression for M⁻¹ (below).
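
Inverting the factorization term by term gives:

\[
M^{-1} =
\begin{pmatrix} I & 0 \\ -H^{-1}G & I \end{pmatrix}
\begin{pmatrix} (M/H)^{-1} & 0 \\ 0 & H^{-1} \end{pmatrix}
\begin{pmatrix} I & -FH^{-1} \\ 0 & I \end{pmatrix}
=
\begin{pmatrix}
(M/H)^{-1} & -(M/H)^{-1}FH^{-1} \\
-H^{-1}G(M/H)^{-1} & H^{-1} + H^{-1}G(M/H)^{-1}FH^{-1}
\end{pmatrix}.
\]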

  11. Partitioned matrices (continued) Note that we could alternatively have decomposed the matrix M in terms of E and M/E = H − GE⁻¹F, yielding the following expression for the inverse.
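
By symmetry (swap the roles of E and H, and of F and G):

\[
M^{-1} =
\begin{pmatrix}
E^{-1} + E^{-1}F(M/E)^{-1}GE^{-1} & -E^{-1}F(M/E)^{-1} \\
-(M/E)^{-1}GE^{-1} & (M/E)^{-1}
\end{pmatrix},
\qquad M/E = H - GE^{-1}F.
\]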

  12. Partitioned matrices (continued) Thus we get two equivalent expressions for M⁻¹. At the same time we get the conclusion |M| = |M/H| |H|.
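
Equating the upper-left blocks of the two expressions for M⁻¹ yields the matrix inversion lemma, and taking determinants of the block factorization yields the determinant identity:

\[
(E - FH^{-1}G)^{-1} = E^{-1} + E^{-1}F\,(H - GE^{-1}F)^{-1}GE^{-1},
\qquad
\lvert M\rvert = \lvert M/H\rvert\,\lvert H\rvert.
\]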

  13. Theory of traces Define tr[A] = Σᵢ aᵢᵢ. It has the following cyclic property: tr[ABC] = tr[CAB] = tr[BCA].

  14. Theory of traces (continued) So, from the definition and the cyclic property, we obtain the derivative identities below.
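
A standard statement of the identities this step relies on, for a matrix B and a vector x:

\[
\frac{\partial}{\partial A}\,\mathrm{tr}[BA] = B^{\top},
\qquad
x^{\top}Ax = \mathrm{tr}\big[x x^{\top} A\big]
\;\Rightarrow\;
\frac{\partial}{\partial A}\,x^{\top}Ax = x x^{\top}.
\]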

  15. Theory of traces (continued) • We want to show that ∂ log|A| / ∂A = (A⁻¹)ᵀ. • Since the inverse can be written via the adjugate, and recalling the cofactor expansion of the determinant, this is equivalent to proving an entrywise identity, as sketched below.
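
A sketch of the argument, where Ã_{ij} denotes the (i, j) cofactor of A:

\[
\lvert A\rvert = \sum_{j} a_{ij}\,\tilde{A}_{ij}
\;\Rightarrow\;
\frac{\partial \lvert A\rvert}{\partial a_{ij}} = \tilde{A}_{ij},
\qquad
\big(A^{-1}\big)_{ji} = \frac{\tilde{A}_{ij}}{\lvert A\rvert}
\;\Rightarrow\;
\frac{\partial \log\lvert A\rvert}{\partial A} = \big(A^{-1}\big)^{\top}.
\]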

  16. Joint distributions, marginalization and conditioning We partition the n×1 vector x into a p×1 block x1 and a q×1 block x2, where n = p + q.
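
In block notation, with the same ordering used throughout:

\[
x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},
\quad
\mu = \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix},
\quad
\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix},
\]

where x_1, μ_1 are p×1, x_2, μ_2 are q×1, Σ_{11} is p×p, and Σ_{22} is q×q.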

  17. Marginalization and conditioning
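
Applying the block diagonalization of Σ to the quadratic form in the exponent splits it into a term in x2 alone and a term in x1 given x2, with the Schur complement Σ/Σ₂₂ = Σ₁₁ − Σ₁₂Σ₂₂⁻¹Σ₂₁ playing the role of the conditional covariance:

\[
(x-\mu)^{\top}\Sigma^{-1}(x-\mu)
= (x_2-\mu_2)^{\top}\Sigma_{22}^{-1}(x_2-\mu_2)
+ (x_1 - m)^{\top}(\Sigma/\Sigma_{22})^{-1}(x_1 - m),
\]

where m = μ₁ + Σ₁₂Σ₂₂⁻¹(x₂ − μ₂).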

  18. Normalization factor
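
Combining the determinant identity from slide 12 (here |Σ| = |Σ/Σ₂₂| |Σ₂₂|) with n = p + q, the normalization constant factors accordingly:

\[
(2\pi)^{n/2}\lvert\Sigma\rvert^{1/2}
= \Big[(2\pi)^{p/2}\,\lvert\Sigma/\Sigma_{22}\rvert^{1/2}\Big]
\cdot
\Big[(2\pi)^{q/2}\,\lvert\Sigma_{22}\rvert^{1/2}\Big].
\]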

  19. Marginalization and conditioning Thus the joint density factors as p(x) = p(x2) p(x1 | x2), with both factors themselves Gaussian.

  20. Marginalization & conditioning In the moment parameterization: • Marginalization • Conditioning (both spelled out below)
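
Explicitly, with the conditional covariance given by the Schur complement Σ/Σ₂₂:

Marginalization:
\[
p(x_2) = \mathcal{N}\big(x_2 \mid \mu_2,\ \Sigma_{22}\big),
\]

Conditioning:
\[
p(x_1 \mid x_2)
= \mathcal{N}\big(x_1 \mid \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2-\mu_2),\ \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\big).
\]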

  21. In another form • Marginalization • Conditioning — the same two operations, written in the canonical parameterization (below).
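
In the canonical parameterization (η = Σ⁻¹μ, Λ = Σ⁻¹, partitioned the same way), the roles are reversed: conditioning becomes the easy operation and marginalization requires an inverse:

Conditioning:
\[
\Lambda^{c} = \Lambda_{11},
\qquad
\eta^{c} = \eta_1 - \Lambda_{12}\,x_2,
\]

Marginalization:
\[
\Lambda^{m} = \Lambda_{22} - \Lambda_{21}\Lambda_{11}^{-1}\Lambda_{12},
\qquad
\eta^{m} = \eta_2 - \Lambda_{21}\Lambda_{11}^{-1}\eta_1.
\]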

  22. Maximum likelihood estimation Calculating μ: taking the derivative of the log likelihood with respect to μ and setting it to zero yields the sample mean.
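
For an i.i.d. sample x_1, …, x_N, the log likelihood and the stationarity condition for μ are:

\[
\ell(\mu,\Sigma)
= -\frac{N}{2}\log\lvert\Sigma\rvert
- \frac{1}{2}\sum_{i=1}^{N}(x_i-\mu)^{\top}\Sigma^{-1}(x_i-\mu) + \text{const},
\]
\[
\frac{\partial \ell}{\partial \mu}
= \Sigma^{-1}\sum_{i=1}^{N}(x_i-\mu) = 0
\;\Rightarrow\;
\hat{\mu}_{\mathrm{ML}} = \frac{1}{N}\sum_{i=1}^{N} x_i.
\]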

  23. Estimate Σ • We need to take the derivative with respect to Σ • using the trace properties established earlier.
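
One standard route writes each quadratic term as a trace and differentiates with respect to Λ = Σ⁻¹, using the identities from slides 14 and 15:

\[
\ell = \frac{N}{2}\log\lvert\Lambda\rvert
- \frac{1}{2}\sum_{i=1}^{N}\mathrm{tr}\big[\Lambda\,(x_i-\mu)(x_i-\mu)^{\top}\big] + \text{const},
\]
\[
\frac{\partial \ell}{\partial \Lambda}
= \frac{N}{2}\Lambda^{-1}
- \frac{1}{2}\sum_{i=1}^{N}(x_i-\mu)(x_i-\mu)^{\top} = 0.
\]

Setting this to zero and solving gives the estimator on the next slide.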

  24. Estimate Σ • Thus the maximum likelihood estimator is the sample covariance given below. • The maximum likelihood estimators of the canonical parameters then follow by reparameterization.
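
Putting the pieces together:

\[
\hat{\Sigma}_{\mathrm{ML}}
= \frac{1}{N}\sum_{i=1}^{N}(x_i-\hat{\mu}_{\mathrm{ML}})(x_i-\hat{\mu}_{\mathrm{ML}})^{\top},
\qquad
\hat{\Lambda}_{\mathrm{ML}} = \hat{\Sigma}_{\mathrm{ML}}^{-1},
\quad
\hat{\eta}_{\mathrm{ML}} = \hat{\Sigma}_{\mathrm{ML}}^{-1}\hat{\mu}_{\mathrm{ML}},
\]

the canonical estimates following from the invariance of maximum likelihood under reparameterization. A minimal NumPy sketch of these estimators (function and variable names are illustrative, not from the slides):

import numpy as np

def gaussian_mle(X):
    """Maximum likelihood estimates for a multivariate Gaussian.
    X: (N, n) array of N i.i.d. samples in n dimensions."""
    N = X.shape[0]
    mu_hat = X.mean(axis=0)                 # (1/N) * sum_i x_i
    centered = X - mu_hat
    Sigma_hat = centered.T @ centered / N   # (1/N) * sum_i (x_i - mu)(x_i - mu)^T
    Lambda_hat = np.linalg.inv(Sigma_hat)   # canonical Lambda = Sigma^{-1}
    eta_hat = Lambda_hat @ mu_hat           # canonical eta = Sigma^{-1} mu
    return mu_hat, Sigma_hat, eta_hat, Lambda_hat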

  25. THANKS ALL
