
ECE 7251: Signal Detection and Estimation





  1. ECE 7251: Signal Detection and Estimation
  Spring 2002
  Prof. Aaron Lanterman, Georgia Institute of Technology
  Lecture 14, 2/6/02: “The” Expectation-Maximization Algorithm (Theory)

  2. Convergence of the EM Algorithm
  • We’d like to prove that the likelihood goes up with each iteration: $\ell(\hat\theta^{(k+1)}) \ge \ell(\hat\theta^{(k)})$, where $\ell(\theta) = \log p(y;\theta)$ is the loglikelihood of the observed (incomplete) data $y$
  • Recall from last lecture, for any complete data $z$ consistent with $y$: $p(z;\theta) = p(z|y;\theta)\,p(y;\theta)$
  • Take logarithms of both sides and rearrange: $\log p(y;\theta) = \log p(z;\theta) - \log p(z|y;\theta)$

  3. Deriving the Smiley Equality
  • Multiply both sides by $p(z|y;\hat\theta^{(k)})$
  • and integrate with respect to z (the left side is unchanged, since $\int p(z|y;\hat\theta^{(k)})\,dz = 1$): $\log p(y;\theta) = \int \log p(z;\theta)\,p(z|y;\hat\theta^{(k)})\,dz - \int \log p(z|y;\theta)\,p(z|y;\hat\theta^{(k)})\,dz$
  • Simplifies to: $\ell(\theta) = Q(\theta;\hat\theta^{(k)}) - H(\theta;\hat\theta^{(k)})$, where $Q(\theta;\hat\theta^{(k)}) = E[\log p(z;\theta)\,|\,y;\hat\theta^{(k)}]$ is the usual E-step quantity and $H(\theta;\hat\theta^{(k)}) = E[\log p(z|y;\theta)\,|\,y;\hat\theta^{(k)}]$. Call this equality ☺
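The smiley equality is easy to sanity-check numerically. Below is a minimal sketch (my own illustration, not from the slides) using a two-state hidden variable, so the complete data is $z = (y, h)$ with $h \in \{0, 1\}$; the joint pmf and the parameter values are invented for the example. It confirms that $\log p(y;\theta) = Q(\theta;\hat\theta^{(k)}) - H(\theta;\hat\theta^{(k)})$ even when $\theta \ne \hat\theta^{(k)}$.

```python
import numpy as np

def joint(theta):
    # Hypothetical joint pmf p(y, h; theta) on y in {0,1}, h in {0,1};
    # theta = (a, b) just parameterizes a valid 2x2 probability table.
    a, b = theta
    return np.array([[a * b,       a * (1 - b)],
                     [(1 - a) * b, (1 - a) * (1 - b)]])  # rows: y, cols: h

y = 0                  # the observed (incomplete) datum
theta   = (0.3, 0.6)   # point at which we evaluate the equality
theta_k = (0.7, 0.2)   # current iterate; defines the averaging density

p, p_k = joint(theta), joint(theta_k)

# Incomplete-data loglikelihood: log p(y; theta) = log sum_h p(y, h; theta)
ell = np.log(p[y].sum())

cond_k = p_k[y] / p_k[y].sum()      # p(h | y; theta_k)
cond   = p[y] / p[y].sum()          # p(h | y; theta)

Q = np.sum(cond_k * np.log(p[y]))   # E[ log p(z; theta)   | y; theta_k ]
H = np.sum(cond_k * np.log(cond))   # E[ log p(z|y; theta) | y; theta_k ]

print(ell, Q - H)                   # the two numbers agree
```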

  4. Using the Smiley Equality
  • Evaluate ☺ at $\theta = \hat\theta^{(k+1)}$: $\ell(\hat\theta^{(k+1)}) = Q(\hat\theta^{(k+1)};\hat\theta^{(k)}) - H(\hat\theta^{(k+1)};\hat\theta^{(k)})$ ♠
  • and at $\theta = \hat\theta^{(k)}$: $\ell(\hat\theta^{(k)}) = Q(\hat\theta^{(k)};\hat\theta^{(k)}) - H(\hat\theta^{(k)};\hat\theta^{(k)})$ ♣
  • Subtract ♣ from ♠: $\ell(\hat\theta^{(k+1)}) - \ell(\hat\theta^{(k)}) = \left[Q(\hat\theta^{(k+1)};\hat\theta^{(k)}) - Q(\hat\theta^{(k)};\hat\theta^{(k)})\right] - \left[H(\hat\theta^{(k+1)};\hat\theta^{(k)}) - H(\hat\theta^{(k)};\hat\theta^{(k)})\right]$

  5. Use a Really Helpful Inequality
  • Let’s focus on this final term for a little bit: $H(\hat\theta^{(k+1)};\hat\theta^{(k)}) - H(\hat\theta^{(k)};\hat\theta^{(k)}) = \int \log\left[\frac{p(z|y;\hat\theta^{(k+1)})}{p(z|y;\hat\theta^{(k)})}\right] p(z|y;\hat\theta^{(k)})\,dz$
  • A really helpful inequality: $\log u \le u - 1$, with equality if and only if $u = 1$
  • Applying it inside the integral gives $H(\hat\theta^{(k+1)};\hat\theta^{(k)}) - H(\hat\theta^{(k)};\hat\theta^{(k)}) \le \int \left[\frac{p(z|y;\hat\theta^{(k+1)})}{p(z|y;\hat\theta^{(k)})} - 1\right] p(z|y;\hat\theta^{(k)})\,dz$

  6. Zeroness of the Last Term
  • The bound from the previous slide collapses: $\int \left[\frac{p(z|y;\hat\theta^{(k+1)})}{p(z|y;\hat\theta^{(k)})} - 1\right] p(z|y;\hat\theta^{(k)})\,dz = \int p(z|y;\hat\theta^{(k+1)})\,dz - \int p(z|y;\hat\theta^{(k)})\,dz = 1 - 1 = 0$
  • Hence $H(\hat\theta^{(k+1)};\hat\theta^{(k)}) - H(\hat\theta^{(k)};\hat\theta^{(k)}) \le 0$
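A quick numeric sanity check (my own, not from the slides): for a discrete hidden variable, the H difference above is exactly minus a Kullback–Leibler divergence, so it can never be positive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary discrete conditionals p(z|y; theta_new) and p(z|y; theta_old)
p_new = rng.random(5); p_new /= p_new.sum()
p_old = rng.random(5); p_old /= p_old.sum()

# H(theta_new; theta_old) - H(theta_old; theta_old)
#   = sum_z p_old(z) log[ p_new(z) / p_old(z) ]
#   = -KL(p_old || p_new) <= 0, with equality iff p_new == p_old
h_diff = np.sum(p_old * np.log(p_new / p_old))
print(h_diff)
```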

  7. I Think We Got It!
  • Now we have: $\ell(\hat\theta^{(k+1)}) - \ell(\hat\theta^{(k)}) \ge Q(\hat\theta^{(k+1)};\hat\theta^{(k)}) - Q(\hat\theta^{(k)};\hat\theta^{(k)})$
  • Recall the definition of the M-step: $\hat\theta^{(k+1)} = \arg\max_\theta Q(\theta;\hat\theta^{(k)})$
  • So, by definition, $Q(\hat\theta^{(k+1)};\hat\theta^{(k)}) \ge Q(\hat\theta^{(k)};\hat\theta^{(k)})$
  • Hence $\ell(\hat\theta^{(k+1)}) \ge \ell(\hat\theta^{(k)})$: the likelihood never decreases
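To watch the monotonicity in action, here is a minimal sketch (my example, not the lecture's) of EM for a two-component Gaussian mixture with known unit variances; the data, starting point, and iteration count are arbitrary. The printed incomplete-data loglikelihood never decreases from one iteration to the next, exactly as the proof guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from the mixture 0.4*N(-2,1) + 0.6*N(3,1)
n = 500
pick = rng.random(n) < 0.6
y = np.where(pick, rng.normal(3.0, 1.0, n), rng.normal(-2.0, 1.0, n))

def normal_pdf(y, mu):
    return np.exp(-0.5 * (y - mu) ** 2) / np.sqrt(2.0 * np.pi)

def loglik(w, mu1, mu2):
    # Incomplete-data loglikelihood of the observed sample
    return np.log(w * normal_pdf(y, mu1) + (1 - w) * normal_pdf(y, mu2)).sum()

w, mu1, mu2 = 0.5, -0.5, 0.5        # deliberately poor starting point

for k in range(15):
    # E-step: responsibilities r_i = p(component 1 | y_i; current parameters)
    num = w * normal_pdf(y, mu1)
    r = num / (num + (1 - w) * normal_pdf(y, mu2))
    # M-step: closed-form maximizers of Q for this model
    w   = r.mean()
    mu1 = np.sum(r * y) / r.sum()
    mu2 = np.sum((1 - r) * y) / (1 - r).sum()
    print(f"iter {k:2d}   loglik = {loglik(w, mu1, mu2):.4f}")
```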

  8. Some Words of Warning
  • Notice we showed that the likelihood was nondecreasing; that doesn’t automatically imply that the parameter estimates converge
  • Parameter estimate could slide along a contour of constant loglikelihood
  • Can prove some things about parameter convergence in special cases
  • Ex: EM Algorithm for Imaging from Poisson Data (i.e. Emission Tomography)

  9. Generalized EM Algorithms
  • Recall this line: $\hat\theta^{(k+1)} = \arg\max_\theta Q(\theta;\hat\theta^{(k)})$
  • What if the M-step is too hard? Try a “generalized” EM algorithm: instead of maximizing, accept any $\hat\theta^{(k+1)}$ satisfying $Q(\hat\theta^{(k+1)};\hat\theta^{(k)}) \ge Q(\hat\theta^{(k)};\hat\theta^{(k)})$ (see the sketch below)
  • Note the convergence proof still works!
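Here is a hedged sketch of a generalized M-step (my construction, not from the lecture), reusing the mixture model above: rather than maximizing Q over the means, it takes a single gradient-ascent step with backtracking, so each accepted update explicitly satisfies $Q(\hat\theta^{(k+1)};\hat\theta^{(k)}) \ge Q(\hat\theta^{(k)};\hat\theta^{(k)})$ and the monotone-likelihood argument of slide 7 goes through unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 300)])

def normal_pdf(y, mu):
    return np.exp(-0.5 * (y - mu) ** 2) / np.sqrt(2.0 * np.pi)

def q_fun(mu, r, w):
    # Q(theta; theta_k) for fixed responsibilities r and weight w
    # (additive constants independent of the parameters are dropped)
    return (np.sum(r * (np.log(w) - 0.5 * (y - mu[0]) ** 2)) +
            np.sum((1 - r) * (np.log(1 - w) - 0.5 * (y - mu[1]) ** 2)))

w, mu = 0.5, np.array([-0.5, 0.5])

for k in range(15):
    # E-step: responsibilities under the current parameters
    num = w * normal_pdf(y, mu[0])
    r = num / (num + (1 - w) * normal_pdf(y, mu[1]))
    w = r.mean()                      # weight still has an easy exact update
    # Generalized M-step: one gradient-ascent step on Q, backtracking until
    # the GEM condition Q(new) >= Q(old) holds
    grad = np.array([np.sum(r * (y - mu[0])), np.sum((1 - r) * (y - mu[1]))])
    step, q_old = 1e-2, q_fun(mu, r, w)
    while q_fun(mu + step * grad, r, w) < q_old:
        step /= 2.0
    mu = mu + step * grad
    print(f"iter {k:2d}   mu = {np.round(mu, 3)}")
```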

  10. The SAGE Algorithm
  • Problem: EM algorithms tend to be slow
  • Observation: “bigger” complete data spaces result in slower algorithms than “smaller” complete data spaces
  • SAGE (Space-Alternating Generalized Expectation-Maximization), Fessler & Hero
  • Split a big complete data space into several smaller “hidden” data spaces
  • Designed to yield faster convergence
  • Generalization of the “ordered subsets” EM algorithm (see the sketch below)
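A faithful SAGE implementation needs per-parameter hidden-data spaces, which is more machinery than fits here. As a taste of the related "ordered subsets" idea the slide mentions, below is a minimal OS-EM-style sketch (my own toy example, not from the lecture) for the Poisson imaging model of slide 8: each sub-iteration applies the multiplicative ML-EM update using only one subset of the measurement rows. In practice this speeds up early iterations, though unlike EM/GEM it gives up the monotonicity guarantee.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Poisson imaging model: y ~ Poisson(A @ lam), A known and nonnegative
m, n = 60, 20
A = rng.random((m, n))
lam_true = rng.uniform(1.0, 5.0, n)
y = rng.poisson(A @ lam_true)

subsets = np.array_split(np.arange(m), 6)   # six ordered subsets of the rows
lam = np.ones(n)                            # positive starting image

for sweep in range(10):
    for S in subsets:
        As, ys = A[S], y[S]
        # Multiplicative ML-EM update restricted to this subset's rows
        lam = lam * (As.T @ (ys / (As @ lam))) / As.sum(axis=0)
    print(f"sweep {sweep}: max|lam - lam_true| = {np.abs(lam - lam_true).max():.3f}")
```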
