
Presentation Transcript


  1. CS460/626 : Natural Language Processing/Speech, NLP and the Web • Lecture 34: A very useful Maximum Likelihood Function • Pushpak Bhattacharyya, CSE Dept., IIT Bombay • 11th Nov, 2012

  2. Observed variable • #Observations = N

  3. Hidden variable • Each observation comes from one of M ‘sources’; the M possible source values give a ‘categorical distribution’ for the unobserved variable.

  4. Variables: The complete picture • Observed • Unobserved • Complete data: <X,Z>
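
The notation on this slide did not survive transcription; a standard rendering, assuming one-hot encodings for both variables (an assumption consistent with the indices i, j, k used later), is:

\[ x_i \in \{0,1\}^K, \quad \sum_{k=1}^{K} x_{ik} = 1 \quad \text{(observed: outcome of observation } i\text{)} \]
\[ z_i \in \{0,1\}^M, \quad \sum_{j=1}^{M} z_{ij} = 1 \quad \text{(unobserved: source of observation } i\text{)} \]
\[ \langle X, Z \rangle = \{(x_i, z_i)\}_{i=1}^{N} \quad \text{(complete data)} \]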

  5. Parameters
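
The parameter definitions are missing from the transcript; for M sources with K outcomes each, the natural parameter set (a reconstruction, not verbatim from the slide) is:

\[ \theta = \{p_{jk}\}, \qquad p_{jk} = P(\text{outcome } k \mid \text{source } j), \qquad \sum_{k=1}^{K} p_{jk} = 1 \ \text{ for each } j \]

These M normalization constraints are exactly what the Lagrange multipliers of slide 12 enforce.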

  6. Example • Sources are 10 dice • Each die has 6 outcomes • M = 10 • K = 6 • Suppose #observations = 20, then N = 20
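
A quick size check (my arithmetic, not on the slide): the model has M × K = 10 × 6 = 60 probabilities p_jk, of which the 10 constraints ∑_k p_jk = 1 leave 10 × 5 = 50 free parameters, to be estimated from the N = 20 observations.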

  7. Likelihood formulation • Maximum likelihood of complete data for the ith item
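
The formula itself is absent from the transcript; with the notation above, the complete-data likelihood of the i-th item is standardly written (a reconstruction) as

\[ P(x_i, z_i;\ \theta) = \prod_{j=1}^{M} \prod_{k=1}^{K} (p_{jk})^{\,z_{ij} x_{ik}} \]

Exactly one factor is active: the exponent z_{ij} x_{ik} is 1 only for the actual source j and the actual outcome k, and every other factor equals 1.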

  8. Bernoulli-like trial
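
Only the title survives; the point is presumably the indicator-exponent device familiar from the Bernoulli pmf, which turns a case analysis into a single product:

\[ \text{Bernoulli: } P(x;\ p) = p^{x}(1-p)^{1-x}, \quad x \in \{0,1\} \]
\[ \text{Categorical analogue: } P(x_i \mid \text{source } j;\ \theta) = \prod_{k=1}^{K} (p_{jk})^{x_{ik}} \]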

  9. Variable definitions • i ranges over observations • j ranges over sources, within each observation • k ranges over outcomes, within each source and observation

  10. Log likelihood for MLE • Already proved: we have to maximize the expectation with respect to Z of LL(θ) [a consequence of marginalizing P(X; θ) over Z]
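
Spelled out with the notation above (the slide's equations are not in the transcript), the complete-data log likelihood and its expectation over Z are

\[ LL(\theta) = \log P(X, Z;\ \theta) = \sum_{i=1}^{N} \sum_{j=1}^{M} \sum_{k=1}^{K} z_{ij}\, x_{ik} \log p_{jk} \]
\[ E_{Z}[LL(\theta)] = \sum_{i,j,k} E[z_{ij}]\, x_{ik} \log p_{jk} \]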

  11. Optimization of the expectation under constraints • E can move inside, since LL(·) is linear in z_ij.
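
The linearity claim in one line (made explicit here; the slide states it without proof): LL(θ) is a weighted sum of the z_ij, and expectation is linear, so

\[ E\Big[\sum_{i,j} z_{ij}\, c_{ij}\Big] = \sum_{i,j} E[z_{ij}]\, c_{ij}, \qquad c_{ij} = \sum_{k} x_{ik} \log p_{jk} \]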

  12. Introduction of Lagrange multiplier
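
The Lagrangian is not shown in the transcript; the standard construction, with one multiplier λ_j per normalization constraint ∑_k p_jk = 1, is

\[ \mathcal{L}(\theta, \lambda) = \sum_{i,j,k} E[z_{ij}]\, x_{ik} \log p_{jk} + \sum_{j=1}^{M} \lambda_j \Big(1 - \sum_{k=1}^{K} p_{jk}\Big) \]

Setting ∂𝓛/∂p_jk = 0 gives p_jk ∝ ∑_i E[z_ij] x_ik, and the constraint fixes the proportionality constant.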

  13. Maximization and expectation step
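
Assuming uniform priors over sources (the transcript shows no priors), the two alternating updates come out as

\[ \text{E-step:} \quad E[z_{ij}] = P(z_{ij} = 1 \mid x_i;\ \theta) = \frac{\prod_{k} (p_{jk})^{x_{ik}}}{\sum_{j'=1}^{M} \prod_{k} (p_{j'k})^{x_{ik}}} \]
\[ \text{M-step:} \quad p_{jk} = \frac{\sum_{i=1}^{N} E[z_{ij}]\, x_{ik}}{\sum_{i=1}^{N} E[z_{ij}]} \]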

  14. Two coin example
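
The slide's content is not in the transcript; below is a minimal Python sketch of EM on the classic two-coin setup (M = 2 coins, K = 2 outcomes), where each observation is a count of heads in a fixed number of tosses. The data and starting values are illustrative, not taken from the lecture.

import numpy as np

def em_two_coins(heads, tosses, p_a, p_b, n_iter=20):
    """EM for two coins with unknown biases p_a, p_b (uniform prior over coins)."""
    for _ in range(n_iter):
        # E-step: responsibility of coin A for each observation.
        # Binomial coefficients cancel in the ratio, so p^h (1-p)^(t-h) suffices.
        lik_a = p_a ** heads * (1.0 - p_a) ** (tosses - heads)
        lik_b = p_b ** heads * (1.0 - p_b) ** (tosses - heads)
        r_a = lik_a / (lik_a + lik_b)   # E[z_i,A]
        r_b = 1.0 - r_a                 # E[z_i,B]
        # M-step: responsibility-weighted fraction of heads per coin
        # (the p_jk formula above, specialized to K = 2).
        p_a = (r_a * heads).sum() / (r_a * tosses).sum()
        p_b = (r_b * heads).sum() / (r_b * tosses).sum()
    return p_a, p_b

# Five sets of 10 tosses each; entries are heads counts (illustrative data).
heads = np.array([5.0, 9.0, 8.0, 4.0, 7.0])
tosses = np.full(5, 10.0)
print(em_two_coins(heads, tosses, p_a=0.6, p_b=0.5))

Each iteration computes the posterior source probabilities (E-step) and then re-estimates each coin's heads probability from the weighted counts (M-step): the two updates above with M = 2, K = 2.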
