Likelihood Scores

Presentation Transcript


  1. Likelihood Scores Lecture XX

  2. Reminder from Information Theory
  • Mutual Information: $I(X;Y) = \sum_{x,y} P(x,y) \log \frac{P(x,y)}{P(x)\,P(y)}$
  • Conditional Mutual Information: $I(X;Y \mid Z) = \sum_{x,y,z} P(x,y,z) \log \frac{P(x,y \mid z)}{P(x \mid z)\,P(y \mid z)}$
  • Entropy: $H(X) = -\sum_{x} P(x) \log P(x)$
  • Conditional Mutual Information via entropies: $I(X;Y \mid Z) = H(X \mid Z) - H(X \mid Y, Z)$
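  These definitions translate directly into code. As a minimal sketch (the helper names are mine, not from the lecture), the following computes entropy, mutual information, and conditional mutual information from empirical joint-distribution tables stored as NumPy arrays:

    import numpy as np

    def entropy(p):
        """H(X) = -sum_x p(x) log p(x); zero-probability entries are skipped."""
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def mutual_information(pxy):
        """I(X;Y) from a 2-D joint table pxy[x, y], via I = H(X) + H(Y) - H(X,Y)."""
        px = pxy.sum(axis=1)
        py = pxy.sum(axis=0)
        return entropy(px) + entropy(py) - entropy(pxy)

    def conditional_mutual_information(pxyz):
        """I(X;Y|Z) from a 3-D joint table pxyz[x, y, z].

        Uses the identity I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z).
        """
        pxz = pxyz.sum(axis=1)
        pyz = pxyz.sum(axis=0)
        pz = pxyz.sum(axis=(0, 1))
        return entropy(pxz) + entropy(pyz) - entropy(pz) - entropy(pxyz)

    # Example: X and Y perfectly correlated binary variables
    pxy = np.array([[0.5, 0.0], [0.0, 0.5]])
    print(mutual_information(pxy))   # log 2 ~ 0.693 nats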

  3. Scoring: Maximum Likelihood Function
  • When the scoring function is the maximum likelihood, the model makes the data as probable as possible: we choose the graph structure that yields the highest score at the MLE estimate of the parameters. Writing $\hat{\theta}_{\mathcal{G}}$ for the MLE parameters of graph $\mathcal{G}$ given data $\mathcal{D}$, we define:
    $\mathrm{score}_L(\mathcal{G} : \mathcal{D}) = \ell(\hat{\theta}_{\mathcal{G}} : \mathcal{D}) = \log P(\mathcal{D} \mid \hat{\theta}_{\mathcal{G}}, \mathcal{G})$

  4. Two Graph Structures
  • Consider two simple graph structures over X and Y: $\mathcal{G}_0$, with no edge (X and Y independent), and $\mathcal{G}_1$, with the single arc $X \to Y$.
  • The corresponding log-likelihoods are $\ell(\hat{\theta}_{\mathcal{G}_0} : \mathcal{D}) = \sum_m \left[ \log \hat{\theta}_{x[m]} + \log \hat{\theta}_{y[m]} \right]$ and $\ell(\hat{\theta}_{\mathcal{G}_1} : \mathcal{D}) = \sum_m \left[ \log \hat{\theta}_{x[m]} + \log \hat{\theta}_{y[m] \mid x[m]} \right]$.
  • The difference is: $\mathrm{score}_L(\mathcal{G}_1 : \mathcal{D}) - \mathrm{score}_L(\mathcal{G}_0 : \mathcal{D}) = \sum_m \left[ \log \hat{\theta}_{y[m] \mid x[m]} - \log \hat{\theta}_{y[m]} \right]$

  5. Example Continued
  • By counting how many times each conditional probability parameter appears in this term (with $M[x,y]$ the number of instances where $X = x$ and $Y = y$, and $M$ the total number of instances):
    $\mathrm{score}_L(\mathcal{G}_1 : \mathcal{D}) - \mathrm{score}_L(\mathcal{G}_0 : \mathcal{D}) = \sum_{x,y} M[x,y] \log \frac{\hat{P}(y \mid x)}{\hat{P}(y)} = M \cdot I_{\hat{P}}(X;Y)$
  • where $\hat{P}$ is the empirical distribution observed in the data and $I_{\hat{P}}(X;Y)$ is the mutual information between X and Y in the distribution $\hat{P}$. The goal is therefore to maximize the mutual information.
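  This identity is easy to check numerically. A self-contained sketch with made-up binary data (the variable names are mine, not from the lecture):

    import numpy as np

    rng = np.random.default_rng(0)
    M = 1000
    x = rng.integers(0, 2, size=M)
    y = np.where(rng.random(M) < 0.8, x, 1 - x)   # Y copies X 80% of the time

    # Empirical joint P^(x, y) and its marginals
    p_xy = np.zeros((2, 2))
    np.add.at(p_xy, (x, y), 1.0 / M)
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

    # Score difference between G1 (X -> Y) and G0 (no edge), summed per instance:
    # sum_m [ log theta^_{y[m]|x[m]} - log theta^_{y[m]} ]
    ll_diff = np.sum(np.log(p_xy[x, y] / p_x[x]) - np.log(p_y[y]))

    # Mutual information of X and Y under the empirical distribution
    mi = sum(p_xy[a, b] * np.log(p_xy[a, b] / (p_x[a] * p_y[b]))
             for a in range(2) for b in range(2) if p_xy[a, b] > 0)

    print(ll_diff, M * mi)   # the two values agree up to floating-point error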

  6. Likelihood Score: General Networks
  • Proposition: $\mathrm{score}_L(\mathcal{G} : \mathcal{D}) = M \sum_i I_{\hat{P}}(X_i ; \mathrm{Pa}^{\mathcal{G}}_{X_i}) - M \sum_i H_{\hat{P}}(X_i)$
  • The entropy term does not depend on the structure, while the mutual-information term can only grow as parents are added; the likelihood score therefore always rewards adding arcs between the nodes, and penalizing the addition of too many arcs is needed to prevent over-fitting.
  • Proof: decompose the log-likelihood by looping over all variables, then over all rows of each CPT (settings of the parents), then over all settings of this variable:
    $\ell(\hat{\theta}_{\mathcal{G}} : \mathcal{D}) = \sum_i \sum_{\mathrm{pa}_i} \sum_{x_i} M[x_i, \mathrm{pa}_i] \log \hat{P}(x_i \mid \mathrm{pa}_i)$

  7. Proof Cont.
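  A sketch of the remaining steps, following the standard derivation (reconstructed, not verbatim from the slide):

    \begin{align*}
    \ell(\hat{\theta}_{\mathcal{G}} : \mathcal{D})
      &= M \sum_i \sum_{x_i, \mathrm{pa}_i} \hat{P}(x_i, \mathrm{pa}_i) \log \hat{P}(x_i \mid \mathrm{pa}_i)
         && \text{since } M[x_i, \mathrm{pa}_i] = M \, \hat{P}(x_i, \mathrm{pa}_i) \\
      &= M \sum_i \sum_{x_i, \mathrm{pa}_i} \hat{P}(x_i, \mathrm{pa}_i)
         \left[ \log \frac{\hat{P}(x_i, \mathrm{pa}_i)}{\hat{P}(x_i)\,\hat{P}(\mathrm{pa}_i)}
                + \log \hat{P}(x_i) \right] \\
      &= M \sum_i I_{\hat{P}}(X_i ; \mathrm{Pa}^{\mathcal{G}}_{X_i})
         - M \sum_i H_{\hat{P}}(X_i)
    \end{align*}

  The split in the second line uses $\hat{P}(x_i \mid \mathrm{pa}_i) = \frac{\hat{P}(x_i, \mathrm{pa}_i)}{\hat{P}(x_i)\,\hat{P}(\mathrm{pa}_i)} \cdot \hat{P}(x_i)$; summing the second term over $\mathrm{pa}_i$ leaves $\sum_{x_i} \hat{P}(x_i) \log \hat{P}(x_i) = -H_{\hat{P}}(X_i)$, which proves the proposition.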

  8. Tree-Augmented Naïve Bayes (TAN) Model
  • Bayesian network in which one node is distinguished as the Class node
  • Arc from Class to every other node (feature node), as in naïve Bayes
  • Remaining arcs form a directed tree among the feature nodes

  9. TAN Learning Algorithm (guarantees the maximum-likelihood TAN model; see the sketch after this list)
  • Compute (based on the data set) the conditional mutual information between each pair of features, conditional on Class
  • Compute the maximum-weight spanning tree of the complete graph over the features, with each edge weighted by the conditional mutual information computed above
  • Choose any feature as root and direct arcs away from it, to get a directed tree over the features
  • Add the Class variable with arcs to all feature nodes, to get the final network structure
  • Learn parameters from data as for any other Bayes net
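  The five steps above fit in a short script. Below is a minimal sketch (the function names and the choice of feature 0 as root are mine, not from the lecture) that learns the TAN structure from a matrix of discrete features X and a class vector y:

    import numpy as np
    from itertools import combinations

    def cond_mutual_info(xi, xj, c):
        """Empirical I(Xi; Xj | C) from three equal-length arrays of discrete labels."""
        cmi = 0.0
        for cv in np.unique(c):
            mask = c == cv
            vi, inv_i = np.unique(xi[mask], return_inverse=True)
            vj, inv_j = np.unique(xj[mask], return_inverse=True)
            joint = np.zeros((len(vi), len(vj)))      # joint table within this class
            np.add.at(joint, (inv_i, inv_j), 1.0)
            joint /= mask.sum()
            outer = joint.sum(axis=1, keepdims=True) * joint.sum(axis=0, keepdims=True)
            nz = joint > 0
            cmi += mask.mean() * np.sum(joint[nz] * np.log(joint[nz] / outer[nz]))
        return cmi

    def learn_tan_structure(X, y):
        """Return the arcs of a TAN model over discrete data X (n_samples x n_features)."""
        n_feat = X.shape[1]
        # Step 1: conditional mutual information for every feature pair, given Class
        w = np.zeros((n_feat, n_feat))
        for i, j in combinations(range(n_feat), 2):
            w[i, j] = w[j, i] = cond_mutual_info(X[:, i], X[:, j], y)
        # Steps 2-3: maximum-weight spanning tree grown from feature 0 (Prim's
        # algorithm); growing outward from the root directs every arc away from it
        in_tree, tree_arcs = {0}, []
        while len(in_tree) < n_feat:
            i, j = max(((a, b) for a in in_tree
                        for b in range(n_feat) if b not in in_tree),
                       key=lambda e: w[e])
            tree_arcs.append((i, j))
            in_tree.add(j)
        # Step 4: Class points at every feature; step 5 (fitting the CPTs by
        # maximum-likelihood counts) then proceeds as for any other Bayes net
        return [("Class", f) for f in range(n_feat)] + tree_arcs

  Any feature could serve as root: all choices yield the same undirected tree and hence the same likelihood, which is why the algorithm is free to pick one arbitrarily.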
