
Parameter Learning in Markov Nets


Presentation Transcript


  1. Parameter Learning in Markov Nets Dhruv Batra, 10-708 Recitation, 11/13/2008

  2. Contents • MRFs • Parameter learning in MRFs • IPF • Gradient Descent • HW5 implementation
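The transcript names IPF (Iterative Proportional Fitting) without spelling it out; as a reminder (notation mine, not taken from the slides), each IPF step rescales one clique potential so the model marginal matches the empirical one:

```latex
% IPF update for the potential of clique c, iterated over cliques until convergence:
\psi_c^{(t+1)}(x_c) \;=\; \psi_c^{(t)}(x_c)\,
    \frac{\hat{P}(x_c)}{P^{(t)}(x_c)}
% \hat{P}(x_c): empirical marginal from data;  P^{(t)}(x_c): current model marginal.
```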

  3. Semantics • Priors on edges • Ising Prior / Potts Model • Metric MRFs
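Neither prior is written out in the transcript; the standard forms (again in my notation, not copied from the slides) are:

```latex
% Ising prior over binary labels x_i \in \{-1, +1\}:
\varepsilon(x_i, x_j) \;=\; -\beta\, x_i x_j
% Potts model, its multi-label generalization (penalize disagreement):
\varepsilon(x_i, x_j) \;=\; \beta\, \mathbf{1}[x_i \neq x_j]
% Metric MRF: \varepsilon(k, l) is a metric on the label set --
% zero iff k = l, symmetric, and satisfying the triangle inequality.
```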

  4. Metric MRFs • Energies in pairwise MRFs
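For reference, the usual pairwise decomposition this slide refers to (standard notation, assumed rather than transcribed from the deck):

```latex
% Gibbs distribution of a pairwise MRF with node and edge energies:
P(x) \;=\; \frac{1}{Z}\exp\big(-E(x)\big), \qquad
E(x) \;=\; \sum_{i \in \mathcal{V}} \theta_i(x_i)
     \;+\; \sum_{(i,j) \in \mathcal{E}} \theta_{ij}(x_i, x_j), \qquad
Z \;=\; \sum_{x'} \exp\big(-E(x')\big)
```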

  5. HW4 • Semi-supervised image segmentation

  6. HW4 • Effect of varying beta [figure: segmentation results for beta = 0, 2, 4, 6, 8, 10; segmentations by Congcong Li]

  7. HW 5 • Potts Model • More general parameters
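A minimal sketch of what "more general parameters" can mean for a Potts-style pairwise potential: replace the single beta with a full label-pair cost matrix. The function and variable names below are illustrative, not HW5's actual interface:

```python
import numpy as np

def pairwise_energy(x_i, x_j, cost):
    """Tabular pairwise energy: cost is a K x K matrix of label-pair costs."""
    return cost[x_i, x_j]

K, beta = 4, 2.0
potts_cost = beta * (1 - np.eye(K))                  # classic Potts: beta iff labels differ
general_cost = np.arange(K * K, dtype=float).reshape(K, K)
general_cost = (general_cost + general_cost.T) / 2   # symmetrize the general table

print(pairwise_energy(0, 0, potts_cost))   # 0.0  (same labels)
print(pairwise_energy(0, 1, potts_cost))   # 2.0  (different labels)
print(pairwise_energy(1, 3, general_cost))
```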

  8. Learning Parameters of a BN [figure: Bayes net over C, D, I, G, S, L, J, H] • Log likelihood decomposes: • Learn each CPT independently • Use counts 10-708 – Carlos Guestrin 2006
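Because the BN log-likelihood decomposes into a sum of per-family terms, each CPT's maximum-likelihood estimate is just a normalized count. A minimal sketch (toy binary variables and data, made up for illustration):

```python
from collections import Counter
import itertools

# Toy fully observed samples; each row is (D, I, G) with binary values.
data = [(0, 1, 1), (0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 0)]

def estimate_cpt(data, child, parents, arity=2):
    """MLE of P(child | parents) from counts: N(parents, child) / N(parents)."""
    joint, parent = Counter(), Counter()
    for row in data:
        pa = tuple(row[p] for p in parents)
        joint[pa + (row[child],)] += 1
        parent[pa] += 1
    cpt = {}
    for pa in itertools.product(range(arity), repeat=len(parents)):
        for c in range(arity):
            # Uniform fallback for parent configurations never seen in the data.
            cpt[pa + (c,)] = joint[pa + (c,)] / parent[pa] if parent[pa] else 1.0 / arity
    return cpt

# P(G | D, I), with G at index 2 and parents D, I at indices 0, 1:
print(estimate_cpt(data, child=2, parents=(0, 1)))
```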

  9. Learning Parameters of a MN [figure: Markov net over Coherence, Difficulty, Intelligence, Grade, SAT, Letter, Job, Happy] 10-708 – Carlos Guestrin 2006
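Unlike the BN case, the MN log-likelihood does not decompose: the global partition function Z(w) couples all the parameters, so there is no closed-form count-based estimate. In the log-linear parameterization introduced on the next slide:

```latex
% Log-likelihood of M i.i.d. samples under a log-linear Markov net:
\ell(\mathbf{w}) \;=\; \sum_{m=1}^{M}\sum_{i} w_i\,\phi_i\big(x^{(m)}\big)
                 \;-\; M \log Z(\mathbf{w})
% Its gradient compares empirical and model feature expectations:
\frac{\partial \ell}{\partial w_i}
    \;=\; M\Big(\hat{\mathbf{E}}[\phi_i] \;-\; \mathbf{E}_{\mathbf{w}}[\phi_i]\Big)
```

This is why the learning methods listed in the contents are iterative: IPF or gradient descent.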

  10. Log-linear Markov network (most common representation) • Feature is some function φ[D] for some subset of variables D • e.g., indicator function • Log-linear model over a Markov network H: • a set of features φ1[D1], …, φk[Dk] • each Di is a subset of a clique in H • two φ's can be over the same variables • a set of weights w1, …, wk • usually learned from data 10-708 – Carlos Guestrin 2006
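The slide's defining equation did not survive extraction; the standard log-linear form is P(x) = (1/Z) exp(Σᵢ wᵢ φᵢ(Dᵢ)). Below is a minimal sketch of maximum-likelihood gradient ascent for such a model on a toy fully observed problem, with Z handled by brute-force enumeration (feasible only for tiny networks; all names are illustrative, not HW5's actual code):

```python
import itertools
import numpy as np

# Toy log-linear MN over 3 binary variables with two indicator features.
features = [
    lambda x: float(x[0] == x[1]),   # phi_1: agreement on edge (0, 1)
    lambda x: float(x[1] == x[2]),   # phi_2: agreement on edge (1, 2)
]

def model_expectations(w):
    """E_w[phi_i], with the partition function handled by enumerating all states."""
    states = list(itertools.product([0, 1], repeat=3))
    scores = np.array([sum(wi * f(x) for wi, f in zip(w, features)) for x in states])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return np.array([sum(p * f(x) for p, x in zip(probs, states)) for f in features])

# "Observed" data and its empirical feature expectations.
data = [(0, 0, 0), (1, 1, 1), (0, 1, 1), (1, 1, 0)]
emp = np.array([np.mean([f(x) for x in data]) for f in features])

w = np.zeros(len(features))
for _ in range(500):
    w += 0.5 * (emp - model_expectations(w))   # gradient of the avg log-likelihood

print(w)                      # learned weights
print(model_expectations(w))  # ~ emp: MLE matches feature expectations
```

At the optimum the model's feature expectations match the empirical ones, which is exactly the moment-matching condition in the gradient shown after slide 9.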

  11. HW 5 10-708 – Carlos Guestrin 2006

  12. Questions?

  13. Semantics • Factorization • Energy functions • Equivalent representation
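The transcript keeps only the bullet titles; the equivalence they point to is the standard one between clique-potential factorization and energy functions:

```latex
% Factorization over cliques c, and its equivalent energy form:
P(x) \;=\; \frac{1}{Z}\prod_{c}\psi_c(x_c)
     \;=\; \frac{1}{Z}\exp\Big(-\sum_{c}\varepsilon_c(x_c)\Big),
\qquad \varepsilon_c(x_c) \;=\; -\log \psi_c(x_c)
```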

  14. Semantics • Log Linear Models
