Explore the Jensen and variational bounds, MaxEnt models, conditional random fields, and the GIS optimization algorithm in the context of exponential family distributions. Learn about the feature representation of undirected graphical models, see examples of common distributions, and understand the relationship between canonical parameters and moments. Understand sufficient statistics and how they determine the parameter values. ML learning and IRLS follow in the next lecture.
Lecture 7: Generalized Iterative Scaling and Exponential Family Distributions
Iterative Scaling
• Two bounds for convex (concave) functions: the Jensen bound and the variational bound.
• We have seen MaxEnt models in the unsupervised setting. In the supervised setting, we can again take the discriminative or the generative path.
• Discriminative: conditional random fields.
• GIS: a parallel bound-optimization algorithm for (conditional) random fields and MaxEnt distributions (see the sketch after this list).
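A minimal numerical sketch of the GIS update, assuming a small finite state space so that model expectations can be computed exactly by enumeration; the function name `gis` and the toy feature matrix are illustrative, not from the lecture.

```python
import numpy as np

def gis(F, p_emp, n_iters=500):
    """Generalized Iterative Scaling for a MaxEnt model over a finite state space.

    F     : (n_states, n_feats) matrix of nonnegative features, F[x, i] = f_i(x).
    p_emp : (n_states,) empirical distribution over the states.
    Returns canonical parameters lam such that E_lam[f] matches p_emp @ F.
    """
    n_orig = F.shape[1]
    C = F.sum(axis=1).max()
    slack = C - F.sum(axis=1)
    if slack.max() > 0:                  # GIS assumes features sum to C for every x;
        F = np.column_stack([F, slack])  # a slack feature enforces this exactly.
    target = p_emp @ F                   # empirical feature expectations E~[f_i]
    lam = np.zeros(F.shape[1])
    for _ in range(n_iters):
        logits = F @ lam
        p = np.exp(logits - logits.max())
        p /= p.sum()                     # current model distribution p_lam(x)
        model = p @ F                    # model feature expectations E_lam[f_i]
        lam += np.log(target / model) / C  # parallel update: all features at once
    return lam[:n_orig]

# Toy usage: 4 states, 2 binary features
F = np.array([[1., 0.], [0., 1.], [1., 1.], [0., 0.]])
p_emp = np.array([0.4, 0.3, 0.2, 0.1])
lam = gis(F, p_emp)
```

The update is "parallel" in the sense noted on the slide: every parameter moves simultaneously by 1/C times the log-ratio of empirical to model expectations, which is what the bound optimization guarantees will not decrease the likelihood.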
Exponential Family Distributions
• An exponential family distribution has the same feature-based representation as an undirected graphical model: probabilities are proportional to the exponential of a weighted sum of features (sufficient statistics).
• Examples: multinomial, Bernoulli, Gaussian, Poisson, ...
• The mean of the sufficient statistics is the first derivative of log Z with respect to the canonical parameters (verified numerically below).
• The variance of the sufficient statistics is the second derivative of log Z.
• log Z is a convex function of the parameters, so there is a one-to-one correspondence between a parameter value and the derivative of log Z at that value.
• parameter value = canonical parameters; derivative = moments.
• These two representations are duals of each other.
• The sufficient statistics determine the parameter values completely.
• For multiple data cases, the sufficient statistics are summed across the cases.
• Next week: ML learning and IRLS.
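A quick numerical check of the two moment identities, using the Poisson as the example: in exponential-family form the canonical parameter is theta = log(rate) and the log-partition function is A(theta) = e^theta, so both derivatives should recover the rate. The finite-difference setup is a sketch, not part of the lecture.

```python
import numpy as np

# Poisson in exponential-family form: p(x) = exp(theta * x - A(theta)) / x!,
# with sufficient statistic T(x) = x and log-partition A(theta) = exp(theta).
def log_Z(theta):
    return np.exp(theta)

theta = 0.7          # canonical parameter (rate = e^0.7)
eps = 1e-5

# First derivative of log Z -> mean of the sufficient statistic
mean = (log_Z(theta + eps) - log_Z(theta - eps)) / (2 * eps)
# Second derivative of log Z -> variance of the sufficient statistic
var = (log_Z(theta + eps) - 2 * log_Z(theta) + log_Z(theta - eps)) / eps**2

rate = np.exp(theta)  # for the Poisson, mean == variance == rate
print(mean, var, rate)  # all three agree up to finite-difference error
```

The same check works for any family in the list: replace log_Z with the appropriate log-partition function (e.g. log(1 + e^theta) for the Bernoulli) and the first and second derivatives give the mean and variance of the sufficient statistics.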