
THE LEARNING AND USE OF GRAPHICAL MODELS FOR IMAGE INTERPRETATION


Presentation Transcript


  1. THE LEARNING AND USE OF GRAPHICAL MODELS FOR IMAGE INTERPRETATION. Thesis for the degree of Master of Science, by Leonid Karlinsky, under the supervision of Professor Shimon Ullman.

  2. Introduction

  3. Introduction

  4. Part I: MaxMI Training

  5. Best = Maximal MI. Classification goal: classify C on new examples with minimum error, using a subset of “trained” features F. Training tasks: • best features F • best parameters • an efficient model. A classification sketch follows below. More…
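
A minimal sketch of the classification step, assuming for illustration binary feature detectors and a naive-Bayes-style factorization; the thesis's actual model is the richer TAN structure introduced on later slides, and all names here (map_classify, log_cpt) are illustrative.

    import numpy as np

    def map_classify(f, log_prior, log_cpt):
        """MAP class for one example: argmax_c log P(c) + sum_i log P(f_i | c).

        f         -- length-n sequence of binary feature outputs (0/1)
        log_prior -- shape (k,), log P(C = c)
        log_cpt   -- shape (n, k, 2), log P(F_i = v | C = c)
        """
        scores = log_prior.copy()
        for i, v in enumerate(f):
            scores = scores + log_cpt[i, :, v]  # accumulate log P(f_i | c)
        return int(np.argmax(scores))           # the MAP class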

  6. MaxMI Training – The Past (figure: six numbered feature fragments). • Model: a simple “flat” structure with NCC (normalized cross-correlation) thresholds. • Training: features and thresholds were selected one by one; conditional independence given C increased the MI upper bound. A sketch of this scheme follows below. More…
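
A hedged sketch of this “past” scheme: each candidate feature is thresholded into a binary detector, and candidates are kept one by one by their empirical mutual information with the class C. The names (empirical_mi, greedy_select) and the NCC-score matrix layout are illustrative assumptions, not the thesis's code.

    import numpy as np

    def empirical_mi(x, c):
        """Empirical I(X; C) in nats for two discrete 1-D arrays."""
        mi = 0.0
        for xv in np.unique(x):
            for cv in np.unique(c):
                p_xc = np.mean((x == xv) & (c == cv))
                if p_xc > 0:
                    p_x, p_c = np.mean(x == xv), np.mean(c == cv)
                    mi += p_xc * np.log(p_xc / (p_x * p_c))
        return mi

    def greedy_select(ncc_scores, c, thresholds, k):
        """Keep the k (feature, threshold) pairs with highest I(F; C).

        ncc_scores -- shape (n_examples, n_candidates) NCC responses
        """
        scored = []
        for j in range(ncc_scores.shape[1]):
            col = ncc_scores[:, j]
            mi, t = max((empirical_mi(col > t, c), t) for t in thresholds)
            scored.append((mi, j, t))   # each candidate scored independently
        return sorted(scored, reverse=True)[:k]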

  7. MaxMI Training – Our Approach. Learn the model and the parameters all together, maximizing the mutual information I(C; F).

  8. MaxMI Training – Learning. Maximize over all parameters together. MaxMI: decompose the MI (identity below), then efficiently learn the parameters using GDL (the Generalized Distributive Law). More…
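
The decomposition step can lean on the standard chain rule of mutual information (a textbook identity; the thesis's own decomposition, tied to the TAN structure, may take a different but equivalent form). In LaTeX:

    I(C; F_1, \dots, F_n) \;=\; \sum_{i=1}^{n} I\bigl(C; F_i \mid F_1, \dots, F_{i-1}\bigr)

Under the tree assumptions each summand involves only a small subset of variables, which is what keeps the computation tractable.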

  9. MaxMI Training – Assumptions. • TAN model structure – Tree-Augmented Naïve Bayes [Friedman, 97]. • Feature Tree (FT) – C can be removed while preserving the feature tree.

  10. MaxMI Training – TAN and parameters. • The TAN structure is unknown. • Learn the parameters and the TAN structure such that the MI I(C; F) is maximized. • Asymptotic correctness. • FT holds. • Efficiency.

  11. MaxMI Training – MaxMI hybrid

  12. MaxMI Training – MaxMI hybrid. Combines the best-tree construction of [Chow & Liu, 68] and the best-TAN construction of [Friedman, 97] under the MaxMI objective; a sketch follows below. More…
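
A sketch of the classical building blocks: [Chow & Liu, 68] showed that the best tree over the features is the maximum spanning tree of the pairwise-MI graph, and [Friedman et al., 97] showed that the best TAN is obtained the same way using the conditional MI I(F_i; F_j | C). Below, Prim's algorithm over a precomputed MI matrix; the matrix itself is assumed given.

    import numpy as np

    def max_spanning_tree(mi):
        """Prim's algorithm: tree edges (i, j) maximizing total (conditional) MI.

        mi -- symmetric (n, n) matrix of I(F_i; F_j) or I(F_i; F_j | C)
        """
        n = mi.shape[0]
        in_tree = {0}
        edges = []
        while len(in_tree) < n:
            _, i, j = max((mi[i, j], i, j)
                          for i in in_tree for j in range(n) if j not in in_tree)
            edges.append((i, j))        # attach the strongest outside node
            in_tree.add(j)
        return edges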

  13. MaxMI Training – MaxMI hybrid. • Convergent algorithm: alternate MI-increasing parameter steps with MI non-decreasing TAN restructuring steps. More…

  14. MaxMI Training – empirical results More…

  15. MaxMI Training – empirical results More…

  16. MaxMI Training – Generalizations. • Any low-treewidth structure. • Train any parameters. • Holds even without the assumptions.

  17. Back to the Goals

  18. Part II: Loopy MAP approximation

  19. Loopy network example. • Want to solve MAP: find the assignment x* = argmax_x P(x) in the loopy network. • NP-hard in general! [Cooper 90, Shimony 94] A brute-force sketch follows below.
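
To see why the general problem is hard: without structure to exploit, exact MAP amounts to enumerating every joint assignment. A minimal brute-force sketch over binary variables, with log_potentials as (clique_variables, log_table) pairs; the representation is illustrative.

    import itertools
    import numpy as np

    def brute_force_map(n, log_potentials):
        """argmax over all 2^n assignments of the summed clique log-potentials."""
        best_x, best_score = None, -np.inf
        for x in itertools.product([0, 1], repeat=n):   # 2^n assignments
            score = sum(table[tuple(x[v] for v in vars_)]
                        for vars_, table in log_potentials)
            if score > best_score:
                best_x, best_score = x, score
        return best_x, best_score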

  20. Our approach – opening loops. • Open the loops by duplicating loop-closing variables z into independent copies; now we can maximize over the resulting loop-free network. • The assignment is legal for the loopy problem if the duplicated copies of each z agree.

  21. Our approach – opening loops. • Maximizing legally is as hard as the original loopy problem. • We can maximize unrestricted over the opened network, but usually the unrestricted optimum is not legal. • Our solution – slow connections.

  22. Our approach – slow connections. • Fix z = Z. • Maximize (loop-free, use GDL). • Now legalize and return to step one. • Iterate until convergence. This is the Maximize-and-Legalize algorithm; a schematic follows below.
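
A schematic of the loop exactly as this slide describes it; maximize_tree (exact loop-free MAP via GDL with z held fixed) and legalize (forcing the duplicated z-copies to agree) are placeholders standing in for the thesis's actual procedures.

    def maximize_and_legalize(z0, maximize_tree, legalize, max_iters=100):
        """Iterate: solve the opened (loop-free) problem, then restore legality."""
        z = z0
        for _ in range(max_iters):
            x = maximize_tree(z)       # step 1: fix z = Z, maximize loop-free
            z_new = legalize(x)        # step 2: make the z-copies agree
            if z_new == z:             # fixed point: the assignment is legal
                return x, z
            z = z_new
        return x, z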

  23. Our approach – slow connections. When will this work? • The intuition: the z-minor. • Strong z-minor: global maximum in a single step. • Weak z-minor: local optimum over several steps.

  24. Making the assumptions true: selecting z-variables. • The intuition: recursive z-selection. • Recursive strong z-minor: single step, global maximum! • Recursive weak z-minor: iterations, local maximum. • Different / same speed. • Remove – Contract – Split algorithm. More…

  25. Making the assumptions true Approximating the function • The intuition: recursively “chip away” small parts of the function More…

  26. Existing approximation algorithms • Clustering: triangulation [Pearl, 88] • Loopy Belief Revision [McEliece, 98] • Bethe-Kikuchi Free-Energy: CCCP [Yuille, 02] • Tree Re-Parametrization (TRP) [Wainwright, 03]

  27. Experimental Results More…

  28. Experimental Results More…

  29. Maximum MI vs. Minimum PE More…

  30. Thank you for your time

  31. Explanations

  32. Classification Specifics. MAP: • How do we classify a new example? • What are “the best” features and parameters? Maximize MI: • Why maximize MI? • Tightly related to PE (see the bound below). • More reasons – if time permits. Back…
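
One standard way to make “tightly related to PE” precise is Fano's inequality (a textbook bound, not necessarily the exact relation derived in the thesis): with P_e the probability of classification error and H_b the binary entropy,

    H(C \mid F) \;\le\; H_b(P_e) + P_e \log\bigl(|\mathcal{C}| - 1\bigr)

and since I(C; F) = H(C) - H(C | F), maximizing the MI drives down the lower bound that Fano places on P_e.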

  33. MaxMI Training – The Past – Reasons (figure: six numbered feature fragments). • Why did it work? Conditional independence given C increased the MI upper bound. • What was missing? Conditional independence given C was assumed! Maximizing the “whole” MI. Learning the model structure. Back…

  34. MaxMI Training – JT. • The JT (junction tree) structure follows the TAN structure. • GDL is exponential in the treewidth. • Here, treewidth = 2; a complexity sketch follows below. Back…
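
A sketch of why GDL's cost is exponential in the treewidth: on a chain (treewidth 1) with k states per variable, exact MAP is O(n * k^2) dynamic programming; a treewidth-w junction tree costs O(n * k^(w+1)), so the TAN's treewidth of 2 gives k^3 work per message.

    import numpy as np

    def chain_map(log_unary, log_pair):
        """Exact MAP on a chain via GDL-style (max-product) message passing.

        log_unary -- shape (n, k) node log-potentials
        log_pair  -- shape (n-1, k, k) edge log-potentials
        """
        n, k = log_unary.shape
        msg, back = log_unary[0].copy(), []
        for t in range(1, n):
            scores = msg[:, None] + log_pair[t - 1] + log_unary[t][None, :]
            back.append(np.argmax(scores, axis=0))  # best predecessor per state
            msg = np.max(scores, axis=0)            # running max-marginal
        path = [int(np.argmax(msg))]
        for bp in reversed(back):                   # backtrack the optimum
            path.append(int(bp[path[-1]]))
        return list(reversed(path))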

  35. MaxMI Training – EM. • The EM algorithm [Redner & Walker, 84] could be used to train the CPTs. • Why not EM? EM assumes static training data, which does not hold in our scenario, where the features themselves change during training. Back…

  36. MaxMI Training – MaxMI hybrid solution. • [Chow & Liu, 68]: the “best” feature tree. • [Friedman et al., 97]: the “best” TAN. • [This thesis, 2004]: maximal MI. Back…

  37. MaxMI Training – MaxMI hybrid solution. • MI-increase step: ICR. • MI non-decrease step: TAN restructuring. • Together: asymptotic correctness. Back…

  38. MaxMI Training – MaxMI hybrid Back…

  39. MaxMI Training – empirical results (figures: before training vs. after training). Back…

  40. MaxMI Training – empirical results Back…

  41. MaxMI Training – empirical results Back…

  42. MaxMI Training – empirical results Back…

  43. Remove – Contract – Split Back…

  44. Making the assumptions true Approximating the function • Strong z-minor • Challenge: selecting proper Z constants • Benefit: single step convergence • Weak z-minor • Drawback: exponential in number of “chips” • Benefit: less restrictive Back…

  45. The clique tree Back…

  46. Experimental Results

  47. Experimental Results

  48. Experimental Results Back…

  49. Optional

  50. MaxMI Training – extensions. • Observed and unobserved (O&U) model: MaxMI augmented to support O&U. • Training observed-only + an EM heuristic. • Complete training. • Constrained and greedy TAN restructuring. • MaxMI vs. MinPE in the ideal scenario – characterization and comparison. • Future research directions.
