
Selected Topics in Graphical Models



Presentation Transcript


  1. Selected Topics in Graphical Models Petr Šimeček

  2. Independence • Unconditional Independence: • Discrete r.v. • Continuous r.v. • Conditional Independence: • Discrete r.v. • Continuous r.v.
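The definition behind the discrete case above can be checked mechanically: X and Y are (unconditionally) independent iff the joint table factorises into the product of the marginals. A minimal sketch with an assumed illustrative joint table (not from the slides):

```python
from itertools import product

# X ⊥ Y iff P(X=x, Y=y) = P(X=x) * P(Y=y) for all x, y.
# The joint table below is an illustrative example, chosen to factorise.
joint = {(0, 0): 0.24, (0, 1): 0.36, (1, 0): 0.16, (1, 1): 0.24}

px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

independent = all(abs(joint[(x, y)] - px[x] * py[y]) < 1e-9
                  for x, y in product((0, 1), repeat=2))
print(independent)   # True: the table factorises as (0.6, 0.4) x (0.4, 0.6)
```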

  3. List of Independence Relationships Given N random variables X1, X2, …, XN and their distribution P, list all conditional and unconditional independence relations between them.

  4. Representation by Graph (figure: graph over nodes X1, X2, X3, X4)

  5. Example – Sprinkler Network (figure: network over Cloudy, Sprinkler, Rain, WetGrass)

  6.–8. Example – Sprinkler Network (the same figure, stepped through slide by slide)

  9. Example – Sprinkler Network (figure: nodes C, S, R, W with their conditional probability tables) The number of parameters needn't grow exponentially with the number of variables; it depends on the number of parents of the nodes.
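The parameter count on this slide can be made concrete: in a binary network, each node with k parents needs one free probability per parent configuration, i.e. 2^k parameters, versus 2^N − 1 for the full joint table. A minimal sketch for the sprinkler structure:

```python
# Free parameters of a binary Bayesian network: each binary node with
# k parents needs 2**k free probabilities (one per parent configuration),
# versus 2**N - 1 for the full joint table over N binary variables.

def bn_parameter_count(parents):
    """parents: dict mapping node -> list of its parent nodes (all binary)."""
    return sum(2 ** len(pa) for pa in parents.values())

# Sprinkler network: C -> S, C -> R, (S, R) -> W
sprinkler = {"C": [], "S": ["C"], "R": ["C"], "W": ["S", "R"]}

print(bn_parameter_count(sprinkler))   # 1 + 2 + 2 + 4 = 9
print(2 ** len(sprinkler) - 1)         # full joint table: 15
```

Already at four variables the factorised form is smaller (9 vs. 15), and the gap widens exponentially as N grows while the in-degrees stay bounded.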

  10. Purpose 1 – Propagation of Evidence (figure: sprinkler network) What is the probability that it is raining, given that the grass is wet?

  11. Propagation of Evidence In general: some variable(s) have been observed. What is the probability of the other variable(s)? What are their most probable value(s)? Why not convert the BN into a contingency table? Because marginalization does not work for large N: it needs 2^N memory, takes much time, and has low precision…

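The query from slide 10, P(Rain | WetGrass), can be answered by brute-force enumeration of the joint for this small network. A sketch using the BN factorisation; the CPT numbers are assumed illustrative values (the slides do not give concrete tables):

```python
from itertools import product

# Illustrative CPTs for the sprinkler network (assumed values).
P_C = {True: 0.5, False: 0.5}
P_S = {True: 0.1, False: 0.5}             # P(S=T | C)
P_R = {True: 0.8, False: 0.2}             # P(R=T | C)
P_W = {(True, True): 0.99, (True, False): 0.9,
       (False, True): 0.9, (False, False): 0.0}   # P(W=T | S, R)

def joint(c, s, r, w):
    """P(C=c, S=s, R=r, W=w) via the BN factorisation."""
    p = P_C[c]
    p *= P_S[c] if s else 1 - P_S[c]
    p *= P_R[c] if r else 1 - P_R[c]
    p *= P_W[(s, r)] if w else 1 - P_W[(s, r)]
    return p

# P(Rain = T | WetGrass = T) by summing out C and S.
num = sum(joint(c, s, True, True) for c, s in product([True, False], repeat=2))
den = sum(joint(c, s, r, True) for c, s, r in product([True, False], repeat=3))
print(round(num / den, 3))   # approx 0.708
```

This is exactly the approach slide 11 warns against at scale: the enumeration touches all 2^N joint states, which is fine for four variables but hopeless for large N; practical systems use message-passing schemes such as the junction tree algorithm instead.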

  13. Purpose 2 – Parameter Learning (figure: sprinkler network)

  14. Parameter Learning We know: • the graph (CI structure) • a sample (observations) of the BN We don't know: • the conditional probability distributions (these can be estimated by MLE or Bayesian statistics)
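For discrete variables, the MLE mentioned on this slide reduces to counting: the estimate of each CPT entry is the relative frequency within the matching parent configuration. A minimal sketch estimating P(R | C) from an assumed toy sample:

```python
from collections import Counter

# MLE of P(R=T | C) from (C, R) observations: count and normalise
# within each parent configuration. The sample below is a toy example.
sample = [(True, True), (True, True), (True, False),
          (False, False), (False, True), (False, False)]

counts = Counter(sample)

def mle(c):
    """Maximum-likelihood estimate of P(R = True | C = c)."""
    n_true = counts[(c, True)]
    return n_true / (n_true + counts[(c, False)])

print(mle(True), mle(False))   # 2/3 and 1/3
```

A Bayesian alternative would add pseudo-counts (e.g. a Dirichlet prior) before normalising, which avoids zero probabilities when a parent configuration is rarely observed.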

  15. Purpose 3 – Structure Learning

  16. Structure Learning We know: • independent observations (data) of the BN • sometimes, the causal ordering of the variables We don't know: • the graph (CI structure) • the conditional probability distributions Solution: • CI tests • maximization of some criterion over a huge search space (AIC, BIC, Bayesian approach)
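The "maximize some criterion" route can be sketched with BIC for binary networks: the log-likelihood of the data under the MLE, minus a complexity penalty per free parameter. This is a minimal illustration of the scoring step only, not a search algorithm; the toy data are assumed:

```python
import math
from collections import Counter

def bic(data, parents):
    """BIC score of a binary BN structure.
    data: list of dicts {var: bool}; parents: dict var -> parent list."""
    m = len(data)
    loglik, n_params = 0.0, 0
    for var, pa in parents.items():
        n_params += 2 ** len(pa)   # one free probability per parent config
        counts = Counter((tuple(row[p] for p in pa), row[var]) for row in data)
        for (cfg, _val), n in counts.items():
            total = counts[(cfg, True)] + counts[(cfg, False)]
            loglik += n * math.log(n / total)   # MLE log-likelihood term
    return loglik - 0.5 * n_params * math.log(m)

# Toy data in which R clearly depends on C:
data = ([{"C": True,  "R": True}]  * 4 + [{"C": True,  "R": False}] * 1 +
        [{"C": False, "R": True}]  * 1 + [{"C": False, "R": False}] * 4)

with_edge = {"C": [], "R": ["C"]}
no_edge   = {"C": [], "R": []}
print(bic(data, with_edge) > bic(data, no_edge))   # True: the edge C -> R wins
```

Scoring a single structure is cheap; the hardness noted on the slide comes from the search, since the number of DAGs grows super-exponentially in the number of variables.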

  17. Example – Entry Examination

  18. Markov Equivalence (figure: Rain → WetGrass vs. Rain ← WetGrass) Some arcs can be reversed without changing the CI relationships, so the best one can hope for is to identify the model up to Markov equivalence.

  19. Structure Learning • Theory • algorithms are proven to be asymptotically correct • Janžura, Nielsen (2003): 1,000,000 observations for 10 binary variables • Practice • in medicine, usually 50–1500 observations • BNs are often used in spite of that

  20. Structure Learning – Simulation • 3 variables, m ranging from 100 to 1000 • for each m, repeat 100 times: • generate a random Bayesian network • generate m samples from it • run the K2 structure learning algorithm • count the proportion of successful selections for each m This should answer the question: "Is there a chance of finding the true model?"

  21. To Do List: • software: free, open source, easy to use, fast, with a separate API • more simulations: theory vs. practice • popularization of structure learning • Czech literature: possibly my Ph.D. thesis

  22. References: • Castillo E. et al. (1997): Expert Systems and Probabilistic Network Models, Springer-Verlag. • Neapolitan R. E. (2003): Learning Bayesian Networks, Prentice Hall. • Janžura N., Nielsen J. (2003): A Numerical Method for Learning Bayesian Networks from Statistical Data, WUPES. Thanks for your attention
