
Efficient Inference for General Hybrid Bayesian Networks



  1. Efficient Inference for General Hybrid Bayesian Networks Wei Sun PhD in Information Technology wsun@gmu.edu George Mason University, 2007

  2. Acknowledgements • Sincere gratitude goes to: • Dr. KC Chang • Dr. Kathryn Laskey • Dr. Kristine Bell • Dr. James Gentle • Financial support from MDA-sponsored projects.

  3. Overview • Inference algorithm development • Probabilistic inference for Bayesian networks (BNs) is NP-hard in general. • It is well known that no exact solution is possible in the nonlinear, non-Gaussian case. • This dissertation develops efficient approximate inference algorithms for hybrid Bayesian networks in which arbitrary continuous variables with nonlinear relationships are mixed with discrete variables. • Model performance evaluation for hybrid BNs • Model performance is typically evaluated using extensive simulation with possibly complicated inference algorithms. • This dissertation develops approximate analytical performance prediction using Gaussian mixture models.

  4. Outline • Background • Research objective and contributions • Literature review • Efficient inference algorithms for general hybrid BN. • BN model performance evaluation. • Summary

  5. Simple Example of BN: Vehicle Identification

  6. Advantages of Using a BN Model • Conditional independence simplifies specification and inference. • Joint distribution using a general probability model: 2*2*2*3*3 = 72 joint states, so 71 probabilities need to be specified. • BN model: P(T,W,F,R,S) = P(T)P(W)P(R|T,W)P(F|T)P(S|T,F), so only 22 probabilities need to be specified. Note: for real-life problems, the savings can be significant. • Modularity, decomposability, transparent modeling, efficient reasoning.
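
A minimal sketch of this parameter count, assuming (hypothetically) the cardinalities |T| = 2, |W| = 2, |R| = 2, |F| = 3, |S| = 3, which is one assignment consistent with the slide's numbers (2*2*2*3*3 = 72 joint states, 22 free parameters after factoring):

# Count free parameters for the full joint table vs. the factored BN.
# The cardinalities below are an assumption, not taken from the slides.
card = {'T': 2, 'W': 2, 'R': 2, 'F': 3, 'S': 3}
parents = {'T': [], 'W': [], 'R': ['T', 'W'], 'F': ['T'], 'S': ['T', 'F']}

def joint_params(card):
    # Full joint table: product of cardinalities, minus 1 (probabilities sum to 1).
    n = 1
    for c in card.values():
        n *= c
    return n - 1

def bn_params(card, parents):
    # Each CPD P(X | Pa(X)) needs (|X| - 1) free entries per parent configuration.
    total = 0
    for x, pa in parents.items():
        configs = 1
        for p in pa:
            configs *= card[p]
        total += configs * (card[x] - 1)
    return total

print(joint_params(card))        # 71
print(bn_params(card, parents))  # 22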

  7. Real Applications • large networks • more interactions between variables • nonlinear relationships • hybrid variables • arbitrary distributions

  8. Research Overview • Many real-world problems are naturally modeled by hybrid BNs with both categorical and continuous variables: nonlinear relationships, arbitrary distributions, and large size. • Objective: develop efficient approximate inference algorithms that perform acceptably in problems with nonlinearity, heterogeneity, and non-Gaussian variables. • Approach: message passing, the unscented transformation, function estimation, and Gaussian mixture models are integrated in a unified manner.

  9. Contributions • Novel inference algorithm development • Unscented message passing (UMP-BN) for arbitrary continuous Bayesian networks • Hybrid message passing (HMP-BN) for general mixed Bayesian networks • Performance evaluation methods • An approximate analytical method to predict BN model performance without extensive simulation • It helps the decision maker understand the model's prediction performance and helps the modeler build and validate models effectively. • Software development • Inference algorithms coded in MATLAB are added to the BN Toolbox as extra inference engines.

  10. Literature Review

  11. Definition of BN • A Bayesian network is a directed acyclic graph consisting of nodes and arcs: • Nodes: variables • Arcs: probabilistic dependence relationships • Parameters: for each node, there is a conditional probability distribution (CPD). • CPD of Xi: P(Xi | Pa(Xi)), where Pa(Xi) denotes all parents of Xi. • Discrete: the CPD is typically represented as a table, also called a CPT. • Continuous: the CPD is specified using a functional relationship Xi = f(Pa(Xi), w), where w is random noise. • Joint distribution of the variables in a BN: P(X1, ..., Xn) = ∏i P(Xi | Pa(Xi)).

  12. Probabilistic Inference in BN • Task: find the posterior distributions of query nodes given evidence. • Bayes' Rule: P(X | E) = P(E | X) P(X) / P(E). • Both exact and approximate inference using BNs are NP-hard. Tractable inference algorithms exist only for special classes of BNs.
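
As a toy illustration of this rule (all states and numbers below are made up, not taken from the dissertation), the posterior over a query variable is the likelihood times the prior, renormalized:

# Toy Bayes' rule computation with hypothetical numbers.
prior = {'car': 0.7, 'truck': 0.3}        # P(X)
likelihood = {'car': 0.2, 'truck': 0.6}   # P(E | X) for the observed evidence E

unnormalized = {x: likelihood[x] * prior[x] for x in prior}
z = sum(unnormalized.values())            # P(E), the normalizing constant
posterior = {x: v / z for x, v in unnormalized.items()}
print(posterior)                          # {'car': 0.4375, 'truck': 0.5625}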

  13. Classify BNs by Network Structure • Singly-connected networks (a.k.a. polytrees) • Multiply-connected networks

  14. Classify BNs by Node Types • Node types • Discrete: the conditional probability distribution is typically represented as a table. • Continuous: Gaussian or non-Gaussian distribution; the conditional probability distribution is specified using a functional relationship Xi = f(Pa(Xi), w), where w is random noise; the function may be linear or nonlinear. • Hybrid model: discrete and continuous variables are mixed in the network.

  15. Conditional Linear Gaussian (CLG) • The conditional linear Gaussian (CLG) model is the simplest class of hybrid Bayesian networks: • All continuous variables are Gaussian. • The functional relationships between continuous variables and their parents are linear. • No discrete node has a continuous parent. • Given any assignment of all the discrete variables, a CLG represents a multivariate Gaussian distribution.
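
A minimal sketch of a CLG conditional distribution (the coefficients and states below are hypothetical, for illustration only): given a discrete parent D and a continuous parent U, the child X is Gaussian with a mean that is linear in U, with coefficients selected by the state of D.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CLG CPD: P(X | D = d, U = u) = N(a[d] + b[d] * u, sigma2[d])
a      = {0: 1.0,  1: -2.0}   # intercept for each discrete parent state
b      = {0: 0.5,  1:  3.0}   # linear coefficient for each discrete parent state
sigma2 = {0: 0.25, 1:  1.0}   # variance for each discrete parent state

def sample_x(d, u):
    mean = a[d] + b[d] * u
    return rng.normal(mean, np.sqrt(sigma2[d]))

print(sample_x(0, 2.0))   # one draw from N(2.0, 0.25)
print(sample_x(1, 2.0))   # one draw from N(4.0, 1.0)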

  16. Conditional Hybrid Model (CHM) • The conditional hybrid model (CHM) is a special hybrid BN: • No discrete node has a continuous parent. • Continuous variables can have arbitrary distributions. • The functional relationships between variables can be nonlinear. • The only difference between a CHM and a general hybrid BN is the restriction that no discrete node has a continuous parent.

  17. Examples of CHM and CLG (figure: a Conditional Hybrid Model (CHM) and a CLG model)

  18. Research Focus: Taxonomy of BNs

  19. Inference Algorithms Review - 1 • Exact Inference • Pearl's message passing algorithm (MP) [Pearl88] • In MP, messages (probabilities/likelihoods) propagate between variables. After a finite number of iterations, each node has its correct beliefs. • It works only for purely discrete or purely Gaussian, singly-connected networks (where inference is done in linear time). • Clique tree (a.k.a. junction tree) [LS88, SS90, HD96] and related algorithms • These include variable elimination, arc reversal, and symbolic probabilistic inference (SPI). • They work only on purely discrete or purely Gaussian networks, or simple CLGs. • For CLGs, the clique tree algorithm is also called Lauritzen's algorithm (1992). It returns the correct mean and variance of the posterior distributions for continuous variables even though the true distribution might be a Gaussian mixture. • It does not work for general hybrid models and is intractable for complicated CLGs.

  20. Inference Algorithms Review - 2 • Approximate Inference • Model simplification • Discretization, linearization, arc removal, etc. • Performance degradation can be significant. • Sampling methods • Logic sampling [Hen88] • Likelihood weighting [FC89] • Adaptive importance sampling (AIS-BN) [CD00], EPIS-BN [YD03] • These perform well in the case of unlikely evidence, but only work for purely discrete networks. • Markov chain Monte Carlo • Gibbs sampling • Cutset sampling [BD06], which can also be used in importance sampling; it outperforms AIS-BN, but only works for discrete BNs. • Loopy propagation [MWJ99]: uses Pearl's message passing algorithm on networks with loops. This has become a popular topic recently. • For purely discrete or purely Gaussian networks with loops, it usually converges to approximate answers in several iterations. • For hybrid models, message representation and integration are open issues. • Numerical hybrid loopy propagation [YD06] is computationally intensive.

  21. Methodologies for efficient inference in hybrid BN models

  22. Pearl's Message Passing Algorithm • In a polytree, any node d-separates the sub-network above it from the sub-network below it. For a typical node X in a polytree, the evidence can therefore be divided into two exclusive sets and processed separately: e+ (evidence connected to X through its parents) and e- (evidence connected to X through its children). • Define the pi and lambda messages as π(x) = P(X = x | e+) and λ(x) = P(e- | X = x). • Then the belief of node X is BEL(x) = α λ(x) π(x), where α is a normalizing constant. • Note: a multiply-connected network may not be partitioned into two separate sub-networks by a single node.

  23. Pearl's Message Passing in BNs • In the message passing algorithm, each node maintains its own lambda and pi values. It also sends a lambda message to every parent it has and a pi message to each of its children. • After a finite number of iterations of message passing, every node obtains its correct belief. For polytrees, MP returns exact beliefs; for networks with loops, MP is called loopy propagation and often gives good approximations to the posterior distributions.
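
A minimal sketch of the belief update at a single discrete node, assuming the lambda messages from its children and the pi value from its parents have already been collected (the node, its state space, and all numbers are hypothetical):

import numpy as np

# Hypothetical collected messages for a 3-state node X.
lam_msgs = [np.array([0.9, 0.2, 0.1]),    # lambda message from child 1
            np.array([0.5, 0.5, 0.8])]    # lambda message from child 2
pi_x     = np.array([0.6, 0.3, 0.1])      # pi(x) = P(x | evidence above X)

lam_x = np.prod(lam_msgs, axis=0)         # lambda(x): product of the children's messages
belief = lam_x * pi_x                     # BEL(x) proportional to lambda(x) * pi(x)
belief /= belief.sum()                    # alpha: normalize to a distribution
print(belief)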

  24. Continuous Message Representation and Propagation • A message is represented by its MEAN and VARIANCE, regardless of the underlying distribution. • Message propagation between continuous variables is equivalent to fusing multiple estimates and estimating functional transformations of distributions. • The unscented transformation uses a deterministic sampling scheme to obtain good estimates of the first two moments of a continuous random variable subject to a nonlinear function transformation.

  25. Unscented Transformation (UT) • The unscented transformation is a deterministic sampling method. • It approximates the first two moments of a continuous random variable transformed via an arbitrary nonlinear function. • UT is based on the principle that it is easier to approximate a probability distribution than a nonlinear function. Deterministic sample points (sigma points) are chosen and propagated through the nonlinear function.
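
A minimal sketch of the scalar unscented transformation using the standard sigma-point construction (the prior moments, the scaling parameter kappa, and the nonlinear function are placeholder choices, not taken from the dissertation):

import numpy as np

def unscented_moments(mean, var, f, kappa=2.0):
    # Approximate the mean and variance of f(X) for a scalar X with the given mean/variance.
    n = 1                                          # dimension (scalar case)
    spread = np.sqrt((n + kappa) * var)
    sigma_points = np.array([mean, mean + spread, mean - spread])
    weights = np.array([kappa / (n + kappa),
                        0.5 / (n + kappa),
                        0.5 / (n + kappa)])
    y = f(sigma_points)                            # propagate through the nonlinearity
    y_mean = np.dot(weights, y)
    y_var = np.dot(weights, (y - y_mean) ** 2)
    return y_mean, y_var

# Example: propagate N(1, 0.5) through an arbitrary nonlinear function.
print(unscented_moments(1.0, 0.5, lambda x: np.sin(x) + 0.1 * x ** 2))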

  26. Unscented Transformation Example • A cloud of 5000 samples drawn from a Gaussian prior is propagated through an arbitrary, highly nonlinear function, and the true posterior sample mean and covariance are calculated; these serve as ground truth for comparing the two approaches, EKF and UT.

  27. Unscented Message Passing (UMP-BN) (for arbitrary continuous BNs) • Generalized equations are derived to handle continuous variables, extending Pearl's conventional message passing equations.

  28. UMP-BN Algorithm Wei Sun and KC Chang. “Unscented Message Passing for Arbitrary Continuous Variables in Bayesian Networks” In Proceedings of the 22nd AAAI Conference on Artificial Intelligence, Vancouver, Canada, July 2007.

  29. UMP-BN: Numerical Experiments

  30. Numerical Results - 1

  31. Numerical Results - 2

  32. Hybrid Message Passing • Message passing in purely discrete networks is well established. • Message passing in purely continuous networks can be approximated using UMP-BN. • So why not partition the network, pass messages separately in each segment, and then fuse the intermediate messages?

  33. Network Partition for CHM BNs • Definition: In a conditional hybrid model (CHM), a discrete variable is called a discrete parent if and only if it has at least one continuous child. • Definition: The set of all discrete parents in a CHM is called the interface nodes of the network. • The interface nodes partition the hybrid Bayesian network into separate network segments, each of which contains either purely discrete or purely continuous variables. • The interface nodes d-separate the continuous variables from the other, non-interface discrete variables. Furthermore, if there are multiple continuous network segments, the interface nodes d-separate the continuous variables in different continuous sub-networks.
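
A minimal sketch of identifying the interface nodes of a CHM from its parent lists and node types (the example graph below is hypothetical and not the model used in the experiments):

# Interface nodes: discrete variables with at least one continuous child.
parents = {'H': [], 'K': ['H'], 'B': [],
           'T': ['K'], 'R': ['K', 'B'], 'S': ['R']}
is_discrete = {'H': True, 'K': True, 'B': True,
               'T': False, 'R': False, 'S': False}

def interface_nodes(parents, is_discrete):
    iface = set()
    for child, pas in parents.items():
        if not is_discrete[child]:                 # continuous child
            iface.update(p for p in pas if is_discrete[p])
    return iface

print(interface_nodes(parents, is_discrete))       # {'K', 'B'}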

  34. Network Partition Example (figure legend: continuous variables vs. discrete variables)

  35. Step 1: Inference in the Continuous Segment • For any assignment of the interface nodes, apply UMP-BN to compute the mean and covariance of the posterior distribution for each hidden continuous variable given the local continuous evidence (M = m, Y = y), e.g., the Gaussian components P(T|K=1, M=m, Y=y), P(T|K=2, M=m, Y=y), P(R|K=1, M=m, Y=y), P(R|K=2, M=m, Y=y), P(S|K=1, M=m, Y=y), P(S|K=2, M=m, Y=y). • The likelihoods of the continuous evidence given each interface state are also obtained: p(M=m, Y=y|K=1) = a and p(M=m, Y=y|K=2) = b. (The discrete segment has its own evidence, G = g.)

  36. Step 2: Network Transformation • The continuous segment and its evidence (M = m, Y = y) are replaced by a dummy node attached to the interface node; the dummy node carries the likelihoods computed in Step 1, scaled by a normalizing constant, into the discrete segment, which retains its own evidence (G = g).

  37. Step 3: Message Integration • After inference in the transformed discrete network, the interface node K has its posterior distribution P(K = i | E) given all evidence; this is saved as the mixing prior for the intermediate messages of the hidden continuous variables. • The mean and variance of the ith Gaussian component (from Step 1) are combined under these mixing weights to give the mean and variance of the resulting Gaussian mixture.
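
A minimal sketch of this moment-matching step, collapsing the per-component moments into the moments of the Gaussian mixture using P(K = i | E) as mixing weights (all numbers are hypothetical):

import numpy as np

# Hypothetical component moments for one hidden continuous variable,
# one component per interface state K = i.
weights = np.array([0.3, 0.7])      # P(K = 1 | E), P(K = 2 | E)
means   = np.array([2.0, 5.0])      # component means
vars_   = np.array([1.0, 0.5])      # component variances

mix_mean = np.dot(weights, means)
mix_var = np.dot(weights, vars_ + means ** 2) - mix_mean ** 2
print(mix_mean, mix_var)            # 4.1, 2.54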

  38. Hybrid Message Passing Algorithm (HMP-BN) Wei Sun and KC Chang. “Hybrid Message Passing for Mixed Bayesian Networks”. In Proceedings of the 10th International Conference on Information Fusion, Quebec, Canada, July 2007. Wei Sun and KC Chang. “Message Passing for General Bayesian Networks: Representation, Propagation and Integration”. Submitted to IEEE Transactions on Aerospace and Electronic Systems, September 2007.

  39. Numerical Experiment - I: Synthetic Hybrid Model 1 (discrete evidence observed as G = g; continuous evidence observed as M = m, Y = y)

  40. Results of Model I

  41. Results of Model I – Cont.

  42. Results of Model I – Unlikely Evidence

  43. Numerical Experiment - II: Synthetic Hybrid Model 2

  44. Results of Model II

  45. Performance Summary

  46. Complexity of HMP-BN • The complexity of HMP-BN grows exponentially with the number of interface nodes.

  47. Unscented Hybrid Loopy Propagation • Messages to and from a continuous node X are represented as weighted sums of continuous messages, where the weights are non-negative constants; propagation uses the function specified in the CPD of X and its inverse function. • Complexity is reduced significantly: it depends only on the number of discrete parents in the local CPD.

  48. Performance Evaluation for hybrid BN model

  49. Performance Metric - Probability of Correct Classification (Pcc): For a discrete variable T in a hybrid BN, Pcc is defined from the probability mass function of the inferred state of T given each true state of T; the diagonal entries give the probability of correct classification for each state. An example of Pcc: diagonal of Pcc = [0.69, 0.75, 0.77, 0.82, 0.78].
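
A minimal sketch of estimating the Pcc matrix by simulation, assuming pairs of (true state, MAP-estimated state) have already been generated, e.g., by sampling evidence from the BN and running an inference algorithm (the pairs below are hypothetical):

import numpy as np

# Hypothetical (true state, estimated state) pairs for a 3-state target variable T.
pairs = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 2), (2, 2), (2, 2), (2, 0)]

n_states = 3
counts = np.zeros((n_states, n_states))
for true_t, est_t in pairs:
    counts[true_t, est_t] += 1

pcc_matrix = counts / counts.sum(axis=1, keepdims=True)  # P(estimated state | true state)
print(np.diag(pcc_matrix))   # probability of correct classification for each state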

  50. Theoretical Pcc Computation • E: evidence set {Ed, Ec} (discrete and continuous evidence). • T: discrete target variable.
