Budgeted Machine Learning of Bayesian Networks
Michael R. Gubbels, Mentor: Dr. Stephen D. Scott
Department of Computer Science and Engineering
University of Nebraska—Lincoln
Materials and Methods
Budgeted machine learning is machine learning in which the learner must pay a cost to observe each attribute value in its training examples and may spend only a fixed budget on such purchases. The program then learns a concept by generalizing from the information it could afford to observe in a set of examples representative of that concept.
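The purchase-until-exhausted structure described above can be sketched as a generic loop. This is an illustrative sketch, not the poster's actual implementation; the function names (`policy`, `purchase`, `update_model`) and the representation of unpurchased values as `None` are assumptions made for the example.

```python
def budgeted_learn(examples, costs, budget, policy, purchase, update_model):
    """Generic budgeted-learning loop: buy hidden attribute values until the
    budget runs out.

    examples: list of dicts mapping attribute name -> value, or None if the
              value has not yet been purchased
    costs:    dict mapping attribute name -> cost of one purchase
    policy:   returns (example_index, attribute) to buy next, or None if
              nothing remains to purchase
    """
    spent = 0
    while True:
        pick = policy(examples, costs)
        if pick is None:
            break                              # nothing left to purchase
        i, attr = pick
        if spent + costs[attr] > budget:
            break                              # next purchase would exceed budget
        examples[i][attr] = purchase(i, attr)  # reveal the hidden value
        spent += costs[attr]
        update_model(examples)                 # re-learn with the new information
    return spent
```

Different attribute selection policies (round robin, biased robin, random) plug in as the `policy` argument without changing the loop itself.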
Figure 5 (b). Comparison of average prediction accuracies for models learned for the Asia network using the round robin (left) and biased robin (right) attribute selection policies. This shows that the more complex model may require more purchases than the naïve Bayesian classifier using these policies for this network.
Figure 3. The examples used for learning were generated from a “correct” Bayesian network, i.e., a network that accurately models the concept to be learned.
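Generating examples from a known network, as described in the caption above, amounts to forward (ancestral) sampling. The following is a minimal sketch over a hypothetical two-attribute network A → B; the probabilities are invented for illustration and do not come from the Asia network used in the study.

```python
import random

def sample_network(n, seed=0):
    """Forward-sample n examples from a toy network A -> B.

    P(A = true) = 0.3; P(B = true | A) is 0.8 if A is true, else 0.1.
    Parents are sampled before children, so each example is drawn from
    the network's joint distribution.
    """
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        a = rng.random() < 0.3           # sample the root first
        p_b = 0.8 if a else 0.1          # B's distribution depends on A
        b = rng.random() < p_b
        examples.append({'A': a, 'B': b})
    return examples
```

Because the generating network is known, a learned model's correctness can be measured against it directly.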
Figure 1. A set of examples with no observable attribute values. An attribute’s cost is shown below its name. Each value shown as “?” must be purchased before it can be observed.
Figure 4. Pseudocode for the round robin, biased robin, and random data selection policies. A is the set of attributes whose values are available for purchase, and a is a particular attribute in A. E is the set of examples, and e is a particular example in E. v is an attribute value in an example. M denotes the specific model being learned, and m_old and m_new represent the correctness of model M before and after learning new information.
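The three selection policies named in Figure 4 can be sketched as generators that emit the next attribute to purchase from. This is an illustrative rendering under assumed interfaces, not the poster's pseudocode verbatim; in particular, the `improved` callback stands in for the comparison of m_new against m_old.

```python
import itertools
import random

def round_robin(attributes):
    """Round robin: purchase from each attribute in a fixed cyclic order."""
    return itertools.cycle(attributes)

def biased_robin(attributes, improved):
    """Biased robin: keep purchasing from the current attribute as long as the
    model's correctness improves after the purchase (m_new > m_old); once it
    stops improving, advance to the next attribute."""
    i = 0
    while True:
        yield attributes[i]
        if not improved(attributes[i]):
            i = (i + 1) % len(attributes)

def random_policy(attributes, seed=0):
    """Random: choose an attribute uniformly at random for each purchase."""
    rng = random.Random(seed)
    while True:
        yield rng.choice(attributes)
```

Round robin spreads purchases evenly, while biased robin concentrates them on attributes that are currently paying off, which is why the two can behave differently on the same network (compare the panels of Figure 5).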
Purpose and Hypothesis
The purpose of this study was to extend existing budgeted machine learning algorithms that learn naïve Bayesian networks (i.e., naïve Bayes classifiers) so that they learn unrestricted Bayesian networks, and to evaluate the extended algorithms.
Figure 2. A naïve Bayesian model (left) and a Bayesian network (right). The directed edges denote influence among attributes (although this influence can vary when certain attribute values are observed).
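The naïve model on the left of Figure 2 assumes every attribute is independent of the others given the class, which makes its parameters simple frequency counts. Below is a minimal sketch of such a classifier over boolean attributes; the function names and the Laplace smoothing choice are assumptions for illustration, not the study's implementation.

```python
from collections import Counter, defaultdict

def train_naive_bayes(examples, class_attr):
    """Estimate P(class) and P(attr = value | class) by counting."""
    class_counts = Counter(e[class_attr] for e in examples)
    cond = defaultdict(Counter)  # (attr, class) -> Counter over values
    for e in examples:
        c = e[class_attr]
        for a, v in e.items():
            if a != class_attr:
                cond[(a, c)][v] += 1
    return class_counts, cond

def predict(class_counts, cond, example, class_attr):
    """Return the most probable class under the naive independence assumption:
    P(c | x) is proportional to P(c) * product of P(x_a | c) over attributes."""
    n = sum(class_counts.values())
    best, best_p = None, -1.0
    for c, cc in class_counts.items():
        p = cc / n
        for a, v in example.items():
            if a == class_attr:
                continue
            p *= (cond[(a, c)][v] + 1) / (cc + 2)  # Laplace smoothing, boolean values
        if p > best_p:
            best, best_p = c, p
    return best
```

An unrestricted Bayesian network drops the independence assumption and instead conditions each attribute on its parents, which is what makes it the more complex model compared in Figure 5.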
Figure 5. Average prediction accuracies for the naïve Bayesian model (left) and the Bayesian network (right) on the Asia network.