Machine Learning for Cognitive Networks
Pat Langley, Computational Learning Laboratory, Center for the Study of Language and Information, Stanford University, Stanford, California
http://cll.stanford.edu/~langley/

Presentation Transcript


  1. Machine Learning for Cognitive Networks. Pat Langley, Computational Learning Laboratory, Center for the Study of Language and Information, Stanford University, Stanford, California. http://cll.stanford.edu/~langley/ Thanks to Chris Ramming and Tom Dietterich for discussions that led to many of these ideas.

  2. Definition of a Machine Learning System: a software artifact that improves task performance by acquiring knowledge, based on partial task experience.

  3. Elements of a Machine Learning System: the environment, the performance element, the knowledge, and the learning algorithm.
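
To make these elements concrete, here is a minimal sketch in Python, assuming an invented toy task (the slide names no specific implementation): an environment supplies experience, a performance element applies the current knowledge, and a learning algorithm revises that knowledge from feedback. All function and variable names are illustrative assumptions.

```python
import random

# Hypothetical illustration of the four elements named on the slide:
# environment, performance element, knowledge, and learning algorithm.

def environment():
    """Environment: yields an observation and the feedback for it."""
    x = random.uniform(0.0, 10.0)
    return x, x > 5.0                      # hidden regularity to be learned

def perform(knowledge, x):
    """Performance element: applies current knowledge (a threshold) to decide."""
    return x > knowledge["threshold"]

def learn(knowledge, x, prediction, feedback):
    """Learning algorithm: nudges the knowledge whenever the prediction is wrong."""
    if prediction != feedback:
        knowledge["threshold"] += 0.1 if prediction else -0.1

knowledge = {"threshold": 0.0}             # knowledge: one learned parameter
for _ in range(1000):                      # repeated experience with the environment
    x, feedback = environment()
    prediction = perform(knowledge, x)
    learn(knowledge, x, prediction, feedback)

print("learned threshold:", round(knowledge["threshold"], 2))   # drifts toward 5.0
```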

  4. Five Representational Paradigms: decision trees, neural networks, logical rules, probabilistic formalisms, and case libraries.
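
As a hedged illustration of just one of these paradigms, the sketch below induces a small decision tree with scikit-learn; the library choice and the network-flavored toy data are assumptions made for this example, not content from the slides.

```python
# Hypothetical example of the decision-tree paradigm; data are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [packet_loss_rate, mean_latency_ms]; labels mark a made-up fault.
X = [[0.01, 20], [0.02, 25], [0.30, 180], [0.25, 200], [0.02, 30], [0.40, 250]]
y = ["normal", "normal", "fault", "fault", "normal", "fault"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["packet_loss_rate", "mean_latency_ms"]))
print(tree.predict([[0.35, 190]]))         # expected to report "fault"
```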

  5. Three Formulations of Learning Problems. A more basic decision than the choice of representational framework is whether one formulates the problem as: learning for classification and regression; learning for action and planning; or learning for interpretation and understanding. These paradigms differ in their performance task, i.e., the manner in which the learned knowledge is utilized.

  6. Learning for Classification and Regression. Learned knowledge can be used to classify a new instance or to predict the value of one of its numeric attributes, as in: supervised learning (from labeled training cases); unsupervised learning (from unlabeled training cases); and semi-supervised learning (from partly labeled cases). These are the most basic and best-studied induction tasks, which has led to the development of robust algorithms for them. Such methods have been used in many successful applications, and they form the backbone of commercial data-mining systems.
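
The sketch below contrasts two of these formulations on the same toy data, one supervised (labels available) and one unsupervised (labels withheld); scikit-learn and the data are assumptions for illustration only.

```python
# Hypothetical contrast of supervised and unsupervised formulations.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = [[0.01, 20], [0.02, 25], [0.30, 180], [0.25, 200], [0.02, 30], [0.40, 250]]
y = [0, 0, 1, 1, 0, 1]                     # labels used only in the supervised case

supervised = LogisticRegression(max_iter=1000).fit(X, y)   # labeled training cases
print(supervised.predict([[0.35, 190]]))                   # classifies a new instance

unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)      # unlabeled training cases
print(unsupervised.labels_)                                # groups cases without labels
```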

  7. Learning for Action and Planning. Learned knowledge can be used to decide which action to execute or which choice to make during problem solving, as in: adaptive interfaces (learning from interaction with the user); behavioral cloning (learning from behavioral traces); empirical optimization (learning from varying control parameters); reinforcement learning (learning from delayed reward signals); and learning from problem solving (learning from the results of search). Progress on these formulations is at different stages, with some used commercially and others needing more basic research.
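
As one concrete instance of these formulations, the following sketch applies tabular Q-learning, a standard reinforcement-learning method, to an invented chain task with a delayed reward; the task, parameters, and reward scheme are all assumptions for illustration.

```python
import random

# Hypothetical tabular Q-learning sketch: reward arrives only at the final
# state, so credit must propagate back through earlier decisions.
N_STATES, ACTIONS = 6, [-1, +1]            # states 0..5; goal is state 5
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                       # episodes of experience
    state = 0
    while state != N_STATES - 1:
        if random.random() < EPSILON:      # occasional exploration
            action = random.choice(ACTIONS)
        else:                              # otherwise exploit current estimates
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0       # delayed reward
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The greedy policy should prefer the +1 action in every non-terminal state.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])
```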

  8. Learning for Understanding. Learned knowledge can be used to interpret, understand, or explain situations or events, as in: structured induction (from trainer-explained instances); constructive induction (from self-explained training cases); generative induction (learning the structures needed for explanation); parameter estimation (from training cases, given structures); and theory revision (revising structures based on training cases). Research in these frameworks is less mature than in the others, but it holds great potential for combining learning with reasoning.
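
To make the parameter-estimation case concrete, the sketch below fixes a simple fault-to-symptom structure in advance and estimates only its conditional probabilities from training cases; the structure, the cases, and the variable names are invented assumptions.

```python
from collections import Counter

# Hypothetical "parameter estimation given structure": the structure
# (fault -> symptom) is fixed; only P(symptom | fault) is estimated.
cases = [
    ("link_fault", "high_loss"), ("link_fault", "high_loss"),
    ("link_fault", "normal"),    ("no_fault", "normal"),
    ("no_fault", "normal"),      ("no_fault", "high_loss"),
]

fault_counts = Counter(fault for fault, _ in cases)
joint_counts = Counter(cases)

# Relative-frequency estimates of P(symptom | fault).
params = {
    (fault, symptom): joint_counts[(fault, symptom)] / fault_counts[fault]
    for fault, symptom in joint_counts
}
print(params[("link_fault", "high_loss")])   # 2/3 under these toy cases
```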

  9. Comments about Problem Formulations. With respect to the Knowledge Plane, it is important to realize that one can view a given task in different ways. For example, one can formulate diagnostic problems as: supervised learning from labeled examples of network faults; unsupervised learning from anomalous network behaviors; behavioral cloning from traces of a network manager's responses; reinforcement learning from experience with sensing actions; or constructive induction from explanations of network faults. We need measures of progress that focus on networking rather than on specific problem formulations.
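
To illustrate just one of these alternative formulations, the sketch below treats diagnosis as behavioral cloning: a classifier learns to imitate the responses recorded in a (wholly invented) trace of a network manager's actions; scikit-learn and all feature names are assumptions.

```python
# Hypothetical behavioral-cloning sketch: imitate a manager's logged responses.
from sklearn.tree import DecisionTreeClassifier

# Each trace entry: [link_utilization, packet_loss_rate] -> manager's action.
states  = [[0.95, 0.20], [0.90, 0.15], [0.30, 0.01], [0.40, 0.02], [0.85, 0.25]]
actions = ["reroute", "reroute", "no_op", "no_op", "reroute"]

clone = DecisionTreeClassifier(max_depth=2).fit(states, actions)
print(clone.predict([[0.92, 0.18]]))       # expected to imitate: "reroute"
```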

  10. Challenges in Experimental Evaluation. To evaluate learning methods for the Knowledge Plane, we need: dependent measures related to network management tasks; independent variables, including the amount of experience (to determine the rate of learning), the complexity of the task and data (to determine robustness), and the system modules and knowledge (to infer sources of power); and data sets and test beds to support the experimental process. The goal of experimentation is to promote scientific understanding, not to show that one method is better than another.
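
The sketch below illustrates one of these independent variables, the amount of experience: a trivial threshold learner is trained on increasing numbers of cases and scored on a held-out set, tracing a learning curve. The task, learner, and training-set sizes are invented for illustration.

```python
import random

# Hypothetical learning-curve experiment: accuracy as a function of the
# amount of training experience (the measures here stand in for
# network-management equivalents).
def make_cases(n):
    return [(x, x > 5.0) for x in (random.uniform(0, 10) for _ in range(n))]

def train(cases):
    threshold = 0.0
    for x, label in cases:                 # one simple update per training case
        if (x > threshold) != label:
            threshold += 0.1 if x > threshold else -0.1
    return threshold

test = make_cases(500)
for n in [10, 50, 100, 500]:               # amount of training experience
    threshold = train(make_cases(n))
    accuracy = sum((x > threshold) == label for x, label in test) / len(test)
    print(f"{n:4d} training cases -> accuracy {accuracy:.2f}")
```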
