Publication Venues

Presentation Transcript


  1. Publication Venues • Main Neural Network Conferences • NIPS (Neural Information Processing Systems) • IJCNN (Intl Joint Conf on Neural Networks) • Main Neural Network Journals • Neural Networks • Neural Computation • IEEE Transactions on Neural Networks

  2. Publication Venues • Main Machine Learning Conferences • ICML (Intl Conf on Machine Learning) • COLT (Computational Learning Theory) • Main Machine Learning Journals • ML (Machine Learning) • JMLR (J. Machine Learning Research) • JAIR (J. Artificial Intelligence Research)

  3. Underfit and Overfit
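The slide title refers to the standard picture of fitting models with too little versus too much capacity. A minimal sketch of the idea, assuming a noisy quadratic target and polynomial fits of increasing degree (the target, noise level, and degrees are illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a quadratic target function.
target = lambda x: 1.0 + 2.0 * x - 3.0 * x ** 2
x_train = np.linspace(-1, 1, 20)
y_train = target(x_train) + rng.normal(scale=0.3, size=x_train.shape)

# Noise-free grid for estimating generalization error.
x_test = np.linspace(-1, 1, 200)
y_test = target(x_test)

# Degree 1 underfits (too little capacity), degree 2 matches the target,
# degree 9 has enough freedom to chase the noise and tends to overfit.
for degree in (1, 2, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

Training error keeps dropping as the degree grows, while error against the true target typically bottoms out at the right capacity and then rises again.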

  4. Need for Bias • There are 2^(2^n) possible Boolean functions of n inputs. With n = 3, the four labeled training instances still leave 16 hypotheses consistent with the data:

     x1 x2 x3 | Class | Possible Consistent Function Hypotheses
      0  0  0 |   1   | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
      0  0  1 |   1   | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
      0  1  0 |   1   | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
      0  1  1 |   1   | 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
      1  0  0 |   ?   | 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1
      1  0  1 |   ?   | 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1
      1  1  0 |   ?   | 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1
      1  1  1 |   ?   | 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
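The 16 columns of the table can be reproduced by brute-force enumeration. A minimal sketch in Python (the training labels follow the table above; the variable names are illustrative):

```python
from itertools import product

# The 8 possible inputs for n = 3 Boolean variables, in the table's order.
inputs = list(product([0, 1], repeat=3))

# Training data from the table: the four x1 = 0 instances are labeled 1.
training = {(0, 0, 0): 1, (0, 0, 1): 1, (0, 1, 0): 1, (0, 1, 1): 1}

# A Boolean function of 3 inputs assigns an output to each of the 8 inputs,
# so there are 2^(2^3) = 256 candidate functions in total.
all_functions = [dict(zip(inputs, outputs))
                 for outputs in product([0, 1], repeat=len(inputs))]

# Keep only the functions that agree with the training data.
consistent = [h for h in all_functions
              if all(h[x] == y for x, y in training.items())]
print(len(all_functions), "functions,", len(consistent), "consistent")  # 256, 16

# Every unlabeled instance is still completely undetermined:
# the 16 consistent hypotheses split 8 vs 8 on each x1 = 1 input.
for x in inputs:
    if x not in training:
        votes = sum(h[x] for h in consistent)
        print(x, "-> predicted 1 by", votes, "of", len(consistent))
```

Because the consistent hypotheses split evenly on every unlabeled instance, the data alone cannot decide the remaining predictions; some bias toward particular hypotheses is needed to generalize at all.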

  5. No Free Lunch • Averaged over all possible target functions (assuming every function is equally likely), any chosen inductive bias has the same expected accuracy as any other bias: if it is correct on some functions, it must be incorrect on equally many others (see the sketch below). • Is this a problem? • Random vs. Regular functions • Anti-Bias (even though regular) • The “Interesting” Problems – a subset of the learnable ones?
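A minimal sketch of the averaging argument, assuming three Boolean inputs, a fixed four-instance training set, and two deliberately opposite biases (the learners and the train/test split are illustrative, not from the slides):

```python
from itertools import product

inputs = list(product([0, 1], repeat=3))
train_inputs = inputs[:4]   # instances whose labels the learner sees
test_inputs = inputs[4:]    # the off-training-set instances

def majority_learner(train):
    # Bias: predict the most common training label everywhere.
    guess = int(2 * sum(train.values()) >= len(train))
    return lambda x: guess

def anti_majority_learner(train):
    # The opposite bias: predict the least common training label everywhere.
    guess = 1 - int(2 * sum(train.values()) >= len(train))
    return lambda x: guess

def mean_offtrain_accuracy(learner):
    # Average accuracy on the unseen instances over every possible target
    # function, with all targets weighted equally.
    targets = list(product([0, 1], repeat=len(inputs)))
    total = 0.0
    for outputs in targets:
        target = dict(zip(inputs, outputs))
        h = learner({x: target[x] for x in train_inputs})
        total += sum(h(x) == target[x] for x in test_inputs) / len(test_inputs)
    return total / len(targets)

print(mean_offtrain_accuracy(majority_learner))       # 0.5
print(mean_offtrain_accuracy(anti_majority_learner))  # 0.5
```

Both biases average exactly chance accuracy off the training set; any advantage one of them shows on some targets is cancelled by an equal disadvantage on others.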

  6. Automatic Discovery of Inductive Bias (ADIB) • Defining the set of Interesting/Learnable problems – No Free Lunch concepts • Defining the set of available inductive biases • Proposing novel learning algorithms • Analysis, comparison, and extension of current learning algorithms • Defining/discovering a set of biases which covers I (the Interesting problems) • Parameter-free learning algorithms – Automatic selection of learning parameters (see the sketch below) • Automatically fitting a bias to a problem – Overfit, underfit, noise issues, etc. • Automatic Feature Selection
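One simple instance of automatically selecting learning parameters is choosing them by cross-validation on the application data set. A sketch, assuming scikit-learn and treating k in k-nearest neighbors as the bias parameter being fitted (the synthetic data set and the candidate values of k are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# A synthetic data set standing in for "the application data set".
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# The number of neighbors controls the bias: small k tends to overfit,
# large k tends to underfit. Cross-validation picks k automatically.
search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [1, 3, 5, 9, 15, 25]},
    cv=5,
)
search.fit(X, y)

print("selected k:", search.best_params_["n_neighbors"])
print("cross-validated accuracy:", round(search.best_score_, 3))
```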

  7. ADIB (Cont.) • Dynamic Inductive Biases • Pre-selection of an appropriate bias based on the application data set • Automatically selecting a bias during learning • Bias which adjusts dynamically in time during learning • Bias which adjusts dynamically in space during learning (different parts of the problem space are better learned with different biases, including differing parameters in one bias). • Combinations of the above • Combination of Biases • Linear and non-linear combinations of biases • Dynamic combinations of biases • Ensemble variants
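For the combination-of-biases items, a minimal sketch, assuming scikit-learn and an unweighted (linear) soft vote over three learners with quite different inductive biases (the data set and the particular learners are illustrative choices):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Three learners with different inductive biases: linear separability,
# conditional independence of features, and axis-aligned splits.
biases = [
    ("linear", LogisticRegression(max_iter=1000)),
    ("naive_bayes", GaussianNB()),
    ("tree", DecisionTreeClassifier(random_state=0)),
]

for name, model in biases:
    print(f"{name:12s} {cross_val_score(model, X, y, cv=5).mean():.3f}")

# An unweighted combination of the three biases by soft voting.
combo = VotingClassifier(estimators=biases, voting="soft")
print(f"{'combination':12s} {cross_val_score(combo, X, y, cv=5).mean():.3f}")
```

Dynamic and non-linear combinations (stacking, gating on the input region, adapting weights during training) build on the same idea of treating each member's bias as one component of an overall bias.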
