
Neural networks (NN) and Multivariate Adaptive Regression Splines (MARS)



1. Neural networks (NN) and Multivariate Adaptive Regression Splines (MARS)
• Different types of neural networks
• Considerations in neural network modelling
• Multivariate Adaptive Regression Splines

2. Feed-forward neural network
• Input layer
• Hidden layer(s)
• Output layer
[Diagram: inputs x1, …, xp feed hidden units z1, …, zM, which feed outputs f1, …, fK]

3. Terminology
• Feed-forward network: nodes in one layer are connected to the nodes in the next layer
• Recurrent network: nodes in one layer may be connected to nodes in a previous layer or within the same layer

4. Multilayer perceptrons
• Any number of inputs
• Any number of outputs
• One or more hidden layers with any number of units
• Linear combinations of the outputs from one layer form the inputs to the following layer
• Sigmoid activation functions in the hidden layers
[Diagram: inputs x1, …, xp, hidden units z1, …, zM, outputs f1, …, fK]
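
A minimal numpy sketch of the forward pass through such a network, assuming one hidden layer with sigmoid activations and an identity output layer; the function and variable names are illustrative, not from the lecture.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def mlp_forward(x, alpha0, alpha, beta0, beta):
    """Forward pass of a single-hidden-layer perceptron.

    x      : input vector of length p
    alpha0 : hidden-layer biases, length M
    alpha  : hidden-layer weights, shape (M, p)
    beta0  : output-layer biases, length K
    beta   : output-layer weights, shape (K, M)
    """
    z = sigmoid(alpha0 + alpha @ x)   # hidden units z_1..z_M
    f = beta0 + beta @ z              # outputs f_1..f_K (identity activation)
    return f

# Example: p = 3 inputs, M = 4 hidden units, K = 2 outputs
rng = np.random.default_rng(0)
x = rng.normal(size=3)
f = mlp_forward(x, rng.normal(size=4), rng.normal(size=(4, 3)),
                rng.normal(size=2), rng.normal(size=(2, 4)))
print(f)
```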

5. Parameters in a multilayer perceptron
• C1, C2: combination functions
• g, σ: activation functions
• α0m, β0k: biases of the hidden and output units
• αim, βjk: weights of the connections

6. Least squares fitting of neural networks
Consider a simple perceptron (no hidden layer). Find the weights and biases minimizing the sum-of-squared-errors function R = Σi Σk (yik − fk(xi))².
[Diagram: inputs x1, …, xp connected directly to outputs f1, …, fK]
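
As an illustration, here is one way to carry out that least-squares fit for a single-output simple perceptron with identity activation, using gradient descent; the learning rate and epoch count are arbitrary choices for the sketch, not values from the lecture.

```python
import numpy as np

def fit_simple_perceptron(X, y, lr=0.01, epochs=500):
    """Fit f(x) = x^T w + b by gradient descent on the
    sum-of-squared errors R(w, b) = sum_i (f(x_i) - y_i)^2."""
    n, p = X.shape
    w = np.zeros(p)
    b = 0.0
    for _ in range(epochs):
        resid = X @ w + b - y          # f(x_i) - y_i
        w -= lr * 2 * X.T @ resid / n  # averaged gradient step
        b -= lr * 2 * resid.mean()
    return w, b

# Toy data: y depends linearly on two inputs plus noise
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)
w, b = fit_simple_perceptron(X, y)
print(w, b)   # should approach (1.5, -2.0) and 0
```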

7. Alternative measures of fit
• For regression we normally use the sum-of-squared errors as the measure of fit
• For classification we use either squared errors or cross-entropy (deviance), and the corresponding classifier is argmaxk fk(x)
• The measure of fit can also be adapted to specific distributions, such as the Poisson distribution
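
A small sketch of the cross-entropy (deviance) measure and the argmaxk fk(x) classifier, using made-up softmax outputs:

```python
import numpy as np

def cross_entropy(y_onehot, f):
    """Deviance / cross-entropy: -sum_i sum_k y_ik log f_k(x_i)."""
    return -np.sum(y_onehot * np.log(f))

def classify(f):
    """The corresponding classifier: argmax_k f_k(x)."""
    return np.argmax(f, axis=1)

# Two observations, three classes
f = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])          # softmax outputs
y = np.array([[1, 0, 0], [0, 0, 1]])     # true classes 0 and 2
print(cross_entropy(y, f), classify(f))
```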

8. Combination and activation functions
• Combination function
  • Linear combination: a weighted sum of the inputs plus a bias
  • Radial combination: the squared distance between the input vector and a centre vector
• Activation function in the hidden layer
  • Identity
  • Sigmoid
• Activation function in the output layer
  • Softmax
  • Identity
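
The formulas on this slide did not survive the transcript, so the following is a hedged sketch of what the listed functions typically look like:

```python
import numpy as np

def linear_combination(x, w, b):
    """Weighted sum of the inputs plus a bias."""
    return w @ x + b

def radial_combination(x, c):
    """Squared distance between the input and a centre vector c."""
    return np.sum((x - c) ** 2)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def softmax(v):
    e = np.exp(v - np.max(v))   # shift for numerical stability
    return e / e.sum()

x = np.array([0.5, -1.0, 2.0])
print(linear_combination(x, np.array([1.0, 0.5, -0.5]), 0.1))
print(radial_combination(x, np.zeros(3)))
print(softmax(np.array([2.0, 1.0, 0.1])))
```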

9. Ordinary radial basis function networks (ORBF)
• Input and output layers and one hidden layer
• Hidden layer: combination function = radial; activation function = exponential or softmax
• Output layer: combination function = linear; activation function = any, normally the identity
[Diagram: inputs x1, …, xp, radial hidden units z1, …, zM, outputs f1, …, fK]

10. Issues in neural network modelling
• Preliminary training – learning with different initial weights (since multiple local minima are possible)
• Scaling of the inputs is important (standardization)
• The number of nodes in the hidden layer(s)
• The choice of activation function in the output layer: identity for interval targets, softmax for nominal targets
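
For the scaling point, standardization usually means centring each input to mean 0 and scaling to standard deviation 1, e.g.:

```python
import numpy as np

def standardize(X):
    """Centre each input column to mean 0 and scale to standard
    deviation 1, so all inputs start on a comparable footing and
    the random initial weights have a sensible magnitude."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

X = np.array([[1.0, 100.0], [2.0, 300.0], [3.0, 200.0]])
print(standardize(X))
```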

11. Overcoming over-fitting
• Early stopping
• Adding a penalty function: objective function = error function + penalty term
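
A sketch of the penalty-term idea using weight decay; the squared-error loss and the parameter lam are assumptions for illustration, not prescriptions from the lecture.

```python
import numpy as np

def objective(w, b, X, y, lam):
    """Objective function = error function + penalty term.

    Error:   sum of squared residuals.
    Penalty: lam * sum of squared weights (weight decay), which
             shrinks the weights and damps over-fitting.
    """
    error = np.sum((X @ w + b - y) ** 2)
    penalty = lam * np.sum(w ** 2)
    return error + penalty
```

Early stopping attacks the same problem from another direction: training is halted once the validation error starts to rise, before the weights grow large enough to overfit.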

12. MARS: Multivariate Adaptive Regression Splines
An adaptive procedure for regression that can be regarded as a generalization of stepwise linear regression

13. Reflected pair of functions with a knot at the value x1
The pair (X − x1)+ and (x1 − X)+, each zero on one side of the knot.
[Figure: the two hinge functions plotted against X]

14. Reflected pairs of functions with knots at the values x1 and x2
[Figure: hinge functions with knots at x1 and x2]
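
The reflected pairs of slides 13–14 are simple to compute; a minimal sketch:

```python
import numpy as np

def reflected_pair(X, knot):
    """The reflected pair (X - t)_+ and (t - X)_+ with knot t."""
    return np.maximum(X - knot, 0.0), np.maximum(knot - X, 0.0)

X = np.linspace(0.0, 1.0, 5)
h_plus, h_minus = reflected_pair(X, knot=0.5)
print(h_plus)   # [0.   0.   0.   0.25 0.5 ]
print(h_minus)  # [0.5  0.25 0.   0.   0.  ]
```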

15. MARS with a single input X taking the values x1, …, xN
Form the collection of basis functions C = {(X − xi)+, (xi − X)+ : i = 1, …, N}.
Construct models of the form f(X) = β0 + Σm βm hm(X), where each hm(X) is a function in C or a product of two or more such functions.
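
As a sketch of the basis collection C and a least-squares fit of a model built from it (the real MARS forward procedure adds pairs greedily and also forms products of basis functions; here the basis is fixed and additive for brevity):

```python
import numpy as np

def mars_basis(X, knots):
    """Design matrix from the collection C: an intercept plus the
    reflected pair (X - t)_+ and (t - X)_+ for each knot t."""
    cols = [np.ones_like(X)]                      # intercept beta_0
    for t in knots:
        cols.append(np.maximum(X - t, 0.0))
        cols.append(np.maximum(t - X, 0.0))
    return np.column_stack(cols)

# Fit f(X) = beta_0 + sum_m beta_m h_m(X) by least squares
rng = np.random.default_rng(2)
X = np.sort(rng.uniform(0, 1, 50))
y = np.sin(2 * np.pi * X) + 0.1 * rng.normal(size=50)
H = mars_basis(X, knots=X[::10])                  # a few knots for brevity
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
print(beta[:3])
```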

16–17. MARS model with a single input X taking the values x1, x2
[Figures: fitted piecewise-linear MARS models built from the reflected pairs with knots at x1 and x2]

18. MARS: Multivariate Adaptive Regression Splines
At each stage we consider, as a new basis function pair, all products of functions already in the model with one of the reflected pairs in the set C.
Although each basis function depends only on a single Xj, it is considered as a function over the entire input space.

19. MARS: Multivariate Adaptive Regression Splines – model selection
• The forward procedure typically overfits the data, so a backward deletion procedure is applied
• The size of the model is determined by generalized cross-validation (GCV)
• An upper limit can be set on the order of interaction
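
The GCV criterion itself was lost in transcription; it is presumably the standard one used for MARS, in which the effective number of parameters M(λ) accounts for both the coefficients and the selected knots:

```latex
\mathrm{GCV}(\lambda) \;=\;
\frac{\sum_{i=1}^{N} \bigl( y_i - \hat{f}_{\lambda}(x_i) \bigr)^2}
     {\bigl( 1 - M(\lambda)/N \bigr)^2}
```

The model size λ minimizing GCV is chosen.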

20. The MARS model can be viewed as a generalization of classification and regression trees (CART)

21. Some characteristics of different learning methods

22. Separating hyperplane

23. Optimal separating hyperplane – support vector classifier
Find the hyperplane that creates the biggest margin between the training points for classes 1 and −1.
[Figure: separating hyperplane with the margin marked]

24. Formulation of the optimization problem
• Signed distance to the decision border
• y = 1 for one of the groups and y = −1 for the other
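
The equations on this slide were lost in transcription; for a hyperplane {x : xᵀβ + β0 = 0}, the signed distance referred to is presumably the standard one,

```latex
d(x) \;=\; \frac{1}{\lVert \beta \rVert}\bigl( x^{T}\beta + \beta_0 \bigr),
```

so that a training point is correctly classified exactly when yi (xiᵀβ + β0) > 0.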

25. Two equivalent formulations of the optimization problem
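
The two formulations themselves did not survive the transcript; they are presumably the standard pair, in which the margin M equals 1/‖β‖:

```latex
\max_{\beta,\,\beta_0,\,\lVert\beta\rVert = 1} \; M
\quad \text{subject to} \quad
y_i \bigl( x_i^{T}\beta + \beta_0 \bigr) \ge M, \quad i = 1,\dots,N,
```

or equivalently

```latex
\min_{\beta,\,\beta_0} \; \tfrac{1}{2}\lVert\beta\rVert^{2}
\quad \text{subject to} \quad
y_i \bigl( x_i^{T}\beta + \beta_0 \bigr) \ge 1, \quad i = 1,\dots,N.
```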

26. Characteristics of the support vector classifier
• Points well inside their class boundary do not play a big role in shaping the decision border
• Cf. linear discriminant analysis (LDA), for which the decision boundary is determined by the covariance matrix of the class distributions and their centroids

27. Support vector machines using basis expansions (polynomials, splines)

28. Characteristics of support vector machines
• The dimension of the enlarged feature space can be very large
• Overfitting is prevented by a built-in shrinkage of the beta coefficients
• Irrelevant inputs can create serious problems

29. The SVM as a penalization method
• Misclassification: f(x) < 0 when y = 1, or f(x) > 0 when y = −1
• Loss function: the hinge loss [1 − y f(x)]+
• Loss function + penalty: Σi [1 − yi f(xi)]+ + λ‖β‖²
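
A short sketch of the hinge loss and the penalized objective on this slide; the variable names are illustrative.

```python
import numpy as np

def hinge_loss(y, f):
    """SVM loss [1 - y f(x)]_+ : zero for points beyond their margin,
    growing linearly with the size of the violation otherwise."""
    return np.maximum(1.0 - y * f, 0.0)

def penalized_objective(w, b, X, y, lam):
    """Loss function + penalty: sum_i [1 - y_i f(x_i)]_+ + lam * ||w||^2,
    with f(x) = x^T w + b."""
    f = X @ w + b
    return hinge_loss(y, f).sum() + lam * np.sum(w ** 2)

# Toy check: a correctly classified point well beyond the margin
# contributes zero loss; a misclassified one contributes > 1.
X = np.array([[2.0, 0.0], [-2.0, 0.0]])
y = np.array([1.0, 1.0])
print(penalized_objective(np.array([1.0, 0.0]), 0.0, X, y, lam=0.1))
```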

30. The SVM as a penalization method
• Minimizing the loss function + penalty is equivalent to fitting a support vector machine to the data
• The penalty factor λ is a function of the constant that provides an upper bound on the total slack Σi ξi in the constrained formulation

31. Some characteristics of different learning methods
