Forward & Backward Selection in a Hybrid Network


Introduction

  • A training algorithm for a hybrid neural network for regression.

  • The hybrid neural network has a hidden layer whose units are either RBF units or projection units (perceptrons).


When is it good?


Hidden Units

  • RBF: radial basis function unit (standard form shown below).

  • MLP: projection (perceptron) unit (standard form shown below).
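For reference, the standard forms of the two hidden-unit types are written out here; this is an assumption about what the slide's equations showed, not a quotation from it.

```latex
% Assumed standard forms of the two hidden-unit types.
\[
  \text{RBF unit:}\quad
  h_j(\mathbf{x}) = \exp\!\left(-\frac{\lVert \mathbf{x}-\boldsymbol{\mu}_j\rVert^{2}}{2\sigma_j^{2}}\right),
  \qquad
  \text{projection (MLP) unit:}\quad
  h_j(\mathbf{x}) = \sigma\!\left(\mathbf{w}_j^{\top}\mathbf{x}+b_j\right),\;\;
  \sigma(u)=\frac{1}{1+e^{-u}}
\]
```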


Overall algorithm

  • Divide the input space and assign a unit to each sub-region.

  • Optimize the unit parameters.

  • Prune unnecessary weights using the Bayesian Information Criterion (BIC); a structural sketch follows below.
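A minimal structural sketch of these three steps in Python. The helper callables `split_region`, `choose_unit`, `fit_params`, and `bic_score` are hypothetical placeholders (none of these names come from the presentation); only the control flow is illustrated.

```python
# Minimal structural sketch of the forward/backward procedure (not the authors' code).
# split_region, choose_unit, fit_params, and bic_score are hypothetical callables
# supplied by the caller.

def train_hybrid_network(X, y, split_region, choose_unit, fit_params, bic_score,
                         error_goal=1e-3, max_units=20):
    regions = [list(range(len(X)))]          # sample indices of each sub-region

    # 1. Forward leg: divide the input space, one hidden unit per sub-region.
    while True:
        region = regions.pop(0)              # a fuller version would split the worst region
        left, right = split_region(X, y, region)
        regions += [left, right]
        units = [choose_unit(X, y, r) for r in regions]   # RBF or projection per region

        # 2. Optimize all unit and output-layer parameters.
        units, error = fit_params(X, y, units)
        if error <= error_goal or len(units) >= max_units:
            break

    # 3. Backward leg: prune units whose removal improves the BIC score
    #    (BIC here approximates the log evidence, so larger is better).
    pruned = True
    while pruned and len(units) > 1:
        pruned = False
        for u in list(units):
            candidate, _ = fit_params(X, y, [v for v in units if v is not u])
            if bic_score(X, y, candidate) > bic_score(X, y, units):
                units, pruned = candidate, True
                break
    return units
```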


Forward leg

  • Divide the input space into sub-regions.

  • Select the type of hidden unit (RBF or projection) for each sub-region.

  • Stop when the error goal is met or the maximum number of units is reached.


Input space division

  • Split the input space recursively, like CART.

  • Choose the split with the maximum reduction in the error criterion (see the sketch below).
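A small, runnable sketch of a CART-style axis-aligned split. It assumes the reduction criterion is the sum of squared errors, which is the usual CART regression criterion; the slide's exact formula is not quoted here, so treat this as an assumption.

```python
import numpy as np

def best_split(X, y):
    """Return (feature, threshold) giving the largest SSE reduction (CART-style)."""
    def sse(t):                       # sum of squared errors around the mean
        return ((t - t.mean()) ** 2).sum() if len(t) else 0.0

    parent = sse(y)
    best = (None, None, 0.0)          # (feature index, threshold, reduction)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j])[:-1]:
            mask = X[:, j] <= thr
            reduction = parent - sse(y[mask]) - sse(y[~mask])
            if reduction > best[2]:
                best = (j, thr, reduction)
    return best[:2]

# Toy usage: the split should land on feature 0 near 0.5.
X = np.random.rand(100, 2)
y = np.where(X[:, 0] > 0.5, 1.0, 0.0) + 0.1 * np.random.randn(100)
print(best_split(X, y))
```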


Unit type selection (RBF)


Unit type selection (projection)


Units parameters

  • RBF unit: center placed at the maximum point.

  • Projection unit: weight vector set to the normalized maximum point (see the sketch below).
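A brief sketch of this initialization, assuming that "maximum point" refers to the training point with the largest current residual error in the sub-region; that reading is an interpretation, not something stated in the transcript.

```python
import numpy as np

def init_unit(X_region, residuals, unit_type):
    """Initialize one hidden unit from a sub-region's data (assumed interpretation:
    'maximum point' = training point with the largest current residual error)."""
    x_max = X_region[np.argmax(np.abs(residuals))]
    if unit_type == "rbf":
        return {"center": x_max}                          # RBF: center at the maximum point
    return {"weights": x_max / np.linalg.norm(x_max)}     # projection: normalized maximum point
```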


ML estimate for unit type


Pruning

  • Target function values are assumed to be corrupted by additive Gaussian noise (written out below).
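Written out, this noise assumption and the resulting likelihood take the standard form (assumed here, not quoted from the slide):

```latex
\[
  y_i = f(\mathbf{x}_i) + \varepsilon_i, \qquad
  \varepsilon_i \sim \mathcal{N}(0, \sigma^{2})
  \;\Longrightarrow\;
  p(\mathbf{y} \mid \mathbf{w}, \sigma^{2})
  = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^{2}}}
    \exp\!\left(-\frac{\bigl(y_i - f(\mathbf{x}_i)\bigr)^{2}}{2\sigma^{2}}\right)
\]
```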


BIC approximation

  • Schwarz; Kass and Raftery (see the formula below).
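For reference, Schwarz's BIC as an approximation to the log model evidence, for a model with k free parameters fitted to N data points (standard form; the slide's exact expression is not reproduced here):

```latex
\[
  \ln p(D \mid \mathcal{M}) \;\approx\;
  \mathrm{BIC} \;=\; \ln p\bigl(D \mid \hat{\boldsymbol{\theta}}, \mathcal{M}\bigr)
  \;-\; \frac{k}{2} \ln N
\]
```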


Evidence for the model


Evidence for unit type


Evidence for unit type (cont.)


Evidence for unit type (cont.)


Evidence for unit type: algorithm

  • Initialize alpha and beta.

  • Loop: compute the weights w (and w0).

  • Re-estimate alpha and beta.

  • Repeat until the change in the evidence is small (a runnable sketch follows below).
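A runnable sketch of this loop in the style of MacKay's evidence approximation for a linear-in-the-weights model. This is the standard formulation; the specific update rules below are assumptions and are not taken from the slide.

```python
import numpy as np

def evidence_iteration(Phi, y, tol=1e-4, max_iter=100):
    """MacKay-style re-estimation of alpha (weight prior precision) and
    beta (noise precision) for a linear-in-the-weights model y ~ Phi @ w."""
    N, M = Phi.shape
    alpha, beta = 1.0, 1.0                         # initialize alpha and beta
    prev_evidence = -np.inf
    for _ in range(max_iter):
        # compute the posterior mean weights w for the current alpha, beta
        A = alpha * np.eye(M) + beta * Phi.T @ Phi
        m = beta * np.linalg.solve(A, Phi.T @ y)

        # log evidence for the current hyperparameters
        E = 0.5 * beta * np.sum((y - Phi @ m) ** 2) + 0.5 * alpha * m @ m
        evidence = (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta)
                    - E - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * N * np.log(2 * np.pi))
        if abs(evidence - prev_evidence) < tol:    # stop when the change is small
            break
        prev_evidence = evidence

        # re-estimate alpha and beta (gamma = effective number of parameters)
        eig = np.linalg.eigvalsh(beta * Phi.T @ Phi)
        gamma = np.sum(eig / (eig + alpha))
        alpha = gamma / (m @ m)
        beta = (N - gamma) / np.sum((y - Phi @ m) ** 2)
    return m, alpha, beta
```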


Pumadyn data set (DELVE archive)

  • Dynamics of a Puma robot arm.

  • Target: angular acceleration of one of the links.

  • Inputs: various joint angles, velocities and torques.

  • Large Gaussian noise.

  • The data set is nonlinear.

  • Input dimension: 8 or 32.


Results pumadyn-32nh


Results pumadyn-8nh


Related work

  • Hassibi et al.: Optimal Brain Surgeon.

  • MacKay: Bayesian inference of weights and regularization parameters.

  • Jordan and Jacobs: hierarchical mixtures of experts (HME), division of the input space.

  • Schwarz; Kass and Raftery: BIC.


Discussion

  • Pruning removes 90% of the parameters.

  • Pruning reduces the variance of the estimator.

  • The pruning algorithm is slow.

  • PRBFN performs better than an MLP or an RBF network alone.

  • A disadvantage of the Bayesian techniques: dependence on the prior distribution parameters.

  • Bayesian techniques perform better than the likelihood ratio test (LRT).

  • Unit type selection is a crucial element of PRBFN.

  • The curse of dimensionality is clearly visible on the pumadyn data sets.

