Forward & Backward selection in hybrid network
Presentation Transcript

Introduction

  • A training algorithm for a hybrid neural network for regression.

  • The hybrid network's hidden layer contains RBF or projection units (perceptrons).



Hidden Units

  • RBF: radial basis function unit (a standard Gaussian form is sketched below).

  • MLP: projection unit, i.e. a perceptron (a standard form is sketched below).
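
The slide's formulas for the two unit types were not preserved in this transcript. As a rough sketch, the standard forms — a Gaussian RBF with center μ_j and width σ_j, and a sigmoidal projection unit with weights w_j and bias b_j (symbols assumed, not taken from the slide) — are:

```latex
% Standard Gaussian RBF unit (assumed form): center \mu_j, width \sigma_j
g_j(\mathbf{x}) = \exp\!\left( -\frac{\lVert \mathbf{x} - \boldsymbol{\mu}_j \rVert^2}{2\sigma_j^2} \right)

% Standard projection (perceptron) unit (assumed form): weights \mathbf{w}_j, bias b_j
h_j(\mathbf{x}) = \sigma\!\left( \mathbf{w}_j^{\top}\mathbf{x} + b_j \right),
\qquad \sigma(u) = \frac{1}{1 + e^{-u}}
```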


Overall algorithm

  • Divide the input space and assign units to each sub-region.

  • Optimize the unit parameters.

  • Prune unnecessary weights using the Bayesian Information Criterion (BIC); a sketch of this three-phase flow follows the list.
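
A minimal Python sketch of the three-phase flow; the helper callables (forward_leg, optimize_units, prune_with_bic) are hypothetical placeholders for the steps above, not the authors' API:

```python
def train_hybrid_network(X, y, forward_leg, optimize_units, prune_with_bic,
                         error_goal=1e-3, max_units=20):
    """Orchestrate the three phases sketched on the slides.

    The three callables are hypothetical stand-ins for the slide's steps;
    their names and signatures are illustrative only.
    """
    # 1. Forward leg: divide the input space and assign a unit per sub-region.
    units = forward_leg(X, y, error_goal=error_goal, max_units=max_units)
    # 2. Optimize the parameters of the selected units.
    units = optimize_units(units, X, y)
    # 3. Backward leg: prune unnecessary weights using BIC.
    return prune_with_bic(units, X, y)
```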


Forward leg

  • Divide the input space into sub-regions.

  • Select the type of hidden unit for each sub-region.

  • Stop when the error goal is met or the maximum number of units is reached.


Input space division

  • Split recursively, as in CART.

  • Choose the split that gives the maximum reduction in the error; a standard sum-of-squares form of this criterion is sketched below.
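
In standard CART regression, a split s of a region R into R_L and R_R is chosen to maximize the reduction in the sum of squared errors; this is presumably the criterion abbreviated above (the slide's exact formula was not preserved):

```latex
% Reduction in sum of squared errors for a candidate split s of region R
\Delta E(s) = \sum_{\mathbf{x}_i \in R} \bigl(y_i - \bar{y}_R\bigr)^2
  - \sum_{\mathbf{x}_i \in R_L(s)} \bigl(y_i - \bar{y}_{R_L}\bigr)^2
  - \sum_{\mathbf{x}_i \in R_R(s)} \bigl(y_i - \bar{y}_{R_R}\bigr)^2,
\qquad s^* = \arg\max_s \Delta E(s)
```

A small, self-contained Python sketch of this split search (the function name and interface are illustrative, not the authors' code):

```python
import numpy as np

def best_split(X, y):
    """Axis-aligned split with maximum SSE reduction, as in CART regression.

    Returns (dimension, threshold, sse_reduction).  Illustrative sketch only.
    """
    def sse(t):
        return float(np.sum((t - t.mean()) ** 2)) if t.size else 0.0

    parent_sse = sse(y)
    best_dim, best_thr, best_gain = None, None, 0.0
    for d in range(X.shape[1]):
        values = np.unique(X[:, d])
        # Candidate thresholds: midpoints between consecutive distinct values.
        for thr in (values[:-1] + values[1:]) / 2.0:
            mask = X[:, d] <= thr
            gain = parent_sse - sse(y[mask]) - sse(y[~mask])
            if gain > best_gain:
                best_dim, best_thr, best_gain = d, thr, gain
    return best_dim, best_thr, best_gain
```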




Unit parameters

  • RBF unit: the center is placed at the maximum point of the sub-region.

  • Projection unit: the weight vector is the normalized maximum point (one reading of this initialization is sketched below).
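
One plausible reading of these bullets, assuming the "maximum point" x* is the training point in the sub-region with the largest target value — this interpretation is an assumption, not stated explicitly in the transcript:

```latex
% Assumed initialization from the sub-region's "maximum point" x^*
\boldsymbol{\mu}_j = \mathbf{x}^* \quad\text{(RBF center)},
\qquad
\mathbf{w}_j = \frac{\mathbf{x}^*}{\lVert \mathbf{x}^* \rVert} \quad\text{(projection weights)}
```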



Pruning

  • The target function values are corrupted with Gaussian noise (the assumed noise model is sketched below).
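
The standard additive Gaussian noise model this bullet refers to (notation assumed):

```latex
% Observed targets = underlying function + i.i.d. Gaussian noise
y_i = f(\mathbf{x}_i) + \varepsilon_i,
\qquad \varepsilon_i \sim \mathcal{N}\bigl(0, \sigma^2\bigr)
```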


BIC approximation

  • Following Schwarz, and Kass & Raftery (the standard form is given below).
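
The textbook form of Schwarz's BIC approximation to the log model evidence, for a model M with k free parameters fitted to N data points (the slide's exact expression was not preserved):

```latex
% BIC approximation to the log evidence (Schwarz; Kass & Raftery)
\ln p(D \mid M) \;\approx\; \ln p\bigl(D \mid \hat{\theta}, M\bigr) \;-\; \frac{k}{2}\,\ln N
```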






Evidence-based unit-type selection algorithm

  • Initialize alpha and beta.

  • Loop: compute w and w0.

  • Recompute alpha and beta.

  • Repeat until the change in the evidence is small (a sketch of such a loop follows the list).
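
A minimal sketch of a MacKay-style evidence loop for a linear-in-the-parameters model (e.g. the output weights of a set of hidden units), re-estimating alpha (prior precision) and beta (noise precision) until the log evidence stabilizes. This follows the standard evidence framework and is not necessarily the exact algorithm on the slide:

```python
import numpy as np

def evidence_loop(Phi, y, alpha=1.0, beta=1.0, tol=1e-6, max_iter=100):
    """Re-estimate alpha and beta for the linear model y ~ Phi @ w.

    Standard MacKay evidence-framework sketch; names and interface are
    illustrative, not the authors' code.
    """
    N, k = Phi.shape
    PhiT_Phi = Phi.T @ Phi
    PhiT_y = Phi.T @ y
    prev_log_ev = -np.inf
    for _ in range(max_iter):
        # Posterior over the weights for the current alpha, beta:
        # precision A, mean m.
        A = alpha * np.eye(k) + beta * PhiT_Phi
        m = beta * np.linalg.solve(A, PhiT_y)
        resid = y - Phi @ m

        # Log evidence up to constants; stop when its change is small.
        _, logdetA = np.linalg.slogdet(A)
        log_ev = 0.5 * (k * np.log(alpha) + N * np.log(beta)
                        - beta * float(resid @ resid) - alpha * float(m @ m)
                        - logdetA - N * np.log(2 * np.pi))
        if abs(log_ev - prev_log_ev) < tol:
            break
        prev_log_ev = log_ev

        # Effective number of well-determined parameters.
        eig = np.linalg.eigvalsh(beta * PhiT_Phi)
        gamma = float(np.sum(eig / (eig + alpha)))

        # MacKay's fixed-point re-estimation of the hyperparameters.
        alpha = gamma / float(m @ m)
        beta = (N - gamma) / float(resid @ resid)
    return m, alpha, beta, log_ev
```

Comparing the converged log evidence of an RBF unit against that of a projection unit on the same sub-region is one way such a loop could drive the unit-type choice.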


Pumadyn data set (DELVE archive)

  • Dynamics of a Puma robot arm.

  • Target: angular acceleration of one of the links.

  • Inputs: various joint angles, velocities and torques.

  • Large Gaussian noise.

  • The data set is nonlinear.

  • Input dimension: 8 or 32.




Related work

  • Hassibi et al.: Optimal Brain Surgeon.

  • MacKay: Bayesian inference of weights and regularization parameters.

  • Jordan & Jacobs: hierarchical mixtures of experts (HME), with division of the input space.

  • Schwarz, and Kass & Raftery: BIC.


Discussion

  • Pruning removes 90% of the parameters.

  • Pruning reduces the variance of the estimator.

  • The pruning algorithm is slow.

  • PRBFN performs better than an MLP or an RBF network alone.

  • A disadvantage of the Bayesian techniques: the choice of the prior distribution parameter.

  • The Bayesian techniques perform better than the likelihood ratio test (LRT).

  • Unit-type selection is a crucial element of PRBFN.

  • The curse of dimensionality is clearly visible on the Pumadyn data sets.

