Presentation Transcript

Radial Basis Function Networks

20013627 표현아

Computer Science, KAIST

Contents
  • Introduction
  • Architecture
  • Designing
  • Learning strategies
  • MLP vs RBFN
Introduction
  • A completely different approach from the MLP: the design of a neural network is viewed as a curve-fitting (approximation) problem in a high-dimensional space

introduction

Radial Basis Function Network
  • A kind of supervised neural network
  • Design of a NN as a curve-fitting problem
  • Learning
    • find the surface in a multidimensional space that best fits the training data
  • Generalization
    • use this multidimensional surface to interpolate the test data

introduction

Radial Basis Function Network
  • Approximates a function with a linear combination of radial basis functions

F(x) = Σᵢ wᵢ hᵢ(x)

  • hᵢ(x) is usually a Gaussian function
Architecture

[Diagram: input-layer nodes x1, x2, x3, …, xn feed hidden-layer basis functions h1, h2, h3, …, hm; the hidden outputs are weighted by W1, W2, W3, …, Wm and summed to produce the output f(x). Layers shown: input layer, hidden layer, output layer.]


architecture

Three layers
  • Input layer
    • Source nodes that connect the network to its environment
  • Hidden layer
    • Hidden units provide a set of basis functions
    • High dimensionality
  • Output layer
    • Linear combination of hidden functions

architecture

Radial basis function

f(x) = Σⱼ wⱼ hⱼ(x),   j = 1, …, m

hⱼ(x) = exp( −‖x − cⱼ‖² / rⱼ² )

where cⱼ is the center of a region and rⱼ is the width of the receptive field

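Since the slide stops at the formula, the following is a minimal NumPy sketch of the forward pass it describes; the function and variable names (rbf_forward, centers, widths, weights) and the example numbers are assumptions made for illustration, not part of the original slides.

```python
import numpy as np

def rbf_forward(X, centers, widths, weights):
    """Evaluate f(x) = sum_j w_j * exp(-||x - c_j||^2 / r_j^2) for each row of X."""
    # Squared distance from every input row to every center c_j
    sq_dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    # Gaussian hidden-layer activations h_j(x)
    H = np.exp(-sq_dist / widths ** 2)
    # Output layer: linear combination of the basis functions
    return H @ weights

# Illustrative usage with arbitrary numbers
X = np.array([[0.0, 0.0], [1.0, 1.0]])        # two 2-D input vectors
centers = np.array([[0.0, 0.0], [1.0, 0.0]])  # m = 2 centers c_j
widths = np.array([1.0, 0.5])                 # receptive-field widths r_j
weights = np.array([0.7, -0.3])               # output weights w_j
print(rbf_forward(X, centers, widths, weights))
```

Computing all hidden activations as a matrix H is convenient because the weight-learning schemes discussed later (pseudo-inverse, LMS) operate directly on that matrix.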
Designing
  • Require
    • Selection of the radial basis function width parameter
    • Number of radial basis neurons

designing

Selection of the RBF width parameter
  • Not required for an MLP
  • Smaller width
    • alerts on untrained test data: inputs far from every center give almost no activation (e.g. at distance 1 from a center, r = 0.1 gives h ≈ exp(−100) ≈ 0, while r = 2 gives h ≈ 0.78)
  • Larger width
    • network of smaller size & faster execution

designing

Number of radial basis neurons
  • Chosen by the designer
  • Maximum number of neurons = number of training inputs
  • Minimum number of neurons: experimentally determined
  • More neurons
    • more complex network, but a smaller error tolerance
Learning strategies
  • Two levels of Learning
    • Center and spread learning (or determination)
    • Output-layer weight learning
  • Keep the number of parameters as small as possible
    • Principle of dimensionality

learning strategies

Various learning strategies
  • They differ in how the centers of the radial-basis functions of the network are specified:
  • Fixed centers selected at random
  • Self-organized selection of centers
  • Supervised selection of centers

learning strategies

Fixed centers selected at random (1)
  • Fixed RBFs for the hidden units
  • The locations of the centers may be chosen randomly from the training data set.
  • Different values of the centers and widths can be used for each radial basis function, so experimentation with the training data is needed.

learning strategies

Fixed centers selected at random (2)
  • Only the output-layer weights need to be learned.
  • Obtain the output-layer weights by the pseudo-inverse method (a sketch follows this slide).
  • Main problem
    • Requires a large training set for a satisfactory level of performance
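To make the pseudo-inverse step concrete, here is a rough sketch of the fixed-random-centers scheme described above; the helper name fit_rbf_fixed_centers, the shared width value, and the sine-fitting example are assumptions chosen for illustration, not part of the slides.

```python
import numpy as np

def fit_rbf_fixed_centers(X, y, m, width, seed=0):
    """Pick m centers at random from the training set, then solve for the
    output weights in closed form with the pseudo-inverse: w = pinv(H) @ y."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=m, replace=False)]   # random fixed centers
    sq_dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    H = np.exp(-sq_dist / width ** 2)                        # hidden design matrix
    weights = np.linalg.pinv(H) @ y                          # least-squares weights
    return centers, weights

# Illustrative usage: fit y = sin(x) on a small 1-D training set
X = np.linspace(0.0, np.pi, 20).reshape(-1, 1)
y = np.sin(X).ravel()
centers, weights = fit_rbf_fixed_centers(X, y, m=6, width=0.5)
```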

learning strategies

Self-organized selection of centers (1)
  • Hybrid learning
    • self-organized learning to estimate the centers of the RBFs in the hidden layer
    • supervised learning to estimate the linear weights of the output layer
  • Self-organized learning of centers by means of clustering.
  • Supervised learning of output weights by the LMS algorithm (see the LMS sketch after this list).
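For the supervised half of the hybrid scheme, a minimal LMS (least-mean-squares) sketch for the output weights is given below; it assumes the hidden-layer activation matrix H has already been computed from the clustered centers and widths, and the learning rate and epoch count are illustrative choices.

```python
import numpy as np

def lms_output_weights(H, y, lr=0.1, epochs=100):
    """Delta-rule / LMS training of the output weights only:
    for each pattern, w <- w + lr * (d - h.w) * h."""
    weights = np.zeros(H.shape[1])
    for _ in range(epochs):
        for h, d in zip(H, y):          # one training pattern at a time
            e = d - h @ weights         # instantaneous error
            weights += lr * e * h       # gradient step on 0.5 * e^2
    return weights
```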

learning strategies

Self-organized selection of centers (2)
  • k-means clustering (sketched in code after this list)
    • Initialization
    • Sampling
    • Similarity matching
    • Updating
    • Continuation
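The five steps above are the standard k-means loop; a compact batch variant used to place the RBF centers is sketched below. The slides likely refer to the online (one sample at a time) form, so treat this as an illustrative approximation; the iteration cap, seed, and empty-cluster handling are assumptions of this example.

```python
import numpy as np

def kmeans_centers(X, k, iters=100, seed=0):
    """k-means for center selection: initialization, then repeated
    similarity matching (nearest center), updating (cluster means),
    and continuation until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]          # initialization
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)                                   # similarity matching
        new_centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])                 # updating
        if np.allclose(new_centers, centers):                       # continuation test
            break
        centers = new_centers
    return centers
```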

learning strategies

Supervised selection of centers
  • All free parameters of the network are adjusted by a supervised learning process.
  • Error-correction learning using the LMS algorithm.

learning strategies

Learning formula
  • Linear weights (output layer)
  • Positions of centers (hidden layer)
  • Spreads of centers (hidden layer); gradient-descent update rules for all three parameter sets are sketched below
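The slide lists the three parameter groups without the formulas themselves. As a hedged reconstruction, these are the gradient-descent update rules that follow from the Gaussian basis hⱼ(x) = exp(−‖x − cⱼ‖²/rⱼ²) used earlier, with assumed learning rates η₁, η₂, η₃ and per-pattern error e = d − f(x); the symbols and step sizes are assumptions of this sketch, not taken from the slides.

```latex
\begin{align*}
E &= \tfrac{1}{2} e^{2}, \qquad e = d - \textstyle\sum_{j} w_j h_j(x) \\
w_j &\leftarrow w_j + \eta_1 \, e \, h_j(x) \\
c_j &\leftarrow c_j + \eta_2 \, e \, w_j \, h_j(x) \, \frac{2\,(x - c_j)}{r_j^{2}} \\
r_j &\leftarrow r_j + \eta_3 \, e \, w_j \, h_j(x) \, \frac{2\,\lVert x - c_j \rVert^{2}}{r_j^{3}}
\end{align*}
```

Each rule is simply parameter ← parameter − η ∂E/∂parameter for the squared error E above.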

MLP vs RBFN

Approximation
  • MLP: global network
    • All inputs cause an output
  • RBF: local network
    • Only inputs near a receptive field produce an activation
    • Can give a "don't know" output (see the sketch below)
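To illustrate the "don't know" behaviour of a local network, here is a standalone sketch; the threshold value and all names and numbers are assumptions chosen for the example.

```python
import numpy as np

def rbf_predict_or_unknown(x, centers, widths, weights, threshold=1e-3):
    """Return the RBF output, or None ("don't know") when no hidden unit is
    meaningfully activated, i.e. x lies far from every receptive field."""
    sq_dist = ((x - centers) ** 2).sum(axis=-1)
    h = np.exp(-sq_dist / widths ** 2)
    if h.max() < threshold:              # no center is close to x
        return None                      # local network: report "don't know"
    return h @ weights

centers = np.array([[0.0, 0.0], [1.0, 0.0]])
widths = np.array([0.5, 0.5])
weights = np.array([1.0, -1.0])
print(rbf_predict_or_unknown(np.array([0.1, 0.0]), centers, widths, weights))    # inside a receptive field
print(rbf_predict_or_unknown(np.array([10.0, 10.0]), centers, widths, weights))  # far away -> None
```

A global network such as an MLP would still return a (possibly meaningless) number for the far-away input, which is the contrast the slide is drawing.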