Other Network Models

Deterministic weight updates
  • Until now, weight updates have been deterministic.
  • State = current weight values & unit activations
  • But a probabilistic distribution can be used to determine whether or not a unit should change to the new calculated state.
  • So, for example, in the discrete Hopfield network, even if a unit is selected for update, it might not be updated (see the sketch below).
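A minimal MATLAB sketch of such a probabilistic update (the function name, the deterministic sign rule, and the acceptance probability p are assumptions for illustration, not from the slides):

function s = probabilisticUpdate(W, s, i, p)
% Probabilistic version of a Hopfield-style unit update (sketch).
% W: weight matrix, s: current +/-1 state vector (column), i: unit chosen
% for update, p: assumed probability of accepting the newly computed state.
net  = W(i,:) * s;            % net input to unit i
snew = 2 * (net >= 0) - 1;    % state the deterministic rule would assign
if rand < p
    s(i) = snew;              % the unit moves to the calculated state
end                           % otherwise it keeps its old state
end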
Simulated Annealing

[Figure: finding a global minimum using simulated annealing; points tried at medium temperature and at low temperature are shown.]

S.A.
  • A deterministic algorithm like backpropagation that uses gradient descent often gets caught in local minima.
  • Once caught, the network can no longer move along the error surface to a more optimal solution.
  • Metropolis algorithm: select at random a part of the system to change. The change is always accepted if the global system energy falls, but if there is an increase in energy then the change is accepted with probability p.
S.A.

p = exp(−ΔE / T), where ΔE is the change in energy and T is the temperature.
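A minimal MATLAB sketch of this acceptance rule (the function name is an assumption; dE and T are ΔE and the temperature as defined above):

function accept = metropolisAccept(dE, T)
% Metropolis acceptance rule (sketch): always accept a fall in energy,
% otherwise accept with probability exp(-dE / T).
accept = dE < 0 || rand < exp(-dE / T);
end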

Example algorithm for function minimizing (Geman and Hwang, 1986)
  1. Select at random an initial vector x and an initial value of T.
  2. Create a copy of x called xnew and randomly select a component of xnew to change. Flip the bit of the selected component.
  3. Calculate the change in energy, ΔE = f(xnew) − f(x).
  4. If ΔE is less than 0 then x = xnew. Otherwise select a random number between 0 and 1 from a uniform distribution; if the random number is less than exp(−ΔE / T) then x = xnew.
Continued

5. If there have been a specified number (M) of changes in x for which the value of f has dropped or there have been N changes in x since the last change in temperature, then set T = αT.

6. If the minimum value of f has not decreased more than some specified constant in the last L iterations then stop, otherwise go back and repeat from step 2.
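A MATLAB sketch of the whole loop under steps 1-6; the energy function f, the vector length, and the constants T, alpha, M, N, L, and the step 6 tolerance are all assumed values for illustration:

% Simulated annealing for minimizing f over a binary vector (sketch).
f     = @(x) sum(x);          % assumed energy function
n     = 8;                    % assumed vector length
x     = rand(1, n) > 0.5;     % step 1: random initial vector
T     = 10;                   % step 1: initial temperature (assumed)
alpha = 0.9;                  % cooling factor used in step 5
M = 5; N = 20; L = 200;       % assumed values for M, N and L
eps0  = 1e-6;                 % assumed "specified constant" of step 6
drops = 0; changes = 0; best = f(x); sinceImprove = 0;
while sinceImprove < L                 % step 6: stop when no recent improvement
    xnew    = x;
    k       = randi(n);                % step 2: pick a component at random
    xnew(k) = ~xnew(k);                % ... and flip the selected bit
    dE      = f(xnew) - f(x);          % step 3: change in energy
    if dE < 0 || rand < exp(-dE / T)   % step 4: Metropolis acceptance
        x = xnew;
        changes = changes + 1;
        if dE < 0, drops = drops + 1; end
    end
    if drops >= M || changes >= N      % step 5: cool the temperature
        T = alpha * T;
        drops = 0; changes = 0;
    end
    if f(x) < best - eps0              % step 6: track the minimum of f
        best = f(x); sinceImprove = 0;
    else
        sinceImprove = sinceImprove + 1;
    end
end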

Boltzmann machine
  • A Boltzmann machine is a neural network that uses the idea of simulated annealing for updating the network's state.
  • It is a Hopfield network that uses a stochastic process for updating the state of a network unit.
  • Assume activation values of +1 and −1 (a sketch of the stochastic update follows).
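A common formulation of this stochastic update, shown as a MATLAB sketch; the sigmoid acceptance form is the standard Boltzmann machine rule and is not spelled out on the slide:

function s = boltzmannUpdate(W, s, i, T)
% Stochastic Boltzmann update of unit i (sketch of the standard rule).
% W: symmetric weight matrix, s: current +/-1 state vector (column),
% T: temperature.
net = W(i,:) * s;                    % net input to unit i
p   = 1 / (1 + exp(-2 * net / T));   % probability that unit i takes +1
if rand < p
    s(i) = 1;
else
    s(i) = -1;
end
end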
Weight update

Δw_ij = η (p_ij⁺ − p_ij⁻)

where p_ij⁺ is the correlation between units i and j during the clamped phase, p_ij⁻ is the correlation during the free-running phase, and η is a learning rate.
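A minimal MATLAB sketch of this weight update, assuming the two correlation matrices have already been estimated by sampling and that eta is a chosen learning rate:

function W = boltzmannWeightUpdate(W, pPlus, pMinus, eta)
% Boltzmann weight update (sketch).
% pPlus:  correlations <s_i*s_j> estimated during the clamped phase
% pMinus: correlations estimated during the free-running phase
% eta:    assumed learning rate
W = W + eta * (pPlus - pMinus);   % strengthen weights where the clamped
                                  % correlation exceeds the free-running one
W(logical(eye(size(W)))) = 0;     % keep the Hopfield-style zero diagonal
end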

An example Boltzmann machine (can be used for autoassociation)

[Figure: an example Boltzmann machine with an input layer and an output layer.]

Probabilistic Neural Networks
  • In a PNN, a pattern is classified based on its proximity to neighbouring patterns.
  • The manner in which neighbouring patterns are distributed is important.
  • A simple way to decide the class of a new sample is to calculate the centroid of each class and assign the sample to the class with the nearest centroid.
  • The PNN is based on Bayes' technique of classification: make a decision as to the most likely class that a sample is taken from. The decision requires an estimate of the probability density function for each class.
  • The estimate is constructed from training data.
Gaussian dist.

[Figure: a Gaussian function of two variables.]

PDF (Probability density function)

The estimated PDF is the summation of the individual Gaussians centered at each sample point; here σ = 0.1.

PDF

The same estimate as in the previous figure but with σ = 0.3. If the width is too large, there is a danger that classes will become blurred (a high chance of misclassification).

PDF

The same estimate as in the previous figure but with σ = 0.05. If the width is too small, there is a danger of poor generalization: the fit around the training samples becomes too tight.

PNN
  • The class with a highly dense population in the region of an unknown sample will be preferred over other classes.
  • The probability density function (PDF) needs to be estimated.
  • The estimate can be found using Parzen’s PDF estimator which uses a weight function that is centered at a training point. The weight function is called a potential function or kernel.
  • A commonly used function is a Gaussian function.
PNN
  • The Gaussian functions are then summed to give the PDF.
  • The form of the Gaussian function is as follows:

g(x) = exp( −(x − x_i)·(x − x_i) / (2σ²) )

where x_i is a stored training sample. The square in the exponent cancels the square root in the distance formula, so the square root never has to be computed. A sketch of the resulting estimator follows.
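A one-dimensional MATLAB sketch of the summed-Gaussian (Parzen) estimate; the function handle and variable names are illustrative, and the constant factor in front of each Gaussian is dropped because it is the same for every class:

% Parzen PDF estimate with Gaussian kernels (1-D sketch).
% xs: training samples of one class, sigma: kernel width, x: query point.
% The 1/(sqrt(2*pi)*sigma) factor is omitted: it is common to all classes.
parzen = @(x, xs, sigma) mean(exp(-(x - xs).^2 / (2 * sigma^2)));

Comparing parzen(x, samplesA, sigma) with parzen(x, samplesB, sigma) then implements the class decision described above.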

Example
  • The following figure shows two classes of single-variable data. A sample positioned at 0.2 is from an unknown class. Using a PDF with a Gaussian kernel, estimate the class that the sample is from.

[Figure: the unknown sample at 0.2 to be classified using a PDF.]

Solution
  • The value used for σ is 0.1. The results of the density estimation are shown in the table on the following slide.
  • Although the unknown sample is closest to a point in class A, the calculation favors class B. The reason B is preferred is the high density of points around 0.35 (a sketch with assumed sample positions follows).
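The actual training points appear only in the lost figure, so the MATLAB sketch below uses made-up sample positions (with class B clustered around 0.35, as the solution notes) purely to illustrate the calculation:

% Hypothetical sample positions; the real ones are in the missing figure.
samplesA = [0.15 0.50 0.60];        % assumed class A points
samplesB = [0.32 0.35 0.36 0.38];   % assumed class B cluster near 0.35
sigma    = 0.1;
pA = mean(exp(-(0.2 - samplesA).^2 / (2 * sigma^2)))   % density of A at 0.2
pB = mean(exp(-(0.2 - samplesB).^2 / (2 * sigma^2)))   % density of B at 0.2
% The sample at 0.2 goes to the class with the larger density; even though
% the single closest point (0.15) is in class A, the dense cluster in B wins.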
The neural network architecture for a PNN
  • The input and pattern layers are fully connected.
  • The weights feeding into a pattern unit are set to the elements of the corresponding pattern vector.
  • The activation of a pattern unit is

a_i = exp( −(x − w_i)·(x − w_i) / (2σ²) )

where x is an unknown input pattern and w_i is the weight vector feeding pattern unit i.

PNN

If the input vectors are all of unit length, then the following form of the activation function can be used:

a_i = exp( (z_i − 1) / σ² ), where z_i = w_i · x

  • Number of input units = number of features
  • Number of pattern units = number of training samples
  • Number of summation units = number of classes
  • The weights from the pattern units to the summation units are fixed at 1.
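Putting these rules together, a MATLAB sketch of the full forward pass; the function name and argument names are assumptions, with the rows of W holding the normalized training patterns:

function winner = pnnClassify(W, classIdx, x, sigma)
% PNN forward pass with unit-length vectors (sketch).
% W: one row per pattern unit (a normalized training sample),
% classIdx: class label of each pattern unit, x: normalized input (column),
% sigma: kernel width.
z = W * x;                           % dot products with all stored patterns
a = exp((z - 1) / sigma^2);          % pattern-unit activations
s = zeros(max(classIdx), 1);
for c = 1:max(classIdx)
    s(c) = sum(a(classIdx == c));    % summation units: weights fixed at 1
end
[~, winner] = max(s);                % output unit: class with the largest sum
end

With sigma = 0.1 this produces activations of the form exp((z − 1)/0.01), which is exactly the expression evaluated in the worked example below.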

An Example PNN Architecture

[Figure: an example PNN architecture with an input layer, a pattern layer, a summation layer, and an output layer.]

Example
  • The following figure shows a set of training points from three classes and an unknown sample. Normalize the inputs to unit length and, using a PNN, find the class to which the unknown sample is assigned.
Solution

[Figure: the vectors shown in the previous figure, normalized to unit length.]

Calculations of activations

Each activation below has the form exp((z − 1) / σ²) with σ² = 0.01, where z is the dot product of the normalized unknown input (0.7967, 0.6044) with a stored pattern vector:

>> exp(((0.6247*0.7967)+(0.7809*0.6044)-1)/0.01)

ans =

0.0482

>> exp(((0.9138*0.7967)+(0.4061*0.6044)-1)/0.01)

ans =

0.0704
