
Other Network Models



  1. Other Network Models

  2. Deterministic weight updates • Until now, weight updates have been deterministic. • The state of the network = the current weight values and unit activations. • A probability distribution can instead be used to decide whether or not a unit should change to its newly calculated state. • So, for example, in the discrete Hopfield network, even if a unit is selected for update, it might not be updated.

  3. Simulated Annealing [Figure: finding a global minimum using simulated annealing; the plot marks the points tried at medium temperature and the points tried at low temperature.]

  4. S.A. • A deterministic algorithm such as backpropagation, which uses gradient descent, often gets caught in local minima. • Once caught, the network can no longer move along the error surface to a more optimal solution. • Metropolis algorithm: select at random a part of the system to change. The change is always accepted if the global system energy falls, but if the energy increases then the change is accepted only with probability p.

  5. S.A. The acceptance probability is p = exp(−ΔE / T), where ΔE is the change in energy and T is the temperature. For example, with ΔE = 2 and T = 1 the change is accepted with probability exp(−2) ≈ 0.14; the same uphill move becomes more likely as T rises.

  6. Example algorithm for function minimization (Geman and Hwang, 1986) • Select at random an initial vector x and an initial value of T. • Create a copy of x called xnew and randomly select a component of xnew to change. Flip the bit of the selected component. • Calculate the change in energy, ΔE = f(xnew) − f(x). • If the change in energy is less than 0 then x = xnew. Otherwise select a random number between 0 and 1 using a uniform probability density function; if the random number is less than exp(−ΔE / T) then x = xnew.

  7. Continued 5. If there have been a specified number (M) of changes in x for which the value of f has dropped, or there have been N changes in x since the last change in temperature, then set T = αT. 6. If the minimum value of f has not decreased by more than some specified constant in the last L iterations then stop; otherwise repeat from step 2. A code sketch of these steps follows.
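As an illustration, the six steps above can be sketched in MATLAB. Everything here is an assumption made for the sketch: the toy energy function f, the bit-vector length, and the values of T, alpha and N; the M and L stopping tests of steps 5 and 6 are reduced to a fixed iteration count.

% Simulated-annealing sketch of steps 1-6 (illustrative parameters).
f = @(x) sum(x);                    % toy energy: number of 1-bits (minimum: all zeros)
nbits = 16;
x = double(rand(1, nbits) > 0.5);   % step 1: random initial vector
T = 1.0; alpha = 0.9; N = 50;       % initial temperature, cooling factor, changes per step
for step = 1:2000
    xnew = x;
    i = randi(nbits);
    xnew(i) = 1 - xnew(i);          % step 2: flip one randomly chosen bit
    dE = f(xnew) - f(x);            % step 3: change in energy
    if dE < 0 || rand() < exp(-dE / T)
        x = xnew;                   % step 4: always accept downhill; uphill with prob exp(-dE/T)
    end
    if mod(step, N) == 0
        T = alpha * T;              % step 5: cool after N changes
    end
end
fprintf('final f(x) = %g\n', f(x));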

  8. Boltzmann machine • A neural network that uses the idea of simulated annealing to update the network's state. • It is a Hopfield network that uses a stochastic process for updating the state of a network unit. • Assume +1 and −1 activation values.

  9. Probability function for state change A proposed change to a unit's state is accepted with probability p = 1 / (1 + exp(ΔE / T)), where ΔE is the increase in network energy the change would cause; changes that lower the energy are accepted with probability greater than ½.
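A minimal MATLAB sketch of one such stochastic update, assuming symmetric weights with zero self-connections and ±1 activations; W, s and T below are illustrative values, not taken from the slides.

% One stochastic unit update in a Boltzmann machine (+1/-1 units).
n = 5;
W = randn(n); W = (W + W') / 2; W(1:n+1:end) = 0;  % random symmetric weights, zero diagonal
s = sign(randn(n, 1)); s(s == 0) = 1;              % random +1/-1 state
T = 1.0;                                           % temperature
i = randi(n);                                      % select a unit at random
dE = 2 * s(i) * (W(i, :) * s);                     % energy increase if unit i flips sign
if rand() < 1 / (1 + exp(dE / T))                  % accept with the Boltzmann probability
    s(i) = -s(i);                                  % the flip is likely when dE < 0, but not certain
end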

  10. Weight update Δw_ij = η (p+_ij − p−_ij), where p+_ij = correlation between units i and j during the clamped phase, p−_ij = correlation between units i and j during the free-running phase, and η is a learning rate.
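A sketch of one learning step under this rule. The learning rate eta and the two sampled states are illustrative assumptions; in practice each correlation is averaged over many equilibrium samples collected at low temperature.

% One Boltzmann learning step: dW = eta * (p_plus - p_minus).
n = 5; eta = 0.1;
W = zeros(n);
s_clamped = sign(randn(n, 1)); s_clamped(s_clamped == 0) = 1;  % equilibrium state, visible units clamped
s_free    = sign(randn(n, 1)); s_free(s_free == 0) = 1;        % equilibrium state, free-running
p_plus  = s_clamped * s_clamped';   % unit-pair correlations in the clamped phase
p_minus = s_free * s_free';         % unit-pair correlations in the free-running phase
W = W + eta * (p_plus - p_minus);   % strengthen correlations seen only when clamped
W(1:n+1:end) = 0;                   % keep self-connections at zero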

  11. An example Boltzmann machine (can be used for autoassociation). [Figure: a network with an input layer and an output layer.]

  12. Probabilistic Neural Networks • In a PNN, a pattern is classified based on its proximity to neighbouring patterns. • The manner in which neighbouring patterns are distributed is important. • A simple way to decide the class of a new pattern is to calculate the centroid for each class and assign the pattern to the class with the nearest centroid. • The PNN is based on Bayes' technique of classification: make a decision as to the most likely class that a sample is taken from. The decision requires estimating a probability density function for each class. • The estimate is constructed from the training data.

  13. Class estimation methods

  14. Gaussian dist.

  15. Gaussian dist. [Figure: the Gaussian function for two variables.]

  16. PDF (Probability density function) The estimated PDF is the summation of the individual Gaussians centered at each sample point. Here σ = 0.1

  17. PDF The same estimate as in the previous figure but with σ = 0.3. If the width is too large, there is a danger that classes will become blurred (a high chance of misclassification).

  18. PDF The same estimate as in the previous figure but with σ = 0.05. If the width is too small, there is a danger of poor generalization: the fit around the training samples becomes too close.
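The effect of σ in the three figures above can be reproduced with a short MATLAB sketch; the sample points here are illustrative, not the ones in the figures, and the 1/(σ√(2π)) normalization is omitted since it is common to every kernel.

% Parzen estimate: a sum of Gaussians centred on the sample points,
% plotted for the three widths discussed above.
samples = [0.1 0.15 0.3 0.32 0.35 0.7];          % assumed 1-D training samples
x = linspace(0, 1, 200);
hold on;
for sigma = [0.05 0.1 0.3]
    p = zeros(size(x));
    for s = samples
        p = p + exp(-(x - s).^2 / (2 * sigma^2));  % one kernel per sample
    end
    plot(x, p / numel(samples));                 % average of the kernels
end
legend('\sigma = 0.05', '\sigma = 0.1', '\sigma = 0.3');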

  19. PNN • The class with a highly dense population in the region of an unknown sample will be preferred over other classes. • The probability density function (PDF) needs to be estimated. • The estimate can be found using Parzen's PDF estimator, which uses a weight function centred at each training point. The weight function is called a potential function or kernel. • A commonly used kernel is the Gaussian function.

  20. PNN • The Gaussian functions are then summed to give the PDF. • The form of the Gaussian function is exp(−‖x − xi‖² / (2σ²)); the square in the exponent cancels the square root in the normalization formula.
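Putting the last two slides together, a class decision reduces to summing each class's kernels at the unknown point and picking the larger total. The training points, σ and test point below are illustrative assumptions; they are not the data of the example that follows.

% Bayes-style decision from Parzen estimates of two classes.
classA = [0.05 0.2 0.5];
classB = [0.3 0.33 0.35 0.38];
sigma = 0.1; xu = 0.2;                               % unknown sample
fA = mean(exp(-(xu - classA).^2 / (2 * sigma^2)));   % class A density at xu
fB = mean(exp(-(xu - classB).^2 / (2 * sigma^2)));   % class B density at xu
if fA > fB, disp('class A'), else, disp('class B'), end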

  21. Example • There are two classes of single-variable data in the following figure. A sample positioned at 0.2 is from an unknown class. Using a PDF estimate with a Gaussian kernel, determine the class that the sample is from.

  22. [Figure: the unknown sample to be classified using a PDF.]

  23. SOLUTION • The value used for α is 0.1. The results of the density estimation are shown in the table on the following slide. • Although the unknown sample is closest to a point in class A, the calculation favours class B. The reason B is preferred is the high density of points around 0.35.

  24. The calculation of the density estimation. [Table: one row per sample point.]

  25. The neural network architecture for a PNN • The input and pattern layers are fully connected. • The weights feeding into a pattern unit are set to the elements of the corresponding pattern vector. • The activation of a pattern unit is exp(−‖x − w‖² / (2σ²)), where x is an unknown input pattern and w is the unit's weight vector.

  26. PNN If the input vectors are all of unit length, then the following form of the activation function can be used: exp((w·x − 1) / σ²). Number of input units = number of features. Number of pattern units = number of training samples. Number of summation units = number of classes. The weights from the pattern units to the summation units are fixed at 1.
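As a check, a pattern unit's activation can be computed directly from this form. The numbers below are the normalized vectors that appear in the calculations on slide 34, where the divisor 0.01 implies σ² = 0.01.

% Activation of one pattern unit for unit-length vectors: exp((w'*x - 1)/sigma^2).
w = [0.6247; 0.7809];            % weights = one stored (normalized) training pattern
x = [0.7967; 0.6044];            % normalized unknown input pattern
sigma2 = 0.01;
z = exp((w' * x - 1) / sigma2)   % prints 0.0482, matching slide 34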

  27. An Example PNN Architecture [Figure: four layers, from bottom to top: input layer, pattern layer, summation layer, output layer.]

  28. Example • The following figure shows a set of training points from three classes and an unknown sample. Normalize the inputs to unit length and, using a PNN, find the class to which the unknown sample is assigned.

  29. The unknown sample to be classified using a PNN. [Figure: training points from classes A, B and C, plus the unknown sample.]

  30. Solution The vectors shown in the previous figure are normalized here.

  31. Training data normalized to unit length

  32. Unknown sample After normalization, the unknown sample is (0.7967, 0.6044); these values appear in the calculations on slide 34.

  33. Computation of the PNN for classifying the unknown sample

  34. Calculations of activations (MATLAB). Each pattern unit computes exp((w·x − 1)/σ²) with σ² = 0.01:
>> exp(((0.6247*0.7967)+(0.7809*0.6044)-1)/0.01)   % unit storing pattern (0.6247, 0.7809)
ans = 0.0482
>> exp(((0.9138*0.7967)+(0.4061*0.6044)-1)/0.01)   % unit storing pattern (0.9138, 0.4061)
ans = 0.0704
