
Simple Perceptrons

Simple perceptrons, or one-layer (layered) feed-forward networks: the equation governing the computation of a simple perceptron, the activation function (usually nonlinear, e.g. a step function or sigmoid), the input pattern ξ, and the question of whether to use an explicit threshold.


Presentation Transcript


  1. Simple Perceptrons, or one-layer feed-forward networks

  2. Perceptrons or Layered Feed-Forward Networks

  3. Equation governing the computation of a simple perceptron: O_i = g(h_i) = g(Σ_j w_ij ξ_j), where g is the activation function, usually nonlinear, e.g. a step function or sigmoid, and ξ is the input pattern

  4. Threshold or no threshold? With a threshold: O_i = g(Σ_j w_ij ξ_j − θ_i). Without an explicit threshold: the threshold is simulated with a connection to an extra input terminal permanently tied to −1, whose weight plays the role of the threshold
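A minimal sketch of the two conventions in Python (the function names, the ±1 step activation, and the use of NumPy are illustrative assumptions, not taken from the slides):

```python
import numpy as np

def step(h):
    # Step activation: +1 for non-negative net input, -1 otherwise.
    return np.where(h >= 0, 1, -1)

def output_with_threshold(w, theta, xi):
    # O = g(sum_j w_j * xi_j - theta): explicit threshold theta.
    return step(w @ xi - theta)

def output_with_bias_input(w, theta, xi):
    # Same unit, but the threshold is simulated by an extra input
    # terminal permanently tied to -1, whose weight is theta.
    xi_aug = np.append(xi, -1.0)
    w_aug = np.append(w, theta)
    return step(w_aug @ xi_aug)
```

For the same w, theta, and xi, both functions produce the same output; the second form simply folds the threshold into the weight vector.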

  5. The General Association (Matching) Task is to ask that, for every input pattern, the actual output pattern equal the target pattern: O_i^μ = ζ_i^μ for all output units i and all patterns μ

  6. Threshold Units • Start with the simplest case, threshold units, which are practical for one-layer perceptrons • Also assume the targets take only the values +1 and −1, with no values in between those extremes • Then all that matters is that, for each input pattern, the net input (weighted sum) h to each output unit has the same sign as the target ζ
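Spelled out in the same notation, with μ indexing the input patterns (a standard restatement of the condition just described):

```latex
h_i^{\mu} = \sum_j w_{ij}\, \xi_j^{\mu},
\qquad
\zeta_i^{\mu}\, h_i^{\mu} > 0 \quad \text{for all } i \text{ and } \mu ,
```

i.e. the net input to each output unit has the same sign as the corresponding target.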

  7. A Notational Simplification • To simplify notation, note that the output units are independent of one another • [In a multilayer network, however, the hidden (non-output) layers are not independent] • So consider only one output unit at a time and drop the i subscripts • The weight vector and each input pattern then live in the same space • Advantage: the two vectors can be represented together geometrically

  8. New Form for the General Association Task: geometric interpretation (a reconstruction of the slide's equations is sketched below)
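The slide's equations did not survive in the transcript; a plausible reconstruction from the preceding two slides (one output unit, targets ±1, i subscript dropped) is:

```latex
\zeta^{\mu}\, h^{\mu} > 0
\quad\Longleftrightarrow\quad
w \cdot \left(\zeta^{\mu}\, \xi^{\mu}\right) > 0
\qquad \text{for every pattern } \mu .
```

Geometrically, the weight vector w must have a positive projection onto each of the vectors ζ^μ ξ^μ, i.e. it must lie on the positive side of every hyperplane through the origin perpendicular to one of those vectors.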

  9. A Simple Learning Algorithm • Also called the perceptron rule • Go through the input patterns one by one • For each pattern, go through the output units one by one, asking whether the output is the desired one • If so, leave the weights into that unit alone • Otherwise, in the spirit of Hebb, add to each connection something proportional to the product of the input and the desired output

  10. Simplified Simple Learning Algorithm (for the one-neuron case) • Start with w = 0 (not strictly necessary) • Cycle through the learning patterns • For each pattern ξ^μ: if the output O is not equal to the desired output ζ^μ, add the product of the desired output and the input to w, i.e. w ← w + ζ^μ ξ^μ • Keep cycling through the patterns until every pattern gives the correct output (a runnable sketch follows below) • Convergence is guaranteed provided the two classes of input points are linearly separable • The perceptron convergence theorem guarantees this
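A minimal runnable sketch of this rule (the function name, the use of NumPy, the epoch cap, and the AND example are illustrative assumptions, not from the slides):

```python
import numpy as np

def train_perceptron(patterns, targets, max_epochs=100):
    """Perceptron rule for one threshold unit with targets in {-1, +1}.

    patterns: array of shape (num_patterns, num_inputs)
    targets:  array of shape (num_patterns,) with entries -1 or +1
    """
    w = np.zeros(patterns.shape[1])            # start with w = 0 (not necessary)
    for _ in range(max_epochs):                # keep cycling through the patterns
        changed = False
        for xi, zeta in zip(patterns, targets):
            output = 1 if w @ xi >= 0 else -1
            if output != zeta:                 # wrong output: Hebb-like correction
                w = w + zeta * xi              # w <- w + zeta * xi
                changed = True
        if not changed:                        # every pattern correct: done
            break
    return w

# Example: AND on {-1, +1} inputs, with a third input terminal tied to -1
# acting as the threshold (slide 4's convention).
X = np.array([[-1, -1, -1], [-1, 1, -1], [1, -1, -1], [1, 1, -1]], dtype=float)
t = np.array([-1, -1, -1, 1], dtype=float)
w = train_perceptron(X, t)
```

Because the two classes here are linearly separable, the loop stops after a few passes, in line with the perceptron convergence theorem.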

  11. Weight Update Formula: the "Hebbian" form from the blue book; too complicated compared with the simplified rule above
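The formula itself did not survive in the transcript. If the "blue book" is Hertz, Krogh and Palmer's Introduction to the Theory of Neural Computation (an assumption based on the ξ/ζ notation used throughout), the rule being referred to is likely the step-function-gated Hebbian update, reconstructed below rather than copied from the slide:

```latex
\Delta w_{ij} \;=\; \eta\, \Theta\!\left(-\zeta_i^{\mu}\, h_i^{\mu}\right) \zeta_i^{\mu}\, \xi_j^{\mu},
\qquad
h_i^{\mu} = \sum_j w_{ij}\, \xi_j^{\mu},
```

where η is the learning rate and Θ is the unit step function, so a weight changes only when pattern μ is misclassified; with η = 1 this is essentially the simplified rule of the previous slide.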
