
CS621: Artificial Intelligence
Lecture 22-23: Sigmoid Neuron, Backpropagation
Pushpak Bhattacharyya, Computer Science and Engineering Department, IIT Bombay




Presentation Transcript


  1. CS621: Artificial Intelligence. Lecture 22-23: Sigmoid Neuron, Backpropagation (Lectures 20 and 21, on Graphical Models, were taken by Anup). Pushpak Bhattacharyya, Computer Science and Engineering Department, IIT Bombay

  2. Training of the MLP • Multilayer Perceptron (MLP) • Question: how can we find weights for the hidden layers, when no target output is available for the hidden neurons? • This is the credit assignment problem – deciding how much of the output error each hidden weight is responsible for – and it is solved by gradient descent

  3. Gradient Descent Technique • Let E be the total error at the output layer: E = ½ Σj Σi (ti − oi)² • ti = target output; oi = observed output • i is the index going over the n neurons in the outermost layer • j is the index going over the p patterns (1 to p) • Example: for XOR, p = 4 and n = 1
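
This error computation is easy to express in code. Below is a minimal Python/NumPy sketch (not from the slides); the array names and the sample outputs are illustrative:

```python
import numpy as np

def total_error(targets, outputs):
    """E = 1/2 * sum over patterns j and output neurons i of (t_i - o_i)^2."""
    return 0.5 * np.sum((targets - outputs) ** 2)

# XOR: p = 4 patterns, n = 1 output neuron (the outputs here are made up)
targets = np.array([[0.0], [1.0], [1.0], [0.0]])
outputs = np.array([[0.1], [0.8], [0.9], [0.2]])
print(total_error(targets, outputs))  # 0.05
```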

  4. Weights in a FF NN • wmn is the weight of the connection from the nth neuron to the mth neuron • E versus the weights is a complex surface in the space defined by the weights wmn • The negative gradient −∂E/∂wmn gives the direction in which a movement of the operating point in the wmn co-ordinate space will result in the maximum decrease in error • [Figure: neuron n connected to neuron m through weight wmn]

  5. Sigmoid neurons • Gradient descent needs a derivative computation – not possible in the perceptron due to the discontinuous step function used! → Sigmoid neurons, with easy-to-compute derivatives, are used instead • Computing power comes from the non-linearity of the sigmoid function

  6. Derivative of the Sigmoid function • o = 1 / (1 + e^(−net)) • do/dnet = o(1 − o) • The derivative is expressed in terms of the output o itself, which is what makes it so cheap to compute
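
A small sketch of the sigmoid and its derivative in NumPy, with a finite-difference check of the identity do/dnet = o(1 − o); all names here are illustrative:

```python
import numpy as np

def sigmoid(net):
    """o = 1 / (1 + e^(-net))"""
    return 1.0 / (1.0 + np.exp(-net))

def sigmoid_derivative(o):
    """do/dnet = o * (1 - o), computed from the output alone."""
    return o * (1.0 - o)

# Finite-difference check of the identity at net = 0.3
net, eps = 0.3, 1e-6
numeric = (sigmoid(net + eps) - sigmoid(net - eps)) / (2 * eps)
print(numeric, sigmoid_derivative(sigmoid(net)))  # both ~0.2445
```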

  7. Training algorithm • Initialize weights to random values • For input x = <xn, xn−1, …, x0>, modify weights as follows, where target output = t and observed output = o • Iterate until E < ε (a chosen error threshold)

  8. Calculation of ∆wi • ∆wi = −η ∂E/∂wi, where η is the learning rate • With E = ½(t − o)², o = 1/(1 + e^(−net)) and net = Σi wixi, the chain rule gives ∂E/∂wi = −(t − o) o(1 − o) xi • Hence ∆wi = η (t − o) o(1 − o) xi
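
Slides 7 and 8 together specify a complete training procedure for a single sigmoid neuron. Below is a runnable sketch, with the AND function as a stand-in task; the learning rate, stopping threshold and iteration cap are illustrative choices, not values from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in task: AND, with a constant bias input x0 = 1
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])

w = rng.uniform(-0.5, 0.5, size=3)   # 1. initialize weights to random values
eta, threshold = 0.5, 0.05           # illustrative learning rate / stopping threshold

for epoch in range(50_000):
    o = 1.0 / (1.0 + np.exp(-(X @ w)))   # observed outputs for all patterns
    E = 0.5 * np.sum((t - o) ** 2)
    if E < threshold:                    # 3. iterate until E < threshold
        break
    for x_j, t_j, o_j in zip(X, t, o):   # 2. modify weights pattern by pattern
        w += eta * (t_j - o_j) * o_j * (1 - o_j) * x_j   # ∆wi = η(t−o)o(1−o)xi
print(epoch, E, w)
```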

  9. Observations • Does the training technique support our intuition? • The larger xi is, the larger is ∆wi, since ∆wi = η(t − o)o(1 − o)xi is proportional to xi • The error burden is thus borne by the weight values corresponding to large input values

  10. Backpropagation on feedforward network

  11. Backpropagation algorithm • Fully connected feed-forward network • Pure FF network (no connections jumping over layers) • [Figure: a layered network with an input layer of n i/p neurons, hidden layers, and an output layer of m o/p neurons; wji is the weight on the connection from neuron i to neuron j in the next layer]
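
A sketch of the forward pass through such a network (layer sizes and all names are illustrative, and bias terms are omitted for brevity). Because each weight matrix connects a layer only to the next one, the "no jumping of connections" restriction is built into the loop:

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def forward(x, weights):
    """Propagate x through successive fully connected layers.
    weights[l] has shape (#neurons in layer l+1, #neurons in layer l),
    so activations only ever flow to the immediately following layer."""
    activations = [x]
    for W in weights:
        activations.append(sigmoid(W @ activations[-1]))
    return activations   # activations[-1] is the output layer

rng = np.random.default_rng(0)
# n = 2 input neurons, one hidden layer of 3, m = 1 output neuron
weights = [rng.uniform(-1, 1, (3, 2)), rng.uniform(-1, 1, (1, 3))]
print(forward(np.array([1.0, 0.0]), weights)[-1])
```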

  12. Gradient Descent Equations • ∆wji = −η ∂E/∂wji (η = learning rate) • By the chain rule, ∂E/∂wji = (∂E/∂netj)(∂netj/∂wji), where netj = Σi wji oi is the input activation of the jth neuron • Writing δj = −∂E/∂netj, this becomes ∂E/∂wji = −δj oi • Hence ∆wji = η δj oi

  13. Backpropagation – for outermost layer • Here E = ½ Σj (tj − oj)² over the m output neurons, so −∂E/∂netj = (tj − oj) oj (1 − oj) • That is, δj = (tj − oj) oj (1 − oj) • Hence ∆wji = η (tj − oj) oj (1 − oj) oi

  14. Backpropagation for hidden layers • A hidden neuron j has no target output, so its δ must come from the layer above: δk is propagated backwards to find the value of δj • [Figure: a layered network with neuron k in the layer above neuron j and neuron i in the layer below, between the input layer (n i/p neurons) and the output layer (m o/p neurons)]

  15. Backpropagation – for hidden layers • δj = oj (1 − oj) Σk δk wkj, where k runs over the neurons in the layer immediately above j • Hence ∆wji = η oj (1 − oj) (Σk δk wkj) oi
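
In code this backward propagation of δ is one matrix-vector product followed by the sigmoid-derivative factor. A sketch with assumed names (`delta_k` for the deltas of the layer above, `W_kj` for the weights from layer j into layer k, `o_j` for the hidden outputs):

```python
import numpy as np

def hidden_delta(o_j, W_kj, delta_k):
    """δj = oj(1 − oj) Σk δk wkj: the deltas of the layer above, pulled
    back through the connecting weights, times the sigmoid derivative."""
    return o_j * (1.0 - o_j) * (W_kj.T @ delta_k)
```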

  16. General Backpropagation Rule • General weight updating rule: ∆wji = η δj oi • where δj = (tj − oj) oj (1 − oj) for the outermost layer • and δj = oj (1 − oj) Σk δk wkj for hidden layers
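
Combining the two δ rules with the update ∆wji = η δj oi gives the whole algorithm. Below is a minimal two-layer sketch in NumPy; all names are illustrative, and a bias can be supplied as an always-on input component:

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def train_step(x, t, W1, W2, eta):
    """One pattern of backpropagation on a 2-layer sigmoid network.
    W1: (hidden x inputs), W2: (outputs x hidden). Updates in place."""
    # Forward pass
    h = sigmoid(W1 @ x)                       # hidden outputs
    o = sigmoid(W2 @ h)                       # network outputs
    # Backward pass
    delta_o = (t - o) * o * (1 - o)           # δj for the outermost layer
    delta_h = h * (1 - h) * (W2.T @ delta_o)  # δj for the hidden layer
    # General rule ∆wji = η δj oi, applied to whole layers as outer products
    W2 += eta * np.outer(delta_o, h)
    W1 += eta * np.outer(delta_h, x)
    return 0.5 * np.sum((t - o) ** 2)
```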

  17. How does it work? • Input propagation forward and error propagation backward (e.g. XOR) • [Figure: a two-layer threshold network computing XOR: an output neuron with threshold θ = 0.5 and weights w1 = w2 = 1 combines two hidden neurons, labelled x1x̄2 and x̄1x2, each with threshold 1.5 and with weights 1 and −1 on the connections from inputs x1 and x2]
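
Using the train_step sketch above on slide 17's XOR example; the architecture (three hidden neurons plus a bias input), learning rate and epoch cap are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# XOR patterns, with a constant 1 appended as a bias input
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.uniform(-1, 1, (3, 3))   # 3 hidden neurons, inputs (x1, x2, bias)
W2 = rng.uniform(-1, 1, (1, 3))   # 1 output neuron

for epoch in range(20_000):
    E = sum(train_step(x, t, W1, W2, eta=0.5) for x, t in zip(X, T))
    if E < 0.01:
        break
# XOR has known local minima: with an unlucky initialization training
# can stall, in which case re-seed and retrain.
print(epoch, E)
```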
