
  1. Artificial Neural Network (Back-Propagation Neural Network). Yusuf Hendrawan, STP., M.App.Life Sc., Ph.D

  2. Neurons: biological vs. artificial. http://research.yale.edu/ysm/images/78.2/articles-neural-neuron.jpg http://faculty.washington.edu/chudler/color/pic1an.gif

  3. A typical AI agent

  4. Neural Network Layers • Each layer receives its inputs from the previous layer and forwards its outputs to the next layer. http://smig.usgs.gov/SMIG/features_0902/tualatin_ann.fig3.gif

  5. Multilayer feed-forward network. It contains one or more hidden layers (hidden neurons). "Hidden" refers to the part of the neural network that is not seen directly from either the input or the output of the network. The function of a hidden neuron is to intervene between the input and the output. By adding one or more hidden layers, the network is able to extract higher-order statistics from its input.
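As a minimal sketch of such a feed-forward pass, here is a 2-4-1 network in Python. The layer sizes match the worked example later in these slides, but the weights here are random placeholders, not the slides' values, and the sigmoid activation is an assumption:

```python
import numpy as np

def sigmoid(z):
    # Logistic activation function (assumed; the slides do not name one here)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
V = rng.uniform(size=(2, 4))   # input -> hidden weights (illustrative values)
W = rng.uniform(size=(4, 1))   # hidden -> output weights (illustrative values)

x = np.array([0.3, 0.4])       # input vector
z = sigmoid(x @ V)             # hidden-layer activations
y = sigmoid(z @ W)             # network output
```

Each `@` product is exactly the "receives its inputs from the previous layer" step: the hidden layer sees only `x`, and the output layer sees only `z`.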

  6. Back-Propagation Algorithm (Neural Network Learning), [Russell, Norvig] Fig. 20.25, p. 746:

function BACK-PROP-LEARNING(examples, network) returns a neural network
   inputs: examples, a set of examples, each with input vector x and output vector y
           network, a multilayer network with L layers, weights W[j,i], activation function g
   repeat
      for each e in examples do
         for each node j in the input layer do a[j] ← x[j](e)
         for l = 2 to L do
            in[i] ← Σj W[j,i] a[j]
            a[i] ← g(in[i])
         for each node i in the output layer do
            Δ[i] ← g′(in[i]) × (y[i](e) − a[i])
         for l = L − 1 to 1 do
            for each node j in layer l do
               Δ[j] ← g′(in[j]) Σi W[j,i] Δ[i]
            for each node i in layer l + 1 do
               W[j,i] ← W[j,i] + α × a[j] × Δ[i]
   until some stopping criterion is satisfied
   return NEURAL-NET-HYPOTHESIS(network)
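A runnable one-hidden-layer specialisation of this pseudocode is sketched below. The layer sizes, learning rate, epoch count, random initialisation, and the XOR training set are all illustrative choices, not from the slides:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def back_prop_learning(examples, n_hidden=4, alpha=0.5, epochs=5000, seed=1):
    """One-hidden-layer BACK-PROP-LEARNING with sigmoid units."""
    rng = np.random.default_rng(seed)
    n_in = len(examples[0][0])
    n_out = len(examples[0][1])
    V = rng.normal(scale=0.5, size=(n_in, n_hidden))   # input -> hidden weights
    W = rng.normal(scale=0.5, size=(n_hidden, n_out))  # hidden -> output weights
    for _ in range(epochs):                  # "repeat ... until stopping criterion"
        for x, y in examples:
            x, y = np.asarray(x, float), np.asarray(y, float)
            z = sigmoid(x @ V)               # forward pass to the hidden layer
            out = sigmoid(z @ W)             # forward pass to the output layer
            d_out = out * (1 - out) * (y - out)   # Δi = g'(in_i)(y_i - a_i)
            d_hid = z * (1 - z) * (W @ d_out)     # Δj = g'(in_j) Σi W[j,i] Δi
            W += alpha * np.outer(z, d_out)       # W[j,i] ← W[j,i] + α a_j Δi
            V += alpha * np.outer(x, d_hid)
    return V, W

xor = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
V, W = back_prop_learning(xor)
```

After training, `sigmoid(sigmoid(x @ V) @ W)` should approach the XOR targets, though how closely depends on the random initialisation; there is no convergence guarantee.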

  7. Back-Propagation Illustration: ARTIFICIAL NEURAL NETWORKS, Colin Fahey's Guide (Book CD)

  8. Network diagram: inputs X1, X2; hidden units Z1, Z2, Z3, Z4; output Y; hidden-layer biases Vo; output bias Wo.

  9. Training data: input (X) and output/target (T).

  10. Initial weights from input to hidden (V) and biases to hidden (Vo); initial weights from hidden to output (W) and bias to output (Wo).

  11. Computing Zin and Z from the input layer to the hidden layer:
Zin(1) = (X1 * V11) + (X2 * V21) = (0.3 * 0.75) + (0.4 * 0.35) = 0.302
Zin(2) = (X1 * V12) + (X2 * V22) = (0.3 * 0.54) + (0.4 * 0.64) = 0.418
Zin(3) = (X1 * V13) + (X2 * V23) = (0.3 * 0.44) + (0.4 * 0.05) = 0.152
Zin(4) = (X1 * V14) + (X2 * V24) = (0.3 * 0.32) + (0.4 * 0.81) = 0.42
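The slides never state the activation function explicitly, but the hidden values used on the next slide (Z ≈ 0.575, 0.603, 0.538, 0.603) are exactly the logistic sigmoid of these Zin values, so a sigmoid is evidently assumed. A quick check:

```python
import math

def sigmoid(v):
    # Logistic sigmoid, inferred from the Z values the slides use downstream
    return 1.0 / (1.0 + math.exp(-v))

z_in = [0.302, 0.418, 0.152, 0.42]   # Zin values from this slide
z = [round(sigmoid(v), 3) for v in z_in]
print(z)   # [0.575, 0.603, 0.538, 0.603]
```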

  12. Computing Yin and Y from the hidden layer to the output layer:
Yin = (Z(1) * W1) + (Z(2) * W2) + (Z(3) * W3) + (Z(4) * W4)
    = (0.57 * 0.04) + (0.603 * 0.95) + (0.538 * 0.33) + (0.603 * 0.17) = 0.876
Y = 0.706
Computing dev between Y and the actual target:
dev = (T - Y) * Y * (1 - Y) = (0.1 - 0.706) * 0.706 * (1 - 0.706) = -0.126
Computing the error:
error = T - Y = -0.606
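This slide's numbers can be reproduced directly in code; Z(1) is taken as 0.57 here because that is the value the slide multiplies by W1:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

Z = [0.57, 0.603, 0.538, 0.603]   # hidden activations used on this slide
W = [0.04, 0.95, 0.33, 0.17]      # hidden -> output weights
T = 0.1                           # target output

y_in = sum(z * w for z, w in zip(Z, W))
Y = sigmoid(y_in)
dev = (T - Y) * Y * (1 - Y)       # delta of a sigmoid output unit
err = T - Y
print(round(y_in, 3), round(Y, 3), round(dev, 3), round(err, 3))
# 0.876 0.706 -0.126 -0.606
```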

  13. Back-Propagation. Computing din from the output layer back to the hidden layer:
din(1) = dev * W1 = -0.126 * 0.04 = -0.00504
din(2) = dev * W2 = -0.126 * 0.95 = -0.1197
din(3) = dev * W3 = -0.126 * 0.33 = -0.04158
din(4) = dev * W4 = -0.126 * 0.17 = -0.02142
Computing d:
d(1) = din(1) * Z(1) * (1 - Z(1)) = -0.00504 * 0.575 * (1 - 0.575) = -0.00123
d(2) = din(2) * Z(2) * (1 - Z(2)) = -0.1197 * 0.603 * (1 - 0.603) = -0.02865
d(3) = din(3) * Z(3) * (1 - Z(3)) = -0.04158 * 0.538 * (1 - 0.538) = -0.01033
d(4) = din(4) * Z(4) * (1 - Z(4)) = -0.02142 * 0.603 * (1 - 0.603) = -0.00512
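The same two steps in code, with all starting values copied from the slides:

```python
dev = -0.126
W = [0.04, 0.95, 0.33, 0.17]       # hidden -> output weights
Z = [0.575, 0.603, 0.538, 0.603]   # hidden activations

d_in = [dev * w for w in W]                        # error pushed back through W
d = [di * z * (1 - z) for di, z in zip(d_in, Z)]   # times the sigmoid derivative
print([round(v, 5) for v in d_in])
# [-0.00504, -0.1197, -0.04158, -0.02142]
```

The resulting d values agree with the slide's -0.00123, -0.02865, -0.01033, -0.00512 up to rounding in the last digit.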

  14. Correcting the weights (W) and biases (Wo):
W1 = W1 + (α * dev * Z(1)) + (m * Wo(1)) = 0.04 + (0.1 * -0.126 * 0.575) + (0.9 * 0.66) = 0.627
W2 = W2 + (α * dev * Z(2)) + (m * Wo(2)) = 0.95 + (0.1 * -0.126 * 0.603) + (0.9 * 0.56) = 1.45
W3 = W3 + (α * dev * Z(3)) + (m * Wo(3)) = 0.33 + (0.1 * -0.126 * 0.538) + (0.9 * 0.73) = 0.98
W4 = W4 + (α * dev * Z(4)) + (m * Wo(4)) = 0.17 + (0.1 * -0.126 * 0.603) + (0.9 * 0.01) = 0.171
Wo1 = (α * Z(1)) + (m * Wo(1)) = (0.1 * 0.575) + (0.9 * 0.66) = 0.65
Wo2 = (α * Z(2)) + (m * Wo(2)) = (0.1 * 0.603) + (0.9 * 0.56) = 0.564
Wo3 = (α * Z(3)) + (m * Wo(3)) = (0.1 * 0.538) + (0.9 * 0.73) = 0.71
Wo4 = (α * Z(4)) + (m * Wo(4)) = (0.1 * 0.603) + (0.9 * 0.01) = 0.0693
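This update step in code; the bias rule is transcribed exactly as the slide gives it (α·Z + m·Wo, with no error term), and α = 0.1, m = 0.9 are the slide's learning rate and momentum:

```python
alpha, m = 0.1, 0.9                 # learning rate and momentum (slide values)
dev = -0.126                        # output delta from slide 12
Z  = [0.575, 0.603, 0.538, 0.603]   # hidden activations
W  = [0.04, 0.95, 0.33, 0.17]       # current hidden -> output weights
Wo = [0.66, 0.56, 0.73, 0.01]       # current output-side biases

W_new  = [w + alpha * dev * z + m * wo for w, z, wo in zip(W, Z, Wo)]
Wo_new = [alpha * z + m * wo for z, wo in zip(Z, Wo)]
```

`W_new` comes out to roughly 0.627, 1.446, 0.980, 0.171 and `Wo_new` to roughly 0.652, 0.564, 0.711, 0.069, matching the slide's values up to rounding.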

  15. Correcting the weights (V):
V11 = V11 + (α * d(1) * X1) + (m * Vo(11)) = 0.75 + (0.1 * -0.00123 * 0.3) + (0.9 * 0.07) = 0.8129
V12 = V12 + (α * d(2) * X1) + (m * Vo(12)) = 0.54 + (0.1 * -0.02865 * 0.3) + (0.9 * 0.91) = 1.3581
V13 = V13 + (α * d(3) * X1) + (m * Vo(13)) = 0.44 + (0.1 * -0.01033 * 0.3) + (0.9 * 0.45) = 0.8446
V14 = V14 + (α * d(4) * X1) + (m * Vo(14)) = 0.32 + (0.1 * -0.00512 * 0.3) + (0.9 * 0.25) = 0.5448
V21 = V21 + (α * d(1) * X2) + (m * Vo(21)) = 0.35 + (0.1 * -0.00123 * 0.4) + (0.9 * 0.12) = 0.4579
V22 = V22 + (α * d(2) * X2) + (m * Vo(22)) = 0.64 + (0.1 * -0.02865 * 0.4) + (0.9 * 0.23) = 0.8458
V23 = V23 + (α * d(3) * X2) + (m * Vo(23)) = 0.05 + (0.1 * -0.01033 * 0.4) + (0.9 * 0.85) = 0.8145
V24 = V24 + (α * d(4) * X2) + (m * Vo(24)) = 0.81 + (0.1 * -0.00512 * 0.4) + (0.9 * 0.09) = 0.8907

  16. Correcting the biases (Vo):
Vo11 = (α * d(1) * X1) + (m * Vo11) = (0.1 * -0.00123 * 0.3) + (0.9 * 0.07) = 0.0629
Vo12 = (α * d(2) * X1) + (m * Vo12) = (0.1 * -0.02865 * 0.3) + (0.9 * 0.91) = 0.8181
Vo13 = (α * d(3) * X1) + (m * Vo13) = (0.1 * -0.01033 * 0.3) + (0.9 * 0.45) = 0.4046
Vo14 = (α * d(4) * X1) + (m * Vo14) = (0.1 * -0.00512 * 0.3) + (0.9 * 0.25) = 0.2248
Vo21 = (α * d(1) * X2) + (m * Vo21) = (0.1 * -0.00123 * 0.4) + (0.9 * 0.12) = 0.1079
Vo22 = (α * d(2) * X2) + (m * Vo22) = (0.1 * -0.02865 * 0.4) + (0.9 * 0.23) = 0.2058
Vo23 = (α * d(3) * X2) + (m * Vo23) = (0.1 * -0.01033 * 0.4) + (0.9 * 0.85) = 0.7645
Vo24 = (α * d(4) * X2) + (m * Vo24) = (0.1 * -0.00512 * 0.4) + (0.9 * 0.09) = 0.0807
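Slides 15 and 16 together form one correction step for the input-side weights and biases; both can be reproduced in a few lines, with every starting value taken from the slides (the d values come from slide 13):

```python
alpha, m = 0.1, 0.9                               # learning rate and momentum
X = [0.3, 0.4]                                    # inputs X1, X2
d = [-0.00123, -0.02865, -0.01033, -0.00512]      # hidden deltas from slide 13
V  = [[0.75, 0.54, 0.44, 0.32],                   # V[i][j]: input i+1 -> hidden j+1
      [0.35, 0.64, 0.05, 0.81]]
Vo = [[0.07, 0.91, 0.45, 0.25],                   # biases, indexed the same way
      [0.12, 0.23, 0.85, 0.09]]

# Weight correction (slide 15) and bias correction (slide 16)
V_new  = [[V[i][j] + alpha * d[j] * X[i] + m * Vo[i][j] for j in range(4)]
          for i in range(2)]
Vo_new = [[alpha * d[j] * X[i] + m * Vo[i][j] for j in range(4)]
          for i in range(2)]
```

The results agree with the slides' values (e.g. V11 ≈ 0.8130, Vo12 ≈ 0.8181) up to rounding in the last digit.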