
Praktikum Metkuan: Backpropagation Artificial Neural Networks (Jaringan Syaraf Tiruan Propagasi Balik)



Presentation Transcript


  1. Praktikum Metkuan: Backpropagation Artificial Neural Networks (Jaringan Syaraf Tiruan Propagasi Balik)

  2. Artificial Neural Networks (JST) • Single-layer perceptron • Multilayer perceptron (MLP) • Backpropagation network

  3. Multilayer architecture (diagram): an input layer (X1, X2 plus a bias unit), a hidden layer (Z1, Z2 plus a bias unit), and an output layer (Y1). Weights v01, v11, v12, v21, v22, v02 connect the input layer to the hidden layer; weights w01, w11, w21 connect the hidden layer to the output layer.

  4. Perceptron review: AND (weights w1 = 1, w2 = 1; Y = 1 if net >= 2, else 0)

  X1  X2  net              Y
  1   1   1*1 + 1*1 = 2    1
  1   0   1*1 + 0*1 = 1    0
  0   1   0*1 + 1*1 = 1    0
  0   0   0*1 + 0*1 = 0    0

  The perceptron SUCCEEDS in recognizing the pattern.

  5. Perceptron review: OR (weights w1 = 1, w2 = 1; Y = 1 if net >= 1, else 0)

  X1  X2  net              Y
  1   1   1*1 + 1*1 = 2    1
  1   0   1*1 + 0*1 = 1    1
  0   1   0*1 + 1*1 = 1    1
  0   0   0*1 + 0*1 = 0    0

  The perceptron SUCCEEDS in recognizing the pattern.

  6. Perceptron review: X1 AND NOT(X2) (weights w1 = 2, w2 = -1; Y = 1 if net >= 2, else 0)

  X1  X2  net                   Y
  1   1   1*2 + 1*(-1) = 1      0
  1   0   1*2 + 0*(-1) = 2      1
  0   1   0*2 + 1*(-1) = -1     0
  0   0   0*2 + 0*(-1) = 0      0

  The perceptron SUCCEEDS in recognizing the pattern.
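The three gates above all use the same forward pass, only with different weights and thresholds. A minimal Python sketch (the function name `perceptron` is our own) that reproduces the three truth tables:

```python
# Step-activation perceptron: y = 1 if w1*x1 + w2*x2 >= threshold, else 0.
# Weights and thresholds are taken from slides 4-6.
def perceptron(x1, x2, w1, w2, threshold):
    net = w1 * x1 + w2 * x2
    return 1 if net >= threshold else 0

inputs = [(1, 1), (1, 0), (0, 1), (0, 0)]

AND     = [perceptron(x1, x2, 1, 1, 2)  for x1, x2 in inputs]  # [1, 0, 0, 0]
OR      = [perceptron(x1, x2, 1, 1, 1)  for x1, x2 in inputs]  # [1, 1, 1, 0]
AND_NOT = [perceptron(x1, x2, 2, -1, 2) for x1, x2 in inputs]  # [0, 1, 0, 0]
print(AND, OR, AND_NOT)
```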

  7. The "XOR" problem: F(1,1) = 0, F(1,0) = 1, F(0,1) = 1, F(0,0) = 0

  X1  X2  Y
  1   1   0
  1   0   1
  0   1   1
  0   0   0

  A single-layer perceptron FAILS to recognize this pattern!

  8. Solution • XOR = (x1 ^ ~x2) V (~x1 ^ x2) • It turns out a hidden layer is needed -> multilayer perceptron. Diagram weights: x1->Z1 = 2, x2->Z1 = -1, x1->Z2 = -1, x2->Z2 = 2, Z1->Y = 1, Z2->Y = 1.
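The slide gives the weights but not the thresholds; a sketch of the two-layer network, assuming a threshold of 2 for the hidden units (matching the AND-NOT gate of slide 6) and 1 for the output unit (matching OR):

```python
def step(net, threshold):
    # Step activation, as in the perceptron slides.
    return 1 if net >= threshold else 0

def xor_mlp(x1, x2):
    # Hidden layer (thresholds of 2 assumed):
    z1 = step(2 * x1 - 1 * x2, 2)   # x1 AND NOT x2
    z2 = step(-1 * x1 + 2 * x2, 2)  # NOT x1 AND x2
    # Output layer ORs the two hidden units (threshold 1):
    return step(1 * z1 + 1 * z2, 1)

print([xor_mlp(x1, x2) for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]])
# [0, 1, 1, 0] -- the hidden layer solves XOR
```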

  9. Table

  10. Backpropagation networks • Three stages: 1. feedforward of the input training pattern 2. computation and backpropagation of the error 3. adjustment of the weights. These three stages are needed to obtain final weights that map the inputs to the correct outputs.
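The three stages can be sketched for a single training step. This is a minimal illustration, not the slides' own code: a 2-2-1 network with binary-sigmoid units, biases omitted, and the learning rate (0.5) and random weights chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

x = np.array([1.0, 0.0])      # one training pattern
t = np.array([1.0])           # its target output
V = rng.normal(size=(2, 2))   # input -> hidden weights
W = rng.normal(size=(1, 2))   # hidden -> output weights
lr = 0.5                      # assumed learning rate

# Stage 1: feedforward
z = sigmoid(V @ x)            # hidden activations
y = sigmoid(W @ z)            # network output

# Stage 2: compute and backpropagate the error
delta_y = (t - y) * y * (1 - y)          # output-layer error term
delta_z = (W.T @ delta_y) * z * (1 - z)  # hidden-layer error term

# Stage 3: adjust the weights
W = W + lr * np.outer(delta_y, z)
V = V + lr * np.outer(delta_z, x)

y_new = sigmoid(W @ sigmoid(V @ x))
print(abs(t - y), "->", abs(t - y_new))  # the error shrinks after one step
```

Repeating the three stages over all training patterns until the error is small enough is what MATLAB's `train` does in the later slides.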

  11. Activation functions • Binary step function (hard limit) • Binary step function with threshold

  12. Activation functions • Bipolar function • Bipolar function with threshold

  13. Activation functions • Linear (identity) function • Binary sigmoid function
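The slides list the function names without their formulas, so here is a Python sketch of the usual definitions; the convention that the output is 1 exactly at the threshold is an assumption.

```python
import numpy as np

def hard_limit(x):                 # binary step: 1 if x >= 0, else 0
    return np.where(x >= 0, 1, 0)

def binary_threshold(x, theta):    # binary step with threshold theta
    return np.where(x >= theta, 1, 0)

def bipolar(x):                    # 1 if x >= 0, else -1
    return np.where(x >= 0, 1, -1)

def bipolar_threshold(x, theta):   # bipolar with threshold theta
    return np.where(x >= theta, 1, -1)

def linear(x):                     # identity (purelin in MATLAB)
    return x

def binary_sigmoid(x):             # logistic sigmoid, range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))
```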

  14. Example case: suppose a backpropagation network is built to recognize the digits 1-9, with 5 neurons in the hidden layer using the tansig (tan sigmoid) function and 4 neurons in the output layer using the purelin (pure linear) function.

  15. function NN() • % 15 rows because the input grid is 3x5 • rangeinput=[0 1; 0 1; 0 1; 0 1; 0 1; 0 1; 0 1; 0 1; 0 1; 0 1; 0 1; 0 1; 0 1; 0 1; 0 1 ]; • % create a feedforward network • net=newff(rangeinput,[5 4],{'tansig' 'purelin'}); • % 5 hidden neurons with tansig, 4 output neurons with purelin • % initialize the weights • net=init(net);

  16. angka1=[0; 1; 0; • 1; 1; 0; • 0; 1; 0; • 0; 1; 0; • 1; 1; 1]; • angka2=[1; 1; 1; • 0; 0; 1; • 0; 1; 1; • 1; 1; 0; • 1; 1; 1]; • angka3=[1; 1; 1; • 0; 0; 1; • 1; 1; 1; • 0; 0; 1; • 1; 1; 1]; • angka4=[1; 0; 1; • 1; 0; 1; • 1; 1; 1; • 0; 0; 1; • 0; 0; 1];

  17. angka5=[1; 1; 1; 1; 0; 0; 1; 1; 1; 0; 0; 1; 1; 1; 1]; • angka6=[1; 1; 1; 1; 0; 0; 1; 1; 1; 1; 0; 1; 1; 1; 1]; • angka7=[1; 1; 1; 0; 0; 1; 0; 1; 1; 1; 1; 0; 1; 0; 0]; • angka8=[1; 1; 1; 1; 0; 1; 1; 1; 1; 0; 1; 0; 1; 1; 1]; • angka9=[1; 1; 1; 1; 0; 1; 1; 1; 1; 0; 0; 1; 1; 1; 1]; • p=[angka1 angka2 angka3 angka4 angka5 angka6 angka7 angka8 angka9]; • t=[0 0 0 1; • 0 0 1 0; • 0 0 1 1; • 0 1 0 0; • 0 1 0 1; • 0 1 1 0; • 0 1 1 1; • 1 0 0 0; • 1 0 0 1]; % 9 rows because there are 9 training samples; each row is the digit's 4-bit binary code
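The 4-neuron output layer works because each target row is simply the digit written in binary, which a few lines of Python can confirm:

```python
# The target matrix from slide 17: one row per training digit.
t = [
    [0, 0, 0, 1],
    [0, 0, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 0, 0, 1],
]
# Read each row as a 4-bit binary number.
codes = [int("".join(map(str, row)), 2) for row in t]
print(codes)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```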

  18. t=t'; • % training • net= train(net, p, t); • % testing • %a=sim(net, datatesting) • % show all weights from the input layer to layer 1 • disp('net.IW{1,1}-->'); • net.IW{1,1} • % show all weights from layer 1 to layer 2 • disp('net.LW{2,1}-->'); • net.LW{2,1} • disp('net.b{1}-->'); • net.b{1} • disp('net.b{2}-->'); • net.b{2} • datatesting=[0; 1; 0; 1; 1; 0; 0; 1; 0; 0; 1; 0; 1; 1; 1]; • a=sim(net,datatesting)
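For readers without MATLAB, a sketch of the computation `sim(net, datatesting)` performs for this architecture (tansig hidden layer, purelin output layer). The weight values below are random stand-ins; in MATLAB they would be the trained net.IW{1,1}, net.LW{2,1}, net.b{1}, and net.b{2}.

```python
import numpy as np

rng = np.random.default_rng(1)
IW = rng.normal(size=(5, 15))  # input -> hidden weights (5 hidden, 15 inputs)
LW = rng.normal(size=(4, 5))   # hidden -> output weights (4 outputs)
b1 = rng.normal(size=5)        # hidden-layer biases
b2 = rng.normal(size=4)        # output-layer biases

# The 3x5 glyph of the digit "1", flattened column by column as on slide 16.
x = np.array([0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1], dtype=float)

# tansig is tanh; purelin is the identity, so the output layer is affine.
a = LW @ np.tanh(IW @ x + b1) + b2
print(a.shape)  # (4,) -- one value per output neuron
```

With trained weights, rounding `a` would recover the 4-bit binary code of the digit.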

  19. Exercise • Input training data

  20. Input testing data

  21. Neurons • Input layer : 15 neurons • Hidden layer : 5 neurons • Output layer : 4 neurons • Hidden layer activation : purelin (linear) • Output layer activation : tansig (sigmoid)
