
Artificial Intelligence


Presentation Transcript


  1. Artificial Intelligence. Exercises: Neural Networks

  2. Matlab: Neural Network Toolbox
>> neural
>> nntool

  3. Neuron Model - Simple Neuron
n = -5:0.1:5; plot(n,hardlim(n));
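
A minimal numeric sketch of the simple neuron itself; the weight, bias, and input values below are assumptions chosen only for illustration:
w = 2; b = -1;   % assumed scalar weight and bias
p = 1.5;         % assumed scalar input
n = w*p + b;     % net input, here n = 2
a = hardlim(n)   % hardlim outputs 1 for n >= 0, so a = 1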

  4. Multiple-Input Neuron
MATLAB code: n = W*p + b
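
As a sketch, the same formula evaluated for a neuron with R = 3 inputs (all numeric values are assumptions):
W = [1 -2 0.5];   % 1 x R weight row vector
p = [2; 1; -1];   % R x 1 input vector
b = 0.5;          % bias
n = W*p + b       % net input: 2 - 2 - 0.5 + 0.5 = 0
a = hardlim(n)    % hardlim(0) = 1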

  5. Multiple-Input Neuron - Abbreviated Notation

  6. Layer of Neurons

  7. Layer of Neurons - Abbreviated Notation
For a layer of S neurons with R inputs, a = f(W*p + b), where p is an R x 1 input vector, b and a are S x 1 vectors, and W is the S x R weight matrix

W = [ w(1,1) w(1,2) ... w(1,R)
      w(2,1) w(2,2) ... w(2,R)
      ...
      w(S,1) w(S,2) ... w(S,R) ]

with w(i,j) the weight from input j to neuron i.
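
A short sketch of one layer evaluated in MATLAB, assuming S = 3 neurons, R = 2 inputs, and a tansig transfer function (the numeric values are made up):
W = [1 2; -1 0.5; 0 1];   % S x R weight matrix, row i belongs to neuron i
b = [0; 0.5; -1];         % S x 1 bias vector
p = [1; -1];              % R x 1 input vector
a = tansig(W*p + b)       % S x 1 layer output, one element per neuron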

  8. Multiple Layers of Neurons

  9. Multiple Layers of Neurons - Abbreviated Notation
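
In this notation each layer's output becomes the next layer's input. A minimal two-layer forward pass, matching the tansig/purelin architecture used in the examples that follow (weights and biases are assumed values):
p  = [1; 2];                         % network input
W1 = [0.1 0.2; -0.3 0.4; 0.5 -0.6];  % layer 1: 3 neurons, 2 inputs
b1 = [0; 0; 0];
W2 = [1 -1 0.5];                     % layer 2: 1 neuron
b2 = 0;
a1 = tansig(W1*p + b1);              % hidden-layer output
a2 = purelin(W2*a1 + b2)             % network output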

  10. Backpropagation - Creating a Network (newff)
newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
PR - R x 2 matrix of min and max values for R input elements
Si - size of i-th layer, for Nl layers
TFi - transfer function of i-th layer, default = 'tansig'
BTF - backpropagation network training function, default = 'traingdx'
BLF - backpropagation weight/bias learning function, default = 'learngdm'
PF - performance function, default = 'mse'

  11. Example 1. Two-layer network
net=newff([-1 2; 0 5],[3,1],{'tansig','purelin'},'traingd');
net.inputs{1}.range
net.layers{1}.size
net.layers{2}.size
net.layers{1}.transferFcn
net.layers{2}.transferFcn
net.b{1}
net.b{2}
net.IW{1,1}
net.LW{2,1}
net.trainFcn
gensim(net,-1)

  12. Backpropagation - Simulation (sim)
Incremental mode of simulation:
p = [1;2];
a = sim(net,p)
Batch mode of simulation:
p = [1 3 2;2 4 1];
a = sim(net,p)
>> t=0:0.01:10;
>> p1=sin(2*t);
>> p2=sin(3*t);
>> plot(t,p1,t,p2)
>> p=[p1;p2];
>> a = sim(net,p);
>> plot(t,a)

  13. Backpropagation - Simulation (gensim)
>> gensim(net,-1)

  14. Backpropagation - Training (train, traingd)
p = [-1 -1 2 2;0 5 0 5]; % inputs
t = [-1 -1 1 1]; % target
net=newff(minmax(p),[3,1],{'tansig','purelin'},'traingd');
net.trainParam.show = 50;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-3;
[net,tr]=train(net,p,t);
TRAINGD, Epoch 0/300, MSE 1.59423/0.001, Gradient 2.76799/1e-010
TRAINGD, Epoch 50/300, MSE 0.101785/0.001, Gradient 0.517769/1e-010
TRAINGD, Epoch 100/300, MSE 0.0310146/0.001, Gradient 0.263201/1e-010
TRAINGD, Epoch 150/300, MSE 0.010717/0.001, Gradient 0.145051/1e-010
TRAINGD, Epoch 200/300, MSE 0.00442531/0.001, Gradient 0.082443/1e-010
TRAINGD, Epoch 250/300, MSE 0.00229352/0.001, Gradient 0.05011/1e-010
TRAINGD, Epoch 300/300, MSE 0.00143589/0.001, Gradient 0.0339086/1e-010
TRAINGD, Maximum epoch reached, performance goal was not met.
>> plotperf(tr)
a = sim(net,p)
a = -0.9510 -1.0209 0.9720 1.0184

  15. Training Functions (net.trainFcn)
trainbfg - BFGS quasi-Newton backpropagation.
trainbr - Bayesian regularization.
traincgb - Powell-Beale conjugate gradient backpropagation.
traincgf - Fletcher-Powell conjugate gradient backpropagation.
traincgp - Polak-Ribiere conjugate gradient backpropagation.
traingd - Gradient descent backpropagation.
traingda - Gradient descent with adaptive lr backpropagation.
traingdm - Gradient descent with momentum backpropagation.
traingdx - Gradient descent with momentum and adaptive lr backpropagation.
trainlm - Levenberg-Marquardt backpropagation.
trainoss - One-step secant backpropagation.
trainrp - Resilient backpropagation (Rprop).
trainscg - Scaled conjugate gradient backpropagation.
trainb - Batch training with weight and bias learning rules.
trainc - Cyclical order incremental training with learning functions.
trainr - Random order incremental training with learning functions.
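
Any of these can be passed as the BTF argument of newff or assigned to net.trainFcn on an existing network. As a sketch, the training example above switched to Levenberg-Marquardt (trainlm):
p = [-1 -1 2 2;0 5 0 5];
t = [-1 -1 1 1];
net = newff(minmax(p),[3,1],{'tansig','purelin'},'trainlm');
% or, on an already created network: net.trainFcn = 'trainlm';
[net,tr] = train(net,p,t);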

  16. Faster Training - Variable Learning Rate (traingda, traingdx)
p = [-1 -1 2 2;0 5 0 5];
t = [-1 -1 1 1];
net=newff(minmax(p),[3,1],{'tansig','purelin'},'traingda');
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.lr_inc = 1.05;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-5;
[net,tr]=train(net,p,t);
a = sim(net,p)
a = -0.9962 -1.0026 0.9974 1.0031

  17. Faster Training - Resilient Backpropagation, Rprop (trainrp)
p = [-1 -1 2 2;0 5 0 5];
t = [-1 -1 1 1];
net=newff(minmax(p),[3,1],{'tansig','purelin'},'trainrp');
net.trainParam.show = 10;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-5;
[net,tr]=train(net,p,t);
TRAINRP, Epoch 0/300, MSE 0.426019/1e-005, Gradient 1.18783/1e-006
TRAINRP, Epoch 10/300, MSE 0.00610265/1e-005, Gradient 0.172704/1e-006
TRAINRP, Epoch 20/300, MSE 2.30232e-005/1e-005, Gradient 0.00614419/1e-006
TRAINRP, Epoch 23/300, MSE 5.73054e-006/1e-005, Gradient 0.00202166/1e-006
TRAINRP, Performance goal met.
a = sim(net,p)
a = -0.9962 -1.0029 0.9996 1.0000

  18. Task 1. Function approximation
Using a neural network, approximate the function f(x) = exp(-0.1*x^2)*sin(5*sin(3*x)) on the interval 0 < x < 5 with an accuracy of 2e-4.

  19. One possible solution
clear;
p = 0:0.01:5;
t = exp(-0.1*p.^2).*sin(5*sin(3*p));
net=newff(minmax(p),[30,1],{'tansig','purelin'},'traincgb');
net.trainParam.show = 100;
net.trainParam.epochs = 2000;
net.trainParam.goal = 2e-4;
[net,tr]=train(net,p,t);
a = sim(net,p);
figure(1), plot(p,t,p,a,':r','linewidth',1.5), legend('Function','Neural network')

  20. Comparison of the NN approximation with the given function

  21. Task 2. Approximation of a noisy signal
p = 0:0.01:5;
t0 = exp(-0.1*p.^2).*sin(5*sin(3*p)); % signal
t = t0 +0.05*randn(size(p)); % signal + noise
net=newff(minmax(p),[30,1],{'tansig','purelin'},'traincgf');
net.trainParam.show = 100;
net.trainParam.epochs = 2000;
net.trainParam.goal = 2e-4;
[net,tr]=train(net,p,t);
a = sim(net,p);
figure(1), plot(p,t0,p,t,':r','linewidth',1.5), legend('Signal','Signal + noise')
figure(2), plot(p,t,':r',p,a,'b','linewidth',1.5), legend('Signal + noise','Neural network')
figure(3), plot(p,t0,p,a,':r','linewidth',1.5), legend('Signal','Neural network')

  22. Signal with noise

  23. NN approximation of the signal with noise

  24. Comparison of the NN approximation with the noise-free signal

  25. Task 3. Approximation of a noisy signal (II)
• Using a neural network, approximate the assigned function (with noise) on the interval -4 < x < 4 with an accuracy of 1e-4.
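
No solution slide is included for Task 3, and the target formula (an image on the original slide) is not recoverable from the transcript. A skeleton following the pattern of Tasks 1 and 2 might look like this; f is a hypothetical stand-in that must be replaced by the assigned function, and the noise level is assumed to match Task 2:
f = @(x) sin(x);                % DUMMY stand-in, replace with the assigned formula
p = -4:0.01:4;                  % interval -4 < x < 4
t0 = f(p);                      % noise-free signal
t = t0 + 0.05*randn(size(p));   % noisy signal, noise level as in Task 2
net = newff(minmax(p),[30,1],{'tansig','purelin'},'trainlm');
net.trainParam.show = 100;
net.trainParam.epochs = 2000;
net.trainParam.goal = 1e-4;
[net,tr] = train(net,p,t);
a = sim(net,p);
plot(p,t0,p,a,':r'), legend('Signal','Neural network')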
