
Modelling Examples /Slovak version/ Multilayer Perceptron - Modelling

Štefan Kozák, Department of Automatic Control Systems, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology Bratislava, email: stefan.kozak@stuba.sk, www.kasr.elf.stuba.sk


Presentation Transcript


1. Štefan Kozák, Department of Automatic Control Systems, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology Bratislava, email: stefan.kozak@stuba.sk, www.kasr.elf.stuba.sk. Modelling Examples /Slovak version/ Multilayer Perceptron - Modelling

2. Selected methods and m-files for modelling linear and nonlinear processes - Matlab ANN
• Selected m-files needed for process modelling, in the order used for training and simulation:
• newff - creates a feed-forward ANN (backpropagation network)
• train - trains the ANN
• sim - simulates the network on the training data and on new data
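For orientation, a minimal sketch of how these three m-files chain together (the data and network size below are illustrative placeholders, not taken from the slides):

P = [0 1 2 3 4 5];                                   % illustrative input data
T = [0 1 4 9 16 25];                                 % illustrative targets
net = newff(minmax(P),[5 1],{'tansig' 'purelin'});   % create a feed-forward ANN
net = train(net,P,T);                                % train it (default: trainlm)
Y = sim(net,P);                                      % simulate on trained or new data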

3. Selected methods and m-files for modelling linear and nonlinear processes - Matlab ANN

NEWFF - Create a feed-forward backpropagation network.

Syntax
  net = newff
  net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)

Description
  NET = NEWFF creates a new NN network.
  NEWFF(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes
    PR  - Rx2 matrix of min and max values for R input elements.
    Si  - Size of ith layer, for Nl layers.
    TFi - Transfer function of ith layer, default = 'tansig'.
    BTF - Backprop network training function, default = 'trainlm'.
    BLF - Backprop weight/bias learning function, default = 'learngdm'.
    PF  - Performance function, default = 'mse'.
  and returns an N-layer feed-forward backprop network.
  The transfer functions TFi can be any differentiable transfer function such as TANSIG, LOGSIG, or PURELIN.

Algorithm
  Feed-forward networks consist of Nl layers using the DOTPROD weight function, NETSUM net input function, and the specified transfer functions. The first layer has weights coming from the input. Each subsequent layer has a weight coming from the previous layer. All layers have biases. The last layer is the network output. Each layer's weights and biases are initialized with INITNW. Adaption is done with TRAINS, which updates weights with the specified learning function. Training is done with the specified training function. Performance is measured according to the specified performance function.

See also NEWCF, NEWELM, SIM, INIT, ADAPT, TRAIN, TRAINS
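For illustration, a hedged example of a call that spells out all of the arguments listed above (the input range and layer sizes are placeholders, not values prescribed by the slides):

% one input in the range <0, 10>, 5 tansig hidden neurons, 1 purelin output,
% Levenberg-Marquardt training, gradient-descent-with-momentum learning, MSE performance
net = newff([0 10],[5 1],{'tansig' 'purelin'},'trainlm','learngdm','mse');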

4. The training function BTF can be any of the backprop training functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.

*WARNING*: TRAINLM is the default training function because it is very fast, but it requires a lot of memory to run. If you get an "out-of-memory" error when training, try one of the following:
(1) Slow TRAINLM training, but reduce its memory requirements, by setting NET.trainParam.mem_reduc to 2 or more. (See HELP TRAINLM.)
(2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
(3) Use TRAINRP, which is slower but more memory efficient than TRAINBFG.

The learning function BLF can be either of the backpropagation learning functions LEARNGD or LEARNGDM. The performance function can be any of the differentiable performance functions, such as MSE or MSEREG.
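A short sketch of the three remedies above, assuming the legacy Neural Network Toolbox property names (mem_reduc, trainFcn); P and T stand for the training data already defined elsewhere:

% option (1): keep TRAINLM but trade speed for lower memory use
net.trainParam.mem_reduc = 2;
% options (2)/(3): switch to a more memory-efficient training function
net.trainFcn = 'trainbfg';    % or 'trainrp'
net = train(net,P,T);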

5. TRAIN - Training of the ANN.

Syntax
  [net,tr,Y,E,Pf,Af] = train(NET,P,T,Pi,Ai,VV,TV)

TRAIN trains a network NET according to NET.trainFcn and NET.trainParam.
TRAIN(NET,P,T,Pi,Ai) takes
  NET - Network.
  P   - Network inputs.
  T   - Desired outputs (targets), default = zeros.
  Pi  - Initial input delay conditions, default = zeros.
  Ai  - Initial layer delay conditions, default = zeros.
  VV  - Structure of validation vectors, default = [].
  TV  - Structure of test vectors, default = [].
and returns
  NET - New network.
  TR  - Training record (epoch and perf).
  Y   - Network outputs.
  E   - Network errors.
  Pf  - Final input delay conditions.
  Af  - Final layer delay conditions.
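A hedged example of a call that also passes validation data, assuming the legacy convention that VV is a structure with fields P and T (Pval and Tval are placeholder variables, not defined in the slides):

VV.P = Pval;  VV.T = Tval;        % validation inputs and targets
[net,tr,Y,E] = train(net,P,T,[],[],VV);
plot(tr.epoch,tr.perf)            % training record: performance per epoch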

6. Example (creating an ANN - modelling)
• P - inputs, T (target) - desired values
• Input vector: P = [0 1 2 3 4 5 6 7 8 9 10];
• Target vector: T = [0 1 2 3 4 3 2 1 2 3 4];
• Task:
  - Network type: a three-layer network; the inputs are in the range <0, 10>.
  - The activation function in the hidden layer is of type TANSIG.
  - There is a single output with an activation function of type PURELIN.
  - Training method: TRAINLM (Levenberg-Marquardt)
  - Number of training epochs: 50

7. [Network diagram: the input values 0-10 enter a TANSIG hidden layer and a PURELIN output layer, each with its own bias; the network output Y is compared with the target T at a summing junction.]

8. Program implementation of the example in Matlab

P = [0 1 2 3 4 5 6 7 8 9 10];   % input
T = [0 1 2 3 4 3 2 1 2 3 4];    % desired output
net = newff([0 10],[5 1],{'tansig' 'purelin'})
pause
Y = sim(net,P)
plot(P,T,P,Y,'o')
pause
net.trainParam.epochs = 200;
net = train(net,P,T)
pause
Y = sim(net,P)
plot(P,T,P,Y,'o')
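As a small follow-up to the script (not part of the original slide), the quality of the fit shown on the next slides can be quantified with a sum of squared errors:

E = T - Y;        % errors of the trained network on the 11 training points
SSE = sum(E.^2)   % sum of squared errors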

9. First step of the solution - randomly generated weights, network output

Y = 0.8481 -0.0144 -0.4497 -1.5582 -1.8822 -1.8217 -1.7415 -1.7863 -1.9691 -2.0518 -2.2853

10. Solution after 200 epochs - error and network output

Y = 0.0000 1.0000 2.0000 3.0001 3.9586 3.1108 1.8584 1.1467 1.8581 3.1124 3.9585

11. T = 0 1 2 3 4 3 2 1 2 3 4
Y = 0.0004 0.9991 1.9999 3.0014 3.9992 3.0000 2.0000 1.0005 1.9981 3.0021 3.9992

12. Example 2: Hair dryer (arx0.m)

% dryer model
% This demo has been adapted for teaching purposes.
% The data are measured on an electrically heated system (a hair dryer).
% The task is to identify a mathematical model of the ARX type and compare it with an NN model.
% The output is the temperature T of the heated air; the input is the magnitude of the voltage U.
% The temperature is measured with a thermocouple.
% The measured data are stored in the file DRYER2.mat.
% The vector y2 holds the measured temperature (1000 values); the vector u2 holds
% the measured values of the input voltage (1000 values).
% The sampling period is 0.08 s.
pause
load dryer2  % load all data
% Press a key to continue
% The first 300 data points are used for identification:
z2 = [y2(1:300) u2(1:300)];
th = arx(z2,[3 2 3]);
th = sett(th,0.08);
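Slide 15 compares the ARX and NN models through a sum of squared errors; a hedged sketch of how the ARX prediction yh could be obtained with the legacy theta-format tools (the use of idsim and the data range are assumptions, not shown in the slides):

yh = idsim(u2(1:300),th);    % simulate the identified ARX model (assumed legacy call)
y  = y2(1:300);              % measured output on the same range
SSE_arx = sum((yh - y).^2)   % sum of squared errors of the ARX model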

13. Example 2 (neurf.m)

load dryer2
% Construction of the inputs and reference outputs for the NN MLP model
nI=5;   % order of the input (number of delayed input samples)
nT=5;   % order of the output (number of delayed output samples)
n=250;  % number of data points used out of 1000
[pom1,pom2]=size(y2);
nt=pom1-nT-1;
I=zeros(n,nI+nT);
for ii=1:1:nI      % columns 1..nI: delayed input samples
  I(:,ii)=u2(ii:n+ii-1,1);
end
for ii=1:1:nT      % columns nI+1..nI+nT: delayed output samples
  I(:,ii+nI)=y2(ii:n+ii-1,1);
end
I=I';
T=y2(nT+1:n+nT,1)';

14. % Create the ANN MLP network
net = newff(minmax(I),[5 1],{'tansig' 'purelin'},'trainlm','learnwh','sse');
net.trainParam.epochs = 80;
%net.performFcn = 'sse';        % sum of squared errors
%net.trainParam.goal = 0.1;     % sum-squared error goal
net.trainParam.show = 10;       % display frequency (in epochs)
net.trainParam.mc = 0.95;       % momentum constant
%lp.mc = 0.2;
% Train the network
net = train(net,I,T);
pause;
Y = sim(net,I);
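To obtain the SSE value reported for the NN MLP model on the next slide, the same error measure can be computed from the simulated output (a sketch, not shown in the original script):

E = T - Y;         % errors of the NN MLP model on the training data
SSE = sum(E.^2)    % sum of squared errors, comparable with the ARX value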

15. ARX model
SS1=(yh-y);
SS2=SS1.^2;
SSE=sum(SS2)
SSE = 3.4761

NN MLP model
SSE = 1.8979
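The SSE values above are computed on the identification data. As a closing sketch, the NN model can also be checked on a later, unseen segment of dryer2; the offset, segment length, and reuse of the regressor orders nI = nT = 5 are assumptions, not part of the original slides:

off = 300; n2 = 250;                       % assumed validation segment
Iv = zeros(n2,nI+nT);
for ii = 1:nI
  Iv(:,ii) = u2(off+ii:off+n2+ii-1,1);     % delayed input samples
end
for ii = 1:nT
  Iv(:,ii+nI) = y2(off+ii:off+n2+ii-1,1);  % delayed output samples
end
Iv = Iv';
Tv = y2(off+nT+1:off+n2+nT,1)';
Yv = sim(net,Iv);                          % network response on unseen data
SSE_val = sum((Tv - Yv).^2)                % validation sum of squared errors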
