slide1

Modelleerimine ja Juhtimine Tehisnärvivõrgudega

Identification and Control with artificial neural networks

(2nd part)

Eduard Petlenkov,

Automaatikainstituut


slide2

Supervised learning

Supervised learning is a training method based on known input-output data.

X is an input vector;

Yp is the vector of reference (desired) outputs corresponding to input vector X;

Y is the vector of the NN's outputs corresponding to input vector X;

NN is the mathematical function of the network (Y = NN(X)).
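A minimal numeric illustration (hypothetical linear network, not from the slides) of how the output Y = NN(X) is compared against the reference Yp:

NN = @(x) [0.2 0.7]*x + 0.1; % hypothetical network function with fixed weights and bias
X  = [3; -2];                % input vector
Yp = -0.9;                   % reference (desired) output for this input
Y  = NN(X);                  % network output: 0.2*3 + 0.7*(-2) + 0.1 = -0.7
e  = Yp - Y;                 % error that supervised training seeks to minimize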


slide3

Training (2)

The network training procedure consists of the following 3 steps (a minimal sketch follows the list):

  • Computation of the NN's outputs corresponding to the current NN's parameters;
  • Computation of the NN's error;
  • Updating the NN's parameters using the selected training algorithm in order to minimize the error.
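A sketch of this three-step loop for a single linear neuron trained with plain gradient descent (hypothetical example, not from the slides):

P = (rand(2,100)-0.5)*20;           % random training inputs in [-10 +10]
T = 0.3*P(1,:) + 0.9*P(2,:);        % known reference outputs
w = randn(1,2); b = 0; eta = 0.01;  % initial parameters and learning rate
for epoch = 1:100
    Y = w*P + b;                    % step 1: outputs for the current parameters
    E = T - Y;                      % step 2: network error
    w = w + eta*(E*P')/size(P,2);   % step 3: update parameters to reduce the error
    b = b + eta*mean(E);
end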


slide4

Gradient descent error backpropagation (1)

Error function (the function to be minimized):

E = \frac{1}{2}\sum_k \left( y_{p,k} - y_k \right)^2, \qquad y = NN(x)

where

y - NN's output;
x - inputs used for training;
NN - NN's function;
y_p - etalons (references) for the outputs = desired outputs.


slide5

Gradient descent error backpropagation (2)
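The update formulas on this slide follow the standard gradient-descent rule (reconstructed here to be consistent with slides 9 and 10): each weight is moved against the gradient of the error function,

w_{ij}(t+1) = w_{ij}(t) - \eta \, \frac{\partial E}{\partial w_{ij}(t)}

where \eta > 0 is the learning rate.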


slide6

Global minimum problem

Because gradient descent follows only the local slope of the error surface, training may converge to a local minimum of the error function instead of the global minimum; the result therefore depends on the initial values of the weights.


slide7

Training of a double-layer perceptron using Gradient descent error backpropagation (1)

i - number of the input;
j - number of the neuron in the hidden layer;
k - number of the output neuron;

x_i(t) - NN's input at time instance t;
w^1_{ji}(t) - values of the hidden layer weighting coefficients at time instance t;
b^1_j(t) - values of the hidden layer biases at time instance t;
f^1 - activation function of the hidden layer neurons;
z_j(t) - outputs of the hidden layer neurons at time instance t;
w^2_{kj}(t) - values of the output layer weighting coefficients at time instance t;
b^2_k(t) - values of the output layer biases at time instance t;
f^2 - activation function of the output layer neurons;
y_k(t) - outputs of the network at time instance t.
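With this notation, the forward pass of the double-layer perceptron is (a standard formulation, written to match the definitions above):

z_j(t) = f^1\left( \sum_i w^1_{ji}(t)\, x_i(t) + b^1_j(t) \right)

y_k(t) = f^2\left( \sum_j w^2_{kj}(t)\, z_j(t) + b^2_k(t) \right)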


slide8

Training of a double-layer perceptron using Gradient descent error backpropagation (2)

Error:

e_k(t) = y_{p,k}(t) - y_k(t)

Gradients of the error function with respect to the weights of each layer are built from the signals below.

Signals distributing information about the error from the output layer to the previous layers (which is why the method is called error backpropagation):

\delta^2_k(t) = f^{2\prime}\, e_k(t), \qquad \delta^1_j(t) = f^{1\prime} \sum_k w^2_{kj}(t)\, \delta^2_k(t)

Here f^{1\prime} and f^{2\prime} are the derivatives of the corresponding layers' activation functions.


slide9

Training of a double-layer perceptron using Gradient descent error backpropagation (3)

Updated weighting coefficients:

w^2_{kj}(t+1) = w^2_{kj}(t) + \eta\, \delta^2_k(t)\, z_j(t)

w^1_{ji}(t+1) = w^1_{ji}(t) + \eta\, \delta^1_j(t)\, x_i(t)

where \eta > 0 is the learning rate (the biases are updated analogously).
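A single update step in MATLAB form (hypothetical scalar values, for illustration only):

eta = 0.1;                     % learning rate
w2 = 0.5; z = 0.8; d2 = -0.2;  % hypothetical weight, hidden output and delta-signal
w2 = w2 + eta*d2*z             % updated weight: 0.5 + 0.1*(-0.2)*0.8 = 0.484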


slide10

Gradient Descent (GD) method vs Levenberg-Marquardt (LM) method

g = \nabla E(w) - gradient;

H = \nabla^2 E(w) - Hessian.

GD: \Delta w = -\eta\, g

LM: \Delta w = -\left( H + \mu I \right)^{-1} g

GD uses only first-order (gradient) information, while LM also uses the second-order curvature information in the Hessian and typically converges much faster near a minimum; \mu is a damping factor that blends the two behaviours.
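In MATLAB's Neural Network Toolbox the two methods correspond to different training functions: 'traingd' (used in the example below) implements gradient descent, while 'trainlm' implements Levenberg-Marquardt. Only the last argument of newff changes:

net=newff([-10 10;-10 10],[5 1],{'purelin' 'purelin'},'trainlm') % Levenberg-Marquardt instead of gradient descent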


slide11

Example in MATLAB (1)

P1=(rand(1,100)-0.5)*20 % random inputs: 100 values in the range [-10 +10]
P2=(rand(1,100)-0.5)*20
P=[P1;P2] % forming the neural network's inputs
T=0.3*P1+0.9*P2 % computing the corresponding outputs
net=newff([-10 10;-10 10],[5 1],{'purelin' 'purelin'},'traingd') % generating the neural network

In the newff call: the rows of [-10 10;-10 10] are the [min max] ranges of the first and second inputs; [5 1] gives the number of neurons in each layer, where the number of neurons in the last layer = the number of outputs; 'purelin' and 'purelin' are the activation functions of the first-layer and second-layer neurons; 'traingd' selects the training algorithm.

% Training parameters
net.trainParam.show=1;
net.trainParam.epochs=100;

slide12

Example in MATLAB (2)

net=train(net,P,T) % training the neural network
out=sim(net,[3; -2]) % simulating the neural network (first input value 3, second input value -2)

W1=net.IW{1,1} % weighting coefficient matrix of the first layer
W2=net.LW{2,1} % weighting coefficient matrix of the second layer
B1=net.b{1} % bias vector of the first layer neurons
B2=net.b{2} % bias vector of the second layer neurons
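A quick sanity check (not on the original slide): the target mapping is linear and both layers use linear activations, so after successful training the simulated output should approach the analytic value:

y_true = 0.3*3 + 0.9*(-2) % = -0.9; out returned by sim(net,[3; -2]) should be close to this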
