Topics: PowerPoint PPT Presentation



Topics: Introduction to artificial neural networks (ANNs); fundamentals of artificial neural networks; network topology; the network learning process; data analysis with artificial neural networks; the core idea of artificial neural networks; drawbacks of artificial neural networks; applications of artificial neural networks. Introduction: the response time of a biological neuron:



Introduction to Artificial Neural Networks (ANNs)


[Diagram: a small example network. Input 1 and Input 2 feed output 1, output 2, and output 3 through weighted connections (example weights 0.9, 0.3, 0.78, 0.3, 0.7, 0.8).]


Network topology

The neurons are arranged in layers: an input layer, one or more hidden layers, and an output layer. Two basic topologies:

FeedForward topology: signals flow in one direction only, from the input layer through the hidden layers to the output layer.

Recurrent topology: connections may form cycles, so outputs can be fed back into the network.



The learning process

Learning means adjusting the connection weights so that the network produces the desired output for each input. Three learning paradigms:

  • supervised learning

  • unsupervised learning

  • reinforcement learning


Perceptron

A perceptron computes a weighted sum of its inputs and outputs 1 if the sum exceeds the threshold, and -1 otherwise.


Linearly separable

[Diagram: two scatter plots of + and - points. In the first, the two classes can be separated by a straight line (linearly separable); in the second, they cannot (non-linearly separable). A single perceptron can only classify linearly separable data.]


bias

A constant input x0 = 1 with weight w0 acts as the bias, so the threshold can be learned like any other weight.


[Image: a biological neuron; source: http://research.yale.edu/ysm/images/78.2/articles-neural-neuron.jpg]

  • The perceptron output:

o(x1, ..., xn) = 1 if w0 + w1·x1 + ... + wn·xn > 0, and -1 otherwise.

  • Equivalently, o(x) = sgn(w · x), where

sgn(y) = 1 if y > 0, -1 otherwise.
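A minimal Python sketch of this decision rule (not from the original slides; the AND example and the hand-picked weights are chosen purely for illustration):

# Perceptron output: sign of the weighted sum, with w[0] as the bias weight.
def perceptron_output(w, x):
    # y = w0 + w1*x1 + ... + wn*xn
    y = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if y > 0 else -1

# Example: weights chosen by hand to implement a logical AND of two 0/1 inputs.
w = [-1.5, 1.0, 1.0]
print(perceptron_output(w, [1, 1]))  # 1
print(perceptron_output(w, [0, 1]))  # -1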


Perceptron training rule:

  • Start with random weights, apply the perceptron to each training example, and update the weights whenever an example is misclassified.

  • The rule converges to a correct weight vector provided two conditions hold:

    • the training examples are linearly separable, and the learning rate is sufficiently small.


The weight update:

  • wi ← wi + Δwi, where:

Δwi = η (t − o) xi

t: target output

o: output generated by the perceptron

η: constant called the learning rate (e.g., 0.1)

If the example is classified correctly (t = o), Δwi is zero and the weights are unchanged, as the sketch below shows.
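A small Python sketch of the training rule (illustrative, not from the slides; the AND data set, eta = 0.1, and the epoch count are assumed values):

# Perceptron training rule: w_i <- w_i + eta * (t - o) * x_i
# Each x is given with a leading constant 1 so that w[0] is the bias weight.
def train_perceptron(examples, eta=0.1, epochs=100):
    n = len(examples[0][0])          # number of inputs, including the constant 1
    w = [0.0] * n
    for _ in range(epochs):
        for x, t in examples:
            o = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            if o != t:               # update only on misclassification
                w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
    return w

# Example: learn the logical AND function (inputs prefixed with the constant 1).
data = [([1, 0, 0], -1), ([1, 0, 1], -1), ([1, 1, 0], -1), ([1, 1, 1], 1)]
print(train_perceptron(data))  # a weight vector that separates the data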


Delta Rule

  • The delta rule overcomes the perceptron rule's limitation: when the training examples are not linearly separable, it still converges toward the best-fit approximation of the target concept.

  • The delta rule is based on gradient descent, the idea underlying the Backpropagation algorithm.


Delta Rule

  • Consider an unthresholded linear unit, o = w · x, and define the training error over the examples d as:

E(w) = ½ Σd (td − od)²

  • Gradient descent minimizes E by moving each weight in the direction of steepest descent:

Δwi = η Σd (td − od) xid

    η: learning rate (e.g., 0.1)
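Below is a small Python sketch of one batch gradient-descent step under the delta rule for a linear unit; the toy data set and the eta value are made up for illustration:

# One batch step of the delta rule for a linear unit o = w . x:
# delta_w_i = eta * sum_d (t_d - o_d) * x_id
def delta_rule_step(w, examples, eta=0.1):
    grad = [0.0] * len(w)
    for x, t in examples:
        o = sum(wi * xi for wi, xi in zip(w, x))   # unthresholded output
        for i, xi in enumerate(x):
            grad[i] += (t - o) * xi
    return [wi + eta * gi for wi, gi in zip(w, grad)]

# Example: fit t = 2*x with a single weight, starting from w = [0].
examples = [([1.0], 2.0), ([2.0], 4.0)]
w = [0.0]
for _ in range(50):
    w = delta_rule_step(w, examples, eta=0.05)
print(w)  # approaches [2.0]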


Multilayer Architecture

[Diagram: a multilayer network with an input layer, hidden layers, and an output layer.]


Activation Functions

Sigmoidal Function: σ(y) = 1 / (1 + e^(−y))

[Plot: the sigmoid curve rising from 0 toward 1 over the range −10 to 10.]
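The sigmoid takes only a couple of lines of Python (illustrative sketch):

import math

# Sigmoidal activation: squashes any real y into the interval (0, 1).
def sigmoid(y):
    return 1.0 / (1.0 + math.exp(-y))

print(sigmoid(0.0))   # 0.5
print(sigmoid(10.0))  # close to 1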


Back propagation

  • The Back Propagation algorithm learns the weights of a multilayer network using gradient descent, minimizing the squared error between the network outputs and the target values.

  • The error to be minimized:

E(w) = ½ Σd Σk∈outputs (tkd − okd)², where tkd and okd are the target and computed values of output unit k for training example d.


Back-propagation Algorithm

(Forward Step)

The training example X is presented to the input layer.

It is propagated forward through the network until the outputs are computed.


(Backward Step)

  • For each output unit k, compute its error term: δk = ok (1 − ok) (tk − ok)

  • For each hidden unit h, compute its error term: δh = oh (1 − oh) Σk wkh δk

  • Update each weight:

    wji ← wji + Δwji, where Δwji = η δj xji


BP

  • Create a feed-forward network with nin inputs, nhidden hidden units, and nout output units.

  • Initialize all weights to small random values.

  • Repeat (until the termination condition is met):

    for each training example x:

    present x to the network and propagate it forward,

    then backpropagate the error E through the network and update the weights, as in the sketch below.
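The steps above can be sketched in Python with NumPy. This is a minimal one-hidden-layer version, not the slides' own code; the layer sizes, random seed, learning rate, and iteration count are assumptions, and practical details such as a stopping criterion are omitted.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

# One BP update for a single example (x, t); x carries a leading constant 1 (bias).
def bp_step(x, t, W_h, W_o, eta=0.5):
    # Forward step: propagate x through the network.
    h = sigmoid(W_h @ x)                  # hidden-unit outputs
    hb = np.append(h, 1.0)                # constant 1 acts as the output layer's bias input
    o = sigmoid(W_o @ hb)                 # output-unit outputs
    # Backward step: error terms for sigmoid units.
    delta_o = o * (1 - o) * (t - o)                    # delta_k for output units
    delta_h = h * (1 - h) * (W_o[:, :-1].T @ delta_o)  # delta_h for hidden units
    # Weight updates: w_ji <- w_ji + eta * delta_j * x_ji
    W_o += eta * np.outer(delta_o, hb)
    W_h += eta * np.outer(delta_h, x)
    return o

# Example: learn XOR. Each input is prefixed with the constant 1.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])
W_h = rng.normal(0.0, 0.5, size=(4, 3))   # 4 hidden units, 3 inputs (incl. bias)
W_o = rng.normal(0.0, 0.5, size=(1, 5))   # 1 output unit, 4 hidden units + bias
for _ in range(10000):
    for x, t in zip(X, T):
        bp_step(x, t, W_h, W_o)
print([round(float(bp_step(x, t, W_h, W_o, eta=0.0)[0]), 2)
       for x, t in zip(X, T)])            # typically close to [0, 1, 1, 0]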


BP

  • BP implements gradient descent and may therefore converge to a local minimum of the error surface rather than the global minimum.

  • A common remedy:

    • stochastic gradient descent, which updates the weights after each training example rather than after a full pass over the training set


  • Momentum: the weight update at iteration n includes a fraction of the previous update:

    Δwji(n) = η δj xji + α Δwji(n−1), with 0 <= α <= 1.

    The momentum term α:

    • helps gradient descent roll through small local minima and flat regions of the error surface, and speeds up convergence.
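In code, momentum only requires remembering the previous update. A sketch in Python (the function and parameter names are illustrative, not from the slides):

# Weight update with momentum:
# delta_w(n) = eta * grad_term + alpha * delta_w(n-1)
def momentum_update(w, grad_term, prev_delta, eta=0.1, alpha=0.9):
    delta_w = eta * grad_term + alpha * prev_delta
    return w + delta_w, delta_w   # return delta_w to reuse at the next iteration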


overfitting

  • As BP training continues, the training set error keeps decreasing,

  • but the validation set error eventually starts to rise: the network is overfitting the training data.

[Plot: Error versus number of weight updates; the training set error decreases monotonically while the validation set error falls and then rises.]


overfitting

  • Overfitting means the network fits the noise and idiosyncrasies of the training data rather than the underlying function. It becomes more likely the longer training runs and the more weights the network has.

  • Countermeasures (a code sketch follows this list):

  • Hold out a validation set and stop training when the validation error starts to rise.

  • Penalize large weights: weight decay.

  • k-fold cross validation: partition the m training examples into K subsets, train K times, each time holding out one subset for validation, and use the averaged results to decide when to stop.
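A Python sketch of the validation-set strategy (early stopping). The helpers train_one_epoch and validation_error and the net.get_weights/set_weights interface are assumed to exist; patience is an illustrative parameter:

# Stop training when the validation error has not improved for `patience` epochs,
# and keep the weights that achieved the lowest validation error.
def train_with_early_stopping(net, train_set, val_set, patience=10, max_epochs=1000):
    best_err, best_weights, bad_epochs = float("inf"), net.get_weights(), 0
    for _ in range(max_epochs):
        train_one_epoch(net, train_set)          # assumed helper: one pass of BP
        err = validation_error(net, val_set)     # assumed helper
        if err < best_err:
            best_err, best_weights, bad_epochs = err, net.get_weights(), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    net.set_weights(best_weights)
    return net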


BP summary:

  • Gradient descent over the weight space; it may get stuck in local minima.

  • In practice it works well, although training can be slow.

    Overfitting must be controlled.


Other approaches:

    • Hybrid Global Learning

    • Simulated Annealing

    • Genetic Algorithms

    • Radial Basis Functions

    • Recurrent Network


FNN

Other methods for training feed-forward neural networks (FNNs):

  • Simulated Annealing

  • PSO (Particle Swarm Optimization)

  • ...


  • The outputs of a trained network can be interpreted as estimates of posterior probability.


Drawbacks of ANNs

  • The trained network is a black box: the learned weights are hard to interpret.

  • Training can be slow and requires a large amount of data.

  • There is no guarantee of convergence to an optimal solution.

  • The network structure (number of layers and units) must be chosen by trial and error.

  • The results can be sensitive to the initial weights and parameter settings.


RBF

  • Radial Basis Function networks use radially symmetric activation functions (typically Gaussians) in the hidden layer.



Training an RBF network:

  • Choose the positions of the k basis-function centers (e.g., by clustering the inputs).

  • Set the width of each basis function.

  • Solve for the output-layer weights W.


Training procedure:

1. Present a training input xi to the network.

2. Compute the network output f(xi).

3. Compare f(xi) with the target yi and adjust the weights W to reduce the error.

4. Repeat for every training example.

5. Iterate until the error is acceptably small.


Applications:

  • Pattern Recognition and Character Recognition

  • Speech Recognition

  • Image Processing

  • Classification

( ...)


Failure mode and effects analysis (FMEA)

RPN = Severity * Occurrence * Detection

  • The Risk Priority Number (RPN) is used to rank failure modes by risk and to prioritize corrective actions.
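As a worked example of the formula (the 1-10 rating scale and the sample values are illustrative assumptions):

# RPN = Severity * Occurrence * Detection, each factor rated on a 1-10 scale.
def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

print(rpn(8, 5, 3))  # 120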




Particle Swarm Optimization

  • PSO is an evolutionary computation technique.

  • It was introduced by Kennedy and Eberhart in 1995.

  • It is inspired by the social behavior of bird flocking and fish schooling.

  • PSO maintains a population of candidate solutions, called particles.

  • PSO requires few parameters and is simple to implement.

  • PSO has been applied to a wide range of optimization problems.


Particle Swarm Optimization: Concept

[Diagram: particles scattered over a 2-D search space (axes x1 and x2), with fitness ranging from min to max.]


  • Each particle remembers the best position it has found so far (Pb) and the best position found by the whole swarm (Pg).

  • Each particle's velocity is adjusted toward its personal best (Pb) and the global best (Pg).


Particle Swarm Optimization: The Basic Model

Rules of movement:

Vid(t+1) = Vid(t) + c1*rand()*[Pid(t) - xid(t)] + c2*rand()*[Pgd(t) - xid(t)]

Xid(t+1) = Xid(t) + Vid(t+1),   1 <= i <= n,  1 <= d <= D

where c1 and c2 are acceleration constants and rand() returns a uniform random number between 0 and 1.


Particle Swarm Optimization: Animation

[Animation: a sequence of frames showing the particles moving through a 2-D search space (axes x1 and x2, fitness ranging from min to max) and gradually converging on the region of maximum fitness.]


Particle Swarm Optimization: Flow Chart

Flow chart depicting the general PSO algorithm:

1. Start.

2. Initialize the particles with random position and velocity vectors.

3. For each particle's position p, evaluate its fitness.

4. If fitness(p) is better than fitness(pBest), set pBest = p. (Loop until all particles are processed.)

5. Set the best of the pBests as gBest.

6. Update each particle's velocity (eq. 1) and position (eq. 3), then return to step 3. (Loop until max iterations.)

7. Stop: gBest holds the best solution found.
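A compact Python sketch of this loop (not from the slides; the sphere objective, swarm size, c1 = c2 = 2, iteration count, and the velocity clamp vmax, a common practical addition, are all assumptions for illustration):

import random

def fitness(x):                      # assumed objective: minimize the sphere function
    return sum(xi * xi for xi in x)

def pso(dim=2, n_particles=20, iters=100, c1=2.0, c2=2.0, vmax=1.0):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                        # personal bests (pBest)
    g = min(P, key=fitness)[:]                   # global best (gBest)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # v_id(t+1) = v_id(t) + c1*rand()*(P_id - x_id) + c2*rand()*(P_gd - x_id)
                V[i][d] += (c1 * random.random() * (P[i][d] - X[i][d])
                            + c2 * random.random() * (g[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))   # clamp the velocity
                X[i][d] += V[i][d]                         # x_id(t+1) = x_id(t) + v_id(t+1)
            if fitness(X[i]) < fitness(P[i]):              # update pBest
                P[i] = X[i][:]
        g = min(P, key=fitness)[:]                         # update gBest
    return g

print(pso())  # typically a point near the optimum (0, 0)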


References:

www.rsh.ir

http://en.wikipedia.org/wiki/Neural_network

http://www.neuralnetworksolutions.com/resources.php

http://www.tandf.co.uk/journals/titles/0954898X.asp

http://www.30sharp.com

