Topics: introduction to artificial neural networks (ANNs); fundamentals of artificial neural networks; network topology; the network learning process; data analysis with artificial neural networks; the main idea behind artificial neural networks; drawbacks of artificial neural networks; applications of artificial neural networks. Introduction: the response time of a biological neuron.

Presentation Transcript


Topics

Introduction: the response time of a biological neuron


Artificial neural networks (ANN)


[Figure: a small network with inputs Input 1 and Input 2 connected to output 1, output 2 and output 3 by weighted links (example weights 0.9, 0.3, 0.78, 0.3, 0.7, 0.8)]


Network topology: the pattern of connections between the units. Two basic classes:

FeedForward topology

Recurrent topology

[Figure: a layered feed-forward network with an Input layer, a Hidden layer and an Output layer]





The network learning process. Three learning paradigms:

Supervised learning

Unsupervised learning

Reinforcement learning


Perceptron

A perceptron computes a weighted sum of its inputs and outputs either 1 or -1.


Linearly separable

[Figure: two scatter plots of + and - examples, one Linearly separable and one Non-linearly separable]


Bias

The bias is treated as an extra input that is fixed at 1 and has its own weight w0.


http://research.yale.edu/ysm/images/78.2/articles-neural-neuron.jpg

  • The perceptron output is a thresholded weighted sum of the inputs:

    o = 1 if y > 0, -1 otherwise

    where y = w0 + w1 x1 + ... + wn xn


Perceptron training

  • The learning problem: find weights that make the perceptron produce the correct +1 / -1 output for every training example.
  • Two training rules are considered:
    • the perceptron training rule
    • the delta rule


The perceptron training rule:

  • Start with random weights, apply the perceptron to each training example, and adjust the weights whenever an example is misclassified:

    wi ← wi + Δwi,  where Δwi = η (t − o) xi

    t: target output

    o: output generated by the perceptron

    η: constant called the learning rate (e.g., 0.1)

  • The procedure converges to correct weights provided the training examples are linearly separable.
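Below is a minimal Python sketch of the threshold unit and this training rule; the AND-style data set, the learning rate of 0.1 and the epoch count are illustrative assumptions, not part of the original slides.

import numpy as np

def perceptron_train(X, t, eta=0.1, epochs=50):
    """Train one perceptron with the rule w_i <- w_i + eta * (t - o) * x_i.

    X: (n_examples, n_features) inputs; t: target outputs in {-1, +1}.
    A constant input of 1 is prepended to every example so that w[0] plays the
    role of the bias weight w0 described earlier.
    """
    X = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, t_i in zip(X, t):
            o = 1 if w @ x_i > 0 else -1      # threshold output: 1 if y > 0, -1 otherwise
            w += eta * (t_i - o) * x_i        # perceptron training rule
    return w

# Illustrative example: the AND function, which is linearly separable
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1])
w = perceptron_train(X, t)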


Delta Rule:

  • The perceptron rule fails to converge when the training examples are not linearly separable; the delta rule is designed to handle that case.
  • It is based on gradient descent and is the basis of the Backpropagation algorithm.

Delta Rule:

  • Consider an unthresholded linear unit, o = w · x, and measure its training error as half the sum of squared errors over the training set:

    E(w) = 1/2 Σd (td − od)²

Delta Rule:

  • Gradient descent gives the weight update:

    Δwi = η Σd (td − od) xid

    η: learning rate (e.g., 0.1)
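A short sketch of one batch gradient-descent pass of this delta rule for an unthresholded linear unit; the toy data, learning rate and iteration count are illustrative assumptions.

import numpy as np

def delta_rule_epoch(w, X, t, eta=0.1):
    """One batch update Δw_i = eta * Σ_d (t_d - o_d) * x_id for a linear unit o = w · x."""
    o = X @ w                        # unthresholded outputs for every training example
    return w + eta * X.T @ (t - o)   # accumulate the gradient-descent update over the batch

# Illustrative data: three examples with two features each
X = np.array([[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]])
t = np.array([1.0, -1.0, 0.0])
w = np.zeros(2)
for _ in range(100):
    w = delta_rule_epoch(w, X, t)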


Multilayer Architecture:

[Figure: a multilayer network with an Input layer, Hidden Layers, and an Output layer]


Activation Functions: the Sigmoidal Function

σ(x) = 1 / (1 + e^(-x))

[Plot of the sigmoid over x from -10 to 10]
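A small Python sketch of this activation function; the derivative σ'(x) = σ(x)(1 − σ(x)) is the quantity the backpropagation error terms on the following slides rely on, and the sampled range simply mirrors the plotted axis.

import numpy as np

def sigmoid(x):
    """Sigmoidal activation: sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    """sigma'(x) = sigma(x) * (1 - sigma(x)), which appears in the BP error terms."""
    s = sigmoid(x)
    return s * (1.0 - s)

xs = np.linspace(-10, 10, 11)   # the range shown on the slide's plot
ys = sigmoid(xs)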


Back propagation:

  • The Back Propagation algorithm learns the weights of a multilayer network using gradient descent.
  • It attempts to minimise the squared error between the network outputs and the target values, summed over all output units and all training examples:

    E(w) = 1/2 Σd Σk∈outputs (tkd − okd)²

    where tkd and okd are the target value and the computed output of unit k for training example d.



(Forward Step):

The training input x is presented to the network and propagated forward, layer by layer, until the output of every unit has been computed.


(Backward Step):

  • For each output unit k, compute its error term:

    δk = ok (1 − ok) (tk − ok)

  • For each hidden unit h, compute its error term from the units it feeds into:

    δh = oh (1 − oh) Σk wkh δk

  • Update every weight:

    wji ← wji + Δwji,  where Δwji = η δj xji


The BP algorithm:

  • Create a feed-forward network with nin input units, nhidden hidden units and nout output units.
  • Initialise all weights to small random values.
  • Until the termination condition is met, for each training example (x, t):
    - propagate x forward through the network (forward step);
    - propagate the error E backward and update the weights (backward step).
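A minimal Python sketch of this algorithm for a single hidden layer of sigmoid units, using the per-example updates from the forward and backward steps above; the layer sizes, learning rate, epoch count and XOR data are illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_train(X, T, n_hidden=3, eta=0.1, epochs=5000, seed=0):
    """Backpropagation for one hidden layer of sigmoid units, updating after each example."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.uniform(-0.05, 0.05, (n_hidden, n_in + 1))    # hidden-layer weights (column 0 = bias)
    W2 = rng.uniform(-0.05, 0.05, (n_out, n_hidden + 1))   # output-layer weights (column 0 = bias)
    for _ in range(epochs):
        for x, t in zip(X, T):
            x1 = np.append(1.0, x)                          # forward step
            h = sigmoid(W1 @ x1)
            h1 = np.append(1.0, h)
            o = sigmoid(W2 @ h1)
            delta_o = o * (1 - o) * (t - o)                 # backward step: output error terms
            delta_h = h * (1 - h) * (W2[:, 1:].T @ delta_o) # hidden error terms
            W2 += eta * np.outer(delta_o, h1)               # weight updates Δw = η δ x
            W1 += eta * np.outer(delta_h, x1)
    return W1, W2

# Illustrative example: XOR, which a single perceptron cannot represent
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = bp_train(X, T)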


BP:

  • BP performs gradient descent on the training error, searching the space of all possible weight values.
  • Because the weights are adjusted after every single training example rather than after a full pass over the training set, the procedure is in fact:
    • stochastic gradient descent


Adding momentum:

  • The weight update at iteration n can be made to depend partly on the update performed at iteration n − 1:

    Δwji(n) = η δj xji + α Δwji(n − 1),  with 0 ≤ α ≤ 1

  • The momentum term α:
    • can speed up convergence and help the search move through flat regions and past small local minima of the error surface.


Overfitting:

  • A practical question for BP: when should the weight updates be stopped?
  • If training continues until the error on the training set is as small as possible, the network overfits.

[Figure: Error versus Number of weight updates, with the Training set error falling steadily while the Validation set error eventually starts to rise]


Overfitting:

  • An overfitted network has adapted to the noise and peculiarities of the particular training examples rather than to the underlying regularity.
  • Its error on the training set keeps decreasing while its error on new examples grows.


Avoiding overfitting:

  • Keep a separate Validation set and stop training when the validation error starts to increase, as sketched below.
  • Penalise large weights: weight decay.
  • When data are scarce, use k-fold cross validation: the m training examples are split into K subsets, each subset serves once as the validation set while the remaining ones are used for training, and the results are averaged.
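A minimal sketch of the validation-set stopping rule from the first bullet above; update_weights and validation_error are hypothetical stand-ins for one epoch of BP training and the validation-set error measure, and the patience threshold is an illustrative assumption.

def train_with_early_stopping(update_weights, validation_error, weights, max_epochs=1000, patience=10):
    """Validation-set stopping rule: keep training only while the validation error improves.

    update_weights(weights) -> weights after one more training epoch (e.g. one pass of BP);
    validation_error(weights) -> error of the network on the held-out validation set.
    Both callables are hypothetical stand-ins for the caller's own training code.
    """
    best_err, best_weights, bad_epochs = float("inf"), weights, 0
    for _ in range(max_epochs):
        weights = update_weights(weights)
        err = validation_error(weights)
        if err < best_err:
            best_err, best_weights, bad_epochs = err, weights, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:   # validation error has stopped improving: stop early
                break
    return best_weights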


Problems with BP:

  • Gradient descent can become trapped in a local minimum of the error surface instead of reaching the global minimum.
  • Training can require many iterations and be slow.
  • Overfitting.


Possible remedies: alternative training schemes and architectures, for example:

    • Hybrid Global Learning
    • Simulated Annealing
    • Genetic Algorithms
    • Radial Basis Functions
    • Recurrent Network


Global optimisation methods for training an FNN (feed-forward neural network):

  • Simulated Annealing
  • PSO (Particle Swarm Optimization)
  • ...


  • The outputs of a trained classification network can be interpreted as estimates of the posterior probability of each class.


ANN


RBF (Radial Basis Function) networks


Data analysis with a neural network (training procedure):

  1. Present a training input xi to the network.
  2. Compute the network output f(xi).
  3. Compare f(xi) with the desired output yi and adjust the weights W to reduce the difference.
  4. Repeat for every training example.
  5. Repeat the whole pass over the data until the error is acceptably small.


Applications of artificial neural networks:

  • Pattern Recognition and Character Recognition
  • Speech Recognition
  • Image Processing
  • Classification




Failure mode and effects analysis (FMEA)

Severity * Occurrence * Detection = RPN (Risk Priority Number)
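For example, with illustrative ratings of Severity = 7, Occurrence = 4 and Detection = 5 (none of which appear in the original slides), the Risk Priority Number is RPN = 7 * 4 * 5 = 140.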






Particle Swarm Optimization

  • PSO is a population-based stochastic optimisation technique related to evolutionary (Evolutionary) computation.
  • It was proposed by Kennedy and Eberhart in 1995.
  • It is inspired by the social behaviour of bird flocking and fish schooling.
  • PSO maintains a population (Population) of candidate solutions, called particles, that move through the search space.
  • Each particle adjusts its movement according to its own experience and the experience of its neighbours.
  • PSO has been applied to many optimisation problems, including the training of neural networks.


Particle Swarm Optimization Concept

[Figure: a fitness landscape over the variables x1 and x2, with fitness ranging from min to max]


  • Each particle remembers the best position it has found so far, its personal best (Pb), and the best position found by any particle in the swarm, the global best (Pg).
  • At every step, a particle's movement is pulled toward both its personal best (Pb) and the global best (Pg).



Particle Swarm Optimization: The Basic Model

Rules of movement:

vid(t+1) = vid(t) + c1 * rand() * [Pid(t) - xid(t)] + c2 * rand() * [Pgd(t) - xid(t)]

xid(t+1) = xid(t) + vid(t+1),  for 1 ≤ i ≤ n and 1 ≤ d ≤ D

where c1 and c2 are acceleration constants and rand() returns a uniform random number between 0 and 1.
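A minimal Python sketch of these movement rules for maximising a fitness function over a D-dimensional box; the swarm size, coefficients, bounds and example fitness function are illustrative assumptions, not part of the original slides.

import numpy as np

def pso_maximize(fitness, D=2, n=20, c1=2.0, c2=2.0, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Basic global-best PSO using the velocity and position update rules above."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, D))          # particle positions
    v = np.zeros((n, D))                     # particle velocities
    pbest = x.copy()                         # personal bests (Pb)
    pbest_fit = np.array([fitness(p) for p in x])
    g = pbest[np.argmax(pbest_fit)].copy()   # global best (Pg)
    for _ in range(iters):
        r1, r2 = rng.random((n, D)), rng.random((n, D))
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update rule
        x = np.clip(x + v, lo, hi)                          # position update rule
        fit = np.array([fitness(p) for p in x])
        improved = fit > pbest_fit                          # refresh personal bests
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        g = pbest[np.argmax(pbest_fit)].copy()              # refresh the global best
    return g, pbest_fit.max()

# Illustrative fitness: a smooth function of x1 and x2 with its maximum at (1, -2)
best, best_fit = pso_maximize(lambda p: -((p[0] - 1) ** 2 + (p[1] + 2) ** 2))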



[Figure: the Particle Swarm Optimization Concept shown over the search space of x1 and x2, with fitness ranging from min to max]


[Animation: a sequence of frames (Particle Swarm Optimization Animation) showing the particles moving through the search space of x1 and x2, with fitness ranging from min to max]




Particle Swarm Optimization Flow Chart

Flow chart depicting the general PSO algorithm:

  • Start.
  • Initialize particles with random position and velocity vectors.
  • For each particle's position (p), evaluate its fitness.
  • If fitness(p) is better than fitness(pbest), then pbest = p; loop until all particles are exhausted.
  • Set the best of the pBests as gBest.
  • Update the particles' velocity (eq. 1) and position (eq. 3); loop until max iterations.
  • Stop: gBest is the optimal solution.


References:

www.rsh.ir

http://en.wikipedia.org/wiki/Neural_network

http://www.neuralnetworksolutions.com/resources.php

http://www.tandf.co.uk/journals/titles/0954898X.asp

http://www.30sharp.com

