
A Dynamic System Analysis of Simultaneous Recurrent Neural Network

Dr. Gursel Serpen and Mr. Yifeng Xu

Electrical Engineering and Computer Science Department

University of Toledo

Toledo, Ohio, USA


Introduction

  • Motivation for research

  • Simultaneous Recurrent Neural Network (SRN)

  • Local Stability of SRN

  • Global stability of SRN

FOR MORE INFO...

Werbos, Paul: many recent articles, some unpublished!

Serpen et al., The Simultaneous Recurrent Neural Network for Addressing the Scaling Problem in Static Optimization, Neural Systems, Vol. 11, No. 5, 2001, pp. 477-487.


Motivation

  • The Simultaneous Recurrent Neural (SRN) network has been shown to have the potential to address large-scale static optimization problems: it has located relatively high-quality solutions.

  • The SRN is trainable, which implies that it can learn from prior search attempts (a Hopfield network cannot do this!)

  • The computational complexity of SRN simulations is much lower than that of the Hopfield network and its derivatives.


Research Goals

  • Understand the stability and convergence properties of the SRN dynamics.

  • Establish stable dynamics following initialization.

  • Establish stability while training the SRN with a fixed-point algorithm (recurrent backprop).

  • Apply the SRN to (large-scale) static optimization problems.


Research Goals - Detailed

Initialization of weights to guarantee the existence of at least one fixed point in the state space of the SRN dynamics.

Stability as weight matrices are being modified while learning with a fixed-point algorithm, i.e., recurrent backpropagation.

Assessing the computational power of the SRN as a static optimizer for large-scale problem instances.


Simultaneous Recurrent Neural Network

Topology: a feedforward network mapping the input to the output, with delayed feedback from the output back to the input side. A typical propagation delay exists on the feedback path, as dictated by physical constraints!


SRN Detailed Structure – 3 Layers

Three layers: input x, hidden layer y, and output layer z (the network outputs). Weight matrix W connects the input to the hidden layer, U connects the hidden layer to the output layer, and V feeds the output back to the hidden layer through the propagation delay.

z = f(Uy) and y = f(Wx + Vz)
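As a concrete illustration of these update equations, the following minimal Python/NumPy sketch relaxes the network by iterating the delayed feedback until the output settles. It assumes sigmoid activations with a high gain and a discrete-time relaxation; the network sizes, gain, and tolerance are illustrative choices, not settings from the original study.

```python
import numpy as np

def f(a, gain=10.0):
    """Sigmoid activation; a high gain approximates hard-limiting neurons."""
    return 1.0 / (1.0 + np.exp(-gain * a))

def srn_relax(W, U, V, x, z0, max_iter=1000, tol=1e-9):
    """Iterate z = f(Uy), y = f(Wx + Vz) until the output layer settles."""
    z = z0.copy()
    for _ in range(max_iter):
        y = f(W @ x + V @ z)        # hidden layer driven by input and delayed feedback
        z_next = f(U @ y)           # output layer
        if np.max(np.abs(z_next - z)) < tol:
            return z_next, True     # reached a fixed point
        z = z_next
    return z, False                 # did not settle (e.g., a limit cycle)

# Example run on a small random network (illustrative sizes: 3 inputs, 2 hidden, 4 outputs).
rng = np.random.default_rng(0)
H, K, N = 2, 4, 3
W = rng.normal(size=(H, N))
U = rng.normal(size=(K, H))
V = rng.normal(size=(H, K))
x = np.zeros(N)                     # no external input (associative memory mode)
z_star, settled = srn_relax(W, U, V, x, rng.uniform(size=K))
print(settled, np.round(z_star, 3))
```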


Analysis of SRN Dynamics

Local stability analysis

SRN dynamics linearized at hypercube corners (that are also equilibrium points)

(Local) stability conditions for hypercube corner equilibrium points

A theorem and its proof (presented elsewhere)


Equilibrium Points in State Space of SRN Dynamics

SRN dynamics:

Output layer: z = f(Uy)

Hidden layer: y = f(Wx + Vz)

SRN dynamics in matrix form in terms of z: z = f(U f(Wx + Vz))


Equilibrium Points of SRN Dynamics

Hypercube corners: z_k = 1 or z_k = 0, for k = 1, 2, …, K.

Points interior to the hypercube, on its surface, and on its edges.


Stability of SRN Dynamics - Linearized

Eigenvalues of the SRN dynamics linearized at the hypercube-corner equilibrium points, computed for k = 1, 2, …, K.
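As a hedged numerical complement, the Python sketch below linearizes the map z → f(U f(Wx + Vz)) at a hypercube corner and checks whether all eigenvalues of the Jacobian lie strictly inside the unit circle, the local stability condition for the discrete-time relaxation assumed here. The sigmoid gain, the network sizes, and the random weights are illustrative assumptions, not values from the analysis.

```python
import numpy as np

def sigmoid(a, gain=10.0):
    return 1.0 / (1.0 + np.exp(-gain * a))

def jacobian_at(z, W, U, V, x, gain=10.0):
    """Jacobian of the map z -> f(U f(Wx + Vz)) evaluated at z (chain rule)."""
    h = W @ x + V @ z
    y = sigmoid(h, gain)
    o = U @ y
    fo = sigmoid(o, gain)
    d_out = gain * fo * (1.0 - fo)          # f'(Uy)
    d_hid = gain * y * (1.0 - y)            # f'(Wx + Vz)
    return (d_out[:, None] * U) @ (d_hid[:, None] * V)

def locally_stable(corner, W, U, V, x):
    """The corner is locally stable if the spectral radius of the Jacobian is below 1."""
    eigvals = np.linalg.eigvals(jacobian_at(corner, W, U, V, x))
    return bool(np.max(np.abs(eigvals)) < 1.0)

# Example: test the all-ones corner of a small random SRN (illustrative sizes).
rng = np.random.default_rng(1)
H, K, N = 2, 4, 3
W = rng.normal(size=(H, N))
U = rng.normal(size=(K, H))
V = rng.normal(size=(H, K))
print(locally_stable(np.ones(K), W, U, V, np.zeros(N)))
```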


Stability of SRN Dynamics - Linearized (continued)

Set of inequalities derived from the eigenvalues through the stability condition for the equilibrium points, with separate conditions for corner components at z_k = 0 and at z_k = 1.


A Stability Theorem for SRN Dynamics

For any given hypercube corner in the state space of SRN dynamics,

which is configured for static combinatorial optimization, i.e. with high-gain neurons, one hidden layer, one output layer, no external input, and operating in associative memory mode,

it is possible to define the weight matrices U and V to establish that hypercube corner as a stable equilibrium point in a local sense.
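For illustration only, the sketch below picks a target corner and a simple heuristic choice of U and V (an outer-product pattern that saturates the neurons toward the corner), then confirms numerically that relaxations started near the corner settle back onto it. This is not the construction used in the proof, just one hedged example showing that such weight matrices can exist under the assumed sigmoid gain and discrete-time relaxation.

```python
import numpy as np

def sigmoid(a, gain=10.0):
    return 1.0 / (1.0 + np.exp(-gain * a))

def relax(W, U, V, x, z0, n_steps=200, gain=10.0):
    """Iterate the assumed discrete-time SRN map z -> f(U f(Wx + Vz))."""
    z = z0
    for _ in range(n_steps):
        z = sigmoid(U @ sigmoid(W @ x + V @ z, gain), gain)
    return z

# Target corner to stabilize (any 0/1 pattern of length K; sizes are illustrative).
K, H, N = 4, 2, 3
target = np.array([1.0, 0.0, 1.0, 1.0])
s = 2.0 * target - 1.0                      # +/-1 version of the corner

# Heuristic weights: feedback pushes every hidden unit toward saturation consistent
# with the corner, and the output weights push each output toward its target sign.
V = np.outer(np.ones(H), s)                 # H x K
U = np.outer(s, np.ones(H)) / H             # K x H
W = np.zeros((H, N))                        # external input is unused here
x = np.zeros(N)

# Relax from several random initial states near the target corner.
rng = np.random.default_rng(3)
for _ in range(3):
    z0 = np.clip(target + 0.1 * rng.standard_normal(K), 0.0, 1.0)
    print(np.round(relax(W, U, V, x, z0), 3))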


Simulation Study

Case 1 - 2X4 SRN

Methodology: create 75 instances of U & V; then observe the set of stable equilibrium points through multiple random initializations.

  Fixed points (hypercube corners): 48
  Limit cycles (cycle length of 2): 12
  Stable non-hypercube points: 19

  * All 16 hypercube corners appeared as stable equilibrium points across the 75 instances of weight matrices.

Case 2 - 2X100 SRN

Case 3 - 5X10000 SRN

Case 4 - 5X25000 SRN
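A hedged sketch of how such a tally could be reproduced is shown below. It reuses the discrete-time relaxation and sigmoid gain assumed in the earlier sketches; the instance counts, step budget, tolerances, and network sizes are illustrative and much smaller than those of the reported cases.

```python
import numpy as np

def sigmoid(a, gain=10.0):
    return 1.0 / (1.0 + np.exp(-gain * a))

def step(z, W, U, V, x, gain=10.0):
    """One step of the assumed discrete-time SRN relaxation."""
    return sigmoid(U @ sigmoid(W @ x + V @ z, gain), gain)

def classify(W, U, V, x, z0, n_steps=500, tol=1e-6, corner_tol=1e-3):
    """Label the long-run behaviour reached from the initial state z0."""
    z = z0
    for _ in range(n_steps):
        z = step(z, W, U, V, x)
    z1 = step(z, W, U, V, x)
    z2 = step(z1, W, U, V, x)
    if np.max(np.abs(z1 - z)) < tol:                    # settled to a fixed point
        at_corner = np.all((z < corner_tol) | (z > 1.0 - corner_tol))
        return "hypercube-corner fixed point" if at_corner else "stable non-corner point"
    if np.max(np.abs(z2 - z)) < tol:                    # state repeats every two steps
        return "limit cycle of length 2"
    return "other"

# Tally behaviours over random weight instances and random initializations.
rng = np.random.default_rng(2)
H, K, N = 2, 4, 3
counts = {}
for _ in range(10):                                      # weight-matrix instances
    W = rng.normal(size=(H, N))
    U = rng.normal(size=(K, H))
    V = rng.normal(size=(H, K))
    for _ in range(20):                                  # random initial states
        label = classify(W, U, V, np.zeros(N), rng.uniform(size=K))
        counts[label] = counts.get(label, 0) + 1
print(counts)
```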


Global Stability of SRN?

  • The SRN paradigm is closely related to a number of globally asymptotically stable recurrent neural network algorithms!

  • BAM is a special case of SRN.

  • ART 1 & 2 cores are similar topologies.

  • Extensive simulation-based empirical studies conducted by the authors suggest global stability.

  • However, no Liapunov function yet!


Conclusions

  • A theorem demonstrates that there exist real forward and backward weight matrices for the SRN dynamics that induce stability of any given hypercube corner as an equilibrium point.

  • This theoretical finding was also validated by extensive simulation-based empirical studies, some of which were reported in a recent journal article:

    Serpen et al., The Simultaneous Recurrent Neural Network for Addressing the Scaling Problem in Static Optimization, Neural Systems, Vol. 11, No. 5, 2001, pp. 477-487.


Thank You!

  • Questions ?

This research has been funded in part by the US National Science Foundation grant ECS-9800247.