

  1. Simultaneous Recurrent Neural Networks for Static Optimization Problems By: Amol Patwardhan Adviser: Dr. Gursel Serpen August, 1999 The University of Toledo

  2. Driving Force for the Research • Drawbacks of conventional computing systems:- • Perform poorly on complex problems • Lack sufficient computational power • Do not utilize the inherent parallelism of problems • Advantages of Artificial Neural Networks:- • Perform well even on complex problems • Very fast computational cycles if implemented in hardware • Can take advantage of the inherent parallelism of problems

  3. Earlier Efforts to Solve Optimization Problems • Many ANN algorithms with feedforward and recurrent architectures have been used to solve unconstrained and combinatorial optimization problems • The Hopfield network and its derivatives, including the Boltzmann machine and mean field annealing (MFA), appear to be the most prominent and extensively applied ANN algorithms for these static optimization problems • However, the HN and its derivatives do not scale well with increases in the size of the optimization problem

  4. Statement of Thesis Can we use the Simultaneous Recurrent Neural Network, a trainable and recurrent ANN, to address the scaling problem that Artificial Neural Network algorithms currently experience for static optimization problems?

  5. Research Approach • A neural network simulator is developed for simulation of the Simultaneous Recurrent Neural Network • An extensive simulation study is conducted on two well-known static optimization problems: - Traveling Salesman Problem - Graph Path Search Problem • Simulation results are analyzed

  6. Significance of Research • A powerful and efficient optimization tool • The optimizer can solve real-life-size, complex static optimization problems • Will require a fraction of the time if implemented in hardware • Applications in many fields, such as: - Routing in computer networks - VLSI circuit design - Planning in operational and logistic systems - Power distribution systems - Wireless and satellite communication systems

  7. Hopfield Network and Static Optimization Problems • Among the most widely used ANN algorithms • Offers a computationally simple approach for a class of optimization problems • The HN dynamics minimizes a quadratic Lyapunov function • Fixed points of the dynamics are employed as attractors that encode solutions • Performance greatly depends on the constraint weight parameters
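For reference, the quadratic Lyapunov (energy) function mentioned above has the standard Hopfield form

$$E = -\frac{1}{2}\sum_i \sum_j w_{ij} v_i v_j - \sum_i I_i v_i,$$

where $v_i$ are the neuron outputs, $w_{ij}$ the connection weights, and $I_i$ the external (bias) inputs. For optimization, the problem's constraint terms are folded into $w_{ij}$ and $I_i$, which is why the constraint weight parameters matter so much.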

  8. Shortcomings of the Hopfield Network • Constraint weight parameters are set empirically • All weights and connections must be specified in advance • It is difficult to guess suitable weights for large-scale problems • Lacks a mechanism to incorporate the experience gained in prior relaxations • The quality of solutions is not good for the large-scale TSP • Does not scale well with increases in the problem size • An acceptable solution for the Graph Path Search Problem cannot be found

  9. Why the Simultaneous Recurrent Neural Network • The Hopfield Network does not employ any learning that can benefit from prior relaxations • A relaxation-based neural search algorithm that can learn from its own experience is needed • The Simultaneous Recurrent Neural Network: - is a recurrent algorithm - has relaxation search capability - has the ability to learn

  10. Simultaneous Recurrent Neural Network [Figure: block diagram of a feedforward mapping f(·, W) taking inputs X and producing outputs Y, with a feedback path from the outputs back to the inputs] The Simultaneous Recurrent Neural Network is a feedforward network with simultaneous feedback from the outputs of the network to its inputs, without any time delay.
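In equation form, using the diagram's notation for the feedforward mapping f(·, W), the network output is a fixed point of

$$Y = f(X, Y; W),$$

i.e., the outputs appear on both sides of the mapping and are obtained by relaxing the network rather than by a single forward pass.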

  11. Simultaneous Recurrent Neural Network • Follows a trajectory in the state space to relax to a fixed point • The network is provided with the external inputs, and the initial outputs are typically assigned randomly • The output of the previous iteration is fed back to the network along with the external inputs to compute the output of the next iteration • The network iterates until it reaches a stable equilibrium point
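As a concrete illustration, the relaxation described above can be written as a simple loop. This is a minimal sketch in C; the names srn_forward, N_OUT, MAX_RELAX and EPSILON are assumptions for illustration, not names from the thesis simulator.

```c
/* Minimal sketch of the SRN relaxation loop, assuming a generic
 * feedforward mapping y_next = f(x, y; W). */
#include <math.h>
#include <string.h>

#define N_OUT     16      /* number of fed-back output nodes        */
#define MAX_RELAX 1000    /* safety cap on relaxation iterations    */
#define EPSILON   1e-4    /* change threshold for "stable" outputs  */

/* One pass of the feedforward mapping with weights W (defined elsewhere). */
void srn_forward(const double *x, const double *y, double *y_next);

/* Relax to a fixed point: feed outputs back until they stop changing.
 * Returns the number of iterations used, or -1 if no equilibrium was found. */
int srn_relax(const double *x, double *y)
{
    double y_next[N_OUT];
    for (int k = 0; k < MAX_RELAX; ++k) {
        srn_forward(x, y, y_next);            /* outputs fed back without delay */
        double diff = 0.0;
        for (int i = 0; i < N_OUT; ++i)
            diff += fabs(y_next[i] - y[i]);
        memcpy(y, y_next, sizeof y_next);
        if (diff < EPSILON)                   /* stable equilibrium reached */
            return k + 1;
    }
    return -1;
}
```

The convergence test here simply compares successive outputs; the actual simulator may use a different stopping rule.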

  12. Training of the SRN • Methods available in the literature for training the SRN:  Backpropagation Through Time (BTT), which requires knowledge of the desired outputs throughout the trajectory path  Error Critics (EC), which has no guarantee of yielding exact results in equilibrium  Truncation, which did not provide satisfactory results and needs to be tested further  Recurrent Backpropagation (RBP), which requires knowledge of the desired outputs only at the end of the trajectory path and was therefore chosen to train the SRN
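For orientation, a minimal sketch of a recurrent backpropagation update in the Pineda/Almeida style is given below. The single fully recurrent layer, the logistic sigmoid activation, and the names rbp_update, ETA and ADJOINT_STEPS are illustrative assumptions, not the thesis simulator's actual design.

```c
/* Sketch of recurrent backpropagation for a recurrent layer
 * y_i = sigma(sum_j w[i][j]*y[j] + x_i), trained from the error at its
 * fixed point only. */
#define N             16     /* number of recurrent nodes       */
#define ETA           0.05   /* learning rate                   */
#define ADJOINT_STEPS 200    /* iterations to relax the adjoint */

/* sigma'(u) expressed through y = sigma(u) for the logistic sigmoid */
static double dsig(double y) { return y * (1.0 - y); }

/* y:   fixed point reached by the forward relaxation
 * err: e_i = target_i - y_i at output nodes, 0.0 elsewhere
 * w:   weight matrix, updated in place                         */
void rbp_update(const double y[N], const double err[N], double w[N][N])
{
    double z[N] = {0.0}, z_next[N];

    /* Relax the adjoint (error) system to its own fixed point:
     * z_i = e_i + sum_j sigma'(u_j) * w[j][i] * z_j             */
    for (int step = 0; step < ADJOINT_STEPS; ++step) {
        for (int i = 0; i < N; ++i) {
            double s = err[i];
            for (int j = 0; j < N; ++j)
                s += dsig(y[j]) * w[j][i] * z[j];
            z_next[i] = s;
        }
        for (int i = 0; i < N; ++i)
            z[i] = z_next[i];
    }

    /* Gradient step: delta w_ij is proportional to sigma'(u_i) * z_i * y_j */
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            w[i][j] += ETA * dsig(y[i]) * z[i] * y[j];
}
```

The point visible in the sketch is that only the error at the final fixed point is needed, which is exactly why RBP is attractive compared with BTT for this network.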

  13. Traveling Salesman Problem

  14. Network Topology for the Traveling Salesman Problem [Figure: input layer consisting of the cost matrix (N × N nodes) and the path specification (N × N nodes), one or more hidden layers, and an output layer holding the output array (N × N nodes)]

  15. Error Computation for the TSP [Figure: output matrix with source cities as rows and destination cities as columns] • Constraints used for training the TSP:  Asymmetry of the path traveled  Column inhibition  Row inhibition  Cost of the path traveled  Values of the solution matrix
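A hedged sketch of how such constraint terms are commonly combined into a single penalty-style error is shown below. The specific penalty forms and the weight names g_row, g_col, g_asym and g_cost are assumptions for illustration, not the exact terms used in the thesis.

```c
/* Penalty-style error for an N-city TSP output matrix v, where v[i][j]
 * near 1 means the tour goes from city i to city j. */
#define N 10

double tsp_error(const double v[N][N], const double cost[N][N],
                 double g_row, double g_col, double g_asym, double g_cost)
{
    double e_row = 0.0, e_col = 0.0, e_asym = 0.0, e_cost = 0.0;

    for (int i = 0; i < N; ++i) {
        double row_sum = 0.0, col_sum = 0.0;
        for (int j = 0; j < N; ++j) {
            row_sum += v[i][j];
            col_sum += v[j][i];
            e_asym  += v[i][j] * v[j][i];     /* discourage using i->j and j->i together */
            e_cost  += cost[i][j] * v[i][j];  /* total cost of the selected path         */
        }
        e_row += (row_sum - 1.0) * (row_sum - 1.0);   /* each city left exactly once    */
        e_col += (col_sum - 1.0) * (col_sum - 1.0);   /* each city entered exactly once */
    }
    return g_row * e_row + g_col * e_col + g_asym * e_asym + g_cost * e_cost;
}
```

The row and column terms correspond to the row and column inhibition, the product term penalizes symmetric entries (the asymmetry constraint), and the cost term favors short paths; the "values of the solution matrix" constraint would add a term pushing node values toward 0 or 1.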

  16. Graph Path Search Problem [Figure: example graph showing a source vertex and a destination vertex]

  17. Network Topology for the Graph Path Search Problem [Figure: input layer consisting of the adjacency matrix (N × N nodes), the path specification (N × N nodes), and the cost matrix (N × N nodes), one or more hidden layers, and an output layer holding the output array (N × N nodes)]

  18. Error Computation for the GPSP [Figure: output matrix with source vertices as rows and destination vertices as columns] • Constraints used for training the GPSP:  Asymmetry of the sub-graph  Column inhibition  Row inhibition  Source and target vertex inhibition  Column/row excitation  Row/column excitation  Cost of the solution path  Number of vertices in the path

  19. Simulation:- Software Environment • Language: C, MATLAB 5.2 • GUI: X Window System (X11) • Plotting of graphs: a C program calling MATLAB plotting functions Hardware Environment • SunOS 5.7 on a Sun Ultra machine, 300 MHz • Physical memory (RAM): 1280 MB • Virtual memory (swap): 1590 MB

  20. Simulation:- GUI for Simulator

  21. Simulation:- Initialization • Randomly initialize the weights and the initial outputs (range: 0.0 - 1.0) • Randomly initialize the cost matrix for the TSP (range: 0.0 - 1.0) • Randomly initialize the adjacency matrix (0.0 or 1.0) depending on the connectivity level parameter for the GPSP • For the TSP, values along the diagonal of the cost matrix are clamped to 1.0 to avoid self-looping • For the GPSP, values along the diagonals of the adjacency matrix and the cost matrix are clamped to 1.0 to avoid self-looping • Values of the constraint weight parameters are set depending on the size of the problem
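A minimal sketch of this initialization for the TSP case is shown below; rand01() and init_tsp() are illustrative names, while the 0.0 - 1.0 ranges and the diagonal clamping follow the slide.

```c
/* Random initialization of the TSP cost matrix and initial outputs. */
#include <stdlib.h>

#define N 10

static double rand01(void) { return (double)rand() / (double)RAND_MAX; }

void init_tsp(double cost[N][N], double out[N][N])
{
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            cost[i][j] = rand01();   /* random cost matrix entries in 0.0 - 1.0   */
            out[i][j]  = rand01();   /* random initial output values in 0.0 - 1.0 */
        }
        cost[i][i] = 1.0;            /* clamp the diagonal to 1.0: no self-looping */
    }
}
```

For the GPSP, the same pattern would additionally fill a 0/1 adjacency matrix according to the connectivity level parameter and clamp its diagonal as well.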

  22. Simulation:- Initialization for TSP Initial values of the constraint weight parameters for the TSP and their increments per 30 relaxations

  23. Simulation:- Training Error function vs. Simulation Time for TSP

  24. Simulation:- Results The convergence criterion of the network is checked after every 100 relaxations Criterion: 95% of the active nodes have a value greater than 0.9
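The stated criterion can be expressed as a short check; this is a sketch in C, and treating the off-diagonal entries of the output matrix as the "active" nodes is an assumption made here for illustration.

```c
/* Converged when at least 95% of the active nodes have a value above 0.9. */
#define N 10

int has_converged(const double out[N][N])
{
    int active = 0, high = 0;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            if (i == j)
                continue;            /* diagonal entries are clamped, not active */
            ++active;
            if (out[i][j] > 0.9)
                ++high;
        }
    return 100 * high >= 95 * active;
}
```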

  25. Simulation:- Results Plot of the normalized distance between cities, after the convergence of the network to an acceptable solution, vs. the problem size

  26. Simulation:- Results Plot of Number of Relaxations required for a solution and values of Constraint Weight Parameters gc and gr after 300 Relaxations vs. Problem Size

  27. Simulation:- Initialization for GPSP Initial values of the constraint weight parameters for the GPSP and their increments per 30 relaxations

  28. Simulation:- Results for GPSP The convergence criterion of the network is checked after every 100 relaxations Criterion: active nodes have a value greater than 0.8

  29. Simulation:- Results for GPSP Plot of Number of Relaxations required for a solution and values of Constraint Weight Parameters gi after 300 Relaxations vs. Problem Size

  30. Conclusions • The SRN with RBP was able to find “good quality” solutions, in the range of 0.25-0.35, for the large-scale (40- to 500-city) Traveling Salesman Problem • Solutions were obtained within acceptable computational effort • The normalized distance between cities remained nearly constant as the problem size was varied from 40 to 500 cities • The simulator developed does not require weights to be predetermined before simulation, as is the case with the HN and its derivatives • The initial and incremental values of the constraint weight parameters play a very important role in the training of the network

  31. Conclusions (continued) • Computational effort and memory requirements increased in proportion to the square of the problem size • The SRN with RBP was able to find solutions for the large-scale Graph Path Search Problem in the range of 40 to 500 vertices • The solutions were obtained within acceptable computational effort and time • The computational effort required for the GPSP is 1.1 to 1.2 times more than that of the TSP • The number of relaxations required increased with the problem size • The GPSP was very sensitive to the constraint weight parameters

  32. Conclusions (continued) Thus we can say that the Simultaneous Recurrent Neural Network with the Recurrent Backpropagation training algorithm scaled well, within acceptable computation effort bounds, for large-scale static optimization problems such as the Traveling Salesman Problem and the Graph Path Search Problem.

  33. Recommendations for Future Study • The feasibility of a hardware implementation of the network and algorithm for the TSP should be investigated • More simulations should be performed for the GPSP to determine the effect of changing each constraint weight parameter on the solution • The effect of incorporating a stochastic or probabilistic component into the learning of the network dynamics could also be studied to identify a better approach • A simulation study on the weighted GPSP should be performed for more practical use

  34. Questions ?
