RECENT DEVELOPMENTS IN MULTILAYER PERCEPTRON NEURAL NETWORKS
Walter H. Delashmit, Lockheed Martin Missiles and Fire Control, Dallas, TX 75265, [email protected], [email protected]
Michael T. Manry, The University of Texas at Arlington, Arlington, TX 76010, [email protected]
Memphis Area Engineering and Science Conference 2005
May 11, 2005
Net control scales and shifts all net functions so that they do not generate small gradients and so that large inputs do not mask the potential effects of small inputs.
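The slide does not give the net-control formulas, so the following is only a minimal sketch of the idea: rescale each hidden unit's input weights and threshold so that, over the training data, its net function has a chosen mean and standard deviation. All names and the target statistics are illustrative assumptions.

```python
import numpy as np

def net_control(W, b, X, target_mean=0.0, target_std=1.0):
    """Scale and shift each hidden unit's net function so that, over the
    training inputs X, it has a chosen mean and standard deviation.
    This keeps units out of the flat (small-gradient) region of the
    activation and stops large inputs from swamping small ones.

    W : (Nh, N) input-to-hidden weights, b : (Nh,) thresholds,
    X : (Nv, N) training inputs.  Names are illustrative, not the paper's.
    """
    net = X @ W.T + b                 # (Nv, Nh) net function values
    mean = net.mean(axis=0)
    std = net.std(axis=0)
    std[std == 0] = 1.0               # guard degenerate (constant) units
    scale = target_std / std          # per-unit scale factor
    W_new = W * scale[:, None]        # scaling the weights scales the net
    b_new = (b - mean) * scale + target_mean
    return W_new, b_new
```

After this transform the new net function equals scale·(net − mean) + target_mean, so its per-unit mean and standard deviation over X are exactly the targets.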
Eint(0) ≥ Eint(1) ≥ Eint(2) ≥ … ≥ Eint(Nhmax), and train the network to minimize Ef(Nh) such that
Ef(0) ≥ Ef(1) ≥ Ef(2) ≥ … ≥ Ef(Nhmax)
Each CSPI network starts with the same initial random number seed (IRNS).
DI network design (flowchart):
1. Create an initial network with Nh hidden units.
2. Train this initial network.
3. Set Nh ← Nh + p. If Nh > Nhmax, stop.
4. Initialize the new hidden units, Nh − p + 1 ≤ j ≤ Nh:
   - woh(k, j) ← 0, 1 ≤ k ≤ M
   - whi(j, i) ← RN(ind++), 1 ≤ i ≤ N + 1
   - Apply net control to whi(j, i), 1 ≤ i ≤ N + 1
5. Train the new network and return to step 3.
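The growth loop in the flowchart can be sketched as below. The training routine is a user-supplied placeholder (hypothetical here), and RN() is modeled by a seeded NumPy generator so that added units draw from a reproducible random stream; matrix shapes and names are assumptions for illustration.

```python
import numpy as np

def grow_di_network(N, M, Nh0, p, Nh_max, train, seed=0):
    """Sketch of the dependently initialized (DI) growth loop.

    N inputs (+1 for the threshold), M outputs; start with Nh0 hidden
    units and add p units per step until Nh would exceed Nh_max.
    `train` takes and returns (Whi, Woh) and stands in for MLP training.
    """
    rng = np.random.default_rng(seed)
    Nh = Nh0
    Whi = rng.normal(size=(Nh, N + 1))     # input-to-hidden weights
    Woh = rng.normal(size=(M, Nh))         # hidden-to-output weights
    Whi, Woh = train(Whi, Woh)             # train the initial network
    while True:
        Nh += p
        if Nh > Nh_max:
            break
        # new hidden units j = Nh - p + 1 .. Nh (1-based in the slides):
        new_Whi = rng.normal(size=(p, N + 1))    # whi(j,i) <- RN(ind++)
        Whi = np.vstack([Whi, new_Whi])
        Woh = np.hstack([Woh, np.zeros((M, p))]) # woh(k,j) <- 0
        # (net control would be applied to the new rows of Whi here)
        Whi, Woh = train(Whi, Woh)         # train the grown network
    return Whi, Woh
```

Because the output weights of new units start at zero, each grown network initially computes the same mapping as its trained predecessor, which is what makes the DI error sequence non-increasing.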
(1) DI network: standard DI network design for Nh hidden units.
(2) RI type 1: a single RI network was designed for each value of Nh, and each network of size Nh was trained for the same number of iterations Niter as the corresponding DI network.
(3) RI type 2: a single RI network was designed for each value of Nh, and each network was trained for the total number of iterations Niter used over the entire sequence of DI networks, i.e. Niter(RI type 2) = Σm Niter,DI(m), where the sum runs over all networks in the DI sequence. As a result, each RI type 2 network is trained with a larger value of Niter than the corresponding DI network.
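The difference between the two RI iteration budgets can be illustrated with a short calculation; the per-network DI iteration counts below are made-up values, used only to show the bookkeeping.

```python
# Illustrative iteration budgets for the two RI training schedules.
# di_iters[m] is the number of iterations Niter used for the m-th network
# in the DI sequence (values invented for illustration).
di_iters = [100, 80, 60, 50]

# RI type 1: each RI network of size Nh reuses the matching DI count.
ri1_iters = list(di_iters)

# RI type 2: every RI network is trained for the total used by the whole
# DI sequence, so its Niter is always at least as large.
total = sum(di_iters)
ri2_iters = [total] * len(di_iters)

assert all(r2 >= r1 for r1, r2 in zip(ri1_iters, ri2_iters))
```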
Bottom-Up Separating Mean
1. Generate a new desired output vector.
2. Generate linear mapping results.
3. Train the MLP using the new data.
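The slide gives only the three step names, not the transformations themselves, so the sketch below is a loose interpretation: build new desired outputs whose per-class means are pushed apart by an assumed margin `delta`, fit a least-squares linear mapping to them, and hand the new data to MLP training. Everything beyond the three step names is an assumption for illustration.

```python
import numpy as np

def separating_mean_targets(X, labels, delta=1.0):
    """Loose sketch of the bottom-up separating-mean steps.

    X : (Nv, N) inputs, labels : (Nv,) integer class labels.
    The target construction (separating class means by `delta`) is an
    assumption; the slide does not specify the exact transformation.
    """
    classes = np.unique(labels)
    # Step 1: generate a new desired output vector (one column per class,
    # correct-class entries raised by delta so class means separate).
    T = np.zeros((X.shape[0], classes.size))
    for c_idx, c in enumerate(classes):
        T[labels == c, c_idx] = delta
    # Step 2: generate linear mapping results (least-squares fit X -> T).
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])  # augment with bias
    W, *_ = np.linalg.lstsq(Xa, T, rcond=None)
    Y_lin = Xa @ W
    # Step 3 (not shown): train the MLP using the new data (X, T),
    # e.g. starting from the linear mapping whose outputs are Y_lin.
    return T, Y_lin
```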