
Application of Back-Propagation neural network in data forecasting

Le Hai Khoi, Tran Duc Minh

Institute of Information Technology – VAST

Ha Noi – Viet Nam


Acknowledgement

The authors express their thanks to Prof. Junzo WATADA, who read this work and gave us valuable comments.


  • Introduction
  • Steps in data forecasting modeling using neural network
  • Determine network’s topology
  • Application
  • Concluding remarks

Figure 1: Data processing (data collecting and analyzing, then neural networks).

  • Neural networks are “Universal Approximators”
  • Finding a suitable model for the data forecasting problem is very difficult and, in reality, may only be done by trial and error
  • We may treat the data forecasting problem as a kind of data processing problem
Steps in data forecasting modeling using neural network

The work involved includes:

* Data pre-processing: determining the data interval (daily, weekly, monthly or quarterly), the data type (technical index or basic index), and the method used to normalize the data (max/min or mean/standard deviation).

* Training: determining the learning rate, momentum coefficient, stop condition, maximum number of cycles, weight randomization, and the sizes of the training set, test set and verification set.

* Network’s topology: determining the number of inputs, the number of hidden layers, the number of neurons in each layer, the number of neurons in the output layer, the transformation functions for the layers, and the error function.

Steps in data forecasting modeling using neural network

The major steps in designing the data forecasting model are as follows:

1. Choosing variables

2. Data collection

3. Data pre-processing

4. Dividing the data set into smaller sets: training, test and verification

5. Determining the network’s topology: number of hidden layers, number of neurons in each layer, number of neurons in the output layer, and the transformation function.

6. Determining the error function



These steps need not be performed strictly in sequence. We may return to earlier steps, especially the training and variable-choosing steps: if the chosen variables give unexpected results during design, we need to choose another set of variables and repeat the training step.

Choosing variables and Data collection
  • Determine which variables are related, directly or indirectly, to the data we need to forecast.
  • If a variable has no effect on the value of the data to be forecast, it should be excluded from consideration.
  • Conversely, if a variable is directly or indirectly relevant, it should be taken into consideration.

Then collect the data involved with the chosen variables.

Data pre-processing

Analyze and transform the values of the input and output data to emphasize the important features and to detect the trends and the distribution of the data.

Normalize the real input and output values into the interval between the max and min of the transformation function (usually [0, 1] or [-1, 1]). The most popular methods are the following:

SV = 0.1 + ((0.9 - 0.1) / (MAX_VAL - MIN_VAL)) * (OV - MIN_VAL)


SV = TFmin + ((TFmax - TFmin) / (MAX_VAL - MIN_VAL)) * (OV - MIN_VAL)


SV: Scaled Value

MAX_VAL: Max value of data

MIN_VAL: Min value of data

TFmax: Max of transformation function

TFmin: Min of transformation function

OV: Original Value
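The two normalization formulas above can be sketched as plain functions. The function names are ours; the variable names follow the slide (OV = original value, SV = scaled value), and the example data are illustrative:

```python
def scale_01_09(ov, max_val, min_val):
    """First formula: SV = 0.1 + ((0.9 - 0.1) / (MAX_VAL - MIN_VAL)) * (OV - MIN_VAL)."""
    return 0.1 + (0.9 - 0.1) / (max_val - min_val) * (ov - min_val)

def scale_to_interval(ov, max_val, min_val, tf_min, tf_max):
    """Second formula: SV = TFmin + ((TFmax - TFmin) / (MAX_VAL - MIN_VAL)) * (OV - MIN_VAL)."""
    return tf_min + (tf_max - tf_min) / (max_val - min_val) * (ov - min_val)

data = [12.0, 20.0, 36.0]
# the minimum maps to 0.1 and the maximum maps to 0.9
scaled = [scale_01_09(v, max(data), min(data)) for v in data]
```

The second form targets any transformation-function range, e.g. `tf_min=-1, tf_max=1` for a tanh-style function.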

Dividing patterns set
  • Divide the whole pattern set into smaller sets:
  • Training set
  • Test set
  • Verification set
  • The training set is usually the biggest set and is employed in training the network. The test set, often 10% to 30% of the size of the training set, is used to test generalization. The size of the verification set balances the need for enough verification patterns against the needs of training and testing.
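The division described above can be sketched as follows. The 70/20/10 proportions and the function name are illustrative assumptions, not values from the slides; they simply respect the guideline that the test set is a fraction of the training set:

```python
def split_patterns(patterns, train_frac=0.7, test_frac=0.2):
    """Split a pattern set into training, test, and verification subsets."""
    n = len(patterns)
    n_train = int(n * train_frac)
    n_test = int(n * test_frac)
    train = patterns[:n_train]
    test = patterns[n_train:n_train + n_test]
    verification = patterns[n_train + n_test:]  # the remainder
    return train, test, verification

train, test, verif = split_patterns(list(range(100)))
# 70 training patterns, 20 test patterns, 10 verification patterns
```

For time-series forecasting the split is usually chronological, as here, rather than shuffled.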
Determining network’s topology

This step determines the links between neurons, the number of hidden layers, and the number of neurons in each layer.

1. How the neurons in the network are connected to each other.

2. The number of hidden layers should not exceed two.

3. There is no method to find the optimal number of neurons for the hidden layers.

=> Issues 2 and 3 can only be settled by trial and error, since they depend on the problem we are dealing with.

Determining the error function
  • Estimates the network’s performance before and after the training process.
  • The function used in evaluation is usually the mean squared error. Other options include least absolute deviation, percentage differences, asymmetric least squares, etc.
  • Performance index
  • F(x) = E[eTe] = E[(t - a)T(t - a)]
  • Approximate performance index
  • F(x) = eT(k)e(k) = (t(k) - a(k))T(t(k) - a(k))
  • The final quality measure is usually the Mean Absolute Percentage Error (MAPE).
  • Training tunes a neural network by adjusting the weights and biases in the hope of reaching the global minimum of the performance index, or error function.
  • When should the training process stop?
  • It could stop only when there is no noticeable progress of the error function against data based on a randomly chosen parameter set.
  • It could regularly examine the generalization ability of the network by checking it after a pre-determined number of cycles.
  • A hybrid solution is to have a monitoring tool, so we can stop the training process manually or let it run until there is no noticeable progress.
  • The result of examining a neural network against the verification set is the most convincing, since it is a result obtained directly from the network after training.
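The two error measures named above can be sketched in a few lines. These helpers and the example values are ours, not from the slides:

```python
def mse(targets, outputs):
    """Mean squared error: the average of (t - a)^2 over all patterns."""
    return sum((t - a) ** 2 for t, a in zip(targets, outputs)) / len(targets)

def mape(targets, outputs):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 / len(targets) * sum(
        abs((t - a) / t) for t, a in zip(targets, outputs))

t = [100.0, 200.0]   # targets
a = [110.0, 190.0]   # network outputs
# mse(t, a) == 100.0 and mape(t, a) == 7.5
```

MSE is the quantity the training algorithm minimizes; MAPE is scale-free, which is why it is popular as the final forecasting-quality measure.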

This is the last step, after we have determined the factors related to the network’s topology, variable choice, etc.

1. Which environment: electronic circuits or a PC.

2. The interval at which to re-train the network: this may depend on time as well as other factors related to our problem.

Determine network’s topology

Multi-layer feed-forward neural networks







a2 = f2( W2 f1 (W1p + b1) + b2)

Figure 2: Multi-layer feed-forward neural networks


p: input vector (column vector)

Wi: weight matrix of the neurons in layer i (Si × Ri: Si rows (neurons), Ri columns (number of inputs))

bi: bias vector of layer i (Si × 1, one per neuron)

ni: net input of layer i (Si × 1)

fi: transformation (activation) function of layer i

ai: net output of layer i (Si × 1)

Σ: sum function

i = 1 .. N, where N is the total number of layers.
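The two-layer computation a2 = f2(W2 f1(W1p + b1) + b2) from Figure 2 can be sketched directly in NumPy. The dimensions, the random weights, and the choice of a logistic sigmoid for f1 and identity for f2 are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

R = 3   # number of inputs
S1 = 4  # neurons in layer 1 (hidden)
S2 = 1  # neurons in layer 2 (output)

rng = np.random.default_rng(0)
W1 = rng.standard_normal((S1, R))   # S1 x R weight matrix
b1 = rng.standard_normal((S1, 1))   # S1 x 1 bias vector
W2 = rng.standard_normal((S2, S1))
b2 = rng.standard_normal((S2, 1))

p = np.ones((R, 1))                 # input column vector
a1 = sigmoid(W1 @ p + b1)           # layer-1 output: f1(W1 p + b1)
a2 = W2 @ a1 + b2                   # network output: f2 is identity here
```

Each layer is just a matrix-vector product plus a bias, passed through that layer's transformation function.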

Determine training algorithm and network’s topology

Figure 3: Multi-layered feed-forward neural network layout (input layer, hidden layers, output layer).

The transfer function is a sigmoid, or any squashing function that is differentiable:

ƒ(x) = 1 / (1 + e^(-δx)), with derivative ƒ’(x) = δ ƒ(x)(1 - ƒ(x))
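The sigmoid transfer function and its derivative can be written as a quick sketch; the slope parameter δ defaults to 1 here, which is an assumption:

```python
import math

def f(x, delta=1.0):
    """Logistic sigmoid with slope parameter delta: 1 / (1 + e^(-delta * x))."""
    return 1.0 / (1.0 + math.exp(-delta * x))

def f_prime(x, delta=1.0):
    """Derivative of f, expressible through f itself: delta * f(x) * (1 - f(x))."""
    fx = f(x, delta)
    return delta * fx * (1.0 - fx)

# f(0.0) == 0.5 and f_prime(0.0) == 0.25 for delta = 1
```

This derivative-through-the-output property is what makes the sigmoid convenient in back-propagation: the layer output a1 already contains everything needed to compute the local gradient.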

Back-propagation algorithm

Step 1: Feed the inputs forward through the network:

a0 = p

am+1 = fm+1 (Wm+1 am + bm+1), where m = 0, 1, ..., M - 1.

a = aM

Step 2: Back-propagate the sensitivities (errors):

sM = -2 F’M(nM)(t - a) at the output layer,

sm = F’m(nm)(Wm+1)T sm+1 at the hidden layers, where m = M - 1, ..., 2, 1.

Step 3: Finally, the weights and biases are updated by the following formulas:

Wm(k + 1) = Wm(k) - α sm(am-1)T,

bm(k + 1) = bm(k) - α sm.

(Details on constructing the algorithm and other related issues can be found in the textbook Neural Network Design.)
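The three steps above can be sketched for a one-hidden-layer network. The network size, training data, learning rate, and the choice of sigmoid hidden units with a linear output layer are all illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(p, t, W1, b1, W2, b2, alpha=0.1):
    # Step 1: feed forward
    a0 = p
    a1 = sigmoid(W1 @ a0 + b1)
    a2 = W2 @ a1 + b2                    # linear output layer: a = aM

    # Step 2: back-propagate the sensitivities
    s2 = -2 * (t - a2)                   # output layer (F' = identity for linear f)
    F1 = np.diagflat(a1 * (1 - a1))      # diagonal matrix of sigmoid derivatives
    s1 = F1 @ W2.T @ s2                  # hidden layer

    # Step 3: update weights and biases
    W2 = W2 - alpha * s2 @ a1.T
    b2 = b2 - alpha * s2
    W1 = W1 - alpha * s1 @ a0.T
    b1 = b1 - alpha * s1
    return W1, b1, W2, b2, a2

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((3, 2)), rng.standard_normal((3, 1))
W2, b2 = rng.standard_normal((1, 3)), rng.standard_normal((1, 1))
p, t = np.array([[0.5], [-0.3]]), np.array([[1.0]])
for _ in range(200):
    W1, b1, W2, b2, a2 = backprop_step(p, t, W1, b1, W2, b2)
# after repeated steps, the output a2 approaches the target t
```

Training on one pattern like this just demonstrates the mechanics; a real run loops over the whole training set each cycle.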

Using Momentum

This is a heuristic method based on observation of training results.

The standard back-propagation algorithm updates the weights by the following increments:

∆Wm(k) = -α sm(am-1)T,

∆bm(k) = -α sm.

When a momentum coefficient γ is used, these equations become:

∆Wm(k) = γ ∆Wm(k - 1) - (1 - γ) α sm(am-1)T,

∆bm(k) = γ ∆bm(k - 1) - (1 - γ) α sm.
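The momentum update can be sketched for a single weight. The function name and the values of γ and α are illustrative assumptions:

```python
def momentum_update(delta_prev, s, a_prev_T, gamma=0.8, alpha=0.1):
    """dW(k) = gamma * dW(k-1) - (1 - gamma) * alpha * s * a(m-1)^T."""
    return gamma * delta_prev - (1 - gamma) * alpha * s * a_prev_T

# Scalar illustration: with a constant gradient the update settles at the
# same steady-state step -alpha * s that plain back-propagation would take,
# but the trajectory is smoothed, filtering out oscillations.
d = 0.0
for _ in range(50):
    d = momentum_update(d, s=1.0, a_prev_T=1.0)
# d converges toward -alpha * s = -0.1
```

Momentum acts as a low-pass filter on the weight changes: persistent gradient directions accumulate, while step-to-step oscillations cancel.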





In the class diagram:

Arrow: inheritance relation.

Diamond-headed arrow: aggregation relation.

The NEURAL NET class includes components that are instances of the Output Layer and Hidden Layer classes.

The Input Layer is not implemented here, since it does not do any calculation on the input data.
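The class structure the diagram describes might be sketched as follows. All class and attribute names here are our own illustration of the inheritance and aggregation relations, not the authors' actual code:

```python
class Layer:
    """Common base class; the plain arrows denote inheritance from it."""
    def __init__(self, n_neurons):
        self.n_neurons = n_neurons

class HiddenLayer(Layer):
    pass

class OutputLayer(Layer):
    pass

class NeuralNet:
    """Aggregates layer instances (the diamond-headed arrows)."""
    def __init__(self, hidden_sizes, n_outputs):
        # no input-layer object: the input layer performs no computation
        self.hidden = [HiddenLayer(n) for n in hidden_sizes]
        self.output = OutputLayer(n_outputs)

net = NeuralNet(hidden_sizes=[4, 3], n_outputs=1)
```

Omitting an input-layer class is a common design choice: the input vector is just passed straight to the first hidden layer.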

Concluding remarks
  • Determining the major pieces of work is important and realistic. It will help develop more accurate data forecasting systems and give researchers a deeper insight into implementing solutions using neural networks.
  • In fact, successfully applying a neural network depends on three major factors:
    • First, the time needed to choose the variables from a large quantity of data, as well as to pre-process those data;
    • Second, the software should provide functions to examine the generalization ability, help find the optimal number of neurons for the hidden layer, and verify against many input sets;
    • Third, the developers need to consider and examine all the possibilities each time they check the network’s operation with various input sets and network topologies, so that the chosen solution exactly describes the problem and gives the most accurate forecasts.
Thank you for attending



Kitakyushu, 03/2004