
Cascade-Correlation



Presentation Transcript


  1. Cascade-Correlation By Kranthi & Sudhan

  2. Contents • Motivation • Recollecting back-prop • Cascading architecture • Learning algorithm • Example • Comparison with other systems

  3. Motivation • Curse of dimensionality: fixed topologies must otherwise be guessed by hand • Builds a simple, small network • Determines its own structure during training • A fast learner

  4. Recollect • What is back-propagation? • Problems with this algorithm. • How is CC going to solve these problems?

  5. Cascade-Correlation CC combines two key ideas: • The cascade architecture. • The correlation-based learning algorithm.

  6. Cascade Architecture • Begins with some inputs and one or more outputs. • Every input is connected to every output. • The bias is permanently set to +1. (A minimal sketch of this initial network follows below.)
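As a concrete illustration, here is a minimal sketch of this stage-1 architecture in Python/NumPy; the helper names are ours, not from the slides:

    import numpy as np

    rng = np.random.default_rng(0)

    def with_bias(X):
        # Append the bias input, permanently set to +1.
        return np.hstack([X, np.ones((X.shape[0], 1))])

    def init_output_weights(n_features, n_out):
        # Minimal net: every input (and the bias) is connected to every output.
        return rng.uniform(-1.0, 1.0, size=(n_features, n_out))

    # Forward pass of the minimal net, with sigmoidal (tanh) output units:
    #   Y = np.tanh(with_bias(X) @ W)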

  7. Stage 1 (diagram): inputs x0, x1, x2 fully connected to outputs y1, y2.

  8. Stage 2 (diagram): a first hidden unit z1 is added between the inputs x0, x1, x2 and the outputs y1, y2.

  9. Stage 3 (diagram): a second hidden unit z2 is added; it receives the inputs and z1, and feeds the outputs y1, y2.

  10. Algorithm (overview) • Train stage 1. If the error is not acceptable, proceed. • Train stage 2. If the error is not acceptable, proceed. • And so on.

  11. Algorithm 1. CC starts with a minimal network consisting only of an input layer and an output layer; the two layers are fully connected. 2. Train all the connections ending at an output unit with a usual learning algorithm until the error of the net no longer decreases (this step is sketched below).
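Fahlman and Lebiere trained these connections with Quickprop; the sketch below substitutes plain gradient descent on the squared error, continuing the NumPy sketch above:

    def train_outputs(W, Xb, T, lr=0.05, max_epochs=500, tol=1e-4):
        # Xb already contains the bias column (and, later, hidden-unit activations).
        # Only the connections ending at output units are trained.
        prev_err = np.inf
        for _ in range(max_epochs):
            Y = np.tanh(Xb @ W)
            E = Y - T                                  # residual error per output
            err = 0.5 * np.sum(E ** 2)
            if prev_err - err < tol:                   # error no longer decreases
                break
            prev_err = err
            W -= lr * (Xb.T @ (E * (1.0 - Y ** 2)))    # gradient step through tanh
        return W, E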

  12. Algorithm 3. Generate the so-called candidate units. Every candidate unit is connected to all input units and to all existing hidden units. Between the pool of candidate units and the output units there are no weights.
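Continuing the sketch, the candidate pool can be represented as one weight vector per candidate; each candidate sees the inputs, the bias, and all existing hidden units, and has no weights to the outputs (the helper name is hypothetical):

    def make_candidate_pool(n_pool, n_features):
        # n_features = number of inputs + 1 (bias) + number of existing hidden units
        return rng.uniform(-1.0, 1.0, size=(n_pool, n_features))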

  13. Algorithm 4. Try to maximize the correlation S between the activation of the candidate units and the residual error of the net by training all the links leading to a candidate unit. Learning takes place with an ordinary learning algorithm. Training is stopped when the correlation score no longer improves.
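In Fahlman and Lebiere's formulation, the score S being maximized is the summed magnitude of the covariance between a candidate's activation V_p and the residual output error E_{p,o}, taken over all training patterns p and output units o:

    S = \sum_o \Bigl| \sum_p (V_p - \bar{V})(E_{p,o} - \bar{E}_o) \Bigr|

where \bar{V} and \bar{E}_o are averages over all patterns. Continuing the NumPy sketch:

    def correlation_score(V, E):
        # V: (n_patterns,) candidate activations; E: (n_patterns, n_out) residual errors.
        Vc = V - V.mean()
        Ec = E - E.mean(axis=0)
        return np.abs(Vc @ Ec).sum()   # one covariance per output, summed in magnitude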

  14. Algorithm In order to maximize S, we compute the partial derivative of S with respect to each candidate unit's incoming weights. 5. Choose the candidate unit with the maximum correlation, freeze its incoming weights, and add it to the net.
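Treating \bar{V} as a constant (as the original paper does), the derivative of S with respect to a candidate's incoming weight w_i is

    \partial S / \partial w_i = \sum_{p,o} \sigma_o (E_{p,o} - \bar{E}_o) f'_p I_{i,p}

where \sigma_o is the sign of the correlation with output o, f'_p is the derivative of the candidate's activation function on pattern p, and I_{i,p} is the input the candidate receives from unit i. Since S is maximized, the update is gradient ascent. Continuing the sketch:

    def candidate_gradient(w, Xb, E):
        # w: (n_features,) incoming weights of one candidate; Xb, E as above.
        V = np.tanh(Xb @ w)
        fprime = 1.0 - V ** 2                    # f'_p for tanh
        Ec = E - E.mean(axis=0)
        sigma = np.sign((V - V.mean()) @ Ec)     # sign of the correlation per output
        return Xb.T @ (fprime * (Ec @ sigma))    # ascend with: w += lr * gradient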

  15. Algorithm 6. To change the candidate unit into a hidden unit, generate links between the selected unit and all the output units. Since the weights leading into the new hidden unit are frozen, a new permanent feature detector is obtained. Loop back to step 2. 7. The algorithm is repeated until the overall error of the net falls below a given value. (A sketch of the whole loop follows below.)
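Putting the steps together, a sketch of the outer loop under the same assumptions as above (plain gradient methods instead of Quickprop, and fixed iteration counts instead of patience-based stopping):

    def cascade_correlation(X, T, n_pool=8, max_hidden=12, target_err=0.1, lr=0.05):
        Xb = with_bias(X)
        W = init_output_weights(Xb.shape[1], T.shape[1])
        for _ in range(max_hidden):
            W, E = train_outputs(W, Xb, T)                     # step 2
            if 0.5 * np.sum(E ** 2) < target_err:              # step 7: stop when good enough
                break
            pool = make_candidate_pool(n_pool, Xb.shape[1])    # step 3
            for w in pool:                                     # step 4: maximize S
                for _ in range(200):
                    w += lr * candidate_gradient(w, Xb, E)
            best = max(pool, key=lambda w: correlation_score(np.tanh(Xb @ w), E))  # step 5
            # Step 6: freeze the winner by baking its activations in as a new input
            # column, then give it fresh, trainable links to every output unit.
            Xb = np.hstack([Xb, np.tanh(Xb @ best)[:, None]])
            W = np.vstack([W, rng.uniform(-1.0, 1.0, size=(1, T.shape[1]))])
        return W, Xb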

  16. Example: the two-spirals problem, in which points on two interleaved spirals must be classified (a classically hard task for back-propagation).

  17. (Figure) Evolution of a 12-hidden-unit solution to the two-spirals problem.

  18. (Figure) Evolution of a 12-hidden-unit solution to the two-spirals problem (continued).

  19. Comparing CC with other learning algorithms • No need to predict the size, depth, and connectivity pattern of the network. • Learns fast, unlike some other algorithms. • At any time we train only one layer of weights in the network, so intermediate results can be cached.

  20. (Figure) Experimental results for the handwritten digits datasets.

  21. (Figure) Experimental results for the severe head injury patients dataset.

  22. (Figure) Experimental results for the land satellite image dataset.

  23. Conclusion • The principal difference between CC and other algorithms is the dynamic creation of hidden units. • This speeds up the learning process considerably.

  24. References • Scott E. Fahlman and Christian Lebiere, "The Cascade-Correlation Learning Architecture", Advances in Neural Information Processing Systems 2, 1990. • D. Michie, D. J. Spiegelhalter, and C. C. Taylor (eds.), Machine Learning, Neural and Statistical Classification, 1994.

  25. Thank you
