
SOMTIME


Presentation Transcript


  1. SOMTIME: AN ARTIFICIAL NEURAL NETWORK FOR TOPOLOGICAL AND TEMPORAL CORRELATION FOR SPATIOTEMPORAL PATTERN LEARNING

  2. SOMTIME • Spatio-Temporal Pattern Recognition. • Architecture 1: Nielsen Threads. • Architecture 2: Recursive Approximation. • Conclusions.

  3. Spatio-Temporal Pattern Recognition • Why spatio-temporal recognition? Many applications: • Speech processing. • Video processing. • Sonar processing. • Radar processing. • All of these signals vary in both space and time.

  4. Spatio-Temporal Pattern Recognition • Why neural networks? • Properties: • Adaptivity: the network adapts to changes in its environment. • Nonlinearity: the speech signal is inherently nonlinear. • Likelihood: the output can be read as a confidence level for the detected signal. • Contextual information: every neuron is affected by the activity of the others.

  5. Spatio-Temporal Pattern Recognition • Two architectures are proposed, sharing the topological analysis but differing in the temporal analysis. • Topological analysis: SOM. • Temporal analysis: • Feedforward accumulation of activity: Nielsen threads. • Recurrent neural network.

  6. Nielsen vs. Recurrent • Nielsen: architecture: SOM + Nielsen thread; recognition: transmission of the impulse; learning: back-propagation (one level). • Recurrent: architecture: SOM + recurrent thread; recognition: convolution of neuron outputs with the desired output; learning: RTRL (Real-Time Recurrent Learning).

  7. Preprocessor for Topology Extraction (SOM)

  8. Kohonen Feature Map Characteristics • Type: feedforward / feedback. • Neuron layers: • 1 input layer. • 1 map layer. • Input value types: binary, real. • Activation function: sigmoid. • Learning method: unsupervised. • Learning algorithm: self-organization. • Mainly used in: • Pattern classification. • Optimization problems. • Simulation.
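
The presentation does not include code; as a point of reference, a minimal self-organizing map along the lines of slide 8 could be sketched as below. The map size, learning rate, neighbourhood schedule, and Gaussian neighbourhood function are illustrative assumptions, not values from the slides.

```python
import numpy as np

def train_som(data, map_rows=10, map_cols=10, epochs=20,
              lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen self-organizing map (unsupervised, competitive learning).

    data: array of shape (n_samples, n_features), e.g. speech feature vectors.
    Returns the trained weight grid of shape (map_rows, map_cols, n_features).
    """
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]
    weights = rng.uniform(size=(map_rows, map_cols, n_features))
    # Grid coordinates of every map neuron, used by the neighbourhood function.
    grid = np.stack(np.meshgrid(np.arange(map_rows), np.arange(map_cols),
                                indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Exponentially decaying learning rate and neighbourhood radius.
            lr = lr0 * np.exp(-step / n_steps)
            sigma = sigma0 * np.exp(-step / n_steps)
            # Winning neuron: the unit whose weight vector is closest to the input.
            dists = np.linalg.norm(weights - x, axis=-1)
            winner = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the winner on the map grid.
            grid_dist = np.linalg.norm(grid - np.array(winner), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            # Move the winner and its neighbours towards the input.
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

def winning_neuron(weights, x):
    """Topology extraction step: map an input vector to its winning map unit."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)
```

The winning-neuron index produced by this preprocessor is what the temporal stage (the Nielsen thread or the recurrent thread) consumes at each time step, as slides 10 and 12 describe.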

  9. Nielsen Threads Architecture • One-directional thread. • There are as many neurons as samples.

  10. Nielsen Threads • Inputs enter the SOM. • The SOM yields a winning neuron. • The thread neurons, excited in order, pass an impulse from left to right until the output is reached.
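
Slides 9-10 describe the thread only at the level of impulse transmission. One possible reading, sketched below, is a chain with one neuron per sample that passes an impulse one position to the right whenever the current SOM winner matches the winner that neuron expects; the matching radius and the binary impulse are assumptions, not details from the slides.

```python
import numpy as np

class NielsenThread:
    """One reading of slides 9-10: a one-directional chain with one neuron per
    sample of a reference sequence. Each neuron stores the SOM winner expected
    at its position; an impulse travels left to right while inputs keep matching."""

    def __init__(self, expected_winners, radius=1.0):
        self.expected = [np.asarray(w, dtype=float) for w in expected_winners]
        self.radius = radius      # assumed matching radius on the map grid
        self.position = 0         # index of the neuron currently holding the impulse

    def reset(self):
        self.position = 0

    def step(self, winner):
        """Feed the SOM winner for one time step; return True once the impulse
        has reached the last neuron, i.e. the whole sequence was recognised."""
        if self.position >= len(self.expected):
            return True
        # Excite the current neuron if this winner is close to the winner
        # that the neuron was trained for.
        dist = np.linalg.norm(np.asarray(winner, dtype=float) -
                              self.expected[self.position])
        if dist <= self.radius:
            self.position += 1    # impulse moves one neuron to the right
        return self.position >= len(self.expected)
```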

  11. Recurrent Architecture • Bi-directional thread. • Interaction is differential.

  12. Recurrent Neural Network • Inputs enter the SOM. • The SOM yields a winning neuron. • The recurrent network, excited in order, yields a sequence of high output values across its neurons.
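
Slide 6 states that recognition in the recurrent architecture is the "convolution of neuron outputs with the desired output". A hedged sketch of that reading follows: run a small recurrent thread over the SOM winner sequence and correlate its outputs against the desired output template. The tanh units and the peak-correlation score are assumptions.

```python
import numpy as np

def recurrent_thread_outputs(Wa, Wb, C, winners):
    """Run a small recurrent thread over the sequence of SOM winners (slide 12).
    Wa: state-to-state weights, Wb: input-to-state weights, C: readout matrix.
    Uses tanh units; the nonlinearity is an assumption, not stated on the slides."""
    x = np.zeros(Wa.shape[0])
    outputs = []
    for u in winners:
        x = np.tanh(Wa @ x + Wb @ np.asarray(u, dtype=float))
        outputs.append(C @ x)
    return np.array(outputs)          # shape (T, n_outputs)

def recognition_score(outputs, desired):
    """Slide-6 recognition rule read literally: correlate (convolve) the thread's
    output sequence with the desired output and take the peak as the match score."""
    score = 0.0
    for k in range(outputs.shape[1]):
        score = max(score,
                    np.max(np.correlate(outputs[:, k], desired[:, k], mode="full")))
    return score
```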

  13. Training the Net • Unsupervised stage: the SOM. • Supervised stage: the thread.

  14. Training the Nielsen Thread • The training is performed on a neuron-by-neuron (sample-by-sample) basis. • Each neuron is trained individually on its corresponding sample.
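
Slide 6 lists the Nielsen learning rule as back-propagation (one level). A minimal sketch of one-level training, applied to each thread neuron on its own sample as slide 14 describes, might look like this; the sigmoid unit (consistent with slide 8), the squared-error loss, and the learning rate are assumptions.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_thread_neuron(w, samples, targets, lr=0.1, epochs=100):
    """One-level back-propagation (delta rule) for a single thread neuron,
    trained only on its own sample(s)."""
    w = np.array(w, dtype=float)
    for _ in range(epochs):
        for x, d in zip(samples, targets):
            x = np.asarray(x, dtype=float)
            y = sigmoid(w @ x)
            # Gradient of the squared error through one sigmoid layer.
            w += lr * (d - y) * y * (1.0 - y) * x
    return w

def train_nielsen_thread(weights, sample_per_neuron, target_per_neuron, lr=0.1):
    """Train each neuron of the thread independently on its corresponding sample."""
    return [train_thread_neuron(w, [x], [d], lr=lr)
            for w, x, d in zip(weights, sample_per_neuron, target_per_neuron)]
```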

  15. Training the Recurrent Thread • RTRL: Real-Time Recurrent Learning. • The architecture has to be transformed into its canonical form so that the algorithm can be applied.

  16. Training the Recurrent Thread • RTRL: Real-Time Recurrent Learning. • Canonical form.
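
The slide itself only shows the diagram of the canonical form. For reference, the standard state-space canonical form of a recurrent network, consistent with the symbols $W_a$, $U_j$, $C$ and $x(n)$ used on the next slide, is:

```latex
\begin{aligned}
x(n+1) &= \varphi\big(W_a\, x(n) + W_b\, u(n)\big) && \text{(state equation)} \\
y(n)   &= C\, x(n)                                 && \text{(output equation)}
\end{aligned}
```

where $x(n)$ is the state vector, $u(n)$ the input, $W_a$ and $W_b$ the recurrent and input weight matrices, and $C$ the readout matrix.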

  17. Training the Recurrent Thread • RTRL: Real-Time Recurrent Learning.
  Initialization:
  1. Set the synaptic weights of the algorithm to small values selected from a uniform distribution.
  2. Set the initial value of the state vector, $x(0) = 0$.
  3. Set $\Lambda_j(0) = 0$ for $j = 1, 2, \dots, \dim(\text{state space})$, where $\Lambda_j(n)$ is the sensitivity of the state vector $x(n)$ to the $j$-th weight vector.
  Computation: for $n = 0, 1, 2, \dots$, compute
  $\Lambda_j(n+1) = \Phi(n)\,\big[\,W_a(n)\,\Lambda_j(n) - W'_a(n)\,\Lambda_j(n-1) + U_j(n)\,\big]$
  $e(n) = d(n) - y(n) = d(n) - C\,x(n)$
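
For reference, a minimal NumPy sketch of the textbook RTRL recursion is given below. It implements the standard single-matrix form, without the extra $W'_a(n)\,\Lambda_j(n-1)$ term that this slide's variant includes; the tanh nonlinearity, the fixed linear readout, and the learning rate are assumptions.

```python
import numpy as np

def rtrl_train(inputs, targets, q=5, lr=0.05, seed=0):
    """Textbook Real-Time Recurrent Learning for a fully recurrent layer.

    inputs:  sequence of input vectors u(n), shape (T, m)
    targets: sequence of desired outputs d(n), shape (T, p)
    q:       number of state neurons
    Returns the trained weight matrix W of shape (q, q + m) and the readout C.
    """
    rng = np.random.default_rng(seed)
    T, m = inputs.shape
    p = targets.shape[1]
    # Step 1: small random weights drawn from a uniform distribution.
    W = rng.uniform(-0.1, 0.1, size=(q, q + m))
    C = rng.uniform(-0.1, 0.1, size=(p, q))      # fixed linear readout y(n) = C x(n)
    # Step 2: initial state x(0) = 0.  Step 3: sensitivities dx/dW set to 0.
    x = np.zeros(q)
    P = np.zeros((q, q, q + m))                  # P[k, i, j] = d x_k / d W_ij
    for n in range(T):
        z = np.concatenate([x, inputs[n]])       # concatenated state and input
        v = W @ z
        x_new = np.tanh(v)
        phi_prime = 1.0 - x_new ** 2             # derivative of tanh at v
        # Sensitivity recursion: P <- Phi(n) [ W_a P + U(n) ]
        recur = np.einsum("kl,lij->kij", W[:, :q], P)
        inject = np.einsum("ki,j->kij", np.eye(q), z)
        P = phi_prime[:, None, None] * (recur + inject)
        # Error e(n) = d(n) - C x(n) and on-line gradient step on the weights.
        e = targets[n] - C @ x_new
        dy_dW = np.einsum("ok,kij->oij", C, P)
        W += lr * np.einsum("o,oij->ij", e, dy_dW)
        x = x_new
    return W, C
```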

  18. Why Nielsen or Recurrent? Which way? • Nielsen: easy training; intuitive dynamics; but too many neurons per thread. • Recurrent: a small thread provides the functionality; but complicated training and complicated dynamics. • We could think of the recurrent architecture as a generalization of the Nielsen thread.

  19. Next Steps • Learning interaction with HMMs. • A suitable recognition figure for continuous speech recognition. • Control of convergence. • Improving the training set.
