
Neural Networks - Omri Winner, Dor Bracha


Presentation Transcript


  1. Neural Networks Omri Winner, Dor Bracha

  2. What is a neural network? Collection of interconnected neurons that compute and generate impulses. Components of a neural network include neurons, synapses, and activation functions. Neural networks modeled by mathematical or computer models are referred to as artificial neural networks. Neural networks achieve functionality through learning/training functions. Introduction

  3. N inputs: x1, ..., xn. • Weights for each input: w1, ..., wn. • A bias input x0 (constant) and an associated weight w0. • The output y is a weighted sum of the inputs: y = w0x0 + w1x1 + ... + wnxn. • A threshold function, e.g. 1 if y > 0, else 0. Perceptron

  4. [Diagram: a perceptron with inputs x0, x1, ..., xn and weights w0, w1, ..., wn feeding a summation unit computing y = Σ wixi, followed by a threshold that outputs 1 if y > 0 and -1 otherwise] Diagram
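A minimal Python sketch of the perceptron described on slides 3 and 4; the function name and example values below are illustrative, not taken from the slides:

    # Minimal perceptron forward pass: weighted sum of inputs plus bias,
    # followed by a hard threshold (1 if y > 0, -1 otherwise, as in the diagram).
    def perceptron_output(weights, inputs, bias_weight):
        # y = w0*x0 + w1*x1 + ... + wn*xn, with the bias input x0 fixed to 1
        y = bias_weight * 1.0 + sum(w * x for w, x in zip(weights, inputs))
        return 1 if y > 0 else -1

    # Example: two inputs with hand-picked weights
    print(perceptron_output([0.5, -0.4], [1.0, 2.0], bias_weight=0.1))  # -> -1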

  5. The key to neural networks is the weights: they determine the importance of each signal and essentially determine the network's output. • Training cycles adjust the weights to improve the performance of the network, so an ANN's performance is critically dependent on how well it is trained. • An untrained network is basically useless. • Different training algorithms lend themselves better to certain network topologies, but some also lend themselves to parallelization. Learning/Training
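The slides do not name a specific training algorithm; as one common example, the classic perceptron learning rule adjusts each weight after every misclassified example. A rough sketch, with all names and the toy data set chosen for illustration:

    # Perceptron learning rule: nudge each weight toward the correct output
    # whenever the current prediction is wrong (a standard example of a
    # training cycle that adjusts weights; not necessarily what the slides used).
    def train_perceptron(samples, n_inputs, learning_rate=0.1, cycles=20):
        weights = [0.0] * n_inputs
        bias = 0.0
        for _ in range(cycles):                      # training cycles
            for inputs, target in samples:           # target is +1 or -1
                y = bias + sum(w * x for w, x in zip(weights, inputs))
                predicted = 1 if y > 0 else -1
                if predicted != target:              # adjust weights only on errors
                    for i, x in enumerate(inputs):
                        weights[i] += learning_rate * target * x
                    bias += learning_rate * target
        return weights, bias

    # Example: learn the logical AND of two inputs (+1 only when both are 1)
    data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
    print(train_perceptron(data, n_inputs=2))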

  6. Network topologies describe how neurons map to each other. • Many artificial neural networks (ANNs) utilize network topologies that are very parallelizable. Network Topologies

  7. [Diagram: example network structure with inputs x1, x2, ..., x960, 3 hidden units, and an output layer] Network structure

  8. Training and evaluation of each node in large networks can take an incredibly long time. • However, neural networks are "embarrassingly parallel". • Computations for each node are generally independent of all other nodes. Why Parallel Computing?
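Because these per-node and per-example computations are independent, they map naturally onto multiple processes. A minimal sketch using Python's multiprocessing module; the toy network and helper names are assumptions for illustration:

    # Each example can be evaluated independently of the others, so the work
    # maps directly onto a pool of worker processes.
    from multiprocessing import Pool

    WEIGHTS = [0.5, -0.4]

    def forward(inputs):
        # Independent per-example computation: weighted sum + threshold
        y = sum(w * x for w, x in zip(WEIGHTS, inputs))
        return 1 if y > 0 else -1

    if __name__ == "__main__":
        examples = [[1.0, 2.0], [3.0, 0.5], [0.2, 0.1], [2.0, 2.0]]
        with Pool(processes=4) as pool:
            outputs = pool.map(forward, examples)   # evaluated in parallel
        print(outputs)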

  9. Typical structure of ANN training: • For each training session • For each training example in the session • For each layer (forward and backward) • For each neuron in the layer • For all weights of the neuron • For all bits of the weight value Possible Levels of Parallelization
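The nesting above can be written out as a loop skeleton, where each "for" level is a candidate for parallelization. This is structure only; the placeholder names are ours and no real network is trained:

    # Skeleton of the loop nest from slide 9; each 'for' is a possible
    # level of parallelization. The bodies are placeholders, not a real ANN.
    def train(sessions, layers):
        for session in sessions:                    # per training session
            for example in session:                 # per training example
                for layer in layers:                # per layer (forward and backward)
                    for neuron in layer:            # per neuron in the layer
                        for i in range(len(neuron["weights"])):  # per weight
                            pass                    # (bit-level parallelism sits below
                                                    #  the granularity of Python code)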

  10. ● Multilayer feedforward networks: a layered network where each layer of neurons only outputs to the next layer. ● Feedback networks: a single layer of neurons with outputs that loop back to the inputs. ● Self-organizing maps (SOM): form mappings from a high-dimensional space to a lower-dimensional space (one layer of neurons). ● Sparse distributed memory (SDM): a special form of two-layer feedforward network that works as an associative memory. Most Commonly Used Algorithms
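For the first item, a feedforward pass simply chains layers, each layer consuming the previous layer's outputs. A toy two-layer sketch; the sigmoid activation, layer sizes, and weights are arbitrary choices, not from the slides:

    import math

    def layer_forward(inputs, weight_matrix, biases):
        # Each neuron outputs a squashed weighted sum of the previous layer.
        return [
            1.0 / (1.0 + math.exp(-(b + sum(w * x for w, x in zip(row, inputs)))))
            for row, b in zip(weight_matrix, biases)
        ]

    # Two-layer feedforward network: 3 inputs -> 2 hidden units -> 1 output
    hidden_w = [[0.2, -0.1, 0.4], [0.7, 0.3, -0.5]]
    hidden_b = [0.0, 0.1]
    output_w = [[1.0, -1.0]]
    output_b = [0.0]

    x = [0.5, 0.9, -0.3]
    h = layer_forward(x, hidden_w, hidden_b)   # hidden layer feeds the output layer
    y = layer_forward(h, output_w, output_b)
    print(y)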

  11. A S.O.M. produces a low-dimensional representation of the training samples. • It uses a neighborhood function to preserve the topological properties of the input space. • S.O.M.s are useful for visualization. Self-Organizing Map (S.O.M.)
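A single S.O.M. training step can be sketched in a few lines: find the best-matching unit, then pull it and its grid neighbors toward the input sample. The one-dimensional grid, learning rate, and Gaussian neighborhood below are illustrative assumptions, not taken from the slides:

    import math
    import random

    # One self-organizing map update: find the best-matching unit (BMU), then
    # move it and its grid neighbors toward the input sample.
    def som_update(nodes, sample, learning_rate=0.5, radius=1.0):
        def dist2(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

        bmu = min(range(len(nodes)), key=lambda i: dist2(nodes[i], sample))
        for i, node in enumerate(nodes):
            # Neighborhood function: nearby grid positions move more than far ones
            influence = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
            for d in range(len(node)):
                node[d] += learning_rate * influence * (sample[d] - node[d])

    # Map 3-dimensional samples onto a row of 5 nodes (a low-dimensional grid)
    nodes = [[random.random() for _ in range(3)] for _ in range(5)]
    som_update(nodes, [0.9, 0.1, 0.4])
    print(nodes)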

  12. [Chart: training time of a 25-node SOM as a function of the number of processors used]

  13. [1] Tomas Nordstrom. Highly parallel computers for artificial neural networks. 1995. [2] George Dahl, Alan McAvinney, and Tia Newhall. Parallelizing neural network training for cluster systems. 2008. [3] J.J. Hopfield and D. Tank. Computing with neural circuits. pp. 624-633, 1986. Appendix
