
Chapter 1 Introduction to Neurocomputing


  1. Chapter 1 Introduction to Neurocomputing • National Yunlin University of Science and Technology (國立雲林科技大學), Graduate Institute of Computer Science and Information Engineering • Dr. Chuan-Yu Chang (張傳育) • Office: ES 709 • TEL: 05-5342601 ext. 4337 • E-mail: chuanyu@yuntech.edu.tw

  2. What is Neurocomputing? • The neurocomputing approach • Involves a learning process within an ANN • Once a neural network has been trained, it can perform a specific task, e.g., pattern recognition • Associative recall • Why can we (humans) perform certain tasks much better than a digital computer? • Our brain is organized differently • Nerve conduction is about 10^6 times slower than electronic signaling, yet the brain computes with massive parallelism (about 10^11 neurons) • The brain is an adaptive, nonlinear, parallel computer.

  3. What is Neurocomputing? (cont.) • Key capabilities of neural networks: • Learn by example • Generalize • The NN can classify input patterns to an acceptable level of accuracy even if those patterns were never used during the training process. • Most neural networks share similar characteristics: • Parallel computational architecture • Highly interconnected • Nonlinear output

  4. What is Neurocomputing? (cont.) • Applications of neural networks: • Image processing • Prediction and forecasting • Associative memory • Clustering • Speech recognition • Combinatorial optimization • Feature extraction • …

  5. What is Neurocomputing? (cont.) • Advantages of using neurocomputing to solve a given problem: • Fault tolerance • Nonlinearity • Adaptivity

  6. Course Introduction • Textbooks • Fredric M. Ham and Ivica Kostanic, Principles of Neurocomputing for Science & Engineering, McGraw-Hill, 2001. • Simon Haykin, Neural Networks: A Comprehensive Foundation, 2nd Edition, Prentice Hall, 1999.

  7. Contents • Introduction to Neurocomputing • Fundamental Neurocomputing Concepts • Activation Functions • Adaline and Madaline • Perceptron • Multilayer Perceptron • Learning Rules • Mapping Networks • Associative Memory • Backpropagation Learning Algorithm • Counterpropagation • Radial Basis Function Neural Networks

  8. Contents (cont.) • Self-Organizing Networks • Kohonen Self-Organizing Map • Learning Vector Quantization • Adaptive Resonance Theory Neural Networks • Recurrent Networks and Temporal Feedforward Networks • Hopfield Associative Memory • Simulated Annealing • Boltzmann Machine • Time-Delay Neural Networks

  9. Contents (cont.) • Statistical Methods Using Neural Networks • Principal Component Analysis • Independent Component Analysis

  10. Course Requirements • Homework • Supervised learning: digit recognition • Unsupervised learning: Traveling Salesperson Problem (TSP) • Final Project • Presentation • Implementation and demonstration • Final Examination

  11. Please consider whether this course will be helpful to your research.

  12. Historical notes • McCulloch and Pitts (1943) • The neurons are not trained. • They act as fixed logic functions. • Hebb (1949) • Described a learning process from a neurobiological viewpoint. • Hebb stated that information is stored in the connections between neurons and postulated a learning strategy for adjusting the connection weights. • Rosenblatt (1958) • Proposed the concept of the perceptron.

  13. Historical notes (cont.) • Widrow and Hoff (1960) • The Adaline (adaptive linear element), trained by the LMS learning rule. • Minsky and Papert (1969) • Slowed down neural network research by showing that perceptrons have limited capabilities, e.g., they cannot solve the XOR problem. • Kohonen and Anderson (1972) • Content-addressable associative memories. • Werbos (1974) • The first description of the backpropagation algorithm for training multilayer feedforward perceptrons.

  14. Historical notes (cont.) • Little and Shaw (1975) • Used a probabilistic model of a neuron instead of a deterministic one. • Lee (1975) • Presented the fuzzy McCulloch-Pitts neuron model. • Hopfield (1982) • Proposed a recurrent neural network. • The network provides dynamically stable information storage and retrieval. • Kohonen (1982) • Presented the self-organizing feature map. • It is an unsupervised, competitive-learning clustering network in which only one neuron is “on” at a time.

  15. Historical notes (cont.) • Oja (1982) • Presented a single linear neuron trained by a normalized Hebbian learning rule that acts as a principal-component analyzer. • The neuron is capable of adaptively extracting the first principal eigenvector from the input data. • Carpenter and Grossberg (1987) • Developed self-organizing neural networks based on adaptive resonance theory (ART). • Sivilotti, Mahowald, and Mead (1987) • The first VLSI realization of neural networks. • Broomhead and Lowe (1988) • First exploitation of radial basis functions in designing neural networks.

  16. McCulloch-Pitts neuron • The McCulloch-Pitts neuron is a two-state device (on/off, binary). • The output of a particular neuron cannot coalesce with the output of another neuron; it can, however, branch to another neuron and terminate as an input to that neuron. • Two types of terminations • Excitatory input • Inhibitory input • There can be any number of inputs to a neuron. • Whether the neuron will be on (firing) or off is determined by the threshold of the neuron.

  17. McCulloch-Pitts neuron • Physical assumptions: • The neuron's activity is an all-or-nothing process, i.e., the activation of the neuron is binary. • A certain fixed number of synapses (>1) must be excited within a period of latent addition for a neuron to be excited. • The only significant delay within the nervous system is synaptic delay. • The activity of any nonzero inhibitory synapse absolutely prevents excitation of the neuron at that time. • The structure of the network does not change with time.

  18. McCulloch-Pitts neuron • Architecture of a McCulloch-Pitts neuron • N excitatory inputs • M inhibitory inputs • One output y, determined by the threshold • To satisfy assumption 4 above, the net input is u = Σ_i x_i w_i

  19. McCulloch-Pitts neuron • Example 1 • The AND logic function can be realized using a McCulloch-Pitts neuron having two synaptic connections with equal weights of 1 and a threshold θ = 2.
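Example 1 can be sketched in Python. The `mp_neuron` helper and its argument names are illustrative conveniences, not notation from the text; it implements the two-state neuron with the weighted sum u = Σ x_i w_i and the absolute inhibitory veto of assumption 4:

```python
def mp_neuron(inputs, weights, threshold, inhibitory=()):
    """McCulloch-Pitts neuron: outputs 1 (fires) when the weighted sum of
    the excitatory inputs reaches the threshold, unless any inhibitory
    input is active (assumption 4: inhibition is an absolute veto)."""
    if any(inhibitory):
        return 0
    u = sum(x * w for x, w in zip(inputs, weights))  # u = Σ x_i w_i
    return 1 if u >= threshold else 0

# AND: two synapses with equal weights of 1 and threshold θ = 2,
# so the neuron fires only when both inputs are 1.
AND = lambda x1, x2: mp_neuron([x1, x2], [1, 1], threshold=2)
```

With weights of 2 and the same threshold θ = 2, the identical neuron computes OR instead, as in Example 2 below.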

  20. McCulloch-Pitts neuron • Example 2: • The OR logic function can be realized using a McCulloch-Pitts neuron having two synaptic connections with equal weights of 2 and a threshold θ = 2.

  21. McCulloch-Pitts neuron • Example 3: • The XOR logic function can be realized using three McCulloch-Pitts neurons, each having two synaptic connections with equal weights of 2 and a threshold θ = 2.
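Example 3 can be sketched as follows. This uses the standard three-neuron decomposition XOR = (x1 AND NOT x2) OR (x2 AND NOT x1), where NOT is supplied by an inhibitory input; the slide does not spell out the wiring, so treat the z1/z2 layout as one plausible construction, with every neuron using weight 2 and threshold θ = 2:

```python
def mp_neuron(inputs, weights, threshold, inhibitory=()):
    # McCulloch-Pitts neuron; any active inhibitory input is an
    # absolute veto (assumption 4), otherwise fire iff Σ x_i w_i ≥ θ.
    if any(inhibitory):
        return 0
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def xor(x1, x2):
    z1 = mp_neuron([x1], [2], 2, inhibitory=[x2])  # x1 AND NOT x2
    z2 = mp_neuron([x2], [2], 2, inhibitory=[x1])  # x2 AND NOT x1
    return mp_neuron([z1, z2], [2, 2], 2)          # z1 OR z2
```

The hidden pair z1, z2 is exactly what a single-layer perceptron lacks, which is why XOR reappears in the Minsky and Papert discussion above.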

  22. Neurocomputing and Neuroscience • Biological neural networks • The nervous system is a huge and complex neural network, with the brain at its center. The nervous system connects to the sensory organs, relays sensory information to the brain, and sends behavioral commands to the effectors. • The brain consists of roughly 10^11 neurons. These neurons interconnect into subnetworks called nuclei; a nucleus is a group of interconnected neurons with a specific function. • The sensory systems in the brain and their subnetworks excel at decomposing complex sensory information into the basic features of perception; different percepts are decomposed in different ways.

  23. Neurocomputing and Neuroscience • Fausett effectively models biological neurons as artificial neurons. The brain and the rest of the nervous system are composed of many different kinds of neurons, with different electrical properties, numbers, sizes, and connection patterns. • (Figure: small cell vs. large cell, with dendrites and axon labeled)

  24. Neurocomputing and Neuroscience • A biological neuron consists of three parts: • Dendrites: receive signals from other neurons. • Cell body (soma): sums the signals arriving from the dendrites and from the other synapses. When the accumulated stimulus exceeds the threshold, the neuron generates an electrical potential (it fires) and propagates it along the axon to other neurons or to target cells such as muscle. • Axon: connects to other neurons by means of synapses. The number of synapses may range from a few hundred to tens of thousands.

  25. Neurocomputing and Neuroscience • A biological neuron consists of three main components: • Dendrites: receive signals from other neurons. • Cell body (soma): sums the incoming signals from the dendrites and the signals from the numerous synapses on its surface. • Axon: the axons of other neurons connect to the dendrite and cell body surfaces by means of connectors called synapses. The number of synaptic connections from other neurons may range from a few hundred to 10,000.

  26. Neurocomputing and Neuroscience • (Figure: simplified drawing of the synapses)

  27. Neurocomputing and Neuroscience • Because neurons are covered by a neural membrane, signals arriving at the dendrites attenuate rapidly with time and distance, losing their ability to stimulate the neuron unless they are reinforced by signals from neighboring neurons.

  28. Neurocomputing and Neuroscience • If a neuron's input fails to reach the threshold, the input signal decays rapidly and no action potential is produced; the generation of an action potential is therefore all-or-nothing. • Input strength is encoded in the number of action potentials produced per second, not in their amplitude: a stronger input signal produces more action potentials than a weaker one.

  29. Neurocomputing and Neuroscience • Synapses are the points of contact that connect the axon terminals to their targets. • A synapse consists of: • Nerve terminal • Synaptic cleft (gap) • Postsynaptic membrane

  30. Classification of neural networks • Supervised learning vs. unsupervised learning
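The distinction on this slide can be illustrated with two minimal update rules. Both functions are generic illustrations, not examples from the course: the perceptron rule (supervised, covered later) needs a target for every input, while winner-take-all competitive learning (unsupervised, the basis of the Kohonen map) uses the input alone:

```python
# Supervised learning: each training input x comes with a known target,
# and the error (target - y) drives the weight update (perceptron rule).
def perceptron_update(w, x, target, lr=0.1):
    y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0
    return [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]

# Unsupervised learning: no targets at all; the prototype nearest to x
# (the "winner") simply moves toward x (competitive learning).
def competitive_update(prototypes, x, lr=0.1):
    win = min(range(len(prototypes)),
              key=lambda k: sum((p - v) ** 2 for p, v in zip(prototypes[k], x)))
    prototypes[win] = [p + lr * (v - p) for p, v in zip(prototypes[win], x)]
    return prototypes
```

The signature difference is the whole point: `perceptron_update` cannot be called without `target`, whereas `competitive_update` never sees one.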
