
Fundamental Neurocomputing Concepts




Presentation Transcript


  1. Fundamental Neurocomputing Concepts Chuan-Yu Chang (張傳育), Ph.D. Graduate School of Computer Science and Information Engineering, National Yunlin University of Science and Technology. Office: EB212 TEL: 05-5342601 ext. 4516 E-mail: chuanyu@yuntech.edu.tw HTTP://MIPL.yuntech.edu.tw

  2. Basic Models of Artificial neurons • An artificial neuron can be referred to as a processing element, node, or threshold logic unit. • There are four basic components of a neuron: • A set of synapses with associated synaptic weights. • A summing device: each input is multiplied by its associated synaptic weight and the products are summed. • An activation function, which serves to limit the amplitude of the neuron output. • An externally applied threshold (bias), which lowers the cumulative input to the activation function.

  3. Basic Models of Artificial neurons

  4. Basic Models of Artificial neurons • The model in the preceding figure can be written as • Linear combiner: v_q = Σ_{j=1..n} w_qj x_j (2.2) • Applying the threshold: u_q = v_q − θ_q (2.3) • Neuron output: y_q = f(u_q) (2.4)

  5. Basic Models of Artificial neurons • The threshold (or bias) is incorporated into the synaptic weight vector wq for neuron q.
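To make the preceding description concrete, here is a minimal Python sketch of a single neuron with the bias folded into the weight vector via a constant input x0 = 1; the function name, weights, and activation used in the example are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def neuron_output(w, x, f):
    # Augment the input with x0 = 1 so the bias/threshold rides along as w[0].
    x_aug = np.concatenate(([1.0], x))
    v = np.dot(w, x_aug)      # linear combiner (summing device)
    return f(v)               # activation function limits the output amplitude

# Example: a bipolar hard-limiter neuron with weights [bias, w1, w2].
y = neuron_output(np.array([-0.5, 1.0, 1.0]),
                  np.array([0.2, 0.7]),
                  lambda v: 1.0 if v >= 0 else -1.0)
print(y)   # -> 1.0, since -0.5 + 0.2 + 0.7 = 0.4 >= 0
```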

  6. Basic Models of Artificial neurons

  7. Basic Activation Functions • The activation function (also called the transfer function) may be linear or nonlinear. • Linear (identity) activation function: f(v) = v.

  8. Basic Activation Functions • Hard limiter (binary function, threshold function) with outputs in {0, 1}. • The output of the binary hard limiter can be written as f(v) = 0 if v < 0, and f(v) = 1 if v ≥ 0. (Figure: hard limiter activation function.)

  9. Basic Activation Functions • Bipolar, symmetric hard limiter with outputs in {−1, 1}. • The output of the symmetric hard limiter can be written as f(v) = −1 if v < 0, 0 if v = 0, and +1 if v > 0. • Sometimes referred to as the signum (or sign) function. (Figure: symmetric hard limiter activation function.)

  10. Basic Activation Functions • Saturation linear function (piecewise linear function). • The output of the saturation linear function is given by f(v) = 0 if v < 0, f(v) = v if 0 ≤ v ≤ 1, and f(v) = 1 if v > 1. (Figure: saturation linear activation function.)

  11. Basic Activation Functions • Symmetric saturation linear function. • The output of the symmetric saturation linear function is given by f(v) = −1 if v < −1, f(v) = v if −1 ≤ v ≤ 1, and f(v) = 1 if v > 1. (Figure: symmetric saturation linear activation function.)

  12. Basic Activation Functions • Sigmoid function (S-shaped function). • Binary sigmoid function: the output is given by f(v) = 1 / (1 + e^(−αv)), where α is the slope parameter of the binary sigmoid. • Unlike the hard limiter, which has no derivative at the origin, the binary sigmoid is a continuous and differentiable function. (Figure: binary sigmoid function.)

  13. Basic Activation Functions • Sigmoid function (S-shaped function). • Bipolar sigmoid function (hyperbolic tangent sigmoid): the output is given by f(v) = (1 − e^(−αv)) / (1 + e^(−αv)) = tanh(αv/2).
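The activation functions of slides 7-13 can be sketched in NumPy roughly as follows; the saturation limits and the default slope parameter are assumptions based on the descriptions above.

```python
import numpy as np

def hard_limiter(v):                 # binary hard limiter: outputs 0 or 1
    return np.where(v >= 0, 1.0, 0.0)

def symmetric_hard_limiter(v):       # signum: outputs -1, 0, or +1
    return np.sign(v)

def saturating_linear(v):            # identity clipped to [0, 1]
    return np.clip(v, 0.0, 1.0)

def symmetric_saturating_linear(v):  # identity clipped to [-1, 1]
    return np.clip(v, -1.0, 1.0)

def binary_sigmoid(v, a=1.0):        # smooth (0, 1) squashing, slope parameter a
    return 1.0 / (1.0 + np.exp(-a * v))

def bipolar_sigmoid(v, a=1.0):       # smooth (-1, 1) squashing, equals tanh(a*v/2)
    return np.tanh(a * v / 2.0)
```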

  14. Adaline and Madaline • Least-Mean-Square (LMS) Algorithm, also known as the Widrow-Hoff learning rule or the delta rule. • The LMS is an adaptive algorithm that computes adjustments of the neuron synaptic weights. • The algorithm is based on the method of steepest descent. • It adjusts the neuron weights to minimize the mean square error between the inner product of the weight vector with the input vector and the desired output of the neuron. • Adaline (adaptive linear element): a single neuron whose synaptic weights are updated according to the LMS algorithm. • Madaline (Multiple Adaline).

  15. Simple adaptive linear combiner (Figure: the inputs x1, ..., xn feed the adaptive linear combiner, with x0 = 1 and w0 = b serving as the bias.)

  16. Simple adaptive linear combiner • The difference between the desired response and the network response is e(k) = d(k) − x^T(k) w(k). (2.22) • The MSE criterion can be written as J(w) = (1/2) E[e^2(k)]. (2.23) • Expanding Eq. (2.23) gives J(w) = (1/2) E[d^2(k)] − E[d(k) x^T(k)] w + (1/2) w^T E[x(k) x^T(k)] w (2.24), which can be written as J(w) = (1/2) E[d^2(k)] − p^T w + (1/2) w^T C_x w. (2.25)

  17. Simple adaptive linear combiner • Cross-correlation vector between the desired response and the input patterns: p = E[d(k) x(k)]. (2.26) • Covariance matrix of the input patterns: C_x = E[x(k) x^T(k)]. (2.27) • The MSE surface J(w) has a single minimum, so we solve for the weights at which the gradient is zero: ∇J(w) = −p + C_x w = 0. • The optimal weight vector is therefore w* = C_x^{-1} p.
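As a concrete illustration of this closed-form solution, the sketch below estimates the covariance matrix and cross-correlation vector from a batch of data and solves for w*; the function name and data layout are assumptions made for this example.

```python
import numpy as np

def optimal_weights(X, d):
    """Closed-form minimizer of the MSE criterion: estimate C_x and p from
    data, then solve C_x w* = p rather than forming the inverse explicitly.
    X: (n_features, n_samples) matrix whose columns are the input vectors x(k)
    d: (n_samples,) vector of desired responses d(k)"""
    n_samples = X.shape[1]
    C_x = X @ X.T / n_samples   # sample estimate of E[x x^T]
    p = X @ d / n_samples       # sample estimate of E[d x]
    return np.linalg.solve(C_x, p)
```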

  18. The LMS Algorithm • Two limitations of the expression above: • Computing the inverse of the covariance matrix is time-consuming. • It is not suitable for real-time weight updates, because in most cases the covariance matrix and the cross-correlation vector are not known in advance. • To avoid these problems, Widrow and Hoff proposed the LMS algorithm. • To obtain the optimal values of the synaptic weights when J(w) is minimum, search the error surface using a gradient descent method to find the minimum value. • We can reach the bottom of the error surface by changing the weights in the direction of the negative gradient of the surface.

  19. The LMS Algorithm (Figure: typical MSE surface of an adaptive linear combiner.)

  20. The LMS Algorithm • Because the gradient on the surface cannot be computed without knowledge of the input covariance matrix and the cross-correlation vector, these must be estimated during an iterative procedure. • An estimate of the MSE gradient can be obtained by taking the gradient of the instantaneous error surface J(k) = (1/2) e^2(k). • The gradient of J(w) is approximated as ∇J(k) ≈ −e(k) x(k). (2.28) • The learning rule for updating the weights using the steepest descent gradient method is w(k+1) = w(k) + μ e(k) x(k). (2.29) • The learning rate μ specifies the magnitude of the update step for the weights in the negative gradient direction.

  21. The LMS Algorithm • If the value of μ is chosen too small, the learning algorithm will modify the weights slowly and a relatively large number of iterations will be required. • If the value of μ is set too large, the learning rule can become numerically unstable, and the weights will not converge.

  22. The LMS Algorithm • The scalar form of the LMS algorithm can be written from (2.22) and (2.29) as e(k) = d(k) − Σ_i w_i(k) x_i(k) (2.30) and w_i(k+1) = w_i(k) + μ e(k) x_i(k). (2.31) • From (2.29) and (2.31), an upper bound must be placed on the learning rate to maintain the stability of the network (Haykin, 1996): 0 < μ < 2/λ_max (2.32), where λ_max is the largest eigenvalue of the input covariance matrix C_x.

  23. The LMS Algorithm • For the minimum tolerable level of stability needed for LMS convergence, an acceptable learning rate can be restricted to 0 < μ < 2 / tr(C_x). (2.33) • Eq. (2.33) is a reasonable approximation to (2.32) because tr(C_x) = Σ_i λ_i ≥ λ_max. (2.34)

  24. The LMS Algorithm • From (2.32) and (2.33), choosing the learning rate requires at least computing the covariance matrix of the input patterns, which is difficult to do in practical applications. • Even when it can be computed, a fixed learning rate of this kind limits the accuracy of the result. • Therefore, Robbins and Monro's root-finding algorithm (stochastic approximation) introduced a learning rate that varies with time: μ(k) = κ/k, where κ is a very small constant. (2.35) • Drawback: the learning rate decreases too quickly.

  25. The LMS Algorithm • Ideally, the learning rate μ should be relatively large at the start of training and then gradually decrease during learning (schedule-type adjustment). • Darken and Moody: the search-then-converge algorithm, μ(k) = μ0 / (1 + k/τ). (2.36) • Search phase: μ is relatively large and almost constant. • Converge phase: μ decreases toward zero (roughly as μ0 τ/k). • μ0 > 0 and τ >> 1; typically 100 ≤ τ ≤ 500. • These methods of adjusting the learning rate are commonly called learning rate schedules.
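The two schedules compared on the following slides can be written compactly as below; the default values of kappa, mu0, and tau are placeholders chosen only for illustration.

```python
def stochastic_approximation_rate(k, kappa=0.01):
    # Eq. (2.35)-style schedule: decays as 1/k, which the slides note is often too fast.
    return kappa / k

def search_then_converge_rate(k, mu0=0.1, tau=200.0):
    # Eq. (2.36) Darken-Moody schedule: roughly constant while k << tau (search
    # phase), then decaying roughly like mu0 * tau / k (converge phase).
    return mu0 / (1.0 + k / tau)
```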

  26. The LMS Algorithm • Adaptive normalization approach (non-schedule-type): μ is adjusted according to the input data at every time step, μ(k) = μ0 / ||x(k)||^2, where μ0 is a fixed constant. (2.37) • Stability is guaranteed if 0 < μ0 < 2; the practical range is 0.1 ≤ μ0 ≤ 1.

  27. The LMS Algorithm (Figure: comparison of two learning rate schedules, the stochastic approximation schedule of Eq. (2.35) and the search-then-converge schedule of Eq. (2.36), together with a constant μ for reference.)

  28. Summary of the LMS algorithm • Step 1: Set k = 1, initialize the synaptic weight vector w(k = 1), and select values for μ0 and τ. • Step 2: Compute the learning rate parameter μ(k) = μ0 / (1 + k/τ). • Step 3: Compute the error e(k) = d(k) − x^T(k) w(k). • Step 4: Update the synaptic weights: w(k+1) = w(k) + μ(k) e(k) x(k). • Step 5: If convergence is achieved, stop; else set k = k + 1 and go to Step 2. (A Python sketch of these steps is given below.)
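A minimal Python sketch of these five steps, assuming the search-then-converge schedule of Eq. (2.36) and a simple convergence test on the size of the weight update; all names and default values are illustrative.

```python
import numpy as np

def lms_train(X, d, mu0=0.1, tau=200.0, tol=1e-4, max_epochs=100):
    """LMS training loop following Steps 1-5 above. The convergence test
    (largest weight change below tol) and the hyperparameter defaults are
    assumptions made for this sketch.
    X: (n_samples, n_features) inputs; d: (n_samples,) desired responses."""
    w = np.zeros(X.shape[1])                    # Step 1: initialize w(k=1)
    k = 1
    for _ in range(max_epochs):
        max_update = 0.0
        for x, dk in zip(X, d):
            mu = mu0 / (1.0 + k / tau)          # Step 2: mu(k), Eq. (2.36)
            e = dk - w @ x                      # Step 3: e(k) = d(k) - w^T x(k)
            delta = mu * e * x                  # Step 4: weight update, Eq. (2.29)
            w = w + delta
            max_update = max(max_update, float(np.abs(delta).max()))
            k += 1
        if max_update < tol:                    # Step 5: stop if converged
            break
    return w
```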

  29. Example 2.1: Parametric system identification • The input data consist of 1000 zero-mean Gaussian random vectors with three components; the bias is set to zero. The variances of the components of x are 5, 1, and 0.5. The assumed linear model is given by b = [1, 0.8, −1]^T. • To generate the target values, the 1000 input vectors are used to form a matrix X = [x1 x2 … x1000], and the desired outputs are computed according to d = b^T X. • The learning process was terminated when the stopping criterion was satisfied. (Figure: the progress of the learning rate parameter as it is adjusted according to the search-then-converge schedule.)

  30. Example 2.1 (cont.) • Parametric system identification: estimating a parameter vector associated with a dynamic model of a system, given only input/output data from the system. (Figure: the root mean square (RMS) value of the performance measure.)
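The setup of Example 2.1 can be reproduced roughly as follows, reusing the lms_train sketch above; the random seed and hyperparameters are assumptions, so the resulting numbers will only approximate the slide's figures.

```python
import numpy as np

# Illustrative reconstruction of Example 2.1: 1000 zero-mean Gaussian input
# vectors whose components have variances 5, 1, and 0.5, and targets generated
# by the assumed linear model b = [1, 0.8, -1]^T.
rng = np.random.default_rng(0)                      # seed chosen arbitrarily
X = rng.normal(0.0, np.sqrt([5.0, 1.0, 0.5]), size=(1000, 3))
b = np.array([1.0, 0.8, -1.0])
d = X @ b                                           # desired outputs d = b^T x
w_hat = lms_train(X, d)                             # estimate should approach b
print(w_hat)
```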

  31. Adaline and Madaline • Adaline: an adaptive pattern-classification network trained by the LMS algorithm. (Figure: the Adaline has an adjustable bias weight with x0(k) = 1; it produces bipolar (+1, −1) outputs, or (0, 1) outputs depending on the activation function chosen.)

  32. Adaline • Linear error • The difference between the desired output and the output of the linear combiner. • Quantizer error • The difference between the desired output and the output of the symmetric hard limiter.

  33. Adaline • Adaline training procedure: • The input vector x must be presented to the Adaline together with its corresponding desired output d. • The synaptic weights w are adjusted dynamically according to the linear LMS algorithm. • During training the Adaline does not use the activation function; the activation function is used only in the testing phase. • Once the network weights have been properly adjusted, the response of the Adaline can be tested with patterns that were not used during training. • If the Adaline's outputs for the test inputs are highly accurate, the network is said to have generalized.

  34. Adaline • Linear separability • The Adaline acts as a classifier which separates all possible input patterns into two categories. • The output of the linear combiner is given as v(k) = w^T(k) x(k) + b, and the separating (decision) boundary is defined by v(k) = 0.

  35. Adaline (Figure: linear separability of the Adaline.) The Adaline can only separate patterns that are linearly separable.

  36. Adaline (Figure: a nonlinear separation problem.) Since the separating boundary is not a straight line, the Adaline cannot be used to accomplish this task.

  37. Adaline (cont.) • Linear error-correction rules • There are two basic linear correction rules that can be used to adjust the network weights dynamically; the weight change depends on the difference between the actual network output and the desired output. • μ-LMS: the same as (2.22) and (2.29), based on minimizing the MSE surface. • α-LMS: a self-normalizing version of the μ-LMS learning rule, w(k+1) = w(k) + α e(k) x(k) / ||x(k)||^2. (2.46) • The α-LMS algorithm is based on the minimal-disturbance principle: when the weights are adjusted to accommodate a new pattern, the responses to previously learned patterns should be disturbed as little as possible.

  38. Adaline (cont.) • Consider the change in the error for α-LMS: Δe(k) = −Δw^T(k) x(k) = −α e(k). (2.47) • From (2.47), the choice of α controls the stability and speed of convergence; it is typically set in the range 0.1 < α < 1.0. (2.48) • α-LMS is called self-normalizing because the choice of α does not depend on the magnitude of the network inputs.

  39. Adaline (cont.) • Detailed comparison of μ-LMS and α-LMS. • From (2.46), w(k+1) = w(k) + α (d(k) − w^T(k) x(k)) x(k) / ||x(k)||^2. (2.49) • Define the normalized desired response and normalized training vector d̂(k) = d(k)/||x(k)|| and x̂(k) = x(k)/||x(k)||. (2.50, 2.51) • Eq. (2.49) can then be rewritten as w(k+1) = w(k) + α (d̂(k) − w^T(k) x̂(k)) x̂(k). (2.52) • This has the same form as the μ-LMS rule, so α-LMS can be viewed as μ-LMS applied to normalized input patterns.
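A one-step sketch of the α-LMS rule as reconstructed above; the function name, the default α, and the eps safeguard against zero inputs are assumptions made for this example.

```python
import numpy as np

def alpha_lms_step(w, x, d, alpha=0.5, eps=1e-8):
    """One alpha-LMS update, Eq. (2.46): the step is normalized by ||x||^2,
    so the error on the current pattern is reduced by the fixed fraction
    alpha regardless of the input magnitude (eps guards against x = 0)."""
    e = d - w @ x                                 # linear error on this pattern
    return w + alpha * e * x / (x @ x + eps)      # minimal-disturbance update
```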

  40. Multiple Adaline (Madaline) • A single Adaline cannot solve problems whose decision regions are not linearly separable. • Multiple Adalines can be used instead: a Madaline. • Madaline I: a single-layer network with a single output. • Madaline II: a multi-layer network with multiple outputs.

  41. Example of a Madaline I network consisting of three Adalines. (Figure: the fixed output logic unit may be OR, AND, or MAJ.)

  42. Two-layer Madaline II architecture

  43. Simple Perceptron • The simple perceptron (single-layer perceptron) is very similar to the Adaline and was proposed by Frank Rosenblatt in the late 1950s. • Minsky and Papert identified a serious limitation: the perceptron cannot solve the XOR problem. • With an appropriate additional processing layer, the XOR problem, and parity functions in general, can be solved. • The simple perceptron is related to the classical maximum-likelihood Gaussian pattern classifier; both can be regarded as linear classifiers. • Most perceptron training is supervised, although some variants are self-organizing.

  44. Simple Perceptron (cont.) • Original Rosenblatt perceptron: binary inputs, no bias. • Modified perceptron: bipolar inputs and a bias term; output y ∈ {−1, 1}.

  45. Simple Perceptron (cont.) • The quantizer error eq(k) = d(k) − y(k) (2.55) is used to adjust the synaptic weights of the neuron. • The adaptive algorithm for adjusting the neuron weights (the perceptron learning rule) is given as w(k+1) = w(k) + (α/2) eq(k) x(k) (2.56); compare with (2.46). • Rosenblatt normally set α to unity. • The choice of the learning rate α does not affect the numerical stability of the perceptron learning rule. • α can affect the speed of convergence.

  46. Simple Perceptron (cont.) • The perceptron learning rule is considered a nonlinear algorithm. • The perceptron learning rule updates the weights until all the input patterns are classified correctly. • At that point the quantizer error is zero for all training pattern inputs, and no further weight adjustments occur. • The weights are not guaranteed to be optimal.
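A sketch of the perceptron learning rule from the preceding slides, assuming bipolar targets and a bias carried by a constant leading input component; the epoch limit and function name are illustrative.

```python
import numpy as np

def perceptron_train(X, d, alpha=1.0, max_epochs=100):
    """Perceptron learning rule using the quantizer error, Eq. (2.56) as
    reconstructed above. X: (n_samples, n_features) inputs with a leading
    1 in each row for the bias; d: targets in {-1, +1}. The epoch cap is
    an assumption (the rule only stops early on separable data)."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for x, dk in zip(X, d):
            y = 1.0 if w @ x >= 0 else -1.0       # symmetric hard limiter output
            e_q = dk - y                          # quantizer error (0 or +/-2)
            if e_q != 0.0:
                w = w + 0.5 * alpha * e_q * x     # update only on misclassification
                mistakes += 1
        if mistakes == 0:                         # all patterns classified correctly
            break
    return w
```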

  47. Simple Perceptron with a Sigmoid Activation Function • The learning rule is based on the method of steepest descent and attempts to minimize an instantaneous performance function.

  48. Simple Perceptron with a Sigmoid Activation Function (cont.) • The learning algorithm can be derived from the MSE criterion. • With the linear combiner output v(k) = w^T(k) x(k) (2.59) and the error e(k) = d(k) − y(k) (2.60), the instantaneous performance function to be minimized is given as J(w) = (1/2) e^2(k) = (1/2) [d(k) − y(k)]^2. (2.61)

  49. Simple Perceptron with a Sigmoid Activation Function (cont.) • Assume the activation function is the hyperbolic tangent sigmoid, so the neuron output can be expressed as y(k) = f(v(k)) = tanh(α v(k)/2). (2.62) • From (2.15), the derivative of the hyperbolic tangent sigmoid is f'(v(k)) = (α/2) [1 − y^2(k)]. (2.63) • Using steepest descent (cf. Eq. 2.29), the discrete-time learning rule is w(k+1) = w(k) − μ ∇J(w(k)). (2.64)

  50. Simple Perceptron with a Sigmoid Activation Function (cont.) • Computing the gradient in (2.64): ∇J(w(k)) = −e(k) f'(v(k)) x(k). (2.65) • Substituting (2.63) into (2.65): ∇J(w(k)) = −(α/2) e(k) [1 − y^2(k)] x(k). (2.66) • Using the gradient in (2.66), the discrete-time learning rule for the simple perceptron can be written as w(k+1) = w(k) + (μα/2) e(k) [1 − y^2(k)] x(k). (2.67)
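One steepest-descent update for the sigmoid perceptron, following the gradient derived above; the symbols mirror the reconstructed Eqs. (2.62)-(2.67), and the function name and default values are assumptions for this sketch.

```python
import numpy as np

def sigmoid_perceptron_step(w, x, d, mu=0.1, a=1.0):
    """One steepest-descent update for a perceptron with a hyperbolic tangent
    sigmoid activation, as derived in the preceding slides."""
    v = w @ x                                   # linear combiner output
    y = np.tanh(a * v / 2.0)                    # neuron output, Eq. (2.62)
    e = d - y                                   # output error
    grad = -(a / 2.0) * e * (1.0 - y**2) * x    # instantaneous gradient, Eq. (2.66)
    return w - mu * grad                        # weight update, Eq. (2.67)
```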
