
Lecture 12. MLP (IV): Programming & Implementation



Presentation Transcript


  1. Lecture 12. MLP (IV): Programming & Implementation

  2. Outline • Matrix formulation • Matlab Implementation (C) 2001-2003 by Yu Hen Hu

  3. Summary of Equations (per epoch) (t: epoch index, k: sample index)
  • Feed-forward pass: For k = 1 to K, ℓ = 1 to L, i = 1 to N(ℓ):
  $u_i^{(\ell)}(k) = \sum_{j=0}^{N(\ell-1)} w_{ij}^{(\ell)}(t)\, z_j^{(\ell-1)}(k)$, $z_i^{(\ell)}(k) = f\bigl(u_i^{(\ell)}(k)\bigr)$, with $z_0^{(\ell-1)}(k) = 1$ as the bias input.
  • Error-back-propagation pass: For k = 1 to K, ℓ = L down to 1, i = 1 to N(ℓ):
  $\delta_i^{(L)}(k) = f'\bigl(u_i^{(L)}(k)\bigr)\,\bigl[d_i(k) - z_i^{(L)}(k)\bigr]$ at the output layer, and
  $\delta_i^{(\ell)}(k) = f'\bigl(u_i^{(\ell)}(k)\bigr) \sum_{m=1}^{N(\ell+1)} w_{mi}^{(\ell+1)}(t)\, \delta_m^{(\ell+1)}(k)$ for ℓ < L.

  4. Summary of Equations (cont’d)
  • Weight update pass: For ℓ = 1 to L, i = 1 to N(ℓ), j = 0 to N(ℓ−1):
  $w_{ij}^{(\ell)}(t+1) = w_{ij}^{(\ell)}(t) + \eta \sum_{k=1}^{K} \delta_i^{(\ell)}(k)\, z_j^{(\ell-1)}(k)$,
  where η is the learning rate and the sum over k accumulates the gradient over all K samples of the epoch.
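To make the per-sample equations above concrete, here is a minimal Matlab sketch of one training epoch in explicit loop form. It is an illustration, not the lecture's bp.m: the variable names (X, D, W, eta, L), the logistic activation, and the batch (per-epoch) gradient accumulation are assumptions consistent with the equations on these slides.

    % One epoch of back-propagation in explicit loop form (illustrative sketch).
    % Assumed: X is the N(0) x K input matrix, D the N(L) x K desired-output
    % matrix, W{l} the N(l) x (N(l-1)+1) weight matrix, eta the learning rate.
    f  = @(u) 1 ./ (1 + exp(-u));        % logistic activation (assumed)
    fp = @(z) z .* (1 - z);              % f'(u) expressed in terms of z = f(u)
    K  = size(X, 2);
    dW = cellfun(@(w) zeros(size(w)), W, 'UniformOutput', false);
    for k = 1:K                          % loop over training samples
        z = cell(L+1, 1);
        z{1} = X(:, k);                  % z{l+1} holds the layer-l outputs
        for l = 1:L                      % feed-forward pass
            z{l+1} = f(W{l} * [1; z{l}]);            % leading 1 = bias input
        end
        delta = cell(L, 1);              % error-back-propagation pass
        delta{L} = fp(z{L+1}) .* (D(:, k) - z{L+1});
        for l = L-1:-1:1
            delta{l} = fp(z{l+1}) .* (W{l+1}(:, 2:end)' * delta{l+1});
        end
        for l = 1:L                      % accumulate the epoch gradient
            dW{l} = dW{l} + delta{l} * [1; z{l}]';
        end
    end
    for l = 1:L                          % weight-update pass, once per epoch
        W{l} = W{l} + eta * dW{l};
    end

The cell index is shifted by one (z{l+1} holds the layer-ℓ outputs) because Matlab arrays cannot be indexed from zero.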

  5. Matrix Formulation
  • The feed-forward, error-back-propagation, and weight-update loops can all be formulated as matrix-vector or matrix-matrix product operations.
  • Many processors support matrix operations with special instructions (e.g. MMX) that speed up the computation and hence improve training speed.
  • A Matlab implementation can be made quite efficient when the algorithm is formulated in terms of matrix and vector operations rather than explicit loops, which incur too much interpreter overhead. (C) 2001-2003 by Yu Hen Hu
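The loop-overhead claim is easy to check with a small timing sketch (the layer sizes below are arbitrary), comparing one layer's feed-forward computed by explicit loops against a single matrix product. On the Matlab interpreters of the era this slide dates from, the gap was dramatic; even with the modern JIT compiler, the matrix form remains the faster and more idiomatic choice.

    % Timing sketch: one layer's outputs by loops vs. a single matrix product.
    N0 = 100; N1 = 50; K = 10000;        % arbitrary sizes for the comparison
    W1 = randn(N1, N0 + 1);              % weights, bias column first
    Z0 = rand(N0, K);                    % previous-layer outputs
    f  = @(u) 1 ./ (1 + exp(-u));
    tic                                  % version 1: explicit triple loop
    Zloop = zeros(N1, K);
    for k = 1:K
        for i = 1:N1
            u = W1(i, 1);                % bias weight
            for j = 1:N0
                u = u + W1(i, j + 1) * Z0(j, k);
            end
            Zloop(i, k) = f(u);
        end
    end
    toc
    tic                                  % version 2: one matrix product
    Zmat = f(W1 * [ones(1, K); Z0]);
    toc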

  6. Matrix Notations
  • Z(ℓ), ℓ = 0, 1, …, L: an N(ℓ) × K matrix. Its kth column holds the outputs of the ℓth-layer neurons for the kth training sample if ℓ > 0, and the kth training feature vector if ℓ = 0.
  • W(ℓ), ℓ = 1, 2, …, L: an N(ℓ) × (N(ℓ−1)+1) weight matrix whose (i,j)th element is w_ij(ℓ). Its column indices start at 0, so that w_i0(ℓ) corresponds to the bias input of the ith neuron of the ℓth layer.
  • The activation function f(·) is applied element by element when its argument is a matrix.

  7. Matrix Notations (Cont’d)
  • D: the N(L) × K matrix of desired outputs.
  • Δ(L) = f′(U(L)) .* (D − Z(L)): the N(L) × K output-layer delta-error matrix. ".*" denotes the element-by-element product of two matrices (not the usual matrix-matrix product).
  • Δ(ℓ) = f′(U(ℓ)) .* [W(ℓ+1)(:, 2:N(ℓ)+1)]ᵀ Δ(ℓ+1): an N(ℓ) × K matrix, where W(ℓ+1)(:, 2:N(ℓ)+1) denotes the last N(ℓ) columns of W(ℓ+1) (the weight matrix with its bias column removed) and [·]ᵀ is the matrix transpose.
  • : noise.
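With these notations, an entire epoch over all K samples collapses into a few matrix products. A minimal sketch, using the same assumed variables (X, D, W, eta, L) as the loop version above, with cell indices again shifted by one:

    % One epoch in matrix form: Z{l+1} plays the role of Z(l) on these slides,
    % Delta{l} the role of the layer-l delta-error matrix.
    f  = @(u) 1 ./ (1 + exp(-u));
    fp = @(z) z .* (1 - z);
    K  = size(X, 2);
    Z  = cell(L+1, 1);
    Z{1} = X;                            % Z(0): the training feature vectors
    for l = 1:L                          % feed-forward, all K samples at once
        Z{l+1} = f(W{l} * [ones(1, K); Z{l}]);
    end
    Delta = cell(L, 1);
    Delta{L} = fp(Z{L+1}) .* (D - Z{L+1});           % output-layer delta matrix
    for l = L-1:-1:1                     % back-propagation through the layers
        Delta{l} = fp(Z{l+1}) .* (W{l+1}(:, 2:end)' * Delta{l+1});
    end
    for l = 1:L                          % weight update, summed over all samples
        W{l} = W{l} + eta * Delta{l} * [ones(1, K); Z{l}]';
    end

This computes exactly the per-sample equations of slides 3 and 4; the loops over k and i are absorbed into the matrix dimensions, which is the point of slide 5.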

  8. Matlab Implementation: bp.m
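The bp.m file itself is not reproduced in this transcript. As a stand-in, here is a hedged, self-contained sketch of what such a script could look like: the matrix-form epoch above wrapped in a training loop for the XOR problem. The network size, learning rate, initialization, data, and stopping rule are illustrative choices, not taken from the original file.

    % bp_sketch.m -- illustrative batch back-propagation trainer in matrix form.
    % Not the original bp.m: all parameters below are assumptions.
    X   = [0 0 1 1; 0 1 0 1];            % XOR inputs, N(0) x K with K = 4
    D   = [0 1 1 0];                     % desired outputs, N(L) x K
    Nl  = [2 3 1];                       % layer sizes N(0), N(1), N(2); so L = 2
    L   = numel(Nl) - 1;
    eta = 0.5;                           % learning rate (illustrative)
    f   = @(u) 1 ./ (1 + exp(-u));       % logistic activation
    fp  = @(z) z .* (1 - z);             % its derivative in terms of z = f(u)
    K   = size(X, 2);
    W   = cell(L, 1);
    for l = 1:L                          % small random weights, bias column first
        W{l} = 0.5 * (2 * rand(Nl(l+1), Nl(l) + 1) - 1);
    end
    for t = 1:20000                      % epoch loop
        Z = cell(L+1, 1);
        Z{1} = X;
        for l = 1:L                      % feed-forward pass
            Z{l+1} = f(W{l} * [ones(1, K); Z{l}]);
        end
        Delta = cell(L, 1);              % error-back-propagation pass
        Delta{L} = fp(Z{L+1}) .* (D - Z{L+1});
        for l = L-1:-1:1
            Delta{l} = fp(Z{l+1}) .* (W{l+1}(:, 2:end)' * Delta{l+1});
        end
        for l = 1:L                      % weight-update pass
            W{l} = W{l} + eta * Delta{l} * [ones(1, K); Z{l}]';
        end
        mse = mean((D(:) - Z{L+1}(:)).^2);
        if mse < 1e-3, break; end        % simple stopping rule
    end
    fprintf('epoch %d, mse %.4g, outputs: %s\n', t, mse, mat2str(Z{L+1}, 3));

With a logistic output and random initialization, XOR training can occasionally stall in a flat region of the error surface; rerunning with fresh random weights is the usual remedy.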
