
Neural Networks (Lecture 7, CS567)



1. Neural Networks
• Examples of single-layer perceptron learning
• Convergence in the linearly separable case
• No convergence in the non-linearly separable case
• (Hyper)line of separation
• Separation of the input space by a (hyper)line/plane
• Orthogonality to the weight vector

2. Neural Networks
• Consider a 2-input single neuron for Boolean prediction of clinical drug effectiveness, based on the results of two experiments, X and Y
• Neuron output = 1 => drug is clinically effective
• Neuron output = 0 => drug is not clinically effective
• Inputs to the network:
  • p1 = prediction of Experiment X [No/Yes] = [0/1]
  • p2 = prediction of Experiment Y [No/Yes] = [0/1]
• The network is trained on wonder-drug candidates dA, dB, dC, dD for the revolutionary product "eMemory", proclaimed to increase IQ (it turns on genes responsible for mental alertness and suppression of neuronal loss)

3. Neural Networks
• Training data (0 = no effect; 1 = effect observed):

  Drug | Experiment X (p1) | Experiment Y (p2) | Effective (target)
  dA   |         0         |         0         |         0
  dB   |         1         |         0         |         0
  dC   |         0         |         1         |         0
  dD   |         1         |         1         |         1

4. Neural Networks
• Consider a 2-input SLP with random initial weights and bias: w1,1 = 1, w1,2 = 1, b = 0.5
• Sum: x = w1,1·p1 + w1,2·p2 + b = p1 + p2 + 0.5
• Output = hardlim(x) = 0 if x < 0, else 1
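A minimal Python sketch of this neuron; the function names hardlim and forward are illustrative, not from the lecture:

```python
def hardlim(x):
    """Hard-limit transfer function from slide 4: 0 if x < 0, else 1."""
    return 0 if x < 0 else 1

def forward(w, b, p):
    """Weighted sum w1,1*p1 + w1,2*p2 + b, passed through the hard limit."""
    x = sum(wi * pi for wi, pi in zip(w, p)) + b
    return hardlim(x)

w, b = [1.0, 1.0], 0.5        # initial state from slide 4
print(forward(w, b, [0, 0]))  # x = 0.5, so the output is 1
```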

5. Neural Networks
• Consider the NN prediction for dA = (0, 0) (target is 0)
• Sum = 0.5; output f(0.5) = 1
• Error e = target - f(x) = 0 - 1 = -1
• Wcurrent = Wprevious + eP => w1,1 = 1, w1,2 = 1 (unchanged, since P = 0)
• bcurrent = bprevious + e => b = 0.5 - 1 = -0.5
• Note that the sum for dA is now -0.5 => output 0 (as expected)
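The same update can be checked in code. This step helper (my name, a sketch rather than lecture code) applies the perceptron rule W <- W + eP, b <- b + e:

```python
def hardlim(x):
    return 0 if x < 0 else 1

def step(w, b, p, target):
    """One perceptron learning step: W <- W + e*P, b <- b + e."""
    x = sum(wi * pi for wi, pi in zip(w, p)) + b
    e = target - hardlim(x)
    w = [wi + e * pi for wi, pi in zip(w, p)]
    return w, b + e

# dA = (0, 0) with target 0, starting from w = (1, 1), b = 0.5
w, b = step([1.0, 1.0], 0.5, [0, 0], 0)
print(w, b)  # [1.0, 1.0] -0.5  (weights unchanged because P = 0)
```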

6. Neural Networks
• Consider the NN prediction for dB = (1, 0) (target is 0)
• Sum = 0.5; output f(0.5) = 1
• Error e = target - f(x) = 0 - 1 = -1
• Wcurrent = Wprevious + eP => w1,1 = 0, w1,2 = 1
• bcurrent = bprevious + e => b = -0.5 - 1 = -1.5
• Note that the sum for dB is now 0·1 + 1·0 - 1.5 = -1.5 => output 0 (as expected)

7. Neural Networks
• Consider the NN prediction for dC = (0, 1) (target is 0)
• Sum = -0.5; output f(-0.5) = 0
• Error e = target - f(x) = 0 - 0 = 0
• No change in weights or bias

8. Neural Networks
• Consider the NN prediction for dD = (1, 1) (target is 1)
• Sum = -0.5; output f(-0.5) = 0
• Error e = target - f(x) = 1 - 0 = 1
• Wcurrent = Wprevious + eP => w1,1 = 1, w1,2 = 2
• bcurrent = bprevious + e => b = -1.5 + 1 = -0.5
• Note that the sum for dD is now 1·1 + 2·1 - 0.5 = 2.5 => output 1 (as expected)
• The dB, dC and dD steps are replayed in the sketch below
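Chaining the same step sketch from slide 5 over dB, dC and dD reproduces slides 6-8:

```python
def hardlim(x):
    return 0 if x < 0 else 1

def step(w, b, p, target):
    x = sum(wi * pi for wi, pi in zip(w, p)) + b
    e = target - hardlim(x)
    return [wi + e * pi for wi, pi in zip(w, p)], b + e

w, b = [1.0, 1.0], -0.5       # state after the dA update (slide 5)
w, b = step(w, b, [1, 0], 0)  # dB: w -> [0, 1], b -> -1.5
w, b = step(w, b, [0, 1], 0)  # dC: e = 0, nothing changes
w, b = step(w, b, [1, 1], 1)  # dD: w -> [1, 2], b -> -0.5
print(w, b)                   # [1.0, 2.0] -0.5
```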

9. Neural Networks
• Check whether the NN prediction is correct for all inputs dA:dD
• dC fails (expect 0): sum = 1·0 + 2·1 - 0.5 = 1.5; output f(1.5) = 1
• (dB is also misclassified at this point, with sum 0.5 => 1, but the dC update below happens to correct it as well)
• Error e = target - f(x) = 0 - 1 = -1
• Wcurrent = Wprevious + eP => w1,1 = 1, w1,2 = 1
• bcurrent = bprevious + e => b = -0.5 - 1 = -1.5

10. Neural Networks
• Check that the NN prediction is now correct for all inputs dA:dD with w1,1 = 1, w1,2 = 1, b = -1.5
• Converged: gives the right prediction for all possible inputs. (Try starting from different initial states; a full training loop is sketched below)
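A complete training loop over the four drugs, as a sketch assuming the samples are presented in order dA..dD on every epoch. The slides update in a slightly different order, so the final weights differ, but both results separate the data:

```python
def hardlim(x):
    return 0 if x < 0 else 1

data = [([0, 0], 0), ([1, 0], 0), ([0, 1], 0), ([1, 1], 1)]  # dA..dD
w, b = [1.0, 1.0], 0.5  # initial state from slide 4

while True:
    errors = 0
    for p, t in data:
        e = t - hardlim(sum(wi * pi for wi, pi in zip(w, p)) + b)
        if e != 0:                 # misclassified: apply the perceptron rule
            w = [wi + e * pi for wi, pi in zip(w, p)]
            b += e
            errors += 1
    if errors == 0:                # an error-free epoch means convergence
        break

print(w, b)  # [1.0, 2.0] -2.5 with this ordering; the slides reach (1, 1), -1.5
```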

11. Neural Networks: (Hyper)line of separation
• A 2-input Boolean problem is linear if a line can be drawn between the two classes of result
• A 3-input Boolean problem is linear if a plane can be drawn between the two classes of result
• Note that the weight vector points to the positive side and is orthogonal to the line of separation WP + b = 0
[Figure: the (p1, p2) plane with the separating line WP + b = 0 (here p1 + p2 = 1.5); the effective drug at (1, 1) lies on the positive side, the ineffective drugs on the other, and W is drawn orthogonal to the line]
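A quick numerical check of these claims, using the final state from slide 10 (the direction vector d along the line is my choice for illustration):

```python
w, b = [1.0, 1.0], -1.5  # final weights; boundary is p1 + p2 - 1.5 = 0

d = [1.0, -1.0]  # a direction vector along the line p1 + p2 = 1.5
print(sum(wi * di for wi, di in zip(w, d)))           # 0.0: W is orthogonal
print(sum(wi * pi for wi, pi in zip(w, [1, 1])) + b)  #  0.5: dD, positive side
print(sum(wi * pi for wi, pi in zip(w, [0, 0])) + b)  # -1.5: dA, negative side
```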

12. Neural Networks
• Will this converge? No: XOR (output 1 iff exactly one input is 1) is not linearly separable, so the perceptron rule never converges
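Because no line separates the XOR classes, no weight/bias setting classifies all four points correctly, and no epoch is ever error-free. A sketch that demonstrates this empirically (the 1000-epoch cap is an arbitrary choice of mine):

```python
def hardlim(x):
    return 0 if x < 0 else 1

xor = [([0, 0], 0), ([1, 0], 1), ([0, 1], 1), ([1, 1], 0)]
w, b = [1.0, 1.0], 0.5

for epoch in range(1000):
    errors = 0
    for p, t in xor:
        e = t - hardlim(sum(wi * pi for wi, pi in zip(w, p)) + b)
        if e != 0:
            w = [wi + e * pi for wi, pi in zip(w, p)]
            b += e
            errors += 1
    if errors == 0:
        print("converged at epoch", epoch)
        break
else:
    print("no error-free epoch in 1000 passes")  # always reached for XOR
```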
