
Lecture on Intelligent Signal Processing


Presentation Transcript


  1. Lecture on Intelligent Signal Processing www.AssignmentPoint.com

  2. Intelligent Signal Processing?
  • Sensors → Electronics → DSP → ISP
  • High Machine Intelligence Quotient
  • Human-like information processing

  3. Components of ISP
  • Artificial Neural Networks (ANN): adaptability, robustness, data-oriented
  • Fuzzy Logic (FL): interface between linguistic and numeric representations
  • Evolutionary Computing (EC) (a.k.a. genetic algorithms/programming): "invents" unforeseen solutions, (sub-)optimization
  • [Other related concepts]: Support Vector Machines, Soft Computing, AI…

  4. Which for What?
  [Diagram: triangle with the three components at its corners]
  • ANN: learning capability
  • EC: optimizing capability
  • FL: representing capability
  Every combination is possible and used; the goal is to realize processing systems with greater intelligence.

  5. Artificial Neural Networks
  [Diagram: inputs flowing through a network of neurons (circles) connected by weighted links to outputs]
  • Biologically inspired
  • Network of simple processing elements
  • Distributed function

  6. Some real ANN usages
  • Recognition of hand-written letters
  • Predicting on-line the quality of welding spots
  • Identifying relevant documents in a corpus
  • Visualizing high-dimensional spaces
  • Tracking on-line the position of robot arms
  • etc.

  7. ANN: a good choice if
  • Data-rich / model-deficient problem
  • Failure of classical mathematical modeling
  • Nonlinear, multidimensional input/output mapping
  • Failure of classical linear methods (try those first)
  • Enough time to design the final ANN: hours to days of design work to obtain an ANN with a ~ms execution cycle

  8. Preliminary steps for ANN
  • Get a lot of data: inputs and outputs
  • Analyze the data on the PC: relevant inputs? linear relations?
  • Transform and scale variables; other useful preprocessing?
  • Divide into 3 data sets: a learning set, a test set, and a validation set (see the sketch below)
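The steps above can be made concrete; here is a minimal sketch of the scaling and three-way split, assuming NumPy and scikit-learn are available (the data, sizes, and 60/20/20 ratio are illustrative, not from the lecture):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 8)   # hypothetical inputs (1000 samples, 8 variables)
y = np.random.rand(1000, 1)   # hypothetical outputs

# Transform and scale variables to zero mean / unit variance
X = StandardScaler().fit_transform(X)

# Divide into three sets: learning (60%), test (20%), validation (20%)
X_learn, X_rest, y_learn, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_test, X_valid, y_test, y_valid = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
```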

  9. First design step for ANN
  • Set the ANN architecture (PC or board): MLP, RBF, TDNN, Kohonen, GNG?
  • Number of inputs and outputs?
  • Number of hidden layers
  • Number of neurons
  • Learning schema "details"

  10. Last design step for ANN
  • Tune/optimize the internal parameters w_i (PC or board) by presenting the learning data set to the ANN
  • Test the ANN
  • Success? Good job! Validate it (board or PC), then implement it (board or PC)
  • Failure? Go back to the previous steps (one possible loop is sketched below)
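One possible realization of this design / tune / test cycle, sketched with scikit-learn's MLPRegressor and reusing the data split from the slide-8 sketch; the lecture itself is library-agnostic, and the architecture and success threshold here are illustrative:

```python
from sklearn.neural_network import MLPRegressor

# X_learn, y_learn, X_test, y_test, X_valid, y_valid: the split from slide 8
mlp = MLPRegressor(hidden_layer_sizes=(10,),  # one hidden layer of 10 neurons
                   activation='logistic',     # sigmoid hidden units (an MLP)
                   max_iter=2000)
mlp.fit(X_learn, y_learn.ravel())             # tune the internal parameters w_i

if mlp.score(X_test, y_test) > 0.9:           # test the ANN; success?
    print("validation R^2:", mlp.score(X_valid, y_valid))  # then validate it
else:                                         # failure? go back to previous steps
    print("redesign: change architecture, preprocessing, or learning schema")
```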

  11. Main Types of ANN
  • Supervised learning
  • Feed-forward layered ANN: Multi-Layer Perceptron (with sigmoid hidden neurons), Radial Basis Functions (Gaussians, wavelets)
  • Recurrent networks: transform them into a Time Delay Network, a layered ANN
  • Unsupervised learning
  • Self-organizing ANN: Kohonen topographic maps, growing neural gas, etc.

  12. What's next with ANN today?
  • A detailed description of the multilayer perceptron
  • But just a few words on Kohonen maps, RBF, TDNN, GNG…

  13. Anatomy of an Artificial Neuron
  [Diagram: the inputs x_i and a constant bias enter the neuron; h combines the weights w_i with the inputs x_i, and the activation function f produces the output]
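In code, the anatomy above reduces to a few lines; a sketch assuming a weighted-sum combiner h and a sigmoid activation f (both standard choices, though the slide leaves them generic):

```python
import numpy as np

def neuron(x, w, b, f=lambda h: 1.0 / (1.0 + np.exp(-h))):
    """One artificial neuron: h combines the weights w_i with the inputs x_i
    (a weighted sum plus bias); the activation function f shapes the output."""
    h = np.dot(w, x) + b              # h: combine w_i and x_i
    return f(h)                       # f: activation function

print(neuron(np.array([0.5, -1.2, 3.0]), np.array([0.4, 0.1, -0.7]), b=0.2))
```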

  14. Common Activation Functions
  • Sigmoidal function: f(h) = 1 / (1 + e^(-h))
  • Radial function, e.g. Gaussian: f(x) = exp(-(x - c)^2 / (2σ^2))
  • … Linear function: f(h) = h
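The three families as vectorized NumPy functions (standard textbook forms; the slide's own formulas were lost as images):

```python
import numpy as np

def sigmoid(h):                        # sigmoidal function
    return 1.0 / (1.0 + np.exp(-h))

def gaussian(x, c=0.0, sigma=1.0):     # radial function, e.g. Gaussian
    return np.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))

def linear(h):                         # linear function
    return h
```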

  15. Multilayer Perceptron
  [Diagram: feed-forward network with one or more hidden layers between the inputs and the outputs]
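A forward pass through such a network, sketched in NumPy; sigmoid hidden layers and a linear output match the MLP described on these slides, and the 2*10*1 shape anticipates the example on the next slide:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Feed-forward pass: sigmoid hidden layers, linear output layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = 1.0 / (1.0 + np.exp(-(W @ a + b)))   # hidden layers
    return weights[-1] @ a + biases[-1]          # linear output

rng = np.random.default_rng(0)                   # a 2*10*1 ANN with random weights
Ws = [rng.normal(size=(10, 2)), rng.normal(size=(1, 10))]
bs = [rng.normal(size=10), rng.normal(size=1)]
print(mlp_forward(np.array([0.3, -0.8]), Ws, bs))
```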

  16. Roughness of Output
  • The outputs depend on the whole set of weighted links {w_ij}
  • Example: the output unit versus input 1 and input 2 for a 2×10×1 ANN with random weights

  17. Important Properties of ANN (1/2)
  • Assume g(x): a bounded and sufficiently regular function, and an ANN with 1 hidden layer of finitely many (N) neurons, the transfer function being identical for every neuron
  • => the ANN is a universal approximator of g(x), in the sense of uniform approximation, for arbitrary precision ε
  • Theorem by Cybenko et al. in 1989
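Stated explicitly (the standard form of Cybenko's result, for a fixed sigmoidal transfer function σ):

```latex
% For every continuous g on [0,1]^n and every epsilon > 0, there exist N and
% parameters alpha_j, w_j, b_j such that the one-hidden-layer network
G(x) = \sum_{j=1}^{N} \alpha_j \,\sigma\!\left(w_j^{\top} x + b_j\right)
% satisfies uniform approximation to arbitrary precision:
\sup_{x \in [0,1]^n} \bigl| G(x) - g(x) \bigr| < \varepsilon .
```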

  18. Important Properties of ANN (2/2)
  • Assume an ANN as before (1 hidden layer of finitely many N neurons, nonlinear transfer function) and an approximation precision ε
  • => #{w_i} ~ #inputs (Theorem by Barron in 1993)
  • The ANN is more parsimonious in #{w_i} than a linear approximator [for a linear approximator, #{w_i} ~ exp(#inputs)]

  19. Supervised Learning
  [Diagram: the ANN output is compared (subtracted) with the desired output provided by a supervisor; the error drives learning, typically by backpropagation of errors]
  • Training set: {(x_in^m, t_out^m); 1 ≤ m ≤ P}

  20. Backpropagation algorithm (1/2)
  • Idea: error calculus applied to the weights w_{i,l}(k); adjust w_{i,l}(k) to minimize the error by gradient descent
  • The error is E = (1/2) Σ_m Σ_i (t_i^m − y_i^m)²
  • Some algebra then gives the update Δw_{i,l}(k) = −η ∂E/∂w_{i,l}(k)

  21. Derivation of the Backpropagation algorithm
  • Remembering that for the perceptron y = f(h) with h = Σ_i w_i x_i
  • And using the chain rule: ∂E/∂w_{i,l} = (∂E/∂h_l) (∂h_l/∂w_{i,l})
  • Let's define δ_l = ∂E/∂h_l
  • To finally get Δw_{i,l}(k) = −η δ_l x_i

  22. Backprop illustrated
  [Diagram: forward pass of the inputs, then backward pass of the errors δ through the layers; index m omitted in this slide]
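A compact, runnable version of the whole algorithm on the XOR toy problem (the same toy problem quoted on the next slide); one hidden layer, sigmoid activations, squared error, and plain batch gradient descent, with illustrative sizes and learning rate:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # inputs
T = np.array([[0], [1], [1], [0]], float)              # desired outputs (supervisor)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)          # a 2*4*1 architecture
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
eta = 0.5                                              # learning rate
sig = lambda h: 1.0 / (1.0 + np.exp(-h))

for _ in range(10000):
    H = sig(X @ W1 + b1)                               # forward pass
    Y = sig(H @ W2 + b2)
    d2 = (Y - T) * Y * (1 - Y)                         # delta at the output layer
    d1 = (d2 @ W2.T) * H * (1 - H)                     # delta backpropagated
    W2 -= eta * H.T @ d2; b2 -= eta * d2.sum(0)        # gradient descent on w_{i,l}
    W1 -= eta * X.T @ d1; b1 -= eta * d1.sum(0)

print(np.round(Y.ravel(), 2))                          # should approach [0, 1, 1, 0]
```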

  23. Backpropagation algorithm: other useful refinements
  • Add extra terms: some "momentum" with α in (0..1) to speed up convergence; some noise to escape local minima of the error function
  • Keep the best previous adjustment in memory, etc.
  • Example: ~80× speedup in learning time for the toy XOR problem
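The momentum refinement as a continuation of the XOR sketch above: the previous adjustment is kept in memory and partly re-applied (the value of alpha here is illustrative):

```python
alpha = 0.9                          # momentum coefficient, in (0..1)
dW2_prev = np.zeros_like(W2)         # previous adjustment, kept in memory

# Inside the training loop, replace the plain W2 update with:
dW2 = -eta * (H.T @ d2) + alpha * dW2_prev   # gradient step plus momentum
W2 += dW2
dW2_prev = dW2                       # remember this adjustment for the next step
```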

  24. Problems, difficulties and solutions
  • Scarce data sets → leave-one-out validation schema
  • Overfitting, because of the high number of {w_ij} compared to the number of samples → early stopping or regularization techniques

  25. Early stopping
  [Plot: error versus training time; the error on the learning set (which changes the w_ij) keeps decreasing, while the error on the test set (which does not change the w_ij) passes through a minimum: stop there!]
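A minimal early-stopping loop, sketched with scikit-learn's partial_fit (one epoch per call); the synthetic data, patience, and learning rate are illustrative:

```python
import copy
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=300)
X_learn, y_learn, X_test, y_test = X[:200], y[:200], X[200:], y[200:]

model = MLPRegressor(hidden_layer_sizes=(10,), solver='sgd', learning_rate_init=0.05)
best_err, best_model, patience = float('inf'), None, 0
for epoch in range(2000):
    model.partial_fit(X_learn, y_learn)                   # changes the w_ij
    err = np.mean((model.predict(X_test) - y_test) ** 2)  # does not change the w_ij
    if err < best_err:
        best_err, best_model, patience = err, copy.deepcopy(model), 0
    else:
        patience += 1
        if patience >= 50:                                # test error rising: stop here!
            break
model = best_model                                        # keep the weights from the minimum
```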

  26. Regularization
  • Add a penalty term weighted by λ to the error to obtain a smooth I/O mapping: weight decay, weight elimination
  • OK, but what numeric value for λ? "Early stopping on 1/λ", or see the statistical theory by Tikhonov et al. ('50s)
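Weight decay grafted onto the XOR sketch of slide 22: the penalty (λ/2)·Σw² simply adds λ·w to each gradient, shrinking the weights toward zero and smoothing the mapping (λ's numeric value remains the slide's open question):

```python
lam = 1e-3                             # the lambda to be chosen
# Inside the training loop, the weight-decay updates become:
W2 -= eta * (H.T @ d2 + lam * W2)      # error gradient plus penalty gradient
W1 -= eta * (X.T @ d1 + lam * W1)
```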

  27. What are MLP good for?
  • Modeling an input-output relationship
  • Regression (linear output activation function)
  • Projection on a subspace, i.e. a kind of filtering
  • Discrimination (sigmoid output activation function): pattern recognition with C classes and C output neurons, cross-entropy as the error (sketched below)
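The discrimination setup in miniature: C output neurons producing class probabilities, scored with the cross-entropy error (the values here are illustrative):

```python
import numpy as np

def cross_entropy(y, t, eps=1e-12):
    """Cross-entropy error for predicted probabilities y and one-hot targets t."""
    return -np.sum(t * np.log(y + eps))

y = np.array([0.7, 0.2, 0.1])   # network outputs for C = 3 classes
t = np.array([1.0, 0.0, 0.0])   # one-hot target: the sample belongs to class 1
print(cross_entropy(y, t))      # ~0.357; 0 would be a perfect prediction
```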

  28. From Recurrent ANN to Time Delay Neural Network
  • Recurrent ANN: an ANN with a cycle containing an internal delay i (a z^(-i) element)
  • Transform it by adding "special" delaying neurons: you get a TDNN, a layered ANN

  29. TDNN for modeling the state of a robot arm actuator
  From "Réseaux de neurones" by G. Dreyfus et al., ed. Eyrolles

  30. Radial Basis Function Networks
  [Diagram: inputs (fan-in) feed a hidden layer of RBF neurons; the outputs are linear combinations of the hidden responses]
  • Usually apply a sub-optimal learning procedure: set the number of neurons, then adjust the Gaussian centers, the Gaussian widths, and the weights
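A minimal RBF forward pass; the centers, widths, and output weights are exactly the three quantities the learning procedure adjusts (the values here are illustrative):

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Hidden layer of Gaussian neurons; output = linear combination."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))
    return weights @ phi

centers = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # Gaussian centers
widths  = np.array([0.5, 0.5, 0.8])                       # Gaussian widths
weights = np.array([1.0, -2.0, 0.5])                      # output weights
print(rbf_forward(np.array([0.2, 0.4]), centers, widths, weights))
```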

  31. What are RBF good for?
  • Density estimation
  • Discrimination
  • Regression
  • Good to know: they can be described as Bayesian networks, and are close to some fuzzy systems

  32. Kohonen topographic maps
  • Neurons: receptive fields with local interaction → neurons know their neighbors
  • Outputs: S_j = Σ_k w_jk x_k
  • S_j is maximal when the weight vector W_j is collinear with x

  33. Unsupervised learning with Kohonen maps
  • Present one sample x
  • Detect which neuron j has the maximal s_j (the winner)
  • Adjust the w_ij of the winner neuron
  • Partly adjust the w_ij of the winner's neighbors
  • … next sample (one step is sketched below)
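One learning step in code, repeated over samples drawn from a square as in the next slide's illustration. This sketch uses the common minimum-distance winner criterion (equivalent to the slide's maximum s_j for normalized weights); the grid size, rate, and radius are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(10) for j in range(10)])  # 10x10 map
W = rng.uniform(size=(100, 2))                 # one weight vector w_j per neuron

def som_step(x, W, grid, eta=0.1, radius=2.0):
    j = np.argmin(np.sum((W - x) ** 2, axis=1))      # detect the winner neuron
    d2 = np.sum((grid - grid[j]) ** 2, axis=1)       # map distance to the winner
    h = np.exp(-d2 / (2.0 * radius ** 2))            # neighborhood factor
    W += eta * h[:, None] * (x - W)                  # winner fully, neighbors partly
    return W

for x in rng.uniform(size=(1000, 2)):                # 1000 samples from a square
    W = som_step(x, W, grid)
```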

  34. Illustration of learning for Kohonen maps
  [Figure: the inputs are coordinates (x, y) of points drawn from a square; each neuron j is displayed at the position (x_j, y_j) where its s_j is maximal. Starting from random initial positions, the map unfolds after 100, 200, and 1000 inputs. From "Les réseaux de neurones artificiels" by Blayo and Verleysen, Que sais-je ? No. 3042, ed. PUF]

  35. What are Kohonen maps good for?
  • Data analysis
  • Signal classification
  • Data visualization: projection from high-D to 2D while preserving neighborhood relationships
  • Partitioning the input space: vector quantization (coding)

  36. Growing Neural Gas et al.
  • GNG = Kohonen maps + dynamical creation/removal of the topological links
  • Best explained by a demo! Growing Neural Gas adapting to the UNI shape: local Java demo or Internet Java demo http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html
  • Don't miss the interesting Growing Grid adaptation to a moving shape

  37. What are GNG good for?
  • Great adaptability to the data topology, either dynamically or spatially
  • Data analysis
  • Data visualization

  38. Good ANN Practices
  • Strong effort on collecting lots of good-quality data and on preprocessing
  • Divide and conquer: some subtasks could be linearly processed
  • Be very suspicious of overfitting: use test and validation sets

  39. A selection of references (in French)
  • "Réseaux de neurones" by G. Dreyfus et al., ed. Eyrolles (2002)
  • "Les réseaux de neurones artificiels", F. Blayo & M. Verleysen, coll. Que sais-je ? No. 3042, ed. PUF (1996)

  40. Some Textbooks
  • "Neural Networks, a Comprehensive Foundation", S. Haykin, ed. Prentice Hall (1999)
  • "Neural Networks for Pattern Recognition", C. M. Bishop, ed. Oxford University Press (1995)
  • "Self-Organizing Maps", T. Kohonen, Springer (2001)

  41. Some toolboxes
  • Free software: SNNS (Stuttgart Neural Network Simulator) & JavaNNS; the GNG demo at uni-bochum
  • Matlab toolboxes: Fuzzy Logic, Artificial Neural Networks, Signal Processing
