
Welcome to CS 672 – Neural Networks Fall 2010




Presentation Transcript


  1. Welcome to CS 672 – Neural Networks, Fall 2010
  Instructor: Marc Pomplun
  Neural Networks Lecture 1: Motivation & History

  2. Instructor – Marc Pomplun
  • Office: S-3-171
  • Lab: S-3-135
  • Office Hours: Tuesdays 14:30-16:00, Thursdays 19:00-20:30
  • Phone: 287-6443 (office), 287-6485 (lab)
  • E-Mail: marc@cs.umb.edu

  3. The Visual Attention Lab: cognitive research, especially eye movements

  4. Example: Distribution of Visual Attention

  5.–10. Selectivity in Complex Scenes (a series of image-only example slides)

  11. Artificial Intelligence

  12. Modeling of Brain Functions

  13. Biologically Motivated Computer Vision

  14. Human-Computer Interfaces

  15. Grading
  For the assignments, exams, and your course grade, the following scheme will be used to convert percentages into letter grades:
  • ≥ 95%: A
  • ≥ 90%: A-
  • ≥ 86%: B+
  • ≥ 82%: B
  • ≥ 78%: B-
  • ≥ 74%: C+
  • ≥ 70%: C
  • ≥ 66%: C-
  • ≥ 62%: D+
  • ≥ 56%: D
  • ≥ 50%: D-
  • < 50%: F
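As an illustration only (not course material), here is a minimal Python sketch of how such a threshold table can be applied; the cutoffs come straight from the slide, while the function name and list structure are assumptions:

    # Grade cutoffs from the slide, highest first: (minimum percentage, letter).
    GRADE_CUTOFFS = [
        (95, "A"), (90, "A-"), (86, "B+"), (82, "B"), (78, "B-"), (74, "C+"),
        (70, "C"), (66, "C-"), (62, "D+"), (56, "D"), (50, "D-"),
    ]

    def letter_grade(percentage):
        # Return the letter for the highest cutoff the score reaches.
        for cutoff, letter in GRADE_CUTOFFS:
            if percentage >= cutoff:
                return letter
        return "F"  # below 50%

    print(letter_grade(87.5))  # prints B+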

  16. Complaints about Grading
  If you think that the grading of your assignment or exam was unfair:
  • write down your complaint (handwriting is OK),
  • attach it to the assignment or exam,
  • and give it to me or put it in my mailbox.
  I will re-grade the whole exam/assignment and return it to you in class.

  17. Computers vs. Neural Networks
  “Standard” Computers       Neural Networks
  one CPU                    highly parallel processing
  fast processing units      slow processing units
  reliable units             unreliable units
  static infrastructure      dynamic infrastructure

  18. Why Artificial Neural Networks?
  There are two basic reasons why we are interested in building artificial neural networks (ANNs):
  • Technical viewpoint: Some problems, such as character recognition or the prediction of future states of a system, require massively parallel and adaptive processing.
  • Biological viewpoint: ANNs can be used to replicate and simulate components of the human (or animal) brain, thereby giving us insight into natural information processing.

  19. Why Artificial Neural Networks?
  Why do we need a paradigm other than symbolic AI for building “intelligent” machines?
  • Symbolic AI is well suited for representing explicit knowledge that can be appropriately formalized.
  • However, learning in biological systems is mostly implicit: it is an adaptation process based on uncertain information and reasoning.
  • ANNs are inherently parallel and work extremely efficiently if implemented in parallel hardware.

  20. How do NNs and ANNs work?
  • The “building blocks” of neural networks are the neurons.
  • In technical systems, we also refer to them as units or nodes.
  • Basically, each neuron
    • receives input from many other neurons,
    • changes its internal state (activation) based on the current input,
    • and sends one output signal to many other neurons, possibly including its input neurons (recurrent network).
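A minimal sketch of such a unit in Python, assuming the common weighted-sum-plus-sigmoid formulation (the function name, the example numbers, and the sigmoid choice are illustrative assumptions, not something the slide prescribes):

    import math

    def neuron_output(inputs, weights, bias):
        # One artificial unit: weighted sum of inputs, squashed by a sigmoid.
        net = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-net))  # activation in (0, 1)

    # Example: a unit receiving signals from three other neurons.
    print(neuron_output([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))

In a recurrent network, some of these inputs would themselves be earlier outputs of this unit or its neighbors.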

  21. How do NNs and ANNs work?
  • Information is transmitted as a series of electric impulses, so-called spikes.
  • The frequency and phase of these spikes encode the information.
  • In biological systems, one neuron can be connected to as many as 10,000 other neurons.
  • Usually, a neuron receives its information from other neurons in a confined area, its so-called receptive field.

  22. History of Artificial Neural Networks
  • 1938 Rashevsky describes neural activation dynamics by means of differential equations.
  • 1943 McCulloch & Pitts propose the first mathematical model for biological neurons.
  • 1949 Hebb proposes his learning rule: repeated activation of one neuron by another strengthens their connection.
  • 1958 Rosenblatt invents the perceptron by essentially adding a learning algorithm to the McCulloch & Pitts model.
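To give a flavor of what "adding a learning algorithm" means, here is a minimal sketch of the classic perceptron update rule in Python; the learning rate, epoch count, and AND example are illustrative assumptions, not material from the slide:

    def perceptron_train(samples, n_inputs, rate=0.1, epochs=20):
        # Classic perceptron learning: nudge weights toward correct outputs.
        w, b = [0.0] * n_inputs, 0.0
        for _ in range(epochs):
            for x, target in samples:  # target is 0 or 1
                y = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0
                error = target - y  # nonzero only on a wrong prediction
                w = [wi + rate * error * xi for xi, wi in zip(x, w)]
                b += rate * error
        return w, b

    # Example: learn the logical AND function (linearly separable).
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    print(perceptron_train(data, n_inputs=2))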

  23. History of Artificial Neural Networks
  • 1960 Widrow & Hoff introduce the Adaline, a simple network trained through gradient descent.
  • 1961 Rosenblatt proposes a scheme for training multilayer networks, but his algorithm is weak because of non-differentiable node functions.
  • 1962 Hubel & Wiesel discover properties of the visual cortex that motivate self-organizing neural network models.
  • 1962 Novikoff proves the Perceptron Convergence Theorem.

  24. History of Artificial Neural Networks
  • 1964 Taylor builds the first winner-take-all neural circuit with inhibition among output units.
  • 1969 Minsky & Papert show that perceptrons are not computationally universal; interest in neural network research decreases.
  • 1982 Hopfield develops his auto-association network.
  • 1982 Kohonen proposes the self-organizing map.
  • 1985 Ackley, Hinton & Sejnowski devise a stochastic network called the Boltzmann machine.

  25. History of Artificial Neural Networks
  • 1986 Rumelhart, Hinton & Williams present the backpropagation algorithm in its modern form, triggering new interest in the field.
  • 1987 Hecht-Nielsen develops the counterpropagation network.
  • 1987 Carpenter & Grossberg propose Adaptive Resonance Theory (ART).
  Since then, research on artificial neural networks has remained active, leading to numerous new network types and variants, as well as hybrid algorithms and hardware for neural information processing.
