ARTIFICIAL INTELLIGENCE: THE MAIN IDEAS

Presentation Transcript


  1. ARTIFICIAL INTELLIGENCE: THE MAIN IDEAS OLLI COURSE SCI 102 Tuesdays, 11:00 a.m. – 12:30 p.m. Winter Quarter, 2013 Higher Education Center, Medford Room 226 Nils J. Nilsson nilsson@cs.stanford.edu http://ai.stanford.edu/~nilsson/ Course Web Page: www.sci102.com/ For information about parking near the HEC, go to: http://www.ci.medford.or.us/page.asp?navid=2117; there are links on that page to parking rules and maps.

  2. AI in the News: CBS 60 Minutes, “Are robots hurting job growth?” January 13, 2013, 5:00 PM. Technological advances, especially robotics, are revolutionizing the workplace, but not necessarily creating jobs. Steve Kroft reports. http://www.cbsnews.com/video/watch/?id=50138922n

  3. Sources of Additional Information http://aitopics.net/index.php

  4. MOOCs (Massive Open Online Courses): Artificial Intelligence Planning, Gerhard Wickler and Austin Tate, begins Jan 28th, 2013 (5 weeks long): https://www.coursera.org/course/aiplan. And check out http://www.class-central.com/ for a list of dozens and dozens of online courses, including some on topics related to AI.

  5. Many Kinds of Agents: perception feeds action selection, which may use stimulus-response or planned processes; memory holds raw sensory information and models of the world; and all of it is modifiable by learning.

  6. PART ONE (Continued): REACTIVE AGENTS. Perception and action selection.

  7. More Intelligent Perception Yields More Intelligent Action.

  8. Volvo V40 (Semi-) Automatic Reverse Parking

  9. Neural Networks for Perceptual Learning: One of AI’s Main Ideas!

  10. Two Neurons: Hebb’s rule for synaptic changes. The human brain has about 100 billion neurons and 100 trillion synapses.
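
  Hebb’s rule, named on the slide, says that a synapse strengthens when the neurons on both sides of it are active together. A minimal sketch in Python, assuming a simple product form with an illustrative learning rate (the function name, variable names, and values are ours, not the slide’s):

      # Hebb's rule sketch: the weight change is proportional to the
      # product of presynaptic and postsynaptic activity.
      def hebb_update(w, pre, post, eta=0.1):
          """Return the synaptic weight after one Hebbian adjustment."""
          return w + eta * pre * post

      w = 0.5
      w = hebb_update(w, pre=1.0, post=1.0)  # both neurons active together
      print(w)  # 0.6: the synapse has strengthened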

  11. An Artificial Neural Element: inputs from sensors or from other neural elements are each multiplied by a “synaptic” weight; the element adds up the weighted inputs, compares the total to a threshold, and produces an output.
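
  A minimal sketch of the element just described, assuming a binary (fire / don’t fire) output; the function and variable names are illustrative:

      # An artificial neural element: sum the weighted inputs and
      # compare the total to a threshold.
      def neural_element(inputs, weights, threshold):
          """Return 1 (fire) if the weighted sum exceeds the threshold, else 0."""
          total = sum(x * w for x, w in zip(inputs, weights))
          return 1 if total > threshold else 0

      print(neural_element([1.0, 0.5], [0.4, 0.9], 0.8))  # sum is 0.85 > 0.8, so 1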

  12. Frank Rosenblatt (on left) and His Synaptic Weight – ca. 1963

  13. Using a Neural Element to Control a Light-Following Robot: the photocell values l and r feed a neural element with weights wl and wr and threshold t. If (l x wl) + (r x wr) > t, turn left; otherwise, turn right. But what should the values be? wl = ?, wr = ?, t = ?

  14. Try Some Weight Values and Test on a Sample Input: try wl = 5, wr = -1, and t = 7. For a sample input with left photocell = 8 and right photocell = 10, the weighted sum is (8 x 5) + (10 x -1) = 30. 30 is greater than 7, so turn left. Error!
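
  The trial on slide 14, sketched in code; the weights and input are the slide’s, and the steer function is our illustrative wrapper around slide 13’s rule:

      # Slide 13's rule: turn left if l*wl + r*wr > t, otherwise right.
      def steer(l, r, wl, wr, t):
          return "left" if l * wl + r * wr > t else "right"

      # Trial weights wl=5, wr=-1, t=7 on the sample input l=8, r=10:
      # (8 * 5) + (10 * -1) = 30, and 30 > 7, so the element says "left".
      print(steer(8, 10, wl=5, wr=-1, t=7))  # "left": the wrong answer here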

  15. A Neural Element Implements a Boundary in the Graph of l and r Values. With weights 5 and -1 and threshold 7, the boundary is: 5l - r = 7.

  16. Error: Adjust the Weights. In the (l, r) graph, the boundary 5l - r = 7 separates the “decide to turn left” region from the “decide to turn right” region, and the sample input falls on the wrong side.

  17. The desired boundary separates the turn-left region from the turn-right region in the (l, r) graph.

  18. Adjust Weights to Correct the Error: change wl from 5 to 3, wr from -1 to -4, and t from 7 to 9. Now (8 x 3) + (10 x -4) = 24 - 40 = -16, and -16 < 9, so turn right.
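
  Re-running the same rule with slide 18’s adjusted weights shows the error corrected (steer is the same illustrative function as above):

      def steer(l, r, wl, wr, t):
          return "left" if l * wl + r * wr > t else "right"

      # Adjusted weights: wl 5 -> 3, wr -1 -> -4, t 7 -> 9.
      # (8 * 3) + (10 * -4) = -16, which is below 9, so turn right.
      print(steer(8, 10, wl=3, wr=-4, t=9))  # "right": now correct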

  19. Training Procedure: Cycle Through the Training Inputs, plotted as points in the (l, r) graph.
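
  The slide shows the cycling procedure but not the exact update rule. A sketch assuming the classic perceptron error-correction rule, with an invented training set; desired = +1 means “turn left” and -1 means “turn right”:

      # Cycle through the training inputs; after each error, nudge the
      # weights and threshold a little in the correcting direction.
      def train(samples, wl, wr, t, eta=0.5, epochs=25):
          for _ in range(epochs):
              for l, r, desired in samples:
                  out = 1 if l * wl + r * wr > t else -1
                  if out != desired:           # error on this input
                      wl += eta * desired * l
                      wr += eta * desired * r
                      t -= eta * desired       # the threshold learns too
          return wl, wr, t

      # Hypothetical training inputs (l, r, desired):
      samples = [(8, 10, -1), (10, 2, 1), (2, 4, -1), (9, 1, 1)]
      print(train(samples, wl=5, wr=-1, t=7))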

  20. Another Example

  21. A Harder Problem (Not “Linearly Separable”).

  22. Harder Problems Can Be Solved with a Network (neural element 1, neural element 2).
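
  A sketch of why a network succeeds where a single element fails, using XOR, the textbook non-linearly-separable problem (the slide does not say which problem it depicts, and these weights are chosen by hand, not learned):

      def element(inputs, weights, threshold):
          return 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0

      def xor(a, b):
          h1 = element([a, b], [1, 1], 0.5)       # fires for a OR b
          h2 = element([a, b], [1, 1], 1.5)       # fires for a AND b
          return element([h1, h2], [1, -1], 0.5)  # OR but not AND

      for a in (0, 1):
          for b in (0, 1):
              print(a, b, "->", xor(a, b))  # outputs 0, 1, 1, 0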

  23. Very Large Networks Can Be Used

  24. A Big Problem: How to Train the Weights in a Network. Jay McClelland, David Rumelhart.

  25. The “Backprop” Method

  26. To Train, Make Small Adjustments to the Weights in Directions that Make the Outputs (a Little) More Correct
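
  The heart of the method in a single weight. A full backprop pass repeats this step for every weight in every layer; here one weight is trained so that output = w * x approaches a target (all values illustrative):

      # Gradient descent on the squared error of a one-weight "network".
      x, target = 2.0, 10.0
      w = 0.0
      for step in range(50):
          output = w * x
          error = output - target
          gradient = error * x      # how the error changes as w changes
          w -= 0.05 * gradient      # small adjustment, a little more correct
      print(w * x)                  # close to the target of 10.0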

  27. An Example: NetTalk. Input: text versions of English words. Output: sound. Training: change the weights to make the sound more correct. Sejnowski, T. J. and Rosenberg, C. R., “Parallel networks that learn to pronounce English text,” Complex Systems 1, 145-168 (1987).

  28. Another Example: ALVINN http://www.youtube.com/watch?v=yfVxt-eBVLo A neural network system called ALVINN (Autonomous Land Vehicle in a Neural Network) has been trained to steer a Humvee successfully on ordinary roads and highways at speeds of 55 mph.

  29. ALVINN’s Neural Network: a 30 x 32 “retina” (960 inputs) taken from a TV image of the road ahead; 5 hidden units, each connected to all 960 inputs; and 30 output units, each connected to all of the hidden units.
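
  The architecture just described, sketched with NumPy and random untrained weights (the squashing function and weight scales are illustrative stand-ins, not ALVINN’s trained values):

      import numpy as np

      rng = np.random.default_rng(0)
      retina = rng.random((30, 32))             # 30 x 32 TV image of the road
      W_hidden = rng.standard_normal((5, 960))  # 5 hidden units x 960 inputs
      W_out = rng.standard_normal((30, 5))      # 30 outputs x 5 hidden units

      hidden = np.tanh(W_hidden @ retina.reshape(960))
      steering = W_out @ hidden                 # 30 steering-direction outputs
      print(steering.shape)                     # (30,)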

  30. What Are ALVINN’s Hidden Units Measuring? After thorough training, the weight values w1 through w960 of the first hidden unit can be displayed as pixels in the 30 x 32 image: black = negative weight, white = positive weight.
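
  To reproduce the slide’s picture, lay the unit’s 960 weights back out as the 30 x 32 retina and show them as gray-level pixels (random stand-in weights here; a trained ALVINN would supply real ones):

      import numpy as np
      import matplotlib.pyplot as plt

      w_hidden_1 = np.random.default_rng(0).standard_normal(960)
      plt.imshow(w_hidden_1.reshape(30, 32), cmap="gray")
      plt.title("Hidden unit 1: black = negative weight, white = positive")
      plt.show()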

  31. Weight Values for All 5 Hidden Units: the diagonal black and white bands in the weights are detectors for the yellow line down the center and the white line down the right edge of the road. D. A. Pomerleau, “Efficient Training of Artificial Neural Networks for Autonomous Navigation,” Neural Computation, 1991, MIT Press.

  32. In All of the Examples So Far, We Knew the Category of the Input, So We Knew What the Correct Network Response Should Be: “Supervised Learning.”

  33. What About “Unsupervised Learning”?

  34. Letting Networks “Adapt” to Their Inputs: all connections are bi-directional; given a massive number of training samples, the weight values become those for extracting “features” of the inputs.
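
  One classic rule by which a unit can “adapt” to unlabeled inputs is Oja’s rule, a stabilized form of Hebb’s rule; it is a stand-in here, since the slide names no specific rule. The weight vector drifts toward the dominant “feature” (the principal component) of the data:

      import numpy as np

      rng = np.random.default_rng(1)
      # Unlabeled samples varying mostly along the (1, 1) direction.
      data = rng.standard_normal((500, 2)) * [1.5, 0.15]
      data = data @ (np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2))

      w = rng.standard_normal(2)
      for x in data:
          y = w @ x                    # the unit's response
          w += 0.01 * y * (x - y * w)  # Hebb term minus a decay term
      print(w / np.linalg.norm(w))     # near +/-(0.707, 0.707)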

  35. Hubel & Wiesel’s “Detector Neurons” (David Hubel, Torsten Wiesel): a short bar of light is projected onto a cat’s retina, and the response of a single neuron in the cat’s visual cortex is recorded by a micro-electrode in the anaesthetized cat.

  36. Models of the Cortex: Deep, Hierarchical Neural Networks. All connections are bi-directional.

  37. Two Pioneers in Using Networks to Model the Cortex: Jeff Hawkins (Hierarchical Temporal Memory) and Geoffrey Hinton.

  38. More About Hawkins http://www.numenta.com/htm-overview/education/HTM_CorticalLearningAlgorithms.pdf

  39. Dileep George’s HTM Model: a “Convolutional” Network. George is a founder of the startup Vicarious. http://vicarious.com/team.html
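
  What “convolutional” means here: one small weight patch is slid across the whole image, so a single feature detector is reused at every location. A minimal sketch with a hand-made diagonal detector (the kernel values are illustrative):

      import numpy as np

      image = np.eye(6)                  # 6 x 6 image with a diagonal line
      kernel = np.array([[ 1.0, -1.0],
                         [-1.0,  1.0]])  # responds to diagonal structure

      feature_map = np.zeros((5, 5))
      for i in range(5):
          for j in range(5):
              feature_map[i, j] = np.sum(image[i:i+2, j:j+2] * kernel)
      print(feature_map)                 # strongest responses on the diagonal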
