
ARTIFICIAL INTELLIGENCE: THE MAIN IDEAS



Presentation Transcript


  1. ARTIFICIAL INTELLIGENCE: THE MAIN IDEAS. OLLI COURSE SCI 102. Tuesdays, 11:00 a.m. – 12:30 p.m., Winter Quarter, 2013, Higher Education Center, Medford Room 226. Nils J. Nilsson, nilsson@cs.stanford.edu, http://ai.stanford.edu/~nilsson/. Course Web Page: www.sci102.com/. For information about parking near the HEC, go to: http://www.ci.medford.or.us/page.asp?navid=2117 (there are links on that page to parking rules and maps).

  2. AI in the News?

  3. PART ONE (Continued): REACTIVE AGENTS. Perception, Action Selection, Memory.

  4. Summary: Neural Networks Have Many Applications

  5. But Some Are Not Very User-Friendly (Fair Isaac Experience)

  6. Models of the Cortex Using Deep, Hierarchical Neural Networks. All connections are bi-directional.

  7. The Neocortex

  8. Two Pioneers in Using Networks to Model the Cortex Hierarchical Temporal Memory Jeff Hawkins Geoffrey Hinton

  9. More About Jeff Hawkins’s Ideas http://www.numenta.com/htm-overview/education/HTM_CorticalLearningAlgorithms.pdf

  10. Dileep George’s Hierarchical Temporal Memory (HTM) Model: A “Convolutional” Network. George is a founder of the startup Vicarious: http://vicarious.com/team.html

  11. A “Mini-Column” of the Neocortex. From: “HIERARCHICAL TEMPORAL MEMORY” http://www.numenta.com/htm-overview/education/HTM_CorticalLearningAlgorithms.pdf

  12. Figure 10. Columnar organization of the microcircuit. George, Dileep and Hawkins, Jeff (2009) Towards a Mathematical Theory of Cortical Micro-circuits. PLoS Comput Biol 5(10): e1000532. doi:10.1371/journal.pcbi.1000532 http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1000532

  13. Figure 9. A laminar biological instantiation of the Bayesian belief propagation equations used in the HTM nodes. George D, Hawkins J (2009) Towards a Mathematical Theory of Cortical Micro-circuits. PLoS Comput Biol 5(10): e1000532. doi:10.1371/journal.pcbi.1000532 http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1000532

  14. Ray Kurzweil’s New Book

  15. Unsupervised Learning

  16. Letting Networks “Adapt” to Their Inputs. All connections are bi-directional; massive number of inputs. Weight Values Become Those For Extracting “Features” of Inputs. Honglak Lee, et al., “Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations,” Proceedings of the 26th Annual International Conference on Machine Learning, 2009

  17. Hubel &amp; Wiesel’s “Detector Neurons” (David Hubel, Torsten Wiesel). A short bar of light projected onto a cat’s retina; the response of a single neuron in the cat’s visual cortex (as detected by a micro-electrode in the anaesthetized cat).

  18. Use of Deep Networks With Unsupervised Learning. All connections are bi-directional. First Layer Learns “Building-Block” Features Common to Many Images.

  19. Second Layer Learns Features Common Just to Cars, Faces, Motorbikes, and Airplanes

  20. Third Layer Learns How to Combine the Features of the Second Layer Into a Representation of the Input

  21. Output Layer Can Be Used to Make a Decision: CAR
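Slides 18–21 describe layers that combine lower-level features and an output layer that makes a decision. A minimal Python sketch of that idea, with hand-picked illustrative weights (made up here for illustration, not learned and not from the slides):

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid squashing function."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

# Hand-picked weights: two "feature detector" units feed two output
# units, one per category. (All values are illustrative assumptions.)
hidden_w = [[2.0, -1.0, 0.5], [-1.5, 2.0, 1.0]]
hidden_b = [0.0, 0.0]
out_w = [[3.0, -3.0], [-3.0, 3.0]]   # output 0 = "CAR", output 1 = "FACE"
out_b = [0.0, 0.0]
labels = ["CAR", "FACE"]

def classify(pixels):
    """Pass the input through both layers and pick the strongest output."""
    hidden = layer(pixels, hidden_w, hidden_b)
    output = layer(hidden, out_w, out_b)
    return labels[output.index(max(output))]

decision = classify([1.0, 0.0, 0.5])
```

In a real network the weights would be learned, layer by layer, rather than set by hand; the decision step (take the largest output) is the same.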

  22. The Net Can Make Predictions About Unseen Parts of the Input

  23. “Building High-level Features Using Large Scale Unsupervised Learning,” Quoc V. Le, et al. (Google and Stanford). 1,000 Google Computers, 1,000,000,000 Connections

  24. Large Scale Unsupervised Learning (Continued). Recognizes 22,000 object categories; unsupervised learning for three days; 10 million 200x200 pixel images downloaded from the Internet (stills from YouTube); a “cat neuron”; a “face neuron”

  25. One Result 81.7% accuracy in detecting faces out of 13,026 faces in a test set For more information about these experiments at Google/Stanford, see: http://research.google.com/archive/unsupervised_icml2012.html

  26. Using Models (i.e., Memory) Can Make Agents Even More Intelligent Perception Action Selection Model of World (e.g., a map)

  27. Types of Models: Maps; Memory of Previous States; Lists of State-Action Pairs

  28. Models can be pre-installed or learned

  29. Learning and Using Maps. Where am I? Where is everything else? Neato Robot Vacuum
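One common way a robot learns a map (a sketch of the general idea only, not Neato's actual system) is an occupancy grid: each cell accumulates evidence that an obstacle is there. All names and thresholds below are made-up illustrations:

```python
from collections import defaultdict

# The map: each (x, y) cell holds a count of how often a range
# sensor has reported an obstacle there.
grid = defaultdict(int)

def record_hit(x, y):
    """A sensor reading says there is something at cell (x, y)."""
    grid[(x, y)] += 1

def is_obstacle(x, y, threshold=2):
    """Treat a cell as an obstacle once enough evidence has accumulated."""
    return grid[(x, y)] >= threshold

# Two sweeps both see a wall at (3, 0); one noisy reading at (1, 1).
record_hit(3, 0)
record_hit(3, 0)
record_hit(1, 1)
```

Requiring repeated evidence before marking a cell occupied is what makes the map robust to occasional bad sensor readings.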

  30. Neato Robotics Mapping System

  31. NEATO ROBOTICS XV11

  32. Action Selection Using the “State” of the Agent. Perception determines the “state” of the world; S-R rules select actions from a Library of States and Actions (Memory): IF state1, THEN action a; IF state2, THEN action b; . . .
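The IF-state-THEN-action library can be sketched as a simple lookup table (the state names, actions, and default behavior here are placeholders, not from the slides):

```python
# The library of IF-state-THEN-action rules as a dictionary.
rules = {
    "state1": "action_a",
    "state2": "action_b",
    "state3": "action_c",
}

def select_action(state, default="explore"):
    """Perception determines the state; the rule library picks the action.
    An unrecognized state falls back to a default behavior."""
    return rules.get(state, default)
```

Usage: `select_action("state1")` returns `"action_a"`, while a state with no matching rule returns the default.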

  33. Ways to Represent States: Lists of numbers, such as (1,7,3,4,6); arrays; “statements,” such as Color(Walls, LightBlue), Shape(Rectangular), . . .

  34. Library of States &amp; Actions: (1,7,3,4,6) a; (1,6,2,8,7) b; (4,5,1,8,5) c; . . . (7,4,8,9,2) k. Input (present state): (1,5,2,8,6). Closest Match.
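The closest-match step can be sketched directly: measure the distance from the present state to each stored state and take the action of the nearest one. Squared Euclidean distance is one common choice; the slides do not specify a metric:

```python
# The library from the slide: stored state vector -> action.
library = {
    (1, 7, 3, 4, 6): "a",
    (1, 6, 2, 8, 7): "b",
    (4, 5, 1, 8, 5): "c",
    (7, 4, 8, 9, 2): "k",
}

def closest_action(present):
    """Return the action of the stored state nearest the present state."""
    def dist2(stored):
        return sum((s - p) ** 2 for s, p in zip(stored, present))
    return library[min(library, key=dist2)]

action = closest_action((1, 5, 2, 8, 6))
```

For the slide's input (1,5,2,8,6), the nearest stored state is (1,6,2,8,7), so the selected action is b.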

  35. Example: Face Recognition. Using a large database containing many, many images of faces, a small set of “building-block” faces is computed, along with the average of all faces. http://cognitrn.psych.indiana.edu/nsfgrant/FaceMachine/faceMachine.html

  36. Familiar Uses of “Building Blocks”: A Musical Tone Consists of “Harmonics”
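The harmonics analogy can be made concrete: a tone is a weighted sum of sinusoids at integer multiples of a fundamental frequency, so a short list of amplitudes is the tone's "building-block" description. A small sketch (the 440 Hz example and amplitudes are made up):

```python
import math

def tone_sample(t, fundamental, amplitudes):
    """Value of a tone at time t: a weighted sum of harmonics,
    i.e., sinusoids at integer multiples of the fundamental frequency."""
    return sum(a * math.sin(2 * math.pi * (k + 1) * fundamental * t)
               for k, a in enumerate(amplitudes))

# A 440 Hz tone described by three amplitudes, one per harmonic.
sample = tone_sample(0.001, 440.0, [1.0, 0.5, 0.25])
```

Just as a few harmonic amplitudes describe a tone, a few building-block coefficients describe a face in the slides that follow.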

  37. Library of Known Faces (represented as composites of the building-block faces): Sam (2,2,-2,0,0,1,2,2,-1,2,2,-1,0,2,0); Joe (0,0,1,0,0,-2,-2,0,-1,-2,-2,-1,2,-1,0); Sue (-3,2,1,1,-2,1,-2,3,0,0,0,-4,-3,2,-2); Mike (4,1,3,-1,4,0,4,4,1,4,4,-4,4,-4,-4). Plus Thousands More.

  38. Face Recognition. Query Face, represented as a composite of the building-block faces (the present state): (-2,2,1,1,-2,1,-2,3,1,0,0,-4,-3,2,-2). Library of Known Faces: Sam (2,2,-2,0,0,1,2,2,-1,2,2,-1,0,2,0); Joe (0,0,1,0,0,-2,-2,0,-1,-2,-2,-1,2,-1,0); Sue (-3,2,1,1,-2,1,-2,3,0,0,0,-4,-3,2,-2); Mike (4,1,3,-1,4,0,4,4,1,4,4,-4,4,-4,-4). Sue is the Closest Match.
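The face lookup works the same way as the state matcher: each face is its vector of building-block coefficients, and recognition returns the library face nearest the query (squared Euclidean distance is assumed here; the slides do not name a metric):

```python
# Each face is a short vector of building-block ("eigenface"-style)
# coefficients, taken from the slide's library.
faces = {
    "Sam": (2, 2, -2, 0, 0, 1, 2, 2, -1, 2, 2, -1, 0, 2, 0),
    "Joe": (0, 0, 1, 0, 0, -2, -2, 0, -1, -2, -2, -1, 2, -1, 0),
    "Sue": (-3, 2, 1, 1, -2, 1, -2, 3, 0, 0, 0, -4, -3, 2, -2),
    "Mike": (4, 1, 3, -1, 4, 0, 4, 4, 1, 4, 4, -4, 4, -4, -4),
}

def recognize(query):
    """Return the name of the library face closest to the query face."""
    def dist2(name):
        return sum((a - b) ** 2 for a, b in zip(faces[name], query))
    return min(faces, key=dist2)

query = (-2, 2, 1, 1, -2, 1, -2, 3, 1, 0, 0, -4, -3, 2, -2)
match = recognize(query)
```

The query differs from Sue's stored vector in only two coordinates, so Sue is the closest match, as the slide says.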

  39. Another Kind of Model: A table of states and actions and “values”

  40. Why have values for multiple actions instead of just noting the best action? Because the values in the table can be changed (learned) depending on experience! REINFORCEMENT LEARNING (Another Point of Contact with Brains)

  41. Pioneers in the Use of Reinforcement Learning in AI Andy Barto Rich Sutton Chris Watkins

  42. An Example: Learning a Maze

  43. But the Mouse Doesn’t Have a Map of the Maze (Like We Do). Instead, it remembers the states it visits and assigns their actions random initial values

  44. It Can Change the Values in the Table. The First Step: (state1, up) gets initial random value 3

  45. There is only one action possible (up), and the mouse ends up in state2. state2 has 3 actions, each with initial random values

  46. Now the mouse updates the value of (state1, up) in its table to 5: the value propagates backward (possibly with some loss)

  47. Sooner or later, the mouse stumbles into the goal and gets a “reward”

  48. The reward value is propagated backward, e.g. to 99: the value propagates backward (with some loss)

  49. And So On . . . With a Lot of Exploration, the Mouse Learns the Maze
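The maze-learning story in slides 42–49 can be sketched as tabular reinforcement learning. This is a hedged illustration: the tiny maze layout, the reward of 100, and the use of Watkins's Q-learning update (one concrete form of "value propagates backward with some loss," where the discount gamma is the loss) are choices made here, not taken from the slides:

```python
import random
random.seed(0)

# A tiny maze: state -> {action: next state}. Reaching "goal" pays 100.
maze = {
    "state1": {"up": "state2"},
    "state2": {"left": "state1", "up": "state3", "right": "goal"},
    "state3": {"down": "state2"},
}
reward = {"goal": 100.0}

# The table of (state, action) values, started at small random values.
Q = {(s, a): random.uniform(0, 5) for s, acts in maze.items() for a in acts}

def learn(episodes=200, alpha=0.5, gamma=0.9):
    """Wander the maze; each step backs up value from the next state,
    shrunk by gamma -- the backward propagation (with loss) of the slides."""
    for _ in range(episodes):
        s = "state1"
        while s != "goal":
            a = random.choice(list(maze[s]))           # explore at random
            nxt = maze[s][a]
            best_next = max((Q[(nxt, b)] for b in maze.get(nxt, {})),
                            default=0.0)               # goal has no actions
            Q[(s, a)] += alpha * (reward.get(nxt, 0.0)
                                  + gamma * best_next - Q[(s, a)])
            s = nxt

learn()
best_from_state2 = max(maze["state2"], key=lambda a: Q[("state2", a)])
```

After enough exploration the table itself encodes the route: at state2 the action leading straight to the goal carries the highest value, so simply following the best entry in each row solves the maze.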
