
Human, Animal, and Machine Learning



Presentation Transcript


  1. Human, Animal, and Machine Learning Vasile Rus http://www.cs.memphis.edu/~vrus/teaching/cogsci

  2. Overview • General Info about this course/seminar • Why Learning? • The ultimate goal: learning to learn • Learning is the crux of intelligent systems: it increases their adaptivity to the environment and therefore their chances of survival

  3. General Information • Web Page: http://www.cs.memphis.edu/~vrus/teaching/cogsci/ • Check the page as often as possible • It is the main way of getting the latest info

  4. General Information • Instructor • Vasile Rus, PhD • Office: 320 Dunn Hall/FIT403c • Phone: x5259 • E-mail: vrus@memphis.edu

  5. What is Learning? • the cognitive process of acquiring skill or knowledge (WordNet) • the process of acquiring knowledge, skills, attitudes, or values, through study, experience, or teaching (Wikipedia)

  6. Machine Learning? • Processes and algorithms that allow computers to "learn", i.e., improve their knowledge and performance from experience • applications to • search engines • medical diagnosis • bioinformatics and cheminformatics • detecting credit card fraud • and many more: stock market analysis, classifying DNA sequences, speech and handwriting recognition, object recognition in computer vision, game playing, and robot locomotion

  7. Goals of this Course • Learn about the theory and practice of learning in humans, animals, and machines: • What are the major issues? • What are the major solutions? • How well do they work? • How do they work?

  8. Goals of this Course • At the end you should: • Agree that learning is subtle and interesting! • Be able to model a problem as a machine learning problem, run machine learning algorithms on some data to solve the problem

  9. Questions the Course Will Answer • How do people learn? • What are the primitives and patterns of learning? • How do animals learn? • Can we make machines learn?

  10. Today • Motivation • Course Goals • The Syllabus

  11. Syllabus – Student Session • Week 1: Introduction to Machine Learning • Week 2: The WEKA Machine Learning environment • Week 3: Concept Learning • Week 4: Decision Trees Learning • Week 5: Linear Regression and Perceptrons • Week 6: Hypotheses Spaces and Evaluating Hypotheses • Week 7: Graphical Models: Naïve Bayes, Bayes Nets • Week 8: SPRING BREAK

  12. Syllabus (cont’d) • Week 9: Graphical Models: Hidden Markov Models • Week 10: Graphical Models: LDA • Week 11: Computational Learning Theory • Week 13: Support Vector Machines • Week 14: Instance-based Learning • Week 15: Project Presentations • Week 16: Project Presentations

  13. Syllabus – Plenary Session • Jan 22: NO TALK • Jan 29 • Art Graesser, The University of Memphis: “How Are Theoretical Principles of Learning Incorporated in Intelligent Pedagogical Agents?” • Feb 5 • Razvan Bunescu, Ohio University: “Machine learning approaches to word sense disambiguation and (co)reference resolution” • Feb 12 • Andrew Olney, The University of Memphis: “Building a BrainTrust” • Feb 19 • Giuseppe diFabbrizio, Amazon, Inc.: “Learning to interact - A machine learning approach to dialog management” • Feb 26 • Kim Oller, The University of Memphis: “Evolutionary-Developmental Biology (Evo-Devo) as an influence on current thinking about human and animal language development and evolution” • March 5: NO TALK • March 12 • SPRING BREAK

  14. Syllabus – Plenary Session • March 19 • Panayiota Kendeou, University of Minnesota: “The Knowledge Revision Components (KReC) framework: We cannot escape the past but we can reduce its impact” • March 26 • Nobal Niraula, The University of Memphis: “A Machine Learning Approach to Anaphora Resolution in Dialogue based Intelligent Tutoring Systems” • April 2 • Phil Pavlik, The University of Memphis: “Results of Data Mining Student Vocabulary Learning” • April 9 • Dan Stefanescu, The University of Memphis: “Short Text Similarity based on Parsing and Information Content” • April 16 • Michael Johnson, Marquette University: Title Pending • April 23 NO TALK • April 30 NO TALK

  15. To be successful you need to • Read the syllabus • Understand the structure of the seminar • Read the general policies • Attend sessions and participate by asking questions or/and contributing with related remarks • Explore the seminar website • Don't limit yourself to what is asked in the seminar

  16. Grading • Assignments • 4-5 (or more) • 35% of final grade • Project • 40% • Quizzes: 20% • Participation and Presentation: 5%

  17. Grading A score within 2.5 points above or below a cut-off earns a + or – on your letter grade. For example, an 89 has a letter equivalent of B+. Exception: 90-91 gives you an A-, 92-96 an A, and 97 or above an A+.

  18. Other Issues • Attendance can help you when on borderline • General announcements are posted on the web site frequently! • Please check it out as often as possible • If you notice any inconsistencies on the website (broken links, misspellings, etc.) please notify me • Thank you!

  19. Bibliography • Tom Mitchell: Machine Learning, McGraw Hill, 1997, ISBN 0070428077. • RECOMMENDED • Ian H. Witten and Eibe Frank: Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, 2005, ISBN 0-12-088407-0. • Any graduate textbook on learning

  20. Office Hours and Extra Help • During the following times I'll be available in my office • Mondays: 11:00AM-12:00PM • Wednesdays: 1:20-2:20 PM • By appointment • You must send me an email to set up an appointment • If you just knock on my door without notice the chances are that I'll be busy • Please use the office hours!

  21. Assignment Submission • Submissions: • You will have on average one to two weeks from the date the work is assigned • Late submissions are not accepted • In exceptional cases you may have a 48-hour grace period at the cost of 50% of the grade (you should ask for it before the due date) • Should be submitted electronically AND on paper

  22. Plagiarism • Plagiarism is not tolerated. If caught, you'll be given a grade of 0 (zero) and disciplinary actions will be taken • It's OK to help friends who may have problems • This is actually a good learning tool • but it is not OK to share answers. If they need help, discuss with them but never show them your solution • I may (and will) ask you to demonstrate and explain your solutions

  23. Project • Preferably an interdisciplinary team • A common project OR • Something of your choice

  24. Machine Learning • The study of automated processes, algorithms, and systems that learn from experience: improve their knowledge and performance

  25. A Typical Learning Task • Learning the Sound ‘R’

  26. Learning Sound ‘R’ • there are at least 32 different R sounds to consider as separate, distinct sounds • http://mommyspeechtherapy.com/?p=1116 • Vocalic R: • R can be vowel-like too • depending on the location of the R relative to a vowel, the R will change pronunciation • In words like car, fear, for, the R sound comes after the vowel; each vowel is pronounced differently and so is the R • the R takes on the characteristics of the vowel depending on context and combination. The six vocalic combinations, [ar, air, ear, er, or, ire], are collectively called vocalic R, r-controlled vowels, or vowel R • How do children learn to pronounce R in English?

  27. Machine Learning • Standard process but no standard algorithm • Machine Learning task • Learning from experience E with respect to some class of tasks T and performance measure P • Learning is successful if P increases after learning from E
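A toy illustration of this T/P/E framing (my own example, not from the slides): the task T is classifying numbers as positive, the performance measure P is accuracy, and the experience E is a handful of labeled examples. Learning is successful because P rises after the learner uses E.

```python
# Toy T/P/E example (hypothetical, for illustration only).
# T: classify numbers as positive; P: accuracy; E: labeled examples.

def accuracy(classify, data):                 # performance measure P
    return sum(classify(x) == y for x, y in data) / len(data)

data = [(-2, False), (-1, False), (1, True), (3, True)]  # experience E

before = lambda x: True                       # untrained guess: always True
threshold = max(x for x, y in data if not y)  # "learn" a decision threshold
after = lambda x: x > threshold               # trained classifier

print(accuracy(before, data), accuracy(after, data))  # 0.5 1.0
```

P measured on the same task increases from 0.5 to 1.0 after learning from E, which is exactly the success criterion the slide states.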

  28. Checkers Playing • Task T: play checkers • Performance P: percent of games won against opponents • Training experience E: playing practice games against itself

  29. Examples of Learning Tasks • Recognize spoken words • Sphinx system (Lee, 1989) learns speaker specific strategies for recognizing phonemes and words • Neural networks • Hidden Markov models • Learning to drive an autonomous vehicle • Many methods

  30. Examples of Learning Tasks • Playing backgammon • Very competitive • Reinforcement learning

  31. ML Process • Specify: T, P, E • Specify: the type of knowledge to be learned • Specify: Representation of the target knowledge • Specify: Learning mechanism/algorithm

  32. Checkers: ML Process – Step #1 • Task T: learn how to play checkers • Performance P: #wins in world tournament • Experience E: see next slide

  33. Checkers: ML Process – Step #1 • Step #1: Choose training experience • Type of feedback: direct or indirect • If indirect feedback is available then there is an issue of credit assignment • In our checkers playing problem, we only know the final result • Control over training experience • Does the learner control the training examples or does a teacher provide them? • Distribution of training examples • Ideally, training examples should have the same distribution as the testing (future) examples

  34. Checkers: ML Process – Step #2 • Choose type of knowledge • Target function • Target function: • Find a best search strategy in the space of legal moves that yields best move sequences • ChooseMove : B → M, maps a legal state of the board to a move • V : B → R, maps a legal board state to a real value or score (higher values mean better states)

  35. Checkers: ML Process – Step #2 • Value of target function V(b) for a board state b is • V(b)=100, if a final board state is a win • V(b)=-100, if a final board state is a loss • V(b)=0, if a final board state is a draw • V(b)=V(b’) where b’ is the best final board state starting at b and playing optimally • Even V(b) is hard to compute, so an operational description of V is needed • The operational description of the ideal function V is an approximation of it
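A common operational description (used in Mitchell's checkers discussion) is a linear function of board features, V̂(b) = w0 + w1·x1 + ... + wn·xn. The sketch below is a minimal illustration; the particular feature names and weight values are my own assumptions, not from the slides.

```python
# Minimal sketch of a linear approximation V_hat(b) = w0 + sum(w_i * x_i).
# Feature names and weights are hypothetical, for illustration only.

def v_hat(weights, features):
    """Score a board state from its feature vector.
    weights[0] is the bias w0; features supply x1..xn."""
    score = weights[0]
    for w, x in zip(weights[1:], features):
        score += w * x
    return score

# Hypothetical features: (black pieces, red pieces, black kings,
# red kings, black pieces threatened, red pieces threatened)
weights = [0.0, 1.0, -1.0, 2.0, -2.0, -0.5, 0.5]
opening = (12, 12, 0, 0, 0, 0)      # symmetric starting position
print(v_hat(weights, opening))      # 0.0: neither side is ahead
```

Higher scores mean better board states for the learner, matching the slide's definition of V : B → R.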

  36. Checkers: ML Process – Step #2

  37. Checkers: ML Process – Step #3

  38. Checkers: ML Process – Step #3

  39. Learning Task Elements (so far)

  40. Checkers: ML Process – Step #4

  41. Checkers: ML Process – Step #4

  42. Checkers: ML Process – Step #4 • We need an algorithm for finding weights of a linear function that minimizes E • Incrementally refine weights as new training examples become available • Robust to errors in the estimated training values • LMS algorithm
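The LMS rule can be sketched in a few lines: for each training example, nudge every weight in the direction that reduces the error between the training value and the current prediction, w_i ← w_i + η·(V_train(b) − V̂(b))·x_i. This is a minimal illustration assuming the linear V̂ form; the learning rate and the toy example are my own choices.

```python
# Sketch of the LMS (least mean squares) incremental weight update.

def v_hat(weights, features):
    """Linear prediction: weights[0] is the bias w0."""
    score = weights[0]
    for w, x in zip(weights[1:], features):
        score += w * x
    return score

def lms_update(weights, features, v_train, eta=0.1):
    """One LMS step: w_i <- w_i + eta * (v_train - v_hat) * x_i."""
    error = v_train - v_hat(weights, features)
    weights[0] += eta * error            # bias term uses x0 = 1
    for i, x in enumerate(features, start=1):
        weights[i] += eta * error * x
    return weights

# Toy run: repeatedly fit the training value 100 for a one-feature state.
weights = [0.0, 0.0]
for _ in range(100):
    lms_update(weights, (1.0,), 100.0)
print(round(v_hat(weights, (1.0,))))     # converges toward 100
```

Because each step only shrinks the current error by a constant factor, the update is incremental and tolerates noisy estimated training values, which is why the slide highlights its robustness.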

  43. Checkers: ML Process – Step #4

  44. 4 Modules of Learning Systems • Performance System • Applies the learned function • Critic • Generates training examples • Generalizer • The learning algorithm • Experiment Generator • Generates problem instances/samples
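The four modules above form a loop: the experiment generator proposes a problem, the performance system solves it with the current hypothesis, the critic turns the trace into training examples, and the generalizer updates the hypothesis. The skeleton below is an illustrative sketch; all function bodies are stand-ins of my own, not from the slides.

```python
# Illustrative skeleton of the four-module learning loop (stand-in bodies).

def experiment_generator():
    return "initial board"                  # propose a new problem instance

def performance_system(problem, hypothesis):
    return ["trace of moves"]               # solve it using the hypothesis

def critic(trace):
    return [("board features", 100.0)]      # derive (state, V_train) examples

def generalizer(hypothesis, examples):
    return hypothesis + len(examples)       # stand-in for an LMS-style update

hypothesis = 0
for _ in range(3):                          # three iterations of the loop
    problem = experiment_generator()
    trace = performance_system(problem, hypothesis)
    examples = critic(trace)
    hypothesis = generalizer(hypothesis, examples)
print(hypothesis)
```

In the checkers design, the performance system plays a game with V̂, the critic assigns training values to the visited board states, and the generalizer is the LMS algorithm.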

  45. Design Choices

  46. Key Issues in ML • What algorithms can learn functions from examples and how well can they do it? • Which algorithms perform well for what types of problems and representations? • How does noise impact learning? • How much training data is sufficient? • Mantra #2: the more data the better, but more data is often unavailable or expensive • How can prior knowledge be used in learning?

  47. Key Issues in ML • What are the limits of learnability? • Under what conditions is successful learning possible, less possible, or impossible? • Nature of learning problems • Under what conditions is a particular learning algorithm assured of learning successfully? • Nature of learning algorithms • How can learners alter their representations to improve?

  48. Gist of ML • Experts do their job well, but it is often hard to get them to articulate a procedure/function that captures their expertise • Basic idea: • Build a mechanism that can learn from examples and hope for the best • Improve its knowledge with more examples

  49. Summary • Intro to Machine Learning

  50. Next Time • Intro to Weka • Classification and clustering
