
Cogs 118A: Natural Computation I
Angela Yu (ajyu@ucsd.edu), TA: He (Crane) Huang
January 5, 2009






Presentation Transcript


  1. Cogs 118A: Natural Computation I
     Angela Yu (ajyu@ucsd.edu), TA: He (Crane) Huang, January 5, 2009
     www.cogsci.ucsd.edu/~ajyu/Teaching/Cogs118A_wi10/cogs118A.html

  2. Course Website • Policy on academic integrity • Scribe schedule

  3. 118A Curriculum Overview
     • Regression (Ch. 3)
     • Graphical models (Ch. 8)
       [assignments]
     • Approximate inference & sampling (Ch. 10, 11)
     • Sequential data (Ch. 13)
     • Information theory, reinforcement learning
       [midterm]
     • Applications to cognitive science
       [final project due]
     • No final!

  4. 118B Curriculum (Tentative) Overview
     • Classification (Ch. 4)
     • Neural networks (Ch. 5)
     • Kernel methods & SVM (Ch. 6, 7)
     • Mixture models & EM (Ch. 9)
     • Continuous latent variables (Ch. 12)
     • Final project

  5. Machine Learning: 3 Types
     Imagine an agent observes a sequence of inputs: x1, x2, x3, …
     • Supervised learning: the agent also observes a sequence of labels or outputs y1, y2, …, and the goal is to learn the mapping f: x → y.
     • Unsupervised learning: the agent's goal is to learn a statistical model of the inputs, p(x), to be used for prediction, compact representation, decisions, …
     • Reinforcement learning: the agent can perform actions a1, a2, a3, …, which affect the state of the world, and receives rewards r1, r2, r3, … Its goal is to learn a policy {x1, x2, …, xt} → at that maximizes the expected reward.
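A minimal sketch of the supervised case in Python (illustrative only, not course code): observe input/output pairs and learn the mapping f: x → y by least squares.

    import numpy as np

    # Toy supervised-learning setup: inputs x1..xN with noisy outputs y1..yN.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, size=50)             # observed inputs x1, x2, ...
    y = 2.0 * x + rng.normal(0, 0.1, size=50)  # observed outputs y1, y2, ...

    # Learn a linear mapping f(x) = w1*x + w0 by minimizing squared error.
    w1, w0 = np.polyfit(x, y, deg=1)
    f = lambda x_new: w1 * x_new + w0          # the learned mapping f: x -> y
    print(f(0.5))                              # predict output for a new input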

  6. Supervised Learning
     [Figure: a cat/dog labeling example and a y-vs-x regression curve]
     • Classification: the outputs y1, y2, … are discrete labels
     • Regression: the outputs y1, y2, … are continuously valued

  7. Applications: Cognitive Science
     • Object categorization
     • Speech recognition (e.g., “VET” vs. “WET”)
     • Face recognition
     • Motor learning

  8. Face Recognition Challenges
     • Noisy sensory inputs
     • Incomplete information
     • Excess (irrelevant) inputs
     • Prior knowledge/bias
     • Changing environment
     • No (one) right answer: inductive inference (open-ended)
     • Stochasticity (uncertainty), not deterministic
     • …

  9. An Example: Curve-Fitting [Figure: noisy data points, y vs. x]

  10. Polynomial Curve-Fitting
      • Linear model (polynomial in x, linear in the weights)
      • Error function (root-mean-square)
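The two equations on this slide did not survive the transcript; a plausible reconstruction, assuming the slides follow the standard polynomial curve-fitting setup (order-M model with weights w, training targets t_n):

    % Order-M polynomial model: nonlinear in x, but linear in the weights w
    y(x, \mathbf{w}) = w_0 + w_1 x + w_2 x^2 + \dots + w_M x^M
                     = \sum_{j=0}^{M} w_j x^j

    % Sum-of-squares error over training pairs (x_n, t_n), and its
    % root-mean-square form, comparable across data sets of different size N
    E(\mathbf{w}) = \frac{1}{2} \sum_{n=1}^{N} \big( y(x_n, \mathbf{w}) - t_n \big)^2,
    \qquad
    E_{\mathrm{RMS}} = \sqrt{2 E(\mathbf{w}^*) / N}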

  11. Linear Fit? [Figure: y vs. x]

  12. Quadratic Fit? [Figure: y vs. x]

  13. 5th-Order Polynomial Fit? [Figure: y vs. x]

  14. 10th-Order Polynomial Fit? [Figure: y vs. x]

  15. And the Answer is… Quadratic [Figure: y vs. x]

  16. Training Error vs. Test (Generalization) Error
      [Figure: four y-vs-x panels comparing 1st-, 2nd-, 5th-, and 10th-order fits]
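A sketch of the experiment behind slides 11-16 (the data generator and sample sizes are illustrative assumptions, with a quadratic ground truth as slide 15 suggests): fit polynomials of increasing order and compare the training error with the error on held-out data.

    import numpy as np

    rng = np.random.default_rng(1)

    def make_data(n):
        """Noisy samples from a quadratic ground truth (assumed, per slide 15)."""
        x = rng.uniform(-1, 1, size=n)
        y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0, 0.2, size=n)
        return x, y

    x_train, y_train = make_data(15)   # small training set: overfitting shows up
    x_test, y_test = make_data(200)    # held-out data estimates generalization

    def rms(x, y, w):
        return np.sqrt(np.mean((np.polyval(w, x) - y) ** 2))

    for order in (1, 2, 5, 10):
        w = np.polyfit(x_train, y_train, deg=order)  # least-squares fit
        print(f"order {order:2d}: train RMS {rms(x_train, y_train, w):.3f}, "
              f"test RMS {rms(x_test, y_test, w):.3f}")

Training error falls monotonically with model order, while test error bottoms out near the true order and then climbs: the overtraining pattern the next slide summarizes.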

  17. What Did We Learn?
      • Model complexity matters
      • Minimizing training error → overtraining
      • A better error metric: test (generalization) error
      • But test error costs extra data (precious)
      • Fix 1: regularization (§3.1); see the sketch below
      • Fix 2: Bayesian model comparison (§3.3)
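For Fix 1, a minimal ridge-style sketch (my own illustration; `ridge_polyfit` and `lam` are hypothetical names, not from the course): an L2 penalty on the weights keeps even a high-order polynomial smooth without requiring extra test data.

    import numpy as np

    def ridge_polyfit(x, y, order, lam):
        """L2-regularized fit: minimize ||Phi w - y||^2 + lam * ||w||^2."""
        # Design matrix Phi with columns x^0, x^1, ..., x^order.
        Phi = np.vander(x, order + 1, increasing=True)
        # Closed-form ridge solution: w = (Phi^T Phi + lam*I)^{-1} Phi^T y.
        A = Phi.T @ Phi + lam * np.eye(order + 1)
        return np.linalg.solve(A, Phi.T @ y)

    # With lam > 0, even a 10th-order model's weights shrink toward zero,
    # and its test error stays close to the quadratic fit's.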

  18. Math Review
