
580.691 Learning Theory Reza Shadmehr

LMS with Newton-Raphson, weighted least squares, and the choice of loss function. Topics: review of regression; multivariate regression (batch algorithm, steepest descent, LMS); finding the minimum of a function in a single step via a Taylor series expansion.


Presentation Transcript


  1. 580.691 Learning Theory Reza Shadmehr LMS with Newton-Raphson, weighted least squares, choice of loss function

  2. Review of regression. Multivariate regression: batch algorithm, steepest descent, LMS.
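As a concrete anchor for the review, here is a minimal numerical sketch (not from the slides; the simulated data and learning rate are illustrative) comparing the batch solution, steepest descent, and LMS:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated regression problem: y = w_true . x + noise
n, p = 200, 3
X = rng.normal(size=(n, p))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Batch algorithm: solve the normal equations (X^T X) w = X^T y in one shot
w_batch = np.linalg.solve(X.T @ X, X.T @ y)

# Steepest descent: repeatedly step down the mean gradient of J(w) = 1/2 ||y - X w||^2
eta = 0.1                                   # illustrative learning rate
w_sd = np.zeros(p)
for _ in range(500):
    grad = -X.T @ (y - X @ w_sd) / n        # mean gradient over the data set
    w_sd -= eta * grad

# LMS: stochastic version, updating after every single data point
w_lms = np.zeros(p)
for i in range(n):
    err = y[i] - w_lms @ X[i]
    w_lms += eta * err * X[i]

print(w_batch)
print(w_sd)
print(w_lms)
```

All three approach the same parameters; the batch solution gets there in one linear solve, the iterative rules trade that for cheap per-step updates.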

  3. Finding the minimum of a function in a single step. Taylor series expansion of the loss around the current estimate $\mathbf{w}_0$:

$$J(\mathbf{w}) \approx J(\mathbf{w}_0) + \nabla J(\mathbf{w}_0)^\top(\mathbf{w}-\mathbf{w}_0) + \tfrac{1}{2}(\mathbf{w}-\mathbf{w}_0)^\top \mathbf{H}\,(\mathbf{w}-\mathbf{w}_0)$$

(If J is quadratic this expansion is exact; otherwise more terms appear here.)

  4. Newton-Raphson method. Setting the gradient of the quadratic approximation to zero gives the update

$$\mathbf{w} = \mathbf{w}_0 - \mathbf{H}^{-1}\,\nabla J(\mathbf{w}_0)$$

which reaches the minimum of a quadratic J in a single step.

  5. The gradient of the loss function; Newton-Raphson. For the squared-error loss

$$J(\mathbf{w}) = \tfrac{1}{2}\sum_{i}\bigl(y^{(i)} - \mathbf{w}^\top\mathbf{x}^{(i)}\bigr)^2$$

the gradient and Hessian are

$$\nabla J(\mathbf{w}) = -X^\top(\mathbf{y} - X\mathbf{w}), \qquad \mathbf{H} = X^\top X$$

so a single Newton-Raphson step from any $\mathbf{w}_0$ lands on $\mathbf{w} = (X^\top X)^{-1}X^\top\mathbf{y}$, the batch least-squares solution.
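A minimal check of this claim, assuming the squared-error loss above and simulated data: one Newton-Raphson step from an arbitrary starting point matches the batch solution.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + 0.05 * rng.normal(size=100)

w0 = np.zeros(3)                           # arbitrary starting point
grad = -X.T @ (y - X @ w0)                 # gradient of J at w0
H = X.T @ X                                # Hessian of the squared-error loss
w_newton = w0 - np.linalg.solve(H, grad)   # one Newton-Raphson step

w_batch = np.linalg.solve(X.T @ X, X.T @ y)
print(np.allclose(w_newton, w_batch))      # True: a single step reaches the minimum
```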

  6. The gradient of the loss function

  7. LMS algorithm with Newton-Raphson. Steepest descent (LMS) algorithm: $\mathbf{w} \leftarrow \mathbf{w} + \eta\,\mathbf{x}^{(i)}\bigl(y^{(i)} - \mathbf{w}^\top\mathbf{x}^{(i)}\bigr)$. Note: for a single data point, the per-sample Hessian $\mathbf{x}^{(i)}\mathbf{x}^{(i)\top}$ is a singular matrix, so it cannot be inverted to take an exact Newton-Raphson step on the LMS loss.
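One common remedy (a sketch, not taken from the slides) is to scale the LMS update by the inverse of a running, regularized estimate of the input correlation matrix, sometimes called an LMS/Newton update; the learning rate and the identity prior on the estimate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 3
# Ill-scaled inputs make plain LMS converge slowly along some directions
X = rng.normal(size=(n, p)) * np.array([3.0, 1.0, 0.3])
w_true = np.array([1.0, -0.5, 2.0])
y = X @ w_true + 0.05 * rng.normal(size=n)

eta = 0.1
w = np.zeros(p)
A = np.eye(p)                              # identity prior keeps the estimate well-conditioned early on
for i in range(n):
    x = X[i]
    A += np.outer(x, x)                    # running sum of x x^T
    R = A / (i + 1)                        # current estimate of the input correlation matrix
    err = y[i] - w @ x
    w += eta * np.linalg.solve(R, x) * err # Newton-like scaling of the per-sample LMS step

print(w)                                   # approaches w_true despite the ill scaling
```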

  8. Weighted Least Squares • Suppose some data points are more important than others. We want to weight the errors at those data points more heavily when fitting.

  9. How to handle artifacts in fMRI data. Diedrichsen and Shadmehr, NeuroImage (2005). In fMRI, we typically measure the signal intensity from N voxels at acquisition times t = 1…T. Each of these T measurements constitutes an image. We assume that the time series of voxel n is an arbitrary linear function of the design matrix X plus a noise term:

$$\mathbf{y}_n = X\,\boldsymbol{\beta}_n + \boldsymbol{\varepsilon}_n$$

where X is the T × p design matrix, $\boldsymbol{\beta}_n$ is a p × 1 vector, and $\mathbf{y}_n$ is a T × 1 column vector. If one source of noise is due to random discrete events, for example artifacts arising from the participant moving their jaw, then only some images will be influenced, violating the assumption of a stationary noise process. To relax this assumption, a simple approach is to allow the variance of the noise in each image to be scaled by a separate parameter. Under the temporal independence assumption, the variance-covariance matrix of the noise process might be:

$$V = \sigma^2\,\mathrm{diag}(v_1, \dots, v_T)$$

where $v_i$ is a variance scaling parameter for the i-th time that the voxel was imaged.

  10. Discrete events (e.g., swallowing) will impact only those images that were acquired during the event. What should be done with these images, once they are identified? A typical approach would be to discard images based on some fixed threshold. If we knew V, the optimal approach would be to weight the images by the inverse of their variance. But how do we get V? We can use the residuals from our model: for example, the squared residuals at time i, pooled across voxels, give an estimate of the scaling parameter $v_i$. This is a good start, but it has some issues regarding the bias of our estimator of variance. To improve things, see Diedrichsen and Shadmehr (2005).
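A simplified sketch of this idea, not the estimator of Diedrichsen and Shadmehr (2005): fit ordinary least squares, estimate a per-image variance scaling from squared residuals pooled across voxels, then refit with inverse-variance weights. The dimensions, noise levels, and artifact model are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
T, p, N = 120, 4, 50                        # images, regressors, voxels (illustrative)
X = rng.normal(size=(T, p))                 # design matrix
B_true = rng.normal(size=(p, N))
Y = X @ B_true + rng.normal(size=(T, N))    # baseline noise, sd = 1
artifact = rng.choice(T, size=6, replace=False)
Y[artifact] += 8.0 * rng.normal(size=(6, N))   # a few corrupted images

# Step 1: ordinary least squares for every voxel at once
B_ols = np.linalg.solve(X.T @ X, X.T @ Y)
R = Y - X @ B_ols                           # T x N residual matrix

# Step 2: per-image variance scaling from residuals pooled across voxels
v = (R ** 2).mean(axis=1)                   # crude estimate of v_i (biased; see the paper)
w = 1.0 / v                                 # inverse-variance weights

# Step 3: weighted least squares, down-weighting the corrupted images
XtW = X.T * w                               # equivalent to X^T diag(w)
B_wls = np.linalg.solve(XtW @ X, XtW @ Y)

print(np.abs(B_ols - B_true).mean(), np.abs(B_wls - B_true).mean())
```

The weighted fit typically recovers the regression coefficients better because the artifact images contribute very little to the solution.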

  11. Weighted Least Squares. "Normal equations" for weighted least squares, with a diagonal matrix of observation weights $W = \mathrm{diag}(\gamma_1,\dots,\gamma_m)$ (for example, $\gamma_i = 1/v_i$, the inverse of each observation's variance scaling):

$$\hat{\mathbf{w}} = (X^\top W X)^{-1} X^\top W\,\mathbf{y}$$

Weighted LMS: scale each per-sample update by that sample's weight,

$$\mathbf{w} \leftarrow \mathbf{w} + \eta\,\gamma_i\,\mathbf{x}^{(i)}\bigl(y^{(i)} - \mathbf{w}^\top\mathbf{x}^{(i)}\bigr)$$
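The following sketch (made-up data, illustrative learning rate and weights) checks that the weighted normal equations and a weighted LMS pass arrive at similar answers when some observations are much noisier than others:

```python
import numpy as np

rng = np.random.default_rng(4)
m, p = 400, 2
X = rng.normal(size=(m, p))
w_true = np.array([2.0, -1.0])
noise_sd = np.where(rng.random(m) < 0.2, 5.0, 0.5)   # 20% of points are much noisier
y = X @ w_true + noise_sd * rng.normal(size=m)
gamma = 1.0 / noise_sd ** 2                          # inverse-variance observation weights

# Weighted normal equations: (X^T W X) w = X^T W y
XtW = X.T * gamma
w_wls = np.linalg.solve(XtW @ X, XtW @ y)

# Weighted LMS: per-sample update scaled by the sample's weight
eta = 0.02
w_lms = np.zeros(p)
for _ in range(20):                                  # a few passes over the data
    for i in range(m):
        err = y[i] - w_lms @ X[i]
        w_lms += eta * gamma[i] * err * X[i]

print(w_wls, w_lms)
```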

  12. Regression with basis functions • In general, predictions can be based on a linear combination of a set of basis functions:

$$\hat{y} = \sum_i w_i\, g_i(\mathbf{x})$$

with basis set $\{g_i(\mathbf{x})\}$. Examples: Linear basis set: $g_i(\mathbf{x}) = x_i$. Gaussian basis set: $g_i(x) = \exp\!\bigl(-(x-\mu_i)^2/(2\sigma^2)\bigr)$. Radial basis set (RBF): $g_i(\mathbf{x}) = \exp\!\bigl(-\lVert\mathbf{x}-\boldsymbol{\mu}_i\rVert^2/(2\sigma^2)\bigr)$. Each basis is a local expert: it measures how close the features of the input are to those preferred by expert i.

  13. [Figure: output of a collection of experts (basis functions) tiled over the input space.]

  14. Regression with basis functions
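A minimal sketch of regression with a Gaussian radial basis set; the evenly spaced centers, shared width, and small ridge term are arbitrary illustrative choices, not anything prescribed by the slides:

```python
import numpy as np

def rbf_design(x, centers, sigma):
    """Gaussian basis matrix: entry (i, j) = g_j(x_i)."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(-2, 2, size=100))
y = np.sin(2 * x) + 0.1 * rng.normal(size=100)     # illustrative nonlinear target

centers = np.linspace(-2, 2, 10)                   # evenly spaced "experts"
sigma = 0.5                                        # shared width (hand-picked)
G = rbf_design(x, centers, sigma)

# Linear regression on the basis outputs; tiny ridge for numerical stability
w = np.linalg.solve(G.T @ G + 1e-8 * np.eye(len(centers)), G.T @ y)
y_hat = G @ w
print(np.mean((y - y_hat) ** 2))                   # training mean squared error
```

Once the basis outputs are computed, the fit is ordinary linear regression on those outputs, so everything from the earlier slides (batch solution, LMS, weighted variants) carries over unchanged.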

  15. Choice of loss function. In learning, our aim is to find parameters w so as to minimize the expected loss:

$$E[\text{loss}] = \int L(\tilde{y})\, p(\tilde{y}\,|\,\mathbf{w})\, d\tilde{y}$$

where $p(\tilde{y}\,|\,\mathbf{w})$ is the probability density of the error, given our model parameters. This is a weighted sum: each loss value is weighted by the likelihood of observing the error that produces it.
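A small numerical sketch of this weighted sum (the error density, grid, and the two loss functions compared are illustrative choices):

```python
import numpy as np

# Grid over possible errors and an illustrative, skewed error density
ytil = np.linspace(-4, 4, 2001)
dy = ytil[1] - ytil[0]
p = 0.7 * np.exp(-0.5 * (ytil - 0.3) ** 2) + 0.3 * np.exp(-0.5 * ((ytil + 1.5) / 0.5) ** 2)
p /= p.sum() * dy                                # normalize to a proper density

# Expected loss = sum over errors of L(y) * p(y) * dy
expected_sq  = np.sum(ytil ** 2    * p * dy)     # squared-error loss
expected_abs = np.sum(np.abs(ytil) * p * dy)     # absolute-error loss
print(expected_sq, expected_abs)
```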

  16. Inferring the choice of loss function from behavior Kording & Wolpert, PNAS (2004) A trial lasted 6 seconds. Over this period, a series of ‘peas’ appeared near the target, drawn from a distribution that depended on the finger position. The object was to “place the finger so that on average, the peas land as close as possible to the target”.

  17. The delta loss function. [Figure: the loss as a function of the error, and the error densities produced by two parameter choices, $w_1$ and $w_2$.] Imagine that the learner cannot arbitrarily change the density of the errors through learning. All the learner can do is shift the density left or right by setting the parameter w. If the learner uses the delta loss function (only an exact hit, $\tilde{y} = 0$, avoids the loss; all other errors are penalized equally), then the smallest possible expected loss occurs when $p(\tilde{y})$ has its peak at $\tilde{y} = 0$. Therefore, in the figure the choice of $w_2$ is better than $w_1$. In effect, the w that the learner chooses will depend on the exact shape of $p(\tilde{y})$.

  18. Behavior with the delta loss function. Suppose the "outside" system (e.g., the teacher) sets r. Given the loss function, we can predict what the best w will be for the learner.

  19. Behavior with the squared error loss function. Here $p(\tilde{y})$ has a variance that is independent of w. Since $E[\tilde{y}^2] = \mathrm{Var}(\tilde{y}) + (E[\tilde{y}])^2$, minimizing the expected loss means picking the w that produces the smallest $(E[\tilde{y}])^2$, i.e., the w that sets the mean of $p(\tilde{y})$ to zero.
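A sketch contrasting the two predictions on a made-up, skewed mixture density: the delta loss says to place the mode of $p(\tilde{y})$ at zero, while the squared-error loss says to place its mean at zero.

```python
import numpy as np

# Skewed error density: mixture of two normals (illustrative parameters)
grid = np.linspace(-4, 4, 4001)
dy = grid[1] - grid[0]
p = 0.8 * np.exp(-0.5 * (grid / 0.3) ** 2) / (0.3 * np.sqrt(2 * np.pi)) \
  + 0.2 * np.exp(-0.5 * ((grid - 1.5) / 0.6) ** 2) / (0.6 * np.sqrt(2 * np.pi))
p /= p.sum() * dy                       # renormalize on the grid

mean = np.sum(grid * p) * dy            # squared-error loss: shift so the mean sits at 0
mode = grid[np.argmax(p)]               # delta loss: shift so the mode sits at 0
print(mean, mode)                       # the two losses prescribe different shifts w
```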

  20. Mean and variance of mixtures of normal distributions. If $p(y) = \sum_i \pi_i\, N(y\,;\,\mu_i, \sigma_i^2)$ with $\sum_i \pi_i = 1$, then

$$E[y] = \sum_i \pi_i\,\mu_i, \qquad \mathrm{Var}(y) = \sum_i \pi_i\bigl(\sigma_i^2 + \mu_i^2\bigr) - \Bigl(\sum_i \pi_i\,\mu_i\Bigr)^2$$
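A few lines (illustrative mixture parameters) checking these formulas against samples drawn from the mixture:

```python
import numpy as np

rng = np.random.default_rng(6)
pi = np.array([0.8, 0.2])                # mixture weights (illustrative)
mu = np.array([0.0, 1.5])
sd = np.array([0.3, 0.6])

# Closed form
mean = np.sum(pi * mu)
var = np.sum(pi * (sd ** 2 + mu ** 2)) - mean ** 2

# Monte Carlo check
k = rng.choice(2, size=200_000, p=pi)    # which component each sample comes from
samples = rng.normal(mu[k], sd[k])
print(mean, samples.mean(), var, samples.var())
```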

  21. Kording & Wolpert, PNAS (2004). [Figure: data from typical subjects compared with the delta-loss prediction; the estimated loss function is plotted against the error in cm.]
  • Results: large errors are penalized by less than a squared term; the estimated loss function rose more slowly than a quadratic for large errors.
  • However, note that the largest errors tended to occur very infrequently in this experiment.
