
ICS 178 Introduction to Machine Learning & Data Mining


Presentation Transcript


  1. ICS 178 Introduction to Machine Learning & Data Mining. Instructor: Max Welling. Lecture 4: Least Squares Regression

  2. What have we done so far? The methods covered so far fit into a grid of parametric vs. non-parametric and supervised vs. unsupervised approaches, spanning density estimation, classification and regression. Non-parametric density estimation: Parzen windowing. Future example: k-means. Classification and regression so far: kNN. Future example: logistic regression. Today: least-squares regression.

  3. Problem: # of manatee kills versus # of boats

  4. Goal • Given data, find a linear relationship between them. • In 1 dimension we have data {Xn,Yn} (blue dots). • We imagine vertical springs (red lines) between the data points and a stiff rod (the line). • (Imagine they can slide along the rod so they remain vertical.) • The springs have rest length 0, so they compete to pull the rod towards them. • The relaxed solution is what we are after.
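The figure these bullets describe is not part of the transcript. As a sketch of the setup (the notation here is assumed, not taken from the slide): the rod is the line and each spring's length is the vertical residual,

$$ y = a\,x + b, \qquad \text{length of spring } n \;=\; \bigl|\,Y_n - (a\,X_n + b)\,\bigr| . $$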

  5. Cost Function We measure the total squared length of all the springs. We can now take derivatives with respect to a and b and set them to 0. After some algebra (on the whiteboard) we find the solution below.
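The two equations on the original slide are missing from the transcript; a standard reconstruction, consistent with the Matlab code on slide 7, is

$$ E(a,b) \;=\; \sum_{n=1}^{N} \bigl(Y_n - a\,X_n - b\bigr)^2 . $$

Setting \( \partial E/\partial a = 0 \) and \( \partial E/\partial b = 0 \) gives

$$ a \;=\; \frac{\langle XY\rangle - \langle X\rangle\langle Y\rangle}{\langle X^2\rangle - \langle X\rangle^2} \;=\; \frac{\mathrm{Cov}(X,Y)}{\mathrm{Var}(X)}, \qquad b \;=\; \langle Y\rangle - a\,\langle X\rangle , $$

where \( \langle\cdot\rangle \) denotes the sample average over the N data points.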

  6. More Variables • More generally, we want to have Dx input variables and Dy output variables. • The cost is now:
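The cost expression itself is missing from the transcript; a reconstruction consistent with the matrix code on the next slide is

$$ E(A,\mathbf{b}) \;=\; \sum_{n=1}^{N} \bigl\| \mathbf{y}_n - A\,\mathbf{x}_n - \mathbf{b} \bigr\|^2 , $$

where each \( \mathbf{x}_n \) is a \( D_x \)-vector, each \( \mathbf{y}_n \) a \( D_y \)-vector, \( A \) is a \( D_y \times D_x \) matrix and \( \mathbf{b} \) a \( D_y \)-vector.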

  7. In Matlab

function [A,b] = LSRegression(X,Y)
% X: Dx x N inputs, Y: Dy x N outputs (one column per data point)
[D,N] = size(X);
EX    = sum(X,2)/N;          % sample mean of the inputs
CovX  = X*X'/N - EX*EX';     % input covariance
EY    = sum(Y,2)/N;          % sample mean of the outputs
CovXY = Y*X'/N - EY*EX';     % output-input cross-covariance
A = CovXY * inv(CovX);       % A = Cov(Y,X) * Cov(X)^(-1)
b = EY - A*EX;               % b = E[Y] - A*E[X]
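A minimal way to sanity-check the function (synthetic data, not from the lecture): draw inputs at random, generate outputs from a known linear map plus a little noise, and confirm that LSRegression approximately recovers that map.

% Synthetic check of LSRegression (made-up example, not from the slides)
Dx = 3; Dy = 2; N = 1000;
Atrue = randn(Dy,Dx); btrue = randn(Dy,1);          % ground-truth parameters
X = randn(Dx,N);                                    % random inputs
Y = Atrue*X + repmat(btrue,1,N) + 0.1*randn(Dy,N);  % linear map plus small noise
[A,b] = LSRegression(X,Y);
disp(norm(A - Atrue)); disp(norm(b - btrue));       % both should be close to 0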

  8. Statistical Interpretation • We can think of the problem as one where we are trying to find the probability distribution P(Y|X). • We can write the model as shown below, where d is the residual error pointing vertically from the line to the data point. • d is a random vector and we may assume it has a Gaussian distribution.
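The model equation on the slide is not in the transcript; the natural reconstruction from the surrounding text (an isotropic Gaussian is assumed here; the slide may have used a general covariance) is

$$ \mathbf{y} \;=\; A\,\mathbf{x} + \mathbf{b} + \mathbf{d}, \qquad \mathbf{d} \sim \mathcal{N}(\mathbf{0}, \sigma^2 I), $$

so that \( P(\mathbf{y}\mid\mathbf{x}) = \mathcal{N}(\mathbf{y};\, A\mathbf{x} + \mathbf{b},\, \sigma^2 I) \).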

  9. Statistical Interpretation • We can now maximize the probability of the data under the model by adapting the parameters A, b. • If we use the negative log-probability we get the expression below. • Looks familiar? • We can also optimize for the noise variance (it won't affect A, b). • This is called "maximum likelihood learning".
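The negative log-probability shown on the slide is missing from the transcript; under the Gaussian model sketched above it would be

$$ -\log P(\text{data}) \;=\; \frac{1}{2\sigma^2}\sum_{n=1}^{N}\bigl\|\mathbf{y}_n - A\,\mathbf{x}_n - \mathbf{b}\bigr\|^2 \;+\; \frac{N D_y}{2}\,\log\bigl(2\pi\sigma^2\bigr), $$

so minimizing over A and b reproduces the least-squares cost (hence "looks familiar"), and setting the derivative with respect to \( \sigma^2 \) to zero gives the maximum-likelihood noise variance

$$ \hat{\sigma}^2 \;=\; \frac{1}{N D_y}\sum_{n=1}^{N}\bigl\|\mathbf{y}_n - A\,\mathbf{x}_n - \mathbf{b}\bigr\|^2 . $$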
