
Econ 427 lecture 3 slides






Presentation Transcript


  1. Econ 427 lecture 3 slides

  2. A scatterplot

  3. A regression line

  4. A regression line

  5. Linear Regression • We assume that y is linearly related to x, with an independently and identically distributed (iid) disturbance term with zero mean and constant variance: • y_t = β0 + β1 x_t + ε_t, where ε_t ~ iid(0, σ²), t = 1, …, T

  6. Linear Regression • The regression function gives an estimate of y for a given x, which is just the conditional expectation of y given x = x*: • E(y | x = x*) = β0 + β1 x*

  7. Linear Regression • Since we don’t know the true (population) relationship, we estimate it from the data by choosing the parameters that minimize the sum of squared errors: • min over β0, β1 of Σ_t (y_t − β0 − β1 x_t)², t = 1, …, T
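In the bivariate case, the minimizing parameters have a closed-form solution. A minimal Python/numpy sketch (the data values are invented for illustration, not taken from the lecture):

```python
import numpy as np

# Invented data for illustration (not from the lecture)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Closed-form OLS estimates that minimize the sum of squared errors:
#   beta1_hat = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)
#   beta0_hat = ybar - beta1_hat * xbar
x_bar, y_bar = x.mean(), y.mean()
beta1_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
beta0_hat = y_bar - beta1_hat * x_bar
```

For these made-up data the fit is beta0_hat ≈ 0.14 and beta1_hat ≈ 1.96.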

  8. Linear Regression • Then we use the estimated parameters to get a fitted value (also called an “in-sample forecast”) of y, given x: • ŷ_t = β̂0 + β̂1 x_t • where the “hats” indicate estimated values. • The in-sample forecast errors are just: • e_t = y_t − ŷ_t

  9. Sum of squared residuals • SSR is the sum of squared residuals of the regression (the quantity that OLS minimizes): • SSR = Σ_t e_t²
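A minimal Python/numpy sketch of fitted values, residuals, and SSR, using invented data for illustration:

```python
import numpy as np

# Invented data for illustration (not from the lecture)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# OLS estimates (closed form for the bivariate case)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

y_hat = b0 + b1 * x   # fitted values ("in-sample forecasts")
e = y - y_hat         # in-sample forecast errors (residuals)
ssr = np.sum(e ** 2)  # sum of squared residuals
```

Note that with an intercept in the model, the OLS residuals sum to zero by construction.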

  10. R-squared • A standard measure of overall goodness of fit is R² (R-squared), technically the proportion of the variance of y explained by the variables in the model: • R² = 1 − SSR / Σ_t (y_t − ȳ)²

  11. Adjusted R-squared • The problem with R² is that it always goes up when you add more variables. To avoid “overfitting” (any model will fit the data if there are enough RHS variables), we normally adjust for degrees of freedom using the “adjusted R-squared”: • R̄² = 1 − [SSR / (T − k)] / [Σ_t (y_t − ȳ)² / (T − 1)] • where k is the number of estimated parameters.
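Both measures can be computed directly from the residuals. A Python/numpy sketch with invented data (T = 5 observations, k = 2 estimated parameters: intercept and slope):

```python
import numpy as np

# Invented data for illustration (not from the lecture)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
T, k = len(y), 2  # observations; estimated parameters (intercept + slope)

# OLS fit and residuals
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
e = y - (b0 + b1 * x)

ssr = np.sum(e ** 2)               # sum of squared residuals
sst = np.sum((y - y.mean()) ** 2)  # total sum of squares
r2 = 1 - ssr / sst
adj_r2 = 1 - (ssr / (T - k)) / (sst / (T - 1))
```

The degrees-of-freedom penalty means adj_r2 is always below r2 once the model has more than an intercept.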

  12. F-statistic • The F-statistic is a test of whether all model coefficients (other than the intercept) are jointly zero, an overall test of the significance of the regression: • F = [(SST − SSR) / (k − 1)] / [SSR / (T − k)] • where SST = Σ_t (y_t − ȳ)² is the total sum of squares.
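A sketch of the F-statistic for the same kind of bivariate regression (invented data; with a single slope coefficient, the overall F-test reduces to a test on that one slope):

```python
import numpy as np

# Invented data for illustration (not from the lecture)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
T, k = len(y), 2  # observations; estimated parameters

# OLS fit and residuals
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
e = y - (b0 + b1 * x)

sst = np.sum((y - y.mean()) ** 2)  # total sum of squares
ssr = np.sum(e ** 2)               # sum of squared residuals

# Overall significance test: explained variation per restriction,
# relative to residual variation per degree of freedom
f_stat = ((sst - ssr) / (k - 1)) / (ssr / (T - k))
```

A large value (here in the thousands, because the invented data are nearly linear) rejects the null that all slope coefficients are jointly zero.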

  13. Durbin-Watson statistic • The Durbin-Watson statistic is a measure of serial correlation in the regression errors. (Why do we care about whether errors are serially correlated?)

  14. Durbin-Watson statistic • DW is a test of whether there is first-order autocorrelation of the model errors, i.e. whether φ = 0 in ε_t = φ ε_{t−1} + v_t: • DW = Σ_{t=2..T} (e_t − e_{t−1})² / Σ_{t=1..T} e_t² • Values for DW fall in the [0, 4] interval, and values significantly below 2 (below 1.5, say) are indicative of positive serial correlation.
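The DW statistic is easy to compute from the residuals. A Python/numpy sketch with invented data:

```python
import numpy as np

# Invented data for illustration (not from the lecture)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# OLS fit and residuals
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
e = y - (b0 + b1 * x)

# Durbin-Watson: squared first differences of the residuals over the
# sum of squared residuals; values near 2 suggest no first-order
# autocorrelation
dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
```

Here dw comes out near 2 (about 2.31), consistent with no serial correlation in these made-up residuals.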

  15. Moments • Mean • Variance • Skewness • Kurtosis
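The four sample moments listed above can be sketched as follows (the series is invented; this uses the 1/T convention for the variance, and kurtosis is the raw standardized fourth moment, which equals 3 for a normal distribution):

```python
import numpy as np

# Invented series for illustration (not from the lecture)
z = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

mean = z.mean()
var = np.mean((z - mean) ** 2)                # second central moment (1/T convention)
skew = np.mean((z - mean) ** 3) / var ** 1.5  # standardized third moment
kurt = np.mean((z - mean) ** 4) / var ** 2    # standardized fourth moment (= 3 for a normal)
```

Skewness measures asymmetry around the mean; kurtosis measures the thickness of the tails relative to a normal distribution.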
