
Lecture 3: The Simple Regression Model



  1. Lecture 3: The Simple Regression Model Wooldridge, Ch. 2 Slides adapted by K. Clark from P. Anderson

  2. The Simple Regression Model y = β0 + β1x + u Note: 1. Now using y, x for random variables and realisations (data). 2. Also called the two-variable regression model.

  3. Some Terminology • In the simple linear regression model, where y = β0 + β1x + u, we typically refer to y as the • Dependent Variable, or • Left-Hand Side Variable, or • Explained Variable, or • Regressand

  4. Some Terminology, cont. • In the simple linear regression of y on x, we typically refer to x as the • Independent Variable, or • Explanatory Variable, or • Right-Hand Side Variable, or • Regressor, or • Covariate, or • Control Variable.

  5. Some Terminology, cont. • In the simple linear regression model u is the random error term: • a random variable • captures unsystematic influences on y • contains other determinants of y

  6. Some Terminology, cont. • The parameters β0 and β1 are the intercept and slope of the hypothesised linear relationship representing E(y|x). • This relationship is motivated by economic theory. • The linear form is not as restrictive as it looks.

  7. An Example A Simple Wage Equation: wage = β0 + β1educ + u. Here u includes experience, ability, tenure, work ethic, etc. β0 is the wage when educ = 0; β1 is the change in wage for a one-unit (one-year) change in educ, other things (u) held equal.

  8. A Simple Assumption • The average value of u, the error term, in the population is 0. That is, • E(u) = 0 • This is not a restrictive assumption, since we can always use β0 to normalize E(u) to 0

  9. Zero Conditional Mean • We need to make a crucial assumption about how u and x are related • We want it to be the case that knowing something about x does not give us any information about the mean of u. That is, • E(u|x) = E(u) = 0, which implies • E(y|x) = β0 + β1x

  10. E(y|x) as a linear function of x, where for any x the distribution of y is centered about E(y|x). [Figure: the line E(y|x) = β0 + β1x, with the conditional density f(y) drawn at x1 and x2, each centered on the line.]

  11. Ordinary Least Squares • Basic idea of regression is to estimate the population parameters from a sample • Let {(xi, yi): i = 1, …, n} denote a random sample of size n from the population • For each observation in this sample, it will be the case that • yi = β0 + β1xi + ui • This is the data generation process (DGP)
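
One way to make the DGP concrete is to simulate it and then estimate the parameters by OLS. A minimal Stata sketch (the sample size, seed and the parameter values β0 = 1, β1 = 2 are arbitrary illustrative choices, not from the lecture):

* simulate the DGP y = b0 + b1*x + u with b0 = 1, b1 = 2
clear
set obs 100
set seed 12345
generate x = rnormal()        // regressor
generate u = rnormal()        // error term, E(u|x) = 0 by construction
generate y = 1 + 2*x + u      // the data generation process
regress y x                   // OLS estimates should be close to 1 and 2

Because u is drawn independently of x here, the zero conditional mean assumption holds by construction.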

  12. Population regression line, sample data points and the associated error terms. [Figure: the line E(y|x) = β0 + β1x with four sample points (x1, y1), …, (x4, y4); the vertical distances u1, …, u4 between each point and the line are the error terms.]

  13. The data we observe. [Figure: the same four sample points (x1, y1), …, (x4, y4) plotted on their own, without the population regression line.]

  14. Deriving OLS Estimates • Our assumptions imply restrictions on the data generation process: • First, E(u) = 0. • Second, E(u|x) = 0 ⇒ Cov(x,u) = E(xu) = 0. Why? Remember from basic probability that Cov(X,Y) = E(XY) – E(X)E(Y). The zero conditional mean assumption, E(u|x) = E(u) = 0 (mean independence of u from x), gives E(xu) = E[x·E(u|x)] = 0, and hence Cov(x,u) = 0.

  15. Deriving OLS continued • Our two restrictions: E(u) = 0 E(xu) = 0

  16. Deriving OLS continued We can write our two restrictions just in terms of x, y, β0 and β1, since u = y – β0 – β1x: E(y – β0 – β1x) = 0 E[x(y – β0 – β1x)] = 0 These are called moment restrictions

  17. Deriving OLS using M.O.M. • The method of moments approach to estimation implies imposing the population moment restrictions on the sample moments • What does this mean? Recall that for E(X), the mean of a population distribution, a sample estimator of E(X) is simply the arithmetic mean of the sample

  18. More Derivation of OLS • We want to choose values of the parameters that will ensure that the sample versions of our moment restrictions are true • The sample versions are as follows: (1/n) Σi (yi – β̂0 – β̂1xi) = 0 (1/n) Σi xi(yi – β̂0 – β̂1xi) = 0

  19. More Derivation of OLS • Given the definition of a sample mean, and properties of summation, we can rewrite the first condition as follows: ȳ = β̂0 + β̂1x̄, or equivalently β̂0 = ȳ – β̂1x̄

  20. More Derivation of OLS Substituting β̂0 = ȳ – β̂1x̄ into the second condition gives Σi xi(yi – ȳ – β̂1(xi – x̄)) = 0, which rearranges to Σi xi(yi – ȳ) = β̂1 Σi xi(xi – x̄), or equivalently Σi (xi – x̄)(yi – ȳ) = β̂1 Σi (xi – x̄)²

  21. So the OLS estimated slope is β̂1 = Σi (xi – x̄)(yi – ȳ) / Σi (xi – x̄)², provided Σi (xi – x̄)² > 0, i.e. provided x varies in the sample

  22. Summary of OLS slope estimate • The slope estimate is the sample covariance between x and y divided by the sample variance of x • If x and y are positively correlated, the slope will be positive • If x and y are negatively correlated, the slope will be negative • Only need x to vary in our sample
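
This covariance/variance interpretation is easy to check numerically. A Stata sketch (assuming variables y and x are already in memory; `correlate, covariance` stores the sample covariance matrix in r(C)):

* slope = sample cov(x, y) / sample var(x)
quietly correlate y x, covariance
matrix C = r(C)                  // C[1,2] = cov(y,x), C[2,2] = var(x)
display "slope = " C[1,2]/C[2,2]
regress y x                      // reports the same slope coefficient

The (n – 1) divisors in the sample covariance and the sample variance cancel, so the ratio matches the OLS formula exactly.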

  23. A Silly Example Some “data” (three observations, with columns for x, y, x – x̄ and y – ȳ): 3 4 5 8 7 6

  24. Example continued

  25. Example continued So the sample regression line is:
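
The same arithmetic can be handed to Stata. A sketch, assuming the six values above pair up as the three observations (x, y) = (3, 4), (5, 8), (7, 6); that pairing is an assumption made here for illustration, not something stated on the slide:

* hypothetical pairing of the "silly" data into three (x, y) observations
clear
input x y
3 4
5 8
7 6
end
regress y x      // the reported coefficients give the sample regression line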

  26. Alternative Derivation of OLS • Intuitively, OLS is fitting a line through the sample points such that the sum of squared residuals is as small as possible, hence the term least squares • The residual, û, is an estimate of the error term, u, and is the difference between the fitted line (sample regression function) and the sample point

  27. Sample regression line, sample data points and the associated estimated error terms. [Figure: the fitted line ŷ = β̂0 + β̂1x with four sample points; the vertical distances û1, …, û4 between each point and the fitted line are the residuals.]

  28. Alternate approach to derivation • Given the intuitive idea of fitting a line, we can set up a formal minimization problem • That is, we want to choose β̂0 and β̂1 to minimize the sum of squared residuals: Σi (yi – β̂0 – β̂1xi)²

  29. Alternate approach, continued • If one uses calculus to solve the minimization problem for the two parameters you obtain the following first order conditions, which are the same as we obtained before, multiplied by n: Σi (yi – β̂0 – β̂1xi) = 0 Σi xi(yi – β̂0 – β̂1xi) = 0

  30. Algebraic Properties of OLS • The sum of the OLS residuals is zero • Thus, the sample average of the OLS residuals is zero as well • The sample covariance between the regressor and the OLS residuals is zero • The OLS regression line always passes through the point of sample means (x̄, ȳ)

  31. Algebraic Properties (precise) Σi ûi = 0, and hence (1/n) Σi ûi = 0 Σi xiûi = 0 ȳ = β̂0 + β̂1x̄
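
These properties are easy to verify after any regression. A minimal Stata sketch (assuming variables y and x are in memory):

* check the algebraic properties of OLS numerically
quietly regress y x
predict uhat, residuals     // OLS residuals
predict yhat, xb            // fitted values
summarize uhat              // mean of the residuals is zero (up to rounding)
correlate x uhat            // sample correlation between x and residuals is zero
summarize y yhat            // y and the fitted values have the same mean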

  32. More terminology Write each observation as yi = ŷi + ûi, the fitted value plus the residual. Then define: Total sum of squares SST = Σi (yi – ȳ)² Explained sum of squares SSE = Σi (ŷi – ȳ)² Residual sum of squares SSR = Σi ûi²

  33. Proof that SST = SSE + SSR SST = Σi (yi – ȳ)² = Σi [(yi – ŷi) + (ŷi – ȳ)]² = Σi [ûi + (ŷi – ȳ)]² = Σi ûi² + 2 Σi ûi(ŷi – ȳ) + Σi (ŷi – ȳ)² = SSR + 2 Σi ûi(ŷi – ȳ) + SSE. The cross term is zero because the residuals sum to zero and have zero sample covariance with x (and hence with the fitted values), so SST = SSE + SSR.

  34. Goodness-of-Fit • How do we think about how well our sample regression line fits our sample data? • Can compute the fraction of the total sum of squares (SST) that is explained by the model, call this the R-squared of the regression • R2 = SSE/SST = 1 – SSR/SST • 0 ≤ R2 ≤ 1 • A higher R2 implies a better fitting model.
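
After `regress`, Stata stores the sums of squares, so the R-squared can be recomputed by hand. A sketch (e(mss) is the model/explained sum of squares SSE, e(rss) is the residual sum of squares SSR):

* R-squared = SSE/SST = 1 - SSR/SST from Stata's stored results
quietly regress y x
display "SSE/SST     = " e(mss)/(e(mss) + e(rss))
display "1 - SSR/SST = " 1 - e(rss)/(e(mss) + e(rss))
display "e(r2)       = " e(r2)      // the R-squared Stata reports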

  35. Using Stata for OLS regressions • Now that we’ve derived the formula for calculating the OLS estimates of our parameters, you’ll be happy to know you don’t have to compute them by hand • OLS in Stata is very simple: to run the regression of y on x, just type • . regress y x

  36. A (slightly) more interesting example Data from wage3.dta as used in Stata tutorials. 526 observations on wage, education, experience, gender, marital status. Estimate parameters of wage = β0 + β1educ + u

  37. Stata Output (the slide annotates n, SSE, SSR, SST and the R-squared in the output below)

. regress wage educ

      Source |       SS       df       MS              Number of obs =     526
-------------+------------------------------           F(  1,   524) =  103.36
       Model |  1179.73204     1  1179.73204           Prob > F      =  0.0000
    Residual |  5980.68225   524  11.4135158           R-squared     =  0.1648
-------------+------------------------------           Adj R-squared =  0.1632
       Total |  7160.41429   525  13.6388844           Root MSE      =  3.3784

------------------------------------------------------------------------------
        wage |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        educ |   .5413593    .053248    10.17   0.000     .4367534    .6459651
       _cons |  -.9048516   .6849678    -1.32   0.187    -2.250472    .4407687
------------------------------------------------------------------------------

  38. Interpretation • Estimated slope is the dollar increase in wages for an additional year of education, other things (u) equal • Estimated intercept is the predicted level of wages when education is zero • R-squared value suggests about 16% of the variation in wage is explained by education alone. Low?
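
To put the estimates to work, the fitted line can be evaluated at a chosen level of education. A sketch using the coefficients stored after the regression above (12 years is just an illustrative value):

* predicted wage at educ = 12 from the fitted line
quietly regress wage educ
display "fitted wage at educ = 12: " _b[_cons] + _b[educ]*12

With the estimates in the output above this is roughly –0.90 + 0.54 × 12 ≈ 5.59.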

  39. Next Week • Finish off chapter 2 of Wooldridge: • Functional form (revise your logs!) • Statistical properties (E() and Var()) of OLS estimators • Estimating the error variance.
