
Presentation Transcript


  1. Outline • Least Squares Methods • Estimation: Least Squares • Interpretation of estimators • Properties of OLS estimators • Variance of Y, b, and a • Hypothesis Test of b and a • ANOVA table • Goodness-of-Fit and R²

  2. Linear regression model • Y = α + βX + ε, estimated as Yhat = a + bX

  3. Terminology • Dependent variable (DV) = response variable = left-hand side (LHS) variable • Independent variables (IV) = explanatory variables = right-hand side (RHS) variables = regressors (excluding a or b0) • a (b0) is an estimator of the parameter α (β0) • b (b1) is an estimator of the parameter β (β1) • a and b are the intercept and slope

  4. Least Squares Method • How do we draw such a line based on the observed data points? • Suppose an imaginary line y = a + bx • Imagine the vertical distance (or error) between the line and a data point: e = Y - E(Y) • This error (or gap) is the deviation of the data point from the imaginary line, the regression line • What are the best values of a and b? • The a and b that minimize the sum of such errors (deviations of the individual data points from the line)

  5. Least Squares Method

  6. Least Squares Method • Raw deviations do not have good properties for computation (positive and negative deviations cancel out) • Why do we use squared deviations? (as with the variance) • Let us find the a and b that minimize the sum of squared deviations rather than the sum of deviations • This method is called least squares

  7. Least Squares Method • The least squares method minimizes the sum of squared errors (deviations of the individual data points from the regression line) • Such a and b are called least squares estimators (estimators of the parameters α and β) • The process of obtaining parameter estimators (e.g., a and b) is called estimation • "Regress Y on X" • The least squares method is the estimation method of ordinary least squares (OLS)
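
A minimal sketch of this objective in code; the data values and the function name sse are illustrative, not taken from the slides.

```python
def sse(a, b, x, y):
    """Sum of squared vertical deviations of the data points from the line y = a + b*x."""
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

# Hypothetical data, purely for illustration
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

print(sse(0.0, 2.0, x, y))  # a line close to the data -> small SSE (about 0.11)
print(sse(5.0, 0.0, x, y))  # a flat line far from the pattern -> large SSE (about 43.7)
```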

  8. Ordinary Least Squares • Ordinary least squares (OLS) = linear regression model = classical linear regression model • Linear relationship between Y and Xs • Constant slopes (coefficients of Xs) • Least squares method • Xs are fixed; Y is conditional on Xs • Error is not related to Xs • Constant variance of errors

  9. Least Squares Method 1 How do we get the a and b that minimize the sum of squared errors?

  10. Least Squares Method 2 • Linear algebraic solution • Compute a and b so that the partial derivatives of the sum of squared errors with respect to a and b are equal to zero

  11. Least Squares Method 3 Take the partial derivative with respect to b and plug in the a you obtained, a = Ybar - b*Xbar
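
A sketch of the standard derivation this slide describes (setting the partial derivatives to zero and solving), written out in the deck's notation:

```latex
% Standard least squares derivation (sketch); notation follows the slides.
\begin{align*}
S(a,b) &= \sum_{i=1}^{n}\bigl(y_i - a - b x_i\bigr)^2\\
\frac{\partial S}{\partial a} &= -2\sum_{i=1}^{n}\bigl(y_i - a - b x_i\bigr) = 0
  \quad\Rightarrow\quad a = \bar{y} - b\,\bar{x}\\
\frac{\partial S}{\partial b} &= -2\sum_{i=1}^{n} x_i\bigl(y_i - a - b x_i\bigr) = 0
  \quad\Rightarrow\quad b = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n}(x_i-\bar{x})^2}
\end{align*}
```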

  12. Least Squares Method 4 The least squares method is an algebraic solution that minimizes the sum of squared errors (the variance component of the error). Not recommended
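
A minimal code sketch of the resulting estimators; the data values below are hypothetical, not those of Example 10-5.

```python
def ols_fit(x, y):
    """Closed-form simple OLS: b = SSxy / SSx, a = ybar - b * xbar."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    ss_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    ss_x = sum((xi - xbar) ** 2 for xi in x)
    b = ss_xy / ss_x       # slope estimator
    a = ybar - b * xbar    # intercept estimator
    return a, b

# Hypothetical data for illustration only
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
a, b = ols_fit(x, y)
print(f"a = {a:.4f}, b = {b:.4f}")
```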

  13. OLS: Example 10-5 (1)

  14. OLS: Example 10-5 (2), NO!

  15. OLS: Example 10-5 (3)

  16. What Are a and b? • a is an estimator of its parameter α • a is the intercept, the point where the regression line meets the y axis • b is an estimator of its parameter β • b is the slope of the regression line • b is constant regardless of the values of the Xs • b is more important than a, since the slope is what researchers usually want to know

  17. How to interpret b? • For a unit increase in x, the expected change in y is b, holding other things (variables) constant • For a unit increase in x, we expect y to increase by b, holding other things (variables) constant • For a unit increase in x, we expect y to increase by .964, holding other variables constant

  18. Properties of OLS estimators • The outcome of the least squares method is the OLS parameter estimators a and b • OLS estimators are linear • OLS estimators are unbiased (on average they equal the parameters) • OLS estimators are efficient (smallest variance) • Gauss-Markov Theorem: among linear unbiased estimators, the least squares (OLS) estimator has minimum variance, so it is BLUE (the best linear unbiased estimator)

  19. Hypothesis Test of a and b • How reliable are the a and b we compute? • A t test (a Wald test in general) can answer this • The test statistic is the standardized effect size (effect size / standard error) • The effect sizes are a - 0 and b - 0, assuming 0 is the hypothesized value; H0: α = 0, H0: β = 0 • The degrees of freedom are N - K, where K is the number of regressors + 1 (the intercept) • How do we compute the standard error (deviation)?
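
A minimal sketch of this test in code, assuming scipy is available; the function name coef_t_test and the default alpha are illustrative, not from the slides.

```python
from scipy import stats

def coef_t_test(estimate, se, n, k, alpha=0.05):
    """Two-tailed t test of H0: parameter = 0.
    TS = (estimate - 0) / SE;  df = N - K;  reject H0 if |TS| > critical value."""
    df = n - k
    ts = (estimate - 0) / se
    cv = stats.t.ppf(1 - alpha / 2, df)
    return ts, cv, abs(ts) > cv
```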

  20. Variance of b (1) • b is a random variable that changes across samples • b is a linear combination (weighted sum) of the random variable Y

  21. Variance of b (2) • Variance of Y (error) is σ² • Var(kY) = k²Var(Y) = k²σ²

  22. Variance of a • a = Ybar - b*Xbar • Var(b) = σ²/SSx, where SSx = ∑(X - Xbar)² • Var(∑Y) = Var(Y1) + Var(Y2) + … + Var(Yn) = nσ² Now, how do we compute the variance of Y, σ²?
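
A sketch that puts these pieces together in code; the Var(a) expression used here is the standard result σ²(1/n + Xbar²/SSx), which is assumed rather than taken from the slide.

```python
import math

def ols_standard_errors(x, sigma2):
    """Standard errors of a and b in simple OLS, given the error variance sigma^2.
    Var(b) = sigma^2 / SSx;  Var(a) = sigma^2 * (1/n + xbar^2 / SSx).
    In practice sigma^2 is unknown and is replaced by MSE = SSE / (N - K)."""
    n = len(x)
    xbar = sum(x) / n
    ss_x = sum((xi - xbar) ** 2 for xi in x)
    se_b = math.sqrt(sigma2 / ss_x)
    se_a = math.sqrt(sigma2 * (1 / n + xbar ** 2 / ss_x))
    return se_a, se_b
```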

  23. Variance of Y or error • The variance of Y is based on the residuals (errors), Y - Yhat • "Hat" means an estimator of the parameter • Yhat is the predicted value of Y (from a + bX); plug in x, given a and b, to get Yhat • Since a regression model includes K parameters (a and b in simple regression), the degrees of freedom are N - K • The numerator is the SSE in the ANOVA table

  24. Illustration (1) SSE = 127.2876, MSE = SSE/(N - K) = 127.2876/4 = 31.8219

  25. Illustration (2): Test b • How do we test whether beta is zero (no effect)? • Like Y, the estimators a and b follow a normal distribution; their standardized test statistics follow the t distribution • b = .9644, SE(b) = .2381, df = N - K = 6 - 2 = 4 • Hypothesis Testing • 1. H0: β = 0 (no effect), Ha: β ≠ 0 (two-tailed) • 2. Significance level = .05, CV = 2.776, df = 6 - 2 = 4 • 3. TS = (.9644 - 0)/.2381 = 4.0510 ~ t(N - K) • 4. TS (4.051) > CV (2.776), reject H0 • 5. Beta (not b) is not zero: there is a significant impact of X on Y
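
The same test, checked in code with the numbers on this slide (scipy assumed for the critical value):

```python
from scipy import stats

ts = (0.9644 - 0) / 0.2381                  # test statistic for b
cv = stats.t.ppf(1 - 0.05 / 2, 4)           # two-tailed critical value, df = N - K = 4
print(round(ts, 3), round(cv, 3), ts > cv)  # about 4.05 (the slide reports 4.051), 2.776, True -> reject H0
```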

  26. Illustration (3): Test a • How do we test whether alpha is zero? • Like Y, the estimators a and b follow a normal distribution; their standardized test statistics follow the t distribution • a = 81.0481, SE(a) = 13.8809, df = N - K = 6 - 2 = 4 • Hypothesis Testing • 1. H0: α = 0, Ha: α ≠ 0 (two-tailed) • 2. Significance level = .05, CV = 2.776 • 3. TS = (81.0481 - 0)/13.8809 = 5.8388 ~ t(N - K) • 4. TS (5.839) > CV (2.776), reject H0 • 5. Alpha (not a) is not zero: the intercept is discernible from zero (significant intercept)

  27. Questions • How do we test H0: β1 = β2 = … = 0 (all slope coefficients are zero at once)? • Remember that a t test compares only two group means, while ANOVA compares more than two group means simultaneously • The same logic applies in linear regression • Construct the ANOVA table by partitioning the variance of Y; the F test examines the H0 above • The ANOVA table provides key information about a regression model

  28. Partitioning Variance of Y (1)

  29. Partitioning Variance of Y (2)

  30. Partitioning Variance of Y (3) • 122.52 = 81.05 + .9644×43, 148.6 = 81.05 + .9644×70 • SST = SSM + SSE, 649.5 = 522.2 + 127.3

  31. ANOVA Table • H0: all slope parameters are zero (here β1 = 0) • Ha: at least one parameter is not zero • CV is 12.22 with (1, 4) degrees of freedom; TS > CV, so reject H0
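
The F statistic behind this table, computed from the sums of squares given earlier in the deck:

```python
ssm, sse = 522.2, 127.2876   # model and error sums of squares from the deck
df_model, df_error = 1, 4    # K - 1 and N - K for this example
msm = ssm / df_model
mse = sse / df_error
f_stat = msm / mse
print(round(f_stat, 2))      # about 16.41, larger than the critical value of 12.22 -> reject H0
```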

  32. R² and Goodness-of-Fit • Goodness-of-fit measures evaluate how well a regression model fits the data • The smaller the SSE, the better the model fits • The F test examines whether all slope parameters are zero (a large F and a small p-value indicate good fit) • R² (coefficient of determination) is SSM/SST; it measures how much of the overall variance of Y the model explains • R² = SSM/SST = 522.2/649.5 = .80 • A large R² means the model fits the data well

  33. Myths and Misunderstandings about R² • R² is the Karl Pearson correlation coefficient squared: r² = .8967² = .80 • If a regression model includes many regressors, R² is less useful, if not useless • Adding any regressor always increases R², regardless of the relevance of the regressor • Adjusted R² gives a penalty for adding regressors: Adj. R² = 1 - [(N - 1)/(N - K)](1 - R²) • R² is not a panacea, although its interpretation is intuitive; if the intercept is omitted, R² is incorrect • Check the specification, F, SSE, and the individual parameter estimates to evaluate your model; a model with a smaller R² can be better in some cases
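
A quick check of R² and adjusted R² with the deck's numbers (N = 6, K = 2):

```python
ssm, sst, n, k = 522.2, 649.5, 6, 2
r2 = ssm / sst
adj_r2 = 1 - (n - 1) / (n - k) * (1 - r2)
print(round(r2, 3), round(adj_r2, 3))   # about 0.804 and 0.755
```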

  34. Interpolation and Extrapolation • Confidence interval of E(Y|X), where x is within the range of the data; interpolation • Confidence interval of Y|X, where x is beyond the range of the data; extrapolation • Extrapolation involves a penalty and a danger: it widens the confidence interval, so the result is less reliable
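
A sketch of the confidence interval for E(Y|X = x0), using the standard formula (assumed here, since the slide's own formulas are not in the transcript); the (x0 - Xbar)² term is what widens the interval as x0 moves away from the center of the data.

```python
import math
from scipy import stats

def mean_response_ci(x0, a, b, mse, x, alpha=0.05):
    """CI for E(Y | X = x0): yhat +/- t * sqrt(MSE * (1/n + (x0 - xbar)^2 / SSx)).
    For a prediction interval for an individual Y, add 1 inside the parentheses."""
    n = len(x)
    xbar = sum(x) / n
    ss_x = sum((xi - xbar) ** 2 for xi in x)
    yhat = a + b * x0
    se = math.sqrt(mse * (1 / n + (x0 - xbar) ** 2 / ss_x))
    t = stats.t.ppf(1 - alpha / 2, n - 2)
    return yhat - t * se, yhat + t * se
```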
