
Basic Econometrics



Presentation Transcript


  1. Basic Econometrics Chapter 3: TWO-VARIABLE REGRESSION MODEL: The problem of Estimation Prof. Himayatullah

  2. 3-1. The method of ordinary least squares (OLS) Least-squares criterion: minimize Σûᵢ² = Σ(Yᵢ − Ŷᵢ)² = Σ(Yᵢ − β̂₁ − β̂₂Xᵢ)² (3.1.2). Setting up the normal equations and solving them for β̂₁ and β̂₂ gives the least-squares estimators [see (3.1.6) and (3.1.7)]. The numerical and statistical properties of OLS are as follows:
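The solution of the normal equations can be sketched numerically. This is a minimal illustration, not the textbook's numerical example; the data set below is hypothetical, chosen only to show the formulas β̂₂ = Σxᵢyᵢ/Σxᵢ² and β̂₁ = Ȳ − β̂₂X̄ (with xᵢ, yᵢ denoting deviations from the sample means).

```python
# Hypothetical sample (not the data from the text), purely illustrative.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(X)

x_bar = sum(X) / n
y_bar = sum(Y) / n

# Deviations from the sample means: x_i = X_i - X_bar, y_i = Y_i - Y_bar
x = [Xi - x_bar for Xi in X]
y = [Yi - y_bar for Yi in Y]

# Least-squares estimators, i.e. the solution of the normal equations (3.1.6)-(3.1.7)
b2 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
b1 = y_bar - b2 * x_bar

print(b1, b2)  # intercept and slope estimates
```

For this data set the slope works out to 1.99 and the intercept to 0.05, so the fitted line is Ŷᵢ = 0.05 + 1.99Xᵢ.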

  3. 3-1. The method of ordinary least squares (OLS)
- The OLS estimators are expressed solely in terms of observable (sample) quantities; they are point estimators.
- The sample regression line passes through the sample means of X and Y.
- The mean value of the fitted Ŷᵢ is equal to the mean value of the actual Yᵢ.
- The mean value of the residuals ûᵢ is zero: Σûᵢ/n = 0.
- The residuals ûᵢ are uncorrelated with the fitted Ŷᵢ and with Xᵢ: ΣûᵢŶᵢ = 0 and ΣûᵢXᵢ = 0.
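These numerical properties hold by construction for any OLS fit, so they can be checked mechanically. The sketch below re-estimates the line on the same hypothetical data and asserts each property listed above (up to floating-point rounding).

```python
# Verify the numerical properties of OLS on hypothetical data.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(X)
x_bar, y_bar = sum(X) / n, sum(Y) / n
x = [Xi - x_bar for Xi in X]

b2 = sum(xi * (Yi - y_bar) for xi, Yi in zip(x, Y)) / sum(xi ** 2 for xi in x)
b1 = y_bar - b2 * x_bar

Y_hat = [b1 + b2 * Xi for Xi in X]                    # fitted values
u_hat = [Yi - Yhi for Yi, Yhi in zip(Y, Y_hat)]       # residuals

# Mean of fitted values equals mean of actual values
assert abs(sum(Y_hat) / n - y_bar) < 1e-9
# The line passes through the point of means (X_bar, Y_bar)
assert abs((b1 + b2 * x_bar) - y_bar) < 1e-9
# Residuals sum (and hence average) to zero
assert abs(sum(u_hat)) < 1e-9
# Residuals are uncorrelated with X_i and with the fitted Y_hat_i
assert abs(sum(ui * Xi for ui, Xi in zip(u_hat, X))) < 1e-9
assert abs(sum(ui * Yhi for ui, Yhi in zip(u_hat, Y_hat))) < 1e-9
```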

  4. 3-2. The assumptions underlying the method of least squares
- Assumption 1: Linear regression model (linear in the parameters).
- Assumption 2: X values are fixed in repeated sampling.
- Assumption 3: Zero mean value of the disturbance uᵢ: E(uᵢ|Xᵢ) = 0.
- Assumption 4: Homoscedasticity, or equal variance of uᵢ: Var(uᵢ|Xᵢ) = σ² [vs. heteroscedasticity].
- Assumption 5: No autocorrelation between the disturbances: Cov(uᵢ, uⱼ|Xᵢ, Xⱼ) = 0 for i ≠ j [vs. positive or negative correlation].

  5. 3-2. The assumptions underlying the method of least squares
- Assumption 6: Zero covariance between uᵢ and Xᵢ: Cov(uᵢ, Xᵢ) = E(uᵢXᵢ) = 0.
- Assumption 7: The number of observations n must be greater than the number of parameters to be estimated.
- Assumption 8: Variability in the X values: they must not all be the same.
- Assumption 9: The regression model is correctly specified.
- Assumption 10: There is no perfect multicollinearity among the Xs.

  6. 3-3. Precision or standard errors of least-squares estimates
In statistics the precision of an estimate is measured by its standard error (SE). With xᵢ = Xᵢ − X̄ denoting deviations from the sample mean:
var(β̂₂) = σ²/Σxᵢ² (3.3.1)
se(β̂₂) = √var(β̂₂) (3.3.2)
var(β̂₁) = σ²ΣXᵢ²/(nΣxᵢ²) (3.3.3)
se(β̂₁) = √var(β̂₁) (3.3.4)
σ̂² = Σûᵢ²/(n − 2) (3.3.5)
σ̂ = √σ̂² is the standard error of the estimate.
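Formulas (3.3.1)–(3.3.5) can be sketched directly. The data set is again hypothetical; σ² is unknown in practice, so the code uses the unbiased estimator σ̂² = Σûᵢ²/(n − 2) in place of σ², exactly as the formulas intend.

```python
# Standard errors of the OLS estimates, following (3.3.1)-(3.3.5); hypothetical data.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(X)
x_bar, y_bar = sum(X) / n, sum(Y) / n
x = [Xi - x_bar for Xi in X]

sxx = sum(xi ** 2 for xi in x)
b2 = sum(xi * (Yi - y_bar) for xi, Yi in zip(x, Y)) / sxx
b1 = y_bar - b2 * x_bar
u_hat = [Yi - (b1 + b2 * Xi) for Xi, Yi in zip(X, Y)]

sigma2_hat = sum(ui ** 2 for ui in u_hat) / (n - 2)         # (3.3.5), n-2 degrees of freedom
var_b2 = sigma2_hat / sxx                                    # (3.3.1)
se_b2 = var_b2 ** 0.5                                        # (3.3.2)
var_b1 = sigma2_hat * sum(Xi ** 2 for Xi in X) / (n * sxx)   # (3.3.3)
se_b1 = var_b1 ** 0.5                                        # (3.3.4)

print(se_b1, se_b2)
```

Note the divisor n − 2 in σ̂²: two degrees of freedom are used up estimating β̂₁ and β̂₂.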

  7. 3-3. Precision or standard errors of least-squares estimates
Features of the variances:
- var(β̂₂) is proportional to σ² and inversely proportional to Σxᵢ².
- var(β̂₁) is proportional to σ² and to ΣXᵢ², but inversely proportional to Σxᵢ² and to the sample size n.
- cov(β̂₁, β̂₂) = −X̄·var(β̂₂) shows the dependence between β̂₁ and β̂₂: the covariance has the opposite sign to X̄.

  8. 3-4. Properties of least-squares estimators: The Gauss-Markov Theorem
An OLS estimator is said to be BLUE if:
- It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model.
- It is unbiased, that is, its average or expected value, E(β̂₂), is equal to the true value β₂.
- It has minimum variance in the class of all such linear unbiased estimators.
An unbiased estimator with the least variance is known as an efficient estimator.

  9. 3-4. Properties of least-squares estimators: The Gauss-Markov Theorem
Gauss-Markov theorem: Given the assumptions of the classical linear regression model, the least-squares estimators have minimum variance in the class of linear unbiased estimators; that is, they are BLUE.
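Unbiasedness, one part of the BLUE property, can be illustrated by repeated sampling in the spirit of the Monte Carlo experiments of section 3-9. The sketch below assumes hypothetical true values β₁ = 1, β₂ = 2, σ = 1 (not from the text), holds X fixed across replications (Assumption 2), and checks that the average of β̂₂ over many samples is close to the true β₂.

```python
import random

# Monte Carlo sketch of unbiasedness: E(b2_hat) = beta2 under the classical
# assumptions. The true parameter values below are assumed for illustration.
random.seed(42)
beta1, beta2, sigma = 1.0, 2.0, 1.0

X = [1.0, 2.0, 3.0, 4.0, 5.0]        # fixed in repeated sampling
x_bar = sum(X) / len(X)
x = [Xi - x_bar for Xi in X]
sxx = sum(xi ** 2 for xi in x)

estimates = []
for _ in range(2000):
    # Fresh sample of Y with independent N(0, sigma^2) disturbances each replication
    Y = [beta1 + beta2 * Xi + random.gauss(0.0, sigma) for Xi in X]
    y_bar = sum(Y) / len(Y)
    b2 = sum(xi * (Yi - y_bar) for xi, Yi in zip(x, Y)) / sxx
    estimates.append(b2)

mean_b2 = sum(estimates) / len(estimates)
print(mean_b2)  # averages out close to the true beta2 = 2.0
```

Each individual β̂₂ varies from sample to sample, but the sampling distribution is centred on the true β₂; that is what unbiasedness means.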

  10. 3-5. The coefficient of determination r²: A measure of “Goodness of fit”
Yᵢ = Ŷᵢ + ûᵢ, or Yᵢ − Ȳ = (Ŷᵢ − Ȳ) + ûᵢ, or yᵢ = ŷᵢ + ûᵢ (note: the mean of Ŷ equals Ȳ).
Squaring both sides and summing (the cross-product term Σŷᵢûᵢ vanishes) gives
Σyᵢ² = β̂₂²Σxᵢ² + Σûᵢ², or TSS = ESS + RSS.

  11. 3-5. The coefficient of determination r²: A measure of “Goodness of fit”
TSS = Σyᵢ² = Total Sum of Squares
ESS = Σŷᵢ² = β̂₂²Σxᵢ² = Explained Sum of Squares
RSS = Σûᵢ² = Residual Sum of Squares
Dividing TSS = ESS + RSS through by TSS:
1 = ESS/TSS + RSS/TSS, or 1 = r² + RSS/TSS, or r² = 1 − RSS/TSS.
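The decomposition TSS = ESS + RSS and both expressions for r² can be verified numerically. This sketch reuses the same hypothetical data as before; it is illustrative only, not the textbook's worked example.

```python
# Verify TSS = ESS + RSS and r^2 = ESS/TSS = 1 - RSS/TSS on hypothetical data.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 7.8, 10.1]
n = len(X)
x_bar, y_bar = sum(X) / n, sum(Y) / n
x = [Xi - x_bar for Xi in X]

b2 = sum(xi * (Yi - y_bar) for xi, Yi in zip(x, Y)) / sum(xi ** 2 for xi in x)
b1 = y_bar - b2 * x_bar
u_hat = [Yi - (b1 + b2 * Xi) for Xi, Yi in zip(X, Y)]

TSS = sum((Yi - y_bar) ** 2 for Yi in Y)     # total sum of squares
ESS = b2 ** 2 * sum(xi ** 2 for xi in x)     # explained sum of squares
RSS = sum(ui ** 2 for ui in u_hat)           # residual sum of squares

r2 = ESS / TSS
print(r2)

# The decomposition holds up to floating-point rounding,
# and the two formulas for r^2 agree.
assert abs(TSS - (ESS + RSS)) < 1e-9
assert abs(r2 - (1.0 - RSS / TSS)) < 1e-12
```

Here almost all of the variation in Y is explained by X, so r² is close to 1.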

  12. 3-5. The coefficient of determination r²: A measure of “Goodness of fit”
r² = ESS/TSS is the coefficient of determination; it measures the proportion (or percentage) of the total variation in Y explained by the regression model. 0 ≤ r² ≤ 1; r = ±√r² is the sample correlation coefficient. Some properties of r are discussed in the text.

  13. 3-5. The coefficient of determination r²: A measure of “Goodness of fit”
3-6. A numerical example (pages 80-83)
3-7. Illustrative examples (pages 83-85)
3-8. Coffee demand function
3-9. Monte Carlo experiments (page 85)
3-10. Summary and conclusions (pages 86-87)
