
Regression Analysis



  1. Regression Analysis

  2. Introduction • Derive the α and β • Assess the use of the T-statistic • Discuss the importance of the Gauss-Markov assumptions • Describe the problems associated with autocorrelation, how to measure it and possible remedies • Introduce the problem of heteroskedasticity

  3. Values and Fitted Values
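The body of this slide (presumably a plot of actual against fitted values) is not reproduced in the transcript. As a sketch of the relationship it illustrated, assuming the two-variable model used throughout these slides, the fitted value and residual for observation i are:

```latex
\hat{y}_i = \hat{\alpha} + \hat{\beta} x_i , \qquad e_i = y_i - \hat{y}_i
```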

  4. Deriving the α and β • The aim of a least squares regression is to minimize the sum of the squared residuals (e), i.e. the squared vertical distances between the observed values of the dependent variable and the regression line.
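Written out, the least squares estimates of α and β are the values that solve:

```latex
\min_{\alpha,\beta} \; \sum_{i=1}^{n} e_i^2 \;=\; \sum_{i=1}^{n} \bigl( y_i - \alpha - \beta x_i \bigr)^2
```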

  5. The Constant
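The slide's formula does not survive in the transcript; the standard OLS expression for the constant, which follows from the first-order conditions of the minimization above, is:

```latex
\hat{\alpha} = \bar{y} - \hat{\beta}\,\bar{x}
```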

  6. The Slope Coefficient (β)
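Likewise, the standard OLS expression for the slope coefficient is:

```latex
\hat{\beta} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}
```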

  7. T-test • When conducting a t-test, we can use either a 1 or 2 tailed test, depending on the hypothesis • We usually use a 2 tailed test; in this case the null hypothesis is that our coefficient equals 0 and the alternative is that it does not. In a 1 tailed test we would stipulate whether it was greater than or less than 0. • Thus the critical value for a 2 tailed test at the 5% level of significance is the same as the critical value for a 1 tailed test at the 2.5% level of significance.
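The statistic itself is not shown in the transcript; for the null hypothesis that β = 0 it takes the usual form, with n − k degrees of freedom:

```latex
t = \frac{\hat{\beta} - 0}{SE(\hat{\beta})} \;\sim\; t_{n-k}
```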

  8. T-test • We can also test whether our coefficient equals 1.
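By the same logic, for the null hypothesis that the coefficient equals 1 the statistic becomes:

```latex
t = \frac{\hat{\beta} - 1}{SE(\hat{\beta})}
```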

  9. Gauss-Markov Assumptions • There are 4 assumptions relating to the error term. • The first is that the expected value of the error term is zero • The second is that the error terms are not correlated with each other (no autocorrelation) • The third is that the error term has a constant variance (homoskedasticity) • The fourth is that the error term and explanatory variable are not correlated.

  10. Gauss-Markov assumptions • More formally we can write them as:
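The formal statements were given on the slide image; a standard rendering, writing u for the error term and x for the explanatory variable, is:

```latex
\begin{aligned}
&E(u_i) = 0 \\
&\operatorname{Cov}(u_i, u_j) = 0 \quad \text{for } i \neq j \\
&\operatorname{Var}(u_i) = \sigma^2 \\
&\operatorname{Cov}(u_i, x_i) = 0
\end{aligned}
```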

  11. Additional Assumptions • There are a number of additional assumptions such as normality of the error term and n (number of observations) exceeding k (the number of parameters). • If these assumptions hold, we say the estimator is BLUE

  12. BLUE • Best or minimum variance • Linear or straight line • Unbiased or the estimator is accurate on average over a large number of samples. • Estimator

  13. Consequences of BLUE • If the estimator is not BLUE, there are serious implications for the regression; in particular, we cannot rely on the t-tests. • In this case we need to find a remedy for the problem.

  14. Autocorrelation • Autocorrelation occurs when the second Gauss-Markov assumption fails. • It is often caused by an omitted variable • In the presence of autocorrelation the estimator is no longer Best, although it is still unbiased. Therefore the estimator is not BLUE.

  15. Durbin-Watson Test • This tests for 1st order autocorrelation only • In this case the autocorrelation follows the first-order autoregressive process
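The first-order autoregressive, or AR(1), process referred to here is, with ρ the autocorrelation coefficient:

```latex
u_t = \rho\, u_{t-1} + \varepsilon_t
```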

  16. Durbin-Watson Test: decision framework [figure: the DW statistic runs from 0 to 4; 0 to dl indicates positive autocorrelation, dl to du is a zone of indecision, du to 4-du indicates no autocorrelation, 4-du to 4-dl is a zone of indecision, and 4-dl to 4 indicates negative autocorrelation]

  17. DW Statistic • The DW test statistic lies between 0 and 4. If it lies below the dl point, we have positive autocorrelation; if it lies between du and 4-du, we have no autocorrelation; and if it lies above 4-dl, we have negative autocorrelation. • The dl and du values can be found in the DW d-statistic tables (at the back of most textbooks)
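The statistic's formula is not reproduced in the transcript; its standard form, with e the residuals over t = 1, …, T, is:

```latex
d = \frac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2} \;\approx\; 2(1 - \hat{\rho})
```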

  18. Lagrange Multiplier (LM) Statistic • Tests for higher order autocorrelation • The test involves estimating the model and obtaining the residuals (e). • Then run a second regression of the residuals on lags of themselves and the explanatory variable (the number of lags depends on the order of autocorrelation being tested for, e.g. two lags for second order)
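The auxiliary regression appeared on the slide image; for a second-order test it takes a form like the following (the γ and v notation is mine, not the slide's):

```latex
e_t = \gamma_0 + \gamma_1 x_t + \rho_1 e_{t-1} + \rho_2 e_{t-2} + v_t
```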

  19. LM Test • The test statistic is the number of observations multiplied by the R-squared statistic from the auxiliary regression. • It follows a chi-squared distribution with degrees of freedom equal to the order of autocorrelation tested for (2 in this case) • The null hypothesis is no autocorrelation; if the test statistic exceeds the critical value, we reject the null and conclude that autocorrelation is present.
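As a minimal sketch of both tests in practice (this LM test is implemented in statsmodels as the Breusch-Godfrey test), assuming simulated data with made-up parameter values:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

# Simulate y = alpha + beta*x + u, where u follows an AR(1) process
rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal()   # positive 1st-order autocorrelation
y = 1.0 + 2.0 * x + u

# Estimate the model by OLS
res = sm.OLS(y, sm.add_constant(x)).fit()

# Durbin-Watson statistic: values well below 2 signal positive autocorrelation
print("DW statistic:", durbin_watson(res.resid))

# LM (Breusch-Godfrey) test for 2nd-order autocorrelation: n*R^2 ~ chi2(2)
lm_stat, lm_pvalue, _, _ = acorr_breusch_godfrey(res, nlags=2)
print("LM statistic:", lm_stat, "p-value:", lm_pvalue)
```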

  20. Remedies for Autocorrelation • There are 2 main remedies: • The Cochrane-Orcutt iterative process • An unrestricted version of the above process
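The Cochrane-Orcutt process is not spelled out in the transcript; as a sketch assuming an AR(1) error with estimated coefficient ρ̂, its core step is to quasi-difference the data and re-estimate, iterating until ρ̂ converges:

```latex
y_t - \hat{\rho}\, y_{t-1} = \alpha(1 - \hat{\rho}) + \beta\,(x_t - \hat{\rho}\, x_{t-1}) + \varepsilon_t
```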

  21. Heteroskedasticity • This occurs when the variance of the error term is not constant • Again the estimator is not BLUE; although it is still unbiased, it is no longer Best • It often occurs when the values of the variables vary substantially across observations, e.g. GDP in Cuba and the USA.

  22. Conclusion • The residual or error term is the difference between the actual and fitted values of the dependent variable. • There are 4 Gauss-Markov assumptions, which must be satisfied if the estimator is to be BLUE • Autocorrelation is a serious problem and needs to be remedied • The DW statistic can be used to test for the presence of 1st order autocorrelation, the LM statistic for higher order autocorrelation.
