
In This Chapter We Will Cover



Presentation Transcript


  1. In This Chapter We Will Cover • Deductions we can make about β even though it is not observed. These include • Confidence Intervals • Hypotheses of the form H0: βi = c • Hypotheses of the form H0: βi ≥ c • Hypotheses of the form H0: a′β = c • Hypotheses of the form Aβ = c. We also cover deductions when V(e) ≠ σ²I (Generalized Least Squares)

  2. The Variance of the Estimator From these two raw ingredients and a theorem: V(y) = V(Xβ + e) = V(e) = σ²I, we conclude V(β̂) = V((X′X)-1X′y) = (X′X)-1X′ V(y) X(X′X)-1 = σ²(X′X)-1
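The conclusion V(β̂) = σ²(X′X)-1 can be checked numerically. Below is a minimal NumPy sketch (not from the slides; the design matrix, coefficients, and σ are made up for illustration) that simulates many y vectors from a fixed X and compares the empirical covariance of β̂ with σ²(X′X)-1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # made-up design with intercept
beta = np.array([1.0, 0.5, -0.3])                           # made-up true coefficients

# Simulate many data sets from the same X and record beta-hat each time
betas = []
for _ in range(5000):
    y = X @ beta + rng.normal(scale=sigma, size=n)
    betas.append(np.linalg.solve(X.T @ X, X.T @ y))   # beta-hat = (X'X)^{-1} X'y
betas = np.array(betas)

theoretical = sigma**2 * np.linalg.inv(X.T @ X)   # sigma^2 (X'X)^{-1}
empirical = np.cov(betas, rowvar=False)           # covariance of the simulated beta-hats
print(np.round(theoretical, 4))
print(np.round(empirical, 4))
```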

  3. What of the Distribution of the Estimator? Because β̂ is a linear combination of the elements of y, the Central Limit Theorem and the distributional assumptions about e let us treat β̂ as normally distributed.

  4. So What Can We Conclude About the Estimator? From E(linear combo) (Chapter 5): E(β̂) = β. From V(linear combo) plus the assumptions about e: V(β̂) = σ²(X′X)-1. From the Central Limit Theorem: β̂ is approximately normal. Together: β̂ ~ N(β, σ²(X′X)-1).

  5. Steps Towards Inference About β In general, we standardize an estimator by subtracting its expectation and dividing by the square root of its estimated variance. In particular, t = (β̂i − βi) / √V̂(β̂i). But note the hat on the V! Because σ² must be estimated, the ratio follows a t distribution rather than a standard normal. Here β̂ = (X′X)-1X′y.

  6. Let's Think About the Denominator V̂(β̂i) = s²·dii, where the dii are the diagonal elements of D = (X′X)-1 = {dij} and s² = SSError / (error degrees of freedom) estimates σ².
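To make the denominator concrete, here is a small sketch (illustrative synthetic data, not from the slides) computing β̂, s², and the standard errors s·√dii that serve as the t denominators.

```python
import numpy as np

# Illustrative synthetic data (made up for this sketch)
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=2.0, size=n)

D = np.linalg.inv(X.T @ X)            # D = (X'X)^{-1} = {d_ij}
beta_hat = D @ X.T @ y                # beta-hat = (X'X)^{-1} X'y
resid = y - X @ beta_hat
df_error = n - X.shape[1]             # error degrees of freedom
s2 = resid @ resid / df_error         # s^2 estimates sigma^2
se = np.sqrt(s2 * np.diag(D))         # s * sqrt(d_ii), the t denominator for each beta_i
print(beta_hat, se)
```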

  7. Putting It All Together • Now that we have a t, we can use it for two types of inference about β: • Confidence Intervals • Hypothesis Testing

  8. A Confidence Interval for βi A 1 − α confidence interval for βi is given by β̂i ± t(α/2)·√(s²·dii), which simply means that Pr[β̂i − t(α/2)·√(s²·dii) ≤ βi ≤ β̂i + t(α/2)·√(s²·dii)] = 1 − α, where t(α/2) is the critical value of the t distribution with the error degrees of freedom.
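A sketch of the interval in code, using scipy.stats.t for the critical value (the data are the same made-up synthetic setup as in the earlier sketches):

```python
import numpy as np
from scipy import stats

# Illustrative synthetic data (same made-up setup as the earlier sketches)
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=2.0, size=n)

D = np.linalg.inv(X.T @ X)
beta_hat = D @ X.T @ y
resid = y - X @ beta_hat
df_error = n - X.shape[1]
s2 = resid @ resid / df_error
se = np.sqrt(s2 * np.diag(D))

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df_error)      # t(alpha/2) with the error df
lower, upper = beta_hat - t_crit * se, beta_hat + t_crit * se
print(np.column_stack([lower, beta_hat, upper]))   # 1 - alpha interval for each beta_i
```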

  9. Graphic of Confidence Interval [Figure: the sampling distribution centered on βi, with the middle 1 − α of the area marking the interval endpoints]

  10. Statistical Hypothesis Testing: Step One Generate two mutually exclusive hypotheses: H0: βi = c HA: βi ≠ c

  11. Statistical Hypothesis Testing: Step Two Summarize the evidence with respect to H0: t = (β̂i − c) / √(s²·dii)

  12. Statistical Hypothesis Testing: Step Three Reject H0 if the probability of the evidence given H0 is small, i.e., if |t| exceeds the critical value t(α/2) for the error degrees of freedom.

  13. One Tailed Hypotheses Our theories should give us a sign for Step One, in which case we might have H0: βi ≥ c HA: βi < c. In that case we reject H0 if t falls far enough into the lower tail, i.e., if t ≤ −t(α) for the error degrees of freedom.
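The three steps of slides 10–12, plus the one-tailed variant on this slide, in one sketch (illustrative synthetic data; the choice of coefficient i and constant c is arbitrary):

```python
import numpy as np
from scipy import stats

# Illustrative synthetic data (same made-up setup as the earlier sketches)
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=2.0, size=n)

D = np.linalg.inv(X.T @ X)
beta_hat = D @ X.T @ y
resid = y - X @ beta_hat
df_error = n - X.shape[1]
s2 = resid @ resid / df_error

i, c = 1, 0.0                                          # Step One: pick the beta_i and c in H0
t_stat = (beta_hat[i] - c) / np.sqrt(s2 * D[i, i])     # Step Two: summarize the evidence

p_two_sided = 2 * stats.t.sf(abs(t_stat), df_error)    # H0: beta_i = c  vs HA: beta_i != c
p_lower_tail = stats.t.cdf(t_stat, df_error)           # H0: beta_i >= c vs HA: beta_i < c
print(t_stat, p_two_sided, p_lower_tail)               # Step Three: reject H0 if the p-value is small
```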

  14. A More General Formulation Consider a hypothesis of the form H0: a′β = c. So if c = 0, a′ = [1 −1 0 ⋯ 0] tests H0: β1 = β2, and a′ = [1 1 0 ⋯ 0] tests H0: β1 + β2 = 0.

  15. A t test for This More Complex Hypothesis We need to derive the denominator of the t using the variance of a linear combination: V(a′β̂) = a′V(β̂)a = σ²·a′(X′X)-1a, which leads to t = (a′β̂ − c) / √(s²·a′(X′X)-1a).
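A sketch of the t test for H0: a′β = c (illustrative; here a′ = [0, 1, −1] contrasts the two slope coefficients of the made-up synthetic model):

```python
import numpy as np
from scipy import stats

# Illustrative synthetic data (same made-up setup as the earlier sketches)
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=2.0, size=n)

D = np.linalg.inv(X.T @ X)
beta_hat = D @ X.T @ y
resid = y - X @ beta_hat
df_error = n - X.shape[1]
s2 = resid @ resid / df_error

a = np.array([0.0, 1.0, -1.0])     # a'beta = 0  <=>  the two slope coefficients are equal
c = 0.0
t_stat = (a @ beta_hat - c) / np.sqrt(s2 * (a @ D @ a))   # denominator: sqrt(s^2 a'(X'X)^{-1} a)
p_value = 2 * stats.t.sf(abs(t_stat), df_error)
print(t_stat, p_value)
```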

  16. Multiple Degree of Freedom Hypotheses These are hypotheses of the form H0: Aβ = c, where the matrix A imposes several restrictions at once, one per row.

  17. Examples of Multiple df Hypotheses An A matrix whose rows pick out β2 and β3 (with c = 0) tests H0: β2 = β3 = 0; an A matrix whose rows contrast β1 with β2 and β2 with β3 tests H0: β1 = β2 = β3.

  18. Testing Multiple df Hypotheses With q restrictions (rows of A), SSH = (Aβ̂ − c)′[A(X′X)-1A′]-1(Aβ̂ − c) and F = (SSH / q) / s², compared to an F distribution with q and the error degrees of freedom.
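A sketch of the multiple df test, computing SSH as the quadratic form above and referring F = (SSH/q)/s² to an F distribution (illustrative; the A shown tests that both slope coefficients of the synthetic model are zero):

```python
import numpy as np
from scipy import stats

# Illustrative synthetic data (same made-up setup as the earlier sketches)
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=2.0, size=n)

D = np.linalg.inv(X.T @ X)
beta_hat = D @ X.T @ y
resid = y - X @ beta_hat
df_error = n - X.shape[1]
s2 = resid @ resid / df_error

A = np.array([[0.0, 1.0, 0.0],     # row 1 picks out the first slope
              [0.0, 0.0, 1.0]])    # row 2 picks out the second slope
c = np.zeros(2)
q = A.shape[0]                     # hypothesis degrees of freedom

diff = A @ beta_hat - c
SSH = diff @ np.linalg.solve(A @ D @ A.T, diff)   # (Ab - c)'[A(X'X)^{-1}A']^{-1}(Ab - c)
F = (SSH / q) / s2
print(F, stats.f.sf(F, q, df_error))              # reject H0: A beta = c if the p-value is small
```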

  19. Another Way to Think About SSH Assume we have an A matrix that sets every coefficient except β1 to zero. We could calculate the SSH by running two versions of the model: the full model and a model restricted to just β1. SSH = SSError(Restricted Model) − SSError(Full Model), so F is (SSH / q) divided by SSError(Full Model) over its degrees of freedom.
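The same F can be obtained by fitting the full and restricted models and differencing their error sums of squares, as this slide describes. A sketch (illustrative; the restricted model keeps only the intercept column, matching the A of the previous sketch):

```python
import numpy as np

# Illustrative synthetic data (same made-up setup as the earlier sketches)
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=2.0, size=n)

def sse(X, y):
    """Error sum of squares from the least-squares fit of y on X."""
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    resid = y - X @ beta_hat
    return resid @ resid

X_restricted = X[:, :1]                       # restricted model: intercept only
q = X.shape[1] - X_restricted.shape[1]        # number of restrictions
df_error = n - X.shape[1]

SSH = sse(X_restricted, y) - sse(X, y)        # SSError(Restricted) - SSError(Full)
F = (SSH / q) / (sse(X, y) / df_error)
print(F)                                      # same F as the previous sketch
```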

  20. A Hypothesis That All β's Are Zero If our hypothesis is H0: β = 0, then the F would be F = (SSPredictable / k) / (SSError / (n − k)), with k the number of columns of X, which suggests a summary for the model: R² = SSPredictable / SSTotal.
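A sketch of the overall test and the R² summary, using the document's uncorrected sums of squares (SSTotal = y′y), on the same illustrative synthetic data:

```python
import numpy as np
from scipy import stats

# Illustrative synthetic data (same made-up setup as the earlier sketches)
rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=2.0, size=n)

k = X.shape[1]
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta_hat

SS_total = y @ y                 # y'y
SS_pred = y_hat @ y_hat          # equals y'Py
SS_error = SS_total - SS_pred    # e'e

F = (SS_pred / k) / (SS_error / (n - k))   # tests H0: all betas are zero
R2 = SS_pred / SS_total                    # the model summary the slide alludes to
print(F, stats.f.sf(F, k, n - k), R2)
```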

  21. Generalized Least Squares When we cannot make the Gauss-Markov Assumption that V(e) = σ²I. Suppose that V(e) = σ²V. Our objective function becomes f = e′V-1e

  22. SSError for GLS SSError = (y − Xβ̂)′V-1(y − Xβ̂), with β̂ = (X′V-1X)-1X′V-1y.
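A sketch of GLS with a known diagonal V (the heteroscedastic variances below are purely illustrative), computing β̂GLS, the GLS SSError, and the standard errors used on the next slide:

```python
import numpy as np

# Illustrative setup: known diagonal V (unequal error variances), made up for this sketch
rng = np.random.default_rng(2)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
v = rng.uniform(0.5, 3.0, size=n)
V = np.diag(v)                                       # V(e) = sigma^2 V, V known
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=np.sqrt(v))

V_inv = np.linalg.inv(V)
C = np.linalg.inv(X.T @ V_inv @ X)                   # (X'V^{-1}X)^{-1}
beta_gls = C @ X.T @ V_inv @ y                       # GLS estimator
resid = y - X @ beta_gls
SSE_gls = resid @ V_inv @ resid                      # e'V^{-1}e at the minimum
s2 = SSE_gls / (n - X.shape[1])                      # estimates sigma^2
se = np.sqrt(s2 * np.diag(C))                        # denominators for the GLS t tests (next slide)
print(beta_gls, se)
```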

  23. GLS Hypothesis Testing H0: βi = 0 uses t = β̂i / √(s²·dii), where dii is the ith diagonal element of (X′V-1X)-1. The same machinery extends to H0: a′β = c and to H0: Aβ − c = 0.

  24. Accounting for the Sum of Squares of the Dependent Variable e′e = y′y − y′X(X′X)-1X′y, i.e., SSError = SSTotal − SSPredictable. Rearranging, y′y = y′X(X′X)-1X′y + e′e, i.e., SSTotal = SSPredictable + SSError.

  25. SSPredicted and SSTotal Are Quadratic Forms SSPredicted is y′Py, and SSTotal is y′y = y′Iy. Here we have defined P = X(X′X)-1X′.

  26. The SSError is a Quadratic Form Having defined P = X(X′X)-1X′, now define M = I − P, i.e. I − X(X′X)-1X′. The formula for SSError then becomes SSError = y′My.

  27. Putting These Three Quadratic Forms Together SSTotal = SSPredictable + SSError, that is, y′Iy = y′Py + y′My. Here we note that I = P + M.

  28. M and P Are Linear Transforms of y ŷ = Py and e = My, so looking at the linear model: Iy = Py + My, and again we see that I = P + M.

  29. The Amazing M and P Matrices ŷ = Py and ŷ′ŷ = SSPredicted = y′Py; e = My and e′e = SSError = y′My. What does this imply about M and P?

  30. The Amazing M and P Matrices ŷ = Py and ŷ′ŷ = SSPredicted = y′Py; e = My and e′e = SSError = y′My. For both to hold, P and M must be idempotent: PP = P and MM = M.

  31. In Addition to Being Idempotent… P and M are symmetric (P′ = P, M′ = M) and orthogonal to each other: PM = MP = 0.
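A quick numerical check of these properties of P and M (idempotency, symmetry, PM = 0, and the sum-of-squares decomposition) on illustrative synthetic data:

```python
import numpy as np

# Illustrative synthetic data, made up for this check
rng = np.random.default_rng(3)
n = 30
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=n)

P = X @ np.linalg.inv(X.T @ X) @ X.T      # P = X(X'X)^{-1}X'
M = np.eye(n) - P                         # M = I - P

print(np.allclose(P @ P, P), np.allclose(M @ M, M))    # idempotent: PP = P, MM = M
print(np.allclose(P, P.T), np.allclose(M, M.T))        # symmetric
print(np.allclose(P @ M, np.zeros((n, n))))            # PM = 0
print(np.allclose(y @ y, y @ P @ y + y @ M @ y))       # SSTotal = SSPredictable + SSError
```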
