
The Proof of the Unbiased Estimator of $\beta_2$ (Ref. to Gujarati (2003), pp. 100-101)



Presentation Transcript


1. The Proof of the unbiased estimator of $\beta_2$ (Ref. to Gujarati (2003), pp. 100-101)

The least squares formula (estimator) for the slope in the simple regression case is

$$b_2 = \frac{\sum x_i y_i}{\sum x_i^2} = \sum k_i (Y_i - \bar{Y}) = \sum k_i Y_i - \bar{Y}\sum k_i = \sum k_i Y_i ,$$

where $k_i = x_i / \sum x_i^2$, so that $\sum k_i = 0$ and $\sum k_i X_i = 1$.

Substitute the PRF $Y_i = \beta_1 + \beta_2 X_i + u_i$ into the $b_2$ formula:

$$b_2 = \sum k_i (\beta_1 + \beta_2 X_i + u_i) = \beta_1 \sum k_i + \beta_2 \sum k_i X_i + \sum k_i u_i = \beta_2 + \sum k_i u_i .$$

Take the expectation on both sides:

$$E(b_2) = \beta_2 + \sum k_i E(u_i) = \beta_2 \quad \text{(unbiased estimator).}$$

By assumptions: $E(u_i) = 0$, $E(u_i u_j) = 0$ for $i \neq j$, and $\mathrm{Var}(u_i) = \sigma^2$.
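This result can be checked numerically. Below is a minimal Monte Carlo sketch, not from Gujarati: the sample size, parameter values ($n$, $\beta_1$, $\beta_2$, $\sigma$), and the fixed regressor are illustrative assumptions. It verifies that the $k_i$ weights satisfy $\sum k_i = 0$ and $\sum k_i X_i = 1$, and that $b_2 = \sum k_i Y_i$ averages to $\beta_2$ across repeated samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0   # assumed illustrative values
X = np.linspace(1, 10, n)                    # fixed (non-stochastic) regressor
x = X - X.mean()                             # deviations x_i = X_i - X-bar
k = x / np.sum(x**2)                         # k_i = x_i / sum(x_i^2)

# Properties of the OLS weights: sum k_i = 0 and sum k_i X_i = 1
print(np.isclose(k.sum(), 0.0), np.isclose((k * X).sum(), 1.0))

# b_2 = sum k_i Y_i, averaged over many simulated samples of the PRF
b2_draws = []
for _ in range(20000):
    u = rng.normal(0.0, sigma, n)            # E(u_i) = 0, Var(u_i) = sigma^2, independent
    Y = beta1 + beta2 * X + u
    b2_draws.append(np.sum(k * Y))
print(np.mean(b2_draws))                     # ~ 0.5 = beta2, i.e. E(b2) = beta2
```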

2. The Proof of the variance of $\hat\beta_2$ (Ref. to Gujarati (2003), pp. 101-102)

By the same argument, $E(b_1) = \beta_1$ (unbiased estimator). For the slope,

$$\mathrm{Var}(b_2) = E[b_2 - E(b_2)]^2 = E[b_2 - \beta_2]^2 = E\Big[\sum k_i u_i\Big]^2$$
$$= E[k_1^2 u_1^2 + k_2^2 u_2^2 + k_3^2 u_3^2 + \cdots + 2k_1 k_2 u_1 u_2 + 2k_1 k_3 u_1 u_3 + \cdots]$$
$$= k_1^2 E(u_1^2) + k_2^2 E(u_2^2) + k_3^2 E(u_3^2) + \cdots + 2k_1 k_2 E(u_1 u_2) + 2k_1 k_3 E(u_1 u_3) + \cdots$$
$$= k_1^2 \sigma^2 + k_2^2 \sigma^2 + k_3^2 \sigma^2 + \cdots + 0 + 0 + 0 + \cdots$$
$$= \sigma^2 \sum k_i^2 = \sigma^2 \sum \Big(\frac{x_i}{\sum x_i^2}\Big)^2 = \frac{\sigma^2 \sum x_i^2}{(\sum x_i^2)^2} = \frac{\sigma^2}{\sum x_i^2} .$$

By assumptions: $E(u_i) = 0$, $E(u_i u_j) = 0$ for $i \neq j$, and $\mathrm{Var}(u_i) = \sigma^2$.
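A sketch of the same kind (again with assumed illustrative values) compares the empirical sampling variance of $b_2$ with the formula $\sigma^2 / \sum x_i^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0   # assumed illustrative values
X = np.linspace(1, 10, n)
x = X - X.mean()
k = x / np.sum(x**2)

b2_draws = np.empty(50000)
for r in range(b2_draws.size):
    u = rng.normal(0.0, sigma, n)
    b2_draws[r] = np.sum(k * (beta1 + beta2 * X + u))   # b2 = sum k_i Y_i

print(b2_draws.var())                        # empirical Var(b2)
print(sigma**2 / np.sum(x**2))               # theoretical sigma^2 / sum(x_i^2)
```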

3. The Proof of the covariance of $\hat\beta_1$ and $\hat\beta_2$: $\mathrm{cov}(\hat\beta_1, \hat\beta_2)$ (Ref. to Gujarati (2003), p. 102)

By definition:

$$\mathrm{Cov}(b_1, b_2) = E\{[b_1 - E(b_1)][b_2 - E(b_2)]\}$$
$$= E\{[(\bar{Y} - b_2 \bar{X}) - (\bar{Y} - \beta_2 \bar{X})][b_2 - \beta_2]\}$$
$$= E\{[-\bar{X}(b_2 - \beta_2)][b_2 - \beta_2]\}$$
$$= -\bar{X}\, E(b_2 - \beta_2)^2$$
$$= -\bar{X}\, \Big[\frac{\sigma^2}{\sum x_i^2}\Big] .$$
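The covariance formula can be checked the same way. The sketch below (illustrative values assumed) computes $b_1$ and $b_2$ over repeated samples and compares their empirical covariance with $-\bar{X}\sigma^2/\sum x_i^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0   # assumed illustrative values
X = np.linspace(1, 10, n)
x = X - X.mean()

b1_draws, b2_draws = np.empty(50000), np.empty(50000)
for r in range(50000):
    u = rng.normal(0.0, sigma, n)
    Y = beta1 + beta2 * X + u
    b2 = np.sum(x * (Y - Y.mean())) / np.sum(x**2)   # b2 = sum x_i y_i / sum x_i^2
    b1 = Y.mean() - b2 * X.mean()                    # b1 = Y-bar - b2 X-bar
    b1_draws[r], b2_draws[r] = b1, b2

print(np.cov(b1_draws, b2_draws)[0, 1])              # empirical Cov(b1, b2)
print(-X.mean() * sigma**2 / np.sum(x**2))           # theoretical -X-bar sigma^2 / sum x_i^2
```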

4. The Proof of the minimum-variance property of OLS (Ref. to Gujarati (2003), pp. 104-105)

The OLS estimator of $\beta_2$ is $b_2 = \frac{\sum x_i y_i}{\sum x_i^2} = \sum k_i Y_i$. Now suppose another linear estimator of $\beta_2$ is $b_2^* = \sum w_i Y_i$, and assume $w_i \neq k_i$.

Since
$$b_2^* = \sum w_i Y_i = \sum w_i (\beta_1 + \beta_2 X_i + u_i) = \beta_1 \sum w_i + \beta_2 \sum w_i X_i + \sum w_i u_i ,$$

take the expectation of $b_2^*$:
$$E(b_2^*) = \beta_1 \sum w_i + \beta_2 \sum w_i X_i + \sum w_i E(u_i) = \beta_1 \sum w_i + \beta_2 \sum w_i X_i , \quad \text{since } E(u_i) = 0 .$$

For $b_2^*$ to be unbiased, i.e., $E(b_2^*) = \beta_2$, there must be $\sum w_i = 0$ and $\sum w_i X_i = 1$.
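As an illustration of these two conditions, the sketch below (not from the source; the perturbation direction, parameter values, and sample size are assumptions) builds weights $w_i \neq k_i$ that still satisfy $\sum w_i = 0$ and $\sum w_i X_i = 1$, and confirms that the resulting $b_2^*$ is unbiased for $\beta_2$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0        # assumed illustrative values
X = np.linspace(1, 10, n)
x = X - X.mean()
k = x / np.sum(x**2)                              # OLS weights

# Alternative weights w_i != k_i: add a perturbation d that is orthogonal
# to both the constant and X, so the unbiasedness conditions still hold.
Z = np.column_stack([np.ones(n), X])
z = X**2
d = z - Z @ np.linalg.lstsq(Z, z, rcond=None)[0]  # residual of z on (1, X)
w = k + 0.001 * d

print(np.isclose(w.sum(), 0.0), np.isclose((w * X).sum(), 1.0))   # sum w_i = 0, sum w_i X_i = 1

b2_star = [np.sum(w * (beta1 + beta2 * X + rng.normal(0, sigma, n)))
           for _ in range(20000)]
print(np.mean(b2_star))                           # ~ beta2: b2* is unbiased, but it is not OLS
```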

5. If $b_2^*$ is an unbiased estimator, then
$$b_2^* = \sum w_i Y_i = \sum w_i (\beta_1 + \beta_2 X_i + u_i) = \beta_2 + \sum w_i u_i , \quad \text{so} \quad (b_2^* - \beta_2) = \sum w_i u_i .$$

And the variance of $b_2^*$ is:
$$\mathrm{Var}(b_2^*) = E[b_2^* - E(b_2^*)]^2 = E[b_2^* - \beta_2]^2 = E\Big(\sum w_i u_i\Big)^2 = \sum w_i^2 E(u_i^2) = \sigma^2 \sum w_i^2$$
$$= \sigma^2 \sum [(w_i - k_i) + k_i]^2$$
$$= \sigma^2 \sum (w_i - k_i)^2 + \sigma^2 \sum k_i^2 + 2\sigma^2 \sum (w_i - k_i) k_i$$
$$= \sigma^2 \sum (w_i - k_i)^2 + \sigma^2 \sum k_i^2 \qquad \text{since } \sum (w_i - k_i) k_i = 0 \text{ under the unbiasedness conditions}$$
$$= \sigma^2 \sum (w_i - k_i)^2 + \frac{\sigma^2}{\sum x_i^2}$$
$$= \sigma^2 \sum (w_i - k_i)^2 + \mathrm{Var}(b_2) .$$

Therefore, only if $w_i = k_i$ does $\mathrm{Var}(b_2^*) = \mathrm{Var}(b_2)$. Hence the OLS estimator $b_2$ has the minimum variance; if it did not, OLS would not be the best (linear unbiased) estimator.
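The decomposition $\mathrm{Var}(b_2^*) = \sigma^2 \sum (w_i - k_i)^2 + \mathrm{Var}(b_2)$ can be evaluated directly for the alternative weights constructed above. A minimal sketch, under the same assumed illustrative values:

```python
import numpy as np

n, sigma = 50, 1.0                                 # assumed illustrative values
X = np.linspace(1, 10, n)
x = X - X.mean()
k = x / np.sum(x**2)                               # OLS weights

# Alternative unbiased weights w (sum w_i = 0, sum w_i X_i = 1), built by
# perturbing k in a direction orthogonal to the constant and X.
Z = np.column_stack([np.ones(n), X])
d = X**2 - Z @ np.linalg.lstsq(Z, X**2, rcond=None)[0]
w = k + 0.001 * d

var_ols  = sigma**2 / np.sum(x**2)                 # Var(b2)  = sigma^2 / sum x_i^2
var_alt  = sigma**2 * np.sum(w**2)                 # Var(b2*) = sigma^2 sum w_i^2
identity = sigma**2 * np.sum((w - k)**2) + var_ols # decomposition from the slide

print(var_alt >= var_ols)                          # True: OLS has the smaller variance
print(np.isclose(var_alt, identity))               # True: Var(b2*) = sigma^2 sum(w-k)^2 + Var(b2)
```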

6. The Proof of the unbiased estimator of $\sigma^2$ (Ref. to Gujarati (2003), pp. 102-103)

Here the quantity to be estimated is $\mathrm{Var}(u) = E(u^2) = \sigma^2$, or, written in terms of the residuals, $\mathrm{Var}(e) = E(e^2) = \sigma^2$.

Since $Y_i = \beta_1 + \beta_2 X_i + u_i$ and $\bar{Y} = \beta_1 + \beta_2 \bar{X} + \bar{u}$, in deviation form
$$y_i = \beta_2 x_i + (u_i - \bar{u}) .$$

Also $e_i = Y_i - b_1 - b_2 X_i$ and $0 = \bar{Y} - b_1 - b_2 \bar{X}$, so $e_i = y_i - b_2 x_i$. Hence
$$e_i = \beta_2 x_i + (u_i - \bar{u}) - b_2 x_i = -(b_2 - \beta_2) x_i + (u_i - \bar{u}) .$$

Take squares and sum on both sides:
$$\sum e_i^2 = (b_2 - \beta_2)^2 \sum x_i^2 + \sum (u_i - \bar{u})^2 - 2 (b_2 - \beta_2) \sum x_i (u_i - \bar{u}) .$$

Take the expectation on both sides:
$$E\Big(\sum e_i^2\Big) = \underbrace{E[(b_2 - \beta_2)^2] \sum x_i^2}_{\text{I}} + \underbrace{E\Big[\sum (u_i - \bar{u})^2\Big]}_{\text{II}} \; \underbrace{- \, 2\, E\Big[(b_2 - \beta_2) \sum x_i (u_i - \bar{u})\Big]}_{\text{III}} .$$
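Each of the three terms I, II, and III can be approximated by simulation before their expectations are worked out on the next slide. A minimal sketch, with assumed illustrative values:

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0          # assumed illustrative values
X = np.linspace(1, 10, n)
x = X - X.mean()
R = 50000

I = II = III = 0.0
for _ in range(R):
    u = rng.normal(0.0, sigma, n)
    Y = beta1 + beta2 * X + u
    b2 = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
    I   += (b2 - beta2)**2 * np.sum(x**2)            # (b2 - beta2)^2 sum x_i^2
    II  += np.sum((u - u.mean())**2)                 # sum (u_i - u-bar)^2
    III += -2 * (b2 - beta2) * np.sum(x * (u - u.mean()))

print(I / R, II / R, III / R)    # ~ sigma^2, (n-1) sigma^2, -2 sigma^2
print((I + II + III) / R)        # ~ (n-2) sigma^2 = E(sum e_i^2)
```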

7. Utilizing the OLS assumptions $E(u_i) = 0$, $E(u_i^2) = \sigma^2$, and $E(u_i u_j) = 0$ for $i \neq j$, the three terms are

$$\text{I} = \sigma^2 , \qquad \text{II} = (n-1)\sigma^2 , \qquad \text{III} = -2\sigma^2 .$$

Substituting these three terms, I, II, and III, into the equation gives
$$E\Big(\sum e_i^2\Big) = (n-2)\sigma^2 .$$

And if we define $\hat\sigma^2 = \sum e_i^2 / (n-2)$, the expected value is therefore
$$E(\hat\sigma^2) = \frac{E(\sum e_i^2)}{n-2} = \frac{(n-2)\sigma^2}{n-2} = \sigma^2 .$$
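Finally, the unbiasedness of $\hat\sigma^2 = \sum e_i^2/(n-2)$ can be checked with the same kind of Monte Carlo sketch (illustrative values assumed): the average of $\hat\sigma^2$ over repeated samples should be close to $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(5)
n, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0          # assumed illustrative values
X = np.linspace(1, 10, n)
x = X - X.mean()

sigma2_hat = np.empty(50000)
for r in range(sigma2_hat.size):
    u = rng.normal(0.0, sigma, n)
    Y = beta1 + beta2 * X + u
    b2 = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
    b1 = Y.mean() - b2 * X.mean()
    e = Y - b1 - b2 * X                              # OLS residuals
    sigma2_hat[r] = np.sum(e**2) / (n - 2)           # sigma^2-hat = sum e_i^2 / (n - 2)

print(sigma2_hat.mean())                             # ~ sigma^2 = 1.0: unbiased
```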
