Supplement 3: Expectations, Conditional Expectations, Law of Iterated Expectations

Presentation Transcript


  1. Supplement 3: Expectations, Conditional Expectations, Law of Iterated Expectations *The ppt is a joint effort: Ms Jingwen Zhang discussed the law of iterated expectations with Dr. Ka-fu Wong on 1 March 2007; Ka-fu explained the concept with an example; Jingwen drafted the ppt; Ka-fu revised it. Use it at your own risk. Comments, if any, should be sent to kafuwong@econ.hku.hk.

  2. Joint, conditional and marginal probability, when there are two random variables.
  • Let (X, Y) be two random variables with a joint probability P(X, Y).
  • From the joint probability, we can compute
    • the marginal probabilities P_X(X) and P_Y(Y):
      P_X(X=k) = ∑_Y P(X=k, Y);  P_Y(Y=k) = ∑_X P(X, Y=k)
    • the conditional probabilities P_X|Y(X) and P_Y|X(Y):
      P_X|Y=k(X) = P(X, Y=k)/P_Y(Y=k);  P_Y|X=k(Y) = P(X=k, Y)/P_X(X=k)
  • Unconditional expectation: E(Y) = ∑_Y ∑_X Y*P(X, Y)
  • Conditional expectations E(Y|X) and E(X|Y):
      E(Y|X) = ∑_Y Y*P_Y|X(Y);  E(X|Y) = ∑_X X*P_X|Y(X)
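As a quick illustration of these definitions, the following Python sketch computes the marginal probabilities, a conditional probability, and the unconditional and conditional expectations from a small joint probability table. The table entries are made-up numbers for illustration only; they are not taken from the slides.

from itertools import product

# Hypothetical joint pmf P(X, Y) stored as {(x, y): probability}; values are illustrative only.
joint = {(0, 1): 0.20, (0, 2): 0.15, (0, 3): 0.05,
         (1, 1): 0.10, (1, 2): 0.25, (1, 3): 0.25}

xs = sorted({x for x, _ in joint})
ys = sorted({y for _, y in joint})

# Marginal probabilities: P_X(x) = sum over y of P(x, y), and similarly for P_Y(y).
P_X = {x: sum(joint[(x, y)] for y in ys) for x in xs}
P_Y = {y: sum(joint[(x, y)] for x in xs) for y in ys}

# Conditional probability P_Y|X=x(y) = P(x, y) / P_X(x).
def p_y_given_x(y, x):
    return joint[(x, y)] / P_X[x]

# Unconditional expectation E(Y) = sum over x and y of y * P(x, y).
E_Y = sum(y * joint[(x, y)] for x, y in product(xs, ys))

# Conditional expectation E(Y | X=x) = sum over y of y * P_Y|X=x(y).
E_Y_given_X = {x: sum(y * p_y_given_x(y, x) for y in ys) for x in xs}

print(P_X, P_Y, E_Y, E_Y_given_X)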

  3. Conditional expectations are random variables. The conditional expectation E(Y|X) can take different values, one for each value that X can take. The probability of the conditional expectation taking a particular value is the probability of the corresponding value of X.

  4. Expectation of conditional expectations
  E[E(Y|X)] = ∑_X E(Y|X)*P_X(X)
            = ∑_X [∑_Y Y*P_Y|X(Y)]*P_X(X)          since E(Y|X) = ∑_Y Y*P_Y|X(Y)
            = ∑_X [∑_Y Y*P(X,Y)/P_X(X)]*P_X(X)     since P_Y|X=k(Y) = P(X=k,Y)/P_X(X=k)
            = ∑_X ∑_Y Y*P(X,Y)
            = E(Y)
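The identity E[E(Y|X)] = E(Y) can be verified numerically. A minimal sketch, again using a made-up joint pmf (an illustrative assumption, not the slides' table):

# Hypothetical joint pmf P(X, Y); the numbers are illustrative assumptions.
joint = {(0, 1): 0.20, (0, 2): 0.15, (0, 3): 0.05,
         (1, 1): 0.10, (1, 2): 0.25, (1, 3): 0.25}

xs = sorted({x for x, _ in joint})
ys = sorted({y for _, y in joint})
P_X = {x: sum(joint[(x, y)] for y in ys) for x in xs}

# Inner step: E(Y | X=x) for each x.
E_Y_given_X = {x: sum(y * joint[(x, y)] / P_X[x] for y in ys) for x in xs}

# Outer step: average the conditional expectations over the distribution of X.
lhs = sum(E_Y_given_X[x] * P_X[x] for x in xs)       # E[E(Y|X)]
rhs = sum(y * p for (x, y), p in joint.items())      # E(Y)

print(abs(lhs - rhs) < 1e-12)   # True: E[E(Y|X)] = E(Y)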

  5. Let X and Y be random variables. X = education attainment (1 = degree holder, 0 = without degree); Y = income (only three groups for simplicity: 1, 2, 3 thousands).

  6. Let X and Y be random variables. X = education attainment (1 = degree holder, 0 = without degree); Y = income (only three groups for simplicity: 1, 2, 3 thousands). E(X|Y=1): the expected education of a person randomly drawn from the income group Y=1. E(Y|X=1): the expected income of a person randomly drawn from the education group X=1.
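For concreteness, a small sketch that computes the two quantities just described from a hypothetical education/income table (the probabilities are illustrative assumptions; the slide's own table is not reproduced in this transcript):

# Hypothetical joint pmf P(X, Y): X = education (0/1), Y = income group (1, 2, 3 thousands).
# The probabilities are illustrative assumptions only.
joint = {(0, 1): 0.20, (0, 2): 0.15, (0, 3): 0.05,
         (1, 1): 0.10, (1, 2): 0.25, (1, 3): 0.25}

# Expected education of a person randomly drawn from income group Y = 1:  E(X | Y=1).
p_y1 = sum(p for (x, y), p in joint.items() if y == 1)
e_x_given_y1 = sum(x * p for (x, y), p in joint.items() if y == 1) / p_y1

# Expected income of a person randomly drawn from education group X = 1:  E(Y | X=1).
p_x1 = sum(p for (x, y), p in joint.items() if x == 1)
e_y_given_x1 = sum(y * p for (x, y), p in joint.items() if x == 1) / p_x1

print(e_x_given_y1, e_y_given_x1)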

  7. Joint, conditional and marginal probability, when there are three random variables.
  • Let (X, Y, Z) be three random variables with a joint probability P(X, Y, Z).
  • From the joint probability, we can compute
    • the marginal probabilities P_X(X), P_Y(Y), P_Z(Z):
      P_X(X=k) = ∑_Y ∑_Z P(X=k, Y, Z);  P_Y(Y=k) = ∑_X ∑_Z P(X, Y=k, Z);  P_Z(Z=k) = ∑_X ∑_Y P(X, Y, Z=k)
    • the bivariate distribution of any pair of the three random variables: P_XY(X,Y), P_XZ(X,Z), P_YZ(Y,Z)
    • the conditional probabilities P_X|Y,Z(X), P_Y|X,Z(Y), P_Z|X,Y(Z):
      P_X|Y=k,Z=m(X) = P(X, Y=k, Z=m)/P_YZ(Y=k, Z=m);
      P_Y|X=k,Z=m(Y) = P(X=k, Y, Z=m)/P_XZ(X=k, Z=m);
      P_Z|X=k,Y=m(Z) = P(X=k, Y=m, Z)/P_XY(X=k, Y=m)
    • the conditional bivariate probabilities P_XY|Z(X,Y), P_YZ|X(Y,Z), P_XZ|Y(X,Z):
      P_XY|Z=m(X,Y) = P(X, Y, Z=m)/P_Z(Z=m)
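A short sketch of these three-variable objects, using a hypothetical joint pmf (the weights are illustrative assumptions, normalized to sum to one):

from itertools import product

# Hypothetical joint pmf P(X, Y, Z) as {(x, y, z): probability}.
raw = {(x, y, z): (x + y) * (2 if z == 1 else 1)
       for x, y, z in product((0, 1), (1, 2, 3), (1, 2))}
total = sum(raw.values())
joint = {k: v / total for k, v in raw.items()}

# Marginal of Z: P_Z(Z=m) = sum over x, y of P(x, y, m).
P_Z = {m: sum(p for (x, y, z), p in joint.items() if z == m) for m in (1, 2)}

# Bivariate marginal of (Y, Z): P_YZ(Y=k, Z=m) = sum over x of P(x, k, m).
P_YZ = {(k, m): sum(p for (x, y, z), p in joint.items() if (y, z) == (k, m))
        for k in (1, 2, 3) for m in (1, 2)}

# Conditional bivariate pmf: P_XY|Z=m(x, y) = P(x, y, m) / P_Z(m).
def p_xy_given_z(x, y, m):
    return joint[(x, y, m)] / P_Z[m]

print(P_Z[1], P_YZ[(2, 1)], p_xy_given_z(1, 2, 1))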

  8. Joint, conditional and marginal probability, when there are three random variables.
  • Let (X, Y, Z) be three random variables with a joint probability P(X, Y, Z).
  • Unconditional expectation: E(Y) = ∑_Y ∑_X ∑_Z Y*P(X, Y, Z)
  • Conditional expectations E(Y|X,Z) and E(X|Y,Z):
      E(Y|X,Z) = ∑_Y Y*P_Y|X,Z(Y);  E(X|Y,Z) = ∑_X X*P_X|Y,Z(X)

  9. Conditional expectations are random variables.
  E[E(Y|X,Z)|Z] = ∑_X E(Y|X,Z)*P_X|Z(X) = … = E(Y|Z)
  E[E(Y|Z)] = E(Y)
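A numerical check of both statements, E[E(Y|X,Z)|Z] = E(Y|Z) and E[E(Y|Z)] = E(Y), using a hypothetical three-variable pmf (illustrative numbers, not the slides' table):

from itertools import product

X_VALS, Y_VALS, Z_VALS = (0, 1), (1, 2, 3), (1, 2)

# Hypothetical joint pmf P(X, Y, Z); weights are illustrative assumptions, normalized to sum to 1.
raw = {(x, y, z): 1 + x + y + z for x, y, z in product(X_VALS, Y_VALS, Z_VALS)}
total = sum(raw.values())
joint = {k: v / total for k, v in raw.items()}

def p_z(z0):                      # P_Z(z0)
    return sum(p for (x, y, z), p in joint.items() if z == z0)

def p_xz(x0, z0):                 # P_XZ(x0, z0)
    return sum(p for (x, y, z), p in joint.items() if (x, z) == (x0, z0))

def e_y_given_xz(x0, z0):         # E(Y | X=x0, Z=z0)
    return sum(y * p for (x, y, z), p in joint.items() if (x, z) == (x0, z0)) / p_xz(x0, z0)

def e_y_given_z(z0):              # E(Y | Z=z0)
    return sum(y * p for (x, y, z), p in joint.items() if z == z0) / p_z(z0)

# E[E(Y|X,Z) | Z=z0] = sum over x of E(Y|X=x,Z=z0) * P(X=x | Z=z0), which equals E(Y|Z=z0).
for z0 in Z_VALS:
    lhs = sum(e_y_given_xz(x0, z0) * p_xz(x0, z0) / p_z(z0) for x0 in X_VALS)
    print(z0, abs(lhs - e_y_given_z(z0)) < 1e-12)

# And E[E(Y|Z)] = E(Y).
e_y = sum(y * p for (x, y, z), p in joint.items())
print(abs(sum(e_y_given_z(z0) * p_z(z0) for z0 in Z_VALS) - e_y) < 1e-12)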

  10. X, Y and Z random variables. X = education attainment (1 = degree holder, 0 = without degree); Y = income (only three groups for simplicity: 1, 2, 3 thousands); Z = gender (1 = male, 2 = female).
  E(Y|X=1,Z=1): the expected income of a person randomly drawn from the group of male degree holders.
  E(Y|X=1,Z=2): the expected income of a person randomly drawn from the group of female degree holders.
  E(Y|X=1,Z=1) - E(Y|X=1,Z=2) > 0 and E(Y|X=0,Z=1) - E(Y|X=0,Z=2) > 0: for the same education attainment, a male's expected income is higher than a female's. This is sometimes interpreted as a piece of evidence of sex discrimination against females.

  11. X, Y and Z random variables. X = education attainment (1 = degree holder, 0 = without degree); Y = income (only three groups for simplicity: 1, 2, 3 thousands); Z = gender (1 = male, 2 = female).
  E(Y|X=1,Z=1): the expected income of a person randomly drawn from the group of male degree holders.
  E(Y|X=0,Z=1): the expected income of a person randomly drawn from the group of male non-degree holders.
  E(Y|X=1,Z=1) - E(Y|X=0,Z=1) > 0 and E(Y|X=1,Z=2) - E(Y|X=0,Z=2) > 0: the return to education/schooling is positive. Education/schooling thus helps to accumulate the “human capital” embodied in us.

  12. X, Y and Z random variables. X = education attainment (1 = degree holder, 0 = without degree); Y = income (only three groups for simplicity: 1, 2, 3 thousands); Z = gender (1 = male, 2 = female).
  E(Y|X=1,Z=1): the expected income of a person randomly drawn from the group of male degree holders.
  E(Y|Z=1): the expected income of a person randomly drawn from the group of males, regardless of education attainment.
  E(Y|Z=1) = E[E(Y|X,Z)|Z=1] = 1.5*0.4 + 2.5*0.6
  E(Y|Z=2) = E[E(Y|X,Z)|Z=2] = 1.3*0.6 + 2.1*0.4
  In general, E(Y|Z) = E[E(Y|X,Z)|Z].
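Reading the numbers on this slide as E(Y|X=0,Z=1) = 1.5, E(Y|X=1,Z=1) = 2.5, P(X=0|Z=1) = 0.4, P(X=1|Z=1) = 0.6 (and analogously for Z=2) is an interpretation on our part, since the underlying table is not reproduced in this transcript. Under that reading, a minimal Python check of the two weighted averages:

# Conditional expectations and conditional probabilities as read off the slide (an interpretation).
e_y_given_xz = {(0, 1): 1.5, (1, 1): 2.5,     # E(Y | X=x, Z=1)
                (0, 2): 1.3, (1, 2): 2.1}     # E(Y | X=x, Z=2)
p_x_given_z = {(0, 1): 0.4, (1, 1): 0.6,      # P(X=x | Z=1)
               (0, 2): 0.6, (1, 2): 0.4}      # P(X=x | Z=2)

for z in (1, 2):
    e_y_given_z = sum(e_y_given_xz[(x, z)] * p_x_given_z[(x, z)] for x in (0, 1))
    print(f"E(Y|Z={z}) = {e_y_given_z:.2f}")   # 2.10 for males, 1.62 for females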

  13. Law of iterated expectations • Given E(e|X) = 0, find E(eX).
  E(eX) = E[E(eX|X)] = E[E(e|X)*X] = E[0*X] = 0
  E(eX) = P(X=1)*E(eX|X=1) + P(X=2)*E(eX|X=2)
        = 0.4*E(e|X=1)*1 + 0.6*E(e|X=2)*2
        = 0.4*0 + 0.6*0 = 0
  E(eX) = 0.1*(-1) + 0.1*0 + 0.1*1 + 0.2*(-4) + 0.2*(-2) + 0.3*(4)
        = 0 + 0 = 0
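The same argument can be checked numerically. The joint distribution of (X, e) below is a made-up example satisfying E(e|X) = 0; it is not the table behind the numbers on this slide.

# Hypothetical joint pmf of (X, e) constructed so that E(e | X) = 0 for every X.
joint = {(1, -1): 0.10, (1, 0): 0.20, (1, 1): 0.10,    # X = 1: e is symmetric around 0
         (2, -2): 0.15, (2, 0): 0.30, (2, 2): 0.15}    # X = 2: e is symmetric around 0

# Check the premise: E(e | X=x) = 0 for each x.
for x0 in (1, 2):
    p_x = sum(p for (x, e), p in joint.items() if x == x0)
    e_cond = sum(e * p for (x, e), p in joint.items() if x == x0) / p_x
    print(f"E(e|X={x0}) = {e_cond}")

# Conclusion of the law of iterated expectations: E(eX) = E[E(e|X) * X] = 0.
e_eX = sum(e * x * p for (x, e), p in joint.items())
print("E(eX) =", e_eX)   # 0.0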

  14. Law of iterated expectations • Given E(Y|X,Z) = 0, E(XY) = 2, E(Z) = 4, find E(XYZ).
  E(XYZ) = E[E(XYZ|X,Z)] = E[E(Y|X,Z)*X*Z] = E[0*X*Z] = 0
  (The given values of E(XY) and E(Z) are not needed for the answer.)
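A quick numerical illustration with a made-up joint distribution in which Y is symmetric around zero given (X, Z), so that E(Y|X,Z) = 0. The specific values are assumptions; the example does not try to match E(XY) = 2, which is irrelevant to the result.

from itertools import product

# Hypothetical joint pmf of (X, Y, Z) with Y symmetric around 0 given (X, Z), so E(Y|X,Z) = 0.
joint = {}
for x, z in product((1, 2), (3, 5)):
    weight = 0.25                       # P(X=x, Z=z): uniform over the four (x, z) cells
    joint[(x, -1, z)] = weight / 2      # Y = -1 and Y = +1 equally likely within each cell
    joint[(x, +1, z)] = weight / 2

# E(XYZ) is zero, regardless of the values of E(XY) and E(Z).
e_xyz = sum(x * y * z * p for (x, y, z), p in joint.items())
print("E(XYZ) =", e_xyz)   # 0.0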

  15. Definition: Estimator • An estimator is a formula or a rule that takes a set of data and returns an estimate of the population quantity (also known as the population parameter) we are interested in: θ(x1, x2, ..., xn).

  16. Example: An estimator for the population mean
  • If we are interested in the population mean, a very intuitive estimator of the population mean based on a sample (x1, x2, ..., xn) is θ(x1, x2, ..., xn) = (x1 + x2 + ... + xn)/n.
  • Suppose someone suggests instead θ(x1, x2, ..., xn) = (x1 + x2 + ... + xn + 1)/n.
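The two competing estimators can be written as plain functions of the sample. The sketch below is illustrative; the function names are ours, not the slides'.

from typing import Sequence

def theta_sample_mean(xs: Sequence[float]) -> float:
    """The intuitive estimator: the sample mean (x1 + ... + xn) / n."""
    return sum(xs) / len(xs)

def theta_alternative(xs: Sequence[float]) -> float:
    """The suggested alternative: (x1 + ... + xn + 1) / n."""
    return (sum(xs) + 1) / len(xs)

sample = [1.0, 2.0, 2.0, 3.0]
print(theta_sample_mean(sample))   # 2.0
print(theta_alternative(sample))   # 2.25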

  17. Desired property: unbiasedness. That is, on average, the estimator correctly estimates the population mean.
  • θ(x1, x2, ..., xn) = (x1 + x2 + ... + xn)/n
    E[θ(x1, x2, ..., xn)] = E[(x1 + x2 + ... + xn)/n]
    = (1/n)*{E(x1) + E(x2) + ... + E(xn)}
    = (1/n)*n*E(x)
    = E(x)
  • θ(x1, x2, ..., xn) = (x1 + x2 + ... + xn + 1)/n
    E[θ(x1, x2, ..., xn)] = E[(x1 + x2 + ... + xn + 1)/n]
    = (1/n)*{E(x1) + E(x2) + ... + E(xn) + 1}
    = (1/n)*{n*E(x) + 1}
    = E(x) + 1/n
  The first estimator is unbiased. The second has a bias of 1/n, which approaches zero as the sample size increases; i.e., the estimator is asymptotically unbiased.
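A Monte Carlo sketch of the bias comparison, under an assumed population (Normal with mean 5 and standard deviation 2) and an assumed sample size n = 10: averaging each estimator over many simulated samples should come out close to E(x) for the first estimator and E(x) + 1/n for the second.

import random

random.seed(0)

def theta_sample_mean(xs):
    return sum(xs) / len(xs)

def theta_alternative(xs):
    return (sum(xs) + 1) / len(xs)

mu, n, reps = 5.0, 10, 100_000          # population mean, sample size, number of simulated samples

est1, est2 = [], []
for _ in range(reps):
    xs = [random.gauss(mu, 2.0) for _ in range(n)]   # hypothetical population: Normal(5, 2)
    est1.append(theta_sample_mean(xs))
    est2.append(theta_alternative(xs))

print(sum(est1) / reps)   # close to mu = 5.0         (unbiased)
print(sum(est2) / reps)   # close to mu + 1/n = 5.1   (biased by 1/n)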

  18. Supplement 3: Expectations, Conditional Expectations, Law of Iterated Expectations - END -
