
Point Estimation




  1. Point Estimation W&W, Chapter 7

  2. Introduction Population parameters like the mean (μ) and variance (σ²) are constants because each has a single true value. The sample mean and sample variance are random variables: each varies from sample to sample according to its sampling distribution (such as the normal).
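To make that distinction concrete, here is a minimal simulation sketch (assuming Python with NumPy; the values μ = 10, σ = 2, and n = 25 are arbitrary illustrative choices, not from the text): the population mean stays fixed while the sample mean changes with every sample.

```python
import numpy as np

# A hypothetical normal population: mu and sigma are the fixed parameters;
# here we pick arbitrary values so we can simulate.
rng = np.random.default_rng(0)
mu, sigma, n = 10.0, 2.0, 25

# Draw several samples; the sample mean varies from sample to sample.
for i in range(5):
    sample = rng.normal(mu, sigma, size=n)
    print(f"sample {i + 1}: mean = {sample.mean():.3f}")
# mu never changes, but each printed sample mean differs: the estimator is a
# random variable with its own sampling distribution (normal, sd = sigma/sqrt(n)).
```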

  3. Properties of Statistical Estimators We are going to talk about three desirable properties of statistical estimators: • Unbiasedness • Efficiency • Consistency

  4. Unbiased Estimators An unbiased estimator is one that produces the right answer on average. Bias occurs when there is a systematic error in the measure that shifts the estimate more in one direction than another over a set of replications. Figure 7.1; Literary Digest survey

  5. Unbiased Estimators In general, we can consider any population parameter θ and denote its estimator by U. U is an unbiased estimator of θ if E(U) = θ. Any bias that occurs is Bias = E(U) − θ.
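One way to check E(U) = θ in practice is by simulation. Here is a sketch under assumed values (θ = σ² = 4 and n = 10; the example of estimating a variance is mine, not the chapter's): the sample variance with divisor n is biased low, while the divisor n − 1 makes it unbiased.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, n, reps = 4.0, 10, 100_000  # arbitrary illustrative values

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
v_biased = samples.var(axis=1, ddof=0)    # divisor n
v_unbiased = samples.var(axis=1, ddof=1)  # divisor n - 1

print("theta (true sigma^2):", sigma2)
print("E(V), divisor n      ~", round(v_biased.mean(), 3))    # ~ (n-1)/n * 4 = 3.6
print("E(U), divisor n - 1  ~", round(v_unbiased.mean(), 3))  # ~ 4.0, so Bias ~ 0
```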

  6. Unbiased Estimators To avoid bias, we should randomly sample from the whole population. KKV note that one form of bias common in the social sciences arises from people who are wary of providing the information we need for our analyses (their study of education in India). Other examples of systematic bias: race/interviewer effects, acquiescence bias, response bias.

  7. Efficient Estimators As well as being on target on average, we would also like the distribution of an estimator to be highly concentrated, that is, to have a small variance. This is the notion of efficiency. Efficiency of V compared to W = var(W)/var(V), where V and W are two estimators, such as the mean and median. See Figure 7.2 for an example.

  8. Efficient Estimators How do we increase efficiency? We know that more observations produce more efficient estimators because the standard error of most of the sampling distributions we have discussed involves dividing by √n.
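A quick sketch of that √n behavior (my own illustrative check, assuming Python/NumPy and an arbitrary σ = 2): quadrupling n should halve the standard error of the sample mean.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, reps = 2.0, 20_000  # arbitrary population sd and replication count

for n in (25, 100, 400):  # each step quadruples the sample size
    means = rng.normal(0.0, sigma, size=(reps, n)).mean(axis=1)
    print(f"n = {n:3d}: sd of sample mean = {means.std():.4f}, "
          f"theory sigma/sqrt(n) = {sigma / np.sqrt(n):.4f}")
# The simulated standard error roughly halves each time n quadruples.
```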

  9. Efficient Estimators For normal populations, the efficiency of the mean relative to the median is equal to π/2 ≈ 1.57. This means that the mean is about 57% more efficient than the sample median. Efficiency is generally a way of comparing unbiased estimators, but KKV (pages 70-74) show that you can use the same criterion to compare estimators with a small amount of bias; sometimes you would prefer a more efficient estimator with a small amount of bias to an unbiased estimator that is not efficient.
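That 1.57 figure is easy to verify by Monte Carlo (a sketch; the sample size and replication count are arbitrary choices of mine): for normal data, var(median)/var(mean) comes out near π/2.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 101, 100_000  # odd n so the sample median is a single order statistic

samples = rng.normal(0.0, 1.0, size=(reps, n))
var_mean = samples.mean(axis=1).var()
var_median = np.median(samples, axis=1).var()

# Efficiency of the mean relative to the median = var(median) / var(mean).
print("relative efficiency ~", round(var_median / var_mean, 3))  # ~ pi/2 = 1.57
```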

  10. Efficient Estimators In comparing unbiased estimators, we choose the one with minimum variance. Figure 7-3 (page 239) shows some examples of unbiased estimates that might be inefficient and biased estimates that might be more efficient. Statisticians use both criteria for determining good estimators. They use the Mean Square Error, or MSE = E(V − θ)² = variance of the estimator + (its bias)². We choose the estimator that minimizes this MSE.
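The decomposition MSE = variance + bias² can be checked directly. Here is a sketch with a hypothetical biased competitor of my own choosing (V = 0.9 × sample mean; the values θ = 1, σ = 3, n = 10 are arbitrary): the slightly biased estimator can have the smaller MSE.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, sigma, n, reps = 1.0, 3.0, 10, 200_000  # arbitrary illustrative values

xbar = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)  # unbiased
shrunk = 0.9 * xbar  # hypothetical slightly biased competitor

for name, v in (("unbiased mean", xbar), ("0.9 * mean   ", shrunk)):
    bias2 = (v.mean() - theta) ** 2
    mse = ((v - theta) ** 2).mean()
    # Check the decomposition MSE = variance + bias^2.
    print(f"{name}: var = {v.var():.3f}, bias^2 = {bias2:.4f}, MSE = {mse:.3f}")
# The biased estimator wins on MSE here, illustrating the variance-bias trade-off.
```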

  11. Efficient Estimators If an estimator has a large amount of bias, then increasing the sample size will not reduce that bias. We will just decrease the variance around the wrong value (e.g., the Literary Digest survey).

  12. Consistent Estimators An estimator is consistent if, as the number of observations gets very large, the variability around the estimate decreases to zero and the estimate converges to the parameter we are trying to estimate. One sufficient condition for consistency is that the estimator's bias and variance both approach zero. In other words, we expect the MSE to approach zero in the limit.
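A sketch of consistency in action (assuming Python/NumPy; the normal population with μ = 5, σ = 2 is an arbitrary choice): the MSE of the sample mean shrinks toward zero as n grows.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, reps = 5.0, 2.0, 10_000  # arbitrary normal population

for n in (10, 100, 1_000):
    xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    mse = ((xbar - mu) ** 2).mean()
    print(f"n = {n:5d}: MSE of sample mean = {mse:.5f}")
# The MSE shrinks toward zero as n grows: the sample mean is consistent for mu.
```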
