STA 291 Fall 2009


Presentation Transcript


  1. STA 291 Fall 2009, Lecture 13, Dustin Lueker

  2. Central Limit Theorem • For random sampling, as the sample size n grows, the sampling distribution of the sample mean, x̄, approaches a normal distribution • Amazing: this is the case even if the population distribution is discrete or highly skewed • The Central Limit Theorem can be proved mathematically • Usually, the sampling distribution of x̄ is approximately normal for n ≥ 30 • We know the parameters of the sampling distribution: mean μ and standard error σ/√n
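
The claim on this slide is easy to check by simulation — a minimal sketch (not from the lecture), assuming a highly skewed exponential population with mean 1 and standard deviation 1:

```python
import random
import statistics

random.seed(42)

def sample_mean(n):
    """Mean of n draws from a skewed population: exponential with mean 1."""
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

# Simulate the sampling distribution of the sample mean for n = 30
means = [sample_mean(30) for _ in range(5000)]

# CLT prediction: center ~ population mean mu = 1.0,
# spread ~ sigma / sqrt(n) = 1 / sqrt(30) ~ 0.183
center = statistics.fmean(means)
spread = statistics.stdev(means)
```

A histogram of `means` looks bell-shaped even though the population itself is strongly right-skewed, and `center` and `spread` land close to the CLT values μ and σ/√n.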

  3. Central Limit Theorem (Binomial Version) • For random sampling, as the sample size n grows, the sampling distribution of the sample proportion, p̂, approaches a normal distribution • Usually, the sampling distribution of p̂ is approximately normal for np ≥ 5 and nq ≥ 5 • We know the parameters of the sampling distribution: mean p and standard error √(pq/n)
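
The binomial version can be checked the same way — a sketch (not from the lecture), assuming a population proportion of p = 0.3 and samples of size n = 100, which satisfies np = 30 ≥ 5 and nq = 70 ≥ 5:

```python
import random
import statistics

random.seed(0)
p, n = 0.3, 100   # assumed population proportion and sample size

def sample_proportion():
    """Proportion of successes in n Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n)) / n

props = [sample_proportion() for _ in range(5000)]

# CLT prediction: center ~ p = 0.3,
# spread ~ sqrt(p*q/n) = sqrt(0.3*0.7/100) ~ 0.0458
center = statistics.fmean(props)
spread = statistics.stdev(props)
```

Again the simulated sampling distribution is centered at the population parameter with the spread predicted by the theorem.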

  4. Statistical Inference: Estimation • Inferential statistical methods provide predictions about characteristics of a population, based on information in a sample from that population • For quantitative variables, we usually estimate the population mean • Example: mean household income • For qualitative variables, we usually estimate population proportions • Example: proportion of people voting for candidate A

  5. Two Types of Estimators • Point Estimate • A single number that is the best guess for the parameter • The sample mean is usually a good guess for the population mean • Interval Estimate • A point estimator with an error bound • A range of numbers around the point estimate • Gives an idea of the precision of the estimator • Example: the proportion of people voting for A is between 67% and 73%

  6. Point Estimator • A point estimator of a parameter is a sample statistic that predicts the value of that parameter • A good estimator is • Unbiased • Centered around the true parameter • Consistent • Gets closer to the true parameter as the sample size gets larger • Efficient • Has a standard error that is as small as possible (it makes use of all available information)

  7. Unbiased • An estimator is unbiased if its sampling distribution is centered around the true parameter • For example, we know that the mean of the sampling distribution of x̄ equals μ, the true population mean • Thus, x̄ is an unbiased estimator of μ • Note: for any particular sample, the sample mean may be smaller or greater than the population mean • Unbiased means that there is no systematic underestimation or overestimation

  8. Biased • A biased estimator systematically underestimates or overestimates the population parameter • The definitions of the sample variance and sample standard deviation use n−1 instead of n because this makes the estimator unbiased • With n in the denominator, it would systematically underestimate the variance
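
The n versus n−1 effect is easy to demonstrate by simulation — a sketch (not from the lecture), assuming a normal population with variance 4 and a deliberately small sample size n = 5, where the bias is large:

```python
import random
import statistics

random.seed(1)
TRUE_VAR = 4.0   # assumed population: normal with standard deviation 2

def biased_var(xs):
    """Sample variance with n (not n-1) in the denominator."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Many small samples of size n = 5
trials = [[random.gauss(0, 2) for _ in range(5)] for _ in range(20000)]

# Average each estimator over all trials
avg_biased   = statistics.fmean(biased_var(t) for t in trials)        # ~ 4 * (n-1)/n = 3.2
avg_unbiased = statistics.fmean(statistics.variance(t) for t in trials)  # ~ 4.0 (uses n-1)
```

The n-denominator version averages about (n−1)/n times the true variance — a systematic underestimate — while the n−1 version averages the true value.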

  9. Efficient • An estimator is efficient if its standard error is small compared to other estimators • Such an estimator has high precision • A good estimator has small standard error and small bias (or no bias at all) • The following pictures represent different estimators with different bias and efficiency • Assume that the true population parameter is the point (0,0) in the middle of the picture

  10. Bias and Efficiency Note that even an unbiased and efficient estimator does not always hit the population parameter exactly. But in the long run, it is the best estimator.
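
The efficiency comparison can also be made numerically — a sketch (not from the lecture) comparing two unbiased estimators of the center of a standard normal population, the sample mean and the sample median:

```python
import random
import statistics

random.seed(3)
n = 25   # assumed sample size

means, medians = [], []
for _ in range(10000):
    xs = [random.gauss(0, 1) for _ in range(n)]
    means.append(statistics.fmean(xs))     # estimator A: sample mean
    medians.append(statistics.median(xs))  # estimator B: sample median

se_mean   = statistics.stdev(means)    # theory: 1/sqrt(25) = 0.20
se_median = statistics.stdev(medians)  # theory: about 1.25/sqrt(25) = 0.25
```

Both estimators are centered at the true value 0, but the mean has the smaller standard error for normal data, so it is the more efficient of the two.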

  11. Point Estimators of the Mean and Standard Deviation • The sample mean is unbiased, consistent, and (often) relatively efficient for estimating μ • The sample standard deviation is almost unbiased for estimating the population standard deviation • No easy unbiased estimator exists • Both are consistent

  12. Example • Suppose we want to estimate the proportion of UK students voting for candidate A • We take a random sample of size n=100 • The sample is denoted X1, X2,…, Xn, where Xi=1 if the ith student in the sample votes for A, Xi=0 otherwise

  13. Example • Estimator 1 = the sample mean (sample proportion) • Estimator 2 = the answer from the first student in the sample (X1) • Estimator 3 = 0.3 • Which estimator is unbiased? • Which estimator is consistent? • Which estimator is efficient? STA 291 Fall 2009 Lecture 13
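
One way to explore these questions is by simulation — a sketch (not from the lecture), assuming a hypothetical true proportion of 0.55 (any value other than 0.3 exposes the same pattern):

```python
import random
import statistics

random.seed(7)
P = 0.55   # hypothetical true proportion voting for A (an assumption)
n = 100

def sample():
    """X1,...,Xn: 1 if the i-th student votes for A, 0 otherwise."""
    return [int(random.random() < P) for _ in range(n)]

def est1(xs): return sum(xs) / len(xs)   # Estimator 1: sample proportion
def est2(xs): return xs[0]               # Estimator 2: first student's answer
def est3(xs): return 0.3                 # Estimator 3: the constant 0.3

reps = [sample() for _ in range(4000)]
avg1 = statistics.fmean(est1(r) for r in reps)   # ~ P: unbiased
avg2 = statistics.fmean(est2(r) for r in reps)   # ~ P: unbiased, but noisy
avg3 = statistics.fmean(est3(r) for r in reps)   # = 0.3: biased when P != 0.3
se1  = statistics.stdev(est1(r) for r in reps)   # small: efficient
se2  = statistics.stdev(est2(r) for r in reps)   # large: inefficient
```

Estimators 1 and 2 are both unbiased, but only Estimator 1 is consistent and efficient; Estimator 3 has zero standard error yet never adapts to the data, so it is biased (and not consistent) whenever the true proportion differs from 0.3.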
