
Week 6 October 6-10



Presentation Transcript


  1. Week 6 October 6-10 Four Mini-Lectures QMM 510 Fall 2014

  2. Chapter Contents (Chapter 8: Sampling Distributions, ML 6.1)
  8.1 Sampling Variation
  8.2 Estimators and Sampling Errors
  8.3 Sample Mean and the Central Limit Theorem
  8.4 Confidence Interval for a Mean (μ) with Known σ
  8.5 Confidence Interval for a Mean (μ) with Unknown σ
  8.6 Confidence Interval for a Proportion (π)
  8.7 Estimating from Finite Populations
  8.8 Sample Size Determination for a Mean
  8.9 Sample Size Determination for a Proportion
  8.10 Confidence Interval for a Population Variance, σ² (Optional)
  So many topics, so little time …

  3. Learning Objectives LO8-1: Define sampling error, parameter, and estimator. LO8-2: Explain the desirable properties of estimators. LO8-3: State the Central Limit Theorem for a mean. LO8-4: Explain how sample size affects the standard error. Sampling Distributions Chapter 8

  4. Sampling Variation Chapter 8 • Sample statistic – a random variable whose value depends on which population items are included in the random sample. • Depending on the sample size, the sample statistic could either represent the population well or differ greatly from the population. • This sampling variation can be illustrated. Here are 100 individual items drawn from a population. When n = 1, the histogram of the sampled items resembles the population, but not exactly.
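This sampling variation is easy to see in a short simulation. The sketch below uses a made-up normal population (not the textbook's data) and draws repeated samples of size 5:

```python
import random
import statistics

# Sketch of sampling variation: repeated samples from one population
# yield different sample statistics.  The population here is hypothetical.
random.seed(1)
population = [random.gauss(500, 80) for _ in range(10_000)]

sample_means = []
for _ in range(8):
    sample = random.sample(population, 5)      # n = 5, without replacement
    sample_means.append(statistics.mean(sample))

print(sample_means)  # eight different values, all scattered around 500
```

Each run of the loop gives a different mean, which is exactly what "the sample statistic is a random variable" means.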

  5. Sampling Variation Chapter 8 Example: GMAT Scores • Consider eight random samples of size n = 5 from a large population of GMAT scores for MBA applicants. • The sample items vary, but the means tend to be close to the population mean (μ = 520.78).

  6. Sampling Variation Chapter 8 Example: GMAT Scores • Sample dot plots show that the sample means have much less variation than the individual sample items.

  7. Estimators and Sampling Distributions Chapter 8 Some Terminology • Estimator – a statistic derived from a sample to infer the value of a population parameter. • Estimate – the value of the estimator in a particular sample. • A population parameter is usually represented by a Greek letter and the corresponding statistic by a Roman letter.

  8. Estimators and Sampling Distributions Chapter 8 Examples of Estimators Sampling Distributions The sampling distribution of an estimator is the probability distribution of all possible values the statistic may assume when a random sample of size n is taken. Note: An estimator is a random variable since samples vary.

  9. Estimators and Sampling Distributions Chapter 8 • Sampling error is the difference between an estimate and the corresponding population parameter. For example, if we use the sample mean as an estimate for the population mean, then the sampling error is X̄ − μ. • Bias is the difference between the expected value of the estimator and the true parameter. For the mean, Bias = E(X̄) − μ. • An estimator is unbiased if its expected value is the parameter being estimated. The sample mean is an unbiased estimator of the population mean since E(X̄) = μ. • On average, an unbiased estimator neither overstates nor understates the true parameter.
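Unbiasedness of the sample mean can be illustrated by simulation (a sketch with a hypothetical population, not from the slides): averaging the sample mean over many repeated samples recovers μ.

```python
import random
import statistics

# Sketch: the sample mean is unbiased, E(X-bar) = mu, so its average
# over many repeated samples should be close to the population mean.
random.seed(42)
population = list(range(1, 101))               # mu = 50.5
mu = statistics.mean(population)

sample_means = [statistics.mean(random.sample(population, 10))
                for _ in range(20_000)]

print(statistics.mean(sample_means))  # close to mu = 50.5
```

Any single sample mean may miss μ badly; it is the long-run average of the estimator that hits the target.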

  10. Estimators and Sampling Distributions Chapter 8 Unbiased A desirable property for an estimator is for it to be unbiased.

  11. Estimators and Sampling Distributions Chapter 8 • Efficiency refers to the variance of the estimator’s sampling distribution. • A more efficient estimator has smaller variance. Efficiency Figure 8.6

  12. Estimators and Sampling Distributions Chapter 8 A consistent estimator converges toward the parameter being estimated as the sample size increases. Consistency Figure 8.6

  13. Central Limit Theorem Chapter 8 The Central Limit Theorem is a powerful result that allows us to approximate the shape of the sampling distribution of the sample mean even when we don’t know what the population looks like.

  14. Central Limit Theorem Chapter 8 If the population is exactly normal, then the sample mean follows a normal distribution. As the sample size n increases, the distribution of sample means narrows in on the population mean µ.

  15. Central Limit Theorem Chapter 8 If the sample is large enough, the sample means will have approximately a normal distribution even if your population is not normal.
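This claim can be checked by simulation. The sketch below (an assumed exponential population, not the book's example) compares the skewness of a strongly right-skewed population with the skewness of its sample means:

```python
import random
import statistics

def skewness(xs):
    """Population skewness coefficient: E[(X - mu)^3] / sigma^3."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# Right-skewed population (exponential; theoretical skewness = 2).
random.seed(7)
population = [random.expovariate(1.0) for _ in range(20_000)]

# Means of samples of size n = 30 are much closer to symmetric.
means = [statistics.mean(random.sample(population, 30)) for _ in range(2_000)]
print(skewness(population), skewness(means))  # large vs. near zero
```

The histogram of the 2,000 sample means would look roughly bell-shaped even though the population itself is far from normal.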

  16. Central Limit Theorem Chapter 8 Illustrations of Central Limit Theorem Using the uniform and a right-skewed distribution.

  17. Central Limit Theorem Chapter 8 Applying The Central Limit Theorem The Central Limit Theorem permits us to define an interval within which the sample means are expected to fall. As long as the sample size n is large enough, we can use the normal distribution regardless of the population shape (or any n if the population is normal to begin with).

  18. Central Limit Theorem Chapter 8 Sample Size and Standard Error The sample means tend to fall within a narrower interval as n increases. The key is the standard error of the mean: σx̄ = σ/√n. For example, when n = 4 the standard error is halved relative to n = 1. To halve it again requires n = 16, and to halve it again requires n = 64. To halve the standard error, you must quadruple the sample size (the law of diminishing returns).
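The quadrupling rule follows directly from the standard error formula σx̄ = σ/√n, as a few lines of arithmetic confirm:

```python
import math

# Standard error of the mean: se = sigma / sqrt(n).
# Quadrupling n halves the standard error (law of diminishing returns).
sigma = 1.0
for n in (1, 4, 16, 64):
    print(n, sigma / math.sqrt(n))
# n = 4 -> 0.5, n = 16 -> 0.25, n = 64 -> 0.125
```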

  19. Central Limit Theorem Chapter 8 Illustration: All Possible Samples from a Uniform Population • Consider a discrete uniform population consisting of the integers {0, 1, 2, 3}. • The population parameters are: μ = 1.5, σ = 1.118.

  20. Central Limit Theorem Chapter 8 Illustration: All Possible Samples from a Uniform Population • The population is uniform, yet the distribution of all possible sample means of size 2 has a peaked triangular shape.
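Enumerating the samples confirms the triangular shape. Assuming sampling with replacement (which gives 4 × 4 = 16 possible ordered samples of size 2):

```python
import itertools
import statistics
from collections import Counter

population = [0, 1, 2, 3]
mu = statistics.mean(population)       # 1.5
sigma = statistics.pstdev(population)  # 1.118...

# All 16 ordered samples of size 2, drawn with replacement.
means = [statistics.mean(s) for s in itertools.product(population, repeat=2)]
counts = Counter(means)
print([counts[k] for k in sorted(counts)])  # [1, 2, 3, 4, 3, 2, 1]: triangular
```

The extreme means (0.0 and 3.0) can each arise in only one way, while the central mean 1.5 arises in four ways, hence the peak.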

  21. Central Limit Theorem Chapter 8 Illustration: 100 Samples from a Uniform Population The population is uniform, yet the histogram of sample means has a peaked triangular shape starting with n = 2. By n = 8, the histogram appears normal.

  22. Central Limit Theorem Chapter 8 Illustration: 100 Samples from a Skewed Population The population is skewed, yet the histogram of sample means starts to have a normal shape starting with n = 4. By n = 16, the histogram appears arguably normal.

  23. Confidence Interval for a Mean (μ) with Known σ (ML 6.2) Chapter 8 What Is a Confidence Interval?

  24. Confidence Interval for a Mean (μ) with Known σ Chapter 8 What Is a Confidence Interval? • The confidence interval for μ with known σ is x̄ ± z σ/√n. z-values for commonly used confidence levels: z = 1.645 (90%), z = 1.960 (95%), z = 2.576 (99%).
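As a sketch with hypothetical sample figures (not the textbook example's data), the z interval is a one-liner per confidence level:

```python
import math

# Confidence interval for mu with known sigma: xbar +/- z * sigma / sqrt(n).
Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}   # common z-values

xbar, sigma, n = 520.0, 80.0, 25              # hypothetical sample
for level, z in Z.items():
    half = z * sigma / math.sqrt(n)           # margin of error
    print(f"{level:.0%}: [{xbar - half:.2f}, {xbar + half:.2f}]")
```

Note how the interval widens as the confidence level rises, previewing the trade-off discussed on the next slides.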

  25. Confidence Interval for a Mean (μ) with Known σ Chapter 8 Example: Bottle Fill … but usually we do NOT know σ

  26. Confidence Interval for a Mean (μ) with Known σ Chapter 8 Choosing a Confidence Level • A higher confidence level leads to a wider confidence interval. • Greater confidence implies loss of precision (i.e., greater margin of error). • 95% confidence is most often used. Confidence Intervals for Example 8.2

  27. Confidence Interval for a Mean (μ) with Known σ Chapter 8 Interpretation • A particular confidence interval either does or does not contain μ. • The confidence level quantifies the risk. • When constructing 95% confidence intervals, approximately 95 out of 100 will contain μ and approximately 5 will not (for example, sample 14 below).

  28. Confidence Interval for a Mean (μ) with Known σ Chapter 8 When Can We Assume Normality? • If σ is known and the population is normal, then we can safely use the formula to compute the confidence interval. • If σ is known and we do not know whether the population is normal, a common rule of thumb is that n ≥ 30 is sufficient to use the formula as long as the distribution is approximately symmetric with no outliers. • Larger n may be needed to assume normality if you are sampling from a strongly skewed population or one with outliers.

  29. Confidence Interval for a Mean (μ) with Unknown σ (ML 6.3) Chapter 8 Student’s t Distribution Use the Student’s t distribution instead of the normal distribution when the population is normal but the standard deviation σ is unknown and the sample size is small. … and usually we do NOT know σ …

  30. Confidence Interval for a Mean (μ) with Unknown σ Chapter 8 Student’s t Distribution

  31. Confidence Interval for a Mean (μ) with Unknown σ Chapter 8 Student’s t Distribution • t distributions are symmetric and shaped like the standard normal distribution. • The t distribution is dependent on the size of the sample. Comparison of Normal and Student’s t Figure 8.11

  32. Confidence Interval for a Mean (μ) with Unknown σ Chapter 8 Degrees of Freedom • Degrees of freedom (d.f.) is a parameter based on the sample size that is used to determine the t distribution. • The d.f. for the t distribution in this case is given by d.f. = n − 1. • As n increases, the t distribution approaches the shape of the normal distribution. • For a given confidence level, t is always larger than z, so a confidence interval based on t is always wider than if z were used. Comparison of Normal and Student’s t

  33. Confidence Interval for a Mean (μ) with Unknown σ Chapter 8 Comparison of z and t • For very small samples, t-values differ substantially from the normal. • As degrees of freedom increase, the t-values approach the normal z-values. Note: the z and t distributions are almost the same for d.f. = 30. • For example, for n = 31, the degrees of freedom would be d.f. = 31 − 1 = 30. • So for a 90 percent confidence interval, we would use t = 1.697, which is slightly larger than z = 1.645.

  34. Confidence Interval for a Mean (μ) with Unknown σ Chapter 8 Example: GMAT Scores Again Figure 8.13

  35. x̄ = 510 s = 73.77 Confidence Interval for a Mean (μ) with Unknown σ Chapter 8 Example: GMAT Scores Again • Construct a 90% confidence interval for the mean GMAT score of all MBA applicants. • Since σ is unknown, use the Student’s t for the confidence interval with d.f. = 20 − 1 = 19. • Find tα/2 = t.05 = 1.729 from Appendix D.

  36. Confidence Interval for a Mean (μ) with Unknown σ Chapter 8 Example: GMAT Scores Again • For a 90% confidence interval, use Appendix D to find t.05 = 1.729 with d.f. = 19. Note: We could also use Excel, MINITAB, etc. to obtain t.05 values as well as to construct confidence intervals: =T.INV.2T(0.1,19) = 1.729. The interval is x̄ ± t s/√n = 510 ± 1.729(73.77/√20). We are 90 percent confident that the true mean GMAT score lies within the interval [481.48, 538.52].
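The slide's arithmetic can be reproduced directly, using the t-value 1.729 quoted from Appendix D:

```python
import math

# GMAT example from the slides: xbar = 510, s = 73.77, n = 20,
# t(.05, d.f. = 19) = 1.729 (Appendix D / =T.INV.2T(0.1,19)).
xbar, s, n = 510.0, 73.77, 20
t = 1.729

half = t * s / math.sqrt(n)                   # margin of error
lower, upper = xbar - half, xbar + half
print(round(lower, 2), round(upper, 2))       # 481.48 538.52
```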

  37. Confidence Interval for a Mean (μ) with Unknown σ Chapter 8 Confidence Interval Width • Confidence interval width reflects the sample size, the confidence level, and the standard deviation. • To obtain a narrower interval and more precision, increase the sample size or lower the confidence level (e.g., from 90% to 80% confidence). There is no free lunch!

  38. Confidence Interval for a Mean (μ) with Unknown σ Chapter 8 Using Appendix D • Beyond d.f. = 50, Appendix D shows d.f. in steps of 5 or 10. • If the table does not give the exact degrees of freedom, use the t-value for the next lower degrees of freedom. • This is a conservative procedure since it causes the interval to be slightly wider. • A conservative statistician may use the t distribution for confidence intervals when σ is unknown because using z would underestimate the margin of error.

  39. Confidence Interval for a Population Variance, σ² (ML 6.4) Chapter 8 Chi-Square Distribution • If the population is normal, then the sample variance s² follows the chi-square distribution (χ²) with degrees of freedom d.f. = n − 1. • Lower (χ²L) and upper (χ²U) tail percentiles for the chi-square distribution can be found using Appendix E. Note: The chi-square distribution is skewed right, but less so for larger d.f.

  40. Confidence Interval for a Population Variance, σ² Chapter 8 Confidence Interval • Using the sample variance s², the confidence interval for σ² is [(n − 1)s²/χ²U, (n − 1)s²/χ²L]. • To obtain a confidence interval for the standard deviation σ, just take the square root of the interval bounds.

  41. Confidence Interval for a Population Variance, σ² Chapter 8 • You can use Appendix E to find critical chi-square values, or Excel: =CHISQ.INV(0.025,39) = 23.65 and =CHISQ.INV(0.975,39) = 58.12.
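Putting the pieces together, here is a sketch with n = 40 and a hypothetical sample variance; the critical values 23.65 and 58.12 are those quoted on the slide for d.f. = 39:

```python
import math

# Chi-square interval for sigma^2: [(n-1)s^2 / chi2_U, (n-1)s^2 / chi2_L].
n = 40                         # so d.f. = 39
s2 = 100.0                     # hypothetical sample variance
chi2_L, chi2_U = 23.65, 58.12  # 2.5% and 97.5% points, d.f. = 39 (slide)

var_lower = (n - 1) * s2 / chi2_U
var_upper = (n - 1) * s2 / chi2_L
sd_lower, sd_upper = math.sqrt(var_lower), math.sqrt(var_upper)
print(var_lower, var_upper)    # 95% CI for sigma^2
print(sd_lower, sd_upper)      # 95% CI for sigma
```

Note the interval is not symmetric about s², because the chi-square distribution itself is skewed.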

  42. Confidence Interval for a Population Variance, σ² Chapter 8 Bottom Line: • Estimating a variance is easy. • But you don’t see it very often. • Maybe because the chi-square distribution is less familiar? • Maybe because we usually care more about the mean?
