
Estimation (Moore chapters 4, 6 and 7; Guan chapter 7)



  1. Estimation (Moore chapters 4, 6 and 7; Guan chapter 7)

  2. Point estimator • Sampling distribution • Interval estimation

  3. Research questions • Plasticizers (塑化劑) are required to be undetectable in toys. If a toy is found to contain about 0.01% plasticizer, is that a problem? What if another toy is found to contain about 1.1%? • The EU requires 0.1% or less plasticizer content in toys. • Why not require 0% plasticizer?

  4. Research questions • What do you learn from this news item? (Translated from the Liberty Times 自由時報, 2012/9/29; reporters 楊久瑩 and 王珮華.) • A survey by the Consumers' Foundation (消基會) found that as many as 53% of toilet-paper brands delivered fewer actual sheets than labeled. The 五X花 brand fell short the most, with each pack automatically "legally reduced" by four to five sheets, while RT-Mart (大潤發) "gave generously", with up to forty-one extra sheets in a single pack. • In September the Foundation randomly purchased fifteen samples at major hypermarkets and supermarkets in Taipei and inspected their sheet counts and labeling. Eight samples had single packs with fewer actual sheets than labeled; counting total sheets at six packs per item, four items fell short of the labeled total, and one pack of pull-out tissue failed the national standard for sheet count. • Foundation chairwoman 蘇錦霞 pointed out that under the reference provisions of national standard CNS No. 1091, the permissible sheet-count tolerance per pack is −3.5% for continuous pull-out tissue and −2.5% for flat-pack toilet paper. • Among the short-count brands, the 五X花 flat-pack toilet paper made by Yuen Foong Yu (永豐餘) was short twenty-five sheets over six packs, the largest shortfall; next came the COS??? flat-pack toilet paper, short twenty sheets, and the O划算 flat-pack toilet paper, short 19.5 sheets.

  5. Random sample • A sample is a collection of specific observations. • For example, flip a fair coin five times: {1,0,1,1,0}, {0,1,0,0,1}, {0,0,0,1,0} are all possible samples. • If the sample is generated at random, we call it a random sample (隨機樣本). • Random sample: a sample generated by a probability distribution, with observations that are independent of each other. • A random sample has observations that are independent and identically distributed according to a specific distribution; we call this i.i.d.
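
Not from the slides — a minimal Python sketch of the coin-flip example above; the seed and p = 0.5 (fair coin) are illustrative choices:

```python
# Drawing i.i.d. Bernoulli samples, as in the fair-coin example.
import random

random.seed(1)

def coin_sample(n, p=0.5):
    """One random sample of n i.i.d. Bernoulli(p) observations."""
    return [1 if random.random() < p else 0 for _ in range(n)]

# Three of the many possible samples of five flips, e.g. [1, 0, 1, 1, 0]
for _ in range(3):
    print(coin_sample(5))
```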

  6. Point estimator • Parameter space (參數空間): the set of all possible values of a parameter; the parameter of a random sample is denoted θ. • Statistic (統計量): a function of one or more random variables. • Point estimator (點估計式): since the true parameter θ is usually unknown, we estimate it; the estimator is called a point estimator, and its realization is a point estimate (點估計值). Example: Xi is a random variable following a Bernoulli distribution with unknown parameter p0 = P({Xi = 1}). In this setting, all of the following statistics are possible point estimators of p0: (X1+X2+X3+X4+X5)/5, (X1+X3+X5)/3, max(X1,X2,X3,X4,X5), all with values in [0, 1].
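
A small illustrative sketch (the value p0 = 0.3 and the seed are made up for the demo) evaluating the three candidate estimators listed above on one realized sample:

```python
# Three candidate point estimators of p0, computed from one Bernoulli sample.
import random

random.seed(2)
p0 = 0.3  # true parameter; unknown in practice
x = [1 if random.random() < p0 else 0 for _ in range(5)]  # X1,...,X5

est1 = sum(x) / 5                # (X1+X2+X3+X4+X5)/5
est2 = (x[0] + x[2] + x[4]) / 3  # (X1+X3+X5)/3
est3 = max(x)                    # max(X1,...,X5)
print(x, est1, est2, est3)       # all three point estimates lie in [0, 1]
```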

  7. Analog estimation • Analog estimation (類比式估計法): use the sample analogue (a sample average) as the point estimator of the unknown parameter. • The sample mean estimates the population mean. • The sample average of X² estimates the population second moment.

  8. Analog estimation • Similarly, the sample covariance can serve as the point estimator of the population covariance cov(Xi, Yi): cov̂(X, Y) = (1/(n−1)) Σᵢ (Xi − X̄)(Yi − Ȳ), with divisor n−1 because of the degrees of freedom (自由度).
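
A hypothetical helper, sample_cov, showing the analog estimator above with its n−1 divisor (the data values are made up):

```python
# Sample covariance with the n-1 (degrees-of-freedom) divisor.
def sample_cov(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    return sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)

print(sample_cov([1, 2, 3, 4], [2, 4, 5, 9]))  # about 3.667 for these values
```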

  9. Properties of point estimators • Since we may have many point estimators for an unknown parameter, we need criteria to choose among them. • Three criteria: • Unbiasedness (不偏性) • Efficiency (有效性) • Consistency (一致性)

  10. Unbiasedness • Unbiasedness is about the expected value: if θ̂ is a point estimator of θ0, then an unbiased point estimator satisfies E(θ̂) = θ0. • Any linear transformation preserves unbiasedness, but a nonlinear transformation generally breaks it. • If E(θ̂) ≠ θ0, θ̂ is called a biased estimator. The difference between E(θ̂) and θ0 is called the bias of θ̂; the bias can be upward or downward (向上與向下偏誤).

  11. Unbiasedness • If {X1,…,Xn} is a series of i.i.d. random variables with mean µ0, then, whatever the distribution, the sample mean X̄n is an unbiased estimator of µ0. • If the random variables share a common variance σ0², then Sn² = (1/(n−1)) Σᵢ (Xi − X̄n)² is an unbiased estimator of σ0². Question: if we take half of the sample and compute its average, is this half-sample average still an unbiased estimator? Ans: Yes.
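
A Monte Carlo sketch of these claims (the values of µ0, σ0, n, and the replication count are arbitrary choices): the averages of X̄n and of Sn² land near the true values, while the divisor-n variance estimator lands near ((n−1)/n)σ0²:

```python
# Unbiasedness check: sample mean and S_n^2 (divisor n-1) vs divisor n.
import random
import statistics

random.seed(3)
mu0, sigma0, n, reps = 5.0, 2.0, 10, 20000

means, s2_unbiased, s2_biased = [], [], []
for _ in range(reps):
    x = [random.gauss(mu0, sigma0) for _ in range(n)]
    means.append(sum(x) / n)
    s2_unbiased.append(statistics.variance(x))   # divisor n-1
    s2_biased.append(statistics.pvariance(x))    # divisor n

print(sum(means) / reps)        # close to mu0 = 5
print(sum(s2_unbiased) / reps)  # close to sigma0^2 = 4
print(sum(s2_biased) / reps)    # close to (n-1)/n * 4 = 3.6  (biased down)
```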

  12. Efficiency • If both the sample mean and the half-sample average are unbiased estimators, we need a further statistical property to choose between them. • Next, we measure the distance between a point estimator θ̂ and the true parameter θ0 by the mean squared error (MSE): MSE(θ̂) = E[(θ̂ − θ0)²]. • An estimator with a smaller MSE is a relatively efficient (相對效率) estimator. • The estimator with the smallest MSE is the most efficient estimator (最有效估計式) or best estimator (最佳估計式).

  13. Efficiency • The MSE between the true parameter and the estimator decomposes as MSE(θ̂) = E[(θ̂ − θ0)²] = Var(θ̂) + [Bias(θ̂)]². • For two estimators θ̂1 and θ̂2, if MSE(θ̂1) < MSE(θ̂2), then θ̂1 is more efficient than θ̂2; the ratio MSE(θ̂2)/MSE(θ̂1) is the relative efficiency between the two estimators.
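
For completeness, a standard one-line derivation (not spelled out on the slide) of the decomposition above, written in LaTeX (amsmath):

```latex
\begin{align*}
\mathrm{MSE}(\hat\theta)
  &= E\big[(\hat\theta-\theta_0)^2\big]
   = E\big[(\hat\theta-E\hat\theta+E\hat\theta-\theta_0)^2\big] \\
  &= E\big[(\hat\theta-E\hat\theta)^2\big]
     + 2\,\underbrace{E\big[\hat\theta-E\hat\theta\big]}_{=\,0}\,
       (E\hat\theta-\theta_0)
     + (E\hat\theta-\theta_0)^2 \\
  &= \operatorname{Var}(\hat\theta) + \big[\mathrm{Bias}(\hat\theta)\big]^2 .
\end{align*}
```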

  14. Efficiency • If θ̂ is an unbiased estimator (zero bias), then the MSE is simply the variance of θ̂. • A bigger sample size yields lower variability of the estimator, and hence a more accurate estimator. • If {X1,…,Xn} is a series of i.i.d. random variables with mean µ0 and variance σ0², then the sample mean X̄n is the best linear unbiased estimator (最佳線性不偏估計式, BLUE) of µ0.
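
A quick simulation sketch of the efficiency comparison above (all numbers are illustrative): both estimators are unbiased, but the full-sample mean has variance σ0²/n while the half-sample mean has variance σ0²/(n/2), twice as large:

```python
# Efficiency: full-sample mean vs half-sample mean.
import random

random.seed(4)
mu0, sigma0, n, reps = 0.0, 1.0, 20, 20000

full, half = [], []
for _ in range(reps):
    x = [random.gauss(mu0, sigma0) for _ in range(n)]
    full.append(sum(x) / n)
    half.append(sum(x[: n // 2]) / (n // 2))

def var(v):
    m = sum(v) / len(v)
    return sum((e - m) ** 2 for e in v) / len(v)

print(var(full))  # about sigma0^2/20 = 0.05
print(var(half))  # about sigma0^2/10 = 0.10, so less efficient
```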

  15. Consistency • Consistency: another criterion capturing whether a bigger sample yields a more accurate estimator. • If for any tiny value ε > 0, lim(n→∞) P(|θ̂n − θ0| > ε) = 0, or equivalently lim(n→∞) P(|θ̂n − θ0| ≦ ε) = 1, then θ̂n is a consistent estimator (一致性估計式). • Consistency indicates that when the sample size grows extremely large, the estimate and the true parameter are very likely to coincide (i.e., |θ̂n − θ0| is close to zero).

  16. Consistency • The sample mean is a consistent estimator. • For an extremely large sample, the probability that the sample mean lies arbitrarily close to the population mean is close to one. Let {X1,…,Xn} be a series of i.i.d. random variables with mean µ0 and variance σ0². X̄n is an unbiased estimator of µ0, with variance σ0²/n. From Chebyshev's inequality, P(|X̄n − µ0| > ε) ≦ σ0²/(nε²). As n approaches infinity, the right-hand side of the inequality converges to zero, so X̄n is a consistent estimator of µ0.
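
A sketch of this argument in simulation (ε and the other constants are arbitrary): the empirical probability P(|X̄n − µ0| > ε) shrinks with n and stays below the Chebyshev bound σ0²/(nε²) once the bound is informative:

```python
# Consistency of the sample mean: empirical miss rate vs Chebyshev bound.
import random

random.seed(5)
mu0, sigma0, eps, reps = 1.0, 2.0, 0.2, 2000

for n in (10, 100, 1000):
    misses = 0
    for _ in range(reps):
        xbar = sum(random.gauss(mu0, sigma0) for _ in range(n)) / n
        if abs(xbar - mu0) > eps:
            misses += 1
    # empirical P(|Xbar - mu0| > eps) vs bound sigma0^2 / (n * eps^2)
    print(n, misses / reps, sigma0**2 / (n * eps**2))
```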

  17. Consistency • How large should the sample size be to guarantee a consistent estimator? • Generally, there is no definite answer. • Make n as large as possible. • Theoretically, if as n approaches infinity (a) E(θ̂n) → θ0 and (b) Var(θ̂n) → 0, then θ̂n is a consistent estimator of θ0.

  18. Consistency and weak law of large numbers • Recall the weak law of large numbers: for i.i.d. random variables with mean µ0, X̄n converges in probability to µ0. • By the law of large numbers, analog estimators are consistent estimators. • Any continuous transformation of a consistent estimator is also a consistent estimator. Thinking: must a consistent estimator be unbiased? If θ̂n is an unbiased and consistent estimator of θ0, then E(θ̂n) = θ0 for every n. But a consistent estimator can be biased in finite samples: for example, the variance estimator with divisor n has expectation ((n−1)/n)σ0² ≠ σ0², yet its bias vanishes as n → ∞, so it is still consistent.

  19. Law of large numbers (大數法則) As the number of randomly drawn observations in a sample increases, the mean of the sample gets closer and closer to the population mean µ. This is the law of large numbers. It is valid for any population.

  20. Sampling distribution • A statistic (統計量) is a function of a random sample, and the random sample is governed by its probability distribution; thus we call the distribution of the statistic the sampling distribution (抽樣分配). • Sometimes we can derive the true distribution of the statistic; we call it the exact sampling distribution (實際分配). • If the exact sampling distribution is not available, we can use a large sample to approximate the true distribution. The distribution attained as the sample grows extremely large is the limiting distribution (極限分配).

  21. Some important sampling distributions • Sample mean: population variance known; population variance unknown • Sample variance • Difference of two sample variances • Difference of two sample means: population variance known; population variance unknown (homoskedastic case; heteroskedastic case)

  22. Sampling distribution — sample mean Although the sample mean, x̄, is a unique number for any particular sample, if you pick a different sample you will probably get a different sample mean. In fact, you could get many different values for the sample mean, and virtually none of them would actually equal the true population mean, µ.

  23. But the sampling distribution is narrower than the population distribution, by a factor of √n. Thus, the estimates gained from our samples are always relatively close to the population parameter µ. [Figure: population distribution of individual subjects vs. the narrower sampling distribution of means of n subjects, both centered at µ.] If the population is Normally distributed N(µ, σ), so is the sampling distribution of the mean: N(µ, σ/√n).

  24. 95% of all sample means will be within roughly 2 standard deviations (2·σ/√n) of the population parameter µ. Because distances are symmetric, this implies that the population parameter µ must be within roughly 2 standard deviations of the sample average x̄ in 95% of all samples. [Figure: many sample means; a red dot marks the mean of each individual sample.] This reasoning is the essence of statistical inference.

  25. Sampling distribution — sample mean • If {X1,X2,…,Xn} are i.i.d. N(µ0, σ0²) random variables, then X̄n ~ N(µ0, σ0²/n); or, standardized, Z = (X̄n − µ0)/(σ0/√n) ~ N(0, 1). • Note that this result is true only if the population variance is known. In particular, we have E(X̄n) = µ0 and Var(X̄n) = σ0²/n.

  26. Sampling distribution — sample mean • If the population variance is unknown, we standardize with the sample variance instead, but the standardized statistic no longer follows a normal distribution. • If {X1,…,Xn} are i.i.d. N(µ0, σ0²) random variables, then the sample mean X̄n and the sample variance Sn² are independent. • If {X1,…,Xn} are i.i.d. N(µ0, σ0²) random variables, then (n−1)Sn²/σ0² ~ χ²(n−1).

  27. Sampling distribution — sample mean • If {X1,…,Xn} are i.i.d. N(µ0, σ0²) random variables, then (X̄n − µ0)/(Sn/√n) ~ t(n−1).

  28. When n is very large, s is a very good estimate of σ, and the corresponding t distributions are very close to the Normal distribution. The t distributions become wider for smaller sample sizes, reflecting the lack of precision in estimating σ from s.

  29. When σ is unknown, we use a t distribution with “n−1” degrees of freedom (df). Table D shows the z-values and t-values corresponding to landmark P-values/ confidence levels. When σ is known, we use the Normal distribution and the standardized z-value.
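
A sketch using SciPy (assumed available; the df values are illustrative) of the Table D comparison: the two-sided 95% t critical values shrink toward the Normal z* ≈ 1.96 as the degrees of freedom grow:

```python
# Landmark two-sided 95% critical values: Normal z* vs t* for several df.
from scipy.stats import norm, t

print(norm.ppf(0.975))          # z* = 1.959...
for df in (4, 9, 29, 99):
    print(df, t.ppf(0.975, df)) # t* shrinks toward z* as df grows
```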

  30. Sampling distribution — two-sample problem [Figure: (A) two different populations, one sample drawn from each; (B) a single population, two samples drawn from it.] Which is it? We often compare two treatments used on independent samples. Is the difference between the two treatments due only to variation from the random sampling, as in (B), or does it reflect a true difference in population means, as in (A)? Independent samples: subjects in one sample are completely unrelated to subjects in the other sample.

  31. Sampling distribution — difference of two sample variances • {X1,…,Xn} and {Y1,…,Ym} are two i.i.d. random samples with distributions N(µX, σX²) and N(µY, σY²), respectively. Let SX² and SY² be the corresponding sample variances; then (SX²/σX²)/(SY²/σY²) ~ F(n−1, m−1). If σX² = σY², then SX²/SY² ~ F(n−1, m−1).

  32. Sampling distribution — difference of two sample means • {X1,…,Xn} and {Y1,…,Ym} are two i.i.d. random samples with distributions N(µX, σX²) and N(µY, σY²), respectively. If the population variances are known, the statistic for testing mean equality is Z = [(X̄ − Ȳ) − (µX − µY)] / √(σX²/n + σY²/m) ~ N(0, 1).

  33. Sampling distribution — difference of two sample means • {X1,…,Xn} and {Y1,…,Ym} are two i.i.d. random samples with distributions N(µX, σX²) and N(µY, σY²), respectively. If the population variances are unknown but equal, the statistic for testing mean equality is T = [(X̄ − Ȳ) − (µX − µY)] / [Sp·√(1/n + 1/m)] ~ t(n + m − 2), where the pooled variance is Sp² = [(n−1)SX² + (m−1)SY²]/(n + m − 2).

  34. Sampling distribution — difference of two sample means • {X1,…,Xn} and {Y1,…,Ym} are two i.i.d. random samples with distributions N(µX, σX²) and N(µY, σY²), respectively. If the population variances are unknown and unequal, the statistic is T = [(X̄ − Ȳ) − (µX − µY)] / √(SX²/n + SY²/m), which is approximately t-distributed with degrees of freedom given by the Welch–Satterthwaite formula.
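
A sketch tying slides 33-34 together (the data are made up; SciPy is assumed available): compute the pooled and Welch statistics by hand and cross-check them against scipy.stats.ttest_ind, whose equal_var flag switches between the two:

```python
# Pooled (equal-variance) vs Welch (unequal-variance) two-sample t statistics.
import math
from scipy.stats import ttest_ind

x = [5.1, 4.9, 6.2, 5.6, 5.8, 5.4]
y = [4.2, 4.8, 4.5, 5.0, 4.1]
n, m = len(x), len(y)
xbar, ybar = sum(x) / n, sum(y) / m
sx2 = sum((v - xbar) ** 2 for v in x) / (n - 1)
sy2 = sum((v - ybar) ** 2 for v in y) / (m - 1)

sp2 = ((n - 1) * sx2 + (m - 1) * sy2) / (n + m - 2)   # pooled variance
t_pooled = (xbar - ybar) / math.sqrt(sp2 * (1 / n + 1 / m))
t_welch = (xbar - ybar) / math.sqrt(sx2 / n + sy2 / m)

print(t_pooled, ttest_ind(x, y, equal_var=True).statistic)   # should match
print(t_welch, ttest_ind(x, y, equal_var=False).statistic)   # should match
```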

  35. Common mistake!!! A common mistake is to calculate a one-sample confidence interval for µ1 and then check whether µ2 falls within that confidence interval, or vice versa. This is wrong because the variability of the sampling distribution for two independent samples is more complex and must take into account variability coming from both samples. Hence the more complex formula for the standard error: SE = √(s1²/n1 + s2²/n2).

  36. Asymptotic distribution and central limit theorem • It is not easy to derive the exact distribution of a statistic when the random sample does not follow a normal distribution, or when its distribution is unknown. To solve this problem, we can take advantage of the asymptotic distribution of the statistic. • Central limit theorem (CLT, 中央極限定理): no matter what the probability distribution of the random variable, the standardized sample mean approximately follows a standard normal when the sample size n grows very large. One important condition for the CLT is that the random variable has finite variance.

  37. Asymptotic distribution and central limit theorem • Central limit theorem: let {X1,…,Xn} be an i.i.d. random sample with mean µ0 and variance σ0². For every a, lim(n→∞) P((X̄n − µ0)/(σ0/√n) ≦ a) = Φ(a), where Φ is the N(0, 1) c.d.f. We can also express this as: √n·(X̄n − µ0) converges in distribution to N(0, σ0²).
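
A simulation sketch of the theorem (the exponential population, n, and replication count are arbitrary choices): even for a strongly skewed population, the standardized mean behaves like N(0, 1):

```python
# CLT demo: standardized means of an Exponential(1) population.
import math
import random

random.seed(6)
mu0, sigma0 = 1.0, 1.0   # Exponential(1) has mean 1 and variance 1
n, reps = 200, 10000

z = []
for _ in range(reps):
    xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
    z.append((xbar - mu0) / (sigma0 / math.sqrt(n)))

# Compare an empirical probability with Phi: P(Z <= 1.96) should be ~0.975
print(sum(1 for v in z if v <= 1.96) / reps)
```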

  38. The central limit theorem (Delay times) The individual flight delay times are not Normally distributed.

  39. The central limit theorem (Delay times) The means of 100 flight delay times are less variable and appear Normally distributed.

  40. The central limit theorem (Delay times) Using a different scale better shows the Normal distribution. The histogram is centered on the population mean.

  41. The central limit theorem (Delay times) A Normal quantile plot confirms the Normal distribution.

  42. The central limit theorem Central Limit Theorem: when randomly sampling from any population with mean µ and standard deviation σ, when n is large enough, the sampling distribution of x̄ is approximately normal: x̄ ~ N(µ, σ/√n). [Figure: a population with a strongly skewed distribution, and the sampling distributions of x̄ for n = 2, n = 10, and n = 25 observations, growing steadily closer to normal.]

  43. Income distribution Let's consider the very large database of individual incomes from the Bureau of Labor Statistics as our population. It is strongly right-skewed. • We take 1000 SRSs of 100 incomes, calculate the sample mean for each, and make a histogram of these 1000 means. • We also take 1000 SRSs of 25 incomes, calculate the sample mean for each, and make a histogram of these 1000 means. Which histogram corresponds to the samples of size 100? Which to the samples of size 25?

  44. How large a sample size? It depends on the population distribution: more observations are required if the population distribution is far from normal. • A sample size of 25 is generally enough to obtain a normal sampling distribution under strong skewness or even mild outliers. • A sample size of 40 will typically be good enough to overcome extreme skewness and outliers. In many cases, n = 25 is good enough; thus, even for strange population distributions, we can assume a normal sampling distribution of the mean and work with it to solve problems.

  45. How large a sample size? Caution: note that 25 (some textbooks say 30 or 40) is not automatically a "large sample" that makes the CLT hold. Consider this random variable: Xt = Xt−1 + et (a unit root, 單根). [Figure: simulated distributions for N = 500 and N = 2000 observations.]
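
A sketch of why this caution matters (constants are illustrative): for the random walk Xt = Xt−1 + et, the spread of the sample mean grows with the sample size instead of shrinking like σ/√n, so no fixed n is "large enough":

```python
# Unit-root caution: the sample mean of a random walk does not settle down.
import random

random.seed(7)

def random_walk_mean(n):
    x, total = 0.0, 0.0
    for _ in range(n):
        x += random.gauss(0.0, 1.0)   # X_t = X_{t-1} + e_t
        total += x
    return total / n

for n in (500, 2000):
    means = [random_walk_mean(n) for _ in range(2000)]
    spread = (sum(m * m for m in means) / len(means)) ** 0.5
    print(n, spread)  # grows with n, instead of shrinking like sigma/sqrt(n)
```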

  46. Interval estimation • Point estimation gives a single best guess for the unknown parameter. • Interval estimation (區間估計) offers a possible range (interval) for the unknown parameter. • Example: average income (sample mean) is 45,000/month. • While point estimation reports a single exact value, interval estimation offers a flexible range. • Example: average income (sample mean) ranges between 3,000 and 55,000/month.

  47. Confidence coefficient, confidence level and confidence interval • The confidence interval (信賴區間) of an unknown parameter θ0 is a range with stochastic upper and lower boundaries (a, b); the probability that θ0 lies in this range is positive. The boundaries a and b are determined by the point estimator θ̂. • P({a ≦ θ0 ≦ b}) = γ. Confidence coefficient (信賴係數) γ: given the unknown parameter θ0 and the confidence interval (a, b), we are c% confident that θ0 lies in the range (a, b). The confidence level (信賴水準) c is equal to 100% × γ.

  48. With 95% confidence, we can say that µ should be within roughly 2 standard deviations (2·σ/√n) of our sample mean x̄. In 95% of all possible samples of this size n, µ will indeed fall in our confidence interval. In only 5% of samples would x̄ be farther from µ. (Using the sample mean as the example.)

  49. A confidence interval can be expressed in two ways: as two endpoints, x̄ − m to x̄ + m (e.g., 114 to 126), or as mean ± m (e.g., 120 ± 6), where m is called the margin of error. A confidence level C (in %) indicates the probability that µ falls within the interval; it represents the area under the Normal curve within ± m of the center of the curve.

  50. Confidence interval — Example 1 • {X1,X2,…,Xn} are i.i.d. random variables following N(µ0, σ0²), where σ0² is known and µ0 is unknown; find the confidence interval for µ0 at the 95% confidence level. Because the variance is given, we know (X̄n − µ0)/(σ0/√n) ~ N(0, 1), so P(−1.96 ≦ (X̄n − µ0)/(σ0/√n) ≦ 1.96) = 0.95, and the 95% confidence interval is X̄n ± 1.96·σ0/√n.
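
A sketch of Example 1 in code (the data values and σ0 are made up for illustration):

```python
# 95% confidence interval for mu0 when sigma0 is known (1.96 Normal quantile).
import math

sigma0 = 2.0                    # known population standard deviation
x = [4.1, 5.3, 4.8, 6.0, 5.1, 4.4, 5.6, 4.9]
n = len(x)
xbar = sum(x) / n
half_width = 1.96 * sigma0 / math.sqrt(n)

print((xbar - half_width, xbar + half_width))  # xbar +/- 1.386 here
```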
