
Chapter 4. Inference about Process Quality



  1. Chapter 4. Inference about Process Quality

  2. Random Sample. Statistics

  3. Chi-square (2) Distribution

  4. t Distribution

  5. F Distribution

  6. Estimator: estimates a probability distribution parameter from samples • Good characteristics for estimators: • Unbiased • Minimum variance

  7. As n gets large, the bias goes to zero
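
A minimal simulation sketch (not from the slides) of the bias point above: the variance estimator with divisor n is biased low, the divisor n − 1 version is unbiased, and the bias of the n-divisor estimator shrinks as n grows. The population and sample sizes are illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)
    true_var = 4.0  # variance of the simulated normal population

    for n in (5, 20, 100):
        # Average each variance estimator over many simulated samples of size n
        samples = rng.normal(loc=10.0, scale=true_var ** 0.5, size=(20000, n))
        biased = samples.var(axis=1, ddof=0).mean()    # divisor n
        unbiased = samples.var(axis=1, ddof=1).mean()  # divisor n - 1
        print(f"n={n:3d}  mean(S^2, divisor n)={biased:.3f}  mean(S^2, divisor n-1)={unbiased:.3f}")
    # As n gets large, the gap between the divisor-n average and true_var goes to zero.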

  8. Hypothesis Testing • Null hypothesis H0 and alternative hypothesis H1 • In this example, H1 is a two-sided alternative hypothesis

  9. H1 is a two-sided alternative hypothesis. • The procedure for testing this hypothesis is to: • take a random sample of n observations on the random variable x, • compute the test statistic, and • reject H0 if |Z0| > Zα/2, where Zα/2 is the upper α/2 percentage point of the standard normal distribution.

  10. One-Sided Alternative Hypotheses • In some situations we may wish to reject H0 only if the true mean is larger than µ0 • Thus, the one-sided alternative hypothesis is H1: µ>µ0, and we would reject H0: µ=µ0 only if Z0>Zα • If rejection is desired only when µ<µ0 • Then the alternative hypothesis is H1: µ<µ0, and we reject H0 only if Z0<−Zα
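
A hedged sketch of the z-test procedure from slides 9-10, assuming the process standard deviation σ is known; the sample values, µ0, σ, and α below are made-up illustrations, not data from the chapter.

    import numpy as np
    from scipy import stats

    x = np.array([16.1, 15.8, 16.3, 16.0, 15.9, 16.2, 16.4, 15.7])  # hypothetical sample
    mu0, sigma, alpha = 16.0, 0.25, 0.05   # hypothesized mean, known sigma, significance level

    z0 = (x.mean() - mu0) / (sigma / np.sqrt(len(x)))

    # Two-sided H1: mu != mu0 -> reject H0 if |Z0| > Z_{alpha/2}
    reject_two_sided = abs(z0) > stats.norm.ppf(1 - alpha / 2)
    # One-sided H1: mu > mu0 -> reject H0 if Z0 > Z_alpha
    reject_upper = z0 > stats.norm.ppf(1 - alpha)
    # One-sided H1: mu < mu0 -> reject H0 if Z0 < -Z_alpha
    reject_lower = z0 < -stats.norm.ppf(1 - alpha)

    print(f"Z0 = {z0:.3f}, two-sided reject: {reject_two_sided}")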

  11. Confidence Interval • If P(L ≤ μ ≤ U) = 1 − α, then L ≤ μ ≤ U is a 100(1 − α)% confidence interval for μ. • The case considered here: the variance is known.
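
A short sketch of the 100(1 − α)% confidence interval for the mean with known variance, x̄ ± Z_{α/2}·σ/√n; the data, σ, and α are hypothetical.

    import numpy as np
    from scipy import stats

    x = np.array([16.1, 15.8, 16.3, 16.0, 15.9, 16.2, 16.4, 15.7])  # hypothetical sample
    sigma, alpha = 0.25, 0.05

    half_width = stats.norm.ppf(1 - alpha / 2) * sigma / np.sqrt(len(x))
    L, U = x.mean() - half_width, x.mean() + half_width
    print(f"{100 * (1 - alpha):.0f}% CI for mu: ({L:.3f}, {U:.3f})")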

  12. For the two-sided alternative hypothesis, reject H0 if |t0| > tα/2,n−1, where tα/2,n−1 is the upper α/2 percentage point of the t distribution with n − 1 degrees of freedom • For the one-sided alternative hypotheses, • If H1: µ > µ0, reject H0 if t0 > tα,n−1, and • If H1: µ < µ0, reject H0 if t0 < −tα,n−1 • One could also compute the P-value for a t-test
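
A sketch of the one-sample t-test when σ is unknown, using SciPy's ttest_1samp (which returns t0 and the two-sided P-value mentioned above). The sample of n = 15 observations is hypothetical; with 14 degrees of freedom the critical value t0.025,14 = 2.145 matches slide 13.

    import numpy as np
    from scipy import stats

    x = np.array([16.1, 15.8, 16.3, 16.0, 15.9, 16.2, 16.4, 15.7,
                  16.1, 15.9, 16.0, 16.2, 15.8, 16.3, 16.1])  # hypothetical, n = 15
    mu0, alpha = 16.0, 0.05

    t0, p_two_sided = stats.ttest_1samp(x, popmean=mu0)
    t_crit = stats.t.ppf(1 - alpha / 2, df=len(x) - 1)   # t_{alpha/2, n-1} = t_{0.025, 14}

    print(f"t0 = {t0:.3f}, t_crit = {t_crit:.3f}, P = {p_two_sided:.4f}")
    # Two-sided test: reject H0 if |t0| > t_crit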

  13. Since |t0| < t0.025,14 = 2.145, we fail to reject H0.

  14. Section 3-3.4 describes hypothesis testing and confidence intervals on the variance of a normal distribution
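
A hedged sketch of the chi-square-based confidence interval for the variance of a normal distribution referenced here; the data and α are illustrative.

    import numpy as np
    from scipy import stats

    x = np.array([16.1, 15.8, 16.3, 16.0, 15.9, 16.2, 16.4, 15.7])  # hypothetical sample
    n, alpha = len(x), 0.05
    s2 = x.var(ddof=1)   # sample variance

    # 100(1 - alpha)% CI for sigma^2:
    #   ( (n-1)s^2 / chi2_{alpha/2, n-1} ,  (n-1)s^2 / chi2_{1-alpha/2, n-1} )
    lower = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, df=n - 1)
    upper = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, df=n - 1)
    print(f"95% CI for sigma^2: ({lower:.4f}, {upper:.4f})")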

  15. Suppose that, of n samples chosen, x belong to a class of interest that occurs with probability p.

  16. Confidence Intervals on a Population Proportion • For large n and p, use the normal approximation. • For large n and small p, use the Poisson approximation. • For small n, use the binomial distribution.
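
A minimal sketch of the large-sample (normal approximation) confidence interval for a population proportion; n, x, and α are hypothetical.

    import numpy as np
    from scipy import stats

    n, x, alpha = 200, 18, 0.05   # hypothetical: 18 items of interest out of 200
    p_hat = x / n
    half_width = stats.norm.ppf(1 - alpha / 2) * np.sqrt(p_hat * (1 - p_hat) / n)
    print(f"p_hat = {p_hat:.3f}, CI = ({p_hat - half_width:.3f}, {p_hat + half_width:.3f})")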

  17. Two independent samples of size n1 and n2. Of them, x1 and x2 belong to the class of interest.
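
A sketch of the large-sample z test for comparing the two proportions described here, using the pooled estimate of p; the counts are hypothetical.

    import numpy as np
    from scipy import stats

    n1, x1 = 200, 18    # hypothetical sample 1
    n2, x2 = 250, 35    # hypothetical sample 2
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)

    z0 = (p1 - p2) / np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    p_value = 2 * stats.norm.sf(abs(z0))   # two-sided P-value
    print(f"Z0 = {z0:.3f}, P = {p_value:.4f}")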

  18. More on Two Populations

  19. Analysis of Variance (ANOVA)
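
A minimal one-way ANOVA sketch using SciPy's f_oneway; the three hypothetical treatment groups are illustrative only.

    from scipy import stats

    # Hypothetical observations from three treatments
    a = [16.1, 15.8, 16.3, 16.0, 15.9]
    b = [16.4, 16.6, 16.2, 16.5, 16.7]
    c = [15.9, 16.0, 15.8, 16.1, 15.7]

    F0, p_value = stats.f_oneway(a, b, c)
    print(f"F0 = {F0:.3f}, P = {p_value:.4f}")
    # Reject H0 (all treatment means equal) if P < alpha, i.e., F0 > F_{alpha, a-1, N-a}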
