
ITED 434 Quality Organization & Management Ch 10 & 11


  1. ITED 434 Quality Organization & Management Ch 10 & 11 Ch 10: Basic Concepts of Statistics and Probability Ch 11: Statistical Tools for Analyzing Data

  2. Chapter Overview • Statistical Fundamentals • Process Control Charts • Some Control Chart Concepts • Process Capability • Other Statistical Techniques in Quality Management

  3. Statistical Fundamentals • Statistical Thinking • Is a decision-making skill demonstrated by the ability to draw conclusions based on data. • Why Do Statistics Sometimes Fail in the Workplace? • Regrettably, statistical tools often do not produce the desired result. Why is this so? Many firms fail to implement quality control in a substantive way.

  4. Statistical Fundamentals • Reasons for Failure of Statistical Tools • Lack of knowledge about the tools; therefore, tools are misapplied. • General disdain for all things mathematical creates a natural barrier to the use of statistics. • Cultural barriers in a company make the use of statistics for continual improvement difficult. • Statistical specialists have trouble communicating with managerial generalists.

  5. Statistical Fundamentals • Reasons for Failure of Statistical Tools (continued) • Statistics generally are poorly taught, emphasizing mathematical development rather than application. • People have a poor understanding of the scientific method. • Organizations lack patience in collecting data. All decisions have to be made “yesterday.”

  6. Statistical Fundamentals • Reasons for Failure of Statistical Tools (continued) • Statistics are viewed as something to buttress an already-held opinion rather than a method for informing and improving decision making. • Most people don’t understand random variation, which results in too much process tampering.

  7. Statistical Fundamentals • Understanding Process Variation • Random variation is centered around a mean and occurs with a consistent amount of dispersion. • This type of variation cannot be controlled. Hence, we refer to it as “uncontrolled variation.” • The statistical tools discussed in this chapter are not designed to detect random variation.

  8. Statistical Fundamentals • Understanding Process Variation (cont.) • Nonrandom or “special cause” variation results from some event. The event may be a shift in a process mean or some unexpected occurrence. • Process Stability • Means that the variation we observe in the process is random variation. To determine process stability we use process charts.

  9. Statistical Fundamentals • Sampling Methods • To ensure that processes are stable, data are gathered in samples. • Random samples. Randomization is useful because it ensures independence among observations. To randomize means to sample in such a way that every piece of product has an equal chance of being selected for inspection. • Systematic samples. Systematic samples have some of the benefits of random samples without the difficulty of randomizing.

  10. Statistical Fundamentals • Sampling Methods (continued) • Sampling by Rational Subgroup. A rational subgroup is a group of data that is logically homogeneous; variation within the data can provide a yardstick for setting limits on the variation between subgroups.

  11. Standard normal distribution • The standard normal distribution is a normal distribution with a mean of 0 and a standard deviation of 1. Normal distributions can be transformed to standard normal distributions by the formula: z = (X − μ) / σ • X is a score from the original normal distribution, μ is the mean of the original normal distribution, and σ is the standard deviation of the original normal distribution.

  12. Standard normal distribution • A z score always reflects the number of standard deviations above or below the mean a particular score is. • For instance, if a person scored a 70 on a test with a mean of 50 and a standard deviation of 10, then they scored 2 standard deviations above the mean. Converting the test scores to z scores, an X of 70 would be: z = (70 − 50) / 10 = 2 • So, a z score of 2 means the original score was 2 standard deviations above the mean. Note that the z distribution will only be a normal distribution if the original distribution (X) is normal.
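The z-score transformation above can be sketched in Python (a minimal illustration; the function name is ours, not from the text):

```python
def z_score(x, mu, sigma):
    """Number of standard deviations x lies above (+) or below (-) the mean."""
    return (x - mu) / sigma

# The slide's example: a score of 70 on a test with mean 50 and sd 10
print(z_score(70, 50, 10))  # 2.0
```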

  13. Applying the formula Applying the formula will always produce a transformed variable with a mean of zero and a standard deviation of one. However, the shape of the distribution will not be affected by the transformation. If X is not normal then the transformed distribution will not be normal either. One important use of the standard normal distribution is for converting between scores from a normal distribution and percentile ranks. About .68 (.34 + .34) of the distribution is between -1 and 1, while about .95 of the distribution is between -2 and 2.
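These areas can be checked with Python's standard library (`statistics.NormalDist`, available since Python 3.8):

```python
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, sd 1
within_1 = z.cdf(1) - z.cdf(-1)  # area between -1 and +1
within_2 = z.cdf(2) - z.cdf(-2)  # area between -2 and +2
print(round(within_1, 4))  # 0.6827
print(round(within_2, 4))  # 0.9545
```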

  14. Area under a portion of the normal curve - Example 1 If a test is normally distributed with a mean of 60 and a standard deviation of 10, what proportion of the scores are above 85? A score of 85 is z = (85 − 60) / 10 = 2.5 standard deviations above the mean. From the z table, .9938 of the scores are less than or equal to a score 2.5 standard deviations above the mean. It follows that only 1 − .9938 = .0062 of the scores are above that point. Therefore, only .0062 of the scores are above 85.
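The same lookup can be done in code; `NormalDist` takes the raw mean and standard deviation, so no manual z conversion is needed:

```python
from statistics import NormalDist

scores = NormalDist(60, 10)      # test scores: mean 60, sd 10
p_above_85 = 1 - scores.cdf(85)  # z = (85 - 60) / 10 = 2.5
print(round(p_above_85, 4))  # 0.0062
```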

  15. Example 2 • Suppose you wanted to know the proportion of students receiving scores between 70 and 80. The approach is to figure out the proportion of students scoring below 80 and the proportion below 70. • The difference between the two proportions is the proportion scoring between 70 and 80. • First, the calculation of the proportion below 80. Since 80 is 20 points above the mean and the standard deviation is 10, 80 is 2 standard deviations above the mean. The z table is used to determine that .9772 of the scores are below a score 2 standard deviations above the mean.

  16. Example 2 To calculate the proportion below 70: 70 is 1 standard deviation above the mean, and the z table shows that the proportion of scores less than 1 standard deviation above the mean is .8413. So, if .1587 of the scores are above 70 and .0228 are above 80, then .1587 − .0228 = .1359 are between 70 and 80. • Now assume a test is normally distributed with a mean of 100 and a standard deviation of 15. What proportion of the scores would be between 85 and 105? • The solution to this problem is similar to the solution to the last one. The first step is to calculate the proportion of scores below 85. Next, calculate the proportion of scores below 105. Finally, subtract the first result from the second to find the proportion scoring between 85 and 105.

  17. Example 2 Begin by calculating the proportion below 85. 85 is one standard deviation below the mean: z = (85 − 100) / 15 = −1. Using the z table with the value of −1 for z, the area below −1 (or 85 in terms of the raw scores) is .1587. Do the same for 105.

  18. Example 2 For 105, z = (105 − 100) / 15 = 0.33, and the z table shows that the proportion scoring below 0.33 (105 in raw scores) is .6304. The difference is .6304 − .1587 = .4717. So .4717 of the scores are between 85 and 105.
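The whole Example 2 calculation, sketched in Python:

```python
from statistics import NormalDist

scores = NormalDist(100, 15)
p_between = scores.cdf(105) - scores.cdf(85)  # z = +1/3 and z = -1
print(round(p_between, 2))  # 0.47
```

The code gives .4719 rather than the table's .4717 because the z table rounds 1/3 to 0.33 before the lookup.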

  19. Sampling Distributions

  20. Sampling Distributions • If you compute the mean of a sample of 10 numbers, the value you obtain will not equal the population mean exactly; by chance it will be a little bit higher or a little bit lower. • If you sampled sets of 10 numbers over and over again (computing the mean for each set), you would find that some sample means come much closer to the population mean than others. Some would be higher than the population mean and some would be lower. • Imagine sampling 10 numbers and computing the mean over and over again, say about 1,000 times, and then constructing a relative frequency distribution of those 1,000 means.

  21. Sampling Distributions • The distribution of means is a very good approximation to the sampling distribution of the mean. • The sampling distribution of the mean is a theoretical distribution that is approached as the number of samples in the relative frequency distribution increases. • With 1,000 samples, the relative frequency distribution is quite close; with 10,000 it is even closer. • As the number of samples approaches infinity, the relative frequency distribution approaches the sampling distribution.

  22. Sampling Distributions • The sampling distribution of the mean for a sample size of 10 was just an example; there is a different sampling distribution for other sample sizes. • Also, keep in mind that the relative frequency distribution approaches a sampling distribution as the number of samples increases, not as the sample size increases since there is a different sampling distribution for each sample size.

  23. Sampling Distributions • A sampling distribution can also be defined as the relative frequency distribution that would be obtained if all possible samples of a particular sample size were taken. • For example, the sampling distribution of the mean for a sample size of 10 would be constructed by computing the mean for each of the possible ways in which 10 scores could be sampled from the population and creating a relative frequency distribution of these means. • Although these two definitions may seem different, they are actually the same: Both procedures produce exactly the same sampling distribution.
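The repeated-sampling procedure described above can be simulated directly (a sketch; the sample size, repetition count, and seed are arbitrary choices):

```python
import random
from statistics import mean, stdev

random.seed(42)  # reproducible run
N, reps = 10, 1000
# Draw 1,000 samples of size 10 from a standard normal population
# and compute the mean of each sample
sample_means = [mean(random.gauss(0, 1) for _ in range(N)) for _ in range(reps)]

# The means cluster around the population mean (0),
# with spread close to sigma / sqrt(N) = 1 / sqrt(10) = 0.316
print(round(mean(sample_means), 3), round(stdev(sample_means), 3))
```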

  24. Sampling Distributions • Statistics other than the mean have sampling distributions too. The sampling distribution of the median is the distribution that would result if the median instead of the mean were computed in each sample. • Students often define "sampling distribution" as the sampling distribution of the mean. That is a serious mistake. • Sampling distributions are very important since almost all inferential statistics are based on sampling distributions.

  25. Sampling Distribution of the mean • The sampling distribution of the mean is a very important distribution. In later chapters you will see that it is used to construct confidence intervals for the mean and for significance testing. • Given a population with a mean of μ and a standard deviation of σ, the sampling distribution of the mean has a mean of μ and a standard deviation of σ/√N, where N is the sample size. • The standard deviation of the sampling distribution of the mean is called the standard error of the mean. It is designated by the symbol σM.

  26. Sampling Distribution of the mean • Note that the spread of the sampling distribution of the mean decreases as the sample size increases, while the mean of the distribution is not affected by sample size.
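The shrinking spread follows directly from the σ/√N formula; a quick sketch with hypothetical numbers:

```python
import math

def standard_error(sigma, n):
    """Standard deviation of the sampling distribution of the mean."""
    return sigma / math.sqrt(n)

print(standard_error(15, 9))    # 5.0
print(standard_error(15, 225))  # 1.0 (25x the sample size, 5x tighter)
```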

  27. Spread A variable's spread is the degree to which scores on the variable differ from each other. If every score on the variable were about equal, the variable would have very little spread. There are many measures of spread. Two distributions can have the same mean but differ in spread, with one much more spread out than the other. Variability and dispersion are synonyms for spread.

  28.–34. [Figure slides: relative frequency distributions of the sample means after 5, 10, 15, 20, 100, 1,000, and 10,000 samples]

  35. Hypothesis Testing

  36. Classical Approach • The Classical Approach to hypothesis testing is to compare a test statistic and a critical value. It is best used for distributions which give areas and require you to look up the critical value (like the Student's t distribution) rather than distributions which have you look up a test statistic to find an area (like the normal distribution). • The Classical Approach also has three different decision rules, depending on whether it is a left tail, right tail, or two tail test. • One problem with the Classical Approach is that if a different level of significance is desired, a different critical value must be read from the table.

  37. Decision Rules • Left Tailed Test: H1: parameter < value (notice the inequality points to the left). Decision Rule: Reject H0 if t.s. < c.v. • Right Tailed Test: H1: parameter > value (notice the inequality points to the right). Decision Rule: Reject H0 if t.s. > c.v. • Two Tailed Test: H1: parameter not equal to value (another way to write not equal is < or >; notice the inequality points to both sides). Decision Rule: Reject H0 if t.s. < c.v. (left) or t.s. > c.v. (right). • The decision rule can be summarized as follows: Reject H0 if the test statistic falls in the critical region, that is, if the test statistic is more extreme than the critical value.
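A minimal sketch of the classical decision rule. Python's standard library has no t distribution, so this uses a z test for illustration (known population standard deviation assumed); the test statistic value is hypothetical:

```python
from statistics import NormalDist

alpha = 0.05
cv = NormalDist().inv_cdf(1 - alpha)  # right-tail critical value, about 1.645
ts = 2.10                             # hypothetical test statistic

# Right-tailed decision rule: reject H0 if t.s. > c.v.
print(ts > cv)  # True -> reject H0
```

Note the drawback the slide mentions: changing alpha means recomputing (or re-reading) the critical value.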

  38. P-Value Approach • The P-Value Approach, short for Probability Value, approaches hypothesis testing in a different manner. Instead of comparing z-scores or t-scores as in the classical approach, you're comparing probabilities, or areas. • The level of significance (alpha) is the area in the critical region. That is, the area in the tails to the right or left of the critical values. • The p-value is the area to the right or left of the test statistic. If it is a two tail test, then look up the probability in one tail and double it. • If the test statistic is in the critical region, then the p-value will be less than the level of significance. It does not matter whether it is a left tail, right tail, or two tail test. This rule always holds. • Reject the null hypothesis if the p-value is less than the level of significance.

  39. P-Value Approach (Cont’d) • You will fail to reject the null hypothesis if the p-value is greater than or equal to the level of significance. • The p-value approach is best suited for the normal distribution when doing calculations by hand. However, many statistical packages will give the p-value but not the critical value. This is because it is easier for a computer or calculator to find the probability than it is to find the critical value. • Another benefit of the p-value is that the statistician immediately knows at what level the testing becomes significant. That is, with a p-value of 0.06 the null hypothesis would be rejected at a 0.10 level of significance, but not at a 0.05 level of significance. Warning: Do not decide on the level of significance after calculating the test statistic and finding the p-value.
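The p-value decision rule above, sketched with a hypothetical two-tailed z statistic:

```python
from statistics import NormalDist

ts = 2.0                                       # hypothetical test statistic
p_value = 2 * (1 - NormalDist().cdf(abs(ts)))  # two tails: double the one-tail area
print(round(p_value, 4))  # 0.0455

# Reject H0 when the p-value is below the level of significance
print(p_value < 0.05, p_value < 0.01)  # True False
```

This illustrates the slide's point: the same p-value is significant at alpha = 0.05 but not at alpha = 0.01.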

  40. P-Value Approach (Cont’d) • Any proportion equivalent to the following statement is correct: The test statistic is to the p-value as the critical value is to the level of significance.

  41. Process Control Charts (Slide 1 of 37) • Process Charts • Tools for monitoring process variation. • The figure on the following slide shows a process control chart. It has an upper limit, a center line, and a lower limit.

  42. Process Control Charts (Slide 2 of 37) Control Chart (Figure 10.3 in the Textbook): the chart has an Upper Control Limit (UCL), a Center Line (CL), and a Lower Control Limit (LCL). The UCL, CL, and LCL are computed statistically, and each plotted point represents data collected sequentially.

  43. Process Control Charts (Slide 3 of 37) • Variables and Attributes • To select the proper process chart, we must differentiate between variables and attributes. • A variable is a continuous measurement such as weight, height, or volume. • An attribute is the result of a binomial process that results in an either-or situation. • The most common types of variable and attribute charts are shown in the following slide.

  44. Process Control Charts (Slide 4 of 37) Variables and Attributes • Variables charts: X (process population average), X-bar (sample mean), R (range), MR (moving range), s (standard deviation) • Attributes charts: p (proportion defective), np (number defective), c (number of nonconformities), u (nonconformities per unit)

  45. Process Control Charts (Slide 5 of 37) Central Requirements for Properly Using Process Charts 1. You must understand the generic process for implementing process charts. 2. You must know how to interpret process charts. 3. You need to know when different process charts are used. 4. You need to know how to compute limits for the different types of process charts.

  46. Process Control Charts (Slide 6 of 37) • A Generalized Procedure for Developing Process Charts • Identify critical operations in the process where inspection might be needed. These are operations in which, if the operation is performed improperly, the product will be negatively affected. • Identify critical product characteristics. These are the attributes of the product that will result in either good or poor function of the product.

  47. Process Control Charts (Slide 7 of 37) • A Generalized Procedure for Developing Process Charts (continued) • Determine whether the critical product characteristic is a variable or an attribute. • Select the appropriate process control chart from among the many types of control charts. This decision process and types of charts available are discussed later. • Establish the control limits and use the chart to continually improve.

  48. Process Control Charts (Slide 8 of 37) • A Generalized Procedure for Developing Process Charts (continued) • Update the limits when changes have been made to the process.

  49. Process Control Charts (Slide 9 of 37) • Understanding Control Charts • A process chart is nothing more than an application of hypothesis testing where the null hypothesis is that the product meets requirements. • An X-bar chart is a variables chart that monitors average measurement. • An example of how to best understand control charts is provided under the heading “Understanding Control Charts” in the textbook.
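The X-bar chart's statistically computed limits can be sketched from subgroup data. This uses the standard A2 = 0.577 control-chart factor for subgroups of size 5; the measurements themselves are hypothetical:

```python
from statistics import mean

# Hypothetical measurements: 3 subgroups of 5 observations each
subgroups = [
    [5.02, 4.98, 5.01, 4.99, 5.00],
    [5.03, 5.00, 4.97, 5.01, 4.99],
    [4.98, 5.02, 5.00, 4.99, 5.01],
]

A2 = 0.577  # standard X-bar chart factor for subgroup size n = 5

xbars  = [mean(s) for s in subgroups]          # subgroup means
ranges = [max(s) - min(s) for s in subgroups]  # subgroup ranges

cl   = mean(xbars)     # center line (grand mean)
rbar = mean(ranges)    # average range
ucl  = cl + A2 * rbar  # upper control limit
lcl  = cl - A2 * rbar  # lower control limit
print(round(lcl, 3), round(cl, 3), round(ucl, 3))
```

A plotted subgroup mean outside [LCL, UCL] would signal nonrandom ("special cause") variation, mirroring the hypothesis-testing interpretation above.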
