
Design and Analysis of Experiments




  1. Design and Analysis of Experiments Prof. Dr. Amran Ahmed School of Science & Technology Universiti Malaysia Sabah

  2. Literally, an experiment is a test. • We can define an experiment as a test or series of tests in which purposeful changes are made to the input variables of a process or system so that we may observe and identify the reasons for changes that may be observed in the output response

  3. Literally, an experiment is a test. • We are concerned with planning and conducting experiments, and with analyzing the resulting data, so that valid and objective conclusions are obtained

  4. Example: • Suppose that a metallurgical engineer is interested in studying the effect of two different hardening processes, oil quenching and saltwater quenching, on an aluminum alloy. The objective of the experimenter is to determine which quenching solution produces the maximum hardness for this particular alloy.

  5. Example: • The engineer decides to subject a number of alloy specimens or test coupons to each quenching medium and measures the hardness of the specimens after quenching. • The average hardness of the specimens treated in each quenching solution will be used to determine which solution is the best.
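To sketch the analysis this slide implies (the hardness readings below are hypothetical, and the pooled two-sample t statistic is one common choice rather than something the slides prescribe), the comparison of the two average hardnesses could look like this:

```python
from math import sqrt

# Hypothetical hardness readings for 5 coupons per quenching medium
oil = [61.2, 60.8, 62.1, 61.5, 60.9]
salt = [62.4, 63.0, 62.7, 63.3, 62.6]

def mean(xs):
    return sum(xs) / len(xs)

def pooled_t(a, b):
    """Two-sample t statistic using a pooled variance estimate."""
    ma, mb = mean(a), mean(b)
    ssa = sum((x - ma) ** 2 for x in a)   # sum of squared deviations, group a
    ssb = sum((x - mb) ** 2 for x in b)   # sum of squared deviations, group b
    sp2 = (ssa + ssb) / (len(a) + len(b) - 2)   # pooled variance
    return (mb - ma) / sqrt(sp2 * (1 / len(a) + 1 / len(b)))

print(mean(oil), mean(salt))              # 61.3 vs 62.8 with these made-up data
print(round(pooled_t(oil, salt), 3))      # 5.303
```

The sample means answer "which solution is best"; the t statistic addresses whether the observed difference exceeds what experimental error alone would produce.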

  6. As we consider this simple experiment, a number of important questions come to mind • Are these two solutions the only quenching media of potential interest? • Are there any other factors that might affect hardness that should be investigated or controlled in this experiment? • How many coupons of alloy should be tested in each quenching solution? • How should the test coupons be assigned to the quenching solutions, and in what order should the data be collected?

  7. As we consider this simple experiment, a number of important questions come to mind • What method of data analysis should be used? • What difference in the average observed hardness between the two quenching media will be considered important?

  8. Why experiments need to be designed? • For many, experimental design conjures up unhappy memories of mathematics or statistics lessons, and is generally thought of as something difficult that should be left to statisticians. • WRONG on both counts.

  9. Why experiments need to be designed? • Designing simple but good experiments doesn’t require difficult mathematics. • Instead, experimental design is more about common sense, knowledge, insight, and careful planning. • It does require a certain type of common sense, and there are some basic rules.

  10. Basic Principles • Statistical design of experiments refers to the process of planning the experiment so that appropriate data that can be analyzed by statistical methods will be collected, resulting in valid and objective conclusions. • The statistical approach to experimental design is necessary if we wish to draw meaningful conclusions from the data.

  11. Basic Principles • When the problem involves data that are subject to experimental errors, statistical methodology is the only objective approach to analysis. • Thus, there are two aspects to any experimental problem: the design of the experiment, and the statistical analysis of the data.

  12. Basic Principles The three basic principles of experimental design are • Replication • Randomization, and • Blocking

  13. Replication • By replication, we mean a repetition of the basic experiment. • In the quenching example, one replicate consists of treating a specimen in each medium. Thus, if 5 specimens are treated in each medium, we have 5 replicates.

  14. Replication has two important properties • Firstly, it allows the experimenter to obtain an estimate of the experimental error. This estimate of error becomes a basic unit of measurement for determining whether observed differences in the data are really statistically different.

  15. Replication has two important properties • Secondly, if the sample mean is used to estimate the effect of a factor in the experiment, replication permits the experimenter to obtain a more precise estimate of this effect. • There is an important distinction between replication and repeated measurements: repeated measurements of the same specimen reflect only measurement variability, whereas replicates also capture run-to-run experimental variability.

  16. Randomization • The cornerstone underlying the use of statistical methods in experimental design. • By randomization, we mean that both the allocation of the experimental material and the order in which the individual runs or trials of the experiment are performed are randomly determined.

  17. Randomization • Statistical methods require that the observations (or errors) be independently distributed random variables. • Randomization usually makes this assumption valid. • By properly randomizing the experiment, we also assist in “averaging out” the effects of extraneous factors that may be present. • Proper randomization also helps keep systematic bias out of the experimental results.
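As a minimal sketch of randomization for the quenching example (the seed, coupon counts, and treatment labels are illustrative), both the allocation of coupons and the run order can be determined by a single random shuffle:

```python
import random

rng = random.Random(42)   # fixed seed only so the plan is reproducible

# 5 coupons per medium -> 10 runs total
runs = ["oil"] * 5 + ["saltwater"] * 5
rng.shuffle(runs)         # randomizes both allocation and run order here

for i, medium in enumerate(runs, start=1):
    print(f"run {i:2d}: quench in {medium}")
```

Running the oil quenches first and the saltwater quenches second, by contrast, would confound the media with any drift in the process over time.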

  18. Blocking • Blocking is a design technique used to improve the precision with which comparisons among the factors of interest are made. • Often blocking is used to reduce or eliminate the variability transmitted from nuisance factors; that is, factors that may influence the experimental response but in which we are not directly interested. • Generally, a block is a set of relatively homogeneous experimental conditions.
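A small illustrative sketch of blocking (the alloy-batch names are hypothetical): each batch of material serves as a block, every treatment appears in every block, and the treatment order is randomized within each block:

```python
import random

rng = random.Random(7)
batches = ["batch_A", "batch_B", "batch_C"]   # blocks: relatively homogeneous material
treatments = ["oil", "saltwater"]

plan = {}
for batch in batches:
    order = treatments[:]   # both treatments appear in every block...
    rng.shuffle(order)      # ...but their order within the block is randomized
    plan[batch] = order

for batch, order in plan.items():
    print(batch, "->", order)
```

Because every treatment is compared within each batch, batch-to-batch variability no longer inflates the comparison between the quenching media.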

  19. Formal experiments are … • Cons • extremely expensive (time & money) • usually not representative of the real world (cf. natural observation, field studies, surveys) • Pros • highly controlled • replicable • sometimes the only way to measure small effects or to identify interactions

  20. Designed experiments are used to … • address a research question • test a hypothesis or a model

  21. Definitions: • Independent Variable- the variable over which the experimenter has direct control and which is purposely manipulated to test a hypothesis (presence vs. absence, amount, type) • Dependent Variable- the variable that is measured as the outcome (the response)

  22. Definitions • Factor, Treatment- a controlled variable in an experiment (fixed or random) • Level- a particular setting of a factor • Main effect- the overall effect of an independent variable on the response, averaged across the levels of the other factors • Random error- variability in the response not attributable to the controlled factors

  23. Definitions • within subjects experiment - each subject receives all of the treatments • between subjects experiment - subjects are randomly divided into groups, and different groups receive different treatments • asymmetrical transfer - when the effect of doing A then B is different from the effect of doing B then A

  24. Definitions • confounding - when the effect of one variable has not been separated from the effect of another variable • control group - a group that does not receive a treatment • factorial design - a designed experiment in which two or more independent variables are studied simultaneously

  25. Guidelines for Designing an Experiment • Recognition of and statement of the problem • Choice of factors, levels, and ranges • Selection of the response variable • Choice of experimental design • Performing the experiment • Statistical analysis of the data • Conclusions and recommendations

  26. Guidelines for Designing an Experiment Notes: • Steps 1, 2 & 3 are pre-experimental planning • In practice, steps 2 & 3 are often done simultaneously or in reverse order

  27. Recognition of and statement of the problem • It is usually helpful to prepare a list of specific problems or questions that are to be addressed by the experiment. • A clear statement of the problem often contributes substantially to better understanding of the phenomenon being studied and the final solution of the problem.

  28. Recognition of and statement of the problem 3. There are many possible objectives of an experiment, including i) confirmation ii) discovery iii) stability

  29. Choice of Factors, Levels, and Range • When considering the factors that may influence the performance of a process or system, the experimenter usually discovers that these factors can be classified as either potential design factors or nuisance factors. • Some useful classifications are design factors, held-constant factors, and allowed-to-vary factors.

  30. Choice of Factors, Levels, and Range 3. Design factors are the factors actually selected for the study in the experiment 4. Held-constant factors are variables that may exert some effect on the response, but for the purposes of the present experiment these factors are not of interest, so they will be held at a specific level.

  31. Choice of Factors, Levels, and Range 5. Nuisance factors may have large effects that must be accounted for. 6. Nuisance factors are often classified as controllable, uncontrollable, or noise factors. 7. A controllable nuisance factor is one whose levels may be set by the experimenter. 8. Blocking is often useful in dealing with controllable nuisance factors.

  32. Choice of Factors, Levels, and Range 9. If a nuisance factor is uncontrollable in the experiment, but it can be measured, an analysis procedure called the analysis of covariance can often be used to compensate for its effect. 10. A noise factor is a factor that varies naturally and uncontrollably in the process but can be controlled for the purposes of an experiment. 11. The objective is usually to find the settings of the controllable design factors that minimize the variability transmitted from the noise factors.

  33. Selection of the Response Variable • In selecting the response variable, the experimenter should be certain that this variable really provides useful information about the process under study • It is usually critically important to identify issues related to defining the responses of interest and how they are to be measured before conducting the experiment.

  34. Choice of Experimental Design • If the pre-experimental planning activities are done correctly, this step is relatively easy. • Choice of design involves the consideration of the sample size (no. of replicates), the selection of a suitable run order for the experimental trials, and the determination of whether or not blocking or other randomization restrictions are involved.

  35. Performing the Experiment • When running the experiment, it is vital to monitor the process carefully to ensure that everything is being done according to plan. • Errors in experimental procedure at this stage will usually destroy the experimental validity. • Prior to conducting the experiment a few trial runs or pilot runs are often helpful.

  36. Statistical Analysis of the Data • Statistical methods should be used to analyze the data so that results and conclusions are objective rather than judgmental in nature. • If the experiment has been designed correctly, and if it has been performed according to the design, the statistical methods required are not elaborate.

  37. Statistical Analysis of the Data 3. It is usually very helpful to present the results of many experiments in terms of an empirical model, that is, an equation derived from the data that expresses the relationship between the response and the important design factors. 4. Residual analysis and model adequacy checking are also important analysis techniques.
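In the simplest case, an empirical model is a least-squares fit, and residual analysis means examining the leftover deviations for patterns. The (factor level, response) pairs below are invented purely for illustration:

```python
# Hypothetical (factor level, response) data
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.0, 8.0]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx               # least-squares estimates
intercept = my - slope * mx

residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
print(round(slope, 3), round(intercept, 3))   # empirical model y ~ 0.05 + 1.98 x
print([round(r, 3) for r in residuals])       # residuals should show no pattern
```

If the residuals trend with x, grow with the fitted values, or contain outliers, the empirical model is inadequate and should be revised before conclusions are drawn.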

  38. Conclusions & Recommendations • Once the data have been analyzed, the experimenter must draw practical conclusions about the results and recommend a course of action. • Follow-up runs and confirmation testing should also be performed to validate the conclusions from the experiment.

  39. Fractional Factorial Designs • Number of trials gets very large as one increases the number of factors & levels • higher order interactions are actually quite rare • therefore, it makes sense to confound the higher order interactions • example: 2^(5-1) fractional factorial design
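As a sketch (the defining relation I = ABCDE is the standard choice for this half fraction, though the slide does not specify one), a 2^(5-1) design can be generated by writing a full 2^4 design in factors A-D and aliasing E with the ABCD interaction:

```python
from itertools import product

# Full 2^4 design in A-D; E is set equal to the ABCD interaction (E = ABCD),
# which confounds E with a high-order interaction assumed to be negligible.
design = [(a, b, c, d, a * b * c * d) for a, b, c, d in product((-1, 1), repeat=4)]

print(len(design))   # 16 runs instead of the 32 a full 2^5 design would need
for run in design:
    print(run)

# Every run satisfies the defining relation I = ABCDE:
assert all(a * b * c * d * e == 1 for a, b, c, d, e in design)
```

The price of halving the run count is aliasing: each main effect is confounded with a four-factor interaction, which is acceptable precisely because such high-order interactions are rare.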

  40. Interaction An interaction exists when the effect of one variable depends on the level of another variable • Example: a 2x2 factorial design has 7 possible patterns of significant effects (A, B, and the A×B interaction, alone or in any combination)
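A small numeric sketch of this definition (the cell means are invented): in a 2x2 design the effect of A is estimated at each level of B, and the interaction is half the difference between those two conditional effects:

```python
# Hypothetical cell means for a 2x2 factorial; keys are (level of A, level of B)
y = {("low", "low"): 10.0, ("high", "low"): 14.0,
     ("low", "high"): 12.0, ("high", "high"): 20.0}

effect_A_at_B_low = y[("high", "low")] - y[("low", "low")]      # 4.0
effect_A_at_B_high = y[("high", "high")] - y[("low", "high")]   # 8.0

main_A = (effect_A_at_B_low + effect_A_at_B_high) / 2           # 6.0
interaction_AB = (effect_A_at_B_high - effect_A_at_B_low) / 2   # 2.0

# Nonzero interaction: the effect of A depends on the level of B
print(main_A, interaction_AB)
```

When the interaction is large, reporting the main effect of A alone is misleading, since no single number describes what A does at both levels of B.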

  41. A nice way to specify a design: “The experiment was a within-subjects 5 × 3 × 3 factorial, repeated-measures design: 10 subjects × 5 limb conditions × 3 target amplitudes × 3 target widths × 5 blocks × 20 trials per amplitude-width condition = 45,000 total trials”

  42. Some basic rules … • You should always think you know what you’re going to find BEFORE you run the experiment (which doesn’t mean that you are always right, only that you have a hypothesis) • Everything that is tested statistically should also be graphed • If your graphs and your stat analysis don’t CLEARLY agree, something is wrong

  43. Some basic rules…. • You should always know exactly how you are going to analyze your data BEFORE you collect it. (the statistical methods) • Remember the difference between statistical significance and the magnitude of the effect

  44. Summary: Using Statistical Techniques in Experimentation • Use your non-statistical knowledge of the problem • Keep the design and analysis as simple as possible. • Recognize the difference between practical & statistical significance. • Experiments are usually iterative.

  45. Use your non-statistical knowledge of the problem • Experimenters are usually highly knowledgeable in their fields. • This type of non-statistical knowledge is invaluable in choosing factors, determining factor levels, deciding how many replicates to run, interpreting the results of the analysis, and so forth. • Using statistics is no substitute for thinking about the problem.

  46. Keep the design & analysis as simple as possible • Don’t be overzealous in the use of complex, sophisticated statistical techniques. • Relatively simple design and analysis methods are almost always best. • If you do the design carefully and correctly, the analysis will always be relatively straightforward.

  47. Recognize the difference between practical & statistical significance • Just because two experimental conditions produce mean responses that are statistically different, there is no assurance that this difference is large enough to have any practical value.

  48. Experiments are usually iterative • Remember that in most situations it is unwise to design too comprehensive an experiment at the start of a study. • Successful design requires knowledge of the important factors, the ranges over which these factors are varied, the appropriate number of levels for each factor, and the proper methods and units of measurement for each factor and response.
