
AP Statistics


Presentation Transcript


  1. AP Statistics 5.2 Designing Experiments

  2. Learning Objective: • Design a comparative experiment and a completely randomized experiment • Understand the 3 principles of experimental design • Understand cautions about experimentation

  3. Definitions • experimental units – individuals on which the experiment is done • subjects – human experimental units • treatment – specific experimental condition applied to the units

  4. • explanatory variable – attempts to explain the observed response • response variable – measures an outcome of the experiment • factors – explanatory variables in an experiment • level – specific value of a factor

  5. • placebo – “dummy” treatment/no treatment; e.g., getting a sugar pill instead of the actual medicine • control group – group that receives the dummy treatment • bias – systematically favoring certain outcomes

  6. • randomization – use of chance to divide experimental units into groups • statistically significant – an observed effect so large that it would rarely occur by chance • placebo effect – subjects respond favorably to receiving any treatment, even the dummy treatment

  7. Example: 5.27 p. 268 • Answer: The liners are the experimental units. The heat applied to the liners is the factor; the levels are 250°F, 275°F, 300°F, and 325°F. The force required to open the package is the response variable.

  8. Example: 5.29 p. 268 • The experimental units are the batches of the product; the yield of each batch is the response variable. • There are two factors: temperature (with 2 levels) and stirring rate (with 3 levels), for a total of 6 treatments. • Since two experimental units will be used for each of the 6 treatments, we need 12 batches.
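
A minimal sketch of how the 6 treatments in 5.29 arise as combinations of the factor levels, and how 12 batches could be assigned two per treatment at random. The specific temperature and stirring-rate values are hypothetical, since the exercise only states the number of levels.

```python
import itertools
import random

# Hypothetical factor levels: the exercise only says 2 temperatures and 3 stirring rates.
temperatures = ["low temp", "high temp"]
stirring_rates = ["60 rpm", "90 rpm", "120 rpm"]

# Each treatment is one combination of a temperature level and a stirring-rate level.
treatments = list(itertools.product(temperatures, stirring_rates))
print(len(treatments))  # 6 treatments

# Two batches per treatment -> 12 experimental units, assigned at random.
batches = list(range(1, 13))
random.shuffle(batches)
assignment = {t: [batches.pop(), batches.pop()] for t in treatments}
for treatment, units in assignment.items():
    print(treatment, "-> batches", units)
```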

  9. Simple Experiment • treatment → observe response ***problems with lurking variables (such as the placebo effect) since there is no control group to establish a “baseline” measurement***

  10. Comparative Experiment ***problems with bias in assigning individuals to treatment groups; e.g., putting all of the more sickly patients in the control group so the treatment group looks better***

  11. Randomized Comparative Experiment

  12. Principles of Experimental Design • Control the effects of lurking variables (use a control group) • Randomization – randomly allocate units to treatment groups (so you’re not “stacking the deck” in favor of one particular treatment) • Replication – take a large enough group size so that you can see the results repeatedly (this reduces the chance variation in the results; therefore, results are more likely to be statistically significant)
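
A minimal sketch of the randomization step in a completely randomized design, assuming a hypothetical roster of 20 subjects split evenly between a treatment group and a control group.

```python
import random

# Hypothetical roster of 20 subjects for a completely randomized design.
subjects = [f"Subject {i}" for i in range(1, 21)]

random.shuffle(subjects)          # chance, not judgment, decides the groups
treatment_group = subjects[:10]   # first half receives the treatment
control_group = subjects[10:]     # second half receives the placebo/control

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```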

  13. Statistical Significance • How much difference in the responses of the control and treatment groups is enough to be convincing that the treatment really works? Experiments give good evidence to establish cause-and-effect, but the outcomes of randomized experiments do depend on chance. We cannot say that every difference in responses is due to the treatment. Some differences in response would appear even if we used the same treatment on all subjects, because subjects are not exactly alike. By randomly assigning subjects to groups, we eliminate systematic differences between the groups, but there will still be chance variation. It is through statistical inference, using the laws of probability, that we decide whether the observed difference in responses between the control and treatment groups is too large to have occurred as a result of chance alone. Such a difference is called statistically significant.
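
A simulation sketch of this idea: re-randomize the group labels many times and see how often a difference at least as large as the observed one appears by chance alone. The response values below are made up purely for illustration; a real analysis would use a formal inference procedure.

```python
import random

# Hypothetical responses (made-up numbers purely for illustration).
treatment = [23, 27, 31, 25, 29, 30, 28, 26]
control   = [21, 24, 22, 25, 23, 26, 20, 24]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(treatment) - mean(control)

# Re-randomization: shuffle all responses, split them into two groups of the
# original sizes, and record the difference in means each time.
pooled = treatment + control
count_as_extreme = 0
n_reps = 10_000
for _ in range(n_reps):
    random.shuffle(pooled)
    fake_diff = mean(pooled[:len(treatment)]) - mean(pooled[len(treatment):])
    if fake_diff >= observed_diff:
        count_as_extreme += 1

# A small proportion suggests the observed difference would rarely occur
# by chance alone, i.e. it is statistically significant.
print("Observed difference:", observed_diff)
print("Proportion at least as large by chance:", count_as_extreme / n_reps)
```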

  14. Cautions about Experimentation • Lack of blindness – if the experimenter assessing the responses to treatments knows which treatment was given, he/she may bias the results (consciously or unconsciously); if a subject knows that they got the placebo treatment, they may under-report their response • Lack of realism – the subjects, treatments, or setting of an experiment may not realistically duplicate the conditions we want to study

  15. Example: 5.38 p. 279 • Answer: Because the experimenter knew which subjects had learned the meditation techniques, he (or she) may have had some expectations about the outcome of the experiment: if the experimenter believed that meditation was beneficial, he may subconsciously rate that group as being less anxious.

  16. Other Experimental Designs • Double-blind – neither the subjects nor the people (evaluators) measuring the response to treatments know which treatment the subject got; helps eliminate bias

  17. Matched Pairs Design • choose pairs of units that are as closely alike as possible and randomly select which one of each pair receives the treatment, OR give one subject/unit both treatments in random order (the subject serves as their own control)
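
A short sketch of the second version of matched pairs described above, where each subject receives both treatments and chance decides the order. The subject names and treatment labels are hypothetical.

```python
import random

# Hypothetical subjects; each one will receive both Treatment A and Treatment B.
subjects = ["Ana", "Ben", "Cara", "Dev", "Elle", "Finn"]

for subject in subjects:
    # A coin flip decides which treatment this subject receives first.
    order = ["A", "B"] if random.random() < 0.5 else ["B", "A"]
    print(f"{subject}: first {order[0]}, then {order[1]}")
```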

  18. Paired Design • Each subject is exposed to both treatments

  19. Block design – units are separated into blocks (groups of units that are known before the experiment to be similar in some way that is expected to affect the response to the treatments, e.g., blocking by gender or race); random assignment of units to treatments is carried out separately within each block
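
A sketch of randomization within blocks, assuming a hypothetical study blocked by gender with one treatment group and one control group; the random assignment is carried out separately inside each block.

```python
import random

# Hypothetical blocks: units grouped by a variable expected to affect the response.
blocks = {
    "female": ["F1", "F2", "F3", "F4", "F5", "F6"],
    "male":   ["M1", "M2", "M3", "M4", "M5", "M6"],
}
groups = ["Treatment", "Control"]

# Random assignment to treatments is done separately within each block.
for block_name, units in blocks.items():
    random.shuffle(units)
    half = len(units) // 2
    print(block_name, groups[0], units[:half])
    print(block_name, groups[1], units[half:])
```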
