URBDP 591 A Lecture 8: Experimental and Quasi-Experimental Design

    Presentation Transcript
    1. URBDP 591 A Lecture 8: Experimental and Quasi-Experimental Design
    Objectives
    • Basic Design Elements
    • Experimental Designs
    • Comparing Experimental Design Example
    • Quasi-Experimental Designs
    • The Nature of Good Design

    2. Research Approaches

    3. Basic Design Elements
    • Time. A causal relationship implies that some time elapses between the occurrence of the cause and the consequent effect.
    • Program(s) or Treatment(s). The presumed cause may be a treatment under the explicit control of the researcher, or the occurrence of some natural or human-induced event not explicitly controlled. In design notation it is depicted with the symbol "X"; when multiple programs or treatments are being studied, subscripts such as "X1" or "X2" distinguish them.
    • Observation(s) or Measure(s). Measurements are typically depicted with the symbol "O" when the same measurement or observation is taken at every point in time in a design. If different measures are given at different times, subscripts such as "O1" or "O2" distinguish them.
    • Groups or Individuals. Typically, there will be one or more program and comparison groups. In design notation, each group is indicated on a separate line. Assignment to the group is indicated by "R" (randomly assigned), "N" (nonrandomly assigned), or "C" (assigned by a cutoff score on a measurement).
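The notation conventions above can be illustrated programmatically. The sketch below is purely illustrative; the `notation_row` helper is a hypothetical convenience, not part of any standard notation library:

```python
# Illustration of design notation: R/N/C = assignment method,
# X = treatment, O = observation; each group gets its own line.
# (notation_row is a hypothetical helper for this sketch.)

def notation_row(assignment, *events):
    """Format one group's line of design notation."""
    return "  ".join([assignment] + list(events))

# Pretest-posttest randomized design, two groups:
design = [
    notation_row("R", "O1", "X ", "O2"),   # program group
    notation_row("R", "O1", "  ", "O2"),   # control group
]
print("\n".join(design))
```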

    4. Types of Designs

    5. Experimental Designs
    A. Post-only design
    B. Pre and post design
    C. Multiple levels of single IV
    D. Multiple experimental and control groups
    E. Multiple IVs (factorial design)

    6. Post-Only Design
    • Subjects randomly assigned to experimental/control groups
    • Introduction of IV in experimental condition
    • Measurement of DV (single or multiple instances)

    7. Post-Only Design
            Randomized  IV  DV
    Group 1     R       X   O
    Group 2     R           O

    8. Pre and Post Design
    • Subjects randomly assigned to experimental/control groups
    • Preliminary measurement of DV before treatment (check on random assignment)
    • Introduction of IV in experimental condition
    • Measurement of DV (single or multiple instances)

    9. Pre- and Post- Design
            Randomized  DV  IV  DV
    Group 1     R       O1  X   O2
    Group 2     R       O1      O2

    10. Multiple Levels of Single IV
    • Subjects randomly assigned to experimental/control groups
    • Introduction of multiple levels of IV in experimental conditions
    • Measurement of DV across different conditions

    11. Multiple Levels Design
            Randomized  IV  DV
    Group 1     R       X1  O
    Group 2     R       X2  O
    Group 3     R       X3  O
    Group 4     R           O
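A multiple-levels design like the one above can be simulated to see how group means separate by treatment level. This is a minimal standard-library sketch with invented effect sizes, not data from any real study:

```python
import random
import statistics

random.seed(42)

# Hypothetical four-group design: three levels of one IV (X1, X2, X3)
# plus a no-treatment control; effect sizes are invented for illustration.
true_effect = {"X1": 2.0, "X2": 4.0, "X3": 6.0, "control": 0.0}

def run_group(label, n=30):
    # DV = baseline of 50 + treatment effect + measurement noise
    return [50 + true_effect[label] + random.gauss(0, 3) for _ in range(n)]

means = {g: statistics.mean(run_group(g)) for g in true_effect}
for g, m in means.items():
    print(f"{g:8s} mean DV = {m:.1f}")
```

Comparing each treated group's mean against the control group's mean is what the one-way ANOVA discussed later formalizes.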

    12. Multiple Experimental and Control Groups (Solomon Four-Group Design)
    • Subjects randomly assigned to experimental/control groups
    • Preliminary measurement of DV in one experimental/control pair
    • Introduction of IV in both experimental conditions
    • Measurement of DV (to assess effects of the pretest)

    13. Solomon Four-Group Design
            Randomized  DV  IV  DV
    Group 1     R       O1  X   O2
    Group 2     R       O1      O2
    Group 3     R           X   O2
    Group 4     R               O2

    14. Multiple IVs (Factorial Design)
    • Subjects randomly assigned to experimental/control groups
    • Introduction of multiple levels of IVs in experimental conditions
    • Measurement of DV across different conditions (cells)

    15. Multiple IVs Design
            Randomized  IV  IV  DV
    Group 1     R       X1  Y1  O
    Group 2     R       X2  Y2  O
    Group 3     R       X1  Y2  O
    Group 4     R       X2  Y1  O
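In a factorial design, each cell crosses one level of each IV, and a main effect is the average difference across the levels of one IV collapsed over the other. A minimal 2x2 simulation with invented effect sizes:

```python
import random
import statistics

random.seed(1)

# Hypothetical 2x2 factorial: IV "X" (levels X1, X2) crossed with
# IV "Y" (levels Y1, Y2); effects are additive and invented.
def cell(x_effect, y_effect, n=50):
    return [10 + x_effect + y_effect + random.gauss(0, 2) for _ in range(n)]

cells = {
    ("X1", "Y1"): cell(0, 0),
    ("X1", "Y2"): cell(0, 3),
    ("X2", "Y1"): cell(5, 0),
    ("X2", "Y2"): cell(5, 3),
}
for key, data in cells.items():
    print(key, round(statistics.mean(data), 1))

# Main effect of X: mean of the X2 cells minus mean of the X1 cells
x_main = (
    (statistics.mean(cells[("X2", "Y1")]) + statistics.mean(cells[("X2", "Y2")])) / 2
    - (statistics.mean(cells[("X1", "Y1")]) + statistics.mean(cells[("X1", "Y2")])) / 2
)
print("main effect of X ~", round(x_main, 1))
```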

    16. Different Statistical Tests
    Requirements for a randomized experimental design:
    • has at least two groups
    • yields two distributions (measures), each with an average and variation
    • assesses the treatment effect as the statistical difference between the groups

    17. t-test and one-way Analysis of Variance (ANOVA)

    18. The t-test

    19. Two-group posttest-only randomized experimental design: t-test or one-way ANOVA
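For the two-group posttest-only design, the t statistic compares the difference in group means to its pooled standard error. A standard-library sketch on simulated data (the effect size of 5 is invented for illustration):

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical post-only randomized design: treatment raises the DV by 5.
treated = [50 + 5 + random.gauss(0, 4) for _ in range(40)]
control = [50 + random.gauss(0, 4) for _ in range(40)]

def two_sample_t(a, b):
    """Equal-variance two-sample t statistic (pooled standard error)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))
    return (statistics.mean(a) - statistics.mean(b)) / se

t = two_sample_t(treated, control)
print(f"t = {t:.2f} with {len(treated) + len(control) - 2} df")
```

With two groups, the one-way ANOVA F statistic is just the square of this t, so the two tests give the same conclusion here.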

    20. Pretest-posttest randomized experimental design: ANCOVA
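ANCOVA for the pretest-posttest design amounts to regressing the posttest on the pretest covariate plus a treatment indicator; the indicator's coefficient is the covariate-adjusted treatment effect. A standard-library sketch (the data, the true effect of 3, and the `ols` helper are all invented for illustration):

```python
import random

random.seed(7)

# Hypothetical pretest-posttest randomized design, true treatment effect = 3.
n = 100
pre = [random.gauss(50, 10) for _ in range(2 * n)]
treat = [1] * n + [0] * n
post = [0.8 * p + 3 * d + random.gauss(0, 2) for p, d in zip(pre, treat)]

def ols(X, y):
    """Least squares via the normal equations and Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    v = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(k)]
    for col in range(k):                      # forward elimination
        for row in range(col + 1, k):
            f = A[row][col] / A[col][col]
            for c in range(k):
                A[row][c] -= f * A[col][c]
            v[row] -= f * v[col]
    beta = [0.0] * k
    for row in range(k - 1, -1, -1):          # back substitution
        beta[row] = (v[row] - sum(A[row][c] * beta[c]
                                  for c in range(row + 1, k))) / A[row][row]
    return beta

X = [[1.0, p, float(d)] for p, d in zip(pre, treat)]
intercept, b_pre, b_treat = ols(X, post)
print(f"adjusted treatment effect ~ {b_treat:.2f}")
```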

    21. Pretest-posttest randomized experimental design with interaction: ANCOVA

    22. Quasi-Experimental Design
    • Matching instead of randomization is used. For example, someone studying the effects of growth management on residential density will try to find a metropolitan area similar to the experimental metropolitan area. That other area is not technically a control group but a comparison group, and this matching strategy is sometimes called a nonequivalent group design.
    • Time series analysis is involved. A time series is perhaps the most common type of longitudinal (over-time) research found in public policy. A time series can be interrupted or noninterrupted. Both types examine changes in the dependent variable over time, but only an interrupted time series involves before-and-after measurement of an intervention.

    23. Non-equivalent Comparison Group
    • Post-only, pre-post, or multiple-treatment variants
    • No random assignment into experimental/control groups
    • Comparison groups are "created" via selection criteria or an eligibility protocol
    • Confounding variance is partialed out (statistical control)

    24. Time Series
    • Multiple observations before and after a treatment or intervention is introduced
    • Examine changes in data trends (slope and intercept)
    • Investigate effects of both onset and offset of interventions

    25. Time Series
    Group 1  N  O1  O2  X  O3
    Group 2  N  O1  O2     O3

    26. Regression Discontinuity
    • Sample separated based on some criterion (a pretest cutoff score)
    • One group administered the treatment; the other serves as the control group
    • Trends examined in both groups, hypothesized to be equivalent except at the cutoff

    27. Regression Discontinuity
    Group 1  C  O1  O2  X  O3
    Group 2  C  O1  O2     O3

    28. Advantages of Experiments
    1. Isolation of the experimental variable
    2. Allows for (relatively) easy replication
    3. Establishing causality
    4. Control: a true experiment offers the ultimate in control
    5. Longitudinal analysis: the experiment offers the opportunity to study change over time

    29. Disadvantages of Experiments
    1. Artificial environment
    2. Experimenter effect: the experimenter's expectations can affect the results of the experiment
    3. Reactivity: placing subjects in a laboratory may alter the very behavior the experimenter is trying to study
    4. Sample size: the larger the group, the more difficult it is to control extraneous variables

    30. Summary: Experiment
    • Aims to measure the effect of an IV on a DV
    • Involves the experimenter's intervention
    • Should control for experimental error (confounding variables)
    • Compares the DV produced by at least two levels (conditions) of the IV

    31. The Nature of Good Design
    1. Theory-grounded. Good research strategies reflect the theories being investigated. Where specific theoretical expectations can be hypothesized, these are incorporated into the design to improve discriminant validity and demonstrate the predictive power of the theory.
    2. Situational. Good research designs reflect the settings of the investigation. This was illustrated above, where a particular need of teachers and administrators was explicitly addressed in the design strategy.
    3. Feasible. Good designs can be implemented. The sequence and timing of events are carefully thought out, and potential problems in measurement, adherence to assignment, database construction, and the like are anticipated.
    4. Redundant. Good research designs have some flexibility built into them. Often this flexibility results from duplication of essential design features; for example, multiple replications of a treatment help ensure that failure to implement the treatment in one setting will not invalidate the entire study.
    5. Efficient. Good designs strike a balance between redundancy and the tendency to overdesign. Where it is reasonable, other, less costly strategies for ruling out potential threats to validity are utilized.