
Internal Validity



  1. Research methodology (459500) • Lecture 11 • Tuesday 14/10/2008 • Dr Jihad ABDALLAH • Source: Research Methods Knowledge Base, http://www.socialresearchmethods.net/

  2. Internal Validity • It is the approximate truth about inferences regarding cause-effect or causal relationships. • Thus, internal validity is only relevant in studies that try to establish a causal relationship; it is not relevant in most observational or descriptive studies. • For studies that assess the effects of social programs or interventions, internal validity is perhaps the primary consideration. In those contexts, you would like to be able to conclude that your program or treatment made a difference.

  3. But there may be lots of reasons, other than your program, that could have caused the difference. • The key question in internal validity is whether observed changes can be attributed to your program or intervention (i.e., the cause) and not to other possible causes (sometimes described as "alternative explanations" for the outcome). • All that internal validity means is that you have evidence that what you did in the study (i.e., the program) caused what you observed (i.e., the outcome) to happen.

  4. Research Design • Research design can be thought of as the structure of research: it provides the glue that holds the research project together. • A design is used to structure the research, to show how all of the major parts of the research project -- the samples or groups, measures, treatments or programs, and methods of assignment -- work together to try to address the central research questions.

  5. What are the "elements" that a design includes? • Observations or Measures: These are symbolized by an 'O' in design notation. An O can refer to a single measure (e.g., a measure of body weight), a single instrument with multiple items (e.g., a 10-item self-esteem scale), a complex multi-part instrument (e.g., a survey), or a whole battery of tests or measures given out on one occasion. If you need to distinguish among specific measures, you can use subscripts with the O, as in O1, O2, and so on.

  6. Treatments or Programs: These are symbolized with an 'X' in design notations. The X can refer to a simple intervention (e.g., a one-time surgical technique) or to a complex program (e.g., an employment training program). Usually, a no-treatment control or comparison group has no symbol for the treatment (some researchers use X+ and X- to indicate the treatment and control respectively). As with observations, you can use subscripts to distinguish different programs or program variations.

  7. Groups: Each group in a design is given its own line in the design structure. If the design notation has three lines, there are three groups in the design. • Assignment to Group: designated by a letter at the beginning of each line (i.e., group) that describes how the group was assigned. The major types of assignment are: R = random assignment, N = nonequivalent groups, C = assignment by cutoff. • Time: Time moves from left to right. Elements that are listed on the left occur before elements that are listed on the right.

  8. Random Selection & Assignment • Selection is how you draw the sample of people for your study from a population. • Assignment is how you assign the sample that you draw to different groups or treatments in your study. • Example: Let's say you drew a random sample of 100 clients from a population list of 1000 current clients of your organization. That is random sampling. Now, let's say you randomly assign 50 of these clients to get some new additional treatment and the other 50 to be controls. That's random assignment.

  9. Random selection is related to sampling. Therefore it is most related to external validity (generalizability) of your results. After all, we would randomly sample so that our research participants better represent the larger group from which they're drawn. • Random assignment is most related to design. In fact, when we randomly assign participants to treatments we have, by definition, an experimental design. Therefore, random assignment is most related to internal validity. After all, we randomly assign in order to help assure that our treatment groups are similar to each other (i.e., equivalent) prior to the treatment.
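As a rough illustration of the distinction, here is a minimal Python sketch following the example above (the client IDs and variable names are invented for illustration):

    import random

    random.seed(42)  # fixed seed so the illustration is reproducible

    # Random selection: draw 100 clients from a population of 1000
    population = list(range(1, 1001))          # hypothetical client IDs
    sample = random.sample(population, 100)    # random sampling

    # Random assignment: split the drawn sample into treatment and control
    random.shuffle(sample)
    treatment_group = sample[:50]   # get the new additional treatment
    control_group = sample[50:]     # serve as controls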

  10. “Pretest-posttest (or before-after) treatment versus comparison group” randomized experimental design.
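In the design notation introduced above, this design has one line per randomly assigned (R) group, with a pretest observation, the treatment X for the program group, and a posttest observation:

    R O X O
    R O   O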

  11. Types of Designs • A randomized experiment is generally the strongest of the three broad design types (randomized experiments, quasi-experiments, and non-experiments) when your interest is in establishing a cause-effect relationship. A non-experiment is generally the weakest in this respect.

  12. Two-group designs without random assignment are called "nonequivalent" because we do not explicitly control the assignment, so the groups may be nonequivalent (not similar to each other). • The simplest form of non-experiment is a one-shot survey design that consists of nothing but a single observation O. This is probably one of the most common forms of research and, for some research questions -- especially descriptive ones -- is clearly a strong design.

  13. Experimental Designs

  14. Two-Group Experimental Designs • The two-group posttest-only randomized experiment is the simplest of all experimental designs. • In design notation, it has two lines -- one for each group -- with an R at the beginning of each line to indicate that the groups were randomly assigned. • One group gets the treatment or program (the X) and the other group is the comparison group and doesn't get the program.
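Written out, following the element definitions above:

    R X O
    R   O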

  15. “Pretest-posttest (or before-after) treatment versus comparison group” randomized experimental design. (Analysis of Covariance Design ANCOVA) The ANCOVA design is a noise-reducing experimental design. It "adjusts" posttest scores for variability on the covariate (pretest).
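A minimal simulation sketch of that adjustment, assuming a hypothetical 5-point program effect and using statsmodels' formula API for the covariate adjustment (all names and numbers here are illustrative):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 100  # hypothetical sample size, 50 per group

    # Simulated data: posttest tracks the pretest plus a treatment effect
    pretest = rng.normal(50, 10, n)
    treatment = np.repeat([0, 1], n // 2)  # random assignment assumed
    posttest = pretest + 5 * treatment + rng.normal(0, 5, n)

    df = pd.DataFrame({"pretest": pretest, "treatment": treatment,
                       "posttest": posttest})

    # ANCOVA: posttest "adjusted" for variability on the pretest covariate
    model = smf.ols("posttest ~ pretest + treatment", data=df).fit()
    print(model.params["treatment"])  # estimated treatment effect, noise-reduced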

  16. Completely Randomized Design (CRD)

  17. Factorial Designs

  18. Parallel lines indicate that there is no interaction effect in this case.

  19. A change in scale: an interaction effect exists when differences on one factor depend on which level of another factor you are at.

  20. A change in rank indicates a strong interaction effect.
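To make the idea concrete, here is a minimal sketch of a hypothetical 2x2 factorial (the cell means and sample sizes are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical 2x2 factorial: true cell means chosen so that the effect
    # of factor A depends on the level of factor B (an interaction)
    cell_means = {(0, 0): 5, (0, 1): 5, (1, 0): 7, (1, 1): 3}

    # Simulate 25 observations per cell and compute the observed cell means
    means = {}
    for cell, mu in cell_means.items():
        means[cell] = (mu + rng.normal(0, 1, 25)).mean()

    # With no interaction, the A effect is the same at both levels of B
    # (parallel lines); here the effects differ in sign (a change in rank)
    effect_of_A_at_B0 = means[(1, 0)] - means[(0, 0)]   # roughly +2
    effect_of_A_at_B1 = means[(1, 1)] - means[(0, 1)]   # roughly -2
    interaction = effect_of_A_at_B0 - effect_of_A_at_B1  # far from 0
    print(effect_of_A_at_B0, effect_of_A_at_B1, interaction)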

  21. Randomized Block Designs • They require that the researcher divide the sample into relatively homogeneous subgroups or blocks (analogous to "strata" in stratified sampling). Then, the experimental design you want to implement is implemented within each block or homogeneous subgroup. • The key idea is that the variability within each block is less than the variability of the entire sample. Thus each estimate of the treatment effect within a block is more efficient than estimates across the entire sample.
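A minimal sketch of the blocking-then-randomizing step (the blocking variable and participant IDs are hypothetical):

    import random

    random.seed(7)

    # Hypothetical participants, each with a blocking variable (their school)
    participants = [("p%02d" % i,
                     random.choice(["school A", "school B", "school C"]))
                    for i in range(30)]

    # Divide the sample into relatively homogeneous blocks
    blocks = {}
    for pid, school in participants:
        blocks.setdefault(school, []).append(pid)

    # Then implement the experimental design within each block:
    # randomize to treatment and control within every block separately
    assignments = {}
    for school, members in blocks.items():
        random.shuffle(members)
        half = len(members) // 2   # odd blocks put the extra person in control
        for pid in members[:half]:
            assignments[pid] = "treatment"
        for pid in members[half:]:
            assignments[pid] = "control"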

  22. Quasi-experimental designs

  23. The Nonequivalent Groups Design • The Non-Equivalent Groups Design (NEGD) is probably the most frequently used design in social research. • It is structured like a pretest-posttest randomized experiment, but it lacks the key feature of the randomized designs -- random assignment. • In the NEGD, we most often use intact groups that we think are similar as the treatment and control groups. • In education, we might pick two comparable classrooms or schools. In community-based research, we might use two similar communities. • We try to select groups that are as similar as possible so we can fairly compare the treated one with the comparison one. But we can never be sure the groups are comparable.

  24. It's unlikely that the two groups would be as similar as they would be if we had assigned them randomly. Because the groups are often likely to be nonequivalent, this design was named the nonequivalent groups design.
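Because assignment is not controlled, the design is written with an N (nonequivalent) at the start of each group's line:

    N O X O
    N O   O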

  25. Regression-discontinuity design (RD) • In its simplest, most traditional form, the RD design is a pretest-posttest program-comparison group strategy. • The unique characteristic that sets RD designs apart from other pre-post group designs is the method by which research participants are assigned to conditions. • In RD designs, participants are assigned to program or comparison groups solely on the basis of a cutoff score on a pre-program measure.

  26. The RD design has not been used frequently in social research. The most common implementation has been in compensatory education evaluation, where school children who obtain scores below some predetermined cutoff value on an achievement test are assigned to remedial training designed to improve their performance. • The "basic" RD design is a pretest-posttest two-group design. The term "pretest-posttest" implies that the same measure (or perhaps alternate forms of the same measure) is administered before and after some program or treatment. • In fact, the RD design does not require that the pre and post measures are the same.

  27. In the design notation: • C indicates that groups are assigned by means of a cutoff score, • an O stands for the administration of a measure to a group, • an X depicts the implementation of a program, • and each group is described on a single line (i.e., program group on top, control group on the bottom).
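Putting these symbols together, the basic RD design is written:

    C O X O
    C O   O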

  28. No treatment or program is administered

  29. A treatment or program is administered that improves each treated person's score by 10 points.
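A minimal simulation of this scenario (the cutoff value, score distribution, and noise level are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical RD setup: participants scoring below the cutoff on the
    # pre-program measure are assigned to the program (e.g., remedial training)
    cutoff = 50
    pretest = rng.normal(50, 10, 200)
    treated = (pretest < cutoff).astype(float)

    # Posttest tracks the pretest; the program adds a constant 10-point effect
    posttest = pretest + 10 * treated + rng.normal(0, 5, 200)

    # Regressing posttest on pretest and treatment recovers the jump
    # (discontinuity) in posttest scores at the cutoff
    X = np.column_stack([np.ones(200), pretest, treated])
    coef, *_ = np.linalg.lstsq(X, posttest, rcond=None)
    print(round(coef[2], 1))  # close to 10: the estimated program effect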
