Purpose of Research Design




Presentation Transcript

1. Purpose of Research Design 1. To provide answers to research questions; 2. To control variance. Controlling variance means being able to explain or account for variance caused by the variables being studied. Kerlinger (1986), cited in Wiersma, W. Research Methods in Education: An Introduction (7th ed.). Needham Heights, MA: Allyn & Bacon, 2000.

2. Controlling Variance Variance can be expressed quantitatively as a real, non-negative number. A variance of zero indicates that all scores in a distribution are identical. Wiersma, W. Research Methods in Education: An Introduction (7th ed.). Needham Heights, MA: Allyn & Bacon, 2000.
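A minimal sketch of this point, using invented score sets: the computed variance is zero only when every score is identical, and otherwise a non-negative real number.

```python
# Illustrative scores (assumed, not from the presentation).
import numpy as np

identical = np.array([12, 12, 12, 12, 12])
spread    = np.array([8, 10, 12, 14, 16])

print(np.var(identical))  # 0.0 -> every score in the distribution is identical
print(np.var(spread))     # 8.0 -> a positive real number; scores differ
```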

3. Controlling Variance 4 Ways of Controlling Variance: 1. Randomization. 2. Building conditions or factors into the design as independent variables. 3. Holding conditions or factors constant. 4. Statistical adjustments

4. Randomization With random selection, the sample represents the population from which it was selected. With random assignment, the groups of subjects differ only on the basis of random sampling fluctuation. Wiersma (2000)

5. Randomization Random assignment without random selection poses questions of representativeness and generalizability.
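A minimal sketch of the two roles of randomization, with an invented numeric "population": random selection draws the sample, random assignment splits it into groups that should differ only by chance.

```python
import random

random.seed(1)
population = list(range(1000))        # hypothetical sampling frame

# Random selection: the sample should represent the population.
sample = random.sample(population, 40)

# Random assignment: the groups should differ only by random fluctuation.
random.shuffle(sample)
treatment, control = sample[:20], sample[20:]

print(len(treatment), len(control))   # 20 20
```

Random assignment without random selection (for example, shuffling a convenience sample into groups) still balances the groups but leaves representativeness and generalizability open, as the slide notes.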

6. Characteristics of Good Research Design: freedom from bias, freedom from confounding, control of extraneous variables, and statistical precision for testing hypotheses.

7. Measurement Considerations X = T + E (observed score = true score + measurement error): Validity, Reliability, Objectivity.

8. Validity The degree of accuracy or truthfulness of the observation. Are we accurately measuring what we purport to measure?

9. Reliability Relates to the consistency or repeatability of an observation. Can also be described as: consistency, dependability, stability, precision.

10. Reliability r_xx′ = σ²_T / σ²_X; a perfectly reliable measure yields a ratio of 1.0 (all observed-score variance is true-score variance).
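A minimal simulation of the X = T + E model and the reliability ratio above; the true-score and error distributions are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(50, 10, size=10_000)   # true scores
E = rng.normal(0, 5, size=10_000)     # random measurement error
X = T + E                             # observed scores

reliability = T.var() / X.var()       # r_xx' = var(T) / var(X)
print(round(reliability, 2))          # ~0.80; it would be 1.0 if E were all zeros
```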

12. Objectivity The degree to which different testers can obtain the same scores on the same subjects. interrater (intertester) reliability
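A minimal sketch of objectivity as interrater (intertester) reliability: correlate two raters' scores on the same subjects. The ratings below are invented.

```python
import numpy as np

rater_a = np.array([7, 5, 9, 6, 8, 4, 7, 6])
rater_b = np.array([8, 5, 9, 5, 8, 4, 6, 6])

r = np.corrcoef(rater_a, rater_b)[0, 1]
print(round(r, 2))   # ~0.93: the two testers score the subjects very similarly
```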


14. Face (Logical) Validity Condition that is claimed when the measure obviously involves the performance being measured.

15. Content Validity Condition that is claimed when a test adequately samples what was covered in the course. Used almost exclusively in educational settings.

16. Content Validity “demonstrates the degree to which the sample of items, tasks, or questions on a test are representative of some defined universe or domain of content.” –Standards for Educational and Psychological Testing, 1985, p. 10.

17. Criterion-Related Validity Evidence that a test possesses a statistical relationship with the trait being measured. Based on having a true criterion measure available; it reflects the relationship between the test and this alternate form of measurement.

18. Criterion-Related Validity Can be further subdivided: a. Predictive Validity –criterion is measured in the future. b. Concurrent Validity –criterion is measured in the same time frame as the alternate measure.

19. EXAMPLES OF CRITERION-RELATED VALIDITY--Concurrent Validity
VO2max (criterion: oxygen consumption): distance runs (e.g., 1.0-mi, 1.5-mi, 9-min, 12-min, 20-m shuttle); submaximal tests (e.g., cycle, treadmill, swimming); nonexercise models (e.g., self-reported physical activity)
Body fat (criterion: hydrostatically determined body fat): skinfolds; anthropometric measures
Sports skills (criterion: game performance, expert ratings): sport skills tests

20. EXAMPLES OF CRITERION-RELATED VALIDITY--Predictive Validity
Heart disease (criterion: heart disease developed in later life): present diet, exercise behaviors, blood pressure, family history
Success in graduate school (criterion: grade-point average or graduation status): Graduate Record Examination scores; undergraduate GPA
Job capabilities (criterion: successful job performance): physical abilities; cognitive abilities
–Morrow, J. R., A. W. Jackson, J. G. Disch, and D. P. Mood. Measurement and Evaluation in Human Performance. Champaign, IL: Human Kinetics, 1995, p. 92.
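A minimal sketch of how a concurrent validity coefficient is computed: correlate the field test with its criterion measured in the same time frame. The VO2max and 1.5-mile run-time values are invented for illustration.

```python
import numpy as np

vo2max_criterion = np.array([38.0, 44.5, 51.2, 47.8, 55.1, 41.3, 49.0, 58.4])  # ml/kg/min
run_time_min     = np.array([13.2, 11.8, 10.4, 11.0,  9.6, 12.5, 10.9,  8.9])  # 1.5-mi run

validity = np.corrcoef(vo2max_criterion, run_time_min)[0, 1]
print(round(validity, 2))   # strong negative r: faster runs go with higher VO2max
```

Predictive validity would be computed the same way, except the criterion (e.g., later heart disease or graduate GPA) is measured at a later time than the test.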

21. Construct Validity Degree to which a test measures a hypothetical construct. Usually established by relating the test to some behavior.

22. Construct Validity The highest form of validity; combines both logical and statistical evidence of validity (all validity evidence is construct validity evidence). Often used to validate measures that are unobservable yet exist in theory, e.g., IQ, attitude measures.

23. Evaluating Research Designs 4 VALIDITIES: 1. Construct Validity 2. Statistical Conclusion Validity Power (1 − β) 3. Internal Validity --did A cause B? 4. External Validity --generalizability
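A minimal simulation of the power component of statistical conclusion validity (1 − β): estimate the probability of rejecting H0 for a two-group t test. The effect size, group size, and alpha are assumptions chosen for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, effect_size, alpha, reps = 25, 0.5, 0.05, 2000
rejections = 0

for _ in range(reps):
    control   = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(effect_size, 1.0, n)
    _, p = stats.ttest_ind(treatment, control)
    if p < alpha:
        rejections += 1

print(rejections / reps)   # estimated power (1 - beta), roughly 0.4 here
```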

24. 8 Threats to Internal Validity 1. Maturation 2. Instrumentation 3. Selection 4. History 5. Testing 6. Regression 7. Mortality 8. Selection-Maturation Interaction

25. 4 Threats to External Validity 1. Reactive or interactive effects of testing. 2. Interaction of selection biases and the experimental treatment. 3. Reactive effects of experimental arrangements. 4. Multiple-treatment interference.

26. Kinds of Research Designs (3) 1. True Experimental 2. Quasi-Experimental 3. Observational

27. Pre-Experimental Designs
One-Shot Case Study:             X O
One-Group Pre-Post Design:       O X O
Static Group Comparison Design:  X O
                                   O

28. Quasi-Experimental Designs
Time-Series Design:              O1 O2 O3 O4 X O5 O6 O7 O8
Equivalent Time Samples Design:  O X O X O X O X O
Non-Equivalent Control Group:    O X O
                                 O   O

29. Quasi-Experimental Designs
Multiple Time Series Design:
O O O O X O O O O
O O O O   O O O O
Multiple Baseline (treatment introduced at staggered points across baselines):
O X O O O O O X O O O O O X O O O O O O

30. Quasi-Experimental Designs Causal Comparative ("ex post facto") --attempts to establish cause and effect by comparing two groups after the treatment has already been administered.

31. Experimental Designs
Post-Test Only Control Group Design:
R X O
R   O
Pre-Post Control Group Design:
R O X O
R O   O
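A minimal simulation of the Pre-Post Control Group Design above (R O X O / R O O): randomly assign subjects, test before and after, and compare gain scores. Every number here is an assumed value, not data from the presentation.

```python
import numpy as np

rng = np.random.default_rng(7)
n_per_group, true_effect = 30, 4.0

pre = rng.normal(50, 8, size=2 * n_per_group)          # O: pretest for all subjects
order = rng.permutation(2 * n_per_group)               # R: random assignment
treat_idx, ctrl_idx = order[:n_per_group], order[n_per_group:]

post = pre + rng.normal(0, 3, size=2 * n_per_group)    # O: posttest
post[treat_idx] += true_effect                          # X: treatment for one group only

gain_treat = post[treat_idx].mean() - pre[treat_idx].mean()
gain_ctrl  = post[ctrl_idx].mean()  - pre[ctrl_idx].mean()
print(round(gain_treat - gain_ctrl, 1))                 # estimated treatment effect, near 4.0
```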

32. Experimental Designs
Solomon 4-Group Design:
R O X O
R O   O
R   X O
R     O

33. Observational Designs (O) Historical --describing what was. Ethnographic --long-term observation in a natural setting (what is). Survey Designs: Cohort Study, Panel Study, Trend Study.
