
Validity and Reliability



  1. Validity and Reliability How to identify errors and bias, and eliminate them from your study.

  2. Overall Approach • What is the difference between the research intent (from the Introduction/Discussion) and the research study (from the Method/Results)? • Construct Validity (accurate variables) • External Validity (generalizing to other participants, settings, and times) • Internal Validity (establishing cause and effect) • Reliability (ascertaining the level of error)

  3. Construct Validity Definition • Definition – Degree to which the study measures and manipulates the underlying psychological elements that the researcher claims to be measuring and manipulating. (1) What does the Introduction/Discussion say about the definition of the CONSTRUCT? (2) What does the Method/Results say about the operationalization of the VARIABLE?

  4. Construct Validity Discrepancy if… • Experimenters • Bias (purposeful or unintended) • Measures/Manipulations • Does not map onto the construct • Overly weak, or overly strong • Ambiguous • Confounds • Lacks experimental realism • Participants • Misunderstand or misinterpret • Figure out the purpose of the study • Respond with compliance and/or reactance

  5. Construct Validity Questions to ask… • Experimenters • Were they trained? • Are they blind / double-blind? • Measures/Manipulations • Is the measure/manipulation consistent with definitions of the construct? • What psychological states does the manipulation produce? • Was there a manipulation check? • Are more or better control conditions needed? • Participants • What was the cover story? • Is there a way the participants could know the true purpose of the study?

  6. External Validity Definition • Definition – Degree to which the results could be generalized to different participants, settings, and times. (1) What does the Introduction/Discussion say about the intended POPULATION? (2) What does the Method/Results say about the study SAMPLE?

  7. External Validity Discrepancy if… • Experimenters • Bias (purposeful or unintended) • Measures/Manipulations • Artificial situation • No random assignment • Participants • Small number • Not representative • No random sampling (the sampling/assignment distinction is sketched below)
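Because “no random sampling” and “no random assignment” are easy to conflate, here is a minimal sketch of the distinction; the names and group sizes are hypothetical. Random sampling determines who enters the study (and so bears on external validity), while random assignment determines which condition each participant receives (and so bears on internal validity).

```python
import random

random.seed(0)
population = [f"person_{i}" for i in range(10_000)]  # hypothetical population

# Random SAMPLING: draw participants from the population of interest,
# so the sample can represent that population (external validity).
sample = random.sample(population, k=100)

# Random ASSIGNMENT: shuffle the sample and split it into conditions,
# so the groups are equivalent before the manipulation (internal validity).
random.shuffle(sample)
treatment, control = sample[:50], sample[50:]
```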

  8. External Validity Questions to ask… • Experimenters • Measures/Manipulations • Would the results match real-world settings? • Can you identify a difference between the research setting and real-life settings, and give a specific reason why this difference would prevent the results from applying to real life? • Participants • What are the characteristics of the sample? • How do these compare to the characteristics of the population? • Would the results apply to the average person? • Were the participants distinct in some way? • Were the participants too homogeneous? • Were only certain types of individuals included in the study? • Was the dropout rate high, or high among certain groups? • Is there any specific reason to suspect that the results would not apply to a different group of participants?

  9. Internal Validity Definition • Definition – Degree to which the study demonstrates a cause-and-effect relationship between variables. (1) What does the Introduction/Discussion say about the intended CONSTRUCTS? (2) What does the Method/Results say about the study MANIPULATIONS?

  10. Internal Validity Discrepancy if… • Same as “Construct” and “External,” because internal validity is a smaller subset of both • Plus, issues specifically about the experimental manipulations, such as… • Pre-manipulation differences between groups • No counterbalancing

  11. Internal Validity Questions to ask… • Same as “Construct” and “External,” because internal validity is a smaller subset of both • Plus, now… • Did the conditions differ before the study began? • Was there counterbalancing? (a counterbalancing sketch follows below)
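Counterbalancing can be sketched concretely. Below is a minimal illustration (the condition labels are placeholders, not from the slides) of a balanced Latin square, which gives every condition each serial position equally often and, when the number of conditions is even, has every condition preceded by every other condition equally often:

```python
def balanced_latin_square(conditions):
    """Return one condition order per participant row; each condition
    occupies each serial position equally often (fully balanced for an
    even number of conditions)."""
    n = len(conditions)
    # Classic first row: 0, 1, n-1, 2, n-2, 3, ...
    first, lo, hi = [0, 1], 2, n - 1
    while len(first) < n:
        first.append(hi)
        hi -= 1
        if len(first) < n:
            first.append(lo)
            lo += 1
    # Every later row shifts each entry by +1 (mod n)
    return [[conditions[(x + r) % n] for x in first] for r in range(n)]

for order in balanced_latin_square(["A", "B", "C", "D"]):
    print(order)  # ['A', 'B', 'D', 'C'], ['B', 'C', 'A', 'D'], ...
```

Assigning successive participants to successive rows spreads sequence effects evenly across conditions.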

  12. All types of Validity – Confounds (1/3) • Maturation • Changes in the DV due to normal maturation of the participant • History • Changes in the DV due to historical events unrelated to the study • Testing • Changes in the DV that are a function of having been tested previously • Instrumentation • Changes in the calibration of the measuring instrument or procedure • Regression to the mean • The tendency for participants who are selected because they have extreme scores on a variable to be less extreme at a follow-up testing (simulated below).
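Regression to the mean follows directly from treating an observed score as a true score plus random error. A minimal simulation (all numbers illustrative) shows participants selected for extreme Time-1 scores drifting back toward the mean at Time 2, with no intervention at all:

```python
import numpy as np

rng = np.random.default_rng(0)
true_score = rng.normal(100, 10, size=10_000)        # stable individual differences
time1 = true_score + rng.normal(0, 10, size=10_000)  # measurement error at Time 1
time2 = true_score + rng.normal(0, 10, size=10_000)  # fresh, independent error at Time 2

extreme = time1 > np.percentile(time1, 95)           # select the top 5% at Time 1
print(time1[extreme].mean())  # well above 100 (true extremity plus lucky error)
print(time2[extreme].mean())  # closer to 100, because the error does not repeat
```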

  13. All types of Validity – Confounds (2/3) • Attrition • The loss of participants during a study; those who leave are different from those who stay • Mortality • When participants die; those who are lost differ from those who remain • Demoralization • When participants become bored or exhausted with the study • Diffusion of Treatment • A change in the responses of participants in a particular condition because of information they gained about other research conditions • Sequence effects • Effects on a participant’s performance in later conditions that result from the experience the participant had in previous conditions of the study.

  14. All types of Validity – Confounds (3/3) • Selection • Any factor that creates groups that are not equivalent at the beginning of the study, such as volunteers essentially “self-selecting” themselves into your study • Social-desirability concerns • When people respond inaccurately, or decline to respond, because of privacy concerns or a positive self-image • Interactions • When multiple confounds co-occur and influence each other • Rivalry • When participants or groups compete with each other to score well • Reactance/Compliance • When participants think they know the purpose of the study and respond either in line with how they think you want them to act, or in the opposite manner.

  15. Reliability • Definition – Producing stable and consistent scores that are not strongly influenced by random error. (1) What is the statistical CRITERION for acceptable reliability (determined by disciplinary norms and by you)? (2) Do your measures/manipulations EXCEED this threshold? This concerns only the “Results” section.

  16. Reliability Why is there error? • Research would not be needed if there were no variability among the members of a population. • You want that variability to be due to true differences among participants, not to bias or error. • There will nevertheless be some bias and error any time you conduct a study.

  17. Reliability What is the error? • Experimenter error • Unintended, unsystematic variations • Measure/Manipulation error • No measurement tool can perfectly capture the underlying construct, so there is always some measurement error. • Participant error • People are sometimes poor at accurate reflection and accurate self-reporting (this true-score-plus-error idea is formalized below)
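One standard way to formalize this true-differences-plus-error idea is classical test theory (not named on the slide, but consistent with it): each observed score is a true score plus independent random error, and reliability is the proportion of observed variance due to true scores.

```latex
X = T + E, \qquad \operatorname{Cov}(T, E) = 0
\qquad\Rightarrow\qquad
\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X}
           = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}
```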

  18. Reliability Types of reliability… • Internal consistency • the extent to which scores on the items of a scale correlate with each other • Interrater reliability • the extent to which the ratings of two or more judges correlate with each other • Test-retest reliability • the extent to which scores on the same measure, administered at two different times, correlate with each other • Split-half reliability • the extent to which scores on the two halves of a measure, administered at the same time, correlate with each other • Equivalent-forms reliability • the extent to which scores on similar, but not identical, measures administered at two different times correlate with each other (two of these are computed in the sketch below)
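Two of these are straightforward to compute. A minimal NumPy sketch (the data layout is an assumption: rows are participants, columns are items) for internal consistency via Cronbach’s alpha and for test-retest reliability via a Pearson correlation:

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency: alpha = k/(k-1) * (1 - sum of item variances
    / variance of the total score), for a participants-by-items array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def test_retest(time1, time2):
    """Test-retest reliability: Pearson r between two administrations."""
    return np.corrcoef(time1, time2)[0, 1]
```

Split-half reliability can be estimated the same way by correlating the two halves of a single administration (usually corrected upward with the Spearman-Brown formula), and interrater reliability by correlating the judges’ ratings.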

  19. Reliability Types of validity… • Is there convergent validity? • the extent to which a measured variable is related to other measured variables designed to measure the same conceptual variable • Is there discriminant validity? • the extent to which a measured variable is unrelated to measured variables designed to measure other conceptual variables • Is there criterion validity? • the extent to which a self-report measure correlates with a behavioral measured variable • Is there predictive validity? • the extent to which a self-report measure correlates with a future behavior • Is there concurrent validity? • the extent to which a self-report measure correlates with a behavior measured at the same time. (see the correlation-matrix sketch below)
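Convergent and discriminant validity reduce to a pattern of correlations. A minimal simulated sketch (all variables and values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
construct = rng.normal(size=500)                 # the latent conceptual variable
measure_a = construct + rng.normal(0, 0.5, 500)  # two measures of the same construct
measure_b = construct + rng.normal(0, 0.5, 500)
unrelated = rng.normal(size=500)                 # a measure of a different construct

r = np.corrcoef([measure_a, measure_b, unrelated])
print(r[0, 1])  # convergent validity: should be high
print(r[0, 2])  # discriminant validity: should be near zero
```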

  20. Advanced Sources… • For individualized information about… • Validity: See Albright, L., & Malloy, T. E. (2000). Experimental validity: Brunswik, Campbell, Cronbach, and enduring issues. Review of General Psychology, 4, 337-353. • Reliability: See the Spring “Statistics” class • For information about both Validity and Reliability… • Qualitative research: Reliability and Validity in Qualitative Research, by Kirk & Miller, Sage Publications (part of the Qualitative Research Methods Series) • Quantitative research: Reliability and Validity Assessment, by Carmines & Zeller, Sage Publications (part of the Quantitative Applications in the Social Sciences Series)
