
Ecological validity



Presentation Transcript


  1. Ecological validity • Strategy: forming a research approach • The validity of research findings: • Internal Validity • External Validity • “Ecological” Validity Experimental Design & sampling

  2. Basic experimental design, control, and context Group Assignment | Baseline (pre-test) | Experimental condition | Follow-up (post-test): Group 1: Observe1 → Treatment 1 → Observe2; Group 2: Observe1 → Treatment 2 → Observe2; Group 3: Observe1 → Control → Observe2. Experimental Design & sampling
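
As a rough illustration of the layout above, here is a minimal Python sketch; the group labels, treatment names, and "Observe" placeholders simply mirror the slide and do not come from any actual study data.

```python
# Hypothetical sketch of the three-group pretest / treatment / posttest layout.
# Labels are illustrative placeholders taken from the slide, not real measures.
design = {
    "Group 1": {"pretest": "Observe1", "condition": "Treatment 1", "posttest": "Observe2"},
    "Group 2": {"pretest": "Observe1", "condition": "Treatment 2", "posttest": "Observe2"},
    "Group 3": {"pretest": "Observe1", "condition": "Control",     "posttest": "Observe2"},
}

for group, cells in design.items():
    print(f"{group}: {cells['pretest']} -> {cells['condition']} -> {cells['posttest']}")
```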

  3. The Basics of Research Design • Strategy: forming a research approach • The validity of research findings: • Internal Validity • External Validity • “Ecological” Validity Experimental Design & sampling

  4. The Basics of Research Design • Strategy: forming a research approach • Basic research designs • The validity of research findings Experimental Design & sampling

  5. The Research flow • What is being studied? • What is the contrast space? • What is compared to what? • What needs explaining / what is “given”? Why? E X A M P L E Why do college students have riskier sex after drinking alcohol? What is the contrast space? What are you really asking? • College students versus non-students their age? • College students versus high school students? • College students versus adults in the community? Experimental Design & sampling

  6. Research flow: Contrast space • What is being studied? • What is the contrast space? • What is compared to what? • What needs explaining / what is “given” Why? E X A M P L E Why do college students have riskier sex after drinking alcohol? What is the contrast space? What are you really asking? • Risky sex versus safe sex? • Risky sex versus no sex? • Risky sex versus some other risk, such as driving… Experimental Design & sampling

  7. Research flow: Contrast space, 2 • What is being studied? • What is the contrast space? • What is compared to what? • What needs explaining / what is “given” Why? E X A M P L E Why do college students have riskier sex after drinking alcohol? What is the contrast space? What are you really asking? • Drinking alcohol versus not drinking? The choice of contrast space defines your Independent Variable • Alcohol v. drugs • A lot of alcohol v. a low / moderate amount… Experimental Design & sampling

  8. Research flow: Theory • What is known about the core hypothetical constructs? • How do you propose they relate to each other? • How will the study expand or clarify theory? • Use existing theory to explain a new phenomenon (“Divergent” use of theory)? • Test contrasting theories of one phenomenon (“Convergent” use of theory)? • New / expanded theory? Experimental Design & sampling

  9. Research flow: Theory • What is known about the core hypothetical constructs? • How do you propose they relate to each other? E X A M P L E Why do college students have riskier sex after drinking alcohol? [Path diagram: College environment + disinhibiting effect of alcohol → relaxed norms, impulsiveness → sexual risk] Experimental Design & sampling

  10. Research flow: Theory • What is known about the core hypothetical constructs? • How do you propose they relate to each other? E X A M P L E Why do college students have riskier sex after drinking alcohol? [Path diagram: College environment + disinhibiting effect of alcohol → relaxed norms, impulsiveness → sexual risk] The college environment is common to everyone – it is a Constant. We will not test it, but it is a core part of our theory. Experimental Design & sampling

  11. Research flow: Theory • What is known about the core hypothetical constructs? • How do you propose they relate to each other? E X A M P L E Why do college students have riskier sex after drinking alcohol? [Path diagram: College environment + disinhibiting effect of alcohol → relaxed norms, impulsiveness → sexual risk] The disinhibiting effect of alcohol is the variable we are interested in: is there really a biological disinhibiting effect, or do people just expect there to be? This is our Independent Variable. Experimental Design & sampling

  12. Research flow: Theory • What is known about the core hypothetical constructs? • How do you propose they relate to each other? • How will the study expand or clarify theory? • Use existing theory to explain a new phenomenon (“Divergent” use of theory)? • Test contrasting theories of one phenomenon (“Convergent” use of theory)? • New / expanded theory? E X A M P L E Why do college students have riskier sex after drinking alcohol? [Path diagram: College environment + disinhibiting effect of alcohol → relaxed norms, impulsiveness → sexual risk] Relaxed norms & impulsiveness is a mediating variable: we hypothesize that environment + alcohol → relaxed norms & impulsivity, which then leads to risk. Experimental Design & sampling

  13. Research flow: Theory • What is known about the core hypothetical constructs? • How do you propose they relate to each other? • How will the study expand or clarify theory? • Use existing theory to explain a new phenomenon (“Divergent” use of theory)? • Test contrasting theories of one phenomenon (“Convergent” use of theory)? • New / expanded theory? E X A M P L E Why do college students have riskier sex after drinking alcohol? [Path diagram: College environment + disinhibiting effect of alcohol → relaxed norms, impulsiveness → sexual risk] Sexual risk is, of course, our outcome or Dependent Variable. Experimental Design & sampling

  14. Research flow • What variables best represent the hypothetical constructs? • What is your prediction about how they are related? • Measurement design? • Quasi-experiment? • True experiment? • How have the variables been operationally defined? • Alternative operational definitions? • Implications of this operational definition? • Is the predictor best measured or manipulated? • Virtues / limitations of each approach? • Sampling? Experimental Design & sampling

  15. Research flow • What variables best represent the hypothetical constructs? • What is your prediction about how they are related? • Measurement design? • Quasi-experiment? • True experiment? • How have the variables been operationally defined? • Alternative operational definitions? • Implications of this operational definition? • Is the predictor best measured or manipulated? • Virtues / limitations of each approach? • Sampling? E X A M P L E Why do college students have riskier sex after drinking alcohol? • Hypotheses: • Real alcohol leads to more risk than placebo drinks where we just expect alcohol [Independent variable] • Alcohol will lead to risk by inducing relaxation & impulsivity [Mediating variable] Experimental Design & sampling

  16. Research flow • What variables best represent the hypothetical constructs? • What is your prediction about how they are related? • Measurement design? • Quasi-experiment? • True experiment? • How have the variables been operationally defined? • Alternative operational definitions? • Implications of this operational definition? • Is the predictor best measured or manipulated? • Virtues / limitations of each approach? • Sampling? E X A M P L E Why do college students have riskier sex after drinking alcohol? • Operational definitions: • Simple placebo design: real v. taste of alcohol [Experimental manipulation] • Relaxation & impulsivity questionnaires [Measured variable] Experimental Design & sampling

  17. Basic Designs: Methods cont. Who are your participants? • What is your sampling method: Probability or Non-Probability sample? • What is your sampling frame? Where do you recruit participants? • Does your sample represent the population? Is your study externally valid? How will you form your control group? • Can you practically & ethically have one? • Are you using existing groups? • Can participants self-select into a group? • Is random assignment or matching feasible? How will you present the independent variable? • Simple presence v. absence? • Different doses? Experimental Design & sampling

  18. Overview What do we want to know? Why? Contrast space: what is compared to what? Hypothetical constructs? How are they related? Specific predictions? Operational definitions? Experimental Design & sampling

  19. Internal validity • Strategy: forming a research approach • The validity of research findings: • Internal Validity • External Validity • “Ecological” Validity Experimental Design & sampling

  20. Basics of Design: Internal Validity Internal Validity: Can we validly determine what is causing the results of the experiment? General Research Hypothesis: the experimental outcome (values of the Dependent Variable) is caused only by the experiment itself (Independent Variable). Confound: a “3rd variable” (unmeasured variable other than the Independent Variable) actually led to the results. Core Design Issues: • Appropriate control group • Equivalent experimental & control groups (except for the Independent Variable). Experimental Design & sampling

  21. Key threats to internal validity 1. Lack of a control group [One-group design: Observe1 → Treatment → Observe2] Maturation: Participants may be older / wiser by the post-test. History: Cultural or historical events may occur between pre- and post-test that change the participants. Mortality: Participants may non-randomly drop out of the study. Regression to baseline: Participants who are more extreme at baseline look less extreme over time as a statistical confound. Reactive Measurement: Participants may change their scores due to being measured twice, not the experimental manipulation. These processes lessen External Validity. 2-group design: if groups differ in one of these there is a core confound. Experimental Design & sampling
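
Regression to baseline is easy to see in a quick simulation. The sketch below is purely illustrative: the trait mean, noise level, sample size, and the cutoff of 70 are arbitrary assumptions, not values from the lecture.

```python
import random

random.seed(1)

# Simulate two noisy measurements of the same stable trait (true score + noise).
true_scores = [random.gauss(50, 10) for _ in range(10_000)]
pretest  = [t + random.gauss(0, 10) for t in true_scores]
posttest = [t + random.gauss(0, 10) for t in true_scores]

# Select participants who look "extreme" at baseline (arbitrary cutoff of 70).
extreme = [i for i, score in enumerate(pretest) if score > 70]

mean_pre  = sum(pretest[i]  for i in extreme) / len(extreme)
mean_post = sum(posttest[i] for i in extreme) / len(extreme)

# With no treatment at all, the extreme group's mean drifts back toward 50.
print(f"baseline mean of extreme group:  {mean_pre:.1f}")
print(f"follow-up mean of the same people: {mean_post:.1f}")
```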

  22. Key threats to Internal Validity, 2 2. Non-equivalent groups • Participant selection • Experimental procedures Confound: Personal characteristics differ X group; Expectations differ X group (non-blind); Self-selection out differs X group E X A M P L E • Participants are in an alcohol study, so: • We have them self-select which group to be in • We let them know which group they are in • We encourage members of the alcohol group to drop out if they are uncomfortable Experimental Design & sampling

  23. Key threats to Internal Validity, 2 2. Non-equivalent groups • Participant selection • Experimental procedures Confound: Groups are not blind; The experimenter is not blind; Procedures differ X group E X A M P L E • Participants can just figure out whether they got alcohol or not… • One person fixes the drinks and runs the study • Since some participants are drinking, we treat the treatment group more carefully than the control group Experimental Design & sampling

  24. Threats to Internal Validity, details • Non-equivalent groups; Group Assignment • Self-selection: people have different motives for joining / dropping out of experimental vs. control groups • Self-selection in: rare, but a substantial confound if present • Self-selection out: common in behavioral studies, e.g., more drop out from the experimental group in health interventions • Existing groups: may differ in subtle ψ variables • Convenience samples: e.g., 9am class v. 11am class, NYC v. Chicago • Groups that express the phenomenon: those who seek therapy v. not; more / less extreme scores at baseline • Cures: • Random assignment to experimental v. control groups. • Assess and Match participants on potential confounding variables (demographics, Ψ variables). Experimental Design & sampling
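
The two cures can be sketched in a few lines of Python. The participant records, group sizes, and the choice of age as the matched confound are hypothetical, and this is only one simple way to pair-match, not the only one.

```python
import random

random.seed(42)

# Hypothetical participant records with one potential confounding variable (age).
participants = [{"id": i, "age": random.randint(18, 30)} for i in range(20)]

# Cure 1: random assignment to experimental vs. control groups.
random.shuffle(participants)
experimental, control = participants[:10], participants[10:]

# Cure 2: matching -- sort on the confound, form adjacent pairs, and randomly
# split each pair across groups so the groups are balanced on age by design.
by_age = sorted(participants, key=lambda p: p["age"])
matched_exp, matched_ctl = [], []
for pair in zip(by_age[0::2], by_age[1::2]):
    pair = list(pair)
    random.shuffle(pair)
    matched_exp.append(pair[0])
    matched_ctl.append(pair[1])

def mean_age(group):
    return sum(p["age"] for p in group) / len(group)

print("random assignment :", mean_age(experimental), mean_age(control))
print("matched assignment:", mean_age(matched_exp), mean_age(matched_ctl))
```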

  25. Internal validity; procedures • Non-equivalent groups; Procedures Keep all conditions equal (“constant”) across experimental & control groups except the IV. • Participants blind • Equalize (“control”) expectations & motivations x group • Control drop-out, loss to follow-up • Experimenter blind (participants blind + experimenter blind = Double blind) • Control explicit bias • Control self-fulfilling expectations • Standardize / automate the experimental process • Procedures exactly equal across groups • Procedures equal across pre- and post-test Experimental Design & sampling

  26. Summary: Internal validity Internal Validity overview: • Are results due to something other than the Independent Variable? •  Confounds within the experiment • Procedural differences x group • Biased assignment to group. •  Confounds from outside the experiment • History, maturation, cultural change etc. • within single-group study • differences x group in multi-group study Experimental Design & sampling

  27. External validity • Strategy: forming a research approach • The validity of research findings: • Internal Validity • External Validity • “Ecological” Validity Experimental Design & sampling

  28. Generalizability External Validity: Can we validly generalize from this experiment to the larger world? How well can we generalize to: • The larger population • Other settings • Other conditions (independent variables) • Other outcomes (dependent variables) Experimental Design & sampling

  29. External validity: larger population How well does your research sample represent the larger population? • “Volunteerism” bias: people who volunteer for research may not be typical of the general population (attitudes, motivations, responses to financial incentives) • Convenience sampling: easily available, rather than systematically sampled, participants (college class, shopping mall, bar, street corner…, personal social network); bias by self-selection. Cure: Random selection maximizes external validity by best representing the population. We will spend several lectures on sampling later on. Experimental Design & sampling

  30. Clickers! Random selection is… The way Chicago voters elect aldermen The way people are selected for the experimental v. control groups The way participants are assigned to experimenters The way people are selected from the population to be in a research sample. Experimental Design & sampling

  31. Clickers, 2 Random assignment is… The way blind dates seem to work The way people are selected for the experimental v. control groups The way participants are assigned to experimenters The way people are selected from the population to be in a research sample. Experimental Design & sampling

  32. Random selection v. assignment • Key distinction: • Random selection: from a larger population to the research sample. • Random assignment: from the sample to experimental v. control groups. Experimental Design & sampling
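
A minimal Python sketch of the distinction; the population size, sample size, and group names are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical sampling frame: everyone in the population of interest.
population = [f"person_{i}" for i in range(10_000)]

# Random SELECTION: draw the research sample from the population
# (bears on external validity: does the sample represent the population?).
sample = random.sample(population, 100)

# Random ASSIGNMENT: split that sample into experimental vs. control groups
# (bears on internal validity: are the groups equivalent except for the IV?).
random.shuffle(sample)
experimental_group, control_group = sample[:50], sample[50:]

print(len(experimental_group), len(control_group))
```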

  33. External validity: settings How representative (or realistic) is the social and cultural context of the research? • Context: is the setting similar to “real life” settings, or are the results specific to this laboratory, questionnaire, etc.? • Procedures: are results an “artifact” of a particular procedure, experimenter, or place or setting? Cures: • Replicate the study by different researchers, in different setting(s), with different samples. • Converging studies that test the same hypotheses with substantially different methods • Field v. lab studies • Experimental v. non- (or quasi-) experimental methods. • Qualitative v. quantitative approaches Experimental Design & sampling

  34. External validity: other conditions (independent variables) How representative is the Independent Variable / experimental manipulation? • “Modeling” the phenomenon: does the experimental condition or manipulation create the state you want it to? Did we really induce stress, mood, motivation…? • Dose of the IV: e.g., drug dose, psychotherapy intensity Cures: • Manipulation check • Dose-response studies Experimental Design & sampling

  35. External validity: other outcomes (dependent variables) How representative is the Dependent Variable / outcome measure? • Operationalization: does the assessment of the DV reflect how the process works outside of the lab? • Construct validity: Are you modeling the hypothetical construct you intended? How well have you captured a specific Ψ process? Cures: • Standardized measures, e.g., depression, stress… • Psychometric studies: is the measure valid and reliable? Experimental Design & sampling
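
One common index of a questionnaire's reliability, not named on the slide but standard in psychometric work, is Cronbach's alpha. A minimal sketch with made-up item scores:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, all over the same respondents."""
    k = len(item_scores)
    total_scores = [sum(resp) for resp in zip(*item_scores)]
    item_variance_sum = sum(pvariance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_variance_sum / pvariance(total_scores))

# Hypothetical 4-item relaxation / impulsivity questionnaire, 6 respondents
# (made-up numbers, purely for illustration).
items = [
    [3, 4, 2, 5, 4, 3],
    [2, 4, 2, 5, 5, 3],
    [3, 5, 1, 4, 4, 2],
    [2, 3, 2, 5, 4, 3],
]
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```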

  36. External validity: summary External Validity: Can we validly generalize from this experiment to the larger world? • The research Sample: Is the sample typical of the larger population? • The research Setting (the study structure & context): Is this typical of “real world” settings where the phenomenon occurs? • The Independent Variable: Does the experimental manipulation (or measured predictor) actually create (validly assess…) the phenomenon you are interested in? • The Dependent Variable: Is the outcome measure representative, valid & reliable? Experimental Design & sampling

  37. Generalizability example E X A M P L E Hypothesis: Evaluation anxiety disrupts learning. Core design elements (external validity areas): • Independent Variable: anxiety manipulation; class task described as I.Q. test v. non-evaluative exercise • Dependent Variable: operational def. of “learning”; abstract memory task • Sample: UIC students • Setting: Classroom situation Experimental Design & sampling

  38. Generalizability: population and context E X A M P L E UIC students tested in class: how well do the results generalize to… • Larger population(s): other young people, other Americans, people in general • Across contexts: other University settings, other structured settings, “real world” situations such as at work Experimental Design & sampling

  39. Generalizability: Independent & dependent variables E X A M P L E “I.Q.” instructions & abstract memory task. External validity: do the results generalize to… • Other forms of anxiety (the IV): “natural” anxiety, other forms of stress, other instructions • Other outcomes (DV): less structured learning tasks, other cognitive tasks, job or other performance… Experimental Design & sampling

  40. Generalizability E X A M P L E How well do these data generalize to… • Sample (UIC students): the larger population? • Setting (classroom; the study structure & context): other social or learning settings? • Ind. Var. (“IQ test” instruction): other “anxiety” conditions? • Dep. Var. (abstract memory task): other cognitive skills or tasks? Experimental Design & sampling

  41. Generalizability: general research results Each element of external validity helps determine how meaningful research results are. External Validity: how well can we generalize to: • The larger population • Other settings • Other conditions (independent variables) • Other outcomes Experimental Design & sampling

  42. “Ecological” validity • Strategy: forming a research approach • The validity of research findings: • Internal Validity • External Validity • “Ecological” Validity Experimental Design & sampling

  43. Ecological Validity (context effects) How valid is the larger context or “ecology” of the research? • The research setting & culture • The researcher • The participant Experimental Design & sampling

  44. Ecological Validity (context effects) Potential conflicts between the researcher and the participant: • Biosocial [age, race, gender, status...]: inherent social conflicts? • Psychosocial [attitudes, warmth, skills...]: cooperation? communication to participants? • Situational [e.g., physician, teacher as researcher]: prior relationship or 'dual role' situation; mutual comfort? Experimental Design & sampling

  45. Ecological Validity, 2: Researcher Effects Possible Biases from The Researcher • Self-fulfilling expectations (verbal or non-verbal). • Rosenthal experiment: “smart” v. “dumb” rats & maze learning. • Education research: powerful effects of teacher expectations on student performance. • Biased procedures or handling of participants. • Clinical research: differential handling of “cases”. • Mental health research: more extreme diagnosis for minorities / lower SES pts. • Biased data recording: quantitative and qualitative • Non-random errors in data coding or entry • Confirmatory biases in recall. Experimental Design & sampling

  46. Researcher effects; cures Possible Biases from The Researcher: Cures • Randomize experimenters across condition • match or stratify experimenter x participant • 'unknown' experimenters • Blinding of experimenter(s) • double blind • Not informing 'hands on' researcher (when blinding impossible) • Aggressive standardization & automation Experimental Design & sampling
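
One simple way to implement "randomize experimenters across condition" is a rotation over a shuffled session list. The experimenter labels, condition names, and session count below are invented for illustration; this is a sketch of one balancing scheme, not a prescribed procedure.

```python
import random
from itertools import cycle

random.seed(7)

# Hypothetical experimenters, conditions, and testing sessions.
experimenters = ["experimenter_A", "experimenter_B", "experimenter_C"]
conditions = ["treatment", "control"]
sessions = [f"session_{i}" for i in range(12)]

# Shuffle the session order, then rotate experimenters and conditions through it
# so every experimenter runs both conditions about equally often.
random.shuffle(sessions)
schedule = list(zip(sessions, cycle(experimenters), cycle(conditions)))

for session, experimenter, condition in schedule:
    print(session, experimenter, condition)
```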

  47. Ecological Validity; Participant biases (participant effects) • Motivation to be a 'good' (or bad) subject • Social desirability responding • Primarily for personal information • Cultural & personal differences in what is considered “personal” • Face to face v. computer assessment • Changes in response over time (Doll et al.; risk disclosure). • Infer hypothesis or enrollment criteria (correctly or incorrectly) • HIV vaccine research: • Risky men lied to get into “low risk” vaccine cohorts, then showed HIV infections. Did the vaccine itself “cause” infections? • Reactive risk behavior: concentrated in men who believed they received the vaccine. Experimental Design & sampling

  48. Ecological Validity; Participant biases, cures (participant effects, 3) Cures: • Blinding participants • Constancy of procedures: • automation or structured protocol • training researchers • Deception or concealment of hypothesis • Diverse sampling of participants • Computer assessment Experimental Design & sampling

  49. Ecological Validity; The Research setting: context and people • Social context powerfully affects “individual” behavior: • Zimbardo prison experiment, Rosenthal & psychiatric settings • Medical context and health measures; e.g., "white coat" effect • Self-awareness → “norm following” • Context and informational availability: • Minimalist social psychology experiments & social judgments. • Survey / interview measures and uni-dimensional responding • Political / economic demands and simple bias or fraud • Drug Co. research and “cherry picking” positive results • Political pressure for “No Child Left Behind” & the Houston Miracle Experimental Design & sampling

  50. Ecological Validity; The Research setting: reactive measurement • Taking the same measure twice may change responses • Awareness of the hypothesis → response biases • Book example: responses to erotic stimulus (among “helpful” participants). • due to the stimulus itself? • self-generated imagery to conform to the experimental demand? • simulated response? • Research measures can create attitude change • Survey questions & “normalizing”… When do you feel it is O.K. to cheat on an exam? ..when I really do not know the material .. when others are doing it .. when I think the exam is unfair • Political “push” polls • How much do you think Obama is destroying America? • How much will Obamacare bankrupt America, economically and morally? Experimental Design & sampling
