
Final Study Guide Research Design

Presentation Transcript


  1. Final Study Guide: Research Design

  2. Experimental Research

  3. Experimental Research • Researchers manipulate the independent variable (at least 2 levels) • And measure the other variable (the dependent variable) • Give a treatment to participants and observe whether it causes changes in behavior • Compare the experimental group (with treatment) to a control group (no treatment) • Can say the IV caused the change in the DV

  4. Independent Variable • The variable whose impact you want to know • The ‘stimulus’ or ‘input’ variable • The variable you manipulate in experimental research

  5. Dependent Variable • The variable whose changes you want to know • The variable you measure • The ‘outcome’ or ‘response’ variable

  6. Random Selection • A way to choose your study sample • Every member of the population has an equal chance of being selected • Random Assignment • A way to assign participants in the sample to the various treatment conditions (each group receives a different level of the IV) • Every member of your sample has an equal chance of being assigned to any treatment group
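
A minimal Python sketch of the difference between the two (standard library only; the population of 1,000 people and the group sizes are invented for illustration):

    import random

    population = list(range(1, 1001))        # e.g., 1,000 people we could study

    # Random SELECTION: every member of the POPULATION has an equal
    # chance of ending up in the sample.
    sample = random.sample(population, 40)

    # Random ASSIGNMENT: every member of the SAMPLE has an equal chance
    # of landing in either treatment condition (level of the IV).
    random.shuffle(sample)
    treatment_group = sample[:20]            # receives the treatment
    control_group = sample[20:]              # receives no treatment

    print(len(treatment_group), len(control_group))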

  7. Internal Validity • The ability of your research design to adequately test your hypothesis • In an experiment, showing that variation in the I.V. CAUSED the variation in the D.V. • In a correlational study, showing that changes in the value of the criterion variable relate solely to changes in the value of the predictor variable

  8. Confounding • Whenever 2 or more variables combine in a way that their effects cannot be separated = confounding • Thus, the teaching method study as designed lacks internal validity • You don’t know whether the change in the DV came from the IV or from a confounding variable

  9. Quasi-experimental research • Uses naturally occurring conditions (the IV changes on its own) • No control over other variables influencing behavior (confounding variables) • Another variable that changed along with the variable of interest may have caused the observed effect • NO random assignment

  10. Non-Experimental Research

  11. Non-experimental Correlational research • Determine whether 2 or more variables are associated • If so, establish the direction and strength of the relationship • Observe variables as they are; you can’t manipulate them

  12. Research design (Manipulate IV? / Random Assignment?) • Experimental (Causal): yes / yes • Quasi-experimental: yes / no • Non-experimental (Correlational, Predictive, Descriptive): no / no

  13. Causal - (Experimental) • One variable directly or indirectly influences another. • Correlational - (Non-experimental) • Changes in one variable accompany changes in another. • A relationship exists, but you don’t know whether either variable actually influences the other.

  14. TERMS • Population: the universe/entire set of people you want to draw conclusions about • Sample: a subset of the population; the people actually in your study • Sampling error: differences between the sample & the population

  15. Sampling • Drawing a subgroup from a population (vs. a census, which measures the entire population)

  16. Probability vs. Non-probability Sampling • Probability sampling (population info available): simple random, systematic random, stratified random, cluster • Non-probability sampling (population info not available): convenience, snowball, quota, purposive
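
A short Python sketch contrasting three of the probability methods above (standard library only; the 1,000-member population, the sample size of 50, and the 60/40 strata are made up for illustration):

    import random

    population = list(range(1000))           # population info IS available

    # Simple random: every member has an equal chance of selection.
    simple = random.sample(population, 50)

    # Systematic random: pick a random starting point, then take every k-th member.
    k = len(population) // 50
    start = random.randrange(k)
    systematic = population[start::k][:50]

    # Stratified random: sample within each subgroup (stratum) so the
    # sample keeps the strata's proportions (here 60% / 40%).
    stratum_a, stratum_b = population[:600], population[600:]
    stratified = random.sample(stratum_a, 30) + random.sample(stratum_b, 20)

    print(len(simple), len(systematic), len(stratified))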

  17. Representativeness & Generalizability • Representativeness = resemblance to the population’s characteristics • Generalizability = the ability to generalize the results of your study to the whole population • High representativeness = high generalizability • Probability sampling allows higher representativeness than non-probability sampling

  18. External Validity • The degree to which results can be extended beyond the limited research setting • Generalizability • Depends on the sample and setting (e.g., rats, college students, whites, males, lab settings)

  19. Non-Probability Sampling

  20. Convenience Sampling • Take whoever in the population is readily available • Low representativeness / generalizability

  21. Quota Sampling • Predetermine the proportion of groups in the sample (e.g., male 50%, female 50%)

  22. Conceptualization & Operationalization • Idea → Clarification → Conceptualization → Operationalization

  23. Operationalization • From a complex variable to a series of simpler variables • Redefining a variable in terms of the steps needed to measure it • Conceptual definition → Operational definition • What the researcher must do to MEASURE it

  24. Types of Measurement Validity • Judgmental: face validity, content validity • Empirical (criterion-related): predictive, concurrent, convergent, discriminant

  25. “O = T + E” rule • Observed score = True score + Error • Observed = the measured score (the result you get) • True = the “true”, actual, exact state • Error = measurement error
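
A small Python simulation of the rule (purely illustrative; the true score of 80 and the error spread of 5 are invented): each observed score is the fixed true score plus random measurement error, so averaging many observations pulls the estimate back toward the true score.

    import random

    true_score = 80                               # T: the actual, exact state

    def observe():
        error = random.gauss(0, 5)                # E: random measurement error
        return true_score + error                 # O = T + E

    observations = [observe() for _ in range(100)]
    print(round(sum(observations) / len(observations), 2))   # close to 80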

  26. Reliability of a Measure • The degree to which a measure (score, observation) is free from error • A reliable measure has little or no error

  27. Types of Reliability • Interobserver (interrater) reliability • Test-retest reliability • Parallel-forms reliability • Split-half reliability
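
As one concrete case, split-half reliability correlates scores on one half of the items with scores on the other half. A hedged Python sketch (the 6-item response matrix is invented, and the Spearman-Brown step is the standard correction for halving a test, not something stated on the slide):

    from statistics import mean

    # Each row = one respondent's answers to 6 items (invented data).
    scores = [
        [4, 5, 3, 4, 5, 4],
        [2, 1, 2, 2, 1, 3],
        [5, 4, 5, 4, 4, 5],
        [3, 3, 2, 3, 2, 2],
        [1, 2, 1, 2, 2, 1],
    ]

    # Total the odd-numbered items and the even-numbered items separately.
    odd = [sum(row[0::2]) for row in scores]
    even = [sum(row[1::2]) for row in scores]

    def pearson(x, y):
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    r_half = pearson(odd, even)
    # Spearman-Brown correction estimates reliability of the full-length test.
    split_half_reliability = 2 * r_half / (1 + r_half)
    print(round(split_half_reliability, 2))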

  28. Inter-rater Agreement • Consistency between measurements by two or more observers • Different observers watch the same sample of behavior • Compute the proportion of observations on which both observers recorded the same behavior as happening: # agreements / (# agreements + # disagreements), i.e., # agreements / total # of observations • Training is needed for the observers
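
A minimal Python sketch of this computation (the two observers' records are invented; 1 = behavior recorded as happening in that interval, 0 = not):

    # One entry per observation interval for each observer (invented data).
    observer_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    observer_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

    agreements = sum(a == b for a, b in zip(observer_1, observer_2))
    agreement = agreements / len(observer_1)   # agreements / total observations
    print(agreement)                           # 0.8 for these records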

  29. Increasing reliability • Increase the number of items on your questionnaire (no 1- or 2-item measures) • Write clear, well-written survey items • Standardize administration procedures • Treat all participants alike • Keep timing, procedures, and instructions the same • Score the survey carefully; avoid errors

  30. Valid and Reliable • A good measurement • Measures what it should measure in a consistent way

  31. Reliable but Invalid • Your measurement is consistent, but not measuring what it is supposed to measure

  32. Research Report Structure • Abstract • Introduction • Method • Results • Discussion • References
