
How Psychologists Do Research



  1. How Psychologists Do Research Chapter 2

  2. How Psychologists Do Research • What makes psychological research scientific? • Research Methods • Descriptive studies • Correlational Studies • Experiments • Evaluating the findings • Keeping the enterprise ethical

  3. Making Psychological Research Scientific • Precision • Scepticism • Reliance on empirical evidence • Willingness to make risky predictions • Openness

  4. Precision • Theories: • organized systems of assumptions that purport to explain phenomena and their interrelationships. • Hypotheses: • attempt to predict or account for a set of phenomena; specify relationships among variables, and are empirically tested. • Operational definitions: • define terms in hypotheses by specifying the operations for observing and measuring the process or phenomenon.

  5. Scepticism • Scientists do not accept ideas on faith or authority. • Scepticism means treating conclusions, both old and new, with caution.

  6. Willingness to Make “Risky Predictions” • Confirmation bias • Tendency to look for or pay attention only to information that confirms one’s own belief. • Principle of Falsifiability • A scientific theory must make predictions that are specific enough to expose the theory to the possibility of disconfirmation; that is, the theory must predict not only what will happen, but also what will not happen.

  7. Reliance on empirical evidence • A scientist relies on empirical evidence to determine whether a hypothesis is true.

  8. Openness • Scientists must be willing to tell others where they got their ideas, how they tested them and what the results were. • Peer review, publication and replication give science a built-in system of checks and balances.

  9. Descriptive Methods • Methods that yield descriptions of behaviour but not necessarily causal explanations. • Include: • Case studies. • Observational studies. • Psychological tests. • Surveys.

  10. Case Studies • A detailed description of a particular individual being studied or treated, which may be used to formulate broader research hypotheses. • More commonly used by clinicians; occasionally used by researchers.

  11. Observational Studies • Researchers carefully and systematically observe and record behaviour without interfering with behaviour. • Naturalistic observation • Purpose is to observe how people or animals behave in their natural environment. • Laboratory observation • Purpose is to observe people or animals in a more controlled setting.

  12. Psychological Tests • Procedures used to measure and evaluate personality traits, emotional states, aptitudes, interests, abilities, and values. • Psychological tests can be objective or projective. • Characteristics of a good test include: • Standardization. • Reliability. • Validity.

  13. Standardization • The test is constructed to include uniform procedures for giving and scoring the test. • In order to score tests in a standardized way, an individual’s outcome or score is compared to norms. • To establish norms, the test is given to a large group of people who are similar to those for whom the test is intended. • By having norms or established standards of performance, we know who scores low, average or high.
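
As a concrete illustration of scoring against norms, here is a minimal Python sketch. The norm mean and standard deviation are hypothetical stand-ins for figures that would come from a large standardization sample, and the percentile assumes the norm scores are roughly normally distributed.

```python
from statistics import NormalDist

# Hypothetical norms from a large standardization sample (assumed values).
NORM_MEAN = 100.0
NORM_SD = 15.0

def standardized_score(raw_score):
    """Compare an individual's raw score to the norms.

    Returns the z-score (distance from the norm mean in standard-deviation
    units) and the approximate percentile rank, assuming roughly normal norms.
    """
    z = (raw_score - NORM_MEAN) / NORM_SD
    percentile = NormalDist().cdf(z) * 100
    return z, percentile

z, pct = standardized_score(115)
print(f"z = {z:.2f}, percentile = {pct:.0f}")  # z = 1.00, percentile = 84
```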

  14. Reliability • A test is reliable when the scores obtained at one time and place are consistent with the scores obtained at another time and place.

  15. Validity • The ability of a test to measure what it was designed to measure. • Content validity • The test broadly represents the trait in question. • Criterion validity • The test predicts other measures of the same trait.

  16. Surveys • Questionnaires and interviews that ask people directly about their experiences, attitudes, or opinions. • Should have a representative sample: • A group of subjects, selected from the population for study, which matches the population on important characteristics such as age and sex. • Popular polls and surveys often rely on volunteers rather than representative samples. • This can lead to volunteer bias: volunteers may differ in important ways from those who chose not to volunteer.
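
To make the contrast with volunteer samples concrete, the sketch below draws a simple random sample so that every member of the population has an equal chance of selection. The population list is hypothetical and stands in for a real sampling frame.

```python
import random

# Hypothetical sampling frame: every member of the population of interest.
population = [f"person_{i}" for i in range(10_000)]

random.seed(42)  # fixed seed so the illustration is reproducible
representative_sample = random.sample(population, k=500)  # equal chance for everyone

print(len(representative_sample), representative_sample[:3])
```

A volunteer sample, by contrast, includes only those who opt in, which is why it may not match the population on important characteristics.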

  17. Correlational Studies • Defining a correlational study • Understanding directions of correlations • Reading Scatterplots • Evaluating Correlations

  18. Correlational Study • A descriptive study that looks for a consistent relationship between two phenomena. • Correlation • A statistical measure of how strongly two variables are related to one another. • Correlation coefficients can range from -1.0 to +1.0. • Variables • Characteristics of behaviour or experience that can be measured or described by a numeric scale; • variables are manipulated and assessed in scientific studies.
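
A correlation coefficient can be computed directly from paired measurements. The sketch below implements Pearson's r from its definition; the study-hours and exam-score data are hypothetical.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two variables (-1.0 to +1.0)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Hypothetical data: hours of study and exam scores for five students.
hours = [1, 2, 3, 4, 5]
scores = [52, 60, 63, 71, 80]
print(round(pearson_r(hours, scores), 2))  # close to +1.0: a strong positive correlation
```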

  19. Direction of Correlations • Positive correlations • An association between increases in one variable and increases in another, or decreases in one variable and decreases in another. • Negative correlations • An association between increases in one variable and decreases in another.

  20. Scatterplots • Correlations can be represented by scatterplots.
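
A scatterplot plots one point per participant, with one variable on each axis; the shape of the cloud of points shows the direction and strength of the correlation. A minimal sketch, assuming matplotlib is installed and using hypothetical data:

```python
import matplotlib.pyplot as plt

# Hypothetical paired measurements for two variables.
hours = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [50, 55, 58, 63, 66, 72, 74, 81]

plt.scatter(hours, scores)  # one point per participant
plt.xlabel("Hours of study")
plt.ylabel("Exam score")
plt.title("Scatterplot of a positive correlation")
plt.show()
```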

  21. Explaining Correlations • Start with three variables (X, Y, and Z), where X and Y are correlated: • X might cause Y • Y might cause X • A third variable Z might cause both X and Y • Correlations show patterns, not causes

  22. An Experiment • A controlled test of a hypothesis in which the researcher manipulates one variable to discover its effect on another. • An experiment includes: • Variables of interest. • Control conditions. • Random assignment.

  23. Variables of Interest • Independent variables are variables the experimenter manipulates. • Dependent variables are variables that the experimenter predicts will be affected by manipulations of the independent variable or variables.

  24. Control Conditions • In an experiment, a comparison condition in which subjects are treated the same as those in the experimental condition except that they do not receive the experimental manipulation. • In some experiments, the control group is given a placebo, which is an inactive substance or fake treatment.

  25. Random assignment • For experiments to have experimental and control groups composed of subjects who are similar on characteristics that may affect the results, random assignment should be used. • Each individual participating in the study has the same probability as any other of being assigned to a given group.
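
A minimal sketch of random assignment: shuffle the pool of participants so that chance alone determines group membership. The participant list and the two-group design are hypothetical.

```python
import random

# Hypothetical pool of participants who have agreed to take part.
participants = [f"participant_{i}" for i in range(20)]

random.shuffle(participants)                  # chance alone decides the order
midpoint = len(participants) // 2
experimental_group = participants[:midpoint]  # first half receives the manipulation
control_group = participants[midpoint:]       # second half serves as the comparison

print(len(experimental_group), len(control_group))  # 10 and 10
```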

  26. Experimenter Effects • Unintended changes in subjects’ behaviour due to cues inadvertently given by the experimenter. • Strategies for preventing experimenter effects include single-blind and double-blind studies.

  27. Descriptive Statistics • Statistical procedures that organize and summarize research data. • Examples include: • Arithmetic mean • Standard deviation
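
Both summary statistics mentioned above are available in Python's standard library; the scores below are hypothetical.

```python
import statistics

# Hypothetical scores from one condition of a study.
scores = [12, 15, 14, 10, 18, 16, 13, 15]

mean = statistics.mean(scores)  # arithmetic mean: sum of scores / number of scores
sd = statistics.stdev(scores)   # sample standard deviation: spread around the mean

print(f"mean = {mean:.2f}, standard deviation = {sd:.2f}")
```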

  28. Inferential Statistics • Statistical procedures that allow researchers to draw inferences about how statistically meaningful a study’s results are. • The most commonly used inferential statistics are significance tests: • Statistical tests that show how likely it is that a study’s results occurred merely by chance.
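
As one illustration of a significance test, the sketch below runs an independent-samples t test comparing an experimental group with a control group, assuming SciPy is available; the scores are hypothetical.

```python
from scipy import stats

# Hypothetical dependent-variable scores for the two groups of an experiment.
experimental = [14, 16, 15, 18, 17, 16, 19, 15]
control = [12, 13, 11, 14, 13, 12, 15, 13]

# Independent-samples t test: how likely is a difference this large
# between group means if chance alone were at work?
result = stats.ttest_ind(experimental, control)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# A small p value (conventionally below .05) is taken as statistically significant.
```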

  29. Choosing the Best Explanation • Interpretation of results may depend on how the research was conducted. • Cross-sectional studies involve subjects of different ages being compared at a given time. • Longitudinal studies involve subjects who are periodically reassessed over a period of time.

  30. Judging the Result’s Importance • Statistical techniques such as effect size and meta-analysis can help us determine if results are really important. • Effect size is the amount of variance among scores in the study accounted for by the independent variable. • Meta-analysis is a procedure for combining and analyzing data from many studies. It determines how much of the variance in scores across all studies can be explained by a particular variable.
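
One way to express effect size in the variance-accounted-for sense used above is eta squared: the proportion of the total variance in scores that is explained by group membership (the independent variable). A minimal sketch with hypothetical two-group data:

```python
def eta_squared(group_a, group_b):
    """Proportion of total variance in scores accounted for by group membership."""
    all_scores = group_a + group_b
    grand_mean = sum(all_scores) / len(all_scores)
    mean_a = sum(group_a) / len(group_a)
    mean_b = sum(group_b) / len(group_b)
    # Between-groups sum of squares: how far each group mean sits from the grand mean.
    ss_between = (len(group_a) * (mean_a - grand_mean) ** 2
                  + len(group_b) * (mean_b - grand_mean) ** 2)
    # Total sum of squares: how far every individual score sits from the grand mean.
    ss_total = sum((score - grand_mean) ** 2 for score in all_scores)
    return ss_between / ss_total

# Hypothetical scores for an experimental and a control group.
print(round(eta_squared([14, 16, 15, 18, 17, 16, 19, 15],
                        [12, 13, 11, 14, 13, 12, 15, 13]), 2))
```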

  31. Ethical Dilemmas • Ethics Considerations in Human Research • Ethics Considerations in Animal Research

  32. The Ethics of Studying Humans • Informed consent • Prospective participants should receive enough information to let them decide freely whether to participate. • Freedom to withdraw at any time • Minimize discomfort • Keep data confidential • If deception is necessary, debriefing must occur

  33. The Ethics of Studying Animals • Animals have always been used in a small percentage of psychological studies: • To conduct basic research. • To discover practical applications. • To study issues that cannot be studied experimentally with human beings. • To clarify theoretical questions. • To improve human welfare.
