
Presentation of Selected Research Methodology


Presentation Transcript


  1. Presentation of Selected Research Methodology

  2. Activity – Evaluating and Interpreting Research Literature
  • After reading the materials titled “Evaluating and Interpreting Research Literature,” answer the following questions:
  • Have any of your peers, colleagues, or instructors ever stated that a study “proves” something? If so, briefly describe what he or she said. In light of the materials provided, would you be cautious about believing such a statement? Why? (Answer in one to two paragraphs.)
  • According to the reading materials, what are the primary and secondary purposes for preparing a literature review?
  • What do quantitatively oriented researchers emphasize when sampling that qualitatively oriented researchers do not?
  • Name two examples of common sampling flaws.

  3. Activity – Evaluating and Interpreting Research Literature
  • Name a trait, other than the ones mentioned in this chapter, that you think is inherently difficult to measure. Why?
  • Briefly explain why a highly reliable measuring instrument can still be invalid.
  • If a common, well-known IQ scale is considered to have adequate reliability and validity, does this mean the scale has no flaws? If not, briefly explain why.
  • To study causality, what do researchers need to do? Why?
  • If a difference is statistically significant, does this mean the difference is large? If not, what does statistical significance tell you? What else can you look at to indicate the magnitude of a difference? (A sketch of one such measure follows below.)
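
The last question above points at the difference between statistical significance and magnitude. As a minimal sketch (all scores are invented for illustration), Cohen's d is one common effect-size index: it expresses a mean difference in pooled standard-deviation units, so it speaks to magnitude in a way a p-value does not.

```python
import numpy as np
from scipy import stats

# Hypothetical test scores for two groups; the numbers are made up.
group_a = np.array([72, 75, 78, 80, 69, 74, 77, 73, 76, 79])
group_b = np.array([70, 72, 74, 76, 68, 71, 73, 69, 75, 72])

# Statistical significance: is the difference unlikely under chance alone?
t, p = stats.ttest_ind(group_a, group_b)

# Magnitude: Cohen's d, the mean difference in pooled-SD units.
n1, n2 = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
                     (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))
d = (group_a.mean() - group_b.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```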

  4. Review
  • In your last assignment, you developed a research question
  • Your research question guides the design of your study’s methodology
  • This session will help you better formulate and articulate the research design and methodology section of your paper

  5. Brief Description of Methodology and Research Design
  • This section will define how you are going to address the research question or questions
  • You will be able to…
  • Present an overview of the methodology
  • Describe the appropriateness of the methodology
  • Explain the rationale for selecting the methodology

  6. Research Design Overview
  • Nonexperimental Research Designs
  • Quasi-experimental Research Designs
  • Experimental Research Designs
  • Qualitative Research Designs
  • Program Evaluation

  7. Nonexperimental Research Designs
  • Descriptive
  • Purpose is to create a detailed description of some phenomenon
  • Example: Satisfaction levels of employers with business graduates’ job skills
  • Causal-Comparative
  • Purpose is to compare two or more groups in order to explore possible causes or effects of a phenomenon
  • Example: Effects of type of classroom (inclusion vs. non-inclusion) on academic achievement

  8. Nonexperimental Research Designs (cont’d)
  • Correlational
  • Purpose is to determine the strength and direction of the relationship between two variables
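
In practice, a correlational analysis reduces to an index such as Pearson's r: its sign gives the direction of the relationship and its absolute value the strength. A minimal sketch with invented paired observations:

```python
import numpy as np

# Hypothetical data (e.g., weekly study hours vs. exam score); values invented.
hours = np.array([2, 4, 5, 7, 8, 10, 12])
score = np.array([55, 60, 62, 70, 74, 80, 85])

# Pearson r: sign = direction, absolute value = strength.
r = np.corrcoef(hours, score)[0, 1]
print(f"r = {r:.2f}")
```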

  9. Quasi-Experimental Design: Non-Equivalent Control Group
  1. The treatment group is composed of students already in the program through self- or external selection; random assignment to treatment and control groups is not possible. Another group that is as similar to the treatment group as possible is selected as the control group; it does not receive the program.
  2. Administer the pre-test to both groups. Initial differences can be adjusted later by statistical means.
  3. Expose the treatment group to the program while withholding it from the control group.
  4. Administer the post-test to both groups.
  5. If there is a difference that favors the treatment group, you can be fairly confident (though less so than with random assignment) that it was due to the program. If there is no difference, i.e., if the scores for both groups remained the same or changed equally (up or down), that indicates the program is probably not effective.

  10. Quasi-Experimental Design: Non-Equivalent Control Group
  The treatment group (T) gets the program (X); the control group (C) does not.
  Treatment group (already determined): Pre-test → X → Post-test
  Control group (select a similar group): Pre-test → Post-test
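
Step 2 of the previous slide notes that initial differences can be adjusted statistically. One simple adjustment is to compare gains rather than raw post-test scores, so that whatever change both groups would have shown anyway is subtracted out. A minimal sketch; every score below is invented for illustration:

```python
import numpy as np

# Hypothetical pre/post scores; random assignment was not possible.
treat_pre  = np.array([60, 62, 58, 65])
treat_post = np.array([72, 75, 70, 78])
ctrl_pre   = np.array([61, 59, 63, 64])
ctrl_post  = np.array([64, 62, 66, 68])

treat_gain = treat_post.mean() - treat_pre.mean()
ctrl_gain  = ctrl_post.mean() - ctrl_pre.mean()

# The difference in gains estimates the program effect net of shared trends.
print(f"treatment gain = {treat_gain:.2f}, control gain = {ctrl_gain:.2f}")
print(f"estimated program effect = {treat_gain - ctrl_gain:.2f}")
```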

  11. Controlling Non-Equated Variables Through Statistical Analysis (or Making Non-Equivalent Groups Similar)
  Oftentimes we want to evaluate the effectiveness of a program that is already in place, and we are not able to construct a treatment and a control group. For example, suppose we wanted to evaluate the effectiveness of public vs. private schools on academic achievement, and we looked at the average NAEP math scores for 4th-grade students in public and private schools. Overall, the private school students scored higher.

  12. But what does the picture look like when we control for SES?
  When we compare public and private students of the same SES, we find there is little difference in their achievement. But because there are more high-SES students in private schools, the overall comparison is misleading. (For the precise data on these comparisons, see Lubienski and Lubienski, “A New Look at Public and Private Schools,” Phi Delta Kappan, May 2005.)
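
The pattern the slide describes can be reproduced with a toy calculation. The counts and scores below are invented (they are not the Lubienski data): within each SES level the two sectors score almost identically, yet the overall means diverge because the sectors enroll different SES mixes.

```python
# (sector, SES) -> (number of students, mean score); all numbers invented.
groups = {
    ("public",  "low"):  (700, 230),
    ("public",  "high"): (300, 250),
    ("private", "low"):  (200, 231),
    ("private", "high"): (800, 251),
}

for sector in ("public", "private"):
    n_total = sum(n for (s, _), (n, _) in groups.items() if s == sector)
    overall = sum(n * m for (s, _), (n, m) in groups.items() if s == sector) / n_total
    print(f"{sector}: overall mean = {overall:.1f}")

# Within each SES stratum the sectors differ by only 1 point, yet the
# overall means differ by 11 because private schools enroll more
# high-SES students: controlling for SES changes the conclusion.
```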

  13. Quasi-Experimental Design: Interrupted Time Series Design
  Multiple historical measures are taken on the treatment group only, before and after its exposure to the program. In situations where a control group is not possible, if (1) data on the treatment group can be obtained for several periods both before and after the participants are exposed to the program, (2) there is a change in scores immediately following the implementation of the program, and (3) the change continues in subsequent time periods, that is considered good evidence that the intervention produced the change.
  Time:                1  2  3     4  5  6
  Experimental group:  O  O  O  X  O  O  O
  (The new program, X, is introduced between periods 3 and 4.)
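
Conditions (2) and (3) can be summarized with two simple quantities: the immediate jump at the interruption and whether the change is sustained afterward. A minimal sketch with invented measurements:

```python
import numpy as np

# Hypothetical outcome measures for three periods before and three after
# the program (X) is introduced between periods 3 and 4; values invented.
before = np.array([48, 50, 49])
after  = np.array([58, 59, 61])

jump      = after[0] - before[-1]          # immediate change at the interruption
sustained = after.mean() - before.mean()   # change maintained in later periods

print(f"immediate change = {jump}, sustained change = {sustained:.1f}")
```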

  14. Campbell’s Example of the Interrupted Time Series Design—1

  15. Campbell’s Example of the Interrupted Time Series Design—2

  16. Experimental Research Designs
  • Involve the introduction of an intervention by the researcher to determine a cause-and-effect relationship
  • Strongest type of design (pre → INT → post)!
  • To yield valid findings, these studies must be rigorous!

  17. Experimental Research Designs (cont’d)
  • Single-Group Designs
  • All individuals in the study receive the treatment
  • Example: pre → WebAchiever → post
  • Threats to validity must be considered
  • Control-Group Designs
  • Strongest type of design!! (A random-assignment sketch follows below.)
  • Experimental group: pre → INT → post
  • Control group: pre → no intervention → post
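
What makes the control-group design the strongest is random assignment, which tends to equate the groups on extraneous variables before the intervention. A minimal sketch of assigning hypothetical participant IDs at random:

```python
import random

participants = list(range(1, 21))   # hypothetical participant IDs
random.seed(42)                     # seeded only so the example is reproducible
random.shuffle(participants)

experimental = participants[:10]    # pre -> INT -> post
control      = participants[10:]    # pre -> no intervention -> post
print("experimental:", sorted(experimental))
print("control:     ", sorted(control))
```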

  18. Experimental Research Designs (cont’d)
  • Quasi-Experimental Designs
  • Random assignment of subjects is not possible (e.g., when using an entire classroom)
  • Biggest problem: without random assignment, the groups may not be equivalent at the outset, which threatens internal validity

  19. Experimental Research Designs (cont’d)
  • Single-Case Designs
  • Involve the intense study of one individual (or of more than one individual treated as a single group)

  20. Validity Issues
  • Can the change in the post-test be attributed only to the experimental treatment that was manipulated by the researcher?
  • The researcher must be able to control extraneous variables that could have undue influence!

  21. Validity Issues (cont’d)
  • Internal Validity
  • The extent to which extraneous variables have been controlled by the researcher, so that any observed effect can be attributed to the treatment variable
  • External Validity
  • The extent to which the findings can be applied to individuals and settings beyond those studied

  22. Qualitative Research Designs
  • Case Study
  • The researcher collects intensive data about particular instances of a phenomenon and seeks to understand each instance in its own terms and in its own context
  • Historical Research
  • Understanding the present condition by shedding light on the past

  23. Program Evaluation
  • Making judgments about the merit, value, or worth of a program
  • A valuable tool in program management and policy analysis
  • Usually initiated by someone’s need for a decision to be made concerning policy, management, or political strategy

  24. Components Addressed in Research Methodology (See Syllabus for Details)
  • Participants
  • Instruments
  • Procedures
  • Limitations
  • Anticipated Outcomes
  • References

  25. Participants
  This section should include the following elements:
  (a) the target population or sample to which it is hoped the findings will be applicable, defined consistently with the Statement of Problem and the Research Question(s);
  (b) the population from which the sample will actually be drawn, including demographic information such as age, gender, and ethnicity;
  (c) the procedures for selecting the sample, including justification for the sampling method (a minimal sampling sketch follows below);
  (d) the implications for the generalizability of findings from the sample to the accessible population, and then to the target population.
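
For element (c), the proposal should name a concrete selection procedure. As one minimal illustration (the population list and sample size are hypothetical), a simple random sample can be drawn like this:

```python
import random

# Hypothetical accessible population of 500 students.
accessible_population = [f"student_{i}" for i in range(1, 501)]

random.seed(1)  # seeded only so the example is reproducible
sample = random.sample(accessible_population, k=50)  # simple random sampling
print(len(sample), sample[:3])
```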

  26. Procedures
  The procedures section is based directly on the research questions; it is the “how-to” section of the study, introducing the design of the research and how the data will be collected based on the questions of interest. This section should include the approach (i.e., design) to conducting the research (e.g., experimental, quasi-experimental, survey, historical, or ethnographic) and the appropriate procedures to be followed. For example, for an experimental or quasi-experimental study, the proposal should indicate how participants will be assigned to treatments and how the research will be conducted to ensure internal and external validity. If an evaluation project is proposed, the model to be followed should be specified.

  27. Instruments
  • Examples of data-gathering instruments include standardized tests, teacher-made tests, questionnaires, interview guides, psychological tests, and field-study logs
  • Indicate the source (literature citation) of the instrument and cite it appropriately
  • Include validity and reliability information

  28. Limitations
  • Limitations are conditions, restrictions, or constraints that may affect the validity of the project outcomes
  • A limitation is a weakness or shortcoming in the project that could not be avoided or corrected and is acknowledged in the final report
  • Common limitations are the lack of reliability of measuring instruments and the restriction of the project to a particular organizational setting

  29. Anticipated Outcomes
  • Description of the expected results of the study
  • Detail the importance of conducting the study as well as its possible impact on practice and theory

  30. Research Question Activity
  • State your research question
  • What are the key variables?
  • What are the conceptual definitions of your key variables?
  • What are the operational definitions of your key variables?

  31. Conceptual and Operational Variables
  • Concepts (or constructs) are terms that refer to the characteristics of an event, situation, or group being studied
  • We need to clearly specify how we define each and every concept in our research studies

  32. Two Types of Definitions
  • Conceptual Definition: an abstract, theoretical characterization of a construct
  • Operational Definition: the “empirical definition” of a construct

  33. Examples of Conceptual Definitions
  • Cognitive dissonance = “the unpleasant state of psychological arousal resulting from an inconsistency within one’s important attitudes, beliefs, or behaviors”
  • Self-esteem = “a person’s overall self-evaluation or sense of self-worth”
  • Aggression = “behavior intended to injure another”
  • Stereotype = “a belief about the personal attributes of a group of people”

  34. Conceptual definitions offer general, abstract characterizations of psychological constructs.
  • This is exactly why we need operational definitions!
  • Operational definition: the precise specification of how a concept is measured or manipulated in a particular study

  35. Operational Definitions
  How can we operationalize “aggression”?
  - punching another’s face?
  - hitting another’s arm?
  - spreading rumors about another?
  - verbally insulting another?
  - throwing a glass at another?
  - etc.
  * The more specific, the better. (A minimal scoring sketch follows below.)
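
Writing the measurement rule down as an explicit scoring procedure is one way to make an operational definition fully specific and replicable. A minimal sketch; the category list and coding scheme are invented for illustration:

```python
# One hypothetical operational definition of "aggression": the number of
# observer-coded acts in a session that fall into a fixed category list.
AGGRESSIVE_ACTS = {"hit", "punch", "insult", "spread_rumor", "throw_object"}

def aggression_score(observed_acts):
    """Count how many observed acts match the aggressive-act categories."""
    return sum(1 for act in observed_acts if act in AGGRESSIVE_ACTS)

print(aggression_score(["hit", "talk", "insult", "laugh", "punch"]))  # -> 3
```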

  36. Operational Definitions
  Why are they necessary/important?
  • They force us to think carefully and empirically, in precise and specific terms.
  • They make the concept public; they allow for replication.
  • They make the concept measurable.
