Research Design

Quantitative


Symbolic Representations of Quantitative Designs - Shorthand

  • R = random assignment

  • O = observation

  • X = intervention

  • Super or subscript = numbered sequence of events

  • Types of Experimental Designs =

    • Pre-experimental

    • True experimental

    • Quasi-experimental


Pre-experimental Designs

  • One-Shot Experimental Design

X O1


Pre-Experimental Design

  • One Group Pretest-Posttest Design

O1 X O2


Pre-Experimental Designs

  • Static Group Comparison

X O1

O1


True Experimental Designs: Pretest-Posttest Control Group Design

R O1 X O2

R O1 O2
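As an illustrative sketch (not part of the original slides), the notation above can be simulated directly; the effect size, group sizes, and noise levels below are all assumed:

```python
import random
import statistics

# Hypothetical simulation of the pretest-posttest control group design:
# R O1 X O2  /  R O1 O2. All numbers are invented for illustration.
random.seed(42)

subjects = list(range(40))
random.shuffle(subjects)                    # R: random assignment
treatment, control = subjects[:20], subjects[20:]

def pretest():
    return random.gauss(50, 5)              # O1: baseline observation

def posttest(pre, treated):
    effect = 8 if treated else 0            # X: assumed intervention effect
    return pre + effect + random.gauss(0, 2)

t_pre = [pretest() for _ in treatment]
c_pre = [pretest() for _ in control]
t_post = [posttest(p, True) for p in t_pre]
c_post = [posttest(p, False) for p in c_pre]

# Difference-in-differences estimate of the intervention effect
did = (statistics.mean(t_post) - statistics.mean(t_pre)) \
    - (statistics.mean(c_post) - statistics.mean(c_pre))
print(round(did, 1))
```

Because random assignment makes the two groups comparable at pretest, the difference-in-differences estimate clusters around the assumed effect of 8.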


True Experimental Designs: Solomon Four-Group Design

R O1 X O2

R X O2

R O1 O2

R O2


True Experimental Designs: Posttest-Only Control Group Design

R X O1

R O1


True Experimental Designs

  • Within-Subjects Design – Only one Group

X1 O1

X2 O2


Other Experimental Designs

  • Factorial Design

    • Used when two or more different characteristics, treatments, or events are independently varied in a single study

    • R X1 X2 O1

    • R X1 O1

    • R X2 O1

    • R O1

  • Nested Design

    • Used when the subjects are aggregates
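As a hedged sketch, the factorial layout above (R X1 X2 O1 through R O1) can be expressed as a 2x2 random assignment; the subject count is hypothetical:

```python
import itertools
import random

# Enumerate every combination of two independently varied treatments
# (the defining feature of a factorial design), then randomly assign.
random.seed(0)

conditions = list(itertools.product([False, True], repeat=2))
# (X1?, X2?): control, X2 only, X1 only, both treatments

subjects = list(range(20))
random.shuffle(subjects)                    # R: random assignment
groups = {cond: subjects[i::len(conditions)]
          for i, cond in enumerate(conditions)}

for cond, members in sorted(groups.items()):
    print(cond, len(members))
```

Each of the four cells receives an equal share of the randomly ordered subjects, which is what lets the two treatments (and their interaction) be estimated independently.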


    Other Experimental Designs

    • Repeated measures design with counterbalancing – also called crossover design

      • Used when more than one treatment is administered to each subject in sequence, but the sequence is varied

  • Multivariate Design

    • Used when there are multiple variables and complex relationships among the variables

  • Randomized Clinical Trials

    • Used with a large number of subjects to test the results of a treatment and compare the results with a control group who have not received the treatment. The study is carried out in multiple geographic locations and it is “double-blind”


    Strength of Experimental Designs

    • They are designed to rule out factors other than the cause being studied (the independent variable) as influences on the dependent variable. This gives the researcher confidence in inferring causal relationships.

    • Criteria for causality (Paul Lazarsfeld)

      • Cause must precede effect in time

      • There must be an empirical relationship between the presumed cause and presumed effect

      • The relationship can’t be explained as being due to a third variable


    Weakness of Experimental Designs

    • Many variables are not amenable to experimental manipulation, such as human or environmental characteristics

    • Ethics may prohibit manipulation of some variables

    • It is just impractical to manipulate some variables

    • Laboratory experiments are artificial

    • The Hawthorne effect may occur


    Ways to Overcome “Unfairness”

    • Use alternative interventions

    • Use a placebo

    • Use the standard method of care

    • Use different doses or intensities

    • Use delayed treatment – give same treatment after data have been collected for all groups


    Quasi-Experimental Design

    • These designs lack at least one of the three properties that characterize true experiments

    • Manipulation of the independent variable must always be present

    • There are usually control groups

    • Most of the time, the control groups are not randomly assigned – these are called nonequivalent control groups


    The Nonrandomized Control Group Design

    O1 X O2

    O1 O2


    Reversed Treatment Design with Pre and Posttest – One Group

    O1 +X O2

    O1 -X O2


    Nonequivalent Dependent Variables Design - One Group

    O1 DV1 X O2 (DV1 changed)

    O1 DV2 X O2 (DV2 not changed)


    Simple Time Series

    Jan Feb Mar Apr X May Jun Jul Aug
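A minimal illustrative reading of a simple time series like the one above is to compare the level of the series before and after X; the monthly values here are invented:

```python
import statistics

# Invented observations around an intervention X between Apr and May
before = [12, 13, 12, 14]   # Jan Feb Mar Apr
after = [19, 20, 21, 20]    # May Jun Jul Aug (after X)

# A level shift after X is (tentative) evidence of an intervention effect
shift = statistics.mean(after) - statistics.mean(before)
print(shift)
```

A real interrupted time series analysis would also model trend and seasonality before attributing the shift to X.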


    Control Group Time Series

    O1 O2 X O3 O4

    O1 O2 __ O3 O4


    Reversal Time Samples Design and Alternating Treatment Design

    X O1 __ O2 X O3

    X1 O1 __ O2 X2 O3


    Strengths and Weaknesses of Quasi-experimental Designs

    • Strengths

      • Practical

      • Feasible

      • Generalizable to a certain extent

    • Weaknesses

      • The absence of full control makes it possible that some other external factor, selection, or maturation caused the observed effect


    Correlational Studies

    • These studies examine the relationships between variables. They can describe a relationship, predict a relationship or test a relationship proposed by a theory. They do not test causality. They do not test differences between two or more groups. They examine a single group or situation in terms of two or more variables


    Types of Correlational Designs

    • Descriptive correlational design – describes two or more variables and the relationships among the variables

    • Predictive studies – are used to facilitate decision-making about individuals such as admission of students to nursing school. Retrospective data from other groups are used to predict the behavior of a similar group


    Types of Correlational Designs

    • Retrospective studies – manifestation of some phenomena existing in the present is linked to phenomena occurring in the past.

    • Prospective studies – examine a presumed cause then go forward in time to the presumed effect. It’s more costly and you may have to wait a long time, but the correlation is stronger


    Types of Correlational Designs

    • Theory testing correlational designs – used to test propositions in a theory

      • Partial correlational design eliminates the influence of an intervening variable (mathematically) to study the relationship of the two remaining variables

      • Cross-lagged panel design collects data on two variables at two or more time periods to support the inference that variable 1 occurs before variable 2

      • Path analysis design
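The partial correlational design above eliminates the intervening variable "mathematically"; as a sketch, the standard first-order partial correlation formula can be computed directly (the data are invented):

```python
import math

# Partial correlation of x and y with intervening variable z held constant:
# r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))
def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

def partial_r(x, y, z):
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

# Invented data in which both x and y track the intervening variable z
z = [1, 2, 3, 4, 5, 6]
x = [2, 1, 4, 3, 6, 5]
y = [1, 3, 2, 5, 4, 6]
print(round(pearson_r(x, y), 2), round(partial_r(x, y, z), 2))
```

In this invented data, x and y correlate moderately, but once z is controlled the remaining relationship looks very different – exactly the third-variable information the design is meant to expose.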


    Strengths and Weaknesses of Correlational Designs

    • Strengths

      • Various constraints often limit true or quasi-experimental designs

      • Causal relationships may not be important

      • More data can be gathered than can be acquired through an experimental design

      • They are strong in realism and solve practical problems

  • Weaknesses

    • Inability to actively manipulate IV

    • Inability to randomly assign individuals to treatments

    • Possible faulty interpretation of results


    Simple Ex Post Facto Design

    • This shows the possible effects of an experience that occurred (or of a condition that was present) prior to the research.

      Experience O1

      O1


    Descriptive Study Designs

    • These studies are conducted to examine variables in naturally occurring situations. They look at relationships between variables as part of the overall descriptions but they do not examine the type or degrees of relationships. They protect against bias through conceptual and operational definitions of variables, sample selection, valid and reliable instruments, and control of the environment in which the data are collected.


    Types of Descriptive Studies

    • Exploratory Study

      • When little is known about the phenomenon of interest, an exploratory study is used to build basic knowledge, to describe or identify the phenomenon

      • The approach is loosely structured and may include both quantitative and qualitative aspects, but it is still considered quantitative because the data obtained are quantified

      • There are usually no hypotheses


    Types of Descriptive Studies

    • Purely descriptive studies

      • study the variables within a particular situation with a single sample of subjects

  • Comparative descriptive studies

    • examine the difference in variables between two or more groups that occur in a particular situation

  • Time dimensional studies

    • Prospective and retrospective

    • Longitudinal – changes in same subjects

    • Cross-sectional – changes in groups of subjects at different stages of development, simultaneously

    • Trend – take samples of population at pre-set intervals

    • Event partitioning


    Descriptive Study Designs

    • Case study design

      • Investigation of an individual, group, institution, or other social unit to determine the dynamics of why the subject thinks, behaves, or develops in a particular manner. It requires detailed study over time. Any data collection method can be used; content analysis is often a major choice.

      • Strength – the depth of the study – it’s not superficial

      • Weakness – subjectivity of the researcher


    Descriptive Study Designs

    • Survey Design

      • Research activity that focuses on the status quo of some situation. Information is collected directly from the group that is the object of the investigation. Purposes can be to

        • describe – people’s characteristics, attitudes or beliefs – sub-samples may be compared

        • explain – a variable of interest by examining its relationship to other variables – nothing is manipulated

        • predict – people report their plans or intentions and extrapolations can be made

        • explore – use probing, loosely formulated questions to find out background data of subjects; to gain information to formulate research questions or hypotheses; to help develop theory for qualitative research


    Descriptive Designs

    • Types of survey techniques

      • Personal interview

      • Telephone interviews

      • Written questionnaires - self administered

      • Internet questionnaires – self administered

      • Strengths and Weaknesses of Surveys

        • Weaknesses – superficial, ex post facto, time and resources

        • Strength – flexibility and broad scope


    Evaluation Research

    • A highly applied form of research that looks at how well a program, practice, or policy is working. Its purposes are

      • To evaluate the success of a program, not why it succeeds, but whether it is succeeding

      • To answer practical problems for persons who must make decisions


    Evaluation Research cont.

    • The classical approach

      • Determine objectives of the program

      • Develop means of measuring attainment of objectives

      • Collect data

      • Interpret data vis-à-vis the objectives

  • Goal-free evaluation

    • Evaluation of the outcomes of a program in the absence of information about intended outcomes

    • Must describe the repercussions of a program or practice or various components of the overall system


    Categories of Evaluation

    • Formative evaluation – the ongoing process of providing evaluation feedback in the course of developing a program or policy – the goal is to improve the program. It is also called Process or Implementation Evaluation.

    • Summative evaluation – the worth of a program after it is already in operation – to help decide whether it should be discarded, replaced, modified or continued. It describes the effectiveness of a program.


    Summative Evaluation

    • Also called Outcome Analysis

      • Comparative evaluation – assesses the worth of two or more programs or procedures

      • Absolute evaluation – assesses the effects of a program in and of itself – no contrast with other programs – called criterion-referenced – measures against criteria

      • Impact Analysis looks at the efficiency of the program according to the subgroups for whom it is most effective

      • Cost Analysis

        • Cost-benefit – Money estimates for costs and benefits

        • Cost effectiveness – Cost to produce the impact
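A minimal numeric sketch of the two cost-analysis variants; every figure below is hypothetical:

```python
# Hypothetical program costs and outcomes, for illustration only
program_cost = 120_000.0        # assumed annual cost of the new program
comparison_cost = 80_000.0      # assumed cost of standard care

# Cost-benefit: money estimates for both costs and benefits
monetized_benefit = 150_000.0   # assumed dollar value of outcomes
net_benefit = monetized_benefit - program_cost

# Cost-effectiveness: incremental cost per unit of impact (no monetizing)
cases_improved_new, cases_improved_old = 60, 30
cer = (program_cost - comparison_cost) / (cases_improved_new - cases_improved_old)
print(net_benefit, round(cer, 2))
```

The key contrast: cost-benefit forces the outcomes into dollars, while cost-effectiveness keeps the outcome in its natural unit (here, cost per additional case improved).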


    Needs Assessment

    • Similar to evaluation research, it provides informational input in a planning process. It is usually done by an agency or group with a service component. It helps in establishing priorities. There are three approaches:

      • Key informant

      • Survey

      • Indicators


    Evaluation Research Weaknesses

    • Threatening to individuals

    • Seen as a waste of time

    • Role conflicts if researcher is in-house

    • Censor by “politicians” in-house

    • When some goals are satisfied and others are not, how is the whole program evaluated?

    • Goals may be long-term, so outcomes cannot be seen now


    Other Types of Research

    • Secondary Analysis – studying data that have been previously gathered

      • Strength – it is efficient and economical

      • Weakness –

        • Variables may have been underanalyzed

        • You may want to look at different relationships among variables

        • You may want to change the unit of analysis

        • You may want data from a sub-sample

        • You may want to change the method of analysis

  • Replication Studies


    Other Types of Research

    • Meta-analysis – merging findings from many studies that have examined the same phenomenon then using statistics to determine overall findings – looking for effects

    • Meta-synthesis – merging findings (themes) from qualitative studies

    • Methodological – designed to develop the validity and reliability of instruments that measure constructs/variables. They are controlled investigations of ways to obtain, organize and analyze data.
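As a sketch of the statistical step in a meta-analysis, one common approach is fixed-effect inverse-variance pooling; the effect sizes and standard errors below are invented:

```python
import math

# Fixed-effect meta-analysis: weight each study's effect size by the
# inverse of its variance, then pool. Study values are hypothetical.
studies = [  # (effect size d, standard error)
    (0.40, 0.20),
    (0.25, 0.10),
    (0.60, 0.30),
]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(round(pooled, 3), round(pooled_se, 3))
```

The most precise study (smallest standard error) dominates the pooled estimate, which is the "overall finding" the slide refers to.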


    Research Design Considerations

    • Research Control – the design should maximize the control an investigator has over the research situation and the variables. Rigor in quantitative control is exerted by the methodology used, whereas rigor in qualitative design is exerted by bracketing and intuiting. Quantitative control requires:

      • Constancy of conditions – conditions under which the data are collected must be as similar as possible

        • Environment

        • Time, day, year

        • One interviewer – if not, minimize the variability

        • Communication and treatment should be constant (same)


    Research Control cont.

    Manipulation as control – ability to manipulate the independent variable is very powerful

    • Assures that conditions under which information was obtained were constant or at least similar – can’t do that with ex post facto research

    • Allows more complex treatments because of the control the researcher can exercise over them

    • Can use factorial designs to test two independent variables at the same time, as well as their interaction effects


    Research Control cont.

    • Comparison groups as control – scientific knowledge requires some type of comparison – even case studies have an implied reference – “normal”

    • Randomization as control – if you can’t randomize the subjects, then at least vary the order in which questions are asked – especially for attitudes
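The two uses of randomization above can be sketched as follows; the subject IDs and questions are placeholders:

```python
import random

# Placeholder subjects and questions; the split and the ordering below
# illustrate the two uses of randomization described above.
random.seed(7)

subjects = ["s%02d" % i for i in range(10)]
shuffled = subjects[:]
random.shuffle(shuffled)                    # randomize before splitting
treatment, control = shuffled[:5], shuffled[5:]

# When subjects cannot be randomized, vary the question order instead
questions = ["Q1 attitude", "Q2 attitude", "Q3 attitude"]
order = random.sample(questions, k=len(questions))
print(treatment, order)
```

Shuffling before splitting controls selection bias; shuffling question order controls order effects, which matter most for attitude items.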


    Research Control cont.

    Control over extraneous individual characteristics of subjects

    • Use only homogeneous subjects

    • Include extraneous variables as independent variables – randomly assign them to sub-blocks

    • Matching – use knowledge of subjects from comparison groups – matching on more than three characteristics is difficult. Matching may be done after the fact

    • Use statistical procedures (ANOVA) after the fact

    • Randomization

    • Use subjects themselves as their own controls


    Research Design Considerations

    • Validity – the measure of truth or accuracy of a claim

      • Internal validity shows that the findings are due to the independent variable. It is maintained by using the controls on the previous slides, and by preventing threats to internal validity

      • It is assumed the IV causes the DV

      • Threats to internal validity are other possible explanations for the changes in the DV


    Research Design Considerations

    • Threats to internal validity

      • History – external events that affect the dependent variable

      • Selection – biases from pre-treatment differences

      • Maturation – within the subject over time – not from the treatment

      • Testing – the effect of taking a pretest on posttest scores

      • Instrumentation – changes made by the researcher or mechanical changes

      • Mortality – loss of subjects during the study

      • Other factors - such as statistical regression


    Research Design Considerations

    External validity – the generalizability of research findings to other settings or samples, specifically to the population from which the sample came; there is no problem generalizing to the accessible population. Threats to external validity are:

    • Population Factors

      • The Hawthorne effect – awareness of participation causes different behavior

      • Novelty effect – newness of the treatment might cause alteration in behavior


    Research Design Considerations

    Ecological Factors

    • Interaction between history and treatment effects

    • Interaction between selection and treatment – too many decline

    • Interaction between setting and treatment – some resist

  • Experimenter factors – research is affected by characteristics of the researcher

    • Paradigm effect – basic assumptions and ways of conceptualization

    • Loose protocol – step-by-step detail not planned

    • Mis-recording effect – especially if subjects record their own responses

    • Unintentional expectancy effect – influences subjects' responses

    • Analysis effect – decide how to analyze after data collected

    • Fudging effect – reporting effects not obtained

