META-ANALYSIS OF RESEARCH

Texas A&M University

SUMMER STATISTICS INSTITUTE

June 2009

Victor L. Willson, Instructor


TOPIC AREAS

  • Background

  • Research focus for meta-analysis

  • Finding studies

  • Coding studies

  • Computing effect sizes

  • Effect size distribution

  • Mediators

  • Moderators

  • Report-writing

  • Current issues


Background

  • Purposes

  • Historical

  • Meta-analysis as survey research

  • Strengths

  • Weaknesses


Purposes for Meta-Analysis

  • Cumulate findings of studies on a particular topic

  • Examine homogeneity of outcomes

  • Estimate effects of independent variables on outcomes in a standardized format

    • Evaluate moderator and mediator effects on outcomes

    • Differentiate different types or classes of outcome effects


Historical background

  • Criticism of traditional narrative reviews of research

  • Exasperation in social sciences with constructs measured different ways in terms of determining consistencies

  • Need to formulate theoretical relationships based on many studies


History part 2

  • Early 1970s efforts focused on significance testing and “vote counts” of significance

  • Glass (1976) presented a method he called “meta-analysis” in his American Educational Research Association presidential address

  • Others proposed related methods, but Glass and colleagues developed the most widely used approach (Glass, McGaw, & Smith, 1981)


Meta-Analysis as Survey Research

  • Research articles as unit of focus

  • Population defined

    • Conditions for inclusion of articles

      • Data requirements needed for inclusion

      • Completeness of data available in article or estimable

      • Publication sources available, selected

  • Sample vs. population acquisition

    • Availability of publications and cost

    • Time to acquisition (how long to wait for retrieval)


Strengths of Meta-Analysis

  • Definition of effect and effect size beyond “significant or not”

  • Focus on selection threats in traditional reviews (bias in selection of articles for review)

  • Systematic consideration of potential mediators and moderators of effects

  • Data organization of articles for public review


Weaknesses of Meta-Analysis

  • Methodologically sophisticated and expensive

  • Potential ignoring of contextual effects not easily quantified, e.g., the historical/environmental placement of the research

  • Potential improper mixing of studies

  • Averages hiding important subgroupings

  • Improperly weighting studies with different methodological strength/rigor


Research focus for meta-analysis

  • Defining and delineating the construct

  • Determining a research outlet

  • Meta-Analysis as an interactive, developing process


Recent Criticism

  • Suri & Clarke (2009): Advancements in Research Synthesis Methods: From a Methodologically Inclusive Perspective (Review of Educational Research, pp. 395-430)

  • They propose 6 overlapping approaches:

    • Statistical research syntheses (e.g., meta-analysis)

    • Systematic reviews

    • Qualitative research syntheses

    • Qualitative syntheses of qualitative and quantitative research

    • Critical impetus in reviewing research

    • Exemplary syntheses


Some critical comments on Suri & Clarke (2009)

  • Systematic reviews- original Glass criticisms hold: what is the basis for inclusion and exclusion; why are certain articles privileged?

  • Qualitative research syntheses- how can these be done with situated contexts, small samples, environmentally-developed variables, sources, etc.? Will there be a review for every reader, or for every researcher? Same limitation as all qual research

  • Qual syntheses of quant and qual research- potentially doable, with an alternating order: qual first to focus emphases in the quant analysis, or quant first to be validated with the qual studies of particular environments and populations- do they fit/match in reasonable ways?

  • Critical impetus- code words for critical theory/Marxist etc. Answer is already known, why do the research?

  • Exemplary syntheses- what is the purpose?


Defining and Delineating the Research Topic

  • Outcome construct definition

    • Importance to the field to know what has been learned

    • How big is it? How many potential studies?

    • Conduct preliminary searches using various databases

  • Refining the construct

    • How much resource is available? E.g., 1,000 studies = 2-3 years of work

    • Are there specific sub-constructs more important than others? Select them or one of them

    • Are there time-limitations (no studies before 19xx)

    • Are there too few studies for the given construct, should it be broadened? Too few-> less than 10?


Defining and Delineating the Research Topic

  • What is the typical research approach for the topic area?

    • All quantitative

    • All qualitative

    • Mixed quantitative and qualitative

  • Are there sufficient quantitative studies to provide evidence for findings?

    • Can qualitative studies be included as a separate part of the study? How?


Determining Research Outlet

  • Does the proposed journal

    • publish research on the construct?

    • publish reviews or meta-analyses?

  • Is there a journal devoted to reviews that your project would fit with?

  • Has a recent similar meta-analysis been published? If so, will yours add anything new?

    • E.g., Allen et al. (under review) evaluated articles on first-grade retention after 1990, focusing on the quality of the research design in each study, to determine whether the effects differed from a fairly recent meta-analysis by Jimerson (2001)


Meta-Analysis as an interactive, developing process

  • View meta-analysis as evolutionary

    • As studies are reviewed and included, purpose and scope may change

  • Assume initial conceptualizations about both outcomes and potential predictors may change over time

    • Definitions, instruments, coding may all change as studies are found and included

  • Plan for revisions to all aspects of the meta-analysis


FINDING STUDIES

  • Searches

  • Selection criteria


Searches

  • Traditional literature review methods:

    • Current studies are cumulated

    • Branching backward search uses the reference lists of current studies

  • Electronic searches

    • Google, Google Scholar, PsycINFO, research library catalogs (for major research institution libraries)

    • Searches of major journal article titles and abstracts (commonly available now through electronic libraries)

  • Abstract vs. full content searches- electronic, pdf, hard copy

  • Author requests: email or hard copy requests for newly published articles or other works not found in typical search outcomes


Selection Criteria

  • In or out:

    • Any quantitative data available?

      • Descriptive data- means and SDs for all groups of interest?

      • Analysis summaries- F- or t-tests, ANOVA tables etc. available that may be utilized?

  • Iterative process: outs may come back in given broader definitions of a construct

  • Duplicated articles/data reports? Decide which to keep (earliest? most complete?). Why were multiple articles prepared? Are new groups included that can be used?

  • Keep records of every study considered- excel or hard copy, for example


Selection Criteria

  • Useful procedure:

  • Create an index card for each study, along with notes on each to refer back to

  • Organize studies into categories or clusters

  • Review periodically as new studies are added, revise or regenerate categories and clusters

  • Consider why you organized the studies this way- does it reflect the scope of research, construct organization, or other classes?


CODING STUDIES

  • Dependent variable(s)

    • Construct(s) represented

    • Measure name and related characteristics

    • Effect size and associated calculations

  • Independent variables

    • Population

    • Sample

    • Design

    • Potential Mediators and Moderators

    • Bias mechanisms and threats to validity


CODING STUDIES- Dependent Variables

  • Construct name(s): e.g., Receptive or Expressive Vocabulary

  • Measurement name: Willson EV Test

  • Raw score summary data (mean, SD for each group or summary statistics and standard errors for dep. var):

    Exp Mean= 22 Exp SD = 5 n=100, Con Mean = 19 Con SD = 4, n=100

  • Effect size (mean difference or correlation)

    e = (22-19)/(20.5)

  • Effect size transformation used (if any) for mean differences:

    • t-test transform: e = t(1/n1 + 1/n2)½; F-statistic transform: F½ = t for df = 1, 198

    • probability transform to t-statistic: t(198) = [probt(.02)]

    • point-biserial transform to t-statistic, regression coefficient t-statistic

  • Effect size transformations used (if any) for correlations:

    • t-statistic to correlation: r² = t²/(t² + df)

    • Regression coefficient t-statistic to correlation


CODING STUDIES- Independent variables

  • Population(s): what is the intended population, what characterizes it?

    Gender? Ethnicity? Age? Physical, social, psychosocial, or cognitive characteristics?

  • Sample: population characteristics in Exp, Control samples

    e.g., % female, % African-American, % Hispanic, mean IQ, median SES, etc.


CODING STUDIES- Independent variables

Design (mean difference studies):

  • Random assignment, quasi-experimental, or nonrandom groups

  • Treatment conditions: treatment variables of importance (e.g., duration, intensity, massed or distributed, etc.); control conditions likewise

  • Treatment givers: experience and background characteristics: teachers, aides, parents

  • Environmental conditions (e.g., classroom, after-school location, library)


CODING STUDIES- Independent variables

Design (mean difference studies)

5. Time characteristics (when during the year, year of occurrence)

6. Internal validity threats:

  • maturation,

  • testing,

  • instrumentation,

  • regression,

  • history,

  • selection


CODING STUDIES- Independent variables

Mediators and Moderators

Mediators are indirect effects that explain part or all of the relationship between hypothesized treatment and effect:

T → e, with M on the indirect path (T → M → e)

In meta-analysis we establish that the effect of T on the outcome is nonzero, and then test whether M is significantly related to the effect e. We do not routinely test whether T predicts M.


CODING STUDIES- Independent variables

Mediators and Moderators

Moderators are variables for which the relationship changes from one moderator value to the next:

T → e = .3 for M = 1

T → e = .7 for M = 2

In meta-analysis we establish that the effect of T on the outcome is nonzero, and then test whether M is significantly related to the effect e. We do not routinely test whether T predicts M.


Coding Studies- Bias Mechanisms

  • Researcher potential bias- membership in publishing cohort/group

  • Researcher orientation- theoretical stance or background

  • Type of publication:

    • Refereed vs. book chapter vs. dissertation vs. project report: do not assume refereed articles are necessarily superior in design or analysis. Mary Lee Smith’s study of gender bias in psychotherapy found publication bias by refereed journals against mixed-gender research showing no effects, even though the refereed studies had lower-quality designs than the non-refereed works

  • Year of publication- have changing definitions affected effects? E.g., science interest vs. attitude: the terms were used interchangeably in the 1940s-1950s, with a shift to attitude in the 1960s

  • Journal of publication- do certain journals only accept particular methods, approaches, theoretical stances?


Computing Effect Sizes- Mean Difference Effects

  • Glass: e = (MeanExperimental – MeanControl)/SD

    • SD = Square Root (average of two variances) for randomized designs

    • SD = Control standard deviation when treatment might affect variation (causes statistical problems in estimation)

  • Hedges: correct for sampling bias: g = e[ 1 – 3/(4N – 9) ]

    • where N = total # in experimental and control groups

    • Sg = [ (Ne + Nc)/(NeNc) + g²/(2(Ne + Nc)) ]½
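
A minimal Python sketch of these formulas (my own illustration, not part of the original slides): it computes Glass's e from group summary statistics, applies the Hedges correction, and returns Sg. The function names and the pooled-SD option are assumptions for illustration.

    from math import sqrt

    def glass_e(mean_e, mean_c, sd_e, sd_c, pooled=True):
        """Glass effect size: mean difference divided by an SD.
        pooled=True uses the square root of the average of the two variances
        (randomized designs); pooled=False uses the control-group SD."""
        sd = sqrt((sd_e ** 2 + sd_c ** 2) / 2) if pooled else sd_c
        return (mean_e - mean_c) / sd

    def hedges_g(e, n_e, n_c):
        """Hedges correction for sampling bias: g = e * [1 - 3/(4N - 9)]."""
        return e * (1 - 3 / (4 * (n_e + n_c) - 9))

    def se_g(g, n_e, n_c):
        """Sg = [ (Ne + Nc)/(Ne*Nc) + g^2/(2(Ne + Nc)) ]^0.5"""
        return sqrt((n_e + n_c) / (n_e * n_c) + g ** 2 / (2 * (n_e + n_c)))

    # Example from the earlier coding slide: Exp mean 22 (SD 5, n 100), Con mean 19 (SD 4, n 100)
    e = glass_e(22, 19, 5, 4)      # about .66
    g = hedges_g(e, 100, 100)
    print(round(e, 3), round(g, 3), round(se_g(g, 100, 100), 3))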


Computing Effect Sizes- Mean Difference Effects Example from Spencer ADHD Adult study

  • Glass: e = (MeanExperimental – MeanControl)/SD

    = |82 – 101|/21.55

    = .8817

  • Hedges: correct for sampling bias: g = e[ 1 – 3/(4N – 9) ]

    = .8817 (1 – 3/(4*110 – 9))

    = .8762

    Note: SD computed from t-statistic of 4.2 given in article:

    e = t*(1/NE + 1/NC )½


Computing Mean Difference Effect Sizes from Summary Statistics

  • t-statistic: e = t*(1/NE + 1/NC )½

  • F(1,dferror): e = F½ *(1/NE + 1/NC )½

  • Point-biserial correlation:

    e = r*(dfe/(1 – r²))½ *(1/NE + 1/NC)½

  • Chi-square (Pearson association):

    φ² = χ²/(χ² + N)

    e = φ*(N/(1 – φ²))½ *(1/NE + 1/NC)½

  • ANOVA results: compute R² = SSTreatment/SStotal

    Treat R as a point biserial correlation
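
As a companion to the Excel workbook on the next slide, here is a small Python sketch (my illustration, with assumed function names) of these conversions to a mean-difference effect size e; the chi-square version follows the φ-based form reconstructed above, and dfe is taken as Ne + Nc - 2.

    from math import sqrt

    def e_from_t(t, n_e, n_c):
        """e = t*(1/Ne + 1/Nc)^0.5"""
        return t * sqrt(1 / n_e + 1 / n_c)

    def e_from_f(f, n_e, n_c):
        """F(1, df_error): sqrt(F) plays the role of |t|."""
        return e_from_t(sqrt(f), n_e, n_c)

    def e_from_point_biserial(r, n_e, n_c):
        """e = r*(dfe/(1 - r^2))^0.5 * (1/Ne + 1/Nc)^0.5, with dfe = Ne + Nc - 2."""
        dfe = n_e + n_c - 2
        return r * sqrt(dfe / (1 - r ** 2)) * sqrt(1 / n_e + 1 / n_c)

    def e_from_chi_square(chi2_stat, n_e, n_c):
        """phi^2 = chi2/(chi2 + N), then convert phi like a correlation."""
        n = n_e + n_c
        phi2 = chi2_stat / (chi2_stat + n)
        return sqrt(phi2) * sqrt(n / (1 - phi2)) * sqrt(1 / n_e + 1 / n_c)

    def e_from_anova(ss_treatment, ss_total, n_e, n_c):
        """R^2 = SS_treatment/SS_total; treat R as a point-biserial correlation."""
        return e_from_point_biserial(sqrt(ss_treatment / ss_total), n_e, n_c)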


Excel workbook for Mean difference computation


WORKING AN EXAMPLE

Story Book Reading

References

1 Wasik & Bond: Beyond the Pages of a Book: Interactive Book Reading and Language Development in Preschool Classrooms. J. Ed Psych 2001

2 Justice & Ezell. Use of Storybook Reading to Increase Print Awareness in At-Risk Children. Am J Speech-Language Path 2002

3 Coyne, Simmons, Kame’enui, & Stoolmiller. Teaching Vocabulary During Shared Storybook Readings: An Examination of Differential Effects. Exceptionality 2004

4 Fielding-Barnsley & Purdie. Early Intervention in the Home for Children at Risk of Reading Failure. Support for Learning 2003


Coding the Outcome

1 open Wasik & Bond pdf

2 open excel file “computing mean effects example”

3 in Wasik find Ne and Nc

4 decide on the effect(s) to be used. Three outcomes are reported: PPVT, receptive, and expressive vocabulary, at both classroom and student level. What is the unit of focus? There is a multilevel issue of students nested in classrooms, but there are too few classrooms for reasonable MLM estimation and the classroom level has too little power, so use the student-level data


Coding the Outcome

5 Determine which reported data is usable: here the AM and PM data are not usable because we don’t have the breakdowns by teacher-classroom- only summary tests can be used

6 Data for PPVT were analyzed as a pre-post treatment design, approximating a covariance analysis; thus the interaction is the only usable summary statistic, since it is the differential effect of treatment vs. control adjusting for pretest differences with a regression weight of 1 (ANCOVA with a restricted covariance weight):

Interactionij = Grand Mean – Treatment effect – Pretest effect

= Y… – ai.. – b.j.

Graphically, this is the difference between the gain in Treatment (post – pre) and the gain in Control (post – pre)

  • F for the interaction was F(1,120) = 13.69, p < .001.

  • Convert this to an effect size using excel file Outcomes Computation

  • What do you get? (.6527)


Coding the Outcome

[Figure: pre- and post-test means for the Control and Treatment groups, showing the treatment gain not “predicted” from the control group’s gain.]


Coding the Outcome

7 For Expressive and Receptive Vocabulary, only the F-tests for Treatment-Control posttest results are given:

Receptive: F(1, 120) = 76.61, p < .001

Expressive: F(1, 120) = 128.43, p < .001

What are the effect sizes? Use Outcomes Computation

Receptive: 1.544

Expressive: 1.999


Getting a Study Effect

  • Should we average the outcomes to get a single study effect or

  • Keep the effects separate as different constructs to evaluate later (Expressive, Receptive) or

  • Average the PPVT and receptive outcome as a total receptive vocabulary effect?

    Comment- since each effect is based on the same sample size, the effects here can simply be averaged. If missing data had been involved, then we would need to use the weighted effect size equation, weighting the effects by their respective sample size within the study
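
A short sketch of that comment, assuming sample-size weighting (my illustration, not the slides' Excel file): with equal n per outcome the weighted mean reduces to the simple average. The sample size used below is only a placeholder, since all three outcomes share the same students.

    def study_mean_effect(effects, ns):
        """Weighted mean of several effects from one study, weighted by sample size."""
        return sum(e * n for e, n in zip(effects, ns)) / sum(ns)

    # Wasik & Bond: the three outcomes come from the same students, so the weights
    # are equal and the weighted mean equals the simple average of the three effects.
    n = 122  # placeholder; any common value gives the same result
    print(study_mean_effect([0.6527, 1.544, 1.999], [n, n, n]))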


Getting a Study Effect

  • For this example, let’s average the three effects to put into the Computing mean effects example excel file- note that since we do not have means and SDs, we can put MeanC=0, and MeanE as the effect size we calculated, put in the SDs as 1, and put in the correct sample sizes to get the Hedges g, etc.

  • (.6567 + 1.553 + 2.01)/3 = 1.4036


2 Justice & Ezell

  • Receptive: 0.403

  • Expressive: 0.8606

  • Average = 0.6303

3 Coyne et al.

  • Taught Vocab: 0.9385

  • Untaught Vocab: 0.3262

  • Average = 0.6323

4 Fielding-Barnsley & Purdie

  • PPVT: -0.0764


Computing mean effect size

  • Use e:\\Computing mean effects1.xls



Computing Correlation Effect Sizes

  • Reported Pearson correlation- use that

  • Regression b-weight: use t-statistic reported,

    e = t*(1/NE + 1/NC )½

  • t-statistics: r = [ t² / (t² + dferror) ]½

    Sums of Squares from ANOVA or ANCOVA:

    r = (R²partial)½

    R²partial = SSTreatment/SStotal

    Note: Partial ANOVA or ANCOVA results should be noted as such and compared with unadjusted effects
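
The same conversions in a brief Python sketch (illustrative names only):

    from math import sqrt

    def r_from_t(t, df_error):
        """r = [ t^2 / (t^2 + df_error) ]^0.5"""
        return sqrt(t ** 2 / (t ** 2 + df_error))

    def r_from_anova(ss_treatment, ss_total):
        """r = (R^2_partial)^0.5, with R^2_partial = SS_treatment/SS_total.
        Partial (covariate-adjusted) results should be flagged, per the note above."""
        return sqrt(ss_treatment / ss_total)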


Computing Correlation Effect Sizes

  • To compute correlation-based effects, you can use the excel program “Outcomes Computation correlations”

  • The next slide gives an example.

  • Emphasis is on disaggregating effects of unreliability and sample-based attenuation, and correcting sample-specific bias in correlation estimation

  • For more information, see Hunter and Schmidt (2004): Methods of Meta-Analysis. Sage.

  • Correlational meta-analyses have focused more on validity issues for particular tests vs. treatment or status effects using means


Computing Correlation Effects Example


EFFECT SIZE DISTRIBUTION

  • Hypothesis: All effects come from the same distribution

  • What does this look like for studies with different sample sizes?

  • Funnel plot- originally used to detect bias, can show what the confidence interval around a given mean effect size looks like

    • Note: it is NOT smooth, since CI depends on both sample sizes AND the effect size magnitude


EFFECT SIZE DISTRIBUTION

  • Each mean effect SE can be computed from

    SE = 1/ (w)

    For our 4 effects: 1: 0.200525

    2: 0.373633

    3: 0.256502

    4: 0.286355

    These are used to construct a 95% confidence interval around each effect


EFFECT SIZE DISTRIBUTION- SE of Overall Mean

  • Overall mean effect SE can be computed from

    SE = 1/ (w)

    For our effect mean of 0.8054, SE = 0.1297

    Thus, a 95% CI is approximately (.54, 1.07)

    The funnel plot can be constructed by constructing a SE for each sample size pair around the overall mean- this is how the figure below was constructed in SPSS, along with each article effect mean and its CI
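
A sketch of those computations, assuming the inverse-variance weights implied by the Sg formula earlier (my reconstruction, not the course's SPSS/Excel files):

    from math import sqrt

    def weight(g, n_e, n_c):
        """Inverse-variance weight: w = 1/Sg^2."""
        var_g = (n_e + n_c) / (n_e * n_c) + g ** 2 / (2 * (n_e + n_c))
        return 1 / var_g

    def mean_effect_summary(effects, weights):
        """Weighted mean effect, SE = 1/sqrt(sum of w), and a 95% CI."""
        w_sum = sum(weights)
        mean = sum(w * g for g, w in zip(effects, weights)) / w_sum
        se = 1 / sqrt(w_sum)
        return mean, se, (mean - 1.96 * se, mean + 1.96 * se)

    # For the funnel plot, each study's own SE is 1/sqrt(w_i).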


EFFECT SIZE DISTRIBUTION- Statistical test

  • Hypothesis: All effects come from the same distribution: Q-test

  • Q is a chi-square statistic based on the variation of the effects around the mean effect

    Q =  wi ( g – gmean)2

    Q 2 (k-1)

k
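
A minimal sketch of the Q test (the chi-square p-value uses scipy, an assumed dependency); applied to the four storybook effects and their weights it should approximate the Q = 39.57, df = 3 result reported in the next exercise.

    from scipy.stats import chi2

    def q_statistic(effects, weights):
        """Q = sum of w_i*(g_i - g_mean)^2, referred to chi-square with k - 1 df."""
        w_sum = sum(weights)
        g_mean = sum(w * g for g, w in zip(effects, weights)) / w_sum
        q = sum(w * (g - g_mean) ** 2 for g, w in zip(effects, weights))
        df = len(effects) - 1
        return q, df, chi2.sf(q, df)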


Example Computing Q Excel file


Computational Excel file

  • Open excel file: Computing Q

  • Enter the effects and w for each of the 4 studies from the Computing mean effect excel file (you can delete the extra lines or add new ones by inserting as needed)

  • What Q do you get?

    Q = 39.57

    df=3

    p<.001


Interpreting Q

  • Nonsignificant Q means all effects could have come from the same distribution with a common mean

  • Significant Q means one or more effects or a linear combination of effects came from two different (or more) distributions

  • Effect component Q-statistic gives evidence for variation from the mean hypothesized effect


Interpreting Q- nonsignificant

  • Some theorists state you should stop- incorrect.

  • Homogeneity of overall distribution does not imply homogeneity with respect to hypotheses regarding mediators or moderators

  • Example: effects that are homogeneous overall may still correlate perfectly with year of publication (i.e., r = 1.0, p < .001)


Interpreting Q- significant

  • Significance means there may be relationships with hypothesized mediators or moderators

  • Funnel plot and effect Q-statistics can give evidence for nonconforming effects that may or may not have characteristics you selected and coded for


MEDIATORS

  • Mediation: effect of an intervening variable that changes the relationship between an independent and dependent variable, either removing it or (typically) reducing it.

  • Path model conceptualization:

Treatment → Mediator → Outcome, along with the direct path Treatment → Outcome


MEDIATORS

  • Statistical treatment typically requires both paths ‘a’ and ‘b’ to be significant to qualify as a mediator. Meta-analysis seems not to have investigated path ‘a’ but referred to continuous predictors as regressors

  • Lipsey and Wilson(2001) refer to this as “Weighted Regression Analysis”

Treatment →(a) Mediator →(b) Outcome, along with the direct path Treatment → Outcome


Weighted Regression Analysis

  • Model: e = b X + residual

  • Regression analog: Q = Qregression + Qresidual

  • Analyze as “weighted least squares” in programs such as SPSS or SAS

  • In SPSS the weight function w is a variable used as the weighting


Weighted Regression Analysis

  • Emphasis on predictor and its standard error: the usual regression standard error is incorrect, needs to be corrected (Hedges & Olkin, 1985):

    SE’b = SEb / (MSe)½

    where SEb is the standard error reported in SPSS,

    and MSe is the reported regression mean square error


Weighted Regression Q-statistics

  • Qregression = Sum of Squaresregression

    df = 1 for single predictor

  • Qresidual = Sum of Squaresresidual

    df = # studies - 2

    Significance tests: Each is a chi square test with appropriate degrees of freedom
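
A sketch of this weighted regression in Python with statsmodels (an assumed dependency; variable names are illustrative). It returns the coefficients, the Hedges & Olkin corrected standard errors, and the Q partition; per the slide above, Qregression has 1 df and Qresidual has (number of studies - 2) df for a single predictor.

    import numpy as np
    import statsmodels.api as sm

    def weighted_meta_regression(effects, predictor, w):
        """WLS of effect sizes on a study-level predictor with meta-analytic weights."""
        X = sm.add_constant(np.asarray(predictor, float))
        res = sm.WLS(np.asarray(effects, float), X, weights=np.asarray(w, float)).fit()

        corrected_se = res.bse / np.sqrt(res.mse_resid)   # SE'_b = SE_b / sqrt(MSe)
        q_regression = res.ess                            # weighted SS regression
        q_residual = res.ssr                              # weighted SS residual
        return res.params, corrected_se, q_regression, q_residual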


[Sample meta-analysis data set used in the weighted regression example: 24 studies with study descriptors, Hedges’ g effects, and weights w; the individual values are not legible in this transcript.]


SPSS ANALYSIS OUTPUT

  ANOVA (b, c)

    Model         Sum of Squares    df    Mean Square       F      Sig.
    Regression        19.166         1       19.166      12.096    .002 (a)
    Residual          34.858        22        1.584
    Total             54.024        23

    a. Predictors: (Constant), AGE
    b. Dependent Variable: HEDGE d*
    c. Weighted Least Squares Regression - Weighted by w

  Coefficients (a, b)

    Model         Unstandardized B    Std. Error    Standardized Beta       t      Sig.
    (Constant)         -1.037            .465                            -2.230    .036
    AGE                  .215            .062              .596           3.478    .002

    a. Dependent Variable: HEDGE d*
    b. Weighted Least Squares Regression - Weighted by w


Example

  • See SPSS “sample meta data set.sav” or the excel version “sample meta data set regression”

  • The d effect is regressed on Age

  • b = 0.215, SEb = 0.062, MSe = 1.584

  • Thus, SE’b = 0.062 / (1.584)½

    = 0.0493

    A 95% CI around b gives (0.117, 0.313) for the regression weight of age on outcome, p<.001


Q-statistic tests

  • Qregression = 19.166 with df=1, p < .001

  • Qresidual = 34.858 with df=22, p = .040

  • So- are the residuals homogeneous or not? Given a large number of significance tests, one might require the Type I error rate for such tests to be .001 or something small


MODERATORS

  • Moderators are typically considered categorical variables for which effects differ across categories or levels

  • In a limited form, this can be considered a treatment-moderator interaction

  • Moderator analysis is more general in the sense that any parameters of a within-category analysis may change across categories (multigroup analysis concept in Structural Equation Modeling)


Moderator Analysis- QBetween

  • Analog to ANOVA- split into Qbetween and Qwithin

  • QB = wiEi2– (wiEi)2 /wi

    where Ei is the mean for category i and wi is the total weight function for Ei

  • Remember that you constructed a mean effect for a study; the weight function for that mean effect is the sum of the weights that made up the mean: Ei = wjgj/wj for J effects in study I

    wi = wj


Moderator Analysis- QWithin

  • Analog to ANOVA- split into Qbetween and Qwithin

  • QW = wj(i)(Ej(i) - MeanEi)2

    I j

    where MeanEi is the mean for each category i, Ej(i) is an effect j in category i and wj(i) is the weight function for the jth effect in category i

  • This is analogous to the within-subjects term in ANOVA

  • Lipsey and Wilson do not give a very good equation for this on p. 121- confusing
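
A compact sketch of this split, assuming one mean effect Ei with total weight wi per study (a stand-in for the “Meta means working COMPUTATIONS” workbook, not a copy of it):

    def q_between_within(effects, weights, categories):
        """QB compares category mean effects; QW pools variation within categories."""
        w_all = sum(weights)
        grand_mean = sum(w * e for e, w in zip(effects, weights)) / w_all
        q_between, q_within = 0.0, 0.0

        for cat in set(categories):
            idx = [i for i, c in enumerate(categories) if c == cat]
            w_cat = sum(weights[i] for i in idx)
            mean_cat = sum(weights[i] * effects[i] for i in idx) / w_cat
            q_between += w_cat * (mean_cat - grand_mean) ** 2
            q_within += sum(weights[i] * (effects[i] - mean_cat) ** 2 for i in idx)

        # df: (#categories - 1) for QB and (#studies - #categories) for QW
        return q_between, q_within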


Computational Issues

  • The excel file “Meta means working COMPUTATIONS” provides a workbook to compute such effects

  • An exemplar is shown below and is included in your set of materials

  • Computation of QB and QW are done from the summary data of Hedge’s g and sample sizes


Moderator Example

  • For our Storybook reading example, we can break the effect into two design types:

    1 = no baseline equivalence

    2 = baseline equivalence

    Wasik = 2

    Coyne = 1

    Justice = 2

    Fielding = 1


Moderator Example

  • Select “Meta means working COMPUTATIONS” excel file

  • Reduce the number of studies to 2 in Design 1 and 2 in design 2

  • Insert the Hedge’s g effects, Cntrl N, Trmt N into the correct boxes, all other effects will be correctly computed


Storybook Reading Design Moderator effect

QB sig., two design means are different

QW nonsig., homogeneous effects within the two design categories


Meta-Analysis Report-Writing

  • Traditional journal approach:

    -Intro, lit review, methods, results, discussion

    -References: background, studies in meta analysis*

    -Tables: effects, SEs, Q’s, mediators, moderators

    -Figures: Cluster diagrams, funnel plots, graphs of effects by features

  • Literature review approach:

    -Thematic or theory focus: what lit exists, what does it say

    -Tabular summarizations of works


Current Issues

  • Multi-level models: Raudenbush & Bryk analysis in HLM6

  • Structural equation modeling in meta-analysis

  • Clustering of effects: cluster analysis vs. latent class modeling

  • Multiple studies by same authors- how to treat (beyond ignoring follow-on studies), the study dependence problem

  • Multiple meta-analyses: consecutive, overlapping


Multilevel Models

  • Raudenbush & Bryk HLM 6

    • One effect per study

    • Two level model, mediators and moderators at the second level

    • Known variance for first level (wi)

    • Mixed model analysis: requires 30+ studies for reasonable estimation, per power analysis

    • Maximum likelihood estimation of effects


Multilevel Models

  • Model:

    Level 1: gi = γi + ei

    where there is one observed effect g per study i and γi is the study’s true effect

    Level 2: γi = β0 + β1W + ui

    where W is a study-level predictor such as design in our earlier example

    Assumption: the sampling variance of gi is known (the inverse of the weight, 1/wi)


Structural Equation Modeling in SEM

  • New area- early work in progress:

  • Cheung & Chan (2005, Psych Methods), (2009, Struc Eqn Modeling)- 2-step approach using correlation matrices (variables with different scales) or covariance matrices (variables measured on the same scale/scaling)

    • Stage 1: create pooled correlation (covariance) matrix

    • Stage 2: fit SEM model to Stage 1 result


Structural Equation Modeling in SEM

  • Pooling correlation matrices:

    • Get average r:

      rmean(jk) = wi riij/ wijk

      I i

      where j and k are the subscripts for the correlation between variables j and k,

      where i is the ith data set being pooled

      Cheung & Chan propose transforming all r’s to Fisher Z-statistics and computing above in Z

      If using Z, then the SE for Zi is (1-r2)/n½ and
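
A minimal sketch of Stage 1 pooling for a single correlation across studies via the Fisher Z route (weighting by n here is my simplification; it is not necessarily Cheung & Chan's exact weighting):

    import numpy as np

    def pooled_r(rs, ns):
        """Transform to Fisher Z, average with weights, back-transform to r."""
        rs, ns = np.asarray(rs, float), np.asarray(ns, float)
        z = np.arctanh(rs)                   # Fisher Z transform
        z_mean = np.average(z, weights=ns)   # weighted mean in the Z metric
        return np.tanh(z_mean)               # back to the correlation metric

    # Stage 2 fits the SEM to the pooled correlation (or covariance) matrix.
    print(round(pooled_r([0.30, 0.45, 0.38], [120, 80, 200]), 3))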


Structural Equation Modeling in SEM

  • Pooling correlation matrices: for each study,

    COV(rij, rkl) = [ .5 rij rkl (r²ik + r²il + r²jk + r²jl) +

    rik*rjl + ril*rjk –

    (rij*rik*ril + rji*rjk*rjl + rki*rkj*rkl +

    rli*rlj*rlk) ] / n

    Let Σi = the covariance matrix for study i and Gi = a {0,1} matrix that selects a particular correlation for examination. Then G = [ G1′ | G2′ | … | Gk′ ]′

    and Σ = diag [ Σ1, Σ2, … Σk ]


Structural Equation Modeling in SEM

Beretvas & Furlow (2006) recommended transformations of the variances and covariances:

SDr,trans = log(s) + 1/(2(n – 1))

COV(ri, rj)trans = r²ij/(2(n – 1))

The transformed covariance matrices for each study are then stacked as earlier


Clustering of effects: cluster analysis vs. latent class modeling

  • Suppose Q is significant. This implies some subset of effects is not equal to some other subset

  • Cluster analysis uses study-level variables to empirically cluster the effects into either overlapping or nonoverlapping subsets

  • Latent class analysis uses mixture modeling to group into a specified # of classes

  • Neither is fully developed theoretically; existing theory is used, and it is not clear how well they work


Multiple studies by same authors- how to treat (beyond ignoring follow-on studies), the study dependence problem

  • Example: in storybook telling literature,

    Zevenbergen, Whitehurst, & Zevenbergen (2003) was a subset of Whitehurst, Zevenbergen, Crone, Schultz, Velging, & Fischel (1999), which was a subset of Whitehurst, Arnold, Epstein, Angell, Smith, & Fischel (1994)

  • Should 1999 and 2003 be excluded, or included with adjustments to 1994?

  • Problem is similar to ANOVA: omnibus vs. contrasts

  • Currently, most people exclude later subset articles


Multiple meta-analyses: consecutive, overlapping

  • The problem of consecutive meta-analyses is now arising:

    • Follow-ons typically time-limited (after last m-a)

    • Some m-a’s partially overlap others: how should they be compared/integrated/evaluated?

    • Are there statistical methods, such as the correlational approach detailed above, that might include partial dependence?

    • Can time-relatedness be a predictor? Willson (1985)


CONCLUSIONS

  • Meta-analysis continues to evolve

  • Focus in future on complex modeling of outcomes (SEM, for example)

  • More work on integration of qualitative studies with meta-analysis findings

