
### Structural equation modeling with Mplus


E. Kevin Kelloway, Ph.D.

Canada Research Chair in Occupational Health Psychology

Overview

- Day 1: Familiarization with the Mplus environment – varieties of regression
- Day 2: Introduction to SEM – path modeling, CFA, and latent variable analysis
- Day 3: Advanced techniques – longitudinal data, multilevel SEM, etc.

Today’s Agenda

- 0900 – 1000 Introduction: The Mplus Environment
- 1000 – 1015 Break
- 1015 – 1100 Using Mplus: Regression
- 1100 – 1200 Variations on a Theme: Categorical, Censored, and Count Outcomes
- 1200 – 1300 Break
- 1300 – 1400 Multilevel Models: Some Theory
- 1400 – 1415 Break
- 1415 – 1530 Estimating Multilevel Models in Mplus

MPLUS

- Statistical modeling program that supports a wide variety of models and estimation techniques
- Explicitly designed to “do everything”
- Techniques for handling all kinds of data (continuous, categorical, zero-inflated, etc.)
- Allows for multilevel and complex data
- Allows the integration of all of these techniques

The Mplus Framework

Observed variables

- x – background variables (no model structure)
- y – continuous and censored outcome variables
- u – categorical (dichotomous, ordinal, nominal) and count outcome variables

Latent variables

- f – continuous latent variables (interactions among f’s allowed)
- c – categorical latent variables (multiple c’s allowed)

Mplus Configurations

- BASE MODEL – does regression and most versions of SEM
- Mixture add-on – adds mixture analysis (using categorical latent variables)
- Multilevel add-on – adds the potential for multilevel analysis
- Recommend the combo platter

Some Characteristics of Mplus

- Batch processor
- Text commands (no graphical interface) and keywords
- Commands can come in any order in the file
- Three main tasks
- GET THE DATA into MPLUS and DESCRIBE IT
- ESTIMATE THE MODEL of INTEREST
- REQUEST THE DESIRED OUTPUT

The Mplus Language

- 10 commands:
- TITLE – provides a title
- DATA (required) – describes the dataset
- VARIABLE (required) – names/identifies variables
- DEFINE – computes/transforms variables
- ANALYSIS – technical details of the analysis
- MODEL – the model to be estimated
- OUTPUT – specifies the output
- SAVEDATA – saves the data
- PLOT – graphical output
- MONTECARLO – Monte Carlo analysis
- Comments are denoted by ! and can appear anywhere in the file

Some conventions

- “is”, “are”, and “=” can generally be used interchangeably:
- Variable: NAMES is Bob;
- Variable: NAMES = Bob;
- Variable: NAMES are Bob;
- “-” denotes a range:
- Variable: NAMES = Bob1 - Bob5;
- “:” follows each command name
- “;” ends each statement

Getting the data into Mplus (1)

- Step 1: Move your data into a “.dat” (ASCII) file – SPSS or Excel will do this
- Step 2: Create the command file with DATA and VARIABLE statements
- Step 3 (optional): I always ask for the sample statistics so that I can check the accuracy of the data reading
- OPEN and RUN Day1 Example 1.inp

Example 1

- TITLE: This is an example of how to read data into Mplus from an ASCII File
- DATA: file is workshop1.dat;
- Variable: NAMES are sex age hours location TL PL GHQ Injury;
- USEVARIABLES = tl – injury;
- Output: Sampstat;
- Include the demographic variables in the analysis
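The bullets above correspond to a complete input file. A minimal sketch – assuming workshop1.dat is a space-delimited file sitting in the same directory as the .inp file:

```
TITLE: Read data into Mplus from an ASCII file;
DATA: FILE IS workshop1.dat;          ! free-format (space-delimited) data assumed
VARIABLE: NAMES ARE sex age hours location TL PL GHQ Injury;
          USEVARIABLES = tl - injury; ! analyze only TL through Injury
OUTPUT: SAMPSTAT;                     ! sample statistics, to verify the data were read correctly
```

Comparing the SAMPSTAT means and variances against the source file in SPSS or Excel is the quickest way to catch a misread data file.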

Output: Three major divisions

- Repeat of the input instructions – check for the proper N, number of variables, and number of groups
- Description of the analysis – check for accuracy
- Report of the results
- Fit statistics
- Parameter estimates
- Requested information (sample statistics, standardized parameters, etc.)
- NOTE: not all output is relevant to your analysis

Getting Data into MPLUS (2)

- N2Mplus – freeware program that will read SPSS or Excel files
- Will create the data file
- Will write the Mplus syntax, which can be pasted into Mplus
- Limit of 300 variables
- Watch variable name lengths (SPSS allows more characters than Mplus does)

General Goal

To predict one variable (the DV or criterion) from a set of other variables (IVs or predictors). IVs may be (and usually are) intercorrelated. Minimize squared prediction error (least squares) – maximize R.

Bivariate Regression

- Correlation: r = ΣZxZy / N
- The line of best fit (OLS regression line) is y = mx + b, where
- m = slope = r(SDy / SDx)
- b = Y intercept = Ȳ − mX̄
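As a quick arithmetic check of the formulas above, take made-up values r = .50, SDy = 10, SDx = 5, X̄ = 20, Ȳ = 50:

```latex
m = r\,\frac{SD_y}{SD_x} = 0.50 \times \frac{10}{5} = 1.0,
\qquad
b = \bar{Y} - m\bar{X} = 50 - (1.0)(20) = 30
```

so the prediction line is ŷ = 1.0x + 30.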

Multiple Regression

- Extension of bivariate regression to the case of multiple predictors
- Predictors may be (and usually are) intercorrelated, so we need to partial variance to determine the UNIQUE effect of each X on Y

Regression

- To specify a simple linear regression you simply add a MODEL command to the file:
- MODEL: DV ON IV1 IV2 IV3 … IVX;
- You also want to request some specific output to get the “normal” regression information
- Useful options are:
- SAMPSTAT – sample statistics for the variables
- STANDARDIZED – standardized parameters
- SAVEDATA: SAVE = COOKS MAHALANOBIS;
- Exercise: what predicts GHQ?
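Combining this with the earlier example, a regression of GHQ on the two leadership scales might look like the sketch below; the choice of TL and PL as predictors is an assumption for illustration, not the workshop's answer:

```
TITLE: Regression example - what predicts GHQ?;
DATA: FILE IS workshop1.dat;
VARIABLE: NAMES ARE sex age hours location TL PL GHQ Injury;
          USEVARIABLES = tl pl ghq;
MODEL: ghq ON tl pl;                 ! regress GHQ on both leadership scales
OUTPUT: SAMPSTAT STANDARDIZED;       ! sample stats plus standardized coefficients
SAVEDATA: FILE IS influence.dat;     ! assumed output file name
          SAVE = COOKS MAHALANOBIS;  ! case-level influence diagnostics
```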

LOGISTIC REGRESSION

- Typically used with a dichotomous outcome (also ordered logistic and probit models)
- Similar to regression – generates an overall test of goodness of fit
- Generates parameters and tests of parameters
- Odds ratios
- When the split is 50/50, discriminant analysis and logistic regression should give the same result
- When the split varies, logistic is preferred

TESTS

- Likelihood-ratio chi-squared – baseline-to-model comparisons
- Parameter test (B/SE)
- Odds ratio – increase/decrease in the odds of being in one outcome category when the predictor increases by 1 unit (the exponent of B, i.e., e^B)

In Mplus

- Specify the outcome as categorical (can be either binary or ordered)
- The default estimator for categorical outcomes (WLSMV) gives you a probit analysis
- Changing the estimator to ML gives you a logistic regression
- RUN Day1Example3.inp
- To dichotomize the outcome (from a multi-category or continuous measure):
- DEFINE: CUT injury (1);
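A sketch of the corresponding input; the CATEGORICAL statement and DEFINE: CUT are standard Mplus syntax, the cut point of 1 follows the slide, and the predictor set is an assumption:

```
TITLE: Logistic regression example;
DATA: FILE IS workshop1.dat;
VARIABLE: NAMES ARE sex age hours location TL PL GHQ Injury;
          USEVARIABLES = tl pl injury;
          CATEGORICAL = injury;      ! treat the outcome as categorical
DEFINE: CUT injury (1);              ! dichotomize: values <= 1 vs. > 1
ANALYSIS: ESTIMATOR = ML;            ! ML gives a logit link (logistic regression)
MODEL: injury ON tl pl;
OUTPUT: SAMPSTAT STANDARDIZED;
```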

- Data from a study of metro transit bus drivers (n = 174)
- Data on workplace violence (extent to which one has been hit/kicked, attacked with a weapon, or had something thrown at them); 1 = not at all, 4 = 3 or more times
- Data cleaning suggests a highly skewed and kurtotic distribution
- Descriptive statistics:

| Variable | N | Min | Max | Mean | Std. Dev. | Skewness (SE) | Kurtosis (SE) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| violence | 170 | 1.00 | 3.00 | 1.2353 | .37623 | 1.900 (.186) | 3.677 (.370) |

- Valid N (listwise) = 170

- **Negative Binomial.** This distribution can be thought of as the number of trials required to observe k successes and is appropriate for variables with non-negative integer values. If a data value is non-integer, less than 0, or missing, the corresponding case is not used in the analysis. The fixed value of the negative binomial distribution’s ancillary parameter can be any number greater than or equal to 0. When the ancillary parameter is set to 0, using this distribution is equivalent to using the Poisson distribution.
- **Normal.** Appropriate for scale variables whose values take a symmetric, bell-shaped distribution about a central (mean) value. The dependent variable must be numeric.
- **Poisson.** This distribution can be thought of as the number of occurrences of an event of interest in a fixed period of time and is appropriate for variables with non-negative integer values. If a data value is non-integer, less than 0, or missing, the corresponding case is not used in the analysis.

Some Observations on Count Data

- Counts are discrete, not continuous
- Counts are generated by a Poisson distribution (a discrete probability distribution)
- Poisson distributions are typically problematic because they:
- are skewed (by definition non-normal)
- are non-negative (cannot have negative predicted values)
- have non-constant variance – variance increases as the mean increases

BUT…

Poisson regression also makes some very restrictive assumptions about the data (i.e., that the underlying rate of the DV is the same for all individuals in the population, or that we have measured every possible influence on the DV)

The Negative Binomial Distribution

- Allows for more variance than the Poisson model does (less restrictive assumptions)
- Can fit a Poisson model and calculate the dispersion (deviance/df); dispersion close to 1 indicates no problem; if there is overdispersion, use the negative binomial
- Poisson, but not negative binomial, is available in Mplus

Zero-Inflated Poisson Regression (ZIP Regression)

Zero-Inflated Negative Binomial Regression (ZINB Regression)

- Assumes two underlying processes:
- predict whether one scores 0 or not 0
- predict the count for those scoring > 0

Day1 Example4

- Run to obtain a Poisson Regression
- Outcome is specified as a count variable
- To obtain a ZIP regression run Day1 Example5
- Note that one can specify different models for occurrence and frequency
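A sketch of a ZIP specification: COUNT = injury (i); is the standard Mplus way to request zero inflation, and the two-part model below illustrates the slide's note that occurrence and frequency can have different predictors (the specific predictors are assumptions):

```
TITLE: Zero-inflated Poisson (ZIP) regression example;
DATA: FILE IS workshop1.dat;
VARIABLE: NAMES ARE sex age hours location TL PL GHQ Injury;
          USEVARIABLES = tl pl injury;
          COUNT = injury (i);        ! (i) requests the zero-inflated model
MODEL: injury ON tl pl;              ! count part: frequency for those at risk
       injury#1 ON tl;               ! inflation part: predicting the structural zeros
OUTPUT: SAMPSTAT;
```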

- What is the correlation between X and Y?
- Descriptive statistics:

| Variable | Mean | Std. Deviation | N |
| --- | --- | --- | --- |
| x | 8.0000 | 4.42396 | 15 |
| y | 8.0000 | 4.42396 | 15 |

- Correlations (listwise N = 15): Pearson r(x, y) = .912, p < .001 (2-tailed)

Split Sample by Group

- Group 1 r = 0.0 Mean = 3 N=5
- Group 2 r = 0.0 Mean = 8 N=5
- Group 3 r = 0.0 Mean = 13 N=5

- Multilevel data occur when responses are grouped (nested) within one or more higher-level units
- e.g., employees nested within teams/groups
- Longitudinal data – observations nested within individuals
- Creates a series of problems that may not be accounted for by standard techniques (e.g., regression, SEM, etc.)

Some Problems with MultiLevel Data

- Individuals within each group are more alike than individuals from different groups (variance is distorted) – a violation of the assumption of independence
- We may want to predict level 1 responses from level 2 characteristics (e.g., does company size predict individual job satisfaction?). If we analyze at the lowest level only, we under-estimate variance and hence standard errors, leading to inflated Type 1 error – we find effects where they don’t exist
- Aggregation to the highest level may distort the variables of interest (or may not be appropriate)

- Simpson’s paradox – completely erroneous conclusions may be drawn if grouped data, drawn from heterogeneous populations, are collapsed and analyzed as if drawn from a single population
- Ecological fallacy – the mistake of assuming that the relationship between variables at the aggregated (higher) level will be the same at the disaggregated (lower) level

- Essentially an extension of a regression model:
- Y = mx + b + error
- Multilevel models allow the regression parameters (intercept (b) and slope (m)) to vary across the groups comprising your sample
- They also allow us to predict that variation – to ask why groups might vary in intercepts or slopes
- Intercept differences imply mean differences across groups
- Slope differences indicate different relationships (e.g., correlations) across groups

- Attempting to explain (partition) variance in the DV
- Why don’t we all score the same on a given variable?
- Simplest explanation is error – an individual’s score is the grand mean + error
- If employees are in groups, then the variance of the level 1 units has at least two components: the variance of individuals around their group mean (within-group variance) and the variance of the group means around the grand mean (between-group variance)
- This is known as the intercepts-only, “variance components”, or “unconditional” model – it is a baseline that incorporates no predictors

- Can introduce predictors either at level 1 or level 2 or both to further explain variance
- Can allow the effects of level 1 predictors to vary across groups (random slopes)
- Can examine interactions within and across levels
- Can incorporate quadratic terms etc

- To create level 2 observations we often need to aggregate variables to the higher level and to merge the aggregated data with our level 1 data. To aggregate you need to specify [a] the variables to be aggregated, [b] the method of aggregation (sum, mean etc) and [c] the break variable (definition for level 2)
- SPSS allows you to aggregate and save group level data to the current file using the aggregate command
- Mplus allows you to do this within the Mplus run

- If you choose to aggregate, there should be some empirical support (i.e., evidence of similar responses within groups). Some typical measures are:
- ICC(1) – the intraclass correlation: the extent to which variance is attributable to group differences. From ANOVA: (MSb − MSw) / (MSb + (c − 1)MSw), where c = average group size
- ICC(2) – the reliability of the group means: (MSb − MSw) / MSb
- rwg (multiple variants) – indices of within-group agreement
- Mplus calculates the ICC for random intercept models
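As a quick worked example of those two formulas, using made-up values MSb = 10, MSw = 2, and average group size c = 5:

```latex
ICC(1) = \frac{MS_b - MS_w}{MS_b + (c-1)MS_w} = \frac{10 - 2}{10 + 4(2)} = \frac{8}{18} \approx .44,
\qquad
ICC(2) = \frac{MS_b - MS_w}{MS_b} = \frac{10 - 2}{10} = .80
```

Here 44% of the variance would be attributable to group membership, and the group means would be reliable enough to support aggregation.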

- Centering a variable helps us interpret the effects of predictors. In the simplest sense, centering involves subtracting the mean from each score (resulting in a distribution of deviation scores with a mean of 0)
- Centering (among other things) helps with convergence by imposing a common scale
- GRAND MEAN centering – involves subtracting the sample mean from each score
- GROUP MEAN centering – involves subtracting the group mean from each score – must be done manually

- Grand mean – each score is measured as a deviation from the grand mean. The intercept is the score of an individual who is at the mean of all predictors – “the average person”
- Group mean – each score is measured as a deviation from the group mean. The intercept is the score of an individual who is at the mean of all predictors in the group – “the average person in group X”
- Grand mean is the same transformation for all cases – for fixed main effects and overall fit, it gives the same results as raw data
- Group mean is different for each group – it gives different results

- Grand mean – helps model fitting, aids interpretation (meaningful 0), and may reduce collinearity when testing interactions, between-model parameters, or squared effects – but may reduce meaning if raw scores actually “mean something”
- Group mean – helps model fitting; can remove collinearity if you are including both group (aggregate) and individual measures of the same construct in the model (the aggregate data explain between-group variance and the individual-level data explain within-group variance)

- Grand mean – may be appropriate when the underlying model is either incremental (group effects add to individual-level effects) or mediational (group effects exert influence through the individual)
- Group mean – may be more appropriate when testing cross-level interactions
- Hofmann & Gavin (1998) – Journal of Management

How many subjects = how long is a piece of string?

- Calculations are complex and depend on intraclass correlations, sample size, effect size, etc.
- In general, power at level 1 increases with the number of observations, and at level 2 with the number of groups
- Hox (2002) recommends 30 observations in each of 30 groups; Heck & Thomas (2000) suggest 20 groups with 30 observations in each
- Others suggest that even k = 50 is too small
- Practical constraints likely rule
- Better to have a large number of groups with fewer individuals in each than a small number of groups with large group sizes

- Occasionally (about 50% of the time) the program will not converge on a solution and will report a partial solution (i.e., not all parameters)
- In my experience, lack of convergence is a direct function of sample size (small samples = convergence failures)
- The easiest fix is to ensure that this is not a scaling issue – i.e., that all variables are measured on roughly the same metric (standardize)
- The single most frustrating aspect of multilevel models

- 1. Ensure the data are structured/arranged properly (aggregated, centered, etc.) – most of this can be done in Mplus
- 2. Run a null model – the null model estimates a grand-mean-only model and provides a baseline for comparison
- 3. Run the unconditional model (grouping but no predictors) – assess ICC(1) and whether varying intercepts is appropriate; a low ICC(1) leads one to question the importance of a multilevel model (although this can be controversial)
- 4. Incorporate level 1 predictors. Assess the change in fit, level 1 variance, and level 2 variance – we are starting to move into conditional models; this is equivalent to modeling our data as a series of parallel lines (one for each group) – slopes are the same but intercepts are allowed to vary
- 5. Allow slopes to vary. Assess fit, change in variance, etc. We can now also estimate the covariance between intercept and slope effects, which may be of interest
- 6. Incorporate level 2 predictors – these explain group-level but not individual-level variance

Testing Models: -2 Log Likelihood

- A global test of model adequacy is given by the −2 log likelihood statistic – also known as the model deviance
- We can examine the change in deviance as models are made more complex
- There is no equivalent difference test under REML (restricted maximum likelihood)

Testing Models: Percentage of variance

- No direct equivalent to R-squared, because there are multiple portions of variance
- Can focus on explaining variance at either the group or the individual level (i.e., reducing the residual)
- One useful approach is to calculate the variance explained at each step of the model:
- (variance before the predictor is added − variance after) / variance before
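For example (made-up numbers), if the within-group residual variance is 4.0 in the baseline model and 3.2 after adding a level 1 predictor:

```latex
R^2_{\text{within}} = \frac{4.0 - 3.2}{4.0} = .20
```

i.e., the predictor accounts for 20% of the within-group variance.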

Testing Models: Parameter tests

- Statistical tests of parameters
- Analogous to the tests of regression (B) coefficients in regression
- Tests the null hypothesis that the parameter is 0

- Run Day1Example6 to read in the data
- Measures include GHQ, transformational leadership, and a team identifier
- Sample: total N = 851 in 31 locations
- Start by estimating the variance components (random intercept only) model
- On the VARIABLE command, specify USEVARIABLES = ghq team;
- Specify CLUSTER = team;
- Add an ANALYSIS command:

ANALYSIS: TYPE = TWOLEVEL;
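Putting those pieces together, a sketch of the unconditional (random intercept only) model; the data file name and variable list are assumptions, but the VARIABLE and ANALYSIS statements follow the slides:

```
TITLE: Variance components (unconditional) model;
DATA: FILE IS workshop1.dat;         ! assumed data file name
VARIABLE: NAMES ARE ghq tfl team;    ! assumed variable list for Day1Example6
          USEVARIABLES = ghq team;
          CLUSTER = team;            ! team is the level 2 (grouping) identifier
ANALYSIS: TYPE = TWOLEVEL;
OUTPUT: SAMPSTAT;                    ! the output also reports the ICC
```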

Implementing the Analysis (cont’d)

- Hypotheses
- GHQ varies across team
- GHQ is predicted by leadership
- Effect of leadership on stress varies by location

Random Intercept Models

- Run Day1Example6.inp – the variance components model – a random intercept only model
- Add in the within-group predictor TFL
- Need to include tfl on the USEVARIABLES line
- Specify the centering: CENTERING = GRANDMEAN (tfl);
- Specify the within-group model:
- MODEL: %WITHIN%
- ghq ON tfl;
- Maybe try the between-group model:
- MODEL: %BETWEEN%
- ghq ON tfl;
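Assembled into one file, a sketch of the random intercept model with the within-group predictor added (file and variable names assumed as before):

```
TITLE: Two-level model with a within-group predictor;
DATA: FILE IS workshop1.dat;          ! assumed data file name
VARIABLE: NAMES ARE ghq tfl team;     ! assumed variable list
          USEVARIABLES = ghq tfl team;
          CLUSTER = team;
          CENTERING = GRANDMEAN (tfl);  ! grand-mean center the predictor
ANALYSIS: TYPE = TWOLEVEL;
MODEL: %WITHIN%
       ghq ON tfl;                    ! within-group (level 1) effect
       %BETWEEN%
       ghq ON tfl;                    ! between-group effect (Mplus aggregates tfl)
OUTPUT: SAMPSTAT;
```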

Variable types

- In Mplus twolevel analyses, variables are specified as either WITHIN (can only be modeled in the within-group model) or BETWEEN (can only be modeled in the between-group model)
- Unspecified variables will be used appropriately (if used in the between-group model, Mplus will calculate the aggregate score on the variable)

Random Slope Models

- Add “RANDOM” to the TYPE statement: TYPE = TWOLEVEL RANDOM;
- Specify the random slope in the within model as S | Y ON X; where S is the name of the slope, Y is the DV, and X is the predictor, e.g.:
- %WITHIN%
- s | ghq ON tfl;
- In the between model, allow the random slope to correlate with the random intercept:
- ghq WITH s;
- Predict the random slope:
- s ghq ON tfl;
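The pieces above combine into a full random slope model, sketched below (file and variable names assumed as before); note that s ghq ON tfl; regresses both the random slope and the random intercept on the aggregated predictor:

```
TITLE: Random slope model;
DATA: FILE IS workshop1.dat;          ! assumed data file name
VARIABLE: NAMES ARE ghq tfl team;     ! assumed variable list
          USEVARIABLES = ghq tfl team;
          CLUSTER = team;
          CENTERING = GRANDMEAN (tfl);
ANALYSIS: TYPE = TWOLEVEL RANDOM;     ! RANDOM allows randomly varying slopes
MODEL: %WITHIN%
       s | ghq ON tfl;                ! s = random slope of ghq on tfl
       %BETWEEN%
       ghq WITH s;                    ! intercept-slope covariance
       s ghq ON tfl;                  ! predict slope and intercept from aggregated tfl
OUTPUT: SAMPSTAT;
```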

Extensions

- Can use any of the techniques previously discussed
- Specify outcomes as binary or ordered (multilevel logistic), multilevel Poisson, etc.

Can incorporate multilevel regressions into path or SEM analyses (more about this later)
