Stat 415a: Structural Equation Modeling (Fall 2013)
Instructor: Fred Lorenz
Department of Statistics
Department of Psychology

Outline:
Part 1: Introduction
Part 2: Classical path analysis
Part 3: Non-recursive models for reciprocity
Part 4: Measurement models
Part 5: Model integration
Part 6: Concluding comments
The language of structural equations
Motivation: Why structural equation modeling?
Historic overview of SEM
Introduction to the data sets used in class examples and homework assignments
Why structural equation modeling?
* Explanation rather than prediction
* Conceptual simplification (parsimony)
* Reliability (random measurement error)
* Validity (systematic measurement error)
* Occam’s razor: all things being equal, we prefer the simplest (most parsimonious) explanation
* And we avoid the “balkanization” of concepts
* We distinguish “latent” variables from their observed manifestations
* And we distinguish causal (formative) indicators from effect (reflective) indicators
Reliability (random measurement error)
Validity (systematic measurement error)
Sewall Wright traced genetic heritability
Path analysis (recursive models) in Sociology (Duncan 1966).
Non-recursive (simultaneous equation) models in economics
Measurement models from psychometrics (Lawley 1940)
Integrated recursive & non-recursive measurement error models (Jöreskog, Keesling & Wiley notation).
The Iowa Family Transition Project (FTP)
550 rural Iowa families (1989 – 2009)
Interview: mother, father, “target child”
And later, target’s romantic partner
Some themes. . .
Model estimation with PROC REG and PROC CALIS
Model evaluation & model comparisons
The decomposition of effects
Ordinary least squares regression (PROC REG in SAS)
Maximum likelihood estimation, or variations on it (PROC CALIS)
Standardized vs. unstandardized coefficients
Evaluate specific paths of a model
Is a proposed path significant?
Is a hypothesized null path really non-significant?
Is there evidence of mediation? Spuriousness?
Evaluate the overall model
How well does a model fit the data?
The two extremes:
saturated (fully recursive) model
null model of complete independence
Where does your model fit?
Implicit model comparison and the chi-square test
Explicit model comparisons and the change in chi-square.
What does the chi-square test statistic do when comparing models?
Compare the expected distribution under the null hypothesis to the observed distribution
The greater the difference between expected and observed distributions, the larger the chi-square
T = (N – 1)F
F = the minimized discrepancy between the observed and expected (model-implied) moments
df = p(p + 1)/2 – t
t = # parameters being estimated
T ~ χ²(df)
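As a concrete sketch of these two formulas, the statistic and its degrees of freedom can be computed directly; the values of N, F, p, and t below are illustrative, not taken from the course data sets.

```python
# Sketch of the model chi-square computation.
# N, f_min, p, and t are made-up illustrative values.

def model_chi_square(n_obs, f_min):
    """T = (N - 1) * F, where F is the minimized fit-function value."""
    return (n_obs - 1) * f_min

def model_df(p, t):
    """df = p(p + 1)/2 - t: unique variances/covariances minus free parameters."""
    return p * (p + 1) // 2 - t

T = model_chi_square(n_obs=550, f_min=0.05)   # 549 * 0.05 = 27.45
df = model_df(p=6, t=13)                      # 21 moments - 13 parameters = 8
print(T, df)
```

With 6 observed variables there are 6(7)/2 = 21 unique moments, so estimating 13 parameters leaves 8 degrees of freedom.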
Absolute fit: Compare model to the “saturated” model, which fits the data perfectly
Relative fit: Compare model to the model of complete independence (like a model with no predictors in OLS regression)
Goodness of fit (GFI) and adjusted goodness of fit (AGFI) indices
Standardized root mean residual (SRMR)
Root mean square error of approximation (RMSEA)
Goal: Get SRMR and RMSEA as small (close to zero) as possible; GFI and AGFI should be close to one
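A minimal sketch of the RMSEA calculation, assuming the common closed form with N – 1 scaling (some programs use N instead); the chi-square, df, and N values are illustrative, not class results.

```python
import math

def rmsea(chi_sq, df, n_obs):
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))).
    Values near zero indicate close approximate fit."""
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n_obs - 1)))

# Illustrative values only: chi-square 27.45 on 8 df, N = 550
print(round(rmsea(27.45, 8, 550), 3))
```

Note that when the chi-square is smaller than its df, the max(…, 0) term keeps the index at zero rather than imaginary.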
The correlation between two variables can be decomposed into 4 parts:
* direct effects
* indirect effects
* spurious effects
* associational effects
The total effect of one variable on another can be decomposed into 2 parts:
Total effect = Direct effect + Indirect effect
To calculate, use EFFPART in PROC CALIS
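A toy numeric sketch of the total = direct + indirect decomposition, using hypothetical standardized coefficients for a three-variable chain x → m → y with an added direct x → y path (not estimates from the FTP data):

```python
# Hypothetical standardized path coefficients (illustrative only).
b_xm = 0.4   # x -> m
b_my = 0.5   # m -> y
b_xy = 0.3   # direct x -> y

indirect = b_xm * b_my    # product of coefficients along the indirect path
total = b_xy + indirect   # total effect = direct + indirect
print(indirect, total)
```

The indirect effect is the product of the coefficients along the compound path (0.4 × 0.5 = 0.20), and the total effect adds the direct path (0.30 + 0.20 = 0.50), mirroring what EFFPART reports.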
Reciprocity & causal order in survey data. . .
Model specification: writing the equations
Interpreting model results: reciprocity and causal order
Model comparison and evaluation
Identification in simple algebra:
* two unknowns require at least two equations
* k unknowns require at least k equations
In SEM: the necessary condition
df = (p)(p+1)/2 – t
t = number of parameters estimated
p = number of observed variables.
Bollen’s (1989) rules for identification of non-recursive models
Line up the equations
The necessary order condition
The sufficient rank condition
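As a sketch of the necessary order condition, consider a hypothetical two-equation non-recursive system (the specification here is illustrative, not one of the course models): each equation must exclude at least m – 1 predetermined variables, where m is the number of endogenous variables.

```python
# Hypothetical two-equation non-recursive system:
#   y1 = b12*y2 + g11*x1 + e1   (excludes x2)
#   y2 = b21*y1 + g22*x2 + e2   (excludes x1)
m = 2  # number of endogenous variables (one equation each)

# Number of predetermined variables excluded from each equation
excluded_per_equation = [1, 1]

# Order condition: every equation must exclude at least m - 1 variables
order_condition_met = all(k >= m - 1 for k in excluded_per_equation)
print(order_condition_met)
```

The order condition is only necessary; the sufficient rank condition still has to be checked on the matrix of excluded-variable coefficients.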
Some observations on identification
Measurement error vs. mistakes in measuring
* A note on classical test theory
Random vs. systematic measurement error
* The concept of method variance
* Managing method variance
Confirmatory vs. exploratory factor analysis
* A note on notation
* Write down the (restricted) equations
Confirmatory factor analysis (CFA)
* Model specification & identification
* Model estimation & evaluation
Rules for identification
* The usual (necessary) t-rule
* The three indicator rule
* The two indicator rule
Comments on identification
Nested vs. non-nested models
Evaluating specific models using the chi-square
Comparing nested models using the change in chi-square: indices and graphic displays
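A sketch of the chi-square difference test for nested models; the fit statistics below are made-up illustrative values, and the 5.99 critical value is the .95 quantile of the chi-square distribution with 2 df.

```python
# Illustrative chi-square difference test for nested models.
T_restricted, df_restricted = 35.20, 10   # model with extra paths fixed to zero
T_full, df_full = 27.45, 8                # less restrictive nested model

delta_T = T_restricted - T_full           # change in chi-square (about 7.75)
delta_df = df_restricted - df_full        # change in df: 2

CHI2_CRIT_05_DF2 = 5.99                   # .95 quantile of chi-square, 2 df
reject_restrictions = delta_T > CHI2_CRIT_05_DF2
print(delta_df, reject_restrictions)
```

Because the change in chi-square exceeds the critical value, the restrictions significantly worsen fit and the less restrictive model is preferred.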
What do integrated models look like?
Integrating the measurement & structural components
Specifying (writing) the equations
Is the measurement model identified?
Is the structural portion of the model identified?
Model estimation using PROC CALIS
Standardized vs. unstandardized coefficients
Significant and non-significant structural paths
Significant and non-significant correlated errors
Chi-square goodness of fit statistic for absolute fit
Relative fit indices for comparing nested models
Complete homework projects