
Gifts in the treasure chest of Methodology: A personal view


Presentation Transcript


  1. Gifts in the treasure chest of Methodology: A personal view Rolf Steyer Friedrich Schiller University Jena Institute of Psychology Department of Methodology and Evaluation Research Germany

  2. Items in the treasure chest • Latent Class Models • Mixture Distribution Models • Structural Equation Models • Generalized Linear Models • Loglinear Models • Multilevel Models • CART (Classification and Regression Trees) • ... • LISREL and the other SEM programs • ConQuest and other IRT programs • ... • EM-Algorithm • Newton-Raphson • ...

  3. Outline • Measurement • Causality • Statistics

  4. Outline • Measurement • Causality • Statistics • I have deliberately chosen this order, because: • (Causal) modeling does not make sense if we don't have reasonable measurements • Statistical analysis does not make sense if we don't have reasonable measurements and, when we are looking for causal effects (which we do most of the time), a causal model

  5. Measurement: The foundation of every science • Fundamental measurement theory • IRT, uni- and multidimensional • SEM modeling, including models for ordinal variables • Multidimensional Scaling

  6. Measurement • Measurement • is much more than assigning numbers to observations • defines the concepts to which empirical research really refers • explicates the relationship between observations and theoretical concepts (constructs)

  7. Measurement • Measurement • is not merely reading numbers from a meter stick • rather, it means introducing the concept of length • and spelling out the rules for assigning numbers to observations that represent the lengths of the objects considered

  8. Measurement (cont'd) • Measurement defines our theoretical concepts by • selecting the observables or items • specifying a mathematical measurement model relating the observables to the theoretical concepts • Of course, both of these points require substantive theory and ideas; they are part of the substantive theory.
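A minimal sketch of the second point, assuming a simple linear (congeneric) measurement model for three hypothetical items. All loadings, intercepts, and error variances below are illustrative choices, not values from the talk; the point is only that the observables correlate solely because they share the latent variable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                  # persons

loadings = np.array([1.0, 0.8, 1.2])        # hypothetical item loadings
intercepts = np.array([0.0, 0.5, -0.3])     # hypothetical item intercepts
error_sd = np.array([0.6, 0.7, 0.5])        # hypothetical error SDs

eta = rng.normal(0.0, 1.0, size=n)          # the latent construct

# Measurement model: Y_i = intercept_i + loading_i * eta + error_i
Y = intercepts + loadings * eta[:, None] + rng.normal(0.0, error_sd, size=(n, 3))

# The items correlate only via the shared latent variable.
print(np.corrcoef(Y, rowvar=False).round(2))
```

The off-diagonal correlations are implied by the model (loading_i * loading_j divided by the product of the item standard deviations), which is exactly the sense in which the measurement model, not the raw numbers, defines the theoretical concept.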

  9. Measurement (cont'd) • The measurement model determines • the logical nature of our theoretical variables (metric, ordinal, or nominal concepts) • the scale level of the theoretical variables

  10. Measurement (cont'd) Latent variable models are flexible enough to model complex reality

  11. Measurement (cont'd)

  12. Measurement (cont'd) • IRT models • introduce metric variables on the basis of qualitative variables • allow adaptive testing • have recently been going multivariate ...
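The first bullet can be made concrete with the simplest IRT model, the Rasch (1PL) model, which maps a qualitative response (correct/incorrect) to metric person and item parameters. The theta and b values below are illustrative, not taken from the talk.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Rasch (1PL) model: P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is the person's latent ability and b the item difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person at theta = 0 facing items of increasing difficulty:
for b in (-1.0, 0.0, 1.0):
    print(f"b = {b:+.1f}: P(correct) = {rasch_prob(0.0, b):.3f}")
```

When theta equals b, the probability of a correct response is exactly 0.5; adaptive testing exploits this by repeatedly picking the item whose difficulty is closest to the current ability estimate.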

  13. Measurement (cont'd) • Where are we going? • IRT models and structural equation models will merge • Adequate and sophisticated latent variable modeling will increase

  14. Causality • we know much more than "correlation is not causality" or "noncorrelation is not noncausality" • what we really want are individual causal effects; what we get are expectations and their difference, which in general is not the average of the individual causal effects
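The second bullet can be illustrated in the potential-outcomes notation: a toy simulation, with all distributions chosen purely for illustration, in which every unit has its own causal effect and treatment assignment is confounded with the baseline outcome. The difference of the two conditional expectations then fails to recover the average individual effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical potential outcomes: each unit has its own causal effect.
y0 = rng.normal(0.0, 1.0, n)             # outcome without treatment
effect = rng.normal(1.0, 0.5, n)         # individual causal effects vary
y1 = y0 + effect                         # outcome with treatment

# Confounded (non-randomized) assignment: units with high y0
# are more likely to end up in the treatment group.
p_treat = 1.0 / (1.0 + np.exp(-y0))
treated = rng.random(n) < p_treat

diff_of_expectations = y1[treated].mean() - y0[~treated].mean()
average_effect = effect.mean()

print(f"difference of expectations: {diff_of_expectations:.2f}")
print(f"average individual effect:  {average_effect:.2f}")
```

With randomized assignment the two numbers would coincide (up to sampling error); under confounding the naive difference is biased upward here, because the treated group already had higher y0.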

  15. Causality (cont'd) • If we have a client, we have at least two alternative treatments for him or her • we are able to decide which treatment to choose only when we have a hypothesis about the individual effect of the treatment compared to its alternative • when there is no knowledge about the individual effect, we need at least some knowledge about the average effect

  16. Causality (cont'd)

  17. Causality (cont'd) • we know • sufficient conditions for causal unbiasedness • necessary conditions for unconfoundedness • how to analyze causal effects in nonorthogonal ANOVA • how to test for unconfoundedness

  18. Causality (cont'd) • Where are we going? We will learn more about • nonexperimental design and analysis • systems of regression equations (causality in SEMs) • generalizing to distributions • applying these methods to sophisticated data

  19. Statistics • statistical tests are important, but are we really happy with knowing the probability of the test statistic being this extreme or more extreme under the assumption of H0? • Shall we go Bayesian and ask for the probabilities of hypotheses instead? • How long will it take for resampling procedures (bootstrapping etc.) to become an easy tool in our standard software?
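The third bullet is easy to demonstrate even without specialized software: a percentile bootstrap for the mean of a skewed toy sample (the exponential data and sample size are illustrative choices, not from the talk).

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.exponential(scale=2.0, size=200)   # skewed toy data

# Percentile bootstrap: resample with replacement, recompute the
# statistic many times, and read off the empirical quantiles.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5_000)
])
lo, hi = np.quantile(boot_means, [0.025, 0.975])

print(f"mean = {sample.mean():.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```

No normality assumption and no derived standard-error formula are needed, which is precisely why resampling deserves a place in standard software.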

  20. Conclusions • There is much more in our treasure chest than any individual could ever learn and apply • Hence we need to organize, distribute, and teach our knowledge in a more efficient way • Intensifying cooperation in Europe may be one way to achieve these goals

  21. Where to find more For example, this PowerPoint file and other useful material, such as papers and information on workshops, may be found at: http://www.uni-jena.de/svw/metheval/ Or mail to: rolf.steyer@uni-jena.de
