
Some key developments in data analysis

Presentation Transcript


  1. Some key developments in data analysis Michael Babyak, PhD

  2. Areas of development • Discarding flawed techniques • New types of models • Treatment of missing data • Simulation and empirical tests • Validation

  3. Techniques largely discredited or highly suspect • Categorization of continuous variables without good reason • Automated variable selection without validation • Overfitted or “cherry-picked” models
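To make the first bullet concrete, here is a minimal simulation sketch (my own illustration, not part of the original slides): dichotomizing a continuous predictor at the median discards information, so the apparent variance explained drops even though the true relationship is unchanged.

```python
# Illustrative sketch: why categorizing a continuous predictor is suspect.
# We simulate y = 0.4*x + e, then compare the variance explained when x is
# used as-is versus median-split into two groups.
import numpy as np

rng = np.random.default_rng(0)
n, n_sims = 200, 2000
r2_cont, r2_split = [], []

for _ in range(n_sims):
    x = rng.normal(size=n)
    y = 0.4 * x + rng.normal(size=n)
    x_split = (x > np.median(x)).astype(float)   # median split (dichotomized)
    r2_cont.append(np.corrcoef(x, y)[0, 1] ** 2)
    r2_split.append(np.corrcoef(x_split, y)[0, 1] ** 2)

print(f"mean R^2, continuous x : {np.mean(r2_cont):.3f}")
print(f"mean R^2, median split : {np.mean(r2_split):.3f}")  # noticeably smaller
```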

  4. New types of models • Regression family • Clustered data • Factor analysis family

  5. Generalized Linear Model [flattened diagram] The outcome's distribution picks the specific model within one framework: • Normal: General Linear Model / Linear Regression, ANOVA/t-test, ANCOVA, transformed outcomes • Binary/Binomial: Logistic Regression, Chi-square • Count, heavy skew, lots of zeros: Poisson, ZIP, Negbin, gamma • Can be applied to clustered (e.g., repeated measures) data
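As a rough sketch of the idea in this diagram (my own example, assuming Python with numpy and statsmodels; the data are simulated), three members of the GLM family share the same linear predictor while the outcome's distribution chooses the family.

```python
# Sketch of the generalized linear model family: the outcome's distribution
# picks the family, the linear predictor stays the same.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
X = sm.add_constant(rng.normal(size=n))          # intercept + one predictor
eta = 0.5 * X[:, 1]                              # linear predictor (true slope 0.5)

y_normal = eta + rng.normal(size=n)                    # continuous outcome
y_binary = rng.binomial(1, 1 / (1 + np.exp(-eta)))     # binary outcome
y_count = rng.poisson(np.exp(eta))                     # count outcome

fits = {
    "linear regression (Gaussian)  ": sm.GLM(y_normal, X, family=sm.families.Gaussian()).fit(),
    "logistic regression (Binomial)": sm.GLM(y_binary, X, family=sm.families.Binomial()).fit(),
    "Poisson regression            ": sm.GLM(y_count, X, family=sm.families.Poisson()).fit(),
}
for name, fit in fits.items():
    print(f"{name} slope estimate: {fit.params[1]:.3f}")
```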

  6. Factor Analytic Family [diagram] • Structural Equation Models • Partial Least Squares • Latent Variables (Common Factor Analysis) • Multiple regression • Principal Components
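A small illustration of the principal-components end of this family (my own sketch, numpy only): three indicators that share one latent factor are summarized by the eigendecomposition of their correlation matrix, and the first component tracks the shared factor.

```python
# Sketch: principal components of three indicators driven by one shared factor.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
factor = rng.normal(size=n)                          # shared latent variable
indicators = np.column_stack(
    [0.7 * factor + rng.normal(scale=0.7, size=n) for _ in range(3)]
)

z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)              # eigenvalues in ascending order
print("eigenvalues:", np.round(eigvals[::-1], 2))    # one dominant component
first_pc = z @ eigvecs[:, -1]                        # scores on the first component
# the sign of an eigenvector is arbitrary, so compare the absolute correlation
print("|r(first PC, latent factor)| =",
      round(abs(np.corrcoef(first_pc, factor)[0, 1]), 2))
```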

  7. You Use Latent Variables Every Day • A Single Measurement is an indicator of an underlying phenomenon, e.g. mercury rising in a sphygmomanometer measures the underlying construct of “blood pressure.” • How do you improve the reliability of blood pressure measurement? Measure more than once, perhaps even in different settings (e.g. ambulatory monitoring). • A Psychometric Scale is likewise a collection of indicators of an underlying process, triangulating on an underlying construct through multiple items (indicators). • A Latent Variable is a collection of indicators with the unshared/unreliable part of the indicators removed, so what’s the problem?
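The blood-pressure argument can be sketched in a few lines (illustrative simulation, not from the slides): each reading is the latent "true" value plus error, and the average of several readings correlates more strongly with the construct than any single reading does.

```python
# Sketch of the reliability argument: averaging multiple noisy indicators
# tracks the underlying construct more closely than a single indicator.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
true_bp = rng.normal(120, 10, size=n)                          # latent construct
readings = true_bp[:, None] + rng.normal(0, 8, size=(n, 5))    # 5 noisy indicators

r_single = np.corrcoef(readings[:, 0], true_bp)[0, 1]
r_mean = np.corrcoef(readings.mean(axis=1), true_bp)[0, 1]
print(f"single reading vs. true BP : r = {r_single:.2f}")
print(f"mean of 5 readings vs. true: r = {r_mean:.2f}")        # closer to 1
```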

  8. Missing Data • Imputation or related approaches are almost ALWAYS better than deleting incomplete cases • Multiple Imputation • Full Information Maximum Likelihood
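A hedged sketch of multiple imputation, assuming scikit-learn's IterativeImputer (a chained-equations style imputer) is available and using simulated data: several imputed datasets are created, the model is fit to each, and the estimates are pooled rather than dropping incomplete cases.

```python
# Sketch: multiple imputation instead of deleting incomplete cases.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
n = 400
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 0.4 * x1 + 0.3 * x2 + rng.normal(size=n)

data = np.column_stack([y, x1, x2])
data[rng.random((n, 3)) < 0.15] = np.nan         # ~15% of values missing at random

slopes = []
for m in range(5):                               # 5 imputed datasets
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    filled = imputer.fit_transform(data)
    yi = filled[:, 0]
    Xi = np.column_stack([np.ones(n), filled[:, 1:]])
    slopes.append(np.linalg.lstsq(Xi, yi, rcond=None)[0][1])

print("pooled estimate of the x1 slope:", round(float(np.mean(slopes)), 3))
```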

  9. Out of Missing Data Work • Propensity Scoring • “Matches” individuals on multiple dimensions to improve “baseline balance” • Complier Average Causal Effect (CACE) • Generates a guess at the effect of a treatment among all potential compliers, including those in the control arm
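The propensity-scoring bullet can be sketched as follows (illustrative only; simulated data, statsmodels for the logistic model): the probability of receiving treatment is modeled from the covariates, and each treated subject is matched to the control with the nearest score to improve baseline balance.

```python
# Sketch of propensity-score matching on simulated observational data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
age, severity = rng.normal(size=n), rng.normal(size=n)
treated = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * severity + 0.3 * age))))

X = sm.add_constant(np.column_stack([age, severity]))
propensity = sm.Logit(treated, X).fit(disp=0).predict(X)

t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
# nearest-neighbor match (with replacement) on the propensity score
matches = c_idx[np.abs(propensity[c_idx][None, :] -
                       propensity[t_idx][:, None]).argmin(axis=1)]

# after matching, the covariates should be better "balanced" across groups
print("severity gap before matching:",
      round(severity[t_idx].mean() - severity[c_idx].mean(), 2))
print("severity gap after matching :",
      round(severity[t_idx].mean() - severity[matches].mean(), 2))
```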

  10. Simulation Example [diagram] Data are generated from the known model Y = .4 X + error; repeated samples bs1, bs2, bs3, bs4, …, bsk-1, bsk are drawn and then evaluated.

  11. True Model: Y = .4*x1 + e
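A minimal version of the simulation on slides 10-11 (my own sketch): many samples are generated from the known true model Y = .4*x1 + e, the regression is fit to each, and the distribution of the estimates is evaluated against the truth.

```python
# Sketch of the simulation: repeatedly sample from a known true model and
# evaluate how the estimated coefficient behaves.
import numpy as np

rng = np.random.default_rng(6)
n, n_sims = 100, 5000
estimates = []

for _ in range(n_sims):
    x1 = rng.normal(size=n)
    y = 0.4 * x1 + rng.normal(size=n)            # true model
    X = np.column_stack([np.ones(n), x1])
    estimates.append(np.linalg.lstsq(X, y, rcond=None)[0][1])

estimates = np.array(estimates)
print(f"mean estimate of b1: {estimates.mean():.3f} (truth = 0.4)")
print(f"SD of the estimates: {estimates.std():.3f}")
```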

  12. Validation • Split-half better than nothing, but often too conservative • Bootstrap • Repeated splitting
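As a sketch of the bootstrap option (an Efron-style optimism estimate; illustrative only, not necessarily the exact procedure the slide had in mind): the model is refit on bootstrap resamples, its performance on each resample is compared with its performance on the original data, and the average gap is subtracted from the apparent R^2.

```python
# Sketch of bootstrap validation via the optimism estimate.
import numpy as np

rng = np.random.default_rng(7)
n, p = 100, 10
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = 0.4 * X[:, 1] + rng.normal(size=n)           # only one real predictor

def r2(Xm, ym, beta):
    resid = ym - Xm @ beta
    return 1 - resid.var() / ym.var()

beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
apparent = r2(X, y, beta_full)                   # optimistic: fit and tested on same data

optimism = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)             # bootstrap resample
    b = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    optimism.append(r2(X[idx], y[idx], b) - r2(X, y, b))

print(f"apparent R^2          : {apparent:.3f}")
print(f"optimism-corrected R^2: {apparent - np.mean(optimism):.3f}")
```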

  13. Some Premises • “Statistics” is a cumulative, evolving field • Newer is not necessarily better, but should be entertained as regards the scientific question at hand • Keeping up is hard to do • There’s no substitute for thinking about the problem

  14. http://www.duke.edu/~mababyak • michael.babyak@duke.edu • http://symptomresearch.nih.gov/chapter_8/
