
How to produce credible research, on anything

This presentation shows how to produce credible research by closing the transparency gap with credibility-boosting tools such as pre-registration, registered reports, and open materials. It covers the limitations of replication, and a Consensus Design Process that protects against post-hoc criticism, with the aim of building trustworthiness into original studies rather than relying heavily on replications.


Presentation Transcript


  1. How to produce credible research, on anything Zoltan Kekecs, PhD, Hungarian Academy of Sciences, ELTE, Lund University

  2. Generalized mistrust means lower research value • In Many Labs replication projects, only about 50% of findings were replicated (Klein et al., 2018) • Only 11% of landmark cancer biology findings could be reproduced (Begley & Ellis, 2012) • The result: generalized mistrust in research reports • And de-valuation of research

  3. Solutions Fortunately, we have solutions: • Pre-registration • Registered reports • Open materials

  4. The Transparency Gap The results can be biased due to • Protocol deviations • Sampling (e.g. selective sampling) • Protocol execution (e.g. broken blinding, failed randomization) • Starting and stopping (e.g. optional restart, optional abandonment) • Data handling (e.g. data entry errors, result-driven outlier exclusion, fabrication, deletion, imputation) • Data analysis (e.g. broken blinding of the statistician, flexibility in model diagnostics) • Retrospective “pre”-registration

  5. The Transparency Gap The results can be biased due to • Protocol deviations • Sampling (e.g. selective sampling) • Protocol execution (e.g. broken blinding, failed randomization) • Starting and stopping (e.g. optional restart, optional abandonment) • Data handling (e.g. data entry errors, result-driven outlier exclusion, fabrication, deletion, imputation) • Data analysis (e.g. broken blinding of the statistician, flexibility in model diagnostics) • Retrospective “pre”-registration (67% in medical research; Harriman, 2016)
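One of the biases listed above, optional starting and stopping, is easy to demonstrate. The simulation below is not from the presentation; it is a minimal sketch assuming a simple one-sample t-test design, showing that a researcher who checks the p-value after every batch of participants and stops at the first "significant" result inflates the false-positive rate well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def optional_stopping_fp(n_sims=2000, batch=10, max_n=200, alpha=0.05):
    """Simulate a researcher who runs a t-test after every batch of
    participants and stops as soon as p < alpha. The true effect is
    zero in every simulation, so every 'significant' stop is a false
    positive."""
    false_positives = 0
    for _ in range(n_sims):
        data = np.empty(0)
        while data.size < max_n:
            data = np.append(data, rng.normal(0.0, 1.0, batch))  # true effect = 0
            if stats.ttest_1samp(data, 0.0).pvalue < alpha:
                false_positives += 1
                break
    return false_positives / n_sims

print(f"false-positive rate with peeking: {optional_stopping_fp():.2f}")
# A single fixed-N test would yield ~0.05; peeking raises it substantially.
```

Pre-registration counters exactly this: the stopping rule is fixed before the first participant is run, so the analysis cannot be conditioned on interim results.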

  6. Solutions • Pre-registration • Registered reports • Open materials

  7. Solutions • Pre-registration • Registered reports • Open materials • Replications

  8. Issues with replication • Time-consuming • Costly • We cannot replicate everything ourselves • If done by others, replications suffer from the same trust issues as original studies • Post-hoc criticism • We need tools for creating trustworthy original studies that are acceptable to stakeholders, instead of relying heavily on replications

  9. Background How the project started

  10. Basic study 1 design

  11. How to produce “extraordinary evidence” The basics: • Pre-registration • Registered report • Open materials • But because of the transparency gap, these alone are not enough

  12. Protection against post-hoc criticism • Consensus Design Process • Systematic search • Invitation of all stakeholders to contribute to the study design • 29 researchers (a roughly equal mix of proponents and opponents)

  13. Consensus Design Process • Circulate the protocol • Get feedback • Rate methodological quality and protection against bias/QRPs (questionable research practices) • Iterate until the pre-specified criteria are reached

  14. Consensus Design Process • Agree on: • an appropriate protocol • appropriate safeguards against bias • a mutually agreeable interpretation of the results

  15. Pre-specified conclusions if H0 is supported • “The data were more consistent with the null model assuming that humans cannot guess the future randomly determined position of a target at higher than chance success rate, rather than with the model assuming that they can. We have observed [percentage of successful guesses]% successful guesses within [total N] trials. Observing this percentage of successful guesses is [Bayes factor (M0 vs M1)] times more likely if humans are guessing randomly than if they can guess future randomly determined events. Taken at face value, the data provide strong evidence that the probability of successfully guessing later computer-generated random events is not higher than chance level, contrary to what was previously reported by Bem (2011) and others (Bem, Tressoldi, Rabeyron, & Duggan, 2015). • The findings of this study are not consistent with the predictions of the ESP model in this particular paradigm. The methodology of the present study reasonably addressed all types of deficiencies in modal research practice (LeBel & Peters, 2011) that we were able to identify, with extensive input from other researchers. The failure to replicate previous positive findings with this strict methodology indicates that it is likely that the overall positive effect in the literature might be the result of methodological biases rather than ESP. However, the occurrence of ESP effects could depend on some unrecognized moderating variables that were not adequately controlled in this study, or ESP could be very rare or extremely small, and thus undetectable with this study design. Nevertheless, even if ESP would exist, our findings strongly indicate that this particular paradigm, utilized in the way we did, is unlikely to yield evidence for its existence.”

  16. Pre-specified conclusions if H1 is supported • “The data were more consistent with the model assuming that some humans can guess the future randomly determined position of a target at higher than chance success rate, rather than the null model. We have observed [percentage of successful guesses]% successful guesses within [total N] trials. Observing this percentage of successful guesses is [Bayes factor (M1 vs M0)] times more likely if humans can guess future randomly determined events than if they are guessing randomly. Taken at face value, the data provide strong evidence that the probability of successfully guessing later computer-generated random events is higher than chance level as previously reported by Bem (2011) and others (Bem, Tressoldi, Rabeyron, & Duggan, 2015). • The findings of this study are consistent with the predictions of the ESP model in this particular paradigm. The methodology of the present study reasonably addressed all alternative explanations stemming from deficiencies in modal research practice (LeBel & Peters, 2011) that we were able to identify, with extensive input from other researchers. The successful replication of previous positive findings with this strict methodology indicates that it is unlikely that the overall positive effect in the literature would be only the result of recognized methodological biases. However, there could be methodological biases that have not yet been recognized. The present study clearly supports continued research on this topic.”
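Both conclusion templates hinge on a bracketed Bayes factor comparing the chance-guessing model (M0) against a model allowing above-chance guessing (M1). The following is a minimal sketch of such a binomial Bayes factor, assuming a point null at a 50% success rate and a generic Beta(a, b) prior under M1; the prior actually used in the project's registered report is not reproduced here.

```python
from scipy import stats
from scipy.integrate import quad

def bayes_factor_01(successes, trials, a=1.0, b=1.0, theta0=0.5):
    """BF01 for M0 (success rate fixed at theta0, i.e. chance guessing)
    versus M1 (success rate drawn from a Beta(a, b) prior).
    Values above 1 favour the 'random guessing' null model."""
    # Likelihood of the observed hit count under the point-null model
    m0 = stats.binom.pmf(successes, trials, theta0)
    # Marginal likelihood under M1: average the binomial likelihood
    # over the Beta(a, b) prior on the success rate
    m1, _ = quad(lambda t: stats.binom.pmf(successes, trials, t)
                 * stats.beta.pdf(t, a, b), 0.0, 1.0)
    return m0 / m1

# e.g. 513 hits in 1000 trials; BF01 > 1 favours chance guessing
print(bayes_factor_01(513, 1000))
```

Pre-specifying both the model comparison and the verbal conclusions for each possible outcome, as in slides 15-16, leaves no room for result-driven reinterpretation.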

  17. Extend transparency Things that would make people feel as if they were present at the experimental sessions • Born-open data • But born-open data previously happened with a lag • Requires “observers” • Requires people with good data analysis skills • Direct Data Deposition • Observers are no longer necessary; the study can be “played back” at any time • Data integrity is verifiable • Real-time research reports • More inclusive • More accessible (and thus more popular) than born-open data
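To make Direct Data Deposition concrete: the idea is that each data point is pushed to an independent, version-controlled server the moment it is recorded, rather than released later in a batch. The sketch below is illustrative only; the repository layout, file format, and function name are assumptions, not the project's actual implementation.

```python
import datetime
import json
import pathlib
import subprocess

# Assumed to be an existing local clone of a remote data repository
# (e.g. hosted on GitLab), with push access configured.
DATA_DIR = pathlib.Path("session_data")

def deposit_trial(session_id: str, trial: dict) -> None:
    """Append one trial to the session's file, then commit and push it
    immediately, so the record exists off-site before the session
    continues."""
    path = DATA_DIR / f"{session_id}.jsonl"
    trial["deposited_at"] = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(trial) + "\n")
    subprocess.run(["git", "-C", str(DATA_DIR), "add", path.name], check=True)
    subprocess.run(["git", "-C", str(DATA_DIR), "commit",
                    "-m", f"session {session_id}: deposit trial"], check=True)
    subprocess.run(["git", "-C", str(DATA_DIR), "push"], check=True)

# Usage: deposit_trial("S001", {"trial_no": 1, "guess": "left", "hit": True})
```

Because every deposit is a timestamped commit on an external server, the dataset cannot be silently trimmed or amended after the fact, which is what makes the study "playable back" at any time.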

  18. Protocol fidelity • Interventions to minimize mistakes and bias arising from human factors • Computerized data collection • Step-by-step manual • Study session checklists • Verified training via review of a video-recorded mock study session • Laboratory logs • Improve transparency about protocol deviations and researcher flexibility

  19. Improve adequacy of the protocol • Registered reports • Pilot study

  20. Fraud prevention • Tamper-evident technology: • Central server • Contents synced with GitLab • Unalterable server logs • (Direct Data Deposition)
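The slide does not detail how server logs are made unalterable. One standard technique, shown here purely as an illustrative sketch rather than the project's actual mechanism, is hash chaining: each log entry commits to the hash of the previous entry, so any retroactive edit breaks the chain.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log: list, record: dict) -> None:
    """Append a record whose hash covers the previous entry's hash,
    forming a chain that makes retroactive edits detectable."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"record": record, "prev_hash": prev, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any altered, inserted, or deleted entry
    breaks the chain and makes this return False."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"event": "session_started", "session": "S001"})
append_entry(log, {"event": "trial_recorded", "trial_no": 1})
print(verify_chain(log))            # True
log[0]["record"]["session"] = "S002"
print(verify_chain(log))            # False: tampering detected
```

Syncing such a log to an external service like GitLab adds a second, independent copy, so tampering would have to succeed in two places at once.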

  21. Too much transparency Increasing transparency further may be a dead end • We are generating data faster than we can process it • Who will review the open materials for all these studies? • External audit: assign responsibility for verifying study materials • Automation

  22. Automation is the key to credibility • Computerized data collection • Direct Data Deposition • Real-time research reports • Tamper-evident technology

  23. Join the project The registered report is under Stage 1 review at Royal Society Open Science (RSOS). Email me at kekecs.zoltan@gmail.com OR complete the sign-up form: http://tiny.cc/tpp-signup

  24. Acknowledgment Funding

  25. Acknowledgment Co-authors: Balazs Aczel (1), Bence Palfi (3), Barnabas Szaszi (1), Peter Szecsi (1), Mark Zrubka (4), Marton Kovacs (1), Bence E. Bakos (1), Denis Cousineau (5), Patrizio Tressoldi (6). Affiliations: 1. Institute of Psychology, ELTE Eotvos Lorand University, Izabella u. 46, 1064 Budapest, Hungary; 2. Department of Psychology, Lund University, Box 213, 221 00 Lund, Sweden; 3. School of Psychology and Sackler Centre for Consciousness Science, University of Sussex, Falmer, Brighton BN1 9RH, UK; 4. Department of Psychology, University of Amsterdam, P.O. Box 19268, 1000 GG Amsterdam, The Netherlands; 5. École de psychologie, University of Ottawa, 136 Jean-Jacques Lussier, K1N 6N5, Ontario, Canada; 6. Dipartimento di Psicologia Generale, Università di Padova, via Venezia 8, 35131 Padova, Italy

  26. Acknowledgment Consensus design panel members: Daryl Bem, Dick Bierman, Robbie C.M. van Aert, Denis Cousineau, Michael Duggan, Renaud Evrard, Christopher French, Nicolas Gauvrit, Ted Goertzel, Moritz Heene, Jim Kennedy, Daniel Lakens, Alexander Ly, Maxim Milyavsky, Sam Schwarzkopf, Björn Sjödén, Anna Stone, Eugene Subbotsky, Patrizio Tressoldi, Marcel van Assen, David Vernon, Eric-Jan Wagenmakers

  27. Acknowledgment Thank you!

  28. Join the project The registered report is under Stage 1 review at Royal Society Open Science (RSOS). Email me at kekecs.zoltan@gmail.com OR complete the sign-up form: http://tiny.cc/tpp-signup
