Registered Reports, five years on A vaccine against bias in research and publishing. Chris Chambers Cardiff University Brain Research Imaging Centre (CUBRIC) School of Psychology, Cardiff University Email: chambersc1@cardiff.ac.uk Twitter: @chrisdc77.
Results-driven culture distorts incentives • What’s best for science: high-quality research, published regardless of outcome • What’s best for scientists: producing a lot of “good results” see Nosek, Spies & Motyl (2012). Perspectives on Psychological Science, 7(6): 615–631
What happens when you put researchers under pressure to get “good results”? • Publication bias: ~92% of published results are positive (Fanelli, 2010) • Lack of data sharing: ~70% of requests fail (Wicherts et al., 2006) • Lack of replication: ~1 in 1000 published papers (Makel et al., 2012) • Changing the hypothesis after the results are known: ~50–90% prevalence (John et al., 2012; Kerr, 1998) • Low statistical power: ~50% chance to detect medium effects (Cohen, 1962; Sedlmeier & Gigerenzer, 1989; Bezeau & Graves, 2001) • p-hacking: ~50–100% prevalence (John et al., 2012)
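The “~50% power for medium effects” figure can be checked with a quick calculation. Below is a sketch using Python’s statsmodels; the sample size of 32 per group is a hypothetical illustration chosen to match typical study sizes in the cited surveys, not a number from the slide:

```python
# Sketch: why typical sample sizes yield only ~50% power for medium effects.
# Assumes a two-sided, two-sample t-test with n = 32 per group (illustrative)
# and a "medium" effect size of Cohen's d = 0.5.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.5, nobs1=32, alpha=0.05, ratio=1.0)
print(f"Power to detect d = 0.5 with n = 32 per group: {power:.2f}")  # ~0.50
```

In other words, a study of this size is no better than a coin flip at detecting a true medium-sized effect, which is exactly the situation the Cohen (1962) and follow-up surveys documented.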
Fixing this requires making results a dead currency in quality evaluation • Premise: what gives hypothesis-testing its scientific value is • the QUESTION it asks • the QUALITY of the method it uses • never the RESULT it produces If we accept this philosophy, then editorial decisions at journals should be blind to results
This is not a new idea Robert Rosenthal (1966). Experimenter effects in behavioral research. New York. “What we need is a system for evaluating research based only on the procedures employed. If the procedures are judged appropriate, sensible, and sufficiently rigorous to permit conclusions from the results, the research cannot then be judged inconclusive on the basis of the results and rejected by the referees or editors. Whether the procedures were adequate would be judged independently of the outcome.” The American Statistician, 1970 British Medical Journal, 1987
This is not a new idea Modern revival of preregistration in basic science in 2011/2012
Registered Reports Four central aspects of the Registered Reports model: • Researchers decide hypotheses, experimental procedures, and main analyses before data collection • Part of the peer review process takes place before experiments are conducted • Passing this stage of review virtually guarantees publication • Original studies and high-value replications are welcome
How it works Authors submit STAGE 1 manuscript with Introduction, Proposed Methods & Analyses, and Pilot Data (if applicable) Reviewers assess importance of research question and rigour of the methodology according to specific criteria Stage 1 peer review If reviews are positive then journal offers in-principle acceptance (IPA), regardless of study outcome (protocol usually not published yet)
How it works Authors do the research • Authors resubmit completed STAGE 2 manuscript: • Introduction and Methods (virtually unchanged) • Results (new): Registered confirmatory analyses + unregistered exploratory analyses • Discussion (new) • Data and materials deposited in a public archive Stage 2 peer review Reviewers assess compliance with study protocol, whether pre-specified quality checks were passed, and whether conclusions are evidence-based Manuscript published!
Strengths of the Registered Reports approach • – Reproducible – • detailed, repeatable methods • high statistical power (sample sizes typically 2–3× larger) • – Transparent – • accompanied by open data & materials • outcomes of confirmatory and exploratory analyses distinguished • – Credible – • no publication bias • no hindsight bias • no selective reporting
Registered Reports are here to stay • 108 journals have adopted them so far • Fields covered • Life/medical sciences: neuroscience, nutrition, psychology, psychiatry, biology, cancer research, ecology, clinical & preclinical medicine • Social sciences: political science, financial and accounting research • Physical sciences: chemistry, physics, computer science etc. https://www.zotero.org/groups/osf/items/collectionKey/KEJP68G9 125 fully completed RRs have been published so far
Registered Reports at Royal Society Open Science Now available in all STEM areas, from physics to psychology http://rsos.royalsocietypublishing.org/content/registered-reports
Registered Reports at BMC Medicine • The first Registered Reports model for clinical trials • Prevents hidden outcome switching (AKA outcome reporting bias; see http://www.compare-trials.org/) • Eliminates publication bias and ensures all trials are published regardless of outcome • Should all clinical trials be published as Registered Reports?
Other journals that publish RRs in psychology/neuroscience Post-publication peer review (PPPR) model of Registered Reports
Curated list https://cos.io/rr/
Policy features tables https://docs.google.com/spreadsheets/d/1D4_k-8C_UENTRtbPzXfhjEyu3BfLxdOsn9j-otrO870/edit#gid=0
FAQS https://cos.io/rr/
Going further… Can we integrate grant funding and Registered Reports? • Registered Reports funding model • Authors submit their research proposal before they have funding. • Following review by both the funder and the journal, the strongest proposals are offered financial support by the funder AND in-principle acceptance for publication by the journal. Grant funded and article(s) accepted on same day!
Registered Report: Journal/Funder Partnerships • Journals/publishers • Nicotine and Tobacco Research • PLOS Biology • PLOS ONE • Royal Society Open Science • BMC, including BMC Medicine Funders Cancer Research UK Pfizer Children’s Tumor Foundation CHDI https://doi.org/10.1093/ntr/ntx081
Registered Report: Journal/Funder Partnerships http://blogs.plos.org/everyone/2017/09/26/registered-reports-with-ctf/
Registered Reports are not a one-size-fits-all solution We also need publication models that encourage and reward transparent exploratory science De-emphasis on a priori hypotheses and p values Greater emphasis on parameter estimation and hypothesis generation http://neurochambers.blogspot.de/2017/07/open-ended-open-science.html https://www.sciencedirect.com/science/article/pii/S0010945217302393
Our goals for the future • Registered Reports offered as an article type at all reputable empirical journals • Registered Report grants offered by all major funding agencies • All registered clinical trials should be published as Registered Reports • Continual innovation and refinement of the format, and assessment of impact
Information Hub at the Center for Open Science • Detailed FAQs • Table comparing journal features https://cos.io/rr/ These slides are available here: https://osf.io/d4fh5/ For more info, email me (chambersc1@cardiff.ac.uk) or David Mellor at the COS (david@cos.io)
Ten ways to avoid having your Registered Report rejected 1. Make sure the cover letter says what it needs to about ethics, funding, Withdrawn Registration policy, public preregistration etc. 2. Build sufficient methodological detail into the protocol to enable replication and to convince reviewers that you have closed off researcher degrees of freedom. 3. Ensure a tight correspondence between theory, scientific hypotheses, power (where applicable) and the pre-registered statistical tests. 4. Make sure the power analysis (or alternative sampling plan) reaches the minimum threshold required by journal policy (e.g. 90% power, BF>6). 5. When conducting power analysis, avoid the trap of proposing an over-optimistic or insufficiently justified estimate of the effect size.
Ten ways to avoid having your Registered Report rejected 6. Don’t rely on conventional null hypothesis significance testing if you want to be able to conclude evidence of absence from negative results (instead use equivalence testing or Bayesian methods). 7. Remove all exploratory analysis sections from Stage 1. Include at Stage 2 only. 8. Be sure to clearly distinguish work that has already been done (e.g. pilot experiments) from work yet to be done. 9. Include pre-specified positive controls or other data quality checks, or justify their absence. 10. When reporting positive controls that rely on inferential testing, be sure to include a sampling plan or power analysis.
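Points 4 and 5 above can be sketched concretely. The 90% power threshold comes from the slide; the effect size of d = 0.4 is a hypothetical, deliberately conservative estimate of the kind point 5 calls for (rather than an optimistic value lifted from a small pilot study):

```python
# Sketch: a Stage 1 sampling plan meeting a 90% power threshold.
# d = 0.4 is an assumed, conservatively justified effect size (illustrative),
# for a two-sided, two-sample t-test at alpha = .05.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.4,        # conservative estimate, not the pilot's optimistic d
    power=0.90,             # minimum threshold required by the journal policy
    alpha=0.05,
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.0f}")
```

Note how sensitive the plan is to the effect size: halving d roughly quadruples the required n, which is why reviewers scrutinise the justification for the assumed effect size so closely.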