
How Should We ‘Evaluate’ Scientific Publications Today? Pete Binfield

This article discusses the various ways to evaluate scientific publications, including assessing their impact, reception, and integrity. It also explores the different levels of granularity at which evaluation can occur, from the journal level to the article level. The emergence of MegaJournals and the importance of open peer review are also highlighted.



Presentation Transcript


  1. How Should We ‘Evaluate’ Scientific Publications Today? Pete Binfield Co-Founder and Publisher PeerJ Samuel Merritt - 10/30/2013 @p_binfield pete@peerj.com @ThePeerJ https://peerj.com

  2. What do we mean when we say ‘Evaluate’? • Evaluating ‘Impact’ or ‘Reception’ or ‘Reach’ or ‘?’ • Providing Subjective Opinions & Evaluations • Evaluating ‘Integrity’ • And at what level of granularity? • The journal? • The article? • The paragraph?

  3. #1. Evaluating ‘Impact’ or ‘Reception’ or ‘Reach’ or ‘Interest’ or ‘Readership’ or, or, or

  4. Open Access ‘MegaJournals’ • Online-only, peer-reviewed, open access journals • covering a very broad subject area • selecting content based only on ‘technical soundness’ (or similar) • with a business model which allows each article to cover its own costs

  5. PLOS ONE Quarterly Output

  6. Known MegaJournals (Oct 2013)

  7. In addition, if we allow for narrow scope ‘megajournals’ then we should also include: • All of the “Frontiers in…” Series (part of Nature) • All of the “BMC Series” (~ half of BMC) • ~ 1/3 of Hindawi’s current output All of these titles refuse to pre-judge what the audience should be reading (other than determining that the content should join the literature).

  8. An OA future containing MegaJournals [chart: PLoS ONE, SAGE Open, PeerJ, etc. alongside all other OA journals]

  9. The Effect of the ‘MegaJournal’ • Rapidly approaching ~10% of all published content, spurring new developments • Require, and have given rise to, Article-Level Metrics • Publish negative results, replication studies, incremental articles • Dramatic improvement to the speed of the ecosystem • Dramatic improvement to the efficiency of the ecosystem

  10. From “Article-Level Metrics, A SPARC Primer” - http://www.sparc.arl.org/sites/default/files/sparc-alm-primer.pdf
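The SPARC primer groups article-level events into five buckets: Viewed, Saved, Discussed, Recommended, and Cited. A minimal sketch of that grouping, assuming hypothetical source names and counts (none of the numbers below come from the primer):

```python
# Group per-source article-level events into the five ALM categories
# described in the SPARC primer: Viewed, Saved, Discussed, Recommended, Cited.
# The sources and counts below are hypothetical, for illustration only.

CATEGORY_OF_SOURCE = {
    "html_views": "Viewed",
    "pdf_downloads": "Viewed",
    "mendeley": "Saved",
    "citeulike": "Saved",
    "twitter": "Discussed",
    "facebook": "Discussed",
    "f1000prime": "Recommended",
    "crossref": "Cited",
    "scopus": "Cited",
}

def summarize_alm(events):
    """Tally raw per-source counts into ALM category totals."""
    totals = {}
    for source, count in events.items():
        category = CATEGORY_OF_SOURCE.get(source, "Other")
        totals[category] = totals.get(category, 0) + count
    return totals

sample = {"html_views": 1200, "pdf_downloads": 340, "mendeley": 25,
          "twitter": 60, "crossref": 4}
print(summarize_alm(sample))
# {'Viewed': 1540, 'Saved': 25, 'Discussed': 60, 'Cited': 4}
```

The point of the grouping is the one the primer makes: a single "impact" number hides the fact that views, bookmarks, conversation, and citation measure different kinds of attention.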

  11. Screenshot from ~ Nov 2009, but the Wayback Machine has examples from April 2008

  12. PLOS ALMs

  13. PeerJ ALMs

  14. http://www.altmetric.com/details.php?doi=10.7717/peerj.182
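Altmetric also exposes these per-article counts through a public JSON API (e.g. https://api.altmetric.com/v1/doi/10.7717/peerj.182). A small sketch of working with that kind of response; to stay runnable offline it parses a canned payload whose field names follow Altmetric's documented response shape but whose values are invented:

```python
import json

# Hypothetical excerpt of the JSON returned by Altmetric's public API
# (GET https://api.altmetric.com/v1/doi/<doi>). The field names follow
# Altmetric's documented response shape; the values are invented.
payload = json.loads("""{
    "doi": "10.7717/peerj.182",
    "score": 12.5,
    "cited_by_tweeters_count": 18,
    "cited_by_fbwalls_count": 2,
    "cited_by_feeds_count": 1
}""")

def mention_total(record):
    """Sum every 'cited_by_*_count' field into a single mention count."""
    return sum(value for key, value in record.items()
               if key.startswith("cited_by_") and key.endswith("_count"))

print(payload["doi"], "mentions:", mention_total(payload))  # 18 + 2 + 1 = 21
```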

  15. The Effect of the ‘MegaJournal’ • Rapidly approaching ~10% of all published content, spurring new developments • Require (and have stimulated) Article-Level Metrics • Publish negative results, replication studies, incremental articles • Dramatic improvement to the speed of the ecosystem • Dramatic improvement to the efficiency of the way the ecosystem currently ‘filters’ content

  16. “rejected from at least six journals (including Nature, Nature Genetics, Nature Methods, Science) and took a year to publish before going on to be my most cited research paper (150 last time I looked)” – Cameron Neylon

  17. http://grigoriefflab.janelia.org/rejections

  18. http://blog.rubriq.com/2013/06/03/how-we-found-15-million-hours-of-lost-time/ “…in a recent report Kassab and his colleagues estimated that Elsevier currently rejects 700,000 out of 1 million articles each year.” http://poynder.blogspot.co.uk/2013/10/media-research-analyst-at-exane-bnp.html
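The quoted figures support a quick back-of-envelope calculation. The rejection rate follows directly from the slide; the per-review numbers below are illustrative assumptions (not taken from the Rubriq post), and the estimate overstates the cost insofar as many rejections are desk rejections that never reach reviewers:

```python
# Back-of-envelope estimate of reviewer time spent on rejected submissions,
# using the figure quoted above (Elsevier: ~700,000 rejections out of
# ~1,000,000 submissions per year). The per-review parameters are
# illustrative assumptions, NOT taken from the Rubriq analysis.
submissions = 1_000_000
rejections = 700_000
reviews_per_submission = 2.5   # assumed
hours_per_review = 5           # assumed

rejection_rate = rejections / submissions
lost_hours = rejections * reviews_per_submission * hours_per_review

print(f"rejection rate: {rejection_rate:.0%}")                  # rejection rate: 70%
print(f"reviewer hours on rejected papers: {lost_hours:,.0f}")  # 8,750,000
```

Even with generous discounting for desk rejections, the same order of magnitude survives, which is the argument MegaJournals make for reviewing soundness once instead of prestige repeatedly.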

  19. #2. ‘Subjective’ Opinions & Evaluations (i.e. contextual, human evaluations)

  20. PeerJ Q&A

  21. PubMed Commons

  22. #3. Evaluating ‘Integrity’

  23. Open Peer Review would make this problem disappear. Overnight.

  24. Journals Practicing Open Peer Review • Atmospheric Chemistry and Physics - Reviewers’ comments published on pre-pub discussion site. Reviewer names optional. • Biology Direct - Reviewer comments published, and reviewers named • BMJ Open - All reviewers named, all reports public • eLife - Decision letter published with articles with author approval. Reviewers anonymous, but editor named. • EMBO Journal - Review process file published with articles. Reviewers anonymous, editor named. • F1000Research - All reviewers named, all reports public. • Frontiers journals - Reviewers named, but reports not public • GigaScience - Pre-publication history published with articles, and reviewers named (encouraged, opt-out) • Medical journals in the BMC series - Pre-publication history published with articles, and reviewers named (encouraged). • PeerJ - Peer review history published with articles with author approval. Reviewers encouraged to sign report.

  25. • ~40% of PeerJ reviewers name themselves • ~80% of PeerJ authors reproduce their peer review history

  26. What do we mean when we say ‘Evaluate’? • Evaluating ‘Impact’ or ‘Reception’ or… • Providing ‘Subjective’ Opinions & Evaluations • Evaluating ‘Integrity’ • And at what level of granularity? • The journal? • The article? • The paragraph?

  27. @ThePeerJ Thank You Pete Binfield Co-Founder and Publisher @p_binfield pete@peerj.com
