
Editorial Decisions & Determinants of Impact


Presentation Transcript


  1. Ayeh Bandeh-Ahmadi, Department of Economics, University of Maryland & hypochondri.ac. Editorial Decisions & Determinants of Impact

  2. Goals/Outcomes of This Study • Better design of innovative organizations • Case study of Knowledge Commons use • Insights for designing knowledge commons • Development of a broader set of metrics, including text-based ones • Beyond publications, applications to grants, prizes, fellowships, society elections

  3. Background • Journal publications are broadly used for job placement, tenure, grants, prizes • Landscape is changing with the advent of electronic publications (e.g. PLoS), financial pressure on publishers, and the rise of blogs and working papers • Opportunities to build new institutions and tweak existing ones

  4. More Context “a system in which commercial publishers make profits based on the free labor of mathematicians and subscription fees from their institutions’ libraries, for a service that has become largely unnecessary” (signed by 34 mathematicians, February 2012) • Ongoing realities: the Elsevier boycott; opportunities for design of a Scientific Commons • Research questions: What are the incentives for editors, referees, authors, and publishers? What does quality actually mean? Citations? What can be learned from experiences with proprietary arrangements? What insights follow for designing a Scientific Commons?

  5. Stakeholder Framework • Interactions between stakeholders: • Publishers • Editors • Referees • Authors • Libraries • Readers • Researchers • Students • Professors • Academic Organizations • Research Funding Agencies • Others?

  6. Existing Theories/Related Literature • Abrevaya & Hamermesh (2009): no evidence of gender bias among econ referees • Cherkashin et al. (Type I/II errors in accept/reject decisions at the Journal of International Economics): co-editor standards seem to vary significantly; Type I error is small (rejected papers are less cited); Type II error is large (poorly cited accepted papers)

  7. Existing Theories/Related Literature • Ellison (2002): theory of editorial incentives & referee communication explains increasing lags • Laband (1990): survey data characterizes critical referee roles, with editors feeling they must take marginal work • Laband & Piette (1994): peers’ work selected by editors often yields greater citations

  8. Theory: factors affecting editorial decisions, referee recommendations, and citations • Editors select papers to publish based on: fit within the journal’s niche; accuracy; future impact; pressure from publishers to fill pages; perhaps also personal benefit (citations, etc.) • What role do referees play? Providing feedback to improve the quality of mediocre papers (referee letters carry rich information content); evaluating fit, accuracy, future impact, personal benefit • What drives citations? Hot topics, famous authors, publication in journals

  9. Data: Editorial Databases • Data from five journals: 5,372 submissions, 2004-2010 • Fields: Manuscript ID; Manuscript Received Date; JEL codes; Country; Manuscript text; Referee review text; Co-editor ID; Referee ID • Co-editor Evaluation: Summary Reject (No Referee Input); Summary Reject (Referee Input); Reject; Withdrawn; Returned for Revision; Conditionally Accepted (Minor Revisions); Accept; Revision • Referee Evaluation: No Recommendation (i.e. no review submitted); Definite Reject; Reject; Weak Revise & Resubmit; Revise & Resubmit; Strong Revise & Resubmit; Accept with Revisions; Accept

  10. Building Meaningful Metrics for Testing Theories • Textual content of manuscripts • similarity to past accepted articles in journal • Presence of co-editor name in paper references (first, last versions) • Paper length • Textual content of referee reviews • Referee scores (past and current) • Editorial scores (past and current) • Citation Commons (IDEAS, Google Scholar)
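One of these metrics, similarity to past accepted articles, comes down to cosine similarity over a bag-of-words representation. A minimal sketch of how it could be computed, assuming manuscripts are available as plain-text strings (the `manuscripts` variable and the scikit-learn pipeline are illustrative, not the study's actual code):

```python
# Sketch: pairwise cosine similarity between manuscript texts.
# `manuscripts` is a hypothetical list of plain-text submission bodies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manuscripts = [
    "We estimate the wage effects of a minimum-wage debate ...",
    "A counterfactual analysis of government trade policy ...",
    "Survey evidence on editorial decisions at economics journals ...",
]

# TF-IDF bag-of-words representation of each submission
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(manuscripts)

# sim[i, j] measures how closely submission i's vocabulary matches j's
sim = cosine_similarity(X)
```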

  11. Textual similarity between submissions • Edges show relationships between submissions with textual cosine similarity of at least 18% • Green nodes are accepted submissions; pink nodes are all other submissions • Layout uses a spring-embedded algorithm (Kamada & Kawai, 1989); see the sketch below
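A sketch of how a network like the figure's could be rebuilt with networkx, assuming the pairwise similarity matrix `sim` from the previous sketch and a hypothetical per-submission `accepted` flag; `kamada_kawai_layout` gives the spring-embedded placement:

```python
# Sketch: similarity network with edges at >= 18% cosine similarity.
import networkx as nx

THRESHOLD = 0.18                   # edge cutoff from the slide
accepted = [True, False, False]    # hypothetical accept flags, one per node

G = nx.Graph()
n = sim.shape[0]
G.add_nodes_from(range(n))
for i in range(n):
    for j in range(i + 1, n):
        if sim[i, j] >= THRESHOLD:
            G.add_edge(i, j, weight=float(sim[i, j]))

# Spring-embedded layout (Kamada & Kawai); accepted papers drawn green
pos = nx.kamada_kawai_layout(G)
colors = ["green" if accepted[i] else "pink" for i in G.nodes]
```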

  12. Models • LASSO model selection: predict citations of journal submissions from referee language, whether or not they are published (see the sketch below) • Ordered Probit model of referee decisions • Ordered Probit model of editorial decisions
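A minimal sketch of the LASSO step under stated assumptions: referee-report texts are vectorized with TF-IDF and log citations regressed on them, with the L1 penalty zeroing out uninformative terms. All data here is a synthetic placeholder, and `LassoCV` stands in for whatever model-selection procedure the study actually used:

```python
# Sketch: LASSO selection of referee-report terms that predict citations.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
reports = [f"report {i}: the debate over wage and government policy"
           for i in range(200)]                 # placeholder corpus
citations = rng.poisson(5, size=200)            # placeholder outcome

vec = TfidfVectorizer(ngram_range=(1, 3))       # phrases like "from X to Y"
X = vec.fit_transform(reports)
y = np.log1p(citations)                         # tame the citation tail

lasso = LassoCV(cv=5).fit(X, y)                 # CV chooses the L1 penalty
selected = [t for t, c in zip(vec.get_feature_names_out(), lasso.coef_)
            if c != 0]                          # surviving terms
```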

  13. Referee Language Determinants of Score, Citations • Referee-language coefficients that predict referee scores show little correlation with those that predict eventual citations

  14. LASSO Model Findings: Determinants of Impact • Significant terms in referee reports include: (+) “debate”, (+) “government”, (+) “from [years] to [years]”, (-) “counterfactual”, (+) “wage” • Could identify types of impact valued by different individuals & different fields

  15. Basic Model • Journals/editors face a separable paper payoff function over accuracy a, impact i, and fit f, assuming fit is ex ante observable (a hedged reconstruction of the form appears below)
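The displayed formula did not survive the transcript; a plausible reconstruction of the separable form described on the slide, with g and h as assumed component functions, might read:

```latex
% Hedged reconstruction, not the original slide's formula:
% separable payoff over accuracy a, impact i, and fit f,
% with fit f observable ex ante.
P_p(a, i, f) = g(a, i) + h(f)
```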

  16. Basic Model • Journals/editors maximize payoff by selecting a threshold based on available space and making accept/reject decisions D accordingly (sketched below)
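Again the displayed math is missing; one hedged way to write the threshold rule, with the cutoff and the journal's available space S as assumed notation, is:

```latex
% Hedged sketch: accept (D_m = 1) iff the payoff clears a threshold
% \bar{p}, chosen so that expected acceptances fill the space S.
D_m = \mathbf{1}\{ P_p(a_m, i_m, f_m) \ge \bar{p} \}, \qquad
\text{with } \bar{p} \text{ such that } \mathbb{E}\Big[\sum_m D_m\Big] = S
```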

  17. Methods • Econometric analysis to understand the form of the payoff function P_p • Dig deeper to develop metrics for a, i, f: measure submissions’ similarities to past accepted articles (fit within journal niche); develop an estimator of submission impact based on referee language (potential impact); referee scores seem a good proxy for accuracy • Also, count mentions of journal editors’ names in submissions (see the sketch below)
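The editor-mention count is straightforward string matching; a sketch, with the function name, editor names, and sample text purely illustrative:

```python
# Sketch: count occurrences of each co-editor's name in a submission.
import re

def count_editor_mentions(text: str, editor_names: list[str]) -> dict[str, int]:
    """Case-insensitive whole-word counts of each editor name in `text`."""
    return {
        name: len(re.findall(rf"\b{re.escape(name)}\b", text, re.IGNORECASE))
        for name in editor_names
    }

sample = "As Smith (2004) shows, wage debates shape policy; see also Smith (2007)."
print(count_editor_mentions(sample, ["Smith", "Nakamura"]))
# {'Smith': 2, 'Nakamura': 0}
```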

  18. Findings: Editor & Referee Preferences (ordered probits; a fitting sketch follows below) • Editors seem to show preference for: (+) referee score, (+) fit of submission with journal niche, (+) potential impact, (++) interaction between potential impact & referee score • Referee scores show: (+) a very small taste for potential impact; no significant effect of fit with niche; significant terms in referee reports (“debate”, “from [years] to [years]”, etc.)
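A sketch of how such an editorial-decision ordered probit could be fit with statsmodels' `OrderedModel`; the regressors mirror the slide (referee score, niche fit, predicted impact, and their interaction), but the data and column names are synthetic placeholders:

```python
# Sketch: ordered probit of editorial decisions on the slide's regressors.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "referee_score": rng.integers(1, 9, 500),   # 1 = Definite Reject .. 8 = Accept
    "niche_fit": rng.uniform(0, 1, 500),        # similarity to past acceptances
    "pred_impact": rng.normal(size=500),        # LASSO-predicted impact
})
df["impact_x_score"] = df["pred_impact"] * df["referee_score"]

# Placeholder ordinal outcome: reject < revise < accept
latent = df["referee_score"] + rng.normal(size=500)
df["decision"] = pd.cut(latent, 3, labels=["reject", "revise", "accept"])

mod = OrderedModel(
    df["decision"],
    df[["referee_score", "niche_fit", "pred_impact", "impact_x_score"]],
    distr="probit",
)
res = mod.fit(method="bfgs", disp=False)
print(res.summary())
```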

  19. More Findings • Evidence that published papers closer to the journal’s niche get more impact/citations • Papers mentioning an editor in the first version are more likely to eventually be accepted • Editors are harder on papers that mention them • Evidence that editors differentiate/select higher-impact papers among those that mention them (some do, others don’t; editors can be scored on this) • Some editors choose harder referees (or are less likely to desk-reject lower-quality papers)

  20. Summary • Supports Laband’s hypothesis that there is a great deal of rich information content in referee reviews • Appearance of favoritism among editors may reflect better selection (as in Laband & Piette) • Evidence of an interaction between accuracy and potential citation impact, suggesting it may be much riskier to produce and submit papers expected to receive high citation counts at certain journals • Characterizes (binding) budget constraints on referee capital • Characterizes (low) publication-space limits on high-impact publications

  21. Insights for Development of Scientific Commons on Journals • Referee reviews: include full texts if possible; alternatively, include check boxes for referees to indicate types of merit relevant to each journal (unique data sources or experimental setups, contributions to ongoing debates or policy), plus computational methods for relevant types of merit by journal • Anonymized referee scores • Measures of textual similarity to past accepted work • Anonymous flags on citations of referees’ or editors’ own work in a paper (a flagging sketch follows below); these can be used to score referees and editors if desired • All are simple to compute automatically with modern software and relevant to many types of scientific review (e.g. society elections, grant awards, tenure decisions) and other processes where text accompanies data on preferences (employee promotion, appraisals, product reviews, health evaluations)
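A minimal sketch of the self-citation flag, assuming references have been extracted as plain strings; the matching here is a naive surname lookup and the function name is hypothetical:

```python
# Sketch: flag a review when the paper cites the referee's own work.
def cites_own_work(references: list[str], reviewer_surname: str) -> bool:
    """True if the reviewer's surname appears in any reference entry."""
    return any(reviewer_surname.lower() in ref.lower() for ref in references)

refs = [
    "Laband, D. (1990). Is there value-added from the review process?",
    "Ellison, G. (2002). The slowdown of the economics publishing process.",
]
print(cites_own_work(refs, "Ellison"))  # True: paper cites the referee
```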

  22. Insights for Development of Scientific Commons on Journals • Balance between breadth of access and data detail (e.g. referee/author identities, review/manuscript texts); the ability to connect datasets is critical! • Text-based metrics can be powerful: LASSO for model selection on text; recognition of editor names in first and last versions; textual similarities to past publications/submissions • Automatically computable given modern software technologies • Relevant to many types of scientific review (society elections, grant awards, tenure decisions) and other processes where text accompanies data on preferences (employee promotion, legal filings, appraisals, product reviews, health evaluations)

  23. Ayeh Bandeh-Ahmadi, abandeh@umd.edu • Discussion/Questions
