Standards in science indicators

Presentation Transcript


  1. Standards in science indicators
  Vincent Larivière
  EBSI, Université de Montréal / OST, Université du Québec à Montréal
  Standards in science workshop, SLIS-Indiana University, August 11th, 2011

  2. Current situation
  Since the early 2000s, we have been witnessing:
  - An increase in the use of bibliometrics in research evaluation;
  - An increase in the size of the bibliometric community;
  - An increase in the variety of actors involved in bibliometrics (e.g., no longer limited to the LIS or STS communities);
  - An increase in the variety of metrics for measuring research impact: the h-index (with its dozen variants), eigenvalue-based indicators, the SNIP and SCImago impact indicators, etc.;
  - The end of the ISI monopoly: Scopus, Google Scholar, and several other initiatives (SBD, etc.).
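Since the h-index figures so prominently among these newer metrics, a minimal sketch of its usual definition may help; the citation counts below are made up for illustration:

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # the paper at this rank still has enough citations
        else:
            break
    return h

# Hypothetical citation record: h = 3 (three papers with >= 3 citations each).
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3
```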

  3. Why do we need standardized bibliometric indicators?
  - The current situation is symptomatic of the immaturity of the research field: no paradigm is yet dominant;
  - Bibliometric evaluations are spreading at the levels of countries, institutions, research groups and individuals;
  - Worldwide rankings are spreading and often yield diverging results;
  - Standards reflect the consensus in the community and allow the various measures to be comparable and reproducible.

  4. Impact indicators
  - Impact indicators have been used for quite a while in science policy and research evaluation.
  - Until quite recently, only a handful of metrics were available or compiled by the research groups involved in bibliometrics: 1) raw citations; 2) citations per publication; 3) impact factors (a sketch of all three follows below).
  - Only one database was used: ISI.
  - Only one normalization was made: by field (when it was done at all!).
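A minimal sketch of these three classic metrics; the publication records, years, and citation counts are hypothetical, and the two-year impact-factor window follows the usual ISI definition:

```python
# Hypothetical publication record for one unit: (year, citations) pairs.
papers = [(2008, 12), (2008, 0), (2009, 7), (2009, 3), (2010, 1)]

raw_citations = sum(cites for _, cites in papers)   # 1) raw citations: 23
cpp = raw_citations / len(papers)                   # 2) citations per publication: 4.6

# 3) Two-year impact factor for a journal in year Y: citations received in Y
#    to items published in Y-1 and Y-2, divided by the number of those items.
citable_items = {2008: 2, 2009: 2}       # items published in the two prior years
citations_2010 = {2008: 5, 2009: 6}      # citations received in 2010 to those items
impact_factor = sum(citations_2010.values()) / sum(citable_items.values())  # 2.75

print(raw_citations, cpp, impact_factor)
```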

  5. Factors to take into account in the creation of a new standard
  - Field specificities: citation potential and aging characteristics;
  - Field definition: at the level of the journal or at the level of the paper? What about interdisciplinary journals?
  - Differences in the coverage of databases;
  - Distributions vs. aggregated measures;
  - Skewness of citation distributions (use of logs?);
  - The paradox of ratios (0 → 1 → ∞);
  - Averages vs. medians vs. ranks;
  - Citation windows;
  - Unit vs. fractional counting (see the sketch after this list);
  - Equal or different weights for each citation?
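To make the unit vs. fractional counting item concrete, here is a minimal sketch on hypothetical author lists: unit (whole) counting gives each co-author full credit for a paper, while fractional counting splits one credit among the n co-authors:

```python
from collections import defaultdict

# Hypothetical author lists, one per paper (illustrative data only).
papers = [
    ["Alice", "Bob"],
    ["Alice"],
    ["Bob", "Carol", "Dave"],
]

unit = defaultdict(float)        # whole counting: 1 credit per author per paper
fractional = defaultdict(float)  # fractional counting: 1/n credit per author

for authors in papers:
    for author in authors:
        unit[author] += 1
        fractional[author] += 1 / len(authors)

print(dict(unit))        # {'Alice': 2.0, 'Bob': 2.0, 'Carol': 1.0, 'Dave': 1.0}
print(dict(fractional))  # {'Alice': 1.5, 'Bob': 0.83..., 'Carol': 0.33..., 'Dave': 0.33...}
```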

  6. Ex. 1: Impact indicators
  An example of how a very simple change in the calculation method of an impact indicator can change the results obtained, even when very large numbers of papers are involved. Everything is kept constant here: same papers, same database, same subfield classification, same citation window. The only difference is the order of operations in the calculation: average of ratios (AoR) vs. ratio of averages (RoA). Both methods are considered standards in research evaluation (see the sketch below). Four levels of aggregation are analyzed: individuals, departments, institutions and countries.
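A minimal sketch of the two orders of operations, on made-up data (three papers with hypothetical observed and field-expected citation counts):

```python
# Hypothetical papers: observed citations and the field's expected citations.
papers = [
    {"cites": 10, "field_avg": 5.0},   # cited at twice the field average
    {"cites": 0,  "field_avg": 2.0},
    {"cites": 3,  "field_avg": 6.0},
]

# Average of Ratios (AoR): normalize each paper first, then average.
aor = sum(p["cites"] / p["field_avg"] for p in papers) / len(papers)

# Ratio of Averages (RoA): average observed citations over average expected citations.
roa = (sum(p["cites"] for p in papers) / len(papers)) / (
    sum(p["field_avg"] for p in papers) / len(papers)
)

print(round(aor, 3), round(roa, 3))  # 0.833 vs. 1.0 -- same data, different scores
```

The gap arises because RoA implicitly weights papers by their expected citation rates, whereas AoR gives every paper equal weight.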

  7. Relationship between RoA and AoR field-normalized citation indicators at the level of A) individual researchers (≥20 papers), B) departments (≥50 papers), C) institutions (≥500 papers) and D) countries (≥1000 papers)

  8. Figure 2. Relationship between (AoR – RoA) / AoR and the number of papers at the level of A) individual researchers, B) departments, C) institutions (≥500 papers), and D) countries.

  9. Ex. 2: Productivity measures
  Typically, we measure the research productivity of units by summing the distinct number of papers they produced and dividing it by the total number of researchers in the unit. Another method is to assign papers to each researcher of the group and then average their individual outputs. Both counting methods are correlated, but nonetheless yield different results (see the sketch below and the figure in the next slide).
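A minimal sketch of the two counting methods for a hypothetical three-person unit (names and paper counts are illustrative):

```python
# Hypothetical unit of three researchers; each paper lists its authors in the unit.
researchers = ["Alice", "Bob", "Carol"]
papers = [
    {"authors": ["Alice", "Bob"]},   # one co-authored paper
    {"authors": ["Alice"]},
    {"authors": ["Carol"]},
]

# Method 1: distinct papers of the unit divided by the number of researchers.
method1 = len(papers) / len(researchers)                      # 3 / 3 = 1.0

# Method 2: average of the researchers' individual paper counts
# (a co-authored paper is counted once for each of its authors).
individual_counts = [sum(r in p["authors"] for p in papers) for r in researchers]
method2 = sum(individual_counts) / len(researchers)           # (2 + 1 + 1) / 3 ≈ 1.33

print(method1, round(method2, 2))
```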

  10. Difference in the results obtained for 1,223 departments (21,500 disambiguated researchers)
