The debate on uses and consequences of STI indicators

OST Workshop 12 May 2014, Paris

Paul Wouters, Sarah de Rijcke and Ludo Waltman, Centre for Science and Technology Studies (CWTS)



“The variety of available bibliographic databases and the tempestuous development of both hardware and software have essentially contributed to the great rise of bibliometric research in the 80-s. In the last decade an increasing number of bibliometric studies was concerned with the evaluation of scientific research at the macro- and meso-level. Different database versions and a variety of applied methods and techniques have resulted sometimes in considerable deviations between the values of science indicators produced by different institutes”

Glänzel (1996): The need for standards in bibliometric research and technology. Scientometrics, 35(2), p. 167.


Applications of citation analysis

Citation analysis has four main applications:

  • Qualitative and quantitative evaluation of scientists, publications, and scientific institutions

  • Reconstruction and modeling of the historical development of science and technology

  • Information search and retrieval

  • Knowledge organization


Why standards?

  • Bibliometrics increasingly used in research assessment

  • Data & indicators for assessment widely available

  • Some methods black-boxed in database-linked services (Thomson Reuters as well as Elsevier)

  • No consensus in bibliometric community

  • Bewildering number of indicators and data options

  • Bibliometric knowledge base not easily accessible

  • Ethical and political responsibility distributed and cannot be ignored


State of affairs on standards

  • “End users” demand clarity about the best way to assess quality and impact from bibliometric experts

  • Most bibliometric research focused on creating more diversity rather than on pruning the tree of data and indicator options

  • The bibliometric community does not yet have a professional channel to organize its professional, ethical, and political responsibility

  • We lack a code of conduct for research evaluation, although assessments may have strong implications for careers and lives


Three types of standards

  • Data standards

  • Indicator standards

  • Standards for good evaluation practices


Data standards

  • Data standards:

    • Choice of data sources

    • Selection of documents from those sources

    • Data cleaning, citation matching and linking issues

    • Definition and delineation of fields and peer groups for comparison


Indicator standards

  • Indicator standards:

    • Choice of level of aggregation (nations; programs; institutes; groups; principal investigators; individual researchers)

    • Choice of dimension to measure (size, activity, impact, collaboration, quality, feasibility, specialization)

    • Transparency of construction

    • Visibility of uncertainty and of sources of error

    • Specific technical issues:

      • Size: fractionalization; weighting

      • Impact: averages vs percentiles; field normalization; citation window
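The field-normalization issue above can be made concrete with a small sketch of the mean normalized citation score (MNCS), a commonly used field-normalized impact indicator: each publication's citation count is divided by the average citation count of publications in the same field (and, in practice, publication year and document type), and these ratios are averaged. The field baselines and paper counts below are invented purely for illustration.

```python
# Illustrative sketch of MNCS (mean normalized citation score).
# Baselines are hypothetical expected citation counts per field.
field_baseline = {
    "chemistry": 12.0,
    "mathematics": 4.0,
}

# (field, citations) for a hypothetical research group's publications.
papers = [
    ("chemistry", 24),
    ("chemistry", 6),
    ("mathematics", 8),
]

def mncs(papers, baseline):
    """Average of per-paper normalized citation scores."""
    scores = [cites / baseline[field] for field, cites in papers]
    return sum(scores) / len(scores)

print(mncs(papers, field_baseline))  # (2.0 + 0.5 + 2.0) / 3 = 1.5
```

A score above 1.0 means the unit is cited more than the average for its fields; note how the mathematics paper with only 8 citations contributes as much as the chemistry paper with 24, which is the point of normalizing.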


Standards for GEP

  • Standards for good evaluation practices:

    • When is it appropriate to use bibliometric data and methods?

    • The balance between bibliometrics, peer review, expert review, and other assessment methodologies (e.g. Delphi)

    • The transparency of the assessment method (from beginning to end)

    • The accountability of the assessors

    • The way attempts to manipulate bibliometric measures are handled (citation cartels; journal self-citations)

    • Clarity about the responsibilities of researchers, assessors, university managers, database providers, etc.


Preconference STI-ENID workshop, 2 September 2014

  • Advantages and disadvantages of different types of bibliometric indicators. Which types of indicators are to be preferred, and how does this depend on the purpose of a bibliometric analysis? Should multiple indicators be used in a complementary way?

  • Advantages and disadvantages of different approaches to the field-normalization of bibliometric indicators, e.g. cited-side and citing-side approaches.

  • The use of techniques for statistical inference, such as hypothesis tests and confidence intervals, to complement bibliometric indicators.

  • Journal impact metrics. Which properties should a good journal impact metric have? To what extent do existing metrics (IF, Eigenfactor, SJR, SNIP) have these properties? Is there a need for new metrics? How can journal impact metrics be used in a proper way?
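As a point of reference for the journal impact metrics question, the classic two-year impact factor (IF) is a simple ratio: citations received in year Y to items the journal published in years Y−1 and Y−2, divided by the number of citable items published in those two years. The figures below are invented for illustration only.

```python
# Illustrative sketch of the two-year journal impact factor:
# IF(Y) = citations in year Y to items published in Y-1 and Y-2
#         / citable items published in Y-1 and Y-2.

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year impact factor as a simple ratio."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 450 citations in 2014 to its 2012-2013 papers,
# of which there were 300 citable items.
print(impact_factor(450, 300))  # 1.5
```

Even this simple formula hides contested choices, e.g. which document types count as "citable items" in the denominator, which is one reason the slide asks what properties a good journal impact metric should have.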


An example of a problem in standards for assessment


Example: individual level bibliometrics

  • Involves all three forms of standardization

  • Not in the first place a technical problem, but does have many technical aspects

  • Glänzel & Wouters (2013) presented 10 dos and don'ts of individual level bibliometrics

  • Moed (2013) presented a matrix/portfolio approach

  • ACUMEN (2014) presented the design of a Web based research portfolio


ACUMEN Portfolio

[Diagram: portfolio linking expertise, output, and influence through a career narrative]

  • aim is to give researchers a voice in evaluation

  • evidence based arguments

  • shift to dialog orientation

  • selection of indicators

  • narrative component

  • Good Evaluation Practices

  • envisioned as web service


Career Narrative

Links expertise, output, and influence together in an evidence-based argument; included content is negotiated with the evaluator and tailored to the particular evaluation.

Influence

- on science

- on society

- on economy

- on teaching

Output

- publications

- public media

- teaching

- web/social media

- data sets

- software/tools

- infrastructure

- grant proposals

Expertise

- scientific/scholarly

- technological

- communication

- organizational

- knowledge transfer

- educational

Tatum & Wouters | 14 November 2013

ACUMEN Portfolio

  • Evaluation Guidelines

  • aimed at both researchers and evaluators

  • development of evidence based arguments (what counts as evidence?)

  • expanded list of research output

  • establishing provenance

  • taxonomy of indicators: bibliometric, webometric, altmetric

  • guidance on use of indicators

  • contextual considerations, such as: stage of career, discipline, and country of residence



Portfolio & Guidelines

  • Instrument for empowering researchers in the processes of evaluation

  • Taking into consideration all academic disciplines

  • Suitable for other uses (e.g. career planning)

  • Able to integrate into different evaluation systems


What type of standardization process do we need?


Evaluation Machines

  • Primary function: make stuff auditable

  • Mechanization of control – degradation of work and trust? (performance paradox)

  • Risks for evaluand and defensive responses

  • What are their costs, direct and indirect?

  • Microquality versus macroquality – lock-in

  • Goal displacement & strategic behaviour


Citation as infrastructure

  • Infrastructures are not constructed but evolve

  • Transparent structures taken for granted

  • Supported by invisible work

  • They embody technical and social standards

  • Citation network includes databases, centres, publishers, guidelines


Effects of indicators

  • Intended effect: behavioural change

  • Unintended effects:

    • Goal displacement

    • Structural changes

  • The big unknown: effects on knowledge?

  • Institutional rearrangements

  • Does quality go up or down?


Constitutive effects

  • Limitations of conventional critiques (e.g. ‘perverse or unintended effects’)

  • Effects:

    • Interpretative frames

    • Content & priorities

    • Social identities & relations (labelling)

    • Spread over time and levels

  • Not a deterministic process

  • Democratic role of evaluations

