NISO altmetrics Standards Project White Paper – draft 4, as at 06/06/2014

NISO altmetrics Standards Project White Paper – draft 4, as at 06/06/2014

Response from Gregor McDonagh, NERC Research Information Manager, 10/06/2014

Traditional Citations Analysis Issues
  • Where a Funding Organisation has a portfolio disproportionately weighted towards pure/basic research, will it naturally tend to generate higher-than-average citation counts within a domain? (See slides 5 & 6 for the Environmental Sciences Frascati breakdown from ERFF Report 04.) [NERC, for example, achieves very good citation metrics, but it is important to remain critical and consider whether this is at least in part due to methodological biases.]
  • Where a publication appears in a peer-reviewed journal, it has passed a ‘minimum quality standard’ by virtue of having been peer reviewed. Nevertheless, some outlier high-citation papers may obtain their citations from refutation rather than use. To guard against such instances, the cited and citing papers can be read to validate quality, and evaluation panels can validate groups of papers in detail. Alternatively, outliers wash out in large aggregations, say at a national level. Citations do, however, appear to have a grey area at mezzanine aggregations, where counts are unduly influenced by outliers. Thus peer review of journals does not, in and of itself, appear to underpin all potential citation analyses.
  • Bullet 2 raises the following question: if citation analyses at high aggregation do not rely upon peer review for validity, is there not scope to vary the methodology by broadening coverage, at the very least, from other peer-reviewed journals to other forms of grey literature? This might then start to broaden the interpretation of use from purely generating academic impact to serving a broader set of users.
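The point in bullet 2 that outliers wash out only in large aggregations can be illustrated with a small simulation. This is a sketch with made-up citation counts (the numbers are illustrative, not NERC data): one heavily cited outlier paper distorts the mean of a mid-size ("mezzanine") pool far more than that of a national-scale pool.

```python
import random

random.seed(0)

# Most papers receive modest citation counts; one outlier (e.g. a paper
# cited largely in refutation) receives very many.
def mean_citations(n_papers, outlier=5000):
    counts = [random.randint(0, 20) for _ in range(n_papers)]
    counts[0] = outlier  # plant a single outlier paper in the pool
    return sum(counts) / n_papers

mezzanine = mean_citations(50)      # mid-size aggregation: outlier dominates the mean
national = mean_citations(50_000)   # large aggregation: outlier washes out
```

The same 5,000-citation paper contributes 100 to the mean of the 50-paper pool but only 0.1 to the mean of the 50,000-paper pool, which is why validation by reading cited and citing papers matters most at intermediate aggregations.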
Options
  • Can slight adaptations in methodology for citations analysis be shown to produce results consistent with traditional methods?
  • At a more fundamental level, can citations methodology be picked apart to determine when and whether certain ‘essential’ components, like peer review, really are essential in all cases, and, if not, what the alternative conditions are?
  • Can a broader appreciation of the weaknesses of culturally accepted methodologies encourage a fair comparison with alternatives (particularly accepting that there will be horses for courses)?
  • Would proving some cases using grey literature help with the intellectual leap to usage statistics, which are far more dependent upon aggregation ‘magic’?
Other points of detail
  • Whilst the NISO white paper is in so many regards excellent in its clarity, more could be done to explain the peculiarities of the ‘variable geometries’ that arise from multiple alternative aggregations, which in turn arise from diverse reporting granularities.
  • E.g., NERC started collecting research outputs via its Research Outputs Database (ROD) nearly a decade ago, long before the current RCUK harmonisation. NERC found that the historic programme structure used by its Centres for reporting was cut up into chunks many times the size of the ‘intramural’ investments of other Councils, and larger still than the value of the average Council extramural research grant.
  • The higher the fidelity of resolution of the funding structure, the more that collaboration generates multiple attribution. So when it came to presenting inputs and outputs information in the new RCUK Gateway to Research system, I needed to explain to our Centres how the old methodology in ROD, which tried to avoid double counting (since ROD didn’t make use of unique identifiers like DOIs), left NERC Centres looking relatively light on output volumes at a project level (see slides 10 to 14).
  • Hence, on p.10 at “A single research output can then be further aggregated or grouped by: … Funder,” add a new bullet “Funder and Funding”.
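The multiple-attribution point can be sketched in a few lines: counting project-level attributions inflates output volumes relative to deduplicating by a unique identifier such as a DOI. The DOIs and project names below are made up for illustration.

```python
# Hypothetical attribution records of (DOI, project). A collaborative paper
# appears once per contributing project, so it is counted twice at project level.
attributions = [
    ("10.0000/paper-a", "Project 1"),
    ("10.0000/paper-a", "Project 2"),  # same paper, second collaborating project
    ("10.0000/paper-b", "Project 1"),
    ("10.0000/paper-c", "Project 2"),
]

attribution_count = len(attributions)                   # project-level count: 4
unique_outputs = len({doi for doi, _ in attributions})  # deduplicated by DOI: 3
inflation = attribution_count / unique_outputs - 1      # extra attributions vs. unique outputs
```

A deduplicated count (which ROD's old methodology approximated without DOIs) reports 3 outputs where the attribution count reports 4, so whether a Centre looks heavy or light on outputs at project level depends entirely on which convention is in use.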
Matching (attribution)

[Diagram: a single paper produced by a collaboration is attributed to both Project 1 and Project 2. On one side, papers aggregate from projects up through programmes to the Centre; on the other, papers aggregate from projects (grants) up through schemes to the HEI.]

Analysing inputs

[Diagram: inputs (£ + scope) attach to projects (grants), which aggregate through schemes to the HEI, and to projects, which aggregate through programmes to Centres. Missing info: Centres invented programmes in ROD, meaning no link to NERC’s financial information and therefore a gap in inputs-to-outcomes matching.]

Bigger on the inside!

+35% attributions at project level means Centres lose out in comparison with HEIs