
Hazards of price-per-use comparison in e-journal management


Presentation Transcript


  1. Are they any use? Hazards of price-per-use comparison in e-journal management. Jason S. Price, Ph.D., Claremont Colleges’ Libraries, Los Angeles, California

  2. General hazards – broad strokes
• Defining use narrowly
• Vagaries of user behavior
• Different dissemination styles in teaching
• Granularity of usage reports
* An e-journal package-centric approach, from an academic institution’s perspective

  3. G1. A narrow definition of use
COUNTER JR1: full-text article requests
Additional use-related measures:
• A-Z list click-throughs / web log files
• Times cited at your institution in recent papers (ACS Livewire 8:2)
• Impact Factor
• Number of papers published by local researchers, by journal
• Faculty/researcher surveys
• Print use?
• PageRank? (Bollen & Van de Sompel 2006)

  4. G2. Vagaries of search/use habits
Users may check for full text before judging relevance from abstracts (or even titles!)
Google Accelerator (prefetching inflates request counts)

  5. G3. Dissemination style in teaching
• Prof. A downloads 1 PDF and makes copies for students ↓ under-counts: 1 use serves many
• Prof. B sends a link to the publisher PDF to her 40 students ↑ over-counts: many uses of 1 article
• Prof. C posts the PDF on an electronic reserve site ↓ under-counts: 1 use serves many

  6. G4. Usage report granularity
Title-level use statistics can’t separate frontfile use from backfile use, whether the backfile is purchased or freely available.

  7. Specific hazards – Cost per use
• Determining cost
• Comparison to ILL cost
• Comparison across publishers
• Ignoring by-title data
• Lack of benchmarks

  8. COUNTER briefing
Counting Online Usage of NeTworked Electronic Resources: a standard & code of practice that enables comparison of usage statistics from different vendors.
Components:
• Terminology & definitions
• Layout & format of journal & database reports
• Processing of user input
• Delivery frequency & availability period
• Testing & regular audits
www.projectcounter.org | tinyurl.com/nxqvv

  9. S1. Determining cost
Overall cost per use = 1 year’s cost / 1 year’s requests
e.g. $58,600 publisher e-access fee / 35,700 article views = $1.64 cost per use*
* but don’t forget the $420,000 mandatory cost of subscriptions (paid to the agent) for a subset of these same titles:
($420,000 + $58,600) = $478,600 / 35,700 = $13.40
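A minimal sketch of this arithmetic in Python, using the figures from the slide (the variable names are mine):

```python
def cost_per_use(annual_cost: float, annual_requests: int) -> float:
    """Overall cost per use: one year's cost divided by one year's requests."""
    return annual_cost / annual_requests

e_access_fee = 58_600     # publisher e-access fee ($)
article_views = 35_700    # COUNTER JR1 full-text article requests
mandatory_subs = 420_000  # mandatory subscription cost paid to the agent ($)

print(f"${cost_per_use(e_access_fee, article_views):.2f}")
# $1.64 -- misleadingly low: ignores the mandatory subscription spend
print(f"${cost_per_use(e_access_fee + mandatory_subs, article_views):.2f}")
# ~$13.41 -- the true figure (the slide rounds to $13.40)
```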

  10. Overall cost per view by subscription type

  11. S2. Comparison with ILL
Package CPV = $13.40. What does this tell us?
• Is it high? Low?
• Better than ILL?
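A rough sketch of the comparison, assuming a hypothetical cost per filled ILL request (the slide leaves the benchmark open, and actual ILL costs vary widely by institution):

```python
package_cpv = 13.40          # overall cost per view from slide 9
ill_cost_per_request = 25.0  # hypothetical ILL cost per filled request ($)

print("Package beats ILL" if package_cpv < ill_cost_per_request else "ILL is cheaper")

# Break-even: annual requests above which the package is cheaper than ILL
annual_package_cost = 478_600
print(f"Break-even at {annual_package_cost / ill_cost_per_request:,.0f} requests/year")
```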

  12. S3. Cross-package comparison (CPV chart)
So Pkg 1 is a better value than Pkg 3? It might not be…

  13. Variation in use by format (Davis and Price, 2006, JASIST)

  14. HTML-to-PDF ratios vary widely for these packages
How many PDF requests in Pkg 1 are duplicates of HTML views?

  15. Live Link

  16. S3. Package value revisited (CPU vs. CPP)
Counting PDF requests only tells a different story!
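A sketch of why the ranking can flip, with invented package figures chosen only to illustrate the effect of excluding HTML views (which may duplicate PDF downloads):

```python
# Hypothetical packages; only the arithmetic mirrors the slides.
packages = {
    "Pkg 1": {"cost": 50_000, "html_views": 40_000, "pdf_views": 10_000},
    "Pkg 3": {"cost": 60_000, "html_views": 10_000, "pdf_views": 30_000},
}

for name, p in packages.items():
    cpu = p["cost"] / (p["html_views"] + p["pdf_views"])  # cost per full-text request
    cpp = p["cost"] / p["pdf_views"]                      # cost per PDF request only
    print(f"{name}: CPU ${cpu:.2f}, CPP ${cpp:.2f}")
# Pkg 1: CPU $1.00, CPP $5.00
# Pkg 3: CPU $1.50, CPP $2.00 -> Pkg 1 wins on CPU but loses on CPP
```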

  17. Response: COUNTER filter
A unique-article filter provides a new metric: the number of successful unique article requests in a session.
It needs to be applied per institution / interface configuration.
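A minimal sketch of such a filter, assuming a request log with session and article identifiers (the field names and data are mine):

```python
from typing import Iterable

def unique_article_requests(log: Iterable[tuple[str, str]]) -> int:
    """Count successful unique article requests: each (session, article)
    pair counts once, so an HTML view followed by a PDF download of the
    same article in the same session is not double-counted."""
    return len({(session, article) for session, article in log})

log = [
    ("s1", "doi:10.1000/a"),  # HTML view
    ("s1", "doi:10.1000/a"),  # PDF of the same article: filtered out
    ("s1", "doi:10.1000/b"),
    ("s2", "doi:10.1000/a"),  # different session: counts again
]
print(unique_article_requests(log))  # 3
```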

  18. Reality check: assumptions
Should we expect cost per use to be equivalent among packages?
• Quality
• Scope
• Business model (for-profit vs. cost recovery)
• Exposure in Google Scholar

  19. S4. Ignoring by-title data

  20. Cutting off the long tail…
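By-title data makes that tail visible. A sketch with hypothetical annual use counts for one package:

```python
# Hypothetical by-title use counts; real packages run to hundreds of titles.
title_uses = {"Jnl A": 9_400, "Jnl B": 2_100, "Jnl C": 600,
              "Jnl D": 90, "Jnl E": 40, "Jnl F": 12, "Jnl G": 3}

total = sum(title_uses.values())
cumulative = 0
for title, uses in sorted(title_uses.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += uses
    print(f"{title}: {uses:>6} uses ({cumulative / total:.0%} cumulative)")
# A few titles carry nearly all the use; a package-level cost-per-use
# figure hides the long tail of titles that are rarely or never used.
```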

  21. Before Collaboration  After Collaboration 

  22–25. S5. Lack of Benchmarks (consortium comparison charts)

  26. Consortial benchmarking

  27. Recommendations
• Ensure you have the right cost
• Be wary of cross-publisher comparison
• Consider both overall and PDF-only use (should it be the same?)
• For single-package evaluation:
  • Look at patterns at the title level
  • Benchmark vs. consortium or peers
• (Support efforts to ‘outsource’ cost-per-use analysis to consortia staff)

  28. Support from COUNTER
• Indication of subscription type (subscription vs. lease)
• Separation of backfile data
• Unique-article filter to mitigate interface & linking effects
• By-title data
• Single-password consortium access to aggregate and by-institution statistics
• Much more…
