
Is it really that bad?

Karen R. Harker, MLS, MPH, Collection Assessment Librarian, UNT Libraries. Verifying the extent of full-text linking problems.

Presentation Transcript


  1. Karen R. Harker, MLS, MPH, Collection Assessment Librarian, UNT Libraries. Is it really that bad? Verifying the extent of full-text linking problems

  2. “I find searching for journal articles and actually finding full articles to be very difficult. Sometimes it just links to another site and then to a fragment of an article.” “It's hard to find links to online articles--some links say they can't find it, but sometimes you can still locate it and I don't know why the main 'find links' page doesn't bring it up.” Frustrated with “Find Full-Text” when “it doesn't work”; “if it's not 100% perfect, there is really no point” in offering the service. “Sometimes, if a link is not provided to an article in the search results, it is very difficult to find. The article linker often comes up with no results even though it says the article is in UNT's collection.”

  3. What have you done?

  4. Link-Checking: RANDOM selection cannot be predicted

  5. What is required? Intermediate to advanced Excel (but NOT programming)

  6. Knowledge About your collection About the problem

  7. Clear questions

  8. Think about... your problem your collection your link resolver your people

  9. Brainstorm

  10. Come away with…

  11. What Kind of Research Questions? What is important to you? Just a few. Compared to what? Be specific.

  12. Such as… Are links from EBSCO more successful than links from Ovid? Is Serials Solutions’ 360 link resolver more likely to get to the full-text from our key resources than EBSCO’s? Is full-text linking better or worse compared to last year? What is the chance that a client will get the full-text of an article on the first click?

  13. Start with the Results

  14. Comparing One Source With Another. Is the chi-square test result < target? No. Is Ovid significantly different from EBSCO? No.

  15. Comparing One Source With the Average. Is the chi-square test result < target? Yes. Is EBSCO significantly different from the average? Yes.

  16. Comparing One Target With the Expected or Ideal Rate. Is the chi-square test result < target? Yes. Is EBSCO significantly different from the ideal rate? Yes.
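Slides 14-16 summarize these chi-square comparisons as yes/no answers. As a rough illustration only (the presentation does this in Excel, and the counts below are made up), the source-versus-source comparison on slide 14 could be run in Python with scipy, which derives the expected values from a 2x2 table of successes and failures:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: full-text successes and failures per source.
observed = [
    [42, 8],   # EBSCO: 42 successful links, 8 failures (made-up numbers)
    [36, 14],  # Ovid:  36 successful links, 14 failures (made-up numbers)
]

chi2, p_value, dof, expected = chi2_contingency(observed)

# Compare the p-value against the target (0.10 here, i.e. a 90% confidence level).
if p_value < 0.10:
    print(f"p = {p_value:.3f}: the sources differ significantly")
else:
    print(f"p = {p_value:.3f}: no significant difference between the sources")
```

A p-value at or above the 0.10 cut-off used later in the slides means the two sources are statistically similar.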

  17. Random Sampling Review or background, depending on your viewpoint

  18. Sampling Terms. Universe: all citations. Sampling Population: citations in databases to which we have access. Sampling Frame: citations to journal articles only, to which we have full-text access. Sample: the citations actually drawn from the frame.

  19. Selection Methods. Non-probability: convenience sampling; the chance of being selected is not known. Probability: the probability of any one citation being selected is known.

  20. Simple Random Sampling • Every citation that meets the criteria has an equal chance of being selected. See Demo. NOTE: Articles vary greatly by source, target and year.
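A minimal sketch of this step in Python rather than the Excel demo the slide refers to; the citation list and sample size here are placeholders:

```python
import random

# Placeholder sampling frame: every citation that meets the criteria.
citations = [f"citation-{i}" for i in range(1, 5001)]

# Simple random sampling: each citation has an equal chance of being selected.
sample = random.sample(citations, k=100)
print(sample[:5])
```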

  21. Stratified Sampling • Every citation in discrete homogeneous groups has an equal chance of being chosen. Try Demo again… • Useful to zero in on a possible problem • Stratifying by source, target & year is possible, but would be time-consuming.
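The same idea, stratified; this sketch assumes each citation is already tagged with its source, and the strata and counts are illustrative only:

```python
import random
from collections import defaultdict

# Hypothetical citations tagged with their source (the stratum).
citations = [("EBSCO", f"cit-{i}") for i in range(2000)] + \
            [("Ovid", f"cit-{i}") for i in range(1500)]

# Group citations into discrete, homogeneous strata by source.
strata = defaultdict(list)
for source, cit in citations:
    strata[source].append(cit)

# Draw an equal-probability sample within each stratum.
sample = {source: random.sample(group, k=50) for source, group in strata.items()}
print({source: len(chosen) for source, chosen in sample.items()})
```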

  22. (Diagram) Sampling population divided into strata; samples selected from each stratum.

  23. Cluster Sampling • When the sampling population naturally “clusters” (e.g. source and targets). • The way they cluster doesn’t affect your outcome. • Divide population into these clusters • Randomly select the clusters to be a part of the sampling frame • Randomly select sample from selected clusters • Useful for very large populations.
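A sketch of the cluster approach under the same assumptions, with journals acting as the clusters; only the randomly chosen clusters contribute citations to the frame:

```python
import random

# Hypothetical clusters: each journal holds its own citations.
clusters = {f"journal-{j}": [f"j{j}-cit-{i}" for i in range(30)]
            for j in range(1, 201)}

# 1. Randomly select the clusters that will form the sampling frame.
chosen_journals = random.sample(list(clusters), k=20)

# 2. Randomly select citations from within each chosen cluster.
sample = [random.choice(clusters[journal]) for journal in chosen_journals]
print(len(sample), sample[:3])
```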

  24. (Diagram) Sampling population divided into clusters; samples selected from the selected clusters.

  25. This Methodology Simple randomized cluster • Select a sample of ejournals (clusters) • Search each database for articles • Randomly select a citation (sample) • Test and record results • Most useful for questions that are focused on the sources.
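The workflow on slide 25 could be skeletonized as follows; searching and link-testing are manual steps in the presentation, so the two helper functions below are only stand-ins that return fake data:

```python
import random

def search_database(database, journal):
    # Stand-in for manually searching the database; returns fake citation IDs.
    return [f"{database}:{journal}:art-{i}" for i in range(random.randint(0, 12))]

def test_full_text_link(citation):
    # Stand-in for clicking "Find Full-Text" and judging the outcome by hand.
    return random.choice(["full-text", "wrong page", "no link"])

def run_study(databases, journals):
    results = []
    for journal in journals:            # the randomly selected ejournals (clusters)
        for database in databases:      # each source to be searched
            citations = search_database(database, journal)
            if not citations:
                results.append((database, journal, "title not found"))
                continue
            citation = random.choice(citations)   # randomly select one citation
            results.append((database, journal, test_full_text_link(citation)))
    return results

print(run_study(["EBSCO", "Ovid"], ["Journal A", "Journal B"]))
```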

  26. Other Questions, Other Designs Comparing link-resolvers: Matched-pair • Select a sample of ejournals • Using one of the link resolvers, search the source for articles in these ejournals. • Randomly select a citation • Test and record results • For the next link resolver, search each source for the same citation (the matched pair). • Test and record results.
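For the matched-pair design, the record-keeping is the important part: every citation gets one outcome per link resolver. A tiny illustrative tally (resolver names taken from slide 12, citations and outcomes made up):

```python
# Hypothetical paired outcomes: the same citation tested through both resolvers.
pairs = [
    {"citation": "cit-001", "360 Link": "full-text", "EBSCO": "no link"},
    {"citation": "cit-002", "360 Link": "full-text", "EBSCO": "full-text"},
    {"citation": "cit-003", "360 Link": "no link",   "EBSCO": "no link"},
]

# Tally how often each resolver delivered full text for the identical citations.
for resolver in ("360 Link", "EBSCO"):
    hits = sum(1 for pair in pairs if pair[resolver] == "full-text")
    print(f"{resolver}: {hits}/{len(pairs)} full-text")
```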

  27. Other Questions, Other Designs • For problems related to targets: • Use the same method, but… • Randomly select ejournals from each target • For problems related to environment (browsers, location of user, etc.): • Use the same method as the link-resolver comparison, only change the browser or location.

  28. Practical Applications Using Excel to Help You Along

  29. Before we begin… • For those with Laptops, download files: • Excel file: http://digital.library.unt.edu/ark:/67531/metadc96818/ • PDF of Steps: http://digital.library.unt.edu/ark:/67531/metadc96827/ • Or, just follow along…

  30. I need to check how many? • May be fewer than you’d think • Need to know: • Sampling strategy • Kind of analysis • Expected rate • Chance of a title being indexed in the source • Number of databases or sources to examine • Educated Guess
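The presentation supplies a Sample Size Calculator worksheet; as a ballpark alternative, the standard sample-size formula for estimating a proportion gives similarly modest numbers. The expected rate, margin of error, and confidence level below are illustrative choices, not values from the slides:

```python
import math

def sample_size(expected_rate, margin_of_error=0.05, z=1.645):
    """Approximate sample size for estimating a proportion.

    z = 1.645 corresponds to 90% confidence; expected_rate is the
    educated guess at the full-text success rate.
    """
    p = expected_rate
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

# Example: an expected success rate of 80%, within +/-5 points, 90% confidence.
print(sample_size(0.80))   # roughly 174 citations
```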

  31. Selecting the Journal Titles (Clusters) • Download your ejournals list • May want to limit to only those used recently • Randomly select the correct number of journals based on sample size • Randomly assign each title to the databases or sources you will be searching. • Excel Tricks • Remove Duplicates • Fill Cell – assigns a new ID number • Sampling method in Data Analysis • Randomly selects IDs from your list • VLOOKUP – gets the titles for the selected IDs • RANDBETWEEN – Randomly assigns each title to a source to be searched
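A rough Python equivalent of those Excel tricks, with made-up journal titles standing in for the downloaded ejournals list and only the sources already named in the slides:

```python
import random

# In practice, load the exported ejournals list; a few invented titles stand in here.
titles = [
    "Journal of Hypothetical Studies",
    "Annals of Example Research",
    "Journal of Hypothetical Studies",   # duplicate on purpose
    "Review of Illustrative Methods",
    "Quarterly of Placeholder Data",
]

titles = list(dict.fromkeys(titles))        # Remove Duplicates
sampled = random.sample(titles, k=3)        # randomly select the journals (clusters)

sources = ["EBSCO", "Ovid"]                 # databases to be searched
assignments = {title: random.choice(sources) for title in sampled}  # plays the role of RANDBETWEEN
for title, source in assignments.items():
    print(source, "-", title)
```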

  32. If Not Found

  33. Test the Sources • Log in to the database • Search for articles in the first journal • If none are found, note this in your results and skip to the next title. • If some are found, note the total number of articles.

  34. Test Sources • If articles are found: • Sort the list by author last name (if possible) • Note the total number of articles found • Enter this number in the Sample Size Calculator worksheet (Random Article Selector) • Note the “Select this article” number • In the database, navigate to this article • Click on the Find Full-Text button
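The "Select this article" number from the Random Article Selector worksheet amounts to drawing one random integer between 1 and the number of articles found; for example:

```python
import random

total_articles = 87            # number of articles the database reported (example value)
select_this = random.randint(1, total_articles)
print(f"Select article #{select_this} in the author-sorted result list")
```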

  35. Full-Text PDF!

  36. Test Sources • Note the success of that link in your tally sheet • Rinse & repeat: • For each journal in the list • For each database to be tested

  37. If Not Found

  38. Tips & Tricks • Search by ISSN, if possible. • Display the most citations per page. • If the full-text article is available in that database itself, skip the title. • This counts as a non-response: it lowers the response rate and raises the sample size you need.

  39. Summarize the data • Count up all results in each source • Create ratios for each result (e.g. Full-text ratio) • # Full-Text / # Titles Found • Average the ratios
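In Python terms (the counts are invented), the summary step is just division and an average of the per-source ratios:

```python
# Hypothetical tally: per source, titles found and full-text successes.
tally = {
    "EBSCO": {"titles_found": 30, "full_text": 24},
    "Ovid":  {"titles_found": 28, "full_text": 19},
}

ratios = {source: counts["full_text"] / counts["titles_found"]
          for source, counts in tally.items()}
average_ratio = sum(ratios.values()) / len(ratios)

for source, ratio in ratios.items():
    print(f"{source}: full-text ratio = {ratio:.2f}")
print(f"Average full-text ratio = {average_ratio:.2f}")
```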

  40. Example Data: Raw Counts

  41. Example Data: Ratios

  42. Test the Results • So you have a ratio – so what? What does it mean? Is it high? Low? Compared to what? • Use the Chi-Square test to compare the ratios • Excel: CHISQ.TEST(actual range, expected range) • If the result is less than 0.10 (or 1.00 – Confidence Level), then the difference is statistically significant. • This may be good or bad, depending on what you're comparing against.

  43. Comparing Against an Ideal % • Actual range: the # of Successes & # of Failures • Expected Range: # expected to be success & # of expected failures • Example: CHISQ.TEST(B2:C2, B3:C3)
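A scipy version of the same test against an ideal rate, mirroring CHISQ.TEST(actual range, expected range); the counts and the 90% ideal rate below are made up for illustration:

```python
from scipy.stats import chisquare

observed = [50, 10]              # actual successes and failures (made-up counts)
ideal_rate = 0.90                # the ideal full-text rate being tested against
n = sum(observed)
expected = [n * ideal_rate, n * (1 - ideal_rate)]

result = chisquare(f_obs=observed, f_exp=expected)
print(f"p = {result.pvalue:.3f}")
# Same 0.10 cut-off as slide 42 (a 90% confidence level).
print("Significantly different from the ideal rate" if result.pvalue < 0.10
      else "Not significantly different from the ideal rate")
```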

  44. Context is King • The value of the result depends on what you are measuring and comparing • If the difference between two sources is not significant, then they are statistically similar. • If the difference between two link resolvers is significant, then one is better than the other. • NOTE: This doesn’t tell you by how much! • Use your best judgment
