
Archival Metrics: Standardized Survey Tools for Assessment in Archives and Special Collections

This project aims to develop standardized survey tools for user-based evaluation in archives and special collections. The goal is to strengthen the culture of assessment and provide a reliable means of comparing data across repositories.

Presentation Transcript


  1. Standardized Survey Tools for Assessment in Archives and Special Collections • Elizabeth Yakel, University of Michigan • Library Assessment Conference, August 4, 2008

  2. Archival Metrics • Drs. Wendy Duff and Joan Cherry, University of Toronto • Dr. Helen Tibbo, University of North Carolina • Aprille McKay, J.D., University of Michigan • Andrew W. Mellon Foundation Grant (June 2005 – March 2008) • Toward the Development of Archival Metrics in College and University Archives and Special Collections • Archival Metrics • http://archivalmetrics.org

  3. Archival Metrics Toolkits • On-site Researcher • Archival Website • Online Finding Aids • Student Researcher • Teaching Support

  4. Archives and Special Collections • Weak culture of assessment • Need the ability to perform user-based evaluation on concepts specific to archives and special collections • No consistency in user-based evaluation tools • No demonstrated reliability in existing survey tools • No means of comparing data across repositories

  5. Developing the Archival Metrics Toolkits • Analysis of other survey instruments • Interviews with archivists, faculty, students to identify core concepts for evaluation • Concept Map • Creation of the questionnaires • Testing the questionnaires, survey administration procedures, and the instructions for use and analysis

  6. Analysis of Other Instruments • LibQual+ • E-metrics (Charles McClure and David Lankes, Florida State University) • Definitions • Question phraseology

  7. Interviews • Develop concepts for evaluation • Identify areas of greatest concern • Understand the feasibility of deploying surveys using different methods

  8. Interviews: Availability (1) • Archivist: This is a very big place and people are busy. If somebody doesn’t stop you to ask you a question and you don’t even see that they’re there because they don’t…we’re actually thinking about finding some mechanism like a bell rings when somebody walks in the door because you get so focused on your computer screen that you don’t even know somebody’s there. We understand why they’re a little intimidated. (MAM02, lines 403-408)

  9. Interviews: Availability (2) • Student: I’ve said this a million times, but the access to someone who is open to helping. So not just someone who is supposed to be doing it, but who actually wants to. (MSM02, lines 559-561) • Professor: And they'll even do it after hours, almost every one of them. In fact they'd rather do it after hours because they don't have a crush of business in there. (MPM04, lines 320-322)

  10. Defining terms • Accessibility of service • Accessibility of Service is a measure of how easily potential users are able to avail themselves of the service and includes (but is certainly not limited to) such factors as: availability (both time and day of the week); site design (simplicity of interface); ADA compliance; ease of use; placement in website hierarchy if using web submission form or email link from the website; use of metatags for digital reference websites (indexed in major search tools, etc.); or multilingual capabilities in both interface and staff, if warranted based on target population (E-Metrics)

  11. Conceptual Framework • Quality of the Interaction • Access to systems and services • Physical Information Space • Learning Outcomes

  12. Quality of the Interaction • Perceived expertise of the archivist • Availability of staff to assist researchers • Instructors and students in the interviews returned to these dimensions again and again as important in a successful visit.

  13. Learning Outcomes • Student Researcher • Cognitive and affective outcomes • Confidence

  14. Testing of the tools • Iterative design and initial user feedback • Small scale deployment • Larger scale testing

  15. Pilot Testing • Two phases • Single archives / special collections • Walked people through the questionnaire • Post-test interview • Clarity of questions • Length of instrument • Comprehensiveness of questions • Willingness to complete a questionnaire of this type • Single implementations of the questionnaire and administration procedures until the instrument was stable

  16. Pilot Testing Outcomes • Extensive feedback on question wording • Decision to include a definition of finding aids • Decision not to pursue a question bank approach • Move away from expectation questions • Decisions about how to administer the surveys • On-site researcher and student researcher surveys became paper-based • Online finding aid and website surveys remained online

  17. Larger Scale Testing

  18. Testing Administration Procedures • Researcher • When to administer? • During visit • Problematic due to archival procedures • Student • End of term • Need instructor cooperation • Early email with online survey link had a poor response rate

  19. Administration of the Web-based Tools • Email to recent on-site researchers • Rolling invitations immediately after a response to an email reference question • Retrospective invitations to email reference requestors • Across all the surveys tested, we received an average response rate of 65%.

  20. Accumulating Email Reference Requests

  21. Administration Recommendations • Email reference and on-site researchers represent two different populations • Few email reference requestors had visited the repository in person

  22. Differences in Samples

  23. Reliability Testing

  24. Reliability of Other Questions • Free-text and multiple-choice questions generate consistent responses • Student survey focuses on learning outcomes, confidence, and the development of transferable skills • Contradictory results in Questions 12 (‘Is archival research valuable for your goals?’) and 13 (‘Have you developed any skills by doing research in archives that help you in other areas of your work or studies?’) • The two questions were moderately correlated: phi coefficient = .296; χ²(1, N = 426) = 37.39, p < .05
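
A minimal sketch of the correlation statistics reported above, assuming Python with NumPy and SciPy: it computes the chi-square statistic and phi coefficient for a 2x2 table of yes/no responses to two questions. The counts are hypothetical placeholders, not the study's data.

    # Sketch: chi-square and phi coefficient for two yes/no survey questions
    # (e.g., Q12 "research valuable?" vs. Q13 "skills developed?").
    # The counts below are hypothetical placeholders, not the study's data.
    import numpy as np
    from scipy.stats import chi2_contingency

    # 2x2 contingency table: rows = Q12 (yes, no), columns = Q13 (yes, no)
    table = np.array([[250, 90],
                      [30, 56]])  # hypothetical counts, N = 426

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    n = table.sum()
    phi = np.sqrt(chi2 / n)  # phi coefficient for a 2x2 table

    print(f"chi2({dof}, N={n}) = {chi2:.2f}, p = {p:.4f}, phi = {phi:.3f}")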

  25. Archival Metrics Toolkits • Questionnaire (Word document and PDF) • Can transfer Survey Monkey version to other Survey Monkey accounts • Administering the Survey (Instructions) • Preparing your data for analysis • Excel spreadsheet pre-formatted for data from the Questionnaire • Pre-coded questionnaire • SPSS file pre-formatted for data from the Website Questionnaire • Sample report

  26. Administration Instructions • How the questionnaires can and cannot be amended • Identifying a survey population • Sampling strategies • Soliciting subjects • Applying to their university’s Institutional Review Board (IRB) or ethics panel.
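
The toolkit's instructions cover sampling on paper; as a rough illustration only, the sketch below draws a simple random sample of potential respondents from a visitor or email-reference log. The file name, column name, and sample size are hypothetical, not part of the toolkit.

    # Sketch: drawing a simple random sample of potential survey respondents.
    # "researcher_log.csv", the "email" column, and the sample size of 100
    # are hypothetical examples, not part of the Archival Metrics toolkit.
    import csv
    import random

    random.seed(2008)  # fixed seed so the sample can be reproduced

    with open("researcher_log.csv", newline="") as f:
        emails = [row["email"] for row in csv.DictReader(f)]

    sample = random.sample(emails, k=min(100, len(emails)))
    print(f"Invited {len(sample)} of {len(emails)} potential respondents")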

  27. Analysis Instructions • Instructions • Preformatted Excel spreadsheet • Codebook • Preformatted SPSS file
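
As a rough sketch of the analysis step the pre-formatted spreadsheet and codebook support, assuming Python with pandas: load the coded responses and tabulate frequencies for one question. The file name, column name, and codes are hypothetical.

    # Sketch: tabulating coded responses from a completed survey spreadsheet.
    # "onsite_responses.xlsx" and the column "Q5_staff_availability" are
    # hypothetical; the real toolkits ship their own codebook and templates.
    import pandas as pd

    df = pd.read_excel("onsite_responses.xlsx")

    # Frequency and percentage table for one coded multiple-choice question
    freq = df["Q5_staff_availability"].value_counts().sort_index()
    pct = (freq / freq.sum() * 100).round(1)

    print(pd.DataFrame({"count": freq, "percent": pct}))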

  28. Archival Metrics Toolkits • Free and available for download • Creative Commons License • Must ‘register’, but this information will not be shared • We will be doing our own user-based evaluation of the toolkits’ use

  29. Thank You and Questions
