
The Response-Shift Bias: Pre-Test/Post-Test v. Post-Test/Retrospective Pre-Test Evaluation of Information Literacy Training Programs

Marie T. Ascher, Head, Reference & Information Services, and Diana J. Cunningham, Associate Dean & Director, Health Sciences Library, New York Medical College, Valhalla, NY. marie_ascher@nymc.edu

Objective
The purpose of this research was to examine the response-shift bias among public health workers who participated in informatics training. The purpose of this poster is to introduce this technique to information professionals and educators doing competency-based evaluation.

Background
Following a baseline survey to assess competencies in the spring of 2005, a series of training sessions aimed at improving proficiency on many of those informatics competencies [1] was conducted between October 2005 and January 2006. Originally, self-reported proficiency at baseline (pre-test) was compared to self-reported post-test scores. Using this methodology, evidence of gains in proficiency was weak, and in several cases scores even showed a decline! Suspecting a possible case of "I know now what I didn't know then," the researchers looked for a so-called "response-shift bias."

Figure 1. A segment of the original competencies instrument

Methods
As a follow-up to the initial post-test, investigators conducted focus groups with the employees who had attended training in order to better understand the results. During these follow-up focus groups, a post-test/retrospective pre-test questionnaire was distributed with a self-addressed stamped envelope and an incentive for return. In contrast to a pre-test/post-test design, the post-test/retrospective pre-test survey asked "What is your proficiency for each competency today?" and "What was your proficiency for each competency one year ago?" This then/post survey enabled a comparison, for those participants, between results obtained with a pre-test/post-test methodology and results obtained with a post-test/retrospective pre-test methodology.

Figure 2. A segment of the Post-Test/Retrospective Pre-Test instrument

Furthermore, self-reported post scores on the two surveys differed considerably from each other: eleven of 15 participants rated themselves lower for most competencies on the then/post survey than they had on the previous post-test.

Conclusions
The decision to examine this methodology did not occur until after results from the original pre/post methodology were questioned, i.e., how could our trainees have become less proficient?! Self-report in general involves biases and perhaps should be avoided where other methods might be more reliable. The results of our post-test/retrospective pre-test survey are not conclusive: only a small number of surveys were returned, probably by our most diligent participants, with limited results to report. What we learned above all is that pre/post self-report is unreliable. The purpose of this inquiry was not to get better results, but to determine which results are more realistic. Based on satisfaction surveys combined with focus group results, we believe the then/post results are more realistic and a better measure of knowledge gained, or perceived to have been gained, by participants. To conduct this research again and measure proficiency ratings for the recommended CDC competencies, this then/post methodology, combined with focus group follow-up, would be our methodology from the outset. There are times when self-report is the best or only option available.
It is the recommendation of these researchers that library researchers consider this then/post methodology over traditional pre/post self-report comparisons: even with bad training (which ours was not), learners should not know less when they complete a curriculum!

Results
Focus Group Results: Germane to this poster, the third focus group question asked participants: "Based upon your current understanding of our project's efforts to identify your training needs, do you feel that you would rate yourself differently now than you did when you first completed the survey?" Several responses indicated that they would; as one participant from Putnam County succinctly put it: "When I did the post-test, I realized that I didn't understand the questions during the pre-test."

Post-Test/Retrospective Pre-Test Results: Fifteen valid surveys were returned by focus group participants (two other surveys were discounted because they did not have valid ID numbers for comparison). The results show that for all respondents and for all twenty-six competencies there was an average overall gain from the "aware" (1) to the "knowledgeable" (3) level. As demonstrated in Table 1, all respondents to the then/post survey except one (who reported zero change) reported increased overall proficiency. Most importantly, average levels of improvement were higher on the competencies originally targeted for training, those with a significant "gap score" between proficiency and relevance (roughly competency numbers 1-18) [3]. Overall, there was significant improvement across competencies compared with what was indicated by the original pre-test/post-test design. As Table 1 also demonstrates, individuals showed a greater increase in proficiency level using the then/post methodology (an illustrative sketch of this gain-score comparison appears at the end of this transcript).

Based upon results of the pre-test, the following instructional sessions were offered to all employees of three county health departments:
• 1. Utilizing Your PC for Public Health
  • Using Your PC for Inter-Office Communication - creating a disease fact sheet with Word, e-communication (27 participants)
  • Managing and Presenting Your Data - using Excel to create tables and graphics, using PowerPoint (35 participants)
• 2. Information Sleuthing: Retrieving and Manipulating Public Health Information
  • Finding and Using Public Health Data - locating web-based data for various scenarios, downloading data to Excel (32 participants)
  • Finding and Managing the Literature of Public Health - phpartners.org resources, PubMed, evidence-based introduction (19 participants)
• 3. Grant-Writing Overview
  • Scoping it Out: Researching Grant Opportunities - funding opportunities, online grant resources, grants.gov, etc. (14 participants)
  • Making it Happen: Basics of Grant Writing - outsourced, hands-on workshop
• 4. Bringing It Together
  • Finding, Evaluating and Organizing Information - developing a resource guide from a variety of sources using a pandemic influenza example; included some HTML instruction (13 participants)

References
1. O'Carroll PW, and the Public Health Informatics Competencies Working Group. Informatics competencies for public health professionals. Seattle, WA: Northwest Center for Public Health Practice, 2002.
2. Howard GS. Response-shift bias: a problem in evaluating interventions with pre/post self-reports. Eval Rev 1980 Feb;4:93-106.
3. Cunningham DJ, Ascher MT, Viola D, Visintainer PF. Baseline assessment of public health informatics competencies in two Hudson Valley health departments. Public Health Rep 2007 May/June;122:302-310.
Satisfaction ratings at the end of each session were overwhelmingly positive (overall average score: 4.5 on a 5-point scale).

The Response-Shift Bias
"In using self-report instruments, researchers assume that a subject's understanding of the standard of measurement for the dimension being assessed will not change from one testing to the next (pretest to posttest). If the standard of measurement were to change, the ratings would not accurately reflect change due to treatment and would be invalid." [2]

The Public Health Information Partners (PHIP) project is funded by the National Library of Medicine under NLM contract N01-LM-1-3521 with the National Network of Libraries of Medicine, Middle Atlantic Region (Region 1). Poster presented at the Annual Meeting of the Medical Library Association, May 21, 2007, Philadelphia, PA.
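To make the contrast between the two survey designs concrete, here is a small worked sketch. It is purely illustrative and does not use the poster's data: the ratings are invented, the five "competencies" are hypothetical, and the 1-5 scale simply mirrors the proficiency scale described above (1 = aware, 3 = knowledgeable). It shows how a participant whose internal standard of measurement shifts during training can appear to make no progress under a conventional pre/post comparison, while the then/post comparison, in which both ratings share the post-training frame of reference, shows a gain.

    # Illustrative only: invented ratings for one hypothetical participant
    # across five competencies, on a 1-5 proficiency scale.

    def mean(values):
        return sum(values) / len(values)

    pretest       = [3, 3, 2, 3, 2]  # baseline self-report, before training
    posttest      = [3, 2, 3, 3, 2]  # self-report after training (pre/post design)
    retro_pretest = [1, 1, 2, 2, 1]  # "What was your proficiency one year ago?"
    then_posttest = [3, 3, 3, 4, 3]  # "What is your proficiency today?"

    # Conventional pre/post gain: the two ratings use different internal
    # standards, so a real gain can be masked ("I know now what I didn't know then").
    pre_post_gain = mean(posttest) - mean(pretest)

    # Then/post gain: both ratings are made at the same sitting with the same
    # (post-training) frame of reference, removing the response shift from the comparison.
    then_post_gain = mean(then_posttest) - mean(retro_pretest)

    print(f"pre/post gain:  {pre_post_gain:+.2f}")   # +0.00 with these numbers
    print(f"then/post gain: {then_post_gain:+.2f}")  # +1.80 with these numbers

With these invented numbers the pre/post comparison shows no change while the then/post comparison shows a clear gain; the direction of this pattern, though not its magnitude, mirrors the one reported in Table 1.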
