
Evaluating Intended Continuing Education Outcomes



  1. Evaluating Intended Continuing Education Outcomes Joshua D. Southwick, MRC, CRC David Vandergoot, PhD

  2. Outline • Why Continuing Education? • Intended Outcomes of Continuing Education • Actual Results of Continuing Education • Evaluating Training • Why Evaluate? • How to Evaluate? • Approaches to Evaluation • Guiding Principles • Recommendations • Examples • Practice & Share (if time)

  3. Why Continuing Education? Conferences? In-Service Training?

  4. Why Continuing Education? Conferences? In-Service Training? • We Need Qualified Rehabilitation Counselors • The Rehabilitation Act of 1973, as amended, requires “qualified vocational rehabilitation counselors” to provide services • Ethically relevant: • CRCs “practice only within the boundaries of their competence” (CRCC, p. 11) • CRCs “recognize the need for continuing education . . . to maintain competence in the skills they use” (p. 11)

  5. What are the Intended Outcomes of Continuing Education? Conferences? In-service Training?

  6. What are the Intended Outcomes of Continuing Education? Conferences? In-service Training? • Continuing Professional Development (CPD) involves “the continuous acquisition of new knowledge, skills, and attitudes to enable competent practice.”1 • Well-trained employees may feel less frustration, more job satisfaction, and more job commitment2 • After pre-service, graduates should expect to learn new skills or hone existing skills3 • Gaining specialty-specific expertise • Understanding ever-changing challenges arising within the field • Becoming familiar with promising and evidence-based practices emerging from new empirical research • 1. Peck, McCall, McLaren, & Rotem, 2000, p. 432 • 2. Allen & van der Velden, 2001 • 3. Leahy et al., 2009

  7. Intended Outcomes (Continued) • Certification or Licensure Maintenance • Team Building • Networking • Increase Organizational Effectiveness and Efficiency • Increased capacity to serve individuals • Better services for persons with disabilities • Greater consumer satisfaction

  8. Mixed Results for Training Effectiveness • Training has often been less effective than expected: • Managerial training – people are learning but not applying1 • Medical professionals2, 3 • Training has been effective: • Medium to large effect sizes for training outcome criteria related to learning (e.g., knowledge; d=0.63), behavior (e.g., job-related behavior changes; d=0.62), and results (e.g., productivity; d=0.62)4 • 1. Powell & Yalcin (2010) • 2. Davis, O’Brien, Freemantle, Wolf, Mazmanian, & Taylor-Vaisey (1999) • 3. Green & Seifert (2005) • 4. Arthur, Bennett, Edens, & Bell (2003)
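For readers unfamiliar with the d statistics above: Cohen's d expresses a training effect as the standardized difference between two group means. A minimal sketch in Python, using made-up scores (not data from the cited meta-analysis):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups, using the pooled SD."""
    n1, n2 = len(group_a), len(group_b)
    s1, s2 = statistics.stdev(group_a), statistics.stdev(group_b)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical post-test knowledge scores for trained vs. untrained staff
trained   = [82, 75, 90, 68, 88, 79, 85]
untrained = [70, 65, 80, 60, 75, 72, 68]
print(f"d = {cohens_d(trained, untrained):.2f}")  # d of 0.5 is medium, 0.8 is large
```

By Cohen's conventions, the meta-analytic values cited above (d of 0.62-0.63) fall in the medium-to-large range.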

  9. Training Evaluation Studies in Rehabilitation • New Zealand study, Flett, Biggs, & Alpass (1994) • Finding: professional training decreased occupational stress, thereby increasing rehabilitation practitioners’ ability to work effectively • Christensen, Boisse, Sanchez, & Friedmann (2004) • Finding: a one-day training workshop impacted VR counselors’ knowledge and reported practice in substance abuse screening (reported screening frequency shifted from never to rarely)

  10. Black Hole? • Despite the intended outcomes of continuing education, the return on this training investment remains, to a great extent, unmeasured and unknown. • Most training programs are evaluated only for participants’ reactions (i.e., satisfaction; Alliger & Janak, 1989; Van Buren & Erskine, 2002)

  11. Why Evaluate?

  12. Why Evaluate? • It is important to evaluate continuing education in order to validate and improve such training efforts • When budgets are tight, it may be necessary to justify training expenses

  13. Evaluating Training • Comic: two men at a chalkboard covered with a complicated formula; step two of the formula reads “then a miracle occurs,” and one man says, “I think you should be more explicit here in step two.” • Diagram: Continuing Education Training → (a miracle occurs?) → Better Outcomes for Persons with Disabilities

  14. Approaches to Training Evaluation • Kirkpatrick’s Four Levels • Logic Model

  15. Kirkpatrick’s Four Levels • The Four Levels • Level 1: Reaction – To what degree participants react favorably to the learning event. • Level 2: Learning – To what degree participants acquire the intended knowledge, skills, and attitudes based on their participation in the learning event. • Level 3: Behavior – To what degree participants apply what they learned during training when they are back on the job. • Level 4: Results – To what degree targeted outcomes occur as a result of the learning event(s) and subsequent reinforcement. *From Kirkpatrick & Kirkpatrick (2010)
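One way to make the four levels operational is as a simple mapping from each level to its candidate measures. A minimal sketch in Python; the measures listed are hypothetical examples, not prescribed instruments:

```python
# Map each Kirkpatrick level to example measures (hypothetical choices).
kirkpatrick_plan = {
    "1. Reaction": ["post-session satisfaction survey"],
    "2. Learning": ["pre/post knowledge quiz"],
    "3. Behavior": ["2-3 month on-the-job follow-up survey"],
    "4. Results":  ["agency indicators tracked 3-12 months out"],
}
for level, measures in kirkpatrick_plan.items():
    print(f"{level}: {', '.join(measures)}")
```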

  16. Percentage of Training Evaluated at Each Level • Reaction – 78% • Learning – 32% • Behavior – 19% • Results – 7% • *Reported across multiple disciplines; Morin, L., & Renaud, S. (2004). Participation in corporate university training: Its effect on individual job performance. Canadian Journal of Administrative Sciences, 21(4), 295-306.

  17. Evaluation through Logic Models • A logic model shows the rationale or program theory for how program planners believe that the resources and activities invested in a program will produce the expected outcomes. • Used to: • Visually display the components of a program • Identify measures that will be useful in evaluating the program outcomes

  18. Components of a Logic Model • Inputs – Resources invested (human, financial, organizational, community) • Activities – Implementation; how resources are used (projects, events, actions) • Outputs – Participation; direct products (deliverables) • Outcomes – Impact (expected changes or benefits): short-term = learning; medium-term = action or behavior; long-term = conditions • *Adapted from University of Wisconsin-Extension-Cooperative Extension, 2003; W. K. Kellogg Foundation, 2004

  19. Logic Model Example • Diagram: Inputs → Activities → Outputs → Outcomes

  20. A Possible Logic Model for Continuing Education • Inputs: program planners, instructor preparation, training materials, money, facilities, technology • Activities: using resources to implement a continuing education program, conference, workshop, or webinar • Outputs: verification of attendance at the training, CEU credits earned, satisfaction scores • Outcomes: short-term – increases in participants’ knowledge, skills, and confidence; medium-term – participants implement new knowledge and skills; long-term – increases in agency-level performance, efficiency, and consumer satisfaction
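For planners who want to work with a logic model programmatically, e.g., to pair each outcome with a measurable indicator, here is a minimal sketch encoding the model above as a data structure; the class and field names are our own invention:

```python
from dataclasses import dataclass

@dataclass
class LogicModel:
    inputs: list[str]           # resources invested
    activities: list[str]       # how resources are used
    outputs: list[str]          # direct products / participation
    outcomes: dict[str, str]    # expected changes, keyed by time horizon

ce_model = LogicModel(
    inputs=["program planners", "instructor preparation", "training materials",
            "money", "facilities", "technology"],
    activities=["continuing education program", "conference", "workshop", "webinar"],
    outputs=["verification of attendance", "CEU credits earned", "satisfaction scores"],
    outcomes={
        "short-term":  "increases in participants' knowledge, skills, and confidence",
        "medium-term": "participants implement new knowledge and skills",
        "long-term":   "increases in agency-level performance, efficiency, "
                       "and consumer satisfaction",
    },
)
print(ce_model.outcomes["long-term"])
```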

  21. How to Evaluate Intended Outcomes • Which criteria should be measured in order to most accurately assess the outcomes of continuing education? • What types of measures can act as indicators that professionals are developing?

  22. Guiding Principles • Knowledge Translation (KT) • Organization Development (OD)

  23. Knowledge Translation (KT) • “A move beyond the simple dissemination of knowledge into actual use of knowledge”1 • Barriers to KT / research utilization2: • Environmental & organizational factors (culture, leadership) • Individual factors (age, years of service) • Difficulty accessing research (database access, time) • Difficulty determining the relevance of research • 1. Straus, S. E., Tetroe, J., & Graham, I. (2009). Defining knowledge translation. Canadian Medical Association Journal, 181(3-4), 165-168. • 2. Johnson, K., Brown, P., Harniss, M., & Schomer, K. (2010). Knowledge translation in rehabilitation counseling. Rehabilitation Education, 24(3-4), 239-250.

  24. KT – what happens after the knowledge is in our heads?

  25. Organization Development (OD) • Organizational Development: A method for designing, implementing, and reinforcing intentional organizational changes1 • Key characteristic of OD: • The action taken is deliberately and consciously designed to bring about change over a specified time period, and there must be some way to demonstrate and/or measure the degree to which the change occurred2 1. Cummings & Worley, 2009 2. Worley & Feyerherm, 2003

  26. Guiding Principles for the Evaluation of Continuing Education • (Table pairing evaluation steps with guiding frameworks; superscripts key to the notes below, e.g., KT2) • Notes: 1. Kirkpatrick’s levels 2. Knowledge Translation principles 3. Organization Development principles

  27. How it is done now: • Requirements for pre-approval of CRC continuing education credits: • >60 minutes • Focus is to increase knowledge of, or skills in, rehabilitation counseling • Clearly defined learning objectives or expected outcomes • Participants complete an evaluation of the program’s value (not an evaluation of learning) • Accessible, barrier-free location • For CE through written means, multiple-choice questions are required. CRCC (2011)

  28. Example of How Evaluation could be done… • VR agency does an in-service training • Identify learners’ needs/wants; involve learners in the training planning process (Adult learning theory; OD) • Set objectives for learning (use these in the evaluation questions & indicators) • Hold training • Evaluate: • Reaction (satisfaction survey) • Learning (pre-post quiz; see the scoring sketch below) • Behavior (2-3 month follow-up survey on objectives) (Knowledge Translation) • Results (3-12 month follow-up on agency indicators specifically related to objectives) (Organization Development)
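As an illustration of the Learning step above, a minimal scoring sketch for the pre-post quiz; the scores are hypothetical:

```python
from statistics import mean

# Hypothetical percent-correct scores for five trainees, before and after training.
pre_scores  = [55, 60, 48, 70, 62]
post_scores = [78, 85, 66, 88, 80]

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
print(f"Mean pre: {mean(pre_scores):.1f}%  Mean post: {mean(post_scores):.1f}%")
print(f"Mean gain: {mean(gains):.1f} percentage points")
```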

  29. Evaluation Recommendations • Timing: 2-12 months post-training, or the amount of time estimated for participants to implement new skills • Keep measurements not too distal from training objectives (if too distal, you won’t see the impact) • May also want to assess organizational culture (did it support or hinder implementation?)

  30. Additional Recommendations for Distance & Blended Training • Include planned assessments in the course outline • Build assessments so that they are fully integrated into the course • Assure participants that their satisfaction scores will remain anonymous (i.e., they will not be tracked by IP address) • Provide immediate feedback when feasible

  31. Dave’s Example

  32. Context • Online training provided to 42 counselors • Trained to implement a case management model in 10 sites • Trainees were administered a knowledge check as a post-training assessment • Ongoing training provided using Case Reviews • Performance evaluated using benchmarks of key model indicators aggregated by site

  33. Evaluation Strategy • Conduct training and evaluate the extent of content learned using a knowledge check • Assess interim performance by conducting case reviews using a protocol reflective of model processes and providing one-on-one instruction as needed • Evaluate relationship of performance on case review protocol with model performance indicators (interim assessment reported here) • Eventually relate performance on model indicators with employment outcomes (will not be available for several years)

  34. Ongoing Evaluation Model • Cycle: Provide online training → Assess knowledge → Conduct quarterly case reviews → Provide TA as needed → Monitor site performance monthly → Analyze individual and site data → (back to quarterly case reviews)

  35. First Question • Did those who completed all the online courses do better than those who did not complete the courses on their first Case Review?

  36. Answer to First Question • Score on First Case Review • 8 did not complete all the courses • 34 did complete them • Means: • Completers averaged 76% • Non-completers averaged 86% • This is not a significant difference • Implication – more assistance needed
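The significance test behind a comparison like this is typically an independent-samples t-test. A minimal sketch run from summary statistics: the means and group sizes come from the slide, but the standard deviations are not reported, so the value assumed here (SD = 15) is purely for illustration:

```python
from scipy import stats

# Welch's t-test from summary statistics (does not assume equal variances).
# Means and ns are from the slide; the SDs are assumptions.
t, p = stats.ttest_ind_from_stats(
    mean1=76, std1=15, nobs1=34,   # completers
    mean2=86, std2=15, nobs2=8,    # non-completers
    equal_var=False,
)
print(f"t = {t:.2f}, p = {p:.3f}")  # with this assumed spread, p > .05
```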

  37. Second Question • Did those who completed all the online courses do better than those who did not complete the courses as averaged over all their Case Reviews?

  38. Answer to Second Question • The average score across all case reviews • Means: • Completers averaged 80% • Non-completers averaged 82% • This is not a significant difference • Providing training in and of itself may not be sufficient to achieve desired performance

  39. Third Question and Answer • What is the relationship between the average course grade and Case Review scores? • First Case Review score: Correlation = .09 • Averaged Case Review scores: Correlation = .12 • These are both significant at the .01 level • Implication – Although these results are in the desired direction, they are weak and reinforce the need for ongoing technical assistance
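Correlations like these can be computed with a standard Pearson r; a minimal sketch with hypothetical paired values (the actual counselor-level data are not reproduced here):

```python
from scipy import stats

# Hypothetical pairs: each counselor's average course grade and first
# Case Review score.
course_grades = [88, 92, 75, 81, 95, 70, 84, 90]
review_scores = [80, 78, 76, 82, 79, 81, 77, 83]

r, p = stats.pearsonr(course_grades, review_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
```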

  40. Fourth Question and Answer • What is the degree of improvement in Case Review ratings over time? • Mean Case Review Ratings: Quarter 1 – 77.9; Quarter 2 – 70.7; Quarter 3 – 81.9; Quarter 4 – 85.3; Quarter 5 – 89.0 • Implication – with ongoing technical assistance, conformance to model expectations improved over time
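One simple way to quantify this quarterly trend is a linear regression of mean rating on quarter number; this sketch uses the means reported on the slide:

```python
from scipy import stats

quarters = [1, 2, 3, 4, 5]
ratings  = [77.9, 70.7, 81.9, 85.3, 89.0]  # mean Case Review ratings, from the slide

fit = stats.linregress(quarters, ratings)
print(f"slope = {fit.slope:.1f} points per quarter, r = {fit.rvalue:.2f}")
```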

  41. Fifth Question and Answer • This analysis was based on aggregated data by site • How do Case Review scores, aggregated across the most recent two reviews, relate to the most recent performance indicators? • Correlation with: • Key indicators most reflective of course content = .59 • Total benchmark score = .65 • Ranking of site performance = .71

  42. Overall Implications • Training with follow-up technical assistance leads to desired performance; simply providing training may not lead to success. • Analyzing data both with the individual as the unit of analysis and with aggregated site data as the unit of analysis leads to an enhanced understanding of the impact of training and technical assistance

  43. Josh’s Example

  44. Context • Professional Conference • 2 follow-up surveys were sent: the 1st immediately following the conference, and a 3-month follow-up survey after the conference • Sought to identify factors that facilitated or hindered Knowledge Translation • Sought to identify the types of collaboration that resulted from networking at the conference

  45. First Survey Results • N = 98 • Reported KT = 76 • Facilitators of KT: • Personal interest = 72% (55) • Opportunity in current situation = 61% (46) • Belief that applying the knowledge/skills will make a positive difference = 45% (34) • High self-efficacy = 39% (30) • Supportive policies and/or superiors = 34% (26) • Peer interest = 33% (25) • New collaborations as a result of the conference: • About 500 new collaborative projects (not unique) reported among 76 respondents

  46. First Survey Results (cont.) • Reported no KT = 22 • Barriers to KT • Lack of personal interest = 0 • Lack of peer interest = 1 • *Lack of opportunity = 8 • Lack of supportive policies and/or superiors = 0 • Low Self-efficacy = 0 • Belief that applying the knowledge/skills will not make a difference = 1 • *Lack of Time = 6

  47. Reported Facilitators of KT

  48. 3 month Follow-up Survey Results

  49. Advantages of Logic Model Evaluation: • Validation of the effectiveness of continuing education • Timely feedback to continuing education providers • Evaluate at levels beyond just Reaction: Learning, Behavior, Results • The very act of evaluating (and planning to evaluate) can positively impact: • Organization of the continuing education activity • Engagement of the participants during the continuing education activity • Transfer of knowledge and skills to workplace behaviors • Results for the entire organization
