Evaluation



Presentation Transcript


  1. Evaluation Chapter 10

  2. LEARNING OBJECTIVES • Know the difference between monitoring and evaluating the intervention • Develop skills for identifying and documenting empirical change in client system outcomes • Demonstrate knowledge of multiple factors that may contribute to a delay in goal accomplishment • Become skillful in contract reformulation based on empirical evidence showing change or lack of change • Use tools to demonstrate the extent of goal accomplishment and to monitor progress or regression in client system outcomes

  3. Reasons for Evaluation • Part of the continuous process of service delivery • Expected ethical principle of social work practice • Client system goal attainment • Client system accountability • Stakeholder accountability • Quality assurance

  4. Types of Evaluation • Formative: process-oriented evaluation; internally conducted; focuses on tracking changes in practice decisions; a tool for monitoring the application of the intervention • Summative: outcome-oriented evaluation; focuses on client system outcomes and program outcomes; requires a “before” and “after” examination of change in outcomes; uses controlled comparisons for outcomes

  5. FORMATIVE VS. SUMMATIVE

  6. Client System Outcome - Goal Outcomes refer to any condition that an intervention is intended to affect or change. They are targets toward which intervention is directed. • Short-term • Client satisfaction with services • Intermediate • Client change in specific skills • Long-term • Client change in psychosocial functioning

  7. Instruments for Measuring Change Standardized Instruments • Designed to measure particular knowledge, aptitudes, feelings, or behaviors as diagnostic measures. • Administered, scored, and interpreted in a systematic, standardized, objective manner on the basis of norms established for a large number of persons with similar problems. • Have established validity and reliability, and therefore must be administered in a specific way to ensure continued validity.

  8. Standardized Instruments • Advantages: known validity, reliability, and norms; known systematic application process; specified scoring and administration process; may have diagnostic and screening properties • Disadvantages: sometimes too complex to administer; too long; not relevant to client system needs; often requires specific training; cost

  9. Instruments for Measuring Change Client System Focused Measures • Client system focused measures are evaluative tools or instruments developed to individually assess, monitor, and evaluate quantitative and qualitative changes in client system outcomes and overall situation. • Logs • Rating Scales • Goal Attainment Scaling System

  10. Choosing tools and instruments • What is the instrument supposed to track and show? • Is it a valid and reliable instrument tool? • Is it easy to administer or does it require special training? • Is it sensitive to the client system’s capacities in age, cognition, literacy, gender, experience, language, and culture? • Is it easily scored or does it require special scoring system? • How much time does it take to administer or collect information? • Can it be used frequently or does frequent repetition create a potential for bias or client learning? • Is it standardized or not, and if so, in what respect?

  11. Instrument Validity and Reliability: Getting to the Train Station • Validity: the instructions get you correctly to the train station every time. The instrument measures what it is supposed to measure and not something else. • Reliability: the instructions consistently lead you to the bus station, not the train station. The instrument is consistent but not necessarily valid.

  12. Types of Research Designs Considered for Practice and Program Evaluation • Group based research designs • Single system designs • Qualitative designs

  13. Practice Aspects of Evaluation • Reviewing contract accomplishments • Reviewing specific goal accomplishments • Reviewing specific objectives and task activities accomplishments

  14. Questions for Evaluating Goals Has the goal been accomplished? YES: proceed to termination planning. NO: review the following: • How much of the planned change has been accomplished? • How much more time may be needed to accomplish the goal? • What client system barriers are preventing goal accomplishment? • What environmental barriers are preventing goal accomplishment? • Were tasks appropriately selected, sequenced, planned, and described? • Was the goal/need appropriately identified and assessed?

  15. Client System Logs • Structured journals or other such individually developed recording devices • Computer blogs, audiotapes, or videotapes • Self-completed • The intent of log-journaling is to facilitate a dialogue about what took place, when, where, why, and how, and to maintain documentation that can be tracked over time and reviewed to help the client system see and reflect on changes that take place during intervention

  16. Rating Scales • Individually developed, empirically rank-ordered judgments that track a specified client system outcome. • Typically, these scales are used for client systems to self-rate their feelings, behaviors, traits, problems, and changes in objectives. • Graphic rating scales • Self-anchored scales • Summated rating scales

  17. Goal Attainment Scaling System • Identify client system goals and objectives based on needs and preferences. • Optional – Assign weights to selected goals or objectives. • Define each goal objective by expected outcome on a scale (-2, -1, 0, +1, +2) in which the client system and social worker define the minimum level of progress expected for a successful outcome to be achieved. • Obtain (a weighted) score for each goal (based on the number of weighted objectives) during a designated time period (for example: at baseline and then at a later point, or monthly, quarterly, annually).
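The weighted scoring step above can be sketched in a few lines of Python. This is a minimal illustration, not the formal standardized GAS formula; the goal weights and ratings are hypothetical examples.

```python
# Minimal sketch of weighted Goal Attainment Scaling.
# Each goal is a (weight, rating) pair, with ratings on the
# -2..+2 scale where 0 is the expected level of outcome.

def gas_score(goals):
    """Return the weighted average attainment across goals."""
    total_weight = sum(weight for weight, _ in goals)
    return sum(weight * rating for weight, rating in goals) / total_weight

# Hypothetical ratings at baseline and at a quarterly review.
baseline = [(2, -2), (1, -1), (1, -1)]
quarterly = [(2, 0), (1, 1), (1, -1)]

print(gas_score(baseline))   # -1.5
print(gas_score(quarterly))  # 0.0
```

Comparing the two scores over the designated time period (baseline versus quarterly here) shows movement toward the expected outcome level of 0.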

  18. Single System Designs • Case-level empirical methods of evaluation • Assess change in practice objectives or client system outcomes by repeated frequent measurement • The evaluated subject can include single individual, couple, group, family, organization, or community • The case-level replications can be aggregated to determine program effectiveness

  19. Purposes of Single System Designs • To monitor, evaluate, and adapt intervention to change over time • To evaluate whether changes occurred in the targeted outcomes • To determine if the changes could have been produced by the intervention • To determine the effectiveness of different interventions

  20. Using Single System Designs • Decide how the system’s change is going to be measured over time • Specify and define the target outcomes (e.g. behaviors) that are going to be tracked over time • Operationalize the target outcomes and specify benchmarks for tracking positive change • Choose a design for the intervention application • Graph the measured results • Inspect results periodically or at specific intervals • Apply an analytical strategy for evaluating changes in targeted outcomes • Assess whether statistical and practical or clinical significance has been achieved
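The measurement and analysis steps above can be illustrated with a simple A-B design. The weekly problem-behavior counts and the comparison of phase means are hypothetical examples of one common analytical strategy, not a prescribed method.

```python
# Sketch of a simple A-B single system design (hypothetical data).
# The target outcome is a weekly count of problem behaviors.

baseline_a = [9, 8, 9, 8]          # Phase A: no intervention
intervention_b = [7, 6, 5, 4, 3]   # Phase B: first intervention applied

def mean(values):
    return sum(values) / len(values)

# Compare phase means, and check whether every intervention-phase
# observation falls below the baseline mean.
change = mean(baseline_a) - mean(intervention_b)
all_below_baseline = all(x < mean(baseline_a) for x in intervention_b)

print(round(change, 2))        # 3.5
print(all_below_baseline)      # True
```

Graphing these two phases side by side, then applying a chosen analytical strategy such as this mean comparison, mirrors the steps listed on the slide.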

  21. Understanding Single System Designs • Phase A - Baseline, no intervention is applied • Phase B - Application of the first intervention • Phase B1 - Intensified application of the first intervention • Phase C - Second or different intervention is applied

  22. Single System Designs – Basic Monitoring: Tracking Changes [Chart: decreasing number of client problems plotted against number of sessions]

  23. [Chart: decreasing client system number of problems plotted against number of sessions, with Baseline (A) and Intervention phases marked]

  24. Baseline and Two-Treatment Progress: Decreasing Client System Problems [Chart: Baseline (A), B phase (individual intervention), and C phase (group intervention)]

  25. Explaining the Effects of Intervention [Chart: decreasing client system number of problems across sessions 1–17, with alternating A and B phases]

  26. An empirically based approach to evaluation in the General Method encourages the integration of ethical, evidentiary, and application concerns. • It involves a systematic approach to improving and maintaining the quality of client system services. • It includes steps that convert goals into measurable objectives. • It uses multiple sources of evidence. • It makes evaluation an explicit process open to public and client system scrutiny. • It can be used to evaluate effectiveness and efficiency in carrying out the intervention. • It can be used for monitoring client system change as well as outcomes. • It seeks ways to ensure and improve best practices for client systems.
