
Evidence-Based Evaluation for Victim Service Providers


Presentation Transcript


  1. Evidence-Based Evaluation for Victim Service Providers Anne P. DePrince, Ph.D. Department of Psychology Center for Community Engagement and Service Learning

  2. Game Plan: Guiding Questions • 1:45-2:45 • Goals for gathering evidence • What to measure? • 2:55-3:15 • Selecting measures (Part 1) • August • Selecting measures (Part 2) • Who to measure? • When to measure? • Costs (to respondents) of measuring?

  3. Perspectives I bring • Researcher, TSS Group • Director, Center for Community Engagement and Service Learning (CCESL)

  4. Big Picture • Evidence-based evaluation never happens in a vacuum • Evidence goals should be directly tied to strategic planning/program goals • Don't let evidence-based evaluation end up in a black hole • Think about uses of data on the front end

  5. CCESL Sample Goals from Strategic Planning

  6. Goals for gathering evidence

  7. Goals for gathering evidence • Program evaluation • Why do we do what we do, and does it work? • Do your programs/services/interventions work? • Funders care about this too…

  8. Goals for gathering evidence • Building evidence-based evaluation capacity helps your agencies as research consumers • Understanding measurement issues helps you evaluate evidence for various practices

  9. Goals for gathering evidence • Descriptive • What new trends need to be addressed? • Have we accurately characterized the problem?

  10. Goals for gathering evidence • Generative and Collaborative • Building programs based on theory and data

  11. What to measure?

  12. What to measure
  Monitoring / Process:
  • What (how much) did clients receive?
  • How satisfied were clients?
  Measuring / Outcome:
  • What changes occurred because of your program? (knowledge, attitude, skill, behavior, expectation, emotion, life circumstance)

  13. What to measure
  Monitoring / Process:
  • # clients served
  • # calls on crisis line
  • # prevention programs delivered
  • # clients satisfied with services
  Measuring / Outcome:
  • Increase in knowledge about safety planning
  • Increase in positive perceptions of criminal justice process
  • Increase in engagement with criminal justice process
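A minimal sketch of the monitoring-versus-outcome distinction in Python; the data, column names, and scores below are invented for illustration only.

    # Hypothetical sketch: process (monitoring) counts versus outcome change scores.
    import pandas as pd

    clients = pd.DataFrame({
        "received_service":      [True, True, True, False, True],
        "satisfied":             [True, True, False, False, True],
        "safety_knowledge_pre":  [2, 3, 1, 2, 4],   # score before the program
        "safety_knowledge_post": [4, 4, 3, 2, 5],   # score after the program
    })

    # Monitoring / process: what (how much) did clients receive, and were they satisfied?
    print("clients served:   ", int(clients["received_service"].sum()))
    print("clients satisfied:", int(clients["satisfied"].sum()))

    # Measuring / outcome: what changed because of the program?
    change = clients["safety_knowledge_post"] - clients["safety_knowledge_pre"]
    print("mean increase in safety-planning knowledge:", change.mean())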

  14. General Issues in Measuring • Building blocks to get to your specific evidence-based evaluation question

  15. You start with a theory of change: based on Theory A, we believe that increases in Victim Safety will lead to lower psychological distress and greater engagement with the c.j. system

  16. From theory, you identify constructs important to your agency/program • Construct: a variable, not directly observable, that has been identified to explain behavior on the basis of some theory • Example construct: Victim Safety (based on Theory A, we believe that increases in Victim Safety will lead to lower psychological distress and greater engagement with the c.j. system)

  17. Change at Your Agency • Pick one program within your agency • What is the theory of change that underlies that program? • What are you trying to change? • What factors lead to the change you want? • Identify 3-4 of the MOST relevant constructs tied to this program’s theory of change.

  18. Back to that construct: what on earth is Victim Safety?

  19. Measuring the weight of smoke: Victim Safety

  20. Defining terms in evaluation • Variable • Any characteristic or quantity that can take on one of several values

  21. Different kinds of variables
  Predictor (independent) variable (what comes first):
  • Program: Program A versus B; Program A versus no program
  • # sessions
  • Restraining order: yes/no
  Outcome (dependent) variable (what comes next):
  • Aggressive behavior: low/high
  • Parenting skills: ineffective to effective
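A minimal sketch of a predictor and an outcome variable in a tiny data set; the program labels, column names, and scores are hypothetical.

    # Hypothetical sketch: predictor (which program a client received) and
    # outcome (a parenting-skills score measured afterward).
    import pandas as pd

    df = pd.DataFrame({
        "program":         ["A", "A", "B", "B", "none", "none"],  # predictor (independent)
        "parenting_score": [7,   8,   5,   6,   4,      5],       # outcome (dependent)
    })

    # Compare the outcome across levels of the predictor
    print(df.groupby("program")["parenting_score"].mean())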

  22. Different kinds of variables • Confound: any uncontrolled extraneous (or third) variable that changes with your predictor and could provide an alternative explanation of the results • Example: an earlier SANE exam predicts better mental health, but earlier victim advocacy changes along with it and could explain the results

  23. Operationalize variables • Operational definitions make your variable specific and limited • Get to something we can observe • Example: big difference between saying I want to “treat trauma” versus “decrease children’s hyper-vigilance and avoidance”

  24. Different kinds of measurements…give you different information • Nominal • Values differ by category; there is no ordering • Can’t calculate an average • a.k.a. qualitative, dichotomous, discrete, categorical • Least information • Examples: Sex, Race, Ethnicity; anything answered as yes/no or present/absent

  25. Scales of Measurement • Ordinal • Values have different names and are ranked according to quantity. • e.g., Olympic medals • Example • Divide people into low, moderate, and high service needs • ** You don’t know the exact distance between two values on an ordinal scale; you just know high is higher than medium, etc.

  26. Scales of Measurement • Interval and Ratio • Spacing between values is known (so you know that not only is one unit larger or smaller, but by how much it is larger or smaller) • Examples • Scores on • a measure of PTSD symptoms • a test of knowledge of safety planning • Number of • calls for service • revictimizations

  27. How to decide on a measurement scale • Choice of scale affects the amount and kind of information you get • And generally the less information you get, the less powerful the statistics you can use • Interval and ratio scales provide the most information • But you can’t always use them – e.g., sex • GUIDING RULE: When you can, always go for more info (interval/ratio)
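A minimal sketch of the three scale types using invented example data; the point is only to show which summaries make sense for each scale.

    # Hypothetical sketch: nominal, ordinal, and interval/ratio data in pandas.
    import pandas as pd

    df = pd.DataFrame({
        # Nominal: categories with no order; only counts/frequencies make sense
        "sex": pd.Categorical(["F", "M", "F", "F"]),
        # Ordinal: ordered categories; ranks are meaningful, distances are not
        "service_need": pd.Categorical(
            ["low", "high", "moderate", "high"],
            categories=["low", "moderate", "high"], ordered=True),
        # Interval/ratio: numeric scores; differences and averages are meaningful
        "ptsd_score": [12, 30, 21, 25],
    })

    print(df["sex"].value_counts())    # nominal: frequencies only
    print(df["service_need"].max())    # ordinal: ordering is meaningful
    print(df["ptsd_score"].mean())     # interval/ratio: averages are allowed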

  28. At your agency • How are you currently measuring the 3-4 constructs you identified? • What kind of measurement scale?

  29. Relevant Measure Websites
  • Measuring Violence-Related Attitudes, Behaviors, and Influences Among Youths: A Compendium of Assessment Tools - Second Edition http://www.cdc.gov/ncipc/pub-res/measure.htm
  • Measuring Intimate Partner Violence Victimization and Perpetration: A Compendium of Assessment Tools http://www.cdc.gov/ncipc/dvp/Compendium/Measuring_IPV_Victimization_and_Perpetration.htm
  • http://mailer.fsu.edu/~cfigley/Tests/Tests.html
  • http://vinst.umdnj.edu/VAID/browse.asp

  30. What to look for in a good measure? • Validity and Reliability

  31. What is Validity? • “truth” (Bryant, 2000) • Degree to which our inference or conclusion is accurate, reasonable and correct.

  32. Examples of Types of Measurement Validity: Face Validity • The measure seems valid “on its face” • A judgment call • Probably the weakest form of measurement validity • Example: a measure of anxiety includes items that are clearly about anxiety

  33. Examples of Types of Measurement Validity: Construct Validity • Extent to which an instrument measures the targeted construct • Haynes, Richard & Kubany, 1995.

  34. Construct Validity • Like law… the truth, the whole truth, and nothing but the truth • The construct, the whole construct, and nothing but the construct (Trochim, 2001)

  35. Construct Validity Goal: measure all of the construct and nothing else. [Diagram: the target construct surrounded by other constructs A, B, C, and D] (Trochim, 2001)

  36. Construct Validity Goal: measure all of the construct and nothing else. [Diagram: the target construct, generalized anxiety, surrounded by depression, social anxiety, self-esteem, and guilt] Though, in reality, the construct is related to all four.

  37. Reliability • You are measuring the construct with little error • Versus accuracy • Versus validity • Something can be reliable but not valid • But if something is not reliable, it cannot be valid, because then your measure is only measuring random variability

  38. How can we improve reliability? • By reducing error • Standardization • When measurement conditions are standardized, sources of variance become constants and therefore do not influence the variability of scores (Strube, 2000) • Aggregation • With more items, error might cancel itself out • Error on the first item might be positive and on the second negative
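A minimal simulation sketch of why aggregation helps, assuming each item score equals a true score plus independent random error; all numbers here are invented.

    # Hypothetical sketch: averaging items cancels much of the random error.
    import numpy as np

    rng = np.random.default_rng(0)
    true_score = 5.0
    n_respondents, n_items = 1000, 10

    # Each respondent answers each item with independent random error
    items = true_score + rng.normal(0, 2.0, size=(n_respondents, n_items))

    single_item = items[:, 0]          # one-item measure
    aggregated  = items.mean(axis=1)   # ten-item average

    print("spread of a single item:   ", round(single_item.std(), 2))
    print("spread of the 10-item mean:", round(aggregated.std(), 2))  # roughly 1/sqrt(10) as large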

  39. Random Error [Figure: frequency distribution of X with no random error versus with random error] (Trochim, 2001)

  40. Systematic Error • Any factor that systematically affects measurement of the variable across the sample • Systematic error = bias • e.g., asking questions that start “do you agree with right-wing fascists that...” will tend to yield a systematically lower agreement rate • Unlike random error, systematic error does affect average performance for the group (Trochim, 2001)

  41. Systematic Error [Figure: frequency distribution of X with and without systematic error; notice that systematic error shifts the average, which is called a bias] (Trochim, 2001)
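A minimal simulation sketch contrasting the two kinds of error with invented numbers: random error widens the spread of scores while leaving the average roughly unchanged, whereas systematic error shifts the average.

    # Hypothetical sketch: random error vs. systematic error (bias).
    import numpy as np

    rng = np.random.default_rng(1)
    true_scores = rng.normal(50, 5, size=1000)

    with_random_error     = true_scores + rng.normal(0, 10, size=1000)  # larger spread, same center
    with_systematic_error = true_scores + 8                             # center shifted: bias

    print("true mean:             ", round(true_scores.mean(), 1))
    print("with random error:     ", round(with_random_error.mean(), 1), "(mean about the same, spread larger)")
    print("with systematic error: ", round(with_systematic_error.mean(), 1), "(mean shifted)")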

  42. This validity and reliability business is why evaluators make you crazy with… • Worries that you’ve made up your own evaluation instrument • Requests to standardize how evaluations are implemented (a new intern administers an interview vs. a seasoned staff member)

  43. When you can… • Use existing measures that have some evidence of reliability and validity (and then brag that you are doing so) • Standardize assessment procedures • Be thoughtful about number of respondents

  44. Selecting Measures

  45. Self-report surveys/questionnaires • Importance of how you ask what you ask… • Examples: • Exit Polls 2004 • Open-ended versus structured questions • 2013 Obamacare v. Affordable Care Act • Wording matters…a lot!

  46. Writing Surveys • Types of questions • Decisions about question content • Decisions about question wording • Decisions about response form • Placement and sequence of questions

  47. Keeping in mind: Bias • Can social desirability be avoided? • Can interviewer distortion and subversion be controlled? • Can false respondents be avoided?

  48. Types of Questions • Unstructured • Structured

  49. Structured Questions • Dichotomous (e.g., Male / Female)
