
Writing a Successful TC Evaluation Plan


Presentation Transcript


  1. Writing a Successful TC Evaluation Plan http://tobaccoeval.ucdavis.edu

  2. Overview • SMART objective • Evaluation Plan Type • Data collection by plan type • Process versus outcome data collection and sampling • Evaluation Reporting • Dissemination • Limitations • Evaluation Summary Narrative • Greatest Mistake • Resources

  3. A Good Evaluation Plan Needs SMART Objectives • Specific • Measurable • Achievable and ambitious • Realistic (and Relevant) • Time-bound

  4. SMART example • By June 2013 at least 1 city in Fresno County and/or the unincorporated area of Fresno County will adopt a policy that requires all tobacco retailers to obtain a license to sell tobacco products with a portion of the fees earmarked to conduct regular compliance checks. • Specific: adopt and implement with enforcement fees • Measurable: Can do process and outcome evaluation • Achievable: Has been achieved elsewhere • Relevant: CTCP has indicated that it is of high priority • Time bound: By June 2013

  5. Not So SMART Examples • “Our objective is to reduce secondhand smoke in outdoor dining facilities” (By when? How? Through a voluntary policy? A city ordinance?) • “By June 2012 we plan to have zero tobacco litter in our parks” (Not realistic and not specific: which parks? City? County? Specific ones?) • “We will work on reducing tobacco sales to minors” (By how much? By when? Where? How?)

  6. Evaluating Your Intervention Think “backwards” by starting at the end: • What is it that you are trying to achieve? • What evaluation activities will help you get there? • How can you best keep track of successes and barriers? • How can you best show what you have achieved/what your outcome is?

  7. Considerations • Use evaluation activities strategically to help advance your objective • Make sure your evaluation activities produce useful and usable data • Use activities that are manageable with the resources you have • Adhere to OTIS instructions and make the evaluation plan fit your intervention plan

  8. Evaluation Plan on OTIS • Consult the OTIS Evaluation Guide • Follow instructions closely • Your objective determines the overall evaluation design • Within some limits you have many choices • Refer to the OTIS plan type decision tree to determine plan type and the data collection by plan type to determine outcome or process (or both)

  9. Data Collection by Plan Type • Single policy / Multiple policy: policy adoption → process; policy implementation → outcome; policy adoption and implementation → process and outcome • Individual behavior change → outcome • Other with measurable outcome → outcome • Other without measurable outcome → process
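
Read as a lookup, the table above maps each plan type (and, for policy objectives, the stage the objective targets) to the required evaluation component. A minimal sketch of that lookup in Python; the function and dictionary names are illustrative, not part of OTIS:

```python
# Sketch of the plan-type table above as a simple lookup.
# Names are illustrative; OTIS provides this logic via its decision tree.

POLICY_STAGE = {
    "adoption": "process",
    "implementation": "outcome",
    "adoption and implementation": "process and outcome",
}

OTHER_PLAN_TYPES = {
    "individual behavior change": "outcome",
    "other with measurable outcome": "outcome",
    "other without measurable outcome": "process",
}

def required_evaluation(plan_type, policy_stage=None):
    """Return the evaluation component(s) a plan type calls for."""
    if plan_type in ("single policy", "multiple policy"):
        return POLICY_STAGE[policy_stage]
    return OTHER_PLAN_TYPES[plan_type]

print(required_evaluation("single policy", "adoption and implementation"))
# -> process and outcome
```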

  10. Process Evaluation Process evaluations are used: • To help further the objectives (e.g. KII w/policy makers to assess adoption readiness) • To demonstrate need (e.g. POS to show the public wants a policy) • To document how your project addressed this objective (e.g. KIIs w/key stakeholders) • To document policy adoption (e.g. record review of policy language)

  11. Outcome Evaluation Outcome evaluations are used: • To show a policy has been successfully implemented (e.g. pre-/post outdoor policy adoption litter observation) • To show that a program has had impact (e.g. pre-/post- and 3 month follow-up survey with participants in cessation class)

  12. Process versus Outcome Common errors: • Mistaking the adoption of a policy for an “outcome” (in OTIS “outcome” requires implementation of the policy) • Thinking that Process and Outcome use different data collection methods (either evaluation type can use KIIs, surveys, observations, etc.)

  13. Decisions Plan type 1: Single/Multiple Policy • Policy adoption needs Process evaluation • Policy implementation needs Outcome evaluation • Policy adoption and implementation needs Process and Outcome evaluation Common errors: choosing the “multiple policy” plan type when the objective targets only a single policy; and confusing process with outcome

  14. Decisions Plan type 2: Individual Behavior Change (cessation programs, education programs) Needs: Outcome evaluation Recommended: several waves of a survey that measures short-term and long-term quitting rates in cessation programs

  15. Decisions Plan type 3: Other with Measurable Outcome (e.g. reduce # of stores with signage violations) Needs: Outcome evaluation Plan type 4: Other without Measurable Outcome (e.g. develop and distribute education materials in different languages) Needs: Process Evaluation

  16. Evaluation Design Outcome study designs: • Experimental (needs control and intervention groups and random assignment; rarely done in TC) • Quasi-experimental (control group or multiple measurements, e.g. over time; often used in TC) • Non-experimental (no control group, only one pre-post measurement, anecdotal evidence)

  17. Common Errors with Outcome Study Design • Proposing an experimental design that is unrealistic and too costly • Misclassifying a design as experimental when it has a control group but no random assignment (that is quasi-experimental) • Only one post-test for cessation programs • Classifying a single pre-post test as quasi-experimental (it’s non-experimental)

  18. Outcome Data Activity • Choosing the data collection activity is one of the most important decisions! • Which method makes the most sense, gives you the best information, and is most likely to show your outcome? (Mail survey, written questionnaire, key informant interview [called “face-to-face survey” in the OTIS Evaluation Guide], telephone survey, observations, Internet survey) • Use tested instruments or specify “will use TCEC resources to help develop instrument”

  19. Sampling for Outcome Activity • The sampling method and the sample size need to be convincing to likely critics • For sampling strategies see OTIS evaluation guide p. 87 or the Sampling tool on the TCEC website • For calculating sample sizes, use a reputable online sample size calculator
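
As an illustration, the standard sample size formula for estimating a proportion, n = z² p(1−p) / e², with a finite population correction, can be computed directly. A minimal sketch in Python; the 95% confidence level, 5% margin of error, and 50% expected proportion are placeholder assumptions, not OTIS requirements:

```python
import math
from scipy.stats import norm

def sample_size_for_proportion(population, margin_of_error=0.05,
                               confidence=0.95, expected_proportion=0.5):
    """Sample size needed to estimate a proportion, with a finite
    population correction. Default inputs are illustrative only."""
    z = norm.ppf(1 - (1 - confidence) / 2)   # e.g. 1.96 for 95% confidence
    p = expected_proportion
    n_infinite = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite population correction matters for small populations,
    # e.g. all licensed tobacco retailers in one county.
    n_adjusted = n_infinite / (1 + (n_infinite - 1) / population)
    return math.ceil(n_adjusted)

# Roughly 384 for a very large population, far fewer for a county
# with 200 retailers.
print(sample_size_for_proportion(population=1_000_000))  # ~384
print(sample_size_for_proportion(population=200))        # ~132
```

An online calculator gives the same numbers; the point is that the sample size follows from stated assumptions that reviewers can check.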

  20. Common Errors with Outcome Data Collection Activities • Data collection activity not well suited for purpose • Purpose of data collection not clear – no logical connection to how results will be used by project • Suggesting poorly designed, non-tested data collection instruments • Sample size is too small or is not representative (cluster or stratified sampling may be more appropriate)

  21. Process Data Collection • Almost all objectives need some process data collection activities • Some activities can be used to advance the objective: Focus group, KII, public opinion survey, observation, youth purchase survey • Some activities can be used to evaluate your activities: education/participant survey • Some activities can help document your project’s process: policy record, media activity record • Need to specify how data collectors will be trained and by whom

  22. Common Problems with Process Data Collection Activities • Forgetting data collection trainings • Forgetting to add an education/ participant survey when an education event takes place • Same sampling errors as in outcome data collection

  23. Evaluation Reporting • In this section, you will be asked about the analysis you plan to do. You will report on the analysis through progress reports and the final evaluation report (brief for a non-primary objective, full for the primary objective), as well as disseminate results through other channels.

  24. Evaluation Reporting Analysis plan that relates to the overall design Most commonly done in TC: • Descriptive statistics (frequencies, means) • Comparison over time using frequencies and means (Excel is sufficient) • Less frequently, but sometimes useful for comparisons over time or between variables: t-test, ANOVA, regression (statistical program needed)
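
To illustrate the simpler end of that list, a pre/post comparison for the same participants (e.g. a cessation class survey) needs only descriptive statistics and a paired t-test. A minimal sketch in Python with hypothetical data; scipy is one option, and Excel or another statistical program works just as well:

```python
from statistics import mean
from scipy import stats

# Hypothetical pre/post scores for the same ten cessation-class
# participants (e.g. cigarettes smoked per day); real data would
# come from the project's own surveys.
pre  = [20, 15, 18, 25, 12, 30, 22, 16, 19, 24]
post = [12, 10, 15, 20,  8, 22, 18, 11, 14, 16]

# Descriptive statistics: the level of analysis most TC plans need.
print(f"Pre mean:  {mean(pre):.1f}")
print(f"Post mean: {mean(post):.1f}")

# Paired t-test, appropriate when the same people answer both waves.
result = stats.ttest_rel(pre, post)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```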

  25. Common Errors in Analysis Plan • Getting stuck in evaluation details and not addressing the “overall design” • Proposing a design that is too sophisticated for the limited sample size or the limited resources and capacity • Not sufficiently linking the design to the proposed program and evaluation plan

  26. Dissemination of Results • Share with those who provided information and with local communities • Present data to decision makers • Create fact sheets to use in education materials • Send out press releases to generate media coverage of the issue/your program • Share with other tobacco control programs using CTCP channels • Tobacco-related conferences • Peer-reviewed journals

  27. Limitations • Mention any limitations you can foresee, e.g. program changes, low response rates, or achieving policy adoption but not implementation (so an outcome evaluation is not possible) • Suggest a back-up plan

  28. Evaluation Summary Narrative Common problems: • Not describing why this design and the suggested activities were chosen • Not presenting a “story” of your evaluation but only listing activities • Not describing the plan in chronological order • The logic is hard to follow

  29. Greatest Mistake • Filling out the OTIS evaluation fields BEFORE having thought out the evaluation plan. • Hoping the plan will come together while answering the questions and drop-down menus on OTIS.

  30. Resources http://tobaccoeval.ucdavis.edu TCEC website has: • OTIS Evaluation Guide • Sample evaluation plans • Sample data collection instruments • Evaluation tools (e.g. “Sampling”)
