
Presentation Transcript


  1. Evaluation and Attitude Assessment. BME GK-12 Fellows, 12/1/10, 4-5pm, 321 Weill Hall. Tom Archibald, PhD Student, Adult & Extension Education; GRA, Cornell Office for Research on Evaluation

  2. quick overview of evaluation
  • merit and worth
  • program improvement
  • knowledge generation
  • very similar to research, with some differences

  3. CORE’s “touchstone” statement on the goal of evaluation: To obtain accurate, useful insights about the answers to evaluation questions in a manner that is feasible, is credible to relevant stakeholders, makes strategic use of limited time and resources, and contributes to our general knowledge, to future evaluations and to program evolution.

  4. a systems approach for evaluation planning
  Programs are viewed (and modeled) as:
  • parts of larger systems
  • dynamic and evolving
  • related to other programs in the present
  • connected to past and future programs
  • being perceived differently by different stakeholders and systems
  Evaluation plan development takes this into account and should help programs and systems evolve.

  5. Step 2.05 - Program and Lifecycle Analysis [Figure: program phase mapped against evaluation phase for the Module-Based Education, RET, and REU programs across the lifecycle stages Initiation, Development, Maturity, and Translation & Dissemination]

  6. Step 2.07 - Pathway Modeling

  7. The key elements of Evaluation Plan quality are:
  • Consistency with a high-quality program model
  • Fitness of evaluation plan elements to the program and program context
  • Internal alignment of the evaluation plan elements
  • Holistic coherence

  8. Consistency with a high-quality program model
  A "high-quality model"…
  • is grounded in knowledge of the program
  • incorporates perspectives of multiple stakeholders
  • shows causal pathways (program logic)
  • reflects careful thought about program "boundaries"
  • includes program assumptions and key elements of context
  • is connected to the program's evidence base (relevant research)
  "Consistency with…" means:
  • evaluation questions can be located in terms of model elements
  • the evaluation "scope" makes sense

  9. Fitness of evaluation questions and other evaluation plan elements to the program and program context
  • Evaluation questions are "mined" from the model
  • Evaluation questions are appropriate for the program's maturity and stability, the existing state of knowledge, and program needs
  • Evaluation focus, methods, and tools meet the needs of key stakeholders
  • The evaluation plan makes efficient and strategic use of program and evaluation resources

  10. Internal alignment of the evaluation plan elements
  • Measures fit the constructs
  • Each measure is the most strategic option among those that fit
  • The design is appropriate for the lifecycle stage
  • The design can support the claims implied in the purpose statement
  • The sampling and analysis plans can generate the evidence needed

  11. Holistic coherence… the elusive element
  • Evaluation planning requires myriad decisions about multi-faceted tradeoffs
  • Making these decisions well requires a holistic comprehension of the program and of the environment and systems that embed it
  • These decisions may be invisible in the written plan, so some of the resulting quality will be ascertainable from the plan itself and some will not

  12. Getting to Measures Worksheet
  1. Starting points: (a) Copy here what you want to know about your program for this Evaluation Question. (b) Copy here your formal Evaluation Question for this inquiry.
  2. Clarify the "constructs" in your Evaluation Question above: what exactly do you mean by the activity and/or outcome you are focusing on? (For example, a vague construct like "improved student attitudes" might be sharpened to "increased self-reported interest in pursuing science or engineering.") Note: revisit your Evaluation Question above and see if it needs to be revised to capture this precision.

  13. 3. Using this sharper definition of what's being evaluated, work through the following sequence of questions:
  (a) What would this "look like" or consist of in practice? (List as many options as you can. Be creative: think outside the "usual" options.)
  (b) What might serve as evidence? For the most promising candidates in (a), write down various kinds of evidence that would be informative about whether this has occurred.
  (c) Review the strengths and weaknesses of the options, taking into account several aspects that matter: "closeness" to the real thing, and accuracy and reliability (a quick reliability check is sketched after the worksheet). Write down a short list of the most promising candidates from (b).

  14. 4. How could you gather this evidence? Again, think of as many options as you can and list them here. (Possibilities might include video-taping a demonstration, live observation during class sessions, directly asking participants, asking people who know the participants, testing the end-products, etc.) Identify the ways that seem best, taking into account accuracy, feasibility, and fit with the program context and target population. Indicate the top choices among those on your list. Once again, take a moment to revisit your Evaluation Question phrasing in 1(b) and see if it needs updating.
  5. What types of measure would be needed? From your answer(s) to 4, ask yourself what type(s) of measure would allow you to gather evidence in this way. Here is a list of possible measure types from which to choose: case study, interview, observation, group assessment (e.g. focus group), expert or peer review, portfolio review, testimonial, test of knowledge or skill, photograph, slide or video, diary or journal, log, document analysis, action cards, simulation, problem story, creative expression, unobtrusive measures.

  15. For more explanation of measure types, see: Taylor-Powell, E., & Steele, S. (1996). Collecting Evaluation Data: An Overview of Sources and Methods. Madison, WI: Cooperative Extension Publications, University of Wisconsin. (http://learningstore.uwex.edu/Collecting-Evaluation-Data-An-Overview-of-Sources-and-Methods-P1025C237.aspx)
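Worksheet step 3(c) asks you to weigh accuracy and reliability among candidate measures. If you end up writing your own multi-item attitude scale (the "write" branch of the find-versus-write decision on the next slide), one common internal-consistency check is Cronbach's alpha. The sketch below is only an illustration, not part of the worksheet: the item scores are hypothetical, and the choice of Cronbach's alpha and numpy are assumptions for the example.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability of a multi-item scale.

    items: 2-D array, rows = respondents, columns = Likert items,
    all assumed to be scored in the same direction.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 6 students to a 4-item
# "attitude toward engineering" scale (1 = strongly disagree ... 5 = strongly agree).
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

For this toy data the result is about 0.91; a value of roughly 0.7 or higher is a common rule of thumb for an acceptably reliable scale, though what counts as "good enough" depends on the claims your evaluation design needs to support.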

  16. find v. write
  Summary checklist (questions to help make the "find" versus "write" decision):
  1. Is the evaluation question assessing an outcome? If yes, do you need to make a strong claim of effectiveness?
  2. Do you need to compare your evaluation results to those of others or to some external norm?
  3. Do stakeholders require a pre-validated measure?
  4. Are staff resources available to draft measures in accordance with best practices? To conduct a high-quality search for measures?
  5. After a quick scan, does it appear you are likely to be able to find a measure that fits your construct through peers, the Netway, your program system, program funders, or related professional groups?
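As a hedged illustration of what checklist question 1 implies downstream: if you do need a claim of effectiveness for an outcome, the analysis plan often reduces to comparing the same participants before and after the program on whatever measure you find or write. The paired scores below are hypothetical, and the choice of a paired t-test via scipy is an assumption for the example, not a method prescribed by the presentation.

```python
from scipy import stats

# Hypothetical paired pre/post scores (percent correct) on a
# knowledge test for the same 8 students.
pre  = [55, 60, 48, 70, 62, 58, 65, 50]
post = [68, 72, 60, 75, 70, 66, 74, 61]

# Paired t-test: did scores change from pre to post?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Whether a design like this can support a strong effectiveness claim depends on the lifecycle-stage and alignment considerations from slides 10 and 11, not on the statistic alone.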
