
Evaluation as a Tool for Learning


Presentation Transcript


  1. Evaluation as a Tool for Learning A conversation with PSOs in Winnipeg, Manitoba on June 4th, 2008 Jennifer Birch-Jones Performance Management Network Inc. www.pmn.net

  2. Desired Outcomes • Increased awareness and understanding of • what it is • why do it • what makes it hard • what we can do • Increased commitment to using evaluation as a tool for learning

  3. What is it? The Strategic Management Cycle: Strategic Analysis (Where are we now?) → Strategic Framework (Where do we want to be?) → Implementation (How do we get there?) → Monitoring & Evaluation (Are we getting there?) Source: Kent and Wilkinson, Applied Strategic Planning, 1991.

  4. What is it? • Evaluation involves looking at a specific aspect of what your organization does, i.e., a program, project or initiative during a specific period of time and asking: • Is what we are doing working? • How do we know it is working? • Under what conditions does it work best? (Festen & Philbin, 2007) • Involves a natural, though disciplined, use of three steps: • Asking good questions • Gathering and reviewing information • Sharing the information to foster good decision-making (Gray & Associates, 1998)

  5. What is it? • The purpose of evaluation is to plan for next year, not to judge what you did in the past • Part of what you put in place beforehand to help you run your programs • Evaluation should not be: • a test or a punishment • a scientific research project requiring control groups • an occasional activity (e.g., once every five years) or a one-time event • only done to benefit the funder

  6. What is it? • Both outcomes measurement and evaluation • are tools for decision-making • allow you to change the conversation from “what did we do” to “what difference have we made” • rely on the logic model as the fundamental point of reference

  7. What is it? • A results chain or logic model is a diagram showing the links from the activities through the sequence of outcomes to the final outcomes: Inputs → Activities → Outputs → Goals / Outcomes • Goals / outcomes can be Short-Term → Medium-Term → Long-Term Goals / Outcomes

  8. What is it? [Diagram] • HOW? (“Operational”): inputs, activities, outputs • WHAT do we want? WHO? WHERE? (“Behavioural Change”): direct and intermediate outcomes for users / clients / co-deliverers / beneficiaries • WHY? (“State”): final outcomes

  9. Spheres of Influence • HOW? (Operational): your operational environment; you have direct control over the behaviours within this sphere → Outputs • WHAT do we want, by WHOM? (Behavioural Change): your environment of direct influence, e.g., people, groups and organizations in direct contact with your operations → Immediate and Intermediate Outcomes • WHY? (State): your environment of indirect influence, e.g., individuals and/or communities of interest where you do not make direct contact → End Outcomes • Performance needs to be considered in terms of its differing spheres of influence: actions in the operational sphere should directly lead to changes in targeted groups, which should in turn affect the desired ‘state’. Sources: Van Der Heijden (1996), Montague (2000).

  10. What is it? • Outcomes measurement is primarily about the here and now and is descriptive in nature; it provides a descriptive look at where we are today and asks how well we are doing in relation to our high-level outcomes; it looks for evidence that the program is moving in its intended direction, helping managers to make mid-course corrections, and providing a basis for accountability (Schacter, 2002)

  11. What is it? • Evaluation takes a longer-term perspective, looking back over a period of years at the performance of a program (or policy) based on in-depth research and analysis; it goes beyond measuring progress to try to explain whether the intended outcomes have been achieved (and why / why not), whether the rationale for the program remains valid, and whether there are better alternative ways of achieving program outcomes (Schacter, 2002)

  12. What is it?

  13. What is it? • The reality is that what we often do in sport organizations is a mixture of outcomes measurement and evaluation (project evaluation?) • Monitoring is a term that is also used to denote regular collection and use of outcomes measures • Need to have a plan / strategy in advance for measuring / evaluating outcomes

  14. Why Do It? • Measure progress on priority issues and identify priority areas for improvement • Set realistic goals by providing information for making and fine-tuning strategic program decisions • Identify staff and volunteer training and technical assistance needs • Be accountable and credible to your constituents, your community, your partners, your funders and yourself • Motivate by providing documentation of your achievements • Generate support for programs and make the case for added resources • Guide budget and resource allocations (Gray and Associates, 1998)

  15. What Makes it So Hard? • Tension Between Learning and Accountability (philosophical) • Gap Between Requirements and Capacity (structural)

  16. What Makes it So Hard? • Tension Between Learning and Accountability (did you achieve what you agreed to?) • Compliance versus learning (Westley, Zimmerman and Patton, 2006) • Learning by the funder versus the organization (Festen and Philbin, 2007) • Trust versus mistrust

  17. What Makes it So Hard? • Requirements and Capacity Gap • Expectations versus reality • Recipients need the funds to do “good work” but don’t want to admit that they can’t realistically achieve the expected results given the resources ($ and time) • The “gap” also widens when the time frame and / or $ amount for the results changes but the expectations (usually the funder’s) do not

  18. What Makes it So Hard? • Requirements and Capacity Gap (Cont’d) • Although funders are increasing their demands for outcomes information, very few provide the necessary funds for organizations to do so (VSERP, 2003), and outcomes are more difficult to measure • The evaluation requirements from the funder(s) are the same, regardless of the amount of funding • Everyone uses different language

  19. What Can We Do? • Recognize the tension between accountability and learning and the impact it can have on what and how you evaluate – know how failure is defined and will be considered • Be clear about what the language / terms mean • Be realistic about what can be achieved and re-negotiate results expectations when the timeframe, $ and other circumstances affect the achievement of the results

  20. What Can We Do? • Focus your resources on the most important issues and only collect the information that you need / will use • Consider “rolling” evaluation - choose one or two questions you want to explore in any given year that relate to a desired outcome and focus only on that topic (Festen and Philbin, 2007) • Build the capacity of staff to do outcome measurement / evaluation (lots of great free online resources)

  21. What Can We Do? • Create learning moments at staff and Board meetings - reframe the conversation to become one of questioning (Gray and Associates, 1998): “Since our last meeting, what difference have we made in providing a quality sport experience for Manitobans?” And / Or “From the information we have just received, how well is our organization holding our vision and attaining our mission?”

  22. What Can We Do? • Engage in conversations with your funder(s) on how they can help you better use evaluation as a tool for learning • What excites you the most about the potential for using evaluation as a tool for learning in your PSO? • If you only had evaluation resources to examine one issue in your PSO, what would that issue be? • In what ways can Sport Manitoba help?

  23. Final Thoughts “The reason we got into this business was to change lives. Now we have all the staff thinking in this orientation. Outcome measurement creates focus in a way no other management tool can do. We used to have forms we filled out that were meaningless. Now we have the same number of forms, but we get real information. We used to count the number of things we did. Now we count results.” Jon Berry, Executive Director, Freeport West, Minneapolis, Minnesota. Source: Outcome Measurement: Showing Results in the Nonprofit Sector, United Way of America, 1999.

  24. References and Resources • Applied Strategic Planning (Kent and Wilkinson, 1991) • Evaluation with Power: A New Approach to Organizational Effectiveness, Empowerment and Excellence (Gray and Associates, 1998)* • Outcome Measurement: Showing Results in the Nonprofit Sector (United Way of America, 1999) • Not a “Tool Kit” – Practitioner’s Guide to Measuring the Performance of Public Programs (Schacter, 2002)* • Assessing Performance: Evaluation Practices & Perspectives in Canada’s Voluntary Sector (Voluntary Sector Evaluation Research Project, 2003)

  25. References and Resources • Getting to Maybe – How the World is Changed (Westley, Zimmerman and Patton, 2006) • Learning from our Evaluation Practice (The J.W. McConnell Foundation, 2006) • Program Evaluation and Performance Measurement (McDavid and Hawthorn, 2006) • Level Best – How Small and Grassroots Non-Profits Can Tackle Evaluation and Talk Results (Festen and Philbin, 2007)* * Very practical, PSO-friendly and affordable evaluation resources.
