
Chapter 11




  1. Chapter 11 Evaluation research

  2. Evaluation research is not a method of data collection, like survey research or experiments, nor is it a unique component of research designs, like sampling or measurement. Instead, evaluation research is social research that is conducted for a distinctive purpose: to investigate social programs (e.g., substance abuse treatment programs, welfare programs, criminal justice programs, or employment and training programs).

  3. For each project, an evaluation researcher must select a research design and method of data collection that are useful for answering the particular research questions posed and appropriate for the particular program investigated. The development of evaluation research as a major enterprise followed on the heels of the expansion of the federal government during the Great Depression and World War II.

  4. Large Depression-era government outlays for social programs stimulated interest in monitoring program output, and the military effort in World War II led to some of the necessary review and contracting procedures for sponsoring evaluation research. In the 1960s, criminal justice researchers began to use experiments to test the value of different policies (Orr, 1999:24).

  5. In the early 1980s, after this period of rapid growth, many evaluation research firms closed in tandem with the decline of many Great Society programs. However, the demand for evaluation research continues, due, in part, to government requirements. The growth of evaluation research is also reflected in the social science community. The American Evaluation Association was founded in 1986 as a professional organization for evaluation researchers (merging two previous associations) and the publisher of an evaluation research journal.

  6. The process of evaluation research can be viewed as a simple systems model. First, clients, customers, students, or some other persons or units—cases—enter the program as inputs. (You’ll notice that this model treats programs like machines, with people functioning as raw materials to be processed.) Resources and staff required by a program are also program inputs.

  7. [Exhibit 11.1: a simple systems model of the program process]

  8. Next, some service or treatment is provided to the cases. This may be attendance in a class, assistance with a health problem, residence in new housing, or receipt of special cash benefits. The program process may be simple or complicated, short or long, but it is designed to have some impact on the cases.

  9. The direct product of the program’s service delivery process is its output. Program outputs may include clients served, case managers trained, food parcels delivered, or arrests made. The program outputs may be desirable in themselves, but they primarily serve to indicate that the program is operating.

  10. Program outcomes indicate the impact of the program on the cases that have been processed. Outcomes can range from improved test scores or higher rates of job retention to fewer criminal offenses and lower rates of poverty. Any social program is likely to have multiple outcomes, some intended and some unintended, some positive and others that are viewed as negative.

  11. Variation in both outputs and outcomes, in turn, influences the inputs to the program through a feedback process. If not enough clients are being served, recruitment of new clients may increase. If too many negative side effects result from a trial medication, the trials may be limited or terminated. If a program does not appear to lead to improved outcomes, clients may go elsewhere.
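To make the systems model on slides 6 through 11 concrete, here is a minimal sketch in Python of the input/process/output/outcome cycle with feedback. All names here (Program, enroll, deliver_service, feedback) are hypothetical illustrations invented for this sketch, not terms from the chapter.

```python
from dataclasses import dataclass, field

@dataclass
class Program:
    """Hypothetical sketch of the evaluation systems model."""
    capacity: int                                  # resources and staff are also inputs
    inputs: list = field(default_factory=list)     # cases entering the program
    outputs: int = 0                               # direct products, e.g., clients served
    outcomes: list = field(default_factory=list)   # impacts on the processed cases

    def enroll(self, case):
        # Cases (clients, students, or other units) enter as inputs,
        # limited by the program's resources and staff.
        if len(self.inputs) < self.capacity:
            self.inputs.append(case)

    def deliver_service(self, treat):
        # The program process: some service or treatment is applied to each case.
        for case in self.inputs:
            self.outcomes.append(treat(case))  # outcome: the program's impact on the case
            self.outputs += 1                  # output: one more case was served
        self.inputs = []

    def feedback(self, recruit):
        # Feedback: variation in outputs and outcomes influences the inputs;
        # e.g., if not enough clients are being served, recruitment may increase.
        if self.outputs < self.capacity:
            recruit(self)

# Example run with two cases and a trivial "treatment"
program = Program(capacity=2)
program.enroll("case A")
program.enroll("case B")
program.deliver_service(lambda case: f"{case} improved")
print(program.outputs, program.outcomes)
```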

  12. The evaluation process as a whole, and feedback in particular, can be understood only in relation to the interests and perspectives of program stakeholders. Stakeholders are those individuals and groups who have some basis of concern with the program. They might be clients, staff, managers, funders, or the public. Who the program stakeholders are and what role they play in the program evaluation will have tremendous consequences for the research.

  13. Alternatives in evaluation designs
Evaluation research tries to learn if, and how, real-world programs produce results. But that simple statement covers a number of important alternatives in research design, including the following:
  • Black box or program theory—Do we care how the program gets results?
  • Researcher or stakeholder orientation—Whose goals matter most?
  • Quantitative or qualitative methods—Which methods provide the best answers?
  • Simple or complex outcomes—How complicated should the findings be?

  14. Black box or program theory
Most evaluation research tries to determine whether a program has the intended effect. If the effect occurred, the program “worked”; if the effect didn’t occur, then, some would say, the program should be abandoned or redesigned. In this simple approach, the process by which a program produces outcomes is often treated as a “black box,” in which the “inside” of the program is unknown.

  15. The focus of such research is whether cases have changed as a result of their exposure to the program, between the time they entered as inputs and when they exited as outputs (Chen, 1990). The assumption is that program evaluation requires only the test of a simple input/output model, like that in Exhibit 11.1. There may be no attempt to “open the black box” of the program process.

  16. If an investigation of program process had been conducted, though, a program theory could have been developed. A program theory describes what has been learned about how the program has its effect. When a researcher has sufficient knowledge before the investigation begins, outlining a program theory can help to guide the investigation of program process in the most productive directions. This is termed a theory-driven evaluation.

  17. Program theory can be either descriptive or prescriptive (Chen, 1990). Descriptive theory specifies impacts that are generated and how this occurs. It suggests a causal mechanism, including intervening factors, and the necessary context for the effects. Descriptive theories are generally empirically based.

  18. Prescriptive theory specifies what ought to be done by the program, and is not actually tested. Prescriptive theory specifies how to design or implement the treatment, what outcomes should be expected, and how performance should be judged. Comparison of the program’s descriptive and prescriptive theories can help to identify implementation difficulties and incorrect understandings that can be fixed (Patton, 2002:162–164).

  19. Researcher or stakeholder orientation
Stakeholder approaches encourage researchers to be responsive to program stakeholders. Issues for study are to be based on the views of people involved with the program, and reports are to be made to program participants (Stake, 1975). The stakeholders and others who may be drawn into the evaluation are welcomed as equal partners in every aspect of design, implementation, interpretation, and resulting action of an evaluation—that is, they are accorded a full measure of political parity and control ... determining what questions are to be asked and what information is to be collected on the basis of stakeholder inputs.

  20. Social science approaches, in contrast, emphasize researcher expertise and autonomy in order to develop the most trustworthy, unbiased program evaluation. A program theory is derived from information on how the program operates and current social science theory, not from the views of stakeholders.

  21. Integrative approaches attempt to cover issues of concern to both stakeholders and evaluators. The emphasis given to either stakeholder or scientific concerns varies with the specific circumstances. Integrative approaches seek to balance responsiveness to stakeholders with objectivity and scientific validity.

  22. Quantitative or qualitative methods
Quantitative and qualitative approaches to evaluation each have their strengths and appropriate uses. Quantitative research, with its clear percentages and numerical scores, allows quick comparisons over time and across categories, and thus is typically used in attempts to identify the effects of a social program. Qualitative methods can add depth, detail, and nuance; they can clarify the meaning of survey responses and reveal more complex emotions and judgments people may have.

  23. Simple or complex outcomes
Few programs have only one outcome. Sometimes a single policy outcome is sought, but is found not to be sufficient, either methodologically or substantively. In spite of the difficulties, most evaluation researchers attempt to measure multiple outcomes. Collection of multiple outcomes gives a better picture of program impact.

  24. Focus of evaluation studies
Evaluation projects can focus on a variety of different questions related to social programs and their impact. Which question is asked will determine what research methods are used:
  • What is the level of need for the program?
  • Can the program be evaluated?
  • How does the program operate?
  • What is the program’s impact?
  • How efficient is the program?

  25. Needs assessment
A needs assessment attempts, with systematic, credible evidence, to evaluate what needs exist in a population. Need may be assessed by social indicators such as the poverty rate or the level of home ownership; interviews with local experts such as school board members or team captains; surveys of populations potentially in need; or focus groups with community residents. In general, it is a good idea to use multiple indicators of need. There is no absolute definition of need in most projects.

  26. Evaluability assessment
Some type of study is always possible, but to specifically identify the effects of a program may not be possible within the available time and resources. So researchers may conduct an evaluability assessment to learn this in advance, rather than expend time and effort on a fruitless project. Because they are preliminary studies to “check things out,” evaluability assessments often rely on qualitative methods. The knowledge gained can be used to refine evaluation plans.

  27. Process evaluation
Process evaluation: evaluation research that investigates the process of service delivery. Process evaluation becomes more important as the programs being evaluated grow more complex. Many social programs comprise multiple elements and are delivered over an extended period of time, often by different providers in different areas.

  28. Formative evaluation
Formative evaluation: process evaluation that is used to shape and refine program operations. Evaluation may then lead to changes in recruitment procedures, program delivery, or measurement tools.

  29. Impact analysis
The core questions of evaluation research are: Did the program work? Did it have the intended result? This kind of research is variously called impact analysis, impact evaluation, or summative evaluation. Impact analysis compares what happened after a program was implemented with what would have happened had there been no program at all.
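As a hedged illustration of this black-box logic, the sketch below compares the mean outcome for program participants with a comparison group standing in for what would have happened with no program at all. Every figure is invented for illustration; a real impact analysis would need a credible comparison group, ideally from random assignment.

```python
# Hypothetical job-retention rates for cases exposed to the program
# and for similar cases that received no program services.
participants = [0.72, 0.65, 0.81, 0.58, 0.77]
comparison   = [0.51, 0.48, 0.60, 0.55, 0.49]

def mean(xs):
    return sum(xs) / len(xs)

# The estimated impact is the difference between what happened with
# the program and the comparison group's stand-in for "no program."
estimated_impact = mean(participants) - mean(comparison)
print(f"Estimated program impact: {estimated_impact:+.2f}")  # prints +0.18
```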

  30. Efficiency analysis
Finally, a program may be evaluated for how efficiently it provides its benefit; typically, financial measures are used.
Cost-benefit analysis: a type of evaluation that identifies the specific program costs and the procedures for estimating the economic value of specific program benefits.
Cost-effectiveness analysis: a type of evaluation research that focuses attention directly on the program’s outcomes rather than on the economic value of those outcomes.
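A small worked example may help contrast the two measures. In this sketch, cost-benefit analysis converts the outcome into dollars before comparing it with costs, while cost-effectiveness analysis leaves the outcome in its own units and asks what each unit costs. All figures are hypothetical.

```python
program_cost = 500_000        # total program cost in dollars (hypothetical)
clients_placed = 200          # program outcome: clients placed in jobs
value_per_placement = 4_000   # assumed economic value of one placement

# Cost-benefit analysis: estimate the economic value of the benefits,
# then compare that value with program costs.
total_benefit = clients_placed * value_per_placement   # $800,000
net_benefit = total_benefit - program_cost             # $300,000
benefit_cost_ratio = total_benefit / program_cost      # 1.6

# Cost-effectiveness analysis: keep the outcome in its own units and
# ask what each unit of outcome costs, with no dollar value on outcomes.
cost_per_placement = program_cost / clients_placed     # $2,500 per placement

print(net_benefit, benefit_cost_ratio, cost_per_placement)
```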

  31. Ethics in evaluation
Evaluation research can make a difference in people’s lives while the research is being conducted, as well as after the results are reported. Job opportunities, welfare requirements, housing options, treatment for substance abuse, and training programs are each potentially important benefits, and an evaluation research project can change both the type and availability of such benefits. This direct impact on research participants and, potentially, their families heightens the attention that evaluation researchers have to give to human subjects concerns.

  32. There are many specific ethical challenges in evaluation research:
  • How can confidentiality be preserved when the data are owned by a government agency or are subject to discovery in a legal proceeding?
  • Who decides what burden an evaluation project can impose upon participants?
  • Can a research decision legitimately be shaped by political considerations?
  • Must findings be shared with all stakeholders, or only with policymakers?
  • Will a randomized experiment yield more defensible evidence than the alternatives?
  • Will the results actually be used?

  33. Hopes for evaluation research are high: society could benefit from the development of programs that work well, accomplish their policy goals, and serve people who genuinely need them. Evaluation research can provide social scientists with rare opportunities to study complex social processes, with real consequences, and to contribute to the public good. Although they may face unusual constraints on their research designs, most evaluation projects can result in high-quality analysis and publications in reputable social science journals.
