
Summative Program Evaluations


Presentation Transcript


  1. Summative Program Evaluations: The External Evaluation. Used with the permission of John R. Slate.

  2. Definitions: • The use of data to determine the effectiveness of a unit, course, or program AFTER it has been completed. • An evaluation that provides information about the overall effectiveness, impact, and/or outcomes of a program.

  3. The goal of summative evaluation . . . • to collect and present information needed for summary statements and judgments about the program and its value.

  4. Used in . . . • making terminal end-of-experience judgments of: • worth & value • appropriateness of the experience • goodness • assessing the end results of an experience

  5. Examples • Termination of employment • Final grade in a course • Final report for a program that is ending • Board report

  6. The role of the evaluator: • to provide findings about the program which can be generalized to other contexts beyond the program being evaluated. • to focus the evaluation on the primary features and outcomes of the program and on the policy questions which may underlie the program.

  7. to educate the audience about what constitutes good and poor evidence of program success. • to admonish the audience about the foolishness of basing important decisions on a single study. • to be a program advocate when merited.

  8. to convey to your audience as complete a depiction of the program’s crucial characteristics as possible. • to express opinions about the quality of the program. • to be able to defend your conclusions.

  9. Steps in Summative Evaluation

  10. Phase A: Set the Boundaries of the Evaluation • Research the program • Encourage Trust, Cooperation, and Ownership • Identify the audience/stakeholders • Identify programmatic goals

  11. Step 1: Identify the Sponsors/Audiences

  12. Questions of sponsors & audiences . . . • Is the program worth continuing? • How effective was it? • What does the program look like and accomplish? • What conclusions could you draw about program effectiveness?

  13. Step 2: Find out as much as you can about the program. • Collect and scrutinize written documents that describe the program. • Talk to people

  14. Questions on the mind of the evaluator . . . • What are the goals and objectives of the program? • Does the program lead to goal achievement? • How effective is the program? • Are there more effective alternative programs available? • What are the most important characteristics, activities, services, staffing, and administrative arrangements of the program? • Did the planned program occur?

  15. Questions to be asked of the stakeholders . . . • What are the most important outcomes of the program, including planned, serendipitous, and unanticipated? • Which aspects of the program do you think wield greatest influence in producing program outcomes? • What are the most important organizational and administrative aspects of the program?

  16. Which parts of the program do you consider its most distinctive characteristics, those that make it unique among programs of its kind? • With what types of students/clients, participants, staff do you think the program is most/least effective?

  17. What is the theory of action behind the program? • What are the policy alternatives if the program is found effective? • How much expansion is possible? • How might expansion sites be selected? • What are the possible markets, communities, or sites for future expansion?

  18. What are the policy alternatives if the program is found ineffective? • Would the program be cut back, eliminated, and/or refined?

  19. Step 3: Develop a written description of the program as you understand it.

  20. Step 4: Focus the evaluation • Judge the adequacy of your written documents for describing the program • Visualize what you might do as the evaluator • Assess your own strengths and preferences

  21. Step 5: Negotiate your role • Agree generally about the basic outline of the evaluation • Verify with the evaluation sponsor your general agreement about services and responsibilities

  22. Phase B: Select Appropriate Evaluation Methods • Establish a common understanding with your study sponsor(s) and program staff about the purposes of the evaluation and about the nature of the activities.

  23. Step 1: Data Collection • Determine appropriate sources for data collection • Select data collection instruments • Develop instruments where necessary

  24. Step 2: Consolidate your concerns • Time • Money • Availability of data collection sources • Availability of staff and/or students/clients

  25. Step 3: Plan the construction and purchase of instruments • Schedule, schedule, schedule • Field-testing

  26. Step 4: Plan the data analysis you will perform • Mostly quantitative: SPSS • Mostly qualitative: themes
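
The slide names SPSS for quantitative work. Purely as an illustration of the underlying logic, and not as part of the original deck, here is a minimal sketch in Python comparing posttest scores between a program group and a control group; the file name and column names are hypothetical.

  # Minimal sketch of a quantitative analysis step (hypothetical data file
  # and column names); the deck assumes SPSS, this shows equivalent logic.
  import pandas as pd
  from scipy import stats

  scores = pd.read_csv("posttest_scores.csv")                # hypothetical file
  treated = scores.loc[scores["group"] == "program", "score"]
  control = scores.loc[scores["group"] == "control", "score"]

  t, p = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test
  print(f"mean difference = {treated.mean() - control.mean():.2f}")
  print(f"t = {t:.2f}, p = {p:.3f}")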

  27. Step 5: Choose evaluation design • True Control Group • Identify all participants • Pretest all participants • Randomly divide participants into one of two groups (Control or Experimental) • Avoid confounding and contaminating variables • Posttest both groups simultaneously.
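
As a minimal sketch of the random-assignment step this design calls for (the participant IDs and group sizes below are hypothetical, not from the deck), in Python:

  # Randomly divide pretested participants into control and experimental groups.
  import random

  participants = [f"P{i:03d}" for i in range(1, 61)]   # 60 hypothetical participants
  random.shuffle(participants)

  half = len(participants) // 2
  experimental = participants[:half]   # receives the program
  control = participants[half:]        # does not receive the program
  # Both groups would then be posttested at the same time.
  print(len(experimental), len(control))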

  28. True Control Group with Posttest only • Same as True Control Group, BUT no pretest is given. • Hope that randomization ensures equivalence of groups.

  29. Non-equivalent Control Group • Find a group similar to your experimental group to serve as the control • Pretest both groups • Investigate differences • Posttest both groups
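
A minimal sketch of what "investigate differences" can look like in practice, assuming hypothetical pretest/posttest data (the deck itself does not prescribe a tool): check whether the non-equivalent groups already differed at pretest, then compare pre-to-post gains rather than raw posttest scores.

  # Hypothetical file with columns: group, pretest, posttest.
  import pandas as pd
  from scipy import stats

  df = pd.read_csv("pre_post_scores.csv")
  exp = df[df["group"] == "program"]
  ctl = df[df["group"] == "control"]

  # Were the groups already different before the program?
  t_pre, p_pre = stats.ttest_ind(exp["pretest"], ctl["pretest"], equal_var=False)

  # Compare gains to allow for any initial difference between the groups.
  t_gain, p_gain = stats.ttest_ind(exp["posttest"] - exp["pretest"],
                                   ctl["posttest"] - ctl["pretest"],
                                   equal_var=False)
  print(f"pretest: p = {p_pre:.3f}; gains: p = {p_gain:.3f}")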

  30. Single Group Time Series • Collection of scores from same group • Several occasions prior to experiment • Several occasions during experiment • Several occasions after experiment
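
A minimal sketch of how a single-group time series might be laid out for inspection, with hypothetical scores (not from the deck), in Python:

  # Mean scores from the same group on several occasions before, during,
  # and after the program (all numbers are hypothetical).
  import statistics

  phases = {
      "before": [61.2, 62.0, 60.8, 61.5],
      "during": [64.1, 65.3, 66.0],
      "after":  [67.2, 66.8, 67.5],
  }
  for label, series in phases.items():
      print(f"{label:>6}: mean = {statistics.mean(series):.1f}")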

  31. Time Series with Non-Equivalent Control Groups • Not randomly assigned • Same procedure as above with both groups

  32. Before and After Design • Informal comparisons • Compare the experimental group with national sample norms • Examine school records • Evaluate against predetermined standards

  33. Step 6: Choose sampling strategy for conducting data collection • Step 7: Estimate the cost of the evaluation • Step 8: Come to final agreement about services and responsibilities
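
For Step 6, one simple option is a random sample of participants for data collection; a minimal Python sketch with a hypothetical roster and sample size:

  # Simple random sample drawn from a hypothetical roster of 200 participants.
  import random

  roster = [f"participant_{i}" for i in range(1, 201)]
  sample = random.sample(roster, k=30)
  print(f"sampled {len(sample)} of {len(roster)} participants")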

  34. Phase C: Collect and Analyze Information • Step 1: Set deadlines • Step 2: Set up the evaluation design • Step 3: Administer instruments, score, and record data • Step 4: Conduct the data analysis

  35. Phase D: Reporting the Findings • Step 1: Plan the report • Step 2: Choose a method of presentation

  36. Be careful to . . . • apply standards and criteria appropriately. • use valid and reliable instruments. • be objective rather than subjective (as in formative evaluation). • make sure that program implementation has been completed.

  37. Realize that . . . • the program documentation that you generate may be used for accountability, creating a lasting description/impression of the program, and/or creating a list of the possible causes of program effects.

  38. The critical characteristic is: • to provide the best possible information that could have been collected under the circumstances, and to ensure that this information meets the credibility requirements of the audience.

  39. In closing, remember . . . • The summative evaluation is most often conducted at the conclusion of the program to provide potential consumers with judgments about the program’s worth or merit. • The more skeptical your audience, the greater the necessity for providing formal backup data.
