EVAL 6000: Foundations of Evaluation

Presentation Transcript


  1. EVAL 6000: Foundations of Evaluation Dr. Chris L. S. Coryn Kristin A. Hobson Fall 2012

  2. Agenda • Announcements • Lingering questions from the first meeting • Activity 1 • Lecture • Overview of evaluation theory • History of evaluation • Basic principles and core concepts • Shadish, Cook, & Leviton’s (1991) five principles of program evaluation theory • Questions and discussion • Encyclopedia of Evaluation entries

  3. Announcements • Date: September 20th, 2012 • Time and Location: 12:00 in the President’s Dining Room, Bernhard Center (Main floor) • Title: Some Elements of Context and Their Influence on Evaluation Practice • Presenter(s): Jody Fitzpatrick—Associate Professor, University of Colorado Denver and President-Elect of the American Evaluation Association • Abstract: Many elements of context influence evaluation but some primary ones include the setting of the evaluand (education, social welfare, mental health, environment) and the discipline or training of the evaluator and the key stakeholders. Both of these contextual issues influence the culture and values concerning evaluation. I (Fitzpatrick) will talk about the diversity in evaluation in the United States and other countries today and how these elements, setting and discipline, influence approaches and practice in evaluation.

  4. Lingering Questions • Are there questions regarding the syllabus, first assignment, or other matters related to the course?

  5. Activity 1 • Draw the first image that comes to mind when you hear the word evaluation (15 minutes) • In small groups share your images, identify common themes, and write your themes on the flipchart (20 minutes) • Share your group’s themes (15 minutes)

  6. Meta-Theory and Theory • A meta-theory is a theory whose subject matter is some other theory • In other words, it is a theory about a theory • A theory is a set of interrelated constructs, definitions, and propositions that present a systematic view of phenomena by specifying relations among variables, with the purpose of explaining and predicting phenomena

  7. Evaluation Theory • Evaluation theories describe and prescribe what evaluators do or should do when conducting evaluations and are mostly normative in origin • They specify such things as evaluation purposes, users and uses, who participates in the evaluation process and to what extent, general activities or strategies, methods choices, and roles and responsibilities of the evaluator, among others

  8. Classification of Theories • Shadish, Cook, and Leviton's five principles that undergird evaluation • Alkin and Christie's evaluation theory tree, which classifies and describes major theorists' orientations • Stufflebeam and Coryn's five categories of evaluation models and approaches • Scriven and Fournier's more general 'logic of evaluation'

  9. Program Evaluation Theory • Shadish, Cook, and Leviton's five principles that undergird evaluation • Theory of practice • Theory of knowledge • Theory of valuing • Theory of use • Theory of social programming

  10. Shadish, Cook, & Leviton’s Elements of “Good Theory for Social Program Evaluation” • Social programming • Ways that social programs and policies develop, improve, and change, especially in regard to social problems • Knowledge construction • Ways researchers/evaluators construct knowledge claims about social programs • Valuing • Ways values can be attached to programs • Knowledge use • Ways social science information is used to modify programs and policies • Evaluation practice • Tactics and strategies evaluators follow in their professional work, especially given the constraints they face

  11. Evaluation Theory Tree • Alkin and Christie’s theory tree

  12. Broad Category Classification • Stufflebeam and Coryn’s classification system for evaluation models and approaches • Pseudoevaluations • Questions- and methods-oriented • Improvement- and accountability-oriented • Social agenda/advocacy • Eclectic

  13. Logic of Evaluation • Scriven's logic of evaluation comes closest to a meta-theory of evaluation • Establishing criteria • On what dimensions must the evaluand do well? • Constructing standards • How well should the evaluand perform? • Measuring performance and comparing with standards • How well did the evaluand perform? • Synthesizing and integrating information/data into a judgment of merit or worth • What is the merit or worth of the evaluand?
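Scriven's four steps read almost as an algorithm, so a small worked example may help. Below is a minimal Python sketch of the logic using a simple weight-and-sum synthesis rule (in the spirit of the 'Quantitative Weight and Sum' technique listed among the encyclopedia entries at the end of this session); all criteria, weights, standards, and performance scores are hypothetical and are not from the lecture.

    # Hypothetical sketch: Scriven's general logic of evaluation rendered as a
    # weight-and-sum synthesis. Every value below is invented for illustration.

    # Step 1 -- Establish criteria: dimensions on which the evaluand must do
    # well, each with an importance weight (weights sum to 1.0 here).
    criteria = {"effectiveness": 0.5, "cost": 0.2, "sustainability": 0.3}

    # Step 2 -- Construct standards: minimum acceptable score per criterion,
    # on a 1-5 scale.
    standards = {"effectiveness": 3, "cost": 2, "sustainability": 3}

    # Step 3 -- Measure performance: the evaluand's observed scores on the
    # same 1-5 scale.
    performance = {"effectiveness": 4, "cost": 3, "sustainability": 2}

    def synthesize(criteria, standards, performance):
        """Step 4 -- Compare performance with standards and synthesize the
        results into a single judgment of merit or worth."""
        failed = [c for c in criteria if performance[c] < standards[c]]
        weighted = sum(w * performance[c] for c, w in criteria.items())
        if failed:  # a failed minimum standard caps the overall judgment
            return f"fails standards on: {', '.join(failed)} (weighted score {weighted:.1f}/5)"
        return f"meets all standards (weighted score {weighted:.1f}/5)"

    print(synthesize(criteria, standards, performance))
    # -> fails standards on: sustainability (weighted score 3.2/5)

The check for failed minimum standards reflects a common feature of synthesis rules: a high weighted total cannot rescue an evaluand that falls below the bar on any single criterion.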

  14. General Premises • Factual premises • The nature, performance, or impact of an evaluand or evaluee • Roughly equivalent to description (“what’s so?”) • Value premises • The properties or characteristics (i.e., criteria and standards) which typify a good, valuable, or important evaluand or evaluee of a particular class or type in a particular context

  15. Value Premises • General values • The merit-defining criteria by which an evaluand or evaluee is evaluated; the properties or characteristics which define a 'good' or 'valuable' evaluand or evaluee • Specific values • The standards (i.e., levels of performance; usually an ordered set of categories) which are applied, and against which performance is compared, in order to determine whether that performance is meritorious, valuable, or significant

  16. General Logic (Scriven) and Working Logic (Fournier) • General logic (Scriven) • Establish criteria • Construct standards • Measure performance and compare to standards • Synthesize into a judgment of merit or worth • Working logic (Fournier) • Phenomenon: Functional product • Question: Is X a good/less good one of its type? • Problem: Extent of performance • Claim: Performance/value

  17. General Logic

  18. Working Logic

  19. Historical Evolution of Evaluation

  20. "In the beginning God created the heaven and the earth, then God stood back, viewed everything made, and proclaimed, 'Behold, it is very good.' And the evening and the morning were the sixth day. And on the seventh day God rested from all work. God's archangel came then, asking, 'God, how do you know that what you have created is "very good"? What are your criteria? On what data do you base your judgment? Just what results were you expecting to attain? And aren't you a little close to the situation to make a fair and unbiased evaluation?' God thought about these questions all that day, and God's rest was greatly disturbed. On the eighth day God said, 'Lucifer, go to hell.' Thus was evaluation born in a blaze of glory." — Michael Q. Patton

  21. Ancient Practice, New Discipline • Arguably, evaluation is the single most important and sophisticated cognitive process in the repertoire of human reasoning and logic • It is a natural, evolutionary process without which we would not survive • Earliest known examples • Product evaluation • Personnel evaluation

  22. Early History in the United States • Tyler's national "Eight-Year Study" (1933-1941) • Involved 30 secondary schools and 300 colleges and universities and addressed narrowness and rigidity in high school curricula • Mainly educational assessments during the 1950s and early 1960s, conducted by social scientists and education researchers

  23. Early History in the United States • Johnson’s “War on Poverty” and “Great Society” programs of the 1960s • Head Start, Follow Through • Evaluation clause in Elementary and Secondary Education Act (ESEA) • Evaluation became part of every federal grant

  24. Toward Professionalization • Two U.S.-based professional evaluation organizations emerged in the mid-1970s • Evaluation Network (E-Net) • Evaluation Research Society (ERS) • In 1986, the two merged to form what is now the American Evaluation Association (AEA)

  25. Growing Concerns for Use • Through the 1970s and 1980s, growing concerns were voiced about the utility of evaluation findings, in general, and the use of experimental and quasi-experimental designs, more specifically

  26. Decreased Emphasis • In the 1980s, huge cuts in social programs resulted from Reagan's emphasis on less government involvement • The requirement for evaluation was removed from, or lessened in, many federal programs during this period • Even so, during the 1980s many school districts, universities, private companies, state departments of education, the Federal Bureau of Investigation (FBI), the Food and Drug Administration (FDA), and the General Accounting Office (GAO) developed internal evaluation units

  27. Increased Emphasis • In the 1990s, there was an increased emphasis on government program accountability and organizations’ efforts to be lean, efficient, global, and more competitive • Evaluation was conducted not only to meet government accountability but also to enhance effectiveness • In addition, it was during this period that an increasing number of foundations created internal evaluation units, provided support for evaluation activities, or both

  28. Recent Milestones • In 2001, the reauthorization of ESEA as the No Child Left Behind (NCLB) Act produced what is considered the most sweeping reform of federal education policy since 1965 • It redefined the federal role in K-12 education by focusing on closing the achievement gap between disadvantaged and minority students and their peers • NCLB has had a profound influence on evaluation design and methods by emphasizing the use of randomized controlled trials (RCTs) • To this day, the RCT debate is one of the most pervasive in evaluation

  29. Professionalization • By 2010, there were more than 65 national and regional evaluation organizations throughout the world, most in developing countries • Although specialized training programs have existed for several decades, graduate degree programs in evaluation have emerged only recently • Australasia • Africa • Canada • Central America • Europe (not every country) • Japan • Malaysia • United Kingdom

  30. Definition • Evaluation is the act or process of determining the merit, worth, or significance of something, or the product of that process • Merit: Intrinsic quality; independent of context and costs • Worth: Synonymous with value; quality taking costs and context into consideration • Significance: Synonymous with importance; merit and worth in a specific context

  31. Competing Definitions • Evaluation is “the use of social science research procedures to systematically investigate the effectiveness of social intervention programs” (Rossi, Freeman, & Lipsey). • Proponents of theory-driven evaluation approaches characterize evaluation as explaining “how and why programs work, for whom, and under what conditions.”

  32. Competing Definitions • Advocates of the empowerment evaluation movement portray evaluation as "the use of evaluation concepts and techniques that foster self-determination." • The Organisation for Economic Co-operation and Development (OECD) defines evaluation as "the systematic and objective assessment of an on-going or completed project, programme or policy, its design, implementation and results…the aim is to determine the relevance and fulfillment of objectives, development efficiency, effectiveness, impact and sustainability."

  33. Purposes • Formative: To improve • Summative: To inform decision making • Developmental/proformative: To help develop an intervention or program; ongoing formative evaluation • Accountability: To hold accountable; usually subsumed under summative evaluation • Monitoring: To assess implementation and gauge progress toward a desired end • Knowledge generation: To generate knowledge about general patterns of effectiveness • Ascriptive: Merely for the sake of knowing

  34. Functional Forms • Process evaluation • Assessment of everything that occurs prior to true outcomes • Outcome evaluation • Assessment of an evaluand’s effects • Cost evaluation • Assessment of monetary and non-monetary costs, direct and indirect costs, and actual and opportunity costs

  35. Sub-Divisions of Evaluation • The Elders • Logic • Ethics • Aesthetics • Medicine • The Established • Product • Personnel • Performance • Program • Policy • Proposal • Portfolio • Phenomenon • The Newbies • Intradisciplinary • Evaluation of research • Meta-evaluation

  36. Uses and Misuses • Use • Legitimate Use: Ideal Use (instrumental use, conceptual use, and persuasive use) • Misuse: Mistaken Use (incompetence, uncritical acceptance, unawareness) and Mischievous Use (manipulation, coercion) • Non-Use • Justified Non-Use: Rational Non-Use • Unjustified Non-Use: Political Non-Use and Abuse (inappropriate suppression of findings) • Cousins, J. B. (2004). Commentary: Minimizing evaluation misuse as principled practice. American Journal of Evaluation, 25(3), 391-397.

  37. Professional Standards • Utility • Feasibility • Propriety • Accuracy • Evaluation Accountability

  38. Questions and Discussion

  39. Encyclopedia Entries • Bias • Causation • Checklists • Conceptual Use • Consumer • Effectiveness • Efficiency • Epistemology • Evaluation Use • Experimental Design • Experimental Society • Impartiality • Independence • Instrumental Use • Intended Uses • Judgment • Merit • Modus Operandi • Ontology • Outcomes • Paradigm • Positivism • Postpositivism • Process Use • Quantitative Weight and Sum • Recommendations • Synthesis • Transdiscipline • Treatments • Worth
