
EVAL 6000: Foundations of Evaluation

Dr. Chris L. S. Coryn & Carl D. Westine

September 23, 2010

Evaluation Theory and Logic

  • Like statistics, evaluation is a subject of amazingly many uses and yet few effective practitioners

    —Coryn (2009)

Evaluation Theory and Logic

  • Theory (Social Science)

    • A set of interrelated constructs, definitions, and propositions that present a systematic view of phenomena by specifying relations among variables, with the purpose of explaining and predicting phenomena

    • Not to be confused with natural or biological theories, phenomena, predictions, explanations, principles, or laws, among others

  • Evaluation Theory

    • Evaluation theories are normative: they describe and prescribe what evaluators do or should do when conducting evaluations (and their anticipated consequences)

    • They specify such things as evaluation purposes, users and uses, who participates in the evaluation process and to what extent, general activities or strategies, methods choices, and roles and responsibilities of the evaluator, among others

Evaluation Theory Taxonomies

  • Four systems are useful in understanding, describing, and classifying various types of evaluation theories

    • Shadish, Cook, and Leviton’s (1991) five principles that undergird evaluation

      • This system is directed toward theories of program evaluation

    • Stufflebeam’s (2001) taxonomy

      • Classifies evaluation approaches into distinct categories based on primary orientation

    • Alkin and Christie’s (2004) evaluation theory tree

      • Describes major theorists’ orientations

    • Fournier’s (1995) more general ‘logic of evaluation’ (largely derived from Scriven’s earlier works)

      • This system is more generalizable and useful for describing nearly all forms of evaluation (e.g., personnel, product, program) and approaches (e.g., goal-based, participatory, empowerment)

Shadish et al. Five Principles

  • Theories of evaluation can be (somewhat) described by five dimensions (we’ll come back to this next week)

    • Social programming: the ways that social programs and policies develop, improve, and change, especially in regard to social problems

    • Knowledge construction: the ways researchers/evaluators construct knowledge claims about social programs

    • Valuing: the ways values can be attached to programs

    • Knowledge use: the ways social science information is used to modify programs and policies

    • Evaluation practice: the tactics and strategies evaluators follow in their professional work, especially given the constraints they face

Stufflebeam Taxonomy

  • General classification scheme

    • Pseudoevaluations

    • Questions- and methods-oriented

    • Improvement- and accountability-oriented

    • Social agenda and advocacy

    • Eclectic

Stufflebeam Taxonomy

  • Pseudoevaluations

    • Shaded, selectively released, overgeneralized, or even falsified findings

    • Falsely characterize constructive efforts—such as providing evaluation training or developing an organization’s evaluation capability

    • Serving a hidden, corrupt purpose

    • Lacking true knowledge of evaluation planning, procedures, and standards

    • Feigning evaluation expertise while producing and reporting false outcomes

Stufflebeam Taxonomy

  • Question- and method-oriented

    • Address specific questions (often employing a wide range of methods)—questions-oriented

    • Typically use a particular method (methods-oriented)

    • Whether the questions or methods are appropriate for assessing merit and worth is a secondary consideration

    • Both are narrow in scope and often deliver less than a full assessment of merit and worth

Stufflebeam Taxonomy

  • Improvement- and accountability-oriented

    • Fully assess an evaluand’s merit and worth

    • Expansive and seek comprehensiveness

    • Consider the full range of questions and criteria to assess an evaluand

    • Often employ needs assessment as the source of foundational criteria

    • Look for all relevant outcomes, not just those keyed to objectives

Stufflebeam Taxonomy

  • Social agenda- and advocacy-oriented

    • Aimed at increasing social justice through evaluation

    • Seek to ensure that all segments of society have equal access to opportunities and services

    • Advocate affirmative action to give the disadvantaged preferential treatment

    • Favor constructivist orientation and qualitative methods

Stufflebeam Taxonomy

  • Eclectic

    • No connection to any particular evaluation philosophy, methodological approach, or social mission

    • Advanced pragmatic approaches that draw selectively from a wide range of other evaluation approaches

    • Designed to accommodate needs and preferences of a wide range of clients and evaluation assignments

    • Unconstrained by a single model or approach

The Evaluation Theory Tree




[Figure: the evaluation theory tree (Alkin & Christie, 2004), with roots in accountability & fiscal control and social inquiry]

The Evaluation Theory Tree

  • The trunk is built on a dual foundation of accountability and social inquiry

    • These two areas have supported development of the field in different ways

  • The need and desire for accountability presents a need for evaluation

    • Accountability is broad in scope

      • It is not a limiting activity; rather, it is designed to improve programs (and other things), society, and the human condition

The Evaluation Theory Tree

  • The social inquiry root of the tree emanates from a concern for employing a systematic and justifiable set of methods for determining accountability

  • While accountability provides the rationale, it is primarily from social inquiry that evaluation models (i.e., theories, approaches) have been derived

  • The main branch of the tree is the continuation of the social inquiry trunk

    • This is the evaluation as research, or evaluation guided by research methods, branch (designated METHODS)

The Evaluation Theory Tree

  • Initially inspired by Michael Scriven, the VALUING branch firmly establishes the vital nature of the evaluator in valuing

    • Those on this branch maintain that placing value on objects is the central task of evaluation

    • Subsequent theorists extend the evaluator’s role to include facilitating the placing of value by others (e.g., Guba & Lincoln)

The Evaluation Theory Tree

  • The third major branch is USE, which originated with the work of Daniel Stufflebeam and Joseph Wholey, where evaluation was focused on decision making

    • Theorists on this branch express a concern for how evaluation will be used and by whom

    • Michael Patton has developed the most comprehensive and extensive theory of use

Evaluation Logic

  • Four steps of the general (working) logic

    • Establishing criteria

      • On what dimensions must the evaluand do well?

    • Constructing standards

      • How well should the evaluand perform?

    • Measuring performance and comparing with standards

      • How well did the evaluand perform?

    • Synthesizing and integrating information/data into a judgment of merit or worth

      • What is the merit or worth of the evaluand?
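As a hypothetical illustration, the four steps above can be sketched in a few lines of Python. The criteria, standards, scores, and synthesis rule below are invented for illustration; they are not from the lecture:

```python
# Hypothetical sketch of the four-step working logic of evaluation.
# All criteria, standards, and scores are invented for illustration.

# Step 1: establish criteria (dimensions on which the evaluand must do well)
criteria = ["effectiveness", "efficiency", "sustainability"]

# Step 2: construct standards (how well the evaluand should perform, 0-10 scale)
standards = {"effectiveness": 7, "efficiency": 5, "sustainability": 6}

# Step 3: measure performance and compare with the standards
performance = {"effectiveness": 8, "efficiency": 4, "sustainability": 6}
comparison = {c: performance[c] >= standards[c] for c in criteria}

# Step 4: synthesize into a judgment of merit (here, a deliberately simple
# decision rule: meritorious only if every standard is met)
judgment = "meritorious" if all(comparison.values()) else "not meritorious"
print(comparison, judgment)
```

The simple all-or-nothing rule in step 4 is only one possible synthesis; the later slides on fact–value synthesis describe rubric-based and weighted alternatives.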

Evaluation Logic

  • This logic requires two general types of premises

    • Factual premises

      • The nature, performance, or impact of an evaluand or evaluee

      • Roughly equivalent to description (“what’s so?”)

    • Value premises

      • The properties or characteristics (i.e., criteria and standards) which typify a good, valuable, or important evaluand or evaluee of a particular class or type in a particular context

Evaluation Logic

  • The value premise can be further broken down into

    • General values

      • The merit-defining criteria by which an evaluand or evaluee is evaluated; the properties or characteristics which define a ‘good’ or ‘valuable’ evaluand or evaluee

    • Specific values

      • The standards (i.e., levels of performance; usually an ordered set of categories) which are applied and against which performance is compared, in order to determine whether that performance is meritorious, valuable, or significant

    • The ‘sum’ (i.e., synthesis) of these two answers the “so what?” question

Evaluation Logic: General

Evaluation Logic: General & Working

Evaluation Logic: Working

[Diagram slides illustrating the general and working logic; figures not reproduced in the transcript]

Evaluation Logic

  • Several methods of synthesis (step 4)

    • Primary method is fact-value synthesis, which is comparing performance to standards on a single criterion

      • Typically using decision trees/rules and rubrics

      • Sometimes quantitatively (e.g., allocation or distributional methods, including differential weighting)

    • Synthesis across multiple criteria or dimensions normally uses one of two methods

      • Numeric Weight and Sum (NWS)

      • Qualitative Weight and Sum (QWS)

    • Regardless of the method, this type of synthesis is not (yet) a precise or exact procedure and is subject to numerous sources and types of error
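As a hypothetical illustration of Numeric Weight and Sum, each criterion receives an importance weight and the evaluand's scores are combined into a single weighted average. The weights and scores below are invented, not from the lecture:

```python
# Hypothetical Numeric Weight and Sum (NWS) synthesis across criteria.
# Weights (relative importance) and scores are invented for illustration.

weights = {"effectiveness": 5, "efficiency": 2, "sustainability": 3}
scores = {"effectiveness": 8, "efficiency": 4, "sustainability": 6}

# Weighted average: each score multiplied by its weight, normalized by
# the total weight, i.e. (5*8 + 2*4 + 3*6) / 10
nws = sum(weights[c] * scores[c] for c in weights) / sum(weights.values())
print(nws)  # 6.6
```

This shows why such a synthesis is only as defensible as the weights: changing the relative importance of the criteria changes the overall judgment, which is one of the sources of error the slide notes.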

Encyclopedia Entries for last Week

  • Assessment

  • Accountability

  • Auditing

  • Campbell, Donald T.

  • Cook, Thomas D.

  • Criteria

  • Evaluand

  • Evaluation

  • Evaluation Theory

  • External Evaluation

  • Formative Evaluation

  • History of Evaluation

  • Independence

  • Logic of Evaluation

  • Objectivity

  • Scriven, Michael

  • Shadish, William R.

  • Standard Setting

  • Standards

  • Summative evaluation

  • Value-free inquiry

  • Value judgment

  • Values

Encyclopedia Entries for this Week

  • Bias

  • Causation

  • Checklists

  • Chelimsky, Eleanor

  • Conflict of interest

  • Countenance model of evaluation

  • Critical theory evaluation

  • Effectiveness

  • Efficiency

  • Empiricism

  • Independence

  • Evaluability assessment

  • Evaluation use

  • Fournier, Deborah

  • Positivism

  • Relativism

  • Responsive evaluation

  • Stake, Robert

  • Thick Description

  • Utilization of evaluation

  • Weiss, Carol

  • Wholey, Joseph
