Measurement Issues Inherent in Educator Evaluation


Presentation Transcript


  1. Measurement Issues Inherent in Educator Evaluation Michigan School Testing Conference Workshop C February 21, 2012

  2. Presenters & Developers* • Bruce Fay, Consultant, Wayne RESA • Ed Roeber, Professor, Michigan State University • Based on presentations previously developed by • Jim Gullen, Consultant, Oakland Schools • Ed Roeber, Professor, Michigan State University • Affiliated with the * We are Ph.D.s, not J.D.s. We are not lawyers and have not even played lawyers on TV. Nothing in this presentation should be construed as legal, financial, medical, or marital advice. Please be sure to consult your legal counsel, tax accountant, minister or rabbi, and/or doctor before beginning any exercise or aspirin regimen. The use of the information in this presentation is subject to political whim. This presentation may cause drowsiness or insomnia…depending on your point of view. Any likeness to characters, real or fictional, is purely coincidental.

  3. Before we even begin... Educator Evaluation is still very fluid in Michigan. This workshop will try to establish basic measurement concepts and potential issues related to the evaluation of educators regardless of what legal requirements may ultimately be imposed. The systems of educator evaluation that eventually get implemented in Michigan may not be consistent with the information presented here.

  4. Housekeeping • Cell phones on silent • Please take calls out into the lobby • We will take a few breaks (long workshop), but... • Please take care of personal needs as needed • Restrooms in several locations • Let’s get questions/comments out as they come up • Deal with them at that time if on point • Defer to later if we plan to cover it later • Parking lot – hold to end, if time • The fact that something is a measurement issue does not mean there is an answer/solution presently at hand • We don’t know everything, so we may not have an answer or even a good suggestion, so please be kind

  5. Workshop Outline • Introduction / Framing • Purpose / Components • Measuring Educator Practice • Measuring Student Achievement • Evaluating Educators – Putting it ALL Together • Reporting & Use of Educator Evaluations • Wrap Up

  6. Workshop Outline • Introduction / Framing • What are we talking about today? • Purpose / Components • Measuring Educator Practice • Measuring Student Achievement • Evaluating Educators – Putting it ALL Together • Reporting & Use of Educator Evaluations • Wrap Up

  7. Why are we here? • Not just an existential question! • We have legislation that requires performance evaluation systems that are... • Rigorous, transparent, and fair • Based on multiple rating categories • Based in part on student growth, as determined by multiple measures of student learning, including national, state, or local assessments or other objective criteria as a “significant” factor • We have a Governor’s Council that will... • Make specific recommendations to the Governor and Legislature regarding this by April 30, 2012

  8. Things we’re thinking about today • What does it mean to evaluate something? • What is the purpose of an educator evaluation system? • What components are needed? • How can components be combined? • What role does measurement play? • What do we mean by “measurement issues”? • Are there different measurement issues associated with different purposes? Roles? Stakes? • Are there other non-measurement technical issues?

  9. Things we’re thinking about today • What do we know (or believe to be true) about the degree to which educator practice determines student results? • What do we know (or believe to be true) about the degree to which student results can be attributed to educators? • What types of student achievement metrics could/should be used? • Is it only about academics, or do other things matter? • What’s “growth” and how do we measure it?

  10. Things we’re thinking about today • What does it mean for the system to be reliable, fair (unbiased), and able to support valid decisions? • What could impact our systems and threaten reliability, fairness, and/or validity? • Some of the things we are thinking about are clearly NOT measurement issues, so we may not deal with them today, but...

  11. What Are YOU Thinking About? • What issues are you concerned about? • What questions do you have coming in? • What are you expecting to take away from this workshop?

  12. Workshop Outline • Introduction / Framing • Purpose / Components • Why are we talking about this? • The system has to consist of something • Measuring Educator Practice • Measuring Student Achievement • Evaluating Educators – Putting it ALL Together • Reporting & Use of Educator Evaluations • Wrap Up

  13. Possible Purpose(s) of an Educator Evaluation System

  14. The Big Three • Quality Assurance / Accountability • Performance-based Rewards • Continuous Improvement • Some of these are higher stakes than others! • High stakes systems need to be rigorous in order to be defensible • Rigorous systems are more difficult, time consuming, and costly to implement

  15. Rigor vs. Purpose [Diagram: Rigor (vertical axis, from Low = not defensible to High = defensible) plotted against Stakes (horizontal axis, from Low = not consequential to High = consequential). QA and Rewards appear toward the high-rigor end; CI appears at lower rigor and lower stakes.]

  16. Teacher Examples [Diagram: example programs placed on the same Rigor vs. Stakes axes. High rigor: National Board Certification, Praxis III, structured mentoring programs (e.g., New Teacher Center). Low rigor: informal mentoring programs (lower stakes) and traditional evaluation systems (higher stakes, marked DANGER!).]

  17. Quality Assurance / Accountability (very high stakes) What: • Assure all personnel competently perform their job function(s) • A minimum acceptable standard • The ability to accurately identify and remove those who do not meet this standard Why: • Compliance with legal requirements • Fulfill a public trust obligation • Belief that sanction-based systems (stick) “motivate” people to fix deficient attitudes and behaviors

  18. Performance-based Rewards (moderate stakes) What: • Determine which personnel (if any) deserve some form of reward for performance / results that are: • Distinguished • Above average • The ability to accurately distinguish performance and correctly tie it to results Why: • Belief that incentive-based systems (carrot) “motivate” people to strive for better results (change attitudes and behaviors) • Assumes that these changes will be fundamentally sound, rather than “gaming” the system to get the reward

  19. Continuous Improvement (lower stakes, but not less important) What: • Improvement in personal educator practice • Improvement in collective educator practice • Professional learning • Constructive / actionable feedback • Self-reflection Why: • Belief that quality should be strived for but is never attained • Status quo is not an option; things are improving or declining • It’s what professionals do • Can’t “fire our way to quality” • Clients deserve it

  20. The Nature of Professional Learning • Trust • Self-assessment • Reflection on practice • Professional conversation • A community of learners

  21. Can One Comprehensive System Serve Many Purposes? • Measurement & evaluation issues (solutions) may be different for each: • Purpose • Role (Teachers, Administrators, Other Staff) • Nationally, our industry has not done particularly well at designing/implementing systems for any of these purposes (or roles) • We have to try, but...very challenging task ahead! • We won’t get it right the first time; the systems will need continuous improvement

  22. A Technical Concept (analogy) • When testing statistical hypotheses, one has to decide, a priori, what probability of reaching a particular type of incorrect conclusion is acceptable. • Hypotheses are normally stated in “null” form, i.e., that there was no effect as a result of the treatment • Rejecting a “true” null hypothesis is a “Type I Error”, and this is the probability that is usually set a priori. • Failing to reject a false null hypothesis is a “Type II Error.” • The probability of correctly detecting a treatment effect is known as “Power.”

  23. A Picture of Type I & II Error and Power [2×2 decision table] • Reject a true null hypothesis → Type I Error • Reject a false null hypothesis → Correct decision – the treatment had a statistically significant effect; the ability to do this is known as Power • Accept (fail to reject) a true null hypothesis → Correct decision – the treatment did NOT have a statistically significant effect • Accept (fail to reject) a false null hypothesis → Type II Error

  24. These Things Are Related • Obviously we would prefer not to make mistakes, but...the lower the probability of making a Type I Error: • The higher the probability of making a Type II Error • The lower the Power of the test
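
A minimal simulation sketch of this tradeoff (the effect size, sample size, alpha levels, and trial count below are arbitrary illustrative assumptions): it estimates the Type I error rate and the power of a two-sample t-test at two different alpha levels.

```python
# Minimal sketch: lowering alpha (the Type I error rate) also lowers power.
# Effect size, sample size, and trial count are illustrative assumptions only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, effect, trials = 30, 0.5, 5000

def rejection_rate(true_effect, alpha):
    """Fraction of simulated two-sample t-tests that reject the null."""
    rejections = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(true_effect, 1.0, n)
        _, p = stats.ttest_ind(treatment, control)
        if p < alpha:
            rejections += 1
    return rejections / trials

for alpha in (0.05, 0.01):
    type1 = rejection_rate(0.0, alpha)     # null is true: rejections are Type I errors
    power = rejection_rate(effect, alpha)  # null is false: rejections are correct (Power)
    print(f"alpha={alpha}: Type I rate {type1:.3f}, power {power:.3f}, "
          f"Type II rate {1 - power:.3f}")
```

Under these assumptions, tightening alpha from 0.05 to 0.01 lowers the observed Type I rate but also lowers power (raises the Type II rate), which is exactly the relationship described above.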

  25. An Example From Medicine • When testing drugs, we want a low Type I Error (we do not want to decide that a drug is effective when in fact it is not) • When testing patients, however, we want high power and minimum Type II Error (we want to make sure we detect disease when it is actually present) • The price we pay in medicine is a willingness to tell a patient that they are sick when they are not (Type I Error) • However, since treatments can have serious side effects, our safeguard is to use multiple tests, get multiple expert opinions, and to continue to re-check the patient once treatment begins

  26. Do These Concepts Have Something To Do With Educator Evaluation? • We are not aware of anyone contemplating using statistical testing to do educator evaluation at this time, but • Yes, conceptually these ideas are relevant and have technical analogs in educator evaluation

  27. Application to Educator Evaluation • In our context, a reasonable null hypothesis might be that an educator is presumed competent (innocent until proven guilty) • Deciding that a competent educator is incompetent would be a: • Type I error (by analogy) • Very serious / consequential mistake • If we design our system to guard against (minimize) this type of mistake, it may lack the power to accomplish other purposes without error

  28. Possible Components of an Educator Evaluation System Conceptual, Legal, & Technical

  29. Conceptually Required Components • The measurement of practice (what an educator does) based on a definition of practice that is clear, observable, commonly accepted, and supported by transparent measurement methods / instruments that are technically sound and validated against desired outcomes

  30. Conceptually Required Components • The measurement of student outcomes based on a definition of desired student outcomes that is clear, commonly accepted, and supported by transparent measurement methods / instruments that are technically sound and validated for that use

  31. Conceptually Required Components • A clear and commonly accepted method for combining the two preceding components (the measurement of practice and the measurement of student outcomes) to make accurate, fair, and defensible high-stakes evaluative decisions that is: • Technically sound • Has an appropriate role for professional judgment • Includes a fair review process • Provides specific / actionable feedback and guidance • Affords a reasonable chance to improve, with appropriate supports

  32. Legally Required Components RSC 380.1249(2)(c) and (ii) ... • Annual (year-end) Evaluation • Classroom Observations (for teachers) • A review of lesson plans and the state curriculum standard being used in the lesson • A review of pupil engagement in the lesson • Growth in Student Academic Achievement • The use of multiple measures • Consideration of how well administrators do evaluations as part of their evaluation

  33. Additional Legal Requirements • The Governor’s Council on Educator Effectiveness shall submit by April 30, 2012 • A student growth and assessment tool RSC 380.1249(5)(a) • That is a value-added model RSC 380.1249(5)(a)(i) • Has at least a pre- and post-test RSC 380.1249(5)(a)(iv) • A process for evaluating and approving local evaluation tools for teachers and administrators RSC 380.1249(5)(f) • There are serious / difficult technical issues implicit in this language
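
One common, highly simplified form of a value-added style calculation built on a pre- and post-test is a residual-gain estimate. The sketch below, using entirely hypothetical data and a made-up helper name, is only an illustration of that general idea; it is not the growth and assessment tool or value-added model the Council was charged with recommending.

```python
# Minimal sketch of a residual-gain "value-added" style estimate: predict each
# student's post-test from the pre-test, then average the residuals by teacher.
# Data and function name are hypothetical; this is NOT the statutory tool/model.
import numpy as np

def residual_gain_by_teacher(pre, post, teacher_ids):
    """Return {teacher_id: mean residual} from a post-on-pre linear regression."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    slope, intercept = np.polyfit(pre, post, 1)    # simple linear prediction
    residuals = post - (slope * pre + intercept)   # actual minus expected post-test
    return {t: residuals[np.array([tid == t for tid in teacher_ids])].mean()
            for t in set(teacher_ids)}

# Hypothetical scores for nine students taught by three teachers:
pre_scores  = [400, 420, 450, 410, 430, 460, 415, 445, 425]
post_scores = [430, 445, 470, 425, 450, 500, 430, 455, 440]
teachers    = ["A", "A", "A", "B", "B", "B", "C", "C", "C"]
print(residual_gain_by_teacher(pre_scores, post_scores, teachers))
```

Even this toy version surfaces some of those technical issues: small numbers of students per teacher, measurement error in both tests, and non-random assignment of students all influence the resulting averages.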

  34. Yet More Legislation RSC 380.1249(2)(a)(ii) ... • If there are student growth and assessment data (SGaAD) available for a teacher for at least 3 school years, the annual year-end evaluation shall be based on the student growth and assessment data for the most recent 3-consecutive-school-year period. • If there are not SGaAD available for a teacher for at least 3 school years, the annual year-end evaluation shall be based on all SGaAD that are available for the teacher.
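
Read purely as a data-selection rule, the 3-year language might be sketched as below. The data structure (a mapping of school year to growth data) is a hypothetical stand-in, and the sketch assumes the most recent three years with data are in fact consecutive; handling gaps is one of the ambiguities left open by the statutory language.

```python
# Minimal sketch of the selection rule in RSC 380.1249(2)(a)(ii).
# `growth_by_year` is a hypothetical mapping of school year -> growth/assessment
# data for one teacher; the statute does not prescribe any particular format.
def select_growth_data(growth_by_year):
    """Return the growth data the annual year-end evaluation would draw on."""
    years = sorted(growth_by_year)          # school years with data available
    if len(years) >= 3:
        # At least 3 years: use the most recent 3-consecutive-school-year period.
        # (Assumes the last three years with data are actually consecutive.)
        return {y: growth_by_year[y] for y in years[-3:]}
    # Fewer than 3 years: use all the data that are available.
    return dict(growth_by_year)

# Hypothetical examples:
print(select_growth_data({2009: 1.2, 2010: 0.8, 2011: 1.0, 2012: 0.9}))  # last 3 years
print(select_growth_data({2011: 1.0, 2012: 0.9}))                        # all available
```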

  35. The Ultimate Legal Requirement • Mandatory dismissal of educators who have too many consecutive ineffective ratings

  36. Required Technical Properties • In general, to be legally and ethically defensible, the system must have these technical properties embedded throughout: • Reliable – internally and externally consistent • Fair & Unbiased – objectively based on data • Validated – capable of consistently making correct (accurate) conclusions about adult performance that lead to correct (accurate) decisions/actions about educators

  37. Technical Components – Practice • Clear operational definition of practice (teaching, principalship, etc.) with Performance Levels • Professional development for educators and evaluators to ensure a thorough and common understanding of all aspects of the evaluation system • Educators, evaluators, mentors / coaches trained together (shared understanding) • Assessments to establish adequate depth and commonality of understanding (thorough and common)

  38. Technical Components – Practice • Validated instruments and procedures that provide consistently accurate, unbiased, defensible evidence of practice from multiple sources, with ongoing evidence of high inter-rater reliability, including: • Trained/certified evaluators • Periodic calibration of evaluators to ensure consistent evidence collection between evaluators and over time (overlapping, independent observation and analysis of artifacts) • Adequate sampling of practice to ensure that evidence is representative of actual practice (may be the big measurement challenge with respect to practice) • Methods for summarizing practice evidence • Tools (software, databases) to support the work
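
As one small example of what ongoing evidence of inter-rater reliability might look like, the sketch below computes percent exact agreement and Cohen's kappa for two evaluators who independently rated the same lessons on a four-level rubric. The ratings are hypothetical, and a real calibration study would use far more lessons and raters.

```python
# Minimal sketch: exact agreement and Cohen's kappa for two evaluators who
# independently rated the same lessons on a 4-level rubric (hypothetical data).
from collections import Counter

def exact_agreement(r1, r2):
    """Proportion of lessons on which the two raters gave the same level."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Agreement corrected for the agreement expected by chance alone."""
    n = len(r1)
    observed = exact_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (observed - expected) / (1 - expected)

rater1 = [3, 2, 4, 3, 2, 1, 3, 4, 2, 3]
rater2 = [3, 2, 3, 3, 2, 2, 3, 4, 2, 4]
print(f"exact agreement = {exact_agreement(rater1, rater2):.2f}")
print(f"Cohen's kappa   = {cohens_kappa(rater1, rater2):.2f}")
```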

  39. Technical Components – Outcomes • Multiple validated measures of student academic achievement need to be: • Aligned to curriculum / learning targets • Common (where possible / appropriate) • Standardized (admin and scoring) where possible • Adequate samples of what students know and can do • Capable of measuring “growth” • Instructionally sensitive • Able to support attribution to teacher practice • Methods for summarizing the measurement of student academic achievement and attributing it to educators • Tools (software, databases) to support the work

  40. Technical Components – Evaluation • Methods for combining practice and outcome evidence • Careful use of formulas and compensatory systems, if used at all (great caution needed here) • Opportunity for meaningful self-evaluation and input by evaluee • Process for making initial summative overall judgment • Opportunity for review of summative judgment by evaluee • Reasonable appeal process
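
The caution about formulas and compensatory systems can be made concrete. In a compensatory rule (a weighted composite), a strong score on one component can offset a very weak score on the other; a conjunctive rule requires a minimum on each component. The weights, cut scores, and 1-4 scale below are illustrative assumptions only, not anything prescribed in law.

```python
# Minimal sketch contrasting a compensatory (weighted-average) combination rule
# with a conjunctive (minimum on each component) rule. A practice score and a
# student-growth score are both assumed to be on a 1-4 scale; the weights and
# cut scores are illustrative assumptions.
def compensatory(practice, growth, w_practice=0.6, w_growth=0.4, cut=2.5):
    composite = w_practice * practice + w_growth * growth
    label = "effective" if composite >= cut else "ineffective"
    return label, round(composite, 2)

def conjunctive(practice, growth, min_each=2.0):
    meets_both = practice >= min_each and growth >= min_each
    return "effective" if meets_both else "ineffective"

# A strong practice score paired with a very weak growth score:
print(compensatory(3.8, 1.0))   # ('effective', 2.68): the practice score compensates
print(conjunctive(3.8, 1.0))    # 'ineffective': the low growth score blocks the rating
```

The same pair of scores comes out "effective" under one rule and "ineffective" under the other, which is exactly why the combination method needs to be chosen and justified with great care.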

  41. Evaluation vs. Measurement • Measurement = accurate data, often quantitative, to be used as evidence. A science and an art. • Evaluation = value judgment (hopefully accurate) • What is the (proper) role of measurement within an evaluation system? • Appropriate, sufficient, reliable, and unbiased data to inform evaluative decisions • What else does an evaluation system need? • Performance standards for competent practice and acceptable outcomes • Methods for combining multiple sources of evidence and making meaning of them • Methods for making overall judgments about educator effectiveness based on the above

  42. “Effectiveness” • Effectiveness (ineffectiveness) and Competence (incompetence) are not (automatically) the same thing • Effectiveness implies results, in this case relating to the (academic) achievement of specific sets of students • Educators can be (and often are) effective in one setting but not in another • Competence is something we tend to think of as a somewhat fixed set of attributes of an individual at any point in time regardless of setting • Effectiveness is definitely contextual – it is not a fixed attribute of a person

  43. Questions & Comments ...and perhaps a short break

  44. Workshop Outline • Introduction / Framing • Purpose / Components • Measuring Educator Practice • ...on the assumption that what educators do makes a difference in student outcomes • Measuring Student Achievement • Evaluating Educators – Putting it ALL Together • Reporting & Use of Educator Evaluations • Wrap Up

  45. Caution! “After 30 years of doing such work, I have concluded that classroom teaching … is perhaps the most complex, most challenging, and most demanding, subtle, nuanced, and frightening activity that our species has ever invented. … The only time a physician could possibly encounter a situation of comparable complexity would be in the emergency room of a hospital during or after a natural disaster.” Lee Shulman, The Wisdom of Practice

  46. Double Caution!! • The work of effectively leading, supervising, and evaluating the work of classroom teachers (principalship) must also be some of the most “complex, challenging, demanding, subtle, nuanced, and frightening” work on the planet. • Humility rather than hubris seems appropriate here. While adults certainly do not have the right to harm children, do we have the right to harm adults under the pretense of looking out for children?

  47. The Nature of Evidence • Evidence is not just someone’s opinion • It is... • Factual (accurate & unbiased) • Descriptive (non-judgmental, non-evaluative) • Relevant • Representative • Interpreting evidence is separate from collecting it, and needs to occur later in the evaluation process

  48. Cognitive Load and Grain Size • There is only so much a person can attend to • The ability to collect good evidence, especially from observation, requires frameworks of practice with associated rubrics, protocols, and tools that... • Have an appropriate “grain size” • Don’t have too many performance levels • Are realistic in terms of cognitive load for the observer • Allow/support quick but accurate sampling of specific targets

  49. Classification Accuracy • It seems that the more categories into which something can be classified, the more accurately it should be able to be classified. This, however, is not usually the case • The more categories (performance levels) there are, the more difficult it is to: • Write descriptions that are unambiguously distinguishable • Internalize the differences • Keep them clearly in mind while observing/rating • The result is that classification becomes less accurate and less reliable as the number of categories increases
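
This claim can be illustrated with a small simulation (the rater noise level, cut points, and sample size are arbitrary assumptions): two raters observe the same underlying performance with independent error, and we count how often they place it in the same category when there are 2, 4, or 7 performance levels.

```python
# Minimal sketch: exact agreement between two noisy raters falls as the number
# of performance levels grows. Noise level and sample size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_cases, rater_noise = 10000, 0.4

true_quality = rng.normal(0.0, 1.0, n_cases)                   # underlying performance
rater1 = true_quality + rng.normal(0.0, rater_noise, n_cases)  # rater 1's noisy view
rater2 = true_quality + rng.normal(0.0, rater_noise, n_cases)  # rater 2's noisy view

for levels in (2, 4, 7):
    # Equal-probability cut points on the underlying performance scale.
    cuts = np.quantile(true_quality, np.linspace(0, 1, levels + 1)[1:-1])
    agreement = np.mean(np.digitize(rater1, cuts) == np.digitize(rater2, cuts))
    print(f"{levels} levels: exact agreement {agreement:.2f}")
```

Under these assumptions, agreement drops steadily as the number of levels grows, which is the pattern described above and part of why a small number of well-defined levels is easier to use reliably.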

  50. Classification Accuracy • Classification will be most accurate and reliable when there are only two choices, e.g., • “satisfactory”/“unsatisfactory” (Effective/Ineffective) • However, the law says we need more levels, and we tend to want to separate the following anyway: • Proficient from Not Proficient • Proficient from Advanced • Basic (needs improvement) from Deficient • Trained, conscientious observers/raters can reliably distinguish four levels of performance on good rubrics
