
Gathering Feedback for Teaching


Presentation Transcript


  1. Gathering Feedback for Teaching: Combining High-Quality Observations with Student Surveys and Achievement Gains

  2. Closing the effectiveness gap

  3. Progressing beyond “The Plateau”

  4. Ensuring Reliable & Trustworthy Observations

  5. The First of Three Questions How many distinct levels of teacher performance do you think an evaluation system should recognize?

  6. The Second of Three Questions What performance levels would you assign to what fraction of your district’s teachers on the following competencies?
  • Classroom Management (time, behavior, materials)
  • Goals & Tasks (clear, appropriate, rigorous, interesting)
  • Supporting Student Understanding (content depth, feedback, questioning and discussion, instructional dialogue)

  7. The Third of Three Questions How closely associated are teaching behaviors with student outcomes (academic growth over time)? Highly, moderately, or weakly? [Scatterplot: Teacher’s Classroom Observation Score vs. Student Performance]

  8. Validation: Teachers with Higher Observation Scores Had Students Who Learned More

  9. Observation Score Distributions: Framework for Teaching

  10. Observation Score Distributions: PLATO Prime, CLASS, and MQI Lite

  11. Observation Score Distributions: UTeach Observation Protocol

  12. Measures have different strengths …and weaknesses [Slide table rating each measure High/Medium/Low (H, M, L) on several criteria; the cell-by-cell layout is not recoverable from the transcript.]

  13. The Importance of Multiple Measures

  14. Compared to What? Compared to MA Degrees and Years of Experience, the Combined Measure Identifies Larger Differences … on state tests

  15. Compared to What? …and on low-stakes assessments

  16. Compared to What? …as well as on student-reported outcomes.

  17. The Measures of Effective Teaching Project Participating Teachers
  • Two school years: 2009–10 and 2010–11
  • >100,000 students
  • Grades 4–8: ELA and Math
  • High School: ELA I, Algebra I, and Biology

  18. Research Partners
  Our primary collaborators include:
  • Mark Atkinson, Teachscape
  • Nancy Caldwell, Westat
  • Ron Ferguson, Harvard University
  • Drew Gitomer, Educational Testing Service
  • Eric Hirsch, New Teacher Center
  • Dan McCaffrey, RAND
  • Roy Pea, Stanford University
  • Geoffrey Phelps, Educational Testing Service
  • Rob Ramsdell, Cambridge Education
  • Doug Staiger, Dartmouth College
  Other key contributors include:
  • Joan Auchter, National Board for Professional Teaching Standards
  • Charlotte Danielson, The Danielson Group
  • Pam Grossman, Stanford University
  • Bridget Hamre, University of Virginia
  • Heather Hill, Harvard University
  • Sabrina Laine, American Institutes for Research
  • Catherine McClellan, Clowder Consulting
  • Denis Newman, Empirical Education
  • Raymond Pecheone, Stanford University
  • Robert Pianta, University of Virginia
  • Morgan Polikoff, University of Southern California
  • Steve Raudenbush, University of Chicago
  • John Winn, National Math and Science Initiative

  19. What the Participants Said… The MET Project is ultimately a research project. Nonetheless, participants frequently tell us they have grown professionally as a result of their involvement. Below is a sampling of comments we received.
  From teachers:
  • “The video-taping is what really drew me in; I wanted to see not only what I’m doing but what my students are doing. I thought I had a pretty good grasp of what I was doing as a teacher, but it is eye-opening … I honestly felt like this is one of the best things that I have ever done to help me grow professionally. And my kids really benefited from it, so it was very exciting.”
  • “With the videos, you get to see yourself in a different way. Actually you never really get to see yourself until you see a video of yourself. I changed immediately certain things that I did that I didn’t like.”
  • “I realized I learned more about who I actually was as a teacher by looking at the video. I learned that the things I do that I think I’m great at, I was not so great at after all.”
  • “Even the things I did well, I thought, OK, that’s pretty good; why do I do that, and where could I put that to make it go farther? So it was a two-way road: seeing what you do well, and seeing the things that have become habits that you don’t even think about anymore.”
  From raters:
  • “Being a rater has been a positive experience for me. I find myself ‘watching’ my own teaching more and am more aware of the things I should be doing more of in my classroom.”
  • “I have to say that, as a teacher, even the training has helped me refine my work in the classroom. How wonderful!”
  • “I have loved observing teachers, reflecting on my own teaching and that of the teachers teaching in my school.”

  20. MET Extension: A Library of Teaching Practice
  Additional data collection:
  • Subset of 360 MET teachers (disproportionately highly effective)
  • 50 lessons taped per teacher (18,000 total lessons)
  • 100% teacher & parental consent (allowing for broader public use)
  • Cheaper cameras with potential for scale
  Library of practice:
  • Searchable database (tagged by Common Core standards, teaching practices, etc.)
  • Tagging to be done in partnership with schools of education (with teachers-in-training)
  Potential uses:
  • Rater training, certification, and calibration
  • School districts: professional development
  • Teacher training institutions: teaching methods
  • Observation instrument developers: validation of new & existing tools

  21. MET Logical Sequence [Flow diagram spanning a Research phase and a Use phase: measures are reliable → measures predict → measures are stable under pressure → measures fairly reflect the teacher(?); measures combine into an Effective Teaching Index; measures are communicated effectively via a Teaching Effectiveness Dashboard; measures improve effectiveness.]

  22. Validation Engine
  • System picks observation rubric & trains raters
  • Raters score MET videos of instruction
  • Software provides analysis of: rater consistency; the rubric’s relation to student learning
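The slide does not spell out how the engine computes these analyses. A minimal Python sketch of the two checks it names, on simulated data (the function names, data layout, and magnitudes below are hypothetical, not the engine's actual interface):

```python
import numpy as np

def rater_consistency(scores):
    """Mean pairwise correlation between raters' scores.

    `scores` is an (n_videos, n_raters) array: each column holds one
    rater's rubric scores for the same set of MET videos. (Hypothetical
    layout; the slide does not describe the engine's data format.)
    """
    n_raters = scores.shape[1]
    pairs = [np.corrcoef(scores[:, i], scores[:, j])[0, 1]
             for i in range(n_raters) for j in range(i + 1, n_raters)]
    return float(np.mean(pairs))

def rubric_validity(teacher_scores, student_gains):
    """Correlation between teachers' rubric scores and their students'
    achievement gains -- the 'relation to student learning' check."""
    return float(np.corrcoef(teacher_scores, student_gains)[0, 1])

# Toy run on simulated data: 200 videos, 3 raters.
rng = np.random.default_rng(0)
quality = rng.normal(size=200)                      # latent teaching quality
ratings = quality[:, None] + rng.normal(scale=0.7, size=(200, 3))
gains = 0.3 * quality + rng.normal(size=200)

print(f"mean inter-rater correlation: {rater_consistency(ratings):.2f}")
print(f"score-gain correlation:       {rubric_validity(ratings.mean(axis=1), gains):.2f}")
```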

  23. Four Steps to High-Quality Classroom Observations

  24. Step 1: Define Expectations (Framework for Teaching, Danielson)

  25. Step 2: Ensure Accuracy of Observers

  26. Step 3: Monitor Reliability

  27. Multiple Observations Lead to Higher Reliability NOTES: The numbers inside each circle are estimates of the percentage of total variance in FFT observation scores attributable to consistent aspects of teachers’ practice when one to four lessons were observed, each by a different observer. The total area of each circle represents the total variance in scores. These estimates are based on trained observers with no prior exposure to the teachers’ students, watching digital videos. Reliabilities will differ in practice. See the research paper, Table 11, for reliabilities of other instruments.
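The rising-reliability pattern in those circles is the standard behavior of averaging repeated measurements, as captured by the Spearman–Brown formula. A minimal sketch (the single-lesson reliability here is illustrative, not the report's FFT estimate):

```python
def spearman_brown(r1: float, n: int) -> float:
    """Reliability of the average of n independent observations, given
    the reliability r1 of a single observation."""
    return n * r1 / (1 + (n - 1) * r1)

# Illustrative single-lesson reliability (not the report's FFT figure).
r1 = 0.35
for n in range(1, 5):
    print(f"{n} lesson(s): reliability ≈ {spearman_brown(r1, n):.2f}")
```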

  28. Students with Most Effective Teachers Learn More in School

  29. Student Perceptions The Tripod survey groups its items into seven Cs, plus test preparation:
  Care:
  • My teacher makes me feel that s/he really cares about me
  • My teacher seems to know if something is bothering me
  • My teacher really tries to understand how students feel about things
  Consolidate:
  • My teacher takes the time to summarize what we learn each day
  • The comments that I get on my work in this class help me understand how to improve
  Clarify:
  • If you don’t understand something, my teacher explains it a different way
  • My teacher knows when the class understands, and when we do not
  • My teacher has several good ways to explain each topic that we cover in the class
  Challenge:
  • My teacher asks students to explain more about the answers they give
  • My teacher doesn’t let people give up when the work gets hard
  • In this class, we learn to correct our mistakes
  Control:
  • Students in this class treat the teacher with respect
  • My classmates behave the way the teacher wants them to
  • Our class stays busy and doesn’t waste time
  Captivate:
  • My teacher makes learning enjoyable
  • My teacher makes learning interesting
  • I like the way we learn in this class
  Confer:
  • My teacher wants us to share our thoughts
  • Students get to decide how activities are done in this class
  Test Prep:
  • I have learned a lot this year about [the state test]
  • Getting ready for [the state] test takes a lot of time in our class
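Item responses like these are typically averaged within each construct to yield a per-teacher score for each C. A minimal sketch, assuming a hypothetical 1–5 Likert response matrix and item-to-construct mapping (neither is specified in the slides):

```python
import numpy as np

def construct_scores(responses, item_construct):
    """Average item responses within each construct (each C).

    `responses` is an (n_students, n_items) matrix of 1-5 Likert answers;
    `item_construct[j]` names the C that item j belongs to. Both are
    hypothetical stand-ins for however a district stores its survey data.
    """
    return {c: responses[:, [j for j, name in enumerate(item_construct)
                             if name == c]].mean()
            for c in sorted(set(item_construct))}

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(30, 6)).astype(float)  # 30 students, 6 items
item_construct = ["Control", "Control", "Care", "Care", "Clarify", "Clarify"]
print(construct_scores(responses, item_construct))
```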

  30. Student Perceptions: Top 5 Correlations (plus the two lowest-ranked items, both test prep)
  Rank | Category | Survey statement
  1 | Control | Students in this class treat the teacher with respect
  2 | Control | My classmates behave the way my teacher wants them to
  3 | Control | Our class stays busy and doesn’t waste time
  4 | Challenge | In this class, we learn a lot every day
  5 | Challenge | In this class, we learn to correct our mistakes
  33 | Test Prep | I have learned a lot this year about [the state test]
  34 | Test Prep | Getting ready for [the state test] takes a lot of time in our class
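A ranking like the one above can be produced by correlating each item's teacher-level average with teacher value-added and sorting. A sketch on simulated data (the labels and signal strengths are hypothetical, not the MET estimates):

```python
import numpy as np

def rank_items(item_means, value_added, labels):
    """Correlate each item's teacher-level mean with teacher value-added,
    then sort items from strongest to weakest correlation."""
    corrs = [np.corrcoef(item_means[:, j], value_added)[0, 1]
             for j in range(item_means.shape[1])]
    order = np.argsort(corrs)[::-1]
    return [(labels[j], corrs[j]) for j in order]

# Simulated example: 100 teachers, 3 items with varying signal.
rng = np.random.default_rng(2)
va = rng.normal(size=100)                                   # teacher value-added
items = va[:, None] * np.array([0.5, 0.4, 0.1]) + rng.normal(size=(100, 3))
for label, r in rank_items(items, va, ["respect", "stays busy", "test prep"]):
    print(f"{label}: r = {r:.2f}")
```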

  31. Combining Measures Improved Reliability as well as Predictive Power (“Dynamic Trio”) [Scatterplot: The Reliability and Predictive Power of Measures of Teaching. X-axis: reliability; y-axis: difference in math value-added between teachers in the top 25% and bottom 25% on each measure. Points plotted: FFT alone, Tripod alone, VA alone, Combined (Equal Weights), Combined (Criterion Weights).] Note: Table 16 of the research report. Reliability based on one course section, 2 observations. For the equally weighted combination, we assigned a weight of .33 to each of the three measures. The criterion weights were chosen to maximize ability to predict a teacher’s value-added with other students. The next MET report will explore different weighting schemes.
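The note names two weighting schemes: equal weights, and criterion weights chosen to best predict value-added with other students. A minimal sketch of both on simulated data; the report's exact estimation procedure is not reproduced here, and ordinary least squares stands in as one way to fit criterion weights:

```python
import numpy as np

def standardize(x):
    return (x - x.mean()) / x.std()

# Simulated measures for 500 teachers; all magnitudes are made up.
rng = np.random.default_rng(3)
quality = rng.normal(size=500)                                   # latent effectiveness
fft    = standardize(quality + rng.normal(scale=1.5, size=500))  # observation score
tripod = standardize(quality + rng.normal(scale=1.2, size=500))  # student survey
va     = standardize(quality + rng.normal(scale=0.8, size=500))  # value-added
target = standardize(quality + rng.normal(scale=0.8, size=500))  # VA, other students

X = np.column_stack([fft, tripod, va])
equal = X @ np.full(3, 1 / 3)                       # equal-weights composite
beta, *_ = np.linalg.lstsq(X, target, rcond=None)   # criterion weights via OLS
criterion = X @ beta

for name, comp in [("equal", equal), ("criterion", criterion)]:
    print(f"{name:9s} weights: corr with held-out VA = "
          f"{np.corrcoef(comp, target)[0, 1]:.2f}")
print("criterion weights:", np.round(beta, 2))
```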
