
An Exploratory Analysis of Teaching Evaluation Data


Presentation Transcript


  1. An Exploratory Analysis of Teaching Evaluation Data. Michael D. Martinez, Department of Political Science, College of Liberal Arts and Sciences, University of Florida. martinez@ufl.edu. August 17, 2011

  2. Questions • What does our teaching evaluation instrument actually tell us about our teaching? • Are the items that students use to evaluate us actually measuring different things? • Do the items in the teaching evaluation instrument actually produce a reliable scale? • How much, on average, are teaching evaluations affected by level of instruction and size of class?

  3. Teaching Evaluation form from a social science perspective • Closed- and open-ended questions

  4. Closed-Ended Questions

  5. Open-Ended Questions

  6. Data • From CLAS, Fall 1995 through Spring 2010 • 84,163 sections • Includes only publicly visible data • Excludes CLAS-only items • Excludes “control” variables Q11–Q15 • Excludes open-ended questions

  7. What does our teaching evaluation instrument actually tell us about our teaching? • Are the items that students use to evaluate us actually measuring different things? • Probably not. • Students act as though they develop an overall attitude about the class and rate it on almost all items based on that attitude. • Do the items in the teaching evaluation instrument actually produce a reliable scale? • Yes.

  8. Inter-item correlations (CLAS data, Fall 1995 through Spring 2010). Cronbach’s alpha = 0.978
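
A minimal sketch of the reliability statistic behind that figure: Cronbach’s alpha is k/(k − 1) × (1 − Σ item variances / variance of the summed scale). The function below is a generic Python implementation; the Q1–Q3 column names and the toy ratings are illustrative assumptions, not the CLAS data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total)."""
    items = items.dropna()                     # listwise deletion of incomplete rows
    k = items.shape[1]                         # number of items on the instrument
    item_var_sum = items.var(ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Toy illustration: rows are sections, columns are items (made-up ratings).
toy = pd.DataFrame({"Q1": [5, 4, 3, 5, 2],
                    "Q2": [5, 4, 3, 4, 2],
                    "Q3": [4, 4, 3, 5, 1]})
print(round(cronbach_alpha(toy), 3))           # items move together -> alpha near 1
```

An alpha of 0.978 means sections rated highly on one item are rated highly on nearly all of them, which is consistent with the single-attitude interpretation on the previous slide.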

  9. How much, on average, are teaching evaluations affected by level of instruction and size of class? • SOME, but less than might be expected. • Q10 = a + b1·Lower + b2·Grad + b3·log(Enrollment) + e

  10. How much, on average, are teaching evaluations affected by level of instruction and size of class? • SOME, but less than might be expected. • Q10 = a + b1·Lower + b2·Grad + b3·log(Enrollment) + e • b1 is the average effect of teaching a lower-division course, relative to an upper-division course, controlling for the size of the class. • b2 is the average effect of teaching a graduate course, relative to an upper-division course, controlling for the size of the class. • b3 is the average effect of the log of class size, controlling for the level of the class.
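
Since no code accompanies the slides, here is a hedged sketch of how that model could be estimated with statsmodels. The column names and the six toy sections are invented for illustration; they are not drawn from the CLAS file.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical section-level layout (column names are assumptions, not the
# actual CLAS export): mean instructor rating q10, course level, enrollment.
evals = pd.DataFrame({
    "q10":        [4.5, 4.2, 3.9, 4.7, 4.0, 4.4],
    "level":      ["lower", "lower", "upper", "grad", "upper", "grad"],
    "enrollment": [180, 95, 40, 12, 60, 8],
})

# Dummy variables for level; upper division is the omitted reference category,
# which is what makes b1 and b2 read as effects relative to upper division.
evals["lower"] = (evals["level"] == "lower").astype(int)
evals["grad"] = (evals["level"] == "grad").astype(int)
evals["log_enroll"] = np.log(evals["enrollment"])

# Q10 = a + b1*Lower + b2*Grad + b3*log(Enrollment) + e
model = smf.ols("q10 ~ lower + grad + log_enroll", data=evals).fit()
print(model.params)  # unstandardized coefficients: a, b1, b2, b3
```

Logging enrollment builds in the plausible assumption that going from 10 to 20 students matters about as much for evaluations as going from 100 to 200.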

  11. Regression of Instructor Evaluation (Q10) on Level of Course and Class Size (log). Entries are unstandardized regression coefficients, with standard errors in parentheses.

  12. Regression of Instructor Evaluation (Q10) on Level of Course and Class Size (log)

  13. Expected Values: Humanities • Expected Values: Phys and Math

  14. Expected Values: Soc and Beh • Expected Values: Political Sci
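
The expected-value charts on slides 13 and 14 are just the fitted equation evaluated at chosen levels and class sizes. Continuing the sketch above (and reusing its hypothetical model object), predictions for each level across a range of enrollments could be generated like this:

```python
# Predicted Q10 for each course level across a range of class sizes,
# using the toy model fitted in the earlier sketch.
grid = pd.DataFrame(
    [(lo, gr, np.log(n))
     for (lo, gr) in [(1, 0), (0, 0), (0, 1)]  # lower / upper (reference) / grad
     for n in (10, 25, 50, 100, 250)],
    columns=["lower", "grad", "log_enroll"],
)
grid["expected_q10"] = model.predict(grid)
print(grid)
```

Plotting expected_q10 against class size, one line per level, reproduces the shape of those slides for whichever college's coefficients are plugged in.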

  15. Morals of the Story • We have a reliable teaching evaluation instrument which is definitely measuring something. • Sections that are evaluated positively on one item tend to be evaluated positively on other items. • Reliability isn’t validity. • Response set (students marking the same rating down the whole form) could be a problem, but the cost of fixing it would be a disruption in the continuity of the data that we have. • Like GRE scores, these scores should be regarded as a good measure, but not the only measure.

  16. Morals of the Story • Most variation in course evaluations is NOT accounted for by level of instruction or class size. • Both class size and level of instruction matter, but should not be regarded as excuses for really low evaluations.

  17. Darts and Laurels • Laurel – Brian Amos, my graduate research assistant, for help with analyzing these data. • Laurel – Dave Richardson and CLAS office, for making these data available to me. • Dart – Academic Affairs, for not making these data publicly available in a usable form to everyone. • Laurel – Academic Affairs, for (finally) creating a link to allow promotion candidates and award nominees to generate teaching evaluation reports in Word automatically with just a few clicks.

  18. Darts and Laurels • Dart – Academic Affairs, for not posting the CLAS-only items, and not posting the teaching evaluations of graduate students who taught their own courses. • Laurel – Academic Affairs, for an otherwise much improved website showing evaluations • Laurel – You, for patiently listening • Thanks!
