
When the tables are turned: Students evaluating the teaching of academic staff





  1. When the tables are turned: Students evaluating the teaching of academic staff Nobody’s favourite activity... Professor Carol Miles New Faculty Orientation 2014

  2. What we know about student ratings of teaching and courses… Perhaps a little myth-busting will be valuable here

  3. Positive Correlations • Students tend to give higher ratings to instructors in classes where they learn more • Student ratings also correlate positively with instructor self-ratings and with administrator, colleague, alumni, and trained-observer ratings

  4. Interesting Finding • There is a positive correlation between student ratings collected on the second day of class and those collected at the end of the semester (Cashin, 1995). • We don’t change? • This underscores the importance of the first day of class • “Slice” analysis – the effect even holds for impressions formed in the first few seconds of the first class

  5. Racial/Ethnic/Gender Bias There is no evidence of systematic racial/ethnic/gender bias in student evaluations of teaching, and there is substantial evidence to the contrary. This does not mean that prejudicial opinions, ratings, interpretations, or decisions have not appeared or been made… only that we cannot presume such bias to be consistent or widespread enough to be generalized. (Michael Theall, 2001)

  6. Isn’t it true that the size of the class affects student ratings? Class size is moderately correlated with the most logically related dimensions (Group Interaction and Individual Rapport), but not with other dimensions, nor with overall ratings of the course or the instructor. Does the time of day the course is taught affect student ratings? There is too little research on this aspect, but the research that has been done does not show a time-of-day effect in student ratings.

  7. Student Motivation Motivated students do tend to give higher student ratings. Therefore, upper-level courses in the major, elective courses, etc. tend to have higher student ratings. Overall evaluations should include ratings from a cross-section of courses. Evaluators should consider the motivation factor in evaluating faculty who teach lower-level required courses.

  8. “Aren’t students too immature, inexperienced and capricious to make any consistent judgements about the instructor and instruction?” “Don’t students have to be away from the course, and possibly the college, for several years before they are able to make correct judgements about the instructor and instruction?” Student ratings are quite stable, and added perspective does not alter the ratings given at the end of a course.

  9. Non-traditional Teaching Methods Evidence exists that when instructors use non-traditional (non-passive) teaching methods, student ratings are lower. Student understanding and acceptance of the teaching strategies can minimize or eliminate this effect.

  10. Beran & Violato (2004) reported • Study of over 370,000 student ratings from a major Canadian University – all disciplines • Most student and course characteristics have no impact on student ratings of instruction • Limited effect showing that students who attend class the most often, and expect the highest grades give higher ratings • Lab-type courses received higher ratings • Social sciences courses received higher ratings than natural sciences, but……

  11. Beran & Violato (2004)….. • These things accounted for only about 7% of the overall variance in the scores • Concluded that student ratings were primarily influenced by teaching, instruction and behavior than any external variables • Therefore, instrument was determined to be valid indicator of teaching effectiveness

  12. Isn’t it true that the only faculty who are really qualified to teach or evaluate their peers’ teaching are those who are actively involved in conducting research in their field? Peer ratings are “1) less sensitive, reliable and valid, 2) more threatening and disruptive of faculty morale and 3) more affected by non-instructional factors such as research productivity” than student ratings. (Murray, 1980)

  13. Practical Interpretations of Instructor Evaluation Results

  14. Consider completing a “formal” context form with every evaluation • Physical classroom conditions (hot, cold, noisy) • Time of day / day of the week / night class • Special class composition / student variables (a single cohort from a specific program?) • Aspects of the course (first time you taught it, first time it was ever offered, recently made into a core course, removed pre-requisites, etc.)

  15. Context Form… • Teaching strategies – non-traditional? Performance-based? First time trying this? • Did you take over the course half-way through from someone else? • Any other issues that may affect your scores positively or negatively • This will help you contextualize results for years to come (because they all DO blend!) • Seal it and give it to your departmental administrator

  16. Analyzing Results • Halo Effects • Horns Effects (Millstone Effects) • Central Tendency • The ONE magic number may be adversely affected by one dimension • If results don’t appear to make sense, maybe they just are nonsense • Comments are important, but keep in mind that each one is from ONE student

  17. An ethical consideration Should you discuss your students’ completion of the formal teaching evaluation prior to its administration?

  18. Informal Evaluation of Teaching • By far the most effective way to improve your formal teaching scores. • Informs Teaching “on the fly” • Various methods of collecting feedback

  19. At Newcastle SFP – Student Feedback on PROGRAM SFC – Student Feedback on COURSE SFT – Student Feedback on TEACHING

  20. Having the discussion when things seem to be going wrong

  21. Before the meeting Always allow the academic to see the results and digest them in privacy well before they are discussed.

  22. Think carefully about whether there are systemic issues. For example: Has someone who has always taught core courses to your program now been asked to teach a service course for an unrelated discipline?

  23. During the meeting Allow the academic to analyse his or her scores without judgment or comment Wait for him/her to ask for input or suggestions

  24. Is there a pattern or is it an isolated score? Are you aware of any risks taken in teaching strategies during the course in question?

  25. For seasoned academics Everyone who teaches has a tremendously personal connection to their teaching, whether they admit it or not. Do not assume that cursory engagement with general teaching theory will address deep-rooted beliefs…

  26. Such as the recurring issues of:

  27. STANDARDS

  28. STUDENTS THESE DAYS….

  29. CTL can help – suggest confidential consultation (but remember, it’s confidential)

  30. If things are really serious Have the meeting, and listen more than you talk. Allow the story to unfold. Ask the academic to present a plan. Make concrete suggestions. Document, but don’t make documentation the first step. Consult with Human Resources.

  31. Informal Evaluation Methods • Blackboard • In-class focus groups • CTL Midterm Evaluation

  32. BE GENTLE ON YOURSELF AND OTHERS • THIS IS PERSONAL
