Developing a Comprehensive Plan for Evaluating Teaching Effectiveness
One Department's Experience

Defining the Territory
How can we distinguish "the best" teachers in the department from "the average" teachers?
What information about the quality of teaching in the department should come from students?
What other sources of evaluation should be used and how should they be weighted?
What should students tell us about our teaching?
Did the class match the syllabus?
Does the evaluation cover what was read and discussed in class?
Was the instructor available to students?
Was the instructor organized?
Did the instructor explain material clearly for students at all levels?
Did students learn in the class?
How much weight should be assigned to various factors?
Self-evaluation: should count for 15%-20% of the overall evaluation (range 10%-33%)
Student evaluations: should count for 30%-50% of the overall evaluation (range 10%-66%)
Other factors to consider: number of courses taught; number of students; overall impact on the teaching mission of the department.
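The weighting scheme above can be sketched as a simple weighted average. This is an illustrative sketch only: the component names and the specific weights below are assumptions chosen from within the ranges discussed (self-evaluation 15%-20%, student evaluations 30%-50%), not the department's adopted policy.

```python
def overall_evaluation(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine component scores (all on the same scale) into one weighted score."""
    # Guard against a weighting scheme that does not account for 100% of the evaluation.
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(scores[k] * weights[k] for k in weights)

# Hypothetical weights: midpoints of the discussed ranges, with the
# remainder assigned to peer/other review.
weights = {"self": 0.175, "students": 0.40, "peers": 0.425}
# Hypothetical component scores on a 5-point scale.
scores = {"self": 4.2, "students": 4.5, "peers": 4.0}

print(overall_evaluation(scores, weights))  # 4.235
```

Because the ranges for each component are wide, a department might compute the composite under both the low and high ends of each range to see how sensitive a ranking is to the chosen weights.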
Feldman, K.A. (1992). Instructional effectiveness of college teachers as judged by teachers themselves, current and former students, colleagues, administrators and external (neutral) observers. Research in Higher Education, 33(3), 317-375.
Root, L.S. (1987). Faculty evaluation: Reliability of peer assessments of research, teaching, and service. Research in Higher Education, 26, 71-84.
Chism, N. (1998). Peer Review of Teaching. Bolton, MA: Anker Publishing.
Issues to be resolved about peer evaluation
2. How frequently should we do the videotaped evaluation?
3. Can we develop checklists to use for peer evaluation, so everyone is evaluated using the same items?
What is notable about these rankings?
(Scale: 1 = strongly agree; 5 = strongly disagree)
Undergraduate ratings are compressed at the upper end: the best-rated course is separated from the #5 course by only 0.24 points.
DO WE HAVE "RATING INFLATION" AT WORK HERE?
What do we know now?
It is possible to achieve some parsimony for administrative purposes.
Our students appear to be focused on how much and how well they learn in our courses.
We appear to be making distinctions between "good" and "excellent," rather than between "poor" and "good."