Presentation Transcript


  1. Welcome The Good, The Bad and The Ugly: What Empirical Research Says About Student Evaluations 30 August 2001 John Stone, Co-Director UWW LEARN Center

  2. Headline in January 1998 Chronicle of Higher Education New Research Casts Doubt on Value of Student Evaluations of Professors

  3. Extensive Empirical Research “Although one can find individual studies that support almost any conclusion, for a number of variables there are enough studies to discern trends.” -- Cashin, 1995

  4. This Session • Faculty Perceptions & Concerns • Student Perceptions • Generalizability, Reliability, Stability, Validity and Variables Associated with Bias • Improve Departmental Processes (Has Limitations…)

  5. Faculty Perceptions & Concerns • What do the responses on your pre-assessment of attitudes form suggest about how you view the student evaluation process? • Is it positive? • Is it useful? • What concerns do you have about student evaluations and the use of such data?

  6. Common Faculty Perceptions • Positive attitude towards ratings. • Ratings are useful in improving instruction. • Should be accompanied by some sort of peer evaluation. • Using them for personnel decisions is appropriate. • Students don’t put enough thought into them.

  7. Common Faculty Concerns Student evaluations are: • invalid (they don’t measure what they claim to measure); • unreliable (low agreement among raters); • highly correlated with grades; • popularity contests; • affected by various extraneous course characteristics (e.g., class size, subject matter, time of day); • affected by various extraneous student characteristics (e.g., age, sex, interest in the course); • affected by various extraneous instructor characteristics (e.g., age, sex, grading pattern); • completed by students who are not qualified to evaluate (e.g., immature, unable to judge long-term value); and • a threat to academic freedom.

  8. Common Student Perceptions • They take course evaluations seriously. • They are qualified to make accurate judgments about the teachers and the course. • The evaluations are fair and accurate. • Neither faculty nor administrators pay much attention to the results. • No action typically results from the completion of student evaluations.

  9. Student Evaluations: Summary of Findings “Despite some inconsistencies and unresolved issues in the extant literature, certain conclusions have been relatively well accepted by researchers and practitioners in the field.” --Marsh (1987)

  10. Student Evaluations: Summary of Findings Student evaluations of teaching are multidimensional (they measure several different aspects of teaching—e.g., a teacher “might be quite well organized” but “lack enthusiasm”). Caveat: averaging dissimilar items is not appropriate; one or more “global” items may provide sufficient data for personnel decisions (Centra, 1993).

  11. Student Evaluations: Summary of Findings Student evaluations of teaching are reliable (there are correlations among items that are supposed to measure the same thing, and agreement among ratings by different students in the same class). Caveat: reliability increases with the number of students; ratings based on fewer than five students are at risk.
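  The caveat about class size can be made concrete with the Spearman-Brown prophecy formula, which relates the reliability of a class-average rating to the number of raters. The sketch below is not part of the original presentation; the single-rater reliability of 0.20 is an assumed, illustrative value.

```python
def class_average_reliability(single_rater_reliability: float, n_students: int) -> float:
    """Spearman-Brown prophecy formula: reliability of the mean rating
    from n_students raters, given the reliability of one rater."""
    r = single_rater_reliability
    return (n_students * r) / (1 + (n_students - 1) * r)

# Assumed single-rater reliability of 0.20 (illustrative only).
for n in (3, 5, 10, 25, 50):
    print(n, round(class_average_reliability(0.20, n), 2))
# Prints roughly 0.43, 0.56, 0.71, 0.86, 0.93: the class mean becomes
# much more dependable as class size grows, which is why averages
# based on only a handful of students are shaky.
```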

  12. Student Evaluations: Summary of Findings Student evaluations of teaching are stable (ratings of a single course don’t change much over time; a study of faculty over a 13-year period found very little variation in student evaluations).

  13. Student Evaluations: Summary of Findings Student evaluations of teaching are generalizable (the evaluation is primarily a function of the instructor who teaches the course rather than the course that is taught—instructors rated as disorganized in one class tend to be rated the same way in others).

  14. Student Evaluations: Summary of Findings Student evaluations of teaching are valid (they do measure “teaching effectiveness”; measured against a number of other indicators, they have been found to correlate with self-evaluations, evaluations by peers, evaluations by administrators, and evaluations made by trained observers).

  15. Student Evaluations: Summary of Findings Student evaluations of teaching are relatively unaffected by potential biases “Student ratings tend to be statistically reliable, valid, and relatively free from bias or the need for control; probably more so than any other data used for evaluation.” -- Cashin, 1995 Caveat: some variables deserve watching.

  16. Variables that Have Little or No Relationship With Student Ratings • Instructor Variables not related to student ratings: • age of instructor • teaching experience • sex of instructor • race • personality • research productivity • Course Variables not related to student ratings: • class size (some variation) • time of day (when course is taught)

  17. Variables that Have Little or No Relationship With Student Ratings • Student Variables not related to student ratings: • age of student • sex of student • level of the student (e.g., freshman vs. senior) • student GPA • student personality • Administrative Variable not related to student ratings: • time during the term

  18. Variables that Have A Relationship With Student Ratings • Instructor Variables related to student ratings: • faculty rank (does not require control) • teaching expressiveness (does not require control) • Course Variables related to student ratings: • level of the course (potential need for control) • academic discipline (potential need for control) • workload/difficulty (does not require control)

  19. Variables that Have A Relationship With Student Ratings • Student Variables related to student ratings: • student motivation (requires control) • expected grades (potential need for control) • Administrative Variables related to student ratings: • non-anonymous ratings (requires control) • instructor present during rating (requires control) • purpose of the rating (requires control)

  20. How Does This Apply? Given what has been said, what key considerations or new ideas could be used to improve your department’s current practices for collecting and analyzing student evaluation data? • Educate students as to the use of student evaluations. • Have students complete mid-term evaluations. • Survey alumni regarding their long-term view of teaching effectiveness. • Don’t report teaching evaluation scores as a single number, but rather as separate scores on a number of dimensional items (see the sketch below).
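  As a rough illustration of the last recommendation (not from the original presentation), the sketch below shows one way a department might report per-dimension means instead of a single overall number; the dimension names and rating values are invented for the example.

```python
from statistics import mean

# Hypothetical ratings keyed by evaluation dimension (1-5 scale);
# dimension names and values are made up for illustration.
ratings = {
    "organization": [5, 4, 5, 4, 5],
    "enthusiasm":   [3, 2, 3, 3, 2],
    "clarity":      [4, 4, 3, 4, 4],
}

# Report each dimension separately rather than collapsing to one number.
for dimension, scores in ratings.items():
    print(f"{dimension}: {mean(scores):.2f} (n={len(scores)})")

# A single overall average (about 3.67 here) would hide the fact that
# this instructor is well organized but rated low on enthusiasm.
```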

  21. At Least We Have a Reasonable Understanding of What We’re Dealing With… “Student evaluations of teaching effectiveness are probably the most thoroughly studied of all forms of personnel evaluation, and one of the best in terms of being supported by empirical research.” --Marsh, 1993, p.1
