
SETE



Presentation Transcript


  1. SETE • ABOR POLICY ON STUDENT EVALUATIONS 6-221B. General Policy • It is the policy of the Arizona Board of Regents that faculty shall be evaluated on their performance in accordance with the following guidelines: • Faculty shall have their performance, personal progress, and future potential formally reviewed on a scheduled basis at least once every twelve months. • A systematic assessment of student opinion shall constitute one, but not the only, component of the evaluation.

  2. Why the Task Force on Evaluation of Teaching? • Provost Liz Grobsmith convened the Task Force on Evaluation of Teaching in 2011 to explore current NAU practices in the evaluation of teaching, examine best practices at other institutions, and make recommendations for Northern Arizona University. • Currently, there are approximately 160 different end-of-term student surveys on campus that are used as the primary end-of-term student evaluations. Some are online, some are given in class, some are forty years old and generated by the department, and some come from external companies. Because of the variability in survey procedures, even within some departments, the process is confusing and very time-consuming for students.

  3. Who was on the task force? • Kathy Bohan, Chair, Educational Psychology and Academic Chairs Council • David Boyce, Graduate Student Government • Wendy Campione, Professor, Franke College of Business and Teaching Academy • Ryan Ellis Lee, Associate Students of NAU • Gae Johnson, Professor, College of Ed and Faculty Senate • Dan Kain, Provost’s Office and Convener of Task Force • Karen Mueller, Professor, Health and Human Services • Mary Reid, Professor, School of Earth Science and Environmental Sustainability and President’s Distinguished Teaching Fellows • Michael Vincent, Dean of College of Arts and Letters and Provost’s Academic Leadership Council • Andy Walters, Associate Professor, Psychology, Social and Behavioral Sciences • Eric Yordy, Assistant Professor and Associate Dean, Franke College of Business

  4. The Task • Explore alternatives to the current course evaluation practices at NAU, and • Propose revisions to the system, procedures, and instruments in place at NAU. • The Task Force was specifically charged with determining whether to recommend any common procedures and/or items for student opinion surveys. • The Task Force met monthly throughout AY 2011-12, through April. In addition, three subcommittees (Current Practices, Best Practices, and Common Procedures, Practices and Instruments) met regularly.

  5. TASK FORCE RECOMMENDATION 1: Clarify the Purpose of Evaluations • Research demonstrates that the nature of student responses varies according to their understanding of the purposes and uses of course evaluations. Therefore, it is incumbent upon units and faculty members to communicate the value and uses of the information gathered through the evaluation process.

  6. TASK FORCE RECOMMENDATION 2: Administer Midterm and End-of-Term Questionnaires • The use of a two-stage process may: • diminish the time demands placed on students at the end of the term; • provide faculty an opportunity to incorporate valid suggestions into their courses while the providers of that feedback can still experience the changes; • offer the possibility of more individualized and substantive feedback for faculty members; • form a reasonable approach to distinguishing the uses of evaluation material (for example, a midterm questionnaire might be used in the year of comprehensive reviews for tenured faculty members, but not in expedited review years); and • allow for streamlining the final course evaluation process, potentially increasing the likelihood of student participation. • Therefore, the Task Force recommends a unit-determined approach of blending more comprehensive midterm evaluation forms with brief end-of-term questionnaires.

  7. TASK FORCE RECOMMENDATION 3 • Commercial vs. “Home-grown” Surveys • The Task Force recommends that NAU use a commercial provider to administer the student opinion surveys, to allow for comparability among units, uniformity in the administration of surveys, enhanced reliability of survey data, reduction of staffing demands within NAU, flexibility in the development of midterm survey items, and integration with the Faculty Activity and Achievement Reporting (FAAR) system. We recommend the use of SmarterSurveys for the following reasons: a single price for the year, regardless of the number of surveys administered; flexibility in constructing instruments to meet the needs of various units; and integration with the FAAR system. The Task Force recommends the tool be piloted in AY 2012-13. Results of the pilot experience will be incorporated into the decision-making process.

  8. Task Force Recommendation 4 • Common Items in End-of-Term Questionnaires • The Task Force endorses a small set of common items for end-of-term evaluation forms. These items are the components of the SETE (Student Evaluation of Teaching Effectiveness), provided through SmarterSurveys.

  9. TASK FORCE RECOMMENDATION 5 • Administering Surveys • The Task Force endorses administering surveys via a web protocol. We recognize that there are legitimate concerns about the response rates for student questionnaires. Several practices tend to lead to higher response rates, and we encourage units to explore how best to incorporate the following: • Students are more likely to respond if they are convinced the information is put to use (both by faculty members and by administrators). Therefore, communication about the uses of course evaluations must be proactive. • Students are more likely to respond if the end-of-term evaluations are succinct and less demanding of their time. • Students are more likely to respond if they are assured of anonymity. [Note: it is crucial that students understand that midterm evaluations will be read by instructors during the course.] • The Task Force urges units to consider carefully the appropriateness of incentives (e.g., extra credit) that faculty members offer students for completing evaluations.

  10. Task Force Recommendations to the Provost • The Task Force presented its recommendations to the Provost but urged broader discussion among key constituents, including the Provost’s Academic Leadership Council, the Academic Chairs Council, and the Faculty Senate. Key elements of our recommendations are as follows: • Create an evaluation system that incorporates information used for formative as well as summative purposes, based on principles of effective teaching; • Encourage units to gather and use multiple forms of evidence from multiple sources in evaluating teaching; • Following a pilot in AY 2012-13, adopt a uniform web-based course evaluation system (assuming the pilot warrants this); • Incorporate brief end-of-course surveys and midterm surveys as appropriate to the purpose at hand.

  11. REMAINING HISTORY • The Task Force presented SETE to the various recommended groups, and a limited pilot of SETE was approved for the 2012-13 academic year. • The pilot met with limited success in terms of the number of student participants, but SETE was considered a good tool by the departments that piloted it. • It was determined that the next step, to obtain a more accurate sampling of SETE results, would be to conduct a university-wide trial in the 2013-14 academic year. • This proposal was brought to the summer Senate, and it was determined at that time that the summer session was not sufficiently representative of the full Senate and that the recommendation should be deferred until the first session in the fall.

  12. REMAINING HISTORY cont. • The recommendation to conduct a campus-wide trial was presented to the full Senate on Monday, September 9, 2013, and the Senate agreed to the campus-wide trial. • The essential part of the recommendation is that, in addition to the standardized SETE evaluations, departments utilize other evaluation tools such as peer evaluations, in-class surveys, and the course surveys available in BBLearn. • The effectiveness of SETE will be evaluated following completion of the 2013-14 campus-wide trial.
