
Using Rubrics for Evaluating Student Learning



Presentation Transcript


  1. Using Rubrics for Evaluating Student Learning

  2. Purpose
  • To review the development of rubrics for the purpose of assessment
  • To share an example of how a rubric can be developed for use in assessment
  • To show how rubric assessment can be quantified and shared
  • To encourage use of this method as a form of direct assessment of student learning

  3. Why Rubric Assessment? Rubrics…
  • Provide standardized information about student learning on specifically defined student learning objectives
  • Keep control of the assessment within the department/program
  • Can link to other programs (e.g., general education program objectives)
  • Provide a form of direct assessment of student learning
  • Can be directly linked to program objectives
  • Make results easier to report

  4. Defining the Assignment To Be Assessed
  • Should be a critical assignment that covers several student learning/program objectives
  • Faculty should agree on the utility of the assessment
  • Should allow for simple measurement (e.g., descriptive statistics)
  • Should allow some additional comparisons (e.g., between freshman and senior performance, between evening and day courses, or between online and traditional courses)

  5. Match Assignments With Program Objectives
  • Rubric elements should match student learning objectives
  • Faculty should play a significant role in developing and implementing the rubric
  • Critical question: Is the rubric comprehensive enough to assess several objectives?
  • Caution: Do not “shoehorn” a rubric by including categories that are difficult to assess in the first place. It is fine if not all objectives are measured by one instrument.
  • When developing a rubric, it is therefore a good idea to make sure that a program is assessing as much as it reasonably can, given the effort it takes to develop the rubric and then implement it on a regular cycle. A fair amount of work is involved, but it is worth it!

  6. Brief Case Study
  • Case: A program has a number of learning objectives it needs to assess. It has chosen a research project students must complete during their senior year. The assignment requires students to put forward a point of view and support that argument by critically evaluating competing points of view, using an effective research method, and applying appropriate content knowledge specific to their subject area.
  • The faculty feel that they are clear on the purposes of the assignment and what each purpose means, so they collaboratively create a rubric to evaluate these projects.

  7. Example
  • Columns are chosen according to degree of performance. In this hypothetical case, the faculty chose “target,” “satisfactory,” and “unsatisfactory.” When describing what these different degrees mean, it is important to discuss what is ideal, what is satisfactory, and what is unsatisfactory. When filling in each cell of a rubric, it may be best to define the extremes first (target and unsatisfactory) and then fill in the satisfactory cell (Harder and Harper, ISU Assessment Workshop, April 2004).
  • Categories (rows) are chosen according to the student learning/program objectives of the specific department or program.
  • Each category can be assigned a different value (or weight) in order to arrive at a final score (see the sketch below). This is not necessary, however, if the purpose of the rubric is to examine each category individually.
  • When filling in each cell, care must be taken to be very specific about expectations. Faculty and evaluators may want to discuss the meaning of key words, avoiding vague wording and sentences that may contradict one another. If the wording within a cell seems unclear, two or three faculty members can review it to clarify the terms.
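To make the weighting concrete, here is a minimal Python sketch, assuming a hypothetical 3-2-1 point scale for the three performance levels; the category names and equal weights are illustrative stand-ins drawn from the case study, not a prescribed scheme.

```python
# Minimal sketch of a weighted rubric score. The point scale, categories,
# and weights below are hypothetical; a program would substitute its own.

# Points assigned to each performance level (illustrative 3-2-1 scale).
LEVEL_POINTS = {"target": 3, "satisfactory": 2, "unsatisfactory": 1}

# Each rubric category carries a weight reflecting its relative importance.
CATEGORY_WEIGHTS = {
    "advocacy": 0.25,
    "critical_evaluation": 0.25,
    "research_method": 0.25,
    "content_knowledge": 0.25,
}

def weighted_score(ratings):
    """Combine per-category level ratings into a single weighted score."""
    return sum(CATEGORY_WEIGHTS[cat] * LEVEL_POINTS[level]
               for cat, level in ratings.items())

# One student's ratings on the hypothetical rubric.
ratings = {
    "advocacy": "target",
    "critical_evaluation": "satisfactory",
    "research_method": "satisfactory",
    "content_knowledge": "target",
}
print(weighted_score(ratings))  # 2.5 on the 1-3 scale
```

If the purpose is instead to examine each category individually, the weights can simply be dropped and the per-category levels reported directly.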

  8. Choosing How and When To Use Rubrics
  When using a rubric as a method of assessing department/program goals, one may want to perform the following actions (a sketch of the sampling and pilot steps follows this list):
  • Require the use of the rubric in all course sections in which the assignment is required
    • Capstone courses can be used in this respect
    • This requires cooperation among all faculty teaching the course
    • A department may instead use selected courses (not all sections), although it is important to know which courses are not included, to anticipate any bias that might result from such an arrangement
  • Choose a random sample
    • Collect samples from all students and store them in a central location
    • Select a random sample from this collection
    • Define the periodicity (every year? every two years? every three?)
    • Select evaluators (faculty or qualified outside evaluators)
  • Pre-test the rubric
    • Test the rubric for inconsistencies by evaluating a couple of assignments first
    • Later, test for inter-rater reliability
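As an illustration of the sampling and pilot-testing steps, here is a small Python sketch; the paper IDs, pool size, sample size, seed, and rater scores are all hypothetical.

```python
import random

# Hypothetical pool of collected student papers, keyed by anonymized ID.
all_papers = [f"paper_{i:03d}" for i in range(1, 121)]

# Draw a reproducible random sample for scoring (seed chosen arbitrarily).
random.seed(42)
sample = random.sample(all_papers, 30)

# Pilot test: two raters score the same few assignments. Simple percent
# agreement is a quick first check before computing a formal inter-rater
# reliability statistic such as Cohen's kappa.
rater_a = ["target", "satisfactory", "satisfactory", "unsatisfactory", "target"]
rater_b = ["target", "satisfactory", "target", "unsatisfactory", "target"]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Sampled {len(sample)} papers; pilot agreement = {agreement:.0%}")  # 80%
```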

  9. Use of Rubric Data
  • Final totals can be used to observe what overall grades students have received
  • Sub-category totals are most useful, because they assess student learning on each objective (especially if each row in the rubric corresponds to a program/student learning objective)

  10. Example
  • In the following example, a random sample of thirty final papers was chosen and graded according to the rubric developed on a prior slide.
  • Totals were entered into an Excel spreadsheet. Percentages were then calculated by dividing the average number of points earned in each category (across the 30 assignments graded) by the maximum points possible for that category. This yields a proportion between 0 and 1, allowing for comparison among the categories (see the sketch below).
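The normalization can be sketched in a few lines of Python; the 1-3 point scale, category names, and randomly generated scores below are assumptions for illustration, not the data from the example.

```python
import random

# Sketch of the normalization described above, for a sample of 30 papers.
# Each category is assumed to be scored 1-3 (unsatisfactory to target), so
# dividing the category average by the 3-point maximum yields a proportion
# between 0 and 1 that can be compared across categories.
random.seed(0)
categories = ["advocacy", "critical_evaluation", "research_method",
              "statistics", "content_knowledge"]
MAX_POINTS = 3
N_PAPERS = 30

# scores[cat] holds the 30 per-paper ratings (randomly generated here).
scores = {cat: [random.randint(1, MAX_POINTS) for _ in range(N_PAPERS)]
          for cat in categories}

for cat in categories:
    proportion = (sum(scores[cat]) / N_PAPERS) / MAX_POINTS
    print(f"{cat:20s} {proportion:.2f}")
```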

  11. Rubric Results
  • A total average is included here for comparison with each sub-category, in case there is interest.
  • Rubric categories are shown on a graph (a table is also fine, of course; see the charting sketch below). Categories can be compared by converting the average for each category to a proportion of its maximum possible score, as described on the previous slide.
  • The differences among the categories can then be discussed by faculty in a faculty meeting. For example, looking at this set of charts, faculty might conclude that students appear to perform well on content knowledge, advocacy, and critical evaluation, but may need to improve with respect to statistical knowledge and the use of research methodologies.
  • Of course, as always, it is also helpful to use additional forms of assessment to “triangulate” this finding.
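A chart like the one described might be produced as follows with matplotlib; the proportions are invented placeholders, not the results from the example.

```python
import matplotlib.pyplot as plt

# Hypothetical per-category proportions (average score / maximum score).
categories = ["advocacy", "critical_evaluation", "research_method",
              "statistics", "content_knowledge"]
proportions = [0.85, 0.82, 0.61, 0.58, 0.88]
overall = sum(proportions) / len(proportions)  # total average for comparison

plt.bar(categories, proportions)
plt.axhline(overall, linestyle="--", label=f"overall average ({overall:.2f})")
plt.ylabel("Average score as proportion of maximum")
plt.ylim(0, 1)
plt.xticks(rotation=30, ha="right")
plt.legend()
plt.tight_layout()
plt.savefig("rubric_results.png")
```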

  12. Demonstrating Value Added
  • It may be a good idea to compare freshman (or entering-student) performance with that of seniors.
  • One can perform statistical tests to see whether there is improvement between students at entry into a program and senior students (a t-test sketch follows below).
  • This assumes that entering and exiting students complete similar assignments, which requires a good deal of coordination.
  • This is an excellent method of demonstrating the impact of a program on student learning.
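One such test is an independent-samples t-test, sketched here with SciPy; the score lists are invented for illustration, and a real analysis would also check the test's assumptions (e.g., roughly equal variances).

```python
from scipy.stats import ttest_ind

# Hypothetical total rubric scores for entering students and seniors.
freshman_totals = [6.0, 5.5, 7.0, 6.5, 5.0, 6.0, 7.5, 5.5, 6.0, 6.5]
senior_totals   = [8.0, 7.5, 9.0, 8.5, 7.0, 8.0, 9.5, 7.5, 8.5, 8.0]

t_stat, p_value = ttest_ind(senior_totals, freshman_totals)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g., < .05) would suggest seniors score higher on
# average, consistent with value added by the program.
```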

  13. Conclusion
  • Rubric-based assessment requires a good deal of coordination among faculty
  • It requires specific identification of student learning goals and of how they are to be measured
  • Despite the high coordination costs, it is a very valuable method of direct assessment
  • Once implemented, it is comparatively easy to repeat in later cycles, assuming consistent leadership and implementation
  • As always, any assessment has little value until its findings are discussed with faculty and incorporated into some outcome, such as the program review process, enhancement of program offerings, or changes in teaching emphases

  14. Contact Information • Dr. Sean McKitrick, Assistant Provost for Curriculum, Instruction, and Assessment • (607) 777-2150 • smckitri@binghamton.edu
