
Using Formative Assessment to Improve Student Achievement

Presentation Transcript


  1. Using Formative Assessment to Improve Student Achievement Dan Hyson Data Management Coordinator Hiawatha Valley Education District (HVED), Winona, MN

  2. Agenda
  • Review agenda
  • What other questions were you hoping I would address?
  • What are the primary functions of assessment?
  • How can teachers use informal instructional techniques to gather formative assessment evidence about student mastery?
  • How can teachers use formal common classroom assessments to gather formative assessment evidence about student mastery?

  3. 3. What are the primary functions of assessment?
  • Summative
    • Did students meet standards? (e.g., MCA proficiency, NWEA MAP percent meeting growth targets, traditional classroom unit tests)
  • Formative – provides evidence of student learning you can use to adjust instruction
    • Benchmark screening
      • Are all students meeting standards or growing at a rate that will make them more likely to meet standards in the future? If not, which students are not meeting standards or not growing at the necessary rate? (e.g., AIMSweb/NWEA MAP benchmark performance, common classroom assessments and instructional techniques)
    • Progress monitoring/Mastery monitoring
      • Are those identified students responding to the additional intervention we’re providing? (e.g., AIMSweb, common classroom assessments and instructional techniques)
    • Diagnostic
      • If students are not responding, what specific areas of weakness are getting in the way? (e.g., MCA and NWEA MAP sub-skill strands, common classroom assessments and instructional techniques)

  4. ACTIVITY: What assessments do you use within your school to address these primary functions?

  5. 4. How can teachers use informal instructional techniques to gather formative assessment evidence about student mastery?
  Typically created and administered by individual teachers within each classroom
  • Letter-card responses/“clicker” student response systems
  • Random student responses
  • Whiteboard responses
  • Quick student self-assessment of understanding (e.g., thumbs up, traffic signal technique)
  • Make student thinking visible (Love, 2013; Popham, 2008)

  6. 5. How can teachers use formal common classroom tests to gather formative assessment evidence about student mastery?
  Typically administered by individual teachers within each classroom, but can be created and evaluated by grade-level or content-area teams
  • Identify “essential learnings”/Power Standards within grade-level and/or content-area standards (“Unwrapping Template…”)
  • Ensure that all students have the opportunity to learn by monitoring the fidelity of implementation of the curriculum and instruction designed to teach those learnings
  • Create common classroom assessments to assess those learnings (“Sample Protocol…” and “Assessment Plan”)
  • Set per-student and per-class/grade-level “triggers”
  • If student performance falls below/above the triggers on an assessment, adjust instruction

  7. Fidelity of implementation of curriculum and instruction
  • Self-assessment AND peer/administrator observations (NON-EVALUATIVE)
  • Questions to address:
    • Is enough time being allocated to deliver the curriculum as intended?
    • What are the non-negotiable components of the curriculum, and are they being implemented as intended?
    • If NO to either question, do teachers need refresher training in delivering the curriculum?

  8. Set per-student and per-class/grade-level “triggers” – if student performance falls below/above them, adjust instruction (see the sketch after these examples)
  • “If at least 90 percent of my students don’t earn scores of 90 percent or better on Thursday’s formative quiz, I’ll add a new review lesson on Friday.”
  • “If at least 95 percent of my students correctly answer at least 8 items on tomorrow’s 10-item quiz on Topic X, I will delete next week’s planned Topic X review lesson.” (Popham, 2008, p. 65)
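How a per-class trigger like the first example above might be applied is sketched below in Python; the quiz scores, the 90/90 thresholds, and the function name check_class_trigger are illustrative assumptions rather than anything from the presentation.

```python
# Minimal sketch of a per-class "trigger" rule (assumed data and names).
# Trigger: if fewer than 90% of students score 90% or better on the quiz,
# plan an extra review lesson.

def check_class_trigger(scores, cut_score=0.90, required_proportion=0.90):
    """Return True if the class met the trigger (no review lesson needed)."""
    proportion_meeting = sum(1 for s in scores if s >= cut_score) / len(scores)
    return proportion_meeting >= required_proportion

# Hypothetical per-student scores (proportion correct) on Thursday's formative quiz
quiz_scores = [0.95, 0.88, 0.92, 1.00, 0.85, 0.97, 0.91, 0.93]

if check_class_trigger(quiz_scores):
    print("Trigger met: keep Friday's plan as scheduled.")
else:
    print("Trigger not met: add a review lesson on Friday.")
```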

  9. Evaluating the quality of a common classroom test by calculating…
  • correlation with a summative outcome
  • item difficulty
    • Proportion of test takers answering the item correctly
    • Example: If 13 out of 20 students get an item correct, item difficulty = 13/20 = .65
  • item discrimination
    • Item discrimination index = difference between the proportion of test takers who earned overall test scores in the top 1/3 of the class and the proportion in the bottom 1/3 who answered the item correctly
    • Example: If 8 out of 8 test takers in the top 1/3 and 2 out of 8 in the bottom 1/3 of a class of 24 students get an item correct, item discrimination index = 8/8 – 2/8 = 1 – .25 = .75
  (Both calculations are sketched in code below.)
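Both calculations above could be carried out with a short Python sketch like the one below; the 0/1 item-response lists, the example data, and the function names are assumptions for illustration only.

```python
# Sketch of item difficulty and the item discrimination index (assumed data/names).

def item_difficulty(item_responses):
    """Proportion of test takers answering the item correctly (1 = correct, 0 = incorrect)."""
    return sum(item_responses) / len(item_responses)

def discrimination_index(item_responses, total_scores):
    """Proportion correct in the top 1/3 of total scores minus proportion correct in the bottom 1/3."""
    n = len(total_scores)
    third = n // 3
    # Rank students from highest to lowest total test score
    ranked = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    top, bottom = ranked[:third], ranked[-third:]
    p_top = sum(item_responses[i] for i in top) / third
    p_bottom = sum(item_responses[i] for i in bottom) / third
    return p_top - p_bottom

# Slide example: 13 of 20 students answer the item correctly -> difficulty = .65
print(item_difficulty([1] * 13 + [0] * 7))         # 0.65

# Slide example: class of 24; 8 of 8 in the top 1/3 and 2 of 8 in the bottom 1/3 correct -> .75
totals = list(range(24, 0, -1))                    # students already ordered high to low
responses = [1] * 8 + [1, 1, 0, 0, 1, 1, 0, 0] + [1, 1, 0, 0, 0, 0, 0, 0]
print(discrimination_index(responses, totals))     # 0.75
```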

  10. Evaluating the quality of a common classroom test by calculating…
  • item discrimination (continued)
    • Point-biserial correlation = correlation between performance on one item and performance on the entire test
    • rpbis = [(Y1 – Y) / Sy] × √[Px / (1 – Px)]
      • Y1 = mean of total test scores of those answering the item correctly
      • Y = mean of total test scores of all test takers
      • Px = item difficulty (proportion of test takers getting the item correct)
      • Sy = standard deviation of total test scores of all test takers
  (A code sketch of this formula follows below.)
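The same formula written out as a short Python sketch; the six students' item responses and total scores are hypothetical, and the function name point_biserial is an assumption.

```python
import math

# Sketch of the point-biserial correlation between one item and the total test score,
# following rpbis = [(Y1 - Y) / Sy] * sqrt(Px / (1 - Px)). Data and names are assumed.

def point_biserial(item_responses, total_scores):
    n = len(total_scores)
    y_bar = sum(total_scores) / n                                       # Y: mean of all total scores
    s_y = math.sqrt(sum((y - y_bar) ** 2 for y in total_scores) / n)    # Sy: SD of all total scores
    correct = [y for r, y in zip(item_responses, total_scores) if r == 1]
    y1_bar = sum(correct) / len(correct)                                # Y1: mean for those correct
    p_x = len(correct) / n                                              # Px: item difficulty
    return ((y1_bar - y_bar) / s_y) * math.sqrt(p_x / (1 - p_x))

# Hypothetical 0/1 item responses and total test scores for six students
responses = [1, 1, 0, 1, 0, 1]
totals = [18, 16, 9, 14, 11, 17]
print(round(point_biserial(responses, totals), 2))
```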

  11. Designing new common classroom tests with items…
  • of optimal difficulty
    • Chance performance level = proportion of success on an item expected by chance
      • For a multiple-choice item, 1.0 divided by the number of response options (e.g., if 4 response choices, 1.0/4 = .25 chance performance level)
    • Optimal difficulty level = [(1.0 – chance performance level) / 2] + chance performance level
      • Example: If chance performance level = .25, optimal difficulty level = [(1.0 – .25) / 2] + .25 = .625
    • BUT not all items on a test need to be of optimal difficulty. The optimal range is .30 to .70, though it is OK to have a few items of .90 to 1.0, especially at the beginning of a test to build confidence. (See the sketch below.)
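The chance-level and optimal-difficulty arithmetic above, plus the .30–.70 screening range, could be computed with a sketch like the one below; the function names and the example item difficulties are illustrative assumptions.

```python
# Sketch of the optimal-difficulty calculations from this slide (names and data assumed).

def chance_performance_level(num_options):
    """Proportion correct expected by guessing on a multiple-choice item."""
    return 1.0 / num_options

def optimal_difficulty(num_options):
    """Midpoint between the chance performance level and a perfect score."""
    chance = chance_performance_level(num_options)
    return (1.0 - chance) / 2 + chance

print(optimal_difficulty(4))   # 0.625 for a 4-option item, matching the slide example

# Flag items outside the suggested .30-.70 working range
item_difficulties = {"item_1": 0.95, "item_2": 0.55, "item_3": 0.22}
for item, p in item_difficulties.items():
    status = "in range" if 0.30 <= p <= 0.70 else "outside range"
    print(f"{item}: difficulty {p:.2f} ({status})")
```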

  12. Designing new common classroom tests with items…
  • to test varied levels of understanding
    • Using Bloom’s taxonomy as a guide, design items that call on the test taker to demonstrate
      • Knowledge
      • Comprehension
      • Application
      • Analysis
      • Synthesis
      • Evaluation

  13. References/Resources
  Bailey, K., & Jakicic, C. (2012). Common formative assessment: A toolkit for professional learning communities at work. Bloomington, IN: Solution Tree Press.
  Love, N. (2013). Data literacy for teachers. Port Chester, NY: National Professional Resources, Inc.
  Popham, W. J. (2008). Transformative assessment. Alexandria, VA: Association for Supervision and Curriculum Development.
  Hyson webinar recording and handouts on “Creating or Improving the Effectiveness of Data Teams”: http://www.hved.org/index.php/programs-services/data-management/52-trainings-webinar-recordings

  14. Questions?

  15. Contact information Dan Hyson Hiawatha Valley Education District 1410 Bundy Boulevard Winona, MN 55987 507-452-1200, ext. 119 OR 507-474-7196 (direct line) dhyson@hved.org
