
Making MAP More Meaningful



Presentation Transcript


  1. Making MAP More Meaningful David Dreher, Project Coordinator Dr. Kathryn Sprigg, Assistant Director Office of Accountability, Highline Public Schools Dr. Sandra L. Hunt, Literacy Coach Beverly Park Elementary, Highline Public Schools

  2. Overview • The needs of the data users • The objectives of the data producers • The products • The process • The implementation • The results • The future

  3. What is MAP? • Measures of Academic Progress • Developed by the Northwest Evaluation Association • Norm-referenced assessment • Computerized and adaptive • Performance is reported as a RIT score • The RIT Scale • Uses individual item difficulty values to estimate student achievement • A RIT score has the same meaning regardless of grade level • Equal interval scale • Highline Public Schools • Three testing windows per year (Fall, Winter, Spring) • Test students in the areas of math and reading • Test students in grades 3-10

  4. The Needs of the Data User • Building staff were saying things like . . . • “How can we use MAP data to help us make decisions?” • “How do MAP and WASL performance compare?” • “I want to know what a student’s history is with MAP.” • “What is a RIT score?” • “Giving me a RIT score is like telling me the temperature in Celsius!”

  5. The Objectives for Us • Include more historical data in reports. • Make the data more accessible. • Put MAP scores in context with WASL scores. • Provide an indication of a student’s likelihood of meeting standard.

  6. Some General Challenges • Fear of Numbers • The products generated had to be fairly simple to explain and understand. • Availability of Time • Because it had to be there yesterday, it also had to be fairly simple for us to produce.

  7. The Products • Fall Predictions • Our “best guess” about each student’s performance on the upcoming WASL. • Used for • Identifying level of risk for not meeting standard • School- and District-level WASL forecasts • Benchmark, Strategic, Intensive (BSI) Updates • “Status update” produced after each testing window. • “Coarse filter” based only on MAP. • Cut Score Document • A “quick reference table” that could be used to help put a MAP score in context.

  8. Making The Predictions • Snooped and found the best indicators of WASL success • Applied linear regression models to generate predicted WASL scores for each student • Examined the predicted WASL scores
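
A minimal Python sketch of these two steps: check how well each candidate predictor correlates with WASL, then fit the two-factor regression. All column names and the file name here are assumptions for illustration; the district's actual data layout is not shown in the presentation.

```python
import numpy as np
import pandas as pd

students = pd.read_csv("grade7_reading.csv")  # hypothetical extract

# "Snooping": how well does each candidate predictor track WASL 2007?
for predictor in ["fall_map", "winter_map", "spring_map", "highest_map"]:
    r = students[predictor].corr(students["wasl_2007"])
    print(f"{predictor}: r = {r:.3f}")

# Two-factor model: WASL 2007 = b0 + b1*Highest MAP + b2*WASL 2006
both = students.dropna(subset=["highest_map", "wasl_2006", "wasl_2007"])
X = np.column_stack([np.ones(len(both)), both["highest_map"], both["wasl_2006"]])
b0, b1, b2 = np.linalg.lstsq(X, both["wasl_2007"].to_numpy(), rcond=None)[0]
print(f"b0={b0:.2f}, b1={b1:.2f}, b2={b2:.2f}")
```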

  9. Snooping (Reading): R-Values

  10. Snooping (Math): R-Values

  11. What we learned by snooping . . . • Correlations were generally good. • Reading R-value range: 0.711 - 0.835 • Math R-value range: 0.603 - 0.921 • Correlations in math were stronger than in reading. • “Highest MAP” consistently correlated better than any single MAP score. • Correlations were generally strongest when Highest MAP and WASL 2006 factors were combined.

  12. Regression Models
  For students with both MAP and 2006 WASL scores (~95%):
  WASL 2007 = b0 + b1*Highest MAP + b2*WASL 2006
  For students with only MAP score(s) (~3%):
  WASL 2007 = b0 + b1*Highest MAP
  For students with only a 2006 WASL score (~2%):
  WASL 2007 = b0 + b1*WASL 2006
  Where:
  Highest MAP = the student’s highest MAP score from the Fall 2006, Winter 2007, or Spring 2007 windows (typically Spring 2007)
  WASL 2006 = the student’s raw score from Spring 2006 WASL testing

  13. Prediction Models
  For students with both MAP and 2007 WASL scores:
  WASL 2008 = b0 + b1*Projected MAP + b2*WASL 2007
  For students with only MAP score(s):
  WASL 2008 = b0 + b1*Projected MAP
  For students with only a 2007 WASL score:
  WASL 2008 = b0 + b1*WASL 2007
  Where:
  Projected MAP = the student’s projected Spring 2008 MAP score, based on their highest MAP score from the Winter 2007, Spring 2007, or Fall 2007 windows
  WASL 2007 = the student’s raw score from Spring 2007 WASL testing
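
The three cases on slides 12 and 13 amount to picking whichever fitted model matches the scores a student actually has. A small sketch of that selection logic; the coefficient values below are placeholders, not the district's fitted values.

```python
# Placeholder coefficients standing in for the fitted b0/b1/b2.
COEFS = {
    "both":      (35.0, 1.8, 0.3),   # b0, b1 (Projected MAP), b2 (WASL 2007)
    "map_only":  (20.0, 1.9),        # b0, b1 (Projected MAP)
    "wasl_only": (90.0, 0.8),        # b0, b1 (WASL 2007)
}

def predict_wasl_2008(projected_map, wasl_2007):
    """Apply whichever model matches the scores this student has."""
    if projected_map is not None and wasl_2007 is not None:
        b0, b1, b2 = COEFS["both"]
        return b0 + b1 * projected_map + b2 * wasl_2007
    if projected_map is not None:
        b0, b1 = COEFS["map_only"]
        return b0 + b1 * projected_map
    if wasl_2007 is not None:
        b0, b1 = COEFS["wasl_only"]
        return b0 + b1 * wasl_2007
    return None  # no usable scores for this student
```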

  14. Projecting MAP to Spring • For the models with “Projected MAP” as a factor, each student’s Spring 2008 MAP performance was projected. • The amount of expected growth added to a student’s Highest MAP score came from NWEA’s Growth Study.
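
The projection step reduces to adding a norm-based growth increment to the student's highest RIT. The increments below are invented for illustration; the real values come from NWEA's Growth Study tables and vary by grade and starting RIT.

```python
# Illustrative expected-growth increments (RIT points) from each
# testing window to the following spring; NOT NWEA's actual norms.
EXPECTED_GROWTH = {"Winter 2007": 6.0, "Spring 2007": 4.0, "Fall 2007": 3.0}

def project_map_to_spring(highest_map, window):
    """Project the highest RIT forward to Spring 2008."""
    return highest_map + EXPECTED_GROWTH[window]

# e.g. a highest RIT of 212 earned in Fall 2007 projects to 215 by spring
print(project_map_to_spring(212, "Fall 2007"))
```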

  15. Example of Projection and Prediction7th Grade Student in Reading

  16. WASL Prediction Range • Constructed using the SEM values reported in the 2001 WASL Technical Reports. • Predicted Range = Predicted WASL Score +/- SEM
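
The range itself is just the point prediction plus or minus one SEM; a one-line sketch with illustrative numbers:

```python
def prediction_range(predicted_wasl, sem):
    """Slide 16: Predicted Range = Predicted WASL Score +/- SEM."""
    return (predicted_wasl - sem, predicted_wasl + sem)

# With a predicted score of 402 and an SEM of 8 (illustrative values),
# the resulting range (394, 410) straddles the 400 cut score.
print(prediction_range(402, 8))
```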

  17. Examining the Predictions • What are the predictions saying about how we might do in 2008? • “Forward look” • How would we have done if we had predicted 2007 WASL scores in the fall of 2006? • “Backward look”

  18.–23. What are the predictions saying about how we might do in 2008? [Six chart slides; the forecast graphics are not included in the transcript.]

  24. Looking Backwards: How would we have done predicting the 2007 WASL? • Successful prediction • Accurately predicting whether a student would or would not meet standard on the WASL • Unsuccessful prediction • Predicted to meet standard but did not • “false positive” (the kind we don’t want) • Predicted not to meet standard but did • “false negative” (the kind we are okay with)
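
A sketch of how the backward-look tallies might be computed. Representing each student as a (predicted_met, actually_met) pair is our assumption, not the district's actual data structure.

```python
def score_predictions(records):
    """Tally the slide-24 categories over (predicted_met, actually_met) pairs."""
    tallies = {"correct": 0, "false_positive": 0, "false_negative": 0}
    for predicted_met, actually_met in records:
        if predicted_met == actually_met:
            tallies["correct"] += 1          # successful prediction
        elif predicted_met:
            tallies["false_positive"] += 1   # predicted to meet, did not
        else:
            tallies["false_negative"] += 1   # predicted to miss, but met
    return tallies

print(score_predictions([(True, True), (True, False), (False, True)]))
```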

  25.–26. Looking Backwards - Math [Two chart slides; graphics not included in the transcript.]

  27.–28. Looking Backwards - Reading [Two chart slides; graphics not included in the transcript.]

  29. The Implementation • Fall Predictions • Rolled out at Fall Math Summit • Cut Scores • Released in November 2007 • BSI Status Updates • Delivered in February 2008 • Use of the information was determined within each building by principals, coaches, and teachers.

  30. The Results • How good were the predictions? • We won’t know how good they are until after we get our WASL results. • Come see our Fall WERA presentation! • Did the products “work” for the end user? • Feedback has been fairly limited. • Most feedback has been positive. • Some feedback says more work is still needed.

  31. The Future • Check the predictions after WASL results are released • Continue to refine the products to make them “work” for the end user • “job security”

  32. Work in Progress: Cut Scores Document • Predictions “Roll Out” • Cut Score Table • Augmented with BSI graph • NWEA’s recently released Cut Scores document

  33. Email to Principals • If the prediction range is: • Entirely below 400 (ex.: 380-396): the student has less than a 20% chance of meeting standard on the WASL this spring unless we accelerate their learning. • Straddles 400 (ex.: 396-410): the student has basically a coin-flip chance of meeting standard, even if their prediction is above 400. • Entirely above 400 (ex.: 408-424): the student has more than an 80% chance of meeting standard in the spring, IF they continue to progress.
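
The three cases in the email reduce to comparing the prediction range against the 400 cut score. A small sketch (the function name is ours; the cut score and band descriptions come from the slide):

```python
def risk_band(low, high, cut=400):
    """Classify a prediction range against the WASL cut score."""
    if high < cut:
        return "entirely below"   # < ~20% chance without accelerated learning
    if low > cut:
        return "entirely above"   # > ~80% chance if progress continues
    return "straddles"            # roughly a coin flip either way

print(risk_band(380, 396))  # entirely below
print(risk_band(396, 410))  # straddles
print(risk_band(408, 424))  # entirely above
```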

  34. Contact Information David Dreher, Project Coordinator Dr. Kathryn Sprigg, Assistant Director Office of Accountability, Highline Public Schools www.hsd401.org 206-433-2334
