
NCLB and the effects of high-stakes accountability systems (in NYC and elsewhere)


Presentation Transcript


  1. NCLB and the effects of high-stakes accountability systems (in NYC and elsewhere) NY Law School presentation Leonie Haimson, Class Size Matters September 15, 2010

  2. What is No Child Left Behind (NCLB)? • 2001 reauthorization of the federal Elementary and Secondary Education Act (ESEA) • Requires annual state testing in grades 3-8, and once in grades 10-12, in math and reading • Districts and schools must make “adequate yearly progress” (AYP), so that by 2014 all students and all subgroups (racial/ethnic/ELL/poor/special ed) are “proficient” • Schools that fail to meet those targets are put on various “failing” lists (Schools in Need of Improvement, etc.)

  3. Sanctions only for schools with large numbers of poor students • Title I schools that fail to make AYP must make tutoring available from outside vendors • They must notify students that they can transfer to another school not “in need of improvement” • Districts are required to spend up to 20% of Title I funds on tutoring programs • If, after four years, a school still fails to make AYP, it must undergo “restructuring”

  4. “Restructuring” includes: • Convert to a charter school • Contract w/ private management company to run school • Replace all or most of school staff • State takeover • Any other major or fundamental reform

  5. Results • Within two years, about 25% of all US public schools failed to make AYP; by the 2007-8 school year, 36% of schools were labeled “in need of improvement” (SINI); • The system unfairly targeted high-poverty schools and schools with more diversity; • It created incentives for states to lower standards and their definitions of “proficiency,” and many did; • The sanctions had little or no basis in research; • Fewer than 20% of parents utilized the tutoring programs, and many programs were ineffective, with no quality control; • Districts invested millions in test-prep materials rather than real learning; • 45% of districts reported a significant loss of classroom time devoted to science, history and the arts, and a “narrowing of curriculum”; • Fewer than 5% of parents in “SINI” schools transferred their children, because most wanted to remain in their neighborhood schools and most higher-performing schools elsewhere were already overcrowded.

  6. Also… • The emphasis on high-stakes tests led to more cheating scandals erupting throughout the nation, with little incentive for districts to provide strict oversight; • The narrowing of curriculum threatened creative and innovative thinking; • Many schools ignored their highest-need students to focus instead on “bubble kids,” i.e. those closest to proficiency; • Gains in math achievement on the NAEP (national exams) were minimal, and gains in reading nonexistent.

  7. What about in NYC? • The high-stakes accountability system of NCLB was made even worse by the even higher stakes the DOE put on schools and teachers. • Starting in 2007, all schools were given grades from A to F based largely on test scores; • Principals and teachers were given large cash bonuses if their schools made improvements in test scores and/or graduation rates; • Schools were threatened with closure, and teachers with loss of their jobs, if they did not.

  8. NYC school “progress” reports or grades • Depend 85% on state test scores: • 60% of the grade is based on “progress” or “value-added” (the change in student test scores from the previous year) • 25% on the level of the current year’s scores • The remaining 15% is based on the results of surveys and attendance • Each school’s measure in the above categories is compared to a group of “peer schools”
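
A rough illustration of the weighting above, as a minimal Python sketch of a weighted composite grade: the function names, the 0-100 category scales, and the A-F cutoffs are assumptions for illustration only; the DOE's actual formula compared each measure to peer schools and citywide ranges rather than fixed cutoffs.

```python
# Illustrative only: a weighted composite in the spirit of the NYC progress
# report described above. Category scales and letter-grade cutoffs are assumed.

def composite_score(progress, performance, environment):
    """Combine category scores (each assumed to be on a 0-100 scale) using the
    weights cited on this slide: 60% progress, 25% current performance,
    15% surveys/attendance."""
    return 0.60 * progress + 0.25 * performance + 0.15 * environment

def letter_grade(score):
    """Hypothetical cutoffs; the real reports graded schools relative to
    peer-group and citywide distributions, not fixed thresholds."""
    for cutoff, grade in [(85, "A"), (70, "B"), (55, "C"), (40, "D")]:
        if score >= cutoff:
            return grade
    return "F"

# A school strong on current performance but with a weak one-year "progress"
# measure still lands near the bottom, because progress dominates the weights.
score = composite_score(progress=30, performance=90, environment=80)
print(score, letter_grade(score))   # 52.5 D
```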

  9. High school grades • Depend primarily on the change and level of credit accumulation of students (course passing rates); • Student Regents exam scores and passing rates; • Graduation rates; • Again, compared to “peer groups”.

  10. So what’s wrong? • Research shows that 34-80% of the annual fluctuations in a school's scores are random, or due to one-time factors alone, leading to a huge amount of volatility. (Source: Thomas J. Kane, Douglas O. Staiger, “The Promise and Pitfalls of Using Imprecise School Accountability Measures,” The Journal of Economic Perspectives, Vol. 16, No. 4.) • The fact that 60% of the NYC grade is based on “progress” (one year’s change in student test scores) and 25% on current test scores makes it inherently unreliable; • Results: in 2007, many high-achieving schools unfairly got failing grades, including some recognized by the federal government for their exemplary work with high-needs students. • The extreme volatility showed in 2008, when 77% of schools that had received an F in 2007 got an A or a B, with little or no change in teachers or overall program. • There was NO relationship between the progress scores that schools received in 2007 and 2008. • Leading to the question: “Could a Monkey Do a Better Job of Predicting Which Schools Show Student Progress in English Skills than NYC DOE?” (Aaron Pallas, Columbia University, on the Eduwonkette blog) • Read more: http://www.nydailynews.com/opinions/2007/11/07/2007-11-07_why_parents__teachers_should_reject_new_.html#ixzz0zRZ7a5Aq
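
The volatility point can be seen in a small simulation (synthetic numbers, not NYC data): if each year's school-wide score is a stable underlying quality plus one-time noise, a one-year "progress" score says essentially nothing about the school, and apparent gains tend to reverse the following year.

```python
# Illustrative simulation with assumed parameters, not NYC data.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 1000
noise_sd = 12               # one-time factors: cohort luck, scoring quirks, etc.

# Each school's underlying quality is held completely fixed across years.
true_quality = rng.normal(650, 10, n_schools)

def observed_scores():
    """One year of observed mean scale scores: fixed quality plus noise."""
    return true_quality + rng.normal(0, noise_sd, n_schools)

y2006, y2007, y2008 = observed_scores(), observed_scores(), observed_scores()

progress_2007 = y2007 - y2006          # one-year "progress" measure
progress_2008 = y2008 - y2007

print(np.corrcoef(true_quality, progress_2007)[0, 1])   # roughly 0
print(np.corrcoef(progress_2007, progress_2008)[0, 1])  # roughly -0.5
# When noise dominates, "progress" is unrelated to school quality, and schools
# that appear to gain one year tend to appear to lose the next.
```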

  11. Conclusion: • “The progress measure… is a fruitless exercise in measuring error rather than the value that schools themselves add to students.” (Aaron Pallas, Columbia Univ. and Jennifer Jennings of NYU)

  12. (Slide thanks to Jennifer Jennings.)

  13. Many problems with “value-added” evaluations of teachers and/or schools • They use complex statistical models to estimate “value-added,” based on before-and-after student test scores; • Students are not randomly assigned to teachers or schools; • Each model varies depending on how it is designed and which demographic factors and classroom conditions it attempts to control for; • Many factors are outside a teacher’s control, including class size and peer effects; • One example: the DOE’s “teacher data reports,” produced by Battelle, found a large statistical effect of smaller classes and attempted to control for it; the school-based progress reports do not.
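
For readers unfamiliar with the method, here is a minimal sketch of a generic value-added regression on synthetic data: current scores are regressed on prior scores, one demographic control, and teacher indicators, and the teacher coefficients are read as "value-added." The variable names, controls, and model form are assumptions for illustration; the DOE/Battelle teacher data reports use a far more elaborate specification.

```python
# Generic value-added sketch on synthetic data (assumptions for illustration;
# not the DOE/Battelle model).
import numpy as np

rng = np.random.default_rng(1)
n_students, n_teachers = 2000, 50

teacher = rng.integers(0, n_teachers, n_students)   # not randomly assigned in reality
prior_score = rng.normal(650, 30, n_students)
poverty = rng.binomial(1, 0.6, n_students)          # one demographic control
true_effect = rng.normal(0, 3, n_teachers)          # "true" teacher effects

current_score = (0.8 * prior_score + 120 - 5 * poverty
                 + true_effect[teacher]
                 + rng.normal(0, 25, n_students))   # large student-level noise

# Design matrix: intercept, prior score, poverty flag, teacher dummies
# (teacher 0 is the baseline category).
X = np.column_stack([
    np.ones(n_students),
    prior_score,
    poverty,
    (teacher[:, None] == np.arange(1, n_teachers)).astype(float),
])
beta, *_ = np.linalg.lstsq(X, current_score, rcond=None)
value_added = np.concatenate([[0.0], beta[3:]])     # estimates relative to teacher 0

# With only ~40 students per teacher and noisy scores, the estimates are
# imprecise; the bullets above note that changing which controls go into X
# (class size, peer effects, etc.) also changes the resulting rankings.
print(np.round(value_added[:5], 1))
print(np.round((true_effect - true_effect[0])[:5], 1))
```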

  14. Leading to huge uncertainties…. • According to a new Annenberg report on value-added by Sean Corcoran of NYU, many teachers who score in the top category on one type of exam will rank in the lowest category on another; • His analysis of the NYC “teacher data reports” finds an uncertainty range of 34 to 61 points out of 100 in the ranking of teachers. • A recent study by Mathematica for the US DOE found a 25-35% error rate in using value-added methods to identify the best or worst teachers. • “If the ‘best available’ automobile burst into flames on every fifth start, I think I’d walk or stay home instead.” -- Prof. Bruce Baker of Rutgers Univ. • Wouldn’t you?

  15. Campbell’s law • Coined by social scientist Donald Campbell in 1975: “…the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and…to distort and corrupt the social processes it is intended to monitor.” • This effect has been widely observed in medicine, industry, and education.

  16. What does Campbell’s law mean for high-stakes testing? • High-stakes testing leads to excessive test prep and cheating, with little or no oversight; • Most NYC cheating scandals that erupted in the press were not followed up by the DOE or anyone else; quite often the accusers ended up in the “rubber room.” • Since 2002, questions on the NY state exams have gotten much easier and narrower in focus; • Cut scores for “proficiency” were lowered each year; • In some grades, a student could pass (or reach Level 2) by randomly answering the multiple-choice questions; • As a result, the city saw big jumps in test scores, which Bloomberg used in his campaign to renew mayoral control and to be re-elected.

  17. This “test score inflation” was reflected in 2009 school grades • 84% of NYC elementary and middle schools received a letter grade of A, and an additional 13% of schools received a B. • Only two schools out of 1,058 received an F, and just five were awarded a D.

  18. What happened this July? The test score bubble burst!

  19. In response to ongoing and vehement criticism, the State re-calibrated the cut scores • Only 42% of NYC students scored at or above proficiency in English Language Arts (ELA) this year, compared to 69% last year; • Only 54% of students scored at or above proficiency in math, compared to 82% last year; • The number of students who tested below basic (Level 1 on the ELA exam) increased from 12,000 to 63,000 citywide.

  20. It is now evident that there are a large number of schools with a high proportion of low-performing students • There are 369 K-8 NYC schools where at least two-thirds of students are not meeting standards in ELA. • In these schools, at least 20% of students are below basic. • This represents 36% of the elementary and middle schools in NYC.

  21. Myth and reality Claim: “In recent weeks, there has been some controversy and confusion stemming from the state's decision to raise the standards for proficiency on its math and English tests.” (Joel Klein, NY Post op-ed, 8/20/10) Reality: Actually, the state just attempted to reverse the lowering of standards that had occurred since at least 2002.

  22. The state exams are completely unreliable, so the DOE’s charts showing increased scale scores are meaningless • The exams have become easier over time and narrower in focus; • The only semi-reliable source of information on achievement in NYC is the results on the national exams known as the NAEP; • One must compare NYC to other large cities on the NAEP to see how we are doing since Klein’s policies were imposed in 2003.
