
A Guide to Education Research in the Era of NCLB



Presentation Transcript


  1. A Guide to Education Research in the Era of NCLB Brian Jacob University of Michigan December 5, 2007

  2. How has the environment for education research changed?
  • NCLB: evidence-based programs
  • Accountability
  • Tight state and local budgets
  • Heightened oversight by foundations

  3. Outline
  • What are the different types of education research, and what are the goals of each?
  • What distinguishes good evaluation research from bad research?
  • What are some of the common challenges to collaboration between education researchers and practitioners?

  4. Types of Education Research
  • Basic/Descriptive
    • What teacher characteristics do principals value most highly?
    • How does the gender gap in student achievement change over the course of schooling?
  • Program Evaluation
    • Formative (Process Analysis)
      • Which aspects of a curricular intervention work well and which aspects need to be refined?
    • Summative (Impact Analysis)
      • Does a new curriculum have a positive impact on student achievement?

  5. How to Tell a Good Impact Analysis when You See It
  • Good intervention
    • Well-defined question
    • Coherent and explicit theory of action
  • Good research design
    • Causal inference
  • Good implementation
    • Attrition, contamination
  • Good analysis/interpretation
    • Magnitude
    • Generalizability

  6. The Problem of Causal Inference
  • What is the causal impact of a particular program or policy on the outcome of interest?
    • Do teacher induction/mentoring programs reduce teacher attrition?
    • Does computer-aided curriculum improve student achievement?
  • We need to know what would have happened in the absence of the program (i.e., the “counterfactual”)
  • We often start with a correlation
    • Students who attend magnet schools score higher than students in traditional schools.
  • But this correlation may not reflect a “causal” impact
    • Many potential “confounding” factors
    • Students who attend magnet schools are more motivated and have more supportive families than those who attend traditional schools.
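  The counterfactual logic can be written compactly in the potential-outcomes notation standard in program evaluation (the notation is an addition here, not part of the slides). The naive participant/non-participant comparison decomposes as

  \[
  \underbrace{E[Y_1 \mid D=1] - E[Y_0 \mid D=0]}_{\text{observed gap}}
  = \underbrace{E[Y_1 - Y_0 \mid D=1]}_{\text{causal effect on participants}}
  + \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{selection bias}}
  \]

  where \(Y_1\) and \(Y_0\) are a student's outcomes with and without the program and \(D = 1\) indicates participation. In the magnet-school example, motivated students would score higher even in traditional schools, so the selection term is positive and the raw gap overstates the causal effect.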

  7. Threats to Causal Inference
  • Selection
    • Students/teachers/schools who participate in a program are systematically different from those who do not participate.
    • Example: families that choose to send their children to charter schools; teachers who engage in professional development.
  • Concurrent events
    • Concurrent events may have been responsible for the effects that are attributed to the program.
    • Example: High school reforms in Chicago implemented at the same time as accountability provisions.
  • Maturation
    • Naturally occurring changes over time may be confused with a treatment effect.
    • Example: Schools that were improving because of new professional development adopt a tutoring program.

  8. Common Research Designs
  • Example: Success for All (SFA) is implemented in 12 out of 32 elementary schools in a district in 2001-02.
  • Matched Comparison Group
    • Compare test scores in SFA schools with “similar” non-SFA schools in years after 2001-02.
  • Pre/Post Design
    • Compare test scores in SFA schools after 2002 with test scores in the same schools before 2002.
  • Pre/Post with Matched Comparison Group
    • Compare test score changes from, say, 1996 to 2005 in SFA schools with changes over the same time period in the “similar” non-SFA schools.
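  A minimal sketch of how the three designs differ computationally, using Python/pandas on hypothetical school-by-year average scores (the file name and columns are illustrative assumptions, not from the talk):

  import pandas as pd

  # Hypothetical data, one row per school per year:
  #   school, year, sfa (1 if the school adopted SFA in 2001-02), score.
  # Assume the non-SFA rows are the "similar" matched comparison schools.
  df = pd.read_csv("school_scores.csv")

  pre  = df[df.year <  2002]
  post = df[df.year >= 2002]

  # 1. Matched comparison group: post-period score levels, SFA vs. non-SFA.
  post_means = post.groupby("sfa").score.mean()
  effect_matched = post_means.loc[1] - post_means.loc[0]

  # 2. Pre/post: SFA schools only, after 2002 minus before 2002.
  sfa = df[df.sfa == 1]
  effect_prepost = (sfa[sfa.year >= 2002].score.mean()
                    - sfa[sfa.year < 2002].score.mean())

  # 3. Pre/post with matched comparison (difference-in-differences):
  #    the SFA schools' change minus the comparison schools' change.
  change = post.groupby("sfa").score.mean() - pre.groupby("sfa").score.mean()
  effect_did = change.loc[1] - change.loc[0]

  Only the third design nets out both fixed differences between SFA and comparison schools and district-wide trends over the period; this difference-in-differences logic is what the slide's third bullet describes.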

  9. Randomized Control Trials (RCTs): The Gold Standard
  • Randomly assign some students/teachers/schools to receive the treatment, and others not to.
  • Randomization assures that the treatment and control groups are equivalent in all ways - even in ways that one cannot observe, such as “motivation,” life circumstance, etc.
  • Avoids concurrent event/maturation concerns, since both the treatment and control groups should experience these effects.
  • The decision about the level of random assignment depends on the nature of the treatment and the possibility of “contamination” of the control group.
    • One-on-one tutoring: student-level random assignment
    • Curriculum or pedagogy: classroom- or school-level random assignment
  • Some policies/programs cannot be evaluated via RCTs.
    • Example: the competition effects of school choice
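  For the school-level case, the assignment itself is mechanically simple. A sketch under assumed names (32 schools, half treated; the seed and naming are arbitrary):

  import random

  # Illustrative: 32 elementary schools, half assigned to the new curriculum.
  # Randomizing whole schools limits "contamination," since a curriculum
  # easily spreads between classrooms within a school.
  schools = [f"school_{i:02d}" for i in range(1, 33)]

  rng = random.Random(20071205)   # fixed seed keeps the assignment auditable
  rng.shuffle(schools)
  treatment, control = schools[:16], schools[16:]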

  10. Some Concerns with RCTs
  • Ethical Concerns: Should we deny treatment to some students/schools?
    • Assumes that we know that the program is effective.
    • If there are limited resources, then randomization is arguably the “fairest” method for allocating the treatment.
    • Many research designs can ensure equitable distribution of the program while, at the same time, maintaining random assignment:
      • Group 1: 3rd grade classes get the curriculum in year 1, and then 5th grade classes get the curriculum in year 3.
      • Group 2: 5th grade classes get the curriculum in year 1, and then 3rd grade classes get the curriculum in year 3.
  • Logistical Concerns: How disruptive would it be for schools/districts to conduct random assignment?
    • Depends on the context, but often not very disruptive.
      • Example: a professional development program evaluated with existing student tests.
    • Requires evaluations to be planned at the same time that the program is implemented, and not merely attempted after the fact.
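  Laid out as a table, the rotation in the ethical-concerns example works like this (layout added here for clarity):

             Year 1             Year 3
  Group 1    grade 3 treated    grade 5 treated
  Group 2    grade 5 treated    grade 3 treated

  In year 1, Group 2's 3rd graders serve as the randomized control group for Group 1's 3rd graders (and vice versa for 5th grade), yet every class eventually receives the curriculum.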

  11. Good Research Worries about the Problem of Attrition
  • Attrition occurs when participants (e.g., students, teachers) leave the school/district prior to the conclusion of the study.
    • It is difficult to collect outcome data for these individuals.
  • Differential attrition occurs when members of the treatment group leave the study at a different rate than members of the control group.
  • If those who “attrit” differ in important ways, this can bias the results of the study - even in RCTs.
  • Example: “lottery” studies of magnet/charter schools
    • Many students who lose the lottery leave the public school system. If the most “motivated” lottery losers leave, one would overstate the benefits of attending a magnet/charter school.
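  A minimal differential-attrition check, again with hypothetical file and column names:

  import pandas as pd

  # Hypothetical student-level file: treatment (1/0) and followed_up
  # (1 if outcome data were collected at the end of the study).
  students = pd.read_csv("students.csv")

  attrition = 1 - students.groupby("treatment").followed_up.mean()
  print(attrition)                            # attrition rate in each arm
  print(attrition.loc[1] - attrition.loc[0])  # differential attrition

  A large gap between arms is a warning sign; comparing the baseline characteristics of leavers and stayers in each arm helps gauge the likely direction of the bias.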

  12. Good research also …
  • Pays attention to the magnitude of the effects, and not just the statistical significance.
  • Looks at how effects change over time.
  • Addresses the generalizability of the results (external validity).
  • Explores variation across student, teacher, and/or school subgroups.
  • Discusses the limitations of the study.
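  On magnitude: education studies conventionally report impacts in standard deviation units, an "effect size" (the convention is added here for illustration, not from the slides):

  \[
  \text{effect size} = \frac{\bar{Y}_{\text{treatment}} - \bar{Y}_{\text{control}}}{SD_{\text{control}}}
  \]

  so a 5-point gain on a test whose control-group standard deviation is 25 points is a 0.2 SD effect, which can be compared across interventions and tests with different scales.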

  13. Common Challenges for any Research Collaboration
  • Planning for evaluation in advance rather than conducting it after the fact
  • Concern about denying treatment
  • Obtaining consent from principals, teachers and parents
    • Financial incentives; give information back to schools
  • Respecting the organization of schools
    • Be aware of school schedules and the academic calendar
  • Respecting the time constraints of school and district staff
    • Use existing tests when possible; limit survey length

  14. Conclusions
  • Good research is becoming increasingly important in education
    • And can be very useful for practitioners (case studies)
  • Good research requires advance planning and collaboration between researchers and practitioners
  • Collaboration is possible, as shown in the following case studies
