Using indicators to advocate for policy and programs

  1. Using indicators to advocate for policy and programs Dr Gill Westhorp Community Matters

  2. Session Overview • Framing advocacy arguments • Advocacy in one policy/intervention area • An approach to evidence • Implications for advocating for policy and programs

  3. Advocacy • “advocare”: “to be called to stand beside” • verbal support or argument for a cause or policy; the function of being an advocate • advocate: a person who supports or speaks in favour; a person who pleads for another; recommend, plead for, defend (Australian Concise Oxford Dictionary)

  4. Framing the argument • What’s the problem? • Breached rights? • Unmet needs? • ‘Wrong’ solution? • Program effectiveness? • Cost-effectiveness? • What’s the solution? • What’s the evidence?

  5. Planning your advocacy • To whom? – Politicians? Senior policy makers? Service provision organisations? • For what? – An over-arching policy framework? A change within an existing policy? A particular program? • What issues does the decision-maker need to take into account? • What evidence will they find convincing?

  6. Matching evidence to argument

  7. Kinds of Evidence (R&E)

  8. Who uses evaluation findings? • Internal to the program: • program sponsors, directors, and practitioners • “learning organisations” • External to the program: • managers of other similar programs, to learn how to improve their programs • funding and policy bodies, to figure out what to fund or how to improve the operation of programs they sponsor • politicians, to amend/make policies • social scientists, to see what new knowledge has accrued and incorporate it into theory • evaluators, to profit from the findings and the methods of study • civil society – as volunteers, members of Boards, for advocacy…

  9. Types of use • “instrumental use” – decision-making, e.g. by funders • “conceptual use” by local program people… learn more about what the program is and does… • mobilize support for change • influence external to the program – “enlightenment”: “evaluation evidence often comes into currency through professionals and academics and evaluators, and it influences networks of practicing professionals and policy wonks (Heclo, 1978), infiltrates advocacy coalitions (Sabatier, 1993), alters policy paradigms (Weiss, 1980), changes the policy agenda (Kingdon, 1995), and affects belief communities within institutions (Radaelli, 1995).” Carol Weiss, 1998

  10. Early Years Early Intervention • Common risk and protective factors for (e.g.) • offending • homelessness • early school leaving • drug and alcohol abuse • unemployment • mental health problems… • Early years brain development research

  11. Advocacy for early years interventions • Child outcomes for most effective programs: • ↑ school completion • ↑ college/university attendance • ↑ employment/earnings • ↑ mental health • ↓ drug use, tobacco use • ↓ child abuse, removals from home • ↓ delinquency, adult arrest for violence, jail

  12. Advocacy for early years interventions (cont.) • Parent outcomes for most effective programs: • ↑ completed high school • ↑ employment/earnings • ↓ arrest for violence • ↓ welfare support • Cost-effectiveness of effective programs: • Between ≈$4 and $10 saved for every $1 spent
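
A quick worked example of what that benefit-cost range implies. The per-child cost here is a hypothetical figure for illustration, not one from the presentation; only the 4:1 and 10:1 ratios come from the slide:

cost_per_child = 5_000                       # assumed program cost (USD), hypothetical
for ratio in (4, 10):                        # low and high ends of the quoted range
    savings = cost_per_child * ratio         # projected lifetime savings
    net_benefit = savings - cost_per_child
    print(f"{ratio}:1 -> savings ${savings:,}, net benefit ${net_benefit:,}")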

  13. Early Head Start • Major funding program, USA • Low-income families with 0-3 year olds • Wide geographic spread, urban and rural • Target diverse populations • Services: • centre based • home visiting based • mixed approaches • Evaluation: 17 EHS programs; 3,001 families • Random assignment evaluation (RCT)
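
A minimal sketch of what random assignment looks like in practice. The family IDs, 50/50 split, and seed are assumptions for illustration, not details of the EHS evaluation:

import random

def randomly_assign(family_ids, seed=42):
    # Shuffle a copy of the IDs and split into two roughly equal groups.
    rng = random.Random(seed)
    shuffled = list(family_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# 3,001 families, as in the EHS evaluation (the IDs themselves are invented).
program_group, control_group = randomly_assign(range(3001))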

  14. Early Head Start - Impacts • Overall positive impacts for children, parents and home environments across approaches • Parents more emotionally supportive of their children, less detached, and less likely to spank their children… • Overall impacts: effect sizes 10% to 20% • Some programs had higher levels of effects (20%-50% across multiple outcomes): • programs that used mixed approaches • African American families (more disadvantaged?) • families with a moderate number of demographic risk factors
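
The slide does not say which effect-size metric EHS used; if these figures are standardized mean differences (a common convention), “10% to 20%” would mean 0.10-0.20 standard deviations, computed roughly as in this sketch with invented scores:

from statistics import mean, stdev

def cohens_d(treatment, control):
    # Standardized mean difference using a pooled standard deviation.
    nt, nc = len(treatment), len(control)
    pooled_var = ((nt - 1) * stdev(treatment) ** 2 +
                  (nc - 1) * stdev(control) ** 2) / (nt + nc - 2)
    return (mean(treatment) - mean(control)) / pooled_var ** 0.5

# Hypothetical outcome scores, invented purely for illustration.
print(round(cohens_d([51.2, 49.8, 50.9, 50.4], [50.6, 49.5, 50.2, 49.9]), 2))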

  15. Early Head Start: Works for Whom? • 5 demographic risk factors analysed: “being a single parent; receiving public assistance; being neither employed nor in school or job training; being a teenage parent; and lacking a high school diploma…” • “The programs had only a few significant impacts on families with fewer than three demographic risks, and the impacts on families with more than three risks were unfavourable. … Previous research suggests that low-income families who have experienced high levels of instability, change and risk may be overwhelmed by changes that a new program introduces into their lives, even though the program is designed to help….” (p9)
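
A sketch of the kind of subgroup analysis behind that finding. The risk counts and impact estimates below are invented for illustration; only the fewer-than-three / three / more-than-three grouping mirrors the slide:

# Hypothetical per-family records: number of demographic risks and an
# estimated program impact (positive = favourable).
families = [
    {"risks": 1, "impact": 0.03},
    {"risks": 2, "impact": 0.05},
    {"risks": 3, "impact": 0.18},
    {"risks": 4, "impact": -0.07},
    {"risks": 5, "impact": -0.11},
]

def mean_impact(group):
    return sum(f["impact"] for f in group) / len(group)

for label, test in [("fewer than 3 risks", lambda r: r < 3),
                    ("3 risks", lambda r: r == 3),
                    ("more than 3 risks", lambda r: r > 3)]:
    group = [f for f in families if test(f["risks"])]
    print(label, round(mean_impact(group), 2))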

  16. Programs ‘not effective’ for: • Those presenting with depression, withdrawal, low self-esteem, limited parenting skills and unrealistic expectations of their children. The additional presence of family violence or chemical dependency can lead to a deterioration in these families (Boston program by Ayoub et al; cited in Browne, 1995). • Poor families characterized by a high, rather than low or moderate, number of risk factors (i.e. low maternal education, unemployed head of household, single marital status, teenage mother, high levels of depressive symptoms, and low social support) (Infant Health Development Program; cited in Brooks-Gunn et al, 2000). • Mothers who have experienced much grief, trauma, depression and abuse (Hawaii Healthy Start; cited in Knitzer, 2000). • Women with few friends, little support and many problems (Keys to Caregiving Program; cited in Knitzer, 2000). • Families in poverty experiencing significant personal stress (Farran, 2000). • Reducing child abuse in families with significant levels of domestic violence (Nurse Home Visiting Program; Eckenrode et al, 2000). • Relatively more disadvantaged families within disadvantaged areas (British Sure Start evaluation, 2005).

  17. Other social interventions… Examples from one book of “where evidence of harm was detected following a social intervention”: • social work counselling for ‘problem’ boys (Cambridge-Somerville study); • social work counselling for girls (Girls at Vocational High); • income maintenance (Negative Income Tax studies); • financial aid for ex-prisoners (Transitional Aid Research Project); • privatizing teaching (Educational Performance Contracting research); • counselling for prisoners in California; • intensive social and medical services for older people in New York. • Oakley (2000), Experiments in Knowing, p. 309

  18. Hearing what we want to hear • “In all these instances, the findings of the research surprised those who had launched it, and also the professional groups involved. The research findings appeared ‘counter-intuitive’, in the sense that people responsible for the various interventions believed that they were doing good, not harm. For this reason, they often found it difficult to take on board the results of the experimental evaluation, seeking instead other explanations (the context of the research, its design, problems with the quality or the implementation of the intervention) which would preserve the possibility that the practice under test did actually work.” • Oakley, op. cit., p. 309

  19. EHS Negative Impacts • NOT explained by: • service model • lower access to services • teenage parenthood • maternal depression • cultural group (Black American / Hispanic) • ‘not meeting the needs of the target group’

  20. Policy & Methodological Problems • “Effective” strategy doesn’t work, and sometimes is counter-productive, for the highest risk (primary) target group • No-one knows why • Need to evaluate for whom our programs work / don’t work and find out why.

  21. A ‘definition’: Realist Evaluation • Realistic Evaluation, Ray Pawson and Nick Tilley (Sage, 1997) • Not “what works”, but “What works for whom, in what contexts, and how?”

  22. Social Programs • Social programs are real and can have real effects – both positive (helpful) and negative (harmful). • Programs are an attempt to create change. • Programs ‘work’ by changing the choices that participants (individuals, organisations, communities) make • Choice-making is always constrained - by previous experiences, attitudes, beliefs, resources available, expectations…

  23. Mechanisms • Programs change choices by altering reasoning of or resources available to participants • Reasoning: (eg) beliefs, attitudes, values, ‘logic in use’ • Resources: (eg) information, skill, material resources, social support • The interaction between reasoning & resources = program mechanism • Participant response, not just program strategy, determines whether programs ‘work’

  24. Context • The contexts in which programs operate make a difference to the outcomes they achieve by determining whether/which mechanisms “fire”. • Program contexts include features such as: • organizational context • program participants • program staffing • economic, geographical and historical context • and so on…
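
One way to make the context-mechanism-outcome idea concrete is as a small data structure. The class and field names below are an assumption for illustration, not Pawson and Tilley's notation, and the example configuration paraphrases the EHS findings above:

from dataclasses import dataclass

@dataclass
class CMOC:
    """A context-mechanism-outcome configuration (field names assumed)."""
    context: str    # for whom, in what circumstances
    resources: str  # what the program offers (one half of the mechanism)
    reasoning: str  # how participants respond (the other half)
    outcome: str    # the change that results

example = CMOC(
    context="families with a moderate number of demographic risks",
    resources="home visits offering parenting information and support",
    reasoning="parents feel supported rather than judged, so they try new practices",
    outcome="more emotionally supportive, less punitive parenting",
)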

  25. Mechanisms & outcomes in one parenting program Code: ✓ = evidence in support; x = evidence against; ? = contradictory evidence; blank = no evidence

  26. Realist research programmes • Single studies are apt to mislead • Repeated single studies tend to produce mixed findings • Series of studies are needed to identify and refine CMOCs (context-mechanism-outcome configurations) • CMOCs can change cumulatively through series of studies

  27. Realist synthesis • A particular process for learning from the literature • Builds program theories about ‘what works for whom in what contexts and how’ • Can use any previous research/reports that were of “good enough quality” to support the conclusions they reached

  28. Constructing CMOCs

  29. Who needs what information?

  30. Implications for advocacy • Match the indicator type to the argument • rights, needs, effectiveness, cost-effectiveness • Know the research (some of the bureaucrats do) • Don’t rely on single evaluations • Check your own assumptions: what works for whom? Where and how? • Match the level of information or argument to the audience: ‘who needs to know what?’
