
To crawl before we run: optimising therapies with aggregated data



  1. To crawl before we run: optimising therapies with aggregated data
  Chris Evans, Michael Barkham, John Mellor-Clark, Frank Margison, Janice Connell

  2. Aims
  • Panel aim is to help bridge the gap between researchers and practitioners
  • Specifically, to promote new forms of “practice based evidence” (PBE) which work in and across that gap and which complement EBP
  • This paper aims to present low-sophistication, service-oriented methods to complement the HLM and other sophisticated methods that Wolfgang, Zoran and many others have developed

  3. Specific aims for this presentation
  • Show the realities of routine data collection
  • Show the magnitude of service-level variation
  • Argue that simple service-level analyses can help us learn from treatment failures
  • Computer processing is needed by most services/practitioners but is alien to many; two methods of computer processing are available for CORE
  • For now, confidence intervals and graphical data presentations may be the “zone of proximal development” (a minimal sketch follows below)
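To make the “confidence interval” idea concrete, here is a minimal Python sketch that computes a 95% Wilson score interval for a single service-level proportion (for example, the share of clients with CORE-OM data at both time points). The counts are invented for illustration and are not taken from the dataset described here.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion, e.g. the share of a
    service's clients with a completed CORE-OM at both time points."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Invented example counts: 61 of 118 clients have both pre- and post-therapy scores.
lo, hi = wilson_ci(61, 118)
print(f"completion rate: {61/118:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```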

  4. The dataset
  • 6610 records (from >12k):
    • 33 primary care NHS services
    • 40 to 932 records per service
    • Anonymised, voluntary
  • Four components to the data:
    • Therapist-completed CORE-A: Therapy Assessment Form (TAF) and End of Therapy Form (EOT)
    • Client-completed CORE-OM: at assessment and at end of therapy or follow-up

  5. CORE-A TAF

  6. CORE-A EOT

  7. CORE-OM

  8. CORE-PC version of CORE-OM “It’s really simple and easy to use. I’m not very computer literate, but I’d got to grips with it in less than an hour”

  9. Plotting data: simple proportion

  10. Plotting data: reference lines

  11. Plotting data: add CI for sites

  12. Plotting data: add summary
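The four plotting slides above build up one picture step by step: per-service proportions, a reference line for the overall rate, a confidence interval for each site, and a summary. A rough Python/matplotlib sketch of that kind of plot, using invented counts rather than the CORE dataset, might look like this:

```python
import math
import matplotlib.pyplot as plt

# Invented per-service counts: (service, numerator, denominator) for some
# proportion of interest, e.g. clients with pre- and post-therapy CORE-OM scores.
services = [("A", 30, 40), ("B", 310, 520), ("C", 88, 240), ("D", 150, 190)]

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for k successes out of n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

overall = sum(k for _, k, _ in services) / sum(n for _, _, n in services)

fig, ax = plt.subplots()
for i, (name, k, n) in enumerate(sorted(services, key=lambda s: s[1] / s[2])):
    lo, hi = wilson_ci(k, n)
    ax.plot([i, i], [lo, hi], color="grey")            # CI bar for the site
    ax.plot(i, k / n, "o", color="black")              # site proportion
    ax.annotate(name, (i, k / n), textcoords="offset points", xytext=(6, 0))
ax.axhline(overall, linestyle="--", label=f"overall = {overall:.2f}")  # reference line
ax.set_xlabel("service (sorted by proportion)")
ax.set_ylabel("proportion")
ax.legend()
plt.show()
```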

  13. Getting data (2): CORE-OM 1

  14. Getting data (3): CORE-OM 2

  15. Getting data (4): all four forms

  16. Getting data: summary
  • For each of these basic indices the differences across services:
    • were significant at p < .0005
    • were very large in magnitude
    • the number of services “significantly” different from the overall proportion ranged from 15 to 22 of the 33 (see the sketch below)
  • Even at the “best” end, datasets are fairly incomplete …
  • … at the “worst” end the completion rate is cripplingly low
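One deliberately unsophisticated way to reproduce this kind of summary is sketched below: an overall chi-square test of independence across services, followed by a count of how many services have a simple 95% interval that excludes the pooled rate. The counts are invented, the interval is a crude Wald interval for brevity, and the per-service flagging mirrors the graphical approach rather than a formal multiple-comparison procedure.

```python
import math
import numpy as np
from scipy.stats import chi2_contingency

# Invented completion counts per service (completed vs. not completed).
completed     = np.array([30, 310, 88, 150, 12])
not_completed = np.array([10, 210, 152, 40, 60])

# Overall test: is the completion rate independent of service? (2 x k table)
chi2, p, dof, _ = chi2_contingency(np.vstack([completed, not_completed]))
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.4g}")

# Count services whose simple 95% interval excludes the pooled rate.
pooled = completed.sum() / (completed + not_completed).sum()
flagged = 0
for k, n in zip(completed, completed + not_completed):
    p_i = k / n
    se = math.sqrt(p_i * (1 - p_i) / n)          # Wald standard error
    if not (p_i - 1.96 * se <= pooled <= p_i + 1.96 * se):
        flagged += 1
print(f"{flagged} of {len(completed)} services differ 'significantly' from the pooled rate")
```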

  17. Demographics (1): gender

  18. Demographics (2): ethnicity

  19. Demographics (3): employment

  20. Demographics (4): young age

  21. Demographics (5): older age

  22. Demographics (6): age

  23. Demographics: summary
  • All differences p < .0005
  • Quite large in magnitude
  • Number of services “significantly” different from the overall proportion/median ranged from 3 to 16 of the 33
  • Particularly big differences on ethnicity
  • Some of these demographic variables will have relationships to outcome and failure both within and between services

  24. Individual level & site level effects

  25. Starting points (1): on medication

  26. Starting points (2): CORE-OM score

  27. Starting points (3): % > CSC cut point

  28. Starting points: summary
  • All statistically significant at p < .0005
  • Large differences
  • Number of services “significantly” different from the overall proportion/median ranged from 6 to 10 of the 33
  • Again, starting conditions can have relationships with outcome and failures at both individual and service level

  29. Logistics (1): wait time to assessment

  30. Logistics (2): % offered more sessions

  31. Logistics (3): #(sessions planned)

  32. Logistics: summary
  • All p < .0005
  • All large differences, particularly for waiting time from referral to assessment (13 days cf. 137 days)
  • Number of services “significantly” different from the overall proportion/median ranged from 6 to 19 of the 33 (a sketch of a simple across-service comparison follows below)
  • There are big differences in the number of sessions offered (medians from 3 to 10) …
  • … but many services offer a fixed number of sessions; the mode is six sessions
  • It looks very likely that there are differences between services in the ways they operate that will hugely affect outcome and failures
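Where the index is a median rather than a proportion (waiting times, planned sessions, age), a distribution-free omnibus test is one low-sophistication option. The sketch below uses a Kruskal-Wallis test on invented waiting-time data for three services standing in for the 33 in the dataset; it is an illustration of the kind of comparison, not the analysis actually used for these slides.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# Invented waiting times (days from referral to assessment) for three services.
service_waits = {
    "A": rng.gamma(shape=2.0, scale=7.0, size=120),   # short waits
    "B": rng.gamma(shape=2.0, scale=40.0, size=90),   # medium waits
    "C": rng.gamma(shape=2.0, scale=70.0, size=60),   # long waits
}

stat, p = kruskal(*service_waits.values())
print(f"Kruskal-Wallis H = {stat:.1f}, p = {p:.4g}")
for name, waits in service_waits.items():
    print(f"service {name}: median wait = {np.median(waits):.0f} days (n = {len(waits)})")
```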

  33. Outcomes (1): unplanned endings

  34. Outcomes (2): CORE-OM change

  35. Outcomes (2): CORE-OM change

  36. Outcomes (3): % RC

  37. Outcomes (4): % CSC

  38. Outcomes: summary
  • All statistically significant at p < .0005
  • Large differences
  • Number of services “significantly” different from the overall proportion/median ranged from 4 to 9
  • Despite large differences on RC and CSC, the number of services differing “significantly” from the overall is not so high (4 and 6 respectively; a sketch of the RC and CSC computations follows below)
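RC (reliable change) and CSC (clinically significant change) are conventionally computed along Jacobson-Truax lines; the sketch below shows the two formulas with placeholder parameters purely for illustration, not the published CORE-OM referential values.

```python
import math

def reliable_change_index(pre: float, post: float, sd_pre: float, reliability: float) -> float:
    """Jacobson-Truax reliable change index: pre-post change divided by the
    standard error of the difference; |RCI| > 1.96 is conventionally 'reliable'."""
    s_diff = sd_pre * math.sqrt(2) * math.sqrt(1 - reliability)
    return (pre - post) / s_diff

def csc_cutoff(mean_clin: float, sd_clin: float, mean_nonclin: float, sd_nonclin: float) -> float:
    """Jacobson-Truax criterion 'c': the cut point weighted between the clinical and
    non-clinical distributions; crossing it counts towards clinically significant change."""
    return (sd_clin * mean_nonclin + sd_nonclin * mean_clin) / (sd_clin + sd_nonclin)

# Placeholder numbers purely for illustration, not CORE-OM referential values.
rci = reliable_change_index(pre=20.0, post=9.0, sd_pre=6.5, reliability=0.90)
cut = csc_cutoff(mean_clin=18.0, sd_clin=6.5, mean_nonclin=5.0, sd_nonclin=4.0)
print(f"RCI = {rci:.2f} (reliable if |RCI| > 1.96); CSC cut-off = {cut:.1f}")
```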

  39. Can automation of data processing help bridge the gap?
  • Neither researchers nor practitioners know much about the generalisability of “strong causal inference” to routine practice
  • Need practice to come out of the confidentiality closet without harming true confidentiality
  • Very, very few services currently collect routine outcome data
  • Few services link with other services to compare practices and data
  • Few services have strong links to researchers to help understand data
  • Need to bridge these gaps: if we make data easier to handle it might help!

  40. Automation (1): batch route
  • Facilitates some distancing from the data
  • Data analyses done by researchers and experts in analysis and data handling
  • Reports (30+ pages) well received
  • Can explore site-specific issues

  41. Automation (2): CORE-PC “The clinical and reliable change graph is invaluable. As a service manager it gives me instant access to where we can look to improve our service provision”

  42. Automation (2): CORE-PC “I never realised that writing a report could be so simple, all I need to do is copy the tables I need from CORE-PC, paste them in Word, and write my interpretations.”

  43. Automation (2): PC
  • Allows services to get much “nearer” to their data
  • Should prevent some data entry errors
  • Should increase data completeness
  • May mean that service clinicians and managers feel uncertain about how to analyse and interpret their data …
  • … they will need training and support

  44. http://www.psyctc.org/stats/Weimar (not until Monday 30.vi.03!)
