
ProTECT III Community Consultation Study

Neal W Dickert, MD, PhD, Victoria Mah, MPH, Michelle H Biros, MD, Deneil Harney, MPH, MSW, Robert E Silbergleit, MD, Jeremy Sugarman, MD, MA, MPH, Emir Veledar, PhD, Kevin P Weinfurt, PhD, David W Wright, MD, Rebecca D Pentz, PhD.


Presentation Transcript


  1. ProTECT III Community Consultation Study Neal W Dickert, MD, PhD, Victoria Mah, MPH, Michelle H Biros, MD, Deneil Harney, MPH, MSW, Robert E Silbergleit, MD, Jeremy Sugarman, MD, MA, MPH, Emir Veledar, PhD, Kevin P Weinfurt, PhD, David W Wright, MD, Rebecca D Pentz, PhD

  2. Why are we studying CC? • CC requirement is unique to EFIC • CC is confusing • Multiple potential goals • Multiple methods • Community consultation can be a barrier to important research

  3. CC Feedback Varies in NETT

  4. Knowledge Gaps • No study has examined CC results across multiple sites for the same study • Little is known about the impact/role of different CC methods • Rates of EFIC/study acceptance • Level of understanding among respondents • How to use feedback • ProTECT CC study was a collaborative effort to address these issues within NETT

  5. Methods • Survey instrument/assessment tool • Developed in consultation with HSP-WG • Cognitively pre-tested • Research purpose and reporting mechanism • 2 forms of the survey available • Self-contained form for self-administration (included disclosure/description of ProTECT) • Survey alone for administration after a CC event • Did not include sites that had previously developed site- or method-specific tools

  6. Design Instrument designed to characterize the range of feedback and address 3 hypotheses: Interactive CC methods lead to greater acceptance of EFIC in general and personal EFIC enrollment. Interactive methods increase knowledge of study details. Increased study knowledge predicts EFIC acceptance.

  7. Statistical Methods • CC methods categorized by interactivity • Interactive: interviews, focus groups, existing meeting groups, investigator-initiated meetings, town hall/open forums • Non-interactive: surveys at events, surveys online/email • 5-point Likert-scale questions collapsed • Knowledge-based questions summed as a 10-point composite score • Regression models and GLM created
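The data-preparation steps on this slide (categorizing CC methods by interactivity, collapsing 5-point Likert responses, and summing knowledge items into a 10-point composite) can be sketched as follows. This is an illustrative sketch only: the function names, category labels, and cutoffs are assumptions, not the study's actual analysis code.

```python
# Illustrative sketch of the slide's data-preparation steps.
# Labels and cutoffs are hypothetical, not taken from the study.

INTERACTIVE = {
    "interview", "focus group", "existing meeting group",
    "investigator-initiated meeting", "town hall/open forum",
}
NON_INTERACTIVE = {"survey at event", "survey online/email"}


def interactivity(method: str) -> str:
    """Categorize a CC method as interactive or non-interactive."""
    if method in INTERACTIVE:
        return "interactive"
    if method in NON_INTERACTIVE:
        return "non-interactive"
    raise ValueError(f"unknown CC method: {method}")


def collapse_likert(score: int) -> str:
    """Collapse a 5-point Likert response (1-5) into three levels
    (an assumed collapsing scheme; the slide does not give cutoffs)."""
    if not 1 <= score <= 5:
        raise ValueError("Likert score must be 1-5")
    if score <= 2:
        return "disagree"
    if score == 3:
        return "neutral"
    return "agree"


def composite_knowledge(correct: list[bool]) -> int:
    """Sum 10 knowledge-based items (1 point each) into a 0-10 score."""
    if len(correct) != 10:
        raise ValueError("expected 10 knowledge items")
    return sum(correct)
```

The collapsed Likert levels and the composite score would then serve as outcomes/predictors in the regression and GLM models the slide mentions.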

  8. Demographics

  9. Range of Methods

  10. Recall of Study Information

  11. Recall of Study Information

  12. Views on EFIC and CC

  13. EFIC Acceptance- MVLR *Age, gender, and community type were not significant in the models

  14. Composite Knowledge- GLM

  15. EFIC Acceptance by Site

  16. EFIC Acceptance by Method

  17. Summary • Overall acceptance is reasonably high • Interactive CC methods associated with greater acceptance of EFIC enrollment • Interactive CC associated with greater recall of study elements, except risks • Significant variability in acceptance at community-meeting-based CC events • Lower EFIC acceptance among “other” races vs. whites, but no significant difference between black and white respondents

  18. Implications • Choice of method impacts EFIC acceptance and CC participants’ level of understanding of the study being discussed • Types of feedback are meaningfully different • Public reaction versus more considered opinion • Not sure what to make of the risk difference • Variability among interactive events is not surprising but has important implications • Can’t read too much into a simple acceptance rate • Some sources of variability are good • Some may be problematic

  19. Implications • Growing body of evidence will provide benchmarks and method-specific knowledge • Will need data across different types of studies • Better characterize local variability • Different methods target different goals • Lingering questions • Whose views matter most? • What level of “acceptance” is enough? • How much CC is sufficient?

  20. Acknowledgments • ProTECT Boss- David Wright • HSP Working Group • Participating sites
