
Biostatistics for Coordinators


Presentation Transcript


  1. Biostatistics for Coordinators Peter D. Christenson REI and GCRC Biostatistician GCRC Lecture Series: Strategies for Successful Clinical Trials Session #2 June 10, 2004

  2. Outline • Typical Flow of Data in Clinical Studies • Biostatistical Resources at REI and GCRC • Statistical Components of Research Protocols

  3. Typical Flow of Data in Clinical Studies [Slide diagram: Source Documents → CRFs → Database → Spreadsheets / Statistics Software / Graphics Software → Reports] Database is the hub: export to applications
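As a purely illustrative sketch of the "database is the hub" idea (not from the slides), the Python snippet below exports records from a study database to a CSV file that spreadsheet, statistics, or graphics packages can import; the database file, table, and column names are hypothetical.

# Hypothetical sketch: export study data from a database (SQLite used only
# as an example) to CSV for import into spreadsheet/statistics/graphics software.
import csv
import sqlite3

conn = sqlite3.connect("study.db")                        # assumed database file
cur = conn.execute("SELECT subject_id, visit, map_change FROM outcomes")
with open("outcomes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row from query metadata
    writer.writerows(cur)                                  # one CSV row per database record
conn.close()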

  4. Biostatistical Resources at REI and GCRC • Biostatistician: Peter Christenson • Assist with study design, protocol development • Minor, limited analysis of data • Major analysis as investigator with %FTE on funded studies • GCRC and non-GCRC studies • Biostatistics short courses: 6 weeks 2x/yr • GCRC computer laboratory in RB-3 • For GCRC studies • Statistical, graphics, database software • Webpage: www.gcrc.humc.edu/Biostat

  5. NCSS: Basic intuitive statistics package in GCRC computer lab; has power module

  6. SPSS: More advanced statistics package in GCRC lab

  7. SAS: Advanced professional statistics package in GCRC lab

  8. Sigma Plot: Scientific publication graphics software in GCRC lab

  9. nQuery: Professional study size / power software in GCRC lab

  10. www.gcrc.humc.edu/Biostat

  11. www.statsoft.com/textbook/stathome.html Good general statistics book by a software vendor.

  12. www.StatCrunch.com NSF-funded software development. Not a download; it runs online in a web browser.

  13. www.stat.uiowa.edu/~rlenth/Power Online Study Size / Power Calculator

  14. Statistical Components of Protocols • Target population / sample (generalizability). • Quantification of aims, hypotheses. • Case definitions, endpoints quantified. • Randomization. • Blinding. • Study size: screen, enroll, complete. • Use of data from non-completers. • Justification of study size (power, precision, other). • Methods of analysis. • Mid-study analyses.

  15. Randomization • Helps assure attributability of treatment effects. • Blocked randomization assures approximate chronologic equality of the numbers of subjects in each treatment group. • Recruiters must not have access to the randomization list. • The list can be created with a random number generator in software (e.g., Excel, NCSS), with printed tables in statistics texts, or by picking slips out of a hat (a sketch follows below).
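A minimal sketch, in Python, of how such a blocked randomization list might be generated with a random number generator; the block size, arm labels, and function name are illustrative assumptions, not the procedure of any particular study.

# Illustrative sketch of a blocked randomization list: within every block of
# 4 consecutive subjects, the two arms appear equally often, in random order.
import random

def blocked_randomization_list(n_subjects, block_size=4, arms=("A", "B"), seed=2004):
    rng = random.Random(seed)            # fixed seed so the list is reproducible
    per_arm = block_size // len(arms)    # equal allocation within each block
    assignments = []
    while len(assignments) < n_subjects:
        block = list(arms) * per_arm     # e.g., [A, B, A, B]
        rng.shuffle(block)               # random order within the block
        assignments.extend(block)
    return assignments[:n_subjects]

# Example: print the assignments for the first 12 subjects.
for subject, arm in enumerate(blocked_randomization_list(12), start=1):
    print(subject, arm)

The generated list itself would be held by the statistician or pharmacy, not by recruiters, per the bullet above.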

  16. Study Size / Power : Definition • Power is the probability of declaring a treatment effect from the limited number of study subjects, if there really is an effect of a specified magnitude (say 10) among all persons to whom we are generalizing. [ Similar to diagnostic sensitivity. ] • Power is not the probability that an effect (say 10) observed in the study will be “significant”.
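In symbols (a standard formulation added here for reference, not text from the slide), with α the significance cutoff and β the type II error rate:

\[
\text{Power} \;=\; 1 - \beta \;=\; P\bigl(\text{declare a treatment effect at level } \alpha \,\bigm|\, \text{true effect} = \Delta\bigr), \qquad \text{e.g. } \Delta = 10 .
\]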

  17. Study Size / Power : Confusion Reviewer comment on a protocol: “… there may not be a large enough sample to see the effect size required for a successful outcome. Power calculations indicate that the study is looking for a 65% reduction in incidence of … [disease]. Wouldn’t it also be of interest if there were only a 50% or 40% reduction, thus requiring smaller numbers and making the trial more feasible?” Investigator response was very polite.

  18. Study Size / Power : Issues • Power will be different for each outcome. • Power depends on the statistical method. • Five factors including power are inter-related. Fixing four of these specifies the fifth: • Study size • Power • p-value cutoff (level of significance, e.g., 0.05) • Magnitude of treatment effect to be detected • Heterogeneity among subjects (std dev)
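To make the inter-relationship concrete, here is a minimal Python sketch that fixes four of the five factors (significance level, power, detectable difference, and standard deviation among subjects) and solves for the fifth, study size, using the normal approximation for a two-sided, two-sample comparison of means; dedicated tools such as nQuery or NCSS's power module use the exact t-distribution, so their answers differ slightly. The function name and numbers are illustrative.

# Sketch: per-group sample size from alpha, power, detectable difference (delta),
# and between-subject standard deviation (sd), via the normal approximation.
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    z = NormalDist()                      # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)    # two-sided significance cutoff (1.96 for 0.05)
    z_power = z.inv_cdf(power)            # quantile giving the desired power (0.84 for 80%)
    return ceil(2 * ((z_alpha + z_power) * sd / delta) ** 2)

print(n_per_group(delta=5.2, sd=8.16))    # about 39 per group, close to the 40/group in the example below

Fixing any other four of the five factors and solving for the remaining one works the same way.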

  19. Study Size / Power : Example Project #10038: Dan Kelly & Pejman Cohan Hypopituitarism after Moderate and Severe Head Injury • “The primary outcomes for the hydrocortisone trial are changes in mean MAP and vasopressor use from the 12 hours prior to initiation of randomized treatment to the 96 hours after initiation.” • Mean changes in placebo subjects will be compared with hydrocortisone subjects using a two sample t-test.
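For illustration only, the planned comparison could be carried out in any of the packages above or, as sketched here, with SciPy's two-sample t-test; the numbers are made up and are not study data.

# Hypothetical sketch of the planned analysis: compare mean MAP change between
# groups with a two-sample t-test (made-up numbers, not data from the study).
from scipy import stats

placebo_change = [-2.1, 4.0, 1.3, -0.5, 3.2, 2.8]
hydrocortisone_change = [5.6, 7.1, 3.9, 6.4, 8.0, 4.7]

t_stat, p_value = stats.ttest_ind(placebo_change, hydrocortisone_change)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")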

  20. Study Size / Power : Example Cont’d Thus, with a total of the planned 80 subjects, we are 80% sure to detect (p<0.05) group differences if treatments actually differ by at least 5.2 mm Hg in MAP change, or by a mean 0.34 change in number of vasopressors.

  21. Study Size / Power : Example Cont’d Pilot data: SD = 8.16 for ΔMAP in 36 subjects. For p-value < 0.05, power = 80%, and N = 40/group, the detectable Δ of 5.2 quoted on the previous slide is found from the formula below:
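The formula shown on the original slide was not captured in this transcript; a reconstruction using the standard normal-approximation expression for the detectable difference (exact t-distribution calculations, as in nQuery, give the 5.2 quoted above) is:

\[
\Delta_{\text{detectable}}
\;=\; \bigl(z_{1-\alpha/2} + z_{1-\beta}\bigr)\,\sigma\,\sqrt{\frac{2}{n}}
\;=\; (1.96 + 0.84)\times 8.16 \times \sqrt{\frac{2}{40}}
\;\approx\; 5.1\ \text{mm Hg} .
\]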

  22. Study Size / Power : Summary • Power analysis assures that effects of a specified magnitude can be detected. • For comparing means, need (pilot) data on variability of subjects for the outcome measure. [E.g., std dev from a previous study.] • Comparing rates (%s) does not require pilot variability data; use a rate-based outcome if no pilot data are available (see the formula below). • Helps support (superiority) studies with negative conclusions. • To prove no effect (non-inferiority), use an equivalency study design.
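The reason rate comparisons need no pilot variability data is that a proportion's variability is determined by the proportion itself, p(1 − p). The standard normal-approximation sample-size formula for comparing two rates p1 and p2 (added here for reference, not from the slide) is:

\[
n \text{ per group} \;=\; \frac{\bigl(z_{1-\alpha/2}+z_{1-\beta}\bigr)^{2}\,\bigl[\,p_1(1-p_1)+p_2(1-p_2)\,\bigr]}{(p_1-p_2)^{2}} .
\]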

  23. Non-completing Subjects • Enrolled subjects are never “dropouts”. • Some studies may require recording last status at the last known time. • Protocol should specify: • Primary analysis set (e.g., all subjects with at least one post-baseline value for the primary endpoint, “modified Intent-to-Treat”). • How final values will be assigned to non-completers (e.g., Last Value Carried Forward, sketched below). • Study size estimates should incorporate the number of expected non-completers.
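A minimal sketch, with made-up visit data, of Last Value Carried Forward: a non-completer's later (missing) values, including the final endpoint, are filled in with the last value actually observed before dropout.

# Sketch of Last Value Carried Forward (LVCF/LOCF); None marks a missed visit.
def last_value_carried_forward(visit_values):
    last_seen = None
    filled = []
    for value in visit_values:
        if value is not None:
            last_seen = value        # remember the most recent observed value
        filled.append(last_seen)     # missed visits inherit that value
    return filled

# Subject completes only 3 of 5 visits; the final endpoint becomes 82.
print(last_value_carried_forward([90, 85, 82, None, None]))
# -> [90, 85, 82, 82, 82]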

  24. Mid-Study Analyses • Mid-study comparisons should not be made before study completion unless they are planned for (interim analyses). Early comparisons are unstable and can invalidate final comparisons. • Mid-study reassessment of study size is advised for long studies; only the standard deviations to date are used, not the observed treatment differences. Often, these analyses can be performed at the time of DSM reports.
