Confidentiality and trial integrity issues for monitoring adaptive design trials
Paul Gallo
FDA-Industry Workshop, September 28, 2006

Outline
- Interim analysis conventions, motivation
- Issues for adaptive designs
- Interim analysis / review / decision process
- Sponsor involvement?
Adaptive designs present a number of challenges (e.g., statistical, logistic, procedural) which will need to be addressed before they can become widely utilized.
Issues relating to the confidentiality of interim results may affect the integrity of trial results, and are thus likely to be critical in determining the extent, and shaping the nature, of adaptive design utilization in clinical trials.
Monitoring of accruing data is of course performed in many clinical trials, most frequently for safety and for efficacy or futility assessment.
Current procedures and conventions governing monitoring are a sensible starting point for addressing similar issues in trials with adaptive designs.
As described, e.g., in the recent FDA DMC guidance document, comparative interim results and access to unblinded data should be strictly controlled.
Thus a standard operational model involves having interim analysis results reviewed confidentially, and recommendations made, by a Data Monitoring Committee (DMC), whose members do not have any other responsibilities in the trial.
In confirmatory trials, DMCs are usually totally external to the sponsor organization, for maximum independence.
I. Adaptive designs will certainly require review of accruing data.
II. An important distinction versus other monitoring situations: the results are intended to be used to implement some adaptation(s) which will govern some aspect of the conduct of the remainder of the trial.
Concerns about confidentiality to ensure objective trial management, and potential bias introduced by knowledge of interim results, would seem to be no less relevant for adaptive designs than in other settings.
The key principles to adhere to would seem to be confidentiality of interim results and independence of those reviewing them and making recommendations.
Monitoring board composition
FDA (2006): “Sponsor exposure to unblinded interim data . . . can present substantial risk to the integrity of the trial”.
Might sponsor perspective be relevant for most effectively making certain types of adaptation decisions? (e.g., dose selection).
Will sponsors accept and trust decisions made confidentially by external DMCs in long-term trials / projects with important commercial implications? (e.g., seamless phase II/III).
Potential sponsor participation in the process in confirmatory adaptive trials should require appropriate, prospectively specified safeguards.
Note: other types of monitoring are not immune from this issue.
It has never been the case that no information can be inferred from monitoring; i.e., all monitoring has some action thresholds, and lack of action usually implies that such thresholds have not been reached.
Note: even for an O’Brien-Fleming boundary, despite its perceived conservativeness, continuation beyond 2/3 of the information would imply an interim estimate below the hypothesized delta.
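The claim above can be checked numerically. The sketch below assumes a particular design not specified in the talk: three equally spaced looks, two-sided alpha = 0.05, 90% power, and the tabulated O’Brien-Fleming constant for three looks; the standard-normal quantiles are hardcoded.

```python
import math

# Assumed design (illustration only, not from the talk): three equally
# spaced looks, two-sided alpha = 0.05, 90% power.
z_alpha = 1.959964   # Phi^{-1}(0.975)
z_beta  = 1.281552   # Phi^{-1}(0.90)

# Under the hypothesized delta, the expected z-statistic at information
# fraction t is (z_alpha + z_beta) * sqrt(t).
drift = z_alpha + z_beta            # ~3.24 at full information
t = 2.0 / 3.0
expected_z = drift * math.sqrt(t)   # ~2.65

# Classic O'Brien-Fleming boundary: z_k = C / sqrt(t_k); for K = 3 looks
# the constant C is approximately 2.004 (tabulated value, an assumption).
C_obf = 2.004
boundary_z = C_obf / math.sqrt(t)   # ~2.45

# The trial continues only if the observed z falls below the boundary, so
# the implied interim estimate is at most boundary_z / expected_z times
# the hypothesized delta -- i.e., strictly below delta.
implied_fraction_of_delta = boundary_z / expected_z
print(f"boundary z = {boundary_z:.3f}, expected z under delta = {expected_z:.3f}")
print(f"continuation implies estimate < {implied_fraction_of_delta:.2f} * delta")
```

Since the boundary at t = 2/3 sits below the z-value expected under the hypothesized effect, mere continuation already conveys directional information about the interim estimate.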
In conventional GS design practice, when monitoring is justified, this issue seems not to be perceived to compromise the trial nor to discourage the monitoring.
Presumably, it is viewed that reasonable balance is struck between the objectives and benefits of the monitoring and any slight potential for risk to the trial, with appropriate and feasible safeguards in place to minimize that risk.
This same general type of standard should make sense in considering this type of issue in adaptive designs.
Selection decisions, of the type made in seamless designs (e.g., choice of dose, subgroup, etc. for continuation), might not seem to convey an amount of information that should influence or compromise a trial, as long as the specific numerical results on which the decisions were based remain confidential.
The information conveyed might often be similar to that in other conventionally acceptable monitoring situations.
More problematic are changes based in an algorithmic manner on interim treatment effect estimates, which in effect reveal those estimates to anyone who knows the algorithm and observes the change.
Most typical example - certain approaches to sample size re-estimation:
SS_new = f(interim treatment effect estimate)
=> estimate = f^(-1)(SS_new)
Recent literature, e.g., Jennison & Turnbull, Mehta & Tsiatis.
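The inversion can be made concrete. The sketch below uses the textbook two-arm normal sample-size formula as f; this is an assumption for illustration, since the talk does not specify which re-estimation rule is used.

```python
import math

# Hypothetical illustration of inverting a sample size re-estimation rule.
# We take f to be the standard two-arm formula (an assumption):
#   n_per_arm = 2 * ((z_alpha + z_beta) * sigma / delta_hat)**2
z_alpha, z_beta, sigma = 1.959964, 1.281552, 1.0

def ss_new(delta_hat: float) -> float:
    """f: re-estimated per-arm sample size from the interim effect estimate."""
    return 2 * ((z_alpha + z_beta) * sigma / delta_hat) ** 2

def inferred_estimate(n_per_arm: float) -> float:
    """f^(-1): anyone who sees the announced sample size and knows the
    algorithm can back out the confidential interim estimate."""
    return (z_alpha + z_beta) * sigma * math.sqrt(2.0 / n_per_arm)

interim_estimate = 0.30                  # known only to the monitoring board
announced_n = ss_new(interim_estimate)   # visible to everyone once announced
recovered = inferred_estimate(announced_n)
print(f"announced n per arm: {announced_n:.0f}, recovered estimate: {recovered:.3f}")
```

Rounding to whole patients or capping the maximum sample size coarsens, but does not eliminate, this back-calculation.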
Group sequential designs of course can be viewed as a mechanism for sample size determination.
What’s the difference, then, between a group sequential design and sample size re-estimation based on an interim effect estimate? Maybe not so much . . .