Keynote speaker: Dominic Magirr (Novartis Pharma AG)



Deconstructing the Max-Combo Test

The Max-Combo Test (Lin et al., 2020) is an adaptive procedure that selects the best-performing test statistic from a small prespecified set of candidates, with a built-in correction for multiplicity. Proposed applications include clinical trials in oncology where there is prior uncertainty about the commonly made proportional hazards assumption. In this context, the candidate test statistics usually come from the Fleming-Harrington rho-gamma family of weighted log-rank statistics. In some of the candidates the weighting is tilted towards events occurring early in follow-up, whereas in others it is tilted towards events occurring later in follow-up. The overall procedure is therefore robust to a wide range of treatment effect patterns.
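To make the weighting concrete, here is a minimal illustrative sketch (not from the talk) of the Fleming-Harrington rho-gamma weight, w(t) = S(t)^rho * (1 - S(t))^gamma, where S(t) is the pooled survival estimate: the choice (rho, gamma) = (1, 0) up-weights early events, (0, 1) up-weights late events, and the max-combo statistic is simply the maximum of the standardized weighted log-rank statistics over the candidate set. All numbers below are made up for illustration.

```python
def fh_weight(s, rho, gamma):
    """Fleming-Harrington (rho, gamma) weight evaluated at the pooled
    survival estimate s = S(t); s is close to 1 early in follow-up
    and decreases towards 0 later on."""
    return s ** rho * (1.0 - s) ** gamma

# Early in follow-up (s = 0.9) vs. late in follow-up (s = 0.2):
early_tilt = (fh_weight(0.9, 1, 0), fh_weight(0.2, 1, 0))  # (1, 0) favours early events
late_tilt = (fh_weight(0.9, 0, 1), fh_weight(0.2, 0, 1))   # (0, 1) favours late events

# The max-combo statistic is the maximum of the standardized weighted
# log-rank z-statistics over the prespecified candidate set; its null
# distribution (and hence the multiplicity correction) accounts for the
# correlation between the candidates.
z_candidates = [1.2, 2.1, 0.7]  # illustrative z-values only
z_max = max(z_candidates)
```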

In this talk I shall take a close look at the Max-Combo Test, using recently proposed visualization techniques (Jimenez et al., 2023) to explain some of its counter-intuitive properties, as pointed out in recent publications (Freidlin & Korn, 2019). The same visualization techniques can be used to re-evaluate weighted log-rank tests in the broader context of the estimand framework in clinical trials.


Keynote speaker: Annette Kopp-Schneider (German Cancer Research Center)



Borrowing from external information in clinical trials: methods, benefits and limitations

When trials can only be performed with small sample sizes, as for example in precision medicine, where patient cohorts are defined by a specific combination of biomarker and targeted therapy, borrowing information from historical data is currently discussed as an approach to improve trial efficiency. In this context, borrowing information is often also referred to as evidence synthesis or extrapolation, where the external data may be historical data or some other source of co-data. A number of approaches have been proposed that dynamically discount the amount of information transferred from the external data according to the discrepancy between the external and current data. We will present two selected approaches. The robust mixture prior (Schmidli et al., 2014) is a popular method: a weighted mixture of an informative and a robust prior, equivalent to a meta-analytic-combined analysis of historical and new data under the assumption that parameters are exchangeable across trials. The power prior approach incorporates external data into the prior used for the analysis of the current data; this prior is proportional to the likelihood of the external data raised to the power of a weight parameter. An empirical Bayes approach for estimating the weight parameter from the similarity of external and current data was proposed by Gravestock et al. (2017).
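As a concrete illustration of the power prior idea (a hypothetical sketch, not taken from the talk), consider a normal mean with known standard deviation and a flat initial prior: raising the external likelihood to the power delta is then equivalent to retaining only a fraction delta of the external sample's precision. All names and numbers below are illustrative.

```python
def power_prior_posterior(ext_mean, n_ext, cur_mean, n_cur, sigma, delta):
    """Posterior mean and variance for a normal mean with known sigma,
    a flat initial prior, and external data down-weighted by the power
    parameter delta in [0, 1].  delta = 0 ignores the external data;
    delta = 1 pools it fully with the current data."""
    prec_ext = delta * n_ext / sigma ** 2  # precision retained from external data
    prec_cur = n_cur / sigma ** 2          # precision from the current trial
    post_var = 1.0 / (prec_ext + prec_cur)
    post_mean = post_var * (prec_ext * ext_mean + prec_cur * cur_mean)
    return post_mean, post_var

# With conflicting data (external mean 1.0, current mean 0.0), the
# posterior mean moves from 0.0 (no borrowing) to 0.5 (full pooling):
m_none, _ = power_prior_posterior(1.0, 100, 0.0, 100, 1.0, delta=0.0)
m_full, _ = power_prior_posterior(1.0, 100, 0.0, 100, 1.0, delta=1.0)
```

Intermediate values of delta interpolate between these extremes, which is why data-driven estimation of delta (as in Gravestock et al., 2017) amounts to dynamic discounting.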

We will discuss the frequentist operating characteristics (FOC) of trials using these two adaptive borrowing approaches, evaluating type I error rate and power as well as mean squared error. Use of the robust mixture prior requires choosing the mixture weight and the mean and variance of the robust component, and we will discuss the impact of these choices on the FOC. The concept of prior effective sample size (ESS) facilitates quantification and communication of prior information by equating it to a sample size. When prior information arises from historical observations, the traditional approach identifies the ESS with a historical sample size, a measure that is independent of the currently observed data and thus does not capture the actual loss of information induced by the prior in the case of prior-data conflict. The effective current sample size of a prior (Wiesenfarth and Calderazzo, 2020) instead relates the prior's impact to the number of (virtual) samples from the current data model. All aspects discussed show that, from the frequentist perspective, borrowing cannot be beneficial uniformly over all possible true parameter values. Benefits can, however, be obtained if the prior information is reliable and consistent with the current data.
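The type I error consequences of prior-data conflict can be illustrated with a self-contained Monte Carlo sketch (illustrative setup and numbers, not results from the talk): a power-prior analysis of a normal mean with known sigma, where the current data are generated under the null but the external data suggest a positive effect. With no borrowing (delta = 0) the one-sided 5% test holds its level; with even modest borrowing from conflicting external data, the rejection rate inflates well above the nominal level.

```python
import math
import random

def type1_error_with_borrowing(ext_mean, n_ext, n_cur, sigma, delta,
                               n_sim=20000, seed=1):
    """Monte Carlo estimate of the one-sided type I error rate of a
    power-prior analysis (normal mean, known sigma, flat initial prior)
    when the current data are generated under the null (true mean 0)
    but the external data have mean ext_mean."""
    rng = random.Random(seed)
    prec_ext = delta * n_ext / sigma ** 2   # precision retained from external data
    prec_cur = n_cur / sigma ** 2           # precision from the current trial
    post_sd = math.sqrt(1.0 / (prec_ext + prec_cur))
    rejections = 0
    for _ in range(n_sim):
        # Current-trial sample mean under the null hypothesis (true mean 0)
        cur_mean = rng.gauss(0.0, sigma / math.sqrt(n_cur))
        post_mean = (prec_ext * ext_mean + prec_cur * cur_mean) / (prec_ext + prec_cur)
        if post_mean / post_sd > 1.645:     # nominal one-sided 5% decision rule
            rejections += 1
    return rejections / n_sim

no_borrowing = type1_error_with_borrowing(0.5, 100, 100, 1.0, delta=0.0)  # near 0.05
with_borrowing = type1_error_with_borrowing(0.5, 100, 100, 1.0, delta=0.3)  # inflated
```

This is one side of the frequentist trade-off discussed above: the same mechanism that inflates the type I error under conflict delivers the power and mean squared error gains when the external and current data agree.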


Invited Sessions


Methodological and practical outcomes from the Adaptive Designs Working Group of the MRC-NIHR Trials Methodology Research Partnership



Exploring current practices in adaptive trials: patient information sheets, costing, and efficiently conducting interim analyses


Nina Wilson (Newcastle University)

Estimation after adaptive designs


David Robertson (University of Cambridge)

Making adaptive designs more accessible: a practical adaptive designs toolkit


Munya Dimairo (University of Sheffield)




Practical experiences of using software to design clinical trials using simulations



Using the “SIMulating PLatform trials Efficiently” (SIMPLE) R package to develop a simulator for a bespoke platform trial


Peter Jacko (Berry Consultants)

Flexible Clinical Trial Planning with the R Package rpact


Gernot Wassmer & Friedrich Pahlke (RPACT)

Design or simulation: What comes first?


Tobias Mielke (Janssen)

Discussant


Daniel Sabanés Bové (Roche)