Cross-sectional analysis characterizing the use of rank preserving structural failure time in oncology studies: changes to hazard ratio and frequency of inappropriate use

Ours is, to our knowledge, the first umbrella analysis of the use of RPSFT in cancer clinical trials and of its implications for inferences and results. First, we found that a sizable percentage of RPSFT studies (68%) involved medical writers or consulting companies. Second, we found that this method lowers the overall survival hazard ratio by a median of 0.1 points, which suggests a notable impact. Third, the rate of crossover explained only 19% of the variability in the change in hazard ratios. Fourth, RPSFT was used appropriately in 19% of cases (drugs tested for fundamental efficacy without a standard of care) but inappropriately in 81% (drugs tested for fundamental efficacy with a standard of care, or tested in sequence). We discuss these insights below.

One concerning finding from our study is that all RPSFT analyses were funded by drug sponsors (when funding was disclosed) and/or included at least one author employed by the drug sponsor. Furthermore, a notable percentage of studies used medical writers to report the results of the RPSFT analyses. Industry funding, while common, can introduce notable bias, skewing the literature toward publication of findings favorable to the drug company [13]. Methodological papers on RPSFT without industry ties were few [1, 4], while papers with financial ties to industry were numerous [2, 14, 15].

We found that the use of the RPSFT method lowers the overall survival hazard ratio by a median of 0.1 points; for example, an uncorrected hazard ratio of 0.80 would typically become roughly 0.70 after correction. This is a notable impact that rivals the effect sizes of therapies themselves [16]. For comparison, a previous analysis reported a pooled hazard ratio of 0.77 for all approved cancer drugs [17], yet almost 20% of the drugs in our analysis were not approved at the time of manuscript preparation.

In our study, we found that the correlation between the change from the uncorrected to the corrected OS hazard ratio and the percentage of individuals who crossed over to the experimental drug was low, suggesting that only a small portion of an RPSFT-corrected hazard ratio is explained by the percentage of control arm participants who cross over at progression. Furthermore, most studies (~52%) were conducted in situations where the drug was being tested for fundamental efficacy despite an existing standard of care, situations in which it is often inappropriate to cross patients over to the drug being tested. Another 29% of studies tested a drug already used in a later line being moved upfront, where some percentage of the control arm eventually received that therapy anyway, making the situation inappropriate for RPSFT analysis.
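To make the variance-explained figure concrete, the sketch below shows how such a number is typically computed with a simple linear regression. The crossover rates and hazard ratio changes in it are hypothetical illustrations, not values from our dataset.

```python
# Illustrative sketch: regressing the change in OS hazard ratio
# (uncorrected minus corrected) on the control-arm crossover rate.
# All numbers below are hypothetical, not values from our dataset.
from scipy.stats import linregress

crossover_pct = [25, 40, 55, 60, 72, 80, 85, 90]    # % of control arm crossing over
hr_change     = [0.02, 0.05, 0.18, 0.08, 0.12, 0.06, 0.22, 0.10]

fit = linregress(crossover_pct, hr_change)
# rvalue**2 is the share of variability in the HR change explained by crossover
print(f"slope = {fit.slope:.4f}, R^2 = {fit.rvalue**2:.2f}")
```

An R² near 0.19, as we observed, means that roughly four-fifths of the variability in the corrected-versus-uncorrected difference comes from factors other than how many patients crossed over.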

We found that only about one-quarter of studies tested a drug's sequential efficacy, and another 18% tested a drug's fundamental efficacy when there was no standard of care. Some researchers assert that crossover is an important element of randomized trials because ethics demands offering treatment options to patients whose disease progresses [18]. We contend that while this is true when no post-progression treatment options are available or when the tested drug is already approved in a later line, there are other situations where crossover is not appropriate [5]. Therefore, crossover, and methods to adjust for its effects, should not be applied indiscriminately.

Researchers have justified the use of RPSFT as a way to correct for crossover, and many have insisted that, because of numerically lower hazard ratios after RPSFT adjustment, the drug likely provided an OS benefit. However, we found only moderate agreement between the significance of the OS hazard ratio in the uncorrected and corrected analyses, suggesting that even with correction for crossover, the significance of OS findings often does not change. In other words, RPSFT correction often does not convert a non-significant OS hazard ratio into a significant one. Furthermore, an apparent improvement in OS benefit is likely due to a biased overestimation of a drug's effect, which has been previously reported [19]. This bias may arise because physicians are more likely to offer crossover treatment to patients who are healthier and would do better regardless of subsequent treatment [14].
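For readers less familiar with the mechanics being debated here, the sketch below illustrates RPSFT g-estimation on simulated data. It is a minimal, hypothetical example (simulated survival times, no censoring and hence no re-censoring step, and the open-source lifelines package for the log-rank test), not the code used in any of the studies we reviewed.

```python
# Minimal sketch of RPSFT g-estimation on simulated data (illustrative
# only; real analyses must also handle censoring via re-censoring).
# RPSFT models the counterfactual untreated survival time as
#   U(psi) = T_off + exp(psi) * T_on,
# and searches for the psi at which randomization balances U across arms.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n, psi_true = 300, -0.5                 # exp(-psi_true) ~ 1.65x longer life on drug
arm = rng.integers(0, 2, n)             # 1 = experimental, 0 = control
u_true = rng.exponential(12.0, n)       # counterfactual untreated survival (months)
cross = rng.uniform(0.3, 1.0, n)        # fraction of untreated time before crossover

# Experimental patients are on the drug throughout; control patients cross
# over partway through follow-up. Time on drug is stretched by exp(-psi_true).
t_off = np.where(arm == 1, 0.0, cross * u_true)
t_on = np.where(arm == 1, u_true, (1.0 - cross) * u_true) * np.exp(-psi_true)
event = np.ones(n, dtype=bool)          # every patient has an event (no censoring)

def logrank_stat(psi):
    """Log-rank test statistic comparing counterfactual times U(psi) by arm."""
    u = t_off + np.exp(psi) * t_on
    res = logrank_test(u[arm == 1], u[arm == 0],
                       event_observed_A=event[arm == 1],
                       event_observed_B=event[arm == 0])
    return res.test_statistic

# G-estimation: pick the psi whose counterfactual times look best balanced.
grid = np.linspace(-1.0, 0.5, 151)
psi_hat = grid[np.argmin([logrank_stat(p) for p in grid])]
print(f"estimated psi = {psi_hat:.2f} (true value {psi_true})")
```

A corrected hazard ratio is then typically derived by comparing the experimental arm's observed times with the control arm's counterfactual times at the estimated psi, which is one place the overestimation described above can enter.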

There have been several recent examples of an RPSFT analysis being incorporated into FDA submission data [20,21,22]. In these cited examples, the corrected OS data were deemed inappropriate for determining drug efficacy, or their use for that purpose was discouraged, and they had, or would have had, no bearing on the drug's approval. It is nonetheless concerning that drug manufacturers are beginning to incorporate these data into approval submissions. We encourage regulatory agencies and reviewers of drug data to uphold standards of appropriateness for crossover and its accompanying analyses.

Other classification systems have been proposed for interpreting correlation values [23, 24]. Under these interpretations, the correlations we observed were low to moderate, depending on whether the drugs were being tested for fundamental or sequential efficacy.

Strengths and limitations

There are at least three strengths and three limitations. The first strength is that this is the first umbrella analysis of RPSFT analyses. Second, we characterized the appropriateness of crossover and RPSFT analysis based on whether the study tested a drug's fundamental efficacy without a standard of care, tested a drug's fundamental efficacy with a standard of care, or tested the drug in sequence; this classification has not previously been applied and identified limitations of RPSFT use in practice. Third, we determined who funded and wrote the publications of the RPSFT analyses, thus identifying the sources of these analyses.

One limitation of our analysis is that our search may not have been exhaustive and may not have captured all studies with an RPSFT correction. However, our search was systematic, used multiple databases, and should not have differentially affected our results. Second, we included abstracts with limited reported data; for studies missing key data points, we searched ClinicalTrials.gov for other publications that might contain the pertinent information. Finally, our findings are likely not generalizable to oncology at large, because all the studies in our analysis were funded by drug sponsors, who have a financial interest in publishing only favorable results for their drugs.
