Statistics in Oncology Clinical Trials


Phase II design: history and evolution

Larry Rubinstein

Abstract

Historically, phase II trials in oncology generally had a single-arm design, constructed to distinguish between a tumor response rate considered to indicate a lack of promise (often 5%) and a rate that would indicate potential benefit (often 20%), with a one-sided type I error rate of 5% to 10% and a type II error rate of 10% to 20%. The dominant use of this design was based on the premise that an agent that could not produce a tumor response rate of 20% was unlikely to produce a clinically meaningful overall survival (OS) or progression-free survival (PFS) benefit in subsequent phase III testing. Recent trends in oncology drug development have challenged this paradigm. Many phase II trials are now designed to assess the promise of a molecularly targeted agent, given either alone or in combination with another regimen. In many cases these agents are not anticipated to produce or improve tumor response rates; rather, the desired outcome is improved PFS or OS through means other than the direct cell killing evidenced by tumor shrinkage. In general, PFS is the preferred end point for such phase II trials, as it is more statistically efficient than OS (the time to event is substantially shorter and the treatment effect is not diluted by salvage treatment). However, in settings with no effective salvage therapy, or in diseases for which the timing of progression assessment is a concern, OS can be chosen as the end point. We have reviewed the history and evolution of the phase II trial over the past 50 years, with particular attention to oncology trials. This review is not meant to be exhaustive, but rather to cover the most commonly used designs in self-contained detail, so as to provide a primer for the young investigator and reminders for the more experienced.
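The single-arm design described above amounts to an exact binomial hypothesis test: enroll n patients and declare the agent promising if at least r respond, choosing the smallest (n, r) that keeps the type I error at or below α under the null response rate and the type II error at or below β under the alternative. The sketch below is purely illustrative and is not taken from the article; it assumes SciPy is available and plugs in the representative values from the abstract (p0 = 5%, p1 = 20%, one-sided α = 0.10, β = 0.10).

```python
# Illustrative sketch (not from the article): exhaustive search for the
# smallest single-stage, single-arm binomial design distinguishing a
# "not promising" response rate p0 from a "promising" rate p1 with
# one-sided type I error <= alpha and type II error <= beta.
from scipy.stats import binom

def single_stage_design(p0=0.05, p1=0.20, alpha=0.10, beta=0.10, max_n=200):
    """Return the smallest (n, r) such that declaring the agent promising
    when >= r responses are seen among n patients controls both error rates."""
    for n in range(1, max_n + 1):
        for r in range(0, n + 1):
            # binom.sf(r - 1, n, p) = P(X >= r) for X ~ Binomial(n, p)
            type1 = binom.sf(r - 1, n, p0)  # false positive under H0: p = p0
            power = binom.sf(r - 1, n, p1)  # true positive under H1: p = p1
            if type1 <= alpha and power >= 1 - beta:
                return n, r, type1, power
    raise ValueError("no design found within max_n patients")

n, r, type1, power = single_stage_design()
print(f"n = {n}, declare promising if responses >= {r} "
      f"(type I = {type1:.3f}, power = {power:.3f})")
```

For these inputs the search should return a design of roughly 32 patients, with the agent declared promising if four or more responses are observed; tightening α or β toward 0.05 enlarges the required sample size accordingly.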
