Correlating Drug Exposure with Drug Efficacy and
Toxicity
Once it is established that the blood levels of a novel oral anticancer
drug vary significantly from one patient to another and cannot be
practically estimated by means other than direct measurement, the
relationship between systemic drug exposure and drug efficacy and drug
toxicity should be investigated (Figure 1, Stage 2). Such studies can
provide evidence in favor of TDM and define the target (therapeutic)
exposure range by demonstrating increased treatment failures at
sub-therapeutic exposures and increased toxicity at supra-therapeutic
exposures. After all, if low drug levels cannot predict treatment
failure and high drug levels cannot predict adverse drug effects, TDM
will be of limited value.
Measuring Efficacy
A drug’s effect can be measured in various ways. In oncology, efficacy
endpoints such as response rates, progression-free survival, or overall
survival are typically used.15 These endpoints are
commonly derived from histologic and/or radiologic tumor evaluations,
but other assessments such as circulating tumor cells, circulating
cell-free tumor DNA, microRNA, or protein markers can also be used. In
addition, pharmacodynamic markers (e.g. measurable molecules
corresponding to drug target inhibition or downstream pathway activity)
may be available for some drugs, which may enable near-real-time PD
monitoring.16 Thus, for studying the relationship
between drug levels and effect, one or more efficacy endpoints, alone or
in combination with PD biomarkers, can be used.1–3
Measures of drug effect can be represented by dichotomous variables,
such as frequency of occurrence, or by continuous variables such as
concentrations of tumor markers.
To help decrease methodological heterogeneity in measuring drug
response, an international multidisciplinary working group developed
RECIST (Response Evaluation Criteria in Solid Tumours) criteria for the
evaluation of tumor burden.17 These criteria describe
standardized approaches to solid tumor size measurement, primarily using
imaging techniques, and define the outcomes of complete response (CR),
partial response (PR), stable disease (SD), and progressive disease
(PD).17
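These size-based categories can be sketched as a simple classifier. The helper below is illustrative only (the function name and arguments are ours): it applies the RECIST 1.1 target-lesion thresholds (PR at a ≥30% decrease from baseline; PD at a ≥20% and ≥5 mm increase over nadir) and ignores non-target and new lesions, which a full RECIST evaluation must also consider.

```python
def recist_target_response(baseline_mm, current_mm, nadir_mm):
    """Classify target-lesion response using RECIST 1.1 size thresholds.

    Arguments are sums of the longest diameters of target lesions (mm).
    A complete RECIST evaluation also assesses non-target and new lesions.
    """
    if current_mm == 0:
        return "CR"  # complete disappearance of target lesions
    # PD: >=20% increase over nadir AND >=5 mm absolute increase
    if (nadir_mm > 0 and current_mm - nadir_mm >= 5
            and (current_mm - nadir_mm) / nadir_mm >= 0.20):
        return "PD"
    # PR: >=30% decrease from baseline
    if (baseline_mm - current_mm) / baseline_mm >= 0.30:
        return "PR"
    return "SD"  # neither PR nor PD criteria met
```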
The time required to achieve the chosen efficacy endpoints
is also important for study design. The lag in time between when drug
exposure is initially assessed and when clinical response can be
detected is typically on the order of weeks to months or even years. On
these timescales, the initial exposure assessment may no longer
accurately represent the total drug exposure over the course of
treatment. Thus, in studies with long treatment duration, serial
exposure assessments over time may be particularly useful for capturing
the overall drug exposure more accurately.
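As a sketch of how serial measurements could be condensed into a single exposure metric, one option is a time-weighted average of trough levels via the trapezoidal rule. The function below is a minimal illustration with arbitrary units, not a validated pharmacokinetic method.

```python
def time_weighted_mean(times, levels):
    """Time-weighted average of serial drug levels (trapezoidal rule).

    times: sampling times in ascending order (e.g. days on treatment)
    levels: drug concentrations measured at those times
    """
    # Area under the level-vs-time curve, trapezoid by trapezoid
    auc = sum((t2 - t1) * (l1 + l2) / 2
              for (t1, l1), (t2, l2)
              in zip(zip(times, levels), zip(times[1:], levels[1:])))
    # Normalize by the total observation interval
    return auc / (times[-1] - times[0])
```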
The pre-treatment dynamics of outcome measures are also important to
consider. For example, high heterogeneity in the pre-treatment rates of
tumor growth and trajectories of biomarker levels between individuals in
a study population is likely to result in high inter-individual
variability in these measures during treatment. Consequently, the
statistical power of the study suffers, requiring increased numbers of
participants. As a further example, a small decrease in the rate of
tumor growth after treatment initiation may be interpreted as disease
progression in a patient with a fast-growing tumor and as stable disease
in a patient with a slow-growing tumor. This suggests that several
pre-treatment assessments of the patient’s baseline tumor size or
biomarker levels, as well as the use of a control group, may help more
accurately characterize the effect of the drug.
Measuring Toxicity
The side effects that occur during treatment can be a consequence of a
drug’s effect, related to a drug’s unwanted but expected off-target
effects, or they can be idiopathic. Depending on the mechanism, side
effects can manifest relatively quickly, within hours or days, or can
take months to develop. Similar to the assessment of drug efficacy
discussed above, the prevalence and timing of drug toxicity will impact
the study design with respect to the number of participants required,
the frequency of toxicity assessments, and the duration of toxicity
monitoring.
Drug-related toxicity often correlates with drug dose and typically
subsides following dose decrease or interruption. However, the
occurrence of adverse drug events may also seem stochastic and they may
appear and disappear without temporally related dose adjustments. In
this context, variations in drug exposure (at the same prescribed dose)
may correlate with toxicity. Thus, serial exposure assessments over time
may be particularly helpful for relating fluctuations in drug trough
levels to toxicity symptoms, especially in individuals concurrently
treated with other drugs prone to interactions or toxicities of their
own.
The approach to capturing and quantifying adverse drug effect data must
also be considered. Self-administered patient questionnaires (patient
reported outcomes or PROs) may be used to supplement clinical
assessments.18 Toxicity may be represented as
dichotomous (either present or not), categorical (based on severity) or
even continuous (e.g. elevation in blood pressure) variables. In
addition, the National Cancer Institute (NCI) provides Common
Terminology Criteria for Adverse Events (CTCAE) to help standardize the
description and grading of adverse events.19,20
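To illustrate these three representations, the hypothetical helper below encodes a continuous measurement (systolic blood pressure) as an ordinal grade and a dichotomous flag. The cut-offs are invented for illustration and are not actual CTCAE criteria.

```python
def encode_toxicity(systolic_bp, thresholds=(140, 160, 180)):
    """Encode a continuous toxicity measure as ordinal and dichotomous
    variables. Cut-offs (mmHg) are illustrative, not CTCAE criteria."""
    # Ordinal grade 0-3: count of thresholds met or exceeded
    grade = sum(systolic_bp >= t for t in thresholds)
    # Dichotomous form: toxicity present or absent
    present = grade > 0
    return {"continuous": systolic_bp, "grade": grade, "present": present}
```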
Of note, drug toxicity itself can sometimes be used to guide dose
optimization. Such “dosing to toxicity” strategies have long been used
for chemotherapy but may also have a role in dosing of oral targeted
small molecule drugs.16 A relevant review on the
susceptibility to adverse drug reactions was recently published in this
journal.21 The described susceptibility factors
included the type of immunological reaction, genetics, age, sex,
physiological changes (such as pregnancy), exogenous factors (such as
interacting drugs), and diseases. Notably, the authors highlight that
there may be significant inter-patient variability in the dose-response
curves not only for drug benefits but also for harm, providing an
illustration of how some (hypersusceptible) patients may experience
toxicity at drug concentrations insufficient for
efficacy.21 Importantly, this is one context in which
TDM has a clear advantage: in hypersusceptible patients, a
dosing-to-toxicity approach results in continued treatment at
ineffective doses, whereas TDM reveals the sub-therapeutic exposure and
can inform a change in therapy.
Standardization of Assays and
Methods
For the vast majority of new drugs there are no FDA-approved
quantitative assays. Instead, new drugs are typically quantified by
assays developed in individual laboratories, known as laboratory
developed tests (LDTs). The required levels of quality assurance for
these tests vary widely, in part depending on the laboratory’s local and
other regulations (e.g. CLIA, GLP). In addition, LDTs developed
in different labs may employ distinct methodologies (e.g.
immunoassays, liquid chromatography-tandem mass spectrometry, etc.).
Taken together, this can lead to significant inter-laboratory and
sometimes even intra-laboratory differences in results. External
proficiency testing programs can help minimize such differences but,
more often than not, such programs do not exist for new drugs.
Therefore, it is important to be aware that lab-to-lab differences in
the measurement of drug levels may be a significant contributor to noise
in TDM studies. Utilizing the same laboratory with a thoroughly
validated method for all drug level measurements for a precision dosing
study may be a worthwhile consideration.
The same holds true for methods and approaches for quantifying drug
effects. The challenges associated with bioanalytical measurements of
pharmacodynamic biomarkers are analogous to those for drug assays.
Similarly, there may be significant inter-institution and even
intra-institution variability in imaging or anatomical techniques used
for tumor assessments. Again, this variability may be a considerable
source of noise in TDM studies.
In order to improve experimental reproducibility as well as
applicability and translatability of results, attempts should be made to
standardize the assays and methods. As mentioned above, RECIST criteria
can help standardize solid tumor size measurements and NCI’s CTCAE can
help standardize assessments of drug toxicity.17,19
Similarly, guidance from the NCI also exists for the development and
incorporation of biomarker studies in drug trials.22
The standardization of assays for oral small molecules for cancer is
lagging, although some proficiency testing programs have recently become
available.23
Study Design Considerations for Exposure-Response
Relationships
In contrast to biomarker studies, which can obtain useful data through
retrospective analysis of repository samples collected during routine
patient care, TDM studies aiming to investigate the correlation of drug
levels with effects and toxicity will likely require prospective
collection of samples. This is because the relative timing of drug
intake and blood sampling is critically important to interpreting the
obtained drug level results. In samples without associated data on
timing of last drug intake (most repository samples), the drug levels
may represent trough, peak, or intermediate time points. In addition,
exposure-response relationship studies are typically observational (no
dose adjustment based on results) rather than interventional, because
dose adjustment after blood level measurement but before response
measurement would confound interpretation of the results. Many examples
of such studies for oral small molecule anticancer drugs have been
summarized in reviews.1–5
Although the necessity of conducting such studies prospectively presents
certain challenges (e.g. the need for preliminary data in grant
proposals and long accrual times), prospective studies tend to be less
prone to certain types of biases such as recall bias and non-recorded
confounders. Other types of bias, such as selection bias, can still
occur in prospective studies.24,25
As discussed above, numerous choices are available with respect to the
frequency and duration of exposure sampling as well as the timing,
prevalence, and quantification of clinical endpoints and toxicity.
Consequently, study design and power calculations should take into
account the temporal relationships between drug levels and efficacy and
toxicity as well as the anticipated frequency of measured outcomes and
adverse events. Although there are numerous resources to guide power
calculations for PK studies, the literature on power calculations for
TDM studies seems to be lacking.26,27
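As one minimal example of the kind of calculation involved, the sketch below estimates the number of participants needed per exposure group to detect a difference in response rates between, say, sub-therapeutic and therapeutic exposure strata. It uses the standard normal-approximation formula for comparing two proportions; the assumed rates in the usage example are hypothetical.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group to compare two proportions
    (two-sided test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2                      # pooled proportion
    numerator = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical rates: 30% response at sub-therapeutic exposure vs 55% at
# therapeutic exposure yields roughly 61 participants per group.
```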
Data from the exposure-response relationship studies can be described
using various forms of regression analysis or more simply by comparing
the outcomes of patients stratified by, for example,
Cmin quartiles or deciles.25,28–30
Ultimately, the goal of such studies is to define a therapeutic exposure
range below which there is increased risk of lack of efficacy and above
which there is increased risk of toxicity.1–5
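A minimal sketch of the stratification approach, assuming hypothetical paired observations of trough concentration (Cmin) and dichotomous response (the function name and data are ours):

```python
from statistics import quantiles

def response_by_cmin_quartile(cmin, responded):
    """Stratify patients by Cmin quartile and tabulate response rates.

    cmin: list of trough concentrations
    responded: parallel list of 0/1 response indicators
    """
    q1, q2, q3 = quantiles(cmin, n=4)  # quartile cut points

    def quartile(x):
        return 0 if x <= q1 else 1 if x <= q2 else 2 if x <= q3 else 3

    counts = [[0, 0] for _ in range(4)]  # [responders, total] per quartile
    for x, r in zip(cmin, responded):
        q = quartile(x)
        counts[q][0] += r
        counts[q][1] += 1
    return [resp / total if total else float("nan")
            for resp, total in counts]
```

A clear gradient in response rate from the lowest to the highest quartile would support an exposure-response relationship worth modeling formally.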
It should be mentioned that in vitro and pre-clinical
in vivo experiments may also demonstrate concentration-effect
relationships and can be used to supplement the results obtained in
clinical studies.4,5 For solid tumors, blood level
measurements may be complemented by in vivo studies that also
measure drug concentrations in tumor tissue.31