Assessing Variability in Systemic Drug
Exposure
Some drugs (e.g. intravenously administered medications with
well-known pharmacokinetics) have relatively consistent and predictable
dose-exposure relationships. The blood levels of such drugs can be
estimated reasonably well and measuring them may not provide additional
valuable information. Here, TDM is of limited relevance, except perhaps
for cases of medication non-compliance or failure of organs involved in
drug metabolism and excretion. For many orally administered medications,
on the other hand, numerous additional variables such as bioavailability
and first-pass metabolism widen the range of possible blood
concentrations and make predicting systemic drug exposure from the dose
much more difficult. Thus, the earliest evidence in support of TDM for a
new oral anticancer drug would be the observation of a wide, not
otherwise predictable range of blood drug levels between patients on
the same treatment regimen (Figure 1, Stage 1).
There are several ways of measuring the systemic exposure to a drug.
Examples include comprehensive sampling at numerous time points to
determine the area under the concentration-time curve (AUC), as well as
parsimonious or limited sampling strategies such as trough
(Cmin) levels, peak levels, or a combination thereof.
For routine monitoring of oral drugs that are taken once or twice daily
in an outpatient setting, measuring the drug concentration in a single
sample collected prior to the next dose (i.e. trough level
monitoring) is often the only practical option. Even during phase 2 and
phase 3 trials, blood sampling is restricted and, if possible, sparse
sampling strategies (e.g. trough concentrations) should be used
to study PK and PK/PD. We will therefore focus only on trough level
monitoring.
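To make the contrast between comprehensive and sparse sampling concrete, the
sketch below estimates the AUC over one dosing interval from a densely
sampled concentration-time profile using the linear trapezoidal rule and, for
comparison, takes the pre-dose sample as the trough level; the time points
and concentrations are hypothetical.

```python
import numpy as np

# Hypothetical concentration-time profile over one 24-h dosing interval
# (times in hours after dosing, concentrations in ng/mL).
t = np.array([0, 0.5, 1, 2, 4, 8, 12, 24], dtype=float)
c = np.array([150, 620, 900, 850, 600, 380, 260, 140], dtype=float)

# Comprehensive sampling: AUC over the dosing interval (linear trapezoidal rule).
auc_0_24 = np.trapz(c, t)   # ng*h/mL

# Sparse sampling: a single pre-dose (trough) level as a proxy for exposure.
cmin = c[0]

print(f"AUC(0-24 h) = {auc_0_24:.0f} ng*h/mL, Cmin = {cmin:.0f} ng/mL")
```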
The relationship between a trough level and the systemic exposure, as
determined by the AUC, can often be obtained from phase 1 and phase 2 clinical
studies.8–10 Although most early-phase trials collect
the data to derive this correlation, it may not be explicitly reported.
Using phase 1 and 2 study data, one can get a reasonable idea of whether trough
level monitoring could be used as a proxy for the more comprehensive AUC
analysis. It is important to keep in mind that a clinical drug
development PK study is usually much more controlled in terms of drug
intake and sample collection than routine patient care and that
parameters obtained in such studies may not translate to real-world
patients. Therefore, there may be added value from assessing the
relationship between trough levels and AUC in a patient care setting.
Factors that may confound the relationship between Cmin and AUC include
PK drug-drug interactions, alterations in PK as a result of
(auto-)induction or inhibition of metabolizing enzymes and transporters,
as well as inaccuracies in determining the triad of time of drug intake,
time of sample collection, and half-life of a drug. All
components of this triad are relevant for an accurate assessment of the
systemic exposure and all may differ from patient to
patient.11 Nevertheless, it is intriguing that trough levels can still be
useful for assessing systemic exposure even when their correlation with
the AUC is less than perfect. Indeed, even for some of the drugs most
frequently monitored via trough levels, such as cyclosporine and
tacrolimus, the correlation coefficient between trough levels and
systemic exposure is in the 0.7-0.8 range.12,13
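One way to examine how well trough levels track the AUC in a given dataset is
a simple correlation and regression on paired Cmin and AUC values, as sketched
below; the paired values are hypothetical and the analysis deliberately ignores
the confounders discussed above.

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations (e.g. from a phase 1/2 dataset).
cmin = np.array([60, 95, 120, 220, 310, 450, 540, 780], dtype=float)    # ng/mL
auc = np.array([1.4, 2.1, 2.6, 4.9, 6.5, 9.8, 11.0, 17.2]) * 1000.0     # ng*h/mL

# Correlation on log-transformed values, since PK data are often log-normal.
r, p = stats.pearsonr(np.log(cmin), np.log(auc))

# Regression of log(AUC) on log(Cmin), so a trough level can be mapped to an AUC.
slope, intercept, *_ = stats.linregress(np.log(cmin), np.log(auc))
auc_at_200 = np.exp(intercept + slope * np.log(200.0))  # predicted AUC at Cmin = 200 ng/mL

print(f"r = {r:.2f} (p = {p:.1e}); AUC at Cmin 200 ng/mL ~ {auc_at_200:.0f} ng*h/mL")
```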
For effective TDM, in addition to being able to measure systemic
exposure (e.g. trough levels), one must also be able to predict
how changes in dosing will change the drug exposure. Consequently, the
dose-exposure relationships must be well-characterized. To this end,
serial sampling of drug exposure in the same individual over time
provides crucial information and should be incorporated into precision
dosing studies whenever possible.14 First, it enables
evaluation of intra-individual exposure variability over time. This
helps estimate how well the systemic exposure can be predicted from dose
alterations. Second, it improves estimation of the total systemic
exposure over the course of treatment. Finally, it allows for
determination of additional parameters, such as the maximum or minimum
blood drug concentrations, which may also be relevant for predicting
drug efficacy, resistance, and toxicity.
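As a simple illustration of using a characterized dose-exposure relationship,
the sketch below applies a proportional dose adjustment, which assumes linear
(dose-proportional) PK and stable intra-individual exposure over time; the
target trough and measured values are hypothetical, and any real adjustment
would have to respect available tablet strengths and the approved dosing range.

```python
def proportional_dose_adjustment(current_dose_mg: float,
                                 measured_cmin: float,
                                 target_cmin: float) -> float:
    """Suggest a new daily dose assuming exposure scales linearly with dose.

    This simple rule only holds for drugs with (approximately) linear PK;
    non-linear PK requires a population model instead.
    """
    return current_dose_mg * target_cmin / measured_cmin


# Hypothetical example: the measured trough is below an assumed efficacy threshold.
new_dose = proportional_dose_adjustment(current_dose_mg=400.0,
                                        measured_cmin=450.0,   # ng/mL
                                        target_cmin=750.0)     # ng/mL
print(f"suggested dose: {new_dose:.0f} mg/day")  # ~667 mg, to be rounded to a feasible dose
```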
The inter- and intra-individual PK variability and the strength of
correlation between trough levels and AUC are important considerations
for calculation of sample sizes in clinical studies. These parameters
should guide not only the number of study participants but also the
number of samples per individual and the sampling frequency.
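As a rough indication of how the anticipated strength of the Cmin-AUC
correlation drives the required number of patients, the sketch below uses the
standard Fisher z-transformation approximation for a simple correlation test;
the correlation values, significance level, and power are illustrative, and
the calculation does not account for repeated samples per individual.

```python
import math
from scipy import stats

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate number of patients needed to detect a correlation r
    (two-sided test of H0: r = 0), using the Fisher z-transformation."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    fisher_z = 0.5 * math.log((1 + r) / (1 - r))
    return math.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

# Illustrative values: a strong Cmin-AUC correlation needs far fewer patients.
for r in (0.5, 0.7, 0.8):
    print(f"r = {r}: n ~ {n_for_correlation(r)}")
```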