7 Phase I: Human Pharmacology
First-in-human administration marks the transition from preclinical models to clinical research. Until that point, everything known about the compound comes from test tubes, cell cultures, and animal models. Phase I studies provide the first glimpse of how the drug behaves in humans.
Phase I studies are traditionally called first-in-human trials, though this label applies specifically to the initial study in a development program. The broader category of Phase I encompasses all studies primarily designed to assess safety, tolerability, and pharmacokinetics in humans (International Council for Harmonisation 2021). In the current R&D landscape, these early-phase trials are increasingly the domain of Emerging Biopharma (EBP) companies, which sponsored 65% of all Phase I trial starts in 2024—more than twice the share of large pharmaceutical companies (24%) (IQVIA Institute for Human Data Science 2025).
What happens when a person takes this drug? Is it absorbed? How quickly does it reach the bloodstream, and how long does it stay there? What biological effects does it produce? And what adverse effects occur, and at what doses do they become unacceptable?
First-in-human studies occupy a unique space in clinical research. They are typically conducted in healthy volunteers rather than patients—after all, it would be inappropriate to expose sick people to a drug whose behavior in humans is completely unknown. The exception is oncology and certain other therapeutic areas where the expected toxicity is too significant to justify giving the drug to healthy individuals.
The starting dose for a first-in-human study is carefully calculated from preclinical data. Typically, it is derived from the no observed adverse effect level (NOAEL) in the most sensitive animal species, adjusted for differences in body surface area. This dose is chosen to be far below what would be expected to cause toxicity—the goal is to start small and proceed cautiously.
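The body-surface-area adjustment described above can be sketched numerically. The conversion factors and the default 10-fold safety factor below follow the FDA's 2005 guidance on maximum recommended starting doses; the rat NOAEL used in the example is hypothetical.

```python
# Sketch of the body-surface-area approach to a maximum recommended
# starting dose (MRSD), per FDA (2005) guidance. Km factors convert
# mg/kg doses to mg/m2; the rat NOAEL value is hypothetical.

KM = {"mouse": 3, "rat": 6, "dog": 20, "human": 37}

def mrsd_mg_per_kg(noael_mg_per_kg: float, species: str,
                   safety_factor: float = 10.0) -> float:
    """Convert an animal NOAEL to a human starting dose (mg/kg)."""
    hed = noael_mg_per_kg * KM[species] / KM["human"]  # human equivalent dose
    return hed / safety_factor                         # divide by safety margin

# Hypothetical rat NOAEL of 50 mg/kg:
print(round(mrsd_mg_per_kg(50, "rat"), 3))  # 0.811
```

The default safety factor may be raised further when, as after TGN1412, the mechanism of action suggests that body-surface-area scaling alone could underestimate human sensitivity.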
Dose escalation follows strict rules. After the first cohort receives the starting dose and is observed for signs of toxicity, the dose may be increased for the next cohort. The escalation continues until a maximum tolerated dose (MTD) is reached—the highest dose that can be given without unacceptable toxicity—or until the desired pharmacological effect is achieved.
The 2006 tragedy involving TGN1412, in which six healthy volunteers suffered life-threatening immune reactions despite receiving a dose calculated by conventional methods, led to heightened scrutiny of first-in-human study design. Subsequent guidance emphasized understanding target biology, considering mechanism of action when selecting doses, staggering dosing within cohorts, and extending observation periods before escalation.
7.1 Single and Multiple Ascending Dose Studies
Most Phase I programs include both single ascending dose (SAD) and multiple ascending dose (MAD) studies.
In SAD studies, small cohorts—typically 6 to 8 subjects—receive a single dose of the drug, and their safety and pharmacokinetics are carefully monitored. Once the data from one cohort is reviewed and found acceptable, the next cohort receives a higher dose. Placebo is typically included within each cohort to help distinguish drug effects from background noise.
MAD studies come next, evaluating what happens when the drug is given repeatedly over the course of days or weeks. These studies answer questions about drug accumulation, time to reach steady state, and tolerability with repeated exposure. They bridge from the artificial world of single doses to the clinical reality of chronic treatment.
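The accumulation question that MAD studies address has a simple closed form under the assumption of linear (first-order) elimination, which can be sketched as follows; the half-life and dosing interval are hypothetical.

```python
import math

def accumulation_ratio(half_life_h: float, tau_h: float) -> float:
    """Predicted steady-state accumulation for repeated dosing every tau hours,
    assuming linear (first-order) elimination."""
    k = math.log(2) / half_life_h          # elimination rate constant
    return 1.0 / (1.0 - math.exp(-k * tau_h))

# Hypothetical drug with a 24 h half-life, dosed once daily:
print(round(accumulation_ratio(24, 24), 2))  # 2.0 — steady-state levels double
```

The same assumption gives the familiar rule of thumb that steady state is reached after roughly four to five half-lives; MAD studies test whether the observed accumulation matches these linear predictions or reveals nonlinear kinetics.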
7.2 Characterizing Pharmacokinetics
A primary goal of Phase I is to determine how the drug moves through the body—its pharmacokinetics (PK) (International Council for Harmonisation 2021). This involves measuring drug concentrations in blood (and sometimes other fluids) at multiple time points after dosing.
From these measurements, pharmacokineticists calculate key parameters (summarized in Table 7.1):
| PK Parameter | Symbol | Definition | Formula | Clinical Implication |
|---|---|---|---|---|
| Maximum Concentration | Cmax | Peak drug concentration achieved | Observed directly from PK curve | Indicates peak exposure; related to tolerability |
| Time to Maximum | Tmax | Time to reach peak concentration | Observed directly from PK curve | Related to absorption rate; affects onset of action |
| Area Under Curve | AUC | Total exposure over time | \(\int_0^{\infty} C(t) \, dt\) | Proportional to amount absorbed; key PK/PD metric |
| Half-life | t1/2 | Time for concentration to decrease by 50% | \(t_{1/2} = \frac{0.693 \times V_d}{CL}\) | Determines dosing interval needed for steady state |
| Clearance | CL | Volume of plasma cleared per unit time | \(CL = \frac{Dose}{AUC}\) | Efficiency of drug elimination; guides dosing |
| Volume of Distribution | Vd | Apparent volume drug distributes into | \(V_d = \frac{Dose}{C_0}\) | Suggests tissue distribution; affects loading doses |
| Bioavailability | F | Fraction of dose reaching systemic circulation | \(F = \frac{AUC_{oral}}{AUC_{IV}}\) | Determines oral vs. IV dose equivalence |
These parameters have direct practical implications. Half-life determines how often patients must take the drug: a 4-hour half-life typically requires dosing three or four times daily, while a 24-hour half-life allows once-daily dosing. Caffeine has a half-life of about 5 hours, which is why morning coffee wears off by afternoon; some antidepressants have half-lives exceeding 100 hours, which is why missing a dose matters less but also why side effects persist after discontinuation.
Volume of distribution reveals where the drug goes. A Vd close to plasma volume (about 3 liters) suggests the drug stays in the bloodstream—useful for treating blood infections but unable to reach intracellular targets. A very large Vd (hundreds of liters, far exceeding actual body volume) indicates extensive tissue binding—the drug accumulates in fat, muscle, or specific organs. Chloroquine has a Vd of over 200 liters per kilogram because it concentrates in tissues; this is why loading doses are needed and why the drug persists for weeks after the last dose.
Clearance determines the dose needed to maintain therapeutic levels. A drug with high clearance is rapidly eliminated and requires higher or more frequent doses; a drug with low clearance accumulates and requires careful dose titration to avoid toxicity. Patients with impaired kidney or liver function have reduced clearance, which is why dose adjustments are required in these populations.
Bioavailability explains why oral and intravenous doses differ. A drug with 50% oral bioavailability requires twice the oral dose to achieve the same exposure as an IV dose. Some drugs have bioavailability below 10%, making oral administration impractical; others are destroyed by stomach acid or extensively metabolized by the liver before reaching circulation (the “first-pass effect”).
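The parameters in Table 7.1 are typically derived by non-compartmental analysis of the measured concentration-time profile. A minimal sketch, using a hypothetical profile: Cmax and Tmax are read directly off the curve, AUC is computed by the linear trapezoidal rule, and half-life comes from a log-linear fit of the terminal points.

```python
import math

# Hypothetical concentration-time profile after a single oral dose:
times = [0, 0.5, 1, 2, 4, 8, 12, 24]      # hours post-dose
conc  = [0, 40, 80, 60, 35, 12, 5, 0.4]   # ng/mL

cmax = max(conc)                 # peak concentration, read off the curve
tmax = times[conc.index(cmax)]   # time of the peak

# AUC(0-24 h) by the linear trapezoidal rule
auc = sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
          for i in range(len(times) - 1))

# Terminal half-life from a log-linear fit of the last three points
tail_t = times[-3:]
tail_y = [math.log(c) for c in conc[-3:]]
n = len(tail_t)
slope = (n * sum(t * y for t, y in zip(tail_t, tail_y))
         - sum(tail_t) * sum(tail_y)) / (n * sum(t * t for t in tail_t)
                                         - sum(tail_t) ** 2)
t_half = math.log(2) / -slope

print(f"Cmax={cmax} ng/mL, Tmax={tmax} h, AUC={auc:.1f} ng·h/mL, t1/2={t_half:.1f} h")
```

From these, clearance follows as Dose/AUC and volume of distribution as CL × t½ / 0.693, completing the parameter set in Table 7.1.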
7.3 Exploring Pharmacodynamics
Pharmacokinetics tells us what the body does to the drug; pharmacodynamics (PD) tells us what the drug does to the body. Phase I studies often include pharmacodynamic endpoints—biomarkers or biological effects that indicate the drug is producing its intended action.
Linking PK and PD through PK/PD modeling enables prediction of the doses and dosing regimens most likely to be effective in later trials. If a biomarker correlates with drug exposure, and if that biomarker is a reasonable predictor of clinical effect, PK/PD models can guide dose selection for Phase II.
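The simplest such exposure-response link is a direct Emax model, in which effect rises hyperbolically with concentration and saturates at a maximum. A sketch, with hypothetical Emax and EC50 values:

```python
def emax_effect(conc: float, emax: float = 100.0, ec50: float = 10.0) -> float:
    """Direct Emax model: predicted effect (e.g., % biomarker change) at a
    given drug concentration. Emax and EC50 values are hypothetical."""
    return emax * conc / (ec50 + conc)

# At C = EC50 the model predicts half-maximal effect:
print(emax_effect(10.0))  # 50.0
```

Fitting such a model to Phase I biomarker data indicates, for example, what fraction of maximal effect a candidate Phase II dose is expected to achieve at trough concentrations.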
7.4 Special Studies
Phase I encompasses more than just first-in-human and dose-escalation studies. A variety of specialized studies (Table 7.2) are typically conducted during this phase (International Council for Harmonisation 2021; U.S. Food and Drug Administration 2024):
| Study Type | Purpose | Design | Regulatory Requirement |
|---|---|---|---|
| Bioavailability | Determine what fraction of dose reaches circulation | Compare oral vs. IV administration (absolute) or different formulations (relative) | Required for oral formulations |
| Bioequivalence | Show new formulation performs like reference product | Crossover design comparing test vs. reference; assess if 90% CI of AUC/Cmax ratio within 80-125% | Required for formulation changes, generics |
| Food Effect | Assess how meals affect absorption | Crossover comparing fed vs. fasted states | Required; informs label dosing instructions |
| Drug-Drug Interaction (DDI) | Evaluate interaction with other medications | Test investigational drug with probe substrates/inhibitors/inducers of CYP450 enzymes | Required for commonly co-administered drugs |
| Hepatic Impairment | PK in patients with liver dysfunction | Compare normal vs. mild/moderate/severe hepatic impairment | Required if drug significantly metabolized |
| Renal Impairment | PK in patients with kidney dysfunction | Compare normal vs. mild/moderate/severe renal impairment | Required if drug renally eliminated |
| Elderly Subjects | PK in aged population (≥65 years) | Compare young vs. elderly PK parameters | Required for drugs intended for elderly use |
| Thorough QT | Assess cardiac repolarization risk | High therapeutic and supratherapeutic doses; measure QTc prolongation | Required unless waived by alternative data |
These studies ensure that appropriate dosing recommendations can be provided for all who might need the drug, accounting for physiological differences and potential interactions.
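The bioequivalence criterion in Table 7.2 can be made concrete: the 90% confidence interval of the test/reference geometric mean ratio must fall within 80-125%. A simplified sketch on hypothetical paired AUC values (a real analysis models the full crossover design):

```python
import math
import statistics

# Hypothetical paired AUC values from a crossover study (one pair per subject):
test_auc = [95, 102, 88, 110, 97, 105, 92, 99]
ref_auc  = [100, 98, 91, 104, 101, 99, 95, 103]

# Bioequivalence statistics are computed on the log scale
log_ratios = [math.log(t / r) for t, r in zip(test_auc, ref_auc)]
n = len(log_ratios)
mean = statistics.mean(log_ratios)
se = statistics.stdev(log_ratios) / math.sqrt(n)

t_crit = 1.895  # one-sided 95% t quantile, n - 1 = 7 df (fixed for this sketch)
lo = math.exp(mean - t_crit * se)   # back-transform CI bounds to ratio scale
hi = math.exp(mean + t_crit * se)
bioequivalent = 0.80 <= lo and hi <= 1.25
print(round(lo, 3), round(hi, 3), bioequivalent)
```

The asymmetric 80-125% window is symmetric on the log scale (ln 0.8 = -ln 1.25), which is why the analysis is performed on log-transformed data.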
7.5 Phase I in Oncology
Phase I oncology studies differ substantially from traditional Phase I programs. Because cancer patients face serious illness with limited treatment options, and because anticancer drugs often cause significant toxicity, these studies are conducted in patients rather than healthy volunteers.
The objective is typically to identify the MTD—the highest dose that can be given without unacceptable toxicity. Traditional designs like the 3+3 enroll three patients at each dose level; if no dose-limiting toxicity (DLT) occurs, the dose is escalated. If one patient experiences DLT, three more are enrolled at that dose. If two or more patients experience DLT, the dose is considered too high.
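The 3+3 escalation rule just described reduces to a small decision function, sketched here (variants exist; this follows the common form in which a dose with at most 1 DLT in 6 patients permits escalation):

```python
def three_plus_three(dlt_count: int, cohort_size: int) -> str:
    """Decide the next step after observing dose-limiting toxicities (DLTs)
    in a cohort of 3, or in the expanded cohort of 6, at the current dose."""
    if cohort_size == 3:
        if dlt_count == 0:
            return "escalate"      # 0/3 DLT: move to the next dose level
        if dlt_count == 1:
            return "expand"        # 1/3 DLT: enroll 3 more at the same dose
        return "de-escalate"       # >=2/3 DLT: dose exceeds the MTD
    # After expansion to 6 patients:
    return "escalate" if dlt_count <= 1 else "de-escalate"

print(three_plus_three(1, 3))  # expand
print(three_plus_three(1, 6))  # escalate
```

Its rigidity is precisely what model-based designs like the CRM relax: rather than reacting only to the current cohort, they update a dose-toxicity model with all accumulated data.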
More sophisticated model-based designs like the continual reassessment method (CRM) use statistical models to guide dose escalation more efficiently. These designs can reduce the number of patients treated at suboptimal doses while maintaining safety.
7.6 The Phase I-Phase II Transition
The output of Phase I is the foundation for everything that follows. Successful Phase I studies produce a safety database characterizing adverse events across a range of doses, pharmacokinetic parameters to guide dosing decisions, pharmacodynamic data linking exposure to biological effects, and—most critically—a recommended Phase II dose that balances safety with expected efficacy.
This transition point is one of several go/no-go decisions that punctuate drug development. If Phase I reveals unexpected toxicity, poor pharmacokinetics, or lack of pharmacodynamic effect, the program may be terminated. Such early terminations, while disappointing, are far preferable to late-stage failures.
Although small, Phase I trials form the foundation of clinical development. A well-designed program not only establishes human safety and tolerability but also provides the pharmacokinetic and pharmacodynamic insights necessary to right-size Phase II and III trials. The geographic footprint of these studies is also shifting, with China-headquartered companies accounting for 30% of global trial starts in 2024, reflecting the rapid maturation of the Chinese biopharma sector (IQVIA Institute for Human Data Science 2025). By identifying a recommended Phase II dose that balances safety with initial proof of mechanism, Phase I researchers move the drug from laboratory hypotheses into the reality of human biology.