September 2001, Vol. 10, No. 9, pp 38–40, 42.
 
 
 
Today's Chemist at Work
Focus: Liquid Chromatography

FEATURE

Getting the peaks perfect: System suitability for HPLC

On-line testing can ensure data quality in pharmaceutical assays.

Chromatography, specifically liquid chromatography, is used extensively in pharmaceutical development and manufacturing. In the pharmaceutical industry, countless HPLC systems in research and quality control laboratories churn out test results each day on the dosage levels, purity, and dissolution characteristics of new drug candidates and marketed products. Now… how can anyone be sure that each HPLC autosampler is injecting precisely, or that the column did not degrade the day before? In other words, how can researchers be assured that each HPLC system is functioning properly on any particular day? These are not academic questions, given the pharmaceutical industry's serious responsibility to develop safe and efficacious drugs for human consumption.

FIGURE 1: Domains of a validated analytical system showing relationships among calibration (qualification), method validation, and system suitability.
Source: Reference 1 (with permission from Marcel Dekker).
One might recognize these questions as part of the age-old problem of “system validation” under Good Manufacturing Practices (GMP). Purdue Pharma, for example, typically follows the three-step approach commonly adopted by the industry (Figure 1). The first step, Initial System Qualification followed by Periodic Calibration, verifies the functional performance of each component of the instrument at installation and every six months thereafter (see TCAW, Feb 2001, p 42). The second step, Method Validation, verifies the performance of the entire analytical procedure, including sample preparation. The final step, System Suitability Testing (SST), verifies the holistic functionality of the chromatographic system on a day-to-day basis. In this article, we focus on this final validation step and discuss how to perform SST and set suitability limits according to the latest regulatory guidelines.

Defining SST
SST is commonly used to verify the resolution, column efficiency, and repeatability of a chromatographic system to ensure its adequacy for a particular analysis. According to the United States Pharmacopeia (USP) and the International Conference on Harmonization (ICH), SST is an integral part of many analytical procedures. Although USP and ICH are not regulatory agencies, their guidelines are “bibles” followed closely in the industry because they are accepted by the FDA. SST is based on the concept that the equipment, electronics, analytical operations, and samples to be analyzed constitute an integral system that can be evaluated as a whole. The chromatographic systems used for most pharmaceutical analyses, such as assays of the active ingredients, impurity determinations, and dissolution testing (measuring the dissolution rate of a particular dosage form), must pass a set of predefined acceptance criteria (SST limits) before sample analysis can commence.

Doing SST
How does one use SST to satisfy the latest USP and ICH guidelines? First, SST must be performed before and throughout all regulated assays. It is no longer sufficient to apply SST at the beginning of the chromatographic run and to assume that the system will function properly during the experiment. Second, a single-component calibration solution to check system suitability is not adequate because the system’s separation capability is not demonstrated. Rather, the use of System Suitability Samples (SSSs) or resolution test mixtures containing both main components and expected impurities is required. At Purdue, many of our SSSs contain the active pharmaceutical ingredients (APIs) at 80–120% of the concentration claimed on the label and are spiked with one or more critical components, such as the least resolved related substances at 0.1–0.5%. SSSs are analyzed before and interspersed between samples during testing (i.e., five replicate injections of SSS for initial SST and one SSS injection every 10 assay or 12 dissolution samples).
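To make that interleaving scheme concrete, here is a minimal Python sketch that builds such an injection sequence; the function name, sample labels, and list layout are our own illustration, not part of any chromatographic data system.

def build_sequence(samples, sss_interval=10, initial_sss=5):
    """Return an injection list: an initial block of SSS injections
    for SST, then samples with one SSS interspersed after every
    `sss_interval` samples (10 for assays, 12 for dissolution)."""
    sequence = ["SSS"] * initial_sss
    for i, sample in enumerate(samples, start=1):
        sequence.append(sample)
        if i % sss_interval == 0:
            sequence.append("SSS")
    return sequence

# Example: 20 assay samples bracketed by SSS injections.
print(build_sequence([f"Assay-{n}" for n in range(1, 21)]))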

TABLE 1: Comparison of SST criteria

SST parameter                    CDER guidelines           Hsu and Chien recommendation
Repeatability of peak response   ≤1.0% for 5 replicates    ≤1.5% general; 5–15% trace; <5% biologics
Resolution                       >2.0 general              >2.0 general; >1.5 quantitation
Tailing factor                   ≤2.0                      <1.5–2.0
Column efficiency                >2000 (plate count)       Not available
Capacity factor                  >2                        2–8

Data sources: References 2 and 3.
Primary SST parameters are resolution (R), repeatability (RSD—relative standard deviations—of peak response and retention time), column efficiency (N), and tailing factor (T). These parameters are most important as they indicate system specificity, precision, and column stability. Other parameters include capacity factor (k) and signal-to-noise ratio (S/N) for impurity peaks. Most chromatographic data systems can automate the measurement and reporting of these SST parameters. Acceptance criteria or SST limits are predefined in most “official” analytical methods. Limits may vary with different tests and are typically less stringent for biologics and trace impurities. Table 1 summarizes guidelines for setting SST limits from the FDA’s Center for Drug Evaluation and Research (CDER) (2) and compares them with those proposed by Hsu and Chien (3), which include recommendations for biologics and trace components.
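For reference, most of these parameters follow directly from the standard USP formulas, as in the Python sketch below; it assumes simple peak measurements (retention times and widths in the same units) rather than any particular data system's reporting functions.

import statistics

def resolution(t1, w1, t2, w2):
    """Resolution: R = 2(t2 - t1) / (w1 + w2), using baseline peak widths."""
    return 2 * (t2 - t1) / (w1 + w2)

def plate_count(t_r, w_base):
    """Column efficiency: N = 16 (tR / W)^2, tangent baseline width."""
    return 16 * (t_r / w_base) ** 2

def tailing_factor(w_005, f_005):
    """Tailing factor: T = W(0.05) / 2f, measured at 5% of peak height;
    f is the distance from the peak front to the apex at that height."""
    return w_005 / (2 * f_005)

def capacity_factor(t_r, t_0):
    """Capacity factor: k = (tR - t0) / t0, with t0 the unretained (void) time."""
    return (t_r - t_0) / t_0

def rsd_percent(responses):
    """Repeatability as %RSD of replicate peak responses."""
    return 100 * statistics.stdev(responses) / statistics.mean(responses)

# Example: peak areas from five replicate SSS injections (made-up values).
print(f"%RSD = {rsd_percent([1021, 1018, 1025, 1019, 1023]):.2f}")  # ~0.28%, within the CDER limit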

Setting Limits
SST limits should represent the minimum acceptable system performance levels rather than typical or optimal levels. Many analytical methods simply adopt the general limits from the CDER guidance document (2). While acceptable from the regulatory standpoint, these limits might be too wide to detect emerging system problems. For instance, if historical data show the performance of a specific method to be R = 4–6, N = 8000–10,000, and T = 1.0–1.3, then the general CDER limits (R > 2, N > 2000, and T ≤ 2.0) might not reflect the normal performance range and thus might not truly fulfill the role of determining system suitability.

So, how does the pharmaceutical scientist go about setting realistic SST limits that balance the task of system evaluation against the practical reality of performing assays? Several studies (1, 4, 5) have suggested applying statistical analysis (e.g., Plackett–Burman or other fractional factorial designs) to data gathered during method optimization or validation. This is in line with guidance from ICH, which regards SST as one of the method validation steps. Another approach, particularly useful during method revisions, is to apply the 3-sigma rule to historical performance data (preferably from different laboratories). For instance, suppose the average column efficiency (N) is found to be 8000 plates, with a standard deviation (σ) of 1000 plates. The expected range is then the mean ± 3σ, or 5000–11,000 plates, and the efficiency criterion can be set to N > 5000 plates.
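As a worked illustration of the 3-sigma approach, the short Python sketch below derives an efficiency limit from historical plate counts; the data values are invented for illustration only.

import statistics

# Hypothetical plate counts pooled from several laboratories.
historical_N = [8200, 7400, 8100, 9000, 6900, 8000, 7700, 8700]

mean_N = statistics.mean(historical_N)     # average column efficiency
sigma_N = statistics.stdev(historical_N)   # standard deviation (sigma)
limit = mean_N - 3 * sigma_N               # minimum acceptable performance

print(f"mean = {mean_N:.0f} plates, sigma = {sigma_N:.0f} plates")
print(f"SST limit: N > {limit:.0f} plates")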

When SST Fails
If possible, an analyst facing an initial SST failure should stop the assay sequence immediately, before any sample injections have been made, to avoid a sample retest. (Note: In many laboratories, a sample retest might require prior internal regulatory approval, involving documentation and an official “investigation” to locate the system failure mode.) The analyst then diagnoses the system problem, makes the necessary adjustments (6) or repairs, and performs SST again. Analysis of actual samples should commence only after the system has passed all SST limits, not just the previously failed criteria.

Most SST failures are attributed to poor precision (repeatability) of the autosampler, aging columns, pump problems, or mobile-phase preparation errors. In our laboratory, precision failures are often caused by a worn sampling syringe or bubbles in the syringe flush solvent (see TCAW, Aug 2000, p 28), whereas failures to meet the R, T, and N criteria can usually be fixed by replacing the column. If one of the interspersed SSS injections fails, data from all samples injected after the last passing SSS become invalid, and those samples must be reinjected after the system is brought back under control.
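The bookkeeping behind that invalidation rule is simple enough to express in a few lines of Python; the run structure and names below are illustrative assumptions, not part of any data system.

def flag_for_reinjection(run):
    """run: list of (name, kind, passed) in injection order, where kind
    is 'SSS' or 'sample' and passed applies only to SSS injections.
    Returns the sample names invalidated by a failed SSS bracket."""
    invalid, pending = [], []
    for name, kind, passed in run:
        if kind == "sample":
            pending.append(name)        # awaiting the next SSS result
        elif passed:
            pending = []                # passing SSS validates the bracket
        else:
            invalid.extend(pending)     # failing SSS invalidates all samples
            pending = []                # injected since the last passing SSS
    return invalid

run = [("SSS-1", "SSS", True), ("S1", "sample", None), ("S2", "sample", None),
       ("SSS-2", "SSS", False), ("S3", "sample", None), ("SSS-3", "SSS", True)]
print(flag_for_reinjection(run))        # ['S1', 'S2'] must be reinjected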

Expediting SST
How can one maintain lab productivity while complying with ever-stricter regulatory rules? One way to expedite SST is to use the bracketed calibration standards (commonly used to improve data accuracy) as SSSs. For assays of the active pharmaceutical ingredient, this might require injecting larger amounts of calibration solution to raise the impurity peaks (typically <1%) enough to demonstrate resolution while still keeping the main peak below detector saturation (i.e., <1.5 absorbance units [AU]). An alternative is to spike the calibration standard with one or more critical components at expected levels. This approach is made feasible by modern UV–vis detectors with improved sensitivity (noise <1 × 10⁻⁵ AU) and linearity (up to 2 AU). The wider linear dynamic range allows quantitation of both the active drug substance and its trace impurities (<0.05%) in a single injection. Better yet, the impurity method can double as the assay method, saving significant sample preparation and analysis time. In this approach, the same data are processed twice: first for the assay of the API (percent of label claim), and second for impurities and degradants (typically using normalized area percent). We have successfully used this combined assay/impurity testing approach during early drug development and found it particularly effective in stability studies.
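As a rough sketch of processing the same chromatogram twice, the Python below computes an assay result against an external standard and normalized area percents from one peak table; all areas, concentrations, and peak names are hypothetical.

# Peak areas from a single injection (illustrative values).
peaks = {"API": 15200.0, "Imp-A": 38.0, "Imp-B": 12.0}

# Pass 1 - assay: compare the API response with a calibration standard.
std_area, std_conc = 15000.0, 0.50            # standard at 0.50 mg/mL (assumed)
label_claim = 0.50                            # nominal concentration, mg/mL
sample_conc = peaks["API"] / std_area * std_conc
print(f"Assay: {100 * sample_conc / label_claim:.1f}% of label claim")

# Pass 2 - impurities: normalized area percent of every peak in the run.
total_area = sum(peaks.values())
for name, area in peaks.items():
    print(f"{name}: {100 * area / total_area:.2f} area %")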

Web Resources
www.fda.gov/cder/guidance/index.htm (U.S. FDA CDER site containing hundreds of downloadable guidance documents)

www.ifpma.org/ich1.html (ICH site of Technical Requirements for Registration of Pharmaceuticals for Human Use)

According to the latest USP and ICH guidelines, SST must be performed before and throughout any regulated analysis, using resolution mixtures that mimic real samples. Although onerous, SST and other validation processes are justified by the importance of ensuring the quality of data that can affect human health. Rather than being regarded as another regulatory hurdle, SST should be viewed as an aid: an online QC tool and an early warning system that reduces sample retesting. The key is to derive realistic SST limits from historical or robustness data; SST itself can be expedited by adopting the calibration standards as SSSs.

References

  1. Wiggins, D. E. J. Liq. Chromatogr. 1991, 14, 3045–3060.
  2. Center for Drug Evaluation and Research, U.S. Food and Drug Administration. Reviewer Guidance: Validation of Chromatographic Methods; FDA: Rockville, MD, Nov 1994.
  3. Hsu, H.; Chien, C. S. J. Food Drug Anal. 1994, 2 (3), 161–176.
  4. Heyden, Y. V.; et al. J. Chromatogr. 1999, 845, 145–154.
  5. Jenke, D. J. Liq. Chromatogr. 1996, 19 (12), 1873–1891.
  6. Furman, W. B.; Dorsey, J. G.; Snyder, L. R. Pharm. Technol. 1998, 22 (6), 58–64.

Acknowledgments

The authors thank Phil Palermo, Larry Wilson, Katharina Jakaitis, Catherine Davidson, Joshua McWilliams, and other scientists at Purdue Pharma for helpful suggestions.


Michael Dong, Roy Paul, and Lea Gershanov are members of the scientific staff of the Pharmaceutical Analysis Department of Purdue Pharma L.P. in Ardsley, NY. Send your comments or questions regarding this article to tcaw@acs.org or the Editorial Office, 1155 16th St., N.W., Washington, DC 20036.
