Pharmacopoeial vs Internal Specs: How to Design a Defensible API Testing Strategy

The International Council for Harmonisation has spent the past two years pushing the industry away from “validate once and file it” thinking for analytical methods. Its revised Q2(R2) validation guideline and the new Q14 guideline on analytical procedure development codify a lifecycle approach: methods are designed with risk in mind, validated against defined performance characteristics, and monitored in routine use for drift.

That shift is forcing a practical re-think of a stubborn, expensive problem in small-molecule manufacturing: where pharmacopoeial specifications end and where internal specifications should begin.

In plain terms, pharmacopoeial compliance is a floor, not a control strategy. A defensible API testing strategy starts with the compendial monograph when one exists, then adds internal tests or tighter limits only when the science and the process justify them.

Over the next 12–18 months, the companies that get this right will reduce batch release friction and audit pain. The organizing claim is straightforward: internal specifications should exist to control demonstrated risks, not to perform quality theatre or to compensate for weak process knowledge.

Pharmacopoeial Specifications as the Public Baseline

Pharmacopoeial specifications—USP, Ph. Eur., JP—are designed to set minimum public standards for identity, strength, and certain impurities. They are intentionally general. They can be slow to reflect newer synthetic routes, new impurity profiles, or analytical capabilities.

A monograph may specify an HPLC assay and related substances method, but it will not necessarily control the process-specific genotoxic impurity that appears only under a particular reagent lot or workup. It may not control a polymorph change that affects downstream processing. It may not address residual solvents that are irrelevant for one route and critical for another.

Internal Specifications and Process-Specific Control

Internal specifications are the opposite: private, process-aware, and often narrower. They include limits and tests that matter for patient safety, manufacturability, and supply reliability in a specific process.

They can also include operational controls that never appear on a certificate of analysis (CoA): in-process controls, cleaning verification, hold-time studies, and equipment-related checks. The point is not “more tests.” The point is a control strategy that makes it hard for a bad batch to exist.

The ICH Q6A Perspective on Specifications

A good place to start is ICH Q6A’s framing of specifications as acceptance criteria intended to assure quality at release and through shelf life. That is the right mental model for separating the public floor from the private levers.

A pharmacopoeial monograph tells you what the market recognizes as acceptable. Q6A pushes you to justify why your acceptance criteria and test procedures are appropriate for your substance, your process, and your intended use.

Inference: When a monograph exists, it can reduce the burden of inventing a baseline from scratch, but it does not eliminate the obligation to justify controls for risks that the monograph does not address.

Building a One-Page Specification Map

The quickest way to turn that into an actual testing strategy is to build a one-page “specification map” before anyone writes a method. List the critical quality attributes (CQAs) that matter for safety and performance: assay, identity, related substances, residual solvents, water content, elemental impurities where relevant, polymorphic form if it is critical, and particle size if it affects downstream processing.

Then attach each CQA to (a) the compendial test and acceptance criteria, if available; (b) the internal test and acceptance criteria, if needed; and (c) the process step where the risk is best controlled. A cGMP system cares as much about where you control risk as how you test it.
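One way to keep that map honest is to hold it as structured data rather than a slide. The sketch below is illustrative only: the CQAs, tests, limits, and process steps are hypothetical placeholders, not values from any monograph or filing.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpecMapEntry:
    """One row of the one-page specification map."""
    cqa: str                         # critical quality attribute
    compendial_test: Optional[str]   # monograph test and criteria, if available
    internal_test: Optional[str]     # process-specific test, if justified
    control_point: str               # process step where the risk is best controlled
    rationale: str                   # one-sentence justification tied to a risk

# Illustrative entries -- all tests, limits, and steps are placeholders.
spec_map = [
    SpecMapEntry("Related substances", "Compendial HPLC, NMT 0.10% each",
                 "Route-specific impurity X by HPLC, NMT 0.05%",
                 "Final crystallization",
                 "Impurity X appears only under a particular reagent lot"),
    SpecMapEntry("Water content", "Karl Fischer, NMT 0.5%",
                 "Karl Fischer, NMT 0.2%",
                 "Drying step",
                 "Water drives a known hydrolysis pathway on stability"),
]

# Flag rows that cite no control-point rationale: every test should trace to a risk.
gaps = [e.cqa for e in spec_map if not e.rationale]
print(f"{len(spec_map)} CQAs mapped; {len(gaps)} potential gaps")
```

The useful property is the forced pairing: no internal test enters the map without a control point and a one-sentence rationale, which is exactly what an auditor will ask for.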

Impurities and ICH Q3A(R2) Thresholds

Impurities are where compendial-vs-internal choices become visible. ICH Q3A(R2) ties impurity reporting, identification, and qualification thresholds to maximum daily dose, with concrete threshold examples such as 0.05% for reporting and 0.10% for identification in certain dose bands.

Those numbers give teams a defensible anchor: limits should reflect toxicological relevance, not internal aesthetics.
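The thresholds cited above can be used as a mechanical first-pass classifier for observed impurity levels. The sketch below encodes only the two figures quoted in the text (0.05% reporting, 0.10% identification) for the lower-dose band; the full Q3A(R2) attachment, including the absolute daily-intake caps and qualification thresholds, should be taken from the current guideline, not from this sketch.

```python
def q3a_thresholds(max_daily_dose_g: float) -> dict:
    """Illustrative Q3A(R2)-style threshold lookup.

    Encodes only the lower-dose-band percentages cited in the text;
    consult the guideline's attachment for the complete table.
    """
    if max_daily_dose_g <= 2.0:
        return {"reporting_pct": 0.05, "identification_pct": 0.10}
    raise NotImplementedError("Higher-dose band: see ICH Q3A(R2) Attachment 1")

def classify_impurity(level_pct: float, max_daily_dose_g: float) -> str:
    """First-pass action for an observed impurity level (illustrative)."""
    t = q3a_thresholds(max_daily_dose_g)
    if level_pct > t["identification_pct"]:
        return "identify"               # structure elucidation warranted
    if level_pct > t["reporting_pct"]:
        return "report"                 # report, no identification triggered
    return "below reporting threshold"

print(classify_impurity(0.08, 1.5))     # between the two thresholds
```

The point of anchoring limits this way is that every internal acceptance criterion tighter than the guideline-derived number needs its own justification on top of the lookup.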

Inference: When internal limits are significantly tighter than ICH-driven thresholds, reviewers and auditors will ask whether the tightness reflects known safety risk, manufacturing capability, or a commercial preference masquerading as quality.

Quality by Design and the Role of Specifications

The analytical method side of the argument has also tightened. Yu and colleagues’ 2014 AAPS Journal review on pharmaceutical Quality by Design (QbD) laid out what regulators have been trying to coax into routine practice for years: quality is built through process understanding and control, not tested into existence at the end.

That matters for specifications because it reframes them as part of a broader control strategy. A release test is a backstop. It is not the primary control for a variable you can and should manage upstream.

When Internal Specifications Are Justified

This is where internal specifications earn their keep. When a process creates a route-specific impurity that is absent from the monograph’s related substances method, internal testing is not optional. When polymorph form is linked to filtration behavior or downstream dissolution risk, an XRPD check or an orthogonal identity method can be justified.

When a process uses a solvent class that tends to hang around, a GC residual solvent panel may need to be broader than the monograph. When water drives a known degradation pathway, Karl Fischer titration may require a tighter internal limit than a compendial “not more than” value. These are not marketing embellishments. They are risk controls tied to a process reality.

Method Validation, Transfer, and ICH Q2(R2) / Q14

Method validation and method transfer are where “defensible” becomes expensive. New ICH Q2(R2) and Q14 language reinforces that analytical procedures should be developed with intended use and performance requirements in mind, then validated accordingly.

That makes compendial methods a starting point, not a free pass. Users still need to verify that a compendial method is suitable in their hands, on their instruments, with their API matrix.

Inference: The more an organization relies on multiple contract labs or multi-site testing, the more it needs explicit method performance criteria and verification packages, because inter-lab variation becomes a supply-chain risk, not a statistical footnote.
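Explicit method performance criteria can be made concrete even with a very simple acceptance check. The sketch below compares assay means from a transferring and a receiving lab against a predefined equivalence margin; the data, the margin, and the bare mean-difference criterion are all illustrative assumptions (real transfer protocols typically use formal equivalence testing or co-validation designs).

```python
import statistics

def transfer_acceptable(sending, receiving, margin_pct=2.0):
    """Illustrative transfer check: |difference of lab means| must fall
    within a predefined equivalence margin (percentage points of assay).
    A hypothetical criterion, not a prescribed protocol."""
    diff = abs(statistics.mean(sending) - statistics.mean(receiving))
    return diff <= margin_pct

sending = [99.1, 99.4, 99.0, 99.3, 99.2]    # % assay, transferring lab
receiving = [98.6, 98.9, 98.7, 98.8, 98.5]  # % assay, receiving lab
print(transfer_acceptable(sending, receiving))
```

The design choice that matters is agreeing on the margin and the statistical criterion before the receiving lab runs a single sample, so the pass/fail decision cannot be negotiated after the fact.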

Ongoing Analytical Procedure Performance Verification (OPPV)

A 2024 paper by Borman and co-authors in Analytical Chemistry makes the lifecycle point operational. It argues for ongoing analytical procedure performance verification (OPPV): post-validation monitoring to ensure the method stays in control, using risk-based selection of what to monitor and how often.

That idea sounds abstract until you attach it to decisions that matter. One practical rule is to set internal alert and action limits for system suitability and key method performance indicators, then gate changes based on trend signals rather than anecdotes.

If the method’s signal-to-noise, resolution, or retention times drift outside established control limits for a defined window, the team investigates before it becomes an out-of-spec (OOS) event. The specific control limits should be set from historical data and method capability; without that dataset, the limit is a placeholder, not a standard.
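Trend-based gating of that kind can be sketched with basic control-chart logic. Everything below is an illustrative choice, not something prescribed by the OPPV paper: the mean ± 3-sigma limits, the seven-point run rule, and the resolution values are stand-ins for whatever historical dataset and method capability actually support the limits.

```python
import statistics

def control_limits(history, k=3.0):
    """Alert/action limits from historical method performance data (mean ± k*sigma)."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return mu - k * sigma, mu + k * sigma

def oppv_check(history, recent, run_length=7):
    """Flag points outside control limits, plus sustained one-sided runs (drift)."""
    lo, hi = control_limits(history)
    mu = statistics.mean(history)
    out_of_limits = [x for x in recent if not (lo <= x <= hi)]
    # Run rule: `run_length` consecutive points on one side of the mean signals drift
    # even when no single point breaches the limits.
    drift = any(
        all(x > mu for x in recent[i:i + run_length]) or
        all(x < mu for x in recent[i:i + run_length])
        for i in range(len(recent) - run_length + 1)
    )
    return {"out_of_limits": out_of_limits, "drift": drift}

# Illustrative resolution values from routine system suitability injections.
history = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 2.0, 2.1, 2.1]
recent = [1.95, 1.95, 1.9, 1.95, 1.9, 1.95, 1.9]  # trending low, still within limits
print(oppv_check(history, recent))
```

The payoff is the last comment in the data: a method can drift toward trouble while every individual point still passes, which is exactly the case the run rule catches before an OOS event.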

The Economics of Internal Specifications

The economics are blunt. Every internal test and every tightened limit consumes lab capacity, reference standards, analyst time, and calendar time. Release testing can become a bottleneck, especially when the supply chain already runs on tight batch schedules and fixed shipping windows.

Internal specs that are tighter than process capability create self-inflicted batch failures and rework.

Inference: If a company sees repeated “passing by retest” behavior or frequent laboratory investigations without corresponding process fixes, it is likely using specifications as a substitute for process improvement, which increases cost without reducing risk.

The Risks of Treating Pharmacopoeial Compliance as Sufficient

The flip side is also costly. Treating pharmacopoeial compliance as synonymous with adequate control can lead to late-stage surprises: unexpected impurity excursions, stability failures, or deviations during scale-up that were invisible under a generic monograph method.

The price shows up as batch holds, regulatory questions, and supply interruptions. The winners are the organizations that place controls where the risk is generated, then use specifications and methods to confirm control—not to discover chaos at the end.

Implications for Contract Manufacturers and Suppliers

For contract manufacturers and intermediate suppliers, the defensibility question becomes a commercial one: what do you give customers and regulators that proves control without turning your CoA into a novel?

Inference: Buyers increasingly treat documentation quality as a proxy for operational maturity, which raises the value of clear spec rationales, method verification summaries, and change control discipline. Manufacturers such as Suzhou Henra Biotechnology sit in the middle of that squeeze: expected to meet compendial baselines, demonstrate GMP discipline, and still operate efficiently at scale.

The Counterargument: Risks of Over-Tight Internal Specifications

There is a serious counterargument: internal specifications can fragment the market and create non-standard “shadow requirements” that complicate sourcing. Over-tight internal limits can be used as leverage in supplier negotiations, not as safety controls.

A second failure mode is regulatory inconsistency: if internal specs diverge from the monograph without a clean scientific rationale, it can look like the firm does not trust the compendial standard while still claiming compliance.

When the Counterargument Holds

That counterargument wins when three conditions hold. The process is stable, the monograph is current for the API as manufactured, and the known risk profile is adequately controlled by the compendial tests plus standard GMP systems.

In that case, layering internal limits on top can create noise without adding protection. The discipline is to prove, with data, that an internal spec addresses a real risk that the compendial package does not.

Decision-Relevant Takeaway

The decision-relevant takeaway is not ideological. A defensible testing strategy is one where every test and every limit can be explained in one sentence tied to a risk, a process step, and a dataset.

The falsifier is also simple: if internal specs and extra tests do not reduce deviations, customer complaints, or stability surprises over a defined monitoring window, they are probably not controlling the right risks. The metric that matters is not how many boxes are checked on release. It is whether the control strategy keeps the API inside its quality envelope as the process, suppliers, and analytical methods inevitably change.