Revisiting Limits Based on Process Capability

Oct 2012

Some bad ideas won’t go away. In the Cleaning Memo of June 2005 I questioned the appropriateness of setting limits based on a “true” process capability study. The issue is now being resurrected in the original context in which it arose: setting limits in cleaning validation for bulk actives in biotechnology manufacture. The standard way limits are set in that situation is based, in a sense, on “process capability”, but not on a true process capability study. In a true process capability study, data is developed over 20-30 runs, and the average plus three standard deviations is set as the upper control point. The question is, “Should that control point be the acceptance limit in a cleaning validation protocol?”
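To make that calculation concrete, here is a minimal sketch (in Python) of the “average plus three standard deviations” control point. The TOC values are hypothetical, invented purely for illustration:

```python
# Upper control point from a "true" process capability study:
# average plus three standard deviations over 20-30 runs.
from statistics import mean, stdev

# Hypothetical TOC results (ppm) from 20 cleaning runs
toc_ppm = [0.8, 1.1, 0.9, 1.3, 1.0, 0.7, 1.2, 0.9, 1.1, 1.0,
           0.8, 1.4, 1.0, 0.9, 1.2, 1.1, 0.8, 1.0, 1.3, 0.9]

avg = mean(toc_ppm)
sd = stdev(toc_ppm)  # sample standard deviation
upper_control_point = avg + 3 * sd
print(f"average = {avg:.2f} ppm, sd = {sd:.2f} ppm, "
      f"control point = {upper_control_point:.2f} ppm")
```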

What appears to be happening is that limits are set at levels such as those given in PDA Technical Report #49: 10 ppm TOC for an upstream bulk process and 1 ppm TOC for a downstream bulk process. While these are the limits, the values actually achieved by most manufacturers are much lower, such as 1 ppm upstream and 0.25 ppm downstream. The question then arises, “If companies are able to achieve values that low, shouldn’t their limits be set lower?”

Putting this in perspective, why should biotech manufacture be held to a different standard from cleaning validation involving small molecules? After all, for small molecules it is common practice (it is usually part of the cleaning process design) to have cleaning processes which achieve residue values significantly below the acceptance limit (for example, limits based on a 0.001 dose criterion). If my calculated limit is X ppm, I would like to design a process which consistently achieves values from “non-detected” to about 0.3X. I certainly don’t deliberately try to design a process where the protocol values are closer to 0.6X to 0.8X (even though those higher values would be passing and don’t present a patient safety or product quality concern). In other words, I have never seen this concern arise with small molecule manufacture (although admittedly, I have not seen every manufacturing facility in the world).
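For readers less familiar with the small molecule approach, here is a minimal sketch of one common form of the 0.001 dose criterion (no more than 1/1000th of a minimum daily dose of the cleaned product in any daily dose of the next product). Every number below is hypothetical, chosen only to show the arithmetic:

```python
# A common carryover (MACO) calculation based on the 0.001 dose criterion.
# All values are hypothetical.
safety_factor = 0.001             # the 0.001 dose criterion
min_daily_dose_A_mg = 50.0        # minimum daily dose of cleaned product A
max_daily_dose_B_mg = 500.0       # maximum daily dose of next product B
batch_size_B_mg = 100e6           # batch size of product B (100 kg)
shared_area_cm2 = 250_000         # shared product-contact surface area

daily_doses_per_batch = batch_size_B_mg / max_daily_dose_B_mg
maco_mg = safety_factor * min_daily_dose_A_mg * daily_doses_per_batch

# Per-area limit, assuming residue is uniformly distributed on surfaces
limit_ug_per_cm2 = (maco_mg / shared_area_cm2) * 1000
print(f"MACO = {maco_mg:.0f} mg; surface limit = {limit_ug_per_cm2:.1f} ug/cm2")
```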

I will also admit that there is something different about biotech cleaning validation (see the May 2003 Cleaning Memo), in that limits for bulk manufacture are generally not set based on carryover calculations. The reasons are the batch-size-to-surface-area ratio and the fact that residues of the protein actives in biotech are generally not the native protein, but rather degraded fragments. However, the industry is starting to develop data showing that current ways of setting limits are more stringent than any safety-based concern involving the actual residues left after cleaning.

Regardless of that difference, setting limits based on a true process capability study is not appropriate, for the following reasons. For a new product, must I make 20-30 batches before I have process capability data to set limits based on an average plus three standard deviations? That does not make sense. Certainly I can base a limit on other products previously validated, but ordinarily I am not going to have 20-30 TOC data points on the same swab sampling sites, and I might not even have 20-30 TOC data points for rinse samples (if, during routine monitoring, I focus on conductivity and not TOC). Further, if my limit is set at the average plus three standard deviations, then by definition some results will fall above it: for normally distributed data, about 0.1% of the time (roughly 1 in 740 results). That is clearly a recipe for a compliance disaster.
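That exceedance rate is easy to verify. Assuming normally distributed data (my assumption, but it is the usual one behind three-sigma calculations), the fraction of results above the average plus three standard deviations is fixed, no matter how good the cleaning process is:

```python
# Fraction of results above mean + 3*sd for normally distributed data
from statistics import NormalDist

exceedance = 1 - NormalDist().cdf(3.0)  # P(Z > 3) for a standard normal
print(f"fraction above mean + 3 sd: {exceedance:.5f}")  # ~0.00135, about 1 in 740
```

In other words, a limit set this way builds a guaranteed failure rate into the protocol.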

Another argument (originally presented in the June 2005 Cleaning Memo) involves the situation where two biotech manufacturers make the same active, using identical equipment, but using slightly different cleaning processes (let’s say that the cleaning process for Company A is a little more robust than the cleaning process for Company B). Let’s assume that in both cases the limit is set at 10 ppm TOC. Company A gets values around 1 ppm TOC, while Company B gets values around 7 ppm TOC. In this situation, do we congratulate Company B for setting limits “correctly” (close to the actual values), and then castigate Company A for setting limits too high? That would not be my approach. I would praise Company A for having the more robust cleaning process (which I think is the more reasonable response).

There is value in doing true process capability studies for biotech cleaning validation (and, for that matter, for small molecule cleaning validation). What I mean is that after my validation runs (PPQ runs, if you want to use the new FDA process validation terminology) I establish action or alert levels as part of my routine monitoring (or my continued process verification, to use the new FDA terminology). That is, as I collect routine monitoring data, I am eventually able to do a true process capability analysis of the performance of my system. Then I can set an action or alert level using something like the average plus three standard deviations. I might also set preliminary action or alert levels (based on professional judgment) before I collect enough data to do true process capability calculations. As I monitor my cleaning process after it has been validated, I can then trend the data and take steps to address my cleaning process before it goes out of control. This latter approach (setting action or alert levels for routine monitoring) is the better use of process capability data for biotech cleaning validation.
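A minimal sketch of what I mean (the data, the preliminary level, and the 20-point threshold are all my own hypothetical choices, not requirements):

```python
# Trending routine monitoring data: use a judgment-based preliminary
# alert level until enough data exists for a true capability calculation,
# then switch to average plus three standard deviations.
from statistics import mean, stdev

preliminary_alert_ppm = 2.0  # set by professional judgment

# Hypothetical post-validation monitoring results (ppm TOC)
monitoring_toc_ppm = [0.9, 1.0, 0.8, 1.2, 1.1, 0.9, 1.0, 1.3, 0.8, 1.0,
                      1.1, 0.9, 1.2, 1.0, 0.8, 1.1, 1.0, 1.2, 0.9, 1.0]

if len(monitoring_toc_ppm) >= 20:  # enough data for a capability analysis
    alert_ppm = mean(monitoring_toc_ppm) + 3 * stdev(monitoring_toc_ppm)
else:
    alert_ppm = preliminary_alert_ppm

excursions = [x for x in monitoring_toc_ppm if x > alert_ppm]
print(f"alert level = {alert_ppm:.2f} ppm; excursions to investigate: {len(excursions)}")
```

The key point is that the alert level here drives trending and investigation of the cleaning process; it is not the acceptance limit of the validation protocol.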
