Causing Cleaning Validation Problems by Analogy

Aug 2020

Last month we discussed how analogies can be useful when we are faced with a new cleaning validation issue. This month we will look at how analogies can do just the opposite: lead us down a rabbit hole that may seem to be a solution, but in reality only complicates life. A key issue in using analogies is that there has to be a reasonable similarity between a situation with a well-established approach and the new situation that we are trying to address. That is, we may say, “B looks like A; therefore what works for A will also work for B”; but while B “looks like” A, it may be different enough that what works for A is probably not applicable to B. Here are three situations that we may have all come across that illustrate this lack of similarity.

The first involves analytical method validation for the analysis of residue samples in solution (which might be either the solution from an extracted swab sample or a rinse sample). How do we decide what to establish as a linear range? Well, we might look at what is done to establish a range for the active concentration in a potency assay for a drug product (by potency assay I mean an assay for the concentration of the active in the drug product). In that potency validation, the analytical lab knows what the target concentration is and then validates a range below and above that value. That range may be 75-125% of the target value, or 50-150% of the target value. Okay, that is well accepted for a potency assay. Now I am faced with analytical method validation for residue measurement in a cleaning validation protocol. The two situations (analytical method for a potency assay and analytical method for residue determination) are similar (right?). Therefore, for my analytical method validation for cleaning validation purposes I merely validate a linear range from 50% up to 150% of the residue limit in solution. Sounds easy enough, and it is something that I see too often.

However, the issue is that the two situations are not similar enough. In the case of the potency assay, the target is the amount I am hoping to find (the 100% value). In the case of the residue assay, the limit is not my target (or if it is my “target”, it is a target I am hoping to miss!). The limit in a residue analysis is a value I want to be below, and hopefully significantly below. If the bottom of my linear range is 50% of the limit, and if that value is actually my LOQ (limit of quantitation), then I am really in a situation where, if I want a robust cleaning validation process, the best I can say (assuming 100% recovery in sampling and a residue value below the LOQ) is that the residue is less than 50% of the limit. While technically that works from a compliance point of view, most companies want to demonstrate a more robust cleaning process.
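To make that reporting arithmetic concrete, here is a minimal sketch in Python; the residue limit value is hypothetical and chosen only for illustration.

```python
# Minimal sketch of the reporting consequence described above.
# The residue limit is hypothetical, chosen only for illustration.

residue_limit = 4.0          # hypothetical residue limit in solution, ug/mL
loq = 0.50 * residue_limit   # LOQ sitting at the bottom of a 50-150% range

# A result below the LOQ (assuming 100% sampling recovery) can only be
# reported as "less than the LOQ", i.e. less than 50% of the limit.
print(f"Below-LOQ result reported as: < {loq:.1f} ug/mL "
      f"(< {loq / residue_limit:.0%} of the limit)")
```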

A second difference is that for the residue assay, my “target” value is not a fixed value, but may be a range. What do I mean by that? What I mean is that my desired values (to demonstrate the robustness of my cleaning process) might be values of about 10-30% of the residue limit. So my linear range for the residue in solution might be 10-100% of my residue limit, which is a much wider range than for a potency assay. You might wonder why I extend the upper end of the range up to 100% of the limit (particularly if I prefer that values be much lower). The answer is that while I prefer lower values, I don’t want to exclude the possibility that in a few cases I will get higher values. Unless the analytical method is validated to measure at 95% of the limit (still a passing value), how can I trust the validity of that value? By having the method validated up to 100% of the limit, I am covering the possibility that I might have values in that range (even though those higher values are not preferred).
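The contrast between the two ranges can be summarized in a short sketch; again, all concentrations are hypothetical, purely for illustration.

```python
# Hedged sketch contrasting the two validation ranges discussed above.
# All concentrations are hypothetical, for illustration only.

target_concentration = 10.0   # potency assay target, ug/mL (hypothetical)
residue_limit = 10.0          # residue limit in solution, ug/mL (hypothetical)

# Potency assay: a range bracketing the expected (target) value.
potency_range = (0.50 * target_concentration, 1.50 * target_concentration)

# Residue assay: a range anchored to the limit, extending down to the
# 10-30% of limit values a robust cleaning process should actually give.
residue_range = (0.10 * residue_limit, 1.00 * residue_limit)

print(f"Potency linear range: {potency_range[0]:.1f}-{potency_range[1]:.1f} ug/mL")
print(f"Residue linear range: {residue_range[0]:.1f}-{residue_range[1]:.1f} ug/mL")
```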

So the “take home” lesson here is that, for analytical method validation, the target value in a potency assay is different enough from a limit value in a residue assay that the range for my method validation should probably be different. Note that this is not to say that other aspects of analytical method validation for potency assays might not be applicable.

Now for the second situation where an analogy comes up short: that of doing swab recovery studies at different spiked levels. The analogy goes as follows: since we do analytical method validation over a range of values for residues in solution, we should also do the recovery studies over a range of values (such as 50-150% of the residue limit). The main issue I want to address here is whether there is a sufficient similarity between the purpose of a range that is used for the analytical methods in solution and the purpose of a range that is typically used for swab recovery studies. For the residue values in solution, the purpose typically is to establish a linear range where measured values are accurate and precise. With residue recovery studies, are we expected to find a linear range? Perhaps, but the data I have seen sometimes shows that the recovery percentage is the same over the range, sometimes shows recovery increasing with increasing spiked level, sometimes shows recovery decreasing with increasing spiked level, and sometimes shows a highly variable relationship. Now it is possible that all of these may be true, and that the relationship of percent recovery to spiked level is highly dependent on the residue, the surface, the analytical method, and the swabbing technique. And while I acknowledge that the recovery percentage may vary based on those factors, I believe (based on reason and logic) that in general, over a relatively narrow range, the recovery percentage will be the same in a carefully controlled experimental design. That is, I don’t expect significant variation of the recovery percentage over a narrow range of spiked levels, such as 1X to 5X. However, if the range increases to 1X to 25X, I believe that as the residue load on the surface increases, the percentage recovered in swab sampling decreases (even though the amount removed might increase). Therefore, performing a recovery study only at the limit should represent a worst case (applicable to all values up to that limit).
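For clarity on how a single worst-case recovery factor is determined and applied, here is a hedged sketch; the function name and every numeric value are hypothetical, not data from any actual recovery study.

```python
# Hedged sketch of the percent-recovery arithmetic; the function name and
# all numbers are hypothetical, not data from any actual recovery study.

def percent_recovery(amount_recovered: float, amount_spiked: float) -> float:
    """Percent recovery for a single spiked-coupon swab sample."""
    return 100.0 * amount_recovered / amount_spiked

# Recovery study performed at the limit (the proposed worst case):
spiked_at_limit = 4.0    # ug spiked on the coupon, at the residue limit
recovered = 3.4          # ug measured after swabbing and extraction
worst_case = percent_recovery(recovered, spiked_at_limit)

# That single worst-case factor is then applied to routine protocol
# swab results at or below the limit:
measured = 1.2           # ug, a routine swab result (hypothetical)
corrected = measured / (worst_case / 100.0)
print(f"Recovery at the limit: {worst_case:.0f}%; corrected result: {corrected:.2f} ug")
```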

A third example of where analogies can go wrong involves the statistical analysis of swab sample results. Here is the analogy. Cleaning validation is like process validation. In process validation I look at the results of a certain quality parameter of the manufactured product, and statistically analyze the results based on samples within a batch and samples from batch to batch to determine consistency. So, if cleaning validation is similar to process validation, I can then statistically evaluate the various swab samples to determine the consistency of my cleaning process. The issue with this analogy is that the consistency among analytical values for product quality in process validation (where I am expecting only minor variation) is different from the consistency expected of swab samples in cleaning validation. My expectation for consistency is that “swab sample results are consistently below the calculated limit”. Swab sampling locations are not from the same “population” (some are harder-to-clean locations and some are easier-to-clean locations); therefore, treating these different locations by statistical analysis doesn’t make sense. If I really wanted a higher level of consistency, the best I could hope for is that all my results were non-detectable (below the limit of detection). Even in that situation (all non-detectable), I certainly could not apply statistical analysis. Treating different swab locations as one statistical population has the aura of a “scientific” approach, but in most cases it is just “window dressing”.
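A simple numerical illustration of the mixed-population problem, with entirely hypothetical swab results, shows why pooled statistics can mislead:

```python
# Hedged illustration of the mixed-population point above, using only the
# standard library; all swab results are hypothetical.

import statistics

easy_to_clean = [0.20, 0.30, 0.20, 0.25]   # hypothetical results, ug/swab
hard_to_clean = [1.80, 2.10, 1.90, 2.00]   # hypothetical results, ug/swab
pooled = easy_to_clean + hard_to_clean

print(f"Pooled mean  = {statistics.mean(pooled):.2f} ug/swab")
print(f"Pooled stdev = {statistics.stdev(pooled):.2f} ug/swab")
# The pooled mean (about 1.09) describes neither group of locations, and
# the large standard deviation reflects the difference between locations,
# not run-to-run variability of the cleaning process.
```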

So, where does this leave us in the application of analogies? Clearly there has to be an element of thought, analysis and wisdom in deciding which analogies are useful and which are not. This is not unlike other situations we face in everyday life.
