Dealing with Campaigns – Part 3

Mar 2022

In the last two months, we covered definitions and clarifications for cleaning validation issues in campaigns and then how to select the maximum campaign length. In this Cleaning Memo we will cover how to validate for that maximum campaign length. I will assume that we plan to validate the process for a maximum of X batches and/or a maximum of Y days, whichever comes first.

For simplicity, I will use (as an example) a maximum of 10 batches and a maximum elapsed time of 20 days. The most straightforward approach for my three qualification runs is to perform each run with cleaning started after exactly 10 batches and after an elapsed time of exactly 20 days (meeting both of those criteria in each of the three runs). In that case I would have clearly met the maximum length criterion in just three protocol runs. The issue I face is one of scheduling. Manufacturing 30 batches (three ten-batch campaigns) in an overall time frame of even six months (allowing some time between each of the runs) may not be practical; for example, I may not be able to sell all that product, and a significant portion would eventually exceed its expiry date.

Another option is to get around that issue by running a 10-batch campaign in each of three consecutive years, with a validation protocol at the end of each campaign and with a significant time between campaigns to match the commercial need for product. I might not like that option because it means that I will start a protocol and not complete it for more than two years. During that time I could not say the cleaning process was validated (although I could conceivably say that each campaign protocol run was essentially a “cleaning verification” protocol, so that the equipment could be safely released for subsequent manufacture even though cleaning validation was not complete).

Still another option is to do initial short campaigns in order to get my cleaning process validated (albeit for fewer batches and a shorter elapsed time than I might ultimately want). Here is an example. I initially do three “one-batch campaigns” (okay, I know those are really not “campaigns”, but I think you understand). After each campaign I perform my “end of campaign” cleaning procedure, and then sample the equipment and measure residues. For simplicity, let’s assume the equipment has four swab locations, and here are the swab results for the API for each location (A, B, C and D) in each of the three campaigns (Runs 1, 2, and 3). Furthermore, my acceptance limit is 1.0 (choose any units you want for this value; it is immaterial to this assessment).

Location           A     B     C     D
Run 1 (1-batch)    0.2   0.3   0.2   0.1
Run 2 (1-batch)    0.3   0.3   0.3   0.1
Run 3 (1-batch)    0.2   0.2   0.3   0.2
Table 1: Swab Results for “One-batch” Campaigns

Based on this data, I conclude that (at least for the API) the cleaning process is validated for cleaning after a “one-batch” campaign. And because Corporate wants cleaning validation done, I report (perhaps with a wink) that the cleaning process is validated.

But I really want an extended campaign. So now I perform one protocol run on a “three-batch” campaign. (I picked three batches arbitrarily; you could pick a different number depending on the confidence you have in the measured residue results in an extended campaign.) For the cleaning done after three consecutive batches (with only “minor” cleaning done between batches), here (Table 2) are the swab residue results along with the prior results for the “one-batch” campaigns.

Location           A     B     C     D
Run 1 (1-batch)    0.2   0.3   0.2   0.1
Run 2 (1-batch)    0.3   0.3   0.3   0.1
Run 3 (1-batch)    0.2   0.2   0.3   0.2
Run 4 (3-batch)    0.3   0.2   0.2   0.1
Table 2: Swab Results for “Three-batch” Campaign with Comparison

I assess that data and conclude that there is not a significant difference between the results after three batches and the results after one batch (you might have some quantitative criteria for what constitutes a significant difference). Based on that data, I now conclude that my cleaning process is validated for cleaning after up to a three-batch campaign.
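One possible quantitative criterion for “significant difference” (my own illustrative assumption; the memo does not prescribe one, and a firm’s SOP may define it differently) is to flag an extended-campaign result when it exceeds the mean of the prior one-batch results by more than three standard deviations. A minimal sketch:

```python
import statistics

def significantly_higher(prior_results, new_result, k=3.0):
    """Flag new_result if it exceeds mean + k*stdev of prior results.

    The 3-sigma cutoff is an illustrative assumption, not a
    regulatory requirement.
    """
    mean = statistics.mean(prior_results)
    sd = statistics.stdev(prior_results)
    return new_result > mean + k * sd

# Location A, Runs 1-3 (one-batch campaigns): 0.2, 0.3, 0.2
print(significantly_higher([0.2, 0.3, 0.2], 0.3))  # 3-batch result comparable
print(significantly_higher([0.2, 0.3, 0.2], 0.5))  # 3-batch result flagged
```

With this criterion, the Run 4 value of 0.3 at Location A would be comparable to the one-batch data, while a value of 0.5 would be flagged as significantly higher.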

What if the residue results after the three-batch campaign are different (that is, significantly higher)? Here (Table 3) is one example.

Location           A     B     C     D
Run 1 (1-batch)    0.2   0.3   0.2   0.1
Run 2 (1-batch)    0.3   0.3   0.3   0.1
Run 3 (1-batch)    0.2   0.2   0.3   0.2
Run 4 (3-batch)    0.5   0.4   0.6   0.3
Table 3: Swab Results for “Three-batch” Campaign with Comparison

In this case I assess the data and conclude that even though the three-batch results are passing, they are significantly different from the one-batch results. Therefore, I cannot conclude just based on this one additional run at three batches that the cleaning process is validated. I would want to perform two more runs involving cleaning after three batches before I could come to any conclusion (or I might say that the cleaning process is not robust enough and try to improve the cleaning process before doing more). Note that even though I cannot conclude that the cleaning process is validated for a three-batch campaign, I can still release the equipment based on acceptable results from what becomes in essence a “cleaning verification” protocol.

Let’s say that the results in Table 2 are what I obtain for my “three-batch” campaign. I can then decide to extend the campaign length further, perhaps to five batches and then to ten batches (my ultimate goal). But what if I try that and at ten batches I actually get failing results, with residue data above 1.0? From a cleaning validation perspective, I treat this as a deviation and then proceed to clean the equipment again with the same or a different cleaning procedure. I then sample and measure residues again so that the equipment can be appropriately released for subsequent manufacture. But (and this is an important “but”) I also have to check the manufactured product to confirm that the extended batches do not affect product quality (in terms of quality issues related to impurities, bioburden, and physical properties, as discussed in Part 2), so that the batches can be released.

There is also another option for extending campaign lengths. In this option, I perform three validation runs, each with a number of batches as determined by production or scheduling. For example, the first three campaigns may be one two-batch campaign, one three-batch campaign and one four-batch campaign. If all pass and the residue data are not significantly different, I conclude that the campaign length is validated for the “shortest of the three longest campaigns” (short and long applying to number of batches). In this case, there are only three campaigns, and the shortest is two batches; therefore my cleaning process is validated for a “two-batch” campaign. My next campaign is five batches. If the residue data for this five-batch campaign are not significantly different from the data for the shorter campaigns, then applying the criterion of “shortest of the three longest campaigns”, my cleaning process is now validated for a maximum of three batches (the three longest are three batches, four batches and five batches). I can continue this process as needed until I reach my goal of ten batches (or until I get a failure).
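The “shortest of the three longest campaigns” rule can be sketched as a small function (Python is used purely for illustration; the function name is my own):

```python
def validated_campaign_length(campaign_lengths):
    """Validated maximum campaign length under the
    'shortest of the three longest campaigns' rule.

    campaign_lengths: batch counts of all passing campaigns whose
    residue data are not significantly different from one another.
    Returns None until at least three such campaigns exist.
    """
    if len(campaign_lengths) < 3:
        return None
    three_longest = sorted(campaign_lengths, reverse=True)[:3]
    return min(three_longest)

# Example from this memo: campaigns of 2, 3 and 4 batches
print(validated_campaign_length([2, 3, 4]))      # 2
# After a further passing 5-batch campaign
print(validated_campaign_length([2, 3, 4, 5]))   # 3
```

This reproduces the memo’s arithmetic: with campaigns of 2, 3 and 4 batches the validated length is two batches; adding a 5-batch campaign extends it to three.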

So far we have focused only on the maximum number of batches criterion. Obviously I will also have to consider the maximum elapsed time criterion. This can get complicated. If the first approach (starting with three one-batch campaigns) is used, the easiest way to handle it is to always target the maximum batches and the maximum elapsed time in the same protocol. In this approach, the maximum elapsed time for the extended campaign (the five-batch or ten-batch campaign) is ordinarily longer than the maximum elapsed times for one-batch or three-batch campaigns. This may mean extending the interval between batches so that the maximum batches and maximum elapsed time coincide. For the second approach (“the shortest of the three longest campaigns”), the same principle applied to “number of batches” can be applied to “elapsed time”. That is, my maximum campaign length is governed by the more stringent of “the shortest of the three longest campaigns by number of batches” and “the shortest of the three longest campaigns by elapsed time”.
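Applying the rule to both criteria independently can be sketched as follows (a hypothetical sketch assuming each campaign is recorded as a (batches, elapsed-days) pair and each criterion is evaluated on its own):

```python
def validated_limits(campaigns):
    """Apply the 'shortest of the three longest' rule separately to
    batch counts and elapsed days.

    campaigns: list of (batches, elapsed_days) for passing campaigns
    with comparable residue data. Returns (max_batches, max_days);
    in use, whichever limit is reached first would end a campaign.
    """
    if len(campaigns) < 3:
        return None
    top_batches = sorted((b for b, _ in campaigns), reverse=True)[:3]
    top_days = sorted((d for _, d in campaigns), reverse=True)[:3]
    return min(top_batches), min(top_days)

# Hypothetical campaign history: (batches, elapsed days)
print(validated_limits([(2, 5), (3, 8), (4, 6), (5, 12)]))  # (3, 6)
```

Here the three longest campaigns by batches are 5, 4 and 3, and by elapsed time 12, 8 and 6 days, so the validated maximum would be a campaign of up to three batches and up to six days, whichever comes first.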

One additional consideration in campaigns with extended batch numbers/elapsed times is the Dirty Hold Time (DHT) in the protocol run at the end of the campaign. My preference is to always do the cleaning after a specified DHT, which then becomes part of the validated process. This means that cleaning in subsequent campaigns should always be done in a time less than or equal to the validated DHT. I realize that in an extended campaign, the time between batches may effectively more than compensate for the DHT at the end of the campaign. Furthermore, for solid dose products the DHT may not be material to the difficulty of cleaning at the end of a campaign.

This Cleaning Memo does not address all the possibilities or situations for extending the validated campaign length. However, the principles utilized may be used for other situations.
