Issues in Equipment Grouping

Jun 2014

Equipment grouping (also called matrixing, a family approach, or bracketing) is an option companies are increasingly considering as part of a risk-based approach to cleaning validation. In a grouping approach, certain equipment items are considered equivalent for cleaning validation purposes. This Cleaning Memo will cover two issues: first, how equipment is considered “equivalent”, and second, how many validation protocols are performed and on what equipment. Just for clarification, remember that we are considering equipment items that are used to manufacture a certain group of products and are cleaned by the same cleaning procedure.

For the first issue (what’s “equivalent”), if we have equipment that is identical by IQ and OQ, then it is clearly the case that these items are good candidates for equipment grouping. I also advocate that the criterion of “identical by IQ and OQ” be loosened slightly to be “equivalent by IQ and OQ in all things related to the cleaning process”. That is, if two pieces of equipment are identical except for the software controlling mixing (for example), and if the mixer action is not part of the cleaning procedure, then the software difference is not really relevant to whether the equipment is equivalent for cleaning purposes.

Let’s move to another case, that of equipment which is the same in all respects except for size. Let’s say I have a 500 L storage tank and a 1000 L storage tank, and assume they are cleaned by the same process. For clarification, I don’t necessarily mean by the same SOP number; there could be two SOPs, but the cleaning processes would be essentially the same. If I cleaned the two tanks by a CIP process, it is likely that the SOP number would be different (or perhaps the options selected within one SOP would be different); however, if I had a written rationale and could demonstrate that the critical process parameters (such as time, temperature, cleaning agent concentration, and flow/turbulence) were the same, I could treat the different sizes as a group. How much variation in size is allowed is certainly open to discussion. However, my answer would depend on whether one size was selected as the worst case, or whether I decided that it was unclear whether the largest or smallest was the worst case. Note: In some cases I could have a reason why “larger” is more difficult to clean, and in other cases why “smaller” is more difficult to clean. It is not automatic that larger is always the worst case, or that smaller is always the worst case. The answer is situational, and the result might be that a clear decision cannot be made.

Another case might be equipment which is similar except for one feature. A simple example is that I have two 500 L storage tanks, except that one has baffles and one does not have baffles. However, they are both cleaned by the same CIP process. This is a case where they are not exactly equivalent, but I could consider them in the same group if I were able to establish that the added feature made it more (or less) difficult to clean. In the case of the two storage tanks, I would argue that the baffles made that situation more difficult to clean, and therefore in the group consisting of the two tanks I would perform my validation runs on the worst case (the tank with baffles).

Okay, we’ve covered the first issue, that of what’s in a group (how we define what’s equivalent). Let’s move to the next issue, which is how many validation runs should be performed. Although the FDA process validation guidance has gotten away from specifying the number of runs for manufacturing process validation, particularly for situations where a grouping approach is used, I generally recommend that three validation (qualification, if you prefer that terminology) runs be performed in a grouping approach for cleaning validation. There may be a good rationale for reducing the number of runs; however, by using a grouping approach, one is already designing a more efficient validation process. I think it prudent from a business perspective, where a manufacturer has more “eggs in one basket”, to perform three protocol runs.

Let’s assume for now that you accept my contention that three runs are performed. The next question is what equipment those runs are performed on. Assuming the equipment is truly identical by IQ and OQ, my preference is to spread the runs out on as many individual items as feasible. So, if there are two identical equipment items, I perform two runs on one item and one run on the other. If for some reason (scheduling, for example) that was not possible, I would accept three runs on only one equipment item. If that were the case, however, I would try to make sure any future requalification runs involved the other equipment item. If there were three identical items, I would prefer one run on each of the three items. If there were four items, I would perform one validation run on each of three different items, meaning that a validation run was not performed on one item. Should I be worried about that case? Well, if I were truly convinced that all four items were identical by IQ and OQ, my only concern would be to make sure that any future qualification runs included that fourth item. I would use the same principle if there were five, six or however many identical equipment items.
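The run-allocation preference described above (spread three runs over as many identical items as feasible, and flag any uncovered items for future requalification) can be sketched in code. The `assign_runs` function below is a hypothetical illustration, not part of any memo or regulation; it assumes the items have already been justified as identical by IQ and OQ.

```python
def assign_runs(items, total_runs=3):
    """Distribute validation runs across identical equipment items,
    spreading runs over as many items as feasible (up to total_runs).
    Items beyond total_runs receive no initial run and are returned
    separately, to be covered in future requalification runs."""
    covered = items[:total_runs]
    runs = {item: 0 for item in covered}
    for i in range(total_runs):
        runs[covered[i % len(covered)]] += 1
    deferred = items[total_runs:]  # cover these at requalification
    return runs, deferred
```

For example, with two identical tanks this yields two runs on one and one run on the other; with four identical tanks, one run on each of three tanks, with the fourth deferred to requalification.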

When there are multiple identical items, some of you may want to perform three validation runs on one item and then one validation run on every other item in the group. Certainly that is acceptable from a compliance perspective (more always seems to be better from a compliance perspective). However, if I truly have support for the equipment being identical, what value does it add? This approach may just call into question the rationale for why I judged the equipment identical in the first place.

What about the situation where I have equipment that differs only in size? Let’s say I have three storage tanks, identical except for size, and I have no basis for saying one size is clearly more difficult to clean. One is 1000 L, one is 800 L and one is 600 L. There are two approaches commonly used here. One approach is to perform three validation runs on the largest and three validation runs on the smallest, which means acceptable validation for all group members from 600 L up to 1000 L. A less conservative position is to do a total of three runs, including at least one run on the largest and at least one run on the smallest. While this might mean that no runs were done on the middle size, my preference would be to do one on each size for a total of three. If there were four or more storage tanks that were identical except for size, then if I used this latter approach, I would omit at least one of the middle-size tanks. Unless I had information to the contrary, that would be acceptable to me (of course, if I had information to the contrary, I might not even group the tanks together). Note that this approach of selecting the largest and smallest is a case that most appropriately could be called “bracketing”.
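The “less conservative” bracketing option above can also be sketched. The `bracket_runs` function below is a hypothetical helper (not from any memo or guidance): it always selects the smallest and largest tanks, then fills any remaining runs with middle sizes, omitting middle sizes once the run budget is used.

```python
def bracket_runs(tank_sizes, total_runs=3):
    """Select tanks (by size, in litres) for bracketing runs:
    always include the smallest and largest, then fill the
    remaining run slots with middle sizes. Middle sizes beyond
    the run budget are omitted from the initial validation."""
    ordered = sorted(tank_sizes)
    chosen = [ordered[0], ordered[-1]]  # bracket the extremes
    for size in ordered[1:-1]:          # middle sizes, if room
        if len(chosen) >= total_runs:
            break
        chosen.append(size)
    return sorted(chosen)
```

With three tanks (1000 L, 800 L, 600 L), all three are selected; with four tanks, one middle size is omitted, consistent with the approach described above.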

Let’s consider the situation discussed above, where I grouped equipment together, but was clearly able to identify one feature that caused at least one of the items in the group to be more difficult to clean. This is a case where, if one item were clearly more difficult to clean (the tank with baffles, in the example given previously), I would perform three validation runs on that one tank, and no validation runs on the easier-to-clean tank. If there were two tanks with baffles in this group (assuming the group has at least three members), then my preference would be to perform two runs on one baffled tank and one run on the other baffled tank. With three baffled tanks, my preference would be to perform one run on each of the baffled tanks. Using this same principle, I think you can guess what my preference would be if there were four baffled tanks in the group. In any of these situations involving baffled and non-baffled tanks, it is certainly possible to do additional validation runs on the non-baffled tanks. However, it baffles me as to what value that adds to my cleaning validation program.

The purpose of this Cleaning Memo is to explore two issues in equipment grouping. The first is how to form a group, and the second is which group members to perform validation protocols on. While I have given some guidelines, the key in each situation is to document rationales for why you do what you do. In a risk-based approach, a clear understanding of the cleaning process situation is critical.
