Alert and Action Levels

Oct 2022

This Cleaning Memo is indirectly related to those of the prior three months in that it deals with assessing the “health” of a cleaning validation program. However, the focus here is on individual data values in routine monitoring, and specifically on higher-than-normal results that may call for more attention to be paid to specific aspects of my cleaning validation program. These are individual data values that suggest something may be different, changing, and/or wrong, such that I need to address it.

What is done in some cases is either to re-evaluate the data (such as is done in an OOS, or Out of Specification, program) or to pay more attention to future similar data to determine any trends. In this case an “alert level” is set. The alert level is a data value (typically much less than the calculated acceptance limit in the original protocol) which, if exceeded, means I would be concerned (but not overly concerned) and would therefore take such steps as suggested above. In all cases, however, I would document the event as exceeding the alert level so that I could effectively trend such events.

In other cases, where the data value either exceeds my cleaning validation limit or approaches that cleaning validation limit, I would take more significant steps to address the higher level of concern. While I would certainly perform an OOS investigation, I might take further steps to investigate any possible cause and its impact, as well as take steps to prevent it from happening again. In this case an “action level” is set, which if exceeded should result in more concern (as compared to exceeding an alert level).

Some companies prefer to call these “alert limits” and “action limits”. While that “limit” terminology can be used (and is used by many), I tend to avoid it and just refer to “alert levels” and “action levels”. This use of “levels” rather than “limits” terminology is more consistent with what is done in cleanroom environmental monitoring programs. The reason I choose the “levels” terminology is that I don’t want to confuse it with the calculated “limits” for my cleaning validation protocol. If I exceed a cleaning validation limit in a protocol, then that protocol typically fails. If I exceed an alert or action level in routine monitoring, then it is typically not a failure; it is merely a situation that I should address in some way to make sure I am not in a position where I am compromising patient safety and/or product quality or where my cleaning process may be changing in an unplanned way.

So the next question is “How do I set alert and action levels?” Practical options depend on the life cycle stage at which I am trying to set these levels. We will consider several, but not all, of the possibilities.

In an early design stage, as I am designing my cleaning process and the associated cleaning validation protocol, I might not have a lot of data on a new cleaning process to establish alert/action levels. However, I might establish tentative alert/action levels either as a percentage of a calculated sample limit or based on actual data from sufficiently similar cleaning processes. For example, using the “percentage of limit” approach, the alert level may be somewhere between 50% and 75% of the calculated sample limit, and the action level may be somewhere between 75% and 100% of the calculated sample limit. If I were to use the approach of “data from sufficiently similar cleaning processes” (which could include data from any design studies done for the new product as well as any relevant data from cleaning of other products), I might try to determine an average (the “mean”) from a sufficient number of different sampling locations, and set the alert level as the “mean plus two standard deviations” and the action level as the “mean plus three standard deviations” (provided the “mean plus three standard deviations” does not exceed the calculated limit). Regardless of which approach I were to use, if any result exceeded the calculated limit, I should treat it with an “action level” response.
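As a concrete illustration, here is a minimal Python sketch of the two approaches just described. The function names, the default percentages, and the example data and limit are hypothetical values chosen within the ranges mentioned above, not prescribed numbers.

```python
# Minimal sketch of the two tentative alert/action level approaches described
# above; all names and example numbers are illustrative assumptions.
import statistics

def levels_from_percentage(sample_limit, alert_pct=0.60, action_pct=0.85):
    """Tentative alert/action levels as a fraction of the calculated sample limit.

    The memo suggests roughly 50-75% for alert and 75-100% for action;
    the defaults here are arbitrary points inside those ranges.
    """
    return alert_pct * sample_limit, action_pct * sample_limit

def levels_from_similar_data(results, sample_limit):
    """Tentative levels as mean + 2 SD (alert) and mean + 3 SD (action),
    using data from sufficiently similar cleaning processes, capped at the
    calculated sample limit."""
    mean = statistics.mean(results)
    sd = statistics.stdev(results)
    alert = mean + 2 * sd
    action = min(mean + 3 * sd, sample_limit)  # never above the calculated limit
    return alert, action

# Example with made-up swab results (µg/swab) and a made-up limit of 10 µg/swab
alert, action = levels_from_similar_data([1.2, 0.8, 1.5, 2.0, 1.1, 0.9], 10.0)
print(f"alert = {alert:.2f}, action = {action:.2f}")
```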

While this approach to alert/action levels may be appropriate initially, I may eventually develop sufficient data on a given cleaning process to adopt (under change control) more precise values. To do so, I might be in a situation with adequate data from multiple cleaning events involving the same product to utilize the “mean plus X standard deviations” approach (with X being about 2 for the alert level and X being about 3 for the action level). While it is possible to include only data from a specific product for this approach, it is also possible to include data from sufficiently similar cleaning processes.

If I want to “improve” my alert/action levels as I move from the design/qualification stage to routine monitoring in the commercial production stage, it is preferable (though not an absolute requirement) that I not make the alert/action levels less stringent unless I have a sound scientific rationale for doing so. The reason for this is partly “optics”; for example, if I start increasing the value for an action level, I may find it more difficult to convince internal reviewers/auditors that there was not a problem with my cleaning process using the older, more stringent (that is, lower) alert/action levels. In other words, most people would not question data that supports the use of a lower value for an alert/action level, while they may go over the data “with a fine-tooth comb” if I am trying to make the alert/action level higher. A corollary to this “optics” issue is to be careful about setting excessively tight alert/action levels in the design stage, where there generally is more limited relevant data.

The next sections deal with what to do if alert/action levels are exceeded in routine monitoring. In considering this, it is important to note what is stated in the 2011 FDA process validation guidance. Section IV.D. of that guidance states that in Stage 3 “trending and calculations are to be performed and should guard against overreaction to individual events….” [italics added]. Just for balance, the FDA follows that comment about overreacting to individual events by noting that companies should also have procedures to guard “against failure to detect unintended process variability”. This balance, of course, is where appropriate scientific and professional judgment come into play.

If an alert level is exceeded, that incident should be documented. It may lead to just more attention paid to the cleaning process and to future sampling data. And even though it is normally called an “alert” level, it may lead to a specific action. An example of a specific action following an “alert” is an OOS investigation. Here again, the FDA’s caution not to “overreact” to individual events should be considered.

If the action level is exceeded, it is critical to “react” appropriately. Certainly more activities and/or investigations may be needed. High analytical result values generally require an OOS investigation. In addition, an individual high value above the sampling cleaning validation limit may be addressed by determining, based on a stratified sampling evaluation, what the effect of the total carryover to the next product could conceivably be (see the March/April/May 2010 Cleaning Memos for more on stratified sampling). Another step that may be taken in the case of exceeding an action level might be re-cleaning of the equipment (provided that processing of the next product has not yet started). That re-cleaning may also involve some level of re-sampling. A result of such an investigation may be to increase future monitoring or to make a planned change in the overall cleaning process.
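To make the stratified carryover idea concrete, here is a hedged Python sketch in which each sampled location is assumed to represent a stratum of equipment surface area, and the summed carryover is compared to a maximum allowable carryover for the next product. The structure, names, and numbers are illustrative assumptions, not the specific method detailed in the 2010 Cleaning Memos.

```python
# Hedged sketch of a stratified carryover evaluation: each sampled location is
# taken to represent a stratum of equipment surface area, and total carryover
# is compared against the maximum allowable carryover (MAC) for the next
# product. All names and numbers are made-up illustrations.

def total_carryover(strata):
    """strata: list of (residue_per_cm2, represented_area_cm2) tuples."""
    return sum(residue * area for residue, area in strata)

strata = [
    (0.05, 2000.0),   # e.g., tank walls: 0.05 µg/cm² over 2000 cm²
    (0.40,  150.0),   # e.g., a harder-to-clean valve area with the high result
    (0.03, 1000.0),   # e.g., piping
]
mac_ug = 500.0        # made-up maximum allowable carryover for the next product
carryover = total_carryover(strata)
print(f"total carryover = {carryover:.1f} µg; within MAC: {carryover <= mac_ug}")
```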

Note that multiple individual samples exceeding the alert level in one cleaning event may require that the situation be treated by the action level criteria. Furthermore, continually exceeding the alert level for the same sampling location over multiple cleaning events may require that the situation be addressed by the action level criteria.
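A simple sketch of how those two escalation triggers might be checked is given below; the specific thresholds (two exceedances in one event, three consecutive events for one location) are arbitrary assumptions that a firm would define in its own procedures.

```python
# Illustration of the two escalation triggers described above; the thresholds
# are hypothetical and would be set in a firm's own procedures.

def escalate_to_action(event_results, alert_level, history,
                       max_per_event=2, max_consecutive=3):
    """event_results: {location: value} for one cleaning event.
    history: {location: consecutive alert exceedances before this event}."""
    exceed_now = [loc for loc, val in event_results.items() if val > alert_level]
    if len(exceed_now) >= max_per_event:
        return True  # multiple samples above the alert level in one event
    for loc in exceed_now:
        if history.get(loc, 0) + 1 >= max_consecutive:
            return True  # same location above the alert level event after event
    return False

print(escalate_to_action({"valve": 7.8, "wall": 2.1}, alert_level=7.0,
                         history={"valve": 2}))  # True: third consecutive exceedance
```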

Also, alert and action levels may be set on data other than my protocol residue sampling data. For example, alert/action levels may be set on cleaning process parameters such as temperature or the actual dirty hold time. Further, for routine monitoring I may analyze samples using chemical analytical methods other than those used for the protocol residue limits. For example, while the analytical method for sampling in a protocol may be HPLC, for routine monitoring I may just utilize a TOC method.

As suggested earlier, continually exceeding an action level may not indicate patient safety or product quality issues. However, such a situation may suggest the need to review what is done in the design phase for cleaning validation. After all, the cleaning process should be designed so that all sampling locations (and specifically the “worst case” locations) have data values well below the calculated acceptance limit. However, it may not be the cleaning process design that needs improvement; rather, the problems may be related to how the cleaning process is implemented (including operator training).

Finally, it is important to distinguish between situations where I may be compromising patient safety (such as in exceeding the calculated protocol limit) and situations where there may be an issue with the process control (or consistency) of my cleaning process. Thinking about the reasons for doing routine monitoring may help avoid immediately trying to implement inappropriate “cookie-cutter” approaches for the assessment of data for routine monitoring and process control.

Copyright © 2022 by Cleaning Validation Technologies
