Process Capability and Statistical Process Control

Shewhart's insight is captured by the model y = f(x) + ε, which says that the result of any measurement, y, has two components. The first, f(x), is the signal, the effect of the controlled factor or factors; the second, ε, is the noise. The good news is that through replication the noise can be brought down to manageable levels so the signal can be heard over it. His early control chart capitalized on this idea by reducing the noise in order to detect process incursions, today called assignable sources of variation. It's important to point out that ε can consist of several components: for example, noise related to the measurement system, another component due to variations in starting materials, and yet another due to processing conditions. In certain cases these components may be separable, but in many cases they will remain confounded. Their composition and possible existence are important to recognize and take into account.


Shewhart’s idea took hold, and over the decades we have seen many authors build on it to produce Statistical Quality Control, Statistical Process Control, and Total Quality Management. It has led to the present-day Lean Six Sigma initiative which sequences and links a variety of statistical methods and augments earlier programs by incorporating financial analyses. Resulting financial savings roll into billions of dollars (Hare 2003).




21.3 Process Capability and Performance


In business parlance, process capability is often referred to as entitlement. That is, if an owner or manager purchases a piece of equipment he or she is “entitled” to receive its most consistent results. That terminology may suffice in the board room, but it doesn’t really capture the essence of the concept or aid practical application.

Admittedly, the concept of process capability is a bit nebulous and subject to various interpretations. We choose to think of it as the inherent, intrinsic variation of the process. It is the variation the process exhibits when it runs at its absolute best. There are no exterior disturbances, no haphazard shocks to the process, no tweaks, and no adjustments, illicit or otherwise; only pure random variation. For pharmaceutical drug products, this consists of both dosage unit variation as well as analytical uncertainty. Naturally, the variation due to process capability alone should be very much smaller than the specification range. For example, if the response of interest is normally distributed, its process capability standard deviation should be smaller than one-sixth of the specification range to assure consistency in producing the specified product.
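The one-sixth criterion above is, in effect, the familiar Cp capability index. A minimal sketch follows; the function name and the 10.0–12.0 mm specification limits are hypothetical illustrations, not values from the text:

```python
def capability_ratio(usl, lsl, sigma_cap):
    """Cp-style ratio: specification range divided by six capability
    standard deviations. A value above 1 means the +/- 3-sigma spread
    of a normally distributed response fits inside the specifications."""
    return (usl - lsl) / (6.0 * sigma_cap)

# Hypothetical thickness specifications of 10.0-12.0 mm and a
# capability standard deviation of 0.30 mm:
print(round(capability_ratio(12.0, 10.0, 0.30), 2))  # prints 1.11
```

A ratio above 1 corresponds exactly to the capability standard deviation being smaller than one-sixth of the specification range.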

Performance variation, on the other hand, is a measure of the variation experienced by the consumer or end user. It includes capability variation along with all the other sources of variation such as environmental changes, variation induced by different batches using various raw material supplies, process adjustments during time of batch manufacture, ambient conditions and so on. Ideally, we would want performance variation to be as close to capability variation as possible. But the difference between performance and capability should be a major subject of attention: if the difference is large, it is likely that the process would benefit from interventions to reduce the difference. This may be true even when all production is within specification because departure from process capability may result in production line inefficiencies.

Assessing process capability is no easy chore. Some textbooks teach users to wait until the process reaches equilibrium, then take roughly 30 samples and calculate their standard deviation. We have some problems with that advice. How might we know when the process reaches a state of equilibrium? How do we know that the recommended samples are representative of the process, much less truly representative of process capability? So the measurement of process capability is more complicated than that.

For example, suppose we have a rotary tablet press that produces 30 tablets per rotation, one from each of 30 pockets, and let's say we are interested in tablet thickness. We might want to base our estimate of process capability on the standard deviation calculated from 30 consecutive tablets. Better yet, we might assure representativeness by taking those 30 consecutive tablets repeatedly over, say, 8 time periods spaced evenly throughout a production run. We would pool the 8 individual standard deviations, yielding a weighted capability estimate based on 8 × (30 − 1) = 232 degrees of freedom. For greater assurance yet, we might want to include several production runs with perhaps fewer sampling times per run. The point is that estimates of process capability made this way would be both representative and independent of process mean changes that might take place from sampling time to sampling time.
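The pooling just described can be sketched in a few lines. The helper name (pooled_capability_sd) and the simulated bursts are our own illustrative assumptions, not code from the text; the function pools the within-burst sums of squares exactly as the paragraph describes:

```python
import math
import random

def pooled_capability_sd(samples):
    """Pool within-sample variances across bursts of consecutive tablets.
    Each burst contributes its sum of squared deviations and n - 1
    degrees of freedom; the pooled SD is sqrt(total SS / total df)."""
    ss, df = 0.0, 0
    for s in samples:
        mean = sum(s) / len(s)
        ss += sum((x - mean) ** 2 for x in s)
        df += len(s) - 1
    return math.sqrt(ss / df), df

# Eight simulated bursts of 30 consecutive tablets (true sigma = 0.3):
random.seed(0)
bursts = [[11.0 + random.gauss(0, 0.3) for _ in range(30)] for _ in range(8)]
sd, df = pooled_capability_sd(bursts)
print(df)  # prints 232, i.e., 8 * (30 - 1) degrees of freedom
```

Because each burst is centered on its own mean, shifts in the process mean between sampling times do not inflate this estimate.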

A purist may want to drill down even further. What is the variation experienced among repeated tablet thicknesses from each of the 30 pockets? You could measure that by sampling 60 consecutive tablets and pairing tablet 1 with tablet 31, 2 with 32, and so on, to measure the within-pocket variation. Isn't the resulting variation also a component of process capability? Of course it is. But practicality steps in at this juncture to recommend that we follow the technique described above and let within-pocket variation be absorbed into the capability estimate as a random component. A check to guard against blunders caused by omitting the within-pocket detail can be made through careful examination of the control charts to be described.

The measurement of process performance is a bit more complicated. Suppose we have the same tablet press as mentioned above. To measure the thickness variation experienced by the consumer, we would want to assure representation of normal production. Sampling within one batch alone is not adequate. Instead, we should have upwards of 20 or 30 batches in our sample. In that case, we would carry out a variance components analysis, combining the within- and between-batch variance components to form the estimate of the performance variance.
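For a balanced design, the within- and between-batch combination can be sketched with a simple method-of-moments calculation. The helper name and toy data below are hypothetical, and a real analysis (as in Table 21.3 later in this chapter) would typically use REML, which also handles unbalanced data:

```python
import statistics as st

def performance_variance(batches):
    """Balanced one-way random-effects sketch: performance variance is
    the within-batch component plus the between-batch component,
    estimated from the batch means by method of moments."""
    n = len(batches[0])                     # units per batch (balanced)
    ms_within = st.mean([st.variance(b) for b in batches])
    ms_between = n * st.variance([st.mean(b) for b in batches])
    # E[MS_between] = sigma2_within + n * sigma2_between; truncate at 0
    var_between = max(0.0, (ms_between - ms_within) / n)
    return ms_within + var_between

# Toy data: three batches of three units each
print(round(performance_variance([[10, 11, 12], [13, 14, 15], [10, 12, 14]]), 3))  # prints 3.667
```

Note the truncation at zero: when batch means vary less than the within-batch noise would predict, the between-batch component is reported as zero rather than negative.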


21.4 Assessing Process Capability


The 2011 Process Validation guidance speaks to three stages: (1) Process Design, (2) Process Performance Qualification and (3) Continued Process Verification. During Stage 1, DoE tools are brought to bear to study important process parameters and linkages to critical quality attributes. During Stages 2 and 3, product batches are produced and evaluated for many quality related parameters including content uniformity, assay, moisture, degradation compounds, and in the case of tablet manufacture, hardness, friability and weight. Additional measurements will be taken depending on the nature of the product. Associated with each of these measurements is a target value and accompanying specification limits. One goal during these stages of new drug manufacture is to assess consistency with compendial requirements and preliminary specifications. The methodology given in ASTM 2709–12 using the specified acceptance test would be one way to achieve this goal. The methodology computes, at a prescribed confidence level, a lower bound on the probability of passing the acceptance procedure. A Bayesian calculation based on the posterior predictive distribution is also an appropriate method to achieve this purpose, and has the added benefit of providing inferences about future performance. Refer to Chap. 19 for greater detail about the Process Validation requirements.

Another goal, which is entirely consistent with the first, is to assess process capability. As discussed above, process capability is the inherent, intrinsic variation of the process. It is not necessarily what the process is doing or what it has done, so much as it is what the process can do when it operates at its best. And, of course, when it operates at its best, it is unencumbered by extraneous sources of variation such as system shocks that might result from such events as shift changes, climatic changes, raw material changes and other assignable causes of variation. In this context, "assignable" is meant to contrast with "common cause" variation. Another perspective on the interpretation of capability and performance concepts and indices is given in ISO 22514 Part 1 (International Standards Organization 2009).

Appropriate sampling is essential if we are to isolate the common cause or capability variation from the overall variation experienced during the initial phase of new drug manufacture. Careful consideration should be given to sampling in order to assure that the assessment of capability variation is not contaminated by assignable sources of variation. The brief rotary tablet press example described above exemplifies the capability assessment strategy. There, we spread sampling throughout the production run, but to assess capability, we bore down to very local variation assessment, pooling it across the entire production run. By sampling in that manner, we strive to be fully representative of the process, beginning to end, and we build in a check against nonhomogeneous variation across the production period. Such nonhomogeneous variation would be revealed by simple scatter plots showing replicated observations by time. If present, it might indicate erratic process behavior or it might indicate that there are additional sources of variation not accounted for by our sampling. In either event, the accumulation of process data from systematic samples provides information that would otherwise not be available. If there are sources of variation for which we have failed to account, a repeat of the sampling should be done with those taken into consideration.

As an example, suppose a process of interest produces tablets, and we are concerned about the uniformity of tablet thickness. The tableting device has 10 pockets which engineers assure us are precisely milled to the same dimensions. The sampling plan is to measure thicknesses twice per pocket at each of 36 times spaced uniformly throughout the production run. The resulting data are shown partially in Table 21.1.


Table 21.1
Replicate tablet thickness by sample time and pocket position

Position  Replicate  Time 1  Time 2  Time 3  ...  Time 36
1         1          11.30   10.60   10.57   ...  11.08
2         1          10.98   10.82   11.24   ...  11.09
3         1          11.11   11.35   11.14   ...  11.09
4         1          11.20   10.52   11.28   ...  10.45
5         1          11.25   11.11   10.37   ...  10.68
6         1          10.76   10.85   10.50   ...  10.39
7         1          10.42   11.30   11.13   ...  10.54
8         1          10.81   10.53   10.89   ...  10.41
9         1          10.70   10.82   11.26   ...  10.31
10        1          10.67   11.29   10.64   ...  10.42
1         2          11.13   10.98   10.87   ...  10.27
2         2          10.79   10.47   10.65   ...  10.64
3         2          11.21   11.18   10.88   ...  11.09
4         2          10.88   10.45   11.30   ...  10.85
5         2          10.51   11.07   10.40   ...  11.19
6         2          10.57   10.54   10.73   ...  10.46
7         2          10.72   10.87   11.23   ...  10.90
8         2          10.72   11.26   11.06   ...  10.34
9         2          10.67   10.98   10.61   ...  10.68
10        2          11.06   10.52   10.98   ...  10.39

A graph of the data is appropriate prior to any kind of analysis (see Fig. 21.1).



Fig. 21.1
Variability chart for thickness—first five time periods only

There should be no great surprises. However, we might wonder why sometimes the spread of duplicate observations is very small, while at other times it is considerably larger. Are we looking at random variation? Did the engineers get it wrong? We’ll come back to those questions.

In the meantime, we might consider what the data analysis says. A fixed effects analysis of variance (ANOVA) model with terms shown in Table 21.2 shows an error or residual mean square of 0.089. This is actually composed of pooled, single-degree-of-freedom variances from the duplicate readings at each combination of time and position. As the duplicates are taken back to back, there is no time for assignable cause variation to creep into their calculation. Therefore, they measure capability. The square root of this mean square, which comes to 0.299, is probably a close estimate of the true process capability standard deviation. This estimate includes measurement error which, in many cases, may be assumed to be small. However, if there is any doubt, a complete measurement systems analysis (MSA, Montgomery 2000) should be undertaken.
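The pooling of single-degree-of-freedom variances can be made concrete with the duplicate pairs themselves. As a sketch (the function name is our own, and only the first time period's ten pairs from Table 21.1 are used), each pair (a, b) contributes (a − b)²/2 as a one-degree-of-freedom variance estimate:

```python
import math

def capability_from_duplicates(pairs):
    """Average the single-degree-of-freedom variances (a - b)**2 / 2
    from back-to-back duplicate pairs, then take the square root."""
    ms = sum((a - b) ** 2 / 2.0 for a, b in pairs) / len(pairs)
    return math.sqrt(ms)

# Duplicate pairs for Time 1, positions 1-10, from Table 21.1:
pairs = [(11.30, 11.13), (10.98, 10.79), (11.11, 11.21), (11.20, 10.88),
         (11.25, 10.51), (10.76, 10.57), (10.42, 10.72), (10.81, 10.72),
         (10.70, 10.67), (10.67, 11.06)]
print(round(capability_from_duplicates(pairs), 3))  # prints 0.225
```

With only 10 of the 360 pairs, this local estimate of 0.225 differs from the fully pooled 0.299 mainly through sampling variability; pooling across all times and positions is what gives the full estimate its 360 degrees of freedom.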


Table 21.2
Fixed effects analysis of variance—thickness data

Source          DF   Sum of squares   Mean squares   F ratio   Prob > F
Position         9             0.52          0.058     0.647      0.757
Time            35             6.84          0.195     2.189    < 0.001
Time*Position  315            25.61          0.081     0.911      0.804
Error          360            32.15          0.089

Notice also from the ANOVA table that the mean square for Time-by-Position is very close to the error mean square. This suggests that there is no "weaving" or changing among positions from sampling time to sampling time. Further, we can see that position differences are small, but that there are time differences in relation to the error mean square (p < 0.001): the process mean is shifting throughout the batch manufacture, and the process is not, strictly speaking, in control.

Now, if we approach the situation from a different perspective and declare all effects random, we find variance components computed using a restricted maximum likelihood (REML) approach as shown in Table 21.3.


Table 21.3
Variance components model—thickness data

Random effect    Variance component   Percent of total
Position                    0                    0
Time                        0.006                6.1
Time*Position               0                    0
Residual                    0.085               93.9
Total                       0.091              100.0

It shows that most of the variation is due to the process capability itself, but there is some variation due to time drift. An estimate of the performance variance is derived from the sum of the variance components. Some may point out that this is not entirely correct because there are actually fixed effects in the model. “Position” is a fixed effect, technically. So the estimate of performance variation should be taken as a useful approximation. It is 0.091, and it is a measure of the variation experienced by the end user of product from the batch under study. Its square root, the performance standard deviation, is 0.301. In those cases where the process capability is a smaller proportion of the total variance, the performance standard deviation will be large compared to the process capability standard deviation. More often than not, this will be the case in pharmaceutical manufacturing due to shifting means caused by assignable sources of variation, which may not be known.
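The arithmetic behind the performance estimate, together with a rough cross-check of the REML components against the ANOVA mean squares of Table 21.2, can be sketched as follows. The method-of-moments formulas assume the balanced all-random model, and the small differences from Table 21.3 (e.g., 0.0057 versus 0.006) reflect rounding and the REML fit:

```python
import math

# Variance components as reported in Table 21.3 (REML fit):
components = {"Position": 0.0, "Time": 0.006, "Time*Position": 0.0,
              "Residual": 0.085}
performance_var = sum(components.values())
print(round(performance_var, 3), round(math.sqrt(performance_var), 3))

# Method-of-moments cross-check from the Table 21.2 mean squares:
# in the balanced all-random model the expected mean squares give,
# for example, Var(Time) = (MS_Time - MS_TxP) / (n_pos * n_rep),
# with negative estimates truncated to zero.
ms_pos, ms_time, ms_tp, ms_err = 0.058, 0.195, 0.081, 0.089
n_pos, n_time, n_rep = 10, 36, 2
var_time = max(0.0, (ms_time - ms_tp) / (n_pos * n_rep))  # ~0.0057
var_pos = max(0.0, (ms_pos - ms_tp) / (n_time * n_rep))   # truncates to 0
var_tp = max(0.0, (ms_tp - ms_err) / n_rep)               # truncates to 0
print(round(var_time, 4), var_pos, var_tp)
```

The zero method-of-moments estimates for Position and Time-by-Position agree with the REML zeros, and the summed components reproduce the performance variance of 0.091 up to rounding.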

Of course, there are other considerations. After all, this is only one batch, and other batches may show different characteristics. The process performance qualification would benefit from replication across multiple batches. If necessary, additional replication of the same sampling plan across additional batches can be carried forward into the continued process verification stage. In the case of this example, given what we’ve seen from the first batch, it is probably not necessary to examine duplicate samples from each position and time combination for the remainder of the batches. But the concept of duplicate or back-to-back sampling should not be dismissed from future similar studies.