Statistical Process Control, The Alpha and Omega of Six Sigma, Part 4: Statistical Process Control and Six Sigma Projects

Rip Stauffer
11/03/2009

This is the last in a four-part series on Statistical Process Control. In the first segment, we looked into the history behind Statistical Process Control; in the second, we examined a few of the basic tools. The third segment covered capability concepts. In this final segment, we will examine how all these concepts and tools play into our Six Sigma DMAIC projects.


Where Does Statistical Process Control Fit into Six Sigma DMAIC?

Six Sigma was born in an organization that practiced Statistical Process Control as an ongoing management technique. At Motorola, inputs and outputs from most processes were monitored using control charts, and capability studies and indices were used to assess quality. The same is, unfortunately, not true in many of the organizations that have tried implementing Six Sigma in the 22 years since Motorola’s launch of its quality improvement program. As a result, the rigor around data-based management that was taken for granted at Motorola has often been side-tracked, leading to a lack of emphasis on Statistical Process Control. In many organizations’ Six Sigma DMAIC project flows, Statistical Process Control may not be mentioned at all; control charts appear in the Control phase but are not used in the other phases.

Wheeler1 suggested this as one of the flaws in most Six Sigma DMAIC approaches: The failure to investigate what can be accomplished by operating the process up to its full potential.

The failure to develop a well-defined baseline of performance in the Define phase of Six Sigma DMAIC leads to problems with quantifying benefits realistically, rework in later phases and—in some cases—Six Sigma project failure. So we need to agree on an answer to the question: What makes a good baseline?

In Six Sigma DMAIC, many statistical tools are used; some of them are inferential tools such as t-tests, ANOVA, and tests of normality. What is not commonly discussed in most statistics training is the importance of homogeneity in the data: homogeneity is simply assumed, on the basis of good random samples from homogeneous populations. For enumerative studies, which examine a reasonably static population, this view is often effective.
In any process improvement paradigm, however, the assumption of homogeneity is much trickier. Process improvement studies are inherently analytic studies; they deal with the cause system underlying a dynamic process. No population exists; we don’t extrapolate from a sample to a population, but from the present to the future. Before we can characterize a distribution, we have to know whether the data are homogeneous…did they all come from the same universe (process)? If not, we can’t say anything real about the distribution. Most tests and tools we use to deal with the data are only as good as the assumption of homogeneity.

Fortunately, in process studies, we have powerful Statistical Process Control tools for checking homogeneity in data collected over time. Process control charts provide strong evidence of data homogeneity and can often be used in place of some hypothesis tests when comparing processes. These charts are, of course, used in Statistical Process Control and often provide opportunities for ongoing improvement. Unlike Statistical Process Control or Kaizen, however, Six Sigma is not about continual improvement; it’s about breakthrough. Operationally defining breakthrough will be helpful to our discussion.

Breakthrough aims to move the performance to new levels, creating shifts in both the mean and the dispersion. This concept is illustrated in figures 1 and 2 (see reference 2): the mean and the average moving range from the baseline have shifted appreciably, yielding evidence of stability around a new mean with substantially reduced variation.

Figure 1—Evidence of Breakthrough

Figure 2—Confirmation of Breakthrough

So we can’t define breakthrough without the ability to separate common cause from special cause. Therefore, one minimum requirement for the baseline of any project’s progress measure is reasonable evidence of a state of statistical control! A quarterly number isn’t a baseline. An average from a process displaying statistical control is a coherent baseline, and any performance baseline derived without regard to statistical control is suspect, providing a poor basis for project justification. In addition, the hypothesis tests used in the Measure and Analyze phases of Six Sigma DMAIC are sometimes performed on sets of data that have not been checked for homogeneity; the results of such tests are irrelevant if the data come from an out-of-control process.

So, let’s begin to use the things we’ve learned about data analysis since Shewhart. Let’s stop assuming homogeneity in our data, and start gathering evidence for it early. Move the data collection and tracking for the project’s output (Y) variable into Define; use an appropriate control chart and monitor it throughout the Six Sigma project, as in the sketch below.
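To make that concrete, here is a minimal sketch (mine, not from the original article) of computing natural process limits for a baseline Y measure using an individuals and moving range (XmR) chart. The data and function name are illustrative assumptions; the 2.66 and 3.27 scaling factors are the standard XmR chart constants.

# Minimal sketch: baseline XmR (individuals & moving range) limits for a project Y measure.
# The scaling constants 2.66 and 3.27 are the standard XmR chart constants;
# the sample data below are purely illustrative.

def xmr_limits(values):
    """Return the natural process limits and the moving-range upper limit."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    x_bar = sum(values) / len(values)
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        "mean": x_bar,
        "lnpl": x_bar - 2.66 * mr_bar,   # lower natural process limit
        "unpl": x_bar + 2.66 * mr_bar,   # upper natural process limit
        "mr_ucl": 3.27 * mr_bar,         # upper limit for the moving-range chart
    }

baseline_y = [12.1, 11.8, 12.4, 12.0, 11.6, 12.3, 11.9, 12.2, 12.5, 11.7]
limits = xmr_limits(baseline_y)
out_of_control = [y for y in baseline_y if not limits["lnpl"] <= y <= limits["unpl"]]
print(limits, out_of_control)

Any point outside the natural process limits (or any run flagged by the usual detection rules) is a signal to investigate before treating the average as a baseline.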

Some advantages to this approach include:

  1. Clarity in the results of experiments, quick hits, and other actions.
  2. We can track identified process input variables (Xs) as well, so we can build the control plan for the final improved process before arriving at the Control phase of Six Sigma DMAIC.
  3. Reduction of uncertainty for tests of normality and for other hypothesis tests.
  4. More rational goal-setting; process behavior charts yield operational definitions for breakthrough and benchmarks for achievement.
  5. More rational project justification. Problems, and the scope of problems, are quantified and verified.

Statistical Process Control, Capability, and DPMO

One of the reasons defects per million opportunities (DPMO) has been adopted in Six Sigma is that you can use this measure to aggregate defects, defectives and non-conforming measurements into an overall estimate of proportion defective and yield. This proportion defective is then reverse-engineered to estimate the distance from the process mean to the nearest specification limit in sigma units, the process sigma (then the Motorola standard 1.5-sigma shift is applied).

While there are a number of demonstrable flaws in the use of the process sigma as a comparative metric, the use of DPMO is less troublesome, as long as we recognize that it is just a model and just an estimate, and don’t try to use it as an exact value. A good estimate of DPMO requires good operational definitions of units, defects, and opportunities. You also need to know which model you will use for the estimate: a continuous model like the Normal distribution, or a discrete model such as the Binomial or Poisson. In every case, a stable process is required…without a stable process, we have no reason to assume that our DPMO will be predictive, so it’s invalid on its face.
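As a rough sketch of the arithmetic just described (the function name and example value are mine; the 1.5-sigma shift is the Motorola convention mentioned above):

# Sketch: converting an observed DPMO into the conventional "process sigma" metric.
# Uses only the standard normal model; the 1.5-sigma shift is the Motorola convention.
from statistics import NormalDist

def process_sigma(dpmo, shift=1.5):
    """Long-term DPMO -> short-term 'process sigma' under the usual convention."""
    p_defect = dpmo / 1_000_000              # proportion defective
    z_long_term = NormalDist().inv_cdf(1 - p_defect)
    return z_long_term + shift               # add back the assumed 1.5-sigma shift

print(round(process_sigma(3.4), 2))          # 3.4 DPMO -> approximately 6.0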

With continuous data, we use a capability study, which is a tool for estimating "elbow room" in a continuous process. Where we have a specification limit (or two), we take the short-term variation (estimated by control chart methods) and compare it to the specification limits to get the distance from the mean to the nearest specification limit in sigma units; the percentage of the curve falling outside the limits provides an estimate of expected non-conformities. Some software (including Minitab) also estimates long-term capability using the overall standard deviation; this is intended to capture defects that might fall outside the limits due to small undetected shifts in the process. Note: Any capability study also requires a stable process to be valid.
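Here is one hedged sketch of that comparison, estimating the within-process sigma from the average moving range (one common control-chart-based estimator); the specification limits, data, and function name are purely illustrative.

# Sketch: short-term capability and expected nonconforming fraction for a stable process.
# Within-process sigma is estimated from the average moving range (mr_bar / 1.128),
# one common control-chart-based estimator; spec limits and data are illustrative.
from statistics import NormalDist, mean

def capability(values, lsl, usl):
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mu = mean(values)
    sigma_within = mean(moving_ranges) / 1.128           # d2 for subgroups of size 2
    cpk = min(usl - mu, mu - lsl) / (3 * sigma_within)   # distance to nearest spec, in 3-sigma units
    nd = NormalDist(mu, sigma_within)
    p_out = nd.cdf(lsl) + (1 - nd.cdf(usl))              # expected fraction outside the specs
    return cpk, p_out * 1_000_000                        # Cpk and expected ppm nonconforming

data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.2]
print(capability(data, lsl=9.0, usl=11.0))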

If you are assuming the Binomial distribution (counts of items, where the numerator and denominator are in the same units and you can count both occurrences and non-occurrences), the DPMO is easy. These are defectives data; either the unit is good or it isn’t, so there is only one opportunity per unit. The average proportion defective from a stable process is the DPO; subtract it from one for the yield and multiply it by 1 million for the DPMO.
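For instance, a minimal sketch with made-up counts, assuming the underlying process has already been shown to be stable:

# Sketch: DPO/DPMO and yield for defectives (Binomial-type) data from a stable process.
defective_units = 37
units_inspected = 5_000

dpo = defective_units / units_inspected   # one opportunity per unit, so p-bar is the DPO
yield_fraction = 1 - dpo
dpmo = dpo * 1_000_000
print(dpo, yield_fraction, dpmo)          # 0.0074, 0.9926, 7400.0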

When using Poisson-type data (counts of events, where the numerator is a count, the denominator is some finite region of space or time, and you can count the occurrences but not the non-occurrences), the numerator and denominator are in different units, so you can end up with numbers greater than one. As an example, if you define your unit as a week and your defect as a system crash, you may end up with more than one crash per week. Using the usual D/(O x U) formula to come up with a DPO yields nonsensical results when you have more than one defect per unit; in this case, the Poisson estimator for rolled throughput yield (RTY) is useful: RTY = e^(-DPU). Subtracting the resulting RTY from one gives an estimate of the DPO, and multiplying by 1 million gives the DPMO.
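A short sketch of that calculation using the crash-per-week example (the counts are invented for illustration):

# Sketch: Poisson-type data where defects per unit (DPU) can exceed one.
# RTY = e^(-DPU) estimates the probability of a unit with zero defects.
import math

crashes = 65          # total defects observed (illustrative)
weeks = 50            # units (one unit = one week), so DPU = 1.3

dpu = crashes / weeks
rty = math.exp(-dpu)               # rolled throughput yield estimate
dpo = 1 - rty                      # estimated proportion of units with at least one defect
dpmo = dpo * 1_000_000
print(round(rty, 3), round(dpmo))  # ~0.273 RTY, ~727,468 DPMO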

Summary of How Statistical Process Control Fits into Six Sigma DMAIC

I hope you have found this series useful; one thing I hope you’ll take away from it is that Statistical Process Control is not just a footnote in Control phase training. I have only scratched the surface of the subject in this introduction. There are many variants on the charts and the rules for interpretation, and other types of charts beyond the scope of an introductory series. There is the issue of rational subgrouping, which helps wring the last ounce of analytical power out of averages charts. There are charts used for tracking measurement systems, and charts used instead of hypothesis tests. There are other types of capability studies, as well as other indices.

For me, the bottom line on Statistical Process Control and Six Sigma is this: Organizations that practice Statistical Process Control as a regular component of their quality management system will be much more successful at Six Sigma implementation. Projects will be more relevant and completed more quickly, and the gains will be held longer (and continually improved upon).


Resources
1. Wheeler, D.J. (2005). The Six Sigma Practitioner’s Guide to Data Analysis. Knoxville, TN: SPC Press.
2. Stauffer, R.F. (2008). "A DMAIC Makeover." Quality Progress, 41(12), 54-59.
