# Statistical Process Control—The Alpha and Omega of Six Sigma, Part 2: Tracking Process Behavior Using Control Charts

Commonly-Used Control Charts

*This is the second in a four-part series on Statistical Process Control (SPC). In our first article, we discussed the origins of Statistical Process Control and its impact on the history of the Quality movement, including Six Sigma. In this article, we examine the use and interpretation of a few of the most commonly used control charts in the Statistical Process Control toolkit.*

Basic Elements of a Statistical Control Chart

Figure 1—Control Chart

A control chart is a run chart with some added elements. It consists of the data, plotted in a run chart in time order, plus a centerline (usually the mean, sometimes the median), and upper and lower control limits, each set at 3 sigma (essentially, three standard deviations) above and below the centerline. A natural process boundary, such as zero for count data or proportions, will take the place of a control limit when appropriate.

Basic Functions of a Statistical Control Chart

Control charts have the following functions:

*Maximize the Signal-to-Noise Ratio*—Control charts separate the signals of unusual amounts of variation from the common noise in a process, and allow us to determine when process behavior results from some specific cause, such as a shift in the mean or the dispersion. Shewhart called these signals *assignable causes*; Deming called them *special causes*.

*Provide a Basis for Action*—Acting on common cause variation as though it were special cause makes the process worse. Ignoring signals harms the process and causes missed improvement opportunities. Control charts suggest appropriate actions:

- Given acceptable performance and no signals, monitor and wait for signals that will help improve it.
- Given unacceptable performance and no signals, make some fundamental changes to the process.
- Given signals (unstable process), act immediately to eliminate or incorporate the causes.

*Allow Us to Predict Performance*—We can predict the behavior of a stable process, at least for the near term. Prediction offers a basis for more accurate planning and forecasting, and allows for capability studies and for using the data in hypothesis tests (including tests for normality); without stability, we cannot assert anything about a DPMO or Process Sigma.

*Assess the Effects of Changes*—When we make improvements to a process, we expect to see improvement in performance. A significant shift in performance will show up as a special-cause signal. Then, once we have sufficient data to compute new limits, we can directly compare the new process performance to the old.

Control Charts for Continuous Data (Measured Things)

Many process data are from measurements: We use a stopwatch, ruler, scale or some other instrument. Continuous variables fall on a conceptual continuum, i.e., the number of decimal places in each data point is limited only by the discrimination in the measurement system or the choice of the data collector. Continuous data are also called *variables* data, or *measurement* data. We will discuss two control charts for measurement data: averages charts and individual values charts.

Many processes produce fairly constant streams of data. This means that we should be able to reach in at *any time the process is running* and collect observations. In these processes, we can select subgroups of observations from across short periods of time, or from parallel processes. These data are described by Wheeler as "...data for which one may choose both the subgroup size and frequency. That is, the subgroup size is independent of the subgroup frequency."1 This trait enables the tracking of subgroup averages. We can collect groups at times likely to characterize changes in the process. Averages also provide more sensitivity in detecting signals: because a distribution of averages is narrower than the distribution of the contributing individual values, small shifts in the process mean are more easily and quickly detected using averages.

Figure 2—Xbar-R Chart for Diameters

Figure 2 depicts an averages and ranges chart for diameters. Each plot in the upper chart represents the average of five diameters taken during a production day. Each plot in the lower chart represents the subgroup *range* (difference between the largest and smallest values in the subgroup). The centerline for the upper chart is the grand mean; the centerline for the lower chart is the mean range. The upper and lower control limits are set using the mean range as a measure of local (within-subgroup) variation; they are derived through math that approximates three standard deviations above and below the mean. This use of a local dispersion statistic allows for ready detection of shifts in the mean and the range, signaled via points outside the control limits, or non-random patterns within the limits.
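As a rough illustration of the calculations behind an Xbar-R chart like Figure 2, the sketch below applies the standard tabled constants for subgroups of size five (A2 = 0.577, D3 = 0, D4 = 2.114) to a few made-up subgroups of diameters; the data are invented for illustration, not taken from the figure.

```python
# Minimal Xbar-R limit sketch for subgroups of size 5 (hypothetical data).
subgroups = [
    [10.2, 10.4, 10.1, 10.3, 10.2],
    [10.3, 10.1, 10.2, 10.4, 10.3],
    [10.1, 10.2, 10.3, 10.2, 10.1],
]

A2, D3, D4 = 0.577, 0.0, 2.114  # standard control-chart constants for n = 5

xbars = [sum(g) / len(g) for g in subgroups]   # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]  # subgroup ranges

grand_mean = sum(xbars) / len(xbars)           # centerline, averages chart
mean_range = sum(ranges) / len(ranges)         # centerline, ranges chart

# Limits use the mean range as the local (within-subgroup) measure of dispersion.
ucl_x = grand_mean + A2 * mean_range           # upper limit, averages chart
lcl_x = grand_mean - A2 * mean_range           # lower limit, averages chart
ucl_r = D4 * mean_range                        # upper limit, ranges chart
lcl_r = D3 * mean_range                        # lower limit (0 for n = 5)
```

The mean range, scaled by these tabled constants, approximates three standard deviations of the subgroup averages and ranges, respectively.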

Another versatile tool is the ImR (or XmR) control chart. This particular control chart is useful when the logical subgroup size for the observations we collect is *one*. The data come one per time period, and *each is uniquely associated with the specific time period of collection*. Examples might be monthly or daily reports or inventories, attendance or attrition numbers, hiring data, test values from a series of periodic tests, weather data, accident data, etc. The subgroup size is dependent on frequency; if we want larger subgroups, we must wait for more time periods to pass.

Without rational subgrouping, we won’t have subgroup averages for a local (within) measure of dispersion. In that case, we can use the differences between successive points as our local measure of dispersion, basing the control limits on the mean point-to-point difference. Control limits derived this way are sometimes called *natural process limits*, because they depict the likely spread of the individual data.

Figure 3—Individuals/Moving Range Chart

The ImR control chart in Figure 3 suggests that, prior to the 26th, we could expect between 1.3 and 7.6 billable hours per day. Beginning on the 26th, though, the number of billable hours has risen significantly. This shift is seen not only in signals in the last three data points in the individual values (upper) chart, but also by the single point outside the limits in the moving range chart.
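The natural process limits described above can be sketched in a few lines. The values below are made up for illustration (they are not the billable-hours data of Figure 3); the scaling factors 2.66 and 3.267 are the standard ImR constants for moving ranges of size two.

```python
# Minimal ImR (XmR) limit sketch for individual values (hypothetical data).
values = [4.2, 5.1, 3.8, 4.6, 5.0, 4.4, 3.9, 4.8]

# Point-to-point differences: the local measure of dispersion.
moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]

mean_x = sum(values) / len(values)                 # centerline, individuals chart
mean_mr = sum(moving_ranges) / len(moving_ranges)  # centerline, moving-range chart

# Natural process limits: the likely spread of the individual data.
unpl = mean_x + 2.66 * mean_mr   # upper natural process limit
lnpl = mean_x - 2.66 * mean_mr   # lower natural process limit
ucl_mr = 3.267 * mean_mr         # upper limit, moving-range chart
```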

Control Charts for Discrete Data (Counted Things)

These next control charts are for counted things: *discrete* data. If we are counting items or events, counting in integers, our data will generally fit one of two distribution models, the *Binomial* or the *Poisson*.

- Binomial:
  - Items that possess or do not possess some attribute (e.g., late or on-time, defective or non-defective);
  - you can count both the occurrences and the non-occurrences;
  - the numerator and denominator are both counts, in the same units.

- Poisson:
  - Events that occur (e.g., blemishes in paint, system failures, accidents);
  - you can count the occurrences but *not* the non-occurrences (e.g., I can count the accidents in a month but not the non-accidents);
  - the numerator is a count and the denominator is some finite region of space or time.

The binomial distribution depends on two assumed conditions:

- One item’s possessing the attribute will not affect the likelihood that the next item possesses the attribute (independence), and
- The probability that any item possesses the attribute must remain constant for each of the n items in a single sample.

Like the binomial, the Poisson distribution is based on probability theory, and there are some conditions the data must meet in order for the theory to apply:

- Events must occur independently of each other.
- The likelihood of an event is proportional to the size of the area of opportunity (there is a uniform likelihood of the event throughout each area of opportunity).
- The events must be rare.

Under the right conditions (assumptions met, large enough counts), the Binomial and Poisson distributions will be unimodal and symmetrical, and approximate the bell-shaped curve; this makes these data well suited to control charting. However, these control charts derive their limits from their appropriate probability theories. While they work very well when the assumptions for their distributions hold up, they are not very robust. Since the assumptions can easily break down, especially when the background counts are large, the control limits may not reflect the true state of homogeneity in the data set. As an alternative, since the logical subgroup size for any count is one, counts (or proportions or rates calculated from them) may also be tracked in ImR control charts.
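To make the contrast concrete, the sketch below computes limits for the same counts two ways: from Poisson theory (a c-chart, with limits at the mean count plus or minus three times its square root) and empirically from the moving ranges (ImR). The weekly defect counts are hypothetical, invented for illustration.

```python
# Poisson-theory (c-chart) limits vs. empirical ImR limits for the same counts.
import math

counts = [12, 9, 15, 11, 8, 13, 10, 14]  # hypothetical weekly defect counts

# Poisson-based limits: centerline c-bar, limits at c-bar +/- 3 * sqrt(c-bar).
c_bar = sum(counts) / len(counts)
ucl_c = c_bar + 3 * math.sqrt(c_bar)
lcl_c = max(0.0, c_bar - 3 * math.sqrt(c_bar))  # zero is a natural boundary

# Empirical ImR limits: no distributional assumption beyond homogeneity.
mrs = [abs(b - a) for a, b in zip(counts, counts[1:])]
mean_mr = sum(mrs) / len(mrs)
unpl = c_bar + 2.66 * mean_mr
lnpl = max(0.0, c_bar - 2.66 * mean_mr)
```

When the Poisson assumptions hold, the two sets of limits will be similar; when they do not, the ImR limits reflect the actual point-to-point variation in the data.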

Figure 4—P-chart

Figure 4 depicts a p-chart, from a Don Wheeler example,2 for tracking the proportion of shipments sent out via premium freight. Each day, a different number of shipments are sent; some of them are sent using premium freight. Because the daily proportions are based on differing daily totals, control limits are calculated for each day (larger shipments, tighter limits). The proportions could also be tracked on an ImR control chart (Figure 5). Either control chart shows an in-control process with about 24.5 percent premium freight.

Figure 5—ImR Chart (Same Data)
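The day-by-day limits behind a p-chart like Figure 4 can be sketched as follows; the shipment counts below are invented for illustration, not Wheeler's data.

```python
# Minimal p-chart sketch: limits recomputed for each day's subgroup size.
import math

shipments = [32, 45, 28, 51, 40]  # total shipments per day (hypothetical)
premium = [8, 11, 7, 13, 9]       # of those, sent via premium freight

p_bar = sum(premium) / sum(shipments)  # centerline: overall proportion

# Per-day limits from the binomial sigma: larger subgroups, tighter limits.
limits = []
for n in shipments:
    sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma_p
    lcl = max(0.0, p_bar - 3 * sigma_p)  # zero is a natural boundary
    limits.append((lcl, ucl))
```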

Selecting the Right Control Chart for the Job (for the Data)

Selecting the correct control chart for the data at hand is important. The value of the control chart will be in the insight that the chart can provide, and the accuracy of the chart depends on our assumptions going in. Figure 6 suggests a flow that can be used for control chart selection.

Figure 6

Summary: Using Control Charts to Track Process Behavior

This article touched briefly on the types of data we tend to see in Six Sigma projects, and some of the types of control charts used to track process behavior. In the next article, we’ll talk about using what we learn about process behavior to determine the capability of a process, and from there the Six Sigma metrics.

*Resources*

1. Wheeler, D. J. and Chambers, D. S. (1992), *Understanding Statistical Process Control*. Knoxville, TN: SPC Press.

2. Wheeler, D. J. (2003), *Making Sense of Data*. Knoxville, TN: SPC Press.

*Continue to part 3 of this series.*