I am always interested in tools that prove useful in improving a factory’s operations. But I have always wondered: how useful is Statistical Process Control (SPC)? The fact is, I have never seen a Chinese factory make good use of SPC tools.
I asked Brad Pritts, an experienced quality consultant specializing in the auto industry, for his opinion. Some classic SPC tools are listed in the “core quality tools” of the North American car industry. So it would make sense to have lots of factories apply SPC… Or so I thought.
Here is what Brad wrote back:
In thirty-five years of quality consulting I have worked with or audited about 150 companies. Most of these were medium to small size auto parts manufacturers (25-250 employees), and most are in the USA. However, I do include about 20 Chinese factories in the list.
Auto parts manufacturers have been mandated by their customers to implement SPC since the late 1980s, so most of these companies made some effort to use SPC. My comments below are based on my experience in this particular environment and are not intended to condemn SPC in general.
Many companies have made good use of statistics to manage and improve quality.
Very few, in my limited experience, have made effective use of textbook X-bar/R or X-bar/moving-R (Shewhart) charts to manage “online” process control. I can think of only two companies in this category.
- One was a steel processor who monitored several characteristics on their pickling and slitting operation, as coil steel flowed through a machine at fairly high speed.
- The second is a fastener maker with an end-of-line, 100% automated inspection of bolts that uses ongoing inspection data to identify and automatically reject (“kick out”) individual pieces that fall outside control limits.
Neither of these companies was in China.
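For readers unfamiliar with the textbook X-bar/R chart mentioned above, the arithmetic is simple: sample small subgroups of parts, chart each subgroup’s mean and range, and derive control limits from the average range. A minimal sketch, using invented measurements and the standard chart constants for subgroups of five:

```python
# Textbook X-bar/R (Shewhart) control limits for subgroups of 5.
# All measurements below are invented for illustration.
subgroups = [
    [10.2, 10.1, 9.9, 10.0, 10.3],
    [9.8, 10.0, 10.1, 10.2, 9.9],
    [10.1, 10.0, 10.2, 9.9, 10.1],
]

A2, D3, D4 = 0.577, 0.0, 2.114  # standard chart constants for n = 5

xbars = [sum(s) / len(s) for s in subgroups]          # subgroup means
ranges = [max(s) - min(s) for s in subgroups]         # subgroup ranges
xbarbar = sum(xbars) / len(xbars)                     # grand mean
rbar = sum(ranges) / len(ranges)                      # average range

# Control limits for the X-bar chart and the R chart
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar

print(f"X-bar limits: {lcl_x:.3f} .. {ucl_x:.3f}")
print(f"R limits:     {lcl_r:.3f} .. {ucl_r:.3f}")
```

In a real implementation (such as the two automated ones described above), the limits would be computed from a long stable run, and each new subgroup would be compared against them.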
Both successful cases had some common threads.
- Both used SPC in automated, or nearly fully automated, implementations. The measurements were taken with automated instruments, fed directly to a process control computer, and the calculations were done entirely by machine.
- Both had relatively large, uniform volumes of material flowing through these processes. (This helped cost justify the large investment in technology.)
- In both companies, the systems were implemented by a few really clever engineers, and were accepted but not really understood by most of the employees. (In the fastener maker, I am curious to see how, or whether, the system will survive when its designer retires next year!)
Besides these cases in my personal experience, I can relate a few cases described by a colleague who sold and installed process control equipment. He had a number of successes with automated process monitoring of large scale industrial processes such as paper-making and steel rolling mills, where the control equipment was programmed to use SPC to directly control process variables.
They were able to show process and profit improvements by first improving process capability (often reducing scrap and waste) and then optimizing the process. You can picture these cases as having the same common factors – highly automated, large volume processes. The capital investments for the process control systems were typically in the $100,000 USD range or more.
On the other hand, I have seen a lot of SPC (traditional Shewhart X-bar/R) implementations which were a total waste of time and money, done for show or to meet a customer mandate. Some of these were done with all good intent, while others were frauds.
These had some common features, too.
In many cases employees gamed the system by not documenting out-of-control conditions, but instead stopping, adjusting the process, and taking new samples once the process was back in spec. (Note: in spec, not in statistical control.) A good analyst can usually detect this in the SPC data, because this behavior will prevent the system from attaining stability. Meanwhile, the operators and management will conclude that SPC is worthless. (Which is true if you operate it this way.)
In some cases QA people gamed the system by charting a few variables which were in statistical control but not really important, so that they could report “good” Cpk values to customers, while not charting difficult but important variables. Another waste of time, although I could rationalize this as a sales and marketing expense: giving the customer the appearance of what they demanded.
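The Cpk value mentioned above is a capability index: the distance from the process mean to the nearest specification limit, expressed in units of three standard deviations. A quick sketch with invented data and hypothetical spec limits:

```python
# Cpk from invented sample data and hypothetical spec limits.
data = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00]
usl, lsl = 10.15, 9.85  # hypothetical upper/lower spec limits

n = len(data)
mean = sum(data) / n
# Sample standard deviation (an overall estimate; textbook SPC often
# uses R-bar/d2 from subgroup ranges instead)
std = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5

# Cpk: how far the mean sits from the nearest spec limit, in 3-sigma units
cpk = min(usl - mean, mean - lsl) / (3 * std)
print(f"Cpk = {cpk:.2f}")
```

This is exactly why the gaming works: pick a variable whose distribution sits comfortably inside wide spec limits, and the Cpk looks impressive regardless of whether the variables that matter are under control.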
One of my favorites was a company which installed a mostly automated system with gages wired directly to a computer monitoring system. The sampling plan was set to 5 pieces per hour for X-bar/R charting. The operator was supposed to sample 5 consecutive pieces, put them in the gage, and send the results to the computer, which would do the charting and report a result. Data was transmitted by putting a part in the gage and pressing a foot pedal. To “save time,” some operators would put one part in the gage and then press the foot pedal 5 times, pretending that 5 parts had been checked. As you can imagine, when you do this the part-to-part variation is zero. But the pieces run the next hour will appear awful, because they are judged against control limits computed from that zero piece-to-piece variation! Again, a good SPC analyst can figure this out if they study the data (and surreptitiously watch the operator!).
One reason for the difficulty in application is that many of the companies I have worked with have processes that are equipment- or tooling-dominant. Think of metal stamping, plastic injection molding, or cold heading of fasteners. Ongoing real-time SPC helps little in these cases. SPC may work much better in machining operations (detecting and controlling tool wear), but I haven’t had much personal experience in that industry.
The next article will be Brad’s advice on how to use statistics in a useful way to get processes under control (click here).