The Quality Progress magazine ran a well-written article about measurement systems in their September issue (it is entitled ‘A Study in Measurements’ and was written by Neetu Choudhary).
If you purchase products that have to comply with strict measurement requirements, this is an important topic.
Let’s look at a classic situation.
- A production operator measures a few pieces and finds they are within spec.
- An inspector measures a few pieces from that same batch and finds a few of them out of spec.
- Another inspector, sent by the customer, measures a few other pieces from that same batch and finds most of them out of spec.
The usual reaction? “Production never stops quality issues”, “the customer’s staff is too strict, they are probably trying to get a kickback”, and so on. Very frustrating.
And yet, there is often a clear origin to these discrepancies: different ways to measure the same product.
Let’s start with a breakdown of all sources of variation that come from the measurement process itself.
The author writes:
Accuracy is the closeness of a measured value to the true value and is comprised of three components:
- Bias. The difference between the average measured value and the true value of a reference standard.
- Linearity. The change in bias over the normal operating range.
- Stability. Statistical stability of the measurement process with respect to its average and variation over time.
And what about precision? When there are doubts about the validity of a measurement system, precision is usually checked first through a gage repeatability & reproducibility study (GR&R study).
A GR&R study typically doesn’t take more than 1 hour of the operators’ and inspectors’ time, if they are given a clear plan to follow. It doesn’t require them to check hundreds of pieces. Usually it is planned this way (the exact numbers follow the common AIAG MSA guidelines):
- Select about 10 parts that cover the normal range of process variation.
- Have 2 or 3 appraisers measure each part 2 or 3 times, in randomized order.
- Keep the instrument, the method, and the conditions identical for all appraisers.
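To make that plan concrete, here is a minimal sketch of how a run sheet could be generated, assuming a 10-part, 3-appraiser, 2-trial layout (the part names, appraiser names, and seed are illustrative):

```python
# Hypothetical GR&R data-collection plan: 10 parts x 3 appraisers x 2 trials.
import random

parts = [f"Part {i}" for i in range(1, 11)]
appraisers = ["Operator A", "Operator B", "Inspector C"]
trials = 2

plan = [(a, p, t) for t in range(1, trials + 1)
        for a in appraisers for p in parts]

# Randomize the measurement order so appraisers cannot anticipate
# (or remember) which part they are about to measure.
random.seed(42)  # fixed seed so the same run sheet can be reprinted
random.shuffle(plan)

for appraiser, part, trial in plan[:3]:  # preview of the run sheet
    print(f"{appraiser} measures {part} (trial {trial})")
print(f"... {len(plan)} measurements in total")
```

At 60 quick measurements, the whole exercise easily fits in the hour mentioned above.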
This type of study separates several sources of variation. The two main ones are:
- Repeatability: the differences between what a person finds the first time she checks a product, the second time she checks the same product, and so on, with the same instrument and in the same conditions.
- Reproducibility: the differences between what different people find when checking the same product.
The source of many frustrations is typically reproducibility.
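To show how these two sources can be pulled apart, here is a minimal sketch on a tiny made-up dataset (2 appraisers, 3 parts, 2 trials; all numbers are invented for illustration, and a real study would use the AIAG MSA constants rather than these raw variances):

```python
# Separating repeatability from reproducibility on an illustrative dataset.
from statistics import mean, pvariance

# measurements[appraiser][part] -> list of repeated trials (made-up values)
measurements = {
    "operator":  {"p1": [10.1, 10.2], "p2": [10.4, 10.3], "p3": [9.9, 10.0]},
    "inspector": {"p1": [10.3, 10.4], "p2": [10.6, 10.6], "p3": [10.1, 10.2]},
}

# Repeatability: variation among repeated checks of the same part by the
# same person (pooled within-cell variance).
cells = [trials for parts in measurements.values() for trials in parts.values()]
repeatability_var = mean(pvariance(c) for c in cells)

# Reproducibility: variation between people measuring the same parts
# (variance of each appraiser's overall average).
appraiser_means = [mean(v for trials in parts.values() for v in trials)
                   for parts in measurements.values()]
reproducibility_var = pvariance(appraiser_means)

print(f"repeatability variance:   {repeatability_var:.4f}")
print(f"reproducibility variance: {reproducibility_var:.4f}")
```

In this invented example the inspector reads consistently higher than the operator, so reproducibility dominates: exactly the pattern behind the frustrating scenario described at the start.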
One can go very deep with this type of study, but in most cases it is not called for. Specialized software such as Minitab can take care of the calculations.
What type of conclusion can it help you draw?
If 30% or more of the observed variation comes from the measurement system itself, there is a serious issue! Any decision taken on the basis of those measurements is going to be questioned, and for good reason.
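The usual decision rule compares the measurement system's variation to the total observed variation on the standard-deviation scale. A hedged sketch, using the common 10% / 30% guideline (exact limits vary by industry and customer; the variance figures below are illustrative):

```python
# %GR&R: share of total observed variation due to the measurement system.
import math

def percent_grr(grr_var: float, total_var: float) -> float:
    """Percent GR&R on the standard-deviation scale."""
    return 100 * math.sqrt(grr_var) / math.sqrt(total_var)

def verdict(pct: float) -> str:
    if pct < 10:
        return "acceptable"
    if pct <= 30:
        return "marginal - may be acceptable depending on the application"
    return "unacceptable - fix the measurement system first"

# Illustrative numbers: the measurement system contributes a variance of
# 0.16 out of a total observed variance of 1.0.
pct = percent_grr(0.16, 1.0)
print(f"%GR&R = {pct:.1f}% -> {verdict(pct)}")
```

Note that the percentages are taken on standard deviations, not variances, which is why a 0.16 share of the variance already translates into 40% of the observed variation.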
In such a case, you will need to make changes: replacing or improving some of the measurement devices, ensuring proper calibration, writing a clear work instruction (how to measure a product), getting your customer(s) and supplier(s) to confirm it, training operators and inspectors, and so forth.