Why Gauge R&R Tests Fail Good Measurement Systems
We wanted to determine the relationship between a Test Uncertainty Ratio (TUR) and a GRR%, so we ran a study to compare them.
We started with the dataset from the Measurement Systems Analysis (MSA) 4th Edition manual and plugged it into Minitab.
We set process tolerance limits of -9 and +9 in order to hit the 10% GRR% target that most auditors and customers require.
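For reference, the GRR% I mean here is the percent-of-tolerance form: six standard deviations of the combined gauge R&R variation divided by the tolerance width (six is Minitab's default study-variation multiplier). A quick sketch of what the 10% target implies against our ±9 limits:

```python
def grr_percent_tolerance(sigma_grr, lsl, usl, k_study=6.0):
    """Gauge R&R expressed as a percentage of the tolerance width."""
    return 100.0 * (k_study * sigma_grr) / (usl - lsl)

# With limits of -9 and +9 (width 18), a 10% target implies
# 6 * sigma_grr = 1.8, i.e. a combined GRR standard deviation near 0.30.
print(grr_percent_tolerance(sigma_grr=0.30, lsl=-9, usl=9))  # -> 10.0
```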
I then took the repeatability and reproducibility standard deviations from Minitab's output and entered them into my GUM Method software.
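For anyone who wants to reproduce that combination step, a GUM-style evaluation boils down to a root-sum-of-squares of the component standard deviations, expanded by a coverage factor (typically k = 2 for roughly 95% coverage). The component values below are placeholders for illustration only, not the actual Minitab output:

```python
import math

def expanded_uncertainty(sigma_repeatability, sigma_reproducibility, k=2.0):
    """Combine repeatability and reproducibility in quadrature (RSS),
    then expand with coverage factor k (k = 2 for roughly 95% coverage)."""
    u_combined = math.sqrt(sigma_repeatability**2 + sigma_reproducibility**2)
    return k * u_combined

# Hypothetical component standard deviations, for illustration only:
print(expanded_uncertainty(0.25, 0.17))  # roughly 0.60
```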
We set the expanded uncertainty to 0.6011 and the tolerance limits to -9 and +9.
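Those two inputs are all it takes to compute a Test Uncertainty Ratio by hand. One common definition divides the tolerance width by twice the expanded uncertainty; the GUM software may use a slightly different form, but the arithmetic below lands in the same place:

```python
def test_uncertainty_ratio(lsl, usl, expanded_u):
    """TUR = tolerance width / (2 * expanded uncertainty), one common definition."""
    return (usl - lsl) / (2.0 * expanded_u)

print(round(test_uncertainty_ratio(-9, 9, 0.6011), 1))  # -> 15.0, i.e. about 15:1
```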
This is my result:
A 10% GRR% requirement, which is what most auditors call for, works out to roughly a 15:1 Test Uncertainty Ratio. Most precision calibration laboratories are pushing just to achieve 4:1. And that implied 15:1 doesn't even take any other components of uncertainty into consideration. These numbers are unrealistic in most measurement situations.
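The mapping between the two metrics falls straight out of the definitions above. Treating the gauge R&R standard deviation as the only uncertainty component, with a study-variation multiplier of 6 and a coverage factor of k = 2, gives TUR ≈ 150 / GRR%. That is a back-of-the-envelope relationship under those assumptions, not an official equivalence, but it shows why a 10% GRR% behaves like a 15:1 TUR while a 4:1 TUR corresponds to a GRR% of about 37.5%:

```python
def tur_from_grr_percent(grr_pct, k=2.0, k_study=6.0):
    """TUR implied by a GRR%-to-tolerance figure, assuming the GRR standard
    deviation is the only uncertainty component:
        GRR% = 100 * k_study * sigma / width,   U = k * sigma,
        TUR  = width / (2 * U)   =>   TUR = 100 * k_study / (2 * k * GRR%)."""
    return 100.0 * k_study / (2.0 * k * grr_pct)

print(tur_from_grr_percent(10))    # 15.0 -> a 10% GRR% requirement ~ 15:1 TUR
print(tur_from_grr_percent(37.5))  # 4.0  -> a 4:1 TUR ~ 37.5% GRR%
```

In other words, under those assumptions the common 10% GRR% acceptance threshold is nearly four times tighter than the 4:1 ratio calibration labs are already pushing hard to achieve.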
GRR% requirements are so nearly impossible to reach that the authors of the MSA manual had to devote an entire chapter to mitigating the mess. To successfully mitigate a GRR% problem, you would need to be using an error modeling system, and most companies don't use error modeling.