MEASUREMENT SYSTEMS ANALYSIS

Reference Manual

Third Edition
First Edition, October 1990 • Second Edition, February 1995; Second Printing, June 1998 • Third Edition, March 2002
Copyright © 1990, © 1995, © 2002 DaimlerChrysler Corporation, Ford Motor Company, General Motors Corporation
FOREWORD
This Reference Manual was developed by a Measurement Systems Analysis (MSA) Work Group, sanctioned by the DaimlerChrysler Corporation/Ford Motor Company/General Motors Corporation Supplier Quality Requirements Task Force, and under the auspices of the American Society for Quality (ASQ) and the Automotive Industry Action Group (AIAG). The Work Group responsible for this Third Edition consisted of David Benham (DaimlerChrysler Corporation), Michael Down (General Motors Corporation), Peter Cvetkovski (Ford Motor Company), Gregory Gruska (Third Generation, Inc.), Tripp Martin (Federal Mogul), and Steve Stahley (SRS Technical Services).
In the past, Chrysler, Ford, and General Motors each had their own guidelines and formats for ensuring supplier compliance. Differences between these guidelines resulted in additional demands on supplier resources. To improve upon this situation, the Task Force was chartered to standardize the reference manuals, procedures, reporting formats, and technical nomenclature used by Chrysler, Ford, and General Motors.

Accordingly, Chrysler, Ford, and General Motors agreed in 1990 to develop, and, through AIAG, distribute an MSA manual. That first edition was well received by the supplier community, which offered valuable inputs based on application experience. These inputs have been incorporated into the Second and this Third edition. This manual, which is approved and endorsed by DaimlerChrysler Corporation, Ford Motor Company, and General Motors Corporation, is a supplemental reference document to QS-9000.

The manual is an introduction to measurement system analysis. It is not intended to limit the evolution of analysis methods suited to particular processes or commodities. While these guidelines are intended to cover normally occurring measurement system situations, questions will arise. These questions should be directed to your customer's Supplier Quality Assurance (SQA) activity. If you are uncertain how to contact the appropriate SQA activity, the buyer in your customer's purchasing office can assist.
The MSA Work Group gratefully acknowledges: the leadership and commitment of Vice Presidents Tom Sidlik at DaimlerChrysler Corporation, Carlos Mazzorin at Ford Motor Company, and Bo Andersson of General Motors Corporation; the assistance of the AIAG in the development, production, and distribution of the manual; and the guidance of the Task Force principals Hank Gryn (DaimlerChrysler Corporation), Russ Hopkins (Ford Motor Company), and Joe Bransky (General Motors Corporation), in association with ASQ, represented by Jackie Parkhurst (General Motors Corporation), and the American Society for Testing and Materials (ASTM International). This manual was developed to meet the specific needs of the automotive industry.
This Manual is copyrighted by DaimlerChrysler Corporation, Ford Motor Company, and General Motors
Corporation, all rights reserved, 2002. Additional manuals can be ordered from AIAG and/or permission
to copy portions of this manual for use within supplier organizations may be obtained from AIAG at 248-358-3570.
March 2002
MSA 3rd Edition Quick Guide
Type of Measurement System | MSA Methods | Chapter
Basic Attribute | Signal Detection, Hypothesis Test Analyses | III
Non-Replicable (e.g., Destructive Tests) | Control Charts | IV
Complex Variable | Range, Average & Range, ANOVA, Bias, Linearity, Control Charts | III, IV
Multiple Systems, Gages or Test Stands | Control Charts, ANOVA, Regression Analysis | III, IV
http://www.aiag.org/publications/quality/msa3.html
Historically, by convention, a 99% spread has been used to represent the "full" spread of measurement error, represented by a 5.15 multiplying factor (where σGRR is multiplied by 5.15). A 99.73% spread is represented by a multiplier of 6, which is ±3σ and represents the full spread of a "normal" curve.

If the reader chooses to increase the coverage level, or spread, of the total measurement variation to 99.73%, use 6 as the multiplier in place of 5.15 in the calculations.
Awareness of which multiplying factor is used is crucial to the integrity of the equations and
resultant calculations. This is especially important if a comparison is to be made between
measurement system variability and the tolerance.
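The effect of the two conventions can be seen in a short calculation. The following sketch is not from the manual; the σGRR and tolerance values are made up purely for illustration.

```python
# Sketch: effect of the 5.15 vs. 6 multiplier on the reported "full" spread
# of measurement error and on %GRR relative to a tolerance.
# sigma_grr and tolerance below are assumed illustration values.

def grr_spread(sigma_grr: float, multiplier: float = 6.0) -> float:
    """Full spread of measurement error: multiplier * sigma_GRR."""
    return multiplier * sigma_grr

sigma_grr = 0.05   # assumed GRR standard deviation (illustrative)
tolerance = 1.0    # assumed total tolerance, USL - LSL (illustrative)

for k in (5.15, 6.0):   # 99% vs. 99.73% coverage conventions
    spread = grr_spread(sigma_grr, k)
    pct_tol = 100.0 * spread / tolerance
    print(f"multiplier {k}: spread = {spread:.4f}, %GRR of tolerance = {pct_tol:.1f}%")
```

The same σGRR yields a roughly 16% larger reported spread under the multiplier of 6, which is why mixing the two conventions in one comparison invalidates the result.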
TABLE OF CONTENTS

CHAPTER I – GENERAL MEASUREMENT SYSTEMS GUIDELINES ..... 1
CHAPTER I – Section A ..... 3
Introduction, Purpose and Terminology ..... 3
  Quality of Measurement Data ..... 3
  Purpose ..... 4
  Terminology ..... 4
  Summary of Terms ..... 5
  True Value ..... 10
CHAPTER I – Section B ..... 11
The Measurement Process ..... 11
  Statistical Properties of Measurement Systems ..... 12
  Sources of Variation ..... 13
The Effects of Measurement System Variability ..... 16
  Effect on Decisions ..... 16
  Effect on Product Decisions ..... 17
  Effect on Process Decisions ..... 18
  New Process Acceptance ..... 20
  Process Setup/Control (Funnel Experiment) ..... 21
CHAPTER I – Section C ..... 23
Measurement Strategy and Planning ..... 23
  Complexity ..... 23
  Identify the Purpose of the Measurement Process ..... 24
  Measurement Life Cycle ..... 24
  Datum Coordination ..... 28
  Specifications ..... 30
  Shipment ..... 34
  Documentation Delivery ..... 34
Measurement Uncertainty and MSA ..... 62
Measurement Traceability ..... 62
ISO Guide to the Expression of Uncertainty in Measurement ..... 63
CHAPTER I – Section G ..... 65
Measurement Problem Analysis ..... 65
CHAPTER II – GENERAL CONCEPTS FOR ASSESSING MEASUREMENT SYSTEMS ..... 67
CHAPTER II – Section A ..... 69
Background ..... 69
CHAPTER II – Section B ..... 71
Selecting/Developing Test Procedures ..... 71
CHAPTER II – Section C ..... 73
Preparation for a Measurement System Study ..... 73
CHAPTER II – Section D ..... 77
Analysis of the Results ..... 77
CHAPTER III – RECOMMENDED PRACTICES FOR SIMPLE MEASUREMENT SYSTEMS ..... 79
CHAPTER III – Section A ..... 81
Example Test Procedures ..... 81
CHAPTER III – Section B ..... 83
Variable Measurement System Study – Guidelines ..... 83
  Guidelines for Determining Stability ..... 83
  Guidelines for Determining Bias – Independent Sample Method ..... 85
  Guidelines for Determining Bias – Control Chart Method ..... 88
  Guidelines for Determining Linearity ..... 92
  Guidelines for Determining Repeatability and Reproducibility ..... 97
Range Method ..... 97
Average and Range Method ..... 99
  Average Chart ..... 102
  Normalized Histogram ..... 109
Analytic Method ..... 135
V2: Multiple Readings with p ≥ 2 Instruments ..... 151
V3: Split Specimens (m = 2) ..... 152
V4: Split Specimens (General) ..... 153
V5: Same as V1 with Stabilized Parts ..... 153
V6: Time Series Analysis ..... 154
V7: Linear Analysis ..... 154
V8: Time versus Characteristic (Property) Degradation ..... 155
V9: V2 with Simultaneous Multiple Readings and p ≥ 3 Instruments ..... 155
CHAPTER V – OTHER MEASUREMENT CONCEPTS ..... 157
CHAPTER V – Section A ..... 159
Recognizing the Effect of Excessive Within-Part Variation ..... 159
CHAPTER V – Section B ..... 161
Average and Range Method – Additional Treatment ..... 161
CHAPTER V – Section C ..... 169
Gage Performance Curve ..... 169
CHAPTER V – Section D ..... 175
Reducing Variation Through Multiple Readings ..... 175
CHAPTER V – Section E ..... 177
Pooled Standard Deviation Approach to GRR ..... 177
APPENDICES ..... 185
Appendix A ..... 187
  Analysis of Variance Concepts ..... 187
Appendix B ..... 191
  Impact of GRR on the Capability Index Cp ..... 191
  Formulas ..... 191
  Analysis ..... 191
Appendix C ..... 195
  d2 Table ..... 195
Appendix D ..... 197
  Gage R Study ..... 197
Appendix E ..... 199
Appendix F ..... 201
  P.I.S.M.O.E.A. Error Model ..... 201
GLOSSARY ..... 205
INDEX ..... 219
LIST OF TABLES
Number Title Page
7  Gage Study (Range Method) ..... 98
8  ANOVA Table ..... 120
9  ANOVA Analysis – Percent Variation and Contribution ..... 121
10 Comparison of ANOVA and Average and Range Methods ..... 122
11 GRR ANOVA Method Report ..... 122
12 Attribute Study Data Set ..... 127
13 Examples of Measurement Systems ..... 143
14 Methods Based on Type of Measurement System ..... 144
15 Pooled Standard Deviation Analysis Data Set ..... 181
16 Estimate of Variance Components ..... 187
17 5.15 Sigma Spread ..... 188
18 Analysis of Variance ..... 189
LIST OF FIGURES
Number Title Page
8  Relationship Between Bias and Repeatability ..... 60
9  Control Chart Analysis for Stability ..... 84
10 Bias Study – Histogram of Bias Study ..... 87
11 Linearity Study – Graphical Analysis ..... 95
12 Gage Repeatability and Reproducibility Data Collection Sheet ..... 101
13 Average Chart – “Stacked” ..... 103
14 Average Chart – “Unstacked” ..... 103
15 Range Chart – “Stacked” ..... 104
16 Range Chart – “Unstacked” ..... 105
17 Run Chart by Part ..... 105
18 Scatter Plot ..... 106
19 Whiskers Chart ..... 107
26 Interaction Plot ..... 119
27 Residual Plot ..... 119
28 Example Process ..... 126
33 (33a & b) Measurement Evaluation Control Chart ..... 164 & 165
ACKNOWLEDGEMENTS
There have been many individuals responsible for the creation of this document over the years. The following are but a few who have given much of their time and effort in the development of this manual.

The ASQ and the AIAG have contributed time and facilities to aid in the development of this publication. Greg Gruska, as a representative of the Automotive Division of the ASQ, and John Katona, as the past chairman of the Revision Work Group, have been major contributors to the development and the past publication of this manual.

The techniques described in Chapter III of this document were first investigated and developed by Kazem Mirkhani from Chevrolet Product Assurance under the direction and motivation of Barney Flynn. The variable gage study, based on a paper by R.W. Traver of General Electric (1962 ASQC Transactions), was validated by Jim McCaslin. The concepts were extended to attribute studies and gage performance curves by Jim McCaslin, Gregory Gruska, and Tom Bruzell from Chevrolet (1976 ASQC Transactions). These techniques were consolidated and edited by Bill Wiechec in June 1978, resulting in the publication of the Chevrolet Measurement System Analysis book.

Over the past few years, supplemental materials were developed. In particular, Sheryl Hansen and Ray Benner of Oldsmobile documented the ANOVA approach and the confidence intervals. In the 1980s, Larry Marruffo and John Lazur of Chevrolet updated the Chevrolet manual. John Lazur and Kazem Mirkhani organized the sections of the manual and enhanced some of the concepts, such as stability, linearity, and ANOVA. Jothi Shanker of EDS contributed to the preparation of the update for the Supplier Development Staff. Additional updates included the concept of identification and qualification of within-part variation, as well as a more thorough description of statistical stability, both of which were contributed by the GM Corporate Statistical Review Committee.

The latest improvements were updating the format to conform to the current QS-9000 documentation, more clarification and examples to make the manual more user friendly, discussion of the concept of measurement uncertainty, and additional areas which were not included or did not exist when the original manual was written. This update also includes the concept of the measurement system life cycle and moves toward a measurement analysis similar to conventional process analysis. Portions of the GM Powertrain internal manual Measurement Processes: Planning, Use and Improvement, printed April 28, 1993, were included in this revision.

The current re-write subcommittee is chaired by Mike Down from General Motors Corporation and consists of David Benham from DaimlerChrysler Corporation, Peter Cvetkovski from Ford Motor Company, Greg Gruska, as a representative of the Automotive Division of the ASQ, Tripp Martin from Federal Mogul, and Steve Stahley from SRS Technical Services. There were also significant contributions from Yanling Zuo of Minitab, Neil Ullman of ASTM International, and Gordon Skattum of Rock Valley College Technology Division. The AIAG has contributed to the development, production, and distribution of this manual.

Finally, the joint consensus on the contents of this document was effected through the MSA Work Group members representing General Motors Corporation, DaimlerChrysler Corporation, and Ford Motor Company.
Chapter I
General Measurement System Guidelines
CHAPTER I – Section A
Introduction, Purpose and Terminology
Introduction

Measurement data are used more often and in more ways than ever before. For instance, the decision to adjust a manufacturing process or not is now commonly based on measurement data. Measurement data, or some statistic calculated from them, are compared with statistical control limits for the process, and if the comparison indicates that the process is out of statistical control, then an adjustment of some kind is made. Otherwise, the process is allowed to run without adjustment. Another use of measurement data is to determine if a significant relationship exists between two or more variables. For example, it may be suspected that a critical dimension on a molded plastic part is related to the temperature of the feed material. That possible relationship could be studied by using a statistical procedure called regression analysis to compare measurements of the critical dimension with measurements of the temperature of the feed material.

Studies that explore such relationships are examples of what Dr. W. E. Deming called analytic studies. In general, an analytic study is one that increases knowledge about the system of causes that affect the process. Analytic studies are among the most important uses of measurement data because they lead ultimately to better understanding of processes.
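As a concrete illustration of such an analytic study, the molded-part relationship described above can be examined with a simple least-squares fit of the critical dimension against feed temperature. The data values below are invented for illustration; only the method is the point.

```python
# Illustrative sketch (made-up data): relating a critical dimension to feed
# temperature with an ordinary least-squares line, the core of a regression
# analysis as described in the text.

def fit_line(x, y):
    """Ordinary least squares: returns (slope, intercept) of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

temp = [180, 185, 190, 195, 200, 205]               # feed temperature (illustrative)
dim = [10.02, 10.03, 10.05, 10.06, 10.08, 10.09]    # critical dimension (illustrative)

a, b = fit_line(temp, dim)
print(f"dimension ≈ {a:.4f} * temperature + {b:.2f}")
```

A clearly nonzero slope would suggest the suspected relationship is real; note that the quality of such a conclusion still depends on the quality of the underlying measurement data, which is the subject of this manual.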
Quality of Measurement Data

The benefit of using a data-based procedure is largely determined by the quality of the measurement data used. If the data quality is low, the benefit of the procedure is likely to be low. Similarly, if the quality of the data is high, the benefit is likely to be high. To ensure that the benefit derived from using measurement data is great enough to warrant the cost of obtaining it, attention must be focused on the quality of the data.

The quality of measurement data is defined by the statistical properties of multiple measurements obtained from a measurement system operating under stable conditions. For instance, suppose that a measurement system, operating under stable conditions, is used to obtain several measurements of a certain characteristic. If the measurements are all “close” to the master value for the characteristic, then the quality of the data is said to be “high.” Similarly, if some, or all, of the measurements are “far away” from the master value, then the quality of the data is said to be “low.”
The statistical properties most commonly used to characterize the quality of
data are the bias and variance of the measurement system. The property
called bias refers to the location of the data relative to a reference (master)
value, and the property called variance refers to the spread of the data.
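These two properties can be estimated directly from a set of repeated readings on a reference part. The sketch below uses invented readings and an assumed reference value purely to show the arithmetic.

```python
# Sketch (assumed values, not from the manual): characterizing data quality by
# bias (location relative to a reference value) and spread (width).
from statistics import mean, stdev

reference = 6.00                                    # master/reference value (illustrative)
readings = [5.98, 6.01, 6.00, 5.99, 6.02, 6.03]     # repeated measurements (illustrative)

bias = mean(readings) - reference   # location: observed average minus reference
spread = stdev(readings)            # width: sample standard deviation
print(f"bias = {bias:+.4f}, standard deviation = {spread:.4f}")
```

A small bias with a large spread, or a small spread centered in the wrong place, are both forms of low-quality data; the two properties must be assessed together.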
One of the most common reasons for low-quality data is too much variation. Much of the variation in a set of measurements may be due to the interaction between the measurement system and its environment. For instance, a measurement system used to measure the volume of liquid in a tank may be sensitive to the ambient temperature of the environment in which it is used. When such interaction generates too much variation, it may be necessary to manage the measurement system and its environment so that only data of acceptable quality are generated.
Purpose

The purpose of this document is to present guidelines for assessing the quality of a measurement system. Although the guidelines are general enough to be used for any measurement system, they are intended primarily for the measurement systems used in the industrial world. This document is not intended to be a compendium of analyses for every measurement system; its primary focus is measurement systems where the readings can be replicated on each part. Many of the analyses are useful with other types of measurement systems as well.
Terminology

This section provides a summary of the terms used in this manual. These terms and the associated analysis methods have demonstrated their usefulness in the area of statistical process control.
Summary of Terms¹

¹ See Chapter I, Section E for terminology definitions and discussion.

Standard
• Accepted basis for comparison
• Known value, within stated limits of uncertainty, accepted as a true value
• Reference value

A standard requires an operational definition: one that will yield the same results when applied by the supplier or customer, with the same meaning yesterday and today.

Basic equipment

• Reference value
    Accepted value of an artifact
    Requires an operational definition
    Used as the surrogate for the true value
• True value
    Actual value of an artifact
    Unknown and unknowable

Location variation

• Accuracy
    “Closeness” to the true value, or to an accepted reference value
    ASTM includes the effect of location and width errors
• Bias
    Difference between the observed average of measurements and the reference value
• Stability
    The change in bias over time; a stable measurement process is in statistical control with respect to location
    Alias: Drift
• Linearity
    The change in bias over the normal operating range
    The correlation of multiple and independent bias errors over the operating range
    A systematic error component of the measurement system
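Linearity can be illustrated numerically: measure bias at several reference values spanning the operating range, then fit a line of bias against reference value. The data below are invented; the slope estimate is the point of the sketch.

```python
# Illustrative sketch (made-up data): linearity as the change in bias over the
# operating range, estimated as the least-squares slope of bias vs. reference.

def slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx

refs = [2.0, 4.0, 6.0, 8.0, 10.0]           # reference (master) values (illustrative)
biases = [0.05, 0.03, 0.01, -0.02, -0.04]   # observed average bias at each reference

b = slope(refs, biases)
print(f"linearity slope = {b:+.4f} per unit of reference value")
```

A slope near zero means the bias is roughly constant across the range; a clearly nonzero slope, as here, indicates a linearity problem that a single bias study at one reference value would miss.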
Width variation
• Precision2
"Closeness" of repeated readings to each other
A random error component of the measurement system
• Repeatability
Variation in measurements obtained with one measuring instrument when used several times by an appraiser while measuring the identical characteristic on the same part
The variation in successive (short term) trials under fixed and defined conditions of measurement
Commonly referred to as E.V. – Equipment Variation
Instrument (gage) capability or potential
Within-system variation
• Reproducibility
Variation in the average of the measurements made by different appraisers using the same measuring instrument when measuring the identical characteristic on one part
• GRR or Gage R&R
The combined estimate of measurement system repeatability and reproducibility

2 In ASTM documents, there is no such thing as the precision of a measurement system; i.e., the precision cannot be represented by a single number.
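These location and width terms can be illustrated numerically. The sketch below is not the manual's average-and-range study method; it is a simplified illustration with hypothetical data, in which three appraisers each measure the same part three times against a known reference value:

```python
import statistics

# Hypothetical gage study: three appraisers, three trials each on one
# part whose reference value is known from a higher-order gage.
reference = 6.00
readings = {
    "A": [6.02, 5.99, 6.01],
    "B": [5.95, 5.97, 5.96],
    "C": [6.04, 6.03, 6.05],
}

all_readings = [x for trials in readings.values() for x in trials]

# Bias: observed average of measurements minus the reference value.
bias = statistics.mean(all_readings) - reference

# Repeatability (within-appraiser spread), pooled across appraisers.
within_var = statistics.mean(
    statistics.variance(trials) for trials in readings.values()
)
repeatability = within_var ** 0.5

# Reproducibility (spread of the appraiser averages).
appraiser_means = [statistics.mean(t) for t in readings.values()]
reproducibility = statistics.stdev(appraiser_means)

print(f"bias={bias:+.4f}  EV~{repeatability:.4f}  AV~{reproducibility:.4f}")
```

Here the appraiser averages disagree more than each appraiser's own repeated readings, so the between-system (reproducibility) component dominates the within-system (repeatability) component.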
• Sensitivity
Smallest input that results in a detectable output signal
Responsiveness of the measurement system to changes in measured feature
Determined by gage design (discrimination), inherent quality (OEM), in-service maintenance, and operating condition of the instrument and standard
Always reported as a unit of measure
• Consistency
The degree of change of repeatability over time
A consistent measurement process is in statistical control with respect to width (variability)
• Uniformity
Homogeneity of repeatability

System variation
• Capability
• Performance
• Uncertainty
National Measurement Institutes

Most of the industrialized countries throughout the world maintain their own NMIs and, similar to NIST, they also provide a high level of metrology standards or measurement services for their respective countries. NIST works collaboratively with these other NMIs to assure measurements made in one country do not differ from those made in another. This is accomplished through Mutual Recognition Arrangements (MRAs) and by performing interlaboratory comparisons between the NMIs. One thing to note is that the capabilities of these NMIs will vary from country to country and not all types of measurements are compared on a regular basis, so differences can exist.

Measurements that are traceable to the same or similar standards will agree more closely than those that are not traceable. This helps reduce the need for
Figure 1: Example of a Traceability Chain for a Length Measurement
(Figure labels include: National Standard, Wavelength Standard, Interference Comparator.)
NMIs work closely with various national labs, gage suppliers, state-of-the-art manufacturing companies, etc. to assure that their reference standards are properly calibrated and directly traceable to the standards maintained by the NMI. These government and private industry organizations will then use primary standards. This linkage or chain of events ultimately finds its way onto the factory floor and then provides the basis for measurement traceability.
CHAPTER I – Section B
The Measurement Process3
Specifications and engineering requirements define what the process should be doing.

The purpose of a Process Failure Mode Effects Analysis4 (PFMEA) is to define the risk associated with potential process failures and to propose corrective action before these failures can occur. The outcome of the PFMEA is transferred to the control plan.
Knowledge is gained of what the process is doing by evaluating the parameters or results of the process. This activity, often called inspection, is the act of examining process parameters, parts in-process, assembled subsystems, or complete end products with the aid of suitable standards and measuring devices which enable the observer to confirm or deny the premise that the process is operating in a stable manner with acceptable variation.

(Figure: diagrams of a General Process and a Measurement Process.)
Equipment is only one part of the measurement process. The owner of the process must know how to correctly use this equipment and how to analyze and interpret the results. Management must therefore also provide clear operational definitions and standards as well as training and support. The owner of the process has, in turn, the obligation to monitor and control the measurement process to assure stable and correct results, which includes a total measurement systems analysis perspective – the study of the gage, procedure, user, and environment; i.e., normal operating conditions.
Statistical Properties of Measurement Systems

An ideal measurement system would produce only "correct" measurements each time it was used. Each measurement would always agree with a standard.5 A measurement system that could produce measurements like that would be said to have the statistical properties of zero variance, zero bias, and zero probability of misclassifying any product it measured. Unfortunately, measurement systems with such desirable statistical properties seldom exist, and so process managers are typically forced to use measurement systems with less desirable properties. The quality of a measurement system is usually determined solely by the statistical properties of the data it produces over time. Other properties, such as cost, ease of use, etc., are also important in that they contribute to the overall desirability of a measurement system.

Statistical properties that are most important for one use are not necessarily the most important properties for another use. For instance, for some uses of
for the purpose of measurement. The commonly known Rule of Tens, or 10-to-1 Rule, states that instrument discrimination should divide the tolerance (or process variation) into ten parts or more. This rule-of-thumb was intended as a practical minimum starting point for gage selection.
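As a sketch of this rule of thumb (the tolerance and resolution values below are hypothetical):

```python
def passes_rule_of_tens(resolution, tolerance):
    """Check the 10-to-1 rule of thumb: the instrument's smallest
    readable increment should divide the tolerance (or process
    variation) into at least ten parts."""
    return tolerance / resolution >= 10

# Hypothetical example: a tolerance band of 0.60 mm.
print(passes_rule_of_tens(0.05, 0.60))  # 0.05 mm increments: 12 parts -> True
print(passes_rule_of_tens(0.10, 0.60))  # 0.10 mm increments: 6 parts -> False
```

Remember this is only a starting point for gage selection, not a substitute for the statistical studies described later.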
2) The measurement system ought to be in statistical control.6 This means that under repeatable conditions, the variation in the measurement system is due to common causes only and not due to special causes. This can be referred to as statistical stability and is best evaluated by graphical methods.

6-sigma process variation and/or Total Variation from the MSA study.
the items being measured vary. If so, then the largest (worst) variation

6 The measurement analyst must always consider practical and statistical significance.
Although the specific causes will depend on the situation, some typical
sources of variation can be identified. There are various methods of
presenting and categorizing these sources of variation such as cause-effect
diagrams, fault tree diagrams, etc., but the guidelines presented here will
focus on the major elements of a measuring system.
The acronym S.W.I.P.E.7 is used to represent the six essential elements of a generalized measuring system to assure attainment of required objectives. S.W.I.P.E. stands for Standard, Workpiece, Instrument, Person and Procedure, and Environment. This may be thought of as an error model for a complete measurement system.8

S – Standard
W – Workpiece (i.e., part)
I – Instrument
P – Person / Procedure
E – Environment
Factors affecting those six areas need to be understood so they can be controlled or eliminated.
Figure 2 displays a cause and effect diagram showing some of the potential sources of variation. Since the actual sources of variation affecting a specific measurement system will be unique to that system, this figure is presented as a thought starter for developing a measurement system's sources of variation.
7 This acronym was originally developed by Ms. Mary Hoskins, a metrologist associated with Honeywell, Eli Whitney Metrology Lab and the Bendix Corporation.
8 See Appendix F for an alternate error model, P.I.S.M.O.E.A.
Figure 2: Measurement System Variability Cause and Effect Diagram
(Branches: Standard, Workpiece (Part), Instrument (Gage), Person, Procedure, Environment; effect: Measurement System Variability.)
period of time. It is the combination of errors quantified by linearity, uniformity, repeatability and reproducibility. The measurement system performance, as with process performance, is the effect of all sources of variation over time. This is accomplished by determining whether our process is in statistical control (i.e., stable and consistent; variation is due only to common causes), on target (no bias), and has acceptable variation (gage repeatability and reproducibility (GRR)) over the range of expected results. This adds stability and consistency to the measurement system capability.
decision about the product and the process, the cumulative effect of all

After measuring a part, one of the actions that can be taken is to determine the status of that part. Historically, it would be determined if the part were acceptable ("good"). This does not restrict the application of the discussion to other categorization activities.
The next section deals with the effect of the measurement error on the
product decision. Following that is a section which addresses its impact
on the process decision.
Effect on Product Decisions

In order to better understand the effect of measurement system error on product decisions, consider the case where all of the variability in multiple readings of a single part is due to the gage repeatability and reproducibility. That is, the measurement process is in statistical control and has zero bias.

A wrong decision will sometimes be made whenever any part of the measurement distribution overlaps a specification limit.
That is, with respect to the specification limits, the potential to make the
wrong decision about the part exists only when the measurement system
error intersects the specification limits. This gives three distinct areas:
where:

I – Bad parts will always be called bad
II – Potential wrong decision can be made
III – Good parts will always be called good
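The risk in area II can be illustrated with a short calculation. Assuming unbiased, normally distributed gage error (the specification limit and gage sigma below are hypothetical, not from the manual), the chance of misclassifying a conforming part grows as its true value approaches the limit:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_called_bad(true_value, usl, sigma_msa):
    """Probability that a conforming part is rejected, given unbiased
    gage error distributed N(0, sigma_msa) and an upper spec limit."""
    return 1.0 - phi((usl - true_value) / sigma_msa)

# Hypothetical numbers: USL = 10.0, gage standard deviation = 0.1.
for v in (9.7, 9.9, 10.0):
    print(f"true value {v}: P(called bad) = {p_called_bad(v, 10.0, 0.1):.3f}")
```

A part well inside the limit (area III) is almost never rejected, while a part sitting exactly on the limit is misclassified half the time, which is why area II shrinks as either process or measurement variation is reduced.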
Since the goal is to maximize CORRECT decisions regarding product status, there are two choices:

1) Improve the production process: reduce the variability of the process so that no parts will be produced in the II areas.

2) Improve the measurement system: reduce the measurement system error to reduce the size of the II areas so that all parts being produced will fall within area III and thus minimize the risk of making a wrong decision.

• Statistical control
• On target
• Acceptable variability
σ²_obs = σ²_actual + σ²_msa

where
σ²_obs = observed process variance
σ²_actual = actual process variance
σ²_msa = variance of the measurement system
The capability index9 Cp is defined as
Cp = Tolerance Range / 6σ

This can be substituted in the above equation to obtain the relationship between the indices of the observed process and the actual process as:

(1/Cp_obs)² = (1/Cp_actual)² + (1/Cp_msa)²

where Cp_msa = Tolerance Range / 6σ_msa. The observed process capability is thus a combination of the actual process capability and the measurement variation.
9 Although this discussion is using Cp, the results hold also for the performance index Pp.
10 See Appendix B for formulas and graphs.
(gage). For example, parts measured with a coordinate measuring machine during buy-off and then with a height gage during production; samples measured (weighed) on an electronic scale or laboratory mechanical scale during buy-off and then on a simple mechanical scale during production.
In the case where the (higher order) measurement system used during buy-off has a GRR of 10% and the actual process Cp is 2.0, the observed process Cp during buy-off will be 1.96.11
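That figure follows from the capability relationship above. The sketch below assumes GRR is expressed as a percentage of the tolerance, which is the convention this example appears to use:

```python
# Observed vs. actual capability when measurement variation adds in
# quadrature: (1/Cp_obs)^2 = (1/Cp_actual)^2 + (1/Cp_msa)^2.

def observed_cp(actual_cp, grr_percent_of_tolerance):
    """Cp_msa = Tolerance / (6 * sigma_msa); with GRR stated as a
    percentage of the tolerance, Cp_msa = 1 / (GRR fraction)."""
    cp_msa = 1.0 / (grr_percent_of_tolerance / 100.0)
    return (actual_cp ** -2 + cp_msa ** -2) ** -0.5

# Numbers from the text: GRR of 10%, actual Cp of 2.0.
print(round(observed_cp(2.0, 10), 2))  # -> 1.96
```

The same function shows how quickly a poor gage hides capability: at 30% GRR the same actual Cp of 2.0 is observed as roughly 1.7.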
11 For this discussion, assume there is no sampling variation. In reality 1.96 will be the expected value but actual results will vary around it.
Process Setup/Control (Funnel Experiment)

Often manufacturing operations use a single part at the beginning of the day to verify that the process is targeted. If the part measured is off target, the process is then adjusted. Later, in some cases another part is measured and again the process may be adjusted. Dr. Deming12 referred to this type of measurement and decision-making as tampering.

Consider a situation where the weight of a part is being controlled to a target of 5.00 grams. Suppose that the results from the scale used to determine the weight vary ±0.20 grams but this is not known since the measurement system analysis was never done. The operating instructions require the operator to verify the weight at setup and every hour based on one sample. If the results are beyond the interval 4.90 to 5.10 grams then the operator is to set up the process again.

Now the process is running at 5.10 grams for a target. When the operator checks the setup this time, 5.08 grams is observed so the process is left alone.
12 Deming, W. Edwards, Out of the Crisis, Massachusetts Institute of Technology, 1982, 1986.
• Recalibration of gages based on arbitrary limits – i.e., limits not reflecting the measurement system's variability. (Rule 3)
• (Re)mastering the process control measurement system after an arbitrary number of uses without any indication or history of a change (special cause). (Rule 3)
• Autocompensation adjusts the process based on the last part produced. (Rule 2)
• On the job training where worker A trains worker B who later trains worker C... without standard training material. Similar to the "post office" game. (Rule 4)

is taken. (Rule 1)
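The funnel effect can be sketched with a small simulation. Assuming the scale's ±0.20 gram spread corresponds to roughly three standard deviations of normally distributed gage error (an assumption; the text gives only the spread), Rule 2 style compensation roughly doubles the variance compared with leaving a stable process alone:

```python
import random

# Monte Carlo sketch of over-adjustment ("tampering"): the process is
# perfectly on target, but every reading includes gage error. Under
# Deming's Rule 2 the operator compensates each observed deviation,
# which roughly doubles the variance versus not adjusting at all.

random.seed(1)
TARGET, SIGMA_GAGE, N = 5.00, 0.20 / 3, 20_000  # +/-0.20 g taken as ~3 sigma

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Rule 1: never adjust a stable, on-target process.
no_adjust = [TARGET + random.gauss(0, SIGMA_GAGE) for _ in range(N)]

# Rule 2: shift the process setting by the negative of the last deviation.
setting, adjusted = TARGET, []
for _ in range(N):
    reading = setting + random.gauss(0, SIGMA_GAGE)
    adjusted.append(reading)
    setting -= reading - TARGET  # compensate the observed deviation

print(variance(adjusted) / variance(no_adjust))  # approaches 2 for large N
```

Each adjusted reading is the difference of two independent gage errors, so its variance is twice the gage variance; the operator's diligence only makes the output noisier.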
CHAPTER I – Section C
Measurement Strategy and Planning
and measurement error in the future.
In some cases, due to the risk involved in the component being measured or because of the cost and complexity of the measurement device, the OEM customer may use the APQP process and committee to decide on the measurement strategy at the supplier.
Not all product and process characteristics require measurement systems whose development falls under this type of scrutiny. Simple standard measurement tools like micrometers or calipers may not require this in-depth strategy and planning. A basic rule of thumb is whether the characteristic being measured on the component or sub-system has been identified in the control plan or is important in determining the
Identify the Purpose of the Measurement Process

The first step is to establish the purpose for the measurement and how the measurement will be utilized. A cross-functional team organized early in the development of the measurement process is critical in accomplishing this task. Specific considerations are made in relation to audit, process control, product and process development and analysis of the "Measurement Life Cycle".

Measurement Life Cycle

The Measurement Life Cycle concept expresses the belief that the measurement methods may change over time as one learns and improves the process. For example, measurement may start on a product characteristic to establish stability and capability of the process. This may lead to an understanding of critical process control characteristics that directly affect the part characteristics. Dependency on part characteristic information becomes less and the sampling plan may be reduced to signify this understanding (five parts per hour to one part per shift). Also, the method of measurement may change from a CMM measurement to some form of attribute gaging. Eventually it may be found that very little part monitoring may be required as long as the process is maintained, or measuring and monitoring the maintenance and tooling may be all that is needed. The level of measurement follows the level of process understanding.
Design Selection

plan and concept for the measurement system required by the design.
each step in the process. This will aid in the development of the
measurement equipment criteria and requirements affected by the
location in the process. A measurement plan, a list of measurement types,
comes out of this investigation.13
For complex measurement systems, a flow chart is made of the
measurement process. This would include delivery of the part or sub-
system being measured, the measurement itself, and the return of the part
or sub-system to the process.
Next use some method of brainstorming with the group to develop
general criteria for each measurement required. One of the simple
methods to use is a cause and effect diagram.14 See the example in Figure
2 as a thought starter.
A few additional questions to consider in relation to measurement planning:
• Who ought to be involved in the "needs" analysis? The flow chart and initial discussion will facilitate the identification of the key individuals.
• Why will measurement be taken and how will it be used? Will the data be used for control, sorting, qualification, etc.? The way the data will be used may determine the sensitivity required of the measurement system.

calibration masters?
• When and where will the measurement be taken? Will the part be
clean, oily, hot, etc.?
13 This can be considered as a preliminary control plan.
14 See Guide to Quality Control, Kaoru Ishikawa, published by Asian Productivity Organization, 1986.
During and after the fabrication of the measurement equipment and development of the measurement process (methods, training, documentation, etc.), experimental studies and data collection activities will be performed. These studies and data will be used to understand this measurement process so that this process and future processes may be improved.
CHAPTER I – Section D
Measurement Source Development
process for which it was created. It is strongly encouraged that this chapter not be used without reading and understanding the entire discussion about a measurement process. To obtain the most benefit from the measurement process, study and address it as a process with inputs and outputs.15
This chapter was written with the team philosophy in mind. It is not a job description for the buyer or purchasing agent. The activities described here will require team involvement to be completed successfully and it should be administered within the overall framework of an Advanced Product Quality Planning (APQP) team. This can result

out of the planning process may be modified before the gage supplier

requirements.

with the customer's formal presentation of the intent of the project in the
15 See Chapter I, Section B.
measurement system and this needs to be established very early in the APQP process. Initial responsibility for this may lie with the product design engineer, dimensional control, etc. depending on the specific organization. When datum schemes do not match throughout a manufacturing process, particularly in the measurement systems, the wrong things may be measured, there may be fit problems, etc., leading to ineffective control of the manufacturing process.
There may be times when a datum scheme used in a final assembly cannot possibly match that used in a sub-component manufacturing process.

difficulties and conflicts that may lie ahead and have every opportunity to resolve the differences.

on centers but the important product features are in its lobes. One

measurement.
It is assumed that the gage supplier will be involved with the APQP process, a team approach. The gage supplier will develop a clear appreciation of the overall production process and product usage so that his role is understood not only by him but by others on the team (manufacturing, quality, engineering, etc.).
There may be slight overlap in some activities or the order of those activities depending on the particular program/project or other constraints. For instance, the APQP team without much input from a gage source may develop certain gage concepts. Other concepts may require the expertise of the gage source. This may be driven by the complexity of the measurement system and a team decision as to what makes sense.
The team of individuals that will employ and be responsible for the measurement system should have direct responsibility for developing the detailed concept. This can
be part of the APQP team. To better develop this concept, several
questions need to be answered.
The team may research various issues to help decide which direction or
path will be followed for designing the measurement process. Some may
be dictated or heavily implied by the product design. Examples of the
multitude of possible issues that need to be addressed by the team when
developing this detailed concept may be found in the “Suggested
Elements for a Measurement System Development Checklist” at the end
of this section.
All too often, customers rely too heavily on suppliers for solutions.
Before a customer asks a supplier to suggest solutions to process
problems, the foundation and intent of the process needs to be
thoroughly understood and anticipated by the team that owns that
process. Then and only then will the process be properly used,
supported and improved upon.
Considerations

measurement system, device or apparatus. Simpler gages may only require an inspection at regular intervals, whereas more complex systems may require ongoing detailed statistical analyses and a team of engineers to maintain in a predictive fashion.
Planning preventive maintenance activities should coincide with the initiation of the measurement process planning. Many activities, such as draining air filters daily, lubricating bearings after the designated number of operating hours, etc., can be planned before the measurement system is completely built, developed and implemented. In fact this is

collected and plotted over time. Simple analytical methods (run charts,
• Design Standards
• Build Standards
a good idea to have sufficient documented design detail that the design
may be built or repaired to original intent by any qualified builder –
however, this decision may be driven by cost and criticality. The
required format of the final design may be some form of CAD or
hardcopy engineering drawings. It may involve engineering standards
chosen from those of the OEM, SAE, ASTM, or other organization, and
the gage supplier must have access to the latest level and understand
these standards. The OEM may require the use of particular standards at
either the design or build phase and may even require formal approvals
before the measurement system may be released for use.
(CAD – e.g., CATIA, Unigraphics, IGES, manual hardcopy, etc.) to the builder. It may also cover performance standards for a more complex measurement system.
Build standards will cover the tolerances to which the measurement system must be built. Build tolerance should be based on a combination of the capabilities of the process used to produce the gage or gage component, and the criticality of the intended measurement. Build tolerance should not be a mere given percent of product tolerance alone.

measurement error.
Quotations

(Sidebar flowchart: QUOTE – CONCEPT – APPROVAL.)

regard to price or delivery – this would not necessarily be discounted as a negative factor – one supplier may have discovered an item that others overlooked.) Do the concepts promote simplicity and maintainability?
Effective documentation for any system serves the same purpose as a good map on a trip. For example, it will suggest to the user how to get from one point to another (user instructions or gage instructions). It provides the user with possible alternative routes to reach the desired destinations (troubleshooting guides or diagnostic trees) if the main route is blocked or closed.
A complete documentation package may include:

• Reproducible set of assembly and detailed mechanical drawings
This list should include items that may require considerable lead-time to acquire.

members).

• Calibration instructions.

The above list can be used as a checklist when organizing the quotation package; however it is not necessarily all-inclusive.
Results of such dimensional layout and/or testing should be done in accordance with customer design and build standards and be fully documented and available for customer review.
After successful dimensional layout, the supplier should perform a preliminary but formal measurement systems analysis. This again pre-requires that the supplier have the personnel, knowledge and experience to accomplish the appropriate analysis. The customer should predetermine with the supplier (and perhaps the OEM) exactly what sort of analysis is required at this point and should be aware of any guidance the supplier might need. Some issues that may need discussion include:

Gage repeatability and reproducibility (GRR)
Acceptance criteria
16 See Appendix D.
CHECKLIST

Shipment
• When should the equipment be shipped?
• How should it be shipped?
• Who removes equipment from the truck or rail car?
• Is insurance required?
• Should documentation be shipped with the hardware?
• Does the customer have the proper equipment to unload the hardware?
• Where will the system be stored until shipment?
• Where will the system be stored until implementation?
• Is the shipping documentation complete and easily understandable for the loader, transporter, unloader and installation crew?
Qualification at the Customer

customer once delivery is completed. Since this becomes the first real

before shipment and confidence in the quality of the layout results done

comparing results before and after shipment, be aware that there will
• User manuals
Maintenance/service manual
Spare parts list
Troubleshooting guide
• Calibration instructions
• Any special considerations
At the outset, the delivered documentation needs to be noted as preliminary. Original or reproducible documentation does not need to be delivered at this time because potential revision may be necessary after implementation. In fact, it is a wise idea to not have the original documentation package delivered until after the entire system is implemented – suppliers are generally more efficient with updating documentation than customers.
stationary? Is it an electrical property? Is there significant within-part variation?
For what purpose will the results (output) of the measurement process be used? Production improvement, production monitoring, laboratory studies, process audits, shipping inspection, receiving inspection, responses to a D.O.E.?
Who will use the process? Operators, engineers, technicians, inspectors, auditors?
Training required: Operator, maintenance personnel, engineers; classroom, practical application, OJT, apprenticeship period.
Have the sources of variation been identified? Build an error model (S.W.I.P.E. or P.I.S.M.O.E.A.) using teams, brainstorming, profound process knowledge, cause & effect diagram or matrix.
Flexible vs. Dedicated Measurement Systems: Measurement systems can either be permanent and dedicated or they can be flexible and have the ability to measure different types of parts; e.g., doghouse gages, fixture gaging, coordinate measurement machines, etc. Flexible gaging will be more expensive, but can save money in the long run.
Contact vs. Non-contact: Reliability, type of feature, sample plan, cost, maintenance, calibration, personnel skill required, compatibility, environment, pace, probe types, part deflection, image processing. This may be determined by the control plan requirements and the frequency of the measurement (full contact gaging may get excessive wear during continuous sampling). Full surface contact probes, probe type, air feedback jets,
Environment: Dirt, moisture, humidity, temperature, vibration, noise, electro-magnetic interference (EMI), ambient air movement, air contaminants, etc. Laboratory, shop floor, office, etc.? Environment becomes a key issue with low, tight tolerances at the micron level, and also where CMMs, vision systems, ultrasonics, etc. are used. It could be a factor in auto-feedback in-process type measurements. Cutting oils, cutting debris, and extreme temperatures could also become issues. Is a clean room required?
Measurement and Location Points: Clearly define, using GD&T, the location of fixturing and clamping
points and where on the part the measurements will be taken.
Part Preparation: Should the part be clean, non-oily, temperature stabilized, etc. before measurement?
Correlation Issue #1 – Duplicate Gaging: Are duplicate (or more) gages required within or between plants to
support requirements? Building considerations, measurement error considerations, maintenance considerations.
Which is considered the standard? How will each be qualified?
Correlations Issue #2 – Methods Divergence: Measurement variation resulting from different measurement
system designs performing on the same product/process within accepted practice and operation limits (e.g.,
CMM versus manual or open-setup measurement results).
Destructive versus Nondestructive (NDT) Measurement: Examples: tensile test, salt spray testing, plating/paint coating thickness, hardness, dimensional measurement, image processing, chemical analysis, stress, durability, impact, torsion, torque, weld strength, electrical properties, etc.
Potential Measurement Range: Size and expected range of conceivable measurements.
Effective Resolution: Is the measurement's sensitivity to physical change (its ability to detect process or product variation) acceptable for the particular application?
Sensitivity: Is the size of the smallest input signal that results in a detectable (discernable) output signal for this measurement device acceptable for the application? Sensitivity is determined by inherent gage design and quality (OEM), in-service maintenance, and operating condition.
Measurement System Build Issues (equipment, standard, instrument): Have the sources of variation identified in the system design been addressed? Design review; verify and validate.
Calibration and control system: Recommended calibration schedule and audit of equipment and documentation.

Input requirements: Mechanical, electrical, hydraulic, pneumatic, surge suppressors, dryers, filters, setup and operation.
Output requirements: Analog or digital, documentation and records, file, storage, retrieval, backup.

Cost: Budget factors for development, purchase, installation, operation and training.

Serviceability: Internal and external, location, support level, response time, availability of service parts.
Ergonomics: Ability to load and operate the machine without injuries over time. Measurement device
discussions need to focus on issues of how the measurement system is interdependent with the operator.
Storage and Location: Establish the requirements around the storage and location of the measurement
equipment. Enclosures, environment, security, availability (proximity) issues.
Measurement cycle time: How long will it take to measure one part or characteristic? How will the measurement cycle be integrated with process and product control?
Chapter I – Section D
Measurement Source Development
Will there be any disruption to process flow, lot integrity, to capture, measure and return the part?
Material Handling: Are special racks, holding fixtures, transport equipment or other material handling
equipment needed to deal with parts to be measured or the measurement system itself?
Environmental issues: Are there any special environmental requirements, conditions, limitations, either
affecting this measurement process or neighboring processes? Is special exhausting required? Is temperature or
humidity control necessary? Humidity, vibration, noise, EMI, cleanliness.
Are there any special reliability requirements or considerations? Will the equipment hold up over time?
Does this need to be verified ahead of production use?
Spare parts: Common list, adequate supply and ordering system in place, availability, lead-times understood and accounted for. Is adequate and secure storage available? (bearings, hoses, belts, switches, solenoids, valves, etc.)
User instructions: Clamping sequence, cleaning procedures, data interpretation, graphics, visual aids, comprehensive. Available, appropriately displayed.
Documentation: Engineering drawings, diagnostic trees, user manuals, language, etc.
Calibration: Comparison to acceptable standards. Availability and cost of acceptable standards. Recommended frequency, training requirements. Down-time required?
Storage: Are there any special requirements or considerations regarding the storage of the measurement device?
Error/Mistake proofing: Can known measurement procedure mistakes be corrected easily (too easily?) by the
user? Data entry, misuse of equipment, error proofing, mistake proofing.
Support: Who will support the measurement process? Lab technicians, engineers, production, maintenance?
Training: What training will be needed for operators/inspectors/technicians/engineers to use and maintain this measurement process? Timing, resource and cost issues. Who will train? Where will training be held? Lead-time issues?
Data management: How will data output from this measurement process be managed? Manual, computerized, summary methods, summary frequency, review methods, review frequency, customer requirements, internal requirements. Availability, storage, retrieval, backup, security. Data interpretation.
Personnel: Will personnel need to be hired to support this measurement process? Cost, timing, availability
issues. Current or new.
Improvement methods: Who will improve the measurement process over time? Engineers, production,
maintenance, quality personnel? What evaluation methods will be used? Is there a system to identify needed
improvements?
Long-term stability: Assessment methods, format, frequency, and need for long-term studies. Drift, wear,
contamination, operational integrity. Can this long-term error be measured, controlled, understood, predicted?
Special considerations: Inspector attributes, physical limitations or health issues: colorblindness, vision,
strength, fatigue, stamina, ergonomics.
CHAPTER I – Section E
Measurement Issues
First, does the instrument have adequate discrimination? Discrimination (or class) is fixed by design and serves as the basic starting point for selecting a measurement system. Typically, the Rule of Tens has been applied, which states that instrument discrimination should divide the tolerance (or process variation) into ten parts or more.
Second, does the measurement system demonstrate effective resolution? Related to discrimination, determine if the measurement system has the sensitivity to detect changes in product or process variation for the application and conditions.
2) The measurement system must be stable.
Under repeatability conditions, the measurement system variation is due to common causes only and not special causes. The analyst must always consider practical and statistical significance.

3) The statistical properties (errors) must be consistent over the expected range and adequate for the purpose of measurement (product or process control).
extremely accurate gages with very high repeatability. Applications of such a study provide the following:

• A criterion to accept new measuring equipment.
• A comparison of one measuring device against another.
• A basis for evaluating a gage suspected of being deficient.
• A comparison for measuring equipment before and after repair.
• A required component for calculating process variation, and the acceptability level for a production process.
• The information necessary to develop a Gage Performance Curve (GPC),17 which indicates the probability of accepting a part of some true value.
Definitions and Potential Sources

Operational Definition
17 See Chapter V, Section C.
18 W. E. Deming, Out of the Crisis (1982, 1986), p. 277.
Standard
A standard is anything taken by general consent as a basis for
comparison; an accepted model. It can be an artifact or ensemble
(instruments, procedures, etc.) set up and established by an authority as a
rule for the measure of quantity, weight, extent, value or quality.
The concept of ensemble was formalized in ANSI/ASQC Standard M1-
1996.19 This term was used to stress the fact that all of the influences
affecting the measurement uncertainty need to be taken into account;
e.g., environment, procedures, personnel, etc. “An example of a simple
ensemble would be an ensemble for the calibration of gage blocks consisting of a standard gage block, a comparator, an operator, environment, and the calibration procedure.”
Reference Standards

A standard, generally of the highest metrological quality available at a given location, from which measurements made at that location are derived.
Measurement and Test Equipment (M&TE)

All of the measurement instruments, measurement standards, reference materials, and auxiliary apparatus that are necessary to perform a measurement.
Calibration Standard

Transfer Standard

Master
Working Standard
A standard whose intended use is to perform routine measurements
within the laboratory, not intended as a calibration standard, but may be
utilized as a transfer standard.
19 This definition was later updated as Measurement and Test Equipment or M&TE by subsequent military standards.
(Diagram: hierarchy of standards – calibration standard/master, transfer standard, working standard, check standard)

Check Standard
Reference Value

reference for comparison. Accepted reference values are based upon the following:
True Value

The true value is the “actual” measure of the part. Although this value is unknown and unknowable, it is the target of the measurement process. Any individual reading ought to be as close to this value as (economically) possible. Unfortunately, the true value can never be known with certainty. The reference value is used as the best approximation of the true value in all analyses. Because the reference value is used as a surrogate for the true value, these terms are commonly used interchangeably. This usage is not recommended.21
Discrimination

Discrimination is the amount of change from a reference value that an instrument can detect and faithfully indicate; it is also referred to as readability or resolution. Historically, this range has been taken to be the product specification; recently the 10-to-1 rule has been interpreted to apply to the process variation rather than to the tolerance.
21 See also ASTM: E177-90a.
Figure 4: Discrimination (panels: full interval, half interval)
The above rule of thumb can be considered only a starting point for determining the discrimination, since it does not include any other element of the measurement system’s variability.
Because of economic and physical limitations, the measurement system will not perceive all parts of a process distribution as having separate or different measured characteristics. Instead, the parts will be grouped by the measured values into data categories. All parts in the same data category will have the same value for the measured characteristic.
2–4 Data Categories
• Control: Can be used with semi-variable control techniques based on the process distribution. Can produce insensitive variables control charts.
• Analysis: Generally unacceptable for estimating process parameters and indices since it only provides coarse estimates.

5 or More Data Categories
• Control: Can be used with variables control charts.
Figure 6 contains two sets of control charts derived from the same data. Control Chart (a) shows the measurements made to the nearest thousandth of an inch. Control Chart (b) shows these data rounded off to the nearest hundredth of an inch. Control Chart (b) appears to be out of control due to the artificially tight limits. The zero ranges are more a product of the rounding off than they are an indication of the subgroup variation.
The best indication of inadequate discrimination can be seen on the SPC
range chart for process variation. In particular, when the range chart
shows only one, two, or three possible values for the range within the
control limits, the measurements are being made with inadequate
discrimination. Also, if the range chart shows four possible values for the
range within control limits and more than one-fourth of the ranges are
zero, then the measurements are being made with inadequate
discrimination.
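As an illustration only (not part of the manual), the range-chart rule of thumb above can be coded directly. The subgroup ranges below are hypothetical, echoing the two values seen in Control Chart (b):

```python
def inadequate_discrimination(ranges, ucl):
    """Range-chart rule of thumb: discrimination is inadequate if only
    1-3 distinct range values fall within the control limits, or if there
    are 4 distinct values and more than one-fourth of the ranges are zero."""
    within = [r for r in ranges if 0 <= r <= ucl]
    distinct = len(set(within))
    if distinct <= 3:
        return True
    if distinct == 4 and sum(1 for r in within if r == 0) > len(within) / 4:
        return True
    return False

# Hypothetical subgroup ranges read to 0.01: only 0.00 and 0.01 appear.
print(inadequate_discrimination([0.00, 0.01, 0.01, 0.00, 0.01, 0.00], ucl=0.01438))  # True
```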
45
Chapter I – Section E
Measurement Issues
Returning to Figure 6, Control Chart (b), there are only two possible
values for the range within the control limits (values of 0.00 and 0.01).
Therefore, the rule correctly identifies the reason for the lack of control
as inadequate discrimination (sensitivity or effective resolution).
This problem can be remedied, of course, by changing the ability to
detect the variation within the subgroups by increasing the discrimination
of the measurements. A measurement system will have adequate
discrimination if its apparent resolution is small relative to the process
variation. Thus a recommendation for adequate discrimination would be for the apparent resolution to be at most one-tenth of the total process six-sigma spread (six standard deviations), instead of the traditional rule, which is that the apparent resolution be at most one-tenth of the tolerance spread.
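This recommendation can be sketched as a quick check. The sketch below is illustrative, with invented numbers; `apparent_resolution` and `process_sigma` are assumed inputs in the same units:

```python
def resolution_adequate(apparent_resolution, process_sigma):
    """Recommended: apparent resolution at most one-tenth of the
    total process six-sigma spread."""
    return apparent_resolution <= (6 * process_sigma) / 10

# Illustrative: sigma = 0.002 -> six-sigma spread = 0.012, one-tenth = 0.0012
print(resolution_adequate(0.001, 0.002))  # True
print(resolution_adequate(0.01, 0.002))   # False
```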
Eventually, there are situations where a stable, highly capable process is reached using a stable, “best-in-class” measurement system at the practical limits of technology. Effective resolution may be inadequate, and further improvement of the measurement system becomes impractical. In these special cases, measurement planning may require alternative process monitoring techniques. Only qualified technical personnel familiar with the measurement system and process should make and document such decisions. Customer approval shall be required and documented in the Control Plan.
Figure 6: Xbar/R charts of the same data at two discriminations.22
(a) Discrimination = .001 — Sample Mean: UCL=0.1444, Mean=0.1397, LCL=0.1350; Sample Range: UCL=0.01717, R=0.00812, LCL=0.
(b) Discrimination = .01 — Sample Mean: UCL=0.1438, Mean=0.1398, LCL=0.1359; Sample Range: UCL=0.01438, R=0.0068, LCL=0.
22 Figure 6 is adapted from Evaluating The Measurement Process, by Wheeler and Lyday, Copyright 1989, SPC Press, Inc., Knoxville, Tennessee.
Location

Width

Accuracy
Bias
Bias is often referred to as “accuracy.” Because “accuracy” has several
meanings in literature, its use as an alternate for “bias” is not
recommended.
Bias is the measure of the systematic error of the measurement system: the contribution to the total error comprised of the combined effects of all sources of variation, known or unknown, whose contributions to the total error tend to offset consistently and predictably all results of repeated applications of the same measurement process at the time of the measurements.
Possible causes for excessive bias are:
• Instrument needs calibration
• Linearity error
Stability
Stability (or drift) is the total variation in the measurements obtained
with a measurement system on the same master or parts when measuring
a single characteristic over an extended time period. That is, stability is
the change in bias over time.
(Figure: measurement distributions drifting relative to the reference value over time)
Possible causes for instability include:
• Instrument needs calibration; reduce the calibration interval
• Worn instrument, equipment or fixture
• Environmental changes – temperature, humidity, vibration, cleanliness
Linearity
The difference of bias throughout the expected operating
(measurement) range of the equipment is called linearity. Linearity can
be thought of as a change of bias with respect to size.
(Figure: linearity – bias at Value 1 versus bias at Value N across the operating range)

Note that unacceptable linearity can come in a variety of flavors. Do not assume a constant bias.
Possible causes for linearity error include:
• Application – part size, position, operator skill, fatigue, observation error (readability, parallax)
Precision

Width Variation

Traditionally, precision describes the net effect of discrimination, sensitivity and repeatability over the operating range (size, range and time) of the measurement system. In some organizations precision is used interchangeably with repeatability. In fact, precision is most often used to describe the expected variation of repeated measurements over the range of measurement; that range may be size or time (i.e., “a device is as precise at the low range as at the high range of measurement,” or “as precise today as yesterday”). One could say precision is to repeatability what linearity is to bias (although the first is random and the other is systematic error). The ASTM definition of precision is broader, including variation from different readings, gages, people, labs or conditions.
Repeatability
Possible causes for poor repeatability include:
• Within-environment: short-cycle fluctuations in temperature, humidity, vibration, lighting, cleanliness
• Violation of an assumption – stable, proper operation
• Instrument design or method lacks robustness, poor uniformity
• Wrong gage for the application
• Distortion (gage or part), lack of rigidity
• Application – part size, position, observation error (readability, parallax)
Reproducibility

Reproducibility is typically defined as the variation in the average of the measurements made by different appraisers using the same measuring instrument when measuring the identical characteristic on the same part. This is often true for manual instruments influenced by the skill of the operator.

(Figure: measurement distributions of appraisers A, B and C)
Possible sources of reproducibility error include:
• Between-methods: average difference caused by changing point densities, manual versus automated systems, zeroing, holding or clamping methods, etc.
• Between-appraisers (operators): average difference between appraisers A, B, C, etc., caused by training, technique, skill and experience. This is the recommended study for product and process qualification and a manual measuring instrument.
• Between-environment: average difference in measurements over time 1, 2, 3, etc., caused by environmental cycles; this is the most common study for highly automated systems in product and process qualifications.
• Application – part size, position, observation error (readability, parallax)

As with repeatability, there is a difference between the definitions used by ASTM and those used by this manual.
σ²GRR = σ²reproducibility + σ²repeatability

(Figure: GRR – appraiser distributions A, B and C relative to the reference value)
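When the repeatability and reproducibility errors are independent, their variances add as in the equation above. A minimal sketch of the combination (the component values are invented):

```python
import math

def grr_std(repeatability_std, reproducibility_std):
    """sigma_GRR^2 = sigma_reproducibility^2 + sigma_repeatability^2,
    valid when the two error sources are independent."""
    return math.sqrt(repeatability_std**2 + reproducibility_std**2)

# Illustrative component estimates
print(round(grr_std(0.03, 0.04), 4))  # 0.05
```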
Sensitivity

Factors that affect sensitivity include:
• Skill of operator
• Ability to provide drift-free operation, e.g., for electronic or pneumatic gages
Consistency
Consistency is the difference in the variation of the measurements taken
over time. It may be viewed as repeatability over time.
Possible causes for inconsistency include:
• Worn equipment
Uniformity

Uniformity is the difference in variation throughout the operating range of the gage. It may be considered to be the homogeneity (sameness) of the repeatability over size.

Factors impacting uniformity include:
• Parallax in reading
Measurement System Variation

Capability

The capability of a measurement system is an estimate of the combined variation of measurement errors based on a short-term assessment. Simple capability includes the components of uncorrected bias or linearity, and repeatability and reproducibility (GRR), including short-term consistency.
Refer to Chapter III for typical methods and examples to quantify each
component.
An estimate of measurement capability, therefore, is an expression of the
expected error for defined conditions, scope and range of the
measurement system (unlike measurement uncertainty, which is an
expression of the expected range of error or values associated with a
measurement result). The capability expression of combined variation
(variance) when the measurement errors are uncorrelated (random and
independent) can be quantified as:
σ²capability = σ²bias(linearity) + σ²GRR
First, an estimate of capability is always associated with a defined scope of measurement – conditions, range and time – which could be very specific or a general statement of operation, over a limited portion or the entire measurement range. Short-term could mean: the capability over a series of measurement cycles, the time to complete the GRR evaluation, a specified period of production, or time represented by the calibration frequency. A statement of measurement capability need only be as complete as to reasonably replicate the conditions and range of measurement. A documented Control Plan could serve this purpose.

Second, short-term consistency and uniformity (repeatability errors) over the range of measurement are included in a capability estimate. For a simple instrument, such as a 25 mm micrometer, the repeatability over the entire range of measurement using typical, skilled operators is expected to be consistent. For a more complex system, the capability estimate may include the entire range of measurement for multiple types of features, with errors that vary over range or size. Because these errors are correlated, they cannot be combined by the simple linear formula above.
Performance
As with process performance, measurement system performance is the
net effect of all significant and determinable sources of variation over
time. Performance quantifies the long-term assessment of combined
measurement errors (random and systematic). Therefore, performance
includes the long-term error components of:
• Capability (short-term errors)
• Stability and consistency
Refer to Chapter III for typical methods and examples to quantify each
component.
σ²performance = σ²capability + σ²stability + σ²consistency
Again, just as with short-term capability, long-term performance is always associated with a defined scope of measurement – conditions, range and time. The scope for an estimate of measurement performance could be very specific or a general statement of operation, over a limited portion or the entire measurement range. Long-term could mean: the average of several capability assessments over time, or the long-term average error assessed from a measurement stability study.
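The capability and performance variances combine in the same additive way, when the error sources are independent. A minimal sketch chaining the two equations (all component values invented for illustration):

```python
import math

def capability_variance(bias_linearity_std, grr_std):
    # sigma^2_capability = sigma^2_bias(linearity) + sigma^2_GRR
    return bias_linearity_std**2 + grr_std**2

def performance_variance(capability_var, stability_std, consistency_std):
    # sigma^2_performance = sigma^2_capability + sigma^2_stability + sigma^2_consistency
    return capability_var + stability_std**2 + consistency_std**2

cap = capability_variance(0.02, 0.05)          # short-term errors
perf = performance_variance(cap, 0.01, 0.02)   # adds the long-term components
print(round(math.sqrt(perf), 4))               # long-term performance standard deviation
```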
Uncertainty

Measurement uncertainty is defined by VIM as a “parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand.”24
decisions about the product and process.

This ambiguity carries over to bias and repeatability (as measures of accuracy and precision). It is important to realize that:
• Bias and repeatability are independent of each other (see Figure 8).
• Controlling one of these sources of error does not guarantee the control of the other. Consequently, measurement systems control programs (traditionally referred to as Gage Control Programs) ought to quantify and track all relevant sources of variation.25
24 Measurand is then defined by the VIM as “the particular quantity subject to measurement”.
25 See also Chapter I, Section B.
CHAPTER I – Section F
Measurement Uncertainty
General

Measurement Uncertainty is a term that is used internationally to describe the quality of a measurement value. While this term has traditionally been reserved for many of the high accuracy measurements performed in metrology or gage laboratories, quality system standards such as QS-9000 or ISO/TS 16949 require that “measurement uncertainty be known and consistent with required measurement capability of any inspection, measuring or test equipment.”26
In essence, uncertainty is the range assigned to a measurement result that describes, within a defined level of confidence, the range expected to contain the true measurement result. Measurement uncertainty is normally reported as a bilateral quantity. Uncertainty is a quantified expression of measurement reliability. A simple expression of this concept is:

True measurement = observed measurement (result) ± U

The expanded uncertainty U is the combined standard uncertainty (uc) multiplied by a coverage factor (k) that represents the area of the normal curve for a desired level of confidence; a 95% confidence level is often interpreted as k = 2:

U = kuc
26 QS-9000, 3rd edition, Section 4.11.1
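A minimal sketch of the U = kuc calculation, assuming independent uncertainty components combined by root sum of squares (the component values are invented; `expanded_uncertainty` is an illustrative helper, not a function from any standard library):

```python
import math

def expanded_uncertainty(component_std_uncertainties, k=2):
    """U = k * uc, where uc is the combined standard uncertainty of
    independent error components (root sum of squares) and k is the
    coverage factor (k = 2 is often used for ~95% confidence)."""
    uc = math.sqrt(sum(u**2 for u in component_std_uncertainties))
    return k * uc

# Hypothetical components: GRR, calibration, environment (same units)
U = expanded_uncertainty([0.004, 0.002, 0.001])
print(round(U, 4))  # the result is reported bilaterally: measurement +/- U
```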
It is important to remember that measurement uncertainty is simply an estimate of how much a measurement may vary at the time of measurement. It should consider all significant sources of measurement variation in the measurement process plus significant errors of calibration, master standards, method, environment and others not previously considered in the measurement process. In many cases, this estimate will use methods of MSA and GRR to quantify those significant standard errors. It is appropriate to periodically reevaluate the uncertainty related to a measurement process to assure the continued accuracy of the estimate.
Measurement Uncertainty and MSA

The major difference between uncertainty and the MSA is that the MSA focus is on understanding the measurement process, determining the amount of error in the process, and assessing the adequacy of the measurement system for product and process control. Uncertainty is the range of values, defined by a confidence interval, associated with a measurement result and expected to include the true value of measurement.
CHAPTER I – Section G
Measurement Problem Analysis
Overlooking or ignoring the contribution of the measurement system may cause loss of time and resources as the focus is placed on the process itself, when the reported variation is actually caused by the measurement device.
This section reviews basic problem-solving steps and shows how they relate to understanding the issues in a measurement system. Each company may use the problem resolution process which the customer has approved.

If the measurement system was developed using the methods in this manual, most of the initial steps will already exist. For example, a cause and effect diagram may already exist, giving valuable lessons learned about the measurement process.
Step 1 – Identify the Issues

It is important to clearly define the problem or issue, and to isolate the measurement variation and its contribution from the process variation (the decision may be to work on the process rather than on the measurement device). The issue statement needs to be an operational definition that anyone would understand and be able to act on the issue.
Step 2
The problem solving team, in this case, will be dependent on the
complexity of the measurement system and the issue. A simple
measurement system may only require a couple of people. But as the
system and issue become more complex, the team may grow in size
(maximum team size ought to be limited to 10 members). The team
members and the function they represent need to be identified on the
problem-solving sheet.
unknown information. The team would use subject matter knowledge to initially identify those variables with the largest contribution to the issue. Additional studies can be done to substantiate the decisions.
Step 5 – Plan-Do-Study-Act (PDSA)27

This would lead to a Plan-Do-Study-Act, which is a form of scientific study. Experiments are planned, data are collected, stability is established, hypotheses are made and proven until an appropriate solution is reached.
Step 6

Step 7
27 W. Edwards Deming, The New Economics for Industry, Government, Education, The MIT Press, 1994, 2000
Chapter II

GENERAL CONCEPTS FOR ASSESSING MEASUREMENT SYSTEMS
CHAPTER II – Section A
Background
Introduction

Two important areas need to be assessed:

1) Verify the correct variable is being measured at the proper characteristic location. Verify fixturing and clamping if applicable. Also identify any critical environmental issues that are interdependent with the measurement. If the wrong variable is being measured, then no matter how accurate or how precise the measurement system is, it will simply consume resources without providing benefit.

2) Determine what statistical properties the measurement system needs to have in order to be acceptable. In order to make that determination, it is important to know how the data are to be used, for without that knowledge, the appropriate statistical properties cannot be determined. After the statistical properties have been determined, the measurement system must be assessed to see if it actually possesses these properties or not.
Phase 1 & 2 – Understand the measurement process; does it satisfy the requirements?

Phase 1 testing verifies that the correct variable is being measured at the proper characteristic location per measurement system design, and whether there are any critical environmental issues that are interdependent with the measurement. Phase 1 could be used to evaluate the effect of the operating environment on the measurement system parameters (e.g., bias, linearity, repeatability and reproducibility). Phase 1 test results can indicate that the operating environment does not contribute significantly to the overall measurement system variation. Additionally, the variation attributable to the bias and linearity of the measurement device should be small compared with the repeatability and reproducibility components.
CHAPTER II – Section B
Selecting/Developing Test Procedures
“Any technique can be useful if its limitations are understood and observed.”28
be an integral part of the Phase 1 testing discussed in the previous section.
General issues to consider when selecting or developing an assessment procedure include:

• Should standards, such as those traceable to NIST, be used in the testing and, if so, what level of standard is appropriate? Standards are frequently essential for assessing the accuracy of a measurement system. If standards are not used, the variability of the measurement system can still be assessed, but it may not be possible to assess its accuracy with reasonable credibility, for example when trying to resolve an apparent difference between a producer's and a customer's measurement system.
28 W. Edwards Deming, The Logic of Evaluation, The Handbook of Evaluation Research, Vol. 1, Elmer L. Struening and Marcia Guttentag, Editors
29 The “Hawthorne Effect” refers to the outcomes of a series of industrial experiments performed at the
Hawthorne Works of Western Electric between November 1924 and August 1932. In the experiments, the
researchers systematically modified working conditions of five assemblers and monitored the results. As the
conditions improved, production rose. However, when working conditions were degraded, production continued
to improve. This was thought to be the results of the workers having developed a more positive attitude toward
the work solely as a result of them being part of the study, rather than as a result of the changed working
conditions. See A History of the Hawthorne Experiments, by Richard Gillespie, Cambridge University Press,
New York, 1991.
customers of a manufacturing process that, in effect, is not monitored due to a measurement system not performing properly.

In addition to these general issues, other issues that are specific to the particular measurement system being tested may also be important. Finding the specific issues that are important to a particular measurement system is one of the two objectives of the Phase 1 testing.
CHAPTER II – Section C
Preparation for a Measurement System Study
As in any study or analysis, sufficient planning and preparation ought to
be done prior to conducting a measurement system study. Typical
preparation prior to conducting the study is as follows:
1) The approach to be used should be planned. For instance,
determine by using engineering judgment, visual observations,
or a gage study, if there is an appraiser influence in calibrating or
using the instrument. There are some measurement systems
where the effect of reproducibility can be considered negligible; for example, when a button is pushed and a number is printed out.

2) The number of appraisers, number of sample parts, and number of repeat readings should be determined in advance. Some factors to be considered in this selection are:

(a) Criticality of dimension – critical dimensions require more parts and/or trials because of the degree of confidence desired for the gage study estimations.

samples (or standards) must be selected, but need not cover the entire range.
Samples can be selected by taking one sample per day for several days. Again, this is necessary because the parts will be treated in the analysis as if they represent the range of production variation in the process. Since each part will be measured several times, each part must be numbered for identification.

5) The instrument should have a discrimination that allows at least one-tenth of the expected process variation of the characteristic to be read directly. For example, if the characteristic's variation is 0.001, the equipment should be able to “read” a change of 0.0001.

Readings should be taken in a random order to ensure that any drift or changes that could occur will be spread randomly throughout the study.
measurement device should be included in the study.

Each appraiser should use the procedure – including all steps – they normally use to obtain readings. The effect of any differences between methods the appraisers use will be reflected in the Reproducibility of the measurement system.

• Is appraiser calibration of the measurement equipment likely to be a significant cause of variation? If so, the appraisers should recalibrate the equipment before each group of readings.
• How many sample parts and repeated readings are required?
CHAPTER II – Section D
Analysis of the Results
The results should be evaluated to determine if the measurement device
is acceptable for its intended application. A measurement system should
be stable before any additional analysis is valid.
Acceptability Criteria – Location Error

Location error is normally defined by analyzing bias and linearity.
In general, the bias or linearity error of a measurement system is unacceptable if it is significantly different from zero or exceeds the maximum permissible error established by the gage calibration procedure. In such cases, the measurement system should be recalibrated or an offset correction applied to minimize this error.

Acceptability Criteria – Width Error
The criteria as to whether a measurement system's variability is satisfactory are dependent upon the percentage of the manufacturing production process variability or the part tolerance that is consumed by measurement system variation. The final acceptance criteria for specific measurement systems depend on the measurement system's environment and purpose, and should be agreed to by the customer. A general rule of thumb for measurement system acceptability is as follows:

• Under 10 percent error – generally considered to be an acceptable measurement system.
• 10 percent to 30 percent error – may be acceptable based upon importance of application, cost of measurement device, cost of repair, etc.
• Over 30 percent error – considered to be not acceptable; every effort should be made to improve the measurement system.

Additionally, the number of distinct categories (ndc) that the measurement system can resolve in the process32 ought to be greater than or equal to 5.
32 See Chapter III, Section B, “Analysis of Results – Numerical”.
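As an illustration of applying these criteria, a minimal sketch using the number-of-distinct-categories formula commonly used with GRR studies (ndc = 1.41 · PV/GRR); the part-variation and GRR values below are invented:

```python
import math

def percent_grr(grr_std, total_variation_std):
    """%GRR = 100 * (GRR / TV)."""
    return 100 * grr_std / total_variation_std

def ndc(part_variation_std, grr_std):
    """Number of distinct categories: ndc = 1.41 * (PV / GRR),
    truncated to an integer; ndc >= 5 is the acceptance guideline."""
    return math.floor(1.41 * part_variation_std / grr_std)

# Illustrative: PV = 0.9, GRR = 0.3, TV = sqrt(PV^2 + GRR^2)
tv = math.sqrt(0.9**2 + 0.3**2)
print(round(percent_grr(0.3, tv), 1))  # percent of total variation consumed by GRR
print(ndc(0.9, 0.3))                   # compare against the >= 5 guideline
```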
Chapter III

RECOMMENDED PRACTICES FOR SIMPLE MEASUREMENT SYSTEMS
Chapter III – Section A
Example Test Procedures
…are due to the instrument (gage/equipment), person (appraiser), and method (measurement procedure). The test procedures in this chapter are sufficient for this type of measurement system analysis.

The procedures are appropriate to use when:
• Only two factors, or conditions of measurement (i.e., appraisers and parts) plus measurement system repeatability are being studied.
• The effect of the variability within each part is negligible.
• There is no statistical interaction between appraisers and parts.
• The parts do not change dimensionally during the study.
Chapter III – Section B
Variable Measurement System Study – Guidelines
GUIDELINES FOR DETERMINING STABILITY

Conducting the Study
1) Obtain a sample and establish its reference value(s) relative to a traceable standard. If one is not available, select a production part33 that falls in the mid-range of the production measurements and designate it as the master sample for stability analysis. The known reference value is not required for tracking measurement system stability.
It may be desirable to have master samples for the low end, the high end, and the mid-range of the expected measurements. Separate measurements and control charts are recommended for each.
2) On a periodic basis (daily, weekly), measure the master sample. The readings need to be taken at differing times to represent when the measurement system is actually being used. This will account for warm-up, ambient or other factors that may change during the day.
33
Caution should be taken where a production master could experience excessive wear due to use, material, and
handling. This may require modifying the production part, such as plating, to extend the life of the master.
Out-of-control or unstable conditions on the control charts signal the lack of measurement system stability.
Example – Stability
To determine if the stability of a new measurement instrument was acceptable, the process team selected a part near the middle of the range of the production process. This part was sent to the measurement lab to determine the reference value, which is 6.01. The team measured this part 5 times once a shift for four weeks (20 subgroups). After all the data were collected, X̄ & R charts were developed (see Figure 9).
(Figure 9: Control chart analysis of the stability study. X̄ chart: UCL = 6.297, grand mean = 6.021, LCL = 5.746; R chart: UCL = 1.010, R̄ = 0.4779, LCL = 0; 20 subgroups.)
34
See SPC Reference Manual.
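The control limits shown in Figure 9 can be reproduced with the usual X̄-R chart constants. A minimal sketch, assuming the standard constants for subgroup size 5 (A2 = 0.577, D3 = 0, D4 = 2.114) from SPC tables:

```python
# Sketch of the X-bar & R control-limit calculations behind the stability
# example.  Constants assume subgroup size n = 5 (A2 = 0.577, D3 = 0,
# D4 = 2.114, per standard SPC factor tables).
def xbar_r_limits(x_bar_bar, r_bar, a2=0.577, d3=0.0, d4=2.114):
    """Return ((LCL_x, UCL_x), (LCL_R, UCL_R)) for an X-bar & R chart."""
    return ((x_bar_bar - a2 * r_bar, x_bar_bar + a2 * r_bar),
            (d3 * r_bar, d4 * r_bar))

# Values from the stability example: grand mean 6.021, average range 0.4779.
x_lims, r_lims = xbar_r_limits(6.021, 0.4779)
print(x_lims, r_lims)   # close to the limits plotted in Figure 9
```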
It may be desirable to have master samples for the low end of the expected measurements, the high end, and the mid-range. If this is done, analyze the data using a linearity study.
2) Have a single appraiser measure the sample n ≥ 10 times in the normal manner.
Analysis of Results – Graphical
Compute the average of the n readings:

    X̄ = ( Σ_{i=1..n} x_i ) / n

Compute the repeatability standard deviation using the range method:

    σ_repeatability = ( max(x_i) − min(x_i) ) / d2*
35
See Chapter I, Section E, for an operational definition and discussion of potential causes.
Compute the standard error of the bias:

    σb = σ_repeatability / √n

Determine the t statistic for the bias:

    t = bias / σb

7) Bias is acceptable at the α level if zero falls within the 1−α confidence bounds around the bias value:

    Bias − (d2 σb / d2*) t(ν, 1−α/2)  ≤  zero  ≤  Bias + (d2 σb / d2*) t(ν, 1−α/2)

where d2, d2* and ν are found in Appendix C with g = 1 and m = n. The default α = .05 (95% confidence) is used unless another level is agreed to by the customer.
Example – Bias
36
This approach uses the average range to approximate the standard deviation of the measurement process.
Because the efficiency of this use of the range drops rapidly as the sample size increases, it is recommended that
if the total sample size exceeds 20, then either break the sample into multiple subgroups and use the control
chart approach, or use the root mean squared (RMS) calculation of the standard deviation with the traditional
one sample t test.
37
The uncertainty for bias is given by σ b .
Trial   Measured Value   Bias   (Reference Value = 6.00)
 1        5.8            -0.2
 2        5.7            -0.3
 3        5.9            -0.1
 4        5.9            -0.1
 5        6.0             0.0
 6        6.1             0.1
 7        6.0             0.0
 8        6.1             0.1
 9        6.4             0.4
10        6.3             0.3
11        6.0             0.0
12        6.1             0.1
13        6.2             0.2
14        5.6            -0.4
15        6.0             0.0

Table 2: Bias Study Data
(Figure: histogram of the 15 measured values; x-axis “Measured value,” y-axis “Frequency.”)
Since zero falls within the confidence interval of the bias (– 0.1185,
0.1319), the engineer can assume that the measurement bias is acceptable
assuming that the actual use will not introduce additional sources of
variation.
                 t statistic   df    Significant t value (2-tailed)   Bias    95% CI of the Bias (Lower, Upper)
Measured Value     .1153       10.8            2.206                  .0067        (-.1185, .1319)

Table 3: Bias Study – Analysis of Bias Study
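The Table 3 results can be reproduced in a few lines. A hedged sketch — the Appendix C constants for m = 15, g = 1 (d2* ≈ 3.55, d2 ≈ 3.472, ν ≈ 10.8) and the critical value t(10.8, .975) = 2.206 are taken as given here:

```python
# Sketch of the range-method bias analysis on the Table 2 data.
# Assumed Appendix C constants for m = 15, g = 1: d2* ~ 3.55, d2 ~ 3.472;
# the critical t value 2.206 (nu ~ 10.8, alpha = .05) is from Table 3.
readings = [5.8, 5.7, 5.9, 5.9, 6.0, 6.1, 6.0, 6.1, 6.4, 6.3,
            6.0, 6.1, 6.2, 5.6, 6.0]
reference = 6.00
d2_star, d2, t_crit = 3.55, 3.472, 2.206

n = len(readings)
bias = sum(readings) / n - reference                  # ~0.0067
sigma_r = (max(readings) - min(readings)) / d2_star   # repeatability
sigma_b = sigma_r / n ** 0.5                          # uncertainty of bias
t_stat = bias / sigma_b                               # ~0.115
half_width = (d2 / d2_star) * sigma_b * t_crit
ci = (bias - half_width, bias + half_width)           # ~(-0.119, 0.132)
print(round(bias, 4), round(t_stat, 3), ci)
```

Zero falls inside the interval, matching the manual's conclusion that the bias is acceptable.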
The data from a stability study X̄ & R chart can also be used to evaluate bias. The control chart analysis should indicate that the measurement system is stable before the bias is evaluated.
If a sample with a known reference value is not available, select a production part that falls in the mid-range of the production measurements and designate it as the master sample for bias analysis. Measure the part n ≥ 10 times in the tool room, and compute the average of the n readings; use this average as the reference value.
5) Compute the repeatability standard deviation using the average range:

    σ_repeatability = R̄ / d2*

where d2* is based on the subgroup size (m) and the number of subgroups in the chart (g) (see Appendix C).

Compute the standard error of the bias:

    σb = σ_repeatability / √g

Determine the t statistic for the bias:

    t = bias / σb

Bias is acceptable at the α level if zero falls within the 1−α confidence bounds around the bias value:

    Bias − (d2 σb / d2*) t(ν, 1−α/2)  ≤  zero  ≤  Bias + (d2 σb / d2*) t(ν, 1−α/2)

where d2, d2* and ν are found in Appendix C.
38
The uncertainty for bias is given by σb .
Example – Bias
Since zero falls within the confidence interval of the bias (-0.0800, 0.1020), the process team can assume that the measurement bias is acceptable, assuming that the actual use will not introduce additional sources of variation.

                   n     Mean, X̄   Standard Deviation, σr   Standard Error of Mean, σb
Measured Value    100     6.021          .2048                      .0458
• Instrument not calibrated properly. Review calibration procedure.
• Instrument used improperly by appraiser. Review measurement instructions.
• Instrument correction algorithm incorrect.

If the measurement system has non-zero bias, where possible it should be recalibrated to achieve zero bias through the modification of the hardware, software or both. If the bias cannot be adjusted to zero, it can still be used through a change in procedure (e.g., adjusting each reading by the bias). Since this has a high risk of appraiser error, it should be used only with the concurrence of the customer.
2) Have each part checked by layout inspection to determine its reference value and to confirm that the operating range of the gage is encompassed.
3) Have each part measured m ≥ 10 times on the subject gage by one of the operators who normally use the gage.
Select the parts at random to minimize appraiser “recall” bias in the measurements.

Analysis of Results – Graphical
4) Calculate the part bias for each measurement and the bias average for each part:

    bias̄_i = ( Σ_{j=1..m} bias_i,j ) / m

5) Plot the individual biases and the bias averages with respect to the reference values.
6) Calculate and plot the best fit line and the confidence band of the line using the following equations, where

    x_i = reference value
    y_i = bias average

and
39
See Chapter I, Section E, for an operational definition and discussion of potential causes.
    a = [ Σxy − (1/gm) Σx Σy ] / [ Σx² − (1/gm) (Σx)² ]   = slope

    b = ȳ − a x̄   = intercept

where

    s = sqrt{ [ Σy_i² − b Σy_i − a Σx_i y_i ] / (gm − 2) }

    lower: b + a x₀ − t(gm−2, 1−α/2) · s · [ 1/gm + (x₀ − x̄)² / Σ(x_i − x̄)² ]^(1/2)

    upper: b + a x₀ + t(gm−2, 1−α/2) · s · [ 1/gm + (x₀ − x̄)² / Σ(x_i − x̄)² ]^(1/2)

7) Plot the “bias = 0” line and review the graph for indications of special causes and the acceptability of the linearity. (See Figure 11.)
For the measurement system linearity to be acceptable, the “bias = 0” line must lie entirely within the confidence bands of the fitted line.

To test the hypothesis that the slope is zero,

    H0: a = 0   (slope = 0)

do not reject H0 if

    t = a / [ s / sqrt( Σ(x_j − x̄)² ) ]  ≤  t(gm−2, 1−α/2)
40
See the note on selecting the α level in the “Guidelines for Determining Bias” area in Chapter III, Section B.
A plant supervisor was introducing a new measurement system to the process. As part of the PPAP41 the linearity of the measurement system needed to be evaluated. Five parts were chosen throughout the operating range of the measurement system based upon documented process variation. Each part was measured by layout inspection to determine its reference value. Each part was then measured twelve times by the lead operator. The parts were selected at random during the study.
41
Production Parts Approval Process manual, 3rd Edition, 2000
Part                 1          2         3          4          5
Reference Value     2.00       4.00      6.00       8.00      10.00
Bias, trial 1        0.7        1.1      -0.2       -0.4       -0.9
      trial 2        0.5       -0.1      -0.3       -0.3       -0.7
      trial 3        0.4        0.2      -0.1       -0.2       -0.5
      trial 4        0.5        1.0      -0.1       -0.3       -0.7
      trial 5        0.7       -0.2       0.0       -0.2       -0.6
      trial 6        0.3       -0.1       0.1       -0.2       -0.5
      trial 7        0.5       -0.1       0.0       -0.2       -0.5
      trial 8        0.5       -0.1       0.1       -0.3       -0.5
      trial 9        0.4       -0.1       0.4       -0.2       -0.4
      trial 10       0.4        0.0       0.3       -0.5       -0.8
      trial 11       0.6        0.1       0.0       -0.4       -0.7
      trial 12       0.4       -0.2       0.1       -0.3       -0.6
BIAS Avg.         0.491667    0.125     0.025    -0.29167   -0.61667

Table 6: Linearity Study – Intermediate Results
(Linearity Example plot: bias vs. reference values 2–10, showing the bias averages, the regression line Y = 0.736667 − 0.131667X with R-Sq = 71.4%, its 95% confidence bands, and the “bias = 0” line.)

Figure 11: Linearity Study – Graphical Analysis
…paperwork is left unmarked, the supervisor calculates the t-statistic for the slope and intercept:

    ta = −12.043
    tb = 10.158

Taking the default α = .05 and going to the t-tables with (gm − 2) = 58 degrees of freedom and a proportion of .975, the supervisor comes up with the critical value of:

    t58, .975 = 2.00172

Since |ta| > t58, .975, the result obtained from the graphical analysis is confirmed – the measurement system has a linearity problem. In this case, it does not matter what relation tb has to t58, .975 since there is a linearity problem. Even though the measurement system may be measuring within its system range, it still can be used for product/process control but not analysis. Since this has a high risk of appraiser error, it should be used only with the concurrence of the customer.
42
See standard statistical texts on analysis of the appropriateness of using a linear model to describe the
relationship between two variables.
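The slope, intercept, and t statistics above can be reproduced from the Table 6 data with the least-squares formulas given in this section. A sketch:

```python
# Sketch of the linearity calculations on the Table 6 bias data
# (5 reference values x 12 trials, gm = 60 observations).
refs = [2.00, 4.00, 6.00, 8.00, 10.00]
bias = [
    [0.7, 0.5, 0.4, 0.5, 0.7, 0.3, 0.5, 0.5, 0.4, 0.4, 0.6, 0.4],
    [1.1, -0.1, 0.2, 1.0, -0.2, -0.1, -0.1, -0.1, -0.1, 0.0, 0.1, -0.2],
    [-0.2, -0.3, -0.1, -0.1, 0.0, 0.1, 0.0, 0.1, 0.4, 0.3, 0.0, 0.1],
    [-0.4, -0.3, -0.2, -0.3, -0.2, -0.2, -0.2, -0.3, -0.2, -0.5, -0.4, -0.3],
    [-0.9, -0.7, -0.5, -0.7, -0.6, -0.5, -0.5, -0.5, -0.4, -0.8, -0.7, -0.6],
]
x = [r for r, row in zip(refs, bias) for _ in row]   # reference per reading
y = [v for row in bias for v in row]                 # individual biases
gm = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(p * q for p, q in zip(x, y))
syy = sum(v * v for v in y)

a = (sxy - sx * sy / gm) / (sxx - sx ** 2 / gm)      # slope  ~ -0.131667
b = sy / gm - a * sx / gm                            # intercept ~ 0.736667
s = ((syy - b * sy - a * sxy) / (gm - 2)) ** 0.5     # residual std dev
sxx_c = sxx - sx ** 2 / gm                           # sum of (x - xbar)^2
t_a = a / (s / sxx_c ** 0.5)                         # ~ -12.04
t_b = b / (s * (1 / gm + (sx / gm) ** 2 / sxx_c) ** 0.5)   # ~ 10.16
print(round(a, 6), round(b, 6), round(t_a, 2), round(t_b, 2))
```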
…variation (such as roundness, diametric taper, flatness, etc., as discussed in Chapter V, Section A) in their analyses.
However, the total measurement system includes not only the gage itself and its related bias, repeatability, etc., but also could include the variation of the parts being checked. The determination of how to handle within-part variation needs to be based on a rational understanding of the intended use of the part and the purpose of the measurement.
Finally, all of the techniques in this section are subject to the prerequisite of statistical stability.
Range Method
The Range method is a modified variable gage study which will provide a quick approximation of measurement variability. This method will provide only the overall picture of the measurement system. It does not decompose the variability into repeatability and reproducibility.
43
See Chapter I, Section E, for an operational definition and discussion of potential causes.
44
i.e., %GRR > 30%
Parts   Appraiser A   Appraiser B   Range (A, B)
  1        0.85          0.80          0.05
  2        0.75          0.70          0.05
  3        1.00          0.95          0.05
  4        0.45          0.55          0.10
  5        0.60          0.50          0.10

    Average Range (R̄) = ΣRi / 5 = 0.35 / 5 = 0.07

    GRR = R̄ / d2* = 0.07 / 1.19 = 0.0588

    %GRR = 100 × [ GRR / Process Standard Deviation ] = 75.7%

Now that the %GRR for the measurement system is determined, an interpretation of the results should be made. In Table 7, the %GRR is determined to be 75.7% and the conclusion is that the measurement system is in need of improvement.
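The table's arithmetic takes only a few lines. A sketch — the process standard deviation of 0.0777 is an assumption here, inferred from the 75.7% result rather than stated in this excerpt:

```python
# Quick range-method GRR sketch for the 5-part, 2-appraiser example.
# d2* = 1.19 per the text; the process standard deviation (0.0777) is
# assumed, back-computed from the manual's 75.7% result.
a = [0.85, 0.75, 1.00, 0.45, 0.60]
b = [0.80, 0.70, 0.95, 0.55, 0.50]

ranges = [abs(x - y) for x, y in zip(a, b)]
r_bar = sum(ranges) / len(ranges)      # 0.07
grr = r_bar / 1.19                     # ~0.0588
process_sd = 0.0777                    # assumed from the example
pct_grr = 100 * grr / process_sd       # ~75.7%
print(round(grr, 4), round(pct_grr, 1))
```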
Conducting the Study
Although the number of appraisers, trials and parts may be varied, the subsequent discussion represents the optimum conditions for conducting the study. Refer to the GRR data sheet in Figure 12. The detailed procedure is as follows:
1) Obtain a sample of n > 5 parts46 that represent the actual or expected range of process variation.
2) Refer to the appraisers as A, B, C, etc. and number the parts 1 through n so that the numbers are not visible to the appraisers.
3) Have appraiser A measure the n parts in a random order47 and enter the results in row 1.
4) Let appraisers B and C measure the same n parts without seeing each other’s readings; then enter the results in rows 6 and 11, respectively.
5) Repeat the cycle using a different random order of measurement. Enter data in rows 2, 7 and 12. Record the data in the appropriate column. For example, if the first piece measured is part 7 then record the result in the column labeled part 7. If three trials are needed, repeat the cycle and enter data in rows 3, 8 and 13.
6) Steps 4 and 5 may be changed to the following when large part size or simultaneous unavailability of parts makes it necessary:
Let appraiser A measure the first part and record the reading in row 1. Let appraiser B measure the first part and record the reading in row 6. Let appraiser C measure the first part and record the reading in row 11.
45
The ANOVA method can be used to determine the interaction between the gage and appraisers, if such exists.
46
The total number of “ranges” generated ought to be > 15 for a minimal level of confidence in the results.
Although the form was designed with a maximum of 10 parts, this approach is not limited by that number. As
with any statistical technique, the larger the sample size, the less sampling variation and less resultant risk will
be present.
47
See Chapter III, Section B, “Randomization and Statistical Independence”
Let appraiser A repeat reading on the first part and record the
reading in row 2, appraiser B record the repeat reading in row 7,
and appraiser C record the repeat reading in row 12. Repeat this
cycle and enter the results in rows 3, 8, and 13, if three trials are
to be used.
7) An alternative method may be used if the appraisers are on different
shifts. Let appraiser A measure all 10 parts and enter the reading in
row 1. Then have appraiser A repeat the reading in a different order
and enter the results in rows 2 and 3. Do the same with appraisers B
and C.
                                                  PART
Appraiser/Trial #      1      2      3      4      5      6      7      8      9     10    AVERAGE
 1  A  1            0.29  -0.56   1.34   0.47  -0.80   0.02   0.59  -0.31   2.26  -1.36
 2     2            0.41  -0.68   1.17   0.50  -0.92  -0.11   0.75  -0.20   1.99  -1.25
 3     3            0.64  -0.58   1.27   0.64  -0.84  -0.21   0.66  -0.17   2.01  -1.31
 4     Average                                                                              X̄a =
 5     Range                                                                                R̄a =
 6  B  1            0.08  -0.47   1.19   0.01  -0.56  -0.20   0.47  -0.63   1.80  -1.68
 7     2            0.25  -1.22   0.94   1.03  -1.20   0.22   0.55   0.08   2.12  -1.62
 8     3            0.07  -0.68   1.34   0.20  -1.28   0.06   0.83  -0.34   2.19  -1.50
 9     Average                                                                              X̄b =
10     Range                                                                                R̄b =
11  C  1            0.04  -1.38   0.88   0.14  -1.46  -0.29   0.02  -0.46   1.77  -1.49
12     2           -0.11  -1.13   1.09   0.20  -1.07  -0.67   0.01  -0.56   1.45  -1.77
13     3           -0.15  -0.96   0.67   0.11  -1.45  -0.49   0.21  -0.49   1.87  -2.16
14     Average                                                                              X̄c =
15     Range                                                                                R̄c =
16     Part Average                                                                         X̿ =
                                                                                            Rp =
17     R̄ = ([R̄a = ] + [R̄b = ] + [R̄c = ]) / [# OF APPRAISERS = ] =                         R̄ =
18     X̄DIFF = [Max X̄ = ] − [Min X̄ = ] =
19     *UCLR = [R̄ = ] × [D4 = ] =

*D4 = 3.27 for 2 trials and 2.58 for 3 trials. UCLR represents the limit of individual R̄’s. Circle those that are beyond this limit. Identify the cause and correct. Repeat these readings using the same appraiser and unit as originally used or discard values and re-average and recompute R̄ and the limiting value from the remaining observations.
Notes:_____________________________________________________________________________________________________________

Figure 12: Gage Repeatability and Reproducibility Data Collection Sheet
Average Chart
The averages of the multiple readings by each appraiser on each part are plotted by appraiser with part number as an index. This can assist in determining consistency between appraisers.
The grand average and control limits determined by using the average range are also plotted. The resulting Average Chart provides an indication of “usability” of the measurement system.
The area within the control limits represents the measurement sensitivity (“noise”). Since the group of parts used in the study represents the process variation, approximately one half or more of the averages should fall outside the control limits. If the data show this pattern, then the measurement system should be adequate to detect part-to-part variation and can provide useful information for analyzing and controlling the process. If less than half fall outside the control limits, then either the measurement system lacks adequate effective resolution or the sample does not represent the expected process variation.
48
Detailed descriptions of these analyses are beyond the scope of this document. For more information, refer to
the references and seek assistance from competent statistical resources
O
e
s
U
l
Figure 13: Average Chart – “Stacked”49
Review of the charts indicates that the measurement system appears to have sufficient discrimination for processes with variation described by the sample parts. No appraiser-to-appraiser differences are readily apparent.
49
With the ANOVA approach, this is also referred to as appraiser-by-part interaction chart
If all appraisers have some out of control ranges, the measurement system is sensitive to appraiser technique and needs improvement to obtain useful data.
Neither chart should display patterns in the data relative to the appraisers or parts.
The ranges are not ordered data. Normal control chart trend analysis must not be used even if the plotted points are connected by lines.
Stability is determined by a point or points beyond the control limit; within-appraiser or within-part patterns. Analysis for stability ought … each part.
Review of the above charts indicates that there are differences between the variability of the appraisers.
Run Chart
The individual readings are plotted by part for all appraisers (see Figure 17) to gain insight into:
• The effect of individual parts on variation consistency
• Indication of outlier readings (i.e., abnormal readings)
o
2
C
r
1
to
Value
0
M
rd
-1
o
F
-2
f
o
part 1 2 3 4 5 6 7 8 9 10
y
rt
e
p
Review of the above chart does not indicate any outliers or inconsistent
parts.
(Figure: scatter plot of individual readings by part and appraiser; panels for parts 1–5 and 6–10, appraisers A, B, C; y-axis “Value” from −2 to 2.)
Whiskers Chart
In a Whiskers Chart, the high and low data values and the average by part-by-appraiser are plotted (see Figure 19). This provides insight into: …

(Figure 19: Whiskers Chart – high, low, and average values by part for each appraiser A, B, C; parts 1–10, y-axis from −3 to 3.)
(Figure 20: Error Charts – individual measurement errors by part and appraiser; panels for parts 1–5 and 6–10, appraisers A, B, C; y-axis “error” from −0.5 to 0.5.)
Normalized Histogram
The histogram plot (Figure 21) is a graph that displays the frequency distribution of the gage error of appraisers who participated in the study. It also shows their combined frequency distribution.
If the reference values are available:
    Error = Observed Value – Reference Value
Otherwise:
    Normalized Value = Observed Value – Part Average
The histogram plot provides a quick visual overview of how the error is distributed. Issues such as whether bias or lack of consistency exist in the measurements taken by the appraisers can be identified even before the data are analyzed.
Analysis of the histograms (Figure 21) indicates that appraisers A and C are introducing a systematic source of variation which is resulting in the biases. They also indicate that only appraiser …

(Figure 21: Normalized Histograms – frequency distribution of gage error for appraisers A, B, and C, with the Error = 0 line of each panel aligned.)
50
Note that the “0.0” of each of the histograms are aligned with respect to each other.
X-Y Plot of Averages by Size
The averages of the multiple readings by each appraiser on each part are plotted with the reference value or overall part averages as the index (see Figure 22). This plot can assist in determining:
• Linearity (if the reference value is used)
• Consistency in linearity between appraisers

(Figure 22: X–Y Plots of Averages by Size – appraiser averages vs. reference, one panel per appraiser A, B, C; both axes from −3 to 3.)

Comparison X-Y Plots
The averages of the multiple readings by each appraiser on each part are plotted against each other with the appraisers as indices. This plot compares the values obtained by one appraiser to those of another (see Figure 23). If there were perfect agreement between appraisers, the plotted points would describe a straight line through the origin and 45° to the axis.
Figure 23: Comparison X–Y Plots (pairwise scatter of the appraiser A, B, C averages; axes from −1.25 to 1.25)
NUMERICAL CALCULATIONS
The Gage Repeatability and Reproducibility calculations are shown in Figures 24 and 25. Figure 24 shows the data collection sheet on which all study results are recorded. Figure 25 displays a report sheet on which all identified values are recorded. The procedure for doing the calculations after data have been collected is as follows:
1) Subtract the smallest reading from the largest reading in rows 1, 2 and 3; enter the result in row 5. Do the same for rows 6, 7, and 8; and 11, 12, and 13 and enter results in rows 10 and 15, respectively.
2) Entries in rows 5, 10 and 15 are ranges and therefore always positive values.
3) Total row 5 and divide the total by the number of parts sampled to obtain the average range for the first appraiser’s trials R̄a. Do the same for rows 10 and 15 to obtain R̄b and R̄c.
…condition would have already been corrected and would not occur here.
7) Sum the rows (rows 1, 2, 3, 6, 7, 8, 11, 12, and 13). Divide the sum in each row by the number of parts sampled and enter these values in the right most column labeled “Average”.
8) Add the averages in rows 1, 2 and 3 and divide the total by the number of trials and enter the value in row 4 in the X̄a block. Repeat this for rows 6, 7 and 8; and 11, 12 and 13, and enter the results in the blocks for X̄b and X̄c in rows 9 and 14, respectively.
9) Enter the maximum and minimum averages of rows 4, 9 and 14 in the appropriate spaces in row 18 and determine the difference. Enter this difference in the space labeled X̄DIFF in row 18.
10) Sum the measurements for each trial, for each part, and divide the total by the number of measurements (number of trials times the number of appraisers). Enter the results in row 16 in the spaces provided for the part averages.
11) Subtract the smallest part average from the largest part average and enter the result in the space labeled Rp in row 16. Rp is the range of part averages.
51
See Statistical Process Control (SPC) Reference Manual, 1995, or other statistical reference source for a table
of factors.
                                                  PART
Appraiser/Trial #      1      2      3      4      5      6      7      8      9     10    AVERAGE
 1  A  1            0.29  -0.56   1.34   0.47  -0.80   0.02   0.59  -0.31   2.26  -1.36     0.194
 2     2            0.41  -0.68   1.17   0.50  -0.92  -0.11   0.75  -0.20   1.99  -1.25     0.166
 3     3            0.64  -0.58   1.27   0.64  -0.84  -0.21   0.66  -0.17   2.01  -1.31     0.211
 4     Average     0.447 -0.607  1.260  0.537 -0.853 -0.100  0.667 -0.227  2.087 -1.307   X̄a = 0.190
 5     Range        0.35   0.12   0.17   0.17   0.12   0.23   0.16   0.14   0.27   0.11   R̄a = 0.184
 6  B  1            0.08  -0.47   1.19   0.01  -0.56  -0.20   0.47  -0.63   1.80  -1.68     0.001
 7     2            0.25  -1.22   0.94   1.03  -1.20   0.22   0.55   0.08   2.12  -1.62     0.115
 8     3            0.07  -0.68   1.34   0.20  -1.28   0.06   0.83  -0.34   2.19  -1.50     0.089
 9     Average     0.133 -0.790  1.157  0.413 -1.013  0.027  0.617 -0.297  2.037 -1.600   X̄b = 0.068
10     Range        0.18   0.75   0.40   1.02   0.72   0.42   0.36   0.71   0.39   0.18   R̄b = 0.513
11  C  1            0.04  -1.38   0.88   0.14  -1.46  -0.29   0.02  -0.46   1.77  -1.49    -0.223
12     2           -0.11  -1.13   1.09   0.20  -1.07  -0.67   0.01  -0.56   1.45  -1.77    -0.256
13     3           -0.15  -0.96   0.67   0.11  -1.45  -0.49   0.21  -0.49   1.87  -2.16    -0.284
14     Average     0.073 -1.157  0.880  0.150 -1.327 -0.483  0.080 -0.503  1.697 -1.807   X̄c = -0.254
15     Range        0.19   0.42   0.42   0.09   0.39   0.38   0.20   0.10   0.42   0.67   R̄c = 0.328
16     Part
       Average     0.169 -0.851  1.099  0.367 -1.064 -0.186  0.454 -0.342  1.940 -1.571   X̿ = 0.00
                                                                                          Rp = 3.511
17     R̄ = ([R̄a = 0.184] + [R̄b = 0.513] + [R̄c = 0.328]) / [# OF APPRAISERS = 3] =      R̄ = 0.34
18     X̄DIFF = [Max X̄ = 0.190] − [Min X̄ = −0.254] = 0.4446
19     *UCLR = [R̄ = 0.34] × [D4 = 2.58] = 0.88

*D4 = 3.27 for 2 trials and 2.58 for 3 trials. UCLR represents the limit of individual R̄’s. Circle those that are beyond this limit. Identify the cause and correct. Repeat these readings using the same appraiser and unit as originally used or discard values and re-average and recompute R̄ and the limiting value from the remaining observations.
Notes:_____________________________________________________________________________________________________________

Figure 24: Completed GRR Data Collection Sheet
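The data-sheet summaries (rows 4/9/14, 5/10/15, 16, 17, 18 and 19) can be checked programmatically. A sketch over the raw readings above:

```python
# Sketch of the Figure 24 data-sheet summary calculations
# (10 parts, 3 appraisers A/B/C, 3 trials each; D4 = 2.58 for 3 trials).
data = {
 "A": [[0.29, -0.56, 1.34, 0.47, -0.80, 0.02, 0.59, -0.31, 2.26, -1.36],
       [0.41, -0.68, 1.17, 0.50, -0.92, -0.11, 0.75, -0.20, 1.99, -1.25],
       [0.64, -0.58, 1.27, 0.64, -0.84, -0.21, 0.66, -0.17, 2.01, -1.31]],
 "B": [[0.08, -0.47, 1.19, 0.01, -0.56, -0.20, 0.47, -0.63, 1.80, -1.68],
       [0.25, -1.22, 0.94, 1.03, -1.20, 0.22, 0.55, 0.08, 2.12, -1.62],
       [0.07, -0.68, 1.34, 0.20, -1.28, 0.06, 0.83, -0.34, 2.19, -1.50]],
 "C": [[0.04, -1.38, 0.88, 0.14, -1.46, -0.29, 0.02, -0.46, 1.77, -1.49],
       [-0.11, -1.13, 1.09, 0.20, -1.07, -0.67, 0.01, -0.56, 1.45, -1.77],
       [-0.15, -0.96, 0.67, 0.11, -1.45, -0.49, 0.21, -0.49, 1.87, -2.16]]}

n_parts = 10
r_bars, x_bars = {}, {}
for appr, trials in data.items():
    # per-part range across trials (rows 5/10/15), appraiser grand average
    ranges = [max(col) - min(col) for col in zip(*trials)]
    r_bars[appr] = sum(ranges) / n_parts
    x_bars[appr] = sum(sum(t) for t in trials) / (3 * n_parts)

r_bar = sum(r_bars.values()) / 3                       # row 17: ~0.342
x_diff = max(x_bars.values()) - min(x_bars.values())   # row 18: ~0.4446
part_avgs = [sum(data[a][t][p] for a in data for t in range(3)) / 9
             for p in range(n_parts)]
r_p = max(part_avgs) - min(part_avgs)                  # row 16: Rp ~3.511
ucl_r = r_bar * 2.58                                   # row 19
print(round(r_bar, 4), round(x_diff, 4), round(r_p, 3), round(ucl_r, 3))
```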
Repeatability – Equipment Variation (EV)
    EV = R̄ × K1 = 0.34167 × 0.5908 = 0.20188            Trials   K1
    %EV = 100 [EV/TV]                                       2    0.8862
        = 100 [0.20188/1.14610] = 17.62%                    3    0.5908

Reproducibility – Appraiser Variation (AV)
    AV = sqrt( (X̄DIFF × K2)² − EV²/(nr) )
       = sqrt( (0.4446 × 0.5231)² − 0.20188²/(10 × 3) )   Appraisers   K2
       = 0.22963                                               2     0.7071
    %AV = 100 [AV/TV]                                          3     0.5231
        = 100 [0.22963/1.14610] = 20.04%
    n = parts,  r = trials

Repeatability & Reproducibility (GRR)
    GRR = sqrt( EV² + AV² )                                 Parts   K3
        = sqrt( 0.20188² + 0.22963² ) = 0.30575               2    0.7071
    %GRR = 100 [GRR/TV]                                       3    0.5231
         = 100 [0.30575/1.14610] = 26.68%                     4    0.4467
                                                              5    0.4030
Part Variation (PV)                                           …
    PV = Rp × K3 = 1.10456                                    8    0.3375
    %PV = 100 [PV/TV] = 100 [1.10456/1.14610] = 96.38%       10    0.3146

Total Variation (TV)
    TV = sqrt( GRR² + PV² ) = 1.14610
The repeatability or equipment variation (EV or σE) is determined by multiplying the average range (R̄) by a constant (K1). K1 depends upon the number of trials used in the gage study and is equal to the inverse of d2*, which is obtained from Appendix C. d2* is dependent on the number of trials (m) and the number of parts times the number of appraisers (g) (assumed to be greater than 15 for calculating the value of K1).
The reproducibility or appraiser variation (AV or σA) is determined by multiplying the maximum average appraiser difference (X̄DIFF) by a constant (K2). K2 depends upon the number of appraisers used in the gage study. Since the appraiser variation is contaminated by the equipment variation, it must be adjusted by subtracting a fraction of the equipment variation. Therefore, the appraiser variation (AV) is calculated by

    AV = sqrt( (X̄DIFF × K2)² − (EV)²/(nr) )

where n = number of parts and r = number of trials.
If a negative value is calculated under the square root sign, the appraiser variation (AV) defaults to zero.
The measurement system variation for repeatability and reproducibility (GRR or R&R) is calculated as

    R&R = sqrt( (EV)² + (AV)² )
52
The numerical results in the example were developed as if they were computed manually; i.e., the results were
carried and rounded to one additional decimal digit. Analysis by computer programs should maintain the
intermediate values to the maximum precision of the computer/programming language. The results from a valid
computer program may differ from the example results in the second or greater decimal place but the final
analysis will remain the same.
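The report-side calculations can be sketched directly from the data-sheet summary statistics, using the K constants shown in this section (K1 = 0.5908 for 3 trials, K2 = 0.5231 for 3 appraisers, K3 = 0.3146 for 10 parts):

```python
# Sketch of the Average & Range method report calculations for the
# example study (10 parts, 3 appraisers, 3 trials).
r_bar, x_diff, r_p = 0.34167, 0.4446, 3.511
n, r = 10, 3                                  # parts, trials
k1, k2, k3 = 0.5908, 0.5231, 0.3146

ev = r_bar * k1                               # equipment variation ~0.2019
av_sq = (x_diff * k2) ** 2 - ev ** 2 / (n * r)
av = max(av_sq, 0.0) ** 0.5                   # appraiser variation ~0.2296
grr = (ev ** 2 + av ** 2) ** 0.5              # ~0.3058
pv = r_p * k3                                 # part variation ~1.1046
tv = (grr ** 2 + pv ** 2) ** 0.5              # total variation ~1.1461
ndc = int(1.41 * pv / grr)                    # distinct categories, truncated
print(round(100 * grr / tv, 1), ndc)          # %GRR ~26.7
```

Note the `max(..., 0.0)` guard: as the text says, a negative value under the square root defaults AV to zero.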
    TV = sqrt( (GRR)² + (PV)² )

If the process variation is known and its value is based on 6σ, then it can be used in place of the total study variation (TV) calculated from the gage study data. This is accomplished by performing the following two calculations:

    1) TV = (process variation) / 6.00

    2) PV = sqrt( (TV)² − (GRR)² )

Both of these values (TV and PV) would replace those previously calculated.
Once the variability for each factor in the gage study is determined, it can be compared to the total variation (TV). This is done by performing the calculations on the right side of the gage report form, as follows:
Given that the graphical analysis has not indicated any special cause variation, the rule of thumb for gage repeatability and reproducibility (%GRR) may be found in Chapter II, Section D.
In addition, the ndc is truncated to the integer and ought to be greater than or equal to 5.
ANALYSIS OF VARIANCE (ANOVA) METHOD
Analysis of variance (ANOVA) is a standard statistical technique and can be used to analyze the measurement error and other sources of variability of data in a measurement systems study. In the analysis of variance, the variance can be decomposed into four categories: parts, appraisers, interaction between parts and appraisers, and replication error due to the gage.
for the measurement by the first appraiser on the nth part. Follow the
same procedure for the next appraiser up to and including the kth
appraiser. The similar notation will be used where B1, C1 denotes the
measurement for second and third appraiser on the first part. Once all nk
combinations are written, then the slips of paper can be put in a hat or
bowl. One at a time, a slip of paper is selected. These combinations (A1,
B2, ...) are the measuring order in which the gage study will be
performed. Once all nk combinations are selected, they are put back into
the hat and the procedure is followed again. This is done for a total of r
times to determine the order of experiments for each repeat.
There are alternate approaches to generate a random sample. Care
should be exercised to differentiate among random, haphazard and
convenience sampling.54
In general, all efforts need to be taken to assure statistical independence within the study.
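The slips-of-paper procedure described above is easy to script: enumerate all n×k part–appraiser combinations, shuffle them once per repeat. A sketch (the part and appraiser labels are illustrative):

```python
import random

# Sketch of the randomization procedure described above: for each of
# r repeats, independently shuffle all n*k part-appraiser combinations.
def measurement_order(n_parts, appraisers, r_trials, seed=None):
    rng = random.Random(seed)
    combos = [(a, p) for a in appraisers for p in range(1, n_parts + 1)]
    order = []
    for _ in range(r_trials):      # one full shuffle per repeat
        trial = combos[:]
        rng.shuffle(trial)
        order.append(trial)
    return order

plan = measurement_order(10, ["A", "B", "C"], 2, seed=1)
print(plan[0][:3])   # first three (appraiser, part) measurements of repeat 1
```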
Conducting the Study
The data can be collected in a random manner using a form similar to Figure 12. For our example, there are ten parts and three appraisers, and the experiment has been performed in random order three times for each part and appraiser combination.
Graphical Analysis
Any of the graphical methods given in the discussion of Graphical Analysis above can be used in the graphical analysis of the data collected in the ANOVA study. These methods can be used to confirm and provide further insight to the data.
One graphical method that is suggested is called an interaction plot. This plot confirms the results of the F test on whether or not the interaction is significant. In this plot, the average measurement per appraiser per part vs. part number (1, 2, … etc.) is graphed in Figure 26. The points for each appraiser average measurement per part are connected to form k lines. The way to interpret the graph is if the k lines are parallel there is no interaction term. When the lines are nonparallel, the interaction can be significant. The larger the departure from parallel, the greater the interaction.
54
See Wheeler and Lyday, Evaluating the Measurement Process, Second Edition, 1989, p. 27.
Figure 26: Interaction Plot
Another graph sometimes of interest is the residuals plot. This graph is more a check for the validity of the assumptions. The assumption is that the gage (error) is a random variable from a normal distribution. The residuals, which are the differences between the observed readings and the predicted values, are plotted; the predicted value is the average of the repeated readings for each appraiser for each part. If the residuals are not randomly scattered above and below zero, the assumptions may be incorrect and further investigation of the data is suggested.

(Figure: residuals vs. fitted value plot, response is “value”; residuals roughly between −0.5 and 0.5, fitted values between −2 and 2.)
Numerical Calculations

Although the values can be calculated manually, most people will use a
computer program to generate what is called the Analysis of Variance
(ANOVA) table (see Appendix A).

The ANOVA table here is composed of five columns (see Table 8):

• Source column is the cause of variation.
• DF column is the degrees of freedom associated with the source.
• SS or sum of squares column is the deviation around the mean of
the source.
• MS or mean square column is the sum of squares divided by
degrees of freedom.
• F-ratio column, calculated to determine the statistical
significance of the source value.

The ANOVA table is used to decompose the total variation into four
components: parts, appraisers, interaction of appraisers and parts, and
repeatability due to the instrument.

For analysis purposes, negative variance components are set to zero.
This information is used to determine the measurement system
characteristics, as in the Average and Range Method.

Table 8 shows the ANOVA calculations for the example data. Table 10
compares the ANOVA method with the Average and Range method, and
Table 11 presents the results in report form.

Source    DF    SS         MS    F
Total     89    94.6471

* Significant at α = 0.05 level

Table 8: ANOVA Table
Source            Estimate of Variance       Std. Dev. (σ)     % Total Variation   % Contribution
Interaction       γ² = 0                     INT = 0                  0                  0
System            τ² + γ² + ω² = 0.09143     GRR = 0.302373          27.9                7.8
Part              σ² = 1.086446              PV = 1.042327           96.0               92.2
Total Variation                              TV = 1.085             100.0

Table 9: ANOVA Analysis % Variation & Contribution
(Estimate of variance is based on model without interaction)

ndc = 1.41 (PV/GRR) = 1.41 (1.04233/0.30237) = 4.861 ≅ 4

% of Total Variation = 100 [σ(components) / σ(total)]

% Contribution = 100 [σ²(components) / σ²(total)]
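The summary statistics above can be cross-checked numerically. The following sketch (Python) recomputes the total variation, the two percentage measures, and ndc from the example's GRR and PV standard deviations:

```python
import math

# Sketch: the summary statistics above, recomputed from the example's
# GRR and PV standard deviations.
grr, pv = 0.302373, 1.042327

tv = math.sqrt(grr**2 + pv**2)          # total variation
pct_tv_grr = 100 * grr / tv             # % of total variation (GRR)
pct_contrib_grr = 100 * grr**2 / tv**2  # % contribution (variance based)
ndc = 1.41 * pv / grr                   # number of distinct categories

print(round(tv, 3), round(pct_tv_grr, 1), round(pct_contrib_grr, 1), int(ndc))
# 1.085 27.9 7.8 4
```

The values agree with Table 9 and the ndc calculation shown above.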
                      ANOVA
Source        Lower CL   Std. Dev.   Upper CL   % Total Variation
EV             0.177       0.200      0.231          18.4
AV             0.129       0.227      1.001          20.9
INTERACTION     --         0           --             0
GRR            0.237       0.302      1.033          27.9
PV                         1.042                     96.0

* In the average and range method, the interaction component cannot be estimated.

Table 10: Comparison of ANOVA and Average and Range Methods
55
CL = Confidence Limit
If repeatability is large compared to reproducibility, possible causes
could be:

• The gage may need to be redesigned to be more rigid.
• The clamping or location for gaging needs to be improved.
• There is excessive within-part variation.

If reproducibility is large compared to repeatability, then possible causes
could be:

• The appraiser needs to be better trained in how to use and read the
gage instrument.
• A fixture of some sort may be needed to help the appraiser use the
gage more consistently.
Chapter III – Section C
Attribute Measurement Systems Study
This section discusses the study of such systems.

As discussed in Chapter I, Section G, there is a quantifiable risk when
using any measurement system in making decisions. Since the largest
risk is at the category boundaries, the most appropriate analysis would be
the quantification of the measurement system variation with a gage
performance curve.
RISK ANALYSIS METHODS

These techniques should be used only with the consent of the customer.
Selection and use of such techniques should be based on good statistical
practices and an understanding of the potential sources of variation
which can affect the product and measurement processes.

Scenario

The production process is in statistical control and has the performance
indices of Pp = Ppk = 0.5, which is unacceptable. Because the process is
producing nonconforming product, a containment action is required to
cull the unacceptable parts from the production stream.

Possible Approaches

56
This includes the comparison of multiple appraisers.
57
See Reference List.
Figure 28: Example Process (distribution shown relative to LSL and USL)
For the containment activity the process team selected an attribute gage
that compares each part to a specific set of limits and accepts the part if
the limits are satisfied; otherwise it rejects the part (known as a Go/NoGo
gage). Most gages of this type are set up to accept and reject based on a
set of master parts. Unlike a variable gage, this attribute gage cannot
indicate how good or how bad a part is, but only that the part is accepted
or rejected (i.e., 2 categories).
Figure 29: The “Gray” Areas Associated with the Measurement System
The specific gage the team is using has a %GRR = 25% of the
tolerance.58 Since this has not yet been documented by the team, it needs
to study the measurement system. The team has decided to take a random
sample of fifty parts from the process in order to obtain parts throughout
the process range. Three appraisers are used, with each appraiser making
three decisions on each part.

58
Since the process variation is larger than the tolerance, it is appropriate to compare the measurement system to
the tolerance rather than to the process variation.
Part A-1 A-2 A-3 B-1 B-2 B-3 C-1 C-2 C-3 Reference Ref Value Code
1  1 1 1 1 1 1 1 1 1 1 0.476901 +
2  1 1 1 1 1 1 1 1 1 1 0.509015 +
3  0 0 0 0 0 0 0 0 0 0 0.576459 –
4  0 0 0 0 0 0 0 0 0 0 0.566152 –
5  0 0 0 0 0 0 0 0 0 0 0.570360 –
6  1 1 0 1 1 0 1 0 0 1 0.544951 x
7  1 1 1 1 1 1 1 0 1 1 0.465454 x
8  1 1 1 1 1 1 1 1 1 1 0.502295 +
9  0 0 0 0 0 0 0 0 0 0 0.437817 –
10 1 1 1 1 1 1 1 1 1 1 0.515573 +
11 1 1 1 1 1 1 1 1 1 1 0.488905 +
12 0 0 0 0 0 0 0 1 0 0 0.559918 x
13 1 1 1 1 1 1 1 1 1 1 0.542704 +
14 1 1 0 1 1 1 1 0 0 1 0.454518 x
15 1 1 1 1 1 1 1 1 1 1 0.517377 +
16 1 1 1 1 1 1 1 1 1 1 0.531939 +
17 1 1 1 1 1 1 1 1 1 1 0.519694 +
18 1 1 1 1 1 1 1 1 1 1 0.484167 +
19 1 1 1 1 1 1 1 1 1 1 0.520496 +
20 1 1 1 1 1 1 1 1 1 1 0.477236 +
21 1 1 0 1 0 1 0 1 0 1 0.452310 x
22 0 0 1 0 1 0 1 1 0 0 0.545604 x
23 1 1 1 1 1 1 1 1 1 1 0.529065 +
24 1 1 1 1 1 1 1 1 1 1 0.514192 +
25 0 0 0 0 0 0 0 0 0 0 0.599581 –
26 0 1 0 0 0 0 0 0 1 0 0.547204 x
27 1 1 1 1 1 1 1 1 1 1 0.502436 +
28 1 1 1 1 1 1 1 1 1 1 0.521642 +
29 1 1 1 1 1 1 1 1 1 1 0.523754 +
30 0 0 0 0 0 1 0 0 0 0 0.561457 x
31 1 1 1 1 1 1 1 1 1 1 0.503091 +
32 1 1 1 1 1 1 1 1 1 1 0.505850 +
33 1 1 1 1 1 1 1 1 1 1 0.487613 +
34 0 0 1 0 0 1 0 1 1 0 0.449696 x
35 1 1 1 1 1 1 1 1 1 1 0.498698 +
36 1 1 0 1 1 1 1 0 1 1 0.543077 x
37 0 0 0 0 0 0 0 0 0 0 0.409238 –
38 1 1 1 1 1 1 1 1 1 1 0.488184 +
39 0 0 0 0 0 0 0 0 0 0 0.427687 –
40 1 1 1 1 1 1 1 1 1 1 0.501132 +
41 1 1 1 1 1 1 1 1 1 1 0.513779 +
42 0 0 0 0 0 0 0 0 0 0 0.566575 –
43 1 0 1 1 1 1 1 1 0 1 0.462410 x
44 1 1 1 1 1 1 1 1 1 1 0.470832 +
45 0 0 0 0 0 0 0 0 0 0 0.412453 –
46 1 1 1 1 1 1 1 1 1 1 0.493441 +
47 1 1 1 1 1 1 1 1 1 1 0.486379 +
48 0 0 0 0 0 0 0 0 0 0 0.587893 –
49 1 1 1 1 1 1 1 1 1 1 0.483803 +
50 0 0 0 0 0 0 0 0 0 0 0.446697 –
A * B Crosstabulation
                                 B
                            .00      1.00     Total
A      .00   Count           44        6        50
             Expected Count  15.7     34.3      50.0
       1.00  Count            3       97       100
             Expected Count  31.3     68.7     100.0
Total        Count           47      103       150
             Expected Count  47.0    103.0     150.0

B * C Crosstabulation
                                 C
                            .00      1.00     Total
B      .00   Count           42        5        47

A * C Crosstabulation
                                 C
                            .00      1.00     Total
A      .00   Count           43        7        50
             Expected Count  17.0     33.0      50.0
       1.00  Count            8       92       100
             Expected Count  34.0     66.0     100.0
Total        Count           51       99       150
             Expected Count  51.0     99.0     150.0
Let po = the sum of the observed proportions in the diagonal cells
and pe = the sum of the expected proportions in the diagonal cells;
then

kappa = (po − pe) / (1 − pe)

Kappa is a measure rather than a test.59 Its size is judged by using an
asymptotic standard error to construct a t statistic. A general rule of
thumb is that values of kappa greater than 0.75 indicate good to excellent
agreement (with a maximum kappa = 1); values less than 0.40 indicate
poor agreement.
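As a minimal sketch, kappa can be computed directly from a cross-tabulation. The function below (Python; the function name is illustrative) reproduces the appraiser A vs. appraiser B value from the counts shown earlier:

```python
# Sketch: Cohen's kappa computed directly from a cross-tabulation.
# The counts are the appraiser A vs. appraiser B table shown above.
def kappa_from_crosstab(table):
    """table[i][j] = number of decisions with rater 1 = i and rater 2 = j."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    # Observed and expected proportions summed over the diagonal cells
    po = sum(table[i][i] for i in range(len(table))) / n
    pe = sum(row_tot[i] * col_tot[i] / n for i in range(len(table))) / n
    return (po - pe) / (1 - pe)

a_vs_b = [[44, 6],   # A = .00 row: B = .00, B = 1.00
          [3, 97]]   # A = 1.00 row
print(round(kappa_from_crosstab(a_vs_b), 2))  # 0.86, matching the kappa table
```

The expected counts never need to be tabulated separately; they follow from the row and column totals.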
Upon calculating the kappa measures for the appraisers, the team came
up with the following table:

Kappa    A      B      C
A        –     .86    .78
B       .86     –     .79
C       .78    .79     –
This analysis indicates that all the appraisers show good agreement
between each other.
This analysis is necessary to determine if there are any differences
among the appraisers, but it does not tell us how well the measurement
system sorts good parts from bad. For this analysis the team had the parts
evaluated using a variable measurement system and used the results to
determine the reference decision.

59
As in all such categorical evaluations, a large number of parts covering the entire spectrum of possibilities is
necessary.

60
When the observations are measured on an ordinal categorical scale, a weighted kappa can be used to better
measure agreement. Agreement between the two raters is treated as for kappa, but disagreements are measured
by the number of categories by which the raters differ.
With this new information another group of cross-tabulations was
developed comparing each appraiser to the reference decision.
A * REF Crosstabulation
                                REF
                            .00      1.00     Total
A      .00   Count           45        5        50
             Expected Count  16.0     34.0      50.0
       1.00  Count            3       97       100
             Expected Count  32.0     68.0     100.0
Total        Count           48      102       150
             Expected Count  48.0    102.0     150.0

B * REF Crosstabulation
                                REF
                            .00      1.00     Total
B      .00   Count           45        2        47

C * REF Crosstabulation
                                REF
                            .00      1.00     Total
C      .00   Count           42        9        51
       1.00  Count            6       93        99
The team also calculated the kappa measure to determine the agreement
of each appraiser to the reference decision:
A B C
kappa .88 .92 .77
These values can be interpreted as showing that each appraiser has good
agreement with the standard.
The process team then calculated the effectiveness of the measurement
system.
Effectiveness = (number of correct decisions) / (total opportunities for a decision)
% Appraiser1                                                % Score vs Attribute2

Source              Appraiser A  Appraiser B  Appraiser C   Appraiser A  Appraiser B  Appraiser C
Total Inspected          50           50           50            50           50           50
# Matched                42           45           40            42           45           40
False Negative (appraiser biased toward rejection)                0            0            0
False Positive (appraiser biased toward acceptance)               0            0            0
Mixed                                                             8            5           10
95% UCI                 93%          97%          90%           93%          97%          90%

Total Inspected          50      50
# in Agreement           39      39
95% UCI                 64%     64%

Notes
Since the calculated score of each appraiser falls within the confidence
interval of the others, the team concludes that they cannot reject the null
hypothesis. This reinforces the conclusions from the kappa measures.
For further analysis, one of the team members brought out the following
table, which provides guidelines for each appraiser's results:
Decision                                 Effectiveness   Miss Rate   False Alarm Rate
Acceptable for the appraiser                 ≥ 90%          ≤ 2%          ≤ 5%
Marginally acceptable for the
appraiser – may need improvement             ≥ 80%          ≤ 5%          ≤ 10%
Unacceptable for the appraiser
– needs improvement                          < 80%          > 5%          > 10%
Summarizing all the information they already had, the team came up
with this table:

      Effectiveness   Miss Rate   False Alarm Rate
A         84%            5%              8%
B         90%            2%              4%
C         80%            9%             15%
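As a cross-check, the effectiveness column can be recomputed from the "# Matched" counts shown earlier (a minimal sketch; "matched" here means the appraiser agreed with the reference on every trial for a part):

```python
# Sketch: the effectiveness figures recomputed from the "# Matched"
# counts in the study (matched = the appraiser agreed with the
# reference decision on all trials for that part).
matched = {"A": 42, "B": 45, "C": 40}
inspected = 50
effectiveness = {k: n / inspected for k, n in matched.items()}
print({k: f"{v:.0%}" for k, v in effectiveness.items()})
# {'A': '84%', 'B': '90%', 'C': '80%'}
```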
The team decided to go with these results since they were tired of all
this analysis and these conclusions were at least justifiable since they
found the table on the web.
Concerns
1) There is no theory-based decision criteria on acceptable risk. The
above guidelines are heuristic and developed based on individual
“beliefs” about what will pass as “acceptable”. The final decision
criteria should be based on the impact (i.e., risk) to the remaining
process and final customer. This is a subject matter decision – not a
statistical one.
Figure 30: Example Process with Pp = Ppk = 1.33

With this new situation, it would be concluded that all the
appraisers were acceptable since there would be no decision
errors.
3) There is usually a misconception of what the cross-tabulation results
really mean. For example, appraiser B results are:

B * REF Crosstabulation
                           REF
                      .00    1.00    Total
B    .00   Count       45      2       47
parts, most people look at the upper left corner as a measure of the
ability of the system to identify bad parts. However, the probability that
a part called bad is truly bad also depends on the proportion of bad parts
in the process:

Pr(bad | called bad) = [.938 × (.0027)] / [.938 × (.0027) + .020 × (.9973)]

That is, these results indicate that if the part is called bad there is only
a 1 out of 10 chance that it is truly bad.
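The arithmetic behind this statement is Bayes' rule. A sketch, using the rates from the example (.938 and .020 as the conditional probabilities of calling a part bad, .0027 as the proportion of truly bad parts):

```python
# Sketch: Bayes' rule behind the "1 out of 10" result. With only 0.27%
# truly bad parts, a reject signal is usually a false alarm even for a
# good appraiser.
p_bad = 0.0027                  # proportion of truly bad parts
p_call_bad_given_bad = 0.938    # chance a truly bad part is called bad
p_call_bad_given_good = 0.020   # chance a truly good part is called bad

p_bad_given_reject = (p_call_bad_given_bad * p_bad) / (
    p_call_bad_given_bad * p_bad + p_call_bad_given_good * (1 - p_bad))
print(round(p_bad_given_reject, 2))  # 0.11 -> about 1 chance in 10
```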
4) The analysis does not use the variable data information or even the
relative order information which was available when the reference
decision values were determined.
Signal Detection Approach

An alternate approach is to use Signal Detection theory to determine an
approximation of the width of the region II area and, from this, the
measurement system GRR.

Let di = the distance between the last part accepted by all appraisers and
the first part rejected by all (for each limit), and let d = average (di).61

The reference values and codes from the study, sorted in descending
order, include:

Ref Value   Code      Ref Value   Code
0.599581     –        0.503091     +
0.587893     –        0.502436     +
0.576459     –        0.502295     +
0.570360     –        0.501132     +
0.566575     –        0.498698     +
0.547204     x        0.487613     +
0.543077     x        0.483803     +
0.531939     +        0.476901     +
0.529065     +        0.470832     +
0.521642     +        0.462410     x

61
The "goodness" of this estimate depends on the sample size and how close the sample represents the process.
The larger the sample, the better the estimate.
ANALYTIC METHOD62
As with any measurement system, the stability of the process should be
verified and, if necessary, monitored. For attribute measurement systems,
attribute control charting of a constant sample over time is a common
way of verifying stability63.
For an attribute measurement system, the concept of the Gage
Performance Curve (see Chapter V, Section C) is used for developing a
measurement system study, which is used to assess the amount of
repeatability and bias of the measurement system. This analysis can be
used on both single and double limit measurement systems. For a double
limit measurement system, only one limit need be examined with the
assumptions of linearity and uniformity of error. For convenience, the
lower limit will be used for discussion.

In general, the attribute measurement system study consists of obtaining
the reference values for several selected parts. These parts are evaluated
a number of times, (m), with the total number of accepts (a) for each
part being recorded. From the results, repeatability and bias can be
assessed.
The first stage of the attribute study is the part selection. It is essential
that the reference value be known for each part used in the study. Eight
parts should be selected at as nearly equidistant intervals as practicable.
The maximum and minimum values should represent the process range.
Although this selection does not affect confidence in the results, it does
affect the total number of parts needed to complete the gage study. The
eight parts must be run through the gage, m = 20 times, and the number
of accepts (a) recorded.

For the total study, the smallest part must have the value a = 0; the
largest part, a = 20; and the six other parts, 1 ≤ a ≤ 19. If these criteria are
not satisfied, more parts with known reference values, (X), must be run
through the gage until the above conditions are met. If, for the smallest
value, a ≠ 0, then smaller and smaller parts are taken and evaluated until
a = 0. If, for the largest value, a ≠ 20, then larger and larger parts are
taken until a = 20. If six of the parts do not have 1 ≤ a ≤ 19, additional
parts can be taken at selected points throughout the range; these points
are taken at the midpoints of the intervals already measured.
62
Adapted with permission from “Analysis of Attribute Gage Systems” by J. McCaslin & G. Gruska, ASQC,
1976.
63
Caveat: np > 4
Once the data collection criteria have been satisfied, the probabilities of
acceptance must be calculated for each part using the following
equations:

Pa′ = (a + 0.5)/m   if a/m < 0.5, a ≠ 0
Pa′ = (a − 0.5)/m   if a/m > 0.5, a ≠ 20
Pa′ = 0.5           if a/m = 0.5

The adjustments cover the instances where 1 ≤ a ≤ 19. For the instances
where a = 0, set Pa′ = 0 except for the largest reference value with a = 0,
in which case Pa′ = 0.025. For the instances where a = 20, Pa′ = 1 except
for the smallest reference value with a = 20, in which case Pa′ = 0.975.
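These adjustments can be expressed as a small function. The sketch below (Python; only the 1 ≤ a ≤ 19 cases are handled, since the endpoint parts follow the separate 0.025/0.975 rule) reproduces the Pa′ values used later in the example:

```python
# Sketch: the adjusted probability of acceptance P'a for parts with
# 1 <= a <= m - 1 (m = 20 trials). Endpoint parts (a = 0 or a = m)
# follow the separate 0.025 / 0.975 rule described in the text.
def p_a_adjusted(a, m=20):
    frac = a / m
    if frac < 0.5:
        return (a + 0.5) / m
    if frac > 0.5:
        return (a - 0.5) / m
    return 0.5

# Reproduces the example's table for a = 1, 3, 5, 8, 16, 18:
for a in (1, 3, 5, 8, 16, 18):
    print(a, p_a_adjusted(a))  # 0.075 0.175 0.275 0.425 0.775 0.875
```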
Once Pa′ has been calculated for each XT, the Gage Performance
Curve (GPC) can be developed. Although the GPC can be presented
graphically (see Figure 32), use of normal probability paper (see Figure
31) yields more accurate estimates of the repeatability and bias.

The calculated probabilities are plotted on normal probability paper and a
line of best fit is drawn through these points. The bias is equal to the
difference between the limit and the reference value measurement at which
Pa′ = 0.5, or

bias = lower limit − XT (at Pa′ = 0.5)

The repeatability is found from the difference between the reference
values corresponding to Pa′ = 0.995 and Pa′ = 0.005, divided by the
adjustment factor 1.08:64

R = [XT (at Pa′ = 0.995) − XT (at Pa′ = 0.005)] / 1.08

To determine if the bias is significantly different from zero, the following
statistic is used:

t = (31.3 × Bias) / Repeatability

If this calculated value is greater than 2.093 (t.025,19), then the bias is
significantly different from zero.
An example will clarify the attribute study data collection and calculation
of repeatability and bias.
64
The 1.08 adjustment factor is specific for a sample size of 20 and was determined through a simulation of this
approach.
Example:
An attribute gage is being used to measure a dimension that has a
tolerance of ± 0.010. The gage is an end-of-line 100% automatic
inspection gage that is affected by repeatability and bias. To perform the
attribute study, eight parts with reference values at intervals of 0.002
from –0.016 to –0.002 are run through the gage 20 times each. The
number of accepts for each part are:
XT        a
–0.016    0
–0.014    3
–0.012    8
–0.010    20
–0.008    20
–0.006    20
–0.004    20
–0.002    20
Since there are two reference values with 1 ≤ a ≤ 19, at least four more
parts must be found. Therefore, it is necessary to run parts with reference
values at the midpoints of the existing intervals. These reference values
and the number of accepts are:

XT        a
–0.015    1
–0.013    5
–0.011    16

Now there are five reference values with 1 ≤ a ≤ 19. The procedure
requires that one more part be found with 1 ≤ a ≤ 19. This part is
evaluated with the following result:

XT        a
–0.0105   18

Now that the data collection criteria have been satisfied, the probabilities
of acceptance can be calculated for each part, as shown below.
XT a Pa′
-0.016 0 0.025
-0.015 1 0.075
-0.014 3 0.175
-0.013 5 0.275
-0.012 8 0.425
-0.011 16 0.775
-0.0105 18 0.875
-0.010 20 0.975
-0.008 20 1.000
Using the reference values corresponding to Pa′ = 0.995 and Pa′ = 0.005
on the line of best fit, the repeatability is:

R = [−0.0084 − (−0.0163)] / 1.08 = 0.0079 / 1.08 = 0.0073

To determine if the bias is significantly different from zero, calculate:

t = (31.3 × Bias) / R = (31.3 × 0.0023) / 0.0073 = 9.86

Since 9.86 is greater than 2.093, the bias is significantly different
from zero.
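A sketch of the same calculation (the reference values −0.0084 and −0.0163 are taken as the readings from the line of best fit at Pa′ = 0.995 and 0.005, and bias = 0.0023 from the example):

```python
# Sketch: repeatability and the bias t statistic for the example.
# x_995 and x_005 are the reference values read from the line of best
# fit at P'a = 0.995 and P'a = 0.005; bias = 0.0023 from the example.
x_995, x_005 = -0.0084, -0.0163
repeatability = round((x_995 - x_005) / 1.08, 4)  # 0.0079 / 1.08 -> 0.0073
bias = 0.0023

t = 31.3 * bias / repeatability
print(repeatability, round(t, 2))  # 0.0073 9.86
print(t > 2.093)                   # True -> bias significantly nonzero
```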
As an alternative to the normal probability paper of Figure 31, the
attribute GPC can also be plotted on plain graph paper (see Figure 32).
This can be accomplished in one of two ways. One approach would be to
run the sample study at the other limit of the specification. In the
example, the long method for the attribute study would also have to be
conducted at the high limit of the specification, and the calculated values
plotted accordingly. However, it would not be necessary to run the study
again: since the shape of the curve at the high limit should be a "mirror
image" of the curve at the low limit, the only consideration necessary is
the location of the curve with respect to the limit.
Figure 31: Attribute Gage Performance Curve Plotted on Normal Probability Paper
(Repeatability = 0.0079, unadjusted)
[Figure 32: Gage Performance Curve — Probability of Acceptance (0.0 to 1.0) vs. reference value (−0.020 to 0.020)]
Chapter IV
PRACTICES FOR COMPLEX MEASUREMENT SYSTEMS
Chapter IV – Section A
Practices for Complex or Non-Replicable Measurement Systems
CHAPTER IV – Section A
Practices for Complex or Non-Replicable
Measurement Systems
The focus of this reference manual is measurement systems where the
Introduction readings can be replicated for each part. Not all measurement systems
have this characteristic; for example:
• Destructive measurement systems
• Systems where the part changes on use/test; e.g., engine or
transmission dynamometer tests

The following are examples of approaches to the analysis of
measurement systems, including those not previously discussed in
this manual. This is not intended to be a complete listing covering
every type of measurement system but only examples of various
approaches. If a measurement system does not fit the manual's
focus, it is recommended that competent statistical resources be
consulted.
NON-REPLICABLE MEASUREMENT SYSTEMS

Scenario – Non-Destructive Measurement Systems:
The part is not changed by the measurement process; i.e., these are
measurement systems that are non-destructive and will be used with
parts (specimens) whose measured properties do not change over the
period of use.

Examples:
• Vehicle dynamometers used with non-green vehicles/powertrains

Vehicle Dynamometers
The mapping of the studies described in this chapter to the various
scenarios is as follows:

Stability Studies

Scenario                                                       S1  S2  S3  S4  S5
The part is not changed by the measurement process; i.e.,
measurement systems that are non-destructive (replicable) and
will be used with parts (specimens) with:
• Static properties, or
• Dynamic (changing) properties which have been stabilized.
The shelf life of the characteristic (property) is known and
extends beyond the expected duration of the study; i.e., the
measured characteristic does not change over the expected
period of use.
Destructive measurement systems
Non-replicable measurement systems
Test stands

Variability Studies

Scenario                                      V1  V2  V3  V4  V5  V6  V7  V8  V9
• Static properties, or
• Dynamic (changing) properties which have been stabilized.
Test stands
Chapter IV – Section B
Stability Studies
CHAPTER IV – Section B
Stability Studies
S1: Single Part65, Single Measurement Per Cycle
Application:
a) Measurement systems in which the part is not changed by the
measurement process; i.e., measurement systems that are non-
destructive and will be used with parts (specimens) with:
Static properties, or
Dynamic (changing) properties which have been stabilized.

b) The shelf life of the characteristic (property) is known and extends
beyond the expected duration of the study; i.e., the measured
characteristic does not change over the expected period of use.

Assumptions:

• The measurement system is known (documented) to have a linear
response over the expected range of the characteristic (property).
• Parts (specimens) cover the expected range of the process variation
of the characteristic.

Analyze using a control chart, and compare σe = R̄/d2* with the
repeatability estimate σE from a variability study.
65
A reference standard can be used if it is appropriate for the process.
The shelf life of the characteristic (property) is known and extends
beyond the expected duration of the study; i.e., the measured
characteristic does not change over the expected period of use.

Assumptions:

• The measurement system is known (documented) to have a linear
response over the expected range of the characteristic (property).
• Parts (specimens) cover the expected range of the process variation
of the characteristic.

Analyze using a [z, R̄] chart, where zi = xi − µi and µi is the reference
value for the ith part (specimen). Compare σe = R̄/d2* with the
repeatability estimate σE from a variability study.

66
A reference standard can be used if it is appropriate for the process.

67
If more than one appraiser is involved in the data collection, then σE (repeatability estimate) is affected also by
the reproducibility of the measurement system. Quantify reproducibility by scatter and whisker plots indexed by
appraiser (see Chapter III, Section B).
The measurement system must be evaluating a homogeneous
independent identically distributed ("iid") sample (collected and
isolated). The measurements of individual parts (specimens) are not
replicated, so this study can be used with destructive and non-replicable
measurement systems.

Assumptions:

• The shelf life of the characteristic (property) is known and extends
beyond the expected duration of the study; i.e., the measured
characteristic does not change over the expected period of use and/or
storage.

Analyze by:

• Reviewing the distribution of the measurements graphically (the
isolated sample should display a unimodal distribution).
• Using the relationship σ²total = σ²process + σ²measurement system
• Measuring one or more individuals from the isolated sample per time
period.
68
Dataplot, National Institute of Standards and Technology, Statistical Engineering Division (www.itl.nist.gov).
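The variance decomposition above can be applied directly once the total and measurement standard deviations are estimated. A minimal sketch with hypothetical values:

```python
import math

# Sketch: solving the stability-study decomposition
#   sigma_total^2 = sigma_process^2 + sigma_measurement^2
# for the process component. The standard deviations are hypothetical.
sigma_total = 1.20   # from the isolated-sample data over time
sigma_meas = 0.45    # from a separate measurement system study
sigma_process = math.sqrt(sigma_total**2 - sigma_meas**2)
print(round(sigma_process, 3))  # 1.112
```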
• Parts (specimens) cover the extended range of the process variation
of the characteristic (property).
• Specimens are split into m portions. With m = 2 portions, this is often
called a test-retest study.

Analyze using:

• Range chart to track the consistency of the measurements
(confounded with the "within-lot" consistency).
• Compare σe = R̄/d2* with the repeatability estimate σE from a
variability study.

This study is the same as S4 with homogeneous parts from different lots.
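The test-retest comparison can be sketched as follows (hypothetical split-specimen readings; d2* ≈ 1.128 for subgroups of size 2 with many subgroups is assumed):

```python
# Sketch: S4 test-retest consistency check. Each specimen is split into
# m = 2 portions; the within-pair range estimates measurement error.
# The readings are hypothetical; d2* is taken as ~1.128 (subgroup
# size 2, many subgroups).
pairs = [(5.1, 5.0), (4.8, 4.9), (5.3, 5.1), (5.0, 5.0)]
ranges = [abs(a - b) for a, b in pairs]
r_bar = sum(ranges) / len(ranges)
sigma_e = r_bar / 1.128
print(round(sigma_e, 3))  # compare with sigma_E from a variability study
```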
S5b: Variable Data Responses

Analyze using ANOVA and graphical techniques:69

• Calculate x̄ & s for each test stand (by characteristic), by time
period.
• Determine consistency among the stands: a single x̄ & s chart
including the results from all the stands.
• Determine stability within individual stands: separate x̄ & s chart
for each stand.
69
See also James, P. D., "Graphical Displays of Gage R&R Data," AQC Transaction, ASQC, 1991.
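The per-stand x̄ & s summary above can be sketched as follows (hypothetical readings for three test stands; the results would be plotted on the x̄ & s charts described in the bullets):

```python
import statistics

# Sketch: S5b consistency check. Compute x-bar and s per test stand
# (hypothetical readings); these values feed the x-bar & s charts used
# to judge consistency among stands and stability within each stand.
stands = {
    "stand1": [10.1, 10.3, 9.9, 10.0],
    "stand2": [10.2, 10.4, 10.1, 10.3],
    "stand3": [9.7, 9.9, 9.8, 10.0],
}
summary = {name: (statistics.mean(xs), statistics.stdev(xs))
           for name, xs in stands.items()}
for name, (xbar, s) in summary.items():
    print(name, round(xbar, 3), round(s, 3))
```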
Chapter IV – Section C
Variability Studies
CHAPTER IV – Section C
Variability Studies
All descriptive studies are enumerative in nature in that they describe the
measurement system (including the effects of the environment) during
the study. Since measurement systems are to be used in making future
decisions about products, processes, or services, an analytic conclusion
about the measurement system is necessary. The transition from
enumerative to analytic results require subject matter knowledge and
expertise to:
• Assure that all expected measurement sources of variation are
considered in the design and execution of the study.
• Analyze the results (data) in light of the expected uses, environment,
control, maintenance, etc.

V1: Standard GRR Studies

These studies are those contained within this reference manual.
These studies include graphical analysis as well as numerical analysis.

Application:

Measurement systems in which the part is not changed by the
measurement process; i.e., measurement systems that are non-destructive
and will be used with parts (specimens) with:

1) Static properties, or
2) Dynamic (changing) properties which have been stabilized.
Assumptions:
• The shelf life of the characteristic (property) is known and extends
beyond the expected duration of the study; i.e., the measured
characteristic does not change over the expected period of use.
• Parts (specimens) cover the expected range of the process variation
of the characteristic.
measurement systems and can be used to analyze measurement systems
with dynamic characteristics.

Assumptions:

• The shelf life of the characteristic (property) is known and extends
beyond the expected duration of the study; i.e., the measured
characteristic does not change over the expected period of use and/or
storage.
• Parts (specimens) cover the extended range of the process variation
of the characteristic (property).
• Specimens are split into m portions. With m = 2 portions, this is often
called a test-retest study.

This study is the same as V3 using consecutive pairs of parts rather than
split specimens. It is used in situations where the part cannot be split.
70
See Reference List, No. 15.
71
See Reference List, No. 38.
• The shelf life of the characteristic (property) is known and extends
beyond the expected duration of the study; i.e., the measured
characteristic does not change over the expected period of use and/or
storage.
• Parts (specimens) cover the extended range of the process variation
of the characteristic (property).
• Split specimens into m portions where m = 0 mod 2 or 3; m ≥ 2 (e.g.,
m = 3, 4, 6, 9, ...).

Analyze using:

• Standard GRR techniques including graphics

Different Lots

This study is the same as V4 using consecutive pairs of parts rather than
split specimens. It is used in situations where the part cannot be split.
Assumptions:
• Repeated readings are taken over specified time intervals.
• The shelf life of the characteristic (property) is known and extends
beyond the expected duration of the study; i.e., the measured
characteristic does not change over the expected period of use.
• Parts (specimens) cover the expected range of the process variation
of the characteristic.
Analyze by determining the degradation model for each sample part:

• σE = σe
• Consistency of degradation (if n ≥ 2)

V7: Linear Analysis

Assumptions:

• Parts (specimens) cover the expected range of the process variation
of the characteristic.
• σE = σe
Chapter V
OTHER MEASUREMENT CONCEPTS
Chapter V – Section A
Quantifying the Effect of Excessive Within-Part Variation
CHAPTER V – Section A
Recognizing the Effect of Excessive Within-Part
Variation
Understanding the sources of variation of a measurement system is
important for all measurement applications but becomes even more
critical where there is significant within-part variation. Within-part
variation, such as taper or out-of-round, can cause the measurement
system evaluation to provide misleading results. This is because
unaccounted within-part variation affects the estimate of repeatability,
reproducibility, or both. That is, the within-part variation can appear as a
significant component of the measurement system variation.
Understanding the within-part variation present in the product will result
in a more meaningful understanding of the suitability of the measurement
system for the task at hand.

Examples of within-part variation which may be encountered are:
roundness (circular runout), concentricity, taper, flatness, profile,
cylindricity, etc.72 It is possible that more than one of these
characteristics may be present at the same time within the same part
(composite error). The strength of each characteristic and their
interdependencies may compound the data and the resultant analysis.
Within-part variation will also affect how a part is measured, how a
fixture may be designed, and the conclusions drawn from the study. An
example may be a plastic part that has a critical feature on the parting
line (a parting line typically has excess plastic material where the two
halves of the mold come together).
Chapter V – Section B
Average and Range Method – Additional Treatment
CHAPTER V – Section B
Average and Range Method – Additional Treatment
Introduction

There is an additional consideration worthy of mention relative to the Average and Range method of measurement system assessment. The control chart example is taken with permission from "Evaluating the Measurement Process," by Wheeler & Lyday (see Reference List).

The primary purpose of this graphical approach is the same as other well designed measurement system analyses: to determine if the measurement process is adequate to measure the manufacturing process variation and/or evaluate product conformance:
• Are all gages doing the same job?

• Are all appraisers doing the same job?

• Is the measurement system variation acceptable with respect to the process variation?

• How good are the data obtained from the measurement process, or into how many non-overlapping groups or categories can the data be divided?
2) Have each appraiser check each sample for the characteristic being studied. Record the first checks on the top data row of a control chart.

3) Repeat the checks and record the data on the second data row of the control chart. (Note: Do not allow the appraisers to see their original reading while making this second check.) The data should now show two checks on the same sample by each appraiser.

4) Analyze the data by calculating the average ( X̄ ) and range (R) for each subgroup.

5) Plot the range values on the range chart and calculate the average range ( R̄ ) (include all subgroup ranges (R) for all appraisers). Draw this average range on the chart. Use the D4 factor for n = 2 to calculate the control limit for the range chart. Draw in this limit and determine if all values are in control.

If all ranges are in control, all appraisers are doing the same job.

If one appraiser is out of control, his method differs from the others.

If all appraisers have some out of control ranges, the measurement system is sensitive to appraiser technique and needs improvement to obtain useful data.
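The range chart arithmetic in steps 4 and 5 can be sketched as follows (a minimal sketch; the readings are hypothetical, and D4 = 3.267 is the standard control chart constant for subgroups of size two):

```python
# Sketch of steps 4-5: subgroup ranges, average range, and the upper
# control limit for the range chart (assumes two trials per sample).

D4_N2 = 3.267  # standard control chart factor for subgroup size n = 2

def range_chart(first_checks, second_checks):
    """Return (subgroup ranges, average range, upper control limit)."""
    ranges = [abs(a - b) for a, b in zip(first_checks, second_checks)]
    r_bar = sum(ranges) / len(ranges)
    ucl_r = D4_N2 * r_bar  # lower control limit is zero for n = 2
    return ranges, r_bar, ucl_r

# Hypothetical appraiser data: first and second checks on five samples.
trial1 = [10.1, 9.8, 10.4, 10.0, 9.9]
trial2 = [10.3, 9.7, 10.2, 10.1, 10.0]
ranges, r_bar, ucl = range_chart(trial1, trial2)

# A range above the UCL flags an appraiser/sample check that is
# inconsistent with the rest of the measurement process.
out_of_control = [r for r in ranges if r > ucl]
```

If every subgroup range stays below the limit, the appraisers are consistent with each other in the sense of step 5.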
6) Next, plot the average for each subgroup ( X̄ ) for all appraisers on the average chart (see Figure 33). The average values represent both process variation and measurement variation.

If the averages fall within the control limits, the measurement variation overshadows the process variation. In other words, the measurement process has more variation than the manufacturing process and is of no value in monitoring or controlling that process.

If less than half of the averages are outside the limits, the measurement system is inadequate for process control.

On the other hand, if a majority of the averages fall outside the control limits, it indicates that the signals from the manufacturing process are greater than measurement variation. This measurement system can provide useful data for controlling the process.
Worksheet Example

The question, "How good are the data collected by this measurement process?" is answered by completing the worksheet example (Figure 34). All data needed for the worksheet can be found on the average and range charts described above; steps 1 through 6 were completed on the control chart.
7) Compute each appraiser average by averaging all the samples obtained by each appraiser and enter these averages in the space provided for each appraiser (A, B, C).

8) Examine the appraiser averages (A, B, C) and determine the range of the appraiser averages by subtracting the lowest from the highest, and insert it in the space for (RA).

9) Compute the estimated appraiser standard deviation ( σ̂A ) as shown, by using the d2* value for the corresponding nA value.

10) Compute the sample averages by averaging the values obtained by all appraisers for each sample. For example, add sample 1 avg. of appraiser 1, sample 1 avg. of appraiser 2, and sample 1 avg. of the last appraiser, and divide this sum by the number of appraisers. This is the best estimate of that sample's true value. Place the value for each sample in the space provided (1, 2, 3, ..., 9, 10) in Figure 34.

11) Observe the sample averages (1, 2, 3, ..., 9, 10) and calculate the range of the sample averages ( RP ) by subtracting the lowest from the highest. Insert this value in the space provided.

12) Estimate the sample-to-sample standard deviation ( σ̂P ) as shown, by using the d2* value for the corresponding n value.

13) Compute the "Signal-to-Noise Ratio" (SN) by dividing the sample-to-sample standard deviation by the measurement system standard deviation:

    SN = σ̂P / σ̂GRR

(See Figure 34b.)

If the number of categories is less than two (2), the measurement system is of no value for controlling the process.

If the number of categories is two (2), it means the data can be divided into only high and low groups; however, this is only the equivalent of attribute data.

If the number of categories is three (3), the data can be divided into high, medium and low groups. This is a slightly better measurement system.

A system containing four (4) or more categories would be much better than the first three examples.
Measurement Evaluated: FITTING LENGTH    Done By: R.W.L.    Date: MM-DD-YY

[Worksheet residue, Figure 34a: a d2* lookup table (e.g. d2* = 1.410 for nA = 2, 1.906 for nA = 3, 2.477 for nA = 5, up to 3.178 for nA = 10) and the calculations for the appraiser effect. The appraiser averages are written in (appraiser B is the low average at 102.8), their range is formed, and the standard deviation is estimated as R / d2* = 5.3 / 1.906 = 2.781.]

Figure 34a: Computations for the Control Chart Method of Evaluating a Measurement Process (Part 1 of 2).
    σ̂m = √( σ̂e² + σ̂o² ) = √( (6.7376)² + (2.7806)² ) = 7.289

CALCULATIONS FOR SIGNAL TO NOISE RATIO:

Write the average for each sample piece or batch below. [Worksheet residue: sample averages include 111.17 (sample 1), 113.33 (sample 2), 83.00 (sample 3, the low value) and 102.17 (sample 4); the high value is 113.67.]

Range for these sample averages:

    RP = 113.67 − 83.00 = 30.67

Estimate of the sample-to-sample standard deviation (using the d2* value 2.477):

    σ̂P = RP / d2* = 30.67 / 2.477 = 12.38

Signal to Noise Ratio:

    σ̂P / σ̂m = 12.38 / 7.289 = 1.698

Number of distinct data categories:

    1.41 ( σ̂P / σ̂m ) = 1.41 × 1.698 = 2.395, or 2

This is the number of non-overlapping 97% confidence intervals that will span the range of product variation. (A 97% confidence interval centered on a single measurement would contain the actual product value that is represented by that measurement 97% of the time.)

Figure 34b: Computations for the Control Chart Method of Evaluating a Measurement Process (Part 2 of 2).
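The worksheet arithmetic in Figures 34a and 34b can be reproduced as follows (a sketch using the summary values printed in the figures; variable names are mine):

```python
import math

# Summary values from the Figure 34a/34b worksheet example.
sigma_e = 6.7376          # one measurement-error component from the worksheet
sigma_o = 2.7806          # the other component (R / d2* = 5.3 / 1.906)
sigma_m = math.hypot(sigma_e, sigma_o)   # total measurement error

R_p = 113.67 - 83.00      # range of the sample averages
sigma_p = R_p / 2.477     # sample-to-sample estimate via the d2* value

sn_ratio = sigma_p / sigma_m             # signal-to-noise ratio
categories = 1.41 * sigma_p / sigma_m    # non-overlapping data categories
```

Truncating `categories` gives the 2 distinct categories reported on the worksheet — attribute-like discrimination.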
CHAPTER V – Section C
Gage Performance Curve73
The Gage Performance Curve (GPC) is used to determine the probability of accepting or rejecting a part of some reference value, given the variation of that system. To accomplish this, the assumption is made that the measurement system error consists primarily of lack of repeatability, reproducibility and bias. Further, repeatability and reproducibility are taken to be normally distributed with some variance, σ². Consequently, gage error is normally distributed with a mean XT, the reference value, plus the bias, and has some variance, σ². In other words:

    Actual Value from the Gage = N( XT + b, σ² )
The probability of accepting a part of some reference value is given by the relationship

    Pa = ∫ from LL to UL of N( XT + b, σ² ) dx

Using the standard normal table:

    Pa = Φ( (UL − (XT + b)) / σ ) − Φ( (LL − (XT + b)) / σ )

where

    Φ( (UL − (XT + b)) / σ ) = ∫ from −∞ to UL of N( XT + b, σ² ) dx

    Φ( (LL − (XT + b)) / σ ) = ∫ from −∞ to LL of N( XT + b, σ² ) dx
73
Adapted with permission from “Analysis of Attribute Gage Systems” by J. McCaslin & G. Gruska, ASQC,
1976.
169
Chapter V – Section C
Gage Performance Curve
Example:

Determine the probability of accepting a part when the reference torque value is 0.5 Nm, 0.7 Nm, 0.9 Nm.

Using data from a previously conducted study:

    Upper specification = UL = 1.0 Nm
    Lower specification = LL = 0.6 Nm
    bias = b = 0.05 Nm
    σGRR = 0.05 Nm

Applying the above to the formulas on the previous page:

    Pa = Φ( (UL − (XT + b)) / σ ) − Φ( (LL − (XT + b)) / σ )

    Pa = Φ( (1.0 − (0.5 + 0.05)) / 0.05 ) − Φ( (0.6 − (0.5 + 0.05)) / 0.05 )

    Pa = Φ(9.0) − Φ(1.0)
       = 1.0 − 0.84
       = 0.16
That is, when the part has a reference value of 0.5 Nm it will be rejected approximately 84% of the time.

For XT = 0.7 Nm:

    Pa = Φ( (1.0 − (0.7 + 0.05)) / 0.05 ) − Φ( (0.6 − (0.7 + 0.05)) / 0.05 )

    Pa = Φ(5.0) − Φ(−3.0)
       = 0.999

If the reference value of the part is 0.7 Nm then it will be rejected less than approximately 0.1% of the time.
For XT = 0.9 Nm:

    Pa = Φ( (1.0 − (0.9 + 0.05)) / 0.05 ) − Φ( (0.6 − (0.9 + 0.05)) / 0.05 )

    Pa = Φ(1.0) − Φ(−7.0)
       = 0.84
If the reference value of the part is 0.9 Nm then it will be rejected approximately 16% of the time.

If the probability of acceptance is calculated for all values of XT and plotted, then the Gage Performance Curve, as shown in Figure 36, would be obtained. This same curve can be more easily plotted on normal probability paper, as shown in Figure 37. Once plotted, the GPC gives the probability of accepting a part of any part size.

In addition, once the GPC is developed, it can be used to calculate the repeatability and reproducibility error, and the bias error.74

The 5.15 GRR range can be determined by finding the XT value that corresponds to Pa = 0.995, and the XT value that corresponds to Pa = 0.005 for either of the two limits. The GRR is the difference between the two XT values, as shown graphically in Figure 37.

An estimate of the bias is determined by finding the XT, for either the upper or lower limit, that corresponds to Pa = 0.5, and calculating:

    b = XT − LL    or    b = XT − UL

depending on at which limit XT is chosen.75
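Under the normality assumption, the acceptance probabilities of this example can be reproduced with the standard normal CDF (a sketch; `prob_accept` is an illustrative name, not from the manual):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_accept(x_t, ul, ll, bias, sigma):
    """Probability of accepting a part whose reference value is x_t."""
    mean = x_t + bias   # gage readings are N(x_t + bias, sigma^2)
    return phi((ul - mean) / sigma) - phi((ll - mean) / sigma)

# Torque example: UL = 1.0 Nm, LL = 0.6 Nm, b = 0.05 Nm, sigma_GRR = 0.05 Nm.
pa_05 = prob_accept(0.5, 1.0, 0.6, 0.05, 0.05)  # ~0.16
pa_07 = prob_accept(0.7, 1.0, 0.6, 0.05, 0.05)  # ~0.999
pa_09 = prob_accept(0.9, 1.0, 0.6, 0.05, 0.05)  # ~0.84
```

Evaluating `prob_accept` over a grid of reference values traces out the Gage Performance Curve.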
74
See “Attribute Gage Study,” Chapter III, Section C.
75
This assumes that the measurement system is linear over the operating range.
[Plot residue: two renderings of the Gage Performance Curve — Probability of Acceptance (0.0 to 1.0) plotted against the reference value XT (0.3 to 1.2).]
[Plot residue, Figure 37: the Gage Performance Curve plotted on normal probability paper — probability scale running from 0.2/99.8 to 99.5/0.5 percent; the plotted curve indicates Bias = 0.05.]
CHAPTER V – Section D
Reducing Variation Through Multiple Readings
If the variation of the present measurement system is not acceptable (over 30%), there is a method that can be used to reduce the variation to an acceptable level until proper measurement system improvements can be made. The unacceptable variation may be reduced by taking multiple, statistically independent (non-correlated) measurement readings of the part characteristic being evaluated, determining the average of these measurements and letting the numeric value of the result be substituted for the individual measurement. This method will, of course, be time consuming, but is an alternative until improvements are made to the measurement system (i.e., redesigning or buying a new system). The procedure for this alternative method is as follows:

1) Determine the number of multiple readings required to meet an acceptable level of variation.

2) Follow the gage study procedure discussed earlier in this guideline.76

Example:

The distribution of individual and average measurements has the same mean numeric value, and

    (6σx̄)² = (6σ)² / n

This can be approximated by

    6sx̄ = 6s / √n

76 See note on page iv.
Setting the desired spread of 0.14 equal to the present spread of 0.24 reduced by √n:

    0.14 = 0.24 / √n

so

    √n = 1.714

and

    n = 3 (rounded to the nearest integer)

Therefore, 3 multiple readings on the part characteristic will reduce the total measurement system variation to approximately 0.14 and the %GRR to 15%.
This method should be considered as a temporary step until other improvements are made on the measurement system. This should be used only with the concurrence of the customer.
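The sizing calculation above can be sketched as follows (`readings_needed` is an illustrative helper, not from the manual; rounding up guarantees the target spread is met):

```python
import math

def readings_needed(current_spread, target_spread):
    """Number of statistically independent readings to average so the
    measurement spread shrinks from current_spread to target_spread,
    using spread_of_average = spread / sqrt(n)."""
    n = (current_spread / target_spread) ** 2
    return math.ceil(n)

# Example from the text: reduce the spread from 0.24 to 0.14.
# sqrt(n) = 0.24 / 0.14 = 1.714, so n = 2.94, rounded to 3 readings.
n = readings_needed(0.24, 0.14)
```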
CHAPTER V – Section E
Pooled Standard Deviation Approach to GRR77
Analysis of a measurement system usually assumes that replicated data from all the part/specimens can be obtained from all the appraisers in a random fashion. This may not always be possible. If the measurement system analysis includes multiple locations it can be logistically unfeasible to demand a random sampling. Also some tests, notably chemical and metallurgical analyses (inter- and intra-laboratory studies), may require a cross section of diverse samples which are not part of a homogeneous process and may not be available at the same time.

These situations may be handled by using a nested DOE. An alternate approach is the pooled standard deviation GRR study which follows the methodology discussed in ASTM E691.

This approach views each part as a separate material and then computes the repeatability and reproducibility standard deviations as in E691. This will yield multiple separate values of repeatability and reproducibility. Since the parts are considered to be essentially identical, these separate estimates are assumed to be effectively identical. Of course they will never be exactly the same, but their average will give a good estimate of … greater than zero most of the time (that is, for most of the materials) …

Sequential Application

This approach lends itself to a sequential approach. This is useful when all the samples are not available at the same time. It also can be used as part of the … variability.

The following study description assumes that the study will be applied in a sequential manner.
77
Portions of this section including all of “Consistency Statistics” contributed by Neil Ullman of the American
Society for Testing and Materials (ASTM International).
9) Compute the upper control limit for the standard deviation chart. Draw in this limit and determine if all values are in control (see Figure 38).

10) Plot the average ( X̄ ) for each subgroup for all appraisers on the average chart (see Figure 38). The average values represent both process variation and measurement variation.

11) Calculate the grand average (include all subgroup averages ( X̄ ) for all appraisers). Draw this grand average line on the chart.

12) Calculate the control limits for this chart using the A2 factor for subgroup size r and the average range; draw these limits on the chart.

13) Analyze the data using the control charts and other graphical techniques (see Chapter III).

For each part, compute:

    sx̄g = √( ( Σ i=1..m s²x̄i ) / m )

    Repeatabilityg = sEg = √( ( Σ i=1..m s²si ) / m )

    Reproducibilityg = sAg = √( s²x̄g − s²Eg / m )
    sappr = √( s²x̄ − s²r / 3 )

    sR = √( s²r + s²appr )

15) Evaluate the overall measurement system's parameters by pooling the part results:

    Repeatability = sE = √( ( Σ i=1..g s²Ei ) / g )

    Reproducibility = sA = √( ( Σ i=1..g s²Ai ) / g )

    GRR = sGRR = √( ( Σ i=1..g s²GRRi ) / g )
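The per-part calculations can be sketched as follows (using the appraiser A, B and C readings for part 1 from the example data; variable names are mine):

```python
import math

def stdev(xs):
    """Sample standard deviation (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Part 1 readings from the example data: 3 appraisers x 3 trials.
part1 = {"A": [0.29, 0.41, 0.64],
         "B": [0.08, 0.25, 0.07],
         "C": [0.04, -0.11, -0.15]}

r = 3  # trials per appraiser
appraiser_means = [sum(t) / r for t in part1.values()]
appraiser_sds = [stdev(t) for t in part1.values()]

# Standard deviation among the appraiser averages for this part.
s_xbar = stdev(appraiser_means)
# Repeatability: pool the within-appraiser standard deviations.
s_E = math.sqrt(sum(s ** 2 for s in appraiser_sds) / len(appraiser_sds))
# Reproducibility: remove the repeatability contamination.
s_A = math.sqrt(s_xbar ** 2 - s_E ** 2 / r)
# GRR for this part.
s_GRR = math.sqrt(s_E ** 2 + s_A ** 2)
```

The results match the part-1 column of the example spreadsheet (repeatability 0.13153, reproducibility 0.25056, GRR 0.28299); pooling the squared per-part values over all parts gives the overall estimates of step 15.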
[Figure 38: X̄ and s charts for the three appraisers (subgroups 0–30). Averages chart: UCL = 0.3492, mean = 0.001444, LCL = −0.3463, with many subgroup averages beyond the limits. Standard deviation chart: UCL = 0.4570, s̄ = 0.1780, LCL = 0.]
                                              Part
Appraiser  Trial      1        2        3        4        5        6        7        8        9       10

A          1        0.29    -0.56     1.34     0.47    -0.80     0.02     0.59    -0.31     2.26    -1.36
           2        0.41    -0.68     1.17     0.50    -0.92    -0.11     0.75    -0.20     1.99    -1.25
           3        0.64    -0.58     1.27     0.64    -0.84    -0.21     0.66    -0.17     2.01    -1.31
           xbar   0.4467  -0.6067   1.2600   0.5367  -0.8533  -0.1000   0.6667  -0.2267   2.0867  -1.3067   (xdbbar 0.19033333)
           sd     0.1779   0.0643   0.0854   0.0907   0.0611   0.1153   0.0802   0.0737   0.1504   0.0551   (appr sd 0.10289153)

B          1        0.08    -0.47     1.19     0.01    -0.56    -0.20     0.47    -0.63     1.80    -1.68
           2        0.25    -1.22     0.94     1.03    -1.20     0.22     0.55     0.08     2.12    -1.62
           3        0.07    -0.68     1.34     0.20    -1.28     0.06     0.83    -0.34     2.19    -1.50
           xbar   0.1333  -0.7900   1.1567   0.4133  -1.0133   0.0267   0.6167  -0.2967   2.0367  -1.6000   (xdbbar 0.06833333)
           sd     0.1012   0.3869   0.2021   0.5424   0.3946   0.2120   0.1890   0.3570   0.2079   0.0917   (appr sd 0.3017394)

C          1        0.04    -1.38     0.88     0.14    -1.46    -0.29     0.02    -0.46     1.77    -1.49
           2       -0.11    -1.13     1.09     0.20    -1.07    -0.67     0.01    -0.56     1.45    -1.77
           3       -0.15    -0.96     0.67     0.11    -1.45    -0.49     0.21    -0.49     1.87    -2.16
           xbar  -0.0733  -1.1567   0.8800   0.1500  -1.3267  -0.4833   0.0800  -0.5033   1.6967  -1.8067   (xdbbar -0.25433333)
           sd     0.1002   0.2113   0.2100   0.0458   0.2223   0.1901   0.1127   0.0513   0.2194   0.3365   (appr sd 0.19056058)

By Part results                                                                                             Pooled sd's
sd xbar           0.26182  0.28005  0.19648  0.19751  0.24077  0.26555  0.32524  0.14385  0.21221  0.25125
Repeatability     0.13153  0.25721  0.17534  0.31863  0.26388  0.21252  0.19494  0.17736  0.13524  0.20385
  pooled                   0.204274 0.195107 0.23223  0.238896 0.218021 0.215578 0.229787 0.218795 0.21443466
Reproducibility   0.25056  0.23743  0.16839  0.07191  0.18644  0.24501  0.31573  0.07508  0.17991  0.22198
GRR               0.28299  0.35004  0.24310  0.32664  0.32310  0.30247  0.34347  0.22540  0.26527  0.30138
  pooled (measurement system)       0.318285 0.295359 0.303481 0.307505 0.306671 0.312194 0.302708 0.29878  0.29904106
Consistency Statistics

The ASTM and ISO methods78 suggest that two "consistency" statistics, h and k, be computed. The h values are calculated as:

    h = ( x̄appr − x̄part ) / sx̄

For appraiser A and part 1 the average ( x̄appr above) is 0.447 and the part average ( x̄part above) is 0.169. The standard deviation among appraisers ( sx̄ above) is 0.262. Then

    h = (0.447 − 0.169) / 0.262 = 0.278 / 0.262 = 1.06

The value of k is the ratio of the standard deviation for each part for each appraiser to the repeatability standard deviation. In this case (appraiser A and part 1) it is:

    k = standard deviation (appr A, part 1) / repeatability = 0.178 / 0.132 = 1.35

… deviations, the h and k calculations of E691 can still be used to compare the … different materials.

h                                            Part
        1      2      3      4      5      6      7      8      9      10    avg    "z"
A     1.06   0.87   0.82   0.86   0.88   0.32   0.65   0.80   0.69   1.05   0.80   2.53
B    -0.14   0.22   0.29   0.24   0.21   0.80   0.50   0.32   0.46  -0.11   0.28   0.88
C    -0.93  -1.09  -1.11  -1.10  -1.09  -1.12  -1.15  -1.12  -1.15  -0.94  -1.08  -3.41

k                                                                          median  "z"
A     1.35   0.25   0.49   0.28   0.23   0.65   0.59   0.35   0.77   0.27   0.42  -3.20
B     0.77   1.50   1.15   1.70   1.50   1.20   1.40   1.68   1.07   0.45   1.30   3.14
C     0.76   0.82   1.20   0.14   0.84   1.07   0.83   0.24   1.13   1.65   0.84  -0.17
78
See ISO 5725
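The h and k calculations for appraiser A and part 1 can be sketched as follows (summary values taken from the example above; variable names are mine):

```python
import math

def stdev(xs):
    """Sample standard deviation (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Appraiser averages for part 1 (A, B, C) and the per-appraiser trial
# standard deviation / pooled repeatability, from the example data.
appr_means = [0.4467, 0.1333, -0.0733]
sd_appr_A = 0.1779        # appraiser A, part 1 trial standard deviation
repeatability = 0.13153   # pooled repeatability for part 1

part_mean = sum(appr_means) / len(appr_means)   # ~0.169
s_xbar = stdev(appr_means)                      # ~0.262

# h: standardized location of appraiser A relative to the part average.
h_A = (appr_means[0] - part_mean) / s_xbar      # ~1.06
# k: appraiser A's spread relative to the repeatability.
k_A = sd_appr_A / repeatability                 # ~1.35
```

Repeating this for every appraiser/part cell fills in the h and k tables above.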
In the last two columns are the averages and a "z" value to see if the appraisers are significantly different. The h values indicate that appraiser A is significantly high and appraiser C is significantly low in their readings of the size of the parts. It is also this significant difference which creates the GRR standard deviation.

The k statistics indicate that appraiser A's variation is significantly low, while appraiser B is as significantly high. So we see very great differences in the performance of just these three operators.

The graphs of h (Figure 38b) and k (Figure 38c) also help to illustrate these differences. Appraiser C has much lower results than the others. Similarly, the k values show how much lower appraiser A variation is with respect to repeatability. These are certainly issues to be examined about the performance of the measurement method as carried out by these appraisers.
[Figure 38b: plot of the h values by appraiser (A, B, C), vertical scale −1.5 to 1.5.]
[Figure 38c: plot of the k values by appraiser (A, B, C), vertical scale 0 to 1.8.]
Appendices
Appendix A
Analysis of Variance Concepts
The GRR numerical analysis can be done following the formulas in Table 18. This is what is called the Analysis of Variance (ANOVA) table. The ANOVA table is composed of six columns:

• Source column is the cause of variation.

• DF column is the degrees of freedom associated with the source.

• SS or sum of squares column is the deviation around the mean of the source.

• MS or mean square column is the sum of squares divided by degrees of freedom.

• EMS or expected mean square column determines the linear combination of variance components for each MS. The ANOVA table decomposes the total source of variation into four components: parts, appraisers, interaction of appraisers and parts, and replication error due to the gage/equipment repeatability.

• The F-ratio column is calculated only for the interaction in a MSA ANOVA; it is determined by the mean square of the interaction divided by the mean square of the equipment error.

Variance Estimate

    Appraiser (AV):    ω² = ( MSA − MSAP ) / (nr)

    Part (PV):         σ² = ( MSP − MSAP ) / (kr)
An estimated variance component may be negative when its true value is close to zero or the study has a small sample size. For analysis purposes, the negative variance component is set to zero.

Standard deviation is easier to interpret than variance because it has the same unit of measurement as the original observation. In practice, the basic measure of spread is given by 5.15 times the standard deviation.81 Table 17 shows the 5.15 sigma spread for a measure of repeatability called equipment variation (EV) and a measure of reproducibility called appraiser variation (AV). If the interaction of part and appraiser is significant, then there exists a non-additive model and therefore an estimate of its variance component is given. The GRR in Table 17 is the total of measurement system variation.

    EV = 5.15 √MSe                        Equipment Variation = Repeatability

    AV = 5.15 √( (MSA − MSAP) / (nr) )    Appraiser Variation = Reproducibility

    IAP = 5.15 √( (MSAP − MSe) / r )      Interaction of Appraiser by Part

    GRR = √( (EV)² + (AV)² + (IAP)² )     Gage R&R

    PV = 5.15 √( (MSP − MSAP) / (kr) )    Part Variation

In the additive model, the interaction is not significant and the variance components for each source are determined as follows: First, the sum of squares of the gage error (SSe from Table 18) is added to the sum of squares of the appraiser-by-part interaction (SSAP from Table 18); this pooled sum of squares is then divided by the pooled degrees of freedom to calculate MSpool. The 5.15 sigma spread limits then will be:

    EV = 5.15 √MSpool

    AV = 5.15 √( (MSA − MSpool) / (nr) )

    GRR = √( (EV)² + (AV)² )

    PV = 5.15 √( (MSP − MSpool) / (kr) )

81 This is the 99% range. See note on page iv.

82 Where n = number of parts, k = number of appraisers and r = number of trials.
The sums of squares are computed as:

    SSP = Σ i=1..n ( x²i.. / (kr) ) − x²... / (nkr)

    SSA = Σ j=1..k ( x².j. / (nr) ) − x²... / (nkr)

    TSS = Σ i=1..n Σ j=1..k Σ m=1..r x²ijm − x²... / (nkr)

    SSAP = Σ i=1..n Σ j=1..k ( x²ij. / r ) − Σ i=1..n ( x²i.. / (kr) ) − Σ j=1..k ( x².j. / (nr) ) + x²... / (nkr)

    SSe = TSS − [ SSA + SSP + SSAP ]

Source             DF               SS      MS                              F            EMS
Appraiser          (k − 1)          SSA     MSA = SSA / (k − 1)                          τ² + rγ² + nrω²
Parts              (n − 1)          SSP     MSP = SSP / (n − 1)                          τ² + rγ² + krσ²
Appraiser-by-Part  (n − 1)(k − 1)   SSAP    MSAP = SSAP / ((n − 1)(k − 1))  MSAP / MSe   τ² + rγ²
Equipment          nk(r − 1)        SSe     MSe = SSe / (nk(r − 1))                      τ²
Total              nkr − 1          TSS

Table 18: ANOVA Table

with Parts ~ N( 0, σ² ), Appraiser × Part ~ N( 0, γ² ), Equipment ~ N( 0, τ² ).
Tables 19a and 19b show the ANOVA calculations for our example data from Figure 24.

Source             DF   SS        MS        F         EMS
Appraiser           2    3.1673   1.58363    34.44*   τ² + 3γ² + 30ω²
Parts               9   88.3619   9.81799   213.52*   τ² + 3γ² + 9σ²
Appraiser by Part  18    0.3590   0.01994     0.434   τ² + 3γ²
Equipment          60    2.7589   0.04598             τ²
Total              89   94.6471

* Significant at the α = 0.05 level

Table 19a: Tabulated ANOVA Results

Estimate of Variance        Std. Dev. (σ)   5.15 (σ)          % Total Variation   % Contribution
τ² = 0.039973 (Equipment)   0.199933        EV = 1.029656     18.4                 3.4
ω² = 0.051455 (Appraiser)   0.226838        AV = 1.168213     20.9                 4.4
γ² = 0 (Interaction)        0               INT = 0            0                   0
GRR ( τ² + γ² + ω² )        0.302371        GRR = 1.557212    27.9                 7.8
σ² = 1.086446 (Part)        1.042327        PV = 5.367986     96.0                92.2

Table 19b: Tabulated ANOVA Results

    % Contribution (to Total Variance) = 100 [ 5.15σ(component) ]² / [ 5.15σ(total) ]²
Appendix B
Impact of GRR on the Capability Index Cp
Formulas:

    σ²O = σ²A + σ²M        (1)

where O = the observed process variation,
      A = the actual process variation,
      M = the measurement system variation.

    Cpx = (U − L) / (6σx)        (2)

where U, L are the upper and lower specification values and x = O or A as defined in (1).

    GRR% = GRRp · 100%        (3)

based on process variation:

    GRRp = kσM / (6σO)        (4)

based on the tolerance range:

    GRRp = kσM / (U − L)        (5)

In (4) and (5), k is normally taken to be 5.15. However, without loss of generality this analysis will take k to be 6.

Analysis:

    CpO = CpA · ( σA / σO )

        = CpA · √( σ²O − σ²M ) / σO        using (1)

With GRR based on the process variation:

    CpO = CpA · σO √( 1 − GRR² ) / σO        using (4)

        = CpA · √( 1 − GRR² )        (6)

or
    CpA = CpO / √( 1 − GRR² )        (6')

with GRR based on the tolerance range:

    GRR = (1 / CpO) · ( σM / σO )        using (2) and (5)

consequently

    CpO = CpA · √( 1 − ( CpO · GRR )² )        (7)

and

    CpA = CpO / √( 1 − ( CpO · GRR )² )        (7')
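Equations (6) and (7') can be sketched as follows (the numeric inputs are illustrative, not from the manual):

```python
import math

def observed_cp(cp_actual, grr):
    """Observed Cp given the actual Cp and GRR expressed as a fraction
    of the process variation -- equation (6)."""
    return cp_actual * math.sqrt(1.0 - grr ** 2)

def actual_cp(cp_observed, grr_tol):
    """Actual Cp given the observed Cp and GRR expressed as a fraction
    of the tolerance range -- equation (7')."""
    return cp_observed / math.sqrt(1.0 - (cp_observed * grr_tol) ** 2)

# Illustrative: a 30% (process-based) GRR masks part of the capability.
cp_o = observed_cp(1.33, 0.30)
# Recovering the actual Cp from an observed Cp with a 10% tolerance-based GRR.
cp_a = actual_cp(cp_o, 0.10)
```

The observed Cp is always below the actual Cp, which is why measurement error understates process capability.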
Graphical Analysis

Based on (6), the family of lines for CpO with respect to CpA is:

[Plot: Observed Cp (0 to 3.5) versus Actual Cp (0.5 to 3), with curves for Actual GRR = 10%, 30%, 50%, 70% and 90%.]
Based on (7’), the family of lines for CpA with respect to CpO is:
5.0
4.0
Actual Cp
3.0
30%
ly
2.0 10%
n
O
1.0
e
s
U
0.0
l
a
rn
0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0
te
Observed Cp
In
y
n
a
p
m
194
Appendix C
d2* Table

Values associated with the Distribution of the Average Range

[Table residue: the table is indexed by the number of subgroups (g = 1, 2, 3, ...) and the subgroup size (m = 2 to 20). The first line of each cell is the degrees of freedom (ν) and the second line is d2*; d2 is the infinity value of d2*. Additional values of ν can be built up from the constant difference cd. For example, for g = 1: d2* = 1.41421 (m = 2), 1.91155 (m = 3), 2.23887 (m = 4), 2.48124 (m = 5), ..., 3.80537 (m = 20).]

Note: The notation used in this table follows that of Acheson Duncan, Quality Control and Industrial Statistics, 5th edition, McGraw-Hill, 1986.

ν ( R̄ / d2* )² / σ′² is distributed approximately as a χ² distribution with ν degrees of freedom, where R̄ is the average range of g subgroups of size m.
Appendix D
Gage R Study
Application:

• May be used during gage development – e.g., for quickly comparing different clamping locations; comparing different methodologies.

• This method CANNOT be used for final gage acceptance without other more complete and detailed MSA methods.

Assumptions:

• Process stability and variation cannot be known at this point in time.

• Linearity and bias are not issues.

• … assessment of stability.83

Analyze by: …

Take one part, one operator; place part in fixture, measure; take part out of fixture; repeat this 9 more times with the same part and same operator.

83 Some judgment may be needed here, as 10 subgroups of individual data is insufficient to establish stability; however, obvious instabilities may still be assessed and provide value to the analysis.
Appendix E
Alternate PV Calculation Using Error Correction Term
An alternate calculation of the part variation is:

    PV = √( ( RP × K3 )² − EV² / (k × r) )

where RP = range of the part averages, k = # of appraisers, r = # trials.

Recognition of this method of calculating PV was published in 1997.84 This method is presented here as a more statistically correct alternative to the commonly accepted definition of PV historically used in this manual. Generally, when EV contaminates PV, it does so to the extent of only a …

84 "Reliable Data is an Important Commodity," Donald S. Ermer and Robin Yang E-Hok, University of Wisconsin, Madison, published in The Standard, ASQ Newsletter of the Measurement Quality Division, Vol. 97-1, Winter, 1997.
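A sketch of the corrected calculation (the numeric inputs are illustrative, not from the manual; K3 = 0.3146 is the commonly tabulated constant for the range of 10 part averages):

```python
import math

def corrected_pv(r_p, k3, ev, k, r):
    """Part variation with the EV (repeatability) contamination of the
    range of part averages removed."""
    return math.sqrt((r_p * k3) ** 2 - ev ** 2 / (k * r))

# Illustrative study: 10 parts, 3 appraisers, 3 trials.
pv = corrected_pv(r_p=3.511, k3=0.3146, ev=0.201, k=3, r=3)

# Historical calculation (RP x K3), which is slightly inflated by EV.
naive_pv = 3.511 * 0.3146
```

As the text notes, the correction is usually small: here the corrected PV is only marginally below the historical value.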
Appendix F
P.I.S.M.O.E.A. Error Model
ly
variation for any measurement system. There are various methods of
n
presenting and categorizing these sources of variation using simple cause
O
& effect, matrix, or tree diagrams.
e
s
U
The acronym P.I.S.M.O.E.A.85 represents another useful model for
l
a
defining a measurement system by its basic sources of variation. It is not
rn
the only model, but does support universal application.
te
In
Error Source | Alias or Component | Factor or Parameter
P  Part | production part, sample, measurand, unit, acceptance criteria | Unknown
M  … | … program … | …
E  Environment | … power, Electromagnetic Interference (EMI), noise | …
The alias error source varies with the measurement application and industry. These are common examples. However, the parameter and characteristic effect is always the same.
85 P.I.S.M.O.E.A. was originally developed by Mr. Gordon Skattum, Senior ASQ CQE, metrologist and Director of Integrated Manufacturing for Rock Valley College Technology Center.
statistical assumptions related to measurement systems (e.g., calibration, operational, coefficients and rates of expansion, physical laws and constants). The measurement planner should be able to identify, control, correct, or modify the MSA method to accommodate significant violations of the assumptions in the measurement process.
Violation of the test-retest criteria is always a consideration for destructive test or in-process measurement systems where the feature is changing. The measurement planner ought to consider appropriate hybrids, modifications, or alternative techniques to the standard
E | Often controlled conditions | … | … a principal error source
A | Statistical, often ignored | Statistical, application specific | Cannot be assumed, a principal error source
Purpose | Process control (SPC) | Product control, 100% inspection | Product control, calibration tolerance
The degree of influence and contribution to measurement variation for specific error sources will depend on the situation. A matrix, cause and effect, or fault tree diagram will be a useful tool to identify and … and improvement.
GLOSSARY
See the Statistical Process Control (SPC) Reference Manual for additional glossary definitions.
Apparent Resolution The size of the least increment on the measurement instrument is the apparent resolution. This value is typically used in literature as advertisement to classify the measurement instrument. The number of data categories can be determined by dividing the size into the expected process distribution spread ( 6σ ).

NOTE: The number of digits displayed or reported does not always indicate the resolution of the instrument. For example, parts measured as 29.075, 29.080, 29.095, etc., are recorded as five (5) digit measurements. However, the instrument may not have a resolution of .001 but rather .005.
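The data-category calculation in the Apparent Resolution entry above can be sketched directly: divide the expected process spread (6σ) by the instrument's least increment. Both values below are assumed for illustration.

```python
# Sketch of the data-category calculation from the Apparent Resolution entry:
# number of data categories = expected process spread (6 sigma) / least increment.
# Values are assumed for illustration.

sigma_process = 0.010      # expected process standard deviation (assumed)
least_increment = 0.005    # apparent resolution of the instrument (assumed)

data_categories = (6 * sigma_process) / least_increment
print(round(data_categories, 1))
```

An instrument whose least increment consumes a large fraction of the process spread yields few categories, which is the practical concern behind the entry's NOTE.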
Appraiser Variation The variation in average measurements of the same part (measurand)

Confidence Interval The range of values expected to include (at some desired probability called a confidence level) the true value of a parameter.
Designed Experiment A planned study involving statistical analysis of a series of tests in which purposeful changes are made to process factors, and the effects observed, in order to determine the relationship of process variables and improve the process.
Discrimination Alias: smallest readable unit; discrimination is the measurement resolution, scale limit, or smallest detectable unit of the measurement device and standard. It is an inherent property of gage design and reported as a unit of measurement or classification. The number of data categories is often referred to as the discrimination ratio since it describes how many classifications can be reliably distinguished given the observed process variation.
Distinct Data Categories The number of data classifications or categories that can be reliably distinguished, determined by the effective resolution of the measurement system and part variation from the observed process for a given application. See ndc.
Effective Resolution The size of the data category when the total measurement system … this ndc (at the 97% confidence level) is 1.41[PV/GRR]. (See Wheeler, …)
… mean square error to the within-group mean square error for a set of … level of confidence.
Independent The occurrence of one event or variable has no effect on the probability
that another event or variable will occur.
Independent and Identically Distributed Commonly referred to as “iid”. A homogeneous group of data which
are independent and randomly distributed in one common distribution.
Linearity The difference in bias errors over the expected operating range of the
measurement system. In other terms, linearity expresses the correlation
of multiple and independent bias errors over the operating range.
Measurand The particular quantity or subject to be measured under specified conditions; a defined set of specifications for a measurement application.
Measurement system A collection of instruments or gages, standards, operations, methods, fixtures, software, personnel, environment, and assumptions used to quantify a unit of measure or fix assessment to the feature characteristic being measured; the complete process used to obtain measurements.
Measurement System Error The combined variation due to gage bias, repeatability, reproducibility, stability and linearity.
ndc Number of distinct categories, 1.41(PV/GRR).
… stable process.
Precision The net effect of discrimination, sensitivity and repeatability over the operating range (size, range and time) of the measurement system. In some organizations precision is used interchangeably with repeatability. In fact, precision is most often used to describe the
Process Control Operational state when the purpose of measurement and decision criteria applies to real-time production to assess process stability and the measurand or feature to the natural process variation; the measurement result indicates the process is either stable and “in-control” or “out-of-control.”
Product Control Operational state when the purpose of measurement and decision criteria is to assess the measurand or feature for conformance to a specification; the measurement result is either “in-tolerance” or “out-of-tolerance.”
Reference Value A measurand value that is recognized and serves as an agreed upon reference or master value for comparison:
• A theoretical or established value based on scientific principles;
• … organization;
• … accepted value; conventional value
Regression Analysis A statistical study of the relationship between two or more variables. A
calculation to define the mathematical relationship between two or
more variables.
Repeatability The common cause, random variation resulting from successive trials
under defined conditions of measurement. Often referred to as
Reproducibility … condition(s) of change in the measurement process. Typically, it has been defined as the variation in average measurements of the same part (measurand) between different appraisers (operators) using the same measuring instrument and method in a stable environment. This is often true for manual instruments influenced by the skill of the operator. It is not true, however, for measurement processes (i.e., automated systems) where the operator is not a major source of variation. For this reason, reproducibility is referred to as the average variation between-systems or between-conditions of measurement.
Resolution May apply to measurement resolution or effective resolution.
… The capability of the measurement system to detect and faithfully … discrimination.)
… probability that the indicated value of any part which differs from a reference part by less than δ will be the same as the indicated value of
Scatter Diagram An X-Y plot of data to assess the relationship between two variables.
Significance level A statistical level selected to test the probability of random outcomes;
also associated with the risk, expressed as the alpha ( α ) risk, that
represents the probability of a decision error.
Reference List
REFERENCE LIST
1. ASTM E691-99, Standard Practice for Conducting an Interlaboratory Study to Determine the Precision of a Test Method, (www.astm.org).
2. ASTM E177-90a, Standard Practice for Use of the Terms Precision and Bias in ASTM Test Methods.
4. AT&T Statistical Quality Control Handbook, Delmar Printing Company, Charlotte, NC, 1984.
5. Burdick, Richard K., and Larsen, Greg A., “Confidence Intervals on Measures of Variability in R&R Studies,” Journal of Quality Technology, Vol. 29, No. 3, July 1997.
6. Dataplot, National Institute of Standards and Technology, Statistical Engineering Division, (www.itl.nist.gov).
7. Deming, W. E., Out of the Crisis, Massachusetts Institute of Technology, 1982, 1986.
8. Deming, W. E., The New Economics for Industry, Government, Education, The MIT Press, 1994, 2000.
9. Duncan, A. J., Quality Control and Industrial Statistics, 5th ed., Richard D. Irwin, Inc., Homewood, Illinois, 1986.
10. Eagle, A. R., “A Method For Handling Errors in Testing and Measuring,” Industrial Quality Control,
11. Ermer, Donald S., and E-Hok, Robin Yang, “Reliable Data is an Important Commodity,” University of Wisconsin, Madison; published in The Standard, ASQ Newsletter of the Measurement Quality Division, Vol. 97-1, Winter, 1997.
12. Foster, Lowell W., GEO-METRICS: The Metric Application of Geometric Tolerancing, Addison-
13. Gillespie, Richard, A History of the Hawthorne Experiments, Cambridge University Press, New York, 1991.
14. Grubbs, F. E., “On Estimating Precision of Measuring Instruments and Product Variability,” Journal
15. Grubbs, F. E., “Errors of Measurement, Precision, Accuracy and the Statistical Comparison of Measuring Instruments,” Technometrics, Vol. 15, February 1973, pp. 53-66.
16. Gruska, G. F., and Heaphy, M. S., “Measurement Systems Analysis,” TMI Conference Workshop, Detroit, Michigan, 1987.
18. Hicks, Charles R., Fundamental Concepts in the Design of Experiments, Holt, Rinehart and Winston, New York, 1973.
19. Hahn, J. H. and Nelson, W., “A Problem in the Statistical Comparison of Measuring Devices,” Technometrics, Vol. 12, No. 1, February 1970, pp. 95-102.
20. Hamada, Michael, and Weerahandi, Sam, “Measurement System Assessment Via Generalized Inference,” Journal of Quality Technology, Vol. 32, No. 3, July 2000.
21. Heaphy, M. S., et al., “Measurement System Parameters,” Society of Manufacturing Engineering - IQ81-154, Detroit, Michigan, 1981.
22. International Vocabulary of Basic and General Terms in Metrology (abbr. VIM), ISO/IEC/OIML/BIPM (1993).
23. Ishikawa, Kaoru, Guide to Quality Control, Asian Productivity Organization, 1986.
24. Jaech, J. L., “Further Tests of Significance for Grubbs’ Estimators,” Biometrics, 27, December, 1971, pp. 1097-1101.
25. James, P. D., “Graphical Displays of Gage R&R Data,” AQC Transaction, ASQC, 1991.
26. Lipson, C., and Sheth, N. J., Statistical Design & Analysis of Engineering Experiment, McGraw-Hill, New York, 1973.
27. Maloney, C. J., and Rostogi, S. C., “Significance Test for Grubbs’ Estimators,” Biometrics, 26,
28. Mandel, J., “Repeatability and Reproducibility,” Journal of Quality Technology, Vol. 4, No. 2, April, 1972, pp. 74-85.
29. McCaslin, J. A., and Gruska, G. F., “Analysis of Attribute Gage Systems,” ASQC Technical
30. Measurement Systems Analysis Reference Manual, Second Edition, Chrysler, Ford, GM, Second Printing, 1998.
31. Minitab User’s Guide 2: Data Analysis and Quality Tools, Release 13 for Windows, Minitab, Inc., February 2000.
32. Montgomery, Douglas C., and Runger, George C., “Gauge Capability and Designed Experiments. Part I: Basic Methods; Part II: Experimental Design Models and Variance Component Estimation,” Quality Engineering, Marcel Dekker, Inc., 1993.
33. Nelson, L., “Use of the Range to Estimate Variability,” Journal of Quality Technology, Vol. 7, No. 1, January, 1975, pp. 46-48.
34. Potential Failure Mode and Effects Analysis (FMEA) Reference Manual, Third Edition, DaimlerChrysler, Ford, GM, 2001.
35. Production Part Approval Process (PPAP), Third Edition, DaimlerChrysler, Ford, GM, 2000.
36. Statistical Process Control (SPC) Reference Manual, Chrysler, Ford, GM, Second Printing, 1995.
37. Thompson, W. A., Jr., “The Problem of Negative Estimates of Variance Components,” Annals of Mathematical Statistics, No. 33, 1962, pp. 273-289.
38. Thompson, W. A., Jr., “Precision of Simultaneous Measurement Procedures,” American Statistical Association Journal, June, 1963, pp. 474-479.
39. Traver, R. W., “Measuring Equipment Repeatability – The Rubber Ruler,” 1962 ASQC Convention Transactions, pp. 25-32.
40. Tsai, Pingfang, “Variable Gauge Repeatability and Reproducibility Study Using the Analysis of Variance Method,” Quality Engineering, Marcel Dekker, Inc., 1988.
41. Vardeman, Stephen B., and VanValkenburg, Enid S., “Two-Way Random-Effects Analyses and Gauge R&R Studies,” Technometrics, Vol. 41, No. 3, August 1999.
42. Wheeler, D. J., “Evaluating the Measurement Process When the Testing is Destructive,” TAPPI, 1990 Polymers Lamination & Coatings Conference Proceedings, pp. 805-807 (www.tappi.org; plc90905.PDF).
43. Wheeler, D. J., and Lyday, R. W., Evaluating the Measurement Process, SPC Press, Inc., Knoxville, Tennessee, 1989.
Sample Forms
The reader has permission to reproduce the forms in this section.

The forms in this section each represent a possible format for GRR data collection and reporting. They are not mutually exclusive to other types of formats which may contain the same information.
Gage Repeatability and Reproducibility Data Collection Sheet

Appraiser/Trial #        PART:  1   2   3   4   5   6   7   8   9   10        AVERAGE
A   1
    2
    3
    Average                                                                    Xa =
    Range                                                                      Ra =
B   1
    2
    3
    Average                                                                    Xb =
    Range                                                                      Rb =
C   1
    2
    3
    Average                                                                    Xc =
    Range                                                                      Rc =
Part Average                                                                   X =
                                                                               Rp =

([Ra = ] + [Rb = ] + [Rc = ]) / [# OF APPRAISERS = ] = R =
XDIFF = [Max X = ] - [Min X = ] =
*UCLR = [R = ] × [D4 = ] =

*D4 = 3.27 for 2 trials and 2.58 for 3 trials. UCLR represents the limit of individual R’s. Circle those that are beyond this limit. Identify the cause and correct. Repeat these readings using the same appraiser and unit as originally used or discard values and re-average and recompute R and the limiting value from the remaining observations.

Notes: _____________________________________________________________________
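The summary rows at the bottom of the data collection sheet can be computed as follows. This is a minimal sketch with assumed readings for two appraisers, three trials, and five parts (the sheet itself accommodates three appraisers and ten parts); the variable names mirror the sheet's labels.

```python
# readings[appraiser] = list of trials; each trial is a list of per-part values.
# Assumed illustrative data: 2 appraisers, 3 trials, 5 parts.
readings = {
    "A": [[0.29, -0.56, 1.34, 0.47, -0.80],
          [0.41, -0.68, 1.17, 0.50, -0.92],
          [0.64, -0.58, 1.27, 0.64, -0.84]],
    "B": [[0.08, -0.47, 1.19, 0.01, -0.56],
          [0.25, -1.22, 0.94, 1.03, -1.20],
          [0.07, -0.68, 1.34, 0.20, -1.28]],
}

n_trials = 3
D4 = 2.58  # control chart constant for 3 trials, from the sheet's footnote

appraiser_avg = {}
appraiser_rbar = {}
for app, trials in readings.items():
    parts = list(zip(*trials))                       # per-part tuples of trials
    ranges = [max(p) - min(p) for p in parts]        # range R for each part
    appraiser_rbar[app] = sum(ranges) / len(ranges)  # Ra, Rb, ...
    appraiser_avg[app] = sum(sum(t) for t in trials) / (n_trials * len(parts))

rbarbar = sum(appraiser_rbar.values()) / len(appraiser_rbar)        # R (double bar)
x_diff = max(appraiser_avg.values()) - min(appraiser_avg.values())  # XDIFF
ucl_r = rbarbar * D4                                                # UCL for individual R's

# Rp: range of the part averages across all appraisers and trials
all_trials = [t for trials in readings.values() for t in trials]
part_avgs = [sum(col) / len(col) for col in zip(*all_trials)]
rp = max(part_avgs) - min(part_avgs)

print(f"Rbarbar={rbarbar:.4f}  XDIFF={x_diff:.4f}  UCL_R={ucl_r:.4f}  Rp={rp:.4f}")
```

Any per-part range exceeding ucl_r would be circled on the sheet and investigated, as the footnote instructs.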
Gage Repeatability and Reproducibility Report

Part No. & Name:            Gage Name:            Date:
Characteristics:            Gage No:              Performed by:
Specifications:             Gage Type:

Repeatability – Equipment Variation (EV)
EV = R × K1                                        %EV = 100 [EV/TV]
   = _______              Trials   K1                  = _______%
                            3     0.5908

Reproducibility – Appraiser Variation (AV)
AV = √[ ( XDIFF × K2 )² − ( EV²/(nr) ) ]           %AV = 100 [AV/TV]
   = √[ ( _____ × _____ )² − ( _____²/(__ × __) ) ]    = 100 [_______/_______]
   = _______              Appraisers   2       3       = _______%
                          K2         0.7071  0.5231
n = parts   r = trials

Repeatability & Reproducibility (GRR)
GRR = √( EV² + AV² )                               %GRR = 100 [GRR/TV]
    = √( _____² + _____² )                             = 100 [_______/_______]
    = _______             Parts   K3                   = _______%
                            6    0.3742
                            7    0.3534
                            8    0.3375

PV = Rp × K3                                       %PV = 100 [PV/TV]

TV = √( GRR² + PV² )                               ndc = 1.41 (PV/GRR)

For information on the theory and constants used in the form see MSA Reference Manual, Third edition.
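The report-side arithmetic above can be sketched end to end. The inputs below (Rbarbar, XDIFF, Rp) are assumed values carried over from a hypothetical data collection sheet, and K3 = 0.3146 for 10 parts is assumed, since the form above lists K3 only for 6–8 parts.

```python
# Sketch of the GRR report calculations using the formulas on the form above.
# All inputs are assumed for illustration, not taken from the manual.
import math

Rbarbar = 0.3417   # average range from the data collection sheet (assumed)
XDIFF = 0.4446     # range of the appraiser averages (assumed)
Rp = 3.511         # range of the part averages (assumed)
n, r = 10, 3       # number of parts, number of trials

K1 = 0.5908        # 3 trials (from the form)
K2 = 0.5231        # 3 appraisers (from the form)
K3 = 0.3146        # 10 parts (assumed; the form lists only 6-8)

EV = Rbarbar * K1
# If the radicand is negative, AV is taken as zero by common practice.
AV = math.sqrt(max((XDIFF * K2) ** 2 - EV ** 2 / (n * r), 0.0))
GRR = math.sqrt(EV ** 2 + AV ** 2)
PV = Rp * K3
TV = math.sqrt(GRR ** 2 + PV ** 2)
ndc = 1.41 * PV / GRR

for name, val in [("EV", EV), ("AV", AV), ("GRR", GRR), ("PV", PV)]:
    print(f"%{name} = {100 * val / TV:.1f}%")
print(f"ndc = {int(ndc)}")   # ndc is conventionally truncated to an integer
```

Note that %EV, %AV, %GRR, and %PV are each a percentage of TV, so they do not sum to 100%.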
Index
A
Accuracy- 6, 48 (Also see Bias)
Additive Model- 188
Analysis of Variance (ANOVA)- 117
Analytic Method- 135
Appraiser Variation (AV)- 7, 110, 188
Artifact- 6, 41
ASTM (American Society for Testing and Materials)- iii, 7, 31, 48, 54
Attribute- 125, 135, 149
Average- 99, 102, 110, 161
B
Bias- 6, 49, 51, 60, 85-91
C
Calibration- 25, 37, 38, 41
Calibration Standard- 41
Capability (Measurement System)- 7, 8, 16
Cause and Effect- 15, 65
Charts:
  Average- 102
  Error- 108
  Run- 105
Check Standard- 42
Complexity- 23
Consistency- 8, 56
Consumer’s Risk- 17
D
Designed Experiment- 69
Discrimination- 5, 43
E
Ensemble- 41
Environment- 36
Equipment Variation (EV)- 7, 52 (see Repeatability)
Ergonomics- 37
Error Charts- 108
F
Fixturing- 36, 69
Flexibility- 31
Flow Chart- 24, 25
FMEA- 11, 24
Funnel Experiment- 21
G
Gage- 5
Gage Repeatability (GR)- Appendix D, 197
Gage Repeatability and Reproducibility (GRR)- 7, 33, 55, 56, 63, 122, 151, 177
Geometric Dimensioning and Tolerancing (GD&T)- 28, 159
Guide to the Expression of Uncertainty of Measurement (GUM)- 63
H
I
J, K
L
M
Maintenance- 8, 24, 25, 29, 30, 32, 35, 37, 50, 53, 55, 69, 91
Master- 3, 23, 25, 29, 30, 32, 35, 41, 50, 53, 55, 69, 83, 85, 208
Master Standard- 62
Master Value- 3, 42
Measurand- 59, 61, 207
Measurement Life Cycle- 24
Measurement System- iii, 3, 4, 5, 6, 8
Miss Rate- 17, 130, 132
MRA (Mutual Recognition Arrangement)- 9
N
NBS (National Bureau of Standards)- 9
NIST (National Institute of Standards and Technology)- 9, 10, 32, 43
NMI (National Measurements Institute)- 9, 10, 43
Non-replicable- 143, 144, 147, 148, 152, 207
O
Observed Process Variance- 19
Observed Value- 18, 108, 109
Operating Range- 6, 8, 50, 52, 73, 74, 87, 92, 94
Operational Definition- 5, 6, 10, 12, 13, 40, 43, 65
P
Performance (Measurement)- 8, 16, 31, 40, 57-58, 77
Performance Curve- 40, 135, 136, 138, 138-140, 169-174
P.I.S.M.O.E.A. Error Model- Appendix F, 201
Plan-Do-Study-Act (PDSA)- 66
Precision- 7, 52, 59, 71, 207
Process Control- 5, 12, 13, 16, 18, 22, 24, 39, 47, 62, 73, 74, 96, 162, 207
R
Randomization- 117
Range- 46
Reference Value- 5, 6, 9, 10, 42, 43, 49, 83, 84, 85, 108, 135-138, 145, 169, 208
Repeatability- 7, 48, 52, 56, 58, 60 (and bias), 84, 86, 96-117, 139, 145, 152, 179, 208
Reproducibility- 7, 16, 53, 54, 55, 96-117, 169, 179, 209
Resolution:
S
Scatter Plot- 106
Sensitivity- 5, 8, 13, 37, 39, 44, 46, 52, 86, 90, 209
Specification Limits- 13, 17
Stable- 3, 6, 8, 11, 12, 16, 21, 22, 30, 39, 42, 46, 77, 84, 88, 96, 147, 209
Standard- 5, 9, 11, 12, 30, 31, 34, 38, 41-42, 53, 62, 71-72
Statistical Control- 3, 6, 13, 16, 18, 48, 104, 125
S.W.I.P.E.- 14
T
Target- 10, 18, 21
Ten to One Rule (10 to 1)- 43
Tolerance- 13, 23, 31, 39, 46, 73, 77, 116, 126, 210
Tolerance Range- 191, 192
Traceability- 9, 10, 62
Transfer Standard- 41
True Value- 5, 6, 8, 10, 40, 43, 62, 163, 205
Type I Error- 17
Type II Error- 17
U
V
Variation- 6, 7, 8, 11, 13, 16, 18, 19, 24, 40, 45, 48, 50, 56, 65, 73
W
Working Standard- 4
X
Feedback

Your Name _____________________________________________________
Representing ________________________________________________
             Company/Division Name
Address _____________________________________________________
        _____________________________________________________
        _____________________________________________________
Phone ___(____)________________

Please list your top three automotive customers and their locations.
________________________   ________________________
Customer                   Location
________________________   ________________________
Customer                   Location
________________________   ________________________
Customer                   Location

_____________________________________________________
_____________________________________________________
_____________________________________________________
_____________________________________________________
_____________________________________________________

Suite 200