Final Exam Review

Acceptance Sampling
Why Acceptance Sampling and not 100% Inspection?
Testing is destructive
Cost of 100% inspection is high
100% inspection is not feasible (requires too much time)
The vendor has an excellent quality history
Advantages and Disadvantages of Sampling
Advantages
Less expensive
Reduced damage
Reduces the amount of inspection error
Disadvantages
Risk of accepting bad lots, rejecting good lots.
Less information generated
Requires planning and documentation
Acceptance Sampling
Problem:
A lot (shipment) is received.
A sample is taken from the lot.
Some quality characteristic of the units in the sample is inspected.
On the basis of this inspection information, the lot is sentenced: accept or reject.
Lot formation
Lots should be homogeneous.
Larger lots are preferred over smaller ones.
Lots should be conformable to materials-handling systems
used in both supplier and consumer facilities.
Random Sampling
OC Curve
If the lot size N is large, the number of defectives d in a random sample of size n follows a binomial distribution with parameters n and p:

P(d = k) = \frac{n!}{k!(n-k)!} p^k (1-p)^{n-k}, \quad k = 0, 1, 2, \ldots, n
The OC curve plots the probability of accepting the lot, Pa, versus the lot fraction defective p (the true proportion nonconforming).
Example: An apple producer has 500 baskets of apples, each containing 20 apples (10,000 apples in the lot). A buyer inspects 10 of the apples and accepts the lot if 2 or fewer are bruised. That is, n = 10, N = 10000, c = 2. Suppose 20% of the apples are bruised; what is the probability of accepting such a lot?
Solution: Let d be the number of bruised apples in the sample. Calculate:

Pa(0.2) = P(d \le 2)

In general,

Pa = P(d \le c) = \sum_{k=0}^{c} \frac{n!}{k!(n-k)!} p^k (1-p)^{n-k}

The OC curve can be plotted in MATLAB:

>> p = 0:.001:1;
>> c = 2; n = 10;
>> plot(p, cdf('Binomial', c, n, p));
>> xlabel('p'); ylabel('Probability of Acceptance');

[Figure: OC curve, probability of acceptance Pa versus p, for n = 10, c = 2.]
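The same calculation can be sketched in Python as an alternative to the MATLAB snippet above; n, c, and the 20% bruise rate are taken from the apple example.

```python
# Minimal sketch: acceptance probability Pa = P(d <= c) for a
# binomial(n, p) number of defectives d in the sample.
from math import comb

def prob_accept(p, n=10, c=2):
    """Probability of accepting the lot when the true fraction defective is p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

pa = prob_accept(0.2)  # Pa(0.2) for the apple example, roughly 0.68
```

Evaluating `prob_accept` over a grid of p values traces out the OC curve, just as the MATLAB plot does.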
[Figure: the mean vector M is computed from the n \times p data matrix by taking the mean over the rows in each dimension i, i.e. M_i is the column-wise mean of column i.]
Covariance Matrix: S

Note: Since the means of the dimensions (columns) in the adjusted data set X are 0, the p \times p covariance matrix can simply be written as

S = \frac{1}{n-1} X^T X

S is then decomposed into its eigenvalues and eigenvectors, A x = \lambda x. The eigenvalues \lambda_j are used for calculation of the percentage of total variance (v_j) for each component j.
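These steps can be sketched in Python; the small 6 \times 2 data matrix below is hypothetical, chosen only to illustrate the computation.

```python
# PCA sketch: center the columns, form the covariance matrix S, and
# convert the eigenvalues into percent-of-total-variance per component.
import numpy as np

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9],
              [1.9, 2.2], [3.1, 3.0], [2.3, 2.7]])  # n rows, p columns

Xc = X - X.mean(axis=0)            # adjusted data: column means become 0
S = Xc.T @ Xc / (len(X) - 1)       # covariance matrix (p x p)
lam, V = np.linalg.eigh(S)         # eigenvalues (ascending) and eigenvectors
pct_var = lam / lam.sum() * 100    # v_j: % of total variance per component
```

The columns of `V` are the principal directions; sorting by `lam` in descending order ranks the components.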
Example

Car Plant Electricity Usage

The manager of a car plant wishes to investigate how the plant's electricity usage depends upon the plant's production. The linear model y = \beta_0 + \beta_1 x will allow a month's electrical usage to be estimated as a function of the month's production.
Setting the partial derivative of the least-squares criterion with respect to \beta_1 to zero gives the second normal equation:

\frac{\partial L}{\partial \beta_1}\Big|_{\hat\beta_0, \hat\beta_1} = -2 \sum_{i=1}^{n} (y_i - \hat\beta_0 - \hat\beta_1 x_i) x_i = 0

The total variability decomposes as

\sum_{i=1}^{n} (y_i - \bar{y})^2 = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2 + \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

SST = SS_R + SS_E
And the sample correlation coefficient is given by:

r = \frac{S_{xy}}{\sqrt{S_{xx} S_{yy}}} = \hat\beta_1 \sqrt{\frac{S_{xx}}{SST}}
Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F0
Regression          | SS_R           | 1                  | MS_R        | MS_R / MS_E
Error or residual   | SS_E           | n - 2              | MS_E        |
Total               | SST            | n - 1              |             |

where

SS_R = \hat\beta_1 S_{xy} = \frac{S_{xy}^2}{S_{xx}}, \quad SST = S_{yy} = \sum_{i=1}^{n} (y_i - \bar{y})^2, \quad SS_E = SST - SS_R
Example: Car Plant Electricity Usage
Coefficient of Determination R^2

The total variability in the dependent variable, the total sum of squares

SST = \sum_{i=1}^{n} (y_i - \bar{y})^2

can be partitioned into the variability explained by the regression line, the regression sum of squares

SS_R = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2 = \hat\beta_1 S_{xy} = \frac{S_{xy}^2}{S_{xx}}

and the variability about the regression line, the error sum of squares

SS_E = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2.

The proportion of the total variability accounted for by the regression line is the coefficient of determination

R^2 = \frac{SS_R}{SST} = 1 - \frac{SS_E}{SST}

which takes a value between zero and one.

Car Plant Electricity Usage: R^2 = \frac{SS_R}{SST} = \frac{1.2124}{1.5115} = 0.802
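The least-squares and R^2 formulas above can be applied directly; the (x, y) data below are hypothetical, not the car plant data.

```python
# Simple linear regression sketch: beta1 = Sxy/Sxx, beta0 = ybar - beta1*xbar,
# and R^2 = SSR/SST, following the formulas in the text.
import numpy as np

x = np.array([4.5, 5.1, 6.0, 6.8, 7.4])  # hypothetical production values
y = np.array([2.4, 2.7, 3.1, 3.2, 3.9])  # hypothetical electricity usage

xbar, ybar = x.mean(), y.mean()
Sxx = np.sum((x - xbar) ** 2)
Syy = np.sum((y - ybar) ** 2)            # equals SST
Sxy = np.sum((x - xbar) * (y - ybar))

beta1 = Sxy / Sxx                        # slope estimate
beta0 = ybar - beta1 * xbar              # intercept estimate
SSR = beta1 * Sxy                        # = Sxy**2 / Sxx
R2 = SSR / Syy                           # coefficient of determination
```

Note that R^2 here equals the square of the sample correlation coefficient r, consistent with the relation between r and R^2 above.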
ANOVA model:

Let y_{ij} be a random variable denoting the j-th observation in the sample of the i-th treatment. An ANOVA is based on the following linear model:

y_{ij} = \mu + \tau_i + \varepsilon_{ij}

ANOVA assumptions:

\varepsilon_{ij} \sim N(0, \sigma^2) \quad \forall i, j

In words: we assume that the errors (residuals) come from a normal distribution with mean zero and constant variance (unaffected by the treatments).

ANOVA hypothesis (fixed effects model):

An ANOVA tests the following hypotheses:

H_0: \tau_1 = \tau_2 = \cdots = \tau_a = 0
H_1: at least one \tau_i \neq 0
Let y_{i.} represent the total of the observations under the i-th treatment and \bar{y}_{i.} represent the average of the observations under the i-th treatment. Similarly, let y_{..} represent the grand total of all observations and \bar{y}_{..} represent the grand mean of all observations. Expressed mathematically:

y_{i.} = \sum_{j=1}^{n} y_{ij}, \quad \bar{y}_{i.} = y_{i.}/n, \quad i = 1, 2, \ldots, a

y_{..} = \sum_{i=1}^{a} \sum_{j=1}^{n} y_{ij}, \quad \bar{y}_{..} = y_{..}/N

where N = an is the total number of observations. Thus, the dot subscript notation implies summation over the subscript that it replaces.
We are interested in testing the equality of the a treatment means \mu_1, \mu_2, \ldots, \mu_a. This is equivalent to testing the hypotheses

H_0: \tau_1 = \tau_2 = \cdots = \tau_a = 0
H_1: at least one \tau_i \neq 0

Thus, if the null hypothesis is true, each observation consists of the overall mean \mu plus a realization of the random error component \varepsilon_{ij}. This is equivalent to saying that all N observations are taken from a normal distribution with mean \mu and variance \sigma^2. Therefore, if the null hypothesis is true, changing the levels of the factor has no effect on the mean response.
ONE-WAY ANOVA
The total sum of squares decomposes into between-treatment and within-treatment parts:

\sum_{i=1}^{a} \sum_{j=1}^{n} (y_{ij} - \bar{y}_{..})^2 = n \sum_{i=1}^{a} (\bar{y}_{i.} - \bar{y}_{..})^2 + \sum_{i=1}^{a} \sum_{j=1}^{n} (y_{ij} - \bar{y}_{i.})^2

The total number of observations is N = an. It can also be shown that the degrees of freedom can be decomposed as:

TOTAL = WITHIN + BETWEEN
an - 1 = a(n - 1) + (a - 1)
ONE-WAY ANOVA
The previous sums of squares are totals within their respective categories. Using the degrees of freedom, the between and within sums of squares can be converted to mean squares as follows:

MS_{Treatments} = \frac{SS_{Tr}}{a - 1}, \quad MS_E = \frac{SS_E}{a(n - 1)}

If H_0 is true, both of the above are unbiased estimators of the same population variance, and therefore their ratio has an F-distribution. In symbols:

\frac{MS_{Treatments}}{MS_E} \sim F \text{ with } a - 1 \text{ and } a(n - 1) \text{ degrees of freedom}
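The one-way ANOVA quantities above can be sketched as follows; the balanced three-treatment data set is hypothetical.

```python
# One-way ANOVA sketch for a balanced design: a treatments, n observations
# each, following SST = SS_Treatments + SS_E and the mean-square formulas.
import numpy as np

data = np.array([[7, 8, 15, 11],      # treatment 1
                 [12, 17, 13, 18],    # treatment 2
                 [14, 18, 19, 17]], dtype=float)  # treatment 3
a, n = data.shape

grand_mean = data.mean()                                   # y-bar..
treat_means = data.mean(axis=1)                            # y-bar_i.
ss_treat = n * np.sum((treat_means - grand_mean) ** 2)     # between treatments
ss_error = np.sum((data - treat_means[:, None]) ** 2)      # within treatments

ms_treat = ss_treat / (a - 1)
ms_error = ss_error / (a * (n - 1))
F0 = ms_treat / ms_error  # compare with the F(a-1, a(n-1)) critical value
```

Rejecting H0 when F0 exceeds the tabulated F critical value implements the test described above.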
The expected value of the treatment mean square is

E\left(\frac{SS_{Treatments}}{a - 1}\right) = \sigma^2

when the null hypothesis is true. If the alternative hypothesis is true, then

E\left(\frac{SS_{Treatments}}{a - 1}\right) = \sigma^2 + \frac{n \sum_{i=1}^{a} \tau_i^2}{a - 1}

We can also show that the expected value of the error sum of squares is E(SS_E) = a(n - 1)\sigma^2. Therefore, the error mean square

MS_E = \frac{SS_E}{a(n - 1)}

is an unbiased estimator of \sigma^2 regardless of whether or not H_0 is true.
Therefore, the residual is e_{ij} = y_{ij} - \bar{y}_{i.}; that is, the difference between an observation and the corresponding factor-level mean. The residuals for the hardwood percentage experiment are shown in Table 3-8.
The linear model for the two-factor factorial design is

y_{ijk} = \mu + \tau_i + \beta_j + (\tau\beta)_{ij} + \varepsilon_{ijk}, \quad i = 1, 2, \ldots, a; \; j = 1, 2, \ldots, b; \; k = 1, 2, \ldots, n
Statistical Analysis

The corresponding degrees-of-freedom decomposition is

abn - 1 = (a - 1) + (b - 1) + (a - 1)(b - 1) + ab(n - 1)

The ANOVA is usually performed with computer software, although simple computing formulas for the sums of squares may be obtained easily. The computing formulas for these sums of squares follow.
SST = \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{k=1}^{n} y_{ijk}^2 - \frac{y_{...}^2}{abn}

Main effects:

SS_A = \sum_{i=1}^{a} \frac{y_{i..}^2}{bn} - \frac{y_{...}^2}{abn}

SS_B = \sum_{j=1}^{b} \frac{y_{.j.}^2}{an} - \frac{y_{...}^2}{abn}

Interaction:

SS_{AB} = \sum_{i=1}^{a} \sum_{j=1}^{b} \frac{y_{ij.}^2}{n} - \frac{y_{...}^2}{abn} - SS_A - SS_B

Error:

SS_E = SST - SS_A - SS_B - SS_{AB}
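These computing formulas can be sketched directly on an a \times b \times n array of observations; the data below are hypothetical, with a = b = n = 2.

```python
# Two-factor factorial sums of squares via the computing formulas above.
# y[i, j, k] is the k-th replicate at level i of factor A, level j of B.
import numpy as np

y = np.array([[[28, 25], [36, 32]],
              [[18, 19], [31, 30]]], dtype=float)  # hypothetical data
a, b, n = y.shape
C = y.sum() ** 2 / (a * b * n)                     # correction term y...^2/abn

SST  = np.sum(y ** 2) - C
SSA  = np.sum(y.sum(axis=(1, 2)) ** 2) / (b * n) - C   # uses totals y_i..
SSB  = np.sum(y.sum(axis=(0, 2)) ** 2) / (a * n) - C   # uses totals y_.j.
SSAB = np.sum(y.sum(axis=2) ** 2) / n - C - SSA - SSB  # uses cell totals y_ij.
SSE  = SST - SSA - SSB - SSAB
```

Dividing each sum of squares by its degrees of freedom from the decomposition above gives the mean squares for the F tests.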
The 2k Factorial Design
The 2^2 Design

The geometry of the 2^2 design is shown in Fig. 12-17a. Note that the design can be represented geometrically as a square with the 2^2 = 4 runs forming the corners of the square. Fig. 12-17b shows the 4 runs in a tabular format often called the test matrix.
The main effects and interaction are estimated from the treatment-combination totals (1), a, b, and ab:

A = \bar{y}_{A^+} - \bar{y}_{A^-} = \frac{a + ab}{2n} - \frac{b + (1)}{2n} = \frac{1}{2n}[a + ab - b - (1)]

B = \bar{y}_{B^+} - \bar{y}_{B^-} = \frac{b + ab}{2n} - \frac{a + (1)}{2n} = \frac{1}{2n}[b + ab - a - (1)]

AB = \frac{ab + (1)}{2n} - \frac{a + b}{2n} = \frac{1}{2n}[ab + (1) - a - b]

Contrast_A = a + ab - b - (1)
In these equations, the contrast coefficients are always either +1 or -1. A table of
plus and minus signs, such as Table 12-7, can be used to determine the sign of each
run for a particular contrast.
Let k be the number of factors. Then the effects and the sums of squares for A, B, and AB are obtained as follows:

Effect = \frac{Contrast}{n \, 2^{k-1}}

SS = \frac{(Contrast)^2}{n \, 2^k} = n \, 2^{k-2} (Effect)^2
Example 12-6
Analysis of Variance

We can compute the factor effect estimates as follows:

A = \frac{1}{2n}[a + ab - b - (1)] = \frac{1}{2(4)}[96.1 + 161.1 - 59.7 - 64.4] = \frac{133.1}{8} = 16.64

B = \frac{1}{2n}[b + ab - a - (1)] = \frac{1}{2(4)}[59.7 + 161.1 - 96.1 - 64.4] = \frac{60.3}{8} = 7.54

AB = \frac{1}{2n}[ab + (1) - a - b] = \frac{1}{2(4)}[161.1 + 64.4 - 96.1 - 59.7] = \frac{69.7}{8} = 8.71

The total sum of squares is

SS_{Total} = \sum_{i=1}^{2} \sum_{j=1}^{2} \sum_{k=1}^{n} y_{ijk}^2 - \frac{y_{...}^2}{4n}
Example: The main effects may be estimated using the corresponding equations. The effect of A, for example, is

A = \frac{1}{4n}[a + ab + ac + abc - b - c - bc - (1)]
  = \frac{1}{4(2)}[22 + 27 + 23 + 30 - 20 - 21 - 18 - 16]
  = \frac{27}{8} = 3.375

and the sum of squares for A is given by

SS_A = \frac{(Contrast_A)^2}{n \, 2^k} = \frac{(27)^2}{2(8)} = 45.5625
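The 2^2 effect and sum-of-squares formulas can be checked numerically with the run totals from Example 12-6.

```python
# 2^2 design sketch using the treatment-combination totals from Example 12-6:
# (1) = 64.4, a = 96.1, b = 59.7, ab = 161.1, with n = 4 replicates.
n, k = 4, 2
one, a, b, ab = 64.4, 96.1, 59.7, 161.1

contrast_A = a + ab - b - one                 # Contrast_A = a + ab - b - (1)
effect_A = contrast_A / (n * 2 ** (k - 1))    # about 16.64, as on the slide
SS_A = contrast_A ** 2 / (n * 2 ** k)         # Contrast_A^2 / (n * 2^k)
```

The other effects (B, AB) follow from the same pattern with their own contrasts.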