
ChE 425 - Equation Sheet

Chapter 1—Statistical Background


Mean, arithmetic mean, Expectation (μ):
Discrete: E(X) = μ = Σ x·p(x)
Continuous: E(X) = μ = ∫ x·f(x) dx

Variance (σ²):
V(X) = σ² = E{[X − E(X)]²}
V(X) = E[X² − 2E(X)·X + (E(X))²]
V(X) = E(X²) − 2μ·E(X) + μ²·E(1)
V(X) = E(X²) − [E(X)]²

Basic operators of E():
E(c) = c
E(aX) = a·E(X)
E(aX ± bY) = E(aX) ± E(bY) = a·E(X) ± b·E(Y)

Basic operator of V():
Var(aX ± bY) = a²·Var(X) + b²·Var(Y) ± 2ab·Cov(X, Y)

Cov() operator:
Cov(X, Y) = E{[X − E(X)][Y − E(Y)]}
Cov(X, Y) = E(XY) − E(X)·E(Y)
Properties of point estimators (e.g. X̄ for μ, S² for σ²):
* Unbiased: the long-run average or expected value of the point estimator should equal the parameter being estimated
* Minimum variance: its variance is smaller than the variance of any other estimator of that parameter

                        Sample                                      Population
Mean:                   X̄ = (1/n)·Σ Xᵢ, i = 1…n                     μ
Variance:               S² = Σ(Xᵢ − X̄)²/(n − 1)
                           = [Σ Xᵢ² − (Σ Xᵢ)²/n]/(n − 1)            σ²
Standard Deviation:     S = √(S²) = √(SS/(n − 1))                   σ
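As a quick numerical check of the sample-statistics column, here is a minimal Python sketch (the six measurements are invented for illustration) that computes X̄, S², and S and verifies the shortcut form SS = ΣXᵢ² − (ΣXᵢ)²/n:

```python
import statistics

# Hypothetical sample of 6 measurements (illustrative data, not from the course)
x = [9.8, 10.2, 10.0, 9.9, 10.4, 9.7]
n = len(x)

xbar = sum(x) / n                          # sample mean, X̄
ss = sum((xi - xbar) ** 2 for xi in x)     # corrected sum of squares, SS
s2 = ss / (n - 1)                          # sample variance, S² = SS/df
s = s2 ** 0.5                              # sample standard deviation

# Shortcut form: SS = ΣXᵢ² − (ΣXᵢ)²/n
ss_shortcut = sum(xi ** 2 for xi in x) - sum(x) ** 2 / n

assert abs(ss - ss_shortcut) < 1e-9
assert abs(s2 - statistics.variance(x)) < 1e-12   # stdlib also divides by n−1
print(xbar, s2, s)
```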
Remarks:
Degrees of freedom (df): df = (n − 1)
Sum of squares (SS): SS = Σ(Xᵢ − X̄)², i = 1…n; also SS = Σ Xᵢ² − n·X̄²
Variance: Var = SS/df
Grand total (gt): gt = Σ Xᵢ, i = 1…n
Uncorrected sum of squares (USS): USS = Σ Xᵢ²
Correction for the mean: gt²/n
Corrected SS: [1/(n − 1)]·{USS − n·X̄²}

What is the mean? Variance of the mean? Standard deviation?
E(mean): E(X̄) = μ
Var(mean): Var(X̄) = σ²/n
E(S²) = σ²
E(Xᵢ²) = Var(Xᵢ) + (E(Xᵢ))² = σ² + μ²
E(X̄²) = σ²/n + μ²
Var(X) = σ²

Coefficient of Variation: CoV = S/X̄
Signal-to-Noise Ratio (SNR): X̄/S (signal over noise, S/N)
Normal Distribution:
f(x) = [1/(σ·√(2π))]·exp(−(1/2)·((x − μ)/σ)²)

Random Variable Z:
Z = (X − μ)/σ
E(Z) = μ_z = 0;  V(Z) = σ_z² = 1
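A short sketch of standardization, using the stdlib `statistics.NormalDist` for the standard normal CDF (μ, σ, and x below are assumed illustrative values, not from the course):

```python
from statistics import NormalDist

# Standardizing an observation: Z = (X − μ)/σ maps any N(μ, σ²) onto N(0, 1).
# Illustrative values: μ = 50, σ = 4, observed x = 58.
mu, sigma, x = 50.0, 4.0, 58.0
z = (x - mu) / sigma                 # z = 2.0

# P(X ≤ x) can then be read from the standard normal CDF
p = NormalDist().cdf(z)
print(z, p)                          # z = 2.0, p ≈ 0.9772
```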

Normal Probability Plots (NPP):
P(rank) = (i − 0.5)/n
n = sample size (# of data points)
i = rank index of the data point (data sorted least to greatest)
Tabulate: Rank (1 to N); Xᵢ (least to greatest); P(rank); Expected value (from Z-tables)

Confidence Interval for the Mean:
95.44% confidence interval: X̄ ± 2·σ/√n
99.74% confidence interval: X̄ ± 3·σ/√n
Confidence Level:
CL = (1 − α)

Confidence Interval (Z-value):
X̄ ± Z_{α/2}·σ/√n

Confidence Interval (t-distribution): when the sample size is small and the variance is unknown:
X̄ ± t_{α/2,n−1}·S/√n

Hypothesis Testing (fail to reject H₀ when):
|(X̄_sample − μ₀)/(S/√n)| < t_{n−1,α/2}
P-values:
For 2-sided test: P_value = 2·[1 − Probability(probe)]
For 1-sided test: P_value = 1 − Probability(probe)

Guidelines for rejecting the null hypothesis:
P < 0.01 [very strong evidence]
0.01 < P < 0.025 [strong evidence]
0.025 < P < 0.05 [moderate]
0.05 < P < 0.1 [weak]
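The 1-sided and 2-sided p-value formulas can be sketched directly, taking the standard normal as the reference distribution for the probe statistic (the z_probe value is assumed for illustration):

```python
from statistics import NormalDist

# Two-sided and one-sided p-values from a probe statistic, with the standard
# normal as the reference distribution (illustrative z_probe value).
z_probe = 2.17
prob = NormalDist().cdf(abs(z_probe))    # Probability(probe)

p_two_sided = 2 * (1 - prob)
p_one_sided = 1 - prob
print(p_two_sided, p_one_sided)          # ≈ 0.0300, ≈ 0.0150
```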

Estimating the Difference between 2 Means


Case 1: σ₁² and σ₂² are known:
(x̄₁ − x̄₂) ± Z_{α/2}·√(σ₁²/n₁ + σ₂²/n₂)

Case 2: σ₁² and σ₂² are equal and unknown:
(x̄₁ − x̄₂) ± t_{α/2,ν}·S_p·√(1/n₁ + 1/n₂)
S_p² = [(n₁ − 1)·S₁² + (n₂ − 1)·S₂²]/(n₁ + n₂ − 2)
ν = n₁ + n₂ − 2
t = [(x̄₁ − x̄₂) − (μ_x1 − μ_x2)₀]/(S_p·√(1/n_x1 + 1/n_x2))
t_crit = t_{α/2, n₁+n₂−2}

Case 3: σ₁² and σ₂² are unequal and unknown:
(x̄₁ − x̄₂) ± t_{α/2,ν}·√(S₁²/n₁ + S₂²/n₂)
ν = (S₁²/n₁ + S₂²/n₂)² / {[(S₁²/n₁)²·(1/(n₁ − 1))] + [(S₂²/n₂)²·(1/(n₂ − 1))]}
Note: ν is rounded up to the nearest integer (by convention)
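A sketch of the Case 2 (pooled) and Case 3 (Welch) degrees-of-freedom calculations; the two sample variances and sizes are assumed illustrative values:

```python
import math

# Sketch of the Case 2 and Case 3 calculations with two illustrative samples.
s1_sq, n1 = 4.0, 10      # S1², n1 (assumed values)
s2_sq, n2 = 9.0, 12      # S2², n2 (assumed values)

# Case 2: pooled variance and its degrees of freedom
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
nu_pooled = n1 + n2 - 2

# Case 3: Welch-Satterthwaite degrees of freedom
a, b = s1_sq / n1, s2_sq / n2
nu_welch = (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))
nu_welch_rounded = math.ceil(nu_welch)   # rounded up, per the sheet's convention
print(sp_sq, nu_pooled, nu_welch_rounded)
```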
F-Test:
F_probe = S₁²/S₂²;  F_crit = F_{α,n₁−1,n₂−1}
If F_probe > F_crit, we reject H₀

Hypothesis test:
H₀: σ₁²/σ₂² = 1
H₁: σ₁²/σ₂² > 1

Confidence interval:
(S₁²/S₂²)·F_{1−(α/2),n₁−1,n₂−1} ≤ σ₁²/σ₂² ≤ (S₁²/S₂²)·F_{(α/2),n₁−1,n₂−1}
F_{1−(α/2),n₁−1,n₂−1} = 1/F_{(α/2),n₂−1,n₁−1}

Note: the smaller of the observed variances goes in the denominator (by convention)
Confidence Intervals on Variance:
(n − 1)·S²/χ²_{α/2,n−1} ≤ σ² ≤ (n − 1)·S²/χ²_{1−(α/2),n−1}
Paired Comparison Problems:
d_j = y_1j − y_2j

Expected value of the difference d_j:
μ_d = μ₁ − μ₂

Sample std. deviation of the differences:
S_d = {[Σ d_j² − (Σ d_j)²/n]/(n − 1)}^(1/2)

Hypothesis testing:
H₀: μ_d = 0;  H₁: μ_d ≠ 0

Test statistic:
t₀ = d̄/(S_d/√n), where d̄ = (1/n)·Σ d_j

Confidence interval on μ₁ − μ₂:
d̄ ± t_{α/2,n−1}·S_d/√n

Advantages of paired comparison design:
- Special case of a randomized block design (noise reduction design technique)
- We lose n − 1 degrees of freedom, BUT we removed a source of variability
- Can compare S_d with S_p to see the effect of blocking on variability
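The paired-comparison test statistic can be computed directly; the two measurement lists below are invented illustrative data (e.g. the same n specimens measured by two methods):

```python
import math

# Paired-comparison sketch: d_j = y1_j − y2_j on the same n specimens.
y1 = [12.1, 11.8, 12.5, 12.0, 11.9]   # assumed data, method 1
y2 = [11.6, 11.5, 12.1, 11.8, 11.5]   # assumed data, method 2
d = [a - b for a, b in zip(y1, y2)]
n = len(d)

dbar = sum(d) / n
ss_d = sum(dj ** 2 for dj in d) - sum(d) ** 2 / n   # Σd² − (Σd)²/n
s_d = math.sqrt(ss_d / (n - 1))

t0 = dbar / (s_d / math.sqrt(n))      # compare |t0| with t_{α/2, n−1}
print(dbar, s_d, t0)
```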

Sum of Squared Errors / Residual Sum of Squares (unexplained):
SSE = RSS = Σ(Y − Ŷ)² = Σe²
Ŷ = estimate of Y from the model

Sum of Squares due to Regression (explained):
SSR = Σ(Ŷ − Ȳ)²

Total Sum of Squares:
SST = Σ(Y − Ȳ)²
Σ(Y − Ȳ)² = Σ(Y − Ŷ)² + Σ(Ŷ − Ȳ)²
SST = SSE + SSR

Slope and intercept that minimize SSE:
â = Σ(X − X̄)(Y − Ȳ)/Σ(X − X̄)²
b̂ = Ȳ − â·X̄

Coefficient of Correlation (Pearson's correlation):
r = Σ(X − X̄)(Y − Ȳ)/√[Σ(X − X̄)²·Σ(Y − Ȳ)²]

Simple Linear Regression:
• Min(SSE): minimize the performance index by setting its derivative = 0
• Mathematical model: Ŷ = b̂₀ + b̂₁x₁
• Regression model: Y = b₀ + b₁x₁ + ε
• Residual: Y − Ŷ = e

MSE:
MSE = SSE/(n − p) = S²

Regression Diagnostics:
R² = SSR/SST
We compare MSR/MSE to the F-distribution:
If MSR/MSE > F(p − 1, n − p) we reject the null hypothesis.
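A minimal sketch of the least-squares fit and the SST = SSE + SSR decomposition; the X, Y data are invented for illustration:

```python
# Least-squares slope/intercept for Ŷ = b̂ + â·X, plus the ANOVA decomposition
# SST = SSE + SSR and R² (assumed illustrative data).
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 4.3, 5.9, 8.2, 9.8]
n = len(X)
xbar, ybar = sum(X) / n, sum(Y) / n

a_hat = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y)) \
        / sum((x - xbar) ** 2 for x in X)
b_hat = ybar - a_hat * xbar
Yhat = [b_hat + a_hat * x for x in X]

SSE = sum((y - yh) ** 2 for y, yh in zip(Y, Yhat))
SSR = sum((yh - ybar) ** 2 for yh in Yhat)
SST = sum((y - ybar) ** 2 for y in Y)

assert abs(SST - (SSE + SSR)) < 1e-9    # decomposition holds for LS fits
R2 = SSR / SST
print(a_hat, b_hat, R2)
```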
Graphical Description of Variability:
- Dot diagram: suitable for up to ~20 runs
- Histogram: suitable as a large-sample tool
- Box(-and-whisker) plot: shows the minimum, maximum, median (50th percentile), and 25th & 75th percentile values

Central Limit Theorem (CLT):
For a random variable X with mean μ and variance σ², the distribution of the sample mean X̄ from a sample of size n on X follows the normal distribution with mean μ and variance σ²/n. This is true regardless of the shape of the distribution of X.
Errors Associated with Hypothesis Testing:
- Type I error: if the null hypothesis is rejected when it’s true
- Type II error: if the null hypothesis is not rejected when it’s false

Proofs:
(1) ȳ is an unbiased point estimator of μ:
E(ȳ) = E(Σ yᵢ/n), i = 1…n
     = (1/n)·Σ E(yᵢ)
     = (1/n)·Σ μ
     = μ

(2) S² is an unbiased estimator of σ²:
E(S²) = E[Σ(yᵢ − ȳ)²/(n − 1)] = [1/(n − 1)]·E(SS)

E(SS) = E[Σ(yᵢ − ȳ)²]
      = E[Σ yᵢ² − n·ȳ²]
      = Σ(μ² + σ²) − n·(μ² + σ²/n)
      = (n − 1)·σ²

∴ E(S²) = [1/(n − 1)]·E(SS) = σ²

ANOVA TABLE

Regression:
  SS: SSR = Σ(Ŷ − Ȳ)² = β̂ᵀXᵀY − (gt)²/n
  df: p − 1
  MS: Σ(Ŷ − Ȳ)²/(p − 1)
  F: [Σ(Ŷ − Ȳ)²/(p − 1)]/[Σ(Y − Ŷ)²/(n − p)]

Error:
  SS: SSE = Σ(Y − Ŷ)² = YᵀY − β̂ᵀXᵀY
  df: n − p
  MS: Σ(Y − Ŷ)²/(n − p)

Total:
  SS: SST = Σ(Y − Ȳ)² = YᵀY − (gt)²/n
  df: n − 1

Note: gt = Σ Yᵢ

ANOVA TABLE (in words)

Source      SS    df     MS       F
Regression  SSR   p−1    SSR/df   MSR/MSE = F_probe
Error       SSE   n−p    SSE/df
Total       SST   n−1

Chapter 2—Linear Regression Analysis
Theory
• Uses of models
o Process understanding
o Process simulation (sensitivity analysis)
o Process optimization
o Process safety
o Model-based control (filtering unmeasured properties, sensor
development/selection/positioning)
o Education and training
• Developing a model is an iterative process
• Errors result from:
o The failure of 𝑥𝑖 to represent the 𝑥 values actually used
o Lurking variables (e.g. impurities)
o Measurement errors, disturbances
• We assume that independent variables are perfectly known, since this is basically valid if the
errors in 𝑥 are much less than the errors in 𝑦
• Linear regression is concerned with models that are linear in the parameters
• Regression takes the difference between 𝑌 and 𝑌̂ and minimizes it
• Assumptions inherent in ordinary linear regression:
o The model is valid (we’ve done our research)
o The 𝑋 values are perfectly known at each trial
o The errors (𝜀) are additive to the true values of the 𝑌 values; they do not covary with
each other (i.e. they are uncorrelated); and they are of constant, but possibly unknown,
variance
𝐸(𝜀) = 0, 𝑉(𝜀) = 𝐼𝜎 2
o The error is distributed normally:
𝜀: 𝑁(0, 𝐼𝜎 2 )
𝑦 = 𝑋𝛽 + 𝜀
∴ 𝑦: 𝑁(𝑋𝛽 ∗ , 𝐼𝜎 2 )
• 𝜎 2 is the variance of the error affecting 𝑌-values

General Regression Model:
g(yᵢ) = f(xᵢ, θ*) + εᵢ
Where:
• g(), f(): known mathematical functions
• θ: p×1 vector of unknown parameters to be estimated
• *: denotes the "true" values
• εᵢ: "error", the discrepancy between the "true" value of g(yᵢ) and the one measured

Linear Regression:
g(yᵢ) = β₁*·f₁(xᵢ) + β₂*·f₂(xᵢ) + ⋯ + β_p*·f_p(xᵢ) + εᵢ

In matrix notation:
y = Xβ* + ε
Where:
Matrix   Elements   Size
X        f_j(xᵢ)    n×p
Y        g(yᵢ)      n×1
β*       β_j*       p×1
ε        εᵢ         n×1

Errors:
Additive to the true values:
E(ε) = 0, V(ε) = Iσ², where I is the identity matrix

β̂ is known as the…
- Unbiased estimator of β
- Unbiased Minimum Variance Maximum Likelihood estimator
- Best Linear Unbiased Estimator (BLUE)
- Best Linear Unbiased Estimator
Maximum Likelihood Estimate of Linear Regression Parameters:
Multivariate normal probability density function:
f(z) = [1/((2π)^(n/2)·|V|^(1/2))]·exp(−(1/2)·(z − μ)′V⁻¹(z − μ))

For the dependent variable y (with V = Iσ²):
f(y) = [1/((2π)^(n/2)·σⁿ)]·exp(−[1/(2σ²)]·(y − Xβ*)′(y − Xβ*))

Y ~ N(μ, σ²) [univariate]
Y ~ N(μ, V) [multivariate]
Variance-Covariance Matrix:
V = [ σ²_x1     σ_x1x2   …  σ_x1xv
      σ_x2x1    σ²_x2    …  σ_x2xv
      σ_xvx1    σ_xvx2   …  σ²_xv  ]
• Diagonals give the variance of the error affecting the response values
• Off-diagonals give the covariances

If Y = AX + B, then:
V(Y) = A·V(X)·A′
*THIS EQUATION IS SUPER USEFUL

When y, X, σ² are known but β* is not:
The probability density function for y becomes the likelihood function for the unknown β* values:
L(β) = [1/((2π)^(n/2)·σⁿ)]·exp(−[1/(2σ²)]·(y − Xβ)′(y − Xβ))

Let Z(β) = (y − Xβ)′(y − Xβ), K₁ = 1/((2π)^(n/2)·σⁿ), K₂ = 1/(2σ²):
L(β) = K₁·exp(−K₂·Z(β))

Maximizing L(β) is equivalent to finding the value of β which minimizes Z(β):

β̂ = (XᵀX)⁻¹Xᵀy
* X must be well-conditioned
* This β̂ is always the least squares (LS) estimate of β*
* If the errors are normally distributed, this β̂ is also the maximum likelihood (ML) estimate

Cramer's Rule (matrix inversion):
(XᵀX)⁻¹ = [1/det(XᵀX)]·adj(XᵀX)
For a 2×2 matrix A = [a₁₁ a₁₂; a₂₁ a₂₂]:
adj(A) = [a₂₂ −a₁₂; −a₂₁ a₁₁]
(For the symmetric XᵀX, a₁₂ = a₂₁.)
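For a straight-line model, β̂ = (XᵀX)⁻¹Xᵀy reduces to inverting a 2×2 matrix with the adjugate/determinant rule. A pure-Python sketch with assumed data:

```python
# Sketch of β̂ = (XᵀX)⁻¹Xᵀy for ŷ = β₀ + β₁x (design-matrix columns: a column
# of 1s and the x column), inverting the 2×2 XᵀX via adjugate/determinant.
x = [0.0, 1.0, 2.0, 3.0]      # assumed data
y = [1.0, 3.1, 4.9, 7.2]      # assumed data
n = len(x)

# XᵀX (2×2) and Xᵀy (2×1) assembled directly from the sums
xtx = [[n, sum(x)],
       [sum(x), sum(xi * xi for xi in x)]]
xty = [sum(y), sum(xi * yi for xi, yi in zip(x, y))]

det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
inv = [[ xtx[1][1] / det, -xtx[0][1] / det],
       [-xtx[1][0] / det,  xtx[0][0] / det]]   # adj(XᵀX)/det(XᵀX)

beta = [inv[0][0] * xty[0] + inv[0][1] * xty[1],
        inv[1][0] * xty[0] + inv[1][1] * xty[1]]
print(beta)    # [intercept, slope]
```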

Fisher Information Matrix (FIM): XᵀX

Matrix Properties:
(X ± Y)ᵀ = Xᵀ ± Yᵀ
(XY)ᵀ = YᵀXᵀ
XY ≠ YX (in general)
X⁻¹X = XX⁻¹ = I

Inference on the Parameters:
Mean: E(β̂) = β*
Var(β̂ᵢ) = (XᵀX)⁻¹ᵢᵢ·σ², estimated by (XᵀX)⁻¹ᵢᵢ·S²

Under the assumption of normality in the errors:
β̂ ~ N(β*, (XᵀX)⁻¹·σ²)
β̂ᵢ ~ N(βᵢ*, (XᵀX)⁻¹ᵢᵢ·σ²)
- β̂ is an unbiased estimator / best linear unbiased estimator (BLUE) of β

Confidence Interval for the Individual Parameter Estimates:
[When σ² is known]
β̂ᵢ ± z_{α/2}·σ·√((XᵀX)⁻¹ᵢᵢ)
[When σ² is unknown]
β̂ᵢ ± t_{α/2,n−p}·s·√((XᵀX)⁻¹ᵢᵢ)
equivalently: β̂ᵢ ± t_{α/2,n−p}·√(Var(β̂ᵢ))

Minimum of Z(β) occurs when β = β̂

Residual Sum of Squares:
SSE = RSS = yᵀy − β̂ᵀXᵀy
S² = RSS/(n − p) = MSE

SSE = SSPE + SSLOF

Sum of squares pure error (model-independent):
SSPE = Σᵢ Σⱼ (yᵢⱼ − ȳᵢ)², i = 1…m, j = 1…nᵢ

Sum of squares lack-of-fit (due to model bias, model mismatch; model-dependent):
SSLOF = Σᵢ nᵢ·(ȳᵢ − ŷᵢ)², i = 1…m
Model: Y = Xβ + ε    Fitted model: Ŷ = Xβ̂
Grand total: gt = Σ Yᵢ
SSR = β̂ᵀXᵀY − (gt)²/n
SSE = YᵀY − β̂ᵀXᵀY
SST = YᵀY − (gt)²/n

Hypothesis Testing:
H₀: β̂₁ = β̂₂ = ⋯ = 0 (the slopes; NOT β̂₀, the intercept)
H₁: at least one β̂ᵢ is not 0, i = 1, 2, …

t_probe = β̂ᵢ/√(Var(β̂ᵢ))
If |t_probe| > t_{α/2,n−p} = t_crit, H₀ is rejected

If F_probe > F_crit = F_{α,p−1,n−p}, H₀ is rejected; hence the model is overall significant

R² = SSR/SST = 1 − SSE/SST

Better Indicators than R²:
R²_adjusted = 1 − [(n − 1)/(n − p)]·(SSE/SST) = 1 − (MSE/MST)
R²_press < R²_adj < R²

Given model 1 with p₁ parameters and model 2 with p₂ parameters, where p₂ > p₁, it can be deduced that:
SSR₂ > SSR₁
SSE₂ < SSE₁
Joint Confidence Regions (JCRs):
Where p = the number of parameters

When σ is known:
(β − β̂)′(XᵀX)(β − β̂) ≤ σ²·χ²_{p,α}

When σ is unknown:
(β − β̂)′(XᵀX)(β − β̂) ≤ p·s²·F_{p,n−p,α}
s² = MSE (the reduced chi-squared statistic)

We want to minimize the size of the joint confidence region (i.e. the parameter covariance (XᵀX)⁻¹σ²):
- This can be done by maximizing det|XᵀX|; this is known as D-optimality
- This will minimize the correlation of the parameters
Note: |XᵀX| is an indicator of matrix health

Model Predictions:
ŷ₀ = w₀β̂ ↔ ŷ₀ = X₀β̂
Where w₀ = the settings for a specific trial (1×p row vector)

Confidence interval on fitted values (interpolation):
ŷ₀ ± z_{α/2}·σ·√(w₀(XᵀX)⁻¹w₀ᵀ)

Prediction interval (PI) on a single future observation (extrapolation):
ŷ₀ ± z_{α/2}·σ·√(w₀(XᵀX)⁻¹w₀ᵀ + 1)
P.I. > C.I.

H (Hat) Matrix:
H = X(XᵀX)⁻¹Xᵀ
∴ Ŷ = HY

The residuals and their variance are also known:
e = (I − H)Y
Var(e) = (I − H)σ²
Var(eᵢ) = (1 − hᵢᵢ)·σ²

Standardized residuals:
rᵢ = eᵢ/(σ·√(1 − hᵢᵢ))
Var(rᵢ) = 1

Testing diagnostics is equivalent to testing a model's predictive capabilities
Residual Patterns (residuals vs. fitted values):
- Uncorrelated residuals, no discernible pattern: the model has reduced the raw measurements to noise. This is good!
- Parabolic/higher-order pattern: there is lack of fit; a correlation that is not linear. Try adding a term (βᵢXᵢ²)
- Converging/diverging (funnel) pattern: non-constant variance; Variance-Covariance ≠ Iσ². Use a different least squares method (weighted least squares) or a log transform. Transform the data, not the model!
- Random error about a line offset from 0: there is a bias term in the model. Look at the estimate of your intercept
- Trend with time: there is a bias term, something changing with time. Put time in the model, it may be important
- Residuals concentrated in clusters: the design of the experiment is not good
- Pattern in X₁X₂: the model needs something else, an interaction term: y = β₀ + β₁X₁ + β₂X₂ + β₃X₁X₂, where β₃X₁X₂ is the interaction term
- Periodicity: seasonal variation (time series)

Regression Summary, Distribution Properties:
Y ~ N(Xβ, Iσ²)
β̂ ~ N(β*, (X′X)⁻¹σ²)
Ŷ ~ N(Xβ, Hσ²)
e ~ N(0, (I − H)σ²)
Y_pred ~ N(Xβ, (I + H)σ²)
ε ~ N(0, Iσ²)

Regression Summary, Expressions/Equations:
Y = Xβ + ε
β̂ = (XᵀX)⁻¹XᵀY
Ŷ = HY
e = Y − Ŷ
Var(β̂) = (XᵀX)⁻¹σ²
Var(Ŷ) = Hσ²
Var(e) = (I − H)σ²
Var(Y_pred) = (I + H)σ²

Extra SS Principle and Test:
Model 1 → parameters p₁, SSR₁, SSE₁
Model 2 → parameters p₂, SSR₂, SSE₂
p₂ > p₁ ⇒ SSR₂ > SSR₁, SSE₂ < SSE₁

H₀: the "extra sum of squares" is not significant
[(SSE₁ − SSE₂)/(p₂ − p₁)]/s² ~ F_{α,(p₂−p₁),n−p₂}
s² = SSE₂/(n − p₂)

General Remarks:
- A model is parsimonious when it has sufficient fitting (interpolation) and prediction (extrapolation) capability with the minimum number of parameters
- Y: raw, observed, or measured data; output; responses; dependent variable
- Model: empirical (simple) or mechanistic (follows certain physical laws)
- X: input; manipulated variables; settings; regressors; factors; independent variables

Extensions in Least Squares Regression:

(1) Non-normality
- Make an NPP of the residuals

(2) Heterogeneous Variance
- Violates the assumption V(ε) = Iσ²
- Remedies: transformations / weighted least squares:
β̂ = (XᵀV⁻¹X)⁻¹XᵀV⁻¹y
- Common in the social sciences: questionnaires are designed such that the variance is known
- Plot residuals vs. Ŷ; a converging or diverging funnel in the residual pattern indicates this problem
- "Heteroscedastic regression"

(3) Correlated Errors:
Cov(εᵢ, εⱼ) ≠ 0
- Loss of parameter precision, which invalidates significance tests
- Occurs when data are collected in a time sequence (e.g. pollutant emissions with time, biological studies, batch reactors)
- Remedies: time series analysis OR generalized least squares

(4) Influential Data Points and Outliers
- Influential points have a greater impact on least squares results, yet all points are weighted equally
- Outliers: values of the dependent variable that are inconsistent with the bulk of the data
- Little confidence can be placed in least squares estimates that are dominated by such points

(5) Model Inadequacies:
- We assume that the model is valid
- The parameter estimates are unbiased only if the model is correct

(6) Collinearity (correlated Xᵢ factors):
- Symptoms:
a) The solution is very unstable (ill-conditioned FIM)
b) High covariances in the var-cov matrix
c) Elongated JCRs
d) Variances of the parameter estimates (near-singularity situations) become very large
- Very serious impact if the objective is to estimate parameters or identify important variables
- Less serious impact if the objective is simply fitting
- Remedies: biased regression methods (e.g. ridge analysis, principal component analysis, PCA)
- Common in multi-component systems (concentrations; volume, surface area, diameter; dimensionless numbers; Ideal Gas Law)

(7) Errors in the Independent Variables:
- If the assumption that the X are perfectly known is false, the parameter estimates will be biased
- Use the Error-in-Variables-Model (EVM) methodology

ANOVA Table for Regression Model:

Regression:
  SS: SSR = β̂ᵀXᵀY − (gt)²/n
  df: p − 1
  MS: SSR/df_R
  F: MSR/MSE

Error:
  SS: SSE = YᵀY − β̂ᵀXᵀY = Σ(Y − Ŷ)²
  df: n − p
  MS: SSE/df_E

Total:
  SS: SST = YᵀY − (gt)²/n = Σ(Y − Ȳ)² = ΣY² − n·(Ȳ)²
  df: n − 1

Chapter 3—High-Level Overview of Statistical Design of Experiments


• Objective: how to design experiments and analyze resulting data efficiently
o Satisfy the objectives of the study
o Maximize information content (i.e. minimize variance)
o Minimize experiment effort
o Spread information over the whole operating region (OR) homogeneously
o Draw valid conclusions
• 2 approaches to experiment design:
o Informed but passive observation
▪ Someone who has knowledge about the process watches it change and tries to deduce
cause
▪ Try to develop understanding from historical (happenstance/undesigned) data
o Directed experimentation
▪ Someone who knows the process, causes it to change systematically, and watches the
results of those changes
• The investigative process
o A series of sequential, iterative steps:
▪ Hypothesize
• Formulate the problem
• What causes it?
• Formulate the specific objective(s) for the design stage
• Tools: scatter plots, process flowsheets, cause and effect diagrams, Pareto diagrams
▪ Design
• Select the design to suit the problem (e.g. variable screening, quantifying variable effects,
estimating parameters in a model, etc.)


• Accommodate constraints
o Number of observations possible
o Variable ranges
o Randomization; replication; (blocking)
o Good planning avoids excessive data collection and complicated
data analysis
▪ Analyze
• Data collection and handling
• Computation/analysis of test statistics
• Interpret results, linking back to our understanding of the process
• Analysis is easy if the experiment is well-planned
o Eventually, the process becomes optimal
• Experimental design is not a black box
o Keep in mind principles from the physical sciences, engineering, and statistics
o Mechanistic (hypothesizing) vs. empirical (experimental) math models
• The purpose of running experiments (some examples):
o Factor screening/characterization
o Optimization
o Confirmation
o Discovery
o Robustness

Experimental design procedures for avoiding problems that occur in analyzing happenstance data:

Problem Experimental Design Procedures for Avoiding the Problem


Inconsistent data Blocking and randomization
Range limited by control Experimenter makes own choice over the range of variables
Semiconfounding of effects Use of designs (e.g. factorials) that provide uncorrelated estimates of
the individual effects
Nonsense correlations due Randomization
to lurking variables
Serially correlated errors Randomization (to some extent)
Dynamic relationships Where only steady-state characteristics are of interest, sufficient time is
allowed between successive runs for the process to settle down
Feedback Temporary disconnection of feedback system

Chapter 4—Design/Analysis of Single Factor Experiments
Definition of a Single Factor Experiment:
- It studies the effect of a single independent variable (factor), which can be qualitative or quantitative
- Treatments: the levels at which the factor is studied; there will be 2 or more treatments in an experiment
- Determines if there are significant differences in the measured response, depending on the factor's treatment

Examples of Single Factor Studies:
- Analytical labs: method comparisons
- Product development: comparing new products to those of the competition
- Process optimization: comparing different operating conditions
- Quality control: comparing different suppliers' raw materials
Design Considerations:
(1) Guarantee the experimental validity with randomization (in time, order of experiments, and statistical tests used)
(2) Replication: to obtain a measure of reproducibility (error magnitude; MSE or σ)
(3) Control sources of variability by blocking (block what you can, randomize what you cannot)

(1) Randomization:
(a) To prevent personal bias (of the experimenter or others) from entering
(b) To eliminate biases in estimated effects due to trends in the errors/lurking variables; such biases are absorbed into the error (MSE increases), otherwise you may reject a significant model. Helpful to calculate the variance inflation factor (VIF)
(c) To prevent time/order effects from masking results
(d) To ensure the observed effects were caused by the changes that the experimenter made to the factors
(e) To ensure that the significance tests are based on valid random variables
* (f) Relates findings back to the Central Limit Theorem and justifies its use
* (g) Eliminates the requirement for the normality assumption

(2a) Reproducibility and Experimental Error:
(a) To compare variability between treatment levels with the "background" (variability within treatment), i.e. obtain an estimate of the experimental error. Background is the experimental/inherent/underlying/intrinsic error
(b) Obtaining a reliable estimate of the experimental error is key to the analysis; hence, you must replicate to obtain an estimate of the experimental error
Sources of experimental error include: measurement error, lurking variables, assignable sources of error

(2b) Replication:
The number of genuine/independent replicates that you should take depends on:
- The difference that you want to detect
- The standard deviation of the process (response)
- The desired degree of confidence

ANOVA Table and Diagnostics Checking:
S_B = between-treatments sum of squares (signal)
S_W = within-treatment sum of squares (residuals/noise/error)

Notation (for Y_ti):
t = k = a = treatment no.
i = trial no.
Y°° = Y° = Σ_t Σ_i Y_ti = gt
Y_A° = Σ_i Y_Ai
Ȳ_A° = Y_A°/n_A
Ȳ°° = Y°°/N

Effects model: y_ti = μ + τ_t + ε_ti
Under the assumption that the errors ε_ti are normally distributed (ensured by randomization), the model parameters can be estimated from:
μ̂ = ȳ
τ̂_t = ȳ_t − ȳ
ŷ_ti = ȳ_t
ε̂_ti = y_ti − ȳ_t
Trick: y_ti ≡ ȳ + (ȳ_t − ȳ) + (y_ti − ȳ_t)

(1) Plots:
- Overall dot diagram: should look like it follows the normal distribution
- Residuals vs. predicted values (treatments may have the same ȳ_t)
(2) Diagnostics, F-Test:
H₀: τ_A = τ_B = τ_C = ⋯ = τ_k = 0
H₁: τ_i ≠ 0 for at least one of i = A, …, k
F_probe = (S_B/df_B)/(S_W/df_W) = MS_B/MS_W
Compare F_probe with F_critical = F(df_B, df_W, α)
If F_probe > F_crit, we reject H₀

Standardized residual as a rough check for outliers:
d_ij = e_ij/√(MSE),  e_ij = y_ij − ŷ_ij

(3) Diagnostics, Confidence Intervals:
On the i-th treatment mean μ_i:
ȳ_i° − t_{α/2,N−k}·√(MSE/n) ≤ μ_i ≤ ȳ_i° + t_{α/2,N−k}·√(MSE/n)

Difference of any 2 treatment means:
(ȳ_i° − ȳ_j°) − t_{α/2,N−k}·√(2·MSE/n) ≤ μ_i − μ_j ≤ (ȳ_i° − ȳ_j°) + t_{α/2,N−k}·√(2·MSE/n)
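The F_probe calculation can be sketched as follows for k = 3 treatments with n = 4 trials each (invented illustrative data):

```python
# One-way (single-factor) ANOVA sketch: computes S_B, S_W, and
# F_probe = MS_B/MS_W for assumed data.
groups = {
    "A": [18.0, 20.0, 19.0, 21.0],
    "B": [22.0, 24.0, 23.0, 25.0],
    "C": [28.0, 27.0, 29.0, 30.0],
}
all_y = [y for ys in groups.values() for y in ys]
N, k = len(all_y), len(groups)
grand_mean = sum(all_y) / N

# Between-treatments SS (signal) and within-treatment SS (noise)
S_B = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
          for ys in groups.values())
S_total = sum((y - grand_mean) ** 2 for y in all_y)
S_W = S_total - S_B

df_B, df_W = k - 1, N - k
F_probe = (S_B / df_B) / (S_W / df_W)
print(S_B, S_W, F_probe)   # compare F_probe with F(df_B, df_W, α)
```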

(4) Diagnostics, Bonferroni t-Test for Multiple Comparisons:
Assuming a set of k means (i.e. k treatments), there are k·(k − 1)/2 possible pairs of means, and tests of this type:
H₀: μ_i = μ_j
H₁: μ_i ≠ μ_j

This can be tested with the t-test:
t_probe = (ȳ_i − ȳ_j)/(s·√(1/n_i + 1/n_j))
Compare with: t_critical = t_{N−k,α/2}
N = Σ_t n_t, t = 1…k

Since the individual tests are performed at a level of significance α, the overall probability of making at least 1 incorrect rejection is much larger than α and is unknown.

The Bonferroni inequality gives an upper bound for a set of c tests at α:
α′ ≤ 1 − (1 − α)^c
c = k·(k − 1)/2
Where:
α = comparison-wise error rate (for individual tests)
α′ = experiment-wise error rate (overall)

To compensate:
1) Only carry out the tests that are really of interest to the investigation
2) Choose a reasonably small target upper bound for α′ (α′ = 0.05 is always a good choice)
3) Conduct each individual test at α = α′/c
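The Bonferroni bookkeeping is simple arithmetic; a sketch with an assumed k and target α′, using the stdlib `math.comb` for the number of pairs:

```python
from math import comb

# Bonferroni bookkeeping for k treatment means (illustrative k and target α′).
k = 5                          # number of treatments (assumed)
c = comb(k, 2)                 # number of pairwise comparisons, k(k−1)/2
alpha_prime = 0.05             # target experiment-wise error rate
alpha_per_test = alpha_prime / c    # comparison-wise rate per t-test

# Without correction, the chance of ≥1 false rejection across c independent
# tests, each at α = 0.05, would be:
uncorrected = 1 - (1 - 0.05) ** c
print(c, alpha_per_test, uncorrected)
```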
(5) Diagnostics for Multiple Comparisons, Least Significant Difference (LSD):
Fisher's LSD: t_{α/2,N−k}·√(2·S²/n)
[If n_i ≠ n_j]: t_{α/2,N−k}·√(2·S²/n̄)
∴ the difference between a specific pair of means is significant at the α level if it exceeds Fisher's LSD

ANOVA Table for Single-Factor Experiments:

Between treatments:
  df: df_B = (no. treatments) − 1 = k − 1
  SS: S_B = Σ_t n_t·(ȳ_t − ȳ)² = Σ_t (treatment totals)²/n_t − (correction for mean)
  MS: MS_B = S_B/df_B
  F: MS_B/MS_W

Error (within treatments):
  df: df_W = (no. trials) − (no. treatments) = df_total − df_B
  SS: S_W = S_total − S_B = Σ_t Σ_i (y_ti − ȳ_t)²
  MS: MS_W = S_W/df_W

Total:
  df: df_total = Σ_t n_t − 1
  SS: S_total = Σ Σ (observations)² − (correction for mean) = Σ(y_i − ȳ)²

Chapter 5—Single Factor Experiments: Blocking
Blocking: a blocking variable is like a lurking variable, except that we know about it and can separate out its effect. It is unknown how/when it will affect a study, so you have to block first to conclude whether or not the blocked variable caused an observable effect on the data.

Foreword: Justification
We want to make sure the comparison is sensitive and selective.
If there are no observable effects of blocking on the data, the data is NOT subject to heterogeneity.

If blocking isn't done first:
SS_block goes into SSE
∴ SS_treatment < SSE
∴ you may conclude that there are no treatment effects

Paired Comparisons:
H₀: μ_A − μ_B = 0
H₁: μ_A − μ_B ≠ 0
t_probe = (ȳ_A − ȳ_B − 0)/√(s_p²·(1/n_A + 1/n_B))

Differences (D = B − A):
H₀: μ_D = 0
H₁: μ_D ≠ 0
D̄ = (sum of differences)/(number of differences)
t = (D̄ − μ_D)/√(s²/n)
t_critical = t_{df,α/2}

Blocking: separate into boxes (blocks) and randomize within each block. This is the best option.
Effects model: y_ti = μ + β_i + τ_t + ε_ti
β_i → block deviation for the i-th block

Correction for the mean:
[1/(kn)]·(Σ_t Σ_i y_ti)² = gt²/(kn)
Where: k → # treatments, n → # blocks
Overall (Effects) Model:
Ŷ_ti = Ȳ_block + Ȳ_t − Ȳ

Estimate of the Overall Mean:
μ̂ = ȳ = gt/N

Estimate of the Block Effect: β̂_i = Ȳ_block − Ȳ
Estimate of the Treatment Effect: τ̂_t = Ȳ_t − Ȳ

Treatment Average/Mean: Ȳ_t = (treatment total)/(# blocks)
Block Average/Mean: Ȳ_block = (block total)/(# treatments)

Residual:
e_ti = Y_ti − Ŷ_ti

ANOVA Table for a Randomized Block Design:

Treatments:
  df: df_Treatment = (# treatments) − 1 = k − 1
  SS: S_Treatment = (1/n)·Σ(treatment totals)² − (correction for mean)
  MS: MS_Treatment = SS_Treatments/df_Treatment
  F: MS_Treatment/MSE

Blocks:
  df: df_B = (# blocks) − 1 = n − 1
  SS: S_B = (1/k)·Σ(block totals)² − (correction for mean)
  MS: MS_Blocks = SS_Blocks/df_B
  F: MS_Blocks/MSE

Errors:
  df: df_E = df_total − df_treatment − df_blocks
  SS: SSE = SST − SS_Treatments − SS_B
  MS: SSE/df_E

Total:
  df: df_total = Σ_t n_t − 1 = k·n − 1
  SS: S_total = Σ_t Σ_i (y_ti)² − (correction for mean) = Σ(y_i − ȳ)²

Chapter 6—Multi-Factor Experiments
Definitions:
- Factorial experiments: experiments in which we measure the effects of factors on some response(s) of interest
- Treatments: all the combinations of factors that can be formed by choosing one level for each factor (number of trials)
- Levels: actual values of the factor

Plotting Model Outputs:
(1) Response Surface (RS)
- Plot Ŷ vs. the factors
- The resulting geometry is an ellipsoid
(2) Contour Plot
- Plot fixed Ŷ values vs. the factors
- These plots are projections of the response surface in 2-D
Experimental Design Strategies:
General Remarks:
- An undesigned experiment is the starting point for a designed experiment
- Generally, we want to start with factorial experiments → maybe move on to an informed search → maybe move on to OVAT

(1) One Variable at a Time (OVAT, OFAT)
- Experiments in which only one factor is varied at a time while the other factors are kept constant
- Does not detect interaction
- May lead you to an optimal point in the OR (if you're lucky)
- Frequently used and abused in practice

(2) Informed Search
- A few targeted experiments
- Requires intricate knowledge of the process being studied
- Does NOT separate effects of variables
- May or may not detect interactions
- Use with caution
- No formal (statistical) analysis

(3) Factorial Experiments
- All levels of one factor are combined with all levels of the other factors
- May be full or fractional factorial
- Enables estimation of effects and interactions of variables
- Hidden/implicit replication provides greater sensitivity
- Reproducibility allows for statistical analysis (i.e. error estimation)
- Covers more range in the OR, allowing for more generalizable conclusions
- May require more experiments at the start
- Increases the information gained from experimentation
- Forces you to take safe, reliable steps during experimentation (i.e. the direction of steepest ascent)
Evolution of Experimental Designs during a Project:
- As the project progresses, experiments should become increasingly selective (they should NOT start
out selective)
- Due to this change over time, it’s recommended to only use about 25% of your available resources
on the initial design
𝑦𝑡𝑖𝑗 = 𝜇 + 𝜏𝑡 + 𝛽𝑖 + 𝜔𝑡𝑖 + 𝜀𝑡𝑖𝑗
𝑦𝑡𝑖𝑗 = 𝑦̅ + (𝑦̅𝑡 − 𝑦̅) + (𝑦̅𝑖 − 𝑦̅) + (𝑦̅𝑡𝑖 − 𝑦̅𝑖 − 𝑦̅𝑡 + 𝑦̅) + (𝑦𝑡𝑖𝑗 − 𝑦̅𝑡𝑖 )

Where ȳ_t is the brand average, ȳ_i is the popper average, ȳ_ti is the cell average, and ȳ is the grand average
Where t = brand, i = popper, j = replication

Grand Average:
ȳ = gt/(t·i·j) = Σ y_tij/(t·i·j)

Factor A effect: τ̂_t = ȳ_t − ȳ = ȳ_t°° − ȳ
Factor B effect: β̂_i = ȳ_i − ȳ
Interaction: ω̂_ti = ȳ_ti − ȳ_t − ȳ_i + ȳ, where ȳ_ti = treatment (design matrix cell) average
Residual (Error): ε̂_tij = e_tij = y_tij − ŷ_tij = y_tij − ȳ_ti

Sum of Squares of Residual Error: S_E = S_Total − S_Treatment
Sum of Squares of Interaction: S_I = S_Treatment − S_Popper − S_Brand

Factor Coding:
Coded value = (uncoded value − mean)/(range/2)

Effects from contrasts:
(effect)_i = (contrast)_i/(R·2^(k−1))
(contrast)_i = Σ λ_i·y_i, i = 1…N
N = R·2^k = N⁺ + N⁻
R = # of replicates under full replication
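Computing effects from contrasts can be sketched for an unreplicated 2² design; the four responses and the ±1 contrast coefficients below are assumed illustrative values, listed in standard order (1), a, b, ab:

```python
# Effects from contrasts in an unreplicated 2² design (R = 1), assumed data.
y = {"(1)": 60.0, "a": 72.0, "b": 54.0, "ab": 68.0}
R, k = 1, 2
N = R * 2 ** k

# Contrast coefficients λ (±1) for each effect, standard order
signs = {
    "A":  {"(1)": -1, "a": +1, "b": -1, "ab": +1},
    "B":  {"(1)": -1, "a": -1, "b": +1, "ab": +1},
    "AB": {"(1)": +1, "a": -1, "b": -1, "ab": +1},
}

effects = {}
for name, lam in signs.items():
    contrast = sum(lam[run] * y[run] for run in y)
    effects[name] = contrast / (R * 2 ** (k - 1))   # effect = contrast/(R·2^(k−1))
print(effects)
```

The corresponding regression coefficients are β̂ᵢ = (effect)ᵢ/2, per the sheet.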
For a 2² design: y_i = β₀ + β₁(A_i) + β₂(B_i) + β₃(AB_i) + error
For a 2³ design: y_i = β₀ + β₁(A_i) + β₂(B_i) + β₃(C_i) + β₄(AB_i) + β₅(AC_i) + β₆(BC_i) + β₇(ABC_i) + error
Parameter Estimates and Error Variance:
β̂_i = (1/2)·(effect)_i
Var(β̂) = (1/4)·Var(effect)_i

Example (2⁴ design, R = 1):
(X′X) = 16·I
(X′X)⁻¹ = (1/16)·I

Model intercept:
β̂₀ = Σy_i/(R·2^k) = Σy_i/N
se(β_i) = (1/2)·se(effect)_i
Where se(effect) = 2·(MSE/N)^(1/2); N = R·2^k
Or: se(effect) = √(Var(effect)), with Var(effect) obtained from Var(contrast)

Two Basic Approaches:
1. If an error variance estimate is available (i.e. full replication OR replication on the center points), use ANOVA
2. If no replication has been carried out, use normal probability plots on all effects; significant effects/interactions will appear as outliers
SS of effects (if the design has been fully replicated):
SS_i = R·2^(k−2)·(effect)_i² = (contrast)_i²/(R·2^k)

SST:
SST = Σ SS_effects + SSE
Where SSE = SSPE + SSLOF

Correction for the mean:
gt²/(abn) = gt²/N

Degrees of Freedom:
df_total = R·2^k − 1; R ≥ 2
df_E = 2^k·(R − 1); R ≥ 2

Pooled Sample to Calculate MSE [full replication OR replication on the center points]:
S_p² = Σ(n_i − 1)·S_i²/Σ(n_i − 1), i = 1…R

Using Residuals to Calculate MSE (model-dependent):
SSE = Σ e_i² = Σ(y_i − ŷ_i)², i = 1…N
MSE = SSE/df_E
- If this method and the other, model-independent, methods give you the same MSE, the estimate is correct
- Otherwise, the data is telling you something

Confidence Intervals on Effects:
(effect)_i ± t_{α/2,N−p}·se(effect)

Variance of Effects:
Var(eff) = V(contrast)/(R²·2^(2(k−1)))
V(contrast) = N·σ² = (R·2^k)·MSE
R²_press:
- Considers the model's predictive capability
- PRESS = Prediction Error Sum of Squares
PRESS = Σ [e_i/(1 − h_ii)]², i = 1…n
R²_press = 1 − PRESS/SS_total
h_ii = diagonal element of the hat matrix; if it's large, we have a high-leverage (influential) point
- Hence why we shouldn't remove any collected data points nor take averages of responses
R²_press < R²_adj < R²

Concluding Comments on 2^k Factorial Designs:

(1) Orthogonality of design:
- Gives us uncorrelated parameters

(2) Rotatability of design:
- Related to prediction variance: the average prediction variance on concentric circles at some radius or distance from the center of the design
- Benefit: you can move away from the center of the design in concentric circles, out of the current design's operating region (to extrapolate)
- Prediction variance is a function of the distance from the center of the design (basis of Central Composite Designs)

If a design of experiments satisfies both these properties, we have high confidence in it
- Hence why 2^k experiments provide even more reliable results than 3^k, 4^k, etc. designs

23
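The PRESS computation above can be sketched with the hat matrix (hypothetical data, straight-line model):

```python
import numpy as np

# Hypothetical data; model: y = b0 + b1*x
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])
X = np.column_stack([np.ones_like(x), x])

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
h = np.diag(H)                         # leverages h_ii
e = y - H @ y                          # ordinary residuals

sse = np.sum(e**2)
press = np.sum((e / (1 - h))**2)       # PRESS = sum (e_i/(1-h_ii))^2
ss_total = np.sum((y - y.mean())**2)
r2 = 1 - sse / ss_total
r2_press = 1 - press / ss_total        # always <= R^2
```

Since 0 < h_ii < 1, each PRESS term is at least as large as the corresponding squared residual, which is why R²_press ≤ R².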
ANOVA for 2 × k (2 factors with k levels) Experiment (Popcorn example)

Source: Factor A (factor of interest)
df: (# levels of Factor A) − 1
SS: Σ_{i=1}^{n_A} (Factor A totals)² / [(# replicates)·(# levels of Factor B)] − (gt)²/N

Source: Factor B (block)
df: (# levels of Factor B) − 1
SS: Σ_{i=1}^{n_B} (Factor B totals)² / [(# replicates)·(# levels of Factor A)] − (gt)²/N

Source: Interaction
df: df_I = df_treatment − df_A − df_B
SS: SS_I = SS_treatment − SS_A − SS_B

Source: Treatment
df: (# treatments) − 1 = (# "cells") − 1
SS: SS_treatment = (1/R)·Σ_{i=1}^{k} (treatment totals)² − (correction for mean)

Source: Error
df: df_total − df_treatment
SS: SS_error = SS_total − SS_treatment

Source: Total
df: (# trials) − 1
SS: SS_total = Σ Y_i² − (gt)²/N

(For each source, MS = SS/df; F compares MS against MS_error.)
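The totals-based SS formulas in the table can be checked on a small hypothetical 2 × 2 data set with R = 2 replicates per cell:

```python
import numpy as np

# Hypothetical responses: data[i][j] = the R = 2 replicates for
# level i of Factor A and level j of Factor B
data = np.array([[[10.0, 12.0], [20.0, 22.0]],
                 [[15.0, 17.0], [30.0, 32.0]]])
a, b, R = data.shape
N = a * b * R
gt = data.sum()
cm = gt**2 / N                                  # correction for the mean

ss_total = (data**2).sum() - cm
ss_a = (data.sum(axis=(1, 2))**2).sum() / (R * b) - cm   # Factor A totals
ss_b = (data.sum(axis=(0, 2))**2).sum() / (R * a) - cm   # Factor B totals
cell_totals = data.sum(axis=2)                  # treatment ("cell") totals
ss_treat = (cell_totals**2).sum() / R - cm
ss_inter = ss_treat - ss_a - ss_b
ss_error = ss_total - ss_treat
```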
Remarks about the 2 × k ANOVA

Effects model: y_tij = μ + τ_t + β_i + ω_ti + ε_tij
H0: τ_t = β_i = ω_ti = 0
H1: at least one of τ_t, β_i, ω_ti ≠ 0

- If F_probe > F_crit by 5-10 times, we have high confidence in rejecting H0
- F_probe ≅ 1 for the interaction means there is no interaction effect between the factors
Effects using the 2 × k Model (Popcorn Example):
Brand effects:
τ̂₁ = 37.5/(2·3) − ȳ
τ̂₂ = 28.5/(2·3) − ȳ
τ̂₃ = 24/(2·3) − ȳ
Popper effects:
β̂₁ = 40.5/(3·3) − ȳ
β̂₂ = 49.5/(3·3) − ȳ
Interaction effects:
ω̂₁₁ = 17/3, ω̂₂₃ = 13.5/3
ANOVA for 2^k (2 levels with k factors) Factorial Experiment

Source: Main Effects and Interactions (one row per effect)
Effect: (contrast)_i / (2^(k−1)·R); df: 1; SS: SS_i = R·2^(k−2)·(effect)_i²; MS: SS_i/df_i; F: compare all with MS_E

Source: Error
df: N − p; SS: SS_E = SS_total − Σ SS_main effects − Σ SS_interactions; MS: SS_E/df_E

Source: Total
df: (no. trials) − 1; SS: SS_total = Σ_k Σ_j y_kj² − (gt)²/(R·2^k)
Remarks about the 2^k ANOVA
- Replication increases df_E, which increases our confidence in our estimation of MS_E
- F_crit = F_{α, 1, R−1}
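A contrast-based sketch of the table for a 2² experiment (hypothetical treatment totals, R = 3 replicates):

```python
import numpy as np

# Standard-order sign table for a 2^2 design: runs (1), a, b, ab
signs = {
    "A": np.array([-1, 1, -1, 1]),
    "B": np.array([-1, -1, 1, 1]),
}
signs["AB"] = signs["A"] * signs["B"]   # interaction column = product of signs

# Hypothetical treatment totals (summed over R = 3 replicates), standard order
k, R = 2, 3
totals = np.array([30.0, 54.0, 36.0, 72.0])

effects, ss = {}, {}
for name, s in signs.items():
    contrast = float(s @ totals)
    effects[name] = contrast / (R * 2**(k - 1))   # effect = contrast/(2^(k-1)*R)
    ss[name] = contrast**2 / (R * 2**k)           # SS_i = (contrast)^2/(R*2^k)
```

Each SS has 1 degree of freedom, so MS_i = SS_i and F_i = SS_i/MS_E, per the table.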
Reconciled ANOVA [Using independent center point replicates to estimate MS_E]

Source: Main Effects and Interactions (one row per effect)
Effect: (contrast)_i / (2^(k−1)·R); df: 1; SS: SS_i = R·2^(k−2)·(effect)_i²; MS: SS_i/df_i; F: compare all with MS_E

Source: Error (LOF)
df: 1; SS: SS_LOF = SS_total − SS_PE

Source: Error (PE)
df: R − 1; SS: MS_E·df_E; MS: S_p²

Source: Total
df: (# trials) − 1; SS: SS_total = Σ_k Σ_j y_kj² − (gt)²/N
Remarks about the reconciled ANOVA:
- Detecting LOF is a sign of model mismatch or model bias, due to curvature
- Having replicates was the only reason why we detected LOF!
- Next steps:
1. Expand on or modify the original model—try expanding the model to include quadratic term(s), starting with the effect/interaction that had the largest F_probe in the reconciled ANOVA
2. Look for other designs of experiments. For example: response surface methodology (RSM), central composite designs (CCDs), process optimization
Chapter 7—Multi-Factor Experiments and 2-Level Fractional Factorial Designs

Fractional Factorial Designs:
- Let you study a number of factors in fewer runs than in a full factorial
- There exists a trade-off between the loss of experiment information (resolution) and minimizing the number of trials
- High-order (3 or more factor) interactions tend to be insignificant

Example:
2^(k−p) = (1/2^p)·2^k, where the numerator (2^k) is the full design and the denominator (2^p) sets the fraction relative to the base design
→ projecting a full design onto a specified base

Terminology:
- Design generator: the treatment combination(s) representing the relationship between factors in an experiment
- Defining relation: the relation of the identity column I to the treatment combinations for an experiment. It includes all words that are equal to I
- In larger experiments involving a high number of factors, we exploit the concept of confounding (aliasing) to reduce the design size
- Resolution: the smallest-length word in the defining relation
Obtaining the Defining Relation:
1. Start with the design generator: D = ABC
2. Multiply both sides by the single variable: D² = ABCD
3. Set squared terms to I, the identity column: I = ABCD

Determining which terms are confounded:
1. Multiply the defining relation by the term of interest. For AB: AB·I = AB·ABCD = A²B²CD
2. Set squared terms to I: AB = I·I·CD → AB = CD

Note:
I = ABCD → principal fraction
I = −ABCD → alternative fraction
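The multiply-and-cancel-squares rule is just a symmetric difference on the letters of each word; a minimal sketch:

```python
def multiply(word1: str, word2: str) -> str:
    """Multiply two words; squared letters cancel to I (symmetric difference)."""
    letters = set(word1) ^ set(word2)
    return "".join(sorted(letters)) or "I"

# 2^(4-1) design with generator D = ABC, so the defining relation is I = ABCD
defining = "ABCD"
alias_of_AB = multiply("AB", defining)   # AB*ABCD = A^2 B^2 CD -> CD
alias_of_A = multiply("A", defining)     # A*ABCD -> BCD
```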
Design Resolution:
- The higher the resolution, the more effects we can "resolve"
- Res V is desirable
- Res III may be restrictive, but is typically a good way to start
- Res II is very restrictive

Selecting the Best Design Fraction:
Must meet all these objectives:
1. Make the number of letters in each design generator about the same
2. Minimize the overlap in the letters of the design generators
3. Look at the worst case—find the confounding pattern of all main effects and 2-factor interactions
- Table 8-14 (table of designs) summarizes the general forms of optimal design generator assignments
- The best fraction is selected on a case-by-case basis
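The resolution is the length of the shortest word in the full defining relation, which is generated by all products of the generator words. A sketch using the 2^(6−2) generators from the design table (E = ABC, F = BCD, i.e. I = ABCE = BCDF):

```python
from itertools import combinations

def word_product(words):
    """Product of words, with squared letters cancelling (symmetric difference)."""
    out = set()
    for w in words:
        out ^= set(w)
    return "".join(sorted(out))

# Generator words for the 2^(6-2) design: I = ABCE and I = BCDF
generators = ["ABCE", "BCDF"]

# Full defining relation: all non-empty products of the generator words
defining_words = [word_product(combo)
                  for r in range(1, len(generators) + 1)
                  for combo in combinations(generators, r)]

resolution = min(len(w) for w in defining_words)   # shortest word length
```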
Normal Probability Plots:
- Used to identify significant effects when there was no full replication of the experiment
- May plot contrast OR effect on the x-axis, since they're scaled versions of each other
- no. of NPP data points = 2^p − 1

What if we cannot run the experiment at a certain treatment? For example: at all high levels of factors
- We have 2 options—use the alternative design OR skip the 16th run
- Best option is to use the alternative design, maintaining orthogonality of design, and thus, uncorrelated parameters
Generalized Formulas for the ANOVA Table:
effect = contrast / divisor = contrast / div

div = R·2^(k−1) (full factorial)
div = 1·2^(k−1) (fractional, w.r.t. k → base design)
div = (fraction)·2^(k−1) = (1/2^p)·2^(k−1) (fractional, w.r.t. k → full design)
Screening Designs:
- Preliminary/exploratory designs
- Fractional factorials are preferred (Res IV desired)
- Plackett-Burman (P-B) designs were developed specifically for screening purposes (Res III)
- Considered minimal/saturated/supersaturated designs—you use the smallest number of experiments at the expense of confounding effects
- The number of runs typically is a multiple of 4 (6-7 factors in 8 runs, 8-11 factors in 12 runs, 12-15 factors in 16 runs)

Setting up the design matrix:
1. Effects across the top; the trial numbers form the first column
2. Obtain the first row of the column of signs from Table 8.23
3. For the succeeding rows:
(a) The sign of the last entry in the previous row becomes the sign of the first entry in the new row
(b) The signs are offset right by 1 position compared to the previous row
4. Repeat Step 3 until you run out of shifts
5. The last row should have all negative signs
Note: In the elements row, I is not used to avoid confusion with the identity matrix
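Steps 2-5 can be sketched for N = 12 runs. The generating row used here is the commonly tabulated Plackett-Burman row (an assumption on my part — use the sheet's Table 8.23 in practice):

```python
import numpy as np

# Commonly tabulated generating row for the N = 12 Plackett-Burman design
first_row = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])

rows = [first_row]
for _ in range(10):                      # steps 3-4: cyclic right-shifts
    rows.append(np.roll(rows[-1], 1))    # last entry wraps to the front
rows.append(-np.ones(11, dtype=int))     # step 5: last row all negative
D = np.array(rows)                       # 12 x 11 design matrix

gram = D.T @ D                           # orthogonal columns give N*I
```

The orthogonality check (D′D = 12·I) is what guarantees uncorrelated effect estimates.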
Modifications:
- Check fractional factorial scenarios before adding foldovers
- Foldovers: when paired with fractional factorial designs, they are analogous to using alternative fractional designs vs. fractional designs
(a) Full foldover: all signs of the design matrix entries are switched
(b) Single-factor/partial foldover: only the signs of the design matrix entries of a specific factor are switched—helps you resolve this factor
- Center point replicates let us calculate an error estimate
- Fractional factorial designs taught us how to map response surfaces
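A full foldover simply negates the design matrix; stacking it under the original half de-aliases main effects from 2-factor interactions. A sketch with the 2^(3−1) half-fraction (generator C = AB):

```python
import numpy as np

# 2^(3-1) half-fraction with generator C = AB
A = np.array([-1, 1, -1, 1])
B = np.array([-1, -1, 1, 1])
C = A * B                               # C is completely aliased with AB here
half = np.column_stack([A, B, C])

# Full foldover: switch every sign, then run both halves (8 runs total)
folded = -half
combined = np.vstack([half, folded])

# In the combined design, C is no longer aliased with AB:
ab_column = combined[:, 0] * combined[:, 1]
alias_check = int(ab_column @ combined[:, 2])   # 0 means orthogonal (de-aliased)
```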
Mean Squares:
MS_i = SS(effect)_i / df_i = R·2^(k−2)·(effect)_i² / df_i
Table of 2^(k−p) design generators (rows: number of factors k; columns: number of runs N = 4, 8, 16, 32, 64, 128; "m times" = full factorial replicated m times):

k = 3:
4 runs: 2^(3−1)_III, ±C = AB | 8 runs: 2^3 | 16/32/64/128 runs: 2^3, 2/4/8/16 times

k = 4:
8 runs: 2^(4−1)_IV, ±D = ABC | 16 runs: 2^4 | 32/64/128 runs: 2^4, 2/4/8 times

k = 5:
8 runs: 2^(5−2)_III, ±D = AB, ±E = AC | 16 runs: 2^(5−1)_V, ±E = ABCD | 32 runs: 2^5 | 64/128 runs: 2^5, 2/4 times

k = 6:
8 runs: 2^(6−3)_III, ±D = AB, ±E = AC, ±F = BC | 16 runs: 2^(6−2)_IV, ±E = ABC, ±F = BCD | 32 runs: 2^(6−1)_VI, ±F = ABCDE | 64 runs: 2^6 | 128 runs: 2^6, 2 times

k = 7:
8 runs: 2^(7−4)_III, ±D = AB, ±E = AC, ±F = BC, ±G = ABC | 16 runs: 2^(7−3)_IV, ±E = ABC, ±F = BCD, ±G = ACD | 32 runs: 2^(7−2)_IV, ±F = ABCD, ±G = ABDE | 64 runs: 2^(7−1)_VII, ±G = ABCDEF | 128 runs: 2^7

k = 8:
16 runs: 2^(8−4)_IV, ±E = BCD, ±F = ACD, ±G = ABC, ±H = ABD | 32 runs: 2^(8−3)_IV, ±F = ABC, ±G = ABD, ±H = BCDE | 64 runs: 2^(8−2)_V, ±G = ABCD, ±H = ABEF | 128 runs: 2^(8−1)_VIII, ±H = ABCDEFG

k = 9:
16 runs: 2^(9−5)_III, ±E = ABC, ±F = BCD, ±G = ACD, ±H = ABD, ±I = ABCD | 32 runs: 2^(9−4)_IV, ±F = BCDE, ±G = ACDE, ±H = ABDE, ±I = ABCE | 64 runs: 2^(9−3)_IV, ±G = ABCD, ±H = ACEF, ±I = CDEF | 128 runs: 2^(9−2)_VI, ±H = ACDFG, ±I = BCEFG

k = 10:
16 runs: 2^(10−6)_III, ±E = ABC, ±F = BCD, ±G = ACD, ±H = ABD, ±I = ABCD, ±J = AB | 32 runs: 2^(10−5)_IV, ±F = ABCD, ±G = ABCE, ±H = ABDE, ±I = ACDE, ±J = BCDE | 64 runs: 2^(10−4)_IV, ±G = BCDF, ±H = ACDF, ±I = ABDE, ±J = ABCE | 128 runs: 2^(10−3)_V, ±H = ABCG, ±I = BCDE, ±J = ACDF
Number of Factors | Factorial | Fractional Factorial (Res ≥ V) | PB with Foldover (Res = IV)
6 | 64 | 32 | 16
7 | 128 | 64 | 16
8 | 256 | 64 | 16
9 | 512 | 128 | 24
10 | 1024 | 128 | 24
11 | 2048 | 128 | 24
12 | 4096 | 256 | 24
13-16 | | | 32
17-20 | | | 40
Chapter 8—Concluding Remarks

Why Apply Experimental Design?
- A sound approach for studying and understanding situations where many variables affect a system
- A strategy for data collection that guarantees separation of the effects studied
- A roadmap for the process of data collection in an R&D/troubleshooting project
- When applied in the appropriate situations, this approach is usually far more efficient at developing new knowledge
- Target: efficient and robust model-building

What if experiments are undesigned? (i.e. dealing with undesigned or happenstance data)
- Gain in knowledge may be impeded because:
(a) The objectives are unclear
(b) Previous knowledge isn't fully used
(c) No multiple measurements of effects, leading to restricted conclusions
(d) No randomization to minimize the effects of "lurking" variables
(e) No replication to provide a measurement of background variability (i.e. error)
(f) Interaction info is usually not obtained
(g) Factor effects are ordinarily confounded
(h) Results are difficult to analyze
(i) No evaluation of statistical significance (goodness of fit)
(j) Model inadequacy is not easily deduced
Next Steps:
- Try out experimental design methodology on a small project
- Work with it to gain familiarity
- Identify a statistical resource (e.g. books, papers, subject matter experts)

Software Packages:
Older packages:
- IMSL—International Mathematical and Statistical Library
- NAG—Numerical Algorithms Group
- NIST—National Institute of Standards and Technology
- Mainframe packages: SPSS, SYSTAT, ECHIP, Statistica
More recent packages: Excel, MATLAB, Maple, Mathematica, SIMCA, MODDE, R
Multivariate Analysis:
- PCA (Principal Component Analysis)—for detecting outliers
- PLS (Projection to Latent Structures / Partial Least Squares)
- CCA (Canonical Correlation Analysis)—for serial correlation
What if the experiments don't go as planned?
(1) Some experiments turn out to be infeasible
- Move points inward from the infeasible region and use regression analysis
- Use advanced techniques based on D-optimality
(2) Design can't be completed
- Use regression analysis and examine correlations between independent variables
(3) The experimental plan was not followed
- Salvage the plan by making more runs
- Do a regression analysis and examine correlation between independent variables
- Design a new series of experiments
(4) You have the answer to your problem, but you're not done with your experimental plan
- Stop, analyze the results via regression, and examine correlation

What if the results are unexpected (e.g. a known significant factor isn't found to be statistically significant)?
- Check and re-check the analysis
- Check for bad data (using plots)
- Were the levels too narrow to permit detecting the effect?
- Does the error estimate appear to be reasonable?
- Check if enough trials were run for the reproducibility level
- Replicate key points in the plan
- Question previous knowledge
- Re-design the process to eliminate problems at their root cause
- Repeat the analysis
Meaningful Physical Interpretation:
- Experimental results are a fuzzy snapshot of a small subset of the real physical process
→ Control the "fuzziness," size, and location of the snapshot with experimental design
- Use a mechanistic model as the basis of an empirical model to take into account physical interpretation of the results

Relating statistical conclusions to the physical process:
- Compare results with past knowledge
- Try to interpret interactions in physical terms
- If there are significant confounded effects, try to eliminate confounding by physical reasoning (re-design)
- Use marginal means and contour plots to summarize statistical findings in terms of new process knowledge

Tips from Professional Consultants:
- Take time to plan your experiment (objective, response(s) to be measured, factors to be studied, most suitable design)
- Apply DOE to gain process understanding
- If experimenting with factors that are all likely to interact, use a resolution V design—not a screening design
- Small designs/fractions only find large effects
- Before doing a DOE, invest in developing a good method for data collection
- If the response is a function of ingredient proportions (including water/humidity), use a mixture design
- Learn more about DOE tools and choose the one that best fits the experiment
- If sorting the DOE runs to make it easier to conduct trials, this is a split-plot design
Big Ideas to Remember:
- If a system is sufficiently complex, it will be built before it’s designed, implemented before it’s
tested, and outdated before it’s debugged (e.g. institutional structures, software packages)
- Beware of confounded effect(s)
- We can only obtain those features for which we’re willing to pay