
Home and Consumer Finance Group

Rev. 1.0
This document and all information and expression contained herein are the property of ASU Department of Industrial Engineering, and may not, in whole or in part, be used,
duplicated, or disclosed for any purpose without prior written permission of ASU Department of Industrial Engineering. All rights reserved.
ASU Department of Industrial Engineering 2012
Define Phase
Paul Sandell (paul.g.sandell@intel.com)
Geetha Rajavelu (geetha.rajavelu@bankofamerica.com)

Objectives
What is Six Sigma?
Pre-Define: Project ideation and prioritization
Identify the elements of the Define phase
Discuss tollgate elements completed in Define
Discuss some Define tools
What is Six Sigma?
[Six Sigma umbrella: DMAIC, DMADV, and Lean]
Six Sigma is a problem-solving and process-improvement methodology that helps improve our products, processes, services, and people by reducing variation and eliminating defects and waste!

In completing process improvement projects, Six Sigma uses three approaches:
DMAIC (Define-Measure-Analyze-Improve-Control): when we have an existing process that is not meeting customer requirements
DMADV (Define-Measure-Analyze-Design-Verify): when we are designing a new process, or completely re-designing an existing process
Lean: principles to reduce waste and accelerate the velocity of a process

DMAIC Tollgate Checklist
(Each item is tracked as Not Started / In Progress / Complete)

Define (Start Date: _________, Tollgate Review Date: _________)
Project Charter: Problem Statement; Goal Statement; In/Out of Scope; Team & Time Commitments; Timeline / Milestones; Estimated Financial Benefits; Risks, Constraints & Compliance Issues Identified
SIPOC
High Level Process Map

Measure (Start Date: _________, Tollgate Review Date: _________)
Y & Defect Defined
Performance Spec for Y
Data Collection Plan
Measurement System Validated
Collect Data for Y
Process Capability for Y
Improvement Goal for Y
Detailed Process Map

Analyze (Start Date: _________, Tollgate Review Date: _________)
Brainstorm all possible Xs
Prioritize list of Xs
ID gaps
Refine benefits estimate

Improve (Start Date: _________, Tollgate Review Date: _________)
Develop and select solutions
Perform pilot of solutions
Confirm improvements
Confirm prioritized Xs
Map new process
Develop implementation plan
ID controls
Implement improvements

Control (Start Date: _________, Tollgate Review Date: _________)
Process documentation / Controls in place
Measure performance
Confirm sustainable solution
Transfer ownership
Evaluate translation opportunities
ID other improvement opportunities
Project documentation complete
Financial benefits verified and approved
Leveragability
Life Before Define
One of the most critical aspects of a successful Six Sigma
deployment is to select the right project(s)
Effective project ideation, prioritization and selection leads to Six Sigma
project results
Many companies fail before they even start Define
Project Selection Roadmap
1. Generate Ideas → 2. Prioritize → 3. Launch
The high-level process; details to follow!
Project Ideation Methods
Generate Ideas → Prioritize → Launch
CTQ flow-down from the Strategic Plan
Financial Analysis
Performance Metrics
Things that keep you up at night (Organic)
Potential project ideas become Projects

CTQ Flow Down Process
What: A process in which the strategic goals of the organization are used, and a statistical relationship is determined, to describe how the strategic goal is improved by completing the project.
How and Who: A trained MBB or BB partners with the key business leaders (Champions) and process owners to establish the linkage from strategy to project ideas.
How Often: Should be completed at least annually, and updated as business strategies change.
Issues: This process can take from weeks to months to adequately complete!
This is our most essential voice-of-the-customer linkage to the business!
[Flow-down tree: Strategies 1-2 → Business Needs 1-3 → Project Ideas 1-5]

Financial Analysis
What: A process that reviews key financial indicators for the business to identify project opportunities.
How and Who: A financial leader partners with the key business leaders (Champions) and process owners to establish the linkage from strategy to project ideas. An MBB or BB can be used to help facilitate the process.
How Often: Completed at least annually, and as frequently as quarterly.
Issues: Potential introduction of variation.
This is a voice-of-the-business process to generate project ideas.

Performance to Plan
What: A process that reviews metrics of existing performance against the business plan, and develops project ideas based on performance gaps to the plan.
How and Who: Process owners and key business leaders review the gaps, primarily during operational reviews; actions (projects) are typically an output of the process.
How Often: Quarterly.
Issues: Potential introduction of variation.
Another voice-of-the-business process to generate project ideas.

Organic Project Path
What: A process that uses structured brainstorming to bubble up project ideas at all business levels.
How and Who: Process owners (with MBB or BB assistance as necessary) facilitate their work teams through the process.
How Often: Quarterly, until the process becomes a natural part of the culture.
Issues: Can be great for team building; don't let the process become a complete whining session.
A creativity process, based on business pain points.

Six Sigma Project Checklist
Focused on a key driver of customer satisfaction
Narrow scope
Available metrics, or measurements that can be developed quickly
Control of the process owned by the Champion
Recurring events
Linked to corporate or business unit objectives
Financial benefits are traceable
Solution unknown
How Do We Prioritize?
Ideas → Prioritize → Launch
Ensure projects are linked to the company's business.
With ideas generated through multiple methods, we are ready to score the ideas against one another. So what is the process?
Ideas become Projects.

Prioritization of Projects
Now that we have a list of projects (a project pipeline), how do we decide which ones to do first?

Prioritization: A system is needed to gauge the relative customer, business, team member, and time impact of each project idea.

Best practice organizations develop filters or criteria to complete this assessment,
with a numerical importance value attached to the criteria.
Panning for the gold nuggets in our business!
Proposed criteria will be used to prioritize identified projects.

Criteria (Weight, Category): Description; scoring Worse (1) / (3) / Better (9)

Potential Impact on Customer Metrics (30%, Customer): Relative benefits and impact to key business drivers, when compared to customer requirements; No change (1) / Improve < 20% (3) / Improve > 20% (9)
Potential Impact on Experience (20%, Customer): Relative impact to customer experience; No change (1) / Improve < 20% (3) / Improve > 20% (9)
Savings or Revenue (20%, Financial): Approximate savings or revenue obtained; < $50k (1) / > $50k and < $200k (3) / > $200k (9)
Potential Impact on Employee Satisfaction (20%, Employee): Relative impact to employee satisfaction; No change (1) / Improve < 20% (3) / Improve > 20% (9)
Time to Implement (10%, Process): Expected time required to fully implement the project; 9+ months (1) / 3-9 months (3) / 0-3 months (9)
Project Launch
Ideas → Prioritize → Launch
Identification of Project Champion
Champion drafts charter
Champion selects belt and attends pre-training
Team members are identified
Pre-Launch Checklist
Output is Project Kickoff!

Black Belt vs Green Belt Project
Black Belt Project

Full time BB resource
Project scope is broad
Typically more complex
Large business impact
3-6 months to complete

Green Belt Project

Part time GB resource
Project scope is narrow
Typically less complex
Localized business impact
3 months to complete

Champion supports the determination of BB vs GB project!

Review of the Process
Let's take a few minutes to review our process:

1. Business strategy and operation plans established
2. Ideas generated based on key processes identified
3. Prioritization completed
4. Business drafts charters and determines project methodology
5. Projects launched

Our projects are established, and we are on our way to Define!
What Happens In Define?
In the Define phase of DMAIC there are three key elements that we seek to understand:
Clarify the business problem (i.e. opportunity)
Identify and validate customer requirements
Begin documentation of the process
Define for Design Approach (DMADV)
Objectives
Define a vision and project strategy to achieve the vision
Review your Design Charter
Opportunity, goal, scope
Validated by leadership
Identify initial design requirements
based on Voice of the Customer
Document the process where design
will focus
Define: DMAIC vs DMADV
Define in DMAIC
Define the business
problem/goals and customer
deliverables
Documents the existing process
and existing customer needs
Define in DMADV
Define the business
problem/goals and customer
deliverables
Determines unknown customer
needs and links to future design
Charter Elements Buttoned Down
We have (definitely) completed a draft charter before the belt
attends class, and we want to refine and clarify as appropriate

Problem statement
Goal statement
Scope
Team & time commitments
Project plan (timeline)
Estimated benefits
[Six Sigma Project Charter form (Rev. 2.2); key fields:
Header: Project Title; Project Leader; Champion; Process Owner; MBB / Coach; Location; Project Start Date; Project End Date; Project Type; Project ID No.; Process; Idea Document Completed?
PROJECT OVERVIEW: Problem Statement; Project Goal; Defect Definition; Opportunity Definition; Baseline; Identify Business Strategic Linkages & Secondary Business Impacts; Who is the Customer & What is the Customer Impact?; Other Key Project Measurements; Project Metrics (can include secondary metrics)
PROJECT MILESTONES: planned completion dates by project phase (Define, Measure, Analyze, Improve, Control), each with Start / End / Revised
BENEFITS SUMMARY ($000): benefits categories Direct (Hard) Type A and Indirect (Soft) Type B (for example, $ per resource requirements), covering Productivity, Cost Avoidance, Materials, Revenue Growth, and Other, each as One-time Impact / Annual, with Overall and Total Benefits
PROJECT DETAILS: In / Out of Scope; Team Members & Resources
Sign-offs (Open / Close): Coaching MBB; Champion; Business Leader; Finance Rep.]
Let's Look At A Real Green Belt Project
Problem statement: The current lack of procedures for completing
emergency installs to production systems creates a risk of user and customer
impacts, down-time and inadequate communication.
Project objective / goal: Design and implement an emergency installs
process that will allow for quick and accurate resolution of high severity
production issues and create objective tracking of results.
Takeaway
What is your assessment of this Problem and Goal? Why?
Project Plan
The roadmap on the journey; we want all belts to complete a project plan!
Six Sigma Project Plan (tracked per item: Scheduled Start, Scheduled Finish, Actual Start, Actual Finish, Estimated Duration, Tool or Task)

Define: Day 1 - Day 35
Task: Champion identifies business opportunity linked to business strategy (ongoing)
Tool: Champion drafts rough-cut charter (Day 1 - Day 2)
Task: Champion selects belt candidate (Day 2)
Task: Belt is assigned to training wave (Day 2)
Task: Belt assigned MBB coach (Day 3)
Tool: Champion/Belt review and modify charter (Day 4 - Day 9)
Task: Problem statement definition complete (Day 4)
Task: Project goal definition complete (Day 4)
Task: Project defect definition complete (Day 4)
Task: Project scope complete (Day 4)
Task: Key output metric and customer benefits complete (Day 4)
Task: Project benefits estimated (Day 4 - Day 9)
Task: Review benefits estimate with finance (Day 5)
Task: Finalize benefits and obtain finance signoff (Day 9)
Task: Champion and Belt determine project resources (Day 5 - Day 9)
Task: Final project signoff with Champion and MBB coach (Day 10)
Task: Meeting schedule determined with MBB coach (Day 10)
Tool: Project plan complete (Day 10 - Day 15)
Task: Kickoff meeting held with team and customer (Day 11)
Task: Roles clarified (Day 11)
Tool: Issue/Action/Risk log initiated (Day 11)
Task: Customer requirements obtained (Day 12 - Day 15)
Tool: SIPOC completed (Day 15)
Tool: Survey completed (Day 15 - Day 35)
Tool: High level "as is" process map complete (Day 16)

Measure: Day 11 - Day 44
Project Benefits
A critical element in Define; it helps clarify business value!

There are two types of project benefits:
Customer satisfaction (includes internal customers' team member satisfaction)
Financial

A Six Sigma project should have at
least one if not both of these benefits!
Both benefit types are the result of improved PROCESSES
Project Benefits
Customer Satisfaction: A measurable result of the belt project would be higher levels of satisfaction.

Financial:
Productivity (cost or growth)
Cost Avoidance
Materials (physical or vendor costs)
Best practice organizations measure annualized benefits
Risks, Constraints & Compliance Issues
[Risk log form columns: Risk | Impact | Probability | Risk Score | Mitigation Actions]
Six Sigma Project: Risks, Constraints, Compliance Issues
Why do we discuss this as part of Define?
Document the Process
Two critical tools that belts use to document the process (and which leaders should understand) are:

SIPOC
Process Map (high level)

Let's take a look at both!
A Process Is Defined As...
...A series of tasks or activities whereby one thing (the
input) is changed or used to create something else (the
output)
The SIPOC is a tool that documents a process from suppliers to
customers. Once completed, it is used to:
Identify and balance competing customer requirements
Aid in identification of data collection needs
See the process connected to the customer
Avoid getting stuck in detail
Provide a simplified view of the entire process, visible at a glance
Help provide scoping direction on projects
SIPOC Defined
Suppliers - Inputs - Process - Outputs - Customers
Steps to Diagram a SIPOC
1. Identify the Process to be diagrammed and name it
Write that in the Process Name
Complete other information at top of form
2. Define the Outputs and Inputs (boundaries):
Start at the END: Customer(s) and key Output(s)
Supplier(s) and key Input(s)
3. Clarify the Requirements (optional, but recommended)
What are key features/characteristics of the Output for each Customer?
4. Establish ~2-5 high-level Process Steps
Brainstorm major process activities on sticky notes
Group or organize activities into similar categories or
major steps in the process (Suggestion: use Affinity method)
Place major steps in most appropriate order
SIPOC Form
SIPOC Example
(Form header entries: Project Title: Cookie; Core Process: Cookie Baking; Project Number: 1; named on the form: Betty Crocker, Martha Stewart, Rachael Ray)

SUPPLIERS (providers of the required resources) and INPUTS (resources required by the process), with requirements:
Family → Cook (Knowledgeable)
Amazon → Recipe/Book (Available)
Food store → Ingredients (Available/quality)
Retail store → Utensils (Available)
Appliance store → Oven (Available/Working)

PROCESS (top-level description of activity): Obtain Ingredients → ... → Bake Cookie Dough → Timer Dings

OUTPUTS (deliverables from the process), with requirements, and CUSTOMERS (anyone who receives a deliverable from the process):
Baked Cookies (Soft/chewy) → Kids
Baked Cookies (Soft/chewy) → Spouse
Baked Cookies (Warm) → Spouse
Messy Kitchen (Clean) → Spouse
What is a Process Map?
A graphical representation of a process.
Identifies Key Process Input Variables (KPIVs, also called your little xs).
Identifies Key Process Output Variables (KPOVs, also called your little ys).
The first process map should be "as is".
Ensure the process is walked. A business process can be walked by representing information transfer and modification points. The team should not assume they know the process well enough; walk it.
The result should be a process map that identifies KPIVs and KPOVs. Critical KPOVs should be linked to customer CTQs.
The process map can have other information identified on it as well, as the team feels is appropriate (i.e., data collection points).
Take advantage of tribal knowledge held by those who work the process.
High Level Process Map
The high level process map builds upon our SIPOC by
seeking to show the primary sequence of events in the process

[Example: a high-level "go to ASU" process map]
Detailed Process Mapping
A graphic representation of a process that details decision points, and lists and classifies KPIVs (little xs) and KPOVs (little ys).
[Example detailed process map of an order-entry process. Steps: Rep answers phone → Rep greets customer → Rep determines product need → Customer identifies need date → Rep obtains customer info and amount → Rep obtains internal information → Rep determines terms → Rep verifies information → Rep completes request worksheet → Rep inputs order entry info → Rep prints order confirmation → Rep determines ship date → Rep reviews order → Rep faxes confirmation to customer → Rep verifies manufacturing receipt. Each step lists its classified inputs (e.g., Phone - SOP; CSR - N; Customer - N; Order - C) and its output (answered phone, customer greeted, product need obtained, ..., receipt verified).]
Define Completed
With these elements completed, the Define phase is essentially complete... why do we say essentially?
Project Charter
Problem Statement
Goal Statement
In/Out of Scope
Team & Time Commitments
Timeline / Milestone
Estimate Financial Benefits
Risks, Constraints & Compliance Issues Identified
SIPOC
High Level Process Map
Define
"What we think..."
Purpose: Properly define the project: the project purpose, scope, objective, and customer CTQs are stated, and the process or product to be improved is identified.
Key Outputs:
Customer / Business CTQ's
Project Charter
SIPOC
Process Map
[Define flow diagram, leading into Measure:
Corporate vision & objectives set →
Business groups establish objectives supporting corporate objectives (Depts. A-E, each with cycle times, e.g., 2/4/3/3/2 days) →
Projects selected & prioritized via a Cause and Effect Matrix (CTQs weighted for customer, financial, and employee impact, e.g., weights 10/6/8; sample project "Home Mortgage Defects" scores 9, 3, 9 for a total of 180) →
Belt candidate selected →
Champion, or Champion & Belt, complete the Project Charter; customer & business CTQs become part of the project →
Customer requirements & CTQs determined →
Complete the SIPOC, which helps scope the project & identify measurement points as well as customers, process inputs & outputs →
High-level process flow diagram to begin understanding the process →
Detailed process map with inputs and outputs identified (the order-entry example shown earlier)]
Summary
Reviewed and discussed the elements in the Define
Phase
Demonstrated appropriate applications of the Six Sigma
tools in the Define Phase
Appendix: Completed Charter
IEE 581
Six Sigma Methodology
DMAIC Measure Phase
Dr. Harry Shah
President and Master Black Belt
Business Excellence Consulting LLC
harry.shah@bizxlnc.com
www.bizxlnc.com
DMAIC - Process Improvement Roadmap
What is important? → 1.0 Define Opportunities
How are we doing? → 2.0 Measure Performance
What is wrong? → 3.0 Analyze Opportunity
What needs to be done? → 4.0 Improve Performance
How do we guarantee performance? → 5.0 Control Performance
Measure Performance
Key Deliverables
Input, Process, and
Output Indicators
Operational
Definitions
Data Collection
Formats and Sampling
Plans
Measurement System
Capability
Baseline Performance
Metrics
Process Capability
DPMO
PLT
PCE
Yield/Scrap
Others
Productive Team
Atmosphere
Inputs
Team Charter
Business case
Goal statement
Project scope
Project plan
Team roles and responsibilities
Prepared Team
Critical Customer Requirements
Process Maps
Quick Win Opportunities
2.0 Measure Performance
Determine What to
Measure
Manage Measurement
Evaluate Measurement
System
Determine Process
Performance
Determine What to Measure
Determine What to Measure
SIPOC Diagram
Common Elements to All Processes
Supplier
Input
Process
Output
Customer
Case Study - Coffee Example
A fast food restaurant conducted an annual customer
survey. There was an overwhelming response from
customers. A good percentage of the responses were
favorable. The customers liked their service and food.
Other customers complained that the coffee served by
the restaurant was not consistent in taste. As a result
some customers stopped patronizing the restaurant.
The owner's son is enrolled in the Six Sigma Methodology course at ASU. He decided to tackle the problem. The process consists of coffee brewing.
SIPOC Diagram
Case Study - Coffee Example
Suppliers: Coffee Mfg, Filter Mfg, Water Supplier
Inputs: Coffee, Filter, Water
Process: Coffee Brewing
Output: Brewed Coffee
Customer: Patron
Determine What to Measure
SIPOC Diagram
Common Elements to All Processes
Supplier
Input
Process
Output
Customer
Input Indicators Process Indicators Output Indicators
Determine What to Measure
Input Indicators
Measures that evaluate the degree to which the inputs to a process, provided by suppliers, are consistent with what the process needs to efficiently and effectively convert into customer-satisfying outputs.
Determine What to Measure
Process Indicators
Measures that evaluate the effectiveness, efficiency,
and quality of the steps and activities used to convert
inputs into customer satisfying outputs.
Determine What to Measure
Output Indicators
Measures that evaluate the effectiveness of the output.
Case Study - Coffee Example
Input
Indicators
Coffee
Manufacturer
Filter
Manufacturer
Type of Water
(Tap vs Bottle)
Process
Indicators
Amount of
Coffee
Amount of
Water
Age of Coffee
Output
Indicators
Coffee Temp.
Coffee Color
Coffee Flavor
Customer
Satisfaction
Index (Taste)
Determine What to Measure
Input Indicators and Process Indicators are the Xs; Output Indicators are the Ys:
Y = f(X)
Tools
Functional Process Map
Brainstorming (Cause & Effect Diagram)
Failure Modes and Effects Analysis (FMEA)
Cause and Effect Matrix
Selecting and Prioritizing Input, Process and
Output Indicators
Functional Process Map - Coffee Example
Coffee Maker lane: Empty Coffee Pot → Put Coffee Filter → Put Coffee in Filter → Fill Water Jug → Pour Water in Coffee Maker → Turn Coffee Maker On → Coffee Ready
Sales Associate lane: Receive Customer Order → Fill Coffee in Cup → Serve Customer → Get Payment
Cause & Effect Diagram - Coffee Example
[Fishbone diagram. Effect: VARIATION IN COFFEE TASTE. Branches: PEOPLE, MACHINE, MATERIAL, METHOD. Causes shown: Amount of Coffee, Age of Coffee, Caffeine Content, Amount of Water, Water Type, Coffee Mfg, Training, Age of Brewed Coffee, Heater.]
Failure Modes and Effects Analysis

Identify potential failure modes, determine their effect on the operation of the product, and identify actions to mitigate the failures
Utilize a cross-functional team
Improve product/process reliability & quality
Emphasizes problem prevention
Increase customer satisfaction
www.npd-solutions.com/fmea.html
Failure Modes & Effects Analysis

[FMEA form. Header: Process/Product; FMEA Date (original / revised); FMEA Team; Black Belt; Page _ of _.
Process columns: Item; Process Steps; Potential Failure Mode; Potential Effects of Failure; Severity; Potential Cause(s) of Failure; Occurrence; Current Controls; Detection; Risk Priority Number.
Actions columns: Recommended Action; Responsibility and Target Completion Date.
Results columns: Action Taken; Severity; Occurrence; Detection; Risk Priority Number.
Footer: Total Risk Priority; Resulting Risk Priority.]
Cause and Effect Matrix
Helps to prioritize key input and process indicators
(Xs) by evaluating the strength of their relationship
to output indicators (Ys)
Useful when no data exists
Effective in team consensus environment
Cause and Effect Matrix
Manage Measurement
Manage Measurement
Develop an Operational Definition
Provides everybody with the same meaning
Contains the What, How and Who
Adds consistency and reliability to data collection
Example: Operational Definition
One of the key output indicators for coffee example is
Temperature of the coffee
John (the owner's son) decides to implement a step to measure coffee temperature.
How should John write an Operational Definition to measure
temperature?
Example: Operational Definition
When coffee is ready measure temperature of coffee.
Is this a good Operational Definition?
As soon as the coffee is ready, pour a cup of coffee into a plastic cup. Put a thermometer in the coffee for 30 sec. Read the temperature in °F. Record the date, time, and temperature in a log book.
Example: Operational Definition
XYZ Financials provides car loans to customers. A recent customer survey indicates customers are unhappy about the time the company takes to process their loan applications. The CEO asks a Black Belt to determine the average cycle time to process a loan application.
All loan applications are received by fax. The approval/rejection letter is sent to the customer via fax.
The Black Belt decides to collect data over one month. Once an application has been processed, a bank employee will determine the cycle time.
How should the Black Belt write an Operational Definition to measure the cycle time of a loan application?
Example: Operational Definition
Measure Cycle Time for all loan applications
processed over one month.
Is this a good Operational Definition?
Collect data from all applications received by fax
between Sep 1, 2005 - Sep 30, 2005. The response
time will be determined by the date and time of the fax
received (as shown on the faxed application), to the
time the approval or rejection letter is faxed to the
applicant (as shown on the fax log).
Manage Measurement
Develop a Measurement Plan
Sample size, frequency etc.
Type of data
Continuous/Variable
Discrete/Attribute (Ordinal, Nominal)
Data collection log sheets
Treat as a process!
Collect Data
Visually Examine Data
Sample Data Measurement Plan Form
Columns: Performance Measure | Operational Definition | Data Source and Location | Sample Size | Who Will Collect the Data | When Will the Data Be Collected | How Will the Data Be Collected | Other Data that Should Be Collected at the Same Time

How will the data be used? Examples: identification of largest contributors; identifying if data is normally distributed; identifying sigma level and variation; root cause analysis; correlation analysis.

How will the data be displayed? Examples: Pareto chart; histogram; control chart; scatter diagrams.
Collect Data
First:
Evaluate the measurement system
Then:
Follow the plan; note any deviations from the plan.
Be consistent; avoid bias.
Observe data collection.
Collect data on a pilot scale (optional).
The data collected will only be as good as the
collection system itself. In order to assure timely and
accurate data, the collection method should be simple
to use and understand.
All data can be collected manually or automatically.
Automatic data collection assures accurate and timely
data, and removes the burden of collection from the
operator of the process. But, it can be very expensive
to set up. It usually involves computer programming
and/or hardware.
Obtaining the Measurements
Histogram
Box plot
Trend chart
Probability plot
Scatter plot
etc.
Visually Examine Data
Evaluate Measurement System
Measurement Systems Analysis (MSA)
A process to evaluate the factors that affect the quality of measurements
Measuring/metrology tool or gauge
Operator
Procedure or method
Environment
Must be performed before collecting data
Why should Measurement Systems be evaluated?
MSA for Continuous Data
σ²Total = σ²Process + σ²Measurement
Every recorded value (e.g., 3.17, 3.80, 2.93, 3.39, 3.53) reflects both the process and the measurement system.

For example (slide values): variance components σ²Process = 3 and σ²Measurement = 0.4 add to σ²Total = 3.4. In standard-deviation units, σProcess = 0.333 and σMeasurement = 0.0167 combine as σTotal = √(0.333² + 0.0167²) ≈ 0.3334.
Measurement Systems Properties for Continuous
Data
Discrimination
Accuracy (Bias)
Stability
Linearity
Gauge Capability (GR&R)
MSA for Continuous Data
Property: Discrimination
Capability of the measurement system to detect and
faithfully indicate even small changes of the measured
characteristic
[Scale illustrations: good discrimination resolves many distinct values across the 1-5 range; poor discrimination resolves only a few]
Discrimination contd.
A general rule of thumb: a measurement tool will have adequate discrimination if the measurement unit is at most one-tenth of the six-sigma spread of the total process variation:
Measurement Unit ≤ (6 × σTotal) / 10
Property: Accuracy or Bias
Bias is the difference between the observed average
and the reference value
Accurate
Not
Accurate
Accuracy or Bias contd.
Obs Avg = 101.63, Ref Value = 100 → Bias = 1.63
Property: Stability
The distribution of the measurements should be constant over time:
Average
Standard deviation
No drifts, sudden shifts, cycles, etc.
Evaluated with control charts of standard/golden unit(s) measurements: Xbar/R, Xbar/S, X/MR, etc. (see the sketch below)
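As a concrete illustration of the control-chart check, here is a minimal X/MR (individuals and moving range) sketch in Python. The data reuse the five readings shown earlier purely as stand-ins for repeated measurements of a golden unit (a real stability study would use far more readings); the constants 2.66 and 3.267 are the standard chart factors for a moving range of span 2.

```python
# Minimal X/MR (individuals & moving range) chart limits for gauge stability.
# Stand-in data: repeated readings of one standard/golden unit.
measurements = [3.17, 3.80, 2.93, 3.39, 3.53]

n = len(measurements)
xbar = sum(measurements) / n
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# 2.66 = 3/d2 and 3.267 = D4, with d2 = 1.128 for a moving range of span 2
x_ucl, x_lcl = xbar + 2.66 * mr_bar, xbar - 2.66 * mr_bar
mr_ucl = 3.267 * mr_bar

print(f"X chart:  center={xbar:.3f}, UCL={x_ucl:.3f}, LCL={x_lcl:.3f}")
print(f"MR chart: center={mr_bar:.3f}, UCL={mr_ucl:.3f}")
# Drifts, sudden shifts, or cycles in the plotted points signal an
# unstable measurement system.
```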
Stability contd.
[Illustration: a stable gauge shows the same distribution at Time 1 and Time 2; an unstable gauge shifts between Time 1 and Time 2]
Stability contd
3 Reference Units on 1 Metrology Tool
[Trend chart: polysilicon thickness (Angstroms, roughly 4230-4250) vs. reading number (0-50), with dates 6/9/xx, 6/22/xx, and 7/11/xx marked]
Stability -- Example
Trend chart for polysilicon thickness measurements in a Chemical
Vapor Deposition system.
On 6/22, something apparently happened to the process.
The change on 6/22 was traced to a faulty measurement tool.
Property: Linearity
Linearity is the difference in the bias values through the
expected operating range of the gauge
[Illustration: good linearity shows constant bias across the low-to-high range of operation; poor linearity shows bias changing across the range]
Bias and Linearity Example
(File: gauge study.mtw)
Property: Gauge Capability (GR&R)
Gauge Capability is made up of two sources of
variation or components
Repeatability & Reproducibility
σ²Measurement = σ²Repeatability + σ²Reproducibility
σ²Total = σ²Process + σ²Repeatability + σ²Reproducibility
Repeatability
The inherent variability of the measurement system.
The variation that results when repeated measurements are made
of the same parameter under as absolutely identical conditions
as possible:
same operator.
same set up procedure.
same test unit.
same environmental conditions.
during a short interval of time.
Repeatability
[Illustration: two distributions centered on the same mean relative to the true value; a narrow 6σ spread shows good repeatability, a wide 6σ spread shows poor repeatability]

σ²Measurement = σ²Repeatability + σ²Reproducibility
Reproducibility
The variation that results when different conditions are used to
make the measurement:
different operators.
different set up procedures, maintenance procedures, etc.
different parts.
different environmental conditions.
During a longer period of time.
Reproducibility
[Illustration: distributions for Operators 1, 2, and 3 relative to the true value; closely overlapping operator means show good reproducibility, widely separated means show poor reproducibility]
Gauge Capability Metrics
%R&R = 100 × (σMeasurement / σTotal)
%P/T = 100 × (6 × σMeasurement) / (USL − LSL)
Requirements for Gauge Capability Metrics
Guidelines for %R&R and %P/T:
Under 10%: Acceptable
10% - 30%: May be Acceptable
Over 30%: Not Acceptable
To find %R&R and %P/T we must estimate σMeasurement and σTotal.
Example: ANOVA Method (File: gauge study.mtw)
3 Operators, same 10 Parts, 2 Readings/Part
Operators & Parts are crossed
USL = 2 and LSL = 1
Gage R&R
                               %Contribution
Source              VarComp    (of VarComp)
Total Gage R&R     0.0012892        11.44
  Repeatability    0.0004033         3.58   <- Error
  Reproducibility  0.0008858         7.86
    Operator       0.0002584         2.29
    Operator*Part  0.0006274         5.57
Part-To-Part       0.0099772        88.56   <- Process
Total Variation    0.0112664       100.00
ANOVA Method contd - Minitab Output

                                 Study Var   %Study Var   %Tolerance
Source              StdDev (SD)    (6 * SD)       (%SV)   (SV/Toler)
Total Gage R&R        0.035905    0.215430       33.83        21.54
  Repeatability       0.020083    0.120499       18.92        12.05
  Reproducibility     0.029763    0.178578       28.04        17.86
    Operator          0.016076    0.096454       15.15         9.65
    Operator*Part     0.025048    0.150289       23.60        15.03
Part-To-Part          0.099886    0.599316       94.10        59.93
Total Variation       0.106143    0.636859      100.00        63.69

Number of Distinct Categories = 3

σ²Measurement = 0.00129; σMeasurement = 0.036; σTotal = 0.106
%R&R = 33.83% and %P/T = 21.54%
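The headline metrics can be reproduced directly from the variance components in the output above. A minimal Python sketch (values copied from the slide; USL = 2 and LSL = 1):

```python
# Recompute %R&R and %P/T from the Minitab variance components above.
import math

var_measurement = 0.0012892  # Total Gage R&R variance component
var_total = 0.0112664        # Total Variation
usl, lsl = 2.0, 1.0

sd_measurement = math.sqrt(var_measurement)  # ~0.0359
sd_total = math.sqrt(var_total)              # ~0.1061

pct_rr = 100 * sd_measurement / sd_total          # %R&R ~ 33.8
pct_pt = 100 * 6 * sd_measurement / (usl - lsl)   # %P/T ~ 21.5
print(f"%R&R = {pct_rr:.2f}%, %P/T = {pct_pt:.2f}%")
```

Against the guidelines on the previous slide, %R&R is over 30% (not acceptable) while %P/T falls in the 10%-30% band, so this gauge is marginal at best.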
Gauge Capability Nested, Mixed and Other
Models
Crossed Factor B is crossed with Factor A if the levels of B
are the same for each level of A
Example: In an MSA study, 3 operators measure the same 10
parts 3 times. Operator is Factor A, Part is Factor B. B is
crossed with A.
[Crossed layout: Operators 1, 2, and 3 each measure the same Parts 1-10, with repeats 1-3 per part]
Gauge Capability Nested, Mixed and Other
Models
Nested Factor B is nested within Factor A if the levels of B are
different for each level of A
Example: In an MSA study, 3 operators measure 10 different
parts 3 times. Operator is Factor A, Part is Factor B. B is
nested within or under A.
[Nested layout: Operator 1 measures Parts 1-10, Operator 2 measures Parts 11-20, Operator 3 measures Parts 21-30, with repeats 1-3 per part]
Additional Model Examples
Six operators were randomly chosen for an MSA
study. Each operator had four different instruments
to measure with, and the instruments used by one
operator were different than the instruments used by
another operator. There were nine different parts
measured by each instrument. Each part was
measured three times.
What if the operators had used the same four
instruments?
References
Montgomery & Runger: Gauge Capability & Designed
Experiments Part I: Basic Methods. Quality
Engineering (1993-94); 6(1), pp 115-135
Montgomery & Runger: Gauge Capability & Designed
Experiments Part II: Experimental Design Models &
Variance Comp. Estimation. Quality Engineering
(1993-94); 6(2), pp 289-305
MSA for Attribute Data
Binomial results: Good/Bad,
Conforming/Nonconforming, Red/Not Red, etc.
Use a minimum of 10 known good items and 10
defective items
Use 2-3 Operators or Appraisers
Have each Appraiser inspect or evaluate each unit 2-3 times
Analyze as Attribute Agreement Analysis
Example
(File: ATTR-GAGE STUDY.mtw)
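A minimal sketch of the within-appraiser percent-agreement computation, with made-up ratings chosen to reproduce the headline percentages in the Minitab example that follows (Fred 100%, Lee 90%); the data structure is an illustrative assumption:

```python
# Within-appraiser agreement: each appraiser rates each unit twice;
# a unit "matches" when both trials agree. Data are made up.
ratings = {  # appraiser -> list of (trial 1, trial 2) per inspected unit
    "Fred": [("G", "G")] * 12 + [("NG", "NG")] * 8,
    "Lee": [("G", "G")] * 11 + [("NG", "NG")] * 7 + [("G", "NG")] * 2,
}

for appraiser, pairs in ratings.items():
    matched = sum(1 for t1, t2 in pairs if t1 == t2)
    pct = 100 * matched / len(pairs)
    print(f"{appraiser}: {matched}/{len(pairs)} matched = {pct:.1f}%")
# Fred: 20/20 matched = 100.0%; Lee: 18/20 matched = 90.0%
```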
MSA for Attribute Data - Example
Within Appraisers
Assessment Agreement
Appraiser # Inspected # Matched Percent 95 % CI
Fred 20 20 100.00 (86.09, 100.00)
Lee 20 18 90.00 (68.30, 98.77)
# Matched: Appraiser agrees with him/herself across trials.
Fleiss' Kappa Statistics
Appraiser Response Kappa SE Kappa Z P(vs > 0)
Fred G 1.0000 0.223607 4.47214 0.0000
NG 1.0000 0.223607 4.47214 0.0000
Lee G 0.6875 0.223607 3.07459 0.0011
NG 0.6875 0.223607 3.07459 0.0011
Each Appraiser vs Standard
Assessment Agreement
Appraiser # Inspected # Matched Percent 95 % CI
Fred 20 20 100.00 (86.09, 100.00)
Lee 20 17 85.00 (62.11, 96.79)
# Matched: Appraiser's assessment across trials agrees with the
known standard.
Assessment Disagreement
Appraiser # NG / G Percent # G / NG Percent # Mixed Percent
Fred 0 0.00 0 0.00 0 0.00
Lee 1 5.56 0 0.00 2 10.00
# NG / G: Assessments across trials = NG / standard = G.
# G / NG: Assessments across trials = G / standard = NG.
# Mixed: Assessments across trials are not identical.
Fleiss' Kappa Statistics
Appraiser Response Kappa SE Kappa Z P(vs > 0)
Fred G 1.00000 0.158114 6.32456 0.0000
NG 1.00000 0.158114 6.32456 0.0000
Lee G 0.60784 0.158114 3.84434 0.0001
NG 0.60784 0.158114 3.84434 0.0001
H0: κ = 0; Ha: κ > 0
MSA for Attribute Data - Example
Between Appraisers
Assessment Agreement
# Inspected # Matched Percent 95 % CI
20 17 85.00 (62.11, 96.79)
# Matched: All appraisers' assessments agree with each
other.
Fleiss' Kappa Statistics
Response Kappa SE Kappa Z P(vs > 0)
G 0.673203 0.0912871 7.37457 0.0000
NG 0.673203 0.0912871 7.37457 0.0000
All Appraisers vs Standard
Assessment Agreement
# Inspected # Matched Percent 95 % CI
20 17 85.00 (62.11, 96.79)
# Matched: All appraisers' assessments agree with the known
standard.
Fleiss' Kappa Statistics
Response Kappa SE Kappa Z P(vs > 0)
G 0.803922 0.111803 7.19049 0.0000
NG 0.803922 0.111803 7.19049 0.0000
H0: κ = 0; Ha: κ > 0
MSA for Attribute Data - Example
[Assessment Agreement plots: percent agreement (y-axis 65-100%) with 95.0% CI, shown Within Appraisers and Appraiser vs Standard, for appraisers Fred and Lee. Form header: Date of study; Reported by; Name of product; Misc.]
Determine Process Performance
Determine Process Performance
Document baseline performance
Provide direction to the project
Compare before performance to after
Determine Process Performance
Process Capability Indices
For continuous data: Cp, Cpk, Cpm
For discrete data: Defect Per Million
Opportunities (DPMO)
Process Lead Time (PLT)
Process Cycle Efficiency (PCE)
Yield/Scrap
Others
Steps for Conducting a Process Capability
Study
1. Verify that process is stable
2. Determine whether the data distribution is normal
3. Calculate appropriate indices
4. Make recommendations for improvement
Cpk = min{Cpu, Cpl}, where
Cpu = (USL − Xbar) / (3S)  and  Cpl = (Xbar − LSL) / (3S)

[Four processes, A-D, all with Cpk = 1.5 but different spreads relative to LSL, T, and USL: A: Cp = 6.0; B: Cp = 3.0; C: Cp = 2.0; D: Cp = 1.5]
Cpk alone is not sufficient to indicate the capability of a process
Cpm Alternative to Cpk
Cpm is considerably
more sensitive to
deviations from target
than Cpk
A hotel provides room service meals to its guests. It is
hotel policy that the meal is delivered at the time
scheduled by the guest.
The hotel Six Sigma team has found from the Voice of
the Customer that a breakfast delivered too early will
inconvenience the guest as much as a late delivery.
Research indicates that guests require that breakfast
be delivered within 10 minutes of the scheduled
delivery time.
Example: Hotel Breakfast Delivery
(File: HotelMeals.mtw)
Example: Hotel Breakfast Delivery

Process Capability of Delivery Time Deviation (histogram from -12 to 24 with LSL, Target, and USL marked; within and overall fits overlaid)

Process Data: Sample N = 725; LSL = -10.00000; Target = 0.00000; USL = 10.00000; Sample Mean = 6.00357; StDev(Within) = 7.20201; StDev(Overall) = 7.16405

Potential (Within) Capability: Cp = 0.46; CPL = 0.74; CPU = 0.18; Cpk = 0.18; CCpk = 0.46; Cpm = 0.36
Overall Capability: Pp = 0.47; PPL = 0.74; PPU = 0.19; Ppk = 0.19

Observed Performance: PPM < LSL = 13793.10; PPM > USL = 268965.52; PPM Total = 282758.62
Exp. Within Performance: PPM < LSL = 13138.34; PPM > USL = 289479.68; PPM Total = 302618.02
Exp. Overall Performance: PPM < LSL = 12745.81; PPM > USL = 288475.05; PPM Total = 301220.86
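The headline indices can be recomputed from the summary statistics in the output; a minimal Python sketch using the slide's values:

```python
# Recompute Cp, Cpk, and Cpm from the Minitab summary statistics above.
import math

mean, target = 6.00357, 0.0
lsl, usl = -10.0, 10.0
sd_within, sd_overall = 7.20201, 7.16405

cp = (usl - lsl) / (6 * sd_within)    # ~0.46
cpu = (usl - mean) / (3 * sd_within)  # ~0.18
cpl = (mean - lsl) / (3 * sd_within)  # ~0.74
cpk = min(cpu, cpl)                   # ~0.18
# Cpm penalizes deviation from target as well as spread:
cpm = (usl - lsl) / (6 * math.sqrt(sd_overall**2 + (mean - target) ** 2))  # ~0.36

print(f"Cp={cp:.2f}, Cpk={cpk:.2f}, Cpm={cpm:.2f}")
```

Because deliveries average about six minutes late, Cpk (0.18) is far below Cp (0.46), and Cpm (0.36) shows the penalty for being off target.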
Defects per Million Opportunities
D = Total # of defects counted in the sample
Must be at least 5 defects and 5 non-defects to calculate
DPMO
N = # of units of product/service
O = # of opportunities for a defect to occur per unit of
product/service
M = million
DPMO = 1M × Defects / (Units × Opportunities)
Defects per Million Opportunities vs Process Sigma

Sigma   DPMO
2       308,770
2.25    226,716
2.5     158,687
2.75    105,660
3       66,811
3.25    40,060
3.5     22,750
3.75    12,225
4       6,210
4.25    2,980
4.5     1,350
4.75    577
5       233
5.25    88
5.5     32
5.75    11
6       3.4
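Where the table's values come from: with the conventional 1.5-sigma long-term shift, DPMO = 10^6 × P(Z > sigma − 1.5). A scipy sketch (results match the table to within rounding conventions):

```python
# Convert a process sigma level to DPMO, assuming the 1.5-sigma shift.
from scipy.stats import norm

for sigma in (2.0, 3.0, 4.0, 5.0, 6.0):
    dpmo = 1e6 * norm.sf(sigma - 1.5)  # sf = upper-tail probability
    print(f"{sigma:.2f} sigma -> {dpmo:,.1f} DPMO")
# 3.00 sigma -> ~66,807 DPMO; 6.00 sigma -> ~3.4 DPMO
```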
D = 205
N = 725
O = 1
Example: Hotel Breakfast Delivery
DPMO = 1M × 205 / (725 × 1) = 282,758
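The formula as a one-function Python sketch, checked against this example:

```python
# Defects per million opportunities.
def dpmo(defects: int, units: int, opportunities: int) -> float:
    return 1_000_000 * defects / (units * opportunities)

print(dpmo(defects=205, units=725, opportunities=1))  # ~282,758.6
```

Note this agrees with the observed PPM Total (282,758.62) in the capability output earlier.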
Process Lead Time (PLT)
Little's Law:
Process Lead Time (PLT) = Work In Process (WIP) / Exit Rate (ER)
Example: customer orders flow through Order Take → Order Entry → Credit Check → Schedule Orders, with Exit Rate = 20 units/day and WIP = 100:
PLT = 100 / 20 = 5 days
Definitions
Process Lead Time (PLT): The time taken from the entry of work into a process until the work exits the process (which may consist of many activities).
Work-In-Process (WIP): The amount of work that has entered the process but has not been completed. It can be paper, parts, product, information, emails, etc.
Exit Rate (Average Completion Rate or Throughput): The average output of a process over a given period of time, usually a day (units/time).
Value is Defined by the Customer
Customer Value-Added (CVA)
An activity adds value for the customer only if:
The customer recognizes the value
It changes the service/product toward
something the customer expects
It is done right the first time
Process Cycle Efficiency (PCE)
Process Cycle Efficiency = Customer Value-Added Time / Process Lead Time
Example: the same order flow (Exit Rate = 20 units/day, WIP = 100) has CVA times of 0.4 + 0.4 + 0.3 + 0.4 = 1.5 hrs across its four steps:
PCE = 1.5 hrs / 5 days = 1.5 hrs / 40 hrs = 3.75% (assuming 1 day = 8 hrs)
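And the PCE arithmetic as a sketch:

```python
# Process Cycle Efficiency = customer value-added time / process lead time.
cva_hours = 0.4 + 0.4 + 0.3 + 0.4  # CVA time across the four steps, hrs
plt_hours = 5 * 8                  # 5-day lead time at 8 working hrs/day
pce = cva_hours / plt_hours
print(f"PCE = {pce:.2%}")          # PCE = 3.75%
```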
IEE 581 Six-Sigma Methodology
DMAIC The Analyze Phase
Fall 2012 Class 6
Cheryl L. Jennings, PhD, MBB
c.jennings@asu.edu
More on Process Capability Analysis
In the previous Measure lecture, measures of process performance included:
Cp, Cpk, Cpm for continuous data
DPMO and PPM for discrete data

Typically used as a goodness measure of process performance:
In the Measure phase, to baseline performance
During the Analyze phase, to identify suspect equipment, suppliers, etc., and provide direction to the project
In the Improve phase, to compare before performance to after
In the Control phase, to monitor ongoing performance

Underlying assumptions are normality, and that the process is in statistical control
Relationship Between Cp and Cpk
[Figure: relationship between Cp and Cpk, including the Motorola definition of Six Sigma quality]
* From Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York
Cp and Cpk - The Usual Equations

Cp = (USL − LSL) / (6s)
Cpk = min{Cpu, Cpl}, where Cpu = (USL − Xbar) / (3s) and Cpl = (Xbar − LSL) / (3s)

Aren't these just point estimates?
Confidence Interval for Cpk
Key Points
The Cpk metric is routinely used.
Recall that the Cpk values we calculate are based on statistics; the calculated Cpk is therefore used to estimate the TRUE Cpk.
Rarely (if ever) is the confidence interval on a Cpk considered.
Black Belts should consider CIs.

Ĉpk × [1 − z(α/2) × √(1/(9n·Ĉpk²) + 1/(2(n−1)))]  ≤  Cpk  ≤  Ĉpk × [1 + z(α/2) × √(1/(9n·Ĉpk²) + 1/(2(n−1)))]

For a 95% confidence interval, z(α/2) = 1.96.
Example
Based on a sample size of n = 13 and an estimated Ĉpk = 1.11, a 95% confidence interval for Cpk is:

1.11 × [1 − 1.96 × √(1/(9(13)(1.11²)) + 1/(2(13)−2))]  ≤  Cpk  ≤  1.11 × [1 + 1.96 × √(1/(9(13)(1.11²)) + 1/(2(13)−2))]

0.63 ≤ Cpk ≤ 1.59
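The same interval as a small Python sketch (the function name is illustrative); it reproduces 0.63 ≤ Cpk ≤ 1.59:

```python
# Approximate confidence interval for Cpk from a point estimate.
import math

def cpk_ci(cpk_hat: float, n: int, z: float = 1.96) -> tuple:
    """100(1-alpha)% CI for Cpk, with z = z_{alpha/2} (1.96 for 95%)."""
    half_width = z * math.sqrt(1 / (9 * n * cpk_hat**2) + 1 / (2 * (n - 1)))
    return cpk_hat * (1 - half_width), cpk_hat * (1 + half_width)

low, high = cpk_ci(1.11, 13)
print(f"{low:.2f} <= Cpk <= {high:.2f}")  # 0.63 <= Cpk <= 1.59
```

With only n = 13 the interval is very wide: a "good" point estimate of 1.11 is statistically consistent with a quite poor true Cpk of 0.63.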
What if Data is not Normally Distributed?
Example
n = 200
Values range from 1001.68 to 2891.49
Histogram shows clearly that data are
skewed right and not normal
With LSL = 900 and USL = 2700
Assuming normal data, the usual Cpk
estimate would be 0.46
However non-normal Cpk = 1.15
Yet Another Process Performance Measure
Are these indices really useful?
* From Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York
Key Points
The Cpk metric is routinely used

Rarely (if ever) is the confidence interval on a Cpk considered

Black Belts should consider using Confidence Intervals
The DMAIC Process
* From Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York
Analyze Opportunity
3.0 Analyze Opportunity
Identify and Validate
Root Causes
Basic Tools
Advanced Tools
Inputs
Input, Process, and Output
Indicators
Operational Definitions
Data Collection Formats and
Sampling Plans
Measurement System Capability
Baseline Performance Metrics
Process Capability
Cost of Poor Quality (COPQ)
Time
Yield
Other
Productive Team Atmosphere
Outputs
Data Analyses
Validated Root Causes
Potential Solutions
* From Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York
The Primary DMAIC Six Sigma Tools
Three Ways to Obtain Data for Analysis
1. A retrospective study using historical data
A lot of data
But generated under what conditions?
Data quality issues

2. An observational study
Planned data collection, under known conditions in a production mode
Typically a short period of time, may not see all variation or be able to see
changes in key variables

3. A designed experiment
Also planned data collection, with deliberate manipulation of controllable
process inputs
The only way to prove cause-and-effect relationship
Requires commitment of resources
Identify Potential Root Causes Basic Analyze Tools
Cause & Effect Diagram*
FMEA*
Cause & Effect Matrix*

Histogram
Scatter Plot
Box Plots
Pareto Diagram






*Discussed in Measure lectures
Pareto Diagram
In the last year, 65% of airline passenger complaints about aircraft cabin interior baggage accommodations concerned insufficient stowage in overhead bins for carry-on luggage.

Top three complaint categories comprise 80% of the problem. Other teams are working on 1 & 2; your team is tasked with cabin-related complaints.

Cabin accommodations generated most complaints related to aircraft cabins; most complaints were about room for carry-on baggage.

[Cascading Pareto charts of number of defects: (1) complaints by category (Cost, Sched, Cabin, Bags, Rgs, Tix, Etc.), top three ≈ 80%; (2) cabin-related complaints (Accommodations, Food, Bevs, Ent, Sound, Other), accommodations ≈ 50%; (3) cabin physical accommodations (Bag Room, Leg Room, Seat Width, Head Room, Rest Room, Other); (4) bag accommodations / storage (Ovhd Bin, Under Seat, Garment Rack, Other), overhead bin = 65%]
Identify Potential Root Causes Advanced Analyze Tools
Statistical Process Control (SPC)
Comparative Methods: Hypothesis tests, Confidence intervals
ANOVA
Source of Variation (SOV) Studies
Regression Analysis
Screening Experiments (Designed Experiment, DOE)
Nonparametric Methods
Phase I and Phase II Control Chart Application
Phase I Process Taming
Process is likely out of control; as in Measure, Analyze and Improve phases
Use of control charts is to bring process into state of control, with the
identification of out-of-control signals and investigation for root cause
Shewhart control charts are suited to Phase I because
Easy to construct & interpret
Effective at detecting both large, sustained process shifts as well as outliers,
measurement errors, data entry errors, etc.
Patterns are often easy to interpret and have physical meaning
Also suited to use of sensitizing or Western Electric rules

Phase II Process Monitoring
Process is relatively stable, causes of larger shifts have been identified and
permanently fixed; as in Control phase
SPC to Identify Potential Causes
In Phase I, control limits are typically calculated retrospectively
Data is collected, say 20 or 25 subgroups
Trial control limits are calculated
Out-of-control points are investigated for assignable causes and solutions
Control limits are recalculated from points within the trial control limits
New data is collected, compared with the revised trial control limits, and the
analysis is repeated until the process is stabilized

In Phase II, control limits are calculated from the stabilized process
Shewhart 3-sigma limits
Why do we often use 3 sigma limits?
... Experience indicates that t = 3 seems to be an acceptable economic value. ...
Economic Control of Quality of Manufactured Product, W.A. Shewhart,
Commemorative Issue published by ASQ in 1980, p. 277.

Wider control limits decrease the risk of a type I error, the risk of a point falling
beyond the control limits indicating an out-of-control condition when no assignable
cause exists
For 3-sigma limits, the probability is 0.0027 (27 out of 10,000 plot points), or
0.0135 in one direction

Wider control limits also increase the risk of a type II error, the risk of a point falling
between the control limits when the process is really out of control
Comparative Methods

Single sample (one-to-standard, fixed value): Z-test; t-test; χ²-test; Sign/Wilcoxon
Two samples, or paired two-sample (one-to-one): Z-test; t-test; F-test; paired t-test; Sign test; Wilcoxon Rank Sum (also called the Mann-Whitney test)
Multiple samples (multiple comparisons): ANOVA; Kruskal-Wallis; use of ranks; χ²-tests
Parametric Inference Methods
We will look at three tests, but fundamentals apply to all tests
The one-sample Z-test
The one-sample t-test
The two-sample t-test (also the pooled t-test)

Assumptions for these three tests are
Random samples
From normal populations
And for two-sample tests, the two populations are independent

Checking for random, independent samples
Best approach is to use a sound sampling plan
Statistical approaches for time-oriented data include runs tests and time series
methods
22
Checking Normality
Probability Plot
Boxplot
Goodness-of-fit tests: chi-square, Anderson-Darling
H0: The form of the population distribution for the characteristic is Normal.
23
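A quick way to run these checks in Python, with illustrative data (scipy and matplotlib assumed):

    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    x = np.random.default_rng(1).normal(50, 5, size=40)   # sample to be checked

    ad = stats.anderson(x, dist='norm')                   # Anderson-Darling test
    print(ad.statistic, ad.critical_values)               # compare statistic to critical values

    stats.probplot(x, dist="norm", plot=plt)              # normal probability plot
    plt.show()                                            # points near the line support H0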
7-Step Hypothesis Testing Procedure
24
1. Parameter of Interest
2. Null Hypothesis
3. Alternative Hypothesis
4. Test Statistic
5. Reject H0 if:
Test statistic approach (fixed significance)
P-value approach
Confidence Interval approach
6. Computations
Includes checking assumptions
7. Conclusions
The One-Sample Z-Test
25
We Could Also Use a P-Value Approach
26
27
An Example of the Z-Test
28
3rd Approach: Confidence Intervals
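A minimal one-sample Z-test sketch in Python showing all three approaches (fixed significance, P-value, confidence interval); the data and the known sigma are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm

    x = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.4, 9.7])
    mu0, sigma = 10.0, 0.3                      # H0: mu = 10.0; sigma assumed known

    z = (x.mean() - mu0) / (sigma / np.sqrt(len(x)))
    p_value = 2 * (1 - norm.cdf(abs(z)))        # two-sided P-value
    ci = x.mean() + np.array([-1, 1]) * norm.ppf(0.975) * sigma / np.sqrt(len(x))
    print(z, p_value, ci)                       # reject H0 at alpha = 0.05 if p_value < 0.05,
                                                # or equivalently if mu0 falls outside ci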
The One-Sample t-Test
29
An Example of the t-Test
30
MINITAB 1-Sample t-Test
31
When doing the t-test
manually, it is usually necessary
to approximate the P-value
Approximating the P-value with a t-Table
32
Approximating the P-value with MINITAB
33
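Software returns the exact P-value directly, so no table approximation is needed; a one-sample t-test sketch with scipy on illustrative data:

    import numpy as np
    from scipy.stats import ttest_1samp

    x = np.array([16.8, 17.2, 17.4, 16.9, 16.5, 17.1, 17.3, 16.6])
    t_stat, p_value = ttest_1samp(x, popmean=17.0)   # H0: mu = 17.0, two-sided
    print(f"t = {t_stat:.3f}, P-value = {p_value:.4f}")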
The Two-Sample t-Test
34
Testing Hypotheses on the Difference in Means of Two Normal
Distributions, Variances Unknown
An Example
35
MINITAB 2-Sample t-Test
36
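A two-sample (pooled) t-test sketch with scipy on illustrative data; equal_var=True gives the pooled test, while equal_var=False would give the Welch alternative when the variances cannot be assumed equal.

    import numpy as np
    from scipy.stats import ttest_ind

    x1 = np.array([92.1, 91.8, 92.4, 92.0, 91.9, 92.3])
    x2 = np.array([92.8, 93.1, 92.6, 93.0, 92.7, 92.9])
    t_stat, p_value = ttest_ind(x1, x2, equal_var=True)   # H0: mu1 = mu2
    print(f"t = {t_stat:.3f}, P-value = {p_value:.4f}")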
Other Comparative Tests for Normal Distributions
The Paired t-Test
2 samples, paired data
If analyzed incorrectly as a 2-sample test, the variance estimate may be inflated
and give misleading results

χ²-test
Variance of a normal distribution

F test
Variances of two normal distributions



37
What if the Distribution is Not Normal?
Comparative methods discussed are based on assumption of random sample from a
normal distribution
Most of the comparative methods based on the normal distribution are relatively
insensitive to moderate departures from normality
Two exceptions are the χ² and F tests for variances

Options for more severe departures from normality are
1. Transform the data to normal, for example using logarithm, square root or a
reciprocal, and use a method based on the normal distribution
See Montgomery, DOE, Selecting a Transformation: The Box-Cox Method
2. Utilize a nonparametric or distribution-free approach
38
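A sketch of option 1 with scipy's Box-Cox routine, which estimates the transformation parameter lambda by maximum likelihood (data must be positive; the skewed sample here is simulated):

    import numpy as np
    from scipy import stats

    x = np.random.default_rng(7).exponential(scale=10.0, size=100)   # skewed data
    x_trans, lam = stats.boxcox(x)
    print(f"Estimated lambda = {lam:.3f}")   # lambda near 0 suggests a log transform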
More Than Two Populations?
For more than two populations, i.e., more than two levels of a single factor (a
single-factor experiment), ANOVA can be used for comparing means (see the sketch
after the assumptions list below)
39
40
Assumptions can be checked by analyzing residuals
Normality
Independence
Equal variance
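A one-way ANOVA sketch with scipy; the three factor levels below are illustrative data, not the course dataset.

    from scipy.stats import f_oneway

    level_a = [24.1, 25.3, 24.8, 25.0]
    level_b = [26.2, 26.8, 25.9, 26.5]
    level_c = [24.9, 25.1, 25.4, 24.7]
    f_stat, p_value = f_oneway(level_a, level_b, level_c)   # H0: all means equal
    print(f"F = {f_stat:.3f}, P-value = {p_value:.4f}")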
Sources of Variation Studies
Sources of Variation (or SOV) studies are used to understand and characterize
process variability

Often described as a process snapshot, the process is observed in a production
mode without adjustment or manipulation

A sampling plan is designed to encompass what are thought to be the major
contributors to process variability

Data is collected over a sufficient period of time to capture a high percentage of the
historical process variation

Often suited to analysis as a nested design

May be a precursor to a designed experiment (DOE)

41
Solder Paste Example
A process engineer is interested in determining where the majority of the variability is
coming from in the raw material being supplied to a screen-printing process. Three
lots of solder paste are randomly selected. From each lot, four tubes of solder paste
are selected at random. Three boards are printed for each tube of solder paste.
42
For more on Nested Designs, see Chapter 14 in Montgomery, D. C. (2009),
Design and Analysis of Experiments, 7th edition, Wiley, New York.

(Tree diagram: 3 Lots; 4 Tubes sampled within each Lot; 3 Boards printed per Tube;
Volume measured on each board, e.g., 28, 23, 23, ..., 27, 25, 24.)

MINITAB Analysis
Examining the p-values, we conclude there is no significant effect on Volume due to
Lot, but the Tubes of solder paste from the same Lot differ significantly.

Knowing that a major source of variability is the Tube-to-Tube variation within a
Lot gives direction for solving the problem.

Unfortunately, also note that the Within-Tube (Error, or Board-to-Board) variability
is the largest source of variation, suggesting improvement is needed in the
screen-printing process itself.
43
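A minimal nested-design sketch in Python using statsmodels, with simulated lot/tube/board data (not the course dataset). The key point flagged in the comments: in a nested design, Lot is tested against the Tube-within-Lot mean square, not against the residual that anova_lm uses by default.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(3)
    rows = []
    for lot in range(3):
        lot_eff = rng.normal(0, 0.5)
        for tube in range(4):
            tube_eff = rng.normal(0, 2.0)        # large tube-to-tube variation
            for board in range(3):
                rows.append({"Lot": lot, "Tube": tube,
                             "Volume": 25 + lot_eff + tube_eff + rng.normal(0, 1.5)})
    df = pd.DataFrame(rows)

    # Tube nested within Lot: model Lot plus the Lot:Tube term
    fit = ols("Volume ~ C(Lot) + C(Lot):C(Tube)", data=df).fit()
    anova = sm.stats.anova_lm(fit)
    ms = anova["sum_sq"] / anova["df"]           # mean squares
    print(anova)
    print("F(Lot) vs Tube(Lot):", ms["C(Lot)"] / ms["C(Lot):C(Tube)"])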
Regression Analysis
Recall that two ways to obtain data for analysis included
A Retrospective study using historical data
An Observational study resulting from planned data collection
Regression can be used for both, with care on Retrospective data

Abuses of Regression include
Selection of variables that are completely unrelated in a causal sense: a strong
observed relationship does not imply that a causal relationship exists. Designed
experiments are the only way to establish cause-and-effect relationships.
Extrapolation beyond the range of the original data

We will study logistic regression in a later class lecture on Categorical data analysis
44
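A simple regression sketch with statsmodels on illustrative data; note that the fit is only trusted over the observed x range, and a strong R-squared says nothing about causation, per the cautions above.

    import numpy as np
    import statsmodels.api as sm

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])

    model = sm.OLS(y, sm.add_constant(x)).fit()
    print(model.params)      # intercept and slope
    print(model.rsquared)    # strength of the observed relationship only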
Design of Experiments
Types of Experiments          Screening   Optimization   Comparison   Robust Design
Full Factorial                Medium      Medium         High         Medium
Fractional Factorial          High        Low            Medium       Low
Response Surface
  Methodology (RSM)           Low         High           Medium       High
Plackett-Burman               High        Low            Low          Low
45
The table above lists four types of experiments and the degree of suitability (High,
Medium, or Low) for each experimental objective
Screening and Comparison experiments are suited for use in the DMAIC Analyze
phase
46
Analysis and Interpretation of Factorial Experiments

(Flowchart: Step 1. View the Data → Step 2. Create the Model → Step 3. Fit the Model →
Step 4. Perform Residual Diagnostics → Step 5. Transformation Required? (if Yes, refit) →
Step 6. Reduce Model? (if Yes, refit) → Step 7. Choose Model → Step 8. Interpret Chosen
Model → Step 9. Stop Experimentation? If Yes, Make Confirmation Runs; if No, Run RSM.)
Tips for Designed Experiments
Plan Experiment (Use Engineering and
Statistical Knowledge)
Objective
Selection of Responses & Input
Variables (Operating Range, Levels,
Interactions etc.)
Blocking
Replication
Don't forget Center Points!

Conduct Experiment
Randomization
Data collection and Comments
Statistical Analysis

Analyze Experiment
Sparsity of Effects
Statistical Model
Residual Diagnostics

Interpret Results
Results match with engineering
intuition
Confidence Interval on Predictions
Confirmation Tests
47
One Tip on How NOT to Design an Experiment
A Designed Experiment is NOT a Retrospective or Observational study

The variables and variable levels are deliberately manipulated, in randomized run
order

A DOE cannot be retro-fitted to data collected retrospectively or through passive
observation
48
References
Montgomery, D. C. (2009), Design and Analysis of Experiments, 7th edition, Wiley,
New York.

Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition,
Wiley, New York.

Montgomery, D. C. and Runger, G. C. (2011), Applied Statistics and Probability for
Engineers, 5th edition, Wiley, New York.
Upcoming
Analyze dataset is posted on Blackboard (both MINITAB and Excel)

Read two case studies posted on Blackboard
Goodman et al, Six Sigma Forum Magazine, November 2007, When Project
Termination is the Beginning
Tong et al, Intl Journal AMT, January 2004, A DMAIC approach to printed circuit
board quality improvement


How to contact me
E-mail: c.jennings@asu.edu
Cell: 602-463-5134

50
51
IEE 581 Six-Sigma Methodology
DMAIC The Analyze Phase
Fall 2012 Class 7
Cheryl L. Jennings, PhD, MBB
c.jennings@asu.edu
1
Fisher 1 in 20
Why do we often use α = 0.05 as the significance level?
http://psychclassics.asu.edu/Fisher/Methods/chap3.htm, Statistical Methods for
Research Workers By Ronald A. Fisher (1925), Chapter III, Distributions
we can find what fraction of the total population has a larger deviation; or, in other
words, what is the probability that a value so distributed, chosen at random, shall exceed a
given deviation. Tables I. and II. have been constructed to show the deviations
corresponding to different values of this probability. The rapidity with which the probability
falls off as the deviation increases is well shown in these tables. A deviation exceeding the
standard deviation occurs about once in three trials. Twice the standard deviation is
exceeded only about once in 22 trials, thrice the standard deviation only once in 370 trials,
while Table II. shows that to exceed the standard deviation sixfold would need [p. 47] nearly
a thousand million trials. The value for which P =.05, or 1 in 20, is 1.96 or nearly 2 ; it is
convenient to take this point as a limit in judging whether a deviation is to be considered
significant or not. Deviations exceeding twice the standard deviation are thus formally
regarded as significant. Using this criterion, we should be led to follow up a negative
result only once in 22 trials, even if the statistics are the only guide available. Small effects
would still escape notice if the data were insufficiently numerous to bring them out, but
no lowering of the standard of significance would meet this difficulty.
2
How robust is the t-test to the normality assumption?
One assumption for using the t-test for means is that the data is normally distributed
While the test is somewhat robust to this assumption, consider the test statistic
calculation:

t = (X̄ − μ₀) / (S / √n)

Two key things about this statistic:
When sampling from the normal distribution, X̄ and S are independent
The denominator is based on S², where (n − 1)S²/σ² ~ χ²(n − 1)

Let's look at an example:
Consider a cycle time problem, say the time it takes to process a loan from receipt
of application to wiring of funds. Cycle times are often exponentially distributed.
Select a random sample of ten loans and test the hypothesis that the mean cycle
time is 10 days

To study the impact of cycle time distribution on the t-test statistic, randomly
generate 50 samples of 10 loans each, from an exponential distribution with a
mean of 10 days

3
4
Recall that for an exponential distribution, μ = σ, so clearly the
independence assumption (of X̄ and S) is violated
Histograms of the 50 samples show
the skewness of cycle time
A histogram of the 50 t statistics is
clearly skewed in comparison to a t
distribution with 9 degrees of
freedom
Using p-values based on the t
distribution could lead to erroneous
conclusions

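A sketch of the simulation just described, in Python: 50 samples of n = 10 from an exponential distribution with mean 10, each tested against H0: mu = 10.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    t_stats = []
    for _ in range(50):
        x = rng.exponential(scale=10.0, size=10)
        t = (x.mean() - 10.0) / (x.std(ddof=1) / np.sqrt(10))
        t_stats.append(t)

    # Compare with the t distribution with 9 df: the simulated statistics are
    # noticeably skewed, so t-based P-values would be unreliable here
    print(np.percentile(t_stats, [2.5, 50, 97.5]))
    print(stats.t.ppf([0.025, 0.5, 0.975], df=9))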
What if the distribution is not normal?
Comparative methods discussed were based on assumption of random sample from
a normal distribution

Most of these procedures are relatively insensitive to moderate departures from
normality

Options for more severe departures from normality are
1. Transform the data to normal, for example using logarithm, square root or a
reciprocal, and use a method based on the normal distribution
See Montgomery, DOE, Selecting a Transformation: The Box-Cox Method
2. Utilize a nonparametric or distribution-free approach
5
Non-Parametric Inference Methods
We will look at two types of tests, tests based on Signs and tests based on Ranks
Distribution-free, or no underlying parametric distribution assumption
However each test does have other assumptions
Why not always use nonparametric methods?
In general, nonparametric procedures do not use all the information in a sample,
and as a result are less efficient, requiring larger sample sizes to achieve the same
power as the appropriate parametric procedure


6
Comparison Type      Analysis                         Tests
Single sample /      one-to-standard (fixed value)    Sign, Wilcoxon Signed-Rank
Paired two-sample
Two samples          one-to-one                       Wilcoxon Rank Sum (also called the Mann-Whitney test)
Multiple samples     multiple                         Kruskal-Wallis (use of ranks)
The Sign Test for One Sample
7
* From Montgomery, D. C. and Runger, G. C. (2011), Applied Statistics and Probability
for Engineers, 5th edition, Section 9-9 Nonparametric Procedures, Wiley, New York
8
9
10
Sign Test Example
11
Calculating P-value in Minitab
12
P-value = 2 x Pr(R+ ≥ 14) = 2 x [1 − Pr(R+ ≤ 13)] = 2 x (1 − 0.942341) = 0.1153
13
Or, Minitab 1-Sample Sign Test
14
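The P-value above comes from the Binomial(n, 0.5) distribution; a sketch with scipy reproducing it for r+ = 14 plus signs out of n = 20 (binomtest requires a recent scipy):

    from scipy.stats import binomtest, binom

    res = binomtest(14, n=20, p=0.5)            # two-sided by default
    print(res.pvalue)                           # ~0.1153
    print(2 * (1 - binom.cdf(13, 20, 0.5)))     # the same calculation by hand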
15
16
17
18
19
20
The Wilcoxon Signed-Rank Test for One Sample
21
* From Montgomery, D. C. and Runger, G. C. (2011), Applied Statistics and Probability
for Engineers, 5th edition, Section 9-9 Nonparametric Procedures, Wiley, New York
22
23
Wilcoxon Signed-Rank Example
24
Minitab 1-Sample Wilcoxon
25
Uses maximum sum of ranks instead of minimum
26
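A one-sample Wilcoxon signed-rank sketch with scipy, applied to the differences from the hypothesized median; the data and hypothesized value are illustrative.

    import numpy as np
    from scipy.stats import wilcoxon

    x = np.array([4.2, 5.1, 3.8, 4.9, 5.6, 4.0, 5.3, 4.7, 5.0, 4.4])
    stat, p_value = wilcoxon(x - 4.0)           # H0: median = 4.0
    print(f"W = {stat}, P-value = {p_value:.4f}")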
Comparison to the t-Test
27
Median Tests for Paired Samples
Both the sign test and the Wilcoxon signed-rank test can be applied to paired
observations.
In the case of the sign test, the null hypothesis is that the median of the differences
is equal to zero.
The Wilcoxon signed-rank test is for the null hypothesis that the mean of the
differences is equal to zero.
The procedures are applied to the observed differences as described previously.
28
29
* From Montgomery, D. C. and Runger, G. C. (2009), Applied Statistics and Probability
for Engineers, 4th edition, Wiley, New York
30
EXAMPLE 15-3
An automotive engineer is investigating two different types of metering devices for an
electronic fuel injection system to determine whether they differ in their fuel mileage
performance. The system is installed on 12 different cars and a test is run with each
metering device on each car. The observed fuel mileage performance data, corresponding
differences, and their signs are shown in Table 15-2. We will use the sign test to
determine whether the median fuel mileage performance is the same for both devices,
using α = 0.05.

1. Parameter of Interest: The parameters of interest are the median fuel mileage
performance for the two metering devices.
2. Null Hypothesis: H0: Median1 = Median2, or equivalently, H0: MedianD = 0
3. Alternative Hypothesis: H1: Median1 ≠ Median2, or equivalently, H1: MedianD ≠ 0
4. Test Statistic: We will use Appendix Table VIII for the test, so the test statistic
is r = min(r+, r−).
5. Reject H0 if: For α = 0.05, n = 12, two-sided test, Table VIII gives the critical
value as r*(0.05) = 2. We will reject H0 in favor of H1 if r ≤ 2.
6. Computations: Table 15-2 shows the differences and their signs, with r+ = 8 and
r− = 4. So r = min(8, 4) = 4.
7. Conclusion: Since r = 4 is not less than or equal to the critical value
r*(0.05) = 2, we cannot reject the null hypothesis that the two devices provide the
same median fuel mileage performance.

* From Montgomery, D. C. and Runger, G. C. (2009), Applied Statistics and Probability
for Engineers, 4th edition, Wiley, New York
Minitab 1-Sample Sign with Paired Data
31
Minitab 1-Sample Wilcoxon with Paired Data
32
The Wilcoxon Rank-Sum Test for Two Samples
33
* From Montgomery, D. C. and Runger, G. C. (2011), Applied Statistics and Probability
for Engineers, 5th edition, Section 9-9 Nonparametric Procedures, Wiley, New York
34
35
36
Wilcoxon Rank-Sum Example
37
38
Minitab Mann-Whitney (Wilcoxon Rank Sum)
39
40
41
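A Wilcoxon rank-sum (Mann-Whitney) sketch with scipy for two independent samples of illustrative data:

    import numpy as np
    from scipy.stats import mannwhitneyu

    x1 = np.array([11.2, 12.4, 10.8, 11.9, 12.1, 11.5])
    x2 = np.array([13.0, 12.8, 13.4, 12.6, 13.1, 12.9])
    u_stat, p_value = mannwhitneyu(x1, x2, alternative="two-sided")
    print(f"U = {u_stat}, P-value = {p_value:.4f}")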
More Than Two Means
We've looked at methods that can be used for experiments with two levels: the t-test
and the Wilcoxon Rank Sum test
For more than two levels, ANOVA can be used for comparing means
The single-factor analysis of variance model for comparing a population means
(a = number of factor levels) is

Y_ij = μ + τ_i + ε_ij,   i = 1, 2, ..., a;   j = 1, 2, ..., n_i

In this model the error terms ε_ij are assumed to be normally and independently
distributed with mean zero and variance σ². The assumption of normality leads
directly to the F-test.
The Kruskal-Wallis test is a nonparametric alternative to the F-test; it requires only
that the ε_ij have the same continuous distribution for all factor levels i = 1, 2, ..., a.
42
* From Montgomery, D. C. and Runger, G. C. (2009), Applied Statistics and Probability
for Engineers, 4th edition, Wiley, New York
The Kruskal-Wallis Test for Multiple Samples


43
44
45
46
Kruskal-Wallis Example
47
48
Minitab Kruskal-Wallis
49
Not adjusted for ties
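A Kruskal-Wallis sketch with scipy on illustrative data; unlike the Minitab output noted above, scipy's H statistic is adjusted for ties.

    from scipy.stats import kruskal

    level_a = [24.1, 25.3, 24.8, 25.0]
    level_b = [26.2, 26.8, 25.9, 26.5]
    level_c = [24.9, 25.1, 25.4, 24.7]
    h_stat, p_value = kruskal(level_a, level_b, level_c)   # H0: identical distributions
    print(f"H = {h_stat:.3f}, P-value = {p_value:.4f}")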
The Rank Transformation
50
51
Use Nonparametric Methods?
Most of the comparative methods based on the normal distribution are relatively
insensitive to moderate departures from normality
Two exceptions are the χ² and F tests for variances

If data distribution has a severe departure from normality
Transform data to a normal distribution
Use the nonparametric sign tests
BEWARE assumptions for using other nonparametric tests
52
Analyze Tools
Identify Potential Root Causes
Cause & Effect Diagram
FMEA
Cause & Effect Matrix
Histogram
Scatter Plot
Box Plots
Pareto Diagram
Statistical Process Control
Validate Potential Root Causes
Comparative Methods: Hypothesis
tests, p-values, confidence intervals
Source of Variation Studies
ANOVA
Regression Analysis
Screening Experiments
Nonparametric Methods

Can you
Formulate hypotheses?
Calculate the test statistic?
Find the P-value?
Interpret Minitab output?
Draw conclusions?

53
Analyze Opportunity
54
3.0 Analyze Opportunity
Identify and Validate
Root Causes
Basic Tools
Advanced Tools
Inputs
Input, Process, and Output
Indicators
Operational Definitions
Data Collection Formats and
Sampling Plans
Measurement System Capability
Baseline Performance Metrics
Process Capability
Cost of Poor Quality (COPQ)
Time
Yield
Other
Productive Team Atmosphere
Outputs
Data Analyses
Validated Root Causes
Potential Solutions
The DMAIC Process
* From Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York
References
Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York.

Montgomery, D. C. (2009), Design and Analysis of Experiments, 7th edition, Wiley, New York.

Montgomery, D. C. and Runger, G. C. (2011), Applied Statistics and Probability for Engineers, 5th edition, Wiley, New York.

Montgomery, D. C. and Runger, G. C. (2009), Applied Statistics and Probability for Engineers, 4th edition, Wiley, New York.

Upcoming
Next will be Improve and Project Planning classes with Dr. Zenzen

Following week the Control classes with me, on campus

How to contact me
E-mail: c.jennings@asu.edu
Cell: 602-463-5134

57
58
This document and all information and expression contained herein are the property of ASU Department of Industrial Engineering, and may not, in whole or
in part, be used, duplicated, or disclosed for any purpose without prior written permission of ASU Department of Industrial Engineering. All rights reserved.
ASU Department of Industrial Engineering 2004
IEE 581 Six Sigma
Methodology
Lecture 9/10 - Improve
Instructor:
Dr. Fran Zenzen
General Dynamics C4 Systems
Fran.zenzen@gdc4s.com
ASU Department of Industrial Engineering 2004
Agenda
Introductions
Statistical Thinking
Improve (DMAIC)
Cost of Quality Review
Wrap-up
ASU Department of Industrial Engineering 2004
During the session take the time to write down what
you plan to:
Implement on your current or future work activities
Stop on your current or future work activities
Continue to implement on your current or future work
activities
Actions for the Participant
(Worksheet: numbered blanks 1-3 under each of the three headings above)
ASU Department of Industrial Engineering 2004
The DMAIC Process
* From Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York
ASU Department of Industrial Engineering 2004
Case Study II Enhanced Profitability
Ford Motor Co. admitted that quality snafus cost it more
than $1 billion in profits in 2000 in the Jan. 12 issue of the
Wall Street Journal. In the same issue, the newspaper
reported that Bridgestone Corp.'s CEO and president,
Yoichiro Kaizaki, had resigned along with three executive
vice presidents. Shigeo Watanabe, a senior vice president,
replaced Kaizaki. Both companies were affected by reports
of rollovers of Ford Explorer SUVs equipped with
Bridgestone/Firestone tires.
Would you buy a Ford Explorer with Bridgestone/Firestone tires?
ASU Department of Industrial Engineering 2004
Case Study II
What could have prevented this from happening? Validation design reviews; Enhanced testing; More inspection; Analysis of early failures
What were the impacts? Recall/Announcement; Lost Customers; Lost Revenue; Lost Jobs
Who was responsible? Everyone involved with the product was responsible
ASU Department of Industrial Engineering 2004
Improve Tools
Two Broad Categories
Simple Data Collection/Analysis Tools to Confirm
Improvements
Specialized Tools Targeted at Specific Types of
Process Problems
ASU Department of Industrial Engineering 2004
The Tools of Six Sigma
The Simple Tools
Control Charts
Pareto Charts
Cause / Effect diagrams (Fishbone diagrams)
Run Charts
Histograms
Scatter Diagrams
Flow Charts
Adapted from Putting Total Quality Management to Work,
Marshall Sashkin & Kenneth Kiser
ASU Department of Industrial Engineering 2004
CAR Data Reporting

(Bar chart: LATE CARs BY RESPONSIBLE DEPARTMENT; Number of CARs, Late vs. Total)

Dept:   ENG  MFG  QA  CM  PM  OT  SUB  CON  SM
Late:     9   11   1   0   0   0    0    0   0
Total:   47   31   5  10   5   1    0    0   2

Provides Continuous Improvement
CARs are our friends!
ASU Department of Industrial Engineering 2004
The Tools of Six Sigma
Some More Powerful Tools
Advanced statistical tools
Correlation
Regression Linear and Multivariate
Design of experiments Analysis of Variance
FMEA (Failure Mode and Effects Analysis)
Simulation
Project Management for large scale process
improvements
Design for Six Sigma
Adapted from Putting Total Quality Management to Work
Marshall Sashkin & Kenneth Kiser
ASU Department of Industrial Engineering 2004
Statistical Thinking is a philosophy of learning
and action based on the following fundamental
principles:
All work occurs in a system of interconnected
processes
Variation exists in all processes
Understanding and reducing variation are
keys to success
Glossary of Statistical Terms - Quality Press, 1996
Statistical Thinking
ASU Department of Industrial Engineering 2004
Use of Statistical Thinking
Where we're
headed
Managerial processes
to guide us
Where the work
gets done
Strategic
Managerial
Operational
Executives
Managers
Workers
ASU Department of Industrial Engineering 2004
Overall Approach
1. Practical Problem
2. Statistical Problem:  y = f(x₁, x₂, ..., x_k)
3. Statistical Solution
4. Practical Solution
ASU Department of Industrial Engineering 2004
Organizational
Impact
The Way We Think
Time
Organizational
Improvement
Product & Process
Improvement
Problem
Solving
Source: Snee
Expanding World Of Statistics
ASU Department of Industrial Engineering 2004
Case Study I
The following information comes from
http://www.space.com/news/spacestation/iss_fraud_010726.html:
A NASA subcontractor has been charged with allowing potentially dangerous parts to be installed on
the International Space Station. Space agency officials, working with the US government and the
FBI, say the company was paid for component testing work that was never carried out. The news has
led to an urgent safety review of the facility. Test Lab, N.A., of Woburn, Massachusetts
(CAGE code - 8Z235), and its president have been indicted on 36 counts of mail fraud. The charges
allege the company defrauded its customers by certifying tests had been done and then billing them.
All the parts in question are on the Russian section of the space station. It is claimed that between
10% and 20% of the parts received by Test Lab between 1995 and October 1999 were not tested
properly, even though the company received at least $300,000. NASA's William Miller told
Space.com: "I think it's a serious safety risk any time you put parts into any space vehicle that weren't
tested as you contracted. "NASA tests these parts for a reason. It's very expensive to go back to
space station and replace parts."
ASU Department of Industrial Engineering 2004
Case Study I
What could have prevented this from happening?
What were the impacts?
Who was responsible?
What are the traits of a Six Sigma Blackbelt? Proactive; Problem solver; Value added; Flexible; Facilitator; Planning & execution; Customer advocate
ASU Department of Industrial Engineering 2004
Process Characterization Model
Phase 1: Process Definition
Understand the process, variables
and characteristics.
Phase 2: Process Capability
Establish current level of
performance of the process.
Phase 3: Process
Optimization
Determine which variables drive
the output of the process and
find their optimum levels.
Phase 4: Process Control
Monitor the process performance
(Diagram: cycle of 1 Process Definition → 2 Process Capability → 3 Process Optimization → 4 Process Control)
ASU Department of Industrial Engineering 2004
Improve Tools
Two Broad Categories
Simple Data Collection/Analysis Tools to Confirm
Improvements
Specialized Tools Targeted at Specific Types of
Process Problems
ASU Department of Industrial Engineering 2004
Specialized Tools
Cost of Quality Measures
Obtain management commitment and support
Establish an installation team
Select an organizational segment as a prototype
Obtain cooperation and support of users and suppliers of information
Define quality costs and quality cost categories
Identify quality costs within each category
Determine the sources of quality cost information
Design quality cost reports and graphs
Establish procedures to collect quality cost information
Collect data, prepare and distribute reports
Eliminate bugs from the system
Expand the system
* From Evans and Lindsay. (1996), The Management and Control of Quality 3rd ed, West, New York
ASU Department of Industrial Engineering 2004
Cost of Quality
Planning/Prevention Costs
Appraisal Costs
Internal Failure Costs
External Failure Costs
ASU Department of Industrial Engineering 2004
Cost of Quality
ASU Department of Industrial Engineering 2004
Specialized Tools
Mistake Proofing
Started in manufacturing at Model T plant (Ford)
Can be applied to any human process
Latest version is poka-yoke (Japan)
Example: Grinding Operation
Basic
Install an optical gauging device to automatically measure and
check devices
If found defective, automatically place in rework bin
Eliminates problem parts so they don't flow to downstream operations
Does not eliminate scrap and rework
More advanced
Install a device during the grinding to ensure it is ground
correctly
ASU Department of Industrial Engineering 2004
Mistake Proofing
Focus on preventing problems from
happening not just keeping them
from reaching the customer
ASU Department of Industrial Engineering 2004
Kaizen
Means continuous improvement
Modeled after quality circles (team-based continuous improvement); started at Toyota
Kaizen is an intensive, rapid improvement
Training: Team receives specialized training for the event
Discovery: Guided Tours of Area
Analysis: Gather Data on Current Situation
Brainstorming: use of simple statistical tools
Implementation: Team divides into sub-teams to address
prioritized issues
Standardization: Create procedures
Results: Document Results
Follow-up: Implement other solutions not yet completed
Parking Lot: Park out of scope ideas
Presentation: Present to local management
ASU Department of Industrial Engineering 2004
Kaizen
Generates considerable momentum and
organizational energy
Great for quick fix but must be used continuously to
sustain improvements
Usually supplements an on-going, larger project
Blackbelt
Define and organize the Kaizen events for larger project
Present key data
Must be used judiciously
ASU Department of Industrial Engineering 2004
Four-Step Rapid Setup Method
Step 1: Separate Internal and External Setup
Observe process and categorize as Internal or External setup
work
Ask, "Can this step be accomplished only with the machine shut down, or can we
do this while the machine is working on the previous batch?"
Internal Setup: Work accomplished when machine is not
running
External Setup: Work accomplished when machine is
operational
Step 2: Convert Internal Setup to External Setup
Try to move actions so they are performed while the machine is running
Step 3: Streamline Internal Setup
Minimize necessary Internal Setup
Step 4: Eliminate Adjustments
ASU Department of Industrial Engineering 2004
Queuing Methods for Reducing Congestion and
Delay Due to Time Variation
Three Principal techniques for reducing congestion
Pooling
Prepared for irregular but certain overloads
Route excess work to other workstations
Must trade off extra equipment versus idle time
Triaging
Work problems in groups
Easy and small
Real problems
Catastrophic problems
Have techniques, staff, and processes for handling each category
Backup Capacity
Better for sustained delay issues
ASU Department of Industrial Engineering 2004
Total Productive Maintenance (TPM)
35% of machine capacity is lost to downtime
Reduce downtime
Implement TPM
See Figure 11-19, page 216 (George)
ASU Department of Industrial Engineering 2004
Design of Experiments (DOE)
When true causes do not jump out, you might try
Trial-and-error
One-Factor-at-a-Time
Design of Experiments (DOE) is a systematic problem
solving and process improvement approach that uses
statistical principles and scientific experimentation
methods
ASU Department of Industrial Engineering 2004
DOE
DOE Consideration
All factors tested simultaneously
Specific pattern of Test Runs
Different DOE designs and tools available
Once test runs are completed, analysis completed
Positive Relationships
Negative Relationships
No Relationship
Interactions
Can be used for screening
ASU Department of Industrial Engineering 2004
DOE Applications
Robust Design
Determine input values in an existing process to optimize
results and minimize costs
Response Surface Methodology
ASU Department of Industrial Engineering 2004
Designed Experiments
                 Problem Solving          Screening                  Robust Design               Optimization
Purpose          Specific problem         To I.D. inputs that        Output response             To determine input settings
                                          affect outputs             sensitivity to inputs       for optimal results
# of Inputs      Greater than 4           5 or more                  2 - 32                      2 - 5
Type             B vs C,                  Fractional Factorial,      Fractional Factorials,      Full Factorials,
                 Fractional Factorial,    Plackett-Burman            Full Factorials,            Response Surface
                 Full Factorial                                      Response Surface
Assumptions      Limited                  Simple linear              Limited; noise variables    Typically quadratic models
                                          normal models              can be manipulated
ASU Department of Industrial Engineering 2004
Montgomerys Theorems on Designed
Experiments
Theorem 1: If something can go wrong
in conducting an experiment, it will.
ASU Department of Industrial Engineering 2004
Montgomerys Theorems on Designed
Experiments
Theorem 2: The probability of
successfully completing an experiment
is inversely proportional to the number
of runs.
ASU Department of Industrial Engineering 2004
Montgomerys Theorems on Designed
Experiments
Theorem 3: Never let one person design
and conduct and experiment alone,
particularly if that person is a subject-
matter expert in the field of study.
ASU Department of Industrial Engineering 2004
Montgomerys Theorems on Designed
Experiments
Theorem 4: All experiments are
designed experiments; some of them are
designed well, and some of them are
designed really badly. The badly
designed ones often tell you nothing.
ASU Department of Industrial Engineering 2004
Montgomerys Theorems on Designed
Experiments
Theorem 5: About 80 percent of your
success in conducting a designed
experiment results directly from how
well you do the pre-experimental
planning.
ASU Department of Industrial Engineering 2004
Montgomerys Theorems on Designed
Experiments
Theorem 6: It is impossible to
overestimate the logistical complexities
associated with running an experiment
in a complex setting, such as a factory
or a plant.
ASU Department of Industrial Engineering 2004
Montgomerys Theorems on Designed
Experiments
Finally, without good experimental
design, we often end up doing PARC
analysis.
Planning After the Research is Complete
ASU Department of Industrial Engineering 2004
A reminder
ASU Department of Industrial Engineering 2004
Know it, accept it, learn to deal with it!
Like it or not, variation is everywhere!
Determine if you can improve the situation
Your drive time to work/school each day
The quantity of production/software
development each shift/day/week/month
Departure time of your plane
ASU Department of Industrial Engineering 2004
On Your Way To Work or School, can this be
improved?
What causes variation in your trip time?
Stop Signs
School bus
Red lights
People in crosswalks
The gas tanker truck that crashed into the bridge on I-10
Weather
Example:
ASU Department of Industrial Engineering 2004
Evaluation Based on Data
THE KEY IS TO ASK WHY!!!
Why is the material from the production
equipment/software development so
inconsistent?
Why do we constantly firefight?
Why does our daily output vary so much?
Why is the cost higher than expected?
Why does the schedule time allocated seem inadequate?
ASU Department of Industrial Engineering 2004
Improvement Goals: Variation and Targets
Variation can be thought of as:
1. Deviation around the overall average, or
2. Deviation of the overall average from a desired target.
(Diagram: distributions labeled Ave and Target vs. Ave)
ASU Department of Industrial Engineering 2004
(Diagram: reduced deviation around the average, with the average on target)
ASU Department of Industrial Engineering 2004
Case Study III
The Wall Street Journal reported that Boeing was
adding 370 jobs to its inspection system because of
107 specific problems at seven Washington state
and Oregon facilities ("FAA's Audit Faults
Procedures at Boeing," Oct. 31, p. A7). The jet
maker's problems included inadequate
inspections and work instructions, shortcomings in
inspection of parts from certain suppliers and lack
of awareness of design procedures among
assembly workers. None of the problems were said
to threaten passenger safety.
Who decides the appropriate inspection level? Is this the best method?
Do you want to fly on a Boeing aircraft?
ASU Department of Industrial Engineering 2004
Case Study III
What could have prevented this from happening? Closed-loop reporting; Updates to documented procedures when holes are found; Periodic auditing, including suppliers
What were the impacts? Increased Rework; Employee Dissatisfaction; Inconsistent Quality
Who was responsible?
ASU Department of Industrial Engineering 2004
Types of Variation
Common Cause
Special Cause
Structural
ASU Department of Industrial Engineering 2004
Three Ways to Reduce Variation
and Improve Quality
Control the Process:
Eliminate Special
Cause Variation
Improve the System:
Reduce Common
Cause Variation
Anticipate Variation:
Design Robust
Processes and Products
Quality
Improvement
ASU Department of Industrial Engineering 2004
Robustness of Product and Process Design
Anticipate variation and reduce its effects
Design the process or product to be insensitive to
variation
Reduce the effects of uncontrollable variation in:
Product design
Process design
Business Practices
A robust process or product is more likely to
perform as expected
Note: 100% inspection cannot provide robustness
ASU Department of Industrial Engineering 2004
Process Robustness Analysis
Identify Those Uncontrollable Factors that
Affect Process Performance
Weather
Customer Use of Products
Employee Knowledge, Skills, Experience,
Work Habits
Age of Equipment
Design the Process to be Insensitive to the
Uncontrollable Variations in the Factors
ASU Department of Industrial Engineering 2004
Example
ASU Department of Industrial Engineering 2004
Customer Survey Results
Customer Survey Identified Opportunities
Communications
Technical Issues
Senior Management
Cost/Schedule Estimating and
Execution
GD does not always understand and/or
have the same objectives as the
customer
GD is reactive not proactive to problems
Strategic Planning
18 customer responses
ASU Department of Industrial Engineering 2004
72% perceive we need better communication
67% perceive we have technical issues
67% perceive issues with Senior Management
56% perceive that we do not have the same objectives as them
56% perceive issues in cost/schedule estimating and execution
50% perceive that we are reactive not proactive to problems
28% perceive issues with strategic planning
Major Category                                                    Percent of response   # Responded
Communication                                                     72%                   13
Technical Issues                                                  67%                   12
Sr. Management                                                    67%                   12
Cost/Schedule Estimating & Execution needs improvement            56%                   10
GD does not always understand and/or have the same
  objectives as the customer                                      56%                   10
GD is reactive to problems, not proactive                         50%                    9
Strategic Planning                                                28%                    5

Customer Survey Results
ASU Department of Industrial Engineering 2004
Customer Response Plan
Program managers have been informed of their programs' survey results and are
working specific tactical opportunities for improving the customers' perceptions.
Business champions have been assigned to each major opportunity category to
address strategic opportunities for improving the customers' perceptions of us:
1. Communication
2. Technical Issues
3. Senior Management
4. Cost/Schedule/Execution
5. Common Objectives
6. Reactive vs. Proactive
7. Strategic Planning
IEE581 Six Sigma Methodology
DMAIC Control Phase 1
Fall 2012 9/24/2012
Cheryl L. Jennings, PhD, MBB
c.jennings@asu.edu
1
2
Mr. Pareto Head, Quality Progress, August 2011, p. 17
(http://asq.org/quality-progress/2011/08/mr-pareto-head.html)
The DMAIC Process
From Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York
3
Main Activities of Control Phase
Institutionalize the final solution to sustain the gains
Develop and implement process control system
Integrate solution into daily work processes
Fully deploy final solution
Verify results of process improvement and associated financial impact
Conduct project closure activities including final management review

Execute on replication and standardization opportunities

4
What Tools Will You Use?
FMEA
Process Control Plan
SPC Chart or other monitoring scheme
Mistake-proofing
Out-of-Control Action Plans
Measurement System Control Plan
Process Performance Metrics
5
Validation of FMEA
Validate that process improvements reduce RPN
Severity remains the same
Has Occurrence been reduced through project work?
Has Detection been improved?

FMEA for an Invoicing Process
6
Process Control System
The goal of an effective process control system is to ensure that processes operate
consistently on target with minimum variation

A comprehensive system includes
A Process Control Plan detailing characteristics, sampling plan, monitoring
scheme, and out-of-control action plans
A stable and capable measurement system
Management review of performance
7
Lesson Learned
We learned that control plans are the lifeblood to value creation and the
realization of tangible benefits. The lack of a control plan with clear accountability is
the biggest cause of post-project failure, and the biggest challenge to a project's
credibility.
Milton H. Jones, Global Quality & Productivity Executive, Bank of America.
Remarks to International Society of Six Sigma Professionals Leadership
Conference Banking on Six Sigma: Taking the Road Less Traveled to Quality
Leadership in the Financial Services Industry; June 28, 2005; Scottsdale, AZ;
www.bankofamerica.com/newsroom

8
Process Control Plan
A Process Control Plan identifies important processes used to produce a product or
service, the significant characteristics associated with each process, and how to
control each of those significant characteristics.

A Process Control Plan consists of the following:
Significant characteristics
Measurement method
Sample size and frequency
Control method
Reaction plan
9
Control Plan Format
10
What types of characteristics should be chosen? Outputs? Inputs?
How are specifications obtained? From FMEA
Control Plan Format
11
(Callout numbers 1-4 mark the four elements detailed in the slides below)
Control Plan Example
12
1 Measurement Technique
May range from manual review to sophisticated automated measuring systems

Be specific about who, what, how, etc.

If multiple measurement systems are to be used, be sure to list ALL systems

If multiple methods may be used on the same system, be sure to specify which
method
13
2 Sample Size and Frequency
Driven by
Control method
Type I error, Probability of concluding out of control when in control, or
false alarms
Type II error, Probability of concluding in control when out of control
Economics of sampling effort
Rate of production

For SPC and acceptance sampling, operating characteristic curves can be useful in
selecting sample sizes
14
3 Control Methods
Statistical Process Control
Mistake-proofing
Acceptance Sampling
Customer Call Monitoring for Quality Assurance
Process Log with Check Sheet or other Data Collection Tool
Periodic Process Audits
15
Statistical Process Control Charts
The SPC chart is a plot over time of a sample statistic of a quality characteristic
Used to determine whether a process is in statistical control experiencing only
chance (common) causes of variation or is out of control and experiencing
special (assignable) causes of variation
Upper and lower control limits are set to encompass most of the process
variation when only common cause variation is present
For Shewhart control charts, limits are typically 3 sigma from the mean
Plot points beyond the control limits, as well as patterns of plot points within the
limits, signal potential out-of-control behavior

Phase I Process Taming

Phase II Process Monitoring
16
Guide to Univariate Process Monitoring and Control*
17
* From Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York
Common Phase II Problems
1. Even after recalculating control limits, the process looks too good to be true.
Variability estimate for Shewhart charts assumes random error is only source of
variability. If true, within-sample variability leads to a good estimate of variability
in the mean. If not, limits may be too wide or too narrow.
Solution: An I-MR-R/S chart
2. The occurrence of a single defective unit in a sample generates an out-of-control
signal.
The sample size should be sufficiently large to have a high probability of at least
one defective unit, an issue with 1000 or fewer occurrences per million.
Solution: Transform the time-between-events data to an approximately normal
distribution allowing use of the I-MR control chart and special cause tests.
3. In addition to flagging large shifts, there is concern for small shifts in a particularly
unfavorable direction.
Even with tests for special causes, a Shewhart chart may be slow to detect shifts
of 0.25 to 1 sigma
Solution: A CUSUM can be designed to quickly detect specific sizes of shifts
18
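A minimal one-sided tabular CUSUM sketch in Python; the textbook-style choices k = 0.5 sigma (reference value) and h = 5 sigma (decision interval), and the simulated 0.8-sigma shift, are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    x = np.concatenate([rng.normal(10, 1, 20), rng.normal(10.8, 1, 20)])  # shift at obs 21

    mu0, sigma = 10.0, 1.0
    k, h = 0.5 * sigma, 5.0 * sigma     # reference value and decision interval

    c_plus = 0.0
    for i, xi in enumerate(x):
        c_plus = max(0.0, xi - (mu0 + k) + c_plus)   # upper CUSUM recursion
        if c_plus > h:
            print(f"Upward shift signaled at observation {i + 1}")
            break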
Mistake Proofing Another Control Method
Also known as poka yoke (pronounced POH-kah YOH-kay)
Invented by Shigeo Shingo in the 1960s as an approach to achieve zero defects
Defects result from a mistake reaching the customer, and defects are entirely
avoidable
Essential idea of poka yoke is to design a process so that mistakes are impossible
(prevention), or at least easily detected and corrected (detection)
Applicable to both manufacturing and transactional processes
19
Mistake Proofing Example 1*
Say we're at the signing of a bank loan by a lucky couple closing the mortgage on
their first home

The lucky couple picks up the pen to sign, but when they depress the top of the pen
to extend the writing part it malfunctions because the spring is missing.

A poka yoke could have prevented this situation. If all pieces of the pen were
presented to the assembler in a dish, a simple poka yoke would be for the
assembler to visually inspect the dish for any remaining parts once the pen was
assembled.
20
* Simon, Kerri. "Poka Yoke Mistake Proofing."
http://www.isixsigma.com/library/content/c020128a.asp
Mistake Proofing Example 2*
The lucky couple bypasses the signature part of the process because their bank is
really high-technology focused. In fact, they signed a writing pad and their signature
was recorded electronically. The bank also needed to collect 4 additional pieces of
information before the entire package of information is sent to the processing
department.

A simple poka yoke to add to this process is to require all fields to be filled in
(including the loanee signature) before allowing the form to be sent to processing.
This prevents the processing department from reviewing an incomplete document,
sending back to the loan department, delaying the processing of paperwork
21
* Simon, Kerri. "Poka Yoke Mistake Proofing."
http://www.isixsigma.com/library/content/c020128a.asp
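A sketch of this required-fields poka yoke in code; the field names and form structure are hypothetical, but the idea carries to any form-processing system.

    REQUIRED_FIELDS = ["name", "address", "loan_amount", "term_years", "signature"]

    def can_submit(form: dict) -> bool:
        """Poka yoke: block incomplete forms before they reach processing."""
        missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
        if missing:
            print("Cannot submit; missing:", ", ".join(missing))
            return False
        return True

    can_submit({"name": "A. Borrower", "address": "123 Main St",
                "loan_amount": 250000, "term_years": 30, "signature": ""})  # blocked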
Mistake Proofing Example 3*
Once the complete paperwork is submitted to the processing department and it is
printed, it then needs to be filed with the city and state. In order for this to occur,
papers need to be filled out (the city and state are not high-technology enabled)
and attached to the form.

A poka yoke used by the city is a simple checksheet at the top of the form. This
allows the person submitting the form to ensure that all additional information and
payments are attached. As in example 2 above, this prevents the city/state from
reviewing an incomplete document, sending back the document to the sender,
delaying the processing of paperwork
22
* Simon, Kerri. "Poka Yoke Mistake Proofing."
http://www.isixsigma.com/library/content/c020128a.asp
4 Out-of-Control Action Plans (OCAPs)
OCAPs, or reaction plans, provide assistance to operations personnel on how to
react to out-of-control signals in a process

An OCAP lists all known defects and conditions that can exist in a process and gives
instructions for both the product and the process
Specific actions are needed to disposition product, including containment
Corrective actions are needed to restore control to process

OCAPs may be manual or automated
Many process monitoring applications have OCAP functionality
Manual OCAPs should be readily accessible in the work area, and included as part
of training
23
OCAP Design
An OCAP contains:
Known defects and conditions
Probable cause of defect
Adjustable inputs
Responsibility for specific actions
Containment action (product)
Corrective action (process)
OCAPs can be formatted as:
A decision tree in a flow chart form
A narrative explaining the diagnostic, containment & corrective actions
An interactive computer dialogue to lead an operator through diagnostics and
remedies
A well-structured OCAP begins with most likely defects and causes
Responsibility should be assigned to minimize downtime while ensuring that
appropriate personnel take action
24
Example of an OCAP
for a Photoresist
Hard Bake Process*
25
* From Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York (Figure 5-6)
Process Control Plan Summary
Identifies important processes, significant characteristics, and control scheme

Includes:
1. Measurement technique
2. Sampling plan
3. Control method
4. OCAP
26
Measurement System Control Plans
In order to evaluate ongoing process stability and capability, need to ensure that the
measurement system remains stable and capable

Periodic Calibration
Compares readings from measurement system with a standard reference
Centers the mean of the distribution of readings on the true or reference value
Requires use of a traceable standard
Periodic monitoring
Typically with a control chart of one or more standards with known values
Measurement System Analysis (MSA)
Should be repeated periodically to ensure that the measurement system remains
accurate, repeatable, reproducible, stable, and linear
As process variability decreases, the measurement system may become incapable
of detecting significant process changes
27
Process Performance Metrics
Process Capability
Cp, Cpk, Pp, Ppk, sigma level, ppm,
DPMO

Process Instability
Instability Index, often measured as
% of out-of-control points

Process Yield, Test Yield, Rolled
Throughput Yield

Measurement System R&R

Customer Defects / Complaints
28
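A process capability sketch computing Cp and Cpk from sample data; the spec limits and simulated measurements are illustrative, and the formulas assume an approximately normal, stable process.

    import numpy as np

    rng = np.random.default_rng(8)
    x = rng.normal(10.1, 0.15, size=100)           # measurements
    lsl, usl = 9.5, 10.5                           # spec limits (illustrative)

    mean, s = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * s)                     # potential capability
    cpk = min(usl - mean, mean - lsl) / (3 * s)    # actual capability (accounts for centering)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")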
Reading / References
Reading assignment for both Control lectures:
BALDRIGE, SIX SIGMA, & ISO: Understanding Your Options, Baldrige National Quality Program, CEO Issue
Sheet, Summer 2002. (http://www.baldrige.nist.gov/PDF_files/Issue_Sheet_SS.pdf)

George, Michael L., Lean Six Sigma, Chapters 10 & 11

McTurk, Laurie and Wilson, Dennis (2002). Presentation on the Instability Index.
(www.pro.enbis.org/members/documentation/Nissanmotorolapres.ppt)

Montgomery, Douglas C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York

NIST-SEMATECH, Engineering Statistics Handbook, (http://www.itl.nist.gov/div898/handbook/index.htm)

Simon, Kerri. Poka Yoke Mistake Proofing. (http://www.isixsigma.com/library/content/c020128a.asp)

Snee, Ronald D. and Roger W. Hoerl, Leading Six Sigma, Chapters 5, 6 & 7

The Toyota System web site, Poka-Yoke examples and definition By Greg, May 12, 2009.
(http://www.thetoyotasystem.com/lean_inventions/poka_yoke-you-can%E2%80%99t-go-wrong.php)

29
IEE581 Six Sigma Methodology
DMAIC The Control Phase-2
Fall 2012 9/27/2012
Cheryl L. Jennings, PhD, MBB
c.jennings@asu.edu
1
More on Project Charters
Useful reference in September 2012 Quality Progress, Pick Your Spots.
(http://asq.org/quality-progress/2012/09/best-practices/pick-your-spots.html)
Requires membership or purchase.
2
3
Updated
Main Activities of Control Phase
Institutionalize the final solution to sustain the gains
Develop and implement process control system
Integrate solution into daily work processes
Fully deploy final solution
Verify results of process improvement and associated financial impact
Conduct project closure activities including final management review

Execute on replication and standardization opportunities

4
What Tools Will You Use?
FMEA
Process Control Plan
SPC Chart or other monitoring scheme
Mistake-proofing
Out-of-Control Action Plans
Measurement System Control Plan
Process Performance Metrics
5
Activities to Sustain the Gains
Develop Training Plan
Develop Communication Plan
Update procedures used to manage daily work processes
Modify policies as needed
Update quality management systems & standards
Change engineering drawings and specifications
Modify quality appraisal and audit criteria
Change manufacturing planning
Revise budgets
Revise resource forecasts
Change information systems
6
Training Plan
Purpose
To set up the organization to win, by building the right competencies and enabling the
people who have to change in order to be successful

Consider
Process maps, procedures and documentation
Training approach and design
Evaluation of training delivery
Assessment of training effectiveness
Implementation of support line or help desk

A Training Plan should address
Who is to be trained & their job function
Key change to be addressed
Training method & delivery method
Timing of training
Measurement of effectiveness
7
Communication Plan
Purpose
To gain support and build enthusiasm by demonstrating leadership support and providing
the right information at the right time

Consider
Compelling need for change
What's in it for me?
Leadership support
How to be successful with solution
Training plan and schedule

A Communication Plan should address
Variety & level of audiences
Key message
Assumptions about prior knowledge
Timeline
Responsible person & contact information
8
Updated Procedures
Includes guides, job aids, etc.: any format providing work instructions

Benefits
A training aid
Solutions remain in place long after the team has been reassigned
Eases replication for other departments, employees, products, customers, etc.

Contents
At a level so that the job can be performed by relatively new operators
Should be specific, including actions and responsible parties
Include underlying cause & effect relationships to enable learning

Formats include written procedures, checklists, checksheets, flowcharts, processing
systems
9
Dos and Don'ts for Developing Procedures
Do
Include operators actually performing the work in creating the procedures
Identify best practices
Ensure that training follows procedures
Verify, or test drive, before releasing for distribution
Provide a means for updates & revision

Don't
Forget that real people are going to try to perform the activities as described:
consider legibility, education level & literacy, language
Make procedures inaccessible: procedures should be readily available at a work
station
Give vague instructions, like "check for defects": itemize specific examples of
defects
10
Approaches to Project Deployment
Sequenced
Complete implementation at one site or process before beginning at another site
or process

Parallel
Implement at two or more sites/processes at the same time

Phased
Based on a milestone schedule and achievement of pre-determined goals

Whole-Hog
Implement company-wide at all sites/processes at the same time
11
After Deployment Validate & Close the Project!
Verify results of process improvement and validate financial impact
Complete project report summarizing
Accomplishments
Key Findings
Lessons Learned
Conduct final project review with senior leadership
Transfer process ownership
Control Plan
Training
Recognize accomplishments and communicate success
Match reward & recognition to the person and achievement
Be timely & specific
12
Management Review of Performance Metrics
Process Capability
Cp, Cpk, Pp, Ppk, sigma level, PPM, DPMO

Process Instability
Instability Index, often measured as % of out-of-control points

Process Yield, Test Yield, Rolled Throughput Yield

Measurement System R&R

Customer Defects / Complaints
13
After Institutionalization
So you've successfully implemented and closed your process improvement, within
the scope of your project


14
15
... I can think of three things we're working on now in my company (Bank of America) that
may be relevant for others too. We need to sustain the gains we've made and replicate
them throughout the company.

This point has a lot to do with the control phase of the DMAIC process. We've seen
examples where projects that were well-conceived and well-executed lacked a control plan
that was strong and permanent enough to ensure the new process had staying power. In
those cases, it's all too easy for what we sometimes call muscle-memory - the tendency to
revert back to old ways of doing things - to take over. The control plan also hinges on the
question of accountability. If there's a lack of clarity about who is accountable for ensuring
that the new process continues to operate - and who is responsible for reporting on the
process and its results over time - it won't last.

To replicate our gains across the company more efficiently, we need to do a better job of
spreading knowledge gained from one project in one division to projects that address similar
processes in other divisions. This sounds simple enough, but in a large organization,
identifying these opportunities is a significant challenge.

Kenneth D. Lewis, CEO and President, Bank of America
Remarks at the International Society of Six Sigma Practitioners Symposium;
October 26, 2004; Charlotte, NC; www.bankofamerica.com/newsroom
Impact of Replication & Standardization
How will you expand and implement a successful solution across a wider scope of
business operations and processes?
16
(Diagram: Replication Opportunities vs. Standardization Opportunities quadrant.
Increased sigma performance results from a successful pilot, then from site
replication, then from company-wide replication, then from company-wide
standardization.)
Process Replication
Replication involves the expansion of a successful solution to identical or very
similar processes across a greater number of geographies, products, etc.

Deployment concerns include:
Protecting the integrity of the original process design and results
Ensuring continuity of the deployment expansions
Taking advantage of the best practices learned throughout the original pilot implementation
17
Considerations for Replication
Resources required for deployment, including team members
Capacity and demand
Concurrent project implementations
Current performance levels and priority for deployment
Process similarities
Leadership support & receptiveness to change
Labor issues, such as contractual obligations
Technology issues & strategies
Need for tailoring based on unique site characteristics
Lessons learned, including effectiveness of process control system and
implementation to date
18
Solution Standardization
Standardization involves the identification of opportunities and application of
solutions from one process to other processes and operations which may be quite
different from those where the solutions were first deployed

One key to identification of these opportunities is sharing stories of successful
projects beyond the impacted work area
19
Spreading the Knowledge
Effective replication and standardization can have an exponential impact on sigma
performance that far exceeds the impact anticipated by the pilot and solution
implementation

Detailed planning is required for expansion and implementation across a wider
scope of business operations and processes

Teams should seek opportunities to leverage improvement through replication and
standardization

Management must develop an overall organizational system to enable these
improvements to be applied
20
An Example of Spreading the Knowledge
21
The Way We Work
Full project deployment includes integrating with existing work and management
systems and aligning with other key initiatives

How does Six Sigma fit with other initiatives, such as ISO 9000 and the Baldrige
Criteria for Performance Excellence?
22
Management System Standards - The Basics
Management Systems Standards (MSS) are primarily concerned with what an
organization does to operate and manage its work processes

Standards state minimum requirements for what the organization must do to
manage processes

These standards help ensure that an organization has a minimum structure for its
business, so that time, money and other resources are utilized efficiently

Managed by international or national organizations

Certification is obtained through assessment of the organization's quality system to the requirements of the standard by accredited, 3rd-party assessors

Often required as a condition of business
23
Some Widely-Used MSS
24
Standard - Description
ISO 9000 Family - Quality Management System (ISO 9001:2008)
ISO 14000 Family - Environmental Management System (ISO 14001:2004)
Various other ISO standards - Including Aerospace, Food and Drink industry, Medical products, Computer software, etc.
CMMI - Capability Maturity Model Integration. Model for improving and appraising the performance of development in software organizations and related fields.
JCAHO - Accredits health care organizations and programs
NCA - Accredits learning institutions in 19 states
MBNQA Performance Excellence Criteria - Basis for the Malcolm Baldrige National Quality Award, in three areas: Business/Nonprofit, Health Care, Education. Not a certification.
ISO 9000 Family
Addresses what an organization does to meet a Customer's quality requirements, meet applicable regulatory requirements, enhance Customer satisfaction, and achieve continual performance improvement
Comprises
ISO 9000, Quality Management (Definitions, fundamentals and vocabulary)
ISO 9001, Quality management systems - Requirements
ISO 9004, Quality management systems - Guidelines for performance improvements
ISO 9001:2008 did not introduce additional requirements beyond ISO 9001:2000, which
Combined twenty elements of original standard into four major processes
Management Responsibility
Resource Management
Product and/or Service Realization Management
Measurement, Analysis and Improvement
Added requirements to
Establish a system-level procedure to facilitate continual improvement
Measure customer satisfaction and dissatisfaction
Use of appropriate statistical techniques
25
Sample of ISO 9001:2000* Language
26
* Note: ISO 9001:2008 does not introduce additional requirements or
change intent, but clarifies existing requirements
The Baldrige Criteria for Performance Excellence
The Baldrige Performance Excellence Program and the Performance Excellence
criteria play a role in US Competitiveness
Managed by NIST
Help improve organizational performance practices, capabilities, and results
Facilitate communication and sharing of best practices
Serve as a working tool for understanding and managing performance
Are the basis for conducting organizational self-assessments, giving feedback to applicants, and making MBNQA award decisions
Provide the framework for an organization to
Measure organizational performance
Plan in an uncertain environment
Decide on approaches to performance improvement
Lean, Six Sigma, ISO, Balanced Scorecard, etc.
Improve communication, productivity and effectiveness
Achieve strategic goals

27
28
1. Leadership
1. Senior Leadership
2. Governance and Societal Responsibilities
2. Strategic Planning
1. Strategy Development
2. Strategy Implementation
3. Customer Focus
1. Voice of the Customer
2. Customer Engagement
4. Measurement, Analysis, and Knowledge Management
1. Measurement, Analysis, and Improvement of Organizational Performance
2. Management of Information, Knowledge, and Information Technology
5. Workforce Focus
1. Workforce Environment
2. Workforce Engagement
6. Process Management
1. Work Systems
2. Work Processes
7. Results
1. Product and Process Outcomes
2. Customer-Focused Outcomes
3. Workforce-Focused Outcomes
4. Leadership and Governance Outcomes
5. Financial and Market Outcomes
2011-2012 Business Criteria for Performance Excellence*
* 2011-2012 Business Criteria for Performance Excellence, BNQP, NIST
Sample of PE Criteria Language
29
Lean? Six Sigma?
P-D-C-A?
30
What Makes the Difference?*
Although all three are quality management systems, Six Sigma, ISO 9001 Registration and the
Baldrige Criteria for Performance Excellence each offer a different emphasis in helping
organizations improve performance and increase customer satisfaction.

Six Sigma
concentrates on measuring product (and service) quality and improving process engineering
drives process improvement and cost savings

ISO 9001 Registration
is a product/service conformity model for guaranteeing equity in the marketplace
concentrates on fixing quality system defects and product/service nonconformities

Baldrige Criteria for Performance Excellence
focus on performance excellence for the entire organization in an overall management
framework
identify and track all-important organizational results: customer, product/service, financial,
human resource, and organizational effectiveness
* Baldrige, Six Sigma, & ISO: Understanding Your Options, CEO
Issue Sheet, Baldrige National Quality Program, Summer 2002.
Control Tollgate Questions
Was the solution tested on a small scale? How
representative was the test? How are your
learnings from the pilot integrated into the
implementation plan?

What is the implementation plan? Who is
responsible for implementation? What are the
potential problems? What are the contingency
plans?

How has the process been standardized? How have
you documented the process changes?

How has training been conducted to assure
understanding of the process changes? How
effective was this training? What continuing issues
does your team need to address in the area of
training?

What is the communication plan for
implementation? How will your team use
communications to manage this change, minimize
resistance and mobilize stakeholders?
What controls are in place to assure that the
problem does not recur? What is being measured?
What evidence do you have that would indicate the
process is in-control? How well and consistently
is the process performing? Is a response plan in
place for when the process experiences out-of-
control occurrences?

Who is the process owner? How will the
responsibility for continued review be transferred
from the improvement team to the process owner?
Have key process metrics been integrated into the
management review process? How frequent are
the reviews?

What gains or benefits have we realized from
implementation? How can we replicate the
improvements elsewhere in the organization?

What did the team learn from the project? Where
are some other areas of the business that can
benefit from your learnings? When will the
learnings be shared?

31
The DMAIC Process
From Montgomery, D. C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York
32
For Six Sigma to be Successful
As described by Mikel Harry (class lecture video), three keys for successful Six Sigma
implementation are:

1. Six Sigma must be owned at the top, driven by executive management
throughout the organization

2. There must be a company-wide infrastructure to support Six Sigma

3. Six Sigma projects must have demonstrated success in terms of money, not just
cost avoidance
3a. Compensation must be tied to performance results
33
Caterpillar is One Company with a Successful Six Sigma Initiative
2001: Launched 6 Sigma.

2002: 6 Sigma is process improvement and people
engagement.

2003: Increased emphasis on 6 Sigma in New
Product Introduction and Continuous Product
Improvement initiatives.

2004: Leadership team commits to encoding 6 Sigma into CAT's DNA and extends deployment to dealers and suppliers.

2005: Ongoing effort to encode 6 Sigma discipline
into daily work. Used 6 Sigma to develop content
for first Sustainability Report.
2006: Refocused 6 Sigma human resources on
quality, to drive common processes, metrics and
simplification.

2007: Continued focus on 6 Sigma to make
significant improvements in quality, availability,
inventory turns and cost performance.

2008: Caterpillar Production System is powered by
6 Sigma.

2010: Caterpillar Production System and 6 Sigma
methodologies remain core operating principles,
with references in Annual and Sustainability
Reports.

2011: 6 Sigma is embedded in Caterpillar's Worldwide Code of Conduct and Sustainability Report.
34
And their progress in level of maturity is evident in annual reports
Reading / References
Reading assignment for both Control lectures:
BALDRIGE, SIX SIGMA, & ISO: Understanding Your Options, Baldrige National Quality Program, CEO Issue
Sheet, Summer 2002. (http://www.baldrige.nist.gov/PDF_files/Issue_Sheet_SS.pdf)

George, Michael L., Lean Six Sigma, Chapters 10 & 11

McTurk, Laurie and Wilson, Dennis (2002). Presentation on the Instability Index.
(www.pro.enbis.org/members/documentation/Nissanmotorolapres.ppt)

Montgomery, Douglas C. (2009), Introduction to Statistical Quality Control, 6th edition, Wiley, New York

NIST-SEMATECH, Engineering Statistics Handbook, (http://www.itl.nist.gov/div898/handbook/index.htm)

Simon, Kerri. Poka-Yoke Mistake Proofing. (http://www.isixsigma.com/library/content/c020128a.asp)

Snee, Ronald D. and Roger W. Hoerl, Leading Six Sigma, Chapters 5, 6 & 7

The Toyota System web site, Poka-Yoke examples and definition By Greg, May 12, 2009.
(http://www.thetoyotasystem.com/lean_inventions/poka_yoke-you-can%E2%80%99t-go-wrong.php)

35
Up Next
Be sure to watch the video "History of Six Sigma" by M. Harry

Next three classes with Dr. Cathy Lawson

Upcoming office hours: Wednesday, 10/3, in BYAC (room TBD), from 5pm to 7pm. Please email or call if you need to make other arrangements.

36
ASU Department of Industrial Engineering 2004
IEE 598 Six-Sigma Methodology

The role of teams in Six Sigma
Dr. Cathy Lawson Medtronic
cathy.a.lawson@medtronic.com

ASU Department of Industrial Engineering 2004
Attributes of Teams
Made up of individuals working towards a shared goal
Cross-functional
Short term
Dynamic
Six Sigma projects are best accomplished using cross-functional teams
ASU Department of Industrial Engineering 2004
Types of Six Sigma Teams
Process Improvement Teams
Problem Solving Teams
Self-managed Work teams
Steering Committees and Councils
ASU Department of Industrial Engineering 2004
Discussion
Think of a team that you have participated on in the past.

What was your role on the team?

Was it a successful team?

What were the attributes that either made it successful or not?
ASU Department of Industrial Engineering 2004
Attributes of Effective Teams
Informal, comfortable atmosphere
Lots of discussion everyone participates
Tasks and objectives well defined
Members listen to each other
There may be healthy conflict
The decision making process is understood
People feel free to express their ideas
Clear assignments are made for required actions
Leader does not dominate unnecessarily
The team is self-monitoring
Members are committed to the goal

ASU Department of Industrial Engineering 2004
Black Belt Roles on a Team
Leader
Facilitator
Member
Consultant
Expert
Mentor

Black Belts serve in many roles when interacting with teams
ASU Department of Industrial Engineering 2004
Black Belt Team Skills
Planning and goal setting
Meeting management
Listening
Resolving conflict
Presenting
Risk taking
Mentoring
Influencing
Giving and accepting feedback
Dealing with ineffective team members
Monitoring and evaluating
ASU Department of Industrial Engineering 2004
Common mistakes teams make
Not defining the problem/task well

Falling in love with tools

Jumping to a solution without defining the problem

Looking busy while the problem goes away

ASU Department of Industrial Engineering 2004
Common mistakes Management makes with teams
Don't scope team projects well - "Achieve world peace"

Putting a hero on the team

Charter and walk away
ASU Department of Industrial Engineering 2004
"A problem well stated is a problem half solved."

Charles F. Kettering
ASU Department of Industrial Engineering 2004
Getting the Team Started
Who, What, Where, When - Definition (66%)
Why - Cause (17%)
How - Solution (17%)
ASU Department of Industrial Engineering 2004
What Questions
What do we know as fact?
What do we need to know?
What kind of data do we need to have to tell us what
we need to know?
What are the symptoms we are observing?
What is the problem statement?


ASU Department of Industrial Engineering 2004
Case Study
You are asked by the program manager to work with a
project team to develop and implement a Statistical
Process Control plan for his area. No one on the team
has had any SPC training. At the first meeting, the
program manager is absent and the factory manager
does not hide the fact that he does not want to
participate in this initiative.


What is your role on this team?
How do you make this successful?
ASU Department of Industrial Engineering 2004
Case Studies
You are brought in to consult with a highly visible project that is
having a serious technical problem. It has halted production and
is costing the company thousands of dollars each day the
production line is on hold. When you are brought in, the team
believes they have identified the "red X" but they want to run an experiment to verify their conclusions. You learn through questioning that they have tested two units and found the "red X" in one of them.

What is your role on this team?
How do you make this successful?
ASU Department of Industrial Engineering 2004
Case Studies
You are asked to participate on a team working on an
improvement project for a new product design. The team is
following the DMAIC approach. You are asked to design an
experiment to look at the effect of five variables on a particular
response. You present a 2^(5-1) fractional factorial design to the
team. After the meeting, you hear from the technician that the
process engineer (who was not at the meeting) has reviewed the
experiment design and said that 5 of the treatment combinations
are not going to work and that he is unwilling to provide
experimental units for those runs.

What is your role on this team?
How do you make this successful?

ASU Department of Industrial Engineering 2004
Case Studies
You are mentoring a new Black Belt candidate who has recently
been assigned an improvement project to lead. You have been
asked to sit in and observe this team and this candidate as they
work on this project. The team has been given 8 weeks to work
on this issue. Four weeks into the project, the team is
significantly behind schedule and it is obvious that the candidate
has no idea how to get this team back on track.

What is your role on this team?
What do you do to make this successful?

ASU Department of Industrial Engineering 2004
IEE 598 Six-Sigma Methodology

Communicating Results and
Consulting
Dr. Cathy Lawson Medtronic
cathy.a.lawson@medtronic.com

ASU Department of Industrial Engineering 2004
Who's the audience?
Management
Customers
Peers
Team
Hourly Employees
One of the keys to a successful presentation is to know
your audience
ASU Department of Industrial Engineering 2004
Formats for presenting results
Formal presentation
Executive Summary
Journal article
Informal briefing
Written report
Memo
Minutes

ASU Department of Industrial Engineering 2004
Good Etiquette for Formal Presentations
Keep to your time limit
Allow time for questions
Have the number of slides you present be approximately equal to 1/3 the number of minutes you have to present
Make sure your slides are readable from all areas of
the room
Don't assume everyone knows your acronyms
Be aware of the audience reception of your
presentation and make adjustments as necessary

ASU Department of Industrial Engineering 2004
Audience Analysis - Management/Customers
Summarize concisely
Prepare an executive summary
Make practical recommendations
Relate subject to the bottom line
Identify 1-3 key takeaways
Be prepared to discuss how you performed within the
given constraints.
ASU Department of Industrial Engineering 2004
Audience Analysis - Peers/Teammates
Prepare technical reports with an abstract
Use graphical techniques to convey your results
Translate the statistical analysis into the specific
actions the team needs to take
Document results as you go

ASU Department of Industrial Engineering 2004
Audience Analysis Hourly Employees
Use communication as a form of training
Keep it simple
Relate to their job
Acknowledge their expertise
Be aware that the hourly employee may have some
fear about what you are doing

ASU Department of Industrial Engineering 2004
The Art of Consulting
The practice of consulting is to provide guidance,
knowledge and assistance to the individual or
institution making the request on a topic(s) in which
the consultant is a recognized expert.

The art of consulting is being able to provide
meaningful guidance, knowledge and assistance to
the individual or institution which enables them to
perform better.

ASU Department of Industrial Engineering 2004
Black Belts as Consultants
Black Belts within an organization become an internal
resource with whom people will consult.
Technical expertise
Statistical applications
Use of continuous improvement tools
Problem solving/prevention strategies

ASU Department of Industrial Engineering 2004
Consulting Skills
Listening
Questioning
Framing
Advising
Mentoring
Problem Solving
Influencing

ASU Department of Industrial Engineering 2004
Case Study
An engineer comes to you asking for help in
designing an experiment to help solve a
problem.


What do you do next?

ASU Department of Industrial Engineering 2004
Case Study
A process engineer comes to you asking for
advice. He says that the factory has just
solved a technical problem and put a fix in
place. They want to verify that the fix is
successful. How many samples do they need
to collect in order to prove that?

What do you do next?
ASU Department of Industrial Engineering 2004
Case Study
A project is having numerous quality
problems. The manager of the area wants you
to come in and make sure the team is using
the proper statistical tools to help them solve
the problems.

What do you do next?

ASU Department of Industrial Engineering 2004
Case Study
A team needs to run an experiment to
investigate the effect of a proposed design
change on a new product design. There are
two variables of interest under investigation.
The team says they can only provide 5
experimental units.

How can you help them?

IEE 581 Six Sigma Methodology
Categorical Data Analysis, Part 1
Fall 2012 10/11/2012
Cheryl L. Jennings, PhD, MBB
c.jennings@asu.edu
1
Categorical Data Analysis
Analysis of Categorical, or Discrete, RESPONSE data

Can be encountered throughout the DMAIC process
Define
VOC
Measure
MSAs
Process capability studies
Analyze
Hypothesis testing, regression
Improve
Designed experiments
Control
SPC charts
2
Categorical Data Analysis Techniques
Attribute Agreement Analysis
Agreement tables (Measure lecture)
Kappa (Measure lecture)
Intraclass Correlation
Kendall's Coefficient of Concordance

Process Capability
DPMO for discrete data (Measure
lecture)
Contingency Table Analysis
Test for Independent classifications
Test for Homogeneity across
categories

Logistic Regression Analysis
Binary
Nominal
Ordinal

Attribute Control Charts (Control
lecture, SPC course)
3
Measurement System Analysis Properties
The properties analyzed for a system measuring continuous data are
Discrimination
Accuracy (or Bias)
Stability
Linearity
Gauge Repeatability & Reproducibility (GR&R)

There are many situations where the output of a measurement system or an
evaluation is an attribute, for example pass/fail, classification or rating.
Some of the usual MSA properties cannot be evaluated, but they do suggest
properties that should be considered.
4
Attribute Agreement Analysis
An Attribute Agreement Analysis can be used to evaluate
Within Appraiser agreement, or Repeatability
Between Appraiser agreement, or Reproducibility
Appraiser agreement versus a standard if available, or Bias

The Attribute MSA shown in the Measure lecture included
Agreement tables
Kappa statistics

The Intraclass Correlation Coefficient and Kendall's Coefficient of Concordance are additional methods for evaluating agreement.
5
Small Business Loan Example
Small business loans are reviewed to approval parameters in a manual underwriting
process, and the decision is made to either approve or decline the loan application.
Consistency in underwriting decisions is desired at several levels:
Consistency with decision of a subject matter expert
Consistency when provided with the same application
Consistency between underwriters
6
Agreement Tables for Loan Decisioning
7
Within Appraisers
Assessment Agreement
Appraiser # Inspected # Matched Percent 95% CI
1 14 14 100.00 (80.74, 100.00)
2 14 11 78.57 (49.20, 95.34)
3 14 14 100.00 (80.74, 100.00)
# Matched: Appraiser agrees with him/herself across trials.

Each Appraiser vs Standard
Assessment Agreement
Appraiser # Inspected # Matched Percent 95% CI
1 14 11 78.57 (49.20, 95.34)
2 14 9 64.29 (35.14, 87.24)
3 14 10 71.43 (41.90, 91.61)
# Matched: Appraiser's assessment across trials agrees with the known standard.

Between Appraisers
Assessment Agreement
# Inspected # Matched Percent 95% CI
14 8 57.14 (28.86, 82.34)
# Matched: All appraisers' assessments agree with each other.

All Appraisers vs Standard
Assessment Agreement
# Inspected # Matched Percent 95% CI
14 6 42.86 (17.66, 71.14)
# Matched: All appraisers' assessments agree with the known standard.
*Minitab output, data is in categorical data sets.xlsx
Kappa Statistic
Kappa is defined as the proportion of agreement between Appraisers after
agreement by chance has been removed. The formula is:



Values range from -1 to +1. The higher the value, the stronger the agreement
between the rating and standard.
If K = 1, there is perfect agreement. K > 0.9 is excellent; K < 0.7 needs improvement.
If K = 0, then agreement is the same as would be expected by chance.
If K < 0, then agreement is weaker than expected by chance.

Hypotheses are:
H0: Level of agreement is the same as that expected by chance alone (K = 0).
HA: Level of agreement is significantly different (stronger or weaker) than
expected by chance.
8
K = (P_Observed - P_Expected) / (1 - P_Expected)
Kappa Statistic for Simplified Loan Decisioning


9
P_Observed = P(App-App) + P(Dec-Dec) = 7/14 + 4/14 = 0.786
P_Expected = (9/14)(8/14) + (5/14)(6/14) = 0.520
K = (0.786 - 0.520) / (1 - 0.520) = 0.553
Between Appraisers

Assessment Agreement
# Inspected # Matched Percent 95% CI
14 11 78.57 (49.20, 95.34)
# Matched: All appraisers' assessments agree with each
other.

Fleiss' Kappa Statistics
Response Kappa SE Kappa Z P(vs > 0)
A 0.550802 0.267261 2.06091 0.0197
D 0.550802 0.267261 2.06091 0.0197
*Minitab output, data is in categorical data sets.xlsx
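As a cross-check, the kappa arithmetic above can be reproduced in a few lines of code. This is a minimal sketch in Python (numpy assumed); it computes the simple two-appraiser kappa from the hand calculation, not Minitab's Fleiss variant, and the off-diagonal counts of 2 and 1 are inferred from the slide's marginal proportions.

import numpy as np

# Two-appraiser table for the simplified loan example: diagonal counts
# (7 Approve-Approve, 4 Decline-Decline) are from the slide; off-diagonals
# (2, 1) are inferred from the marginals 9/14, 5/14 and 8/14, 6/14.
table = np.array([[7, 2],   # Appraiser 1 = Approve
                  [1, 4]])  # Appraiser 1 = Decline
n = table.sum()

p_observed = np.trace(table) / n                      # exact agreement
row_p = table.sum(axis=1) / n                         # Appraiser 1 marginals
col_p = table.sum(axis=0) / n                         # Appraiser 2 marginals
p_expected = np.sum(row_p * col_p)                    # agreement by chance
kappa = (p_observed - p_expected) / (1 - p_expected)

print(f"P_obs = {p_observed:.3f}, P_exp = {p_expected:.3f}, K = {kappa:.3f}")
# Prints P_obs = 0.786, P_exp = 0.520, K = 0.553, matching the hand calculation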
Kappa Statistics for Loan Decisioning
10
Within Appraisers
...
Fleiss' Kappa Statistics
Appraiser Response Kappa SE Kappa Z P(vs > 0)
1 A 1.00000 0.267261 3.74166 0.0001
D 1.00000 0.267261 3.74166 0.0001
2 A 0.55080 0.267261 2.06091 0.0197
D 0.55080 0.267261 2.06091 0.0197
3 A 1.00000 0.267261 3.74166 0.0001
D 1.00000 0.267261 3.74166 0.0001

Each Appraiser vs Standard
...
Fleiss' Kappa Statistics
Appraiser Response Kappa SE Kappa Z P(vs > 0)
1 A 0.569231 0.188982 3.01209 0.0013
D 0.569231 0.188982 3.01209 0.0013
2 A 0.492949 0.188982 2.60844 0.0045
D 0.492949 0.188982 2.60844 0.0045
3 A 0.416667 0.188982 2.20479 0.0137
D 0.416667 0.188982 2.20479 0.0137

Between Appraisers
...
Fleiss' Kappa Statistics
Response Kappa SE Kappa Z P(vs > 0)
A 0.628361 0.0690066 9.10581 0.0000
D 0.628361 0.0690066 9.10581 0.0000

All Appraisers vs Standard
...
Fleiss' Kappa Statistics
Response Kappa SE Kappa Z P(vs > 0)
A 0.492949 0.109109 4.51795 0.0000
D 0.492949 0.109109 4.51795 0.0000
*Minitab output,
data is in
categorical data
sets.xlsx
Intraclass Correlation Coefficient*
If a measurement system is needed to simply classify objects in a non-quantitative
manner, then the Kappa statistic is appropriate.

If it is possible to rank or order objects in some way, Intraclass Correlation
Coefficients (ICC) should be used to evaluate the agreement of the ratings.

The distance between the ordered categories is perceived as roughly equal.

ICCs are ratios of between rating variance to total variance, comparing the
covariance of the ratings with the total variance.

Variations of the ICC are a function of
How Appraisers are selected
How Ratings are used (individual or averaged)
11
*from Futrell (1995)
Six Types of ICCs*
Variations of the ICC are set up the same way and differ only in treatment of
components of variation. The six types of ICCs are:
1. Each object is rated by a different set of Appraisers, randomly selected from a larger population, and the reliability of each Appraiser's rating is of interest.
2. Each object is rated by a different set of Appraisers, randomly selected from a larger population, and the reliability of the Appraisers' averaged rating is of interest.
3. Each object is rated by a single set of Appraisers, randomly selected from a larger population, and the reliability of each Appraiser's rating is of interest.
4. Each object is rated by a single set of Appraisers, randomly selected from a larger population, and the reliability of the Appraisers' averaged rating is of interest.
5. Each object is rated by a set of Appraisers, who are the only Appraisers of interest (there is not a larger population), and the reliability of each Appraiser's rating is of interest.
6. Each object is rated by a set of Appraisers, who are the only Appraisers of interest, and the reliability of the Appraisers' averaged rating is of interest.
If ratings are from a single Appraiser, then 1, 3 and 5 apply. If ratings are averaged across multiple Appraisers, then 2, 4 and 6 apply.
12
*from Futrell (1995)
ICC for Supplier Evaluation
Package Buyer 1 Buyer 2 Buyer 3
1 5 7 6
2 6 5 4
3 4 4 3
4 4 5 4
5 7 6 5
6 6 7 7
7 8 9 8
8 9 8 8
9 5 5 6
10 6 7 8
Supply management wants to evaluate
a supplier based on the completeness
of purchase orders.
Three senior buyers will evaluate ten
purchase orders and rate
completeness from 1 (poor) to 10
(excellent).
If 3 different randomly selected
buyers rate each purchase order,
then 1 & 2 apply.
If the same 3 randomly selected
buyers rate all 10 purchase orders,
then 3 & 4 apply.
If there are only 3 buyers in the
organization and each buyer rates
all 10 purchase orders, then 5 & 6
apply.
13
ICC for Supplier Evaluation (continued)
14
ICC1 = (BMS - WMS) / [BMS + (k - 1)·WMS] = (7.32 - 0.6) / [7.32 + (3 - 1)·0.6] = 0.79
ICC2 = (BMS - WMS) / BMS = (7.32 - 0.6) / 7.32 = 0.92
ICC3 = (BMS - EMS) / [BMS + (k - 1)·EMS + k·(JMS - EMS)/n] = (7.32 - 0.62) / [7.32 + (3 - 1)·0.62 + 3·(0.43 - 0.62)/10] = 0.79
ICC4 = (BMS - EMS) / [BMS + (JMS - EMS)/n] = (7.32 - 0.62) / [7.32 + (0.43 - 0.62)/10] = 0.92
ICC5 = (BMS - EMS) / [BMS + (k - 1)·EMS] = (7.32 - 0.62) / [7.32 + (3 - 1)·0.62] = 0.78
ICC6 = (BMS - EMS) / BMS = (7.32 - 0.62) / 7.32 = 0.92

(BMS = between-package mean square, WMS = within-package mean square, JMS = between-buyer mean square, EMS = residual error mean square; k = 3 appraisers, n = 10 purchase orders)
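The mean squares in these formulas come from the two-way (package × buyer) ANOVA decomposition of the rating table. The following is a minimal sketch in Python (numpy assumed) that reproduces the mean squares and all six ICCs from the buyer data on the previous slide:

import numpy as np

# 10 purchase orders (rows) rated by 3 buyers (columns), from the slide
x = np.array([[5, 7, 6], [6, 5, 4], [4, 4, 3], [4, 5, 4], [7, 6, 5],
              [6, 7, 7], [8, 9, 8], [9, 8, 8], [5, 5, 6], [6, 7, 8]], float)
n, k = x.shape
grand = x.mean()

# Two-way ANOVA mean squares (objects x judges)
bms = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)    # between packages
jms = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)    # between buyers
ss_within = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2)
wms = ss_within / (n * (k - 1))                              # within packages
ems = (ss_within - jms * (k - 1)) / ((n - 1) * (k - 1))      # residual error

icc1 = (bms - wms) / (bms + (k - 1) * wms)
icc2 = (bms - wms) / bms
icc3 = (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)
icc4 = (bms - ems) / (bms + (jms - ems) / n)
icc5 = (bms - ems) / (bms + (k - 1) * ems)
icc6 = (bms - ems) / bms

print(f"BMS={bms:.2f}, WMS={wms:.2f}, JMS={jms:.2f}, EMS={ems:.2f}")
print(f"ICC1={icc1:.2f} ICC2={icc2:.2f} ICC3={icc3:.2f} "
      f"ICC4={icc4:.2f} ICC5={icc5:.2f} ICC6={icc6:.2f}")
# Prints BMS=7.32, WMS=0.60, JMS=0.43, EMS=0.62 and
# ICC1=0.79 ICC2=0.92 ICC3=0.79 ICC4=0.92 ICC5=0.78 ICC6=0.92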
Kendall's Coefficient of Concordance*
For ordinal data, Kendall's Coefficient of Concordance (KCC) can also be used to measure agreement in ratings.

It is a measure of total correlation that indicates the degree of association of ordinal
assessments made by multiple Appraisers when evaluating the same objects.

Values range from 0 to +1.
If there is perfect agreement, with objects assigned same rating by all Appraisers,
W = 1.
If there is perfect disagreement among rankings, W will be zero (or very close).

A high or significant Kendall's coefficient means that the Appraisers are applying
essentially the same standard when evaluating the objects (Agreement is not due to
chance).
15 *from Conover (1980)
Minitab's Approach to Estimating KCC
Minitab estimates KCC as

W = [12·Σ Ri² - 3·K²·N·(N + 1)²] / [K²·N·(N² - 1) - K·Σ Tj]

where:
N = the number of subjects
Σ Ri² = the sum of the squared sums of ranks for each of the N ranked subjects
K = the number of appraisers
Tj = Σ (ti³ - ti) corrects for ties, where ti = the number of tied ranks in the i-th grouping of ties, and the sum runs over the gj groups of ties in the j-th set of ranks

To test significance of Kendall's coefficient, use χ² = K·(N - 1)·W, where W is the calculated Kendall's coefficient. χ² is distributed as chi-square with N - 1 degrees of freedom.

16
KCC for Supplier Evaluation
17
Attribute Agreement Analysis for Buyer1, Buyer2, Buyer3
Between Appraisers

Assessment Agreement
# Inspected # Matched Percent 95% CI
10 0 0.00 (0.00, 25.89)
# Matched: All appraisers' assessments agree with each other.

Fleiss' Kappa Statistics
Response Kappa SE Kappa Z P(vs > 0)
3 -0.034483 0.182574 -0.18887 0.5749
4 0.280000 0.182574 1.53362 0.0626
5 -0.041667 0.182574 -0.22822 0.5903
6 -0.250000 0.182574 -1.36931 0.9145
7 0.040000 0.182574 0.21909 0.4133
8 0.280000 0.182574 1.53362 0.0626
9 -0.071429 0.182574 -0.39123 0.6522
Overall 0.037433 0.081300 0.46043 0.3226

Kendall's Coefficient of Concordance
Coef Chi - Sq DF P
0.858947 23.1916 9 0.0058
Note: none of the Buyers' ratings matched exactly, and the low overall Kappa suggests poor consistency in ratings between Buyers; the high, significant Kendall's coefficient, however, suggests consistency between Buyers is acceptable (i.e., not due to chance).
*Minitab output, data is in categorical data sets.xlsx
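The tie-corrected W is also straightforward to reproduce outside Minitab. This is a minimal sketch in Python (numpy and scipy assumed) using the same buyer ratings:

import numpy as np
from scipy.stats import chi2, rankdata

# Ratings: 10 purchase orders (rows) by 3 buyers (columns), from the slide
x = np.array([[5, 7, 6], [6, 5, 4], [4, 4, 3], [4, 5, 4], [7, 6, 5],
              [6, 7, 7], [8, 9, 8], [9, 8, 8], [5, 5, 6], [6, 7, 8]], float)
N, K = x.shape

# Rank each buyer's ratings across the N objects, averaging tied ranks
ranks = np.column_stack([rankdata(x[:, j]) for j in range(K)])
R = ranks.sum(axis=1)                      # sum of ranks for each object

# Tie correction: T_j = sum(t^3 - t) over the groups of ties for appraiser j
T = 0
for j in range(K):
    _, t = np.unique(x[:, j], return_counts=True)
    T += np.sum(t ** 3 - t)

W = (12 * np.sum(R ** 2) - 3 * K**2 * N * (N + 1)**2) / \
    (K**2 * N * (N**2 - 1) - K * T)
chi_sq = K * (N - 1) * W
p = chi2.sf(chi_sq, N - 1)
print(f"W = {W:.6f}, chi-square = {chi_sq:.4f}, p = {p:.4f}")
# Prints W = 0.858947, chi-square = 23.1916, p = 0.0058, matching Minitab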
Hot Sauce Example from Futrell (1995)
Wilson and Justin are visiting New
Orleans and are overwhelmed by the
varieties of local hot sauces available.
As they taste a few varieties, they
notice that they seem to agree about
how hot each one is.
Since Wilson aspires to become a
psychometrician, he designs a study to
measure their agreement.
They randomly purchase 10 bottles of
hot sauce and independently classify
them into four categories:
Mild(M)
Hot (H)
Very hot (VH)
Makes me suffer (MMS)
Sauce Wilson Justin
1 M (1) M (1)
2 M (1) H (2)
3 MMS (4) VH (3)
4 VH (3) MMS (4)
5 H (2) VH (3)
6 VH (3) VH (3)
7 H (2) M (1)
8 H (2) H (2)
9 MMS (4) VH (3)
10 M (1) H (2)
18
Kappa for Hot Sauce Ratings
19
Results for: Hot Sauce.MTW
Attribute Agreement Analysis for Wilson, Justin
Between Appraisers

Assessment Agreement
# Inspected # Matched Percent 95% CI
10 3 30.00 (6.67, 65.25)
# Matched: All appraisers' assessments agree with each other.


Fleiss' Kappa Statistics
Response Kappa SE Kappa Z P(vs > 0)
H 0.047619 0.316228 0.150585 0.4402
M 0.200000 0.316228 0.632456 0.2635
MMS -0.176471 0.316228 -0.558049 0.7116
VH 0.047619 0.316228 0.150585 0.4402
Overall 0.047619 0.187155 0.254436 0.3996


*Minitab output, data is in categorical data sets.xlsx
ICC & KCC for Hot Sauce Ratings
20
ICC5 = (BMS - EMS) / [BMS + (k - 1)·EMS] = (1.8944 - 0.3833) / [1.8944 + (2 - 1)·0.3833] = 0.66
ICC6 = (BMS - EMS) / BMS = (1.8944 - 0.3833) / 1.8944 = 0.80
Results for: Hot Sauce.MTW
Attribute Agreement Analysis for Wilson, Justin
Between Appraisers



Kendall's Coefficient of Concordance
Coef Chi - Sq DF P
0.731148 13.1607 9 0.1555
*Minitab output, data is in categorical data sets.xlsx
Summary of Agreement Analysis Techniques
Agreement Tables
Often used for transactional processes to assess whether appraisers are making
the same decisions
Require relatively more data
Interpretation of results can be subjective
Kappa Statistic
Data classification is categorical: pass/fail, attribute, qualitative feature
Results are less subjective than for Agreement tables
All misclassifications are treated equally
Intraclass Correlation Coefficient and Kendall's Coefficient of Concordance
Data classification is ordinal and hence quantitative
Misclassification is not treated equally across all ratings
The consequence of rating a "Makes me suffer" hot sauce as "Mild" is much more serious than rating it as "Hot"
21
Contingency Table Analysis
Test for Independent classifications

Test for Homogeneity across categories


22
r × c Contingency Tables
Consider a sample of size N from a
single population, with observations
that can be classified according to two
criteria.
Results are tabulated into r rows associated with the 1st criterion and c columns associated with the 2nd criterion.
This is referred to as an r × c contingency table.
The question is: Are the two methods
of classification statistically
independent?
Contingency tables answer this
question by comparing the observed
frequencies in the cells of the table to
the expected frequencies.
                   2nd Criterion
                   1     2    ...   c     Totals
1st Criterion  1   O11   O12  ...   O1c   R1
               2   O21   O22  ...   O2c   R2
               .   .     .    ...   .     .
               r   Or1   Or2  ...   Orc   Rr
Totals             C1    C2   ...   Cc    N
This is the first application of a contingency table - a test for Independence, also known as the Chi-Square Test for Independence.

Suppose that there are only two categories (columns) - success/failure, defective/non-defective. Then the contingency table could be used to test the equality of r binomial parameters.
23
Contingency Tables - Test for Independence*
24
*from Montgomery & Runger (2011)
Is Pension Plan Preference Independent of Job Classification?
25
*from Montgomery & Runger (2011)
26
*from Montgomery & Runger (2011)
Independence Test in Minitab
27
Chi-Square Test: Plan1, Plan2, Plan3

Expected counts are printed below observed counts
Chi-Square contributions are printed below expected counts

Plan1 Plan2 Plan3 Total
1 160 140 40 340
136.00 136.00 68.00
4.235 0.118 11.529

2 40 60 60 160
64.00 64.00 32.00
9.000 0.250 24.500

Total 200 200 100 500

Chi-Sq = 49.632, DF = 2, P-Value = 0.000
*Minitab output, data is in categorical data sets.xlsx
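The same test is available in most statistics environments. As a hedged sketch, here is the pension plan analysis in Python using scipy's chi2_contingency (scipy is an assumption here; any package with a chi-square test of independence would do):

import numpy as np
from scipy.stats import chi2_contingency

# Observed counts: rows = job classification, columns = pension plan preference
observed = np.array([[160, 140, 40],
                     [ 40,  60, 60]])

chi2_stat, p_value, dof, expected = chi2_contingency(observed)
print(f"Chi-Sq = {chi2_stat:.3f}, DF = {dof}, P-Value = {p_value:.3f}")
print("Expected counts:")
print(np.round(expected, 2))
# Prints Chi-Sq = 49.632, DF = 2, P-Value = 0.000, matching the Minitab output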
Contingency Tables - Test for Homogeneity
The second application is a test for
Homogeneity, also known as the Chi-
Square Test for Differences in
Probabilities.
Consider r samples drawn from
different populations, with
observations classified into c
categories.
The question is: Do the r samples
have the same proportions of
elements in a certain category?
H0: All the probabilities in the same
column are equal to each other.
HA: At least two of the probabilities
in the same column are not equal to
each other.
Computations are the same as for the
test for independence.
The same approach can also be used
to answer the question: Does a
treatment significantly alter the
proportion of objects in each of two
classifications?
The population before & after the
treatment is represented in the
rows.
28
                   Categories
                   1     2    ...   c     Totals
Population 1       O11   O12  ...   O1c   R1
Population 2       O21   O22  ...   O2c   R2
.                  .     .    ...   .     .
Population r       Or1   Or2  ...   Orc   Rr
Totals             C1    C2   ...   Cc    N
Did the Experimental Method Result in Better Learning?
Sixty newly-hired employees were
divided into two groups of 30 each and
taught how to complete an order
transaction. One group used the
conventional method of learning, and
the other group used a new,
experimental method.
At the end of the courses, each new
employee was given a test that
consisted of completing the order
transaction. The order transaction
was either correct or incorrect.
The data is shown to the right.
Is there reason to believe that the
experimental method is superior? Or
could the differences be due to chance
fluctuations?

Correct
Order
Incorrect
Order
Conventional
Group
23 7
Experimental
Group
27 3
29
Homogeneity Test in Minitab
30
Chi-Square Test: Correct, Incorrect

Expected counts are printed below observed counts
Chi-Square contributions are printed below expected counts

Correct Incorrect Total
1 23 7 30
25.00 5.00
0.160 0.800

2 27 3 30
25.00 5.00
0.160 0.800

Total 50 10 60

Chi-Sq = 1.920, DF = 1, P-Value = 0.166
*Minitab output, data is in categorical data sets.xlsx
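As with the independence test, this result can be checked in code. A minimal sketch in Python with scipy follows; correction=False is needed here because the Minitab output above reports the uncorrected chi-square, and Yates' continuity correction would otherwise be applied to this 1-DF table:

import numpy as np
from scipy.stats import chi2_contingency

# Rows = training method (conventional, experimental); columns = test outcome
observed = np.array([[23, 7],    # conventional: correct, incorrect
                     [27, 3]])   # experimental: correct, incorrect

chi2_stat, p_value, dof, expected = chi2_contingency(observed, correction=False)
print(f"Chi-Sq = {chi2_stat:.3f}, DF = {dof}, P-Value = {p_value:.3f}")
# Prints Chi-Sq = 1.920, DF = 1, P-Value = 0.166: the observed difference
# between methods could plausibly be due to chance at the 0.05 level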
Special Application - 2 × 2 Contingency Tables*
In a 2 × 2 contingency table, two samples have observations that may be categorized into one of two classes.

The alternative hypothesis can have three forms:
Two-sided Test: H0: p1 = p2 versus HA: p1 ≠ p2
One-sided Test (upper tail): H0: p1 ≤ p2 versus HA: p1 > p2
One-sided Test (lower tail): H0: p1 ≥ p2 versus HA: p1 < p2

Test Statistics
For the two-sided test, the test statistic is the same chi-square statistic, written more simply:

χ0² = N·(O11·O22 - O12·O21)² / (n1·n2·C1·C2)

For the one-sided tests,

T1 = √N·(O11·O22 - O12·O21) / √(n1·n2·C1·C2), approximately N(0,1)

Decision rules for one-sided tests:
Upper Tail: Reject H0 at the approximate level α if T1 exceeds Zα
Lower Tail: Reject H0 at the approximate level α if T1 is less than -Zα

31 *from Conover (1980)
2 × 2 Contingency Table Example - One-Sided Test*
At the US Naval Academy, a new lighting system was installed throughout the midshipmen's living quarters. It was claimed that the new lighting system resulted in poor eyesight due to a continual strain on the eyes of the midshipmen.

Consider a study to test this claim:
H0: The probability of a graduating midshipman having 20-20 (good) vision is the same or greater under the new lighting than it was under the old lighting, versus
HA: The probability of good vision is less now than it was.
Good Vision Poor Vision
Old Lights 714 111
New Lights 662 154
32
*from Conover (1980)
2 × 2 Example in Minitab
This is a one-sided test, with HA: p_Old > p_New
However, Minitab does not have a one-sided test option
Run as a two-sided test in Minitab, and take the square root of the calculated chi-square test statistic
Compare to the standard normal distribution: Reject H0 if T1 > Zα
For α = 0.05, Zα = 1.65
Decision: Reject H0 - the two classes do differ with respect to the proportions having poor eyesight. Vision is worse with the new lights.

33
Chi-Square Test: Good, Poor

Expected counts are printed
below observed counts
Chi-Square contributions are
printed below expected counts

Good Poor Total
1 714 111 825
691.77 133.23
0.714 3.708

2 662 154 816
684.23 131.77
0.722 3.749

Total 1376 265 1641

Chi-Sq = 8.893, DF = 1, P-Value
= 0.003
T1 = √8.893 = 2.98
*Minitab output, data is in categorical data sets.xlsx
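Since the one-sided statistic is just the signed square root of the chi-square, the whole decision can also be scripted directly. A minimal sketch in Python (scipy assumed for the normal critical value), reproducing T1 for the lighting data:

import math
from scipy.stats import norm

# 2 x 2 table: rows = lighting (old, new), columns = vision (good, poor)
O11, O12 = 714, 111   # old lights
O21, O22 = 662, 154   # new lights

n1, n2 = O11 + O12, O21 + O22     # row totals
C1, C2 = O11 + O21, O12 + O22     # column totals
N = n1 + n2

# Conover's one-sided statistic, approximately N(0,1) under H0
T1 = math.sqrt(N) * (O11 * O22 - O12 * O21) / math.sqrt(n1 * n2 * C1 * C2)

alpha = 0.05
z_crit = norm.ppf(1 - alpha)      # upper-tail critical value, about 1.645
print(f"T1 = {T1:.2f}, critical value = {z_crit:.3f}")
# Prints T1 = 2.98 > 1.645, so reject H0: the proportion with good vision
# is lower under the new lights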
References
Bower, K.M. Trainer's Corner: Measurement system analysis with attribute data.
http://www.minitab.com/uploadedFiles/Shared_Resources/Documents/Articles/measurement_system_analysis_attribute_data.pdf (Kappa statistic and Kendall's Coefficient of Concordance)

Conover, W.J. (1980). Chapter 4, Contingency Tables, Practical Nonparametric Statistics, 2nd edition. Wiley, New York.

Futrell, D. (1995), When Quality is a Matter of Taste, Use Reliability Indexes, Quality Progress,
May 1995. (Available for purchase or free to members, at www.asq.org)

Montgomery, D.C. and Runger, G.C. (2011). 9-8 Contingency Table Tests. Applied Statistics and Probability for Engineers, 5th edition. Wiley, New York.

NIST/SEMATECH e-Handbook of Statistical Methods,
http://www.itl.nist.gov/div898/handbook/, 10/2010. (contingency tables)


34
Useful Minitab Documentation
Minitab Assistant White Paper
Attribute Agreement Analysis: http://www.minitab.com/en-
US/support/documentation/Answers/Assistant%20White%20Papers/AttributeAgreementAn
alysis_MtbAsstMenuWhitePaper.pdf

Minitab Technical Support Documents
Attribute Agreement Analysis: http://www.minitab.com/en-
US/support/documentation/Answers/AttribAgreeAnalysisTutorial.pdf
Service Quality Example Using Ordinal Data:
http://www.minitab.com/support/documentation/Answers/SQAttributeAgreementAnalysis.
pdf
Manufacturing Example Using Binary Data:
http://www.minitab.com/support/documentation/Answers/MANUAttributeAgreementAnal
ysis.pdf

35
Upcoming
No class on Tuesday 10/16
Next class on Thursday 10/18
Assignment #1 due on Tuesday 10/23
Mid-term exam on Tuesday 10/23
The following four lectures are on DFSS and Lean with Dr. Holcomb - ENJOY!
36