
December 15, 2009

A data-based problem-solving technique designed to
glean critical information from contrasting situations
Most effective in a situation where an unexpected level
of performance can be compared with acceptable or
standard performance
Was developed by C.H. Kepner and B.B. Tregoe and is
explained in The New Rational Manager (1981)
Can be an effective tool in the Measure and Analyze
phases of a DMAIC project to uncover and prioritize
potential root causes
Four basic dimensions of a problem are probed: what,
when, where, and to what extent
Observed Facts (is) and Comparative Facts (is not)
are collected, and then contrasted for Differences
Differences are compared to Relevant Changes and
probable causes are generated
The most probable causes are those that explain the most observed facts and differences
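The Is/Is-Not contrast described above can be sketched as a simple data structure. This is a minimal illustration only; the class and field names are assumptions, not official KT terminology:

```python
from dataclasses import dataclass

@dataclass
class ContrastRow:
    # One row of a KT Is/Is-Not table; field names are illustrative.
    dimension: str    # one of: what, where, when, extent
    question: str
    is_fact: str      # Observed Facts (IS)
    is_not_fact: str  # Comparative Facts (IS NOT)

    def difference(self) -> str:
        # Contrast IS with IS NOT to surface a distinction worth probing.
        return f"IS: {self.is_fact} / IS NOT: {self.is_not_fact}"

rows = [
    ContrastRow("what", "What object is defective?",
                "BuBc, CHOL, LIPA, CREA, BUN", "the other 37 products"),
    ContrastRow("when", "When was the defect first observed?",
                "early 2005", "before 2005"),
]

for row in rows:
    print(f"[{row.dimension}] {row.difference()}")
```

Each row keeps the IS and IS NOT facts side by side, so the contrast (the Difference) falls out mechanically rather than being argued from memory.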
Problem Definition

No. Leading Question Answer

1 What object IS defective? Especially BuBc, CHOL, LIPA, CREA, BUN

2 What object is NOT defective? The other 37 products we manufacture

3 What exactly IS wrong? Flakes appearing on slit edges within slides

4 What is NOT wrong? Chopped edges are OK

5 Where DO we find the problem? In MCI during physical inspection; from ALL SAMs.

6 Where do we NOT find the problem? Waste for
7 Where on the object DOES the defect appear? Flakes appearing on slit edges within slides
8 Where on the object does the defect NOT appear? Chopped edges are OK
9 When was the defect first observed? Early 2005
10 When was the defect NOT observed? Before 2005
11 When in the product lifecycle is the problem observed? In MCI during physical inspection
12 When in the product lifecycle is the problem NOT seen? TBD
13 In what pattern IS the defect observed? Right at the beginning of the slide mounting event
14 In what pattern is the defect NOT observed? Slit? Roll location? Mounting event time? Repeat?
15 How much of the object IS defective? Flakes are 5-10% of the surface area. Single flake.
16 How much of the object is NOT defective? Center OK. No multiple flakes.
17 What IS the trend? Step change in early 2005
18 What is NOT the trend? Not getting worse, not getting better suddenly
19 How many objects ARE defective? Problems in 1/3 of the weeks of the year (BuBc, 2005)
20 How many objects are NOT defective? No problems in the other 2/3 of the weeks (BuBc, 2005)
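Several answers in the table above are still open ("TBD", or questions yet to be answered). A small script can turn those cells into a data-collection list. The answers dict below is a hand-copied excerpt from the table, and the detection rule is an assumption for illustration:

```python
# Excerpt of answers from the Is/Is-Not table, keyed by question number.
answers = {
    12: "TBD",
    14: "Slit? Roll location? Mounting event time? Repeat?",
    17: "Step change in early 2005",
}

def needs_data(answer: str) -> bool:
    # An answer still needs data if it is TBD or is itself a question.
    return answer.strip().upper() == "TBD" or "?" in answer

open_items = sorted(q for q, a in answers.items() if needs_data(a))
print("Questions still needing data:", open_items)  # open_items == [12, 14]
```

This kind of scan feeds directly into the action-item register described later: every open cell becomes a data-collection task.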

Suggested characteristics for Influence Matrix

Still need information in these boxes

Facilitation Tips
Gather a multi-disciplinary team to fill out the table
Use a color to indicate distinctions or defining characteristics that are supported with fact
Use another color to indicate where additional data is required; separate opinion from fact
Add rows if there are multiple answers to a question; use one row per answer
Don't let the team get bogged down in creating accurate diction or placing the comments in the correct box; the value is in the discussion and the shared understanding it builds
(Figure: distinctions were weighted and placed in an influence matrix; inputs identified in a cause-and-effect exercise were ranked by their influence.)
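The influence-matrix step can be sketched as a weighted scoring exercise: each candidate cause is rated against the weighted distinctions, and the weighted sum ranks the causes. All weights, ratings, and cause names below are invented for illustration, not taken from the actual project:

```python
# Weighted distinctions (from the Is/Is-Not contrast); weights are made up.
distinctions = {"step change in 2005": 9, "slit edges only": 7, "single flake": 3}

# How strongly each candidate cause explains each distinction (0-9 scale).
# Cause names and ratings here are illustrative placeholders.
ratings = {
    "new slitter blade supplier": {"step change in 2005": 9, "slit edges only": 9, "single flake": 3},
    "coating formulation drift":  {"step change in 2005": 3, "slit edges only": 1, "single flake": 1},
}

def score(cause: str) -> int:
    # Weighted sum: distinction weight times the cause's rating.
    return sum(distinctions[d] * r for d, r in ratings[cause].items())

ranked = sorted(ratings, key=score, reverse=True)
for cause in ranked:
    print(f"{score(cause):4d}  {cause}")
```

The highest-scoring cause is the one that explains the most heavily weighted distinctions, which matches the KT principle that the most probable cause accounts for the most facts.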
JKL Alternative Points: KT Use
KT was incorporated several years ago into Kodak's Black
Belt / 6 Sigma training program
Important to have unbiased facilitator
Process flow:
Define Is / Is Nots (helps to really define the problem)
Develop hypotheses from the group; everyone contributes;
hypotheses are tested to see if they fit the data
Develop an action-item register and identify where more data is needed
Can be done on an easel pad, in Word, or in Excel;
whatever works for the group to get started
Many successful examples (10+) in 313 in the past 3 years
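The "hypotheses tested to see if they fit the data" step in the process flow above can be sketched as a consistency check: a hypothesis survives only if it explains every IS fact and does not predict any IS NOT fact. The facts and hypotheses below are illustrative, not from the actual project:

```python
# Observed facts from the Is/Is-Not table (illustrative wording).
facts_is = {"slit edges flaking", "started early 2005"}
facts_is_not = {"chopped edges flaking"}

# Candidate hypotheses: what each one explains and what it would predict.
hypotheses = {
    "blade wear": {"explains": {"slit edges flaking", "started early 2005"},
                   "predicts": set()},
    "coating defect": {"explains": {"slit edges flaking"},
                       "predicts": {"chopped edges flaking"}},
}

def fits(h: dict) -> bool:
    # Survives if it covers every IS fact and predicts no IS NOT fact.
    return facts_is <= h["explains"] and not (h["predicts"] & facts_is_not)

survivors = [name for name, h in hypotheses.items() if fits(h)]
print("Hypotheses consistent with the data:", survivors)  # ['blade wear']
```

Hypotheses that fail the check are dropped or reworked; the survivors are the ones worth carrying into the influence matrix.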
This KT effort directly led to a successful Green Belt
project that Nick End completed in December 2009