
Pareto Chart

Pareto Charts Provide Focus

Pareto charts show us where to focus our problem-solving efforts, and are very useful for improving metrics like waste, downtime, warranty, etc. The power of Pareto charts lies in the Pareto Principle, which can be stated as: typically a small number of factors will influence a given outcome the most. Pareto charts provide crystal-clear focus. For example, a student’s chances of getting into college will be mostly determined by two things: high school grades and standardized testing scores. Books have been written about the many factors affecting college acceptance decisions, but the reality is that excellent grades and competitive testing scores make up the majority of the equation for most schools. Students who excel in these two areas will have their choice of colleges to attend.

A Practical Example

Imagine working on a project to reduce your energy bills at home. There are three possible approaches you might take:

(1) minimize the use of all the appliances in your home simultaneously, (2) minimize the use of appliances that you think use the most energy, or (3) build a Pareto chart showing the energy consumed by each type of appliance, and focus on the high-energy appliances. The first approach will result in a great deal of sacrifice but will only produce marginal results, because focusing on all appliances equally takes focus away from the vital-few appliances that use the most energy. The second approach is a roll of the dice, and won’t deliver a high success rate, on average. The Pareto chart that could be used for approach (3) is below (source data from Clark Public Utilities):

[Pareto chart: annual home energy costs by appliance]

The Pareto chart above brings immediate focus to the project: 5 out of 22 appliances consume 75% of home energy costs. Focus on those appliances and forget about the rest. A resulting action list might be –

  • Buy a more efficient water heater.

  • Negotiate the heating oil contract in the summer months to obtain a better price.

  • Get rid of the water bed – who would have guessed that a heated water bed used so much energy?

  • Switch from incandescent to fluorescent light bulbs.

  • Enjoy the TV, microwave oven, computer, etc. – they use an insignificant amount of energy compared to the top 5 appliances in the house.

The Pareto Principle is at work everywhere, and the real goal of any Six Sigma project is to find the top two or three factors that make up the true Pareto chart for a given problem, and then control those factors to achieve breakthrough performance. It’s that simple.

A Typical Pareto Chart

Here is a simple Pareto chart showing reasons for arriving late to work over the past year:

[Pareto chart: reasons for arriving late to work over the past year]

This is a standard format for Pareto charts – the count is noted on the left y-axis: this person was late eight times due to his alarm clock not sounding. The blue line and right-hand axis show the cumulative percentage for all reasons noted on the Pareto chart: in this case, the top two causes (alarm clock not sounding and sleeping through the alarm) account for 73% of the reasons for being late over the last year.

How to Make a Pareto Chart

  • 1. Clearly define the scope

Define the scope clearly – examples include reasons for shipment delays, reasons for scrap in plant XYZ, reasons for wrong medication administered, etc.

  • 2. Find out if data already exists

There are many cases where Pareto data already exists, but has never been analyzed. For example, a manufacturing plant may not have the details behind why materials are being scrapped, but most plants will know what is being scrapped, since this information is necessary for inventory control. Building the high-level Pareto on what materials are being scrapped will help focus the data collection effort for building sub-Paretos on the Whys.

  • 3. Collect the data

When data does not already exist, a carefully planned data collection effort will be needed. Pareto categories (that will go along the bottom axis) should be anticipated ahead of time if possible.

  • 4. Summarize and sort the data in descending order

Here is a data set, ready to be plotted -

[Data table: late-arrival reasons and counts, sorted in descending order]
  • 5. Create the chart

Use this Excel® file if you need a starting point, or check out www.paretochart.org.
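If you would rather script the chart than use a spreadsheet, here is a minimal Python/matplotlib sketch of a Pareto chart with count bars and a cumulative-percentage line. The late-to-work counts are illustrative, chosen so the top two causes account for roughly 73%, as in the example above:

```python
import matplotlib.pyplot as plt

# A basic Pareto chart: counts on the left axis, cumulative
# percentage on the right axis. Counts below are illustrative.
reasons = ["Alarm didn't sound", "Slept through alarm", "Traffic",
           "Kids", "Weather"]
counts = [8, 6, 2, 2, 1]                     # already sorted descending

total = sum(counts)
cum_pct = [100 * sum(counts[:i + 1]) / total for i in range(len(counts))]

fig, ax1 = plt.subplots()
ax1.bar(reasons, counts)
ax1.set_ylabel("Count")
ax1.tick_params(axis="x", rotation=30)
ax2 = ax1.twinx()                            # right-hand axis for cumulative %
ax2.plot(reasons, cum_pct, color="tab:blue", marker="o")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 100)
plt.tight_layout()
plt.show()
```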

Diving Deep with Sub-Pareto Charts

It’s very common to produce sub-Pareto charts as well. For example, in the above case it might be useful to build a Pareto chart that focuses on the reasons behind the first bar on the top-level Pareto (why the alarm clock didn’t sound). The sub-Pareto chart might look like this –

[Sub-Pareto chart: reasons the alarm clock didn’t sound]

Now we are finally getting down to some actionable items, like buying an alarm clock with a backup battery in case of a power loss.

Two Situations Where Pareto Charts are Not Helpful

Pareto charts are very useful when top-level reasons for a problem are fairly straightforward and can be categorized. This is commonly the case with business metrics. But there are situations where Pareto charts are not helpful or are simply irrelevant to the problem at hand:

  • 1. Improving process capability for a specific CTQ

Many Six Sigma projects are focused at the CTQ level, and Pareto charts are not very helpful in these situations. For example, a Black Belt focused on achieving six sigma capability on a machined casting dimension will have little use for a Pareto chart.

  • 2. “Flat” Pareto charts

There are cases where the Pareto Principle simply does not apply. This is especially true when the major causes surrounding a particular problem are already addressed, and only the long “tail” of the original Pareto remains. Usually at this point in time the underlying metric is performing very well, and the management team should re-evaluate the project’s worth before proceeding (i.e. return to the Define phase and make sure the team is focused on the best possible project).

Wrap Up

There are many situations where a process has been neglected over time (i.e. not supported), and the team running the process on a daily basis truly knows the reasons behind the poor performance. We’ve seen this with scrap reduction projects – basic tooling refurbishments and preventive maintenance have reduced scrap rates by more than 80%. The Pareto chart is an excellent tool for documenting the current state and setting priorities in cases like this.

Flowchart

Developing a process flowchart early in the DMAIC methodology has several benefits –

  • 1. Not all team members are familiar with the entire process at the start of the project. Developing a process flowchart in a group session gives all team members a full appreciation for the inputs, outputs, controls, and value-added operations.

  • 2. A good flowchart helps structure the Analyze phase, as team members consider possible sources of variation. It’s easy to overlook major causes of variation, and a complete process flowchart will help minimize this risk.

  • 3. During the Control phase the team must decide on process controls and mistake proofing measures for long term control. Having a flowchart makes this process easier, especially as the team tries to work as far “upstream” as possible when implementing process controls.

The following symbols are typically used in process flowcharts –

[Table of standard process flowchart symbols]

After the initial flowchart is completed, we recommend adding the inputs (x’s) and outputs (y’s) of each process step. Here is a sample flowchart with the x’s and y’s noted below and above each process step:

[Sample flowchart with inputs (x’s) and outputs (y’s) noted below and above each process step]

There are a number of software tools available for creating process flowcharts, but Microsoft’s Powerpoint® package is available to most professionals and contains the symbols necessary to create a process flowchart. Here are a few tips for planning a productive flowcharting session –

  • Identify a facilitator who is not an expert on the process being flowcharted.

  • Define ahead of time where the process starts and ends.

  • Start by listing out all of the process steps in a simple list. Then take a step back and see if anything is missing, prior to creating the flowchart.

Gage R&R

Gage R&R (Gage Repeatability and Reproducibility) is the amount of measurement variation introduced by a measurement system, which consists of the measuring instrument itself and the individuals using the instrument. A Gage R&R study is a critical step in manufacturing Six Sigma projects, and it quantifies three things:

  • 1. Repeatability – variation from the measurement instrument

  • 2. Reproducibility – variation from the individuals using the instrument

  • 3. Overall Gage R&R, which is the combined effect of (1) and (2)

The overall Gage R&R is normally expressed as a percentage of the tolerance for the CTQ being studied, and a value of 20% Gage R&R or less is considered acceptable in most cases. Example: for a 4.20mm to 4.22mm specification (0.02 total tolerance) on a shaft diameter, an acceptable Gage R&R value would be 20 percent of 0.02mm (0.004mm) or less.
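As a quick illustration of this 20%-of-tolerance rule of thumb, here is a tiny Python sketch (the helper name and the second set of numbers are just for this example):

```python
# 20%-of-tolerance rule of thumb from the shaft diameter example.
def grr_acceptable(grr, lsl, usl, max_fraction=0.20):
    """Return True if the overall Gage R&R is within the rule of thumb."""
    return grr <= max_fraction * (usl - lsl)

print(grr_acceptable(0.003, lsl=4.20, usl=4.22))  # True  (0.003 <= 0.004)
print(grr_acceptable(0.006, lsl=4.20, usl=4.22))  # False (0.006 >  0.004)
```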


The Difference Between Gage R&R and Accuracy (Bias)

A Gage R&R study quantifies the inherent variation in the measurement system, but measurement system accuracy (more specifically referred to as bias) must be verified through a calibration process. For example, when reading an outdoor thermometer, we might find a total Gage R&R of five degrees, meaning that we will observe up to five degrees of temperature variation, independent of the actual temperature at a given time. However, the thermometer itself might also be calibrated ten degrees to the low side, meaning that, on average, the thermometer will read ten degrees below the actual temperature. The effects of poor accuracy and a high Gage R&R can render a measurement system useless if not addressed.

Measurement system variation is often a major contributor to the observed process variation, and in some cases it is found to be the number-one contributor. Remember, Six Sigma is all about reducing variation. Think about the possible outcomes if a high-variation measurement system is not evaluated and corrected during the Measure phase of a DMAIC project – there is a good chance that the team will be mystified by the variation they encounter in the Analyze phase, as they search for variation causes outside the measurement system.

Measurement system variation is inherently built into the values we observe from a measuring instrument, and a high-variation measurement system can completely distort a process capability study (not to mention the effects of false accepts and false rejects from a quality perspective). The following graph shows how an otherwise capable process (Cpk = 2.0: this is a Six Sigma process) is portrayed as marginal or poor as the Gage R&R percentage increases:

[Graph: observed process capability (Cpk) versus Gage R&R percentage]

Conducting a Gage R&R Study

GR&R studies can be conducted on both variable (gaging that produces data) and attribute (gaging that produces a “go/no-go” result) gaging. Prior to conducting a Gage R&R, the following steps/precautions should be taken –

  • Calibrate the gage – ensure that the gage is calibrated through its operating range – keep in mind that Gage R&R and gage accuracy are two different things.

  • Check the gage resolution – the gage should have sufficient resolution to distinguish between several values within the tolerance range of the feature being measured. As a general rule, the gage should be able to distinguish at least ten readings within the tolerance range. See distinct categories for more information.

  • Collect samples to be measured – it’s important to collect samples (parts, materials, or whatever is being measured) that represent the majority of the variation present in the process. Sometimes it is helpful to have inspectors set aside a group of parts that represent the full spectrum of measurements. Ten samples are considered a good number for a Gage R&R study in a manufacturing environment.

  • Plan for within-product variation if necessary – sometimes the same characteristic can be measured in multiple locations on the same sample, and in these cases it’s a good idea to mark the item being measured to indicate where the measurement should take place. For example, if the diameter of a washer is being measured, the roundness of each washer in the study could affect the Gage R&R results, so mark the location on each washer where the diameter measurement should be taken. The same guideline applies to things like color measurements, where within-part color variation can overshadow the measurement system’s performance. Once again, clearly note the location on the sample where the color should be measured.

  • Identify operators – a Gage R&R study can be done with two operators, but a minimum of three operators is recommended for a meaningful study. Also make sure that at least one of the operators is an individual who will interpret the gage during normal production operations.

  • Document the measurement method and train the operators to follow it – this can save the team from having to repeat the study.

  • Plan for randomization – it’s important to randomize the sequence of measurements (who measures which part, and when), to block the effects of uncontrollable factors that might otherwise be attributed to specific operators. Each operator should measure each part three times, so with ten parts a total of 90 measurements will be taken.
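One way to generate the randomized run order described in the last point is to script it; here is a short Python sketch (operator names and part numbers are placeholders):

```python
import random

# Randomized run order for a Gage R&R study:
# 3 operators x 10 parts x 3 trials = 90 measurements.
operators = ["A", "B", "C"]          # placeholder operator names
parts = range(1, 11)

run_order = [(op, part, trial)
             for op in operators
             for part in parts
             for trial in (1, 2, 3)]
random.shuffle(run_order)            # blocks uncontrolled, time-related factors

for op, part, trial in run_order[:5]:    # first few runs of the sequence
    print(f"Operator {op} measures part {part} (trial {trial})")
```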

Generating the Gage R&R Results

Gage R&R results can be generated using a number of statistical software packages, or with a spreadsheet (see above or go to Excel files).
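For teams that prefer to script the calculation, here is a minimal Python sketch of the average-and-range method as we understand it from the AIAG approach. K1 and K2 are the commonly published constants for 3 trials and 3 operators, the result is reported as a percentage of tolerance using a six-standard-deviation spread, and the data is simulated purely for illustration:

```python
import numpy as np

K1, K2 = 0.5908, 0.5231  # AIAG constants: 3 trials (K1), 3 operators (K2)

def gage_rr_percent(data, tolerance):
    """Average-and-range Gage R&R, as a percent of tolerance.

    `data` has shape (operators, parts, trials).
    """
    n_ops, n_parts, n_trials = data.shape
    # Repeatability (EV): average within-cell range, scaled by K1.
    ranges = data.max(axis=2) - data.min(axis=2)
    ev = ranges.mean() * K1
    # Reproducibility (AV): range of operator averages, scaled by K2,
    # with the repeatability contribution subtracted out.
    op_means = data.mean(axis=(1, 2))
    x_diff = op_means.max() - op_means.min()
    av = np.sqrt(max((x_diff * K2) ** 2 - ev**2 / (n_parts * n_trials), 0.0))
    grr = np.sqrt(ev**2 + av**2)          # combined R&R standard deviation
    return 100 * 6 * grr / tolerance      # 6-sigma spread vs. tolerance

# Illustrative study: 3 operators x 10 parts x 3 trials on the
# 4.20-4.22 mm shaft diameter example (simulated readings).
rng = np.random.default_rng(1)
data = 4.21 + rng.normal(0, 0.0005, size=(3, 10, 3))
print(f"Gage R&R: {gage_rr_percent(data, tolerance=0.02):.1f}% of tolerance")
```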

Reducing Measurement System Variation

If the Gage R&R is greater than 20% (see guidelines above), then the measurement device and/or the measurement method will need to be addressed. Seasoned Six Sigma professionals use graphical plots all the time, and we have found a few plots to be very helpful when studying Gage R&R results:

[Example plots for studying Gage R&R results]

A word of advice to quality leaders: always ask to see a GR&R study before accepting the results of a process capability study. For an in-depth review of Gage R&R and overall measurement systems analysis (MSA), purchase a copy of the Automotive Industry Action Group’s MSA guide – this is an outstanding publication.

Process Sigma

Process sigma (also referred to as sigma level) is a measure of process capability: the higher the process sigma, the more capable the process is. A Six Sigma process has a short-term process sigma of 6, and a long-term process sigma of 4.5 (see why not 4.5 sigma?). The theoretical defect rate for a Six Sigma process is 3.4 defects per million (DPM). Simply put, the process sigma indicates how many standard deviations (“sigmas”) can fit inside the gap between the process average and the nearest specification limit:

[Histogram: 4.5 standard deviations fit between the process average and the nearest specification limit – a process sigma of 4.5]

Note that the above example shows a histogram for a particular CTQ, so the process sigma of 4.5 applies to the specific CTQ being studied. If an overall long-term defect rate is available for all defects, it is possible to state the process sigma for the entire process (all CTQ’s and their associated defects) by locating the defect rate on the Sigma Conversion Chart and finding the corresponding sigma level. Typically, however, a Six Sigma project will be sufficiently narrowed to focus on one or two CTQ’s, which will be evaluated separately for their process sigma levels.

In cases where both lower and upper specification limits exist and the process is not capable on either side of the distribution, the process sigma can be calculated by adding the theoretical DPM levels on each side of the distribution (using the Sigma Conversion Chart) and then finding the corresponding process sigma for the combined DPM level.

A more common measure of process capability is Cpk, which is equal to the process sigma divided by 3. So a Six Sigma process has a Cpk of 2.0.
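Here is a minimal Python sketch of both calculations, assuming normally distributed data and the conventional 1.5-sigma shift between short-term and long-term performance (the function names and sample numbers are ours):

```python
from scipy.stats import norm

def process_sigma(mean, std, lsl=None, usl=None):
    # Standard deviations between the mean and the nearest spec limit.
    gaps = [abs(limit - mean) / std for limit in (lsl, usl) if limit is not None]
    return min(gaps)

def dpm_from_sigma(short_term_sigma):
    # Long-term defect rate: tail area beyond the 1.5-sigma-shifted level.
    return norm.sf(short_term_sigma - 1.5) * 1e6

print(process_sigma(10.2, 0.1, usl=10.65))  # 4.5, as in the histogram above
print(dpm_from_sigma(6.0))                  # ~3.4 DPM for a Six Sigma process
```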


What are the defect levels associated with various process sigma levels?

A Sigma Conversion Chart provides the theoretical defect rates associated with various sigma-levels. The conversion chart assumes that the underlying data is continuous and normally distributed, as in the histogram above.

[Sigma Conversion Chart: sigma levels and corresponding defect rates, with short-term and long-term columns]

Long-Term Versus Short-Term Data

Notice that the above chart includes columns for long-term and short-term process sigma levels, depending on how extensive our sample data set is. Use the short-term column if data has been collected over a limited period of time, and the long-term column if data is extensive and likely includes all variation sources that can influence the CTQ over the long run.

Fishbone Diagram

Example and Template

Fishbone diagrams, also known as cause-and-effect diagrams, are about organizing possible causes behind a given problem. Of course, all possibilities will need to be proved or disproved during the Analyze phase of the project. The following Fishbone Diagram example looks at possible causes of employee turnover, based on an excellent article from Sigma Assessment Systems, Inc.

Constructing a Fishbone Diagram

Visit fishbonediagram.org for a thorough review of fishbone diagrams and Excel/PPT templates.

[Fishbone diagram example: possible causes of employee turnover]

5-Why

5-Why is a simple approach for exploring root causes and instilling a “Fix the root cause, not the symptom,” culture at all levels of a company. Invented by Japanese Industrialist Sakichi Toyoda, the idea is to keep asking “Why?” until the root cause is arrived at. The number five is a general guideline for the number of Why’s required to reach the root cause level, but asking “Why?” five times versus three, four, or six times is not a rigid requirement. What matters is that we fix recurring problems by addressing true causes and not symptoms - this is true progress.

5-Why Example

(Here is the 5-Why Powerpoint file used for these graphics)

[5-Why example: from the problem statement down to the root cause]

5-Why Benefits – Addressing Root Causes

The 5-Why thought process guides us to lasting corrective actions, because we address root causes and not symptoms. Let’s look at the effects of addressing the 1st Why versus the 5th Why in the above exercise -

[Graphic: corrective action effectiveness improves as deeper Whys are addressed]

Note the improvement in corrective action effectiveness as each deeper Why is addressed above:

  • Responding to the first Why in the 5-Why process is almost counterproductive: we are retraining the stock pickers in our warehouse, because we assume that they pulled the wrong item from our inventory. In reality, the stock pickers performed their jobs perfectly, and the real cause was mislabeled parts coming from the supplier.

  • Addressing the third Why (having the supplier check their stock for other mislabeled products) is much more effective than addressing the first Why, but this action will have no lasting effect beyond fixing the current inventory situation.

  • Addressing the fifth Why is powerful, because it focuses on the true cause: mistakes being made in the label application process. World class companies routinely address systemic causes like the 5th “Why?” above, eliminating reactionary problem solving and shifting resources to prevention activities over the long run.

5-Why Reporting Format

The above example shows a typical 5-Why format for narrow problem statements where one cause-path exists. For broader problem statements where two or more causes are involved, the following format is effective –

[5-Why format for broader problem statements with multiple cause paths]

This format is also very useful for explaining the causes behind the top bars on a Pareto chart, as in the case above where a team has collected data and built a Pareto chart on top-level reasons for downtime. Instead of simply showing a Pareto chart with no further insight into each Pareto bar, the team selects the top two items (material shortages and downtime on machine ABC) and uses the 5-Why format to explore the root causes of each.

Noting Actions Directly on 5-Why’s

Remember that 5-Why exercises are only useful when actions come from the meeting. Take time to document those actions/next steps with the team, and then follow up to ensure that they are implemented. Recording action items to the 5-Why document itself is a great practice, as shown below. Don’t forget to agree on the action owners and estimated timing!

[5-Why document with action items, owners, and timing noted]

Leading a 5-Why Exercise

  • 1. Schedule the Meeting

Schedule a time for the 5-Why discussion, and invite individuals who know about the product and/or process at hand. Your goal should be to run an efficient meeting and complete the 5-Why in one session, no more than an hour long. 5-Why is not meant to be a lengthy exercise.

  • 2. Plan for a Successful Outcome

Clearly state the problem and desired outcome in advance of the meeting, via the meeting notice, email, etc. An example of this would be: “Complete a 5-Why analysis on problem ABC and understand next steps for further investigation / corrective action.” Have an easel pad or large dry-erase board in the meeting room, and have the problem statement and 5-Why column headers already documented before the attendees show up.

  • 3. Run a Successful Meeting

Remember that the success of your meeting will determine attendance at your future meetings! You want to have a reputation for holding productive meetings, being highly respectful of attendees while keeping the meeting on track to achieve desired outcomes.

Take a couple of minutes at the start of the meeting and explain that 5-Why is a way to document root causes, showing an example (use the Powerpoint file above if you don’t have a more relevant example from your company).

Take another couple of minutes to clearly state the problem, and make sure everyone agrees with the problem statement. The more specific the problem statement, the better. See the video on this page for handling the 5-Why discussion itself – there are a few important points that will help you manage the discussion in the meeting.

  • 4. Agree on Follow-Up Actions / Next Steps

Take time to document those actions/next steps with the team, and then follow up to ensure those actions are implemented.

Limitations

5-Why is useful for straightforward problems with systemic causes like the case noted above, where poor preventive maintenance is the systemic cause for unplanned equipment downtime. In cases when the root cause is not readily apparent, 5-Why by itself will not solve the problem. For example, if a toy manufacturer needs to improve color consistency in a product, they will need to understand which factors influence color the most (otherwise they might not need a Six Sigma project to begin with). In cases like this, structured analysis methods like multi-vari, correlation analysis, and DOE may be necessary to actually learn the physical relationships between the input variables (process settings, raw materials, etc.) and output variables (in this case, color). If your team is attacking a number of product variation challenges, then read Keki Bhote’s World Class Quality for a highly effective approach.

Hypothesis Testing

Hypothesis testing is used in the Six Sigma Analyze Phase for screening potential causes. A hypothesis test calculates the probability, p, that an observed difference between two or more data samples can be explained by random chance alone, as opposed to any fundamental difference between the underlying populations that the samples came from. So hypothesis testing answers the question: what is the probability that these data samples actually came from the same underlying population? If this probability, known as the p-value, is small (typically below 0.05), then we conclude that the two samples likely came from different underlying populations. For example, a p-value of 0.02 indicates that there is only a 2% chance that the data samples came from the same underlying population.

Here are a few situations where hypothesis testing assists us in the problem solving process –

  • Evaluating a proposed process improvement to see if its effect is statistically significant, or if the same improvement could have occurred by random chance.

  • Evaluating several process factors (process inputs, or x’s) in a designed experiment to understand which factors are significant to a given output, and which are not.

  • Understanding the likelihood that a data sample comes from a population that follows a given probability distribution (i.e. normal, exponential, uniform, etc.).

An individual untrained in basic statistical knowledge might naturally question the need for a hypothesis test: “Why can’t we simply compare the average values of a given CTQ, before and after a process change, to determine if the change we made actually made a difference?” The answer is that the supposed improvement we observe might have nothing to do with the change we made to the process, and might have everything to do with chance variation. In other words, the two data sets might actually have come from the same underlying population.

Statistical Significance Vs. Practical Significance

There are many situations where a process change has a statistically significant effect on a CTQ, but an insignificant effect in real world terms. For example, an individual working to improve his or her vehicle’s fuel economy might run a hypothesis test comparing fuel economy at driving speeds of 60 mph and 70 mph on the highway. The result might show that driving at the lower speed has a statistically significant effect on the CTQ, which in this case is miles-per-gallon fuel economy. However, the actual improvement in fuel economy might only be 0.5 miles per gallon, which might be deemed not worth the extra time it will take to get to work each day.
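To make this concrete, here is a small Python sketch of the fuel economy comparison using a two-sample (Welch’s) t-test; the mpg readings are simulated for illustration only:

```python
import numpy as np
from scipy import stats

# Two-sample (Welch's) t-test on the fuel economy example.
# The mpg readings below are simulated for illustration only.
rng = np.random.default_rng(42)
mpg_60 = rng.normal(30.5, 1.2, size=30)   # fuel economy at 60 mph
mpg_70 = rng.normal(30.0, 1.2, size=30)   # fuel economy at 70 mph

t_stat, p_value = stats.ttest_ind(mpg_60, mpg_70, equal_var=False)
print(f"p-value: {p_value:.3f}")  # < 0.05 suggests a real difference

# Statistical significance is not the whole story - the practical
# significance is the size of the difference itself:
print(f"observed difference: {mpg_60.mean() - mpg_70.mean():.2f} mpg")
```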

Regression Analysis

The goal of regression analysis is to determine the values of parameters for a function that cause the function to best fit a set of data observations that you provide. In linear regression, the function is a linear (straight-line) equation. For example, if we assume the value of an automobile decreases by a constant amount each year after its purchase, and for each mile it is driven, the following linear function would predict its value (the dependent variable on the left side of the equal sign) as a function of the two independent variables which are age and miles:

value = price + depage*age + depmiles*miles

where value, the dependent variable, is the value of the car, age is the age of the car, and miles is the number of miles that the car has been driven. The regression analysis performed by NLREG will determine the best values of the three parameters, price, the estimated value when age is 0 (i.e., when the car was new), depage, the depreciation that takes place each year, and depmiles, the depreciation for each mile driven. The values of depage and depmiles will be negative because the car loses value as age and miles increase.

For an analysis such as this car depreciation example, you must provide a data file containing the values of the dependent and independent variables for a set of observations. In this example each observation data record would contain three numbers: value, age, and miles, collected from used car ads for the same model car. The more observations you provide, the more accurate will be the estimate of the parameters. The NLREG statements to perform this regression are shown below:

Variables value, age, miles;
Parameters price, depage, depmiles;
Function value = price + depage*age + depmiles*miles;
Data;
{data values go here}

Once the values of the parameters are determined by NLREG, you can use the formula to predict the value of a car based on its age and miles driven. For example, if NLREG computed a value of 16000 for price, -1000 for depage, and -0.15 for depmiles, then the function

value = 16000 - 1000*age - 0.15*miles

could be used to estimate the value of a car with a known age and number of miles.
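The same kind of linear fit can be reproduced with any ordinary least-squares solver; here is a Python/numpy sketch using made-up observations constructed to follow the example parameters above (NLREG itself is not required for the linear case):

```python
import numpy as np

# Least-squares fit of: value = price + depage*age + depmiles*miles
# Observations are made up to follow the example fit from the text
# (price = 16000, depage = -1000, depmiles = -0.15), so lstsq should
# recover approximately those parameter values.
age   = np.array([1, 2, 3, 4, 5, 6], dtype=float)
miles = np.array([8e3, 15e3, 24e3, 30e3, 41e3, 48e3])
value = 16000 - 1000 * age - 0.15 * miles

# Design matrix: a column of ones (for `price`), then age and miles.
X = np.column_stack([np.ones_like(age), age, miles])
coef, *_ = np.linalg.lstsq(X, value, rcond=None)
price, depage, depmiles = coef
print(price, depage, depmiles)               # ~16000, ~-1000, ~-0.15

# Estimate the value of a 3-year-old car with 35,000 miles:
print(price + depage * 3 + depmiles * 35e3)  # ~7750
```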

If a perfect fit existed between the function and the actual data, the actual value of each car in your data file would exactly equal the predicted value. Typically, however, this is not the case, and the difference between the actual value of the dependent variable and its predicted value for a particular observation is the error of the estimate, which is known as the “deviation” or “residual”. The goal of regression analysis is to determine the values of the parameters that minimize the sum of the squared residual values for the set of observations. This is known as a “least squares” regression fit.

Here is a plot of a linear function fitted to a set of data values. The actual data points are marked with “x”. The red line between a point and the fitted line represents the residual for the observation.

[Plot: linear function fitted to data points, with residuals shown in red]

NLREG is a very powerful regression analysis program. Using it you can perform multivariate, linear, polynomial, exponential, logistic, and general nonlinear regression. What this means is that you specify the form of the function to be fitted to the data, and the function may include nonlinear terms such as variables raised to powers and library functions such as log, exponential, sine, etc. For complex analyses, NLREG allows you to specify function models using conditional statements (if, else), looping (for, do, while), work variables, and arrays. NLREG uses a state-of-the-art regression algorithm that works as well as, or better than, any you are likely to find in other, more expensive, commercial statistical packages.

As an example of nonlinear regression, consider another depreciation problem. The value of a used airplane decreases for each year of its age. Assuming the value of a plane falls by the same amount each year, a linear function relating value to age is:

value = p0 + p1*Age

Where p0 and p1 are the parameters whose values are to be determined. However, it is a well-known fact that planes (and automobiles) lose more value the first year than the second, and more the second than the third, etc. This means that a linear (straight-line) function cannot accurately model this situation. A better, nonlinear, function is:

value = p0 + p1*exp(-p2*Age)

Where the “exp” function is the value of e (2.7182818...) raised to a power. This type of function is known as “negative exponential” and is appropriate for modeling a value whose rate of decrease is proportional to the difference between the value and some base value. Here is a plot of a negative exponential function fitted to a set of data values.

[Plot: negative exponential function fitted to data values]
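For readers working in Python, scipy’s curve_fit performs the same kind of true nonlinear least-squares fit; the sketch below fits the negative exponential model to made-up airplane values:

```python
import numpy as np
from scipy.optimize import curve_fit

# True nonlinear least-squares fit of the negative exponential model.
# Airplane values (in $1,000s) are made up to roughly follow
# p0 = 30, p1 = 80, p2 = 0.2.
def model(age, p0, p1, p2):
    return p0 + p1 * np.exp(-p2 * age)

age   = np.array([1, 2, 4, 6, 8, 10, 15, 20], dtype=float)
value = np.array([95.5, 83.6, 66.0, 54.1, 46.2, 40.8, 34.0, 31.5])

# curve_fit minimizes squared residuals of the original function -
# no linearizing transformation of the data is involved.
params, _ = curve_fit(model, age, value, p0=(25.0, 70.0, 0.1))
print(params)   # fitted p0, p1, p2 (~30, ~80, ~0.2)
```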

Much of the convenience of NLREG comes from the fact that you can enter complicated functions using ordinary algebraic notation. Examples of functions that can be handled with NLREG include:

Linear: Y = p0 + p1*X
Quadratic: Y = p0 + p1*X + p2*X^2
Multivariate: Y = p0 + p1*X + p2*Z + p3*X*Z
Exponential: Y = p0 + p1*exp(X)
Periodic: Y = p0 + p1*sin(p2*X)
Misc: Y = p0 + p1*Y + p2*exp(Y) + p3*sin(Z)

In other words, the function is a general expression involving one dependent variable (on the left of the equal sign), one or more independent variables, and one or more parameters whose values are to be estimated. NLREG can handle up to 500 variables and 500 parameters.

Because of its generality, NLREG can perform all of the regressions handled by ordinary linear or multivariate regression programs as well as nonlinear regression.

Some other regression programs claim to perform nonlinear regression but actually do it by transforming the values of the variables such that the function is converted to linear form. They then perform a linear regression on the transformed function. This technique has a major flaw: it determines the values of the parameters that minimize the squared residuals for the transformed, linearized function rather than the original function. This is different than minimizing the squared residuals for the actual function, and the estimated values of the parameters may not produce the best fit of the original function to the data. NLREG uses a true nonlinear regression technique that minimizes the squared residuals for the actual function. Also, NLREG can handle functions that cannot be transformed to a linear form.

Histograms

Frequency histograms are the ultimate tool for visualizing process capability. The height of each bar on a histogram shows how often a given range of values occurs in our data. Histograms can be made manually, but there are a number of software tools available to get the job done easily. Consider each bar on a histogram a “bucket” that is assigned to a given range of values, with the height of each bar representing the number of data points that fall into each “bucket.” The following histogram shows 50 tire pressure readings from an outgoing product audit at a motorcycle factory:

[Histogram: 50 tire pressure readings from the outgoing product audit]

Note that the x-axis is divided into “buckets” or cells, and the y-axis shows how many data points fall inside each bucket:

Finally, we can add specification limits to graphically show how capable the process is:

[Histogram with specification limits added]
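A histogram like the ones above can be produced in a few lines of Python/matplotlib; the tire pressure readings and specification limits below are simulated stand-ins, not the actual audit data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Frequency histogram with specification limits, patterned on the tire
# pressure example. Readings and spec limits are simulated stand-ins.
rng = np.random.default_rng(7)
pressures = rng.normal(32.0, 0.6, size=50)   # 50 readings, psi
lsl, usl = 30.0, 34.0                        # assumed spec limits

plt.hist(pressures, bins=8, edgecolor="black")   # each bar is a "bucket"
plt.axvline(lsl, color="red", linestyle="--", label="LSL")
plt.axvline(usl, color="red", linestyle="--", label="USL")
plt.xlabel("Tire pressure (psi)")
plt.ylabel("Count")
plt.legend()
plt.show()
```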

Failure Mode and Effects Analysis

A quick note before we jump into PFMEA. Below is a picture of a fantastic error proofing example that I recently saw in a sandwich shop. In this particular case, the sandwich shop wants to ensure that plastic trays are not accidentally thrown into the trash. Instead of simply relying on the sign that is on the trash can, the shop owner went one step further and made the trash can opening too small to fit the tray. This is a great example of effective error proofing – simple and effective!

[Photo: sandwich shop trash can with an opening too small for the plastic tray]

PFMEA (Process Failure Mode and Effects Analysis) is a structured approach that assigns quality risk levels to each step in a process (manufacturing or transactional). PFMEA is a powerful prevention tool, since it does not wait for defects to occur, but rather anticipates them and implements countermeasures ahead of time. PFMEA is normally used in the Analyze Phase of Six Sigma’s DMAIC Methodology, and the Automotive Industry Action Group (AIAG) publishes a comprehensive PFMEA workbook that is well worth the cost for any team seriously contemplating PFMEA’s. Also, our free Six Sigma Excel Templates page has a link to a PFMEA document.

PFMEA RPN Value – Calculated in the Excel Template

The risk level for each process step is quantified using a Risk Priority Number (RPN) that typically ranges from 1 to 1,000, with 1,000 being the highest possible risk level. The RPN is a product of three risk factors, all ranked on a scale of 1 to 10 –

RPN = Severity x Occurrence x Detection

PFMEA risk factors –

  • Severity – how severe a given defective condition would be to the customer

  • Occurrence – the best estimate of how often the defective condition will occur in the process

  • Detection – the likelihood that the defective condition will be detected prior to reaching the customer. The current or proposed control plan should be referenced when assigning detection values.

Prior to conducting a PFMEA, the facilitator should have a set of ranking criteria for Severity, Occurrence, and Detection, which should be shared with the team ahead of time. An example of Severity, Occurrence, and Detection ranking criteria would be as follows

PFMEA Severity Scale

Typical PFMEA severity rankings

Severity Rank – Description
10 – Hazardous, without warning
9 – Hazardous, with warning
8 – Very High
7 – High
6 – Moderate
5 – Low
4 – Very Low
3 – Minor
2 – Very Minor
1 – None

Severity levels of 9 and 10 are typically reserved for hazardous situations, with a severity of 10 reserved for hazards that the customer is not warned about ahead of time. For example, an electrical shock hazard would typically be rated with a severity of 10.

PFMEA Occurrence Scale

Typical PFMEA occurrence rankings

Occurrence Rank – Description
10 – >100 per 1,000
9 – 50 per 1,000
8 – 20 per 1,000
7 – 10 per 1,000
6 – 5 per 1,000
5 – 2 per 1,000
4 – 1 per 1,000
3 – 0.5 per 1,000
2 – 0.1 per 1,000
1 – <0.01 per 1,000

Keep in mind that the Occurrence value represents how often a given problem can occur in the first place, and has nothing to do with whether or not the problem is detected in the process (this is where the Detection ranking comes into play).

PFMEA Detection Scale

Typical PFMEA detection rankings

Detection Rank – Description
10 – Absolutely Impossible
9 – Very Remote
8 – Remote
7 – Very Low
6 – Low
5 – Moderate
4 – Moderately High
2 – Almost Certain
1 – Certain

So a Detection level of 10 indicates that it is virtually impossible to detect a given defect once it occurs, and a Detection level of 1 indicates that the process is absolutely guaranteed to catch the defect should it occur.

PFMEA Example

While PFMEA’s are most often applied to production processes, they can be applied to any process that follows a sequence of steps to achieve a desired outcome. The attached PFMEA example shows two steps in a process that most people are familiar with – refueling an automobile. Additional comments are noted in red font.

PFMEA Countermeasures

[PFMEA example: two steps from the automobile refueling process, with comments in red]

Countermeasures are actions planned by the team to lower high-RPN (high risk) process steps, and are considered to be the “value added” outputs of PFMEA’s. The best countermeasures utilize error proofing, which is an important step in the Control phase of DMAIC. Let’s look at another process that takes place millions of times around the world each day: homeowners using their automatic garage door openers. A few years ago, it was possible to lower an automatic garage door onto a vehicle. Let’s use PFMEA logic on the process step “Lowering the automatic garage door” –

Process Step: Lowering the Garage Door
Potential Failure Mode: Garage door lowers onto and damages vehicle
Effect: Significant damage to vehicle

Severity: 9 (hazardous, with warning)
Occurrence: 6 (approximately 5 out of every thousand times, the homeowner will start to close the garage door when the vehicle is underneath)
Detection: 3 (high – in most cases the homeowner will realize what they have done and click the remote again to stop the door)

RPN = 9 x 6 x 3 = 162


RPNs over 100 are generally considered unacceptable, and given the severity of lowering a garage door onto a car, a solid countermeasure is needed in this case. Garage door manufacturers might get some benefit from adding warning labels to the garage door remote (the equivalent of training an operator in a manufacturing plant), which might lower the Occurrence rating a point but wouldn't significantly lower the overall risk of damaging a vehicle.

A truly effective countermeasure in this case, and the one actually implemented with most garage door systems, would be a photoelectric sensor to detect the presence of anything underneath the door. If the door is being lowered and the sensor detects an object in the door's path, the circuit that powers the garage door will be automatically opened, causing the door to stop.

Planned countermeasures are noted in the far-right columns of the PFMEA, and there is also a place for the anticipated RPN values following countermeasure implementation. In this case, the adjusted RPN values might be:

Process Step: Lowering the garage door (photoelectric sensor implemented)
Potential Failure Mode: Garage door lowers onto vehicle
Effect: Significant damage to vehicle
Severity: 9 (the effect of a garage door lowering onto a car doesn't change)
Occurrence: 6 (homeowners will be just as likely to press the "down" button on the remote at the wrong time)
Detection: 1 (certain)

RPN = 9 X 6 X 1 = 54

The team has successfully lowered the risk associated with garage doors damaging vehicles, and can know that they have made a true difference for their customers and their company's product liability costs.
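To make the arithmetic concrete, here is a minimal sketch (in Python; not part of the original material) of the RPN calculation before and after the photoelectric sensor countermeasure, using the Severity, Occurrence, and Detection values from the example above:

```python
# A minimal sketch of the RPN arithmetic for the garage door example.
def rpn(severity, occurrence, detection):
    """Risk Priority Number = Severity x Occurrence x Detection."""
    return severity * occurrence * detection

# Values taken from the worked example in the text above.
before = rpn(severity=9, occurrence=6, detection=3)  # no sensor
after = rpn(severity=9, occurrence=6, detection=1)   # photoelectric sensor

print(f"RPN before countermeasure: {before}")  # 162
print(f"RPN after countermeasure:  {after}")   # 54
```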

Conducting a PFMEA

Just as team selection is crucial to a successful Six Sigma project, preparation is crucial to holding an effective PFMEA session. The following preparation steps can make all the difference in conducting a worthwhile PFMEA session:

• Find a good facilitator who can keep the process moving. Teams tend to get bogged down deciding on Severity, Occurrence, and Detection values, which can make the PFMEA process painfully long and cause participants to avoid future sessions. Our rule of thumb is to go with the higher Severity, Occurrence, and Detection values when the team is in doubt – it's better to assume the risk is higher and therefore address it with countermeasures.

• Bring design engineering and process engineering knowledge to the PFMEA session: design engineering to help with Severity levels, and process engineering to help with Occurrence and Detection levels.

• Pass out hard copies of the Severity, Occurrence, and Detection criteria (as noted above) at the start of the meeting and review these criteria with the team.

• Make sure a Process Flow Diagram is done prior to the PFMEA. For more information on pitfalls to avoid with PFMEA, read the top ten reasons why PFMEAs fail.

PFMEA spreadsheets are typically sorted in descending order of Risk Priority Number (RPN) at the end of the session, and time is spent brainstorming and planning countermeasures for the high RPN values. Many businesses have a maximum allowable RPN value for any process, such as 100 or 200, and process development teams (or Six Sigma teams for existing processes) are charged with implementing effective countermeasures on all high-risk process steps. Most PFMEA spreadsheets include additional columns that show revised risk levels based on planned countermeasures.
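To illustrate the sorting and flagging step, here is a minimal sketch (in Python; the process steps and ratings below are hypothetical placeholders, not from the original text) of ordering a PFMEA worksheet by descending RPN and flagging steps above a maximum allowable value:

```python
# A minimal sketch of sorting a PFMEA worksheet by descending RPN.
MAX_ALLOWABLE_RPN = 100  # some businesses use 200 instead

# Hypothetical worksheet rows: step name plus S, O, D ratings.
pfmea = [
    {"step": "Lower garage door",   "S": 9, "O": 6, "D": 3},
    {"step": "Press remote button", "S": 3, "O": 4, "D": 2},
    {"step": "Door reaches floor",  "S": 2, "O": 3, "D": 1},
]

# Compute RPN = Severity x Occurrence x Detection for each row.
for row in pfmea:
    row["RPN"] = row["S"] * row["O"] * row["D"]

# Descending RPN puts the highest-risk steps at the top for review.
pfmea.sort(key=lambda row: row["RPN"], reverse=True)

for row in pfmea:
    flag = "NEEDS COUNTERMEASURE" if row["RPN"] > MAX_ALLOWABLE_RPN else ""
    print(f"{row['step']:<22} RPN={row['RPN']:<4} {flag}")
```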

Lastly, the PFMEA should be maintained as a "living document" and kept up to date as more knowledge is gained around true Severity-Occurrence-Detection values and countermeasure effectiveness.

How to Assess the “Quality” of a PFMEA

For those assessing the quality of PFMEAs submitted to them (e.g., from suppliers), here are a few guidelines that will help:

1. The team has done something about the high RPNs: countermeasures and updated RPN values are documented.

2. The steps in the PFMEA match up with a detailed process flow diagram. This is a sign that the team was thorough and did not lump several process steps together just to get through the exercise.

3. Severity, Occurrence, and Detection values are realistic. Ask the team how they came up with the values, and make sure they have not biased the values to the low end to get through the exercise or to avoid countermeasure work.

DOE

Design of Experiments (DOE) is a structured approach for varying process and/or product factors (x's) and quantifying their effects on process outputs (y's), so that those outputs can be controlled at optimal levels.

For example, a DC motor manufacturer might wish to understand the effects of two process variables, wire tension and trickle resin volume, on motor life. In this case, a simple two-factor (wire tension and trickle resin volume), two-level (low and high values established for each factor) experiment would be a good starting point. Randomizing the order of trials in an experiment helps prevent false conclusions when other significant variables, unknown to the experimenter, affect the results. There are a number of statistical tools available for planning and analyzing designed experiments.
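As an illustration, here is a minimal sketch (in Python; the factor names follow the DC motor example above, and the level labels are placeholders) of laying out a two-factor, two-level full factorial design with a randomized run order:

```python
# A minimal sketch of a two-factor, two-level full factorial design.
import itertools
import random

factors = {
    "wire_tension": ["low", "high"],          # hypothetical levels
    "trickle_resin_volume": ["low", "high"],  # hypothetical levels
}

# Full factorial: every combination of factor levels (2 x 2 = 4 trials).
trials = list(itertools.product(*factors.values()))

# Randomize run order to guard against unknown lurking variables.
random.shuffle(trials)

for run, levels in enumerate(trials, start=1):
    settings = dict(zip(factors.keys(), levels))
    print(f"Run {run}: {settings}")
```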

ANOVA

Analysis of Variance (ANOVA) determines whether statistically significant differences exist between multiple sample groups by comparing the variation or "noise" within sample groups to the average differences between sample groups.
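For instance, here is a minimal sketch of a one-way ANOVA using scipy (the three groups of measurements are made-up illustrative data, not from the original text):

```python
# A minimal sketch of a one-way ANOVA comparing three sample groups.
from scipy import stats

group_a = [12.1, 11.8, 12.4, 12.0, 11.9]  # hypothetical measurements
group_b = [12.6, 12.9, 12.5, 12.7, 12.8]
group_c = [12.2, 12.0, 12.3, 12.1, 12.4]

# The F-test compares between-group differences to within-group noise.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p-value (e.g., below 0.05) suggests at least one group mean
# differs by more than within-group noise can explain.
if p_value < 0.05:
    print("Statistically significant difference between groups")
```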