
QUESTION 1
Executive Summary:

The analysis of flight delays presented by the FAA in its October 3rd report is highly imprecise and misleading. The report presents the flight-delay data in a very simplistic way and fails to accurately analyse how efficiently the different airlines transport their passengers to their destinations in the shortest time possible.

In order to clarify the real information behind the data, the AGSM FT MBA candidates prepared a comprehensive report. Their work is based on statistical analysis and examines the following key criteria:

1. The percentage of delayed flights
   1.1. Taking into consideration cancelled flights
   1.2. Controlling for differences in flight durations

2. The arrival delay in minutes
   2.1. Sample distribution - normality and skewness
   2.2. Investigating the circumstances of the outliers
   2.3. Controlling for differences in flight durations

Their conclusion is that there is no significant difference in performance between Kwantas and Cougar Airlines, either in the percentage of flights delayed or in the number of minutes of delay when it happens. Moreover, most of Kwantas's extreme delays occurred on days when Cougar cancelled some of its flights while Kwantas cancelled none; because those cancellations were treated as on-time flights, they deflated Cougar's apparent average delay time and percentage of delayed flights. Furthermore, when adjusting for differences in flight duration, Kwantas's performance exceeds that of Cougar; in other words, on average, flying from point A to point B is more efficient with Kwantas.

In the report below the AGSM FT MBA candidates show the problems behind the FAA's analysis and present a more precise and comprehensive way to look at the data. By doing this they reach a different conclusion from that of the FAA.

Let us go through their detailed analysis before a recommendation on how to answer the PHB CEO's email is made.




AGSM FT MBA Candidates' Report



This analysis is based on the data from the spreadsheet MNGT5232 S116 A1 Q1.xlsx, which contains the same data used in the FAA report.

1) Analysing the percentage of delayed flights for the two companies.

Initially, one must take into account that, when calculating the percentage of delayed flights, the FAA failed to incorporate the cancelled flights in its calculation. One of the companies had some of its flights cancelled during the period, and the FAA report treated those flights as on-time flights. This had an unintended positive effect on the delay figures for Cougar Airlines. Therefore, if we exclude cancelled flights from the data set and consider only flights that actually took place, the delay figures for both companies change to the following:

                                   KWANTAS    COUGAR
Total flights                          240       120
Cancelled flights                        0         3
Completed flights                      240       117
Delayed flights                         63        31
Delay percentage (of completed)     26.25%    26.49%

Note that with this simple adjustment, once cancelled flights are no longer treated as on-time flights for one of the companies, the delay percentage is now marginally smaller for Kwantas rather than Cougar, which goes against the conclusion found in the FAA's report.

In order to affirm with more academic rigour the absence of a statistically significant difference between the average delays for the two companies, we conducted a two-sample hypothesis test for equality of means. The test uses the samples to estimate the means of the two populations and checks for their equality. The test hypotheses are the following:

H0: μ1 - μ2 = 0
H1: μ1 - μ2 ≠ 0

where μ1 represents the mean of population 1 and μ2 represents the mean of population 2, or Kwantas and Cougar in our case. The test statistic follows a Student's t distribution and the outcome is the following:





Sample Summaries                   Delay indicator (Cougar)   Delay indicator (KWANTAS)
Sample Size                        117                        240
Sample Mean                        0.2650                     0.2625
Sample Std Dev                     0.4432                     0.4409

Hypothesis Test
(Difference of Means)              Equal Variances            Unequal Variances
Hypothesized Mean Difference       0                          0
Alternative Hypothesis             <> 0                       <> 0
Sample Mean Difference             0.0025                     0.0025
Standard Error of Difference       0.049799798                0.049889251
Degrees of Freedom                 355                        229
t-Test Statistic                   0.0493                     0.0493
p-Value                            0.9607                     0.9608
Null Hypoth. at 10% Significance   Don't Reject               Don't Reject
Null Hypoth. at 5% Significance    Don't Reject               Don't Reject
Null Hypoth. at 1% Significance    Don't Reject               Don't Reject

Equality of Variances Test
Ratio of Sample Variances          1.0104
p-Value                            0.9336

Note: given the result of the test for equality of variances (null hypothesis of equality not rejected), we use the outcome in the left column for the mean-equality test.

Conclusion of the mean-equality test: with a p-value as high as 96.07% we cannot reject the null hypothesis of mean equality between the two populations; in other words, we cannot affirm that there is any significant difference between the average delay for the two airlines. Therefore, we can say that Cougar's performance does not exceed that of Kwantas.
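For readers who want to reproduce the test outside the spreadsheet, below is a minimal Python sketch. It rebuilds the 0/1 delay indicators from the counts reported above (63 of 240 for Kwantas, 31 of 117 completed flights for Cougar); the variable names are ours, not from the data set.

```python
# Sketch: pooled two-sample t-test on the reconstructed delay indicators.
from scipy import stats

kwantas = [1] * 63 + [0] * (240 - 63)  # 1 = delayed, 0 = on time
cougar = [1] * 31 + [0] * (117 - 31)

# equal_var=True matches the "Equal Variances" column of the output above.
t_stat, p_value = stats.ttest_ind(cougar, kwantas, equal_var=True)
print(f"t = {t_stat:.4f}, p = {p_value:.4f}")  # approx. t = 0.049, p = 0.961
```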

2) Controlling for differences in flight duration.

An important point that has to be considered when assessing how efficiently the airlines deliver passengers on time to their destinations is the duration of the flights the companies are committed to, and how to compare delays when one company schedules a shorter flight than the other. As will be seen shortly, all of Cougar's scheduled flights are longer than Kwantas's. For the four routes the FAA report analyses, the scheduled durations are:


Route      KWANTAS    COUGAR
SYD-MEL    X          X
MEL-SYD    X          X
MEL-ADE    X          X
ADE-MEL    X          X

As we can see above, the same routes have different scheduled durations for each airline. Note that Kwantas promises its clients the same flights as Cougar but in 5 minutes less on the SYD-MEL/MEL-SYD routes and 10 minutes less on the MEL-ADE/ADE-MEL routes. Therefore, from the starting point of the FAA report, the comparison is not fair. Consider the example below to illustrate this point.

e.g. (1) Assume that on a given day both Cougar and Kwantas have a flight from Sydney to Melbourne leaving at exactly the same time. Kwantas's flight is scheduled to last 1 hour 30 minutes while Cougar's is scheduled for 1 hour 35 minutes. Now assume that for some reason both companies' flights take 1 hour 45 minutes to arrive in Melbourne. In this case both aircraft touch down at the same time, but the FAA will consider the Kwantas flight delayed (given the 15-minute criterion used to define delays) while the Cougar flight will not be considered a delay.

We see this phenomenon as a distortion and believe that, in order to assess the real efficiency of the two airlines, the comparison must be made on the same basis. We therefore decided to recalculate the delay indicator by accounting for the extra scheduled time that Cougar flights have over Kwantas. We subtracted this time difference from the variable Minutes of Delay (5 minutes for the SYD-MEL routes and 10 minutes for the MEL-ADE routes) and defined a new categorical variable named New Flight Delays, equal to 1 when a flight is delayed and 0 when it is not, as shown in the sketch below. This is equivalent to both companies offering flights from point A to point B with the same scheduled duration, and it avoids situations like the one described above, where two flights leave the same place at the same time, arrive at the same place at the same time, and one is considered delayed while the other is not.
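A minimal sketch of this adjustment in Python, assuming the flight records are loaded into a pandas DataFrame with columns named Airline, Route and Arrival delay in minutes (our names; the spreadsheet's actual column labels may differ):

```python
# Sketch: build the "New Flight Delays" indicator by crediting Kwantas
# the difference in scheduled duration on each route.
import pandas as pd

SCHEDULE_GAP = {"SYD-MEL": 5, "MEL-SYD": 5, "MEL-ADE": 10, "ADE-MEL": 10}

def new_flight_delay(row: pd.Series) -> int:
    delay = row["Arrival delay in minutes"]
    if row["Airline"] == "KWANTAS":
        delay -= SCHEDULE_GAP[row["Route"]]  # remove Cougar's extra schedule time
    return 1 if delay > 15 else 0  # assumes the 15-minute delay criterion

# flights["New Flight Delays"] = flights.apply(new_flight_delay, axis=1)
```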

By applying these changes, the delay figures for both companies change to the following:

                                   KWANTAS    COUGAR
Total flights                          240       120
Cancelled flights                        0         3
Completed flights                      240       117
Delayed flights                         19        31
Delay percentage (of completed)       7.9%    26.49%

As we can see, by controlling for differences in flight duration the percentage of delayed flights falls dramatically for Kwantas (7.9%) while for Cougar it remains the same (26.49%).

As in part (1), we performed the same hypothesis test to check for a significant difference in the proportion of delayed flights between the two companies, but now using the new variable we defined. The outcome of the test is below:





Sample Summaries                   Updated DELAY indicator    Updated DELAY indicator
                                   (Cougar)                   (KWANTAS)
Sample Size                        117                        240
Sample Mean                        0.2650                     0.0792
Sample Std Dev                     0.4432                     0.2706

Hypothesis Test
(Difference of Means)              Equal Variances            Unequal Variances
Hypothesized Mean Difference       0                          0
Alternative Hypothesis             <> 0                       <> 0
Sample Mean Difference             0.1858                     0.1858
Standard Error of Difference       0.037981993                0.044541438
Degrees of Freedom                 355                        159
t-Test Statistic                   4.8915                     4.1712
p-Value                            < 0.0001                   < 0.0001
Null Hypoth. at 10% Significance   Reject                     Reject
Null Hypoth. at 5% Significance    Reject                     Reject
Null Hypoth. at 1% Significance    Reject                     Reject

Equality of Variances Test
Ratio of Sample Variances          2.6834
p-Value                            < 0.0001





Note: in this case the result of the variance-equality test strongly rejects the null hypothesis, therefore we use the outcome in the right column for the mean-equality test.

Conclusion of the 2nd mean-equality test: with a p-value lower than 0.01% we can reject the null hypothesis of mean equality between the two populations; in other words, we can affirm that there is a significant difference between the delay rates of the two airlines.

Therefore, we can say that Kwantas's performance is significantly better than that of Cougar.
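The corresponding Python sketch, this time with equal_var=False to reproduce the Welch (unequal variances) column that the variance test tells us to use; the indicators are again rebuilt from the reported counts (19 of 240 for Kwantas after the adjustment):

```python
from scipy import stats

kwantas_new = [1] * 19 + [0] * (240 - 19)
cougar = [1] * 31 + [0] * (117 - 31)

t_stat, p_value = stats.ttest_ind(cougar, kwantas_new, equal_var=False)
print(f"t = {t_stat:.4f}, p = {p_value:.2g}")  # approx. t = 4.17, p < 0.0001
```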






3) Analysing the time of delay for the two companies.

We start by analysing the histogram of the variable Arrival Delay in Minutes for each airline to check its distribution.

[Histogram of Arrival delay in minutes / Data Set #1 (Cougar)]

[Histogram of Arrival delay in minutes / Data Set #1 (KWANTAS)]

We can see from the histograms that both distributions show strong signs of skewness, so using the mean to compare the companies' delay times is not ideal. In this case we recommend using the median rather than the mean.
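A small illustration of why the median is preferred here; the numbers are made up for the example, not taken from the sample:

```python
# Sketch: a couple of extreme delays inflate the mean but barely move the median.
import numpy as np

delays = np.array([0, 2, 5, 8, 9, 10, 12, 150, 153])  # hypothetical minutes
print(np.mean(delays))    # ~38.8, pulled up by the two outliers
print(np.median(delays))  # 9.0, robust to them
```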

The table below provides the descriptive statistics of arrival delay in minutes for both companies.

One Variable Summary    Arrival delay in minutes    Arrival delay in minutes
                        (Cougar)                    (KWANTAS)
Mean                    10.92                       15.66
Median                  13.00                       9.00
Mode                    13.00                       0.00
Minimum                 -13.00                      -11.00
Maximum                 27.00                       153.00
Count                   117                         240

As we can see above, although the average delay is larger for Kwantas, its median delay is smaller than Cougar's; it is therefore necessary to further investigate the nature of the outliers.

Before that, we use the Box Plot diagram to check for any significant differences in Arrival Delay in Minutes between the two airlines. It is important to observe that, although many outliers can be found for Kwantas, the Box Plot analysis suggests there is no statistically significant difference in Arrival Delay in Minutes between the two companies.



To understand the nature of the outliers we decided to break the sample down by day and search for abnormalities. We plotted the box plots of Arrival Delay in Minutes for each day and found extreme values in weeks 2 and 6 for both airlines. Analysing the arrival delays for the individual days, something curious was found in those weeks.

[Tables: flights on 12-09-2008 for KWANTAS and Cougar]

As you can see, on 12-09-2008 Kwantas had delays on its flights (hence some of the outliers), while Cougar cancelled all of its flights from Sydney to Melbourne on that day (the reasons are unknown). On 29-09-2008 a similar episode happened again, with Kwantas having several delays but Cougar cancelling one of its flights. At least Kwantas accomplished delivering all of its flights.

[Table: flights on 29-09-2008]

The above tables present circumstances that inflated Kwantas's average Arrival Delay in Minutes; the mean by itself is therefore not the correct way to compare delay times and performance. Using the median value of Arrival Delay in Minutes gives a better comparison of the two airlines. On this basis we consider Kwantas to have performed better than Cougar, not only because its median delay is smaller but also because, when analysing the outliers in both samples, we believe customers would rather have a delayed flight than a cancelled one.

4) Controlling for differences in flight duration.

As in section 2, we examined the effect of controlling for flight duration on the variable Arrival Delay in Minutes. As explained there, we adjusted this variable to reflect the difference between the airlines' scheduled durations in order to provide a fair basis of comparison and avoid situations like the one in e.g. (1). We defined a new variable named New Arrival Delays in which we subtract 5 minutes from each Kwantas delay on the SYD-MEL routes and 10 minutes on the MEL-ADE routes. The result can be seen in the box plot below:



[Box-Whisker Plot of New arrival delay after taking Cougar's extra scheduled time into account / Data Set #1: Airline = KWANTAS, Airline = Cougar]



In this case the data points to Kwantas having a better performance in minutes of delay: under the new definition Kwantas's average delay drops to 6.91 minutes, which is less than Cougar's 10.92. Note that this time there is no overlap between the box plots of Kwantas and Cougar, which is evidence of a significant difference between the average delay times of the two airlines, even though the Kwantas mean is still inflated by outliers.

End of AGSM FT MBA Candidates' Report.

Recommendations:

Based on the report made by the AGSM FT MBA candidates these are the recommendations
that should be given to our COO Alan:

1) The FAA's analytic metrics are flawed and key data are misinterpreted.
2) KWANTAS demonstrates its commitment to its customers by not cancelling flights, as opposed to Cougar, which is one reason why some of the delays occur.
3) KWANTAS's scheduled flight durations are shorter than Cougar's, taking its customers to their destinations faster on average, which also biases the interpretation in the FAA report.
4) When accounting for cancellations and for differences in flight durations, Kwantas's percentage of delayed flights falls from 26.25% to 7.9%, and its average arrival delay is drastically reduced from 15.66 to 6.91 minutes.

Therefore, looking at all these metrics and re-interpreting the FAA data, it is clear that Kwantas's on-time performance and arrival delays are not just within the limits established in the contract, but much better than Cougar's.

QUESTION 2


PART A
Should Bill have waited out the storm, hoping that Semicon will relax its new, more
stringent specifications? Would this strategy work in the short term? In the long term?

Bill should not wait for Semicon to relax its new, more stringent specifications. Semicon requires tighter design specifications as a result of its new production line of super high-definition 3D imagery, so it will not relax the specifications later on. The wait-and-see strategy could work in the short term, but in the long term Bill could lose his largest customer to competitors willing to comply with the changing specifications.

PART B
1. For this data set, construct appropriate control charts

[R chart (subgroups 1, 2, 3) with centre line and control limits]

[X-bar chart (subgroups 1, 2, 3) with centre line, control limits and 1- and 2-sigma zone limits]

Test Results
Up/Down Runs
Above/Below Runs    0
Zone A Test         14
Zone B Test         5

2. Use the chart to determine whether the process variation is in control
According to the data provided, the process is not in control. The range chart has 2 points above the upper control limit, at subgroup 8 and subgroup 40. Since the range chart is not in control, the estimate of process variation is inflated, making the X-bar chart limits unreliable. In addition, the X-bar chart has failed both the Zone A and the Zone B tests.

3. Comment on whether this chart should be used to determine if the process is capable
of meeting the required specifications
This chart should not be used to determine if the process is capable of meeting the required specifications, because the process variation is not in control. We cannot yet remove the points that fall outside the control limits, because we do not know the reasons for the special causes. The special causes should be investigated further in order to produce a chart with relevant and reliable information about the process.
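For reference, a sketch of how the limits behind an X-bar/R chart of this kind are computed for subgroups of size 3, using the standard control-chart constants (A2 = 1.023, D3 = 0, D4 = 2.574); the subgroup values here are placeholders, not the case data:

```python
# Sketch: Shewhart X-bar and R control limits for subgroups of size 3.
import numpy as np

subgroups = np.array([[3.1, 3.3, 3.2], [3.0, 3.4, 3.2], [3.2, 3.1, 3.3]])
A2, D3, D4 = 1.023, 0.0, 2.574  # constants for subgroup size n = 3

xbar = subgroups.mean(axis=1)                        # subgroup means
rng = subgroups.max(axis=1) - subgroups.min(axis=1)  # subgroup ranges
xbar_bar, r_bar = xbar.mean(), rng.mean()            # centre lines

print("X-bar limits:", xbar_bar - A2 * r_bar, xbar_bar + A2 * r_bar)
print("R limits:    ", D3 * r_bar, D4 * r_bar)
```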


PART C

1. For this data set, construct the appropriate control chart
[R chart (subgroups 1, 2, 3) with centre line and control limits]

[X-bar chart (subgroups 1, 2, 3) with centre line, control limits and sigma zone limits]

Test Results
Up/Down Runs        0
Above/Below Runs    0
Zone A Test         0
Zone B Test         0

2. Determine if the process variation is in control
Since no special causes were found, the R chart and X-bar chart built from the new measurements are in control; it can be concluded that, after the recommended temperature fixes in the diffusion room, the process is in control.


PART D

Based on your analysis in Parts B and C, what would you recommend?
By completing parts B and C, it can be concluded that the process is now in control. For the process to remain in control, the following conditions have to stay constant:
1) Maintain a constant temperature of 30°C in the diffusion room
2) Perform maintenance outside the working hours of the diffusion process
In addition, we recommend analysing the capability of the process to determine whether it meets the specifications, or whether further changes have to be introduced.


PART E

1. What is the cumulative probability for a single wafer from the process to have a thickness of 3,000 Å?

Assuming that the controlled conditions under which the data was collected remain unchanged, the control charts give us the following data:

x̄ = 3,061.278; Se(x̄) = 23.070

As the subgroup size is n = 3 (< 30), we plot the histogram to check that the sample distribution is close to the normal distribution, so that the standard deviation derived via the Central Limit Theorem (CLT) is reliable.

[Histogram of Post-improvement Diffusion Process]

Based on the Central Limit Theorem:

x̄ = 3,061.278

Standard deviation: σ = √n × Se(x̄) = √3 × 23.070 = 39.958

Now, the cumulative probability for a single wafer from the process to have a thickness below 3,000 Å is calculated as follows:

z = (3,000 - 3,061.278) / 39.958 = -1.533

P(X < 3,000) = P(Z < -1.533) = NORM.S.DIST(-1.533) = 0.0625 = 6.25%

So, the cumulative probability for a single wafer from the process to have a thickness below 3,000 Å is 6.25%.
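The same probability with scipy, using the mean and standard deviation derived above:

```python
from scipy.stats import norm

p = norm.cdf(3000, loc=3061.278, scale=39.958)  # P(X < 3,000)
print(f"{p:.4f}")  # approx. 0.0625, i.e. 6.25%
```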

2. What is the percentage of defectives being produced from the process under the
current setup?
The percentage of defectives produced under the current setup is calculated from:

Mean                              3,061.278
Standard Deviation                39.958
Lower Specification Limit (LSL)   2,900
Upper Specification Limit (USL)   3,100

Calculate the z-scores:

z(LSL) = (2,900 - 3,061.278) / 39.958 = -4.036
z(USL) = (3,100 - 3,061.278) / 39.958 = 0.969

Calculate the probability:

P(X < 2,900) + P(X > 3,100) = P(Z < -4.036) + P(Z > 0.969)
= NORM.S.DIST(-4.036) + [1 - NORM.S.DIST(0.969)] = 0.0027% + 16.625% = 16.628%


The percentage of defectives being produced from the process under the current setup is 16.628%.

3. Compute the process capability index, Cp, for this situation. Is the process capable of meeting the customer's requirements? Is Cp the appropriate metric for measuring process capability in this case? If so, why? If not, why not? What other metric would you suggest?


Cp = (USL - LSL) / (6σ) = (3,100 - 2,900) / (6 × 39.958) = 0.834

If we used this Cp alone, being below 1 it would already indicate that the process is not capable of meeting the customer's requirements. However, Cp is not an appropriate metric here in any case, since the process mean does not fall at the centre of the USL-LSL range:

Centre of the range = (USL + LSL) / 2 = (3,100 + 2,900) / 2 = 3,000, while the mean is 3,061.278.


We calculate the capability ratio with respect to the USL and the LSL:

Cpk(USL) = (USL - μ) / (3σ) = (3,100 - 3,061.278) / (3 × 39.958) = 0.323

Cpk(LSL) = (μ - LSL) / (3σ) = (3,061.278 - 2,900) / (3 × 39.958) = 1.345

The appropriate capability index is Cpk = min(0.323, 1.345) = 0.323.

Using this capability index, we conclude that the process is incapable of meeting the customer's requirements, since its capability index is < 1.
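A sketch of both capability indices with the parameters estimated above:

```python
# Sketch: Cp and Cpk for mean 3,061.278, sigma 39.958, specs 2,900-3,100.
mu, sigma = 3061.278, 39.958
lsl, usl = 2900.0, 3100.0

cp = (usl - lsl) / (6 * sigma)
cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
print(f"Cp = {cp:.3f}, Cpk = {cpk:.3f}")  # Cp ~ 0.834, Cpk ~ 0.323
```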

4. Is this a six-sigma process? If not, what would the standard deviation of the process need to be in order for this to be considered a six-sigma process?

This is not a six-sigma process, since its capability index (0.323, as calculated above) is < 2.

To be considered a six-sigma process, its capability index would need to equal 2. To achieve that, its standard deviation would need to be:

Cp = (USL - LSL) / (6σ) = 2  =>  σ = (3,100 - 2,900) / 12 = 16.667

In summary, to be considered a six-sigma process, the standard deviation needs to be 16.667 Å.



5. Assuming that the process mean can drift by, at most, 1.5 sigma in either direction of the current mean without being detected, compute the approximate number of defectives out of a million if this were to occur in either direction.

We calculate the number of defectives when the process shifts right and when it shifts left, then summarise, as in the sketch below.

Right shift: the process shifts to the right by 1.5 sigma.
Right-shifted process mean = μ + 1.5σ = 3,061.278 + (1.5 × 39.958) = 3,121.216
μ = 3,121.216; Standard Deviation = 39.958; LSL = 2,900; USL = 3,100

z(LSL) = (2,900 - 3,121.216) / 39.958 = -5.536
z(USL) = (3,100 - 3,121.216) / 39.958 = -0.531

Total probability of defectives:
P(X < 2,900) = P(Z < -5.536) = NORM.S.DIST(-5.536) ≈ 0.00%
P(X > 3,100) = P(Z > -0.531) = 1 - NORM.S.DIST(-0.531) = 0.702 = 70.227%

Hence, if the process shifts to the right, the percentage of defectives is 70.227%. Out of a million products, there will be about 702,270 defectives.

Left shift: the process shifts to the left by 1.5 sigma.
Left-shifted process mean = μ - 1.5σ = 3,061.278 - (1.5 × 39.958) = 3,001.340
μ = 3,001.340; Standard Deviation = 39.958; LSL = 2,900; USL = 3,100

z(LSL) = (2,900 - 3,001.340) / 39.958 = -2.536
z(USL) = (3,100 - 3,001.340) / 39.958 = 2.469

P(X < 2,900) = P(Z < -2.536) = NORM.S.DIST(-2.536) = 0.560%
P(X > 3,100) = P(Z > 2.469) = 1 - NORM.S.DIST(2.469) = 0.677%

If the process shifts left, the probability of a defective is 0.677% + 0.560% = 1.24%. Out of a million, there will be about 12,377 defectives.
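Both drift scenarios in one sketch, converting the defect probabilities to counts per million:

```python
from scipy.stats import norm

mu, sigma, lsl, usl = 3061.278, 39.958, 2900.0, 3100.0

for label, shift in [("right", +1.5), ("left", -1.5)]:
    m = mu + shift * sigma                             # undetected drifted mean
    p_defect = norm.cdf(lsl, m, sigma) + norm.sf(usl, m, sigma)
    print(f"{label} shift: {p_defect:.3%} -> {p_defect * 1e6:,.0f} per million")
```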

PART F

1 & 2. What should ACMS do and what can we recommend for the future?

Possible scenarios:

X: Use the current equipment with modifications to the process so that it is centred within the specification range (2,900-3,100).
Y: Lease new equipment with an SD of 15 Å.

For both scenarios there is a sales growth rate of 20% per year over a horizon of 5 years, so sales will be as follows:

                               No. of units sold
Year 1                                 1,000,000
Year 2 = Year 1 x (1+20%)              1,200,000
Year 3 = Year 2 x (1+20%)              1,440,000
Year 4 = Year 3 x (1+20%)              1,728,000
Year 5 = Year 4 x (1+20%)              2,073,600
Total sales over 5 years               7,441,600


SCENARIO X

Use the current equipment with modifications to the process so that it is centred within the specification range (2,900-3,100).

Probability of the thickness falling between the different measures, if μ = 3,061.2778 and SD = 39.96:

Thickness                          Probability formula              Probability
<2,860 (can't rework, write off)   P(X < 2,860)                     0.0000%
2,860 - 2,880                      P(X < 2,880) - P(X < 2,860)      0.0003%
2,880 - 2,900                      P(X < 2,900) - P(X < 2,880)      0.0024%
3,100 - 3,120                      P(X < 3,120) - P(X < 3,100)      9.5421%
3,120 - 3,140                      P(X < 3,140) - P(X < 3,120)      4.6427%
>3,140 (can't rework, write off)   1 - P(X < 3,140)                 2.4418%
2,900 - 3,100                      1 - sum(all above)               83.3706%

With the probabilities of the thickness falling between the above measures, and the total units sold, we calculate the revenue for Scenario X:

Thickness       Rework cost   Revenue   Net revenue   % of volume   No. of units   Revenue earned
<2,860          -500          -         -500          0.0000%       2              (880)
2,860 - 2,880   125           150       25            0.0003%       20             488
2,880 - 2,900   50            150       100           0.0024%       181            18,103
3,100 - 3,120   50            150       100           9.5421%       710,087        71,008,678
3,120 - 3,140   125           150       25            4.6427%       345,495        8,637,371
>3,140          -500          -         -500          2.4418%       181,707        (90,853,450)
2,900 - 3,100   0             150       150           83.3706%      6,204,109      930,616,376
TOTAL REVENUE EARNED                                                               919,426,686

Using the current equipment with modifications to the process so that it is centred within the specification range (2,900-3,100) will produce revenue of $919,426,686.
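A sketch of the revenue model behind these tables, assuming the band probabilities come from the normal CDF and the per-unit net revenues from the table above (write-off -500, reworkable bands net 25 or 100, in-spec 150):

```python
# Sketch: expected 5-year revenue under Scenario X.
from scipy.stats import norm

mu, sigma, total_units = 3061.2778, 39.96, 7_441_600

bands = [  # (lower bound, upper bound, net revenue per unit)
    (None, 2860, -500), (2860, 2880, 25), (2880, 2900, 100),
    (2900, 3100, 150), (3100, 3120, 100), (3120, 3140, 25), (3140, None, -500),
]

revenue = 0.0
for lo, hi, net in bands:
    p_lo = norm.cdf(lo, mu, sigma) if lo is not None else 0.0
    p_hi = norm.cdf(hi, mu, sigma) if hi is not None else 1.0
    revenue += (p_hi - p_lo) * total_units * net
print(f"Expected revenue: ${revenue:,.0f}")  # approx. $919 million
```

The same loop reproduces every other scenario below by swapping in the relevant mean and standard deviation.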

SCENARIO Y

Lease new equipment with an SD of 15 Å.

Probability of the thickness falling between the different measures, if μ = 3,061.2778 and SD = 15:

Thickness                          Probability formula              Probability
<2,860 (can't rework, write off)   P(X < 2,860)                     0.0000%
2,860 - 2,880                      P(X < 2,880) - P(X < 2,860)      0.0000%
2,880 - 2,900                      P(X < 2,900) - P(X < 2,880)      0.0000%
3,100 - 3,120                      P(X < 3,120) - P(X < 3,100)      0.4874%
3,120 - 3,140                      P(X < 3,140) - P(X < 3,120)      0.0045%
>3,140 (can't rework, write off)   1 - P(X < 3,140)                 0.0000%
2,900 - 3,100                      1 - sum(all above)               99.5081%

With the probabilities of the thickness falling between the above measures, and the total units sold, we calculate the revenue for Scenario Y:

Thickness       Rework cost   Revenue   Net revenue   % of volume   No. of units   Revenue earned
<2,860          -500          -         -500          0.0000%       0              (0)
2,860 - 2,880   125           150       25            0.0000%       0              0
2,880 - 2,900   50            150       100           0.0000%       0              0
3,100 - 3,120   50            150       100           0.4874%       36,268         3,626,758
3,120 - 3,140   125           150       25            0.0045%       336            8,402
>3,140          -500          -         -500          0.0000%       1              (286)
2,900 - 3,100   0             150       150           99.5081%      7,404,996      1,110,749,368
TOTAL REVENUE EARNED                                                               1,114,384,242

Using the new equipment with an SD of 15 Å, the total revenue is $1,114,384,242; subtracting the cost of the equipment:
1,114,384,242 - 75,000,000 = $1,039,384,242

Comparing Scenarios X and Y, we can conclude that leasing the new equipment is more cost effective: $1,039,384,242 (Scenario Y) vs $919,426,686 (Scenario X), a difference of $119,957,555.70.

Now, if the process can drift by up to 1.5 SD without being detected, we can analyse Scenarios X and Y under a left and a right shift of 1.5 SD.

Scenario X with a left and right shift of 1.5 SD

Right shift:
Probability of the thickness falling between the different measures, if μ = 3,121.2178 and SD = 39.96:

Thickness       Probability formula              Probability
<2,860          P(X < 2,860)                     0.0000%
2,860 - 2,880   P(X < 2,880) - P(X < 2,860)      0.0000%
2,880 - 2,900   P(X < 2,900) - P(X < 2,880)      0.0000%
3,100 - 3,120   P(X < 3,120) - P(X < 3,100)      19.0126%
3,120 - 3,140   P(X < 3,140) - P(X < 3,120)      19.2987%
>3,140          1 - P(X < 3,140)                 31.9168%
2,900 - 3,100   1 - sum(all above)               29.7718%


With these probabilities and the total units sold, we calculate the revenue for Scenario X with a right shift of 1.5 SD:

Thickness       Rework cost   Revenue   Net revenue   % of volume   No. of units   Revenue earned
<2,860          -500          -         -500          0.0000%       0              (0)
2,860 - 2,880   125           150       25            0.0000%       0              0
2,880 - 2,900   50            150       100           0.0000%       0              11
3,100 - 3,120   50            150       100           19.0126%      1,414,843      141,484,344
3,120 - 3,140   125           150       25            19.2987%      1,436,136      35,903,391
>3,140          -500          -         -500          31.9168%      2,375,123      (1,187,561,673)
2,900 - 3,100   0             150       150           29.7718%      2,215,497      332,324,618
TOTAL REVENUE EARNED                                                               (677,849,309)

Using the current equipment with modifications so that the process is centred within the specification range (2,900-3,100), a right shift of 1.5 SD will produce revenue of $(677,849,309).


Left shift:
Probability of the thickness falling between the different measures, if μ = 3,001.3378 and SD = 39.96:

Thickness       Probability formula              Probability
<2,860          P(X < 2,860)                     0.0202%
2,860 - 2,880   P(X < 2,880) - P(X < 2,860)      0.0994%
2,880 - 2,900   P(X < 2,900) - P(X < 2,880)      0.4410%
3,100 - 3,120   P(X < 3,120) - P(X < 3,100)      0.5283%
3,120 - 3,140   P(X < 3,140) - P(X < 3,120)      0.1231%
>3,140          1 - P(X < 3,140)                 0.0260%
2,900 - 3,100   1 - sum(all above)               98.7619%


With these probabilities and the total units sold, we calculate the revenue for Scenario X with a left shift of 1.5 SD:

Thickness       Rework cost   Revenue   Net revenue   % of volume   No. of units   Revenue earned
<2,860          -500          -         -500          0.0202%       1,506          (752,956)
2,860 - 2,880   125           150       25            0.0994%       7,400          185,002
2,880 - 2,900   50            150       100           0.4410%       32,816         3,281,644
3,100 - 3,120   50            150       100           0.5283%       39,312         3,931,243
3,120 - 3,140   125           150       25            0.1231%       9,161          229,033
>3,140          -500          -         -500          0.0260%       1,936          (968,169)
2,900 - 3,100   0             150       150           98.7619%      7,349,467      1,102,420,116
TOTAL REVENUE EARNED                                                               1,108,325,915

Using the current equipment with modifications so that the process is centred within the specification range (2,900-3,100), a left shift of 1.5 SD will produce revenue of $1,108,325,915.

Scenario Y with a left and right shift of 1.5 SD

Right shift:
Probability of the thickness falling between the different measures, if μ = 3,083.7778 and SD = 15:

Thickness       Probability formula              Probability
<2,860          P(X < 2,860)                     0.0000%
2,860 - 2,880   P(X < 2,880) - P(X < 2,860)      0.0000%
2,880 - 2,900   P(X < 2,900) - P(X < 2,880)      0.0000%
3,100 - 3,120   P(X < 3,120) - P(X < 3,100)      13.1870%
3,120 - 3,140   P(X < 3,140) - P(X < 3,120)      0.7783%
>3,140          1 - P(X < 3,140)                 0.0089%
2,900 - 3,100   1 - sum(all above)               86.0259%


With these probabilities and the total units sold, we calculate the revenue for Scenario Y with a right shift of 1.5 SD:

Thickness (Å)   Rework cost   Revenue   Net revenue   % of volume   No. of units   Revenue earned
<2,860          -500          -         -500          0.0000%       0              (0)
2,860 - 2,880   125           150       25            0.0000%       0              0
2,880 - 2,900   50            150       100           0.0000%       0              0
3,100 - 3,120   50            150       100           13.1870%      981,323        98,132,298
3,120 - 3,140   125           150       25            0.7783%       57,914         1,447,862
>3,140          -500          -         -500          0.0089%       663            (331,421)
2,900 - 3,100   0             150       150           86.0259%      6,401,700      960,254,951
TOTAL REVENUE EARNED                                                               1,059,503,691

Using the new equipment with an SD of 15 Å and a mean shift of 1.5 SD to the right, the total revenue is $1,059,503,691; subtracting the cost of the equipment:
1,059,503,691 - 75,000,000 = $984,503,691

Left shift:
Probability of the thickness falling between the different measures, if μ = 3,038.7778 and SD = 15:

Thickness       Probability formula              Probability
<2,860          P(X < 2,860)                     0.0000%
2,860 - 2,880   P(X < 2,880) - P(X < 2,860)      0.0000%
2,880 - 2,900   P(X < 2,900) - P(X < 2,880)      0.0000%
3,100 - 3,120   P(X < 3,120) - P(X < 3,100)      0.0022%
3,120 - 3,140   P(X < 3,140) - P(X < 3,120)      0.0000%
>3,140          1 - P(X < 3,140)                 0.0000%
2,900 - 3,100   1 - sum(all above)               99.9978%


With these probabilities and the total units sold, we calculate the revenue for Scenario Y with a left shift of 1.5 SD:

Thickness       Rework cost   Revenue   Net revenue   % of volume   No. of units   Revenue earned
<2,860          -500          -         -500          0.0000%       0              (0)
2,860 - 2,880   125           150       25            0.0000%       0              0
2,880 - 2,900   50            150       100           0.0000%       0              0
3,100 - 3,120   50            150       100           0.0022%       166            16,628
3,120 - 3,140   125           150       25            0.0000%       0              6
>3,140          -500          -         -500          0.0000%       0              (0)
2,900 - 3,100   0             150       150           99.9978%      7,441,433      1,116,215,024
TOTAL REVENUE EARNED                                                               1,116,231,658

Using the new equipment with an SD of 15 Å and a mean shift of 1.5 SD to the left, the total revenue is $1,116,231,658; subtracting the cost of the equipment:
1,116,231,658 - 75,000,000 = $1,041,231,658

Now we can compare the cost efficiency of all the options available, Scenario X and Scenario Y, with mean shifts of 1.5 SD to the left and the right:

                     Net profit          Left-shift net profit   Right-shift net profit
Current equipment    919,426,685.89      1,108,325,915.17        (677,849,309.03)
New equipment        1,039,384,241.60    1,041,231,657.64        984,503,691.15


If the expected sales growth occurs, inflation is ignored, and the thickness specifications are not modified further over the next five years, the data above lead to a clear conclusion: leasing the new equipment is the best option for ACMS. Its expected profit is higher, and it avoids the large loss that the current equipment produces under an undetected right shift. We recommend that the company lease the new equipment. In addition, we recommend investigating the causes that could shift the mean to the left by 1.5 SD, since replicating those conditions would increase revenue even further.

QUESTION 3

It is known that the response time is normally distributed with a mean of 3.5 hours and a standard deviation of 0.75 hours. Let us define:
1. X as the response time,
2. Z as the z-score,
3. P as the probability for the particular set of values used.

PART A

1. Probability that an enquiry will be responded to in a time less than 2 hours

First we find the z-score:

z = (2 - 3.5) / 0.75 = -2

P(X < 2) = P(Z < -2) = NORM.S.DIST(-2) = 2.275%

The probability that an enquiry will be responded to in less than 2 hours is 2.275%.

2. Probability that an enquiry will be responded to in a time between 3.25 hours and 4 hours

First we find the z-scores:

z(3.25) = (3.25 - 3.5) / 0.75 = -0.333
z(4.00) = (4.00 - 3.5) / 0.75 = 0.667

P(X < 3.25) = P(Z < -0.333) = NORM.S.DIST(-0.333) = 36.944%
P(X < 4) = P(Z < 0.667) = NORM.S.DIST(0.667) = 74.751%

To find the probability between these values, we subtract one from the other:

P(3.25 < X < 4) = 74.751% - 36.944% = 37.807%

The probability that an enquiry will be responded to in between 3.25 and 4 hours is 37.807%.

3. Probability that an enquiry will be responded to in a time longer than 5 hours

We follow the same principle as before, with X now equal to 5 hours. First we find the z-score:

z = (5 - 3.5) / 0.75 = 2.000

P(X > 5) = P(Z > 2) = 1 - NORM.S.DIST(2) = 2.275%

The probability that an enquiry will be responded to in more than 5 hours is 2.275%.
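All three Part A probabilities in a single scipy sketch, under the stated N(3.5, 0.75) distribution:

```python
from scipy.stats import norm

mu, sigma = 3.5, 0.75
print(norm.cdf(2, mu, sigma))                              # P(X < 2), ~2.28%
print(norm.cdf(4, mu, sigma) - norm.cdf(3.25, mu, sigma))  # P(3.25 < X < 4), ~37.8%
print(norm.sf(5, mu, sigma))                               # P(X > 5), ~2.28%
```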


PART B

Cut-off time for KPI

Management wants to set a cut-off time for their KPI, and to do that we follow a similar analysis: we need to find the time within which 99% of enquiries are responded to. Here we are given the probability (99%) and need to find the corresponding value of X. We use the NORM.S.INV function in Excel to first find the z-score:

NORM.S.INV(99%) => z = 2.326

We then derive X from the z-score:

(x - 3.5) / 0.75 = 2.326  =>  x = 3.5 + 2.326 × 0.75 = 5.245

The manager can thus quote a time of 5.245 hours within which 99% of all enquiries are responded to.
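The same cut-off via the inverse CDF (the counterpart of Excel's NORM.S.INV):

```python
from scipy.stats import norm
print(norm.ppf(0.99, loc=3.5, scale=0.75))  # approx. 5.245 hours
```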


PART C

The probability of being penalised under the new Service Level Agreement (SLA)

The company is to sign an SLA stating that every month a random sample of 50 enquiries will be taken from the system, and the sample will be used to check that the average response time does not exceed 3.25 hours. Any failure to keep the sample's average response time at or below 3.25 hours will cause the company to be penalised. To help management find the probability of being penalised under this new SLA, we proceed as follows.

Note: we assume that the company's response times follow the general distribution above.

The information we have to work with:
Sample size (n): 50
Population mean: 3.5 hours

We first calculate the standard error:

Se(x̄) = 0.75 / √50 = 0.1061

We then calculate the z-score:

z = (3.25 - 3.5) / 0.1061 = -2.356

From this we find P(x̄ > 3.25) using the z-score calculated above:

P(x̄ > 3.25) = P(Z > -2.356) = 1 - NORM.S.DIST(-2.356) = 99.077%

The probability of being penalised under this new SLA is 99.077%. Hence, management should not sign the contract.
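A sketch of the penalty probability, using the sampling distribution of the mean of 50 enquiries:

```python
import math
from scipy.stats import norm

se = 0.75 / math.sqrt(50)                     # standard error, ~0.1061
p_penalty = norm.sf(3.25, loc=3.5, scale=se)  # P(sample mean > 3.25)
print(f"{p_penalty:.3%}")                     # approx. 99.08%
```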



QUESTION 4

For this question we need to help Janice with her research on the growing issue of obesity. We will help her analyse the data she collected by surveying the majority of the large new apartment complexes within 2 km of Sydney Town Hall.

She randomly selected 100 adult residents over the age of 21. Based on the sample, the average weight of the adult residents in the area is 80 kg, and the sample standard deviation was calculated to be 8 kg.

PART A

We know the following:

Sample mean x̄ = 80
Standard deviation s = 8
Standard error Se(x̄) = s / √n = 8 / √100 = 0.8
Sample size n = 100
Degrees of freedom df = n - 1 = 100 - 1 = 99

We need the 99% confidence interval for the estimated mean weight of adults over the age of 21, so we require α = 0.01 (i.e. 1 - 0.99) in both tails combined.

T = T.INV.2T(α, df) = T.INV.2T(0.01, 99) = 2.6264

Confidence limits = x̄ ± T × Se(x̄)
Upper limit = 80 + (2.6264 × 0.8) = 82.101
Lower limit = 80 - (2.6264 × 0.8) = 77.898

Confidence interval = (77.898, 82.101)

We have assumed that the underlying data are approximately normally distributed; since the sample size is greater than 30, the central limit theorem also supports this.
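The same interval from the summary statistics alone, using scipy's t distribution:

```python
import math
from scipy.stats import t

n, xbar, s = 100, 80.0, 8.0
se = s / math.sqrt(n)
t_crit = t.ppf(1 - 0.01 / 2, df=n - 1)  # two-tailed, matches T.INV.2T(0.01, 99)
print(xbar - t_crit * se, xbar + t_crit * se)  # approx. (77.90, 82.10)
```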









PART B

Our goal here is to find whether the Sydney City Council's target of reducing the average weight of residents to at least 10% below the national average (i.e. at most 78.2 kg) is being met.

The calculations are as follows:

Null hypothesis H0: μ ≤ 78.2
Alternative hypothesis HA: μ > 78.2

Sample mean x̄ = 80
μ0 = 78.2
Standard deviation s = 8
Standard error Se(x̄) = s / √n = 8 / √100 = 0.8
Sample size n = 100
Degrees of freedom df = n - 1 = 99

T = (x̄ - μ0) / Se(x̄) = (80 - 78.2) / 0.8 = 2.25

p-value = T.DIST.RT(T, df) = T.DIST.RT(2.25, 99) = 0.0133 = 1.3%

Since the p-value is between 0.01 and 0.05, we have strong evidence against H0; at the 5% significance level we reject the null hypothesis and conclude that the average weight of residents is above the 78.2 kg target.
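The same one-sided test from the summary statistics, matching the T.DIST.RT calculation above:

```python
import math
from scipy.stats import t

n, xbar, s, mu0 = 100, 80.0, 8.0, 78.2
t_stat = (xbar - mu0) / (s / math.sqrt(n))  # = 2.25
p_value = t.sf(t_stat, df=n - 1)            # right tail, approx. 0.0133
print(t_stat, p_value)
```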





PART C

It is not possible to evaluate the weight of a single individual from this interval. The confidence interval provides a range of plausible values for the population mean, whereas here we are asked to evaluate a single value; the spread of individual weights around the mean is much wider than the uncertainty in the mean itself.
