Escolar Documentos
Profissional Documentos
Cultura Documentos
852
Stanley B. Gershwin
http://web.mit.edu/manuf-sys
Spring, 2010
F E
F E
F F F E E
F E
◮ Systems with loops are not ergodic. That is, the steady-state
distribution is a function of the initial conditions.
◮ Example: if the system below has K pallets at time 0, it will have K
pallets for all t ≥ 0. Therefore, the probability distribution is a
function of K .
Raw Part Input
◮ This applies to more general systems with loops, such the example on
Slide 3.
◮ In general,
p(s|s(0)) = lim prob { state of the system at time t = s|
t→∞
state of the system at time 0 = s(0)}.
◮ Consequently, the performance measures depend on the initial state
of the system:
◮ The production rate of Machine Mi , in parts per time unit, is
�
Ei (s(0)) = prob αi = 1 and (nb > 0 ∀ b ∈ U(i)) and
� �
�
(nb < Nb ∀ b ∈ D(i)) ��s(0) .
Mj Mm
B (j, i) B (i, m)
• Mi •
• •
B (n, i) B (i, q)
• •
Mn Mq
•
•
•
Mu (n, i) B (n, i) Md (n, i)
•
•
•
Mu (i, q) B (i, q) Md (i, q)
Part of Decomposition
Deterministic
processing
time model
7.3351
5.6516
Case 1:
2.6449
7.3351 ri = .1, pi =
.1, i = 1, ..., 8;
7.3351
Ni = 10, i =
5.6516
1, ..., 7.
7.3351
7.9444
7.0529
2.0555
Case 2:
7.9444
Same as Case
7.9444
1 except
p7 = .2
7.0529
7.9444
4.5640
4.1089
2.2923
Case 3:
7.7077
Same as Case
7.7077
1 except
p1 = .2
6.5276
7.7077
7.9017
3.4593
2.0983
Case 4:
7.9017
Same as Case
7.9017
1 except
p3 = .2
6.9609
7.9017
can be assembled
production system
structures.
operation of all.
N =2
1
N =2 N =3
1 2
µ
1
µ µ µ µ
2 1 2 3
N =3
2
µ
3
µ1 µ1
00 10 20
N =2
1
µ3 µ2 µ3 µ2 µ 3
µ µ1 µ1
1 01 11 21
µ3 µ2 µ3 µ 2 µ 3
µ
2 µ1 µ1
02 12 22
N =3
2
µ3 µ2 µ3 µ 2 µ 3
µ
3
µ1 µ1
03 13 23
µ1 µ1
03 13 23
µ3 µ2 µ3 µ2 µ 3
N =2 N = 3
1 2
µ1 µ1
02 12 22
µ3 µ2 µ3 µ 2 µ 3
µ µ µ
1 2 3
µ1 µ1
01 11 21
µ3 µ2 µ3 µ 2 µ 3
µ1 µ1
00 10 20
µ1 µ1
◮ Therefore, in steady state,
N =2
1 µ3 µ2 µ3 µ2 µ3
µ1 µ1
µ 01 11 21
1
µ3 µ2 µ3 µ 2 µ3
µ µ1 µ1
2
02 12 22
N =3
2
µ3 µ2 µ3 µ 2 µ3
µ
3
µ1 µ1
03 13 23
µ3 µ2 µ3 µ2 µ3
N =2 N =3
1 2 µ1 µ1
02 12 22
µ3 µ2 µ3 µ 2 µ3
µ µ µ µ1 µ1
1 2 3
01 11 21
µ3 µ2 µ3 µ 2 µ3
µ1 µ1
00 10 20
Therefore
PA = PT
2 �
3 2
� 3
�
� � �
n̄1 = n1 p(n1 , n2 ) = n1 p(n1 , n2 ) µ1 µ1
n1 =0 n2 =0 n1 =0 n2 =0 00 10 20
N =2
1 µ3 µ2 µ3 µ2 µ3
µ1 µ1
µ 01 11 21
1
µ3 µ2 µ3 µ 2 µ3
µ µ1 µ1
2
02 12 22
N =3
2
µ3 µ2 µ3 µ 2 µ3
µ
3
µ1 µ1
03 13 23
2 �
3 2
� 3
�
� � �
n̄1 = n1 p(n1 , n2 ) = n1 p(n1 , n2 ) µ1 µ1
n1 =0 n2 =0 n1 =0 n2 =0 03 13 23
3
µ2 µ3 µ2 µ3
N =2 N = 3
1 2
µ1 µ1
02 12 22
3 µ2 µ3 µ 2 µ3
µ µ µ µ1 µ1
1 2 3
01 11 21
µ2 µ3 µ 2 µ3
3
µ1 µ1
00 10 20
Therefore
n̄1A = n̄1T
2 �
3 3
� 2
�
� � �
n̄2 = n2 p(n1 , n2 ) = n2 p(n1 , n2 ) µ1 µ1
n1 =0 n2 =0 n2 =0 n1 =0 00 10 20
N =2
1 µ3 µ2 µ3 µ2 µ3
µ1 µ1
µ 01 11 21
1
µ3 µ2 µ3 µ 2 µ3
µ µ1 µ1
2
02 12 22
N =3
2
µ3 µ2 µ3 µ 2 µ3
µ
3
µ1 µ1
03 13 23
2 �
3 3
� 2
�
� � �
n̄2 = n2 p(n1 , n2 ) = n2 p(n1 , n2 ) µ1 µ1
n1 =0 n2 =0 n2 =0 n1 =0 03 13 23
µ3 µ2 µ3 µ2 µ3
N =2 N =3
1 2 µ1 µ1
02 12 22
µ3 µ2 µ3 µ 2 µ3
µ µ µ µ1 µ1
1 2 3
01 11 21
µ3 µ2 µ3 µ 2 µ3
µ1 µ1
00 10 20
Therefore
n̄2A = N2 − n̄2T
◮ Assume
◮ Z and Z ′ are two exponential A/D networks with the same number of
machines and buffers. Corresponding machines and buffers have the
same parameters; that is, µ′i = µi , i = 1, ..., kM and
Nb′ = Nb , b = 1, ..., kB .
◮ There is a subset of buffers Ω such that for j 6∈ Ω, u ′ (j) = u(j) and
d ′ (j) = d(j); and for j ∈ Ω, u ′ (j) = d(j) and d ′ (j) = u(j). That is,
there is a set of buffers such that the direction of flow is reversed in the
two networks.
◮ Then, the transition equations for network Z ′
are the same as those of
Z , except that the buffer levels in Ω are replaced by the amounts of
space in those buffers.
Corollary:
◮ Assume:
◮ The initial states s(0) and s ′ (0) are related as follows: nj′ (0) = nj (0)
for j 6∈ Ω, and nj′
(0) = Nj − nj (0) for j ∈ Ω.
◮ Then
◮ the average levels of all the buffers in the systems whose direction of
flow has not been changed are the same,
◮ the average levels of all the buffers in the systems whose direction of
flow has been changed are complementary; the average number of
parts in one is equal to the average amount of space in the other.
N N
1 2
µ µ µ
1 2 3
N N
1 1
B B
2 1
µ µ
1 1
µ µ
2 2
B B
1 2
N N
2 2
µ µ
3 N N 3
2 1
µ µ µ
3 2 1
Representative members
2 2
1 2 1 2
1 3 1 3
4 3 4 3
4 4
Ω= (3, 4)
For information about citing these materials or our Terms of Use,visit: http://ocw.mit.edu/terms.
MIT 2.852
Lectures 19–21
Scheduling: Real-Time Control of Manufacturing Systems
Stanley B. Gershwin
Spring, 2007
Factory Control
Dynamic
Programming Problem
Dynamic
Programming Problem
Let g(i, j) be the cost of traversing the link from i to j. Let i(t)
be the tth node on a path from A to Z. Then the path cost is
T
�
g(i(t − 1), i(t))
t=1
δ Bad news: it often does not reduce it enough to give an exact optimal
solution practically (ie, with limited time and memory). This is the curse of
dimensionality .
δ Good news: we can learn something by characterizing the optimal
solution or an approximation.
Dynamic
Programming Solution
t=1
where i(0) = i; i(T ) =Z; (i(t − 1), i(t)) is a link for every t.
Dynamic
Programming Solution
J (Z) = 0
Z
i j
j2
j3 Z
i j4
j5
j6
Suppose that for each node j for which a link exists from i to j,
the optimal path and optimal cost J (j) from j to Z is known.
c 2007 Stanley B. Gershwin.
Copyright � 11
Example
Dynamic
Programming Solution
Then the optimal path from i to Z is the one that minimizes the
sum of the costs from i to j and from j to Z. That is,
J (i) can be calculated from this if J (j) is known for every node j
Dynamic
Programming Solution
i Z
Dynamic
Programming Solution
Dynamic
Programming Solution
Algorithm
1. Set J (Z) = 0.
2. Find some node i such that
• J (i) has not yet been found, and
• for each node j in which link (i, j) exists, J (j) is already
calculated.
Assign J (i) according to
Dynamic
Programming Solution
F
B L
11
13 5
G
A
14 Z
12 11
H
C 6 M
D
8
17 4
14 11 N
I
J
13
E
9 6
K O
Dynamic
Programming Solution
• that the solution is determined for every i, not just A and not just nodes on
the optimal path;
• that J(i) depends on the nodes to be visited after i, not those between A
and i. The only thing that matters is the present state and the future;
• that J(i) is obtained by working backwards.
Dynamic
Programming Solution
Programming Stochastic
Suppose
• g(i, j) is a random variable; or
• if you are at i and you choose j, you actually go to k with
probability p(i, j, k).
Then the cost of a sequence of choices is random. The objective
function is � T �
�
E g(i(t − 1), i(t))
t=1
and we can define
J (i) = E min [g(i, j) + J (j)]
u1 (t) x1 d1 Type 1
u2 (t) x2 d2 Type 2
r, p
• Perfectly flexible machine, two part types. λi time units required
to make Type i parts, i = 1, 2.
• Exponential failures and repairs with rates p and r.
• Constant demand rates d1, d2.
• Instantaneous production rates ui(t), i = 1, 2 — control
variables .
• Downstream surpluses xi(t).
c 2007 Stanley B. Gershwin.
Copyright � 21
Continuous Time, Mixed State
Dynamic
Cumulative
Production
and Demand
production Pi (t)
Objective: Minimize
the difference
between cumulative
production and
cumulative demand.
t
c 2007 Stanley B. Gershwin.
Copyright � 22
Continuous Time, Mixed State
Dynamic
Feasibility:
• For the problem to be feasible, it must be possible to make
approximately diT Type i parts in a long time period of length
T, i = 1, 2. (Why “approximately”?)
• The time required to make diT parts is λidiT .
• During this period, the total up time of the machine — ie, the
time available for production — is approximately r/(r + p)T .
• Therefore, we must have λ1d1T + λ2d2T � r/(r + p)T , or
2
� r
λi di �
i=1
r+p
2
� di r
�
i=1
µi r+p
where µi = 1/λi.
If there were only one part type, this would be
d � µ
r + p
Look familiar?
Examples:
• g(x1, x2) = A1x21 + A2x22
• g(x1, x2) = A1|x1| + A2|x2|
• g(x1, x2) = g1(x1) + g2(x2) where
(i−) i ,
−
δ gi(xi) = g(i+)x+ i + g x
−
δ x+
i = max(x i , 0), x i = − min(xi , 0),
Programming
Stochastic Example
Objective: � T
min E g(x1(t), x2(t))dt
0
g(x1,x2 )
x2
x1
Constraints:
u1(t) ≈ 0; u2(t) ≈ 0
Short-term capacity:
i
or
�
λiui(t) � 1 0 1/1� u
i
Bellman’s
Equation Deterministic
� T
Problem: min g(x(t), u(t))dt + F (x(T ))
u(t),0�t�T 0
such that
dx(t)
= f (x(t), u(t), t)
dt
x(0) specified
h(x(t), u(t)) � 0
0�t�T
Bellman’s
Equation Deterministic
⎞ ⎩
⎨
⎨ ⎨
⎨
⎨ �� ⎨
�⎨
⎨
⎠� t1 T ⎦
u(t),
⎨
⎨
⎨ 0 u(t), t1 ⎨
⎨ ⎨
0�t�t1 t1 �t�T
⎧ ⎫
�� t1 �
Bellman’s
Equation Deterministic
where
� T
J (x(t1), t1) = min g(x(t), u(t))dt + F (x(T ))
u(t),t1 �t�T t1
such that
dx(t)
= f (x(t), u(t), t)
dt
x(t1) specified
h(x(t), u(t)) � 0
Bellman’s
Equation Deterministic
Or,
�
J (x(t1), t1) = min g(x(t1), u(t1))�t + J (x(t1), t1)+
u(t1)
αJ αJ
�
(x(t1), t1)(x(t1 + �t) − x(t1)) + (x(t1), t1)�t
αx αt
Note that
dx
x(t1 + �t) = x(t1) + �t = x(t1) + f (x(t1), u(t1), t1)�t
dt
Bellman’s
Equation Deterministic
Therefore
J (x, t1) = J (x, t1)
αJ αJ
� �
+ min g(x, u)�t + (x, t1)f (x, u, t1)�t + (x, t1)�t
u αx αt
where x = x(t1); u = u(t1) = u(x(t1), t1).
Then (dropping the t subscript)
αJ αJ
� �
− (x, t) = min g(x, u) + (x, t)f (x, u, t)
αt u αx
Bellman’s
Equation Deterministic
This is the Bellman equation . It is the counterpart of the recursion equation for
the network example.
• If we had a guess of J(x, t) (for all x and t) we could confirm it by
αJ
� �
u = U x, ,t .
αx
Bellman’s
Equation Example
Bang-Bang Control
� �
min |x|dt
0
subject to
dx
=u
dt
x(0) specified
−1 � u � 1
Bellman’s
Equation Example
αt u, αx
−1�u�1
J(x, t) = J(x) is a solution because the time horizon is infinite and t does not
appear explicitly in the problem data (ie, g(x) = |x| is not a function of t.
Therefore
dJ
� �
0 = min |x| + (x)u .
u, dx
−1�u�1
Bellman’s
Equation Example
dJ
−1 if (x) > 0
⎧
⎧
⎧
⎧
⎧
⎧
⎧ dx
⎧
⎧
⎧
⎧
� dJ
u= 1 if (x) < 0
⎧
⎧
⎧ dx
⎧
⎧
⎧
⎧
dJ
⎧
� undetermined if
⎧
(x) = 0
⎧
⎧
dx
Why?
Bellman’s
Equation Example
dJ
(x) = |x|
dx
Bellman’s
Equation Example
dJ
Bellman’s
Equation Example
Therefore
dJ
(x) >= x
dx
so
1
J = x2
2
and
� 1 if x < 0
�
⎧
u= 0 if x = 0
� −1 if x > 0
⎧
Equation
Stochastic
⎫� ⎭
T
J (x(0), �(0), 0) = min E g(x(t), u(t))dt + F (x(T ))
u 0
such that
dx(t)
= f (x, �, u, t)
dt
Equation
Stochastic
�
= H(j)prob {�(t + �t) = j | �(t)}
Equation
Stochastic
⎪ �
� �
= H(j)�j�(t)�t + H(�(t)) ⎬1 − �j�(t)�t� + o(�t)
j�=�(t) j �=�(t)
� � �
= H(j)�j�(t)�t + H(�(t)) 1 + ��(t)�(t) �t + o(�t)
j�=�(t)
⎛ ⎣
�
E {H(�(t + �t)) | �(t)} = H(�(t)) + � H(j)�j�(t)⎤ �t + o(�t)
j
We use this in the derivation of the Bellman equation.
Equation
Stochastic
⎫� ⎭
T
J (x(t), �(t), t) = min E g(x(s), u(s))ds + F (x(T ))
u(s), t
t�s<T
Equation
Stochastic
⎧
⎧
�� t+�t
= min E g(x(s), u(s))ds
u(s),
⎧
⎧
⎧ t
�
0�s�t+�t
⎨
�� �⎧
T ⎧
u(s), t+�t
⎧
t+�t�s�T
Equation
Stochastic
⎫�
t+�t
= min Ẽ g(x(s), u(s))ds
u(s), t
t�s�t+�t
⎨
Equation
Stochastic
J (x(t), �(t), t) =
�
min Ẽ g(x(t), u(t))�t + J (x(t), �(t + �t), t) +
u(t)
αJ αJ
�
(x(t), �(t + �t), t)�x(t) + (x(t), �(t + �t), t)�t + o(�t).
αx αt
where
�x(t) = x(t + �t) − x(t) = f (x(t), �(t), u(t), t)�t + o(�t)
Equation
Stochastic
αJ αJ ⎩
Equation
Stochastic
�
J (x, �, t) =
�
�
αx αt ⎪
Equation
Stochastic
�
�
Equation
Stochastic
αJ �
− (x, �, t) = J (x, j, t)�j�+
αt j
αJ
� �
min g(x, u) + (x, �, t)f (x, �, u, t)
u αx
Equation
Stochastic
Equation
Stochastic
� J �T
Equation
Stochastic
Manufacturing
System Control
� T
min E g(x(t))dt
0
subject to:
dxi(t)
= ui(t) − di, i = 1, ..., N
dt
prob(�(t + �t) = 0|�(t) = 1) = p�t + o(�t)
Manufacturing
System Control
αJ �
− (x, �, t) = J (x, j, t)�j�+
αt j
αJ
� �
min g(x) + (x, �, t)(u − d)
u∗�(�) αx
Manufacturing
System Control
αW
� �
min g(x) + (x, �, t)(u − d)
u∗�(�) αx
Recall that �
�j� = 0...
j
Manufacturing
System Control
so
�
J� = W (x, j)�j�+
j
αW
� �
min g(x) + (x, �, t)(u − d)
u∗�(�) αx
for � = 0, 1
Manufacturing
System Control
αW
� �
�
J = g(x) + W (x, 0)p − W (x, 1)p + min (x, 1)(u − d)
u∗�(1) αx
for � = 1.
Manufacturing
� dW
J = g(x) + W (x, 1)r − W (x, 0)r − (x, 0)d,
dx
for � = 0,
dW
� �
�
J = g(x) + W (x, 0)p − W (x, 1)p + min (x, 1)(u − d)
0�u�µ dx
for � = 1.
Manufacturing
System Control
91–120.
When � = 0, u = 0.
When � = 1,
• if dW
dx
< 0, u = µ,
• if dW
dx
= 0, u unspecified,
• if dW
dx
> 0, u = 0.
Manufacturing
System Control
• if � = 1,
dt+Z
δ if x < Z, u = µ,
δ if x = Z, u = d,
hedging point Z
δ if x > Z, u = 0;
surplus x(t)
• if � = 0,
demand dt
δ u = 0.
How do we choose Z?
c 2007 Stanley B. Gershwin.
Copyright � 71
Flexible Single-part-type case
Manufacturing
� Z
J � = Eg(x) = g(Z)P (Z, 1)+ g(x) [f (x, 0) + f (x, 1)] dx
−�
f (x, 0) = Aebx
d
f (x, 1) = A µ−d ebx
P (Z, 1) = A dp ebZ
where
r p
b= −
d µ−d
and A is chosen so that
� Z
[f (x, 0) + f (x, 1)] dx + P (Z, 1) = 1
−�
Manufacturing
• if Z > 0,
� 0
J � = g+ZP (Z, 1) − g−x [f (x, 0) + f (x, 1)] dx
−�
� Z
+ g+x [f (x, 0) + f (x, 1)] dx.
0
Manufacturing
To minimize J �:
� ⎬
g
ln Kb(1 + g− )
• if g+ − Kb(g+ + g−) < 0, Z =
+
.
b
• if g+ − Kb(g+ + g−) ≈ 0, Z = 0
where K =
µp µp 1 µp
� �
= =
b(µbd − d2 b + µp) b(r + p)(µ − d) b db(µ − d) + µp
� 0
prob(x � 0) = (f (x, 0) + f (x, 1))dx
−�
0
d
� ��
=A 1+ ebxdx
µ−d −�
d 1 µ
� �
=A 1+ =A
µ−d b b(µ − d)
bp(µ − d) µ
� �
−bZ
= e
db(µ − d) + µp b(µ − d)
µp
� �
= e−bZ
db(µ − d) + µp
Manufacturing
Or,
µp 1 g+
� � � � ��
prob(x � 0) = max 1,
db(µ − d) + µp Kb g + + g−
That is,
µp g+
• if > , then Z = 0 and
µp + bd(µ − d) g+ + g−
µp
prob(x � 0) = ;
µp + bd(µ − d)
µp g+
• if < , then Z > 0 and
µp + bd(µ − d) g+ + g−
g+
prob(x � 0) = .
g+ + g−
90
80
70
60
Z 50
40
30
20
10
0
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
d
60
50
40
Z
30
20
10
12
10
8
Z
6
0
0 1 2 3 4 5 6 7 8 9 10 11
g−
Manufacturing
1200
1000
800
600
400
200
u2 (t) x2 d2 Type 2
r, p
u2
1/2�
0 1/1� u1
Manufacturing
System Control
u∗�(�) αx
Partial solution of LP:
1
0 1/ � u1
u1 = µ 1 u1 = u2 = 0
u2 = 0
dx
dt
x1
u1 = 0
u2 = µ 2
u2
2
1/ �
u2
1/ � 2
0 1/ � 1 u1
1
0 1/ � u1
1/ �
u1 = u2 = 0
u2 = 0
1
1/ � u1
dx
0
dt
x1
u1 = 0
u2 = µ 2
u2
2
1/ �
u2
1/ � 2
0 1/ � 1 u1
1
0 1/ � u1
e56
u2 x2
�
e 45
2 1
e23
6 5
Z
4
e 34 3
I ¥ e61
3 6
e61 d x1
e23
u1 e56
1 2
4 5
e34
e12
e45
Manufacturing
System Control
• Operating Machine M
according to the hedging
M
• D is a demand generator .
M
δ Whenever a demand arrives, D
sends a token to B.
FG
• S is a synchronization machine. S
finitely fast.
D
Manufacturing
Machine
Operator
• An operation cannot take
place unless there is a
Part
Operation
Part token available.
Consumable Waste
Token Token
• Tokens authorize
production.
possible.
To control
M1 B1 M2 B2 M3
M1 B1 M2 B2 M3
SB 1 SB 2 SB 3
S1 S2 S3
SB 1 SB 2 SB 3
S1 S2 S3
systems
Production
and Demand
production P(t) • Objective is to keep
earliness
cumulative production
close to cumulative
demand.
• Surplus-based policies
look at vertical
differences between the
surplus/backlog x(t)
graphs.
demand D(t) • Time-based policies look
t
at the horizontal
differences.
Token flow
0.875 30
n1
n2
n3
0.87
25
0.865
20
0.86
0.85
10
0.845
5
0.84
0.835 0
0 20 40 60 80 100 120 0 20 40 60
80 100 120
Population Population
Other policies
Multi-stage
systems Basestock
Demand
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT 2.852
Lecture 1: Overview
Stanley B. Gershwin
http://web.mit.edu/manuf-sys
Spring, 2010
◮ Required
◮ Manufacturing Systems Engineering (MSE) by Stanley B. Gershwin
◮ ... obtainable from author.
◮ Optional
◮ Factory Physics by Hopp and Spearman
◮ The Goal by Goldratt
◮ Stochastic Models of Manufacturing Systems by Buzacott and
Shanthikumar
◮ Production Systems Engineering by Li and Meerkov
Why?
◮ There are significant setup times from part family to part family. If setup
times are not considered, changeovers will occur too often, and waste
capacity.
◮ Any rules that that do not consider setup times in this factory will perform
poorly.
◮ Manufacturing System:
◮ Alternate terms:
◮ Factory
◮ Production system
◮ Fabrication facility
◮ Subsets of manufacturing systems, which are themselves systems, are
sometimes called cells, work centers, or work stations .
◮ Make to Order:
◮ production started only after order arrives
◮ appropriate for custom products, low volumes, expensive raw materials
◮ Make to Stock:
◮ large finished goods inventories needed to prevent stockouts
◮ small finished goods inventories needed to keep costs low
◮ Make to Order:
◮ excess production capacity (low utilization) needed to allow early,
reliable delivery promises
◮ minimal production capacity (high utilization) needed to to keep costs
low
Operator
Machine
Part Part
Operation
Consumable Waste
Operator
Machine
Part Part
Operation
Consumable Waste
... of any one of these items causes waiting in the rest of them and reduces
performance of the system.
Machine Buffer
M 1 B41 M 4
System with
B11 B51 B12 B32 B71 B31
reentrant flow and
B22
two part types
B61
M 2 M 3
B21
reject
inspection
rework
◮ delay
◮ capacity utilization
◮ Probability
◮ Basics, Markov processes, queues, other examples.
◮ Transfer lines
◮ Models, exact analysis of small systems, approximations of large
systems.
◮ Extensions of transfer line models
◮ Assembly/disassembly, loops, system optimization
◮ Real-time scheduling
◮ Quality/Quantity interactions
◮ New material
For information about citing these materials or our Terms of Use,visit: http://ocw.mit.edu/terms.
MIT 2.852
Lecture 14-16
Line Optimization
Stanley B. Gershwin
Spring, 2007
no
Evaluate Design D n Is performance satisfactory?
Pn = P(Dn )
yes
Quit
• Computation time per iteration includes evaluation time and the time to
determine the next design to be evaluated.
evaluations later.
Statement
• static/dynamic
• deterministic/stochastic
• X set: continuous/discrete/mixed
(Extensions: multi-criteria optimization, in which the set of all
Variables and
Objective
g(x) � 0
• This is equivalent to
Find t to maximize (or minimize) F (t)
when F (t) is differentiable, and f (t) = dF (t)/dt is
continuous.
• If f (t) is differentiable, maximization or minimization
depends on the sign of d2F (t)/dt2.
Objective
One-dimensional search
f(t)
with t
�0 = t0 and t�
1
=
t2.
t0 t2 t1
0 1.5
3
1.5
2.25
3
1.5
1.875
2.25
1.875
2.0625
2.25
1.875
1.96875
2.0625
1.96875
2.015625
2.0625
1.96875
1.9921875
2.015625
Example: 1.9921875
2.00390625
2.015625
1.9921875
1.998046875
2.00390625
1.998046875
2.0009765625
2.00390625
1.998046875
1.99951171875
2.0009765625
f (t) = 4 − t2
1.99951171875
2.000244140625
2.0009765625
1.99951171875
1.9998779296875
2.000244140625
1.9998779296875
2.00006103515625
2.000244140625
1.9998779296875
1.99996948242188
2.00006103515625
1.99996948242188
2.00001525878906
2.00006103515625
1.99996948242188
1.99999237060547
2.00001525878906
1.99999237060547
2.00000381469727
2.00001525878906
1.99999237060547
1.99999809265137
2.00000381469727
1.99999809265137
2.00000095367432
2.00000381469727
f(t)
df (t0)/dt.
t0 t1 t
� Choose t1 so that
(t 0 )
f (t0) + (t1 − t0) dfdt = 0. f(t1 )
t0
3
Example: 2.16666666666667
2.00641025641026
f (t) = 4 − t2 2.00001024002621
2.00000000002621
2
f(t)
• Newton search, approximate
tangent:
f(t0 )
approximate slope
s = f (tt11)−f (t0 )
. f(t2 )
t
t0 t2 t1
−t0
enough.
t0
0
3
1.33333333333333
Example: 1.84615384615385
2.03225806451613
f (t) = 4 − t2 1.99872040946897
1.99998976002621
2.0000000032768
1.99999999999999
2
J
Optimum
Steepest
Ascent
Directions
Optimum often found
x2 by steepest ascent or
hill-climbing methods.
x1
Variables and
maximizes Jn(t).
3. Set xn+1 = xn + t�n �J
�x
(xn).
4. Set n � n + 1. Go to Step 1.
�
also called steepest ascent or steepest descent .
c 2007 Stanley B. Gershwin.
Copyright � 17
Continuous Constrained
Variables and
Objective
solution is required to
plane.
x2
g(x 1 , x2 ) > 0
x1
Inequality constraints that are satisfied with equality are called effective or
active constraints.
20
8*(x+y)−.25*(x**4+y**4)
16
−4 12
8
4
0
−2 −4
−8
subject to x + y ≈ 0
2
6
−2 0 2 4 6
20
8*(x+y)−.25*(x**4+y**4)
16
−4 12
8
4
0
−2 −4
−8
subject to x − (x − y)2 + 1 ≈ 0
2
6
−2 0 2 4 6
x2
x1
Variables and
Objective
f (x), F , j(x), and J are scalars. We will call these problems duals of one
another. (However, this is not the conventional use of the term.) Under certain
conditions when the last inequalities are effective, the same x satisfies both
problems.
We will call one the primal problem and the other the dual problem .
Variables and
Objective
Generalization:
g(x) � 0 g(x) � 0
j(x) ≈ J f (x) � F
M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6
Buffer Space
Allocation
Primal problem:
�1
k−
Minimize Ni
i=1
Ni ≈ N MIN, i = 1, ..., k − 1.
Allocation
i
ri
200
Optimal curve
P=0.8800
P=0.8825
P=0.8850
P=0.8875
P P=0.8900
P=0.8925
150 P=0.8950
0.91
P=0.8975
0.9 P=0.9000
0.89
0.88
0.87
100
N2
0.86
0.85
0.84
0.83
100 50
90
80
70
60
10 20 50
30 40 N2
40 50 30
60 70 20 0
N1 80 90 100 10 0 50 100 150 200
N1
r1 = .35 r2 = .15 r3 = .4
p1 = .037 p2 = .015 p3 = .02
e1 = .904 e2 = .909 e3 = .952
MIN
Ni � N , i = 1, ..., k − 1.
Difficulty: If all the buffers are larger than N MIN, the solution will satisfy
P (N1, ..., Nk−1) = P �. (Why?) But P (N1, ..., Nk−1) is nonlinear and
cannot be expressed in closed form. Therefore any solution method will have
to search within this constraint and all steps will be small and there will be
many iterations.
It would be desirable to transform this problem into one with linear constraints.
�1
k−
TOTAL
subject to Ni � N specified
i=1
MIN
Ni � N , i = 1, ..., k − 1.
All the constraints are linear. The solution will satisfy the N TOTAL constraint
with equality (assuming the problem is feasible). (1. Why? 2. When would the
problem be infeasible?) This problem is consequently relatively easy to solve.
Buffer Space
Allocation N 2
Dual problem
TOTAL
N 1 +N2 +N3 = N
Constraint set
(if N MIN = 0). N1
N3
Buffer Space
Allocation Primal strategy
1. Guess N TOTAL.
Pk−1
i=1
N � N MIN
, i = 1, ..., k − 1.
i
Calculate gradient g.
Calculate search
direction p.
(Here, p = ĝ.)
^ ^
Is N close to N? NO Set N = N
YES
N is the solution.
Terminate.
Initial Guess
100
N2
50
0
0 50 100 150 200
N1
0.9
0.88
Maximum average production rate
0.86
0.84
0.82
0.8
0.78
0 50 100 150 200
Total buffer space
P MAX(N TOTAL
) as a function of N TOTAL
.
MIN
0 0
Set N = (k-1)N . Calculate P (N ).
MAX
1
Solve the dual to obtain P (N ). Set j = 2.
MAX
j
Calculate N from (7).
YES
average WIP
distribution if all
Average buffer level
30
Ni = 53,
i = 1, ..., 19 10
0
0 2 4 6 8 10 12 14 16 18 20
Buffer
• This shows the optimal distribution of buffer space and the resulting
distribution of average inventory .
25
20
Buffer Size/Average buffer level
15
10
0
0 5 10 15 20
Buffer
Observations:
• The optimal distribution of buffer space does not look like the
distribution of inventory in the line with equal buffers. Why not?
Explain the shape of the optimal distribution.
• The distribution of average inventory is not symmetric.
• This shows the ratios of average inventory to buffer size with equal buffers
and with optimal buffers.
1
Equal buffers
Optimal buffers
0.8
Ratio = Average buffer level/Buffer Size
0.6
0.4
0.2
0
0 5 10 15 20
Buffer
Buffer Space
Allocation
Buffer Space
Allocation
Buffer Space
Allocation
Buffer Space
Allocation
Buffer Space
Allocation
Solution
60
Line Space
Buffer Size
40
30 Case 1
430
20
Case 2
485
Case 3
523
10
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
Buffer
Buffer Space
Allocation
• This shows the optimal distribution of buffer space and the resulting
distribution of average inventory for Case 3.
55
50
45
40
Buffer Size/Average buffer level
35
30
25
20
15
10
0
0 5 10 15 20
Buffer
Buffer Space
Allocation
• This shows the ratio of average inventory to buffer size with optimal buffers
for Case 3.
0.7
0.65
0.6
Ratio = Average buffer level/Buffer Size
0.55
0.5
0.45
0.4
0.35
0.3
0 5 10 15 20
Buffer
Buffer Space
Allocation
50
45
40
Buffer Size/Average buffer level
35
30
25
20
15
10
0
0 5 10 15 20
Buffer
Buffer Space
Allocation
• This shows the ratio of average inventory to buffer size with optimal buffers
for Case 4.
0.75
0.7
0.65
Ratio = Average buffer level/Buffer Size
0.6
0.55
0.5
0.45
0.4
0.35
0.3
0 5 10 15 20
Buffer
Buffer Space
Allocation
• This shows the optimal distribution of buffer space and the resulting
distribution of average inventory for Case 5.
55
50
45
40
Buffer Size/Average buffer level
35
30
25
20
15
10
0
0 5 10 15 20
Buffer
Buffer Space
Allocation
• This shows the ratio of average inventory to buffer size with optimal buffers
for Case 5.
0.7
0.65
0.6
Ratio = Average buffer level/Buffer Size
0.55
0.5
0.45
0.4
0.35
0.3
0 5 10 15 20
Buffer Space
Allocation
• This shows the optimal distribution of buffer space and the resulting
distribution of average inventory for Case 6.
55
50
45
40
Buffer Size/Average buffer level
35
30
25
20
15
10
0
0 10 20 30 40 50
Buffer
Buffer Space
Allocation
• This shows the ratio of average inventory to buffer size with optimal buffers
for Case 6.
0.7
0.65
0.6
Ratio = Average buffer level/Buffer Size
0.55
0.5
0.45
0.4
0.35
0.3
0 10 20 30 40 50
Buffer
Buffer Space
Allocation
Allocation
790
800
810
• Three-machine, continuous
820
827
830
832
material line.
Profit
840
830
820
810
800
790
• ri = .1, pi = .01,µi = 1.
780
770
760
750 • � = 1000P (N1, N2)
20 40
60
80
N2
−(n̄1 + n̄2).
40
60 20
N1 80
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT 2.852
Lecture 10–12
Stanley B. Gershwin
http://web.mit.edu/manuf-sys
Spring, 2010
M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6
◮ Difficulty:
◮ If all buffer sizes are N and the length of the line is k, the number of
states is S = 2k (N + 1)k−1 .
◮ if N = 10 and k = 20, S = 6.41 × 1025 .
Line L(i)
M u (i) M d (i)
There are 4(k − 1) unknowns for the deterministic processing time line:
The decomposition equations relate ru (i), pu (i), rd (i), and pd (i) to behavior in
the real line and in other two-machine lines.
◮ Conservation of flow, equating all production rates.
◮ Flow rate/idle time, relating production rate to probabilities of starvation
and blockage.
◮ Resumption of flow, relating ru (i) to upstream events and rd (i) to
downstream events.
◮ Boundary conditions, for parameters of Mu (1) and Md (k − 1).
◮ specified parameters, or
◮ unknowns, or
◮ functions of parameters or unknowns derived from the two-machine line
analysis.
Notation convention:
◮ Items that pertain to the real line L will have i in the subscript.
Example: ri .
E (i ) = E (1), i = 2, . . . , k − 1.
Problem:
◮ This expression involves a joint probability of two buffers taking
certain values at the same time.
◮ But we only know how to evaluate two-machine, one-buffer lines, so
we only know how to calculate the probability of one buffer taking on
a certain value at a time.
Observation:
0 Ni
The only way to have ni −1 = 0 and ni = Ni is if
◮ Mi −1 is down or starved for a long time
◮ and Mi is up
◮ and Mi +1 is down or blocked for a long time
◮ and to have exactly Ni parts in the two buffers.
Then
prob [ni −1 > 0 and ni < Ni ]
= 1 − prob [ni −1 = 0 or ni = Ni ]
Therefore
or,
E (i − 1) E (i)
ps (i − 1) = 1 − ; pb (i) = 1 −
ed (i − 1) eu (i)
so (replacing ≈ with =),
E (i − 1) E (i)
E i = ei 1 − 1 − − 1−
ed (i − 1) eu (i)
Since
rd (i − 1) ru (i)
ed (i − 1) = ; eu (i) = ,
pd (i − 1) + rd (i − 1) pu (i) + ru (i)
we can write
pd (i − 1) pu (i) 1 1
+ = + − 2, i = 2, . . . , k − 1
rd (i − 1) ru (i) E (i) ei
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
M u (i) M d (i)
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
M u (i) M d (i)
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
0 0
M u (i) M d (i)
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
0 0 0
M u (i) M d (i)
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
0 0 0 0
M u (i) M d (i)
... etc.
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
M u (i−1) M d (i−1)
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
M u (i−1) M d (i−1)
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
M u (i) M d (i)
Comparison
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
M u (i−1) M d (i−1)
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
0 0
M u (i) M d (i)
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
0 0
M u (i−1) M d (i−1)
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
0 0 0
M u (i) M d (i)
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
0 0 0
M u (i−1) M d (i−1)
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3
0 0 0 0
M u (i) M d (i)
That is, when the Line L(i) observer sees a failure in Mu (i),
M u (i) M d (i)
M u (i−1) 0 M d (i−1)
Also, for the Line L(j) observer to see Mu (j) up, Mj must be up and Bj−1 must
be non-empty. Therefore,
Then
ru (i ) = prob [αu (i , t + 1) = 1 | αu (i , t) = 0]
= prob {αi (t + 1) = 1} and {ni −1 (t) > 0}
{αi (t) = 0} or {ni −1 (t − 1) = 0}
V = {αi (t) = 0}
W = {ni −1 (t − 1) = 0}
Important: V and W are disjoint.
prob (V and W ) = 0.
prob (U and (V or W ))
prob (U|V or W ) =
prob (V or W )
prob (V ) prob (W )
= prob (U|V ) + prob (U|W )
prob (V or W ) prob (V or W )
Note that
ni −1 (t − 1) = 0 ,
X (i) = prob (W
|V or W )
= prob ni −1 (t − 1) = 0ni −1 (t − 1) = 0 or αi (t) = 0 ,
X ′ (i) = prob (V |V or W )
= prob [αi (t) = 0 | {ni −1 (t − 1) = 0 or αi (t) = 0}] .
To evaluate
» ˛ –
˛
A(i − 1) = prob ni −1 (t) > 0 and αi (t + 1) = 1˛˛ni −1 (t − 1) = 0 :
Note that
◮ For Buffer i − 1 to be empty at time t − 1, Machine Mi must be up at time t − 1
and also at time t. It must have been up in order to empty the buffer, and it must
stay up because it cannot fail. Therefore αi (t) = 1.
◮ For Buffer i − 1 to be non-empty at time t after being empty at time t − 1, it
must have gained 1 part. For it to gain a part when αi (t) = 1, Mi must not have
been working (because it was previously starved). Therefore, Mi could not have
failed and A(i − 1) can therefore be written
» ˛ –
˛
A(i − 1) = prob ni −1 (t) > 0˛˛ni −1 (t − 1) = 0
» ˛ –
˛
A(i − 1) = prob ni −1 (t) > 0˛˛ni −1 (t − 1) = 0
Similarly,
Interpretation so far:
◮ ru (i ), the probability that Mu (i ) goes from down to up, is
Since these are the only two ways that Mu (i) can be down,
X ′ (i) = 1 − X (i)
X (i) = prob ni −1 (t − 1) = 0ni −1 (t − 1) = 0 or αi (t) = 0
prob [ni −1 (t − 1) = 0]
=
prob [ni −1 (t − 1) = 0 or αi (t) = 0]
ps (i − 1)
=
prob [ni −1 (t − 1) = 0 or αi (t) = 0]
pu (i) pu (i)
prob [αu (i) = 1 and ni (t − 1) < Ni ] = E (i)
ru (i) ru (i)
Therefore,
ps (i − 1)ru (i)
X (i) =
pu (i)E (i)
and
where
pb (i)rd (i − 1)
Y (i) = .
pd (i − 1)E (i − 1)
ru (1) = r1
pu (1) = p1
rd (k − 1) = rk
pd (k − 1) = pk
We now have 4(k − 1) equations in 4(k − 1) unknowns ru (i), pu (i), rd (i), pd (i),
i = 1, ..., k − 1.
pd (i − 1) pu (i) 1 1
FRIT: + = + −2
rd (i − 1) ru (i) E (i) ei
Upstream equations:
ps (i − 1)ru (i)
ru (i) = ru (i − 1)X (i) + ri (1 − X (i)); X (i) =
pu (i)E (i)
„ «
1 1 pd (i − 1)
pu (i) = ru (i) + −2−
E (i) ei rd (i − 1)
Downstream equations:
pb (i + 1)rd (i)
rd (i) = rd (i + 1)Y (i + 1) + ri +1 (1 − Y (i + 1)); Y (i + 1) = .
pd (i)E (i)
„ «
1 1 pu (i + 1)
pd (i) = rd (i) + −2−
E (i + 1) ei +1 ru (i + 1)
ps (i − 1)ru (i)
ru (i) = ru (i − 1)X (i) + ri (1 − X (i)); X (i) =
pu (i)E (i − 1)
„ «
1 1 pd (i − 1)
pu (i) = ru (i) + −2−
E (i − 1) ei rd (i − 1)
Modified downstream equations:
pb (i + 1)rd (i)
rd (i) = rd (i + 1)Y (i + 1) + ri +1 (1 − Y (i + 1)); Y (i + 1) = .
pd (i)E (i + 1)
„ «
1 1 pu (i + 1)
pd (i) = rd (i) + −2−
E (i + 1) ei +1 ru (i + 1)
◮ etc.
Question: When will this work well, and when will it work badly?
.7
p 2 = .05
.6 .15
.25
.5
E
.4
.3 r1 = r2 = r3 = .2
.2
p1 = .05
.35
.45
N1 = N2 = 5
.1 .55
.65
.1 .2 .3 .4 .5 .6 .7
p3
7
I .25
6
.35
5 .45
.55
4 .65 r1 = r2 = r3 = .2
p1 = .05
3 N1 = N2 = 5
2
0 .1 .2 .3 .4 .5 .6 .7
p3
15
Average Buffer Level
Distribution of
10
material in a line
with identical
5 machines and buffers.
Explain the shape.
0
0 10 20 30 40 50
Buffer Number
Analytical vs simulation
Time steps Decomp 10,000 50,000 200,000
Production rate 0.786 0.740 0.751 0.750
16
Analytic
10,000 steps
50,000 steps
200,000 steps
14
12
Average buffer leve
10
4
0 5 10 15 20 25 30 35 40 45 50
Buffer number
15
Average Buffer Level
10
Same as Slide 55
except that Buffer 25
is now huge.
5
15
Average Buffer Level
10
Upstream half of
Slide 57.
5
Explain the shape.
0
0 10 20 30 40 50
Buffer Number
50 Machines; upstream r=0.1; p=0.01; mu=1.0; N=20.0; N(25)=2000.0 downstream r=0.15; p=0.01; mu=1.0, N=50.0
20
15
Average Buffer Level
10
Upstream same as
Slide 58; downstream
faster.
5
50 Machines; upstream r=0.1; p=0.01; mu=1.0; N=20.0; N(25)=2000.0 downstream r=0.09; p=0.01; mu=1.0, N=50.0
20
15
Average Buffer Level
10
Upstream same as
Slide 58; downstream
faster.
5
50 Machines; upstream r=0.1; p=0.01; mu=1.0; N=20.0; N(25)=2000.0 downstream r=0.09; p=0.01; mu=1.0, N=15.0
20
15
Average Buffer Level
Downstream same as
10
downstream half of
Slide 57; upstream
5 faster.
Explain the shape.
0
0 10 20 30 40 50
Buffer Number
15
Same as upstream
Average Buffer Level
half of Slide 61
10
except for Machine
26.
5
Explain the shape.
How was Machine 26
0
0 10 20 30 40 50 chosen?
Buffer Number
15
Average Buffer Level
Operation time
10
bottleneck. Identical
machines and buffers,
5 except for M10 .
Explain the shape.
0
0 10 20 30 40 50
Buffer Number
15
Average Buffer Level
10
Failure time
bottleneck.
5
Explain the shape.
0
0 10 20 30 40 50
Buffer Number
15
Average Buffer Level
10
Repair time
bottleneck.
5
Explain the shape.
0
0 10 20 30 40 50
Buffer Number
The observer in each buffer sees exactly the same behavior. Consequently, the
decomposed pseudo-machines are all identical and symmetric. For each i,
ru (i) = ru (i − 1) = rd (i) = rd (i − 1)
pu (i) = pu (i − 1) = pd (i) = pd (i − 1).
ru (i ) = ru (i − 1)X (i ) + ri (1 − X (i ))
ru = ru X + r (1 − X )
so ru (i ) = rd (i ) = r .
FRIT says
pd (i −1) pu (i ) 1 1
rd (i −1) + ru (i ) = E (i ) + ei −2
2pu 1 1
r = E + e −2
Ni = 10, i = 1 ...., k - 1
.32
.31
.30
.29
.28
.27
.26 Ni = 5, i = 1, ..., k - 1
.25
.24
.23
.22
.21
3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Number of Machines k
25
n1
n2
n3
n4
20
n5
n6
n7
Continuous material model.
◮ Eight-machine,
Average Buffer Level
15 seven-buffer line.
◮ For each machine,
10 r = .075, p = .009,
µ = 1.2.
5 ◮ For each buffer (except
Buffer 6), N = 30.
0
0 5 10 15 20 25 30 35 40 45 50
N6
M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 B7 M8
25
n1
n2
n3
n4
n5
n6
20 n7
Average Buffer Level
0
0 5 10 15 20 25 30 35 40 45 50
N6
M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 B7 M8
M1 M2 M3 B3 M4 M5 M6 B6 M7 M8 M9
1
8 buffers
2 buffers
0.95
◮ Continuous model; all
0.9 machines have
P 0.85
r = .019, p = .001,
µ = 1.
0.8
◮ What are the
0.75
asymptotes?
0.7
◮ Is 8 buffers always
0.65
0 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 faster?
1
8 buffers
2 buffers
0.95
ru (i)δt = probability that Mu (i) goes from down to up in (t, t + δt), for small δt;
rd (i)δt = probability that Md (i) goes from down to up in (t, t + δt), for small δt;
All of these, except for the Interruption of Flow equations, are similar to those of
the deterministic processing time case.
The first two sets of equations describe the interruptions of flow caused by
machine failures. By definition,
pu (i)δt = prob αu (i; t + δt) = 0αu (i; t) = 1 and ni (t) < Ni ,
or,
pu (i)δt = prob Mu (i) down at t + δt Mu (i) up and ni < Ni at t .
Mu (i) is down if
1. Mi is down, or
2. ni −1 = 0 and Mu (i − 1) is down.
Mu (i) is up for all other states of the transfer line upstream of Buffer Bi .
Therefore, Mu (i) is up if
ru (i − 1)p(i − 1; 001)
pu (i) = pi + .
Eu (i)
and similarly,
rd (i + 1)p(i + 1; N10)
pd (i) = pi +1 + .
Ed (i)
in which p(i − 1; 001) is the steady state probability that line L(i − 1) is in state
(0, 0, 1) and p(i + 1; N10) is the steady state probability that line L(i + 1) is in
state (Ni +1 , 1, 0).
P(i ) = P(1), i = 2, . . . , k − 1.
1 1 1 1
+ = + ; i = 2, . . . , k − 1.
ei µi P ed (i − 1)µd (i − 1) eu (i )µu (i )
( )
1 1
µu (i) = 1 1 1 ,
eu (i) P(i ) + ei µi − ed (i −1)µ d (i −1)
i = 2, · · · , k − 1,
( )
1 1
µd (i) = 1 1 1 ,
ed (i) P(i ) + ei +1 µi +1 − eu (i +1)µu (i +1)
i = 1, · · · , k − 2.
ru (1) = r1
pu (1) = p1
µu (1) = µ1
rd (k − 1) = rk
pd (k − 1) = pk
µd (k − 1) = µk
.20
i Parameters
Decomposition ri pi µi Ni
Simulation
1 .05 .03 .5 8
.10 2 .06 .04 — 8
3 .05 .03 .5
0
0 .4 .8 1.2 1.6 1.8
ρ2
M u (i) M d (i)
M u (i) M d (i)
Assume that ... < µi −2 < µi −1 < µi < µi +1 < .... Assume all the machines are up and
Bi is not full. Then the observer in Bi actually sees material entering Bi ...
◮ at rate µi if Bi −1 is not empty;
◮ at rate µi −1 if Bi −2 is not empty and Bi −1 is empty;
◮ at rate µi −2 if Bi −3 is not empty and Bi −2 is empty and Bi −1 is empty;
◮ etc.
Therefore, this approximation may break down if the µi are very different.
We have the same 6(k − 1) unknowns, so we need 6(k − 1) equations. They are,
as before,
◮ Interruption of flow ,
◮ Resumption of flow,
◮ Conservation of flow,
◮ Flow rate/idle time,
◮ Boundary conditions.
They are the same as in the exponential processing time case except for the
Interruption of Flow equations.
„ „ ««
pi −1 (0, 1, 1)µu (i) µu (i − 1)
pu (i) = pi 1 + −1 +
P(i) − pi (Ni , 1, 1)µd (i) µi
„ «
pi −1 (0, 0, 1)µu (i)
ru (i − 1), i = 2, · · · , k − 1
P(i) − pi (Ni , 1, 1)µd (i)
and, similarly,
„ „ ««
pi +1 (Ni +1 , 1, 1)µd (i) µd (i + 1)
pd (i) = pi +1 1 + − 1) +
P(i) − pi (0, 1, 1)µu (i) µi +1
„ «
pi +1 (Ni +1 , 1, 0)µd (i + 1)
rd (i + 1), i = 1, · · · , k − 2
P(i) − pi (0, 1, 1)µu (i)
◮ Assembly/Disassembly Systems
◮ Buffer Optimization
◮ Effect of Buffers on Quality
◮ Loops
◮ Real-Time Control
◮ ????
For information about citing these materials or our Terms of Use,visit: http://ocw.mit.edu/terms.
MIT 2.852
Lectures 18–19
Loops
Stanley B. Gershwin
Spring, 2007
Statement
B1 M2 B2 M3 B3
M1 M4
B6 M6 B5 M5 B4
Problem
Statement
• Limited pallets/fixtures.
• CONWIP (or hybrid) control systems.
• Extension to more complex systems and policies.
N
N1
B1 N*
r1 r2 r1 r2
p1 M1 M2 p2 =� p1 M1 B M2 p2
N2 B2
P loop(r1, p1, r2, p2, N1, N2) = P line(r1, p1, r2, p2, N �)
where
N � = min(n, N1) − max(0, n − N2)
r2 .1
p2 .01 0.88
N1 20
production rate
0.875
0.87
0.865
0.86
0 10 20 30 40 50 60
population
population
method
population
method
population
method
�
n̄i = N
i
population
method
Behavior:
population
method
� Suppose the population is smaller than the smallest buffer. Then there
will be no blockage. The expected population method does not take this
into account.
B1 M2 B2 M3 B3
M1 M4
B6 M6 B5 M5 B4
that could block it if they stayed down for a long enough time.
that could starve it if they stayed down for a long enough time.
10 10 10
B1 M2 B2 M3 B3
M1 M4
7 0 0
B6 M6 B5 M5 B4
7 10 10
B1 M2 B2 M3 B3
M1 M4
0 0 10
B6 M6 B5 M5 B4
Mi Mj
Range of blocking of M j
Range of blocking of M i
Md(i)
Md(j)
L(1)
Range of blocking of M6
M u(1) B(1)
d
M (1)
B1 M2 B2 M3 B3
Range of blocking of M1
M1 M4
B1 M2 B2 M3 B3
B6 M6 B5 M5 B4
Range of starvation of M1 M1 M4
d u
M (6) B(6) M (6)
B6 M6 B5 M5 B4
Range of blocking of M1
Range of blocking of M2
10 10 10
B1 M2 B2 M3 B3
M1 M4
7 0 0
B6 M6 B5 M5 B4
M5 can block M2. Therefore the parameters of M5 should directly affect the
parameters of Md(1) in a decomposition. However, M5 cannot block M1 so
the parameters of M5 should not directly affect the parameters of Md(6).
Therefore, the parameters of Md(6) cannot be functions of the parameters of
Md(1).
Mode Line
Decomposition
1,2, 5,6,7,
3,4 8,9,10
1,2, 5,6,7,
3,4 8,9,10
1,2, 5,6,7,
3,4 8,9,10
up
down
Mode Line
Decomposition
1,2, 5,6,7,
3,4 8,9,10
1,2, 5,6,7,
3,4 8,9,10
• Remote modes: i is the building block number; j and f are the machine
d
jf (i), p
two-machine line i in an iterative method.
Mode Line
Decomposition
Consider
Ps,jf (i − 1) Pb,jf (i)
pu
jf (i) = rjf ; pdjf (i − 1) = rjf
E(i) E(i − 1)
In a line, jf refers to all modes of all upstream machines in the
first equation; and all modes of all downstream machines in the
second equation.
We can interpret the upstream machines as the range of
starvation and the downstream machines as the range of
blockage of the line.
B1 M2 B2 M3 B3
M1 M4
B6 M6 B5 M5 B4
10 10 10
B1 M2 B2 M3 B3
M1 M4
5 2 0
B6 M6 B5 M5 B4
5
d u
M (6) B6 M (6)
• The B6 observer knows how many parts there are in his buffer.
• If there are 5, he knows that the modes he sees in M d(6) could
be those corresponding to the modes of M1, M2, M3, and M4.
10 10 9
B1 M2 B2 M3 B3
M1 M4
8 0 0
B6 M6 B5 M5 B4
8
d u
M (6) B6 M (6)
buffer size
• When M1 fails for a long time,
20 3
B4 and B3 fill up, and there
B1 M2 B2
18
13 1
is one part in B2. Therefore
there is a threshold of 1 in B2.
• When M2 fails for a long time,
M1
M3 B1 fills up, and there is one
part in B4. Therefore there is
a threshold of 1 in B4.
1
B4 M4 B3
• When M3 fails for a long time,
15 5 B2 fills up, and there are 18
threshold population = 21
parts in B1. Therefore there is
c 2007 Stanley B. Gershwin.
Copyright �
a threshold of 18 in B1. 34
Transformation
buffer size
20 3
• When M4 fails for a long time,
B1 B2
18
M2 B3 and B2 fill up, and there
13 1
are 13 parts in B1. Therefore
there is a threshold of 13 in
M3
B1 .
M1
• Note: B1 has two thresholds
and B3 has none.
1
• Note: The number of thresh
B4 M4 B3
15 5
olds equals the number of ma
chines.
threshold population = 21
20
B1 M2 B2
3
M*1 M2
M*4
M3
M3
M1
M*3
15 5
B4 M4 B3 M1 M4
M*2
M1 2 M*3 5 M*4 13 M2
B1
1 2
M*2 14 M4 5 M3 1 M*1
B4 B3 B2
Numerical
Results
maximum of 21%.
maximum of 44%.
Numerical
Results
0.815
Decomposition
• Error is very small, but there
0.81
Simulation
are apparent discontinuities.
0.805
0.795
0.785
0.78
introduce reliable machines in
cases where there would be
0.775
0.77
no thresholds.
0 5 10 15 20 25 30
Population
* M1 M3
B4 M4 B3
Numerical
Results
0.9
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
Numerical
Results
10
b1 average
9 b2 average
b3 average
b4 average
8
7
B1 M2 B2
6
*
4
M1 M3
3
1
B4 M4 B3 0
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT 2.852
Manufacturing Systems Analysis
Lectures 2–5: Probability
Basic probability, Markov processes, M/M/1 queues, and more
Stanley B. Gershwin
http://web.mit.edu/manuf-sys
Spring, 2010
Question: What is the probability that it will show heads on the next flip?
Probability 6= Statistics
if that is the opinion (ie, belief or state of mind) of an observer before the
experiment is performed.
Axioms of probability
◮ If Ei
◮
i Ei = U, and
◮ Ei ∩ Ej = φ for
all i and j,
�
◮ then i prob (Ei ) = 1
Venn diagrams
Venn diagrams
A B
U
A B
AUB
prob (A ∩ B)
prob (A|B) =
prob (B)
U
A B
U
A B
AUB
Example
Throw a die.
◮ A is the event of getting an odd number (1, 3, 5).
◮ B is the event of getting a number less than or equal to 3 (1, 2, 3).
Then prob (A) = prob (B) = 1/2 and
prob (A ∩ B) = prob (1, 3) = 1/3.
Also, prob (A|B) = prob (A ∩ B)/ prob (B) = 2/3.
Note: prob (A|B) being large does not mean that B causes A. It only
means that if B occurs it is probable that A also occurs. This could be
due to A and B having similar causes.
Similarly prob (A|B) being small does not mean that B prevents A.
U
C A D
U U
A C A D
◮ Also
prob (C ∩ B) prob (C )
prob (C |B) = = because C ∩ B = C .
prob (B) prob (B)
prob (D)
Similarly, prob (D|B) =
prob (B)
2.852 Manufacturing Systems Analysis 18/128 c
Copyright �2010 Stanley B. Gershwin.
Probability Basics
Law of Total Probability
U
A B
A ∩ B = A ∩ (C ∪ D) =
A ∩ C + A ∩ D − A ∩ (C ∩ D) =
A∩C +A∩D
Therefore,
◮ Or,
prob (A|B) =
prob (A|C ) prob (C |B) + prob (A|D) prob (D|B).
prob (A)
= prob (A ∩ C ) + prob (A ∩ D)
= prob (A|C ) prob (C ) + prob (A|D) prob (D).
U
A D
U
A
C
U D=C
A C
�
prob (Ej ) = 1
j
and
�
prob (A) = j prob (A|Ej ) prob (Ej ).
prob (A and B) =
�
j prob (A|B and Ej ) prob (Ej and B).
Let X be the number of heads. The order does not matter! Then X = 0, 1, 2,
or 3.
◮ prob (X = 0)=1/8; prob (X = 1)=3/8; prob (X = 2)=3/8;
prob (X = 3)=1/8.
3/8
1/4
1/8
0 1 2 3 x
prob(X B = 0) = p.
prob(X B = 1) = 1 − p.
The sum of n independent Bernoulli random variables XiB with the same
parameter p is a binomial random variable X b .
n
�
Xb = XiB
i =0
n!
prob (X b = x) = p x (1 − p)(n−x)
x!(n − x)!
X g = min{XiB = 0}
i
◮ For t > 1,
prob (X g > t)
= prob (X g > t|X g > t − 1) prob (X g > t − 1)
Alternative view
1−p 1
1 0
Consider a two-state system. The system can go from 1 to 0, but not from
0 to 1.
Geometric p
(Why?)
we have
Geometric p
p(0, t) = 1 − (1 − p)t ,
p(1, t) = (1 − p)t .
Geometric p
Geometric Distribution
0.8
0.6
probability
p(0,t)
p(1,t)
0.4
0.2
0
0 10 20 30
t
Geometric p
Recall that once the system makes the transition from 1 to 0 it can never
go back. The probability that the transition takes place at time t is
Note: If the transition represents a machine failure, then 1/p is the Mean
Time to Fail (MTTF). The Mean Time to Repair (MTTR) is similarly
calculated.
Geometric 1
p
0
x(t + 1) = f (x(t), t)
where t is an integer and x(t) is a real or complex vector.
To determine x(t), we must also specify additional information, for example
initial conditions:
x(0) = c
Difference equations are similar to differential equations. They are easier to solve
numerically because we can iterate the equation to determine x(1), x(2), .... In fact,
numerical solutions of differential equations are often obtained by approximating them
as difference equations.
x(t + 1) = Ax(t)
Solution:
x(t) = At c
However, this form of the solution is not always convenient.
c = b1 + b2 + ... + bk
λ1 , λ2 , ..., λk are the eigenvalues of A and b1 , b2 , ..., bk are its eigenvectors, but we don’t
always have to use that explicitly to determine them. This is very similar to the solution
of linear differential equations with constant coefficients.
x(t) = bλt
and plug it into the difference equation. We find that λ must satisfy a kth
order polynomial, which gives us the k λs. We also find that b must
satisfy a set of linear equations which depends on λ.
Examples and variations will follow.
2 P
45
5
3 7
�
Pij is a probability. Note that Pii = 1 − m,m6=i Pmi .
2.852 Manufacturing Systems Analysis 46/128 c
Copyright �2010 Stanley B. Gershwin.
Markov processes
States and transitions
◮ The probability of the system being in that state at time 1 is very different from
the probability of it being in that state at time 2, which is very different from it
being in that state at time 3.
◮ The probability of the system being in that state at time 1000 is very close to the
probability of it being in that state at time 1001, which is very close to the
probability of it being in that state at time 2000.
Then, the system has reached steady state at time 1000.
1=up; 0=down.
1−p r 1−r
1 0
Solution
Guess
p(0, t) = a(0)X t
p(1, t) = a(1)X t
Then
Or,
a(0)X = a(0)(1 − r ) + a(1)p,
a(1)X = a(0)r + a(1)(1 − p).
or,
a(1)
X = 1−r + p,
a(0)
a(0)
X = r + 1 − p.
a(1)
so
rp
X =1−r +
X −1+p
or,
(X − 1 + r )(X − 1 + p) = rp.
Solution
Two solutions:
X = 1 and X = 1 − r − p.
a(1)
If X = 1, a(0) = pr . If X = 1 − r − p, a(1)
a(0) = −1. Therefore
Therefore
r r +p
p(0, 0) + p(1, 0) = 1 = a1 (0) + a1 (0) = a1 (0)
p p
So
p p
a1 (0) = and a2 (0) = p(0, 0) −
r +p r +p
Solution
After more simplification and some beautification,
Solution
Discrete Time Unreliable Machine
0.8
0.6
probability
p(0,t)
p(1,t)
0.4
0.2
0
0 20 40 60 80 100
t
Steady-state solution
As t → ∞,
p
p(0) → ,
r +p
r
p(1) →
r +p
which is the solution of
Steady-state solution
If the machine makes one part per time unit when it is operational, the
average production rate is
r 1
p(1) = = p
r +p 1+ .
r
Classification of states
A chain is irreducible if and only if each state can be reached from each
other state.
Let fij be the probability that, if the system is in state j, it will at some
later time be in state i . State i is transient if fij < 1. If a steady state
distribution exists, and i is a transient state, its steady state probability is
0.
Classification of states
The states can be uniquely divided into sets T , C1 , . . . Cn such that T is the set
of all transient states and fij = 1 for i and j in the same set Cm and fij = 0 for i
in some set Cm and j not in that set. If there is only one set C , the chain is
irreducible. The sets Cm are called final classes or absorbing classes and T is the
transient class.
Transient states cannot be reached from any other states except possibly other
transient states. If state i is in T , there is no state j in any set Cm such that
there is a sequence of possible transitions (transitions with nonzero probability)
from j to i.
Classification of states
C6
C7 C5
C1 T
C4
C2 C3
1
λ 4 λ
24 64
2 λ
45
5
3 7
1 0
� � �
�
pδt = prob α(t + δt) = 0�α(t) = 1 + o(δt).
�
p(0, t + δt) =
� � �
�
prob α(t + δt) = 0�α(t) = 1 prob [α(t) = 1]+
�
� � �
�
prob α(t + δt) = 0�α(t) = 0 prob[α(t) = 0].
�
or
p(0, t + δt) = pδtp(1, t) + p(0, t) + o(δt)
or
dp(0, t)
= pp(1, t).
dt
2.852 Manufacturing Systems Analysis 71/128 c
Copyright �2010 Stanley B. Gershwin.
Markov processes
Exponential
1 0
dp(1, t)
= −pp(1, t).
dt
If p(1, 0) = 1, then
p(1, t) = e −pt
and
p(0, t) = 1 − e −pt
Density function
Density function
0.8 0.8
0.7 0.7
0.6 0.6
0.5 0.5
0.4 0.4
0.3 0.3
0.2 0.2
0.1 0.1
0 0
0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5
t t
1p 1p
Density function
Density function
Exponential density
function and a small
number of actual
samples.
Continuous time
r
1 0
Continuous time
The probability distribution satisfies
p(0, t + δt) = p(0, t)(1 − r δt) + p(1, t)pδt + o(δt)
p(1, t + δt) = p(0, t)r δt + p(1, t)(1 − pδt) + o(δt)
or
dp(0, t)
= −p(0, t)r + p(1, t)p
dt
dp(1, t)
= p(0, t)r − p(1, t)p.
dt
Solution
� �
p p
p(0, t) = + p(0, 0) − e −(r +p)t
r +p r +p
p(1, t) = 1 − p(0, t).
As t → ∞,
p
p(0) → ,
r +p
r
p(1) →
r +p
Steady-state solution
If the machine makes µ parts per time unit on the average when it is
operational, the overall average production rate is
µr 1
µp(1) = =µ p
r +p 1+ .
r
µ
λ
◮ Exponential arrivals:
◮ If a part arrives at time s, the probability that the next part arrives
during the interval [s + t, s + t + δt] is e −λt λδt + o(δt) ≈ λδt. λ is
the arrival rate.
◮ Exponential service:
◮ If an operation is completed at time s and the buffer is not empty, the
probability that the next operation is completed during the interval
[s + t, s + t + δt] is e −µt µδt + o(δt) ≈ µδt. µ is the service rate.
Sample path
Number of customers in the system as a function of time.
n
6
5
4
3
2
1
t
State Space
λ λ λ λ λ λ λ
0 1 2 n−1 n n+1
µ µ µ µ µ µ µ
Performance Evaluation
Let p(n, t) be the probability that there are n parts in the system at time
t. Then,
and
Performance Evaluation
Or,
dp(n, t)
= p(n − 1, t)λ + p(n + 1, t)µ − p(n, t)(λ + µ),
dt
n>0
dp(0, t)
= p(1, t)µ − p(0, t)λ.
dt
Performance Evaluation
Let ρ = λ/µ. These equations are satisfied by
p(n) = (1 − ρ)ρn , n ≥ 0
if ρ < 1. The average number of parts in the system is
� ρ λ
n̄ = np(n) = = .
n
1−ρ µ−λ
From Little’s law , the average delay experienced by a part is
1
W = .
µ−λ
Performance Evaluation
Delay in a M/M/1 Queue
40
30
Delay
20
10
0
0 0.25 0.5 0.75 1
Arrival Rate
Performance Evaluation
W
100
0
0 0.5 1 1.5 2 λ
µ=1 µ=2
High density
Low density
M1 B1
x1
M2
B2
x2 M3
M1 B1 M2 B2 M3
M1 B1
x1
M2
B2
x2 M3
M1 B1 M2 B2 M3
Dimensionality
x2 One−dimensional density
Two−dimensional density
Probability distribution
of the amount of
Zero−dimensional density material in each of the
(mass)
two buffers.
x1
M1 B1 M2 B2 M3
Discrete approximation
x2
Probability distribution
of the amount of
material in each of the
two buffers.
x1
M1 B1 M2 B2 M3
Problem
1 0
Production rate = µ Production rate = 0
p
� �
r
Demand rate = d < µ . (Why?)
r +p
Problem: producing more than has been demanded creates inventory and is
wasteful. Producing less reduces revenue or customer goodwill. How can we
anticipate and respond to random failures to mitigate these effects?
Solution
u d
x
u=0
α= 0
u=0
How do we choose u?
Solution
dx(t)
Surplus, or inventory/backlog: = u(t) − d
dt
Production policy: Choose Z Cumulative
production
(the hedging point ) Then, Production and Demand
◮ if α = 1,
dt+Z
◮ if x < Z , u = µ,
hedging point Z
◮ if x = Z , u = d,
surplus x(t)
◮ if x > Z , u = 0;
◮ if α = 0, demand dt
◮ u = 0.
t
How do we choose Z ?
2.852 Manufacturing Systems Analysis 101/128 c
Copyright �2010 Stanley B. Gershwin.
Continuous random variables
Example
Mathematical model
Definitions:
f (x, α, t) is a probability density function.
prob (Z , α, t) = prob (x = Z
and the machine state is α at time t).
Mathematical model
State Space:
dx
=µ −d
dt
α=1
x
α=0
dx
= −d
dt
x=Z
Mathematical model
Transitions to α = 1, [x, x + δx]; x <Z :
x
α=1
no failure
repair
α=0
δx
x=Z
Mathematical model
Transitions to α = 0, [x, x + δx]; x <Z :
x
α=1
failure
no repair
α=0
δx
x=Z
α=1
x(t) = x + dδ t
α=0
f (x, 1, t + δt)δx =
+o(δt)o(δx)
Mathematical model
Or,
o(δt)o(δx)
f (x, 1, t + δt) =
δx
Mathematical model
Expand in Taylor series:
f (x, 1) =
� �
df (x, 0)
f (x, 0) + dδt r δt
dx
� �
df (x, 1)
+ f (x, 1) − (µ − d)δt (1 − pδt)
dx
o(δt)o(δx)
+
δx
Mathematical model
Multiply out:
df (x, 0)
f (x, 1) = f (x, 0)r δt + (d)(r )δt 2
dx
df (x, 1)
+f (x, 1) − (µ − d)δt
dx
df (x, 1)
−f (x, 1)pδt − (µ − d)pδt 2
dx
o(δt)o(δx)
+
δx
Mathematical model
Subtract f (x, 1) from both sides and move one of the terms:
df (x, 1) o(δt)o(δx)
(µ − d)δt =
dx δx
df (x, 0)
+f (x, 0)r δt + (d)(r )δt 2
dx
df (x, 1)
−f (x, 1)pδt − (µ − d)pδt 2
dx
Mathematical model
Divide through by δt:
df (x, 1) o(δt)o(δx)
(µ − d) =
dx δtδx
df (x, 0)
+f (x, 0)r + (d)(r )δt
dx
df (x, 1)
−f (x, 1)p − (µ − d)pδt
dx
Mathematical model
Take the limit as δt −→ 0:
df (x, 1)
(µ − d) = f (x, 0)r − f (x, 1)p
dx
f (x, 0, t + δt)δx =
+o(δt)o(δx)
Mathematical model
By following essentially the same steps as for the transitions to
α = 1, [x, x + δx]; x < Z , we have
df (x, 0)
d = f (x, 0)r − f (x, 1)p
dx
Note:
df (x, 1) df (x, 0)
(µ − d) = d
dx dx
Why?
Mathematical model
Transitions to α = 1, x = Z :
Z − ( µ− d) δ t
x=Z
no failure
1−p δt
α=1
no failure 1−p δt
α=0
Mathematical model
Or,
Mathematical model
Or,
P(Z , 1)pδt = f (Z , 1)(µ − d)δt + o(δt)
or,
Mathematical model
dx
=µ −d
dt
α=1
x
α=0
dx
= −d
dt
x=Z
P(Z , 0) = 0. Why?
Mathematical model
Transitions to α = 0, Z − (µ − d)δt < x < Z :
x=Z
Z − d δt
α=1
p δt 1−r δt no repair
failure
α=0
Mathematical model
Or,
Mathematical model
df
(x, 0)d = f (x, 0)r − f (x, 1)p
dx
df (x, 1)
(µ − d) = f (x, 0)r − f (x, 1)p
dx
f (Z , 1)(µ − d) = f (Z , 0)d
0 = −pP(Z , 1) + f (Z , 1)(µ − d)
�Z
1 = P(Z , 1) + −∞ [f (x, 0) + f (x, 1)] dx
Solution
Solution of equations:
f (x, 0) = Ae bx
d
f (x, 1) = A µ−d e bx
P(Z , 1) = A dp e bZ
P(Z , 0) = 0
where
r p
b= −
d µ−d
and A is chosen so that normalization is satisfied.
Solution
Density Function -- Controlled Machine
0.03
0.02
1E-2
0
-40 -20 0 20
x
Observations
1. Meanings of b:
Mathematical:
In order for the solution on the previous slide to make sense, b > 0.
Otherwise, the normalization integral cannot be evaluated.
Observations
Intuitive:
◮ The average duration of an up period is 1/p. The rate that x increases
(while x < Z ) while the machine is up is µ − d. Therefore, the average
increase of x during an up period while x < Z is (µ − d)/p.
◮ The average duration of a down period is 1/r . The rate that x decreases
while the machine is down is d. Therefore, the average decrease of x during
an down period is d/r .
◮ In order to guarantee that x does not move toward −∞, we must have
(µ − d)/p > d/r .
Observations
If (µ − d)/p > d/r ,
p r
then <
µ−d d
r p
or b= − > 0.
d µ−d
That is, we must have b > 0 so that there is enough capacity for x to increase on
the average when x < Z .
Observations
r p
Also, note that b > 0 =⇒ > =⇒
d µ−d
r (µ − d) > pd =⇒
r µ − rd > pd =⇒
r µ > rd + pd =⇒
r
µ >d
r +p
which we assumed.
Observations
2. Let C = Ae bZ . Then
d
f (x, 1) = C µ−d e −b(Z −x)
P(Z , 1) = C dp
P(Z , 0) = 0
For information about citing these materials or our Terms of Use,visit: http://ocw.mit.edu/terms.
MIT 2.852
Lectures 22–?
Quality and Quantity
Stanley B. Gershwin
Spring, 2007
Quantity
Quantity
Quantity
Quality and
Quantity
Quality and
Quantity
control in order to
Out of control
maintain the machine.
Quality and
Quantity
Example:
Issues
• Failure dynamics
• Inspection
⋆ Binary (good/bad) vs. measurement
⋆ Accuracy (false positives and negatives)
⋆ Spatial and temporal frequency
• Actions on parts and machines
• Topology of system
• Performance measures
Taxonomy of
Issues
GGGGGBGGGBGGGGGGGBGGBGGGGBBGGGGGGGG.....
GGGGBGGGGGBGGGGGGGGBBBBBBGBBBBGBBGB.....
Taxonomy of
Issues
Taxonomy of
Issues
G B
Note: The operator does not know when the machine is in the
bad state until it has been detected.
Taxonomy of Issues: Simplest model

[Figure: two-state quality model with states UP/Good and UP/Bad.]

Versions of the model differ in the yield assumed for the bad state; the simplest version assumes 0% yield. The machine returns to the good state upon repair, after detection of the quality failure.

• One extension adds operational failure states:

[Figure: states UP/Good, UP/Bad, DOWN G, and DOWN B, with a Repair transition.]

• Another extension is gradual deterioration of quality:

[Figure: a chain of up states Gk, Gk−1, ..., G2, G1, B with a Repair transition, and corresponding down states Dk, Dk−1, ..., D2, D1, D. The distribution of a measured parameter shifts as the machine moves along the chain.]
Taxonomy of Issues

[Figure: inspection of a 16-machine line, with machines grouped as 1–5, 6–10, 11–15, and 16; parts that pass inspection are accepted.]
Two-Machine Systems

Note: each machine now has three states: 1 (up and in control), −1 (up and out of control), and 0 (down), with quality-failure rate g and detection rate f.

In steady state, transitions between the in-control and out-of-control states balance:

$$f\,P(-1) = g\,P(1)$$

and

$$P(1) = \frac{1}{1 + (p+g)/r + g/f}, \qquad P(0) = \frac{(p+g)/r}{1 + (p+g)/r + g/f}, \qquad P(-1) = \frac{g/f}{1 + (p+g)/r + g/f}$$
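A small sketch (Python; the parameter values are illustrative assumptions) that evaluates these steady-state probabilities, confirms the balance relation f P(−1) = g P(1), and shows the long-run good-part fraction f/(f + g):

```python
# Steady-state probabilities of the three-state machine (1 = up and in
# control, -1 = up and out of control, 0 = down). Values are assumptions.
p, r = 0.01, 0.10      # operational failure and repair rates
g, f = 0.005, 0.05     # quality failure and detection rates

denom = 1 + (p + g) / r + g / f
P1  = 1 / denom                 # P(1):  up, in control
P0  = ((p + g) / r) / denom     # P(0):  down
Pm1 = (g / f) / denom           # P(-1): up, out of control

print(abs(f * Pm1 - g * P1) < 1e-15)   # balance: f P(-1) = g P(1)
# With 0% yield out of control, the long-run good-part fraction is
print(P1 / (P1 + Pm1), f / (f + g))    # the two expressions agree
```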
Two-Machine Systems

With infinite buffers, the total production rate is set by the slower machine in isolation, and the effective (good-part) production rate is discounted by each machine's long-run good fraction fi/(fi + gi):

$$P_T^\infty = \min\big[\,\mu_1\,(P_1(1) + P_1(-1)),\;\; \mu_2\,(P_2(1) + P_2(-1))\,\big]$$

$$P_E^\infty = P_T^\infty\;\frac{f_1 f_2}{(f_1 + g_1)(f_2 + g_2)}$$

With zero buffers (the superscript b denotes the modified parameters used in the zero-buffer analysis):

$$P_T^0 = \frac{\min[\mu_1, \mu_2]}{1 + \dfrac{f_1^b(p_1^b + g_1^b)}{r_1(f_1^b + g_1^b)} + \dfrac{f_2^b(p_2^b + g_2^b)}{r_2(f_2^b + g_2^b)}}$$

$$P_E^0 = P_T^0\;\frac{f_1^b f_2^b}{(f_1^b + g_1^b)(f_2^b + g_2^b)}$$
Procedure:
• Guess x̄.
• Calculate h12.
• Solve the two-machine line. Recalculate x̄ and iterate.
Two-Machine Systems

• M1 makes only good parts in the G state and only bad parts in the B state.

[Figure: effective production rate versus buffer size (0 to 50), rising from about 0.69 to about 0.72. Effective production rate = production rate of good parts.]
• When the inspection detects the first bad part after a good part, the buffer contains only bad parts.

[Figure: effective production rate versus buffer size (0 to 50) for this case, decreasing from about 0.40 to about 0.37: here a larger buffer hurts the good-part rate.]
[Figure: good production rate versus number of inspection stations (0 to 15), comparing the best inspection locations (roughly 0.355 to 0.375) with the worst (roughly 0.295 to 0.315).]
Details are in

Jongyoon Kim, "Integrated Quality and Quantity Modeling of a Production Line," Ph.D. thesis, MIT Mechanical Engineering, November 2004.

and

Kim and Gershwin, "Modeling and analysis of long flow lines with quality and operational failures," IIE Transactions, to appear.
Ubiquitous inspection:

[Figure: a line with an inspection station I after every machine.]

• Procedure:
⋆ Guess x̄i.
⋆ Calculate the required hij parameters.
⋆ Transform the 3-state machines into approximate 2-state machines.
⋆ Then evaluate the line with the usual decomposition technique.
Efficient Buffer Design Algorithms for Production Line Profit Maximization

Chuan Shi

© 2010 Chuan Shi
Contents

Problem
Research Topics
Five cases
Algorithm derivation
Numerical results
Research extensions
Problem
Manufacturing Systems
A manufacturing system is a set of machines, transportation elements,
computers, storage buffers, and other items that are used together for
manufacturing.
Problem

Production Lines

A production line is organized with machines or groups of machines (M1, · · · , Mk) connected in series and separated by buffers (B1, · · · , Bk−1).

[Figure: M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6, with inventory space N1, ..., N5 and average inventory n1, ..., n5.]
Why study production lines?

Economic importance

Production lines are used in high volume manufacturing, particularly automobile production, in which they make engine blocks, cylinders, connecting rods, etc. Their capital costs range from hundreds of thousands of dollars to tens of millions of dollars.
Research goals
Production line design

Our focus: choosing buffers
Problem description and Assumptions

The deterministic processing time model of Gershwin (1987).
  Time required to process a part is the same for all machines and is taken as the time unit.
  Machine Mi is parameterized by the probability of failure, pi = 1/MTTFi, and the probability of repair, ri = 1/MTTRi, in each time unit.

The deterministic processing time model of Tolio, Matta, and Gershwin (2002).
  Processing times of all machines are equal, deterministic, and constant.
  It allows each machine to have multiple failure modes. Each failure mode is characterized by a geometric distribution.

The continuous production line model of Levantesi, Matta, and Tolio (2003).
  Machines can have deterministic, yet different, processing speeds.
  It allows each machine to have multiple failure modes. Each failure mode is characterized by an exponential distribution.
Benefits and costs of buffers

Necessity
  Machines are not perfectly reliable and predictable.
  The unreliability has the potential for disrupting the operations of adjacent machines or even machines further away.
  Buffers decouple machines, and mitigate the effect of a failure of one of the machines on the operation of others.

Costs
  Buffers take floor space and hold in-process inventory.
Difficulties

Evaluation
  Calculate production rate and average inventory as a function of buffer sizes (and machine reliability).

[Figure: production line diagram.]
Difficulties

Decomposition
  The long line is approximated by two-machine lines L(i), each with pseudo-machines Mu(i) and Md(i).
Difficulties

Optimization (this part is my contribution).
Prior work review

There are many studies focusing on maximizing the production rate, but few studies concentrating on maximizing the profit.

Substantial research has been conducted on production line evaluation and optimization (Dallery and Gershwin 1992).

Buzacott derived the analytic formula for the production rate of two-machine, one-buffer lines in a deterministic processing time model (Buzacott 1967).

The invention of decomposition methods for lines with unreliable machines and finite buffers (Gershwin 1987) enabled the numerical evaluation of the production rate of lines having more than two machines.

Diamantidis and Papadopoulos (2004) also presented a dynamic programming algorithm for optimizing buffer allocation based on the aggregation method given by Lim, Meerkov, and Top (1990). But they did not attempt to maximize the profits of lines.

For other line optimization work, see Chan and Ng (2002), Smith and Cruz (2005), Bautista and Pereira (2007), Jin et al. (2006), and Rajaram and Tian (2009).
Schor's problem

s.t. Ni ≥ Nmin, ∀i = 1, · · · , k − 1.
Assumptions

P(N) is monotonic and concave.

[Figure: the three-machine, two-buffer line M1 B1 M2 B2 M3 and its P(N1, N2) surface, rising from about 0.79 to 0.90 as N1 and N2 grow from 0 to 100.]
The Gradient Method

[Figure: J(N1, N2) surface over the same buffer-size grid, with values from about 1040 to 1220.]
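A sketch of the gradient idea (Python). The P(N) below is a toy monotone-concave surrogate of my own, standing in for a real evaluator such as a decomposition, and the profit coefficients are invented; only the projected finite-difference ascent itself is the point:

```python
import numpy as np

# Projected gradient ascent for buffer allocation (toy surrogate P(N)).
A, N_MIN = 2000.0, 2.0

def P(N):  # assumed monotone, concave stand-in for a real evaluator
    return 0.95 - 0.30 * np.exp(-N[0] / 20.0) - 0.25 * np.exp(-N[1] / 25.0)

def J(N):  # profit = revenue minus buffer-space cost
    return A * P(N) - N.sum()

def grad(fun, N, h=1e-5):  # central finite-difference gradient
    g = np.zeros_like(N)
    for i in range(len(N)):
        e = np.zeros_like(N); e[i] = h
        g[i] = (fun(N + e) - fun(N - e)) / (2 * h)
    return g

N = np.array([10.0, 10.0])
for _ in range(3000):  # ascend J, projecting onto N >= N_MIN
    N = np.maximum(N + 0.5 * grad(J, N), N_MIN)
print(f"N* ~ {N.round(2)}, P(N*) = {P(N):.4f}, J(N*) = {J(N):.1f}")
```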
Research topics

Profit maximization for production lines with a production rate constraint, with both time window and production rate constraints, and for lines with a loop structure.
Topic one: Production line profit maximization subject to a production rate constraint
Production line profit maximization

The profit maximization problem:

$$\max_{\mathbf N}\; J(\mathbf N) = A\,P(\mathbf N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i\,\bar n_i(\mathbf N)$$

$$\text{s.t.}\quad P(\mathbf N) \ge \hat P, \qquad N_i \ge N_{\min},\ \forall i = 1, \cdots, k-1.$$
An example about the research goal

[Figure: J(N1, N2) surface with the feasible region (P(N) ≥ P̂) and the optimal solution on its boundary.]
Two problems

The constrained problem:

$$\max_{\mathbf N}\; J(\mathbf N) \quad \text{s.t.}\quad P(\mathbf N) \ge \hat P, \qquad N_i \ge N_{\min},\ \forall i = 1, \cdots, k-1.$$

The unconstrained problem (the production rate constraint is dropped):

$$\max_{\mathbf N}\; J(\mathbf N) \quad \text{s.t.}\quad N_i \ge N_{\min},\ \forall i = 1, \cdots, k-1.$$
An example for algorithm derivation

Data: r1 = .1, p1 = .01, r2 = .11, p2 = .01, r3 = .1, p3 = .009, P̂ = .88

Cost function:

$$J(\mathbf N) = 2000\,P(\mathbf N) - N_1 - N_2 - \bar n_1(\mathbf N) - \bar n_2(\mathbf N)$$

[Figure: the J(N1, N2) surface and its contour view, showing the regions P(N1, N2) > P̂ and P(N1, N2) < P̂ and the boundary P(N1, N2) = P̂.]
Algorithm derivation

Two cases

Case 1: The solution of the unconstrained problem is N^u with P(N^u) ≥ P̂. In this case, the solution of the constrained problem is the same as the solution of the unconstrained problem. We are done.

Unconstrained problem:

$$\max_{\mathbf N}\; J(\mathbf N) = A\,P(\mathbf N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i\,\bar n_i(\mathbf N) \quad \text{s.t.}\quad N_i \ge N_{\min},\ \forall i.$$

[Figure: contour view; the unconstrained optimum (N1^u, N2^u) lies inside the region P(N1, N2) > P̂.]
Algorithm derivation

Two cases (continued)

Case 2: N^u satisfies P(N^u) < P̂. This is not the solution of the constrained problem.

[Figure: contour view; the unconstrained optimum (N1^u, N2^u) lies outside the region P(N1, N2) > P̂.]
Algorithm derivation

Case 2 (continued): replace the coefficient A by a larger A′ and solve

$$\max_{\mathbf N}\; J(\mathbf N) = A'\,P(\mathbf N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i\,\bar n_i(\mathbf N) \quad \text{s.t.}\quad N_i \ge N_{\min},\ \forall i.$$
Assertion

The constrained problem

$$\max_{\mathbf N}\; J(\mathbf N) = A'\,P(\mathbf N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i\,\bar n_i(\mathbf N)$$
$$\text{s.t.}\quad P(\mathbf N) \ge \hat P, \qquad N_i \ge N_{\min},\ \forall i = 1, \cdots, k-1$$

has the same solution for all A′ for which the solution of the corresponding unconstrained problem

$$\max_{\mathbf N}\; J(\mathbf N) = A'\,P(\mathbf N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i\,\bar n_i(\mathbf N) \quad \text{s.t.}\quad N_i \ge N_{\min},\ \forall i$$

has P★(A′) ≤ P̂.
Interpretation of the assertion

We claim: if the optimal solution of the unconstrained problem is not that of the constrained problem, then the solution of the constrained problem, (N1★, · · · , Nk−1★), satisfies P(N1★, · · · , Nk−1★) = P̂.

[Figure: a one-buffer illustration plotting cost and A′P(N) against N. The unconstrained optimum N★(A′) (s.t. N ≥ Nmin) has P(N★(A′)) < P̂, while under the constraint P(N) ≥ P̂ the optimum satisfies P(N) = P̂.]
"infeasible.txt"
A’ = 2500 A’ = 4535.82 (Final A’) "feasible.txt"
3819
"infeasible.txt" 3816
2080 "feasible.txt" 3850 3813
2060 2060 3818
2055 3800 3800
2040
2020 2049.8 3750 3770
J(N) 2000 2040 J(N) 3700 3730
1980 2020 3700
2000 3650 3660
1960 3600
1940 1980 3620
1950 3550 3580
1920 1930
3500 3540
1900 1900 3500
1880 Optimal 3450 Optimal
100 boundary 100 boundary
90 90
80 80
70 70
0 60 0 60
10 50 10 50
20 30 40 N2 20 30 40 N2
40 50 30 40 50 30
60 20 60 70 20
70
N1 80 90
10 N1 80 90
10
1000 1000
⃝2010
c Chuan Shi — Topic one: Line optimization : Algorithm derivation 33/79
Karush-Kuhn-Tucker (KKT) conditions

Let x★ be a local minimum of the problem

$$\min f(x)$$
$$\text{s.t.}\quad h_1(x) = 0, \cdots, h_m(x) = 0,$$
$$g_1(x) \le 0, \cdots, g_r(x) \le 0,$$

and assume an appropriate constraint qualification holds. Then there exist Lagrange multipliers λ★ and µ★ such that

$$\nabla_x L(x^\star, \lambda^\star, \mu^\star) = 0,$$
$$\mu_j^\star \ge 0,\quad j = 1, \cdots, r,$$
$$\mu_j^\star\,g_j(x^\star) = 0,\quad j = 1, \cdots, r,$$

where $L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{m} \lambda_i h_i(x) + \sum_{j=1}^{r} \mu_j g_j(x)$ is called the Lagrangian function.
Convert the constrained problem to minimization form

Minimization form

The constrained problem:

$$\min_{\mathbf N}\; -J(\mathbf N) = -A\,P(\mathbf N) + \sum_{i=1}^{k-1} b_i N_i + \sum_{i=1}^{k-1} c_i\,\bar n_i(\mathbf N)$$
$$\text{s.t.}\quad \hat P - P(\mathbf N) \le 0,$$
$$N_{\min} - N_i \le 0,\ \forall i = 1, \cdots, k-1.$$
Applying KKT conditions

The Slater constraint qualification for convex inequalities guarantees the existence of Lagrange multipliers for our problem. So, there exist unique Lagrange multipliers µj★, j = 0, · · · , k − 1 for the constrained problem to satisfy the KKT conditions:

$$-\nabla J(\mathbf N^\star) + \mu_0^\star\,\nabla(\hat P - P(\mathbf N^\star)) + \sum_{j=1}^{k-1} \mu_j^\star\,\nabla(N_{\min} - N_j) = 0 \qquad (1)$$

or, in vector form,

$$-\nabla J(\mathbf N^\star) - \mu_0^\star\,\nabla P(\mathbf N^\star) - \sum_{j=1}^{k-1} \mu_j^\star\,\mathbf e_j = \mathbf 0, \qquad (2)$$

where ej is the j-th unit vector.
Applying KKT conditions

and

$$\mu_j^\star \ge 0,\quad \forall j = 0, \cdots, k-1, \qquad (3)$$
$$\mu_0^\star\,(\hat P - P(\mathbf N^\star)) = 0, \qquad (4)$$
$$\mu_j^\star\,(N_{\min} - N_j^\star) = 0,\quad \forall j = 1, \cdots, k-1. \qquad (5)$$
Applying KKT conditions

If the buffer-size lower bounds are not active at N★ (that is, Nj★ > Nmin for all j), then µj★ = 0 for j = 1, · · · , k − 1 by (5), and the conditions reduce to

$$-\frac{\partial J(\mathbf N^\star)}{\partial N_j} - \mu_0^\star\,\frac{\partial P(\mathbf N^\star)}{\partial N_j} = 0,\quad j = 1, \cdots, k-1. \qquad (6)$$
Applying KKT conditions

The KKT conditions are simplified to (6). For a fixed multiplier µ0 > 0, consider

$$-\frac{\partial J(\mathbf N^\mu)}{\partial N_j} - \mu_0\,\frac{\partial P(\mathbf N^\mu)}{\partial N_j} = 0,\quad j = 1, \cdots, k-1, \qquad (8)$$

where N^µ is the unique solution of (8). Note that N^µ is the solution of the following optimization problem:

$$\min_{\mathbf N}\; -\bar J(\mathbf N) = -J(\mathbf N) + \mu_0\,(\hat P - P(\mathbf N)) \qquad (9)$$
$$\text{s.t.}\quad N_{\min} - N_i \le 0,\ \forall i = 1, \cdots, k-1.$$
Applying KKT conditions

The problem above is equivalent to

$$\max_{\mathbf N}\; \bar J(\mathbf N) = J(\mathbf N) + \mu_0\,P(\mathbf N) - \mu_0 \hat P \quad \text{s.t.}\quad N_{\min} - N_i \le 0,\ \forall i,$$

or, dropping the constant µ0 P̂ (which does not change the maximizer),

$$\max_{\mathbf N}\; \bar J(\mathbf N) = (A + \mu_0)\,P(\mathbf N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i\,\bar n_i \qquad (12)$$
$$\text{s.t.}\quad N_i \ge N_{\min},\ \forall i = 1, \cdots, k-1.$$
Applying KKT conditions

or, finally, with A′ = A + µ0,

$$\max_{\mathbf N}\; \bar J(\mathbf N) = A'\,P(\mathbf N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i\,\bar n_i \qquad (13)$$
$$\text{s.t.}\quad N_i \ge N_{\min},\ \forall i = 1, \cdots, k-1.$$

In addition, the KKT conditions indicate that the optimal solution of the constrained problem N★ satisfies P(N★) = P̂. This means that, for every A′ > A (or µ0 > 0), we can find the corresponding optimal solution N^µ satisfying condition (8) by solving problem (13). We need to find the A′ such that the solution to problem (13), denoted as N★(A′), satisfies P(N★(A′)) = P̂.
Algorithm summary for case 2

Solve the unconstrained problem

$$\max_{\mathbf N}\; J(\mathbf N) = A'\,P(\mathbf N) - \sum_{i=1}^{k-1} b_i N_i - \sum_{i=1}^{k-1} c_i\,\bar n_i(\mathbf N) \quad \text{s.t.}\quad N_i \ge N_{\min},\ \forall i,$$

and do a one-dimensional search on A′ > A to find the A′ such that the solution of the unconstrained problem, N★(A′), satisfies

$$P(\mathbf N^\star(A')) = \hat P.$$

[Flowchart: solve the unconstrained problem; if P(N★(A′)) = P̂, quit; otherwise adjust A′ and repeat the search.]
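A sketch of this search (Python). The P(N) and n̄(N) surrogates are toy stand-ins of my own for the decomposition evaluator, and scipy's generic bounded optimizer replaces the specialized unconstrained solver; the case split and the one-dimensional search on A′ are the point:

```python
import numpy as np
from scipy.optimize import minimize, bisect

A, P_HAT, N_MIN = 2000.0, 0.93, 2.0

P = lambda N: 0.95 - 0.30 * np.exp(-N[0] / 20.0) - 0.25 * np.exp(-N[1] / 25.0)
nbar = lambda N: 0.5 * N            # toy average-inventory model (assumed)

def N_star(A_prime):
    """Solve the unconstrained problem max A' P(N) - sum Ni - sum nbar_i."""
    obj = lambda N: -(A_prime * P(N) - N.sum() - nbar(N).sum())
    return minimize(obj, x0=np.array([10.0, 10.0]),
                    bounds=[(N_MIN, None)] * 2).x

gap = lambda A_prime: P(N_star(A_prime)) - P_HAT   # increases with A'

if gap(A) >= 0:        # Case 1: the unconstrained solution is already feasible
    N_opt = N_star(A)
else:                  # Case 2: search for A' > A with P(N*(A')) = P_hat
    A_prime = bisect(gap, A, 100 * A, xtol=1e-2)
    N_opt = N_star(A_prime)
print(f"N* = {N_opt.round(2)}, P(N*) = {P(N_opt):.4f}")
```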
Numerical results
Computation speed.
P̂ surface search

[Figure: the surface P(N) = P̂ in (N1, N2, N3) space, and the optimal solution found on it.]
Experiment on short lines (4-buffer line)

machine   M1     M2     M3     M4     M5
r         .11    .12    .10    .09    .10
p         .008   .01    .01    .01    .01

Cost function:

$$J(\mathbf N) = 2500\,P(\mathbf N) - \sum_{i=1}^{4} N_i - \sum_{i=1}^{4} \bar n_i(\mathbf N)$$
Experiment on short lines (4-buffer line)

Results: optimal solutions

              Surface search   The algorithm   Error     Rounded N★
Prod. rate    .8800            .8800                     .8800
N1★           28.85            28.8570         0.02%     29
N2★           58.46            58.5694         0.19%     59
N3★           92.98            92.9068         0.08%     93
N4★           87.39            87.4415         0.06%     87
n̄1            19.0682          19.0726         0.02%     19.1791
n̄2            34.3084          34.3835         0.23%     34.7289
n̄3            48.7200          48.6981         0.04%     48.9123
n̄4            31.9894          32.0063         0.05%     31.9485
Profit ($)    1798.2           1798.1          0.006%    1797.4
Experiment on long lines (11-buffer line)

machine   M1     M2     M3     M4     M5     M6
r         .11    .12    .10    .09    .10    .11
p         .008   .01    .01    .01    .01    .01

Cost function:

$$J(\mathbf N) = 6000\,P(\mathbf N) - \sum_{i=1}^{11} N_i - \sum_{i=1}^{11} \bar n_i(\mathbf N)$$
Experiment on long lines (11-buffer line)

Results: optimal buffer sizes

              Surface search   The algorithm   Error     Rounded N★
Prod. rate    .8800            .8800                     .8799
N1★           29.10            29.1769         0.26%     29
N2★           59.20            59.2830         0.14%     59
N3★           97.80            97.7980         0.002%    98
N4★           107.50           107.4176        0.08%     107
N5★           84.50            84.4804         0.02%     84
N6★           70.80            70.6892         0.17%     71
N7★           63.10            63.1893         0.14%     63
N8★           53.10            52.9274         0.33%     53
N9★           47.20            47.2232         0.05%     47
N10★          47.90            47.7967         0.22%     48
N11★          48.80            48.7716         0.06%     49
Experiment on long lines (11-buffer line)

Results (continued): optimal average inventories

              Surface search   The algorithm   Error     Rounded N★
n̄1            19.2388          19.2986         0.31%     19.1979
n̄2            34.9561          35.0423         0.25%     34.8194
n̄3            52.5423          52.6032         0.12%     52.6833
n̄4            45.1528          45.1840         0.07%     45.0835
n̄5            34.4289          34.4770         0.14%     34.2790
n̄6            30.7073          30.7048         0.01%     30.8229
n̄7            28.0446          28.1299         0.30%     28.0902
n̄8            21.5666          21.5438         0.11%     21.5932
n̄9            21.5059          21.5442         0.18%     21.4299
n̄10           22.6756          22.6496         0.11%     22.7303
n̄11           20.8692          20.8615         0.04%     20.9613
Profit ($)    4239.3           4239.2          0.002%    4239.5
Experiments for the Tolio, Matta, and Gershwin (2002) model

Consider a 4-machine 3-buffer line with constraint P̂ = .87. In addition, A = 2000 and all bi and ci are 1.

machine   M1    M2     M3    M4
r_i1      .10   .12    .10   .20
p_i1      .01   .008   .01   .007
r_i2      –     .20    –     .16
p_i2      –     .005   –     .004
Experiments for the Levantesi, Matta, and Tolio (2003) model

Consider a 4-machine 3-buffer line with constraint P̂ = .87. In addition, A = 2000 and all bi and ci are 1.

machine   M1    M2     M3    M4
µ_i       1.0   1.02   1.0   1.0
r_i1      .10   .12    .10   .20
p_i1      .01   .008   .01   .012
r_i2      –     .20    –     .16
p_i2      –     .005   –     .006
Computation speed

Experiment
Run the algorithm for a series of experiments for lines having identical machines to see how fast the algorithm could optimize longer lines.

$$J(\mathbf N) = A\,P(\mathbf N) - \sum_{i=1}^{k-1} N_i - \sum_{i=1}^{k-1} \bar n_i(\mathbf N).$$

[Figure: computer time (seconds) versus length of the production line (5 to 30 machines), with the 1-minute, 3-minute, and 10-minute levels marked.]
Algorithm reliability

We run the algorithm on 739 randomly generated 4-machine 3-buffer lines. 98.92% of these experiments have a maximal error less than 6%.

[Figure: histogram of the maximal error over the 739 experiments.]
Algorithm reliability

Taking a closer look at those 98.92% of experiments, we find a more accurate distribution of the maximal error. We find that, out of the total 739 experiments, 83.90% have a maximal error less than 2%.

[Figure: histogram of the maximal error, zoomed to the 0 to 10% range.]
Topic two: Production line profit maximization subject to both production rate and time window constraints
Line optimization with time window constraint

Motivation

A time-window constraint between operations means that the time a part waits for the next operation after the previous operation should be kept less than a fixed value, to guarantee the quality of the part. Such constraints are common in the semiconductor industry (Kitamura, Mori, and Ono 2006). As examples,

Robinson and Giglio (1999) mentioned that a baking operation must be started within two hours of a prior clean operation. If more than two hours elapse, the lot must be sent back to be cleaned again.

Lu, Ramaswamy, and Kumar (1994) studied efficient scheduling policies to reduce the mean and variance of cycle time, and pointed out that the shorter the period that wafers are exposed to aerial contaminants while waiting for processing, the smaller the yield loss.

Yang and Chern (1995) noted such time-window constraints in food production, chemical production, and steel production.

For surveys, see Neacy, Brown, and McKiddie (1994) and Uzsoy, Lee, and Martin-Vega (1992).
Mathematical model

By Little's law, the average waiting time of a part in buffer i is n̄i/P, so the time-window requirement is written as

$$\bar n_i \le \hat w_i\,P(N_1, \cdots, N_{k-1}).$$

Note that the constraint above guarantees the average part waiting time, NOT the maximal part waiting time, to be upper bounded by ŵi. To resolve this concern, we may consider reducing ŵi, by a certain multiplier 0 < β < 1, to βŵi in the constraint, such that the probability that the waiting time for a part is less than or equal to ŵi will satisfy a specific confidence level.
Mathematical model

$$\max_{\mathbf N}\; J(\mathbf N)$$
$$\text{s.t.}\quad P(\mathbf N) \ge \hat P,$$
$$\bar n_i \le \hat w_i\,P(\mathbf N),$$
$$N_i \ge N_{\min},\ \forall i = 1, \cdots, k-1.$$
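Since W̄i = n̄i/P by Little's law, checking the time-window constraint for a candidate design is immediate once P(N) and n̄(N) have been evaluated. A sketch (all numbers below are illustrative assumptions, not from the slides):

```python
# Little's-law feasibility check for the time-window constraint, assuming
# the evaluation step already produced P(N) and nbar(N) for a design.
P = 0.88                        # production rate of the candidate design
nbar  = [19.1, 34.3, 48.7]      # average inventory in each buffer
w_hat = [30.0, 40.0, 50.0]      # time-window limits on average waiting time

for i, (n, w) in enumerate(zip(nbar, w_hat), start=1):
    W = n / P                   # average waiting time in buffer i
    ok = n <= w * P             # the constraint nbar_i <= w_hat_i * P(N)
    print(f"buffer {i}: Wbar = {W:6.2f}, limit = {w:5.1f}, feasible: {ok}")
```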
Five cases

Depending on the data, the problem falls into one of five cases: the two constraints may conflict (so there is no feasible solution), or the optimal solution exists with the production rate and time window constraints active or inactive in each combination. The cases are illustrated below.
Case example: Case 1, P̂ = .89 and ŵ1 = 2

The production rate constraint conflicts with the time-window constraint. Therefore, there is no feasible solution to the problem.

[Figure: J(N) surface with the P boundary and the W boundary; the regions they define do not intersect.]
Case example: Case 2, P̂ = .88 and ŵ1 = 7

The optimal solution exists. Both the production rate constraint and the time window constraint are active.

[Figure: J(N) surface with the P boundary and the W boundary; the optimum lies at their intersection.]
Case example: Case 3, P̂ = .88 and ŵ1 = 15

The optimal solution exists. The production rate constraint is active, while the time window constraint is inactive.

[Figure: J(N) surface; the optimum lies on the P boundary.]
Case example: Case 4, P̂ = .86 and ŵ1 = 6.5

The optimal solution exists. The production rate constraint is inactive, while the time window constraint is active.

[Figure: J(N) surface; the optimum lies on the W boundary.]
Case example: Case 5, P̂ = .86 and ŵ1 = 15

The optimal solution exists. Both the production rate constraint and the time window constraint are inactive.

[Figure: J(N) surface; the optimum lies in the interior of the feasible region.]
Algorithm derivation

This is equivalent to requiring that ∇W̄i(N★) and ∇P(N★) be linearly independent. Since all components of ∇P(N★) are positive due to the monotonicity of P(N), but ∇W̄i(N★) has both positive and negative components, they are linearly independent. (A formal proof is left for future work.)
Algorithm derivation

$$\mu_0^\star\,\big(\bar n_i(\mathbf N^\star) - \hat w_i\,P(\mathbf N^\star)\big) = 0 \qquad (15)$$
Algorithm derivation

Iterate on the multipliers µ0 and µ1, solving

$$\max_{\mathbf N}\; \bar J(\mathbf N) = (A + \mu_0 \hat w_i + \mu_1)\,P(\mathbf N) - \sum_{j=1}^{k-1} b_j N_j - \sum_{j=1}^{k-1} c_j\,\bar n_j(\mathbf N) - \mu_0\,\bar n_i(\mathbf N) \qquad (17)$$
$$\text{s.t.}\quad N_{\min} - N_j \le 0,\ \forall j = 1, \cdots, k-1,$$

until its solution N^µ satisfies P(N^µ) = P̂ and n̄i(N^µ) = ŵi P(N^µ). Then N★ = N^µ.
Algorithm summary

2. Solve the problem with the production rate constraint only, and let N^P denote the solution. Check whether n̄i(N^P) ≤ ŵi P(N^P). If yes, then we are done and N★ = N^P. If not, go to step 3.
Numerical results

Consider a 6-machine 5-buffer line with constraints P̂ = .83 and ŵ3 = 7. In addition, A = 3000 and all bi and ci are 1.

machine   M1    M2    M3    M4    M5    M6
r         .15   .15   .09   .10   .11   .10
p         .01   .01   .01   .01   .01   .01
Topic three: loop optimization

[Figure: a 7-machine line M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 with a loop buffer B7 connecting the loop-end machine M7 back to the loop-start machine M1.]
Topic three: loop optimization
Work
Develop analytical solutions for two-machine-line evaluation with no
loops.
Possible extension 1: quality control
Kim and Gershwin (2005) pointed out that in the case of our example
above, an increase of buffer size could either increase or decrease the
production rate for different lines.
Possible extension 2: Set-up cost for buffers
Question session
Thank you!
MIT 2.852
Stanley B. Gershwin
http://web.mit.edu/manuf-sys
Spring, 2010
M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6
Machine Buffer
◮ Economic importance.
◮ Relative simplicity for analysis and for intuition.
◮ Complex behavior.
◮ Analytical solution available only for limited systems.
◮ Exact numerical solution feasible only for systems with a small
number of buffers.
◮ Simulation may be too slow for optimization.
[Figure: weekly production output from a simulation of a transfer line, plotted over 100 weeks on a 0 to 4000 parts-per-week scale.]
◮ Operation-Dependent Failures
◮ A machine can only fail while it is working.
◮ IMPORTANT! MTTF must be measured in working time!
◮ This is the usual assumption.
◮ Note: MTBF = MTTF + MTTR
Proof

[Figure: a sample path alternating between Machine UP and Machine DOWN periods.]
M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6
Assumptions:
◮ A machine is not idle if it is not starved.
◮ The first machine is never starved.
M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6
◮ The production rate of the line is the production rate of the slowest
machine in the line — called the bottleneck .
◮ Slowest means least average production rate, where average
production rate is calculated from one of the previous formulas.
M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 B7 M8 B8 M9 B9 M10
[Figure: simulated levels of Buffers 1 through 9 over 10^6 time units for the 10-machine line.]
M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 B7 M8 B8 M9 B9 M10

[Figure: simulated buffer levels over 10^6 time units; a second view shows only Buffer 4 and Buffer 9, the two curves referred to in the question below.]
Question:
◮ What are the slopes (roughly!) of the two indicated graphs?
Questions:
◮ If we want to increase production rate, which machine should we
improve?
◮ What would happen to production rate if we improved any other
machine?
M1 M2 M3 M4 M5 M6 (a line with no buffers)
◮ Same as the earlier formula (page 11, page 14) when k = 1. The isolated production rate of a single machine Mi is

$$\frac{1}{\tau}\left(\frac{1}{1 + p_i/r_i}\right) = \frac{1}{\tau}\,\frac{r_i}{r_i + p_i}.$$

◮ The average repair time of Mi is τ/ri each time it fails, so the total system down time is close to

$$D\tau = \sum_{i=1}^{k} \frac{m_i\,\tau}{r_i}$$

◮ Since the system produces one part per time unit while it is working, it produces U parts during the interval of length Tτ.

◮ Note that, approximately,

$$m_i = p_i\,U$$

because Mi can only fail while it is operational.

◮ Thus,

$$U\tau = T\tau - U\tau\sum_{i=1}^{k}\frac{p_i}{r_i},$$

or,

$$\frac{U}{T} = E_{ODF} = \frac{1}{1 + \sum_{i=1}^{k} p_i/r_i}$$

and

$$P = \frac{1}{\tau}\;\frac{1}{1 + \sum_{i=1}^{k} p_i/r_i}$$
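A direct transcription of this result (Python; the (pi, ri) pairs and τ are illustrative assumptions):

```python
# E_ODF and P for a line with no buffers, deterministic processing time tau.
tau = 1.0
machines = [(0.01, 0.10), (0.012, 0.11), (0.009, 0.10)]   # (p_i, r_i) pairs

E_ODF = 1.0 / (1.0 + sum(p / r for p, r in machines))
P = E_ODF / tau
print(f"E_ODF = {E_ODF:.4f}, P = {P:.4f} parts per time unit")
```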
Questions:
◮ If we want to increase production rate, which machine should we
improve?
◮ What would happen to production rate if we improved any other
machine?
All machines are the same except Mi. As pi increases, the production rate decreases.

[Figure: P versus pi, decreasing from about 0.5 toward 0.05 as pi goes from 0 to 1.]
All machines are the same. As the line gets longer, the production rate decreases.

[Figure: production rate P versus line length (up to 100 machines); P falls below the infinite-buffer level, and the gap is labeled "capacity loss".]
M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6
◮ Difficulty:
◮ No simple formula for calculating production rate or inventory levels.
◮ Solution:
◮ Simulation
◮ Analytical approximation
Motivation:
◮ We can develop intuition from these systems that is useful for
understanding more complex systems.
◮ The state of the two-machine line is

$$s = (n, \alpha_1, \alpha_2)$$

where n = 0, 1, ..., N and αi = 0, 1.
◮ (0,0,0) is transient because it can be reached only from itself or (0,1,0). It can be
reached from itself if neither machine is repaired; it can be reached from (0,1,0) if
the first machine fails while attempting to make a part. It cannot be reached from
(0,0,1) or (0,1,1) since the second machine cannot fail. Otherwise, if
α1 (t + 1) = 0 and α2 (t + 1) = 0, then n(t + 1) = n(t).
◮ Similarly, (N, 0, 0), (N, 0, 1), (N, 1, 1), and (N − 1, 0, 1) are transient.
[Figure: transition diagram near the boundary; the key distinguishes states, transitions, transient states, and transitions out of transient states.]
Internal equations: 2 ≤ n ≤ N − 2

[Figure: the state space (n, α1, α2) for N = 13, n = 0, . . . , N, with rows (α1, α2) = (0,0), (0,1), (1,0), (1,1).]
Repair frequency equals failure frequency: for every repair, there is a failure (in steady state). When the system is in steady state,

$$r_1 D_1 = p_1 E_1.$$

Proof: The left side is the probability that the state leaves the set S0 of states in which M1 is down, since the only way the system can leave S0 is for M1 to get repaired. (M1 is down, so the buffer cannot become full.)

[Figure: the state space with the set S0 outlined.]

The right side is the probability that the state enters S0: in steady state, the only way for the state to enter S0 is for it to be in a state in which M1 is up and operating, and for M1 to fail.
Conservation of Flow: E1 = E2 = E

Proof:

$$E_1 = \sum_{n=0}^{N-1} p(n,1,0) + \sum_{n=0}^{N-1} p(n,1,1), \qquad E_2 = \sum_{n=1}^{N} p(n,0,1) + \sum_{n=1}^{N} p(n,1,1).$$

Then

$$E_1 - E_2 = \sum_{n=0}^{N-1} p(n,1,0) - \sum_{n=1}^{N} p(n,0,1) = \sum_{n=1}^{N-2} p(n+1,1,0) - \sum_{n=1}^{N-2} p(n,0,1),$$

after dropping boundary terms that vanish. Defining δ(n) = p(n + 1, 1, 0) − p(n, 0, 1),

$$E_1 - E_2 = \sum_{n=1}^{N-2} \delta(n).$$

From the boundary equations, p(2, 1, 0) = p(1, 0, 1); then δ(1) = 0. Now add all the internal equations, after changing the index of two of them; a typical left side is

$$r_1(1-r_2)\,p(n,0,0) + r_1 p_2\,p(n,0,1) + (1-p_1)(1-r_2)\,p(n,1,0) + (1-p_1)p_2\,p(n,1,1).$$

This shows that, for n = 2, . . . , N − 2, δ(n) = δ(n − 1), so

$$\delta(n) = 0, \qquad n = 1, \ldots, N-2.$$

Therefore

$$E_1 - E_2 = \sum_{n=1}^{N-2} \delta(n) = 0. \qquad \text{QED}$$
Flow Rate-Idle Time:

$$E = e_1(1 - p_b).$$

When n < N, M1 is not blocked, so it is either working or down:

$$\text{prob}\,[n < N] = E + D_1,$$

and D1 = E p1/r1 (repair frequency equals failure frequency), so

$$1 - p_b = E + E\,\frac{p_1}{r_1} = \frac{E}{e_1}.$$

Similarly,

$$E = e_2(1 - p_s)$$
Two-Machine, Finite-Buffer Lines: Analytical Solution
Internal equations:

$$X^n = (1-r_1)(1-r_2)X^n + (1-r_1)p_2\,X^n Y_2 + p_1(1-r_2)\,X^n Y_1 + p_1 p_2\,X^n Y_1 Y_2$$
$$X^n Y_1 Y_2 = r_1 r_2\,X^n + r_1(1-p_2)\,X^n Y_2 + (1-p_1)r_2\,X^n Y_1 + (1-p_1)(1-p_2)\,X^n Y_1 Y_2$$

Or,

$$1 = (1 - r_1 + Y_1 p_1)(1 - r_2 + Y_2 p_2)$$
$$X^{-1} Y_2 = (1 - r_1 + Y_1 p_1)(r_2 + Y_2(1 - p_2))$$
$$X Y_1 = (r_1 + Y_1(1 - p_1))(1 - r_2 + Y_2 p_2)$$
$$Y_1 Y_2 = (r_1 + Y_1(1 - p_1))(r_2 + Y_2(1 - p_2))$$

Since the last equation is a product of the other three, there are only three independent equations in three unknowns here. They may be simplified further:

$$1 = (1 - r_1 + Y_1 p_1)(1 - r_2 + Y_2 p_2)$$
$$X Y_1 = \frac{r_1 + Y_1(1 - p_1)}{1 - r_1 + Y_1 p_1}$$
$$X^{-1} Y_2 = \frac{r_2 + Y_2(1 - p_2)}{1 - r_2 + Y_2 p_2}$$

Eliminating X and Y2 gives a quadratic in Y1:

$$0 = Y_1^2\,(p_1 + p_2 - p_1 p_2 - p_1 r_2) - \cdots + r_1\,(r_1 + r_2 - r_1 r_2 - r_1 p_2),$$
Boundary conditions: If we plug the internal expression ξ(n, α1, α2) = X^n Y1^{α1} Y2^{α2} into the right side of the lower boundary equations, we get ξ(1, 0, 1) = XY2.
Recall that

$$X_2 = \frac{Y_{22}}{Y_{12}}$$

Consequently,

$$C_1 X_1\,\big(X_1 Y_{11} - Y_{21}\big) = 0,$$

or,

$$C_1\left(\frac{r_1}{p_1} - \frac{r_2}{p_2}\right) = 0.$$

Therefore, if r1/p1 ≠ r2/p2, then C1 = 0.

In the following, we assume r1/p1 ≠ r2/p2 and we drop the j subscript.

But what happens when r1/p1 = r2/p2? And what does r1/p1 = r2/p2 mean?
There are three unknown quantities: p(1, 0, 0), p(1, 1, 1), and C. The remaining lower boundary equation also has the three unknown quantities p(1, 0, 0), p(1, 1, 1), and C. If we eliminate p(1, 1, 1) and simplify, we get

$$p(1, 0, 0) = C X.$$
$$p(1, 1, 1) = \frac{C X}{p_2}\;\frac{r_1 + r_2 - r_1 r_2 - r_1 p_2}{p_1 + p_2 - p_1 p_2 - r_1 p_2}.$$

Finally, the first equation on slide 91 gives

$$p(0, 0, 1) = C X\;\frac{r_1 + r_2 - r_1 r_2 - r_1 p_2}{r_1 p_2}.$$
The upper boundary conditions are determined in the same way.

In summary,

$$Y_1 = \frac{r_1 + r_2 - r_1 r_2 - r_1 p_2}{p_1 + p_2 - p_1 p_2 - p_1 r_2}$$
$$Y_2 = \frac{r_1 + r_2 - r_1 r_2 - p_1 r_2}{p_1 + p_2 - p_1 p_2 - r_1 p_2}$$
$$X = \frac{Y_2}{Y_1}$$

and C is a normalizing constant.
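A sketch that evaluates Y1, Y2, and X from these formulas (illustrative parameter values with r1/p1 ≠ r2/p2, as assumed) and checks the first internal equation:

```python
# Evaluate Y1, Y2, X and verify 1 = (1 - r1 + Y1 p1)(1 - r2 + Y2 p2).
r1, p1 = 0.10, 0.02
r2, p2 = 0.20, 0.01

Y1 = (r1 + r2 - r1 * r2 - r1 * p2) / (p1 + p2 - p1 * p2 - p1 * r2)
Y2 = (r1 + r2 - r1 * r2 - p1 * r2) / (p1 + p2 - p1 * p2 - r1 * p2)
X = Y2 / Y1

check = (1 - r1 + Y1 * p1) * (1 - r2 + Y2 * p2)   # should equal 1
print(f"Y1 = {Y1:.4f}, Y2 = {Y2:.4f}, X = {X:.4f}, check = {check:.6f}")
# Here e1 = r1/(r1+p1) < e2 = r2/(r2+p2), and indeed X < 1.
```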
Observations:

Typically, we can expect that ri < .2, since a repair is likely to take at least 5 times as long as an operation. Also, since typically efficiency = ri/(ri + pi) > .7, we have pi < .4ri. As a result, p(0, 0, 1), p(1, 1, 1), p(N − 1, 1, 1), and p(N, 1, 0) are much larger than the internal probabilities.

This is because the system tends to spend much more time at those states than at internal states.
Limits:

If r1 → 0, then E → 0, ps → 1, pb → 0, n̄ → 0.
If r2 → 0, then E → 0, pb → 1, ps → 0, n̄ → N.
If p1 → 0, then ps → 0, E → 1 − pb → e2, n̄ → N − e2.
If p2 → 0, then pb → 0, E → 1 − ps → e1, n̄ → e1.
If N → ∞ and e1 < e2, then E → e1, pb → 0, ps → 1 − e1/e2.

Proof: Many of the limits follow from combining conservation of flow and the flow rate-idle time relationship:

$$E = \frac{r_1}{r_1 + p_1}\,(1 - p_b) = \frac{r_2}{r_2 + p_2}\,(1 - p_s).$$

The last set comes from the analytic solution and the observation that if e1 > e2 then X > 1, and if e1 < e2 then X < 1.
[Figures: simulated buffer level n(t): a short run (0 ≤ t ≤ 1000 with N = 10) and three longer runs (0 ≤ t ≤ 10000 with N = 100).]
[Figure: P versus N for τ = 1, p1 = .1, r2 = .1, p2 = .1, and r1 = 0.06, 0.08, 0.10, 0.12, 0.14; P increases with N and with r1, and each curve approaches an asymptote.]

◮ Why are the graphs increasing?
◮ Why do they reach an asymptote?
◮ What is P when N = 0?
Questions:
◮ If we want to increase production rate, which machine should we
improve?
◮ What would happen to production rate if we improved any other
machine?
[Figure: P versus N, comparing identical machines with Machine 1 improved (improvements to the non-bottleneck machine).]

Note: Graphs would be the same if we improved Machine 2.
[Figure: n̄ versus N. Inventory increases as the (non-bottleneck) upstream machine is improved and as N increases.]
Average inventory vs. storage space

[Figure: average inventory n̄ versus storage space N.]
Which is better, short frequent failures or long rare failures?

◮ r2 = 0.8, p2 = 0.09, N = 10
◮ r1 and p1 vary together so that e1 = r1/(r1 + p1) = .9
◮ Answer: evidently, short, frequent failures.
◮ Why?

[Figure: P versus r1 (0 to 1), increasing from about 0.75 toward 0.9.]
◮ M1: r1 = .1, p1 = .01
◮ M2, three cases with equal efficiency but different failure time scales:
  ◮ r2 = .1, p2 = .01 (Fast)
  ◮ r2 = .01, p2 = .001 (Slow)
  ◮ r2 = .001, p2 = .0001 (Very slow)

[Figures: P versus N and n̄ versus N for the three cases of M2, plotted over successively larger ranges of N (up to 2000, 2 × 10^4, and 10^5 for P; up to 20, 200, and 2000 for n̄). The slower M2's failure time scale, the larger the buffer needed for P and n̄ to approach their limits.]
We can assume that only one event occurs during (t, t + δt).

[Figure: the state space (n, α1, α2) of the two-machine line.]
$$P_i = \mu_i E_i.$$

Conservation of Flow:

$$P = P_1 = P_2 = \ldots = P_k.$$

$$\rho_i = \mu_i e_i.$$

Transition equations are written for each of the machine-state combinations (α1, α2) = (0, 0), (0, 1), (1, 0), and (1, 1).
Performance measures

Efficiencies:

$$E_1 = \sum_{n=0}^{N-1}\sum_{\alpha_2=0}^{1} p(n, 1, \alpha_2),$$
$$E_2 = \sum_{n=1}^{N}\sum_{\alpha_1=0}^{1} p(n, \alpha_1, 1).$$

Production rate:

$$P = \mu_1 E_1 = \mu_2 E_2.$$

Expected in-process inventory:

$$\bar n = \sum_{n=0}^{N}\sum_{\alpha_1=0}^{1}\sum_{\alpha_2=0}^{1} n\,p(n, \alpha_1, \alpha_2).$$
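These sums are immediate to compute once the steady-state distribution is known. A sketch (the array p below is a made-up normalized distribution, an assumption used only to show the index ranges; a real p would come from solving the transition equations, so here P1 and P2 will not agree):

```python
import numpy as np

# Performance measures from a steady-state distribution p[n, a1, a2].
N, mu1, mu2 = 10, 1.0, 1.1
p = np.random.default_rng(1).random((N + 1, 2, 2))
p /= p.sum()                     # normalize the made-up distribution

E1 = p[0:N, 1, :].sum()          # alpha1 = 1 and n < N  (M1 not blocked)
E2 = p[1:N + 1, :, 1].sum()      # alpha2 = 1 and n > 0  (M2 not starved)
P1, P2 = mu1 * E1, mu2 * E2      # equal only for a true steady state
nbar = sum(n * p[n].sum() for n in range(N + 1))
print(f"E1 = {E1:.4f}, E2 = {E2:.4f}, P1 = {P1:.4f}, P2 = {P2:.4f}, nbar = {nbar:.4f}")
```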
Assume a solution of the form p(n, α1, α2) = C X^n Y1^{α1} Y2^{α2}. Then the internal equations reduce to

$$p_1 Y_1 + p_2 Y_2 - r_1 - r_2 = 0$$
$$\mu_1\left(\frac{1}{X} - 1\right) - p_1 Y_1 + r_1 + \frac{r_1}{Y_1} - p_1 = 0$$
$$\mu_2\,(X - 1) - p_2 Y_2 + \frac{r_2}{Y_2} + r_2 - p_2 = 0$$
Y2
2.852 Manufacturing Systems Analysis 132/165 c
Copyright
2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Exponential processing time model
The other three solutions satisfy a cubic polynomial equation. Compare with slide
85. In general, there is no simple expression for them.
2.852 Manufacturing Systems Analysis 133/165 c
Copyright
2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Exponential processing time model
When x = 0, the change in x is (α1µ1 − α2µ2)⁺ δt. (That is, when x = 0, x can only increase.)

When x = N, the change in x is (α1µ1 − α2µ2)⁻ δt. (That is, when x = N, x can only decrease.)

[Figure: transition diagram for an internal x, with failure probabilities p1δt, p2δt, repair probabilities r1δt, r2δt, and x moving at rates determined by µ1 and µ2.]
For an internal x,

$$f(x, 1, 1, t + \delta t) = (1 - (p_1 + p_2)\delta t)\,f(x, 1, 1, t) + \frac{\partial f}{\partial x}(x, 1, 1, t)\,(\mu_2 - \mu_1)\delta t + r_1\delta t\,f(x + \mu_2\delta t, 0, 1, t) + r_2\delta t\,f(x - \mu_1\delta t, 1, 0, t) + o(\delta t)$$

or, expanding the remaining terms to first order and collecting,

$$f(x, 1, 1, t + \delta t) = f(x, 1, 1, t) - (p_1 + p_2)f(x, 1, 1, t)\delta t + (\mu_2 - \mu_1)\frac{\partial f}{\partial x}(x, 1, 1, t)\delta t + r_1 f(x, 0, 1, t)\delta t + r_2 f(x, 1, 0, t)\delta t + o(\delta t)$$

or, finally,

$$\frac{\partial f}{\partial t}(x, 1, 1) = -(p_1 + p_2)f(x, 1, 1) + (\mu_2 - \mu_1)\frac{\partial f}{\partial x}(x, 1, 1) + r_1 f(x, 0, 1) + r_2 f(x, 1, 0)$$

Similarly,

$$\frac{\partial f}{\partial t}(x, 0, 0) = -(r_1 + r_2)f(x, 0, 0) + p_1 f(x, 1, 0) + p_2 f(x, 0, 1)$$
$$\frac{\partial f}{\partial t}(x, 0, 1) = \mu_2\frac{\partial f}{\partial x}(x, 0, 1) - (r_1 + p_2)f(x, 0, 1) + p_1 f(x, 1, 1) + r_2 f(x, 0, 0)$$
$$\frac{\partial f}{\partial t}(x, 1, 0) = -\mu_1\frac{\partial f}{\partial x}(x, 1, 0) - (p_1 + r_2)f(x, 1, 0) + p_2 f(x, 1, 1) + r_1 f(x, 0, 0)$$
[Figure: transitions into the boundary state x = 0.]

The system can go from (0,0,0) to (0,0,0) if there is no repair. It can go from (0,1,0) if the first machine does not fail.

It cannot go from (0,0,1) to (0,0,0) because the second machine is starved and cannot fail. To go from (0,1,1) to (0,0,0) requires two simultaneous failures, which has a probability on the order of δt².

The state could also be reached from states with 0 < x < α2µ2δt − α1µ1δt. But

$$\text{prob}\,([0 < x < \mu_2\delta t], 0, 1) = f(x, 0, 1)\,\mu_2\delta t + o(\delta t) = f(0, 0, 1)\,\mu_2\delta t + o(\delta t)$$

and the transition probability from (0,1) to (0,0) is

$$(1 - r_1\delta t)\,p_2\delta t + o(\delta t) = p_2\delta t + o(\delta t).$$

Therefore, the probability of going from ([0 < x < µ2δt], 0, 1) to (0,0,0) is of order δt² and may be neglected. Therefore,

$$\frac{d}{dt}\,p(0, 0, 0) = -(r_1 + r_2)\,p(0, 0, 0) + p_1\,p(0, 1, 0)$$
Consider state (0,1,0). As soon as the system enters this state, it leaves, because x must immediately increase. Therefore

$$p(0, 1, 0) = 0$$

even if the system is not in steady state. Therefore

$$\frac{d}{dt}\,p(0, 0, 0) = -(r_1 + r_2)\,p(0, 0, 0)$$

In steady state,

$$p(0, 0, 0) = 0$$
Similarly, collecting the transitions into (0, 0, 1),

$$\frac{d}{dt}\,p(0, 0, 1) = r_2\,p(0, 0, 0) - r_1\,p(0, 0, 1) + p_1\,p(0, 1, 1) + \mu_2\,f(0, 0, 1).$$
For (0, 1, 1), with p2^b denoting the failure probability of M2 while it is starved and running at reduced speed,

$$\frac{d}{dt}\,p(0, 1, 1) = -(p_1 + p_2^b)\,p(0, 1, 1) + r_1\,p(0, 0, 1) + (\mu_2 - \mu_1)\,f(0, 1, 1), \quad \text{if } \mu_1 \le \mu_2.$$

If µ1 > µ2, then x immediately increases from 0, so

$$p(0, 1, 1) = 0$$
◮ Production rate

$$P_2 = \mu_2\left[\int_0^N \big(f(x, 0, 1) + f(x, 1, 1)\big)\,dx + p(N, 1, 1)\right] + \mu_1\,p(0, 1, 1)$$
$$= P_1 = \mu_1\left[\int_0^N \big(f(x, 1, 0) + f(x, 1, 1)\big)\,dx + p(0, 1, 1)\right] + \mu_2\,p(N, 1, 1).$$

◮ Average in-process inventory

$$\bar x = \sum_{\alpha_1=0}^{1}\sum_{\alpha_2=0}^{1}\left[\int_0^N x\,f(x, \alpha_1, \alpha_2)\,dx + N\,p(N, \alpha_1, \alpha_2)\right].$$
Also to come:
◮ Identities (in steady state)
◮ Conservation of flow; Blocking, Starvation, and Production Rate;
Repair frequency equals failure frequency; Flow Rate-Idle Time; Limits
◮ Solution technique
◮ Internal solution; transient states;
[Figures: comparison of the exponential and continuous-material models for a two-machine line with N = 20: P versus µ2 and n̄ versus µ2, together with the no-variability limit, and n̄ versus r1.]

◮ Explain the shapes of the graphs.
Continuous material and Deterministic Processing Time Lines

[Figure: E versus delta, with E between about 0.655 and 0.67 over 0 ≤ delta ≤ 1.]