
MIT 2.852

Manufacturing Systems Analysis

Lectures 15–16: Assembly/Disassembly Systems

Stanley B. Gershwin

http://web.mit.edu/manuf-sys

Massachusetts Institute of Technology

Spring, 2010

Assembly-Disassembly Systems
Assembly System

Assembly-Disassembly Systems
Assembly-Disassembly System with a Loop

Assembly-Disassembly Systems
A-D System without Loops

Assembly-Disassembly Systems
Disruption Propagation in an A-D System without Loops

[Figure: disruption propagation in an A-D system without loops — when one machine fails, its upstream buffers fill (F) and its downstream buffers empty (E), and the disturbance spreads through the whole network.]
Assembly-Disassembly Systems
Models and Analysis

An assembly/disassembly system is a generalization of a transfer line:


◮ Each machine may have 0, 1, or more than one buffer upstream.
◮ Each machine may have 0, 1, or more than one buffer downstream.
◮ Each buffer has exactly one machine upstream and one machine
downstream.
◮ Discrete material systems: when a machine does an operation, it
removes one part from each upstream buffer and inserts one part into
each downstream buffer.
◮ Continuous material systems: when machine Mi operates during
[t, t + δt], it removes µi δt from each upstream buffer and inserts
µi δt into each downstream buffer.
◮ A machine is starved if any of its upstream buffers is empty. It is
blocked if any of its downstream buffers is full. (A coded version of this rule follows.)
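The operating rule in the last bullet is easy to state in code. Below is a minimal sketch (Python; the buffer levels and the function name are illustrative, not notation from the slides) of the test that decides whether a machine in a discrete-material A/D system can operate.

```python
# A machine operates only if no upstream buffer is empty (starvation)
# and no downstream buffer is full (blockage).
def can_operate(up_levels, down_levels, down_caps):
    starved = any(n == 0 for n in up_levels)
    blocked = any(n == N for n, N in zip(down_levels, down_caps))
    return not starved and not blocked

print(can_operate([3, 1], [2], [5]))  # True: no empty input, no full output
```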
Assembly-Disassembly Systems
Models and Analysis

◮ A/D systems can be modeled similarly to lines:


◮ discrete material, discrete time, deterministic processing time,
geometric repair and failure times;
◮ discrete material, continuous time, exponential processing, repair, and
failure times;
◮ continuous material, continuous time, deterministic processing rate, exponential
repair and failure times;
◮ other models not yet discussed in class.
◮ A/D systems without loops can be analyzed similarly to lines by
decomposition.
◮ A/D systems with loops can be analyzed by decomposition, but there
are additional complexities.

Assembly-Disassembly Systems
Models and Analysis

◮ Systems with loops are not ergodic. That is, the steady-state
distribution is a function of the initial conditions.
◮ Example: if the system below has K pallets at time 0, it will have K
pallets for all t ≥ 0. Therefore, the probability distribution is a
function of K .
[Figure: a closed-loop system — raw parts enter, finished parts leave, and empty pallets return to the start through an Empty Pallet Buffer.]

◮ This applies to more general systems with loops, such as the example on
Slide 3.

Assembly-Disassembly Systems
Models and Analysis

◮ In general,

p(s|s(0)) = lim_{t→∞} prob{ state of the system at time t = s | state of the system at time 0 = s(0) }.

◮ Consequently, the performance measures depend on the initial state of the system:
◮ The production rate of Machine Mi, in parts per time unit, is

Ei(s(0)) = prob( αi = 1 and (nb > 0 ∀ b ∈ U(i)) and (nb < Nb ∀ b ∈ D(i)) | s(0) ).

◮ The average level of Buffer b is

n̄b(s(0)) = ∑_s nb prob(s|s(0)).
Assembly-Disassembly Systems
Decomposition

[Figure: part of the original network — machines Mj, ..., Mn feed Mi through buffers B(j, i), ..., B(n, i), and Mi feeds machines Mm, ..., Mq through buffers B(i, m), ..., B(i, q).]
Assembly-Disassembly Systems
Decomposition

[Figure: part of the decomposition — each buffer B(·, ·) of the original network is analyzed as a two-machine line Mu(·, ·) → B(·, ·) → Md(·, ·).]
Numerical examples
Eight-Machine Systems

Deterministic processing time model

[Figure: the eight-machine assembly/disassembly system used in Cases 1–4.]
Numerical examples
Eight-Machine Systems

Case 1: ri = .1, pi = .1, i = 1, ..., 8; Ni = 10, i = 1, ..., 7.

[Figure: the system with computed average buffer levels 7.3351, 5.6516, 2.6449, 7.3351, 7.3351, 5.6516, 7.3351.]
Numerical examples
Eight-Machine Systems

Case 2: Same as Case 1 except p7 = .2.

[Figure: computed average buffer levels 7.9444, 7.0529, 2.0555, 7.9444, 7.9444, 7.0529, 7.9444.]
Numerical examples
Eight-Machine Systems

Case 3: Same as Case 1 except p1 = .2.

[Figure: computed average buffer levels 4.5640, 4.1089, 2.2923, 7.7077, 7.7077, 6.5276, 7.7077.]
Numerical examples
Eight-Machine Systems

Case 4: Same as Case 1 except p3 = .2.

[Figure: computed average buffer levels 7.9017, 3.4593, 2.0983, 7.9017, 7.9017, 6.9609, 7.9017.]
Numerical Examples

Alternate Assembly Line Designs

A product is made of three subassemblies (blue, yellow, and red). Each subassembly can be assembled independently of the others. We consider four possible production system structures.

Machine 6 (the first machine of the yellow process) is the bottleneck — the slowest operation of all.
Numerical Examples
Alternate Assembly Line Designs

Numerical Examples
Alternate Assembly Line Designs

Now the bottleneck is Machine 5, the last operation of the blue process.
Numerical Examples
Alternate Assembly Line Designs

Equivalence
Simple models

Consider a three-machine transfer line and a three-machine assembly system. Both are perfectly reliable (pi = 0) systems with exponential processing times.

[Figure: the assembly system — M1 (rate µ1) feeds B1 (N1 = 2), M3 (rate µ3) feeds B2 (N2 = 3), and M2 (rate µ2) takes one part from each buffer — and the transfer line M1 → B1 (N1 = 2) → M2 → B2 (N2 = 3) → M3.]
Equivalence
Assembly System State Space

[Figure: assembly system state space — states (n1, n2) with n1 = 0, ..., 2 and n2 = 0, ..., 3; µ1 transitions increase n1, µ3 transitions increase n2, and µ2 transitions decrease both levels at once.]
Equivalence
Transfer Line State Space

[Figure: transfer line state space — states (n1, n2); µ1 transitions increase n1, µ2 transitions move a part from B1 to B2 (n1 − 1, n2 + 1), and µ3 transitions decrease n2.]
Equivalence
Unlabeled State Space

◮ The transition graphs of the two systems are the same except for the labels of the states.
◮ Therefore, the steady-state probability distributions of the two systems are the same, except for the labels of the states.
◮ The relationship between the labels of the states is:

(n1A, n2A) ⟺ (n1T, N2 − n2T)

◮ Therefore, in steady state,

prob(n1A, n2A) = prob(n1T, N2 − n2T)

[Figure: the common unlabeled transition graph.]
Equivalence
Assembly System Production Rate

Production rate = rate of flow of material into M1

= µ1 ∑_{n1=0}^{1} ∑_{n2=0}^{3} p(n1, n2)

[Figure: the assembly system and its state space, repeated.]
Equivalence
Transfer Line Production Rate

Production rate = rate of flow of material into M1

= µ1 ∑_{n1=0}^{1} ∑_{n2=0}^{3} p(n1, n2)

[Figure: the transfer line and its state space, repeated.]
Equivalence
Equal Production Rates

Therefore

PA = PT
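The equality PA = PT can be checked numerically. The sketch below (Python with numpy; the rates are illustrative, not from the slides) builds the generator matrices of the two Markov chains just described — the transfer line M1 → B1 → M2 → B2 → M3 and the assembly system in which M1 fills B1, M3 fills B2, and M2 removes one part from each — solves πQ = 0 for the stationary distributions, and compares the production rates µ1 · prob(n1 < N1).

```python
import numpy as np

mu = [1.0, 1.1, 0.9]          # assumed rates mu1, mu2, mu3
N1, N2 = 2, 3
states = [(n1, n2) for n1 in range(N1 + 1) for n2 in range(N2 + 1)]
index = {s: k for k, s in enumerate(states)}

def generator(transitions):
    # Assemble Q from a state -> [(next_state, rate)] map.
    Q = np.zeros((len(states), len(states)))
    for s in states:
        for s2, rate in transitions(s):
            Q[index[s], index[s2]] += rate
            Q[index[s], index[s]] -= rate
    return Q

def line(s):                   # transfer line dynamics
    n1, n2 = s
    t = []
    if n1 < N1:            t.append(((n1 + 1, n2), mu[0]))      # M1 -> B1
    if n1 > 0 and n2 < N2: t.append(((n1 - 1, n2 + 1), mu[1]))  # B1 -> B2
    if n2 > 0:             t.append(((n1, n2 - 1), mu[2]))      # M3 drains B2
    return t

def assembly(s):               # assembly dynamics (flow of B2 reversed)
    n1, n2 = s
    t = []
    if n1 < N1:            t.append(((n1 + 1, n2), mu[0]))      # M1 fills B1
    if n2 < N2:            t.append(((n1, n2 + 1), mu[2]))      # M3 fills B2
    if n1 > 0 and n2 > 0:  t.append(((n1 - 1, n2 - 1), mu[1]))  # M2 assembles
    return t

def stationary(Q):
    # Solve pi Q = 0 with sum(pi) = 1 via least squares.
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

pT, pA = stationary(generator(line)), stationary(generator(assembly))
PT = mu[0] * sum(pT[index[s]] for s in states if s[0] < N1)
PA = mu[0] * sum(pA[index[s]] for s in states if s[0] < N1)
print(PT, PA)                  # the two production rates coincide
```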

Equivalence
Assembly System n̄1

n̄1 = ∑_{n1=0}^{2} ∑_{n2=0}^{3} n1 p(n1, n2) = ∑_{n1=0}^{2} n1 ∑_{n2=0}^{3} p(n1, n2)

[Figure: the assembly system and its state space, repeated.]
Equivalence
Transfer Line n̄1

n̄1 = ∑_{n1=0}^{2} ∑_{n2=0}^{3} n1 p(n1, n2) = ∑_{n1=0}^{2} n1 ∑_{n2=0}^{3} p(n1, n2)

[Figure: the transfer line and its state space, repeated.]
Equivalence
Equal n̄1

Therefore

n̄1A = n̄1T
Equivalence
Assembly System n̄2

n̄2 = ∑_{n1=0}^{2} ∑_{n2=0}^{3} n2 p(n1, n2) = ∑_{n2=0}^{3} n2 ∑_{n1=0}^{2} p(n1, n2)

[Figure: the assembly system and its state space, repeated.]
Equivalence
Transfer Line n̄2

n̄2 = ∑_{n1=0}^{2} ∑_{n2=0}^{3} n2 p(n1, n2) = ∑_{n2=0}^{3} n2 ∑_{n1=0}^{2} p(n1, n2)

[Figure: the transfer line and its state space, repeated.]
Equivalence
Complementary n̄2

Therefore

n̄2A = N2 − n̄2T
Equivalence
Theorem

◮ Notation: Let j be a buffer. Then the machine upstream of the


buffer is u(j) and the machine downstream of the buffer is d(j).
◮ Theorem:

◮ Assume

◮ Z and Z′ are two exponential A/D networks with the same number of machines and buffers. Corresponding machines and buffers have the same parameters; that is, µ′i = µi, i = 1, ..., kM, and N′b = Nb, b = 1, ..., kB.
◮ There is a subset of buffers Ω such that for j ∉ Ω, u′(j) = u(j) and d′(j) = d(j); and for j ∈ Ω, u′(j) = d(j) and d′(j) = u(j). That is, there is a set of buffers such that the direction of flow is reversed in the two networks.
◮ Then, the transition equations for network Z′ are the same as those of Z, except that the buffer levels in Ω are replaced by the amounts of space in those buffers.
Equivalence
Theorem

◮ That is, the transition (or balance) equations of Z ′ can be written by


transforming those of Z .

◮ In the Z equations, replace nj by Nj − nj for all j ∈ Ω.

Equivalence
Theorem

Corollary:
◮ Assume:
◮ The initial states s(0) and s′(0) are related as follows: n′j(0) = nj(0) for j ∉ Ω, and n′j(0) = Nj − nj(0) for j ∈ Ω.
◮ Then

P′(n′(0)) = P(n(0))

n̄′b(n′(0)) = n̄b(n(0)), for b ∉ Ω

n̄′b(n′(0)) = Nb − n̄b(n(0)), for b ∈ Ω
Equivalence
Theorem

Corollary: That is,

◮ the production rates of the two systems are the same,

◮ the average levels of all the buffers in the systems whose direction of
flow has not been changed are the same,

◮ the average levels of all the buffers in the systems whose direction of
flow has been changed are complementary; the average number of
parts in one is equal to the average amount of space in the other.

Equivalence
Equivalence class of three-machine systems

[Figure: the equivalence class of three-machine systems — the transfer line (µ1 → N1 → µ2 → N2 → µ3), two assembly/disassembly variants obtained by reversing one of the buffers, and the fully reversed line (µ3 → N2 → µ2 → N1 → µ1).]
Equivalence
Equivalence classes of four-machine systems

Representative members

Equivalence
Example of equivalent loops

[Figure: (a) A Fork/Join Network and (b) A Closed Network, on machines 1–4; the two are equivalent under reversal of the buffer set Ω = (3, 4).]
Equivalence
To come

◮ Loops and invariants


◮ Two-machine loops
◮ Instability of A/D systems with infinite buffers

MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT 2.852

Manufacturing Systems Analysis

Lectures 19–21
Scheduling: Real-Time Control of Manufacturing Systems
Stanley B. Gershwin

Spring, 2007

Definitions

• Events may be controllable or not, and predictable or not.

                 controllable     uncontrollable
  predictable    loading a part   lunch
  unpredictable  ???              machine failure
Definitions

• Scheduling is the selection of times for future controllable events.
• Ideally, scheduling systems should deal with all controllable events, and not just production.
  ◦ That is, they should select times for operations, set-up changes, preventive maintenance, etc.
  ◦ They should at least be aware of set-up changes, preventive maintenance, etc. when they select times for operations.
Definitions

• Because of recurring random events, scheduling is


an on-going process, and not a one-time calculation.

• Scheduling, or shop floor control, is the bottom of the


scheduling/planning hierarchy. It translates plans
into events.

Issues in

Factory Control

• Problems are dynamic ; current decisions influence


future behavior and requirements.
• There are large numbers of parameters, time-varying
quantities, and possible decisions.
• Some time-varying quantities are stochastic .
• Some relevant information (MTTR, MTTF, amount of
inventory available, etc.) is not known.
• Some possible control policies are unstable .

Example
Dynamic Programming Problem

Discrete Time, Discrete State, Deterministic

[Figure: a directed network with nodes A through O and Z, with a cost on each link.]

Problem: find the least expensive path from A to Z.
Example
Dynamic Programming Problem

Let g(i, j) be the cost of traversing the link from i to j. Let i(t) be the t-th node on a path from A to Z. Then the path cost is

∑_{t=1}^{T} g(i(t − 1), i(t))

where T is the number of nodes on the path, i(0) = A, and i(T) = Z.

T is not specified; it is part of the solution.
Example
Dynamic Programming Solution

• A possible approach would be to enumerate all possible paths (possible solutions). However, there can be a lot of possible solutions.
• Dynamic programming reduces the number of possible solutions that must be considered.
  ◦ Good news: it often greatly reduces the number of possible solutions.
  ◦ Bad news: it often does not reduce it enough to give an exact optimal solution practically (i.e., with limited time and memory). This is the curse of dimensionality.
  ◦ Good news: we can learn something by characterizing the optimal solution, and that sometimes helps in getting an analytical optimal solution or an approximation.
  ◦ Good news: it tells us something about stochastic problems.
Example
Dynamic Programming Solution

Instead of solving the problem only for A as the initial point, we solve it for all possible initial points.

For every node i, define J(i) to be the optimal cost to go from Node i to Node Z (the cost of the optimal path from i to Z). We can write

J(i) = ∑_{t=1}^{T} g(i(t − 1), i(t))

where i(0) = i, i(T) = Z, and (i(t − 1), i(t)) is a link for every t.
Example
Dynamic Programming Solution

Then J(i) satisfies

J(Z) = 0

and, if the optimal path from i to Z traverses link (i, j),

J(i) = g(i, j) + J(j).

[Figure: a path from i through j to Z.]
Example
Dynamic Programming Solution

Suppose that several links go out of Node i.

[Figure: links from i to nodes j1, ..., j6, each of which has a known optimal path to Z.]

Suppose that for each node j for which a link exists from i to j, the optimal path and optimal cost J(j) from j to Z is known.
Example
Dynamic Programming Solution

Then the optimal path from i to Z is the one that minimizes the sum of the costs from i to j and from j to Z. That is,

J(i) = min_j [g(i, j) + J(j)]

where the minimization is performed over all j such that a link from i to j exists. This is the Bellman equation.

This is a recursion or recursive equation because J() appears on both sides, although with different arguments.

J(i) can be calculated from this if J(j) is known for every node j such that (i, j) is a link.
Example
Dynamic Programming Solution

Bellman's Principle of Optimality: if i and j are nodes on an optimal path from A to Z, then the portion of that path between i and j is an optimal path from i to j.

[Figure: an optimal path from A to Z passing through intermediate nodes i and j.]
Example
Dynamic Programming Solution

Example: Assume that we have determined that J(O) = 6 and J(J) = 11. To calculate J(K),

J(K) = min[ g(K, O) + J(O), g(K, J) + J(J) ]
     = min[ 3 + 6, 9 + 11 ] = 9.
Example
Dynamic Programming Solution

Algorithm (a coded sketch follows):
1. Set J(Z) = 0.
2. Find some node i such that
   • J(i) has not yet been found, and
   • for each node j for which link (i, j) exists, J(j) is already calculated.
   Assign J(i) according to

   J(i) = min_j [g(i, j) + J(j)]

3. Repeat Step 2 until all nodes, including A, have costs calculated.
Example
Dynamic Programming Solution

[Figure: the example network with the computed cost-to-go J(i) written at each node.]
Example
Dynamic Programming Solution

The important features of a dynamic programming problem are
• the state (i);
• the decision (to go to j after i);
• the objective function ( ∑_{t=1}^{T} g(i(t − 1), i(t)) );
• the cost-to-go function (J(i));
• the one-step recursion equation that determines J(i) ( J(i) = min_j [g(i, j) + J(j)] );
• that the solution is determined for every i, not just A and not just nodes on the optimal path;
• that J(i) depends on the nodes to be visited after i, not those between A and i. The only thing that matters is the present state and the future;
• that J(i) is obtained by working backwards.
Example
Dynamic Programming Solution

This problem was
• discrete time, discrete state, deterministic.
Other versions:
• discrete time, discrete state, stochastic
• continuous time, discrete state, deterministic
• continuous time, discrete state, stochastic
• continuous time, mixed state, deterministic
• continuous time, mixed state, stochastic

In stochastic systems, we optimize the expected cost.
Dynamic Programming
Discrete Time, Discrete State — Stochastic

Suppose
• g(i, j) is a random variable; or
• if you are at i and you choose j, you actually go to k with probability p(i, j, k).
Then the cost of a sequence of choices is random. The objective function is

E [ ∑_{t=1}^{T} g(i(t − 1), i(t)) ]

and we can define

J(i) = min_j E [ g(i, j) + J(j) ]
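A toy instance of this recursion (all numbers hypothetical — choosing j from i lands at k with probability p(i, j, k)):

```python
# Costs-to-go assumed already computed for B, C, Z.
J = {'Z': 0.0, 'B': 5.0, 'C': 6.0}
g = {('A', 'B'): 2.0, ('A', 'C'): 1.0}
p = {('A', 'B'): {'B': 0.8, 'C': 0.2},   # choosing B sometimes lands at C
     ('A', 'C'): {'C': 1.0}}

def expected_cost(i, j):
    return g[i, j] + sum(q * J[k] for k, q in p[i, j].items())

J['A'] = min(expected_cost('A', j) for j in ('B', 'C'))
print(J['A'])   # min(2 + .8*5 + .2*6, 1 + 6) = min(7.2, 7.0) = 7.0
```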
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

Context: the planning/scheduling hierarchy.
• Long term: factory design, capital expansion, etc.
• Medium term: demand planning, staffing, etc.
• Short term:
  ◦ response to short term events
  ◦ part release and dispatch

In this problem, we deal with the response to short term events. The factory and the demand are given to us; we must calculate short term production rates; these rates are the targets that release and dispatch must achieve.
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

[Figure: a single machine (failure rate p, repair rate r) producing two part types; controls u1(t), u2(t) feed surpluses x1, x2 against demands d1, d2.]

• Perfectly flexible machine, two part types. λi time units are required to make a Type i part, i = 1, 2.
• Exponential failures and repairs with rates p and r.
• Constant demand rates d1, d2.
• Instantaneous production rates ui(t), i = 1, 2 — the control variables.
• Downstream surpluses xi(t).
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

Objective: minimize the difference between cumulative production and cumulative demand.

[Figure: cumulative production Pi(t) and cumulative demand Di(t) = di t versus t; the vertical gap between them is the surplus xi(t).]

The surplus satisfies

xi(t) = Pi(t) − Di(t)
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

Feasibility:
• For the problem to be feasible, it must be possible to make approximately diT Type i parts in a long time period of length T, i = 1, 2. (Why "approximately"?)
• The time required to make diT parts is λi di T.
• During this period, the total up time of the machine — i.e., the time available for production — is approximately (r/(r + p))T.
• Therefore, we must have λ1 d1 T + λ2 d2 T ≤ (r/(r + p))T, or

∑_{i=1}^{2} λi di ≤ r/(r + p)
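In code, the feasibility test is one comparison (illustrative numbers):

```python
lam = [0.4, 0.5]          # lambda_i: time to make one Type-i part (assumed)
d   = [0.8, 0.6]          # demand rates (assumed)
r, p = 0.09, 0.01
feasible = sum(l * di for l, di in zip(lam, d)) <= r / (r + p)
print(feasible)           # True: 0.62 <= 0.9
```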
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

If this condition is not satisfied, the demand cannot be met. What will happen to the surplus?

The feasibility condition is also written

∑_{i=1}^{2} di/µi ≤ r/(r + p)

where µi = 1/λi.

If there were only one part type, this would be

d ≤ µ r/(r + p)

Look familiar?
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

The surplus satisfies

xi(t) = Pi(t) − Di(t)

where

Pi(t) = ∫_0^t ui(s) ds;    Di(t) = di t

Therefore

dxi(t)/dt = ui(t) − di
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

To define the objective more precisely, let there be a function g(x1, x2) such that
• g is convex
• g(0, 0) = 0
• lim_{x1→∞} g(x1, x2) = ∞;  lim_{x1→−∞} g(x1, x2) = ∞
• lim_{x2→∞} g(x1, x2) = ∞;  lim_{x2→−∞} g(x1, x2) = ∞
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

Examples:
• g(x1, x2) = A1 x1² + A2 x2²
• g(x1, x2) = A1|x1| + A2|x2|
• g(x1, x2) = g1(x1) + g2(x2) where
  ◦ gi(xi) = gi+ xi⁺ + gi− xi⁻,
  ◦ xi⁺ = max(xi, 0), xi⁻ = −min(xi, 0),
  ◦ gi+ > 0, gi− > 0.
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

Objective:

min E ∫_0^T g(x1(t), x2(t)) dt

[Figure: the convex cost surface g(x1, x2).]
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

Constraints:

u1(t) ≥ 0;  u2(t) ≥ 0

Short-term capacity:
• If the machine is down at time t,

u1(t) = u2(t) = 0
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

• Assume the machine is up for a short period [t, t + δt]. Let δt be small enough so that ui is constant; that is,

ui(s) = ui(t),  s ∈ [t, t + δt]

The machine makes ui(t)δt parts of type i. The time required to make that number of Type i parts is λi ui(t) δt. Therefore

∑_i λi ui(t) δt ≤ δt

or

∑_i λi ui(t) ≤ 1

[Figure: the capacity set — the triangle in the (u1, u2) plane bounded by the axes and the line λ1u1 + λ2u2 = 1, with intercepts 1/λ1 and 1/λ2.]
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

Machine state dynamics: define α(t) to be the repair state of the machine at time t: α(t) = 1 means the machine is up; α(t) = 0 means the machine is down.

prob( α(t + δt) = 0 | α(t) = 1 ) = p δt + o(δt)

prob( α(t + δt) = 1 | α(t) = 0 ) = r δt + o(δt)

The constraints may be written

∑_i λi ui(t) ≤ α(t);  ui(t) ≥ 0
Dynamic Programming
Continuous Time, Mixed State — Stochastic Example

Dynamic programming problem formulation:

min E ∫_0^T g(x1(t), x2(t)) dt

subject to:

dxi(t)/dt = ui(t) − di

prob( α(t + δt) = 0 | α(t) = 1 ) = p δt + o(δt)

prob( α(t + δt) = 1 | α(t) = 0 ) = r δt + o(δt)

∑_i λi ui(t) ≤ α(t);  ui(t) ≥ 0

x(0), α(0) specified
Dynamic Programming
Elements of a DP Problem

• state x: all the information that is available to determine the future evolution of the system.
• control u: the actions taken by the decision-maker.
• objective function J: the quantity that must be minimized.
• dynamics: the evolution of the state as a function of the control variables and random events.
• constraints: the limitations on the set of allowable controls.
• initial conditions: the values of the state variables at the start of the time interval over which the problem is described. There are also sometimes terminal conditions, such as in the network example.
Dynamic Programming
Elements of a DP Solution

• control policy: u(x(t), t). A stationary or time-invariant policy is of the form u(x(t)).
• value function (also called the cost-to-go function): the value J(x, t) of the objective function when the optimal control policy is applied starting at time t, when the initial state is x(t) = x.
Bellman's Equation
Continuous x, t — Deterministic

Problem:

min_{u(t), 0≤t≤T} ∫_0^T g(x(t), u(t)) dt + F(x(T))

such that

dx(t)/dt = f(x(t), u(t), t)

x(0) specified

h(x(t), u(t)) ≤ 0

x ∈ Rⁿ, u ∈ Rᵐ, f ∈ Rⁿ, h ∈ Rᵏ, and g and F are scalars.

Data: T, x(0), and the functions f, g, h, and F.
Bellman's Equation
Continuous x, t — Deterministic

The cost-to-go function is

J(x, t) = min ∫_t^T g(x(s), u(s)) ds + F(x(T))

so

J(x(0), 0) = min ∫_0^T g(x(s), u(s)) ds + F(x(T))

= min_{u(t), 0≤t≤T} [ ∫_0^{t1} g(x(t), u(t)) dt + ∫_{t1}^T g(x(t), u(t)) dt + F(x(T)) ].
Bellman's Equation
Continuous x, t — Deterministic

= min_{u(t), 0≤t≤t1} [ ∫_0^{t1} g(x(t), u(t)) dt + min_{u(t), t1≤t≤T} ( ∫_{t1}^T g(x(t), u(t)) dt + F(x(T)) ) ]

= min_{u(t), 0≤t≤t1} [ ∫_0^{t1} g(x(t), u(t)) dt + J(x(t1), t1) ].
Bellman's Equation
Continuous x, t — Deterministic

where

J(x(t1), t1) = min_{u(t), t1≤t≤T} ∫_{t1}^T g(x(t), u(t)) dt + F(x(T))

such that

dx(t)/dt = f(x(t), u(t), t)

x(t1) specified

h(x(t), u(t)) ≤ 0
Bellman's Equation
Continuous x, t — Deterministic

Break up [t1, T] into [t1, t1 + δt] ∪ [t1 + δt, T]:

J(x(t1), t1) = min_{u(t1)} [ ∫_{t1}^{t1+δt} g(x(t), u(t)) dt + J(x(t1 + δt), t1 + δt) ]

where δt is small enough so that we can approximate x(t) and u(t) with the constants x(t1) and u(t1) during the interval. Then, approximately,

J(x(t1), t1) = min_{u(t1)} [ g(x(t1), u(t1)) δt + J(x(t1 + δt), t1 + δt) ]
Bellman's Equation
Continuous x, t — Deterministic

Or,

J(x(t1), t1) = min_{u(t1)} [ g(x(t1), u(t1)) δt + J(x(t1), t1)
  + (∂J/∂x)(x(t1), t1) (x(t1 + δt) − x(t1)) + (∂J/∂t)(x(t1), t1) δt ]

Note that

x(t1 + δt) = x(t1) + (dx/dt) δt = x(t1) + f(x(t1), u(t1), t1) δt
Bellman's Equation
Continuous x, t — Deterministic

Therefore

J(x, t1) = J(x, t1) + min_u [ g(x, u) δt + (∂J/∂x)(x, t1) f(x, u, t1) δt + (∂J/∂t)(x, t1) δt ]

where x = x(t1) and u = u(t1) = u(x(t1), t1). Then (dropping the subscript on t1)

−(∂J/∂t)(x, t) = min_u [ g(x, u) + (∂J/∂x)(x, t) f(x, u, t) ]
Bellman's Equation
Continuous x, t — Deterministic

This is the Bellman equation. It is the counterpart of the recursion equation for the network example.
• If we had a guess of J(x, t) (for all x and t) we could confirm it by performing the minimization.
• If we knew J(x, t) for all x and t, we could determine u by performing the minimization. u could then be written

u = U(x, ∂J/∂x, t).

This would be a feedback law.

The Bellman equation is usually impossible to solve analytically or numerically. There are some important special cases that can be solved analytically.
Bellman's Equation
Continuous x, t — Example

Bang-Bang Control

min ∫_0^∞ |x| dt

subject to

dx/dt = u

x(0) specified

−1 ≤ u ≤ 1
Bellman's Equation
Continuous x, t — Example

The Bellman equation is

−(∂J/∂t)(x, t) = min_{−1≤u≤1} [ |x| + (∂J/∂x)(x, t) u ].

J(x, t) = J(x) is a solution because the time horizon is infinite and t does not appear explicitly in the problem data (i.e., g(x) = |x| is not a function of t). Therefore

0 = min_{−1≤u≤1} [ |x| + (dJ/dx)(x) u ].

J(0) = 0 because if x(0) = 0 we can choose u(t) = 0 for all t. Then x(t) = 0 for all t and the integral is 0. There is no possible choice of u(t) that will make the integral less than 0, so this is the minimum.
Bellman's Equation
Continuous x, t — Example

The minimum is achieved when

u = −1          if (dJ/dx)(x) > 0
u = 1           if (dJ/dx)(x) < 0
u undetermined  if (dJ/dx)(x) = 0

Why?
Bellman's Equation
Continuous x, t — Example

Consider the set of x where dJ/dx(x) < 0. For x in that set, u = 1, so

0 = |x| + (dJ/dx)(x)

or

(dJ/dx)(x) = −|x|

Similarly, if x is such that dJ/dx(x) > 0 and u = −1,

(dJ/dx)(x) = |x|
Bellman's Equation
Continuous x, t — Example

To complete the solution, we must determine where dJ/dx > 0, < 0, and = 0.

We already know that J(0) = 0. We must have J(x) > 0 for all x ≠ 0 because |x| > 0, so the integral of |x(t)| must be positive.

Since J(x) > J(0) for all x ≠ 0, we must have

(dJ/dx)(x) < 0 for x < 0

(dJ/dx)(x) > 0 for x > 0
Bellman's Equation
Continuous x, t — Example

Therefore

(dJ/dx)(x) = x

so

J = x²/2

and

u = 1   if x < 0
u = 0   if x = 0
u = −1  if x > 0

(A numerical check follows.)
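A quick sanity check of this solution (a sketch; x0 and the step size are arbitrary): integrating |x(t)| under the bang-bang law u = −sign(x) reproduces J(x0) = x0²/2.

```python
x0, dt = 3.0, 1e-4
x, J = x0, 0.0
for _ in range(int(abs(x0) / dt)):   # x reaches 0 at t = |x0|
    J += abs(x) * dt
    x -= dt if x > 0 else -dt
print(J, x0 * x0 / 2)                # both ~4.5
```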
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

J(x(0), α(0), 0) = min_u E [ ∫_0^T g(x(t), u(t)) dt + F(x(T)) ]

such that

dx(t)/dt = f(x, α, u, t)

prob[ α(t + δt) = i | α(t) = j ] = λij δt for all i, j, i ≠ j

x(0), α(0) specified

h(x(t), α(t), u(t)) ≤ 0
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

Getting the Bellman equation in this case is more complicated because α changes by large amounts when it changes.

Let H(α) be some function of α. We need to calculate

ẼH(α(t + δt)) = E{ H(α(t + δt)) | α(t) } = ∑_j H(j) prob{ α(t + δt) = j | α(t) }
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

= ∑_{j≠α(t)} H(j) λ_{jα(t)} δt + H(α(t)) [ 1 − ∑_{j≠α(t)} λ_{jα(t)} δt ] + o(δt)

= ∑_{j≠α(t)} H(j) λ_{jα(t)} δt + H(α(t)) [ 1 + λ_{α(t)α(t)} δt ] + o(δt)

so

E{ H(α(t + δt)) | α(t) } = H(α(t)) + [ ∑_j H(j) λ_{jα(t)} ] δt + o(δt)

We use this in the derivation of the Bellman equation.
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

J(x(t), α(t), t) = min_{u(s), t≤s<T} E [ ∫_t^T g(x(s), u(s)) ds + F(x(T)) ]
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

= min_{u(s), t≤s≤t+δt} E [ ∫_t^{t+δt} g(x(s), u(s)) ds

+ min_{u(s), t+δt≤s≤T} E ( ∫_{t+δt}^T g(x(s), u(s)) ds + F(x(T)) ) ]
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

= min_{u(s), t≤s≤t+δt} Ẽ [ ∫_t^{t+δt} g(x(s), u(s)) ds + J(x(t + δt), α(t + δt), t + δt) ]

Next, we expand the second term in a Taylor series about x(t). We leave α(t + δt) alone, for now.
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

J(x(t), α(t), t) = min_{u(t)} Ẽ [ g(x(t), u(t)) δt + J(x(t), α(t + δt), t)
  + (∂J/∂x)(x(t), α(t + δt), t) δx(t) + (∂J/∂t)(x(t), α(t + δt), t) δt ] + o(δt)

where

δx(t) = x(t + δt) − x(t) = f(x(t), α(t), u(t), t) δt + o(δt)
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

Using the expansion of ẼH(α(t + δt)),

J(x(t), α(t), t) = min_{u(t)} [ g(x(t), u(t)) δt + J(x(t), α(t), t) + ∑_j J(x(t), j, t) λ_{jα(t)} δt
  + (∂J/∂x)(x(t), α(t), t) δx(t) + (∂J/∂t)(x(t), α(t), t) δt + o(δt) ]

We can clean up the notation by replacing x(t) with x, α(t) with α, and u(t) with u.
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

J(x, α, t) = min_u [ g(x, u) δt + J(x, α, t) + ∑_j J(x, j, t) λ_{jα} δt
  + (∂J/∂x)(x, α, t) δx + (∂J/∂t)(x, α, t) δt + o(δt) ]

We can subtract J(x, α, t) from both sides and use the expression for δx to get ...
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

0 = min_u [ g(x, u) δt + ∑_j J(x, j, t) λ_{jα} δt
  + (∂J/∂x)(x, α, t) f(x, α, u, t) δt + (∂J/∂t)(x, α, t) δt + o(δt) ]

or,
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

−(∂J/∂t)(x, α, t) = ∑_j J(x, j, t) λ_{jα} + min_u [ g(x, u) + (∂J/∂x)(x, α, t) f(x, α, u, t) ]

• Bad news: usually impossible to solve;
• Good news: insight.
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

An approximation: when T is large and f is not a function of t, typical trajectories look like this:

[Figure: a sample path of x(t) settling into statistical steady state.]
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

That is, in the long run, x approaches a steady-state probability distribution. Let J* be the expected value of g(x, u), where u is the optimal control.

Suppose we started the problem with x(0) a random variable whose probability distribution is the steady-state distribution. Then, for large T,

EJ = min_u E [ ∫_0^T g(x(t), u(t)) dt + F(x(T)) ] ≈ J*T
Bellman's Equation
Continuous x, t, Discrete α — Stochastic

For x(0) and α(0) specified,

J(x(0), α(0), 0) ≈ J*T + W(x(0), α(0))

or, more generally, for x(t) = x and α(t) = α specified,

J(x, α, t) ≈ J*(T − t) + W(x, α)
Flexible Manufacturing System Control

Single machine, multiple part types. x, u, d are N-dimensional vectors.

min E ∫_0^T g(x(t)) dt

subject to:

dxi(t)/dt = ui(t) − di,  i = 1, ..., N

prob( α(t + δt) = 0 | α(t) = 1 ) = p δt + o(δt)

prob( α(t + δt) = 1 | α(t) = 0 ) = r δt + o(δt)

∑_i λi ui(t) ≤ α(t);  ui(t) ≥ 0

x(0), α(0) specified
Flexible Manufacturing System Control

Define Ω(α) = { u | ∑_i λi ui ≤ α }. Then, for α = 0, 1,

−(∂J/∂t)(x, α, t) = ∑_j J(x, j, t) λ_{jα} + min_{u∈Ω(α)} [ g(x) + (∂J/∂x)(x, α, t)(u − d) ]
Flexible Manufacturing System Control

Approximating J with J*(T − t) + W(x, α) gives:

J* = ∑_j ( J*(T − t) + W(x, j) ) λ_{jα} + min_{u∈Ω(α)} [ g(x) + (∂W/∂x)(x, α)(u − d) ]

Recall that

∑_j λ_{jα} = 0 ...
Flexible Manufacturing System Control

so

J* = ∑_j W(x, j) λ_{jα} + min_{u∈Ω(α)} [ g(x) + (∂W/∂x)(x, α)(u − d) ]

for α = 0, 1.
Flexible Manufacturing System Control

This is actually two equations, one for α = 0 and one for α = 1:

J* = g(x) + W(x, 1)r − W(x, 0)r − (∂W/∂x)(x, 0) d,   for α = 0;

J* = g(x) + W(x, 0)p − W(x, 1)p + min_{u∈Ω(1)} [ (∂W/∂x)(x, 1)(u − d) ],   for α = 1.
Flexible Manufacturing System Control
Single-part-type case (technically, not flexible!)

Now x and u are scalars, and

Ω(1) = [0, 1/λ] = [0, µ]

J* = g(x) + W(x, 1)r − W(x, 0)r − (dW/dx)(x, 0) d,   for α = 0;

J* = g(x) + W(x, 0)p − W(x, 1)p + min_{0≤u≤µ} [ (dW/dx)(x, 1)(u − d) ],   for α = 1.
Flexible Manufacturing System Control
Single-part-type case

See book, Sections 2.6.2 and 9.3; see Probability slides #91–120.

When α = 0, u = 0.

When α = 1,
• if dW/dx < 0, u = µ,
• if dW/dx = 0, u is unspecified,
• if dW/dx > 0, u = 0.
Flexible Manufacturing System Control
Single-part-type case

W(x, α) has been shown to be convex in x. If the minimum of W(x, 1) occurs at x = Z and W(x, 1) is differentiable for all x, then
• dW/dx < 0 ⟺ x < Z
• dW/dx = 0 ⟺ x = Z
• dW/dx > 0 ⟺ x > Z

Therefore,
• if x < Z, u = µ,
• if x = Z, u is unspecified,
• if x > Z, u = 0.
Flexible Manufacturing System Control
Single-part-type case

Surplus, or inventory/backlog:  dx(t)/dt = u(t) − d

Production policy: choose Z (the hedging point). Then,
• if α = 1,
  ◦ if x < Z, u = µ,
  ◦ if x = Z, u = d,
  ◦ if x > Z, u = 0;
• if α = 0,
  ◦ u = 0.

[Figure: cumulative production and demand versus time — production tracks dt + Z while demand is dt; the surplus x(t) rides at the hedging point Z while the machine is up.]

How do we choose Z?
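A minimal discrete-time simulation of the hedging-point policy (a sketch: Euler steps; in continuous time u = d holds x exactly at Z, while the discretization chatters just below it). The parameters are the base values used in the plots later in the lecture; Z here is an assumed value.

```python
import random

mu, d, r, p, Z = 1.0, 0.7, 0.09, 0.01, 10.0
dt = 0.01
x, alpha = 0.0, 1
for _ in range(2_000_000):
    u = mu if (alpha == 1 and x < Z) else 0.0   # the hedging-point law
    x += (u - d) * dt
    if alpha == 1 and random.random() < p * dt:     # failure
        alpha = 0
    elif alpha == 0 and random.random() < r * dt:   # repair
        alpha = 1
print(x)   # the surplus spends most of its time near Z
```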
Flexible Manufacturing System Control
Single-part-type case — Determination of Z

J* = Eg(x) = g(Z)P(Z, 1) + ∫_{−∞}^{Z} g(x) [f(x, 0) + f(x, 1)] dx

in which P and f form the steady-state probability distribution of x. We choose Z to minimize J*. P and f are given by

f(x, 0) = A e^{bx}

f(x, 1) = A (d/(µ − d)) e^{bx}

P(Z, 1) = A (d/p) e^{bZ}
Flexible Manufacturing System Control
Single-part-type case — Determination of Z

where

b = r/d − p/(µ − d)

and A is chosen so that

∫_{−∞}^{Z} [f(x, 0) + f(x, 1)] dx + P(Z, 1) = 1

After some manipulation,

A = [ bp(µ − d) / (db(µ − d) + µp) ] e^{−bZ}

and

P(Z, 1) = db(µ − d) / (db(µ − d) + µp)
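A numerical sanity check of the normalization (a Python sketch; the value of Z is arbitrary — the identity holds for any Z):

```python
import math

d, mu, r, p, Z = .7, 1., .09, .01, 13.6
b = r / d - p / (mu - d)
A = (b * p * (mu - d) / (d * b * (mu - d) + mu * p)) * math.exp(-b * Z)
P_Z1 = A * (d / p) * math.exp(b * Z)                    # P(Z, 1)
mass = A * (1 + d / (mu - d)) * math.exp(b * Z) / b     # integral of f up to Z
print(P_Z1 + mass)                                      # 1.0: normalized
```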
Flexible Manufacturing System Control
Single-part-type case — Determination of Z

Since g(x) = g+ x⁺ + g− x⁻,

• if Z ≤ 0, then

J* = −g− Z P(Z, 1) − g− ∫_{−∞}^{Z} x [f(x, 0) + f(x, 1)] dx;

• if Z > 0,

J* = g+ Z P(Z, 1) − g− ∫_{−∞}^{0} x [f(x, 0) + f(x, 1)] dx + g+ ∫_{0}^{Z} x [f(x, 0) + f(x, 1)] dx.
Flexible Manufacturing System Control
Single-part-type case — Determination of Z

To minimize J*:

• if g+ − Kb(g+ + g−) < 0,  Z = (1/b) ln[ Kb (1 + g−/g+) ];
• if g+ − Kb(g+ + g−) ≥ 0,  Z = 0,

where

K = µp / (b(µbd − d²b + µp)) = µp / (b(r + p)(µ − d)) = (1/b) · µp / (db(µ − d) + µp)

Z is a function of d, µ, r, p, g+, and g−.
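In code (a sketch; the symbols follow the slides), the hedging point for the base parameter values used in the plots below comes out near 13.6, consistent with the Z vs. g− plot:

```python
import math

def hedging_point(d, mu, r, p, gplus, gminus):
    b = r / d - p / (mu - d)
    K = mu * p / (b * (r + p) * (mu - d))
    if gplus - K * b * (gplus + gminus) < 0:
        return math.log(K * b * (1 + gminus / gplus)) / b
    return 0.0

print(hedging_point(d=.7, mu=1., r=.09, p=.01, gplus=1., gminus=10.))
```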
Flexible Manufacturing System Control
Single-part-type case — Determination of Z

That is, we choose Z such that

e^{bZ} = max[ 1, Kb (g+ + g−)/g+ ]

or

e^{−bZ} = min[ 1, (1/(Kb)) · g+/(g+ + g−) ]
Flexible Manufacturing System Control
Single-part-type case — Determination of Z

prob(x ≤ 0) = ∫_{−∞}^{0} (f(x, 0) + f(x, 1)) dx

= A (1 + d/(µ − d)) ∫_{−∞}^{0} e^{bx} dx

= A (1 + d/(µ − d)) (1/b) = A µ/(b(µ − d))

= [ bp(µ − d)/(db(µ − d) + µp) ] e^{−bZ} · µ/(b(µ − d))

= [ µp/(db(µ − d) + µp) ] e^{−bZ}
Flexible Manufacturing System Control
Single-part-type case — Determination of Z

Or,

prob(x ≤ 0) = [ µp/(db(µ − d) + µp) ] · min[ 1, (1/(Kb)) · g+/(g+ + g−) ]

It can be shown that

Kb = µp/(µp + bd(µ − d))

Therefore

prob(x ≤ 0) = Kb · min[ 1, (1/(Kb)) · g+/(g+ + g−) ]
            = min[ µp/(µp + bd(µ − d)), g+/(g+ + g−) ]
Flexible Manufacturing System Control
Single-part-type case — Determination of Z

That is,
• if µp/(µp + bd(µ − d)) < g+/(g+ + g−), then Z = 0 and

prob(x ≤ 0) = µp/(µp + bd(µ − d));

• if µp/(µp + bd(µ − d)) > g+/(g+ + g−), then Z > 0 and

prob(x ≤ 0) = g+/(g+ + g−).

This looks a lot like the solution of the "newsboy problem."
Flexible Manufacturing System Control
Single-part-type case — Z vs. d

Base values: g+ = 1, g− = 10, d = .7, µ = 1., r = .09, p = .01.

[Figure: Z versus d (0 to 0.9); Z increases with d, sharply as d approaches capacity (Z near 100 at the right edge).]
Flexible Manufacturing System Control
Single-part-type case — Z vs. g+

Base values: g+ = 1, g− = 10, d = .7, µ = 1., r = .09, p = .01.

[Figure: Z versus g+ (0.5 to 3.5); Z decreases as the holding cost g+ grows.]
Flexible Manufacturing System Control
Single-part-type case — Z vs. g−

Base values: g+ = 1, g− = 10, d = .7, µ = 1., r = .09, p = .01.

[Figure: Z versus g− (0 to 11); Z increases with the backlog cost g− (about 13–14 at g− = 10).]
Flexible Manufacturing System Control
Single-part-type case — Z vs. p

Base values: g+ = 1, g− = 10, d = .7, µ = 1., r = .09, p = .01.

[Figure: Z versus p (0.005 to 0.04); Z increases sharply with the failure rate p (toward ~1400 at p = .04).]
Flexible Manufacturing System Control
Two-part-type case

[Figure: the two-part-type machine (failure/repair rates p, r) with controls u1(t), u2(t), surpluses x1, x2, and demands d1, d2; below it, the capacity set Ω(1) when the machine is up — the triangle 0 ≤ ui, λ1u1 + λ2u2 ≤ 1, with intercepts 1/λ1 and 1/λ2.]
Flexible Manufacturing System Control
Two-part-type case

We must find u(x, α) to satisfy

min_{u∈Ω(α)} [ (∂W/∂x)(x, α) u ]

Partial solution of this LP (a corner-picking sketch follows):
• If ∂W/∂x1 > 0 and ∂W/∂x2 > 0, u1 = u2 = 0.
• If ∂W/∂x1 < ∂W/∂x2 < 0, u1 = µ1, u2 = 0.
• If ∂W/∂x2 < ∂W/∂x1 < 0, u2 = µ2, u1 = 0.

Problem: no complete analytical solution is available.
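Since the minimand is linear in u, the minimum is attained at a vertex of Ω(1). A minimal corner-picking sketch (the rates are assumed):

```python
lam = [0.4, 0.5]                                  # lambda_1, lambda_2
corners = [(0.0, 0.0), (1 / lam[0], 0.0), (0.0, 1 / lam[1])]

def best_corner(gradW):
    # Minimize (dW/dx) . u over the vertices of the capacity set.
    return min(corners, key=lambda u: gradW[0] * u[0] + gradW[1] * u[1])

print(best_corner((-2.0, -1.0)))   # dW/dx1 < dW/dx2 < 0  ->  (mu1, 0)
```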
Flexible Manufacturing System Control
Two-part-type case — Exact solution if Z = (Z1, Z2) = 0

[Figure: the (x1, x2) plane partitioned into regions; in each region the policy selects a corner of the capacity set Ω(1) — u1 = µ1, u2 = 0; u1 = u2 = 0; u1 = 0, u2 = µ2 — so that dx/dt drives the surplus toward the origin.]
Flexible Manufacturing System Control
Two-part-type case — Approximate solution if Z > 0

[Figure: the same partition shifted so that trajectories are driven toward the hedging point Z = (Z1, Z2) instead of the origin.]
Flexible Manufacturing System Control
Two-part-type case

Two parts, multiple machines without buffers:

[Figure: a six-machine system without buffers and the corresponding (x1, x2) diagram, in which boundaries e12, e23, e34, e45, e56, e61 partition the plane around the hedging point Z.]
Flexible Manufacturing System Control
Two-part-type case

• Proposed approximate solution for the multiple-part, single-machine system:
  ◦ Rank order the part types, and bring them to their hedging points in that order.
Flexible Manufacturing System Control
Single-part-type case — Surplus and tokens

• Operating Machine M according to the hedging point policy is equivalent to operating this assembly system according to a finite buffer policy.

[Figure: machine M feeds finished-goods buffer FG; a synchronization machine S joins finished parts with demand tokens from buffer B.]
Flexible Manufacturing System Control
Single-part-type case — Surplus and tokens

• D is a demand generator.
  ◦ Whenever a demand arrives, D sends a token to B.
• S is a synchronization machine.
  ◦ S is perfectly reliable and infinitely fast.
• FG is a finite finished goods buffer.
• B is an infinite backlog buffer.
Flexible Manufacturing System Control
Single-part-type case — Material/token policies

[Figure: an operation consumes a part, consumables, and a token, attended by a machine and an operator; it produces a part and waste.]

• An operation cannot take place unless there is a token available.
• Tokens authorize production.
• These policies can often be implemented either with finite buffer space, or with a finite number of tokens. Mixtures are also possible.
• Buffer space could be shelf space, or floor space indicated with paint or tape.
Multi-stage systems
Proposed policy

To control

M1 → B1 → M2 → B2 → M3,

add an information flow system:

[Figure: the line M1 → B1 → M2 → B2 → M3 augmented with surplus buffers SB1–SB3, synchronization machines S1–S3, and backlog buffers BB1–BB3.]
Multi-stage systems
Proposed policy

• Bi are material buffers and are finite.
• SBi are surplus buffers and are finite.
• BBi are backlog buffers and are infinite.
• The sizes of Bi and SBi are control parameters.
• Problem: predicting the performance of this system.
Multi-stage systems
Three Views of Scheduling

Three kinds of scheduling policies, which are sometimes exactly the same:
• Surplus-based: make decisions based on how much production exceeds demand.
• Time-based: make decisions based on how early or late a product is.
• Token-based: make decisions based on presence or absence of tokens.
Multi-stage systems
Objective of Scheduling — Surplus and time

[Figure: cumulative production P(t) and cumulative demand D(t) versus t; the vertical gap is the surplus/backlog x(t), and the horizontal gap is earliness/lateness.]

• The objective is to keep cumulative production close to cumulative demand.
• Surplus-based policies look at vertical differences between the graphs.
• Time-based policies look at the horizontal differences.
Multi-stage systems
Other policies — CONWIP, kanban, and hybrid

• CONWIP: finite population, infinite buffers
• kanban: infinite population, finite buffers
• hybrid: finite population, finite buffers
Multi-stage systems
Other policies — CONWIP, kanban, and hybrid

[Figure: a CONWIP loop from Supply to Demand; tokens flow back from the end of the line to its start, keeping the population constant.]

Demand is less than capacity. How does the number of tokens affect performance (production rate, inventory)?
Multi-stage systems
Other policies — CONWIP, kanban, and hybrid

[Figure: two plots versus CONWIP population (0–120) — production rate rises from about .835 and saturates near .875, while the average buffer levels n1, n2, n3 keep growing (up to about 30).]

Multi-stage systems
Other policies — Basestock

[Figure: a basestock-controlled line driven by Demand.]
Multi-stage systems
Other policies — FIFO

• First-In, First-Out.
• Simple conceptually, but you have to keep track of arrival times.
• Leaves out much important information:
  ◦ due date, value of part, current surplus/backlog state, etc.
Multi-stage systems
Other policies — EDD

• Earliest due date.
• Easy to implement.
• Does not consider work remaining on the item, value of the item, etc.
Multi-stage systems
Other policies — SRPT

• Shortest Remaining Processing Time.
• Whenever there is a choice of parts, load the one with the least remaining work before it is finished.
• Variations: include waiting time with the work time. Use expected time if it is random.
Multi-stage systems
Other policies — Critical ratio

• Widely used, but many variations. One version (sketched in code below):
  ◦ Define CR = (processing time remaining until completion) / (due date − current time).
  ◦ Choose the job with the highest ratio (provided it is positive).
  ◦ If a job is late, the ratio will be negative, or the denominator will be zero, and that job should be given highest priority.
  ◦ If there is more than one late job, schedule the late jobs in SRPT order.
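A minimal job picker implementing this version (the field names are illustrative):

```python
def pick_job(jobs, now):
    late = [j for j in jobs if j['due'] <= now]
    if late:   # late jobs first, in SRPT order
        return min(late, key=lambda j: j['work'])
    return max(jobs, key=lambda j: j['work'] / (j['due'] - now))

jobs = [{'id': 1, 'work': 4.0, 'due': 10.0},
        {'id': 2, 'work': 2.0, 'due': 3.0}]
print(pick_job(jobs, now=0.0)['id'])   # 2: CR = 2/3 beats 4/10
```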
Multi-stage systems
Other policies — Least Slack

• This policy considers a part's due date.
• Define slack = due date − remaining work time.
• When there is a choice, select the part with the least slack.
• Variations involve different ways of estimating remaining time.
Multi-stage systems
Other policies — Drum-Buffer-Rope

• Due to Eli Goldratt.
• Based on the idea that every system has a bottleneck.
• Drum: the common production rate that the system operates at, which is the rate of flow of the bottleneck.
• Buffer: DBR establishes a CONWIP policy between the entrance of the system and the bottleneck. The buffer is the CONWIP population.
• Rope: the limit on the difference in production between different stages in the system.
• But: What if the bottleneck is not well-defined?
Conclusions

• Many policies and approaches.
• No simple statement telling which is better.
• Policies are not all well-defined in the literature or in practice.
• My opinion:
  ◦ This is because policies are not derived from first principles.
  ◦ Instead, they are tested and compared.
  ◦ Currently, we have little intuition to guide policy development and choice.
MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT 2.852

Manufacturing Systems Analysis

Lecture 1: Overview

Stanley B. Gershwin

http://web.mit.edu/manuf-sys

Massachusetts Institute of Technology

Spring, 2010

Books

◮ Required
◮ Manufacturing Systems Engineering (MSE) by Stanley B. Gershwin
◮ ... obtainable from author.
◮ Optional
◮ Factory Physics by Hopp and Spearman
◮ The Goal by Goldratt
◮ Stochastic Models of Manufacturing Systems by Buzacott and
Shanthikumar
◮ Production Systems Engineering by Li and Meerkov

Course Overview
Goals

◮ To explain important measures of system performance.


◮ To show the importance of random, potentially disruptive events in
factories.
◮ To give some intuition about behavior of these systems.
◮ To describe some current tools and methods.

Problems

◮ Manufacturing systems engineering is not as well-developed as most


other fields of engineering.
◮ Practitioners are encouraged to rely on gurus, slogans, and black
boxes.
◮ There is a gap between theoreticians and practitioners.

Problems

◮ The research literature does not always focus on real-world problems


◮ ... but practitioners are often unaware of what does exist.
◮ Terminology, notation, basic assumptions are not standardized.
◮ There is a separation of product, process, and system design.

Problems

◮ Confusion about objectives:


◮ maximize capacity?
◮ minimize capacity variability?
◮ maximize capacity utilization?
◮ minimize lead time?
◮ minimize lead time variability?
◮ maximize profit?
◮ Systems issues are often studied last, if at all.

Problems

◮ Manufacturing gets no respect.


◮ Systems not designed with engineering methods.
◮ Product designers and sales staff are not informed of manufacturing
costs and constraints.
◮ Black box thinking.
◮ Factories not treated as systems to be analyzed and engineered.
◮ Simplistic ideas often used for management and design.

Problems

Reliable systems intuition is lacking. As a consequence, there is ...


◮ Management by software
◮ Managers buy software to make production decisions, rather than to
aid in making decisions.
◮ Management by slogan
◮ Gurus provide simple solutions which sometimes work. Sometimes.

Observation

◮ When a system is not well understood, rules proliferate.


◮ This is because rules are developed to regulate behavior.
◮ But the rules lead to unexpected, undesirable behavior. (Why?)
◮ New rules are developed to regulate the new behavior.
◮ Et cetera.

Observation
Example

◮ A factory starts with one rule: do the latest jobs first .


◮ Over time, more and more jobs are later and later.
◮ A new rule is added: treat the highest priority customers' orders as though
their due dates are two weeks earlier than they are.
◮ The low priority customers find other suppliers, but the factory is still late.

Observation
Example

Why?
◮ There are significant setup times from part family to part family. If setup
times are not considered, changeovers will occur too often, and waste
capacity.

◮ Any rule that does not consider setup times in this factory will perform
poorly.

Definitions

◮ Manufacturing: the transformation of material into something useful and


portable.
◮ Manufacturing System: A manufacturing system is a set of machines,
transportation elements, computers, storage buffers, people, and other items
that are used together for manufacturing. These items are resources.

Definitions

◮ Manufacturing System:
◮ Alternate terms:
◮ Factory
◮ Production system
◮ Fabrication facility
◮ Subsets of manufacturing systems, which are themselves systems, are
sometimes called cells, work centers, or work stations .

Basic Issues

◮ Increasingly, there are ...


◮ frequent new product introductions, and
◮ short product lifetimes, and
◮ short process lifetimes.
◮ Consequently, ...
◮ factories are built and rebuilt frequently, and
◮ there is not much time to tinker with a factory. It must be operational
quickly.

Basic Issues
Consequent Needs

◮ Tools to predict performance of proposed factory design.


◮ Tools for optimal real-time management (control) of factories.
◮ Manufacturing Systems Engineering professionals who understand
factories as complex systems.

Basic Issues
Quantity, Quality and Variability

◮ Quantity – how much and when.


◮ Quality – how well.
In this course, we emphasize quantity.

General Statement: Variability is the enemy of manufacturing.

General Statement: Know your enemy!

Basic Issues
More Definitions

◮ Make to Stock (Off the Shelf):


◮ items available when a customer arrives
◮ appropriate for large volumes, limited product variety, cheap raw
materials

◮ Make to Order:
◮ production started only after order arrives
◮ appropriate for custom products, low volumes, expensive raw materials

Basic Issues
Conflicting Objectives

◮ Make to Stock:
◮ large finished goods inventories needed to prevent stockouts
◮ small finished goods inventories needed to keep costs low

◮ Make to Order:
◮ excess production capacity (low utilization) needed to allow early,
reliable delivery promises
◮ minimal production capacity (high utilization) needed to keep costs low

Basic Issues
Concepts

◮ Complexity: collections of things have properties that are


non-obvious functions of the properties of the things collected.
◮ Non-synchronism (especially randomness) and its consequences:
Factories do not run like clockwork.

Basic Issues
What is an Operation?

[Figure: an operation combines a part, consumables, an operator, and a machine, and produces a part plus waste.]

Nothing happens until everything is present.

Basic Issues
Waiting

Whatever does not arrive last must wait.

◮ Inventory: parts waiting.


◮ Underutilization: machines waiting.
◮ Idle work force: operators waiting.

Basic Issues
Causes of Poor Performance

[Figure: an operation combines a part, consumables, an operator, and a machine, and produces a part plus waste.]

◮ Reductions in the availability, or ...


◮ Variability in the availability ...

... of any one of these items causes waiting in the rest of them and reduces
the performance of the system.

2.852 Manufacturing Systems Analysis 23/44 c


Copyright �2010 Stanley B. Gershwin.
Kinds of Systems
Flow shop

... or Flow line, Transfer line, or Production line.

[Figure: a flow line, machines in series separated by buffers.]

Traditionally used for high volume, low variety production.

What are the buffers for?

2.852 Manufacturing Systems Analysis 24/44 c


Copyright �2010 Stanley B. Gershwin.
Kinds of Systems
Assembly system

Assembly systems are trees, and may involve thousands of parts.

2.852 Manufacturing Systems Analysis 25/44 c


Copyright �2010 Stanley B. Gershwin.
Loops
Closed loop (1a)

Limited number of pallets or fixtures:


Raw Part Input

Empty Pallet Buffer

Finished Part Output

◮ Pallets or fixtures travel in a closed loop. Routes are determined. The


number of pallets in the loop is constant.
◮ Pallets or fixtures take up space and may be expensive.

2.852 Manufacturing Systems Analysis 26/44 c


Copyright �2010 Stanley B. Gershwin.
Loops
Closed loop (1b)

Limited number of tokens:


Raw Part Input

Empty Token Buffer

Finished Part Output

◮ Tokens travel in a closed loop. Routes are determined. The number of
tokens in the loop is constant.
◮ Tokens take up no space and cost nothing.

What are the tokens for?

2.852 Manufacturing Systems Analysis 27/44 c


Copyright �2010 Stanley B. Gershwin.
Loops
Reentrant (2)

[Figure: system with reentrant flow and two part types. Machines M1–M4 and
buffers B11, B21, B22, B31, B32, B41, B51, B61, B71; Type 1 and Type 2 parts
each visit some machines more than once.]

Routes are determined. The number of parts in the loop varies.


Semiconductor fabrication is highly reentrant.

2.852 Manufacturing Systems Analysis 28/44 c


Copyright �2010 Stanley B. Gershwin.
Loops
Rework loop (3)

[Figure: a line with an inspection station; rejected parts return through a
rework loop.]

Routes are random. The number of parts in the loop varies.

2.852 Manufacturing Systems Analysis 29/44 c


Copyright �2010 Stanley B. Gershwin.
Kinds of Systems
Job shop

◮ Machines not organized according to process flow.


◮ Often, machines grouped by department:
◮ mill department
◮ lathe department
◮ etc.
◮ Great variety of products.
◮ Different products follow different paths.
◮ Complex management.

2.852 Manufacturing Systems Analysis 30/44 c


Copyright �2010 Stanley B. Gershwin.
Two Issues

◮ Efficient design of systems;

◮ Efficient operation of systems after they are built.

2.852 Manufacturing Systems Analysis 31/44 c


Copyright �2010 Stanley B. Gershwin.
Time

◮ Most factory performance measures are about time.

◮ production rate: how much is made in a given time.


◮ lead time: how much time before delivery.
◮ cycle time: how much time a part spends in the factory.
◮ delivery reliability: how often a factory delivers on time.
◮ capital pay-back period: the time before the company gets its
investment back.

2.852 Manufacturing Systems Analysis 32/44 c


Copyright �2010 Stanley B. Gershwin.
Time

◮ Time appears in two forms:

◮ delay
◮ capacity utilization

◮ Every action has impact on both.

2.852 Manufacturing Systems Analysis 33/44 c


Copyright �2010 Stanley B. Gershwin.
Time
Delay

◮ An operation that takes 10 minutes adds 10 minutes to the delay experienced by

◮ the workpiece undergoing that operation;
◮ every other workpiece that waits while the first is being processed.

2.852 Manufacturing Systems Analysis 34/44 c


Copyright �2010 Stanley B. Gershwin.
Time
Capacity Utilization

◮ An operation that takes 10 minutes takes up 10 minutes of the


available time of
◮ a machine,
◮ an operator,
◮ or other resources.
◮ Since there are a limited number of minutes of each resource
available, there are a limited number of operations that can be done.

2.852 Manufacturing Systems Analysis 35/44 c


Copyright �2010 Stanley B. Gershwin.
Time
More Definitions

◮ Operation Time: the time that a machine takes to do an operation.


◮ Production Rate: the average number of parts produced in a time
unit. (Also called throughput.)

If nothing interesting ever happens (no failures, etc.),

Production rate = 1 / (operation time)

... but something interesting always happens.

2.852 Manufacturing Systems Analysis 36/44 c


Copyright �2010 Stanley B. Gershwin.
Time
More Definitions

◮ Capacity: the maximum possible production rate of a manufacturing


system, for systems that are making only one part type.
◮ Short term capacity: determined by the resources available right now.
◮ Long term capacity: determined by the average resource availability.
◮ Capacity is harder to define for systems making more than one part
type. Since it is hard to define, it is very hard to calculate.

2.852 Manufacturing Systems Analysis 37/44 c


Copyright �2010 Stanley B. Gershwin.
Randomness, Variability, Uncertainty
More Definitions

◮ Uncertainty: Incomplete knowledge.

◮ Variability: Change over time.

◮ Randomness: A specific kind of incomplete knowledge that can be


quantified and for which there is a mathematical theory.

2.852 Manufacturing Systems Analysis 38/44 c


Copyright �2010 Stanley B. Gershwin.
Randomness, Variability, Uncertainty

◮ Factories are full of random events:


◮ machine failures
◮ changes in orders
◮ quality failures
◮ human variability

◮ The economic environment is uncertain


◮ demand variations
◮ supplier unreliability
◮ changes in costs and prices

2.852 Manufacturing Systems Analysis 39/44 c


Copyright �2010 Stanley B. Gershwin.
Randomness, Variability, Uncertainty

Therefore, factories should be


◮ designed as reliably as possible, to minimize the creation of variability;
◮ designed with shock absorbers, to minimize the propagation of
variability;
◮ operated in a way that minimizes the creation of variability;
◮ operated in a way that minimizes the propagation of variability.

2.852 Manufacturing Systems Analysis 40/44 c


Copyright �2010 Stanley B. Gershwin.
Randomness, Variability, Uncertainty

◮ Therefore, all engineers should know probability...


◮ especially manufacturing systems engineers .

◮ Probability is an important prerequisite for this course.

2.852 Manufacturing Systems Analysis 41/44 c


Copyright �2010 Stanley B. Gershwin.
The Course
Mechanics

◮ Reading: Mainly Chapters 2–9 of MSE. (Chapter 9 up to 9.3.)


◮ Grading: project and class participation.
◮ Homework optional.

2.852 Manufacturing Systems Analysis 42/44 c


Copyright �2010 Stanley B. Gershwin.
The Course
Topics

◮ Probability
◮ Basics, Markov processes, queues, other examples.
◮ Transfer lines
◮ Models, exact analysis of small systems, approximations of large
systems.
◮ Extensions of transfer line models
◮ Assembly/disassembly, loops, system optimization
◮ Real-time scheduling
◮ Quality/Quantity interactions
◮ New material

2.852 Manufacturing Systems Analysis 43/44 c


Copyright �2010 Stanley B. Gershwin.
The Course

◮ Emphasis on mathematical modeling and analysis.


◮ Emphasis on intuition.
◮ Comparison with 2.854: Narrower and deeper.

2.852 Manufacturing Systems Analysis 44/44 c


Copyright �2010 Stanley B. Gershwin.
MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT 2.852

Manufacturing Systems Analysis

Lecture 14-16
Line Optimization
Stanley B. Gershwin

Spring, 2007

c 2007 Stanley B. Gershwin.


Copyright �
Line Design

• Given a process, find the best set of machines and buffers on


which it can be implemented.
• Best: least capital cost; least operating cost; least average
inventory; greatest profit, etc.
• Constraints: minimal production rate, maximal stockout
probability, maximal floor space, maximal inventory, etc..
• To be practical, computation time must be limited.
• Exact optimality is not necessary, especially since the
parameters are not known perfectly.

c 2007 Stanley B. Gershwin.


Copyright � 2
Optimization

• Optimization may be performed in two ways:


� Analytical solution of optimality conditions; or
� Searching
• For most problems, searching is the only realistic
possibility.
• For some problems, optimality cannot be achieved in
a reasonable amount of time.

c 2007 Stanley B. Gershwin.


Copyright � 3
Search
Optimization

Modify design as a function of


past designs and performance.
[Flowchart of search optimization:]

1. Propose design D0; set n = 0.
2. Evaluate design Dn: Pn = P(Dn).
3. If performance is satisfactory, quit.
4. Otherwise propose Dn+1 = D(D0, D1, ..., Dn, P0, P1, ..., Pn),
increment n, and go to Step 2.

Typically, many designs are tested.

c 2007 Stanley B. Gershwin.


Copyright � 4
Issues
Optimization
• For this to be practical, total computation time must be limited. Therefore,
we must control both computation time per iteration and the number of
iterations .

• Computation time per iteration includes evaluation time and the time to
determine the next design to be evaluated.

• The technical literature is generally focused on limiting the number of


iterations by proposing designs efficiently.

• The number of iterations is also limited by choosing a reasonable


termination criterion (ie, required accuracy).

• Reducing computation time per iteration is accomplished by


� using analytical models rather than simulations
� using coarser approximations in early iterations and more accurate

evaluations later.

c 2007 Stanley B. Gershwin.


Copyright � 5
Problem

Statement

X is a set of possible choices. J is a scalar function defined on
X. h and g are vector functions defined on X.

Problem: Find x ∈ X that satisfies

J(x) is maximized (or minimized) — the objective

subject to

h(x) = 0 — equality constraints

g(x) ≥ 0 — inequality constraints

c 2007 Stanley B. Gershwin.


Copyright � 6
Taxonomy

• static/dynamic

• deterministic/stochastic

• X set: continuous/discrete/mixed
(Extensions: multi-criteria optimization, in which the set of all

good compromises between different objectives is sought;


games, in which there are multiple optimizers, each preferring
different xs but none having complete control; etc.)

c 2007 Stanley B. Gershwin.


Copyright � 7
Continuous

Variables and

Objective

X = R^n. J is a scalar function defined on R^n. h (∈ R^m) and
g (∈ R^k) are vector functions defined on R^n.

Problem: Find x ∈ R^n that satisfies

J(x) is maximized (or minimized)

subject to

h(x) = 0

g(x) ≥ 0

c 2007 Stanley B. Gershwin.


Copyright � 8
Continuous Unconstrained
Variables and
Objective One-dimensional search

Find t such that f (t) = 0.

• This is equivalent to
Find t to maximize (or minimize) F (t)
when F (t) is differentiable, and f (t) = dF (t)/dt is
continuous.
• If f (t) is differentiable, maximization or minimization
depends on the sign of d²F(t)/dt².

c 2007 Stanley B. Gershwin.


Copyright � 9
Continuous
Unconstrained
Variables and

Objective
One-dimensional search

Assume f(t) is decreasing.

• Binary search: Guess t0 and t1 such that f(t0) > 0 and f(t1) < 0.
Let t2 = (t0 + t1)/2.
• If f(t2) < 0, then repeat with t0′ = t0 and t1′ = t2.
• If f(t2) > 0, then repeat with t0′ = t2 and t1′ = t1.

[Figure: a decreasing f(t) with t0, t2, t1 and the values f(t0) > 0, f(t2),
f(t1) < 0 marked; the interval (t0′, t1′) halves at each step.]

c 2007 Stanley B. Gershwin.


Copyright � 10
Continuous Unconstrained
Variables and
Objective One-dimensional search

Example: f(t) = 4 − t² (root at t = 2):

t0                 t2                 t1
0                  1.5                3
1.5                2.25               3
1.5                1.875              2.25
1.875              2.0625             2.25
1.875              1.96875            2.0625
1.96875            2.015625           2.0625
1.96875            1.9921875          2.015625
1.9921875          2.00390625         2.015625
1.9921875          1.998046875        2.00390625
1.998046875        2.0009765625       2.00390625
1.998046875        1.99951171875      2.0009765625
1.99951171875      2.000244140625     2.0009765625
1.99951171875      1.9998779296875    2.000244140625
1.9998779296875    2.00006103515625   2.000244140625
1.9998779296875    1.99996948242188   2.00006103515625
1.99996948242188   2.00001525878906   2.00006103515625
1.99996948242188   1.99999237060547   2.00001525878906
1.99999237060547   2.00000381469727   2.00001525878906
1.99999237060547   1.99999809265137   2.00000381469727
1.99999809265137   2.00000095367432   2.00000381469727

c 2007 Stanley B. Gershwin.


Copyright � 11
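The binary search above is easy to state in code. A minimal sketch (ours, not the course's software), assuming f is decreasing with f(t0) > 0 > f(t1); it reproduces the table above:

```python
def f(t):
    return 4 - t**2

def bisect(f, t0, t1, tol=1e-6):
    while t1 - t0 > tol:
        t2 = (t0 + t1) / 2
        if f(t2) > 0:
            t0 = t2      # root lies to the right of t2
        else:
            t1 = t2      # root lies to the left of t2
    return (t0 + t1) / 2

print(bisect(f, 0.0, 3.0))   # approximately 2.0
```

Each iteration halves the bracketing interval, so the error shrinks by a fixed factor per evaluation.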
Continuous Unconstrained
Variables and
Objective One-dimensional search

• Newton search, exact tangent:
• Guess t0. Calculate df(t0)/dt.
• Choose t1 so that f(t0) + (t1 − t0) df(t0)/dt = 0.
• Repeat with t0′ = t1 until |f(t0′)| is small enough.

[Figure: the tangent to f at t0 crosses the axis at t1, much closer to the
root.]

c 2007 Stanley B. Gershwin.


Copyright � 12
Continuous Unconstrained
Variables and
Objective One-dimensional search

Example: f(t) = 4 − t²; successive values of t0:

3
2.16666666666667
2.00641025641026
2.00001024002621
2.00000000002621
2

c 2007 Stanley B. Gershwin.


Copyright � 13
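A sketch of the exact-tangent iteration (an illustration, not the course's code) for f(t) = 4 − t², whose derivative is f′(t) = −2t; it reproduces the iterates above:

```python
def newton(f, fprime, t0, tol=1e-10):
    while abs(f(t0)) > tol:
        t0 = t0 - f(t0) / fprime(t0)   # zero of the tangent at t0
    return t0

print(newton(lambda t: 4 - t**2, lambda t: -2 * t, 3.0))  # -> 2.0
```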
Continuous Unconstrained
Variables and
Objective One-dimensional search

• Newton search, approximate tangent:
• Guess t0 and t1. Calculate the approximate slope
s = (f(t1) − f(t0)) / (t1 − t0).
• Choose t2 so that f(t0) + (t2 − t0)s = 0.
• Repeat with t0′ = t1 and t1′ = t2 until |f(t0′)| is small enough.

[Figure: the secant through (t0, f(t0)) and (t1, f(t1)) crosses the axis at
t2.]

c 2007 Stanley B. Gershwin.


Copyright � 14
Continuous Unconstrained
Variables and
Objective One-dimensional search

Example: f(t) = 4 − t²; successive values of t0:

0
3
1.33333333333333
1.84615384615385
2.03225806451613
1.99872040946897
1.99998976002621
2.0000000032768
1.99999999999999
2

c 2007 Stanley B. Gershwin.


Copyright � 15
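A sketch of the approximate-tangent (secant) iteration for f(t) = 4 − t² (ours, for illustration); each step fits a line through (t0, f(t0)) and (t1, f(t1)) and steps to its zero, reproducing the sequence above:

```python
def secant(f, t0, t1, tol=1e-10):
    while abs(f(t0)) > tol:
        s = (f(t1) - f(t0)) / (t1 - t0)   # approximate slope
        t2 = t0 - f(t0) / s               # zero of the secant line
        t0, t1 = t1, t2                   # repeat with the newest points
    return t0

print(secant(lambda t: 4 - t**2, 0.0, 3.0))  # 0, 3, 1.333..., 1.846..., -> 2.0
```

No derivative is needed, at the cost of slightly slower convergence than the exact tangent.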
Continuous Unconstrained
Variables and
Objective Multi-dimensional search

The optimum is often found by steepest ascent or hill-climbing methods.

[Figure: a surface J(x1, x2) with steepest-ascent directions following the
contours uphill toward the optimum.]

c 2007 Stanley B. Gershwin.


Copyright � 16
Continuous Unconstrained

Variables and

Objective Gradient search

To maximize J(x), where x is a vector (and J is a scalar function
that has nice properties):

0. Set n = 0. Guess x0.
1. Evaluate ∂J/∂x (xn).
2. Let t be a scalar. Define Jn(t) = J(xn + t ∂J/∂x (xn)).
Find (by one-dimensional search) tn*, the value of t that
maximizes Jn(t).
3. Set xn+1 = xn + tn* ∂J/∂x (xn).
4. Set n ← n + 1. Go to Step 1.

Gradient search is also called steepest ascent or steepest descent.
c 2007 Stanley B. Gershwin.
Copyright � 17
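A compact steepest-ascent sketch under stated assumptions: J is smooth, the gradient is approximated by finite differences, and a crude step-halving rule stands in for the exact one-dimensional search of Step 2. The quadratic objective is only an illustration:

```python
import numpy as np

def gradient(J, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (J(x + e) - J(x - e)) / (2 * h)   # central difference
    return g

def steepest_ascent(J, x, iters=200):
    for _ in range(iters):
        g = gradient(J, x)
        t = 1.0
        while J(x + t * g) <= J(x) and t > 1e-12:
            t /= 2                     # back off until the step improves J
        x = x + t * g
    return x

# Example: maximize J(x) = -(x1-1)^2 - (x2+2)^2; optimum at (1, -2).
print(steepest_ascent(lambda x: -(x[0] - 1)**2 - (x[1] + 2)**2,
                      np.array([0.0, 0.0])))
```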
Continuous Constrained
Variables and
Objective

Equality constrained: the solution is on the constraint surface
h(x1, x2) = 0.

Problems are much easier when the constraint is linear, ie, when the
surface is a plane.
• In that case, replace ∂J/∂x by its projection onto the constraint plane.
• But first: find an initial feasible guess.

[Figure: a surface J over the (x1, x2) plane; the constrained optimum lies
on the curve h(x1, x2) = 0.]

c 2007 Stanley B. Gershwin.


Copyright � 18
Continuous Constrained
Variables and
Objective
Inequality constrained: the solution is required to be on one side of the
constraint surface, g(x1, x2) ≥ 0.

[Figure: a surface J over the (x1, x2) plane; the constrained optimum lies
on the boundary of the feasible region g(x1, x2) ≥ 0.]

Inequality constraints that are satisfied with equality are called effective or
active constraints.

If we knew which constraints would be effective, the problem would reduce to


an equality-constrained optimization.
c 2007 Stanley B. Gershwin.
Copyright � 19
Continuous Constrained
Variables and
Objective
Minimize 8(x + y) − (x⁴ + y⁴)/4

subject to x + y ≥ 0

[Figure: contours of 8(x+y) − .25(x⁴+y⁴) over the (x, y) plane, with the
linear constraint boundary x + y = 0.]

Solving a linearly-constrained problem is relatively easy. If the

solution is not in the interior, search within the boundary plane.

c 2007 Stanley B. Gershwin.


Copyright � 20
Continuous Constrained
Variables and
Objective
Minimize 8(x + y) − (x⁴ + y⁴)/4

subject to x − (x − y)² + 1 ≥ 0

[Figure: the same contours with the nonlinear constraint boundary
x − (x − y)² + 1 = 0.]

Solving a nonlinearly-constrained problem is not so easy.


Searching within the boundary is numerically difficult.
c 2007 Stanley B. Gershwin.
Copyright � 21
Continuous Nonlinear and Linear Programming
Variables and
Objective
Optimization problems with continuous variables,
objective, and constraints are called nonlinear
programming problems, especially when at least one
of J, h, g are not linear.
When all of J, h, g are linear, the problem is a linear
programming problem.

c 2007 Stanley B. Gershwin.


Copyright � 22
Continuous Multiple Optima
Variables and
Objective
[Figure: a surface J(x1, x2) with one global maximum and several local
(or relative) maxima.]

Danger: a search might find a local, rather than the


global, optimum.
c 2007 Stanley B. Gershwin.
Copyright � 23
Continuous Primals and Duals

Variables and

Objective

Consider the two problems:

min f(x)                       max j(x)
subject to j(x) ≥ J            subject to f(x) ≤ F

f(x), F, j(x), and J are scalars. We will call these problems duals of one
another. (However, this is not the conventional use of the term.) Under certain
conditions, when the last inequalities are effective, the same x satisfies both
problems.

We will call one the primal problem and the other the dual problem.

c 2007 Stanley B. Gershwin.


Copyright � 24
Continuous Primals and Duals

Variables and
Objective
Generalization:

min f(x)                       max j(x)
subject to h(x) = 0            subject to h(x) = 0
g(x) ≥ 0                       g(x) ≥ 0
j(x) ≥ J                       f(x) ≤ F

c 2007 Stanley B. Gershwin.


Copyright � 25
Problem statement
Buffer Space
Allocation

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

Problem: Design the buffer space for a line. The


machines have already been selected. Minimize the
total buffer space needed to achieve a target
production rate.
Other problems: minimize total average inventory;
maximize profit (revenue - inventory cost - buffer space
cost); choose machines as well as buffer sizes; etc.

c 2007 Stanley B. Gershwin.


Copyright � 26
Problem statement

Buffer Space
Allocation

Assume a deterministic processing time line with k machines with ri and pi
known for all i = 1, ..., k. Assume a minimum buffer size N^MIN. Assume a
target production rate P*. Then the function P(N1, ..., Nk−1) is known — it
can be evaluated using the decomposition method. The problem is:

Primal problem:

Minimize Σ_{i=1}^{k−1} Ni

subject to P(N1, ..., Nk−1) ≥ P*

Ni ≥ N^MIN, i = 1, ..., k − 1.

In the following, we treat the Ni like a set of continuous variables.


c 2007 Stanley B. Gershwin.
Copyright � 27
Properties of P (N1, ..., Nk−1)
Buffer Space

Allocation

P(∞, ..., ∞) = min_{i=1,...,k} ei

P(N^MIN, ..., N^MIN) ≈ 1 / (1 + Σ_i pi/ri)  <<  P(∞, ..., ∞)

• Continuity: A small change in any Ni creates a small change in P.
• Monotonicity: The production rate increases monotonically in each Ni.
• Concavity: The production rate appears to be a concave function of the
vector (N1, ..., Nk−1).

c 2007 Stanley B. Gershwin.


Copyright � 28
Properties of P (N1, ..., Nk−1)
Buffer Space
Allocation Example — 3-machine line

[Figure: P(N1, N2) for a 3-machine line, shown as a surface and as contours
(P = 0.8800 through 0.9000) with the optimal curve overlaid.]

r1 = .35   r2 = .15   r3 = .4
p1 = .037  p2 = .015  p3 = .02
e1 = .904  e2 = .909  e3 = .952

c 2007 Stanley B. Gershwin.


Copyright � 29
Solution
Buffer Space
Allocation Primal problem
Minimize Σ_{i=1}^{k−1} Ni

subject to P(N1, ..., Nk−1) ≥ P*

Ni ≥ N^MIN, i = 1, ..., k − 1.

Difficulty: If all the buffers are larger than N^MIN, the solution will satisfy
P(N1, ..., Nk−1) = P*. (Why?) But P(N1, ..., Nk−1) is nonlinear and cannot
be expressed in closed form. Therefore any solution method will have to
search within this constraint, all steps will be small, and there will be
many iterations.

It would be desirable to transform this problem into one with linear constraints.

c 2007 Stanley B. Gershwin.


Copyright � 30
Solution
Buffer Space
Allocation Dual problem

Maximize P(N1, ..., Nk−1)

subject to Σ_{i=1}^{k−1} Ni ≤ N^TOTAL (specified)

Ni ≥ N^MIN, i = 1, ..., k − 1.

All the constraints are linear. The solution will satisfy the N^TOTAL constraint
with equality (assuming the problem is feasible). (1. Why? 2. When would the
problem be infeasible?) This problem is consequently relatively easy to solve.

c 2007 Stanley B. Gershwin.


Copyright � 31
Solution

Buffer Space
Allocation
Dual problem

[Figure: the constraint set (if N^MIN = 0) for three buffers: the plane
N1 + N2 + N3 = N^TOTAL in the nonnegative octant of (N1, N2, N3) space.]

c 2007 Stanley B. Gershwin.


Copyright � 32
Solution

Buffer Space
Allocation Primal strategy

Solution of the primal problem:

1. Guess N^TOTAL.

2. Solve the dual problem. Evaluate P = P^MAX(N^TOTAL).

3. Use a one-dimensional search method to find N^TOTAL
such that P^MAX(N^TOTAL) = P*.

c 2007 Stanley B. Gershwin.


Copyright � 33
Solution
Buffer Space
Allocation Dual Algorithm
Maximize P(N1, ..., Nk−1)

subject to Σ_{i=1}^{k−1} Ni = N^TOTAL (specified)

Ni ≥ N^MIN, i = 1, ..., k − 1.

• Start with an initial guess (N1, ..., Nk−1) that satisfies
Σ_{i=1}^{k−1} Ni = N^TOTAL.
• Calculate the gradient vector (g1, ..., gk−1):

gi = [P(N1, . . . , Ni + δN, . . . , Nk−1) − P(N1, . . . , Ni, . . . , Nk−1)] / δN

• Calculate the projected gradient vector (ĝ1, ..., ĝk−1):

ĝi = gi − ḡ, where ḡ = (1/(k − 1)) Σ_{i=1}^{k−1} gi

c 2007 Stanley B. Gershwin.


Copyright � 34
Solution
Buffer Space
Allocation Dual Algorithm

• The projected gradient ĝ satisfies

Σ_{i=1}^{k−1} ĝi = Σ_{i=1}^{k−1} (gi − ḡ) = Σ_{i=1}^{k−1} gi − (k − 1)ḡ = 0

• Therefore, if A is a scalar, then

Σ_{i=1}^{k−1} (Ni + Aĝi) = Σ_{i=1}^{k−1} Ni + A Σ_{i=1}^{k−1} ĝi = Σ_{i=1}^{k−1} Ni

so if (N1, ..., Nk−1) satisfies Σ_{i=1}^{k−1} Ni = N^TOTAL and Ni′ = Ni + Aĝi for
any scalar A, then (N1′, ..., N′k−1) satisfies Σ_{i=1}^{k−1} Ni′ = N^TOTAL.
• That is, if N is on the constraint, then N + Aĝ is also on the constraint (as
long as all elements ≥ N^MIN).

c 2007 Stanley B. Gershwin.


Copyright � 35
Solution
Buffer Space
Allocation Dual Algorithm

• The gradient g is the direction of greatest increase of P.
• The projected gradient ĝ is the direction of greatest increase of
P within the constraint plane.
• Therefore, once we have a point N on the constraint plane, the
best improvement is to move in the direction of ĝ; that is,
N + Aĝ.
• To find the best possible improvement, we find A*, the value of
A that maximizes P(N + Aĝ). A is a scalar, so this is a
one-dimensional search.
• N + A*ĝ is the next guess for N, and the process repeats.

c 2007 Stanley B. Gershwin.


Copyright � 36
Solution
Buffer Space
Allocation Dual Algorithm

[Flowchart of the dual algorithm:]

1. Specify an initial guess N = (N1, ..., Nk−1) and search parameters.
2. Calculate the gradient g.
3. Calculate the search direction p. (Here, p = ĝ.)
4. Find A such that P(N + Ap) is maximized. Define N̂ = N + Ap.
5. If N̂ is not close to N, set N = N̂ and go to Step 2.
6. Otherwise N is the solution. Terminate.

c 2007 Stanley B. Gershwin.


Copyright � 37
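A hedged skeleton of this projected-gradient loop. The evaluator P is an assumption here; in the lecture it would be the decomposition evaluation of the line. A forward-difference gradient and a crude grid search over A stand in for the exact calculations:

```python
import numpy as np

def dual_algorithm(P, N, n_min=0.0, dN=1.0, iters=100):
    # N: numpy array of buffer sizes with sum(N) = N_TOTAL.
    for _ in range(iters):
        g = np.array([(P(N + dN * np.eye(len(N))[i]) - P(N)) / dN
                      for i in range(len(N))])   # forward differences
        ghat = g - g.mean()       # projection: components of ghat sum to 0
        A, best = 0.0, P(N)
        for a in np.linspace(0.0, 50.0, 101):    # crude 1-D search for A
            trial = N + a * ghat
            if trial.min() >= n_min and P(trial) > best:
                A, best = a, P(trial)
        if A == 0.0:
            break                 # no improving step found
        N = N + A * ghat          # stays on the constraint plane
    return N
```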
Solution
Buffer Space
Allocation Dual Algorithm

[Figure: iterates of the dual algorithm on the constraint plane, starting
from the initial guess.]

c 2007 Stanley B. Gershwin.


Copyright � 38
Solution
Buffer Space
Allocation Primal algorithm

• We can solve the dual problem for any N^TOTAL.

• We can calculate N1(N^TOTAL), N2(N^TOTAL), ..., Nk−1(N^TOTAL), and
P^MAX(N^TOTAL).

[Figure: the optimal curve (N1(N^TOTAL), N2(N^TOTAL)) overlaid on the
contours P = 0.8800 through 0.9000 of the three-machine example.]

c 2007 Stanley B. Gershwin.


Copyright � 39
Solution
Buffer Space
Allocation Primal algorithm
[Figure: P^MAX(N^TOTAL) as a function of N^TOTAL: the maximum average
production rate is increasing and concave in total buffer space.]

c 2007 Stanley B. Gershwin.


Copyright � 40
Solution
Buffer Space
Allocation Primal algorithm

Then, we can find, by 1-dimensional search, N^TOTAL such that
P^MAX(N^TOTAL) = P*.

c 2007 Stanley B. Gershwin.


Copyright � 41
Solution
Buffer Space
Allocation Primal algorithm

[Flowchart of the primal algorithm:]

1. Set N⁰ = (k − 1)N^MIN. Calculate P^MAX(N⁰).
2. Specify an initial guess N¹ and search parameters.
3. Solve the dual to obtain P^MAX(N¹). Set j = 2.
4. Calculate N^j from (7). (Here, (7) refers to a one-dimensional search.)
5. Call the dual algorithm to evaluate P^MAX(N^j).
6. If P^MAX(N^j) is not close enough to P*, increment j by 1 and go to
Step 4.
7. Otherwise the minimum total buffer space is N^j. Terminate.

c 2007 Stanley B. Gershwin.


Copyright � 42
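A sketch of this outer loop, with bisection standing in for the search rule (7). `P_max` is assumed to be the dual solver (such as the one sketched earlier), and the bracketing pre-condition is an assumption:

```python
def primal(P_max, P_star, lo, hi, tol=1e-3):
    # Assumes P_max is increasing in N_TOTAL with
    # P_max(lo) < P_star <= P_max(hi).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if P_max(mid) >= P_star:
            hi = mid      # feasible: try less total buffer space
        else:
            lo = mid      # infeasible: need more total buffer space
    return hi
```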
Example
Buffer Space
Allocation The “Bowl Phenomena”
• Problem: how to allocate space in a line with identical machines.
• Case: 20-machine continuous material line, ri = .0952, pi = .005, and
µi = 1, i = 1, ..., 20.

First, we show the average WIP distribution if all buffers are the same
size: Ni = 53, i = 1, ..., 19.

[Figure: average buffer level vs. buffer number for equal buffer sizes.]

c 2007 Stanley B. Gershwin.


Copyright � 43
Example
Buffer Space
Allocation The “Bowl Phenomena”

• This shows the optimal distribution of buffer space and the resulting
distribution of average inventory.

[Figure: buffer size and average buffer level vs. buffer number for the
optimal allocation.]

c 2007 Stanley B. Gershwin.


Copyright � 44
Example
Buffer Space
Allocation The “Bowl Phenomena”

Observations:

• The optimal distribution of buffer space does not look like the
distribution of inventory in the line with equal buffers. Why not?
Explain the shape of the optimal distribution.
• The distribution of average inventory is not symmetric.

c 2007 Stanley B. Gershwin.


Copyright � 45
Example
Buffer Space
Allocation The “Bowl Phenomena”

• This shows the ratios of average inventory to buffer size with equal buffers
and with optimal buffers.

[Figure: ratio of average buffer level to buffer size vs. buffer number, for
the equal and optimal allocations.]

c 2007 Stanley B. Gershwin.


Copyright � 46
Example

Buffer Space
Allocation

• Design the buffers for a 20-machine production line.

• The machines have been selected, and the only


decision remaining is the amount of space to
allocate for in-process inventory.
• The goal is to determine the smallest amount of
in-process inventory space so that the line meets a
production rate target.

c 2007 Stanley B. Gershwin.


Copyright � 47
Example

Buffer Space
Allocation

• The common operation time is one operation per


minute.
• The target production rate is .88 parts per minute.

c 2007 Stanley B. Gershwin.


Copyright � 48
Example

Buffer Space
Allocation

• Case 1 MTTF= 200 minutes and MTTR = 10.5


minutes for all machines (P = .95 parts per minute).
• Case 2 Like Case 1 except Machine 5. For
Machine 5, MTTF = 100 and MTTR = 10.5 minutes
(P = .905 parts per minute).
• Case 3 Like Case 1 except Machine 5. For
Machine 5, MTTF = 200 and MTTR = 21 minutes
(P = .905 parts per minute).

c 2007 Stanley B. Gershwin.


Copyright � 51
Example
Buffer Space
Allocation

Are buffers really needed?

Line      Production rate with no buffers, parts per minute
Case 1    .487
Case 2    .475
Case 3    .475

Yes. How were these numbers calculated?

c 2007 Stanley B. Gershwin.


Copyright � 52
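One consistent way to get these numbers, assuming the standard zero-buffer formula for deterministic processing time lines with operation-dependent failures, E = 1/(1 + Σ_i p_i/r_i), and noting that p_i/r_i = MTTR_i/MTTF_i:

```python
def zero_buffer_rate(mttfs, mttrs):
    # With no buffers the whole line stops whenever any machine is down.
    return 1.0 / (1.0 + sum(mttr / mttf for mttf, mttr in zip(mttfs, mttrs)))

print(zero_buffer_rate([200.0] * 20, [10.5] * 20))             # Case 1: ~.487
print(zero_buffer_rate([200.0] * 19 + [100.0], [10.5] * 20))   # Case 2: ~.475
print(zero_buffer_rate([200.0] * 20, [10.5] * 19 + [21.0]))    # Case 3: ~.475
```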
Example
Buffer Space
Allocation

Solution

[Figure: optimal buffer size vs. buffer number (Buffers 1–19) for Case 1
(all machines identical), Case 2 (Machine 5 bottleneck, MTTF = 100), and
Case 3 (Machine 5 bottleneck, MTTR = 21).]

Line      Total buffer space
Case 1    430
Case 2    485
Case 3    523

c 2007 Stanley B. Gershwin.


Copyright � 54
Example

Buffer Space
Allocation

• This shows the optimal distribution of buffer space and the resulting
distribution of average inventory for Case 3.

[Figure: buffer size and average buffer level vs. buffer number for Case 3.]

c 2007 Stanley B. Gershwin.


Copyright � 54
Example

Buffer Space
Allocation

• This shows the ratio of average inventory to buffer size with optimal buffers
for Case 3.

[Figure: ratio of average buffer level to buffer size vs. buffer number for
Case 3.]

c 2007 Stanley B. Gershwin.


Copyright � 55
Example

Buffer Space
Allocation

• Case 4: Same as Case 3 except the bottleneck is at Machine 15.

• This shows the optimal distribution of buffer space and the resulting
distribution of average inventory for Case 4.

[Figure: buffer size and average buffer level vs. buffer number for Case 4.]

c 2007 Stanley B. Gershwin.


Copyright � 56
Example

Buffer Space
Allocation

• This shows the ratio of average inventory to buffer size with optimal buffers
for Case 4.

[Figure: ratio of average buffer level to buffer size vs. buffer number for
Case 4.]

c 2007 Stanley B. Gershwin.


Copyright � 57
Example

Buffer Space
Allocation

• Case 5: MTTF bottleneck at Machine 5, MTTR bottleneck at Machine 15.

• This shows the optimal distribution of buffer space and the resulting
distribution of average inventory for Case 5.

[Figure: buffer size and average buffer level vs. buffer number for Case 5.]

c 2007 Stanley B. Gershwin.


Copyright � 58
Example

Buffer Space
Allocation

• This shows the ratio of average inventory to buffer size with optimal buffers
for Case 5.

[Figure: ratio of average buffer level to buffer size vs. buffer number for
Case 5.]

c 2007 Stanley B. Gershwin.


Copyright � 59
Example

Buffer Space
Allocation

• Case 6: Like Case 5, but 50 machines, MTTR bottleneck at Machine 45.

• This shows the optimal distribution of buffer space and the resulting
distribution of average inventory for Case 6.

[Figure: buffer size and average buffer level vs. buffer number for Case 6.]

c 2007 Stanley B. Gershwin.


Copyright � 60
Example

Buffer Space
Allocation

• This shows the ratio of average inventory to buffer size with optimal buffers
for Case 6.

[Figure: ratio of average buffer level to buffer size vs. buffer number for
Case 6.]

c 2007 Stanley B. Gershwin.


Copyright � 61
Example

Buffer Space
Allocation

• Observation from studying buffer space allocation


problems:
� Buffer space is needed most where buffer level
variability is greatest!

c 2007 Stanley B. Gershwin.


Copyright � 62
Profit as a function of buffer sizes
Buffer Space

Allocation

• Three-machine, continuous material line.
• ri = .1, pi = .01, µi = 1.
• Profit = 1000 P(N1, N2) − (n̄1 + n̄2).

[Figure: profit as a function of (N1, N2): a single-peaked surface with
maximum profit near 840.]

c 2007 Stanley B. Gershwin.


Copyright � 63
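With only two buffers, the profit surface above can be searched directly. A brute-force sketch, assuming a hypothetical evaluator `evaluate(N1, N2)` (for example, a decomposition of the three-machine line) that returns the production rate and the average buffer levels:

```python
def best_profit(evaluate, Nmax=100):
    best = None
    for N1 in range(1, Nmax + 1):
        for N2 in range(1, Nmax + 1):
            P, n1bar, n2bar = evaluate(N1, N2)
            profit = 1000 * P - (n1bar + n2bar)
            if best is None or profit > best[0]:
                best = (profit, N1, N2)
    return best   # (profit, N1, N2) at the grid optimum
```

In higher dimensions this grid becomes impractical, which is why the gradient-based methods above are used instead.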
MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT 2.852

Manufacturing Systems Analysis

Lecture 10–12

Transfer Lines – Long Lines

Stanley B. Gershwin

http://web.mit.edu/manuf-sys

Massachusetts Institute of Technology

Spring, 2010

2.852 Manufacturing Systems Analysis 1/91 c


Copyright 2010 Stanley B. Gershwin.
Long Lines

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

◮ Difficulty:

◮ No simple formula for calculating production rate or inventory levels.

◮ State space is too large for exact numerical solution.

◮ If all buffer sizes are N and the length of the line is k, the number of
states is S = 2^k (N + 1)^(k−1).
◮ If N = 10 and k = 20, S = 6.41 × 10^25.

◮ Decomposition seems to work successfully.

2.852 Manufacturing Systems Analysis 2/91 c


Copyright 2010 Stanley B. Gershwin.
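A one-line check of the state count above:

```python
def num_states(k, N):
    # 2 up/down states per machine, N+1 levels per buffer.
    return 2**k * (N + 1)**(k - 1)

print(f"{num_states(20, 10):.2e}")   # -> 6.41e+25
```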
Decomposition — Concept

Decomposition works for many kinds of systems, and extending it is an


active research area.
◮ We start with deterministic processing time lines.
◮ Then we extend decomposition to other lines.
◮ Then we extend it to assembly/disassembly systems without loops.
◮ Then we look at systems with loops.
◮ Etc., etc. if there is time.

2.852 Manufacturing Systems Analysis 3/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition — Concept

◮ Conceptually: put an observer in a buffer, and tell him that he is in


the buffer of a two-machine line.
◮ Question: What would the observer see, and how can he be convinced
he is in a two-machine line?

2.852 Manufacturing Systems Analysis 4/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition — Concept

◮ Decomposition breaks up systems and then reunites them.


◮ Construct all the two-machine lines.

2.852 Manufacturing Systems Analysis 5/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition — Concept

◮ Evaluate the performance measures (production rate, average buffer


level) of each two-machine line, and use them for the real line.

◮ This is an approximation; the behavior of the flow in the buffer of a


two-machine line is not exactly the same as the behavior of the flow
in a buffer of a long line.

◮ The two-machine lines are sometimes called building blocks.

2.852 Manufacturing Systems Analysis 6/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition — Concept

◮ Consider an observer in Buffer Bi .


◮ Imagine the material flow process that the observer sees entering and
the material flow process that the observer sees leaving the buffer.
◮ We construct a two-machine line L(i)
◮ (ie, we find machines Mu (i) and Md (i) with parameters ru (i), pu (i),
rd (i), pd (i), and N(i) = Ni )
such that an observer in its buffer will see almost the same processes.
◮ The parameters are chosen as functions of the behaviors of the other
two-machine lines.

2.852 Manufacturing Systems Analysis 7/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition — Concept

M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

Line L(i)

M u (i) M d (i)

2.852 Manufacturing Systems Analysis 8/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition — Concept

There are 4(k − 1) unknowns for the deterministic processing time line:

ru (1), pu (1), rd (1), pd (1),


ru (2), pu (2), rd (2), pd (2),
...,
ru (k − 1), pu (k − 1), rd (k − 1), pd (k − 1)
Therefore, we need

◮ 4(k − 1) equations, and

◮ an algorithm for solving those equations.

2.852 Manufacturing Systems Analysis 9/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Overview

The decomposition equations relate ru (i), pu (i), rd (i), and pd (i) to behavior in
the real line and in other two-machine lines.
◮ Conservation of flow, equating all production rates.
◮ Flow rate/idle time, relating production rate to probabilities of starvation
and blockage.
◮ Resumption of flow, relating ru (i) to upstream events and rd (i) to
downstream events.
◮ Boundary conditions, for parameters of Mu (1) and Md (k − 1).

2.852 Manufacturing Systems Analysis 10/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Overview

◮ All the quantities in all these equations are

◮ specified parameters, or
◮ unknowns, or
◮ functions of parameters or unknowns derived from the two-machine line
analysis.

◮ This is a set of 4(k − 1) equations.

2.852 Manufacturing Systems Analysis 11/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Overview

Notation convention:

◮ Items that pertain to two-machine line L(i ) will have i in parentheses.


Example: ru (i ).

◮ Items that pertain to the real line L will have i in the subscript.
Example: ri .

2.852 Manufacturing Systems Analysis 12/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Conservation of Flow

E (i ) = E (1), i = 2, . . . , k − 1.

◮ Recall that E (i ) is a function of the unknowns ru (i ), pu (i ), rd (i ), and


pd (i ).
◮ (It is also a function of N(i ), but N(i ) is known.)
◮ We know how to evaluate it easily, but we don’t have a simple
expression for it.
This is a set of k − 2 equations.

2.852 Manufacturing Systems Analysis 13/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Flow Rate-Idle Time

Ei = ei prob[ni−1 > 0 and ni < Ni]

where

ei = ri / (ri + pi)

Problem:
◮ This expression involves a joint probability of two buffers taking
certain values at the same time.
◮ But we only know how to evaluate two-machine, one-buffer lines, so
we only know how to calculate the probability of one buffer taking on
a certain value at a time.

2.852 Manufacturing Systems Analysis 14/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Flow Rate-Idle Time

Observation:

prob (ni −1 = 0 and ni = Ni ) ≈ 0.


Reason:
M i−1 Bi−1 Mi Bi M i+1

0 Ni
The only way to have ni −1 = 0 and ni = Ni is if
◮ Mi −1 is down or starved for a long time
◮ and Mi is up
◮ and Mi +1 is down or blocked for a long time
◮ and to have exactly Ni parts in the two buffers.

2.852 Manufacturing Systems Analysis 15/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Flow Rate-Idle Time

Then

prob[ni−1 > 0 and ni < Ni]

= prob[NOT {ni−1 = 0 or ni = Ni}]

= 1 − prob[ni−1 = 0 or ni = Ni]

= 1 − {prob(ni−1 = 0) + prob(ni = Ni) − prob(ni−1 = 0 and ni = Ni)}

≈ 1 − {prob(ni−1 = 0) + prob(ni = Ni)}

2.852 Manufacturing Systems Analysis 16/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Flow Rate-Idle Time

Therefore

Ei ≈ ei [1 − prob(ni−1 = 0) − prob(ni = Ni)]

Note that

prob(ni−1 = 0) = ps(i − 1);  prob(ni = Ni) = pb(i)

Two of the FRIT relationships in lines L(i − 1) and L(i) are

E(i) = eu(i) [1 − pb(i)];  E(i − 1) = ed(i − 1) [1 − ps(i − 1)]

2.852 Manufacturing Systems Analysis 17/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Flow Rate-Idle Time

or,

ps(i − 1) = 1 − E(i − 1)/ed(i − 1);  pb(i) = 1 − E(i)/eu(i)

so (replacing ≈ with =),

Ei = ei [1 − (1 − E(i − 1)/ed(i − 1)) − (1 − E(i)/eu(i))]

The goal is to have E = Ei = E(i − 1) = E(i), so

E(i) = ei [1 − (1 − E(i)/ed(i − 1)) − (1 − E(i)/eu(i))]

2.852 Manufacturing Systems Analysis 18/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Flow Rate-Idle Time

Since

ed(i − 1) = rd(i − 1)/(pd(i − 1) + rd(i − 1));  eu(i) = ru(i)/(pu(i) + ru(i)),

we can write

pd(i − 1)/rd(i − 1) + pu(i)/ru(i) = 1/E(i) + 1/ei − 2,  i = 2, . . . , k − 1

This is a set of k − 2 equations.

2.852 Manufacturing Systems Analysis 19/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

M u (i) M d (i)

When the observer sees Mu (i ) down, Mi may actually be down...

2.852 Manufacturing Systems Analysis 20/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

M u (i) M d (i)

... or, Mi −1 may be down and Bi −1 may be empty, ...

2.852 Manufacturing Systems Analysis 21/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

0 0

M u (i) M d (i)

... or Mi −2 may be down and Bi −1 and Bi −2 may be empty, ...

2.852 Manufacturing Systems Analysis 22/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

0 0 0

M u (i) M d (i)

... or Mi −3 may be down and Bi −1 and Bi −2 and Bi −3 may be empty, ...

2.852 Manufacturing Systems Analysis 23/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

0 0 0 0

M u (i) M d (i)

... etc.

2.852 Manufacturing Systems Analysis 24/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

M u (i−1) M d (i−1)

Similarly for the observer in Bi −1 .

2.852 Manufacturing Systems Analysis 25/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

M u (i−1) M d (i−1)

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

M u (i) M d (i)

Comparison

2.852 Manufacturing Systems Analysis 26/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

M u (i−1) M d (i−1)

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

0 0

M u (i) M d (i)

2.852 Manufacturing Systems Analysis 27/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

0 0

M u (i−1) M d (i−1)

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

0 0 0

M u (i) M d (i)

2.852 Manufacturing Systems Analysis 28/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

0 0 0

M u (i−1) M d (i−1)

M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

0 0 0 0

M u (i) M d (i)

2.852 Manufacturing Systems Analysis 29/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

That is, when the Line L(i) observer sees a failure in Mu (i),
M u (i) M d (i)

◮ either real machine Mi is down,


M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

◮ or Buffer Bi −1 is empty and the Line L(i − 1) observer sees a failure in


Mu (i − 1).
M i−4 Bi−4 M i−3 Bi−3 M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

M u (i−1) 0 M d (i−1)

Note that these two events are mutually exclusive. Why?

2.852 Manufacturing Systems Analysis 30/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

Also, for the Line L(j) observer to see Mu (j) up, Mj must be up and Bj−1 must
be non-empty. Therefore,

{αu (j, τ ) = 1} ⇐⇒ {αj (τ ) = 1} and {nj−1 (τ − 1) > 0}

{αu (j, τ ) = 0} ⇐⇒ {αj (τ ) = 0} or {nj−1 (τ − 1) = 0}

2.852 Manufacturing Systems Analysis 31/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

Then

ru(i) = prob[αu(i, t + 1) = 1 | αu(i, t) = 0]

= prob[{αi(t + 1) = 1 and ni−1(t) > 0} | {αi(t) = 0 or ni−1(t − 1) = 0}]

2.852 Manufacturing Systems Analysis 32/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

To express ru (i) in terms of quantities we know or can find, we have to simplify


prob (U|V or W ), where

U = {αi (t + 1) = 1} and {ni −1 (t) > 0}

V = {αi (t) = 0}

W = {ni −1 (t − 1) = 0}
Important: V and W are disjoint.
prob (V and W ) = 0.

2.852 Manufacturing Systems Analysis 33/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

prob(U | V or W) = prob(U and (V or W)) / prob(V or W)

= prob((U and V) or (U and W)) / prob(V or W)

= prob(U and V)/prob(V or W) + prob(U and W)/prob(V or W)

= prob(U|V) prob(V)/prob(V or W) + prob(U|W) prob(W)/prob(V or W)

2.852 Manufacturing Systems Analysis 34/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

= prob(U|V) prob(V)/prob(V or W) + prob(U|W) prob(W)/prob(V or W)

Note that

prob(V | V or W) = prob(V and (V or W))/prob(V or W) = prob(V)/prob(V or W)

so

prob(U | V or W) = prob(U|V) prob(V | V or W) + prob(U|W) prob(W | V or W).

2.852 Manufacturing Systems Analysis 35/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

Then, if we plug U, V, and W from Slide 33 into this, we get

ru(i) = A(i − 1)X(i) + B(i)X′(i),  i = 2, . . . , k − 1

where

A(i − 1) = prob(U|W)

= prob[ni−1(t) > 0 and αi(t + 1) = 1 | ni−1(t − 1) = 0],

2.852 Manufacturing Systems Analysis 36/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

X(i) = prob(W | V or W)
= prob[ni−1(t − 1) = 0 | {ni−1(t − 1) = 0 or αi(t) = 0}],

B(i) = prob(U|V)
= prob[ni−1(t) > 0 and αi(t + 1) = 1 | αi(t) = 0],

X′(i) = prob(V | V or W)
= prob[αi(t) = 0 | {ni−1(t − 1) = 0 or αi(t) = 0}].

2.852 Manufacturing Systems Analysis 37/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

To evaluate

A(i − 1) = prob[ni−1(t) > 0 and αi(t + 1) = 1 | ni−1(t − 1) = 0]:

Note that
◮ For Buffer i − 1 to be empty at time t − 1, Machine Mi must be up at time t − 1
and also at time t. It must have been up in order to empty the buffer, and it must
stay up because it cannot fail (it is starved, and an idle machine cannot fail).
Therefore αi(t) = 1.
◮ For Buffer i − 1 to be non-empty at time t after being empty at time t − 1, it
must have gained 1 part. For it to gain a part when αi(t) = 1, Mi must not have
been working (because it was previously starved). Therefore, Mi could not have
failed, and A(i − 1) can therefore be written

A(i − 1) = prob[ni−1(t) > 0 | ni−1(t − 1) = 0]

2.852 Manufacturing Systems Analysis 38/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

A(i − 1) = prob[ni−1(t) > 0 | ni−1(t − 1) = 0]

◮ For Buffer i − 1 to be empty, Mi−1 must be down or starved. For Mi−1 to be
starved, Mi−2 must be down or starved, etc. Therefore, saying Mi−1 is down or
starved is equivalent to saying Mu(i − 1) is down. That is, if ni−1(t − 1) = 0 then
αu(i − 1, t − 1) = 0.
◮ Conversely, for Buffer i − 1 to be non-empty, Mi−1 must not be down or starved.
That is, if ni−1(t) > 0, then αu(i − 1, t) = 1.

Therefore,

A(i − 1) = prob[αu(i − 1, t) = 1 | αu(i − 1, t − 1) = 0] = ru(i − 1)

2.852 Manufacturing Systems Analysis 39/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

Similarly,

B(i) = prob[ni−1(t) > 0 and αi(t + 1) = 1 | αi(t) = 0]

Note that if αi(t) = 0, we must have ni−1(t) > 0. Therefore

B(i) = prob[αi(t + 1) = 1 | αi(t) = 0],

or,

B(i) = ri

so

ru(i) = ru(i − 1)X(i) + ri X′(i),

2.852 Manufacturing Systems Analysis 40/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

Interpretation so far:
◮ ru (i ), the probability that Mu (i ) goes from down to up, is

◮ ri times the probability that Mu (i) is down because Mi is down


◮ plus ru (i − 1) times the probability that Mu (i) is down because
Mu (i − 1) is down and Bi −1 is empty.

2.852 Manufacturing Systems Analysis 41/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

X (i)= the probability that Mu (i) is down because Mu (i − 1) is down and Bi −1 is


empty;

X ′ (i) = the probability that Mu (i) is down because Mi is down.

Since these are the only two ways that Mu (i) can be down,

X ′ (i) = 1 − X (i)

2.852 Manufacturing Systems Analysis 42/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

X(i) = prob[ni−1(t − 1) = 0 | {ni−1(t − 1) = 0 or αi(t) = 0}]

= prob[ni−1(t − 1) = 0 and {ni−1(t − 1) = 0 or αi(t) = 0}]
  / prob[ni−1(t − 1) = 0 or αi(t) = 0]

= prob[ni−1(t − 1) = 0] / prob[ni−1(t − 1) = 0 or αi(t) = 0]

= ps(i − 1) / prob[ni−1(t − 1) = 0 or αi(t) = 0]

2.852 Manufacturing Systems Analysis 43/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

To analyze the denominator, note

◮ {ni−1(t − 1) = 0 or αi(t) = 0} = {αu(i) = 0} by definition;
◮ prob[ni−1(t − 1) = 0 or αi(t) = 0] ≈
prob[{ni−1(t − 1) = 0 or αi(t) = 0} and ni(t − 1) < Ni] because
prob[ni−1(t − 1) = 0 and ni(t − 1) = Ni] ≈ 0

so the denominator is, approximately,

prob[αu(i) = 0 and ni(t − 1) < Ni]

Recall that this is equal to

(pu(i)/ru(i)) prob[αu(i) = 1 and ni(t − 1) < Ni] = (pu(i)/ru(i)) E(i)

2.852 Manufacturing Systems Analysis 44/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

Therefore,

X(i) = ps(i − 1) ru(i) / (pu(i) E(i))

and

ru(i) = ru(i − 1)X(i) + ri (1 − X(i)),  i = 2, . . . , k − 1

This is a set of k − 2 equations.

2.852 Manufacturing Systems Analysis 45/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Resumption of Flow

By the same logic,

rd(i − 1) = rd(i)Y(i) + ri (1 − Y(i)),  i = 2, . . . , k − 1

where

Y(i) = pb(i) rd(i − 1) / (pd(i − 1) E(i − 1)).

This is a set of k − 2 equations.

We now have 4(k − 2) = 4k − 8 equations.

2.852 Manufacturing Systems Analysis 46/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Boundary Conditions

Md (1) is the same as M1 and Md (k − 1) is the same as Mk . Therefore

ru (1) = r1
pu (1) = p1
rd (k − 1) = rk
pd (k − 1) = pk

This is a set of 4 equations.

We now have 4(k − 1) equations in 4(k − 1) unknowns ru (i), pu (i), rd (i), pd (i),
i = 1, ..., k − 1.

2.852 Manufacturing Systems Analysis 47/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Algorithm

FRIT:  pd(i − 1)/rd(i − 1) + pu(i)/ru(i) = 1/E(i) + 1/ei − 2

Upstream equations:

ru(i) = ru(i − 1)X(i) + ri (1 − X(i));  X(i) = ps(i − 1) ru(i) / (pu(i) E(i))

pu(i) = ru(i) (1/E(i) + 1/ei − 2 − pd(i − 1)/rd(i − 1))

Downstream equations:

rd(i) = rd(i + 1)Y(i + 1) + ri+1 (1 − Y(i + 1));
Y(i + 1) = pb(i + 1) rd(i) / (pd(i) E(i))

pd(i) = rd(i) (1/E(i + 1) + 1/ei+1 − 2 − pu(i + 1)/ru(i + 1))

2.852 Manufacturing Systems Analysis 48/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Algorithm

We use the conservation of flow conditions by modifying these equations.

Modified upstream equations:

ru(i) = ru(i − 1)X(i) + ri (1 − X(i));  X(i) = ps(i − 1) ru(i) / (pu(i) E(i − 1))

pu(i) = ru(i) (1/E(i − 1) + 1/ei − 2 − pd(i − 1)/rd(i − 1))

Modified downstream equations:

rd(i) = rd(i + 1)Y(i + 1) + ri+1 (1 − Y(i + 1));
Y(i + 1) = pb(i + 1) rd(i) / (pd(i) E(i + 1))

pd(i) = rd(i) (1/E(i + 1) + 1/ei+1 − 2 − pu(i + 1)/ru(i + 1))

2.852 Manufacturing Systems Analysis 49/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Algorithm

Possible Termination Conditions:

◮ |E (i ) − E (1)| < ǫ for i = 2, ..., k − 1, or

◮ The change in each ru (i ), pu (i ), rd (i ), pd (i ) parameter,


i = 1, ..., k − 1 is less than ǫ, or

◮ etc.

2.852 Manufacturing Systems Analysis 50/91 c


Copyright 2010 Stanley B. Gershwin.
Decomposition Equations
Algorithm

DDX algorithm: due to Dallery, David, and Xie (1988).


1. Guess the downstream parameters of L(1) (rd (1), pd (1)). Set i = 2.
2. Use the modified upstream equations to obtain the upstream parameters of
L(i) (ru (i), pu (i)). Increment i.
3. Continue in this way until L(k − 1). Set i = k − 2.
4. Use the modified downstream equations to obtain the downstream
parameters of L(i). Decrement i.
5. Continue in this way until L(1).
6. Go to Step 2 or terminate.

2.852 Manufacturing Systems Analysis 51/91 c


Copyright 2010 Stanley B. Gershwin.
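A heavily simplified skeleton of the DDX iteration, as a sketch rather than the course's code. `two_machine_line` is an assumed evaluator of building block L(i): given (ru, pu, rd, pd, N) it returns E, ps (probability the buffer is empty), and pb (probability it is full). Real implementations re-evaluate each building block within the sweeps and apply a termination test such as those above:

```python
def ddx(r, p, N, two_machine_line, iters=100):
    k = len(r)
    ru, pu = list(r[:-1]), list(p[:-1])   # boundary: Mu(1) = M1
    rd, pd = list(r[1:]), list(p[1:])     # boundary: Md(k-1) = Mk
    e = [ri / (ri + pi) for ri, pi in zip(r, p)]
    for _ in range(iters):
        E, ps, pb = zip(*[two_machine_line(ru[i], pu[i], rd[i], pd[i], N[i])
                          for i in range(k - 1)])
        for i in range(1, k - 1):         # modified upstream equations
            X = ps[i - 1] * ru[i] / (pu[i] * E[i - 1])
            ru[i] = ru[i - 1] * X + r[i] * (1 - X)
            pu[i] = ru[i] * (1 / E[i - 1] + 1 / e[i] - 2
                             - pd[i - 1] / rd[i - 1])
        for i in range(k - 3, -1, -1):    # modified downstream equations
            Y = pb[i + 1] * rd[i] / (pd[i] * E[i + 1])
            rd[i] = rd[i + 1] * Y + r[i + 1] * (1 - Y)
            pd[i] = rd[i] * (1 / E[i + 1] + 1 / e[i + 1] - 2
                             - pu[i + 1] / ru[i + 1])
    return ru, pu, rd, pd
```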
Decomposition
Approximations

Is the decomposition exact? NO, because

1. The behavior of the flow in the buffer of a two-machine line is not


exactly the same as the behavior of the flow in a buffer of a long line.

2. prob [ni −1 (t − 1) = 0 and ni (t − 1) = Ni ] ≈ 0

Question: When will this work well, and when will it work badly?

2.852 Manufacturing Systems Analysis 52/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Three-machine line

Three-machine line – production rate.

[Figure: E vs. p3 for p2 = .05, .15, .25, .35, .45, .55, .65, with
r1 = r2 = r3 = .2, p1 = .05, N1 = N2 = 5.]

2.852 Manufacturing Systems Analysis 53/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Three-machine line

Three-machine line – total average inventory.

[Figure: total average inventory I vs. p3 for p2 = .05, .15, .25, .35, .45,
.55, .65, with r1 = r2 = r3 = .2, p1 = .05, N1 = N2 = 5.]

2.852 Manufacturing Systems Analysis 54/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Long lines

50 machines; r = 0.1, p = 0.01, µ = 1.0, N = 20.0.

[Figure: average buffer level vs. buffer number.]

Distribution of material in a line with identical machines and buffers.
Explain the shape.

2.852 Manufacturing Systems Analysis 55/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Long lines

Analytical vs simulation

Time steps        Decomp   10,000   50,000   200,000
Production rate   0.786    0.740    0.751    0.750

[Figure: average buffer level vs. buffer number: the analytic
(decomposition) curve and simulations of 10,000, 50,000, and 200,000 time
steps.]

(Not the same line as in Slide 55.)

2.852 Manufacturing Systems Analysis 56/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Long lines

50 machines; r = 0.1, p = 0.01, µ = 1.0, N = 20.0, EXCEPT N(25) = 2000.0.

[Figure: average buffer level vs. buffer number.]

Same as Slide 55 except that Buffer 25 is now huge.
Explain the shape.
2.852 Manufacturing Systems Analysis 57/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Long lines

25 machines; r = 0.1, p = 0.01, µ = 1.0, N = 20.0.

[Figure: average buffer level vs. buffer number.]

Upstream half of Slide 57.
Explain the shape.

2.852 Manufacturing Systems Analysis 58/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Long lines

50 machines; upstream r = 0.1, p = 0.01, µ = 1.0, N = 20.0, N(25) = 2000.0;
downstream r = 0.15, p = 0.01, µ = 1.0, N = 50.0.

[Figure: average buffer level vs. buffer number.]

Upstream same as Slide 58; downstream faster.
Explain the shape.

2.852 Manufacturing Systems Analysis 59/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Long lines

50 machines; upstream r = 0.1, p = 0.01, µ = 1.0, N = 20.0, N(25) = 2000.0;
downstream r = 0.09, p = 0.01, µ = 1.0, N = 50.0.

[Figure: average buffer level vs. buffer number.]

Upstream same as Slide 58; downstream slower.
Explain the shape.

2.852 Manufacturing Systems Analysis 60/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Long lines

50 machines; upstream r = 0.1, p = 0.01, µ = 1.0, N = 20.0, N(25) = 2000.0;
downstream r = 0.09, p = 0.01, µ = 1.0, N = 15.0.

[Figure: average buffer level vs. buffer number.]

Downstream same as downstream half of Slide 57; upstream faster.
Explain the shape.

2.852 Manufacturing Systems Analysis 61/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Long lines

26 machines; r = 0.1, p = 0.01, µ = 1.0, N = 20.0, EXCEPT N(25) = 2000.0,
r(26) = .09, p(26) = 0.032783.

[Figure: average buffer level vs. buffer number.]

Same as upstream half of Slide 61 except for Machine 26.
Explain the shape. How was Machine 26 chosen?

2.852 Manufacturing Systems Analysis 62/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Long lines — Bottlenecks

50 machines; r = 0.1, p = 0.01, µ = 1.0, N = 20.0, EXCEPT µ(10) = 0.8.

[Figure: average buffer level vs. buffer number.]

Operation time bottleneck. Identical machines and buffers, except for M10.
Explain the shape.

2.852 Manufacturing Systems Analysis 63/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Long lines — Bottlenecks

50 machines; r = 0.1, p = 0.01, µ = 1.0, N = 20.0, EXCEPT p(10) = 0.0375.

[Figure: average buffer level vs. buffer number.]

Failure time bottleneck.
Explain the shape.

2.852 Manufacturing Systems Analysis 64/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Long lines — Bottlenecks

50 machines; r = 0.1, p = 0.01, µ = 1.0, N = 20.0, EXCEPT r(10) = 0.02667.

[Figure: average buffer level vs. buffer number.]

Repair time bottleneck.
Explain the shape.

2.852 Manufacturing Systems Analysis 65/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Infinitely long lines

Infinitely long lines with identical machines and buffers

ri = r, pi = p, Ni = N for each i, −∞ < i < ∞.

The observer in each buffer sees exactly the same behavior. Consequently, the
decomposed pseudo-machines are all identical and symmetric. For each i,

ru(i) = ru(i − 1) = rd(i) = rd(i − 1)
pu(i) = pu(i − 1) = pd(i) = pd(i − 1).

2.852 Manufacturing Systems Analysis 66/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Infinitely long lines

Resumption of flow says

ru(i) = ru(i − 1)X(i) + ri (1 − X(i))
ru = ru X + r(1 − X)

so ru(i) = rd(i) = r.

FRIT says

pd(i − 1)/rd(i − 1) + pu(i)/ru(i) = 1/E(i) + 1/ei − 2

2pu/r = 1/E + 1/e − 2

2.852 Manufacturing Systems Analysis 67/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Infinitely long lines

In the last equation, pu is unknown and E is a function of pu. This is one
equation in one unknown.

[Figure: production rate E (parts/cycle) vs. number of machines k, for
ri = .1, pi = .1, i = 1, ..., k, with Ni = 5 and Ni = 10, i = 1, ..., k − 1;
E decreases with k toward the infinite-line limit.]

2.852 Manufacturing Systems Analysis 68/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Effect of one buffer size on all buffer levels

Continuous material model.

◮ Eight-machine, seven-buffer line.
◮ For each machine, r = .075, p = .009, µ = 1.2.
◮ For each buffer (except Buffer 6), N = 30.

[Figure: average level n̄1, ..., n̄7 of each buffer as a function of N6.]

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 B7 M8

2.852 Manufacturing Systems Analysis 69/91 c


Copyright 2010 Stanley B. Gershwin.
Examples
Effect of one buffer size on all buffer levels

[Figure: the same plot of n̄1, ..., n̄7 as functions of N6.]

◮ Which n̄i are decreasing and which are increasing?
◮ Why?

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 B7 M8

Examples
Buffer allocation

Which has a higher production rate?

◮ 9-Machine line with two buffering options:

◮ 8 buffers equally sized; and


M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 B7 M8 B8 M9

◮ 2 buffers equally sized.

M1 M2 M3 B3 M4 M5 M6 B6 M7 M8 M9

Examples
Buffer allocation

[Figure: production rate P vs. total buffer space (0–10,000), linear scale; one curve for 8 buffers, one for 2 buffers.]

◮ Continuous model; all machines have r = .019, p = .001, µ = 1.
◮ What are the asymptotes?
◮ Is 8 buffers always faster?

Examples
Buffer allocation

[Figure: the same comparison with total buffer space (1–10,000) on a logarithmic scale.]

◮ Is 8 buffers always faster?
◮ Perhaps not, but the difference is not significant in systems with very small buffers.

Long Lines — More Models
Discrete Material Exponential Processing Time and Continuous
Material Models

◮ New issue: machines may operate at different speeds.


◮ Blockage and starvation may be caused by differences in machine
speeds, not only failures.
◮ Decomposition of these classes of systems is similar to that of
discrete-material, deterministic-processing time lines except
◮ The two-machine lines have machines with 3 parameters (ru (i), pu (i),
µu (i); rd (i), pd (i), µd (i)). More equations — 6(k − 1) — are therefore
needed.
◮ Exponential decomposition is described in the book in detail; continuous material decomposition was not developed until after the book was written.

Long Lines — Exponential Processing Time Model
The observer thinks he is in a two-machine exponential processing time line with
parameters

ru (i)δt = probability that Mu (i) goes from down to up in (t, t + δt), for small δt;

pu (i)δt = probability that Mu (i) goes from up to down in (t, t + δt)


if it is not blocked, for small δt;

µu (i)δt = probability that a piece flows into Bi in (t, t + δt)


when Mu (i) is up and not blocked, for small δt;

rd (i)δt = probability that Md (i) goes from down to up in (t, t + δt), for small δt;

pd (i)δt = probability that Md (i) goes from up to down in (t, t + δt)


if it is not starved, for small δt;

µd (i)δt = probability that a piece flows out of Bi in (t, t + δt)


when Md (i) is up and not starved, for small δt.

Long Lines — Exponential Processing Time Model
Equations

We have 6(k − 1) unknowns, so we need 6(k − 1) equations. They are


◮ Interruption of flow , relating pu (i) to upstream events and pd (i) to
downstream events,
◮ Resumption of flow,
◮ Conservation of flow,
◮ Flow rate/idle time,
◮ Boundary conditions.

All of these, except for the Interruption of Flow equations, are similar to those of
the deterministic processing time case.

Long Lines — Exponential Processing Time Model
Interruption of Flow

The first two sets of equations describe the interruptions of flow caused by
machine failures. By definition,
 

pu(i)δt = prob[αu(i; t + δt) = 0 | αu(i; t) = 1 and ni(t) < Ni],

or,

pu(i)δt = prob[Mu(i) down at t + δt | Mu(i) up and ni < Ni at t].

Long Lines — Exponential Processing Time Model
Interruption of Flow

We define the events that a pseudo-machine is up or down as follows:

Mu (i) is down if

1. Mi is down, or
2. ni −1 = 0 and Mu (i − 1) is down.

Mu (i) is up for all other states of the transfer line upstream of Buffer Bi .
Therefore, Mu (i) is up if

1. Mi is operational and ni −1 > 0, or


2. Mi is operational, ni −1 = 0 and Mu (i − 1) is up.

Long Lines — Exponential Processing Time Model
Interruption of Flow

After a lot of equation manipulation, we get:

pu(i) = pi + ru(i − 1) p(i − 1; 001)/Eu(i)

and similarly,

pd(i) = pi+1 + rd(i + 1) p(i + 1; N10)/Ed(i),

in which p(i − 1; 001) is the steady-state probability that line L(i − 1) is in state (0, 0, 1) and p(i + 1; N10) is the steady-state probability that line L(i + 1) is in state (Ni+1, 1, 0).

Long Lines — Exponential Processing Time Model
Resumption of Flow

ru(i) = ru(i − 1) [pi−1(0, 0, 1) ru(i) µu(i)] / [pu(i) P(i)]
        + ri (1 − [pi−1(0, 0, 1) ru(i) µu(i)] / [pu(i) P(i)]),   i = 2, ..., k − 1

rd(i) = rd(i + 1) [pi+1(Ni+1, 1, 0) rd(i) µd(i)] / [pd(i) P(i)]
        + ri+1 (1 − [pi+1(Ni+1, 1, 0) rd(i) µd(i)] / [pd(i) P(i)]),   i = 1, ..., k − 2

Long Lines — Exponential Processing Time Model
Conservation of Flow

P(i ) = P(1), i = 2, . . . , k − 1.

Long Lines — Exponential Processing Time Model
Flow Rate/Idle Time

The flow rate-idle time relationship is, approximately,

P(i) = ei µi (1 − prob[ni−1 = 0] − prob[ni = Ni]),

which can be transformed into

1/(ei µi) + 1/P = 1/(ed(i − 1) µd(i − 1)) + 1/(eu(i) µu(i)),   i = 2, ..., k − 1.

Long Lines — Exponential Processing Time Model
Flow Rate/Idle Time

For the algorithm, we express it as

µu(i) = (1/eu(i)) · 1/(1/P(i) + 1/(ei µi) − 1/(ed(i − 1) µd(i − 1))),   i = 2, ..., k − 1,

µd(i) = (1/ed(i)) · 1/(1/P(i) + 1/(ei+1 µi+1) − 1/(eu(i + 1) µu(i + 1))),   i = 1, ..., k − 2.

Long Lines — Exponential Processing Time Model
Boundary Conditions

Mu(1) is the same as M1 and Md(k − 1) is the same as Mk. Therefore

ru (1) = r1

pu (1) = p1

µu (1) = µ1

rd (k − 1) = rk

pd (k − 1) = pk

µd (k − 1) = µk
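These 6(k − 1) equations are usually solved by a fixed-point iteration in the style of the Dallery-David-Xie algorithm: sweep forward through the line updating the upstream pseudo-machine parameters, sweep backward updating the downstream ones, and repeat until nothing changes. The Python skeleton below is only a sketch of that control flow; update_upstream and update_downstream stand in for the equation sets above and must be supplied by the caller.

def decompose(machines, update_upstream, update_downstream,
              tol=1e-6, max_iter=10000):
    # machines: list of (r, p, mu) triples for M1, ..., Mk.
    # up[i] and down[i] hold the parameters of Mu(i+1) and Md(i+1);
    # the boundary conditions fix up[0] and down[k-2] permanently.
    k = len(machines)
    up = [list(machines[i]) for i in range(k - 1)]
    down = [list(machines[i + 1]) for i in range(k - 1)]
    for _ in range(max_iter):
        change = 0.0
        for i in range(1, k - 1):            # forward sweep
            new = update_upstream(i, up, down)
            change = max(change, max(abs(a - b) for a, b in zip(new, up[i])))
            up[i] = list(new)
        for i in range(k - 3, -1, -1):       # backward sweep
            new = update_downstream(i, up, down)
            change = max(change, max(abs(a - b) for a, b in zip(new, down[i])))
            down[i] = list(new)
        if change < tol:                     # converged
            break
    return up, down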

Long Lines — Exponential Processing Time
Example

[Figure: line production rate (unit/time) vs. ρ2 (0–1.8); decomposition and simulation curves, with the upper bound .258 marked.]

Parameters:

i    ri    pi    µi    Ni
1    .05   .03   .5    8
2    .06   .04   —     8
3    .05   .03   .5

(µ2, and hence ρ2, varies along the horizontal axis.)

◮ Exponential processing time line — 3 machines


◮ Upper bound determined by smallest ρi .
◮ Simulation satisfies upper bound; decomposition does not. Why?

Long Lines — Continuous Material
M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

Conceptually very similar to exponential processing time model. One


difference:
◮ prob (xi −1 = 0 and xi = Ni ) = 0 exactly .

Long Lines — Continuous Material Model
New approximation

◮ New approximation: The observer sees both pseudo-machines operating at


multiple rates, but the two-machine lines assume single rates.

[Diagram: two-machine line Mu(i) → B(i) → Md(i), with parameters ru(i), pu(i), µu(i) and rd(i), pd(i), µd(i).]

If this were really a two-machine continuous material line,


◮ material would enter the buffer at rate µu (i) (if Mu (i) is up and the buffer is
not full) or µd (i) (if Mu (i) and Md (i) are up and the buffer is full and
µd (i) < µu (i)) or 0;
◮ material would exit the buffer at rate µd (i) (if Md (i) is up and the buffer is
not empty) or µu (i) (if Mu (i) and Md (i) are up and the buffer is empty and
µu (i) < µd (i)) or 0;

Long Lines — Continuous Material
New approximation

M i−2 Bi−2 M i−1 Bi−1 Mi Bi M i+1 Bi+1 M i+2 Bi+2 M i+3

M u (i) M d (i)

Assume that ... < µi −2 < µi −1 < µi < µi +1 < .... Assume all the machines are up and
Bi is not full. Then the observer in Bi actually sees material entering Bi ...
◮ at rate µi if Bi −1 is not empty;
◮ at rate µi −1 if Bi −2 is not empty and Bi −1 is empty;
◮ at rate µi −2 if Bi −3 is not empty and Bi −2 is empty and Bi −1 is empty;
◮ etc.

Therefore, this approximation may break down if the µi are very different.

Long Lines — Continuous Material
Equations

We have the same 6(k − 1) unknowns, so we need 6(k − 1) equations. They are,
as before,
◮ Interruption of flow ,
◮ Resumption of flow,
◮ Conservation of flow,
◮ Flow rate/idle time,
◮ Boundary conditions.

They are the same as in the exponential processing time case except for the
Interruption of Flow equations.

Long Lines — Continuous Material
Interruption of Flow

Considerable manipulation leads to

pu(i) = pi (1 + [pi−1(0, 1, 1) µu(i) / (P(i) − pi(Ni, 1, 1) µd(i))] (µu(i − 1)/µi − 1))
        + [pi−1(0, 0, 1) µu(i) / (P(i) − pi(Ni, 1, 1) µd(i))] ru(i − 1),   i = 2, ..., k − 1

and, similarly,

pd(i) = pi+1 (1 + [pi+1(Ni+1, 1, 1) µd(i) / (P(i) − pi(0, 1, 1) µu(i))] (µd(i + 1)/µi+1 − 1))
        + [pi+1(Ni+1, 1, 0) µd(i + 1) / (P(i) − pi(0, 1, 1) µu(i))] rd(i + 1),   i = 1, ..., k − 2

To come

◮ Assembly/Disassembly Systems
◮ Buffer Optimization
◮ Effect of Buffers on Quality
◮ Loops
◮ Real-Time Control
◮ ????

MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT 2.852

Manufacturing Systems Analysis

Lectures 18–19
Loops
Stanley B. Gershwin

Spring, 2007

Problem Statement

B1 M2 B2 M3 B3

M1 M4

B6 M6 B5 M5 B4

• Finite buffers (0 ≤ ni(t) ≤ Ni).
• Single closed loop: fixed population (Σi ni(t) = N).
• Focus is on the Buzacott model (deterministic processing time; geometric up and down times). Repair probability = ri; failure probability = pi. Many results are true for more general loops.
• Goal: calculate production rate and inventory distribution.

Problem Statement — Motivation

• Limited pallets/fixtures.
• CONWIP (or hybrid) control systems.
• Extension to more complex systems and policies.

Two-Machine Loops
Special Case

Refer to MSE Section 5.6, page 205.

[Diagram: two-machine loop (M1 with r1, p1; M2 with r2, p2; buffers B1 of size N1 and B2 of size N2; population N) reduced to the two-machine line M1 → B → M2 with buffer size N*.]

P^loop(r1, p1, r2, p2, N1, N2) = P^line(r1, p1, r2, p2, N*)

where

N* = min(N, N1) − max(0, N − N2).
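A quick check of the buffer-size formula in Python (the function name is mine):

def equivalent_buffer_size(N, N1, N2):
    # The level of B1 is confined to [max(0, N - N2), min(N, N1)],
    # a window of width N* = min(N, N1) - max(0, N - N2).
    return min(N, N1) - max(0, N - N2)

print(equivalent_buffer_size(25, 20, 10))   # population 25 -> N* = 5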

Two-Machine Loops
Special Case
[Figure: production rate vs. population (0–60), for r1 = .14, p1 = .01, r2 = .1, p2 = .01, N1 = 20; one curve for each of N2 = 10, 15, 20, 30, 40.]

Expected population method

• Treat the loop as a line in which the first machine


and the last are the same.
• In the resulting decomposition, one equation is
missing.
• The missing equation is replaced by the expected population constraint (Σi n̄i = N).


Expected population method

Evaluate i, i − 1, i + 1 modulo k (ie, replace 0 by k and replace k + 1 by 1).


ru(i) = ru(i − 1)X(i) + ri(1 − X(i)),   where X(i) = ps(i − 1) ru(i) / (pu(i) E(i − 1))

pu(i) = ru(i) [1/E(i − 1) + 1/ei − 2 − pd(i − 1)/rd(i − 1)],   i = 1, ..., k

rd(i) = rd(i + 1)Y(i + 1) + ri+1(1 − Y(i + 1)),   where Y(i + 1) = pb(i + 1) rd(i) / (pd(i) E(i + 1))

pd(i) = rd(i) [1/E(i + 1) + 1/ei+1 − 2 − pu(i + 1)/ru(i + 1)],   i = k, ..., 1

This is 4k equations in 4k unknowns. But only 4k − 1 of them are independent because the derivation uses E(i) = E(i + 1) for i = 1, ..., k. The first k − 1 are E(1) = E(2), E(2) = E(3), ..., E(k − 1) = E(k). But this implies E(k) = E(1), which is the same as the kth equation.
Expected population method

Therefore, we need one more equation. We can use

Σi n̄i = N.

If we guess pu(1) (say), we can evaluate n̄TOTAL = Σi n̄i as a function of pu(1). We search for the value of pu(1) such that n̄TOTAL = N.
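A minimal sketch of that search in Python, assuming a hypothetical function mean_total_population(pu1) that runs the decomposition with the guessed pu(1) and returns Σi n̄i, and assuming the total decreases as pu(1) grows:

def find_pu1(mean_total_population, N, lo, hi, tol=1e-8):
    # Bisection on pu(1); flip the comparison if the expected total
    # population increases with pu(1) in your model.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_total_population(mid) > N:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)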

Expected population method

Behavior:

• Accuracy good for large systems, not so good for


small systems.
• Accuracy good for intermediate-size populations; not
so good for very small or very large populations.

Expected population method

• Hypothesis: The reason for the accuracy behavior of the population constraint method is the correlation in the buffers.
  – The number of parts in the system is actually constant.
  – The expected population method treats the population as random, with a specified mean.
  – If we know that a buffer is almost full, we know that there are fewer parts in the rest of the network, so probabilities of blockage are reduced and probabilities of starvation are increased. (Similarly if it is almost empty.)
  – Suppose the population is smaller than the smallest buffer. Then there will be no blockage. The expected population method does not take this into account.

Loop Behavior

To construct a method that deals with the invariant


(rather than the expected value of the invariant), we
investigate how buffer levels are related to one another
and to the starvation and blocking of machines.
In a line, every downstream machine could block a
given machine, and every upstream machine could
starve it. In a loop, blocking and starvation are more
complicated.

Ranges
Loop Behavior

B1 M2 B2 M3 B3

M1 M4

B6 M6 B5 M5 B4

• The range of blocking of a machine is the set of all machines

that could block it if they stayed down for a long enough time.

• The range of starvation of a machine is the set of all machines

that could starve it if they stayed down for a long enough time.

Ranges
Loop Behavior
Range of Blocking

10 10 10

B1 M2 B2 M3 B3

M1 M4
7 0 0

B6 M6 B5 M5 B4

• All buffer sizes are 10.


• Population is 37.
• If M4 stays down for a long time, it will block M1.
• Therefore M4 is in the range of blocking of M1.
• Similarly, M2 and M3 are in the range of blocking of M1.
Ranges
Loop Behavior
Range of starvation

7 10 10

B1 M2 B2 M3 B3

M1 M4
0 0 10

B6 M6 B5 M5 B4

• If M5 stays down for a long time, it will starve M1.


• Therefore M5 is in the range of starvation of M1.
• Similarly, M6 is in the range of starvation of M1.

Ranges
Loop Behavior
Line

• The range of blocking of a machine in a line is the entire


downstream part of the line.
• The range of starvation of a machine in a line is the entire
upstream part of the line.

Ranges
Loop Behavior
Line
In an acyclic network, if Mj is downstream of Mi, then the range
of blocking of Mj is a subset of the range of blocking of Mi.

Mi Mj

Range of blocking of M j

Range of blocking of M i

Similarly for the range of starvation.


c 2007 Stanley B. Gershwin.
Copyright � 16
Ranges
Loop Behavior
Line
In an acyclic network, if Mj is downstream of Mi, any real machine whose
failure could cause Md(j) to appear to be down could also cause Md(i) to
appear to be down.
[Diagram: Mi → Bi → ... → Mj, with downstream pseudo-machines Md(i) and Md(j).]

Consequently, we can express rd(i) as a function of the parameters of L(j).


This is not possible in a network with a loop because some machine that
blocks Mj does not block Mi.

Ranges
Loop Behavior
Difficulty for decomposition

[Diagram: the six-machine loop drawn twice, with decomposition lines L(1) (Mu(1), B(1), Md(1)) and L(6) (Md(6), B(6), Mu(6)); the ranges of blocking of M6 and M1 and the ranges of starvation of M1 and M2 are marked.]

Ranges of blocking and starvation of adjacent machines are not subsets or supersets of one another in a loop.

Ranges
Loop Behavior
Difficulty for decomposition

[Diagram: the loop with buffer levels B1 = B2 = B3 = 10, B4 = B5 = 0, B6 = 7; the ranges of blocking of M1 and of M2 are marked.]

M5 can block M2. Therefore the parameters of M5 should directly affect the
parameters of Md(1) in a decomposition. However, M5 cannot block M1 so
the parameters of M5 should not directly affect the parameters of Md(6).
Therefore, the parameters of Md(6) cannot be functions of the parameters of
Md(1).

Multiple Failure Mode Line Decomposition

• To deal with this issue, we introduce a new


decomposition.
• In this decomposition, we do not create failures of
virtual machines that are mixtures of failures of real
machines.
• Instead, we allow the virtual machines to have
distinct failure modes, each one corresponding to the
failure mode of a real machine.

Multiple Failure Mode Line Decomposition
[Diagram: a line whose machines have failure modes 1,2 | 3 | 4 | 5,6,7 | 8 | 9,10, decomposed into a two-machine line whose upstream machine has modes 1,2,3,4 and whose downstream machine has modes 5,6,7,8,9,10.]

• There is an observer in each buffer who is told that


he is actually in the buffer of a two-machine line.

Multiple Failure Mode Line Decomposition
[Diagram: the failure-mode decomposition, as above.]

• Each machine in the original line may have multiple failure modes; each machine in the two-machine lines must have multiple failure modes.
• For each failure mode downstream of a given buffer, there is a
corresponding mode in the downstream machine of its
two-machine line.
• Similarly for upstream modes.

Multiple Failure Mode Line Decomposition
[Diagram: the failure-mode decomposition, as above.]

• The downstream failure modes appear to the observer after


propagation through blockage .
• The upstream failure modes appear to the observer after
propagation through starvation .
• The two-machine lines are more complex than in earlier decompositions, but the decomposition equations are simpler.

Multiple Failure Mode Line Decomposition — Two-Machine Line

[Diagram: Markov chain of the two-machine line's states, with multiple up and down modes.]
Form a Markov chain and find the steady-state probability


distribution. The solution technique is very similar to that of the
two-machine-state model. Determine the production rate,
probability of starvation and probability of blocking in each down
mode, average inventory.
Multiple Failure Mode Line Decomposition — Line Decomposition

• A set of decomposition equations is formulated.
• They are solved by a Dallery-David-Xie-like algorithm.
• The results are a little more accurate than those of earlier methods, especially for machines with very different failures.

Multiple Failure Mode Line Decomposition — Line Decomposition
[Diagram: the failure-mode decomposition, as above.]

• In the upstream machine of the building block, failure mode 4 is a local


mode; modes 1, 2, and 3 are remote modes. Modes 5, 6, and 7 are local
modes of the downstream machine; 8, 9, and 10 are remote modes.
• For every mode, the repair probability is the same as the repair probability
of the corresponding mode in the real line.
• Local modes: the probability of failure into a local mode is the same as the
probability of failure in that mode of the real machine.

Multiple Failure Mode Line Decomposition — Line Decomposition
[Diagram: the failure-mode decomposition, as above.]

• Remote modes: i is the building block number; j and f are the machine number and mode number of a remote failure mode. Then

pu jf (i) = rjf Ps,jf (i − 1)/E(i);   pd jf (i − 1) = rjf Pb,jf (i)/E(i − 1),

where pu jf (i) is the probability of failure of the upstream machine into mode jf; Ps,jf (i − 1) is the probability of starvation of line i − 1 due to mode jf; rjf is the probability of repair of the upstream machine from mode jf; etc.
• Also, E(i − 1) = E(i).
• pu jf (i) and pd jf (i) are used to evaluate E(i), Ps,jf (i), Pb,jf (i) from two-machine line i in an iterative method.

Multiple Failure Mode Line Decomposition — Line Decomposition
Consider

pu jf (i) = rjf Ps,jf (i − 1)/E(i);   pd jf (i − 1) = rjf Pb,jf (i)/E(i − 1).

In a line, jf refers to all modes of all upstream machines in the first equation, and all modes of all downstream machines in the second equation.

We can interpret the upstream machines as the range of starvation and the downstream machines as the range of blockage of the line.

Multiple Failure Mode Line Decomposition — Extension to Loops

B1 M2 B2 M3 B3

M1 M4

B6 M6 B5 M5 B4

• Use the multiple-mode decomposition, but adjust the


ranges of blocking and starvation accordingly.
• However, this does not take into account the local
information that the observer has.
Thresholds

[Diagram: the six-machine loop with buffer levels B1 = B2 = B3 = 10, B4 = 0, B5 = 2, B6 = 5; below, the decomposition line for B6 with pseudo-machines Md(6) and Mu(6) and 5 parts in the buffer.]

• The B6 observer knows how many parts there are in his buffer.
• If there are 5, he knows that the modes he sees in Md(6) could be those corresponding to the modes of M1, M2, M3, and M4.

Thresholds

[Diagram: the loop with buffer levels B1 = 10, B2 = 10, B3 = 9, B4 = 0, B5 = 0, B6 = 8; the decomposition line for B6 with 8 parts in the buffer.]

• However, if there are 8, he knows that the modes he sees in Md(6) could only be those corresponding to the modes of M1, M2, and M3; and not those of M4.
• The transition probabilities of the two-machine line therefore depend on whether the buffer level is less than 7 or not.

Thresholds

• This would require a new model of a two-machine


line.
• The same issue arises for starvation.
• In general, there can be more than one threshold in
a buffer.
• Consequently, this makes the two-machine line very
complicated.

Transformation

• Purpose: to avoid the complexities caused by


thresholds.
• Idea: Wherever there is a threshold in a buffer,
break up the buffer into smaller buffers separated by
perfectly reliable machines.

Transformation

buffer size
• When M1 fails for a long time,
20 3
B4 and B3 fill up, and there
B1 M2 B2
18
13 1
is one part in B2. Therefore
there is a threshold of 1 in B2.
• When M2 fails for a long time,
M1
M3 B1 fills up, and there is one
part in B4. Therefore there is
a threshold of 1 in B4.
1

B4 M4 B3
• When M3 fails for a long time,
15 5 B2 fills up, and there are 18
threshold population = 21
parts in B1. Therefore there is
c 2007 Stanley B. Gershwin.
Copyright �
a threshold of 18 in B1. 34
Transformation

[Diagram: the same loop (B1 = 20, B2 = 3, B3 = 5, B4 = 15), population = 21.]

• When M4 fails for a long time, B3 and B2 fill up, and there are 13 parts in B1. Therefore there is a threshold of 13 in B1.
• Note: B1 has two thresholds and B3 has none.
• Note: The number of thresholds equals the number of machines.

Transformation

[Diagram: the loop redrawn with each buffer expanded into unit buffers separated by perfectly reliable machines; the marked reliable machines M*1, M*2, M*3, M*4 sit at the threshold positions.]

• Break up each buffer into a sequence of buffers of size 1 and


reliable machines.
• Count backwards from each real machine the number of
buffers equal to the population.
• Identify the reliable machine that the count ends at. (A counting sketch follows below.)
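The counting rule fits in a few lines of Python. The sketch below numbers the unit buffers 0, ..., ΣNi − 1 around the loop, with B1 first and machine Mi+1 immediately after the slots of Bi; the slot convention and all names are mine.

def threshold_marks(buffer_sizes, population):
    # For each real machine, count `population` unit buffers upstream
    # around the expanded loop; the reliable machine just upstream of
    # the returned slot is the one to mark.
    total = sum(buffer_sizes)
    k = len(buffer_sizes)
    marks, cum = {}, 0
    for i, n in enumerate(buffer_sizes):
        cum += n
        downstream_machine = (i + 1) % k + 1   # M2 after B1, ..., M1 after Bk
        marks["M%d" % downstream_machine] = (cum - population) % total
    return marks

# Slide example: sizes (B1, B2, B3, B4) = (20, 3, 5, 15), population 21.
# Slot 2 corresponds to the threshold of 18 in B1, slot 7 to the 13 in B1,
# slot 22 to the 1 in B2, and slot 42 to the 1 in B4.
print(threshold_marks([20, 3, 5, 15], 21))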
Transformation

[Diagram: the expanded loop with the marked machines M*1, ..., M*4 retained; the runs of unmarked reliable machines and unit buffers between machines are about to be collapsed into the larger buffers B1, ..., B4.]

• Collapse all the sequences of unmarked reliable


machines and buffers of size 1 into larger buffers.
Transformation

• Ideally, this would be equivalent to the original


system.
• However, the reliable machines cause a delay, so
transformation is not exact for the discrete/
deterministic case.
• This transformation is exact for continuous-material
machines.

Transformation — Small populations

• If the population is smaller than the largest buffer, at


least one machine will never be blocked.
• However, that violates the assumptions of the
two-machine lines.
• We can reduce the sizes of the larger buffers so that
no buffer is larger than the population. This does not
change performance.

Numerical Results — Accuracy

• Many cases were compared with simulation:
  – Three-machine cases: all throughput errors under 1%; buffer level errors averaged 3%, but were as high as 10%.
  – Six-machine cases: mean throughput error 1.1% with a maximum of 2.7%; average buffer level error 5% with a maximum of 21%.
  – Ten-machine cases: mean throughput error 1.4% with a maximum of 4%; average buffer level error 6% with a maximum of 44%.

Numerical Results — Other algorithm attributes

• Convergence reliability: almost always.


• Speed: execution time increases rapidly with loop
size.
• Maximum size system: 18 machines. Memory
requirements grow rapidly also.

Numerical Results — The Batman Effect
[Figure: throughput vs. population (0–30); the decomposition and simulation curves nearly coincide, but the decomposition curve shows apparent discontinuities.]

• Error is very small, but there are apparent discontinuities.
• This is because we cannot deal with buffers of size 1, and because we do not need to introduce reliable machines in cases where there would be no thresholds.

Numerical Results — Behavior
B1 M2 B2

* M1 M3

B4 M4 B3

• All buffer sizes 10. Population 15. Identical machines


except for M1.
• Observe average buffer levels and production rate as
a function of r1.
Numerical Results — Behavior

[Figure: production rate vs. r1 (0–1).]
• Production rate vs. r1.


• Usual saturating graph.
Numerical Results — Behavior

[Figure: average buffer levels n̄1, ..., n̄4 vs. r1 (0–1), for the four-machine loop B1 M2 B2 / M1 M3 / B4 M4 B3.]

• When r1 is small, M1 is a bottleneck, so B4 holds 10

parts, B3 holds 5 parts, and the others are empty.

• As r1 increases, material is more evenly distributed.

When r1 = 0.1, the network is totally symmetrical.

Applications

• Design system with pallets/fixtures. The fixtures and


the space to hold them in are expensive.
• Design system with tokens/kanbans (CONWIP). By
limiting population, we reduce production rate, but
we also reduce inventory.

MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT 2.852
Manufacturing Systems Analysis
Lectures 2–5: Probability
Basic probability, Markov processes, M/M/1 queues, and more

Stanley B. Gershwin

http://web.mit.edu/manuf-sys

Massachusetts Institute of Technology

Spring, 2010

Probability and Statistics
Trick Question

I flip a coin 100 times, and it shows heads every time.

Question: What is the probability that it will show heads on the next flip?

Probability and Statistics

Probability ≠ Statistics

Probability: mathematical theory that describes uncertainty.


Statistics: set of techniques for extracting useful information from data.

Interpretations of probability
Frequency

The probability that the outcome of an experiment is A is prob (A)

if the experiment is performed a large number of times and the fraction of


times that the observed outcome is A is prob (A).

Interpretations of probability
Parallel universes

The probability that the outcome of an experiment is A is prob (A)

if the experiment is performed in each parallel universe and the fraction of


universes in which the observed outcome is A is prob (A).

Interpretations of probability
Betting Odds

The probability that the outcome of an experiment is A is


prob (A) = P(A)

if, before the experiment is performed, a risk-neutral observer would be willing to bet $1 against more than $(1 − P(A))/P(A).

Interpretations of probability
State of belief

The probability that the outcome of an experiment is A is prob (A)

if that is the opinion (ie, belief or state of mind) of an observer before the
experiment is performed.

Interpretations of probability
Abstract measure

The probability that the outcome of an experiment is A is prob (A)

if prob () satisfies a certain set of axioms.

Interpretations of probability
Abstract measure

Axioms of probability

Let U be a set of samples . Let E1 , E2 , ... be subsets of U. Let φ be the


null set (the set that has no elements).
◮ 0 ≤ prob (Ei ) ≤ 1
◮ prob (U) = 1
◮ prob (φ) = 0
◮ If Ei ∩ Ej = φ, then prob (Ei ∪ Ej ) = prob (Ei ) + prob (Ej )

Probability Basics

◮ Subsets of U are called events.

◮ prob (E ) is the probability of E .

Probability Basics

◮ If ∪i Ei = U, and
◮ Ei ∩ Ej = φ for all i and j,
◮ then Σi prob(Ei) = 1.

Probability Basics
Set Theory

Venn diagrams

prob (Ā) = 1 − prob (A)

Probability Basics
Set Theory

Venn diagrams

[Venn diagram: sets A and B in universe U, with A ∩ B and A ∪ B indicated.]

prob (A ∪ B) = prob (A) + prob (B) − prob (A ∩ B)

Probability Basics
Independence

A and B are independent if

prob (A ∩ B) = prob (A) prob (B).

Probability Basics
Conditional Probability

prob(A|B) = prob(A ∩ B)/prob(B)

[Venn diagram: sets A and B in universe U, with A ∩ B indicated.]

prob(A ∩ B) = prob(A|B) prob(B).

Probability Basics
Conditional Probability

Example
Throw a die.
◮ A is the event of getting an odd number (1, 3, 5).
◮ B is the event of getting a number less than or equal to 3 (1, 2, 3).
Then prob(A) = prob(B) = 1/2 and prob(A ∩ B) = prob({1, 3}) = 1/3.
Also, prob(A|B) = prob(A ∩ B)/prob(B) = 2/3.

Probability Basics
Conditional Probability

Note: prob (A|B) being large does not mean that B causes A. It only
means that if B occurs it is probable that A also occurs. This could be
due to A and B having similar causes.

Similarly prob (A|B) being small does not mean that B prevents A.

Probability Basics
Law of Total Probability
[Venn diagram: B = C ∪ D with C ∩ D = φ; A overlaps both C and D.]

◮ Let B = C ∪ D and assume C ∩ D = φ. We have

prob(A|C) = prob(A ∩ C)/prob(C) and prob(A|D) = prob(A ∩ D)/prob(D).

◮ Also

prob(C|B) = prob(C ∩ B)/prob(B) = prob(C)/prob(B) because C ∩ B = C.

Similarly, prob(D|B) = prob(D)/prob(B).
Probability Basics
Law of Total Probability

[Venn diagram: A and B in U.]

A ∩ B = A ∩ (C ∪ D) = (A ∩ C) ∪ (A ∩ D), and since C ∩ D = φ the two pieces are disjoint. Therefore,

prob(A ∩ B) = prob(A ∩ C) + prob(A ∩ D).


Probability Basics
Law of Total Probability

◮ Or,

prob(A|B) prob(B) = prob(A|C) prob(C) + prob(A|D) prob(D),

so

prob(A|B) = prob(A|C) prob(C|B) + prob(A|D) prob(D|B).

Probability Basics
Law of Total Probability

An important case is when C ∪ D = B = U, so that A ∩ B = A. Then

prob (A)
= prob (A ∩ C ) + prob (A ∩ D)
= prob (A|C ) prob (C ) + prob (A|D) prob (D).

[Venn diagrams: U partitioned into C and D = C̄, with A intersecting both.]

Probability Basics
Law of Total Probability

More generally, if A and E1, . . ., Ek are events and

Ei ∩ Ej = ∅ for all i ≠ j

and

∪j Ej = the universal set

(ie, the set of Ej sets is mutually exclusive and collectively exhaustive), then ...

[Venn diagram: U partitioned into the sets Ej, with A intersecting several of them.]
Probability Basics
Law of Total Probability


Σj prob(Ej) = 1

and

prob(A) = Σj prob(A|Ej) prob(Ej).

Probability Basics
Law of Total Probability

Some useful generalizations:



prob(A|B) = Σj prob(A|B and Ej) prob(Ej|B),

prob(A and B) = Σj prob(A|B and Ej) prob(Ej and B).

Probability Basics
Random Variables

Let V be a vector space. Then a random variable X is a mapping (a


function) from U to V .

If ω ∈ U and x = X (ω) ∈ V , then X is a random variable.

Probability Basics
Random Variables

Flip of One Coin

Let U = {H, T}. Let ω = H if we flip a coin and get heads; ω = T if we flip a coin and get tails.

Let X (ω) be the number of times we get heads. Then X (ω) = 0 or 1.

prob (ω = T ) = prob (X = 0) = 1/2

prob (ω = H ) = prob (X = 1) = 1/2

Probability Basics
Random Variables

Flip of Three Coins

Let U = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}.


Let ω = HHH if we flip 3 coins and get 3 heads; ω = HHT if we flip 3 coins and
get 2 heads and then tails, etc. The order matters!
◮ prob (ω) = 1/8 for all ω.

Let X be the number of heads. The order does not matter! Then X = 0, 1, 2,
or 3.
◮ prob (X = 0)=1/8; prob (X = 1)=3/8; prob (X = 2)=3/8;
prob (X = 3)=1/8.

Probability Basics
Random Variables

Probability Distributions Let X (ω) be a random variable. Then


prob (X (ω) = x) is the probability distribution of X (usually written
P(x)). For three coin flips:
[Figure: bar chart of P(x) for three coin flips: P(0) = 1/8, P(1) = 3/8, P(2) = 3/8, P(3) = 1/8.]

Dynamic Systems

◮ t is the time index, a scalar. It can be discrete or continuous.


◮ X (t) is the state.
◮ The state can be scalar or vector.
◮ The state can be discrete or continuous or mixed.
◮ The state can be deterministic or random.

X is a stochastic process if X (t) is a random variable for every t.

The value of X is sometimes written explicitly as X (t, ω) or X ω (t).

Discrete Random Variables
Bernoulli

Flip a biased coin. If X B is Bernoulli, then there is a p such that

prob(X B = 0) = p.

prob(X B = 1) = 1 − p.

Discrete Random Variables
Binomial

The sum of n independent Bernoulli random variables XiB with the same parameter p is a binomial random variable Xb:

Xb = Σ_{i=1}^{n} XiB,

prob(Xb = x) = [n!/(x!(n − x)!)] p^x (1 − p)^(n−x).
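As a quick numeric check in Python (keeping the slide's parameterization):

from math import comb

def binomial_pmf(x, n, p):
    # prob(Xb = x) = C(n, x) * p^x * (1 - p)^(n - x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

print(binomial_pmf(2, 3, 0.5))   # three fair coin flips: 3/8 = 0.375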

Discrete Random Variables
Geometric

The number of independent Bernoulli random variables XiB tested until the first 0 appears is a geometric random variable Xg:

Xg = min{i : XiB = 0}.

To calculate prob(Xg = t):

◮ For t = 1, we know prob(XB = 0) = p. Therefore prob(Xg > 1) = 1 − p.

Discrete Random Variables
Geometric

◮ For t > 1,

prob(Xg > t) = prob(Xg > t | Xg > t − 1) prob(Xg > t − 1) = (1 − p) prob(Xg > t − 1),

so

prob(Xg > t) = (1 − p)^t

and

prob(Xg = t) = (1 − p)^(t−1) p.

Discrete Random Variables
Geometric

Alternative view

[Diagram: two-state system; transition 1 → 0 with probability p; self-loop on state 1 with probability 1 − p; state 0 absorbing.]

Consider a two-state system. The system can go from 1 to 0, but not from 0 to 1.

Let p be the conditional probability that the system is in state 0 at time t + 1, given that it is in state 1 at time t. That is,

p = prob[α(t + 1) = 0 | α(t) = 1].

Discrete Random Variables
Geometric

Let p(α, t) be the probability of the system being in state α at time t. Then, since

p(0, t + 1) = prob[α(t + 1) = 0 | α(t) = 1] prob[α(t) = 1] + prob[α(t + 1) = 0 | α(t) = 0] prob[α(t) = 0]

(Why?), we have

p(0, t + 1) = p p(1, t) + p(0, t),
p(1, t + 1) = (1 − p) p(1, t),

and the normalization equation

p(1, t) + p(0, t) = 1.

Discrete Random Variables
Geometric

Assume that p(1, 0) = 1. Then the solution is

p(0, t) = 1 − (1 − p)^t,
p(1, t) = (1 − p)^t.
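The solution is easy to confirm by simulating the two-state system in Python (the parameter values here are arbitrary):

import random

p, T, trials = 0.1, 30, 100_000
analytic = [(1 - p)**t for t in range(T)]        # p(1, t)

up_counts = [0] * T
for _ in range(trials):
    state = 1
    for t in range(T):
        up_counts[t] += state                    # record before transition
        if state == 1 and random.random() < p:
            state = 0

print(analytic[10], up_counts[10] / trials)      # should nearly agree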

Discrete Random Variables
Geometric

[Figure: geometric distribution; p(0, t) rises from 0 toward 1 and p(1, t) decays from 1 toward 0 as t runs from 0 to 30.]

Discrete Random Variables
Geometric

Recall that once the system makes the transition from 1 to 0 it can never go back. The probability that the transition takes place at time t is

prob[α(t) = 0 and α(t − 1) = 1] = (1 − p)^(t−1) p.

The time of the transition from 1 to 0 is said to be geometrically


distributed with parameter p. The expected transition time is 1/p. (Prove
it!)

Note: If the transition represents a machine failure, then 1/p is the Mean
Time to Fail (MTTF). The Mean Time to Repair (MTTR) is similarly
calculated.

Discrete Random Variables
Geometric

Memorylessness: if T is the transition time,

prob (T > t + x|T > x) = prob (T > t).

Digression: Difference Equations
Definition

A difference equation is an equation of the form

x(t + 1) = f (x(t), t)
where t is an integer and x(t) is a real or complex vector.
To determine x(t), we must also specify additional information, for example
initial conditions:

x(0) = c
Difference equations are similar to differential equations. They are easier to solve
numerically because we can iterate the equation to determine x(1), x(2), .... In fact,
numerical solutions of differential equations are often obtained by approximating them
as difference equations.

Digression: Difference Equations
Special Case

A linear difference equation with constant coefficients is one of the form

x(t + 1) = Ax(t)

where A is a square matrix of appropriate dimension.

Solution:

x(t) = A^t c.

However, this form of the solution is not always convenient.
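For example, in Python (the matrix and initial condition are arbitrary):

import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
c = np.array([1.0, 0.0])

x = c.copy()
for _ in range(50):          # iterate x(t+1) = A x(t)
    x = A @ x

# agrees with the closed form x(t) = A^t c
print(x, np.linalg.matrix_power(A, 50) @ c)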

Digression: Difference Equations
Special Case

We can also write

x(t) = b1 λ1^t + b2 λ2^t + ... + bk λk^t,

where k is the dimensionality of x, λ1, λ2, ..., λk are scalars and b1, b2, ..., bk are vectors. The bj satisfy

c = b1 + b2 + ... + bk
λ1 , λ2 , ..., λk are the eigenvalues of A and b1 , b2 , ..., bk are its eigenvectors, but we don’t
always have to use that explicitly to determine them. This is very similar to the solution
of linear differential equations with constant coefficients.

Digression: Difference Equations
Special Case

The typical solution technique is to guess a solution of the form

x(t) = b λ^t
and plug it into the difference equation. We find that λ must satisfy a kth
order polynomial, which gives us the k λs. We also find that b must
satisfy a set of linear equations which depends on λ.
Examples and variations will follow.

Markov processes

◮ A Markov process is a stochastic process in which the probability of


finding X at some value at time t + δt depends only on the value of
X at time t.
◮ Or, let x(s), s ≤ t, be the history of the values of X before time t and
let A be a set of possible values of X (t + δt). Then

prob {X (t + δt) ∈ A|X (s) = x(s), s ≤ t} =

prob {X (t + δt) ∈ A|X (t) = x(t)}


◮ In words: if we know what X was at time t, we don’t gain any more
useful information about X (t + δt) by also knowing what X was at
any time earlier than t.

Markov processes
States and transitions

Discrete state, discrete time


◮ States can be numbered 0, 1, 2, 3, ... (or with multiple indices if that
is more convenient).
◮ Time can be numbered 0, 1, 2, 3, ... (or 0, Δ, 2Δ, 3Δ, ... if more
convenient).
◮ The probability of a transition from j to i in one time unit is often
written Pij , where
Pij = prob{X (t + 1) = i |X (t) = j}

Markov processes
States and transitions

Discrete state, discrete time


Transition graph

[Figure: transition graph on states 1, 2, ..., 7; edges labeled P14, P24, P64, P45; the self-loop on state 4 is labeled 1 − P14 − P24 − P64.]

Pij is a probability. Note that Pii = 1 − Σ_{m≠i} Pmi.
Markov processes
States and transitions

Discrete state, discrete time


◮ Define pi(t) = prob{X(t) = i}.
◮ {pi(t) for all i} is the probability distribution at time t.
◮ Transition equations: pi(t + 1) = Σj Pij pj(t) (see the sketch below).
◮ Initial condition: pi(0) specified. For example, if we observe that the system is in state j at time 0, then pj(0) = 1 and pi(0) = 0 for all i ≠ j.
◮ Let the current time be 0. The probability distribution at time t > 0 describes our state of knowledge at time 0 about what state the system will be in at time t.
◮ Normalization equation: Σi pi(t) = 1.

Markov processes
States and transitions

Discrete state, discrete time


◮ Steady state: pi = lim_{t→∞} pi(t), if it exists.
◮ Steady-state transition equations: pi = Σj Pij pj.
◮ Steady state probability distribution:
◮ Very important concept, but different from the usual concept of steady
state.
◮ The system does not stop changing or approach a limit.
◮ The probability distribution stops changing and approaches a limit.

Markov processes
States and transitions

Discrete state, discrete time

Steady state probability distribution: Consider a typical (?) Markov process.


Look at a system at time 0.
◮ Pick a state. Any state.

◮ The probability of the system being in that state at time 1 is very different from
the probability of it being in that state at time 2, which is very different from it
being in that state at time 3.

◮ The probability of the system being in that state at time 1000 is very close to the
probability of it being in that state at time 1001, which is very close to the
probability of it being in that state at time 2000.
Then, the system has reached steady state at time 1000.

Markov processes
States and transitions

Discrete state, discrete time


Transition equations are valid for steady-state and non-steady-state conditions.

[Figure: transition graph; self-loops suppressed for clarity.]

Markov processes
States and transitions
Discrete state, discrete time
Balance equations — steady-state only. Probability of leaving node i = probability of entering node i:

pi Σ_{m≠i} Pmi = Σ_{j≠i} Pij pj.

(Prove it!)

Markov processes
Unreliable machine

1=up; 0=down.
[Diagram: two-state machine; 1 → 0 with probability p, 0 → 1 with probability r; self-loops 1 − p on state 1 and 1 − r on state 0.]

Markov processes
Unreliable machine

The probability distribution satisfies

p(0, t + 1) = p(0, t)(1 − r ) + p(1, t)p,


p(1, t + 1) = p(0, t)r + p(1, t)(1 − p).

Markov processes
Unreliable machine

Solution

Guess

p(0, t) = a(0) X^t,
p(1, t) = a(1) X^t.

Then

a(0) X^(t+1) = a(0) X^t (1 − r) + a(1) X^t p,
a(1) X^(t+1) = a(0) X^t r + a(1) X^t (1 − p).
Markov processes
Unreliable machine
Solution

Or,

a(0) X = a(0)(1 − r) + a(1) p,
a(1) X = a(0) r + a(1)(1 − p),

or,

X = 1 − r + (a(1)/a(0)) p,
X = (a(0)/a(1)) r + 1 − p,

so

X = 1 − r + rp/(X − 1 + p),

or,

(X − 1 + r)(X − 1 + p) = rp.

Markov processes
Unreliable machine

Solution

Two solutions:

X = 1 and X = 1 − r − p.

If X = 1, a(1)/a(0) = r/p. If X = 1 − r − p, a(1)/a(0) = −1. Therefore

p(0, t) = a1(0) X1^t + a2(0) X2^t = a1(0) + a2(0)(1 − r − p)^t,

p(1, t) = a1(1) X1^t + a2(1) X2^t = (r/p) a1(0) − a2(0)(1 − r − p)^t.

Markov processes
Unreliable machine
Solution
To determine a1(0) and a2(0), note that

p(0, 0) = a1(0) + a2(0),
p(1, 0) = (r/p) a1(0) − a2(0).

Therefore

p(0, 0) + p(1, 0) = 1 = a1(0) + (r/p) a1(0) = ((r + p)/p) a1(0),

so

a1(0) = p/(r + p) and a2(0) = p(0, 0) − p/(r + p).

Markov processes
Unreliable machine

Solution

After more simplification and some beautification,

p(0, t) = p(0, 0)(1 − p − r)^t + [p/(r + p)] (1 − (1 − p − r)^t),

p(1, t) = p(1, 0)(1 − p − r)^t + [r/(r + p)] (1 − (1 − p − r)^t).

Markov processes
Unreliable machine

Solution

[Figure: discrete-time unreliable machine; p(0, t) and p(1, t) vs. t (0–100), converging to the steady-state probabilities.]

Markov processes
Unreliable machine

Steady-state solution
As t → ∞,
p(0) → p/(r + p),
p(1) → r/(r + p),
which is the solution of

p(0) = p(0)(1 − r ) + p(1)p,


p(1) = p(0)r + p(1)(1 − p).

Markov processes
Unreliable machine

Steady-state solution
If the machine makes one part per time unit when it is operational, the
average production rate is
p(1) = r/(r + p) = 1/(1 + p/r).
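In Python (a one-line check; the parameter values are arbitrary):

def production_rate(r, p):
    # steady-state probability of being up: r/(r + p) = 1/(1 + p/r)
    return r / (r + p)

print(production_rate(0.1, 0.01))   # about 0.909 parts per time unit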

Markov processes
States and Transitions

Classification of states
A chain is irreducible if and only if each state can be reached from each
other state.

Let fij be the probability that, if the system is in state j, it will at some later time be in state i. State i is transient if fii < 1. If a steady-state distribution exists, and i is a transient state, its steady-state probability is 0.

Markov processes
States and Transitions

Classification of states
The states can be uniquely divided into sets T , C1 , . . . Cn such that T is the set
of all transient states and fij = 1 for i and j in the same set Cm and fij = 0 for i
in some set Cm and j not in that set. If there is only one set C , the chain is
irreducible. The sets Cm are called final classes or absorbing classes and T is the
transient class.
Transient states cannot be reached from any other states except possibly other
transient states. If state i is in T , there is no state j in any set Cm such that
there is a sequence of possible transitions (transitions with nonzero probability)
from j to i.

Markov processes
States and Transitions

Classification of states

[Figure: the transient class T with transitions into the final classes C1, ..., C7.]

Markov processes
States and Transitions

Discrete state, continuous time

◮ States can be numbered 0, 1, 2, 3, ... (or with multiple indices if that


is more convenient).
◮ Time is a real number, defined on (−∞, ∞) or a smaller interval.
◮ The probability of a transition from j to i during [t, t + δt] is approximately λij δt, where δt is small, and λij δt = prob{X(t + δt) = i | X(t) = j} + o(δt) for j ≠ i.

Markov processes
States and Transitions
Discrete state, continuous time

Transition graph (no self loops!!!!)

[Figure: transition graph on states 1, 2, ..., 7 with rate labels λ14, λ24, λ64, λ45.]

λij is a probability rate. λij δt is a probability.


Markov processes
States and Transitions

Discrete state, continuous time

◮ Define pi(t) = prob{X(t) = i}.
◮ It is convenient to define λii = −Σ_{j≠i} λji.
◮ Transition equations: dpi(t)/dt = Σj λij pj(t).
◮ Normalization equation: Σi pi(t) = 1.

Markov processes
States and Transitions

Discrete state, continuous time

◮ Steady state: pi = lim_{t→∞} pi(t), if it exists.
◮ Steady-state transition equations: 0 = Σj λij pj.
◮ Steady-state balance equations: pi Σ_{m≠i} λmi = Σ_{j≠i} λij pj.
◮ Normalization equation: Σi pi = 1.

Markov processes
States and Transitions

Discrete state, continuous time

Sources of confusion in continuous time models:
◮ Never draw self-loops in continuous time Markov process graphs.
◮ Never write 1 − λ14 − λ24 − λ64. Write
  ◮ 1 − (λ14 + λ24 + λ64)δt, or
  ◮ −(λ14 + λ24 + λ64).
◮ λii = −Σ_{j≠i} λji is NOT a probability rate and NOT a probability. It is ONLY a convenient notation.

Markov processes
Exponential

Exponential random variable: the time to move from state 1 to state 0.

[Diagram: 1 → 0 with rate p.]

p δt = prob[α(t + δt) = 0 | α(t) = 1] + o(δt).

Markov processes
Exponential
[Diagram: 1 → 0 with rate p.]

p(0, t + δt) = prob[α(t + δt) = 0 | α(t) = 1] prob[α(t) = 1] + prob[α(t + δt) = 0 | α(t) = 0] prob[α(t) = 0],

or

p(0, t + δt) = p δt p(1, t) + p(0, t) + o(δt),

or

dp(0, t)/dt = p p(1, t).
Markov processes
Exponential

[Diagram: 1 → 0 with rate p.]

Since p(0, t) + p(1, t) = 1,

dp(1, t)/dt = −p p(1, t).

If p(1, 0) = 1, then

p(1, t) = e^(−pt)

and

p(0, t) = 1 − e^(−pt)

Markov processes
Exponential

Density function

The probability that the transition takes place in [t, t + δt] is

prob[α(t + δt) = 0 and α(t) = 1] = e^(−pt) p δt.

The exponential density function is p e^(−pt). The time of the transition from 1 to 0 is said to be exponentially distributed with rate p. The expected transition time is 1/p. (Prove it!)

Markov processes
Exponential

Density function

◮ f(t) = p e^(−pt) for t ≥ 0; f(t) = 0 otherwise.
  F(t) = 1 − e^(−pt) for t ≥ 0; F(t) = 0 otherwise.
◮ ET = 1/p, VT = 1/p^2. Therefore, cv = 1.

[Figure: f(t) and F(t) vs. t, with 1/p marked on the time axis.]
Markov processes
Exponential

Density function

◮ Memorylessness: prob(T > t + x | T > x) = prob(T > t).
◮ prob(t ≤ T ≤ t + δt) ≈ µδt for small δt.
◮ If T1, ..., Tn are exponentially distributed random variables with parameters µ1, ..., µn and T = min(T1, ..., Tn), then T is an exponentially distributed random variable with parameter µ = µ1 + ... + µn (see the check below).
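The last property is easy to check by simulation in Python (the rates are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, n = 2.0, 3.0, 100_000
T = np.minimum(rng.exponential(1 / mu1, n), rng.exponential(1 / mu2, n))
print(T.mean(), 1 / (mu1 + mu2))    # both close to 0.2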

Markov processes
Exponential

Density function

[Figure: the exponential density function and a small number of actual samples.]

Markov processes
Unreliable machine

Continuous time

[Diagram: two-state machine; 1 = up, 0 = down; failure rate p, repair rate r.]

Markov processes
Unreliable machine

Continuous time
The probability distribution satisfies
p(0, t + δt) = p(0, t)(1 − r δt) + p(1, t)pδt + o(δt)
p(1, t + δt) = p(0, t)r δt + p(1, t)(1 − pδt) + o(δt)

or

dp(0, t)/dt = −p(0, t) r + p(1, t) p,
dp(1, t)/dt = p(0, t) r − p(1, t) p.

Markov processes
Unreliable machine

Solution

p(0, t) = p/(r + p) + [p(0, 0) − p/(r + p)] e^(−(r+p)t),
p(1, t) = 1 − p(0, t).

As t → ∞,

p(0) → p/(r + p),
p(1) → r/(r + p)

Markov processes
Unreliable machine

Steady-state solution
If the machine makes µ parts per time unit on the average when it is
operational, the overall average production rate is
µ p(1) = µr/(r + p) = µ/(1 + p/r).

Markov processes
The M/M/1 Queue

[Diagram: single-server queue; arrivals at rate λ, service at rate µ.]

◮ Simplest model is the M/M/1 queue:


◮ Exponentially distributed inter-arrival times — mean is 1/λ; λ is arrival
rate (customers/time). (Poisson arrival process.)
◮ Exponentially distributed service times — mean is 1/µ; µ is service rate
(customers/time).
◮ 1 server.
◮ Infinite waiting area.

Markov processes
The M/M/1 Queue

◮ Exponential arrivals:
◮ If a part arrives at time s, the probability that the next part arrives
during the interval [s + t, s + t + δt] is e −λt λδt + o(δt) ≈ λδt. λ is
the arrival rate.
◮ Exponential service:
◮ If an operation is completed at time s and the buffer is not empty, the
probability that the next operation is completed during the interval
[s + t, s + t + δt] is e −µt µδt + o(δt) ≈ µδt. µ is the service rate.

Markov processes
The M/M/1 Queue

Sample path

[Figure: number of customers n in the system as a function of time t: a step function that rises by 1 at each arrival and falls by 1 at each service completion.]
Markov processes
The M/M/1 Queue

State Space

[Figure: birth-death transition graph on states 0, 1, 2, ..., n − 1, n, n + 1, ...; each state n goes to n + 1 with rate λ and each state n + 1 goes to n with rate µ.]

Markov processes
The M/M/1 Queue

Performance Evaluation
Let p(n, t) be the probability that there are n parts in the system at time
t. Then,

p(n, t + δt) = p(n − 1, t)λδt + p(n + 1, t)µδt


+p(n, t)(1 − (λδt + µδt)) + o(δt)
for n > 0

and

p(0, t + δt) = p(1, t)µδt + p(0, t)(1 − λδt) + o(δt).

Markov processes
The M/M/1 Queue

Performance Evaluation
Or,

dp(n, t)/dt = p(n − 1, t) λ + p(n + 1, t) µ − p(n, t)(λ + µ),   n > 0,

dp(0, t)/dt = p(1, t) µ − p(0, t) λ.

If a steady state distribution exists, it satisfies

0 = p(n − 1)λ + p(n + 1)µ − p(n)(λ + µ), n > 0


0 = p(1)µ − p(0)λ.
Why “if”?

Markov processes
The M/M/1 Queue

Performance Evaluation
Let ρ = λ/µ. These equations are satisfied by

p(n) = (1 − ρ)ρ^n , n ≥ 0
if ρ < 1. The average number of parts in the system is
n̄ = Σ_n n p(n) = ρ/(1 − ρ) = λ/(µ − λ).
From Little's law, the average delay experienced by a part is
W = n̄/λ = 1/(µ − λ).
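A discrete-event simulation sketch (not in the original slides; λ = 0.8 and µ = 1 are illustrative) that checks n̄ and W against these formulas:

    # Illustrative M/M/1 simulation (assumed rates; not from the slides).
    import random

    random.seed(2)
    lam, mu, horizon = 0.8, 1.0, 200_000.0
    t, n, area = 0.0, 0, 0.0
    next_arr = random.expovariate(lam)
    next_dep = float("inf")
    while t < horizon:
        t_next = min(next_arr, next_dep)
        area += n * (t_next - t)             # integral of n(t) dt
        t = t_next
        if next_arr <= next_dep:             # arrival
            n += 1
            next_arr = t + random.expovariate(lam)
            if n == 1:                       # server was idle; start a service
                next_dep = t + random.expovariate(mu)
        else:                                # departure
            n -= 1
            next_dep = t + random.expovariate(mu) if n > 0 else float("inf")
    nbar = area / t
    print(nbar, lam / (mu - lam))            # n-bar vs. rho/(1 - rho)
    print(nbar / lam, 1.0 / (mu - lam))      # W = n-bar/lambda vs. 1/(mu - lambda)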

Markov processes
The M/M/1 Queue

Performance Evaluation
[Figure: "Delay in an M/M/1 Queue" — delay vs. arrival rate (0 to 1); the delay grows without bound as the arrival rate approaches 1.]

Define the utilization ρ = λ/µ.


What happens if ρ > 1?

Markov processes
The M/M/1 Queue

Performance Evaluation
[Figure: W vs. λ for µ = 1 and µ = 2.]
◮ To increase capacity, increase µ.
◮ To decrease delay for a given λ, increase µ.

Markov processes
The M/M/1 Queue

Other Single-Stage Models


Things get more complicated when:
◮ There are multiple servers.
◮ There is finite space for queueing.
◮ The arrival process is not Poisson.
◮ The service process is not exponential.
Closed formulas and approximations exist for some cases.

Continuous random variables
Philosophical issues

1. Mathematically, continuous and discrete random variables are very different.
2. Quantitatively, however, some continuous models are very close to some discrete models.
3. Therefore, which kind of model to use for a given system is a matter of convenience.
Example: The production process for small metal parts (nuts, bolts, washers, etc.) might better be modeled as a continuous flow than as a large number of discrete parts.

Continuous random variables
Probability density

[Figure: scattered sample points, dense in a "high density" region and sparse in a "low density" region.]

The probability of a two-dimensional random variable being in a small square is the probability density times the area of the square. (Actually, it is more general than this.)

Continuous random variables
Spaces

◮ Continuous random variables can be defined


◮ in one, two, three, ..., infinite dimensional spaces;
◮ in finite or infinite regions of the spaces.
◮ Continuous random variables can have
◮ probability measures with the same dimensionality as the space;
◮ lower dimensionality than the space;
◮ a mix of dimensions.

Continuous random variables
Dimensionality

[Diagram: a three-machine, two-buffer line M1 B1 M2 B2 M3; the buffer levels (x1, x2) form a two-dimensional state space.]

Continuous random variables
Spaces

Dimensionality
[Figure: probability distribution of the amount of material in each of the two buffers of M1 B1 M2 B2 M3, over the (x1, x2) rectangle: a two-dimensional density in the interior, one-dimensional densities on the boundary edges, and zero-dimensional densities (masses) at the corners.]

Continuous random variables
Spaces

Discrete approximation
[Figure: discrete approximation of the probability distribution of the amount of material in each of the two buffers of M1 B1 M2 B2 M3.]

Continuous random variables
Example

Problem

Production surplus from an unreliable machine

[Diagram: two-state machine; state 1 (up, production rate µ) and state 0 (down, production rate 0), with repair rate r and failure rate p.]

Demand rate = d < µ r /(r + p). (Why?)
Problem: producing more than has been demanded creates inventory and is
wasteful. Producing less reduces revenue or customer goodwill. How can we
anticipate and respond to random failures to mitigate these effects?

Continuous random variables
Example

Solution

We propose a production policy. Later we show that it is a solution to an optimization problem.
Model:
[Diagram: when α = 1, the machine produces at a controlled rate u, 0 ≤ u ≤ µ, into the surplus x, which is drained by demand at rate d; when α = 0, u = 0.]

How do we choose u?

Continuous random variables
Example

Solution
Surplus, or inventory/backlog: dx(t)/dt = u(t) − d
Production policy: Choose Z (the hedging point ). Then,
◮ if α = 1,
◮ if x < Z , u = µ,
◮ if x = Z , u = d,
◮ if x > Z , u = 0;
◮ if α = 0,
◮ u = 0.
[Figure: cumulative production and demand vs. t; cumulative demand is dt, cumulative production stays near dt + Z, and the surplus x(t) is the vertical difference.]

How do we choose Z ?
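A simulation sketch of the policy (not in the original slides; r, p, µ, d, and Z are illustrative values, with d < µr /(r + p)). The printed fraction of time spent at (α = 1, x = Z) can be compared with the mass P(Z, 1) derived in the following slides:

    # Illustrative simulation of the hedging-point policy (assumed parameters).
    import random

    random.seed(3)
    r, p, mu, d, Z = 0.1, 0.01, 1.0, 0.7, 20.0
    dt, steps = 0.01, 2_000_000
    alpha, x, at_Z = 1, Z, 0
    for _ in range(steps):
        u = (mu if x < Z else d) if alpha == 1 else 0.0   # the policy above
        x = min(x + (u - d) * dt, Z)         # x never passes the hedging point
        if alpha == 1 and x == Z:
            at_Z += 1
        if alpha == 1 and random.random() < p * dt:
            alpha = 0                        # failure
        elif alpha == 0 and random.random() < r * dt:
            alpha = 1                        # repair
    print(at_Z / steps)                      # estimate of the mass P(Z, 1)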
Continuous random variables
Example

Mathematical model
Definitions:
f (x, α, t) is a probability density function.

f (x, α, t)δx = prob (x ≤ X (t) ≤ x + δx


and the machine state is α at time t).

prob (Z , α, t) is a probability mass.

prob (Z , α, t) = prob (x = Z
and the machine state is α at time t).

Note that x > Z is transient.


Continuous random variables
Example

Mathematical model
State Space:
[Diagram: two horizontal lines of x values, one for α = 1 and one for α = 0, ending at x = Z. For α = 1 (and x < Z), dx/dt = µ − d; for α = 0, dx/dt = −d.]

Continuous random variables
Example

Mathematical model
Transitions to α = 1, [x, x + δx]; x < Z :
[Diagram: the interval [x, x + δx] on the α = 1 line can be entered by a repair from α = 0, or by remaining in α = 1 with no failure.]

Continuous random variables
Example

Mathematical model
Transitions to α = 0, [x, x + δx]; x < Z :
[Diagram: the interval [x, x + δx] on the α = 0 line can be entered by a failure from α = 1, or by remaining in α = 0 with no repair.]

Continuous random variables
Example
Mathematical model
Transitions to α = 1, [x, x + δx]; x < Z :
[Diagram: during [t, t + δt], (α = 1, x) is reached from (α = 0, x(t) = x + dδt) by a repair, or from (α = 1, x(t) = x − (µ − d)δt) with no failure.]

f (x, 1, t + δt)δx =
[f (x + dδt, 0, t)δx] r δt + [f (x − (µ − d)δt, 1, t)δx](1 − pδt) + o(δt)o(δx)

Continuous random variables
Example

Mathematical model
Or,
f (x, 1, t + δt) = f (x + dδt, 0, t) r δt + f (x − (µ − d)δt, 1, t)(1 − pδt) + o(δt)o(δx)/δx

In steady state,
f (x, 1) = f (x + dδt, 0) r δt + f (x − (µ − d)δt, 1)(1 − pδt) + o(δt)o(δx)/δx

Continuous random variables
Example

Mathematical model
Expand in Taylor series:
f (x, 1) = [ f (x, 0) + (df (x, 0)/dx) dδt ] r δt
+ [ f (x, 1) − (df (x, 1)/dx)(µ − d)δt ] (1 − pδt)
+ o(δt)o(δx)/δx

Continuous random variables
Example

Mathematical model
Multiply out:
f (x, 1) = f (x, 0) r δt + (df (x, 0)/dx)(d)(r )δt²
+ f (x, 1) − (df (x, 1)/dx)(µ − d)δt
− f (x, 1)pδt − (df (x, 1)/dx)(µ − d)pδt²
+ o(δt)o(δx)/δx

Continuous random variables
Example

Mathematical model
Subtract f (x, 1) from both sides and move one of the terms:

(df (x, 1)/dx)(µ − d)δt = o(δt)o(δx)/δx
+ f (x, 0) r δt + (df (x, 0)/dx)(d)(r )δt²
− f (x, 1)pδt − (df (x, 1)/dx)(µ − d)pδt²

Continuous random variables
Example

Mathematical model
Divide through by δt:
(df (x, 1)/dx)(µ − d) = o(δt)o(δx)/(δtδx)
+ f (x, 0) r + (df (x, 0)/dx)(d)(r )δt
− f (x, 1)p − (df (x, 1)/dx)(µ − d)pδt

Continuous random variables
Example

Mathematical model
Take the limit as δt −→ 0:
(µ − d) df (x, 1)/dx = f (x, 0)r − f (x, 1)p

Continuous random variables
Example
Mathematical model
Transitions to α = 0, [x, x + δx]; x < Z :
[Diagram: during [t, t + δt], (α = 0, x) is reached from (α = 1, x(t) = x − (µ − d)δt) by a failure, or from (α = 0, x(t) = x + dδt) with no repair.]

f (x, 0, t + δt)δx =
[f (x + dδt, 0, t)δx](1 − r δt) + [f (x − (µ − d)δt, 1, t)δx]pδt + o(δt)o(δx)

Continuous random variables
Example

Mathematical model
By following essentially the same steps as for the transitions to
α = 1, [x, x + δx]; x < Z , we have

d df (x, 0)/dx = f (x, 0)r − f (x, 1)p

Note:
(µ − d) df (x, 1)/dx = d df (x, 0)/dx
Why?

Continuous random variables
Example

Mathematical model
Transitions to α = 1, x = Z :
[Diagram: the mass at (α = 1, x = Z) is maintained by staying there with no failure (probability 1 − pδt), or by arriving from Z − (µ − d)δt < x < Z with no failure.]

P(Z , 1) = P(Z , 1)(1 − pδt)
+ prob (Z − (µ − d)δt < X < Z , α = 1)(1 − pδt)
+ o(δt).

Continuous random variables
Example

Mathematical model
Or,

P(Z , 1) = P(Z , 1) − P(Z , 1)pδt
+ f (Z − (µ − d)δt, 1)(µ − d)δt(1 − pδt) + o(δt),

or,
P(Z , 1)pδt = o(δt)
+ [ f (Z , 1) − (df (Z , 1)/dx)(µ − d)δt ] (µ − d)δt(1 − pδt),

Continuous random variables
Example

Mathematical model
Or,
P(Z , 1)pδt = f (Z , 1)(µ − d)δt + o(δt)
or,

P(Z , 1)p = f (Z , 1)(µ − d)

Continuous random variables
Example

Mathematical model
[Diagram: the state space again; on the α = 1 line, dx/dt = µ − d; on the α = 0 line, dx/dt = −d; the boundary is x = Z.]

P(Z , 0) = 0. Why?

Continuous random variables
Example

Mathematical model
Transitions to α = 0, Z − (µ − d)δt < x < Z :
[Diagram: the interval (Z − dδt, Z) on the α = 0 line is reached by a failure from (α = 1, x = Z) with probability pδt, or by remaining in α = 0 with no repair (probability 1 − r δt).]

prob (Z − dδt < X < Z , 0) = f (Z , 0)dδt + o(δt)
= P(Z , 1)pδt + o(δt)

Continuous random variables
Example

Mathematical model
Or,

f (Z , 0)d = P(Z , 1)p = f (Z , 1)(µ − d)

Continuous random variables
Example

Mathematical model

d df (x, 0)/dx = f (x, 0)r − f (x, 1)p
(µ − d) df (x, 1)/dx = f (x, 0)r − f (x, 1)p

f (Z , 1)(µ − d) = f (Z , 0)d
0 = −pP(Z , 1) + f (Z , 1)(µ − d)
1 = P(Z , 1) + ∫_{−∞}^{Z} [f (x, 0) + f (x, 1)] dx

Continuous random variables
Example

Solution
Solution of equations:
f (x, 0) = A e^{bx}
f (x, 1) = A [d/(µ − d)] e^{bx}
P(Z , 1) = A (d/p) e^{bZ}
P(Z , 0) = 0
where
b = r /d − p/(µ − d)
and A is chosen so that normalization is satisfied.
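A numerical sketch (not in the original slides; parameters are illustrative). It uses f (x, 0) + f (x, 1) = A [µ/(µ − d)] e^{bx}, so the normalization integral equals A µ e^{bZ}/((µ − d)b) for b > 0:

    # Illustrative normalization check (assumed parameters; not from the slides).
    import math

    r, p, mu, d, Z = 0.1, 0.01, 1.0, 0.7, 20.0
    b = r / d - p / (mu - d)
    A = math.exp(-b * Z) / (mu / ((mu - d) * b) + d / p)
    PZ1 = A * (d / p) * math.exp(b * Z)      # P(Z, 1)
    total = A * mu / ((mu - d) * b) * math.exp(b * Z) + PZ1
    print(b, A, PZ1, total)                  # total should equal 1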

Continuous random variables
Example

Solution
[Figure: "Density Function -- Controlled Machine" — the density vs. x over roughly −40 ≤ x ≤ 20, rising exponentially toward x = Z .]

Continuous random variables
Example

Observations

1. Meanings of b:
Mathematical:
In order for the solution on the previous slide to make sense, b > 0.
Otherwise, the normalization integral cannot be evaluated.

Continuous random variables
Example

Observations
Intuitive:
◮ The average duration of an up period is 1/p. The rate that x increases
(while x < Z ) while the machine is up is µ − d. Therefore, the average
increase of x during an up period while x < Z is (µ − d)/p.
◮ The average duration of a down period is 1/r . The rate that x decreases
while the machine is down is d. Therefore, the average decrease of x during
a down period is d/r .
◮ In order to guarantee that x does not move toward −∞, we must have
(µ − d)/p > d/r .

Continuous random variables
Example

Observations
If (µ − d)/p > d/r ,
then p/(µ − d) < r /d
or b = r /d − p/(µ − d) > 0.

That is, we must have b > 0 so that there is enough capacity for x to increase on
the average when x < Z .

Continuous random variables
Example

Observations
Also, note that b > 0 =⇒ r /d > p/(µ − d) =⇒
r (µ − d) > pd =⇒
r µ − rd > pd =⇒
r µ > rd + pd =⇒
µ r /(r + p) > d,
which we assumed.

Continuous random variables
Example

Observations

2. Let C = A e^{bZ} . Then

f (x, 0) = C e^{−b(Z −x)}
f (x, 1) = C [d/(µ − d)] e^{−b(Z −x)}
P(Z , 1) = C d/p
P(Z , 0) = 0

That is, the probability distribution really depends on Z − x. If Z is changed, the distribution shifts without changing its shape.

MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT 2.852

Manufacturing Systems Analysis

Lectures 22–?
Quality and Quantity
Stanley B. Gershwin

Spring, 2007

Quality and

Quantity

• Quantity is about how much is produced, when it is produced,


and what resources are required to produce it.

• Quality is about how well it is made, and how much of it is


made well.
⋆ Design quality is about giving customers what they would
like.
⋆ Production quality is about not giving customers what they
would not like.

Quality and

Quantity

• Most literature is all quantity or all quality.

• Quantity measures include production rate, lead


time, inventory, utilization.

• Quality measures include yield and output defect


rate.

Quality and

Quantity

• Quantity strategies include optimizing local


inventories, optimizing global inventory,
release/dispatch policies, make-to-order vs.
make-to-stock, etc.

• Quality strategies include inspection, statistical


process control, etc.

The Problem

Quality and
Quantity

The problem is that, conventionally, ...

• Quantity strategies are selected according to how


they affect quantity measures, and

• Quality strategies are selected according to how they affect quality measures, but ...

• in reality, both affect both .

Quality

Quality and
Quantity

Example: Statistical Process Control

• Goal is to determine when a process has gone out of control in order to maintain the machine.
• Upper and lower control limits (UCL, LCL) usually chosen to be 6σ apart.
• Basic idea: which is the most likely distribution that a sample comes from?
[Figure: control chart; points between LCL and UCL are in control, points outside are out of control.]

Quantity

Quality and
Quantity

Example:

Everything we have been discussing so far.

Taxonomy of

Issues

• Failure dynamics
• Inspection
⋆ Binary (good/bad) vs. measurement
⋆ Accuracy (false positives and negatives)
⋆ Spatial and temporal frequency
• Actions on parts and machines
• Topology of system
• Performance measures

Failure Dynamics

Taxonomy of
Issues

• Definition: How the quality of a machine changes over time.


• The quality literature distinguishes between common causes
and special causes . (Other terms are also used.)
⋆ Common cause: successive failures are equally likely,

regardless of past history.

GGGGGBGGGBGGGGGGGBGGBGGGGBBGGGGGGGG.....

⋆ Special cause: something happens to the machine, and

failures become much more likely.

GGGGBGGGGGBGGGGGGGGBBBBBBGBBBBGBBGB.....

• We use this concept to extend quantity models.

Failure Dynamics

Taxonomy of
Issues

• Bernoulli or common cause: independent.


• Persistent or special cause: all parts after the first
bad part are bad, until the repair.
• Multi-Yield : generalization of persistent.

Failure Dynamics

Taxonomy of
Issues

The relationship between quality dynamics and statistical process


control:

[Diagram: two quality states, G (good) and B (bad), with a transition from G to B.]

Note: The operator does not know when the machine is in the
bad state until it has been detected.

Failure Dynamics

Taxonomy of
Issues Simplest model

[Diagram: three-state machine with states UP/Good, UP/Bad, and DOWN.]
Versions:
• The Good state has 100% yield and the Bad state has 0% yield.
• The Good state has high yield and the Bad state has low yield.

Failure Dynamics

Taxonomy of
Issues Simplest model

The three-state machine model is much too simple.
[Diagram: states UP/Good, UP/Bad, and DOWN; a quality failure (invisible failure) moves UP/Good to UP/Bad; an operational failure (visible failure) or failure detection moves the machine to DOWN; repair returns it to UP/Good.]
• No matter how the machine arrived at the DOWN state, it gets the same repair. Since the next state is always the UP/Good state, there must have been a quality repair.
• Quality repairs are expensive, and not necessary for operational failures.

Failure Dynamics

Taxonomy of
Issues Simplest model

• One extension is
[Diagram: states UP/Good, UP/Bad, DOWN G, and DOWN B; the quality repair is applied only after DOWN B.]
• ... but even this leaves out important features.

Failure Dynamics

Taxonomy of
Issues Simplest model

• Another extension is
[Diagram: a chain of good states Gk, Gk−1, ..., G2, G1 leading to a bad state B, each with a corresponding down state Dk, ..., D1, D; a quality repair returns the machine to Gk.]
• This allows very general wear or aging models.

Failure Dynamics

Taxonomy of
Issues Simplest model

• A maintenance strategy could be modeled as
[Diagram: the same chain of states, with maintenance performed at a chosen state of the chain.]
if we have perfect knowledge of the machine state.

Failure Dynamics

Taxonomy of
Issues Simplest model

• A maintenance strategy could be implemented as
[Diagram: the chain of states, with maintenance triggered by observations]
if we do not have perfect knowledge of the machine state.
• It would be analyzed according to
[Diagram: the chain of states as above.]
Failure Dynamics

Taxonomy of
Issues Simplest model

[Figure: the distribution of a measured parameter, associated with each state of the chain Gk, Gk−1, ..., G1, B and the corresponding down states; quality repair returns the machine to Gk.]


Inspection

Taxonomy of
Issues

• Motivation — why inspect?


⋆ To take action on parts and machines.
• Objectives of inspection:
⋆ Bad parts rejected or reworked.
⋆ Machine maintained when necessary.
• Effects of inspection errors:
⋆ Some good parts rejected or reworked; some bad parts

accepted.

⋆ Unnecessary downtime and/or more bad parts.

Inspection
Taxonomy of
Issues

• Destructive vs. non-destructive


• Domain
• Sampling
• Inspection time
• Accuracy (and goal of inspection)

Actions on parts and machines
Taxonomy of

Issues

• Actions on parts: accept, rework, or scrap.

• Actions on machines: leave alone or repair.

Topology of system
Taxonomy of
Issues

[Diagram: an example system with machines numbered 1–16 (rows 1–5, 6–10, 11–15, then 16).]

Performance Measures

Taxonomy of
Issues

• Expected total production rate


• Expected good production rate
• Yield
• Expected inventory.
• Miss and waste
• Production lead time

They are easy to calculate in a single-machine model.

One- and

Two-Machine

Systems

Note:

All the material up to Slide 43 is taken from

Kim and Gershwin, "Integrated Quality and Quantity Modeling of a Production Line," OR Spectrum, Volume 27, Numbers 2–3, pp. 287–314, June 2005,

and

Jongyoon Kim, "Integrated Quality and Quantity Modeling of a Production Line," Ph.D. thesis, MIT Mechanical Engineering, November 2004.
One- and Two-Machine Systems
Single Machine
[Diagram: three states — 1 (up, good quality), −1 (up, bad quality), 0 (down); quality failure rate g from 1 to −1, operational failure rate p from 1 to 0, stoppage rate f from −1 to 0, repair rate r from 0 to 1.]

(g + p)P (1) = rP (0)
f P (−1) = gP (1)
One- and Two-Machine Systems
Single Machine

rP (0) = pP (1) + f P (−1)
P (0) + P (1) + P (−1) = 1

P (1) = 1 / [1 + (p + g)/r + g/f ]
P (0) = [(p + g)/r ] / [1 + (p + g)/r + g/f ]
P (−1) = (g/f ) / [1 + (p + g)/r + g/f ]

One- and Two-Machine Systems
Single Machine

The total production rate, including good and bad parts, is
PT = µ(P (1) + P (−1)) = µ (1 + g/f ) / [1 + (p + g)/r + g/f ]
The effective production rate, the production rate of good parts only, is
PE = µP (1) = µ / [1 + (p + g)/r + g/f ]
(This quantity is also called the good production rate.) Since there is no scrapping, the rate at which parts enter the system is equal to the rate at which parts leave the system, so that the yield is
Y = PE /PT = P (1)/(P (1) + P (−1)) = f /(f + g)
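A small evaluation sketch of these formulas (not in the original slides; the parameter values are illustrative):

    # Illustrative evaluation of the single-machine quality model (assumed values).
    r, p, g, f, mu = 0.1, 0.01, 0.005, 0.05, 1.0
    den = 1 + (p + g) / r + g / f
    P1, P0, Pm1 = 1 / den, ((p + g) / r) / den, (g / f) / den
    PT = mu * (P1 + Pm1)                     # total production rate
    PE = mu * P1                             # effective (good) production rate
    Y = f / (f + g)                          # yield
    print(P1 + P0 + Pm1, PT, PE, Y, PE / PT) # checks: sum = 1 and Y = PE/PT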
One- and Two-Machine Systems
Lines with Infinite Buffers

Two-Machine, Infinite-Buffer Line:

PT = min[ µ1 (1 + g1 /f1 ) / (1 + (p1 + g1 )/r1 + g1 /f1 ) , µ2 (1 + g2 /f2 ) / (1 + (p2 + g2 )/r2 + g2 /f2 ) ]

PE = PT f1 f2 / [(f1 + g1 )(f2 + g2 )]

One- and Two-Machine Systems
Lines with Zero Buffers

Two-Machine, Zero-Buffer Line:

PT0 = min[µ1 , µ2 ] / [ 1 + f1^b (p1^b + g1^b )/(r1 (f1^b + g1^b )) + f2^b (p2^b + g2^b )/(r2 (f2^b + g2^b )) ]

PE0 = PT0 f1^b f2^b / [(f1^b + g1^b )(f2^b + g2^b )]
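A sketch evaluating both two-machine cases (not in the original slides). The zero-buffer formula above was reconstructed from a garbled original, and setting the b-superscript parameters equal to the base parameters is purely an illustrative assumption:

    # Illustrative two-machine computations (assumptions noted above).
    def machine_PT(mu, r, p, g, f):
        # isolated total production rate (same form as the single-machine slide)
        return mu * (1 + g / f) / (1 + (p + g) / r + g / f)

    m1 = dict(mu=1.0, r=0.10, p=0.010, g=0.005, f=0.05)
    m2 = dict(mu=1.0, r=0.11, p=0.010, g=0.004, f=0.06)
    yield_factor = (m1["f"] * m2["f"]) / ((m1["f"] + m1["g"]) * (m2["f"] + m2["g"]))

    PT_inf = min(machine_PT(**m1), machine_PT(**m2))     # infinite buffer
    PE_inf = PT_inf * yield_factor
    den0 = 1 + m1["f"] * (m1["p"] + m1["g"]) / (m1["r"] * (m1["f"] + m1["g"])) \
             + m2["f"] * (m2["p"] + m2["g"]) / (m2["r"] * (m2["f"] + m2["g"]))
    PT0 = min(m1["mu"], m2["mu"]) / den0                 # zero buffer
    PE0 = PT0 * yield_factor
    print(PT_inf, PE_inf, PT0, PE0)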

One- and Two-Machine Systems
Two-Machine-One-Buffer Lines
• Continuous material
• Three-state machine
• Quality information feedback
⋆ Defects produced by the first machine are detected, after a
delay, by the second machine.
⋆ The length of the delay depends on the number of parts in
the buffer.
• As buffer size increases, total production rate increases and
yield decreases. But good production rate behavior is harder to
predict.

One- and Two-Machine Systems
Two-Machine-One-Buffer Lines: Quality Information Feedback
[Diagram: the inspection at M2 feeds quality information back to M1.]

One- and Two-Machine Systems
Two-Machine-One-Buffer Lines: Solution Technique

The two-machine, one-buffer line with known


parameters can be solved using standard methods.
All parameters of the two-machine, one-buffer line are
known except h12, the transition rate from the bad
quality state of M1 to the down state due to the
inspection at M2. This depends on the number of
parts in the buffer x̄.

One- and Two-Machine Systems
Two-Machine-One-Buffer Lines: Solution Technique

Procedure:
• Guess x̄.
• Calculate h12.
• Solve the two-machine line. Recalculate x̄ and iterate.
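This is a fixed-point iteration. The sketch below (not in the original slides) shows the generic pattern; update stands in for one pass of computing h12 from x̄ and re-solving the two-machine line, and a toy contraction is substituted so the sketch runs on its own:

    # Generic fixed-point pattern (update is a hypothetical stand-in here).
    def fixed_point(update, x0, tol=1e-9, max_iter=200):
        x = x0
        for _ in range(max_iter):
            x_new = update(x)                # e.g., h12 from x-bar, then a new x-bar
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("fixed-point iteration did not converge")

    print(fixed_point(lambda x: 0.5 * (x + 2.0 / x), 1.0))   # toy example: sqrt(2)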

One- and Two-Machine Systems
Intuition

• Quantity-oriented people tend to assume that


increasing a buffer increases the production rate.

• Quality-oriented people tend to assume that


increasing a buffer decreases the production rate of
good items.

• However, we have found that the picture is not so


simple.

One- and Two-Machine Systems (M1 B M2)
Assumptions

• M1 has variable quality; the inspection occurs at M2.

• M1 makes only good parts in the G state and only bad parts in
the B state.

• Stoppages occur at both machines at random times for random


durations.

• The buffer is operated according to FIFO.

• Detection of the M1 state change cannot take place until a bad


part reaches M2.

One- and Two-Machine Systems (M1 B M2)
Beneficial Buffer

[Figure: effective production rate vs. buffer size (0–50); it rises from about 0.69 toward 0.72 as the buffer grows.]
Effective production rate = production rate of good parts.

One- and Two-Machine Systems (M1 B M2)
Harmful Buffer
[Figure: effective production rate vs. buffer size (0–50); it falls as the buffer grows.]

One- and Two-Machine Systems (M1 B M2)
Intuition

• When the inspection detects the first bad part after a good part,
the buffer contains only bad parts.

• In the harmful buffer case, the first machine has a higher


isolated total production rate than the second. Therefore, the
buffer is usually close to full, no matter how large the buffer is .

• Increasing the size of the buffer increases the number of bad


parts in the system when the M1 state change is detected.

• It also increases the total production rate, but not as much as it


increases the production rate of bad parts.

One- and Two-Machine Systems (M1 B M2)
Intuition

• In the beneficial buffer case, the first machine has a smaller


isolated production rate than the second.

• Therefore, even if the buffer size increases, the number of parts


in the system is almost always small.

• Therefore it is rare for there to be many bad parts in the buffer


when the first bad part is inspected.

• Consequently, the production rate of bad parts remains limited


even as the buffer size increases.

One- and Two-Machine Systems (M1 B M2)
Mixed-Benefit Buffer

[Figure: effective production rate vs. buffer size (0–50); it rises from about 0.37 to a maximum near 0.40 and then declines.]

Simulation

Long Lines with


Finite Buffers

• Intuition: more inspection improves quality.


• Reality: increasing inspection can actually reduce
quality, if it is not done intelligently.

Simulation

Long Lines with


Finite Buffers

• We simulated a 15-machine, 14-buffer line.


• All machines and buffers were identical.
• We looked at all possible combinations of inspection
stations in which all operations were inspected.
⋆ Example: Inspection stations just after Machines 6, 9, 13,
and 15.
⋆ The first inspection looks at the results from Machines 1 – 6;
the second looks at results from Machines 7, 8, and 9; the
third from 10 – 13; and the last from 14 and 15.
⋆ There is always one inspection after Machine 15.
• A total of 2^14 = 16,384 cases were simulated.
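A sketch of the enumeration (not in the original slides): optional stations after machines 1–14 combined with the mandatory station after machine 15.

    # Enumerate the 2^14 inspection layouts described above.
    from itertools import combinations

    layouts = []
    for k in range(15):                      # k = number of optional stations
        for subset in combinations(range(1, 15), k):
            layouts.append(subset + (15,))   # machine 15 is always inspected
    print(len(layouts))                      # 16384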
Simulation
Long Lines with
Finite Buffers
[Figure: "Range of Good Production Rates for Different Numbers of Inspection Stations" (15 five-state machines, 14 buffers, information feedback only). Good production rate (about 0.30–0.38) vs. number of inspection stations (0–15), with one curve for the best locations and one for the worst. No inspection: 0.10040.]

Observations

Long Lines with


Finite Buffers

A few inspection stations deployed well can do as well


or better than many stations deployed poorly.
• The best distribution of 3 stations has a higher
effective production rate than the worst distribution of
7 stations.

• The best distribution of 8 stations performs almost


as well as 15 inspection stations.

Decomposition

Long Lines with


Finite Buffers Three structures analyzed

Details are in
Jongyoon Kim, "Integrated Quality and Quantity Modeling of a Production Line," Ph.D. thesis, MIT Mechanical Engineering, November 2004,
and
Kim and Gershwin, "Modeling and analysis of long flow lines with quality and operational failures," IIE Transactions, to appear.

Decomposition
Long Lines with
Finite Buffers Three structures analyzed

[Diagrams: three structures — ubiquitous inspection (an inspection station I after every machine), single remote inspection of a single machine, and single remote inspection of multiple machines.]

Decomposition

Long Lines with


Finite Buffers

• Procedure:
⋆ Guess x̄i.
⋆ Calculate required hij parameters.
⋆ Transform the 3-state machines into approximate 2-state

machines.

⋆ Solve the resulting line by a standard decomposition

technique.

⋆ Recalculate x̄i and iterate.


• Comparison with simulation is reasonable.

Decomposition

Long Lines with


Finite Buffers

The system yield is the product of individual machine


yields using the final hij values .
The effective production rate is the total production
rate times the system yield.

Conclusions

• Yield is a system attribute. It is not a simple function of


machine yields. It depends on the operation policy, the buffer
sizes, etc.

• The Q/Q area is important but has not been studied


systematically with engineering rigor as much as other areas
have. Much work remains to be done.

• Factory designers and operators must use intuition and


simulation.

MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Efficient Buffer Design Algorithms for

Production Line Profit Maximization

Chuan Shi

Department of Mechanical Engineering

Massachusetts Institute of Technology

Advisor: Dr. Stanley B. Gershwin

Ph.D. Committee Meeting (Spring 2010)

April 27, 2010

Contents
1 Problem
Problem

Motivation and research goals

Our focus: choosing buffers

Prior work review

Research Topics

2 Topic one: Line optimization


Constrained and unconstrained problems
Algorithm derivation
Proofs of the algorithm by KKT conditions
Numerical results
3 Topic two: Line opt. with time window constraint
Problem and motivation

Five cases

Algorithm derivation

Numerical results

4 Research in process and Research extensions


Research in process

Research extensions

Problem
Manufacturing Systems
A manufacturing system is a set of machines, transportation elements,
computers, storage buffers, and other items that are used together for
manufacturing.

Problem
Production Lines
A production line is organized with machines or groups of machines (M1 , ⋅ ⋅ ⋅ , Mk ) connected in series and separated by buffers (B1 , ⋅ ⋅ ⋅ , Bk−1 ).

Inventory space N1 N2 N3 N4 N5

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

Average inventory n1 n2 n3 n4 n5

Why study production lines ?

Economic importance
Production lines are used in high volume manufacturing, particularly automobile production, in which they make engine blocks, cylinders, connecting rods, etc. Their capital costs range from hundreds of thousands of dollars to tens of millions of dollars.

The simplest form of an important phenomenon


Manufacturing stages interfere with each other and buffers decouple
them.

Research goals

Make factories more efficient and more profitable, including micro- and nano-fabrication factories.

Develop tools for rapid design of production lines. This is very important for products with short life cycles.

Production line design

[Flowchart: design products → design processes → choose machines → choose buffers → "Are cost and performance satisfactory?" If no, return to an earlier step; if yes, done.]

Our focus: choosing buffers

Problem description and Assumptions

Maximize profit for production lines subject to a production rate target constraint.
Process and machines have already been chosen (3 models).
The deterministic processing time model of Gershwin (1987), (1994).
The deterministic processing time model of Tolio, Matta, and Gershwin (2002).
The continuous production line model of Levantesi, Matta, and Tolio (2003).
Decision variables: sizes of in-process inventory (buffer) spaces, i.e., (N1 , ⋅ ⋅ ⋅ , Nk−1 ) ≡ N.
Cost is due to inventory space and inventory.
Our focus: choosing buffers
Problem description and Assumptions
The deterministic processing time model of Gershwin (1987).
Time required to process a part is the same for all machines and is taken as the time unit.
Machine i is parameterized by the probability of failure, pi = 1/MTTFi , and the probability of repair, ri = 1/MTTRi , in each time unit.
The deterministic processing time model of Tolio, Matta, and Gershwin (2002).
Processing times of all machines are equal, deterministic, and constant.
It allows each machine to have multiple failure modes. Each failure is characterized by a geometrical distribution.
The continuous production line model of Levantesi, Matta, and Tolio (2003).
Machines can have deterministic, yet different, processing speeds.
It allows each machine to have multiple failure modes. Each failure is characterized by an exponential distribution.

Benefits and costs of buffers

Necessity
Machines are not perfectly reliable and predictable.
The unreliability has the potential for disrupting the operations of
adjacent machines or even machines further away.
Buffers decouple machines, and mitigate the effect of a failure of
one of the machines on the operation of others.

Benefits and costs of buffers

Undesirable consequence of buffers: Inventory

Inventory costs money to create or store.

The average lead time is proportional to the average amount of

inventory.

Inventory in a factory is vulnerable to damage.

The space and equipment needed for inventory costs money.

Difficulties

Evaluation
Calculate production rate and average inventory as a function of buffer sizes (and machine reliability).

state = (α1 , α2 , ⋅ ⋅ ⋅ , αk , n1 , n2 , ⋅ ⋅ ⋅ , nk−1 )

where αi = state of Mi = operation or repair
ni = number of parts in Bi , 0 ≤ ni ≤ Ni

Exact numerical solution is impractical due to large state space.
There is no practical analytical solution to this problem for k > 2.
For 2-machine lines, there are analytical solutions.
Good approximation is available: decomposition.

Difficulties
Decomposition

[Diagram: a segment ... M i−2 B i−2 M i−1 B i−1 Mi Bi M i+1 B i+1 M i+2 B i+2 M i+3 ... of the long line; buffer Bi is modeled by a two-machine line L(i) with pseudo-machines Mu(i) and Md(i).]

★ Reprinted with permission from Dr. Gershwin.

Difficulties

Optimization¹

Determine the optimal set of buffer sizes.
The cost function is nonlinear.
The constraints can be nonlinear.

¹ This is my contribution.
Prior work review

There are many studies focusing on maximizing the production rate but few studies concentrating on maximizing the profit.
Substantial research has been conducted on production line evaluation and optimization (Dallery and Gershwin 1992).
Buzacott derived the analytic formula for the production rate for two-machine, one-buffer lines in a deterministic processing time model (Buzacott 1967).
The invention of decomposition methods with unreliable machines and finite buffers (Gershwin 1987) enabled the numerical evaluation of the production rate of lines having more than two machines.
Diamantidis and Papadopoulos (2004) also presented a dynamic programming algorithm for optimizing buffer allocation based on the aggregation method given by Lim, Meerkov, and Top (1990). But they did not attempt to maximize the profits of lines.
For other line optimization work, see Chan and Ng (2002), Smith and Cruz (2005), Bautista and Pereira (2007), Jin et al. (2006), and Rajaram and Tian (2009).

Prior work review

Evaluation: simulation methods

Slow (according to Spinellis and Papadopoulos 2000).
Statistical.

Optimization: combinatorial or integer programming methods

Slow (according to Gershwin and Schor 2000).
Inaccurate (So 1997, Tempelmeier 2003).
Do not take advantage of special properties of the problem (Shi and Men 2003, Dolgui et al. 2002, Huang et al. 2002).

Schor’s problem

Schor 1995, Gershwin and Schor 2000

Schor's unconstrained profit maximization problem:

max_N J(N) = A P(N) − Σ_{i=1}^{k−1} bi Ni − Σ_{i=1}^{k−1} ci n̄i (N)
s.t. Ni ≥ Nmin , ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

where P(N) = production rate, parts/time unit
P̂ = required production rate, parts/time unit
A = profit coefficient, $/part
n̄i (N) = average inventory of buffer i, i = 1, ⋅ ⋅ ⋅ , k − 1
bi = buffer cost coefficient, $/part/time unit
ci = inventory cost coefficient, $/part/time unit

Assumptions

P(N) is monotonic and concave.
[Figure: P(N) vs. N1 and N2 (0–100 each) for the line M1 B1 M2 B2 M3; a monotonically increasing, concave surface with values from about 0.79 to 0.90.]

Ni can be treated as continuous variables (Schor 1995, Gershwin and Schor 2000).
P(N) and J(N) can be treated as continuously differentiable functions (Schor 1995, Gershwin and Schor 2000).
The decomposition is a good approximation.

The Gradient Method

[Figure 1: J(N) vs. N1 and N2 (0–100 each); a concave surface with values from about 1040 to 1220.]

The Gradient Method

We calculate the gradient direction to move in (N1 , ⋅ ⋅ ⋅ , Nk−1 ) space.
A line search is then conducted in that direction until a maximum is encountered. This becomes the next guess. A new direction is chosen and the process continues until no further improvement can be achieved.
There is no analytical expression to compute profits of lines having more than two machines. Consequently, to determine the search direction, we compute the gradient, g, according to a forward difference formula, which is

gi = [ J(N1 , ⋅ ⋅ ⋅ , Ni + δNi , ⋅ ⋅ ⋅ , Nk−1 ) − J(N1 , ⋅ ⋅ ⋅ , Ni , ⋅ ⋅ ⋅ , Nk−1 ) ] / δNi

where gi is the gradient component of buffer Bi , J is the profit of the line, and δNi is the increment of buffer Bi .
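A sketch of a search built on this forward-difference gradient (not in the original slides). It uses a small fixed step instead of the line search described above, and a concave toy function stands in for J(N) so the sketch runs on its own:

    # Gradient search with a forward-difference gradient (toy stand-in for J(N)).
    def grad(profit, N, dN=1.0):
        J0 = profit(N)
        return [(profit(N[:i] + [N[i] + dN] + N[i + 1:]) - J0) / dN
                for i in range(len(N))]

    def gradient_search(profit, N, step=0.05, iters=200, Nmin=1.0):
        for _ in range(iters):
            g = grad(profit, N)
            N = [max(Nmin, Ni + step * gi) for Ni, gi in zip(N, g)]
        return N

    toy = lambda N: -sum((Ni - 30.0) ** 2 for Ni in N)   # concave stand-in for J
    print(gradient_search(toy, [10.0, 10.0]))  # near [29.5, 29.5]; the 0.5 offset
                                               # is forward-difference bias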

Research topics

Topics that have been finished/are in process

Profit maximization for production lines with a production rate constraint.
Profit maximization for production lines with both time window constraint and production rate constraint.
Evaluation and profit maximization for lines with an arbitrary single loop structure.

Topics that might be considered in the future

Systems with quality control.
Systems with set-up cost for buffers.
etc.

MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Topic one: Production line profit maximization subject to a

production rate constraint

Production line profit maximization
The profit maximization problem

max_N J(N) = A P(N) − Σ_{i=1}^{k−1} bi Ni − Σ_{i=1}^{k−1} ci n̄i (N)
s.t. P(N) ≥ P̂ ,
Ni ≥ Nmin , ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

where P(N) = production rate, parts/time unit
P̂ = required production rate, parts/time unit
A = profit coefficient, $/part
n̄i (N) = average inventory of buffer i, i = 1, ⋅ ⋅ ⋅ , k − 1
bi = buffer cost coefficient, $/part/time unit
ci = inventory cost coefficient, $/part/time unit

An example about the research goal
"data.txt"
"feasible.txt"
1205
1200
1190
1180
1160
1140
1220 1120
1200 1100
1180 1080
1160 1060
J(N) 1140 Optimal
boundary
1120
1100
1080
1060
1040

100
90
80
70
0 60
10 50
20 30 40 N2
40 50 30
60 70 20
N1 80 90
10
1000

Figure 2: �(N) vs. �1 and �2

Two problems

Original constrained problem

max_N J(N) = A P(N) − Σ_{i=1}^{k−1} bi Ni − Σ_{i=1}^{k−1} ci n̄i (N)
s.t. P(N) ≥ P̂ ,
Ni ≥ Nmin , ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

Simpler unconstrained problem (Schor's problem)

max_N J(N) = A P(N) − Σ_{i=1}^{k−1} bi Ni − Σ_{i=1}^{k−1} ci n̄i (N)
s.t. Ni ≥ Nmin , ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

An example for algorithm derivation

Data
r1 = .1, p1 = .01, r2 = .11, p2 = .01, r3 = .1, p3 = .009, P̂ = .88
Cost function
J(N) = 2000 P(N) − N1 − N2 − n̄1 (N) − n̄2 (N)

[Figure 3: J(N) vs. N1 and N2. Figure 4: the (N1, N2) plane split by the curve P(N1, N2) = P̂ into regions with P(N1, N2) > P̂ and P(N1, N2) < P̂.]

An example for algorithm derivation

[Figure 5: J(N) vs. N1 and N2; values from about 1500 to 1660.]

Algorithm derivation

Two cases
Case 1
The solution of the unconstrained problem is N^u s.t. P(N^u) ≥ P̂ . In this case, the solution of the constrained problem is the same as the solution of the unconstrained problem. We are done.

Unconstrained problem:
max_N J(N) = A P(N) − Σ_{i=1}^{k−1} bi Ni − Σ_{i=1}^{k−1} ci n̄i (N)
s.t. Ni ≥ Nmin , ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

[Figure: the (N1 , N2 ) plane; the unconstrained optimum (N1^u , N2^u) lies inside the region P(N1 , N2 ) > P̂ .]

Algorithm derivation
Two cases (continued)
Case 2
N^u satisfies P(N^u) < P̂ . This is not the solution of the constrained problem.

[Figure: the (N1 , N2 ) plane; the unconstrained optimum (N1^u , N2^u) lies outside the region P(N1 , N2 ) > P̂ .]

Algorithm derivation

Two cases (continued)
Case 2 (continued)
In this case, we consider the following unconstrained problem:

max_N J(N) = A′ P(N) − Σ_{i=1}^{k−1} bi Ni − Σ_{i=1}^{k−1} ci n̄i (N)
s.t. Ni ≥ Nmin , ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

in which A is replaced by A′ . Let N★ (A′ ) be the solution to this problem and P★ (A′ ) = P(N★ (A′ )).

Assertion
The constrained problem

max_N J(N) = A′ P(N) − Σ_{i=1}^{k−1} bi Ni − Σ_{i=1}^{k−1} ci n̄i (N)
s.t. P(N) ≥ P̂ ,
Ni ≥ Nmin , ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

has the same solution for all A′ in which the solution of the corresponding unconstrained problem

max_N J(N) = A′ P(N) − Σ_{i=1}^{k−1} bi Ni − Σ_{i=1}^{k−1} ci n̄i (N)
s.t. Ni ≥ Nmin , ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

has P★ (A′ ) ≤ P̂ .
Interpretation of the assertion
We claim
If the optimal solution of the unconstrained problem is not that of the constrained problem, then the solution of the constrained problem, (N1★ , ⋅ ⋅ ⋅ , Nk−1★ ), satisfies P(N1★ , ⋅ ⋅ ⋅ , Nk−1★ ) = P̂ .

[Figure: a one-buffer example, max_N J(N) = 500 P(N) − N − n̄(N) s.t. P(N) ≥ P̂ , N ≥ Nmin ; when the constraint binds this becomes max_N J(N) = 500 P̂ − N − n̄(N) with P(N) = P̂ . The plot of A′P(N) and the cost vs. N marks N★ (A′ ), where P(N★ (A′ )) < P̂ .]

We formally prove this by the Karush-Kuhn-Tucker (KKT) conditions of nonlinear programming.
Interpretation of the assertion

[Figure: J(N) surfaces with the optimal boundary marked, for A = 1500 (the original problem) and for A′ = 2500, A′ = 3500, and A′ = 4535.82 (the final A′); as A′ grows, the unconstrained maximum moves onto the boundary P(N) = P̂.]

Karush-Kuhn-Tucker (KKT) conditions
Let x★ be a local minimum of the problem

min f (x)
s.t. h1 (x) = 0, ⋅ ⋅ ⋅ , hm (x) = 0,
g1 (x) ≤ 0, ⋅ ⋅ ⋅ , gr (x) ≤ 0,

where f , hi , and gj are continuously differentiable functions from ℜn to ℜ. Then there exist unique Lagrange multipliers λ1★ , ⋅ ⋅ ⋅ , λm★ and µ1★ , ⋅ ⋅ ⋅ , µr★ , satisfying the following conditions:

∇x L(x★ , λ★ , µ★ ) = 0,
µj★ ≥ 0, j = 1, ⋅ ⋅ ⋅ , r,
µj★ gj (x★ ) = 0, j = 1, ⋅ ⋅ ⋅ , r,

where L(x, λ, µ) = f (x) + Σ_{i=1}^{m} λi hi (x) + Σ_{j=1}^{r} µj gj (x) is called the Lagrangian function.
Convert the constrained problem to minimization form

Minimization form
The constrained problem

min_N −J(N) = −A P(N) + Σ_{i=1}^{k−1} bi Ni + Σ_{i=1}^{k−1} ci n̄i (N)
s.t. P̂ − P(N) ≤ 0
Nmin − Ni ≤ 0, ∀i = 1, ⋅ ⋅ ⋅ , k − 1

We have argued that we treat Ni as continuous variables, and P(N) and J(N) as continuously differentiable functions.

Applying KKT conditions
The Slater constraint qualification for convex inequalities guarantees the existence of Lagrange multipliers for our problem. So, there exist unique Lagrange multipliers µi★ , i = 0, ⋅ ⋅ ⋅ , k − 1 for the constrained problem to satisfy the KKT conditions:

−∇J(N★ ) + µ0★ ∇(P̂ − P(N★ )) + Σ_{i=1}^{k−1} µi★ ∇(Nmin − Ni ) = 0    (1)

or, componentwise,

−∂J(N★ )/∂Ni − µ0★ ∂P(N★ )/∂Ni − µi★ = 0, i = 1, ⋅ ⋅ ⋅ , k − 1,    (2)

Applying KKT conditions

and

µi★ ≥ 0, ∀i = 0, ⋅ ⋅ ⋅ , k − 1,    (3)

µ0★ (P̂ − P(N★ )) = 0,    (4)

µi★ (Nmin − Ni★ ) = 0, ∀i = 1, ⋅ ⋅ ⋅ , k − 1,    (5)

where N★ is the optimal solution of the constrained problem. Assume that Ni★ > Nmin for all i. In this case, by equation (5), we know that µi★ = 0, ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

Applying KKT conditions

The KKT conditions are simplified to

−∂J(N★ )/∂Ni − µ0★ ∂P(N★ )/∂Ni = 0, i = 1, ⋅ ⋅ ⋅ , k − 1,    (6)

µ0★ (P̂ − P(N★ )) = 0,    (7)

where µ0★ ≥ 0. Since N★ is not the optimal solution of the unconstrained problem, ∇J(N★ ) ≠ 0. Thus, µ0★ ≠ 0 since otherwise condition (6) would be violated. By condition (7), the optimal solution N★ satisfies P(N★ ) = P̂ .

Applying KKT conditions
The KKT conditions are simplified to

−∂J(N★ )/∂Ni − µ0★ ∂P(N★ )/∂Ni = 0, i = 1, ⋅ ⋅ ⋅ , k − 1,    (6)

µ0★ (P̂ − P(N★ )) = 0.    (7)

In addition, conditions (6) and (7) reveal how we could find µ0★ and N★ . For every µ0★ , condition (6) determines N★ since there are k − 1 equations and k − 1 unknowns. Therefore, we can think of N★ = N★ (µ0★ ). We search for a value of µ0★ such that P(N★ (µ0★ )) = P̂ . As we indicate in the following, this is exactly what the algorithm does.
Applying KKT conditions
Replacing µ0★ by µ0 > 0 in condition (6) gives

−∂J(N(µ0 ))/∂Ni − µ0 ∂P(N(µ0 ))/∂Ni = 0, i = 1, ⋅ ⋅ ⋅ , k − 1,    (8)

where N(µ0 ) is the unique solution of (8). Note that N(µ0 ) is the solution of the following optimization problem:

min_N −J̄(N) = −J(N) + µ0 (P̂ − P(N))    (9)
s.t. Nmin − Ni ≤ 0, ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

Applying KKT conditions
The problem above is equivalent to

max_N J̄(N) = J(N) − µ0 (P̂ − P(N))    (10)
s.t. Nmin − Ni ≤ 0, ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

or

max_N J̄(N) = A P(N) − Σ_{i=1}^{k−1} bi Ni − Σ_{i=1}^{k−1} ci n̄i − µ0 (P̂ − P(N))    (11)
s.t. Nmin − Ni ≤ 0, ∀i = 1, ⋅ ⋅ ⋅ , k − 1.

or (dropping the constant −µ0 P̂ , which does not affect the maximizer)

max_N J̄(N) = (A + µ0 )P(N) − Σ_{i=1}^{k−1} bi Ni − Σ_{i=1}^{k−1} ci n̄i    (12)
s.t. Ni ≥ Nmin , ∀i = 1, ⋅ ⋅ ⋅ , k − 1.
⃝2010
c Chuan Shi — Topic one: Line optimization : Proofs of the algorithm by KKT conditions 41/79
Applying KKT conditions
or, finally,

    max_N J̄(N) = A′ P(N) − Σ_{i=1}^{k−1} b_i N_i − Σ_{i=1}^{k−1} c_i n̄_i
                                                              (13)
    s.t. N_i ≥ N_min,    ∀ i = 1, ⋯, k − 1,

where A′ = A + λ_0. This is exactly the unconstrained problem, and N^λ
is its optimal solution. Note that λ_0 > 0 implies A′ > A.

In addition, the KKT conditions indicate that the optimal solution of the
constrained problem, N★, satisfies P(N★) = P̂. This means that, for every
A′ > A (or λ_0 > 0), we can find the corresponding optimal solution N^λ
satisfying condition (8) by solving problem (13). We need to find the
A′ such that the solution to problem (13), denoted N★(A′), satisfies
P(N★(A′)) = P̂.
⃝2010
c Chuan Shi — Topic one: Line optimization : Proofs of the algorithm by KKT conditions 42/79
Applying KKT conditions

Then, λ_0 = A′ − A and N★(A′) satisfy conditions (6) and (7):

    −∇J(N★(A′)) + λ★_0 ∇(P̂ − P(N★(A′))) = 0,

    λ★_0 (P̂ − P(N★(A′))) = 0.

Hence, λ★_0 = A′ − A is exactly the Lagrange multiplier satisfying the KKT
conditions of the constrained problem, and N★ = N★(A′) is the optimal
solution of the constrained problem.

Consequently, solving the constrained problem through our algorithm is
essentially finding the unique Lagrange multipliers and optimal solution
of the problem.

⃝2010
c Chuan Shi — Topic one: Line optimization : Proofs of the algorithm by KKT conditions 43/79
Algorithm summary for case 2

Solve unconstrained problem

Solve, by a gradient method, the unconstrained problem for fixed A′:

    max_N J̄(N) = A′ P(N) − Σ_{i=1}^{k−1} b_i N_i − Σ_{i=1}^{k−1} c_i n̄_i(N)

    s.t. N_i ≥ N_min,    ∀ i = 1, ⋯, k − 1.

Search

Do a one-dimensional search on A′ > A to find the A′ such that the
solution of the unconstrained problem, N★(A′), satisfies

    P(N★(A′)) = P̂.

[Flowchart: choose A′ → solve the unconstrained problem → if P(N★(A′)) = P̂, quit; otherwise update A′ and repeat.]
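To make the search concrete, here is a minimal Python sketch of the case-2 procedure. The slides specify only a one-dimensional search; bracketing plus bisection below is one concrete choice, not the thesis implementation. `P` and `n_bar` stand for black-box evaluators of the line's production rate and average buffer levels (e.g., from a decomposition evaluation), and `b`, `c` are NumPy cost vectors; all names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def solve_unconstrained(A_prime, P, n_bar, b, c, N_min, N0):
    """Maximize A'*P(N) - sum(b_i N_i) - sum(c_i nbar_i(N)), N_i >= N_min."""
    obj = lambda N: -(A_prime * P(N) - b @ N - c @ n_bar(N))
    return minimize(obj, N0, bounds=[(N_min, None)] * len(N0)).x

def case2_search(A, P, n_bar, b, c, N_min, N0, P_hat, tol=1e-4):
    """Search A' > A until the unconstrained optimum satisfies P = P_hat."""
    lo, hi = A, 2.0 * A
    # bracket: P(N*(A')) is assumed increasing in A'
    while P(solve_unconstrained(hi, P, n_bar, b, c, N_min, N0)) < P_hat:
        hi *= 2.0
    while hi - lo > tol:  # bisection on A'
        mid = 0.5 * (lo + hi)
        N_mid = solve_unconstrained(mid, P, n_bar, b, c, N_min, N0)
        lo, hi = (mid, hi) if P(N_mid) < P_hat else (lo, mid)
    return solve_unconstrained(hi, P, n_bar, b, c, N_min, N0)
```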

⃝2010
c Chuan Shi — Topic one: Line optimization : Proofs of the algorithm by KKT conditions 44/79
Numerical results

Numerical experiment outline

    Experiments on short lines.

    Experiments on long lines.

    Computation speed.

Method we use to check the algorithm

    P̂ surface search in (N_1, ⋯, N_{k−1}) space. All buffer size allocations N
    such that P(N) = P̂ compose the P̂ surface.

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 45/79
P̂ surface search

[3-D plot: the P̂ surface over (N_1, N_2, N_3), with the optimal solution marked on the surface.]

Figure 6: P̂ surface search

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 46/79
Experiment on short lines (4-buffer line)

Line parameters: P̂ = .88

    machine   M1     M2    M3    M4    M5
    r         .11    .12   .10   .09   .10
    p         .008   .01   .01   .01   .01

Machine 4 is the least reliable machine (the bottleneck) of the line.

Cost function:

    J(N) = 2500 P(N) − Σ_{i=1}^{4} N_i − Σ_{i=1}^{4} n̄_i(N)

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 47/79
Experiment on short lines (4-buffer line)

Results

Optimal solutions:

               P̂ Surface Search   The algorithm   Error    Rounded N★
    Prod. rate .8800              .8800                    .8800
    N★_1       28.85              28.8570         0.02%    29.0000
    N★_2       58.46              58.5694         0.19%    59.0000
    N★_3       92.98              92.9068         0.08%    93.0000
    N★_4       87.39              87.4415         0.06%    87.0000
    n̄_1        19.0682            19.0726         0.02%    19.1791
    n̄_2        34.3084            34.3835         0.23%    34.7289
    n̄_3        48.7200            48.6981         0.04%    48.9123
    n̄_4        31.9894            32.0063         0.05%    31.9485
    Profit ($) 1798.2             1798.1          0.006%   1797.4000

The maximal error is 0.23% and appears in n̄_2.
Computer time for this experiment is 2.69 seconds.

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 48/79
Experiment on long lines (11-buffer line)

Line parameters: P̂ = .88

    machine   M1     M2    M3     M4     M5    M6
    r         .11    .12   .10    .09    .10   .11
    p         .008   .01   .01    .01    .01   .01

    machine   M7     M8    M9     M10    M11   M12
    r         .10    .11   .12    .10    .12   .09
    p         .009   .01   .009   .008   .01   .009

Cost function:

    J(N) = 6000 P(N) − Σ_{i=1}^{11} N_i − Σ_{i=1}^{11} n̄_i(N)
�=1 �=1

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 49/79
Experiment on long lines (11-buffer line)

Results

Optimal solutions, buffer sizes:

               P̂ Surface Search   The algorithm   Error     Rounded N★
    Prod. rate .8800              .8800                     .8799
    N★_1       29.10              29.1769         0.26%     29.0000
    N★_2       59.20              59.2830         0.14%     59.0000
    N★_3       97.80              97.7980         0.002%    98.0000
    N★_4       107.50             107.4176        0.08%     107.0000
    N★_5       84.50              84.4804         0.02%     84.0000
    N★_6       70.80              70.6892         0.17%     71.0000
    N★_7       63.10              63.1893         0.14%     63.0000
    N★_8       53.10              52.9274         0.33%     53.0000
    N★_9       47.20              47.2232         0.05%     47.0000
    N★_10      47.90              47.7967         0.22%     48.0000
    N★_11      48.80              48.7716         0.06%     49.0000

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 50/79
Experiment on long lines (11-buffer line)

Results (continued)

Optimal solutions, average inventories:

               P̂ Surface Search   The algorithm   Error    Rounded N★
    n̄_1        19.2388            19.2986         0.31%    19.1979
    n̄_2        34.9561            35.0423         0.25%    34.8194
    n̄_3        52.5423            52.6032         0.12%    52.6833
    n̄_4        45.1528            45.1840         0.07%    45.0835
    n̄_5        34.4289            34.4770         0.14%    34.2790
    n̄_6        30.7073            30.7048         0.01%    30.8229
    n̄_7        28.0446            28.1299         0.30%    28.0902
    n̄_8        21.5666            21.5438         0.11%    21.5932
    n̄_9        21.5059            21.5442         0.18%    21.4299
    n̄_10       22.6756            22.6496         0.11%    22.7303
    n̄_11       20.8692            20.8615         0.04%    20.9613
    Profit ($) 4239.3             4239.2          0.002%   4239.5000

Computer time is 91.47 seconds.

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 51/79
Experiments for Tolio, Matta, and Gershwin (2002) model
Consider a 4-machine 3-buffer line with constraint P̂ = .87. In addition, A =
2000 and all b_i and c_i are 1.

    machine   M1    M2     M3    M4
    r_{i1}    .10   .12    .10   .20
    p_{i1}    .01   .008   .01   .007
    r_{i2}    –     .20    –     .16
    p_{i2}    –     .005   –     .004

               P̂ Surf. Search   The algorithm   Error
    P(N★)      .8699            .8699
    N★_1       29.8600          29.9930         0.45%
    N★_2       38.2200          38.0206         0.52%
    N★_3       20.6800          20.7616         0.39%
    n̄_1        17.2779          17.3674         0.52%
    n̄_2        17.2602          17.1792         0.47%
    n̄_3        6.1996           6.2121          0.20%
    Profit ($) 1610.3000        1610.3000       0.00%

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 52/79
Experiments for Levantesi, Matta, and Tolio (2003) model
Consider a 4-machine 3-buffer line with constraint P̂ = .87. In addition, A =
2000 and all b_i and c_i are 1.

    machine   M1    M2     M3    M4
    μ_i       1.0   1.02   1.0   1.0
    r_{i1}    .10   .12    .10   .20
    p_{i1}    .01   .008   .01   .012
    r_{i2}    –     .20    –     .16
    p_{i2}    –     .005   –     .006

               P̂ Surf. Search   The algorithm   Error
    P(N★)      .8699            .8700
    N★_1       27.7200          27.9042         0.66%
    N★_2       38.7900          38.9281         0.34%
    N★_3       34.0700          34.1574         0.26%
    n̄_1        15.4288          15.5313         0.66%
    n̄_2        19.8787          19.9711         0.46%
    n̄_3        13.8937          13.9426         0.35%
    Profit ($) 1590.0000        1589.7000       0.02%

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 53/79
Computation speed

Experiment

    Run the algorithm on a series of lines with identical machines to see
    how fast the algorithm can optimize longer lines.

    The length of the line varies from 4 machines to 30 machines.

    Machine parameters are p = .01 and r = .1.

    In all cases, the required production rate is P̂ = .88.

    The objective function is

        J(N) = A P(N) − Σ_{i=1}^{k−1} N_i − Σ_{i=1}^{k−1} n̄_i(N),

    where A = 500k for the line of length k.

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 54/79
Computation speed

[Plot: computer time in seconds (up to about 3000) versus length of production line (5 to 30 machines), with reference levels at 1 minute, 3 minutes, and 10 minutes.]

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 55/79
Algorithm reliability
We run the algorithm on 739 randomly generated 4-machine 3-buffer lines.
98.92% of these experiments have a maximal error less than 6%.

[Histogram: frequency of experiments versus maximal error, over the range 0 to 1.]

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 56/79
Algorithm reliability
Taking a closer look at that 98.92% of the experiments, we find a more detailed
distribution of the maximal error: out of the total 739 experiments,
83.90% have a maximal error of less than 2%.

[Histogram: frequency of experiments versus maximal error, over the range 0 to 0.1.]

⃝2010
c Chuan Shi — Topic one: Line optimization : Numerical results 57/79
Topic two: Production line profit maximization subject to both

time window constraint and production rate constraint

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : 58/79
Line optimization with time window constraint
Motivation
A time-window constraint between operations means that the time a part
waits for the next operation after the previous operation must be kept below
a fixed value, to guarantee the quality of the part. Such constraints are
common in the semiconductor industry (Kitamura, Mori, and Ono 2006). As
examples,

Robinson and Giglio (1999) mentioned that a baking operation must be
started within two hours of a prior clean operation. If more than two hours
elapse, the lot must be sent back to be cleaned again.

Lu, Ramaswamy, and Kumar (1994) studied efficient scheduling policies
to reduce the mean and variance of cycle time, and pointed out that the
shorter the period that wafers are exposed to aerial contaminants while
waiting for processing, the smaller the yield loss.

Yang and Chern (1995) noted such time-window constraints in food,
chemical, and steel production.

For surveys, see Neacy, Brown, and McKiddie (1994) and Uzsoy, Lee, and
Martin-Vega (1992).
⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Problem and motivation 59/79
Mathematical model

Mathematical expression of the constraint

We transform this constraint (for a specific buffer B_ĵ) into our mathematical
model through Little's law (Little 1961), as

    n̄_ĵ = W_ĵ P(N_1, ⋯, N_{k−1}),

where W_ĵ is the average waiting time of a part in buffer B_ĵ and n̄_ĵ is the
average inventory of buffer B_ĵ. If we further let Ŵ_ĵ be the time limit, then
we require W_ĵ ≤ Ŵ_ĵ, or

    n̄_ĵ ≤ Ŵ_ĵ P(N_1, ⋯, N_{k−1}).

Note that the constraint above bounds the average part waiting time, NOT the
maximal part waiting time, by Ŵ_ĵ. To address this concern, we may reduce
Ŵ_ĵ in the constraint by a multiplier 0 < γ < 1, to γŴ_ĵ, so that the
probability that a part waits no longer than Ŵ_ĵ meets a specified confidence
level.
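As a quick worked check of this transformation, a sketch with made-up numbers (symbols as above):

```python
# Illustrative numbers only: a line producing P = 0.88 parts per time unit
# with a time-window bound of W_hat = 7 time units on buffer j.
P = 0.88                # production rate P(N), parts per time unit
W_hat = 7.0             # allowed average waiting time in buffer B_j
n_bar_cap = W_hat * P   # Little's law: equivalent inventory bound
print(n_bar_cap)        # 6.16 -> require n_bar_j <= 6.16 in the optimization
```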

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Problem and motivation 60/79
Mathematical model

The optimization problem

    max_N J(N) = A P(N) − Σ_{i=1}^{k−1} b_i N_i − Σ_{i=1}^{k−1} c_i n̄_i(N)

    s.t. P(N) ≥ P̂,

         n̄_ĵ ≤ Ŵ_ĵ P(N),

         N_i ≥ N_min,    ∀ i = 1, ⋯, k − 1.

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Problem and motivation 61/79
Five cases

The optimization problem has 5 cases in general. They are:

    The production rate constraint conflicts with the time-window
    constraint. Therefore, there is no feasible solution to the problem.

    The optimal solution exists. Both the production rate constraint and
    the time window constraint are active.

    The optimal solution exists. The production rate constraint is active,
    while the time window constraint is inactive.

    The optimal solution exists. The production rate constraint is inactive,
    while the time window constraint is active.

    The optimal solution exists. Both the production rate constraint and
    the time window constraint are inactive.

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Five cases 62/79
Five cases

Consider a three-machine two-buffer line with machine parameters r_1 =
.15, p_1 = .01, r_2 = .15, p_2 = .01, r_3 = .09, and p_3 = .01. In addition,
consider these 5 cases:

    Case 1: P̂ = .89 and Ŵ_1 = 2.

    Case 2: P̂ = .88 and Ŵ_1 = 7.

    Case 3: P̂ = .88 and Ŵ_1 = 15.

    Case 4: P̂ = .86 and Ŵ_1 = 6.5.

    Case 5: P̂ = .86 and Ŵ_1 = 15.

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Five cases 63/79
Case example -- Case 1: P̂ = .89 and Ŵ_1 = 2

The production rate constraint conflicts with the time-window constraint.
Therefore, there is no feasible solution to the problem.

[Surface plot of J(N) over (N_1, N_2), showing the P̂ boundary and the Ŵ boundary.]

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Five cases 64/79
Case example -- Case 2: P̂ = .88 and Ŵ_1 = 7

The optimal solution exists. Both the production rate constraint and the time
window constraint are active.

[Surface plot of J(N) over (N_1, N_2), showing the optimal solution and the P̂ and Ŵ boundaries.]

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Five cases 65/79
Case example -- Case 3: P̂ = .88 and Ŵ_1 = 15

The optimal solution exists. The production rate constraint is active, while the
time window constraint is inactive.

[Surface plot of J(N) over (N_1, N_2), showing the optimal solution and the P̂ and Ŵ boundaries.]

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Five cases 66/79
Case example -- Case 4: P̂ = .86 and Ŵ_1 = 6.5

The optimal solution exists. The production rate constraint is inactive, while
the time window constraint is active.

[Surface plot of J(N) over (N_1, N_2), showing the optimal solution and the P̂ and Ŵ boundaries.]

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Five cases 67/79
Case example -- Case 5: P̂ = .86 and Ŵ_1 = 15

The optimal solution exists. Both the production rate constraint and the time
window constraint are inactive.

[Surface plot of J(N) over (N_1, N_2), showing the optimal solution and the P̂ and Ŵ boundaries.]

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Five cases 68/79
Algorithm derivation

We extend the algorithm in Topic 1 to solve the new optimization problem
with both the time-window constraint and the production rate constraint. For
the case where both constraints are active, one constraint qualification we
can use to guarantee the existence of Lagrange multipliers is that
∇(n̄_ĵ(N★) − Ŵ_ĵ P(N★)) and ∇(P̂ − P(N★)) are linearly independent.

This is equivalent to requiring that ∇n̄_ĵ(N★) and ∇P(N★) be linearly
independent. Since all components of ∇P(N★) are positive due to the
monotonicity of P(N), but ∇n̄_ĵ(N★) has both positive and negative
components, they are linearly independent².

² We will provide a formal proof of this in the future.
⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Algorithm derivation 69/79
Algorithm derivation

Applying the KKT conditions, we have

    −∂J(N★)/∂N_i + λ★_0 (∂n̄_ĵ(N★)/∂N_i − Ŵ_ĵ ∂P(N★)/∂N_i) − λ★_1 ∂P(N★)/∂N_i = 0,
                                             i = 1, ⋯, k − 1,    (14)

    λ★_0 (n̄_ĵ(N★) − Ŵ_ĵ P(N★)) = 0,    (15)

and

    λ★_1 (P̂ − P(N★)) = 0.    (16)

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Algorithm derivation 70/79
Algorithm derivation

Solving these three conditions is equivalent to searching over λ_0 and λ_1 and
solving the following optimization problem

    max_N J̄(N) = (A + λ_0 Ŵ_ĵ + λ_1) P(N) − Σ_{i=1}^{k−1} b_i N_i − Σ_{i=1}^{k−1} c_i n̄_i(N) − λ_0 n̄_ĵ(N)

    s.t. N_min − N_i ≤ 0,    ∀ i = 1, ⋯, k − 1,
                                                              (17)

until its solution N^λ satisfies P(N^λ) = P̂ and n̄_ĵ(N^λ) = Ŵ_ĵ P(N^λ). Then
N★ = N^λ.

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Algorithm derivation 71/79
Algorithm summary

1. Check the feasibility of the problem.

2. Solve the problem with only the production rate constraint, and let N^P
   denote the solution. Check whether n̄_ĵ(N^P) ≤ Ŵ_ĵ P(N^P). If yes, then we
   are done and N★ = N^P. If not, go to step 3.

3. Solve the problem with only the time-window constraint, and let N^W
   denote the solution. Check whether P(N^W) ≥ P̂. If yes, then we are done
   and N★ = N^W. If not, go to step 4.

4. Solve the problem with both constraints (see the sketch below).
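A compact driver for these four steps might look like the following sketch. The solver callables are hypothetical stand-ins: `solve_rate` and `solve_window` for the single-multiplier searches (as in Topic 1) and `solve_both` for the two-multiplier search over (λ_0, λ_1); feasibility (step 1) is assumed to have been checked already:

```python
def optimize_with_time_window(P, n_bar_j, P_hat, W_hat,
                              solve_rate, solve_window, solve_both):
    """Steps 2-4 of the algorithm; step 1 (feasibility) is assumed done."""
    N = solve_rate(P_hat)              # step 2: impose only P(N) >= P_hat
    if n_bar_j(N) <= W_hat * P(N):     # time window already satisfied?
        return N
    N = solve_window(W_hat)            # step 3: impose only the time window
    if P(N) >= P_hat:                  # rate constraint already satisfied?
        return N
    return solve_both(P_hat, W_hat)    # step 4: both constraints active
```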

⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Algorithm derivation 72/79
Numerical results
Consider a 6-machine 5-buffer line with constraints P̂ = .83 and Ŵ_3 = 7. In
addition, A = 3000 and all b_i and c_i are 1.

    machine   M1    M2    M3    M4    M5    M6
    r         .15   .15   .09   .10   .11   .10
    p         .01   .01   .01   .01   .01   .01

                     P̂ Surface Search   The algorithm   Error
    P(N★)            .8300               .8300
    n̄_3(N★)/P(N★)    6.9999              7.0884          1.26%
    N★_1             8.4800              8.5010          0.25%
    N★_2             21.2800             21.1960         0.39%
    N★_3             12.5200             12.6625         1.14%
    N★_4             40.5200             39.8855         1.57%
    N★_5             24.5200             24.6071         0.36%
    n̄_1              5.6656              5.6818          0.29%
    n̄_2              13.5400             13.4803         0.44%
    n̄_3              5.8099              5.8834          1.27%
    n̄_4              13.3901             13.2503         1.04%
    n̄_5              8.0201              8.0387          0.23%
    Profit ($)       2336.2582           2336.8139       0.02%
⃝2010
c Chuan Shi — Topic two: Line opt. with time window constraint : Numerical results 73/79
Research in process and Research extensions

⃝2010
c Chuan Shi — Research in process and Research extensions : 74/79
Topic three: loop optimization

Loop-start machine
B7

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7

Loop-end machine

Processes that utilize pallets or fixtures can be viewed as loops, since
the number of pallets/fixtures that are in the system remains constant.

Control policies such as Constant Work-in-process (CONWIP) and


Kanban create conceptual loops by imposing a limit on the number
of parts that can be in the system at any given time.

⃝2010
c Chuan Shi — Research in process and Research extensions : Research in process 75/79
Topic three: loop optimization

Prior research and assumption


Single loop evaluation by Gershwin and Werner (2005), multiple loop

evaluation by Zhang (2006).

Concavity of P(N, K).

Work
Develop analytical solutions for two-machine-line evaluation with no

delay machines based on Tolio, Matta, and Gershwin (2002).

Extend and improve loop evaluation algorithm for single arbitrary

loops.

Present the optimization algorithm, which is an extension of the

algorithm for line optimization.

Prove this algorithm theoretically by the KKT conditions of nonlinear

programming, and verify this algorithm numerically.

⃝2010
c Chuan Shi — Research in process and Research extensions : Research in process 76/79
Possible extension 1: quality control

Taking quality control into account, we assume that machines generate
both good parts and bad parts. Unfortunately, buffers delay the inspection
of bad parts.

Kim and Gershwin (2005) pointed out that, in the case of our example
above, an increase of buffer size could either increase or decrease the
production rate for different lines.

⃝2010
c Chuan Shi — Research in process and Research extensions : Research extensions 77/79
Possible extension 2: Set-up cost for buffers

This means that whenever we decide to establish a buffer between two
machines (or machine sets), we introduce a fixed buffer set-up cost. After
the buffer is established, the buffer space cost is proportional to its
size. So, in this case, the space cost of buffer B_i will be 0 if N_i < N_min,
or a_i + b_i N_i if N_i ≥ N_min.

⃝2010
c Chuan Shi — Research in process and Research extensions : Research extensions 78/79
Question session

Thank you!

⃝2010
c Chuan Shi — Research in process and Research extensions : Research extensions 79/79
MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT 2.852

Manufacturing Systems Analysis

Lectures 6–9: Flow Lines

Models That Can Be Analyzed Exactly

Stanley B. Gershwin

http://web.mit.edu/manuf-sys

Massachusetts Institute of Technology

Spring, 2010

2.852 Manufacturing Systems Analysis 1/165 c


Copyright 2010 Stanley B. Gershwin.
Models

◮ The purpose of an engineering or scientific model is to make


predictions.
◮ Kinds of models:
◮ Mathematical: aggregated behavior is described by equations.
Predictions are made by solving the equations.
◮ Simulation: detailed behavior is described. Predictions are made by
reproducing behavior.
◮ Models are simplifications of reality.
◮ Models that are too simple make poor predictions because they leave
out important features.
◮ Models that are too complex make poor predictions because they are
difficult to analyze or are time-consuming to use, because they require
more data, or because they have errors.

2.852 Manufacturing Systems Analysis 2/165 c


Copyright 2010 Stanley B. Gershwin.
Flow Line

... also known as a Production or Transfer Line.

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

Machine Buffer

◮ Machines are unreliable.


◮ Buffers are finite.

2.852 Manufacturing Systems Analysis 3/165 c


Copyright 2010 Stanley B. Gershwin.
Flow Line
Motivation

◮ Economic importance.
◮ Relative simplicity for analysis and for intuition.

2.852 Manufacturing Systems Analysis 4/165 c


Copyright 2010 Stanley B. Gershwin.
Flow Line

Buffers and Inventory


◮ Buffers are for mitigating asynchronization (i.e., they are shock
absorbers).
◮ Buffer space and inventory are expensive.

2.852 Manufacturing Systems Analysis 5/165 c


Copyright 2010 Stanley B. Gershwin.
Flow Line
Analysis Difficulties

◮ Complex behavior.
◮ Analytical solution available only for limited systems.
◮ Exact numerical solution feasible only for systems with a small
number of buffers.
◮ Simulation may be too slow for optimization.

2.852 Manufacturing Systems Analysis 6/165 c


Copyright 2010 Stanley B. Gershwin.
Flow Line
Output Variability

[Plot: weekly production versus week. Production output from a simulation of a transfer line.]

2.852 Manufacturing Systems Analysis 7/165 c


Copyright 2010 Stanley B. Gershwin.
Flow Line
Usual General Assumptions

◮ Unlimited repair personnel.


◮ Uncorrelated failures.
◮ Perfect yield.
◮ The first machine is never starved and the last is never blocked.
◮ Blocking before service.
◮ Operation dependent failures.

2.852 Manufacturing Systems Analysis 8/165 c


Copyright 2010 Stanley B. Gershwin.
Single Reliable Machine

◮ If the machine is perfectly reliable, and its average operation time is


τ , then its maximum production rate is 1/τ .
◮ Note:
◮ Sometimes cycle time is used instead of operation time, but
BEWARE: cycle time has two meanings!
◮ The other meaning is the time a part spends in a system. If the system
is a single, reliable machine, the two meanings are the same.

2.852 Manufacturing Systems Analysis 9/165 c


Copyright 2010 Stanley B. Gershwin.
Single Unreliable Machine
ODFs

◮ Operation-Dependent Failures
◮ A machine can only fail while it is working.
◮ IMPORTANT! MTTF must be measured in working time!
◮ This is the usual assumption.
◮ Note: MTBF = MTTF + MTTR

2.852 Manufacturing Systems Analysis 10/165 c


Copyright 2010 Stanley B. Gershwin.
Single Unreliable Machine
Production rate

◮ If the machine is unreliable, and


◮ its average operation time is τ ,
◮ its mean time to fail is MTTF,
◮ its mean time to repair is MTTR,
then its maximum production rate is
 
    (1/τ) · MTTF / (MTTF + MTTR)

2.852 Manufacturing Systems Analysis 11/165 c


Copyright 2010 Stanley B. Gershwin.
Single Unreliable Machine
Production rate

Proof
[Timeline: alternating machine UP and DOWN periods.]

◮ Average production rate, while machine is up, is 1/τ .


◮ Average duration of an up period is MTTF.
◮ Average production during an up period is MTTF/τ .
◮ Average duration of up-down period: MTTF + MTTR.
◮ Average production during up-down period: MTTF/τ .
◮ Therefore, average production rate is
(MTTF/τ )/(MTTF + MTTR).

2.852 Manufacturing Systems Analysis 12/165 c


Copyright 2010 Stanley B. Gershwin.
Single Unreliable Machine
Geometric Up- and Down-Times

◮ Assumptions: Operation time is constant (τ ). Failure and repair


times are geometrically distributed.

◮ Let p be the probability that a machine fails during any given


operation. Then p = τ /MTTF.

2.852 Manufacturing Systems Analysis 13/165 c


Copyright 2010 Stanley B. Gershwin.
Single Unreliable Machine

◮ Let r be the probability that M gets repaired during any operation


time when it is down. Then r = τ /MTTR.

◮ Then the average production rate of M is


 
    (1/τ) · r / (r + p).
◮ (Sometimes we forget to say “average.”)
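A quick numerical check that the two parameterizations agree (a sketch; the parameter values are just for illustration):

```python
# Isolated efficiency of one unreliable machine, in both parameterizations.
tau = 1.0                 # operation time
MTTF, MTTR = 100.0, 10.0  # mean time to fail / to repair (working time!)
p, r = tau / MTTF, tau / MTTR
print((1 / tau) * MTTF / (MTTF + MTTR))  # 0.909...
print((1 / tau) * r / (r + p))           # same value: 0.909...
```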

2.852 Manufacturing Systems Analysis 14/165 c


Copyright 2010 Stanley B. Gershwin.
Single Unreliable Machine
Production Rates

◮ So far, the machine really has three production rates:


◮ 1/τ when it is up (short-term capacity) ,
◮ 0 when it is down (short-term capacity) ,
◮ (1/τ )(r /(r + p)) on the average (long-term capacity) .

2.852 Manufacturing Systems Analysis 15/165 c


Copyright 2010 Stanley B. Gershwin.
Infinite-Buffer Line

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

Assumptions:
◮ A machine is not idle if it is not starved.
◮ The first machine is never starved.

2.852 Manufacturing Systems Analysis 16/165 c


Copyright 2010 Stanley B. Gershwin.
Infinite-Buffer Line

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

◮ The production rate of the line is the production rate of the slowest
machine in the line — called the bottleneck .
◮ Slowest means least average production rate, where average
production rate is calculated from one of the previous formulas.

2.852 Manufacturing Systems Analysis 17/165 c


Copyright 2010 Stanley B. Gershwin.
Infinite-Buffer Line

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

◮ Production rate is therefore


 
    P = min_i (1/τ_i) · MTTF_i / (MTTF_i + MTTR_i)

◮ and Mi is the bottleneck.
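A small sketch of this computation (the machine data are illustrative):

```python
# Production rate and bottleneck of an infinite-buffer line.
machines = [  # (tau, MTTF, MTTR) per machine; numbers are illustrative
    (1.0, 100.0, 10.0),
    (1.0,  90.0, 12.0),
    (1.0, 110.0,  9.0),
]
rates = [(1 / tau) * f / (f + r) for tau, f, r in machines]
P = min(rates)                    # line rate = slowest machine's rate
bottleneck = rates.index(P) + 1   # 1-based machine index
print(P, bottleneck)
```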

2.852 Manufacturing Systems Analysis 18/165 c


Copyright 2010 Stanley B. Gershwin.
Infinite-Buffer Line

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

◮ The system is not in steady state.


◮ An infinite amount of inventory accumulates in the buffer upstream of
the bottleneck.
◮ A finite amount of inventory appears downstream of the bottleneck.

2.852 Manufacturing Systems Analysis 19/165 c


Copyright 2010 Stanley B. Gershwin.
Infinite-Buffer Line

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 B7 M8 B8 M9 B9 M10

[Simulation plot: levels of Buffers 1-9 versus time.]

2.852 Manufacturing Systems Analysis 20/165 c


Copyright 2010 Stanley B. Gershwin.
Infinite-Buffer Line

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 B7 M8 B8 M9 B9 M10

◮ The second bottleneck is the slowest machine upstream of the


bottleneck. An infinite amount of inventory accumulates just
upstream of it.
◮ A finite amount of inventory appears between the second bottleneck
and the machine upstream of the first bottleneck.
◮ Et cetera.

2.852 Manufacturing Systems Analysis 21/165 c


Copyright 2010 Stanley B. Gershwin.
Infinite-Buffer Line

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 B7 M8 B8 M9 B9 M10

[Simulation plot: levels of Buffers 1-9 versus time.]

A 10-machine line with bottlenecks at Machines 5 and 10.

2.852 Manufacturing Systems Analysis 22/165 c


Copyright 2010 Stanley B. Gershwin.
Infinite-Buffer Line

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6 B6 M7 B7 M8 B8 M9 B9 M10

[Simulation plot: levels of Buffers 4 and 9 versus time.]

Question:
◮ What are the slopes (roughly!) of the two indicated graphs?

2.852 Manufacturing Systems Analysis 23/165 c


Copyright 2010 Stanley B. Gershwin.
Infinite-Buffer Line

Questions:
◮ If we want to increase production rate, which machine should we
improve?
◮ What would happen to production rate if we improved any other
machine?

2.852 Manufacturing Systems Analysis 24/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line

M1 M2 M3 M4 M5 M6

◮ If any one machine fails, or takes a very long time to do an operation,


all the other machines must wait.
◮ Therefore the production rate is usually less — possibly much less –
than the slowest machine.

2.852 Manufacturing Systems Analysis 25/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line

M1 M2 M3 M4 M5 M6

◮ Special case: Constant, unequal operation times, perfectly reliable


machines.
◮ The operation time of the line is equal to the operation time of the
slowest machine, so the production rate of the line is equal to that of
the slowest machine.

2.852 Manufacturing Systems Analysis 26/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line
Constant, equal operation times, unreliable machines

M1 M2 M3 M4 M5 M6

◮ Assumption: Failure and repair times are geometrically distributed.


◮ Define pi = τ /MTTFi = probability of failure during an operation.
◮ Define ri = τ /MTTRi = probability of repair during an interval of length
τ when the machine is down.
◮ Operation-Dependent Failures (ODFs): Machines can only fail while they
are working.

2.852 Manufacturing Systems Analysis 27/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line

M1 M2 M3 M4 M5 M6

Buzacott’s Zero-Buffer Line Formula:


Let k be the number of machines in the line. Then
    P = (1/τ) · 1 / (1 + Σ_{i=1}^{k} p_i / r_i)
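A direct transcription of the formula (a sketch; the r_i, p_i values are illustrative):

```python
# Buzacott's zero-buffer formula for k machines with ODFs.
tau = 1.0
p = [0.01, 0.012, 0.008]   # failure probability per operation (illustrative)
r = [0.10, 0.090, 0.110]   # repair probability per operation time
P = (1 / tau) / (1 + sum(pi / ri for pi, ri in zip(p, r)))
print(P)  # lies below the isolated rate of every machine
```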

2.852 Manufacturing Systems Analysis 28/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line

M1 M2 M3 M4 M5 M6

◮ Same as the earlier formula (page 11, page 14) when k = 1. The
isolated production rate of a single machine Mi is
    (1/τ) · 1 / (1 + p_i/r_i) = (1/τ) · r_i / (r_i + p_i).

2.852 Manufacturing Systems Analysis 29/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line
Proof of formula

◮ Let τ (the operation time) be the time unit.


◮ Assumption: At most, one machine can be down.
◮ Consider a long time interval of length T τ during which Machine Mi
fails mi times (i = 1, . . . k).
[Timeline: intervals where all machines are up alternate with down intervals, each labeled by the failed machine.]


◮ Without failures, the line would produce T parts.

2.852 Manufacturing Systems Analysis 30/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line

◮ The average repair time of Mi is τ /ri each time it fails, so the total
system down time is close to
    D τ = Σ_{i=1}^{k} m_i τ / r_i

where D is the number of operation times in which a machine is


down.

2.852 Manufacturing Systems Analysis 31/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line

◮ The total up time is approximately


    U τ = T τ − Σ_{i=1}^{k} m_i τ / r_i.
◮ where U is the number of operation times in which all machines are
up.

2.852 Manufacturing Systems Analysis 32/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line

◮ Since the system produces one part per time unit while it is working,
it produces U parts during the interval of length T τ .
◮ Note that, approximately,

mi = pi U
because Mi can only fail while it is operational.

2.852 Manufacturing Systems Analysis 33/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line

◮ Thus,
    U τ = T τ − U τ Σ_{i=1}^{k} p_i / r_i,

or,

    U / T = E_ODF = 1 / (1 + Σ_{i=1}^{k} p_i / r_i)

2.852 Manufacturing Systems Analysis 34/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line

and
    P = (1/τ) · 1 / (1 + Σ_{i=1}^{k} p_i / r_i)

◮ Note that P is a function of the ratio pi /ri and not pi or ri separately.


◮ The same statement is true for the infinite-buffer line.
◮ However, the same statement is not true for a line with finite,
non-zero buffers.

2.852 Manufacturing Systems Analysis 35/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line

Questions:
◮ If we want to increase production rate, which machine should we
improve?
◮ What would happen to production rate if we improved any other
machine?

2.852 Manufacturing Systems Analysis 36/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line
ODF and TDF

TDF= Time-Dependent Failure. Machines fail independently of one


another when they are idle.
    P_TDF = (1/τ) · Π_{i=1}^{k} r_i / (r_i + p_i) < P_ODF

2.852 Manufacturing Systems Analysis 37/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line
P as a function of pi

All machines are the same except Mi . As pi increases, the production rate
decreases.
[Plot: P versus p_i.]

2.852 Manufacturing Systems Analysis 38/165 c


Copyright 2010 Stanley B. Gershwin.
Zero-Buffer Line
P as a function of k

All machines are the same. As the line gets longer, the production rate
decreases.
[Plot: P versus line length k, decreasing from the infinite-buffer production rate; the gap is the capacity loss.]

2.852 Manufacturing Systems Analysis 39/165 c


Copyright 2010 Stanley B. Gershwin.
Finite-Buffer Lines

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

◮ Motivation for buffers: recapture some of the lost production rate.


◮ Cost
◮ in-process inventory/lead time
◮ floor space
◮ material handling mechanism

2.852 Manufacturing Systems Analysis 40/165 c


Copyright 2010 Stanley B. Gershwin.
Finite-Buffer Lines

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

◮ Infinite buffers: no propagation of disruptions.


◮ Zero buffers: instantaneous propagation.
◮ Finite buffers: delayed propagation.
◮ New phenomena: blockage and starvation .

2.852 Manufacturing Systems Analysis 41/165 c


Copyright 2010 Stanley B. Gershwin.
Finite-Buffer Lines

M1 B1 M2 B2 M3 B3 M4 B4 M5 B5 M6

◮ Difficulty:
◮ No simple formula for calculating production rate or inventory levels.
◮ Solution:
◮ Simulation
◮ Analytical approximation

2.852 Manufacturing Systems Analysis 42/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines

◮ Exact solution is available to model of two-machine line.


◮ Discrete time-discrete state Markov process:

prob{X (t + 1) = x(t + 1)|


X (t) = x(t), X (t − 1) = x(t − 1), X (t − 2) = x(t − 2), ...} =

prob{X (t + 1) = x(t + 1)|X (t) = x(t)}


◮ In the following, we construct prob{X (t + 1) = x(t + 1)|X (t) = x(t)} and
solve the steady-state transition equations.

2.852 Manufacturing Systems Analysis 43/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines

Here, X (t) = (n(t), α1 (t), α2 (t)), where


◮ n is the number of parts in the buffer; n = 0, 1, ..., N.
◮ αi is the repair state of Mi ; i = 1, 2.
◮ αi = 1 means the machine is up or operational;
◮ αi = 0 means the machine is down or under repair.

2.852 Manufacturing Systems Analysis 44/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines

Motivation:
◮ We can develop intuition from these systems that is useful for
understanding more complex systems.

◮ Two-machine lines are used as building blocks in decomposition


approximations of realistic-sized systems.

2.852 Manufacturing Systems Analysis 45/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines

Several models available:


◮ Deterministic processing time, or Buzacott model: deterministic
processing time, geometric failure and repair times; discrete state,
discrete time.
◮ Exponential processing time: exponential processing, failure, and
repair time; discrete state, continuous time.
◮ Continuous material, or fluid: deterministic processing, exponential
failure and repair time; mixed state, continuous time.
◮ Extensions
◮ Models with multiple up and down states.

2.852 Manufacturing Systems Analysis 46/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines

Outline: Details of two-machine, deterministic processing time line.


◮ Assumptions ◮ Identities
◮ Performance measures ◮ Analytical solution
◮ Transient states ◮ Limits
◮ Transition equations ◮ Behavior

2.852 Manufacturing Systems Analysis 47/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Assumptions, etc.

Assumptions, etc. for deterministic processing time systems (including


long lines)
◮ All operation times are deterministic and equal to 1.
◮ The amount of material in Buffer i at time t is ni (t), 0 ≤ ni (t) ≤ Ni .
A buffer gains or loses at most one piece during a time unit.
◮ The state of the system is s = (n1 , . . . , nk−1 , α1 , . . . , αk ).

2.852 Manufacturing Systems Analysis 48/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Assumptions, etc.

◮ Operation dependent failures:

prob [αi (t + 1) = 0 | ni −1 (t) = 0, αi (t) = 1, ni (t) < Ni ] = 0,


prob [αi (t + 1) = 1 | ni −1 (t) = 0, αi (t) = 1, ni (t) < Ni ] = 1,

prob [αi (t + 1) = 0 | ni −1 (t) > 0, αi (t) = 1, ni (t) = Ni ] = 0,


prob [αi (t + 1) = 1 | ni −1 (t) > 0, αi (t) = 1, ni (t) = Ni ] = 1,

prob [αi (t + 1) = 0 | ni −1 (t) > 0, αi (t) = 1, ni (t) < Ni ] = pi ,


prob [αi (t + 1) = 1 | ni −1 (t) > 0, αi (t) = 1, ni (t) < Ni ] = 1 − pi .

2.852 Manufacturing Systems Analysis 49/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Assumptions, etc.

◮ Repairs:

prob [αi (t + 1) = 1 | αi (t) = 0] = ri ,

prob [αi (t + 1) = 0 | αi (t) = 0] = 1 − ri .

2.852 Manufacturing Systems Analysis 50/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Assumptions, etc.

◮ Timing convention: In the absence of blocking or starvation:


ni (t + 1) = ni (t) + αi (t + 1) − αi +1 (t + 1).
More generally,

    n_i(t + 1) = n_i(t) + I_{ui}(t + 1) − I_{di}(t + 1),

where

    I_{ui}(t + 1) = 1 if α_i(t + 1) = 1 and n_{i−1}(t) > 0 and n_i(t) < N_i;
                    0 otherwise.

    I_{di}(t + 1) = 1 if α_{i+1}(t + 1) = 1 and n_i(t) > 0 and n_{i+1}(t) < N_{i+1};
                    0 otherwise.
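These update rules translate directly into a discrete-time simulation. Below is a minimal sketch for a two-machine line (my own illustration, not course code), with geometric failures and repairs and operation-dependent failures as assumed above:

```python
import random

def simulate_two_machine(r1, p1, r2, p2, N, T, seed=0):
    """Estimate throughput E and mean buffer level for the 2-machine line."""
    random.seed(seed)
    n, a1, a2 = 0, 1, 1
    produced = total_n = 0
    for _ in range(T):
        # M1: repair if down; ODF means it cannot fail while blocked (n = N)
        if a1 == 0:
            a1 = 1 if random.random() < r1 else 0
        elif n < N and random.random() < p1:
            a1 = 0
        # M2: repair if down; cannot fail while starved (n = 0)
        if a2 == 0:
            a2 = 1 if random.random() < r2 else 0
        elif n > 0 and random.random() < p2:
            a2 = 0
        Iu = 1 if (a1 == 1 and n < N) else 0   # M1 loads the buffer
        Id = 1 if (a2 == 1 and n > 0) else 0   # M2 unloads the buffer
        n += Iu - Id
        produced += Id
        total_n += n
    return produced / T, total_n / T

print(simulate_two_machine(r1=.1, p1=.01, r2=.1, p2=.01, N=10, T=200_000))
```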
2.852 Manufacturing Systems Analysis 51/165 c
Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Assumptions, etc.

◮ In the Markov chain model, there is a set of transient states, and a


single final class. Thus, a unique steady state distribution exists. The
model is studied in steady state. That is, we calculate the stationary
probability distribution.

◮ We calculate performance measures (production rate and average


inventory) from the steady state distribution.

2.852 Manufacturing Systems Analysis 52/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Performance measures

◮ The steady state production rate (throughput , flow rate , or efficiency ) of


Machine Mi is the probability that Machine Mi produces a part in a time
step.
◮ Units: parts per operation time.
◮ It is the probability that Machine Mi is operational and neither starved nor
blocked in time step t.
◮ It is equivalent, and more convenient, to express it as the probability that
Machine Mi is operational and neither starved nor blocked in time step t + 1:

Ei = prob (αi (t + 1) = 1, ni −1 (t) > 0, ni (t) < Ni )


For a useful analytical expression, we must rewrite this so that all states are
evaluated at the same time.

2.852 Manufacturing Systems Analysis 53/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Performance measures

Ei = prob (αi (t + 1) = 1, ni −1 (t) > 0, ni (t) < Ni )

= prob (αi (t + 1) = 1 | ni −1 (t) > 0, αi (t) = 1, ni (t) < Ni )


prob (ni −1 (t) > 0, αi (t) = 1, ni (t) < Ni )

+ prob (αi (t + 1) = 1 | ni −1 (t) > 0, αi (t) = 0, ni (t) < Ni )


prob (ni −1 (t) > 0, αi (t) = 0, ni (t) < Ni ).

= (1 − pi ) prob (ni −1 (t) > 0, αi (t) = 1, ni (t) < Ni )


+ri prob (ni −1 (t) > 0, αi (t) = 0, ni (t) < Ni ).

2.852 Manufacturing Systems Analysis 54/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Performance measures

In steady state, there is a repair for every failure of Machine i , or

ri prob (ni −1 (t) > 0, αi (t) = 0, ni (t) < Ni ) =


pi prob (ni −1 (t) > 0, αi (t) = 1, ni (t) < Ni )
Therefore,

Ei = prob (αi = 1, ni −1 > 0, ni < Ni ).

2.852 Manufacturing Systems Analysis 55/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Performance measures

The steady state average level of Buffer i is


    n̄_i = Σ_s n_i prob(s).

2.852 Manufacturing Systems Analysis 56/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
State Space

s = (n, α1 , α2 )
where

n = 0, 1, ..., N

αi = 0, 1

2.852 Manufacturing Systems Analysis 57/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Transient states

◮ (0,1,0) is transient because it cannot be reached from any state. If


α1 (t + 1) = 1 and α2 (t + 1) = 0, then n(t + 1) = n(t) + 1.

◮ (0,1,1) is transient because it cannot be reached from any state. If n(t) = 0


and α1 (t + 1) = 1 and α2 (t + 1) = 1, then n(t + 1) = 1 since M2 is starved
and thus not able to operate. If n(t) > 0 and α1 (t + 1) = 1 and
α2 (t + 1) = 1, then n(t + 1) = n(t).

2.852 Manufacturing Systems Analysis 58/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Transient states

◮ (0,0,0) is transient because it can be reached only from itself or (0,1,0). It can be
reached from itself if neither machine is repaired; it can be reached from (0,1,0) if
the first machine fails while attempting to make a part. It cannot be reached from
(0,0,1) or (0,1,1) since the second machine cannot fail. Otherwise, if
α1 (t + 1) = 0 and α2 (t + 1) = 0, then n(t + 1) = n(t).

◮ (1,1,0) is transient because it can be reached only from (0,0,0) or (0,1,0). If


α1 (t + 1) = 1 and α2 (t + 1) = 0, then n(t + 1) = n(t) + 1. Therefore, n(t) = 0.
However, (1,1,0) cannot be reached from (0,0,1) since Machine 2 cannot fail. (For
the same reason, it cannot be reached from (0,1,1), but since the latter is
transient, that is irrelevant.)

◮ Similarly, (N, 0, 0), (N, 0, 1), (N, 1, 1), and (N − 1, 0, 1) are transient.

2.852 Manufacturing Systems Analysis 59/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
State space
[State transition diagram: buffer level n = 0, ..., N across; machine states (α1, α2) = (0,0), (0,1), (1,0), (1,1) down. The key distinguishes transient from non-transient states, and transitions that increase, decrease, or hold the buffer level.]
2.852 Manufacturing Systems Analysis 60/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Transition equations

Internal equations 2 ≤ n ≤ N − 2

p(n, 0, 0) = (1 − r1 )(1 − r2 )p(n, 0, 0) + (1 − r1 )p2 p(n, 0, 1)


+p1 (1 − r2 )p(n, 1, 0) + p1 p2 p(n, 1, 1)

p(n, 0, 1) = (1 − r1 )r2 p(n + 1, 0, 0) + (1 − r1 )(1 − p2 )p(n + 1, 0, 1)


+p1 r2 p(n + 1, 1, 0) + p1 (1 − p2 )p(n + 1, 1, 1)

p(n, 1, 0) = r1 (1 − r2 )p(n − 1, 0, 0) + r1 p2 p(n − 1, 0, 1)


+(1 − p1 )(1 − r2 )p(n − 1, 1, 0) + (1 − p1 )p2 p(n − 1, 1, 1)

p(n, 1, 1) = r1 r2 p(n, 0, 0) + r1 (1 − p2 )p(n, 0, 1) + (1 − p1 )r2 p(n, 1, 0)


+(1 − p1 )(1 − p2 )p(n, 1, 1)

2.852 Manufacturing Systems Analysis 61/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Transition equations

Lower boundary equations n ≤ 1

p(0, 0, 1) = (1 − r1 )p(0, 0, 1) + (1 − r1 )r2 p(1, 0, 0)


+(1 − r1 )(1 − p2 )p(1, 0, 1) + p1 (1 − p2 )p(1, 1, 1).

p(1, 0, 0) = (1 − r1 )(1 − r2 )p(1, 0, 0) + (1 − r1 )p2 p(1, 0, 1) + p1 p2 p(1, 1, 1)

p(1, 0, 1) = (1 − r1 )r2 p(2, 0, 0) + (1 − r1 )(1 − p2 )p(2, 0, 1)+


p1 r2 p(2, 1, 0) + p1 (1 − p2 )p(2, 1, 1)

p(1, 1, 1) = r1 p(0, 0, 1) + r1 r2 p(1, 0, 0) + r1 (1 − p2 )p(1, 0, 1)


+(1 − p1 )(1 − p2 )p(1, 1, 1)

p(2, 1, 0) = r1 (1 − r2 )p(1, 0, 0) + r1 p2 p(1, 0, 1) + (1 − p1 )p2 p(1, 1, 1)

2.852 Manufacturing Systems Analysis 62/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Transition equations

Upper boundary equations n ≥ N − 1

p(N − 2, 0, 1) = (1 − r1 )r2 p(N − 1, 0, 0) + p1 r2 p(N − 1, 1, 0)


+p1 (1 − p2 )p(N − 1, 1, 1)

p(N − 1, 0, 0) = (1 − r1 )(1 − r2 )p(N − 1, 0, 0) + p1 (1 − r2 )p(N − 1, 1, 0)


+p1 p2 p(N − 1, 1, 1)

p(N − 1, 1, 0) = r1 (1 − r2 )p(N − 2, 0, 0) + r1 p2 p(N − 2, 0, 1)


+ (1 − p1 )(1 − r2 )p(N − 2, 1, 0) + (1 − p1 )p2 p(N − 2, 1, 1)

p(N − 1, 1, 1) = r1 r2 p(N − 1, 0, 0) + (1 − p1 )r2 p(N − 1, 1, 0)


+(1 − p1 )(1 − p2 )p(N − 1, 1, 1) + r2 p(N, 1, 0)

p(N, 1, 0) = r1 (1 − r2 )p(N − 1, 0, 0) + (1 − p1 )(1 − r2 )p(N − 1, 1, 0)


+(1 − p1 )p2 p(N − 1, 1, 1) + (1 − r2 )p(N, 1, 0)
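For small N these equations can also be checked numerically. The sketch below (my own construction, not course code) assembles the full transition matrix by enumerating one time step from every state and extracts the stationary distribution as the eigenvector for eigenvalue 1:

```python
import numpy as np

def stationary_two_machine(r1, p1, r2, p2, N):
    """Brute-force stationary distribution of the two-machine model."""
    states = [(n, a1, a2) for n in range(N + 1)
                          for a1 in (0, 1) for a2 in (0, 1)]
    idx = {s: i for i, s in enumerate(states)}
    T = np.zeros((len(states), len(states)))
    for (n, a1, a2) in states:
        # next state of M1: repair if down; cannot fail if blocked (n = N)
        nxt1 = (((1, r1), (0, 1 - r1)) if a1 == 0 else
                (((1, 1.0),) if n == N else ((0, p1), (1, 1 - p1))))
        # next state of M2: repair if down; cannot fail if starved (n = 0)
        nxt2 = (((1, r2), (0, 1 - r2)) if a2 == 0 else
                (((1, 1.0),) if n == 0 else ((0, p2), (1, 1 - p2))))
        for b1, q1 in nxt1:
            for b2, q2 in nxt2:
                m = (n + (1 if b1 == 1 and n < N else 0)
                       - (1 if b2 == 1 and n > 0 else 0))
                T[idx[(m, b1, b2)], idx[(n, a1, a2)]] += q1 * q2
    w, v = np.linalg.eig(T)             # T is column-stochastic
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()                      # normalize (fixes sign too)
    return {s: prob for s, prob in zip(states, pi)}
```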

2.852 Manufacturing Systems Analysis 63/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Performance measures

E1 is the probability that M1 is operational and not blocked:


    E1 = Σ_{n < N, α1 = 1} p(n, α1, α2).

[State space diagram with the states counted in E1 highlighted.]

2.852 Manufacturing Systems Analysis 64/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Performance measures

E2 is the probability that M2 is operational and not starved:


    E2 = Σ_{n > 0, α2 = 1} p(n, α1, α2).

[State space diagram with the states counted in E2 highlighted.]

2.852 Manufacturing Systems Analysis 65/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Performance measures

The probabilities of starvation and blockage are:

ps = p(0, 0, 1), the probability of starvation,

pb = p(N, 1, 0), the probability of blockage.

The average buffer level is:


    n̄ = Σ_{all s} n p(n, α1, α2).

2.852 Manufacturing Systems Analysis 66/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

Repair frequency equals failure frequency For every repair, there is a failure
(in steady state). When the system is in steady state,

r1 prob [{α1 = 0} and {n < N}] =


p1 prob [{α1 = 1} and {n < N}] .
Let

D1 = prob [{α1 = 0} and {n < N}] ,


then

r1 D1 = p1 E1 .

2.852 Manufacturing Systems Analysis 67/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

Proof: The left side is the probability that the state leaves the set of states

    S0 = {{α1 = 0} and {n < N}},

since the only way the system can leave S0 is for M1 to get repaired. (M1 is
down, so the buffer cannot become full.)

[State space diagram with the set S0 highlighted.]

2.852 Manufacturing Systems Analysis 68/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

The right side is the probability that the state enters S0 . When the system is in
steady state, the only way for the state to enter S0 is for it to be in set

S1 = {{α1 = 1} and {n < N}}

in the previous time unit.


[State space diagram with the set S1 highlighted.]

2.852 Manufacturing Systems Analysis 69/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

Conservation of Flow E1 = E2 = E

Proof:

    E1 = Σ_{n=0}^{N−1} p(n, 1, 0) + Σ_{n=0}^{N−1} p(n, 1, 1),

    E2 = Σ_{n=1}^{N} p(n, 0, 1) + Σ_{n=1}^{N} p(n, 1, 1).

Then

    E1 − E2 = Σ_{n=0}^{N−1} p(n, 1, 0) − Σ_{n=1}^{N} p(n, 0, 1)

            = Σ_{n=1}^{N−2} p(n + 1, 1, 0) − Σ_{n=1}^{N−2} p(n, 0, 1)

2.852 Manufacturing Systems Analysis 70/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

Or,

    E1 − E2 = Σ_{n=1}^{N−2} (p(n + 1, 1, 0) − p(n, 0, 1)).

Define δ(n) = p(n + 1, 1, 0) − p(n, 0, 1). Then

    E1 − E2 = Σ_{n=1}^{N−2} δ(n)

2.852 Manufacturing Systems Analysis 71/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

Add lots of lower boundary equations:

p(0, 0, 1) + p(1, 0, 0) + p(1, 1, 1) + p(2, 1, 0) =

(1 − r1 )p(0, 0, 1) + (1 − r1 )r2 p(1, 0, 0)


+(1 − r1 )(1 − p2 )p(1, 0, 1) + p1 (1 − p2 )p(1, 1, 1)

+(1 − r1 )(1 − r2 )p(1, 0, 0) + (1 − r1 )p2 p(1, 0, 1) + p1 p2 p(1, 1, 1)

+r1 p(0, 0, 1) + r1 r2 p(1, 0, 0) + r1 (1 − p2 )p(1, 0, 1)


+(1 − p1 )(1 − p2 )p(1, 1, 1)

+r1 (1 − r2 )p(1, 0, 0) + r1 p2 p(1, 0, 1) + (1 − p1 )p2 p(1, 1, 1)

2.852 Manufacturing Systems Analysis 72/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

Or,

p(0, 0, 1) + p(1, 0, 0) + p(1, 1, 1) + p(2, 1, 0) =

p(0, 0, 1) + p(1, 0, 0) + p(1, 0, 1) + p(1, 1, 1)


Or,

p(2, 1, 0) = p(1, 0, 1)
Then δ(1) = 0.

2.852 Manufacturing Systems Analysis 73/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

Now add all the internal equations, after changing the index of two of them:

p(n, 0, 0) + p(n − 1, 0, 1) + p(n + 1, 1, 0) + p(n, 1, 1) =

(1 − r1 )(1 − r2 )p(n, 0, 0) + (1 − r1 )p2 p(n, 0, 1)


+p1 (1 − r2 )p(n, 1, 0) + p1 p2 p(n, 1, 1)

(1 − r1 )r2 p(n, 0, 0) + (1 − r1 )(1 − p2 )p(n, 0, 1)


+p1 r2 p(n, 1, 0) + p1 (1 − p2 )p(n, 1, 1)

r1 (1 − r2 )p(n, 0, 0) + r1 p2 p(n, 0, 1)
+(1 − p1 )(1 − r2 )p(n, 1, 0) + (1 − p1 )p2 p(n, 1, 1)

r1 r2 p(n, 0, 0) + r1 (1 − p2 )p(n, 0, 1) + (1 − p1 )r2 p(n, 1, 0)


+(1 − p1 )(1 − p2 )p(n, 1, 1)

2.852 Manufacturing Systems Analysis 74/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

Or, for n = 2, . . . , N − 2,

p(n, 0, 0) + p(n − 1, 0, 1) + p(n + 1, 1, 0) + p(n, 1, 1) =

p(n, 0, 0) + p(n, 0, 1) + p(n, 1, 0) + p(n, 1, 1),

or,

p(n + 1, 1, 0) − p(n, 0, 1) = p(n, 1, 0) − p(n − 1, 0, 1)


or,
δ(n) = δ(n − 1)

2.852 Manufacturing Systems Analysis 75/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

Since

δ(1) = 0 and δ(n) = δ(n − 1), n = 2, . . . , N − 2


we have

δ(n) = 0, n = 1, . . . , N − 2
Therefore
    E1 − E2 = Σ_{n=1}^{N−2} δ(n) = 0.

QED

2.852 Manufacturing Systems Analysis 76/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

Alternative interpretation of p(n + 1, 1, 0) − p(n, 0, 1) = 0:

◮ The only way the buffer can go from n + 1 to n is for the state
to go to (n, 0, 1).

◮ The only way the buffer can go from n to n + 1 is for the state
to go to (n + 1, 1, 0).

[Two-column state diagram for buffer levels n and n + 1.]

2.852 Manufacturing Systems Analysis 77/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Identities

Flow rate/idle time

E = e1 (1 − pb ).

Proof: From the definitions of E1 and D1 , we have

prob [n < N] = E + D1 ,

or,  1 − pb = E + (p1/r1) E = E / e1.

Similarly,

E = e2 (1 − ps )
2.852 Manufacturing Systems Analysis 78/165 c
Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Analytical Solution

1. Guess a solution for the internal states of the form


   p(n, α1, α2) = ξj(n, α1, α2) = X^n Y1^α1 Y2^α2.

2. Determine sets of Xj, Y1j, Y2j that satisfy the internal equations.

3. Extend ξj(n, α1, α2) to all of the boundary states using some of the
   boundary equations.

4. Find coefficients Cj so that p(n, α1, α2) = Σ_j Cj ξj(n, α1, α2) satisfies
   the remaining boundary equations and normalization.

2.852 Manufacturing Systems Analysis 79/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Analytical Solution

Internal equations:

X n = (1 − r1 )(1 − r2 )X n + (1 − r1 )p2 X n Y2 + p1 (1 − r2 )X n Y1 + p1 p2 X n Y1 Y2

X n−1 Y2 = (1 − r1 )r2 X n + (1 − r1 )(1 − p2 )X n Y2 + p1 r2 X n Y1


+p1 (1 − p2 )X n Y1 Y2

X n+1 Y1 = r1 (1 − r2 )X n + r1 p2 X n Y2 + (1 − p1 )(1 − r2 )X n Y1 + (1 − p1 )p2 X n Y1 Y2

X n Y1 Y2 = r1 r2 X n + r1 (1 − p2 )X n Y2 + (1 − p1 )r2 X n Y1 + (1 − p1 )(1 − p2 )X n Y1 Y2

2.852 Manufacturing Systems Analysis 80/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Analytical Solution

Or,

1 = (1 − r1 )(1 − r2 ) + (1 − r1 )p2 Y2 + p1 (1 − r2 )Y1 + p1 p2 Y1 Y2

X −1 Y2 = (1 − r1 )r2 + (1 − r1 )(1 − p2 )Y2 + p1 r2 Y1


+p1 (1 − p2 )Y1 Y2

XY1 = r1 (1 − r2 ) + r1 p2 Y2 + (1 − p1 )(1 − r2 )Y1 + (1 − p1 )p2 Y1 Y2

Y1 Y2 = r1 r2 + r1 (1 − p2 )Y2 + (1 − p1 )r2 Y1 + (1 − p1 )(1 − p2 )Y1 Y2

2.852 Manufacturing Systems Analysis 81/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Analytical Solution

Or,

1 = (1 − r1 + Y1 p1 ) (1 − r2 + Y2 p2 )

X −1 Y2 = (1 − r1 + Y1 p1 ) (r2 + Y2 (1 − p2 ))

XY1 = (r1 + Y1 (1 − p1 )) (1 − r2 + Y2 p2 )

Y1 Y2 = (r1 + Y1 (1 − p1 )) (r2 + Y2 (1 − p2 ))

2.852 Manufacturing Systems Analysis 82/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Analytical Solution

Since the last equation is a product of the other three, there are only three
independent equations in three unknowns here. They may be simplified further:

1 = (1 − r1 + Y1 p1 ) (1 − r2 + Y2 p2 )

r1 + Y1 (1 − p1 )
XY1 =
1 − r1 + Y1 p1
r2 + Y2 (1 − p2 )
X −1 Y2 =
1 − r2 + Y2 p2

2.852 Manufacturing Systems Analysis 83/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Analytical Solution

Eliminating X and Y2 , this becomes

    0 = Y1² (p1 + p2 − p1 p2 − p1 r2)
        − Y1 (r1 (p1 + p2 − p1 p2 − p1 r2) + p1 (r1 + r2 − r1 r2 − r1 p2))
        + r1 (r1 + r2 − r1 r2 − r1 p2),

which has two solutions:

    Y11 = r1 / p1,    Y12 = (r1 + r2 − r1 r2 − r1 p2) / (p1 + p2 − p1 p2 − p1 r2).

2.852 Manufacturing Systems Analysis 84/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Analytical Solution

The complete solutions are:

    Y11 = r1 / p1
    Y21 = r2 / p2
    X1  = 1

    Y12 = (r1 + r2 − r1 r2 − r1 p2) / (p1 + p2 − p1 p2 − p1 r2)
    Y22 = (r1 + r2 − r1 r2 − p1 r2) / (p1 + p2 − p1 p2 − p2 r1)
    X2  = Y22 / Y12
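In code, the two roots can be evaluated directly (a sketch; j = 1, 2 indexes the two solutions above):

```python
def internal_roots(r1, p1, r2, p2):
    """The two (X, Y1, Y2) solutions of the internal equations."""
    Y11, Y21 = r1 / p1, r2 / p2                       # first root, X1 = 1
    Y12 = (r1 + r2 - r1*r2 - r1*p2) / (p1 + p2 - p1*p2 - p1*r2)
    Y22 = (r1 + r2 - r1*r2 - p1*r2) / (p1 + p2 - p1*p2 - p2*r1)
    return (1.0, Y11, Y21), (Y22 / Y12, Y12, Y22)     # (X, Y1, Y2) pairs
```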

2.852 Manufacturing Systems Analysis 85/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Analytical Solution

Recall that ξ(n, α1, α2) = X^n Y1^α1 Y2^α2.

We now have the complete internal solution:

    p(n, α1, α2) = C1 ξ1(n, α1, α2) + C2 ξ2(n, α1, α2)

                 = C1 X1^n Y11^α1 Y21^α2 + C2 X2^n Y12^α1 Y22^α2.

2.852 Manufacturing Systems Analysis 86/165 c


Copyright 2010 Stanley B. Gershwin.
M1 B M2
Two-Machine, Finite-Buffer Lines
Analytical Solution

Boundary conditions:

If we plug the internal expression ξ(n, α1, α2) = X^n Y1^α1 Y2^α2 into the
right side of

ξ(1, 0, 1) = (1 − r1 )r2 ξ(2, 0, 0) + (1 − r1 )(1 − p2 )ξ(2, 0, 1)+


p1 r2 ξ(2, 1, 0) + p1 (1 − p2 )ξ(2, 1, 1),
we find

ξ(1, 0, 1) = XY2
which implies that

p(1, 0, 1) = C1 Y21 + C2 X2 Y22 .


Recall that

p(2, 1, 0) = p(1, 0, 1).


Then
C1 X1^2 Y11 + C2 X2^2 Y12 = C1 X1 Y21 + C2 X2 Y22,
or,
C1 (X1^2 Y11 − X1 Y21) + C2 (X2^2 Y12 − X2 Y22) = 0,
or,
C1 X1 (X1 Y11 − Y21) + C2 X2 (X2 Y12 − Y22) = 0,


Recall that

X2 = Y22/Y12.

Consequently,

C1 X1 (X1 Y11 − Y21) = 0,

or,

C1 (r1/p1 − r2/p2) = 0.

Therefore, if r1/p1 ≠ r2/p2, then C1 = 0.


In the following, we assume r1/p1 ≠ r2/p2 and we drop the j subscript.

But what happens when r1/p1 = r2/p2?

And what does r1/p1 = r2/p2 mean?


Combining the following two boundary conditions ...

r1 p(0, 0, 1) = (1 − r1)r2 p(1, 0, 0) + (1 − r1)(1 − p2)p(1, 0, 1) + p1(1 − p2)p(1, 1, 1)

p(1, 1, 1) = r1 p(0, 0, 1) + r1 r2 p(1, 0, 0) + r1(1 − p2)p(1, 0, 1) + (1 − p1)(1 − p2)p(1, 1, 1)

... and using p(1, 0, 1) = CXY2 gives

p(1, 1, 1) = r2 p(1, 0, 0) + (1 − p2)CXY2 + (1 − p2)p(1, 1, 1)

or,

p2 p(1, 1, 1) = r2 p(1, 0, 0) + (1 − p2)CXY2.

There are three unknown quantities: p(1, 0, 0), p(1, 1, 1), and C.

Another boundary condition,

p(1, 0, 0) = (1 − r1 )(1 − r2 )p(1, 0, 0) + (1 − r1 )p2 p(1, 0, 1) + p1 p2 p(1, 1, 1)


can be written

(r1 + r2 − r1 r2 )p(1, 0, 0) = (1 − r1 )p2 CXY2 + p1 p2 p(1, 1, 1).

which also has three unknown quantities: p(1, 0, 0), p(1, 1, 1), and C . If we eliminate
p(1, 1, 1) and simplify, we get

(r1 + r2 − r1 r2 − p1 r2 )p(1, 0, 0) = (p1 + p2 − p1 p2 − p2 r1 )CXY2 .

From the definition of Y22 (slide 85),

p(1, 0, 0) = CX .


If we plug this into the last equation on slide 91, we get

p2 p(1, 1, 1) = CX (r2 + (1 − p2)Y2)

or

p(1, 1, 1) = (CX/p2) (r1 + r2 − r1r2 − r1p2) / (p1 + p2 − p1p2 − r1p2).

Finally, the first equation on slide 91 gives

p(0, 0, 1) = CX (r1 + r2 − r1r2 − r1p2) / (r1 p2).
The upper boundary conditions are determined in the same way.


Summary of Steady-State Probabilities: Boundary values

p(0, 0, 0) = 0
p(0, 0, 1) = CX (r1 + r2 − r1r2 − r1p2) / (r1 p2)
p(0, 1, 0) = 0
p(0, 1, 1) = 0
p(1, 0, 0) = CX
p(1, 0, 1) = CXY2
p(1, 1, 0) = 0
p(1, 1, 1) = (CX/p2) (r1 + r2 − r1r2 − r1p2) / (p1 + p2 − p1p2 − r1p2)

p(N − 1, 0, 0) = CX^{N−1}
p(N − 1, 0, 1) = 0
p(N − 1, 1, 0) = CX^{N−1} Y1
p(N − 1, 1, 1) = (CX^{N−1}/p1) (r1 + r2 − r1r2 − p1r2) / (p1 + p2 − p1p2 − p1r2)
p(N, 0, 0) = 0
p(N, 0, 1) = 0
p(N, 1, 0) = CX^{N−1} (r1 + r2 − r1r2 − p1r2) / (p1 r2)
p(N, 1, 1) = 0


Summary of Steady-State Probabilities: Internal states, etc.

p(n, α1, α2) = C X^n Y1^{α1} Y2^{α2},  2 ≤ n ≤ N − 2; α1 = 0, 1; α2 = 0, 1

where

Y1 = (r1 + r2 − r1r2 − r1p2) / (p1 + p2 − p1p2 − p1r2)

Y2 = (r1 + r2 − r1r2 − p1r2) / (p1 + p2 − p1p2 − r1p2)

X = Y2/Y1

and C is a normalizing constant.
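These closed-form expressions are easy to turn into a small program. The sketch below is not from the lecture; it is a hypothetical helper for experimentation that builds the boundary and internal probabilities, normalizes, and evaluates the throughput E = prob[α2 = 1 and n > 0] and the average buffer level n̄:

```python
def det_line_probs(r1, p1, r2, p2, N):
    """Steady-state distribution of the deterministic processing time
    two-machine line, from the closed-form solution summarized above.
    Assumes r1/p1 != r2/p2 and N >= 3.  Returns {(n, a1, a2): prob}."""
    K1 = r1 + r2 - r1*r2 - r1*p2      # numerator of Y1
    K2 = r1 + r2 - r1*r2 - p1*r2      # numerator of Y2
    A1 = p1 + p2 - p1*p2 - p1*r2      # denominator of Y1
    A2 = p1 + p2 - p1*p2 - r1*p2      # denominator of Y2
    Y1, Y2 = K1/A1, K2/A2
    X = Y2/Y1
    p = {}
    # lower and upper boundary values (take C = 1 for now)
    p[(0,0,0)] = p[(0,1,0)] = p[(0,1,1)] = p[(1,1,0)] = 0.0
    p[(0,0,1)] = X*K1/(r1*p2)
    p[(1,0,0)], p[(1,0,1)] = X, X*Y2
    p[(1,1,1)] = (X/p2)*K1/A2
    p[(N-1,0,0)], p[(N-1,0,1)] = X**(N-1), 0.0
    p[(N-1,1,0)] = X**(N-1)*Y1
    p[(N-1,1,1)] = (X**(N-1)/p1)*K2/A1
    p[(N,0,0)] = p[(N,0,1)] = p[(N,1,1)] = 0.0
    p[(N,1,0)] = X**(N-1)*K2/(p1*r2)
    # internal states, 2 <= n <= N-2
    for n in range(2, N-1):
        for a1 in (0, 1):
            for a2 in (0, 1):
                p[(n,a1,a2)] = X**n * Y1**a1 * Y2**a2
    C = 1.0/sum(p.values())            # normalization constant
    return {s: C*v for s, v in p.items()}

def performance(r1, p1, r2, p2, N):
    """Throughput E and average buffer level nbar."""
    p = det_line_probs(r1, p1, r2, p2, N)
    E = sum(v for (n, a1, a2), v in p.items() if a2 == 1 and n > 0)
    nbar = sum(n*v for (n, a1, a2), v in p.items())
    return E, nbar

print(performance(0.1, 0.01, 0.08, 0.009, N=10))
```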

Observations:

Typically, we can expect ri < .2, since a repair is likely to take at least 5
times as long as an operation. Also, since efficiency ei = ri/(ri + pi) is
typically greater than .7, we have pi < (3/7)ri ≈ .43 ri. Under these
conditions, p(0, 0, 1), p(1, 1, 1), p(N − 1, 1, 1), and p(N, 1, 0) are much
larger than the internal probabilities.

This is because the system tends to spend much more time at those states than
at internal states.

Refer to the transition graph on page 60 to trace out typical scenarios.

Two-Machine, Finite-Buffer Lines
Limits

If r1 → 0, then E → 0, ps → 1, pb → 0, n̄ → 0.

If r2 → 0, then E → 0, pb → 1, ps → 0, n̄ → N.

If p1 → 0, then ps → 0, E = 1 − pb → e2, n̄ → N − e2.

If p2 → 0, then pb → 0, E = 1 − ps → e1, n̄ → e1.

If N → ∞ and e1 < e2, then E → e1, pb → 0, ps → 1 − e1/e2.


Proof:

Many of the limits follow from combining conservation of flow and the
flow rate-idle time relationship:

E = [r1/(r1 + p1)] (1 − pb) = [r2/(r2 + p2)] (1 − ps).

The last set comes from the analytic solution and the observation that
X > 1 if e1 > e2, and X < 1 if e1 < e2.
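The limits can also be checked numerically with the hypothetical `performance` helper sketched earlier, e.g. the N → ∞ case with e1 < e2:

```python
# e1 = 0.1/0.12 ≈ 0.833 < e2 = 0.1/0.11 ≈ 0.909, so E should approach e1.
for N in (10, 50, 200, 1000):
    E, nbar = performance(0.1, 0.02, 0.1, 0.01, N)
    print(f"N = {N:5d}  E = {E:.4f}")   # tends to 0.8333... as N grows
```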

Two-Machine, Finite-Buffer Lines
Behavior

[Figure: simulated buffer level n(t), 0 ≤ t ≤ 1000, for r1 = .1, p1 = .01, r2 = .1, p2 = .01, N = 10.]
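Sample paths like these can be reproduced with a short simulation. A minimal sketch (the event ordering is one common convention and may differ in detail from the lecture's model, e.g. in how operation-dependent failures are handled):

```python
import random

def simulate_line(r1, p1, r2, p2, N, T, seed=0):
    """Simulate the buffer level of a two-machine line for T cycles.
    Each cycle: up machines fail w.p. pi, down machines are repaired
    w.p. ri; then M1 adds a part unless blocked and M2 removes one
    unless starved."""
    random.seed(seed)
    a1 = a2 = 1              # machine states: 1 = up, 0 = down
    n, path = 0, []
    for _ in range(T):
        a1 = (0 if random.random() < p1 else 1) if a1 else (1 if random.random() < r1 else 0)
        a2 = (0 if random.random() < p2 else 1) if a2 else (1 if random.random() < r2 else 0)
        if a1 and n < N: n += 1      # M1 not blocked
        if a2 and n > 0: n -= 1      # M2 not starved
        path.append(n)
    return path

path = simulate_line(0.1, 0.01, 0.1, 0.01, N=10, T=1000)
print(sum(path)/len(path))           # rough estimate of nbar
```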

[Figure: n(t), 0 ≤ t ≤ 10000, for r1 = .1, p1 = .01, r2 = .1, p2 = .01, N = 100.]

[Figure: n(t), 0 ≤ t ≤ 10000, for ri = .1 (i = 1, 2), p1 = .02, p2 = .01, N = 100.]

[Figure: n(t), 0 ≤ t ≤ 10000, for ri = .1 (i = 1, 2), p1 = .01, p2 = .02, N = 100.]


Deterministic Processing Time

[Figure: P vs. N, 0 ≤ N ≤ 200, with τ = 1, p1 = .1, r2 = .1, p2 = .1, one curve for each r1 ∈ {0.06, 0.08, 0.1, 0.12, 0.14}.]


Discussion: Deterministic Processing Time

[Figure: P vs. N, as on the previous slide.]

◮ Why are the curves increasing?
◮ Why do they reach an asymptote?
◮ What is P when N = 0?
◮ What is the limit of P as N → ∞?
◮ Why are the curves with smaller r1 lower?


Deterministic Processing Time

[Figure: n̄ vs. N, 0 ≤ N ≤ 200, for the same parameters, one curve per r1.]

Discussion:
◮ Why are the curves increasing?
◮ Why different asymptotes?
◮ What is n̄ when N = 0?
◮ What is the limit of n̄ as N → ∞?
◮ Why are the curves with smaller r1 lower?


[Figures: P vs. N and n̄ vs. N side by side, one curve per r1 ∈ {0.06, 0.08, 0.1, 0.12, 0.14}.]

◮ What can you say about the optimal buffer size?
◮ How should it be related to ri, pi?


Questions:
◮ If we want to increase production rate, which machine should we
improve?
◮ What would happen to production rate if we improved any other
machine?

Two-Machine, Finite-Buffer Lines
Production rate vs. storage space

[Figure: P vs. N, 20 ≤ N ≤ 200, for identical machines, Machine 1 improved, and Machine 1 more improved; improvements to the non-bottleneck machine raise P at every N.]

Note: Graphs would be the same if we improved Machine 2.

Two-Machine, Finite-Buffer Lines
Average inventory vs. storage space
[Figure: n̄ vs. N, 0 ≤ N ≤ 200, for identical machines, Machine 1 improved, and Machine 1 more improved.]

Inventory increases as the (non-bottleneck) upstream machine is improved and
as the buffer space is increased.

[Figure: n̄ vs. N, 20 ≤ N ≤ 200, for identical machines, Machine 2 improved, and Machine 2 more improved.]

◮ Inventory decreases as the (non-bottleneck) downstream machine is improved.
◮ Inventory increases as the buffer space is increased.

Two-Machine, Finite-Buffer Lines
Frequency and Production Rate

Should we prefer short, frequent disruptions or long, infrequent disruptions?

◮ r2 = 0.8, p2 = 0.09, N = 10
◮ r1 and p1 vary together so that e1 = r1/(r1 + p1) = .9
◮ Answer: evidently, short, frequent failures.
◮ Why?

[Figure: P vs. r1, 0 ≤ r1 ≤ 1; P rises from about 0.75 toward 0.95 as r1 grows.]
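This experiment is easy to reproduce with the hypothetical `performance` helper from before: hold e1 = r1/(r1 + p1) = .9 fixed and vary the failure frequency:

```python
r2, p2, N = 0.8, 0.09, 10
for r1 in (0.05, 0.1, 0.2, 0.4, 0.8):
    p1 = r1/9.0                 # keeps e1 = r1/(r1 + p1) = 0.9
    E, _ = performance(r1, p1, r2, p2, N)
    print(f"r1 = {r1:.2f}  P = {E:.4f}")   # P grows with r1
```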

[Figure: P vs. N, 0 ≤ N ≤ 20, comparing a fast, a slow, and a very slow M2.]

◮ M1: r1 = .1, p1 = .01
◮ M2:
  ◮ r2 = .1, p2 = .01 — Fast
  ◮ r2 = .01, p2 = .001 — Slow
  ◮ r2 = .001, p2 = .0001 — Very slow

[Figures: the same comparison over successively larger buffer sizes: 0 ≤ N ≤ 200, 0 ≤ N ≤ 2000, 0 ≤ N ≤ 20,000, and 0 ≤ N ≤ 100,000.]

Two-Machine, Finite-Buffer Lines
Frequency and Average Inventory

[Figure: n̄ vs. N, 0 ≤ N ≤ 20, for the same three cases.]

◮ M1: r1 = .1, p1 = .01
◮ M2:
  ◮ r2 = .1, p2 = .01 — Fast
  ◮ r2 = .01, p2 = .001 — Slow
  ◮ r2 = .001, p2 = .0001 — Very slow

[Figures: the same comparison for 0 ≤ N ≤ 200 and 0 ≤ N ≤ 2000.]

Two-Machine, Finite-Buffer Lines
Exponential processing time model

Exponential processing time: exponential processing, failure, and repair time;


discrete state, continuous time; discrete material.

Assumptions are similar to deterministic processing time model, except:


◮ µi δt = the probability that Mi completes an operation in (t, t + δt);
◮ pi δt = the probability that Mi fails during an operation in (t, t + δt);
◮ ri δt = the probability that Mi is repaired, while it is down, in (t, t + δt);

We can assume that only one event occurs during (t, t + δt).

[State transition diagram for the exponential line: states (n, α1, α2) for n = 0, 1, …, N; the key distinguishes transient from non-transient states, and transitions that increase, decrease, or leave unchanged the buffer level; boundary and internal states are marked.]


Performance measures for general exponential lines

The probability that Machine Mi is processing a workpiece is the efficiency:

Ei = prob [αi = 1, n_{i−1} > 0, ni < Ni].


The production rate (throughput rate) of Machine Mi , in parts per time
unit, is

Pi = µi Ei .


Conservation of Flow

P = P1 = P2 = . . . = Pk .

This should be proved from the model.


Flow Rate-Idle Time Relationship

The isolated efficiency ei of Machine Mi is, as usual,

ei = ri/(ri + pi)

and it represents the fraction of time that Mi is operational. The isolated
production rate is

ρi = µi ei.


The flow rate-idle time relation is

Ei = ei prob [n_{i−1} > 0 and ni < Ni]

or

P = ρi prob [n_{i−1} > 0 and ni < Ni].

This should also be proved from the model.


Balance equations — steady state only

α1 = α2 = 0 :

p(n, 0, 0)(r1 + r2 ) = p(n, 1, 0)p1 + p(n, 0, 1)p2 ,


1 ≤ n ≤ N − 1,

p(0, 0, 0)(r1 + r2 ) = p(0, 1, 0)p1 ,

p(N, 0, 0)(r1 + r2 ) = p(N, 0, 1)p2 .


α1 = 0, α2 = 1 :

p(n, 0, 1)(r1 + µ2 + p2 ) = p(n, 0, 0)r2 + p(n, 1, 1)p1


+p(n + 1, 0, 1)µ2 , 1 ≤ n ≤ N − 1

p(0, 0, 1)r1 = p(0, 0, 0)r2 + p(0, 1, 1)p1 + p(1, 0, 1)µ2

p(N, 0, 1)(r1 + µ2 + p2 ) = p(N, 0, 0)r2


α1 = 1, α2 = 0 :

p(n, 1, 0)(p1 + µ1 + r2 ) = p(n − 1, 1, 0)µ1 + p(n, 0, 0)r1


+p(n, 1, 1)p2 , 1 ≤ n ≤ N − 1

p(0, 1, 0)(p1 + µ1 + r2 ) = p(0, 0, 0)r1

p(N, 1, 0)r2 = p(N − 1, 1, 0)µ1 + p(N, 0, 0)r1 + p(N, 1, 1)p2


α1 = 1, α2 = 1 :

p(n, 1, 1)(p1 + p2 + µ1 + µ2 ) = p(n − 1, 1, 1)µ1 + p(n + 1, 1, 1)µ2


+p(n, 1, 0)r2 + p(n, 0, 1)r1 , 1≤n ≤N −1

p(0, 1, 1)(p1 + µ1 ) = p(1, 1, 1)µ2 + p(0, 1, 0)r2 + p(0, 0, 1)r1

p(N, 1, 1)(p2 + µ2 ) = p(N − 1, 1, 1)µ1 + p(N, 1, 0)r2 + p(N, 0, 1)r1


Performance measures

Efficiencies:
E1 = Σ_{n=0}^{N−1} Σ_{α2=0}^{1} p(n, 1, α2),

E2 = Σ_{n=1}^{N} Σ_{α1=0}^{1} p(n, α1, 1).


Production rate:

P = µ1 E1 = µ2 E2.

Expected in-process inventory:

n̄ = Σ_{n=0}^{N} Σ_{α1=0}^{1} Σ_{α2=0}^{1} n p(n, α1, α2).
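Before developing the closed-form solution, note that for moderate N the balance equations can simply be solved numerically. A minimal brute-force sketch (not the lecture's method): build the generator of the Markov chain on states (n, α1, α2), assuming operation-dependent failures (a machine can fail only while it is processing), and solve for the stationary distribution:

```python
import numpy as np
from itertools import product

def exp_line(r1, p1, mu1, r2, p2, mu2, N):
    """Stationary distribution of the exponential two-machine line,
    found by solving pi Q = 0 with normalization."""
    states = list(product(range(N+1), (0, 1), (0, 1)))
    idx = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))

    def add(s, t, rate):                 # transition s -> t at given rate
        Q[idx[s], idx[t]] += rate
        Q[idx[s], idx[s]] -= rate

    for (n, a1, a2) in states:
        m1_busy = a1 == 1 and n < N      # M1 processes iff up, not blocked
        m2_busy = a2 == 1 and n > 0      # M2 processes iff up, not starved
        if a1 == 0: add((n,a1,a2), (n,1,a2), r1)   # repair of M1
        if a2 == 0: add((n,a1,a2), (n,a1,1), r2)   # repair of M2
        if m1_busy:
            add((n,a1,a2), (n+1,a1,a2), mu1)       # M1 completes a part
            add((n,a1,a2), (n,0,a2), p1)           # M1 fails while busy
        if m2_busy:
            add((n,a1,a2), (n-1,a1,a2), mu2)       # M2 completes a part
            add((n,a1,a2), (n,a1,0), p2)           # M2 fails while busy

    A = np.vstack([Q.T, np.ones(len(states))])     # append normalization
    b = np.zeros(len(states) + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    E1 = sum(pi[idx[s]] for s in states if s[1] == 1 and s[0] < N)
    E2 = sum(pi[idx[s]] for s in states if s[2] == 1 and s[0] > 0)
    nbar = sum(s[0]*pi[idx[s]] for s in states)
    return mu1*E1, mu2*E2, nbar

print(exp_line(0.09, 0.01, 1.1, 0.08, 0.009, 1.0, N=20))  # P1 ≈ P2
```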

Two-Machine, Finite-Buffer Lines
Solution of balance equations

Assume

p(n, α1, α2) = c X^n Y1^{α1} Y2^{α2},  1 ≤ n ≤ N − 1

where c, X, Y1, Y2 are parameters to be determined. Plugging this into the
internal equations gives

p1 Y1 + p2 Y2 − r1 − r2 = 0

µ1 (1/X − 1) − p1 Y1 + r1 + r1/Y1 − p1 = 0

µ2 (X − 1) − p2 Y2 + r2/Y2 + r2 − p2 = 0
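A numerical aside (not in the original): the trivial solution can be verified directly, and other roots can be located with a nonlinear solver, although the systematic method described next reduces the system to a quartic:

```python
from scipy.optimize import fsolve

r1, p1, mu1 = 0.09, 0.01, 1.1     # arbitrary illustration values
r2, p2, mu2 = 0.08, 0.009, 1.0

def internal_eqs(v):
    X, Y1, Y2 = v
    return [p1*Y1 + p2*Y2 - r1 - r2,
            mu1*(1/X - 1) - p1*Y1 + r1 + r1/Y1 - p1,
            mu2*(X - 1) - p2*Y2 + r2/Y2 + r2 - p2]

# (X, Y1, Y2) = (1, r1/p1, r2/p2) satisfies all three equations:
print(internal_eqs([1.0, r1/p1, r2/p2]))      # ~[0, 0, 0]

# Other roots can be found numerically; the result depends on the
# starting point, and fsolve may fall back to the trivial root.
print(fsolve(internal_eqs, [0.5, 5.0, 12.0]))
```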
Two-Machine, Finite-Buffer Lines
Exponential processing time model

These equations can be reduced to one fourth-order polynomial (quartic)
equation in one unknown. One solution is

Y11 = r1/p1
Y21 = r2/p2
X1 = 1

This solution of the quartic equation has a zero coefficient in the expression for
the probabilities of the internal states:

p(n, α1, α2) = Σ_{j=1}^{4} cj Xj^n Y1j^{α1} Y2j^{α2} for n = 1, …, N − 1.

The other three solutions satisfy a cubic polynomial equation. Compare with slide
85. In general, there is no simple expression for them.

Just as for the deterministic processing time line,


◮ we obtain the coefficients c1 , c2 , c3 , c4 from the boundary conditions and the
normalization equation;
◮ we find c1 = 0; (What does this mean? Why is this true?)
◮ we construct all the boundary probabilities. Some are 0.
◮ we use the probabilities to evaluate production rate, average buffer level, etc;
◮ we prove statements about conservation of flow, flow rate-idle time, limiting values
of some quantities, etc.
◮ we draw graphs, and observe behavior which is qualitatively very similar to
deterministic processing time line behavior (e.g., P vs. N, n̄ vs N, etc.).
We also draw some new graphs (P vs. µi , n̄ vs µi ) and observe new behavior. This is
discussed below with the discussion of continuous material lines.

Two-Machine, Finite-Buffer Lines
Continuous Material model

Continuous material, or fluid: deterministic processing, exponential failure and
repair times; mixed state, continuous time; continuous material.

◮ µi δt = the amount of material that Mi processes, while it is up, in


(t, t + δt);
◮ pi δt = the probability that Mi fails, while it is up, in (t, t + δt);
◮ ri δt = the probability that Mi is repaired, while it is down, in (t, t + δt);


Model assumptions, notation, terminology, and conventions

During time interval (t, t + δt):

When 0 < x < N


1. the change in x is (α1 µ1 − α2 µ2 )δt

2. the probability of repair of Machine i , that is, the probability that


αi (t + δt) = 1 given that αi (t) = 0, is ri δt

3. the probability of failure of Machine i , that is, the probability that


αi (t + δt) = 0 given that αi (t) = 1, is pi δt.


When x = 0
1. the change in x is (α1 µ1 − α2 µ2 )+ δt
(That is, when x = 0, it can only increase.)

2. the probability of repair is ri δt

3. if Machine 1 is down, Machine 2 cannot fail. If Machine 1 is up, the


probability of failure of Machine 2 is p2b δt, where
p2b = p2 µ/µ2, where µ = min(µ1, µ2).
The probability of failure of Machine 1 is p1 δt.


When x = N
1. the change in x is (α1 µ1 − α2 µ2 )− δt

2. the probability of repair is ri δt

3. if Machine 2 is down, Machine 1 cannot fail. If Machine 2 is up, the


probability of failure of Machine 1 is p1b δt, where
p1b = p1 µ/µ1, with µ = min(µ1, µ2) as before.
The probability of failure of Machine 2 is p2 δt.


Transition equations — internal


f (x, α1 , α2 , t)δx + o(δx) is the probability of the buffer level being between
x and x + δx and the machines being in states α1 and α2 at time t.
[Diagram: transitions into ([x, x + δx], 1, 1): from (0, 1) at rate r1 δt, from (1, 0) at rate r2 δt, and from (1, 1), with the level shifted by (µ1 − µ2)δt, with probability 1 − (p1 + p2)δt.]

Then

f(x, 1, 1, t + δt) = (1 − (p1 + p2)δt) f(x − µ1 δt + µ2 δt, 1, 1, t)
 + r1 δt f(x + µ2 δt, 0, 1, t) + r2 δt f(x − µ1 δt, 1, 0, t) + o(δt)

or

f(x, 1, 1, t + δt) = (1 − (p1 + p2)δt) [ f(x, 1, 1, t) + (∂f/∂x)(x, 1, 1, t)(−µ1 δt + µ2 δt) ]
 + r1 δt f(x + µ2 δt, 0, 1, t) + r2 δt f(x − µ1 δt, 1, 0, t) + o(δt)


or

f(x, 1, 1, t + δt) = (1 − (p1 + p2)δt) [ f(x, 1, 1, t) + (∂f/∂x)(x, 1, 1, t)(µ2 − µ1)δt ]
 + r1 δt [ f(x, 0, 1, t) + (∂f/∂x)(x, 0, 1, t)µ2 δt ]
 + r2 δt [ f(x, 1, 0, t) − (∂f/∂x)(x, 1, 0, t)µ1 δt ] + o(δt)


or

f(x, 1, 1, t + δt) = f(x, 1, 1, t) − (p1 + p2)f(x, 1, 1, t)δt + (µ2 − µ1)(∂f/∂x)(x, 1, 1, t)δt
 + r1 f(x, 0, 1, t)δt + r2 f(x, 1, 0, t)δt

or, finally,

(∂f/∂t)(x, 1, 1) = −(p1 + p2)f(x, 1, 1) + (µ2 − µ1)(∂f/∂x)(x, 1, 1) + r1 f(x, 0, 1) + r2 f(x, 1, 0)


Similarly,

(∂f/∂t)(x, 0, 0) = −(r1 + r2)f(x, 0, 0) + p1 f(x, 1, 0) + p2 f(x, 0, 1)

(∂f/∂t)(x, 0, 1) = µ2 (∂f/∂x)(x, 0, 1) − (r1 + p2)f(x, 0, 1) + p1 f(x, 1, 1) + r2 f(x, 0, 0)

(∂f/∂t)(x, 1, 0) = −µ1 (∂f/∂x)(x, 1, 0) − (p1 + r2)f(x, 1, 0) + p2 f(x, 1, 1) + r1 f(x, 0, 0)
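These transition equations can be checked against a direct simulation of the fluid model; a minimal Monte Carlo sketch (Euler time-stepping, with failure rates scaled by each machine's actual speed so that the boundary behavior above, including the p_ib rates, emerges naturally):

```python
import random

def simulate_fluid(r1, p1, mu1, r2, p2, mu2, N, T, dt=0.01, seed=0):
    """Euler/Monte Carlo discretization of the two-machine fluid line.
    Failure rates scale with actual speed, so a starved or blocked
    machine slows down and fails proportionally less."""
    random.seed(seed)
    a1, a2, x = 1, 1, N/2.0
    produced = area = 0.0
    for _ in range(int(T/dt)):
        s1 = mu1 if a1 else 0.0          # actual speeds, limited at the
        s2 = mu2 if a2 else 0.0          # boundaries of the buffer
        if x <= 0.0: s2 = min(s2, s1)    # M2 cannot outrun M1 at x = 0
        if x >= N:   s1 = min(s1, s2)    # M1 cannot outrun M2 at x = N
        if a1: a1 = 0 if random.random() < p1*(s1/mu1)*dt else 1
        else:  a1 = 1 if random.random() < r1*dt else 0
        if a2: a2 = 0 if random.random() < p2*(s2/mu2)*dt else 1
        else:  a2 = 1 if random.random() < r2*dt else 0
        x = min(N, max(0.0, x + (s1 - s2)*dt))
        produced += s2*dt
        area += x*dt
    return produced/T, area/T            # estimates of P and xbar

print(simulate_fluid(0.09, 0.01, 1.1, 0.08, 0.009, 1.0, N=20, T=20000))
```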


Transition equations — boundary

p(x, α1 , α2 , t) is the probability of the buffer level being x (where x = 0 or


N) and the machines being in states α1 and α2 at time t.

Boundary equations describe transitions from boundary states to boundary


states; from boundary states to interior states; and from interior states to
boundary states.

Boundary equations are relationships among p(x, α1 , α2 , t) and


f (x, α1 , α2 , t) and their derivatives for x = 0 or x = N.


Transitions into (0,0,0)

[Diagram: transitions into boundary state (0, 0, 0).]

We must construct an equation of the form

p(0, 0, 0, t + δt) = p(0, 0, 0, t) + Aδt + o(δt)


Transitions from boundary states into (0,0,0)

[Diagram: transitions among the boundary states at x = 0.]

The system can go from (0,0,0) to (0,0,0) if there is no repair. It can go from
(0,1,0) if the first machine does not fail.
It cannot go from (0,0,1) to (0,0,0) because the second machine is starved and
cannot fail. To go from (0,1,1) to (0,0,0) requires two simultaneous failures, which
has a probability on the order of δt².

Transitions from internal states into (0,0,0)

[Diagram: transitions from internal states near x = 0 into (0, 0, 0).]

To go from (x, α1, α2), x > 0, to (0,0,0), we must have

0 < x < α2 µ2 δt − α1 µ1 δt.

For example, if α1 = 0 and α2 = 1, we are considering transitions from (x, 0, 1)
to (0,0,0), where 0 < x < µ2 δt.

But

prob ([0 < x < µ2 δt], 0, 1) = f(x, 0, 1)µ2 δt + o(δt) = f(0, 0, 1)µ2 δt + o(δt)

and the transition probability from (0,1) to (0,0) is

(1 − r1 δt)p2 δt + o(δt) = p2 δt + o(δt).

Therefore, the probability of going from ([0 < x < µ2 δt], 0, 1) to (0,0,0) is

f(x, 0, 1)µ2 p2 δt² + (δt)o(δt) = o(δt).

For other transitions from (x, α1, α2), x > 0, to (0,0,0), the probabilities
are similar or smaller.
Therefore

p(0, 0, 0, t + δt) = (1 − r1 δt − r2 δt)p(0, 0, 0, t) + p(0, 1, 0, t)p1 δt

or

(d/dt) p(0, 0, 0) = −(r1 + r2)p(0, 0, 0) + p1 p(0, 1, 0)

Consider state (0,1,0). As soon as the system enters this state, it leaves,
because x must immediately increase. Therefore

p(0, 1, 0) = 0

even if the system is not in steady state. Therefore

(d/dt) p(0, 0, 0) = −(r1 + r2)p(0, 0, 0).

In steady state,

p(0, 0, 0) = 0

Transitions into (0,0,1)

[Diagram: transitions into boundary state (0, 0, 1).]

p(0, 0, 1, t + δt) = r2 δt p(0, 0, 0, t) + (1 − r1 δt)p(0, 0, 1, t)
 + p1 δt p(0, 1, 1, t) + ∫₀^{µ2 δt} f(x, 0, 1, t) dx


or,

(d/dt) p(0, 0, 1) = r2 p(0, 0, 0) − r1 p(0, 0, 1) + p1 p(0, 1, 1) + µ2 f(0, 0, 1).

Transitions into (0,1,1), µ2 > µ1

[Diagram: transitions into boundary state (0, 1, 1) when µ2 > µ1.]

p(0, 1, 1, t + δt) = (1 − (p1 + p2b)δt)p(0, 1, 1, t) + r1 δt p(0, 0, 1, t)
 + ∫₀^{(µ2 − µ1)δt} f(x, 1, 1, t) dx,

or

(d/dt) p(0, 1, 1) = −(p1 + p2b)p(0, 1, 1) + r1 p(0, 0, 1) + (µ2 − µ1)f(0, 1, 1), if µ1 ≤ µ2.


Transitions into (0,1,1), µ2 ≤ µ1

If x(t) = 0, the transition from any (α1 (t), α2 (t)) to


(α1 (t + δt), α2 (t + δt)) = (1, 1) would cause x to increase immediately.
Therefore

p(0, 1, 1) = 0

To come:
◮ Other boundary equations
◮ Normalization

Σ_{α1=0}^{1} Σ_{α2=0}^{1} [ ∫₀^N f(x, α1, α2) dx + p(0, α1, α2) + p(N, α1, α2) ] = 1.

◮ Production rate

P2 = µ2 [ ∫₀^N (f(x, 0, 1) + f(x, 1, 1)) dx + p(N, 1, 1) ] + µ1 p(0, 1, 1)
 = P1 = µ1 [ ∫₀^N (f(x, 1, 0) + f(x, 1, 1)) dx + p(0, 1, 1) ] + µ2 p(N, 1, 1).

◮ Average in-process inventory

x̄ = Σ_{α1=0}^{1} Σ_{α2=0}^{1} [ ∫₀^N x f(x, α1, α2) dx + N p(N, α1, α2) ].


Also to come:
◮ Identities (in steady state)
◮ Conservation of flow; Blocking, Starvation, and Production Rate;
Repair frequency equals failure frequency; Flow Rate-Idle Time; Limits
◮ Solution technique
◮ Internal solution; transient states;

f(x, α1, α2) = C e^{λx} Y1^{α1} Y2^{α2}

Cases (µ1 < µ2 , µ1 = µ2 , µ1 > µ2 ); boundary probabilities

Two-Machine, Finite-Buffer Lines
Exponential and continuous line performance

Exponential and Continuous Two-Machine Lines

◮ r1 = 0.09, p1 = 0.01, µ1 = 1.1
◮ r2 = 0.08, p2 = 0.009
◮ N = 20
◮ Explain the shapes of the graphs.

[Figure: P vs. µ2, 0 ≤ µ2 ≤ 2, for the exponential and continuous models.]


◮ Explain the shapes of the graphs.

[Figure: n̄ vs. µ2, 0 ≤ µ2 ≤ 2, for the exponential and continuous models.]


The no-variability limit:

Consider a new continuous-material two-machine line with parameters
µ′1, r1′, p1′, µ′2, r2′, p2′, N′. Assume it is perfectly reliable and its machines have the
same isolated production rates as those of the first continuous-material
two-machine line. It also has the same buffer size.
Its parameters are therefore given by

µ′1 = ρ1; r1′ unspecified; p1′ = 0; N′ = N
µ′2 = ρ2; r2′ unspecified; p2′ = 0

where

ρi = µi ri/(ri + pi)
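For a perfectly reliable fluid line the throughput is simply the bottleneck rate, so the limit is easy to compute; a short sketch with the parameter values used in these plots:

```python
mu1, r1, p1 = 1.1, 0.09, 0.01
mu2, r2, p2 = 1.0, 0.08, 0.009
rho1 = mu1*r1/(r1 + p1)        # isolated production rates
rho2 = mu2*r2/(r2 + p2)
print(min(rho1, rho2))         # throughput of the no-variability line
```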


[Figures: P vs. µ2 and n̄ vs. µ2 for the exponential model, the continuous model, and the no-variability limit.]

Exponential and Continuous Two-Machine Lines

[Figure: n̄ vs. r1, 0 ≤ r1 ≤ 2, for the exponential and continuous models.]
Two-Machine, Finite-Buffer Lines
Continuous material and Deterministic Processing Time Lines

Figure 6.8 in Schick, Irvin C. "Analysis of a multistage transfer line


with unreliable components and interstage buffer storages with
applications to chemical engineering problems." Master's thesis, MIT, 1978.
delta transformation

[Figure: E vs. delta, 0 ≤ delta ≤ 1; E ranges from about 0.655 to 0.67.]
MIT OpenCourseWare
http://ocw.mit.edu

2.852 Manufacturing Systems Analysis


Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
