
Poisson processes and waiting times

Generating function

\[
\Phi(x, z) := \sum_{n=0}^{\infty} P_n(x)\, z^n .
\tag{9.9}
\]
Apart from the presence of the additional variable x, this is the same object introduced in
Section 2.1.3 [p. 21]. For arbitrary but fixed x, the \(P_n(x)\) are the coefficients of a Taylor
expansion. The representation by \(\Phi(x, z)\) contains the same information as \(P_n(x)\) but offers certain
advantages, as we have already seen in Section 2.1.3 [p. 21]. In particular, the probabilities
\(P_n(x)\) are given by


\[
P_n(x) = \frac{1}{n!} \left. \frac{d^n}{dz^n}\, \Phi(x, z) \right|_{z=0} .
\tag{9.10}
\]
In order to obtain the differential equation for the generating function, we multiply equation (9.7) by \(z^n\) and sum over n. The result is




\[
\frac{d}{dx} \underbrace{\sum_{n=0}^{\infty} P_n(x)\, z^n}_{\Phi(x,z)}
 \;=\; \lambda\, \underbrace{\sum_{n=0}^{\infty} P_{n-1}(x)\, z^n}_{z\,\Phi(x,z)}
 \;-\; \lambda\, \underbrace{\sum_{n=0}^{\infty} P_n(x)\, z^n}_{\Phi(x,z)} ,
\]
\[
\frac{d}{dx}\, \Phi(x, z) = \lambda\, (z - 1)\, \Phi(x, z) .
\tag{9.11}
\]

Along with the initial condition equation (9.8), which implies \(\Phi(0, z) = 1\), the solution of
the differential equation (9.11) reads
\[
\Phi(x, z) = e^{\lambda x (z - 1)} .
\]
Eventually we obtain the sought-for probabilities \(P_n(x)\) via equation (9.10):
\[
P_n(x) = \frac{1}{n!} \left. \frac{d^n}{dz^n}\, e^{\lambda x (z-1)} \right|_{z=0}
       = \frac{1}{n!}\, (\lambda x)^n\, e^{-\lambda x} .
\]
This is, as expected, the Poisson distribution. We see that the Poisson distribution follows
from very general assumptions; for that reason the distribution is ubiquitous in physics.
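This final step is easy to check symbolically. The sketch below (our own illustration, assuming SymPy is available; the symbol names are not the book's) differentiates the generating function n times with respect to z at z = 0, as prescribed by equation (9.10), and compares the result with the Poisson distribution.

```python
import sympy as sp

# lam: density of Poisson points, x: interval length, z: auxiliary variable
lam, x, z = sp.symbols('lambda x z', positive=True)

Phi = sp.exp(lam * x * (z - 1))        # solution of equation (9.11)

for k in range(6):
    # equation (9.10): P_k(x) = (1/k!) d^k/dz^k Phi(x, z) evaluated at z = 0
    P_k = sp.diff(Phi, z, k).subs(z, 0) / sp.factorial(k)
    poisson_k = (lam * x)**k * sp.exp(-lam * x) / sp.factorial(k)
    assert sp.simplify(P_k - poisson_k) == 0

print("P_n(x) reproduces the Poisson distribution for n = 0, ..., 5")
```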

9.3 Waiting time paradox


If we interpret the distance \(\Delta t\) between neighbouring PPs in a temporal way, we have
\[
p(\Delta t \mid \tau) = \frac{1}{\tau}\, e^{-\Delta t/\tau} ,
\tag{9.12}
\]
with \(\tau\) being the mean waiting time. Let us assume a Poisson-distributed arrival time of
buses at a bus stop. What is the mean waiting time for the next bus, when we are at the bus


stop at a random instant of time? We could likewise ask: if we switch on a Geiger counter,
how long does it take until the next signal, triggered by an alpha particle, arrives? Since the mean time
between arrivals is \(\tau\), it seems reasonable to assume that the mean waiting time should be
\(\tau/2\). The correct answer, however, is \(\tau\), which may appear paradoxical, since the remaining
time to the next bus should be shorter than the entire interval between successive buses.
In the following, we will see how Bayesian probability theory provides the correct
answer, and why the result is nonetheless reasonable. The quantity to be determined is
the probability density \(p(t \mid I_t, \tau, I)\) for the waiting time t, given that we arrive at a random instant of time. The arrival instant falls into a time interval \(\mathcal{I}\) between successive buses.
This is denoted by the proposition

\(I_t\): The time interval \(\mathcal{I}\) of our arrival is determined by \(t \in \mathcal{I}\).

We proceed by introducing the length L of the arrival interval \(\mathcal{I}\) via the marginalization rule:

\[
p(t \mid I_t, \tau, I)
 = \int_0^{\infty}
   \underbrace{p(t \mid L_{\mathcal{I}} = L, I_t, I)}_{\mathbb{1}(0 < t \le L)\,\frac{1}{L}}\;
   p(L \mid I_t, \tau, I)\, dL
 = \int_0^{\infty} \mathbb{1}(0 < t \le L)\, \frac{1}{L}\;
   p(L \mid I_t, \tau, I)\, dL .
\tag{9.13}
\]

Here we exploited the fact that the arrival time is random, resulting in a uniform PDF within the interval
\(\mathcal{I}\). The quantity \(p(L \mid I_t, \tau, I)\) is the PDF for the length of the interval selected as specified
in \(I_t\). We proceed with Bayes' theorem:
\[
p(L \mid I_t, \tau, I) = \frac{p(I_t \mid L, \tau, I)\; p(L \mid \tau, I)}{p(I_t \mid \tau, I)} .
\]

The probability to arrive in an interval of length L is, according to the classical definition
and in analogy with the dart problem of Section 7.2.1 [p. 94], proportional to its length L,
\[
p(I_t \mid L, \tau, I) = c\, L ,
\]
with an unknown proportionality constant c. The PDF \(p(L \mid \tau, I)\) is the exponential distribution of neighbouring PPs. The expression in the denominator is a constant that can be combined with
c into a single constant fixed by normalization, resulting in
\[
p(L \mid I_t, \tau, I) = \frac{L}{\tau^2}\, e^{-L/\tau} .
\]

This is a special case of a Gamma distribution with \(\alpha = 2\) and \(\beta = 1/\tau\). Hence the mean
interval length is
\[
\langle L \rangle = 2 \tau .
\]
Apparently, intervals selected by \(I_t\) are longer than average.


Now, we proceed with equation (9.13):


\[
p(t \mid I_t, \tau, I) = \frac{1}{\tau^2} \int_t^{\infty} e^{-L/\tau}\, dL
 = \frac{1}{\tau}\, e^{-t/\tau} .
\]

This proves that the waiting time has the same PDF as the interval length between successive buses. As we have seen, this counterintuitive result is due to the larger probability
of arriving in a long interval. This also clarifies why the interval from an arbitrarily
chosen point in time to the next event has a mean duration of \(\tau\).
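A short numerical experiment makes the result tangible. The sketch below (our own illustration, assuming NumPy; the variable names are not from the book) draws exponential bus gaps with mean \(\tau\), picks random arrival instants, and records both the waiting time until the next bus and the length of the interval the arrival falls into. The former should average \(\tau\), the latter \(2\tau\).

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 3.0                      # mean gap between buses
n_gaps = 200_000

# Bus arrival times as cumulative sums of exponential gaps
gaps = rng.exponential(tau, n_gaps)
buses = np.cumsum(gaps)

# Random arrival instants, kept well inside the simulated horizon
arrivals = rng.uniform(0.1 * buses[-1], 0.9 * buses[-1], 50_000)

idx = np.searchsorted(buses, arrivals)      # index of the next bus
waiting = buses[idx] - arrivals             # waiting time until the next bus
interval = buses[idx] - buses[idx - 1]      # length of the interval we fell into

print("mean waiting time :", waiting.mean(), " (theory: tau =", tau, ")")
print("mean interval len :", interval.mean(), " (theory: 2*tau =", 2 * tau, ")")
```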
9.4 Order statistic of Poisson processes
Assume \(\{t_n\}\) are Poisson points, obeying equation (9.12), and let N(t) denote the stochastic
process which provides the number of Poisson points up to time t:
\[
N(t) = \sum_{n} \mathbb{1}(t_n \le t) .
\]

The probability for N(t) = n is of course given by the Poisson distribution. A representative realization is depicted in Figure 9.2. Next we want to determine the PDF for the position of
the nth Poisson point. We can use the general considerations on order statistics derived in
Section 7.6 [p. 118]. In order to find the nth Poisson point in [t, t + dt), n − 1 PPs have to
be located in the interval [0, t). The corresponding probability is
\[
p(n-1 \mid t, \tau, I) = e^{-t/\tau}\, \frac{(t/\tau)^{n-1}}{(n-1)!} .
\]

The probability for the next point to be located in [t, t + dt) is given by \(dt/\tau\). The product
of the two independent probabilities results in a Gamma distribution, which in the present context is also called
the Erlang distribution.
Figure 9.2 Poisson points (crosses) and the Poisson process N(t) for \(\tau = 1\).


Erlang distribution

\[
p(t_n = t \mid \tau, I) = \frac{t^{\,n-1}}{\tau^{n}\, \Gamma(n)}\; e^{-t/\tau} .
\tag{9.14}
\]

The expectation value of this position is
\[
\langle t_n \rangle = n \tau .
\]
This result is in agreement with expectations, since the mean distance between the points
is given by \(\tau\). The Erlang distribution was introduced in studies of the frequency of
phone calls and is important for various types of queueing problems.
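The Erlang result (9.14) is easy to verify numerically. In the sketch below (our own check, assuming NumPy) the nth Poisson point is obtained as the sum of n exponential gaps with mean \(\tau\), and its sample mean is compared with \(\langle t_n \rangle = n\tau\).

```python
import numpy as np

rng = np.random.default_rng(1)
tau, n, samples = 2.0, 5, 100_000

# t_n is the sum of n independent exponential gaps with mean tau,
# i.e. Erlang/Gamma distributed with shape n and scale tau, cf. eq. (9.14)
t_n = rng.exponential(tau, size=(samples, n)).sum(axis=1)

print("sample mean of t_n :", t_n.mean(), " (theory: n*tau   =", n * tau, ")")
print("sample variance    :", t_n.var(), " (theory: n*tau^2 =", n * tau**2, ")")
```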

9.5 Various examples


Next we will give some simple pedagogical examples. Realistic applications will follow in
Part V [p. 333].

9.5.1 Shot noise


As a first example we consider shot noise, which is relevant in the measurement of individual photons. Suppose a light source emits photons with a mean rate of \(\lambda\) photons per unit time. The emission of the photons shall be uncorrelated, and within an infinitesimal time
interval at most one photon is emitted. Therefore, the probability for the emission of
a photon within dt is given by
\[
p = \lambda\, dt .
\]
These are precisely the conditions for a Poisson process, and the probability to find n
photons within a finite interval of length t is therefore
\[
P(n \mid N = t/dt,\; p = \lambda\, dt)\;\xrightarrow{\,dt \to 0\,}\; e^{-\lambda t}\, \frac{(\lambda t)^n}{n!} .
\]
Hence, shot noise obeys a Poisson distribution.
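The limiting argument can be mimicked numerically: divide a finite interval of length t into many small steps dt, emit a photon in each step with probability \(\lambda\,dt\), and compare the count statistics with the Poisson distribution. The sketch below (our own illustration, assuming NumPy; parameter values are arbitrary) does exactly that.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(2)
lam, t, dt = 4.0, 1.0, 1e-3            # rate, interval length, time step
n_steps = int(t / dt)
runs = 20_000

# Sum of n_steps Bernoulli trials with p = lam*dt, i.e. a binomial count,
# which is exactly the "at most one photon per dt" picture of the text
counts = rng.binomial(n_steps, lam * dt, size=runs)

for n in range(8):
    empirical = np.mean(counts == n)
    poisson = exp(-lam * t) * (lam * t) ** n / factorial(n)
    print(f"n={n}: simulated {empirical:.4f}   Poisson {poisson:.4f}")
```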

9.5.2 The stickiness of bad luck


It is a common impression that all waiting queues proceed faster than our own. Is this just
a distorted perception? Let us analyse the problem.
Let \(t_0\) denote our waiting time and \(t_i\) (i = 1, 2, \ldots) the waiting times in other queues. Let
us assume that nobody receives preferential treatment and that hence all \(t_i\) are subject to

the same statistical fluctuations and are hence i.i.d. random variables with probability density \(p(t \mid I)\). We are interested in the probability that only the nth waiting time
\(t_n\) is larger than ours, \(t_0\). For a systematic treatment we define the following proposition.

\(A_n\): Only the nth waiting time \(t_n\) is greater than ours.
In order to compute the probability \(P(A_n \mid I)\), we employ the marginalization rule to introduce the value \(t_0\) of our waiting time:
\[
P(A_n \mid I) = \int_0^{\infty} P(A_n \mid t_0, I)\; p(t_0 \mid I)\, dt_0 .
\]

The proposition \(A_n\) implies that the waiting times of queues \(1, \ldots, (n-1)\) are smaller than
\(t_0\) and that \(t_n > t_0\). Since the waiting times are uncorrelated, we obtain
\[
P(A_n \mid I) = \int_0^{\infty}
  \underbrace{\prod_{i=1}^{n-1} P(t_i < t_0 \mid I)}_{=\;q(t_0)^{\,n-1}\ \text{(independent of } i\text{)}}\;
  \underbrace{P(t_n > t_0 \mid I)}_{=\;1 - q(t_0)}\;
  p(t_0 \mid I)\, dt_0 .
\]

Now, \(q(t)\) is the CDF corresponding to \(p(t \mid I)\) and therefore \(p(t_0 \mid I)\, dt_0 = dq(t_0)\). So, the
integral simplifies significantly:

\[
P(A_n \mid I) = \int_0^1 q^{\,n-1}\, (1 - q)\, dq
 = \frac{\Gamma(n)\, \Gamma(2)}{\Gamma(n + 2)}
 = \frac{(n - 1)!}{(n + 1)!}
 = \frac{1}{n (n + 1)} .
\]

It is straightforward to verify the correct normalization:


\[
\sum_{n=1}^{\infty} P(A_n \mid I)
 = \sum_{n=1}^{\infty} \frac{1}{n (n + 1)}
 = \lim_{L \to \infty} \left( \sum_{n=1}^{L} \frac{1}{n} \;-\; \sum_{n=2}^{L+1} \frac{1}{n} \right)
 = \lim_{L \to \infty} \left( 1 - \frac{1}{L + 1} \right) = 1 .
\]

The surprising aspects of the result are:

• it is independent of the distribution \(p(t \mid I)\);
• the mean value does not exist, \(\langle n \rangle = \infty\).

This implies that, on average, infinitely many other queues are faster than ours. However, the mean
is dominated by the tail of the distribution, because the median is only n = 1.5, since
\(P(A_1 \mid I) = 1/2\).
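Both the integral for \(P(A_n \mid I)\) and the normalization sum can be checked symbolically, for instance with SymPy (a small sketch of our own, not part of the book):

```python
import sympy as sp

q = sp.symbols('q', positive=True)

# P(A_n|I) = int_0^1 q^(n-1) (1-q) dq should equal 1/(n(n+1))
for n in range(1, 8):
    P_An = sp.integrate(q**(n - 1) * (1 - q), (q, 0, 1))
    assert P_An == sp.Rational(1, n * (n + 1))

# normalization: sum_{n=1}^infinity 1/(n(n+1)) = 1
n = sp.symbols('n', positive=True, integer=True)
print(sp.summation(1 / (n * (n + 1)), (n, 1, sp.oo)))   # prints 1
```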
In conclusion, we want to simulate this queueing problem. As the PDF for the waiting times
we use uniform random numbers on the unit interval. The algorithm is as follows:


Algorithm to simulate the queueing problem:
Generate a reference random number \(t_0\).
Generate random numbers \(t_n\) until \(t_n > t_0\) and record n.
Repeat the experiment L times.
Compute the mean and standard deviation of n.
(9.15)

The result of a simulation run up to a maximum length L = 1000 is shown in Figure 9.3.
Neither the mean value nor the standard deviation converges with increasing simulation
length. This is to be expected, since both values diverge for \(L \to \infty\). Despite the high
probability of 2/3 that already the next or second-next element exceeds the value of the
reference element, there is a non-negligible probability for the occurrence of very long
waiting times (note the logarithmic scale of the ordinate in Figure 9.3). In the present
example the arithmetic mean at the end of the simulation is 548 and the standard deviation
is 17 151, thirty times the mean value!
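A compact implementation of the algorithm in equation (9.15) could look as follows (a sketch in Python with NumPy; variable names are ours). It also compares the empirical frequencies of small n with the exact result \(P(A_n \mid I) = 1/(n(n+1))\).

```python
import numpy as np

rng = np.random.default_rng(3)
L = 1000                               # number of repetitions

ns = []
for _ in range(L):
    t0 = rng.random()                  # our own waiting time (reference)
    n = 1
    while rng.random() <= t0:          # queue n was faster, try the next one
        n += 1
    ns.append(n)                       # index of the first queue slower than ours

ns = np.array(ns)
print("mean          :", ns.mean())
print("std deviation :", ns.std())
for n in (1, 2, 3):
    print(f"P(A_{n}) simulated {np.mean(ns == n):.3f}, exact {1/(n*(n+1)):.3f}")
```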
Figure 9.3 Computer simulation of the queueing problem according to the algorithm in equation (9.15). The number n(L) of faster queues is marked by dots. The running mean (solid line) and standard deviation (dashed line) are plotted versus the number of repetitions L (ordinate: number of faster queues, logarithmic scale).

9.5.3 Pedestrian delay problem

In its simplest form the pedestrian delay problem considers pedestrians arriving at random
at a street with the goal of crossing it. The traffic is described by a Poisson process, where


any structures due to traffic lights or the like are ignored. The central question is how long
the pedestrian has to wait on average until the time gap between cars is big enough.
To be more specific, let T be the time the pedestrian needs to traverse the street and \(\tau\)
the mean time gap between the Poisson events, i.e. the PDF for the Poisson events reads
\[
p(t \mid \tau, I) = \frac{1}{\tau}\, e^{-t/\tau} .
\]
It is expedient to express all times in units of \(\tau\). Then the PDF reads
\[
p(t \mid I) = e^{-t} .
\]
We seek to determine the PDF \(p(T_w \mid I)\) for the waiting time \(T_w\) of the pedestrian. To begin
with, we introduce the proposition

\(A_n\): The first n intervals are too short and interval n + 1 is long enough.

This proposition implies that the first time intervals \(t_1, \ldots, t_n\) are all smaller than T and
that for the last one we have \(t_{n+1} \ge T\). We first determine the probability for \(A_n\):
\[
P_n := P(A_n \mid T, I) = \Bigl[\prod_{\nu=1}^{n} P(t_\nu < T \mid I)\Bigr]\; P(t_{n+1} \ge T \mid I) ,
\]
\[
P(t < T \mid I) = \int_0^T e^{-t}\, dt = 1 - e^{-T} =: F ,
\]
\[
P(t \ge T \mid I) = 1 - F ,
\]
\[
P_n = F^{\,n}\, (1 - F) .
\]
This is the geometric distribution. The mean and variance of the number of initially wrong
intervals are therefore, according to equations (4.25b) and (4.25c) [p. 63],
\[
\langle n \rangle = \frac{F}{1 - F} = e^{T} - 1 ,
\tag{9.16a}
\]
\[
\langle (\Delta n)^2 \rangle = \frac{F}{(1 - F)^2} = e^{T} \left( e^{T} - 1 \right) .
\tag{9.16b}
\]

Next we determine the PDF for the waiting time \(T_w\):
\[
p(T_w \mid T, I) = \sum_{n=0}^{\infty} p(T_w \mid T, A_n, I)\; P_n .
\]

We introduce the n random variables \(\mathbf{t} = \{t_1, \ldots, t_n\}\), corresponding to the interval lengths,
by the marginalization rule:


 



\[
p(T_w \mid A_n, T, I) = \int_0^{\infty} \!\!\cdots\!\! \int_0^{\infty} d^{\,n}t\;\;
  p\bigl(T_w \mid \mathbf{t}, I\bigr)\;
  p\bigl(\mathbf{t} \mid A_n, T, I\bigr) ,
\]
\[
p\bigl(T_w \mid \mathbf{t}, I\bigr) = \delta\Bigl(T_w - \sum_{\nu=1}^{n} t_\nu\Bigr) ,
\]
\[
p\bigl(\mathbf{t} \mid A_n, T, I\bigr) = \prod_{\nu=1}^{n} p(t_\nu \mid t_\nu < T, I) .
\]

Hence
\[
p(T_w \mid A_n, T, I) = \int_0^{T} \!\!\cdots\!\! \int_0^{T} d^{\,n}t\;\;
  \delta\Bigl(T_w - \sum_{\nu=1}^{n} t_\nu\Bigr)\,
  \prod_{\nu=1}^{n} p\bigl(t_\nu \mid t_\nu < T, I\bigr) .
\]

We will not compute the PDF for \(T_w\) in full, but only its mean:
\[
\langle T_w \rangle = \sum_{n=0}^{\infty}
  \underbrace{\int_0^{\infty} T_w\; p(T_w \mid A_n, T, I)\, dT_w}_{:=\,\langle T_w \rangle_n}\;
  P_n .
\]

For the mean for given n,
\[
\langle T_w \rangle_n = \int_0^{T} \!\!\cdots\!\! \int_0^{T} d^{\,n}t\;
  \Bigl(\sum_{\nu=1}^{n} t_\nu\Bigr)\,
  \prod_{\nu=1}^{n} p\bigl(t_\nu \mid t_\nu < T, I\bigr) ,
\]

we need the integral


\[
I := \int_0^{T} t\; p(t \mid t < T, I)\, dt
 = \int_0^{T} t\, \frac{e^{-t}}{F}\, dt
 = \frac{1 - F}{F} \left( e^{T} - 1 - T \right) .
\]

Then \(\langle T_w \rangle_n = n\, I\) and \(\langle T_w \rangle = \langle n \rangle\, I\). Along with equation (9.16) we obtain the final result,
which in original units reads

Mean waiting time

\[
\langle T_w \rangle = \tau \left( e^{T/\tau} - 1 \right) - T .
\]

This is a rather striking result, as the waiting time increases exponentially with the time T needed
to cross the road. In particular, it implies an exponential disadvantage for handicapped or
elderly people.
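The exponential growth is easily confirmed by simulation. The following sketch (our own check, assuming NumPy) draws exponential gaps with mean \(\tau\), accumulates gaps until the first one of length at least T, and compares the mean accumulated waiting time with \(\tau(e^{T/\tau} - 1) - T\).

```python
import numpy as np
from math import exp

rng = np.random.default_rng(4)
tau, T, runs = 1.0, 2.0, 50_000       # mean gap, crossing time, repetitions

waits = np.empty(runs)
for i in range(runs):
    wait = 0.0
    while True:
        gap = rng.exponential(tau)
        if gap >= T:                   # gap long enough: pedestrian crosses
            break
        wait += gap                    # too short: wait through the whole gap
    waits[i] = wait

theory = tau * (exp(T / tau) - 1) - T
print("simulated mean waiting time:", waits.mean())
print("theoretical mean           :", theory)
```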


The rules of probability theory provide the means to transform probabilities. As a direct
consequence of the sum rule and the product rule, Bayes' theorem in equation (1.9) [p. 12]
has been derived:
\[
P(a \mid d, I) = \frac{1}{Z}\; P(d \mid a, I)\; P(a \mid I) .
\]
The theorem allows us to relate the posterior distribution to the product of the likelihood
and the prior PDF. The likelihood distribution is usually known and determined by the
experimental setup (expressed by I) and the parameters a of the model. Prior probabilities,
on the contrary, cannot be assigned solely based on the rules of probability theory, since
the priors incorporate the assumptions underlying the inference process. Prior PDFs may
also enter in a more indirect way, for example if not all parameter values entering the
likelihood are known. The unknown parameters a can be marginalized:
\[
p(d \mid b, I) = \int p(d \mid a, b, I)\; p(a \mid b, I)\; d^{\,m}a ,
\tag{9.17}
\]
resulting in a probability distribution on the left-hand side which is independent of a. However, the marginalization also requires the specification of a prior PDF \(p(a \mid b, I)\) for a,
which ultimately cannot be provided by the rules of probability theory. A typical example
of this situation is Gaussian noise with zero mean and unknown variance: for the marginal
likelihood, the prior PDF \(p(\sigma \mid I)\) for the standard deviation needs to be specified (a numerical
sketch is given at the end of this section). This is a generic property of Bayesian inference. The rules of probability yield a
hierarchy of conditional probabilities, but at the very end there is always a prior probability
that has to be provided from outside, based on the available information. There are three
generic cases.
(a) Ignorant priors: for parameters or densities without any additional information,
except the definition of the problem.
(b) Testable information: in addition to (a), exact information is available, typically in
the form of equality constraints on the sought-for distribution \(p(x \mid I)\), e.g. prescribed moments.
(c) Priors for positive additive distribution functions.
Next, we will discuss the first two cases in more detail. The third topic is addressed in
Chapter 12 [p. 201] on the quantified maximum entropy method (QME), which has
been applied very successfully to form-free image reconstruction problems, for example,
in astrophysics and plasma physics.
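As a simple numerical illustration of the marginalization in equation (9.17), consider the Gaussian-noise example mentioned above: data d with zero-mean Gaussian noise of unknown standard deviation \(\sigma\). The sketch below is our own illustration; the truncated Jeffreys-type prior \(p(\sigma \mid I) \propto 1/\sigma\) on a finite range and the grid integration are assumptions made purely for demonstration, anticipating the discussion of ignorant priors.

```python
import numpy as np

def marginal_likelihood(d, sig_lo=0.1, sig_hi=10.0, n_grid=2000):
    """p(d|I) = integral of p(d|sigma, I) p(sigma|I) dsigma on a grid,
    with an assumed (truncated) Jeffreys prior p(sigma|I) ~ 1/sigma."""
    sig = np.linspace(sig_lo, sig_hi, n_grid)
    dsig = sig[1] - sig[0]
    prior = 1.0 / sig
    prior /= np.sum(prior) * dsig                 # normalize on [sig_lo, sig_hi]

    d = np.asarray(d)
    # Gaussian likelihood of the data for each sigma on the grid
    loglike = (-0.5 * np.sum(d**2) / sig**2
               - len(d) * np.log(np.sqrt(2 * np.pi) * sig))
    like = np.exp(loglike)
    return np.sum(like * prior) * dsig            # simple quadrature of eq. (9.17)

rng = np.random.default_rng(5)
data = rng.normal(0.0, 2.0, size=20)              # synthetic data, true sigma = 2
print("marginal likelihood p(d|I) =", marginal_likelihood(data))
```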
