Maksim Levental
September 30, 2014
2.6 (a) If $-\infty < x < 0$ then $Y = g(X) = |X|^3 = (-X)^3$ and hence $g^{-1}(Y) = -Y^{1/3}$ and
$$
\left|\frac{d}{dy}\left(-y^{1/3}\right)\right| = \frac{1}{3}\,y^{-2/3},
$$
and if $0 \le x < \infty$ then $Y = g(X) = |X|^3 = X^3$ and hence $g^{-1}(Y) = Y^{1/3}$ and
$$
\left|\frac{d}{dy}\,y^{1/3}\right| = \frac{1}{3}\,y^{-2/3}.
$$
Therefore for $0 \le y < \infty$
$$
f_Y(y) = \frac{e^{-\left|-y^{1/3}\right|}}{6}\,y^{-2/3} + \frac{e^{-\left|y^{1/3}\right|}}{6}\,y^{-2/3} = \frac{e^{-y^{1/3}}}{3}\,y^{-2/3}.
$$
Let $u = y^{1/3}$; then
$$
\int_0^{\infty} f_Y(y)\,dy = \int_0^{\infty} \frac{e^{-y^{1/3}}}{3}\,y^{-2/3}\,dy = \int_0^{\infty} e^{-y^{1/3}}\,d\!\left(y^{1/3}\right) = \int_0^{\infty} e^{-u}\,du = 1.
$$
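As a quick sanity check (not part of the original solution), the density above integrates to the CDF $P(Y \le y) = 1 - e^{-y^{1/3}}$, which we can compare against simulation, assuming (as in the exercise) that $f_X(x) = \frac{1}{2}e^{-|x|}$ is the standard Laplace density:

```python
import math
import random

random.seed(0)

def sample_laplace():
    # Inverse-CDF sampling for the standard Laplace distribution.
    u = random.random()
    return math.log(2 * u) if u < 0.5 else -math.log(2 * (1 - u))

n = 200_000
ys = [abs(sample_laplace()) ** 3 for _ in range(n)]  # Y = |X|^3

# Compare empirical CDF with the analytic 1 - exp(-y^(1/3)) at a few points.
checks = {}
for y0 in (0.5, 1.0, 8.0):
    empirical = sum(y <= y0 for y in ys) / n
    analytic = 1 - math.exp(-y0 ** (1 / 3))
    checks[y0] = (empirical, analytic)
```

With 200,000 samples the empirical and analytic CDFs agree to about two decimal places.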
(b) $Y = g(X) = 1 - X^2$, so on $-1 < x < 0$, $g_1^{-1}(Y) = -\sqrt{1-Y}$ with $\left|\frac{d}{dy}\left(-\sqrt{1-y}\right)\right| = \frac{1}{2\sqrt{1-Y}}$, and on $0 \le x < 1$, $g_2^{-1}(Y) = \sqrt{1-Y}$ with $\left|\frac{d}{dy}\sqrt{1-y}\right| = \frac{1}{2\sqrt{1-Y}}$, and therefore for $0 \le y < 1$
$$
f_Y(y) = \frac{3\left(-\sqrt{1-y} + 1\right)^2}{16\sqrt{1-y}} + \frac{3\left(\sqrt{1-y} + 1\right)^2}{16\sqrt{1-y}} = \frac{3}{8}\cdot\frac{2-y}{\sqrt{1-y}}.
$$
Let $u = \sqrt{1-y}$; then $u^2 = 1 - y \Rightarrow y = 1 - u^2 \Rightarrow dy = -2u\,du$ (which is of course just going back to $X$ space). Hence
$$
\int_0^1 f_Y(y)\,dy = \int_0^1 \frac{3}{8}\cdot\frac{2-y}{\sqrt{1-y}}\,dy = \frac{3}{8}\cdot 2\int_0^1 \left(1 + u^2\right)du = \frac{3}{4}\left(1 + \frac{1}{3}\right) = \frac{3}{4}\cdot\frac{4}{3} = 1.
$$
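The substituted form of the normalization integral can be checked numerically; this is a sketch with a simple midpoint rule:

```python
# After u = sqrt(1 - y), the normalization integral is (3/4) * ∫_0^1 (1 + u^2) du.
n = 10_000
h = 1.0 / n
total = sum((3 / 4) * (1 + ((i + 0.5) * h) ** 2) * h for i in range(n))
```

The midpoint rule on a smooth integrand converges quadratically, so `total` agrees with 1 to well under 1e-6.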
2.11 $X \sim \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}$.

(a)
$$
E\left[X^2\right] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} x^2 e^{-x^2/2}\,dx = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} x \cdot x e^{-x^2/2}\,dx.
$$
Integrating by parts with $u = x$ and $dv = x e^{-x^2/2}\,dx$ (so $v = -e^{-x^2/2}$),
$$
= \frac{1}{\sqrt{2\pi}}\left(\left[-x e^{-x^2/2}\right]_{-\infty}^{\infty} + \int_{-\infty}^{\infty} e^{-x^2/2}\,dx\right) = \frac{1}{\sqrt{2\pi}}\left(0 + \int_{-\infty}^{\infty} e^{-x^2/2}\,dx\right) = 1.
$$
By example 2.1.7,
$$
f_Y(y) = \frac{1}{2\sqrt{2\pi y}}\left(e^{-y/2} + e^{-y/2}\right) = \frac{1}{\sqrt{2\pi y}}\,e^{-y/2},
$$
and since $0 < y < \infty$,
$$
E(Y) = \int_0^{\infty} y f_Y(y)\,dy = \frac{1}{\sqrt{2\pi}}\int_0^{\infty} \sqrt{y}\,e^{-y/2}\,dy = \frac{1}{\sqrt{2\pi}}\int_0^{\infty} 2\left(\sqrt{y}\right)^2 e^{-(\sqrt{y})^2/2}\,d\!\left(\sqrt{y}\right) = \frac{2}{\sqrt{2\pi}}\cdot\frac{\sqrt{2\pi}}{2} = 1,
$$
the last integral being $\sqrt{2\pi}/2$, half of $\int_{-\infty}^{\infty} u^2 e^{-u^2/2}\,du = \sqrt{2\pi}$ from part (a).
(b) The support of $Y$ is $0 < y < \infty$. If $-\infty < x < 0$ then $Y = -X$ and $g^{-1}(Y) = -Y$, else if $0 \le x < \infty$ then $Y = X$ and $g^{-1}(Y) = Y$. Then
$$
f_Y(y) = \frac{1}{\sqrt{2\pi}}\left(e^{-(-y)^2/2}\,|-1| + e^{-y^2/2}\,|1|\right) = \frac{2 e^{-y^2/2}}{\sqrt{2\pi}}
$$
and therefore
$$
E(Y) = \frac{2}{\sqrt{2\pi}}\int_0^{\infty} y e^{-y^2/2}\,dy = \frac{2}{\sqrt{2\pi}}\int_0^{\infty} e^{-\left(y^2/2\right)}\,d\!\left(y^2/2\right) = \frac{2}{\sqrt{2\pi}} = \sqrt{\frac{2}{\pi}}.
$$
(c) $\mathrm{Var}(Y) = E\left[Y^2\right] - \left(E(Y)\right)^2$, and
$$
E\left[Y^2\right] = \frac{2}{\sqrt{2\pi}}\int_0^{\infty} y^2 e^{-y^2/2}\,dy = 2\cdot\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \frac{y^2 e^{-y^2/2}}{2}\,dy = 1 \text{ by part (a).}
$$
Hence $\mathrm{Var}(Y) = 1 - \frac{2}{\pi}$.
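Parts (b) and (c) are easy to sanity-check by simulation (a sketch, not part of the original solution):

```python
import math
import random

random.seed(1)

# For X standard normal and Y = |X|: E(Y) = sqrt(2/pi), Var(Y) = 1 - 2/pi.
n = 200_000
ys = [abs(random.gauss(0, 1)) for _ in range(n)]
mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / n
```

With this sample size both estimates land within about 0.005 of the analytic values.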
2.12 $Y = g(X) = d\tan(X)$, with $X$ uniform on $(0, \pi/2)$. $g(X)$ is increasing for $0 < x < \pi/2$ and $g^{-1}(Y) = \arctan(Y/d)$. Hence
$$
\left|\frac{d}{dy}\arctan(y/d)\right| = \frac{1}{1 + (y/d)^2}\cdot\frac{1}{d}
$$
and therefore
$$
f_Y(y) = \frac{2}{\pi}\cdot\frac{1}{1 + (y/d)^2}\cdot\frac{1}{d}
$$
with support $y \in (0, \infty)$. This is (a folded, scaled) Cauchy distribution, hence $E(Y) = \infty$.
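The divergence of $E(Y)$ can be made concrete: the truncated mean $\int_0^M y f_Y(y)\,dy = \frac{d}{\pi}\log\!\left(1 + (M/d)^2\right)$ grows without bound. A midpoint-rule sketch with the arbitrary choice $d = 1$:

```python
import math

d = 1.0  # arbitrary choice of the distance parameter from the exercise

def truncated_mean(M, n=200_000):
    # Midpoint rule for ∫_0^M y f_Y(y) dy with f_Y(y) = (2/π) / (d(1+(y/d)²)).
    h = M / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        total += (2 / math.pi) * y / (1 + (y / d) ** 2) / d * h
    return total

m1 = truncated_mean(100.0)
m2 = truncated_mean(10_000.0)
```

`m1` matches the closed form $(1/\pi)\log(1 + M^2)$, and each factor-of-100 increase in the cutoff adds roughly the same amount again, so the mean never converges.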
2.13 The probability that there is a run of exactly $k$ heads (the first flip lands heads, followed by flips until the first tail) is geometric: $P_H(X = k) = p^k(1-p)$, restricted to $k = 1, 2, 3, \ldots$; likewise the probability that there is a run of exactly $k$ tails is $P_T(X = k) = (1-p)^k p$, also restricted to $k = 1, 2, 3, \ldots$. Therefore the probability that there is either a run of $k$ heads or of $k$ tails is
$$
P_{HT}(X = k) = P_H + P_T = p^k(1-p) + (1-p)^k p
$$
and
$$
E(X) = \sum_{k=1}^{\infty} k\left(p^k(1-p) + (1-p)^k p\right) = \sum_{k=1}^{\infty} k p^k(1-p) + \sum_{k=1}^{\infty} k(1-p)^k p = \frac{p}{(1-p)^2}\,(1-p) + \frac{1-p}{p^2}\,p = \frac{1}{p} + \frac{1}{1-p} - 2.
$$
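A simulation sketch of the same quantity: flip a biased coin (the value of $p$ below is an arbitrary choice), record the length of the opening run, and compare the average with $1/p + 1/(1-p) - 2$:

```python
import random

random.seed(2)

p = 0.3  # arbitrary bias for the check

def first_run_length():
    first = random.random() < p            # outcome of the first flip
    length = 1
    while (random.random() < p) == first:  # keep flipping while it matches
        length += 1
    return length

n = 200_000
mean = sum(first_run_length() for _ in range(n)) / n
expected = 1 / p + 1 / (1 - p) - 2
```

For $p = 0.3$ the expected run length is about 2.76, and the simulated mean agrees to a couple of decimal places.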
2.14 (a) Let $u = F_X(x)$; since $F_X$ is strictly monotonic this substitution is invertible, and
$$
E(X) = \int_0^{\infty} x f_X(x)\,dx = \int_0^1 F_X^{-1}(u)\,du = \int_0^{\infty}\left(1 - F_X(x)\right)dx.
$$

(b) First note that $x = \sum_{k=1}^{x} 1$. Then
$$
E(X) = \sum_{x=0}^{\infty} x f_X(x) = \sum_{x=1}^{\infty} x f_X(x) = \sum_{x=1}^{\infty}\sum_{k=1}^{x} f_X(x) = \sum_{k=1}^{\infty}\sum_{x=k}^{\infty} f_X(x),
$$
interchanging the order of summation over the region $1 \le k \le x < \infty$,
$$
= \sum_{k=1}^{\infty}\left(1 - F_X(k-1)\right) = \sum_{k=0}^{\infty}\left(1 - F_X(k)\right).
$$
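The tail-sum identity in (b) is easy to check for a concrete distribution. For a geometric on $\{1, 2, \ldots\}$ with success probability $p$ (an arbitrary choice), $P(X > k) = 1 - F_X(k) = (1-p)^k$ and $E(X) = 1/p$:

```python
p = 0.25  # arbitrary success probability for the check

# Σ_{k=0}^∞ (1 - F_X(k)) = Σ_{k=0}^∞ (1-p)^k, truncated far into the tail.
tail_sum = sum((1 - p) ** k for k in range(2000))
```

The truncated sum equals the geometric mean $1/p = 4$ to machine precision.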
2.17 $m$ is such that $m = F_X^{-1}(1/2)$.

(a) $3\int_0^m x^2\,dx = m^3$, and therefore $m^3 = 1/2$, i.e. $m = \sqrt[3]{1/2}$.

(b)
$$
\frac{1}{2} = \frac{1}{\pi}\int_{-\infty}^{m}\frac{1}{1 + x^2}\,dx = \frac{1}{\pi}\left(\arctan(m) - \arctan(-\infty)\right) = \frac{1}{\pi}\left(\arctan(m) + \frac{\pi}{2}\right).
$$
Therefore $m = \tan(0) = 0$.
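Both medians can be recovered numerically by solving $F(m) = 1/2$ with bisection, a sketch of which is:

```python
import math

def median_by_bisection(F, lo, hi, tol=1e-12):
    # F must be nondecreasing on [lo, hi] with F(lo) < 1/2 < F(hi).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if F(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

m_a = median_by_bisection(lambda x: x ** 3, 0.0, 1.0)                          # (a): F(x) = x^3
m_b = median_by_bisection(lambda x: math.atan(x) / math.pi + 0.5, -10.0, 10.0)  # (b): Cauchy CDF
```

`m_a` converges to $(1/2)^{1/3} \approx 0.7937$ and `m_b` to 0.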
2.20 The number of children is geometrically distributed, $P(X = k) = (1-p)^{k-1}p$ (number of trials until the first success, including that success), with $p = 1/2$. The mean is $1/p = 2$. Therefore the couple, on average, should expect to have two children.
2.22 (a) Note that $\int_0^{\infty} e^{-\lambda x^2}\,dx = \frac{1}{2}\sqrt{\frac{\pi}{\lambda}}$. Let $\lambda = 1/\sigma^2$ under the integral; then
$$
\int_0^{\infty} f_X(x)\,dx = \frac{4}{\sigma^3\sqrt{\pi}}\int_0^{\infty} x^2 e^{-x^2/\sigma^2}\,dx = -\frac{4}{\sigma^3\sqrt{\pi}}\,\frac{d}{d\lambda}\int_0^{\infty} e^{-\lambda x^2}\,dx = -\frac{4}{\sigma^3\sqrt{\pi}}\cdot\frac{\sqrt{\pi}}{2}\,\frac{d}{d\lambda}\,\lambda^{-1/2} = \frac{2}{\sigma^3}\cdot\frac{1}{2}\,\lambda^{-3/2} = \frac{1}{\sigma^3}\left(\frac{1}{\sigma^2}\right)^{-3/2} = \frac{1}{\sigma^3}\,\sigma^3 = 1.
$$
(b) Note that $\int_0^{\infty} x e^{-\lambda x^2}\,dx = \frac{1}{2\lambda}$ ($u$-substitution). Let $\lambda = 1/\sigma^2$ under the integral; then
$$
E(X) = \frac{4}{\sigma^3\sqrt{\pi}}\int_0^{\infty} x^3 e^{-x^2/\sigma^2}\,dx = -\frac{4}{\sigma^3\sqrt{\pi}}\,\frac{d}{d\lambda}\int_0^{\infty} x e^{-\lambda x^2}\,dx = -\frac{4}{\sigma^3\sqrt{\pi}}\,\frac{d}{d\lambda}\,\frac{1}{2\lambda} = \frac{4}{\sigma^3\sqrt{\pi}}\cdot\frac{1}{2\lambda^2} = \frac{4}{\sigma^3\sqrt{\pi}}\cdot\frac{\sigma^4}{2} = \frac{2\sigma}{\sqrt{\pi}}.
$$
Also
$$
E\left[X^2\right] = \frac{4}{\sigma^3\sqrt{\pi}}\int_0^{\infty} x^4 e^{-x^2/\sigma^2}\,dx = \frac{4}{\sigma^3\sqrt{\pi}}\,\frac{d^2}{d\lambda^2}\int_0^{\infty} e^{-\lambda x^2}\,dx = \frac{4}{\sigma^3\sqrt{\pi}}\cdot\frac{\sqrt{\pi}}{2}\,\frac{d^2}{d\lambda^2}\,\lambda^{-1/2} = \frac{2}{\sigma^3}\cdot\frac{3}{4}\,\lambda^{-5/2} = \frac{3}{2\sigma^3}\,\sigma^5 = \frac{3\sigma^2}{2}.
$$
Hence
$$
\mathrm{Var}(X) = \frac{3\sigma^2}{2} - \left(\frac{2\sigma}{\sqrt{\pi}}\right)^2 = \sigma^2\left(\frac{3}{2} - \frac{4}{\pi}\right).
$$
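All three results (normalization, mean, second moment) can be checked by direct quadrature; a midpoint-rule sketch with the arbitrary choice $\sigma = 1$:

```python
import math

sigma = 1.0  # arbitrary choice for the check

def f(x):
    # f_X(x) = 4 x^2 exp(-x^2/σ^2) / (σ^3 sqrt(π)), 0 < x < ∞
    return 4 * x ** 2 * math.exp(-x ** 2 / sigma ** 2) / (sigma ** 3 * math.sqrt(math.pi))

n, M = 100_000, 10.0  # the tail beyond M contributes on the order of e^{-100}
h = M / n
xs = [(i + 0.5) * h for i in range(n)]
total = sum(f(x) * h for x in xs)
mean = sum(x * f(x) * h for x in xs)
second = sum(x * x * f(x) * h for x in xs)
```

The three sums agree with $1$, $2\sigma/\sqrt{\pi}$, and $3\sigma^2/2$ to high precision.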
2.23 (a)
$$
f_Y(y) = \frac{1}{2\sqrt{y}}\left(f_X\!\left(\sqrt{y}\right) + f_X\!\left(-\sqrt{y}\right)\right) = \frac{1}{4\sqrt{y}}\left(1 + \sqrt{y} + 1 - \sqrt{y}\right) = \frac{1}{2\sqrt{y}}.
$$

(b) $E(Y) = \frac{1}{2}\int_0^1 \frac{y}{\sqrt{y}}\,dy = \frac{1}{2}\int_0^1 \sqrt{y}\,dy = \frac{1}{3}$ and $E\left[Y^2\right] = \frac{1}{2}\int_0^1 y^{3/2}\,dy = \frac{1}{5}$, therefore $\mathrm{Var}(Y) = \frac{1}{5} - \frac{1}{9} = \frac{4}{45}$.
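A simulation sketch of (b), assuming (as the factors above imply) that $f_X(x) = (1+x)/2$ on $(-1,1)$: sampling $X$ via the inverse CDF $x = 2\sqrt{u} - 1$ and squaring should give $E(Y) = 1/3$ and $\mathrm{Var}(Y) = 4/45$:

```python
import random

random.seed(3)

# F_X(x) = (x+1)^2/4 on (-1, 1), so F_X^{-1}(u) = 2*sqrt(u) - 1; Y = X^2.
n = 200_000
ys = [(2 * random.random() ** 0.5 - 1) ** 2 for _ in range(n)]
mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / n
```

The sample mean and variance match $1/3 \approx 0.333$ and $4/45 \approx 0.0889$ closely.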
2.24 (a) $E(X) = a\int_0^1 x^a\,dx = \frac{a}{a+1}$ and $E\left[X^2\right] = a\int_0^1 x^{a+1}\,dx = \frac{a}{a+2}$, therefore
$$
\mathrm{Var}(X) = E\left[X^2\right] - (E(X))^2 = \frac{a}{a+2} - \frac{a^2}{a^2 + 2a + 1} = \frac{a}{(a+1)^2(a+2)}.
$$
(b) $E(X) = \frac{1}{n}\sum_{k=1}^{n} k = \frac{1}{n}\cdot\frac{n(n+1)}{2} = \frac{1}{2}(n+1)$ and $E\left[X^2\right] = \frac{1}{n}\sum_{k=1}^{n} k^2$, but
$$
k^3 - (k-1)^3 = k^3 - \left(k^3 - 3k^2 + 3k - 1\right) = 3k^2 - 3k + 1
$$
and therefore
$$
\frac{1}{n}\sum_{k=1}^{n} k^2 = \frac{1}{3n}\sum_{k=1}^{n}\left(k^3 - (k-1)^3 + 3k - 1\right) = \frac{1}{3n}\left(\sum_{k=1}^{n}\left(k^3 - (k-1)^3\right) + \sum_{k=1}^{n}(3k - 1)\right) = \frac{1}{3n}\left(n^3 + \frac{3}{2}n(n+1) - n\right) = \frac{1}{6}(2n+1)(n+1).
$$
Hence
$$
\mathrm{Var}(X) = \frac{1}{6}(2n+1)(n+1) - \frac{1}{4}(n+1)(n+1) = \frac{1}{2}(n+1)\left(\frac{1}{3}(2n+1) - \frac{1}{2}(n+1)\right) = \frac{1}{12}(n+1)(n-1) = \frac{n^2 - 1}{12}.
$$
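The discrete-uniform moments in (b) can be confirmed exactly for any particular $n$ (the value below is an arbitrary choice):

```python
# Discrete uniform on {1, ..., n}: mean (n+1)/2, variance (n^2 - 1)/12.
n = 25  # arbitrary choice for the check
vals = list(range(1, n + 1))
mean = sum(vals) / n
var = sum((k - mean) ** 2 for k in vals) / n
```

For $n = 25$ this gives mean 13 and variance 52, matching both closed forms.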
(c) TODO
2.25 (a) Let $Y = g(X) = -X$; then $g^{-1}(Y) = -Y$ and $\left|\frac{d}{dy}(-y)\right| = 1$. Hence $Y \sim f_X(-y)\cdot 1 = f_X(y)$ by the symmetry of $f_X$ about $0$.

(b)
$$
E\left[e^{tX}\right] = \int_{-\infty}^{\infty} e^{tx} f_X(x)\,dx = \int_{-\infty}^{\infty} e^{-t(-x)} f_X(x)\,dx.
$$
Let $Y = -X$. Then by part (a)
$$
\int_{-\infty}^{\infty} e^{-t(-x)} f_X(x)\,dx = \int_{-\infty}^{\infty} e^{-ty} f_X(-y)\,dy = \int_{-\infty}^{\infty} e^{-ty} f_X(y)\,dy = \int_{-\infty}^{\infty} e^{-tx} f_X(x)\,dx = E\left[e^{-tX}\right].
$$
Hence $E\left[e^{tX}\right] = E\left[e^{-tX}\right]$.
2.26 (a) $N(0,1)$, the standard Cauchy, Student's $t$.

(b)
$$
1 = \lim_{a\to\infty}\int_{-a}^{a+\epsilon} f_X(x)\,dx = \lim_{a\to\infty}\left(\int_{-a}^{a} f_X(x)\,dx + \int_{a}^{a+\epsilon} f_X(x)\,dx\right) = \lim_{a\to\infty}\left(\int_{-a}^{a} f_X(x)\,dx + \int_{a}^{a+\epsilon} f_X\!\left((x-\epsilon)+\epsilon\right)d(x-\epsilon)\right)
$$
3.2 (a) The probability that 0 items in $k$ draws are defective, if 6 are defective in 100, is
$$
P(X = 0) = \frac{\binom{6}{0}\binom{94}{k}}{\binom{100}{k}} = \frac{\frac{94!}{k!(94-k)!}}{\frac{100!}{k!(100-k)!}} = \frac{(100-k)(99-k)(98-k)(97-k)(96-k)(95-k)}{100\cdot 99\cdot 98\cdot 97\cdot 96\cdot 95} \le .10.
$$
Solving $P(X = 0) \le .10$ numerically yields $k \ge 32$. So to detect 6 defectives in a batch of 100 you need at least 32 draws; as the number of defectives goes up, the required number of draws decreases, hence 32 draws suffice.

(b)
$$
P(X = 0) = \frac{\binom{1}{0}\binom{99}{k}}{\binom{100}{k}} = \frac{\frac{99!}{k!(99-k)!}}{\frac{100!}{k!(100-k)!}} = \frac{100-k}{100} \le .10.
$$
Therefore $k \ge 90$.
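The numerical solution for both parts can be sketched directly from the hypergeometric expression:

```python
from math import comb

def p_zero(defectives, k, total=100):
    # P(X = 0): none of the k draws hits a defective item.
    return comb(total - defectives, k) / comb(total, k)

# Smallest k with P(X = 0) <= .10 for 6 defectives and for 1 defective.
k_six = min(k for k in range(101) if p_zero(6, k) <= 0.10)
k_one = min(k for k in range(101) if p_zero(1, k) <= 0.10)
```

This confirms $k = 32$ for part (a) and $k = 90$ for part (b).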
3.4 (a) The number of tries until success (finding the right key) is geometrically distributed with success probability $1/n$ and failure probability $(n-1)/n$. Therefore the mean number of trials is $\frac{1}{1/n} = n$.

(b) There are $n!$ permutations of the keys (assuming they're all distinct) and $n$ different positions in any permutation that the correct key could be in. There are $\binom{n-1}{k-1}(k-1)!$ different arrangements of keys that could precede the correct key and $\binom{n-k}{n-k}(n-k)!$ arrangements that could succeed it. Therefore the probability that the correct key is in the $k$th position is
$$
P(X = k) = \frac{\binom{n-1}{k-1}(k-1)!\,\binom{n-k}{n-k}(n-k)!}{n!} = \frac{1}{n},
$$
and then $E(X) = (n+1)/2$, i.e. in the middle, as you'd expect.
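A simulation sketch of (b): shuffle the keys and record the 1-based position of the correct one; the positions are uniform, so their average approaches $(n+1)/2$:

```python
import random

random.seed(4)

n_keys = 10       # arbitrary number of keys for the check
trials = 100_000
total = 0
for _ in range(trials):
    keys = list(range(n_keys))
    random.shuffle(keys)
    total += keys.index(0) + 1  # 1-based position of the correct key (key 0)
mean_position = total / trials
```

With 10 keys the simulated mean position settles near 5.5.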
3.7 $P(X = k) = e^{-\lambda}\lambda^k/k!$ implies
$$
P(X \ge 2) = e^{-\lambda}\sum_{k=2}^{\infty}\frac{\lambda^k}{k!} = e^{-\lambda}\left(\sum_{k=0}^{\infty}\frac{\lambda^k}{k!} - 1 - \lambda\right) = e^{-\lambda}\left(e^{\lambda} - 1 - \lambda\right) = 1 - e^{-\lambda}(1 + \lambda) \ge .99 \implies \lambda \ge 6.63835.
$$
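The threshold can be recovered by bisection, since $1 - e^{-\lambda}(1+\lambda)$ is increasing in $\lambda$; a sketch:

```python
import math

def p_at_least_two(lam):
    # P(X >= 2) for X ~ Poisson(lam).
    return 1 - math.exp(-lam) * (1 + lam)

lo, hi = 0.0, 20.0
while hi - lo > 1e-9:
    mid = (lo + hi) / 2
    if p_at_least_two(mid) < 0.99:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
```

The bisection converges to $\lambda \approx 6.6385$.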
3.10 (a) The probability of choosing 4 packets of cocaine out of all 496 packets is
$$
\frac{\binom{N}{4}}{\binom{N+M}{4}}.
$$
The probability of then choosing 2 noncocaine packets out of all the rest is
$$
\frac{\binom{M}{2}}{\binom{N+M-4}{2}}.
$$
Therefore the probability of choosing 4 packets of cocaine followed by 2 packets of noncocaine is the product
$$
\frac{\binom{N}{4}}{\binom{N+M}{4}}\cdot\frac{\binom{M}{2}}{\binom{N+M-4}{2}}.
$$

(b) With $N = n$ and $M = m$,
$$
\frac{\binom{n}{4}}{\binom{n+m}{4}}\cdot\frac{\binom{m}{2}}{\binom{n+m-4}{2}} = \frac{m(m-1)\,n(n-1)(n-2)(n-3)}{(m+n)(m+n-1)(m+n-2)(m+n-3)(m+n-4)(m+n-5)}.
$$
3.13 (a) $P(X > 0) = \sum_{k=1}^{\infty} e^{-\lambda}\lambda^k/k! = \sum_{k=0}^{\infty} e^{-\lambda}\lambda^k/k! - e^{-\lambda} = 1 - e^{-\lambda}$, hence
$$
P(X_T = k) = \frac{e^{-\lambda}}{1 - e^{-\lambda}}\,\frac{\lambda^k}{k!}\, I_{\{1,2,\ldots\}}(k).
$$
Then
$$
E(X_T) = \frac{e^{-\lambda}}{1 - e^{-\lambda}}\sum_{k=1}^{\infty} k\,\frac{\lambda^k}{k!} = \frac{e^{-\lambda}}{1 - e^{-\lambda}}\left(\sum_{k=0}^{\infty} k\,\frac{\lambda^k}{k!} - 0\right) = \frac{e^{-\lambda}}{1 - e^{-\lambda}}\,\lambda e^{\lambda} = \frac{\lambda}{1 - e^{-\lambda}}
$$
and, since $e^{-\lambda}\sum_{k=0}^{\infty} k^2\lambda^k/k!$ is exactly the second derivative of the Poisson MGF at $t = 0$,
$$
E(X_T^2) = \frac{e^{-\lambda}}{1 - e^{-\lambda}}\sum_{k=1}^{\infty} k^2\,\frac{\lambda^k}{k!} = \frac{1}{1 - e^{-\lambda}}\left(\frac{\partial^2}{\partial t^2}\, e^{\lambda(e^t - 1)}\Big|_{t=0} - 0\right) = \frac{1}{1 - e^{-\lambda}}\,\frac{\partial}{\partial t}\left(\lambda e^t e^{\lambda(e^t - 1)}\right)\Big|_{t=0} = \frac{1}{1 - e^{-\lambda}}\left(\lambda e^t e^{\lambda(e^t - 1)} + \lambda^2 e^{2t} e^{\lambda(e^t - 1)}\right)\Big|_{t=0} = \frac{\lambda(1 + \lambda)}{1 - e^{-\lambda}}
$$
and finally
$$
\mathrm{Var}(X_T) = E(X_T^2) - (E(X_T))^2 = \frac{\lambda(1+\lambda)}{1 - e^{-\lambda}} - \frac{\lambda^2}{\left(1 - e^{-\lambda}\right)^2} = \frac{\lambda}{1 - e^{-\lambda}}\left(1 + \lambda - \frac{\lambda}{1 - e^{-\lambda}}\right).
$$
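Both truncated-Poisson moments can be checked by summing the pmf directly (a sketch; $\lambda = 2$ is an arbitrary choice, and the tail beyond $k = 100$ is negligible):

```python
import math

lam = 2.0                   # arbitrary rate for the check
Z = 1 - math.exp(-lam)      # P(X > 0)

def pmf(k):
    # Truncated Poisson pmf on {1, 2, ...}.
    return math.exp(-lam) * lam ** k / (math.factorial(k) * Z)

ks = range(1, 100)
mean = sum(k * pmf(k) for k in ks)
second = sum(k * k * pmf(k) for k in ks)
var = second - mean ** 2
```

The sums reproduce $\lambda/(1-e^{-\lambda})$, $\lambda(1+\lambda)/(1-e^{-\lambda})$, and the variance formula above to machine precision.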
(b) $P(X = k) = \binom{k+r-1}{k}(1-p)^k p^r$. Note this definition is the obverse of the book's: $p \leftrightarrow 1-p$.

Firstly
$$
P(X > 0) = \sum_{k=1}^{\infty}\binom{k+r-1}{k}(1-p)^k p^r = \sum_{k=0}^{\infty}\binom{k+r-1}{k}(1-p)^k p^r - \binom{0+r-1}{0}(1-p)^0 p^r = 1 - p^r.
$$
Then $P(X_T = k) = \frac{1}{1-p^r}\binom{k+r-1}{k}(1-p)^k p^r\, I_{\{1,2,\ldots\}}(k)$ and
$$
E(X_T) = \frac{1}{1-p^r}\sum_{k=1}^{\infty} k\binom{k+r-1}{k}(1-p)^k p^r = \frac{1}{1-p^r}\sum_{k=0}^{\infty} k\binom{k+r-1}{k}(1-p)^k p^r = \frac{r(1-p)}{p\left(1-p^r\right)}
$$
and
$$
E(X_T^2) = E(X_T(X_T - 1)) + E(X_T) = \frac{1}{1-p^r}\sum_{k=2}^{\infty} k(k-1)\binom{k+r-1}{k}(1-p)^k p^r + E(X_T) = \frac{1}{1-p^r}\sum_{k=2}^{\infty}\frac{(k+r-1)!}{(k-2)!\,(r-1)!}(1-p)^k p^r + E(X_T).
$$
Shifting the index to $j = k - 2$ and noting $\frac{(j+r+1)!}{j!\,(r-1)!} = r(r+1)\binom{j+(r+2)-1}{j}$,
$$
= \frac{r(r+1)(1-p)^2}{p^2\left(1-p^r\right)}\sum_{j=0}^{\infty}\binom{j+(r+2)-1}{j}(1-p)^j p^{r+2} + E(X_T) = \frac{r(r+1)(1-p)^2}{p^2\left(1-p^r\right)} + \frac{r(1-p)}{p\left(1-p^r\right)}.
$$
Finally
$$
\mathrm{Var}(X_T) = E(X_T^2) - (E(X_T))^2 = \frac{r(r+1)(1-p)^2}{p^2\left(1-p^r\right)} + \frac{r(1-p)}{p\left(1-p^r\right)} - \left(\frac{r(1-p)}{p\left(1-p^r\right)}\right)^2.
$$
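As with the truncated Poisson, these moments can be checked by direct summation in the same parameterization (a sketch; $r = 3$, $p = 0.4$ are arbitrary choices, and the tail beyond $k = 400$ is negligible):

```python
from math import comb

r, p = 3, 0.4        # arbitrary parameters for the check
Z = 1 - p ** r       # P(X > 0)

def pmf(k):
    # Truncated negative binomial pmf on {1, 2, ...}, in the parameterization above.
    return comb(k + r - 1, k) * (1 - p) ** k * p ** r / Z

ks = range(1, 400)
mean = sum(k * pmf(k) for k in ks)
factorial_moment = sum(k * (k - 1) * pmf(k) for k in ks)
```

The sums reproduce $r(1-p)/(p(1-p^r))$ and $r(r+1)(1-p)^2/(p^2(1-p^r))$ to machine precision.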