Here is the underlying probability space on which all the random variables are defined. That is, there is a set $\Omega$, a probability space, and the random variables are real-valued functions defined on this set, $X_n(\omega)$. There is also a family $\mathcal{F}$ of subsets of $\Omega$ that is closed under complements, countable unions and intersections. In addition, the set of all $\omega$ such that $X_n(\omega) \le x$ belongs to $\mathcal{F}$ for all real $x$ (which says $X_n$ is measurable). There is also a probability measure $P$ on $(\Omega, \mathcal{F})$ such that
$$F_{X_n}(x) = P\{\omega : X_n(\omega) \le x\}.$$
$F_n(x) \to F(x)$ for any continuity point $x$ of $F$. Here $F_n(\cdot)$, $1 \le n \le \infty$, is the cdf of $X_n$. In this case, we also say $F_n(\cdot)$ converges to $F(\cdot)$ weakly, denoted as $F_n \xrightarrow{w} F$.
Prove the following statements.
1. Show that for $p \ge 1$, $X_n \xrightarrow{L^p} X$ implies $X_n \xrightarrow{P} X$, and give an example to show that the converse statement doesn't hold.
2. Show that $X_n \xrightarrow{a.s.} X$ implies $X_n \xrightarrow{P} X$. Use the bounded convergence theorem, which says that if $|g(x)|$ is bounded uniformly for all real $x$ and $X_n \xrightarrow{a.s.} X$, then $Eg(X_n) \to Eg(X)$.
1. By Chebyshev's inequality,
$$P(|X_n - X| > \epsilon) = P(|X_n - X|^p > \epsilon^p) \le \frac{E|X_n - X|^p}{\epsilon^p} \to 0 \quad (n \to \infty).$$
For the converse, take $\Omega = (0,1)$ with Lebesgue measure and $X_n = n \cdot 1_{(0,1/n)}$. Then $P(|X_n| > \epsilon) = 1/n \to 0$, but $E|X_n|^p = n^{p-1} \ge 1$, so $X_n \to 0$ in probability but not in $L^p$.
2. Almost sure convergence can be expressed in terms of the set:
$$\{\omega : \lim_{n\to\infty} X_n(\omega) = X(\omega)\} = \bigcap_{\epsilon > 0} \bigcup_{N \ge 1} \bigcap_{n \ge N} \{\omega : |X_n(\omega) - X(\omega)| < \epsilon\}.$$
Therefore if $X_n \xrightarrow{a.s.} X$, then for every $\epsilon > 0$,
$$P\Big(\bigcup_{N \ge 1} \bigcap_{n \ge N} \{\omega : |X_n(\omega) - X(\omega)| < \epsilon\}\Big) = 1.$$
Since the sets $\bigcap_{n \ge N}\{\cdots\}$ increase with $N$, continuity of $P$ gives
$$P(|X_N - X| < \epsilon) \ge P\Big(\bigcap_{n \ge N} \{\omega : |X_n(\omega) - X(\omega)| < \epsilon\}\Big) \to 1 \quad (N \to \infty),$$
that is, $X_n \xrightarrow{P} X$.
4. Denote $F_n$ as the cdf for $X_n$, $1 \le n \le \infty$. Then for any continuity point $x$ of $F(x)$, we have
$$F_n(x) = P(X_n \le x) \le P(X \le x + \epsilon) + P(|X_n - X| \ge \epsilon).$$
Let $n \to \infty$ and then let $\epsilon \to 0$; we have $\limsup_n F_n(x) \le F(x)$. Similarly,
$$F_n(x) = P(X_n \le x) \ge P(X \le x - \epsilon) - P(|X_n - X| \ge \epsilon).$$
Let $n \to \infty$ and then let $\epsilon \to 0$; we have $\liminf_n F_n(x) \ge F(x)$. Hence $\lim_n F_n(x)$ exists and equals $F(x)$.
$$\Big|\int_{-\infty}^{\infty} g(x)\,dF_n(x) - \int_{-\infty}^{\infty} g(x)\,dF(x)\Big| \le \Big|\int_{-\infty}^{x_0} g(x)\,dF_n(x) - \int_{-\infty}^{x_0} g(x)\,dF(x)\Big| \qquad (2)$$
$$+ \Big|\int_{x_0}^{x_N} g(x)\,dF_n(x) - \int_{x_0}^{x_N} g(x)\,dF(x)\Big| \qquad (3)$$
$$+ \Big|\int_{x_N}^{\infty} g(x)\,dF_n(x) - \int_{x_N}^{\infty} g(x)\,dF(x)\Big| \qquad (4)$$
The first and third terms on the right are less than a constant times $\epsilon$ as $n \to \infty$ because of (a), (b) and the fact that $g$ is bounded. In the middle term, we add and subtract $g_N$ to get
$$\Big|\int_{x_0}^{x_N} g(x)\,dF_n(x) - \int_{x_0}^{x_N} g(x)\,dF(x)\Big| \qquad (5)$$
$$\le \Big|\int_{x_0}^{x_N} g(x)\,dF_n(x) - \int_{x_0}^{x_N} g_N(x)\,dF_n(x)\Big| \qquad (6)$$
$$+ \Big|\int_{x_0}^{x_N} g_N(x)\,dF_n(x) - \int_{x_0}^{x_N} g_N(x)\,dF(x)\Big| \qquad (7)$$
$$+ \Big|\int_{x_0}^{x_N} g_N(x)\,dF(x) - \int_{x_0}^{x_N} g(x)\,dF(x)\Big| \qquad (8)$$
The first and third terms on the right side are less than a constant times $\epsilon$ because of (c). The middle term is now a finite sum of differences $F_n - F$ at continuity points of $F$, so it goes to zero as $n \to \infty$ by the hypothesis.
7. For any continuity point $x$ of $F_X(x)$ and $y < c$, we have
$$P(X_n \le x, Y_n \le y) \le P(Y_n \le y) \to 0 = P(X \le x, Y \le y).$$
For any continuity point $x$ of $F_X(x)$ and $y > c$, we have
$$P(X_n \le x, Y_n \le y) = P(X_n \le x) - P(X_n \le x, Y_n > y).$$
As $P(X_n \le x, Y_n > y) \le P(Y_n > y) \to 0$, we obtain that $P(X_n \le x, Y_n \le y) \to F_X(x) = P(X \le x, Y \le y)$. This completes the proof.
$$E(X) = \sum_{n} E(X \mid \Omega_n) P(\Omega_n)$$
where $\{\Omega_n : n \ge 1\}$ is a partition of the probability space $\Omega$, that is, $\Omega$ is the union of the disjoint sets $\{\Omega_n : n \ge 1\}$.
Condition on the first outcome:
$$EN = \tfrac{1}{2}\big(E(N|H) + E(N|T)\big) = \tfrac{1}{2}\big(E(N|H) + 1 + EN\big)$$
$$E(N|H) = \tfrac{1}{2}\big(E(N|HH) + E(N|HT)\big) = \tfrac{1}{2}\big(E(N|H) + 1 + E(N|HT)\big)$$
$$E(N|HT) = \tfrac{1}{2}\big(E(N|HTH) + E(N|HTT)\big) = \tfrac{1}{2}\big(3 + 3 + EN\big)$$
Solving this system gives $EN = 10$.
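The conditioning argument can be sanity-checked by simulation. Below is a minimal sketch (the helper name `tosses_until_hth` and the trial count are my own choices) that estimates the expected number of fair-coin flips until the pattern HTH first appears:

```python
import random

def tosses_until_hth(rng):
    """Flip a fair coin until the pattern HTH first appears; return the flip count."""
    last3, n = "", 0
    while last3 != "HTH":
        last3 = (last3 + rng.choice("HT"))[-3:]  # keep only the last three outcomes
        n += 1
    return n

rng = random.Random(0)
trials = 100_000
avg = sum(tosses_until_hth(rng) for _ in range(trials)) / trials
print(round(avg, 1))  # should be close to the derived value E[N] = 10
```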
$$E[X] = \int_0^\infty P(X > t)\,dt - \int_{-\infty}^0 P(X < t)\,dt$$
Solution:
We have, for $X \ge 0$,
$$EX = \int_0^\infty x\,dF(x) = \int_0^\infty \int_0^x dy\,dF(x) = \int_0^\infty \int_y^\infty dF(x)\,dy = \int_0^\infty P(X > y)\,dy.$$
For general $X$, let $X = X^+ - X^-$ where $X^+$ is the positive part of $X$ and $X^-$ is the negative part of $X$, i.e.,
$$X^+ = \max(X, 0), \qquad X^- = \max(-X, 0).$$
Then
$$EX = EX^+ - EX^- = \int_0^\infty P(X^+ > t)\,dt - \int_0^\infty P(X^- > t)\,dt = \int_0^\infty P(X > t)\,dt - \int_{-\infty}^0 P(X < t)\,dt.$$
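The tail formula can be verified numerically. The sketch below assumes an illustrative choice $X \sim \mathrm{Uniform}(-1, 2)$, for which $E[X] = 1/2$, $P(X > t) = (2-t)/3$ on $[0, 2]$ and $P(X < t) = (t+1)/3$ on $[-1, 0]$:

```python
# Riemann-sum check of E[X] = ∫_0^∞ P(X>t) dt − ∫_{-∞}^0 P(X<t) dt
# for X ~ Uniform(-1, 2), an assumed example distribution.
dt = 1e-4
pos = sum((2 - k * dt) / 3 * dt for k in range(20_000))  # ∫_0^2 P(X > t) dt ≈ 2/3
neg = sum((k * dt) / 3 * dt for k in range(10_000))      # ∫_{-1}^0 P(X < t) dt ≈ 1/6
print(round(pos - neg, 3))  # → 0.5 = E[X]
```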
$$f(x_1, x_2) = \frac{1}{2\pi\sigma_1\sigma_2} \exp\left(-\frac{(x_1 - \mu_1)^2}{2\sigma_1^2} - \frac{(x_2 - \mu_2)^2}{2\sigma_2^2}\right).$$
$$EY_1 = \int_0^1 \sin(2\pi y)\,dy = 0, \qquad EY_2 = \int_0^1 \cos(2\pi y)\,dy = 0.$$
Moreover
$$EY_1 Y_2 = \int_0^1 \sin(2\pi y)\cos(2\pi y)\,dy = \frac{1}{2}\int_0^1 \sin(4\pi y)\,dy = 0.$$
$$g_2(u_1, u_2) = \sin(2\pi u_1)\sqrt{-2\log u_2}.$$
Then
$$f_{X_1,X_2}(x_1, x_2) = f_{U_1,U_2}(u_1, u_2)\,|\det J|$$
where
$$J = \begin{pmatrix} \dfrac{\partial g_1^{-1}}{\partial x_1} & \dfrac{\partial g_2^{-1}}{\partial x_1} \\[6pt] \dfrac{\partial g_1^{-1}}{\partial x_2} & \dfrac{\partial g_2^{-1}}{\partial x_2} \end{pmatrix}.$$
$$g_2^{-1}(x_1, x_2) = \frac{1}{2\pi} \arctan\frac{x_2}{x_1}.$$
Hence $u_1, u_2 \in [0, 1]$. This gives $f_{U_1,U_2}(u_1, u_2) = 1$. On the other hand, it is easy to check that
$$|\det J| = \frac{1}{2\pi} \exp\left(-\frac{1}{2}x_1^2 - \frac{1}{2}x_2^2\right).$$
Hence
$$f_{X_1,X_2}(x_1, x_2) = \frac{1}{2\pi} \exp\left(-\frac{1}{2}x_1^2 - \frac{1}{2}x_2^2\right).$$
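The transform above is the Box-Muller method, and its conclusion is easy to check empirically. A minimal sketch (function name and sample size are my own choices; I use $1 - u$ to keep the logarithm finite):

```python
import math
import random

def box_muller(rng):
    """Map two independent Uniform(0,1) draws to two independent N(0,1) draws."""
    u1 = rng.random()
    u2 = 1.0 - rng.random()            # lies in (0, 1], so log(u2) is finite
    r = math.sqrt(-2.0 * math.log(u2))
    return r * math.cos(2 * math.pi * u1), r * math.sin(2 * math.pi * u1)

rng = random.Random(0)
samples = [x for _ in range(100_000) for x in box_muller(rng)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean ** 2
print(round(mean, 2), round(var, 2))  # sample moments match N(0, 1)
```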
Show that
$$\frac{T_n}{n \log n} \to 1$$
in $L^2$.
Solution:
$$E T_n = \sum_{i=1}^n E\tau_i = \sum_{i=1}^n \frac{n}{n+1-i} = n\sum_{k=1}^n \frac{1}{k}.$$
Moreover,
$$\mathrm{Var}(T_n) = \sum_{k=1}^n \mathrm{Var}(\tau_k) \le \sum_{k=1}^n \frac{n^2}{(n+1-k)^2} = n^2 \sum_{k=1}^n \frac{1}{k^2} \le C n^2,$$
so
$$\mathrm{Var}\Big(\frac{T_n}{n\log n}\Big) \le \frac{Cn^2}{(n\log n)^2} = \frac{C}{\log^2 n} \to 0.$$
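This is the coupon-collector scaling, and it can be illustrated by simulation. A sketch (helper name, $n$, and trial count are my own choices; the ratio tends to 1 only slowly, since $ET_n / (n\log n) = H_n/\log n \approx 1 + \gamma/\log n$):

```python
import math
import random

def coupon_collector(n, rng):
    """Number of uniform draws from {0,...,n-1} needed to see every value."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(0)
n, trials = 2000, 100
avg = sum(coupon_collector(n, rng) for _ in range(trials)) / trials
ratio = avg / (n * math.log(n))
print(round(ratio, 3))  # slowly approaches 1 as n grows
```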
1. Define $X_i$ to be the indicator function of whether the $i$-th box is empty. That is,
$$X_i = \begin{cases} 1 & \text{if the } i\text{-th box is empty} \\ 0 & \text{otherwise} \end{cases}$$
Then $N_n = \sum_{i=1}^n X_i$.
2. We have
$$\mathrm{Var}(N_n) = EN_n^2 - (EN_n)^2 = \sum_{i=1}^n EX_i + \sum_{i \ne j} EX_i X_j - \Big(\sum_{i=1}^n EX_i\Big)^2$$
$$= n\Big(1 - \frac{1}{n}\Big)^r + n(n-1)\Big(1 - \frac{2}{n}\Big)^r - n^2\Big(1 - \frac{1}{n}\Big)^{2r}.$$
3. Now, with $r/n \to c$,
$$E\Big(\frac{N_n}{n} - e^{-c}\Big)^2 = \frac{1}{n^2}EN_n^2 - \frac{2e^{-c}}{n}EN_n + e^{-2c}$$
$$= \frac{1}{n}\Big(1 - \frac{1}{n}\Big)^r + \frac{n-1}{n}\Big(1 - \frac{2}{n}\Big)^r - 2e^{-c}\Big(1 - \frac{1}{n}\Big)^r + e^{-2c}$$
$$\to e^{-2c} - 2e^{-2c} + e^{-2c} = 0.$$
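The limit $N_n/n \to e^{-c}$ is easy to observe numerically. A sketch (helper name, $n$, and $c$ are my own choices) that throws $r = cn$ balls into $n$ boxes:

```python
import math
import random

def empty_fraction(n, r, rng):
    """Throw r balls into n boxes uniformly at random; return the fraction of empty boxes."""
    occupied = {rng.randrange(n) for _ in range(r)}
    return (n - len(occupied)) / n

rng = random.Random(0)
n, c = 100_000, 1.5
frac = empty_fraction(n, int(c * n), rng)
print(round(frac, 3), round(math.exp(-c), 3))  # the two values nearly agree
```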
$$f(x) = \frac{1}{\pi(1 + x^2)} \quad \text{for } x \in \mathbb{R}.$$
$$E(e^{itX}) = \int_{-\infty}^{\infty} f(x)e^{itx}\,dx = \int_{-\infty}^{\infty} \frac{e^{itx}}{\pi(1 + x^2)}\,dx.$$
This integration can be calculated using complex-integration techniques. Let us assume $t > 0$ without loss of generality. The function $\frac{1}{1+z^2}$ has singularities at $z = i$ and $z = -i$. When $R > 1$, the singular point $z = i$ lies in the interior of the semicircular region bounded by the segment $z = x$ ($-R \le x \le R$) of the real axis and the upper half $C_R$ of the circle $|z| = R$ from $z = R$ to $z = -R$. Integrating counterclockwise around the boundary $C$ of this semicircular region, we see that
$$\int_{-R}^{R} \frac{e^{itx}}{\pi(1+x^2)}\,dx + \int_{C_R} \frac{e^{itz}}{\pi(1+z^2)}\,dz = \oint_C \frac{e^{itz}}{\pi(1+z^2)}\,dz$$
$$= 2\pi i\,\mathrm{Res}_{z=i}\, \frac{e^{itz}}{\pi(z+i)(z-i)} = 2\pi i \cdot \frac{e^{it \cdot i}}{\pi \cdot 2i} = e^{-t}.$$
Also, it can be shown that the integral $\int_{C_R} \frac{e^{itz}}{\pi(1+z^2)}\,dz$ tends to $0$ as $R \to \infty$. To do this, we observe that when $|z| = R$, $|z^2 + 1| \ge R^2 - 1$, and $|e^{itz}| \le 1$. So if $z$ is any point on $C_R$,
$$\Big|\frac{e^{itz}}{\pi(1+z^2)}\Big| \le M_R, \quad \text{where } M_R = \frac{1}{\pi(R^2 - 1)},$$
and this means that
$$\Big|\int_{C_R} \frac{e^{itz}}{\pi(1+z^2)}\,dz\Big| \le M_R \cdot \pi R \to 0 \quad \text{as } R \to \infty,$$
where $\pi R$ is the length of the semicircle $C_R$. Thus, it now follows that
$$\lim_{R\to\infty} \int_{-R}^{R} \frac{e^{itx}}{\pi(1+x^2)}\,dx = e^{-t}.$$
Similarly, we can handle $t < 0$, and it is easy to check that the characteristic function is
$$\varphi_X(t) = \int_{-\infty}^{\infty} \frac{e^{itx}}{\pi(1+x^2)}\,dx = e^{-|t|}.$$
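The closed form $\varphi_X(t) = e^{-|t|}$ can be checked by Monte Carlo, since $e^{itX}$ is bounded even though $X$ has no mean. A sketch (sample size and $t$ values are my own choices; the standard Cauchy is sampled through its inverse cdf $\tan(\pi(u - 1/2))$):

```python
import cmath
import math
import random

rng = random.Random(0)
n = 100_000
xs = [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]  # standard Cauchy draws

results = []
for t in (0.5, 1.0, 2.0):
    phi = sum(cmath.exp(1j * t * x) for x in xs) / n  # Monte Carlo estimate of E[e^{itX}]
    results.append((t, phi.real))
    print(t, round(phi.real, 3), round(math.exp(-t), 3))  # estimate vs e^{-|t|}
```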
Inverting the characteristic function $e^{-n|t|}$ of $X_1 + \cdots + X_n$ gives the density
$$\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-n|t|} e^{-itx}\,dt = \frac{1}{2\pi}\Big(\frac{1}{n - ix} + \frac{1}{n + ix}\Big) = \frac{n}{\pi(n^2 + x^2)}.$$
The probability density of the linearly transformed variable gives the same result: $f_Y(y) = \frac{1}{|a|} f_X\big(\frac{y}{a}\big)$, where $Y = aX$.
4. Suppose $X_n$ is a sequence of independent Cauchy random variables. Show that there is no number $\mu$ such that we have convergence in probability of
$$S_n = (X_1 + \cdots + X_n)/n.$$
Solution:
The density function of $S_n$ does not change at all as $n$ increases.
5. Why is this not a contradiction to the weak law of large numbers?
Solution:
$E(|X_1|)$ does not exist.
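A small experiment makes the failure of the law of large numbers visible. This is illustrative only (seed and checkpoints are arbitrary choices of mine):

```python
import math
import random

rng = random.Random(1)
total, running = 0.0, []
for n in range(1, 100_001):
    total += math.tan(math.pi * (rng.random() - 0.5))  # one standard Cauchy draw
    if n % 20_000 == 0:
        running.append(round(total / n, 2))
# S_n/n is itself standard Cauchy for every n, so there is no tendency to settle.
print(running)
```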
Using Stirling's formula $n! \approx n^n e^{-n} \sqrt{2\pi n}$,
$$P_m = \frac{(2N)!}{(N+m)!\,(N-m)!\,2^{2N}} \approx \frac{(2N)^{2N} e^{-2N} \sqrt{4\pi N}}{(N+m)^{N+m} e^{-(N+m)} (N-m)^{N-m} e^{-(N-m)} \sqrt{2\pi(N+m)}\sqrt{2\pi(N-m)}\,2^{2N}}$$
$$\approx \frac{1}{\sqrt{\pi N}}\Big(1 - \frac{m^2}{N^2}\Big)^{-N}\Big(1 + \frac{m}{N}\Big)^{-m}\Big(1 - \frac{m}{N}\Big)^{m} \approx \frac{1}{\sqrt{\pi N}}\, e^{m^2/N} e^{-m^2/N} e^{-m^2/N} = \frac{1}{\sqrt{\pi N}}\, e^{-m^2/N}.$$
Using the fact that $\sum_m P_m = 1$, we find
$$P_m \approx \sqrt{\frac{1}{\pi N}} \exp\Big(-\frac{m^2}{N}\Big).$$
It says that the number fluctuations are distributed in a Gaussian distribution with $\sigma = \sqrt{N/2}$.
Solution:
The total displacement vector is $\mathbf{R} = \mathbf{r}_1 + \cdots + \mathbf{r}_N$. Noting that $R^2 = \mathbf{R} \cdot \mathbf{R}$, and $\langle \mathbf{r}_m \cdot \mathbf{r}_n \rangle = l^2 \langle \cos\theta_{mn} \rangle = 0$ for $m \ne n$, it is easy to see that $\langle R^2 \rangle = N l^2$.
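The identity $\langle R^2 \rangle = N l^2$ can be checked with a short simulation. A sketch assuming a two-dimensional walk with uniformly random step directions (helper name, $N$, and trial count are my own choices):

```python
import math
import random

def mean_square_displacement(N, trials, rng, l=1.0):
    """Average |R|^2 over many 2-D walks of N steps of length l in random directions."""
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(N):
            theta = 2 * math.pi * rng.random()  # uniformly random step direction
            x += l * math.cos(theta)
            y += l * math.sin(theta)
        total += x * x + y * y
    return total / trials

rng = random.Random(0)
msd = mean_square_displacement(100, 5000, rng)
print(round(msd, 1))  # close to N * l^2 = 100
```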
$$f(x, y) = \begin{cases} x + y, & 0 \le x \le 1,\ 0 \le y \le 1 \\ 0, & \text{elsewhere} \end{cases}$$
$$E(XY) = \int_0^1\!\!\int_0^1 xy(x+y)\,dx\,dy = \frac{1}{3}$$
$$E(X) = \int_0^1\!\!\int_0^1 x(x+y)\,dx\,dy = \frac{7}{12}, \qquad E(Y) = \int_0^1\!\!\int_0^1 y(x+y)\,dx\,dy = \frac{7}{12}$$
$$E(X^2) = \int_0^1\!\!\int_0^1 x^2(x+y)\,dx\,dy = \frac{5}{12}, \qquad E(Y^2) = \int_0^1\!\!\int_0^1 y^2(x+y)\,dx\,dy = \frac{5}{12}$$
$$\mathrm{Cov}(X, Y) = E(XY) - E(X)E(Y) = \frac{1}{3} - \frac{49}{144} = -\frac{1}{144}$$
$$\mathrm{Var}(X) = E(X^2) - E(X)^2 = \frac{11}{144}, \qquad \mathrm{Var}(Y) = E(Y^2) - E(Y)^2 = \frac{11}{144}$$
Thus,
$$\rho = \frac{\mathrm{Cov}(X, Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}} = \frac{-1/144}{11/144} = -\frac{1}{11}.$$
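The moment calculations can be double-checked numerically with a midpoint quadrature over the unit square (the helper `dblquad` is a hand-rolled sketch of mine, not a library routine; $\mathrm{Var}(X) = \mathrm{Var}(Y)$ by symmetry of $f$):

```python
def dblquad(g, m=400):
    """Midpoint-rule approximation of the double integral of g over [0,1] x [0,1]."""
    h = 1.0 / m
    return sum(g((i + 0.5) * h, (j + 0.5) * h) for i in range(m) for j in range(m)) * h * h

exy = dblquad(lambda x, y: x * y * (x + y))   # E[XY]  = 1/3
ex = dblquad(lambda x, y: x * (x + y))        # E[X]   = 7/12
ex2 = dblquad(lambda x, y: x * x * (x + y))   # E[X^2] = 5/12
cov = exy - ex * ex
var = ex2 - ex * ex
print(round(cov / var, 4))  # → -0.0909, i.e. -1/11
```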
Let
$$\hat{\sigma}_n^2 = \frac{1}{n-1} \sum_{j=1}^n (X_j - \bar{X}_n)^2$$
be the empirical variance. Show (a) that $E(\hat{\sigma}_n^2) = \sigma^2$, and (b) $\hat{\sigma}_n^2 \to \sigma^2$ in probability as $n \to \infty$.
Solution:
We may assume $\mu = 0$. We then see that
$$\hat{\sigma}_n^2 = \frac{1}{n-1} \sum_{j=1}^n (X_j^2 - \sigma^2) + \frac{n}{n-1}\sigma^2 - \frac{n}{n-1}\bar{X}_n^2.$$
The WLLN applies to the sum on the right and it goes to zero in probability as $n \to \infty$. The fourth moment condition is used here (but could be weakened). The last term also goes to zero in probability since $\bar{X}_n \to 0$ in probability. This gives the result.
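The consistency of $\hat{\sigma}_n^2$ is easy to see empirically. A sketch assuming Gaussian samples with true variance $\sigma^2 = 4$ (the distribution and sample sizes are my own choices):

```python
import random

rng = random.Random(0)
sigma2 = 4.0  # true variance of the N(0, 2^2) samples below
for n in (100, 10_000):
    xs = [rng.gauss(0.0, 2.0) for _ in range(n)]
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)  # empirical variance
    print(n, round(s2, 2))  # approaches sigma2 = 4.0 as n grows
```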
$$E\big\{e^{i\theta(X_1^{(n)} + X_2^{(n)} + \cdots + X_n^{(n)})}\big\} = \prod_{j=1}^n E\big\{e^{i\theta X_j^{(n)}}\big\} = \Big(1 + \frac{\lambda(e^{i\theta} - 1)}{n}\Big)^n \to e^{\lambda(e^{i\theta} - 1)}.$$
We now note that the limit on the right is the characteristic function of a Poisson random variable $Z$ with parameter $\lambda$. Since $\theta$ is any real number, this implies convergence in distribution of $Z_n$ to $Z$.
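A quick numerical comparison illustrates the limit, assuming the triangular-array setup $X_j^{(n)} \sim \mathrm{Bernoulli}(\lambda/n)$ (the pmf helper names are mine):

```python
import math

def binom_pmf(n, p, k):
    """P(Binomial(n, p) = k)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(lam, k):
    """P(Poisson(lam) = k)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam, n = 3.0, 10_000
for k in range(6):
    # Binomial(n, lam/n) pmf vs its Poisson(lam) limit
    print(k, round(binom_pmf(n, lam / n, k), 5), round(poisson_pmf(lam, k), 5))
```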