
Answers for Stochastic Calculus for Finance I, Steven Shreve (version of Jul 15, 2009)

Marco Cabral (mapcabral@ufrj.br), Department of Mathematics, Federal University of Rio de Janeiro. July 25, 2009.

Chapter 1
1.1: Since S1(H) = uS0 and S1(T) = dS0, we have X1(H) = Δ0 S0 (u − (1+r)) and X1(T) = Δ0 S0 (d − (1+r)). Since d < 1+r < u, X1(H) positive implies X1(T) negative and vice-versa.
1.2: Check that X1(H) = 3Δ0 + (3/2)Γ0 = −X1(T). Therefore if X1(H) is positive, X1(T) is negative and vice-versa.
1.3: By (1.1.6), V0 = S0.
1.4: Mutatis mutandis the proof of Theorem 1.2.2, replacing u by d and H by T.
1.5: Many computations.
1.6: We have 1.5 − V1 = Δ0 S1 + (−Δ0 S0)(1+r). We determine Δ0 = −1/2. So we should sell short 1/2 share of stock.
1.7: See the previous exercise.
1.8: (i) vn(s, y) = (2/5)[vn+1(2s, y + 2s) + vn+1(s/2, y + s/2)]. (ii) v0(4, 4) = 1.216. One can check that v2(16, 28) = 6.4, v2(4, 16) = 1, v2(4, 10) = 0.2 and v2(1, 7) = 0. (iii) δn(s, y) = [vn+1(2s, y + 2s) − vn+1(s/2, y + s/2)]/(2s − s/2).
1.9: (i) Vn(ω) = [p̃n(ω) Vn+1(ωH) + q̃n(ω) Vn+1(ωT)]/(1 + rn(ω)), where p̃n(ω) = (1 + rn(ω) − dn(ω))/(un(ω) − dn(ω)) and q̃n(ω) = 1 − p̃n(ω). (ii) Δn(ω) = [Vn+1(ωH) − Vn+1(ωT)]/[Sn+1(ωH) − Sn+1(ωT)]. (iii) p̃ = q̃ = 1/2 and V0 = 9.375. One can check that V2(HH) = 21.25, V2(HT) = V2(TH) = 7.5 and V2(TT) = 1.25.
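As a quick check of 1.8 (ii), here is a small Python sketch of the recursion from 1.8 (i). It assumes the usual model of this chapter (S0 = 4, u = 2, d = 1/2, r = 1/4, so the one-step factor is (1/(1+r))·(1/2) = 2/5) and the payoff v3(s, y) = (y/4 − 4)+; these data and all names below are illustrative assumptions, not taken from the text above.

```python
# Sketch: backward induction for the Asian option of Exercise 1.8.
# Assumed model: S0 = 4, u = 2, d = 1/2, r = 1/4, payoff v3(s, y) = max(y/4 - 4, 0).
u, d, r = 2.0, 0.5, 0.25
p = (1 + r - d) / (u - d)              # risk-neutral up-probability, here 1/2
disc = 1.0 / (1 + r)

def v(n, s, y, N=3):
    """Value function vn(s, y) from the recursion in 1.8 (i)."""
    if n == N:
        return max(y / 4.0 - 4.0, 0.0)
    return disc * (p * v(n + 1, u * s, y + u * s) +
                   (1 - p) * v(n + 1, d * s, y + d * s))

print(v(0, 4.0, 4.0))     # 1.216
print(v(2, 16.0, 28.0))   # 6.4
print(v(2, 4.0, 16.0))    # 1.0
print(v(2, 4.0, 10.0))    # 0.2
print(v(2, 1.0, 7.0))     # 0.0
```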

Chapter 2
2.1: (i) A and A^c are disjoint and their union is the whole space Ω. Therefore P(A ∪ A^c) = P(Ω) = 1 and, since the sets are disjoint, P(A ∪ A^c) = P(A) + P(A^c). (ii) It is enough to show it for two events A and B; by induction it follows for any finite set of events. Since A ∪ B = A ∪ (B \ A), and this is a disjoint union, P(A ∪ B) = P(A) + P(B \ A). Since B \ A is a subset of B, P(B \ A) ≤ P(B) by definition (2.1.5).
2.2: (i) P̃(S3 = 32) = P̃(S3 = 0.5) = (1/2)^3 = 1/8; P̃(S3 = 2) = P̃(S3 = 8) = 3(1/2)^3 = 3/8. (ii) Ẽ[S1] = (1+r)S0 = 4(1+r), Ẽ[S2] = (1+r)^2 S0 = 4(1+r)^2, Ẽ[S3] = (1+r)^3 S0 = 4(1+r)^3. The average rate of growth is 1+r (see p. 40, second paragraph). (iii) P(S3 = 32) = (2/3)^3 = 8/27, P(S3 = 8) = 4/9, P(S3 = 2) = 2/9, P(S3 = 0.5) = (1/3)^3 = 1/27. We can compute E[S1] directly or use p. 38, second paragraph: En[Sn+1] = (3/2)Sn. Therefore E0[S1] = E[S1] = (3/2)S0 and E1[S2] = (3/2)S1; applying E to both sides, E[E1[S2]] = E[S2] = (3/2)E[S1] = (3/2)^2 S0. Following the same reasoning, E[S3] = (3/2)^3 S0. Therefore the average rate of growth is 3/2.
2.3: Use Jensen's inequality and the martingale property: φ(Mn) = φ(En[Mn+1]) ≤ En[φ(Mn+1)].
2.4: (i) En[Mn+1] = En[X1 + ... + Xn+1] = (taking out what is known) = En[Xn+1] + (X1 + ... + Xn) = En[Xn+1] + Mn. Since Xn+1 takes the values 1 and −1 with equal probability and depends only on the (n+1)st coin toss (independence), En[Xn+1] = E[Xn+1] = 0. Therefore En[Mn+1] = Mn. (ii) Since Xn+1 depends only on the (n+1)st coin toss (independence), En[e^(σ Xn+1)] = E[e^(σ Xn+1)] = (e^σ + e^(−σ))/2. Then En[e^(σ Mn+1)] = (taking out what is known) = e^(σ Mn) En[e^(σ Xn+1)] = e^(σ Mn) (e^σ + e^(−σ))/2.
2.5: (i) Hint: Mn+1 = Mn + Xn+1 (why?) and (Xj)^2 = 1, so (Mn+1)^2 = (Mn)^2 + 2 Mn Xn+1 + 1. Also In+1 = Mn (Mn+1 − Mn) + In = Mn Xn+1 + In. One can prove the claim by induction on n: by the induction hypothesis In = (1/2)((Mn)^2 − n), and therefore In+1 = (1/2)((Mn)^2 + 2 Mn Xn+1 − n) = (1/2)((Mn+1)^2 − 1 − n) = (1/2)((Mn+1)^2 − (n+1)). (ii) From (i), In+1 = (1/2)(Mn + Xn+1)^2 − (n+1)/2. Since Xn+1 = 1 or −1 with the same probability, En[f(In+1)] = j(Mn), where j(m) = E[f((1/2)(m + Xn+1)^2 − (n+1)/2)] = (1/2)[f((1/2)(m+1)^2 − (n+1)/2) + f((1/2)(m−1)^2 − (n+1)/2)] = (1/2)[f((1/2)(m^2 − n) + m) + f((1/2)(m^2 − n) − m)]. Since In = (1/2)((Mn)^2 − n), En[f(In+1)] = j(Mn) = (1/2)[f(In + Mn) + f(In − Mn)]. Now we need the right-hand side to depend on In only. Since In = (1/2)((Mn)^2 − n), Mn = √(2 In + n). So En[f(In+1)] = g(In), where g(i) = (1/2)[f(i + √(2i + n)) + f(i − √(2i + n))].
2.6: It is easy to show that In is an adapted process. En[In+1] = (taking out what is known) = Δn (En[Mn+1] − Mn) + In. Since Mn is a martingale, En[Mn+1] = Mn and the first term is zero.
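The identity in 2.5 (i) is easy to check numerically. The sketch below (not part of the original solutions; all names illustrative) simulates a few paths of the symmetric random walk and verifies In = ((Mn)^2 − n)/2 at every step.

```python
# Sketch: pathwise check of I_n = (M_n^2 - n)/2 from Exercise 2.5 (i).
import random

random.seed(0)
for _ in range(5):                      # a few independent paths
    X = [random.choice([-1, 1]) for _ in range(50)]
    M = [0]
    for x in X:
        M.append(M[-1] + x)
    I = 0
    for j in range(len(X)):             # I_{j+1} = I_j + M_j (M_{j+1} - M_j)
        I += M[j] * (M[j + 1] - M[j])
        n = j + 1
        assert I == (M[n] ** 2 - n) // 2
print("I_n = (M_n^2 - n)/2 holds on all sampled paths")
```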

2.7:
2.8: (i) MN−1 = ẼN−1[MN] = ẼN−1[M′N] = M′N−1. Therefore MN−1 = M′N−1. Proceed by induction. (ii) Ẽn[Vn+1](ω) = p̃ Vn+1(ωH) + q̃ Vn+1(ωT) = (1+r) Vn(ω) by the algorithm (1.2.16). Now it is easy to prove. (iii) This is a consequence of the fact that the process Ẽn[Z], for any random variable Z, is a martingale.
2.9: See p. 46, (2.4.1): for models with random interest rates, it would ... (i) P̃(HH) = 1/4, P̃(HT) = 1/4, P̃(TH) = 1/12, P̃(TT) = 5/12. (ii) V2(HH) = 5, V2(HT) = 1, V2(TH) = 1, V2(TT) = 0, V1(H) = 12/5, V1(T) = 1/9, V0 = 226/225. (iii) Δ0 = 103/270. (iv) Δ1(H) = 1.
2.10: (i) Follow the proof of Theorem 2.4.5 (p. 40). (ii) Easy. (iii) Easy.
2.11: (i) Easy, since CN = (SN − K)+, FN = SN − K and PN = (K − SN)+. (ii) Trivial. (iii) F0 = (1/(1+r)^N) Ẽ[FN] = (1/(1+r)^N) Ẽ[SN − K]. Since Ẽ[SN] = S0 (1+r)^N, F0 = S0 − K/(1+r)^N. (iv) Yes, Cn = Pn.
2.12:
2.13: (i) We write (Sn+1, Yn+1) = ((Sn+1/Sn) Sn, Yn + (Sn+1/Sn) Sn). Now we apply the independence lemma to the variables Sn+1/Sn, Sn and Yn. We obtain Ẽn[f(Sn+1, Yn+1)] = p̃ f(u Sn, Yn + u Sn) + q̃ f(d Sn, Yn + d Sn). Therefore g(s, y) = p̃ f(us, y + us) + q̃ f(ds, y + ds). (ii) vN(s, y) = f(y/(N+1)). Vn = vn(Sn, Yn) = (1/(1+r)) Ẽn[Vn+1] = (1/(1+r)) Ẽn[vn+1(Sn+1, Yn+1)] = (1/(1+r))[p̃ vn+1(u Sn, Yn + u Sn) + q̃ vn+1(d Sn, Yn + d Sn)]. So vn(s, y) = (1/(1+r))[p̃ vn+1(us, y + us) + q̃ vn+1(ds, y + ds)].
2.14: (i) See 2.13 (i). (ii) vN(s, y) = f(y/(N − M)). For 0 ≤ n < M, vn(s) = (1/(1+r))[p̃ vn+1(us) + q̃ vn+1(ds)]. For n > M, vn(s, y) = (1/(1+r))[p̃ vn+1(us, y + us) + q̃ vn+1(ds, y + ds)]. For n = M, vM(s) = (1/(1+r))[p̃ vM+1(us, us) + q̃ vM+1(ds, ds)].
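The numbers in 2.9 can be reproduced with a few lines of Python. The model data used below (the stock tree, the interest rates r0 = 1/4, r1(H) = 1/4, r1(T) = 1/2 and the strike K = 7) is not stated above; it is a reconstruction chosen to be consistent with the quoted answers and should be checked against the book's statement of the exercise.

```python
# Sketch: reproducing the Exercise 2.9 answers with reconstructed model data.
S0 = 4.0
S1 = {'H': 8.0, 'T': 2.0}
S2 = {'HH': 12.0, 'HT': 8.0, 'TH': 8.0, 'TT': 2.0}
r0, r1 = 0.25, {'H': 0.25, 'T': 0.5}
K = 7.0

def ptilde(up, down, r):
    """Risk-neutral up-probability (1 + r - d)/(u - d)."""
    return (1.0 + r - down) / (up - down)

p0 = ptilde(S1['H'] / S0, S1['T'] / S0, r0)                        # 1/2
p1 = {w: ptilde(S2[w + 'H'] / S1[w], S2[w + 'T'] / S1[w], r1[w]) for w in 'HT'}

V2 = {w: max(S2[w] - K, 0.0) for w in S2}                          # 5, 1, 1, 0
V1 = {w: (p1[w] * V2[w + 'H'] + (1 - p1[w]) * V2[w + 'T']) / (1 + r1[w]) for w in 'HT'}
V0 = (p0 * V1['H'] + (1 - p0) * V1['T']) / (1 + r0)
Delta0 = (V1['H'] - V1['T']) / (S1['H'] - S1['T'])

print(V1['H'], V1['T'])   # 2.4 (= 12/5) and 0.111... (= 1/9)
print(V0)                 # 1.00444... (= 226/225)
print(Delta0)             # 0.38148... (= 103/270)
```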

Chapter 3
3.1: (i) Since P̃ > 0, Z > 0. Therefore 1/Z(ω) > 0 for every ω. (ii) Ẽ[1/Z] = Σω (1/Z(ω)) P̃(ω) = Σω (1/Z(ω)) Z(ω) P(ω) = Σω P(ω) = 1, replacing Z by its definition. (iii) E[Y] = Σω Y(ω) P(ω) = Σω Y(ω) P̃(ω)/Z(ω) = Ẽ[(1/Z) Y].

3.2: (i) P̃(Ω) = Σω Z(ω) P(ω) = E[Z] = 1. (ii) Ẽ[Y] = Σω Y(ω) P̃(ω) = Σω Y(ω) Z(ω) P(ω) = E[YZ]. (iii) Since P(A) = 0, P(ω) = 0 for every ω in A. Now P̃(A) = Σ(ω in A) P̃(ω) = Σ(ω in A) Z(ω) P(ω) = 0, since P(ω) = 0 for every ω in A. (iv) Suppose P̃(A) = Σ(ω in A) P̃(ω) = Σ(ω in A) Z(ω) P(ω) = 0. Since P(Z > 0) = 1, Z(ω) > 0 for every ω. Therefore the sum Σ(ω in A) Z(ω) P(ω) can be zero if and only if P(ω) = 0 for every ω in A. Therefore P(A) = 0. (v) P(A) = 1 if and only if P(A^c) = 1 − P(A) = 0, if and only if P̃(A^c) = 0 = 1 − P̃(A) (by (iii) and (iv)), if and only if P̃(A) = 1. (vi) Let Ω = {a, b}, Z(a) = 2, Z(b) = 0 and P(a) = P(b) = 1/2. Then P̃(a) = 1, P̃(b) = 0.
3.3: M0 = 13.5, M1(H) = 18, M1(T) = 4.5, M2(HH) = 24, M2(HT) = M2(TH) = 6, M2(TT) = 1.5. Now En[Mn+1] = En[En+1[S3]], which, from the properties of conditional expectation (iterated conditioning), is equal to En[S3] = Mn.
3.5: (i) Z(HH) = 9/16, Z(HT) = 9/8, Z(TH) = 9/24, Z(TT) = 45/12. (ii) Z1(H) = 3/4, Z1(T) = 3/2, Z0 = 1.
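The 3.5 values can be reproduced as follows. The script assumes the actual probabilities come from p = 2/3, q = 1/3 and that P̃ is the risk-neutral measure found in Exercise 2.9; both are assumptions consistent with the numbers above, not statements taken from the book. It also checks two identities from 3.1/3.2.

```python
# Sketch: Z = Ptilde/P for Exercise 3.5, with assumed P (p = 2/3) and Ptilde
# (the risk-neutral probabilities of Exercise 2.9).
from fractions import Fraction as F

p, q = F(2, 3), F(1, 3)
P      = {'HH': p * p, 'HT': p * q, 'TH': q * p, 'TT': q * q}
Ptilde = {'HH': F(1, 4), 'HT': F(1, 4), 'TH': F(1, 12), 'TT': F(5, 12)}

Z = {w: Ptilde[w] / P[w] for w in P}
print(Z)                                   # 9/16, 9/8, 3/8 (= 9/24), 15/4 (= 45/12)

# Z1 = E1[Z2] under the actual measure, Z0 = E[Z1]
Z1 = {w: p * Z[w + 'H'] + q * Z[w + 'T'] for w in 'HT'}
Z0 = p * Z1['H'] + q * Z1['T']
print(Z1, Z0)                              # {'H': 3/4, 'T': 3/2} and 1

# sanity checks from Exercises 3.1/3.2: E[Z] = 1 and Etilde[Y] = E[Z Y]
Y = {'HH': 5, 'HT': 1, 'TH': 1, 'TT': 0}   # any random variable works here
assert sum(Z[w] * P[w] for w in P) == 1
assert sum(Y[w] * Ptilde[w] for w in P) == sum(Z[w] * Y[w] * P[w] for w in P)
```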

Chapter 4
4.1: (ii) V0^C = 320/125 = 2.56, with V2(HH) = 64/5 = 12.8, V2(HT) = V2(TH) = 8/5 = 1.6, V2(TT) = 0, V1(H) = 144/25 = 5.76, V1(T) = 16/25 = 0.64.
4.2: Δ0 = (0.4 − 3)/6, Δ1(H) = −1/12, Δ1(T) = −1, C0 = 0, C1(H) = 0, C1(T) = 1.
4.3: The time-zero price is V0 = 0.4 and the optimal exercise policy is τ(ω) = ∞ if ω1 = H and τ(ω) = 1 if ω1 = T. Moreover, V1(H) = V2(HH) = V2(HT) = 0, V1(T) = 1 = max(1, 14/15), V2(TH) = 2/3 = max(2/3, 4/10), V2(TT) = 5/3 = max(5/3, 31/20), V3(TTH) = 7/4 = 1.75, V3(TTT) = 8.5/4 = 2.125.
4.4: No. With 1.36 we can make a hedge such that we can pay the payoff for any outcome and we will have some spare cash, since in this case the owner will not exercise at the optimal times. If we have HH or HT we will keep 0.36 at time one with probability 1/2. If we have TT we get nothing, and if we have TH we keep 1 at time one with probability 1/4. Therefore at time zero we would have 0.36(1/2) + (4/5)(1/4)(1) = 0.38 = 1.74 − 1.36.
4.5: List of the 11 stopping times that never exercise when the option is out of the money, with the value of each:

τ(HH)  τ(HT)  τ(TH)  τ(TT)  value
0      0      0      0      1
inf    2      1      1      1.36
inf    2      2      inf    0.32
inf    2      inf    2      0.8
inf    2      2      2      0.96
inf    2      inf    inf    0.16
inf    inf    1      1      1.2
inf    inf    2      inf    0.16
inf    inf    inf    2      0.64
inf    inf    2      2      0.8
inf    inf    inf    inf    0
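The table above can be generated by brute force. The sketch below (not part of the original solutions) assumes the Example 4.2.1 data (S0 = 4, u = 2, d = 1/2, r = 1/4, strike K = 5, p̃ = q̃ = 1/2); it enumerates all candidate exercise rules, keeps those that are stopping times and never exercise out of the money, and prices each one.

```python
# Sketch: brute-force enumeration for Exercise 4.5 in the assumed Example 4.2.1
# model (S0 = 4, u = 2, d = 1/2, r = 1/4, strike K = 5, ptilde = qtilde = 1/2).
from itertools import product

paths = ['HH', 'HT', 'TH', 'TT']
K, r = 5.0, 0.25
S = {w: {0: 4.0,
         1: 8.0 if w[0] == 'H' else 2.0,
         2: {'HH': 16.0, 'HT': 4.0, 'TH': 4.0, 'TT': 1.0}[w]} for w in paths}

def intrinsic(s):
    return max(K - s, 0.0)

def is_stopping_time(tau):
    # exercising at time n may only use the first n coin tosses
    for w1 in paths:
        n = tau[w1]
        if n is None:
            continue
        for w2 in paths:
            if w1[:n] == w2[:n] and tau[w2] != n:
                return False
    return True

def never_out_of_money(tau):
    return all(n is None or intrinsic(S[w][n]) > 0 for w, n in tau.items())

def value(tau):
    return sum(0.25 * intrinsic(S[w][tau[w]]) / (1 + r) ** tau[w]
               for w in paths if tau[w] is not None)

taus = [dict(zip(paths, c)) for c in product([0, 1, 2, None], repeat=4)]
good = [t for t in taus if is_stopping_time(t) and never_out_of_money(t)]
print(len(good))                              # 11
for t in sorted(good, key=value, reverse=True):
    print(t, round(value(t), 4))              # the best value is 1.36
```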

4.6: (i) Since Sn/(1+r)^n is a martingale under P̃, the stopped process Sn∧τ/(1+r)^(n∧τ) is a martingale by Theorem 4.3.2. Therefore Ẽ[S0∧τ/(1+r)^(0∧τ)] = Ẽ[SN∧τ/(1+r)^(N∧τ)]. The first term is equal to S0. The last term is equal to Ẽ[Sτ/(1+r)^τ], since τ ≤ N. Therefore, for any stopping time τ, Ẽ[Gτ/(1+r)^τ] = Ẽ[(K − Sτ)/(1+r)^τ] = Ẽ[K/(1+r)^τ] − Ẽ[Sτ/(1+r)^τ]. Since the discounted stock price is a martingale, this is equal to Ẽ[K/(1+r)^τ] − S0. Since (1+r)^τ ≥ 1, this is maximal when τ(ω) = 0 for every ω; in that case the first term equals K. Therefore the value is K − S0.
4.7: Using an argument similar to 4.6 (i) and taking τ(ω) = N for every ω, the value will be S0 − K/(1+r)^N.
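A small numeric illustration of 4.6 (an assumption-laden sketch, not the book's computation): in a two-period toy model with S0 = 4, u = 2, d = 1/2, r = 1/4 and K = 5, the discounted expected payoff of the intrinsic value K − Sn equals K·Ẽ[(1+r)^(−τ)] − S0 for every stopping time, so the constant time τ = 0 already attains the maximum K − S0. The code checks this for the constant stopping times τ = 0, 1, 2.

```python
# Sketch: numeric illustration of Exercise 4.6 in an assumed two-period model
# (S0 = 4, u = 2, d = 1/2, r = 1/4, K = 5).  For the intrinsic value K - S_n the
# discounted expected payoff is K*Etilde[(1+r)^(-tau)] - S0, so tau = 0 is best.
K, S0, u, d, r = 5.0, 4.0, 2.0, 0.5, 0.25

paths = ['HH', 'HT', 'TH', 'TT']
def stock(w, n):
    return S0 * u ** w[:n].count('H') * d ** w[:n].count('T')

for n in (0, 1, 2):                            # constant stopping times tau = n
    val = sum(0.25 * (K - stock(w, n)) / (1 + r) ** n for w in paths)
    print(n, val, K / (1 + r) ** n - S0)       # the two columns agree; n = 0 is best
```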

Chapter 5
5.1: (i) We need the following property: if X and Y are independent random variables then E[XY] = E[X] E[Y]. Since τ2 − τ1 and τ1 are independent, E[α^τ2] = E[α^((τ2 − τ1) + τ1)] = E[α^(τ2 − τ1) α^τ1] = E[α^(τ2 − τ1)] E[α^τ1] = (E[α^τ1])^2, since τ2 − τ1 has the same distribution as τ1. (ii) Write τm = (τm − τm−1) + (τm−1 − τm−2) + ... + (τ2 − τ1) + τ1. Using the same argument as in (i) we obtain the result. (iii) Yes, since the probability of rising from level 0 to level 1 is the same as the probability of rising from level 1 to level 2. This is different, though, from the probability of going to level −1.

5.2: (i) We can write f(σ) = p(e^σ − e^(−σ)) + e^(−σ). Since p > 1/2 and e^σ − e^(−σ) > 0 for σ > 0, f(σ) > (1/2)(e^σ − e^(−σ)) + e^(−σ) = cosh(σ) > 1. Another proof is to determine σ0 such that f′(σ0) = 0. There is only one solution, σ0 = log(q/p)/2. Since q/p < 1, σ0 < 0. Also f′(0) = p − q > 0, so f is strictly increasing for σ ≥ 0. Since f(0) = 1, f(σ) > 1 for every σ > 0. (ii) En[Sn+1] = Sn En[e^(σ Xn+1)]/f(σ) = Sn E[e^(σ Xn+1)]/f(σ) = Sn f(σ)/f(σ) = Sn. (iii) Follow the same argument as on pages 121 and 122. (iv) (See p. 123.) Given α in (0, 1) we need to solve for the σ > 0 which satisfies α = 1/f(σ). This is the same as solving α p e^σ + α(1 − p) e^(−σ) = 1, which is a quadratic equation for e^(−σ); we obtain e^(−σ) = (1 − √(1 − 4α² p(1−p)))/(2α(1−p)). One checks that e^(−σ) < 1 if and only if 1 − 2α(1−p) < √(1 − 4α² p(1−p)), which holds because α < 1; we conclude that σ > 0. Therefore E[α^τ1] = (1 − √(1 − 4α² p(1−p)))/(2α(1−p)). (v) Follow Corollary 5.2 (p. 124): let β = √(1 − 4α² p(1−p)). Then E[τ1 α^(τ1 − 1)] = ∂/∂α E[α^τ1] = (1 − β)/(2α²(1−p)β) (I used a CAS, a computer algebra system: Maxima). Now if we let α go to 1 then β goes to 2p − 1 (since √(1 − 4p(1−p)) = √((2p − 1)²) = |2p − 1|, and p > 1/2, so this equals 2p − 1). Therefore we obtain E[τ1] = 1/(2p − 1).

5.3: (i) σ0 = log(q/p), which is greater than 0 since q > p. (ii) Follow the steps on p. 121 and p. 122 but replace 2/(e^σ + e^(−σ)) by 1/f(σ). Then equation (5.2.11) from p. 122 (with the replacement above) is true for σ > σ0. Here we cannot let σ go to zero; instead we let σ go to σ0. Since from (i) e^(σ0) = q/p, we obtain the answer: p/q. (iii) We can write E[α^τ1] = E[I{τ1 = ∞} α^τ1] + E[I{τ1 < ∞} α^τ1]. Since 0 < α < 1, α^τ1 = 0 on {τ1 = ∞}, so the first term vanishes and E[α^τ1] = E[I{τ1 < ∞} α^τ1]. Following Exercise 5.2 (iv), E[α^τ1] = (1 − √(1 − 4α² p(1−p)))/(2α(1−p)). (iv) Follow Exercise 5.2 (v): let β = √(1 − 4α² p(1−p)). Then E[I{τ1 < ∞} τ1 α^(τ1 − 1)] = (1 − β)/(2α²(1−p)β) (I used a CAS: Maxima). Now if we let α go to 1 then β goes to 1 − 2p (since √(1 − 4p(1−p)) = √((2p − 1)²) = |2p − 1|, and p < 1/2, so this equals 1 − 2p). Therefore we obtain E[I{τ1 < ∞} τ1] = p/((1−p)(1−2p)).

5.4: (ii) P(τ2 = 2k) = P(τ2 ≤ 2k) − P(τ2 ≤ 2k − 2). By the reflection principle, P(τ2 ≤ 2k) = P(M2k = 2) + 2 P(M2k ≥ 4). Since the random walk is symmetric, 2 P(M2k ≥ 4) = P(M2k ≥ 4) + P(M2k ≤ −4). Therefore P(τ2 ≤ 2k) = P(M2k = 2) + P(M2k ≥ 4) + P(M2k ≤ −4), which (using P(M2k = −2) = P(M2k = 2)) is equal to 1 − P(M2k = 0) − P(M2k = 2). Replacing k by k − 1, P(τ2 ≤ 2k − 2) = 1 − P(M2k−2 = 0) − P(M2k−2 = 2). Therefore P(τ2 = 2k) = P(M2k−2 = 0) + P(M2k−2 = 2) − P(M2k = 0) − P(M2k = 2).

The complete formula can now be written out, since P(Mm = 0) = (1/2)^m · m!/((m/2)!)² and P(Mm = 2) = (1/2)^m · m!/((m/2 − 1)!(m/2 + 1)!).

5.5: (ii) Replace (1/2)^n in (i) by p^((n+b)/2) q^((n−b)/2); the number of paths, n!/(((n−b)/2 + m)!((n+b)/2 − m)!), is the same as in (i).
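The closed-form expressions in 5.2 (iv) and (v) can be checked by simulation. The sketch below (illustrative parameter choices p = 0.7 and α = 0.9, not from the text) estimates E[α^τ1] and E[τ1] by Monte Carlo and compares them with (1 − √(1 − 4α²pq))/(2αq) and 1/(2p − 1).

```python
# Sketch: Monte Carlo check of Exercise 5.2 (iv)-(v) for the asymmetric random
# walk with p > 1/2.  tau1 is the first passage time to level 1.
import math
import random

random.seed(1)
p, alpha, trials, cap = 0.7, 0.9, 100_000, 10_000
q = 1 - p

def tau1():
    level, n = 0, 0
    while level < 1 and n < cap:       # cap is a safety bound; tau1 is a.s. finite
        level += 1 if random.random() < p else -1
        n += 1
    return n

samples = [tau1() for _ in range(trials)]
closed_form = (1 - math.sqrt(1 - 4 * alpha**2 * p * q)) / (2 * alpha * q)
print(sum(alpha**t for t in samples) / trials, closed_form)   # both about 0.805
print(sum(samples) / trials, 1 / (2 * p - 1))                 # both about 2.5
```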
