R/Rmetrics eBooks is a series of electronic books and user guides aimed at students and practitioners who use R/Rmetrics to analyze financial markets.
A Discussion of Time Series Objects for R in Finance (2009), Diethelm Würtz, Yohan Chalabi, Andrew Ellis
Portfolio Optimization with R/Rmetrics (2010), Diethelm Würtz, William Chen, Yohan Chalabi, Andrew Ellis
Basic R for Finance (2010), Diethelm Würtz, Yohan Chalabi, Longhow Lam, Andrew Ellis
Financial Market Data for R/Rmetrics (2010, Early Bird Edition), Diethelm Würtz, Andrew Ellis, Yohan Chalabi
Indian Financial Market Data for R/Rmetrics (2010, Early Bird Edition), Diethelm Würtz, Mahendra Mehta, Andrew Ellis, Yohan Chalabi
Presentations from the R/Rmetrics Singapore Workshop (2010), Diethelm Würtz, Mahendra Mehta, Juri Hinz, David Scott
Series Editors: PD Dr. Diethelm Würtz, Institute of Theoretical Physics and Curriculum for Computational Science, Swiss Federal Institute of Technology, Hönggerberg, HIT K 32.2, 8093 Zurich
Contact Address: Rmetrics Association, Weinbergstrasse 41, 8006 Zurich, info@rmetrics.org
Publisher: Finance Online GmbH, Swiss Information Technologies, Weinbergstrasse 41, 8006 Zurich
Authors: Diethelm Würtz, Swiss Federal Institute of Technology Zurich; Mahendra Mehta, NeuralSoft Technologies, Mumbai; Juri Hinz, National University of Singapore, Singapore; David Scott, University of Auckland, Auckland
© 2009, Finance Online GmbH, Zurich. Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the Rmetrics Association, Zurich.
Limit of Liability/Disclaimer of Warranty: While the publisher and authors have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.
WELCOME
Welcome to the first R/Rmetrics Singapore Conference on Computational Topics in Finance. We are very glad that you found the time to come to Singapore, and for the many of you traveling from the U.S., Europe and various places in Asia, we hope that your journey was not too arduous.

With the R/Rmetrics Singapore Conference, we want to create a new forum where fund and/or risk managers from banks and insurance firms, decision makers, researchers from industry and academia, and students can exchange ideas and engage in stimulating discussions. The environment for this workshop should be a place a little bit aside from the mainstream conference venues, and we are happy to have found this at the Risk Management Institute of the National University of Singapore.

About 40 participants are attending the conference, and the mixture, as planned, is quite heterogeneous: about half are from academia, and the other half from the software and financial industries, including banks.

Last but not least, we want to thank the organizing committee and our sponsors. We wish you an interesting conference with many inspiring and stimulating discussions.
CONTENTS
WELCOME  V
CONTENTS  VII

I  Friday Morning  1
   1  STEFANO IACUS  2
   2  JURI HINZ  40
   3  DAVID SCOTT  56
   4  MARC PAOLELLA  82

II  Friday Afternoon  85
   5  VIKRAM KURIYAN  86
   6  BERNARD LEE  120
   7  KAM FONG CHAN  122
   8  ANDREW ELLIS  134
   9  ANMOL SETHY  148

III  167
   10  KARIM CHINE  168
   13  JOEL YU  192
   14  LEONG CHEE KIA  202
   15  DIETHELM WÜRTZ  204
   16  PRATAP SONDHI  214

IV  Appendix  233
   SPONSORS  234
   RMETRICS ASSOCIATION  236
PART I
FRIDAY MORNING
CHAPTER 1
STEFANO IACUS
The "yuima" package: An R framework for simulation and inference of stochastic differential equations
Stefano M. Iacus on behalf of Yuima Project Team
Department of Economics, Business and Statistics, Università degli Studi di Milano, Italy
Most of the theoretical results in modern finance rely on the assumption that the underlying dynamics of asset prices, currency exchange rates, interest rates, etc., are continuous-time stochastic processes driven by stochastic differential equations. Continuous-time models are also at the basis of option pricing, and option pricing often requires Monte Carlo methods. In turn, the Monte Carlo method requires a good preliminary model to simulate, whose parameters have to be estimated from historical data. Most ready-to-use tools in computational finance rely on purely discrete-time models, like ARCH, GARCH, etc., and very few examples of software handling continuous-time processes in a general fashion are available, also in the R community. There still exists a gap between what is going on in mathematical finance and applied finance. The "yuima" package is intended to help fill this gap.

The Yuima Project is an open source and collaborative effort of several mathematicians and statisticians aimed at developing the R package named "yuima" for simulation and inference of stochastic differential equations. The "yuima" package is an environment that follows the paradigm of methods and classes of the S4 system for the R language. In the "yuima" package, stochastic differential equations can be of very abstract type, e.g. uni- or multi-dimensional, driven by a Wiener process or fractional Brownian motion with general Hurst parameter, with or without jumps specified as Lévy noise. Lévy processes can be specified via a compound Poisson description, by the specification of the Lévy measure, or via increments and stable laws. The "yuima" package is intended to offer the basic infrastructure on which complex models and inference procedures can be built. In particular, the basic set of functions includes the following:
1) Simulation schemes for all types of stochastic differential equations (Wiener, fBm, Lévy).
2) Different subsampling schemes, including random sampling with user-specified random times distribution, space discretization, tick times, etc.
3) Automatic asymptotic expansion for the approximation and estimation of functionals of diffusion processes with small noise via Malliavin calculus, useful in option pricing.
4) Efficient quasi-likelihood inference for diffusion processes and diffusion processes with jumps.
All simulation schemes, subsampling and inference are designed to work on both regular and irregular time grids (i.e. regular or irregular time series). In special cases, asynchronous data and sampling schemes can also be handled. As proof-of-concept (but fully operational) examples, statistical procedures have been implemented such as change-point analysis of the volatility of stochastic differential equations, asynchronous covariance estimation, and divergence test statistics.
The Yuima Project was partly supported by Japan Science and Technology Agency, Basic Research Programs PRESTO, Grants-in-Aid for Scientific Research No. 19340021.
Outline
- Overview of the Yuima Project
- Overview of the yuima package
- What does a yuima object contain?
- What can you do with a yuima object in hand?
- How is it supposed to work?
- Inference
- Change-point Analysis
- LASSO estimation
- Asymptotic Expansion
- Roadmap
A. Brouste (Univ. Le Mans, FR), M. Fukasawa (Osaka Univ., JP), H. Hino (Waseda Univ., Tokyo, JP), S.M. Iacus (Milan Univ., IT), K. Kengo (Tokyo Univ., JP), H. Masuda (Kyushu Univ., JP), Y. Shimizu (Osaka Univ., JP), M. Uchida (Osaka Univ., JP), N. Yoshida (Tokyo Univ., JP), ... more to come.
The yuima package¹ is written by people working in mathematical statistics and finance, who actively publish results in the field, have some knowledge of R, and have a feeling for what's next in the field. It aims at filling the gap between theory and practice!
¹ The Yuima Project is funded by the Japan Science and Technology Agency (JST) Basic Research Programs PRESTO, Grants-in-Aid for Scientific Research No. 19340021.
The yuima package goal: fill the gap between theory and practice.
The Yuima Project aims at implementing, via the yuima package, a very abstract framework to describe probabilistic and statistical properties of stochastic processes in a way which is as close as possible to their mathematical counterparts but also computationally efficient. The package:
- is an R package, using S4 classes and methods, where the basic class extends to SDEs with jumps (simple Poisson, Lévy), SDEs driven by fBM, Markov switching regime processes, HMM, etc.
- separates the data description from the inference tools and simulation schemes
- is designed to allow multidimensional, multi-noise process specification
- includes a variety of tools useful in finance, like asymptotic expansion of functionals of stochastic processes via Malliavin calculus
The main object is the yuima object, which describes the model in a mathematically sound way. The data and the sampling structure can then be included as well, or just the sampling scheme, from which data can be generated according to the model. The package exposes very few generic functions, like simulate, qmle, plot, etc., and some other specific functions for special tasks. Before looking at the details, let us see an overview of the main object.
[Diagram: a yuima object bundles Data (univariate or multivariate), a Model (diffusion, Lévy, fractional BM, Markov switching, HMM) and a Sampling scheme (deterministic or random; tick times, space discretization, ...).]
[Diagram: capabilities built around a yuima object: Simulation (Euler-Maruyama), Parametric Inference, Nonparametrics (covariation, p-variation), Asymptotic expansion, Option pricing, Akaike's, Monte Carlo, Change point, LASSO-type, Model selection, Hypotheses Testing.]
We consider here the three main classes of SDEs which can be easily specified. All are multidimensional and possibly parametric.
- Diffusions: dX_t = a(t, X_t) dt + b(t, X_t) dW_t
- Fractional Gaussian noise, with H the Hurst parameter
- Lévy-driven SDEs, whose jump part splits into integrals over the big jumps |z| > 1 and the small jumps 0 < |z| ≤ 1
Example:

dX_t = -3 X_t dt + (1 / (1 + X_t²)) dW_t
> str(mod1)
Formal class 'yuima.model' [package "yuima"] with 16 slots
  ..@ drift          : expression((-3 * x))
  ..@ diffusion      :List of 1
  .. ..$ : expression(1/(1 + x^2))
  ..@ hurst          : num 0.5
  ..@ jump.coeff     : expression()
  ..@ measure        : list()
  ..@ measure.type   : chr(0)
  ..@ parameter      :Formal class 'model.parameter' [package "yuima"] with 6 slots
  .. .. ..@ all      : chr(0)
  .. .. ..@ common   : chr(0)
  .. .. ..@ diffusion: chr(0)
  .. .. ..@ drift    : chr(0)
  .. .. ..@ jump     : chr(0)
  .. .. ..@ measure  : chr(0)
  ..@ state.variable : chr "x"
  ..@ jump.variable  : chr(0)
  ..@ time.variable  : chr "t"
  ..@ noise.number   : num 1
  ..@ equation.number: int 1
  ..@ dimension      : int [1:6] 0 0 0 0 0 0
  ..@ solve.variable : chr "x"
  ..@ xinit          : num 0
  ..@ J.flag         : logi FALSE
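The call that builds mod1 does not survive in the extracted slide, but the drift and diffusion slots in the str() output above pin it down; a sketch, assuming the yuima package is loaded:

```r
library(yuima)
# drift -3*x and diffusion 1/(1+x^2) match the expressions stored in
# the str() output; the state variable defaults to "x"
mod1 <- setModel(drift = "-3*x", diffusion = "1/(1+x^2)")
```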
[Plot: a simulated trajectory of dX_t = -3 X_t dt + (1 / (1 + X_t²)) dW_t on t ∈ [0, 1].]
Multidimensional SDE. From the code below, the model is

dX_t¹ = -3 X_t¹ dt + dW_t¹ + X_t² dW_t³
dX_t² = (-X_t¹ - 2 X_t²) dt + X_t¹ dW_t¹ + 3 dW_t²
sol <- c("x1","x2")                             # variables for numerical solution
a <- c("-3*x1","-x1-2*x2")                      # drift vector
b <- matrix(c("1","x1","0","3","x2","0"),2,3)   # diffusion matrix
mod3 <- setModel(drift = a, diffusion = b, solve.variable = sol)
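A path of mod3 can then be drawn with the same generic simulate() used elsewhere in the talk; the seed and grid size below are assumptions:

```r
set.seed(123)                 # seed chosen for reproducibility (assumption)
X <- simulate(mod3, n = 1000)
plot(X)                       # one panel per solve variable x1, x2
```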
[Plot: simulated trajectories of the components x1 and x2 on t ∈ [0, 1].]
Multidimensional SDE
An example of a parametric SDE with more equations than noises (the model mod4 simulated below).
> set.seed(123)
> X <- simulate(mod4, n=1000)
> plot(X, main="I'm fractional!")
[Plot: "I'm fractional!" — a simulated path of the fBM-driven model mod4 on t ∈ [0, 1].]
Jump processes
Jump processes can be specified in different ways in mathematics (and hence in the yuima package). Let Z_t be a compound Poisson process (i.e. jumps follow some distribution, e.g. Gaussian). Then it is possible to consider an SDE which involves jumps, such as the one built below as mod5:

dX_t = -θ X_t dt + σ dW_t + dZ_t
> mod5 <- setModel(drift=c("-theta*x"), diffusion="sigma",
+   jump.coeff="1",
+   measure=list(intensity="10", df=list("dnorm(z, 0, 1)")),
+   measure.type="CP", solve.variable="x")
> set.seed(123)
> X <- simulate(mod5, true.p=list(theta=1,sigma=3), n=1000)
> plot(X, main="I'm jumping!")
[Plot: "I'm jumping!" — a simulated path of the compound-Poisson jump diffusion mod5 on t ∈ [0, 1].]
Jump processes
Another way is to specify the Lévy measure. Without going into too much detail, here is an example of a simple OU process with IG Lévy measure.
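The construction call is not shown on the slide; a hedged sketch of how such a model might be declared through the yuima interface, where measure.type = "code" passes a random-number generator for the Lévy increments (the IG parameters are illustrative assumptions, not from the talk):

```r
# OU-type process driven by an inverse-Gaussian Levy process;
# rIG's parameters (1, 0.1) are assumed here purely for illustration
mod6 <- setModel(drift = "-x", xinit = 1, jump.coeff = "1",
                 measure.type = "code",
                 measure = list(df = "rIG(z, 1, 0.1)"))
```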
[Plot: a simulated path of the OU process with IG Lévy measure.]
A sampling or subsampling structure can be created via the setSampling constructor. This allows one to specify regular or irregular multidimensional grids (i.e. each equation has its own grid), possibly with a random distribution of times.

The sampling slot in yuima is also used during inference. For example, one can specify the model and the data and then make the sampling explicit, so that it contains information about how these data have been collected. In this case, the inference tools in yuima will act differently upon this information.

In simulation studies, one can decide to simulate the processes at high frequency and then resample the simulated data according to different subsampling schemes: random, irregular, space grids, etc., and verify the effect of the different subsampling on the estimation or on the calibration of a financial product.
Inference
- the covariance estimator of Yoshida-Hayashi (2005) for multidimensional Itô processes with asynchronous data
- quasi-likelihood estimation for multidimensional diffusions (Yoshida, 1992, 2005)
- change point estimation for the volatility in a multidimensional Itô process (Iacus & Yoshida, 2009)
- Bayes-type estimators (Yoshida, 2005)
- LASSO-type estimation and hypotheses testing based on φ-divergences (De Gregorio & Iacus, 2008 & 2010)
Just not to be too vague, let us consider the exact formulations of some of the problems which can be handled by the yuima package.
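As a sketch of how the quasi-likelihood machinery is invoked (the model, start values and the true.p argument mirror the mod5 example in these slides; they are illustrative, not the exact formulations that follow):

```r
# parametric model with unknown drift and diffusion parameters
mod <- setModel(drift = "-theta*x", diffusion = "sigma")
yui <- setYuima(model = mod,
                sampling = setSampling(Terminal = 1, n = 1000))
set.seed(123)
X   <- simulate(yui, true.p = list(theta = 1, sigma = 3))  # synthetic data
fit <- qmle(X, start = list(theta = 0.5, sigma = 1))       # quasi-MLE
summary(fit)
```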
Change-point Analysis
Change-point analysis
The model: a process Y whose volatility coefficient changes at an unknown time τ,

Y_t driven by σ(X_t, θ₁) dW_t   for t ∈ [0, τ)
Y_t driven by σ(X_t, θ₂) dW_t   for t ∈ [τ, T].

The change point instant τ is unknown and is to be estimated, along with θ₁ and θ₂, from the observations sampled from the path of (X, Y).
[Application slides: change-point analysis applied to financial series including Lehman Brothers, Goldman Sachs, Deutsche Bank, Deutsche Bank (Ger), HSBC, Barclays, Morgan Stanley, Bank of America, RBS, Unicredit, Intesa Sanpaolo, Commerzbank, and the indices DJ Stoxx 600 Banks, DJ Stoxx Americas 600 Banks, DJ Stoxx 600, DJ Stoxx Global 1800, NYSE, Dow Jones, MSCI World, S&P 500, FTSE, DAX, CAC, S&P MIB, IBEX, SMI, Nikkei 225.]
LASSO estimation
LASSO is nothing but estimation under constraints on the parameters. Usually studied for the least squares estimation method, it can be applied here using the QMLE approach for a diffusion model dX_t = a(X_t, α) dt + b(X_t, β) dW_t, by minimizing the penalized contrast

min_{α,β}  H_n(α, β) + Σ_j λ_{n,j} |α_j| + Σ_k γ_{n,k} |β_k|,

where H_n is the quasi-likelihood contrast and λ_{n,j}, γ_{n,k} ≥ 0 are penalties.
LASSO tries to set the maximal number of parameters to 0; in this sense it performs model selection jointly with estimation.
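The yuima package ships a lasso() routine for this penalized QMLE; the slides do not show the call, so the signature, model and data below are assumptions for illustration only:

```r
# hypothetical call: penalized QMLE for a CKLS-type model; the model,
# start values and the lasso() signature are assumptions, and the yuima
# object must carry the observed series ("rates", hypothetical) as data
mod <- setModel(drift = "alpha + beta*x", diffusion = "sigma*x^gamma")
yui <- setYuima(model = mod, data = setData(rates))
fit <- lasso(yui, start = list(alpha = 1, beta = -0.1,
                               sigma = 0.5, gamma = 1))
```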
LASSO estimation of the U.S. interest rates, monthly data from 06/1964 to 12/1989. These data have been analyzed by many authors, including Nowman (1997), Aït-Sahalia (1996) and Yu and Phillips (2001), and they make a nice application of LASSO.
Reference                         Model
Merton (1973)                     dX_t = α dt + σ dW_t
Vasicek (1977)                    dX_t = (α + β X_t) dt + σ dW_t
Cox, Ingersoll and Ross (1985)    dX_t = (α + β X_t) dt + σ X_t^{1/2} dW_t
Dothan (1978)                     dX_t = σ X_t dW_t
Geometric Brownian Motion         dX_t = β X_t dt + σ X_t dW_t
Brennan and Schwartz (1980)       dX_t = (α + β X_t) dt + σ X_t dW_t
Cox, Ingersoll and Ross (1980)    dX_t = σ X_t^{3/2} dW_t
Constant Elasticity Variance      dX_t = β X_t dt + σ X_t^γ dW_t
CKLS (1992)                       dX_t = (α + β X_t) dt + σ X_t^γ dW_t

Each of these models is a restriction of the CKLS specification.
Estimation Method                    α̂                β̂                 σ̂       Model
MLE                                  4.1889           -0.6072            1.3610   CKLS
Nowman                               2.4272           -0.3277            1.3610   CKLS
Exact Gaussian (Yu & Phillips)       2.0069 (0.5216)  -0.3330 (0.0677)            CKLS
QMLE                                 2.0822 (0.9635)  -0.2756 (0.1895)            CKLS
QMLE + LASSO, mild penalization      1.5435 (0.6813)  -0.1687 (0.1340)            CKLS
QMLE + LASSO, strong penalization    0.5412 (0.2076)   0.0001 (0.0054)            CKLS
Asymptotic Expansion
Estimation of functionals
The yuima package can handle asymptotic expansion of functionals of a d-dimensional diffusion process

dX_tᵉ = a(X_tᵉ, ε) dt + b(X_tᵉ, ε) dW_t,   ε ∈ (0, 1],

with W_t an r-dimensional Wiener process, i.e. W_t = (W_t¹, ..., W_tʳ). The functional is expressed in the following abstract form:

Fᵉ(Xᵉ) = Σ_{α=0}^{r} ∫₀ᵀ f_α(X_tᵉ, ε) dW_tᵅ + F(X_Tᵉ, ε),   with W_t⁰ = t.
Example: the Asian call option has payoff

max( (1/T) ∫₀ᵀ X_t dt − K, 0 ).

Thus the functional of interest is

Fᵉ(Xᵉ) = (1/T) ∫₀ᵀ X_tᵉ dt,   r = 1,

with

f₀(x, ε) = x/T,   f₁(x, ε) = 0,   F(x, ε) = 0

in the abstract form above.
So, the call option price requires the composition of a smooth functional

Fᵉ(Xᵉ) = (1/T) ∫₀ᵀ X_tᵉ dt,   r = 1,

with max(x − K, 0).

Monte Carlo methods require a HUGE number of simulations to get the desired accuracy in the calculation of the price, while the asymptotic expansion of Fᵉ provides unexpectedly accurate approximations. The yuima package provides functions to construct the functional Fᵉ, and automatic asymptotic expansion based on Malliavin calculus, starting from a yuima object.
setFunctional method
# model: dX_t^e = X_t^e dt + e * X_t^e dW_t
diff.matrix <- matrix(c("x*e"), 1, 1)
model <- setModel(drift = c("x"), diffusion = diff.matrix)
T <- 1
xinit <- 1
f <- list(expression(x/T), expression(0))
F <- 0
e <- .3
yuima <- setYuima(model = model, sampling = setSampling(Terminal = T, n = 1000))
yuima <- setFunctional(yuima, f = f, F = F, xinit = xinit, e = e)
The definition of the functional is now included in the yuima object (some output dropped):
> str(yuima)
Formal class 'yuima' [package "yuima"] with 5 slots
  ..@ data      :Formal class 'yuima.data' [package "yuima"] with 2 slots
  ..@ model     :Formal class 'yuima.model' [package "yuima"] with 16 slots
  ..@ sampling  :Formal class 'yuima.sampling' [package "yuima"] with 11 slots
  ..@ functional:Formal class 'yuima.functional' [package "yuima"] with 4 slots
  .. .. ..@ F    : num 0
  .. .. ..@ f    :List of 2
  .. .. .. ..$ : expression(x/T)
  .. .. .. ..$ : expression(0)
  .. .. ..@ xinit: num 1
  .. .. ..@ e    : num 0.3
[Diagram: the yuima object now bundles Data, Model, Sampling and the Functional.]
Then, it is as easy as
> F0 <- F0(yuima)
> F0
[1] 1.716424
> max(F0-K, 0) # asian call option price
[1] 0.7164237
> LevyAsianApproxOption("c", S = 1, SA = 1, X = 1,
+   Time = 1, time = 1, r = 0, b = 1, sigma = e)
Option Price:
[1] 0.7184944
> X <- sde.sim(drift=expression(x), sigma=expression(e*x), N=1000, M=1000)
> mean(colMeans((X-K)*(X-K>0))) # MC asian call price based on M=1000 repl.
[1] 0.707046
Roadmap
Where: R-Forge.R-Project.org/projects/yuima
When: beta release, March 2010; stable release by summer 2010
Documentation: an R/Rmetrics e-book for developers and users is planned
Parallelization of simulators: the foreach approach in 2010
User friendly (point & click) GUI: we have plans
Thanks!
Q&A
CHAPTER 2
JURI HINZ
A Monte Carlo method for optimal stochastic control problems with convex value functions
Juri Hinz
National University of Singapore Department of Mathematics Faculty of Science
Abstract. We present a method for the calculation of optimal control policies for problems with convex value functions. Such situations appear frequently in many applications and encompass important examples arising in the area of the so-called partially observed Markov decision processes. We show that an increase in computational performance can be achieved by an adaptation of the classical least-squares approach. The modifications are based on the convexity-preserving property of the conditional expectation, valid in our framework.
A Monte Carlo method for problems of optimal stochastic control with convex value functions.
Juri Hinz1
1 NUS
Storage management
- Commodity price process (Z_k)_{k≥1} with state space Z ⊆ R
- Storage positions P (finite set: empty, full, half-full)
- Actions A (finite set: sell, buy one unit)
- Change of position by action: (p, a) ↦ α(p, a) ∈ P
- A policy π : (p, z) ↦ π(p, z) ∈ A yields actions and positions:

a_k := π(p_k, Z_k),   p_{k+1} := α(p_k, a_k),   k ≥ 1.
Ingredients
The value of the policy π is V^π(p, z), given by the solution to

V^π(p, z) = R(p, z, π(p, z)) + ∫ V^π(α(p, π(p, z)), z') K(z, dz'),

where K(z, dz') = P(Z₂ ∈ dz' | Z₁ = z).
Optimal control
The optimal policy π* is better than every other policy π:

V^{π*}(p, z) ≥ V^π(p, z).
Solution method
Construct a sequence (V^(n))_{n≥1} of pointwise converging functions whose limit is V*:

V*(p, z) = lim_{n→∞} V^(n)(p, z).

Each step requires integrals of the form

∫ f(z') K(z, dz'),

which are difficult if the state space is not countable, or is high dimensional with complicated geometry.
Solution
Suggest an approximation to T suitable for numerical calculations: approximate Tf in terms of M basis functions,

(Tf)(z) ≈ Σ_{j=1}^{M} λ_j φ_j(z),

through a chain of operators: the Monte-Carlo transition T^N f, the approximative transition T̂f, and the bounded approximative transition.
Transition approximations
The idea is simple. Using the conditional expectation Tf(Z₁) = E(f(Z₂) | σ(Z₁)), one recognizes a projection in the Hilbert space L²(Z × Z, P_{(Z₁,Z₂)}). Now approximate: replace the measure by point measures from a sample,

P_{(Z₁,Z₂)} ≈ (1/N) Σ_{i=1}^{N} δ_{(z_i, z_i')}.
Monte-Carlo transition

T^N f = Σ_{j=1}^{M} λ_j φ_j,

where (λ_j)_{j=1}^{M} ∈ R^M is the minimizer of the sum of squared errors Σ_{i=1}^{N} |f(z_i') − Σ_{j=1}^{M} λ_j φ_j(z_i)|².
Problems with T^N:
- it depends on the basis and on the sample
- enlarging the basis gives oscillations in the projection
- to capture oscillations, the sample must be very large
Approximative transition
If Tf is non-negative and convex, then choose non-negative and convex basis functions and take only positive coefficients in the linear combinations; the basis can be arbitrarily large, and no oscillations occur, due to convexity. Thus, under the standing assumption that all basis functions (φ_j)_{j=1}^{M} are non-negative,

0 ≤ φ_j(z) for all z ∈ Z, j = 1, ..., M,

we define the approximative transition

T̂f = Σ_{j=1}^{M} λ_j φ_j,   f ≥ 0,

where (λ_j)_{j=1}^{M} solve the constrained quadratic minimization

minimize    Σ_{i=1}^{N} |f(z_i') − Σ_{j=1}^{M} λ_j φ_j(z_i)|²
subject to  λ_j ≥ 0 for j = 1, ..., M.

The bounded approximative transition additionally requires

max_{z∈Z} Σ_{j=1}^{M} λ_j φ_j(z) ≤ max_{z∈Z} f(z).
Boundedness is crucial to ensure the existence of a fixed point V̂ of

V̂(p, z) = max_{a∈A} [ R(p, z, a) + T̂ V̂(α(p, a), ·)(z) ]

under slight additional assumptions.
Proposition. If all reward functions satisfy 0 ≤ R(a, ·, p) < ∞ and the basis function values on the sample {(φ_j(z_i))_{i=1}^{N} : j = 1, ..., M} are linearly independent, then there exists a solution V̂.
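The constrained quadratic minimization above is just non-negative least squares on the basis values; a minimal base-R sketch, where the basis, sample and target function are hypothetical stand-ins rather than objects from the talk:

```r
# minimize sum_i ( f(z'_i) - sum_j lambda_j * phi_j(z_i) )^2
# subject to lambda_j >= 0, via box-constrained optimization
set.seed(1)
N   <- 200
z   <- runif(N)                    # sample points z_i
zp  <- z + rnorm(N, sd = 0.05)     # successor points z'_i
f   <- function(x) x^2             # a non-negative convex target
Phi <- cbind(1, z, z^2)            # non-negative convex basis on [0, 1]
obj <- function(lambda) sum((f(zp) - Phi %*% lambda)^2)
fit <- optim(rep(0.1, ncol(Phi)), obj, method = "L-BFGS-B", lower = 0)
fit$par                            # non-negative coefficients lambda_j
```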
How to use? If the value functions V(p, .) of the original problem are non-negative and convex, then find V-(p, .) for an arbitrarily large cone of convex basis functions (no oscillations, due to convexity!). Claim: V- approximates V. Still a problem: computational cost grows with a large basis. Observation: the basis dimension can be low if the basis is properly chosen; ideally, basis elements mimic the targeted value functions. Idea: after calculations with a preliminary basis, change the basis so that its elements are similar to the projections obtained, and apply this procedure repeatedly.
Basis-free least-square optimal control

Suppose that for each phi a procedure B determines a basis B(phi) = {psi_1, ..., psi_M}, whose approximative transition is denoted by T~_{B(phi)}. Given f and phi_0, proceed recursively

    phi_{k+1} = T~_{B(phi_k)} f,   k >= 1.   (1)

Basis-free version

Given the improvement operator B(.), we suggest studying the following problem: determine the solution V- to the fixed point equations

    V-(p, z) = max_{a in A} [ R(p, z, a) + T~_{B(phi(alpha(p, a)))} V-(alpha(p, a), .)(z) ],

where phi(p) is a non-improvable projection: T~_{B(phi(p))} V-(p, .) = phi(p) for all p in P.
Specific situation: if Tf is non-negative and convex, then approximate Tf by T~_C f, where C spans the cone of all non-negative, convex functions. To approach T~_C f, we construct improvement operators B_l(phi) = {phi, phi l}. It turns out that T~_{B_l(phi)} f = phi for each affine-linear l if and only if phi = T~_C f, for phi positive and convex and l affine linear.
Stylized procedure

To approach T~_C f by improvement of two-dimensional cones:

0) Given f >= 0, choose a convex phi > 0.
1) Pick an affine linear l and calculate T~_{B_l(phi)} f.
2) If T~_{B_l(phi)} f = phi, then repeat 1) with the same phi but another l.
3) If T~_{B_l(phi)} f differs from phi, then repeat 1) with the new phi := T~_{B_l(phi)} f and the same l.
4) Terminate once 1)-2) has repeated sufficiently many times.
A more formal algorithm

Step 0 (Initialization). For f >= 0, set beta := (f(z_i'))_{i=1}^{N}. Specify positive convex functions {phi_1, ..., phi_F} and affine linear functions {l_1, ..., l_L}. For positive and convex phi^(0), define the basis

    B^(0) = {phi_1, ..., phi_F, phi^(0), phi^(0) l_1, ..., phi^(0) l_L}.

Step 1 (Minimization). Given B^(k) = {psi_1^(k), ..., psi_M^(k)}, form the design matrix

    M_ij^(k) := psi_j^(k)(z_i),   i = 1, ..., N, j = 1, ..., M,

and solve the non-negative least-squares problem for the projection phi^(k) and its squared error E^(k).

Step 2 (Stopping). If E^(k) - E^(k-1) < epsilon, then finish and return phi^(k); otherwise proceed.

Step 3 (Basis change). Define B^(k+1) = {phi_1, ..., phi_F, phi^(k), phi^(k) l_1, ..., phi^(k) l_L} and go to Step 1.
[Figure: sample realizations of the example process; the two numerical values reported are 30.644 and 31.913]
Suppose that (Z_k)_{k>=1} follows the auto-regression

    Z_{k+1} = 0.9 Z_k + X_{k+1},   k >= 1,   (X_k)_{k>=1} iid, N(0, 0.09)-distributed.
Positions P = {stopped, goes}; actions A = {stop, go}. Position change alpha:

    alpha(stopped, stop) = stopped
    alpha(goes, stop)    = stopped
    alpha(stopped, go)   = stopped
    alpha(goes, go)      = goes
The reward is paid only when the system stops:

    R(stopped, z, stop) = 0,   R(goes, z, stop) = e^z,   R(stopped, z, go) = 0,   R(goes, z, go) = 0.
Given a path realization (z_k)_{k=1}^{400} and gamma = 0.95, one obtains V by value iteration with basis improvement in each step.
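The driving chain of this stopping example can be simulated directly; a minimal Python sketch (the seed and helper name are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1(n, phi=0.9, sigma2=0.09, z0=0.0):
    """Simulate Z_{k+1} = phi * Z_k + X_{k+1}, with X_k iid N(0, sigma2)."""
    z = np.empty(n)
    z[0] = z0
    shocks = rng.normal(scale=np.sqrt(sigma2), size=n)
    for k in range(1, n):
        z[k] = phi * z[k - 1] + shocks[k]
    return z

path = simulate_ar1(400)  # a path realization (z_k), k = 1..400
```

For this AR(1) the stationary variance is sigma2 / (1 - phi^2) = 0.09 / 0.19, which a long simulated path reproduces.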
Outlook

How about non-convex value functions? Represent non-convex functions by a difference of convex functions and adapt the basis improvement accordingly. Example:

    (T cos)(z) = integral cos(z + x) N(0, sigma_X^2)(dx) = cos(z) e^{-sigma_X^2 / 2}.
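The closed-form identity above is easy to check by simulation; in the Python sketch below, sigma_X^2 = 0.09 as in the example, and the evaluation point z = 1.3 is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

sigma2 = 0.09  # Var(X), matching the auto-regression example
x = rng.normal(scale=np.sqrt(sigma2), size=200_000)

z = 1.3                                    # arbitrary evaluation point
mc = np.cos(z + x).mean()                  # Monte-Carlo estimate of (T cos)(z)
exact = np.cos(z) * np.exp(-sigma2 / 2.0)  # closed form cos(z) e^{-sigma^2/2}
```

The Monte-Carlo average and the closed form agree to within sampling error.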
Conclusion
Knowing particular properties of the conditional expectation helps to improve the calculation of the least-squares projection; convexity is the key property here. Adaptive basis improvement seems to work. Using this, the Markov decision algorithm can be adapted to complicated and high-dimensional state spaces; no a-priori basis construction is required.
Thank you!
CHAPTER 3
DAVID SCOTT
Keywords: distributions; financial returns; generalized lambda distribution

We investigate the generalized lambda distribution with infinite support for modeling financial return series with power-law tails. We derive expressions for the distribution, for random number generation, and for financial risk measures including value at risk, expected shortfall and tail indices. We introduce a new method of obtaining parameter estimates in which the data is standardized to have zero median and unit interquartile range, and then a generalized lambda distribution with zero median and unit interquartile range is fitted to the data. This reduces the number of parameters to two, allowing for more efficient parameter estimation. Using this idea we demonstrate a simple robust method-of-moments estimation approach using moments based on Bowley's skewness and Moors' kurtosis. We compare the performance of several further estimation approaches, including maximum log-likelihood, maximum product spacing, goodness-of-fit testing, and histogram binning, using Monte Carlo simulation with data derived from the NASDAQ-100 returns.
Outline: Introduction; Fitting the Generalized Lambda Distribution; The NASDAQ-100; Parameter Optimization; Simulating Financial Returns; Conclusions
Parameter Space
lambda_1 can take any real value; lambda_2 must be positive. Only particular values of lambda_3 and lambda_4 produce proper statistical distributions. The support of the distribution changes with different values of the parameters lambda_3 and lambda_4. [Karian et al., 1996] identified six regions of the shape-parameter plane within which the shapes of the GLDs are similar.
[Figure: the six regions of the (lambda_3, lambda_4) shape-parameter plane]
Examples
Density and probability function for the GLD in parameter region 4. The right tail is fixed at lambda_4 = 1/4 and the left tail varies in powers of 2 in the range {1/8, 1/4, 1/2, 1, 2}.
Tail Behaviour
The lower (upper) tail of the distribution function of the GLD is regularly varying at -infinity (+infinity) with index 1/lambda_3 (1/lambda_4). For the density function f(x) we have

    f(x) ~ (1/lambda_3) (lambda_2 |x|)^(1/lambda_3 - 1)   as x -> -infinity,   (2)
    f(x) ~ (1/lambda_4) (lambda_2 x)^(1/lambda_4 - 1)     as x -> +infinity.   (3)
Moment existence and tail order change continuously with the values of the tail-weight parameters. For the stable distribution, by contrast, moment existence changes discontinuously with the index: the mean exists for index > 1, and all moments exist only when the index equals 2.
Yohan Chalabi, David Scott, Diethelm Würtz: Generalized Lambda
The value at risk at level gamma is the quantile VaR_gamma = F^{-1}(gamma | theta).   (4)

The expected shortfall is the tail average

    ES_gamma = (1/gamma) integral_{-infinity}^{VaR_gamma} x f(x|theta) dx
             = (1/gamma) integral_{0}^{gamma} F^{-1}(p|theta) dp   (5)
             = lambda_1 + (1/gamma) [ gamma^{lambda_3+1} / (lambda_2 (lambda_3+1))
                                      + ((1-gamma)^{lambda_4+1} - 1) / (lambda_2 (lambda_4+1)) ].
Fitting Methods
Many methods have been proposed, including:

the method of moments [Ramberg et al., 1979]
least squares [Öztürk and Dale, 1985]
fitting using percentiles [Karian and Dudewicz, 1999]
search routines [King and MacGillivray, 1999]
fitting using L-moments [Asquith, 2007]
histogram fitting [Su, 2005]
maximum likelihood [Su, 2007]

Combinations of methods have been suggested to deal with the problem of finding starting solutions for optimization, for example [Su, 2007]. Combination approaches seem the most sensible.
Fitting Methods
Many investigators nonetheless suggest using the method of moments. This is nonsensical in our case, where moments of order 4 or even less can be infinite. Percentile methods and L-moments are usable. We have implemented a variation of these approaches using robust moments, as investigated by [Kim and White, 2004].
Robust Moments
The first two robust moments are the median, mu_r, and the interquartile range, sigma_r. The next two moments are the robust skewness and kurtosis, s_r and kappa_r:

    mu_r    = pi_{1/2}
    sigma_r = pi_{3/4} - pi_{1/4}
    s_r     = (pi_{3/4} + pi_{1/4} - 2 pi_{2/4}) / (pi_{3/4} - pi_{1/4})                (6)
    kappa_r = (pi_{7/8} - pi_{5/8} + pi_{3/8} - pi_{1/8}) / (pi_{6/8} - pi_{2/8})
Robust Moments
There are the obvious estimators, where p_q denotes the sample q-th quantile and the hat indicates that the statistic is a sample quantity:

    mu^_r    = p^_{1/2}
    sigma^_r = p^_{3/4} - p^_{1/4}
    s^_r     = (p^_{3/4} + p^_{1/4} - 2 p^_{2/4}) / (p^_{3/4} - p^_{1/4})               (7)
    kappa^_r = (p^_{7/8} - p^_{5/8} + p^_{3/8} - p^_{1/8}) / (p^_{6/8} - p^_{2/8})
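These estimators are one-liners on sample quantiles; a Python sketch (the standard-normal test sample is illustrative):

```python
import numpy as np

def robust_moments(x):
    """Median, IQR, Bowley skewness, and Moors kurtosis from sample quantiles."""
    p = lambda q: np.quantile(x, q)
    med = p(1 / 2)
    iqr = p(3 / 4) - p(1 / 4)
    skew = (p(3 / 4) + p(1 / 4) - 2 * p(2 / 4)) / (p(3 / 4) - p(1 / 4))
    kurt = (p(7 / 8) - p(5 / 8) + p(3 / 8) - p(1 / 8)) / (p(6 / 8) - p(2 / 8))
    return med, iqr, skew, kurt

rng = np.random.default_rng(3)
m, s, sk, ku = robust_moments(rng.normal(size=100_000))
```

For a standard normal sample, the median and robust skewness are near 0, the IQR is near 1.349, and the robust kurtosis is near 1.233 (the Moors kurtosis of the normal distribution).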
For the GLD standardized to zero median and unit interquartile range, with quantile function S_{lambda_3,lambda_4}, the robust skewness and kurtosis are

    s_r     = (S_{l3,l4}(3/4) + S_{l3,l4}(1/4) - 2 S_{l3,l4}(1/2)) / (S_{l3,l4}(3/4) - S_{l3,l4}(1/4))
    kappa_r = (S_{l3,l4}(7/8) - S_{l3,l4}(5/8) + S_{l3,l4}(3/8) - S_{l3,l4}(1/8)) / (S_{l3,l4}(6/8) - S_{l3,l4}(2/8))
Given estimates of lambda_3 and lambda_4 obtained by matching the robust skewness and kurtosis, the scale parameter follows from the interquartile range via S_{lambda_3,lambda_4}(3/4) - S_{lambda_3,lambda_4}(1/4) and lambda_2 (11), and the location parameter follows analogously from the median (12).
[Figure: fitted shape parameters lambda_3 and lambda_4 for the NASDAQ-100 equities, plotted against contours of robust skewness and kurtosis]
Results
Fourth moments exist only for equities for which both lambda_3 and lambda_4 are greater than 0.25. The variance exists only for equities for which both lambda_3 and lambda_4 are greater than 0.5. We observe that:

for a reasonable number of equities, the fourth moment exists;
for the bulk of the equities, at least the variance exists;
for some of the equities, the variance does not exist.
Parameter Optimization
The robust method-of-moments approach constitutes only the first stage of fitting the GLD to a data set. We considered a number of approaches to optimizing the fit, using the robust method-of-moments estimates as a starting point:

histogram methods
goodness-of-fit criteria
maximum likelihood
maximum product spacing
Parameter Optimization
Histogram methods vary according to the way breaks are chosen; we used the choice of breaks due to Freedman and Diaconis. There are many goodness-of-fit measures which have been used for fitting distributions; we used the Anderson-Darling statistic. Maximum product spacing does not appear to have been used previously with the GLD.
Results from the MLE (blue), MPS (red), AD (orange), and FD (green) approaches. The solid lines are drawn from the fitted distribution function and the points are taken from a kernel density estimate of the simulated series.
Results
We observe that there is very little difference discernible between the goodness-of-fit (AD) and MPS approaches. The MLE fit differs from AD and MPS in the tails by a small amount. The tail fit for the histogram (FD) approach is substantially different in the upper tail.
Parameter Correlation
Tail Index Parameterization
[Figure: pairwise scatter plots of the fitted parameters lambda_1, lambda_2, lambda_3, lambda_4]
Parameter Correlation
Skewness/Kurtosis Parameterization
[Figure: pairwise scatter plots of the fitted parameters lambda_1, lambda_2, delta, beta]
Parameter Simulation
1. Estimate the median, the interquartile range, and the robust skewness and kurtosis from the 100 NASDAQ equities, obtaining samples of lambda_1, lambda_2, s, and kappa.
2. Compute, from the parameters lambda_1, lambda_2, delta, and beta, density estimates using the smoothing-spline ANOVA approach of [Gu, 2002] and [Gu and Wang, 2003].
3. Estimate the dependency structures of lambda_1 vs. delta and lambda_2 vs. beta with two bivariate Gaussian copulas.
4. Generate random variates for the probabilities from the copulas and compute, from the marginal distributions, the parameters lambda_1, lambda_2, delta, and beta; lambda_3 and lambda_4 are recalculated from delta and beta.
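The copula step above can be sketched in Python; the correlation value and sample size below are hypothetical, and the final push through the fitted marginals is only indicated in a comment:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def gaussian_copula_uniforms(rho, n):
    """Draw n pairs of uniforms whose dependence is a bivariate Gaussian
    copula with correlation rho (the marginals remain U(0, 1))."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    g = rng.multivariate_normal(np.zeros(2), cov, size=n)
    return norm.cdf(g)  # probability transform preserves the dependence

u = gaussian_copula_uniforms(0.6, 50_000)
# u[:, 0] and u[:, 1] would then be pushed through the inverse marginal
# CDFs of, e.g., (lambda_1, delta) to obtain dependent parameter draws.
```

Both columns of `u` are uniform on (0, 1), but they remain positively dependent, which is exactly what step 4 needs before applying the inverse marginal CDFs.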
Marginal Parameters
[Figure: marginal density estimates of lambda_1, lambda_2, delta, and beta]

Copula Simulation

[Figure: simulated versus observed dependence of lambda_1 vs. delta and lambda_2 vs. beta under the fitted Gaussian copulas]
Conclusions
The generalized lambda distribution is useful in fitting the distribution of returns for equities. It is easier to use, and the results are more informative, compared with the stable distribution. It is possible to realistically simulate financial returns using the generalized lambda distribution.
Bibliography
Asquith, W. H. (2007). L-moments and TL-moments of the generalized lambda distribution. Computational Statistics & Data Analysis, 51(9):4484-4496.
Gu, C. (2002). Smoothing Spline ANOVA Models. Springer Series in Statistics. Springer-Verlag, New York.
Gu, C. and Wang, J. (2003). Penalized likelihood density estimation: Direct cross-validation and scalable approximation. Statistica Sinica, pages 811-826.
Hastings, C., Mosteller, F., Tukey, J. W., and Winsor, C. P. (1947). Low moments for small samples: A comparative study of order statistics. The Annals of Mathematical Statistics, 18(3):413-426.
CHAPTER 4
MARC PAOLELLA
Marc S. Paolella
An open and active question concerns the construction of a multivariate distribution whose marginals are Student's t but with potentially different degrees of freedom. This is of particular value in empirical finance, where it is well known that the tail indices, or maximally existing moments, of the returns differ markedly across assets. While several constructions can be found in the literature, all have weaknesses. In this paper, we propose a new construction, which is also easily endowed with a different asymmetry parameter for each marginal. While computation of the density via the definition is possible but time-consuming, thus prohibiting direct calculation and optimization of the likelihood, we discuss how the method of indirect inference can be used. An example using the series comprising the DJIA is illustrated.
PART II
FRIDAY AFTERNOON
CHAPTER 5
VIKRAM KURIYAN
We will present an analytical approach that takes an investment-management point of view on the financial landscape. This talk will focus on the path of the crisis, trace the mechanisms through which the crisis was transmitted globally, and offer some ideas for the future. We will aim to understand the bank balance-sheet exposures that drove this crisis. We will also look at bank balance sheets through the eyes of a derivative trader, to demonstrate that the banking system has implicit but often not well-understood asymmetric payoff structures, and to show how a deep understanding of derivatives can make the banking system less fragile. We will also examine the role of regulators and rating agencies as inadvertent catalysts for this particular collapse. Lastly, we will offer some suggestions for the future.
Current Environment
A Quick Review: We are coming off the greatest global economic contraction since the Great Depression. Massive governmental intervention was necessary to prevent large parts of the global financial system from collapsing.
United States United Kingdom Iceland Dubai Greece
In Other Words
Unprecedented Collapse PLUS Unprecedented Government Support EQUALS Unprecedented Changes in Asset Valuations
Will the future be a repeat of the 1970s, or a repeat of the lost decades in Japan?
Outline of talk
Theory: Multiple Models

Bank stocks in an asset allocation
Credit as a put option
Demand deposits as a destabilizer
Multiple Models
"You've got to have models in your head. And you've got to array your experience, both vicarious and direct, on this latticework of models." (Charles Munger)
Think about the risk of each segment: volatility, correlations, macro environments, leverage, fat tails. Then think about risk in a portfolio context.
Now throw in implicit derivative exposures: Merton Model for Risky Debt
Credit = Risk-Free Debt - Guarantee
Credit = Risk-Free Debt - Put(Asset, strike price)
All credit = Risk-Free Debt + Short Put Option
Applies to all credit: corporate debt, cards, mortgages and, in particular, accrual books.
Applies to all credit: corporate debt, CDS, mortgages. As derivative modellers, you know that the put delta grows in magnitude as the asset price collapses, so risk goes up too. We already have four decades of experience with derivative models to explain what went on and what can happen. We do not need new models or technologies to analyze this risk: derivative models explain how bank equity will behave in times of stress.
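The point about the put delta can be illustrated with the standard Black-Scholes put formula; the parameter values below are hypothetical:

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def put_delta(S, K, r, sigma, T):
    """Black-Scholes delta of the put embedded in risky debt (Merton view):
    the lender is effectively short a put on the firm's assets,
    struck at the face value of the debt."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return norm_cdf(d1) - 1.0  # put delta N(d1) - 1 lies in (-1, 0)

# As the asset value S falls toward the debt face value K, the short put's
# delta becomes more negative: the lender's exposure looks ever more like equity.
deltas = [put_delta(S, K=100, r=0.02, sigma=0.3, T=1.0) for S in (150, 120, 100, 80)]
```

The computed deltas move monotonically toward -1 as the asset value falls, which is exactly the risk amplification described above.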
[Diagram: the 2x2 depositor game with strategies Don't run (D) and Run (R), giving the payoff cells D/D, R/D, D/R, R/R]
Given this structure, stress has to be expected. The only question is the timing and the severity. (Not a Black Swan.)
Robust Systems
"It is the exposure (or payoff) that creates the complexity and the opportunities and dangers, not so much the knowledge (i.e., statistical distribution, model representation, etc.)" (Nassim Taleb, in:)
http://www.edge.org/3rd_culture/taleb08/taleb08_index.htm
Animal Spirits
In 2007, RBS paid $100 billion, mainly in cash, to buy ABN Amro. A year later, for that money you could have bought:

Citibank ($20b), Morgan Stanley ($11b), Goldman Sachs ($20b), Deutsche Bank ($13b), Barclays ($13b),
Crisis Pictures
Negative Swap Spreads/Limits of Arbitrage
Crisis Pictures
Volkswagen
VoW
Bank Runs, Nash Equilibria and Self-Fulfilling Prophecies (John Nash; Robert K. Merton)
Increased coordination across regulatory entities: bank regulatory standards with new Basel requirements; scenario analysis and stress testing. Caveat: be cautious about all fat-tailed stress tests.
References
Diamond, D. and Dybvig, P. (1983). Bank Runs, Deposit Insurance, and Liquidity. The Journal of Political Economy, 91(3).
Bridgewater Daily Notes, 12/03/2008.
JPM 2008 Annual Report.
Niall Ferguson, The Ascent of Money.
Robert Merton, MIT World lecture:
http://mitworld.mit.edu/video/659
Risk Taxonomy
We categorize risks by the nature of their origin and the frequency of occurrence. The fundamental categories are:

    Transactional Risk + Operating Risk + Episodic Risk

Operating Risks: Operational Risk
Episodic Risks: Liquidity Risk, Strategic Risk, Macroeconomic Risk, Black Swan Risk
Sample Data
20 years of daily data: 14 major stock indices and 4 commodities; 30 major currencies; 26 interest rates in 16 major currencies; 36 swap rates in 20 major currencies.
Data Analysis
Calculation of daily, weekly, monthly, quarterly, semi-annual and annual returns on a rolling basis. Statistical analysis of the data. Classification of fluctuations into business as usual, mild stress, moderate stress, extreme stress, and historical worst case.
[Table: thresholds for mild, moderate and extreme positive and negative fluctuations, together with the historical maxima, for each instrument class studied]
Lehman Bankruptcy
September 2008 to February 2009
[Table: highest one-week deviations and largest one-week drawdowns around the Lehman event for DEM, GBP and USD 10-year IR swaps and HKD, NOK, SGD and EURO 3-month interbank rates]
In times of trouble
In a crisis, uncorrelated assets can become correlated, as we saw in the volatile summer of 2007.
Correlation to S&P 500 Index*

                   High Yield Bonds   EM Stocks   Financial Stocks   Mtl & Mining Stocks   USD vs. JPY
    In crisis            99%            100%            99%                 95%                 80%
    Normally             -8%            53%             36%                 26%                 -2%
The finance profession needs to move toward embracing the possibility of black swans and devising responses to extreme events, and to become less enamoured with the precision of risk reports.
Risk Management
Real money, from individuals through institutions, is typically allocated and re-balanced within the framework of a long-term asset allocation strategy. Rapid changes in the entire asset-allocation pie are a potential cause of stress.
Risk Management
The core asset classes
Stocks Bonds Real Assets (including real estate, commodities,) Hedge Funds Private Equity
All of the above must be monitored. Bubbles or depressions in any one asset class can point to the next source of stress. Concentration and overcrowding in any one asset class is also a point of stress.
Asset classes are a source of risk in and of themselves, with multiplier effects through credit extended on collateral.
Risk Management
The dangerous bubble asset classes
Equities Debt Real Assets (including real estate, commodities,)
Debt bubbles (credit traps) tend to be the most dangerous, because debt is the least observable.
Long cycle times; stress points are few and far between; not enough data to model robustly.
They affect the real economy through liquidity demands, asset prices and systemic collusion. Like Poisson processes, they are hard to model, but real and inevitable.
CHAPTER 6
BERNARD LEE
An Analysis of Extreme Price Shocks and Illiquidity Among Systematic Trend Followers
Bernard Lee Singapore Management University - School of Economics
Shih-Fen Cheng Singapore Management University - School of Information Systems Annie Koh Singapore Management University - School of Business
Abstract
We construct an agent-based model to study the interplay between extreme price shocks and illiquidity in the presence of systematic traders known as trend followers. The agent-based approach is particularly attractive in modeling commodity markets because the approach allows for the explicit modeling of production, capacities, and storage constraints. Our study begins by using the price stream from a market simulation involving human participants and studies the behavior of various trend-following strategies, assuming initially that their participation will not impact the market. We notice an incremental deterioration in strategy performance as strategies deviate further and further from the theoretical strategy of lookback straddles (Fung and Hsieh 2001), due to the negative impacts of transaction costs and imperfect execution. Next, the trend followers are allowed to participate in the market, trading against uninformed computer traders making randomized bids and offers. We notice that market prices begin to break down as the percentage of trend followers in the market reaches 80%. In addition, in a market dominated by smart traders, it becomes increasingly difficult for any of them to generate profits using what is supposed to be a long gamma strategy. After all, trading is a zero-sum game: it is not feasible for any long gamma trader to generate a consistent profit unless someone else is willing to be on the other side of his or her trades. In any such market dominated by smart traders with low liquidity and extreme price instability, one proposed solution (as proposed earlier by the U.S. Commodity Futures Trading Commission) is to control position size limits, by either decreasing them (in the original proposal) or increasing them (for completeness in our analysis).
Based on our simulation results, we have found no evidence that such a solution will be effective; in fact, doing so will only lead to erratic price behavior as well as a variety of practical issues when imposing such changes to position size limits. An alternative proposal is to intervene in the market directly or indirectly, for example by using a market maker to inject or withdraw liquidity. Our simulation results show evidence that both injecting and withdrawing liquidity by the market maker can be effective. However, a market maker can accumulate a large negative P&L by buying in a one-sided, falling market in which it is the only bidder, or vice versa. Therefore, in practice, no market maker may volunteer to participate in any such market rescue efforts unless governments are willing to underwrite some of its large potential losses. In short, direct or indirect intervention by controlling liquidity is not a panacea, and there are practical limits to its effectiveness.
CHAPTER 7
Spillover effect between the Credit Default Swaps (CDS) and the stock market using a general stochastic volatility with jumps model
Risk Analytics Division Risk Management Department United Overseas Bank (UOB) Ltd
This paper investigates the time-series dynamics governing the credit default swap indices (CDX), and volatility and jump spillover between the stock and CDX markets. We use daily returns data on the S&P500 and Dow Jones CDX North American Investment Grade 5-year (CDX.NA.IG.5Y) indices over the period between June 1, 2004 and June 30, 2009. Our empirical evidence suggests the presence of two components, (i) diffusive stochastic volatility and (ii) jumps in returns and volatility, in both the stock and CDX markets. Further, our results show that the contemporaneous correlation between the stochastic volatilities of both markets decreased during the financial crisis, suggesting greater diversification benefits between the stock and CDX markets in periods of financial downturn. In addition, we find evidence of strong bidirectional Granger causality between the stochastic volatility in the stock and CDX markets during the crisis period. We find no evidence, however, to suggest that lagged jumps in the CDX market predict jumps in the stock market, and vice versa.
Joint work with Alastair Marsden, University of Auckland, New Zealand.
Volatility and Jump Spillover Between Stock and CDX Markets: Evidence during Global Financial Crisis
Kam Fong Chan* (United Overseas Bank, Singapore) & Alastair Marsden (University of Auckland, New Zealand)
19 February 2010 * The views here are those of the authors and do not necessarily reflect the views of UOB Singapore.
K. F. Chan
Introduction
Outline of the presentation:
CDS and CDX Graphical analysis Objectives of the study The model Econometric method Empirical results
[Diagram: CDS mechanics: the protection buyer pays a periodic premium to the protection seller, who compensates the buyer if a credit event occurs]
[Figure: CDS notional amounts outstanding by half-year, first half 2002 to first half 2009, with the credit event marked]
Graphical Analysis
Figure: Daily prices of S&P500 and CDX.NA.IG.5Y (CDX) between June 1, 2004 and June 30, 2009. Note: CDX.NA.IG.5Y refers to the Dow Jones CDX North American Investment Grade 5-Year index.
Graphical Analysis
Figure: Daily returns of S&P500 and CDX.NA.IG.5Y (CDX) between June 1, 2004 and June 30, 2009.
Objectives
Objectives of the study:
Investigate the time-series properties of the CDX returns.
Examine volatility and jump spillover between the CDX and stock markets.
The Model
The SVCJ model:
Stochastic Volatility with Correlated Jumps (SVCJ).
Belongs to the affine jump-diffusion model class of Duffie et al. (2000).
Has been examined in the stock market by Eraker et al. (2003), Eraker (2004), Broadie et al. (2006) and Li et al. (2006).
The Model
The SVCJ model:
ρ measures the correlation between returns and volatility.
N^Y and N^V are Poisson processes driving the jumps in returns and volatility, respectively.
ξ^Y and ξ^V are the jump sizes in returns and volatility, respectively.
The Model
The SVCJ model:
Assume N^Y = N^V = N (jumps in returns and volatility arrive together)
ξ^V ~ exp(μ_V)
ξ^Y ~ N(μ_Y + ρ_J ξ^V, σ_Y^2)
Assume ρ_J = 0
Econometric Method
The SVCJ model is discretized as:
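The discretized equations themselves did not survive extraction; a standard Euler discretization of the SVCJ model, in the spirit of Eraker et al. (2003), reads as follows (our notation, not necessarily the slide's):

$$
\begin{aligned}
Y_{t+1} &= Y_t + \mu + \sqrt{V_t}\,\varepsilon^Y_{t+1} + \xi^Y_{t+1} J_{t+1},\\
V_{t+1} &= V_t + \kappa(\theta - V_t) + \sigma_V \sqrt{V_t}\,\varepsilon^V_{t+1} + \xi^V_{t+1} J_{t+1},
\end{aligned}
$$

where $(\varepsilon^Y_{t+1}, \varepsilon^V_{t+1})$ are standard normal shocks with correlation $\rho$ and $J_{t+1}$ is a Bernoulli jump indicator with success probability $\lambda$.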
Econometric Method
We estimate the model using the Markov chain Monte Carlo (MCMC) method. The idea is to estimate the latent variables and model parameters from their joint posterior density.
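The sampler itself is beyond a short example, but simulating the discretized model in R shows the latent structure (volatilities, jump times, jump sizes) that the MCMC sampler must recover. All parameter values below are purely illustrative, not the estimates of the paper:

```r
## Simulate a discretized SVCJ path (illustrative sketch; hypothetical parameters).
set.seed(1)
n       <- 1000
mu      <- 0.05; kappa <- 0.02; theta <- 1.0; sigma_v <- 0.1
rho     <- -0.5            # correlation between return and volatility shocks
lambda  <- 0.01            # jump intensity
mu_y    <- -2; sigma_y <- 2; mu_v <- 1.5   # jump size parameters (rho_J = 0)

Y <- numeric(n); V <- numeric(n); V[1] <- theta
for (t in 1:(n - 1)) {
  eps_v <- rnorm(1)
  eps_y <- rho * eps_v + sqrt(1 - rho^2) * rnorm(1)
  J     <- rbinom(1, 1, lambda)        # common jump indicator in both equations
  xi_v  <- rexp(1, rate = 1 / mu_v)    # volatility jump size, exp(mu_v)
  xi_y  <- rnorm(1, mu_y, sigma_y)     # return jump size (rho_J = 0)
  V[t + 1] <- max(V[t] + kappa * (theta - V[t]) +
                  sigma_v * sqrt(V[t]) * eps_v + xi_v * J, 1e-8)
  Y[t + 1] <- Y[t] + mu + sqrt(V[t]) * eps_y + xi_y * J
}
```

The `max(..., 1e-8)` guard simply keeps the Euler step from producing a negative variance on this discrete grid.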
Empirical Results
Empirical Results
Figure: Estimated volatilities of the CDX and S&P500 indices, June 2004 to June 2009.
Empirical Results
Figure: Jump probabilities of the S&P500 and CDX indices; panels: (a) S&P500, (b) CDX.
CHAPTER 8
ANDREW ELLIS
Rmetrics is a collection of R packages originally created for teaching computational finance and financial engineering by the Econophysics Group at ETH Zurich. The Rmetrics packages cover a wide range of topics such as time series analysis, hypothesis testing, volatility forecasting, extreme value theory, pricing of derivatives, portfolio analysis, risk management, and trading analysis. Rmetrics offers an open source teaching solution with state-of-the-art algorithms to help the integration of academic research into industry. All packages are released under the GNU GPL license. Many of the functions contained in this collection are used not only by students at ETH Zurich, but also in many other academic institutes and business schools worldwide. Furthermore, the Rmetrics packages are increasingly being used as a code archive for rapid model prototyping in business environments such as banks, fund management firms, and insurance companies. Besides software development, the Rmetrics Association supports further activities: a high-quality documentation project with the publication of ebooks and user guides for R/Rmetrics packages, the R-in-Finance special interest group, the organization of user and developer workshops, summer schools and conferences, and the organization of student internships at ETH Zurich. The Rmetrics Association is a non-profit foundation under Swiss law.
www.rmetrics.org
What is Rmetrics?
Andrew Ellis Rmetrics Association & ETH Zurich www.rmetrics.org/
Friday, 19 February 2010
R
The S language was developed by John Chambers at Bell Labs.
R is the open source version of S.
R has hundreds of contributed packages, available on CRAN: www.r-project.org
Rmetrics packages
A collection of R packages originally created for teaching computational finance and financial engineering by the Econophysics Group at ETH Zurich
Covers topics such as time series analysis, portfolio optimization, extreme value theory, risk management
Rmetrics packages
All packages are released under the GNU General Public License (GPL)
They include implementations of the latest research, thus making the resulting methods and techniques available to everybody
Rmetrics packages
Rmetrics packages are available on CRAN (stable versions) and on Rforge
Rforge
R-Forge offers a central platform for the development of R packages
It offers easy access to the SVN repository; packages are built and checked daily (binaries for Windows and OS X)
Rforge
http://r-forge.r-project.org/projects/rmetrics/
Rmetrics has 47 packages on Rforge, currently 20 developers
Packages include: fPortfolio, fGarch, ...
Rmetrics packages are also included as part of ...
The R-sig-Finance mailing list is introduced
2008: packages are hosted on the new Rforge server in Vienna
Organization of Rmetrics
The Rmetrics Association was founded as an interest group, and is now organized as a non-profit association under Swiss law
The Rmetrics Association provides software packages, writes documentation, organizes and funds student projects and workshops
https://stat.ethz.ch/mailman/listinfo/r-sig-finance
Rmetrics documentation
Rmetrics aims to provide first-class, up-to-date documentation of its packages
Available ebooks
Basic R for Finance (published as draft)
A Discussion of Time Series Objects for R in Finance (free)
Planned ebooks
Advanced Portfolio Optimization with R/Rmetrics
Rmetrics Events
Meielisalp workshop: takes place every year in June/July in the Swiss mountains (limited to 50 participants, so register early)
Conference in Singapore
Student internships
Rmetrics provides student internships at the Econophysics Group, Institute of Theoretical Physics, ETH Zurich, e.g., for work on documentation
Sponsorship
Sponsors for student internships and events so far include: Finance Online (Zurich), Insightful (Tibco), Reechem (Hedge Fund), Invesco, Theta Fund, Revolution, Mango Solutions, ETH, RMI
Join Rmetrics
Add your packages to the Rmetrics project
Support student internships
Buy the books
Sponsor workshops and conferences
www.rmetrics.org
CHAPTER 9
ANMOL SETHY
Measurement of de facto exchange rate regimes has been an area of interest to the economics community as well as to financial market traders, albeit for different purposes. A continuous measurement of exchange rate flexibility at low frequency is useful to economists in obtaining results that provide a glimpse into the open-macro-economy framework. Traders, on the other hand, are more interested in looking for high frequency changes in the exchange rate regime to assimilate new information in expectations for currency movements. In the economics literature, the existing measures of de facto currency regimes do not provide a fine structure of classifying exchange rate regimes, and often redefine classifications, making comparison over time difficult. The situation is further complicated by the fact that information from the central banks is often limited and sometimes misleading as well. The de facto exchange rate regime can be easily estimated by a least-squares regression for exchange rate returns, and changes in the exchange rate regime correspond to changes in the regression parameters. However, unlike in classical least-squares methods (such as the Bai & Perron framework for structural change analysis), the error variance is not a nuisance parameter but of prime interest as well, as it corresponds to the flexibility of the exchange rate regime. Hence, we extend the standard structural change framework to maximum likelihood models where we can easily incorporate the error variance as a full model parameter in an (approximately) Gaussian model. In this model we can perform testing (in historical data), monitoring (in incoming data, to evaluate its divergence from historical data), and dating of structural changes in exchange rate regimes. All three techniques (testing, monitoring, dating) are provided in the R package "fxregime". A particular challenge, however, is the dating of structural changes, as the algorithm's complexity is of order O(n^2).
A simple way to speed this process up is to parallelize the search for the breakpoints. This has been implemented using the foreach package in R, in such a way that the code is agnostic to whether the underlying system is a multicore machine (in which case the multicore library is deployed) or a cluster (in which case the snow library is employed). Parallel computing, however, leads to computational and process time gains only when the time series under study is long.
Package fxregime
Continuous measure of de facto currency regimes
Achim Zeileis (1), Ajay Shah (2), Ila Patnaik (2), Vimal Balasubramaniam (2), Anmol Sethy (3)
(1) WU Wien; (2) National ...; (3) ... (Singapore)
1 Outline
2 De facto exchange rate regimes
3 Purpose of measuring currency regimes
4 Estimation technique
5 Some results
6 Recap
An exchange rate regime is the way a country manages its currency with respect to other currencies.
Management of the currency and its benefits are not entirely clear in the academic literature.
The exchange rate regime has an impact on:
1. Financial flows and market efficiency
2. Value of trade (imports and exports)
3. Inflation in the economy
4. Interest rates in the economy
Currency exposure
Risk of macroeconomic crisis and build-up in pressure
Long-term understanding of where an economy stands, vis-a-vis the impossible trinity
Financial development
Assessment of central bank policy on exchange rate management
Policy implications:
Stated intentions of the central bank do not reflect reality.
Measurement of de facto currency regimes becomes significant given such differences.
The academic literature has used various measures to classify exchange rate regimes into categories; these depend on central bank sources and multiple variables that are limited and misleading in measuring de facto currency regimes.
Often, these classifications miss the fine structure of the exchange rate regime.
For economists
The nature of the exchange rate regime and its consequences for trade and finance
The position of economies vis-a-vis the impossible trinity
Aids in further analytical research on open macroeconomics in areas of finance, trade, monetary policy and so on.
For traders
Traders can use the monitoring system which can warn them of possible break away from recent behaviour.
A valuable tool for understanding the de facto exchange rate regime in operation is a linear regression model based on cross-currency exchange rates (with respect to a suitable numeraire, e.g., CHF). If estimation involving the Singapore dollar (SGD) is desired, the model estimated is:
d log(SGD/CHF) = β1 + β2 d log(USD/CHF) + β3 d log(JPY/CHF) + β4 d log(DEM/CHF) + β5 d log(GBP/CHF) + ε
Testing for parameter stability considers
H0: β(i) = β0 for all i, against H1: β(i) ≠ β0 for some i,
where β is the k-dimensional parameter we are interested in.
Testing Process
Fit a regression model once on the whole sample
Capture the cumulative sum of model deviations
The model deviations are the empirical estimating functions for testing parameter stability
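The three steps above can be sketched in base R (the talk uses the strucchange/fxregime machinery; this is only a hand-rolled illustration on simulated data, with invented variable names):

```r
## Hand-rolled OLS-CUSUM-style fluctuation process on simulated data.
set.seed(42)
n <- 500
x <- rnorm(n)                        # e.g. d log(USD/CHF)
y <- 0.8 * x + rnorm(n, sd = 0.5)    # e.g. d log(SGD/CHF)

fit    <- lm(y ~ x)                           # fit once on the whole sample
scores <- residuals(fit) * cbind(1, x)        # empirical estimating functions
cusum  <- apply(scores, 2, cumsum) / sqrt(n)  # scaled cumulative sums

## Under parameter stability the paths fluctuate around zero;
## a structural change shows up as a systematic departure.
max_dev <- max(abs(cusum))
```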
Testing Process
Figure: Empirical fluctuation process over t, 2004-2012.
Testing Process
Figure: Empirical fluctuation process over t, 2004-2012.
On evidence of parameter instability, the attempt is to find the dates and the extent of change.
Either use an exhaustive search over all conceivable partitions, of order O(n^m), or
employ a dynamic programming approach, which can reduce this to O(n^2), as discussed in Bai and Perron (Econometrica, 2002).
The technique relies on a triangular matrix of segment costs Γ(i, j) for all 1 ≤ i < j ≤ n, where Γ(i, j) = min over (β, σ^2) of Σ_{k=i..j} ℓ(y_k, x_k, β, σ^2).
For FX rates, the variance of the error term has to be considered as a full parameter. The negative log-likelihood is
NLL(β, σ) = − Σ_{i=1}^{n} log( (1/σ) φ( (y_i − x_i^T β) / σ ) ),
where φ denotes the standard normal density.
For a given number of breaks m, the optimal breaks can thus be found. To decide upon the number of breaks, information criteria can be used.
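As a toy illustration of dating with the variance as a full model parameter, the following base-R sketch locates a single break by minimizing the summed Gaussian negative log-likelihoods of the two candidate segments (the real implementation handles m breaks via dynamic programming):

```r
## Toy break dating: one break in mean and variance, chosen by minimizing
## the summed segment negative log-likelihoods (sigma is a full parameter).
set.seed(1)
y <- c(rnorm(150, 0, 1), rnorm(150, 0, 3))   # variance triples at t = 151

seg_nll <- function(z) {                     # concentrated Gaussian NLL
  s2 <- mean((z - mean(z))^2)                # MLE of the segment variance
  length(z) / 2 * (log(2 * pi * s2) + 1)
}

cand <- 30:(length(y) - 30)                  # enforce a minimal segment length
cost <- sapply(cand, function(b)
  seg_nll(y[1:b]) + seg_nll(y[(b + 1):length(y)]))
breakpoint <- cand[which.min(cost)]          # estimated break date
```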
Monitoring currency regimes is a continuation of the empirical process as new data tick in.
Compute the empirical estimating function for each incoming observation and update the cumulative and recursive processes.
However, an assumption has to be made about the model initially used to set up the efp.
Implementation issues
The dating of structural changes is a particular challenge, as the algorithm's complexity is of order O(n^2). The attempt has been to speed this process up by parallelising the search for the breakpoints.
Parallel estimation
Packages foreach, multicore and snow have been employed to tackle this issue.
Communication losses override the computational gain when the time series involved is not long, so this is useful only when the time series is long.
Computational time drops to an extent of 25-30%.
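A minimal sketch of the idea: each candidate breakpoint can be scored independently, so the search is embarrassingly parallel. The talk uses foreach with the multicore/snow backends; the sketch below uses the base `parallel` package (their successor) to the same effect:

```r
## Parallel scoring of candidate breakpoints with base R's 'parallel' package.
library(parallel)

set.seed(1)
y <- c(rnorm(200, 0, 1), rnorm(200, 0, 3))
seg_nll <- function(z) {
  s2 <- mean((z - mean(z))^2)
  length(z) / 2 * (log(2 * pi * s2) + 1)
}
cand <- 30:(length(y) - 30)

## Each candidate break is independent of the others -> parallel map.
## mclapply forks only on Unix; fall back to one core elsewhere.
cores <- if (.Platform$OS.type == "unix") 2L else 1L
cost  <- unlist(mclapply(cand, function(b) {
  seg_nll(y[1:b]) + seg_nll(y[(b + 1):length(y)])
}, mc.cores = cores))
breakpoint <- cand[which.min(cost)]
```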
Figures: Exchange rate series against the USD (1995-2010), the corresponding empirical fluctuation processes, and further results for 2007-2010.
In summary...
fxregime can aid in both historical analysis as well as monitoring of exchange rates
Employs generalized fluctuation tests for detecting structural breaks
Extends the usual methods for analysis by incorporating the variance as a full parameter
Parallelization for speedier computations
References
Achim Zeileis, Ajay Shah, Ila Patnaik (2010). Testing, Monitoring, and Dating Structural Changes in Exchange Rate Regimes. Computational Statistics & Data Analysis, forthcoming. Preprint at http://statmath.wu.ac.at/~zeileis/papers/Zeileis+Shah+Patnaik-2010.pdf
Achim Zeileis, Ajay Shah, Ila Patnaik, Anmol Sethy (2010). fxregime: Exchange Rate Regime Analysis. R package version 1.0-0. URL: http://CRAN.R-project.org/package=fxregime
Thank You
CHAPTER 10
KARIM CHINE
Karim Chine
Cloud Era Ltd, Cambridge UK
Abstract
Elastic-R is a new portal built using the Biocep-R platform. It enables statisticians, computational scientists, financial analysts, educators and students to use cloud resources seamlessly; to work with R engines and use their full capabilities from within simple browsers; to collaborate, share and reuse functions, algorithms, user interfaces, R sessions, servers; and to perform elastic distributed computing with any number of virtual machines to solve computationally intensive problems.
PART III
SATURDAY MORNING
CHAPTER 11
DEFENG SUN
A Majorized Penalty Approach for Calibrating Rank Constrained Correlation Matrix Problems
Sun Defeng
Risk Management Institute National University of Singapore
Abstract:
In this paper, we aim at finding a nearest correlation matrix to a given symmetric matrix, measured by the componentwise weighted Frobenius norm, with a prescribed rank and bound constraints on its correlations. This is in general a non-convex and difficult problem due to the presence of the rank constraint. To deal with this difficulty, we first consider a penalized version of this problem and then apply the essential ideas of the majorization method to the penalized problem by solving iteratively a sequence of least squares correlation matrix problems without the rank constraint. The latter problems can be solved by a recently developed quadratically convergent smoothing Newton-BiCGStab method. Numerical examples demonstrate that our approach is very efficient for obtaining a nearest correlation matrix with both rank and bound constraints.
A Majorized Penalty Approach for Calibrating Rank Constrained Correlation Matrix Problems
Defeng Sun
Department of Mathematics and Risk Management Institute National University of Singapore
This is joint work with Yan Gao at NUS.
On January 15, 2010, I received the following email: From: XXX@grupobbva.com Sent: Friday, January 15, 2010 5:14 PM To: Sun Defeng Cc: XXX XXX Subject: Nearest Correlation Matrix: Faster code request Dear Mr. Sun, Please let me introduce myself. My name is XXX and I work in one of Spain's major banks, BBVA. The position that I hold is Quantitative Analyst. We have been looking for quite a while for nearest correlation matrix problem algorithms until we found your paper "An augmented Lagrangian dual approach for the H-weighted nearest correlation matrix problem" ...,
R/Rmetrics Computational Topics in Finance Conference NUS/SUN 2 / 40
which shows not only a feasible approach, but also robust and fast results. I was also happy to check and test the MATLAB code that you provide in your web page ..., with outstanding results. We are planning to apply your algorithm to large scale problems (around 2000x2000 correlation matrixes) through a C++ implementation using LAPACK library routines; this is why we are particularly interested in performance. Could you please provide us with any faster code (MATLAB or other) for this matter? Thank you in advance and sorry for any inconvenience this may cause you. Regards, XXX
On November 18, 2009, I received the following email: From: XXXXX@fortis.com Sent: Wednesday, November 18, 2009 5:11 PM To: Sun Defeng Subject: nearest correlation matrix Dear Professor Sun, For R&D purpose, I am currently using your algorithms CorNewton and CorNewton3 Wnorm, which I downloaded from your webpage. The results look very satisfactory. I was wondering whether you would have another version of the algorithm available in C or C++. Best Regards, Dr. XXX XXX BNP Paribas Equity Derivatives Quantitative Research
On October 27, 2009, I received this from Universiteit van Tilburg: My thesis is about correlations in a pension fund pooling. It is important for economic capital calculations. For some risks such as operational risk, I dont have data and hence I need to consult for an expert opinion. Then I might end up with not PSD matrices. Therefore, I need to calculate nearest correlation matrix. In my given correlation matrix, I want to x the correlations, which are data driven and I want the rest of the correlations not smaller than 0.1 from original matrix. Your code is very convenient for my study. However, ...
On November 3, 2009: Thank you for your valuable time, comments and helping me about solving my problem. I gave no chance that my xed constraints could be non-PSD before. Your advice solves the problem. I will modify my study in the light of it.
The model
In this talk, we are interested in the following rank constrained covariance matrix problem:
min  ‖H ∘ (X − G)‖_F
s.t. X_ii = 1, i = 1, . . . , n,
     X_ij = e_ij, (i, j) ∈ B_e,
     X_ij ≥ l_ij, (i, j) ∈ B_l,
     X_ij ≤ u_ij, (i, j) ∈ B_u,
     rank(X) ≤ r,
     X ∈ S^n_+,    (1)
where B_e, B_l, and B_u are three index subsets of {(i, j) | 1 ≤ i < j ≤ n} satisfying B_e ∩ B_l = ∅, B_e ∩ B_u = ∅, and l_ij < u_ij for any (i, j) ∈ B_l ∩ B_u.
continued
Here S^n and S^n_+ are, respectively, the space of n × n symmetric matrices and the cone of positive semidefinite matrices in S^n.
H ≥ 0 is a weight matrix. H_ij is larger if G_ij is better estimated; H_ij = 0 if G_ij is missing.
A matrix X ∈ S^n is called a correlation matrix if X is positive semidefinite (i.e., X ∈ S^n_+) and X_ii = 1, i = 1, . . . , n.
X_ii = 1, i = 1, . . . , n,
rank(X) ≤ r.    (2)
X_ii = 1, i = 1, . . . , n,
rank(X) ≤ r.    (3)
In finance and statistics, correlation matrices are in many situations found to be inconsistent, i.e., not positive semidefinite. These include, but are not limited to:
Structured statistical estimations; data coming from different time frequencies
Stress testing regulated by Basel II
Expert opinions in reinsurance, etc.
G =
1.0000 0.9872 0.9485 0.9216 0.0485 0.0424
0.9872 1.0000 0.9551 0.9272 0.0754 0.0612
0.9485 0.9551 1.0000 0.9583 0.0688 0.0536
0.9216 0.9272 0.9583 1.0000 0.1354 0.1229
0.0485 0.0754 0.0688 0.1354 1.0000 0.9869
0.0424 0.0612 0.0536 0.1229 0.9869 1.0000

The eigenvalues of G are: 0.0087, 0.0162, 0.0347, 0.1000, 1.9669, and 3.8736.
Stress tested
Let's change G [set G(1, 6) = G(6, 1) from 0.0424 to 0.1000]:
1.0000 0.9872 0.9485 0.9216 0.0485 0.1000
0.9872 1.0000 0.9551 0.9272 0.0754 0.0612
0.9485 0.9551 1.0000 0.9583 0.0688 0.0536
0.9216 0.9272 0.9583 1.0000 0.1354 0.1229
0.0485 0.0754 0.0688 0.1354 1.0000 0.9869
0.1000 0.0612 0.0536 0.1229 0.9869 1.0000
The eigenvalues of G are now: −0.0216, 0.0305, 0.0441, 0.1078, 1.9609, and 3.8783, so the stressed matrix is no longer positive semidefinite.
Missing data
On the other hand, some correlations may not be reliable or even missing:
G =
1.0000 0.9872 0.9485 0.9216 0.0485   *
0.9872 1.0000 0.9551 0.9272 0.0754 0.0612
0.9485 0.9551 1.0000 0.9583 0.0688 0.0536
0.9216 0.9272 0.9583 1.0000 0.1354 0.1229
0.0485 0.0754 0.0688 0.1354 1.0000 0.9869
  *    0.0612 0.0536 0.1229 0.9869 1.0000
(* denotes a missing entry)
min  (1/2) ‖X − G‖²_F
s.t. X_ii = 1, i = 1, . . . , n,  X ∈ S^n_+,    (5)
which is known as the nearest correlation matrix (NCM) problem, a terminology coined by Nick Higham (2002).
where X is a real Hilbert space equipped with a scalar product ⟨·, ·⟩ and its induced norm ‖·‖, A : X → R^m is a bounded linear operator, Q = {0}^p × R^q_+ is a polyhedral convex cone, 1 ≤ p ≤ m, q = m − p, and K is a closed convex cone in X.
where ⊥ means orthogonality, Q* is the dual cone of Q, and K* is the dual cone of K.
Equivalently, the optimality conditions read
(x + z) − c − A*y = 0,
Q* ∋ y ⊥ Ax − b ∈ Q,
x = Π_K(x + z).
Consequently, by first eliminating (x + z) and then x, we get
y − (A Π_K(c + A*y) − b) ∈ Q*,
which is equivalent to
F(y) := y − Π_{Q*}[ y − (A Π_K(c + A*y) − b) ] = 0,  y ∈ R^m.
The dual function takes the form
θ(y) = (1/2) ‖Π_K(c + A*y)‖² − ⟨b, y⟩ − (1/2) ‖c‖².
F is not differentiable at y; F involves two metric projection operators; even if F is differentiable at y, it is too costly to compute ∇F(y).
In our setting, X = S^n and K = S^n_+.
The projector
For n = 1, we have
x+ := Π_{S^1_+}(x) = max(0, x).
Note that x+ is only piecewise linear, not smooth. (x+)² is continuously differentiable with
(1/2) ∇(x+)² = x+.
Let X ∈ S^n have the spectral decomposition X = P Λ P^T, where Λ is the diagonal matrix of eigenvalues of X and P is a corresponding orthogonal matrix of orthonormal eigenvectors. Then
X+ := Π_{S^n_+}(X) = P Λ+ P^T.
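In R, this projector is a few lines (a generic sketch via `eigen`, not the CorNewton code):

```r
## Projection of a symmetric matrix onto the PSD cone via its spectral
## decomposition: keep the nonnegative eigenvalues, zero out the rest.
psd_project <- function(X) {
  ed <- eigen((X + t(X)) / 2, symmetric = TRUE)   # symmetrize for safety
  ed$vectors %*% diag(pmax(ed$values, 0), nrow(X)) %*% t(ed$vectors)
}

## Example: an indefinite symmetric matrix
X  <- matrix(c(1, 0.9, 0.9, -0.5), 2, 2)
Xp <- psd_project(X)
min(eigen(Xp, symmetric = TRUE)$values)   # nonnegative up to rounding
```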
We have ∇ (1/2) ‖X+‖²_F = X+, but (1/2) ‖X+‖²_F is not twice continuously differentiable. X+ is not piecewise smooth, but strongly semismooth².
2 D.F. Sun and J. Sun. Semismooth matrix valued functions. Mathematics of Operations Research 27 (2002) 150-169.
A quadratically convergent Newton's method is then designed by Qi and Sun³. The written code is called CorNewton.m.
"This piece of research work is simply great and practical. I enjoyed reading your paper." March 20, 2007, a home loan nancial institution based in McLean, VA. "Its very impressive work and Ive also run the Matlab code found in Defengs home page. It works very well." August 31, 2007, a major investment bank based in New York city.
3 H.D. Qi and D.F. Sun. A quadratically convergent Newton method for computing the nearest correlation matrix. SIAM Journal on Matrix Analysis and Applications 28 (2006) 360-385.
Inequality constraints
If we have lower and upper bounds on X, F takes the form
F(y) = y − Π_{Q*}[ y − (A Π_{S^n_+}(G + A*y) − b) ],
which involves double layered projections over convex cones. A quadratically convergent smoothing Newton method is designed by Gao and Sun⁴. Again, highly efficient.
4 Y. Gao and D.F. Sun. Calibrating least squares covariance matrix problems with equality and inequality constraints. SIAM Journal on Matrix Analysis and Applications 31 (2009) 1432-1457.
The rank constrained problem
min  (1/2) ‖H ∘ (X − G)‖²_F
s.t. A(X) ∈ b + Q,  X ∈ S^n_+,  rank(X) ≤ k
can be written with the rank constraint expressed through the eigenvalues:
λ_i(X) = 0, i = k + 1, . . . , n.
The penalized problem takes the form
min  (1/2) ‖H ∘ (X − G)‖²_F + c Σ_{i=k+1}^{n} λ_i(X)
   = (1/2) ‖H ∘ (X − G)‖²_F + c ⟨I, X⟩ − c Σ_{i=1}^{k} λ_i(X)
s.t. A(X) ∈ b + Q,  X ∈ S^n_+.
Majorization functions
Let h(X) := Σ_{i=1}^{k} λ_i(X) − ⟨I, X⟩.
Since h is a convex function, for given X^k we have
h(X) ≥ h_k(X) := h(X^k) + ⟨V^k, X − X^k⟩, where V^k ∈ ∂h(X^k).
Let d ∈ R^n be a positive vector such that H ∘ H ⪯ dd^T.
For example, d = max_{ij}(H_ij) e. Let D^{1/2} = diag(d_1^{0.5}, . . . , d_n^{0.5}).
At iteration k, the weighted term (1/2) ‖H ∘ (X − G)‖²_F is majorized by a quadratic model whose curvature term is the diagonally weighted norm (1/2) ‖D^{1/2}(X − X^k)D^{1/2}‖²_F.
which is a diagonally weighted least squares correlation matrix problem:
min  (1/2) ‖D^{1/2}(X − X^k)D^{1/2}‖²_F
s.t. A(X) ∈ b + Q,  X ∈ S^n_+.
Now, we can use the two Newton methods introduced earlier for the majorized subproblems! The objective decreases monotonically: f_c(X^{k+1}) < f_c(X^k) < · · · < f_c(X^1).
Figure: Computing time (secs) versus relative gap for PenCorr, Major, SemiNewton and DualBFGS.
PenCorr results:
time (secs): 11640.0, 1570.0, 899.0, 318.3, 326.3
residue: 1.872e2, 1.011e2, 8.068e1, 7.574e1, 7.574e1
Final remarks
A code named PenCorr.m can efficiently solve all sorts of rank constrained correlation matrix problems. It is faster when the rank is larger. The techniques may be used to solve other problems, e.g., low rank matrix problems with sparsity. The limitation is that it cannot solve problems for matrices exceeding dimension 4,000 by 4,000 on a PC due to memory constraints.
End of talk
Thank you! :)
CHAPTER 12
YOHAN CHALABI
Generalized autoregressive conditional heteroskedasticity (GARCH) models are nowadays widely used to reproduce stylized facts of financial time series and play an essential role in risk management and volatility forecasting. Although these models are well studied, numerical problems may arise in the estimation of the parameters when outliers are present in the data set. Indeed, maximum likelihood estimation can be sensitive to outliers. To overcome this limitation, weighted trimmed likelihood estimation (WTLE) has recently been introduced. In this talk, we extend the GARCH family of models to the weighted trimmed likelihood procedure to obtain robust estimates. Other robust GARCH estimators will be presented, and an extensive Monte Carlo study will be used to compare the different approaches.
Keywords: GARCH models; Robust estimation; Trimmed Weighted Likelihood; M-estimates; Outliers
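To make the idea of trimmed likelihood concrete, here is a minimal base-R sketch for a Gaussian GARCH(1,1); the function names and the crude parameter constraints are ours, not the fGarch API or the WTLE procedure of the talk:

```r
## Per-observation Gaussian NLL contributions of a GARCH(1,1) recursion.
garch_nll_contrib <- function(par, x) {
  omega <- par[1]; alpha1 <- par[2]; beta1 <- par[3]
  n  <- length(x)
  s2 <- numeric(n); s2[1] <- var(x)
  for (t in 2:n) s2[t] <- omega + alpha1 * x[t - 1]^2 + beta1 * s2[t - 1]
  0.5 * (log(2 * pi * s2) + x^2 / s2)
}

## Trimmed objective: drop the largest contributions (potential outliers).
trimmed_nll <- function(par, x, trim = 0.05) {
  if (any(par <= 0) || par[2] + par[3] >= 1) return(1e10)  # crude constraints
  ll   <- garch_nll_contrib(par, x)
  keep <- floor((1 - trim) * length(ll))
  sum(sort(ll)[1:keep])
}

set.seed(1)
x   <- rnorm(500) * 0.5                       # placeholder data
fit <- optim(c(0.05, 0.05, 0.8), trimmed_nll, x = x)
fit$par                                       # trimmed-likelihood estimates
```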
CHAPTER 13
JOEL YU
joel.yu@up.edu.ph
Foreign exchange rate modeling has gone through a host of developments in accounting for its nonlinear characteristics. Since the seminal work of Hamilton (1989), Markov-switching models (MSM) have been increasingly used to quantify the nonlinear aspects of exchange rates. Initial research papers show that the MSM can describe the movements of exchange rates over time very well (e.g., Engel and Hamilton, 1990). Today, this approach is widely used to characterize exchange rate movements and to explore the possibility of improving exchange rate forecasting.
This paper employs an MSM in characterizing the short-term movements in the won-dollar rate. Its ability to account for nonlinear aspects provides insights in identifying the relevance of explanatory variables in affecting the movements in the won-dollar pair under different states. Results provide strong support for nonlinearity in the weekly changes in the won-dollar rate for the period March 2000 to May 2009. The model shows that during periods of high volatility, changes in interest rates are not relevant in affecting the movements in exchange rates, while equity return takes on increased importance. These results may serve as a guide for monetary policy in Korea in terms of intervention during periods of highly volatile markets.
A Markov-Switching Model of the Won-Dollar Rate
Joel C. Yu, College of Business Administration, University of the Philippines. R/Rmetrics Conference, National University of Singapore, 18-19 February 2010.
Introduction
Introduction
Objective
This paper employs a Markov-switching model to characterize the short-term movements of the won-dollar exchange rate.
Motivation
Markov-switching models (MSM) have been increasingly used to quantify the nonlinear aspects of exchange rate movements. In an MSM, the parameters of the process are allowed to differ across unobserved states that follow a Markov chain.
Markov-Switching Model
y_t − μ(s_t) = φ1 [y_{t−1} − μ(s_{t−1})] + . . . + φ4 [y_{t−4} − μ(s_{t−4})] + u_t
where y_t is the log rate of change in real GDP times 100, and μ(s_t) is a conditional mean that changes between two states, s_t. The non-observable state, s_t, is assumed to follow an ergodic first-order Markov chain process described by transition probabilities Pr(s_t = j | s_{t−1} = i) = p_ij, where Σ_j p_ij = 1. These transition probabilities are generally summarized in a matrix P given by:
P = [ p11  p12 ]
    [ p21  p22 ]
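A small R sketch makes the transition mechanism concrete: simulate a two-state chain from a hypothetical P and recover the transition probabilities empirically (the matrix below is illustrative, not the paper's estimate):

```r
## Simulate a two-state first-order Markov chain and estimate P empirically.
set.seed(7)
P <- matrix(c(0.98, 0.02,
              0.10, 0.90), nrow = 2, byrow = TRUE)  # rows sum to one

n <- 5000
s <- integer(n); s[1] <- 1
for (t in 2:n) s[t] <- sample(1:2, 1, prob = P[s[t - 1], ])

## Empirical transition frequencies approximate P for long chains.
P_hat <- prop.table(table(head(s, -1), tail(s, -1)), margin = 1)
```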
The Won-US Dollar Rate Model
dkrw_t = α1 + β1 Δi_t + γ1 r_t + u_t  if s_t = 1
dkrw_t = α2 + β2 Δi_t + γ2 r_t + v_t  if s_t = 2
where:
dkrw_t: log rate of change at time t in the weekly won-dollar (krw) rate times 100
Δi_t: weekly change in the overnight call rate at time t
r_t: log rate of change at time t in the weekly KOSPI times 100
u_t ~ NID(0, σ1²), v_t ~ NID(0, σ2²)
State Classification
Smoothed Probability: Regime 1
State Classification
State 1 vs State 2 Periods (yyyy:mm:dd)
State 1: 2000:03:15-2000:11:15, 2001:01:03-2001:03:14, 2001:04:18-2003:03:05, 2003:03:19-2004:11:10, 2004:12:08-2008:03:05, 2008:05:21-2008:08:06
State 2: 2000:11:22-2000:12:27, 2001:03:21-2001:04:11, 2003:03:12-2003:03:12, 2004:11:17-2004:12:01, 2008:03:12-2008:05:14, 2008:08:13-2009:05:13
Summary Statistics
Sample statistics (Total / State 1 / State 2):
DEXR:   mean 0.09 / 0.16 / -0.35;  variance 15.99 / 13.37 / 33.23;  std dev 4.00 / 3.66 / 5.76
DCRATE: mean -0.01 / 0.00 / -0.05; variance 0.01 / 0.01 / 0.05;     std dev 0.12 / 0.09 / 0.22
RETURN: mean 0.02 / -0.08 / 0.65;  variance 3.10 / 0.65 / 18.81;    std dev 1.76 / 0.80 / 4.34
(n = 479 Total, 415 in State 1, 64 in State 2)
Test of Linearity
                   Non-linear    Linear
Log Likelihood     -677.4416    -892.985
AIC Criterion        2.8703      3.7452
HQ Criterion         2.9046      3.7589
SC Criterion         2.9574      3.7801
LR Linearity Test  431.0869  (p-values: Chi(4) 0.0000, Chi(6) 0.0000, Davies 0.0000)
Switching Probabilities
Transition matrix:
          State 1  State 2
State 1   0.9819   0.0181
State 2   0.0955   0.9045
Parameter Estimates
             Linear Model         State 1             State 2
             Coef     t-val       Coef     t-val      Coef     t-val
(const)      0.0344   0.4805     -0.0654  -1.6176     0.4043    1.0658
(Δi_t)      -0.8767  -1.4481     -0.9497  -2.1174     0.2361    0.1424
(r_t)       -0.2033 -11.3217     -0.0654  -5.9432    -0.5423   -8.3157
Conclusion
First, the results provide strong support for nonlinearity in the weekly changes in the won-dollar rate over the sample period. Second, the estimated MSM shows that during periods of high volatility, changes in interest rates are not relevant in affecting the movements of the exchange rate, while equity returns take on increased importance. Lastly, these results may serve as a guide for monetary policy in Korea in terms of intervention during periods of highly volatile markets.
CHAPTER 14
Abstract Financial credit and risk scoring is a very important aspect of risk management. In this paper, we demonstrate the application of learning Bayesian networks for credit and risk scoring. A Bayesian network is a graphical model which encodes the joint probability distribution for a set of random variables. The advantage of Bayesian network classifiers in credit and risk scoring is their capacity to provide a clear insight into the structural relationships between variables affecting risk and creditworthiness. The learning Bayesian network algorithm involves the construction of priors for network parameters and learning of parameters via conjugate updating. The network structure is developed using the network score via a heuristic search strategy. Illustrations using a credit scoring data set are demonstrated using R.
Corresponding author. Email: ckleong@unisim.edu.sg.
CHAPTER 15
DIETHELM WÜRTZ
Diethelm Würtz*, Yohan Chalabi*, Andrew Ellis**, William Chen* and Stefan Theussl***
*Swiss Federal Institute of Technology, Zurich **Finance Online GmbH, Zurich ***University of Economics and Business Administration, Vienna
wuertz@phys.ethz.ch
The underlying assumption of Markowitz' modern portfolio theory states that the measure of investment risk is described by the sample variance of asset returns and that all securities can be adequately represented by a multivariate elliptically-contoured distribution. These assumptions do not always represent the realities of the investment markets, where we are confronted with non-stationary behavior and unusual market behavior due to structural breaks, bubbles, and even market crashes. Risk is becoming more and more related to bad outcomes and losses, which are considered to weigh more heavily than gains. This view has been put forward by researchers in finance, economics and psychology, and has in turn led to the introduction of more sophisticated risk measures and methods to analyze portfolios. We give a summary of postmodern investment strategies and sophisticated methods available to fund managers and their clients, show how to use them in practice, and show how they are made available in the R/Rmetrics portfolio software package.
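As a minimal illustration of the classical Markowitz starting point that the talk moves beyond, the global minimum-variance weights have the closed form w = Σ⁻¹1 / (1ᵀ Σ⁻¹ 1). The base-R sketch below uses simulated returns, with asset names borrowed from the talk's Swiss index set; fPortfolio automates this and far more:

```r
## Closed-form global minimum-variance (GMV) portfolio weights on
## simulated return data (the asset universe is only a stand-in).
set.seed(3)
R <- matrix(rnorm(250 * 4, sd = 0.01), ncol = 4,
            dimnames = list(NULL, c("SBI", "SPI", "SII", "LMI")))
Sigma <- cov(R)                       # sample covariance estimator
ones  <- rep(1, ncol(Sigma))
w_gmv <- solve(Sigma, ones) / sum(solve(Sigma, ones))
sum(w_gmv)                            # full-investment: weights sum to one
```

Swapping `cov(R)` for a robust or shrinkage estimator (MCD, MVE, OGK, Ledoit-Wolf) changes only the Sigma line, which is exactly the modularity the talk exploits.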
[Figures: DJIA series annotated with 9-11, the DJIA at 14000, and the Lehman failure of 2008-09-15; 5Y rolling risk-return of the six series SBI, SPI, SII, LMI, MPI, ALT; efficient frontier in the mean-Stdev view (MV | solveRquadprog, minRisk = 0.249) with the equal weights portfolio (EWP), tangency portfolio (TGP) and global minimum risk portfolio (GMV) marked; weighted returns, covariance risk budgets and weights plotted against target risk.]
Estimation of Means and Covariances Sample Estimator Robust Estimators MCD, MVE OGK MCD MVE, OGK, Shrinkage Methods Bayes Stein Estimator y Ledoit-Wolf Estimator Random Matrix Theory Denoising
SBI (CH Bonds), SPI (CH Stocks), SII (CH Immo), LMI (World Bonds), MPI (World Stocks), ALT (World AltInvest)
Lower tail dependence coefficients (pairwise): SBI-SPI 0, SBI-SII 0.055, SBI-LMI 0.064, SBI-MPI 0, SBI-ALT 0, SPI-SII 0, SPI-LMI 0, SPI-MPI 0.352, SPI-ALT 0.273, SII-LMI 0.075, SII-MPI 0, LMI-MPI 0, LMI-ALT 0, MPI-ALT 0.124
[Figures: weights rebalance (horizon = 12m, smoothing 3m, startup 1m, shift 1m) and portfolio vs. benchmark for the MV strategy with shrinkEstimator, over 2006-12-20 to 2008-07-09.]
Strategy: myPortfolioStrategy
                    Portfolio  Benchmark
Total Return            -0.17      -0.38
Mean Return              0.00      -0.01
StandardDev Return       0.06       0.09
Maximum Loss            -0.22      -0.25
Portfolio Specification:
Type: MV | Optimize: minRisk | Estimator: shrinkEstimator
Constraints: "maxW[1:(nAssets-1)] = 0.30"
Portfolio Strategy: MV Tangency Portfolio | Dynamic Horizon < 12M | Optimal Shrinkage Estimator (best of O = 0 ... 1) | Partial Cash Position | Max 30% Box Constraints
[Figure: cumulated returns and drawdowns from 2005-06-01 through 2009; backtest start 2006-05-31.]
Black-Litterman (1992): Fisher Black and Robert Litterman's 1992 goal was to create a systematic method of specifying and then incorporating analyst/portfolio manager views into the estimation of market parameters for portfolio optimization.
Markowitz 1952
[ k = 2, A = Infinity, Y0 = mean (R) ]
Solution: QP 1982, SOCP Programming 1994
Note: if the assets are elliptically distributed, we will get the same set of weights as for the mean-variance Markowitz portfolio! This makes it a coherent risk measure.
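As a minimal base-R sketch of the quadratic-programming solution in the Gaussian/elliptical case, the global minimum variance weights have the closed form w = Sigma^{-1} 1 / (1' Sigma^{-1} 1); the covariance matrix below is a toy example, not estimated from the benchmark series.

```r
# Global minimum variance weights from a toy 3-asset covariance matrix.
Sigma <- matrix(c(0.04, 0.01, 0.00,
                  0.01, 0.09, 0.02,
                  0.00, 0.02, 0.16), nrow = 3)
ones <- rep(1, 3)
w <- solve(Sigma, ones)   # Sigma^{-1} 1
w <- w / sum(w)           # normalize so the weights sum to one
```

The same weights come out of a general QP solver such as solveRquadprog when no return target binds, which is a useful sanity check.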
How Can We Include Expert Views into Non-Normal Portfolio Design?
The Copula Opinion Pooling (COP) approach of Meucci (2006a, 2006b) is an alternative to Black-Litterman when asset returns are not normally distributed. It makes the modelling of dependencies possible by using copulas. Because market scenarios are simulated, the approach is free from distributional assumptions concerning the variables used.
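A minimal base-R sketch of the scenario-generation step behind such copula-based approaches: correlated uniforms from a Gaussian copula are pushed through arbitrary marginal quantile functions. This is only the basic mechanism, not Meucci's full COP algorithm, and all parameters are made up.

```r
# Simulate market scenarios: Gaussian copula + fat-tailed marginals.
set.seed(1)
rho <- 0.7
R <- matrix(c(1, rho, rho, 1), 2, 2)   # copula correlation
L <- chol(R)                            # upper Cholesky factor, R = t(L) %*% L
z <- matrix(rnorm(2 * 5000), ncol = 2) %*% L   # correlated standard normals
u <- pnorm(z)                           # copula step: uniform margins
# plug u into any marginal quantile functions, e.g. Student-t returns:
scenarios <- cbind(qt(u[, 1], df = 4), qt(u[, 2], df = 4)) * 0.01
```

Views can then be imposed by reweighting these scenarios rather than by shifting a normal mean vector.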
How Can We Analyse the Non-Stationary Behaviour of a Return Series?
Example Series
Financial time series are non-stationary; e.g. the running variance contains no information on the frequency of a periodic signal, only on its amplitude. Wavelet analysis decomposes a time series into time/frequency space simultaneously. One gets information on both the amplitude of any "periodic" signals within the series, and how this amplitude varies with time.
Log Return Series Black Regime: Returns generated by a predictive heteroskedastic process Grey Regime: Returns generated by an unpredictable jump process
Grey Bars: Probability to which regime the record belongs Colored Line Bundle: Crossing indicator to quantify size and strength of the regimes
Sep 7: Federal takeover of Fannie Mae and Freddie Mac. Sep 14: Merrill Lynch sold to Bank of America; Lehman Brothers collapses. Sep 15: Lehman Brothers files for bankruptcy protection. Sep 16: Moody's and S&P downgrade ratings on AIG. Sep 17: The US Fed lends $85 billion to AIG to avoid bankruptcy. Sep 18: Paulson and Bernanke propose a $700 billion emergency bailout.
Financial Return Modeling: many fat and semi-fat tailed distributions; extreme value theory; quantile and OBRE-CVaR estimation. Assets Modeling: correlation and dependence structure analysis, copulae; asset selection by partitioning, clustering, and self-organization. Volatility Modeling and Forecasting: univariate and multivariate GARCH models; robust volatility modeling. Portfolio Optimization: performance and risk measurement and attribution; complex constraints; LP, QP, SOCP, NLP, mixed integer programming; BARRA's multifactor models; stability analysis and stress testing (under current implementation).
Use Rmetrics
Thank you
CHAPTER 16
PRATAP SONDHI
The economic fallout of the present crisis underscores the need for banks to maintain sufficient capacity to absorb systemic shocks. This capacity must be actively managed because the economic and business environment changes from time to time, as do the financial conditions of banks, individually and collectively. Thus, because of interbank linkages, each bank must be able to evaluate and monitor not only its own resilience to systemic shocks but also that of other banks and of the sector as a whole.
Some measures of bank and sector resilience are discussed and numerically illustrated that can be implemented and monitored to assist active management.
Pratap Sondhi
Concepts
Can we define proper and easy to interpret measures of bank and sector resilience to systemic shocks?
Proposition
[Figure: changes in sector expected loss (SAR bn), benchmark vs. scenario.]
Key Concepts
Systemic shock
> Correlated defaults: Adverse economic shock that might cause joint defaults of banks with similar credit or market exposure; > Domino/Contagion defaults: Complicated network of interbank liabilities linking individual banks that might directly cause failure of one bank through default of another one.
Default risk: a measure of bank risk, expressed in basis points, based on ranking bank financials on an ordinal, relative risk scale (e.g. bank ratings). Default risk may or may not measure absolute probabilities of default.
Potential loss: a measure of bank sector risk, expressed in monetary units, derived from default risk and obtained by regarding the sector as a portfolio of banks.
Resilience: a measure of the capacity to absorb shocks, obtained by evaluating the change in default risk/potential loss for a specified shock.
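The three definitions can be connected in a few lines of R: potential loss as the expected loss of a portfolio of banks, and resilience as its change under a shock. All exposures, default risks, and the recovery rate below are invented for illustration.

```r
# Sector potential loss EL = sum(exposure * PD * (1 - recovery));
# resilience is measured by the change in EL under a specified shock.
exposure <- c(10, 25, 8)            # bank exposures, e.g. SAR bn (invented)
pd       <- c(15, 30, 71.4) / 1e4   # default risk in bp -> probability
recovery <- 0.5                     # assumed deterministic recovery rate
el_base  <- sum(exposure * pd * (1 - recovery))
pd_shock <- pd * 2                  # a uniform doubling of default risk
el_shock <- sum(exposure * pd_shock * (1 - recovery))
resilience_change <- el_shock - el_base
```

A smaller resilience_change for the same shock indicates a more resilient sector.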
Context: The IMF has proposed a macro-prudential surveillance program to assess systemic risk
Identify potential macroeconomic shocks using micro and financial data, policy and analysis. Analyze financial soundness indicators (FSIs) to measure the financial system's vulnerabilities and capacity to absorb losses. Apply stress tests by combining the identified macro risks with the system's vulnerabilities.
FSIs are macro-prudential indicators aggregating micro-prudential, bank level supervisory data
Core FSIs (bank sector):
Regulatory capital ratios
Asset quality: NPLs/total loans; (NPLs - provisions)/capital; sector exposure concentrations
Earnings and profitability: ROE; ROA; interest margin; expense ratio
Liquidity: liquid asset ratio; liquid assets/ST liabilities
Market risk: FX net open position/capital; duration/maturity mismatch
Encouraged FSIs:
Other banking sector FSIs: capital/total assets; gross derivatives positions; trading income/income
Liquidity in securities markets: bid-ask spread; average daily turnover
Non-bank financial institutions: leverage ratio
Non-financial sectors: corporate leverage ratio; corporate ROE; corporate FX exposure; real estate prices
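Two of the core FSIs, the NPL ratio and return on assets, reduce to simple ratios of bank-level figures; the R sketch below uses made-up numbers.

```r
# Two core FSIs computed from (invented) bank-level figures.
npl         <- 1.2    # non-performing loans, SAR bn
total_loans <- 30     # gross loans, SAR bn
net_income  <- 0.9    # annual net income, SAR bn
avg_assets  <- 34     # average total assets, SAR bn
npl_ratio <- npl / total_loans        # NPLs / total loans
roa       <- net_income / avg_assets  # return on average assets
```

Sector-level FSIs aggregate these bank-level ratios, typically asset-weighted.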
We can develop a bank resilience measure, Default Risk, from bank specific financial factors
[Diagram: bank-specific risk factors and macro risk factors feed the analytical models (default risk model, country risk model) to produce the risk measures: model rating, country risk rating, and a ratings-to-PD table.]
The default risk model rates bank financials according to relative risk
The model is calibrated to agency bank ratings for banks across the rating spectrum and across many countries. Model ratings are mapped to a default probability scale to provide a cardinal risk measure.
Model rating and Default Risk (bp), as extracted: AAA 1; AA 2.3; A+/A 4; A/A- 6.9; BBB+ 11.4; BBB 18.6; BBB/BBB- 29.8; BBB- 46.6; BB+ 71.4; BB 107.1; BB 157.6; BB 227.3; B+ 321.3; B+ 445.3; B 605.3; B 807; B- 1055.5; CCC+ 1525.9; CCC 2341.1; D 3707.
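In R, the ordinal-to-cardinal mapping can be a named lookup vector; the pairing below is read off the extracted table (only a subset of ratings shown) and should be treated as approximate.

```r
# Map ordinal model ratings to the cardinal default risk scale (bp).
# Pairing assumed from the extracted table; a subset of notches shown.
default_risk_bp <- c(AAA = 1, AA = 2.3, BBB = 18.6, BB = 107.1,
                     B = 605.3, CCC = 2341.1, D = 3707)
rating <- c("BBB", "AA", "B")
unname(default_risk_bp[rating])   # -> 18.6 2.3 605.3
```

Vectorized lookup like this is how a whole sector of bank ratings can be turned into default probabilities in one line.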
The sector can be modeled as a portfolio of banks to gauge sector loss - a measure of sector resilience
[Diagram: bank-specific financial factors (recovery rates, default risk relative volatility, inter-bank borrowing & lending) and macro risk factors feed the analytical models (country risk model, sector portfolio model) to produce the risk measures.]
A simplified stress test is applied to the Saudi banking sector to illustrate the analysis
A benchmark risk rating profile is estimated from the default risk model, based on 2008 year-end financial statements for each of the 11 banks in the sector; country risk is initially assumed to be AA- rated; connected systemic shocks are then simulated sequentially.
Specify global percentage reductions/increases in income/expense line items to simulate a consequent financial sector shock.
Bank recovery rates are provisionally assumed to be deterministic; the sequence of shock scenarios is illustrative only and not economically linked; only a uniform income shock is applied (bank balance sheets are held constant, so no capital or liquidity shocks); the ratings model calibration is approximate; only a correlated-defaults shock is considered, for which an approximate portfolio model is employed.
[Diagram: initial state (country risk AA-, financial factors from y/e 2008) and shock scenario A.]
Country risk is a background macro risk factor affecting all banks.
Sector systemic shock: The country shock is assumed to trigger an income shock uniformly across all banks
Scenario A1 (factor 0.7), applied to: net interest income; other income; impairment - credit, % of assets; impairment - investment, % of assets.
Note: financial scenario shocks are illustrative only and are not econometrically linked here to the macro shock.
The income shock causes large, differential but correlated changes in bank default risk
[Figure: default risk (bp) for Bank 1 through Bank 11 under the income shock.]
Changes in default risk can differ even for banks with the same initial state
The change in sector loss in response to a shock measures the sector resilience to the shock
[Figure: sector expected loss (SAR bn), benchmark vs. scenario.]
[Figure: % risk contributions of Bank 1 through Bank 11.]
A portfolio analysis reveals a high level of diversifiable, or concentration, risk in the sector
Expected loss (SAR bn): Benchmark 0.56, Scenario 1.61. Risk analysis: systematic risk, diversifiable risk, total risk.
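The systematic/diversifiable split can be sketched with a one-factor model in base R: the sector loss variance decomposes into a perfectly correlated part and an idiosyncratic part. Weights, volatilities, and factor loadings below are illustrative, not the Saudi sector data.

```r
# One-factor decomposition of sector loss risk into systematic
# and diversifiable (concentration) components.
w    <- c(0.30, 0.25, 0.15, 0.10, 0.20)  # bank exposure weights (invented)
sd_i <- c(0.20, 0.25, 0.18, 0.30, 0.22)  # bank loss volatilities (invented)
beta <- c(0.80, 0.70, 0.90, 0.60, 0.75)  # loadings on the common factor
sys_var  <- sum(w * beta * sd_i)^2        # common-factor (systematic) part
idio_var <- sum((w * sd_i)^2 * (1 - beta^2))  # bank-specific part
total_sd  <- sqrt(sys_var + idio_var)
div_share <- idio_var / (sys_var + idio_var)  # diversifiable fraction
```

A high div_share, as found for the sector above, signals concentration risk: fewer, larger banks keep idiosyncratic risk from averaging away.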
Saudi sector strengths & weaknesses Commentary based on the stress test*
Saudi banking sector risk is intensified by bank concentration: a small number of banks in the sector, with similar and concentrated sources of revenue/expense.
Risks to the banking system are concentrated in three banks for the stress scenarios considered; two other banks individually have high default risk but do not contribute proportionally to sector risk due to their small size.
* Approximate model calibration and with no accounting adjustments for Islamic banks
In practice, the impact of a macro shock should be simulated by its effect on bank specific exposures
Macro Shock scenario
Credit portfolio impairment Mark to market losses Decline in interest income Decline in other income Defined by appropriate Stress test scenarios - already performed by banks
Deposit withdrawals Sell investments Reduce loan portfolio Modify long term borrowing Dynamic provisioning Reduce dividends
Balance sheet shock Income shock
ASSETS: cash & balances with SAMA; due from banks; investments, net; loans & advances; other assets; total assets; total assets ($bn). LIABILITIES & EQUITY: due to banks; customer deposits; other liabilities; term loans; total liabilities; total equity; total liabilities and equity.
Initial state: country risk AA-; bank default risk 15 bp.
A credit shock is simulated that induces a liquidity shock, requiring management action
Credit shock: write-offs 3.00%, provisions 3.00%. Induced withdrawals: demand deposits 100%, time deposits 50%. Management actions: liquidate cash and short-term deposits s.t. SAMA constraints; sell and repurchase govt. bonds (market impact is a 3% loss); reduce loan book s.t. SAMA constraints.
Ratios, initial state: Equity/Total assets 11.82%; Net interest margin 3.24%; Return on avg assets 3.11%; Return on avg equity 24.25%; Cost/income 19.81%; Net loans/Total assets 63.46%.
Country risk: AA- (initial) and AA- (scenario); bank default risk: 15 bp (initial) to 24.2 bp (scenario).
The combined credit and liquidity shocks reduce the balance sheet from $34 bn to $20 bn
ASSETS, initial state (% of initial assets): cash & balances with SAMA 5%; due from banks 3%; investments, net 25%; loans & advances 63%; other assets 3%; total assets 100%. Total assets: $34 bn (initial) to $20 bn (after shocks).
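A hedged sketch of the shock mechanics: apply the write-off and haircut percentages to a toy balance sheet (figures roughly match the 5/3/25/63/3% composition above) and recompute total assets; the withdrawal amount is invented.

```r
# Apply credit and liquidity shocks to a toy balance sheet ($bn).
assets <- c(cash = 1.7, due_from_banks = 1.0, investments = 8.5,
            loans = 21.4, other = 1.0)   # approx. 5/3/25/63/3% of $34 bn
shocked <- assets
shocked["loans"]       <- shocked["loans"] * (1 - 0.03)        # 3% write-offs
shocked["cash"]        <- shocked["cash"] - 1.0                # withdrawals met (invented amount)
shocked["investments"] <- shocked["investments"] * (1 - 0.03)  # 3% market-impact loss on sales
c(initial = sum(assets), after = sum(shocked))                 # balance sheet shrinks
```

Equity absorbs the asset-side losses, which is what drives the jump in the bank's model default risk.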
Country risk: AA- (initial) and AA- (scenario); bank default risk: 15 bp (initial) to 71.4 bp (scenario).
Transfer of the anticipated shock of the provision for loan losses from the scenario (forecast) state to the current period (initial state). However, the possibility of such dynamic provisioning or income smoothing depends on the regulatory/accounting regime. Such risk mitigation decisions depend on management's forecast of the probability of the shock scenario and require a trade-off between risk and expected return.
Country risk: AA- in all states; bank default risk: 15 bp (initial), 15 bp (with provisioning), 29.8 bp (scenario).
The decision to increase liquidity and loan loss provisions in the current period (initial state) requires a risk/expected return trade-off
Observations
The change in default risk provides a simple, scalar measure of a bank's resilience to shocks; it can be employed as an overlay on the credit, price and liquidity stress testing that banks perform, from time to time, to summarize their joint effects. For a prescribed shock it can assist decisions on how to modify resilience and what level to maintain, by allowing an evaluation of risk versus return.
PART IV
APPENDIX
List of Sponsors
www.ethz.ch
www.rmi.nus.edu.sg
www.finance.ch
www.revolution-computing.com
www.crcpress.com
www.neuraltechsoft.com
Organization
Diethelm Würtz, ETH Zurich, Switzerland
Juri Hinz, National University of Singapore
David Scott, University of Auckland
Mahendra Mehta, NeuralTechSoft Mumbai
Conference Office: Yohan Chalabi, Andrew Ellis
The Rmetrics Association was founded as an interest group in finance, and is organized as a non-profit foundation under Swiss law. The Rmetrics Association develops and provides software, offers a teaching environment with textbooks and user documentation, organizes and funds student projects and workshops, and is a partner for the banking and insurance industry.

Rmetrics Software Development

Rmetrics offers a collection of open source software packages for the R environment, created for teaching computational finance and financial engineering by the Econophysics Group at ETH Zurich. The software packages cover a wide range of topics, such as time series analysis, hypothesis testing, volatility forecasting, extreme value theory, pricing of derivatives, portfolio analysis, risk management, trading analysis and many more. Rmetrics offers an open source teaching solution with state-of-the-art algorithms to help the integration of academic research and industry. All packages are released under the GNU General Public License (GPL). Many of the functions contained in this collection are used not only by students at ETH Zurich, but also in many other academic institutes and business schools worldwide. The Rmetrics packages are increasingly being used as a code archive for rapid model prototyping in business environments such as banks, fund management firms, and insurance companies.

Rmetrics Teaching Environment

Rmetrics offers the most complete teaching environment in financial analysis, covering a wide variety of state-of-the-art techniques, some of which are not even available in commercial software projects. Students not only become acquainted with open source software, but also learn the inner workings and algorithms of financial engineering concepts by studying the source code, which is not possible with commercial software.
Rmetrics has been used for teaching in numerous universities and recognized business schools all over the world, and also to train practitioners in the industry.

Rmetrics Knowledge Transfer

The assimilation of new techniques and innovations in the banking and insurance industry is very challenging. Often, practitioners in the industry still rely on methods that have been shown to be unreliable or inadequate. Academic research has demonstrated how to overcome some critical limitations, but industrial practitioners often lag behind the pace of new academic research. The Rmetrics Association is trying to fill this gap by providing open source implementations of the latest research, thus making the resulting methods and techniques available to everybody. We are convinced that it is a fundamental role of academic institutions to share their scientific work with as large a community as possible.