
VALUE-AT-RISK FOR FIXED INCOME PORTFOLIO OF SECURITIES -
A COMPARISON OF ALTERNATIVE MODELS

A Project Study submitted in partial fulfilment of the requirements for the

Post Graduate Diploma in Management (Finance), 2013-2015

By
Abir Basak
Roll no. 220/2013-2015

Under the guidance of


Dr Alok Pandey

LAL BAHADUR SHASTRI INSTITUTE OF MANAGEMENT,


DELHI

JANUARY 2015

Abstract

We evaluate the performance of an extensive family of ARCH models in modelling the daily Value-at-Risk (VaR) of perfectly diversified portfolios in five stock indices, using a number of distributional assumptions and sample sizes. We find, first, that leptokurtic distributions are able to produce better one-step-ahead VaR forecasts; second, that the choice of sample size is important for the accuracy of the forecast, whereas the specification of the conditional mean makes little difference. Finally, despite claims to the contrary, a different and specific structure of ARCH model produces the most accurate VaR forecast for the bond index portfolio.

LAL BAHADUR SHASTRI INSTITUTE OF MANAGEMENT, DELHI

Dated: ______________

CERTIFICATE

Certified that _______________ has successfully completed the Project Study entitled
________________________________________ under my guidance. It is his/her
original work, and is fit for evaluation in partial fulfilment of the requirements for the
Two Year (Full-Time) Post Graduate Diploma in Management.

(Name of the Student with Signature)                (Name of the Guide with Signature)

TABLE OF CONTENTS

INTRODUCTION FOR THE STUDY
    Objectives of the Study

LITERATURE REVIEW

METHODS OF CALCULATING VALUE AT RISK
    Covariance Method
    Historical Simulation
    Monte Carlo Simulation
    JP Morgan RiskMetrics
    ARMA-GARCH Models

DATA & RESEARCH METHODOLOGY

BACKTESTING

ANALYSIS AND FINDINGS
    EViews Output for Data Stationarity & ARCH Effects
    Test for Data Stationarity & ARCH Effects
    Test for Normality
    Models & Specifications
    Variance Equations
    VaR Evidences
    Results of Backtesting
    LR Test

CONCLUSIONS

ANNEXURES

REFERENCES

Introduction

VaR is a number indicating the maximum amount of loss, at a certain specified confidence level, that a
financial position may incur due to some risk events/factors, say market swings (market risk), during a given
future time horizon (holding period). If the value of a portfolio today is W, one can always argue that the entire
value may be wiped out in some crisis phase, so the maximum possible loss would be today's portfolio value
itself. However, VaR does not refer to this trivial upper bound of the loss. The VaR concept is defined in a
probabilistic framework, making it possible to determine a non-trivial upper bound (lower than the trivial level)
for the loss at a specified probability. Denoting by Loss the loss of the portfolio over a specified time horizon, the
VaR of the portfolio, say V*, associated with a given probability p, 0 < p < 1, is given by Prob[Loss > V*] = p,
or equivalently Prob[Loss < V*] = 1 - p, where Prob[.] represents the probability measure. Usually, the
term 'VaR for probability p' refers to the definitional identity Prob[Loss > V*] = p, and the term 'VaR for
100·(1-p) per cent confidence level' refers to the identity Prob[Loss < V*] = 1 - p.
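To make the definition concrete, the following minimal Python sketch (purely illustrative; the loss series is simulated, not the study's data) reads VaR off an empirical loss distribution as the (1-p) quantile:

```python
import numpy as np

# Hypothetical example: 1,000 simulated one-day portfolio losses (Rs. crore).
rng = np.random.default_rng(0)
losses = rng.normal(loc=0.0, scale=1.5, size=1_000)

p = 0.01                              # Prob[Loss > V*] = p
var_99 = np.quantile(losses, 1 - p)   # V* such that Prob[Loss < V*] = 1 - p
print(f"99% one-day VaR: {var_99:.2f} crore")
```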
Value-at-Risk (VaR) has been widely promoted by regulatory groups and embraced by financial institutions as
a way of monitoring and managing market risk - the risk of loss due to adverse movements in interest rate,
exchange rate, equity and commodity exposures - and as a basis for setting regulatory minimum capital
standards. The revised Basle Accord, implemented in January 1998, allows banks to use VaR as a basis for
determining how much additional capital must be set aside to cover market risk beyond that required for credit
risk. Market related risk has become more relevant and important due to the trading activities and market
positions taken by large banks. Another impetus for such a measure has come from the numerous and
substantial losses that have arisen due to shortcomings in risk management procedures that failed to detect
errors in derivatives pricing (NatWest, UBS), excessive risk taking (Orange County, Procter & Gamble), as
well as fraudulent behaviour (Barings and Sumitomo).
VaR models usually use historical data to evaluate maximum (worst case) trading losses for a given portfolio
over a certain holding period at a given confidence level. As a result, VaR is essentially determined by two
parameters: the (holding) time period and a confidence level. It is a measure of the loss (expressed in, say, rupees
crore) on the portfolio that will not be exceeded by the end of the time period with the specified confidence
level. If c is the confidence level and N days is the time period, the calculation of VaR is based on the
probability distribution of changes in the portfolio value over N days. Specifically, VaR is set equal to the loss
in the portfolio at the (1 - c)·100 percentile point of the distribution.
Fundamentally, all statistical risk modelling techniques fall into one of three categories, or a combination
thereof: fully parametric methods based on modelling the entire distribution of returns, usually with some form
of conditional volatility; the non-parametric method of historical simulation; and parametric modelling of the
tails of the return distribution. The latter is a recent introduction into the risk management literature and is
based on Extreme Value Theory, which models the tail probabilities directly without making any assumption about
the distribution of the entire return process. All these methods have pros and cons, generally ranging from easy
and inaccurate to difficult and precise. No method is perfect, and usually the choice of a technique depends upon
the market in question and the availability of data, computational and human resources.

Objective
This paper makes an empirical attempt to select a suitable VaR model for the fixed income security market in India,
focusing on demonstrating the steps involved in such a task with the help of a select bond index. A number of
competing models/methods have been evaluated for estimating VaR numbers for selected government bonds. In
practice, the selection of a risk model for a portfolio has to be based on empirical findings. In reality, actual
portfolios differ (say, in terms of composition) across investors/banks, and the strategy demonstrated here can be
easily replicated for any specific portfolio. The rest of the paper is organized as follows.

The 1st part presents the VaR concept and discusses some related issues.
The 2nd part summarises a number of techniques to estimate VaR using historical returns for a portfolio.
The 3rd part discusses criteria to evaluate alternative VaR models.
Empirical results for select bonds are presented in the 4th part.
Finally, the 5th part presents the concluding remarks of the paper.

Literature Review

Gangadhar Darbha (National Stock Exchange, Mumbai, India, December 2001): The paper presents a case
for a new method of computing the VaR for a set of fixed income securities, based on extreme value theory, that
models the tail probabilities directly without making any assumption about the distribution of the entire return
process. It compares the estimates of VaR for a portfolio of fixed income securities across three methods -
Variance-Covariance, Historical Simulation and Extreme Value - and finds that the extreme
value method provides the most accurate VaR estimator in terms of correct failure ratio and the size of VaR.

J. Beleza Sousa, M. L. Esquível, R. M. Gaspar, P. C. Real (February 29, 2012): Bonds' historical returns
cannot be used directly to compute VaR by historical simulation, because the maturities of the interest rates
implied by the historical prices are not the relevant maturities at the time VaR is computed. In this paper the authors
adjust bonds' historical returns so that the adjusted returns can be used directly to compute VaR by historical
simulation. The adjustment is based on the prices, implied by the historical prices, at the times to maturity
relevant for the VaR computation.

G. P. Samanta, Prithwis Jana, Angshuman Hait and Vivek Kumar (Reserve Bank of India Occasional
Papers, Vol. 31, No. 1, Summer 2010): This paper makes an empirical attempt to select a suitable VaR model for
the government security market in India. The empirical results show that returns on these bonds do not follow
the normal distribution: the distributions possess fat tails and, at times, are skewed. The observed non-normality of
the return distributions, particularly the fat tails, adds great difficulty to the estimation of VaR. The paper focuses
more on demonstrating the steps involved in such a task with the help of select bonds. The authors have
evaluated a number of competing models/methods for estimating VaR numbers for selected government bonds.

Jeremy Berkowitz and James O'Brien (April 17, 2001): For a sample of large bank holding companies, the
authors evaluate the performance of banks' trading risk models by examining the statistical accuracy of their VaR
forecasts. Although a substantial literature has examined the statistical and economic meaning of Value-at-Risk
models, this article is the first to provide a detailed analysis of the performance of models actually in use.

Engle (2000) proposed a Dynamic Conditional Correlation (DCC) multivariate GARCH model which models
the conditional variances and correlations using a single step procedure and which parameterizes the conditional
correlations directly in a bivariate GARCH model. In this approach, a univariate GARCH model is fitted to a
product of two return series. Parameters or model coefficients of the GARCH model can be estimated by
maximum likelihood estimation.

The need to model the variance of a financial portfolio accurately has become especially important following
the 1995 amendment to the Basel Accord, whereby banks were permitted to use internal models to calculate
their Value-at-Risk (VaR) thresholds (see Jorion (2000) for a detailed discussion of VaR). This amendment
was in response to widespread criticism that the Standardized approach, which banks used to calculate their
VaR thresholds, led to excessively conservative forecasts. Excessive conservatism has a negative impact on the
profitability of banks, as higher capital charges are subsequently required.

Although the amendment to the Basel Accord was designed to reward institutions with superior risk
management systems, a back testing procedure, whereby the realized returns are compared with the VaR
forecasts, was introduced to assess the quality of the internal models. In cases where the internal models lead to
a greater number of violations than could reasonably be expected, given the confidence level, the bank is
required to hold a higher level of capital. If a bank's VaR forecasts were violated more than 9 times in any
financial year, the bank may be required to adopt the Standardized approach. The imposition of such a penalty
is severe, as it affects the profitability of the bank directly through higher capital charges, has a damaging effect
on the bank's reputation, and may lead to the imposition of a more stringent external model to forecast the
bank's VaR.

Methods of calculating Value-at-Risk

The Covariance method (Analytical or Delta Normal method)


The covariance VaR method assumes that price changes and returns are normally distributed. This method describes the
volatility of the price with standard deviations: the volatility is the change of price, measured in percentage, that
is equal to one standard deviation. Volatility answers the question 'What confidence can I have that future price
changes will not be greater than the volatility quoted?' The covariance method uses the normal curve, and the standard
deviation tells us where the worst and best prices lie on that curve. The standard deviation is also tied to a
confidence level or confidence interval. A band of 1.65 standard deviations around the mean gives a confidence level
of 90%: 5% of the upward price changes will lie outside the +1.65 standard deviation bound and 5% of the
downward price changes will lie outside the -1.65 standard deviation bound. However, when
measuring VaR, only negative price changes or losses are of concern to risk managers, so it is only
the downward price changes that are taken into consideration. Therefore, most banks use a one-tailed
confidence interval of 95%, which takes into consideration only the negative price changes.
The holding period is the period for which VaR is calculated and could be a day, a week, a month or a year,
depending on the liquidity of the assets in the portfolio or the time over which the composition of the portfolio
remains constant.
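As an illustration, here is a minimal delta-normal calculation in Python; the portfolio value and daily volatility are hypothetical stand-ins, not figures from this study:

```python
import numpy as np
from scipy.stats import norm

portfolio_value = 100.0   # Rs. crore (assumed)
sigma_daily = 0.0067      # daily return standard deviation (assumed)

z = norm.ppf(0.95)        # one-tailed 95% quantile, about 1.645
var_1d = portfolio_value * z * sigma_daily
var_10d = var_1d * np.sqrt(10)  # square-root-of-time scaling for a 10-day horizon
print(f"1-day 95% VaR: {var_1d:.3f} crore; 10-day: {var_10d:.3f} crore")
```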

Historical Simulation
Historical Simulation and Monte Carlo simulation are the two main approaches used when calculating VaR.
Historical Simulation is often preferred as it is easy to understand and the calculations are simple and easy
to implement. Moreover, Historical Simulation does not assume that the price changes are normally
distributed, as the covariance method does.
Historical Simulation works by taking past price changes and applying them to the current
portfolio. The steps for running a Historical Simulation are explained in detail below.
The first step is to obtain a price change series for every asset or risk factor in order to revalue the portfolio.
An important point here is the choice of the length of the price history: some banks use 100 days of price history and
others use a couple of years. However, when collecting the historical data it is vital that there are no missing days
of data and that there is the same number of days of data for all risk factors, or else the correlations will be
unreliable. Moreover, if one of the assets is new and there is no price history for it, it will be impossible to run a
Historical Simulation, since it will be impossible to calculate volatility and correlations between assets.
Such obstacles could be overcome by using historical data for an existing asset with similar
characteristics; nonetheless, the portfolio should be well diversified.

The next step is to apply the price changes to the portfolio, to generate a historical series of portfolio value changes
(Best, 1998:34). The value of the new portfolio is calculated by multiplying the price by the quantity of the
assets in the portfolio; however, this is only valid for straightforward assets, and for options a valuation model
must be used. After the portfolio value changes are generated, VaR can be calculated: the results from the
portfolio value changes are grouped into percentiles and plotted on a histogram, where each percentile contains
1% of the observed value changes.
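A compact sketch of this procedure in Python; the return history here is simulated purely for illustration:

```python
import numpy as np

def historical_var(returns, confidence=0.99):
    """Historical-simulation VaR: the (1 - confidence) quantile of past
    portfolio returns, reported as a positive loss percentage."""
    return -np.quantile(returns, 1.0 - confidence)

# Stand-in for the historical portfolio return series.
rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=1_000) * 0.005
print(f"99% historical-simulation VaR: {historical_var(returns):.4%}")
```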

Monte Carlo Simulation


In contrast to Historical Simulation, Monte Carlo simulation uses a great number of price change
scenarios, so the chance of omitting a scenario that could have caused a loss is small.
In cases where historical data are not available and Historical Simulation is difficult to implement, Monte
Carlo simulation can be used to estimate VaR.
Monte Carlo works by generating artificially created events and then applying them to the set of portfolio assets.
The large number of artificially created events is applied to the portfolio, and the VaR is calculated from the
resulting value changes, similar to the approach used in Historical Simulation.
One of its limitations is that it assumes that the price changes are normally distributed, just like the covariance
approach. Furthermore, the model is criticized because it uses randomly generated events and there is no
guarantee that the events needed to compute an accurate VaR are generated. However, if a large number of
artificially generated events is used, this is not considered to be an issue.
To conclude, there are three main methods for calculating Value at Risk. The covariance approach, although
quite easy to understand and compute, has the main disadvantage of assuming a normal price distribution. Monte
Carlo simulation is a bit more advanced to compute as it uses a large number of random simulations. Historical
Simulation uses past data and assumes that changes in the near future will be the same. However, all
Value at Risk methods need to be supplemented by stress tests, because they ignore extreme market
changes.
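A minimal Monte Carlo VaR sketch under an assumed normal scenario generator; the mean and volatility are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.0003, 0.0067        # assumed daily mean and volatility
n_scenarios = 100_000             # many scenarios, as the text recommends

simulated_returns = rng.normal(mu, sigma, n_scenarios)
var_99 = -np.quantile(simulated_returns, 0.01)
print(f"99% Monte Carlo VaR: {var_99:.4%} of portfolio value")
```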

JP Morgan Risk Metrics


JP Morgan's RiskMetrics system for market risk management considers the following exponentially weighted
moving average (EWMA) model, where the weights on past squared returns decline exponentially as we move
backward in time:

σ²(t+1) = λ·σ²(t) + (1 - λ)·r²(t),   0 < λ < 1

The RiskMetrics model has some clear advantages. First, it tracks variance changes in a way that is
broadly consistent with observed returns. Recent returns matter more for tomorrow's variance than distant
returns, as λ is less than one and therefore the impact of the lagged squared return gets smaller as the
lag gets bigger. Second, the model only contains one unknown parameter, namely λ; when estimating λ
on a large number of assets, RiskMetrics found that the estimates were quite similar across assets, and they
therefore simply set λ = 0.94 for every asset for daily variance forecasting. In this case, no
estimation is necessary, which is a huge advantage in large portfolios. Third, relatively
little data need to be stored in order to calculate tomorrow's variance.
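A short sketch of the recursion in Python; the return series is simulated, and the seed variance is an assumption:

```python
import numpy as np

def riskmetrics_variance(returns, lam=0.94):
    """EWMA recursion: sigma2[t+1] = lam * sigma2[t] + (1 - lam) * r[t]**2."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns[:30])   # seed with an initial sample variance (assumed)
    for t in range(len(returns) - 1):
        sigma2[t + 1] = lam * sigma2[t] + (1 - lam) * returns[t] ** 2
    return sigma2

rng = np.random.default_rng(3)
r = rng.normal(0, 0.007, 500)          # stand-in daily returns
print(f"Forecast daily volatility: {np.sqrt(riskmetrics_variance(r)[-1]):.4%}")
```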

ARMA-GARCH MODELS

Econometric models of the ARMA-GARCH type can also be used to calculate VaR. These models,
however, do not forecast the VaR directly; rather they forecast the return and conditional variance of the time series.
Here the asset return is forecasted using ARMA models, and the volatility is forecasted using volatility
models such as GARCH, exponential GARCH (EGARCH), GARCH-in-Mean (GARCH-M) and integrated
GARCH (IGARCH). These volatility models can capture the observed volatility clustering in financial
time series data.
An ARMA(p, q)-GARCH(u, v) process is represented by the time series model below:

r(t) = φ0 + Σ(i=1..p) φi·r(t-i) + Σ(j=1..q) θj·a(t-j) + a(t),   a(t) = σ(t)·ε(t)
σ²(t) = ω + Σ(i=1..u) αi·a²(t-i) + Σ(j=1..v) βj·σ²(t-j)

where the innovation a(t) is the error term and ε(t) is the standardized residual. Assuming that ε(t) is normally
distributed, the VaR at time t+1 is given by the following equation:

VaR(t+1) = r̂(t+1) + σ̂(t+1)·F⁻¹(p)

where F⁻¹(p) is the p·100% quantile of a standard normal distribution. Assuming that ε(t) is distributed as a
standardized Student-t distribution with v degrees of freedom, the VaR at time t+1 is:

VaR(t+1) = r̂(t+1) + σ̂(t+1)·t*v(p)

where t*v(p) is the pth quantile of a standardized Student-t distribution with v degrees of freedom.
Here the computed VaR is actually the VaR of the log return r(t). The final VaR figure is computed by
multiplying the ARMA-GARCH VaR estimate by the asset amount or the mark-to-market value of the financial
position.
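The quantile step can be written as a small helper; this is a sketch of the formulas above, with hypothetical forecast values for the conditional mean and standard deviation:

```python
from scipy.stats import norm, t

def arma_garch_var(mu_hat, sigma_hat, p=0.01, dof=None):
    """One-step-ahead VaR of the log return: mu_hat + sigma_hat * q(p),
    where q(p) is the p-quantile of the standardized innovation."""
    if dof is None:
        q = norm.ppf(p)                               # standard normal quantile
    else:
        q = t.ppf(p, dof) * ((dof - 2) / dof) ** 0.5  # standardized Student-t quantile
    return mu_hat + sigma_hat * q                     # negative number = loss quantile

# Hypothetical one-step forecasts from a fitted ARMA-GARCH model:
print(arma_garch_var(0.0003, 0.007, p=0.01))          # normal innovations
print(arma_garch_var(0.0003, 0.007, p=0.01, dof=4))   # Student-t innovations
```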

While these models can capture serial autocorrelation and volatility clustering, the observed non-normality
and heavy tails of the distributions of financial data still pose problems for a risk manager; skewness risk and
kurtosis risk remain open issues in risk measurement. To address these problems, some studies propose using
extreme value theory to measure market risk.

DATA & RESEARCH METHODOLOGY

For the purpose of this exercise, I have collected 13 years of data on the Total Return Government
Securities Composite Index from the NSE website. The time period for data collection is from
2001 to 2014; the government debt market experienced a great deal of upheaval
during this span. In order to test the predictive performance of different models,
I have conducted a comparative analysis of 13 different models for VaR assessment. These
models are used to estimate the one-period-ahead return prediction in the left tail of the return
distribution at confidence levels of 0.95, 0.975, 0.99, 0.995 and 0.999.

Following are the assessment techniques:


Mean Relative Bias

BackTesting

Likelihood ratio tests (LR test of unconditional coverage (LRuc), LR test of independence
(LRind) and LR test of conditional coverage (LRcc))

Quadratic Loss Function

Market Risk Capital

Mean Relative Scaled Bias

Stress Testing

In this paper we have used the likelihood ratio tests and backtesting to test the accuracy and efficiency
of the VaR models.

BACKTESTING
The main purpose of backtesting is to obtain feedback on the quality and precision of the risk
measurement system. The framework used in this paper consists of checking whether actual returns
are in line with VaR forecasts. It is the process of comparing returns predicted by the VaR
model to those actually experienced over the testing period. The most important result of this
comparison is identification of outcomes where the VaR measure was violated, that is, where the
realized returns were worse than those predicted by the model.
In its simplest form, the backtesting procedure consists of counting the number of times that the
actual portfolio returns fall outside the VaR estimate (so-called violations) and comparing that
number to the confidence level used. For example, if the confidence level is
99%, we expect VaR violations in 1% of the observations, that is, on 10 days out
of 1000 trading days.

Unconditional Coverage testing using Likelihood ratio test


We first want to test whether the fraction of violations obtained for a particular risk model, call it π, is
significantly different from the promised fraction, p. We call this the unconditional coverage
hypothesis. To test it, we write the likelihood of an i.i.d. Bernoulli(π) hit sequence

L(π) = (1 - π)^T0 · π^T1

where T0 and T1 are the number of 0s and 1s in the sample. We can easily estimate π from π̂ =
T1/T; that is, the observed fraction of violations in the sequence. Plugging the maximum
likelihood (ML) estimate back into the likelihood function gives the optimized likelihood as

L(π̂) = (1 - T1/T)^T0 · (T1/T)^T1

Under the unconditional coverage null hypothesis that π = p, where p is the known VaR
coverage rate, we have the likelihood

L(p) = (1 - p)^T0 · p^T1
We can check the unconditional coverage hypothesis using a likelihood ratio test

LRuc = -2 ln[L(p)/L(π̂)]

Asymptotically, that is, as the number of observations T goes to infinity, the test statistic will be distributed
as a χ² with one degree of freedom.

The larger the LRuc value, the more unlikely the null hypothesis is to be true. Choosing a significance level
of, say, 10% for the test, we have a critical value of 2.7055 from the χ²(1) distribution. If the LRuc test value
is larger than 2.7055, then we reject the VaR model at the 10% level. Alternatively, we can calculate the
P-value associated with our test statistic, defined as the probability of getting a sample that
conforms even less to the null hypothesis than the sample we actually got, given that the null hypothesis is
true. In this case, the P-value is calculated as

P-value = 1 - Fχ²(1)(LRuc)

where Fχ²(1)(·) denotes the cumulative distribution function of a χ² variable with one degree of freedom.
Independence testing using Likelihood ratio test
We should be concerned if all of the VaR violations or hits in a sample are happening around
the same time. If the VaR violations are clustered, then the risk manager can essentially predict that
if today is a violation, then tomorrow is more than p·100% likely to be a violation as well. This is
clearly not satisfactory. In such a situation the risk manager should increase the VaR in order to
lower the conditional probability of a violation to the promised p.

Our task is to establish a test which will be able to reject VaR models with clustered violations. To this end,
assume the hit sequence is dependent over time and that it can be described as a so-called first-order
Markov sequence with transition probability matrix

Π1 = [ 1 - π01   π01
       1 - π11   π11 ]

These transition probabilities simply mean that, conditional on today being a non-violation
(that is, It = 0), the probability of tomorrow being a violation (that is, It+1 = 1) is π01. The
probability of tomorrow being a violation given that today is also a violation is

π11 = Pr(It+1 = 1 | It = 1)

Similarly, the probability of tomorrow being a violation given that today is not a violation is

π01 = Pr(It+1 = 1 | It = 0)

The first-order Markov property refers to the assumption that only today's outcome matters for
tomorrow's outcome. As only two outcomes are possible (zero and one), the two probabilities π01
and π11 describe the entire process. The probability of a non-violation following a non-violation is
1 - π01, and the probability of a non-violation following a violation is 1 - π11. If we observe a
sample of T observations, we can write the likelihood function of the first-order Markov process as

L(Π1) = (1 - π01)^T00 · π01^T01 · (1 - π11)^T10 · π11^T11

where Tij, i, j = 0, 1, is the number of observations with a j following an i. Taking first
derivatives with respect to π01 and π11 and setting these derivatives to zero, we can solve for the
maximum likelihood estimates

π̂01 = T01 / (T00 + T01),   π̂11 = T11 / (T10 + T11)

Using then the fact that the probabilities have to sum to one, we have π̂00 = 1 - π̂01 and π̂10 = 1 - π̂11,
which gives the matrix of estimated transition probabilities

Π̂1 = [ 1 - π̂01   π̂01
        1 - π̂11   π̂11 ]

Allowing for dependence in the hit sequence corresponds to allowing π01 to be different from
π11. We are typically worried about positive dependence, which amounts to the probability of a
violation following a violation (π11) being larger than the probability of a violation following a
non-violation (π01). If, on the other hand, the hits are independent over time, then the probability of a
violation tomorrow does not depend on today being a violation or not, and we write π01 = π11 = π.
Under independence, the transition matrix is thus

Π̂ = [ 1 - π̂   π̂
      1 - π̂   π̂ ]

We can test the independence hypothesis that π01 = π11 using a likelihood ratio test

LRind = -2 ln[L(π̂)/L(Π̂1)]

which is asymptotically distributed as a χ² with one degree of freedom.

Conditional Coverage testing using Likelihood ratio test

Ultimately, we care about simultaneously testing whether the VaR violations are independent and whether the
average number of violations is correct. We can test jointly for independence and correct
coverage using the conditional coverage test

LRcc = -2 ln[L(p)/L(Π̂1)]

which corresponds to testing that π01 = π11 = p, and is asymptotically distributed as a χ² with two
degrees of freedom.

Notice that the LRcc test takes the likelihood from the null hypothesis in the LRuc test and
combines it with the likelihood from the alternative hypothesis in the LRind test; therefore,
LRcc = LRuc + LRind.
ANALYSIS AND FINDINGS

TEST FOR DATA STATIONARITY (Table 1)

[EViews unit root test output]

TEST FOR ARCH EFFECTS (Table 2)

[EViews ARCH LM test output]

DATA STATIONARITY
The table 1 above in the previous page is the output for data stationarity. The probability value is less than 5%
and the t statistic is than the critical values indicating null hypothesis is rejected or the returns series doesnt have
a unit root or the data is stationary. Also Durbin Watson value is an ideal 2 indicating the series also doesnt have
a problem of autocorrelation or there is no significant correlation with the lagged variables.

TEST FOR ARCH EFFECTS


If the variance of the errors is constant (equal to some σ²), this is known as the assumption of homoscedasticity.
If the errors do not have a constant variance, they are said to be heteroscedastic.
If the errors are heteroscedastic but assumed homoscedastic, an implication would be that the standard error
estimates could be wrong. It is unlikely in the context of financial time series that the variance of the
errors will be constant over time, and hence it makes sense to consider a model that does not assume that
the variance is constant, and which describes how the variance of the errors evolves.
Table 2 shows a probability value below the 5% significance level and an F-statistic greater than its critical value;
hence the presence of ARCH (heteroscedastic) effects is confirmed.
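These two diagnostics can be reproduced in Python with statsmodels; a sketch, with a simulated stand-in for the index return series:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import het_arch

# Stand-in for the daily log returns of the NSE G-Sec composite index.
rng = np.random.default_rng(5)
returns = rng.normal(0.0003, 0.007, 4682)

adf_stat, adf_p, *_ = adfuller(returns)                   # H0: unit root
lm_stat, lm_p, f_stat, f_p = het_arch(returns, nlags=5)   # H0: no ARCH effects
print(f"ADF: stat = {adf_stat:.2f}, p = {adf_p:.4f}")
print(f"ARCH LM: F = {f_stat:.2f}, p = {f_p:.4f}")
```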

TEST FOR NORMALITY
Descriptive Statistics

Mean 0.00028161
Standard Error 9.85369E-05
Median 0.000425298
Mode -0.008630495
Standard Deviation 0.0067424
Sample Variance 4.546E-05
Kurtosis 10.1398511
Skewness -0.238746023
Minimum -0.064192983
Maximum 0.049071287
Sum 1.318497203
Count 4682
Confidence Level (95.0%) 0.000193179

The descriptive statistics indicate a left skew, as the skewness is negative (-0.24), and a kurtosis of about 10.14,
far above that of a normal distribution; this clearly indicates that the data are not normally distributed.
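A quick way to confirm this is a Jarque-Bera test, sketched here on a simulated heavy-tailed stand-in series rather than the study's data:

```python
import numpy as np
from scipy.stats import jarque_bera, kurtosis, skew

rng = np.random.default_rng(6)
returns = rng.standard_t(df=4, size=4682) * 0.005   # heavy-tailed stand-in data

print(f"skewness: {skew(returns):.3f}")
print(f"excess kurtosis: {kurtosis(returns):.3f}")  # ~0 under normality
stat, pval = jarque_bera(returns)
print(f"Jarque-Bera: {stat:.1f}, p = {pval:.4f}")   # tiny p-value rejects normality
```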

QQ PLOT

[Figure: QQ plot of sorted standardized return quantiles against normal quantiles]

Figure 1. QQ plot for the daily total return govt. bond index (composite), NSE, from 14th Oct
2001 to 26th Sept 2009

Figure 1 clearly shows that the bond return distribution presents heavier tails and a sharper peak
than the normal distribution. If returns were distributed normally, all points from the
empirical series should lie on the line depicting quantiles of the normal distribution.
However, points from the empirical distribution deviate from the straight line at the extremes;
that is, the extreme points show more variability than the points captured at the centre. The
typical S-shaped curve shows that the return distribution from the NSE bond market
presents leptokurtic and asymmetrical behaviour compared with the normal distribution.
MODELS (OUTPUT)

[EViews estimation output for each of the following specifications; the estimated parameters are summarized in the Variance Equations table below.]

GARCH (1,1) normal

GARCH (1,1) Student-t

EGARCH (1,1) normal

EGARCH (1,1) Student-t

PARCH (1,1) normal

PARCH (1,1) Student-t

GARCH-M (1,1) normal

GARCH-M (1,1) Student-t

IGARCH (1,1) normal

IGARCH (1,1) Student-t
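The estimation itself was done in EViews; an equivalent fit can be sketched in Python with the `arch` package (assuming it is available; the return series below is simulated, and the other variants follow by changing the `vol` and `dist` arguments):

```python
import numpy as np
from arch import arch_model

# Stand-in for the daily index returns, scaled to percent as arch prefers.
rng = np.random.default_rng(7)
returns = rng.normal(0.03, 0.7, 4682)

# AR(1) mean, GARCH(1,1) variance, Student-t innovations;
# e.g. vol="EGARCH" for the EGARCH variant.
model = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1, dist="t")
result = model.fit(disp="off")
print(result.params)                      # omega, alpha[1], beta[1], nu, ...

forecast = result.forecast(horizon=1)     # one-step-ahead variance for the VaR step
print(forecast.variance.iloc[-1])
```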

Variance Equations

Model (dependent variable): estimated equation

JP Morgan RiskMetrics (variance):
σ²(t) = 0.94·σ²(t-1) + 0.06·r²(t-1)

AR-GARCH(1,1) normal (variance):
σ²(t) = 0.000000489 + 0.222800·r²(t-1) + 0.803281·σ²(t-1)

AR-GARCH(1,1) Student-t (variance):
σ²(t) = 0.000000778 + 0.354555·r²(t-1) + 0.725490·σ²(t-1)    [deg. of freedom = 3.766041]

AR-EGARCH normal (log variance):
log σ²(t) = -0.612830 + 0.379557·|r(t-1)/σ(t-1)| + 0.027336·(r(t-1)/σ(t-1)) + 0.967471·log σ²(t-1)

AR-EGARCH Student-t (log variance):
log σ²(t) = -0.941187 + 0.534368·|r(t-1)/σ(t-1)| + 0.029642·(r(t-1)/σ(t-1)) + 0.945420·log σ²(t-1)    [deg. of freedom = 3.818443]

AR-PARCH normal (standard deviation raised to δ = 1.323453):
σ(t)^1.323453 = 0.0000204 + 0.213846·(|r(t-1)| - 0.092744·r(t-1))^1.323453 + 0.829466·σ(t-1)^1.323453

AR-PARCH Student-t (variance, δ = 1.1803691):
σ²(t) = (0.0000688 + 0.295072·(|r(t-1)| + 0.089956·r(t-1))^1.1803691 + 0.776232·σ(t-1)^1.1803691)^(2/1.1803691)    [deg. of freedom = 3.742512]

AR-GARCH-M normal (variance):
σ²(t) = 0.000000492 + 0.223768·r²(t-1) + 0.802485·σ²(t-1)

AR-GARCH-M Student-t (variance):
σ²(t) = 0.000000814 + 0.365216·r²(t-1) + 0.718270·σ²(t-1)    [deg. of freedom = 3.759055]

AR-IGARCH normal (variance):
σ²(t) = 0.093855·r²(t-1) + (1 - 0.093855)·σ²(t-1)

AR-IGARCH Student-t (variance):
σ²(t) = 0.130739·r²(t-1) + (1 - 0.130739)·σ²(t-1)    [deg. of freedom = 5.032307]

All the parameters have been estimated using EViews 7.1.

VaR Evidences

After finding the parameter values of all the models, our next objective is to find the VaR values. In
this study we have mainly focused on the negative returns captured along the left tail of the
probability distribution.

Considering daily positive and negative returns, VaR was computed for confidence levels of 95%, 97.5%,
99% and 99.5%. The results are then compared with results from conventional first-generation
parametric and non-parametric models. Parametric models include JP Morgan RiskMetrics,
GARCH (1,1), EGARCH, IGARCH, PARCH and GARCH-M. Non-parametric models include the
method of Historical Simulation.

Results for the 10 VaR models, covering the tails of the return distributions for the Indian
government bond market, are included in Table 3.

AVERAGE VALUE AT RISK (TABLE 3)

                          Confidence level
Model                     95.0%    97.5%    99.0%    99.5%
Historical                1.10%    1.46%    2.10%    2.87%
JPM RiskMetrics           0.73%    0.87%    1.03%    1.14%
Garch (1,1) normal        0.53%    0.63%    0.75%    0.83%
Garch (1,1) Student-t     0.95%    1.25%    1.74%    2.22%
AR-EGarch normal          0.21%    0.25%    0.29%    0.32%
AR-EGarch Student-t       0.30%    0.40%    0.55%    0.71%
AR-Garch-M normal         0.53%    0.63%    0.74%    0.82%
AR-Garch-M Student-t      0.94%    1.24%    1.73%    2.21%
AR-IGarch normal          0.63%    0.75%    0.89%    0.99%
AR-IGarch Student-t       0.85%    1.04%    1.33%    1.57%

RESULTS OF BACKTESTING (Table 4)

                            Number of violations
VaR model                   95.0%   97.5%   99.0%   99.5%
Expected                    181     91      36      18
Historical Simulation       234     115     52      32
JP Morgan                   164     109     70      53
AR-Garch(1,1) normal        158     87      42      30
AR-Garch(1,1) Student-t     14      3       0       0
AR-EGarch normal            349     271     202     168
AR-EGarch Student-t         131     86      51      29
AR-PARCH normal             0       0       0       0
AR-PARCH Student-t          0       0       0       0
AR-Garch-M normal           158     87      42      30
AR-Garch-M Student-t        14      3       0       0
AR-IGarch normal            181     117     69      54
AR-IGarch Student-t         49      25      11      4

According to the definition, for a good VaR estimation method we expect the number of violations to be close to the
expected number. Therefore, the smaller the difference between the estimated and realized numbers of VaR violations,
the better the performance of the VaR estimation method. The results in Table 4 show that for all confidence levels the
PARCH model has no violations at all when compared with the expected number, whereas JP Morgan RiskMetrics is
successful only at the 95% confidence level. Garch (1,1) normal is successful at 95% and 97.5% only, while
Garch (1,1) Student-t is successful at all confidence levels. EGarch Student-t is successful at the 95% and 97.5%
confidence levels. Garch-M normal and Garch-M Student-t produce the same numbers of violations as their Garch (1,1)
counterparts. IGarch normal is successful at the 95% confidence level and IGarch Student-t is successful at all levels.

Hence, on average we can say that the models applied with the Student-t assumption are more successful than the
models with the normal assumption, of which Garch (1,1) Student-t and Garch-M Student-t were the most successful,
with the least number of violations. It can also be observed that the PARCH model, both Student-t and normal, is
overestimating the VaR, as there was not a single violation.

LR Test (Table 5)

SIGNIFICANCE LEVEL 1%        LRuc                     LRind                    LRcc
Historical Simulation        Don't reject VaR model   Not defined              Not defined
JP Morgan                    Don't reject VaR model   Reject VaR model         Reject VaR model
AR-Garch(1,1) normal         Don't reject VaR model   Not defined              Not defined
AR-Garch(1,1) Student-t      Not defined              Not defined              Not defined
AR-EGarch normal             Not defined              Not defined              Not defined
AR-EGarch Student-t          Reject VaR model         Not defined              Not defined
AR-PARCH normal              Not defined              Not defined              Not defined
AR-PARCH Student-t           Not defined              Not defined              Not defined
AR-Garch-M normal            Don't reject VaR model   Not defined              Not defined
AR-Garch-M Student-t         Not defined              Not defined              Not defined
AR-IGarch normal             Reject VaR model         Not defined              Not defined
AR-IGarch Student-t          Reject VaR model         Not defined              Not defined

Table 5 shows the results of the likelihood ratio tests. Three models (Historical Simulation, AR-Garch(1,1)
normal and AR-Garch-M normal) have passed the LRuc test, indicating that these VaR models can
accurately predict violations 99% of the time.

The models are further assessed using the LRind test, to check whether the proportion of clustered VaR
exceptions is equal to the proportion of independent VaR exceptions; only JP Morgan RiskMetrics
qualifies for this test. Finally, the LRcc test assesses independence and correct coverage jointly;
here no model qualifies.

CONCLUSIONS
In this paper, a comparative study of the predictive abilities of twelve different methods for VaR assessment was
performed. The models used were unconditional volatility models (historical VaR) and conditional volatility
models (GARCH and its variants: EGARCH, GARCH-M, IGARCH, PARCH). Data from the last 13 years were used, and
estimations for the left tail of the return distribution at four confidence levels were considered in order to obtain
reliable results. Furthermore, statistical backtesting was used to evaluate the VaR results, along with the
likelihood ratio tests.

This paper has shown that GARCH and GARCH-M offer several benefits in terms of more precise
estimation, at high confidence levels, of the potential losses arising from tail risk in the return
distributions of an emerging bond market like India. This is true for the bond market in India, where
market fluctuations have been large and frequent due to explosive growth coupled with high inflation
rates.

The first main empirical finding of this study is that the bond market is characterized by a non-normal return
distribution with excess kurtosis; this can be explained by the extreme volatility that the Indian economy
has experienced during the last decades.

Second, the empirical evidence clearly shows that VaR estimates based on GARCH (1,1) and GARCH-M yield
more precise and robust information about financial risk than conventional parametric approximations at the
99% confidence level. The results proved that in the left tail, conditional VaR models (GARCH and its
variants) performed better than unconditional ones (historical VaR). This is due to the fact that
conditional VaR estimates embrace different volatility regimes, and thus vary far more than unconditional
ones. It also derives from the fact that GARCH and GARCH-M correctly model the magnitude of the extreme
movements found in the left tail of the probability distribution.

Therefore, VaR estimates based on GARCH and GARCH-M can be used effectively to manage financial risk in a
univariate context; they allow banks to have a better perspective on the risks assumed because of
extreme movements, and to make better buy and sell decisions under uncertainty in both mature and
emerging markets. To sum up, VaR based on GARCH and GARCH-M can become a very valuable tool to manage
risk from portfolio holdings. Furthermore, risk analysis based on GARCH and GARCH-M also offers valuable
information to the banking industry and its regulatory authorities for the purpose of determining capital
needs more efficiently. With this, it can be concluded that the study has successfully assessed the potential of
GARCH and GARCH-M as risk measures.

ANNEXURES

Garch (1,1) vs JP Morgan

[Chart: daily returns Rt with Garch (1,1) normal and JP Morgan VaR estimates (Returns & VaR, %)]

EGarch vs Historical Simulation

[Chart: daily returns Rt with AR-EGarch normal and Historical Simulation VaR estimates (Returns & VaR, %)]

IGarch vs Garch-M

[Chart: daily returns Rt with AR-Garch-M normal and AR-IGarch normal VaR estimates (Returns & VaR, %)]

Garch (1,1)-t vs Parch (1,1)-t

[Chart: daily returns Rt with Garch (1,1) Student-t and AR-Parch Student-t VaR estimates (Returns & VaR, %)]

Quantile Plot Garch (1,1)

[QQ plot: sorted standardized residuals from Garch (1,1) against normal quantiles]

Quantile Plot EGarch

[QQ plot: sorted standardized residuals from EGarch against normal quantiles]

REFERENCES

1. COOPER, G. (2008), The Origin of Financial Crises, Great Britain: Athenaeum Press Limited, Tyne & Wear
2. BROOKS, C. (2008), Introductory Econometrics for Finance, New York: Cambridge University Press
3. CHRISTOFFERSEN, P. (2012), Elements of Financial Risk Management, Waltham, MA: Elsevier
4. JORION, P. (2007), Value at Risk: The New Benchmark for Managing Financial Risk, USA: McGraw-Hill
5. Basel Committee on Banking Supervision (2004), International Convergence of Capital Measurement and Capital Standards, Bank for International Settlements
6. BEDER, T. (1995), "VAR: Seductive but Dangerous", Financial Analysts Journal, Vol. 51(5)
7. BLUNDELL-WIGNALL, A., ATKINSON, P. (2010), "Thinking Beyond Basel III: Necessary Solutions for Capital and Liquidity", Organisation for Economic Co-operation and Development, Vol. 2010, Issue 1
8. MEHTA, A., NEUKIRCHEN, M., PFETSCH, S., POPPENSIEKER, T. (2012), "Managing Market Risk: Today and Tomorrow", McKinsey Working Papers on Risk, Number 32
9. ROGACHEV, A. (2002), "Dynamic Value at Risk", St. Gallen
10. TOTIC, S., BULAJIC, M., VLASTELICA, T. (2011), "Empirical Comparison of Conventional Methods and Extreme Value Theory Approach in Value-at-Risk Assessment", African Journal of Business Management, Vol. 5(33), pp. 12810-12818
11. ATIK, J., "Basel II and Extreme Risk Analysis"
12. STYGER, "Calculating Operational Value-at-Risk (OpVaR) in a Retail Bank", School of Economics, North-West University
13. SUAISO, O. Q. J., MAPA, D. S., "Measuring Market Risk Using Extreme Value Theory", University of the Philippines School of Statistics
14. TRZPIOT, G., MAJEWSKA, J. (2010), "Estimation of Value at Risk: Extreme Value and Robust Approaches"
15. RUFINO, C., DE GUIA, E. (2011), "Empirical Comparison of Extreme Value Theory Vis-à-vis Other Methods of VaR Estimation Using ASEAN+3 Exchange Rates", DLSU Business & Economics Review 20.2
17. SLIM, S., GOMMOUDI, I., BELKACEM, L. (2012), "Portfolio Value at Risk Bounds Using Extreme Value Theory", International Journal of Economics and Finance, Vol. 4, No. 3
18. ANDJELIC, G., MILOSEV, I., DJAKOVIC, V. (2010), "Extreme Value Theory in Emerging Markets", Economic Annals, Volume LV, No. 185
19. Bank for International Settlements, website: www.bis.org
20. Yahoo Finance (2013), BSE Historical Prices, Yahoo Finance, visited 22.12.2013
21. WILSON, T. (2009), "Did Risk Management Cause the Financial Crisis?", Global Association of Risk Professionals, visited 09.10.2013, available: http://www.garp.org/risk-news-and-resources/2009/august/did-risk-management-cause-the-financial-crisis.aspx
22. MORGAN, J. (2013), "Value-at-Risk: An Overview of Analytical VaR", J.P. Morgan Investment Analytics and Consulting, visited 11.10.2013, available: intinno.iitkgp.ernet.in/courses/1239/wfiles/249706
23. SPEAKES, J. (2011), "Value at Risk and Extreme Scenarios", Center for Economic Research and Forecasting Blog, visited 13.10.2013, available: http://www.clucerf.org/blog/2011/10/24/value-at-risk-and-extreme-scenarios/comment-page-1/
24. LARSEN, P. (2012), "A Farewell to VaRms", Reuters Breakingviews, visited 22.11.2013, available: http://www.breakingviews.com/jpmorgan-hammers-another-nail-in-var-coffin/21017713.article
25. HAJJAR, G. (2011), "Basel II Stressed VaR and Stress Scenario Testing", Optimal MRM, visited 22.11.2013, available: https://www.optimalmrm.com/stressed-var/

