
Morgan Stanley Investment Management

Portfolio Management Today –
A Tour Around Recent Advances in Portfolio Construction

Dr. B. Scherer, September 2008

Page 1
Overview

Table of Contents

Section 1 The Evolution of Risk Measures

Section 2 Postmodern Portfolio Theory (PMPT)

Section 3 Estimation Error: Bayesian Estimates versus Robust Estimation

Section 4 Fairness in Asset Management

Section 5 Optimal Leverage: About Feathers and Stones

Page 2
Section 1: The Evolution of Risk Measures

Axioms on Random Variables versus Behaviour


Statistics versus Financial Economics

Why do we need a good risk measure?
– Pricing in incomplete markets
– Portfolio selection
Can we invent risk measures arbitrarily?

• Actuarial research / statistics (axioms on random variables)
– define a set of “reasonable” axioms
– deduce a risk measure or criticize risk measures
– dependent on the appropriateness of the axioms

• Financial economics (axioms on behaviour)
– decision making under uncertainty (NEUMANN/MORGENSTERN)
– deduce a risk measure or criticize risk measures
– dependent on the appropriateness of the axioms

• Without a good risk (better: risk/reward) measure there is no meaningful portfolio selection

Page 3
Section 1: The Evolution of Risk Measures

Why did we ever look at volatility?


It seemed an approximation to maximizing expected utility

• Decision theoretic foundation

w* = argmax_w E_P[U(Wealth)] = argmax_w E_P[U(Σ_{i=1..n} w_i (1 + r_i))] ≈ argmax_w w'μ − (λ/2) w'Ωw

NEUMANN/MORGENSTERN (expected utility) → MARKOWITZ (mean-variance)

• The above holds as long as returns are not “too non-normal” (MARKOWITZ/LEVY, 1979); many studies confirm this
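The quality of the approximation can be checked directly. A minimal sketch (not from the slide): compare the exact certainty-equivalent return under an exponential utility with the Markowitz approximation μ − (λ/2)σ² on simulated near-normal returns; the mean, volatility and risk aversion below are illustrative choices.

```python
import math
import random

def certainty_equivalent(returns, lam):
    # exact certainty-equivalent return under exponential utility U(W) = -exp(-lam*W)
    eu = sum(math.exp(-lam * (1.0 + r)) for r in returns) / len(returns)
    return -math.log(eu) / lam - 1.0

def mv_approximation(returns, lam):
    # Markowitz approximation: mu - (lam/2) * sigma^2
    n = len(returns)
    mu = sum(returns) / n
    var = sum((r - mu) ** 2 for r in returns) / n
    return mu - 0.5 * lam * var

random.seed(7)
rets = [random.gauss(0.05, 0.10) for _ in range(50_000)]
exact = certainty_equivalent(rets, 3.0)
approx = mv_approximation(rets, 3.0)
print(exact, approx)  # the two agree to within a few basis points
```

For exactly normal returns the two coincide in population; any gap in the sample comes from higher moments, which is the point of the MARKOWITZ/LEVY argument.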

• Caution: an often-made mistake (maximizing net present value and utility optimization have nothing in common)

w'μ − (λ/2) w'Ωw ≠ max Σ_{i=1..∞} CF_shareholder,i / (1 + COE)^i

• Modern finance is about the separation of preferences and valuation; see your favourite corporate finance textbook (only actuaries ignore this)

Page 4
Section 1: The Evolution of Risk Measures

Volatility Is Not Always An Ideal Risk Measure


… but not as bad as you think

• Disadvantages
– Simply not true for many return series and seriously wrong for optioned
portfolios

• Advantages (Why do people still use it?)


– Ok for large diversified long/short portfolios and/or long time horizons
– Degree of non-normality is not too large for monthly (typical revision
horizon) return series
– Symmetry (uses all data): small estimation error
– Easy portfolio aggregation & time aggregation
– Consistent with earlier asset pricing models
– Computational solutions readily available
– Ok for elliptical distributions

Page 5
Section 1: The Evolution of Risk Measures

Value at Risk
… definitely worse than you think

• Disadvantages
– Quantile measure; ignores risks within the tail
– Might induce taking of extreme risks (BASAK/SHAPIRO, 2001)
– Diversification can lead to higher risk (not sub-additive)
– Not convex, i.e. very hard to use in optimization problems

• Advantages
– More perceived than real / would Value at Risk have been promoted had its deficiencies been known earlier?

Problem: despite all its deficits, no practitioner will be willing to give it up, as it is universal and can be expressed in $

Page 6
Section 1: The Evolution of Risk Measures

Value at Risk gives wrong diversification advice!


• The reason for these paradoxical results lies directly in the concept of value at risk. It ignores the large losses that are waiting undetected in the tail of the distribution. When we average across portfolios, however, these returns are diversified into the portfolio risk measure and increase risk, as they had been ignored before.

Scenario   Manager 1   Manager 2   Manager 1+2   Probability
1          -20%        2%          -9%           3%
2          -3%         2%          -0.5%         2%
3          2%          -20%        -9%           3%
4          2%          -3%         -0.5%         2%
5          2%          2%          2%            90%

Table 1. Data Manager Combination: Active Return

Risk Measure          Manager 1   Manager 2   Manager 1+2
Volatility            3.80%       3.80%       2.63%
Value at Risk (95%)   -3%         -3%         -9%
LPM0                  5%          5%          10%
CVaR                  -13.20%     -13.20%     -5.60%

Table 2. Risk Measures in Multiple Manager Example
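The value-at-risk paradox can be reproduced directly from the scenarios in Table 1. A minimal sketch for discrete distributions: each single manager has a 95% VaR of -3%, yet the equally weighted combination has a 95% VaR of -9%, so "diversification" increases reported risk.

```python
def var(returns, probs, alpha=0.95):
    """Discrete (1 - alpha)-quantile: smallest return whose cumulative
    probability reaches 1 - alpha, reported as the VaR level."""
    cum = 0.0
    for r, p in sorted(zip(returns, probs)):
        cum += p
        if cum >= 1.0 - alpha - 1e-12:
            return r
    return max(returns)

probs = [0.03, 0.02, 0.03, 0.02, 0.90]
m1 = [-0.20, -0.03, 0.02, 0.02, 0.02]
m2 = [0.02, 0.02, -0.20, -0.03, 0.02]
combo = [0.5 * (a + b) for a, b in zip(m1, m2)]  # equally weighted combination

print(var(m1, probs), var(m2, probs), var(combo, probs))  # -0.03 -0.03 -0.09
```

The combined portfolio stacks both managers' -9% tail scenarios just past the 5% cut-off, which is exactly the sub-additivity failure the slide describes.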

Page 7
Section 1: The Evolution of Risk Measures

Conditional Value at Risk


Only marginally better

• Coherent, but is it economically sensible?
– Are you risk neutral in the tail?

• Neglected approximation issue

• Nonparametric & unconditional

• Large estimation error
– Driven by rare events
– Only uses few data points

• Implicit momentum
– Up markets (momentum) show little risk: the Japan fallacy
– Explains superiority in back-tests
– Risk measure depends on the return estimate!

[Chart: MSCI Japan, annual returns in %, 1975–1992; returns range from roughly -40% to +60%]

Page 8
Section 1: The Evolution of Risk Measures

Some Other Risk Measures


All of them equally problematic

Utility theory is used as ex-post sanctification, i.e. an academic fig leaf

The necessary utility functions are implausible given observed behaviour / preferences

• Lower partial moments
• Semi-variance
• Mean absolute deviation
• Omega ratio
• Minimum regret

Page 9
Section 1: The Evolution of Risk Measures

Spectral Risk Measures: Theory


Weighting quantiles

• Giving larger weights to more extreme quantiles comes very close to maximizing expected utility for reasonable utility functions

• Weights all cases from worst to best

• Advantage: the weighting function allows us to merge the risk measure with risk preferences

• Value at Risk places all weight on a single quantile (implies the investor does not care about tail risk)

• Conditional Value at Risk places an equal weight on all tail quantiles and zero elsewhere (implies the investor is risk neutral in the tail)

• ACERBI (2004) allows us to include risk aversion in the risk measure by allowing a (subjective) weighting on quantiles

M_φ(X) = ∫₀¹ φ(p) F_X⁻¹(p) dp
(weighting function φ, loss quantile F_X⁻¹)

• If φ(p) is a non-decreasing function, spectral risk measures are coherent

Page 10
Section 1: The Evolution of Risk Measures

Spectral Risk Measures: Example


Weighting quantiles

• I assume an exponential weighting scheme derived from the exponential utility function, DOWD/COTTER (2007):

φ(p) = a · exp(−a·(1−p)) / (1 − exp(−a))

a     Normal dist   t-dist
1     0.27          3.80
5     1.08          18.26
10    1.50          34.69
25    1.95          79.69
100   2.51          274.79

• Larger risk aversion leads to a larger risk measure

• This effect tails off for the normal distribution but is unchanged for the t-distribution
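The normal-distribution column above can be reproduced by numerically integrating the spectral measure M_φ(X) = ∫₀¹ φ(p) F⁻¹(p) dp against the standard normal loss quantile. A sketch using only the Python standard library (the grid size n is an arbitrary choice, and the midpoint rule is one of several workable quadratures):

```python
import math
from statistics import NormalDist

def exp_spectral_risk(a, inv_cdf, n=100_000):
    """Spectral risk measure with exponential weighting
    phi(p) = a*exp(-a*(1-p)) / (1 - exp(-a)),
    integrated over the loss quantile inv_cdf by the midpoint rule."""
    norm = 1.0 - math.exp(-a)
    total = 0.0
    for i in range(n):
        p = (i + 0.5) / n
        phi = a * math.exp(-a * (1.0 - p)) / norm
        total += phi * inv_cdf(p)
    return total / n

z = NormalDist().inv_cdf  # standard normal quantile function
for a in (1, 5, 10, 25, 100):
    print(a, round(exp_spectral_risk(a, z), 2))
```

As the slide notes, the measure grows with the risk aversion parameter a, but for the thin-tailed normal it levels off well below the heavy-tailed t values.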
Page 11
Section 1: The Evolution of Risk Measures

Portfolio Construction under Non-Normality


What are our options?

• Use of downside risk measures


− If you feel you must, but I would advise against it

• Approximate utility optimization
– Quadrature methods
– Higher-order expansions of utility functions: the number of higher co-moments grows extremely large for a large number of assets
– The co-skewness and co-kurtosis matrices require n(n+1)(n+2)/6 and n(n+1)(n+2)(n+3)/24 distinct entries, respectively

• Full Scale Direct Optimization


− Theoretically and practically most convincing
− See KRITZMAN (2007) for convincing out of sample results

Page 12
Section 1: The Evolution of Risk Measures

In Summary: A Vicious Research Cycle


After 60 years we are back to our roots

Expected utility is a risk-adjusted performance measure

Expected utility cannot be gamed

Expected utility explains why some people buy and others sell options

The cycle:
Utility Optimization (NEUMANN/MORGENSTERN, 1944)
→ Mean Variance Optimization (MARKOWITZ, 1952)
→ Lower Partial Moments (FISHBURN/SORTINO, 1990)
→ VaR and CVaR optimization as an approximation to Utility Optimization (ROCKAFELLAR/URYASEV, 2000)
→ Spectral Risk Measures (ACERBI, 2004)
→ back to Utility Optimization

Page 13
Section 1: The Evolution of Risk Measures

Case Study: Risk Measures and the Credit Crunch


In Sample Optimization

See SCHERER/MARTIN (2005) for complete code, including the URYASEV CVaR approximation

• Investment universe: UK equities (FTSE), emerging market bonds (EMBI), commodities (GSCI), emerging market equities (MSCI)

− Take five years of monthly data up to June 2007 to get the following weights

Portfolio weights           UK Equities   EM Bonds   EM Equities   GSCI Commodities
Mean Variance               24.63%        34.12%     1.82%         39.43%
Mean Absolute Deviation     31.83%        26.19%     0.00%         41.99%
Semi-Variance               21.35%        39.67%     1.21%         37.78%
Minimizing Regret           40.92%        13.09%     0.00%         46.00%
Conditional Value at Risk   19.31%        44.22%     0.00%         36.47%

− Leads to the following characteristics

                            Volatility   VaR      CVaR     Semivariance   Worst month   Cum. Drawdown   Return
Mean Variance               2.29%        -2.73%   -3.59%   1.75%          -5.10%        -4.89%          1.67%
Mean Absolute Deviation     2.30%        -2.65%   -3.77%   1.81%          -5.85%        -4.81%          1.67%
Semi-Variance               2.35%        -2.71%   -4.06%   1.88%          -6.77%        -4.71%          1.67%
Minimizing Regret           2.29%        -2.50%   -4.06%   1.72%          -4.93%        -4.93%          1.67%
Conditional Value at Risk   2.30%        -2.34%   -3.36%   1.73%          -4.95%        -4.95%          1.67%

Page 14
Section 1: The Evolution of Risk Measures

Case Study: Risk Measures and the Credit Crunch


Out of Sample Performance

• No clear picture

                            Volatility   VaR      CVaR     Semivariance   Worst month   Cum. Drawdown   Return
Mean Variance               3.81%        -6.83%   -7.98%   2.79%          -14.32%       -7.98%          -0.33%
Mean Absolute Deviation     3.71%        -6.57%   -7.88%   2.79%          -14.34%       -7.88%          -0.30%
Semi-Variance               3.61%        -6.03%   -7.34%   2.42%          -13.76%       -7.34%          -0.18%
Minimizing Regret           3.86%        -7.08%   -8.29%   2.88%          -14.98%       -8.29%          -0.39%
Conditional Value at Risk   3.90%        -7.30%   -8.64%   2.99%          -15.81%       -8.64%          -0.47%

• Minimizing regret (close to robust and very pessimistic) does best

• CVaR does worst; out of sample, UK and emerging market equities did worst while emerging market bonds and commodities did best (sampling error seriously affects CVaR)

Page 15
Section 2: Postmodern Portfolio Theory

What is Postmodern Portfolio Theory?


Proponents are SHARPE (2007) and COCHRANE (2007)

• Asset pricing takes the distribution of wealth as given and derives


asset prices using arbitrage principles and return characteristics

• Portfolio theory takes asset prices as given and derives the utility
optimal allocation of wealth.

• Postmodern portfolio theory (PMPT) unites asset pricing and portfolio


theory by aligning risk neutral and real world distribution with the use of
state price deflators.

• The new book by SHARPE (2007) is an impressive collection of real


world applications that allows for a much richer framework than the one
currently used by practitioners.

Page 16
Section 2: Postmodern Portfolio Theory

PMPT: The Theoretical Framework


Combines P and Q measure

• Optimize utility or another objective function under the P measure while constraints are governed by the Q measure

w = argmax_w E_P[U(W)]  subject to  E_Q[W] = W₀(1 + c)

• Far superior to what the industry has done so far. Some examples include:
− Eliminates arbitrage from MV optimization (allows us to include options in portfolio optimization)
− Implicitly solves dynamic optimization problems (we can write down any payoff for final wealth as long as the budget constraint is satisfied)
− Tracking dynamic trading strategies

Page 17
Section 2: Postmodern Portfolio Theory

How can we rescue implied returns?


What happens if we leave MV efficiency?

This approach (GRINOLD, 1999 and SHARPE, 2007) works under arbitrary utility functions and shows the power of “neo portfolio theory” by merging the literature on asset pricing and asset allocation.

• Step 1. Build the pricing kernel from the utility function:

π*_s = π_s U′(1 + R_s,b) / Σ_{s=1..S} π_s U′(1 + R_s,b)

• Step 2. Expected returns for all assets are the same under the above pricing measure:

Σ_{s=1..S} π*_s (R_s,i + d_i) = Σ_{s=1..S} π*_s R_s,b

• Solve for d_i for alternative utility functions and risk aversions
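The two steps can be sketched numerically. A minimal illustration (the scenario returns, probabilities and power-utility risk aversion below are invented inputs, not from the slide): build the risk-adjusted probabilities π* from the marginal utility of benchmark wealth, then solve the pricing condition for the implied excess return d_i.

```python
def implied_excess_return(probs, r_bench, r_asset, gamma=5.0):
    """Step 1: pricing kernel pi*_s ~ pi_s * U'(1 + R_s,b), with power
    utility marginal U'(W) = W**(-gamma).
    Step 2: solve sum_s pi*_s (R_s,i + d_i) = sum_s pi*_s R_s,b for d_i."""
    mu = [p * (1.0 + rb) ** (-gamma) for p, rb in zip(probs, r_bench)]
    pi_star = [m / sum(mu) for m in mu]
    e_bench = sum(p * r for p, r in zip(pi_star, r_bench))
    e_asset = sum(p * r for p, r in zip(pi_star, r_asset))
    return e_bench - e_asset

# hypothetical three-scenario example: the asset loses most in the bad state
probs = [0.2, 0.5, 0.3]
r_bench = [-0.15, 0.05, 0.20]
r_asset = [-0.30, 0.06, 0.25]
d = implied_excess_return(probs, r_bench, r_asset)
print(d)  # positive: the badly-timed asset needs an extra implied return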

Page 18
Section 2: Postmodern Portfolio Theory

Building More Meaningful Implied Returns

More risk-averse investors place larger weight on the tails of the distribution and hence require much larger excess returns to be willing to hold a negatively skewed asset.

Source: Scherer (2007, chapter 9)

Page 19
Section 2: Postmodern Portfolio Theory

Example: Replication of CPPI Strategies

We suggest solving the following problem: minimize the tracking error between the continuous CPPI trading strategy and our static buy-and-hold tracking portfolio

min Σ_i π_i (W*_{T,i} − w_c e^{rT} − w_S S_{T,i} − w_{c1} C_{1,i})²

subject to a budget constraint

W₀ = e^{−rT} Σ_i π_i (w_c 6000 e^{rT} + w_S S_{T,i} + w_{c1} C_{1,i}) = 6000

and a non-negativity constraint on individual asset holdings

w_c ≥ 0, w_S ≥ 0, w_{c1} ≥ 0, …, w_{cm} ≥ 0

Page 20
Section 2: Postmodern Portfolio Theory

Example: Replication of CPPI Strategies


We also need a cardinality constraint for the number of instruments (n_assets) involved:

w_c ≤ (large number) · δ_c,   δ_c ∈ {0,1}
w_S ≤ (large number) · δ_S,   δ_S ∈ {0,1}
w_{c1} ≤ (large number) · δ_{c1},   δ_{c1} ∈ {0,1}
…
w_{cm} ≤ (large number) · δ_{cm},   δ_{cm} ∈ {0,1}

Note that each equation provides a logical switch using a binary variable that takes on a value of either 0 or 1. Let us review the case for cash to clarify the calculations. As soon as w_c > 0 by even the smallest amount, cash enters the optimal solution, in which case δ_c = 1 to satisfy the inequality. If w_c = 0 instead, the optimizer is free to set δ_c = 0. Computationally, the “large number” should not be chosen too large, i.e. it depends on how large w_c can become. Finally, we add up the “dummy variables” to count the number of assets that enter the optimal solution:

δ_c + δ_S + δ_{c1} + … + δ_{cm} ≤ n_assets

Page 21
Section 2: Postmodern Portfolio Theory

Example: Replication of CPPI Strategies

[Chart: OBPI minus CPPI hedging error (0 to −1500) against the stock market level (4000–14000)]

Figure 2. Hedging error of a static option hedge. We used n_assets = 6 to track a given CPPI strategy. Note that hedging errors (difference between CPPI and OBPI payoff) are small around the current stock price of 6000 and much larger where tracking is less relevant, i.e. where the real-world probability is low.

Source: Scherer (2007, chapter 8 )

Page 22
Section 2: Postmodern Portfolio Theory

Example: Replication of CPPI Strategies

[Chart: tracking error (0 to 1.4) against the number of admissible instruments (4–12)]

Figure 3. Reduction of hedging error and number of admissible instruments. As the number of instruments rises, the tracking error decreases. Note that tracking error is expressed as annual percentage volatility, meaning 0.2 equals 20%. The tracking advantage tails off quickly with more than 8 admissible instruments.

Source: Scherer (2007, chapter 8 )

Page 23
Section 2: Postmodern Portfolio Theory

Example: Utility Optimization

π_i = e^{rT} [C(S_{i+1}, σ̂_impl(S_{i+1}), …) − 2 C(S_i, σ̂_impl(S_i), …) + C(S_{i−1}, σ̂_impl(S_{i−1}), …)] / (S_{i+1} − S_i)²

[Chart: implied volatility (0.10–0.25) against strike (5000–7000), raw data with OLS and robust regression fits]

Figure 4. Fitted implied volatilities for linear and robust regression. The cross-sectional regression is run using both OLS as well as robust regression. Fitted lines are either dashed or dotted, while the raw data are provided in Table 4.

Source: Scherer (2007, chapter 8 )
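The state-price formula above is the Breeden/Litzenberger second difference of call prices in the strike. A self-contained sketch in which Black-Scholes prices stand in for market quotes (the strike spacing dK and all market parameters are illustrative): with flat volatility the second difference recovers exactly the lognormal risk-neutral density.

```python
from math import exp, log, pi, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(S, K, r, sigma, T):
    # Black-Scholes European call price
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

def implied_density(S0, K, r, sigma, T, dK=1.0):
    # Breeden/Litzenberger: pdf(K) = e^{rT} * second strike difference of C
    second_diff = (bs_call(S0, K + dK, r, sigma, T)
                   - 2.0 * bs_call(S0, K, r, sigma, T)
                   + bs_call(S0, K - dK, r, sigma, T))
    return exp(r * T) * second_diff / dK**2

def lognormal_pdf(S0, K, r, sigma, T):
    # analytic risk-neutral density of S_T under Black-Scholes
    m = log(S0) + (r - 0.5 * sigma**2) * T
    s = sigma * sqrt(T)
    return exp(-((log(K) - m) ** 2) / (2 * s * s)) / (K * s * sqrt(2 * pi))

num = implied_density(6000, 6000, 0.04, 0.11, 1.0)
ana = lognormal_pdf(6000, 6000, 0.04, 0.11, 1.0)
print(num, ana)  # the two densities agree closely
```

With market option prices instead of model prices, the same second difference produces the option-implied density shown in Figure 5.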


Page 24
Section 2: Postmodern Portfolio Theory

Implied Density

[Chart: risk-neutral densities against stock price (5000–7000): lognormal PDF versus option-implied PDF]

Figure 5. Lognormal volatility versus option-implied volatility. Implied risk-neutral densities are provided for both a lognormal model (with 11% historical volatility) as well as for the coefficient estimates of the robust regression.


Page 25
Section 2: Postmodern Portfolio Theory

Example: Utility Optimization

A forward-looking risk measure that uses the market's risk perception rather than an unconditional distribution:

u = (1/(1−υ)) W^{1−υ}, for υ ≠ 1

max Σ_i π_i (1/(1−υ)) (W_{T,i})^{1−υ}

W₀ = e^{−rT} Σ_i π_i (w_c 6000 e^{rT} + w_S S_{T,i}) = 6000

Risk aversion   Implied volatility   Lognormal volatility
2               100.00%              100.00%
5               74.54%               78.91%
10              37.51%               39.45%
50              7.54%                7.89%

Optimal equity allocations

Page 26
Section 3: The Robust Contra-Revolution

Some Frustration with Bayesian Methods


Where we have been stuck

The main reason for the success of the BLACK/LITTERMAN model was its “anchoring” in the market portfolio.

• The BLACK/LITTERMAN model looks (very) tired

• Many problems yet unsolved
– BAYES and non-normality
– Informative priors and missing data
– Partial solutions: various ways to deal with estimation error. What's the best combination?

• Interesting new development: economic priors by McCULLOCH

• The required subjectivity (information) is deemed too high by many users; (too) great a level of arbitrariness, too much subjectivity, too much responsibility

• Practitioners want the solution to be built into the optimisation process
Page 27
Section 3 The Robust Contra-Revolution

Reaction 1: Create “Robust” Solutions


Solutions that don’t change “too much”

Weight-based diversification constraints:
• Upper and lower bounds: JAGANNATHAN/MA (2003); can be viewed as leveraging up the respective entries in the covariance matrix
• 1/n rule: DEMIGUEL/GARLAPPI/UPPAL (2007); equal weighting is hard to beat
• Vector norms: DEMIGUEL/GARLAPPI/UPPAL/NOGALES (2007)
• Concentration measures: KING (2008)

Most of the above can be shown to be equivalent to Bayesian shrinkage!

Risk-based diversification constraints:
• Percentage contribution to risks

Page 28
Section 3: The Robust Contra-Revolution

Reaction 2: Create “Robust” Inputs


Combat estimation error impact by making forecasts that don’t differ “too much”

In my view this is widespread in active quant strategies.

• Create ordinal scores (+1 for outperform and −1 for underperform). For a diagonal covariance matrix this leads to

w_i = (1/σ_i) / (Σ_i 1/σ_i)

• Percentile ranks as in SATCHELL/WRIGHT (2003)

• Ordering information as in ALMGREN/CHRISS (2006)

• Robust statistics as reviewed in SCHERER/MARTIN (2005)
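The score-to-weight rule above is a two-liner. A minimal sketch (volatilities and ±1 scores are invented; the gross-exposure normalization is my generalization of the slide's long-only formula, to which it reduces when all scores are +1):

```python
def score_weights(scores, vols):
    """Ordinal scores (+1/-1) scaled by inverse volatility, normalized
    so that absolute weights sum to one (diagonal covariance case)."""
    raw = [s / v for s, v in zip(scores, vols)]
    gross = sum(abs(r) for r in raw)
    return [r / gross for r in raw]

w = score_weights([+1, +1, -1], [0.10, 0.20, 0.40])
print([round(x, 3) for x in w])  # low-vol outperformers get the largest weights
```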

Page 29
Section 3: The Robust Contra-Revolution

Reaction 3: Robust Optimization


Build solution into optimization process

• An early attempt: Tütüncü/Koenig (2004)

• Try to find “good” solutions for all possible parameter realizations:

max_w ( min_{μ∈S_μ, Ω∈S_Ω} w^T μ − λ w^T Ω w )

S_μ: set of all mean vectors
S_Ω: set of all covariance matrices

• How do we define the set of all possible parameters? Imposing short-sale constraints, this simplifies to

max_{w≥0} w^T μ_l − λ w^T Ω_h w

μ_l: minimal element of S_μ for a given confidence
Ω_h: maximal element of S_Ω for a given confidence
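For box uncertainty sets and long-only weights the max-min really does collapse to optimizing once against (μ_l, Ω_h), since the objective is increasing in each mean and decreasing in each variance. A brute-force check on a hypothetical two-asset example with diagonal covariance (all numbers invented):

```python
from itertools import product

LAM = 2.0
MU_BOUNDS = [(0.02, 0.06), (0.01, 0.08)]   # per-asset mean intervals
VAR_BOUNDS = [(0.01, 0.04), (0.02, 0.09)]  # per-asset variance intervals

def utility(w, mu, var):
    # mean-variance utility with a diagonal covariance matrix
    return (sum(wi * m for wi, m in zip(w, mu))
            - LAM * sum(wi * wi * v for wi, v in zip(w, var)))

weights = [(i / 100, 1 - i / 100) for i in range(101)]  # long-only, fully invested

def worst_case(w):
    # inner minimization: worst corner of the parameter box
    return min(utility(w, mu, var)
               for mu in product(*MU_BOUNDS)
               for var in product(*VAR_BOUNDS))

w_robust = max(weights, key=worst_case)

# shortcut: optimize once against lowest means and highest variances
mu_l = [lo for lo, hi in MU_BOUNDS]
var_h = [hi for lo, hi in VAR_BOUNDS]
w_shortcut = max(weights, key=lambda w: utility(w, mu_l, var_h))

print(w_robust == w_shortcut)  # True: the two problems coincide
```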

Page 30
Section 3: The Robust Contra-Revolution

The Perils of Being Too Afraid

• The mean vector represents the lower 5% quantile entries, while the covariance represents the upper 5% entries. Risk aversion of 2.

• Data from Michaud (1998)

– Ad hoc way to define the mean vector and covariance matrix
– Leads to very cornered portfolios
– Poor rooting in classic decision theory

[Chart: weight allocation (0–1.0) by asset class: Eq.Can, Eq.Fra, Eq.Ger, Eq.Jap, Eq.UK, Eq.US, Fi.US, Fi.EU]
Page 31
Section 3: The Robust Contra-Revolution

The Structure of Estimation Error

• For the purpose of this presentation we assume Σ = Ω/n

• We could also allow for different error structures that allow uncorrelated estimation errors and make diversification of estimation errors optimal:

Σ = diag(σ₁²/n, σ₂²/n, …)

• Note that as the number of assets rises, the matrix becomes increasingly difficult to estimate

Page 32
Section 3: The Robust Contra-Revolution

Robust Portfolio Optimization: The Contender

• CERIA/STUBBS (2003) derive the following objective function …

L(w, θ) = w^T μ − κ_{α,m} n^{−1/2} σ_p − (λ/2) σ_p² + θ (w^T 1 − 1)

• … and after some tedious algebra (see SCHERER, 2006) we can get

w*_rob = (1 − n^{−1/2} κ_{α,m} / (λσ*_p + n^{−1/2} κ_{α,m})) · (1/λ) Ω^{−1} (μ − (μ^T Ω^{−1} 1 / 1^T Ω^{−1} 1) · 1) + Ω^{−1} 1 / (1^T Ω^{−1} 1)

       = (1 − n^{−1/2} κ_{α,m} / (λσ*_p + n^{−1/2} κ_{α,m})) · w_spec + w_min

0 ≤ 1 − n^{−1/2} κ_{α,m} / (λσ*_p + n^{−1/2} κ_{α,m}) ≤ 1

The careful reader will realize that this result essentially views robust optimization as a shrinkage estimator. As long as estimation error aversion is positive, the shrinkage term will always be smaller than one.

Page 33
Section 3: The Robust Contra-Revolution

How do solutions differ?


The least risky asset is over-weighted in the solution.

[Two charts of portfolio weights (0–1.0), Traditional Optimization and Robust Optimization, across Eq.Can, Eq.Fra, Eq.Ger, Eq.Jap, Eq.UK, Eq.US, Fi.US, Fi.EU]

Figure 1. Robust versus traditional portfolio construction (λ = 0.01, n = 60, α = 99.99%, S = 1000). Robust portfolios react less sensitively to changes in expected returns. Given the high required confidence of α = 99.99%, robust portfolios invest heavily in assets with little estimation error. This conflicts with our intuition that errors in return estimates become less and less important as we move towards the minimum-risk portfolio. The data are taken from Michaud (1998). The abbreviations used are Fi.EU (fixed income Europe), Fi.US (fixed income US), Eq.US (equity US), Eq.UK (equity UK), Eq.Jap (equity Japan), Eq.Ger (equity Germany), Eq.Fra (equity France) and Eq.Can (equity Canada).
Page 34
Section 3: The Robust Contra-Revolution

Out of sample utility

Some corner portfolios don’t do well out of sample.

[Two histograms (percent of total, 0–50%) of out-of-sample utility (0.3–0.7), for traditional and robust optimization]

Page 35
Section 3: The Robust Contra-Revolution

Out of sample utility


Small sample size (n = 60)

            α = 95%      α = 97.5%    α = 99.99%
λ = 0.05    9.7 bps      8.7 bps      7.79 bps
            (20.16)      (17.76)      (16.09)
            68.5%        63.4%        61.0%
λ = 0.025   -3.13 bps    -5.54 bps    -6.96 bps
            (-18.23)     (-14.84)     (-11.85)
            39.14%       32.6%        28.8%
λ = 0.01    -19.8 bps    -24.09 bps   -26.14 bps
            (-49.6)      (-63.33)     (-70.07)
            13.0%        8.1%         6.8%

Out-of-sample performance for the full investment universe (m = 8). The table shows the performance of robust portfolio optimization relative to traditional portfolio optimization. The first number is the difference in expected utility, which we can interpret as a certainty equivalent (i.e. basis points of monthly performance). The second number (in round brackets) is the t-value of the difference in expected utility (a value of about 2 would be significant at the 5% level for a two-sided hypothesis), while the third number is the percentage of runs where robust optimization generated a higher out-of-sample utility than traditional optimization.

Page 36
Section 3: The Robust Contra-Revolution

Summary

• Robust methods based on estimation error are equivalent to


shrinkage estimators and leave the efficient set unchanged

• Robust methods come at the expense of computational difficulties


(2nd order cone programming), but at the advantage of automated
solutions

• Yet difficult to calibrate uncertainty and risk aversion

Page 37
Section 4: Fairness In Asset Management

Fairness in Asset Management


Treat clients fairly

FSA Principle #6: “A firm must pay due regard to its customers and treat them fairly”

• Implementation of views
– Clients differ in benchmarks, constraints, investment universe, risk budgets
– How do we make sure that all clients receive the same information, i.e. that all client portfolios are consistent?

• Trading and transaction costs


– Asset management firms manage multiple accounts
– Asset manager is seen as mediator and guarantor of fairness in
trading decisions

Page 38
Section 4: Fairness In Asset Management

Portfolio Factory: Consistent View Implementation


How can we ensure all portfolios receive the same information/attention?

• Step 1: Start with unconstrained model portfolio (otherwise views are


contaminated by constraints)
• Step 2: Back out implied returns
• Step 3: Run optimisation with client specific constraints whereby the
same implied returns are used for all clients

Client portfolios will still differ in total and active weight (dispersion),
but not in input information!

• (Step 4: Overcome organisational resistance: the portfolio factory weakens the role of the portfolio manager, portfolio construction becomes a commodity, the optimizer becomes the poor man's portfolio manager, and portfolio managers lose out to analysts, i.e. information providers)
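Steps 1 to 3 can be sketched with a two-asset toy example (all numbers invented). Back out implied returns μ = λΩw from the unconstrained model portfolio; re-optimizing unconstrained with those returns recovers the model portfolio exactly, and the same μ can then feed each client's constrained problem.

```python
LAM = 3.0
# hypothetical 2x2 covariance matrix and unconstrained model portfolio
omega = [[0.04, 0.01],
         [0.01, 0.09]]
w_model = [0.7, 0.3]

def implied_returns(omega, w, lam):
    # first-order condition of max w'mu - (lam/2) w'Omega w  =>  mu = lam * Omega w
    return [lam * sum(omega[i][j] * w[j] for j in range(len(w)))
            for i in range(len(w))]

def unconstrained_mv(omega, mu, lam):
    # w* = (1/lam) * Omega^{-1} mu, with the 2x2 inverse written out explicitly
    (a, b), (c, d) = omega
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [sum(inv[i][j] * mu[j] for j in range(2)) / lam for i in range(2)]

mu = implied_returns(omega, w_model, LAM)
w_back = unconstrained_mv(omega, mu, LAM)
print(mu, w_back)  # w_back reproduces the model portfolio
```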

Page 39
Section 4: Fairness In Asset Management

Optimization Across Multiple Accounts


Why is this a problem?

Nonlinear transaction costs create an externality from one account on another.

Total TC:      τ = θ(Δw)^γ
Marginal TC:   ∂τ/∂Δw = θγ(Δw)^{γ−1}
Average TC:    τ/Δw = θ(Δw)^{γ−1}

The literature provides two solutions:
– NASH solution (CERIA, 2007)
– Collusive, Pareto-optimal solution (O'CINNEIDE/SCHERER/XU, 2006)

[Chart: total, marginal and average transaction costs against the trades Δw_{i,1}, Δw_{i,2} in asset i for accounts 1 and 2; marginal TC differ between TC allocated by the trading desk and TC in independent optimizations]

Page 40
Section 4: Fairness In Asset Management

Optimization Across Multiple Accounts


Model Set Up (joint work with Steve Satchell)

• Two accounts of different size, s, that trade one asset, n.

• Quadratic transaction cost function:

τ = (θ/2)(n₁s₁ + n₂s₂)²

• Standard preferences for each account (note that the cost term reflects cost sharing, i.e. both accounts trade simultaneously):

VA_i = n_i s_i μ − (λ/2)(n_i s_i)² σ² − (θ/2)(n_i s_i + n_j s_j)(n_i s_i)

Page 41
Section 4: Fairness In Asset Management

Stand Alone Solution


Optimize accounts separately without taking interactions into account (batch job)

• “Optimal” trading:

n_i^SA = argmax_{n_i} [n_i s_i μ − (λ/2)(n_i s_i)² σ² − (θ/2)(n_i s_i)²] = μ / (s_i (θ + λσ²))

VA_i^SA + VA_j^SA = λμ²σ² / (θ + λσ²)²

[Chart: stand-alone solution in the (n₁, n₂) plane, 0–0.4]

Page 42
Section 4: Fairness In Asset Management

COURNOT/NASH-Solution
Interaction is accounted for but treated as given

• First-order condition (solving leads to the reaction functions below):

dVA_i/dn_i = μs_i − λ n_i s_i² σ² − θ n_i s_i² − (1/2) θ n_j s_i s_j = 0

n_j^CN − n_j^SA = 2μ / (s_j (3θ + 2λσ²)) − μ / (s_j (θ + λσ²))
               = −θμ / (s_j (θ + λσ²)(3θ + 2λσ²)) < 0

VA_i^CN − VA_i^SA = 2μ²(θ + λσ²) / (3θ + 2λσ²)² − λμ²σ² / (2(θ + λσ²)²)
                 = μ²θ²(4θ + 3λσ²) / (2(θ + λσ²)²(3θ + 2λσ²)²) > 0

[Chart: Cournot/Nash reaction functions in the (n₁, n₂) plane, 0.1–0.4]
Section 4: Fairness In Asset Management

Collusive Solution
Full interaction is accounted for

• Combined objective function (monopoly):

VA = VA_i + VA_j = n_i μ − (λ/2) n_i² σ² − (θ/2)(n_i s_i + n_j s_j)(n_i s_i) + n_j μ − (λ/2) n_j² σ² − (θ/2)(n_i s_i + n_j s_j)(n_j s_j)

• Leads to less trading and higher value added (risk-adjusted client performance):

VA_i^C − VA_i^CN = (1/2) θ²μ² / ((2θ + λσ²)(3θ + 2λσ²)²) > 0

n_i^C / n_i^CN = 1 − θ / (4θ + 2λσ²) < 1
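The ranking of the solutions can be checked numerically. A brute-force sketch for two identical accounts (s = 1; the parameter values are arbitrary): each regime's trade is found by grid search, and the collusive trade is smaller yet delivers a higher per-account value added than the Nash trade.

```python
MU, LAM, SIG, THETA = 0.05, 2.0, 0.10, 0.5
GRID = [i / 10000 for i in range(2001)]  # candidate trades n in [0, 0.2]

def va(ni, nj):
    # per-account value added with shared quadratic transaction costs (s = 1)
    return ni * MU - 0.5 * LAM * (ni * SIG) ** 2 - 0.5 * THETA * (ni + nj) * ni

# Nash: iterate best responses until the symmetric fixed point is reached
n_nash = 0.0
for _ in range(100):
    n_nash = max(GRID, key=lambda n: va(n, n_nash))

# collusive: a single symmetric trade maximizing total value added
n_coll = max(GRID, key=lambda n: 2.0 * va(n, n))

print(n_nash, n_coll)
print(va(n_coll, n_coll) > va(n_nash, n_nash), n_coll < n_nash)
```

The Nash fixed point matches the closed-form 2μ/(3θ + 2λσ²) above up to the grid resolution.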

Page 44
Section 5: Risk Management – About Feathers and Stones

The Story of Feathers And Stones


Does leverage matter?

• Ask a 5-year-old: What is heavier, 1 kg of feathers or 1 kg of stones?

• Ask some risk managers: What is riskier, a portfolio with 10% volatility and a leverage of one, or a portfolio with 10% volatility and a leverage of 2? This bears directly on ARTZNER's axiom of homogeneity.

• If you answer like a 5-year-old you
– might be concerned about liquidity risk
– might exhibit high risk aversion against estimation error risk (KNIGHTIAN uncertainty), in which case cash is right for you (the only asset that exhibits neither estimation nor investment risk)

• In any case: What is the difference between a 100 million investment in a fund with 10% volatility and leverage 4 and a 50 million investment in a fund with volatility 20% and leverage 8?

Page 45
Section 5: Risk Management – About Feathers and Stones

Optimal Leverage 1: Maximize Geometric Return


Too much leverage leads to lower long term return

The argument is closely related to the debate between SHANNON and SAMUELSON in the 1970s; see SCHERER (editor), 2008, Portfolio Management, Risk Books: London, forthcoming.

• We know from Jensen's inequality that average returns are a positive function of the expected log return c + μθ and a negative function of the log-return variance ½θ²σ², where θ denotes leverage, c denotes cash and μ is the strategy-specific risk premium.

• In this setting exp(c + μθ − ½θ²σ²) is maximized for θ = μσ⁻². Any more leverage will reduce average returns.

• Example: a GTAA strategy with 5% excess return and 10% volatility should lever at most 5 times.

[Chart: geometric return (6%–18%) against leverage (0–8), peaking at θ = μσ⁻²]
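The optimum can be verified by brute force. A short sketch with the slide's numbers (μ = 5%, σ = 10%, c = 0): the expected log growth c + μθ − ½θ²σ² peaks at θ = μ/σ² = 5.

```python
MU, SIGMA, CASH = 0.05, 0.10, 0.0

def expected_log_growth(theta):
    # expected log return of a strategy levered theta times
    return CASH + MU * theta - 0.5 * theta**2 * SIGMA**2

thetas = [i / 100 for i in range(0, 801)]  # leverage from 0 to 8
best = max(thetas, key=expected_log_growth)
print(best)  # 5.0: any more leverage lowers long-run growth
```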

Page 46
Section 5: Risk Management – About Feathers and Stones

Optimal Leverage 2: Fees at Risk


Leverage across time might depend on fee schedules

Critical loss levels (where a manager gets terminated by the client) will drive optimal leverage over time.

Source: SCHERER/XU (2007), Journal of Investment Management, Vol5n3, page 11

Page 47
Section 5: Risk Management – About Feathers and Stones

Optimal Leverage 3: Efficiency Gains


Decomposing the efficiency loss: Scherer/Xu (2007)

Percentage contribution to loss in utility:

Correlation of alpha and size   Lambda   Utility loss   Cash neutrality   Beta neutrality   Size neutrality   Long only
ρ(αi, si) = 0                   10       0.0133         2%                0%                0%                98%
                                4        0.0469         4%                0%                0%                96%
ρ(αi, si) = −0.3                10       0.0146         2%                0%                13%               85%
                                4        0.0445         3%                0%                4%                93%
ρ(αi, si) = −0.6                10       0.0183         2%                0%                47%               51%
                                4        0.0517         4%                0%                43%               53%

Percentage contribution to loss in utility. The above table shows the loss in utility as well as its percentage contribution. Calculations are performed using the BARRA covariance matrix as of 06/30/2006. All alphas are drawn from a normal distribution with mean and standard deviation equal to 0 and 0.006, respectively. The dominance of the long-only constraint becomes considerably smaller if the focus of an investment process shifts away from stock selection.

Page 48
Presenter

Bernd Scherer, Ph.D.


Managing Director

bernd.scherer@morganstanley.com
Bernd is Global Head of Quantitative
GTAA. He joined Morgan Stanley in 2007
and has 13 years of investment
experience. Prior to joining the firm,
Bernd worked at Deutsche Bank Asset
Management as Head of the Quantitative
Strategies Group's Research Center as
well as Head of Portfolio Engineering in
New York. Before this he headed the
Investment Solutions and Overlay
Management Group in Frankfurt. Bernd
has also held various positions at Morgan
Stanley, Oppenheim Investment
Management, Schroders and JPMorgan
Investment Management. He authored
several books on quantitative asset
management and more than 40 articles in
refereed journals. Bernd received Master's
degrees in economics from the University
of Augsburg, and the University of
London and a Ph.D. from the University
of Giessen. He is visiting professor at
Birkbeck College.

Page 49
