
COVERS EVERY LEARNING OBJECTIVE ON THE EXAM

Wiley

Wiley FRM Exam Review Study Guide 2019
Part II
Market Risk Measurement and Management,
Credit Risk Measurement and Management,
Operational and Integrated Risk Management,
Risk Management and Investment Management,
Current Issues in Financial Markets
Christian H. Cooper, CFA, FRM

WILEY
Cover image: Loewy
Cover design: Loewy
Copyright © 2019 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any
form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise,
except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without
either the prior written permission of the Publisher, or authorization through payment of the
appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers,
MA 01923, (978) 750-8400, fax (978) 646-8600, or on the Web at www.copyright.com. Requests
to the Publisher for permission should be addressed to the Permissions Department, John Wiley &
Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at
http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best
efforts in preparing this book, they make no representations or warranties with respect to the
accuracy or completeness of the contents of this book and specifically disclaim any implied
warranties of merchantability or fitness for a particular purpose. No warranty may be created or
extended by sales representatives or written sales materials. The advice and strategies contained
herein may not be suitable for your situation. You should consult with a professional where
appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other
commercial damages, including but not limited to special, incidental, consequential, or other
damages.
For general information on our other products and services or for technical support, please contact
our Customer Care Department within the United States at (800) 762-2974, outside the United
States at (317) 572-3993 or fax (317) 572-4002.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material
included with standard print versions of this book may not be included in e-books or in
print-on-demand. If this book refers to media such as a CD or DVD that is not included in the
version you purchased, you may download this material at http://booksupport.wiley.com. For more
information about Wiley products, visit www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
ISBN 978-1-119-57390-6 (ePDF); ISBN 978-1-119-57388-3 (ePub)
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
Contents

How to Study for the Exam xiii

About the Author xiv

Market Risk Measurement and Management


Lesson: Kevin Dowd, Measuring Market Risk, 2nd Edition (West Sussex, UK: John Wiley & Sons,
2005). Chapter 3. Estimating Market Risk Measures: An Introduction and Overview 3

Lesson: Kevin Dowd, Measuring Market Risk, 2nd Edition (West Sussex, England: John Wiley &
Sons, 2005). Chapter 4. Non-Parametric Approaches 11

Lesson: Philippe Jorion, Value-at-Risk: The New Benchmark for Managing Financial Risk, 3rd
Edition (New York: McGraw-Hill, 2007). Chapter 6. Backtesting VaR 15

Lesson: Philippe Jorion, Value-at-Risk: The New Benchmark for Managing Financial Risk, 3rd
Edition (New York: McGraw-Hill, 2007). Chapter 11. VaR Mapping 19

Lesson: "Messages from the Academic Literature on Risk Measurement for the Trading Book,"
Basel Committee on Banking Supervision, Working Paper No. 19, January 2011 25

Lesson: Gunter Meissner, Correlation Risk Modeling and Management (Hoboken, NJ: John
Wiley & Sons, 2014). Chapter 1. Some Correlation Basics: Properties, Motivation, Terminology 31

Lesson: Gunter Meissner, Correlation Risk Modeling and Management (Hoboken, NJ: John Wiley
& Sons, 2014). Chapter 2. Empirical Properties of Correlation: How Do Correlations Behave in the
Real World? 37

Lesson: Gunter Meissner, Correlation Risk Modeling and Management (Hoboken, NJ: John
Wiley & Sons, 2014). Chapter 3. Statistical Correlation Models—Can We Apply Them to
Finance? 41

Lesson: Gunter Meissner, Correlation Risk Modeling and Management (Hoboken, NJ: John Wiley
& Sons, 2014). Chapter 4. Financial Correlation Modeling—Bottom-Up Approaches (Sections
4.3.0 (Intro), 4.3.1, and 4.3.2 only) 43

Lesson: Bruce Tuckman, Fixed Income Securities, 3rd Edition (Hoboken, NJ: John Wiley & Sons,
2011). Chapter 6. Empirical Approaches to Risk Metrics and Hedges 49

Lesson: Bruce Tuckman, Fixed Income Securities, 3rd Edition (Hoboken, NJ: John Wiley &
Sons, 2011). Chapter 7. The Science of Term Structure Models 53

Lesson: Bruce Tuckman, Fixed Income Securities, 3rd Edition (Hoboken, NJ: John Wiley & Sons,
2011). Chapter 8. The Evolution of Short Rates and the Shape of the Term Structure 63

Lesson: Bruce Tuckman, Fixed Income Securities, 3rd Edition (Hoboken, NJ: John Wiley & Sons,
2011). Chapter 9. The Art of Term Structure Models: Drift 69

Lesson: Bruce Tuckman, Fixed Income Securities, 3rd Edition (Hoboken, NJ: John Wiley & Sons,
2011). Chapter 10. The Art of Term Structure Models: Volatility and Distribution 79

Lesson: John Hull, Options, Futures, and Other Derivatives, 10th Edition (New York:
Pearson, 2017). Chapter 20. Volatility Smiles 83

Credit Risk Measurement and Management


Lesson: Jonathan Golin and Philippe Delhaise, The Bank Credit Analysis Handbook (Hoboken, NJ:
John Wiley & Sons, 2013). Chapter 1. The Credit Decision 89

Lesson: Jonathan Golin and Philippe Delhaise, The Bank Credit Analysis Handbook (Hoboken, NJ:
John Wiley & Sons, 2013). Chapter 2. The Credit Analyst 93

Lesson: Giacomo De Laurentis, Renato Maino, and Luca Molteni, Developing, Validating and
Using Internal Ratings (West Sussex, United Kingdom: John Wiley & Sons, 2010). Classifications
and Key Concepts of Credit Risk 95

Lesson: Giacomo De Laurentis, Renato Maino, and Luca Molteni, Developing, Validating and
Using Internal Ratings (West Sussex, United Kingdom: John Wiley & Sons, 2010). Chapter 3.
Ratings Assignment Methodologies 101

Lesson: Rene Stulz, Risk Management & Derivatives (Florence, KY: Thomson South-Western,
2002). Chapter 18. Credit Risks and Credit Derivatives 115

Lesson: Allan Malz, Financial Risk Management: Models, History, and Institutions (Hoboken, NJ:
John Wiley & Sons, 2011). Chapter 7. Spread Risk and Default Intensity Models 123

Lesson: Allan Malz, Financial Risk Management: Models, History, and Institutions (Hoboken, NJ:
John Wiley & Sons, 2011). Chapter 8. Portfolio Credit Risk (Sections 8.1, 8.2, and 8.3 only) 131

Lesson: Allan Malz, Financial Risk Management: Models, History, and Institutions (Hoboken, NJ:
John Wiley & Sons, 2011). Chapter 9. Structured Credit Risk 139

Lesson: Jon Gregory, The xVA Challenge: Counterparty Credit Risk, Funding, Collateral, and Capital,
3rd Edition (West Sussex, UK: John Wiley & Sons, 2015). Chapter 4. Counterparty Risk 147

Lesson: Jon Gregory, The xVA Challenge: Counterparty Credit Risk, Funding, Collateral, and Capital,
3rd Edition (West Sussex, UK: John Wiley & Sons, 2015). Chapter 5. Netting, Close-out and
Related Aspects 153

Lesson: Jon Gregory, The xVA Challenge: Counterparty Credit Risk, Funding, Collateral, and Capital,
3rd Edition (West Sussex, UK: John Wiley & Sons, 2015). Chapter 6. Collateral 157

Lesson: Jon Gregory, The xVA Challenge: Counterparty Credit Risk, Funding, Collateral, and Capital,
3rd Edition (West Sussex, UK: John Wiley & Sons, 2015). Chapter 7. Credit Exposure and Funding 163

Lesson: Jon Gregory, The xVA Challenge: Counterparty Credit Risk, Funding, Collateral, and Capital,
3rd Edition (West Sussex, UK: John Wiley & Sons, 2015). Chapter 9. Counterparty Risk
Intermediation 169

Lesson: Jon Gregory, The xVA Challenge: Counterparty Credit Risk, Funding, Collateral, and Capital,
3rd Edition (West Sussex, UK: John Wiley & Sons, 2015). Chapter 12. Default Probabilities, Credit
Spreads, and Funding Costs 175

Lesson: Jon Gregory, The xVA Challenge: Counterparty Credit Risk, Funding, Collateral, and Capital,
3rd Edition (West Sussex, UK: John Wiley & Sons, 2015). Chapter 14. Credit and Debt Value
Adjustments 181

Lesson: Jon Gregory, The xVA Challenge: Counterparty Credit Risk, Funding, Collateral, and Capital,
3rd Edition (West Sussex, UK: John Wiley & Sons, 2015). Chapter 17. Wrong-way Risk 187

Lesson: Stress Testing: Approaches, Methods, and Applications, Edited by Akhtar Siddique and
Iftekhar Hasan (London: Risk Books, 2013). Chapter 4. The Evolution of Stress Testing
Counterparty Exposures 191

Lesson: Michel Crouhy, Dan Galai, and Robert Mark, The Essentials of Risk Management, 2nd
Edition (New York: McGraw-Hill, 2014). Chapter 9. Credit Scoring and Retail Credit Risk
Management 199

Lesson: Michel Crouhy, Dan Galai, and Robert Mark, The Essentials of Risk Management,
2nd Edition (New York: McGraw-Hill, 2014). Chapter 12. The Credit Transfer Markets and Their
Implications 205

Lesson: Moorad Choudhry, Structured Credit Products: Credit Derivatives & Synthetic
Securitization, 2nd Edition (New York: John Wiley & Sons, 2010). Chapter 12. An Introduction to
Securitization 213

Lesson: Adam Ashcraft and Til Schuermann, "Understanding the Securitization of Subprime
Mortgage Credit," Federal Reserve Bankof New York Staff Reports, No. 318 (March 2008) 221

Operational and Integrated Risk Management


Lesson: "Principles for the Sound Management of Operational Risk" (Basel Committee on
Banking Supervision Publication, June 2011) 227

Lesson: Brian Nocco and Rene Stulz, "Enterprise Risk Management: Theory and Practice,"
Journal of Applied Corporate Finance 18, no. 4 (2006): 8-20 235

Lesson: "Observations on Developments in Risk Appetite Frameworks and IT Infrastructure"


Senior Supervisors Group, December 2010 239

Lesson: Anthony Tarantino and Deborah Cernauskas, Risk Management in Finance: Six Sigma
and Other Next Generation Techniques (Hoboken, NJ: John Wiley & Sons, 2009). Chapter 3.
Information Risk and Data Quality Management 243

Lesson: Marcelo G. Cruz, Gareth W. Peters, and Pavel V. Shevchenko, Fundamental Aspects
of Operational Risk and Insurance Analytics: A Handbook of Operational Risk
(Hoboken, NJ: John Wiley & Sons, 2015). Chapter 2. OpRisk Data and Governance 245

Lesson: Philippa X. Girling, Operational Risk Management: A Complete Guide to a Successful
Operational Risk Framework (Hoboken, NJ: John Wiley & Sons, 2013).
Chapter 8. External Loss Data 251

Lesson: Philippa X. Girling, Operational Risk Management: A Complete Guide to a Successful
Operational Risk Framework (Hoboken, NJ: John Wiley & Sons, 2013). Chapter 12. Capital
Modeling 255

Lesson: Kevin Dowd, Measuring Market Risk, 2nd Edition (West Sussex, UK: John Wiley & Sons,
2005). Chapter 7. Parametric Approaches (II): Extreme Value 261

Lesson: Giacomo De Laurentis, Renato Maino, and Luca Molteni, Developing, Validating and
Using Internal Ratings (Hoboken, NJ: John Wiley & Sons, 2010). Chapter 5. Validating Rating
Models 265

Lesson: Michel Crouhy, Dan Galai, and Robert Mark, The Essentials of Risk Management,
2nd Edition (New York: McGraw-Hill, 2014). Chapter 15. Model Risk 269

Lesson: Michel Crouhy, Dan Galai, and Robert Mark, The Essentials of Risk Management, 2nd
Edition (New York: McGraw-Hill, 2014). Chapter 17. Risk Capital Attribution and Risk-Adjusted
Performance Measurement 273

Lesson: "Range of Practices and Issues in Economic Capital Frameworks,"


Basel Committee on Banking Supervision Publication, March 2009 277

Lesson: "Capital Planning at Large Bank Holding Companies: Supervisory Expectations and
Range of Current Practice," Board of Governors of the Federal Reserve System, August 2013 289

Lesson: Bruce Tuckman and Angel Serrat, Fixed Income Securities: Tools for Today's
Markets, 3rd Edition (Hoboken, NJ: John Wiley & Sons, 2011). Chapter 12.
Repurchase Agreements and Financing 295

Lesson: Kevin Dowd, Measuring Market Risk, 2nd Edition (West Sussex, UK: John Wiley & Sons,
2005). Chapter 14. Estimating Liquidity Risks 303

Lesson: Allan Malz, Financial Risk Management: Models, History, and Institutions
(Hoboken, NJ: John Wiley & Sons, 2011). Chapter 11. Section 11.1, Assessing the
Quality of Risk Measures 307

Lesson: Allan Malz, Financial Risk Management: Models, History, and Institutions (Hoboken, NJ:
John Wiley & Sons, 2011). Chapter 12. Liquidity and Leverage 311

Lesson: Darrell Duffie, "The Failure Mechanics of Dealer Banks," Journal of Economic
Perspectives 24, no. 1 (2010): 51-72 321

Lesson: Til Schuermann, "Stress Testing Banks," prepared for the Committee on Capital Market
Regulation, Wharton Financial Institutions Center, April 2012 325

Lesson: "Guidance on Managing Outsourcing Risk," Board of Governors of the


Federal Reserve System, December 2013 329

Lesson: John C. Hull, Risk Management and Financial Institutions, 5th Edition, (Hoboken, NJ:
John Wiley & Sons, 2015). Chapter 15. Basel I, Basel II, and Solvency II 331

Lesson: John C. Hull, Risk Management and Financial Institutions, 5th Edition,
(Hoboken, NJ: John Wiley & Sons, 2015). Chapter 16. Basel II.5, Basel III,
and Other Post-Crisis Changes 343

Lesson: John C. Hull, Risk Management and Financial Institutions, 5th Edition, (Hoboken, New
Jersey: John Wiley & Sons, 2018). Chapter 17. Regulation of the OTC Derivatives Market 349

Lesson: John C. Hull, Risk Management and Financial Institutions, 5th Edition, (Hoboken, NJ:
John Wiley & Sons, 2018). Chapter 18. Fundamental Review of the Trading Book 351

Lesson: "High-Level Summary o f Basel III Reforms" (Basel Committee on Banking Supervision
Publication, December 2017) 355

Lesson: "Basel III: Finalising Post-Crisis Reforms,"(Basel Committee on Banking


Supervision Publication, December 2017) 363

Lesson: "Sound Management of Risks Related to Money Laundering and Financing of


Terrorism," (Basel Committee on Banking Supervision, June 2017) 369

Lesson: Regulatory Readings 373

Risk Management and Investment Management


Lesson: Andrew Ang, Asset Management: A Systematic Approach to Factor Investing (New York:
Oxford University Press, 2014). Chapter 6. Factor Theory 377

Lesson: Andrew Ang, Asset Management: A Systematic Approach to Factor Investing
(New York: Oxford University Press, 2014). Chapter 7. Factors 385

Lesson: Andrew Ang, Asset Management: A Systematic Approach to Factor Investing (New York:
Oxford University Press, 2014). Chapter 10. Alpha (and the Low-Risk Anomaly) 391

Lesson: Andrew Ang, Asset Management: A Systematic Approach to Factor Investing
(New York: Oxford University Press, 2014). Chapter 13. Illiquid Assets (excluding section 13.5 -
Portfolio Choice with Illiquid Assets) 401

Lesson: Richard Grinold and Ronald Kahn, Active Portfolio Management: A Quantitative
Approach for Producing Superior Returns and Controlling Risk, 2nd Edition (New York:
McGraw-Hill, 2000). Chapter 14. Portfolio Construction 405

Lesson: Philippe Jorion, Value-at-Risk: The New Benchmark for Managing Financial Risk,
3rd Edition (New York: McGraw-Hill, 2007). Chapter 07. Portfolio Risk:
Analytical Methods 411

Lesson: Philippe Jorion, Value-at-Risk: The New Benchmark for Managing Financial Risk,
3rd Edition (New York: McGraw-Hill, 2007). Chapter 17. VaR and Risk Budgeting in Investment
Management 417

Lesson: Robert Litterman and the Quantitative Resources Group, Modern Investment
Management: An Equilibrium Approach (Hoboken, NJ: John Wiley & Sons, 2003). Chapter 17. Risk
Monitoring and Performance Measurement 421

Lesson: Zvi Bodie, Alex Kane, and Alan J. Marcus, Investments, 11th Edition (New York:
McGraw-Hill, 2017). Chapter 24. Portfolio Performance Evaluation 425

Lesson: G. Constantinides, M. Harris, and R. Stulz, eds., Handbook of the Economics of Finance,
Volume 2B (Oxford, UK: Elsevier, 2013). Chapter 17. Hedge Funds,
by William Fung and David Hsieh 435

Lesson: Kevin R. Mirabile, Hedge Fund Investing: A Practical Approach to Understanding Investor
Motivation, Manager Profits, and Fund Performance, 2nd Edition (Hoboken, NJ: Wiley Finance,
2016). Chapter 12. Performing Due Diligence on Specific Managers and Funds 441

Current Issues in Financial Markets


Lesson: Kopp, Emanuel and Kaffenberger, Lincoln and Wilson, Christopher, "Cyber Risk, Market
Failures, and Financial Stability," (August 2017). IMF Working Paper No. 17/185 447

Lesson: Hal Varian, "Big Data: New Tricks for Econometrics," Journal of Economic Perspectives
28:2 (Spring 2014), 3-28 453

Lesson: Bart van Liebergen, "Machine Learning: A Revolution in Risk Management and
Compliance?" Institute of International Finance, April 2017 455

Lesson: "Artificial Intelligence and Machine Learning in Financial Services," Financial Stability
Board, Nov. 1, 2017 459

Lesson: Gomber, Peter and Kauffman, Robert J. and Parker, Chris and Weber, Bruce, "On the
Fintech Revolution: Interpreting the Forces of Innovation, Disruption and Transformation in
Financial Services," (December 20,2017). Journal of Management Information Systems, 35(1),
2018,220-265 465

Lesson: Rama Cont, "Central Clearing and Risk Transformation," Norges Bank Research,
March 2017 469

Lesson: "What is S0FR?"CME Group, March 2018 473

How to Study for the Exam
The FRM Exam Part II curriculum covers the tools used to assess financial risk:

• Market risk measurement and management—25%
• Credit risk measurement and management—25%
• Operational and integrated risk management—25%
• Risk management and investment management—15%
• Current issues in financial risk management—10%

It is important to focus only on the learning objectives as GARP asks about them, and to pay
close attention to the weight of each section. That is the core of my focus throughout
the text, the online lecture sessions, and the practice questions. A study hour doesn't
count unless you are laser-focused on specifically how GARP asks about a learning objective.

Consistency is also key. Setting a regular weekly study time is important for
staying on track. There is a reason only about 50% of candidates pass the exam every year:
it's a tough exam. It also tests intuition, not just memorization. That is why I attempt at
every opportunity to connect the dots across readings and to teach how changing
environments change both markets and the models we use to describe them, as well as
helping you with the questions GARP specifically wants you to calculate an outcome for.

Calculator policy:

It is best to begin your study with one of the approved calculators. You will not be
admitted to the exam without one of these approved calculators!

• Hewlett Packard 12C (including the HP 12C Platinum and the Anniversary
Edition)
• Hewlett Packard 10BII
• Hewlett Packard 10BII+
• Hewlett Packard 20B
• Texas Instruments BA II Plus (including the BA II Plus Professional)

Every year, candidates are turned away from the exam site because of wrong calculators.

Make sure you aren’t one of them.

ABOUT THE AUTHOR
Christian H. Cooper is an author and trader based in New York City. He initially created
this FRM study program because, as a candidate, he was frustrated with the quality of study
programs available. Writing from a practitioner's point of view, Christian drew on his
experience as a trader across fixed income and equity markets, most recently as head of
derivatives trading at a bank in New York, to create a program that is very focused on
exam results while connecting the dots across topics to increase intuition and
understanding.

Christian is a graduate of King College and holds both the CFA and FRM designations. He
is a Truman National Security Fellow, a former Term Member at the Council on Foreign
Relations, and active with the Aspen Institute. He lives and works in Lower Manhattan.

Market Risk Measurement and Management (MR)
The broad areas of knowledge covered in readings related to Market Risk Measurement
and Management include the following:

• VaR and other risk measures:


o Parametric and nonparametric methods of estimation
o VaR mapping
o Backtesting VaR
o Expected shortfall (ES) and other coherent risk measures
o Extreme value theory (EVT)
• Modeling dependence: correlations and copulas
• Term structure models of interest rates
• Discount rate selection
• Volatility: smiles and term structures

Dowd, Chapter 3
Kevin Dowd, Measuring Market Risk, 2nd Edition (West Sussex, UK: John Wiley &
Sons, 2005). Chapter 3. Estimating Market Risk Measures: An Introduction and Overview

After completing this reading you should be able to:

• Estimate VaR using a historical simulation approach.


• Estimate VaR using a parametric approach for both normal and lognormal return
distributions.
• Estimate the expected shortfall given P/L or return data.
• Define coherent risk measures.
• Estimate risk measures by estimating quantiles.
• Evaluate estimators of risk measures by estimating their standard errors.
• Interpret QQ plots to identify the characteristics of a distribution.

Reading notes: This is a very dense reading about 20 pages long with a lot of VaR
“estimate” questions. This is probably one of the most important readings in the Market
Risk Measurement and Management section of the exam. Make sure you can calculate
these answers and be able to qualitatively discuss these topics. However, don’t get lost in
the technical material that asks you to “describe.” The assigned readings go into deep
detail that isn’t required for the exam. Make sure you are paying attention to what you are
actually asked.

Learning objective: Estimate VaR using a historical simulation approach.

First recognize that as flawed as VaR is, VaR based purely on historical data is even worse.
The idea here is to order the historical losses observed in a portfolio by the actual
amount of the loss. So, if we have 1,000 loss observations ordered from largest to smallest
and we are interested in the 97% confidence level, that implies that the 30 largest losses
will be in the tail of the distribution that defines the VaR of this portfolio.

So "estimate" here really means "observe," and in the example the 31st largest loss
observation would be the VaR of the portfolio based on the losses the portfolio
has historically experienced.
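A minimal Python sketch of this ordering logic, with simulated data standing in for the 1,000 observations; the function name and data are illustrative only, not from the reading:

```python
import numpy as np

def historical_var(pnl, confidence=0.97):
    """Historical simulation VaR: order the losses and read off the quantile.

    pnl: array of profit/loss observations (losses are negative).
    Returns VaR as a positive loss amount.
    """
    losses = -np.asarray(pnl)               # convert P/L to losses
    losses = np.sort(losses)[::-1]          # order from largest loss down
    tail_size = int(len(losses) * (1 - confidence))  # 30 of 1,000 at 97%
    return losses[tail_size]                # the 31st-largest loss

# Example with 1,000 simulated daily P/L observations
np.random.seed(42)
pnl = np.random.normal(20, 30, 1000)
print(historical_var(pnl, 0.97))
```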

VaR is flawed when used incorrectly, and VaR based on historical observation is really
flawed. There is no insight into the risk the portfolio has, no idea how that risk may
change in the future, and so on. There are almost too many problems here to mention and
even more problems when used with a portfolio that has embedded options.

Learning objective: Estimate VaR using a parametric approach for both normal and lognormal return distributions.

I will move forward on the assumption you are very comfortable with the lognormal and
normal distributions; take a break here if you need a review.

Starting first with the assumption of a normal distribution, we have to define the mean and
standard deviation of the portfolio in order to calculate the VaR. Likely this will be given

to you on the exam, but in the real world this will be estimated parameters based on the
risk manager’s expectations.

Since we want to interpret VaR in terms of lost money, our formula for losses is:

VaR = −μ + σZα

where μ is the expected profit or loss (it enters with a negative sign because VaR is stated
as a loss), σ is the standard deviation of profit and loss, and Zα is the standard normal
variate corresponding to the confidence level we are looking for. Recall that the standard
normal variate at the 95% confidence level is 1.645.

If profits and losses over some period are normally distributed with a mean of 20 and a
standard deviation of 30, the 95% VaR is calculated as:

VaR = −20 + 30(1.645) = 29.35

This is straightforward enough, but the learning objective is asking about a normal and
lognormal distribution of returns, not dollars gained or lost.

When we transition to returns, we have to add additional parameters describing the
starting dollar value and the ending dollar value so we can arrive at the return estimate.

Furthermore, we have to establish some critical return r* such that the probability of a
return worse than r* equals our chosen level of significance.

Taking the previous equation:

VaR = −μ + σZα

I am going to modify it to:

r* = −μ + σZα

This will be our return critical value that we need to establish a confidence interval around
the return of our portfolio at any given time, r_t.

Also, since any given return is the ending value relative to the starting value, we can define
r_t as:

r_t = (P_t − P_(t−1)) / P_(t−1)

Don't let all the equations fool you; we are defining return as you normally would: my
return is my portfolio value at time t minus what it was before, divided by the starting
value.

However, since we are thinking about value at risk, we are only concerned with the losses.

We can extend the relationship to include VaR and recover our critical value:

r* = VaR / P_(t−1)

Now, substitute into this equation:

r* = −μ + σZα

to arrive at:

VaR / P_(t−1) = −μ + σZα

and last,

VaR = (−μ + σZα) × P_(t−1)

The only difference between the normal and lognormal version is going to be the critical
values.

Let's learn a quick calculation. If I know returns are normally distributed with a mean of 12%
and a standard deviation of 18%, and I want the 95% VaR:

VaR = −0.12 + 0.18(1.645) = 0.1761, or 17.61% of portfolio value (working in decimals rather than percentages)

On the exam, you may be given sample estimates of profits and losses, which would mean you
don't use the population parameters μ and σ, but the math and the order of operations are the same.

Lognormal Version

The real dividing issue between normal and lognormal that should be clear by now is that
a normal distribution is symmetric and assigns an equal probability of going up or down
in price. This implies that prices can be negative if you use the normal distribution. Now
for stocks, this doesn’t make sense since they have a zero floor. That is why when we talk
about financial assets we often refer to the distribution of returns being normally
distributed, not the asset prices themselves, which, of course, is the lognormal
distribution. This is why we need the lognormal.

So how do we calculate VaR when returns are lognormally distributed? From the discussion of
normally distributed VaR, we know we are dealing with potential returns instead of dollar
values of potential loss (price moves) under the normal distribution, so we have the extra
step of converting our lognormally derived VaR back into dollar terms.

You do not have to memorize the derivation of this formula, but you will use the very last
line in the calculation.

As in the normal case, we are going to use the formula from before, but let's introduce a new
random variable X* that will correspond to a loss equal to our VaR. Ultimately, this is what
we want to know.

Our original equation:

VaR = −μ + σZα

becomes:

X* = −μ + σZα

Since we are using geometric returns, we insert a term that describes the price at some
period right now (assuming a loss) relative to some time in the past to establish our return.
It sounds complicated, but this is the same as a $50 stock that goes to $45, so the loss is
10%: 45/50 = 0.90 and (1 − 0.90) = 0.10. We are going to do the exact same thing here.

X* = ln(Price_now / Price_past)

Reordering terms and using the property of logarithms that the log of a ratio can be
expressed as the difference of two logs:

X* = ln(Price_now) − ln(Price_past)

Remember, X* is our critical value that ultimately becomes our VaR at some certainty,
alpha.

Reordering terms,

ln(Price_now) = X* + ln(Price_past)

Exponentiate both sides to undo the logarithm; don't worry about the intermediate
algebra, just remember the last step.

Price_now = Price_past × e^(X*)

Insert the original equation for X*:

Price_now = Price_past × e^(−μ + σZα)

Skipping a few messy steps that are unnecessary to know for the exam, we arrive at what
you need to know for VaR under a lognormal distribution of returns, and that is this
relationship:

VaR = Price_past × [1 − exp(μ − σZα)]

To bring this all together, and since this is an “estimate” question, you may be asked to
calculate/evaluate this on exam day:

Let’s assume returns are normally distributed with mean 10% and standard deviation
18%, and our portfolio is $100.

We can express the lognormal VaR at the 95% level as:

VaR = 100 × [1 − exp(0.10 − 0.18 × 1.645)] ≈ 100 × 0.178, or about $17.80
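Both parametric formulas are easy to script. A minimal Python sketch reproducing the two worked examples above; the function names are mine, not from the reading:

```python
import math

def normal_var(mu, sigma, z=1.645, portfolio=1.0):
    # Normal VaR as a fraction of portfolio value: -mu + sigma * z
    return (-mu + sigma * z) * portfolio

def lognormal_var(mu, sigma, z=1.645, portfolio=1.0):
    # Lognormal VaR in dollar terms: P * (1 - exp(mu - sigma * z))
    return portfolio * (1 - math.exp(mu - sigma * z))

print(normal_var(0.12, 0.18))                    # 0.1761, the normal example
print(lognormal_var(0.10, 0.18, portfolio=100))  # about 17.80, the lognormal example
```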

Learning objective: Estimate the expected shortfall given P/L or return data.

Risk as Shortfall

Another way to specify a risk objective is by means of the expected portfolio standard
deviation, which is the square root of the expected portfolio variance. For example, the
S&P 500 might have an annual portfolio standard deviation of 23%. Using this figure and
either a normal or lognormal distribution, we can quantify the probability that a given
portfolio will have a loss, or a return below a certain minimum requirement (this
probability is called a shortfall risk).

As might be expected, the focus of this criterion is minimizing the chances that a specified
minimum return (say R_L) is not achieved. If the portfolio return (call it R_P) is normally
distributed, then it is not very difficult to calculate the probability that R_P will fall short of
R_L, in other words P(R_P < R_L), or to calculate the number of standard deviations R_L
falls below the expected portfolio return, E(R_P).

Note that the portfolio that maximizes E(R_P) − R_L will at the same time minimize
P(R_P < R_L). If we divide the expression E(R_P) − R_L by σ_P, the standard deviation of the
portfolio, we get the safety margin in units of portfolio standard deviation. This ratio is
called Roy's safety-first criterion (SFRatio):

SFRatio = [E(R_P) − R_L] / σ_P

Assuming returns are normally distributed, the safety-first ratio will be maximized by
choosing the optimal portfolio, using the three steps of Roy’s safety-first criterion:

1. Calculate the SFRatio for the portfolio.


2. Calculate P(R_P < R_L) by looking up N(−SFRatio) = 1 − N(SFRatio).
3. Choose the portfolio that results in the lowest probability for step 2.

Example: You work for the investment advisor of a major educational institution with a
$2 billion endowment. The school wishes to be able to use $50 million of the
endowment’s investment income annually for operational expenses, but does not wish to

invade the endowment principal. Also, the school intends to place the endowment in one
of three investment pools, which have the following return and risk characteristics:

                               Pool 1   Pool 2   Pool 3
Expected return                  25%      15%      10%
Standard deviation of return     35%      25%      15%

Find the shortfall level of return, R_L. Next, according to Roy's safety-first criterion, which
investment pool is best suited to the school's objectives? Last, find the probability that the
optimal investment pool will fail to generate a return equal to or greater than R_L. Assume
returns are normally distributed.

Solution: If the school wishes to spend $50,000,000 of the endowment investment
income annually, it would imply that the shortfall return level is equal to
$50,000,000/$2,000,000,000 = 2.5%.

Next, to figure out which investment pool is optimal for the school, we must calculate the
SFRatio for each pool.

Pool 1: SFRatio = (0.25 - 0.025)/0.35 = 0.6429

Pool 2: SFRatio = (0.15 - 0.025)/0.25 = 0.5000

Pool 3: SFRatio = (0.10 − 0.025)/0.15 = 0.5000

Pool 1 has the highest SFRatio, so if returns are normally distributed, then it is the optimal
portfolio, and it will minimize the probability that returns will fall short of R_L.

The probability that Pool 1 will generate a return less than R_L is found by looking up:

N(−SFRatio) = 1 − N(SFRatio) = 1 − N(0.6429) ≈ 1 − 0.7389 = 0.2611 = 26.11%

Regardless of which investment criterion comes first, risk and return objectives are
essential ingredients in setting a strategic asset allocation.
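Although the discussion above frames shortfall through Roy's criterion, the learning objective also asks you to estimate expected shortfall directly from P/L data: ES is simply the average of the losses beyond the VaR cutoff. A minimal Python sketch, with an illustrative function name and simulated data of my own:

```python
import numpy as np

def expected_shortfall(pnl, confidence=0.95):
    """ES = average loss in the tail beyond the VaR quantile."""
    losses = np.sort(-np.asarray(pnl))[::-1]        # largest losses first
    tail_size = int(len(losses) * (1 - confidence)) # number of tail observations
    return losses[:tail_size].mean()                # mean of the worst outcomes

np.random.seed(0)
pnl = np.random.normal(20, 30, 10_000)
print(expected_shortfall(pnl, 0.95))   # larger than the 95% VaR, as it must be
```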

Learning objective: Define coherent risk measures.

A coherent risk measure satisfies the following five properties, all of which are fairly
theoretical and some of which are hard to visualize.

First, a portfolio that has no holdings has no risk (normalization property); second, a
portfolio whose returns are always at least as high as another's is no riskier than it
(monotonicity property); third, the risk of two portfolios combined is no greater than the
sum of their separate risks, which is diversification (subadditivity property); fourth, an
identical portfolio of double the notional should have the potential for double the notional
loss (positive homogeneity property); and fifth, the addition of risk-free assets dilutes the
potential notional loss percentage (translation invariance property). This last one needs an
example: pretend you have a $100 portfolio with a $1 loss potential, for a 1% potential
loss. Adding $100 of cash, treating cash as risk free, increases the portfolio size to $200
with the same potential for a $1 loss, so the loss percentage is reduced to 50 basis points.

Learning objective: Estimate risk measures by estimating quantiles.

This portion of the reading gets very technical, but stay focused on what you are asked.
Any measure of risk is only as good as its precision. Consider shortfall risk: How sure are
we that we have estimated either the probability or the degree (amount) of loss? VaR is
fairly precise, but that isn’t the only measure of risk.

So when we are talking about quantiles (a quantile, from statistics, is basically a cut point
at a regularly spaced interval that divides a probability distribution into equal-sized parts:
5%, 10%, or half of a distribution), we are talking about setting up a confidence interval
around the risk measure to indicate the precision of that particular risk measure.

So a quantile by definition is just a sample of a defined size (e.g., top 5%) of a given
distribution, and, just like any other sampling, we can define the standard error of the
sample (quantile).

Now the relationship you need to understand is that the standard error of a quantile
estimate falls as the sample size gets larger. Also, the standard error rises as the estimator
gets farther into the tail. So the standard error of a one-in-a-million event is much wider
than the standard error of a one-standard-deviation estimate. Stated differently, the more
extreme the probability of an event, the less precise we can be about its estimation. This
has huge implications for risk management using VaR and is one of the key reasons VaR
creates a false sense of security among those who don't understand the quantitative
limitations of the model itself.

So how do we apply this to expected shortfall (ES) and VaR?

When comparing VaR and ES, both have similar standard errors for normal distributions,
but expected shortfall has much bigger standard errors in distributions with heavy tails,
also known as reality. As a result, ES estimates are less precise than VaR estimates when
considering distributions with heavy tails.
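To see both relationships numerically, here is a sketch using the standard asymptotic formula for the standard error of a sample quantile under an assumed normal density; the formula itself is a general statistics result rather than something this reading derives:

```python
import math
from scipy.stats import norm

def quantile_se(p, n):
    """Asymptotic standard error of the p-th sample quantile under normality:
    se = sqrt(p*(1-p)/n) / f(q_p), where f is the normal density at the quantile."""
    q = norm.ppf(p)
    return math.sqrt(p * (1 - p) / n) / norm.pdf(q)

print(quantile_se(0.95, 1000))    # ~0.067
print(quantile_se(0.99, 1000))    # larger: the estimator is deeper in the tail
print(quantile_se(0.99, 10000))   # smaller: the sample is ten times bigger
```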

Learning objective: Evaluate estimators of risk measures by estimating their standard errors.

The estimation method I think is most likely to be covered on the exam is one called
"bootstrapping," which shouldn't be confused with the method of yield curve construction
by the same name. In this case we draw a large number of resamples from a particular
distribution and then estimate the standard error of the estimator across those resamples.
This is convoluted for sure, but just know for the exam that there is this method called
bootstrapping and it means creating a standard error estimate for a sample of estimators
from a particular distribution.
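A minimal sketch of the idea, assuming a 95% VaR estimator; all names and data are mine, for illustration only:

```python
import numpy as np

def bootstrap_se(pnl, estimator, n_resamples=1000, seed=0):
    """Bootstrap: resample with replacement, re-estimate, take the std dev."""
    rng = np.random.default_rng(seed)
    estimates = [estimator(rng.choice(pnl, size=len(pnl), replace=True))
                 for _ in range(n_resamples)]
    return np.std(estimates)

var_95 = lambda x: -np.percentile(x, 5)       # 95% VaR as a positive loss
pnl = np.random.default_rng(1).normal(0, 1, 500)
print(bootstrap_se(pnl, var_95))              # standard error of the VaR estimate
```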

Learning objective: Interpret QQ plots to identify the characteristics of a distribution.

Recall from the Part I material that a distribution is just a special equation that describes
how some data set is spread out. The probability density function of the normal
distribution is the unique equation that describes what we call the bell curve. However, we
can define any equation that fits a set of data as a density function, as long as it fits the
data and satisfies a few requirements. A QQ plot is useful for finding the equation that
describes a set of data. QQ stands for quantile-quantile: the plot compares the empirical
(observed) data in a particular quantile to what is predicted by the distribution we are testing.

What you need to know for the exam is if a QQ plot is linear, then the empirical data
matches the distribution we are testing, but if it is nonlinear, our empirical data does not
match the distribution we are testing.

Stated differently, if the plot is not linear at a 45-degree angle, the empirical data and the
distribution belong to two different families of distributions.
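If you want to see the linear versus nonlinear contrast for yourself, scipy can draw normal QQ plots; a heavy-tailed sample bends away from the 45-degree line in the tails. A sketch with invented data, not exam material:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
normal_data = rng.normal(size=500)
heavy_tailed = rng.standard_t(df=3, size=500)    # fat-tailed Student's t sample

fig, (ax1, ax2) = plt.subplots(1, 2)
stats.probplot(normal_data, dist="norm", plot=ax1)   # roughly a 45-degree line
stats.probplot(heavy_tailed, dist="norm", plot=ax2)  # bends away in the tails
plt.show()
```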

Dowd, Chapter 4
Kevin Dowd, Measuring Market Risk, 2nd Edition (West Sussex, England: John Wiley &
Sons, 2005). Chapter 4. Non-Parametric Approaches

After completing this reading you should be able to:

• Apply the bootstrap historical simulation approach to estimate coherent risk measures.
• Describe historical simulation using nonparametric density estimation.
• Compare and contrast the age-weighted, the volatility-weighted, the
correlation-weighted, and the filtered historical simulation approaches.
• Identify advantages and disadvantages of nonparametric estimation methods.

Reading notes: This is a relatively long reading of 30 pages with no "calculate" questions.
You can waste a lot of time here if you aren't careful. Most of the assigned reading covers
material not actually on the FRM exam. Recall that "nonparametric" means modeling data
without assuming a parametric model, so this is using historical data or other number
crunching to measure market risk without the use of probability density functions.

Learning objective: Apply the bootstrap historical simulation approach to estimate coherent risk measures.

This is a little misleading: we defined coherent risk measures in the first reading, and recall
that VaR is not a coherent risk measure because it can fail subadditivity. All you really need
to know here is how the bootstrap method actually works.

"Bootstrapping" is another term for resampling from our distribution with replacement.
We resample many times, replacing each time, and create a large number of samples.

We can chart each of these samples on a histogram and then look at any alpha, or degree
of certainty, we want, and that is our bootstrapped VaR. One thing to know for the exam is
that the bootstrap method won’t tell us much about how precise these sample estimates of
risk measures are, but this is how the method works.
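A minimal sketch of the mechanics, using ES (which, unlike VaR, is coherent) as the risk measure being bootstrapped; the function name and data are illustrative:

```python
import numpy as np

def bootstrap_es(pnl, confidence=0.95, n_resamples=5000, seed=0):
    """Bootstrap historical simulation of ES: resample with replacement,
    compute ES on each resample, and average the resulting estimates."""
    rng = np.random.default_rng(seed)
    cutoff = 100 * (1 - confidence)
    es_samples = []
    for _ in range(n_resamples):
        sample = rng.choice(pnl, size=len(pnl), replace=True)
        var = np.percentile(sample, cutoff)          # VaR as a P/L quantile
        es_samples.append(-sample[sample <= var].mean())
    return np.mean(es_samples)                       # point estimate of ES

pnl = np.random.default_rng(3).normal(0, 1, 1000)
print(bootstrap_es(pnl))
```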

Learning objective: Describe historical simulation using nonparametric density estimation.

Remember, all density estimation refers to is an attempt to build a probability distribution
using past observed profit and loss data. It requires no model parameters since no model is
actually used. We are simply looking at the past and trying to explain the future from it.
Historical simulation has many, many problems, but it is a widely used method.

Learning objective: Compare and contrast the age-weighted, the volatility-weighted, the correlation-weighted, and the filtered historical simulation approaches.

"Weighting" is the idea of how much value we give to a particular set of data or historical
circumstances. Just as with a parametric model, there are some decisions to be made
and, in historical simulation, the weighting is one of the most important. No one would
argue that the price movement of stocks from 1992 is particularly valuable, but what if
during a particular historical period the other conditions, such as rates, volatility, correlation,
or P/E ratios, were very similar? We could then argue that that period of historical
data has significant value; those are the decisions we are stuck making with historical
simulation. Below are the types of historical simulation you need to know for the exam.

AGE-WEIGHTED HISTORICAL SIMULATION


In this type, we apply a function with a decay parameter that shrinks the value of
older data so that the most recent trailing data set matters most. If we
are doing an age-weighted simulation, the argument would be made that the future market
behavior we are simulating is most like the most recent trailing 90 days' worth of data.
This argument is very simplistic, makes a lot of assumptions, and has lots of problems.

In the text, the parameter lambda (λ) is used to control how quickly the value of old data
decays or becomes useless. Think of this parameter as controlling the size of the trailing
window that captures how much historical data to use. For the exam, you should know
that a lambda of 1 means no decay, so all historical data is weighted equally, and as lambda
gets closer to zero, only the very most recent data is used.

Key advantages: Age weighting allows the user to increasingly ignore older data and
acknowledges that the future is not going to be exactly like the past (a very weak
argument, in my opinion). It allows the user to increase lambda and retain large losses
instead of having them "drop off" the data set; it allows the exclusion of events that are
very unlikely to occur again (again, very subjective and very poor risk management; I
don't agree with this at all, but it is what you need to know for the test); and it allows the
sample size to grow over time.
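The standard age-weighting scheme consistent with this description gives the i-th most recent observation the weight λ^(i−1)(1−λ)/(1−λ^n). A quick sketch assuming that scheme, which the reading describes qualitatively rather than spelling out here:

```python
def age_weights(n, lam):
    """Age weights for the i-th most recent of n observations:
    w_i = lam**(i-1) * (1 - lam) / (1 - lam**n). Weights sum to 1;
    as lam approaches 1 the weights approach equal weighting (no decay)."""
    return [lam**(i - 1) * (1 - lam) / (1 - lam**n) for i in range(1, n + 1)]

w = age_weights(250, 0.98)
print(w[0], w[-1])        # the most recent day far outweighs day 250
print(sum(w))             # 1.0
```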

VOLATILITY-WEIGHTED HISTORICAL SIMULATION


Volatility weighting is also called “regime change” because the argument goes that option
pricing, and by extension expressions of volatility, are the market’s best way to capture
changing market conditions; so if volatility has changed massively from six months ago,
we apply less weight to the price moves of six months ago. Essentially, this is a flag that
lets us know which historic data to pay closest attention to.

Key advantages: Explicitly capturing volatility changes is superior and preferable to the
age-weighted method where we simply ignore older data. Since volatility is forward
looking, we can include volatility estimates (both current and future) into VaR and
shortfall estimation. Volatility-based loss estimates allow for values that exceed what is in
the historic data set, and empirical evidence indicates that it is superior to simple age
weighting.
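A sketch of the rescaling step, often attributed to Hull and White: each historical return is scaled by the ratio of today's volatility to the volatility estimated for that day, and the rescaled series then feeds an ordinary historical simulation VaR. The names and numbers here are illustrative:

```python
import numpy as np

def volatility_weighted_returns(returns, vols, current_vol):
    """Scale each historical return by current volatility over that day's
    volatility, so old data is restated in today's volatility regime."""
    return np.asarray(returns) * current_vol / np.asarray(vols)

returns = np.array([-0.02, 0.01, -0.035])
vols    = np.array([0.010, 0.012, 0.020])   # e.g., GARCH or EWMA estimates
print(volatility_weighted_returns(returns, vols, current_vol=0.015))
```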

CORRELATION-WEIGHTED HISTORICAL SIMULATION


This is probably the least important for the exam. This uses a fairly complicated series of
matrices to define when significant market shifts have occurred, similar to the
regime-changing volatility. For the most part, I think you can skip this for the exam.

FILTERED HISTORICAL SIMULATION


Filtered HS (FHS) is a hybrid model that combines a basic HS with the volatility model of
GARCH (generalized autoregressive conditional heteroskedasticity). It tries to move HS
beyond a very simple tool with all the problems inherent with historical simulation and
add on a layer of complexity through volatility estimations.

This model assumes that portfolio returns follow some average rate of return plus an error
term, which is really problematic. It is very easy to lump model misspecification into an
“error” term and miss a really significant part of risk.

Once we simulate volatility and add the potential impact to portfolio returns, we can
correspond the result to a particular dollar loss and then that becomes our VaR at the
appropriate confidence level.

This topic is also a low probability for the exam and honestly is almost useless in the real
world. Be very careful of this method if it is ever used on the desk. However, if you are
asked about the attraction of FHS, here is what you need to know:

• It takes a very simple nonparametric model, HS, and combines it with some
estimates from volatility modeling.
• It is easy to use and fast to calculate.
• Because it makes assumptions about volatility, it is possible to get potential losses
that exceed the maximum loss in the sample. This is a good thing: we want
to know the potential of extreme events, and sometimes HS ignores them.
• No complex correlation matrices are required because the volatility assumptions
are self-contained in GARCH.
• It can be modified to include correlation, but if you are spending time modifying
basic models, why still rely on HS?

Learning objective: Identify advantages and disadvantages of nonparametric estimation methods.

Note: There are about 20 pages at the end of this assigned reading that aren’t covered on
the exam. If you are also following the readings, pay close attention to what is asked
because this happens often.

Advantages: Nonparametric models can accommodate any distribution of returns,


including fat tails or skewness. They can accommodate any product type, including
nonlinear derivatives (be careful here, just because it can accommodate them doesn’t
mean it does it well).

• They are easy to implement on a spreadsheet.


• They use easily available data.
• It is very easy to produce confidence intervals for both VaR and ES.

Disadvantages: These are almost the exact opposite of the advantages.

• Low-volatility periods will "pollute" the observation window and understate the
VaR when relying on HS.
• Dramatic shifts in correlation or volatility are missed by HS.
• Extreme events can dominate the observation window just as a low-volatility
period can.

Since we don’t require parameters, our data set may represent an unusually low period of
volatility, which would underestimate true risk.

Also, in Part I, I spoke about the idea of regime change where there are long-term shifts in
market risk that may not be captured by historical data. They aren’t forward looking by
any means.

Lastly, the nonparametric models make no allowance for events that could occur but never
have, because they focus only on what has occurred in the past. This is a big trade-off when
all we get in return is ease of calculation.

Jorion, Chapter 6
Philippe Jorion, Value-at-Risk: The New Benchmark for Managing Financial Risk,
3rd Edition (New York: McGraw-Hill, 2007). Chapter 6. Backtesting VaR

After completing this reading you should be able to:

• Define backtesting and exceptions and explain the importance of backtesting VaR
models.
• Explain the significant difficulties in backtesting a VaR model.
• Verify a model based on exceptions or failure rates.
• Define and identify type I and type II errors.
• Explain the need to consider conditional coverage in the backtesting framework.
• Describe the Basel rules for backtesting.

Learning objective: Define backtesting and exceptions and explain the importance of backtesting VaR models.

What you need to know for the FRM exam is the data issue. Looking at a portfolio over a
fixed time period assumes that the portfolio never changes over that time period. This is
never the case in reality. However, we assume some rate of return over the life of the
portfolio. How do we arrive at this? We use DV01 and convexity to estimate daily returns.
Here we have another problem: We are assuming we can accurately predict the risk profile
of the portfolio, but if we have more complex products, even the first two moments won’t
be enough to calculate returns.

Learning objective: Explain the significant difficulties in backtesting a VaR model.

In backtesting, an exception is when realized profit and loss (P&L) exceeds predicted P&L,
or more precisely, when a loss exceeds the VaR in a particular historical sample. So the
process of backtesting a model is a hypothesis test around the failure rate of the model,
where we count the exceptions and ask whether that count is consistent with a correct
model. This sets up type I and type II errors.

Learning objective: Verify a model based on exceptions or failure rates.

The idea of verifying a model is a really important topic.

To calculate VaR, we have to make some assumption about the distribution of returns. We
can do this parametrically, as with the normal distribution, or nonparametrically, as with
historical bootstrapping.

We know we use VaR as a forward-looking tool, but it is important to see how many times
the model will tell us we have an “exceedance” or a loss greater than estimated VaR, when
compared to the actual number of exceedances we know happened based on historical
market data.

The question is: How do we quantify or verify our model based on historical data?

The way we test exceedances is to look at a model's failure rate. To do this, we take a
sample of T historical observations and count the number of exceedances within that
sample. The question is: did our model predict the right number of exceedances at the
desired level of significance?

The natural test for this success-or-failure setup is a series of Bernoulli trials, so the
number of exceedances follows a binomial distribution.

This isn’t a “calculate” question, so just understand how to get to the point of comparing
predicted exceedances in the past versus what actually occurred.
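A sketch of the binomial check, with illustrative numbers rather than a worked example from the reading:

```python
from scipy.stats import binom

# Model: 99% daily VaR over T = 250 days, so the expected failure rate p = 0.01.
T, p = 250, 0.01
observed_exceptions = 7

# Probability of seeing this many or more exceptions if the model is correct:
p_value = 1 - binom.cdf(observed_exceptions - 1, T, p)
print(p_value)   # a small value suggests the model understates risk
```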

Learning objective: Define and identify type I and type II errors.

As with statistical inference, the decision to hire (or retain) an investment manager is
subject to error. Recall that rejecting the null hypothesis based on statistical evidence,
when the null hypothesis is actually true, is a type I error. A type I error in manager
continuation policy would involve inferring that the active portfolio manager had superior
skill when, in fact, she did not (e.g., her cumulative annualized value-added return was
outside of the confidence interval, but it turned out to simply be due to random variation).

A type II error occurs when the null hypothesis is accepted, but it is false. The analogous
situation in manager continuation policy would be to infer that a portfolio manager did not
have superior investment skill, when in fact she did. This could result in a manager being
unjustly fired for inadequate performance.

Decreasing the likelihood of a type I error increases the likelihood of a type II error. These
possibilities must be managed and balanced when setting manager continuation policy.

Learning objective: Explain the need to consider conditional coverage in the backtesting framework.

Think back to Part I and what conditional probability is: the likelihood of some event
occurring given that another event has already happened. Exceptions to a VaR model are
assumed to occur evenly over time, so if a series of exceptions cluster together, something
underlying the model has changed. Stated differently, clustered exceptions are assumed to
be driven by variation in the underlying market fundamentals, such as changes in
correlations or volatility. Testing whether exceptions are independent over time, rather
than simply counting them, is what the conditional coverage framework does. You could
also say conditional coverage is necessary to identify changes in market dynamics that
show up as actual P&L deviating from predicted P&L more than we would statistically expect.

Learning objective: Describe the Basel rules for backtesting.

In order to backtest, we first have to define the statistical parameters and establish the
rejection level for the test. Most important is defining the type I error rate: a correctly
specified model can still produce exceptions through bad luck, and Basel doesn't penalize
for this. However, if these "bad luck" incidents happen in clusters, they could signal a change
in market conditions that needs to be incorporated into the model itself. This would be a
model misspecification error.

What you need to know for the exam is that the verification process consists of recording
the daily exceptions against the 99% confidence VaR, which implies a 1% failure rate over
the course of the year. One percent of 250 trading days is approximately 2.5 exceptions.
These are attributed to bad luck, and the Basel rules allow up to four exceptions in the
green zone.

In the yellow zone, five to nine exceptions, the bank has to justify the result according to
one of the following categories: model error due to programming (not the parameters of the
model, but an actual error in the model itself); the model does not specify risk with
enough granularity; positions changed intraday; or bad luck, meaning shifting volatility or
changing market correlations.

Ten or more exceptions are considered in the red zone and result in an automatic penalty.

The real issue here is balancing type I versus type II errors and the qualitative task of
separating bad luck from a faulty model.

Jorion, Chapter 11
Philippe Jorion, Value-at-Risk: The New Benchmark for Managing Financial Risk,
3rd Edition (New York: McGraw-Hill, 2007). Chapter 11. VaR Mapping

After completing this reading you should be able to:

• Explain the principles underlying VaR mapping, and describe the mapping
process.
• Explain how the mapping process captures general and specific risks.
• Differentiate among the three methods of mapping portfolios of fixed income
securities.
• Summarize how to map a fixed income portfolio into positions of standard
instruments.
• Describe how mapping of risk factors can support stress testing.
• Explain how VaR can be used as a performance benchmark.
• Describe the method of mapping forwards, forward rate agreements, interest rate
swaps, and options.

Reading notes: There are about 12 pages of reading with quite a few dense equations.
None of the learning objectives in this section are "calculate," so focus on the "list" and
"describe" aspects of this content. It is very qualitative, and is about understanding how VaR
mapping would be affected by changing different inputs as opposed to actually
calculating the changes.

Learning objective: Explain the principles underlying VaR mapping, and describe the mapping process.

Very simply, this matches each asset in a portfolio to a set of risk factors and allocates
some exposure to each risk factor. Ultimately, the portfolio can be represented as a matrix
of risk factors.

Clearly this has problems, especially for nonlinear instruments. This also assumes every
instrument in the portfolio can be completely assigned some risk factor and the entire risk
of an instrument can be captured. These risk factors could be beta for stocks, duration for
bonds, vega for options, or any of a dozen other characterizations.

Learning objective: Explain how the mapping process captures general and
specific risks.

All you really need to know is the difference between general and specific risks. General
risks are those market-level risks such as beta or duration—very broad, first-order
estimates of risk. It stands to reason that this level of detail can easily be captured by most
basic simulation or analytic methods.

However, specific risk gets into the risk associated with a specific issuer, a specific stock,
or a certain part of an asset-backed security. This level of detail requires substantially more
modeling with respect to what the specific risk factors are, and presents a clear trade-off
among analytic capacity, the potential for increased model error, and garbage in, garbage out.

There are lots of formulas here, but don’t get bogged down. Know that the return of a
portfolio is decomposed into specific risk and general risk factors. Recall arbitrage pricing
theory (APT) in Part I, and this is the general idea. Then the variance of a portfolio is
decomposed further into a general (beta or duration) mapping plus some specific mapping
(credit quality, issuers in South America, etc.).

Things to remember for the FRM exam:

Recall that the portfolio return looks like this:

R_i = α_i + β_i(R_m) + ε_i

or the return of every ith portfolio is equal to alpha, plus the portfolio-specific beta
multiplied by the expected return of the market, plus some room for error or unexplained
return.

It is true that for risk-mapping purposes the alpha term does not need to be mapped, since
it does not contribute to risk; however, the error term should be mapped, or at least not
ignored or discarded.

It also stands to reason that the more detail we have in the specific risk category, the less
we will attribute to generic risk measures; so for the exam remember there is a sliding
scale between the detail contained in the generic and specific risk measures, and these will
always add up to the total risk. Taken to the extreme, it could be possible to map all the
specific risks and leave no risks in the general risk category.
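
As a rough sketch of this decomposition (all numbers here are hypothetical, not from the reading), the variance of a single mapped position splits into a general piece, beta squared times market variance, plus a specific piece, the variance of the error term:

import math

beta = 1.2        # general (market) exposure of position i
sigma_m = 0.15    # market volatility
sigma_e = 0.08    # specific (idiosyncratic) volatility of the error term

# var_i = beta^2 * sigma_m^2 + sigma_e^2
general_var = beta**2 * sigma_m**2
specific_var = sigma_e**2
total_var = general_var + specific_var

print(f"general share:  {general_var / total_var:.1%}")   # ~83.5%
print(f"specific share: {specific_var / total_var:.1%}")  # ~16.5%
print(f"total volatility: {math.sqrt(total_var):.2%}")

Modeling more specific factors shrinks sigma_e and shifts weight out of the general bucket, which is exactly the sliding scale described above.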

Learning objective: Differentiate among the three methods of mapping
portfolios of fixed income securities.

In increasing level of detail, the three methods are:

1. Principal mapping
2. Duration mapping
3. Cash flow mapping

In principal mapping, the crudest method, a single risk measure is chosen at the average
maturity (not duration) of the portfolio and used to establish VaR for that particular
portfolio, effectively mapping its risk. In order for this to work, you will usually be given
par bonds at varying maturities and the VaR for each of the maturities. By taking the VaR
for the bond representing the average life and multiplying it by the principal of the
portfolio, we arrive at a VaR number for the entire portfolio.

This has obvious problems. First, it doesn't consider the risk behavior of bonds trading
significantly away from par, nor does it consider bonds with embedded options or
nonlinear instruments of any kind.

Duration mapping is almost as bad. Here we use the actual duration of the portfolio
instead of just the average maturity and then follow the same process: multiplying the
VaR at that duration point on the curve by the notional of the portfolio.
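
A minimal sketch contrasting the two methods, using hypothetical per-maturity VaR percentages of the kind the exam supplies as givens:

# VaR (as a fraction of face value) for par bonds at each maturity; these
# "given" numbers are made up for illustration.
var_per_maturity = {1: 0.004, 2: 0.009, 3: 0.014, 4: 0.019, 5: 0.024}

principal = 100_000_000      # portfolio notional
avg_maturity = 4             # average maturity in years
portfolio_duration = 3.2     # duration sits below maturity for coupon bonds

# Principal mapping: VaR of the par bond at the *average maturity*.
principal_var = var_per_maturity[avg_maturity] * principal

# Duration mapping: interpolate the VaR at the portfolio *duration*.
lo, hi = 3, 4
w = (portfolio_duration - lo) / (hi - lo)
duration_var_pct = (1 - w) * var_per_maturity[lo] + w * var_per_maturity[hi]
duration_var = duration_var_pct * principal

print(f"principal-mapping VaR: ${principal_var:,.0f}")  # $1,900,000
print(f"duration-mapping VaR:  ${duration_var:,.0f}")   # $1,500,000

Because the duration (3.2 years) sits below the average maturity (4 years), the duration-mapping VaR comes out lower, which is the comparison flagged for the exam just below.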

Remember this for the exam: For coupon-paying instruments, the duration of a bond is
always less than its maturity. A 4-year bond may have a duration of 3.2 years. It stands to
reason that principal mapping, based on average maturity, will always return a higher VaR
than duration mapping. These are the types of qualitative nuances you need to be prepared
for on the exam, especially in the "describe" type questions. Rarely will they ask you
simply to describe something; more often they will ask about the impacts beyond the
basic description.

Last, cash flow mapping is quite complex. Recall that in both principal and duration
mapping you can see what the VaR is for a par bond at any maturity. Assume this is given
and you don’t need to calculate it.

For cash flow mapping, you take the cash flow from each bond and multiply the present
value of the expected cash flow (discounted at the zero rate, of course) by the appropriate
VaR at each corresponding point on the curve. So if you had a 3-year-maturity annual-pay
bond, you would look at three annual cash flows, each multiplied by the VaR at its point
on the curve.

Additionally, in cash flow mapping we also consider a correlation matrix across the points
on the curve, that is, how movements at each point of the term structure are correlated
with movements at every other point. For the exam, know that the more diversified the
bonds are across the curve and the lower the correlations, the lower the VaR given by
cash flow mapping.
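
Here is a minimal sketch of those mechanics (present values, per-vertex VaR percentages, and correlations are all hypothetical): the diversified VaR is sqrt(x'Rx), where x holds the dollar VaR of each mapped cash flow and R is the correlation matrix.

import numpy as np

pv = np.array([4.8, 4.6, 95.0])            # PV of each annual cash flow ($mm)
var_pct = np.array([0.004, 0.009, 0.014])  # VaR at the 1y, 2y, 3y vertices

x = pv * var_pct                           # dollar VaR of each mapped flow

# Hypothetical correlations between points on the curve.
corr = np.array([[1.00, 0.95, 0.90],
                 [0.95, 1.00, 0.97],
                 [0.90, 0.97, 1.00]])

diversified_var = np.sqrt(x @ corr @ x)    # sqrt(x' R x)
undiversified_var = x.sum()                # what corr = 1 would give

print(f"diversified VaR:   ${diversified_var:,.3f}mm")
print(f"undiversified VaR: ${undiversified_var:,.3f}mm")

The lower the off-diagonal correlations, the further the diversified figure falls below the simple sum.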

Learning objective: Summarize how to map a fixed income portfolio into
positions of standard instruments.

The idea of mapping means understanding the risk factors that impact each type of
instrument.

The reading goes into a lot of detail about the impact of changes in floating rates or
forward rates on valuation, but the reality is that changes in the yield curve have a far
greater impact on the present value of fixed income instruments, and by extension VaR,
than changes in short-term rates or the forward rate curve do.

Mapping, then, is not simply understanding the risk factors, but also identifying the
dominant risk factor. For a fixed income instrument, that dominant factor is changes in
the par yield curve, whether the instrument is a forward, a swap, or a futures contract.

Learning objective: Describe how mapping of risk factors can support stress
testing.

Recall that stress testing conceptually pushes a portfolio to maximum market stress
conditions. Also recall that in times of stress, correlations go to 1. With respect to cash
flow mapping, that means we can set every correlation to 1 and skip the correlation
matrix multiplication entirely: the portfolio VaR collapses to the simple sum of the
individual dollar VaRs.

So in times of stress, when correlations are 1, there is zero value from diversification and
the VaR is higher than that produced by standard cash flow mapping. Since the risk
factors are already mapped, moving to a stress test is easy: we simply drop the
diversification benefit of having cash flows spread across the term structure of the curve.
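
A quick sketch of that collapse, reusing the hypothetical x vector from the previous sketch: with every correlation set to 1, the matrix form reduces exactly to the simple sum of the dollar VaRs.

import numpy as np

x = np.array([4.8 * 0.004, 4.6 * 0.009, 95.0 * 0.014])  # mapped dollar VaRs
ones = np.ones((3, 3))                    # stressed correlation matrix (all 1s)

stressed_var = np.sqrt(x @ ones @ x)
assert np.isclose(stressed_var, x.sum())  # sqrt(x' 1 x) == sum(x)
print(f"stressed (undiversified) VaR: ${stressed_var:,.3f}mm")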

Learning objective: Explain how VaR can be used as a performance
benchmark.

This concept is a throwback to the idea of tracking error from Part I, reviewed previously.
Notice this isn't a "calculate" question but rather asks how we can use VaR to judge
performance against some benchmark. Remember, we aren't concerned only with how
closely our portfolio matches the benchmark, but with the difference in risk we had to take
to achieve that return. We therefore want to define the tracking error in terms of VaR and
use it to determine how we are performing relative to the benchmark.

First, we assume a cash flow decomposition of some portfolio and determine that our
benchmark has a duration of 5.12 years and that the VaR of this portfolio at the 95%
confidence level is $2.25 million. These are all made-up numbers but close to reality
for a 5-year portfolio. You won't have to calculate these numbers; I'm just stating them
here as an example.

Now consider we want to track this portfolio with only four bonds. We choose some
combination of bonds similar to the risk profile of the portfolio and arrive at the notional
weightings through cash flow decomposition.

Comparing the VaR of our tracking portfolio to the VaR of the benchmark portfolio, we
can arrive at the expected VaR difference between the two. If we widen or narrow the
brackets of cash flows in our portfolio, we can experiment with how the VaR of the
tracking portfolio differs from that of the benchmark.

Once we have minimized that number, the tracking error VaR, we should never see our
tracking portfolio deviate from the benchmark by more than that VaR; if we do, we are
under- or outperforming the benchmark.
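
As a minimal sketch (all numbers hypothetical), the tracking error VaR is just the VaR of the difference portfolio: map the gap between the tracking portfolio and the benchmark at each cash flow vertex, then apply the same matrix formula as in cash flow mapping.

import numpy as np

pv_gap = np.array([1.5, -2.0, 0.8, -0.3])         # tracking minus benchmark ($mm)
var_pct = np.array([0.004, 0.009, 0.014, 0.019])  # VaR at each vertex
corr = np.full((4, 4), 0.95) + 0.05 * np.eye(4)   # hypothetical correlations

x = pv_gap * var_pct                 # signed dollar VaR of each gap
tracking_error_var = np.sqrt(x @ corr @ x)
print(f"tracking error VaR: ${tracking_error_var:,.4f}mm")

Minimizing this number over the choice of bond weightings is the experiment described above.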

Learning objective: Describe the method of mapping forwards, forward rate
agreements, interest rate swaps, and options.

This is very important. Notice that these instruments are all linear derivatives—no
embedded optionality. That means the instruments are linearly or one-for-one related to
the underlying rate structure. For forwards, mapping is very simple because they cover a
single period; forward rate agreements (FRAs) are more complicated because they can
cover additional periods; and interest rate swaps are slightly more complex because they
are typically longer than even FRAs.

The problem with mapping the risk of options is the nonlinear nature of options
themselves. Recall what nonlinear refers to: Linear derivatives by definition change in
price one for one with the underlying security they are mapped to (with some caveats).
The first big exception to this worth mentioning is the idea of convexity and how this
property changes the risk profile of even linear derivatives like simple swaps.

The behavior of options, however, cannot be ignored, and they do not lend themselves as
easily to mapping as linear derivatives do. As an extreme example, think of a call option
that expires tomorrow with a strike price of 50 and a payoff equal to 1,000 × (stock price
− K). A closing price of 50.25 would give a value of 0.25 × 1,000 = $250, whereas a price
of 49.99 would have zero value. Huge swings in the derivative's value that don't
correspond to huge swings in the underlying price are the hallmark of nonlinear behavior.
Consequently, these instruments are very difficult to map. So how do we handle this?
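
That arithmetic is trivial to verify in a couple of lines:

# Expiry value of the call in the example above: a strike of 50 with a
# 1,000 multiplier. A 0.26 move in the underlying flips the payoff from
# $250 to $0, which no linear mapping can capture.
def call_payoff(spot: float, strike: float = 50.0, multiplier: int = 1_000) -> float:
    return max(spot - strike, 0.0) * multiplier

print(call_payoff(50.25))  # 250.0
print(call_payoff(49.99))  # 0.0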

The assigned reading takes a very long-winded way to say something very basic: In order
to map options, you have to map each of the risk factors beyond the simple DV01 you
would use for linear derivative products. So in the case of swaptions, you need to map
DV01 (first-order rate risk), gamma or convexity (second-order risk: how quickly the
DV01 changes as rates move), and vega (the partial derivative of value with respect to
volatility).
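
A minimal sketch of what multi-factor mapping buys you (the Greek values are hypothetical; the reading describes this only qualitatively): instead of a single linear sensitivity, the option's P&L is approximated with first- and second-order terms in the rate move plus a volatility term.

delta = 0.55   # first-order sensitivity to the underlying rate (DV01-like)
gamma = 0.04   # second-order sensitivity: how fast delta changes with rates
vega = 0.20    # sensitivity to a 1-point change in volatility

def option_pnl(d_rate: float, d_vol: float) -> float:
    """Delta-gamma-vega approximation of the option's P&L."""
    return delta * d_rate + 0.5 * gamma * d_rate**2 + vega * d_vol

# Stress each mapped factor: a 2-point rate move plus a 3-point vol shock.
print(f"approx P&L: {option_pnl(2.0, 3.0):+.3f}")  # 1.10 + 0.08 + 0.60 = +1.780

A purely linear (delta-only) mapping would miss the gamma and vega terms entirely, which is why options resist the one-factor treatment that works for forwards, FRAs, and swaps.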
