
Algorithmic Trading

Session 1
Introduction

Oliver Steinki, CFA, FRM

Outline

An Introduction to Algorithmic Trading


Definition, Research Areas, Relevance and Applications

General Trading Overview

Goals and Types of Trading, Instruments and Order Types


Algorithmic Trading Framework
Prop Trading Strategy Steps, Algo Trading Development Cycle

Signal Generation
Mathematical Tools, Attributes of Scientific Trading Models, Backtesting, Calibration and Robustness

Trade Implementation

Portfolio Analysis, Order Manipulation Process, Exit Orders

Performance Analysis
Return, Risk and Efficiency Metrics, Success Factors of Quantitative Trading Strategies and Trading System
Efficiency

Summary and Questions

Contact Details: osteinki@faculty.ie.edu or +41 76 228 2794

An Introduction to Algorithmic Trading


Definition, Research Areas and Relevance

Definition: Algorithmic trading is a discipline at the intersection of finance, computer science and mathematics.
It describes the process of using algorithms to generate and execute orders in financial markets. Such algorithms
generate long/short/neutral signals, adapt market quotes and/or execute trading decisions with minimal market
impact and/or improved transaction prices

Research areas are mainly in two different disciplines:


Computer Science: Build more reliable and faster execution platforms
Mathematics: Build more comprehensive and accurate prediction models

Relevance: Algorithmic trading strategies account for approximately:

60% of all US equity volume


40% of all European equity volume
25% of all Forex transactions
20% of all US option trades

Source: Interactive Brokers

An Introduction to Algorithmic Trading


Applications

Algorithmic Execution: Use algorithms to search/discover fragmented liquidity pools to optimize execution
via complex / high frequency order routing strategies. Profit comes from improved prices and reduced market
impact
Example: Order routing to dark pools to improve execution price, iceberg orders to reduce market impact

Market Making: Supply the market with bid ask quotes for financial securities. Ensure the book respects
certain constraints such as delta profile or net position. Profit comes mainly from clients' trading activity, hence
the bid-ask spread. Also known as flow trading or sell side. Main risk comes from market moves against position if
net position/Greeks are not perfectly hedged
Example: A broker offers to sell a financial security at the ask and to buy at the bid to earn the spread

Trade Signal Generation: Design proprietary strategies to generate profits by betting on market directions.
Profit comes from winning trades. Also known as proprietary trading or buy side. Main risk is that the market does not
move as expected/back tested and strategy becomes unprofitable
Example: Buy/sell security when moving averages cross each other
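The moving average crossover example above can be sketched in a few lines of Python. This is an illustrative toy, not a reference implementation from the course; the 5/20 window lengths and the helper names are my own assumptions:

```python
# Minimal sketch of a moving-average crossover signal (illustrative only).
# The 5/20 window lengths are arbitrary assumptions.

def sma(prices, window):
    """Simple moving average over the last `window` prices; None if too few."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=5, long=20):
    """Return +1 (long), -1 (short) or 0 (neutral) for the latest bar."""
    s, l = sma(prices, short), sma(prices, long)
    if s is None or l is None:
        return 0          # not enough history: stay neutral
    if s > l:
        return 1          # short MA above long MA -> long signal
    if s < l:
        return -1         # short MA below long MA -> short signal
    return 0
```

A steadily rising price series yields +1, a falling one -1, mirroring the long/short/neutral signal set described above.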

General Trading Overview


Goals of Trading

Profit Generation
1. Entry: Based on a trade signal, generate order to go short or long a certain financial instrument in a certain
quantity. Trade results in a certain position in this security
2. Mark-to-Market: As the price of the security changes, so does your unrealized PnL.
PnL = Side * Quantity * (P_bid - P_ask)
3. Exit: Generate order to exit the position and create a realized PnL. Exit orders usually fall into one of
three categories: take profit, trailing stop or stop loss.
PnL = Side * Quantity * (P_exit - P_entry)
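The entry/exit PnL arithmetic above translates directly into code. A minimal sketch, with function and parameter names of my own choosing (Side is +1 for long, -1 for short; for the mark-to-market case, a long entered at the ask would be marked at the bid):

```python
# Hedged sketch of the PnL formulas above; names are assumptions.

def unrealized_pnl(side, quantity, p_mark, p_entry):
    """Mark-to-market PnL: side = +1 for long, -1 for short."""
    return side * quantity * (p_mark - p_entry)

def realized_pnl(side, quantity, p_exit, p_entry):
    """Realized PnL = Side * Quantity * (P_exit - P_entry)."""
    return side * quantity * (p_exit - p_entry)
```

For example, a long position of 100 units entered at 10.0 and exited at 12.0 realizes a PnL of 200, while a short of 50 units entered at 10.0 and exited at 9.0 realizes 50.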

Hedging

Hedging describes the process of placing a trade to reduce/eliminate a certain kind of exposure
Example: Reduce exposure of an option position to a move of the underlying by entering a delta-neutral
position
Hedging costs money, but reduces uncertainty. Continuous hedging as assumed by much of the academic
literature is neither feasible nor cost effective
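As a toy illustration of the delta-hedging example, the hedge size for an option position can be computed from its aggregate delta. The 0.55 delta, contract count and 100-share multiplier below are assumptions for illustration:

```python
# Illustrative delta-hedge sizing: how many units of the underlying offset
# an option position's delta. All numbers are assumptions, not market data.

def hedge_quantity(option_delta, contracts, multiplier=100):
    """Underlying units to SELL (if positive) to make the position delta-neutral."""
    return option_delta * contracts * multiplier

# Example: long 10 calls with delta 0.55 -> sell 550 units of the underlying
```

In practice the delta drifts as the underlying moves, which is exactly why the continuous re-hedging assumed in the academic literature is costly.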

General Trading Overview


Types of Trading

Fundamental:

Stock Picking
Ratio Analysis
Sector Analysis
Executive Management Signals

Quantitative:

Rule Based
Econometric Forecasting
Statistical Arbitrage

Technical:

Charting
Trend Analysis

Time Frames:

Long Term: Months to Years

Short Term: Days, Weeks, Months

Intraday: Seconds to Hours

High frequency: Fractions of Seconds

General Trading Overview


Instruments and Order Types

Instruments:

Equities
Bonds
Commodities
Foreign Exchange
Credit Default Swaps
Asset Backed Securities
Swaps
Rates
Futures and Options on the above

Order Types:

Market
Limit
Stop Loss
Trailing Stop
Attached Orders
Conditional Orders

Algorithmic Trading Framework


Prop Trading Strategy Steps
[Diagram: SIGNAL GENERATION (decide when and how to trade) -> TRADE IMPLEMENTATION (size and execute orders, incl. exit) -> PERFORMANCE ANALYSIS (return, risk and efficiency ratios)]

Proprietary algorithmic trading strategies can be broken down into three subsequent
steps: Signal Generation, Trade Implementation and Performance Analysis

The first step, Signal Generation, defines when and how to trade. For example, in a
moving average strategy, the crossing of the shorter running moving average over the
longer running moving average triggers when to trade. Next to long and short, the signal
can also be neutral (do nothing). Using moving averages to generate long/short trading
signals is an example choice of how to trade

Trade Implementation happens after the Signal Generation step has triggered a buy or
sell signal. It determines how the order is structured, e.g. position size and limit levels. In
advanced strategies, it can also take into account cross correlation with other portfolio
holdings and potential portfolio constraints

Performance Analysis is conducted after the trade has been closed and is used in a
backtesting context to judge whether the strategy is successful or not. In general, we can
judge the performance according to five different metrics: return, risk, efficiency, trade
frequency and leverage

Algorithmic Trading Framework


Trading System Development Cycle

The development cycle of a quantitative strategy is an iterative process that can be split into 8 steps:
1) Generate or improve a trading strategy idea (based on intuition)
2) Quantify the trading idea and build a model to replicate it

3) Back test the strategy for multiple time frames, trade implementation rules and related financial
instruments
4) Calculate performance, risk and efficiency statistics, choose trade frequency and leverage

5) If the statistics are not satisfactory, restart at step #1


6) If the strategy does not add significant value to the existing strategies, restart at step #1
7) Implement the strategy on an execution platform (e.g. Interactive Brokers, Oanda), initially as a paper
trading account
8) Trade the strategy with real money

Signal Generation
Mathematical Tools and Attributes of Scientific Trading Models

Mathematical Tools:

Markov Models
Cointegration
Stationarity vs. Non-Stationarity
Mean Reverting Processes
Bootstrapping
Signal Processing Tools
Return Distributions / Lévy Processes
Time Series Modelling
Ensemble Methods

Attributes of Scientific Trading Models:

Scientific Trading Models are based on logical arguments


One can specify all assumptions
Models can be quantified from assumptions
Model properties can be deduced from assumptions
Model properties can be back tested in an objective, rule-based manner
Clear Model property specification allows for iterative strategy improvement

Signal Generation
Strategy Development and Back Testing

Strategy Development:

Identify patterns in historical data or formulate trading idea


Quantify these patterns/trading idea in an initial trading model
Verify if the patterns are persistent/trading idea would have worked in the past
Create a more advanced trading model based on these signals

Back Testing:

Back testing simulates the potential success of a strategy based on historical or simulated data
Gives an estimate how the strategy would have worked in the past, but not if it will work in the future
It is an objective method to conduct performance analysis and choose most promising strategy
Data from Performance Analysis step helps in further strategy improvement and trade implementation, e.g.
determining exit order levels
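The backtesting idea above can be sketched as a minimal loop: a signal decided at close i is applied to the return from close i to close i+1 (lagging the signal this way is what keeps the simulation free of look-ahead bias). Prices and signals below are illustrative assumptions:

```python
# Minimal backtest sketch over historical closes (illustrative only).

def backtest(prices, signals):
    """signals[i] in {-1, 0, +1}, decided at close i and applied to the
    return from close i to close i+1. Returns the cumulative growth factor."""
    equity = 1.0
    for i in range(len(prices) - 1):
        ret = prices[i + 1] / prices[i] - 1.0    # next bar's simple return
        equity *= 1.0 + signals[i] * ret         # apply the lagged signal
    return equity

# Long the first bar (+10%), short the second (price falls 10%): 1.1 * 1.1 = 1.21
growth = backtest([100.0, 110.0, 99.0], [1, -1])
```

As the slide stresses, such a simulation only estimates how the strategy would have worked in the past, not whether it will work in the future.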


Signal Generation
Calibration and Robustness

Calibration:
Most strategies calibrate model parameters to optimize certain performance analysis factors, e.g. total return
Calibration is an inverse process: we know how to get from market data to model parameters when we have a
model, but not how to get to the most realistic model from market data alone
Occam's razor: Fewer parameters are usually preferable and tend to increase model robustness

Robustness:

How much does the success of a strategy change given a small variation in parameter values?
Avoid in-sample overfitted parameters: out-of-sample testing is crucial!
Plotting a performance measure against relative or absolute parameter values visualizes parameter sensitivity
We look for plateaus where a broader range of parameters results in stable performance measures
Ensembles of different models can help to increase meta model robustness
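A parameter sweep for the robustness check above might look like the following sketch. `evaluate` stands in for a full backtest returning some performance measure, and the 10% plateau tolerance is an assumption:

```python
# Robustness sketch: sweep a parameter grid and look for a plateau of stable
# performance rather than a single spike. `evaluate` is a stand-in for a full
# backtest and is an assumption, as is the 10% tolerance.

def sensitivity_sweep(evaluate, values):
    """Return (parameter value, performance score) pairs for plotting/inspection."""
    return [(v, evaluate(v)) for v in values]

def is_plateau(scores, tol=0.10):
    """Crude plateau check: relative spread of the scores stays below `tol`."""
    lo, hi = min(scores), max(scores)
    return hi > 0 and (hi - lo) / hi <= tol
```

Plotting the pairs from `sensitivity_sweep` is the visual check described above; a broad flat region suggests robust parameters, a narrow spike suggests overfitting.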


Trade Implementation
Order Manipulation Process

Portfolio Analysis: Cross correlation with other portfolio positions, position limits, portfolio constraints

Exposure Analysis: Convex or linear payoff profile? Instrument constraints?

Order Sizing: Determine number of contracts/instruments to be bought/sold

Order Execution: Market or Limit Order? Time in Force? Potential Issues:

What if an order is already filled before a modify command arrives at the market?
What if an old order is partially filled and then deleted/reduced?
What if a confirmation arrives too late (or never arrives)?
What if the price moves again before the new limit order is placed?
What if the new order is rejected by the Market ?
What if the new order breaks position limits or portfolio constraints?
What if the gateway, market or broker is down?


Trade Implementation
Exit Orders

Implement Take Profit orders or keep positions until signal direction changes?

Stop Loss vs. Trailing Stop to limit downside? Optimal distance of Stop Level to Market?

Time in Force for Orders, Maximum Holding Period for portfolio constituents?

Triggers to change Exit Orders?


Performance Analysis
Return, Risk and Efficiency Metrics

Return Metrics:
Total Return, Annualized Return, Winning %, Avg. Winning Size, Biggest Winner, Distribution of Winning
Trades, Losing %, Avg. Losing Size, Biggest Loser, Distribution of Losing Trades

Risk Measures:

Annualized Standard Deviation, Downside Deviation, Max Drawdown, Peak-to-Trough

Efficiency Measures:
Sharpe, Information and Sortino Ratio

Trade Frequency:

Fractions of seconds to Months/Years. Hurdle rate for transaction costs?

Leverage:
Unleveraged (≤ 100% of equity value) or leveraged positions (> 100% of equity value)? Costs of leverage?


Performance Analysis
Success Factors of Quantitative Trading Strategies

Quantitative Investment Strategies are driven by four success factors: trade frequency,
success ratio, return distributions when right/wrong and leverage ratio

The two graphs show 95% confidence levels of annualized expected returns of two
underlyings exhibiting different volatilities (7.8% annualized for AUDJPY vs. 64.6% for
VIX) and daily trade frequency. The higher the success ratio, the more likely it is to
achieve a positive return over a one year period. Higher volatility of the underlying
assuming constant success ratio will lead to higher expected returns

The distribution of returns when being right / wrong is especially important for strategies
with heavy long or short bias. Strategies with balanced long/short positions on any
underlying are less impacted by these distributional patterns. Downside risk can further be
limited through active risk management, e.g. stop loss orders

Leverage plays an important role to scale returns and can be seen as an artificial way to
increase the volatility of the traded underlying. For example, a 10 times leveraged position
on an asset with 1% daily moves is similar to a non-leveraged position on an asset with
10% daily moves

Assuming daily trade frequency, a success ratio of 54%, even long/short return
distributions and a leverage ratio of 100% implies a probability of less than 5% of non-positive annual returns

[Charts: 95% confidence bands of expected annual return vs. model success ratio (0.50 to 0.58) for AUDJPY Curncy and the VIX Index]


Performance Analysis
Trading System Efficiency

Van Tharp introduced the concept of the R multiple. 1R measures the initial risk of a position, which is equal to the
distance between entry and stop loss level. Exit levels should be chosen so that the gains are higher than 1R. This is
another way of saying "cut losses short and let profits run". Example: Enter a long position at 10 EUR with a stop loss
order at 9 EUR. 1R = initial risk = 10%

In mathematical terms, the expected profit of a trading strategy is:


Gain in % = frequency of trades * (winning % * avg. winning size - losing % * avg. losing size) * leverage ratio
losing % = 1 - winning % and avg. winning size = n * R, with n = average win-to-loss ratio
Gain in % = frequency of trades * (winning % * n * R - (1 - winning %) * R) * leverage ratio * 1/100

Example: A strategy trades daily, has a success ratio of 60%, equal average winning and losing size of 1% and
trades a leverage ratio of 200% of equity. In this case, the expected yearly gain is:

Gain = 250 * (60% * 1% - 40% * 1%) * 2 = 100% p.a.

Gain = 250 * (60% * 1/1 * 1 - (1 - 60%) * 1) * 2 * 1/100 = 100% p.a.
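The gain formula can be checked numerically with a short sketch, using the slide's example numbers (R is expressed in percent of equity per trade; function and parameter names are my own):

```python
# The expected-gain formula above as code (a sketch; inputs are the
# slide's example: 250 trading days, 60% success, 1% wins/losses, 200% leverage).

def expected_gain(trades_per_year, win_pct, n, r_pct, leverage):
    """Gain in % p.a. = freq * (win% * n * R - (1 - win%) * R) * leverage,
    with R given in percent of equity per trade."""
    return trades_per_year * (win_pct * n * r_pct - (1.0 - win_pct) * r_pct) * leverage

gain = expected_gain(250, 0.60, 1.0, 1.0, 2.0)   # ≈ 100 (% p.a.)
```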

"In this business, if you're good, you're right six times out of ten. You're never going to be right nine times out of ten." (Peter Lynch)


Summary and Questions

Algorithmic trading is an emerging, intellectually challenging field in quantitative finance where computer science is
as important as mathematics

Successful firms are either faster (computer science) or have better forecasting accuracy (mathematics) than the
competition, or both

A scientific approach to algorithmic trading helps to differentiate signals from noise / chance from recurring patterns

Proprietary algorithmic trading strategies can be broken down into three subsequent steps: Signal Generation, Trade
Implementation and Performance Analysis

Questions?

Contact Details: osteinki@faculty.ie.edu or +41 76 228 2794


Algorithmic Trading
Session 2
Success and Risk Factors of
Quantitative Trading Strategies
Oliver Steinki, CFA, FRM

Outline

Introduction

Performance Drivers of Quantitative Trading Strategies

Basic Concepts

Mathematical Expectation

To Reinvest Trading Profits or not?

What Is the Best Way to Reinvest?

Kelly Criterion

Finding the optimal f

Summary and Questions

Contact Details: osteinki@faculty.ie.edu or +41 76 228 2794

Introduction
Risk / Money Management as a Performance Driver

The performance of any trading strategy is driven by the mathematical expectation of the strategy and
the position size taken. Obviously, position sizing (money management) is related to the size of your account,
but how exactly does it relate?

We do not have control over whether the next trade will be profitable or not, as it is a mathematical expectations game.
Yet, we do have control over the quantity we have on. Since one does not dominate the other, our resources are
better spent concentrating on putting on the right quantity

Risk management is about decision making strategies that aim to maximize the risk / return ratio within a given
level of acceptable risk.

Example: Your client gives you a drawdown limit of 5%. Hence, you would like to optimize your trading
strategy in terms of risk / return tradeoff, however it has to be done in a way that it must not breach the
drawdown constraint

Several concepts of money management will be explained by using gambling concepts. The mathematics of money
management and the principles involved in trading and gambling are quite similar. The main difference is that in the
math of gambling we are usually dealing with Bernoulli outcomes (only two possible outcomes), whereas in trading
we are dealing with an entire probability distribution that the PnL may take

Performance Drivers
Performance Drivers of Quantitative Trading Strategies I

Quantitative Investment Strategies are driven by four success factors: trade frequency, success ratio, return
distributions when right/wrong and leverage ratio

The higher the success ratio, the more likely it is to achieve a positive return over a one year period. Higher
volatility of the underlying assuming constant success ratio will lead to higher expected returns

The distribution of returns when being right / wrong is especially important for strategies with heavy long or short
bias. Strategies with balanced long/short positions and hence similar distributions when right/wrong are less
impacted by these distributional patterns. Downside risk can further be limited through active risk/money
management, e.g. stop loss orders

Leverage plays an important role to scale returns and can be seen as an artificial way to increase the volatility of
the traded underlying. It is at the core of the money management question to determine the ideal betting size. For
example, a 10 times leveraged position on an asset with 1% daily moves is similar to a full non-leveraged position
on an asset with 10% daily moves

Performance Drivers
Performance Drivers of Quantitative Trading Strategies II

Van Tharp introduced the concept of the R multiple. 1R measures the initial risk of a position, which is equal to the
distance between entry and stop loss level. This assumes that one would be executed at the stop loss level. Exit levels
should be chosen so that the gains are higher than 1R. This is another way of saying "cut losses short and let profits
run". Example: Enter a long position at 10 EUR with a stop loss order at 9 EUR. 1R = initial risk = 10%

In mathematical terms, the expected profit of a trading strategy is:


Gain in % = frequency of trades * (winning % * avg. winning size - losing % * avg. losing size) * leverage ratio
losing % = 1 - winning % and avg. winning size = n * R, with n = average win-to-loss ratio

Gain in % = frequency of trades * (winning % * n * R - (1 - winning %) * R) * leverage ratio * 1/100

Example: A strategy trades daily, has a success ratio of 60%, equal average winning and losing size of 1% and
trades a leverage ratio of 200% of equity. In this case, the expected yearly gain is:

Gain = 250 * (60% * 1% - 40% * 1%) * 2 = 100% p.a.

Gain = 250 * (60% * 1/1 * 1 - (1 - 60%) * 1) * 2 * 1/100 = 100% p.a.

Basic Concepts

Ultimately, we have no control over whether the next trade will be profitable or not. Yet we do have control over
the quantity we have on. Since one does not dominate the other, our resources are better spent concentrating on
putting on the right quantity

The worst case loss on any given trade (which could be ensured through a stop loss, but does not necessarily have
to) together with the level of equity in your account, should be the base for your position sizing, e.g. how many
contracts to trade

The divisor f of this biggest perceived loss is a number between 0 and 1 which determines how many contracts to
trade. Assuming a portfolio of $50,000, a worst-case loss of $5,000 per contract and a position of five contracts,
this divisor is calculated as:
$50,000 / ($5,000 / f) = 5

Thus, with a divisor f of 0.5, you trade 1 contract per $10,000 in equity

This divisor we will call by its variable name f. Thus, whether consciously or subconsciously, on any given trade
you are selecting a value for f when you decide how many contracts or shares to put on.
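The divisor-f position sizing above can be sketched directly; the dollar figures are the slide's example and the function name is my own:

```python
# Position-sizing sketch from the divisor-f arithmetic above.

def contracts_to_trade(equity, worst_case_loss, f):
    """Number of contracts = equity / (worst-case loss per contract / f)."""
    return equity / (worst_case_loss / f)

# Slide example: $50,000 equity, $5,000 worst-case loss, f = 0.5 -> 5 contracts
```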

Mathematical Expectation

Mathematical Expectation (ME) is the amount you expect to make or lose, on average, each bet (trade)

ME = Σ (i = 1 to N) P_i * A_i, with

P_i = Probability of outcome i

A_i = Amount won or lost in outcome i

N = Number of possible outcomes

The mathematical expectation is computed by multiplying each possible gain or loss by its corresponding
probability and then summing these products

Example: Consider a game with a 50% chance of winning $2 and a (1 - 50%) = 50% chance of losing $1

ME = (0.5*$2) + (0.5*(-$1)) = $0.5

The expected profit per game is hence $0.5. This is a typical example of a game with an edge as the ME in a fair
game should be $0.

In a negative expectation game, there is no money management scheme that will create a winning strategy. If you
continue to trade a negative expectation game, you will lose your entire stake in the long run.
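The ME definition above, as a sketch; the probability/amount pairs are the slide's coin-toss example and the function name is my own:

```python
# Mathematical expectation of a bet, per the definition above.

def mathematical_expectation(outcomes):
    """outcomes: list of (probability, amount) pairs; returns the expected PnL."""
    return sum(p * a for p, a in outcomes)

# Slide example: 50% chance of winning $2, 50% chance of losing $1 -> ME = $0.5
me = mathematical_expectation([(0.5, 2.0), (0.5, -1.0)])
```

A fair game returns 0; any negative value signals a game that no money-management scheme can turn into a winner.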

Reinvest Trading Profits or Not?

Reinvesting trading profits can turn a winning system into a losing system but not vice versa. A winning system
becomes a losing system if returns are not consistent enough

Changing the order or sequence of trades does not affect the final outcome, neither on a non reinvestment basis,
nor on a reinvestment basis

Reinvesting turns the linear growth function of a trading strategy with positive mathematical expectation into an
exponential growth function

The geometric mean is the best system to measure the tradeoff between profitability and consistency. It is
calculated as the Nth root of the Terminal Wealth Relative (TWR). TWR represents the return on your stake as a
multiple of your initial investment
TWR = Π (i = 1 to N) HPR_i, with

HPR = Holding Period Returns

For the three systems analyzed in excel, the TWRs and geometric means are as follows:

The geometric mean expresses your growth factor per trade:

Geometric Mean = TWR^(1/N)

System      TWR      Geometric Mean
System A    0.918    0.979
System B    1.071    1.017
System C    1.041    1.010
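TWR and geometric mean can be computed as follows. This is a pure-Python sketch; the HPR inputs used for illustration are assumptions, not the Excel systems' underlying data:

```python
# TWR and geometric mean of holding-period returns, per the definitions above.

def twr(hprs):
    """Terminal Wealth Relative: product of the holding-period returns."""
    result = 1.0
    for h in hprs:
        result *= h
    return result

def geometric_mean(hprs):
    """Nth root of the TWR -> growth factor per trade."""
    return twr(hprs) ** (1.0 / len(hprs))
```

Note how a +10% gain followed by a -10% loss leaves the TWR below 1, which is exactly the consistency penalty the geometric mean captures.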

What Is the Best Way to Reinvest?

So far, we have shown that reinvestment of returns leads to the highest geometric return and should therefore be
used. We have implicitly assumed that we would reinvest at any time 100% of our equity. However, this is not
ideal.

Consider again our coin toss game: 50% chance of winning $2, 50% chance of losing $1. What would be the ideal
size to bet? If you bet 100% each time, you will be wiped out sooner or later. If you only bet $1 each time, you do
not reinvest. Hence, the ideal betting size is somewhere in between these extremes

Consider the ideal strategy for a negative expectancy game: You want to bet on as few trades as possible as your
likelihood of losing increases with the number of trials.

Example: You are forced to play a game with 49% chance of winning $1, 51% chance of losing $1. The more
often you bet, the greater the likelihood you will lose, hence you should only bet once.

Returning to the positive expectancy game: The quantity f that a trader can put on lies between 0 and 1. f
represents the trader's quantity relative to the perceived loss and total equity. As you know you have an edge over
N bets, but not which bets will be losers/winners and by how much, the best way is to bet a constant percentage of
your total equity. We are hence investigating the question of how best to exploit a positive expectation game

The answer: For an independent trials process, this is achieved by reinvesting a fixed fraction f of your total stake

Kelly Criterion

The Kelly criterion deals with the question of optimal f in a gambling context. It states that we should bet the fixed
fraction of our stake (f) which maximizes the growth function G(f) for events with two possible outcomes:
G(f) = P * ln(1 + B * f) + (1 - P) * ln(1 - f), with

f = the optimal fixed fraction


P = the probability of a winning bet

B = ratio of amount won to amount lost if bet won/lost


ln = the natural logarithm function

Kelly formula applicable for events with equal wins and losses:
f = P - Q, or f = (2 * P) - 1, which are equivalent, with
Q = complement of P (1 - P)
Example: Consider the following stream of bets: -1, +1, +1, -1, -1, +1, +1, -1, +1, +1
f = 0.6 - 0.4 = 0.2 or (0.6 * 2) - 1 = 0.2

Kelly Criterion

Kelly formula applicable for events with unequal wins and losses:

f = ((B + 1) * P - 1) / B, with notation as before

Example: the two-to-one coin toss example

f = ((2 + 1) * 0.5 - 1) / 2
= (3 * 0.5 - 1) / 2 = 0.5 / 2 = 0.25

However, the Kelly formula is only applicable to outcomes that have a Bernoulli distribution. A Bernoulli
distribution has only two possible discrete outcomes. Hence, the Kelly formula is applicable for gambling, but not
for trading, where we have more than two possible outcomes and the Kelly formula would yield wrong results for
the optimal f

In case of gambling with a Bernoulli distributed outcome, the optimal f is:


f = ME / B = 0.5 / 2 = 0.25 in the two to one coin toss example
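Both Kelly variants above in code. This is a sketch with names of my own, and, as the slide stresses, it is only applicable to Bernoulli (two-outcome) bets:

```python
# Kelly fractions for Bernoulli (two-outcome) bets, per the formulas above.

def kelly_equal(p):
    """Equal win/loss size: f = 2P - 1 (equivalently P - Q)."""
    return 2.0 * p - 1.0

def kelly_unequal(p, b):
    """Unequal payoffs: f = ((B + 1) * P - 1) / B."""
    return ((b + 1.0) * p - 1.0) / b

# Two-to-one coin toss: P = 0.5, B = 2 -> f = 0.25
```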


Finding the Optimal f

To find the optimal f for trading, we must amend our formula to find Holding Period Returns (HPR):

HPR = 1 + f * (-Trade / Biggest Loss), with

f = the value we are using for f

-Trade = the negated PnL on any given trade (profits become negative, losses positive numbers)

Biggest Loss = the PnL that resulted in the biggest loss (always negative)

TWR = Π (i = 1 to N) (1 + f * (-Trade_i / Biggest Loss))

G = TWR^(1/N) = (Π (i = 1 to N) (1 + f * (-Trade_i / Biggest Loss)))^(1/N),

with notation as before and

N = total number of trades

G = Geometric mean of the HPRs

We find the f that results in the highest G by looping through all possible f values starting at 0 in 0.01 increments
and stopping as soon as G starts decreasing, as the f function has only one peak
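The search loop described above, as a sketch. It assumes at least one losing trade (so Biggest Loss is negative) and exploits the single peak of the TWR curve to stop early; maximizing TWR is equivalent to maximizing G for a fixed number of trades:

```python
# Optimal-f search per the loop described above: step f in 0.01 increments
# and stop at the first decrease of TWR, since TWR(f) has a single peak.
# Assumes at least one losing trade (Biggest Loss must be negative).

def optimal_f(trades):
    """Return (f, TWR) maximizing terminal wealth for a list of per-trade PnLs."""
    biggest_loss = min(trades)                       # always negative for a usable system
    best_f, best_twr = 0.0, 1.0
    for k in range(1, 100):                          # f = 0.01, 0.02, ..., 0.99
        f = k / 100.0
        t = 1.0
        for trade in trades:
            t *= 1.0 + f * (-trade / biggest_loss)   # HPR per the formula above
        if t < best_twr:                             # past the peak: stop searching
            break
        best_f, best_twr = f, t
    return best_f, best_twr

# Slide example sequence: optimal f comes out at 0.24 with TWR ≈ 1.096
f, twr = optimal_f([9, 18, 7, 1, 10, -5, -3, -17, -7])
```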

Finding the Optimal f


Examples

Consider the following sequence of trades: +9, +18, +7, +1, +10, -5, -3, -17, -7

Wrong Way: Kelly, f = 0.16 with a TWR of 1.085

Correct way: f = 0.24 with a TWR of 1.096

Comparison after x number of loops:

# of loops   correct f TWR   wrong f TWR   Difference
1            1.096           1.085         1%
20           6.220           5.125         21%
50           96.506          59.453        62%
100          9,313.314       3,534.708     163%

As you see, using the optimal f does not appear to offer much advantage over the short run, but over the long run it
becomes more and more important. The point is, you must give the program time when trading at the optimal f
and not expect miracles in the short run. The more time (i.e., bets or trades) that elapses, the greater the difference
between using the optimal f and any other money-management strategy


Summary and Questions

We do not have control over whether the next trade will be profitable or not. Yet, we do have control over the quantity
we have on. Since one does not dominate the other, our resources are better spent concentrating on putting on the
right quantity

The Kelly formula is only applicable to outcomes that have a Bernoulli distribution. It is a common mistake among
traders to use it for trading

Optimal f is the mathematical way to maximize the geometric mean of your trading system and hence the way to size
positions if you want to exploit a positive expectancy game in the most efficient manner

Questions?

Contact Details: osteinki@faculty.ie.edu or +41 76 228 2794


Algorithmic Trading
Session 3
Trade Signal Generation I
Finding Trading Ideas and Common Pitfalls
Oliver Steinki, CFA, FRM

Outline

Introduction

Finding Trading Ideas

Common Pitfalls of Trading Strategies

Summary and Questions

Sources

Contact Details: osteinki@faculty.ie.edu or +41 76 228 2794

Introduction
Where Do We Stand in the Algo Prop Trading Framework?
[Diagram: SIGNAL GENERATION (decide when and how to trade) -> TRADE IMPLEMENTATION (size and execute orders, incl. exit) -> PERFORMANCE ANALYSIS (return, risk and efficiency ratios)]

As we have seen, algorithmic proprietary trading strategies can be broken down into
three subsequent steps: Signal Generation, Trade Implementation and Performance
Analysis

The first step, Signal Generation, defines when and how to trade. For example, in a
moving average strategy, the crossing of the shorter running moving average over the
longer running moving average triggers when to trade. Next to long and short, the signal
can also be neutral (do nothing). Using moving averages to generate long/short trading
signals is an example choice of how to trade

Sessions 3 to 6 deal with the question of deciding when and how to trade:

Today's Session 3: Finding Suitable Trading Strategies and Avoiding Common Pitfalls
Session 4: Backtesting
Session 5: Mean Reversion Strategies
Session 6: Momentum Strategies

Introduction
Signal Generation

Signal Generation describes the process of deciding when and how to trade

Finding a trading idea is actually not the hardest part of building a quantitative trading strategy. There are hundreds,
if not thousands, of trading ideas that are in the public sphere at any time, accessible to anyone at little or no cost.
Many authors of these trading ideas will tell you their complete methodologies in addition to their backtest results.
There are finance and investment books, newspapers and magazines, mainstream media web sites, academic papers
available online or in the nearest public library, trader forums, blogs etc.

However, you have to choose a strategy that is suitable to your constraints

Avoid common trading strategy pitfalls. Several checks can tell you quite quickly whether you should further
investigate a strategy or not

Finding Trading Ideas


Where can you find good trading ideas?

Finding prospective quantitative trading strategies is not difficult. There are:

Academic research websites

Finance web sites and blogs

Trader Forums

Newspapers and Magazines

Table taken from the book How to Build Your Own Algorithmic Trading Business by E. Chan

Finding Trading Ideas


Which Strategy Suits You?

Finding a viable strategy that suits you often does not have anything to do with the strategy itself, but rather with
your constraints:

How much time do you have for baby-sitting your trading programs? Do you trade part time? If so, you
would probably want to consider only strategies that hold overnight and not the intraday strategies.
Otherwise, you may have to fully automate your strategies

How good a programmer are you? If you know some programming languages such as Visual Basic,
MATLAB, R or even Java, C#, or C++, you can explore high-frequency strategies, and you can also trade a
large number of securities. Otherwise, settle for strategies that trade only once a day, or trade just a few
stocks, futures, or currencies

How much capital do you have available for trading? Do you have a lot of capital available for trading as
well as expenditure on infrastructure and operation? In general, I would not recommend quantitative trading
for an account with less than EUR 50,000

Is your goal to earn steady monthly income or to strive for a large, long-term capital gain? Most
people who choose to become traders want to earn a steady (hopefully increasing) monthly, or at least
quarterly, income. But you may be independently wealthy, and long-term capital gain is all that matters to
you

Finding Trading Ideas


How Does Capital Availability Affect Your Choices?

Ernest Chan lists a number of considerations dependent on your capital available:

Table taken from the book How to Build Your Own Algorithmic Trading Business by E. Chan

Common Pitfalls of Trading Strategies


Quick Checks

Before doing an in-depth backtest of the strategy, you can quickly filter out unsuitable strategies if they fail one or
more of these tests:

Does it outperform a benchmark?

Does it have a high enough Sharpe/Sortino ratio?

Does it have a small enough drawdown and short enough drawdown duration?

Does the backtest suffer from survivorship bias?

Does the backtest suffer from data snooping bias?

Common Pitfalls of Trading Strategies


Benchmark Outperformance and Performance Consistency

How does the strategy compare with a benchmark and how consistent are its returns? Return comparisons are easy for
long-only stock strategies, but more difficult for advanced strategies such as long/short equity

Another issue to consider is the consistency of the returns generated by a strategy. Though a strategy may have the
same average return as the benchmark, perhaps it delivered positive returns every month while the benchmark
occasionally suffered some very bad months. In this case, we would still deem the strategy superior to the
benchmark. Always use measures such as the Information, Sharpe (Information ratio with benchmark returns equal
to the risk free asset) or Sortino ratio to measure risk-adjusted performance, never return alone

Information ratio: IR = mean(Rp − Rb) / std(Rp − Rb), where Rp are the strategy returns and Rb the benchmark returns

Sortino ratio: Sortino = mean(Rp − Rf) / downside deviation, where the downside deviation uses only the returns below the risk-free rate Rf
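These ratios can be sketched in a few lines. A minimal illustration, assuming daily data and a 252-day annualization factor (both assumptions, not from the slides):

```python
import numpy as np

def information_ratio(returns, benchmark, periods_per_year=252):
    """Annualized information ratio of strategy vs. benchmark returns."""
    active = np.asarray(returns) - np.asarray(benchmark)
    return np.sqrt(periods_per_year) * active.mean() / active.std(ddof=1)

def sortino_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sortino ratio: excess return over downside deviation."""
    excess = np.asarray(returns) - risk_free
    downside = excess[excess < 0]
    downside_dev = np.sqrt((downside ** 2).sum() / len(excess))
    return np.sqrt(periods_per_year) * excess.mean() / downside_dev

# Example with hypothetical daily strategy and benchmark returns
rng = np.random.default_rng(0)
strat = rng.normal(0.0005, 0.01, 252)
bench = rng.normal(0.0003, 0.01, 252)
print(information_ratio(strat, bench))
print(sortino_ratio(strat))
```

Note that the Sortino ratio divides by the downside deviation only, so a strategy with frequent small gains and rare mild losses scores better than its Sharpe ratio alone would suggest.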

Common Pitfalls of Trading Strategies


Drawdowns

How deep and long is the drawdown? A strategy suffers a drawdown whenever it has lost money recently. A
drawdown at a given time t is defined as the difference between the current equity value (assuming no redemption
or cash infusion) of the portfolio and the global maximum of the equity curve occurring on or before time t. The
maximum drawdown is the difference between the global maximum of the equity curve and the global
minimum of the curve after the occurrence of the global maximum
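The drawdown definition translates directly into code. A minimal sketch over a hypothetical equity curve (measuring drawdowns in relative terms and duration in bars are choices made here, not prescribed by the slides):

```python
import numpy as np

def drawdown_stats(equity):
    """Drawdown series, maximum drawdown and longest drawdown duration
    from an equity curve (assuming no redemptions or cash infusions)."""
    equity = np.asarray(equity, dtype=float)
    running_max = np.maximum.accumulate(equity)      # global max up to time t
    drawdown = (equity - running_max) / running_max  # relative drawdown at t
    max_dd = drawdown.min()
    # duration: longest run of consecutive periods below the running maximum
    below, longest, current = drawdown < 0, 0, 0
    for b in below:
        current = current + 1 if b else 0
        longest = max(longest, current)
    return drawdown, max_dd, longest

equity = [100, 110, 105, 95, 100, 120, 115, 125]
dd, max_dd, duration = drawdown_stats(equity)
print(round(max_dd, 4), duration)   # -> -0.1364 3
```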

Are there any client specific or risk management imposed drawdown constraints?

Graph taken from the book Quantitative Trading: How to Build Your Own Algorithmic Trading Business by E. Chan


Common Pitfalls of Trading Strategies


Transaction Costs

Every time a strategy buys and sells a security, it incurs a transaction cost. The more frequently it trades, the larger
the impact of transaction costs will be on the profitability of the strategy. These transaction costs are not just due to
commission fees charged by the broker. There are also exchange fees as well as stamp duties

There will also be the cost of liquidity - when you buy and sell securities at their market prices, you are paying
the bid-ask spread. If you buy and sell securities using limit orders, however, you avoid the liquidity costs but
incur opportunity costs

When you buy or sell a large chunk of securities, you will not be able to complete the transaction without
impacting the prices at which this transaction is done. This effect on the market prices due to your own order is
called market impact, and it can account for a large part of the total transaction cost when the security is not
very liquid. Algorithmic execution models try to limit these market impact costs

There can also be a delay between the time your strategy transmits an order to the broker and the time it is
executed at the exchange, due to slow internet connection or slow software. This delay can cause slippage, the
difference between the price that triggers the order and the actual execution price


Common Pitfalls of Trading Strategies


Survivorship Bias

A historical database of asset prices, such as stocks, that does not include stocks that have disappeared due to
bankruptcies, delistings, mergers, or acquisitions suffers from so-called survivorship bias, because only
survivors of those often unpleasant events remain in the database

The same problem applies to mutual fund or hedge fund databases that do not include funds that went out of
business, usually due to negative performance

Survivorship bias is especially applicable to value strategies, e.g. investment concepts that buy stocks that
seem to be cheap. Some stocks were cheap because the companies were going bankrupt shortly afterwards. So if your
strategy includes only those cases where the stocks were very cheap but eventually survived (and maybe prospered) and
neglects those cases where the stocks finally did get delisted, the backtest performance will be much better than
what a trader would actually have experienced at the time


Common Pitfalls of Trading Strategies


Data Snooping Bias / In-Sample Over-Optimization

If you build a trading strategy that has 100 parameters, it is very likely that you can optimize those parameters in a
way that the historical performance looks amazing. It is also very likely that the future performance of this strategy
will not at all look like this over-optimized historical performance

Data snooping is very difficult to avoid even if you have just one or two parameters (such as entry and exit
thresholds). We will further investigate this issue in Session 4, when we discuss backtesting and common backtesting
pitfalls

In general, the more rules a strategy has, and the more parameters the model has to optimize, the more likely it is
to suffer from data-snooping bias

The following general rules, based on Occam's razor, help to avoid data snooping bias in quantitative trading
strategies:

Strategy is based on a sound econometric or rational basis, and not on random discovery of patterns

Only a limited number of parameters need to be fitted to past data

All optimizations must occur in a backward-looking moving window, involving no future unseen data, and
the effect of the optimization must be continuously demonstrated on that future, unseen data


Summary and Questions

Finding quantitative trading ideas is not that difficult. There are many finance and investment books, newspapers and
magazines, mainstream media web sites, academic papers available online or in the nearest public library, trader
forums, blogs etc. which can be used to form intuitive ideas which are then transformed into trading signals

Finding a viable strategy that suits your constraints is more difficult. Consider your time and capital limitations as
well as your programming skills. You also have to decide whether you want to earn a steady monthly income or if
you want to strive for a larger, long-term capital gain

Before backtesting a strategy, you can filter out many unviable strategies based on a quick check. These checks
analyse the strategy regarding its outperformance of relevant benchmarks, its risk-adjusted returns, and drawdown
characteristics. You should also check whether your strategy suffers from survivorship bias or data snooping

Questions?



Sources

Quantitative Trading: How to Build Your Own Algorithmic Trading Business by Ernest Chan

Algorithmic Trading: Winning Strategies and Their Rationale by Ernest Chan

The Mathematics of Money Management: Risk Analysis Techniques for Traders by Ralph Vince



Algorithmic Trading
Session 4
Trade Signal Generation II
Backtesting
Oliver Steinki, CFA, FRM

Outline

Introduction

Backtesting

Common Pitfalls of Backtesting

Statistical Significance of Backtesting

Summary and Questions

Sources


Introduction
Where Do We Stand in the Algo Prop Trading Framework?
[Framework diagram: Signal Generation (decide when and how to trade) → Trade Implementation (size and execute orders, incl. exit) → Performance Analysis (return, risk and efficiency ratios)]

As we have seen, algorithmic proprietary trading strategies can be broken down into
three subsequent steps: Signal Generation, Trade Implementation and Performance
Analysis

The first step, Signal Generation, defines when and how to trade. For example, in a
moving average strategy, the crossing of the shorter running moving average over the
longer running moving average triggers when to trade. Next to long and short, the signal
can also be neutral (do nothing). Using moving averages to generate long/short trading
signals is an example choice of how to trade

Sessions 3 to 6 deal with the question of deciding when and how to trade:

Session 3: Finding Suitable Trading Strategies and Avoiding Common Pitfalls
Today's Session 4: Backtesting
Session 5: Mean Reversion Strategies
Session 6: Momentum Strategies

Introduction
Backtesting

Signal Generation describes the process of deciding when and how to trade. Backtesting is the process of
feeding historical data to your trading strategy to see how it would have performed. A key difference between a
traditional investment management process and an algorithmic trading process is the possibility to do so

However, if one backtests a strategy without taking care to avoid common backtesting pitfalls, the whole
backtesting procedure will be useless. Or worse - it might be misleading and may cause significant financial losses

Since backtesting typically involves the computation of an expected return and other statistical measures of the
performance of a strategy, it is reasonable to question the statistical significance of these numbers. We will discuss
the general way to estimate statistical significance using the methodologies of hypothesis testing and Monte Carlo
simulations. In general, the more round trip trades there are in the backtest, the higher will be the statistical
significance

But even if a backtest is done correctly without pitfalls and with high statistical significance, it doesn't necessarily
mean that it is predictive of future returns. Regime shifts can spoil everything, and a few important historical
examples will be highlighted

Backtesting
Why is Backtesting Important?

Backtesting is the process of feeding historical data to your trading strategy to see how it would have performed.
The idea is that the backtested performance of a strategy tells us what to expect as future performance

Whether you have developed a strategy from scratch or you read about a strategy and are sure that the published
results are true, it is still imperative that you independently backtest the strategy. There are several reasons to do
so:

The profitability of a strategy often depends sensitively on the details of implementation, e.g. which prices
(bid, ask, last traded) to use for signal generation (trigger) and as entry /exit points (execution)

Only if we have implemented the backtest ourselves can we analyze every little detail and weakness of the
strategy. Hence, by backtesting a strategy ourselves, we can find ways to refine and improve it, and
thereby improve its risk/reward ratio

Backtesting a published strategy allows you to conduct true out-of-sample testing in the period following
publication. If that out-of-sample performance proves poor, then one has to be concerned that the strategy
may have worked only on a limited data set

The full list of potential backtesting pitfalls is quite long, but we will look at a few common mistakes on the
next pages

Common Pitfalls of Backtesting


Look-Ahead Bias

As the name suggests, look-ahead bias describes a strategy that uses information from a later time t1 to determine a
trading signal at an earlier time t0. A common example of look-ahead bias is a strategy that uses the high or low of a
trading day as a trigger for trades during that same day. This is not realistic, as we only know the high/low after market close

Look-ahead bias is essentially a programming error and can infect only a backtest program but not a live trading
program because there is no way a live trading program can obtain future information

Hence, if your backtesting and live trading programs are one and the same, and the only difference between
backtesting and live trading is what kind of data you are feeding into the program, you are pretty safe from
look-ahead bias
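One practical guard follows from this: keep signal computation and trade timing one bar apart. A small pandas sketch with hypothetical prices (the 3-day moving-average rule is only an illustration):

```python
import pandas as pd

# Hypothetical daily bar data; 'close' is the last price of each day
prices = pd.DataFrame({"close": [100, 102, 101, 105, 107, 104]})

# Signal computed from today's close (e.g. close above its 3-day moving average)
ma = prices["close"].rolling(3).mean()
raw_signal = (prices["close"] > ma).astype(int)

# Look-ahead bias would be trading at today's close on a signal that needs
# today's close. Guard: lag the signal one bar before computing returns
position = raw_signal.shift(1).fillna(0)   # act on yesterday's signal
returns = prices["close"].pct_change()
strategy_returns = position * returns      # realizable P&L per bar
print(strategy_returns.round(4).tolist())
```

Forgetting the `shift(1)` makes the backtest trade on information it could not have had, which is exactly the programming error described above.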

Common Pitfalls of Backtesting


Data Snooping Bias / In-Sample Over-Optimization

If you build a trading strategy that has 100 parameters, it is very likely that you can optimize those parameters in a
way that the historical performance looks amazing. It is also very likely that the future performance of this strategy
will not at all look like this over-optimized historical performance

In general, the more rules a strategy has, and the more parameters the model has to optimize, the more likely it is
to suffer from data-snooping bias

The way to detect data-snooping bias is easy: We should test the model on out-of-sample data and reject a model
that doesn't pass the out-of-sample test

Cross-validation is probably the best way to avoid data snooping. That is, you should select a number of different
subsets of the data for training and tweaking your model and, more importantly, make sure that the model
performs well on these different subsets. One reason to prefer models with high risk/return ratios and short
maximum drawdown durations is that this is an indirect way to ensure that the model will pass the cross-validation
test: the only subsets where the model will fail the test are those rare drawdown periods
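A walk-forward split is one simple way to generate such subsets while keeping every optimization backward looking. A minimal sketch (the window lengths are arbitrary illustrations):

```python
import numpy as np

def walk_forward_splits(n, train_len, test_len):
    """Yield (train_idx, test_idx) pairs for a backward-looking moving
    window: each test window contains only data after its train window."""
    start = 0
    while start + train_len + test_len <= n:
        train = np.arange(start, start + train_len)
        test = np.arange(start + train_len, start + train_len + test_len)
        yield train, test
        start += test_len

# Example: 10 observations, train on 4 bars, test on the next 2
for train, test in walk_forward_splits(10, 4, 2):
    print(train.tolist(), test.tolist())
```

Parameters are re-fitted on each train window and judged only on the following test window, so no future data ever leaks into the optimization.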

Common Pitfalls of Backtesting


Survivorship Bias

A historical database of asset prices, such as stocks, that does not include stocks that have disappeared due to
bankruptcies, delistings, mergers, or acquisitions suffers from so-called survivorship bias, because only
survivors of those often unpleasant events remain in the database

The same problem applies to mutual fund or hedge fund databases that do not include funds that went out of business,
usually due to negative performance

Survivorship bias is especially applicable to value strategies, e.g. investment concepts that buy stocks that seem to be
cheap. Some stocks were cheap because the companies were going bankrupt shortly afterwards. So if your strategy includes
only those cases where the stocks were very cheap but eventually survived (and maybe prospered) and neglects those
cases where the stocks finally did get delisted, the backtest performance will be much better than what a trader
would actually have experienced at the time

Common Pitfalls of Backtesting


Stock Splits and Dividend Adjustments

Whenever a company's stock has an N-to-1 split, the stock price will be divided by N. However, if you own a
number of shares of that company's stock before the split, you will own N times as many shares after the split, so
there is in fact no change in the total market value

However, in a backtesting environment, we often look only at the price series to determine our trading signals,
not the market-value series of some hypothetical account. So unless we back-adjust the prices before the ex-date of
the split by dividing them by N, we will see a sudden drop in price on the ex-date, and that might trigger
erroneous trading signals

Similarly, when a company pays a cash (or stock) dividend of $d per share, the stock price will also go down by $d
(absent other market movements). That is because if you own that stock before the dividend ex-date, you will get
cash (or stock) distributions in your brokerage account, so again there should be no change in the total market value

If you do not back-adjust the historical price series prior to the ex-date, the sudden drop in price may also trigger
an erroneous trading signal
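A simplified back-adjustment sketch for split and dividend events (a real adjustment chains multiple events and often scales rather than subtracts dividends; this is an illustration, not a production routine):

```python
import pandas as pd

def back_adjust(prices, events):
    """Back-adjust a raw price series for splits and cash dividends.
    events: list of (ex_date_index, split_ratio, dividend) tuples.
    Prices strictly before each ex-date are divided by the split ratio
    and reduced by the dividend, so the series shows no artificial
    jump on the ex-date."""
    adj = prices.astype(float).copy()
    for ex, split, div in events:
        adj.iloc[:ex] = adj.iloc[:ex] / split - div
    return adj

# 2-for-1 split effective at bar index 2: raw series jumps from ~42 to ~21.5
raw = pd.Series([40.0, 42.0, 21.5, 22.0])
adjusted = back_adjust(raw, [(2, 2.0, 0.0)])
print(adjusted.tolist())   # -> [20.0, 21.0, 21.5, 22.0]
```

After adjustment the pre-split prices are on the same scale as the post-split prices, so no spurious "drop" can trigger a signal.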

Common Pitfalls of Backtesting


Trading Venue Dependency and Short Sale Constraints

Most larger stocks are listed on multiple exchanges, electronic communication networks (ECNs) and dark pools. The
historical last daily price might have occurred on any of those trading venues. However, if you enter a
market-on-open (MOO) or market-on-close (MOC) order, it will be routed to the primary exchange only. Hence,
your backtested performance based on open or close prices might differ from a live trading performance based on
MOO/MOC orders

Foreign Exchange (FX) markets are even more fragmented and there is no rule that says a trade executed at one
venue has to be at the best bid or ask across all the different FX venues

A stock-trading model that involves shorting stocks assumes that those stocks can be shorted, but often there are
difficulties in shorting some stocks. This might either be due to limited availability of your broker to locate such
stocks or due to regulatory reasons. For example, many European countries and the USA prohibited the short sale
of financial stocks during the financial crisis


Common Pitfalls of Backtesting


Futures Continuous Contracts and Futures Close vs. Settlement Prices

Futures contracts have expiry dates, so a trading strategy on, say, volatility futures, is really a trading strategy on
many different contracts. Usually, the strategy applies to front-month contracts. Which contract is the front month
depends on exactly when you plan to roll over to the next month; that is, when you plan to sell the current front
contract and buy the contract with the next nearest expiration date. Hence, when choosing a data vendor for
historical futures prices, you must understand exactly how they have dealt with the back-adjustment issue, as it
certainly impacts your backtest

The daily closing price of a futures contract provided by a data vendor is usually the settlement price, not the last
traded price of the contract during that day. Note that a futures contract will have a settlement price each day
(determined by the exchange), even if the contract has not traded at all that day. And if the contract has traded, the
settlement price is in general different from the last traded price. Most historical data vendors provide the
settlement price as the daily closing price, which cannot be replicated via MOC orders in a live trading
environment

Statistical Significance of Backtesting


Hypothesis Testing

In any backtest, we face the problem of finite sample size: Whatever statistical measures we compute, such as
average returns or maximum drawdowns, are subject to randomness. In other words, we may just be lucky that our
strategy happened to be profitable in a small data sample. Statisticians have developed a general methodology called
hypothesis testing to address this issue. The hypothesis testing framework applied to backtesting follows these steps:
1. Based on a backtest on some finite sample of data, we compute a certain statistical measure called the test
statistic. For concreteness, let's say the test statistic is the average daily return of the trading strategy in that
period

2. We suppose that the true average daily return based on an infinite data set is actually zero. This supposition is
called the null hypothesis

3. We suppose that the probability distribution of daily returns is known. This probability distribution has a
zero mean, based on the null hypothesis. We describe later how we determine this probability distribution

4. Based on this null hypothesis probability distribution, we compute the probability p that the average daily
return will be at least as large as the observed value in the backtest (or, for a general test statistic, as
extreme, allowing for the possibility of a negative test statistic). This probability p is called the p-value, and if
it is very small (say, smaller than 0.01), we can reject the null hypothesis and conclude that the
backtested average daily return is statistically significant and not equal to 0

Statistical Significance of Backtesting


Three Ways to Determine the Probability Distribution

Step 3 of the described Hypothesis Testing Framework requires the most thought. How do we determine the
probability distribution under the null hypothesis? There are three ways to do so:
1. We assume the daily returns follow a standard parametric distribution such as the Gaussian. If we do
this, it is clear that if the backtest has a high Sharpe ratio, it would be very easy for us to reject the null
hypothesis. This is because the standard test statistic for a Gaussian distribution is none other than the
average divided by the standard deviation and multiplied by the square root of the number of data points

2. Another way to estimate the probability distribution of the null hypothesis is to use Monte Carlo methods to
generate simulated historical price data and feed these simulated data into our strategy to determine the
empirical probability distribution of profits. If we generate the simulated data with the same first moments and
the same length as the actual price data, and run the trading strategy over all these simulated price series, we
can find out in what fraction p of these price series the average return is greater than or equal to the backtest
return. Ideally, p will be small, which allows us to reject the null hypothesis

3. Andrew Lo suggested a third way to estimate the probability distribution: instead of generating simulated
price data, we generate sets of simulated trades, with the constraint that the number of long and short entry
trades is the same as in the backtest and with the same average holding period for the trades. These trades
are distributed randomly over the actual historical price series. We then measure what fraction of such sets
of trades has an average return greater than or equal to the backtest average return
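The second method can be sketched as follows: simulate return series with the same first two moments and length as the backtest data, run the strategy on each, and count how often the simulated average return reaches the backtested one (the always-long toy strategy and the Gaussian simulator are only illustrations):

```python
import numpy as np

def mc_pvalue(backtest_returns, strategy, n_sims=1000, seed=0):
    """Estimate the p-value of a backtest's average return by running the
    strategy on simulated Gaussian return series with matched mean,
    standard deviation and length."""
    r = np.asarray(backtest_returns)
    observed = strategy(r).mean()
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_sims):
        sim = rng.normal(r.mean(), r.std(ddof=1), len(r))
        if strategy(sim).mean() >= observed:
            count += 1
    return count / n_sims

# Toy strategy: always long, so its returns equal the input returns
always_long = lambda rets: rets
rng = np.random.default_rng(1)
daily = rng.normal(0.001, 0.01, 252)   # hypothetical backtest returns
print(mc_pvalue(daily, always_long))
```

For the always-long strategy the p-value comes out near 0.5: simply holding a drifting asset is not a genuine edge over the simulated series, which is exactly what the test is designed to reveal.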

Statistical Significance of Backtesting


Will a Backtest Be Predictive of Future Returns?

Even if we manage to avoid all the common backtesting pitfalls outlined earlier and there are enough trades to
ensure statistical significance of the backtest, the predictive power of any backtest rests on the central assumption
that the statistical properties of the price series are unchanging, so that the trading rules that were profitable in the
past will be profitable in the future. This assumption has often been invalidated in the past:

Decimalization of U.S. stock quotes on April 9, 2001

The 2008 financial crisis, which induced a subsequent 50 percent collapse of average daily trading volumes.
Retail trading and ownership of common stock were particularly reduced. This has led to decreasing average
volatility of the markets, but with increasing frequency of sudden outbursts such as those that occurred
during the flash crash in May 2010 and the U.S. federal debt credit rating downgrade in August 2011

The same 2008 financial crisis, which also initiated a multiyear bear market in momentum strategies

The removal of the old uptick rule for short sales in June 2007 and the reinstatement of the new Alternative
Uptick Rule in 2010


Summary and Questions

Backtesting is useless if it is not predictive of future performance of a strategy, but pitfalls in backtesting will
decrease its predictive power

Make sure to avoid the following backtesting pitfalls: look-ahead bias, data-snooping and survivorship bias. Make
sure your data is adjusted for stock splits and dividends. It should also take into account trading venue dependency
and short sale constraints. Furthermore, roll returns and differences between closing and settlement prices need to
be incorporated in your backtesting framework

Make sure your backtested results are statistically significant. Use one of the three described ways to determine the
probability distribution under the null hypothesis. If possible, use cross-validation

Even if you avoid all common pitfalls and ensure that your results are statistically significant, regime shifts could still
make your strategy unprofitable in a live trading environment.

Questions?



Sources

Quantitative Trading: How to Build Your Own Algorithmic Trading Business by Ernest Chan

Algorithmic Trading: Winning Strategies and Their Rationale by Ernest Chan

The Mathematics of Money Management: Risk Analysis Techniques for Traders by Ralph Vince



Algorithmic Trading
Session 5
Trade Signal Generation III
Mean Reversion Strategies
Oliver Steinki, CFA, FRM

Outline

Introduction

Mean Reversion

Stationarity

Half Life of Mean Reversion

Cointegration

Summary and Questions

Sources


Introduction
Where Do We Stand in the Algo Prop Trading Framework?
[Framework diagram: Signal Generation (decide when and how to trade) → Trade Implementation (size and execute orders, incl. exit) → Performance Analysis (return, risk and efficiency ratios)]

As we have seen, algorithmic proprietary trading strategies can be broken down into
three subsequent steps: Signal Generation, Trade Implementation and Performance
Analysis

The first step, Signal Generation, defines when and how to trade. For example, in a
moving average strategy, the crossing of the shorter running moving average over the
longer running moving average triggers when to trade. Next to long and short, the signal
can also be neutral (do nothing). Using moving averages to generate long/short trading
signals is an example choice of how to trade

Sessions 3 to 6 deal with the question of deciding when and how to trade:

Session 3: Finding Suitable Trading Strategies and Avoiding Common Pitfalls
Session 4: Backtesting
Today's Session 5: Mean Reversion Strategies
Session 6: Momentum Strategies

Introduction
Mean Reversion vs. Momentum

Trading strategies can be profitable only if securities prices are either mean-reverting or trending.
Otherwise, they are random walking, and trading will be futile. If you believe that prices are mean reverting and
that they are currently low relative to some reference price, you should buy now and plan to sell higher later.
However, if you believe the prices are trending and that they are currently low, you should (short) sell now and plan
to buy at an even lower price later.The opposite is true if you believe prices are high

Academic research has indicated that stock prices are on average very close to random walking. However,
this does not mean that under certain special conditions, they cannot exhibit some degree of mean reversion or
trending behaviour. Furthermore, at any given time, stock prices can be both mean reverting and trending
depending on the time horizon you are interested in. Constructing a trading strategy is essentially a matter of
determining if the prices under certain conditions and for a certain time horizon will be mean reverting or
trending, and what the initial reference price should be at any given time

Mean Reversion
What is Mean Reversion?

Most price series are not mean reverting, but are geometric random walks. The returns, not the prices, usually
distribute randomly around a mean of zero. Unfortunately, one cannot trade directly on the mean
reversion of returns. (One should not confuse mean reversion of returns with anti-serial-correlation of returns,
which we can definitely trade on. But anti-serial-correlation of returns is the same as the mean reversion of prices.)
Those few price series that are found to be mean reverting are called stationary, and we will see the statistical
tests (ADF test, the Hurst exponent and Variance Ratio test) for stationarity

Fortunately, we can manufacture many more mean-reverting price series than there are traded assets because we
can often combine two or more individual price series that are not mean reverting into a portfolio whose net
market value (i.e., price) is mean reverting. Those price series that can be combined this way are called
cointegrating, and we will describe the statistical tests (CADF test and Johansen test) for cointegration

Also, as a by-product of the Johansen test, we can determine the exact weightings of each asset in order to create a
mean reverting portfolio. Because of this possibility of artificially creating stationary portfolios, there are numerous
opportunities available for mean reversion traders

Stationarity
Mean Reversion vs. Stationarity and Tests to Discover It

Mean reversion and stationarity are two equivalent ways of looking at the same type of price series, but these
two ways give rise to two different statistical tests for such series

The mathematical description of a mean-reverting price series is that the change of the price series in the
next period is proportional to the difference between the mean price and the current price. This gives rise to the
ADF test, which tests whether we can reject the null hypothesis that the proportionality constant is zero

However, the mathematical description of a stationary price series is that the variance of the log of the prices
increases more slowly than that of a geometric random walk. That is, their variance is a sublinear function of time,
rather than a linear function, as in the case of a geometric random walk. This sublinear function is usually
approximated by τ^(2H), where τ is the time separating two price measurements and H is the so-called Hurst
exponent, which is less than 0.5 if the price series is indeed stationary (and equal to 0.5 if the price series is a
geometric random walk). The Variance Ratio test can be used to see whether we can reject the null hypothesis
that the Hurst exponent is actually 0.5

Note that stationarity is somewhat of a misnomer: It doesn't mean that the prices are necessarily range
bound, with a variance that is independent of time and thus a Hurst exponent of zero. It merely means that the
variance increases more slowly than normal diffusion

Mean Reversion
Augmented Dickey Fuller Test

If a price series is mean reverting, then the current price level will tell us something about what the price's next
move will be: If the price level is higher than the mean, the next move will be a downward move; if the price level
is lower than the mean, the next move will be an upward move. The ADF test is based on just this observation. We
can describe the price changes using a linear model:

Δy(t) = λ y(t − 1) + μ + β t + α1 Δy(t − 1) + … + αk Δy(t − k) + ε(t)

The ADF test finds out whether λ = 0. If the hypothesis λ = 0 can be rejected, it means that the next move of the
asset depends on the current price level and is therefore not random

The statisticians Dickey and Fuller derived the distribution of the test statistic λ/SE(λ) and tabulated its critical
values for us, so we can look up for any value of λ/SE(λ) whether the hypothesis can be rejected at, say, the 95
percent probability level

Since we expect mean reversion, λ/SE(λ) has to be negative, and it has to be more negative than the critical value
for the hypothesis to be rejected. The critical values themselves depend on the sample size and on whether we
assume that the price series has a non-zero mean (−μ/λ) or a steady drift (−βt/λ). Most practitioners assume the
drift term to be zero

Stationarity
Hurst Exponent

Intuitively speaking, a stationary price series means that the prices diffuse from their initial value more slowly than a
geometric random walk would. Mathematically, we can determine the nature of the price series by measuring this
speed of diffusion. The speed of diffusion can be characterized by the variance

Var(τ) = ⟨|z(t + τ) − z(t)|²⟩

where z is the log price (z = log(y)), τ is an arbitrary time lag, and ⟨…⟩ denotes an average over all t. For a geometric
random walk, we know that

⟨|z(t + τ) − z(t)|²⟩ ~ τ

The ~ means that this relationship turns into an equality with some proportionality constant for large τ, but it
may deviate from a straight line for small τ. But if the (log) price series is mean reverting or trending (i.e., has
positive correlations between sequential price moves), this equation won't hold. Instead, we can write:

⟨|z(t + τ) − z(t)|²⟩ ~ τ^(2H)

This is the definition of the Hurst exponent H. For a price series exhibiting geometric random walk, H = 0.5. But
for a mean-reverting series, H < 0.5, and for a trending series, H > 0.5. As H decreases toward zero, the price
series is more mean reverting, and as H increases toward 1, the price series is increasingly trending; thus, H serves
also as an indicator for the degree of mean reversion or trendiness
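The Hurst exponent can be estimated directly from this scaling relation: since Var(τ) ~ τ^(2H), the standard deviation of lagged log-price differences scales as τ^H, so a log-log regression of that standard deviation against the lag has slope H. A minimal sketch (the lag range is an arbitrary choice):

```python
import numpy as np

def hurst(prices, max_lag=20):
    """Estimate the Hurst exponent H from the scaling of the standard
    deviation of log-price differences: std(tau) ~ tau^H, so the slope
    of log(std) against log(lag) is H."""
    z = np.log(prices)
    lags = np.arange(2, max_lag)
    tau = [np.std(z[lag:] - z[:-lag]) for lag in lags]   # std at each lag
    H = np.polyfit(np.log(lags), np.log(tau), 1)[0]
    return H

rng = np.random.default_rng(0)
gbm = np.exp(np.cumsum(rng.normal(0, 0.01, 5000)))  # geometric random walk
print(round(hurst(gbm), 2))
```

For the simulated geometric random walk the estimate lands near 0.5; a mean-reverting series would push it below 0.5 and a trending series above.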

Stationarity
Variance Ratio Test

Because of finite sample size, we need to know the statistical significance of an estimated value of H to be sure
whether we can reject the null hypothesis that H is really 0.5. This hypothesis test is provided by the Variance Ratio
test. It simply tests whether

Var(z(t) − z(t − τ)) / (τ · Var(z(t) − z(t − 1)))

is equal to 1. The outputs of this test are h and pValue: h = 1 means rejection of the random walk hypothesis at
the 90 percent confidence level, h = 0 means it may be a random walk; pValue gives the probability of obtaining
the observed result if the null hypothesis (random walk) were true
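The ratio itself is straightforward to compute. A minimal sketch on log prices (the formal test with h and pValue outputs comes from statistical packages, e.g. the arch package's VarianceRatio; this sketch only computes the ratio):

```python
import numpy as np

def variance_ratio(prices, tau):
    """Variance ratio Var(z(t) - z(t-tau)) / (tau * Var(z(t) - z(t-1)))
    on log prices: close to 1 for a random walk, below 1 for a
    mean-reverting series, above 1 for a trending one."""
    z = np.log(np.asarray(prices, dtype=float))
    var_tau = np.var(z[tau:] - z[:-tau], ddof=1)   # variance at lag tau
    var_1 = np.var(z[1:] - z[:-1], ddof=1)         # variance at lag 1
    return var_tau / (tau * var_1)

rng = np.random.default_rng(0)
rw = np.exp(np.cumsum(rng.normal(0, 0.01, 10000)))  # geometric random walk
print(round(variance_ratio(rw, 10), 2))
```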

Mean Reversion
Half Life of Mean Reversion I

The statistical tests we described for mean reversion or stationarity are very demanding, with their requirements of
at least 90 percent certainty. But in practical trading, we can often be profitable with much less certainty. In this
section, we shall find another way to interpret the λ coefficient of the ADF equation so that we know whether it is
negative enough to make a trading strategy practical, even if we cannot reject the null hypothesis that its actual
value is zero with 90 percent certainty in an ADF test. We shall find that λ is a measure of how long it takes
for a price to mean revert

To reveal this new interpretation, it is only necessary to transform the discrete ADF time series equation into a
differential form so that the changes in prices become infinitesimal quantities. Furthermore, we ignore the drift and
the lagged differences and end up with the Ornstein-Uhlenbeck formula for a mean-reverting process:

dy(t) = (λ y(t − 1) + μ) dt + dε

In the discrete form of this equation, the linear regression of Δy(t) against y(t − 1) gave us λ. This value
carries over to the differential form, which also allows an analytical solution for the expected value of y(t):

E[y(t)] = y(0) exp(λt) − (μ/λ)(1 − exp(λt))


Mean Reversion
Half Life of Mean Reversion II

Remembering that λ is negative for a mean-reverting process, this tells us that the expected value of the price decays exponentially to the value −μ/λ, with the half-life of decay equal to −log(2)/λ. This connection between a regression coefficient and the half-life of mean reversion is very useful for algorithmic traders.

First, if we find that λ is positive, this means the price series is not at all mean reverting, and we shouldn't even attempt to write a mean-reverting strategy to trade it.

Second, if λ is very close to zero, this means the half-life will be very long, and a mean-reverting trading strategy will not be very profitable because we won't be able to complete many round-trip trades in a given time period.

Third, λ also determines a natural time scale for many parameters in our strategy. For example, if the half-life is 20 days, we shouldn't use a look-back of 5 days to compute a moving average or standard deviation for a mean-reversion strategy. Often, setting the look-back equal to a small multiple of the half-life is close to optimal, and doing so will allow us to avoid brute-force optimization of a free parameter based on the performance of a trading strategy
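The regression-based half-life estimate can be sketched with NumPy. The AR(1) coefficient of 0.9 used to simulate the series below is an illustrative choice, implying λ ≈ −0.1 and a half-life of roughly 7 bars:

```python
import numpy as np

def half_life(y):
    """Regress dy(t) on y(t-1) to estimate lambda; half-life = -log(2)/lambda."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    X = np.column_stack([y[:-1], np.ones(len(y) - 1)])  # slope plus intercept
    lam = np.linalg.lstsq(X, dy, rcond=None)[0][0]
    return -np.log(2) / lam

rng = np.random.default_rng(1)
y = np.zeros(5_000)
for t in range(1, len(y)):            # AR(1) with coefficient 0.9
    y[t] = 0.9 * y[t - 1] + rng.standard_normal()

print(round(half_life(y), 1))  # roughly 7 bars
```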


Mean Reversion
Trading Mean Reversion

Once we determine that a price series is mean reverting, and that the half-life of mean reversion is short enough for our trading horizon, we can easily trade this price series profitably using a simple linear strategy: compute the normalized deviation of the price from its moving average (this deviation divided by the moving standard deviation of the price), and maintain the number of units in this asset negatively proportional to this normalized deviation

The look-back for the moving average and standard deviation can be set to equal the half-life

You might wonder why it is necessary to use a moving average or standard deviation for a mean-reverting strategy at all. If a price series is stationary, shouldn't its mean and standard deviation be fixed forever? Though we usually assume the mean of a price series to be fixed, in practice it may change slowly, e.g. due to changes in the economy or corporate management. As for the standard deviation, recall that the Hurst exponent equation implies that even a stationary price series with 0 < H < 0.5 has a variance that increases with time, though not as rapidly as for a geometric random walk. So it is appropriate to use a moving average and standard deviation to allow ourselves to adapt to an ever-evolving mean and standard deviation, and also to capture profit more quickly
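A sketch of this linear strategy with pandas, run on a synthetic mean-reverting series. The reversion strength, the look-back of 5 and the mean level of 100 are illustrative assumptions, not a tested strategy:

```python
import numpy as np
import pandas as pd

def linear_mr_pnl(prices, lookback):
    """Hold -zscore units, where zscore = (price - moving mean) / moving std.
    Returns the per-bar P&L series of the strategy."""
    prices = pd.Series(prices, dtype=float)
    zscore = (prices - prices.rolling(lookback).mean()) / prices.rolling(lookback).std()
    positions = -zscore                           # units held, negatively proportional
    return (positions.shift(1) * prices.diff()).dropna()

rng = np.random.default_rng(3)
p = np.empty(2_000)
p[0] = 100.0
for t in range(1, len(p)):                        # strong reversion toward 100
    p[t] = p[t - 1] + 0.2 * (100.0 - p[t - 1]) + rng.standard_normal()

pnl = linear_mr_pnl(p, lookback=5)
print(pnl.sum() > 0)  # True: profitable on this synthetic series
```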


Cointegration
Cointegrated Augmented Dickey Fuller Test

Unfortunately, most financial price series are not stationary or mean reverting. But, fortunately, we are not confined
to trading those prefabricated financial price series: We can proactively create a portfolio of individual price series
so that the market value (or price) series of this portfolio is stationary. This is the notion of cointegration: If we can
find a stationary linear combination of several non-stationary price series, then these price series are called
cointegrated. The most common combination is that of two price series: We are long one asset and simultaneously
short another asset, with an appropriate allocation of capital to each asset.

Why do we need any new tests for the stationarity of the portfolio price series, when we already have the trusty
ADF and Variance Ratio tests for stationarity? The answer is that given a number of price series, we do not know a
priori what hedge ratios we should use to combine them to form a stationary portfolio

Just because a set of price series is cointegrating does not mean that any random linear combination of them will
form a stationary portfolio. But pursuing this line of thought further, what if we first determine the optimal hedge
ratio by running a linear regression fit between two price series, use this hedge ratio to form a portfolio, and then
finally run a stationarity test on this portfolio price series? This is essentially what the CADF test does

13

Cointegration
Johansen Test for Cointegration

In order to test for cointegration of more than two variables, we need to use the Johansen test. To understand this test, let's generalize the discrete version of the ADF equation to the case where the price variables y(t) are actually vectors representing multiple price series, and the coefficients λ and α are actually matrices. We will assume the drift term βt = 0 for simplicity. Using English and Greek capital letters to represent vectors and matrices respectively, we can rewrite the ADF equation as:

ΔY(t) = ΛY(t − 1) + M + A₁ ΔY(t − 1) + … + A_k ΔY(t − k) + ε_t

Just as in the univariate case, we do not have cointegration if Λ = 0. Let's denote the rank of Λ as r, and the
number of price series n.

The number of independent portfolios that can be formed by various linear combinations of the cointegrating price series is equal to r. The Johansen test will calculate r for us in two different ways, both based on the eigenvector decomposition of Λ. One test produces the so-called trace statistic, and the other one produces the eigen statistic. We do not need to worry what they are exactly, since many programming packages will provide critical values for each statistic to allow us to test whether we can reject the null hypothesis that r = 0 (no cointegrating relationship).

As a useful by-product, the eigenvectors found can be used as our hedge ratios for the individual price series to
form a stationary portfolio.


Summary and Questions

Mean reversion means that the change in price is proportional to the difference between the mean price and the
current price. Stationarity means that prices diffuse slower than a geometric random walk

The ADF test is designed to test for mean reversion. The Hurst exponent and Variance Ratio tests are designed to
test for stationarity

Half-life of mean reversion measures how quickly a price series reverts to its mean, and is a good predictor of the
profitability or Sharpe ratio of a mean reverting trading strategy when applied to this price series

One can combine two or more non-stationary price series to form a stationary portfolio; such price series are called cointegrating. Cointegration can be tested with either the CADF test or the Johansen test

The eigenvectors generated from the Johansen test can be used as hedge ratios to form a stationary portfolio out of
the input price series, and the one with the largest eigenvalue is the one with the shortest half-life

Questions?



Sources

Quantitative Trading: How to Build Your Own Algorithmic Trading Business by Ernest Chan

Algorithmic Trading: Winning Strategies and Their Rationale by Ernest Chan

The Mathematics of Money Management: Risk Analysis Techniques for Traders by Ralph Vince



Algorithmic Trading
Session 6
Trade Signal Generation IV
Momentum Strategies
Oliver Steinki, CFA, FRM

Outline

Introduction

What is Momentum?

Tests to Discover Momentum

Interday Momentum Strategies

Intraday Momentum Strategies

Summary and Questions

Sources


Introduction
Where Do We Stand in the Algo Prop Trading Framework?

As we have seen, algorithmic proprietary trading strategies can be broken down into
three subsequent steps: Signal Generation, Trade Implementation and Performance
Analysis

The first step, Signal Generation, defines when and how to trade. For example, in a
moving average strategy, the crossing of the shorter running moving average over the
longer running moving average triggers when to trade. Besides long and short, the signal
can also be neutral (do nothing). Using moving averages to generate long/short trading
signals is an example choice of how to trade

Sessions 3-6 deal with the question of deciding when and how to trade

[Framework diagram: SIGNAL GENERATION (decide when and how to trade) → TRADE IMPLEMENTATION (size and execute orders, incl. exit) → PERFORMANCE ANALYSIS (return, risk and efficiency ratios)]

Session 3: Finding Suitable Trading Strategies and Avoiding Common Pitfalls
Session 4: Backtesting
Session 5: Mean Reversion Strategies
Today's Session 6: Momentum Strategies


Introduction
Mean Reversion vs. Momentum

Trading strategies can be profitable only if securities prices are either mean reverting or trending.
Otherwise, they are random walking, and trading will be futile. If you believe that prices are mean reverting and
that they are currently low relative to some reference price, you should buy now and plan to sell higher later.
However, if you believe the prices are trending and that they are currently low, you should (short) sell now and plan
to buy at an even lower price later. The opposite is true if you believe prices are high

Academic research has indicated that stock prices are on average very close to random walking. However,
this does not mean that under certain special conditions, they cannot exhibit some degree of mean reversion or
trending behaviour. Furthermore, at any given time, stock prices can be both mean reverting and trending
depending on the time horizon you are interested in. Constructing a trading strategy is essentially a matter of
determining if the prices under certain conditions and for a certain time horizon will be mean reverting or
trending, and what the initial reference price should be at any given time

Momentum
What is Momentum?

We will separate between interday and intraday momentum strategies

There are four main causes of momentum:

For futures, the persistence of roll returns, especially of their signs

The slow diffusion, analysis and acceptance of new information

The forced sales or purchases of assets of various fund types

Market manipulation by high frequency traders

Academics sometimes classify momentum in asset prices into two types: time series momentum and cross-sectional momentum. Time series momentum is very simple and intuitive: past returns of a price series are
positively correlated with future returns. Cross-sectional momentum refers to the relative performance of a price
series in relation to other price series: a price series whose returns have outperformed those of other price series will likely
keep doing so in the future, and vice versa

Momentum Tests
How Can We Discover Momentum?

Time series momentum of a price series means that past returns are positively correlated with future returns. It
follows that we can just calculate the correlation coefficient of the returns together with its p-value

One feature of computing the correlation coefficient is that we have to pick a specific time lag for the returns.
Sometimes, the most positive correlations are between returns of different lags. For example, 1-day returns might
show negative correlations, while the correlation between past 20-day return with the future 40-day return might
be very positive. We should find the optimal pair of past and future periods that gives the highest positive
correlation and use that as our look-back and holding period for our momentum strategy
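Scanning one (look-back, holding) pair can be sketched with SciPy. Note that the overlapping return windows inflate the apparent significance somewhat, so the p-value should be treated with care; the AR(1) return simulation below is an illustrative construction:

```python
import numpy as np
from scipy.stats import pearsonr

def lagged_return_correlation(prices, lookback, holding):
    """Correlation between past `lookback`-bar returns and the
    subsequent `holding`-bar returns, with its p-value."""
    prices = np.asarray(prices, dtype=float)
    past = prices[lookback:-holding] / prices[:-lookback - holding] - 1
    future = prices[lookback + holding:] / prices[lookback:-holding] - 1
    return pearsonr(past, future)

rng = np.random.default_rng(5)
r = np.zeros(3_000)
for t in range(1, len(r)):            # returns with positive serial correlation
    r[t] = 0.4 * r[t - 1] + 0.01 * rng.standard_normal()
prices = 100 * np.exp(np.cumsum(r))

corr, pvalue = lagged_return_correlation(prices, lookback=5, holding=5)
print(corr > 0 and pvalue < 0.05)  # True: momentum detectable at this horizon
```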

Alternatively, we can also test for the correlations between the signs of past and future returns. This is appropriate
when all we want to know is that an up move will be followed by another up move, and we don't care whether the
magnitudes of the moves are similar

If we are interested instead in finding out whether there is long-term trending behaviour in the time series without
regard to specific time frames, we can calculate the Hurst exponent together with the Variance Ratio test to rule
out the null hypothesis of a random walk

Momentum Tests
Augmented Dickey Fuller Test

If a price series is trending, then the current price level will tell us something about what the price's next move will
be: if the price level is higher than the previous price level, the next move should also be an upward move; if the
price level is lower than the previous price level, the next move should also be a downward move. The ADF test is
based on just this observation. We can describe the price changes using a linear model:

Δy(t) = λ y(t − 1) + μ + β t + α₁ Δy(t − 1) + … + α_k Δy(t − k) + ε_t

The ADF test will find out whether λ = 0. If the hypothesis λ = 0 can be rejected, it means that the next move of the
asset depends on the current price level and is therefore not random

The statisticians Dickey and Fuller described the distribution of this test statistic and tabulated the critical values for
us, so we can look up for any value of λ/SE(λ) whether the hypothesis can be rejected at, say, the 95 percent
probability level

Since we expect momentum, λ/SE(λ) has to be positive, and it has to be more positive than the critical value for
the hypothesis to be rejected. The critical values themselves depend on the sample size and on whether we assume that
the price series has a non-zero mean (−μ/λ) or a steady drift (−βt/λ). Most practitioners assume the drift term to be
zero

Momentum Tests
Hurst Exponent

Intuitively speaking, a trending price series means that prices diffuse from their initial value faster than a geometric
random walk would. Mathematically, we can determine the nature of the price series by measuring this speed of
diffusion. The speed of diffusion can be characterized by the variance

Var(τ) = ⟨|z(t + τ) − z(t)|²⟩

where z is the log price (z = log(y)), τ is an arbitrary time lag, and ⟨…⟩ denotes an average over all t. For a geometric
random walk, we know that

⟨|z(t + τ) − z(t)|²⟩ ∝ τ

The ∝ means that this relationship turns into an equality with some proportionality constant for large τ, but it
may deviate from a straight line for small τ. But if the (log) price series is mean reverting or trending (i.e., has
positive correlations between sequential price moves), the last equation won't hold. Instead, we can write:

⟨|z(t + τ) − z(t)|²⟩ ~ τ^(2H)

This is the definition of the Hurst exponent H. For a price series exhibiting geometric random walk, H = 0.5. But
for a mean-reverting series, H < 0.5, and for a trending series, H > 0.5. As H decreases toward zero, the price
series is more mean reverting, and as H increases toward 1, the price series is increasingly trending; thus, H serves
also as an indicator for the degree of mean reversion or trendiness

Momentum Tests
Variance Ratio Test

Because of finite sample size, we need to know the statistical significance of an estimated value of H to be sure
whether we can reject the null hypothesis that H is really 0.5. This hypothesis test is provided by the Variance Ratio
test. It simply tests whether

Var(z(t) − z(t − τ)) / (τ · Var(z(t) − z(t − 1)))

is equal to 1. The outputs of this test are h and pValue: h = 1 means rejection of the random walk hypothesis at
the 90 percent confidence level; h = 0 means it may be a random walk. pValue gives the probability that the null
hypothesis (random walk) is true

Interday Momentum Strategies


Time Series Strategies

For a certain future, if we find that the correlation coefficient between a past return of a certain look-back and a
future return of a certain holding period is high, and the p-value is small, we can proceed to see if a profitable
momentum strategy can be found using this set of optimal time periods

Why do many futures returns exhibit serial correlations? And why do these serial correlations occur only at a fairly
long time scale? The explanation often lies in the roll return component of the total return of futures

Example 6.1 (file TU_mom) results in the following return profile between June 1, 2004 and May 11, 2012:

Ann. Return: 1.7%
Sharpe: 1
Max DD: 2.5%

[Equity curve chart omitted]

Interday Momentum Strategies


Extracting Roll Returns

A future's total return is composed of spot return + roll return

If the roll return is negative (contango future curve), buy the underlying asset and short the futures

If the roll return is positive (backwardation future curve), short the underlying asset and buy the futures

This will work as long as the sign of the roll return does not change quickly

However, the logistics of buying and especially shorting the underlying asset are not simple, unless an exchange-traded fund (ETF) exists that holds the asset


Interday Momentum Strategies


News Sentiment as a Fundamental Factor

With the advent of machine-readable, or elementized, newsfeeds, it is now possible to programmatically capture
all the news items on a company, not just those that fit neatly into one of the narrow categories such as earnings
announcements or merger and acquisition activities

Natural language processing algorithms are now advanced enough to analyse the textual information contained in
these news items, and assign a sentiment score to each news article that is indicative of its price impact on a stock

The success of these strategies also demonstrates very neatly that the slow diffusion of news is one cause of
momentum


Interday Momentum Strategies


Mutual Funds Asset Fire Sale and Forced Purchases

Mutual funds experiencing large redemptions are likely to reduce or eliminate their existing stock positions. This is
no surprise since mutual funds are typically close to fully invested, with very little cash reserves

Also, funds experiencing large capital inflows tend to increase their existing positions rather than using the
additional capital to invest in other assets, perhaps because new investment ideas do not come by easily

Assets disproportionately held by poorly performing mutual funds facing redemptions therefore experience
negative returns. Furthermore, this asset fire sale by poorly performing mutual funds is contagious

Hence, momentum in both directions for the commonly held assets can be exploited


Interday Momentum Strategies


Summary

Futures exhibit time series momentum mainly because of the persistence of the sign of roll returns

If you are able to find an instrument (e.g., an ETF or another future) that cointegrates or correlates with the spot
price or return of a commodity, you can extract the roll return of the commodity future by shorting that
instrument during backwardation, or buying that instrument during contango

Profitable strategies on news sentiment momentum show that the slow diffusion of news is one cause for stock
price momentum

The contagion of forced asset sales and purchases among mutual funds contributes to stock price momentum


Intraday Momentum Strategies


Opening Gap Strategy

Gap measures the difference in opening price relative to last closing price. For example a stock closing on Friday at
USD 100 and reopening on Monday at USD 95 has an opening gap of -5 USD

What's special about the overnight or weekend gap that sometimes triggers momentum? The extended period
without any trading means that the opening price is often quite different from the closing price

Hence stop orders set at different prices may get triggered all at once at the open. The execution of these stop
orders often leads to momentum because a cascading effect may trigger stop orders placed further away from the
open price as well

Also, the gap could result from significant events that occurred overnight
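A minimal sketch of a gap rule: trade in the direction of the gap when the open deviates from the previous close by more than a volatility-scaled threshold. The one-stdev entry threshold and the function name are illustrative assumptions, not a tested strategy:

```python
def opening_gap_signal(prev_close, prev_stdev, today_open, entry_z=1.0):
    """Hypothetical gap rule: follow the gap when it exceeds
    entry_z previous-period standard deviations."""
    gap = (today_open - prev_close) / prev_stdev
    if gap > entry_z:
        return 1    # gap up: go long, expecting momentum to continue
    if gap < -entry_z:
        return -1   # gap down: go short
    return 0        # gap too small: no trade

print(opening_gap_signal(prev_close=100.0, prev_stdev=2.0, today_open=95.0))   # -1
print(opening_gap_signal(prev_close=100.0, prev_stdev=2.0, today_open=101.0))  # 0
```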


Intraday Momentum Strategies


News Driven Momentum Strategy

As momentum is driven by the relatively slow diffusion of news, one can benefit from the first few days, hours, or
even seconds after a newsworthy event. For example, ECB press conferences are aired with a 15-second delay, and only
subscribers to special services can watch them live to profit from any significant announcements

Earnings announcements are another example of news-driven intraday momentum. It is surprising that this
post-announcement drift still persists, although its duration has shortened

Earnings guidance, analyst ratings and recommendation changes on a stock specific level as well as macroeconomic
indicators such as housing and unemployment numbers, consumer confidence or purchasing manager indices are
other examples of momentum creating news


Intraday Momentum Strategies


Index Composition and Leveraged ETFs

Rebalancings of major indices result in intraday momentum due to ETF trading activity that mirrors these changes in
index composition. For example, if a stock joins the MSCI World index, all ETFs as well as funds benchmarked to
this index have to buy this stock. This momentum usually lasts only for a few hours on the announcement as well as
the implementation date, as there are now quite a few players trying to anticipate and front-run index composition
changes

The sponsors (issuers) of leveraged ETFs experience a similar issue, which can create momentum. Let's assume
there is a three times leveraged ETF mirroring a basket of stocks. If a constituent stock goes up, the ETF sponsor
has to buy it to hold the leverage ratio constant


Intraday Momentum Strategies


Summary

Breakout momentum strategies involve a price exceeding a trading range. The opening gap strategy is a breakout
strategy that works for some futures and currencies. Breakout momentum may be caused by the triggering of stop
orders

Many kinds of corporate and macroeconomic news induce short-term price momentum

Index composition changes induce momentum in stocks that are added to or deleted from the index

Rebalancing of leveraged ETFs near the market close causes momentum in the underlying index in the same
direction as the market return from the previous close


Summary and Questions

Time-series momentum refers to the positive correlation of a price series' past and future returns

Cross-sectional momentum refers to the positive correlation of a price series' past and future relative returns, in
relation to those of other price series in a portfolio

Lagged correlation of prices or returns, the ADF test, the Hurst exponent and Variance Ratio test can be used to
test for momentum

Different strategies apply to interday and intraday techniques. More and more sophisticated traders reduce the
time available to exploit the momentum created by significant news, i.e. the speed of news diffusion is increasing

Questions?



Sources

Quantitative Trading: How to Build Your Own Algorithmic Trading Business by Ernest Chan

Algorithmic Trading: Winning Strategies and Their Rationale by Ernest Chan

The Mathematics of Money Management: Risk Analysis Techniques for Traders by Ralph Vince



Algorithmic Trading
Session 7
Trade Implementation I
Orders
Oliver Steinki, CFA, FRM

Outline

Introduction

Market Orders

Limit Orders

Optional Order Instructions

Other Order Types

Summary and Questions

Sources


Introduction
Where Do We Stand in the Algo Prop Trading Framework?

As we have seen, algorithmic proprietary trading strategies can be broken down into
three subsequent steps: Signal Generation, Trade Implementation and Performance
Analysis

Trade Implementation happens after the Signal Generation step has triggered a buy or
sell signal. It determines how the order is structured, e.g. position size and limit levels. In
advanced strategies, it can also take into account cross correlation with other portfolio
holdings and potential portfolio constraints

Sessions 7-9 deal with the question of sizing and executing trades, incl. exit

[Framework diagram: SIGNAL GENERATION (decide when and how to trade) → TRADE IMPLEMENTATION (size and execute orders, incl. exit) → PERFORMANCE ANALYSIS (return, risk and efficiency ratios)]

Today's Session 7: Order Types
Session 8: Algorithmic Execution
Session 9: Transaction Costs


Introduction
Orders

Orders represent instructions on how to execute trades. They allow traders to communicate their detailed
requirements, from the type of order chosen to a wide range of additional conditions and directions.

The two main order types are market and limit orders. They are exact opposites in terms of liquidity provision:
market orders are filled immediately at the best available price, but demand liquidity. Limit orders provide
liquidity and act as standing orders with inbuilt price limits, which must not be breached

Several conditions might be applied to each order to control additional execution features:

How and when the order becomes active

The duration of its validity

Whether it may be partially filled

Whether it should be routed to other venues or linked to other orders

Market Orders

Market orders are instructions to trade a given quantity at the best price possible. The focus is on completing the
order with no specific price limit, so the main risk is the uncertainty of the ultimate execution price

Market orders demand liquidity: a buy market order will try to execute at the ask price, whilst a sell order will try
to execute at the bid price. The immediate cost of this is half the bid-ask spread

For orders that are larger than the current best bid or ask size, most venues allow market orders to walk the
book. If they cannot fill completely from the top level of the order book, they then progress to the next price
level of the book. If the order still cannot be completed, some venues cancel it (e.g. LSE), whereas others leave
the residual market order in the book (e.g. Euronext)

Hence, the execution price achieved with a market order depends on both the current market liquidity and the
size of the market order
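Walking the book can be sketched as filling against successive ask levels and averaging the resulting prices; the order book below is hypothetical:

```python
def fill_market_buy(ask_levels, quantity):
    """Fill a market buy against ascending (price, size) ask levels.
    Returns the filled quantity and the average execution price."""
    filled, cost = 0, 0.0
    for price, size in ask_levels:
        take = min(size, quantity - filled)
        filled += take
        cost += take * price
        if filled == quantity:
            break
    return filled, (cost / filled if filled else None)

book = [(100.10, 300), (100.12, 500), (100.15, 1_000)]  # best ask first
filled, avg_price = fill_market_buy(book, 1_000)
print(filled, round(avg_price, 2))  # 1000 100.12
```

Note how the average price (100.12) is worse than the best ask (100.10): the larger the order relative to the top-of-book size, the deeper it walks and the worse the average fill.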

Limit Orders

Limit orders are instructions to trade a given quantity at a specified price or better. A buy order must execute at
or below the limit price, whereas a sell order must execute at or above it.

Limit orders will try to fill as much of the order as they can, without breaking the price limit. If there are no
orders that match at an acceptable price, then the order is left in place on the order book until it expires or is
cancelled. If the order is partially executed, the residual quantity will remain on the order book. This provides
liquidity as other traders can see that someone is willing to trade at a given price and quantity

Hence, limit orders are quite versatile. They can be used with an aggressive limit price, in which case they act like
a market order demanding liquidity. The firm price limit gives added price protection compared to a market
order, although there is the risk of failing to execute. Alternatively, limit orders may be issued with more passive
limits, such as when trying to capture gains from future price trends or mean reversion

The main risk with limit orders is the lack of execution certainty. The market price may never reach our limit or
even if it does, it may still not be executed since other orders may have time priority

Orders that are placed "at the market" correspond to buys with a limit at the best bid or sells with a limit at the
best ask. The traders who placed these orders are said to be making the market, whilst the most passively priced
limit orders are termed "behind the market". Their prices mean that they are likely to remain on the order book as
standing limit orders until the best bid or ask price moves closer to their limit

Optional Order Instructions


Overview

Order instructions are conditions that cater for the various requirements of a wide range of trading styles. They
allow control over how and when orders become active, how they are filled, and can even specify where (or to
whom) they are sent

These optional order instructions can be split according to the following criteria, which we will investigate in more
detail on the next slides:

Duration
Auction / Crossing Session
Fill Instructions
Preferencing
Routing
Linking

Optional Order Instructions


Duration

Generally, orders are assumed to be valid from their creation until they are completely filled or cancelled, or until
the end of the current trading day is reached. These orders are known as good for the day (GFD) orders.

Special instructions may be used to alter the duration, such as:


Good till date (GTD)
Good till cancelled (GTC)
Good after time/date (GAT)

A GTD order remains valid until the close of trading on the specified date. Variants of GTD orders include GTW
(good this week) and GTM (good this month) orders

A GTC order means the order should stay active until cancelled or until the instrument on which the order was
valid expires (mainly applicable to derivatives)

A GAT order is less common; it becomes active only at a specific date and time in the future

Optional Order Instructions


Auction / Crossing Session Instructions

Auction / Session instructions are used to mark an order for participation in a specific auction, or trading either at
the open, close or intraday

Like normal market orders, auction market orders are intended to maximize the probability of execution, since in
the auction matching they will always have price priority, whereas auction limit orders will only be executed if
the auction price equals or is better than their limit price

On-Open orders may be submitted during the pre-open period for participation in the opening auction. If the
matching volume is sufficient, then MOO (market on open) orders will execute at the auction price. For any
unfilled MOO orders, some venues convert them to limit orders at the auction prices, whilst other venues just
cancel them. MOC (market on close) orders will execute at the close price, given sufficient matching volume.
Any unfilled orders will usually be cancelled

LOO (limit on open) and LOC (limit on close) orders will only execute given sufficient matching volume and
an auction price equal or better than the specified limit

Optional Order Instructions


Fill Instructions

Fill instructions were traditionally used to minimize the clearance and settlement costs of trades. Nowadays, they
are most often used as parts of liquidity seeking strategies.

There is a wide range of fill instructions available:


Immediate or Cancel (IOC): This order means any portion of the order that cannot execute immediately
against existing orders will be cancelled
Fill or Kill (FOK): A FOK order ensures that the order either executes immediately in full or not at all. It is
basically an IOC order combined with a 100% completion requirement.
All or None (AON): The AON instruction enforces a 100% completion requirement on an order. Unlike
FOK, there is no requirement for immediacy
Minimum Volume (MV): An MV instruction ensures that the order only fills if at least the specified minimum quantity can be executed.
Must be Filled (MBF): Unlike the other fill instructions, failure to fully execute is not an option for MBF
orders. MBF orders are hence treated as market orders, and as they are often used to cover corresponding
option positions, short sale constraints usually don't apply


Optional Order Instructions


Preferencing and Directed Instructions

Order preferencing and directed instructions permit bilateral trading, since they direct orders to a specific broker
or dealer. One specificity with both directed and preferenced orders is that they bypass any execution priority
rules. So other orders which might have time priority will lose out to the directed market maker

Preferenced orders prioritise a specific market maker. On some exchanges they behave like FOK orders, being
cancelled if the chosen market maker does not quote at the best price

Directed orders are routed to a specific market maker or dealer who may accept or reject them. On some
exchanges, market makers offer price improvement for directed orders


Optional Order Instructions


Routing Instructions

Execution venues often cater for additional routing instructions for orders, thus providing a gateway
service that allows orders to be routed to other venues as well as handled locally. This is especially
applicable to the order protection rule in the US, which requires a venue or broker to pass on orders elsewhere
to achieve the best prices.

The following routing instructions are available:

Do not route: This instruction ensures that the execution venue will handle the order locally and not route it
to another venue
Directed routing: This instruction provides an associated destination to which the order should be routed.
Effectively, the host venue acts as a gateway, routing such orders on to their chosen destination. The
advantage of this approach is that orders may be routed to venues for which we do not have a membership,
although the host venue will levy routing fees for such orders
Inter Market Sweeps: Such an order sweeps, or walks down, the order book at a single venue. It will not be
routed to other venues and is hence filled where we specify. This gives better control over how and where our
orders are placed, which is important if we have similar orders across multiple venues


Optional Order Instructions


Linking Instructions

Linking instructions provide a means of introducing dependencies between orders

A one cancels other (OCO) instruction may be used to make two orders mutually exclusive, often to close
out positions. For example, we could place a sell order and a stop loss order for a given instrument, linked
by an OCO instruction. If one of the two orders is filled, the other is automatically cancelled
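The OCO dependency can be sketched with a few lines of bookkeeping. The `OcoPair` class and its fill callback below are hypothetical, not part of any exchange API:

```python
# Minimal one-cancels-other (OCO) bookkeeping: once either linked
# order fills, the remaining working order is cancelled.

class OcoPair:
    def __init__(self, order_a: str, order_b: str):
        self.status = {order_a: "working", order_b: "working"}

    def on_fill(self, filled_order: str) -> None:
        self.status[filled_order] = "filled"
        for oid in self.status:
            if oid != filled_order and self.status[oid] == "working":
                self.status[oid] = "cancelled"

pair = OcoPair("limit-sell", "stop-loss-sell")
pair.on_fill("limit-sell")
print(pair.status)  # {'limit-sell': 'filled', 'stop-loss-sell': 'cancelled'}
```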

A one triggers other (OTO) instruction links a supplementary order to a main order. For example, a stop order
becomes active only once the corresponding buy order is filled


Other Order Types


Overview

Conditional orders base their validity on a set condition, often the market price. Only when the condition is met
will it result in an actual order being placed. Thus, stops and contingent orders only become active when a
threshold price is breached

Trailing stop orders are similar, although they use a dynamic threshold

Contingent, or if-touched, orders are similar, yet the opposite of stop orders. Again there are two types:
market-if-touched and limit-if-touched orders.

Hidden, undisclosed or non-displayed orders allow traders to participate in the market place without giving away
their position / trade size


Other Order Types


Stop Orders

Stop orders are contingent on an activation or stop price. Once the market price reaches or passes this point, they
are transformed into active market orders. In continuous trading, the price being tracked is generally the last
traded price, whilst in an auction it is usually the clearing price. Activation occurs for buys when the market price
hits the stop price or moves above, whilst for sells it is when the market price hits or drops below the stop price

Stop orders are often referred to as stop loss orders. As sells, they are generally used as a safety net to protect
profits, closing out long positions should the market move against us and drop below the stop level; stop buys
protect short positions in the same way.

Note that the market order generated by a stop guarantees nothing about the price actually achieved. In the
event of very significant news, execution may therefore be far worse than the stop level, since stops are only
activated once the market price has become unfavorable.

Stop limit orders replace the market order with a limit order once the stop is reached. As usual, while these
orders offer price protection, they do not guarantee execution

Whilst stop orders are a useful tool, they can also have considerable market impact, in particular in times of
market turbulence. Upon activation, all stop orders tend to accelerate the price trend that triggered them


Other Order Types


Trailing Stop and Contingent Orders

A stop order uses an absolute price, whereas for a trailing stop order the stop price follows (or trails) favourable
moves in the market price

The trailing offset is either specified as an absolute amount or as a percentage

For a trailing stop sell order, as the market price rises, the trailing stop price rises by a similar amount.
However, when the market price falls, the stop price does not change. For a trailing stop buy order, as the market
price drops, the trailing stop price drops by a similar amount. Again, if the market price rises, the stop price
level doesn't change
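The ratchet behaviour for a trailing stop sell can be sketched as follows (illustrative function, absolute trailing offset assumed):

```python
# Trailing stop for a sell order: the stop rises with the market but
# never falls back. `trail` is an absolute offset, e.g. 0.50.

def update_trailing_stop_sell(market_price: float,
                              stop_price: float,
                              trail: float) -> float:
    candidate = market_price - trail
    return max(stop_price, candidate)  # ratchets upward only

# Market rallies from 100 to 103, then falls back.
stop = 100.0 - 0.5
for px in [100.0, 101.0, 103.0, 102.0, 101.0]:
    stop = update_trailing_stop_sell(px, stop, 0.5)
print(stop)  # 102.5: set at the 103 high, unchanged on the way down
```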

Contingent, or if-touched orders are effectively the opposite of stop orders. For FX, they are often called entry
orders as they are mainly used to enter positions

As with stops, there are two types, limit-if-touched and market-if-touched. The main difference to a normal
market or limit order is that it is hidden from the order book until activated


Other Order Types


Hidden Orders

Hidden, undisclosed or non-displayed orders are used by traders who do not want to show their trade size.

Hidden orders do not appear on the order book and are not allowed on all exchanges. Where allowed, they are
usually given lower priority than normal orders with the same price limit, even if they were entered earlier.

An iceberg order consists of a small visible peak and a significantly larger hidden volume. The peak (or display
volume) is customizable, although some venues require minimum sizes. The visible order cannot be distinguished
from a normal order. Each time the visible order is fully executed, the next peak is displayed. Hence, each
displayed order has normal time priority within the order book, whilst the hidden volume has only price priority
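The peak replenishment mechanism can be sketched as a simple generator (names and quantities are illustrative):

```python
# Iceberg order bookkeeping: only the peak is visible; each time the
# visible slice fully executes, the next slice is displayed.

def iceberg_slices(total_qty: int, peak: int):
    """Yield the successive visible order sizes for an iceberg order."""
    remaining = total_qty
    while remaining > 0:
        visible = min(peak, remaining)
        yield visible
        remaining -= visible

print(list(iceberg_slices(1000, 300)))  # [300, 300, 300, 100]
```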


Summary and Questions

The two main order types are limit and market orders

Additional conditions can be applied to each order to control additional execution features:

How and when the order becomes active

The duration of its validity

Whether it may be partially filled

Whether it should be routed to other venues or linked to other orders

Conditional, contingent, and hidden orders can be used for more advanced entry and exit rules

Questions?

Contact Details: osteinki@faculty.ie.edu or +41 76 228 2794


Sources

Algorithmic Trading and Direct Market Access by Barry Johnson



Algorithmic Trading
Session 8
Trade Implementation II
Algorithmic Execution
Oliver Steinki, CFA, FRM

Outline

Introduction

Algorithmic Execution

Market Impact Driven Algorithms

Cost Driven Algorithms

Opportunistic Algorithms

Summary and Questions

Sources


Introduction
Where Do We Stand in the Algo Prop Trading Framework?
[Framework diagram: SIGNAL GENERATION (decide when and how to trade) → TRADE IMPLEMENTATION (size and execute orders, incl. exit) → PERFORMANCE ANALYSIS (return, risk and efficiency ratios)]

As we have seen, algorithmic proprietary trading strategies can be broken down into
three subsequent steps: Signal Generation, Trade Implementation and Performance
Analysis

Trade Implementation happens after the Signal Generation step has triggered a buy or
sell signal. It determines how the order is structured, e.g. position size and limit levels. In
advanced strategies, it can also take into account cross correlation with other portfolio
holdings and potential portfolio constraints

Sessions 7-9 deal with the question of sizing and executing trades, incl. exit

Session 7: Order Types
Today's Session 8: Algorithmic Execution
Session 9: Transaction Costs

Introduction
Review: Algorithmic Trading - Areas of Applications

Algorithmic Execution: Use algorithms to search/discover fragmented liquidity pools to optimize execution
via complex / high frequency order routing strategies. Profit comes from improved prices and reduced market
impact
Example: Order routing to dark pools to improve execution price, % of volume orders to reduce market
impact
This is what we discuss today

Market Making: Supply the market with bid-ask quotes for financial securities. Ensure the book respects
certain constraints such as delta profile or net position. Profit comes mainly from clients' trading activity, hence
the bid-ask spread. Also known as flow trading or sell side. Main risk comes from market moves against the position if
the net position/Greeks are not perfectly hedged
Example: A broker offers to sell a financial security at the ask and to buy at the bid to earn the spread

Trade Signal Generation: Design proprietary strategies to generate profits by betting on market directions.
Profit comes from winning trades. Also known as proprietary trading or buy side. Main risk is that the market does
not move as expected/backtested and the strategy becomes unprofitable
Example: Buy/sell security when moving averages cross each other
This is what we discuss in general

Introduction
Algorithmic Execution

An algorithm is a set of instructions to accomplish a given task. In the context of algorithmic execution, this means
that a trading algorithm simply defines the steps required to execute an order in specific ways

We can split execution algorithms into three categories:

Impact Driven

Cost Driven

Opportunistic

Impact driven algorithms try to minimize the overall market impact, i.e. they try to reduce the effect trading
has on the asset's price. For example, larger orders will be split into smaller ones, traded over a longer time
period.

Cost driven algorithms aim to reduce overall trading costs, so they have to incorporate market impact, timing
risk and even price trends. Implementation shortfall is therefore an important performance benchmark for this
kind of algorithm.

Opportunistic algorithms take advantage whenever they perceive favorable market conditions. These algorithms
are generally price or liquidity driven or involve pair/spread trading

Algorithmic Execution
Generic Algorithm Parameters

Execution algorithms are controlled by a range of parameters, which provide the algorithm with limits or
guidelines. These parameters can be split into generic and specific. E.g., specific parameters are used to define how
much a volume weighted average price (VWAP) order may deviate from the historical volume profile. Generic
parameters represent common details of execution algorithms and are listed below:

Start / End Times: Execution algorithms usually accept specific start and end times, instead of just
duration instructions like GTC. Some algorithms, especially the cost driven ones, even derive their own
optimal trading horizon. If these criteria are not entered, default values are used, such as end of day as the end time

Duration: Some vendors do not work with end times and use a duration parameter instead

Execution Style: This can be categorized into aggressive, passive or neutral trading and is a question of
execution certainty vs. price certainty. The more aggressive, the higher the execution certainty at the
expense of cost / performance

Limit: This feature offers price certainty like a normal limit order for algorithms without inbuilt price limits

Volume: This feature tells the algorithm to trade a certain percentage of market volume (either a minimum
or a maximum)

Auction: This feature specifies whether the algorithm is allowed to participate in auctions and, if so, at
what percentage
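The generic parameters listed above can be gathered into a simple container; the field names and defaults below are illustrative, not any vendor's API:

```python
# Generic execution-algorithm parameters as a plain container.
# Field names and defaults are invented for this sketch.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AlgoParams:
    start_time: Optional[str] = None        # default: start immediately
    end_time: Optional[str] = "close"       # default: end of day
    duration_min: Optional[int] = None      # some vendors use this instead
    style: str = "neutral"                  # "aggressive" | "passive" | "neutral"
    limit_price: Optional[float] = None     # optional overall price limit
    min_pct_volume: Optional[float] = None  # min participation, e.g. 0.05
    max_pct_volume: Optional[float] = None  # max participation, e.g. 0.20
    auction_pct: float = 0.0                # 0 disables auction participation

params = AlgoParams(style="passive", max_pct_volume=0.15)
print(params.style, params.end_time)  # passive close
```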

Impact Driven Algorithms


Overview

Impact driven algorithms evolved from simple order slicing strategies. By splitting larger orders into smaller child
orders, they try to reduce the impact trading has on the asset's price, and so minimize overall market
impact costs

The average prices based algorithms, namely time weighted average price (TWAP) and volume weighted average
price (VWAP), represent the first generation of impact driven algorithms. Although intended to minimize impact
costs, their main focus is their respective benchmarks. These are predominantly schedule based algorithms and so
they track statistically created trajectories with little or no sensitivity to conditions such as price or volume. Their
aim is to completely execute the order within the given timeframe, irrespective of market conditions

The natural progression from these static first generation algorithms has been the creation of more dynamic
methods, which resulted in a gradual shift to more opportunistic methods

Impact Driven Algorithms


Time Weighted Average Price

A time weighted average price (TWAP) order is benchmarked to the average price, which reflects how the asset's
market price has evolved over time. Therefore, execution algorithms that attempt to match this benchmark are
usually based on a uniform time-based schedule

The basic mechanism behind a TWAP order is time slicing. For example, an order to buy 1000 shares
could be split into 10 child orders of 100 shares each, one every 5 minutes. Hence, the trading pattern is very
uniform and independent of price and volume

TWAP orders can suffer poor execution due to their rigid adherence to the time schedule, especially if the price
becomes unfavorable or the available liquidity suddenly drops

Alternatively, we can use the linear nature of the target completion profile to adopt a more flexible trading
approach. At any given time, we can determine the target quantity the order should have achieved, e.g. 50% of
the order should be completed after 25 min. in the above mentioned example. So instead of following a strictly
deterministic approach, we can adopt a slightly more random one by comparing the actual progress to the
planned schedule. This allows us to vary trade frequency and size and makes the TWAP schedule harder
for other market participants to spot
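The basic time-slicing mechanism can be sketched in a few lines (a toy schedule generator using the slide's 1,000-share example; a real implementation would add the randomization discussed above):

```python
# TWAP slicing sketch: split a parent order into equal child orders
# on a uniform time grid. Names are illustrative.

def twap_schedule(total_qty: int, n_slices: int, interval_min: int):
    """Return (minutes-from-start, child quantity) pairs."""
    base, rem = divmod(total_qty, n_slices)
    schedule = []
    for i in range(n_slices):
        qty = base + (1 if i < rem else 0)  # spread any remainder
        schedule.append((i * interval_min, qty))
    return schedule

# 1,000 shares, 10 child orders, one every 5 minutes.
sched = twap_schedule(1000, 10, 5)
print(sched[0], sched[-1])  # (0, 100) (45, 100)
```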

Impact Driven Algorithms


Volume Weighted Average Price

The volume weighted average price (VWAP) benchmark for a given time span is the total traded value divided by
the total traded quantity. As a benchmark, it rapidly became very popular as it is easy to calculate and a fair
reflection of market conditions

The basic mechanism behind a VWAP order is based on the overall turnover divided by the total market volume.
Given n trades in a day, each with a specific price p_i and size v_i, VWAP is calculated as:

VWAP = (p_1·v_1 + p_2·v_2 + ... + p_n·v_n) / (v_1 + v_2 + ... + v_n) = Σ p_i·v_i / Σ v_i

While TWAP orders are simply a matter of trading regularly throughout the day, VWAP orders also need to trade
in the correct proportions. As we do not know the trading volume beforehand, we do not know these proportions
in advance. A common approach to mitigate this problem is the use of historical volume profiles of the asset as a
proxy. This is used as the basis for most VWAP execution algorithms

Hence, throughout the day, the execution algorithm just needs to place sufficient orders in each interval to keep
up with the target execution profile based on historical data
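Both the benchmark calculation and the profile-based schedule can be sketched as follows (illustrative names and numbers; a real VWAP engine would also handle lot rounding and schedule catch-up):

```python
# VWAP computed from (price, size) trades, matching the formula above,
# plus a target schedule from a historical intraday volume profile
# (the proxy most VWAP algorithms rely on).

def vwap(trades):
    """VWAP = sum(p * v) / sum(v) over all trades."""
    value = sum(p * v for p, v in trades)
    volume = sum(v for _, v in trades)
    return value / volume

def vwap_schedule(total_qty, profile):
    """Child-order quantities proportional to the volume profile."""
    return [round(total_qty * frac) for frac in profile]

trades = [(10.00, 500), (10.10, 300), (9.95, 200)]
print(round(vwap(trades), 4))  # 10.02

# Historical profile: fractions of daily volume per interval, summing to 1.
print(vwap_schedule(1000, [0.2, 0.15, 0.1, 0.1, 0.15, 0.3]))
# [200, 150, 100, 100, 150, 300]
```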

Impact Driven Algorithms


Percentage of Volume

Percentage of volume (POV) algorithms are "go along" orders that trade in line with market volume. They are
also known as volume inline, participation, target volume or follow algorithms. For example, a POV order of 10%
for a stock with 1m shares daily turnover should result in an execution of 100k shares

The basic mechanism behind POV is a dynamic adjustment based on market volume and hence different to TWAP
and VWAP orders, which follow predetermined trading schedules. The algorithm tries to participate in the
market in proportion to the market volume. Note that there is no relationship between the trading pattern and the
market price, the target trade size is solely driven by market volume

The POV order is similar to the VWAP if historical and actual trading patterns are similar. Although POV orders
are more dynamic than VWAP orders, they still cannot predict market volume

A common risk factor of POV orders is the potential to drive prices up or down if many traders have POV orders
on. One way to protect against such situations is to apply firm price limits to POV orders, as they usually come
without inbuilt price sensitivity
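The participation mechanism can be sketched as a catch-up calculation (illustrative function; real POV algorithms also smooth their own volume out of the market total):

```python
# Percentage-of-volume sketch: the next child order tops our executed
# quantity up to the target participation rate of observed market volume.

def pov_child_qty(market_volume: int, executed: int, rate: float) -> int:
    """Quantity needed to stay at `rate` of total market volume."""
    target = int(market_volume * rate)
    return max(0, target - executed)

# 10% POV: market has traded 50,000 shares, we have done 4,200 so far.
print(pov_child_qty(50_000, 4_200, 0.10))  # 800
```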


Impact Driven Algorithms


Minimal Impact

Minimal impact algorithms represent the next logical progression from VWAP and POV execution algorithms.
Rather than seeking to track a market driven benchmark, they focus solely on minimizing market impact. Signaling
risk is an important consideration for the algorithms we have seen so far: it describes the risk of potential losses
due to information that our trading pattern relays to other market participants, and it depends on our trading size
and the asset's liquidity

The basic mechanism behind minimal impact algorithms is therefore to route orders to dark pools and brokers'
internal crossing networks, as well as to use hidden order types. As actual hit ratios on some dark pools can be low,
a proportion of the order is often left as a VWAP or POV order on the main venue to ensure a minimum level of
execution

Specific parameters of minimal impact orders therefore include the visibility and must be filled criteria.
While the visibility criterion determines how much of the order is actually displayed on the main venue, the must
be filled criterion determines what percentage of the order has to be filled. As the minimal impact algorithm
focuses on reducing cost at the risk of failing to fully execute, a cost driven algorithm may be more appropriate
if full execution must be guaranteed


Cost Driven Algorithms


Overview

Cost driven algorithms seek to reduce the overall transaction costs. As we have seen in session 3, these are more
than commissions and the bid-ask spread; they also include implicit costs such as market impact and slippage

We have learned that market impact might be reduced by time slicing (TWAP orders). However, this exposes
orders to much greater timing risk, especially for volatile assets. Therefore, cost driven algorithms also aim to
reduce timing risk

In order to minimize overall transaction costs, we need to strike a balance between market impact and timing risk.
Trading too aggressively may cause considerable market impact, while trading too passively incurs timing risk.
Furthermore, we must know the investor's level of urgency or risk aversion to strike the right balance

Implementation shortfall represents a purely cost driven algorithm. It seeks to minimize the shortfall between the
average trade price and the assigned benchmark, which should reflect the investor's decision price

Adaptive Shortfall algorithms are just more opportunistic derivatives of implementation shortfall. They are
generally more price sensitive, although liquidity driven variants also start to appear

Market on close (MOC) algorithms aim to beat an undetermined benchmark: the future closing price. Unlike
TWAP or VWAP, where the average evolves through the day, it is harder to predict where the closing price will
actually be. MOC is the reverse of implementation shortfall: instead of determining an optimal end time, we need to
calculate an optimal starting time

Cost Driven Algorithms


Implementation Shortfall

Implementation shortfall (IS) represents the difference between the price at which the investor decides to trade and
the average execution price actually achieved. The decision price is used as the benchmark, although it is often
not specified, in which case the mid price at the time the order reaches the broker is used as the default

The goal of IS algorithms is to minimize the difference between average execution price and decision price. To
strike the right balance between market impact and timing risk, it usually means that the algorithm tends to take
only as long as necessary to prevent significant market impact

To determine the optimal trade horizon, the algorithm needs to account for factors such as order size and time
available for trading. It must also incorporate asset specific information such as liquidity and volatility, as well
as the investor's urgency or risk aversion. Quantitative models are then used to derive the optimal trade horizon
based on these factors. Generally, a shorter trade horizon results from:
Assets with high volatility, as well as those with lower bid-ask spreads
High risk aversion
Smaller order size, and so less potential market impact

Having calculated the optimal trade horizon, a static algorithm will then determine the trading schedule, whilst
a dynamic one will determine the most appropriate participation rate. Since both versions have a pre-determined
benchmark, they both favor trading more at the beginning of an order, when the price is still close to the
benchmark
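The shortfall itself is easy to measure once trading is done. A minimal sketch, in basis points against the decision price (not a full cost decomposition):

```python
# Implementation shortfall in basis points versus the decision
# (benchmark) price. A toy example with made-up prices.

def shortfall_bps(avg_exec_price: float, decision_price: float,
                  side: str = "buy") -> float:
    sign = 1.0 if side == "buy" else -1.0
    return sign * (avg_exec_price - decision_price) / decision_price * 1e4

# Decided to buy at 50.00, achieved an average fill of 50.12.
print(round(shortfall_bps(50.12, 50.00), 1))  # 24.0
```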

Cost Driven Algorithms


Adaptive Shortfall

Adaptive shortfall (AS) represents a relatively recent subclass of algorithms derived from implementation shortfall.
The adaptive moniker refers to the addition of adaptive behavior, mostly in reaction to the market price. Hence,
AS algorithms behave more opportunistically than IS algorithms. An aggressive adaptive algorithm trades more
aggressively when prices are favorable and less when they become adverse. The opposite applies for passive AS orders.

AS algorithms are built upon IS algorithms, so their basic behavior is the same. However, the AS algorithm
dynamically adjusts in real time based on current market conditions. Initially, a baseline target for volume
participation may be determined based on the estimated optimal trade horizon. During trading, the adaptive
portion is then used to modify this rate. For an aggressive AS algorithm, the participation rate would be increased
if market prices are favorable compared to the benchmark, for passive AS algorithms the participation rate would
be decreased.

Compared with IS algorithms, the user has to specify one additional parameter for AS algorithms: the adaption
type, either passive or aggressive


Cost Driven Algorithms


Market on Close

The close price is often used for marking to market, calculating net asset values and daily PnLs. Hence, many
market participants are interested in the closing price as the benchmark although trading at the close can be costly.
Researchers found that prices are more sensitive to order flow at the close. They also noted price reversals after
days with significant auction imbalances. While call auctions have helped to reduce end of day volatility, the
liquidity premium can still be considerable around the close

The main issue for MOC algorithms is the fact that the benchmark is unknown until the end of the trading day.
We cannot simply slice the order to try to match or beat the benchmark. We should also not start trading too
early, in order to avoid exposure to timing risk due to the variability in the closing price. However, trading too
late may result in significant market impact

Most MOC algorithms determine an optimal trading horizon using quantitative models, incorporating similar
factors as IS algorithms. Whereas IS algorithms determine an optimal end time, MOC algorithms calculate an
optimal start time

In general, MOC algorithms have the same parameters as IS algorithms. However, they also allow specific
parameters for different risk aversion profiles, end time and auction participation. The auction
participation instruction specifies the minimum or maximum order size allowed to participate in the close auction


Opportunistic Algorithms
Overview

Opportunistic algorithms have evolved from a range of trading strategies. They all take advantage of favorable
market conditions, whether this is based on price, liquidity or another factor such as spread/ratio

Price inline algorithms are essentially based on an underlying impact driven strategy such as VWAP or POV. What
they add is price sensitivity, which enables them to modify their trading style based on whether the current market
price is favorable or not. So a focus on market impact has given way to a more opportunistic approach

Liquidity driven algorithms are an evolution of simpler rule based order routing strategies. The trading is driven
by the available liquidity, although cost is also a factor

As pair trading is effectively a market neutral strategy, market risk is less of a concern. Instead, the key driver is
when the spread or ratio between the assets is favorable


Opportunistic Algorithms
Price Inline

A price inline (PI) algorithm adapts to the market price in a similar way to how POV algorithms adjust to market
volume. A benchmark price is defined and trading is then altered based on how the market price compares to it.
Default value for the benchmark is the mid price at the time of order arrival. Similar to AS algorithms, the term
moneyness is sometimes used for favorable market conditions

A PI algorithm consists of a basic trading mechanism combined with the price adaptive functionality. Hence, it
could be based on a static VWAP or a more dynamic POV. The actual price adaption might track the difference
between benchmark and market price and tilt an aggressive PI algorithm to trade proportionally more shares when
market conditions are favorable. Using the example of a participation rate of a POV algorithm, the PI algorithm
would increase the participation rate for a buy order when the market price is below the benchmark price

Special parameters include adaption type (like in AS), participation rate (for algorithms based on POV) and
participation adjustment. This specifies how much to alter the participation rate for a given price move, for
example 5% for every 10 cents. Asymmetrical participation levels are also possible
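The participation adjustment can be sketched using the slide's example of 5% per 10 cents (illustrative function; this is an aggressive buy variant that participates more below the benchmark):

```python
# Price-inline participation adjustment: tilt a POV-style participation
# rate by a fixed step per price increment away from the benchmark.

def adjusted_rate(base_rate: float, benchmark: float, market: float,
                  step: float = 0.05, increment: float = 0.10) -> float:
    ticks = (benchmark - market) / increment  # favorable if positive (buy)
    rate = base_rate + step * ticks
    return min(max(rate, 0.0), 1.0)  # clamp to [0%, 100%]

# Base 10% participation, benchmark 20.00, market trading at 19.80:
# two 10-cent increments in our favor add 2 x 5% participation.
print(round(adjusted_rate(0.10, 20.00, 19.80), 2))  # 0.2
```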


Opportunistic Algorithms
Liquidity

Liquidity represents the ease of trading a specific asset, hence it has a considerable effect on overall transaction
costs. Originally, liquidity based trading simply meant making decisions based on the available order book depth,
rather than just the best bid and offer. In today's fragmented markets with many potential execution venues,
liquidity seeking has become more complicated

Liquidity is closely related to market depth and price. Therefore, a liquidity seeking algorithm will react strongest
when there is plenty of market depth combined with a favorable price. Instead of making the algorithm react to
market volume, one can create a market depth measure that reflects the volume available at a favorable price
point. Therefore, when market depth and price are favorable, the algorithm trades aggressively to consume
liquidity

Liquidity driven algorithms are often used in fragmented markets. Often, clients may want their orders to only
participate at specific venues and might or might not want to use the internal crossing network of the broker

Special parameters include a visibility and benchmark price instruction. While visibility determines how
much of the order is actually displayed at execution venues (similar to iceberg orders), the benchmark price is
used to decide when the market price is favorable enough to warrant participation


Opportunistic Algorithms
Ratio/Spread

Pair trading involves buying one asset while simultaneously selling another one. As a market neutral strategy, the
risks from each asset should hedge or offset each other, so the strategy is less affected by market-wide moves.
Pair trading can be broken down into statistical arbitrage and merger (risk) arbitrage. Statistical arbitrage is based
on relative valuations and the assumption that the spread will revert to its mean. Risk arbitrage is more
equity specific and revolves around the probability of a merger happening

Statistical arbitrage spread trading algorithms focus on trading for a pre-determined benchmark, which is either
the spread between two assets or the ratio of their prices. A simple example is based solely on the spread between
two asset prices. When the difference exceeds a certain threshold, trading is activated. An alternative example
would use the price ratio as the trigger to trade

Risk arbitrage pairs can generally use the same approach as the statistical arbitrage ones. The trading strategy
would usually include selling the bidding company and buying the target company

Specific parameters include the spread (watch out if defined A-B or A/B), a legging indicator (if the orders have to
be executed in parallel) and a volume limit if one wants to limit the participation rate
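The spread trigger can be sketched as a simple threshold rule (illustrative thresholds; real statistical arbitrage systems would estimate the mean and threshold from a model such as a z-score):

```python
# Statistical-arbitrage spread trigger: trade when the spread
# (defined here as A - B) deviates from its mean by more than a
# threshold, betting on mean reversion.

def spread_signal(price_a: float, price_b: float,
                  mean_spread: float, threshold: float) -> str:
    spread = price_a - price_b
    if spread > mean_spread + threshold:
        return "sell A / buy B"   # spread too wide: short the spread
    if spread < mean_spread - threshold:
        return "buy A / sell B"   # spread too narrow: long the spread
    return "no trade"

print(spread_signal(101.5, 100.0, 1.0, 0.3))  # sell A / buy B
```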


Summary and Questions

An execution algorithm is simply a set of instructions used to execute an order. Execution algorithms can be
broadly categorized into three groups based on their target objectives: impact driven, cost driven or opportunistic

Impact driven algorithms seek to minimize the overall market impact costs, usually by splitting larger orders into
smaller child orders

Cost driven algorithms aim to reduce the overall trading costs

Opportunistic algorithms strive to take best advantage of favorable market conditions

Questions?



Sources

Algorithmic Trading and Direct Market Access by Barry Johnson



Algorithmic Trading
Session 9
Trade Implementation III
Transaction Costs
Oliver Steinki, CFA, FRM

Outline

Introduction

Pre-Trade Analysis

Post-Trade Analysis

Breaking Down Transaction Costs

Summary and Questions

Sources


Introduction
Where Do We Stand in the Algo Prop Trading Framework?
[Framework diagram: SIGNAL GENERATION (decide when and how to trade) → TRADE IMPLEMENTATION (size and execute orders, incl. exit) → PERFORMANCE ANALYSIS (return, risk and efficiency ratios)]

As we have seen, algorithmic proprietary trading strategies can be broken down into
three subsequent steps: Signal Generation, Trade Implementation and Performance
Analysis

Trade Implementation happens after the Signal Generation step has triggered a buy or
sell signal. It determines how the order is structured, e.g. position size and limit levels. In
advanced strategies, it can also take into account cross correlation with other portfolio
holdings and potential portfolio constraints

Sessions 7-9 deal with the question of sizing and executing trades, incl. exit

Session 7: Order Types
Session 8: Algorithmic Execution
Today's Session 9: Transaction Costs

Introduction
Transaction Costs

Each time an asset is bought or sold, transaction costs are incurred. They can have a significant impact on
investment returns. Therefore, it is important to both measure and analyze them in order to improve execution

Transaction costs can vary between 1bp and 250bps of the value traded, depending on the asset class, transaction
size and broker used. This wide range is partly due to the different characteristics of each asset and order, but also
due to the different ways transaction costs may be assigned

One of the most common ways to examine transaction costs has been to compare the actual performance of a
portfolio with its paper equivalent. A paper portfolio is simply a virtual portfolio traded at benchmark prices,
but without accounting for any costs

While transaction costs are inevitable, they can be minimized. Therefore, in order to maximize investment
returns, it is important to accurately measure transaction costs and to analyze them to understand how and why
they occur
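The paper portfolio comparison described above can be sketched for a single buy order (made-up numbers; the benchmark price stands in for the paper portfolio's cost-free execution price):

```python
# Total transaction cost measured against a "paper" portfolio traded
# at benchmark prices with zero cost, expressed in basis points.

def cost_vs_paper_bps(qty: int, benchmark_px: float,
                      exec_px: float, fees: float) -> float:
    paper_cost = qty * benchmark_px      # cost-free paper execution
    actual_cost = qty * exec_px + fees   # achieved price plus fees
    return (actual_cost - paper_cost) / paper_cost * 1e4

# Buy 10,000 shares: benchmark 25.00, achieved 25.04, $150 commission.
print(round(cost_vs_paper_bps(10_000, 25.00, 25.04, 150.0), 1))  # 22.0
```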

Introduction
Transaction Costs in the Investment Cycle

Historically, most of the early research on transaction costs focused on post-trade analysis. Over the last few
years, however, pre-trade analysis has become ever more important. In particular, algorithmic trading often relies
on pre-trade models to achieve more cost efficient execution

Pre-Trade Analysis concentrates on estimating potential transaction cost. Hence it is a key input into the choice of
trading strategy and can have a substantial effect on the overall execution (and so investment) performance.
Liquidity analysis might also be used to identify the best strategies and venues for trading

Post-Trade analysis focuses on execution performance and measurement of transaction costs. It is essential for
understanding the effectiveness of both the investment ideas and their implementation. In turn, this performance
is an important consideration when new investment strategies are formulated. For example, an investment
opportunity worth 30bps may not be worth following up if previous transaction costs for similar orders have been
around this level

Introduction
Transaction Costs Example

We will use the example from the assigned reading during this session to compare the various components of
transaction costs

Investment decision @ pd

50k share buy order placed at t0

Child orders execute at t1, t2, t3

Pre-trade period lasts from td to t0

Execution phase lasts from t0 to tclose

Pre-Trade Analysis
Overview

Pre-trade analysis is important to ensure that best execution is achieved. These analytics help investors or traders
make informed decisions about how best to execute a given order

Four types of information are key to trading strategy selection:

Prices: Market prices, price ranges, trends / momentum

Liquidity: Percentage of avg. daily volume, volume profile, trading stability

Risk: Volatility, beta, risk exposure

Cost estimates: Market impact, timing risk

The liquidity and risk estimates highlight the expected difficulty of trading. The cost estimates give a reasonable indication of what might be achieved. This is particularly important for algorithmic trading strategies, as it gives an idea of how suitable an order is for a given strategy

Pre-Trade Analysis
Price Data

A wide range of price data is useful for pre-trade analytics. The current market bid / ask prices act as a baseline for
what we might achieve. The last traded price is also useful (especially for illiquid assets), since this may be
significantly different from the current quotes

The bid ask spread is seen as an estimate for the cost of immediacy. If immediate execution is desired a seller has to
sell at the bid and a buyer has to buy at the ask, incurring the spread as the cost of immediacy. A comparison to
historical bid ask spreads allows us to gauge whether the current spread is unusual

Price ranges, such as the difference between a day's high and low, give an indication of the current price volatility. Likewise, benchmarks such as today's opening price or last night's close are also useful. Trends may be reflected by daily, weekly or even monthly percentage changes

Pre-Trade Analysis
Liquidity Data

Liquidity is closely related to transaction costs. Trading volume offers a simple way to rate the liquidity of an asset. The average daily volume (ADV) is often calculated over a period of 14, 30, 90 or 360 calendar days. The percentage of ADV represents the relative size of our order given the asset's volume. For instance, anything less than 20% of ADV should be achievable to trade within a normal trading day, whereas larger orders are unlikely to execute without market impact

The required trading horizon can be based on the ADV, together with a factor representing our trading participation rate:

Horizon (days) = Order Size / (ADV × Participation Rate)

For example, given an order size of 50k as in our example, an ADV of 1m and a trading participation rate of 10%, we get the following horizon:

Horizon = 50,000 / (1,000,000 × 10%) = 0.5 trading days

For such estimates to be reliable, it is important that the actual trading volume behaves similarly to the historical one. This can be measured by the coefficient of variation (CV), based on the standard deviation of daily volume:

CV = σ(daily volume) / ADV

As trading stability is inversely related to this coefficient, a high CV value implies sizable deviations from the historical volume
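Both liquidity estimates are simple to compute; a minimal Python sketch (the function names are illustrative, not from the reading):

```python
import statistics

def trading_horizon(order_size, adv, participation_rate):
    """Estimated number of trading days needed, given the average
    daily volume (ADV) and a target participation rate."""
    return order_size / (adv * participation_rate)

def volume_cv(daily_volumes):
    """Coefficient of variation of daily volume: a high value signals
    unstable volume, making ADV-based horizon estimates unreliable."""
    return statistics.stdev(daily_volumes) / statistics.mean(daily_volumes)

# The 50k-share example from the slide: half a trading day
print(trading_horizon(50_000, 1_000_000, 0.10))  # 0.5
```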

Pre-Trade Analysis
Risk Data

Volatility is a key variable for estimating how much risk we may be exposed to. It is based on the standard deviation of price returns (not prices, a common mistake), often for the last 1, 3, 6 or 12 months. As we have seen with the CV, high volatility implies a considerable amount of timing risk. Therefore, the more volatile the asset, the more aggressive (hence liquidity-demanding) trading strategies are generally used to counteract the timing risk

Market risk could be measured using an asset's beta, which is a measure of its sensitivity to market returns (CAPM). A positive value means that the asset price moves in the same direction as the market whilst a negative one means it behaves in a contrarian fashion. A beta of 1 means that the asset moves in line with the market in both direction and size. A beta above 1 signifies a more pronounced price response, whilst an asset with a beta of below 1, e.g. 0.5, moves only half as much as the market


Pre-Trade Analysis
Transaction Cost Estimates

Transaction cost models generally provide an estimate for the overall cost as well as detailing major cost
components such as market impact and timing risk. We will investigate both in more detail later in this
presentation

The basis for most transaction cost models is the framework of Almgren and Chriss (2000), where they detailed the
optimal execution of portfolio transactions. They use random walk models to estimate the current market price in
terms of permanent market impact, price trending and volatility

In terms of asset selection, given two assets with similar expected returns it is logical to trade the one that has the lower expected transaction costs. Exactly the same applies when comparing different trading strategies: one should use the one with the lowest expected transaction costs. Although detailed pre-trade analysis is required, historical information can be used as a guideline

Cost estimates are also an important guide to the difficulty of an order. For instance, if the timing risk estimate is
significantly larger than the market impact forecast, one should apply a more aggressive trading strategy.
Conversely, a larger market impact may suggest adopting a more passive style


Post-Trade Analysis
Overview

The historical results of post trade analysis act as a measure of broker/trader performance. They may also inform
both investment and execution decisions

Clearly, there is a lot more to transaction costs than fees and commissions. Past performance is therefore an
important tool for comparing the quality of execution of both brokers and individual traders. Unbundling research
fees has also made it easier for investors to link costs to the execution, and so use post trade analysis to accurately
compare broker performance

Breaking down the costs into their components allows us to see where and how the costs (or slippage) occurred.
Detailed measurement helps to ensure that future efforts for cost reduction are focused on the correct stage of the
investment process. It may also be used to guide the execution method selection

Performance analysis is an important tool for post trade comparison of broker/trader/algorithm results. This is
mostly done via benchmark comparison or as a relative performance measurement

The post trade transaction costs can be determined via Perold's implementation shortfall. This measures the difference between the idealized paper portfolio and the actually traded one


Post-Trade Analysis
Benchmarks

A good benchmark should be easy to track and readily verifiable, it should also provide an accurate performance
measurement. Johnson lists the following benchmarks for the example in the book

Post trade benchmarks, such as the closing price, are only known once the trading day is over. Intraday
benchmarks, such as VWAP, need constant updates as the trading day progresses. Other intraday measures such as
OHLC need the whole trading day to be completed to be known. Pre trade benchmarks, such as the previous close
or open, are known before the start of the trading day

Post-Trade Analysis
Relative Performance Measure

Kissell introduced the relative performance measure (RPM) as an alternative to price based benchmarks. It is based on a comparison of what the trade achieved in relation to the rest of the market. In terms of volume, RPM represents the ratio of the volume traded at a less favorable price to the total market volume:

RPM(volume) = volume traded at less favorable prices / total market volume

An analogous measure can be computed in terms of the number of trades rather than volume

Transaction cost depends on many factors: the asset's characteristics (liquidity, volatility), market conditions (price trends, momentum), the trading strategy etc. Therefore, when comparing the performance of two separate orders, we need to take these various factors into account; just comparing our executed price to one of the price based benchmarks might not be enough

One of the main advantages of the RPM metric is that it is normalized: the percentage rates the trade relative to all other trades that occurred that day
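As a sketch, the volume-based RPM can be computed from a day's tick data; `rpm_volume` is an illustrative helper, assuming trades are given as (price, volume) pairs:

```python
def rpm_volume(trades, exec_price, side):
    """Volume-based relative performance measure: the fraction of the
    day's market volume that traded at a price less favorable than our
    own execution.  trades: list of (price, volume) pairs."""
    total_volume = sum(volume for _, volume in trades)
    if side == 'buy':
        # For a buy, any trade at a higher price was less favorable
        worse = sum(v for p, v in trades if p > exec_price)
    else:
        # For a sell, any trade at a lower price was less favorable
        worse = sum(v for p, v in trades if p < exec_price)
    return worse / total_volume
```

A higher RPM means more of the market traded at worse prices than we did, i.e. a better execution.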


Post-Trade Analysis
Post-Trade Transaction Costs

The total transaction costs of a trade may be determined using Perold's implementation shortfall (IS) measure. This is the difference in value between the idealized paper portfolio and the actually traded one. The theoretical return depends on the price when the decision to invest was made (p_d), the final market price (p_N) and the size of the investment (X). The real returns depend on the actual transaction costs. So, if x_j represent the sizes of the individual executions and p_j the achieved prices:

IS = X(p_N − p_d) − [ Σ_j x_j (p_N − p_j) − fixed fees ]

Note this assumes that orders are fully executed. Hence, Kissell and Glantz extended it by an opportunity cost factor, as not every order will be fully executed; (X − Σ_j x_j) represents the unexecuted position:

IS = Σ_j x_j p_j − (Σ_j x_j) p_d + (X − Σ_j x_j)(p_N − p_d) + fixed fees

For our example, we get an IS of 297bps plus fixed costs, calculated as 133.5k / 4.5m:

Execution cost: (10k × 91.15 + 20k × 92.5 + 15k × 93.8) − (45k × 90) = 118,500

Opportunity cost: (50k − 45k) × (93 − 90) = 15k

Order value: 50k × 90 = 4.5m
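The example calculation can be sketched in Python (an illustrative helper using the buy-side convention, not code from the reading):

```python
def implementation_shortfall(decision_price, final_price, order_size,
                             fills, fixed_costs=0.0):
    """Kissell/Glantz-style shortfall in basis points for a buy order.
    fills: list of (shares, price) executions."""
    executed = sum(shares for shares, _ in fills)
    # Cost of the executed shares relative to the decision price
    execution_cost = (sum(shares * price for shares, price in fills)
                      - executed * decision_price)
    # Unexecuted shares valued at the final-minus-decision price move
    opportunity_cost = (order_size - executed) * (final_price - decision_price)
    order_value = order_size * decision_price
    total = execution_cost + opportunity_cost + fixed_costs
    return total / order_value * 10_000  # in basis points

fills = [(10_000, 91.15), (20_000, 92.50), (15_000, 93.80)]
print(implementation_shortfall(90.0, 93.0, 50_000, fills))  # ≈ 297 bps
```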

Breaking Down Transaction Costs


Overview

There has been a considerable amount of research focused on breaking down trading costs. The table below
suggests one way of classifying the different constituents:

Differentiating between investment and trading related costs is useful since it helps identify who best can control
them. Investment related costs occur before the order is placed with the broker / execution trader, whereas
trading related costs account for all costs thereafter. Explicit costs can easily be measured, whereas implicit costs
are much harder to quantify


Breaking Down Transaction Costs


Investment Related Costs Taxes and Delay

Investment related costs can be a significant proportion of overall transaction costs. They primarily consist of delay
cost and taxes. The delay reflects the time from the investment decision being made (td) to when an order is
actually dispatched (t0)

Taxes must be incorporated in any investment strategy. They depend on asset class, legal form of the investor and
jurisdiction. For example, some countries levy capital gains taxes, whereas others do not. The same applies for
stamp duty on share purchases

The delay cost is caused by any price change from the initial decision to invest to when an order has actually been
received by a broker


Breaking Down Transaction Costs


Trading Related Costs - Overview

The explicit trading related costs comprise commissions and fees, which are usually known in advance and open to negotiation

The most significant costs are the implicit trading related costs, primarily market impact and timing risk, but also
spread, price trend and opportunity cost

Spread cost is also an implicit cost: although visible, it is not always as easily measurable as commissions or fees

Market impact represents a payment for liquidity (or immediacy) and a cost due to the information content of
the order.

The price trend represents the added burden caused by a trending market

Timing risk is primarily associated with the volatility of an assets price, as well as its liquidity

Opportunity costs represent the risk from not fully executing the order, possibly because the trading strategy
was too passive


Breaking Down Transaction Costs


Trading Related Costs Commissions, Fees and Spreads

Commissions are charged by brokers for agency trading to compensate them for their costs. They are generally quoted in basis points or cents per contract. They have decreased significantly over time as most execution tasks have been fully automated, so that fewer employees are needed

Fees represent the actual charges from trading. These may be floor broker charges, exchange fees as well as clearing and settlement costs. Often brokers include them in their commission charge, so a client doesn't necessarily know the exact breakdown between fees and commissions. Note that some exchanges and ECNs assign costs only to aggressively priced orders in order to encourage liquidity provision

Spread cost represents the difference between the best bid and ask prices at any given time. The spread
compensates those who provide liquidity. Clearly, aggressive trading styles will result in higher spread costs than
passive ones. Unsurprisingly, large cap and liquid stocks as well as liquid futures have lower spreads. More volatile
assets tend to have higher spreads


Breaking Down Transaction Costs


Trading Related Costs Market Impact and Price Trends

Market impact represents the price change caused by a specific trade or order. Generally, it has an adverse effect, for instance helping drive prices up when we are trying to buy. The exact market impact is the difference between the actual price path and the hypothetical one had our orders not been created, hence it is difficult to measure. Market impact can be broken down into temporary and permanent impact, where temporary impact reflects the cost of demanding liquidity and permanent impact corresponds to the long term effect of our order, representing the information content it exposed to the market

Price trends arise when asset prices exhibit broadly consistent drift. This price drift, or momentum, is also known as short-term alpha. An upward trend implies that prices will increase while we are buying, and vice versa. The price trend cost may be determined as the difference between this trend price and the arrival price. Trend cost may be reduced by shortening the trading horizon, at the expense of increased market impact costs. So, for larger orders, one has to strike a balance between the two


Breaking Down Transaction Costs


Trading Related Costs Timing Risk and Opportunity Costs

Timing risk is used to represent the uncertainty of the transaction cost estimate. The two main sources of this
uncertainty are volatility in the assets price and traded volume. Price volatility is arguably the most important risk.
The more volatile an asset, the more likely its price will move away and so increase transaction costs. The liquidity
risk represents the uncertainty with respect to the market impact cost. Generally, market impact costs are
estimated based on historical volumes, so if the actual trading volumes differ significantly, this may result in a shift
in market impact

Opportunity cost reflects the cost of not fully executing an order. This may be because the asset's price went beyond the price limit or could just be due to insufficient liquidity. Either way, it represents a missed opportunity, since the next day prices may move even further away. The overall cost may be determined as the product of the remaining order size and the price difference between the final price and the arrival price:

Opportunity cost = (X − Σ_j x_j)(p_N − p_0)

Unlike the other cost components, opportunity cost represents a virtual loss rather than a physical one and is only realized if a new order makes up the remainder at a less favorable price


Summary and Questions

Transaction costs can have a significant impact on investment returns. Therefore, it is important to both measure
and analyse them if best execution is to be achieved

Implementation shortfall or slippage is the difference in performance between an actual portfolio and its
theoretical paper equivalent

Pre trade analysis concentrates on estimating the expected difficulty of trading and potential transaction costs

Post trade analysis focuses on execution performance and cost measurement

Transaction costs can be decomposed into a wide range of different components. Among them are broker costs,
spread costs, delay costs, market impact, timing risk and opportunity costs

Transaction costs are closely related to market liquidity and volatility. They become cheaper with higher liquidity
and lower volatility

Questions?

Contact Details: osteinki@faculty.ie.edu or +41 76 228 2794


Sources

Algorithmic Trading and Direct Market Access by Barry Johnson

Optimal Execution of Portfolio Transactions by Robert Almgren and Neil Chriss, published in the Journal of Risk, 2000, vol. 3, pp. 5-39



Algorithmic Trading
Session 10
Performance Analysis I
Performance Measurement
Oliver Steinki, CFA, FRM

Outline

Introduction

Arithmetic vs. Geometric Mean

Why Dollars are More Important Than Percentages

Traditional Performance Measures

Time Weighted vs. Money Weighted Rates of Return

Performance Measurement with Cash Deposits and Withdrawals

Summary and Questions

Sources


Introduction
Where Do We Stand in the Algo Prop Trading Framework?

As we have seen, algorithmic proprietary trading strategies can be broken down into
three subsequent steps: Signal Generation, Trade Implementation and Performance
Analysis

Performance Analysis is conducted after the trade has been closed and used in a
backtesting context to judge whether the strategy is successful or not. In general, we can
judge the performance according to five different metrics: return, risk, efficiency, trade
frequency and leverage

Sessions 10 -12 deal with the question of analyzing performance

[Framework diagram: Signal Generation (decide when and how to trade) → Trade Implementation (size and execute orders, incl. exit) → Performance Analysis (return, risk and efficiency ratios)]

Today's Session 10: Performance Measurement

Session 11 & 12: Performance Analysis

Introduction
Performance Measurement

Performance measurement is a critical aspect of portfolio management

Proper performance measurement should involve a recognition of both the return and the riskiness of the
investment

When two investments returns are compared, their relative risk must also be considered

People maximize expected utility:


A positive function of expected return
A negative function of the return variance

Introduction
A Historical Guideline

The 1968 Bank Administration Institute study Measuring the Investment Performance of Pension Funds concluded:
Performance of a fund should be measured by computing the actual rates of return on a fund's assets
These rates of return should be based on the market value of the fund's assets
Complete evaluation of the manager's performance must include examining a measure of the degree of risk taken in the fund
Circumstances under which fund managers must operate vary so greatly that indiscriminate comparisons among funds might reflect differences in these circumstances rather than in the ability of managers

Key Points of Performance Measurement


Arithmetic vs. Geometric Mean

The arithmetic mean is not a useful statistic in evaluating growth. It can be misleading: a 50 percent decline in one period followed by a 50 percent increase in the next leaves an investor with a 25 percent loss, even though the arithmetic mean return is zero

Consider the following example from the assigned reading. 44 Wall Street and Mutual Shares both had good
returns over the 1975 to 1988 period:

Key Points of Performance Measurement


Review: Why the Arithmetic Mean Is Misleading

The proper measure of average investment return over time is the geometric mean:

GM = (R_1 × R_2 × … × R_n)^(1/n) − 1

where R_i = the return relative (1 plus the return) in period i

The geometric means in the preceding example are:


44 Wall Street: 7.9 percent
Mutual Shares: 22.7 percent

The geometric mean correctly identifies Mutual Shares as the better investment over the 1975 to 1988 period
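The calculation can be sketched in a few lines of Python (an illustrative helper using simple per-period returns rather than return relatives):

```python
def geometric_mean_return(returns):
    """Average compound growth rate from simple per-period returns."""
    growth = 1.0
    for r in returns:
        growth *= 1.0 + r
    return growth ** (1.0 / len(returns)) - 1.0

# A 50% loss then a 50% gain: the arithmetic mean is 0, but the
# portfolio is down 25%, i.e. about -13.4% per period compounded
print(geometric_mean_return([-0.50, 0.50]))  # ≈ -0.134
```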

Key Points of Performance Measurement


Dollars Are More Important Than Percentages
Measuring dollar values clearly shows that Mutual Shares significantly outperformed 44 Wall Street:

[Chart: Mutual Fund Performance, ending value ($) per year, 44 Wall Street vs. Mutual Shares]

Key Points of Performance Measurement


Dollars Are More Important Than Percentages

Assume two funds managed by the same portfolio manager:

Fund A has $40 million in investments and earned 12 percent last period

Fund B has $250,000 in investments and earned 44 percent last period

The correct way to determine the return of both funds combined is to weight the funds' returns by the dollar amounts:

($40,000,000 / $40,250,000) × 12% + ($250,000 / $40,250,000) × 44% = 12.20%

In fact, 99.38 percent of the $40.25 million managed by this person earned 12 percent. Only 0.62 percent
earned the higher rate
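The dollar weighting can be sketched as follows (an illustrative helper, not code from the reading):

```python
def combined_return(funds):
    """Dollar-weighted combined return from (dollar value, return) pairs."""
    total_value = sum(value for value, _ in funds)
    return sum(value / total_value * ret for value, ret in funds)

# The two funds from the slide
print(combined_return([(40_000_000, 0.12), (250_000, 0.44)]))  # ≈ 0.1220
```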

Traditional Performance Measures


Sharpe and Treynor Measures

The Sharpe and Treynor measures are calculated as follows:

Sharpe measure = (R̄ − R_f) / σ

Treynor measure = (R̄ − R_f) / β

where R̄ = average return

R_f = risk-free rate

σ = standard deviation of returns

β = beta

The Sharpe measure evaluates return relative to total risk. Hence, it is appropriate for a well-diversified
portfolio, but not for individual securities

The Treynor measure evaluates the return relative to beta, a measure of systematic risk. Hence, it ignores any
unsystematic risk and is therefore also not appropriate for individual securities
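Both ratios are one-liners; a minimal sketch using sample statistics (illustrative helper names):

```python
import statistics

def sharpe_measure(returns, risk_free):
    """Excess return per unit of total risk (standard deviation)."""
    return (statistics.mean(returns) - risk_free) / statistics.stdev(returns)

def treynor_measure(returns, risk_free, beta):
    """Excess return per unit of systematic risk (beta)."""
    return (statistics.mean(returns) - risk_free) / beta
```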


Traditional Performance Measures


Jensen Measure

The Jensen measure stems directly from the CAPM:

R_it − R_ft = α_i + β_i (R_mt − R_ft) + e_it

The constant term should be zero. Securities with a beta of zero should have an excess return of zero according to
classical finance theory

According to the Jensen measure, if a portfolio manager is better-than-average, the alpha of the portfolio will be
positive

However, the use of the Treynor and Jensen measures relies on measuring the market return and on the CAPM:

It is difficult to identify and measure the return of the market portfolio

Evidence continues to accumulate that may ultimately displace the CAPM; alternatives such as the arbitrage pricing theory, multi-factor CAPMs or the inflation-adjusted CAPM could help
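The alpha and beta of the regression above can be estimated with ordinary least squares; a self-contained sketch (an illustrative helper, omitting the significance tests a real evaluation would need):

```python
def jensen_alpha(portfolio_excess, market_excess):
    """OLS estimate of alpha and beta in
    R_p - R_f = alpha + beta * (R_m - R_f) + e."""
    n = len(market_excess)
    mean_x = sum(market_excess) / n
    mean_y = sum(portfolio_excess) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(market_excess, portfolio_excess))
    var = sum((x - mean_x) ** 2 for x in market_excess)
    beta = cov / var
    alpha = mean_y - beta * mean_x
    return alpha, beta
```

A better-than-average manager should show a positive, statistically significant alpha.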


Traditional Performance Measures


Fama's Return Decomposition

Fama's return decomposition can be used to assess why an investment performed better or worse than expected:
The return the investor chose to take

The added return the manager chose to seek

The return from the manager's good selection of securities

Diversification is the difference between the return corresponding to the beta implied by the total risk of the portfolio and the return corresponding to its actual beta

Net selectivity measures the portion of the return from selectivity in excess of that provided by the diversification component


Dollar Weighted vs. Time Weighted Rates of Returns


Overview
The dollar-weighted rate of return is analogous to the internal rate of return in corporate finance. It is the
rate of return that makes the present value of a series of cash flows equal to the cost of the investment

cost = C_1/(1+R) + C_2/(1+R)^2 + C_3/(1+R)^3 + …

The time-weighted rate of return measures the compound growth rate of an investment. It eliminates the
effect of cash inflows and outflows by computing a return for each period and linking them (like the geometric
mean return):

time-weighted return = (1 + R_1)(1 + R_2)(1 + R_3)(1 + R_4) − 1


The time-weighted rate of return and the dollar-weighted rate of return will be equal if there are no inflows or
outflows from the portfolio
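Both measures can be sketched in a few lines of Python (illustrative helpers; the internal rate of return is found by bisection, assuming one cash flow per period):

```python
def time_weighted(period_returns):
    """Compound (time-weighted) return from per-period returns."""
    growth = 1.0
    for r in period_returns:
        growth *= 1.0 + r
    return growth - 1.0

def dollar_weighted(cost, cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return: solve cost = sum CF_t / (1+R)^t
    for R by bisection over [lo, hi]."""
    def present_value(rate):
        return sum(cf / (1.0 + rate) ** t
                   for t, cf in enumerate(cash_flows, start=1))
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if present_value(mid) > cost:  # PV too high: the rate is too low
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# 100 invested, 110 returned one period later: both measures give 10%
print(dollar_weighted(100.0, [110.0]))
```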


Performance Measurement with Cash Deposits and Withdrawals


Overview

The owner of a fund often takes periodic distributions from the portfolio, and may occasionally add to it
The established way to calculate portfolio performance in this situation is via a time-weighted rate of return:

Daily valuation method


Modified Bank Administration Institute (BAI) method

The daily valuation method:

Calculates the exact time-weighted rate of return


Is cumbersome because it requires determining a value for the portfolio each time any cash flow occurs. This might be interest, dividends, additions or withdrawals

The modified BAI method:


Approximates the internal rate of return for the investment over the period in question
Can be complicated with a large portfolio that might conceivably have a cash flow every day


Performance Measurement with Cash Deposits and Withdrawals


Daily Valuation Method

The daily valuation method solves for R:

R_daily = (S_1 × S_2 × … × S_n) − 1

where S_i = MVE_i / MVB_i

MVE_i = market value of the portfolio at the end of period i, before any cash flows in period i but including accrued income for the period

MVB_i = market value of the portfolio at the beginning of period i, including any cash flows at the end of the previous subperiod and including accrued income


Performance Measurement with Cash Deposits and Withdrawals


BAI method

The BAI method solves for R:

MVE = Σ_{i=0}^{n} F_i (1 + R)^{w_i}

where F_i = the i-th cash flow during the period (F_0 = market value at the start of the period)

MVE = market value at the end of the period, including accrued income

w_i = (CD − D_i) / CD

CD = total number of days in the period

D_i = number of days since the beginning of the period in which the cash flow occurred
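Since this equation has no closed-form solution for R, it is solved numerically (the example later uses a tool like Excel's Solver). A bisection sketch in Python (an illustrative helper, assuming the terminal value grows with the rate, which holds when net flows are positive):

```python
def bai_return(start_value, end_value, cash_flows, total_days):
    """Modified BAI method: solve
        MVE = sum_i F_i * (1 + R) ** ((CD - D_i) / CD)
    for R by bisection.  cash_flows: list of (day, amount) pairs; the
    starting market value is treated as a cash flow on day 0."""
    flows = [(0, start_value)] + list(cash_flows)

    def terminal_value(rate):
        return sum(amount * (1.0 + rate) ** ((total_days - day) / total_days)
                   for day, amount in flows)

    lo, hi = -0.99, 10.0
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2.0
        if terminal_value(mid) < end_value:  # rate too low
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```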


Performance Measurement with Cash Deposits and Withdrawals


Example

An investor has an account with a mutual fund and dollar cost averages by putting $100 per month into the fund

The following table shows the activity and results over a seven-month period
Date         Description       $ Amount   Price   Shares     Total Shares   Value
January 1    balance forward                      $7.00      1,080.011      $7,560.08
January 3    purchase          100        $7.00   14.286     1,094.297      $7,660.08
February 1   purchase          100        $7.91   12.642     1,106.939      $8,755.89
March 1      purchase          100        $7.84   12.755     1,119.694      $8,778.40
March 23     liquidation       5,000      $8.13   -615.006   504.688        $4,103.11
April 3      purchase          100        $8.34   11.990     516.678        $4,309.09
May 1        purchase          100        $9.00   11.111     527.789        $4,750.10
June 1       purchase          100        $9.74   10.267     538.056        $5,240.67
July 3       purchase          100        $9.24   10.823     548.879        $5,071.64
August 1     purchase          100        $9.84   10.163     559.042        $5,500.97

Performance Measurement with Cash Deposits and Withdrawals


Example: Daily Valuation Method

The daily valuation method returns a time-weighted return of 40.6 percent over the seven-month period

Date         MVB         Cash Flow   Ending Value   MVE         MVE/MVB
January 1                                           $7,560.08
January 3    $7,560.08   100         $7,660.08      $7,560.08   1.000
February 1   $7,660.08   100         $8,755.89      $8,655.89   1.130
March 1      $8,755.89   100         $8,778.40      $8,678.40   0.991
March 23     $8,778.40   -5,000      $4,103.11      $9,103.11   1.037
April 3      $4,103.11   100         $4,309.09      $4,209.09   1.026
May 1        $4,309.09   100         $4,750.10      $4,650.10   1.079
June 1       $4,750.10   100         $5,240.67      $5,140.67   1.082
July 3       $5,240.67   100         $5,071.64      $4,971.64   0.949
August 1     $5,071.64   100         $5,500.97      $5,400.97   1.065

(MVE is the ending value adjusted for the sub-period's cash flow)

Product of MVE/MVB values = 1.406; R = 40.6%
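The linking of sub-period ratios can be sketched as follows (an illustrative helper, taking the table's columns as input):

```python
def daily_valuation_return(sub_periods):
    """Time-weighted return from (MVB, cash_flow, ending_value) rows;
    the cash-flow-adjusted MVE is ending_value - cash_flow."""
    growth = 1.0
    for mvb, cash_flow, ending_value in sub_periods:
        growth *= (ending_value - cash_flow) / mvb
    return growth - 1.0
```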


Performance Measurement with Cash Deposits and Withdrawals


Example: BAI Method
The BAI method returns a dollar-weighted return of 42.1 percent over the seven-month period. However, it requires a numerical solver, such as Excel's Solver

Date         Day   Weight = (214 - Day)/214   Cash Flow    (1.421)^Weight × Cash Flow
January 1      0   1.0000                     $7,560.06    $10,741.36
January 3      2   0.9907                     $100         $141.62
February 1    31   0.8551                     $100         $135.03
March 1       60   0.7196                     $100         $128.75
March 23      83   0.6121                     ($5,000)     ($6,199.20)
April 3       94   0.5607                     $100         $121.77
May 1        123   0.4252                     $100         $116.17
June 1       153   0.2850                     $100         $110.53
July 3       185   0.1355                     $100         $104.87
August 1     214   0.0000                     $100         $100.00

Total                                                      $5,500.84

Summary and Questions

Performance evaluation is a critical part of the portfolio management process. The central issue is coupling a measure of risk with the return of a portfolio. The measurement of risk is often neglected

Average returns over time should be measured using a geometric growth rate. The arithmetic mean gives
misleading results and should not be used to compare competing investment funds or strategies

The Sharpe and Treynor measures are the two leading classical performance indicators. Their calculations are similar, except that the Sharpe measure uses the standard deviation of returns as a risk measure whereas the Treynor measure uses beta. Jensen's measure is not that common anymore, although his definition of alpha is still used for outperformance

When a portfolio has frequent cash deposits and withdrawals, it is best to calculate performance via a time-weighted rate of return

Questions?



Sources

Portfolio Construction, Management, and Protection by Robert A. Strong



Algorithmic Trading
Session 11
Performance Analysis II
Risk, Return and Efficiency Ratios
Oliver Steinki, CFA, FRM

Outline

Introduction

Returns

Risk

Efficiency

Summary and Questions

Sources


Introduction
Where Do We Stand in the Algo Prop Trading Framework?

As we have seen, algorithmic proprietary trading strategies can be broken down into
three subsequent steps: Signal Generation, Trade Implementation and Performance
Analysis

Performance Analysis is conducted after the trade has been closed and used in a
backtesting context to judge whether the strategy is successful or not. In general, we can
judge the performance according to five different metrics: return, risk, efficiency, trade
frequency and leverage

Sessions 10 -12 deal with the question of analyzing performance

[Framework diagram: Signal Generation (decide when and how to trade) → Trade Implementation (size and execute orders, incl. exit) → Performance Analysis (return, risk and efficiency ratios)]

Session 10: Performance Measurement

Today's Session 11: Performance Analysis I: Returns, risk and efficiency

Session 12: Performance Analysis II: Frequency of trades and leverage

Introduction
Performance Analysis

Performance Analysis is a critical aspect of portfolio management. We split the analysis into five metrics: return,
risk, efficiency, trade frequency and leverage

Return is expressed as the geometric mean growth rate of the portfolio

Risk is defined as the downside deviation of returns and can be expressed in terms such as standard deviation,
downside standard deviation, maximum drawdown and length of maximum drawdown

Efficiency measures set risk and return in relation. They are expressed through classical ratios such as Sharpe and
Treynor measure, but also more modern ones such as the Sortino ratio. Win/Loss and Average Profit/Loss also
indicate efficiency. A comparison to a benchmark is an indirect way of efficiency measurement as one targets a
better return than the benchmark with similar risk or similar returns with lower risk

Trade frequency is important to judge the impact of transaction costs and infrastructure requirements. The higher
the trade frequency, the bigger the impact of transactions costs and requirement of a sophisticated infrastructure

Leverage is another expression for money management. It deals with the question of which percentage of the total
portfolio to invest in a given trade and how one can optimize this

Introduction
Methodology

The methodology of a trading strategy is of qualitative nature, yet important for investors. Many investors have a
bias towards a certain strategy, e.g. momentum based, mean-reverting, market-neutral or directional

The level of complexity of the methodology is an important criterion of quantitative trading strategies. Although it does not have a direct impact on the quantitative performance metrics we cover, it will be relevant for your choice of potential investors, as they might consider your strategy a black box if it is too complex for them

Does the methodology rely on sophisticated or complex statistical or machine learning techniques that are hard to
understand and require a PhD in statistics to grasp? Do these techniques introduce a significant quantity of
parameters, which might lead to optimisation bias? Is the strategy likely to withstand a regime change (e.g. potential
new regulation of financial markets)? All these factors will also determine who your potential investors are

Returns
Recap

We have seen in previous sessions that the arithmetic mean is not a useful statistic in evaluating growth. It can
mislead: a 50 percent decline in one period followed by a 50 percent increase in the next has an arithmetic average
return of zero, yet the portfolio ends the two periods down 25 percent

The proper measure of average investment return over time is the geometric mean. It is this growth rate that any
rational investor tries to maximize

Investors are not only interested in the total return, but also in the average annualized returns and the best and
worst days / months / years
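The -50%/+50% example can be checked numerically. A minimal Python sketch (figures from the text):

```python
import numpy as np

# Period returns from the example in the text: -50% then +50%.
returns = np.array([-0.50, 0.50])

arithmetic_mean = returns.mean()                        # 0.0, looks flat
growth_factor = np.prod(1 + returns)                    # 0.5 * 1.5 = 0.75
geometric_mean = growth_factor ** (1 / len(returns)) - 1

print(f"arithmetic mean: {arithmetic_mean:.2%}")        # 0.00%
print(f"geometric mean:  {geometric_mean:.2%}")         # about -13.40% per period
```

The portfolio ends at 75% of starting equity even though the arithmetic mean is zero; the geometric mean correctly reports a negative growth rate per period.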

Risk
Recap

Volatility is used as a proxy for risk of a strategy. Most often, the standard deviation of returns is used as volatility
measure, although more advanced techniques only consider the downside deviation of returns. A higher frequency
strategy will require greater sampling rates of standard deviation, but a shorter overall time period of
measurement

The maximum drawdown is the largest overall peak-to-trough percentage drop on the equity curve of the
strategy. Momentum strategies are well known to suffer from periods of extended drawdowns (due to a string of
many incremental losing trades). Many traders will give up in periods of extended drawdown, even if historical
testing has suggested this is "business as usual" for the strategy. You will need to determine what percentage of
drawdown (and over what time period) you can accept before you cease trading your strategy, especially if you
trade your own money. This is a highly personal decision and thus must be considered carefully. Investors will also
impose some form of maximum drawdown constraint and might pull their money if you breach it

The peak-to-valley length measures how quickly the equity curve falls from a peak to the subsequent bottom. The
trough-to-recovery time measures how long it takes the equity curve to climb back to the height of the
previous peak
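A short Python sketch of both drawdown statistics, run on a small hypothetical equity curve:

```python
import numpy as np

def max_drawdown(equity):
    """Largest peak-to-trough percentage drop of an equity curve."""
    equity = np.asarray(equity, dtype=float)
    peaks = np.maximum.accumulate(equity)    # running high water mark
    drawdowns = equity / peaks - 1.0
    return drawdowns.min()

def longest_drawdown(equity):
    """Longest stretch of bars spent below the previous high water mark."""
    equity = np.asarray(equity, dtype=float)
    below = equity < np.maximum.accumulate(equity)
    longest = run = 0
    for b in below:
        run = run + 1 if b else 0
        longest = max(longest, run)
    return longest

equity = [100, 110, 105, 120, 90, 95, 130]   # hypothetical equity curve
print(max_drawdown(equity))                  # -0.25 (the 120 -> 90 drop)
print(longest_drawdown(equity))              # 2 bars spent below the 120 peak
```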

Efficiency Measures
Recap

When two investments' returns are compared, their relative risk must also be considered. Efficiency measures set
returns in relation to risk in order to arrive at some measure of x units of return per z units of risk

Examples are classical ratios such as Sharpe and Treynor measure. Modern ratios focus on a more advanced
definition of risk, such as the Sortino ratio

Win/Loss and Average Profit/Loss also indicate efficiency. Ideally one combines a high percentage of winning
trades with higher average profits than average losses

A comparison to a benchmark is an indirect way of efficiency measurement as one targets a better return than the
benchmark with similar risk or similar returns with lower risk
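The ratios above can be sketched in a few lines of Python. The Sortino variant below uses downside deviation relative to a 0% target, one common convention; all input returns are hypothetical:

```python
import numpy as np

def sharpe_ratio(returns, periods=252):
    """Annualized Sharpe ratio (inputs treated as excess returns, rf = 0)."""
    r = np.asarray(returns, dtype=float)
    return np.sqrt(periods) * r.mean() / r.std()

def sortino_ratio(returns, periods=252):
    """One common Sortino variant: downside deviation below a 0% target."""
    r = np.asarray(returns, dtype=float)
    downside_dev = np.sqrt(np.mean(np.minimum(r, 0.0) ** 2))
    return np.sqrt(periods) * r.mean() / downside_dev

def win_loss_stats(returns):
    """(fraction of winning trades, average profit / average loss)."""
    r = np.asarray(returns, dtype=float)
    wins, losses = r[r > 0], r[r < 0]
    return len(wins) / len(r), wins.mean() / abs(losses.mean())

daily = np.array([0.02, -0.01, 0.015, -0.005, 0.01])   # hypothetical returns
print(sharpe_ratio(daily), sortino_ratio(daily))
print(win_loss_stats(daily))
```

Because only the downside enters its denominator, the Sortino ratio is at least as large as the Sharpe ratio for the same return series.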

Efficiency Measures
Review: Performance Drivers of Quantitative Trading Strategies

Quantitative Investment Strategies are driven by four success factors: trade frequency, success ratio, return
distributions when right/wrong and leverage ratio

The higher the success ratio, the more likely it is to achieve a positive return over a one-year period. Higher
volatility of the underlying, assuming a constant success ratio, will lead to higher expected returns

The distribution of returns when being right / wrong is especially important for strategies with heavy long or short
bias. Strategies with balanced long/short positions and hence similar distributions when right/wrong are less
impacted by these distributional patterns. Downside risk can further be limited through active risk/money
management, e.g. stop loss orders

Leverage plays an important role to scale returns and can be seen as an artificial way to increase the volatility of
the traded underlying. It is at the core of the money management question to determine the ideal betting size. For
example, a 10 times leveraged position on an asset with 1% daily moves is similar to a full non-leveraged position
on an asset with 10% daily moves
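A toy check of the leverage/volatility equivalence above (all numbers hypothetical):

```python
# A 10x leveraged position on an asset with ~1% daily moves tracks an
# unleveraged position on an asset whose moves are 10 times as large.
small_moves = [0.01, -0.01, 0.02]            # quiet asset, hypothetical moves
big_moves = [10 * r for r in small_moves]    # volatile asset, 10x the moves

lev = 1.0
for r in small_moves:
    lev *= 1 + 10 * r                        # 10x leverage on the quiet asset

unlev = 1.0
for r in big_moves:
    unlev *= 1 + r                           # no leverage on the volatile asset

print(lev, unlev)                            # identical equity paths
```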

Summary and Questions

Performance Analysis is a critical aspect of portfolio management. We split the analysis into five metrics: return,
risk, efficiency, trade frequency and leverage

Return is expressed as the geometric mean growth rate of the portfolio

Risk is defined as the downside deviation of returns and can be expressed in terms such as standard deviation,
downside standard deviation, maximum drawdown and length of maximum drawdown

Efficiency measures set risk and return in relation. They are expressed through classical ratios such as Sharpe and
Treynor measure, but also more modern ones such as the Sortino ratio. Win/Loss and Average Profit/Loss also
indicate efficiency. A comparison to a benchmark is an indirect way of efficiency measurement as one targets a
better return than the benchmark with similar risk or similar returns with lower risk

Questions?


Sources

Portfolio Construction, Management, and Protection by Robert A. Strong

www.quantstart.com and www.quantgekko.com


Algorithmic Trading
Session 12
Performance Analysis II
Trade Frequency and Optimal Leverage
Oliver Steinki, CFA, FRM

Outline

Introduction

Trade Frequency

Optimal Leverage

Summary and Questions

Sources


Introduction
Where Do We Stand in the Algo Prop Trading Framework?

As we have seen, algorithmic proprietary trading strategies can be broken down into
three sequential steps: Signal Generation, Trade Implementation and Performance
Analysis

Performance Analysis is conducted after the trade has been closed and used in a
backtesting context to judge whether the strategy is successful or not. In general, we can
judge the performance according to five different metrics: return, risk, efficiency, trade
frequency and leverage

Sessions 10-12 deal with the question of analyzing performance

[Framework diagram: SIGNAL GENERATION (decide when and how to trade) → TRADE IMPLEMENTATION (size and execute orders, incl. exit orders) → PERFORMANCE ANALYSIS (return, risk and efficiency ratios)]

Session 10: Performance Measurement


Session 11: Performance Analysis I: Returns, risk and efficiency
Today's Session 12: Performance Analysis II: Frequency of trades and leverage


Introduction
Performance Analysis

Performance Analysis is a critical aspect of portfolio management. We split the analysis into five metrics: return,
risk, efficiency, trade frequency and leverage

Return is expressed as the geometric mean growth rate of the portfolio

Risk is defined as the downside deviation of returns and can be expressed in terms such as standard deviation,
downside standard deviation, maximum drawdown and length of maximum drawdown

Efficiency measures set risk and return in relation. They are expressed through classical ratios such as Sharpe and
Treynor measure, but also more modern ones such as the Sortino ratio. Win/Loss and Average Profit/Loss also
indicate efficiency. A comparison to a benchmark is an indirect way of efficiency measurement as one targets a
better return than the benchmark with similar risk or similar returns with lower risk

Trade frequency is important to judge the impact of transaction costs and infrastructure requirements. The higher
the trade frequency, the bigger the impact of transaction costs and the greater the need for a sophisticated infrastructure

Leverage is another expression for money management. It deals with the question of which percentage of the total
portfolio to invest in a given trade and how one can optimize this

Introduction
Review: Performance Drivers of Quantitative Trading Strategies

Quantitative Investment Strategies are driven by four success factors: trade frequency, success ratio, return
distributions when right/wrong and leverage ratio

The higher the success ratio, the more likely it is to achieve a positive return over a one-year period. Higher
volatility of the underlying, assuming a constant success ratio, will lead to higher expected returns

The distribution of returns when being right / wrong is especially important for strategies with heavy long or short
bias. Strategies with balanced long/short positions and hence similar distributions when right/wrong are less
impacted by these distributional patterns. Downside risk can further be limited through active risk/money
management, e.g. stop loss orders

Leverage plays an important role to scale returns and can be seen as an artificial way to increase the volatility of
the traded underlying. It is at the core of the money management question to determine the ideal betting size. For
example, a 10 times leveraged position on an asset with 1% daily moves is similar to a full non-leveraged position
on an asset with 10% daily moves

Trade Frequency
Classification

Time Frames:
Long Term: Months to Years
Short Term: Days, Weeks, Months
Intraday: Seconds to Hours
High Frequency: Fractions of Seconds

Hurdle Rate for Transaction Costs
The more frequent the trading, the higher the transaction costs

Physical Infrastructure
The more frequent the trading, the higher the requirement to have a sophisticated trading infrastructure
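A back-of-the-envelope hurdle-rate check makes the cost impact concrete; all numbers below are hypothetical assumptions:

```python
# Hypothetical: a strategy captures 20% gross alpha per year, spread evenly
# over N round-trip trades, each costing a fixed 4 bps to execute.
gross_alpha = 0.20
cost_per_trade = 0.0004      # 4 bps round-trip transaction cost (assumption)

nets = {n: gross_alpha - n * cost_per_trade for n in (50, 500, 5000)}
for n, net in nets.items():
    print(f"{n:5d} trades/year: net annual return {net:+.1%}")
```

With the same gross alpha, 50 trades a year keeps 18% net, 500 trades break even, and 5,000 trades are deeply negative: the per-trade edge must clear the transaction-cost hurdle, and the hurdle bites harder as frequency rises.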

Trade Frequency
Infrastructure Requirements

Data feeds: Although end of day data is available at low costs, tick by tick data can cost substantial amounts of
money

Data Storage: The more data you receive and the more frequently it arrives, the higher your data storage requirements. For
every underlying you track, you should store at least daily OHLC and volume data. In addition, you
should save metadata such as ticker, expiry date, contract multiplier, the exchange on which the instrument is traded,
the exchange's time zone, trading hours, currency and security type

Data Processing: The more complex your trade signal generation process, the more computational power you
will need. Additionally, the more time-critical your investment strategy, the more you need to invest in computational
power to minimize the delay between receiving the last piece of information needed for your trade signal
generation and implementing the trade

Backup Systems: When you start out, a simple backup in the cloud or on an external hard disk will do. As
you grow and attract institutional clients, they will be very keen on your business contingency processes.
Ideally, you run the same system in parallel in two or three locations so that trading is unaffected if one
system fails

Optimal Leverage
Risk Management

We'll discuss several methods of computing the optimal leverage that maximizes the compounded growth rate.
Each method has its own assumptions and drawbacks. But in all cases, we have to assume that the
future probability distribution of returns of the market is the same as in the past. This is usually an incorrect
assumption, but it is the best that quantitative models can do. Even more restrictively, many risk management
techniques further assume that the probability distribution of returns of the strategy itself is the same as in the past.
And finally, the most restrictive assumption of all is that the probability distribution of returns of the strategy is
Gaussian

As an institutional asset manager, client constraints limit the maximum drawdown of an account. In this case, the
maximum drawdown allowed forms an additional constraint in the leverage optimization problem

Also, it may be wise to avoid trading altogether during times when the risk of loss is high, hence setting leverage
close to zero. Therefore, many algorithmic trading strategies reduce exposure during times of increased expected
volatility, such as central bank announcements or significant macroeconomic releases

No matter how the optimal leverage is determined, the one central theme is that the leverage should be kept
constant. This is necessary to optimize the growth rate whether or not we have the maximum drawdown
constraint. Keeping a constant leverage may sound rather mundane, but can be counterintuitive when put into
action

Optimal Leverage
Kelly Formula

If one assumes that the probability distribution of returns is Gaussian, the Kelly formula gives us a very simple
answer for the optimal leverage f: f = m / s², where m is the mean excess return and s² is the variance of the excess
returns

It can be proven that if the Gaussian assumption is a good approximation, then the Kelly leverage f will generate
the highest compounded growth rate of equity, assuming that all profits are reinvested (see optional reading)

However, even if the Gaussian assumption is valid, we will inevitably suffer estimation errors when we try to
estimate the true mean and variance of the excess return. And no matter how good one's estimation
method is, there is no guarantee that the future mean and variance will be the same as the historical ones. The
consequence of using an overestimated mean or an underestimated variance is dire: either case will lead to an
overestimated optimal leverage, and if this overestimated leverage is high enough, it will eventually lead to ruin:
equity going to zero

On the other hand, the consequence of using an underestimated leverage is merely a submaximal compounded
growth rate. Many traders justifiably prefer the latter scenario, and they routinely deploy a leverage equal to half of
what the Kelly formula recommends: the so-called half-Kelly leverage
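A minimal sketch of the Kelly estimate on simulated data; the Gaussian parameters below are assumptions, not figures from the text:

```python
import numpy as np

def kelly_leverage(excess_returns):
    """Kelly formula under the Gaussian assumption: f = m / s^2."""
    r = np.asarray(excess_returns, dtype=float)
    return r.mean() / r.var()

# Stand-in data (assumption): Gaussian daily excess returns with a 5 bps
# mean and 1% volatility, so the true Kelly leverage is 0.0005 / 0.0001 = 5.
rng = np.random.default_rng(7)
r = rng.normal(loc=0.0005, scale=0.01, size=252 * 10)

f = kelly_leverage(r)
print(f"estimated Kelly leverage: {f:.2f}, half-Kelly: {0.5 * f:.2f}")
```

Even with ten years of simulated data, the estimate scatters around the true value of 5, which is exactly the estimation-error problem discussed above and the motivation for half-Kelly.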

Optimal Leverage
Simulated Returns

If one relaxes the Gaussian assumption and substitutes another analytic form (e.g., Student's t) for the returns
distribution to take into account the fat tails, we can still follow the derivations of the Kelly formula in Thorp's
paper and arrive at another optimal leverage, though the formula won't be as simple anymore (this is true as long
as the distribution has a finite number of moments, unlike, for example, the Pareto-Lévy distribution). For some
distributions, it may not even be possible to arrive at an analytic answer. This is where Monte Carlo simulations can
help

The expected value of the compounded growth rate as a function of the leverage f is (assuming for simplicity that
the risk-free rate is zero): g(f) = ⟨log(1 + f·R)⟩, where ⟨·⟩ indicates an average over some random sampling of
the unlevered return-per-bar R(t) of the strategy (not of the market prices) based on some probability distribution
of R

Even though we do not know the true distribution of R, we can use the so-called Pearson system to model it.
The Pearson system takes as input the mean, standard deviation, skewness, and kurtosis of the empirical
distribution of R, and models it as one of seven analytically expressible probability distributions, encompassing the
Gaussian, beta, gamma, Student's t, and so on. Of course, these are not the most general distributions possible. The
empirical distribution might have nonzero higher moments that are not captured by the Pearson system and might,
in fact, have infinite higher moments, as in the case of the Pareto-Lévy distribution. But trying to capture all the higher
moments invites data-snooping bias due to the limited amount of empirical data usually available. So, for all
practical purposes, we use the Pearson system for our Monte Carlo sampling
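A sketch of the Monte Carlo approach: instead of a full Pearson-system fit, it samples from a moment-scaled Student's t as a stand-in fat-tailed distribution (all parameters are assumptions) and grid-searches the leverage that maximizes the estimated g(f):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the Pearson-system fit (assumption): fat-tailed Student's t
# returns rescaled to a 4 bps mean and 1% standard deviation per bar.
df = 5
R = 0.0004 + 0.01 * rng.standard_t(df, size=100_000) / np.sqrt(df / (df - 2))

def growth(f):
    """Monte Carlo estimate of g(f) = <log(1 + f * R)>."""
    levered = 1 + f * R
    if np.any(levered <= 0):           # leverage high enough to hit ruin
        return -np.inf
    return np.log(levered).mean()

fs = np.linspace(0.0, 15.0, 301)       # grid search over leverage
g = np.array([growth(f) for f in fs])
f_opt = fs[np.argmax(g)]
print(f"Monte Carlo optimal leverage: {f_opt:.2f}")
```

The fat tails pull the optimal leverage below the Gaussian Kelly value of m/s² = 4, illustrating why the analytic formula overstates the safe leverage when returns are leptokurtic.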

Optimal Leverage
Optimizing Historical Growth Rate

Instead of optimizing the expected value of the growth rate under an analytical probability distribution of returns,
one can of course just optimize the historical growth rate in the backtest with respect to the leverage. We then need
only one particular realized set of returns: the one that actually occurred in the backtest

This method suffers the usual drawback of parameter optimization in backtests: data-snooping bias. In general, the
optimal leverage for this particular historical realization of the strategy returns won't be optimal for a different
realization that will occur in the future. Unlike Monte Carlo optimization, the historical returns offer insufficient
data to determine an optimal leverage that works well across many realizations

Despite these caveats, brute force optimization over the backtest returns sometimes does give a very similar
answer to both the Kelly leverage and Monte Carlo optimization
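The brute-force approach can be sketched as a grid search over one (hypothetical) realized return series:

```python
import numpy as np

# One particular historical realization of (hypothetical) per-bar returns.
hist = np.array([0.02, -0.01, 0.03, -0.02, 0.01, 0.015, -0.005])

def realized_growth(f):
    """Compounded growth rate per bar at leverage f for this realization."""
    levered = 1 + f * hist
    if np.any(levered <= 0):           # this leverage wiped out the account
        return -np.inf
    return np.log(levered).mean()

fs = np.linspace(0.0, 40.0, 801)       # brute-force grid over leverage
best_f = max(fs, key=realized_growth)
print(f"leverage maximizing historical growth: {best_f:.2f}")
print(f"Kelly estimate m / s^2: {hist.mean() / hist.var():.2f}")
```

On this toy series, the brute-force optimum lands in the same neighborhood as the Kelly estimate, matching the observation above that the two methods often give similar answers.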


Optimal Leverage
Maximum Drawdown

Most portfolio managers manage (at least in part) other people's assets, so maximizing the long-term
growth rate is not the only objective. Often, their clients (or employers) will insist that the absolute value of the
drawdown (the return calculated from the historic high water mark) never exceed a certain maximum. That is
to say, they dictate what the maximum drawdown can be. This requirement translates into an additional constraint
in our leverage optimization problem

Unfortunately, this translation is not as simple as multiplying the unconstrained optimal leverage by the ratio of the
maximum drawdown allowed to the original unconstrained maximum drawdown. So if the ideal leverage
of, say, 40% results in a maximum drawdown of 50% and your drawdown constraint is 25%, you cannot just
reduce the leverage to 20%. The scaling factor depends on the exact series of simulated returns, and so is not exactly
reproducible

In order to prevent future drawdowns from breaking the constraint, we can either use constant proportion
insurance or impose a stop loss


Optimal Leverage
CPPI

The often conflicting goals of wishing to maximize compounded growth rate while limiting the maximum
drawdown have been discussed already. There is one method that allows us to fulfill both wishes: constant
proportion portfolio insurance (CPPI)

Suppose the optimal Kelly leverage of our strategy is determined to be f, and suppose we are allowed a maximum
drawdown of D. We can simply set aside a fraction D of our initial total account equity for trading, and apply a leverage of
f to this subaccount to determine our portfolio market value. The other 1 - D of the account will sit in
cash. We can then be assured that we won't lose all of the equity of this subaccount, or, equivalently, we won't
suffer a drawdown of more than D in our total account

If the trading strategy is profitable and the total account equity reaches a new high water mark, then we can reset
our subaccount equity so that it is again a fraction D of the total equity, moving some cash back to the cash account.
However, if the strategy suffers losses, we will not transfer any cash between the cash and the trading subaccount.
Of course, if the losses continue and we lose all the equity in the trading subaccount, we have to abandon the
strategy because it has reached our maximum allowed drawdown of D. Therefore, in addition to limiting our
drawdown, this scheme serves as a graceful, principled way to wind down a losing strategy
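A sketch of the CPPI bookkeeping described above, with hypothetical parameters and returns; the in-loop assertion checks the drawdown guarantee:

```python
# Sketch of the CPPI scheme, with hypothetical parameters and returns.
D = 0.25                 # maximum drawdown we are allowed
f = 4.0                  # Kelly leverage applied to the trading subaccount

total = 1.0
high_water = total
sub = D * total          # equity at risk in the trading subaccount
cash = total - sub

for r in [0.02, -0.05, 0.03, 0.06, -0.04]:     # hypothetical strategy returns
    sub *= 1 + f * r                           # leveraged P&L hits sub only
    total = cash + sub
    if total > high_water:                     # new high water mark:
        high_water = total                     # top the subaccount back up
        sub = D * total
        cash = total - sub
    # losses never trigger a transfer, so the drawdown stays capped at D
    assert total >= (1 - D) * high_water - 1e-12

print(f"final equity: {total:.4f}, high water mark: {high_water:.4f}")
```

Because cash only flows into the subaccount at new high water marks, at most D of total equity is ever at risk, so the total account can never fall more than D below its high water mark.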


Optimal Leverage
Stop Loss

There are two ways to use stop losses. The common usage is to use stop loss to exit an existing position whenever
its unrealized P&L drops below a threshold. But after we exit this position, we are free to reenter into a new
position, perhaps even one of the same sign, sometime later. In other words, we are not concerned about the
cumulative P&L or the drawdown of the strategy

The less common usage is to use a stop loss to exit the strategy completely when the drawdown drops below a
threshold. This usage of stop loss is awkward: it can happen only once during the lifetime of a strategy, and
ideally we would never have to use it. That is the reason why CPPI is preferred over using a stop loss for the same
protection.

A stop loss can only prevent the unrealized P&L from exceeding our self-imposed limit if the market is always open
whenever we are holding a position. For example, it is effective if we do not hold positions after the market closes,
or if we are trading currencies or futures whose electronic markets are open 24/5. Otherwise, if
prices gap down or up when the market reopens, the stop loss may be executed at a price much worse than what
our maximum allowable loss dictates

In some extreme circumstances, a stop loss is useless even when the market is open, if all liquidity providers
withdraw their liquidity simultaneously, as happened during the flash crash in May 2010.


Summary and Questions

Performance Analysis is a critical aspect of portfolio management. We split the analysis into five metrics: return,
risk, efficiency, trade frequency and leverage

Trade frequency is important to judge the impact of transaction costs and infrastructure requirements. The higher
the trade frequency, the bigger the impact of transaction costs and the greater the need for a sophisticated infrastructure

Leverage is another expression for money management. It deals with the question of which percentage of the total
portfolio to invest in a given trade and how one can optimize this

Questions?


Sources

Portfolio Construction, Management, and Protection by Robert A. Strong

Algorithmic Trading: Winning Strategies and Their Rationale by Ernest P. Chan

www.quantstart.com and www.quantgekko.com
