
SF2943: Time Series Analysis

Kalman Filtering
Timo Koski
09.05.2013
Contents
The Prediction Problem
State process AR(1), Observation Equation, PMKF (= Poor Man's Kalman Filter)
Technical Steps
Kalman Gain, Kalman Predictor, Innovations Representation
The Riccati Equation, The Algebraic Riccati Equation
Examples
Introduction: Optimal Estimation of a Random Variable
In the preceding lectures of SF2943 we have been dealing with various instances of the general problem of estimating $X$ with a linear combination
$$a_1 Y_1 + \ldots + a_N Y_N,$$
selecting the parameters $a_1, \ldots, a_N$ so that
$$E\left[\left(X - (a_1 Y_1 + \ldots + a_N Y_N)\right)^2\right] \qquad (1)$$
is minimized (Minimal Mean Squared Error). The following has been shown.
Introduction: Optimal Solution
Suppose $Y_1, \ldots, Y_N$ and $X$ are random variables, all with zero means, and
$$r_{mk} = E[Y_m Y_k], \quad m = 1, \ldots, N; \; k = 1, \ldots, N, \qquad (2)$$
$$r_{0m} = E[Y_m X], \quad m = 1, \ldots, N.$$
Then
$$E\left[\left(X - (a_1 Y_1 + \ldots + a_N Y_N)\right)^2\right] \qquad (3)$$
is minimized if the coefficients $a_1, \ldots, a_N$ satisfy the Wiener-Hopf equations, requiring the inversion of an $N \times N$ matrix,
$$\sum_{k=1}^{N} a_k r_{mk} = r_{0m}; \quad m = 1, \ldots, N. \qquad (4)$$
Introduction: Wiener-Hopf Equations
The equations
$$\sum_{k=1}^{N} a_k r_{mk} = r_{0m}; \quad m = 1, \ldots, N, \qquad (5)$$
are often called the Wiener-Hopf equations.
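For a fixed $N$, solving the Wiener-Hopf equations amounts to solving one $N \times N$ linear system. A minimal numerical sketch (the covariance values below are invented purely for illustration, and NumPy is assumed):

```python
import numpy as np

# Hypothetical second-order moments for N = 3 zero-mean observations:
# R[m, k] = E[Y_m Y_k] and r0[m] = E[Y_m X]; the numbers are made up.
R = np.array([[1.00, 0.50, 0.25],
              [0.50, 1.00, 0.50],
              [0.25, 0.50, 1.00]])
r0 = np.array([0.8, 0.6, 0.4])

# The optimal coefficients a_1, ..., a_N of (4) solve the linear system R a = r0,
# i.e. they require the inversion of an N x N matrix.
a = np.linalg.solve(R, r0)

# Linear MMSE estimate of X from an observation vector y = (Y_1, ..., Y_N).
x_hat = a @ np.array([0.3, -0.1, 0.7])
print(a, x_hat)
```

The point made on the following slides is that redoing this solve from scratch every time a new observation arrives scales badly, which is exactly what the Kalman recursions avoid.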
Norbert Wiener, 1894-1964; Professor of Mathematics at MIT.
Rudolf E. Kalman
The Kalman filter has brought a fundamental reformation in the classical theory of time series prediction originated by N. Wiener. The recursive algorithm (to be derived) was invented by Rudolf E. Kalman¹. His original work is found in [3].

¹ b. 1930 in Budapest, but studied and graduated in electrical engineering in the USA; Professor Emeritus in Mathematics at ETH, the Swiss Federal Institute of Technology in Zürich.
U.S. President Barack Obama (R) presents a 2008 National Medal of Science to Rudolf Kalman (L) at an East Room ceremony, October 7, 2009, at the White House in Washington.
Introduction: Additional Samples
We augment the notation by a dependence on the number of data points, $N$, as
$$\widehat{Z}_N = a_1(N) Y_1 + \ldots + a_N(N) Y_N.$$
Suppose now that we obtain one more measurement, $Y_{N+1}$. Then we need to find
$$\widehat{Z}_{N+1} = a_1(N+1) Y_1 + \ldots + a_{N+1}(N+1) Y_{N+1}.$$
In principle we can, by the above, find $a_1(N+1), \ldots, a_{N+1}(N+1)$ by solving the Wiener-Hopf equations, requiring the inversion of an $(N+1) \times (N+1)$ matrix,
$$\sum_{k=1}^{N+1} a_k(N+1)\, r_{mk} = r_{0m}; \quad m = 1, \ldots, N+1, \qquad (6)$$
but if new observations $Y_{N+1}, Y_{N+2}, \ldots$ are gathered sequentially in real time, this will soon become practically unfeasible.
Kalman Filtering
Kalman filtering is a technique by which we calculate $\widehat{Z}_{N+1}$ recursively using $\widehat{Z}_N$ and the latest sample $Y_{N+1}$. This requires a dynamic state space representation for the observed time series $Y = \{Y_n\}$ with $X = \{X_n\}$ as the state process. We consider the simplest special case.

The Kalman recursions are usually established for multivariate time series applying matrix equations, see, e.g., pp. 137-142 in [5]. However, some of the basic principles can be made intelligible by a simpler approach involving only scalar time series². The presentation in this lecture is to a large degree based on the treatment in [2].

² R.M. du Plessis: Poor Man's Explanation of Kalman Filtering. North American Rockwell Electronics Group, June 1967.
The State Model (1): AR(1)
The state model is AR(1), i.e.,
$$X_n = \phi X_{n-1} + Z_{n-1}, \quad n = 0, 1, 2, \ldots, \qquad (1)$$
where $\{Z_n\}$ is a white noise with expectation zero, and $|\phi| < 1$ (so that we may regard $Z_{n-1}$ as uncorrelated with $X_{n-1}, X_{n-2}, \ldots$). We have
$$R_Z(k) = E[Z_n Z_{n+k}] = \sigma^2 \delta_{0,k} = \begin{cases} \sigma^2 & k = 0 \\ 0 & k \neq 0 \end{cases} \qquad (2)$$
We assume that $E[X_0] = 0$ and $\mathrm{Var}(X_0) = \sigma_0^2$.
The Observation Equation (2): State plus Noise
The true state $X_n$ is hidden from us; we see $X_n$ with added white measurement noise $V_n$, i.e., as $Y_n$ in
$$Y_n = c X_n + V_n, \quad n = 0, 1, 2, \ldots, \qquad (3)$$
where
$$R_V(k) = E[V_n V_{n+k}] = \sigma_V^2 \delta_{0,k} = \begin{cases} \sigma_V^2 & k = 0 \\ 0 & k \neq 0 \end{cases} \qquad (4)$$
The state and measurement noises $\{Z_n\}$ and $\{V_n\}$, respectively, are independent.
A Special Case: Poor Man's Kalman Filter (PMKF)
Assume now in (1) that $\phi = 1$ and $\sigma^2 = 0$, i.e.,
$$X_n = X_0, \quad n = 0, 1, \ldots \qquad (5)$$
We take $c = 1$, so that
$$Y_n = X_0 + V_n, \quad n = 0, 1, 2, \ldots \qquad (6)$$
This is the statistical model of several measurements of one random variable. We shall obtain the Kalman recursions for estimating $X_0$ using sequentially $Y_0, Y_1, \ldots, Y_n, \ldots$, as a special case of the Kalman predictor of $X_n$ in (1) (to be derived).
Recursive Prediction
We want to estimate $X_n$ in (1) using $Y_0, \ldots, Y_{n-1}$ accrued according to (3), so that
$$E\left[\left(X_n - (a_1(n) Y_{n-1} + \ldots + a_n(n) Y_0)\right)^2\right] \qquad (7)$$
is minimized. Next, we obtain $Y_n$ and want to estimate $X_{n+1}$ using $Y_0, \ldots, Y_n$ so that
$$E\left[\left(X_{n+1} - (a_1(n+1) Y_n + \ldots + a_{n+1}(n+1) Y_0)\right)^2\right] \qquad (8)$$
is minimized.
Recursive Prediction
Let us set
$$\widehat{X}_{n+1} \stackrel{\mathrm{def}}{=} a_1(n+1) Y_n + \ldots + a_{n+1}(n+1) Y_0, \quad \text{or} \quad \widehat{X}_{n+1} = \sum_{k=1}^{n+1} a_k(n+1)\, Y_{n+1-k}, \qquad (9)$$
and
$$\widehat{X}_n \stackrel{\mathrm{def}}{=} a_1(n) Y_{n-1} + \ldots + a_n(n) Y_0, \quad \text{or} \quad \widehat{X}_n = \sum_{k=1}^{n} a_k(n)\, Y_{n-k}. \qquad (10)$$
As stated above, $a_1(n+1), \ldots, a_{n+1}(n+1)$ satisfy the Wiener-Hopf equations
$$\sum_{k=1}^{n+1} a_k(n+1)\, E[Y_m Y_{n+1-k}] = E[Y_m X_{n+1}]; \quad m = 0, \ldots, n. \qquad (11)$$
Recursive Prediction: the road map
The road map is as follows:
- We express $a_k(n+1)$ for $k = 2, \ldots, n+1$ recursively as functions of the $a_k(n)$'s.
- We show that $\widehat{X}_{n+1} = \phi \widehat{X}_n + a_1(n+1)\left(Y_n - c \widehat{X}_n\right)$.
- We find a recursion for $e_{n+1} = X_{n+1} - \widehat{X}_{n+1}$.
- We determine finally $a_1(n+1)$ by minimizing $E\left[e_{n+1}^2\right]$.
Two Auxiliary Formulas
Lemma
$$E[Y_m X_{n+1}] = \phi\, E[Y_m X_n], \quad m = 0, \ldots, n-1, \qquad (12)$$
and for $n \geq 1$
$$E[Y_m X_n] = \frac{E[Y_m Y_n]}{c}; \quad m = 0, \ldots, n-1. \qquad (13)$$
Proof: From the state equation (1) we get
$$E[Y_m X_{n+1}] = E[Y_m(\phi X_n + Z_n)] = \phi\, E[Y_m X_n] + E[Y_m Z_n].$$
Here $E[Y_m Z_n] = E[Y_m]\, E[Z_n] = 0$, since $Z_n$ is a white noise independent of $V_m$ and $X_m$ for $m \leq n$. Next, from (3),
$$E[Y_m X_n] = E[Y_m (Y_n - V_n)/c] = E[Y_m Y_n]/c,$$
as claimed.
The crucial step
We start from (11), and use (12) and (13) together with the fact that the $a_k(n)$'s satisfy the Wiener-Hopf equations
$$\sum_{k=1}^{n} a_k(n)\, E[Y_m Y_{n-k}] = E[Y_m X_n]; \quad m = 0, \ldots, n-1. \qquad (14)$$
The lemma below still leaves us with one free parameter, i.e., $a_1(n+1)$, which will be determined in the sequel.
The crucial step
Lemma
$$a_{k+1}(n+1) = a_k(n)\left(\phi - a_1(n+1)\, c\right); \quad k = 1, \ldots, n. \qquad (15)$$
The crucial step: the proof
Proof: Let $m < n$. From (9) and (11) we get
$$E[Y_m X_{n+1}] = \sum_{k=1}^{n+1} a_k(n+1)\, E[Y_m Y_{n+1-k}] \qquad (16)$$
$$= a_1(n+1)\, E[Y_m Y_n] + \sum_{k=2}^{n+1} a_k(n+1)\, E[Y_m Y_{n+1-k}]. \qquad (17)$$
We change the index of summation from $k$ to $l = k - 1$. Then
$$\sum_{k=2}^{n+1} a_k(n+1)\, E[Y_m Y_{n+1-k}] = \sum_{l=1}^{n} a_{l+1}(n+1)\, E[Y_m Y_{n-l}]. \qquad (18)$$
The crucial step: the proof
On the other hand, (12) yields for the left hand side of (16) that
$$E[Y_m X_{n+1}] = \phi\, E[Y_m X_n],$$
and from (13),
$$a_1(n+1)\, E[Y_m Y_n] = a_1(n+1)\, c\, E[Y_m X_n].$$
The crucial step: the proof
Thus we can write (16), (17) by means of (18) as
$$\left(\phi - a_1(n+1)\, c\right) E[Y_m X_n] = \sum_{l=1}^{n} a_{l+1}(n+1)\, E[Y_m Y_{n-l}]. \qquad (19)$$
Now we compare with the system of equations in (14), i.e.,
$$\sum_{k=1}^{n} a_k(n)\, E[Y_m Y_{n-k}] = E[Y_m X_n]; \quad m = 0, \ldots, n-1.$$
It must hold that
$$a_k(n) = \frac{a_{k+1}(n+1)}{\phi - a_1(n+1)\, c} \quad \text{or} \quad a_{k+1}(n+1) = a_k(n)\left(\phi - a_1(n+1)\, c\right).$$
A Recursion with $a_1(n+1)$ undetermined
We can keep $a_1(n+1)$ undetermined and still get the following result.
Lemma
$$\widehat{X}_{n+1} = \phi \widehat{X}_n + a_1(n+1)\left(Y_n - c \widehat{X}_n\right). \qquad (20)$$
A Recursion with $a_1(n+1)$ undetermined: proof
Proof: By (9) and the same trick of changing the index of summation as above we obtain
$$\widehat{X}_{n+1} = \sum_{k=1}^{n+1} a_k(n+1)\, Y_{n+1-k} = a_1(n+1)\, Y_n + \sum_{k=1}^{n} a_{k+1}(n+1)\, Y_{n-k},$$
so from (15) in the preceding lemma
$$= a_1(n+1)\, Y_n + \sum_{k=1}^{n} a_k(n)\left[\phi - a_1(n+1)\, c\right] Y_{n-k}$$
$$= a_1(n+1)\, Y_n + \phi \sum_{k=1}^{n} a_k(n)\, Y_{n-k} - c\, a_1(n+1) \sum_{k=1}^{n} a_k(n)\, Y_{n-k}.$$
From (10) we obviously find that the right hand side equals
$$\phi \widehat{X}_n + a_1(n+1)\left(Y_n - c \widehat{X}_n\right),$$
as was to be proved.
A Recursion with $a_1(n+1)$ undetermined: proof
We choose the value for $a_1(n+1)$ which minimizes the second moment of the state prediction error/innovation defined as
$$e_{n+1} = X_{n+1} - \widehat{X}_{n+1}.$$
First we find recursions for $e_{n+1}$ and $E\left[e_{n+1}^2\right]$.
Lemma
$$e_{n+1} = \left[\phi - a_1(n+1)\, c\right] e_n + Z_n - a_1(n+1)\, V_n, \qquad (21)$$
and
$$E\left[e_{n+1}^2\right] = \left[\phi - a_1(n+1)\, c\right]^2 E\left[e_n^2\right] + \sigma^2 + a_1^2(n+1)\, \sigma_V^2. \qquad (22)$$
A Recursion with $a_1(n+1)$ undetermined: proof
Proof: We have from the equations above that
$$e_{n+1} = X_{n+1} - \widehat{X}_{n+1} = \phi X_n + Z_n - \phi \widehat{X}_n - a_1(n+1)\left(c X_n + V_n - c \widehat{X}_n\right)$$
$$= \phi\left(X_n - \widehat{X}_n\right) - a_1(n+1)\, c\left(X_n - \widehat{X}_n\right) + Z_n - a_1(n+1)\, V_n,$$
and with $e_n = X_n - \widehat{X}_n$ the result is (21).
If we square both sides of (21) we get
$$e_{n+1}^2 = \left[\phi - a_1(n+1)\, c\right]^2 e_n^2 + \left(Z_n - a_1(n+1)\, V_n\right)^2 + 2\left[\phi - a_1(n+1)\, c\right] e_n \left(Z_n - a_1(n+1)\, V_n\right).$$
By the properties above
$$E[e_n Z_n] = E[Z_n V_n] = E[e_n V_n] = 0.$$
(Note that $\widehat{X}_n$ uses only $Y_{n-1}, Y_{n-2}, \ldots$ and is thus uncorrelated with $V_n$.)
Thus we have $E\left[e_{n+1}^2\right]$ as asserted.
A Recursion with $a_1(n+1)$ determined
We can apply orthogonality to find the expression for $a_1(n+1)$, but the differentiation to be invoked in the next lemma gives a faster argument.
Lemma
$$a_1(n+1) = \frac{c\, \phi\, E\left[e_n^2\right]}{\sigma_V^2 + c^2 E\left[e_n^2\right]} \qquad (23)$$
minimizes $E\left[e_{n+1}^2\right]$.
A Recursion with $a_1(n+1)$ determined: proof
Proof: We differentiate (22) w.r.t. $a_1(n+1)$ and set the derivative equal to zero. This entails
$$-2 c\left[\phi - a_1(n+1)\, c\right] E\left[e_n^2\right] + 2 a_1(n+1)\, \sigma_V^2 = 0,$$
or
$$a_1(n+1)\left(c^2 E\left[e_n^2\right] + \sigma_V^2\right) = c\, \phi\, E\left[e_n^2\right],$$
which yields (23) as claimed.
A Preliminary Summary
We change notation for the $a_1(n+1)$ determined in Lemma 5, (23). The traditional way (in the engineering literature) is to write this quantity as the predictor gain, also known as the Kalman gain $K(n)$, or
$$K(n) \stackrel{\mathrm{def}}{=} \frac{c\, \phi\, E\left[e_n^2\right]}{\sigma_V^2 + c^2 E\left[e_n^2\right]}. \qquad (24)$$
The initial conditions are
$$\widehat{X}_0 = 0, \quad E\left[e_0^2\right] = \sigma_0^2. \qquad (25)$$
Then we have obtained above that
$$E\left[e_{n+1}^2\right] = \left[\phi - K(n)\, c\right]^2 E\left[e_n^2\right] + \sigma^2 + K^2(n)\, \sigma_V^2, \qquad (26)$$
and
$$\widehat{X}_{n+1} = K(n)\left(Y_n - c \widehat{X}_n\right) + \phi \widehat{X}_n. \qquad (27)$$
The Kalman Prediction Filter
Proposition
$$K(n) \stackrel{\mathrm{def}}{=} \frac{c\, \phi\, E\left[e_n^2\right]}{\sigma_V^2 + c^2 E\left[e_n^2\right]}, \qquad (28)$$
$$E\left[e_{n+1}^2\right] = \left[\phi - K(n)\, c\right]^2 E\left[e_n^2\right] + \sigma^2 + K^2(n)\, \sigma_V^2, \qquad (29)$$
and
$$\widehat{X}_{n+1} = K(n)\left(Y_n - c \widehat{X}_n\right) + \phi \widehat{X}_n. \qquad (30)$$
The equations (28)-(30) are an algorithm for recursive computation of $\widehat{X}_{n+1}$ for all $n \geq 0$. This is a simple case of an important statistical processor of time series data known as the Kalman filter.
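A minimal sketch of the recursions (28)-(30) in code (Python/NumPy; the function name and interface are mine, not from the slides):

```python
import numpy as np

def kalman_predictor(y, phi, c, sigma2, sigma2_V, sigma2_0):
    """One-step Kalman predictor for X_n = phi*X_{n-1} + Z_{n-1},
    Y_n = c*X_n + V_n, following (28)-(30) with initial conditions (25).
    Returns the predictions X_hat, the error variances E[e_n^2], and the gains."""
    N = len(y)
    x_hat = np.zeros(N + 1)          # x_hat[0] = 0, see (25)
    p = np.zeros(N + 1)              # p[n] = E[e_n^2]
    p[0] = sigma2_0
    K = np.zeros(N)
    for n in range(N):
        K[n] = c * phi * p[n] / (sigma2_V + c**2 * p[n])                     # (28)
        p[n + 1] = (phi - K[n] * c)**2 * p[n] + sigma2 + K[n]**2 * sigma2_V  # (29)
        x_hat[n + 1] = K[n] * (y[n] - c * x_hat[n]) + phi * x_hat[n]         # (30)
    return x_hat, p, K
```

Note that the gain and error-variance recursions (28)-(29) do not involve the data, so $K(n)$ and $E[e_n^2]$ could equally well be precomputed before any observation arrives.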
The Kalman Prediction Filter
A Kalman filter is, in view of (30), predicting $X_{n+1}$ using $\widehat{X}_n$ and additively correcting the prediction by the measured innovations
$$\varepsilon_n = Y_n - c \widehat{X}_n.$$
Here $\varepsilon_n$ is the part of $Y_n$ which is not exhausted by $c \widehat{X}_n$. The innovations are modulated by the filter gains $K(n)$, which depend on the error variances $E\left[e_n^2\right]$.
The Kalman Prediction Filter
The recursive nature of the Kalman filter cannot be overemphasized: the filter processes one measurement $Y_n$ at a time, instead of all measurements. The data preceding $Y_n$, i.e., $Y_0, \ldots, Y_{n-1}$, are summarized in $\widehat{X}_n$; no past data need be stored. Each estimation is identical in procedure to those that took place before it, but each has a new weighting factor computed to take into account the sum total effect of all the previous estimates.
Block-Diagrams
(Two block-diagram figures of the Kalman predictor; not reproduced here.)
The Kalman Prediction Filter: A Special Case
Assume $\sigma_V = 0$ and $c = 1$. Then (28) becomes
$$K(n) = \phi, \qquad (31)$$
and (29) boils down to
$$E\left[e_{n+1}^2\right] = \sigma^2, \qquad (32)$$
and (30) yields, since $c = 1$,
$$\widehat{X}_{n+1} = \phi Y_n = \phi X_n. \qquad (33)$$
But this verifies the familiar formula for one-step MSE-prediction of an AR(1) process.
Innovations Representation
We can write the filter also with an innovations representation
$$\widehat{X}_{n+1} = \phi \widehat{X}_n + K(n)\, \varepsilon_n, \qquad (34)$$
$$Y_n = c \widehat{X}_n + \varepsilon_n, \qquad (35)$$
which by comparison with (1) and (3) shows that the Kalman filter follows equations similar to the original ones, but is driven by the innovations $\varepsilon_n$ as noise.
Riccati Recursion for the Variance of the Prediction Error
We find a further recursion for $E\left[e_{n+1}^2\right]$. We start with a general identity for Minimal Mean Squared Error estimation. We have
$$E\left[e_{n+1}^2\right] = E\left[X_{n+1}^2\right] - E\left[\widehat{X}_{n+1}^2\right]. \qquad (36)$$
To see this, let us note that
$$E\left[e_{n+1}^2\right] = E\left[\left(X_{n+1} - \widehat{X}_{n+1}\right)^2\right] = E\left[X_{n+1}^2\right] - 2 E\left[X_{n+1} \widehat{X}_{n+1}\right] + E\left[\widehat{X}_{n+1}^2\right].$$
Here
$$E\left[X_{n+1} \widehat{X}_{n+1}\right] = E\left[\left(\widehat{X}_{n+1} + e_{n+1}\right) \widehat{X}_{n+1}\right] = E\left[\widehat{X}_{n+1}^2\right] + E\left[e_{n+1} \widehat{X}_{n+1}\right].$$
But by the orthogonality principle of Minimal Mean Squared Error estimation we have
$$E\left[e_{n+1} \widehat{X}_{n+1}\right] = 0,$$
and this proves (36).
Riccati Recursion
We note next, writing
$$\varepsilon_n = Y_n - c \widehat{X}_n = c X_n + V_n - c \widehat{X}_n = c\, e_n + V_n,$$
that
$$E\left[\varepsilon_n^2\right] = \sigma_V^2 + c^2 E\left[e_n^2\right]. \qquad (37)$$
Riccati Recursion for the Variance of the Prediction Error
When we use this formula we obtain from (30), or $\widehat{X}_{n+1} = K(n)\, \varepsilon_n + \phi \widehat{X}_n$, that
$$E\left[\widehat{X}_{n+1}^2\right] = \phi^2 E\left[\widehat{X}_n^2\right] + K(n)^2\left(\sigma_V^2 + c^2 E\left[e_n^2\right]\right). \qquad (38)$$
But in view of our definition of the Kalman gain in (28) we have
$$K(n)^2\left(\sigma_V^2 + c^2 E\left[e_n^2\right]\right) = \frac{\phi^2 c^2 \left(E\left[e_n^2\right]\right)^2}{\sigma_V^2 + c^2 E\left[e_n^2\right]}.$$
As in [1, p. 273] we set
$$\theta_n = c\, \phi\, E\left[e_n^2\right], \qquad (39)$$
and
$$\Delta_n = \sigma_V^2 + c^2 E\left[e_n^2\right]. \qquad (40)$$
Riccati Recursion for the Variance of the Prediction Error
Next we use the state equation (1) to get
$$E\left[X_{n+1}^2\right] = \phi^2 E\left[X_n^2\right] + \sigma^2. \qquad (41)$$
Hence we have by (36), (41) and (38)
$$E\left[e_{n+1}^2\right] = \phi^2 E\left[X_n^2\right] + \sigma^2 - \phi^2 E\left[\widehat{X}_n^2\right] - \frac{\theta_n^2}{\Delta_n},$$
or, again using (36),
$$E\left[e_{n+1}^2\right] = \phi^2 E\left[e_n^2\right] + \sigma^2 - \frac{\theta_n^2}{\Delta_n}. \qquad (42)$$
The Kalman Filter with Riccati Recursion
Hence we have shown the following proposition (cf. [1, p. 273]) about Kalman prediction.
Proposition
$$\widehat{X}_{n+1} = \phi \widehat{X}_n + \frac{\theta_n}{\Delta_n}\, \varepsilon_n, \qquad (43)$$
and if $e_{n+1} = X_{n+1} - \widehat{X}_{n+1}$, then
$$E\left[e_{n+1}^2\right] = \phi^2 E\left[e_n^2\right] + \sigma^2 - \frac{\theta_n^2}{\Delta_n}, \qquad (44)$$
where
$$\theta_n = c\, \phi\, E\left[e_n^2\right], \qquad (45)$$
and
$$\Delta_n = \sigma_V^2 + c^2 E\left[e_n^2\right]. \qquad (46)$$
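As a quick sanity check (not part of the slides), one can verify numerically that the gain form (29) and the Riccati form (44)-(46) generate the same error-variance sequence; the parameter values below are arbitrary:

```python
# Arbitrary parameters; p_a and p_b both start at sigma2_0 = 1.
phi, c, sigma2, sigma2_V = 0.6, 1.0, 1.0, 0.1
p_a = p_b = 1.0
for _ in range(25):
    K = c * phi * p_a / (sigma2_V + c**2 * p_a)               # (28)
    p_a = (phi - K * c)**2 * p_a + sigma2 + K**2 * sigma2_V    # (29)
    theta = c * phi * p_b                                      # (45)
    delta = sigma2_V + c**2 * p_b                              # (46)
    p_b = phi**2 * p_b + sigma2 - theta**2 / delta             # (44)
    assert abs(p_a - p_b) < 1e-12
```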
PMKF
In the PMKF we have $\phi = 1$, $c = 1$ and $\sigma = 0$; we are estimating sequentially a random variable without time dynamics. We need to rewrite the preceding a bit, see [5]. First we have
$$\frac{1}{E\left[e_{n+1}^2\right]} = \frac{1}{E\left[e_n^2\right]} + \frac{1}{\sigma_V^2}, \qquad (47)$$
since in this case (44) becomes
$$E\left[e_{n+1}^2\right] = E\left[e_n^2\right] - \frac{\left(E\left[e_n^2\right]\right)^2}{\sigma_V^2 + E\left[e_n^2\right]},$$
and by some elementary algebra
$$E\left[e_{n+1}^2\right] = \frac{\sigma_V^2\, E\left[e_n^2\right]}{\sigma_V^2 + E\left[e_n^2\right]}.$$
From this
$$\frac{1}{E\left[e_{n+1}^2\right]} = \frac{\sigma_V^2 + E\left[e_n^2\right]}{\sigma_V^2\, E\left[e_n^2\right]} = \frac{1}{E\left[e_n^2\right]} + \frac{1}{\sigma_V^2}.$$
PMKF
Then iteration gives that
$$\frac{1}{E\left[e_n^2\right]} = \frac{1}{\sigma_0^2} + \frac{n}{\sigma_V^2}. \qquad (48)$$
We need in view of (43) to compute the Kalman gain for the case at hand, or
$$\frac{\theta_n}{\Delta_n} = \frac{E\left[e_n^2\right]}{\sigma_V^2 + E\left[e_n^2\right]} = \frac{1}{\frac{\sigma_V^2}{E\left[e_n^2\right]} + 1},$$
where we now use (48) to get
$$= \frac{1}{\frac{\sigma_V^2}{\sigma_0^2} + n + 1} = \frac{\sigma_0^2}{\sigma_V^2 + \sigma_0^2\,(n+1)}.$$
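A short numerical check of (47)-(48) (plain Python; the variance values are chosen arbitrarily):

```python
# Arbitrary prior and measurement-noise variances.
sigma2_0, sigma2_V = 2.0, 0.5
p = sigma2_0                                     # E[e_0^2]
for n in range(10):
    assert abs(1.0 / p - (1.0 / sigma2_0 + n / sigma2_V)) < 1e-12   # (48)
    p = sigma2_V * p / (sigma2_V + p)            # PMKF form of (44), as above
```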
PMKF
Thus we have found the Poor Man's Kalman Filter
$$\widehat{X}_{n+1} = \widehat{X}_n + \frac{\sigma_0^2}{(n+1)\,\sigma_0^2 + \sigma_V^2}\left(Y_n - \widehat{X}_n\right). \qquad (49)$$
The poor man's cycle of computation:
- You have computed $\widehat{X}_n$.
- You receive $Y_n$.
- Update the gain to $\dfrac{\sigma_0^2}{(n+1)\,\sigma_0^2 + \sigma_V^2}$.
- Compute $\widehat{X}_{n+1}$ by adding the correction to $\widehat{X}_n$.
- Replace $\widehat{X}_n$ by $\widehat{X}_{n+1}$ and repeat.

A code sketch of this cycle is given below.
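A sketch of the PMKF cycle in code (Python; the function name is mine):

```python
def pmkf(y, sigma2_0, sigma2_V):
    """Poor Man's Kalman Filter (49): sequential estimate of X_0 from
    measurements Y_n = X_0 + V_n, starting from X_hat_0 = 0."""
    x_hat = 0.0
    for n, y_n in enumerate(y):
        gain = sigma2_0 / ((n + 1) * sigma2_0 + sigma2_V)
        x_hat = x_hat + gain * (y_n - x_hat)   # correction step of (49)
    return x_hat
```

With a very diffuse prior, $\sigma_0^2 \to \infty$, the gain tends to $1/(n+1)$ and the recursion reduces to the running sample mean of the observations.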
Riccati Recursion for the Variance of the Prediction Error
Recall (44), or
$$E\left[e_{n+1}^2\right] = \phi^2 E\left[e_n^2\right] + \sigma^2 - \frac{\theta_n^2}{\Delta_n}. \qquad (50)$$
The recursion in (50) is a first order nonlinear difference equation known as the Riccati equation³ for the prediction error variance.

³ Named after Jacopo Francesco Riccati, 1676-1754, a mathematician born in Venice who wrote on philosophy, physics and differential equations. http://www-groups.dcs.st-and.ac.uk/history/Biographies/Riccati.html
The Stationary Kalman Filter
We say that the prediction filter has reached a steady state if $E\left[e_n^2\right]$ is a constant, say $P = E\left[e_n^2\right]$, that does not depend on $n$. Then the Riccati equation in (44) becomes a quadratic algebraic Riccati equation
$$P = \phi^2 P + \sigma^2 - \frac{\phi^2 c^2 P^2}{\sigma_V^2 + c^2 P}, \qquad (51)$$
or
$$P = \sigma^2 + \frac{\sigma_V^2\, \phi^2\, P}{\sigma_V^2 + c^2 P}.$$
The Stationary Kalman Filter
and further by some algebra
$$c^2 P^2 + \left(\sigma_V^2 - \sigma_V^2 \phi^2 - \sigma^2 c^2\right) P - \sigma^2 \sigma_V^2 = 0. \qquad (52)$$
This algebraic second order equation is solvable by the ancient formula of Indian mathematics. We take only the non-negative root into account. Given the stationary $P$ we have the stationary Kalman gain as
$$K \stackrel{\mathrm{def}}{=} \frac{c\, \phi\, P}{\sigma_V^2 + c^2 P}. \qquad (53)$$
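A small helper (Python/NumPy, my naming) that solves the quadratic (52) for the non-negative root and returns the stationary gain (53):

```python
import numpy as np

def stationary_gain(phi, c, sigma2, sigma2_V):
    """Non-negative root P of the algebraic Riccati equation (52) and the
    stationary Kalman gain K of (53)."""
    a = c**2
    b = sigma2_V - sigma2_V * phi**2 - sigma2 * c**2
    d = -sigma2 * sigma2_V
    P = (-b + np.sqrt(b**2 - 4 * a * d)) / (2 * a)   # the non-negative root
    K = c * phi * P / (sigma2_V + c**2 * P)          # (53)
    return P, K

# Random-walk example of the following slides: phi = c = sigma2 = sigma2_V = 1.
print(stationary_gain(1.0, 1.0, 1.0, 1.0))   # P ≈ 1.618, K ≈ 0.618
```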
Example of Kalman Prediction: Random Walk Observed
in Noise
Consider a discrete time Brownian motion ($Z_n$ is a Gaussian white noise), or a random walk,
$$X_{n+1} = X_n + Z_n, \quad n = 0, 1, 2, \ldots,$$
observed in noise,
$$Y_n = X_n + V_n, \quad n = 0, 1, 2, \ldots$$
Examples of Kalman Prediction: Random Walk Observed
in Noise
Then
$$\widehat{X}_{n+1} = \widehat{X}_n + \frac{E\left[e_n^2\right]}{\sigma_V^2 + E\left[e_n^2\right]}\left(Y_n - \widehat{X}_n\right),$$
and
$$\widehat{X}_{n+1} = \left(1 - \frac{E\left[e_n^2\right]}{\sigma_V^2 + E\left[e_n^2\right]}\right)\widehat{X}_n + \frac{E\left[e_n^2\right]}{\sigma_V^2 + E\left[e_n^2\right]}\, Y_n.$$
Example of Kalman Prediction: Random Walk Observed
in Noise
It can be shown that there is convergence to the stationary filter
$$\widehat{X}_{n+1} = \left(1 - \frac{P}{\sigma_V^2 + P}\right)\widehat{X}_n + \frac{P}{\sigma_V^2 + P}\, Y_n,$$
where $P$ is found by solving (52).
We have in this example $\phi = 1$, $c = 1$. We select $\sigma^2 = 1$, $\sigma_V^2 = 1$. Then (52) becomes
$$P^2 - P - 1 = 0 \;\Longrightarrow\; P = \frac{1}{2} + \sqrt{\frac{1}{4} + 1} = 1.618,$$
and the stationary Kalman gain is $K = 0.618$.
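A simulation sketch of this example (reusing the `kalman_predictor` function sketched earlier, which is my construction, not the slides'); the figures described on the next two slides can be regenerated this way:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
Z = rng.standard_normal(N)      # sigma2 = 1
V = rng.standard_normal(N)      # sigma2_V = 1
X = np.cumsum(Z)                # random walk (X_0 taken as 0)
Y = X + V                       # noisy observations

# Run the scalar predictor; sigma2_0 = 1 is an arbitrary choice for Var(X_0).
x_hat, p, K = kalman_predictor(Y, phi=1.0, c=1.0, sigma2=1.0,
                               sigma2_V=1.0, sigma2_0=1.0)
print(K[0], K[-1])              # the gain converges quickly to about 0.618
```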
Examples of Kalman Prediction: Random Walk Observed
in Noise
In the first figure there is depicted a simulation of the state process and the computed trajectory of the Kalman predictor.
Example of Kalman Prediction: Random Walk Observed
in Noise
In the next figure we see the fast convergence of the Kalman gain $K(n)$ to $K = 0.618$.
Example of Kalman Prediction: Exponential Decay
Observed in Noise
Consider an exponential decay, $0 < \phi < 1$,
$$X_{n+1} = \phi X_n, \quad n = 0, 1, 2, \ldots, \qquad X_n = \phi^n X_0,$$
observed in noise,
$$Y_n = X_n + V_n, \quad n = 0, 1, 2, \ldots$$
Then there is convergence to the stationary filter
$$\widehat{X}_{n+1} = \phi\left(1 - \frac{P}{\sigma_V^2 + P}\right)\widehat{X}_n + \frac{\phi\, P}{\sigma_V^2 + P}\, Y_n.$$
Example of Kalman Prediction: Exponential Decay
Observed in Noise
We see the corresponding noisy measurements of the exponential decay with $c = 1$, $\phi = 0.9$, $\sigma^2 = 0$, and $\sigma_V^2 = 1$.
Example of Kalman Prediction: Exponential Decay
Observed in Noise
In the figures there is first depicted the state process and the computed trajectory of the Kalman predictor.
Example of Kalman Prediction: Exponential Decay
Observed in Noise
With $c = 1$, $\phi = 0.9$, $\sigma^2 = 0$, and $\sigma_V^2 = 1$, (52) becomes
$$P^2 + \left(1 - 0.9^2\right) P = 0 \;\Longrightarrow\; P = 0$$
(where we neglect the negative root), and the stationary Kalman gain is $K = 0$.
Example of Kalman Prediction: Exponential Decay
Observed in Noise
In the next figure we see the fast convergence of the Kalman gain $K(n)$ to $K = 0$.
Example of Kalman Prediction
The state process is
$$X_n = 0.6\, X_{n-1} + Z_{n-1}, \quad n = 0, 1, 2, \ldots,$$
and the observation equation is
$$Y_n = X_n + V_n, \quad n = 0, 1, 2, \ldots,$$
with $\sigma^2 = 1$, $\sigma_V^2 = 0.1$, $\sigma_0^2 = 1$.
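A sketch that simulates this example and runs the predictor (again using the `kalman_predictor` sketch from earlier; the random seed is arbitrary):

```python
import numpy as np

phi, c, sigma2, sigma2_V, sigma2_0, N = 0.6, 1.0, 1.0, 0.1, 1.0, 100
rng = np.random.default_rng(1)
Z = rng.normal(0.0, np.sqrt(sigma2), N)
V = rng.normal(0.0, np.sqrt(sigma2_V), N)

X = np.zeros(N)
for n in range(1, N):
    X[n] = phi * X[n - 1] + Z[n - 1]     # state equation
Y = c * X + V                            # observation equation

x_hat, p, K = kalman_predictor(Y, phi, c, sigma2, sigma2_V, sigma2_0)
print(K[0], K[-1])   # the gain settles after a few steps, as in the gain figure
```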
Example of Kalman Prediction
The time series in red is $\widehat{X}_n$, the time series in blue is $X_n$, $n = 1, 2, \ldots, 100$. (Figure not reproduced.)
Example of Kalman Prediction
The Kalman gain $K(n)$. (Figure not reproduced.)
PMKF
PMKF, where $X_0 = 0.9705$. Here $\widehat{X}_{1000} = 0.9667$. (Figure not reproduced.)
The Kalman recursions for Prediction: the General Case
Recall the state-space model:
$$X_{t+1} = F_t X_t + V_t, \quad \{V_t\} \sim \mathrm{WN}(0, \{Q_t\}),$$
$$Y_t = G_t X_t + W_t, \quad \{W_t\} \sim \mathrm{WN}(0, \{R_t\}).$$
The Kalman recursions for Prediction: the General Case
Linear estimation of $X_t$ in terms of
- $Y_0, \ldots, Y_{t-1}$ defines the prediction problem;
- $Y_0, \ldots, Y_t$ defines the filtering problem;
- $Y_0, \ldots, Y_n$, $n > t$, defines the smoothing problem.
The Kalman recursions: the General Case
The predictors $\widehat{X}_t \stackrel{\mathrm{def}}{=} P_{t-1}(X_t)$ and the error covariance matrices
$$\Omega_t \stackrel{\mathrm{def}}{=} E\left[\left(X_t - \widehat{X}_t\right)\left(X_t - \widehat{X}_t\right)'\right]$$
are uniquely determined by the initial conditions
$$\widehat{X}_1 = P(X_1 \mid Y_0), \qquad \Omega_1 \stackrel{\mathrm{def}}{=} E\left[\left(X_1 - \widehat{X}_1\right)\left(X_1 - \widehat{X}_1\right)'\right]$$
The Kalman recursions: the General Case
and the recursions, for $t = 1, \ldots$,
$$\widehat{X}_{t+1} = F_t \widehat{X}_t + \Theta_t \Delta_t^{-1}\left(Y_t - G_t \widehat{X}_t\right), \qquad (54)$$
$$\Omega_{t+1} = F_t \Omega_t F_t' + Q_t - \Theta_t \Delta_t^{-1} \Theta_t', \qquad (55)$$
where
$$\Delta_t = G_t \Omega_t G_t' + R_t, \qquad \Theta_t = F_t \Omega_t G_t'.$$
The matrix $\Theta_t \Delta_t^{-1}$ is called the Kalman gain.
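A compact sketch of (54)-(55) in code (Python/NumPy; the function name is mine, and the system matrices are taken constant in $t$ for brevity):

```python
import numpy as np

def kalman_predict(y, F, G, Q, R, x1_hat, Omega1):
    """One-step predictors X_hat_t and error covariances Omega_t for the
    state-space model X_{t+1} = F X_t + V_t, Y_t = G X_t + W_t, following
    the recursions (54)-(55); F, G, Q, R are time-invariant here."""
    x_hat, Omega = x1_hat, Omega1
    preds = []
    for y_t in y:
        Delta = G @ Omega @ G.T + R
        Theta = F @ Omega @ G.T
        gain = Theta @ np.linalg.inv(Delta)            # Kalman gain Theta Delta^{-1}
        x_hat = F @ x_hat + gain @ (y_t - G @ x_hat)   # (54)
        Omega = F @ Omega @ F.T + Q - gain @ Theta.T   # (55)
        preds.append(x_hat)
    return np.array(preds), Omega
```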
Notes and Literature on the Kalman Filter
The Kalman filter was first applied to the problem of trajectory estimation for the Apollo space program of NASA (in the 1960s), and incorporated in the Apollo space navigation computer. Perhaps the most commonly used type of Kalman filter is nowadays the phase-locked loop found everywhere in radios, computers, and nearly any other type of video or communications equipment. New applications of the Kalman filter (and of its extensions like particle filtering) continue to be discovered, including for radar, global positioning systems (GPS), hydrological modelling, atmospheric observations, etc.
Wikipedia on the Kalman Filter
Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. It is also used in the guidance and navigation systems of the NASA Space Shuttle and the attitude control and navigation systems of the International Space Station.
Notes and Literature on the Kalman Filter
A Kalman filter webpage with lots of links to literature, software, and extensions (like particle filtering) is found at http://www.cs.unc.edu/welch/kalman/
Notes and Literature on the Kalman Filter
It has been understood only recently that the Danish mathematician and statistician Thorvald N. Thiele⁴ discovered the principle (and a special case) of the Kalman filter in his book published in Copenhagen in 1889 (!): Forelæsninger over Almindelig Iagttagelseslære: Sandsynlighedsregning og mindste Kvadraters Methode. A translation of the book and an exposition of Thiele's work is found in [4].

⁴ 1838-1910; a short biography is at http://www-groups.dcs.st-ac.uk./history/Biographies/Thiele.html
Notes and Literature on the Kalman Filter
[1] P.J. Brockwell and R.A. Davis: Introduction to Time Series and Forecasting. Second Edition, Springer, New York, 2002.
[2] R.M. Gray and L.D. Davisson: An Introduction to Statistical Signal Processing. Cambridge University Press, Cambridge, U.K., 2004.
[3] R.E. Kalman: A New Approach to Linear Filtering and Prediction Problems. Transactions of the ASME, Journal of Basic Engineering, 1960, 82 (Series D): 35-45.
[4] S.L. Lauritzen: Thiele: Pioneer in Statistics. Oxford University Press, 2002.
[5] T. Söderström: Discrete-time Stochastic Systems. Springer, 2002.