
European Congress on Computational Methods in Applied Sciences and Engineering

ECCOMAS 2004
P. Neittaanmäki, T. Rossi, S. Korotov, E. Oñate, J. Périaux, and D. Knörzer (eds.)
Jyväskylä, 24–28 July 2004

ON EFFICIENT IMPLEMENTATION OF THE KALMAN


FILTER BANK FOR SCENARIO ANALYSIS

Julia V. Tsyganova⋆ and Alexey E. Kondratiev†

⋆ Department of Mathematics and Mechanics,
Ulyanovsk State University, 42 L. Tolstoy Str., 432970 Ulyanovsk, Russia
e-mail: TsyganovaJuV@ulsu.ru

† Department of Mathematics and Mechanics,
Ulyanovsk State University, 42 L. Tolstoy Str., 432970 Ulyanovsk, Russia
e-mail: mouse fin@mail.ru

Key words: Linear Stochastic Systems, Kalman Filter, System Identification, Adaptive
Filtering, Fault Point Detection, Scenario Analysis.

Abstract. The practical aspects of an efficient implementation of the Kalman Filter Bank for scenario analysis are considered. Such an implementation has to reduce the running time, computational cost, and memory demands required by a computer model of the Kalman Filter Bank.
The first requirement is met by concurrent computing. The second and third requirements are satisfied by substituting the Equivalent Kalman Filter Bank for the Standard Kalman Filter Bank.
The paper contains theoretical and practical results which can be used for scenario analysis and other problems (fault point detection, system identification, pattern recognition, and others).
The work is partially supported by the Russian Ministry of Education (grants No. T02-03.2-3427 and A03-3.16-409).


1 INTRODUCTION
In many cases dynamical stochastic systems are treated as multi-mode systems. Switching from one mode to another is governed by some scenario, which raises the problem of efficiently implementing a detector of possible mode switches. The basic requirement on such an implementation is to reduce:

• running time,

• computational cost, and

• memory demands.

In the linear case the Standard Kalman Filter Bank (SKFB) can be considered as a model of a linear stochastic multi-mode system [1], [2], [3]. However, the practical implementation of the Bank is very costly for systems of large dimensionality. The aim of this paper is to investigate an efficient implementation of the SKFB computer model that satisfies the above-mentioned requirements.
The first requirement is met by concurrent computing: practical computer experiments show a considerable reduction in running time.
The second and third requirements are satisfied by substituting the Equivalent Kalman Filter Bank (EKFB) [2] for the SKFB. The equivalence of the SKFB and the EKFB is confirmed in this paper by MatLab and Borland Delphi numerical experiments; the mean residual errors are between 10^{-15} and 10^{-10} for a 5-dimensional system with 16 possible scenarios.
The paper contains a theoretical comparison of the numbers of arithmetical operations needed for the SKFB and the EKFB. The obtained areas of efficiency are confirmed by the fact that, for systems of large dimensionality, the number of arithmetical operations for the EKFB is about 30% less than for the SKFB.
Finally, the parameters of the linear transformation blocks in the EKFB are sparse matrices, which allows us to apply sparse-matrix packing algorithms to reduce memory demands.
All obtained results confirm the efficiency of the considered Kalman Filter Bank implementation for scenario analysis and other problems (for example, fault point detection and identification problems).

2 PROBLEM STATEMENT
Let the discrete-time system be characterized by the following equations:

x(ti ) = Φx(ti−1 ) + Ψu(ti−1 ) + Γw(ti−1 ), (1)

z(ti ) = Hx(ti ) + v(ti ), (2)


where x(ti) is the n-dimensional state vector, z(ti) is the m-dimensional system output, u(ti) is the control input, and {w(t0), w(t1), . . .} and {v(t1), v(t2), . . .} are mutually independent zero-mean Gaussian sequences of independent vectors. Without loss of generality, their covariances Q and R are assumed to be reduced to identity matrices: Q = I and R = I. This can easily be done by normalizing the input noise in (1) and the measurements in (2). The sequences are also independent of the Gaussian initial state x(t0) with mean x̄(t0) and covariance P(t0).
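For concreteness, a minimal simulation of the system (1), (2) with normalized noises can be sketched as follows (a Python/NumPy sketch; the function name and interface are illustrative assumptions, not part of the original MatLab/Delphi code):

import numpy as np

def simulate(Phi, Psi, Gamma, H, x0, u_seq, seed=0):
    # Simulate (1)-(2): x(ti) = Phi x(ti-1) + Psi u(ti-1) + Gamma w(ti-1),
    # z(ti) = H x(ti) + v(ti), with normalized noises Q = I and R = I.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    zs = []
    for u in u_seq:
        w = rng.standard_normal(Gamma.shape[1])   # process noise, covariance I
        v = rng.standard_normal(H.shape[0])       # measurement noise, covariance I
        x = Phi @ x + Psi @ u + Gamma @ w
        zs.append(H @ x + v)
    return np.array(zs)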
Let us assume that the system (1), (2) is time invariant, which allows the use of a steady-state constant-gain Kalman filter model, denoted with the subscript k:

xk (ti ) = Φk xk (ti−1 ) + Ψk u(ti−1 ) + Γk wk (ti−1 ), (3)


zk (ti ) = Hk xk (ti ) + vk (ti ), (4)

where xk is the Kalman filter model state vector, Φk is the Kalman filter model state
transition matrix, Ψk is the Kalman filter model control input matrix, u is the system
input vector, Γk is the Kalman filter model noise input matrix, wk is an additive white
discrete-time dynamics noise input used in the Kalman filter model, with zero mean and
identity covariance, zk is the Kalman filter model measurement vector, Hk is the Kalman
filter model output matrix, vk is an additive white measurement noise input that is used
in the Kalman filter model. It is assumed to be independent of wk , and zero mean with
identity covariance.
The Kalman filter algorithm uses this model to define the time propagation and measurement update equations of the Kalman filter state estimates and the Kalman filter state estimate covariance matrix. Thus we have
Time propagation:

    x̂_k(t_i^−) = Φ_k x̂_k(t_{i−1}^+) + Ψ_k u(t_{i−1}),
    P_k(t_i^−) = Φ_k P_k(t_{i−1}^+) Φ_k^T + Γ_k Γ_k^T.

Vector measurement update:                                                  (5)

    C_k(t_i) = H_k P_k(t_i^−) H_k^T + I_m,
    K_k(t_i) = P_k(t_i^−) H_k^T C_k(t_i)^{−1},
    P_k(t_i^+) = (I_n − K_k(t_i) H_k) P_k(t_i^−),
    ν_k(t_i) = z(t_i) − H_k x̂_k(t_i^−),
    x̂_k(t_i^+) = x̂_k(t_i^−) + K_k(t_i) ν_k(t_i),

where ν_k(t_i) is the Kalman filter residual vector.
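As an illustration only, one step of (5) can be written as the following Python/NumPy sketch (the function name and interface are assumptions of this sketch, not the authors' implementation):

import numpy as np

def kf_step(x_prev, P_prev, u, z, Phi, Psi, Gamma, H):
    # Time propagation of (5).
    x_pred = Phi @ x_prev + Psi @ u
    P_pred = Phi @ P_prev @ Phi.T + Gamma @ Gamma.T
    # Vector measurement update of (5); noises are normalized, so Q = I and R = I.
    m = H.shape[0]
    C = H @ P_pred @ H.T + np.eye(m)            # residual covariance C_k(t_i)
    K = P_pred @ H.T @ np.linalg.inv(C)         # gain K_k(t_i)
    nu = z - H @ x_pred                         # residual nu_k(t_i)
    x_upd = x_pred + K @ nu
    P_upd = (np.eye(Phi.shape[0]) - K @ H) @ P_pred
    return x_upd, P_upd, nu, C, K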


The following sequence

    N_k = {ν_k(t_1), ν_k(t_2), . . . , ν_k(t_i)}                            (6)

consists of mutually independent entries ν_k(t_τ) = z(t_τ) − H_k x̂_k(t_τ^−), τ = 1, 2, . . . , i, with zero mean and the Kalman filter-computed residual covariance matrix C_k, subject to the corresponding Kalman filter model. The steady-state Kalman filter gain K_k and the steady-state Kalman filter residual covariance matrix C_k can be precomputed using (5),


and therefore do not need to be computed in real time. The steady state Kalman filter
equations become
    x̂_k(t_i^−) = Φ_k x̂_k(t_{i−1}^+) + Ψ_k u(t_{i−1}),                      (7)
    x̂_k(t_i^+) = x̂_k(t_i^−) + K_k ν_k(t_i).                                 (8)
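The steady-state gain K_k and residual covariance C_k can be obtained offline, for example by iterating the covariance recursion of (5) until convergence. A possible Python/NumPy sketch (illustrative names only, not the authors' code):

import numpy as np

def steady_state_gain(Phi, Gamma, H, tol=1e-12, max_iter=100000):
    # Iterate the covariance recursion of (5) until P converges, then return
    # the steady-state gain K_k and residual covariance C_k (noises normalized).
    n, m = Phi.shape[0], H.shape[0]
    P = np.eye(n)
    K, C = None, None
    for _ in range(max_iter):
        P_pred = Phi @ P @ Phi.T + Gamma @ Gamma.T
        C = H @ P_pred @ H.T + np.eye(m)
        K = P_pred @ H.T @ np.linalg.inv(C)
        P_new = (np.eye(n) - K @ H) @ P_pred
        if np.max(np.abs(P_new - P)) < tol:
            break
        P = P_new
    return K, C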
The set of Kalman filter models forms the Standard Kalman Filter Bank (SKFB).
Further, let us follow [2] to apply expressions for the equivalent residual of a Kalman
filter using the residual of another Kalman filter and the known differences between the
two Kalman filter models. The subscripts j and k denote the two different Kalman filter
models. We consider model differences in the state transition matrix Φ, the input matrix
Ψ, the noise input matrix Γ, and the output matrix H.

3 THE STANDARD KALMAN FILTER BANK AND ITS EFFICIENT IMPLEMENTATION: THE EQUIVALENT KALMAN FILTER BANK
Let us use the following notation:
    ΔΨ_kj = Ψ_k − Ψ_j  →  Ψ_j = Ψ_k − ΔΨ_kj,
    ΔH_kj = H_k − H_j  →  H_j = H_k − ΔH_kj,
    ΔΦ_kj = Φ_k − Φ_j  →  Φ_j = Φ_k − ΔΦ_kj,
    ΔΓ_kj = Γ_k − Γ_j  →  Γ_j = Γ_k − ΔΓ_kj,
    ΔK_kj = K_k − K_j  →  K_j = K_k − ΔK_kj.
The difference in state estimate errors is defined in the following way:

    Δε_jk(t_i^+) = ε_j(t_i^+) − ε_k(t_i^+) = [x(t_i) − x̂_j(t_i^+)] − [x(t_i) − x̂_k(t_i^+)]
                 = x̂_k(t_i^+) − x̂_j(t_i^+)  →  x̂_j(t_i^+) = x̂_k(t_i^+) − Δε_jk(t_i^+),

where x(t_i) is the true state of the system (1).

Let us define the residual of filter number j:

    ν_j(t_i) = z(t_i) − H_j Φ_j x̂_k(t_{i−1}^+) − H_j Ψ_j u(t_{i−1}) + H_j Φ_j Δε_jk(t_{i−1}^+).

This equation can be rewritten in the following way:

    ν_j(t_i) = ν_k(t_i) + H_j Φ_j Δε_jk(t_{i−1}^+) + [H_j ΔΦ_kj + ΔH_kj Φ_k] x̂_k(t_{i−1}^+)
             + [H_j ΔΨ_kj + ΔH_kj Ψ_k] u(t_{i−1}).
Let us write a recursive expression for the difference in state estimates between filter k and filter j, Δε_jk(t_i^+). It allows us to compute Δε_jk(t_i^+) at each time sample and then the residual of filter j. Thus, we have

    Δε_jk(t_i^+) = (I − K_j H_j) Φ_j Δε_jk(t_{i−1}^+) + [(I − K_k H_k) Φ_k − (I − K_j H_j) Φ_j] x̂_k(t_{i−1}^+)
                 + [(I − K_k H_k) Ψ_k − (I − K_j H_j) Ψ_j] u(t_{i−1}) + ΔK_kj z(t_i).
Now consider four cases:

4
Julia V. Tsyganova and Alexey E. Kondratiev

• differences in the input matrix Ψ,
• differences in the output matrix H,
• differences in the state transition matrix Φ,
• differences in the noise input matrix Γ.
Case 1. Ψ_k ≠ Ψ_j, H_k = H_j, Φ_k = Φ_j, Γ_k = Γ_j, K_k = K_j.

    Δε_jk(t_i^+) = (I − K_j H_j) Φ_j Δε_jk(t_{i−1}^+) + (I − K_j H_j) ΔΨ_kj u(t_{i−1}),
    ν_j(t_i) = ν_k(t_i) + H_j Φ_j Δε_jk(t_{i−1}^+) + H_j ΔΨ_kj u(t_{i−1}).

Case 2. Ψ_k = Ψ_j, H_k ≠ H_j, Φ_k = Φ_j, Γ_k = Γ_j, K_k ≠ K_j.

    Δε_jk(t_i^+) = (I − K_j H_j) Φ_j Δε_jk(t_{i−1}^+) + ΔK_kj z(t_i) + (K_j H_j − K_k H_k) x̂_k(t_i^−),
    ν_j(t_i) = ν_k(t_i) + H_j Φ_j Δε_jk(t_{i−1}^+) + ΔH_kj x̂_k(t_i^−).

Case 3. Ψ_k = Ψ_j, H_k = H_j, Φ_k ≠ Φ_j, Γ_k = Γ_j, K_k ≠ K_j.

    Δε_jk(t_i^+) = (I − K_j H_j)[Φ_j Δε_jk(t_{i−1}^+) + ΔΦ_kj x̂_k(t_{i−1}^+)] + ΔK_kj ν_k(t_i),
    ν_j(t_i) = ν_k(t_i) + H_j Φ_j Δε_jk(t_{i−1}^+) + H_j ΔΦ_kj x̂_k(t_{i−1}^+).

Case 4. Ψ_k = Ψ_j, H_k = H_j, Φ_k = Φ_j, Γ_k ≠ Γ_j, K_k ≠ K_j.

    Δε_jk(t_i^+) = (I − K_j H_j) Φ_j Δε_jk(t_{i−1}^+) + ΔK_kj ν_k(t_i),
    ν_j(t_i) = ν_k(t_i) + H_j Φ_j Δε_jk(t_{i−1}^+).

So, the Equivalent Kalman Filter Bank (EKFB) consists of the source Kalman filter k and a set of linear transforms producing the equivalents of the residuals of the other Kalman filters j (j = 1, . . . , M).
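To make the construction concrete, the sketch below implements Case 1 (a difference only in the input matrix Ψ) in Python/NumPy: given the residual sequence of the source filter k, it reproduces the residuals of filter j by the linear transform above instead of running filter j. The function name and the alignment convention for the input sequence are assumptions of this sketch, not taken from the paper.

import numpy as np

def ekfb_case1_residuals(nu_k_seq, u_seq, Phi, H, K, dPsi):
    # Case 1: only the input matrices differ (dPsi = Psi_k - Psi_j), so the two
    # filters share Phi, H and the gain K.  Given the residuals nu_k(t_i) of the
    # source filter and the inputs u(t_{i-1}) (aligned element by element),
    # reproduce the residuals nu_j(t_i) of filter j without running filter j.
    n = Phi.shape[0]
    d_eps = np.zeros(n)                      # both filters start from the same estimate
    A = (np.eye(n) - K @ H) @ Phi
    B = (np.eye(n) - K @ H) @ dPsi
    nu_j_seq = []
    for nu_k, u in zip(nu_k_seq, u_seq):
        nu_j_seq.append(nu_k + H @ Phi @ d_eps + H @ dPsi @ u)
        d_eps = A @ d_eps + B @ u            # update d_eps_jk(t_i^+)
    return nu_j_seq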

4 COMPARISON OF THE OPERATION COUNT. AREAS OF EFFICIENT IMPLEMENTATION OF EKFB
Let us compare the operation count (the number of multiplications and additions) needed to implement the Standard Kalman Filter Bank with the count needed to implement the Equivalent Kalman Filter Bank. Consider the following values: ns, the dimension of the state vector x; ni, the dimension of the control input u; and nm, the dimension of the measurement vector z.
Let us denote the number of multiplications as < ∗ > and the number of additions as
< + >.
SKFB.
< ∗ > = ns (ns + ni + 2nm ),
< + > = ns (ns + ni + 2nm − 1).


EKFB, ΔΨ ≠ 0.
Let nΨ be the number of nonzero columns of the matrix ΔΨ (1 ≤ nΨ ≤ ni). Then

    < ∗ > = nΨ(ns + nm) + ns(ns + nm),
    < + > = nΨ(ns + nm) + ns(ns + nm − 2).                                   (9)

Let us define the area of efficient implementation of the EKFB by examining the set of inequalities that describes the gain in the operation count. Comparing (9) with the SKFB counts and choosing the stronger of the two resulting inequalities, we have:

    nΨ ≤ ns(ni + nm)/(ns + nm).                                             (10)

Note that this inequality is always meaningful, since ns and nm are positive integers.
Taking into account the possible values of nΨ, we can draw a plot of the desired area (see figure 1). In this case efficiency is particularly obvious for systems with ni << ns and ni << nm.
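A one-line check of inequality (10) for given dimensions may look as follows (a Python sketch, purely illustrative):

def ekfb_case1_is_efficient(ns, ni, nm, n_psi):
    # Inequality (10): the EKFB transform with n_psi nonzero columns of dPsi
    # requires no more operations than a full filter in the SKFB.
    return n_psi <= ns * (ni + nm) / (ns + nm)

# Example: a large state with few inputs lies deep inside the area of efficiency.
# ekfb_case1_is_efficient(ns=500, ni=2, nm=100, n_psi=2)  ->  True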

[Figure 1 shows the surface nΨ = ns(ni + nm)/(ns + nm) together with the bound nΨ = ni over the (ns, nm) plane; the area of efficiency lies below the surface.]
Figure 1: Area of efficiency in Case 1.

EKFB, ΔH ≠ 0 (→ ΔK ≠ 0).


Let nH be the number of nonzero rows of the matrix ΔH (1 ≤ nH ≤ nm), and nK the size of the nonzero block of the matrix ΔK (1 ≤ nK ≤ min{ns, nm}). Then

    < ∗ > = nH(2ns − nK) + nK(ns + nK) + ns(ns + nm),
    < + > = nH(2ns − nK − 1) + nK(ns + nK − 1) + ns(ns + nm).

There are two possible alternatives in this case:

1) If nH ≥ ns − nK , then

nH ≤ (ns (ni + nm ) − nK (ns + nK ))/(2ns − nK ); (11)

2) If nH < ns − nK , then

nH ≤ (ns (ni + nm − 1) − nK (ns + nK − 1))/(2ns − nK − 1). (12)

Let us consider these alternatives in order:

1) The inequality (11) is always meaningful, because nK ≤ ns and therefore 2ns − nK ≠ 0. Taking into account the restrictions on nH, it is possible to draw a plot of the desired area (see figure 2). In this case efficiency grows as ni increases.

2) The denominator of the fraction on the right-hand side of inequality (12) could vanish only when nK = 2ns − 1, i.e. nK − ns = ns − 1. Since nK − ns ≤ 0, this requires ns − 1 ≤ 0, which is possible only for ns = 1. But then ns = nK = 1, and the condition nH < ns − nK gives nH < 0, a contradiction. Hence the denominator is nonzero for all admissible parameter values, and inequality (12) is always meaningful. The desired area, subject to the restrictions on nH, is plotted in figure 3. In this case efficiency decreases quickly as the system dimensions grow.
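The corresponding check for Case 2, covering both alternatives (11) and (12), might be sketched as follows (Python, illustrative only):

def ekfb_case2_is_efficient(ns, ni, nm, n_h, n_k):
    # Inequalities (11)/(12) for Case 2: n_h nonzero rows of dH (1 <= n_h <= nm)
    # and a nonzero block of dK of size n_k (1 <= n_k <= min(ns, nm)).
    if n_h >= ns - n_k:
        return n_h <= (ns * (ni + nm) - n_k * (ns + n_k)) / (2 * ns - n_k)            # (11)
    return n_h <= (ns * (ni + nm - 1) - n_k * (ns + n_k - 1)) / (2 * ns - n_k - 1)    # (12)

# Example: ekfb_case2_is_efficient(ns=100, ni=10, nm=20, n_h=5, n_k=5)  ->  True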

EKFB, ΔΦ ≠ 0 (→ ΔK ≠ 0).

    < ∗ > = ns(2ns + 3nm),
    < + > = ns(2ns + 3nm − 1) − 2nm.

In this case the sparsity of the matrix ΔΦ gives no advantage, since it usually has one nonzero row and one nonzero column, and when ΔΦ is multiplied by another matrix the product is no longer sparse. The condition of efficiency in this case is ni ≥ ns + nm; it is equivalent to the previous case without taking the sparse structure into account. The plot of the area of efficiency is shown in figure 4.


[Figure 2 shows the surface nH = (ns(ni + nm) − nK(ns + nK))/(2ns − nK) together with the bounds nH = nm and nH = ns − nK over the (ns, ni) plane; the area of efficiency lies below the surface.]
Figure 2: Area of efficiency in Case 2.1

The computational savings are obvious for systems with ni >> ns and ni >> nm.
EKFB, ΔΓ ≠ 0 (→ ΔK ≠ 0).

    < ∗ > = ns(ns + 2nm),
    < + > = ns(ns + 2nm − 1) − nm.

Here the sparse structure of the matrix ΔΓ gives no advantage either; however, computational savings are evident, since the corresponding inequalities ni ≥ 0 and ni·ns + nm ≥ 0 always hold.
All the obtained results show that, for systems of large dimensionality, the EKFB implementation yields considerable computational savings in comparison with the SKFB.

5 NUMERICAL EXAMPLE
As a practical numerical example, consider the following system taken from inertial navigation [4]:


[Figure 3 shows the surface nH = (ns(nm + ni − 1) − nK(ns + nK − 1))/(2ns − nK − 1) together with the bounds nH = nm and nH = ns − nK over the (ns, ni) plane; the area of efficiency lies below the surface.]
Figure 3: Area of efficiency in Case 2.2

x_{t+1} = Φ x_t + Γ w_t,     z_t = H x_t + v_t,

with

    Φ = [ 0.75  −1.74  −0.3      0.0   −0.15
          0.09   0.91  −0.0005   0.0   −0.008
          0.0    0.0    0.95     0.0    0.0
          0.0    0.0    0.0      0.55   0.0
          0.0    0.0    0.0      0.0    0.905 ],

    Γ = [  0.0    0.0    0.0
           0.0    0.0    0.0
          24.64   0.0    0.0
           0.0    0.835  0.0
           0.0    0.0    1.83 ],

    H = [ 1−e    0     0    0     1−f
          0      1−g   0    1−h   0   ],     e, f, g, h ∈ {0, 1}.
{w_t} and {v_t} are zero-mean white Gaussian sequences with covariances Q_t = I_3 and R_t = I_2 (I_n is the n-dimensional identity matrix). These equations describe the damped Schuler loop driven by the exponentially correlated 3-dimensional noise w_t [4].
The mode of the stochastic system, or system scenario, is determined by the matrix H. Thus 2^4 = 16 types of system scenario are possible.
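For reference, the matrices above and the sixteen scenario output matrices can be generated as in the following Python/NumPy sketch; the numbering of S_0 . . . S_15 by the bits (e, f, g, h) is an assumption of this sketch, since the paper does not fix a particular ordering.

import numpy as np
from itertools import product

Phi = np.array([[0.75, -1.74, -0.3,    0.0,  -0.15 ],
                [0.09,  0.91, -0.0005, 0.0,  -0.008],
                [0.0,   0.0,   0.95,   0.0,   0.0  ],
                [0.0,   0.0,   0.0,    0.55,  0.0  ],
                [0.0,   0.0,   0.0,    0.0,   0.905]])

Gamma = np.array([[ 0.0,   0.0,   0.0 ],
                  [ 0.0,   0.0,   0.0 ],
                  [24.64,  0.0,   0.0 ],
                  [ 0.0,   0.835, 0.0 ],
                  [ 0.0,   0.0,   1.83]])

def H_scenario(e, f, g, h):
    # Output matrix of a scenario; e, f, g, h take values in {0, 1}.
    return np.array([[1 - e, 0,     0, 0,     1 - f],
                     [0,     1 - g, 0, 1 - h, 0    ]], dtype=float)

# The 2^4 = 16 possible output matrices (one per scenario).
scenario_H = [H_scenario(e, f, g, h) for (e, f, g, h) in product((0, 1), repeat=4)]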
Table 1 shows experimental data which confirm the equivalence of the residuals obtained from the Standard Kalman Filter Bank and the Equivalent Kalman Filter Bank. This experiment was conducted for the chosen example over 1000 time steps. The source filter corresponds to scenario S0. The mean residual errors here are less than 10^{-14} for the 5-dimensional system with 16 possible scenarios.

[Figure 4 shows the plane ni = ns + nm over the (nm, ns) axes, bounding the area of efficiency ni ≥ ns + nm.]

Figure 4: Area of efficiency in Case 3

6 SCENARIO ANALYSIS
Suppose that there is a control interval [t0, tN] on which the system behavior is subject to some scenario S0. This means that the system parameters are Φ0, Ψ0, Γ0, and H0.
Further, consider a time moment ts ∈ [t0, tN] at which the system behavior can change, i.e. the nominal scenario S0 is changed to one of the possible scenarios Sk, k = 1, . . . , M. Each scenario Sk is considered as the set of system parameters {Φk, Ψk, Γk, Hk}.
Our goal is to choose the right scenario of system behavior after the moment ts. One possible method is the Wald sequential probability ratio test [5], which requires the computation of the likelihood ratio function λp(ti) at each moment ti. Thus we have

    λp(ti) = λp(ti−1) + µp(ti),
    2µp(ti) = ln det C0(ti) − ln det Cp(ti) + (s0(ti) − sp(ti)),


Type of Scenario        Difference of Residuals ν1    Difference of Residuals ν2
S0                      0                             0
S1                      4.64 · 10^{-14}               1.91 · 10^{-14}
S2                      4.79 · 10^{-14}               6.02 · 10^{-15}
S3                      4.79 · 10^{-14}               5.12 · 10^{-16}
S4                      4.31 · 10^{-14}               2.70 · 10^{-14}
S5                      4.37 · 10^{-14}               2.65 · 10^{-14}
S6                      4.32 · 10^{-14}               6.02 · 10^{-15}
S7                      4.32 · 10^{-14}               5.12 · 10^{-16}
S8                      1.45 · 10^{-14}               2.18 · 10^{-14}
S9                      1.37 · 10^{-14}               2.39 · 10^{-14}
S10                     1.33 · 10^{-14}               6.02 · 10^{-15}
S11                     1.33 · 10^{-14}               5.12 · 10^{-16}
S12                     3.12 · 10^{-15}               2.21 · 10^{-14}
S13                     3.12 · 10^{-15}               2.42 · 10^{-14}
S14                     3.12 · 10^{-15}               6.02 · 10^{-15}
S15                     3.12 · 10^{-15}               5.12 · 10^{-16}
256 ensemble average    4.59 · 10^{-14}               1.78 · 10^{-14}

Table 1: Average Difference between SKFB and EKFB residuals.

    s0(ti) = ν0^T(ti) C0^{-1}(ti) ν0(ti),
    sp(ti) = νp^T(ti) Cp^{-1}(ti) νp(ti),

where ν0(ti) is the system residual for scenario S0, νp(ti) is the system residual for scenario Sp (1 ≤ p ≤ M), and the matrices C0(ti) and Cp(ti) have to be precomputed up to their steady-state values C0 and Cp.
The value of λp(ti) is then tested against two threshold levels A and B (where A > B). If the a priori probabilities of the different types of scenario are equal, then the initial value is λp(t0) = 0.
The decision rule for choosing the desired scenario is as follows [6]:

1. If ∀p: λp(ti) ≤ B, then system scenario S0 is chosen.
2. If ∃! q such that λq(ti) ≥ A and λp(ti) < A for all p ≠ q, then system scenario Sq is chosen.
3. If ∀p: B < λp(ti) ≤ A, the process is repeated for ti+1.                               (13)
4. If ∃ p, q (p ≠ q) such that λp(ti) ≥ A and λq(ti) ≥ A, then the process is terminated with the choice of the scenario Sl for which the value of the likelihood function is maximum, that is λl(ti) = max{λp(ti), λq(ti)}.
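A possible sketch of the likelihood-ratio recursion and the decision rule (13) in Python is given below; the function names and return conventions are assumptions of the sketch, and the thresholds A and B must be supplied by the user.

import numpy as np

def update_likelihood(lam_p, nu0, nup, C0, Cp):
    # One step of the recursion: lambda_p(ti) = lambda_p(ti-1) + mu_p(ti), where
    # 2*mu_p(ti) = ln det C0 - ln det Cp + (s0 - sp).
    s0 = nu0 @ np.linalg.solve(C0, nu0)       # s0 = nu0' C0^{-1} nu0
    sp = nup @ np.linalg.solve(Cp, nup)       # sp = nup' Cp^{-1} nup
    mu = 0.5 * (np.log(np.linalg.det(C0)) - np.log(np.linalg.det(Cp)) + (s0 - sp))
    return lam_p + mu

def decide(lam, A, B):
    # Decision rule (13); lam[p-1] = lambda_p(ti) for p = 1..M, and A > B.
    if all(l <= B for l in lam):
        return 'S0'                                          # rule 1: keep scenario S0
    above = [p for p, l in enumerate(lam, start=1) if l >= A]
    if len(above) == 1:
        return 'S%d' % above[0]                              # rule 2: unique winner
    if len(above) > 1:
        return 'S%d' % max(above, key=lambda p: lam[p - 1])  # rule 4: largest likelihood
    return None                                              # rule 3: wait for t_{i+1}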


The probability of the right choice of scenario varies between 60% and 95%, depending directly on the parameters of the particular system.

7 CONCURRENT COMPUTING
To satisfy the first requirement mentioned in Section 1, concurrent computing was implemented for the SKFB and EKFB computer models.
The reduction of running time within a single run of the experiment is difficult to estimate reliably because of its short duration; however, over 10000 passes of the corresponding testing intervals the following efficiency figures were obtained for the EKFB in comparison with the SKFB (the testing was carried out on an Intel Pentium II CPU with 64 MB RAM under Microsoft Windows 98 OSR2; the source code was compiled with Borland Delphi 5 Enterprise Edition, Version 5.0, Build 5.62):

• EKFB (concurrent programming not applied): ≈ 11%;

• EKFB (concurrent programming applied, thread priority tpNormal): ≈ 68.8%;

• EKFB (concurrent programming applied, thread priority tpHigher): ≈ 66.2%;

• EKFB (concurrent programming applied, thread priority tpHighest): ≈ 65.03%.

This approach is likely to be effective for problems with large volumes of complex calculations that grow over time.
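The experiments reported here used Win32 threads from Borland Delphi; as a language-neutral illustration of the same idea, the filters of the bank could be dispatched to a thread pool, for example as in the following Python sketch (not the authors' code):

from concurrent.futures import ThreadPoolExecutor

def run_bank_concurrently(filters, z, u):
    # Evaluate one step of every filter of the bank concurrently.  Each element
    # of `filters` is assumed to be a callable taking (z, u) and returning the
    # residual of that filter for the current time step.  Note that in CPython a
    # genuine parallel speedup for numerical code usually requires work that
    # releases the GIL (e.g. NumPy routines) or a process pool instead of threads.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(f, z, u) for f in filters]
        return [fut.result() for fut in futures]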

8 IMPLEMENTATION OF SPARSE MATRICES

The differences between the models in the bank of filters are often represented by sparse matrices, so it is reasonable to apply packing algorithms to their storage and processing. The following packing scheme was applied in the experiments (source code in Borland Delphi 5):
type
  TFloat = Extended;

  TMatrixBaseClass = class
  end;

  { One stored nonzero element: its row, column and value. }
  TInfoRec = record
    i, j: Integer;
    Data: TFloat;
  end;

  TPackedMatrixArray = array of TInfoRec;

  { Sparse matrix packed as a dynamic array of (row, column, value) records. }
  TPackedMatrix = class(TMatrixBaseClass)
  private
    Fi: Integer;                       // number of rows
    Fj: Integer;                       // number of columns
    FMatrix: TPackedMatrixArray;       // stored nonzero elements
    function GetElement(i, j: Integer): TFloat;
    procedure SetElement(i, j: Integer; Value: TFloat);
  public
    property I: Integer read Fi;
    property J: Integer read Fj;
    property Element[i, j: Integer]: TFloat
      read GetElement write SetElement; default;
    constructor Create(i, j: Integer); virtual;
    destructor Destroy; override;
  end;

The packed matrix is represented as a dynamic array (TPackedMatrixArray) whose elements are information records (TInfoRec). The nonzero elements of a matrix are stored row by row, from the first column to the last. Such an implementation gives a substantial saving of memory for data storage and keeps the search for a required element of the packed matrix reasonably cheap in terms of the number of operations.
The accuracy of storage of the matrix elements must also be mentioned: insufficient accuracy can lead to a situation in which, when the differences between the system models are computed, the required sparse matrices turn out to be zero, which certainly leads to incorrect results.
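For comparison, an analogous coordinate-style packed matrix can be expressed in a few lines of Python (an illustrative analogue of the TPackedMatrix class above, not part of the original implementation):

class PackedMatrix:
    # Sparse matrix stored as a list of (row, column, value) records ordered
    # row by row, mirroring the TPackedMatrix class above.
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.records = []

    def __setitem__(self, key, value):
        i, j = key
        self.records = [(r, c, v) for (r, c, v) in self.records if (r, c) != (i, j)]
        if value != 0.0:
            self.records.append((i, j, value))
            self.records.sort()          # keep the row-by-row ordering

    def __getitem__(self, key):
        for (r, c, v) in self.records:
            if (r, c) == key:
                return v
        return 0.0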

9 CONCLUSIONS
The purpose of this work was the theoretical investigation and development of the idea of [2], [3], the computer implementation of an efficient bank of Kalman filters, the experimental confirmation of its equivalence to the Standard Kalman Filter Bank, and a comparison of operation counts. The possibility of using the Equivalent Kalman Filter Bank in scenario analysis, with application of concurrent programming and the Win32 threads technology, was also considered.
The following theoretical results were obtained: the idea of the EKFB was developed, and the number of arithmetical operations for the EKFB, compared with the number required for the SKFB, was obtained in the form of inequalities and, graphically, as areas of efficiency.
The practical results are the following: the efficient (equivalent) bank of Kalman filters was implemented with application of concurrent computing and sparse-matrix packing algorithms, and its equivalence to the SKFB was experimentally confirmed.
Thus, the considered implementation of the Kalman Filter Bank is indeed efficient, because it meets the three basic requirements: reduction of running time, computational cost, and memory demands. This implementation can be used for scenario analysis, fault point detection, system identification, pattern recognition, and other problems.


REFERENCES
[1] Newbold, P. M. and Ho, Yu-Chi. Detection of Changes in the Characteristics of a Gauss-Markov Process. IEEE Trans. on Aerosp. and Electron. Systems, Vol. AES-4(5), 707–718, 1968.

[2] Hanlon, P. D. and Maybeck, P. S. Interrelationship of Single-Filter and Multiple-Model Adaptive Algorithms. IEEE Trans. on Aerospace and Electronic Systems, Vol. 34(3), 934–946, 1998.

[3] Hanlon, P. D. and Maybeck, P. S. Equivalent Kalman Filter Bank Structure for Multiple Model Adaptive Estimation (MMAE) and Generalized Likelihood Ratio (GLR) Failure Detection. Proc. of the 36th Conference on Decision & Control, San Diego, California, USA, Vol. 5, 4312–4317, 1997.

[4] Stratton, A. Combination of Inertial Navigation and Radio Systems. In Borisov, N. I. (ed.): Problems of Inertial Navigation, Mashinostroienie, 1961. [In Russian]

[5] Wald, A. Sequential Analysis. New York, Wiley, 1947.

[6] Semoushin, I. V. The Quickest in the Mean Manoeuvre Detection with the Guaranteed Probability Error (Methods). Shipbuilding: Computing Techniques, 26, 3–7, 1990. [In Russian]

