
Kalman Filtering with Nonlinear State Constraints

Chun Yang
Sigtem Technology, Inc.
113 Clover Hill Lane
Harleysville, PA 19438
chunyang@sigtem.com

Erik Blasch
Air Force Research Lab/SNAA
2241 Avionics Circle
WPAFB, OH 45433
erik.blasch@wpafb.af.mil

Abstract - In [Simon and Chia, 2002], an analytic method was developed to incorporate linear state equality constraints into the Kalman filter. When the state constraint is nonlinear, linearization was employed to obtain an approximately linear constraint around the current state estimate. This linearized constrained Kalman filter is subject to approximation errors and may suffer from a lack of convergence. In this paper, we present a method that allows exact use of second-order nonlinear state constraints. It is based on a computational algorithm that iteratively finds the Lagrangian multiplier for the nonlinear constraints. The method therefore provides better approximation when higher order nonlinearities are encountered. Computer simulation results are presented to illustrate the algorithm.

Keywords: Kalman filtering, nonlinear state constraints, Lagrangian multiplier, iterative solution, tracking.

1 Introduction

In a recent paper [Simon and Chia, 2002], a rigorous analytic method was set forth to incorporate linear state equality constraints into the Kalman filtering process. Such constraints (e.g., known model and signal information) are often ignored or dealt with heuristically. The resulting estimates, even obtained with the Kalman filter, cannot be optimal because they do not take advantage of this additional information about state constraints.

One example that benefits from state constraints is ground target tracking. When a vehicle travels off-road or on an unknown road, the state estimation problem is unconstrained. However, when the vehicle is traveling on a known road, be it straight or curved, the state estimation problem can be cast as constrained, with the road network information available from, say, digital terrain maps [Yang, Bakich, and Blasch, 2005].

To make use of state constraints, previous attempts range from reducing the system model parameterization to treating state constraints as perfect measurements. The constrained Kalman filter proposed in [Simon and Chia, 2002] consists of first obtaining an unconstrained Kalman filter solution and then projecting the unconstrained state estimate onto the constrained surface. Although the main results are restricted to linear systems and linear state equality constraints, the authors outlined steps to extend them to inequality constraints, nonlinear dynamic systems, and nonlinear state constraints.

According to [Simon and Chia, 2002], the inequality constraints can be checked at each time step of the filter. If the inequality constraints are satisfied at a given time step, no action is taken since the inequality-constrained problem is solved. If the inequality constraints are not satisfied at a given time step, then the constrained solution is applied to enforce the constraints. Furthermore, to apply the constrained Kalman filter to nonlinear systems and nonlinear state constraints, it is suggested in [Simon and Chia, 2002] to linearize both the system and constraint equations about the current state estimate. The former is equivalent to the use of an extended Kalman filter (EKF).

However, the projection of the unconstrained state estimate onto a linearized state constraint is subject to constraint approximation errors, which are a function of the nonlinearity and, more importantly, of the point around which the linearization takes place. This may result in convergence problems. It was suggested in [Simon and Chia, 2002] to take extra measures to guarantee convergence in the presence of nonlinear constraints.

There are a host of constrained nonlinear optimization techniques [Luenberger, 1989]. Primal methods search through the feasible region determined by the constraints. Penalty and barrier methods approximate constrained optimization problems by unconstrained problems through modifying the objective function (e.g., adding a term that exacts a higher price if a constraint is violated). Instead of the original constrained problem, dual methods attempt to solve an alternate problem (the dual problem) whose unknowns are the Lagrangian multipliers of the first problem. Cutting plane algorithms work on a series of ever-improving approximating linear programs whose solutions converge to that of the original problem. Lagrangian relaxation methods are widely used in discrete constrained optimization problems.

In this paper, we present a method that allows for the exact use of second-order nonlinear state constraints. The method can provide a better approximation to higher order nonlinearities. The new method is based on a computational algorithm that iteratively finds the Lagrangian multiplier.
[Report Documentation Page (Standard Form 298): report date July 2006; performing organization Air Force Research Lab/SNAA, 2241 Avionics Circle, Wright-Patterson AFB, OH 45433. Approved for public release; distribution unlimited. Supplementary note: 9th International Conference on Information Fusion, 10-13 July 2006, Florence, Italy, sponsored by the International Society of Information Fusion (ISIF), the IEEE Aerospace & Electronic Systems Society (AES), ONR, ONR Global, Selex - Sistemi Integrati, Finmeccanica, BAE Systems, TNO, AFOSR's European Office of Aerospace Research and Development, and the NATO Undersea Research Centre.]
Considering only second-order constraints in this paper is a tradeoff between reducing approximation errors to higher-order nonlinearities and keeping the problem computationally tractable.

In general, the solution to a filtering problem is an a posteriori probability density function (pdf), and in the linear Gaussian case it is normally distributed with a mean vector and a covariance matrix. At first glance, such a solution can never be the solution to a filtering problem with a hard constraint, simply because a pdf compliant with a hard constraint cannot have support on the whole state space, as pointed out in [Anonym, 2006]. In this paper and in [Simon and Chia, 2002], the constrained projection actually maps the whole state space onto the constraints, producing a constrained pdf with its support on the constraint surface. Although not done explicitly in this paper, the constrained pdf can be derived for the second-order constraints considered here, from which the estimate statistics can in turn be determined, albeit approximately.

The paper is organized as follows. Section 2 presents a brief summary of linearly constrained state estimation and the problems encountered when first-order linearization is used to extend it to nonlinear cases. Section 3 details the computational algorithm to solve the second-order constrained least-squares optimization problem. In Section 4, computer simulation results are presented to illustrate the algorithm. Finally, Section 5 provides concluding remarks and suggestions for future work.

2 Linear Constrained State Estimation

In this section, we first summarize the results for linearly constrained state estimation [Simon and Chia, 2002] and then point out the problems it may face when linearization is used to extend it to nonlinear constraints.

Consider a linear time-invariant discrete-time dynamic system together with its measurement as

  x_{k+1} = A x_k + B u_k + w_k   (1a)
  y_k = C x_k + v_k   (1b)

where the subscript k is the time index, x is the state vector, u is a known input, y is the measurement, and w and v are the state and measurement noise processes, respectively. It is implied that all vectors and matrices have compatible dimensions, which are omitted for simplicity.

The goal is to find an estimate, denoted by x̂_k, of x_k given the measurements up to time k, denoted by Y_k = {y_0, ..., y_k}. Under the assumptions that the state and measurement noises are uncorrelated zero-mean white Gaussian with w ~ N{0, Q} and v ~ N{0, R}, where Q and R are positive semi-definite covariance matrices, the Kalman filter provides an optimal estimator in the form of x̂_k = E{x_k | Y_k} [Anderson and Moore, 1979]. Starting from an initial estimate x̂_0 = E{x_0} and its estimation error covariance matrix P_0 = E{(x_0 - x̂_0)(x_0 - x̂_0)^T}, where the superscript T stands for matrix transpose, the Kalman filter equations specify the propagation of x̂_k and P_k over time and their update by the measurement y_k as

  x̂_{k+1} = A x̂_k + B u_k + K_k (y_k - C x̂_k)   (2a)
  K_k = A P_k C^T (C P_k C^T + R)^-1   (2b)
  P_{k+1} = (A P_k - K_k C P_k) A^T + Q   (2c)

Now, in addition to the dynamic system of Eq. (1), we are given the linear state constraint

  D x_k = d   (3)

where D is a known constant matrix of full rank, d is a known vector, and the number of rows in D is the number of constraints, which is assumed to be less than the number of states. If D is a square matrix, the state is fully constrained and can be solved for by inverting Eq. (3). Although no time index is given to D and d in Eq. (3), it is implied that they can be time-dependent, leading to piecewise linear constraints.

The constrained Kalman filter according to [Simon and Chia, 2002] is constructed by directly projecting the unconstrained state estimate x̂_k onto the constraint surface S = {x: Dx = d}. It is formulated as the solution to the problem:

  x̃ = argmin_{x in S} (x - x̂)^T W (x - x̂)   (4)

where W is a symmetric positive definite weighting matrix. To solve this problem, we form the Lagrangian

  J(x, λ) = (x - x̂)^T W (x - x̂) + 2 λ^T (Dx - d)   (5)

The first-order conditions necessary for a minimum are given by

  ∂J/∂x = 0  =>  W(x - x̂) + D^T λ = 0   (6a)
  ∂J/∂λ = 0  =>  Dx - d = 0   (6b)

This gives the solution

  λ = (D W^-1 D^T)^-1 (D x̂ - d)   (7a)
  x̃ = x̂ - W^-1 D^T (D W^-1 D^T)^-1 (D x̂ - d)   (7b)

The above derivation does not depend on the conditional Gaussian nature of the unconstrained estimate x̂.
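As a quick numerical illustration of the projection in Eqs. (7a)-(7b), here is a minimal sketch assuming NumPy; the constraint, weighting matrix, and estimate values are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical 2-state example: constrain x1 + x2 = 1.
D = np.array([[1.0, 1.0]])      # constraint matrix, full row rank
d = np.array([1.0])
W = np.diag([2.0, 1.0])         # symmetric positive definite weighting
x_hat = np.array([0.8, 0.5])    # unconstrained estimate (violates constraint)

Winv = np.linalg.inv(W)
S = D @ Winv @ D.T                         # D W^-1 D^T
lam = np.linalg.solve(S, D @ x_hat - d)    # Eq. (7a)
x_tilde = x_hat - Winv @ D.T @ lam         # Eq. (7b)

# The projected estimate satisfies the linear constraint exactly.
print(x_tilde, D @ x_tilde)   # -> [0.7 0.3] [1.]
```

Because the constraint is linear, a single closed-form step lands exactly on the constraint surface; this is the baseline against which the nonlinear case below needs an iteration.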
It was shown in [Simon and Chia, 2002] that when W = I, the solution in Eq. (7) is the same as what is obtained by the mean square method, which attempts to minimize the conditional mean square error subject to the state constraints, that is,

  min_x E{ ||x - x̂||²_2 | Y }  such that  Dx = d   (8)

where ||·||_2 denotes the vector two-norm. Furthermore, when W = P^-1, i.e., the inverse of the unconstrained state estimation error covariance, the solution in Eq. (7) reduces to the result given by the maximum conditional probability method

  max_x ln Prob{x | Y}  such that  Dx = d   (9)

Several interesting statistical properties of the constrained Kalman filter are presented in [Simon and Chia, 2002]. These include the fact that the constrained state estimate given by Eq. (7) is an unbiased state estimate for the system in Eq. (1) subject to the constraint in Eq. (3) for a known symmetric positive definite weighting matrix W. Furthermore, when W = P^-1, the constrained state estimate has a smaller error covariance than that of the unconstrained state estimate, and it is actually the smallest for all constrained Kalman filters of this type. Similar results hold in terms of the trace of the estimation error covariance matrix when W = I.

In practical applications, however, nonlinear state constraints are likely to emerge. Consider the nonlinear state constraint of the form

  g(x) = d   (10)

We can expand the nonlinear state constraint about a constrained state estimate x̃, and for the ith row of Eq. (10) we have

  g_i(x) - d_i = g_i(x̃) + g_i'(x̃)^T (x - x̃) + (1/2!)(x - x̃)^T g_i''(x̃)(x - x̃) + ... - d_i = 0   (11)

where the superscripts ' and '' denote the first and second partial derivatives.

Keeping only the first-order terms, as suggested in [Simon and Chia, 2002], some rearrangement leads to

  g'(x̃)^T x ≈ d - g(x̃) + g'(x̃)^T x̃   (12)

where g(x) = [g_i(x)]^T, d = [d_i]^T, and g'(x) = [g_i'(x)]. An approximate linear constraint is therefore formed by replacing D and d in Eq. (3) with g'(x̃)^T and d - g(x̃) + g'(x̃)^T x̃, respectively.

Figure 1 illustrates this linearization process and identifies possible errors associated with the linear approximation of a nonlinear state constraint. As shown, the previous constrained state estimate x̃ lies somewhere on the constraint surface but is away from the true state. The projection of the unconstrained state estimate x̂ onto the approximate linear state constraint produces the current constrained state estimate x̃+, which is, however, subject to the constraint approximation error. Clearly, the further away x̃ is from x, the larger the approximation-introduced error. More critically, such an approximately linear constrained estimate may not satisfy the original nonlinear constraint specified in Eq. (10). It is therefore desirable to reduce this approximation-introduced error by including higher-order terms while keeping the problem computationally tractable. One possible approach is presented in the next section.

Figure 1  Errors in Linear Approximation of Nonlinear State Constraints. [Figure: in the (x1, x2) plane, the nonlinear state constraint g(x) - d = 0 and the approximate linear state constraint D(x̃)x - d(x̃) ≈ 0 are drawn; marked are the true state x, the previous constrained state estimate x̃, the unconstrained state estimate x̂, the current constrained state estimate x̃+, and the resulting constraint approximation error.]

3 Nonlinear Constrained Estimation

In this section, we consider a second-order state constraint function, which can be viewed as a second-order approximation to an arbitrary nonlinearity, as

  f(x) = [x^T 1] [M m; m^T m_0] [x; 1] = x^T M x + m^T x + x^T m + m_0 = 0   (13)

As an example, a general equation of the second degree in two variables, say ξ and η, is written as

  f(ξ, η) = a ξ² + 2b ξη + c η² + 2d ξ + 2e η + f = [ξ η 1] [a b d; b c e; d e f] [ξ; η; 1]^T = 0   (14)

which may represent a road segment in a digital terrain map.

Following the constrained Kalman filtering of [Simon and Chia, 2002], we can formulate the projection of an unconstrained state estimate onto a nonlinear constraint surface as the constrained least-squares optimization problem

  x = argmin_x (z - Hx)^T (z - Hx)  subject to  f(x) = 0   (15)
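To make the matrix form of Eqs. (13)-(14) concrete, here is a small sketch (assuming NumPy; the circle radius and test points are illustrative) that encodes a circular road x² + y² = r² as a second-order constraint and evaluates it:

```python
import numpy as np

r = 100.0                      # illustrative road radius
# Circle as a second-order constraint of Eq. (13): x^T M x + m^T x + x^T m + m0 = 0
M = np.eye(2)                  # quadratic term gives x^2 + y^2
m = np.zeros(2)                # no linear term: circle centered at the origin
m0 = -r**2

def f(x):
    """Second-order constraint function of Eq. (13); zero on the constraint surface."""
    return x @ M @ x + 2.0 * (m @ x) + m0   # m^T x + x^T m = 2 m^T x

on_road = np.array([r * np.cos(0.3), r * np.sin(0.3)])
off_road = np.array([80.0, 90.0])
print(f(on_road))     # ~0: a point on the road satisfies the constraint
print(f(off_road))    # 4500.0 = 80^2 + 90^2 - 100^2: constraint violation
```

Any conic (ellipse, parabola, hyperbola, or a rotated/translated circle) fits the same (M, m, m0) template, which is what makes the second-order form a useful local model of an arbitrary curved road.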
If we let W = H^T H and z = H x̂, the formulation in Eq. (15) becomes the same as that in Eq. (4). In a sense, Eq. (15) is a more general formulation because it can also be interpreted as a nonlinear constrained measurement update, or a projection in the predicted measurement domain.

Construct the Lagrangian with the Lagrangian multiplier λ as

  J(x, λ) = (z - Hx)^T (z - Hx) + λ f(x)   (17)

Taking the partial derivatives of J(x, λ) with respect to x and λ, respectively, and setting them to zero leads to the necessary conditions:

  -H^T z + λm + (H^T H + λM) x = 0   (18a)
  x^T M x + m^T x + x^T m + m_0 = 0   (18b)

Assume that the inverse of the matrix H^T H + λM exists. Then x can be solved for from Eq. (18a), giving the constrained solution in terms of the unknown λ as

  x = (H^T H + λM)^-1 (H^T z - λm)   (19)

which reduces to the unconstrained least-squares solution when λ = 0.

Assume that the matrix M admits the factorization M = L^T L and apply the Cholesky factorization to W = H^T H as

  W (= H^T H) = G^T G   (20)

where G is an upper triangular matrix. We then perform a singular value decomposition (SVD) of the matrix L G^-1 [Moon and Stirling, 2000] as

  L G^-1 = U Σ V^T   (21)

where U and V are orthonormal matrices and Σ is a diagonal matrix with its diagonal elements denoted by σ_i. For a square matrix, this becomes the eigenvalue decomposition.

Introduce two new vectors

  e(λ) = [... e_i(λ) ...]^T = V^T (G^T)^-1 (H^T z - λm)   (22a)
  t = [... t_i ...]^T = V^T (G^T)^-1 m   (22b)

With these factorizations and the new matrix and vector notations, the constrained solution in Eq. (19) can be simplified as

  x = G^-1 V (I + λ Σ^T Σ)^-1 e(λ)   (23)

The first- and second-order terms of x in Eq. (18b) can be expressed in λ as:

  x^T M x = e(λ)^T (I + λΣ^TΣ)^-T Σ^TΣ (I + λΣ^TΣ)^-1 e(λ) = Σ_i e_i²(λ) σ_i² / (1 + λσ_i²)²   (23a)
  m^T x = t^T (I + λΣ^TΣ)^-1 e(λ) = Σ_i e_i(λ) t_i / (1 + λσ_i²)   (23b)
  x^T m = e(λ)^T (I + λΣ^TΣ)^-1 t = Σ_i e_i(λ) t_i / (1 + λσ_i²)   (23c)

Bringing these terms into the constraint in Eq. (18b) gives rise to the constraint equation, now expressed in terms of the unknown Lagrangian multiplier λ, as

  f(λ) = (z^T H - λm^T)(H^T H + λM)^-1 M (H^T H + λM)^-1 (H^T z - λm)
       + m^T (H^T H + λM)^-1 (H^T z - λm) + (z^T H - λm^T)(H^T H + λM)^-1 m + m_0
       = e(λ)^T (I + λΣ^TΣ)^-1 Σ^TΣ (I + λΣ^TΣ)^-1 e(λ) + t^T (I + λΣ^TΣ)^-1 e(λ) + e(λ)^T (I + λΣ^TΣ)^-1 t + m_0
       = Σ_i e_i²(λ) σ_i² / (1 + λσ_i²)² + 2 Σ_i e_i(λ) t_i / (1 + λσ_i²) + m_0   (24)

As a nonlinear equation in λ, it is difficult to solve in closed form in general. Numerical root-finding algorithms may be used instead; for example, Newton's method is used below. Denote the derivative of f(λ) with respect to λ as

  f'(λ) = 2 Σ_i [e_i(λ) ė_i (1 + λσ_i²) σ_i² - e_i²(λ) σ_i⁴] / (1 + λσ_i²)³
        + 2 Σ_i [ė_i t_i (1 + λσ_i²) - e_i(λ) t_i σ_i²] / (1 + λσ_i²)²   (25a)

where

  ė = [... ė_i ...]^T = -V^T (G^T)^-1 m   (25b)

Then the iterative solution for λ is given by

  λ_{k+1} = λ_k - f(λ_k)/f'(λ_k),  starting with λ_0 = 0   (26)

The iteration stops when |λ_{k+1} - λ_k| < ε, a given small value, or when the number of iterations reaches a pre-specified number. Bringing the converged Lagrangian multiplier back into Eq. (19) or (23) then provides the constrained optimal solution.

Now consider the special case where W = H^T H, z = H x̂, and m = 0, that is, a quadratic constraint on the state. Under these conditions, t = 0 and e is no longer a function of λ, so its derivative with respect to λ vanishes, ė = 0. The quadratic constrained solution is then given by

  x̃ = (W + λM)^-1 W x̂   (27a)

where the Lagrangian multiplier λ is obtained iteratively as in Eq. (26) with the corresponding f(λ) and f'(λ) given by

  f(λ) = Σ_i e_i² σ_i² / (1 + λσ_i²)² + m_0   (27b)
  f'(λ) = -2 Σ_i e_i² σ_i⁴ / (1 + λσ_i²)³   (27c)
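The quadratic-constraint special case of Eqs. (26)-(27) can be sketched as follows (assuming NumPy; the function name and the test values are illustrative, and M is taken to be positive definite so that the Cholesky factors used for Eqs. (20)-(22) exist):

```python
import numpy as np

def quadratic_constrained_estimate(x_hat, W, M, m0, tol=1e-12, max_iter=50):
    """Project x_hat onto {x : x^T M x + m0 = 0}, minimizing the W-weighted
    distance, via Newton iteration on the Lagrangian multiplier (Eqs. 26-27)."""
    G = np.linalg.cholesky(W).T              # W = G^T G, G upper triangular (Eq. 20)
    L = np.linalg.cholesky(M).T              # M = L^T L
    _, s, Vt = np.linalg.svd(L @ np.linalg.inv(G))   # L G^-1 = U S V^T (Eq. 21)
    # e = V^T (G^T)^-1 H^T z; with z = H x_hat and m = 0, H^T z = W x_hat (Eq. 22a)
    e = Vt @ np.linalg.solve(G.T, W @ x_hat)
    lam = 0.0                                # start from the unconstrained solution
    for _ in range(max_iter):
        den = 1.0 + lam * s**2
        f = np.sum(e**2 * s**2 / den**2) + m0        # Eq. (27b)
        fdot = -2.0 * np.sum(e**2 * s**4 / den**3)   # Eq. (27c)
        step = f / fdot
        lam -= step                                  # Newton update, Eq. (26)
        if abs(step) < tol:
            break
    return np.linalg.solve(W + lam * M, W @ x_hat), lam   # Eq. (27a)

# Circle constraint x^T x = r^2 (W = I, M = I, m0 = -r^2): the result should be
# the radial projection r * x_hat / ||x_hat|| with lam = ||x_hat||/r - 1.
x, lam = quadratic_constrained_estimate(np.array([120.0, 50.0]),
                                        np.eye(2), np.eye(2), -100.0**2)
print(x, lam)
```

On this illustrative circle case the iteration converges in a handful of Newton steps, matching the closed-form multiplier derived later for the circular road.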
The solution of Eq. (27) is also called constrained least squares [Moon and Stirling, 2000, pp. 765-766], which was previously applied to joint estimation and calibration [Yang and Lin, 2004]. When M = 0, the constraint in Eq. (13) degenerates to a linear one. The constrained solution in Eq. (19) is still valid. However, the iterative solution for finding λ is no longer applicable, and a closed-form solution is available as given in Eq. (7).

4 Simulation Results

In this section, a simple example is used to demonstrate the effectiveness of the nonlinear constrained method of this paper and to show its superior performance as compared to the linearized constrained method in [Simon and Chia, 2002]. In this example, a ground vehicle is assumed to travel along a circular road segment as shown in Figure 2, with the turn center chosen as the origin of the x-y coordinates.

Figure 2  Tracking a Vehicle along a Circular Road. [Figure: a circle of radius r centered at the origin; marked are the true state x, the unconstrained estimate x̂, the linearized constrained estimate x̃+_L, and the quadratic constrained estimate x̃+_NL.]

Under this assumption, the road constraint is quadratic and can be written as

  f(x, y) = x² + y² - r² = [x y] [1 0; 0 1] [x; y] - r² = 0   (28)

Assume that the true position of the vehicle is x = [r cosθ, r sinθ]^T and that an unbiased estimate x̂ is obtained by a certain means such that

  x̂ ~ N(x, P)   (29)

where P is the estimation error covariance matrix, which is assumed to take the diagonal form P = diag{[σ_x², σ_y²]} with σ_x² = σ_y² = σ². It is understandable that additional errors will appear if the estimate x̂ is biased. Furthermore, if the error covariance matrix is not diagonal, the correlation direction will also affect the statistical properties. Ruling out such variability in conditions makes the analysis of results easier while not losing generality.

To apply the linear constrained Kalman filter of [Simon and Chia, 2002], the nonlinear constraint is linearized about a previous constrained state denoted by x̃ and can be written as

  [x̃ ỹ] [x; y] = r²   (30)

In terms of the matrix notations, we have D = [x̃ ỹ] and d = r². Since the previous constrained state x̃ is also on the circular path, it can be parameterized as x̃ = [r cos(θ+Δθ), r sin(θ+Δθ)]^T, where Δθ is the angular offset, and its distance to the true state is given by

  ||x - x̃||²_2 = 2r²(1 - cosΔθ)   (31)

With W = P^-1 (i.e., H = (1/σ)I_2 and z = H x̂, where I_n stands for an identity matrix of dimension n), the linearized constrained estimate, denoted by x̃+_L, can be calculated using Eq. (7).

Similarly, with W = P^-1, M = I_2, m = 0, and m_0 = -r², the quadratic constrained estimate, denoted by x̃+_NL, is given by Eq. (27).

In the first simulation, we set r = 100 m, θ = 45°, and σ = 10 m and then draw samples of x̂ from the distribution in Eq. (29) as the unconstrained state estimates. We then calculate both the linearized and nonlinear constrained state estimates x̃+_L and x̃+_NL at two linearizing points, i.e., two previous constrained state estimates x̃, with Δθ = 0° and -15°, respectively.
Figures 3 and 4 illustrate how the nonlinear constrained method of this paper and the linearized constrained method of [Simon and Chia, 2002] actually project unconstrained state estimates onto a nonlinear constraint, which is a circular road segment in this example. In Figure 3, for Δθ = 0°, the linearizing point (small circle o) coincides with the true state (star *), of which the estimator is not aware, though. The linearized constraint is a line tangent to the circular path at that point. There are 50 random samples drawn from the distribution of Eq. (29) as the unconstrained state estimates (dots), which are projected onto the linearized constraint using Eq. (7) as the linearized constrained estimates (cross sign x) and using Eq. (27) as the quadratic constrained estimates (plus sign +). Clearly, all the quadratic constrained estimates fall onto the circle, thus satisfying the constraint, whereas not all linearized constrained estimates do so.

Figure 3  Projection of Unconstrained Estimates onto Constraints (x̃ = x). [Figure: x-y plot of the road segment with the unconstrained, nonlinear constrained, and linearized constrained estimates, the linearizing point, and the true state.]

In Figure 4, for Δθ = -15°, the linearizing point is away from the true state. At 100 m, an angular offset is related to a separation error by a factor of 1.74 m/deg. This corresponds to about 26 m for Δθ = -15°. Although the linearized method using Eq. (7) effectively projects all the unconstrained state estimates onto the linearized constraint, the linearized constraint itself is tilted off, introducing larger errors due to the constraint approximation. In contrast, the quadratic constrained method, being independent of the prior knowledge about the state estimate, easily satisfies the constraint.

Figure 4  Projection of Unconstrained Estimates onto Constraints (x̃ ≠ x). [Figure: same layout as Figure 3 with the linearizing point offset by -15° from the true state.]

In the second simulation, we draw 5000 unconstrained state estimates for each linearizing point Δθ, which varies from -20° to 20° in steps of 5°. At each linearizing point, we calculate the root mean squared (RMS) errors between the true state and the linearized and nonlinear constrained state estimates, respectively. Figure 5 shows the two RMS values as a function of the linearizing point, together with the unconstrained state estimation error standard deviation σ.

As expected, the RMS values for the linearized constrained estimates grow large as the angular offset increases. The RMS values for the quadratic constrained estimates remain constant and are slightly smaller than those of the unconstrained estimates, because the scattering along a constraint curve is smaller than in a two-dimensional (2D) plane. For the same reason, the RMS values for the linearized constrained estimates are smaller than those of the quadratic constrained estimates for |Δθ| < 7°. However, the use of RMS values for comparison is less meaningful in this case. As shown in Figure 6, both the unconstrained estimates and the linearized constrained estimates do not always satisfy the nonlinear constraint, whereas the quadratic constrained estimates do all the time (i.e., the constraint is satisfied if the distance to the origin is 100 m).

Figure 5  RMS Values of State Estimates vs. Angular Offset. [Figure: Monte Carlo simulation with 5000 runs per point; state error RMS (m) vs. angular offset Δθ from the true state (deg) for the unconstrained estimate, the nonlinear constrained RMS, and the linearized constrained RMS.]

Figure 6  Constraint Satisfaction. [Figure: distance to center (m) vs. random sample number at Δθ = -5° for the unconstrained, linearized constrained, and nonlinear constrained state estimates.]

In a 2D setting, the circular path constraint of Eq. (30) allows for a simple geometric solution to the problem. The desired on-circle point x̃ is the intersection between the circle and the line extending from the origin to the off-circle point x̂. The geometric solution can be written as

  x̃ = r cos[tan^-1(ŷ/x̂)]   (32a)
  ỹ = r sin[tan^-1(ŷ/x̂)]   (32b)
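The geometric projection of Eq. (32) is straightforward to code directly (a minimal sketch assuming NumPy; atan2 is used in place of tan^-1(y/x) so that all four quadrants are handled, and the sample values are illustrative):

```python
import numpy as np

def project_onto_circle(x_hat, r):
    """Geometric solution of Eq. (32): keep the bearing of the off-circle
    estimate and move it radially onto the circle of radius r at the origin."""
    theta = np.arctan2(x_hat[1], x_hat[0])
    return np.array([r * np.cos(theta), r * np.sin(theta)])

x_hat = np.array([80.0, 90.0])            # illustrative unconstrained estimate
x_proj = project_onto_circle(x_hat, 100.0)
# The projection lies on the circle and equals r * x_hat / ||x_hat||.
print(x_proj, np.hypot(*x_proj))
```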
Since it is an exact solution for this particular simulation example, we can use it to verify the iterative solution obtained by the quadratic constrained method. Figure 7 shows the quadratic constrained estimates (cross sign x) obtained by the iterative algorithm of Eq. (27) and the exact geometric solutions (circle o), which indeed match each other perfectly.

Figure 7  Iterative Quadratic Constrained Solution vs. Geometric Solution. [Figure: x-y plot of unconstrained estimates projected onto the road segment; the nonlinear constrained estimates and the geometric solutions coincide.]

Finally, we use Figure 8 to show an example of the Lagrangian multiplier λ as it is calculated iteratively using Eq. (27). The runs for five unconstrained state estimates are plotted in the same figure and, to make them fit, the normalized absolute values of λ are taken. As shown, starting from zero, it typically takes 4 iterations for the algorithm to converge in the example presented.

Figure 8  Convergence of the Iterative Lagrangian Multiplier. [Figure: normalized |λ| vs. iteration number for the first 5 data points.]

In fact, for this simple example of a target traveling along a circle, a closed-form solution can be derived. Assume that W = I_2, M = I_2, m = 0, and m_0 = -r². The nonlinear constraint can be equivalently written as:

  x^T x = r²   (33)

The quadratic constrained estimate given in Eq. (27a) is repeated below for easy reference:

  x̃ = (W + λM)^-1 W x̂ = (1 + λ)^-1 x̂   (34)

where λ is the Lagrangian multiplier.

Bringing Eq. (34) back into Eq. (33) gives:

  x̃^T x̃ = (x̂/(1+λ))^T (x̂/(1+λ)) = r²   (35)

The solution for λ is:

  λ = sqrt(x̂^T x̂)/r - 1 = ||x̂||_2 / r - 1   (36)

where ||·||_2 stands for the 2-norm, or length, of a vector.

Bringing the solution for λ in Eq. (36) back into Eq. (34) gives:

  x̃ = r x̂ / ||x̂||_2   (37)

This indicates that for this particular case, the constraining results in a normalization. This is consistent with the geometric solution shown in the above simulation.

This suggests a simple solution for some practical applications. When a target is traveling along a circular path (or approximately so), one can first find the equivalent center of the circle, around which to establish a new coordinate system, then express the unconstrained solution in the new coordinates and normalize it as the constrained solution, and finally convert it back to the original coordinates. For non-circular but second-order paths, eigenvalue-based scaling may be effected following coordinate translation and rotation in order to apply this circular normalization. Reverse operations are then used to transform back. For applications of high dimensionality, the scalar iterative solution of Eq. (26) may be more efficient.

5 Conclusions

In this paper, we have presented a method for incorporating second-order nonlinear state constraints in a Kalman filter. It is generalized from our earlier solutions for quadratic constraint functions to more general second-order state constraint functions. It can be viewed as an extension of the Kalman filter with linear state equality constraints to nonlinear cases. Simulation results demonstrate the performance of this method as compared to linearized state constraints.

Although the present solution considers a scalar constraint, it is possible to extend the solution to cases where multiple nonlinear constraints or hybrid linear and nonlinear constraints are imposed. Other directions for future work include the search for more efficient root-finding algorithms to solve for the Lagrangian multiplier. With a
quadratic cost function, it is of interest to further extend
the iterative method to explore other types of nonlinear
constraints of practical significance.

Acknowledgements
This research was supported in part under Contracts FA8650-04-M-1624 and FA8650-05-C-1808, which is gratefully acknowledged. Thanks also go to Dr. Rob Williams, Dr.
Devert Wicker, and Dr. Jeff Layne for their valuable
discussions and to Dr. Mike Bakich for his assistance.

References
Anonym, Comments from Reviewer #1, Fusion 2006 Technical Review Committee, April 2006.
B. Anderson and J. Moore, Optimal Filtering, Prentice Hall, Englewood Cliffs, NJ, 1979.
D.G. Luenberger, Linear and Nonlinear Programming (2nd Ed.), Addison-Wesley, 1989.
T.K. Moon and W.C. Stirling, Mathematical Methods and Algorithms for Signal Processing, Prentice Hall, Upper Saddle River, NJ, 2000.
D. Simon and T.L. Chia, "Kalman Filtering with State Equality Constraints," IEEE Trans. on Aerospace and Electronic Systems, 38(1), Jan. 2002, pp. 128-136.
C. Yang, M. Bakich, and E. Blasch, "Nonlinear Constrained Tracking of Targets on Roads," Proc. of the 8th International Conf. on Information Fusion, Philadelphia, PA, July 2005.
C. Yang and D.M. Lin, "Constrained Optimization for Joint Estimation of Channel Biases and Angles of Arrival for Small GPS Antenna Arrays," Proc. of the 60th Annual Meeting of the Institute of Navigation, Dayton, OH, 2004.
