1 Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan 48202, USA
2 Department of Mathematics, Wayne State University, Detroit, Michigan 48202, USA
3 Key Laboratory of Systems and Control, Institute of Systems Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
4 School of Computing, Engineering and Mathematics, University of Western Sydney, Penrith, NSW 2751, Australia
SUMMARY
Feedback systems with communication channels encounter unique challenges. Communication channels
mandate signal sampling and quantization, and introduce errors, data losses, and delays. Consequently,
transmitted signals must be estimated. The signal estimation introduces a dynamic system that interacts
with communication channels and affects the stability and performance of the feedback system. This paper
studies interactions among communications, sampling, quantization, signal estimation, and feedback, in
terms of fundamental stability and performance limitations. Typical empirical-measure-based algorithms
are used for signal estimation under quantized observations. When the sampling interval and signal estimation step size are coordinated, the ODE approach for stochastic approximations provides a suitable platform
for an integrated system analysis for signal estimation, sampling and quantization, and feedback robustness.
Feedback design for enhancing robustness against communication uncertainty and signal estimation dynamics is studied under the new notion of stability margin under signal averaging. Fundamental limitations on
noise attenuation in such an integrated system are derived. Copyright 2013 John Wiley & Sons, Ltd.
Received 10 December 2012; Revised 27 February 2013; Accepted 3 April 2013
KEY WORDS:
1. INTRODUCTION
Feedback systems with communication channels introduce new challenges. Communication
channels mandate signal sampling and quantization, and introduce errors, data losses, and delays.
Consequently, transmitted signals must be estimated. The signal estimators interact with sampling
and quantization of the communication channel and affect the feedback system's stability and performance. Despite the research effort in understanding the impact of communication channels on feedback systems [1–5], it remains unclear how to characterize the interactions among sampling and quantization, signal estimation, and feedback robustness, and how to understand complexity issues such as fundamental performance limitations under communication resource (bandwidth) constraints.
Typically, the system design employs individual specifications: Signal estimators are designed to
attenuate noise effects and recover transmitted signals, sampling rates and quantization are selected
on the basis of communication protocols, and feedback controllers may be designed for stability and
*Correspondence to: Le Yi Wang, Department of Electrical and Computer Engineering, Wayne State University, Detroit,
Michigan 48202, USA.
E-mail: lywang@wayne.edu
noise rejection under certain nominal conditions. However, when these subsystems are combined,
unexpected stability and performance deterioration can occur, and such interactions are not well
understood at present.
This work introduces a framework that integrates essential features of signal estimation (step
size), communications (sampling and quantization), and feedback control (robustness against signal
estimation), and provides design guidelines in such systems. The main contributions of this paper
include the following: (1) It establishes a basic relationship among signal quantization thresholds,
sampling intervals, and updating rates of signal estimation under which the overall feedback system can be analyzed (Theorems 1, 2, and 3). When the sampling interval and signal estimation
step size are coordinated, the overall system's behavior can be studied under the ODE framework
of stochastic approximation, providing a suitable platform in treating many stochastic scenarios in
such systems. (2) It introduces the notion of stability margins against the step size of signal estimation (Section 4.2). When a signal estimation algorithm is implemented under a feedback setting,
this margin introduces irreducible estimation errors and limits feedback performance. To enhance
feedback robustness against communication uncertainties and signal estimator dynamics, this stability margin becomes a suitable measure for robust control design (Section 4.3). (3) Within the
ODE framework, it is shown that the step size in stochastic approximation algorithms serves as a
measure of time complexity and can be meaningfully optimized in a feedback setting (Section 5).
(4) When a communication channel has limited data flow rates (bandwidths), we demonstrate the complexity relationships in selecting quantization levels and sampling rates (Section 6), highlighting how communication uncertainties can be incorporated into this framework for system analysis.
This work is related to several areas of research in systems and control. Networked control systems have been investigated in the past decade [1, 4, 5], in which the impact of communication channels on stability and achievable performance has been an important focus [2, 3, 6]. Channel noise and its implications for fundamental performance limitations were studied in [1, 2, 6]. On the other
hand, data flow rates or bandwidth constraints were shown to be a critical factor in feedback stability
and performance [3, 4]. Our focus differs from these studies by considering the interactions among
quantization, sampling interval, signal estimation speed, and feedback control, and demonstrates
how such integrated studies can be simplified by using the limit ODE. There have been extensive
studies on stochastic approximation [7]. Its utility in analyzing combined signal estimation, communications, and feedback performance in this paper is new. Moreover, this paper provides a new
angle in interpretation and design of the step size in stochastic approximation algorithms. Parameter
estimation with quantized data is a new area that has seen much activity recently [8–11]. Utility and
properties of quantized identification in signal estimation, its ODE representation, and impact on
feedback systems are new. Signal averaging has been used in many aspects of stochastic analysis [12–15]. Background materials on stochastic processes and related topics can be found in [16–18]
and references therein. This paper concentrates on one type of stochastic approximation algorithm,
but the findings can be extended to other algorithms. Stability margin in terms of gain, phase, delay,
and unmodeled dynamics is a standard concept treated in many control textbooks; see, for example, [19–21]. However, robust stability against signal estimation is nonstandard. The optimal robust control design of this paper introduces a modified structure so that the Nevanlinna–Pick interpolation for gain margin computation can be employed [19, 22].
The rest of the work is organized as follows. Section 2 introduces the system structures and illustrates the issues that have motivated this study. Signal estimation problems are studied in Section 3.
We show that signal estimation under quantized observation can be represented by a filter and an
equivalent noise source, which are determined by sampling rates and quantization. It is shown that
when the sampling interval is proportional to the step size of signal estimation, the overall system's behavior can be studied under the ODE framework for stochastic approximations (Theorem 2).
Accuracy of approximation is illustrated.
Impact of signal estimation on feedback system stability is discussed in Section 4. A notion of
stability margin against signal averaging is introduced. It first shows that the stability of the closed-loop system will be guaranteed if the ratio γ = T/ε is sufficiently small (Theorem 4). Detailed analysis to establish the maximum ratio γ_max is conducted (Theorems 5 and 7). This framework provides
a new meaning of the step size in recursive algorithms as an exponential time complexity, which
can be optimized for feedback robustness. Section 5 shows performance limitations in closed-loop
systems. For a given sampling rate, even optimally designed averaging filters can only have limited
benefits in reducing noise effects (Theorem 9). Further reduction of noise effects requires faster
sampling, implying increased costs.
Impact of communication channels on feedback robustness and performance is studied in
Section 6. Typical channel uncertainties are treated, including random transmission errors, packet
losses, and communication latency (Theorems 10, 11, and 12). Section 7 further investigates complexity relationships between quantization and sampling, in terms of feedback stability and noise
attenuation (Theorem 15). For concreteness, an AWGN channel and the binary phase-shift keying
modulation are used to demonstrate such trade-off. Finally, Section 8 summarizes the main findings
and some issues that are related but not resolved in this paper.
s_k = I_k = Σ_{i=1}^{m+1} i · I_{{τ_{i−1} < y_k ≤ τ_i}},   (1)
with τ_0 = y_min and τ_{m+1} = y_max. Here, I_{{a < y_k ≤ b}} denotes the indicator function of the interval (a, b]. Hence, s_k = i, for i = 1, …, m+1, implies that y_k ∈ (τ_{i−1}, τ_i]. The regular sensors may be viewed as the limiting case when m → ∞ and max_i (τ_i − τ_{i−1}) → 0.
The exponentially weighted empirical measure is
ζ_k = (1 − λ) Σ_{l=1}^{k} λ^{k−l} s_l,   (3)
where the weight is normalized so that when s_l ≡ 1, (1 − λ) Σ_{l=0}^{∞} λ^l = 1. This algorithm can also be written recursively as
ζ'_k = ζ'_{k−1} + (1 − λ)(s_k − ζ'_{k−1}) = ζ'_{k−1} + ε(s_k − ζ'_{k−1}),   (4)
which is a stochastic approximation algorithm with a constant step size ε = 1 − λ. For some small
b satisfying 0 < b < 1, define
ζ_k = ζ'_k, if b < ζ'_k < 1 − b;  ζ_k = b, if ζ'_k ≤ b;  ζ_k = 1 − b, if ζ'_k ≥ 1 − b.   (5)
Then, the estimate of y_k is
ŷ_k = F^{−1}(ζ_k).   (6)
(6)
When y_k is a constant, denoted by y_k = θ, an asymptotically efficient estimation algorithm of θ was derived [9, 10], in the sense that the error variance asymptotically achieves the Cramér–Rao (CR) lower bound. [9, 10] use the standard empirical measure ζ_N = (1/N) Σ_{k=1}^{N} s_k.
The projection parameter b is introduced to overcome some transient singularity of the recursive algorithm because the inverse F^{−1}(·) is involved. However, b will not affect the asymptotic properties of the algorithm or the error analysis.
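A minimal sketch of the estimator (3)–(6) for a binary sensor. The standard Gaussian noise, the threshold c = 1, and the constant signal θ = 0.5 are illustrative assumptions, not choices fixed by the text; with them, F(y) = P{y + d_k ≤ c} = Φ(c − y), so F^{−1}(ζ) = c − Φ^{−1}(ζ).

```python
import random
from statistics import NormalDist

def estimate_constant(theta=0.5, c=1.0, lam=0.99, b=1e-6, n=20000, seed=7):
    """Estimate a constant signal theta from binary observations
    s_k = I{theta + d_k <= c}, using recursion (4) with projection (5)
    and inversion (6).  Here F(y) = P{y + d_k <= c} = Phi(c - y)."""
    rng = random.Random(seed)
    Phi = NormalDist()                      # standard Gaussian noise d_k (assumed)
    F_inv = lambda z: c - Phi.inv_cdf(z)    # inverse of F(y) = Phi(c - y)
    zeta = 0.5                              # initial empirical measure
    eps = 1.0 - lam                         # constant step size eps = 1 - lam
    for _ in range(n):
        s = 1.0 if theta + rng.gauss(0.0, 1.0) <= c else 0.0
        zeta += eps * (s - zeta)            # recursion (4)
        zeta = min(max(zeta, b), 1.0 - b)   # projection (5)
    return F_inv(zeta)                      # estimate (6)

print(estimate_constant())
```

The returned estimate fluctuates around θ with variance of order (1 − λ)/(1 + λ), consistent with Lemma 1 below.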
2.3. Interaction of communications, signal estimation, and feedback control
As part of communication systems, sampling and quantization integrate with signal estimation and
controller to impact feedback stability and performance. We use several examples to highlight the
motivating issues for this study.
Impact of the Signal Averaging Weight λ: It is well understood that signal averaging can reduce noise effects. However, signal averaging introduces a dynamic subsystem, which can have detrimental effects on the closed-loop system. We illustrate this with the following example.
Example 1
Consider a continuous-time open-loop system
ẋ = Ax + Bu, y = Cx   (7)
with C = [0, 0, 1] and constant matrices A ∈ R^{3×3}, B ∈ R^{3×1}.
The initial state is x(0) = [1, 1, 10]'. Suppose that the sampling interval is fixed as T = 0.03 and that regular sensors are used. Then, the signal estimator (6) is applied. Figure 2 illustrates the impact of the weight λ on the closed-loop system. When λ is increased, the closed-loop system becomes unstable. This example suggests that in closed-loop applications, λ must be carefully selected.
Impact of Sampling Intervals: In communication systems, due to sharing of channels with other
users of various priority levels, a user experiences variable communication data flow rates. This will
Figure 2. Closed-loop stability with averaging weights λ = 0.98, 0.95, and 0.85 (output versus time).
Figure 3. Closed-loop stability with sampling intervals T = 0.012 and T = 0.03 (output versus time).
result in different sampling rates, each time a communication link is established. This change in data flow rates has a direct impact on feedback stability. In particular, there is a fundamental coupling
between the sampling rates and the step sizes of signal estimation when they are used in feedback
systems.
Example 2
Consider the same plant as in Example 1. The sampling interval now varies from 0.002 to 0.03. The weight of the signal estimator (6) is fixed as λ = 0.98. Figure 3 illustrates the impact of different sampling rates on the closed-loop system. When the sampling rate is decreased, if the signal estimation rate is not adapted accordingly, the closed-loop system may become unstable. This example suggests that in closed-loop applications, sampling and signal estimation must be coordinated.
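Examples 1 and 2 can be reproduced in miniature. The sketch below uses an illustrative scalar plant ẋ = x + u with feedback gain u = −2ŷ and a regular (unquantized, noise-free) sensor; none of these choices come from the paper's third-order example. For this plant, the averaging-margin analysis of Section 4 gives γ_max = 1, so the loop is stable when γ = T/ε < 1 and unstable when γ > 1.

```python
def simulate(T, eps, n, x0=1.0):
    """Scalar plant xdot = x + u (Euler step T), EWMA signal estimator with
    step size eps = 1 - lam, feedback u = -2*yhat.
    Returns (peak |x|, final |x|)."""
    x, yhat = x0, 0.0
    peak = abs(x)
    for _ in range(n):
        u = -2.0 * yhat
        x += T * (x + u)          # plant step
        yhat += eps * (x - yhat)  # signal estimator, recursion (4)
        peak = max(peak, abs(x))
    return peak, abs(x)

# gamma = T/eps = 0.5 < 1: stable;  gamma = T/eps = 2 > 1: unstable
peak_s, final_s = simulate(T=0.001, eps=0.002, n=20000)
peak_u, final_u = simulate(T=0.001, eps=0.0005, n=20000)
print(final_s, peak_u)
```

The same sampling interval with a slower estimator (smaller ε) destabilizes the loop, illustrating the coordination requirement of Example 2.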
Noise Attenuation Limitations: The primary purpose of signal estimation is to attenuate noise effects. In open-loop applications, convergence of estimates is achieved by increasing λ toward 1. However, in a closed-loop setting, signal averaging encounters a fundamental limitation; convergence is not achievable.
Example 3
Consider the following continuous-time system
ẋ = Ax + Bu, y = Cx
with C = [0, 0, 1] and constant matrices A ∈ R^{3×3}, B ∈ R^{3×1}. Since the main concern here is persistent noise attenuation, the system starts from the zero initial condition. The sampling interval is fixed at T = 0.01. The sampled output y_k = y(kT) is subject to a measurement noise d_k, which is i.i.d. and uniformly distributed in [−5, 5]. The signal averaging (6) is used to estimate y_k from the noise-corrupted y_k + d_k. We now vary λ to evaluate its impact on
Figure 4. Output sample variance (×10⁻³) as a function of the averaging weight λ ∈ [0.55, 0.95].
noise attenuation. For each λ, the simulation is run 200 times and the average sample variance is then calculated. This is repeated for different λ values with increment 0.01. Figure 4 illustrates the impact of λ on output noise attenuation. When the value of λ is too low or too high, the sample variance of the output increases, indicating that the output variations have an irreducible lower bound and that an optimal choice of λ lies somewhere in between.
Trade-off between Sampling Rates and Quantization Accuracy: Suppose that the communication link has a limited bandwidth. Under the sampling interval T and quantization precision level m, the data flow rate (bits per second [bps]) is
B = (1/T) log₂(m + 1).
For example, suppose the channel bandwidth for the output is limited at B = 10 kbps. When the number of quantization bits varies over log₂(m + 1) = 3, 4, 5, 6, 7, 8, the corresponding sampling intervals are T = log₂(m + 1)/B = 0.0003, 0.0004, 0.0005, 0.0006, 0.0007, 0.0008. Figure 10 illustrates how different selections of the sampling interval affect the output variance. The optimal T may be selected at the minimum point.
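The rate constraint above is simple arithmetic; a sketch, using the 10 kbps bandwidth of the example:

```python
def sampling_interval(bits, B):
    """Sampling interval T that keeps the data flow rate B = bits/T fixed,
    where bits = log2(m + 1) is the word length per sample."""
    return bits / B

B = 10_000  # channel bandwidth in bits per second
T = [sampling_interval(bits, B) for bits in range(3, 9)]
print(T)  # [0.0003, 0.0004, 0.0005, 0.0006, 0.0007, 0.0008]
```

Finer quantization (more bits per sample) forces a longer sampling interval under the same bandwidth, which is the trade-off analyzed in Section 7.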
The rest of the paper is devoted to developing new approaches to understand rigorously the following issues: (1) How are sampling and signal averaging jointly affecting feedback stability? (2)
For a given sampling interval, what is the robust range of a feedback system in terms of the decaying
rates of signal estimation? (3) How can one design the controller properly to enhance such robustness? (4) When noise attenuation is concerned as a performance index, what is the optimal selection
of the decaying rate of signal estimation? (5) To achieve better performance, how should communication resources (assigned bandwidths) be properly used? Or equivalently, how can we analyze the
relationships between sampling rates and quantization accuracy under bandwidth constraints?
3. THE ODE FRAMEWORK FOR ASYMPTOTIC ANALYSIS
3.1. Asymptotic filter representation of the quantized signal estimator
Signal estimation from (6) introduces an additional subsystem into the feedback loop. We will show
that under an asymptotically efficient algorithm, this subsystem may be asymptotically represented
by a signal averaging filter and an equivalent noise source.
The algorithm (5) is nonlinear. The following asymptotic result on this algorithm was established in [24].
Lemma 1
Let y_k = θ be constant and f = F'. Then
lim_{k→∞} E ŷ_k = θ and lim_{k→∞} E(ŷ_k − θ)² = [(1 − λ)/(1 + λ)] · F(θ)(1 − F(θ))/f²(θ).   (8)
Remark 1
By [10], F(θ)(1 − F(θ))/(N f²(θ)) is the CR lower bound when a uniform window of size N is used. Asymptotically, the exponentially weighted window and the uniform window are related by N = (1 + λ)/(1 − λ); see [24] for derivations. Lemma 1 claims that the algorithm (5) achieves the CR lower bound asymptotically, and hence is asymptotically efficient when λ → 1.
Lemma 1 implies that asymptotically, ŷ_k = y_k + η_k, where the estimation error η_k satisfies E η_k = 0 and E η_k² = [(1 − λ)/(1 + λ)] F(θ)(1 − F(θ))/f²(θ). On the other hand, the same characterization may be derived from a filter
Q_λ(z) = (1 − λ)z/(z − λ)   (9)
that acts on y_k + d_k, with {d_k} being a sequence of i.i.d. random variables satisfying
E d_k = 0 and σ_d² = E d_k² = F(y_k)(1 − F(y_k))/f²(y_k),
since the variance of the filtered noise is
(1 − λ)² Σ_{l=0}^{∞} λ^{2l} σ_d² = [(1 − λ)/(1 + λ)] σ_d².   (10)
This leads to the asymptotically equivalent filter representation of the estimator (6) in Figure 5. We should emphasize that the representation contains two elements: the filter Q_λ and the equivalent noise d_k. The filter is determined solely by the step size of the algorithm, but the noise variance is a function of other parameters such as quantization levels, the distribution function F, the true parameter θ, quantization thresholds, and so on. Q_λ will enter the feedback loop and affect feedback stability. If the feedback system is stable, then closed-loop noise attenuation can be used as a performance measure, for which the variance of d_k will be critical.
Remark 2
Lemma 1 is stated for a constant y_k. When y_k is time varying, additional errors will be introduced. In [24], we established the error bounds caused by time variations. In general, when a smooth continuous-time signal y(t) is sampled with sampling interval T, the difference satisfies |y_{k+1} − y_k| ≤ cT for some constant c, and hence y_k is considered a slowly time-varying signal. In our asymptotic analysis, we focus on the asymptotic limit of stability and noise attenuation performance when T is sufficiently small. Consequently, Lemma 1 provides the error bound used in asymptotic performance analysis on the limit system.
Consider the continuous-time system
ẋ = Ax + Bu, y = Cx.   (11)
It is assumed that the closed-loop system under the negative unity feedback u = −y is stable. Namely, ẋ = Ax + B(−Cx) = (A − BC)x =: A₀x is stable.
For a (sufficiently small) sampling interval T, the overall closed-loop system with signal estimation on y becomes
x_{k+1} = x_k + T(Ax_k + Bu_k),
y_k = Cx_k,
s_k = 1 if y_k + d_k ≤ c, s_k = 0 if y_k + d_k > c,   (12)
ζ_{k+1} = ζ_k + ε(s_k − ζ_k),
ŷ_k = F^{−1}(ζ_k),
u_k = −ŷ_k,
where c is the quantization threshold and F(y) = P{y + d_k ≤ c}.
Theorem 1
Suppose that the sampling interval is proportional to the step size: T/ε = γ. Then, the closed-loop system (12) becomes
x_{k+1} = x_k + εγ(Ax_k + Bu_k), ζ_{k+1} = ζ_k + ε(s_k − ζ_k).   (13)

Proof
Define the signal estimation error e_k = ŷ_k − y_k. The state equation can be modified to
x_{k+1} = x_k + T(Ax_k − Bŷ_k) = x_k + T(Ax_k − B(y_k + e_k)) = x_k + T((A − BC)x_k − Be_k).
Int. J. Adapt. Control Signal Process. 2014; 28:496–522. DOI: 10.1002/acs
The ratio γ = T/ε > 0 will be a critical factor in closed-loop stability. Define the piecewise constant interpolations ζ^ε(t) = ζ_k, x^ε(t) = x_k, for t ∈ [kT, (k+1)T). Then for t, s > 0, it is easily seen that
x^ε(t+s) − x^ε(t) = T Σ_{j=t/T}^{(t+s)/T−1} (A₀x_j − B(F^{−1}(ζ_j) − Cx_j)),
ζ^ε(t+s) − ζ^ε(t) = (T/γ) Σ_{j=t/T}^{(t+s)/T−1} (s_j − ζ_j).
In the previous equations, for notational simplicity, we have used t/T and (t+s)/T to denote the integer parts of t/T and (t+s)/T, respectively, instead of using the floor function notation. To establish the convergence of the interpolated sequences, we work with D([0, ∞) : R^n × R), the space of functions defined on [0, ∞) taking values in R^n × R that are right continuous and have left limits, endowed with the Skorohod topology (for the definition of such a space as well as the Skorohod topology, we refer the reader to [7, Chapter 7]). We first show that the sequence (x^ε(·), ζ^ε(·)) is tight in D([0, ∞) : R^n × R) and that the limit has continuous sample paths w.p.1. Then, we characterize the limit process by martingale averaging techniques. Using [7, Chapter 8], we obtain the following theorem. The verbatim proof is omitted for brevity.
Theorem 2
Under Assumption 1, as ε → 0, (x^ε(·), ζ^ε(·)) converges weakly to (x(·), ζ(·)) such that (x(·), ζ(·)) is a solution of the ordinary differential equation
ẋ = A₀x − B(F^{−1}(ζ) − Cx),
ζ̇ = (1/γ)(F(Cx) − ζ),   (14)
provided that (14) has a unique solution for each initial condition.
The unique equilibrium point of (14) is ζ̄ = F(0) and x̄ = 0. We further derive the locally linearized system of (14) at the equilibrium point.

Theorem 3
The locally linearized system of (14) is
ẋ = Ax + Bu,
y = Cx,   (15)
u̇ = −(1/γ)y − (1/γ)u,
that is, the signal estimation acts on y as the averaging filter
R_γ(s) = 1/(γs + 1).   (16)
Proof
Because A₀ is a stable matrix and hence nonsingular, the equilibrium point of (14), solved from
ζ = F(Cx), A₀x = 0,
is unique: ζ̄ = F(0), x̄ = 0. Define v = ζ − F(0). For stability analysis, we may transform the limit system (14) into a system of x and v, with the equilibrium point x = 0 and v = 0,
ẋ = A₀x − B(F^{−1}(v + F(0)) − Cx),
v̇ = (1/γ)(F(Cx) − F(0) − v).   (17)
The Jacobian matrix of (17) at x = 0, v = 0 is
A_J = [ A    −B/f(0) ;  f(0)C/γ    −1/γ ].   (18)
The linearization of (17) is therefore
ẋ = Ax − (B/f(0))v,
v̇ = (1/γ)f(0)Cx − (1/γ)v.   (19)
Now, by defining u = −v/f(0), the linearized system (19) becomes (15). By (11) and after taking the Laplace transform of the last equation, we obtain (16).
Remark 3
The previous result establishes a basic relationship
λ = e^{−T/γ}   (20)
between the weight λ of the discrete-time averaging (3) and the time constant γ of the continuous-time averaging filter
R_γ(s) = 1/(γs + 1),   (21)
in the sense that max_{t∈[kT,(k+1)T)} |y(t) − y(kT)| = o(T), where o(T)/T → 0 as T → 0. For a simple understanding, note that R_γ(s) has impulse response r(t) = (1/γ)e^{−t/γ}, t ≥ 0. Acting on a continuous-time signal x(t), its output is y(t) = ∫_{−∞}^{t} r(t−σ)x(σ)dσ = ∫_{−∞}^{t} (1/γ)e^{−(t−σ)/γ}x(σ)dσ. For small T, y(t) is approximated at the sampling instants by
y_k = y(kT) = (T/γ) Σ_{i=1}^{k} (e^{−T/γ})^{k−i} x_i + o(T) = (1 − λ) Σ_{i=1}^{k} λ^{k−i} x_i + o(T),
since T/γ = 1 − e^{−T/γ} + o(T) = (1 − λ) + o(T).
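The claim of Remark 3 can be sanity-checked on a unit step, for which the match is exact: with λ = e^{−T/γ}, the discrete averaging (3) produces 1 − λ^k, while the sampled filter R_γ produces 1 − e^{−kT/γ}. The values T = 0.01 and γ = 0.59 below are illustrative. A sketch:

```python
import math

def ewma_step_response(lam, n):
    """Discrete averaging (3) applied to the unit step x_i = 1, i >= 1."""
    y, out = 0.0, []
    for _ in range(n):
        y = lam * y + (1.0 - lam) * 1.0
        out.append(y)
    return out

T, gamma = 0.01, 0.59          # illustrative sampling interval and time constant
lam = math.exp(-T / gamma)     # relationship (20)
disc = ewma_step_response(lam, 500)
cont = [1.0 - math.exp(-(k + 1) * T / gamma) for k in range(500)]
err = max(abs(a - b) for a, b in zip(disc, cont))
print(err)
```

The maximum discrepancy is at machine-precision level, confirming that (20) makes the discrete averaging a sampled version of R_γ on step inputs.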
[Figure: closed-loop outputs under sampling intervals T = 0.05, 0.02, 0.01, and 0.001 (output versus time).]
Now, let P₀ > 0 solve the Lyapunov equation A₀'P₀ + P₀A₀ = −I for the stable matrix A₀. Evaluating A₁'P + PA₁ for the Jacobian A₁ of the limit system, with P built from P₀ and a suitably scaled block for the estimator state, shows that the diagonal terms are negative definite of order 1 while the coupling terms are only of order √γ. Hence the relevant leading principal minors satisfy det(·) > 0 for sufficiently small γ. This implies that A₁ is stable, which completes the proof.
Theorem 4 demonstrates that it is always meaningful to discuss stability margins against γ > 0. We now proceed to establish the largest γ for stability of the closed-loop system.
4.2. Averaging stability margin
The following analysis shows that the quantity γ = T/(1 − λ) = T/ε determines a certain robustness margin of the closed-loop system. This robust stability margin will be called the averaging margin, and it is different from the typical gain margin, phase margin, delay margin, or robustness against additive or multiplicative unstructured uncertainty. The feedback system in Figure 5 can be written as x_k = G(z)e_k + d_k, e_k = r_k − Q_λ(z)x_k. The feedback system is stable if all solutions of 1 + G(z)Q_λ(z) = 0 are inside the open unit disk.
Definition 1
The discrete-time averaging margin λ_max is defined as the largest value λ₀ such that the closed-loop system is robustly stable against Q_λ(z) = (1 − λ)z/(z − λ) for all 0 ≤ λ < λ₀. If the feedback system is stable for all λ < 1, we denote λ_max(G) = 1.
On the other hand, the stability of the limit ODE (17) is determined by the locally linearized
system (16).
Definition 2
The continuous-time averaging margin γ_max of the system (16) is defined as the largest value γ₀ such that the continuous-time closed-loop system is robustly stable against the averaging filters R_γ for all γ < γ₀. If the feedback system is stable for all γ, we denote γ_max = ∞.
For a small T, the maximum λ_max, or equivalently the minimum ε_min = 1 − λ_max, for closed-loop stability is a linear function of T. There is a fundamental relationship between the continuous-time averaging margin γ_max and the discrete-time averaging margin λ_max of its sampled system.
Corollary 1
If the averaging margin in the continuous-time domain is γ_max, then
lim_{T→0} T/ε_min = lim_{T→0} −T/ln λ_max = γ_max.
Proof
This follows from the relationship λ = e^{−T/γ} and lim_{ε→0} −ε/ln(1 − ε) = 1.
Lemma 2
Suppose L(s) = N(s)/D(s), where N(s) and D(s) are polynomials. Then, γ_max is the smallest γ > 0 that makes the polynomial
γsD(s) + D(s) + N(s) = 0   (22)
marginally stable.

Proof
γ_max is the largest γ > 0 before the closed-loop system becomes unstable. The characteristic equation of the closed-loop system is 1 + R_γ(s)L(s) = 1 + [1/(γs + 1)] · [N(s)/D(s)] = 0, or γsD(s) + D(s) + N(s) = 0. Because the closed-loop system is stable when γ = 0, γ_max is the value at which the polynomial first becomes marginally stable as γ is increased from 0.
Example 5
Suppose G(s) = 2/(s − 1). Then, the closed-loop characteristic equation γsD(s) + D(s) + N(s) = 0 is γs(s − 1) + (s − 1) + 2 = γs² + (1 − γ)s + 1 = 0. This results in γ_max = 1.
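Example 5 can be verified numerically from the roots of γs² + (1 − γ)s + 1, which cross the imaginary axis exactly at γ = 1. A sketch:

```python
import cmath

def max_real_root(gamma):
    """Largest real part among the roots of gamma*s^2 + (1-gamma)*s + 1 = 0,
    the characteristic polynomial of Example 5."""
    a, b, c = gamma, 1.0 - gamma, 1.0
    disc = cmath.sqrt(b * b - 4 * a * c)
    return max(((-b + disc) / (2 * a)).real, ((-b - disc) / (2 * a)).real)

print(max_real_root(0.9))  # negative: stable
print(max_real_root(1.1))  # positive: unstable
```

For γ slightly below 1 the closed loop is stable, and slightly above 1 it is not, matching γ_max = 1.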
Theorem 5
γ_max(L) of L(s) is the gain margin of W(s) = sD(s)/(D(s) + N(s)).
Proof
(22) can be equivalently written as
1 + γ · sD(s)/(D(s) + N(s)) = 0.   (23)
Remark 4
First, γ_max can be obtained by using the Routh–Hurwitz method. Also, (23) is in a standard form for using the root locus technique. So, we may plot the root locus of the system sD(s)/(D(s) + N(s)) (it is an improper system) and detect the γ value that reaches marginal stability, which will be γ_max. The root locus plot starts at the poles of sD(s)/(D(s) + N(s)), which are precisely the poles of the closed-loop system without the averaging filter. Because the closed-loop system is stable, for small γ, the closed-loop system with the filter will remain stable. The root locus plot moves toward the zeros of sD(s)/(D(s) + N(s)), which are the poles of the open-loop system. Hence, if the open-loop system is unstable, the exponential averaging margin is always finite. Alternatively, one may obtain γ_max from the Bode plot of sD(s)/(D(s) + N(s)).
Example 6
Suppose G(s) = (s + 2)/(s − 1). Then, W(s) = sD(s)/(D(s) + N(s)) = (s² − s)/(2s + 1). The gain margin can be obtained by using the MATLAB function margin [28], which gives γ_max = 2, or by plotting the Bode plot, which gives γ_max = 6.02 dB = 2.
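For Example 6, the margin can also be found by bisection on the closed-loop polynomial γsD(s) + D(s) + N(s) = γs(s − 1) + (s − 1) + (s + 2) = γs² + (2 − γ)s + 1, whose roots leave the open left half plane at γ = 2. A sketch:

```python
import cmath

def stable(gamma):
    """True if all roots of gamma*s^2 + (2-gamma)*s + 1 = 0 (Example 6's
    closed-loop polynomial) have negative real parts."""
    a, b, c = gamma, 2.0 - gamma, 1.0
    d = cmath.sqrt(b * b - 4 * a * c)
    roots = ((-b + d) / (2 * a), (-b - d) / (2 * a))
    return max(r.real for r in roots) < 0

lo, hi = 0.1, 10.0   # stable / unstable bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if stable(mid) else (lo, mid)
print(lo)  # approximately 2, the averaging margin of Example 6
```

This numerical route mirrors the root locus procedure of Remark 4 without any toolbox dependency.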
4.3. Optimal robustness against signal averaging
Suppose that the open-loop system L.s/ D P .s/C.s/ contains a plant P .s/ and a controller C.s/.
In this section, M will denote the set of all stable systems. For derivation simplicity, we assume that
P .s/ does not have imaginary poles or zeros.
Assumption 2
(1) C(s) internally stabilizes P(s). In other words, 1 + PC is invertible in M. (2) P(s) has a coprime factorization in M, namely P(s) = N(s)/D(s) with N, D ∈ M, such that there exist X, Y ∈ M satisfying NX + DY = 1.
Suppose now that an averaging filter R_γ = 1/(γs + 1) is applied, resulting in a feedback system with the expanded plant PR_γ and controller C.

Theorem 6
C internally stabilizes PR_γ if and only if 1 + γs/(1 + PC) is invertible in M.
Proof
From the principle of internal stability [19, 22], we conclude that C internally stabilizes the plant with the filter R_γ if and only if 1 + R_γPC = 1 + [1/(γs + 1)]PC is invertible in M. This is equivalent to the invertibility of γs + 1 + PC after removing the stable, stably invertible factor 1/(γs + 1). Because
1 + γs/(1 + PC) = (γs + 1 + PC) · [1/(1 + PC)]
and 1/(1 + PC) ∈ M by Assumption 2, this is equivalent to 1 + γs/(1 + PC) being invertible in M.
By the Youla parametrization [19], all stabilizing controllers for P can be parameterized by C = (X + DV)/(Y − NV), V ∈ M. As a result, the internally stabilizing controllers for P(s)R_γ(s) can be expressed as those with V ∈ M such that 1 + γs/(1 + PC) = 1 + γsD(s)(Y − NV) is invertible in M.
C
Definition 3
The system PR_γ is said to be robustly stabilizable for [0, γ₀) if there exists V ∈ M such that
1 + γsD(s)(Y − NV)   (24)
is invertible in M for all γ ∈ [0, γ₀).
Write S(s) = D(s)(Y − NV) = 1/(1 + PC) for the corresponding sensitivity function. At the open right-half-plane poles p_i and zeros ζ_j of P(s), S satisfies the interpolation conditions
S(p_i) = 0, S(ζ_j) = 1.   (25)
Let H be the closed right-half plane, D be the open unit disk, and Ω = {s : s is real and s ≤ −1/γ}. Let Ω^c be the complement of Ω in the complex plane. Then,
φ(s) = (1 − √(1 + γs))/(1 + √(1 + γs))
is a conformal mapping from Ω^c onto D. Define
ψ(s) = φ(sS(s)).
Theorem 7
Given φ(s), S(s) satisfies the interpolation conditions (25) if and only if
ψ(s) = sψ₀(s), ψ₀ ∈ M, ψ₀(p_i) = 0, ψ₀(ζ_j) = φ(ζ_j)/ζ_j.   (26)

Proof
If S(s) satisfies the interpolation conditions (25), then ψ(0) = φ(0) = 0, which implies that ψ(s) can be expressed as ψ(s) = sψ₀(s). Because φ(s) is a conformal mapping, ψ₀(s) ∈ M. Moreover, ψ(p_i) = φ(p_i S(p_i)) = φ(0) = 0, so ψ₀(p_i) = 0; and ψ(ζ_j) = φ(ζ_j S(ζ_j)) = φ(ζ_j), so ψ₀(ζ_j) = φ(ζ_j)/ζ_j.
Conversely, ψ(s) = sψ₀(s) with ψ₀ ∈ M implies that q(s) = φ^{−1}(sψ₀(s)) is analytic and satisfies q(0) = 0. This implies that we can write q(s) = sS(s). In addition, q(p_i) = φ^{−1}(p_iψ₀(p_i)) = φ^{−1}(0) = 0, so S(p_i) = 0; and q(ζ_j) = φ^{−1}(ζ_jψ₀(ζ_j)) = φ^{−1}(φ(ζ_j)) = ζ_j, so S(ζ_j) = 1.
Corollary 2
The system PR_γ is robustly stabilizable for [0, γ₀) if and only if, for
φ(s) = (1 − √(1 + γ₀s))/(1 + √(1 + γ₀s)),
there exists an analytic function ψ₀(s) : H → D such that ψ₀(p_i) = 0, ψ₀(ζ_j) = φ(ζ_j)/ζ_j.
Corollary 2 claims that γ_max is the largest γ₀ for which the solution ψ₀(s) in Theorem 7 exists. More concretely, this may be expressed in terms of Nevanlinna–Pick matrices.
Arrange the right-half-plane poles and zeros and their corresponding interpolation values by a₁ = p₁, b₁ = 0, …, a_m = p_m, b_m = 0, a_{m+1} = ζ₁, b_{m+1} = φ(ζ₁)/ζ₁, …. Denote the Nevanlinna–Pick matrix
Λ = [Λ_ij] with Λ_ij = (1 − b_i b̄_j)/(a_i + ā_j).   (27)
Theorem 8
γ_max is the largest γ for which Λ ≥ 0.

Proof
This follows from the well-known Nevanlinna–Pick theorem.
Example 7
If a plant has one unstable pole p = 1 and one unstable zero ζ = 2, then we have a₁ = 1, b₁ = 0, a₂ = 2, and b₂ = φ(2)/2, with |b₂| = (√(1 + 2γ) − 1)/(2(1 + √(1 + 2γ))). Then
Λ = [ 1/2   1/3 ;  1/3   (1 − |b₂|²)/4 ].
γ_max solves det Λ = (1 − |b₂|²)/8 − 1/9 = 0, that is, (√(1 + 2γ) − 1)/(√(1 + 2γ) + 1) = 2/3. Hence, γ_max = 12.
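Example 7 can be checked by sweeping γ and testing positivity of the 2×2 Nevanlinna–Pick matrix; its determinant crosses zero at γ = 12. A sketch:

```python
import math

def pick_det(gamma):
    """Determinant of the 2x2 Pick matrix of Example 7:
    a1 = 1, b1 = 0, a2 = 2, |b2| = (r-1)/(2(r+1)) with r = sqrt(1+2*gamma).
    det = (1/2)*(1-|b2|^2)/4 - (1/3)^2."""
    r = math.sqrt(1.0 + 2.0 * gamma)
    b2 = (r - 1.0) / (2.0 * (r + 1.0))
    return 0.5 * (1.0 - b2 * b2) / 4.0 - (1.0 / 3.0) ** 2

lo, hi = 1.0, 50.0   # pick_det(1) > 0 > pick_det(50)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if pick_det(mid) > 0 else (lo, mid)
print(lo)  # approximately 12
```

At γ = 12, r = 5, |b₂| = 1/3, and det Λ = (1/2)(8/9)/4 − 1/9 = 0 exactly, matching the closed-form margin.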
Define
κ = inf_{0<γ<γ_max} ‖M_γ‖²_{L²},   (28)
where M_γ is the closed-loop transfer function seen by the equivalent noise.

Theorem 9
lim_{T→0} ρ_e = κ,   (29)
where ρ_e is the closed-loop noise reduction ratio.
Proof
It is well understood that for small T, M̃ is the sampled system of M. We first establish a relationship between the L² norm of M and the l² norm of M̃. Suppose that the disturbance sequence d_k passes through a zero-order hold of interval T to become d(t). The continuous-time system M is stable with impulse response b(t). Then, y(t) = ∫₀^t b(t − σ)d(σ)dσ. Suppose δ_k is a pulse sequence, δ₀ = 1 and δ_k = 0, k ≠ 0. Then, d(t) = 1 for 0 ≤ t < T, and d(t) = 0 otherwise. Under this input, y(t) = ∫₀^T b(t − σ)dσ. Hence, the sampled values of y(t), which form the pulse response of the sampled system, become y_k = y(kT) = ∫₀^T b(kT − σ)dσ, which for small T can be approximated by y_k = Tb(kT).
We note that for small T, ‖M‖²_{L²} = ∫₀^∞ b²(t)dt = T Σ_{k=0}^∞ b²(kT) + o(T). Consequently, if we use g_k to denote the pulse response of M̃, we have g_k = Tb(kT) and ‖M̃‖²_{l²} = ‖g_k‖²_{l²} = T² Σ_{k=0}^∞ b²(kT) = T(‖M‖²_{L²} + o(T)/T).
From y_k = Σ_{i=0}^k g_{k−i}d_i, if d_k is i.i.d. with mean zero and variance σ², then
σ_k² = Σ_{i=0}^k Σ_{j=0}^k g_{k−i}g_{k−j} E d_i d_j = σ² Σ_{i=0}^k g²_{k−i} ≤ σ²‖g_k‖²_{l²} = σ²T(‖M‖²_{L²} + o(T)/T).
If ‖M‖²_{L²} is optimized, then ‖M‖²_{L²} = κ as in (28). Consequently, the noise reduction ratio can be expressed as
ρ_e = κ + o(T)/T → κ, as T → 0.   (30)
Example 8
Consider a system \(L(s) = \frac{s^2 + 2s - 1}{s^2 - s + 4}\). The closed-loop system's characteristic equation is \(\tau s D(s) + D(s) + N(s) = \tau s^3 + (2 - \tau)s^2 + (1 + 4\tau)s + 3 = 0\). It can be calculated by the Routh-Hurwitz method that \(\tau_{\max} = 1.366\). The \(L_2\) norm of \(M_\tau\) as a function of \(\tau\) is plotted in Figure 7. The optimal averaging occurs at \(\tau = 0.59\) with the optimal sensitivity \(\kappa = \|M_{0.59}\|_{L_2} = 2.5263\).
From the relationship \(\lambda = e^{-\varepsilon/\tau}\), for a small sampling interval \(\varepsilon\), \(\lambda = e^{-\varepsilon/\tau} = e^{-\varepsilon/0.59} = e^{-1.7\varepsilon}\) is the optimal rate for averaging in the discrete-time domain. For example, if \(\varepsilon = 0.01\), we obtain \(\lambda = 0.983\). In other words, \(\mu = 1 - \lambda = 0.017\) is the optimal step size for signal estimation in (4).
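As a sanity check, the Routh-Hurwitz boundary and the step-size conversion above can be reproduced numerically. The sketch below assumes the characteristic polynomial as reconstructed here, \(\tau s^3 + (2-\tau)s^2 + (1+4\tau)s + 3\); it locates \(\tau_{\max}\) by bisection and converts \(\tau_{\mathrm{opt}} = 0.59\) into the discrete-time rate \(\lambda\) and step size \(\mu\):

```python
import math

def routh_stable(tau):
    # Characteristic polynomial: tau*s^3 + (2 - tau)*s^2 + (1 + 4*tau)*s + 3
    a3, a2, a1, a0 = tau, 2.0 - tau, 1.0 + 4.0 * tau, 3.0
    # Third-order Routh-Hurwitz test: positive coefficients and a2*a1 > a3*a0
    return a3 > 0 and a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a3 * a0

# Bisection for the stability boundary tau_max on (1, 2)
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if routh_stable(mid):
        lo = mid
    else:
        hi = mid
tau_max = 0.5 * (lo + hi)
print(tau_max)            # about 1.366, i.e. (1 + sqrt(3))/2

# Discrete-time averaging rate for tau_opt = 0.59 and sampling interval eps = 0.01
tau_opt, eps = 0.59, 0.01
lam = math.exp(-eps / tau_opt)
mu = 1.0 - lam
print(lam, mu)            # about 0.983 and 0.017
```

The bisection recovers the boundary of the Routh condition \((2-\tau)(1+4\tau) > 3\tau\), whose positive root is \((1+\sqrt{3})/2 \approx 1.366\), matching the value quoted in the example.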
Figure 7. The \(L_2\) norm of \(M_\tau\) (the \(H_2\) norm of \(FG/(1+FG)\)) as a function of \(\tau\).
\(\sum_{j=1}^{3} \beta_{ij} = 1\), \(i = 1, 2\). Here,
\[
\beta_{11} = P\{x_k = 0 \mid s_k = 0\}, \quad \beta_{12} = P\{x_k = 1 \mid s_k = 0\}, \quad \beta_{13} = P\{x_k = \star \mid s_k = 0\},
\]
\[
\beta_{21} = P\{x_k = 0 \mid s_k = 1\}, \quad \beta_{22} = P\{x_k = 1 \mid s_k = 1\}, \quad \beta_{23} = P\{x_k = \star \mid s_k = 1\},
\]
where \(\star\) denotes the data-loss symbol. Let \(p = P\{x_k = 0 \mid s_k = 0\} = P\{x_k = 1 \mid s_k = 1\}\), \(q = P\{x_k = \star \mid s_k = 0\} = P\{x_k = \star \mid s_k = 1\}\), \(p^s = P\{s_k = 1\}\), \(p^x = P\{x_k = 1\}\). For a symmetric channel, we have \(\beta_{13} = \beta_{23}\) (the probability of data loss) and \(\beta_{11} = \beta_{22}\) (the probability of correct data transmission). Then,
\[
\Pi = \begin{bmatrix} p & 1-p-q & q \\ 1-p-q & p & q \end{bmatrix}.  \tag{31}
\]
Assumption 3
\(2p + q - 1 \neq 0\).
The case \(2p + q - 1 = 0\) means that \(p = (1-q)/2\). This implies that if the data are not lost (which has probability \(1-q\)), then the channel output has an equal probability of receiving 1 or 0 regardless of the input symbol. This is the singular case, and the channel does not transmit any information, as evidenced in Shannon's information theory. Because \(p^x = p\, p^s + (1-p-q)(1-p^s) = (2p+q-1)p^s + 1-p-q\), under Assumption 3, \(p^s\) can be calculated from \(p^x\):
\[
p^s = \frac{p^x - (1-p-q)}{2p+q-1}.  \tag{32}
\]
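The inversion (32) is a one-line computation. A minimal sketch follows; the channel parameter values are illustrative, not from the paper:

```python
def ps_from_px(px, p, q):
    # Invert the forward map p_x = (2p + q - 1) * p_s + (1 - p - q),
    # valid under Assumption 3 (2p + q - 1 != 0)
    a = 2.0 * p + q - 1.0
    assert abs(a) > 1e-12, "singular channel: 2p + q - 1 = 0"
    return (px - (1.0 - p - q)) / a

# Round-trip check with hypothetical channel parameters
p, q = 0.9, 0.05          # correct-transmission and data-loss probabilities
ps = 0.3                  # P{s_k = 1}
px = (2.0 * p + q - 1.0) * ps + (1.0 - p - q)   # forward map
print(ps_from_px(px, p, q))                     # recovers 0.3
```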
In addition, communication channels introduce time delays. Suppose that a time delay of \(\delta\) seconds is in effect in data transmission at a given time. Under the sampling interval \(\varepsilon\), this time delay is translated into \(n_d = \delta/\varepsilon\) steps of delay in discrete time. For notational simplicity, assume that \(n_d\) is an integer. Note that for any given \(\delta\), \(n_d \to \infty\) when \(\varepsilon \to 0\). In other words, for a meaningful discussion of the effect of time delay on systems in asymptotic analysis, \(n_d\) must be varied so that \(n_d \varepsilon = \delta\) is a constant.
6.1. Impact of transmission errors and packet losses
In many practical systems with communication channels, it is desirable to reduce communication
power and bandwidth consumption, and perform signal processing at the receiving side. We shall
consider the case of the binary scheme for quantization and DMC communication channels. Let
\(w_k = H(s_k)\) represent the channel.
Signal estimation and feedback control algorithms are modified from (12) to
\[
\begin{aligned}
x_{k+1} &= x_k + \varepsilon (A x_k + B u_k) \\
y_k &= C x_k \\
s_k &= \begin{cases} 1, & y_k + d_k \le c \\ 0, & y_k + d_k > c \end{cases} \\
w_k &= H(s_k) \\
\widetilde\theta_{k+1} &= \widetilde\theta_k + \mu \bigl( w_k - \widetilde\theta_k \bigr) \\
\theta_k &= \frac{\widetilde\theta_k - (1-p-q)}{2p+q-1} \\
\widehat y_k &= c - F^{-1}(\theta_k) \\
u_k &= -\widehat y_k.
\end{aligned}  \tag{33}
\]
Remark 5
In this algorithm, the channel parameters p and q are assumed to be known. Joint identification of
the signal yk and the channel parameters p and q can be derived directly from the joint identification
algorithms in [11]. This will not be included here.
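A minimal Monte Carlo sketch of the estimation portion of (33) is given below, under illustrative assumptions not fixed by the paper: a constant output \(y_k = 0.4\), a standard Gaussian dither (so \(F = \Phi\), via `statistics.NormalDist`), threshold \(c = 1\), hypothetical channel parameters \(p, q\), and the convention that a lost sample contributes 0 to the average:

```python
import random, statistics

random.seed(1)
y_true, c = 0.4, 1.0             # constant true output and quantizer threshold
p, q = 0.9, 0.05                 # correct-transmission and data-loss probabilities
a, b = 2 * p + q - 1, 1 - p - q
F = statistics.NormalDist()      # dither d_k ~ N(0,1), so P{s_k = 1} = F(c - y)
mu = 0.002                       # step size
theta_tilde = 0.5

for _ in range(200_000):
    d = random.gauss(0.0, 1.0)
    s = 1 if y_true + d <= c else 0
    u = random.random()
    if u < p:
        w = s                    # correct transmission
    elif u < 1.0 - q:
        w = 1 - s                # bit flip (probability 1 - p - q)
    else:
        w = None                 # data loss (probability q); counts as 0 below
    theta_tilde += mu * ((1 if w == 1 else 0) - theta_tilde)

theta = (theta_tilde - b) / a    # undo the channel bias, cf. (32)
y_hat = c - F.inv_cdf(theta)     # invert P{s_k = 1} = F(c - y)
print(y_hat)                     # close to the true value 0.4
```

The exponentially weighted average \(\widetilde\theta_k\) tracks \(p^x\); the channel correction maps it back to \(p^s = F(c - y)\), which the inverse CDF turns into the signal estimate.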
Definition 4
\(y_k\) is slowly varying if \(|y_k - y_{k-1}| \le r\) for some small \(r\).
By [24], we have the following result.
Lemma 3
Under Assumption 1 and the condition of Definition 4, if \(\lambda\) is selected as a function of \(r\) such that \(1 - \lambda(r) \to 0\) and \(\sqrt{r}/(1 - \lambda(r)) \to 0\) as \(r \to 0\), the algorithm (5) has the following property:
\[
\lim_{r \to 0} \frac{1 + \lambda(r)}{1 - \lambda(r)}\, E\bigl(\widehat y_k - y_k\bigr)^2 = \frac{F(c - y_k)\bigl(1 - F(c - y_k)\bigr)}{f^2(c - y_k)}.
\]
Theorem 10
The asymptotic signal estimation error is
\[
\lim_{\lambda \to 1} \frac{1+\lambda}{1-\lambda}\, E\bigl(\widehat y_k - y_k\bigr)^2 = \frac{\bigl(aF(c-y_k)+b\bigr)\bigl(1-(aF(c-y_k)+b)\bigr)}{a^2 f^2(c-y_k)},  \tag{34}
\]
where \(a = 2p+q-1\) and \(b = 1-p-q\).
Proof
(34) follows from Lemma 3 with
\[
\lim_{\lambda \to 1} \frac{1+\lambda}{1-\lambda}\, E\bigl(\widehat y_k - y_k\bigr)^2 = \frac{p^x (1-p^x)}{(dp^x/dy_k)^2}
= \frac{(ap^s+b)\bigl(1-(ap^s+b)\bigr)}{a^2 (dp^s/dy_k)^2}
= \frac{\bigl(aF(c-y_k)+b\bigr)\bigl(1-(aF(c-y_k)+b)\bigr)}{a^2 f^2(c-y_k)}.
\]
Thus, the equivalent noise in the limit system has variance \(\frac{(aF(c-y_k)+b)(1-(aF(c-y_k)+b))}{a^2 f^2(c-y_k)}\), and \(\{d_k\}\) is a sequence of i.i.d. random variables.
Remark 6
We point out that communication errors and packet losses increase the variance of the equivalent
noise, but do not alter the structure of the closed-loop system. Consequently, under Assumption 3,
the stability analysis and performance trade-off presented in the previous sections remain valid here.
6.2. Impact of communication delays
Communication channels always encounter time delays. Communication latency means that the data point \(s_k\) sent at time \(t_k\) will arrive at the receiver buffer at \(t_k^r = t_k + \widetilde\delta_k\). Then, \(w_k\) is received at \(t_k^r = \max\{t_{k-1}^r, t_k + \widetilde\delta_k\}\): if \(s_k\) encounters a smaller delay than \(s_{k-1}\), it will be considered as received immediately after \(w_{k-1}\) is received.
Suppose the channel is subject to a constant but unknown time delay \(\delta\). For simplicity, we focus on the time delay and assume that the channel has no other uncertainty. For a small sampling interval \(\varepsilon\), the overall closed-loop system with signal estimation on \(y\) becomes
\[
\begin{aligned}
x_{k+1} &= x_k + \varepsilon (A x_k + B u_k) && \text{(plant)} \\
y_k &= C x_k \\
s_k &= \begin{cases} 1, & y_k + d_k \le c \\ 0, & y_k + d_k > c \end{cases} && \text{(quantization)} \\
w_k &= s_{k - \delta/\varepsilon} && \text{(channel delay)} \\
\theta_{k+1} &= \theta_k + \mu (w_k - \theta_k) && \text{(signal averaging)} \\
\widehat y_k &= c - F^{-1}(\theta_k) \\
u_k &= -\widehat y_k.
\end{aligned}  \tag{35}
\]
Suppose the sampling interval is proportional to the step size: \(\mu = \varepsilon/\tau\). Define piecewise-constant interpolations \(\theta^\varepsilon(t) = \theta_k\), \(w^\varepsilon(t) = w_k\), \(y^\varepsilon(t) = y_k\), \(x^\varepsilon(t) = x_k\), \(t \in [k\varepsilon, k\varepsilon + \varepsilon)\). Then for \(t, s > 0\), it is easily seen that
\[
x^\varepsilon(t+s) - x^\varepsilon(t) = \varepsilon \sum_{k=t/\varepsilon}^{(t+s)/\varepsilon - 1} \Bigl[ A_0 x_k - B\bigl(c - F^{-1}(\theta_k) - C x_k\bigr) \Bigr]
\]
\[
\theta^\varepsilon(t+s) - \theta^\varepsilon(t) = \mu \sum_{k=t/\varepsilon}^{(t+s)/\varepsilon - 1} \bigl( s_{k-\delta/\varepsilon} - \theta_k \bigr).
\]
In the previous equation, for notational simplicity, we used \(t/\varepsilon\) and \((t+s)/\varepsilon\) to denote the integer parts of \(t/\varepsilon\) and \((t+s)/\varepsilon\), respectively. In what follows, denote
\[
x_t = \{ x(t+\sigma) : -\delta \le \sigma \le 0 \} \quad \text{for all } t \ge 0,  \tag{36}
\]
Theorem 11
Under condition (1), as \(\varepsilon \to 0\), \((x^\varepsilon(\cdot), y^\varepsilon(\cdot), w^\varepsilon(\cdot), \theta^\varepsilon(\cdot))\) converges weakly to \((x(\cdot), y(\cdot), w(\cdot), \theta(\cdot))\) such that \((x(\cdot), \theta(\cdot))\) is a solution to the differential equation
\[
\begin{cases}
\dot x(t) = A_0 x(t) - B\bigl(c - F^{-1}(\theta(t)) - C x(t)\bigr) \\
\dot\theta(t) = \dfrac{1}{\tau}\bigl(F(c - w(t)) - \theta(t)\bigr),
\end{cases}  \tag{37}
\]
provided that (37) has a unique solution for each initial data (initial segments) \(x(0) = x_0\) and \(\theta_0 \in C[-\delta, 0]\).
The limit system is the same feedback system as (16) except that a delay is inserted. Inserting a time delay \(\delta\) into (16) leads to
\[
y = L(s) e^{-\delta s} u, \quad u = -R(s) y.  \tag{38}
\]
The delay margin is
\[
\delta_{\max}(\tau) = \frac{\phi_m(\tau)}{\omega_g(\tau)},
\]
where \(\phi_m(\tau)\) is the phase margin and \(\omega_g(\tau)\) is the gain crossover frequency.
Example 9
Suppose \(L(s) = 20/(s+1)\). For this system, it can be verified that \(\tau_{\max} = \infty\). Then, for \(L(s)R_\tau(s) = \frac{20}{(s+1)(\tau s+1)}\), the gain crossover frequency can be derived from \(\frac{20}{\sqrt{\omega^2+1}\,\sqrt{\tau^2\omega^2+1}} = 1\), which gives
\[
\omega_g = \sqrt{\frac{-(1+\tau^2) + \sqrt{(1+\tau^2)^2 + 1596\,\tau^2}}{2\tau^2}},
\]
and the phase margin is \(\phi_m = \pi - \tan^{-1}(\omega_g) - \tan^{-1}(\tau\omega_g)\). The delay margin is then
\[
\delta_{\max}(\tau) = \frac{\pi - \tan^{-1}(\omega_g) - \tan^{-1}(\tau\omega_g)}{\sqrt{\dfrac{-(1+\tau^2) + \sqrt{(1+\tau^2)^2 + 1596\,\tau^2}}{2\tau^2}}}, \quad 0 < \tau < \infty.
\]
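The crossover and delay-margin formulas of Example 9 are easy to evaluate numerically. The sketch below parameterizes the loop gain as \(K\) (here \(K = 20\)), so that \(4\tau^2(K^2-1)\) reproduces the \(1596\,\tau^2\) term:

```python
import math

def crossover_and_delay_margin(tau, K=20.0):
    # Gain crossover of L(s)R_tau(s) = K / ((s+1)(tau*s+1)):
    # (w^2+1)(tau^2*w^2+1) = K^2  =>  tau^2*w^4 + (1+tau^2)*w^2 + 1 - K^2 = 0
    a = 1.0 + tau * tau
    wg = math.sqrt((-a + math.sqrt(a * a + 4.0 * tau * tau * (K * K - 1.0)))
                   / (2.0 * tau * tau))
    pm = math.pi - math.atan(wg) - math.atan(tau * wg)   # phase margin
    return wg, pm / wg                                   # delay margin = pm / wg

wg, dmax = crossover_and_delay_margin(1.0)
print(wg, dmax)   # for tau = 1, wg = sqrt(19); the delay margin is positive
```

For \(\tau = 1\), the quadratic in \(\omega^2\) gives \(\omega_g^2 = (-2 + \sqrt{4 + 1596})/2 = 19\), and the loop gain at \(\omega_g\) is exactly 1, which the test below confirms.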
is small. This
cannot be achieved under a fixed threshold when the range of yk is large. One possible remedy is
to increase quantization levels, leading to the quantization scheme expressed in (1).
7.1. Signal estimation errors under quantized observations
The quantization scheme in (1) increases the word length for each sample from 1 bit to log2 .m C 1/
bits. On the other hand, it can reduce signal estimation errors. Clarifying this trade-off is of essential
importance in understanding how to use communication resources efficiently.
For \(j = 1, \ldots, m+1\), define \(p_j^s(y_k) = P\{c_{j-1} < y_k + d_k \le c_j\} = F(c_j - y_k) - F(c_{j-1} - y_k)\) and \(h_j^s(y_k) = \frac{d p_j^s(y_k)}{d y_k}\). For \(y_k = y\), a constant, denote \(h^s(y) = \bigl(h_1^s(y), \ldots, h_{m+1}^s(y)\bigr)'\).
The following conclusions are essential for our pursuit here. Because they can be derived from [10] with the uniform data windows replaced by exponential windows (\(N = (1+\lambda)/(1-\lambda)\)), their proofs are omitted.
Lemma 4
The CR lower bound for estimating \(y\) based on observations on \(\{s_k\}\) is
\[
\sigma_s^2 = \frac{1-\lambda}{1+\lambda} \left( \sum_{j=1}^{m+1} \frac{\bigl(h_j^s(y)\bigr)^2}{p_j^s} \right)^{-1}.
\]
Theorem 13
The signal estimation scheme in (33) is asymptotically efficient in the sense that
\[
\frac{1+\lambda}{1-\lambda}\bigl(\sigma^2 - \sigma_s^2\bigr) \to 0
\]
as \(\lambda \to 1\), where \(\sigma^2\) is the error variance of \(y_k - \widehat y_k\). Moreover,
\[
\sup_{y \in [y_{\min}, y_{\max}]} \frac{1+\lambda}{1-\lambda}\,\sigma_s^2 \le \left( \inf_{y \in [y_{\min}, y_{\max}]} \sum_{j=1}^{m+1} \frac{\bigl(h_j^s(y)\bigr)^2}{p_j^s} \right)^{-1} = \sup_{y \in [y_{\min}, y_{\max}]} \left( \sum_{j=1}^{m+1} \frac{\bigl(h_j^s(y)\bigr)^2}{p_j^s} \right)^{-1}.
\]
We now consider the case of DMC channels, without packet loss or time delay. Let \(p_i^s = P\{s_k = i\}\) and \(p_i^w = P\{w_k = i\}\). They are related by
\[
p_i^w = P\{w_k = i\} = \sum_{j=1}^{m+1} P\{w_k = i \mid s_k = j\}\, P\{s_k = j\} = \sum_{j=1}^{m+1} p_j^s \beta_{ji},
\]
where \(\beta_{ji} = P\{w_k = i \mid s_k = j\}\). Let \(p^w = \bigl(p_1^w, \ldots, p_{m+1}^w\bigr)'\), \(p^s = \bigl(p_1^s, \ldots, p_{m+1}^s\bigr)'\). Let \(\mathbf{1}\) be a column vector of all 1s with a compatible dimension. Note that \(\mathbf{1}' p^s = 1\) and \(\mathbf{1}' p^w = 1\). Then,
\[
(p^w)' = (p^s)'\, \Pi, \quad \text{where } \Pi = \begin{bmatrix} \beta_{11} & \cdots & \beta_{1,m+1} \\ \vdots & \ddots & \vdots \\ \beta_{m+1,1} & \cdots & \beta_{m+1,m+1} \end{bmatrix}.  \tag{39}
\]
Example 10
For instance, in the case of symmetric and memoryless channels, \(\beta_{11} = q\), \(\beta_{12} = 1-q\), \(\beta_{21} = 1-q\), \(\beta_{22} = q\). It is easily checked that \(\Pi\) is full rank if and only if \(q \neq 0.5\).
When \(\Pi\) is full rank, \(p^s = (\Pi')^{-1} p^w\), which ensures that the probability information \(p^w\) obtained at the receiving site of the communication channel can be used to deduce the probability \(p^s\) at the transmission site, which is then used to estimate the system parameters. Because \(dp^s/dp^w = (\Pi')^{-1}\), the variance of the estimation error depends proportionally on the operator norm of \((\Pi')^{-1}\).
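For the \(2 \times 2\) symmetric channel of Example 10, recovering \(p^s\) from \(p^w\) via \((\Pi')^{-1}\) can be sketched directly; the values of \(q\) and \(p^s\) below are illustrative:

```python
def recover_ps(pw, q):
    # Symmetric memoryless channel: Pi = [[q, 1-q], [1-q, q]], and
    # (p_w)' = (p_s)' Pi, so p_s = (Pi')^{-1} p_w.  Full rank needs q != 0.5.
    det = q * q - (1.0 - q) * (1.0 - q)
    assert abs(det) > 1e-12, "channel is singular at q = 0.5"
    # Explicit inverse of Pi' (Pi is symmetric here, so Pi' = Pi)
    inv = [[q / det, -(1.0 - q) / det], [-(1.0 - q) / det, q / det]]
    return [inv[0][0] * pw[0] + inv[0][1] * pw[1],
            inv[1][0] * pw[0] + inv[1][1] * pw[1]]

q = 0.8
ps = [0.3, 0.7]
# Forward map (p_w)' = (p_s)' Pi
pw = [q * ps[0] + (1 - q) * ps[1], (1 - q) * ps[0] + q * ps[1]]
print(recover_ps(pw, q))   # recovers [0.3, 0.7]
```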
By [25], \(p_i^s\) can be related to \(y_k\) by an invertible mapping \(p_i^s = K_i^s(y_k)\), \(i = 1, \ldots, m+1\), whose inverse is continuously differentiable. Denote
\[
h_i^s(y_k) = \frac{d p_i^s(y_k)}{d y_k} \quad \text{and} \quad h^s(y_k) = \bigl(h_1^s(y_k), \ldots, h_{m+1}^s(y_k)\bigr)',
\]
\[
h_i^w(y_k) = \frac{d p_i^w(y_k)}{d y_k} \quad \text{and} \quad h^w(y_k) = \bigl(h_1^w(y_k), \ldots, h_{m+1}^w(y_k)\bigr)'.
\]
Then, \(h^w = \frac{dp^w}{dy_k} = \Pi' \frac{dp^s}{dy_k} = \Pi' h^s\). This analysis leads to the following conclusion.
Theorem 14
The signal estimation scheme in (33) is asymptotically efficient in the sense that it achieves asymptotically (as \(\lambda \to 1\)) the CR lower bound for estimating \(y_k\) with observations on \(w_k\):
\[
\sigma_w^2 = \frac{1-\lambda}{1+\lambda} \left( \sum_{i=1}^{m+1} \frac{\bigl(h_i^w\bigr)^2}{p_i^w} \right)^{-1}.  \tag{40}
\]
\(\varepsilon = \frac{1}{B}\log_2(m+1)\). Hence, \(p_b = g\bigl(B^{-1}\log_2(m+1)\bigr)\). Because \(p_b\) enters the channel matrix \(\Pi\), we shall show this dependence by using \(\Pi(\varepsilon, m)\). For later use, we denote the column vectors of \(\Pi\) by \(\Pi(\varepsilon, m) = [\pi_1(\varepsilon, m), \ldots, \pi_{m+1}(\varepsilon, m)]\).
Let \(\tau_{\mathrm{opt}}\) be the optimal \(\tau\) from (28) and \(\lambda\) be defined as in Theorem 9. Then, \(\lambda = e^{-\varepsilon/\tau_{\mathrm{opt}}}\). Define
\[
H_s = \bigl(h_1^s, \ldots, h_{m+1}^s\bigr)', \quad D_s(\varepsilon, m) = \mathrm{diag}\bigl(\pi_1'(\varepsilon, m) p^s, \ldots, \pi_{m+1}'(\varepsilon, m) p^s\bigr).
\]
Theorem 15
The system output error variance is
\[
E y_k^2 = \kappa^2\, \frac{1 - e^{-\varepsilon/\tau_{\mathrm{opt}}}}{1 + e^{-\varepsilon/\tau_{\mathrm{opt}}}} \Bigl( H_s'\, \Pi(\varepsilon, m)\, D_s^{-1}(\varepsilon, m)\, \Pi'(\varepsilon, m)\, H_s \Bigr)^{-1}.  \tag{41}
\]
Proof
By Theorem 9, \(E y_k^2 = \kappa^2 \sigma_w^2\). From Theorem 14, we have
\[
\sigma_w^2 = \frac{1-\lambda}{1+\lambda} \left( \sum_{i=1}^{m+1} \frac{\bigl(h_i^w\bigr)^2}{p_i^w} \right)^{-1} = \frac{1 - e^{-\varepsilon/\tau_{\mathrm{opt}}}}{1 + e^{-\varepsilon/\tau_{\mathrm{opt}}}} \bigl( H_w'\, D_w^{-1}(\varepsilon, m)\, H_w \bigr)^{-1},
\]
where \(H_w = \bigl(h_1^w, \ldots, h_{m+1}^w\bigr)'\) and \(D_w(\varepsilon, m) = \mathrm{diag}\bigl(p_1^w, \ldots, p_{m+1}^w\bigr)\). From (39), \(p^w = \Pi' p^s\), which implies \(p_i^w = \pi_i'(\varepsilon, m) p^s\). It follows that \(H_w = \Pi'(\varepsilon, m) H_s\). These equalities lead to
\[
H_w'\, D_w^{-1}(\varepsilon, m)\, H_w = H_s'\, \Pi(\varepsilon, m)\, D_s^{-1}(\varepsilon, m)\, \Pi'(\varepsilon, m)\, H_s
\]
and
\[
\sigma_w^2 = \frac{1 - e^{-\varepsilon/\tau_{\mathrm{opt}}}}{1 + e^{-\varepsilon/\tau_{\mathrm{opt}}}} \Bigl( H_s'\, \Pi(\varepsilon, m)\, D_s^{-1}(\varepsilon, m)\, \Pi'(\varepsilon, m)\, H_s \Bigr)^{-1}.
\]
Here, \(Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-u^2/2}\, du\), \(N_0/2\) is the two-sided noise power spectral density (noise power per degree of freedom), and \(B\) is the channel bandwidth in bps. Under a fixed total energy \(E\) and channel noise level \(N_0/2\), from \(B = \varepsilon^{-1}\log_2(m+1)\), we have
\[
p_e = Q\!\left( \sqrt{\frac{2E\varepsilon}{N_0 \log_2(m+1)}} \right).  \tag{42}
\]
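A small sketch of (42) as reconstructed here, with \(Q\) implemented through `math.erfc`; the values of \(E\) and \(N_0\) are illustrative assumptions:

```python
import math

def Q(x):
    # Gaussian tail probability: Q(x) = (1/sqrt(2*pi)) * int_x^inf exp(-u^2/2) du
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def p_e(eps, m, E=1.0, N0=0.1):
    # (42) as reconstructed: p_e = Q( sqrt(2*E*eps / (N0 * log2(m+1))) )
    return Q(math.sqrt(2.0 * E * eps / (N0 * math.log2(m + 1.0))))

# Longer sampling intervals lower p_e (more energy per symbol), while more
# quantization levels m raise it, consistent with Figures 8 and 9
print(p_e(0.1, 1), p_e(0.5, 1), p_e(0.5, 15))
```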
Suppose that the channel is symmetric. Then, the channel transition matrix \(\Pi\) is of dimension \((m+1) \times (m+1)\) and
\[
\Pi = \begin{bmatrix} (1-p_e)^{\log_2(m+1)} & p_e (1-p_e)^{\log_2(m+1)-1} & \cdots & p_e^{\log_2(m+1)} \\ \vdots & \ddots & & \vdots \\ p_e^{\log_2(m+1)} & \cdots & & (1-p_e)^{\log_2(m+1)} \end{bmatrix}.  \tag{43}
\]
Note that
\[
\Pi = \begin{bmatrix} 1-p_e & p_e \\ p_e & 1-p_e \end{bmatrix} \otimes \cdots \otimes \begin{bmatrix} 1-p_e & p_e \\ p_e & 1-p_e \end{bmatrix},
\]
namely, the Kronecker product of \(\log_2(m+1)\) copies of the basic one-bit channel transition matrix [27]. Consequently, the channel is nonsingular if \(p_e \neq 0.5\) for any \(m\). Together, (41) in Theorem 15, (42), and (43) characterize the time/space complexity relationship of this channel.
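The Kronecker construction is easy to reproduce. The sketch below builds \(\Pi\) for \(\log_2(m+1)\) bits from the one-bit matrix; each row sums to 1, as a transition matrix must:

```python
def kron(A, B):
    # Kronecker product of two matrices given as lists of lists
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def channel_matrix(pe, bits):
    # Pi = one_bit (x) ... (x) one_bit, with `bits` factors
    one_bit = [[1.0 - pe, pe], [pe, 1.0 - pe]]
    Pi = one_bit
    for _ in range(bits - 1):
        Pi = kron(Pi, one_bit)
    return Pi

Pi = channel_matrix(0.1, 2)   # m = 3, i.e. log2(m+1) = 2 bits
print(Pi[0])                  # approximately [0.81, 0.09, 0.09, 0.01]
```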
For example, consider a channel with the signal-to-noise ratio \(\mathrm{SNR} = 2E/N_0\) and the quantization level \(m\). Then, the error probability \(p_e\) is a function of the sampling interval \(\varepsilon\). Figure 8 shows
such functions under different SNR, and Figure 9 shows such functions under different \(m\).
Figure 8. Transmission error probability \(p_e\) as a function of the sampling interval \(\varepsilon\) under different SNR (5, 15, and 25 dB).
Figure 9. Transmission error probability \(p_e\) as a function of the sampling interval \(\varepsilon\) under different \(m\) (\(m = 1, 3, 7, 15\)).
Figure 10. Output errors as a function of the sampling interval \(\varepsilon\) under a fixed data flow rate \(B = 10\) KHz and \(\varepsilon B = \log_2(m+1)\).
For instance, for \(\log_2(m+1) = 2\) (that is, \(m = 3\)),
\[
\Pi = \begin{bmatrix} 1-p_e & p_e \\ p_e & 1-p_e \end{bmatrix} \otimes \begin{bmatrix} 1-p_e & p_e \\ p_e & 1-p_e \end{bmatrix} = \begin{bmatrix} (1-p_e)^2 & p_e(1-p_e) & p_e(1-p_e) & p_e^2 \\ p_e(1-p_e) & (1-p_e)^2 & p_e^2 & p_e(1-p_e) \\ p_e(1-p_e) & p_e^2 & (1-p_e)^2 & p_e(1-p_e) \\ p_e^2 & p_e(1-p_e) & p_e(1-p_e) & (1-p_e)^2 \end{bmatrix}.
\]
For illustration, assume that the plant is the same as in Example 8. Hence, \(\tau_{\mathrm{opt}} = 0.59\) and \(\kappa = 2.5263\). The channel signal-to-noise ratio is \(\mathrm{SNR} = 1\) dB or, equivalently, \(R = 10^{1/20} = 1.122\). The output disturbance (dither) is i.i.d. Gaussian with mean 0 and variance 1.
Suppose that the channel bandwidth for the output is limited at \(B = 10\) KHz. The output \(y(t)\) is bounded in \([-10, 10]\). The quantization scheme is uniform quantization. When the number of quantization bits varies over \(\log_2(m+1) = 3, 4, 5, 6, 7, 8\), the corresponding sampling intervals are \(\varepsilon = \log_2(m+1)/B = 0.0003, 0.0004, 0.0005, 0.0006, 0.0007, 0.0008\). Under each time/space complexity selection, we consider the worst-case scenario for the signal value \(y(t)\). Figure 10 illustrates the time/space complexity relationship when the norm of the output disturbances is used as the performance index. In this example, \(\varepsilon = 0.0004\) turns out to be the optimal choice.
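The listed sampling intervals follow directly from \(\varepsilon = \log_2(m+1)/B\); a minimal enumeration:

```python
B = 10_000.0   # data flow rate, bits per second
pairs = []
for bits in range(3, 9):          # log2(m+1) = 3, ..., 8
    m = 2 ** bits - 1             # quantization levels for this word length
    eps = bits / B                # sampling interval from eps * B = log2(m+1)
    pairs.append((m, eps))
print(pairs)   # (7, 0.0003) up through (255, 0.0008)
```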
8. CONCLUSIONS
Stability and performance of feedback systems with communication channels are affected by
channel uncertainty, measurement noises, sampling rates, quantization levels, and signal estimation
algorithms. This paper studies fundamental stability and performance limitations in such systems.
It is shown that by coordinating sampling rates and the estimation updating speeds, it is possible to
use the ODE approach to study the entire system in a limiting continuous-time system.
Noise artifacts on signals can be attenuated by signal averaging. When applied in an open-loop
setting, this is an issue of time complexity: the number of data points it takes to achieve a required
error bound. As a result, one may use small communication bandwidth resources if information
processing is not time pressing. However, if feedback control is involved, signal averaging becomes
less effective. It is shown in this paper that in this situation, for a given communication bandwidth,
noise attenuation encounters an irreducible error. This error is a function of bandwidth resources.
This dependence is derived within the framework of averaging stability margin, performance
limitations, and optimal filter design. The impact of communication channel uncertainties is investigated, including network packet losses, data distortion, and latency.
There are many open problems and challenges in this direction. When practical schemes for data
compression, quantization, and source and channel coding are taken into consideration, theoretical
issues become more complex. Also, channel uncertainties are usually time varying and random,
whose properties depend on network operational conditions.
ACKNOWLEDGEMENTS
This research was supported in part by the Army Research Office under grant W911NF-12-1-0223, by the
Australian Research Council under Discovery Grant DP120104986, and by the National Natural Science
Foundation of China under grant NNSFC 61203067.
REFERENCES
1. Freudenberg JS, Middleton RH, Solo V. Stabilization and disturbance attenuation over a Gaussian communication channel. IEEE Transactions on Automatic Control 2010; 55:795-799.
2. Martins NC, Dahleh MA. Feedback control in the presence of noisy channels: Bode-like fundamental limitations of performance. IEEE Transactions on Automatic Control 2008; 53(7):1604-1615.
3. Matveev AS, Savkin AV. The problem of LQG optimal control via a limited capacity communication channel. Systems & Control Letters 2004; 53(1):51-64.
4. Nair GN, Fagnani F, Zampieri S, Evans RJ. Feedback control under data rate constraints: an overview. Proceedings of the IEEE 2007; 95(1):108-137.
5. Yüksel S, Basar T. Optimal signaling policies for decentralized multicontroller stabilizability over communication channels. IEEE Transactions on Automatic Control 2007; 52(10):1969-1974.
6. Rojas AJ, Braslavsky JH, Middleton RH. Fundamental limitations in control over a communication channel. Automatica 2008; 44:3147-3151.
7. Kushner HJ, Yin G. Stochastic Approximation and Recursive Algorithms and Applications, (2nd edn). Springer-Verlag: New York, 2003.
8. Casini M, Garulli A, Vicino A. Time complexity and input design in worst-case identification using binary sensors. Proceedings of the 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, December 12-14, 2007; 5528-5533.
9. Wang LY, Zhang JF, Yin G. System identification using binary sensors. IEEE Transactions on Automatic Control 2003; 48:1892-1907.
10. Wang LY, Yin G. Asymptotically efficient parameter estimation using quantized output observations. Automatica 2007; 43:1178-1191.
11. Wang LY, Yin G, Zhang J-F, Zhao YL. System Identification with Quantized Observations. Birkhäuser: Boston, MA, 2010.
12. Benveniste A, Metivier M, Priouret P. Adaptive Algorithms and Stochastic Approximations. Springer-Verlag: Berlin, 1990.
13. Chen H-F, Guo L. Identification and Stochastic Adaptive Control. Birkhäuser: Boston, MA, 1991.
14. Kushner HJ. Approximation and Weak Convergence Methods for Random Processes, with Applications to Stochastic Systems Theory. MIT Press: Cambridge, MA, 1984.
15. Ljung L. System Identification: Theory for the User. Prentice-Hall: Englewood Cliffs, NJ, 1987.
16. Billingsley P. Convergence of Probability Measures. Wiley: New York, NY, 1968.
17. Chow YS, Teicher H. Probability Theory, (3rd edn). Springer-Verlag: New York, 1997.
18. Khasminskii RZ. Stochastic Stability of Differential Equations. Sijthoff and Noordhoff: Alphen aan den Rijn, Netherlands, 1980.
19. Doyle J, Francis BA, Tannenbaum AR. Feedback Control Theory. Macmillan Publishing Company: New York, 1992.
20. Kuo BC. Digital Control Systems, (2nd edn). Oxford University Press: Oxford, UK, 1995.
21. Ogata K. Modern Control Engineering, (4th edn). Prentice-Hall: Englewood Cliffs, NJ, 2002.
22. Khargonekar PP, Tannenbaum A. Non-Euclidian metrics and the robust stabilization of systems with parameter uncertainty. IEEE Transactions on Automatic Control 1985; AC-30:1005-1013.
23. Wang LY, Yin G, Zhang J-F. Joint identification of plant rational models and noise distribution functions using binary-valued observations. Automatica 2006; 42:535-547.
24. Wang LY, Yin G, Li CY, Zheng WX. Signal estimation with binary-valued sensors. Journal of Systems Science and Complexity 2010; 23:622-639.
25. Wang LY, Yin G. Quantized identification under dependent noise and Fisher information ratio for communication channels. IEEE Transactions on Automatic Control 2010; 55(3):674-690.
26. Proakis JG, Salehi M. Digital Communications, (5th edn). McGraw-Hill Higher Education: New York, 2008.
27. Horn RA, Johnson CR. Topics in Matrix Analysis. Cambridge University Press: Cambridge, UK, 1991.
28. MATLAB version 2012b, The MathWorks Inc., Natick, MA, 2012.