WEIWEN WANG
Department/Date
DEDICATED
TO MY PARENTS
ACKNOWLEDGEMENT
TABLE OF CONTENTS

NOMENCLATURE
LIST OF TABLES
LIST OF FIGURES

CHAPTER

I.    INTRODUCTION
      1.1  Background
      Problem Formulation

II.   NONLINEAR OBSERVERS
      Summary

III.  COMPARISON OF ADVANCED STATE OBSERVER DESIGN TECHNIQUES
      Simulation Results
      3.1.1  Open-loop Comparison
      3.1.2  Selection of Nonlinear Gains for NESO
      3.1.3  Closed-loop Comparison
      3.3  Summary

IV.   Summary

V.    Background
      Summary

VI.   Absolute Stability
NOMENCLATURE

u        Control command
w(t)     External disturbance
f(.)     Uncertain plant dynamics
zi       Observer states
gi(.)    Nonlinear observer gain functions
hi       High gain observer gains
ki       Observer and controller gains
kp, kd   Proportional and derivative gains
r        Reference input
ζ        Damping ratio
Tmax     Maximum torque
LIST OF TABLES

LIST OF FIGURES
CHAPTER I
INTRODUCTION
Automatic control systems were first developed over two thousand years ago. The
primary motivation for feedback control in times of antiquity was the need for the
accurate determination of time. Around 270 B.C., the Greek Ktesibios invented a float
regulator for a water clock, which kept time by regulating the water level in a vessel [1].
In 1868 J.C. Maxwell explained the instabilities exhibited by the flyball governor using
differential equations to describe the control system. Since that time, control theory has
made significant strides.
1.1 Background
The period before 1868 is called the prehistory of automatic control [2]. In 1877,
E. J. Routh provided a numerical technique for determining when a characteristic
equation has stable roots [Routh 1877], followed by A. M. Lyapunovs study of the
stability of nonlinear differential equations using a generalized notion of energy in 1892
[Lyapunov 1892]. The period from 1868 to the early 1900s may be called the primitive
period of automatic control because of these discoveries.
In 1922 N. Minorsky introduced the Proportional-Integral-Derivative (PID)
controller. During the 1930s and 1940s, frequency-domain approaches, the Nyquist
stability criterion, the Bode plot, and the root locus were used to analyze and design
single-input and single-output (SISO) linear systems. Simple compensators like lead-lag
were developed. The term "cybernetics" was coined by Norbert Wiener to refer to control
theory [Weiner 1949, Weiner 1961]. The period from 1900 to 1960 is often called the
classical period, and the period from 1960 through the present is the modern period.
In a series of famous papers, Kalman and his coauthors discussed the optimal
control of systems and provided the design equations for the linear quadratic regulator
(LQR) and the Kalman filter. This approach was firmly rooted in the time domain and
relied on techniques from linear algebra. Today this branch of control still remains very
active under the name of linear system theory. Digital control theory was formulated with
the advent of the microprocessor in 1969. In 1981, the H∞ control approach was proposed
by Zames. The first significant development in optimal control was the famous work of
Pontryagin, Boltyanskii, Gamkrelidze and Mishchenko in 1956.
1.2
Today, many researchers and designers, from such broad areas as aircraft and
spacecraft control, robotics, process control, and biomedical engineering, have shown an
active interest in the development and applications of nonlinear control methodologies.
Linear control methods work well over a small operating range of the plant. But when the
required operating range is large, a linear controller is likely to perform poorly because it
does not properly compensate for the nonlinearities in the system.
When designing linear controllers, it is usually necessary to assume that the
parameters of the system model are reasonably well known. However, many control
problems involve uncertainties in the model parameters. A linear controller based on
inaccurate values of the model parameters may exhibit significant performance
degradation or even instability. Nonlinearities can be intentionally introduced into the
controller of a control system so that model uncertainties can be tolerated. Robust
controllers and adaptive controllers are two good examples.
The topic of nonlinear control design for large-range operation has attracted
particular attention because of the advent of microprocessors, which has made the
implementation of nonlinear controllers a relatively simple matter. Modern technology,
such as high-speed, high-accuracy robots and high-performance aircraft, is also
demanding control systems with more stringent design specifications. Many new papers
and reports have recently been written on nonlinear control research and applications.
Consider the generalized plant

\dot{x} = A x + B_w w + B_u u, \qquad y = C_y x + D_{yu} u        (1.1)

where

D_{yu}^T C_y = 0, \qquad D_{yu}^T D_{yu} = I.

The performance measure is the induced norm from the disturbance w to the output y over [0, t_f]:

J = \| G_{yw} \|_{\infty,[0,t_f]} = \sup_{w \ne 0} \frac{\| y(t) \|_{2,[0,t_f]}}{\| w(t) \|_{2,[0,t_f]}}

The objective is to achieve J < r (r is called the performance bound). A controller that satisfies
this bound is called a suboptimal controller. The following is the procedure of the Hamilton-Jacobi
inequality (HJI) equation-based design method.
The Hamiltonian equations are obtained by setting the first variation of the cost over [0, t_f] to zero,

\delta J(u, w, p;\, \delta u, \delta w, \delta p) = 0,

which yields the linear Hamiltonian system

\begin{bmatrix} \dot{x}(t) \\ \dot{p}(t) \end{bmatrix}
= \begin{bmatrix} A & -(B_u B_u^T - r^{-2} B_w B_w^T) \\ -C_y^T C_y & -A^T \end{bmatrix}
\begin{bmatrix} x(t) \\ p(t) \end{bmatrix}
=: L \begin{bmatrix} x(t) \\ p(t) \end{bmatrix}        (1.2)

The corresponding state-transition matrix is partitioned as

e^{L(t_f - t)} = \begin{bmatrix} \Phi_{11}(t_f - t) & \Phi_{12}(t_f - t) \\ \Phi_{21}(t_f - t) & \Phi_{22}(t_f - t) \end{bmatrix}        (1.3)
The feedback gain exists for arbitrary time intervals, and there is a solution to the H∞
sub-optimal control problem, only if the Hamiltonian matrix has no purely imaginary
eigenvalues. The matrix P(t) is the solution of a Riccati equation. It comes from the
Hamilton equation
\dot{p}(t) = \dot{P}(t)\, x(t) + P(t)\, \dot{x}(t), \qquad p(t) = P(t)\, x(t).        (1.4)

Substituting (1.4) into (1.2) yields

\dot{P}(t) = -P(t) A - A^T P(t) - C_y^T C_y + P(t) \big( B_u B_u^T - r^{-2} B_w B_w^T \big) P(t)        (1.5)
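As a quick numerical illustration of (1.5), the sketch below integrates the Riccati equation backward in time from the terminal condition P(t_f) = 0 for a scalar plant; the system matrices and the bound r are invented example values, not taken from the text.

```python
import numpy as np

# Backward integration of the Riccati equation (1.5) for a scalar plant.
# A, Bu, Bw, Cy and the bound r are illustrative values only.
A  = np.array([[-1.0]])
Bu = np.array([[1.0]])
Bw = np.array([[1.0]])
Cy = np.array([[1.0]])
r, dt, tf = 2.0, 1e-3, 5.0

P = np.zeros((1, 1))                     # terminal condition P(tf) = 0
for _ in range(int(tf / dt)):            # step backward from tf toward 0
    Pdot = (-P @ A - A.T @ P - Cy.T @ Cy
            + P @ (Bu @ Bu.T - r**-2 * Bw @ Bw.T) @ P)
    P = P - dt * Pdot                    # backward Euler step in time
print(P[0, 0])                           # approaches the steady-state solution
```

For a long enough horizon, P(0) settles at the stabilizing root of the algebraic version of (1.5), here the positive root of 0.75 p^2 + 2p - 1 = 0.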
Lemma. Suppose there exists a smooth, positive semidefinite function V(x) such that the
following Hamilton-Jacobi inequality (HJI) holds:

V_x(x) f(x) + \frac{1}{4 r^2} V_x(x) g_1(x) g_1^T(x) V_x^T(x) - \frac{1}{4} V_x(x) g_2(x) g_2^T(x) V_x^T(x) + h^T(x) h(x) \le 0

and consider the linearized system defined by the Jacobians

\left( \frac{df}{dx}(0),\; \frac{dg_1}{dx}(0),\; \frac{dg_2}{dx}(0),\; \frac{dh}{dx}(0) \right).

If the H∞ control problem for the linearized system is solvable, then the H∞ problem for
the nonlinear system is solvable locally near x = 0. A controller can be constructed near
x = 0 using V(x), which is locally smooth.
Variable structure control [7, 8] has been widely used over the last decade for the
control of uncertain systems because of its robustness to modeling uncertainties and
disturbances.
Consider the dynamic system

x^{(n)}(t) = f(X; t) + b(X; t)\, u(t) + d(t)        (1.6)
where u(t) is the scalar control input, x is the scalar output of interest, and
X = [x, \dot{x}, \ldots, x^{(n-1)}]^T is the state. The function f(X; t) is not exactly known, but the
extent of the imprecision \Delta f on f(X; t) is upper bounded by a known continuous function of X
and t; the same holds for the control gain b(X; t). The disturbance d(t) is unknown but bounded in
absolute value by a known continuous function of time. The control problem is to get the
state X to track a desired state X_d = [x_d, \dot{x}_d, \ldots, x_d^{(n-1)}]^T in the presence of model
imprecision on f(X; t) and b(X; t) and of the disturbance d(t). To guarantee that this is achievable
using a finite control u, it must be assumed that
\tilde{X}\,\big|_{t=0} = 0        (1.7)

where \tilde{X} := X - X_d = [\tilde{x}, \dot{\tilde{x}}, \ldots, \tilde{x}^{(n-1)}]^T. A time-varying sliding surface s(t) is defined in the
state space R^n by s(\tilde{X}; t) = 0 with

s(\tilde{X}; t) = \left( \frac{d}{dt} + \lambda \right)^{n-1} \tilde{x}, \qquad \lambda > 0        (1.8)
where \lambda is a positive constant. Given the initial condition (1.7), the problem of tracking
X \equiv X_d is equivalent to that of remaining on the surface s(t) for all t > 0; indeed s \equiv 0
represents a linear differential equation whose unique solution is \tilde{X} \equiv 0. Thus, the
problem of tracking the n-dimensional vector X_d can be reduced to that of keeping the
scalar quantity s at zero. A sufficient condition for such positive invariance of s(t) is to
choose the control law u(t) of (1.6) to satisfy the reaching condition

\frac{1}{2} \frac{d}{dt} s^2(\tilde{X}; t) \le -\eta\, |s|        (1.9)

where \eta is a strictly positive constant.
Sliding mode control based on the surface s(t) has proven useful in many
practical nonlinear control problems, both as a system analysis tool and as a controller
design tool.
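The reaching condition above can be exercised in a few lines. The sketch below regulates a second-order plant with an uncertain but bounded nonlinearity; the plant, its bound, and all gains are invented for illustration.

```python
import math

# Sliding-mode regulation sketch for x'' = f(x, x') + u, with f uncertain
# but bounded. s = e' + lam*e (eq. 1.8 with n = 2) is driven to zero by a
# switching term that dominates the bound on |f| (reaching condition 1.9).
lam, eta, dt = 5.0, 5.0, 1e-4
f_bound = 2.0                            # assumed known bound on |f|
x, xd = 1.0, 0.0                         # initial state, off the target x_d = 0
for _ in range(int(4.0 / dt)):
    e, ed = x, xd                        # tracking error (x_d = 0, constant)
    s = ed + lam * e                     # sliding variable
    u = -lam * ed - (f_bound + eta) * math.copysign(1.0, s)
    f = 1.5 * math.sin(3.0 * x)          # "unknown" dynamics, |f| <= f_bound
    x, xd = x + dt * xd, xd + dt * (f + u)
# s reaches zero in finite time; thereafter e decays like exp(-lam * t)
```

Along trajectories, s*ds/dt = s*(f - (f_bound + eta)*sgn(s)) <= -eta*|s|, which is exactly (1.9); the discrete switching produces the well-known chattering around s = 0.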
Currently the most systematic methodology for adaptive nonlinear control design is
backstepping [9, 10, 11], which is mainly applicable to systems having a cascaded or
triangular structure. The central idea of the approach is to recursively design controllers
for subsystems in the structure and step back the feedback signals towards the control
input. This differs from the conventional feedback linearization in its ability to avoid
cancellation of useful nonlinearities in pursuing the objectives of stabilization and
tracking. In addition, by utilizing the control Lyapunov function, backstepping has the
flexibility to introduce appropriate dynamics in order to make the system behave in a
desirable manner. However, no analytical measure of performance or notions of
optimality have been shown to exist for controllers derived via backstepping approaches.
The two main methods for adaptive backstepping design are the tuning function
design and the modular design. These assume that the full state of the system is available
for measurement. If it is not available, the observer backstepping method [12] is used.
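To make the recursive idea concrete, the sketch below applies integrator backstepping to a made-up two-state cascade; the system, gains, and Lyapunov functions are illustrative, not taken from the text.

```python
# Integrator backstepping for the illustrative cascade
#   x1' = x1**3 + x2,    x2' = u
# Step 1: the virtual control a(x1) = -x1**3 - k1*x1 makes V1 = x1**2/2
#         decrease along the x1 subsystem.
# Step 2: with the step-back error z2 = x2 - a(x1) and V = V1 + z2**2/2,
#         the input u = -x1 - k2*z2 + a'(x1)*x1' gives
#         Vdot = -k1*x1**2 - k2*z2**2.
k1, k2, dt = 2.0, 2.0, 1e-4
x1, x2 = 1.0, -1.0
for _ in range(int(5.0 / dt)):
    a = -x1**3 - k1 * x1                      # virtual control for x1
    z2 = x2 - a                               # error stepped back to the input
    da = (-3.0 * x1**2 - k1) * (x1**3 + x2)   # d/dt of a along trajectories
    u = -x1 - k2 * z2 + da                    # cancels cross term, adds damping
    x1, x2 = x1 + dt * (x1**3 + x2), x2 + dt * u
# (x1, x2) -> (0, 0)
```

Note that this basic version does cancel the x1**3 term (it is destabilizing here); the flexibility mentioned above is in choosing a(x1) so that useful nonlinearities are kept rather than cancelled.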
The recursive-interlacing procedure was proposed for the design of globally
stabilizing controllers [13, 14, 15]. The procedure introduces new interlacing steps and
combines them with the existing recursive or backstepping design. It is not limited to
systems of any specific order and imposes no restrictions on either the locations or the
types of system nonlinearities.
From the literature review, it appears that several unresolved issues exist in the
current nonlinear control methods, including:
- The need for a model: nonlinear H∞ control and feedback linearization require a model
of the plant for the design;
- The lack of robustness: feedback linearization cannot cope with parameter variations.
Most nonlinear controllers are state feedback controllers. When the state of the
system is unavailable or the sensors are expensive, observers must be used for estimation.
The motivation of this research is to find a set of observer-based control systems that
are relatively independent of the mathematical model, perform better, and are simple to
implement.
Since Minorsky introduced the PID controller in 1922, PID-based control technology has
remained the tool of choice in over ninety percent of industrial applications. The goal of
this research is to find a set of novel, practical tools that make control design and tuning
easy and effective.
The Extended State Observer (ESO), proposed by Professor Jingqing Han, can
estimate the state without a mathematical model of the system. It is a novel concept for
observer design, estimating not only the state, but also the internal and external
disturbances, thus making disturbance rejection control possible. The invention of ESO is
a revolutionary concept for control theory and application. It has several properties
including model-independence, active estimation, compensation for disturbances, simple
design, and strong robustness. This method has evolved as an important technique for the
state feedback control of nonlinear systems.
In this dissertation, the concepts of the ESO, the High Gain Observer (HGO), and
the Sliding Mode Observer (SMO) are introduced in Chapter II. Chapter III presents a
comparative study of these three observers, which is based on the robustness of the
performance with respect to the uncertainties of the plant and the observer tracking
errors, both at steady-state and during transients. The optimal tuning methods for linear
active disturbance rejection control (LADRC) and loop-shaping are tested, and along
with active disturbance rejection control (ADRC), are applied to a servo motor system in
Chapter IV. A design procedure for ESO-based control systems is described. The active
magnetic-bearing benchmark problem is solved using LADRC in Chapter V, and Chapter
VI discusses the stability analysis of ADRC.
CHAPTER II
NONLINEAR OBSERVERS
The concept of separating the state feedback control design into a full-state
feedback part and an observer is known as the separation principle, which has rigorous
validity for linear systems, as well as for a limited class of nonlinear systems. Figure 2
illustrates the state-feedback design using an observer.
Figure 2: State Feedback Design Using an Observer
Consider the linear plant

\dot{x} = A x + B u, \qquad y = C x        (2.1)

and the observer

\dot{\hat{x}} = A \hat{x} + B u + L (y - C \hat{x})        (2.2)

The estimation error

e = x - \hat{x}        (2.3)

then obeys

\dot{e} = (A - LC)\, e = \bar{A} e        (2.4)
The estimation error will converge to zero if \bar{A} is a stability matrix. When \bar{A} is constant,
its eigenvalues must be in the open left half-plane. This asymptotic state estimator is
known as the Luenberger observer [16]. Since the matrices A, B and C are defined by the
plant, the only freedom in the design of the observer is in the selection of the gain matrix
L. For the Kalman filter, this gain is

L = P C^T R^{-1}        (2.5)

where P is the covariance matrix of the estimation error and satisfies the matrix Riccati
equation

\dot{P} = A P + P A^T - P C^T R^{-1} C P + Q        (2.6)
Alternatively, L can be chosen directly to place the eigenvalues of \bar{A} in (2.4). From (2.4),
the characteristic equation of the error is now given by
det[sI - (A - LC)] = 0        (2.7)
If L is chosen so that A-LC has stable and reasonably fast eigenvalues, e will then decay
to zero and remain there, independent of the known forcing function u (t), its effect on the
state x(t), and irrespective of the initial condition e(0). Therefore, \hat{x}(t) will converge to
x(t), regardless of the value of \hat{x}(0). Furthermore, the dynamics of the error can be chosen
for stability, as well as for speed, as opposed to the open-loop dynamics determined by A.
If we do not have an accurate model of the plant (A, B, C), the dynamics of the error are
no longer governed by (2.4). However, we can typically choose L so that the error system
is at least stable and the error remains acceptably small, even with (small) modeling
errors and disturbing inputs. It is important to emphasize that the nature of the plant and
that of the estimator are quite different. The selection of L can be approached in exactly
the same fashion that K is selected in the control-law design. If the desired location of the
estimator error poles is specified as
s_i = \beta_1, \beta_2, \ldots, \beta_n,        (2.8)

then the desired estimator characteristic equation is

\alpha_e(s) = (s - \beta_1)(s - \beta_2)\cdots(s - \beta_n)        (2.9)
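For a second-order plant the gain L that realizes a desired characteristic equation (2.9) can be found by matching coefficients. The sketch below borrows the servo numbers of Chapter III purely as example values; the pole locations are invented.

```python
import numpy as np

# Luenberger observer gain by coefficient matching for a 2nd-order plant
# (numbers borrowed from the Chapter III servo model as an example).
A = np.array([[0.0, 1.0],
              [0.0, -1.41]])
C = np.array([[1.0, 0.0]])

# det(sI - (A - L C)) = s^2 + (l1 + 1.41) s + (1.41*l1 + l2);
# match against the desired polynomial (s + p1)(s + p2):
p1, p2 = 10.0, 12.0
a1, a0 = p1 + p2, p1 * p2
l1 = a1 - 1.41
l2 = a0 - 1.41 * l1
L = np.array([[l1], [l2]])

eigs = np.linalg.eigvals(A - L @ C)   # estimator error poles
print(sorted(eigs.real))
```

The printed eigenvalues land (up to floating point) at the chosen estimator poles, -10 and -12 rad/sec.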
With the state feedback control u = -G\hat{x}, where

\hat{x} = x - e        (2.10)

the plant dynamics become

\dot{x} = (A - BG)\, x + BG\, e        (2.11)

while the error dynamics remain

\dot{e} = (A - LC)\, e = \bar{A} e        (2.12)

so the combined closed-loop system is

\begin{bmatrix} \dot{x} \\ \dot{e} \end{bmatrix}
= \begin{bmatrix} A - BG & BG \\ 0 & A - LC \end{bmatrix}
\begin{bmatrix} x \\ e \end{bmatrix}        (2.13)

Suppose

\tilde{A} = \begin{bmatrix} A - BG & BG \\ 0 & A - LC \end{bmatrix}

Since \tilde{A} is block triangular, its characteristic polynomial factors as

|sI - \tilde{A}| = |sI - A + BG| \cdot |sI - A + LC| = 0        (2.14)

The closed-loop eigenvalues are the eigenvalues of A - BG, the full-state feedback system,
and the eigenvalues of A - LC, the dynamics matrix of the observer. This is a statement of
the well-known separation principle, which permits one to design the observer and the
full-state feedback control independently, with the assurance that the poles of the closed-loop
dynamic system will be the poles selected for the full-state feedback system and
those selected for the observer.
The high gain observer has been used in the design of output feedback controllers
because of its ability to robustly estimate the unmeasured states and derivatives of the
output, while asymptotically attenuating disturbances [21, 22, 23, 24, 25, 26]. The
technique was first introduced by Esfandiari and Khalil and since then has been used in
approximately forty papers.
Consider the second-order nonlinear system

\dot{x}_1 = x_2
\dot{x}_2 = \phi(x, u)        (2.15)
y = x_1

Supposing that u = \gamma(x) is a state feedback control that stabilizes the origin x = 0 of the
closed-loop system, the following observer is used:

\dot{\hat{x}}_1 = \hat{x}_2 + h_1 (y - \hat{x}_1)
\dot{\hat{x}}_2 = \phi_0(\hat{x}, u) + h_2 (y - \hat{x}_1)        (2.16)

The estimation error then satisfies

\dot{\tilde{x}}_1 = -h_1 \tilde{x}_1 + \tilde{x}_2
\dot{\tilde{x}}_2 = -h_2 \tilde{x}_1 + \delta(x, \tilde{x})        (2.17)

where \delta(x, \tilde{x}) = \phi(x, \gamma(x)) - \phi_0(\hat{x}, \gamma(\hat{x})) and

\tilde{x} = \begin{bmatrix} \tilde{x}_1 \\ \tilde{x}_2 \end{bmatrix} = \begin{bmatrix} x_1 - \hat{x}_1 \\ x_2 - \hat{x}_2 \end{bmatrix}        (2.18)
As in any asymptotic observer, the observer gain H = \begin{bmatrix} h_1 \\ h_2 \end{bmatrix} is designed to achieve
asymptotic error convergence; that is, \lim_{t \to \infty} \tilde{x}(t) = 0. In the absence of the disturbance
term \delta(x, \tilde{x}), asymptotic error convergence is achieved by designing the observer gain
such that the matrix

A_0 = \begin{bmatrix} -h_1 & 1 \\ -h_2 & 0 \end{bmatrix}        (2.19)

is Hurwitz; that is, its eigenvalues have negative real parts. For this second-order system,
A_0 is Hurwitz for any positive constants h_1 and h_2. In the presence of \delta, the observer gain
must be designed with the additional goal of rejecting the effect of the disturbance term
on the estimation error \tilde{x}. This is ideally achieved, for any disturbance term \delta, if the
transfer function from \delta to \tilde{x} is identically zero. The observer gain can then be
designed with h_2 \gg h_1 \gg 1, such that the transfer function H_0 is arbitrarily close to zero:

H_0(s) = \frac{1}{s^2 + h_1 s + h_2} \begin{bmatrix} 1 \\ s + h_1 \end{bmatrix}        (2.20)
In particular, taking

h_1 = \frac{\alpha_1}{\varepsilon}, \qquad h_2 = \frac{\alpha_2}{\varepsilon^2}        (2.21)

for positive constants \alpha_1, \alpha_2 and a small parameter \varepsilon, and defining the scaled estimation
errors

\eta_1 = \frac{\tilde{x}_1}{\varepsilon}, \qquad \eta_2 = \tilde{x}_2        (2.22)

transforms the error dynamics (2.17) into the singularly perturbed form

\varepsilon \dot{\eta}_1 = -\alpha_1 \eta_1 + \eta_2
\varepsilon \dot{\eta}_2 = -\alpha_2 \eta_1 + \varepsilon\, \delta(x, \tilde{x})        (2.23)

This equation shows that reducing \varepsilon diminishes the effect of the disturbance term \delta. It
also shows that, for small \varepsilon, the dynamics of the estimation error will be much faster than
the dynamics of x. But the change of variables (2.22) may cause the initial condition
\eta_1(0) to be of order O(1/\varepsilon), even when \tilde{x}_1(0) is of order O(1). With this initial condition,
the solution of (2.22) will contain a term of the form (1/\varepsilon) e^{-a t/\varepsilon} for some a > 0. While this
exponential mode decays rapidly, it exhibits an impulse-like behavior: the transient
peaks to O(1/\varepsilon) values before decaying toward zero. This behavior is known as the
peaking phenomenon, an intrinsic feature of any HGO design that rejects the
effect of the disturbance term in (2.17); that is, any design with h_2 \gg h_1 \gg 1. The peaking
phenomenon can destabilize the closed-loop system, as the impulse-like behavior is
transmitted from the observer to the plant.
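The peaking behavior is easy to reproduce numerically. For two values of ε, the sketch below records the largest transient excursion of the velocity estimate; the plant nonlinearity, gains, and initial conditions are invented for illustration.

```python
import numpy as np

# High gain observer (2.16) with gains h1 = a1/eps, h2 = a2/eps**2 (2.21).
# Smaller eps speeds convergence, but the transient in xhat2 peaks to O(1/eps).
def simulate_hgo(eps, T=2.0, dt=1e-4):
    a1, a2 = 2.0, 1.0                      # any positive pair makes A0 Hurwitz
    h1, h2 = a1 / eps, a2 / eps**2
    x = np.array([1.0, 0.0])               # true state, unknown to the observer
    xh = np.array([0.0, 0.0])              # observer starts with an x1 error of 1
    peak = 0.0
    for _ in range(int(T / dt)):
        y = x[0]
        x = x + dt * np.array([x[1], -x[0] - x[1]])      # example plant phi
        xh = xh + dt * np.array([xh[1] + h1 * (y - xh[0]),
                                 -xh[0] - xh[1] + h2 * (y - xh[0])])
        peak = max(peak, abs(xh[1]))
    return peak, abs(x[0] - xh[0])

peak_a, err_a = simulate_hgo(0.1)
peak_b, err_b = simulate_hgo(0.01)
# peak_b >> peak_a, while both final estimation errors are near zero
```

Reducing ε by a factor of ten raises the transient peak of the velocity estimate by roughly the same factor, even though both runs converge; this is the peaking phenomenon described above.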
The HGO is basically an approximate differentiator, which is easily seen in the
special case when the nominal function \phi_0 is chosen to be zero, for which the observer
is linear. For the full-order observer (2.16), the transfer function from y to \hat{x} is then

\frac{1}{(\varepsilon s)^2 + \alpha_1 \varepsilon s + \alpha_2}
\begin{bmatrix} \alpha_1 \varepsilon s + \alpha_2 \\ \alpha_2 s \end{bmatrix}
\longrightarrow \begin{bmatrix} 1 \\ s \end{bmatrix} \quad \text{as } \varepsilon \to 0,

so that \hat{x}_2 approximates the derivative of the measured output.
Slotine, Hedrick, and Misawa [32] proposed the design of state observers using
sliding surfaces. Consider the second-order system

\ddot{x}_1 = f

where f is a nonlinear, uncertain function of the state x. Based on the above discussion, an
observer structure of the following form is used:

\dot{\hat{x}}_1 = \hat{x}_2 + \alpha_1 \tilde{x}_1 + k_1\, \mathrm{sgn}(\tilde{x}_1)
\dot{\hat{x}}_2 = \hat{f} + \alpha_2 \tilde{x}_1 + k_2\, \mathrm{sgn}(\tilde{x}_1)        (2.24)

where \tilde{x}_1 = x_1 - \hat{x}_1, \hat{f} is the estimated value of f, and the constants \alpha_i (i = 1, 2) are chosen
as in a Luenberger observer in order to place the poles of the linearized system at desired
locations. The error dynamics can be written as

\dot{\tilde{x}}_1 = \tilde{x}_2 - \alpha_1 \tilde{x}_1 - k_1\, \mathrm{sgn}(\tilde{x}_1)
\dot{\tilde{x}}_2 = \Delta f - \alpha_2 \tilde{x}_1 - k_2\, \mathrm{sgn}(\tilde{x}_1)

where \Delta f = f - \hat{f}.
The extended state observer is a novel observer for a class of uncertain systems,
proposed by Han [33, 34, 39].
Consider an nth-order nonlinear system described by

y^{(n)}(t) = f\big(y, \dot{y}, \ldots, y^{(n-1)}, w(t)\big) + b\, u(t)        (2.25)

where f(.) is an uncertain function, w(t) is the unknown external disturbance, u(t) is the
known control input, and y(t) is the measured output. The system is equivalent to

\dot{x}_1 = x_2
    \vdots
\dot{x}_n = x_{n+1} + b u
\dot{x}_{n+1} = a(t)
y = x_1        (2.26)

with

a(t) = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\, \dot{x}_i + \frac{\partial f}{\partial w}\, \dot{w}

where f(.) has been regarded as an extended state, x_{n+1}. Then the nonlinear continuous
observer is designed for system (2.25):

e = z_1 - y(t)
\dot{z}_1 = z_2 - \beta_{01}\, g_1(e)
    \vdots
\dot{z}_n = z_{n+1} - \beta_{0n}\, g_n(e) + b u
\dot{z}_{n+1} = -\beta_{0(n+1)}\, g_{n+1}(e)        (2.27)

where the \beta_{0i} are observer gains and the g_i(.) are suitably chosen nonlinear gain
functions, typically

g_i(e) = fal(e, \alpha_i, \delta)        (2.28)

fal(e, \alpha, \delta) = \begin{cases} |e|^{\alpha}\, \mathrm{sgn}(e), & |e| > \delta \\ e / \delta^{1-\alpha}, & |e| \le \delta \end{cases}, \qquad \delta > 0        (2.29)
The nonlinear function in (2.29), which is used to make the observer more efficient,
was selected heuristically based on experimental results. Intuitively, it is a nonlinear gain
function where small errors correspond to higher gains and large errors to smaller gains.
When the error is small, it prevents excessive gain, which causes high-frequency
chattering in some simulation studies. Figure 3 illustrates the difference
between the linear and nonlinear gain. If the \alpha_i are chosen as unity, the observer
reduces to the well-known Luenberger observer. The ESO does not include a model, yet
can reconstruct states reliably for nonlinear plants. Because it does not use a model, it is
simpler and easier to construct, easier to implement, and freer from model uncertainties
such as parameter variations and external disturbances. Therefore it is robust.
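The gain shape described above can be checked directly. A sketch of the commonly used form of the fal function in (2.29), with example arguments:

```python
import math

def fal(e, alpha, delta):
    """Han's nonlinear gain (2.29): linear with slope delta**(alpha - 1)
    inside |e| <= delta (avoids excessive gain and chattering near zero),
    and |e|**alpha * sign(e) outside, so for 0 < alpha < 1 small errors see
    a higher effective gain and large errors a smaller one."""
    if abs(e) <= delta:
        return e / delta ** (1.0 - alpha)
    return abs(e) ** alpha * math.copysign(1.0, e)

# Effective gain fal(e)/e for alpha = 0.5, delta = 0.1 (example values):
print(fal(0.01, 0.5, 0.1) / 0.01)   # small error: effective gain > 1
print(fal(4.0, 0.5, 0.1) / 4.0)     # large error: effective gain < 1
```

The two pieces agree at |e| = delta, so the function is continuous, which is what keeps the observer from chattering at small errors.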
If part of f(t, x_1, x_2, w), say f_1, is known, the known part can be included in the
observer design,        (2.30)

where h_1(t, z_1, \ldots, z_n, w) = \dot{f}_1(t, z_1, \ldots, z_n, w). This will make the observer more
efficient. The ESO is not only a state observer, but also a disturbance observer. Since Han's
observer uses nonlinear functions, it was named the Nonlinear Extended State Observer
(NESO); it can also be applied to MIMO systems.
Figure 3: The Linear Gain y = x versus the Nonlinear Gain y = f(x, \alpha, \delta)
For disturbance estimation, define

e(t) := x(t) - \hat{x}(t)

and

B w = \dot{x} - \dot{\hat{x}}        (2.33)

If B has full column rank, the unique solution w can be obtained from the above equation
[35].
Consider the second-order plant

\dot{x}_1 = x_2
\dot{x}_2 = f(t, x_1, x_2, w) + b_0 u
y = x_1        (2.34)
Figure 4: Topology of the ADRC Controller: profile generator, nonlinear combination,
1/b0 scaling, plant, and extended state observer (ESO) producing z1(t), z2(t), z3(t)
u(t) = u_0(t) - z_3(t)/b_0        (2.35)

With z_3 tracking f, the plant reduces to

\dot{x}_1 = x_2
\dot{x}_2 = b_0 u_0
y = x_1        (2.36)

The control problem of (2.34) is now simplified to the double-integrator control problem
(2.36). Here the nonlinear combination takes the form

u_0(t) = k_p\, fal(e_p, \alpha_p, \delta_p) + k_d\, fal(e_d, \alpha_d, \delta_d)        (2.37)
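The reduction from (2.34) to (2.36) can be demonstrated with a compact simulation. The sketch below uses linear observer and PD gains (the linear special case, i.e. LADRC, rather than the fal-based combination), scales the PD output by 1/b0 as in Figure 4, and invents the plant values and bandwidths for illustration.

```python
import numpy as np

# Linear-ADRC sketch for y'' = f(y, y', w) + b0*u, with f unknown to the
# controller. The ESO's extended state z3 tracks f; subtracting it in the
# control law leaves roughly the double integrator (2.36).
dt, T = 1e-3, 5.0
wo, wc = 30.0, 10.0                      # observer / controller bandwidths
b1, b2, b3 = 3 * wo, 3 * wo**2, wo**3    # linear ESO gains: poles at -wo
kp, kd = wc**2, 2 * wc                   # PD gains: closed-loop poles at -wc
b0, r = 23.2, 1.0                        # nominal input gain, setpoint

y, yd = 0.0, 0.0                         # plant state
z = np.zeros(3)                          # ESO state [z1, z2, z3]
for _ in range(int(T / dt)):
    u0 = kp * (r - z[0]) - kd * z[1]     # PD acting on the estimates
    u = (u0 - z[2]) / b0                 # cancel the estimated total disturbance
    f = -1.41 * yd + 5.0                 # "unknown" dynamics + constant disturbance
    y, yd = y + dt * yd, yd + dt * (f + b0 * u)
    e = z[0] - y
    z = z + dt * np.array([z[1] - b1 * e,
                           z[2] + b0 * u - b2 * e,
                           -b3 * e])
# y settles at r with no steady-state error, and z3 settles at f
```

Even with a constant disturbance and no integral term in the controller, the output settles at the setpoint, because z3 converges to the total disturbance f and the control law cancels it.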
2.6 Summary
Observer structures usually need to include a plant model in their equations, which
inevitably brings practical burdens. Without a model, conventional observers cannot be
constructed; and even when a model is available, a reliable state reconstruction cannot be
expected unless it is sufficiently accurate. Even when a model is accurate enough, the
observer can become too complicated (because of model complexity) to have any
practical use, especially in real time. When an accurate and efficient model is available,
its advantages cannot be overemphasized; in practice, however, such a model is often
difficult to obtain. HGOs, SMOs, and ESOs are three kinds of observers designed for
plants in the presence of disturbances, dynamic uncertainties, and nonlinearities in
practical applications.
CHAPTER III
COMPARISON OF ADVANCED STATE OBSERVER DESIGN TECHNIQUES
A typical servo motor plant is made up of a motor, a servo drive amplifier, a system
of gears, and a load. This plant is used for software and hardware in the loop simulation.
A current drive is typically used in high performance servo motor systems to reduce the
order of the mathematic model of the servomotor by eliminating the effect of the
inductance. In motion control literature, a servo motor can be approximated as a linear,
time-invariant equation of the following form:
\ddot{y} = -\frac{c}{J}\, \dot{y} + \frac{b}{J}\, u        (3.1)

where y is the position, u is the motor current, J and c are the total inertia and total viscous
friction reflected to the load, and b is the total hardware gain, which typically includes the
gear ratio. The transfer function is

G(s) = \frac{b}{s(Js + c)}        (3.2)
The criteria for comparison are based on the robustness of the performance with
respect to the uncertainties of the plant and the observer tracking errors, both at steady
state and during transients. Simulations are conducted to give insight into observer
behavior in both open-loop and closed-loop scenarios.
An industrial motion control test-bed made by ECP [38] is used for simulation. Its
linear model was derived as:
\ddot{y} = -1.41\, \dot{y} + 23.2\, u        (3.3)
where y is the output position, and u is the control voltage sent to the power amplifier
that drives the motor. Initially, no friction, disturbance or backlash is intentionally added.
The model can be written as

\dot{x}_1 = x_2
\dot{x}_2 = -1.41\, x_2 + 23.2\, u
y = x_1        (3.4)

The HGO (2.16) for this plant is

\dot{\hat{x}}_1 = \hat{x}_2 + h_1 (y - \hat{x}_1)
\dot{\hat{x}}_2 = 23.2\, u - 1.41\, \hat{x}_2 + h_2 (y - \hat{x}_1)        (3.5)

with

h_1 = \frac{\alpha_1}{\varepsilon}, \qquad h_2 = \frac{\alpha_2}{\varepsilon^2}        (3.6)

For the NESO, the plant is augmented with the extended state:

\dot{x}_1 = x_2
\dot{x}_2 = x_3 + 23.2\, u
\dot{x}_3 \approx 0
y = x_1        (3.8)

where -1.41 x_2 is treated as f(t, x_1, x_2, w), as well as an extended state, x_3. The observer is

\dot{z}_1 = z_2 + \beta_1\, g_1(e)
\dot{z}_2 = z_3 + \beta_2\, g_2(e) + 23.2\, u
\dot{z}_3 = \beta_3\, g_3(e)        (3.9)

where e = y - z_1 and the g_i(.), i = 1, 2, 3, are the same as in (2.29). The Matlab/Simulink
package from MathWorks was used for the simulation in this research. The simulation
model is shown in Figure 5.
The open-loop tests examine the states of each
observer converging to those of the plant. To make the comparison fair, the parameters of
the observers are adjusted so that their sensitivities to measurement noise are roughly the
same. The exact outputs of y and \dot{y} are obtained directly from the simulation model of
the plant to calculate the state estimation error.
For open-loop tests, the input to the plant is a step function, and the observers are
evaluated according to their capability in tracking the step response. The tests were run
in three conditions:
- the nominal plant;
- the plant with Coulomb friction and disturbance added;
- the plant with a 100% change of inertia.
The same set of observer parameters is used in all simulations. Figure 6 illustrates the
position and velocity estimation errors for the nominal plant in terms of the tracking
errors for y and \dot{y}.
Figure 6: Position and Velocity Estimation Errors of SMO, HGO, and NESO for the Nominal Plant
All three observers perform well in steady state and have roughly the same
accuracy and sensitivity to the noise. As expected, NESO takes longer to reach steady
state because it does not assume knowledge of the plant dynamics. Interestingly, z_3
converges to the unknown function f = -1.41\dot{y}, as shown in Figure 7.
Figure 7: z3(t) Converging to a(t) for the Nominal Plant
Figure 8: Estimation Errors of the Plant with Coulomb Friction

Figure 9: z3(t) Converging to a(t) with Coulomb Friction
Figure 10: Estimated Error of the Plant with 100% Change of Inertia
Figure 11: z3(t) Converging to a(t) with Disturbance
The nonlinear gain function for NESO was first proposed by J. Han [39]. The
research for this dissertation revealed that, for a plant with unknown initial conditions,
a new nonlinear function f_i(.), as shown in (3.10), can be used in NESO to avoid
significant transient estimation error:

        (3.10)

with k_{1i}, k_{2i} > 0. Furthermore, by choosing \alpha_i < 0 in (2.29), the transient error was
significantly reduced. Three curves from (2.29) and (3.10) are shown in Figure 12 to
illustrate the differences. As in (2.29), \delta defines the range of a high-gain section where
the observer is very aggressive. This range is usually small.
Figure 12: The Nonlinear Functions f_i(.), g_1(.), and g_2(.); for g_2(.): k_{11} = 24, k_{12} = 192,
k_{13} = 512, \alpha_1 = \alpha_2 = \alpha_3 = 1, \delta_1 = 2, \delta_2 = .2, \delta_3 = .2
Figure 13: Transient Estimation Errors Using f_i(.), g_1(.), and g_2(.)
The simulation results clearly indicate that the g_1(.) and f_i(.) functions achieve the
best performance with the smallest tracking errors. The results also suggest that the
negative power should be used in (2.29) to counter unknown initial conditions.
Based on their open-loop performance, NESO and SMO are evaluated in a closed-loop
feedback setting, such as the one in Figure 14 for NESO. The profile generator
provides the desired state trajectory in both y and \dot{y}, using an industry-standard
trapezoidal profile. The PD controller is designed independently, assuming that all states
are accessible in the control law.
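A trapezoidal profile generator of the kind referred to above can be sketched in a few lines; the move distance and the limits vmax and amax are illustrative parameters, and the move is assumed long enough to contain a cruise phase.

```python
def trapezoidal_profile(t, dist, vmax, amax):
    """Desired (position, velocity) at time t for a trapezoidal move:
    accelerate at amax, cruise at vmax, decelerate at amax.
    Assumes dist >= vmax**2 / amax so a cruise phase exists."""
    ta = vmax / amax                    # duration of each acceleration ramp
    tc = dist / vmax - ta               # duration of the cruise phase
    if t < ta:                          # ramping up
        return 0.5 * amax * t * t, amax * t
    if t < ta + tc:                     # cruising
        return 0.5 * amax * ta * ta + vmax * (t - ta), vmax
    td = t - ta - tc                    # time into the deceleration ramp
    if td < ta:
        return dist - 0.5 * amax * (ta - td) ** 2, vmax - amax * td
    return dist, 0.0                    # move complete

# Example: a one-revolution move with vmax = 0.5 rev/s, amax = 1 rev/s^2
pos, vel = trapezoidal_profile(1.0, 1.0, 0.5, 1.0)
```

Feeding both outputs to the controller provides the desired y and its derivative without numerically differentiating the set point.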
Figure 14: Closed-loop NESO-based Control System: profile generator, PD controller,
1/b0 scaling, plant, and extended state observer (ESO)
u_0 = K e        (3.11)

where e = [v_1 - z_1, -z_2]^T and K is the state feedback gain that is equivalent to a
proportional-derivative (PD) controller design. Note that -z_2, instead of \dot{v}_1(t) - z_2, is
used to avoid taking the derivative of the set point. It is also used to make the closed-loop
transfer function a pure second order without a zero. Substituting (3.9) and (2.35) into the
plant dynamics,

\ddot{y} = \big( f(y, \dot{y}, w) - z_3 \big) + K e        (3.12)
K can be determined via pole-placement. More details about this design strategy can be
found in [40-43].
For SMO-based state feedback design, only the position and velocity estimates, z1
and z2, are available and the corresponding PD design yields the following closed-loop
system,
\ddot{y} = f_0(y, \dot{y}, w) + K e = -1.41\, \dot{y} + K e        (3.13)
Note that the extended state, z_3, is not available in SMO. The poles of the observers are
selected at 12 rad/sec. For the controller design, the closed-loop poles are placed at 15
rad/sec. In addition, k_1 = 0.5 and k_2 = 15 are used for SMO, and \alpha_i = \{1, 0.5, 0.25\} and
\delta = 10^{-3} are used for NESO. For the sake of fairness, the same observer and control poles
are used for both NESO- and SMO-based designs. The main difference in design is that
the NESO-based method assumes no knowledge of f(y, \dot{y}, w).
Simulation results are shown in Figures 15, 16, and 17. Both control systems have
similar output responses for the nominal plant. However, as soon as unknown friction or
disturbances are introduced, the differences between NESO and SMO become apparent,
indicating that the NESO-based design has inherent robustness against the uncertainties.
By estimating f(y, \dot{y}, w), instead of modeling it, the controller becomes independent of it.
Figure 15: Closed-loop Response for the Nominal Plant

Figure 16: Closed-loop Response with Unknown Friction

Figure 17: Closed-loop Response with Disturbance
An industrial motion control test-bed is used to verify the above results and show
potential practical applications. The setup includes a PC-based control platform and a DC
brushless servo system, shown in Figures 18 and 19.
Figure 18: Experimental Setup: signal generator, DC signal conditioning (+/-3.5 V, 0-5 V),
DC drive, and plant

Figure 19: PC-based ADRC Platform: data acquisition board (clock 8254, DAC, RS232),
two-channel DC drive, electromechanical plant, quadrature counting board (LS7166), and
encoders
The control signal is sent to the servo
amplifier. The amplifier input is rated for +/-10 volts peak and +/-3.5 volts safe. The safe
value is used because the servo drive for the plant is a dumb amplifier, meaning there is
no over-current protection. If the safe limit were not used, a control scheme would have
to be implemented to prevent damage to the drive. For details on the current drive, see
[38]. A Pentium 133 MHz PC running DOS is programmed as the controller. It
contains a CIO-DAS16/F data acquisition card to read the position encoder output signal.
This card has two 12-bit analog output channels that are configured for 0-5 volts. For
details on setting up and programming this card, see [44].
acquisition card is 0-5 volts, and +/-3.5 volts is required by the servo amplifier, additional
signal conditioning circuit had to be added. Since the feedback from the control loop is a
quadrature encoder signal, a special quadrature encoder data acquisition card is used
(LS7166 by US Digital). This card can accept input frequencies from DC to 10 MHZ and
is capable of reading four encoders simultaneously. The sampling frequency is 1 kHz.
The output of the controller is limited to 3.5 V. The drive system has a dead zone of
0.5 V.
In general, the hardware test results are consistent with those of the software
simulations.
.......
N E S O
1 . 4
S M
1 . 2
Po sitio n (rev.)
0 . 8
0 . 6
0 . 4
0 . 2
0
0
1 0
1 0
1 0
0 . 3 5
0 . 3
0 . 2 5
0 . 2
0 . 1 5
0 . 1
0 . 0 5
- 0 . 0 5
3
2 . 5
2
1 . 5
1
0 . 5
0
- 0 . 5
- 1
- 1 . 5
- 2
4
T im
( s e c .)
45
......S M
N E S O
1 . 4
1 . 2
Po sitio n (rev.)
0 . 8
0 . 6
0 . 4
0 . 2
0
0
1 0
1 0
1 0
0 . 3 5
0 . 3
0 . 2 5
0 . 2
0 . 1 5
0 . 1
0 . 0 5
-0 . 0 5
3 . 5
3
2 . 5
2
C ontro l (voltage)
1 . 5
1
0 . 5
0
-0 . 5
-1
-1 . 5
4
T im
(s e c .)
46
3.3 Summary
As a state estimator, NESO performs better than HGO and SMO. The robustness of
the NESO to plant uncertainty and external disturbance is inherent in its structure.
The chattering problem is the main drawback of the SMO in practical applications.
The simulation and experimental results seem to justify the design concepts of NESO.
By augmenting the plant and treating the unknown dynamics as an extended state, an alternative design method for state observers, and an alternative to system identification, is obtained: instead of trying to find a mathematical expression for the dynamics and disturbances, a state observer can be built to estimate them and compensate for them in real time.
CHAPTER IV
CONTROL APPLICATIONS
ẋ1 = x2
ẋ2 = x3 + b0·u
ẋ3 = h
y = x1
(4.1)
or, in state-space form,
ẋ = Ax + Bu + Eh,  y = Cx
(4.2)
where
A = [0 1 0; 0 0 1; 0 0 0],  B = [0; b0; 0],  C = [1 0 0],  E = [0; 0; 1]
The LESO for (4.2) is constructed as
ż = Az + Bu + L(y − Cz)
(4.3)
where L = [β1 β2 β3]ᵀ is the observer gain vector, which is obtained using pole placement. Combining (4.2) and (4.3), the error equation can be written as:
ė = Ae·e + Eh
(4.4)
where
ei = xi − zi, i = 1, 2, 3
and
Ae = A − LC = [−β1 1 0; −β2 0 1; −β3 0 0]
(4.5)
The observer error converges if the roots of the characteristic polynomial
λ(s) = s³ + β1·s² + β2·s + β3
(4.6)
are all in the left half-plane and h is bounded. Suppose the observer poles are all placed at −ωo, i.e.,
λo(s) = s³ + β1·s² + β2·s + β3 = (s + ωo)³
(4.7)
then
β1 = 3ωo,  β2 = 3ωo²,  β3 = ωo³
(4.8)
The choice of ωo is a trade-off between how fast the observer tracks the states and how sensitive it is to sensor noise.
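As a quick sketch, the pole-placement formulas in (4.8) can be computed directly; `leso_gains` is an illustrative helper name, not code from the test-bed.

```python
def leso_gains(wo: float):
    """Observer gains placing all three LESO poles at -wo, per (4.7)-(4.8):
    (s + wo)^3 = s^3 + 3*wo*s^2 + 3*wo^2*s + wo^3."""
    return 3.0 * wo, 3.0 * wo**2, wo**3

# Example: wo = 40 rad/sec, the observer bandwidth used for LADRC in Table I.
beta1, beta2, beta3 = leso_gains(40.0)
```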
The control law for the plant (4.1) is given as
u = (−z3 + u0)/b0
(4.9)
which reduces the plant to an approximate double integrator,
ÿ ≈ u0
(4.10)
The feedback is chosen as
u0 = kp(r − z1) − kd·z2
(4.11)
where r is the set point. This results in a pure second-order closed-loop transfer function of
G(s) = kp/(s² + kd·s + kp)
(4.12)
with
kp = ωc²,  kd = 2ζωc
(4.13)
where ωc and ζ are the desired closed-loop natural frequency and damping ratio. ζ can simply be chosen as one to avoid any oscillations, and ωc is determined based on the transient response requirements.
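The gain formulas (4.13) can likewise be sketched; with ζ = 1 the closed-loop polynomial s² + kd·s + kp is critically damped (zero discriminant). The helper name is ours.

```python
def pd_gains(wc: float, zeta: float = 1.0):
    """PD gains from (4.13): the closed-loop characteristic polynomial
    s^2 + kd*s + kp becomes s^2 + 2*zeta*wc*s + wc^2."""
    return wc**2, 2.0 * zeta * wc

kp, kd = pd_gains(30.0)  # wc = 30 rad/sec, critically damped (zeta = 1)
```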
The LESO-based control scheme is referred to as LADRC [45], which uses a linear observer and linear feedback gains in place of the nonlinear ones in (2.29) and (2.37).
Optimization of LADRC
Step 3: Increase both ωc and ωo until the noise level and/or the oscillations in the control signal and output become intolerable.
[Figure 22: a standard feedback loop — prefilter P(s), controller Gc(s), and plant Gp(s), with reference input r(t), error e(t), control u(t), disturbance d(t), output y(t), and sensor noise n(t)]
Design Specifications:
Command following:
Y(s)/R(s) = Gp(s)Gc(s)/[1 + Gp(s)Gc(s)] ≈ 1 when |Gp(jω)Gc(jω)| ≫ 1
(4.14)
Disturbance rejection:
Y(s)/D(s) = Gp(s)/[1 + Gp(s)Gc(s)], which is small when |Gp(jω)Gc(jω)| is large
(4.15)
Noise rejection: the transmission of the sensor noise n(t) to the output is small when |Gp(jω)Gc(jω)| ≪ 1
(4.16)
Robust stability:
|Gp(jω)Gc(jω)/[1 + Gp(jω)Gc(jω)]| < 1/|lm(jω)|
(4.17)
where lm(jω) bounds the multiplicative model uncertainty.
Constraints on Gp(jω)Gc(jω): the loop gain should be large at low frequencies and small at high frequencies, crossing over at roughly −20 dB/dec.
[Figure 23: desired loop gain |Gp(jω)Gc(jω)| in dB — low-frequency constraints, a −20 dB/dec crossover region, and high-frequency constraints]
The desired loop gain is chosen as
Gp(s)Gc(s) = (ωc/s)(ω1/s + 1)^m/(s/ω2 + 1)^n
(4.18)
where ωc is the crossover frequency (bandwidth), and ω1 < ωc, ω2 > ωc, m ≥ 0, and n ≥ 0 are selected to meet the constraints shown in Figure 23. Both m and n are integers. The default values for ω1 and ω2 are
ω1 = ωc/10
(4.19)
ω2 = 10ωc
(4.20)
which yield a phase margin of approximately ninety degrees. The controller can be derived from (4.18):
Gc(s) = (ωc/s)(ω1/s + 1)^m/(s/ω2 + 1)^n · Gp⁻¹(s)
(4.21)
This design applies when
(ωc/s)(ω1/s + 1)^m/(s/ω2 + 1)^n · Gp⁻¹(s) is proper.
(4.22)
(4.22)
This design is valid only if the plant is minimum phase. For a non-minimum phase plant,
a minimum phase approximation of G p1 ( s ) should be used instead. A compromise
between 1 and the phase margin can be made by adjusting 1 upwards, which will
increase the low frequency gains as the cost of reducing the phase margin. Similar
compromise can be made between the phase margin and 2 .
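Under the reading of (4.23) used here, Gc(s) = ωc(s + ω1)²(s + 1.41)/[23.24·s²·(s/ω2 + 1)²], the coefficient assembly can be sketched with polynomial multiplication; the function name and this exact factorization are our assumptions.

```python
import numpy as np

def lsc_polynomials(wc: float):
    """Numerator/denominator coefficients (highest power first) of the
    loop-shaping controller for Gp(s) = 23.24/(s(s + 1.41)), m = n = 2,
    with the default corners w1 = wc/10 and w2 = 10*wc from (4.19)-(4.20)."""
    w1, w2 = wc / 10.0, 10.0 * wc
    num = wc * np.polymul(np.polymul([1.0, w1], [1.0, w1]), [1.0, 1.41])
    den = 23.24 * np.polymul([1.0, 0.0, 0.0],             # the s^2 factor
                             np.polymul([1.0 / w2, 1.0], [1.0 / w2, 1.0]))
    return num, den

num, den = lsc_polynomials(20.0)  # wc = 20 rad/sec, as in Table I
```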
The controller used here has two parameters, kp and ωc. Usually, kp = 0.1–0.7.
Step 1: Set an initial value of ωc based on the bandwidth of the required transient response;
Step 2: Increase ωc while performing tests on the simulator, until the control signal becomes too noisy and/or too bumpy, or the output is oscillatory.
In order to accurately compare the three control algorithms, the simulation system was set up for the servo motor model, as shown in Figure 24. The controller blocks are s-functions written in C code.
[Figure 24: Simulink model of the comparison study — ADRC, LADRC, and loop-shaping (LOOP) controller blocks with zero-order holds, step and profile inputs, and workspace logging of y, u, and e]
The servo motor model is
Gp(s) = 23.24/[s(s + 1.41)]
Selecting m=n=2, the loop shaping controller can be obtained from (4.21)
Gc(s) = (ωc/s)(ω1/s + 1)²/(s/ω2 + 1)² · s(s + 1.41)/23.24
      = ωc(s + ω1)²(s + 1.41)/[23.24·s²·(s/ω2 + 1)²]
(4.23)
The controller is digitized using the Tustin approximation
s ≈ 2(z − 1)/[T(z + 1)]
(4.24)
which maps Gc(s) to a discrete transfer function u(z)/e(z), where u(z) and e(z) are the control signal and the error signal. Combining (4.23) with (4.24), the digitized loop shaping controller is calculated as

u[k] = (a12 − a5·a11·u[k−4] − (a5·a10 − 8·a11)·u[k−3] − (a11·a4 − 8·a10 + a5·a9)·u[k−2] − (a4·a10 − 8·a9)·u[k−1])/(a4·a9),  k > 0

where, with sample period h:
a1 = 4ωc + 4·kp·ωc²·h + kp²·ωc³·h²
a2 = 2·kp²·ωc³·h² − 8ωc
a3 = kp²·ωc³·h² + 4ωc − 4·kp·h·ωc²
a4 = 4 + 2·ωc·h
a5 = 4 − 2·h·ωc
a6 = 200·ωc²·h + 141·ωc²·h²
a7 = 282·ωc²·h²
a8 = 141·ωc²·h² − 200·ωc²·h
a9 = 4 + 40·h·ωc + 100·ωc²·h²
a10 = 200·ωc²·h² − 8
a11 = 100·ωc²·h² − 40·h·ωc + 4
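To illustrate the digitization step, the Tustin substitution (4.24) applied to a single first-order lag 1/(s + a) gives a one-line difference equation; the helper below is a sketch, not the test-bed code.

```python
def tustin_first_order(a: float, T: float):
    """Tustin discretization of H(s) = 1/(s + a) with sample period T:
    returns (b0, b1, a1) for y[k] = -a1*y[k-1] + b0*u[k] + b1*u[k-1]."""
    c = 2.0 / T                  # the 2/T factor in s ~ 2(z-1)/(T(z+1))
    b0 = 1.0 / (c + a)
    return b0, b0, (a - c) / (c + a)

# DC gain check: at z = 1 the discrete filter matches H(0) = 1/a.
b0, b1, a1 = tustin_first_order(1.41, 0.001)
```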
The ADRC controller combines the nonlinear ESO with the control law
u(t) = u0(t) − z3(t)/b0
u0(t) = kp·fal(ΔP, αp, δp) + kd·fal(Δd, αd, δd)
where ΔP = v1 − z1 and Δd = v2 − z2; αp > 0, αd > 0; δp, δd are real.
The LADRC controller is designed as
LESO:
ż1 = z2 − β1(z1 − y)
ż2 = z3 − β2(z1 − y) + b0·u
ż3 = −β3(z1 − y)
u0(t) = [ωc²·e(t) − 2ωc·z2(t)]/16.5,  e(t) = r − z1(t)
u(t) = u0(t) − z3(t)/b0
The parameters for these three controllers are shown in Table I. The sample time h is 0.001 second. The simulation responses of the nominal plant are shown in Figures 25, 26, and 27. The parameter tuning is based on equal noise sensitivity of the control signals. Note that the output of the ADRC system has much less transient error than the LADRC and loop shaping control (LSC) systems because the ADRC controller uses two desired input signals, r and the derivative of r. The settling times and control signals of the three systems are very similar.
Table I: Controller Parameters

BLOCK   PARAMETERS
ADRC    a = 20, β1 = 3a, β2 = 3a², β3 = a³;
        α1 = 1.0, δ1 = 0.01; α2 = 0.5, δ2 = 0.1; α3 = 0.25, δ3 = 0.1;
        kp = 0.1, αp = 0.4, δp = 0.01; kd = 10, αd = 1.5, δd = 0.001; b0 = 25
LADRC   a = 40, β1 = 3a, β2 = 3a², β3 = a³; ωc = 30, b0 = 25
LSC     kp = 0.7, ωc = 20, k = 0.5
[Figures 25–27: position error (rev.) and control signal (voltage) of the ADRC, LADRC, and LSC systems for the nominal plant; Time (sec.)]
In order to test the robustness of the three control systems, a change of inertia and a torque disturbance are added to the plant separately. The parameters of the controllers remain the same. Figures 28, 29, and 30 show the simulation responses of the plant with the inertia increased by 100%. The output of the LSC system clearly has more overshoot than the ADRC system. The settling times of the ADRC and LADRC systems are roughly the same. Figures 31, 32, and 33 illustrate the responses of the plant with a 20% torque disturbance. Note that the system using LSC has the longest recovery time, followed by ADRC.
Figure 28: Position Error and Control Signal of ADRC System with Inertia Change
Figure 29: Position Error and Control Signal of LADRC System with Inertia Change
Figure 30: Position Error and Control Signal of LSC System with Inertia Change
Figure 31: Position Error and Control Signal of ADRC System with Torque Disturbance
Figure 32: Position Error and Control Signal of LADRC System with Torque Disturbance
Figure 33: Position Error and Control Signal of LSC System with Torque Disturbance
4.4 Hardware Tests
The motion control setup in Chapter III is used for the hardware tests. The parameters of the controllers are the same as those used in the simulations. The results for the nominal plant are shown in Figure 34. Note that the system controlled by LSC has overshoot; again, the ADRC system has the smallest transient error. The following tests are performed individually in order to compare the robustness of the controllers:
Increase the load by 75% (attach two 0.5 kg weights to the disc at a radius of 6.6 cm);
Apply a torque disturbance of 30% of Tmax at 4 seconds;
Exert friction on the plant.
The performances are evaluated in Table II. Figure 35 shows the responses of the plant with the 30% torque disturbance at 4 seconds. Note that the output of the LADRC system has the shortest recovery time. Figure 36 illustrates the responses of the plant with friction. Note that the performance of the LADRC and LSC systems is better than that of ADRC.
[Figure 34: responses of the nominal plant — position (rev.) and control signal (voltage) for ADRC, LADRC, and LSC; Time (sec.)]
Figure 35: Responses of the Plant with 30% Torque Disturbance at 4 Seconds
[Figure 36: responses of the plant with friction — position (rev.) and control signal (voltage) for ADRC, LADRC, and LSC; Time (sec.)]
Table II: Hardware Test Performance

Controller  Case                        Overshoot (%)  Settling Time (sec.)  Steady-State Error (rev.)
ADRC        Nominal Case                0.00           0.60                  4.889×10⁻⁸
ADRC        Load Added                  0.00           0.60                  1.87×10⁻⁴
ADRC        Friction Exerted            0.00           2.00                  1×10⁻⁴
ADRC        Disturbance (30% of Tmax)   0.00           2.00                  4.889×10⁻⁸
LADRC       Nominal Case                0.00           0.70                  4.889×10⁻⁸
LADRC       Load Added                  0.00           0.70                  4.889×10⁻⁸
LADRC       Friction Exerted            0.00           0.80                  1.5×10⁻⁴
LADRC       Disturbance (30% of Tmax)   0.00           0.10                  4.889×10⁻⁸
LSC         Nominal Case                3.00           0.70                  6.254×10⁻⁵
LSC         Load Added                  6.00           0.80                  6.254×10⁻⁵
LSC         Friction Exerted            5.00           0.90                  1×10⁻⁴
LSC         Disturbance (30% of Tmax)   0.50           2.00                  0.00
4.5 Summary
The simulation and experimental results presented above demonstrate that, with the optimal tuning method, loop shaping control can be a viable solution for practical applications. Since it is a model-based control method, it has the least disturbance rejection ability compared with the LADRC and ADRC approaches.
With the optimal tuning method, LADRC can outperform ADRC. The tuning of the ADRC controller is done through trial and error, so there is considerable room for its performance to be improved; if this tuning problem were solved, the ADRC system could achieve better performance than the LADRC system.
CHAPTER V
Active magnetic bearings (AMBs) are bearings that suspend a rotor by magnetic forces without any contact. They have many advantages over conventional bearings because contact-free support eliminates mechanical friction, lubrication, and wear problems. In this chapter, based on a nonlinear model of an active magnetic bearing system, nonlinear controllers are developed for the electromechanical system using the nonlinear dynamics placement (NDP) control scheme and the LADRC approach. Simulation results illustrate the performance of these methods.
5.1 Background
As specified in [34, 42, 43], nonlinear feedback mechanisms are, in general, superior to linear ones and should be explored. LADRC is also proposed for this system because of its strong disturbance rejection and simple tuning. The research is motivated by the need to account for the periodic disturbance acting on the AMB system.
The AMB problem is briefly introduced in [58, 59]. The AMB benchmark system consists of a rigid beam that is free to rotate about a pivot located at its center of mass (see Figure 37) [59]. The beam's angular position is controlled via two electromagnets positioned at each end of the beam. A non-contacting displacement sensor measures the rotation of the beam. An additional actuator, located at one end of the beam, is used to apply a perturbation force that emulates a periodic, exogenous disturbance. This system incorporates all of the typical nonlinearities of an AMB system. The plant is open-loop unstable.
The system can be modeled by the differential equations (5.1) through (5.8) [58]. The parameters and the signals used for these equations are listed in Appendix B.
φ̇1 = −(R/N)·h(p(g + y)φ1, g + y) + (1/N)·sat(u1)
(5.1)
φ̇2 = −(R/N)·h(p(g − y)φ2, g − y) + (1/N)·sat(u2)
(5.2)
f1 = c[p(g + y)φ1]²
(5.3)
f2 = c[p(g − y)φ2]²
(5.4)
θ̈ = (L/J)(f1 − f2 − d)
(5.5)
with system outputs given by
y = Kθ
(5.6)
I1 = h(p(g + y)φ1, g + y)
(5.7)
I2 = h(p(g − y)φ2, g − y)
(5.8)
K converts the rotation of the beam to mils, and K < 0. The flux leakage function p(·) can be described by p(x) = (ax + b)⁻¹, where the constants are given in Appendix B. The nonlinear function h(·) is given as a look-up table in the simulation model in [58], as shown in Figure 38.
The control problem is complicated by the fact that there is a periodic disturbance torque, d, as shown in (5.5), with its magnitude limited to less than 4 N-m and its frequency between 10 and 50 Hz, and that the beam dynamics include an unmodeled flexible mode.
5.3 Nonlinear Dynamics Placement (NDP) Design
The use of nonlinear gains has been contemplated by many researchers and reported in the literature; see for example [34, 42, 43]. Nonlinear feedback is known to have distinct characteristics that can be exploited in order to bring unique advantages to feedback control. The nature of the AMB problem and the difficulty of its control design make it a natural candidate. Consider a motion system described by
ÿ = w + u
(5.9)
where y is the output, u is the control signal, and w is the disturbance. With r as the
reference input, a typical linear control law is
u = k1(r − y) − k2·ẏ,  ki > 0, i = 1, 2
(5.10)
where k1 and k2 are determined to place the closed-loop poles at a predetermined location.
Such a control law, known as proportional-derivative (PD) controller, is widely used in
motion control industry. Difficulties may arise when there is a constant disturbance, w,
which induces a steady-state error. The common remedy is to add an integration term to
the controller, but the associated phase lag reduces the stability margin.
Instead of adding an integrator to address the steady-state error problem associated
with the PD controller in (5.10), an alternative method is to employ nonlinear gains,
instead of linear ones [34, 42, 43]. The main idea is to make the controller more sensitive
to small errors by increasing the gain in the small error region. As demonstrated, this
method not only relieves the controller of the integrator phase lag, but also results in
74
better disturbance rejection and transient response. That is, nonlinear dynamics can be
purposely placed in the closed-loop system to make it behave better. One such alternative
controller of (5.10) is
u = k1·fal(e, α1, δ1) − k2·fal(ẏ, α2, δ2)
(5.11)
Here, α and δ are the two parameters of the nonlinear function fal. For α = 1, fal is equivalent to the linear function y = x. For 0 < α < 1, the nonlinear controller provides higher gain when the error is small and lower gain when the error is large. The idea is that making the controller more responsive to small errors helps avoid the use of integrator control. δ is a small positive number that creates a small linear-gain region around the origin to avoid singularity. Combining (5.9) and (5.11), the closed-loop system is purposely made nonlinear and is, therefore, called NDP.
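The nonlinear gain in (5.11) is commonly defined in the ADRC literature as below; this sketch follows that usual definition, and the dissertation's exact variant may differ.

```python
import math

def fal(e: float, alpha: float, delta: float) -> float:
    """Power-law error gain: |e|^alpha * sign(e) for |e| > delta, with a
    linear band of width delta around the origin to avoid the infinite
    slope at e = 0.  Continuous at |e| = delta."""
    if abs(e) > delta:
        return math.copysign(abs(e) ** alpha, e)
    return e / (delta ** (1.0 - alpha))

# For 0 < alpha < 1 the equivalent gain fal(e)/e grows as |e| shrinks,
# which is what makes the controller more responsive to small errors.
```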
The AMB problem is solved in two steps: (1) by treating the mechanical part of the
plant as a double integrator, an NDP design is carried out as discussed above; and (2) a
proportional controller is designed for the flux loop.
Combining (5.5) and (5.6) and assuming d = 0, the beam angular acceleration can
be written as
ÿ = Kθ̈ = (KL/J)(f1 − f2)
(5.12)
By defining the virtual control u0, the beam dynamics reduce to a double integrator,
ÿ = u0
(5.13)
where
u0 = (KL/J)(f1 − f2)
(5.14)
The NDP control law is
u0 = k1·fal(r − y, α1, δ1) − k2·fal(ẏ, α2, δ2)
(5.15)
Given u0, the force commands are allocated by switching between the two electromagnets:
u0 < 0: f1 = u0·J/(KL), f2 = 0;  u0 ≥ 0: f1 = 0, f2 = −u0·J/(KL)
(5.16)
This scheme switches between the two actuators and effectively utilizes the power supply voltages sat(u1) and sat(u2) [61]. Next, the desired fluxes φ1*, φ2* are obtained by solving (5.3) and (5.4):
φ1* = √(f1/c)/p(g + y),  φ2* = √(f2/c)/p(g − y)
(5.17)
and used as the set point for (5.1) and (5.2). Assuming the actuators are operating in
linear regions, (5.1) and (5.2) can be rewritten as
φ̇1 = −(R/N)·I1 + (1/N)·u1
(5.18)
φ̇2 = −(R/N)·I2 + (1/N)·u2
(5.19)
Because this power electronic component has a much higher bandwidth than its mechanical counterpart, a simple proportional control is usually sufficient. That is, the control law that makes φ follow φ* is designed as:
u1 = N·k3(φ1* − φ1) + R·I1
(5.20)
u2 = N·k4(φ2* − φ2) + R·I2
(5.21)
where k3 and k4 are the feedback gains that can be determined using any linear design method.
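The switching and flux-loop steps (5.16)–(5.21) can be collected into one sketch. All numeric values (c, the lumped gain KLJ, g, N, R, k3, k4) and the unit leakage model p are placeholders, not the Appendix B parameters, and the signs are written for a positive effective gain.

```python
import math

def flux_commands(u0, y, I1, I2, phi1, phi2,
                  c=1e-4, KLJ=1.0, g=0.05, N=100.0, R=1.0,
                  k3=50.0, k4=50.0, p=lambda x: 1.0):
    """Map the virtual control u0 to the two coil voltages u1, u2."""
    # (5.16): switch between the electromagnets so only one pulls at a time.
    if u0 < 0.0:
        f1, f2 = -u0 / KLJ, 0.0
    else:
        f1, f2 = 0.0, u0 / KLJ
    # (5.17): desired fluxes, the set points of the flux loops.
    phi1_ref = math.sqrt(f1 / c) / p(g + y)
    phi2_ref = math.sqrt(f2 / c) / p(g - y)
    # (5.20)-(5.21): proportional flux control with resistive feed-forward.
    u1 = N * k3 * (phi1_ref - phi1) + R * I1
    u2 = N * k4 * (phi2_ref - phi2) + R * I2
    return u1, u2
```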
The key idea is to use the nonlinear feedback in (5.15) to make the controller more sensitive to small errors and, therefore, aggressively regulate the beam position in the presence of external disturbances. It also reduces or even eliminates the need for integral control, thus enhancing the stability margins of the closed-loop control system. This approach has been successfully tested in similar motion control applications, where it exhibits good disturbance rejection characteristics, and was tested in simulation for the AMB problem.
Another solution is to use the LADRC method, instead of NDP, in the first step. The ẏ signal can be obtained from the LESO; therefore, a differentiator is not necessary for this method. The beam angular acceleration equation
ÿ = Kθ̈ = (KL/J)(f1 − f2 − d)
can be written as
ÿ = Kθ̈ = (KL/J)(f1 − f2 − d) − u + u
(5.22)
Assume f = (KL/J)(f1 − f2 − d) − u; then
ÿ = f + u
(5.23)
where f is the lumped uncertainty of the plant and u is the control signal. Applying the LADRC to (5.23):
LESO:
ż1 = z2 − β1(z1 − y)
ż2 = z3 − β2(z1 − y) + u
ż3 = −β3(z1 − y)
(5.24)
With u = u0 − z3, the plant reduces to ÿ ≈ u0. The next steps are the same as (5.16) ~ (5.21).
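A discrete (forward-Euler) sketch of the LESO in (5.24), with gains placed as in (4.8); the function name, step size, and bandwidth below are illustrative choices. Driven by the measured y, z3 converges to the lumped uncertainty f, so u = u0 − z3 restores the double integrator.

```python
def leso_step(z, y, u, wo, h):
    """Advance the LESO of (5.24) one Euler step of size h.
    z = (z1, z2, z3); gains (3*wo, 3*wo**2, wo**3) place poles at -wo."""
    z1, z2, z3 = z
    e = z1 - y
    b1, b2, b3 = 3.0 * wo, 3.0 * wo**2, wo**3
    return (z1 + h * (z2 - b1 * e),
            z2 + h * (z3 - b2 * e + u),
            z3 + h * (-b3 * e))
```

For example, feeding the observer the output of a double integrator driven by a constant unknown input makes z3 estimate that input.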
The simulation model of AMB [58] is shown in Figure 39. Figures 40 and 41
present the models of leakage and beam dynamics.
[Figures 39–41: Simulink model of the AMB benchmark — flux integrators with saturation, leakage blocks (Phi_gap/Phi_tot slope 0.026496, intercept 1.0528), flux-to-current look-up tables, force calculations, disturbance input, and beam dynamics including the flexible modes and a radian-to-mils gain]
The proposed control approaches are tested in simulation. The control laws (5.15), (5.16), (5.20), (5.21), and (5.24) are implemented digitally with a sampling period of 10⁻⁴ seconds. In order to more accurately demonstrate the advantages of the NDP approach, a linear pole placement (PP) method is introduced into the simulation test. Instead of using the nonlinear control law shown in (5.15), the PP method uses a linear PD control scheme. A smooth profile is used in simulation as the desired trajectory of the output. The torque disturbance is d(t) = 4sin(40πt). The initial conditions are y(0) = 10 mils and ẏ(0) = 2 mils/sec. The LADRC controller parameters are chosen as ωo = 3000, ωc = 300, ζ = 1, β1 = 3ωo, β2 = 3ωo², β3 = ωo³.
Figure 42 illustrates the steady-state displacement errors of the three systems. Note that the NDP and LADRC systems have smaller steady-state errors than the PP system. The LADRC system's settling time is slightly longer than that of the NDP and PP systems.
[Figure 42: steady-state displacement errors (mils) of the LADRC, PP, and NDP systems; Time (sec.)]
[Figures 43–45: control voltages (voltage1, voltage2) and fluxes (flux1, flux2) of the three closed-loop systems; Time (sec.)]
Figure 46: Displacement Errors for the Plant Coil Resistance Increased by 25%
[Figures 47–49: control voltages (voltage1, voltage2) and fluxes (flux1, flux2) of the three systems with the coil resistance increased by 25%; Time (sec.)]
To further investigate the robustness of the controllers, additional tests (a higher-frequency disturbance, a 25% change in coil resistance, and a change of initial conditions) are performed individually. The results are evaluated in terms of settling time and steady-state error, as shown in Table III. These results reveal that the proposed NDP method and LADRC have a distinct advantage in disturbance attenuation.
Table III: Settling Time and Steady-State Error

Controller  Case                                              Settling Time          Steady-State
                                                              (2% vs. initial        Error (mils)
                                                              disp.) (sec)
NDP         d = 4sin(40πt)                                    0.0218                 0.05
NDP         d = 4sin(100πt)                                   0.0225                 0.05
NDP         25% change in resistance, d = 4sin(40πt)          0.0195                 0.05
NDP         y(0) = 5 mils, ẏ(0) = 2 mils/sec, d = 4sin(40πt)  0.0216                 0.12
PP          d = 4sin(40πt)                                    0.023                  0.16
PP          d = 4sin(100πt)                                   0.0235                 0.16
PP          25% change in resistance, d = 4sin(40πt)          0.021                  0.16
PP          y(0) = 5 mils, ẏ(0) = 2 mils/sec, d = 4sin(40πt)  N/A                    0.35
LADRC       d = 4sin(40πt)                                    0.025                  0.06
LADRC       d = 4sin(100πt)                                   0.025                  0.1
LADRC       25% change in resistance, d = 4sin(40πt)          0.03                   0.05
LADRC       y(0) = 5 mils, ẏ(0) = 2 mils/sec, d = 4sin(40πt)  0.025                  0.17
Finally, NDP is compared with the saturated high gain controller proposed in [61]. Figures 50 and 51 plot the responses of the two systems on the same scale. Due to the lack of information, the simulation results in [61] were directly imported into Figure 50. They show that the steady-state error of the saturated high gain controller is 2.5 mils, which is much larger than that of the NDP system. Furthermore, the control voltages used by the NDP system are much smaller than those used by the saturated high gain system. The NDP system has an overall better performance in terms of smoothness of the output, smaller steady-state error, and less control action required.
[Figures 50 and 51: control voltages sat(u1) and sat(u2) of the saturated high gain controller and the NDP system; Time (sec.)]
5.5 Summary
Two novel control methods are proposed to resolve the AMB problem, and the simulation results are very promising. For AMB systems, where the nonlinearity is inherently strong, a control system based on the nonlinear model is more effective than one based on a linear model. Furthermore, both controllers are practical and easily implemented in a digital controller.
CHAPTER VI
6.1 Absolute Stability
Simulation and experimental results show that ADRC has many advantages, such as faster reaction, smaller overshoot, and strong robustness. It can estimate the disturbance using the ESO and compensate for it in each sampling period. The dynamic compensation reduces the system to approximately a double integrator, which can be easily controlled using a PD controller.
Although ADRC is used in many fields, its stability analysis has not been completely resolved. For ADRC to be considered a serious contender in industrial control design, a measure of stability or a certain degree of safety must be provided. ADRC can be viewed as a nonlinear controller. At present, the approaches used to analyze the stability of nonlinear systems include describing functions, phase plane analysis, the extended circle criterion, and Lyapunov's method. Lyapunov's method is probably the most widely used approach: the second method of Lyapunov determines the stability of an equilibrium in a system described by nonlinear differential equations, and it is especially useful when the control systems are described by such equations. On the other hand, stability analysis in classical control theory has been centered on frequency-domain methods, propelled by the beauty, practicality, and generality (for linear systems) of the Nyquist criterion. Each of these approaches has its advantages. Frequency-domain methods use very compact model specifications and are able to address robustness through gain and phase margins. However, the generalization of frequency-domain methods to nonlinear systems is awkward. Differential equation models have proved a much better way of addressing nonlinearities overall.
Through the problem of determining conditions for the stability of systems with one nonlinear element, control theory witnessed a symbiosis between the stability theory of Lyapunov methods and frequency-domain techniques. The catalyst in this development was Popov's stability criterion.
In 1944, Lur'e and Postnikov [63] began to study the stability of systems of the form depicted in Figure 52. Such systems are assumed to be completely linearizable with the exception of a single nonlinear element, denoted by f(·). The operator W is assumed to be linear, time-invariant (LTI), while the nonlinear characteristic f(·) satisfies the following condition.
Definition: Function Class {F}. A function f(σ0) is said to satisfy f(σ0) ∈ {F} if it is a real, continuous, single-valued scalar function that satisfies
σ0·f(σ0) ≥ 0
(6.1)
i.e., f(σ0) is constrained to fall within the first and third quadrants of the σ0 – f(σ0) plane.
[Figure 52: feedback connection of an LTI block W(s) with input r = 0 and a single nonlinearity f(σ0) in the loop]
This problem became the starting point for many developments in stability theory. The problem of stability of a system with one nonlinear element also leads to the Aizerman conjecture, which states that if there exist k1, k2 such that k1 < f(σ)/σ < k2 for all σ ∈ R, and if the linear system obtained by replacing the nonlinear element with a linear gain k is asymptotically stable for all k ∈ [k1, k2], then the nonlinear system is also asymptotically stable in the large. This is also called absolute stability of the system shown in Figure 52.
The Romanian scientist Popov proposed an absolute stability criterion for nonlinear systems in 1959. It is a frequency-domain test for absolute stability, similar to the Nyquist method for linear systems; the stability of the nonlinear system is judged by computing only the frequency response of the system's linear part. This criterion can be formulated as follows:
Consider the feedback system described by (see Figure 52):
dx/dt = Ax + bu,  σ0 = cᵀx,  u = −f(σ0)
(6.2)
[Figure 53: the closed-loop ADRC system redrawn as a feedback connection of a linear part with a single nonlinearity f(e); the loop contains the nonlinearity, the 1/b gain, the plant, and the observer]
where the nonlinearity satisfies e·f(e) > 0 for e ≠ 0. Consider the first-order plant
ẋ = −ax + bu,  y = x
(6.3)
with a, b ∈ R⁺.
The LESO for the first-order plant in (6.3) is
ż1 = z2 + β1·e0 + bu
ż2 = β2·e0
(6.4)
where e0 = y − z1 is the observer error. With the control law
u = u0 − z2/b
(6.5)
the closed-loop system becomes
ẋ = −ax + bu0 − z2
ż1 = β1·e0 + bu0
ż2 = β2·e0
y = x
(6.6)
or, in matrix form,
[ẋ; ż1; ż2] = [−a 0 −1; β1 −β1 0; β2 −β2 0][x; z1; z2] + [b; b; 0]·u0,
y = [1 0 0][x; z1; z2]
(6.7)
The transfer function of the linear part of Figure 53 can be obtained as
G(s) = C(sI − A)⁻¹B = b[s(s + β1) + β2]/(s[s² + (β1 + a)s + aβ1 + β2])
94
1 = 0,
1
1
1
det( I A) = 0 2 = a 1 +
( a 1 ) 2 4 2 ,
2
2
2
1
1
1
2
3 = 2 a 2 1 2 (a 1 ) 4 2 .
Since a,1 and 2 are positive, Re(2) and Re(3) negative. Therefore the matrix
0
a
A = 1 1
2 2
1
0 is stable. Note that
0
ba( 2 + 12 2 )
Re G ( j ) =
q I mG ( j ) =
(6.8)
( 2 a 1 2 )2 + [ ( 1 + a) ]
(6.9)
( 2 a 1 2 )2 + [ ( 1 + a) ]
(6.10)
= Re G ( j ) q I mG ( j )
Combining (6.8), (6.9) and (6.10),
Re(1 + qj )G ( j ) =
(6.11)
Obviously, ( 2 a 1 2 ) 2 + [ ( 1 + a ) ] >0, for all R .
2
Let q =
a
22
ba 12
ba 2 qb12 2 ab 4 a 2 1b
+
+
+
2
2 2
2 2
2
(6.12)
Since the last three terms in the numerator of (6.12) are all positive, i.e.,
a²β1b/2 > 0,  qbβ1²ω² ≥ 0  and  abω⁴/(2β2) ≥ 0,
it follows that Re(1 + qjω)G(jω) ≥ 0 for all ω ∈ R if β1² ≥ β2/2.
That is, according to the Popov criterion, for the first-order ADRC system, the equilibrium trajectory x = 0 is globally asymptotically stable if the following conditions are satisfied:
1. The linear plant is asymptotically stable with a > 0;
2. The parameters of the ESO are positive, and β1² ≥ β2/2.
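The inequality above can also be checked numerically by evaluating Re[(1 + jqω)G(jω)] on a frequency grid with q = a/(2β2); the parameter values in the example are illustrative, not taken from any plant in the text.

```python
import numpy as np

def popov_min(a, b, beta1, beta2, w_max=1e4, n=100000):
    """Minimum of Re[(1 + j*q*w)G(jw)] over a frequency grid, for
    G(s) = b[s(s + beta1) + beta2]/(s[s^2 + (beta1 + a)s + a*beta1 + beta2])
    and the multiplier q = a/(2*beta2) used in the derivation."""
    q = a / (2.0 * beta2)
    w = np.linspace(1e-3, w_max, n)
    s = 1j * w
    G = b * (s * (s + beta1) + beta2) / (
        s * (s**2 + (beta1 + a) * s + a * beta1 + beta2))
    return float(np.min(np.real((1.0 + 1j * q * w) * G)))

# beta1**2 >= beta2/2 holds for beta1 = 100, beta2 = 2000, so the
# minimum over the grid should be nonnegative.
```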
CHAPTER VII
CONCLUDING REMARKS
A comparison study of three advanced observers, ESO, HGO, and SMO, is performed using simulations and hardware tests. The results show that the ESO performs better than the HGO and SMO for systems with uncertainties. The robustness of the NESO is inherent in its structure.
ADRC is a NESO-based nonlinear control strategy that can compensate for the nonlinear dynamics in a model-independent way. It has proved to be a powerful and effective controller. Due to the nonlinear functions used in ADRC, tuning the parameters to achieve the best performance is not straightforward. LADRC was used to simplify the design procedure of ADRC. In this dissertation, the effectiveness of LADRC is tested using experiments on motion control of a DC brushless motor. The experimental results reveal that LADRC offers the potential of improving the control precision and the quality of the control signal for the actuators. A comparison of LADRC, ADRC, and loop shaping design shows that, with the optimal tuning method, LADRC can achieve high performance.
The AMB problem is a good benchmark to test the control method because it is a
highly nonlinear plant with a periodic disturbance torque. Two methods were used for
dealing with this problem: NDP and LADRC. To show the advantages of the NDP
method, a traditional PP method is also used for comparison. Simulation results showed
that LADRC and NDP had better performance than the linear PP method. Their results also exceed those of all other control schemes [53, 54, 56, 58].
Based on the assumption that the observer is linear, a stability analysis of second-order ADRC is discussed using the Popov criterion. The system equilibrium at x = 0 is shown to be globally asymptotically stable if the following conditions are satisfied: the linear plant is asymptotically stable, and the parameters of the observer satisfy β1² − β2/2 > 0.
For future research, the design process of ADRC can be further investigated so that
the nonlinear gains can be easily tuned for optimal performance. The stability analysis of
high-order ADRC should also be explored.
REFERENCES
[1] Otto Mayr, The Origins of Feedback Control, The MIT Press, 1970.
[2] F. L. Lewis, Applied Optimal Control and Estimation, Prentice Hall, 1992.
[3] Shyh-Pyng Shue, R. K. Agarwal, and P. Shi, Nonlinear H-infinity method for
control of wing rock motions, Journal of Guidance, Control, and Dynamics, vol. 23,
no. 1, pp. 60-68, Jan.-Feb. 2000.
[4] M. R. James, Nonlinear H-infinity control: a stochastic perspective, Proc. of the
IFIP WG 7.2 International Conference, pp. 215-222, 1999.
[5] Jongguk Yim and Jong Hyeon Park, Nonlinear H-infinity control of robotic
manipulator, Proc. of the 1999 IEEE International Conference on Systems, Man, and
Cybernetics, vol. 2, pp. 866-871, 1999.
[6] Guo-Ping Lu, Yu-Fan Zheng, and Daniel W. C. Ho, Nonlinear robust H-infinity
control via dynamic output feedback, Systems & Control Letters, vol. 39, pp. 193-202, 2000.
[7] William S. Levine, Ed., The Control Handbook, CRC Press, 1996.
[8] Jean-Jacques E. Slotine and W. Li, Applied Nonlinear Control, Prentice-Hall, 1991.
[9] Wassim M. Haddad, Jerry L. Fausz, and VijaySekhar Chellaboina, A unification
between nonlinear-nonquadratic optimal control and integrator backstepping,
International Journal of Robust and Nonlinear Control, vol. 8, pp. 879-906, 1998.
[10] J. Kaloust and C. Ham, A nonlinear robust controller for a class of nonlinear
uncertain systems, Proc. of the ACC, vol. 6, p. 4071, 1999.
[11] Weiguo Wu, Huitang Chen, and Yuejuan Wang, A novel global tracking control
method for mobile robots, Proc. of the IEEE International Conference on Intelligent
Robots and Systems, October 17-21, 1999.
[12] Miroslav Krstic, Ioannis Kanellakopoulos, and Petar Kokotovic, Nonlinear and
Adaptive Control Design, John Wiley & Sons, Inc., 1995.
[13] Zhihua Qu, Robust control of nonlinear uncertain systems without generalized
matching conditions, IEEE Transactions on Automatic Control, vol. 40, no. 8,
August 1995.
[14] Zhihua Qu, Robust control of nonlinear uncertain systems under generalized
matching conditions, Automatica, vol. 29, no. 4, pp. 985-998, 1993.
[15] Zhihua Qu, Robust control of a class of nonlinear uncertain systems, IEEE
Transactions on Automatic Control, vol. 37, no. 9, September 1992.
[16] D. Luenberger, Observers for multivariable systems, IEEE Transactions on
Automatic Control, vol. 11, pp. 190-197, 1966.
[17] Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini, Feedback Control of
Dynamic Systems, Addison-Wesley Publishing Company, 1994.
[18] Guangren Duan and Steve Thompson, Separation principle for robust pole
assignment - An advantage of full-order state observer, Proc. of the 38th Conference
on Decision & Control, pp. 76-78, December 1999.
[19] Carla Seatzu and Alessandro Giua, Observer-controller design for gains via pole
placement and gain-scheduling, Proc. of the 6th IEEE Mediterranean Conference,
Alghero, Sardinia, Italy, June 1998.
[20] Gene F. Franklin, J. David Powell, and Michael Workman, Digital Control of
Dynamic Systems, Third Edition, Addison Wesley Longman, Inc., 1997.
[21] H.K. Khalil, High-gain observers in nonlinear feedback control, New Directions in
Nonlinear Observer Design (Lecture Notes in Control and Information Sciences),
vol. 244, pp. 249-268, 1999.
[22] F. Esfandiari and H.K. Khalil, Output feedback stabilization of fully linearizable
systems, Int. Journal of Control, vol. 56, pp. 1007-1037, 1992.
[23] Ahmed M. Dabroom and Hassan K. Khalil, Discrete-time implementation of
high-gain observers for numerical differentiation, Int. Journal of Control, vol. 72,
no. 17, pp. 1523-1537, 1999.
[24] Seungrohk Oh and Hassan K. Khalil, Output feedback stabilization using variable
structure control, Int. Journal of Control, vol. 62, no. 4, pp. 831-848, 1995.
[25] Gildas Besancon, Further results on high-gain observers for nonlinear systems,
Proc. of the 38th Conference on Decision & Control, AZ, pp. 2904-2902, December
1999.
[26] Eric Bullinger and Frank Allgower, An adaptive high-gain observer for nonlinear
systems, Proc. of the 36th Conference on Decision & Control, San Diego, CA, pp.
4348-4353, December 1997.
[27] Henrik Rehbinder and Xiaoming Hu, Nonlinear Pitch and Roll Estimation for
Walking Robots, Proc. of the IEEE International Conference on Robotics &
Automation, San Francisco, CA, April 2000.
[37] J. Han and Luilin Yuan, The Discrete Form of Tracking-Differentiator, Systems
Science and Mathematical Science, vol. 19, no. 3, pp. 268-273, 1999. (In Chinese)
[38] Manual for Model 220 Industrial Emulator/Servo Trainer, Educational Control
Products, Woodland Hills, CA 91367, 1995.
[39] J. Han, A class of extended state observers for uncertain systems, Control and
Decision, vol. 10, no. 1, pp. 85-88, 1995.
[40] Y. Hou, Z. Gao, F. Jiang, and B. T. Boulter, Active disturbance rejection control for
web tension regulation, Proc. of the 40th IEEE Conference on Decision and Control,
Orlando, FL, pp. 4974-4979, December 2001.
[41] Z. Gao, S. Hu, and F. Jiang, A novel motion control design approach based on
active disturbance rejection, Proc. of the 40th IEEE Conference on Decision and
Control, Orlando, FL, pp. 4877-4882, December 2001.
[42] Zhiqiang Gao, From Linear to Nonlinear Control Means: a Practical Progression,
ISA Emerging Technology Conference, September 12, 2001.
[43] Zhiqiang Gao, Yi Huang, and Jingqing Han, An Alternative Paradigm for Control
System Design, Proc. of the 40th IEEE Conference on Decision and Control,
Orlando, FL, pp. 4578-4585, December 2001.
[44] CIO-DAS16/F Users Manual, Computer Boards, Inc., 1994.
[45] Zhiqiang Gao, Scaling and Bandwidth-Parameterization Based Controller Tuning,
Proc. of the ACC, Denver, CO, June 4-6, 2003.
[46] M. Fujita, K. Hatake, and F. Matsumura, Loop Shaping based Robust Control of a
Magnetic Bearing, IEEE Control Systems Magazine, vol. 13, no. 4, pp. 57-65,
August 1993.
[54] T. Sugie and K. Fujimoto, Controller Design for an Inverted Pendulum based on
Approximate Linearization, Int. Journal of Robust and Nonlinear Control, vol. 8,
pp. 585-597, 1998.
[55] M.S. de Queiroz and D.M. Dawson, Nonlinear Control of Active Magnetic
Bearings: A Backstepping Approach, IEEE Trans. Control Systems Tech., vol. 4, no.
5, pp. 545-552, September 1996.
[56] M.S. de Queiroz, D.M. Dawson, and H. Canbolat, A Backstepping-Type Controller
for a 6-DOF Active Magnetic Bearing System, Proc. IEEE Conference on Decision
and Control, Kobe, Japan, pp. 3370-3375, Dec. 1996.
[57] M. S. de Queiroz, D. M. Dawson, and A. Suri, Nonlinear Control of a Large Gap
2-DOF Magnetic Bearing System Based on a Coupled Force Model, IEE Proc.
Control Theory and Applications, vol. 145, no. 3, pp. 269-276, May 1998.
[58] Carl R. Knospe, The Nonlinear Control Benchmark Experiment, Proc. of the
ACC, Chicago, IL, pp. 2134-2138, June 2000.
[59] J. Ghosh, D. Mukherjee, M. Baloh, and B. Paden, Nonlinear Control of a
Benchmark Beam Balance Experiment Using Variable Hyperbolic Bias, Proc. of the
ACC, Chicago, IL, June 2000.
[60] M. Krstic, I. Kanellakopoulos, and P.V. Kokotovic, Nonlinear and Adaptive Control
Design, John Wiley & Sons, New York, 1995.
[61] Zongli Lin and Carl Knospe, A Saturated High Gain Control for a Benchmark
Experiment, Proc. of the ACC, Chicago, IL, pp. 2644-2648, June 2000.
[62] V.M. Popov, Absolute Stability of Nonlinear Systems of Automatic Control,
Automation and Remote Control, vol. 22, pp. 857-875, February 1962.
[63] A. I. Lur'e and V. N. Postnikov, On the theory of stability of control systems,
Prikladnaya Matematika i Mekhanika, vol. 8, no. 3, 1944.
[64] Yi Huang, Kekang Xu, Jingqing Han, and James Lam, Flight Control Design Using
Extended State Observer and Non-smooth Feedback, Proc. of the 40th IEEE
Conference on Decision and Control, Orlando, FL, pp. 223-228, December 2001.
APPENDICES
/* LS7166 quadrature counter interface, Borland C, 3-11-93 */
#include <stdio.h>
#include <dos.h>
#include <conio.h>
/* addresses */
#define BASE      0X340
#define DATA1     (BASE+0)
#define CONTROL1  (BASE+1)
#define LOAD      (BASE+8)
/* LS7166 commands */
#define MASTER_RESET 0X20
#define QUAD_X1      0XC1  /* quadrature multiplier to 1 */
#define QUAD_X2      0XC2  /* quadrature multiplier to 2 */
#define QUAD_X4      0XC3  /* quadrature multiplier to 4 */
#define LATCH_CNTR   0X02  /* latch counter */
#define PRESET_CTR   0X08  /* preset counter */
/* state shared by the LS7166 routines */
static unsigned char status, bwt_old, cyt_old;
static unsigned long cntr_ol;

void init_7166(void) {
outportb(CONTROL1, MASTER_RESET);
outportb(CONTROL1, INPUT_SETUP);
outportb(CONTROL1, QUAD_X4);
outportb(CONTROL1, CNTR_RESET);
/* preset counter to 44 */
outportb(CONTROL1, ADDR_RESET);
/* reset addr */
outportb(DATA1, 0x0);
outportb(DATA1, 0x0);
outportb(DATA1, 0x080);
outportb(CONTROL1, PRESET_CTR);
/* preset to counter */
status = inportb(CONTROL1);
bwt_old = status & 0x01;
cyt_old = status & 0x02;
}
void soft_latch(void) {
outportb(CONTROL1, LATCH_CNTR);
/* counter to latch */
}
void hard_latch(void) {
outportb(LOAD, 0);
}
unsigned long read_7166(void) {
soft_latch();
status = inportb(CONTROL1);
outportb(CONTROL1, ADDR_RESET);
if ((status&0x01) != bwt_old) {
cntr_ol -= 0x01000000;
}
if ((status&0x02) != cyt_old) {
cntr_ol += 0x01000000;
}
bwt_old = status&0x01;
cyt_old = status&0x02;
return (cntr_ol);
}
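The borrow/carry bookkeeping in read_7166() extends the LS7166's 24-bit hardware count to a wider software count: a flag toggle since the previous read means the counter wrapped once. Isolated as a pure function (the name extend_count and the pointer interface are ours, for illustration), the logic is:

```c
#include <assert.h>

/* Extend a 24-bit count using the borrow (bit 0) and carry (bit 1)
   flags of the LS7166 status register, as read_7166() does. */
long extend_count(long extended, unsigned char status,
                  unsigned char *bwt_old, unsigned char *cyt_old)
{
    if ((status & 0x01) != *bwt_old)   /* borrow toggled: wrapped downward */
        extended -= 0x01000000L;
    if ((status & 0x02) != *cyt_old)   /* carry toggled: wrapped upward */
        extended += 0x01000000L;
    *bwt_old = status & 0x01;          /* remember flags for the next read */
    *cyt_old = status & 0x02;
    return extended;
}
```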
#include <stdio.h>
#include <dos.h>
#include <conio.h>
void init_8254(void) {
outportb(0x43, 0xBA);
outportb(0x42, 0x0);
outportb(0x42, 0x0);
}
long read_8254(void) {
long t1,t2,t;
outportb(0x43, 0x80);
t1 = (long)inportb(0x42);
t2 = (long)inportb(0x42);
t=t1+t2*256;
return (t);
}
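read_8254() returns the 16-bit down-counter of the 8254 timer, which is clocked at 1193180 Hz; the control loops below turn two successive reads into the sample period h, allowing for one wraparound. That conversion can be sketched as follows (elapsed_seconds is our illustrative name):

```c
#include <assert.h>
#include <math.h>

#define PIT_HZ 1193180.0   /* 8254 input clock frequency, Hz */

/* Seconds elapsed between two reads of the 16-bit down-counter,
   tolerating one wraparound, exactly as the control loops compute h. */
double elapsed_seconds(long t_previous, long t_current)
{
    if (t_current > t_previous)        /* down-counter wrapped past zero */
        t_previous += 65536L;
    return (double)(t_previous - t_current) / PIT_HZ;
}
```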
/* DAC program */
/* Borland C source code */
#include <stdio.h>
#include <dos.h>
#include <conio.h>
/* addresses */
#define DAC_BASE
0X300
void DAC_out(double voltage) {
double D;
int DH,DL;
if(voltage>5.0) voltage=5.0;
if(voltage<0.0) voltage=0.0;
D=4095*voltage/5.0;
DH=D/16;
DL=D-DH*16;
DL=DL*16;
outport(DAC_BASE+6, DL);
outport(DAC_BASE+7, DH);
}
void init_DAC(void) {
DAC_out(2.5);
}
/****************************************************************
 * Loop shaping control for servo motor with trapezoid profile,
 * December 2002
 ****************************************************************/
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <dos.h>
#include <conio.h>
#define pi 3.141592653589793
void main(){
double a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12;
double u=0,u_1=0,u_2=0,u_3=0,u_4=0;            /* control signal */
double e=0,e_1=0,e_2=0,e_3=0,e_4=0;            /* error signal */
double v0=1,V_max=14,t5,t=0,h=0,out;           /* parameters of trapezoid */
double kp=0.7, wc=20;                          /* parameters of controller */
double feedback = 0;
FILE *fp;
int i, I=10,k=0,n=15,first=1;
int K=1000;
char buff[10];
long t_previous,t_current,position;
int time_count;
double *Time;
double *Feedback;
double *Control_out;
double *error;
double *Control;
fp=fopen("loop_trz.dat","w");
if (fp==NULL) {printf("Can't create the output file loop_trz.dat!\n");exit(1);}
printf("\n Input how many revolutions to go [default 1]: ");
gets(buff);
if (strlen(buff) == 0) v0=1;
else
v0 = atof(buff);
printf("\n Input the max speed (rad/sec) [default 14]: ");
gets(buff);
Control_out=(double *)malloc(K*sizeof(double));
if (!Control_out) {printf(" Not enough memory for Control_out array!\n");
exit(1);}
Feedback=(double *)malloc(K*sizeof(double));
if (!Feedback) {printf(" Not enough memory for Feedback array!\n");
exit(1);}
printf("Program running...\n");
printf(" \n");
printf("Press any key to exit.\n");
init_DAC();
init_7166();
init_8254();
t5 = 3*pi*v0/V_max;
a1=4*wc+4*kp*wc*wc*h+kp*kp*wc*wc*wc*h*h;
a2=2*kp*kp*wc*wc*wc*h*h-8*wc;
a3=kp*kp*wc*wc*wc*h*h+4*wc-4*kp*h*wc*wc;
a4=4+2*wc*h;
a5=4-2*h*wc;
a6=200*wc*wc*h+141*wc*wc*h*h;
a7=282*wc*wc*h*h;
a8=141*wc*wc*h*h-200*wc*wc*h;
a9=4+40*h*wc+100*wc*wc*h*h;
a10=200*wc*wc*h*h-8;
a11=100*wc*wc*h*h-40*h*wc+4;
t_previous=read_8254();
while(!kbhit()) {
for (i=0;i<I;i++) {
delay(n);
t_current=read_8254();
if (t_current>t_previous) t_previous+=65536;
time_count=(int)(t_previous-t_current);
h=(double)time_count/1193180.0;
if (first) h=0.001;
t_previous=t_current;
t=t+h;
position=read_7166()-(unsigned long)0x00800000;
feedback=(double)(position*2*pi/16000.0);
if (first) {
first=0;
continue;
}
a12=(a1*a6*e+(a1*a7+a2*a6)*e_1+(a1*a8+a2*a7+a3*a6)*e_2+(a2*a8+a3*a7)*e_3+a3*a8*e_4)/23.24;
u=(a12-a5*a11*u_4-(a5*a10-8*a11)*u_3-(a11*a4-8*a10+a5*a9)*u_2-(a4*a10-8*a9)*u_1)/a4/a9;
e_4=e_3;
e_3=e_2;
e_2=e_1;
e_1=e;
u_4=u_3;
u_3=u_2;
u_2=u_1;
u_1=u;
out=5.0*u/7.0+2.5;
DAC_out(out);
} /* for i loop */
if ( k < K ) {
Time[k]=t;
if ( k == 0) Control[k] = 0;
else Control[k] = (feedback - Feedback[k-1])/h/10;
/*
Control[k]=u0;
*/
Control_out[k]=u;
Feedback[k]=feedback;
error[k] = e;
++k;
}
else
goto exit1;
} /* for k loop */
exit1:
init_DAC();
for(k=0;k<K;k++){
fprintf(fp,"%f %f %f %f %f\n",Time[k],Control[k],
Control_out[k],Feedback[k],error[k]);}
fclose(fp);
printf("Program completed !\n");
}
/* end of main */
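The coefficients a1 through a11 in the listing above come from discretizing the continuous loop-shaping compensator with the bilinear (Tustin) transform s <- (2/h)(z-1)/(z+1); note that a4 = 4+2*wc*h and a5 = 4-2*wc*h carry the (2 +/- wc*h) factors this substitution produces. As a minimal sketch of the method (a first-order low-pass of our own choosing, not the thesis compensator), discretizing G(s) = wc/(s+wc) gives the difference equation below:

```c
#include <assert.h>
#include <math.h>

/* One step of the Tustin-discretized low-pass G(s) = wc/(s+wc):
   y[k] = ((2 - wc*h)*y[k-1] + wc*h*(u[k] + u[k-1])) / (2 + wc*h).
   *y1 and *u1 hold the previous output and input. */
double lowpass_step(double *y1, double *u1, double u, double wc, double h)
{
    double y = ((2.0 - wc*h)*(*y1) + wc*h*(u + *u1)) / (2.0 + wc*h);
    *u1 = u;
    *y1 = y;
    return y;
}
```

A quick sanity check of the discretization is its unit DC gain: driven with a constant input, the output settles to the same constant.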
/*********************************************************************
 * ESO based state feedback control (LADRC) for servo motor using
 * trapezoid profile, August 2002
 *********************************************************************/
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <dos.h>
#include <conio.h>
#define pi 3.141592653589793
void main(){
double z1=0,z2=0,z3=0;
double v1,v2,v3;
double z4=0,z5=0;
double v4,v5;
double u0,u=0,out;
/* control signal */
double t=0,h=0;
int K=1000;
char buff[10];
long t_previous,t_current,position;
int time_count;
double *Time;
double *Feedback;
double *Control_out;
double *error;
double *error_dot;
double r=20,rr=20,v0=0.125*2*pi,V_max,t1;      /* parameters of trapezoid */
double K_P,K_D,b0,e,fe,e1=0,e2=0,feedback=0;
double beta_01,alpha_01,delta_01,beta_02,alpha_02,delta_02,beta_03,alpha_03,delta_03;
int i,I=10,k=0,n=15;
FILE *fp;
double *Control;
fp=fopen("parasw.dat","r");
if (fp==NULL) {printf("Can't open the parameters file parasw.dat!\n");exit(1);}
fscanf(fp,"%lf %lf\n",&r,&v0);
fscanf(fp,"%lf\n",&K_P);
fscanf(fp,"%lf\n",&K_D);
fscanf(fp,"%lf %lf %lf\n",&beta_01,&alpha_01, &delta_01);
fscanf(fp,"%lf %lf %lf\n",&beta_02,&alpha_02, &delta_02);
fscanf(fp,"%lf %lf %lf\n",&beta_03,&alpha_03, &delta_03);
fscanf(fp,"%lf",&b0);
fscanf(fp,"%lf",&rr);
fclose(fp);
fp=fopen("eso_trz.dat","w");
if (fp==NULL) {printf("Can't create the output file eso_trz.dat!\n");exit(1);}
printf("\n Input how many revolutions to go [default 1]: ");
gets(buff);
if (strlen(buff) == 0) v0=1;
else
v0 = atof(buff);
printf("\n Input the max speed (rad/sec) [default 14]: ");
gets(buff);
if (strlen(buff) == 0) V_max= 14.0;
else
V_max = atof(buff);
r=fabs(r);
if (r>rr) r=rr;
t1 = 3*pi*v0/V_max;
printf("Program running...\n");
printf(" \n");
printf("Press any key to exit.\n");
Time = (double *)malloc(K*sizeof(double));
if (!Time) {printf(" Not enough memory for Time array!\n");
exit(1);}
Control_out=(double *)malloc(K*sizeof(double));
if (!Control_out) {printf(" Not enough memory for Control_out array!\n");
exit(1);}
Control = (double *)malloc(K*sizeof(double));
if (!Control) {printf(" Not enough memory for Control array!\n");
exit(1);}
Feedback=(double *)malloc(K*sizeof(double));
if (!Feedback) {printf(" Not enough memory for Feedback array!\n");
exit(1);}
error = (double *) malloc(K*sizeof(double));
if (!error) {printf(" Not enough memory for error array!\n");
exit(1);}
error_dot = (double *) malloc(K*sizeof(double));
if (!error_dot) {printf(" Not enough memory for error_dot array!\n");
exit(1);}
delay(n);
init_DAC();
init_7166();
init_8254();
t_previous=read_8254();
while(!kbhit()) {
for (i=0;i<I;i++) {
t=t+h;
/* ESO */
position=read_7166()-(unsigned long)0x00800000;
feedback=(double)(position*2*pi/16000.0);
e=z1-feedback;
fe=fal(e,alpha_01,delta_01);
v1=z1+h*(z2-beta_01*fe);
fe=fal(e,alpha_02,delta_02);
v2=z2+h*(z3-beta_02*fe+b0*u);
fe=fal(e,alpha_03,delta_03);
v3=z3-h*beta_03*fe;
if (t<= t1/3) v4 = 3*V_max/t1/2*t*t;
else
if (t<= 2*t1/3) v4 = V_max * t1/6 + V_max * (t-t1/3);
else
if (t<=t1) v4 = V_max * t1/2
+ ((t-2*t1/3)*t1-(t*t-4*t1*t1/9)/2)*3*V_max/t1;
else
v4 = 2*V_max*t1/3;
if (h == 0) v5 =0;
else
v5 = (v4 - z4)/h;
z4=v4; z5=v5;
e1=z4-z1;
e2=-z2;
u0=K_P*e1+K_D*e2;
u=u0-z3/b0;
out=5.0*u/7.0+2.5;
DAC_out(out);
delay(n);
t_current=read_8254();
if (t_current>t_previous) t_previous+=65536;
time_count=(int)(t_previous-t_current);
h=(double)time_count/1193180.0;
t_previous=t_current;
z1=v1;z2=v2;z3=v3;
} /* for i loop */
if ( k < K ) {
Time[k]=t;
if ( k == 0) Control[k] = 0;
else Control[k] = (feedback - Feedback[k-1])/h/10;
Control_out[k]=u;
Feedback[k]=feedback;
error[k] = z3;
error_dot[k] = z5;
++k;
}
else
goto exit1;
}
exit1:
init_DAC();
/* for k loop */
for(k=0;k<K;k++){
fprintf(fp,"%f %f %f %f %f %f\n",Time[k],Control[k],
Control_out[k],Feedback[k],error[k],error_dot[k]);}
fclose(fp);
printf("Program completed !\n");
} /* end of main */
/*************************************************
 * ADRC for servo motor using trapezoid profile, May 2000
 *************************************************/
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <dos.h>
#include <conio.h>
#define pi 3.141592653589793
void main(){
double z1=0,z2=0,z3=0;
double v1,v2,v3;
double z4=0,z5=0;
double v4,v5;
double u0,u=0,out;
/* control signal */
double t=0,h=0;
/* parameters of ESO */
double r=20,rr=20,v0=0.125*2*pi;
double V_max, t1;
/* parameters of trapezoid */
double e,fe,fe1,fe2;
double e1=0,e2=0;
double feedback=0;
FILE *fp;
int i,I=10,k=0,n=15;
int K=1000;
char buff[10];
long t_previous,t_current,position;
int time_count;
double *Time;
double *Feedback;
double *Control_out;
double *error;
double *error_dot;
double *Control;
double K_P,alpha_P,delta_P,K_D,alpha_D,delta_D,b0;
double beta_01,alpha_01,delta_01,beta_02,alpha_02,delta_02,beta_03,alpha_03,delta_03;
fp=fopen("paras1.dat","r");
if (fp==NULL) {printf("Can't open the parameters file paras1.dat!\n");exit(1);}
fscanf(fp,"%lf %lf\n",&r,&v0);
fscanf(fp,"%lf %lf %lf\n",&K_P, &alpha_P, &delta_P);
fscanf(fp,"%lf %lf %lf\n",&K_D, &alpha_D, &delta_D);
fscanf(fp,"%lf %lf %lf\n",&beta_01,&alpha_01, &delta_01);
fscanf(fp,"%lf %lf %lf\n",&beta_02,&alpha_02, &delta_02);
fscanf(fp,"%lf %lf %lf\n",&beta_03,&alpha_03, &delta_03);
fscanf(fp,"%lf",&b0);
fscanf(fp,"%lf",&rr);
fclose(fp);
fp=fopen("adrc_trz.dat","w");
if (fp==NULL) {printf("Can't create the output file adrc_trz.dat!\n");exit(1);}
printf("\n Input how many revolutions to go [default 1]: ");
gets(buff);
if (strlen(buff) == 0) v0=1;
else
v0 = atof(buff);
printf("\n Input the max speed (rad/sec) [default 14]: ");
gets(buff);
if (strlen(buff) == 0) V_max= 14.0;
else
V_max = atof(buff);
r=fabs(r);
if (r>rr) r=rr;
t1 = 3*pi*v0/V_max;
printf("Program running...\n");
printf(" \n");
printf("Press any key to exit.\n");
Control_out=(double *)malloc(K*sizeof(double));
if (!Control_out) {printf(" Not enough memory for Control_out array!\n");
exit(1);}
Feedback=(double *)malloc(K*sizeof(double));
if (!Feedback) {printf(" Not enough memory for Feedback array!\n");
exit(1);}
delay(n);
init_DAC();
init_7166();
init_8254();
t_previous=read_8254();
while(!kbhit()) {
for (i=0;i<I;i++) {
t=t+h;
/* ESO */
position=read_7166()-(unsigned long)0x00800000;
feedback=(double)(position*2*pi/16000.0);
e=z1-feedback;
fe=fal(e,alpha_01,delta_01);
v1=z1+h*(z2-beta_01*fe);
fe=fal(e,alpha_02,delta_02);
v2=z2+h*(z3-beta_02*fe+b0*u);
fe=fal(e,alpha_03,delta_03);
v3=z3-h*beta_03*fe;
if (h == 0) v5 =0;
else
v5 = (v4 - z4)/h;
z4=v4; z5=v5;
e1=z4-z1;
e2=z5-z2;
fe1=fal(e1,alpha_P,delta_P);
fe2=fal(e2,alpha_D,delta_D);
u0=K_P*fe1+K_D*fe2;
u=u0-z3/b0;
out=5.0*u/7.0+2.5;
DAC_out(out);
delay(n);
t_current=read_8254();
if (t_current>t_previous) t_previous+=65536;
time_count=(int)(t_previous-t_current);
h=(double)time_count/1193180.0;
t_previous=t_current;
z1=v1;z2=v2;z3=v3;
} /* for i loop */
if ( k < K ) {
Time[k]=t;
if ( k == 0) Control[k] = 0;
else Control[k] = (feedback - Feedback[k-1])/h/10;
Control_out[k]=u;
Feedback[k]=feedback;
error[k] = z3;
error_dot[k] = z5;
++k;
}
else
goto exit1;
} /* end of while loop */
exit1:
init_DAC();
for(k=0;k<K;k++){
fprintf(fp,"%f %f %f %f %f %f\n",Time[k],Control[k],
Control_out[k],Feedback[k],error[k],error_dot[k]);}
fclose(fp);
printf("Program completed !\n");
} /* end of main */

double sature(double x, double delta) {
if (x>delta) return(1.0);
else if (x<-delta) return(-1.0);
else return(x/delta);
}
double fhan(double x1, double x2, double u0, double r, double h) {
double d,d0,y1,a0,a1;
d=r*h;
d0=d*h;
y1=x1-u0+h*x2;
a0=sqrt(d*d+8.0*r*fabs(y1));
if (y1>d0) a1=x2+(a0-d)/2.0;
else if (y1<-d0) a1=x2-(a0-d)/2.0;
else a1=x2+y1/h;
return(-r*sature(a1,d));
}
/***********************************************************************
 * Sliding-mode observer based state feedback control for servo motor
 * using trapezoid profile, August 2002
 ***********************************************************************/
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <dos.h>
#include <conio.h>
#define pi 3.141592653589793
void main(){
double z1=0,z2=0;
double v1,v2;
double z4=0,z5=0;
double v4,v5;
double u=0,out;
/* control signal */
double t=0,h=0;
/* parameters of SMO */
double beta_01=7, alpha_01= 0.5, beta_02= 7.8, alpha_02=15;
double r=20,rr=20,v0=0.125*2*pi;
double V_max, t1;
/* parameters of trapezoid */
double b0=25,K_P,K_D;
double e,fe,fe1,fe2;
double e1=0,e2=0;
double feedback=0;
FILE *fp;
int i,I=10,k=0,n=15;
int K=1000;
char buff[10];
long t_previous,t_current,position;
int time_count;
double *Time;
double *Feedback;
double *Control_out;
double *error;
double *error_dot;
double *Control;
fp=fopen("paraslid.dat","r");
if (fp==NULL) {printf("Can't open the parameters file paraslid.dat!\n");exit(1);}
fscanf(fp,"%lf %lf\n",&r,&v0);
fscanf(fp,"%lf\n",&K_P);
fscanf(fp,"%lf\n",&K_D);
fscanf(fp,"%lf %lf\n",&beta_01,&alpha_01);
fscanf(fp,"%lf %lf\n",&beta_02,&alpha_02);
fscanf(fp,"%lf",&b0);
fscanf(fp,"%lf",&rr);
fclose(fp);
printf(" r= %lf\n", r);
printf("v0= %lf\n", v0);
printf(" beta_01= %lf\n", beta_01);
printf("alpha_01= %lf\n", alpha_01);
printf("beta_02 = %lf\n", beta_02);
printf("alpha_02= %lf\n", alpha_02);
fp=fopen("slid_trz.dat","w");
if (fp==NULL) {printf("Can't create the output file slid_trz.dat!\n");exit(1);}
printf("\n Input how many revolutions to go [default 1]: ");
gets(buff);
if (strlen(buff) == 0) v0=1;
else
v0 = atof(buff);
printf("\n Input the max speed (rad/sec) [default 14]: ");
gets(buff);
if (strlen(buff) == 0) V_max= 14.0;
else
V_max = atof(buff);
r=fabs(r);
if (r>rr) r=rr;
t1 = 3*pi*v0/V_max;
printf("Program running...\n");
printf(" \n");
printf("Press any key to exit.\n");
Control_out=(double *)malloc(K*sizeof(double));
if (!Control_out) {printf(" Not enough memory for Control_out array!\n");
exit(1);}
Feedback=(double *)malloc(K*sizeof(double));
if (!Feedback) {printf(" Not enough memory for Feedback array!\n");
exit(1);}
error = (double *) malloc(K*sizeof(double));
if (!error) {printf(" Not enough memory for error array!\n");
exit(1);}
error_dot = (double *) malloc(K*sizeof(double));
if (!error_dot) {printf(" Not enough memory for error_dot array!\n");
exit(1);}
delay(n);
init_DAC();
init_7166();
init_8254();
t_previous=read_8254();
while(!kbhit()) {
for (i=0;i<I;i++) {
t=t+h;
position=read_7166()-(unsigned long)0x00800000;
feedback=(double)(position*2*pi/16000.0);
e=z1-feedback;
fe=sgn(e);
v1=z1+h*(z2-beta_01*e-alpha_01*fe);
v2=z2+h*(-1.41*z2-beta_02*e+b0*u-alpha_02*fe);
if (t<= t1/3) v4 = 3*V_max/t1/2*t*t;
else
if (t<= 2*t1/3) v4 = V_max * t1/6 + V_max * (t-t1/3);
else
if (t<=t1) v4 = V_max * t1/2
+ ((t-2*t1/3)*t1-(t*t-4*t1*t1/9)/2)*3*V_max/t1;
else
v4 = 2*V_max*t1/3;
if (h == 0) v5 =0;
else
v5 = (v4 - z4)/h;
z4=v4; z5=v5;
e1=z4-z1;
e2=-z2;
u=K_P*e1+K_D*e2;
out=5.0*u/7.0+2.5;
DAC_out(out);
delay(n);
t_current=read_8254();
if (t_current>t_previous) t_previous+=65536;
time_count=(int)(t_previous-t_current);
h=(double)time_count/1193180.0;
t_previous=t_current;
z1=v1;z2=v2;
/*printf("position= %ld\n", position);*/
} /* for i loop */
if ( k < K ) {
Time[k]=t;
if ( k == 0) Control[k] = 0;
else Control[k] = (feedback - Feedback[k-1])/h/10;
Control_out[k]=u;
Feedback[k]=feedback;
++k;
}
else
goto exit1;
}
exit1:
/* for k loop */
init_DAC();
for(k=0;k<K;k++){
fprintf(fp,"%f %f %f %f\n",Time[k],Control[k],
Control_out[k],Feedback[k]);}
fclose(fp);
printf("Program completed !\n");
}
int sgn(double x) {
if(x>0) return(1);
else if(x<0) return(-1);
else return(0);
}
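For reference, one Euler step of the sliding-mode observer in the listing above can be written as a standalone function (smo_step is our wrapper name; the gains and the -1.41*z2 damping term follow the listing):

```c
#include <assert.h>
#include <math.h>

/* One Euler step of the second-order sliding-mode observer: z1 tracks
   the measured position y, z2 its derivative; u is the control input
   and b0 the input gain. */
void smo_step(double *z1, double *z2, double y, double u, double h,
              double beta01, double alpha01,
              double beta02, double alpha02, double b0)
{
    double e = *z1 - y;                    /* observer output error */
    double s = (e > 0.0) - (e < 0.0);      /* sgn(e) */
    double v1 = *z1 + h*(*z2 - beta01*e - alpha01*s);
    double v2 = *z2 + h*(-1.41*(*z2) - beta02*e + b0*u - alpha02*s);
    *z1 = v1;
    *z2 = v2;
}
```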
NOMENCLATURE (AMB BENCHMARK)

Signals
  u_i, i = 1, 2
  f_i, i = 1, 2
  0.00178 radian
  y, measured by sensor, 10 mils
  I_i

Nonlinearities
  p(.)
  h(.)

Constants
  J = 0.00967 kg-m^2
  K = 5724 mils/rad
  V_s
  a = 0.0265, b = 1.053, c = 5.87 x 10^9