
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS, VOL. CAS-21, NO. 1, JANUARY 1974

Zeros and Poles of Matrix Transfer Functions and Their Dynamical Interpretation

CHARLES A. DESOER AND JERRY D. SCHULMAN

Abstract-The given rational matrix transfer function H(·) is viewed as a network function of a multiport. The $n_o \times n_i$ matrix $H(s)$ is factored into $D_l(s)^{-1}N_l(s) = N_r(s)D_r(s)^{-1}$, where $D_l(\cdot)$, $N_l(\cdot)$, $N_r(\cdot)$, and $D_r(\cdot)$ are polynomial matrices of appropriate size, with $D_l(\cdot)$ and $N_l(\cdot)$ left coprime and $N_r(\cdot)$ and $D_r(\cdot)$ right coprime. A zero of H(·) is defined to be a point z where the local rank of $N_l(\cdot)$ drops below the normal rank. The theorems make precise the intuitive concept that a multiport blocks the transmission of signals proportional to $e^{zt}$ if and only if z is a zero of H(·). We show that p is a pole of H(·) if and only if some singular input creates a zero-state response of the form $re^{pt}$, for $t > 0$. The order m of the zero z is similarly characterized. Although these results have state-space interpretations, they are derived by purely algebraic techniques, independently of state-space techniques. Consequently, with appropriate modifications, these results apply to the sampled-data case.

INTRODUCTION

MANY RESEARCHERS have discussed the concept of zeros of matrix transfer functions. Some have defined zeros of multivariable systems to be zeros of the scalar rational functions which are elements of the matrix transfer function [15]-[17]. Except for the special case of a diagonal matrix transfer function, such zeros have nothing to do with the concept of zeros to be introduced here. Kwakernaak and Sivan [26] have defined zeros in terms of the determinant of the matrix transfer function. However, their class of zeros is larger than the class of zeros needed to discuss the transmission properties of multiports presented in this paper. Rosenbrock [3], [19], Moore and Silverman [22], and Barnett [25] define zeros of rational matrices in terms of the Smith-McMillan form.

Our purpose is to show that zeros of matrix transfer functions can be defined in a way quite analogous to the scalar case [12], and that the zeros thus defined have analogous characterizations. We define zeros in terms of a general factorization of the rational matrix H(s) into a product of a polynomial matrix and the inverse of another polynomial matrix [1]-[6]. Such a factorization has been successfully used in realization problems [4]-[6], in design problems [4]-[7], in the study of the cancellation problem in feedback systems [8], and in deriving necessary and sufficient conditions for the input-output stability of an open-loop unstable distributed multivariable feedback system [9]. In order to define the concept of order of a zero, we have to go back to the more detailed structural information provided by the Smith-McMillan form factorization, which is a special case of the general factorization mentioned above. Theorems I and II characterize the transmission properties of the zeros of H(s). Theorem III does the same thing for the poles. Theorem IV states that if z is a zero of order m of H(s), it is a pole of order m of a generalized inverse $H^+(s)$. Theorem V characterizes the properties of a zero of order m. The proofs are relegated to the Appendix.

Manuscript received November 10, 1972; revised May 15, 1973. This work was supported in part by the Joint Services Electronics Program under Contract F44620-71-C-0087 and in part by the National Science Foundation under Grant GK-10656X2. The authors are with the Department of Electrical Engineering and Computer Sciences and the Electronics Research Laboratory, University of California, Berkeley, Calif. 94720.

Notation and Preliminaries ([1], [3], [6], [10], [11])

Let R (C) denote the field of real (complex, respectively) numbers. Let $\bar{C} = C \cup \{\infty\}$. Let R[s] (R(s)) be the set of all polynomials (rational functions, respectively) in the complex variable s with real coefficients. Let $R[s]^{p \times q}$ [$R(s)^{p \times q}$] be the set of all $p \times q$ matrices with elements in R[s] [R(s), respectively]. Let N and D be matrices with elements in R[s]; a matrix M is said to be a common left divisor of N and D iff there exist matrices $\bar{N}$ and $\bar{D}$ such that $N = M\bar{N}$ and $D = M\bar{D}$, where M, $\bar{N}$, and $\bar{D}$ have elements in R[s]; both N and D are then said to be right multiples of M. A matrix L with elements in R[s] is said to be a greatest common left divisor (GCLD) of N and D iff 1) it is a common left divisor of N and D, and 2) it is a right multiple of every common left divisor of N and D. When a GCLD L is unimodular (i.e., det L = constant ≠ 0), the polynomial matrices N and D are said to be left coprime. We define similarly a greatest common right divisor (GCRD) and right coprime. Two important facts should be noted [3, p. 71], [10, p. 35].

1) $N_l(\cdot) \in R[s]^{n_o \times n_i}$ and $D_l(\cdot) \in R[s]^{n_o \times n_o}$ are left coprime if and only if there are polynomial matrices $P(\cdot) \in R[s]^{n_i \times n_o}$ and $Q(\cdot) \in R[s]^{n_o \times n_o}$ such that

$N_l(s)P(s) + D_l(s)Q(s) = I_{n_o}$, for all $s \in C$.  (1)

2) Similarly, for right coprime matrices $N_r(\cdot)$ and $D_r(\cdot)$, (1) becomes

$\tilde{P}(s)N_r(s) + \tilde{Q}(s)D_r(s) = I_{n_i}$, for all $s \in C$  (2)

where $\tilde{P}(\cdot)$ and $\tilde{Q}(\cdot)$ are polynomial matrices of appropriate size.
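As an illustration of fact 1), here is a minimal sketch in Python with sympy. It uses the diagonal pair $N_l(s) = \mathrm{diag}(s-1, s+1)$, $D_l(s) = \mathrm{diag}(s+1, s+2)$ that reappears in the Example below; for a diagonal pair, identity (1) decouples into scalar Bezout identities, so the construction shown is special to this case and is not a general-purpose algorithm.

```python
import sympy as sp

s = sp.symbols('s')

# Left factorization pair of the diagonal Example used later:
Nl = sp.diag(s - 1, s + 1)
Dl = sp.diag(s + 1, s + 2)

# For diagonal matrices, (1) decouples into scalar identities
# p_i(s) n_i(s) + q_i(s) d_i(s) = 1, solved by the extended
# Euclidean algorithm (sympy's gcdex).
P_diag, Q_diag = [], []
for i in range(2):
    p_i, q_i, h = sp.gcdex(Nl[i, i], Dl[i, i], s)
    assert h == 1            # diagonal entries are coprime, so gcd = 1
    P_diag.append(p_i)
    Q_diag.append(q_i)

P, Q = sp.diag(*P_diag), sp.diag(*Q_diag)
# Bezout identity (1): N_l(s) P(s) + D_l(s) Q(s) = I
assert sp.expand(Nl * P + Dl * Q) == sp.eye(2)
```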


ZEROS OF H(·) ∈ $R(s)^{n_o \times n_i}$

Let $n_o$ ($n_i$) denote the number of outputs (inputs, respectively) of the possibly rectangular transfer-function matrix H(s). We consider the following polynomial matrix factorizations of H(s):

$H(s) = D_l(s)^{-1}N_l(s) = N_r(s)D_r(s)^{-1}$  (3)

where $N_l$, $N_r$, $D_l$, and $D_r$ are polynomial matrices; $N_l$ and $N_r$ are $n_o \times n_i$; and $D_l$ and $D_r$ are square, of size $n_o \times n_o$ and $n_i \times n_i$, respectively. Furthermore,

$N_l(s)$ and $D_l(s)$ are left coprime; $N_r(s)$ and $D_r(s)$ are right coprime.  (4)

Algorithmic methods for obtaining $N_l$, $D_l$, $N_r$, and $D_r$ are known [3]-[6], [10].

Observation

$p \in C$ is a pole of H(·) if and only if det $D_l(p) = 0$, equivalently det $D_r(p) = 0$. This follows from (1) and (2). Indeed, for all $s \in C$, we have

$H(s)P(s) + Q(s) = D_l(s)^{-1}$, $\tilde{P}(s)H(s) + \tilde{Q}(s) = D_r(s)^{-1}$.  (5)
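Continuing the sympy sketch above, the Observation can be checked directly for that pair: the poles of H(·) are the roots of det $D_l$ (here $D_r = D_l$, since H is diagonal).

```python
# Poles of H are the roots of det D_l (equivalently det D_r):
print(sp.solve(sp.det(Dl), s))       # [-2, -1]
```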

Definition [14]

1) For any $z \in C$, the rank of $N_l(z)$ is called the local rank of $N_l(\cdot)$ at z and is denoted by $\rho_{N_l}(z)$.¹

2) $\max_{z \in C} \rho_{N_l}(z) \triangleq \bar{\rho}_{N_l}$ is called the normal rank of $N_l(\cdot)$. Similarly, we denote the local rank of $N_r(\cdot)$ at z by $\rho_{N_r}(z)$ and the normal rank of $N_r(\cdot)$ by $\bar{\rho}_{N_r}$.

¹The local rank of N(·) at z is simply the rank of the matrix N(z), whose elements are complex numbers. The normal rank of N(·) is, in fact, the rank of the matrix N(·) provided that, when we calculate minors, we view their elements as members of C[s]; equivalently, in determining linear independence, we view the rows as elements of the module $(C[s])^{n_i}$ over C[s].

Case I

Let $n_o \geq n_i$: the number of outputs is larger than or equal to the number of inputs. We rule out the case where the normal rank $\bar{\rho}_{N_l}$ of $N_l(\cdot)$ is smaller than $n_i$; physically, it would mean that H(·) has only $\bar{\rho}_{N_l} < n_i$ effective inputs; equivalently, there is a polynomial precompensator $P(\cdot) \in C[s]^{n_i \times n_i}$ of normal rank $n_i - \bar{\rho}_{N_l} \geq 1$ such that $H(s)P(s)\hat{u}(s) = \theta_{n_o}$ for all possible inputs $\hat{u}(\cdot)$. By analogy with the scalar case, we have the following definition.

Definition I

If $n_o \geq n_i$, $z \in C$ is called a zero of H(·) ∈ $R(s)^{n_o \times n_i}$ iff $\rho_{N_l}(z) < \bar{\rho}_{N_l} = n_i$.

Example

$H(s) = \mathrm{diag}\,[(s - 1)/(s + 1), (s + 1)/(s + 2)]$. Then +1 and -1 are zeros of H(·); -1 and -2 are poles of H(·). Note that det $H(s) = (s - 1)/(s + 2)$ only has one zero: +1. Also, det $N_l(s) = (s - 1)(s + 1)$ and det $D_l(s) = (s + 1)(s + 2)$ are not coprime.
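The Example can be checked mechanically: in sympy, Matrix.rank() on a symbolic matrix returns the rank over the field of rational functions (the normal rank), while the rank after substituting a number gives the local rank. A small sketch:

```python
import sympy as sp

s = sp.symbols('s')
Nl = sp.diag(s - 1, s + 1)            # numerator matrix of the Example

normal_rank = Nl.rank()               # rank over R(s): here 2

def local_rank(z):
    return Nl.subs(s, z).rank()       # rank of the constant matrix N_l(z)

# z is a zero iff the local rank drops below the normal rank (Definition I):
print([z for z in (1, -1, -2) if local_rank(z) < normal_rank])   # [1, -1]
```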

The transmission properties of the zeros of H(·), when $n_o \geq n_i$, are quite analogous to the well-known ones of the scalar case [12]. Roughly speaking, z is a zero of H(·) if and only if it completely blocks the transmission of some input proportional to $e^{zt}$. To the exponential input we add some δ-functions and some derivatives of δ in order to reach the steady state at 0+. The precise statement is given by the following theorem.

Theorem I

Let H(·) ∈ $R(s)^{n_o \times n_i}$, $n_o \geq n_i$, with the factorizations (3), and $n_i = \bar{\rho}_{N_l}$. Under these conditions, the following hold true.

a) If $z \in C$ is a zero of H(·), then there exists a nonzero vector $g \in C^{n_i}$ and a polynomial $m(s) = \sum_\alpha m_\alpha s^\alpha$ such that the input

$u(t) = 1(t)e^{zt}g + \sum_\alpha m_\alpha \delta^{(\alpha)}(t)$  (6)

produces a zero-state response [i.e., the system response at time t when starting from the zero state $\theta_n$ at time t = 0 and driven by the input u(·)] with the property

$y(t, 0, \theta_n, u(\cdot)) = \theta_{n_o}$, for all $t > 0$.  (7)

b) If $u \in C$ is neither a zero nor a pole of H(·), then for all nonzero vectors $k \in C^{n_i}$, there is a polynomial m(s) such that the input

$u(t) = 1(t)ke^{ut} + \sum_\alpha m_\alpha \delta^{(\alpha)}(t)$  (8)

produces the zero-state response with the exponential form

$y(t, 0, \theta_n, u(\cdot)) = H(u)ke^{ut}$, for all $t > 0$.  (9)

Case II

Let $n_o \leq n_i$. If, in addition, $n_o < n_i$ and there are at least $(n_o + 1)$ columns of H(·) that are not identically zero, then for any $z \in C$ there is a vector $g \in C^{n_i}$ such that the input $1(t)ge^{zt} + \sum_\alpha m_\alpha \delta^{(\alpha)}(t)$ produces an output identical to zero for $t > 0$. Clearly, we shall have to modify our definition and characterization theorem. We now assume that $\bar{\rho}_{N_l} = \bar{\rho}_{N_r} = n_o$. [If $\bar{\rho}_{N_l} < n_o$, then H would essentially have only $\bar{\rho}_{N_l}$ effective outputs; equivalently, there exists a polynomial postcompensator $R(\cdot) \in C[s]^{n_o \times n_o}$ of normal rank $n_o - \bar{\rho}_{N_l} \geq 1$ such that $R(s)H(s)\hat{u}(s) = \theta_{n_o}$ for all possible inputs $\hat{u}(\cdot)$.]

Definition II

If $n_o \leq n_i$, $z \in C$ is called a zero of H(·) ∈ $R(s)^{n_o \times n_i}$ iff $\rho_{N_l}(z) < \bar{\rho}_{N_l} = n_o$.

The transmission properties of the zeros of H(·) are completely characterized by the following theorem.

Theorem II

Let H(·) ∈ $R(s)^{n_o \times n_i}$, $n_o \leq n_i$, with the factorization (3). Let $\bar{\rho}_{N_l} = n_o$. Under these conditions, the following hold true.

a) If z is a zero but not a pole of H(·), then there exists a linear combination ψ(t) of the components of the zero-state response with the property that

$\psi(t) \triangleq cD_l(z)y(t, 0, \theta_n, u(\cdot)) = 0$, for all $t > 0$  (10)

for all inputs of the form

$u(t) = 1(t)ge^{zt} + \sum_\alpha m_\alpha \delta^{(\alpha)}(t)$  (11)

where g is an arbitrary vector in $C^{n_i}$, and the $m_\alpha$ are appropriate vectors which depend on g.


b) Let $n_o \leq n_i$. If u is neither a zero nor a pole of H(·), then the following hold true.

1) For all nonzero $k \in C^{n_i}$, there are $m_\alpha$ such that

$y\big(t, 0, \theta_n, 1(t)ke^{ut} + \sum_\alpha m_\alpha \delta^{(\alpha)}(t)\big) = H(u)ke^{ut}$, for all $t > 0$.  (12)

However, for some k, the right-hand side may be identically zero.

2) For all $b \in C^{n_o}$, there is some $h \in C^{n_i}$ and $m_\alpha$ such that

$\psi(t) = b\,y\big(t, 0, \theta_n, 1(t)he^{ut} + \sum_\alpha m_\alpha \delta^{(\alpha)}(t)\big)$  (13)

$= bH(u)he^{ut} \neq 0$, for all $t > 0$.  (14)
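A worked instance of the blocking property of Theorem I-a for the Example, with z = 1, g = (1, 0)^T, and the impulse weight $m_0 = (1/2, 0)^T$ computed by hand so that ŷ(s) comes out as a polynomial vector (a sketch assuming sympy; the numbers are specific to this H):

```python
import sympy as sp

s = sp.symbols('s')
H = sp.diag((s - 1)/(s + 1), (s + 1)/(s + 2))    # the Example; zero at z = 1

# Input (6): u(t) = 1(t) e^{t} g + m_0 delta(t), i.e. u_hat = g/(s-1) + m_0,
# with g chosen so that N_l(1) g = 0 and m_0 = (1/2, 0)^T found by hand.
g = sp.Matrix([1, 0])
m0 = sp.Matrix([sp.Rational(1, 2), 0])

y_hat = sp.simplify(H * (g/(s - 1) + m0))
print(y_hat.T)     # Matrix([[1/2, 0]]): a polynomial (here constant) vector,
                   # so y(t) is an impulse at t = 0 and vanishes for t > 0, as in (7)
```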

POLES OF H(s)

With H(s) ∈ $R(s)^{n_o \times n_i}$, classical analysis [18] defines a pole of the rational matrix H(·); namely, $p \in C$ is called a pole of order n if and only if some element of H(·) has a pole of order n at p and no element has a pole of order larger than n at p. Note that here the relative magnitude of $n_o$ and $n_i$ is of no consequence. We characterize the poles as follows.

Theorem III

Let H(s) ∈ $R(s)^{n_o \times n_i}$ with factorization (3). Under this condition, $p \in C$ is a pole of H(·) if and only if there is an input

$u(t) = \sum_\alpha u_\alpha \delta^{(\alpha)}(t)$  (15)

where $u_\alpha \in C^{n_i}$, such that the corresponding zero-state response has the property that

$y(t) = re^{pt}$, for all $t > 0$  (16)

where r is a nonzero vector in $C^{n_o}$. In other words, a singular input u(·) of the form (15) kicks the system from its zero state at t = 0- to a state at t = 0+ which results in the purely exponential output for all $t > 0$ if and only if $p \in C$ is a pole of H(·).
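The "only if" construction in the proof of Theorem III (Appendix) can be replayed for the Example at the pole p = -1; a sketch assuming sympy, with the Bezout matrix P entered by hand from the scalar identities:

```python
import sympy as sp

s = sp.symbols('s')
H  = sp.diag((s - 1)/(s + 1), (s + 1)/(s + 2))
Dl = sp.diag(s + 1, s + 2)
p  = -1                                   # det Dl(-1) = 0, so -1 is a pole

r = sp.Matrix([1, 0])                     # Dl(p) r = 0
k = sp.simplify((Dl * r) / (s - p))       # Dl(s) r = (s - p) k(s); here k = (1, 0)^T
P = sp.diag(sp.Rational(-1, 2), -1)       # from (1): -1/2*(s-1) + 1/2*(s+1) = 1, etc.

y_hat = sp.simplify(H * P * k)            # singular input: u_hat = P(s) k(s)
print(sp.apart(y_hat[0], s))              # 1/(s + 1) - 1/2
# -> y_1(t) = e^{-t} for t > 0, plus an impulse at t = 0: the form (16) with r = (1, 0)^T
```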
THE ORDER OF A ZERO OF TRANSMISSION

The concept of generalized inverses is well established [23]-[25]. If for each $s \in C$ we take a generalized inverse of H(s), we obtain $H^+(s)$. Since these inverses are a generalization (to rectangular and singular matrices) of the notion of inverse, we would expect from the scalar case that if z is a zero of order m of H(·), then z is a pole of order m of $H^+(\cdot)$. First, we need to define the concept of the order of a zero. In order to do so we have to probe deeper into the structure of the transfer function H(·) and reduce it to its Smith-McMillan form. To obtain the Smith-McMillan form of a rational matrix is computationally more costly than to factor it into coprime polynomial matrices.

Smith-McMillan Form of H(·) ∈ $R(s)^{n_o \times n_i}$

Let k denote the normal rank of H(·). The Smith-McMillan form [3], [19]-[22], [27] of H(s) is given by

$H(s) = A(s)\Gamma(s)B(s)$  (17)

where the following hold true.

1) $A(s) \in R[s]^{n_o \times n_o}$ and $B(s) \in R[s]^{n_i \times n_i}$ are unimodular.

2) $\Gamma(s) \in R(s)^{n_o \times n_i}$ and has the form

$\Gamma(s) = \begin{bmatrix} \mathrm{diag}\left(\dfrac{e_1(s)}{\psi_1(s)}, \ldots, \dfrac{e_k(s)}{\psi_k(s)}\right) & 0 \\ 0 & 0 \end{bmatrix}$  (18)

where a) $e_i(s)$ and $\psi_i(s)$, i = 1, 2, ..., k, are monic coprime polynomials; b) $e_i(s) \mid e_{i+1}(s)$, i = 1, 2, ..., k - 1; and c) $\psi_i(s) \mid \psi_{i-1}(s)$, i = 2, 3, ..., k.

We can factor $\Gamma(s)$ as follows:

$\Gamma(s) = \psi_l(s)^{-1}E_l(s) = E_r(s)\psi_r(s)^{-1}$  (19)

where

$\psi_l(s) \in R[s]^{n_o \times n_o}$, $E_l(s) \in R[s]^{n_o \times n_i}$, $\psi_r(s) \in R[s]^{n_i \times n_i}$, $E_r(s) \in R[s]^{n_o \times n_i}$  (20)

and

$\psi_l(s) = \mathrm{diag}\,(\psi_1(s), \psi_2(s), \ldots, \psi_k(s), 1, 1, \ldots, 1)$  (21)

$E_l(s) = \mathrm{diag}\,(e_1(s), e_2(s), \ldots, e_k(s), 0, 0, \ldots, 0)$.  (22)

There are similar expressions for $\psi_r(s)$ and $E_r(s)$.

Observation

The Smith-McMillan factorization of H(·) in (17) is a special case of the polynomial factorizations (3). To see this, define

$D_l(s) \triangleq \psi_l(s)A(s)^{-1}$, $N_l(s) \triangleq E_l(s)B(s)$.  (23)

Since, for i = 1, 2, ..., k, $e_i(\cdot)$ and $\psi_i(\cdot)$ are coprime, it follows easily that $\psi_l(\cdot)$ and $E_l(\cdot)$ are left-coprime matrices. The same holds for $D_l$ and $N_l$ because A(·) and B(·) are unimodular. For the right-coprime polynomial factorization, we consider

$D_r(s) \triangleq B(s)^{-1}\psi_r(s)$, $N_r(s) \triangleq A(s)E_r(s)$.  (24)
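The diagonal of the Smith-McMillan form can be computed by the classical gcd-of-minors construction: write H = N/d with d the monic lcm of the denominators and set $e_i/\psi_i = \Delta_i/(\Delta_{i-1}d)$ reduced to lowest terms, where $\Delta_i$ is the gcd of all i × i minors of N. A sympy sketch (the function name is ours, and normalization of the gcds to monic polynomials is glossed over):

```python
import sympy as sp
from itertools import combinations

s = sp.symbols('s')

def smith_mcmillan_diagonal(H):
    """Diagonal entries e_i/psi_i of the Smith-McMillan form of a rational
    matrix H, via gcds of minors of its polynomial numerator matrix."""
    d = sp.lcm([sp.fraction(sp.cancel(h))[1] for h in H])
    N = sp.Matrix(H.shape[0], H.shape[1], lambda i, j: sp.cancel(H[i, j]*d))
    k = N.rank()
    Delta = [sp.Integer(1)]                     # Delta_0 = 1
    for i in range(1, k + 1):
        minors = [N[list(r), list(c)].det()
                  for r in combinations(range(H.shape[0]), i)
                  for c in combinations(range(H.shape[1]), i)]
        Delta.append(sp.gcd(minors))
        # gcd of the i x i minors; gcd with the zero minors is harmless
    return [sp.cancel(Delta[i]/(Delta[i-1]*d)) for i in range(1, k + 1)]

H = sp.diag((s - 1)/(s + 1), (s + 1)/(s + 2))
print(smith_mcmillan_diagonal(H))
# [1/(s**2 + 3*s + 2), s**2 - 1], i.e. e1/psi1 = 1/((s+1)(s+2)),
# e2/psi2 = (s-1)(s+1), consistent with the divisibility conditions above
```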

Observing that A(s) and B(s) are unimodular matrices, and considering (22)-(24), we conclude that, with $z \in C$, $\rho_{N_l}(z) < \bar{\rho}_{N_l}$ if and only if $\rho_{N_r}(z) < \bar{\rho}_{N_r}$. When $n_o \geq n_i$, we shall assume that $\bar{\rho}_{N_l} = n_i$; hence, $\bar{\rho}(E_l) = n_i$. Similarly, for $n_o \leq n_i$, $\bar{\rho}(E_l) = n_o$. We shall say that $z \in C$ is a zero of order m of H(·) iff $e_{n_i}(\cdot)$ has a zero of order m at z, for the case where $n_o \geq n_i$, and iff $e_{n_o}(\cdot)$ has a zero of order m at z, for the case where $n_o \leq n_i$.

We say that an $n_i \times n_o$ matrix $X^+$ is a reflexive generalized inverse of the $n_o \times n_i$ matrix X iff 1) $XX^+X = X$, and 2) $X^+XX^+ = X^+$ [25, p. 137]. The following theorem is a trivial extension of [20], [21]; it has also been indicated in [22].

Theorem IV

The complex number z is a zero of order m of H(·) if and only if z is a pole of order m of $H^+(\cdot)$. The result follows immediately from observing that $H^+$ can be chosen as follows:

$H^+(s) = B(s)^{-1}E_l^+(s)\psi_l(s)A(s)^{-1} = B(s)^{-1}\psi_r(s)E_r^+(s)A(s)^{-1}$  (25)

where $E^+(s)$ is obtained from E(s) by transposing E(s) and inverting all its nonzero elements.
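The two defining conditions of a reflexive generalized inverse are easy to check pointwise, e.g., at the zero z = 1 of the Example, where H(1) is singular; the Moore-Penrose inverse, which in particular satisfies both conditions, serves (a sympy sketch):

```python
import sympy as sp

s = sp.symbols('s')
H = sp.diag((s - 1)/(s + 1), (s + 1)/(s + 2))

X  = H.subs(s, 1)          # H at the zero z = 1: diag(0, 2/3), a singular matrix
Xp = X.pinv()              # Moore-Penrose inverse: diag(0, 3/2)

assert X * Xp * X == X     # condition 1)
assert Xp * X * Xp == Xp   # condition 2): Xp is a reflexive generalized inverse
```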

When $n_o \geq n_i$, if z is a zero of order m of H(·), then, roughly speaking, the multiport completely blocks the transmission of some input of the form $\sum_{k=0}^{\sigma} g_k t^k e^{zt}$, for σ = 0, 1, ..., m - 1; for σ = m, there is an input of that form for which the output is nonzero and proportional to $e^{zt}$. This is precisely the generalization of the scalar case. The result is made precise in the following theorem.

Theorem V

Let H(·) ∈ $R(s)^{n_o \times n_i}$ and let z be a zero of order m of H(·).

a) If $n_o \geq n_i$, then there exist nonzero polynomial vectors g(s) and m(s) such that the input

$\hat{u}_\sigma(s) = g(s)/(s - z)^{\sigma+1} + m(s)$  (26)

produces a zero-state response

$y(t, 0, \theta_n, u_\sigma(\cdot)) = \theta_{n_o}$, for all $t > 0$, for σ = 0, 1, ..., m - 1  (27)

$y(t, 0, \theta_n, u_m(\cdot)) = ae^{zt}$, for all $t > 0$, with $a \neq \theta_{n_o}$.  (28)

b) If $n_o \leq n_i$, then there exists a linear combination, say ζ(t), of the components of the zero-state response and of its derivatives with the property that, for σ = 0, 1, ..., m - 1,

$\zeta(t) = 0$, for all $t > 0$  (29)

for all inputs of the form

$u_\sigma(t) = 1(t)g\,\frac{t^\sigma}{\sigma!}\,e^{zt} + \sum_\alpha m_\alpha \delta^{(\alpha)}(t)$  (30)

where g is an arbitrary constant vector in $C^{n_i}$ and the $m_\alpha$ are appropriate vectors in $C^{n_i}$ which depend on g. For σ = m, ζ(t) is proportional to $e^{zt}$ for all $t > 0$ and is nonzero for almost all g.
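For a scalar H (so $n_o = n_i = 1$ and both cases coincide), the order of a zero is its multiplicity, and Theorem V can be checked by partial fractions. In the sketch below (assuming sympy), H has a zero of order m = 2 at z = 1; the terms of ŷ with poles at -1 and -2 can be cancelled by a suitable polynomial m(s), while the polynomial terms are impulses at t = 0, so only a surviving pole at s = 1 produces output proportional to $e^t$ for t > 0:

```python
import sympy as sp

s = sp.symbols('s')
H = (s - 1)**2 / ((s + 1)*(s + 2))       # zero of order m = 2 at z = 1

def modes_at_zero(sigma):
    # Partial fractions of H(s)/(s - 1)^(sigma+1); a polynomial m(s) cannot
    # change the terms with a pole at s = 1, since H*m is analytic there.
    y_hat = sp.apart(H / (s - 1)**(sigma + 1), s)
    return [t for t in sp.Add.make_args(y_hat) if t.has(1/(s - 1))]

for sigma in range(3):
    print(sigma, modes_at_zero(sigma))
# sigma = 0, 1 (< m): no pole at s = 1 survives -> y(t) = 0 for t > 0, as in (27), (29)
# sigma = 2 (= m):    1/(6*(s - 1)) survives   -> y(t) = (1/6) e^{t} for t > 0, as in (28)
```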
CONCLUSION

To maintain the closest analogy to the scalar case, the approach of this paper is based on the factorizations of the rational matrix transfer function H(·) given in (3). The zeros of H(·) are defined in terms of the local rank of the polynomial matrix $N_l(\cdot)$. The dynamic properties associated with the zeros are given in Theorems I, II, and V. The poles of H(·) are defined by classical analysis and are characterized in Theorem III. If the complex variable s is changed into z and if the resulting elements of H(·) are interpreted as z-transfer functions, then the algebraic techniques used above can be applied to the sampled-data case and, except for a few modifications in interpretation, the results still hold in this case.
APPENDIX
PROOFS OF THEOREMS

Proof of Theorem I-a)

Since z is a zero of H(·), $\rho_{N_l}(z) < n_i$; hence, there is a nonzero vector $g \in C^{n_i}$ such that $N_l(z)g = \theta_{n_o}$. Define the polynomial vector $p(s) \triangleq N_l(s)g/(s - z)$; let $m(s) = \sum_\alpha m_\alpha s^\alpha \triangleq -P(s)p(s)$; thus the input u(·) of (6) is completely defined. The corresponding zero-state response is

$\hat{y}(s) = H(s)\hat{u}(s) = D_l(s)^{-1}[p(s) + N_l(s)m(s)]$.  (31)

Using (1) we conclude that $\hat{y}(s) = Q(s)p(s)$, and the conclusion (7) follows upon taking the inverse transform of the polynomial vector Q(s)p(s). Q.E.D.

Proof of Theorem I-b)

Consider the input u(·) in (8), in which the $m_\alpha$ are still undetermined; let $m(s) \triangleq \sum_\alpha m_\alpha s^\alpha$; then

$\hat{y}(s) = H(u)k/(s - u) + D_l(s)^{-1}[N_l(s) - D_l(s)D_l(u)^{-1}N_l(u)]k/(s - u) + D_l(s)^{-1}N_l(s)m(s)$.  (32)

The bracketed term, multiplied by k/(s - u), is a polynomial vector q(s). With (1) in mind, choose $m(s) = -P(s)q(s)$ to obtain

$\hat{y}(s) = H(u)k/(s - u) + Q(s)q(s)$.  (33)

The conclusion (9) follows since $k \neq \theta_{n_i}$ and H(u) has column rank equal to $n_i$. Q.E.D.

Proof of Theorem II-a)

Choose a nonzero row vector $c \in C^{n_o}$ such that $cN_l(z) = \theta_{n_i}^T$ and define $\hat{\psi}(t) \triangleq cD_l(z)y(t)$. From (1), $cD_l(z) \neq \theta_{n_o}^T$. Pick $m(s) = \sum_\alpha m_\alpha s^\alpha \triangleq gp(s)$, where $p(\cdot) \in R[s]$ is to be chosen later. Then

$\hat{\psi}(s) = cD_l(z)H(s)\hat{u}(s) = cD_l(z)D_l(s)^{-1}N_l(s)g[1/(s - z) + p(s)]$.  (34)

By construction, $h(s) \triangleq cD_l(z)D_l(s)^{-1}N_l(s)g \in R(s)$ has a zero at z. Hence,

$h(s) = (s - z)\frac{n(s)}{d(s)}$  (35)

where n(s) and d(s) ∈ R[s] are coprime, and $d(z) \neq 0$. Choose $\beta \in C$ so that $\beta d(z) - 1 = 0$ and choose the polynomial p(s) so that $(s - z)p(s) = \beta d(s) - 1$. As a result, (34) and (35) give $\hat{\psi}(s) = \beta n(s)$. The conclusion (10) follows. Q.E.D.

Proof of Theorem II-b)

Statement 1) is identical with Theorem I-b), except that now the column rank of H(u) is $n_o \leq n_i$; hence, H(u)k may be the zero vector. Statement 2) is obvious: $\rho_{N_l}(u) = n_o$ and det $D_l(u) \neq 0$ by assumption; so, for any nonzero b, bH(u) is nonzero; hence, there is an h such that $bH(u)h \neq 0$. Q.E.D.

Proof of Theorem III

⇒: Since p is a pole of H, det $D_l(p) = 0$, and there is a nonzero vector $r \in C^{n_o}$ such that $D_l(p)r = \theta_{n_o}$. Hence, $D_l(s)r = (s - p)k(s)$, where k(s) is a polynomial vector. Pick $\hat{u}(s) = P(s)k(s)$; then, using (1), we obtain successively

$\hat{y}(s) = H(s)\hat{u}(s) = D_l(s)^{-1}N_l(s)P(s)k(s) = r/(s - p) - Q(s)k(s)$.  (36)

The conclusion (16) follows immediately, since Q(s)k(s) is a polynomial vector.

⇐: Suppose that p is not a pole of H(·) and that some input u(·) of the form (15) produces the zero-state response (16); then

$\hat{y}(s) = H(s)\hat{u}(s) = D_l(s)^{-1}N_l(s)\sum_\alpha u_\alpha s^\alpha$  (37)

$= q(s) + r/(s - p)$  (38)

where q(s) is some polynomial vector. Since (38) has a pole at p, by (37) we must have det $D_l(p) = 0$, i.e., p is a pole of H(·). Q.E.D.


Proof of Theorem V-a) ($n_o \geq n_i$)

Choose

$g(s) = B(s)^{-1}r_{n_i}$  (39)

where $r_{n_i}$ is the $n_i$-vector whose components are all zero except for the $n_i$th, which is one. Using (17), (19), (26), and (39), we obtain

$\hat{y}(s) = H(s)\hat{u}_\sigma(s) = A(s)\psi_l(s)^{-1}E_l(s)r_{n_i}/(s - z)^{\sigma+1} + H(s)m(s)$.  (40)

Since z is a zero of order m, $E_l(s)r_{n_i} = (s - z)^m\phi(s)\epsilon_{n_i}$, where $\phi(s) \in R[s]$, $\phi(z) \neq 0$, and $\epsilon_{n_i}$ is the $n_o$-vector whose components are zero except for the $n_i$th, which is one; hence,

$\hat{y}(s) = A(s)\psi_l(s)^{-1}(s - z)^m\phi(s)\epsilon_{n_i}/(s - z)^{\sigma+1} + H(s)m(s)$.  (41)

If σ = 0, 1, ..., m - 1, choose

$m(s) = -P(s)(s - z)^{m-\sigma-1}\phi(s)\epsilon_{n_i}$  (42)

where P(s) is defined in (1). Using (23) and (42), we obtain

$\hat{y}(s) = D_l(s)^{-1}[I_{n_o} - N_l(s)P(s)](s - z)^{m-\sigma-1}\phi(s)\epsilon_{n_i}$.  (43)

From (1), we obtain

$\hat{y}(s) = Q(s)(s - z)^{m-\sigma-1}\phi(s)\epsilon_{n_i}$.  (44)

Taking the inverse Laplace transform of (44), we have (27).

If σ = m, (41) becomes

$\hat{y}(s) = A(s)\psi_l(s)^{-1}\phi(s)\epsilon_{n_i}/(s - z) + H(s)m(s)$.  (45)

Denoting the $n_i$th diagonal element of $\psi_l(s)$ by $\psi_{n_i}(s)$, we obtain

$\hat{y}(s) = A(s)\epsilon_{n_i}\phi(s)/[\psi_{n_i}(s)(s - z)] + H(s)m(s)$.  (46)

Since z is a zero of order m, $\psi_{n_i}(s)$ and (s - z) are coprime; hence, there are scalar polynomials p(s) and q(s) such that $p(s)\psi_{n_i}(s) + q(s)(s - z) = 1$, for all $s \in C$. So we can write (46) as follows:

$\hat{y}(s) = A(z)\epsilon_{n_i}\phi(z)p(z)/(s - z) + [A(s)\epsilon_{n_i}\phi(s)p(s) - A(z)\epsilon_{n_i}\phi(z)p(z)]/(s - z) + D_l(s)^{-1}\epsilon_{n_i}\phi(s)q(s) + H(s)m(s)$  (47)

where (23) was used to write $A(s)\psi_{n_i}(s)^{-1}\epsilon_{n_i} = D_l(s)^{-1}\epsilon_{n_i}$, and where the bracketed term in (47), divided by (s - z), is a polynomial vector in s, say $\tilde{p}(s)$. Define

$a \triangleq A(z)\epsilon_{n_i}\phi(z)p(z)$  (48)

(note that $a \neq \theta_{n_o}$). Now we can write (47) as

$\hat{y}(s) = a/(s - z) + \tilde{p}(s) + D_l(s)^{-1}\epsilon_{n_i}\phi(s)q(s) + H(s)m(s)$.  (49)

With (1) in mind, choose

$m(s) = -P(s)[D_l(s)\tilde{p}(s) + \epsilon_{n_i}\phi(s)q(s)]$.  (50)

Now, with (1), (23), and (50), we obtain

$\hat{y}(s) = a/(s - z) + Q(s)[D_l(s)\tilde{p}(s) + \epsilon_{n_i}\phi(s)q(s)]$.  (51)

Taking the inverse Laplace transform of (51), we have (28). Q.E.D.

Proof of Theorem V-b) ($n_o \leq n_i$)

Pick $\hat{\zeta}(s) = r_{n_o}^T A(s)^{-1}\hat{y}(s)$, where A(s) is given by (17) and $r_{n_o}$ is the $n_o$-vector whose elements are zero except for the $n_o$th, which is one. Choose $m(s) = -gp(s)$, where $p(s) \in R[s]$ will be determined later. Using (17), (19), and (30), we have

$\hat{\zeta}(s) = r_{n_o}^T E_r(s)\psi_r(s)^{-1}B(s)g[1/(s - z)^{\sigma+1} - p(s)]$  (52)

$= \frac{e_{n_o}(s)b_{n_o}(s)}{\psi_{n_o}(s)}[1/(s - z)^{\sigma+1} - p(s)]$  (53)

where $\psi_{n_o}(s)$ [$e_{n_o}(s)$] is the scalar polynomial in the $n_o$th column of $\psi_r(s)$ [$E_r(s)$, respectively], and $\psi_{n_o}(z) \neq 0$. The scalar polynomial $b_{n_o}(s)$ is the $n_o$th component of B(s)g. By assumption, $e_{n_o}(s) = (s - z)^m\phi(s)$, where $\phi(s) \in R[s]$ and $\phi(z) \neq 0$. So we have

$\hat{\zeta}(s) = \frac{\phi(s)b_{n_o}(s)}{\psi_{n_o}(s)}(s - z)^{m-\sigma-1}[1 - (s - z)^{\sigma+1}p(s)]$.  (54)

Now, $\psi_{n_o}(s)$ and $(s - z)^{\sigma+1}$ are coprime, so there are scalar polynomials β(s) and p(s) such that

$\beta(s)\psi_{n_o}(s) + (s - z)^{\sigma+1}p(s) = 1$, for all $s \in C$.  (55)

Using (55) in (54), we obtain

$\hat{\zeta}(s) = \phi(s)b_{n_o}(s)\beta(s)(s - z)^{m-\sigma-1}$.  (56)

Therefore, by taking the inverse Laplace transform of (56), the conclusions follow immediately. Q.E.D.
REFERENCES

[1] N. Bourbaki, Algèbre (Livre II). Paris, France: Hermann et Cie, 1964, ch. 6, sec. 1, prop. 9 (Div).
[2] V. M. Popov, "Some properties of the control systems with irreducible matrix-transfer functions," in Lecture Notes in Mathematics, vol. 144, Seminar on Differential Equations and Dynamical Systems II. New York: Springer, 1970, pp. 169-180.
[3] H. H. Rosenbrock, State-Space and Multivariable Theory. New York: Wiley, 1970.
[4] W. A. Wolovich, "On the synthesis of multivariable systems," in Proc. 13th Joint Automatic Control Conf., American Control Council, Preprints Tech. Papers (Stanford, Calif.), Aug. 1972.
[5] W. A. Wolovich, "The determination of state-space representations for linear multivariable systems," presented at the 2nd IFAC Symp. Multivariable Technical Control Systems, Düsseldorf, Germany, Paper 1.2.3, Oct. 1971.
[6] S. H. Wang, "Design of linear multivariable systems," Coll. Eng., Univ. California, Berkeley, Memo. ERL-M309, ch. 1, Oct. 1971.
[7] S. H. Wang and C. A. Desoer, "The exact model matching of linear multivariable systems," IEEE Trans. Automat. Contr. (Short Papers), vol. AC-17, pp. 347-349, June 1972.
[8] C. A. Desoer and J. D. Schulman, "Cancellations in multivariable continuous-time and discrete-time feedback systems," to be published.
[9] F. M. Callier and C. A. Desoer, "Necessary and sufficient conditions for stability of n-input n-output convolution feedback systems with a finite number of unstable poles," to be published.
[10] C. C. MacDuffee, The Theory of Matrices. New York: Chelsea, 1946.
[11] B. L. Van der Waerden, Algebra, vols. 1 and 2. New York: Ungar, 1970.
[12] C. A. Desoer, Notes for a Second Course on Linear Systems. New York: Van Nostrand-Reinhold, 1970, pp. 98-99.
[13] C. T. Chen, Introduction to Linear System Theory. New York: Holt, Rinehart and Winston, 1970, p. 137.
[14] V. Belevitch, Classical Network Theory. San Francisco, Calif.: Holden-Day, 1968, p. 406.
[15] E. J. Davison, "A computational method for finding the zeros of a multivariable linear time invariant system," Automatica, vol. 6, pp. 481-484, May 1970.
[16] K. E. Bollinger and J. C. Mathur, "To compute the zeros of large systems," IEEE Trans. Automat. Contr. (Corresp.), vol. AC-16, pp. 95-96, Feb. 1971.
[17] S. A. Marshall, "Remarks on computing the zeros of large systems," IEEE Trans. Automat. Contr. (Corresp.), vol. AC-17, p. 261, Apr. 1972.
[18] J. Dieudonné, Foundations of Modern Analysis. New York: Academic, 1969.
[19] H. H. Rosenbrock, "Allocation of poles and zeros," Proc. Inst. Elec. Eng., vol. 117, pp. 1879-1886, Sept. 1970.
[20] B. McMillan, "Introduction to formal realizability theory, pt. II," Bell Syst. Tech. J., vol. 31, pp. 541-600, May 1952.
[21] G. D. Forney, Jr., "Convolutional codes I: Algebraic structure," IEEE Trans. Inform. Theory, vol. IT-16, pp. 720-738, Nov. 1970.
[22] B. C. Moore and L. M. Silverman, "The structure of linear multivariable systems: Equivalent characterizations," to be published.
[23] R. Penrose, "A generalized inverse for matrices," Proc. Cambridge Phil. Soc., vol. 51, pt. 3, pp. 406-413, July 1955.
[24] L. A. Zadeh and C. A. Desoer, Linear System Theory. New York: McGraw-Hill, 1963.
[25] S. Barnett, Matrices in Control Theory with Applications to Linear Programming. London, England: Van Nostrand-Reinhold, 1971.
[26] H. Kwakernaak and R. Sivan, Linear Optimal Control Systems. New York: Wiley, 1972.
[27] R. E. Kalman, "Irreducible realizations and the degree of a rational matrix," J. Soc. Ind. Appl. Math., vol. 13, pp. 520-544, June 1965.

On the Inversion of Rational Matrices


EROL EMRE, ÖZAY HÜSEYİN, AND KHALDUN ABDULLAH
Abstract-The method developed herein provides a convenient procedure for the inversion of rational matrices (matrices whose entries are rational functions in s) that are nonsingular at s = 0.

INTRODUCTION

THE COMPUTATION of the inverse of (sU - A), where U is the unit matrix, is simple and well known [1]. The inversion of matrices with entries in the form of rational functions often arises in the analysis and synthesis of passive and active RLC networks and is not as simple as that of (sU - A). For example, in the s-domain formulation and modeling of electrical networks [2], there is an inevitable need to invert admittance matrices or impedance matrices, which are in general rational matrices. Also, in the analysis of power systems where the method of diakoptics [3] is utilized, the inversion of rational matrices is required. Furthermore, the analysis and design of multivariable control systems [4] involves the inversion of rational matrices.

The inversion of rational matrices has been investigated by Munro and Zakian [5]. In their letter, two methods are provided, both of which utilize operations in the field of rational functions, operations not very suitable for computer programming. In this paper, an algorithm is established for the inversion of matrices with entries which are rational functions. Significantly, the algorithm requires operations with constant matrices only. In this respect, this algorithm poses considerably fewer problems in computation than are encountered with other methods.

PRELIMINARY RESULTS

Let

$H(s) = U + H_1s + H_2s^2 + \cdots + H_ns^n$  (1)

be a polynomial matrix of degree n, where the $H_i$, i = 0, 1, 2, ..., n, are constant square matrices and U is the unit matrix. Extending the known results for scalar polynomials to polynomial matrices, the Maclaurin expansion of $H^{-1}(s)$ can be written as

$H^{-1}(s) = R_0 + R_1s + \cdots + R_ks^k + \cdots$.  (2)

Note that the $R_i$ are constant matrices. The relation between the $R_i$ appearing in (2) and the $H_i$ appearing in (1) can now be written as

$R_i = C_i - \sum_{j=0}^{i-1} R_jH_{i-j}$, i = 0, 1, 2, ...  (3)

where $C_0 = U$ and $C_i = 0$ for all $i \neq 0$.
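Recursion (3) is directly implementable with constant-matrix operations only, which is the point of the method. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def maclaurin_inverse_coeffs(H, k):
    """R_0, ..., R_k of H(s)^{-1} for H(s) = U + H[1] s + ... + H[n] s^n,
    by recursion (3): R_i = C_i - sum_{j<i} R_j H_{i-j}, C_0 = U, C_i = 0 else."""
    n = len(H) - 1                    # degree; H[0] must be the unit matrix U
    dim = H[0].shape[0]
    R = [np.eye(dim)]                 # R_0 = C_0 = U
    for i in range(1, k + 1):
        acc = np.zeros((dim, dim))
        for j in range(i):
            if 1 <= i - j <= n:       # H_{i-j} vanishes beyond degree n
                acc += R[j] @ H[i - j]
        R.append(-acc)                # C_i = 0 for i != 0
    return R

# Check against the geometric series (U + H1 s)^{-1} = U - H1 s + H1^2 s^2 - ...
H1 = np.array([[0.0, 1.0], [0.5, 0.0]])
R = maclaurin_inverse_coeffs([np.eye(2), H1], 2)
assert np.allclose(R[1], -H1) and np.allclose(R[2], H1 @ H1)
```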

Also, for a square rational matrix Q(s) of the form

$Q(s) = P(s)H^{-1}(s) = (P_0 + P_1s + \cdots + P_ms^m)(U + H_1s + \cdots + H_ns^n)^{-1}$  (4)

the Maclaurin expansion is given by

$Q(s) = R_0' + R_1's + \cdots + R_k's^k + \cdots$.

Manuscript received January 17, 1973; revised February 27, 1973. The authors are with the Department of Electrical Engineering, Middle East Technical University, Ankara, Turkey.

The following linear equation can be written for the constant matrices appearing in (4):
