
Deflated Decomposition of Solutions of Nearly Singular Systems

Author(s): Tony F. Chan

Source: SIAM Journal on Numerical Analysis, Vol. 21, No. 4 (Aug., 1984), pp. 738-754
Published by: Society for Industrial and Applied Mathematics
Stable URL: http://www.jstor.org/stable/2157006
Accessed: 02/05/2011 05:38

SIAM J. NUMER. ANAL.
Vol. 21, No. 4, August 1984
© 1984 Society for Industrial and Applied Mathematics

DEFLATED DECOMPOSITION OF SOLUTIONS OF NEARLY SINGULAR SYSTEMS*

TONY F. CHAN†
Abstract. When solving the linear system Ax = b, where A may be nearly singular and b is not consistent with A, one is often interested in computing a deflated solution, i.e., a unique solution to a nearby singular but consistent system A_s x_d = b. Keller [14], [15] has considered deflated solutions with A_s corresponding to setting a small pivot of the LU-factorization of A to zero. Stewart [22] proposed an iterative algorithm for computing a deflated solution with A_s corresponding to setting the smallest singular value of A to zero. Keller's approach explicitly uses submatrices of the LU-factors whereas Stewart's approach is implicit in that it only involves solving systems with A. In this paper, we generalize the concept of a deflated solution to that of a deflated decomposition, which expresses the solution x in terms of x_d and the null vectors of A_s. We treat such decompositions in a uniform framework that includes the approaches of Keller and Stewart and introduce some new deflated solutions based on the LU-factorization. Moreover, we prove some results that relate the different kinds of deflated solutions. In particular, we prove that the difference between one of the LU-based deflated solutions and the SVD-based deflated solution tends to zero as A tends to exactly singular. In addition, we present noniterative implicit algorithms for computing the LU-based decompositions. Numerical results verifying the accuracy and stability of the algorithms are presented.

1. Introduction. In many numerical problems (for example, in the numerical treatment of nonlinear eigenvalue problems [5], [13], [16], [17], [19], [20], homotopy continuation methods for solving nonlinear systems [1], [9], nearly decomposable Markov chains [21], compartmental models [8] and constrained optimization problems [10]), one often is faced with the problem of having to solve linear systems of the form

(1)    Ax = b,

where it is possible for A to become nearly singular. For simplicity, we shall assume in this paper that A is square and its nullity is at most one. The complete framework generalizes to higher dimensional null spaces but we shall not discuss that here.

For any singular matrix A_s with a one-dimensional null space spanned by u, if x_d is a solution of the consistent system A_s x_d = b, then x_d + λu is another solution for all scalars λ. Consequently, for the linear system Ax = b with A close to A_s, the exact solution x generally has a large norm. In many applications (see [3] for example), instead of computing x directly, it is preferable to compute the solution x decomposed in the form

(2)    x = x_d + μu,

where x_d is purged of u, for example by requiring that u^T x_d = 0. Of course, one would like to compute x_d accurately. However, if one takes the obvious approach and solves the system Ax = b directly for x (for example, by Gaussian elimination) and then orthogonalizes x with respect to u to get x_d, the solution x one obtains will be dominated by the vector μu and in finite precision, the accuracy in x_d will deteriorate as a result. In this paper, we propose and analyse algorithms for computing such decompositions in a numerically stable manner.
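The loss of accuracy described above is easy to reproduce. The following numpy sketch is not from the paper; the matrix, seed and thresholds are illustrative assumptions. It builds a matrix whose smallest singular value is 1e-10 and shows that the directly computed solution is dominated by the near-null-space component:

```python
import numpy as np

# Illustrative setup (hypothetical): A = Q1 diag(2, 1.5, 1, 1e-10) Q2^T is nearly
# singular; u = last column of Q2 spans the approximate null space.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q1 @ np.diag([2.0, 1.5, 1.0, 1e-10]) @ Q2.T
u = Q2[:, 3]                       # approximate right null vector of A
b = rng.standard_normal(4)

x = np.linalg.solve(A, b)          # direct solve: x is dominated by the u-component
x_d = x - (u @ x) * u              # orthogonalizing afterwards: accuracy already lost
print(np.linalg.norm(x))           # huge, roughly |v^T b| / 1e-10
print(np.linalg.norm(x_d))         # moderate, but carries an absolute error on the
                                   # order of ||x|| times machine epsilon
```

Because ||x|| is of order 1/σ, the cancellation in forming x_d leaves an error far above machine precision in x_d, which is exactly the difficulty the algorithms of this paper avoid.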
We call x_d a deflated solution of (1) and (2) a deflated decomposition of the solution x of (1). This decomposition can be viewed as an accurate representation of the solution x of (1) in terms of two parts: a part in the null space u of A, and a deflated part purged of u. In § 2, we shall study the question of existence and uniqueness of deflated solutions and deflated decompositions in a general setting.

* Received by the editors March 26, 1982 and in revised form November 7, 1983. This work was supported by the Department of Energy under contract DE-AC02-81ER10996.
† Computer Science Department, Yale University, New Haven, Connecticut 06520.
Depending on how A_s and b are defined in terms of A and b, there may be many different deflated solutions and decompositions. One possibility is to define A_s to be the nearest singular matrix to A in the Frobenius norm and define x_d to be the minimal length least squares solution to the system A_s x_d = b. This corresponds to setting the smallest singular value of A to zero in its singular value decomposition (SVD) to obtain A_s and taking b to be the orthogonal projection of b onto the range space of A_s. To avoid computing the SVD of A explicitly, which is usually much more expensive than solving linear systems [2], [11], Stewart [22] gave an implicit algorithm for computing x_d which only requires the ability to solve linear systems with A. In § 3, we present this algorithm and show how it can be extended to compute the deflated decomposition of x. Since the singular vectors required in his algorithm may be inaccurate, Stewart used a form of iterative improvement to refine the deflated solution. We shall show in § 6 that if the inverse iteration is arranged so that these approximate singular vectors satisfy a simple relationship, then no iterative improvement is necessary.

In § 4, we define a class of deflated solutions based on a LU-factorization of A with a small pivot, with A_s obtained by changing some elements of A by amounts of approximately the same size as the smallest pivot in the LU factors. These matrices A_s have the property that their left or right null vectors can be determined accurately (to machine precision) by only one back-substitution. In § 6, we present implicit iterative algorithms similar to Stewart's for computing these deflated solutions. We analyse the convergence and stability of these algorithms and show that no iterative improvement is necessary.

In § 5, we prove some results that relate the various deflated solutions. In particular, we show that the difference between one of the LU-based deflated solutions and the deflated solution based on the singular vectors tends to zero as A tends to singularity, whereas for the other LU-based deflated solutions this difference tends to something proportional to v_s^T b where v_s is the left null vector of A_s. These results are verified by numerical experiments in § 7.

We have made no attempt to survey all related work in this area although we would like to mention the work of Peters and Wilkinson [18]. Throughout this paper, upper case Latin letters denote matrices, lower case Latin letters denote vectors and lower case Greek letters denote scalars. We shall use the notation || · || to denote the Euclidean norm, (u)_k to denote the kth component of the vector u and P_u with ||u|| = 1 to denote the orthogonal projector I − uu^T.
2. Existence and uniqueness. Deflated solutions of (1) are solutions to a nearby singular but consistent system derived from (1). All the deflated solutions that we are going to define are solutions to systems of the form

(3)    A_s x_d ≡ SA x_d = Rb,

(4)    N x_d = x_d,

where S, R and N are matrices that are related to A and where A_s ≡ SA is a singular matrix "close" to A, with a one-dimensional null space. We shall denote the normalized left null vector of A_s by v_s and the normalized right null vector by u_s. Further, we shall only use matrices N with a one-dimensional null space spanned by a normalized left null vector v_n and a normalized right null vector u_n.


The following lemma gives necessary and sufficient conditions for the existence and uniqueness of the solution to the above system.

THEOREM 1. The system (3) and (4) has a solution if and only if v_s^T R = 0^T. The solution is unique if and only if v_n^T u_s ≠ 0.

Proof. The condition for existence is simply that the right-hand side Rb be in the range space of A_s. If v_s^T R = 0^T, then all solutions of (3) are of the form x_d = x_0 + ν u_s, where x_0 is any solution of (3) and ν is an arbitrary constant. Condition (4) gives (v_n^T x_0 + ν v_n^T u_s) = 0, from which we can uniquely determine ν if and only if v_n^T u_s ≠ 0.

Throughout this paper, we shall use matrices S, R and N of a special form so that the conditions in Theorem 1 are automatically satisfied.

DEFINITION 2. We shall always use S, R and N of the following forms: S = I − w v_s^T, R = I − y v_s^T and N = I − u_s p^T, where the arbitrary vectors v_s, w, y and p must satisfy ||v_s|| = 1, v_s^T w = 1, v_s^T y = 1 and u_s^T p = 1, and ||w||, ||y|| and ||p|| must be bounded independently of the singularity of A. Moreover, if A is singular, then v_s must be chosen to be the null vector of A so that A_s = A.

The vectors v_s, w, y and p, while arbitrary here, will be specified in later sections. We have the following general existence and uniqueness result.

THEOREM 3. If S, R and N have the forms in Definition 2, then the system (3) and (4) has a unique solution.

Proof. It can be easily verified that the vector v_s in the definition of S is indeed a left null vector of A_s and v_s^T R = 0^T. Moreover, v_n = p/||p|| and so v_n^T u_s = 1/||p|| ≠ 0. Therefore, the conditions of Theorem 1 are satisfied.
DEFINITION 4. Define u_w with ||u_w|| = 1 and θ ≥ 0 to be the unique vector and scalar satisfying A u_w = θ w. Similarly define u_y with ||u_y|| = 1 and κ ≥ 0 to be the unique vector and scalar satisfying A u_y = κ y.

Note that when A is nonsingular, u_w and u_y are just multiples of A^{-1} w and A^{-1} y and θ and κ are normalization constants. When A is singular, u_w = u_y = u_s and θ = κ = 0. When A is nearly singular, u_w and u_y are approximate null vectors of A and θ and κ have small absolute values. In fact, it can easily be verified that u_w is the normalized null vector of A_s, i.e., u_w = u_s. The choice of N implies that u_n = u_s.
If S and R have these specially simple forms, then the solution x of (1) can be easily expressed in terms of the deflated solution x_d and the vectors u_y and u_w.

THEOREM 5. Let x_d be the unique solution satisfying (3) and (4); then the following is a solution of (1):

(5)    x = x_d + (v_s^T b/κ) u_y − (v_s^T A x_d/θ) u_w.

Proof. To show that x given by (5) satisfies Ax = b, note that Ax = A x_d + (v_s^T b) y − (v_s^T A x_d) w = SA x_d + (I − R) b = Rb + (I − R) b = b.

We call (5) a deflated decomposition of x. When A is nonsingular, (5) can be interpreted as a decomposition of the unique solution x of (1) into a part spanned by approximate null vectors of A (which could have large magnitude) and a deflated part whose magnitude remains bounded. When A is singular, it appears that (5) is not defined because of divisions by zero, but it can still have the following interpretation. First note that v_s^T A x_d = 0 because A = A_s when A is singular. If b is consistent with A, i.e., v_s^T b = 0, then (5) can be interpreted as x_d plus an arbitrary scalar multiple of u_s and therefore still represents solutions to (1). If v_s^T b ≠ 0, then there is no solution to (1) but (5) can still be interpreted as exhibiting a unique solution of a nearby singular but consistent system (x_d), a null vector of A (u_s = u_w = u_y) and the amount that b is inconsistent with A (v_s^T b). Thus, the concept of deflated solution and deflated decomposition is meaningful for both singular and nonsingular systems.


With the above interpretation, the deflated decomposition is unique in the following sense.

THEOREM 6. If the vector x = z + ρ u_y − (v_s^T A z/θ) u_w with Nz = z satisfies the equation Ax = b, then z = x_d and ρ = v_s^T b/κ.

Proof. From Ax, it can be verified that it is equal to A_s z + ρκ y. But Ax = b = Rb + (v_s^T b) y. Thus A_s z = Rb + (v_s^T b − ρκ) y. If A is nonsingular, then multiplying on the left by v_s^T shows that the coefficient of the last term must be zero and therefore ρ = (v_s^T b)/κ. If A is singular, then v_s^T b = 0 and κ = 0, implying that ρ may have an arbitrary value. In any case, the last term is zero and it follows that A_s z = Rb and since Nz = z, from the uniqueness part of Theorem 3, z = x_d.
For the deflated solution to be useful, the matrices S and R should be chosen so that A_s is "close" to A and Rb is close to b in some norm. In particular, if A is singular then S is chosen so that A_s = A. For computational purposes, the matrix N must also be chosen so that x_d remains bounded independent of the singularity of A. We see from the form of S that A − A_s = w(v_s^T A) and therefore it is natural to use approximate left null vectors of A for v_s. In the following two sections, we shall see how v_s can be defined in terms of the SVD of A and the LU-factorization of A. Even with v_s fixed, there is some leeway in choosing the vectors w, y and p and this gives rise to a variety of deflated solutions and deflated decompositions.
3. Deflation using singular vectors. Let u_sv and v_sv be the normalized right and left singular vectors corresponding to the smallest singular value σ of A. Then we have

(6)    A u_sv = σ v_sv

and

(7)    A^T v_sv = σ u_sv.

Stewart's definition of a deflated solution [22] and our extension to the corresponding deflated decomposition correspond to using v_s = w = y = v_sv and p = u_sv. In other words, one uses S = R = P_{v_sv} and N = P_{u_sv} in (3) and (4).


THEOREM 7. We can write the solution x of (1) as

(8)    x = x_sv + (v_sv^T b/σ) u_sv,

where x_sv is the unique solution of the following system:

(9)    P_{v_sv} A x_sv = P_{v_sv} b

and

(10)    P_{u_sv} x_sv = x_sv.

Moreover, the matrix A_sv defined by P_{v_sv} A is singular, satisfies

(11)    A_sv = P_{v_sv} A P_{u_sv} = A P_{u_sv} = A − σ v_sv u_sv^T,

and has v_sv and u_sv as its left and right null vectors respectively.

Proof. The identities in (11) can be easily verified by using (6) and (7). The existence of a unique deflated solution x_sv follows from Theorem 3 and the form of the deflated decomposition follows from Theorem 5.

From Theorem 7, we see that x_sv is a deflated solution in the sense that it is a solution to the consistent system (9) with a singular matrix A_sv which actually corresponds to setting σ to zero in the SVD of A and is thus the closest singular matrix to A in the Frobenius norm. In fact, choosing N = P_{u_sv}, x_sv is the minimum length least squares solution of the system A_sv x = b.
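As a concrete (and deliberately naive) illustration of this section, the sketch below forms A_sv and x_sv from an explicit SVD and checks the decomposition (8). The test matrix and tolerances are our own assumptions, and an actual implementation would use the implicit algorithm of § 6 rather than an explicit SVD and pseudo-inverse:

```python
import numpy as np

# Hypothetical nearly singular test matrix with known singular values
# (smallest singular value 1e-9, well separated from the rest).
rng = np.random.default_rng(1)
n = 5
Ql, _ = np.linalg.qr(rng.standard_normal((n, n)))
Qr, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Ql @ np.diag([3.0, 2.0, 1.5, 1.0, 1e-9]) @ Qr.T
b = rng.standard_normal(n)

U, S, Vt = np.linalg.svd(A)
sigma, v_sv, u_sv = S[-1], U[:, -1], Vt[-1]     # smallest singular triplet, cf. (6)-(7)

A_sv = A - sigma * np.outer(v_sv, u_sv)         # closest singular matrix, cf. (11)
x_sv = np.linalg.pinv(A_sv, rcond=1e-6) @ b     # minimum length least squares solution
x = x_sv + (v_sv @ b / sigma) * u_sv            # deflated decomposition (8)
print(np.linalg.norm(x_sv))                     # bounded, even though ||x|| is huge
```

The deflated part x_sv stays orthogonal to u_sv and of moderate norm, while the full solution x recovers Ax = b.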


4. Deflation using LU-factorizations. We assume that we can compute a LU-factorization of A (of size n by n) with a small pivot in the kth position of the form

(12)    Q_R A Q_C = LU = [ L_1    0    0   ] [ U_1   u_1   R_u   ]
                         [ v_1^T  1    0   ] [ 0     ε     u_2^T ]
                         [ L_R    v_2  L_2 ] [ 0     0     U_2   ],

where Q_R and Q_C are row and column permutation matrices, L is unit lower triangular, U is upper triangular and U_{k,k} = ε is a small pivot. We shall assume, without loss of generality, that the row and column permutations have been pre-applied to A and work with Q_R A Q_C instead of A. Because the dimension of the null space of A is assumed to be at most one, this implies that the matrices L, U_1 and U_2 are well-conditioned, i.e., ||L^{-1}||, ||U_1^{-1}|| and ||U_2^{-1}|| are o(ε^{-1}). In other words, we are assuming that the singularity of A reveals itself solely in the smallness of ε. For the deflated solutions to be useful, ε must be of about the same order of magnitude as the smallest singular value σ of A. We note, however, that although this is often assumed by many people, it is well known that it does not always hold with the usual pivoting strategies employed in Gaussian elimination, as the often quoted example of the matrix {a_{ij} = −1 if j > i, = 1 if i = j, = 0 if i > j} shows [12], [23]. The reader is referred to [4] for a two pass algorithm that is guaranteed to produce such a small pivot for a general matrix.
Based on the LU-factorization, it is easy to define an approximate left null vector for A.

DEFINITION 8. Define v_lu with ||v_lu|| = 1 and α ≥ 0 to be the unique vector and scalar satisfying

(13)    A^T v_lu = α e_k,

where e_k denotes the unit vector with a 1 in the kth position.

Note that if A is nonsingular, v_lu and α can be computed by

(14)    v_lu = A^{-T} e_k/||A^{-T} e_k||

and

(15)    α = 1/||A^{-T} e_k||.

The choice of e_k in (13) ensures that α = O(ε), making v_lu an approximate left null vector of A.

LEMMA 9. α = O(ε).

Proof. It can be shown by directly computing v_lu from (12) that α = ε ||L^{-T}(0, · · · , 0, 1, −(U_2^{-T} u_2)^T)^T||^{-1}, where the 1 occurs in the kth position. Since ||L^{-1}|| and ||U_2^{-1}|| are assumed to be o(ε^{-1}), we have α = O(ε).

If A is singular, then v_lu is the normalized left null vector of A and α = 0.
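For A nonsingular, (14)-(15) are easy to check directly. In the numpy sketch below (our own example, with 0-based indices) we assemble A from a unit lower triangular L and an upper triangular U with a small pivot ε planted at the kth position, so that one solve with A^T yields v_lu and α:

```python
import numpy as np

# Plant a small pivot eps at position k of U, so A = L U has the form assumed in (12)
# (a synthetic example; no pivoting is needed here).
n, k, eps = 5, 2, 1e-8
rng = np.random.default_rng(2)
L = np.tril(rng.standard_normal((n, n)), -1) + np.eye(n)     # unit lower triangular
U = np.triu(rng.standard_normal((n, n)), 1) + np.diag([2.0, 1.5, eps, 1.0, 3.0])
A = L @ U

# (14)-(15): v_lu = A^{-T} e_k / ||A^{-T} e_k||, alpha = 1 / ||A^{-T} e_k||.
z = np.linalg.solve(A.T, np.eye(n)[k])     # one solve with A^T (one back-substitution
alpha = 1.0 / np.linalg.norm(z)            # pair when the LU factors are reused)
v_lu = alpha * z
print(alpha)                               # O(eps), cf. Lemma 9
```

With the LU factors already available, this costs only the two triangular back-substitutions hidden in the solve.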


In analogy to the deflated solutions based on the SVD of A, one can use v_lu to define A_s by choosing v_s = w = v_lu, or equivalently choosing S = P_{v_lu}. Note that this choice of w satisfies v_s^T w = 1. However, this is not the only way to define A_s. Instead of the orthogonal projector P_{v_lu}, one can use an oblique projector.

DEFINITION 10. For any vector u with (u)_j ≠ 0, where 1 ≤ j ≤ n, define E_u^j ≡ I − u e_j^T/(u)_j.

The matrix E_u^j is singular and has simple left and right null vectors.

LEMMA 11. E_u^j u = 0, (E_u^j)^T e_j = 0, (E_u^j)^2 = E_u^j, ((E_u^j)^T)^2 = (E_u^j)^T.

Note that the projection operator P_u has almost the same properties, except that P_u^T u = 0.


Since v_lu has norm one, one can always find an index j such that (v_lu)_j^{-1} = O(1), independent of ε. Now one can define S by choosing v_s = v_lu and w = e_j/(v_lu)_j, or equivalently choosing S = (E_{v_lu}^j)^T. Note that this choice of w also satisfies v_s^T w = 1. We now have two possible A_s's.

DEFINITION 12. A_e ≡ (E_{v_lu}^j)^T A, A_p ≡ P_{v_lu} A.
Next we define the two corresponding vectors u_w's.

DEFINITION 13. Define u_p and u_e with ||u_p|| = 1 and ||u_e|| = 1 and γ ≥ 0 and β ≥ 0 to be the unique vectors and scalars satisfying

(16)    A u_p = γ v_lu,

(17)    A u_e = β e_j.

The following lemma shows that v_lu, u_e and u_p are related and that β and γ are O(ε).

LEMMA 14. γ = α (u_p)_k = O(ε), β = α (u_e)_k/(v_lu)_j = O(ε).

Proof. The lemma follows easily by left multiplying (16) and (17) by v_lu^T and the fact that α = O(ε) and (v_lu)_j^{-1} = O(1).
It is straightforward to prove the following identities for A_e and A_p.

LEMMA 15.

(18)    A_e ≡ (E_{v_lu}^j)^T A = (E_{v_lu}^j)^T A E_{u_e}^k = A E_{u_e}^k = (E_{v_lu}^j)^T A P_{u_e} = A − (α/(v_lu)_j) e_j e_k^T

is singular and has v_lu and u_e as its left and right null vectors.

(19)    A_p ≡ P_{v_lu} A = P_{v_lu} A E_{u_p}^k = A E_{u_p}^k = P_{v_lu} A P_{u_p} = A − α v_lu e_k^T

is singular and has v_lu and u_p as its left and right null vectors.

Thus we see from (18) that A_e is obtained from A by perturbing it at the (j, k)th position by a quantity that is O(ε). We also see from (19) that A_p is obtained from A by perturbing the kth column of A by quantities that are O(ε).
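The identities (18) and (19) can be verified numerically. The following sketch (0-based indices; the synthetic test matrix is an assumption of ours, built as in the earlier fragment) forms A_e and A_p as rank-one corrections of A and checks that v_lu is a left null vector of both, with u_e and u_p the respective right null vectors:

```python
import numpy as np

n, k, eps = 5, 2, 1e-8
rng = np.random.default_rng(3)
L = np.tril(rng.standard_normal((n, n)), -1) + np.eye(n)
Umat = np.triu(rng.standard_normal((n, n)), 1) + np.diag([2.0, 1.5, eps, 1.0, 3.0])
A = L @ Umat
e = np.eye(n)

z = np.linalg.solve(A.T, e[k]); alpha = 1 / np.linalg.norm(z); v_lu = alpha * z
j = int(np.argmax(np.abs(v_lu)))                  # index with (v_lu)_j = O(1)
w = np.linalg.solve(A, e[j]);  u_e = w / np.linalg.norm(w)   # A u_e = beta e_j, (17)
y = np.linalg.solve(A, v_lu);  u_p = y / np.linalg.norm(y)   # A u_p = gamma v_lu, (16)

A_e = A - (alpha / v_lu[j]) * np.outer(e[j], e[k])   # (18): one entry changed by O(eps)
A_p = A - alpha * np.outer(v_lu, e[k])               # (19): kth column changed by O(eps)
print(np.linalg.norm(v_lu @ A_e), np.linalg.norm(A_e @ u_e))   # both ~ 0
print(np.linalg.norm(v_lu @ A_p), np.linalg.norm(A_p @ u_p))   # both ~ 0
```

The cancellation in A_e u_e and A_p u_p relies on the identities of Lemma 14, so the check also exercises that lemma.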
Note that the operators E_{u_e}^k and E_{u_p}^k are well defined because of the following lemma.

LEMMA 16. (u_e)_k ≠ 0, (u_p)_k ≠ 0.

Proof. When A is nonsingular, this follows from Lemma 14. When A is singular, then u_e = u_p = u_s and direct computation shows that the unnormalized u_s is equal to (−(U_1^{-1} u_1)^T, 1, 0, · · · , 0)^T, where the 1 is in the kth position. Therefore, (u_s)_k ≠ 0. In fact, since ||U_1^{-1}|| is o(ε^{-1}), (u_s)_k^{-1} = O(1).

Next, we have to choose R so that the existence condition v_s^T R = 0^T of Theorem 1 is satisfied. It can easily be seen that the choice of R = P_{v_lu} or (E_{v_lu}^j)^T will do. This corresponds to choosing y = v_lu or e_j/(v_lu)_j and the corresponding u_y's have already been defined in (16) and (17).

Lastly, we have to choose N so that the deflated solution is unique. It turns out that either P_{u_s} or E_{u_s}^k will do. The choice of N = P_{u_s} corresponds to p = u_s and the choice of N = E_{u_s}^k corresponds to p = e_k/(u_s)_k. For either choice, u_s^T p = 1 and uniqueness follows from Theorem 3.

The choice of N = P_{u_s} corresponds to defining x_d to be the unique minimum length solution to A_s x = Rb. The choice of N = E_{u_s}^k makes x_d the unique solution to A_s x = Rb with x_k = 0. Choosing R = P_{v_lu} and N = P_{u_s} makes x_d the unique minimum length least squares solution to the system A_s x = b.

To summarize, we have two possibilities for S (namely (E_{v_lu}^j)^T and P_{v_lu}), two for R (namely (E_{v_lu}^j)^T and P_{v_lu}), and two for N (namely E_{u_s}^k and P_{u_s}), giving rise to eight deflated solutions and their corresponding deflated decompositions. We shall adopt the following notation.
DEFINITION 17. We shall denote the LU-deflated solutions to (3) and (4) by x_SRN, where S, R and N can be either e or p depending on whether the E or the P operator is used. For example, x_epe corresponds to using S = (E_{v_lu}^j)^T, R = P_{v_lu} and N = E_{u_e}^k.

It follows from Theorem 3 that x_SRN is well defined.

THEOREM 18. The corresponding systems (3) and (4) defining the deflated solutions x_SRN have unique solutions.

The corresponding deflated decompositions are given in Table 4.1 and follow directly from Theorem 5.
TABLE 4.1
Deflated decompositions of the solution of Ax = b.

x = x_eee + (v_lu^T b/((v_lu)_j β)) u_e
x = x_eep + [(v_lu^T b − α(x_eep)_k)/((v_lu)_j β)] u_e
x = x_pee + (v_lu^T b/((v_lu)_j β)) u_e
x = x_pep + (v_lu^T b/((v_lu)_j β)) u_e − (α(x_pep)_k/γ) u_p
x = x_epe + (v_lu^T b/γ) u_p
x = x_epp + (v_lu^T b/γ) u_p − (α(x_epp)_k/((v_lu)_j β)) u_e
x = x_ppe + (v_lu^T b/γ) u_p
x = x_ppp + [(v_lu^T b − α(x_ppp)_k)/γ] u_p

x_SRN is the unique solution of SAx = Rb, Nx = x. A^T v_lu = α e_k, A u_e = β e_j, A u_p = γ v_lu.

Some of the deflated solutions based on the LU-factorization have been defined by Keller [14], [15], although he did not use the implicit formulation that we use here. Specifically, he considered the case where the small pivot occurs at the (n, n)th position of U (i.e., k = n). In that case, the unnormalized v_lu can be shown to be (L_1^{-T} v_1, −1)^T. If we choose j to be n also, then the unnormalized u_e is (U_1^{-1} u_1, −1)^T. Using these null vectors, he explicitly derived the x_eee-deflated decomposition in [15]. In [14], he considered two deflated solutions, which in our framework correspond to x_epe and x_epp.
5. Relations among deflated solutions. Each of the deflated solutions we have
defined so far is the unique solution to a singular but consistent system derived from
(1). Since this singular system is supposed to be close to (1), it is not surprising that
some of the deflated solutions are close approximations to one another. In this section,
we prove some results that relate the LU-based deflated solutions among themselves
and with the deflated solution based on singular vectors (i.e., x_sv).
First we shall show that there are certain simple relationships among the LU-based
deflated solutions.
THEOREM 19.

(a) x_eee = x_pee,              (f) x_epe = x_ppe,
(b) P_{u_e} x_eee = x_eep,      (g) P_{u_e} x_epe = x_epp,
(c) E_{u_e}^k x_eep = x_eee,    (h) E_{u_e}^k x_epp = x_epe,
(d) P_{u_p} x_pee = x_pep,      (i) P_{u_p} x_ppe = x_ppp,
(e) E_{u_p}^k x_pep = x_pee,    (j) E_{u_p}^k x_ppp = x_ppe.

Proof. These equalities can most easily be proven by showing, using (18) and (19), that the difference between the left- and right-hand sides, say d, satisfies the system (3) and (4) with homogeneous right-hand sides. Uniqueness then implies d = 0. For example, let d = E_{u_e}^k x_eep − x_eee. Then it can be shown that d satisfies A_e d = 0 and E_{u_e}^k d = d. The uniqueness result of Theorem 1 implies that d = 0, which proves (c).

We thus see that there are two families of deflated solutions, namely (a)-(e) and (f)-(j), depending on the choice of the R-operator.

Next we will prove that, if ε and σ are both small, then v_lu is a good approximation to v_sv, u_e is a good approximation to u_sv and that u_p is an even better approximation to u_sv. We shall use the notation O(δ_1, δ_2) to mean some quantity (scalar, vector or matrix) the norm of which is O(max {δ_1, δ_2}).

LEMMA 20. (a) ||v_lu − v_sv|| = O(ε, σ²),
(b) ||u_e − u_sv|| = O(ε, σ²),
(c) ||u_p − u_sv|| = O(ε², σ²).

Proof. We shall prove (c) first. From the definition of u_p, we have

A u_p − γ v_lu = 0.

Now if we multiply both sides of the above equation by A^T, we get

(A^T A − γ A^T v_lu u_p^T) u_p ≡ M_1 u_p = 0.

Similarly, for the singular vectors u_sv and v_sv, we have

(A^T A − σ A^T v_sv u_sv^T) u_sv ≡ M_2 u_sv = 0.

Thus both u_p and u_sv are null vectors of matrices that are perturbations of the symmetric matrix A^T A. We also have

M_1 = A^T A − γα e_k u_p^T = M_2 + σ² u_sv u_sv^T − γα e_k u_p^T = M_2 + O(σ², ε²)   (from Lemma 9).

Therefore, M_1 is a O(σ², ε²) perturbation of M_2. From a standard perturbation result for simple eigenvalues ([23, p. 67]), (c) follows immediately.

The proof for (b) is almost identical, except that now

M_1 = M_2 + σ² u_sv u_sv^T − β A^T e_j u_e^T = M_2 + O(σ², ε),

because although β is O(ε), A^T e_j is O(1) in general. The proof for (a) is similar.
Next we use Lemma 20 to prove the main result of this section.
THEOREM 21. If A is nonsingular, then

(a) ||x_eep − x_sv|| = O(σ², ε)(|v_sv^T b/σ| + c_1 ||x_eep||),

(b) ||x_ppp − x_sv|| = O(σ², ε²)(|v_sv^T b/σ| + c_2 ||x_ppp||),

where c_1 and c_2 are positive constants independent of ε and σ. If A is singular, then x_ppp = x_sv and x_eep − x_sv = (v_sv^T b) A† q, where A† is the pseudo-inverse of A and q = v_sv − e_j/(v_sv)_j.

Proof. We shall start with the nonsingular case and prove (a) first. The strategy is to take the x_sv-decomposition (8) and orthogonalize the two parts with respect to u_e. Since the x_eep-decomposition is a unique decomposition of x, by Theorem 6 the parts that are orthogonal to u_e must be equal to x_eep. Thus, we have

x_sv = [x_sv − (u_e^T x_sv) u_e] + (u_e^T x_sv) u_e,
u_sv = [u_sv − (u_e^T u_sv) u_e] + (u_e^T u_sv) u_e.

It follows that, with ν = v_sv^T b/σ,

x_eep = [x_sv − (u_e^T x_sv) u_e] + ν [u_sv − (u_e^T u_sv) u_e].

From Lemma 20 (b), we can easily deduce that

(20)    u_e^T x_sv = (u_sv + O(ε, σ²))^T x_sv = O(ε, σ²) ||x_sv||,

and

(21)    u_sv − (u_e^T u_sv) u_e = u_e + O(ε, σ²) − (1 − O(ε, σ²)) u_e = O(ε, σ²),

from which (a) follows immediately. The proof for (b) is analogous, except that because of Lemma 20 (c), we have O(ε², σ²) in (20) and (21). When A is singular, v_lu = v_sv and therefore x_sv and x_ppp are defined by the same S, R and N. The uniqueness result of Theorem 3 implies that they must be identical. Let d = x_eep − x_sv. Then it can be shown that d satisfies Ad = (v_sv^T b) q and P_{u_sv} d = d. It follows that d is the minimum length least squares solution and therefore d = (v_sv^T b) A† q.

The above theorem implies, if ε = O(σ), that x_ppp will approach x_sv as σ goes to zero but that x_eep will generally be different from x_sv (unless v_sv^T b = 0) in the same limit.


6. Algorithms. In this section, we propose implicit algorithms for computing the deflated solutions and deflated decompositions defined in § 4 and analyse their convergence and stability properties. It should be apparent that the primary task is to compute the deflated solutions because the deflated decompositions can then be computed without too much difficulty. Since the implicit algorithms use as a basic tool the ability to solve systems with A, we shall assume in this section that A is nonsingular relative to the precision of the computer. In practice this is almost always true. We shall limit our discussions to direct methods based on Gaussian elimination.

First of all, we have to compute the approximate left and right null vectors of A_s. For v_lu, u_e and u_p, they can each be computed by one back-substitution using the LU factors of A by formulas similar to (14) and (15). For v_sv and u_sv, we can use an inverse iteration similar to one proposed by Stewart [22]:

Inverse iteration for v_sv, u_sv and σ. Starting with an initial guess v_0, iterate until convergence:
1. Solve A ũ_{i+1} = v_i.
2. u_{i+1} = ũ_{i+1}/||ũ_{i+1}||.
3. Solve A^T ṽ_{i+1} = u_{i+1}.
4. v_{i+1} = ṽ_{i+1}/||ṽ_{i+1}||.

Denote the converged v_i by v_sv, and compute u_sv and σ by:

(22)    A ũ = v_sv,    u_sv = ũ/||ũ||,    σ = 1/||ũ||.
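A compact realization of this inverse iteration is sketched below (our own code, using dense solves; a real implementation would reuse the LU factors of A for each back-substitution, and the fixed iteration count and test matrix are illustrative assumptions):

```python
import numpy as np

def smallest_singular_triplet(A, iters=25):
    """Inverse iteration steps 1-4, then (22); returns (sigma, u_sv, v_sv)."""
    n = A.shape[0]
    v = np.ones(n) / np.sqrt(n)                 # initial guess v_0
    for _ in range(iters):
        u = np.linalg.solve(A, v)               # step 1: A u~ = v_i
        u /= np.linalg.norm(u)                  # step 2: normalize
        v = np.linalg.solve(A.T, u)             # step 3: A^T v~ = u_{i+1}
        v /= np.linalg.norm(v)                  # step 4: normalize
    t = np.linalg.solve(A, v)                   # (22): A u~ = v_sv
    return 1.0 / np.linalg.norm(t), t / np.linalg.norm(t), v

# Hypothetical test matrix with smallest singular value 1e-7:
rng = np.random.default_rng(4)
Ql, _ = np.linalg.qr(rng.standard_normal((5, 5)))
Qr, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = Ql @ np.diag([2.0, 1.7, 1.3, 1.0, 1e-7]) @ Qr.T
sigma, u_sv, v_sv = smallest_singular_triplet(A)
print(sigma)        # close to 1e-7
```

Note that, by construction, the returned vectors satisfy A u_sv = σ v_sv exactly (up to round-off), which is the relation Theorem 23 below requires for the noniterative algorithm to apply.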


After the null vectors of A_s have been computed, one then has to find algorithms for solving the system (3) and (4) for the deflated solutions. We propose the following algorithm for computing the unique solution x_SRN of the system (3) and (4), which is based on a similar algorithm first proposed by Stewart [22].

ALGORITHM IIA (iterative improvement algorithm).
Start with any x such that Nx = x.
Loop until convergence:
(1) r = Rb − SAx.
(2) Solve Ad = r.
(3) x ← x + Nd.


The following theorem states the conditions under which Algorithm IIA will converge to the desired solution x_SRN.

THEOREM 22. Assume that the assumptions of Theorem 1 are satisfied so that the system defining x_SRN has a unique solution. Further assume that N has a one-dimensional null space spanned by u_n and that N² = N. Define K ≡ I − A^{-1}A_s, M ≡ NKN and μ ≡ v_s^T A u_n. Then the following statements are true for Algorithm IIA:
(a) All iterates x satisfy Nx = x.
(b) If x converges, then it will converge to x_SRN if μ ≠ 0.
(c) The iterate x converges if ||M|| < 1.
(d) If M = 0, then we can obtain x directly from x = NA^{-1}Rb.

Proof. Since we always start with an x such that Nx = x, Step (3) of Algorithm IIA guarantees that all iterates satisfy the same constraint since N² = N. The iteration can be written as the following linear stationary iteration:

(23)    x ← (I − NA^{-1}A_s)x + NA^{-1}Rb.

If the iteration converges to x, then we have x = (I − NA^{-1}A_s)x + NA^{-1}Rb, from which it follows that NA^{-1}r(x) = 0, where r(x) ≡ Rb − A_s x. This implies that r(x) = ν A u_n, where ν is an arbitrary scalar. Left multiplying by v_s^T, we see that if μ ≠ 0, then ν = 0 and therefore r(x) = 0, which together with Nx = x proves that x satisfies (3) and (4). Uniqueness then implies x = x_SRN. To analyse the convergence, note that since Nx = x and N² = N, we can rewrite the iteration as

(24)    x ← N(I − A^{-1}A_s)Nx + NA^{-1}Rb = Mx + NA^{-1}Rb,

and thus the iteration will converge if ||M|| < 1. If M = 0, then we have convergence after one iteration independent of the starting vector, and the converged solution is NA^{-1}Rb, which must be equal to x_SRN because of uniqueness. □
O
We can now apply the above theorem to the application of Algorithm IIA for computing x_sv and the x_SRN's. The matrix K only depends on S, whereas M depends on both S and N. Since we always choose N to be either E_{u_s}^k or P_{u_s}, μ only depends on the choice of S. In Table 6.1, we give the expressions for K, M and μ for the different possible choices of S and N.

From Table 6.1, we see that all the M's are exactly equal to zero and all the μ's are nonzero. Thus, for all the deflated solutions that we have considered so far, Algorithm IIA will converge in one step, and consequently we can use the noniterative version as outlined in Theorem 22(d).
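For the SVD-based solution, the noniterative version of Theorem 22(d) amounts to one projection of b, one solve with A, and one projection of the result. A sketch (our own, with exact singular vectors supplied by construction; in practice they would come from the inverse iteration of this section):

```python
import numpy as np

# x_sv = N A^{-1} R b with S = R = P_{v_sv}, N = P_{u_sv} (Theorem 22(d)).
rng = np.random.default_rng(5)
n = 5
Ql, _ = np.linalg.qr(rng.standard_normal((n, n)))
Qr, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Ql @ np.diag([2.0, 1.7, 1.3, 1.0, 1e-9]) @ Qr.T
v_sv, u_sv = Ql[:, -1], Qr[:, -1]      # exact singular vectors, by construction
b = rng.standard_normal(n)

r = b - (v_sv @ b) * v_sv              # step (1): r = Rb - SAx with x = 0
d = np.linalg.solve(A, r)              # step (2): consistent right-hand side
x_sv = d - (u_sv @ d) * u_sv           # step (3): N removes the u_sv component
print(np.linalg.norm(x_sv))            # bounded despite the near singularity
```

Step (3) is what makes this stable in finite precision: the solve in step (2) can leave an error of size roughly ||b|| times machine epsilon divided by σ along u_sv, and the final projection removes exactly that component, as argued below.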
The above conclusion assumes that we have the exact null vectors v_s and u_s of A_s available. Although the LU-based null vectors v_lu, u_e and u_p can most likely be computed with small relative errors, the accuracy of v_sv and u_sv depends on the
TABLE 6.1
Table of K, M and μ.

S                 K                      M (N = E_{u_s}^k)    M (N = P_{u_s})    μ
P_{v_sv}          u_sv u_sv^T            N.A.                 0                  σ
(E_{v_lu}^j)^T    u_e e_k^T/(u_e)_k      0                    0                  α(u_e)_k
P_{v_lu}          u_p e_k^T/(u_p)_k      0                    0                  α(u_p)_k


convergenceof the inverse iteration. If the smallest singularvalue of A is not well


isolated, then the inverseiterationmay have convergencedifficulty.This may occur if
one uses the deflation algorithmwhen A is not nearly singular.If v,, and u,v are
completely unrelatedand do not satisfy (6) and (7), then M $0 and AlgorithmIIA
will in generaltake more thanone iterationto converge.However,in the implementation of the inverseiterationin (22), v,v and us,vdo satisfy (6). It turnsout that this is
enough for M to be equal to zero although the deflated decorppositionhas to be
modifiedbecause the last term in (5) is no longer equal to zero.
THEOREM
23. If the approximatesingularvectorsu,v and vv used in Algorithm
IIA for xv satisfyAuSV= uvsv (but not necessarilyATvSV = 0uuS),thenM = 0, and the
corresponding
deflateddecompositionis given by
(25)

x = xd+(vI(b-Axd)/Ia)usv.

Proof. The proof is straightforwardand follows from Theorem 5.


Seen in this light, the LU-based xppp deflated decomposition can also be considered
as a member of the SVD-based deflated decompositions, where v is chosen specifically
to be vlu, or equivalently, by using one step of inverse iteration with v0 = ek.
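The decomposition (25) can be checked numerically. The sketch below (a modern Python/NumPy illustration; the matrix construction, the seed, and all variable names are our own test data, not the paper's) builds a nearly singular A from a known SVD, forms a deflated solution xd directly from the SVD factors, and verifies that the correction term in (25) recovers the ordinary solution when A is in fact (barely) nonsingular.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Nearly singular test matrix A = P diag(s) Q^T with a well-isolated
# smallest singular value mu = 1e-6 (hypothetical test data).
P, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.array([2.0, 1.5, 1.2, 1.0, 0.8, 0.5, 0.3, 1e-6])
A = P @ np.diag(s) @ Q.T

u, v, mu = Q[:, -1], P[:, -1], s[-1]     # A u = mu v holds exactly here
b = rng.standard_normal(n)

# Deflated solution xd: the SVD expansion of A^{-1} b with the
# near-null term dropped.
xd = sum(Q[:, i] * (P[:, i] @ b) / s[i] for i in range(n - 1))

# Deflated decomposition (25): x = xd + (v^T (b - A xd) / mu) u.
x = xd + (v @ (b - A @ xd) / mu) * u

# Since A is (barely) nonsingular, x is the ordinary solution of Ax = b.
print(np.allclose(x, np.linalg.solve(A, b)))   # True
```

Note that the large component of x along u is carried entirely by the explicit correction term, which is why xd itself stays small and well determined.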
In choosing N, the only necessary condition is to satisfy the uniqueness condition
of Theorem 3. We have chosen N with a null vector un equal to us. Although this
choice is not necessary, we shall argue that it leads to a stable Algorithm IIA in finite
precision arithmetic. It is well known that when one performs a back-substitution with
an ill-conditioned A, in general the solution will have large errors, and the residual
will be large. However, the standard round-off error analysis also shows that if the
computed solution is not large, then the residual must be small [6, p. 181], even if A
is ill-conditioned. This is exactly what happens here. Step (1) of Algorithm IIA changes
the right-hand side so that the solution obtained in Step (2) will be small. However,
if A is nearly singular, a small residual still allows for a possibly large error in the
solution in the direction of the null vector. Therefore it is a good idea to choose N so
that its null vector un is the same as the null vector us of As, so that Step (3) annihilates
this error. This makes the noniterative algorithm stable to round-off errors. These
nice properties will not hold if we use an arbitrary N with un ≠ us.
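The three steps just described can be sketched in a few lines. Purely for illustration, S and N are taken here to be the orthogonal projectors I - vv^T and I - uu^T built from the exact singular vectors (our own simplification, not one of the paper's specific choices of S and N), and the test matrix is again synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
P, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.array([2.0, 1.5, 1.2, 1.0, 0.8, 0.5, 0.3, 1e-6])
A = P @ np.diag(s) @ Q.T                 # nearly singular test matrix
u, v = Q[:, -1], P[:, -1]                # right / left near-null vectors
b = rng.standard_normal(n)

# Step (1): deflate the right-hand side so the solve in Step (2) is small.
bs = b - v * (v @ b)                     # S b with S = I - v v^T
# Step (2): ordinary (ill-conditioned) solve; a small solution implies
# a small residual, per the round-off analysis cited above.
y = np.linalg.solve(A, bs)
# Step (3): annihilate any error component along the near-null direction.
xd = y - u * (u @ y)                     # N y with N = I - u u^T

# Compare against the deflated solution formed from the exact SVD factors.
xd_ref = sum(Q[:, i] * (P[:, i] @ b) / s[i] for i in range(n - 1))
print(np.allclose(xd, xd_ref))           # True
```

Dropping Step (3) leaves an O(eps/sigma_min) error along u in y, which is exactly the instability the projection removes.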
Because of the equivalence relationships in Theorem 19, we can limit our attention
to computing only two of the LU-based deflated solutions, corresponding to the two
possibilities for R. The other deflated solutions can be obtained from these two by
simple transformations. We recommend computing either xeee or xeep, and either xppe
or xppp, because first, the corresponding deflated decompositions require computing
only two approximate null vectors instead of the three needed for the others, and
second, Step (1) of Algorithm IIA can be simplified to S(b - Ax) since S = R.
We now say a few words about the efficiency of the algorithms. We first note that
the P-operators take one inner product to apply whereas the E-operators take none.
The noniterative version of Algorithm IIA costs one LU-factorization of A, one
back-substitution and the cost of obtaining the null vectors. It also requires storage
for the (2 or 3) approximate null vectors and, in the case of (25), storing a copy of
A. The iterative version also requires storing a copy of A. Since the factorization
usually requires much more time (storage) than the back-substitutions (solution), the
work (storage) involved is usually not much more than the normal factor-solve process
for (1). If there is more than one right-hand side, the cost of the extra back-solves
for the null vectors can also be amortized over the total computing time. For the
LU-based deflations, this extra cost is always two back-solves, regardless of whether
A is nearly singular or not. On the other hand, the cost and convergence of the inverse
iteration for computing the singular vectors of A are much more sensitive to the
singularity of A (two back-solves per iteration). Moreover, if the singular vectors are
not accurate, then the computed xsv will not be the minimum length least squares
solution to Ax = b (although it will still be a deflated solution in our interpretation),
and thus, in view of the results of Theorem 21, it is no more special than the LU-deflated
solutions. Furthermore, an extra copy of A must be stored to obtain the deflated
decomposition. Therefore, in applications where A may occasionally not be nearly
singular or when it is not known a priori whether A is nearly singular or not, we argue
that the LU-deflations are to be preferred because they are noniterative. Such situations
arise, for example, in applying continuation methods to solving nonlinear equations
with Jacobian matrices that may become singular [3], [13], [16]. If it is known that A
is very nearly singular, then the SVD-deflations are probably to be preferred because
they do not depend on the assumption that the LU-factorization of A has a small
pivot and because the inverse iteration will converge very quickly in that case. Both
of these implicit deflation techniques are to be preferred to the explicit deflation
techniques if the data structures for storing the LU-factorization of A are complicated,
for example, in band solvers and sparse solvers. An extra advantage of an implicit
algorithm is its modularity: it is independent of how the factorization and solve are
done and requires minimum modifications to the conventional factor-solve procedure.
The most effective approach may be in the form of a hybrid algorithm which uses an
LU-based algorithm as a default and switches to an SVD-based algorithm when the
condition

ε = O(σ)

is not satisfied.

7. Numerical results. We present some numerical results which verify the accuracy
and stability of the various deflation algorithms developed in the earlier sections. We
consider two classes of matrices:

A1: (I - 2uu^T) Diag(σ, n-1, n-2, ..., 1) (I - 2vv^T), where u and v are chosen
randomly and scaled to have norm 1, and σ varies from 1 to 10^-8.

A2: T - λmin(T)I - σI, where T = Tridiagonal(1, -2, 1) and σ again varies from
1 to 10^-8.

Note that the smallest singular value of A1 and A2 is equal to σ. For A1, the
smallest singular value has multiplicity 2 when σ = 1. The dimension n of A is chosen
to be 20.
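The two matrix classes are easy to reproduce. The sketch below (Python/NumPy; the helper names and the random seed are ours) builds A1 and A2 from their definitions and confirms that the smallest singular value equals σ.

```python
import numpy as np

def reflector(w):
    """I - 2 w w^T for a unit vector w (helper name is ours)."""
    return np.eye(len(w)) - 2.0 * np.outer(w, w)

def make_A1(n, sigma, rng):
    # (I - 2uu^T) Diag(sigma, n-1, ..., 1) (I - 2vv^T), u, v random unit.
    u = rng.standard_normal(n); u /= np.linalg.norm(u)
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    d = np.concatenate(([sigma], np.arange(n - 1, 0, -1.0)))
    return reflector(u) @ np.diag(d) @ reflector(v)

def make_A2(n, sigma):
    # T - lambda_min(T) I - sigma I with T = Tridiagonal(1, -2, 1).
    T = (np.diag(np.full(n, -2.0))
         + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    lam_min = np.linalg.eigvalsh(T)[0]          # most negative eigenvalue
    return T - lam_min * np.eye(n) - sigma * np.eye(n)

rng = np.random.default_rng(2)
n, sigma = 20, 1e-4
for A in (make_A1(n, sigma, rng), make_A2(n, sigma)):
    smin = np.linalg.svd(A, compute_uv=False)[-1]
    print(abs(smin - sigma) < 1e-12)            # True for both
```

For A1 the reflectors are orthogonal, so the singular values are exactly the diagonal entries; for A2 the shift makes -σ the eigenvalue of smallest magnitude as long as σ is below half the gap to the next eigenvalue.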
We will only be concerned with computing xsv, xeee, xeep, xppe and xppp. Solutions
are chosen to have the form x = z + ρus, where z is randomly chosen and satisfies Nz = z
and us is the corresponding null vector of As. The right-hand side b is then obtained
by forming b = Az + ρAus, where the last term is formed by using the definitions of
us in (6), (16) and (17) so as to minimize round-off errors. By Theorem 6, z is the
unique deflated solution of Ax = b. The constant ρ is used to control the value of v^T b.
For comparison, we will consider the following deflated solution xge = Pu A^-1 b,
which is equal to xsv in exact arithmetic but computed without using deflation. All
LU-factorizations are performed by the routine SGECO of LINPACK [7], which uses
the partial pivoting strategy. The computations were performed on a DEC-20 with
27-bit mantissas, corresponding to a machine precision of about 0.4 x 10^-8.
The first set of tests is to see how ε varies with σ and to verify the accuracy of
the computed σ. In the inverse iteration for determining the singular vectors we always
take 5 iterations. When A is highly singular, one or two iterations are enough for full
accuracy. The computed ε, its position k, and the computed σ are given in Table 7.1
for A1 and A2. We see that, at least for these two classes of matrices, ε is indeed
generally O(σ). However, the smallest pivot does not always appear at the (n, n)th

TABLE 7.1
ε as a function of σ = 10^-l.

Matrix A1
  l        ε                k    computed σ
  0   0.1504303E+01         1    0.1000000E+01**
  1   0.1130482E+00         1    0.1000000E+00
  2   0.8422814E-01         1    0.1000000E-01
  3  -0.7856242E-01        19    0.9999483E-03
  4  -0.8268356E-03        20    0.9996640E-04
  5   0.9228393E-02        20    0.9993521E-05
  6  -0.1059374E-04        20    0.1034953E-05
  7  -0.7590279E-06        20    0.7292485E-07
  8  -0.2421439E-06        20    0.2352214E-07

Matrix A2
  l        ε                k    computed σ
  0   0.2773407E+00        20    0.2233835E-01*
  1   0.9002221E+00        20    0.7668735E-01*
  2   0.5978710E+00        20    0.1000000E-01
  3   0.6968676E-01        20    0.9999985E-03
  4   0.7055007E-02        20    0.9999866E-04
  5   0.7061549E-03        20    0.9996851E-05
  6   0.7033348E-04        20    0.9955706E-06
  7   0.6996095E-05        20    0.9902853E-07
  8   0.6780028E-06        20    0.9597004E-08

* Inverse iteration had not converged after 5 iterations.
** Singular vectors had not converged after 5 iterations.

position, especially when ε is not small. In fact, for the case σ = 10^-3 for A1, the nth
pivot is not small at all. Note also that, when σ is well isolated, the computed σ's are
rather accurate and have low absolute errors. However, when the smallest singular
value is not well isolated, the inverse iteration is not successful at all. This is especially
true for A2 because its lowest eigenvalues are rather close to each other.
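The inverse iteration used here, with its two back-solves per step against a single LU factorization, can be sketched as follows (a generic smallest-singular-triplet iteration in Python/SciPy; the paper's actual iteration (22) may differ in detail, and the test problem and all names are ours).

```python
import numpy as np
import scipy.linalg as sla

def smallest_singular_triplet(A, k=None, iters=5):
    """Inverse iteration for the smallest singular triplet of A:
    one LU factorization, two back-solves per iteration."""
    n = A.shape[0]
    lu, piv = sla.lu_factor(A)
    v = np.zeros(n)
    v[n - 1 if k is None else k] = 1.0            # starting vector v0 = e_k
    for _ in range(iters):
        u = sla.lu_solve((lu, piv), v)            # back-solve with A
        u /= np.linalg.norm(u)
        w = sla.lu_solve((lu, piv), u, trans=1)   # back-solve with A^T
        sigma = 1.0 / np.linalg.norm(w)
        v = w * sigma
    return sigma, u, v                            # A u ~= sigma v

# Well-isolated example: smallest singular value 1e-5 by construction.
rng = np.random.default_rng(3)
P, _ = np.linalg.qr(rng.standard_normal((8, 8)))
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = P @ np.diag([7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 1e-5]) @ Q.T
sigma, u, v = smallest_singular_triplet(A)
print(abs(sigma - 1e-5) < 1e-10)                  # True
```

When the smallest singular value is well isolated, as here, one or two iterations already give full accuracy; when it is clustered, as for A2 above, the contraction factor is near 1 and the iteration stalls.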
The next set of tests checks the accuracy and stability of Algorithm IIA. For each
choice of S, the right-hand sides b are generated, as discussed above, so that v^T b = 1,
and the deflated solutions xeee, xppe, xsv and xge are computed. The iterative version of
Algorithm IIA was implemented but one iteration always proved to be enough for all
the deflated solutions, so the results given here are computed with the noniterative
version. The relative errors are displayed in Figs. 7.1 and 7.2. As expected, xge loses
accuracy as A becomes more singular whereas the other deflated solutions remain
accurate to within roughly an order of magnitude of the machine round-off level.
Moreover, one can draw a distinct correlation between the less accurate LU-deflated
solutions and relatively large values of ε (e.g., σ = 10^-5 for A1). These tests indicate
that the LU-deflated solutions and the noniterative version of Algorithm IIA together
form a rather robust procedure for computing deflated solutions.
The next set of tests is to verify the results of Lemma 20 and Theorem 21. A
large number of matrices of the form A1 are generated and the three null vectors usv,
ue and up are computed. In Fig. 7.3, ||usv - ue|| and ||usv - up|| are plotted against σ.

FIG. 7.1. Relative errors of computed xSRN vs. σ = 10^-l for A1.

FIG. 7.2. Relative errors of computed xSRN vs. σ = 10^-l for A2.

FIG. 7.3. ||usv - ue|| and ||usv - up|| vs. σ, with least squares fits.

FIG. 7.4. ||xsv - xeep|| vs. σ = 10^-l (ρ = v^T b), for ρ = .0001, .001, .01, .1.

FIG. 7.5. ||xsv - xppp|| vs. σ = 10^-l (ρ = v^T b), for ρ = .0001, .001, .01, .1.

It is seen that up is indeed generally closer to usv than ue is for the same value of σ. The
straight lines shown are the best least squares fits to the two sets of data {log (difference),
log (σ)}. From Lemma 20 the exact slopes of the straight lines should
be 1 for ||usv - ue|| and 2 for ||usv - up||. The slopes of the least squares fits are 1.028
and 1.537 respectively.
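The quoted slopes come from straight-line fits in log-log coordinates, and the fit itself is elementary (sketch below with synthetic data; the constant 3.0 and the exponent 2 are made-up stand-ins, not the paper's measurements).

```python
import numpy as np

# Synthetic stand-in data: a difference behaving like 3 * sigma^2,
# sampled at sigma = 10^-l for l = 1, ..., 8.
sigmas = 10.0 ** -np.arange(1, 9)
diffs = 3.0 * sigmas ** 2

# Fit log10(diff) = slope * log10(sigma) + c; the fitted slope
# estimates the order of the difference in sigma.
slope, intercept = np.polyfit(np.log10(sigmas), np.log10(diffs), 1)
print(round(slope, 6))   # 2.0
```

On real data the estimated slope deviates from the exact order, as with the 1.537 above, when the asymptotic regime is entered only for the smallest σ values in the sample.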
Next, right-hand sides are generated with values of v^T b varying from 10^-4 to
10^-1 for the matrices A1, and all the deflated solutions are computed with the same
right-hand sides. In Figs. 7.4 and 7.5, ||xsv - xeep|| and ||xsv - xppp|| are plotted as functions
of σ. It is seen that ||xsv - xeep|| does tend to a constant value roughly proportional to
v^T b, and ||xsv - xppp|| does tend to zero (or round-off level) as σ tends to zero. Moreover,
the differences do vary linearly with v^T b when σ is small. The results of Lemma 20
and Theorem 21 are thus verified.
8. Conclusions. We have offered a rather complete analysis of deflated solutions
and deflated decompositions of solutions to nearly singular linear systems. We have
provided a uniform framework through which it is possible to relate the various
approaches used in the literature as well as some new approaches proposed here. We
have analysed both theoretical questions of existence and uniqueness and practical
questions of stable computational algorithms. The use of implicit algorithms results in
a modular approach which only accesses the matrix as a linear solver. Therefore, the
algorithms proposed here should be easily extensible to linear solvers other than dense
Gaussian elimination, for example, sparse direct solvers, conjugate gradient methods
and multi-grid methods. Extensions to higher dimensional null spaces should also be
straightforward.


Acknowledgments. The author would like to thank the editor and one anonymous
referee for many helpful suggestions on the presentation.
REFERENCES

[1] E. ALLGOWER AND K. GEORG, Simplicial and continuation methods for approximating fixed points
and solutions to systems of equations, SIAM Rev., 22 (1980), pp. 28-85.
[2] T. F. CHAN, An improved algorithm for computing the singular value decomposition, ACM Trans. Math.
Software, 8 (1982), pp. 72-83.
[3] T. F. CHAN, Deflation techniques and block-elimination algorithms for solving bordered singular systems,
Tech. Rep. 226, Computer Science Dept., Yale Univ., New Haven, CT, 1982; SIAM J. Sci. Stat.
Comp., 5 (1984), pp. 121-134.
[4] T. F. CHAN, On the existence and computation of LU-factorizations with small pivots, Tech. Rep. 227,
Computer Science Dept., Yale Univ., New Haven, CT, 1982; Math. Comp., 42 (1984), pp. 535-547.
[5] T. F. CHAN, Newton-like pseudo-arclength methods for computing simple turning points, Tech. Rep. 233,
Computer Science Dept., Yale Univ., New Haven, CT, 1982; SIAM J. Sci. Stat. Comp., 5 (1984),
pp. 135-148.
[6] G. DAHLQUIST AND A. BJORCK, Numerical Methods, Prentice-Hall, Englewood Cliffs, NJ, 1974.
[7] J. J. DONGARRA, J. R. BUNCH, C. B. MOLER AND G. W. STEWART, LINPACK Users' Guide,
Society for Industrial and Applied Mathematics, Philadelphia, 1979.
[8] R. E. FUNDERLIC AND J. B. MANKIN, Solution of homogeneous systems of linear equations arising
from compartmental models, SIAM J. Sci. Stat. Comp., 2 (1981), pp. 375-383.
[9] C. B. GARCIA AND W. I. ZANGWILL, Pathways to Solutions, Fixed Points and Equilibria, Prentice-Hall,
Englewood Cliffs, NJ, 1981.
[10] P. E. GILL, W. MURRAY AND M. WRIGHT, Practical Optimization, Academic Press, New York, 1981.
[11] G. H. GOLUB AND C. REINSCH, Singular value decomposition and least squares solutions, Numer.
Math., 14 (1970), pp. 403-420.
[12] G. H. GOLUB AND J. H. WILKINSON, Ill-conditioned eigensystems and the computation of Jordan
canonical forms, SIAM Rev., 18 (1976), pp. 578-619.
[13] H. B. KELLER, Numerical solutions of bifurcation and nonlinear eigenvalue problems, in Applications of
Bifurcation Theory, P. Rabinowitz, ed., Academic Press, New York, 1977, pp. 359-384.
[14] H. B. KELLER, Singular systems, inverse iteration and least squares, unpublished manuscript, Applied
Mathematics Dept., California Institute of Technology, Pasadena, CA.
[15] H. B. KELLER, Numerical continuation methods, Short Course Lecture Notes, National Bureau of
Standards, Center for Applied Mathematics, Washington, DC.
[16] R. G. MELHEM AND W. C. RHEINBOLDT, A comparison of methods for determining turning points
of nonlinear equations, Computing, 29 (1982), pp. 201-226.
[17] H. D. MITTELMANN AND H. WEBER, Numerical methods for bifurcation problems - a survey and
classification, in Bifurcation Problems and Their Numerical Solution, Workshop on Bifurcation
Problems and their Numerical Solution, January 15-17, Dortmund, 1980, pp. 1-45.
[18] G. PETERS AND J. H. WILKINSON, The least squares problem and pseudo-inverses, Comput. J., 13
(1970), pp. 309-316.
[19] W. C. RHEINBOLDT, Numerical methods for a class of finite dimensional bifurcation problems, this
Journal, 15 (1978), pp. 1-11.
[20] W. C. RHEINBOLDT, Solution fields of nonlinear equations and continuation methods, this Journal, 17
(1980), pp. 221-237.
[21] G. W. STEWART, Computable error bounds for aggregated Markov chains, Tech. Rep. 901, Univ.
Maryland Computer Science Center, College Park, MD, 1980.
[22] G. W. STEWART, On the implicit deflation of nearly singular systems of linear equations, SIAM J. Sci.
Stat. Comp., 2 (1981), pp. 136-140.
[23] J. H. WILKINSON, The Algebraic Eigenvalue Problem, Oxford Univ. Press, London, 1965.
