How it works
We can think of A as a linear transformation taking a vector $v_1$ in its row space to a vector $u_1 = Av_1$ in its column space. The SVD arises from finding an orthogonal basis for the row space that gets transformed into an orthogonal basis for the column space: $Av_i = \sigma_i u_i$.
It's not hard to find an orthogonal basis for the row space: the Gram-Schmidt process gives us one right away. But in general, there's no reason to expect A to transform that basis to another orthogonal basis.
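As a quick numerical illustration (a NumPy sketch, not part of the original notes, borrowing the matrix $A$ from the example later in these notes): a basis chosen without regard to A is generally not mapped to an orthogonal pair, while the right choice of $v_i$ is.

```python
import numpy as np

# The matrix from the worked example later in these notes.
A = np.array([[4.0, 4.0],
              [-3.0, 3.0]])

# An arbitrary orthonormal basis of R^2 (the standard basis, which
# Gram-Schmidt would happily return) is NOT mapped by A to an
# orthogonal pair:
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(np.dot(A @ e1, A @ e2))   # 7.0, not 0: the images are not orthogonal

# The right-singular vectors v_i ARE mapped to an orthogonal pair:
v1 = np.array([1.0, 1.0]) / np.sqrt(2)
v2 = np.array([1.0, -1.0]) / np.sqrt(2)
print(np.dot(A @ v1, A @ v2))   # 0.0 (up to roundoff)
```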
You may be wondering about the vectors in the nullspaces of A and $A^T$. These are no problem: zeros on the diagonal of $\Sigma$ will take care of them.
Matrix language
The heart of the problem is to find an orthonormal basis $v_1, v_2, \ldots, v_r$ for the row space of A for which
$$
A \begin{bmatrix} v_1 & v_2 & \cdots & v_r \end{bmatrix}
= \begin{bmatrix} \sigma_1 u_1 & \sigma_2 u_2 & \cdots & \sigma_r u_r \end{bmatrix}
= \begin{bmatrix} u_1 & u_2 & \cdots & u_r \end{bmatrix}
\begin{bmatrix} \sigma_1 & & & \\ & \sigma_2 & & \\ & & \ddots & \\ & & & \sigma_r \end{bmatrix},
$$
with $u_1, u_2, \ldots, u_r$ an orthonormal basis for the column space of A. Once we add in the nullspaces, this equation will become $AV = U\Sigma$. (We can complete the orthonormal bases $v_1, \ldots, v_r$ and $u_1, \ldots, u_r$ to orthonormal bases for the entire space any way we want. Since $v_{r+1}, \ldots, v_n$ will be in the nullspace of A, the diagonal entries $\sigma_{r+1}, \ldots, \sigma_n$ will be 0.)
The columns of V and U are bases for the row and column spaces, respectively. Usually $U \neq V$, but if A is positive definite we can use the same basis for its row and column space!
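To see the positive definite case concretely, here is a NumPy sketch with a hypothetical positive definite matrix (not from the notes): for such a matrix the SVD coincides with the eigendecomposition $A = Q\Lambda Q^T$, so the same orthonormal basis serves as both U and V.

```python
import numpy as np

# A symmetric positive definite matrix (eigenvalues 3 and 1, both > 0).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

U, sigma, Vt = np.linalg.svd(A)

# For positive definite A, the singular values are the eigenvalues
# and the left and right singular vectors agree: U = V.
print(np.allclose(U, Vt.T))            # True
print(np.allclose(sigma, [3.0, 1.0]))  # True
```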
Calculation
Suppose A is the invertible matrix $\begin{bmatrix} 4 & 4 \\ -3 & 3 \end{bmatrix}$. We want to find vectors $v_1$ and $v_2$ in the row space $\mathbb{R}^2$, $u_1$ and $u_2$ in the column space $\mathbb{R}^2$, and positive numbers $\sigma_1$ and $\sigma_2$ so that the $v_i$ are orthonormal, the $u_i$ are orthonormal, and the $\sigma_i$ are the scaling factors for which $Av_i = \sigma_i u_i$.
This is a big step toward finding orthogonal matrices V and U and a diagonal matrix $\Sigma$ for which:
$$AV = U\Sigma.$$
Since V is orthogonal, we can multiply both sides by $V^{-1} = V^T$ to get:
$$A = U\Sigma V^T.$$
To find V and $\Sigma$ without knowing U, compute $A^T A$; the orthogonal matrix U cancels because $U^T U = I$:
$$
A^T A = V\Sigma^T U^T U \Sigma V^T = V\Sigma^2 V^T
= V \begin{bmatrix} \sigma_1^2 & & & \\ & \sigma_2^2 & & \\ & & \ddots & \\ & & & \sigma_n^2 \end{bmatrix} V^T.
$$
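This identity can be sanity-checked numerically (a NumPy sketch, not part of the notes): the eigenvalues of $A^T A$ should equal the squared singular values of A.

```python
import numpy as np

# The example matrix from these notes.
A = np.array([[4.0, 4.0],
              [-3.0, 3.0]])

# Eigenvalues of A^T A, sorted in descending order.
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
# Singular values of A (numpy already returns them in descending order).
sigmas = np.linalg.svd(A, compute_uv=False)

print(eigvals)      # [32. 18.]
print(sigmas ** 2)  # [32. 18.]
```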
SVD example
We return to our matrix $A = \begin{bmatrix} 4 & 4 \\ -3 & 3 \end{bmatrix}$. We start by computing
$$
A^T A = \begin{bmatrix} 4 & -3 \\ 4 & 3 \end{bmatrix}\begin{bmatrix} 4 & 4 \\ -3 & 3 \end{bmatrix}
= \begin{bmatrix} 25 & 7 \\ 7 & 25 \end{bmatrix}.
$$
The eigenvectors of this matrix will give us the vectors $v_i$, and the eigenvalues will give us the numbers $\sigma_i^2$.
Two orthogonal eigenvectors of $A^T A$ are $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$. To get an orthonormal basis, let $v_1 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}$ and $v_2 = \begin{bmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{bmatrix}$. These have eigenvalues $\sigma_1^2 = 32$ and $\sigma_2^2 = 18$. We now have:
$$
\underbrace{\begin{bmatrix} 4 & 4 \\ -3 & 3 \end{bmatrix}}_{A}
= U
\underbrace{\begin{bmatrix} 4\sqrt{2} & 0 \\ 0 & 3\sqrt{2} \end{bmatrix}}_{\Sigma}
\underbrace{\begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix}}_{V^T}.
$$
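The same computation can be done numerically (a NumPy sketch, not part of the notes). Note that `eigh` returns eigenvalues in ascending order, so we reverse them to match the convention $\sigma_1 \geq \sigma_2$:

```python
import numpy as np

A = np.array([[4.0, 4.0],
              [-3.0, 3.0]])

# Eigen-decomposition of the symmetric matrix A^T A gives the v_i
# (columns of V) and the numbers sigma_i^2 (eigenvalues).
lam, V = np.linalg.eigh(A.T @ A)   # ascending eigenvalue order
lam, V = lam[::-1], V[:, ::-1]     # reorder so sigma_1 >= sigma_2
sigma = np.sqrt(lam)

print(lam)    # [32. 18.]
print(sigma)  # 4*sqrt(2) and 3*sqrt(2)
```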
We could solve this for U, but for practice we'll find U by finding orthonormal eigenvectors $u_1$ and $u_2$ for $AA^T = U\Sigma^2 U^T$.
$$
AA^T = \begin{bmatrix} 4 & 4 \\ -3 & 3 \end{bmatrix}\begin{bmatrix} 4 & -3 \\ 4 & 3 \end{bmatrix}
= \begin{bmatrix} 32 & 0 \\ 0 & 18 \end{bmatrix}.
$$
1
Luckily, AA T happens to be diagonal. Its tempting to let u1 = and u2 =
0
0 0
, as Professor Strang did in the lecture, but because Av2 = we
1 3 2
0 1 0
instead have u2 = and U = . Note that this also gives us a
1 0 1
chance to double check our calculation of 1 and 2 .
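The sign issue is worth checking by machine (a NumPy sketch, not part of the notes): eigenvectors of $AA^T$ are only determined up to sign, so the reliable way to get each $u_i$ is $u_i = Av_i / \sigma_i$, which is consistent by construction.

```python
import numpy as np

A = np.array([[4.0, 4.0],
              [-3.0, 3.0]])

# The v_i and sigma_i found from A^T A above.
v1 = np.array([1.0, 1.0]) / np.sqrt(2)
v2 = np.array([1.0, -1.0]) / np.sqrt(2)
s1, s2 = np.sqrt(32.0), np.sqrt(18.0)

# Eigenvectors of A A^T leave signs undetermined; u_i = A v_i / sigma_i
# fixes them so that A v_i = sigma_i u_i holds exactly.
u1 = A @ v1 / s1
u2 = A @ v2 / s2
print(u1)  # [1. 0.]
print(u2)  # [ 0. -1.]
```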
Thus, the SVD of A is:
$$
\underbrace{\begin{bmatrix} 4 & 4 \\ -3 & 3 \end{bmatrix}}_{A}
= \underbrace{\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}}_{U}
\underbrace{\begin{bmatrix} 4\sqrt{2} & 0 \\ 0 & 3\sqrt{2} \end{bmatrix}}_{\Sigma}
\underbrace{\begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix}}_{V^T}.
$$
Our final example is the rank 1 matrix $A = \begin{bmatrix} 4 & 3 \\ 8 & 6 \end{bmatrix}$, whose row space is spanned by the unit vector $v_1 = \begin{bmatrix} .8 \\ .6 \end{bmatrix}$ and whose column space is spanned by the unit vector $u_1 = \begin{bmatrix} 1/\sqrt{5} \\ 2/\sqrt{5} \end{bmatrix}$. To compute $\sigma_1$ we find the nonzero eigenvalue of $A^T A$:
$$
A^T A = \begin{bmatrix} 4 & 8 \\ 3 & 6 \end{bmatrix}\begin{bmatrix} 4 & 3 \\ 8 & 6 \end{bmatrix}
= \begin{bmatrix} 80 & 60 \\ 60 & 45 \end{bmatrix}.
$$
Because this is a rank 1 matrix, one eigenvalue must be 0. The other must equal the trace, so $\sigma_1^2 = 125$. After finding unit vectors perpendicular to $u_1$ and $v_1$ (basis vectors for the left nullspace and nullspace, respectively) we see that the SVD of A is:
$$
\underbrace{\begin{bmatrix} 4 & 3 \\ 8 & 6 \end{bmatrix}}_{A}
= \underbrace{\frac{1}{\sqrt{5}}\begin{bmatrix} 1 & 2 \\ 2 & -1 \end{bmatrix}}_{U}
\underbrace{\begin{bmatrix} \sqrt{125} & 0 \\ 0 & 0 \end{bmatrix}}_{\Sigma}
\underbrace{\begin{bmatrix} .8 & .6 \\ .6 & -.8 \end{bmatrix}}_{V^T}.
$$
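As a final check (a NumPy sketch, not part of the notes), we can verify that the three factors above multiply back to A and that U and V are orthogonal:

```python
import numpy as np

# The rank 1 example and its SVD factors from the notes.
A = np.array([[4.0, 3.0],
              [8.0, 6.0]])
U = np.array([[1.0, 2.0],
              [2.0, -1.0]]) / np.sqrt(5)
S = np.diag([np.sqrt(125.0), 0.0])
Vt = np.array([[0.8, 0.6],
               [0.6, -0.8]])

print(np.allclose(U @ S @ Vt, A))         # True: factors reproduce A
print(np.allclose(U.T @ U, np.eye(2)))    # True: U is orthogonal
print(np.allclose(Vt @ Vt.T, np.eye(2)))  # True: V is orthogonal
```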
MIT OpenCourseWare
http://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.