
Math S-21b Lecture #12 Notes

Today's lecture focuses on what might be called the structural analysis of linear transformations. What are the intrinsic properties of a linear transformation? Are there any fixed directions? The discussion centers on the eigenvalues and eigenvectors associated with an $n \times n$ matrix: the definitions, calculations, and applications.
Invariant directions, eigenvectors, and eigenvalues
Let $A$ be an $n \times n$ matrix representing a linear transformation $T_A : \mathbb{R}^n \to \mathbb{R}^n$. Are there any invariant directions for this linear transformation? That is, can we find a vector $\mathbf{v}$ such that $T(\mathbf{v})$ is parallel to $\mathbf{v}$? This is an example of an intrinsic property of the transformation, something that exists independent of what basis is used or the coordinates relative to that basis. For example, a rotation in $\mathbb{R}^3$ has an axis of rotation regardless of what basis is used to describe the rotation. For an orthogonal projection onto some subspace $V \subseteq \mathbb{R}^n$, vectors in $V$ remain unchanged, and vectors in its orthogonal complement are sent to the zero vector. Again, this has nothing to do with what basis is used to represent this linear transformation.

The question of whether we can find a vector $\mathbf{v}$ such that $T(\mathbf{v})$ is parallel to $\mathbf{v}$ can be rephrased as whether there's a vector $\mathbf{v}$ such that $T(\mathbf{v}) = A\mathbf{v} = \lambda\mathbf{v}$ for some scalar $\lambda$. This leads to the following definition:
Definition: If $A$ is an $n \times n$ matrix, we call a nonzero vector $\mathbf{v}$ an eigenvector of $A$ if $T(\mathbf{v}) = A\mathbf{v} = \lambda\mathbf{v}$ for some scalar $\lambda$. This scalar $\lambda$ is called the eigenvalue associated with the eigenvector.
Finding the eigenvalues and eigenvectors
We can rewrite $A\mathbf{v} = \lambda\mathbf{v}$ as $A\mathbf{v} = \lambda I\mathbf{v}$, which is more conducive to using algebra. We can then write this as $\lambda I\mathbf{v} - A\mathbf{v} = \mathbf{0}$, or $(\lambda I - A)\mathbf{v} = \mathbf{0}$. In order for a vector $\mathbf{v}$ to be an eigenvector, it must be in the kernel of $\lambda I - A$ for some appropriate choice of $\lambda$. This can only happen if this kernel is nontrivial, which means that the matrix $\lambda I - A$ must not be invertible, and we know from our discussion of determinants that a necessary and sufficient condition for a matrix to not be invertible is that its determinant be equal to 0. That is:

$$\mathbf{v} \ne \mathbf{0} \text{ is an eigenvector of } A \iff (\lambda I - A)\mathbf{v} = \mathbf{0} \iff \det(\lambda I - A) = 0$$

As we'll see, if $A$ is an $n \times n$ matrix, $p_A(\lambda) = \det(\lambda I - A)$ will be an $n$th degree polynomial in $\lambda$ called the characteristic polynomial of $A$. So $\lambda I - A$ will have a nontrivial kernel if and only if $\lambda$ is a root of this characteristic polynomial. The eigenvalues are therefore the roots of the characteristic polynomial.
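To make this concrete, here is a minimal numerical sketch (Python with NumPy; an illustration added here, not part of the original notes) that builds the characteristic polynomial of a small matrix and checks that its roots agree with the eigenvalues reported by a library routine. The matrix is the one used in the worked example below.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# np.poly(A) returns the coefficients of det(lambda*I - A),
# highest degree first: here lambda^2 - 6*lambda + 8.
coeffs = np.poly(A)
print(coeffs)                # [ 1. -6.  8.]

# The eigenvalues are exactly the roots of this polynomial.
print(np.roots(coeffs))      # [4. 2.]
print(np.linalg.eigvals(A))  # [4. 2.] (order may vary)
```
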
Definition: The set of all eigenvalues of a matrix $A$ is called the spectrum of $A$. Since the eigenvalues are the roots of an $n$th degree polynomial, the spectrum will consist of at most $n$ values. These may be real numbers or complex numbers, possibly with repetition, and any complex eigenvalues must occur in complex conjugate pairs. [This follows from the Fundamental Theorem of Algebra: any polynomial with real coefficients can, in theory, always be factored into a product of linear factors and irreducible quadratic factors, and these irreducible quadratic factors will yield complex conjugate pairs (by the quadratic formula).]
If an eigenvalue occurs as a repeated root of the characteristic polynomial, we refer to the multiplicity of the
root as the algebraic multiplicity of the eigenvalue.
Definition: If $\lambda$ is an eigenvalue of $A$, then $\ker(\lambda I - A)$ will be a subspace called the eigenspace of $\lambda$, or $E_\lambda$. As with any subspace, it must be closed under scaling and vector addition. This yields the following two corollaries:
Corollary 1: If $\mathbf{v}$ is an eigenvector associated with an eigenvalue $\lambda$, then $t\mathbf{v}$ will also be an eigenvector for any scalar $t \ne 0$.
Corollary 2: If $\mathbf{v}_1$ and $\mathbf{v}_2$ are eigenvectors associated with the same eigenvalue $\lambda$, then $c_1\mathbf{v}_1 + c_2\mathbf{v}_2$ will also be an eigenvector for any scalars $c_1, c_2$ for which $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 \ne \mathbf{0}$.

Definition: The geometric multiplicity of an eigenvalue $\lambda$ of a matrix $A$ is $\dim[\ker(\lambda I - A)]$, i.e. the number of linearly independent eigenvectors associated with this eigenvalue.
Example: Find the eigenvalues and eigenvectors of the matrix $A = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}$.

Solution: We calculate
$$\lambda I - A = \lambda\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} - \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix} = \begin{pmatrix} \lambda - 3 & -1 \\ -1 & \lambda - 3 \end{pmatrix},$$
so the characteristic polynomial is
$$p_A(\lambda) = \det(\lambda I - A) = \det\begin{pmatrix} \lambda - 3 & -1 \\ -1 & \lambda - 3 \end{pmatrix} = (\lambda - 3)^2 - 1 = \lambda^2 - 6\lambda + 8.$$
This is easily factored to give $p_A(\lambda) = (\lambda - 4)(\lambda - 2) = 0$, so the eigenvalues are $\lambda_1 = 4$ and $\lambda_2 = 2$. How you order these doesn't matter, but you should keep the indexing consistent. For each eigenvalue, we next find its eigenvectors, i.e. $\ker(\lambda_i I - A)$ for each eigenvalue $\lambda_i$:

$\lambda_1 = 4$ gives $\lambda I - A = \begin{pmatrix} 4-3 & -1 \\ -1 & 4-3 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$, so $\ker(\lambda I - A)$ is found by row reduction:
$$\left[\begin{array}{rr|r} 1 & -1 & 0 \\ -1 & 1 & 0 \end{array}\right] \to \left[\begin{array}{rr|r} 1 & -1 & 0 \\ 0 & 0 & 0 \end{array}\right].$$
This gives $x_1 = t$, $x_2 = t$, or $\mathbf{x} = t\begin{pmatrix} 1 \\ 1 \end{pmatrix}$, so if we let $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, this spans the eigenspace $E_4$.

$\lambda_2 = 2$ gives $\lambda I - A = \begin{pmatrix} 2-3 & -1 \\ -1 & 2-3 \end{pmatrix} = \begin{pmatrix} -1 & -1 \\ -1 & -1 \end{pmatrix}$, so $\ker(\lambda I - A)$ is found by row reduction:
$$\left[\begin{array}{rr|r} -1 & -1 & 0 \\ -1 & -1 & 0 \end{array}\right] \to \left[\begin{array}{rr|r} 1 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right].$$
This gives $x_1 = t$, $x_2 = -t$, or $\mathbf{x} = t\begin{pmatrix} 1 \\ -1 \end{pmatrix}$, so if we let $\mathbf{v}_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$, this spans the eigenspace $E_2$.
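As a quick numerical check (a sketch added here, not part of the original notes), NumPy's eigenvalue routine reproduces these results, returning unit-length versions of the same eigenvectors:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# Columns of V are (normalized) eigenvectors; w holds the eigenvalues.
w, V = np.linalg.eig(A)
print(w)   # [4. 2.] (order may vary)
print(V)   # columns proportional to (1, 1) and (1, -1)

# Verify A v = lambda v for each eigenpair.
for lam, v in zip(w, V.T):
    assert np.allclose(A @ v, lam * v)
```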

In the example, we had two distinct, real eigenvalues which produced two linearly independent eigenvectors which may be used as a basis for $\mathbb{R}^2$, an eigenbasis. What is the matrix of this linear transformation relative to this special basis? The relations $A\mathbf{v}_1 = 4\mathbf{v}_1$ and $A\mathbf{v}_2 = 2\mathbf{v}_2$ give
$$[A]_B = \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix} = D,$$
a diagonal matrix. If we write $S = [\mathbf{v}_1 \; \mathbf{v}_2] = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$, then $S^{-1} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$, and $[A]_B = S^{-1}AS = D$. This will be the case for any matrix for which we can produce an entire basis consisting exclusively of eigenvectors. This motivates the following:
Definition: An $n \times n$ matrix $A$ is called diagonalizable if it is possible to find a basis for $\mathbb{R}^n$ consisting of eigenvectors of $A$.

If $A$ is diagonalizable with eigenbasis $B = \{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ and if we write $S = [\mathbf{v}_1 \; \cdots \; \mathbf{v}_n]$, then
$$\left.\begin{aligned} A\mathbf{v}_1 &= \lambda_1\mathbf{v}_1 \\ &\;\;\vdots \\ A\mathbf{v}_n &= \lambda_n\mathbf{v}_n \end{aligned}\right\} \implies [A]_B = S^{-1}AS = D = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}$$
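For the 2×2 example above, a short check (an added sketch, not from the original notes) confirms that conjugating by the eigenvector matrix $S$ diagonalizes $A$:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# Eigenbasis from the worked example: v1 = (1, 1), v2 = (1, -1).
S = np.array([[1.0,  1.0],
              [1.0, -1.0]])

D = np.linalg.inv(S) @ A @ S
print(D)   # diag(4, 2), up to floating-point roundoff
```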

Note: It is not always possible to diagonalize a matrix. We want to understand under what circumstances this
will be possible.


Powers of a matrix: If a matrix $A$ is diagonalizable, we can write $[A]_B = S^{-1}AS = D$ for some change of basis matrix $S$. Therefore $A = SDS^{-1}$ and $A^t = (SDS^{-1})(SDS^{-1})\cdots(SDS^{-1}) = SD^tS^{-1}$, where, if $D = \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix}$, we'll have $D^t = \begin{pmatrix} \lambda_1^t & & 0 \\ & \ddots & \\ 0 & & \lambda_n^t \end{pmatrix}$.
Example: For the matrix $A = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}$, calculate $A^t$ for any (positive integer) $t$.

Solution: For this matrix we found that $S^{-1}AS = D$ where $S = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$, $S^{-1} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$, and $D = \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix}$. So $A = SDS^{-1}$ and
$$A^t = SD^tS^{-1} = \tfrac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 4^t & 0 \\ 0 & 2^t \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \tfrac{1}{2}\begin{pmatrix} 4^t & 2^t \\ 4^t & -2^t \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \tfrac{1}{2}\begin{pmatrix} 4^t + 2^t & 4^t - 2^t \\ 4^t - 2^t & 4^t + 2^t \end{pmatrix}.$$
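A brief numerical check (an added sketch, not part of the original notes) compares this closed form against repeated matrix multiplication:

```python
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[3, 1],
              [1, 3]])

def A_power(t):
    """Closed form A^t = (1/2) [[4^t + 2^t, 4^t - 2^t], [4^t - 2^t, 4^t + 2^t]]."""
    return 0.5 * np.array([[4**t + 2**t, 4**t - 2**t],
                           [4**t - 2**t, 4**t + 2**t]])

for t in range(1, 6):
    assert np.allclose(matrix_power(A, t), A_power(t))
print("closed form matches matrix_power for t = 1..5")
```
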
Application: Markov example
There are situations in which a fixed amount of some asset is distributed among a number of sites and where
some iterated process simultaneously redistributes the amounts to other sites according to fixed percentages. For
example, suppose you had a fixed number of beans distributed between two piles, A and B. A process
simultaneously moves 50% of the beans in pile A to pile B (while retaining 50% in pile A) and moves 75% of
the beans in pile B to pile A (while retaining 25% in pile B). We can describe the transition as follows:
If $x_A$ is the number of beans in pile A and $x_B$ is the number of beans in pile B, then the new values will be determined by
$$\begin{cases} x_A^{\text{new}} = .5x_A + .75x_B \\ x_B^{\text{new}} = .5x_A + .25x_B \end{cases}$$
That is, the new values are determined by applying the matrix $A = \begin{pmatrix} .5 & .75 \\ .5 & .25 \end{pmatrix}$ to $\mathbf{x} = \begin{pmatrix} x_A \\ x_B \end{pmatrix}$. If we think of $\mathbf{x}_0$ as the initial distribution, then after one iteration we'll have $\mathbf{x}_1 = A\mathbf{x}_0$, after two iterations $\mathbf{x}_2 = A\mathbf{x}_1 = A^2\mathbf{x}_0$, and so on. After $t$ iterations the distribution will be given by $\mathbf{x}_t = A^t\mathbf{x}_0$.
The ability to calculate powers of a matrix using eigenvalues and eigenvectors greatly simplifies the analysis.
In this case, we have $A = \begin{pmatrix} .5 & .75 \\ .5 & .25 \end{pmatrix}$, $\lambda I - A = \begin{pmatrix} \lambda - .5 & -.75 \\ -.5 & \lambda - .25 \end{pmatrix}$,
$$p_A(\lambda) = \det[\lambda I - A] = \lambda^2 - .75\lambda - .25 = (\lambda - 1)(\lambda + .25),$$
and the eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = -.25$. These yield the eigenvectors $\mathbf{v}_1 = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$ and $\mathbf{v}_2 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$.
If we began with any configuration $\mathbf{x}_0 = \begin{pmatrix} x_A \\ x_B \end{pmatrix}$ and expressed this in terms of the basis of eigenvectors $B = \{\mathbf{v}_1, \mathbf{v}_2\}$ as $\mathbf{x}_0 = c_1\mathbf{v}_1 + c_2\mathbf{v}_2$, then we would have $\mathbf{x}_1 = c_1\lambda_1\mathbf{v}_1 + c_2\lambda_2\mathbf{v}_2$, $\mathbf{x}_2 = c_1\lambda_1^2\mathbf{v}_1 + c_2\lambda_2^2\mathbf{v}_2$, etc. After $t$ iterations we would get $\mathbf{x}_t = c_1\lambda_1^t\mathbf{v}_1 + c_2\lambda_2^t\mathbf{v}_2$. But with $\lambda_1 = 1$ and $\lambda_2 = -.25$ we see that $\lambda_1^t = 1$ for all $t$ and $\lambda_2^t \to 0$, so eventually $\mathbf{x}_t \to c_1\mathbf{v}_1$. In practical terms, this simply means that the number of beans in each pile will eventually be proportional to the components of the eigenvector $\mathbf{v}_1 = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$. For example, if we began with 1000 beans initially configured in any way, eventually we'll find the number of beans approaching 600 in pile A and 400 in pile B.
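The convergence is easy to watch numerically; here is a small simulation sketch (added here, not part of the original notes):

```python
import numpy as np

A = np.array([[0.5, 0.75],
              [0.5, 0.25]])

# Start 1000 beans in an arbitrary configuration, e.g. all in pile A.
x = np.array([1000.0, 0.0])

for t in range(25):
    x = A @ x

print(x)   # approaches [600. 400.], proportional to the eigenvector (3, 2)
```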

Example: Find the eigenvalues and eigenvectors of the matrix $A = \begin{pmatrix} 3 & 0 & -2 \\ 7 & 0 & -4 \\ 4 & 0 & -3 \end{pmatrix}$ and diagonalize this matrix, if possible, by finding a basis consisting of eigenvectors.

Solution: Before getting started, note that the column of 0's means that $A\mathbf{e}_2 = \mathbf{0}$, so $\mathbf{e}_2$ is actually an eigenvector with eigenvalue $\lambda = 0$. Indeed, the kernel of any $n \times n$ matrix is just the eigenspace $E_0$.

$$\lambda I - A = \begin{pmatrix} \lambda - 3 & 0 & 2 \\ -7 & \lambda & 4 \\ -4 & 0 & \lambda + 3 \end{pmatrix},$$
so $p_A(\lambda) = \det[\lambda I - A] = (\lambda - 3)(\lambda^2 + 3\lambda) + 2(4\lambda) = \lambda^3 - \lambda = \lambda(\lambda - 1)(\lambda + 1) = 0$.

This yields three distinct, real eigenvalues $\lambda_1 = 1$, $\lambda_2 = 0$, and $\lambda_3 = -1$. (Order doesn't matter, but be consistent.)

$\lambda_1 = 1$ gives $\lambda I - A = \begin{pmatrix} -2 & 0 & 2 \\ -7 & 1 & 4 \\ -4 & 0 & 4 \end{pmatrix}$, which row reduces to $\begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -3 \\ 0 & 0 & 0 \end{pmatrix}$, so $x_1 = t$, $x_2 = 3t$, $x_3 = t$, i.e. $\mathbf{x} = t\begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix}$, and $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix}$ spans $E_1$.

$\lambda_2 = 0$ gives $\lambda I - A = \begin{pmatrix} -3 & 0 & 2 \\ -7 & 0 & 4 \\ -4 & 0 & 3 \end{pmatrix}$, which row reduces to $\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$, so $x_1 = 0$, $x_2 = t$, $x_3 = 0$, i.e. $\mathbf{x} = t\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$, and $\mathbf{v}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$ spans $E_0$.

$\lambda_3 = -1$ gives $\lambda I - A = \begin{pmatrix} -4 & 0 & 2 \\ -7 & -1 & 4 \\ -4 & 0 & 2 \end{pmatrix}$, which row reduces to $\begin{pmatrix} 1 & 0 & -\frac{1}{2} \\ 0 & 1 & -\frac{1}{2} \\ 0 & 0 & 0 \end{pmatrix}$, so $x_1 = t$, $x_2 = t$, $x_3 = 2t$, i.e. $\mathbf{x} = t\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}$, and $\mathbf{v}_3 = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}$ spans $E_{-1}$.

Once again, we were fortunate to be able to produce a basis of eigenvectors $B = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$.
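As before, a quick NumPy check (an added sketch, not part of the notes) confirms the hand computation:

```python
import numpy as np

A = np.array([[3.0, 0.0, -2.0],
              [7.0, 0.0, -4.0],
              [4.0, 0.0, -3.0]])

# Hand-computed eigenpairs from the example above.
pairs = [(1.0,  np.array([1.0, 3.0, 1.0])),
         (0.0,  np.array([0.0, 1.0, 0.0])),
         (-1.0, np.array([1.0, 1.0, 2.0]))]

for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)
print("all three eigenpairs check out")
```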


Theorem: Eigenvectors corresponding to distinct eigenvalues are linearly independent.
Proof: We prove this fact using an inductive argument in which each successive step uses the result of the
previous step. For a finite set of eigenvalues, there will be a finite number of steps.
(1) If there is just one eigenvalue $\lambda_1$, then there must be a corresponding nonzero eigenvector $\mathbf{v}_1$. This is a linearly independent set.
(2) Suppose there are two distinct eigenvalues $\lambda_1 \ne \lambda_2$ with corresponding eigenvectors $\{\mathbf{v}_1, \mathbf{v}_2\}$. We want to show that these must necessarily be linearly independent. To this end, let $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 = \mathbf{0}$. If we multiply by the matrix $A$, we get $A(c_1\mathbf{v}_1 + c_2\mathbf{v}_2) = c_1A\mathbf{v}_1 + c_2A\mathbf{v}_2 = c_1\lambda_1\mathbf{v}_1 + c_2\lambda_2\mathbf{v}_2 = A\mathbf{0} = \mathbf{0}$. The original relation $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 = \mathbf{0}$ gives that $c_2\mathbf{v}_2 = -c_1\mathbf{v}_1$, so $c_1\lambda_1\mathbf{v}_1 - \lambda_2(c_1\mathbf{v}_1) = c_1(\lambda_1 - \lambda_2)\mathbf{v}_1 = \mathbf{0}$. Because $\lambda_1 \ne \lambda_2$ and $\mathbf{v}_1 \ne \mathbf{0}$, we must have $c_1 = 0$. But therefore $c_2\mathbf{v}_2 = \mathbf{0}$, so necessarily $c_2 = 0$. Therefore $\{\mathbf{v}_1, \mathbf{v}_2\}$ are linearly independent.
(3) Suppose $\lambda_1, \lambda_2, \lambda_3$ are distinct eigenvalues (hence $\lambda_1 \ne \lambda_2$, $\lambda_1 \ne \lambda_3$, $\lambda_2 \ne \lambda_3$), with corresponding eigenvectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$. Once again, we write $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 = \mathbf{0}$. Multiplication by $A$ gives $c_1A\mathbf{v}_1 + c_2A\mathbf{v}_2 + c_3A\mathbf{v}_3 = c_1\lambda_1\mathbf{v}_1 + c_2\lambda_2\mathbf{v}_2 + c_3\lambda_3\mathbf{v}_3 = \mathbf{0}$, and the original relation allows us to solve for $c_3\mathbf{v}_3 = -c_1\mathbf{v}_1 - c_2\mathbf{v}_2$. Substitution gives $c_1\lambda_1\mathbf{v}_1 + c_2\lambda_2\mathbf{v}_2 - \lambda_3(c_1\mathbf{v}_1 + c_2\mathbf{v}_2) = c_1(\lambda_1 - \lambda_3)\mathbf{v}_1 + c_2(\lambda_2 - \lambda_3)\mathbf{v}_2 = \mathbf{0}$. The previous step established the linear independence of $\{\mathbf{v}_1, \mathbf{v}_2\}$, so necessarily $c_1(\lambda_1 - \lambda_3) = 0$ and $c_2(\lambda_2 - \lambda_3) = 0$. Because the eigenvalues are all distinct, this implies that $c_1 = 0$ and $c_2 = 0$. Therefore $c_3\mathbf{v}_3 = \mathbf{0}$, so $c_3 = 0$ as well. So $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ are linearly independent.
The argument continues in the same fashion, so that if $\lambda_1, \ldots, \lambda_k$ are distinct eigenvalues with corresponding eigenvectors $\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$, these must be linearly independent.
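A numerical spot-check of the theorem (an added sketch, not from the notes): collect eigenvectors of a matrix with distinct eigenvalues and confirm they form a full-rank, hence linearly independent, set:

```python
import numpy as np

# The 3x3 example above has three distinct eigenvalues: 1, 0, -1.
A = np.array([[3.0, 0.0, -2.0],
              [7.0, 0.0, -4.0],
              [4.0, 0.0, -3.0]])

w, V = np.linalg.eig(A)
# Columns of V are eigenvectors; full rank means linear independence.
print(np.linalg.matrix_rank(V))   # 3
```
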
Corollary: If $A$ is an $n \times n$ matrix with $n$ distinct, real eigenvalues, then $A$ is diagonalizable.
Proof: If the roots of the $n$th degree characteristic polynomial are $\lambda_1, \ldots, \lambda_n$, each will yield a corresponding eigenvector, so we'll have a collection $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ of linearly independent eigenvectors. This will constitute a basis for $\mathbb{R}^n$, so the matrix $A$ will be diagonalizable.
Note: This means that for a matrix $A$ to fail to be diagonalizable, its spectrum must contain either repeated eigenvalues, complex eigenvalues, or possibly both. However, it is quite possible for a matrix with repeated eigenvalues to still be diagonalizable. The best example is the $n \times n$ identity matrix, which has only the eigenvalue $\lambda = 1$, but this eigenvalue has algebraic multiplicity $n$. The identity matrix is clearly diagonalizable because it's already diagonal! All vectors are eigenvectors of the identity matrix.
Example: If we compare the three matrices $A = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}$, $B = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}$, and $C = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}$, we'll see that they each have the same characteristic polynomial $p(\lambda) = (\lambda - 2)^3$, so they each have just the one eigenvalue $\lambda = 2$ with algebraic multiplicity 3. However, a quick calculation with each of these matrices reveals that the geometric multiplicity of $\lambda = 2$ is 3 for $A$ (every vector is an eigenvector), 2 for $B$, and 1 for $C$. Neither matrix $B$ nor matrix $C$ is diagonalizable.
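One way to carry out that quick calculation (an added sketch, not part of the original notes) is to compute $\dim[\ker(2I - M)]$ as $n$ minus the rank:

```python
import numpy as np

def geometric_multiplicity(M, lam):
    """dim ker(lam*I - M), via the rank-nullity theorem."""
    n = M.shape[0]
    return n - np.linalg.matrix_rank(lam * np.eye(n) - M)

A = np.diag([2.0, 2.0, 2.0])
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
C = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])

for name, M in [("A", A), ("B", B), ("C", C)]:
    print(name, geometric_multiplicity(M, 2.0))   # A 3, B 2, C 1
```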

Notes by Robert Winters

revised 7/21/2013
