
# Eigenvalues and eigenvectors


## The eigenvalue problem

Consider the matrix

$$A = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}.$$

When $A$ multiplies a column vector, the vector will in general shrink or expand and change direction. However, there are two special vectors, namely

$$\begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$

that do not change direction, but only expand when multiplied by $A$. These vectors are called the eigenvectors of $A$. The vector $(1, 0)^T$ expands by a factor of 3 and the vector $(0, 1)^T$ expands by a factor of 2. All other vectors change direction and expand by a factor between 2 and 3. The two expansion factors associated with the eigenvectors of $A$ are called the eigenvalues of $A$.
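This can be checked numerically. The following is a minimal sketch, assuming NumPy is available, that recovers the two eigenvalues and eigenvectors of the matrix above with `numpy.linalg.eig`:

```python
import numpy as np

# The 2x2 matrix from the text.
A = np.array([[3.0, 0.0],
              [0.0, 2.0]])

# eig returns the eigenvalues and a matrix whose columns are the
# corresponding normalized eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

print(np.sort(eigenvalues))   # [2. 3.]
print(eigenvectors)           # columns (1, 0) and (0, 1)
```

For a diagonal matrix the eigenvectors are simply the coordinate axes, which is exactly what the routine returns here.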


Note that any scalar multiple of the two eigenvectors above is also an eigenvector of $A$. The eigenvectors are therefore of the form

$$k_1 \begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad\text{and}\quad k_2 \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$

where $k_1$ and $k_2$ may take any real or complex value except zero. In truth, an eigenvector only indicates a direction! For the $2 \times 2$ matrix $A$ we therefore have the eigenvalue 3 with corresponding eigenvector $k_1 (1, 0)^T$, and the eigenvalue 2 with corresponding eigenvector $k_2 (0, 1)^T$.

Since the length of an eigenvector is not important, eigenvectors are often given in normalized form.

Exercise: Show that the matrix

$$B = \begin{pmatrix} 5 & 1 \\ 1 & 5 \end{pmatrix}$$

has eigenvectors $k_1 (1, 1)^T$ and $k_2 (1, -1)^T$, and find the corresponding eigenvalues. Also normalize the eigenvectors.
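One way to check the claim in this exercise (a sketch, assuming NumPy) is to multiply $B$ by each candidate vector and read off the expansion factor:

```python
import numpy as np

B = np.array([[5.0, 1.0],
              [1.0, 5.0]])

# Candidate eigenvectors from the exercise.
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])

# If v is an eigenvector, B @ v is a scalar multiple of v.
print(B @ v1)   # [6. 6.]  -> eigenvalue 6
print(B @ v2)   # [ 4. -4.] -> eigenvalue 4

# Normalized eigenvectors (divide by the Euclidean length).
print(v1 / np.linalg.norm(v1))
print(v2 / np.linalg.norm(v2))
```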

In general we seek solutions to the equation

$$A\mathbf{x} = \lambda \mathbf{x}. \tag{1}$$

There are two possibilities:

(i) $\mathbf{x} = \mathbf{0}$, which implies that $\lambda$ may take any value. This solution is called the trivial solution of (1).

(ii) $\mathbf{x} \neq \mathbf{0}$. This implies that we seek non-zero vectors that, when multiplied by $A$, only expand by a factor $\lambda$, and for each of these vectors we seek the corresponding eigenvalue.

When only $A$ is given, how do we find the eigenvalues and eigenvectors? Equation (1) can also be written as

$$A\mathbf{x} - \lambda I \mathbf{x} = \mathbf{0} \quad\text{or}\quad (A - \lambda I)\,\mathbf{x} = \mathbf{0}. \tag{2}$$


The system of equations (2) has non-zero solutions only when $(A - \lambda I)$ is not invertible. We must therefore find the values of $\lambda$ for which

$$\det(A - \lambda I) = 0. \tag{3}$$

For an $n \times n$ matrix, the expression $\det(A - \lambda I)$ is a polynomial of degree $n$ in $\lambda$, and

$$p(\lambda) = \det(A - \lambda I) \tag{4}$$

is called the characteristic polynomial of $A$, while $p(\lambda) = 0$ is called the characteristic equation. The eigenvalues of $A$ are therefore the $n$ roots of the characteristic equation. We denote these roots by $\lambda_1, \lambda_2, \ldots, \lambda_n$. Some of these roots may be equal, and even when the matrix has real elements, the roots can be complex. When two or more eigenvalues of a matrix are equal, we say that the matrix is degenerate.
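The route from matrix to characteristic polynomial to roots can be traced numerically. In this sketch (NumPy assumed), `np.poly` builds the coefficients of the monic characteristic polynomial, and `np.roots` returns its roots, which agree with `np.linalg.eigvals`:

```python
import numpy as np

A = np.array([[5.0, 1.0],
              [1.0, 5.0]])

# Coefficients of the monic characteristic polynomial:
# lambda^2 - Tr(A) lambda + det(A) for a 2x2 matrix.
coeffs = np.poly(A)
print(coeffs)                    # [  1. -10.  24.]

# Its roots are the eigenvalues.
print(np.sort(np.roots(coeffs)))         # [4. 6.]
print(np.sort(np.linalg.eigvals(A)))     # [4. 6.]
```

In practice one calls `eigvals` (or `eig`) directly; forming the polynomial explicitly is numerically fragile for large matrices, but it mirrors the derivation in the text.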


Exercise: Find the eigenvalues and corresponding normalized eigenvectors of the following matrices:

$$A = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}; \quad B = \begin{pmatrix} 1 & 0 & 8 \\ 8 & 20 & 6 \\ 0 & 6 & 1 \end{pmatrix}; \quad C = \begin{pmatrix} 12 & 5 & 1 \\ 5 & 12 & 0 \\ 1 & 0 & 2 \end{pmatrix}$$

## The general characteristic equation of a 3 × 3 matrix

Let $A$ be the following $3 \times 3$ matrix:

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$$

Then we have that

$$\det(A - \lambda I) = \begin{vmatrix} a_{11}-\lambda & a_{12} & a_{13} \\ a_{21} & a_{22}-\lambda & a_{23} \\ a_{31} & a_{32} & a_{33}-\lambda \end{vmatrix}$$

$$= (a_{11}-\lambda)\left[(a_{22}-\lambda)(a_{33}-\lambda) - a_{23}a_{32}\right] - a_{12}\left[a_{21}(a_{33}-\lambda) - a_{23}a_{31}\right] + a_{13}\left[a_{21}a_{32} - (a_{22}-\lambda)a_{31}\right]$$

The characteristic polynomial is therefore

$$p(\lambda) = -\lambda^3 + \mathrm{Tr}(A)\,\lambda^2 - h(A)\,\lambda + \det(A), \tag{5}$$

where

$$h(A) = a_{11}a_{22} + a_{22}a_{33} + a_{33}a_{11} - a_{12}a_{21} - a_{23}a_{32} - a_{31}a_{13}. \tag{6}$$

In factorized form, however, the characteristic polynomial is given by

$$p(\lambda) = (a - \lambda)(b - \lambda)(c - \lambda) = 0, \tag{7}$$

where $a$, $b$ and $c$ are the roots of $p(\lambda) = 0$, and therefore represent the three eigenvalues. When (7) is multiplied out, the following is obtained:

$$p(\lambda) = -\lambda^3 + (a + b + c)\,\lambda^2 - (ab + bc + ca)\,\lambda + abc. \tag{8}$$

Term-by-term comparison between (5) and (8) yields

(i) $\mathrm{Tr}(A) = a + b + c$ (the sum of the eigenvalues)

(ii) $\det(A) = abc$ (the product of the eigenvalues)

(iii) $h(A) = ab + bc + ca$
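Rules (i) and (ii) are easy to spot-check numerically. The sketch below (NumPy assumed; the $3 \times 3$ matrix is an arbitrary illustrative choice, not taken from the text) compares the trace and determinant against the sum and product of the eigenvalues:

```python
import numpy as np

# An arbitrary symmetric 3x3 matrix, chosen only for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam = np.linalg.eigvals(A)

# (i) the trace equals the sum of the eigenvalues
print(np.isclose(np.trace(A), lam.sum()))        # True
# (ii) the determinant equals the product of the eigenvalues
print(np.isclose(np.linalg.det(A), lam.prod()))  # True
```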

These rules also apply to a general $n \times n$ matrix.

Theorem: When the eigenvalues of an $n \times n$ matrix $A$ are given by $\lambda_i$, $i = 1, 2, \ldots, n$, we have that

(i) $\mathrm{Tr}(A) = \sum_{i=1}^{n} \lambda_i$ (the sum of the eigenvalues)

(ii) $\det(A) = \prod_{i=1}^{n} \lambda_i$ (the product of the eigenvalues)

We do not prove the above theorem.

## Special properties of the eigenvectors of symmetric matrices

If $A$ is an $n \times n$ symmetric matrix, then $A = A^T$.

Theorem: The eigenvectors corresponding to different eigenvalues of a symmetric matrix are orthogonal.

Proof: Let $A$ be a symmetric matrix, with eigenvalues $\lambda_1$ and $\lambda_2$ and corresponding eigenvectors $\mathbf{x}_1$ and $\mathbf{x}_2$:

$$A\mathbf{x}_1 = \lambda_1 \mathbf{x}_1 \tag{9}$$

$$A\mathbf{x}_2 = \lambda_2 \mathbf{x}_2 \tag{10}$$

Transpose (10): $(A\mathbf{x}_2)^T = (\lambda_2 \mathbf{x}_2)^T$, so that

$$\mathbf{x}_2^T A^T = \lambda_2 \mathbf{x}_2^T \tag{11}$$

Multiply (11) from the right by $\mathbf{x}_1$ and substitute $A^T$ with $A$:

$$\mathbf{x}_2^T A \mathbf{x}_1 = \lambda_2 \mathbf{x}_2^T \mathbf{x}_1 \tag{12}$$

But $A\mathbf{x}_1 = \lambda_1 \mathbf{x}_1$. Substitute this into (12):

$$\lambda_1 \mathbf{x}_2^T \mathbf{x}_1 = \lambda_2 \mathbf{x}_2^T \mathbf{x}_1$$

Therefore

$$\lambda_1 \mathbf{x}_2^T \mathbf{x}_1 - \lambda_2 \mathbf{x}_2^T \mathbf{x}_1 = 0$$

or

$$(\lambda_1 - \lambda_2)\, \mathbf{x}_2^T \mathbf{x}_1 = 0$$

Therefore:

(i) When $\lambda_1 - \lambda_2 \neq 0$, we must have that $\mathbf{x}_2^T \mathbf{x}_1 = 0$; that is, the eigenvectors are orthogonal when the corresponding eigenvalues differ.

(ii) When $\lambda_1 = \lambda_2$, $\mathbf{x}_2^T \mathbf{x}_1$ does not necessarily equal zero. Therefore, when a matrix is degenerate, its eigenvectors are not necessarily orthogonal.

Example 1: The eigenvalues of the matrix

$$A = \begin{pmatrix} 4 & 5 & 1 \\ 5 & 0 & 5 \\ 1 & 5 & 4 \end{pmatrix}$$

are given by $\lambda_1 = -5$, $\lambda_2 = 3$ and $\lambda_3 = 10$, and the eigenvectors by

$$\mathbf{x}_1 = \frac{1}{\sqrt{6}} \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}; \quad \mathbf{x}_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}; \quad \mathbf{x}_3 = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$

Show that the above three vectors are mutually orthogonal.
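A quick numerical check of Example 1 (a sketch, NumPy assumed): verify both the eigenvalue equations and the pairwise orthogonality.

```python
import numpy as np

A = np.array([[4.0, 5.0, 1.0],
              [5.0, 0.0, 5.0],
              [1.0, 5.0, 4.0]])

x1 = np.array([1.0, -2.0, 1.0]) / np.sqrt(6)
x2 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
x3 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)

# Eigenvalue equations A x = lambda x:
for lam, x in [(-5.0, x1), (3.0, x2), (10.0, x3)]:
    print(np.allclose(A @ x, lam * x))   # True (three times)

# Mutual orthogonality: all pairwise dot products vanish.
print(np.isclose(x1 @ x2, 0.0))   # True
print(np.isclose(x1 @ x3, 0.0))   # True
print(np.isclose(x2 @ x3, 0.0))   # True
```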

Example 2: The eigenvalues of the matrix

$$B = \begin{pmatrix} 4 & 0 & 2 \\ 0 & 6 & 0 \\ 2 & 0 & 4 \end{pmatrix}$$

are given by $\lambda_1 = \lambda_2 = 6$ and $\lambda_3 = 2$. We first obtain the eigenvector that corresponds to $\lambda_3 = 2$: $(B - 2I)\,\mathbf{x} = \mathbf{0}$ gives the following two equations:

$$2x_1 + 2x_3 = 0 \quad\text{and}\quad 4x_2 = 0,$$

and from this it follows that

$$\mathbf{x}_3 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}$$

However, the system $(B - 6I)\,\mathbf{x} = \mathbf{0}$ yields only one independent equation, namely $-2x_1 + 2x_3 = 0$. Therefore $x_2$ can take any value.

Let $x_2 = k$; then the normalized eigenvector that corresponds to the eigenvalue $\lambda_1 = \lambda_2 = 6$ is given by

$$\mathbf{x}_1 = \frac{1}{\sqrt{2 + k^2}} \begin{pmatrix} 1 \\ k \\ 1 \end{pmatrix}$$

A plane therefore exists such that any vector in that plane is an eigenvector of $B$ corresponding to the eigenvalue 6. Note, however, that $\mathbf{x}_1$ and $\mathbf{x}_3$ are still orthogonal. If we insist that a set of three orthogonal eigenvectors of $B$ has to be available, we can choose any two orthogonal vectors in the plane. Choose for example $k = 1$; then one eigenvector is

$$\mathbf{x}_1 = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$

Let the second eigenvector that corresponds to 6 be $\mathbf{x}_2 = (1, k, 1)^T$. If it has to be orthogonal to $\mathbf{x}_1$, then we must have that $\mathbf{x}_2^T \mathbf{x}_1 = 0$, and subsequently

$$2 + k = 0.$$

Therefore $k = -2$ and

$$\mathbf{x}_2 = \frac{1}{\sqrt{6}} \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}$$
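For symmetric matrices, NumPy's `numpy.linalg.eigh` makes exactly this kind of choice automatically: even in the degenerate case it returns a mutually orthonormal set of eigenvectors. A sketch using the matrix $B$ of Example 2:

```python
import numpy as np

B = np.array([[4.0, 0.0, 2.0],
              [0.0, 6.0, 0.0],
              [2.0, 0.0, 4.0]])

# eigh is specialized for symmetric (Hermitian) matrices; it returns
# eigenvalues in ascending order and orthonormal eigenvector columns,
# even when an eigenvalue is repeated.
vals, vecs = np.linalg.eigh(B)
print(vals)                                    # [2. 6. 6.]
print(np.allclose(vecs.T @ vecs, np.eye(3)))   # True
```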

## Handy properties of eigenvalues and eigenvectors

Theorem 3: Let $A$ be an $n \times n$ invertible matrix with eigenvalues $\lambda_j$ and corresponding eigenvectors $\mathbf{x}_j$, $j = 1, 2, \ldots, n$. Then the following holds:

(a) $\alpha A$ has eigenvalue $\alpha \lambda_j$ corresponding to eigenvector $\mathbf{x}_j$, where $\alpha$ is a scalar.

(b) $A^n$ has eigenvalue $\lambda_j^n$ corresponding to eigenvector $\mathbf{x}_j$, where $n$ is a positive integer.

(c) $A^{-1}$ has eigenvalue $1/\lambda_j$ corresponding to eigenvector $\mathbf{x}_j$.

(d) $A + \beta I$ has eigenvalue $\lambda_j + \beta$ corresponding to eigenvector $\mathbf{x}_j$, where $\beta$ is a scalar.

(e) From (b) and (c) it follows that (b) also holds for negative integers $n$.

It therefore seems that the eigenvalues reflect the exponentiation properties of a matrix.
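These properties can be illustrated numerically. The sketch below (NumPy assumed) uses the symmetric matrix with entries 5 and 1 from the earlier exercise, which has the known eigenpair $\lambda = 6$, $\mathbf{x} = (1, 1)^T$; the values of $\alpha$, $\beta$ and $n$ are arbitrary choices:

```python
import numpy as np

A = np.array([[5.0, 1.0],
              [1.0, 5.0]])
lam = 6.0
x = np.array([1.0, 1.0])    # A @ x == 6 * x

alpha, beta, n = 7.0, 5.0, 3

print(np.allclose((alpha * A) @ x, (alpha * lam) * x))            # (a) True
print(np.allclose(np.linalg.matrix_power(A, n) @ x, lam**n * x))  # (b) True
print(np.allclose(np.linalg.inv(A) @ x, x / lam))                 # (c) True
print(np.allclose((A + beta * np.eye(2)) @ x, (lam + beta) * x))  # (d) True
```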

Exercise: (1) Prove Theorem 3 on your own. (2) Let

$$A = \begin{pmatrix} 4 & -1 \\ 2 & 1 \end{pmatrix}$$

Show that $A$ has an eigenvalue of 3 and find the corresponding eigenvector. Then illustrate all the properties mentioned in Theorem 3 using this eigenvalue and eigenvector. Take $\alpha = 7$ and $\beta = 5$.

## Diagonalization of a matrix

The diagonal form: Let $A$ be a non-degenerate $3 \times 3$ matrix, with eigenvalues $\lambda_1$, $\lambda_2$ and $\lambda_3$ and corresponding eigenvectors $\mathbf{x}_1$, $\mathbf{x}_2$ and $\mathbf{x}_3$. Therefore

$$A\mathbf{x}_1 = \lambda_1\mathbf{x}_1, \quad A\mathbf{x}_2 = \lambda_2\mathbf{x}_2, \quad A\mathbf{x}_3 = \lambda_3\mathbf{x}_3 \qquad (*)$$

We can now insert the column vectors $\mathbf{x}_1$, $\mathbf{x}_2$ and $\mathbf{x}_3$ into a matrix as follows:


$$S = \begin{pmatrix} | & | & | \\ \mathbf{x}_1 & \mathbf{x}_2 & \mathbf{x}_3 \\ | & | & | \end{pmatrix}$$

The equations $(*)$ can then be written as follows:

$$AS = \begin{pmatrix} | & | & | \\ \lambda_1\mathbf{x}_1 & \lambda_2\mathbf{x}_2 & \lambda_3\mathbf{x}_3 \\ | & | & | \end{pmatrix} = S\Lambda$$

where

$$\Lambda = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}$$

When the eigenvectors are independent, $S$ is invertible, so that the equation can be written as follows:

$$A = S \Lambda S^{-1}$$

This expression is called the diagonal form of $A$, and $S$ is called the diagonalization matrix of $A$.


Note that the diagonalization matrix is not unique: any scalar multiple of $S$ is also a diagonalization matrix. The order in which the eigenvectors are stacked into $S$ is also not important, but it is essential that it corresponds to the order in which the eigenvalues are placed in $\Lambda$. A matrix $A$ is therefore diagonalizable only if $S^{-1}$ exists, that is when $\det(S) \neq 0$, and this is only possible when the columns of $S$ are linearly independent. The columns of $S$ are, however, the eigenvectors of $A$, and when $A$ is non-degenerate, these eigenvectors are linearly independent. A non-degenerate matrix can therefore always be diagonalized. For a symmetric degenerate matrix, we simply have to choose eigenvectors that are linearly independent. For a non-symmetric degenerate matrix, not enough linearly independent eigenvectors can be found, and such a matrix is therefore not diagonalizable. In this case the so-called Jordan form (discussed later) is used.
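The non-diagonalizable case can be seen numerically with the classic $2 \times 2$ Jordan block (a sketch, NumPy assumed): the eigenvalue 1 is repeated, but only one independent eigenvector exists, so $S$ is singular and the diagonal form cannot be built.

```python
import numpy as np

# A 2x2 Jordan block: degenerate (eigenvalue 1 twice) and not symmetric.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])

vals, S = np.linalg.eig(J)
print(vals)                        # [1. 1.]

# The eigenvector columns are (numerically) parallel, so S has rank 1
# and S^{-1} does not exist: J is not diagonalizable.
print(np.linalg.matrix_rank(S))    # 1
```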