
# REAL LINEAR ALGEBRA: PROBLEMS WITH SOLUTIONS

The problems listed below are intended as review problems to do before the final. They are organized in groups according to sections in my notes, but it is not forbidden to use techniques from later sections (sometimes dramatically easier) to solve earlier problems.

I emphatically do not imply that my solutions are unique or even the best solutions to these problems. I hope, at least, that there are no erroneous solutions. If you find an error or typo, or have a better solution, be sure to let me know!
(i). Linear Equations and $\mathbb{R}^n$.

(a), (b) and (c) Solve:

$$\begin{pmatrix} 1 & 2 & 3 \\ -1 & 5 & 6 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 3 \\ 7 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 3 \\ 7 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3 \\ 7 \end{pmatrix}.$$
Solutions:

(a)

$$\left(\begin{array}{ccc|c} 1 & 2 & 3 & 3 \\ -1 & 5 & 6 & 7 \end{array}\right) \to
\left(\begin{array}{ccc|c} 1 & 2 & 3 & 3 \\ 0 & 7 & 9 & 10 \end{array}\right) \to
\left(\begin{array}{ccc|c} 1 & 2 & 3 & 3 \\ 0 & 1 & \frac{9}{7} & \frac{10}{7} \end{array}\right) \to
\left(\begin{array}{ccc|c} 1 & 0 & \frac{3}{7} & \frac{1}{7} \\ 0 & 1 & \frac{9}{7} & \frac{10}{7} \end{array}\right).$$

We interpret this to mean $x + \frac{3}{7}z = \frac{1}{7}$ and $y + \frac{9}{7}z = \frac{10}{7}$. Rewriting this parametrically (set $z = t$) gives

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} \frac{1}{7} \\ \frac{10}{7} \\ 0 \end{pmatrix} + t \begin{pmatrix} -\frac{3}{7} \\ -\frac{9}{7} \\ 1 \end{pmatrix}, \qquad t \text{ is any real number.}$$
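As a quick numerical cross-check (numpy assumed available; the coefficient matrix uses the sign reconstructed in the reduction above), every point on the parametric line should solve the system:

```python
import numpy as np

# Check part (a): the whole parametric line of solutions satisfies Av = b.
A = np.array([[1.0, 2.0, 3.0],
              [-1.0, 5.0, 6.0]])
b = np.array([3.0, 7.0])
particular = np.array([1 / 7, 10 / 7, 0])
direction = np.array([-3 / 7, -9 / 7, 1])

for t in (-2.0, 0.0, 3.5):
    assert np.allclose(A @ (particular + t * direction), b)
```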
(b)

$$\left(\begin{array}{ccc|c} 1 & 2 & 3 & 3 \\ 1 & 2 & 3 & 7 \end{array}\right) \to \left(\begin{array}{ccc|c} 1 & 2 & 3 & 3 \\ 0 & 0 & 0 & 4 \end{array}\right).$$

No solution.
(c)

$$\left(\begin{array}{cc|c} 1 & 2 & 3 \\ 3 & 4 & 7 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 2 & 3 \\ 0 & -2 & -2 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 2 & 3 \\ 0 & 1 & 1 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 0 & 1 \\ 0 & 1 & 1 \end{array}\right).$$

Solution: $\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.
Date: March 24, 2006.
(ii). Parametric and Point-Normal Equations for Lines and Planes.

(a) Find parametric and point-normal forms for a line in $\mathbb{R}^2$ through the point $(6, 4)$ and perpendicular to the vector $(1, 3)$.

Solution: Parametric: $Q(t) = (6, 4) + t\,(3, -1)$.
Point-Normal: $((x, y) - (6, 4)) \cdot (1, 3) = 0$.

(b) Find parametric and point-normal forms for a plane containing the three points $(6, 4, 7)$, $(0, 0, 1)$ and $(-1, 0, 2)$.

Solution: $V = (6, 4, 7) - (0, 0, 1) = (6, 4, 6)$ and $W = (-1, 0, 2) - (0, 0, 1) = (-1, 0, 1)$ lie in the plane, head and tail. $V \times W = (4, -12, 4)$.
Parametric: $Q(s, t) = (0, 0, 1) + s\,(6, 4, 6) + t\,(-1, 0, 1)$.
Point-Normal: $((x, y, z) - (0, 0, 1)) \cdot (4, -12, 4) = 0$.
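The cross-product step in part (b) can be sketched numerically (numpy assumed; the third point uses the sign reconstructed above):

```python
import numpy as np

# Two direction vectors in the plane, and their cross product as a normal.
p0 = np.array([0, 0, 1])
V = np.array([6, 4, 7]) - p0        # = (6, 4, 6)
W = np.array([-1, 0, 2]) - p0       # = (-1, 0, 1)
n = np.cross(V, W)

assert np.array_equal(n, [4, -12, 4])
assert n @ V == 0 and n @ W == 0    # the normal is orthogonal to both directions
```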
(iii). Linear Transformations from $\mathbb{R}^n$ to $\mathbb{R}^m$.

(a), (b) and (c) Which of the following are linear? Justify your conclusion.

$$g\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 4 \\ x + y \end{pmatrix} \qquad h\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} xy \\ x + y \end{pmatrix} \qquad f\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} z - x \\ x + y \end{pmatrix}.$$
Solutions:

(a) Not linear, because $g(0) = (4, 0) \neq 0$.

(b) Not linear. If it were linear, it would correspond to the matrix

$$( h(e_1)\; h(e_2)\; h(e_3) ) = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix}.$$

However

$$\begin{pmatrix} 0 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ x + y \end{pmatrix} \neq \begin{pmatrix} xy \\ x + y \end{pmatrix}.$$
(c) Is linear:

$$( f(e_1)\; f(e_2)\; f(e_3) )\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} z - x \\ x + y \end{pmatrix}.$$
(iv). Eigenvalues.

(a), (b) and (c) Find real eigenvalues and associated eigenvectors (if any) for:

$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} -1 & 0 \\ 0 & 3 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 \\ 4 & 5 \end{pmatrix}.$$

Solutions:

(a) $\det\begin{pmatrix} \lambda & -1 \\ 1 & \lambda \end{pmatrix} = \lambda^2 + 1$, which has no real roots. No real eigenvalues.

(b) $\det\begin{pmatrix} \lambda + 1 & 0 \\ 0 & \lambda - 3 \end{pmatrix} = (\lambda + 1)(\lambda - 3)$. Real eigenvalues: $-1$ and $3$.
$\lambda = -1$ has eigenvector $(1, 0)$. $\lambda = 3$ has eigenvector $(0, 1)$.
(c) $\det\begin{pmatrix} \lambda - 1 & 0 \\ -4 & \lambda - 5 \end{pmatrix} = (\lambda - 1)(\lambda - 5)$. Real eigenvalues: $1$ and $5$.
$\lambda = 1$ has eigenvector $(1, -1)$. $\lambda = 5$ has eigenvector $(0, 1)$.
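A numerical check of the three computations (numpy assumed; the matrices use the signs reconstructed above):

```python
import numpy as np

# (a) has no real eigenvalues; (b) and (c) have the eigenpairs found by hand.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # (a)
B = np.array([[-1.0, 0.0], [0.0, 3.0]])   # (b)
C = np.array([[1.0, 0.0], [4.0, 5.0]])    # (c)

assert np.iscomplexobj(np.linalg.eigvals(A))        # eigenvalues are +-i
assert sorted(np.linalg.eigvals(B)) == [-1.0, 3.0]
assert sorted(np.linalg.eigvals(C).round(10)) == [1.0, 5.0]
assert np.allclose(C @ [1.0, -1.0], [1.0, -1.0])            # eigenvalue 1
assert np.allclose(C @ [0.0, 1.0], 5 * np.array([0.0, 1.0]))  # eigenvalue 5
```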
(v). General Vector Spaces and Subspaces.
(a) Is the set of 3 by 3 matrices whose determinant is 0 a vector space with the
usual matrix addition and scalar multiplication?
Solution:
No: not closed under addition.

$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix} + \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 2 & 2 \\ 1 & 1 & 2 \\ 1 & 2 & 2 \end{pmatrix}.$$

Each summand has two equal rows, hence determinant 0, but the sum does not.
(b) Is the set of ordered pairs of integers a vector space with the usual operations?
Solution:
No: not closed under scalar multiplication.

$$\tfrac{1}{2}\,(1, 1) = \left( \tfrac{1}{2}, \tfrac{1}{2} \right).$$
(c) Prove that the eigenspace for a given eigenvalue of a square matrix is a vector
space.
Solution:
Suppose that $v$ and $w$ are eigenvectors for eigenvalue $\lambda$ of the matrix $M$, and $c$ is any constant.

$$M(v + cw) = Mv + cMw = \lambda v + c\lambda w = \lambda(v + cw).$$

So $v + cw$ is also an eigenvector for this eigenvalue (or the zero vector, which the eigenspace contains by definition).
(d) Is a plane in three dimensional space which contains the origin a subspace of $\mathbb{R}^3$? What if it doesn't contain the origin?
Solution:
Yes: the equation of a plane through the origin can be written in normal form $n \cdot x = 0$. The dot product is linear, so (as in the last example) the vectors in the plane are closed under scalar multiplication and vector addition.
A plane not containing $0$ is not closed under scalar multiplication and so is not a vector space.
(e) Is the graph of $y = x^2$ a subspace of $\mathbb{R}^2$?

No: $(1, 1)$ is in this set but $2\,(1, 1) = (2, 2)$ is not.
(f) Is the set consisting of the $x$ and $y$ axes (combined) a vector subspace of $\mathbb{R}^2$?
Solution:
No: (1, 0) and (0, 1) are in this set but (1, 0) + (0, 1) is not.
(vi). Basis for a Vector Space.
(a) Find a basis for the vector space of 2 by 2 matrices with trace 0.
Solution:
Trace is a linear function on matrices so its nullspace is a vector space. Obviously
not all 2 by 2 matrices have trace zero, so the dimension of the nullspace is at most
three. Here is a collection of three independent matrices with trace 0, which must
therefore constitute a basis:
$$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
(b) Find a basis for the plane given by $(3, 6, 1) \cdot (x, y, z) = 0$.

Solution:
$\{ (6, -3, 0), (0, 1, -6) \}$. These are two independent vectors in the plane, which cannot have dimension exceeding two (since it is not all of $\mathbb{R}^3$). So they form a basis.
(c) Find a basis for the hyperplane given by $(3, 6, 1, 5) \cdot (x, y, z, w) = 0$.

Solution:
$\{ (6, -3, 0, 0), (0, 1, -6, 0), (0, 0, 5, -1) \}$. These are three independent vectors in the hyperplane, which cannot have dimension exceeding three (since it is not all of $\mathbb{R}^4$). So these vectors form a basis.
(d) Find a basis for the set of polynomials in the variable $t$. (hint: This will be an infinite basis.)

Solution:
The easiest basis is $\{ 1, t, t^2, \ldots \}$, where we will not dwell upon the meaning of "$\ldots$" You know what I mean, right? It is obvious that this set spans the vector space of polynomials, since any polynomial is by definition a linear combination of these powers. We need to show it is an independent set.
Suppose there is a sum $a_n t^n + a_{n-1} t^{n-1} + \cdots + a_0$ of distinct powers of $t$, where $a_n$ is nonzero, which adds to 0. Then dividing both sides by $a_n t^n$ we find that

$$0 = 1 + \frac{a_{n-1}}{a_n t} + \cdots + \frac{a_0}{a_n t^n}.$$

But the limit, as $t \to \infty$, of the right side is 1, since every term but the first has $t$ in the denominator. Since $0 \neq 1$ we conclude that there is no such nontrivial linear combination, and the set of non-negative powers of $t$ is independent.
(vii). Linear Functions Between Vector Spaces.

Let $W$ denote the set of polynomials in $x$ of degree 5 or less. Consider the linear function $\frac{d^2}{dx^2} : W \to W$.

(a) What is the nullspace of $\frac{d^2}{dx^2}$? Find a basis.

Solution:
By the mean value theorem (applied twice), if $\frac{d^2}{dx^2} f = 0$ then $f(x)$ must be of the form $ax + b$. So $\{ 1, x \}$ is a basis of the nullspace.
(b) What is the image of $\frac{d^2}{dx^2}$? Find a basis.

Solution:
$\frac{d^2}{dx^2}$ reduces the degree of a polynomial by exactly 2 if it started at degree 5, 4, 3 or 2. So the image of $\frac{d^2}{dx^2}$ will consist of certain polynomials of degree 3 or less. On the other hand,

$$\frac{d^2}{dx^2}\left( \frac{a_3}{20} x^5 + \frac{a_2}{12} x^4 + \frac{a_1}{6} x^3 + \frac{a_0}{2} x^2 \right) = a_3 x^3 + a_2 x^2 + a_1 x + a_0.$$

So any polynomial of degree 3 or less is in the image of this differential operator. So $\{ 1, x, x^2, x^3 \}$ is a basis for the image.
(c) Pick a basis $A$ for $W$ and find a matrix $M_{A,A}$ for $\frac{d^2}{dx^2}$.

Solution:
How about the easiest? Let $A = \{ 1, x, x^2, x^3, x^4, x^5 \}$, in that order.

$$M_{A,A} = \left( \left[ \tfrac{d^2}{dx^2}\, 1 \right]_A \; \left[ \tfrac{d^2}{dx^2}\, x \right]_A \; \ldots \; \left[ \tfrac{d^2}{dx^2}\, x^5 \right]_A \right) = \begin{pmatrix} 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 6 & 0 & 0 \\ 0 & 0 & 0 & 0 & 12 & 0 \\ 0 & 0 & 0 & 0 & 0 & 20 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
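The column-by-column construction can be sketched symbolically (sympy assumed): column $j$ holds the $A$-coordinates of the second derivative of the basis element $x^j$.

```python
import sympy as sp

# Rebuild M_{A,A} for d^2/dx^2 on polynomials of degree <= 5,
# basis A = {1, x, x^2, x^3, x^4, x^5}.
x = sp.symbols('x')
basis = [x**j for j in range(6)]

cols = []
for p in basis:
    d2 = sp.diff(p, x, 2)
    # coefficients in ascending powers, padded to 6 coordinates
    asc = [] if d2 == 0 else list(reversed(sp.Poly(d2, x).all_coeffs()))
    cols.append(asc + [0] * (6 - len(asc)))

M = sp.Matrix(6, 6, lambda i, j: cols[j][i])
assert M[0, 2] == 2 and M[1, 3] == 6 and M[2, 4] == 12 and M[3, 5] == 20
assert all(v == 0 for v in M[4, :]) and all(v == 0 for v in M[5, :])
```

The last assertion reflects the two zero rows: the image sits inside the degree-3 polynomials.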
(viii). Nullspace, Columnspace and Solutions.

(a), (b) and (c) Find the nullspace and the columnspace for each matrix:

$$\begin{pmatrix} 1 & 2 & 3 \\ -1 & 5 & 6 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}.$$
Solutions:
(a) Nullspace: $\mathrm{Span}(\{ (-3, -9, 7) \})$. Columnspace: $\mathbb{R}^2$.
(b) Nullspace: $\mathrm{Span}(\{ (-2, 1, 0), (-3, 0, 1) \})$. Columnspace: $\mathrm{Span}(\{ (1, 1) \})$.
(c) Nullspace: $\{ (0, 0) \}$. Columnspace: $\mathbb{R}^2$.
(d), (e) and (f) Represent the solution to the following equations in parametric form (if there is more than one solution) using your work from above:

$$\begin{pmatrix} 1 & 2 & 3 \\ -1 & 5 & 6 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 3 \\ 7 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 7 \\ 7 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3 \\ 7 \end{pmatrix}.$$
Solutions:
(d) Solution Set: $\left( 0, 1, \tfrac{1}{3} \right) + t\,(-3, -9, 7)$ for any real $t$.
This could also be written as $\left( 0, 1, \tfrac{1}{3} \right) + \mathrm{Span}(\{ (-3, -9, 7) \})$.
(e) Solution Set: $(7, 0, 0) + s\,(-2, 1, 0) + t\,(-3, 0, 1)$ for any real $s$ or $t$.
(f) $(1, 1)$ only.
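The nullspace/columnspace answers can be cross-checked exactly (sympy assumed; the matrices use the signs reconstructed above):

```python
import sympy as sp

# Exact nullspaces and columnspaces for (a)-(c).
A = sp.Matrix([[1, 2, 3], [-1, 5, 6]])
B = sp.Matrix([[1, 2, 3], [1, 2, 3]])
C = sp.Matrix([[1, 2], [3, 4]])

(na,) = A.nullspace()                                  # one-dimensional
assert na.cross(sp.Matrix([-3, -9, 7])).norm() == 0    # parallel to (-3,-9,7)
assert len(A.columnspace()) == 2                       # all of R^2

ns = B.nullspace()
assert len(ns) == 2
assert B * sp.Matrix([-2, 1, 0]) == sp.zeros(2, 1)     # hand-computed basis
assert B * sp.Matrix([-3, 0, 1]) == sp.zeros(2, 1)     # vectors are killed
assert len(B.columnspace()) == 1

assert C.nullspace() == [] and len(C.columnspace()) == 2
```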
(ix). More on Nullspace and Columnspace.

(a) Find the nullspace and the columnspace of

$$\begin{pmatrix} 1 & 2 & 3 \\ -1 & 5 & 4 \\ -1 & 5 & 4 \end{pmatrix}.$$

Solution:
Nullspace: $\mathrm{Span}(\{ (1, 1, -1) \})$. Columnspace: $\mathrm{Span}(\{ (1, -1, -1), (2, 5, 5) \})$.
(b) Find the nullspace and the columnspace of

$$\begin{pmatrix} 1 & 5 & 6 \\ 1 & 5 & 6 \\ 1 & 5 & 6 \end{pmatrix}.$$

Solution:
Nullspace: $\mathrm{Span}(\{ (-5, 1, 0), (-6, 0, 1) \})$. Columnspace: $\mathrm{Span}(\{ (1, 1, 1) \})$.
(c) Let $V$ denote the set of 3 by 3 matrices. Show that trace: $V \to \mathbb{R}$ is linear. What is the dimension of the kernel of trace?

Solution:

$$\mathrm{trace}(X + cY) = \sum_{i=1}^{3} (x_{ii} + c\,y_{ii}) = \sum_{i=1}^{3} x_{ii} + c \sum_{i=1}^{3} y_{ii} = \mathrm{trace}(X) + c\,\mathrm{trace}(Y).$$

The dimension of the nullspace plus the dimension of the image must be nine, the dimension of the space of 3 by 3 matrices. Since the image of trace is $\mathbb{R}$, which has dimension one, the dimension of the kernel must be eight.
(d) Pick bases for $V$ and $\mathbb{R}$ from (c). Find a matrix for trace using your bases.

Solution:
Let $A$ be the ordered basis of matrix units: first the three diagonal units

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

and then the six off-diagonal units

$$\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$

for $V$, and let $E$ be the standard basis $\{ 1 \}$ for $\mathbb{R}$. The matrix $M_{A,E}$ is then the row matrix

$$\begin{pmatrix} 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
(x). Some Notation for Solutions.

(a) Let $V$ denote the set of 2 by 2 matrices. trace: $V \to \mathbb{R}$ is linear. Find a basis for the kernel of trace, and use it to find a general solution to $\mathrm{trace}(v) = 4$.

Solution:

$$\begin{pmatrix} 4 & 0 \\ 0 & 0 \end{pmatrix} + r \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + s \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} + t \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$$

for any real $r$, $s$ or $t$.
(b) Let $W$ denote the set of polynomials in $x$ of degree 3 or less. Consider the linear function $\frac{d^2}{dx^2} : W \to W$. Find a basis for the nullspace of $\frac{d^2}{dx^2}$ and use it to find a general solution to $\frac{d^2}{dx^2} f(x) = 4$.

Solution:
$2x^2 + rx + s$ for any real $r$ or $s$.
(xi). Solving Problems in More Advanced Math Classes.
(xii). Inner Products.

(a), (b) and (c) Which of the following

$$x^T \begin{pmatrix} -1 & 2 \\ 2 & 1 \end{pmatrix} y \quad\text{or}\quad x^T \begin{pmatrix} 6 & 2 \\ 2 & 5 \end{pmatrix} y \quad\text{or}\quad x^T \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} y$$

give inner products on $\mathbb{R}^2$? Give evidence.
Solution:
All three satisfy the "linearity in the first vector" condition, because of the linearity of matrix multiplication.
The first, however, is not positive definite:

$$(1\;\; 0) \begin{pmatrix} -1 & 2 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = -1.$$

The third is not symmetric:

$$(4\;\; 1) \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \end{pmatrix} = 17 \neq 13 = (2\;\; 1) \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 4 \\ 1 \end{pmatrix}.$$

The second is symmetric (because the matrix is) but it still needs to be shown to be positive definite.
$$(x\;\; y) \begin{pmatrix} 6 & 2 \\ 2 & 5 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 6x^2 + 4xy + 5y^2.$$

Consider the function $h(x) = 6x^2 + 4xy + 5y^2$, where $y$ is to be regarded as constant. $h''(x) = 12 > 0$, so the horizontal tangent at $x = -\frac{y}{3}$ is a minimum.

$$h\left( -\frac{y}{3} \right) = 6\left( -\frac{y}{3} \right)^2 + 4\left( -\frac{y}{3} \right) y + 5y^2 = \frac{13y^2}{3}.$$

This is obviously nonnegative, and can only be 0 if $y$ itself is 0. But then $h(x) = 0$ only if $x$ is 0 too.
So $(x\;\; y) \begin{pmatrix} 6 & 2 \\ 2 & 5 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} > 0$ unless $(x, y)$ is the zero vector. So the second example is an inner product.
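A shortcut check (numpy assumed; the first matrix uses the sign reconstructed above): a symmetric matrix $G$ gives an inner product $x^T G y$ exactly when all its eigenvalues are positive, which avoids the calculus argument.

```python
import numpy as np

# Symmetry plus positive eigenvalues <=> positive definite.
G1 = np.array([[-1.0, 2.0], [2.0, 1.0]])   # (a): symmetric but indefinite
G2 = np.array([[6.0, 2.0], [2.0, 5.0]])    # (b): the inner product
G3 = np.array([[1.0, 2.0], [0.0, 1.0]])    # (c): not symmetric at all

assert not np.allclose(G3, G3.T)
assert np.linalg.eigvalsh(G1).min() < 0    # fails positive definiteness
assert np.linalg.eigvalsh(G2).min() > 0    # positive definite
```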
(d) Prove that $\langle f, g \rangle = \int_0^1 f(x) g(x)\, dx$ gives an inner product on the vector space of continuous functions on the unit interval.

Solution:
Linearity and symmetry are basic properties of the integral and ordinary arithmetic applied to the integral:

$$\int_0^1 f(x) g(x)\, dx = \int_0^1 g(x) f(x)\, dx$$

and

$$\int_0^1 (f(x) + k\, h(x))\, g(x)\, dx = \int_0^1 f(x) g(x)\, dx + k \int_0^1 h(x) g(x)\, dx.$$

It only remains to demonstrate the positivity condition.
If $f$ is continuous and nonzero at $c \in [0, 1]$ there is some small interval of the form $[a, b]$ near $c$ so that for every point in this interval the magnitude of $f$ is greater than a positive number $\epsilon$. So $(f(x))^2 > \epsilon^2 > 0$ on this interval. So by the definition of the integral as a Riemann sum,

$$\int_0^1 f(x) f(x)\, dx \geq \int_a^b f(x) f(x)\, dx > \int_a^b \epsilon^2\, dx = \epsilon^2 (b - a) > 0.$$

We conclude that $\int_0^1 f(x) f(x)\, dx > 0$ unless $f$ is the zero function.
(e) Prove that $\langle f, g \rangle = \int_0^1 f(x) g(x)\, dx$ does not give an inner product on the vector space of continuous functions on the interval $[0, 2]$.

Solution:
A counterexample to the positivity condition is provided by the function that is 0 between 0 and 1, but has slope 1 between 1 and 2: it is not the zero function, yet $\langle f, f \rangle = 0$.
(f) Let $V$ be the space of constant or first degree polynomials. Define $\langle f, g \rangle$ on $V$ to be $f(0)g(0) + f(1)g(1)$. Is this an inner product?

Solution:
Yes: If $f(0)f(0) + f(1)f(1) = 0$ then both $f(0)$ and $f(1)$ are 0. Since $f$ has degree at most one, that means $f$ is the zero function. The linearity and symmetry conditions follow quickly.
(g) Is $\langle v, w \rangle = v_1 v_2 + v_1 + w_1 + w_1 w_2$ an inner product on $\mathbb{R}^2$?

Solution:
No: $\langle v, 0 \rangle$ will be nonzero if $v = (1, 0)$.
(h) Is $\langle v, w \rangle = v_1 w_1 + v_1 w_2 + v_2 w_1 + v_2 w_2$ an inner product on $\mathbb{R}^2$?

Solution:
No: Positive definiteness fails (though symmetry and the linearity condition do hold): $\langle (1, -1), (1, -1) \rangle = 0$.
(xiii). The Matrix for an Inner Product.

Find the matrix $G$ for the inner product on $\mathbb{R}^2$ given by

$$\langle x, y \rangle = 7x_1 y_1 + x_2 y_1 + x_1 y_2 + 3x_2 y_2.$$

Solution:

$$G = \begin{pmatrix} \langle e_1, e_1 \rangle & \langle e_1, e_2 \rangle \\ \langle e_2, e_1 \rangle & \langle e_2, e_2 \rangle \end{pmatrix} = \begin{pmatrix} 7 & 1 \\ 1 & 3 \end{pmatrix}.$$
(xiv). Orthogonal Complements.

Give $\mathbb{R}^n$ the usual inner product in each case below.

(a) Find the orthogonal complement of $\mathrm{Span}(\{ (3, 6, 1, 1), (4, 9, 1, 0) \})$.

Solution:
Solve the homogeneous system

$$\begin{pmatrix} 3 & 6 & 1 & 1 \\ 4 & 9 & 1 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

The vectors $(-3, 1, 3, 0)$ and $(-9, 4, 0, 3)$ span this vector subspace, which is the orthogonal complement we are looking for.
(b) Find the orthogonal complement of $\mathrm{Span}(\{ (3, 6, 1), (4, 9, 1) \})$.

Solution:
It is $\mathrm{Span}(\{ (-3, 1, 3) \})$. (Use the cross product.)

(c) Find the orthogonal complement of $\mathrm{Span}(\{ (3, 6) \})$.

Solution:
It is $\mathrm{Span}(\{ (6, -3) \})$.
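The method of part (a) — the complement of a span of rows is the nullspace of the matrix with those rows — can be sketched exactly (sympy assumed):

```python
import sympy as sp

# Orthogonal complement of Span({(3,6,1,1), (4,9,1,0)}) as a nullspace.
M = sp.Matrix([[3, 6, 1, 1], [4, 9, 1, 0]])
comp = M.nullspace()
assert len(comp) == 2                      # complement has dimension 4 - 2

# The hand-computed spanning vectors lie in that nullspace:
for w in (sp.Matrix([-3, 1, 3, 0]), sp.Matrix([-9, 4, 0, 3])):
    assert M * w == sp.zeros(2, 1)
```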
(xv). Orthonormal Basis.

(a) Let $V$ be the space of constant or first degree polynomials. Give $V$ the inner product $\langle f, g \rangle = \int_0^3 f(x) g(x)\, dx$. Find an orthonormal basis. What is the angle between $x + 1$ and $x$?

Solution:
If you start with ordered basis $\{ 1, x \}$ and apply the orthonormalization procedure you get $\left\{ \frac{1}{\sqrt{3}}, \; \frac{2}{3} x - 1 \right\}$. The angle between $x + 1$ and $x$ is about $11°$.
(b) With $V$ as above, define the inner product $\langle f, g \rangle$ to be $f(0)g(0) + f(1)g(1)$. Find an orthonormal basis. What is the angle between $x + 1$ and $x$?

Solution:
If you start with ordered basis $\{ 1, x \}$ and apply the orthonormalization procedure you get $\left\{ \frac{1}{\sqrt{2}}, \; \sqrt{2}\, x - \frac{1}{\sqrt{2}} \right\}$. The angle between $x + 1$ and $x$ is about $27°$.
(xvi). Projection onto Subspaces in an Inner Product Space.

Let $W = \mathrm{Span}(\{ (3, 6, 1, 1), (4, 9, 1, 0) \})$ and give $\mathbb{R}^4$ the usual inner product.

(a) Find the matrix with respect to the standard basis for the linear function $\mathrm{Proj}_W$. (hint: First find an orthonormal basis for $W$ and extend it to an orthonormal basis for $\mathbb{R}^4$.)

(b) Find the matrix, as above, for the reflection $\mathrm{Refl}_W$ of points in $\mathbb{R}^4$ across $W$.
Solution:
Apply the Gram-Schmidt process to the ordered basis

$$\{ (3, 6, 1, 1), (4, 9, 1, 0), (-3, 1, 3, 0), (-9, 4, 0, 3) \}$$

and you get a particularly easy basis to calculate. That is because the last two vectors span the orthogonal complement of the first two, cutting by two thirds the projections that must be calculated. The basis you end up with is the ordered basis $S = \{ u_1, \ldots, u_4 \}$ given by

$$\frac{1}{\sqrt{47}}\begin{pmatrix} 3 \\ 6 \\ 1 \\ 1 \end{pmatrix}, \quad \frac{1}{\sqrt{5499}}\begin{pmatrix} -13 \\ 21 \\ -20 \\ -67 \end{pmatrix}, \quad \frac{1}{\sqrt{19}}\begin{pmatrix} -3 \\ 1 \\ 3 \\ 0 \end{pmatrix}, \quad \frac{1}{\sqrt{20007}}\begin{pmatrix} -78 \\ 45 \\ -93 \\ 57 \end{pmatrix}.$$
$\mathrm{Proj}_W$ has a very simple matrix when expressed in these coordinates:

$$M_{S,S} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

This matrix kills the part of any vector that sticks out of the plane of $W$, but acts like the identity matrix in $W$.
However we want the matrix in terms of the standard basis $E$ so we can work on standard coordinates. Denoting by $P_{S,E}$ the matrix of transition from basis $S$ to basis $E$ we have $P_{S,E} = ( [u_1]_E\; [u_2]_E\; [u_3]_E\; [u_4]_E )$, where the columns of this matrix are the standard coordinates of the new basis we calculated above. The matrix we want is

$$M_{E,E} = P_{S,E}\, M_{S,S}\, P_{E,S}.$$

The matrix $P_{E,S}$ is the inverse of $P_{S,E}$, but because both $E$ and $S$ are orthonormal, $P_{S,E}^{-1} = P_{S,E}^T$.
The matrix $K_{S,S}$ for the reflection in $W$ is given by

$$K_{S,S} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}.$$

This matrix preserves the part of a vector in $W$ and reverses the part that sticks out of $W$. It too can be transformed to $K_{E,E}$, so it can work on standard coordinates, by the same process.
(xvii). Approximate Solutions.

(a) Find the point in $\mathbb{R}^2$ which is the best possible solution to

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 6 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3 \\ 7 \\ 1 \end{pmatrix}$$

where distance is measured with the usual inner product.
Solution:
We first find the vector in the columnspace of the matrix closest to $(3, 7, 1)$. We apply the Gram-Schmidt procedure to the columns, yielding

$$u_1 = \frac{1}{\sqrt{46}}(1, 3, 6) \quad\text{and}\quad u_2 = \frac{1}{\sqrt{6509}}(36, 62, -37).$$

The projection of $p = (3, 7, 1)$ onto the columnspace is

$$\langle p, u_1 \rangle u_1 + \langle p, u_2 \rangle u_2 = \frac{3 + 21 + 6}{46}(1, 3, 6) + \frac{108 + 434 - 37}{6509}(36, 62, -37) = \left( \frac{975}{283}, \frac{1915}{283}, \frac{295}{283} \right).$$

This is a point in the image of the linear transformation, so there is a solution, and it is the point in the image nearest to the target.
Now solve

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 6 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \frac{975}{283} \\ \frac{1915}{283} \\ \frac{295}{283} \end{pmatrix}$$

by the usual methods, yielding $x = -\frac{35}{283}$ and $y = \frac{505}{283}$.
(b) Find the points in $\mathbb{R}^2$ which are best possible solutions to

$$\begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3 \\ 7 \end{pmatrix}$$

where distance is measured with the usual inner product.
Solution:
The columnspace is spanned by $(1, 3)$. Projecting the target $(3, 7)$ onto the columnspace yields the closest point $v = \left( \frac{12}{5}, \frac{36}{5} \right)$ in the image of $\begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix}$.
Solving

$$\begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \frac{12}{5} \\ \frac{36}{5} \end{pmatrix}$$

yields

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ \frac{6}{5} \end{pmatrix} + t \begin{pmatrix} -2 \\ 1 \end{pmatrix}$$

for any real $t$.
(xviii). Embedding an Inner Product Space in Euclidean Space.

Find an inner product on a finite dimensional vector space from the examples given above which is not the dot product. Find an orthonormal basis. Satisfy yourself that the inner product corresponds to the dot product on coordinates with respect to this basis.

Solution:
We will examine the bases and inner products from (xv).
Let $V$ be the space of constant or first degree polynomials with ordered basis $A = \{ a_1, a_2 \} = \{ 1, x \}$. If we give $V$ the inner product $\langle f, g \rangle = \int_0^3 f(x) g(x)\, dx$ and apply Gram-Schmidt we get basis $C = \{ c_1, c_2 \} = \left\{ \frac{1}{\sqrt{3}}, \; \frac{2}{3} x - 1 \right\}$. So the matrices of this inner product in these two bases are

$$G_A = \begin{pmatrix} \langle a_1, a_1 \rangle & \langle a_1, a_2 \rangle \\ \langle a_2, a_1 \rangle & \langle a_2, a_2 \rangle \end{pmatrix} = \begin{pmatrix} 3 & 4.5 \\ 4.5 & 9 \end{pmatrix}
\quad\text{and}\quad
G_C = \begin{pmatrix} \langle c_1, c_1 \rangle & \langle c_1, c_2 \rangle \\ \langle c_2, c_1 \rangle & \langle c_2, c_2 \rangle \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
If instead we endow $V$ with inner product $\langle f, g \rangle = f(0)g(0) + f(1)g(1)$ we get an orthonormal basis $D = \{ d_1, d_2 \} = \left\{ \frac{1}{\sqrt{2}}, \; \sqrt{2}\, x - \frac{1}{\sqrt{2}} \right\}$. The matrices of this different inner product in these two bases are

$$H_A = \begin{pmatrix} \langle a_1, a_1 \rangle & \langle a_1, a_2 \rangle \\ \langle a_2, a_1 \rangle & \langle a_2, a_2 \rangle \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}
\quad\text{and}\quad
H_D = \begin{pmatrix} \langle d_1, d_1 \rangle & \langle d_1, d_2 \rangle \\ \langle d_2, d_1 \rangle & \langle d_2, d_2 \rangle \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
(xix). Change of Basis.

Find matrices of transition from ordered basis $A = \{ 1, x \}$ to orthonormal bases $C$ for (a) and $D$ for (b) from the last section.

Solutions:
(a) The matrix of transition from $C = \left\{ \frac{1}{\sqrt{3}}, \; \frac{2}{3} x - 1 \right\}$ to $A = \{ 1, x \}$ is

$$P_{C,A} = \begin{pmatrix} \frac{1}{\sqrt{3}} & -1 \\ 0 & \frac{2}{3} \end{pmatrix} \quad\text{so}\quad P_{A,C} = P_{C,A}^{-1} = \begin{pmatrix} \sqrt{3} & \frac{3\sqrt{3}}{2} \\ 0 & \frac{3}{2} \end{pmatrix}.$$
(b) The matrix of transition from $D = \left\{ \frac{1}{\sqrt{2}}, \; \sqrt{2}\, x - \frac{1}{\sqrt{2}} \right\}$ to $A = \{ 1, x \}$ is

$$P_{D,A} = \begin{pmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ 0 & \sqrt{2} \end{pmatrix} \quad\text{so}\quad P_{A,D} = P_{D,A}^{-1} = \begin{pmatrix} \sqrt{2} & \frac{1}{\sqrt{2}} \\ 0 & \frac{1}{\sqrt{2}} \end{pmatrix}.$$
(xx). Effect of Change of Basis on the Matrix for an Inner Product.

Show that the matrix of transition can be used to turn the matrix of the inner product with respect to the original basis into the identity matrix with respect to orthonormal bases in the two cases above. The inner product becomes the dot product on these coordinates.

Solution:

$$P_{A,D}^T\, H_D\, P_{A,D} = P_{A,D}^T\, I\, P_{A,D} = \begin{pmatrix} \sqrt{2} & \frac{1}{\sqrt{2}} \\ 0 & \frac{1}{\sqrt{2}} \end{pmatrix}^T \begin{pmatrix} \sqrt{2} & \frac{1}{\sqrt{2}} \\ 0 & \frac{1}{\sqrt{2}} \end{pmatrix} = \begin{pmatrix} \sqrt{2} & 0 \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} \sqrt{2} & \frac{1}{\sqrt{2}} \\ 0 & \frac{1}{\sqrt{2}} \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} = H_A.$$
(xxi). Effect of Change of Basis on the Matrix for a Linear Function.

Find the matrices $M_{A,A}$, $M_{C,C}$ and $M_{D,D}$ corresponding to differentiation with respect to the three bases in (xix). Show how to use the matrices of transition to transform each matrix into the other two.

Solution:
$\frac{d}{dx} 1 = 0$ and $\frac{d}{dx} x = 1$, so $M_{A,A} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$.
So

$$M_{C,C} = P_{A,C}\, M_{A,A}\, P_{C,A} = \begin{pmatrix} \sqrt{3} & \frac{3\sqrt{3}}{2} \\ 0 & \frac{3}{2} \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{3}} & -1 \\ 0 & \frac{2}{3} \end{pmatrix} = \begin{pmatrix} 0 & \frac{2\sqrt{3}}{3} \\ 0 & 0 \end{pmatrix}$$

and

$$M_{D,D} = P_{A,D}\, M_{A,A}\, P_{D,A} = \begin{pmatrix} \sqrt{2} & \frac{1}{\sqrt{2}} \\ 0 & \frac{1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\ 0 & \sqrt{2} \end{pmatrix} = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}.$$
(xxii). Basis of Eigenvectors.

(a) Let $u$, $v$ and $w$ be eigenvectors for three different eigenvalues of a linear function $f$. Show that these three vectors form an independent set of vectors.

Solution:
Suppose $u$, $v$ and $w$ are eigenvectors for distinct eigenvalues $\lambda_1$, $\lambda_2$ and $\lambda_3$. Relabel if necessary so that $\lambda_1 \neq 0$. Suppose further that $au + bv + cw = 0$ for certain real numbers $a$, $b$ and $c$. Applying $f$ to this sum and using linearity of $f$ yields $a\lambda_1 u + b\lambda_2 v + c\lambda_3 w = 0$. Dividing this by (nonzero) $\lambda_1$ and subtracting this linear combination from the earlier one yields

$$b\left( 1 - \frac{\lambda_2}{\lambda_1} \right) v + c\left( 1 - \frac{\lambda_3}{\lambda_1} \right) w = 0 \quad\text{so}\quad b\left( 1 - \frac{\lambda_2}{\lambda_1} \right) v = -c\left( 1 - \frac{\lambda_3}{\lambda_1} \right) w.$$
Applying $f$ to both sides of the rightmost equation yields different multiples of each side (since $\lambda_2 \neq \lambda_3$). The only way that can happen is if both sides are the zero vector. It follows that $c$ and $b$ are both 0, which implies that $a = 0$ too. So the three eigenvectors are independent.
The same argument can be adapted to show that any finite number of eigenvectors for different eigenvalues must form an independent set of vectors. (hint: If this set were dependent there would be a dependency involving the fewest number of these vectors. That number of vectors must exceed three, by our work from above. So every coefficient in a linear combination of these particular eigenvectors which adds to the zero vector must be nonzero. Apply the argument from above to exhibit a nontrivial combination of these vectors involving fewer eigenvectors, yielding a contradiction and establishing the result.)
(b) Find a basis of eigenvectors using the Cayley-Hamilton theorem as suggested in the notes for the matrix

$$M = M_{E,E} = \begin{pmatrix} 1 & 0 & 0 \\ 4 & 5 & 0 \\ 7 & 8 & 9 \end{pmatrix}$$

where $E$ is the standard basis of $\mathbb{R}^3$.
(c) Find the matrix of transition from the standard basis $E$ to the new basis $B$. Verify that $P_{E,B}\, M_{E,E}\, P_{B,E} = M_{B,B}$ is diagonal.

Solution:
The characteristic polynomial for this matrix is

$$h(x) = (x - 1)(x - 5)(x - 9).$$

The columnspace of each matrix found below must be killed by the missing factor, and so consists of eigenvectors for the eigenvalue involved in the missing factor. These calculations are so arduous to do by hand that hardware assistance is a practical necessity.

$$(M - I)(M - 5I) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 60 & 64 & 32 \end{pmatrix}$$

$$(M - I)(M - 9I) = \begin{pmatrix} 0 & 0 & 0 \\ -16 & -16 & 0 \\ 32 & 32 & 0 \end{pmatrix}$$

$$(M - 5I)(M - 9I) = \begin{pmatrix} 32 & 0 & 0 \\ -32 & 0 & 0 \\ 4 & 0 & 0 \end{pmatrix}$$

We select ordered basis $B$ of eigenvectors $u_1 = (0, 0, 1)$, $u_2 = (0, 1, -2)$ and $u_3 = (8, -8, 1)$. These are eigenvectors for eigenvalues 9, 5 and 1, respectively.
$M_{B,B}$ must be diagonal, and we verify

$$M_{B,B} = \begin{pmatrix} 9 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 8 \\ 0 & 1 & -8 \\ 1 & -2 & 1 \end{pmatrix}^{-1} \begin{pmatrix} 1 & 0 & 0 \\ 4 & 5 & 0 \\ 7 & 8 & 9 \end{pmatrix} \begin{pmatrix} 0 & 0 & 8 \\ 0 & 1 & -8 \\ 1 & -2 & 1 \end{pmatrix} = P_{E,B}\, M_{E,E}\, P_{B,E}.$$
(d) Suppose $M_{A,A}$ is a matrix for a linear transformation $f : V \to V$ where $V$ has dimension 4 and $A = \{ a_1, a_2, a_3, a_4 \}$. We find that $f$ has eigenvalues 3 and 7 with corresponding eigenvectors $v_1$ and $v_2$. We also find that the vectors $v_1, v_2, a_2, a_3$ (in that order) form a basis $B$. What can you say about the matrix $M_{B,B}$?

Solution:
The matrix will be $( [f(v_1)]_B\; [f(v_2)]_B\; [f(a_2)]_B\; [f(a_3)]_B )$. The first two columns simplify nicely: the first is $3e_1$ and the second is $7e_2$. Nothing can be said about the last two columns.