
Chapter 5

Linear independence, bases and dimension

5.1 Basic concepts

In this section, we will consider an important problem. How should the dimension of a vector space be defined? In fact, the development of a workable
definition for the dimension of a vector space is one of the most important
achievements of basic linear algebra.

5.1.1 The Intuitive Notion of Dimension

One of the most important concepts we will study is the notion of the dimension of a finite dimensional vector space. Roughly speaking, the dimension
of a vector space (or some subset of a vector space) should count its number
of degrees of freedom. Of course, we already have a well formed notion of
dimension for subsets of Rn where n = 1, 2, or 3. An example of a one
dimensional subset of R3 is the path traced out by a point moving smoothly
through R3 . A smooth surface in R3 (without any thickness) would give an
example of a two dimensional object, and so forth.
In this chapter (among other things), we will formulate the concept of
dimension for any finite dimensional vector space (i.e. any vector space
spanned by a finite set of vectors). In particular, this will apply to any
subspace of the vector space Fn , where F is an arbitrary field. Since our
intuition tells us that R1 , R2 and R3 should have dimensions one, two and
three respectively, we should expect that our general definition will have the
property that the dimension of Rn is n. This suggests that the dimension of
Fn should also be n, but we will see that this is a statement that is relative
to the field F we are using. Thus, our definition of dimension will definitely
depend on the field.
To illustrate the simplest case, consider what happens when F = C.
Since C = R2 , Cn is clearly the same as R2n . Thus, if we ask what is the
dimension of Cn , we see that the answer could arguably be either n or 2n.
This is certainly consistent with having the dimension of Fn be n. What this
means is that when we speak of the dimension of Cn , we should differentiate
between the real dimension (which is 2n) and the complex dimension (which
is n). The real dimension of R2 is two, but when we think of R2 as being
C, its complex dimension is one. Hence we will be able to avoid having a
paradox about the meaning of the dimension of Cn .

5.1.2 The notion of linear independence

The idea behind a linearly independent set of vectors in Rn is simple to state. A set of vectors is linearly independent if no one of them can be expressed as a linear combination of the others. For example, two vectors are linearly independent when they don't lie on the same line through the origin (i.e. they aren't collinear). Three vectors are linearly independent when they don't all lie on a plane through the origin. (Note: any three vectors lie on a plane, but the plane will not necessarily contain the origin.) Thus the situation of two, three or any finite number of vectors failing to be linearly independent will involve a constraint. We will therefore formulate the definition as a constraint.
Now let V denote an arbitrary vector space over a field F.
Definition 5.1. We will say the vectors w1, . . . , wk in V are linearly independent (or, simply, independent) exactly when the vector equation

x1 w1 + x2 w2 + · · · + xk wk = 0    (5.1)

has only the trivial solution x1 = x2 = · · · = xk = 0. If a nontrivial solution exists, we will call the vectors linearly dependent (or, simply, dependent).
One of the first things to notice is that any set of vectors in V that includes 0 is dependent (why?). The first result gives a reformulation of the concept of independence.
Proposition 5.1. A set of vectors is linearly dependent if and only if one
of them can be expressed as a linear combination of the others.

Proof. Suppose first that one of the vectors, say w1, is a linear combination of the others. That is,

w1 = a2 w2 + · · · + ak wk.

Thus

w1 − a2 w2 − · · · − ak wk = 0,

so (5.1) has a solution with x1 = 1, thus a nontrivial solution. Therefore w1, . . . , wk are dependent. Conversely, suppose w1, . . . , wk are dependent. This means that there is a solution x1, x2, . . . , xk of (5.1), where some xi ≠ 0. We can assume (just by reordering the vectors) that the nonzero coefficient is x1. Then we can write

w1 = a2 w2 + · · · + ak wk,

where ai = −xi/x1, so the proof is done.
[Figure: linearly dependent and linearly independent vectors]
The following fact gives one of the important properties of linearly independent sets.
Proposition 5.2. Assume that w1 , . . . , wk are linearly independent vectors
in Fn and suppose v is in their span. Then there is exactly one linear
combination of w1 , . . . , wk which gives v.
Proof. This is an important fact, so even though it may appear obvious, let us prove it carefully. The point is that if v can be expressed in two ways, then it would be possible by subtraction to find a linear combination of the wi that gives the zero vector where some coefficient isn't zero, contradicting linear independence. To be specific, let there be two expressions

v = r1 w1 + r2 w2 + · · · + rk wk

and

v = s1 w1 + s2 w2 + · · · + sk wk,

where the ri and si are all elements of F. By subtracting and doing a little algebra, we get

0 = v − v = (r1 − s1)w1 + (r2 − s2)w2 + · · · + (rk − sk)wk.

Since the wi are independent, every coefficient ri − si = 0, which proves the proposition.

When V = Fn, the definition of linear independence involves considering a linear system. Recalling that vectors in Fn are viewed as column vectors, consider the n × k matrix A = (w1 . . . wk). By the theory of linear systems (Chapter 4), we have
Proposition 5.3. The vectors w1, . . . , wk in Fn are linearly independent exactly when the system Ax = 0 has no nontrivial solution, which is the case exactly when the rank of A is k. In particular, more than n vectors in Fn are linearly dependent.
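To make Proposition 5.3 concrete, here is a small computational sketch. It uses Python with the sympy library, which is an assumption of this illustration rather than anything from the text; the given vectors are placed as the columns of A, and independence is decided by comparing rank(A) with k.

    from sympy import Matrix

    # Columns are three vectors w1, w2, w3 in Q^4 (arbitrary illustrative data).
    A = Matrix([[1, 2, 0],
                [0, 1, 1],
                [2, 2, 1],
                [1, 1, 0]])

    k = A.shape[1]
    # By Proposition 5.3, the wi are independent exactly when rank(A) = k.
    print(A.rank() == k)   # True: these three columns are linearly independent

The same test with more than four columns would always print False, since more than n vectors in Fn are dependent.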

5.1.3 Finite dimensional vector spaces and bases

Let's begin by defining what is meant by a finite dimensional vector space.


Definition 5.2. A vector space V over F is said to be finite dimensional
over F if there exists a finite subset of V that spans V . If no finite spanning
set exists, we say that V is infinite dimensional.
Clearly, Fn is a finite dimensional vector space over F. Similarly, Cn is finite dimensional over R, but, for example, Rn is not finite dimensional over, say, Q. Of course, it is intuitively obvious that if V is a finite dimensional vector space then so is any subspace, but this has to be proven, and will be done below.
Now suppose W is a subspace of Fn . Intuitively, the dimension of W is
the number of degrees of freedom one has in W . The problem is to make
this idea precise. The solution uses the notion of a basis.
Let V be any vector space over F.
Definition 5.3. A collection of vectors in V which is both linearly independent and spans V is called a basis of V .
Notice that we have not required that a basis be a finite set. Usually, however, we will deal with vector spaces that have a finite basis. One of the things we would like to know is whether every finite dimensional vector space has a basis. Of course, Fn has a basis, namely the standard basis vectors, or, in other words, the columns of the identity matrix In over F.
Any non zero vector in Rn spans a line, and clearly a single non zero vector is linearly independent. Hence a line has a basis consisting of a single element. A plane P through the origin is spanned by any two non collinear vectors on P, and any two non collinear vectors on P are linearly independent. Thus P has a basis consisting of two vectors. It should be noted that the trivial vector space {0} does not have a basis, since in order to contain a linearly independent subset it has to contain a nonzero vector.
Proposition 5.2 allows us to deduce an elementary property of bases.
Proposition 5.4. The vectors v1, . . . , vr in V form a basis of V if and only if every vector v in V admits a unique expression

v = a1 v1 + a2 v2 + · · · + ar vr,

where a1, a2, . . . , ar are elements of F.
Example 5.1. Let's consider a very basic example. Suppose A is an m × n matrix over F. Then we have already considered the column space of A,
which is a subspace of Fm . If A has rank n, the columns are independent,
and hence form a basis of the column space. This gives a very useful criterion for determining whether or not a given set of vectors is a basis of the
subspace they span. However, if the rank of A is less than n, the columns
are dependent, so we still have the problem of how to find a basis from a
given spanning set that is dependent. We will discuss this below.

5.1.4 The Definition of Dimension

When we consider the problem of the definition of dimension, the first thing that comes to mind is that a line cannot contain two independent vectors, nor can a plane contain three linearly independent vectors, and, as we noted above, Rn can't contain n + 1 independent vectors. Also, a line has a basis consisting of one vector, a plane two vectors, and so on. This suggests that we should define the dimension of a finite dimensional vector space (FDVS) V to be the maximal number of linearly independent vectors in V, and that the dimension should coincide with the number of elements in a basis. With this definition, the dimension of a line is one, a plane is two and Rn is n.
The way we proceed is to make the following definition. First of all, we
only consider FDVSs.
Definition 5.4. The dimension of a FDVS V is the number of elements in a basis of V. By legislation, the dimension of the trivial vector space {0} is 0. The dimension of V will be denoted by dim V, or by dimF V in case there is a chance of confusion about which field is under consideration.
Of course, in order to be able to use this definition, we need to know two
facts: first, every FDVS has a basis, and, second, every two bases have the
same number of elements.

To consider a special case, let's first show that any basis of Fn has n elements. Suppose first that the vectors w1, . . . , wk span Fn, and let A = (w1 . . . wk). Since the columns of A span Fn, Ax = b is consistent for all b ∈ Fn. This implies the rank of the coefficient matrix A is n. Thus k ≥ n. Hence a spanning set of Fn has to have at least n elements. Moreover, we know w1, . . . , wk are independent if and only if the rank of A is k. Thus, by Proposition 5.3, k ≤ n. Hence a basis of Fn has exactly n elements. In fact, we can even say more.
Proposition 5.5. Every basis of Fn contains exactly n vectors. Moreover, n
linearly independent vectors in Fn span Fn and hence are a basis. Similarly
n vectors in Fn that span Fn are linearly independent and hence are also a
basis.
Proof. We already verified the first statement. Now if w1, . . . , wn are independent, then A = (w1 . . . wn) has rank n, so w1, . . . , wn also span Fn since A is n × n. Similarly, if w1, . . . , wn span Fn, they are independent, again since A is n × n of rank n. This proves the proposition.
Here are some examples, most of them familiar.
Example 5.2. Let ei denote the ith column of In . The vectors e1 , . . . , en
give the so called standard basis of Rn . In fact, the vectors e1 , . . . , en make
sense over any field F and are always a basis of the corresponding Fn .
Example 5.3. The dimension of a line is 1 and that of a plane is 2. The dimension of the hyperplane a1x1 + · · · + anxn = 0 in Rn is n − 1, provided some ai ≠ 0. Note that the n − 1 fundamental solutions form a basis of the hyperplane.
Example 5.4. Let A = (w1 w2 . . . wn) be n × n over F, and suppose A has rank n. Then the columns of A are a basis of Fn. Indeed, the columns span Fn since we can express an arbitrary b ∈ Fn as a linear combination of the columns due to the fact that the system Ax = b is consistent for all b. We are also guaranteed that 0 is the only solution of the system Ax = 0. This implies that the columns of A are independent. Thus, the columns of an n × n matrix over F of rank n are a basis of Fn. (Note that we have just repeated part of the discussion given above.)
Example 5.5. Let A be an m × n matrix over F. Then the fundamental solutions of Ax = 0 are a basis of the solution space N(A), which is a subspace of Fn. Consequently we have the identity

dim N(A) = n − rank(A).
This is just a restatement of the counting principle of Section 4.2.7.
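A quick numerical check of this identity can be sketched with sympy (again an assumed tool, with arbitrary example data): the number of fundamental solutions returned by nullspace() matches n − rank(A).

    from sympy import Matrix

    A = Matrix([[1, 2, 1, 0],
                [0, 1, 1, 1]])       # a 2 x 4 example over Q

    n = A.shape[1]
    basis = A.nullspace()            # the fundamental solutions of Ax = 0
    print(len(basis), n - A.rank())  # prints "2 2": dim N(A) = n - rank(A)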
Example 5.6. Referring back to the question about the dimension of the
vector space C, we note that by the above definition, dimR C = 2, while
dimC C = 1.
Example 5.7. For any positive integer n, let Pn denote the space of polynomials with real coefficients of degree at most n (cf. ??). Let's determine a basis of P3. Consider the polynomials 1, x, x^2, x^3. I claim they are linearly independent. To see this, we have to show that if

y = ∑_{i=0}^{3} ai x^i = 0

for every x, then each ai = 0. Now if y = 0, then

y(0) = a0 = 0, y′(0) = a1 = 0, y″(0) = 2a2 = 0, y‴(0) = 6a3 = 0.

Hence we have the asserted linear independence. It is obvious that 1, x, x^2, x^3 span P3, so our job is done.
Another source of relevant examples of vector spaces consists of the solution spaces of homogeneous linear differential equations of the form

y^(m) + a1 y^(m−1) + · · · + a_{m−1} y′ + a_m y = 0,

where a1, . . . , am are arbitrary real constants. It turns out that there is a theorem in differential equations that says that the dimension of the solution space of any such equation is always m. When m = 4 and every ai = 0, then we are dealing with the vector space P3 of the last example. The solution space of y″ + y = 0 consists of all linear combinations of sin and cos.
In the next subsection, we will establish the basic facts needed to show
that our definition of dimension makes sense.

5.1.5 Discussion of dimension

We begin by showing that two finite bases of a vector space have the same
number of elements.
Theorem 5.6. Let V be a vector space over F which has a finite basis
v1 , . . . , vk . Then any other basis of V also has k elements.

Proof. Suppose u1, . . . , uj is another basis of V, and assume that k ≠ j. Without any loss of generality, we may suppose k > j. Expanding v1 in terms of u1, . . . , uj, we obtain an expression

v1 = r1 u1 + r2 u2 + · · · + rj uj.

Since some coefficient on the right is non zero, we may, after renumbering 1, 2, . . . , j if necessary, suppose r1 ≠ 0. I claim that this implies that v1, u2, . . . , uj is also a basis of V. To see v1, u2, . . . , uj are independent, suppose

x1 v1 + x2 u2 + · · · + xj uj = 0.

If x1 ≠ 0, then

v1 = y2 u2 + · · · + yj uj,

where yi = −xi/x1. Since r1 ≠ 0, this gives two distinct ways of expanding v1 in terms of the ui, which contradicts their independence. Hence x1 = 0. It follows immediately that all xi = 0 (why?), so v1, u2, . . . , uj are indeed independent. We leave the proof that v1, u2, . . . , uj span V as an exercise, hence they give a basis of V. Now repeat exactly the same argument to show that after suitably renumbering u2, . . . , uj, we can replace u2 by v2 to obtain another basis v1, v2, u3, . . . , uj of V. Since we may continue the process, it turns out that v1, . . . , vj is a basis of V. But as j < k, it follows that vj+1 is a linear combination of v1, . . . , vj, which contradicts the linear independence of v1, . . . , vk. We have now arrived at a contradiction, so we can conclude that k ≤ j. As the situation is symmetric, it follows that j ≤ k too, so j = k and the Dimension Theorem is proven.
The idea behind the above proof is sometimes called the replacement principle. We next show that every finite dimensional vector space has a basis.
Theorem 5.7. Let V be a finite dimensional vector space over F. Then
every spanning set in V contains a basis.
Proof. Let v1, . . . , vk span V. Consider all subsets of v1, . . . , vk which also span V, and choose any such subset with a minimal number of elements. I claim that any such minimal subset is a basis. That it spans is obvious, by definition, so we only need to verify independence. But if such a set were dependent, then one of its elements would be a linear combination of the others, so deleting it would give a smaller subset which is also a spanning set. This is impossible by minimality, so the proof is finished.
We now know that every finite dimensional vector space has a well defined dimension.


5.1.6 Further properties

We still need to establish some more properties of FDVSs. First of all, we have
Proposition 5.8. Let V be a FDVS, say dim V = k. Then any subset of V containing more than k elements is dependent.
Proof. It suffices to show that any subset of k + 1 elements of V is dependent. Suppose to the contrary that v1, . . . , vk+1 are independent, and let u1, . . . , uk be a basis of V. Applying the replacement argument of Theorem 5.6, we get that v1, . . . , vk is in fact a basis, so v1, . . . , vk+1 can't be independent, a contradiction.
Theorem 5.9. If V is any finite dimensional vector space, then every linearly independent set of elements of V is contained in a basis.
Proof. Let dim V = k. Now suppose v1, . . . , vm ∈ V are linearly independent, and let W denote the subspace they span. If W ≠ V, there exists a vector vm+1 ∈ V such that vm+1 ∉ W. I claim that v1, . . . , vm, vm+1 are linearly independent. Indeed, if

a1 v1 + · · · + am vm + am+1 vm+1 = 0,

and am+1 ≠ 0, we get a contradiction to the choice of vm+1. Thus am+1 = 0, and it follows immediately that all the ai = 0. Hence, v1, . . . , vm, vm+1 are linearly independent. We can continue in this way, at each step obtaining a larger subspace of V. But Proposition 5.8 says this can't go on forever, so eventually we obtain a basis of V.
We also need to show the natural fact that every subspace of a FDVS is
also finite dimensional.
Theorem 5.10. Every subspace W of a finite dimensional vector space V is finite dimensional. In particular, for any subspace W of V, dim W is defined and dim W ≤ dim V.
Proof. Consider all finite subsets of W that are linearly independent. By Proposition 5.8, each such subset has at most dim V elements. Among these, choose one, say w1, . . . , wm, with a maximal number of elements. I claim w1, . . . , wm span W. For if not, there is a vector v ∈ W not in the span of w1, . . . , wm. Then, imitating the argument we gave in the proof of Theorem 5.9, we get that w1, . . . , wm, v are also linearly independent. But this contradicts our choice, so w1, . . . , wm span W. The fact that m = dim W ≤ dim V follows immediately from Proposition 5.8.

We should point out that in the above proof we used the Axiom of
Choice to select w1 , . . . , wm . We can easily avoid this appeal, which some
mathematicians find unappealing, but to do so we have to give a slightly
more complicated argument. This goes as follows.
Proof. Again, we may suppose W contains a non zero vector w1 ∈ W, and let L be the line in W that w1 spans. If L = W we are through. If L ≠ W, then there exists a w2 ∈ W not on L. Clearly w1 and w2 span a plane P. If P = W we are through; otherwise choose w3 ∈ W not in P. The point is that after choosing w1, w2, . . . , wm ∈ W so that the span of w1, w2, . . . , wi is a proper subspace of the span of w1, w2, . . . , wi+1 for each i = 1, . . . , m − 1, we have a linearly independent set. But V cannot contain more than dim V linearly independent vectors, so for some m ≤ dim V, w1, w2, . . . , wm span W (in fact are a basis).
This argument also reproves the fact that any linearly independent subset of W is contained in a basis.
There is a nice application to finite dimensional vector spaces V over Zp, p a prime.
Proposition 5.11. Let V be a finite dimensional vector space over Zp, where p is a prime. Then the number of elements of V is exactly p^dim V.
The proof goes as follows. Let k = dim V and choose a basis v1, . . . , vk of V. Then every v ∈ V has a unique expression

v = a1 v1 + a2 v2 + · · · + ak vk,

where a1, a2, . . . , ak are scalars, that is, elements of Zp. Thus it is simply a matter of counting such expressions. In fact, since there are p choices for each ai, and different choices give different elements of V, there are exactly p · p · · · p = p^k vectors in V.
Thus, for example, when F = Zp, a line in Fn has p elements, a plane has p^2, and so forth. We find this equality extremely useful in linear coding theory.
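The counting argument can also be mimicked directly on a computer. The short plain-Python sketch below (the two vectors over Z3 are an assumption of the illustration, not taken from the text) enumerates every linear combination of two independent vectors and confirms that the span has p^2 elements.

    from itertools import product

    p = 3                                  # the prime, so the field is Zp = Z3
    v1 = (1, 0, 2, 1)                      # two independent vectors in (Z3)^4
    v2 = (0, 1, 1, 2)

    span = set()
    for a, b in product(range(p), repeat=2):   # every choice of coefficients
        span.add(tuple((a*x + b*y) % p for x, y in zip(v1, v2)))

    print(len(span), p**2)                 # prints "9 9", as Proposition 5.11 predicts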

5.1.7 Extracting a basis

We know from Theorem 5.7 that any spanning set of a FDVS contains a basis. In fact, the subsets which give bases are exactly the minimal spanning subsets. Frequently, however, we need an explicit method for actually extracting one of these subsets. One such explicit method (for subspaces of Fn) is based on row reduction. This goes as follows.

Suppose w1, . . . , wk ∈ Fn, and let W be the subspace they span. Consider the n × k matrix A = (w1 . . . wk). We seek the columns of A which are a basis of the column space W = col(A). The result may seem a little surprising since it involves row reducing A, which of course changes col(A).
Proposition 5.12. The columns of A that correspond to a corner entry in Ared are a basis of the column space col(A) of A. Therefore, the dimension of col(A) is the rank of A.
Proof. The key observation is that Ax = 0 if and only if Ared x = 0 (why?).
This says any expression of linear dependence among the columns of Ared
gives an expression of linear dependence among the columns of A using
exactly the same coefficients, and conversely. For example, if column five
of Ared is the sum of the first four columns of Ared , then this same relation
holds for the first five columns of A. The reader should supply the reason
for this. But it is obvious that the columns of Ared containing a corner
entry are a basis of the column space of Ared (which, of course, is different
from the column space of A). Hence these columns of A are at least linearly
independent. In fact, since every non corner column in Ared is a linear
combination of the corner columns of Ared , the same has to be true for A
from what we said above. This shows that the corner columns in A have to
span W , and the proof is complete.
Example 5.8. To consider a simple example, let

A = [ 1  2  2 ]
    [ 4  5  8 ]
    [ 7  8 14 ]

Then

Ared = [ 1 0 2 ]
       [ 0 1 0 ]
       [ 0 0 0 ]

Proposition 5.12 implies the first two columns are a basis of col(A). Notice that the first and third columns are dependent in both A and Ared, as the Proposition guarantees. The Proposition says that the first two columns are a basis of the column space, but makes no assertion about the second and third columns, which in fact are also a basis.
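The corner-column recipe of Proposition 5.12 is easy to carry out mechanically. The sympy sketch below assumes the matrix of Example 5.8 is as printed above and reads off the pivot (corner) columns returned by rref.

    from sympy import Matrix

    A = Matrix([[1, 2, 2],
                [4, 5, 8],
                [7, 8, 14]])

    Ared, pivots = A.rref()              # reduced form and the corner column indices
    print(pivots)                        # (0, 1): the first two columns are corner columns
    basis = [A.col(j) for j in pivots]   # these columns of A are a basis of col(A)
    print(len(basis) == A.rank())        # True: dim col(A) = rank(A)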

5.1.8 Sums of subspaces and the Hausdorff formula

We often need to consider the intersection of a finite number of subspaces of Fn, or of any vector space for that matter. Recall that the intersection of two subspaces V and W is denoted by V ∩ W, and is, by definition, the set of all elements lying in both V and W. In fact, the solution space of a homogeneous linear system is exactly the intersection of a finite number of hyperplanes in Fn.
Proposition 5.13. The intersection V ∩ W of two subspaces V and W is also a subspace. More generally, the intersection of any number of subspaces is a subspace.
Proof. We will do the case of two subspaces, since the general case is similar. We need to first show that if v and w lie in the intersection, then v + w does also. But v + w lies in V since V is a subspace, and, moreover, it lies in W also, for the same reason. Hence it lies in V ∩ W. Secondly, we need to show that if x ∈ V ∩ W, then rx ∈ V ∩ W for every r ∈ F. But rx obviously lies in both V and W, so the intersection is a subspace.
On the other hand, the union V ∪ W is not a subspace (why?). The smallest subspace containing V ∪ W is called the sum of V and W.
Definition 5.5. The vector space sum of two subspaces V and W of Fn is the set

V + W = {v + w | v ∈ V, w ∈ W}.

More generally, one can define the sum of any number of subspaces, say V1 + V2 + · · · + Vk, in exactly the same way.
Proposition 5.14. The vector space sum of a finite collection of subspaces V1, V2, . . . , Vk of Fn is a subspace. In fact, it is the smallest subspace containing every Vi.
The proof is left as an exercise.
We now ask: what are the dimensions of the sum V + W and the intersection V ∩ W? It turns out that one depends on the other. They are related by the Hausdorff Intersection Formula.
Theorem 5.15. If V and W are subspaces of Fn, then

dim(V + W) = dim V + dim W − dim(V ∩ W).    (5.2)

Proof. To begin, note that V ∩ W is a subspace of V, W and V + W. Start with a basis x1, . . . , xk of V ∩ W. Now extend this basis to a basis of V, say x1, . . . , xk, vk+1, . . . , vk+r, and do likewise for W, getting the basis x1, . . . , xk, wk+1, . . . , wk+s. Clearly, the union of all these vectors spans V + W. (Check this.) Now I claim that

x1, . . . , xk, vk+1, . . . , vk+r, wk+1, . . . , wk+s

are in fact independent. So suppose

∑_{i=1}^{k} αi xi + ∑_{j=k+1}^{k+r} βj vj + ∑_{m=k+1}^{k+s} γm wm = 0.    (5.3)

Thus

∑ γm wm = −∑ αi xi − ∑ βj vj ∈ V.

In other words, ∑ γm wm ∈ V ∩ W. Thus

∑ γm wm = ∑ δi xi

for some δi ∈ F. Substituting this into (5.3) gives an expression

∑ α′i xi + ∑ βj vj = 0,

from which we infer that all the α′i and all βj are 0, since the xi and vj are a basis of V. Using just that each βj = 0 and referring back to (5.3), we get

∑ αi xi + ∑ γm wm = 0,

hence all αi and γm are 0. Thus the proof of independence is complete. To finish the proof, we need to count dimensions. We have that

dim(V + W) = k + r + s = (k + r) + (k + s) − k = dim V + dim W − dim(V ∩ W).
This result has an interesting consequence which leads to a deeper understanding of how subspaces intersect.
Corollary 5.16. If V and W are subspaces of Fn, then

dim(V ∩ W) ≥ dim V + dim W − n.    (5.4)

Proof. Since V and W are both subspaces of Fn, dim(V + W) ≤ n. Now substitute this into the Hausdorff Formula (5.2).
Example 5.9. For example, (5.4) implies that the intersection P1 ∩ P2 of two planes in R3 has to contain a line. For dim(P1 ∩ P2) ≥ 2 + 2 − 3 = 1. More generally, the intersection H1 ∩ H2 of two hyperplanes in Rn has dimension at least 2(n − 1) − n = n − 2. On the other hand, the intersection of two planes in R4 does not have to contain a line, since 2 + 2 − 4 = 0 doesn't tell us anything.
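In practice the formula (5.2) is most often used to find dim(V ∩ W) from quantities that are easier to compute directly. The following sympy sketch (the two subspaces of Q^4 are arbitrary choices made for the illustration) gets dim V, dim W and dim(V + W) as ranks of spanning matrices and then reads off dim(V ∩ W).

    from sympy import Matrix

    # V and W are subspaces of Q^4 given by spanning columns.
    V = Matrix([[1, 0],
                [0, 1],
                [1, 1],
                [0, 0]])
    W = Matrix([[0, 0],
                [1, 0],
                [1, 0],
                [0, 1]])

    dimV, dimW = V.rank(), W.rank()
    dim_sum = V.row_join(W).rank()       # dim(V + W) = rank of the combined spanning set
    dim_int = dimV + dimW - dim_sum      # the Hausdorff formula (5.2)
    print(dimV, dimW, dim_sum, dim_int)  # prints "2 2 3 1"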

Abstract vector spaces whose dimension isn't finite are infinite dimensional. An obvious example of an infinite dimensional vector space is C(a, b), the space of continuous real valued functions on [a, b]. In fact, it is not hard to show that the functions

1, x, x^2, x^3, . . . , x^n, . . .

give an infinite number of linearly independent functions on any closed interval [a, b]. One of the basic tools for studying f ∈ C(a, b) is to use the inner product to project f into various finite dimensional subspaces. We will return to this later.
Exercises
Exercise 5.1. Find a basis for the subspace of R4 spanned by
(1, 0, 2, 1), (2, 1, 2, 1), (1, 1, 1, 1), (0, 1, 0, 1), (0, 1, 1, 0)
which contains the first and fifth vectors.

Exercise 5.2. Consider the matrix

A = [ 1 2 0 1 2 ]
    [ 2 0 1 1 2 ]
    [ 1 1 1 1 0 ]

to have entries in Q.
(i) Show that the fundamental solutions are a basis of N (A).
(ii) Find a basis of col(A).
(iii) Repeat (i) and (ii) when A is considered as a matrix over Z3 .
Exercise 5.3. Suppose V is a finite dimensional vector space over a field
F, and let W be a subspace of V .
(i) Show that if dim W = dim V , then W = V .
(ii) Show that if w1, w2, . . . , wk is a basis of W and v ∈ V but v ∉ W, then w1, w2, . . . , wk, v are independent.
Exercise 5.4. Let F be any field, and suppose V and W are subspaces of
Fn .
(i) Show that V ∩ W is a subspace of Fn.
(ii) Let V + W = {u ∈ Fn | u = v + w for some v ∈ V, w ∈ W}. Show that V + W is a subspace of Fn.

Exercise 5.5. Consider the subspace W of V (4, 2) spanned by 1011, 0110,
and 1001.
(i) Find a basis of W and compute |W |.
(ii) Extend your basis to a basis of V (n, 2).
Exercise 5.6. Find a basis of the vector space Mn(R) of real n × n matrices.
Exercise 5.7. A square matrix A over R is called symmetric if A^T = A and called skew symmetric if A^T = −A.
(a) Show that the n × n symmetric matrices form a subspace of Rn×n, the space of n × n real matrices, and compute the dimension of this subspace.
(b) Show that the n × n skew symmetric matrices form a subspace of Rn×n and compute the dimension of this subspace.
(c) Find a basis of R3×3 using only symmetric and skew symmetric matrices.
Exercise 5.8. Show that the set of n × n upper triangular real matrices is a subspace of the vector space of n × n real matrices. Find a basis and its dimension.
Exercise 5.9. If A and B are n × n matrices so that B is invertible (but not necessarily A), show that the ranks of A, AB and BA are all the same.
Exercise 5.10. Let u, v and w be a basis of R3 .
(a) Determine whether or not 3u + 2v + w, u + v + 0w, and u + 2v − 3w are independent.
(b) Find a general necessary and sufficient condition for the vectors a1 u +
a2 v + a3 w, b1 u + b2 v + b3 w and c1 u + c2 v + c3 w to be independent, where
a1 , a2 , . . . , c3 are arbitrary scalars.
Exercise 5.11. True or False: rank(A) ≥ rank(A^2). Explain your answer.
Exercise 5.12. Let W and X be subspaces of a FDVS V of dimension n.
What are the minimum and maximum dimensions that W ∩ X can have? Discuss the case where W is a hyperplane (i.e. dim W = n − 1) and X is a plane (i.e. dim X = 2).
Exercise 5.13. Find a basis of Pn and determine its dimension.
Exercise 5.14. Let F be the field Z2 with two elements. How many elements
does Fn have? How many vectors are there on a line through 0? Find a
formula for the number of vectors in any subspace. Finally, how many n × n matrices over F are there?

Exercise 5.15. Let u1 , u2 , . . . , un be mutually orthogonal unit vectors in
Rn . Are u1 , u2 , . . . , un a basis of Rn ?
Exercise 5.16. Given a subspace W of Rn, define W⊥ to be the set of vectors in Rn orthogonal to every vector in W. Show that W⊥ is a subspace of Rn and describe a method for constructing a basis of W⊥.
Exercise 5.17. Show that if W is a subspace of Rn, then dim(W) + dim(W⊥) = n.
Exercise 5.18. Suppose W is a subspace of Fn of dimension k. Show the
following:
(i) Any k linearly independent vectors in W span W , hence are a basis
of W .
(ii) Any k vectors in W that span W are linearly independent, hence are
a basis of W .
Exercise 5.19. Show that the functions
1, x, x^2, x^3, . . . , x^n, . . .
are linearly independent on any open interval (a, b).
Exercise 5.20. What is the dimension of R when considered as a vector
space over the rational numbers Q?
Exercise 5.21. Prove Proposition 5.18.

5.2 Some applications to linear transformations

First of all, we will prove an extremely fundamental, but very simple, fact
about linear transformations. In essence, this result tells us that given a
basis of a FDVS V , there exists a unique linear transformation taking any
prearranged values on the basis.
Proposition 5.17. Let V and W be any FDVSs over F. Let v1, . . . , vn be any basis of V, and let w1, . . . , wn be arbitrary vectors in W. Then there exists a unique linear transformation T : V → W such that T(vi) = wi for each i. In other words, a linear transformation is uniquely determined by giving its values on a basis.

Proof. The proof is surprisingly simple. Since every v ∈ V has a unique expression

v = ∑_{i=1}^{n} ri vi,

where r1, . . . , rn ∈ F, we can define

T(v) = ∑_{i=1}^{n} ri wi,

so that in particular T(vi) = wi. This in fact does define a transformation, and we can immediately see that T is linear, that is, T(v + w) = T(v) + T(w) and T(rv) = rT(v). Indeed, if v = ∑ αi vi and w = ∑ βi vi, then v + w = ∑ (αi + βi)vi, so

T(v + w) = ∑ (αi + βi)wi = T(v) + T(w).

Similarly, T(rv) = rT(v). Moreover, T is unique, since its value on any v is forced by the expansion of v in the basis.
If V = Fn and W = Fm, we can give an even simpler proof by appealing to matrix theory. Let B = (v1 v2 . . . vn) and C = (w1 w2 . . . wn). Then the matrix A of T satisfies AB = C. But B is invertible since v1, . . . , vn is a basis of Fn, so A = CB^{-1}.
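The matrix identity A = CB^{-1} at the end of the proof is easy to check numerically. Here is a brief sympy sketch; the basis v1, v2 (columns of B) and the prescribed values w1, w2 (columns of C) are arbitrary choices made for the illustration.

    from sympy import Matrix

    B = Matrix([[1, 0],
                [2, 1]])      # columns: a basis v1, v2 of Q^2
    C = Matrix([[3, 1],
                [0, 2]])      # columns: the prescribed values w1, w2

    A = C * B.inv()           # matrix of the unique T with T(vi) = wi
    print(A * B == C)         # True: A sends each basis vector to its prescribed value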
As an application, let's show that if V and W are two vector spaces having the same dimension, then there exists a one to one linear transformation T : V → W such that T(V) = W. This means that in a certain sense we can't distinguish FDVSs of the same dimension. To construct T, choose a basis v1, . . . , vk of V and a basis w1, . . . , wk of W, and let T : V → W be the unique linear transformation such that T(vi) = wi if 1 ≤ i ≤ k. We leave it as an exercise to show that T satisfies all our requirements. Namely, T(V) = W and T is one to one.
Definition 5.6. Let V and W be two vector spaces over F. A linear transformation S : V → W which is both one to one and onto W (i.e. im(S) = W) is called an isomorphism between V and W.
The argument above shows that every pair of subspaces of Fn and Fm of
the same dimension are isomorphic. (One might say a plane is a plane is a
plane.) The converse of this assertion is also true. Any pair of subspaces of
Fn and Fm which are isomorphic have the same dimension. More generally,
Proposition 5.18. Any two finite dimensional vector spaces over the same
field which are isomorphic have the same dimension.
We leave the proof as an exercise.

Example 5.10. Let L(3, 3) be the vector space of all linear transformations T : R3 → R3. Let's compute the dimension of L(3, 3). Consider the transformation Φ : L(3, 3) → R3×3 such that Φ(T) = MT, the matrix of T. Here, R3×3 denotes the space of 3 × 3 matrices over R. It was shown in ?? that Φ is a linear transformation. In fact, Proposition ?? tells us that Φ is one to one and Im(Φ) = R3×3. Hence Φ is an isomorphism in the sense mentioned just above. The idea now is that once we compute the dimension of R3×3, we will also know the dimension of L(3, 3). Indeed, if A1, . . . , Ak are a basis of R3×3, then the linear transformations T1, . . . , Tk such that Ai = Φ(Ti) are a basis of L(3, 3), since Φ is an isomorphism. It is clear that the Ti are independent. The reason they span is that if S ∈ L(3, 3), then we can expand MS as MS = ∑_{i=1}^{k} ai Ai, where a1, . . . , ak ∈ R. Now

Φ(∑_{i=1}^{k} ai Ti) = ∑_{i=1}^{k} ai Φ(Ti) = ∑_{i=1}^{k} ai Ai = MS.

But this means S = ∑_{i=1}^{k} ai Ti, so T1, . . . , Tk give a basis of L(3, 3). It remains to find a basis of R3×3. But this is clear: a basis is given by the matrices Eij such that Eij has a one in the (i, j) position and zeros elsewhere. The matrix A = (aij) = ∑_{i,j} aij Eij, so the Eij span. If ∑_{i,j} aij Eij is the zero matrix, then obviously each aij = 0, so the Eij are also independent. It follows that R3×3 has dimension nine.
This example can easily be extended to the space L(n, m) of linear transformations T : Rn → Rm. In general, the dimension of L(n, m) is mn.
Example 5.11. Let W be any subspace of Rn. Let's apply Proposition 5.17 to show that there exists a linear transformation T : Rn → Rn whose kernel is W. Choose a basis v1, . . . , vk of W and extend this basis to a basis v1, . . . , vn of Rn. Define a linear transformation T : Rn → Rn by putting T(vi) = 0 if 1 ≤ i ≤ k and putting T(vj) = vj if k + 1 ≤ j ≤ n. Then ker(T) = W. For if v = ∑_{i=1}^{n} ai vi ∈ ker(T), then

T(v) = T(∑_{i=1}^{n} ai vi) = ∑_{i=1}^{n} ai T(vi) = ∑_{j=k+1}^{n} aj vj = 0,

so ak+1 = · · · = an = 0. Hence ker(T) ⊂ W. Since we designed T so that W ⊂ ker(T), we are through.
If we extend the basis v1, . . . , vk of W in the previous example so that each vj, k + 1 ≤ j ≤ n, is orthogonal to W, then it turns out that P = In − T is the orthogonal projection of Rn onto W. That is, P(w) = w if w ∈ W and P(v) = 0 if v ∈ V, where V is the span of vk+1, . . . , vn. Projections will be studied in greater detail later.
Exercises
Exercise 5.22. Suppose that V and W are FDVSs, v1, . . . , vk is a basis of V, and w1, . . . , wk is a basis of W. Let T : V → W be the unique linear transformation such that T(vi) = wi if 1 ≤ i ≤ k. Prove that T is an isomorphism.
Exercise 5.23. Suppose that V and W are any FDVSs, and T : V → W is an isomorphism. Show that dim V = dim W.

5.3 The row space of A and the rank of A^T

We now return to linear systems to consider the row space of an m × n matrix A. The row space row(A) of A is defined to be the subspace of Fn spanned by the rows of A. Note that we will treat elements of row(A) as row vectors (instead of the usual column vectors).

5.3.1 The main result

The first fact about the row space of a matrix is about how row operations affect the row space (in fact, how they don't).
Proposition 5.19. Every elementary row operation leaves the row space
of A unchanged. Consequently A and Ared always have the same row space.
Moreover, the non zero rows of Ared are a basis of row(A). Hence the
dimension of the row space of A is the rank of A, that is
dim row(A) = rank(A).
Proof. The first assertion is equivalent to the statement that for any m × m elementary matrix E, row(EA) = row(A). If E is a row swap or a row dilation, this is clear. So we only have to worry about what happens if E is an elementary row operation of the third kind. Suppose E replaces the ith row ri by r′i = ri + krj, where k ≠ 0 and j ≠ i. Since the rows of EA and A are the same except that ri is replaced by r′i, and since r′i is itself a linear combination of two rows of A, every row of EA is a linear combination of some rows of A. Hence row(EA) ⊂ row(A). On the other hand, since ri = r′i − krj and j ≠ i, the same reasoning tells us that row(A) ⊂ row(EA). Hence row(EA) = row(A). Another way to see this is to use the fact that E^{-1} is also elementary, so applying row(EA) ⊂ row(A), we get

row(A) = row(E^{-1}EA) ⊂ row(EA) ⊂ row(A).

Thus row(EA) = row(A).
Therefore row operations do not change the row space, and the first
claim of the proposition is proved. It follows that the non zero rows of Ared
span row(A). We will be done if the non zero rows of Ared are independent.
But this holds for the same reason the rows of In are independent. Every
non zero row of Ared has a 1 in the component corresponding to its corner
entry, and in this column, all the other rows have a zero. Therefore the only
linear combination of the non zero rows which can give the zero vector is
the one where every coefficient is zero. Hence the non zero rows of Ared are
also independent, so they form a basis of row(A). Thus dim row(A) is the
number of non zero rows of Ared , which is also the number of corners in
Ared . Therefore, dim row(A) = rank(A), and this completes the proof.
We now give some examples.
Example 5.12. The 3 × 3 counting matrix C of Example 4.1 has reduced form

Cred = [ 1 0 −1 ]
       [ 0 1  2 ]
       [ 0 0  0 ]

The first two rows are a basis of row(C) since they span row(C) and are clearly independent (why?).
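The same computation can be done mechanically: row reduce and keep the nonzero rows. The sketch below (sympy, under the assumption that the counting matrix has entries 1 through 9) illustrates Proposition 5.19.

    from sympy import Matrix

    C = Matrix([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])          # the 3 x 3 counting matrix

    Cred, _ = C.rref()
    nonzero = [Cred.row(i) for i in range(C.rows) if any(Cred.row(i))]
    print(nonzero)                   # the rows (1, 0, -1) and (0, 1, 2): a basis of row(C)
    print(len(nonzero) == C.rank())  # True: dim row(C) = rank(C)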
Example 5.13. Suppose F = Z2 and

A = [ 1 0 0 1 1 1 ]
    [ 0 1 0 1 0 1 ]
    [ 0 0 1 0 1 1 ]

A is already reduced, so its rows are a basis of row(A), which is thus a three dimensional subspace of F6. A little combinatorial reasoning will allow us to compute the number of elements in row(A). In fact, the answer was already given by Proposition 5.11. Repeating the argument, there are 3 basis vectors and each has 2 possible coefficients, 0 and 1. Thus there are 2 · 2 · 2 = 2^3 vectors in all. The 7 non zero vectors are

(100111), (010101), (001011), (110010), (101100), (011110), (111001).
Note that all combinations of 0s and 1s occur in the first three components,
since the corners are in these columns. In fact, the first three components
tell you which linear combination is involved. Examples of this type will
come up again when we study linear coding theory.

5.3.2 Syndrome Matrices (Optional)

Recall that in the last section, we found a method for finding a basis and
the dimension of the column space of A by row reducing A and finding the
corners. Proposition 5.19 tells us that another way to find a basis of col(A)
is to row reduce AT . (Note that this will give a different basis unless AT is
already reduced.) However, there is still something to add. We have already
seen a method in Example 4.3 for finding homogeneous linear equations for
the column space of A by row reducing A augmented by an extra column of
scalars. This turns out to give a method for constructing a new matrix H
from A so that the column space of A is the null space of H:
col(A) = N (H).
We will call H a syndrome matrix for A. The concept of a syndrome matrix
will turn up in linear coding theory, which we will take up in 20-22.
Example 5.14. Let's consider the 4 × 2 matrix

A = [ 1 0 ]
    [ 0 1 ]
    [ a b ]
    [ c d ]

In order to find equations of col(A) we row reduce the matrix

A′ = [ 1 0 u ]
     [ 0 1 v ]
     [ a b w ]
     [ c d x ]

which gives the result

[ 1 0 u ]
[ 0 1 v ]
[ 0 0 w − au − bv ]
[ 0 0 x − cu − dv ]

This means that w − au − bv = 0 and x − cu − dv = 0 are two homogeneous linear equations such that col(A) is the set of common solutions to both these equations. Another way of putting this is that c = (u, v, w, x)^T ∈ col(A) if and only if

[ −a −b 1 0 ] [ u ]   [ 0 ]
[ −c −d 0 1 ] [ v ] = [ 0 ]
              [ w ]
              [ x ]

hence

H = [ −a −b 1 0 ]
    [ −c −d 0 1 ]

is a syndrome matrix for A.


This example easily generalizes as follows:
Proposition 5.20. Let A be an n × k matrix over any field F of the form

A = [ Ik ]
    [ B  ]

where B is (n − k) × k. Then H = (−B | In−k) is a syndrome matrix for A. That is, col(A) = N(H).
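The claim col(A) = N(H) can be spot-checked numerically. In the sketch below the block B is filled with arbitrary numbers (an assumption of the illustration); H annihilates the columns of A, and a dimension count shows the inclusion is an equality.

    from sympy import Matrix, zeros

    a, b, c, d = 2, 3, 4, 5          # arbitrary entries for the block B
    A = Matrix([[1, 0], [0, 1], [a, b], [c, d]])
    H = Matrix([[-a, -b, 1, 0], [-c, -d, 0, 1]])

    print(H * A == zeros(2, 2))      # True, so col(A) is contained in N(H)
    print(A.rank(), 4 - H.rank())    # prints "2 2": the dimensions agree, so col(A) = N(H)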
Similarly, we can find a syndrome matrix for the row space of a matrix of the form (Ik | B), where B is k × (n − k). We leave the formulation as an exercise.

5.3.3 The Rank of A and the Rank of A^T

Let's end this section with an interesting observation. Namely, the ranks of A and A^T coincide. The reason for this is simple. First of all, as we saw in the last section, dim col(A) is the rank of A. However, we just saw in Proposition 5.19 that the rank of A is also dim row(A). But clearly dim col(A) = dim row(A^T), so this gives some new identities involving rank and dimension.
Proposition 5.21. For any matrix A,

rank(A) = rank(A^T).

Proof. We know that dim row(A) = rank(A) for any matrix, so in particular dim row(A^T) = rank(A^T). But clearly dim col(A) = dim row(A^T), and we also know dim col(A) = rank(A). Therefore, rank(A) = rank(A^T).

The surprising thing here is that even though the spaces col(A) and col(A^T) are themselves completely unrelated, the former being a subspace of Fm and the latter a subspace of Fn, their dimensions are the same. In fact, this is perhaps the most surprising result we've seen so far.
Recall the principle of counting variables (4.2), which said that in any linear system, the number of corner variables plus the number of free variables is the total number of variables. Hence, for any m × n matrix A over a field F,

rank(A) + dim N(A) = n.

Thus

dim row(A) + dim N(A) = dim col(A) + dim N(A) = n.
These counting principles can also be stated in terms of the linear transformation associated to A. Let V and W be FDVSs over F, and let T : V → W be linear.
Definition 5.7. The rank of the linear transformation T : V → W is defined to be the dimension dim im(T) of the image of T. The nullity of T is defined to be dim ker(T).
Clearly rank(A) = rank(T ) if V = Fn , W = Fm and A is the matrix of
T . Hence,
dim ker(T ) + rank(T ) = dim ker(T ) + dim im(T ) = dim Fn = n.
Note that in all of the versions of the counting principle, the dimension m
of Fm does not explicitly appear, although certainly rank(A) ≤ m.
A more general result for linear transformations T : V → W, where V is finite dimensional, is the following:
Proposition 5.22. Suppose T : V → W is linear of rank k < n = dim V. Then there exists a basis v1, v2, . . . , vn of V so that
(i) T(v1), T(v2), . . . , T(vk) is a basis of im(T), and
(ii) vk+1, vk+2, . . . , vn is a basis of ker(T).
(iii) In particular,

dim V = dim ker(T) + dim im(T).
For example, consider the projection Pb : R2 → R2. The kernel of
Pb is the line orthogonal to b (spanned, say, by c) and the image is the
line spanned by b. Hence a basis of R2 fulfilling the requirements of the
Proposition consists of b and c.

Exercises
Exercise 5.24. Find bases for the row space and the column space of each of the matrices in Exercise 1 of ??.
Exercise 5.25. In this problem, the field is Z2. Consider the matrix

A = [ 1 1 1 1 1 ]
    [ 0 1 0 1 0 ]
    [ 1 0 1 0 1 ]
    [ 1 0 0 1 1 ]
    [ 1 0 1 1 0 ]

(a) Find a basis of row(A).


(b) How many elements are in row(A)?
(c) Is (01111) in row(A)?
Exercise 5.26. Suppose A is any real m × n matrix. Show that when we view both row(A) and N(A) as subspaces of Rn,

row(A) ∩ N(A) = {0}.

Is this true for matrices over other fields, e.g. Z2 or C?
Exercise 5.27. Show that if A is any symmetric real n × n matrix, then col(A) ∩ N(A) = {0}.
Exercise 5.28. Suppose A is a square matrix over an arbitrary field such that A^2 = O. Show that col(A) ⊂ N(A). Is the converse true?
Exercise 5.29. Suppose A is a square matrix over an arbitrary field. Show that if A^k = O for some positive integer k, then dim N(A) > 0.
Exercise 5.30. Suppose A is a symmetric real matrix so that A^2 = O. Show that A = O. In fact, show that col(A) ∩ N(A) = {0}.
Exercise 5.31. Find a non zero 2 × 2 symmetric matrix A over C such that A^2 = O. Show that no such matrix exists if we replace C by R.
Exercise 5.32. For two vectors x and y in Rn, the dot product x · y can be expressed as x^T y. Use this to prove that for any real matrix A, A^T A and A have the same nullspace. Conclude that A^T A and A have the same rank. (Hint: consider x^T A^T Ax.)
Exercise 5.33. Prove Proposition 5.22. (Hint: begin with a basis w1, w2, . . . , wk of im(T), and choose v1, v2, . . . , vk appropriately. Then select a basis of ker(T) and show that the resulting set is a basis of V.)

Exercise 5.34. Formulate a version of Proposition 5.20 for a syndrome matrix for the row space of a matrix of the form A = (Ik | B), where B is k × (n − k).
Exercise 5.35. Find a syndrome matrix for the row space of the 3 × 6 matrix A over Z2 in Example 5.13.
Exercise 5.36. In the proof of Proposition 5.19, we showed that row(A) = row(EA) for any elementary matrix E. Why does this follow once we know row(EA) ⊂ row(A)?
Exercise 5.37. Show that for any matrix A,

rank(A) = dim col(A) = dim row(A) = dim col(A^T) = rank(A^T).

5.4 Coordinates and change of bases

In this section, we will define and study the coordinates of a vector with respect to an arbitrary basis. We will then consider the matrix of a linear transformation with respect to any choice of bases for the domain and for the target.

5.4.1 Coordinates with respect to a basis

Let v1, v2, . . . , vn be any basis of a vector space V over F. Then every vector w in V has a unique expansion in this basis. That is, there exist unique scalars r1, r2, . . . , rn ∈ F so that

w = r1 v1 + r2 v2 + · · · + rn vn.
Definition 5.8. We will call r1, r2, . . . , rn the coordinates of w with respect to the basis v1, v2, . . . , vn. Often we will write w = <r1, r2, . . . , rn>.
Notice that the notion of coordinates supposes that the basis is ordered. Finding the coordinates of a vector with respect to a given basis is a familiar problem.
Example 5.15. Consider two bases of R2, say

B1 = {(1, 1)^T, (1, −1)^T}  and  B2 = {(1, 2)^T, (0, 1)^T}.

Expanding e1 = (1, 0)^T in terms of these two bases gives two different sets of coordinates for e1. By inspection,

(1, 0)^T = (1/2)(1, 1)^T + (1/2)(1, −1)^T,

and

(1, 0)^T = 1·(1, 2)^T − 2·(0, 1)^T.

We may write the coordinates of e1 with respect to the first basis as <1/2, 1/2>_1 and with respect to the second as <1, −2>_2.
Sometimes one may want to know how two different sets of coordinates for the same vector are related. Staying with the two bases of R2 above, let us set up the appropriate system to decide this. The way we will proceed is to expand the first basis in terms of the second. That is, write

(1, 1)^T = a(1, 2)^T + b(0, 1)^T,  and  (1, −1)^T = c(1, 2)^T + d(0, 1)^T.

These equations are much easier to express in matrix form:

[ 1  1 ]   [ 1 0 ] [ a c ]
[ 1 −1 ] = [ 2 1 ] [ b d ].

Now suppose p has coordinates <r, s>_1 in terms of the first basis and coordinates <x, y>_2 in terms of the second. Then

p = [ 1  1 ] [ r ]   [ 1 0 ] [ x ]
    [ 1 −1 ] [ s ] = [ 2 1 ] [ y ].

Now from the previous expression,

[ 1  1 ] [ r ]   [ 1 0 ] [ a c ] [ r ]
[ 1 −1 ] [ s ] = [ 2 1 ] [ b d ] [ s ].

Therefore

[ x ]   [ a c ] [ r ]
[ y ] = [ b d ] [ s ],

since

[ 1 0 ]
[ 2 1 ]

is invertible.
To finish our calculation, we need to compute the matrix with entries a, b, c, d. Clearly

[ a c ]   [ 1 0 ]^{-1} [ 1  1 ]
[ b d ] = [ 2 1 ]      [ 1 −1 ].

Since

[ 1 0 ]^{-1}   [  1 0 ]
[ 2 1 ]      = [ −2 1 ],

we get that

[ a c ]   [  1  1 ]
[ b d ] = [ −1 −3 ].

Taking p = e1, as in the above example, we have r = s = 1/2, x = 1 and y = −2. The two sets of coordinates for e1 are supposed to satisfy the relationship

[  1 ]   [  1  1 ] [ 1/2 ]
[ −2 ] = [ −1 −3 ] [ 1/2 ],

which in fact they do.
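The whole computation can be repeated in a few lines of sympy; the bases used below are those of Example 5.15 as reconstructed above, so the output should be read with that assumption in mind.

    from sympy import Matrix, Rational

    P1 = Matrix([[1, 1], [1, -1]])    # columns: the first basis
    P2 = Matrix([[1, 0], [2, 1]])     # columns: the second basis

    M = P2.inv() * P1                 # the change of coordinates matrix (a c; b d)
    print(M)                          # Matrix([[1, 1], [-1, -3]])

    coords1 = Matrix([Rational(1, 2), Rational(1, 2)])   # e1 in the first basis
    print(M * coords1)                # Matrix([[1], [-2]]): e1 in the second basis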

5.4.2 The matrix of a linear transformation with respect to an arbitrary basis

We know how to find the matrix which says how to make the change of coordinates from one basis to another. Recall from Proposition 5.17 that a linear transformation is well defined once we give its values on a basis of its domain. Up to now, matrices have only used the standard basis. We will now define the matrix expression of a linear transformation for arbitrary bases of the domain and target. The notation here can get messy, so we will stick to the two dimensional case. We will also assume F = R. Let T : R2 → R2 be a linear transformation, and let v1, v2 be any basis of R2. Define the matrix of T with respect to this basis to be the 2 × 2 matrix whose first column is the coordinates of T(v1) with respect to v1, v2 and whose second column is the coordinates of T(v2) with respect to v1, v2.
Example 5.16. Let F = R and suppose v1 and v2 denote respectively (1, 2)^T and (0, 1)^T. Let T : R2 → R2 be the linear transformation such that T(v1) = v1 and T(v2) = 3v2. By Proposition 5.17, T exists and is unique. Now it is clear that the matrix of T with respect to the basis v1, v2 is

[ 1 0 ]
[ 0 3 ].

Thus T has a diagonal matrix in the v1, v2 basis. An important topic we will take up later is eigentheory. This is the theory which will tell us when a linear transformation admits a basis so that its matrix is diagonal.

[Figure: a diagonal linear transformation]
The next question to look at is this: suppose we know the matrix of T with respect to some basis. What is the matrix with respect to another basis? In particular, if we choose the linear transformation T considered in Example 5.16, what is the matrix of T in the standard basis? Let A be the matrix of T in the standard basis, so that T(v) = Av, hence in particular, T(ei) = Aei. Let P = (v1 v2). Then

AP = (Av1 Av2) = (T(v1) T(v2)) = (v1 3v2).

But

(v1 3v2) = (v1 v2) [ 1 0 ]
                   [ 0 3 ] = PD.

Therefore AP = PD, where

D = [ 1 0 ]
    [ 0 3 ]

is the diagonal matrix which represents T in the basis v1, v2. Since P is invertible, it follows that

A = PDP^{-1}.

The matrix of T in the standard basis is therefore

A = PDP^{-1} = [ 1 0 ] [ 1 0 ] [  1 0 ]   [  1 0 ]
               [ 2 1 ] [ 0 3 ] [ −2 1 ] = [ −4 3 ].
The equation which gives the matrix A of T in the standard basis in terms of the basis v1, v2 can be turned around.
Proposition 5.23. If T : Fn → Fn is an arbitrary linear transformation whose matrix in the standard basis is A, then the matrix B of T in another basis v1, v2, . . . , vn of Fn is

B = P^{-1}AP,

where P = (v1 v2 . . . vn).
In the case we considered above, the matrix of T with respect to the basis given by the columns of P is diagonal, and D = P^{-1}AP. In the general case, we still get AP = PB, so B = P^{-1}AP as asserted.
Example 5.17. Let's find the matrix B of the linear transformation of Example 5.16 with respect to the basis (1, 1)^T and (1, −1)^T of R2. Since the matrix with respect to the standard basis is

A = [  1 0 ]
    [ −4 3 ],

we can use the formula of Proposition 5.23, which says

B = [ 1  1 ]^{-1} [  1 0 ] [ 1  1 ]
    [ 1 −1 ]      [ −4 3 ] [ 1 −1 ].

Computing the product gives

B = [ 0 −3 ]
    [ 1  4 ].
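As a sanity check on the arithmetic of Examples 5.16 and 5.17 (with the matrices as reconstructed above), the change of basis formula of Proposition 5.23 can be evaluated directly:

    from sympy import Matrix

    A = Matrix([[1, 0], [-4, 3]])     # matrix of T in the standard basis
    P = Matrix([[1, 1], [1, -1]])     # columns: the new basis (1,1)^T, (1,-1)^T

    B = P.inv() * A * P               # matrix of T in the new basis
    print(B)                          # Matrix([[0, -3], [1, 4]])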

Exercises
Exercise 5.38. Find the coordinates of the standard basis vectors e1 , e2 , e3
of R3 in terms of the basis (1, 1, 1)T , (1, 0, 1)T , (0, 1, 1)T .
Exercise 5.39. Let H : R2 R2 be any reflection. Find a basis of R2 such
that the matrix of H is diagonal.
Exercise 5.40. Repeat Exercise 2 for a projection Pa : R2 R2 . That is,
find a basis for which the matrix of Pa is diagonal.
Exercise 5.41. Let R be any rotation of R2. Does there exist a basis of R2 for which the matrix of R is diagonal? That is, is there an invertible 2 × 2 matrix P such that R = PDP^{-1}?
Exercise 5.42. Consider the basis (1, 1, 1)^T, (1, 0, 1)^T, (0, 1, 1)^T of R3. Find the matrix of the linear transformation T : R3 → R3 defined by T(x) = (1, 1, 1)^T × x.
