
Partial Solutions for Linear Algebra by Friedberg et al.

Chapter 1
John K. Nguyen
December 7, 2011

1.1.8. In any vector space V, show that (a + b)(x + y) = ax + ay + bx + by for any x, y ∈ V and any a, b ∈ F.
Proof. Let x, y ∈ V and a, b ∈ F. Note that (a + b) ∈ F (it is a scalar). By (VS7), we have that
(a + b)(x + y) = (a + b)x + (a + b)y. By (VS8), (a + b)x = ax + bx. Likewise, (a + b)y = ay + by. Thus,
(a + b)x + (a + b)y = ax + bx + ay + by. Finally, by (VS1), we have (a + b)(x + y) = ax + ay + bx + by.
1.1.9. Prove Corollaries 1 and 2 of Theorem 1.1 and Theorem 1.2(c).
a) Corollary 1. The vector 0 described in (VS3) is unique.
b) Corollary 2. The vector y described in (VS4) is unique.
c) Theorem 1.2(c). a0 = 0 for each a ∈ F. Note that 0 denotes the zero vector.
Proof of (a). Suppose 0′ ∈ V is such that x + 0′ = x for all x ∈ V. By (VS3), x + 0 = x. It follows that
x + 0′ = x + 0. By (VS1), 0′ + x = 0 + x. We apply the Cancellation Law for Vector Addition (Theorem 1.1)
to obtain 0′ = 0 as required.
Proof of (b). Assume y′ ∈ V is such that x + y′ = 0. That is, both y and y′ are additive inverses of x (VS4).
Since x + y = 0 and x + y′ = 0, we have that x + y = x + y′. By (VS1), we have y + x = y′ + x. We apply
the Cancellation Law to get y = y′ as required.
Proof of (c). Let a ∈ F. By (VS3), a0 = a(0 + 0). By (VS7), a0 = a0 + a0. Also, 0 + a0 = a0 by (VS3) and
(VS1), so 0 + a0 = a0 + a0. By the Cancellation Law, 0 = a0; that is, a0 = 0 as required.
1.2.21. Let V and W be vector spaces over a field F. Let Z = {(v, w) : v ∈ V, w ∈ W}. Prove that Z is a
vector space over F with the operations (v1, w1) + (v2, w2) = (v1 + v2, w1 + w2) and c(v1, w1) = (cv1, cw1).
Proof. We will show that Z satisfies all eight vector space axioms. Throughout, let v1, v2, v3 ∈ V, let
w1, w2, w3 ∈ W, and note that (v1, w1), (v2, w2), (v3, w3) ∈ Z.

Since (VS1) holds in V and in W, v1 + v2 = v2 + v1 and w1 + w2 = w2 + w1. Thus, (v1, w1) + (v2, w2) =
(v1 + v2, w1 + w2) = (v2 + v1, w2 + w1) = (v2, w2) + (v1, w1). We have shown that (VS1) holds for Z.

Next, since (VS2) holds in V and in W, (v1 + v2) + v3 = v1 + (v2 + v3) and (w1 + w2) + w3 = w1 + (w2 + w3).
It follows that ((v1, w1) + (v2, w2)) + (v3, w3) = ((v1 + v2) + v3, (w1 + w2) + w3) =
(v1 + (v2 + v3), w1 + (w2 + w3)) = (v1, w1) + ((v2, w2) + (v3, w3)). We have shown that (VS2) holds for Z.

By (VS3), there exists 0_V ∈ V such that v + 0_V = v for all v ∈ V. Similarly, there exists 0_W ∈ W such
that w + 0_W = w for all w ∈ W. By definition, (0_V, 0_W) ∈ Z, and (v, w) + (0_V, 0_W) =
(v + 0_V, w + 0_W) = (v, w), so (VS3) holds for Z.

By (VS4), for each v ∈ V there exists −v ∈ V such that v + (−v) = 0_V. Likewise, for each w ∈ W there
exists −w ∈ W such that w + (−w) = 0_W. It follows that for all (v, w) ∈ Z, there exists (−v, −w) ∈ Z such
that (v, w) + (−v, −w) = (0_V, 0_W), the zero vector of Z, so (VS4) holds for Z.

By (VS5), 1v = v for all v ∈ V and 1w = w for all w ∈ W. By definition (letting c = 1),
1(v, w) = (1v, 1w) = (v, w), so (VS5) holds for Z.

Let a, b ∈ F. By (VS6), (ab)v = a(bv) and (ab)w = a(bw). By definition, (ab)(v, w) = ((ab)v, (ab)w) =
(a(bv), a(bw)) = a(bv, bw) = a(b(v, w)), so (VS6) holds for Z.

Let c ∈ F. By (VS7), c(v1 + v2) = cv1 + cv2 and c(w1 + w2) = cw1 + cw2. Then, c((v1, w1) + (v2, w2)) =
c(v1 + v2, w1 + w2) = (c(v1 + v2), c(w1 + w2)) = (cv1 + cv2, cw1 + cw2) = (cv1, cw1) + (cv2, cw2) =
c(v1, w1) + c(v2, w2), so (VS7) holds for Z.

Let a, b ∈ F. By (VS8), (a + b)v = av + bv and (a + b)w = aw + bw. It follows that (a + b)(v, w) =
((a + b)v, (a + b)w) = (av + bv, aw + bw) = (av, aw) + (bv, bw) = a(v, w) + b(v, w), so (VS8) holds for Z.

We have shown that Z is a vector space over F.
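As a small numerical illustration (not part of the original solution), here is a sketch in Python/NumPy of the product-space operations above, assuming the concrete choices V = R^2 and W = R^3; the helper names add and scale are hypothetical.

import numpy as np

def add(z1, z2):
    # (v1, w1) + (v2, w2) = (v1 + v2, w1 + w2), componentwise in V and W
    return (z1[0] + z2[0], z1[1] + z2[1])

def scale(c, z):
    # c(v, w) = (cv, cw)
    return (c * z[0], c * z[1])

z1 = (np.array([1.0, 2.0]), np.array([3.0, 4.0, 5.0]))
z2 = (np.array([6.0, 7.0]), np.array([8.0, 9.0, 10.0]))

# Spot-check (VS1) and (VS7) for Z = V x W.
lhs, rhs = add(z1, z2), add(z2, z1)
assert all(np.allclose(a, b) for a, b in zip(lhs, rhs))
lhs, rhs = scale(2.0, add(z1, z2)), add(scale(2.0, z1), scale(2.0, z2))
assert all(np.allclose(a, b) for a, b in zip(lhs, rhs))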


1.3.5. Prove that A + A^t is symmetric for any square matrix A.
Proof. Let A ∈ M_{n×n}(F). Note that a square matrix B is symmetric if B^t = B, where B^t is the transpose
of B. That is, we wish to show that (A + A^t)^t = A + A^t. By properties of the transpose,
(A + A^t)^t = A^t + (A^t)^t = A^t + A = A + A^t as required.
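A quick numerical spot-check of this fact, on an arbitrary 4 × 4 real matrix (an illustration, not a substitute for the proof):

import numpy as np

A = np.random.default_rng(0).normal(size=(4, 4))
S = A + A.T
assert np.allclose(S, S.T)  # (A + A^t)^t = A + A^t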
1.3.6. Prove that tr(aA + bB) = a·tr(A) + b·tr(B) for any A, B ∈ M_{n×n}(F).

Proof. Let A, B ∈ M_{n×n}(F). By definition of trace, tr(A) = Σ_{i=1}^{n} A_ii and tr(B) = Σ_{i=1}^{n} B_ii,
where i indexes the rows and columns of the matrix (the matrix is square). Thus, by definition,
tr(aA + bB) = Σ_{i=1}^{n} (aA + bB)_ii = Σ_{i=1}^{n} (aA_ii + bB_ii). Then, by linearity of summation,
Σ_{i=1}^{n} (aA_ii + bB_ii) = a Σ_{i=1}^{n} A_ii + b Σ_{i=1}^{n} B_ii. Finally, by definition of trace, this
equals a·tr(A) + b·tr(B) as required.
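A numerical spot-check of the linearity of the trace, with arbitrary scalars and 3 × 3 matrices (illustrative only):

import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
a, b = 2.0, -5.0
# tr(aA + bB) = a tr(A) + b tr(B)
assert np.isclose(np.trace(a * A + b * B), a * np.trace(A) + b * np.trace(B))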
1.3.18. Prove that a subset W of a vector space V is a subspace of V if and only if 0 ∈ W and ax + y ∈ W
whenever a ∈ F and x, y ∈ W.
Proof. (⇒) Suppose a subset W of a vector space V is a subspace with the addition and scalar multiplication
defined on V. By definition of subspace, we know that 0 ∈ W. We also know that ax ∈ W (closure under
scalar multiplication) whenever x ∈ W and a ∈ F. And since W is closed under addition, we have that
ax + y ∈ W for all y ∈ W.

(⇐) Suppose 0 ∈ W and ax + y ∈ W for any x, y ∈ W and a ∈ F. Since we already know that 0 ∈ W, we
only need to show that W is closed under addition and scalar multiplication. Setting a = 1 ∈ F, we have
that 1x + y = x + y ∈ W, so W is closed under addition. Setting y = 0 ∈ W, we have that
ax + 0 = ax ∈ W, so W is closed under scalar multiplication.
1.3.19. Let W1 and W2 be subspaces of a vector space V. Prove that W1 ∪ W2 is a subspace of V if and only
if W1 ⊆ W2 or W2 ⊆ W1.

Proof. (⇒) Suppose W1 and W2 are subspaces of a vector space V and assume W1 ∪ W2 is a subspace. We
wish to prove this by contradiction, so assume W1 ⊈ W2 and W2 ⊈ W1. Since W1 ⊈ W2, there exists
a ∈ W1 with a ∉ W2. Also, since W2 ⊈ W1, there exists b ∈ W2 with b ∉ W1. Now a, b ∈ W1 ∪ W2, and
since W1 ∪ W2 is a subspace it is closed under addition, so a + b ∈ W1 ∪ W2; that is, (a + b) ∈ W1 or
(a + b) ∈ W2. We have two cases.

Case I: Assume (a + b) ∈ W1. Since a ∈ W1 and W1 is a subspace, a + b + (−a) = b ∈ W1, which
contradicts our assumption that b ∉ W1.

Case II: Assume (a + b) ∈ W2. Since b ∈ W2 and W2 is a subspace, a + b + (−b) = a ∈ W2, which
contradicts our assumption that a ∉ W2.

We have shown that if W1 ∪ W2 is a subspace, then W1 ⊆ W2 or W2 ⊆ W1.

(⇐) Suppose W1 ⊆ W2 or W2 ⊆ W1. We have two cases.

Case I: Assume W1 ⊆ W2. Then, by definition of union, W1 ∪ W2 = W2, which is a subspace of V.

Case II: Assume W2 ⊆ W1. Then, by definition of union, W1 ∪ W2 = W1, which is a subspace of V.

In either case, W1 ∪ W2 is a subspace if W1 ⊆ W2 or W2 ⊆ W1.


1.3.23. Let W1 and W2 be subspaces of a vector space V .
a) Prove that W1 + W2 is a subspace of V that contains both W1 and W2 .
b) Prove that any subspace of V that contains both W1 and W2 must also contain W1 + W2 .
Proof of (a). We define W1 + W2 = {w1 + w2 : w1 ∈ W1, w2 ∈ W2}. We first show that W1 + W2 is a
subspace. First, since 0 ∈ W1 and 0 ∈ W2, we have that 0 = 0 + 0 ∈ W1 + W2. Next, if u1, u2 ∈ W1 + W2,
there exist x ∈ W1 and y ∈ W2 such that u1 = x + y. Similarly, there exist z ∈ W1 and t ∈ W2 such that
u2 = z + t. Then, u1 + u2 = (x + y) + (z + t) = (x + z) + (y + t) ∈ W1 + W2 (since W1 and W2 are subspaces,
x + z ∈ W1 and y + t ∈ W2), so W1 + W2 is closed under addition. Lastly, let a ∈ F. Since W1 and W2 are
subspaces, ax ∈ W1 and ay ∈ W2 by definition, so a·u1 = a(x + y) = ax + ay ∈ W1 + W2 and W1 + W2 is
closed under scalar multiplication. Thus, W1 + W2 is a subspace of V.

We now show W1 ⊆ W1 + W2 and W2 ⊆ W1 + W2. Let x ∈ W1. Since 0 ∈ W2, we may write
x = x + 0 ∈ W1 + W2, so W1 ⊆ W1 + W2. Similarly, for any y ∈ W2, since 0 ∈ W1 we have
y = 0 + y ∈ W1 + W2, so W2 ⊆ W1 + W2.

Proof of (b). Suppose W is a subspace of V. Assume W1 ⊆ W and W2 ⊆ W. We wish to prove that
W1 + W2 ⊆ W. Let u ∈ W1 + W2. From part (a), we may write u = x + y where x ∈ W1 and y ∈ W2. Since
W1 ⊆ W, x ∈ W. Similarly, since W2 ⊆ W, y ∈ W. And since W is a subspace, we know that
u = x + y ∈ W. Since W was arbitrary, we have shown that any subspace of V that contains both W1 and
W2 must also contain W1 + W2.
1.3.30. Let W1 and W2 be subspaces of a vector space V. Prove that V is the direct sum of W1 and W2 if
and only if each vector in V can be uniquely written as x1 + x2 where x1 ∈ W1 and x2 ∈ W2.
Proof. (⇒) Suppose W1 and W2 are subspaces of a vector space V. Assume that V is the direct sum of W1
and W2 (that is, V = W1 ⊕ W2). Then, each vector in V can be written as x1 + x2 where x1 ∈ W1 and
x2 ∈ W2. We wish to show that this expression is unique, so assume x1 + x2 = y1 + y2 where y1 ∈ W1 and
y2 ∈ W2. It follows that x1 − y1 = y2 − x2. Because V = W1 ⊕ W2, we know that W1 ∩ W2 = {0}. Since
W1 and W2 are subspaces of V, they are closed under addition and scalar multiplication, so x1 − y1 ∈ W1
and y2 − x2 ∈ W2. As these two vectors are equal, they lie in W1 ∩ W2 = {0}; hence x1 − y1 = 0 and
y2 − x2 = 0. Thus, x1 = y1 and x2 = y2, so the expression x1 + x2 is unique.

(⇐) Let W1 and W2 be subspaces of a vector space V. Suppose each vector in V can be uniquely written
as x1 + x2 where x1 ∈ W1 and x2 ∈ W2. Then, it follows that W1 + W2 = V. Next, let t be in the
intersection of W1 and W2 (i.e., t ∈ W1 ∩ W2). Since W1 and W2 are subspaces, we know that 0 ∈ W1 and
0 ∈ W2. Then, t = t + 0 where t ∈ W1 and 0 ∈ W2. Also, t = 0 + t where 0 ∈ W1 and t ∈ W2. By
uniqueness, these two expressions must coincide, so t = 0 and hence W1 ∩ W2 = {0}. Since W1 + W2 = V
and W1 ∩ W2 = {0}, by definition of direct sum, we have V = W1 ⊕ W2 as required.

We conclude that V is the direct sum of W1 and W2 if and only if each vector in V can be uniquely
written as x1 + x2.
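A concrete numerical illustration of the unique decomposition (a sketch, assuming V = R^2 with W1 = span{(1, 0)} and W2 = span{(1, 1)}): the coefficients are found by solving an invertible 2 × 2 system, which is what makes the decomposition unique.

import numpy as np

u = np.array([1.0, 0.0])      # spans W1
v = np.array([1.0, 1.0])      # spans W2
M = np.column_stack([u, v])   # invertible precisely because W1 ∩ W2 = {0}

x = np.array([3.0, -2.0])
a, b = np.linalg.solve(M, x)  # the unique coefficients
x1, x2 = a * u, b * v
assert np.allclose(x1 + x2, x)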
1.3.31. Let W be a subspace of a vector space V over a field F. For any v ∈ V the set {v} + W = {v + w :
w ∈ W} is called the coset of W containing v. It is customary to denote this coset by v + W rather than
{v} + W.

(a) Prove that v + W is a subspace of V if and only if v ∈ W.

(b) Prove that v1 + W = v2 + W if and only if v1 − v2 ∈ W.

Addition and scalar multiplication on the set S = {v + W : v ∈ V} of all cosets of W are defined by
(v1 + W) + (v2 + W) = (v1 + v2) + W and a(v + W) = (av) + W for all v1, v2, v ∈ V and a ∈ F.

(c) Prove that the preceding operations are well defined; that is, show that if v1 + W = v′1 + W and
v2 + W = v′2 + W, then (v1 + W) + (v2 + W) = (v′1 + W) + (v′2 + W) and a(v1 + W) = a(v′1 + W)
for all a ∈ F.
(d) Prove that the set S is a vector space with the operations defined in (c). This vector space is called the
quotient space of V modulo W and is denoted by V/W.
Proof of (a). (⇒) Suppose W is a subspace of a vector space V over a field F. Assume v + W is a subspace
of V. Then 0 ∈ v + W, so 0 = v + w for some w ∈ W. Hence v = −w. Since W is a subspace, it is closed
under scalar multiplication, so v = (−1)w ∈ W. Thus, v ∈ W as required.

(⇐) Assume v ∈ W. We wish to prove that v + W is a subspace of V, so we check the three parts of the
definition of subspace. Since W is a subspace, it is closed under scalar multiplication, and so
−v = (−1)v ∈ W. Then, 0 = v + (−v) ∈ v + W by definition of v + W. Next, suppose x = v + a ∈ v + W
and y = v + b ∈ v + W for a, b ∈ W. Then, x + y = (v + a) + (v + b) = v + (a + v + b). Since we assumed
v ∈ W and a, b ∈ W, it follows that (a + v + b) ∈ W. So by definition, x + y = v + (a + v + b) ∈ v + W,
and v + W is closed under addition. Finally, let f ∈ F. Then, fx = fv + fa, which we can rewrite as
fx = v + ((−v) + fv + fa). Since a ∈ W and W is a subspace, fa ∈ W. Likewise, since v ∈ W, we have
fv ∈ W and −v ∈ W. Because W is closed under addition, ((−v) + fv + fa) ∈ W. Thus, fx ∈ v + W, so
v + W is closed under scalar multiplication. We have shown that 0 ∈ v + W and that v + W is closed under
addition and scalar multiplication, so v + W is a subspace of V.

We conclude that v + W is a subspace of V if and only if v ∈ W.


Proof of (b). (⇒) Suppose W is a subspace of a vector space V over a field F. Assume v1 + W = v2 + W.
Since 0 ∈ W, we have v1 = v1 + 0 ∈ v1 + W = v2 + W. By definition of coset, v1 = v2 + b for some b ∈ W,
which can be rewritten as v1 − v2 = b. Thus, v1 − v2 ∈ W as required.

(⇐) Now suppose v1 − v2 ∈ W. We wish to show that v1 + W = v2 + W; that is, v1 + W ⊆ v2 + W and
v2 + W ⊆ v1 + W.

(⊆) Let x ∈ v1 + W. By definition of coset, x = v1 + a where a ∈ W. Since v1 − v2 ∈ W and a ∈ W, and
W is closed under addition, (v1 − v2) + a ∈ W. Then, x = v1 + a = v2 + ((v1 − v2) + a) ∈ v2 + W. Hence
v1 + W ⊆ v2 + W.

(⊇) Let y ∈ v2 + W. By definition of coset, y = v2 + b where b ∈ W. Since v1 − v2 ∈ W and W is a
subspace, v2 − v1 = (−1)(v1 − v2) ∈ W. Then, (v2 − v1) + b ∈ W and
y = v2 + b = v1 + ((v2 − v1) + b) ∈ v1 + W. Hence v2 + W ⊆ v1 + W.

Because v1 + W ⊆ v2 + W and v2 + W ⊆ v1 + W, we conclude that v1 + W = v2 + W as required.


Proof of (c). Assume W is a subspace of a vector space V over a field F. Suppose v1 + W = v′1 + W and
v2 + W = v′2 + W. We wish to show (v1 + W) + (v2 + W) = (v′1 + W) + (v′2 + W). By part (b), this is
equivalent to (v1 + v2) − (v′1 + v′2) ∈ W. Since v1 + W = v′1 + W and v2 + W = v′2 + W, by part (b), we
have that v1 − v′1 ∈ W and v2 − v′2 ∈ W. Since W is a subspace, it is closed under addition, so
(v1 − v′1) + (v2 − v′2) ∈ W. Rearranging terms, (v1 + v2) − (v′1 + v′2) ∈ W as required.

Now we wish to prove that a(v1 + W) = a(v′1 + W) for all a ∈ F. Again by part (b), it suffices to show
av1 − av′1 ∈ W. From above, we already have v1 − v′1 ∈ W. Since W is a subspace, it is closed under scalar
multiplication, so a(v1 − v′1) ∈ W for all a ∈ F. By the distributive law, we conclude that
av1 − av′1 ∈ W as required.
Proof of (d). To show that the set S is a vector space with the operations defined in (c), we must verify the
eight vector space axioms.

For (VS1), let v1 + W ∈ S and v2 + W ∈ S where v1, v2 ∈ V. Then, (v1 + W) + (v2 + W) = (v1 + v2) + W
and (v2 + W) + (v1 + W) = (v2 + v1) + W by the definition in part (c). Since v1, v2 ∈ V, by (VS1) in V,
v1 + v2 = v2 + v1. Thus, (v1 + v2) + W = (v2 + v1) + W, so (VS1) holds.

Now suppose v1 + W, v2 + W, v3 + W ∈ S where v1, v2, v3 ∈ V. Then, by the definition in part (c),
((v1 + W) + (v2 + W)) + (v3 + W) = ((v1 + v2) + W) + (v3 + W) = ((v1 + v2) + v3) + W =
(v1 + (v2 + v3)) + W = (v1 + W) + ((v2 + v3) + W) = (v1 + W) + ((v2 + W) + (v3 + W)), which shows that
(VS2) holds.

Note that 0 + W ∈ S since 0 ∈ V. Let v + W ∈ S for some v ∈ V. Then,
(0 + W) + (v + W) = (0 + v) + W = v + W. Thus, (VS3) holds.

Next, let v + W ∈ S where v ∈ V. Since V is closed under scalar multiplication, −v ∈ V, so
(−v) + W ∈ S. Then, (v + W) + ((−v) + W) = (v + (−v)) + W = 0 + W, the zero vector of S. Therefore,
(VS4) holds.

We now verify (VS5). Let v + W ∈ S for some v ∈ V. Then, by the definition in part (c),
1(v + W) = (1v) + W = v + W as required, so (VS5) holds.

Let a, b ∈ F and suppose v + W ∈ S for some v ∈ V. Then, (ab)(v + W) = ((ab)v) + W = (a(bv)) + W =
a((bv) + W) = a(b(v + W)), so (VS6) holds.

For (VS7), let v1 + W ∈ S and v2 + W ∈ S where v1, v2 ∈ V. Choose a ∈ F. Then,
a((v1 + W) + (v2 + W)) = (a(v1 + v2)) + W = (av1 + av2) + W = (av1 + W) + (av2 + W) =
a(v1 + W) + a(v2 + W) as required. We have shown that (VS7) holds.

Finally, let v + W ∈ S where v ∈ V. Pick a, b ∈ F. Then, (a + b)(v + W) = ((a + b)v) + W =
(av + bv) + W = (av + W) + (bv + W) = a(v + W) + b(v + W), so (VS8) holds.

In conclusion, since all of the vector space axioms hold, S is a vector space.


1.4.13. Show that if S1 and S2 are subsets of a vector space V such that S1 ⊆ S2, then
span(S1) ⊆ span(S2). In particular, if S1 ⊆ S2 and span(S1) = V, deduce that span(S2) = V.
Proof. Suppose S1 and S2 are subsets of a vector space V and that S1 ⊆ S2. We wish to show that every
element of span(S1) is contained in span(S2). Choose z ∈ span(S1). By definition of span, there exist
x1, x2, ..., xn ∈ S1 such that z = a1 x1 + a2 x2 + ... + an xn where a1, a2, ..., an ∈ F. But, since S1 ⊆ S2,
x1, x2, ..., xn ∈ S2 also, which means that z ∈ span(S2) by definition of linear combination. Since z was
arbitrary, we have that span(S1) ⊆ span(S2).

Suppose span(S1) = V. We now show that span(S2) = V. Since span(S1) = V, from above,
V ⊆ span(S2). On the other hand, S2 ⊆ V, and since V is a subspace containing S2, we have
span(S2) ⊆ V. Since V ⊆ span(S2) and span(S2) ⊆ V, we have that span(S2) = V.
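A small numerical illustration of the containment (a sketch, assuming R^3 and testing membership in a span via matrix rank):

import numpy as np

S1 = [np.array([1.0, 0.0, 0.0])]
S2 = S1 + [np.array([0.0, 1.0, 0.0])]   # S1 is a subset of S2

z = 4.0 * S1[0]                          # an element of span(S1)
A2 = np.column_stack(S2)
# z lies in span(S2) iff appending z as a column does not increase the rank.
assert np.linalg.matrix_rank(np.column_stack([A2, z])) == np.linalg.matrix_rank(A2)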
1.4.15. Let S1 and S2 be subsets of a vector space V. Prove that span(S1 ∩ S2) ⊆ span(S1) ∩ span(S2).
Give an example in which span(S1 ∩ S2) and span(S1) ∩ span(S2) are equal and one in which they are
unequal.
Proof. Suppose S1 and S2 are subsets of a vector space V. Let v ∈ span(S1 ∩ S2). Then, by definition of
span, there exist x1, x2, ..., xn ∈ S1 ∩ S2 such that v = a1 x1 + a2 x2 + ... + an xn where a1, a2, ..., an ∈ F.
Since x1, x2, ..., xn ∈ S1 ∩ S2, by definition of set intersection x1, x2, ..., xn ∈ S1 and x1, x2, ..., xn ∈ S2.
From this, we know that v = a1 x1 + ... + an xn ∈ span(S1) and v = a1 x1 + ... + an xn ∈ span(S2). By
definition of set intersection, since v ∈ span(S1) and v ∈ span(S2), we have that
v ∈ span(S1) ∩ span(S2). Since v was arbitrary, we have shown that span(S1 ∩ S2) ⊆ span(S1) ∩ span(S2).
Examples. Suppose V = R², S1 = {(1, 3)} and S2 = {(2, 7)}. Then, S1 ∩ S2 = ∅, so span(S1 ∩ S2) = {0},
since the span of the empty set is {0} by definition. Next, we have that span(S1) ∩ span(S2) = {0} because
span(S1) = {(a, 3a) : a ∈ R} and span(S2) = {(2b, 7b) : b ∈ R} have only the zero vector in common. In
this example, span(S1 ∩ S2) = span(S1) ∩ span(S2).

Now consider V = R², S1 = {(8, 4)} and S2 = {(4, 2)}. Again, we have that S1 ∩ S2 = ∅, so
span(S1 ∩ S2) = {0}. However, since span(S1) = {(8a, 4a) : a ∈ R} and span(S2) = {(4b, 2b) : b ∈ R}, we
know that span(S1) ∩ span(S2) ≠ {0} (indeed span(S1) = span(S2)). In this example,
span(S1 ∩ S2) ≠ span(S1) ∩ span(S2).
1.5.9. Let u and v be distinct vectors in a vector space V . Show that {u, v} is linearly dependent if and only
if u or v is a multiple of the other.
Proof. (⇒) Assume u and v are distinct vectors in a vector space V. Suppose {u, v} is linearly dependent.
Then there exist a, b ∈ F, not both zero, such that au + bv = 0. If a ≠ 0, the equation can be rewritten as
u = −(b/a)v, so u is a multiple of v. Otherwise b ≠ 0, and rewriting the equation as v = −(a/b)u shows
that v is a multiple of u. In either case, u or v is a multiple of the other.

(⇐) Now assume that u or v is a multiple of the other. Then, we have that u = av or v = bu for some
a, b ∈ F. We have two cases.

Case I: Suppose u = av. Then, 0 = av − u = av + (−1)u. Since −1 ≠ 0, by definition of linear
dependence, {u, v} is linearly dependent.

Case II: Suppose v = bu. Then, 0 = bu − v = bu + (−1)v. Since −1 ≠ 0, by definition of linear
dependence, {u, v} is linearly dependent.

In both cases, we have that {u, v} is linearly dependent.

In conclusion, we have shown that {u, v} is linearly dependent if and only if u or v is a multiple of the
other.
1.5.13. Let V be a vector space over a field of characteristic not equal to two.

(a) Let u and v be distinct vectors in V. Prove that {u, v} is linearly independent if and only if
{u + v, u − v} is linearly independent.
(b) Let u, v, and w be distinct vectors in V. Prove that {u, v, w} is linearly independent if and only if
{u + v, u + w, v + w} is linearly independent.

Proof of (a). (⇒) Suppose V is a vector space over a field of characteristic not equal to two. Assume u and
v are distinct vectors in V. Suppose {u, v} is linearly independent. To show that {u + v, u − v} is linearly
independent, suppose a(u + v) + b(u − v) = 0 for some a, b ∈ F; we must show a = b = 0. Rearranging,
a(u + v) + b(u − v) = au + av + bu − bv = (a + b)u + (a − b)v = 0. Since {u, v} is linearly independent,
a + b = 0 and a − b = 0. Adding these equations gives 2a = 0, and because the field characteristic is not
two, a = 0. Subtracting them gives 2b = 0, so likewise b = 0. Thus, since a = b = 0, we have shown that
{u + v, u − v} is linearly independent.

(⇐) Now assume {u + v, u − v} is linearly independent. We prove by contradiction, so assume that {u, v}
is linearly dependent. That is, cu + dv = 0 for some c, d ∈ F not both zero. Since the field characteristic is
not two, we may set a = (c + d)/2 and b = (c − d)/2, so that a + b = c and a − b = d. Then,
a(u + v) + b(u − v) = (a + b)u + (a − b)v = cu + dv = 0. Moreover, a and b are not both zero: if
a = b = 0, then c = a + b = 0 and d = a − b = 0, contrary to assumption. Hence {u + v, u − v} is linearly
dependent, a contradiction. Therefore, {u, v} is linearly independent.

In conclusion, {u, v} is linearly independent if and only if {u + v, u − v} is linearly independent.


Proof of (b). (⇒) Let u, v, and w be distinct vectors in V. Assume V is a vector space over a field of
characteristic not equal to two. Suppose {u, v, w} is linearly independent. We wish to prove that
{u + v, u + w, v + w} is linearly independent, so suppose d(u + v) + e(u + w) + f(v + w) = 0 for some
d, e, f ∈ F; we must show d = e = f = 0. Rearranging, d(u + v) + e(u + w) + f(v + w) =
du + dv + eu + ew + fv + fw = (d + e)u + (d + f)v + (e + f)w = 0. Since {u, v, w} is linearly independent,
d + e = 0, d + f = 0 and e + f = 0. We then solve for each variable. Subtracting the second equation from
the first gives e − f = 0, so e = f. Then e + f = 2e = 0, and since the field characteristic is not two, e = 0
and hence f = 0. Finally, d + e = 0 gives d = 0. Thus, since d = e = f = 0, we have that
{u + v, u + w, v + w} is linearly independent.

(⇐) Now assume that {u + v, u + w, v + w} is linearly independent. We will prove by contradiction, so
assume {u, v, w} is linearly dependent. Then, du + ev + fw = 0 where d, e, f ∈ F are not all zero. Since
the field characteristic is not two, we may set a = (d + e − f)/2, b = (d − e + f)/2 and
c = (−d + e + f)/2, so that a + b = d, a + c = e and b + c = f. Then,
a(u + v) + b(u + w) + c(v + w) = (a + b)u + (a + c)v + (b + c)w = du + ev + fw = 0. Moreover, a, b, c are
not all zero: if a = b = c = 0, then d = e = f = 0, contrary to assumption. Hence {u + v, u + w, v + w} is
linearly dependent, a contradiction. Therefore, {u, v, w} must be linearly independent.

In conclusion, {u, v, w} is linearly independent if and only if {u + v, u + w, v + w} is linearly independent.


1.6.11. Let u and v be distinct vectors of a vector space V . Show that if {u, v} is a basis for V and a and b
are nonzero scalars, then both {u + v, au} and {au, bv} are also bases for V .
Proof. Let u and v be distinct vectors of a vector space V. Suppose {u, v} is a basis for V and a and b are
nonzero scalars. We wish to show that {u + v, au} and {au, bv} are linearly independent and that they
span V.

We begin with {u + v, au}. First, to show that {u + v, au} is linearly independent, suppose
x(u + v) + y(au) = 0 for scalars x, y ∈ F; we must show x = y = 0. Rearranging terms,
x(u + v) + y(au) = xu + xv + yau = (x + ya)u + xv = 0. Since {u, v} is linearly independent by definition
of basis, the coefficients must vanish: x + ya = 0 and x = 0. Then, because x = 0, ya = 0. Since a is a
nonzero scalar, y = 0. Thus, the only solution to x(u + v) + y(au) = 0 is x = y = 0, so {u + v, au} is
linearly independent. Second, to show that {u + v, au} spans V, we need to show that for any p ∈ V there
are x, y ∈ F with x(u + v) + y(au) = p. By definition of basis, {u, v} spans V, so p = su + tv for some
scalars s, t ∈ F. Comparing with x(u + v) + y(au) = (x + ya)u + xv, it suffices to solve x + ya = s and
x = t; thus x = t and y = (s − t)/a, which is defined since a is nonzero. Hence {u + v, au} spans V.
Because {u + v, au} is linearly independent and spans V, {u + v, au} is a basis for V.

Next, we prove that {au, bv} is a basis for V. First, we show that {au, bv} is linearly independent.
Suppose m(au) + n(bv) = 0; we must show m = n = 0. Rearranging terms, we obtain (ma)u + (nb)v = 0.
Because {u, v} is linearly independent by definition of basis, ma = 0 and nb = 0. Since a and b are nonzero
scalars, m = n = 0, so {au, bv} is linearly independent. Second, we show that {au, bv} spans V. Let
f ∈ V. Since {u, v} spans V, f = xu + yv for some x, y ∈ F. Setting w = x/a and q = y/b (defined since a
and b are nonzero), we get w(au) + q(bv) = xu + yv = f. Hence {au, bv} spans V. Thus, since {au, bv} is
linearly independent and spans V, by definition of basis, {au, bv} is a basis for V.

In conclusion, we have shown that if {u, v} is a basis for V and a and b are nonzero scalars, then both
{u + v, au} and {au, bv} are also bases for V.
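A quick numerical spot-check (a sketch, with the specific basis u = (1, 2), v = (3, 5) of R^2): a pair of vectors is a basis of R^2 exactly when the matrix with those columns is invertible, i.e., has nonzero determinant.

import numpy as np

u, v = np.array([1.0, 2.0]), np.array([3.0, 5.0])  # a basis of R^2
a, b = 2.0, -3.0                                   # nonzero scalars
for cols in ([u + v, a * u], [a * u, b * v]):
    assert abs(np.linalg.det(np.column_stack(cols))) > 1e-12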
1.6.22. Let W1 and W2 be subspaces of a finite-dimensional vector space V. Determine necessary and
sufficient conditions on W1 and W2 so that dim(W1 ∩ W2) = dim(W1).
Proof. We claim that dim(W1 ∩ W2) = dim(W1) if and only if W1 ⊆ W2. We will prove this claim.

(⇒) Let W1 and W2 be subspaces of a finite-dimensional vector space V. Suppose
dim(W1 ∩ W2) = dim(W1). Let β be a basis for W1 ∩ W2. Since W1 ∩ W2 ⊆ W1, β is a linearly
independent subset of W1, so β can be extended to a basis for W1. But β already contains
dim(W1 ∩ W2) = dim(W1) vectors, so no vectors are added in the extension and β is itself a basis for W1.
It follows that W1 = span(β) = W1 ∩ W2, and hence every x ∈ W1 lies in W2. Therefore, W1 ⊆ W2.

(⇐) Now assume that W1 ⊆ W2. Then, by the definition of set intersection, W1 ∩ W2 = W1. It follows
immediately that dim(W1 ∩ W2) = dim(W1).

We have shown that dim(W1 ∩ W2) = dim(W1) if and only if W1 ⊆ W2.

1.6.29. (a) Prove that if W1 and W2 are finite-dimensional subspaces of a vector space V, then the
subspace W1 + W2 is finite-dimensional, and dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2).

(b) Let W1 and W2 be finite-dimensional subspaces of a vector space V, and let V = W1 + W2. Deduce
that V is the direct sum of W1 and W2 if and only if dim(V) = dim(W1) + dim(W2).
Proof of (a). Suppose W1 and W2 are finite-dimensional subspaces of a vector space V. Let
β = {u1, u2, ..., uk} be a basis for W1 ∩ W2 (which is finite-dimensional since W1 is). Since
W1 ∩ W2 ⊆ W1, we extend β to a basis α = {u1, u2, ..., uk, v1, v2, ..., vm} of W1. Likewise, we extend β to
a basis γ = {u1, u2, ..., uk, w1, w2, ..., wp} of W2. We wish to show that
δ = {u1, ..., uk, v1, ..., vm, w1, ..., wp} is a basis for W1 + W2, which implies that W1 + W2 is
finite-dimensional (as δ contains k + m + p vectors) and that dim(W1 + W2) = k + m + p =
(k + m) + (k + p) − k = dim(W1) + dim(W2) − dim(W1 ∩ W2). By definition of basis, we show that δ
spans W1 + W2 and that it is linearly independent.

Note that α spans W1 and γ spans W2, as they are bases. That is, any x1 ∈ W1 can be written as
x1 = a1 u1 + ... + ak uk + b1 v1 + ... + bm vm, and any x2 ∈ W2 as
x2 = a′1 u1 + ... + a′k uk + c1 w1 + ... + cp wp, for suitable scalars (Theorem 1.8). Hence any
x1 + x2 ∈ W1 + W2 is a linear combination of the vectors of δ (recall our definition of W1 + W2 in
problem 1.3.23 of Homework Set 1), so δ spans W1 + W2.

To show that δ is linearly independent, suppose
a1 u1 + ... + ak uk + b1 v1 + ... + bm vm + c1 w1 + ... + cp wp = 0. Then
c1 w1 + ... + cp wp = −(a1 u1 + ... + ak uk + b1 v1 + ... + bm vm) ∈ W1, and clearly
c1 w1 + ... + cp wp ∈ W2, so c1 w1 + ... + cp wp ∈ W1 ∩ W2 = span(β). Write
c1 w1 + ... + cp wp = d1 u1 + ... + dk uk. Then d1 u1 + ... + dk uk − c1 w1 − ... − cp wp = 0, and the linear
independence of γ forces every ci (and every di) to be zero. The original equation then reduces to
a1 u1 + ... + ak uk + b1 v1 + ... + bm vm = 0, and the linear independence of α forces every ai and bj to be
zero. Hence δ is linearly independent.

By definition of basis, because δ spans W1 + W2 and is linearly independent, δ is a basis for W1 + W2.
Thus, W1 + W2 is finite-dimensional and dim(W1 + W2) = (k + m) + (k + p) − k =
dim(W1) + dim(W2) − dim(W1 ∩ W2) as required.
Proof of (b). Let W1 and W2 be finite-dimensional subspaces of a vector space V, and let V = W1 + W2.

(⇒) Suppose that V = W1 ⊕ W2. We wish to prove that dim(V) = dim(W1) + dim(W2). Because
V = W1 ⊕ W2, we know that W1 ∩ W2 = {0}, which implies that dim(W1 ∩ W2) = 0 (the zero subspace
has the empty set as a basis). So, noting that V = W1 + W2 and applying part (a), we have
dim(V) = dim(W1) + dim(W2) − dim(W1 ∩ W2) = dim(W1) + dim(W2) as required.

(⇐) Suppose dim(V) = dim(W1) + dim(W2). We wish to prove that V = W1 ⊕ W2. Since we know that
V = W1 + W2, we only need to show that W1 ∩ W2 = {0}. From part (a), we know that
dim(V) = dim(W1) + dim(W2) − dim(W1 ∩ W2). Comparing with our assumption, we get
dim(W1 ∩ W2) = 0, which means that W1 ∩ W2 is the zero subspace, i.e., W1 ∩ W2 = {0}. Therefore,
since V = W1 + W2 and W1 ∩ W2 = {0}, by definition of direct sum, V = W1 ⊕ W2 as required.

We have shown that V is the direct sum of W1 and W2 if and only if dim(V) = dim(W1) + dim(W2).
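A numerical illustration of the dimension formula (a sketch in R^4, with W1 and W2 given by spanning rows; dim(W1 + W2) is the rank of the stacked spanning vectors, and the formula then recovers dim(W1 ∩ W2)):

import numpy as np

W1 = np.array([[1.0, 0, 0, 0], [0, 1, 0, 0]])   # rows span W1
W2 = np.array([[0.0, 1, 0, 0], [0, 0, 1, 0]])   # rows span W2
d1, d2 = np.linalg.matrix_rank(W1), np.linalg.matrix_rank(W2)
d_sum = np.linalg.matrix_rank(np.vstack([W1, W2]))  # dim(W1 + W2)
d_int = d1 + d2 - d_sum                              # dim(W1 ∩ W2) by 1.6.29(a)
assert (d1, d2, d_sum, d_int) == (2, 2, 3, 1)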
1.6.33. (a) Let W1 and W2 be subspaces of a vector space V such that V = W1 ⊕ W2. If β1 and β2 are
bases for W1 and W2, respectively, show that β1 ∩ β2 = ∅ and β1 ∪ β2 is a basis for V.

(b) Conversely, let β1 and β2 be disjoint bases for subspaces W1 and W2, respectively, of a vector space
V. Prove that if β1 ∪ β2 is a basis for V, then V = W1 ⊕ W2.
Proof of (a). Let W1 and W2 be subspaces of a vector space V such that V = W1 ⊕ W2. Suppose β1 and
β2 are bases for W1 and W2, respectively. Define β1 = {u1, u2, ..., un} and β2 = {v1, v2, ..., vm}. Then, by
definition of basis, β1 and β2 are linearly independent and span W1 and W2, respectively. Since
V = W1 ⊕ W2, we have W1 ∩ W2 = {0}. Any vector common to β1 and β2 would lie in both W1 and W2,
hence in W1 ∩ W2 = {0}; but the zero vector cannot belong to a linearly independent set, so no such
common vector exists. It follows that β1 ∩ β2 = ∅.

Now, based on our definitions of β1 and β2, we have β1 ∪ β2 = {u1, u2, ..., un, v1, v2, ..., vm}. We first show
that β1 ∪ β2 is linearly independent. Suppose a1 u1 + ... + an un + b1 v1 + ... + bm vm = 0. Then
a1 u1 + ... + an un = −(b1 v1 + ... + bm vm) ∈ W1 ∩ W2 = {0}, so a1 u1 + ... + an un = 0 and
b1 v1 + ... + bm vm = 0. Because β1 is linearly independent, a1 = a2 = ... = an = 0; likewise, because β2 is
linearly independent, b1 = b2 = ... = bm = 0. Hence β1 ∪ β2 is linearly independent.

Second, we need to show that β1 ∪ β2 spans V. Let v ∈ V. Since V = W1 ⊕ W2, we may write
v = w1 + w2 with w1 ∈ W1 and w2 ∈ W2. Now w1 ∈ span(β1) and w2 ∈ span(β2), so
v = w1 + w2 ∈ span(β1 ∪ β2). Hence β1 ∪ β2 spans V.

Thus, since β1 ∪ β2 is linearly independent and spans V, by definition of basis, β1 ∪ β2 is a basis for V. In
conclusion, we have proved that if β1 and β2 are bases for W1 and W2, respectively, then β1 ∩ β2 = ∅ and
β1 ∪ β2 is a basis for V.
Proof of (b). Let β1 = {u1, ..., un} and β2 = {v1, ..., vm} be disjoint bases for subspaces W1 and W2,
respectively, of a vector space V (i.e., β1 ∩ β2 = ∅). Assume β1 ∪ β2 is a basis for V. We wish to prove
that V = W1 ⊕ W2; that is, we need to show that W1 ∩ W2 = {0} and V = W1 + W2.

We first show that V = W1 + W2. Since β1 ∪ β2 is a basis for V, span(β1 ∪ β2) = V. Every vector of V is
therefore a linear combination of vectors of β1 ∪ β2, which splits as a linear combination of vectors of β1
(an element of W1) plus a linear combination of vectors of β2 (an element of W2). Hence V = W1 + W2.

Now let x ∈ W1 ∩ W2. Since β1 is a basis for W1 and β2 is a basis for W2, we may write
x = a1 u1 + ... + an un and x = b1 v1 + ... + bm vm. Subtracting,
a1 u1 + ... + an un − b1 v1 − ... − bm vm = 0. Since β1 ∪ β2 is linearly independent, all the ai and bj are
zero, so x = 0. Hence W1 ∩ W2 = {0}.

Since V = W1 + W2 and W1 ∩ W2 = {0}, by definition of direct sum, V = W1 ⊕ W2.

Partial Solutions for Linear Algebra by Friedberg et al.
Chapter 2
John K. Nguyen
December 7, 2011

2.1.14. Let V and W be vector spaces and T : V → W be linear.

(a) Prove that T is one-to-one if and only if T carries linearly independent subsets of V onto linearly
independent subsets of W.

(b) Suppose that T is one-to-one and that S is a subset of V. Prove that S is linearly independent if
and only if T(S) is linearly independent.

(c) Suppose β = {v1, v2, ..., vn} is a basis for V and T is one-to-one and onto. Prove that T(β) =
{T(v1), T(v2), ..., T(vn)} is a basis for W.
Proof of (a). Suppose V and W are vector spaces and T : V → W is linear.

(⇒) Suppose T is one-to-one. Let S be a linearly independent subset of V. We wish to prove by
contradiction, so assume that T(S) is linearly dependent. Then there are vectors v1, v2, ..., vn ∈ S and
scalars a1, a2, ..., an, not all zero, such that a1 T(v1) + ... + an T(vn) = 0. Since T is linear, this gives
T(a1 v1 + ... + an vn) = 0 = T(0). Because T is one-to-one, it follows that a1 v1 + ... + an vn = 0 with the
coefficients not all zero, which contradicts the linear independence of S. Therefore, T(S) must be linearly
independent. Thus, since S was arbitrary, T carries linearly independent subsets of V onto linearly
independent subsets of W.

(⇐) Suppose that T carries linearly independent subsets of V onto linearly independent subsets of W. We
prove by contradiction, so assume T is not one-to-one. Then, by Theorem 2.4, N(T) ≠ {0}, so there exists
x ∈ N(T) with x ≠ 0. The set {x} is linearly independent, but T({x}) = {T(x)} = {0} is linearly
dependent. This contradicts our supposition, so T must be one-to-one.
Proof of (b). Let V and W be vector spaces and T : V → W be linear. Suppose that T is one-to-one and
that S is a subset of V.

(⇒) Suppose that S is linearly independent. Then, by part (a), we have that T(S) is linearly independent
and so we are done.

(⇐) Now suppose that T(S) is linearly independent. We wish to prove by contradiction, so assume that S
is linearly dependent. This implies that for some v1, v2, ..., vn ∈ S, a1 v1 + ... + an vn = 0 where the scalars
a1, a2, ..., an are not all zero. Since T : V → W is linear, applying T gives a1 T(v1) + ... + an T(vn) = 0,
again with scalars a1, a2, ..., an not all zero. However, we had assumed that T(S) is linearly independent,
so we have a contradiction. Thus, S is linearly independent as required.
Proof of (c). Suppose β = {v1, v2, ..., vn} is a basis for V and T is one-to-one and onto. We wish to show
that T(β) is linearly independent and spans W. Since T is one-to-one and β is linearly independent (by
definition of basis), by part (b), T(β) is linearly independent. Next, since β is a basis for V, by Theorem
2.2, R(T) = span(T(β)). Now, since T is onto, we know that R(T) = W. So span(T(β)) = W; that is, T(β)
spans W. Therefore, by definition of basis, since T(β) is linearly independent and spans W, T(β) is a basis
for W.
2.1.17. Let V and W be finite-dimensional vector spaces and T : V → W be linear.

(a) Prove that if dim(V ) < dim(W ), then T cannot be onto.

(b) Prove that if dim(V ) > dim(W ), then T cannot be one-to-one.


Proof of (a). Let V and W be finite-dimensional vector spaces and T : V → W be linear. Assume
dim(V) < dim(W). We will prove by contradiction, so assume that T is onto. By the Dimension Theorem,
since V is finite-dimensional, nullity(T) + rank(T) = dim(V), which, by definition of nullity and rank, can
be written equivalently as dim(N(T)) + dim(R(T)) = dim(V). Since T is onto, R(T) = W, so
rank(T) = dim(R(T)) = dim(W). Then dim(W) = rank(T) = dim(V) − nullity(T) ≤ dim(V) < dim(W),
a contradiction. Therefore, T is not onto.
Proof of (b). Let V and W be finite-dimensional vector spaces and T : V → W be linear. Assume
dim(V) > dim(W). We will prove by contradiction, so assume that T is one-to-one. By the Dimension
Theorem, since V is finite-dimensional, nullity(T) + rank(T) = dim(V), which, by definition of nullity and
rank, can be written equivalently as dim(N(T)) + dim(R(T)) = dim(V). Since T is one-to-one, by
Theorem 2.4, N(T) = {0}, so it follows that dim(N(T)) = 0. This implies that dim(R(T)) = dim(V). But
R(T) is a subspace of W, so dim(R(T)) ≤ dim(W). Therefore dim(V) = dim(R(T)) ≤ dim(W), which
contradicts our assumption that dim(V) > dim(W), so T must not be one-to-one.
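A small numerical illustration of both obstructions (a sketch, representing linear maps by matrices and using rank-nullity):

import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=(3, 2))   # a map R^2 -> R^3: dim V = 2 < 3 = dim W
assert np.linalg.matrix_rank(T) < 3        # rank <= 2, so R(T) != R^3: not onto
U = rng.normal(size=(2, 3))   # a map R^3 -> R^2: dim V = 3 > 2 = dim W
assert 3 - np.linalg.matrix_rank(U) >= 1   # nullity >= 1 by rank-nullity: not one-to-one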
2.1.40. Let V be a vector space and W be a subspace of V. Define the mapping η : V → V/W by
η(v) = v + W for v ∈ V.

(a) Prove that η is a linear transformation from V to V/W and that N(η) = W.

(b) Suppose that V is finite-dimensional. Use (a) and the dimension theorem to derive a formula relating
dim(V), dim(W), and dim(V/W).

Proof of (a). Suppose V is a vector space and W is a subspace of V. Define the mapping η : V → V/W by
η(v) = v + W for v ∈ V.

We will first prove that η is a linear transformation from V to V/W. That is, by definition of linear
transformation, we need to show that for all x, y ∈ V and c ∈ F, η(x + y) = η(x) + η(y) and
η(cx) = cη(x). Note, in Problem 1.3.31 of Homework Set 2, we have already shown that the coset
operations are well defined. Let a, b ∈ V. By definition of η, η(a + b) = (a + b) + W. Then, by Problem
1.3.31 and our definition of η, (a + b) + W = (a + W) + (b + W) = η(a) + η(b) as required. Next, by
definition of η, for any c ∈ F, η(ca) = (ca) + W. According to the coset operations in Problem 1.3.31,
(ca) + W = c(a + W), which, by definition of η, is cη(a) as required.

We now prove that N(η) = W. The zero vector of V/W is the coset 0 + W = W. Let n ∈ N(η). Then,
η(n) = n + W = 0 + W, and by Problem 1.3.31(b) this holds if and only if n = n − 0 ∈ W. Hence
N(η) ⊆ W. Conversely, let w ∈ W. Then w + W = 0 + W (again by Problem 1.3.31(b), since
w − 0 = w ∈ W), so η(w) = 0 + W and w ∈ N(η). Thus W ⊆ N(η). Since N(η) ⊆ W and W ⊆ N(η),
N(η) = W as required.

Proof of (b). We claim that dim(W) + dim(V/W) = dim(V). From the Dimension Theorem, we know that
dim(N(η)) + dim(R(η)) = dim(V). From part (a), we know that N(η) = W, which implies
dim(W) + dim(R(η)) = dim(V). Also, η is onto by its definition, so R(η) = V/W. Thus,
dim(W) + dim(V/W) = dim(V).
2.2.13. Let V and W be vector spaces, and let T and U be nonzero linear transformations from V into W.
If R(T) ∩ R(U) = {0}, prove that {T, U} is a linearly independent subset of L(V, W).

Proof. Let V and W be vector spaces, and suppose T : V → W and U : V → W are nonzero. Assume
R(T) ∩ R(U) = {0}. Since T is nonzero, there exists v1 ∈ V such that T(v1) ≠ 0. Likewise for U, there
exists v2 ∈ V such that U(v2) ≠ 0. We wish to prove by contradiction, so assume aT + bU = T0 (note, T0
denotes the zero transformation) where the scalars a, b ∈ F are not both zero. We have two cases.

Case I: Assume a ≠ 0. Since a ≠ 0, aT + bU = T0 can be rewritten as T = −(b/a)U. Then,
T(v1) = −(b/a)U(v1) = U(−(b/a)v1) ∈ R(U) by definition of range, and of course T(v1) ∈ R(T). This
implies that T(v1) ∈ R(T) ∩ R(U) = {0}, so T(v1) = 0, which contradicts the choice of v1 with
T(v1) ≠ 0.

Case II: Assume b ≠ 0. Since b ≠ 0, aT + bU = T0 can be rewritten as U = −(a/b)T. Then,
U(v2) = T(−(a/b)v2) ∈ R(T) by definition of range, and of course U(v2) ∈ R(U). This implies that
U(v2) ∈ R(T) ∩ R(U) = {0}, so U(v2) = 0, which contradicts the choice of v2 with U(v2) ≠ 0.

In either case, we have a contradiction. Therefore, aT + bU = T0 forces a = b = 0, which, by definition of
linear independence, means that {T, U} is linearly independent.
2.2.15. Let V and W be vector spaces, and let S be a subset of V. Define S⁰ = {T ∈ L(V, W) : T(x) = 0
for all x ∈ S}. Prove the following statements.

(a) S⁰ is a subspace of L(V, W).

(b) If S1 and S2 are subsets of V and S1 ⊆ S2, then S2⁰ ⊆ S1⁰.
(c) If V1 and V2 are subspaces of V, then (V1 + V2)⁰ = V1⁰ ∩ V2⁰.
Proof of (a). Let T, U ∈ S⁰ and choose a ∈ F. By Theorem 1.3, it suffices to show that T0 ∈ S⁰ and that
S⁰ is closed under addition and scalar multiplication. First, it is clear that for any x ∈ S, T0(x) = 0 since
T0 is the zero transformation, so T0 ∈ S⁰. Next, by definition, (T + U)(x) = T(x) + U(x) = 0 + 0 = 0 for
all x ∈ S, so T + U ∈ S⁰, which means that S⁰ is closed under addition. Finally,
(aT)(x) = aT(x) = a·0 = 0 for all x ∈ S, so aT ∈ S⁰. Thus, S⁰ is a subspace of L(V, W).
Proof of (b). Suppose S1 and S2 are subsets of V, and assume S1 ⊆ S2. Let T ∈ S2⁰. By definition, we
have that T(x) = 0 for all x ∈ S2. However, since S1 ⊆ S2, every x ∈ S1 also lies in S2. This implies that
for all x ∈ S1, T(x) = 0, so we have T ∈ S1⁰. By definition of subset, we can conclude that S2⁰ ⊆ S1⁰.

Proof of (c). Suppose V1 and V2 are subspaces of V.

(⊆) Let T ∈ (V1 + V2)⁰. Then, by definition, T(a) = 0 for all a ∈ V1 + V2. By the definition on page 22 of
the sum of sets, every a ∈ V1 + V2 has the form a = v1 + v2 for some v1 ∈ V1 and v2 ∈ V2. Since V1 is a
subspace, we know that 0 ∈ V1. So, setting v1 = 0, we have that T(0 + v2) = T(v2) = 0 for all v2 ∈ V2,
which implies that T ∈ V2⁰. Similarly, since V2 is a subspace, 0 ∈ V2, so T(v1 + 0) = T(v1) = 0 for all
v1 ∈ V1. This implies that T ∈ V1⁰. Since T ∈ V2⁰ and T ∈ V1⁰, it follows that T ∈ V1⁰ ∩ V2⁰. Therefore,
we have that (V1 + V2)⁰ ⊆ V1⁰ ∩ V2⁰.

(⊇) Now let T ∈ V1⁰ ∩ V2⁰. By definition, we have that T(v1) = 0 for all v1 ∈ V1. Likewise, T(v2) = 0 for
all v2 ∈ V2. Note that by definition of linear transformation, T(v1) + T(v2) = T(v1 + v2). So,
T(v1 + v2) = 0 for all v1 + v2 ∈ V1 + V2 (by definition of subspace addition), which implies that
T ∈ (V1 + V2)⁰. Thus, it follows that V1⁰ ∩ V2⁰ ⊆ (V1 + V2)⁰.

Since (V1 + V2)⁰ ⊆ V1⁰ ∩ V2⁰ and V1⁰ ∩ V2⁰ ⊆ (V1 + V2)⁰, we have that (V1 + V2)⁰ = V1⁰ ∩ V2⁰ as
required.
2.3.11. Let V be a vector space, and let T : V → V be linear. Prove that T² = T0 if and only if R(T) ⊆ N(T).
Proof. Suppose V is a vector space and let T : V → V be linear.

(⇒) Suppose T² = T0. Pick a ∈ R(T). By definition of range, there exists v ∈ V such that T(v) = a.
Because T² = T0, it follows that 0 = T²(v) = T(T(v)) = T(a), which means a ∈ N(T). Thus, we have that
R(T) ⊆ N(T).

(⇐) Suppose R(T) ⊆ N(T). Choose any v ∈ V. Then T(v) ∈ R(T) ⊆ N(T), so, by definition of null space,
T(T(v)) = 0. That is, T²(v) = 0 for every v ∈ V. Thus, we have that T² = T0.

In conclusion, we have shown that T² = T0 if and only if R(T) ⊆ N(T).
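A concrete instance (a sketch, using the nilpotent matrix T = [[0, 1], [0, 0]] on R^2, for which T² = O and R(T) = N(T) = span{(1, 0)}):

import numpy as np

T = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(T @ T, 0)                             # T^2 = T_0
assert np.allclose(T @ (T @ np.array([5.0, 7.0])), 0)    # T kills every vector of R(T)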


2.3.12. Let V, W, and Z be vector spaces, and let T : V → W and U : W → Z be linear.

(a) Prove that if UT is one-to-one, then T is one-to-one. Must U also be one-to-one?

(b) Prove that if UT is onto, then U is onto. Must T also be onto?
(c) Prove that if U and T are one-to-one and onto, then UT is also.
Proof of (a). Suppose V, W, and Z are vector spaces, and suppose that T : V → W and U : W → Z are
linear. Assume UT is one-to-one. We wish to prove by contradiction, so assume that T is not one-to-one.
Then, by definition of one-to-one, there exist a, b ∈ V such that a ≠ b and T(a) = T(b), which can be
rewritten as T(a) − T(b) = 0. It follows that UT(a) − UT(b) = U(T(a) − T(b)) = U(0) = 0. Since
UT(a) − UT(b) = 0, we have that UT(a) = UT(b), which is a contradiction since a ≠ b and UT is assumed
to be one-to-one. Therefore, T must be one-to-one.
Proposition (a). We claim that U does not need to be one-to-one.
Proof of Proposition (a). It is sufficient to provide a counterexample. Let V = R, W = R² and Z = R.
Define T : R → R² by T(r1) = (r1, 0) and define U : R² → R by U(r1, r2) = r1. Then,
UT(r1) = U(T(r1)) = U(r1, 0) = r1, so UT : R → R is the identity. Thus, T and UT are one-to-one.
However, U is not one-to-one, since an element of Z is mapped to by multiple elements of W: different
second coordinates r2 yield the same value r1. For example, U(1, 1) = U(1, 2) = U(1, 3) = 1.
Proof of (b). Suppose V, W, and Z are vector spaces, and suppose that T : V → W and U : W → Z are
linear. Assume UT is onto. By Theorem 2.9, UT : V → Z is linear. Let z ∈ Z. Since UT is onto, there
exists v ∈ V such that UT(v) = z. Then z = U(T(v)) where T(v) ∈ W, so z is mapped to by an element of
W. Since z was arbitrary, every vector in Z lies in the range of U, so, by definition of onto, U is onto.
Proposition (b). We claim that T does not need to be onto.
Proof of Proposition (b). It is sufficient to give a counterexample. Let V = R, W = R² and Z = R. Define
T : R → R² by T(r1) = (r1, 0) and define U : R² → R by U(r1, r2) = r1. Clearly UT is onto, as every
element of Z is mapped to (U is also onto, as every element of Z is mapped to by how we defined U).
However, T is not onto, since it cannot reach vectors with a nonzero second coordinate: the second
coordinate is always zero by how we defined T, so there are elements of W that are not mapped to. For
example, (0, 3) ∈ W does not get mapped to.
Proof of (c). Suppose V, W, and Z are vector spaces, and suppose that T : V → W and U : W → Z are
linear. Assume U and T are one-to-one and onto.

We first show that UT is one-to-one. Suppose UT(x) = UT(y). We wish to show that x = y by the
definition of one-to-one. Then, U(T(x)) = U(T(y)). Since U is one-to-one, we have that T(x) = T(y).
Since T is one-to-one, it follows that x = y as required.

We now show that UT is onto. Let z ∈ Z. Since U is onto, there exists w ∈ W such that U(w) = z. Since
T is onto, there exists v ∈ V such that T(v) = w. It follows that UT(v) = U(T(v)) = U(w) = z, which
implies that UT is onto.

In conclusion, if U and T are one-to-one and onto, then UT is also.


2.3.16. Let V be a finite-dimensional vector space, and let T : V → V be linear.

(a) If rank(T) = rank(T²), prove that R(T) ∩ N(T) = {0}. Deduce that V = R(T) ⊕ N(T).
(b) Prove that V = R(T^k) ⊕ N(T^k) for some positive integer k.
Proof of (a). Let V be a finite-dimensional vector space, and let T : V → V be linear. Suppose
rank(T) = rank(T²). Since V is finite-dimensional, by the Dimension Theorem, we have that
rank(T) + nullity(T) = dim(V) and rank(T²) + nullity(T²) = dim(V), which implies that
rank(T) + nullity(T) = rank(T²) + nullity(T²). Because rank(T) = rank(T²), it follows that
nullity(T) = nullity(T²), which is equivalent to dim(N(T)) = dim(N(T²)). Moreover, N(T) is a subspace
of N(T²), because for all t ∈ N(T), T²(t) = T(T(t)) = T(0) = 0, which implies t ∈ N(T²); hence
N(T) ⊆ N(T²). So, since dim(N(T)) = dim(N(T²)) and N(T) is a subspace of N(T²), we have that
N(T) = N(T²). Now let x ∈ R(T) ∩ N(T). Since x ∈ R(T), by definition of range, there exists a ∈ V such
that T(a) = x. Also, since x ∈ N(T), by definition of null space, T(x) = 0. Then, since T(a) = x, we have
that T²(a) = T(T(a)) = T(x) = 0, so a ∈ N(T²). But, since N(T) = N(T²), a ∈ N(T) also. And because
a ∈ N(T), by definition of null space, T(a) = 0, which, since T(a) = x, means x = 0. Since
x ∈ R(T) ∩ N(T) was arbitrary, we conclude that R(T) ∩ N(T) = {0}.
Proposition (a). V = R(T) ⊕ N(T).
Proof of Proposition (a). By definition of direct sum, we wish to prove that R(T) ∩ N(T) = {0} and
V = R(T) + N(T). From part (a), we already know that R(T) ∩ N(T) = {0}. Recall problem 1.6.29 from
Homework Set 3. Clearly, R(T) and N(T) are finite-dimensional subspaces of the vector space V, so by
part (a) of that exercise, R(T) + N(T) is finite-dimensional and
dim(R(T) + N(T)) = dim(R(T)) + dim(N(T)) − dim(R(T) ∩ N(T)). Since we have determined that
R(T) ∩ N(T) = {0}, we have that dim(R(T) + N(T)) = dim(R(T)) + dim(N(T)). By the Dimension
Theorem, we know that dim(V) = dim(R(T)) + dim(N(T)), so it follows that
dim(V) = dim(R(T) + N(T)). Thus, since R(T) + N(T) is a subspace of the finite-dimensional space V
with dim(V) = dim(R(T) + N(T)), it follows that V = R(T) + N(T) as required. In conclusion, since we
have shown that R(T) ∩ N(T) = {0} and V = R(T) + N(T), by the definition of direct sum,
V = R(T) ⊕ N(T).
Proof of (b). Let V be a finite-dimensional vector space, and let T : V → V be linear. We wish to prove
that V = R(T^k) ⊕ N(T^k) for some k ∈ Z⁺. Since R(T^{i+1}) ⊆ R(T^i) for every i, the sequence
rank(T) ≥ rank(T²) ≥ rank(T³) ≥ ... is a decreasing sequence of nonnegative integers, so it must
eventually stabilize: there exists k ∈ Z⁺ such that rank(T^i) = rank(T^k) for all i ≥ k; in particular,
rank(T^k) = rank(T^{2k}) = rank((T^k)²). Applying part (a) with T^k in place of T, we get
R(T^k) ∩ N(T^k) = {0}, and, exactly as in the proof of Proposition (a), dim(V) = dim(R(T^k) + N(T^k))
and hence V = R(T^k) + N(T^k). So, since R(T^k) ∩ N(T^k) = {0} and V = R(T^k) + N(T^k), by
definition of direct sum, V = R(T^k) ⊕ N(T^k).
2.4.7. Let A be an n × n matrix.
(a) Suppose that A² = O. Prove that A is not invertible.
(b) Suppose that AB = O for some nonzero n × n matrix B. Could A be invertible? Explain.
Proof of (a). Let A be an n × n matrix and suppose that A² = O. We will prove by contradiction, so
suppose that A is invertible. Then, by definition of invertibility, there exists an n × n matrix B such that
AB = BA = I; that is, B = A⁻¹. Then, since A² = O, we have that A²A⁻¹ = OA⁻¹ = O. On the other
hand, A²A⁻¹ = AAA⁻¹ = A(AA⁻¹) = AI = A, which implies that A = O. This is a contradiction, since
the zero matrix is not invertible (OB = O ≠ I for every B). Thus, we conclude that A is not invertible.
Proof of (b). We claim that A cannot be invertible. Suppose that AB = O for some nonzero n × n matrix
B and, for the sake of contradiction, assume that A is invertible. Then, A⁻¹AB = A⁻¹O = O. But
A⁻¹AB = (A⁻¹A)B = IB = B, which implies that B = O. However, this is a contradiction since B is
nonzero. Thus, A is not invertible.
2.4.9. Let A and B be n × n matrices such that AB is invertible. Prove that A and B are invertible. Give
an example to show that arbitrary matrices A and B need not be invertible if AB is invertible.
Proof. Let A and B be n × n matrices such that AB is invertible. Since AB is invertible, by definition of
invertibility, there exists an n × n matrix C such that ABC = CAB = Iₙ. Since ABC = A(BC) = Iₙ, we
have that A is invertible, as BC is the multiplicative inverse of A by definition of invertibility (i.e.,
BC = A⁻¹). Similarly, since CAB = (CA)B = Iₙ, we have that B is invertible, as CA is the multiplicative
inverse of B by definition of invertibility (i.e., CA = B⁻¹). Thus, we have that A and B are invertible.

Example: Let A = [1 0 0; 0 0 1] and B = [1 0; 0 0; 0 1] (written row by row). Clearly, A and B are not
invertible, as they are not n × n matrices (by definition of invertibility). However, their product is
AB = [1 0; 0 1] = I₂, which is invertible, as the identity matrix is its own inverse.
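The non-square example above, checked numerically (illustrative only):

import numpy as np

A = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])   # 2 x 3, not invertible
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]]) # 3 x 2, not invertible
assert np.allclose(A @ B, np.eye(2))               # yet AB = I_2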

2.4.10. Let A and B be n × n matrices such that AB = Iₙ.


(a) Use Exercise 9 to conclude that A and B are invertible.
(b) Prove A = B⁻¹ (and hence B = A⁻¹).
(c) State and prove analogous results for linear transformations defined on finite-dimensional vector spaces.

Proof of (a). Let A and B be n × n matrices such that AB = Iₙ. Then AB is invertible, as the identity matrix is invertible. So, since AB is invertible, by Exercise 2.4.9 both A and B are invertible.
Proof of (b). From (a), we know that A and B are invertible. Since A is invertible and AB = Iₙ, multiplying on the left by A⁻¹ gives B = A⁻¹(AB) = A⁻¹Iₙ = A⁻¹. Then it follows that B⁻¹ = (A⁻¹)⁻¹ = A by the properties of inverses.
Proposition 2.4.10 (c). Let V and W be finite-dimensional vector spaces with dim(V) = dim(W), and let T : V → W and U : W → V be linear. If UT = I_V, then TU = I_W.
Proof of Proposition 2.4.10 (c). Assume UT = I_V. We wish to prove that TU = I_W. First, T is one-to-one: if T(x) = T(y), then x = UT(x) = UT(y) = y. Since dim(V) = dim(W), Theorem 2.5 implies that T is also onto. Now let w ∈ W and write w = T(v) for some v ∈ V. Then TU(w) = TU(T(v)) = T(UT(v)) = T(I_V(v)) = T(v) = w. Since w was arbitrary, it follows that TU = I_W as required.

Note that this also shows that T is invertible, with U as its inverse.


2.4.24. Let T : V → Z be a linear transformation of a vector space V onto a vector space Z. Define the mapping

    T̄ : V/N(T) → Z  by  T̄(v + N(T)) = T(v)

for any coset v + N(T) in V/N(T).

(a) Prove that T̄ is well-defined; that is, prove that if v + N(T) = v′ + N(T), then T(v) = T(v′).
(b) Prove that T̄ is linear.
(c) Prove that T̄ is an isomorphism.
(d) Prove that the diagram shown in Figure 2.3 commutes; that is, prove that T = T̄η, where η : V → V/N(T) is the quotient map given by η(v) = v + N(T).
Proof of (a). Let T : V → Z be a linear transformation of a vector space V onto a vector space Z. Suppose v + N(T) = v′ + N(T). Then v − v′ ∈ N(T), so T(v) − T(v′) = T(v − v′) = 0, and hence T(v) = T(v′). Thus T̄(v + N(T)) = T(v) = T(v′) = T̄(v′ + N(T)), so T̄ is well-defined.
Proof of (b). We wish to prove that T̄ is linear, which, by the definition of linear transformation, is equivalent to showing that for all x, y ∈ V/N(T) and c ∈ F, T̄(x + y) = T̄(x) + T̄(y) and T̄(cx) = cT̄(x). Choose a + N(T), b + N(T) ∈ V/N(T) and c ∈ F. By the definition of the mapping, the well-definedness of T̄ from part (a), and the linearity of T,

    T̄((a + N(T)) + (b + N(T))) = T̄((a + b) + N(T)) = T(a + b) = T(a) + T(b) = T̄(a + N(T)) + T̄(b + N(T)),

and

    T̄(c(a + N(T))) = T̄(ca + N(T)) = T(ca) = cT(a) = cT̄(a + N(T)).

Thus, we can conclude that T̄ is linear.
Proof of (c). By the definition of isomorphism, we must show that T̄ is linear and invertible. From part (b), we have that T̄ is linear. T̄ is onto: since T is onto, any z ∈ Z equals T(v) = T̄(v + N(T)) for some v ∈ V. T̄ is one-to-one: if T̄(v + N(T)) = T(v) = 0, then v ∈ N(T), so v + N(T) = N(T) is the zero coset; hence the null space of T̄ is trivial. Being linear, one-to-one, and onto, T̄ is invertible, so T̄ is an isomorphism.

Proof of (d). We wish to prove that T = T̄η. For any v ∈ V, (T̄η)(v) = T̄(η(v)) = T̄(v + N(T)) = T(v). Since v was arbitrary, T = T̄η, so the diagram commutes.


2.5.10. Prove that if A and B are similar n × n matrices, then tr(A) = tr(B). Hint: Use Exercise 13 of Section 2.3.

Proof. Suppose A and B are similar n × n matrices. By definition, there exists an invertible matrix Q such that A = Q⁻¹BQ. From Exercise 2.3.13, we had that tr(AB) = tr(BA) for any n × n matrices A and B. It follows that tr(A) = tr(Q⁻¹(BQ)) = tr((BQ)Q⁻¹) = tr(B(QQ⁻¹)) = tr(B) as required.
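A minimal NumPy sketch confirms this numerically; the matrices and seed below are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((4, 4))
    Q = rng.standard_normal((4, 4))       # generically invertible
    A = np.linalg.inv(Q) @ B @ Q          # A is similar to B
    print(np.trace(A), np.trace(B))       # agree up to floating-point error
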
2.5.13. Let V be a finite-dimensional vector space over a field F, and let β = {x₁, x₂, ..., xₙ} be an ordered basis for V. Let Q be an n × n invertible matrix with entries from F. Define

    x′ⱼ = Σᵢ₌₁ⁿ Qᵢⱼ xᵢ,   1 ≤ j ≤ n,

and set β′ = {x′₁, x′₂, ..., x′ₙ}. Prove that β′ is a basis for V and hence that Q is the change of coordinate matrix changing β′-coordinates into β-coordinates.
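The coordinate-change identity [v]_β = Q[v]_β′ is easy to see numerically. In the NumPy sketch below, β is taken to be the standard basis of F³ (an illustrative simplification), so x′ⱼ is simply the j-th column of Q.

    import numpy as np

    rng = np.random.default_rng(1)
    Q = rng.standard_normal((3, 3))                # invertible with probability 1
    c = rng.standard_normal(3)                     # beta'-coordinates of some v
    v = sum(c[j] * Q[:, j] for j in range(3))      # v assembled from the basis beta'
    print(np.allclose(Q @ c, v))                   # [v]_beta = Q [v]_beta'  -> True
    print(np.allclose(np.linalg.solve(Q, v), c))   # Q^{-1} recovers [v]_beta' -> True
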
2.6.13. Let V be a finite-dimensional vector space over F. For every subset S of V, define the annihilator S⁰ of S as

    S⁰ = {f ∈ V* : f(x) = 0 for all x ∈ S}.

(a) Prove that S⁰ is a subspace of V*.
(b) If W is a subspace of V and x ∉ W, prove that there exists f ∈ W⁰ such that f(x) ≠ 0.
(c) Prove that (S⁰)⁰ = span(ψ(S)), where ψ is defined as in Theorem 2.26.
(d) For subspaces W₁ and W₂, prove that W₁ = W₂ if and only if W₁⁰ = W₂⁰.
(e) For subspaces W₁ and W₂, show that (W₁ + W₂)⁰ = W₁⁰ ∩ W₂⁰.
Proof of (a). By Theorem 1.3, we wish to show that the zero functional lies in S⁰ and that S⁰ is closed under addition and scalar multiplication. Let f, g ∈ S⁰, a ∈ F, and x ∈ S. The zero functional f₀ satisfies f₀(x) = 0 for all x ∈ S, so f₀ ∈ S⁰. Next, (f + g)(x) = f(x) + g(x) = 0 + 0 = 0, so S⁰ is closed under addition. Lastly, (af)(x) = a·f(x) = a·0 = 0, so S⁰ is closed under scalar multiplication. Thus, we have shown that S⁰ is a subspace of V*.
Proof of (b). Suppose W is a subspace of V and x ∉ W. Let {x₁, x₂, ..., xₖ} be a basis for W. Since x ∉ W, the set {x₁, ..., xₖ, x} is linearly independent, so we may extend it to a basis β = {x₁, ..., xₖ, xₖ₊₁, ..., xₙ} of V with xₖ₊₁ = x. Let β* = {f₁, ..., fₙ} be the dual basis to β, and take f = fₖ₊₁. For 1 ≤ i ≤ k we have f(xᵢ) = 0, so f vanishes on W, i.e., f ∈ W⁰. But f(x) = fₖ₊₁(xₖ₊₁) = 1 ≠ 0 as required.
Proof of (c). (⊆) Pick s ∈ S. Then ψ(s) ∈ V** satisfies ψ(s)(f) = f(s) = 0 for every f ∈ S⁰, so ψ(s) ∈ (S⁰)⁰. Hence ψ(S) ⊆ (S⁰)⁰. Since ψ is linear and (S⁰)⁰ is a subspace of V**, it follows that span(ψ(S)) ⊆ (S⁰)⁰.

Proof of (d). Let W₁ and W₂ be subspaces of V.

(⇐) Suppose W₁⁰ = W₂⁰. We wish to prove this by contradiction, so assume W₁ ≠ W₂. Then, without loss of generality, there exists x ∈ W₂ with x ∉ W₁. From part (b), we know that there exists f ∈ W₁⁰ such that f(x) ≠ 0. But f ∈ W₁⁰ = W₂⁰ and x ∈ W₂, so f(x) = 0, a contradiction. Thus, we have that W₁ = W₂.

(⇒) Now suppose that W₁ = W₂. Then clearly, by the definition of the annihilator, W₁⁰ = W₂⁰.

Thus, we conclude that W₁ = W₂ if and only if W₁⁰ = W₂⁰.


Proof of (e). (⊆) Let f ∈ (W₁ + W₂)⁰. Since w₁ = w₁ + 0 ∈ W₁ + W₂ for every w₁ ∈ W₁, and likewise w₂ = 0 + w₂ ∈ W₁ + W₂ for every w₂ ∈ W₂, we have f(w₁) = f(w₂) = 0. Then f ∈ W₁⁰ and f ∈ W₂⁰, so (W₁ + W₂)⁰ ⊆ W₁⁰ ∩ W₂⁰.

(⊇) Let f ∈ W₁⁰ ∩ W₂⁰. Then f vanishes on W₁ and on W₂, which implies that for w₁ ∈ W₁ and w₂ ∈ W₂, f(w₁ + w₂) = f(w₁) + f(w₂) = 0 + 0 = 0. That is, f ∈ (W₁ + W₂)⁰, so W₁⁰ ∩ W₂⁰ ⊆ (W₁ + W₂)⁰.

Since we have that (W₁ + W₂)⁰ ⊆ W₁⁰ ∩ W₂⁰ and W₁⁰ ∩ W₂⁰ ⊆ (W₁ + W₂)⁰, we can conclude that (W₁ + W₂)⁰ = W₁⁰ ∩ W₂⁰.

2.6.14. Prove that if W is a subspace of V, then dim(W) + dim(W⁰) = dim(V). Hint: Extend an ordered basis {x₁, x₂, ..., xₖ} of W to an ordered basis β = {x₁, x₂, ..., xₙ} of V. Let β* = {f₁, f₂, ..., fₙ}. Prove that {fₖ₊₁, fₖ₊₂, ..., fₙ} is a basis for W⁰.
Proof. Suppose W is a subspace of V. Let {x₁, x₂, ..., xₖ} be an ordered basis of W and extend it to an ordered basis β = {x₁, x₂, ..., xₙ} of V. Let β* = {f₁, f₂, ..., fₙ} be the dual basis, and set Γ = {fₖ₊₁, fₖ₊₂, ..., fₙ}. We wish to show that Γ is a basis for W⁰; since Γ has n − k elements, this gives dim(W⁰) = n − k, which is equivalent to dim(W) + dim(W⁰) = dim(V). First, Γ ⊆ W⁰: for i > k and 1 ≤ j ≤ k we have fᵢ(xⱼ) = 0, so each fᵢ ∈ Γ vanishes on W. Moreover, Γ is linearly independent, since it is a subset of the basis β*. It remains to show that Γ spans W⁰. Let f ∈ W⁰ ⊆ V* and write f = a₁f₁ + ... + aₙfₙ. For each j ≤ k, applying f to xⱼ ∈ W gives aⱼ = f(xⱼ) = 0. Hence f = aₖ₊₁fₖ₊₁ + ... + aₙfₙ ∈ span(Γ). Since Γ is linearly independent and generates W⁰, Γ is a basis for W⁰ as required.
2.6.15. Suppose that W is a finite-dimensional vector space and that T : V → W is linear. Prove that N(Tᵗ) = (R(T))⁰.
Proof. Suppose that W is a finite-dimensional vector space and that T : V → W is linear.

(⊆) Let f ∈ N(Tᵗ). By the definition of null space, we have that Tᵗ(f) = f ∘ T = 0. Then, for all v ∈ V, f(T(v)) = 0. This implies that f vanishes on R(T), which, by the definition of the annihilator, means that f ∈ (R(T))⁰. Thus, N(Tᵗ) ⊆ (R(T))⁰.

(⊇) Now let f ∈ (R(T))⁰. By the definition of the annihilator, f vanishes on R(T). By the definition of the range, this means that for all v ∈ V, we have f(T(v)) = 0. Hence Tᵗ(f) = f ∘ T = 0, so f ∈ N(Tᵗ) by the definition of null space. Thus, (R(T))⁰ ⊆ N(Tᵗ).

Since N(Tᵗ) ⊆ (R(T))⁰ and (R(T))⁰ ⊆ N(Tᵗ), we have that N(Tᵗ) = (R(T))⁰.
2.6.16. Use Exercises 14 and 15 to deduce that rank(L_Aᵗ) = rank(L_A) for any A ∈ Mₘₓₙ(F).
Proof. We wish to show that rank(L_Aᵗ) = rank(L_A) for any A ∈ Mₘₓₙ(F). Note that L_A : Fⁿ → Fᵐ, and by Theorem 2.25 the matrix Aᵗ represents the transpose (L_A)ᵗ : (Fᵐ)* → (Fⁿ)* with respect to the dual bases, so rank(L_Aᵗ) = rank((L_A)ᵗ). Also, by the definition of the dual space, dim((Fᵐ)*) = dim(Fᵐ) = m. So, from the definition of rank and by 2.6.14 (Homework Set 5) and 2.6.15, we have that

    rank(L_Aᵗ) = dim(R((L_A)ᵗ))
               = dim((Fᵐ)*) − dim(N((L_A)ᵗ))    (Dimension Theorem)
               = m − dim((R(L_A))⁰)             (by 2.6.15)
               = m − (m − dim(R(L_A)))          (by 2.6.14)
               = dim(R(L_A))
               = rank(L_A)

as required.
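
The conclusion rank(Aᵗ) = rank(A) is straightforward to spot-check in NumPy; the random factors below are illustrative choices that make the rank predictable.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))   # rank 3
    print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))     # 3 3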

Partial Solutions for Linear Algebra by Friedberg et al.
Chapter 3
John K. Nguyen
December 7, 2011

3.2.14. Let T, U : V → W be linear transformations.

(a) Prove that R(T + U) ⊆ R(T) + R(U).
(b) Prove that if W is finite-dimensional, then rank(T + U) ≤ rank(T) + rank(U).
(c) Deduce from (b) that rank(A + B) ≤ rank(A) + rank(B) for any m × n matrices A and B.
Proof of (a). Let T, U : V → W be linear transformations. We wish to prove that R(T + U) ⊆ R(T) + R(U), so assume that x ∈ R(T + U). Then, by the definition of the range, there exists v ∈ V such that (T + U)(v) = x. By the definition of the sum of linear transformations, x = T(v) + U(v) ∈ R(T) + R(U). Thus, R(T + U) ⊆ R(T) + R(U).
Proof of (b). Assume W is finite-dimensional. By Theorem 2.1, we know that R(T) and R(U) are subspaces of W and, as such, they are both finite-dimensional. From (a), we know that R(T + U) ⊆ R(T) + R(U), so we have dim(R(T + U)) ≤ dim(R(T) + R(U)) ≤ dim(R(T)) + dim(R(U)). By the definition of rank, we have that rank(T + U) ≤ rank(T) + rank(U).
Proof of (c). Suppose A and B are both m × n matrices. By the definition of the left-multiplication transformation, we have that L_A : Fⁿ → Fᵐ and L_B : Fⁿ → Fᵐ. In consideration of (b) and Theorem 2.15, we have that rank(L_{A+B}) = rank(L_A + L_B) ≤ rank(L_A) + rank(L_B), which implies that rank(A + B) ≤ rank(A) + rank(B) as required.
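
A small NumPy sketch illustrates the subadditivity; the rank-1 summands are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(3)
    A = np.outer(rng.standard_normal(4), rng.standard_normal(5))   # rank 1
    B = np.outer(rng.standard_normal(4), rng.standard_normal(5))   # rank 1
    r = np.linalg.matrix_rank
    print(r(A + B), "<=", r(A) + r(B))                             # 2 <= 2
    assert r(A + B) <= r(A) + r(B)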

3.2.17. Prove that if B is a 3 × 1 matrix and C is a 1 × 3 matrix, then the 3 × 3 matrix BC has rank at most 1. Conversely, show that if A is any 3 × 3 matrix having rank 1, then there exists a 3 × 1 matrix B and a 1 × 3 matrix C such that A = BC.
Proof. Suppose B is a 3 × 1 matrix and C is a 1 × 3 matrix. By Theorem 3.5, we know that the rank of any matrix equals the maximum number of its linearly independent columns. Thus, clearly rank(B) ≤ 1 since B has only one column. By the definition of matrix multiplication, BC is a 3 × 3 matrix, so it is defined. Then, from Theorem 3.7, we have that rank(BC) ≤ rank(B). Since we know that rank(B) ≤ 1, it follows that rank(BC) ≤ rank(B) ≤ 1, which implies that rank(BC) ≤ 1.

To show the converse, suppose A is any 3 × 3 matrix having rank 1. By Theorem 3.5, the column space of A is one-dimensional, so A has a nonzero column b, and every column of A is a scalar multiple of b. Write the j-th column of A as cⱼb for scalars c₁, c₂, c₃. Let B = b, a 3 × 1 matrix, and let C = (c₁ c₂ c₃), a 1 × 3 matrix. By the definition of matrix multiplication, the j-th column of BC is cⱼb, which is exactly the j-th column of A. Thus, A = BC as required.
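
Both directions are easy to demonstrate numerically. The NumPy sketch below factors a rank-1 matrix through its first column, which is assumed (and, for this choice of A, verified) to be nonzero.

    import numpy as np

    B = np.array([[1.0], [2.0], [3.0]])        # 3 x 1
    C = np.array([[4.0, 5.0, 6.0]])            # 1 x 3
    A = B @ C                                  # 3 x 3, rank 1
    print(np.linalg.matrix_rank(A))            # 1

    b = A[:, [0]]                              # a nonzero column of A
    c = np.linalg.lstsq(b, A, rcond=None)[0]   # 1 x 3 row of multipliers
    print(np.allclose(b @ c, A))               # True: A = BC with B = b, C = c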

3.2.19. Let A be an m × n matrix with rank m and B be an n × p matrix with rank n. Determine the rank of AB. Justify your answer.
Proof. We claim that rank(AB) = m.

Let A be an m × n matrix with rank m and B be an n × p matrix with rank n. By the definition of matrix multiplication, we know that AB is an m × p matrix. By definition, we have that L_A : Fⁿ → Fᵐ, L_B : Fᵖ → Fⁿ, and L_{AB} : Fᵖ → Fᵐ, respectively. Observe that since rank(B) = n, the map L_B : Fᵖ → Fⁿ is onto (recall Theorem 2.15 and note that dim(R(L_B)) = n = dim(Fⁿ)), so L_B(Fᵖ) = Fⁿ. We then have R(L_{AB}) = R(L_A L_B) = L_A(L_B(Fᵖ)) = L_A(Fⁿ) = R(L_A). Therefore, rank(AB) = dim(R(L_A L_B)) = dim(R(L_A)) = rank(A). And since rank(A) = m, we have that rank(AB) = m.
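
A quick NumPy check with generic (hence full-rank) random matrices, chosen purely for illustration:

    import numpy as np

    rng = np.random.default_rng(4)
    m, n, p = 3, 4, 5
    A = rng.standard_normal((m, n))        # rank m with probability 1
    B = rng.standard_normal((n, p))        # rank n with probability 1
    print(np.linalg.matrix_rank(A @ B))    # m = 3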

Partial Solutions for Linear Algebra by Friedberg et al.
Chapter 4
John K. Nguyen
December 7, 2011

4.1.11. Let δ : M₂ₓ₂(F) → F be a function with the following three properties.

(i) δ is a linear function of each row of the matrix when the other row is held fixed.
(ii) If the two rows of A ∈ M₂ₓ₂(F) are identical, then δ(A) = 0.
(iii) If I is the 2 × 2 identity matrix, then δ(I) = 1.

Prove that δ(A) = det(A) for all A ∈ M₂ₓ₂(F).


Proof. Let A ∈ M₂ₓ₂(F) and write A = [a b; c d], where a, b, c, d are scalars and [x y; z w] denotes the 2 × 2 matrix with rows (x, y) and (z, w). First, by properties (i), (ii), and (iii) above, notice that

    0 = δ([1 1; 1 1])
      = δ([1 0; 1 1]) + δ([0 1; 1 1])
      = δ([1 0; 1 0]) + δ([1 0; 0 1]) + δ([0 1; 1 0]) + δ([0 1; 0 1])
      = 0 + 1 + δ([0 1; 1 0]) + 0,

which implies that δ([0 1; 1 0]) = −1. Now, in knowing this and from the properties above, we have the following:

    δ(A) = δ([a b; c d])
         = a δ([1 0; c d]) + b δ([0 1; c d])
         = ac δ([1 0; 1 0]) + ad δ([1 0; 0 1]) + bc δ([0 1; 1 0]) + bd δ([0 1; 0 1])
         = ac(0) + ad(1) + bc(−1) + bd(0)
         = ad − bc
         = det([a b; c d])
         = det(A).

Since A was arbitrary, we have shown that δ(A) = det(A).


4.2.25. Prove that det(kA) = kⁿ det(A) for any A ∈ Mₙₓₙ(F).
Proof. Choose A ∈ Mₙₓₙ(F). Then, by Theorem 4.8,

    det(kA) = det(kIₙ A)
            = det(kIₙ) det(A)
            = kⁿ det(Iₙ) det(A)
            = kⁿ · 1 · det(A)
            = kⁿ det(A).

Since A was arbitrary, we have that det(kA) = kⁿ det(A) for any A ∈ Mₙₓₙ(F) as required.
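
A one-line NumPy check of the identity, with n, k, and A chosen arbitrarily for illustration:

    import numpy as np

    rng = np.random.default_rng(5)
    n, k = 4, 2.5
    A = rng.standard_normal((n, n))
    print(np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A)))   # True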

4.3.10. A matrix M ∈ Mₙₓₙ(F) is called nilpotent if, for some positive integer k, Mᵏ = O, where O is the n × n zero matrix. Prove that if M is nilpotent, then det(M) = 0.
Proof. Suppose M ∈ Mₙₓₙ(F) is nilpotent. Then, by definition, there exists some positive integer k such that Mᵏ = O, where O is the n × n zero matrix. This implies that det(Mᵏ) = det(O) = 0. By Theorem 4.7, we know that det(AB) = det(A) det(B) for any A, B ∈ Mₙₓₙ(F). So, by induction on k, we know that det(Mᵏ) = (det(M))ᵏ for every positive integer k. Since the determinant of the zero matrix is zero, (det(M))ᵏ = 0, which implies that det(M) = 0 as required.
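For instance, any strictly upper triangular matrix is nilpotent, which gives an easy NumPy check (the entries below are arbitrary):

    import numpy as np

    M = np.array([[0.0, 1.0, 2.0],
                  [0.0, 0.0, 3.0],
                  [0.0, 0.0, 0.0]])   # strictly upper triangular, so M^3 = O
    assert np.allclose(np.linalg.matrix_power(M, 3), 0)
    print(np.linalg.det(M))           # 0.0
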
4.3.11. A matrix M ∈ Mₙₓₙ(F) is called skew-symmetric if Mᵗ = −M. Prove that if M is skew-symmetric and n is odd, then M is not invertible. What happens if n is even?
Proof. Choose M ∈ Mₙₓₙ(F) skew-symmetric. Then, by Theorem 4.8, Exercise 4.2.25 (with k = −1), and the fact that Mᵗ = −M,

    det(M) = det(Mᵗ)
           = det(−M)
           = (−1)ⁿ det(M).

Since we know that n is odd, we have that det(M) = −det(M). Rearranging terms, we have that 2 det(M) = 0, which implies that det(M) = 0 (provided F does not have characteristic two), so, by the corollary on page 223, M is not invertible.

If n is even, then we would have det(M) = det(M), which does not imply anything.
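A NumPy sketch contrasting the odd and even cases; the random generators are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.standard_normal((5, 5))
    M = X - X.T                        # skew-symmetric, n = 5 odd
    print(np.linalg.det(M))            # ~0 up to floating-point error

    Y = rng.standard_normal((4, 4))
    N = Y - Y.T                        # skew-symmetric, n = 4 even
    print(np.linalg.det(N))            # generically nonzero
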
4.3.13. For M ∈ Mₙₓₙ(C), let M̄ be the matrix such that (M̄)ᵢⱼ is the complex conjugate of Mᵢⱼ for all i, j.
(a) Prove that det(M̄) is the complex conjugate of det(M).
(b) A matrix Q ∈ Mₙₓₙ(C) is called unitary if QQ* = I, where Q* = Q̄ᵗ. Prove that if Q is a unitary matrix, then |det(Q)| = 1.
Proof of (a). Let M ∈ Mₙₓₙ(C). We proceed by induction on n. For n = 1, det(M̄) = (M̄)₁₁, the complex conjugate of M₁₁ = det(M). Now assume the result holds for (n − 1) × (n − 1) matrices, and note that deleting row 1 and column j of M̄ yields the conjugate of the matrix obtained by deleting row 1 and column j of M. Since complex conjugation preserves sums and products, cofactor expansion along the first row together with the induction hypothesis shows that det(M̄) is the conjugate of det(M).

Proof of (b). Let Q ∈ Mₙₓₙ(C) be a unitary matrix. Then, by definition, QQ* = QQ̄ᵗ = I. Then, from Theorems 4.7 and 4.8 and part (a), we have that

    det(I) = det(QQ̄ᵗ)
           = det(Q) det(Q̄ᵗ)
           = det(Q) det(Q̄)
           = det(Q) · (the conjugate of det(Q))
           = |det(Q)|².

Since det(I) = 1, we have that |det(Q)|² = 1, which can further be reduced to |det(Q)| = 1 as required.
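A NumPy sketch: the QR factorization of a generic complex matrix yields a unitary factor, which serves as a test case (the seed and size are arbitrary).

    import numpy as np

    rng = np.random.default_rng(7)
    Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    Q, _ = np.linalg.qr(Z)                          # Q is unitary
    assert np.allclose(Q @ Q.conj().T, np.eye(4))
    print(abs(np.linalg.det(Q)))                    # 1.0 up to rounding
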
4.3.21. Prove that if M ∈ Mₙₓₙ(F) can be written in the form

    M = [ A B ]
        [ O C ]

where A and C are square matrices, then det(M) = det(A) · det(C).


Proof. Let A be a k × k matrix, B a k × t matrix, and C a t × t matrix, so n = k + t. We prove the claim by induction on k.

For our base case, suppose k = 1, so A = (a) for a scalar a. Then, expanding det(M) down the first column, whose only entry that can be nonzero is a,

    det(M) = a · det(M̃₁₁) + 0 + ... + 0 = a · det(C) = det(A) det(C),

since deleting the first row and first column of M leaves exactly C.

Now suppose the result holds whenever the upper-left block is a (k − 1) × (k − 1) matrix, and let A be k × k. The first column of M consists of the entries a₁₁, ..., aₖ₁ of the first column of A followed by zeros. For 1 ≤ i ≤ k, deleting row i and column 1 of M yields the block matrix

    M̃ᵢ₁ = [ Ãᵢ₁ Bᵢ ]
           [ O   C  ]

where Ãᵢ₁ is the (k − 1) × (k − 1) matrix obtained from A by deleting row i and column 1, and Bᵢ is obtained from B by deleting row i. Taking the cofactor expansion of det(M) along the first column and applying the induction hypothesis to each M̃ᵢ₁,

    det(M) = Σᵢ₌₁ᵏ (−1)^{i+1} aᵢ₁ det(M̃ᵢ₁)
           = Σᵢ₌₁ᵏ (−1)^{i+1} aᵢ₁ det(Ãᵢ₁) det(C)
           = ( Σᵢ₌₁ᵏ (−1)^{i+1} aᵢ₁ det(Ãᵢ₁) ) det(C)
           = det(A) det(C),

where the last equality is the cofactor expansion of det(A) along its first column.
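
A block-matrix check in NumPy, with the blocks generated randomly for illustration:

    import numpy as np

    rng = np.random.default_rng(8)
    A = rng.standard_normal((2, 2))
    B = rng.standard_normal((2, 3))
    C = rng.standard_normal((3, 3))
    M = np.block([[A, B],
                  [np.zeros((3, 2)), C]])
    print(np.isclose(np.linalg.det(M),
                     np.linalg.det(A) * np.linalg.det(C)))   # True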

4.3.22. Let T : Pₙ(F) → Fⁿ⁺¹ be the linear transformation defined in Exercise 22 of Section 2.4 by T(f) = (f(c₀), f(c₁), ..., f(cₙ)), where c₀, c₁, ..., cₙ are distinct scalars in an infinite field F. Let β be the standard ordered basis for Pₙ(F) and γ be the standard ordered basis for Fⁿ⁺¹.
(a) Show that M = [T]_β^γ has the form

    [ 1 c₀ c₀² ... c₀ⁿ ]
    [ 1 c₁ c₁² ... c₁ⁿ ]
    [ :  :  :      :  ]
    [ 1 cₙ cₙ² ... cₙⁿ ]

(a matrix of this form is called a Vandermonde matrix).
(c) Prove that

    det(M) = Π_{0 ≤ i < j ≤ n} (cⱼ − cᵢ),

the product of all terms of the form cⱼ − cᵢ for 0 ≤ i < j ≤ n.

Proof of (a). Suppose T : Pₙ(F) → Fⁿ⁺¹ is the linear transformation defined in Exercise 22 of Section 2.4 by T(f) = (f(c₀), f(c₁), ..., f(cₙ)), where c₀, c₁, ..., cₙ are distinct scalars in an infinite field F. Let β = {1, x, x², ..., xⁿ} be the standard ordered basis for Pₙ(F) and γ be the standard ordered basis for Fⁿ⁺¹. The j-th column of M = [T]_β^γ is [T(x^{j−1})]_γ = (c₀^{j−1}, c₁^{j−1}, ..., cₙ^{j−1}), which is exactly the j-th column of the matrix displayed above.

Proof of (c). We will prove that

    det(M) = Π_{0 ≤ i < j ≤ n} (cⱼ − cᵢ),

the product of all terms of the form cⱼ − cᵢ for 0 ≤ i < j ≤ n.
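
While the proof itself is not completed here, the product formula is easy to test numerically; the nodes below are arbitrary distinct values chosen for illustration.

    import numpy as np
    from itertools import combinations

    c = np.array([0.0, 1.0, 2.0, 4.0])             # distinct scalars c_0..c_n
    M = np.vander(c, increasing=True)              # rows (1, c_j, ..., c_j^n)
    prod = np.prod([c[j] - c[i]
                    for i, j in combinations(range(len(c)), 2)])
    print(np.isclose(np.linalg.det(M), prod))      # True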

Partial Solutions for Linear Algebra by Friedberg et al.
Chapter 5
John K. Nguyen
December 7, 2011

5.2.11. Let A be an n × n matrix that is similar to an upper triangular matrix and has the distinct eigenvalues λ₁, λ₂, ..., λₖ with corresponding multiplicities m₁, m₂, ..., mₖ. Prove the following statements.

(a) tr(A) = Σᵢ₌₁ᵏ mᵢλᵢ.
(b) det(A) = (λ₁)^{m₁}(λ₂)^{m₂} ⋯ (λₖ)^{mₖ}.
Proof of (a). Suppose A is an n × n matrix similar to an upper triangular matrix B. Then A and B have the same eigenvalues with the same multiplicities, so B has the distinct eigenvalues λ₁, λ₂, ..., λₖ with corresponding multiplicities m₁, m₂, ..., mₖ; since B is upper triangular, its diagonal entries are its eigenvalues, with each λᵢ appearing mᵢ times. By the definition of similarity, there exists an invertible matrix Q such that A = Q⁻¹BQ. Then, in consideration of Exercise 2.3.13,

    tr(A) = tr(Q⁻¹(BQ)) = tr((BQ)Q⁻¹) = tr(B) = Σᵢ₌₁ᵏ mᵢλᵢ.

Proof of (b). As in part (a), let B be an upper triangular matrix similar to A, so the diagonal entries of B are its eigenvalues, with each λᵢ appearing mᵢ times, and let Q be an invertible matrix with A = Q⁻¹BQ. Recall Theorem 4.7, which states that for any A, B ∈ Mₙₓₙ(F), det(AB) = det(A) det(B). In consideration of this theorem and the corollary on page 223, we have that

    det(A) = det(Q⁻¹BQ) = det(Q⁻¹) det(B) det(Q) = det(Q)⁻¹ det(B) det(Q) = det(B) = (λ₁)^{m₁}(λ₂)^{m₂} ⋯ (λₖ)^{mₖ},

where the last equality holds because the determinant of an upper triangular matrix is the product of its diagonal entries.

5.2.12. Let T be an invertible linear operator on a finite-dimensional vector space V.

(a) Recall that for any eigenvalue λ of T, λ⁻¹ is an eigenvalue of T⁻¹. Prove that the eigenspace of T corresponding to λ is the same as the eigenspace of T⁻¹ corresponding to λ⁻¹.
(b) Prove that if T is diagonalizable, then T⁻¹ is diagonalizable.

Proof of (a). Pick v ∈ E_λ(T). Then, by definition, T(v) = λv. Applying T⁻¹ to both sides, we get v = T⁻¹(λv) = λT⁻¹(v), so T⁻¹(v) = λ⁻¹v (note λ ≠ 0 since T is invertible). Then, by definition, v ∈ E_{λ⁻¹}(T⁻¹), so E_λ(T) ⊆ E_{λ⁻¹}(T⁻¹). Applying the same argument to T⁻¹ and its eigenvalue λ⁻¹ gives the reverse inclusion, so the eigenspace of T corresponding to λ is the same as the eigenspace of T⁻¹ corresponding to λ⁻¹.
Proof of (b). Suppose T is diagonalizable. Then T has n linearly independent eigenvectors, where n = dim(V). From part (a), we know that these are also eigenvectors of T⁻¹, so T⁻¹ has n linearly independent eigenvectors. Thus, T⁻¹ is diagonalizable.

5.2.13. Let A ∈ Mₙₓₙ(F). Recall from Exercise 14 of Section 5.1 that A and Aᵗ have the same characteristic polynomial and hence share the same eigenvalues with the same multiplicities. For any eigenvalue λ of A and Aᵗ, let E_λ and E′_λ denote the corresponding eigenspaces for A and Aᵗ, respectively.

(a) Show by way of example that for a given common eigenvalue, these two eigenspaces need not be the same.
(b) Prove that for any eigenvalue λ, dim(E_λ) = dim(E′_λ).
(c) Prove that if A is diagonalizable, then Aᵗ is also diagonalizable.

Example for (a). Define A = [1 1; 0 1] (first row (1, 1), second row (0, 1)), which has the single eigenvalue λ = 1. For A, E₁ = N(A − I) = span{(1, 0)}, while for Aᵗ = [1 0; 1 1], E′₁ = N(Aᵗ − I) = span{(0, 1)}. Thus E₁ ≠ E′₁.
Proof of (b). Let λ be an eigenvalue of A (and hence of Aᵗ). By definition and the Dimension Theorem, dim(E_λ) = n − rank(A − λI). Taking the transpose (recall that rank(A) = rank(Aᵗ) by a previous exercise), we have that dim(E_λ) = n − rank((A − λI)ᵗ) = n − rank(Aᵗ − λI) = dim(E′_λ) as required.

Proof of (c). Suppose A is diagonalizable. Then there exists an invertible matrix Q such that B = Q⁻¹AQ is a diagonal matrix (recall that a square matrix is diagonalizable if and only if it is similar to a diagonal matrix, according to Section 5.1). Taking the transpose of both sides yields Bᵗ = (Q⁻¹AQ)ᵗ = QᵗAᵗ(Q⁻¹)ᵗ = QᵗAᵗ(Qᵗ)⁻¹, since (Q⁻¹)ᵗ = (Qᵗ)⁻¹. Clearly Bᵗ = B is diagonal, so Aᵗ is similar to a diagonal matrix and, by definition, Aᵗ is diagonalizable.
5.2.18a. Prove that if T and U are simultaneously diagonalizable operators, then T and U commute (i.e., UT = TU).
Proof. Suppose T and U are simultaneously diagonalizable operators. Then, by definition, there exists an ordered basis β = {v₁, v₂, ..., vₙ} of common eigenvectors, so that T(vᵢ) = λᵢvᵢ and U(vᵢ) = μᵢvᵢ for i = 1, 2, ..., n and scalars λᵢ, μᵢ. It follows that

    TU(vᵢ) = T(U(vᵢ)) = T(μᵢvᵢ) = μᵢT(vᵢ) = μᵢλᵢvᵢ = λᵢμᵢvᵢ = U(λᵢvᵢ) = U(T(vᵢ)) = UT(vᵢ).

Since TU and UT agree on the basis β, we can conclude that UT = TU as required.
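
A NumPy sketch: conjugating two diagonal matrices by the same invertible matrix produces simultaneously diagonalizable operators, which indeed commute (all values are illustrative).

    import numpy as np

    rng = np.random.default_rng(10)
    P = rng.standard_normal((3, 3))       # common eigenvector basis
    Pinv = np.linalg.inv(P)
    T = P @ np.diag([1.0, 2.0, 3.0]) @ Pinv
    U = P @ np.diag([4.0, 5.0, 6.0]) @ Pinv
    print(np.allclose(T @ U, U @ T))      # True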

5.4.13. Let T be a linear operator on a vector space V, let v be a nonzero vector in V, and let W be the T-cyclic subspace of V generated by v. For any w ∈ V, prove that w ∈ W if and only if there exists a polynomial g(t) such that w = g(T)(v).
Proof. (⇒) Suppose w ∈ W and assume dim(W) = n. Let β = {v, T(v), ..., Tⁿ⁻¹(v)} be an ordered basis for W. Then, by definition, w = a₀v + a₁T(v) + ... + aₙ₋₁Tⁿ⁻¹(v) for some scalars a₀, a₁, ..., aₙ₋₁ (that is, w is a linear combination of the elements of β). Let g(t) = a₀ + a₁t + ... + aₙ₋₁tⁿ⁻¹. Then g(t) is a polynomial of degree at most n − 1, and w = g(T)(v).

(⇐) Suppose there exists a polynomial g(t) such that w = g(T)(v). Write g(t) = a₀ + a₁t + ... + aₙtⁿ, so that w = a₀v + a₁T(v) + ... + aₙTⁿ(v) for some scalars a₀, a₁, ..., aₙ. Since W contains v and is T-invariant, we have v, T(v), ..., Tⁿ(v) ∈ W, and since W is a subspace, w ∈ W.
5.4.16. Let T be a linear operator on a finite-dimensional vector space V .

(a) Prove that if the characteristic polynomial of T splits, then so does the characteristic polynomial of
the restriction of T to any T-invariant subspace of V .
(b) Deduce that if the characteristic polynomial of T splits, then any nontrivial T-invariant subspace of V
contains an eigenvector of T .

Proof of (a). Suppose the characteristic polynomial of T, say f, splits. Let W be a T-invariant subspace of V and let g be the characteristic polynomial of T_W. Then g divides f, so there exists a polynomial r such that f = gr. Since f splits, f(t) = c(t − a₁)(t − a₂)⋯(t − aₙ) for some scalars c, a₁, ..., aₙ. By the unique factorization of polynomials, every irreducible factor of g is, up to a scalar, one of the linear factors t − aᵢ. Thus g factors into linear factors, which means that the characteristic polynomial of the restriction of T to any T-invariant subspace splits.

Proof of (b). Suppose W is a nontrivial T-invariant subspace of V. Let f be the characteristic polynomial of T_W. By part (a), we know that f splits, so f has a root λ. Then f(λ) = det(T_W − λI) = 0. But this means that T_W − λI is not invertible, so there exists a nonzero w ∈ W such that (T_W − λI)(w) = 0. Hence T(w) = T_W(w) = λw, so w is an eigenvector of T lying in W. Thus, we have shown that if the characteristic polynomial of T splits, then any nontrivial T-invariant subspace of V contains an eigenvector of T.

5.4.20. Let T be a linear operator on a vector space V , and suppose that V is a T-cyclic subspace of itself.
Prove that if U is a linear operator on V , then U T = T U if and only if U = g(T ) for some polynomial g(t).
Hint: Suppose that V is generated by v. Choose g(t) according to Exercise 13 so that g(T )(v) = U (v).

Proof. (⇒) Suppose UT = TU. Note first that UTᵏ = TᵏU for every k ≥ 0, by induction. Since V is a T-cyclic subspace of itself, V = span({v, T(v), T²(v), ...}), so U(v) = a₀v + a₁T(v) + ... + aₙTⁿ(v) for some scalars a₀, a₁, ..., aₙ. So U(v) = g(T)(v), where g(t) = a₀ + a₁t + ... + aₙtⁿ. Now suppose x ∈ V. Then x = b₀v + b₁T(v) + ... + bₘTᵐ(v) for some scalars b₀, b₁, ..., bₘ. It follows that

    U(x) = U(b₀v + b₁T(v) + ... + bₘTᵐ(v)) = b₀U(T⁰(v)) + b₁U(T(v)) + ... + bₘU(Tᵐ(v))
         = b₀T⁰(U(v)) + b₁T(U(v)) + ... + bₘTᵐ(U(v))
         = b₀T⁰(g(T)(v)) + b₁T(g(T)(v)) + ... + bₘTᵐ(g(T)(v))
         = b₀g(T)(T⁰(v)) + b₁g(T)(T(v)) + ... + bₘg(T)(Tᵐ(v))
         = g(T)(b₀T⁰(v) + b₁T(v) + ... + bₘTᵐ(v))
         = g(T)(x).

Thus, U = g(T) for some polynomial g.

(⇐) Suppose U = g(T) for some polynomial g, say g(t) = a₀ + a₁t + ... + aₙtⁿ. Then, for any x ∈ V,

    UT(x) = a₀T⁰(T(x)) + a₁T(T(x)) + ... + aₙTⁿ(T(x))
          = a₀T(T⁰(x)) + a₁T(T(x)) + ... + aₙT(Tⁿ(x))
          = T(a₀T⁰(x) + a₁T(x) + ... + aₙTⁿ(x))
          = TU(x).

Therefore, we have that UT = TU.

We have shown that if U is a linear operator on V , then U T = T U if and only if U = g(T ) for some
polynomial g(t).

Partial Solutions for Linear Algebra by Friedberg et al.
Chapter 6
John K. Nguyen
December 7, 2011

6.1.18. Let V be a vector space over F, where F = R or F = C, and let W be an inner product space over F with inner product ⟨·, ·⟩. If T : V → W is linear, prove that ⟨x, y⟩′ = ⟨T(x), T(y)⟩ defines an inner product on V if and only if T is one-to-one.
Proof. (⇒) Suppose ⟨x, y⟩′ = ⟨T(x), T(y)⟩ defines an inner product on V. Suppose T(x) = T(y); since T is linear, T(x − y) = T(x) − T(y) = 0. Then ⟨x − y, x − y⟩′ = ⟨T(x − y), T(x − y)⟩ = ⟨0, 0⟩ = 0. By the definition of an inner product (particularly part (d) on page 330), it follows that x − y = 0, which means x = y, so T is one-to-one.

(⇐) Assume that T is one-to-one. We verify the inner product axioms for ⟨·, ·⟩′. By the linearity of T and the axioms for ⟨·, ·⟩ on W, ⟨x + z, y⟩′ = ⟨T(x + z), T(y)⟩ = ⟨T(x), T(y)⟩ + ⟨T(z), T(y)⟩ = ⟨x, y⟩′ + ⟨z, y⟩′; similarly ⟨cx, y⟩′ = c⟨x, y⟩′, and ⟨x, y⟩′ is the conjugate of ⟨y, x⟩′. Finally, ⟨x, x⟩′ = ⟨T(x), T(x)⟩ > 0 whenever T(x) ≠ 0; and if x ≠ 0, then T(x) ≠ 0 since T is one-to-one and T(0) = 0. Thus ⟨·, ·⟩′ is an inner product on V.

We have proven that ⟨x, y⟩′ = ⟨T(x), T(y)⟩ defines an inner product on V if and only if T is one-to-one.
6.1.12. Let {v₁, v₂, ..., vₖ} be an orthogonal set in V, and let a₁, a₂, ..., aₖ be scalars. Prove that

    ‖Σᵢ₌₁ᵏ aᵢvᵢ‖² = Σᵢ₌₁ᵏ |aᵢ|² ‖vᵢ‖².

Proof. We apply the definition of the inner product and the orthogonality of the set (⟨vⱼ, vᵢ⟩ = 0 for j ≠ i) to get the following:

    ‖Σᵢ₌₁ᵏ aᵢvᵢ‖² = ⟨Σⱼ₌₁ᵏ aⱼvⱼ, Σᵢ₌₁ᵏ aᵢvᵢ⟩ = Σⱼ₌₁ᵏ Σᵢ₌₁ᵏ aⱼāᵢ ⟨vⱼ, vᵢ⟩ = Σᵢ₌₁ᵏ aᵢāᵢ ⟨vᵢ, vᵢ⟩ = Σᵢ₌₁ᵏ |aᵢ|² ‖vᵢ‖²

as required.
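This generalized Pythagorean identity is easy to test in NumPy with any pairwise orthogonal set; the vectors and scalars below are arbitrary illustrative choices.

    import numpy as np

    v = [np.array([1.0, 1.0, 0.0]),      # pairwise orthogonal, not unit length
         np.array([1.0, -1.0, 0.0]),
         np.array([0.0, 0.0, 2.0])]
    a = [2.0, -3.0, 0.5]
    lhs = np.linalg.norm(sum(ai * vi for ai, vi in zip(a, v)))**2
    rhs = sum(abs(ai)**2 * np.linalg.norm(vi)**2 for ai, vi in zip(a, v))
    print(np.isclose(lhs, rhs))          # True
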
6.2.13. Let V be an inner product space, S and S₀ be subsets of V, and W be a finite-dimensional subspace of V. Prove the following results.
(a) S₀ ⊆ S implies that S⊥ ⊆ S₀⊥.
(b) S ⊆ (S⊥)⊥; so span(S) ⊆ (S⊥)⊥.
(c) W = (W⊥)⊥. Hint: Use Exercise 6.
(d) V = W ⊕ W⊥. (See the exercises of Section 1.3.)
Proof of (a). Suppose S₀ ⊆ S. Let x ∈ S⊥. Then, by definition, ⟨x, y⟩ = 0 for all y ∈ S. Since S₀ ⊆ S, in particular ⟨x, y⟩ = 0 for all y ∈ S₀, so x ∈ S₀⊥. Thus, S⊥ ⊆ S₀⊥ as required.
Proof of (b). Let x ∈ S. Then ⟨x, y⟩ = 0 for all y ∈ S⊥. But this also means that x ∈ (S⊥)⊥. So S ⊆ (S⊥)⊥, and since (S⊥)⊥ is a subspace, span(S) ⊆ (S⊥)⊥.
Proof of (c). (⊆) By part (b), we have that W ⊆ (W⊥)⊥.

(⊇) We prove the contrapositive, so assume that x ∉ W. Then, by Exercise 6.2.6, there exists y ∈ W⊥ such that ⟨x, y⟩ ≠ 0. This implies that x ∉ (W⊥)⊥.

Thus, we have shown that W = (W⊥)⊥.


Proof of (d). By the definition of direct sum, we wish to show that W ∩ W⊥ = {0} and V = W + W⊥. Since the only vector that is orthogonal to itself is the zero vector, we have that W ∩ W⊥ = {0}. Let v ∈ V. By Theorem 6.6, there exist unique vectors u ∈ W and z ∈ W⊥ such that v = u + z, so we have that V = W + W⊥. We have shown that V = W ⊕ W⊥.
6.2.14. Let W₁ and W₂ be subspaces of a finite-dimensional inner product space. Prove that (W₁ + W₂)⊥ = W₁⊥ ∩ W₂⊥ and (W₁ ∩ W₂)⊥ = W₁⊥ + W₂⊥. Hint for the second equation: Apply Exercise 13(c) to the first equation.

Proof of (a). (⊆) Let x ∈ (W₁ + W₂)⊥. Since W₁ ⊆ W₁ + W₂ and W₂ ⊆ W₁ + W₂, x is orthogonal to every vector of W₁ and every vector of W₂, so x ∈ W₁⊥ ∩ W₂⊥. So, (W₁ + W₂)⊥ ⊆ W₁⊥ ∩ W₂⊥.

(⊇) Let x ∈ W₁⊥ ∩ W₂⊥. Then x ∈ W₁⊥ and x ∈ W₂⊥. For any w₁ + w₂ ∈ W₁ + W₂, by linearity ⟨x, w₁ + w₂⟩ = ⟨x, w₁⟩ + ⟨x, w₂⟩ = 0 + 0 = 0, so x ∈ (W₁ + W₂)⊥. So, W₁⊥ ∩ W₂⊥ ⊆ (W₁ + W₂)⊥.

Thus, since (W₁ + W₂)⊥ ⊆ W₁⊥ ∩ W₂⊥ and W₁⊥ ∩ W₂⊥ ⊆ (W₁ + W₂)⊥, (W₁ + W₂)⊥ = W₁⊥ ∩ W₂⊥.


Proof of (b). Following the hint, we apply the first equation to the subspaces W₁⊥ and W₂⊥:

    (W₁⊥ + W₂⊥)⊥ = (W₁⊥)⊥ ∩ (W₂⊥)⊥ = W₁ ∩ W₂,

where the last equality uses Exercise 6.2.13(c). Taking the orthogonal complement of both sides and applying Exercise 6.2.13(c) once more (W₁⊥ + W₂⊥ is a subspace of a finite-dimensional inner product space), we obtain

    W₁⊥ + W₂⊥ = ((W₁⊥ + W₂⊥)⊥)⊥ = (W₁ ∩ W₂)⊥.

Therefore, (W₁ ∩ W₂)⊥ = W₁⊥ + W₂⊥.
6.2.22. Let V = C([0, 1]) with the inner product ⟨f, g⟩ = ∫₀¹ f(t)g(t) dt. Let W be the subspace spanned by the linearly independent set {t, √t}.

(a) Find an orthonormal basis for W.
(b) Let h(t) = t². Use the orthonormal basis obtained in (a) to obtain the best (closest) approximation of h in W.
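No solution is written out here, but the Gram–Schmidt computation can be carried out symbolically. The following SymPy sketch (assuming the inner product and spanning set above) produces an orthonormal basis of W and the projection of h onto W:

    import sympy as sp

    t = sp.symbols('t', nonnegative=True)
    ip = lambda f, g: sp.integrate(f * g, (t, 0, 1))   # <f, g> on C([0, 1])

    f1, f2 = t, sp.sqrt(t)
    u1 = f1 / sp.sqrt(ip(f1, f1))                      # normalize t
    w = f2 - ip(f2, u1) * u1                           # remove the projection
    u2 = sp.simplify(w / sp.sqrt(ip(w, w)))
    print(u1, u2)                                      # orthonormal basis of W

    h = t**2                                           # best approximation in W
    print(sp.simplify(ip(h, u1) * u1 + ip(h, u2) * u2))
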
6.3.6. Let T be a linear operator on an inner product space V. Let U₁ = T + T* and U₂ = TT*. Prove that U₁* = U₁ and U₂* = U₂.

Proof. Let T be a linear operator on an inner product space V, and suppose that U₁ = T + T* and U₂ = TT*. We first prove that U₁* = U₁. By our assumption and Theorem 6.11,

    U₁* = (T + T*)* = T* + (T*)* = T* + T = U₁.

In a similar argument, we show that U₂* = U₂:

    U₂* = (TT*)* = (T*)* T* = TT* = U₂.

Thus, we have that U₁* = U₁ and U₂* = U₂ as required.

6.3.8. Let V be a finite-dimensional inner product space, and let T be a linear operator on V. Prove that if T is invertible, then T* is invertible and (T*)⁻¹ = (T⁻¹)*.

Proof. Let V be a finite-dimensional inner product space, and let T be a linear operator on V. Suppose T is invertible. For any x, y ∈ V,

    ⟨T*(T⁻¹)*(x), y⟩ = ⟨(T⁻¹)*(x), T(y)⟩
                     = ⟨x, T⁻¹T(y)⟩
                     = ⟨x, y⟩.

Since this holds for every y ∈ V, we get T*(T⁻¹)*(x) = x for every x ∈ V, so T*(T⁻¹)* = I. An identical computation gives (T⁻¹)*T* = I. Therefore, T* is invertible and (T*)⁻¹ = (T⁻¹)*.
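
In matrix form this says inv(A*) = (inv(A))*, where A* is the conjugate transpose; a NumPy sketch with an arbitrary complex matrix:

    import numpy as np

    rng = np.random.default_rng(11)
    A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    adj = lambda X: X.conj().T             # matrix of the adjoint operator
    print(np.allclose(np.linalg.inv(adj(A)), adj(np.linalg.inv(A))))   # True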


6.3.9. Prove that if V = W ⊕ W⊥ and T is the projection on W along W⊥, then T = T*. Hint: Recall that N(T) = W⊥.

Proof. Suppose that V = W ⊕ W⊥ and T is the projection on W along W⊥. Since V = W ⊕ W⊥, we have that V = W + W⊥. Let v₁, v₂ ∈ V. Then there exist w₁, w₂ ∈ W and w₁′, w₂′ ∈ W⊥ such that v₁ = w₁ + w₁′ and v₂ = w₂ + w₂′. Now, in consideration that T is the projection on W along W⊥ and that vectors of W⊥ are orthogonal to vectors of W,

    ⟨v₁, T(v₂)⟩ = ⟨w₁ + w₁′, T(w₂ + w₂′)⟩
               = ⟨w₁ + w₁′, w₂⟩
               = ⟨w₁, w₂⟩ + ⟨w₁′, w₂⟩
               = ⟨w₁, w₂⟩.

Similarly,

    ⟨T(v₁), v₂⟩ = ⟨T(w₁ + w₁′), w₂ + w₂′⟩
               = ⟨w₁, w₂ + w₂′⟩
               = ⟨w₁, w₂⟩ + ⟨w₁, w₂′⟩
               = ⟨w₁, w₂⟩.

So it follows that ⟨v₁, T(v₂)⟩ = ⟨T(v₁), v₂⟩ = ⟨v₁, T*(v₂)⟩ for all v₁, v₂ ∈ V, which implies that T = T*.
6.3.11. For a linear operator T on an inner product space V, prove that T*T = T₀ implies T = T₀ (where T₀ denotes the zero operator). Is the same result true if we assume that TT* = T₀?
Proof. Let T be a linear operator on an inner product space V. Suppose T*T = T₀. Pick x ∈ V. Then ⟨T*T(x), x⟩ = ⟨T₀(x), x⟩ = ⟨0, x⟩ = 0 by Theorem 6.1(c). But we also have that ⟨T*T(x), x⟩ = ⟨T(x), T(x)⟩, so ⟨T(x), T(x)⟩ = ‖T(x)‖² = 0. It follows that T(x) = 0 for every x ∈ V, which implies T = T₀.
We claim that the same result is true if we assume that TT* = T₀.

Proof. Let T be a linear operator on an inner product space V. Suppose TT* = T₀. Since (T*)*T* = TT* = T₀, the first result applied to the operator T* gives T* = T₀. Then T = (T*)* = T₀* = T₀ (note that the adjoint of the zero operator is the zero operator).
6.3.12. Let V be an inner product space, and let T be a linear operator on V. Prove the following results:

(a) R(T*)⊥ = N(T).
(b) If V is finite-dimensional, then R(T*) = N(T)⊥. Hint: Use Exercise 13(c) of Section 6.2.
Proof of (a). By definition, x ∈ R(T*)⊥ if and only if ⟨x, T*(y)⟩ = ⟨T(x), y⟩ = 0 for all y ∈ V. By Theorem 6.1, this is true if and only if T(x) = 0, which means x ∈ N(T). Thus, x ∈ R(T*)⊥ if and only if x ∈ N(T), so we have that R(T*)⊥ = N(T).
Proof of (b). Suppose V is finite-dimensional. From part (a), we know that R(T*)⊥ = N(T). It follows that (R(T*)⊥)⊥ = N(T)⊥. In consideration of Exercise 6.2.13(c), we have that (R(T*)⊥)⊥ = R(T*). Thus, since (R(T*)⊥)⊥ = N(T)⊥ and (R(T*)⊥)⊥ = R(T*), we have that R(T*) = N(T)⊥.