(b) The set only contains two vectors, so the set represents the two points (1, −3, 1), (−2, 6, −2) in R3 .
Section 1.1 Solutions
(c) Since (1, 0, −2) and (2, 1, −1) are linearly independent (neither vector is a scalar multiple of the other), we get that the set represents the plane in R3 with vector equation ~x = s(1, 0, −2) + t(2, 1, −1), s, t ∈ R.
(d) The set only contains the zero vector ~0 = (0, 0, 0), so it represents the origin in R3. A vector equation is ~x = ~0.
(e) The set is linearly independent (verify this) and so the set represents a hyperplane in R4 with vector equation
~x = t1(1, 0, 1, 1) + t2(1, 0, 2, 1) + t3(3, 1, 0, 0), t1, t2, t3 ∈ R
(f) The set represents the line in R4 with vector equation ~x = t(1, 1, 1, 0), t ∈ R.
1.1.3 (a) Observe that (−1)(1, 2) + 2(1, 3) − (1, 4) = (0, 0). Hence, the set is linearly dependent. Solving for the first vector gives (1, 2) = 2(1, 3) − (1, 4).
(b) Since neither vector is a scalar multiple of the other, the set is linearly independent.
(c) Since neither vector is a scalar multiple of the other, the set is linearly independent.
(d) Observe that 2(2, 3) + (−4, −6) = (0, 0). Hence, the set is linearly dependent. Solving for the second vector gives (−4, −6) = (−2)(2, 3).
(e) The only solution to c(1, 2, 1) = (0, 0, 0) is c = 0, so the set is linearly independent.
(f) Since the set contains the zero vector, it is linearly dependent by Theorem 1.1.4. Solving for the zero vector gives (0, 0, 0) = 0(1, −3, −2) + 0(4, 6, 1).
(g) Observe that 0(1, 1, 0) + 2(1, 2, −1) + (−2, −4, 2) = (0, 0, 0). Hence, the set is linearly dependent. Solving for the second vector gives (1, 2, −1) = 0(1, 1, 0) − (1/2)(−2, −4, 2).
(h) Consider
(0, 0, 0) = c1(1, −2, 1) + c2(2, 3, 4) + c3(0, −1, −2)
Performing the operations on the vectors on the right-hand side, we get
(0, 0, 0) = (c1 + 2c2, −2c1 + 3c2 − c3, c1 + 4c2 − 2c3)
Since vectors are equal if and only if their corresponding entries are equal, we get three equations in three unknowns:
c1 + 2c2 = 0
−2c1 + 3c2 − c3 = 0
c1 + 4c2 − 2c3 = 0
The first equation implies that c1 = −2c2. Substituting this into the second equation we get 7c2 − c3 = 0. Thus, c3 = 7c2. Substituting these both into the third equation gives −12c2 = 0. Therefore, the only solution is c1 = c2 = c3 = 0. Hence, the set is linearly independent.
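The same conclusion can be reached numerically; a minimal sketch, assuming NumPy is available, checks that the matrix with these three vectors as columns has full rank:

```python
import numpy as np

# Columns are the vectors from part (h): (1, -2, 1), (2, 3, 4), (0, -1, -2).
A = np.array([[1, 2, 0],
              [-2, 3, -1],
              [1, 4, -2]])

# Full column rank (3) means c1 = c2 = c3 = 0 is the only solution
# to A @ c = 0, i.e. the set is linearly independent.
rank = np.linalg.matrix_rank(A)
print(rank)  # 3
```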
(i) Observe that
(−2)(1, 1, 2, 1) + (2, 2, 4, 2) + 0(1, 0, 1, 0) + 0(2, 1, 3, 1) = (0, 0, 0, 0)
Hence, the set is linearly dependent. Solving for the second vector gives
(2, 2, 4, 2) = 2(1, 1, 2, 1) + 0(1, 0, 1, 0) + 0(2, 1, 3, 1)
(j) Since neither vector is a scalar multiple of the other, the set is linearly independent.
1.1.4 (a) Clearly (1, 0) ∉ Span{(3, 2)}. Thus, {(3, 2)} does not span R2 and so is not a basis for R2.
(b) Since neither vector is a scalar multiple of the other, the set is linearly independent. Let ~x = (x1, x2) ∈ R2 and consider
(x1, x2) = c1(2, 3) + c2(1, 0) = (2c1 + c2, 3c1)
Comparing entries we get
x1 = 2c1 + c2
x2 = 3c1
Thus, c1 = x2/3 and c2 = x1 − 2x2/3. Since there is a solution for all ~x ∈ R2, we have that {(2, 3), (1, 0)} spans R2.
Since the set spans R2 and is linearly independent, it is a basis for R2.
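The coordinate formulas above can be spot-checked numerically; a small sketch assuming NumPy (the test vector is arbitrary):

```python
import numpy as np

# Basis vectors from (b) as the columns of a matrix.
B = np.array([[2.0, 1.0],
              [3.0, 0.0]])
x = np.array([5.0, 6.0])  # an arbitrary test vector

# Solve B @ c = x for the coordinates (c1, c2).
c = np.linalg.solve(B, x)

# Confirm against the closed-form solution: c1 = x2/3, c2 = x1 - 2*x2/3.
assert np.allclose(c, [x[1] / 3, x[0] - 2 * x[1] / 3])
```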
(c) Since neither vector is a scalar multiple of the other, the set is linearly independent. Let ~x = (x1, x2) ∈ R2 and consider
(x1, x2) = c1(−1, 1) + c2(1, 3) = (−c1 + c2, c1 + 3c2)
Comparing entries we get
x1 = −c1 + c2
x2 = c1 + 3c2
Solving, we get c2 = (x1 + x2)/4 and c1 = (−3x1 + x2)/4. Since there is a solution for all ~x ∈ R2, we have that {(−1, 1), (1, 3)} spans R2.
Since the set spans R2 and is linearly independent, it is a basis for R2.
(d) Observe from our work in (c) that
(1/2)(−1, 1) + (1/2)(1, 3) = (0, 2)
Hence, the set is linearly dependent and so is not a basis for R2.
(e) Since neither vector is a scalar multiple of the other, the set {(1, 0), (−3, 5)} is linearly independent. Comparing entries in (x1, x2) = c1(1, 0) + c2(−3, 5) gives
x1 = c1 − 3c2
x2 = 5c2
Solving, we get c2 = x2/5 and c1 = x1 + 3x2/5. Since there is a solution for all ~x ∈ R2, we have that {(1, 0), (−3, 5)} spans R2.
Since the set spans R2 and is linearly independent, it is a basis for R2.
(f) The set contains the zero vector and so is linearly dependent by Theorem 1.1.4. Thus, the set
cannot be a basis.
1.1.5 (a) The set contains the zero vector and so is linearly dependent by Theorem 1.1.4. Thus, the set
cannot be a basis.
(b) Let ~x = (x1, x2, x3) be any vector in R3. Consider the equation
(x1, x2, x3) = c1(−1, 2, −1) + c2(1, 1, 2) = (−c1 + c2, 2c1 + c2, −c1 + 2c2)
Comparing entries gives
−c1 + c2 = x1
2c1 + c2 = x2
−c1 + 2c2 = x3
Subtracting the first equation from the third equation gives c2 = x3 − x1. Substituting this into the first equation we get c1 = −2x1 + x3. Substituting both of these into the second equation gives
−5x1 + 3x3 = x2
Thus, ~x is in the span of {(−1, 2, −1), (1, 1, 2)} if and only if −5x1 + 3x3 = x2. Thus, the vector (1, 0, 0) is not in the span, so {(−1, 2, −1), (1, 1, 2)} does not span R3 and hence is not a basis.
(c) Clearly (1, 0, 0) ∉ Span{(1, 0, 1)}. Thus, the set does not span R3 and hence is not a basis.
(d) Let ~x = (x1, x2, x3) be any vector in R3. Consider the equation
(x1, x2, x3) = c1(1, 0, 1) + c2(0, 1, 1) + c3(1, 1, 0) = (c1 + c3, c2 + c3, c1 + c2)   (1.1)
Comparing entries gives
c1 + c3 = x1
c2 + c3 = x2
c1 + c2 = x3
Solving, we get
c1 = (1/2)x1 − (1/2)x2 + (1/2)x3
c2 = −(1/2)x1 + (1/2)x2 + (1/2)x3
c3 = (1/2)x1 + (1/2)x2 − (1/2)x3
Thus, ~x is in the span of {(1, 0, 1), (0, 1, 1), (1, 1, 0)}. Hence, it spans R3.
Moreover, if we let x1 = x2 = x3 = 0 in equation (1.1), we get that the only solution is c1 = c2 = c3 = 0, so the set is also linearly independent. Hence, it is a basis for R3.
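The closed-form coordinates can be verified numerically; a sketch assuming NumPy:

```python
import numpy as np

# Basis from (d) as columns.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
x = np.array([1.0, 2.0, 3.0])

c = np.linalg.solve(A, x)

# Closed-form coordinates derived above:
# c1 = (x1 - x2 + x3)/2, c2 = (-x1 + x2 + x3)/2, c3 = (x1 + x2 - x3)/2
expected = np.array([(1 - 2 + 3) / 2, (-1 + 2 + 3) / 2, (1 + 2 - 3) / 2])
assert np.allclose(c, expected)
```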
(e) Observe that
(1, 1, 1) − (1, 1, 0) − (1/3)(0, 0, 3) = (0, 0, 0)
so the set is linearly dependent and hence not a basis.
(f) Let ~x = (x1, x2, x3) be any vector in R3. Consider the equation
(x1, x2, x3) = c1(1, 1, −1) + c2(1, 2, −3) + c3(−1, −1, 2) = (c1 + c2 − c3, c1 + 2c2 − c3, −c1 − 3c2 + 2c3)   (1.2)
Comparing entries gives
c1 + c2 − c3 = x1
c1 + 2c2 − c3 = x2
−c1 − 3c2 + 2c3 = x3
Solving, we get
c1 = x1 + x2 + x3
c2 = −x1 + x2
c3 = −x1 + 2x2 + x3
Thus, ~x is in the span of {(1, 1, −1), (1, 2, −3), (−1, −1, 2)}. Hence, it spans R3.
Moreover, if we let x1 = x2 = x3 = 0 in equation (1.2), we get that the only solution is c1 = c2 = c3 = 0, so the set is also linearly independent. Hence, it is a basis for R3.
1.1.6 There are infinitely many correct answers. One simple choice is {(1, 0, 0), (0, 1, 0)}. Since the vectors in the set are standard basis vectors for R3, we know they are linearly independent and hence the set forms a basis for a hyperplane in R3.
1.1.7 Assume that {~v1 , ~v2 } is linearly independent. For a contradiction, assume without loss of generality that
~v1 is a scalar multiple of ~v2 . Then ~v1 = t~v2 and hence ~v1 − t~v2 = ~0. This contradicts the fact that {~v1 , ~v2 }
is linearly independent since the coefficient of ~v1 is non-zero.
On the other hand, assume that {~v1, ~v2} is linearly dependent. Then there exist c1, c2 ∈ R, not both zero, such that c1~v1 + c2~v2 = ~0. Without loss of generality assume that c1 ≠ 0. Then ~v1 = −(c2/c1)~v2 and hence ~v1 is a scalar multiple of ~v2.
1.1.8 If ~v1 ∈ Span{~v2, ~v3}, then there exist c1, c2 ∈ R such that ~v1 = c1~v2 + c2~v3. Thus,
~v1 − c1~v2 − c2~v3 = ~0
where the coefficient of ~v1 is non-zero, so {~v1, ~v2, ~v3} is linearly dependent.
1.1.9 Assume for a contradiction that {~v1, ~v2} is linearly dependent. Then there exist c1, c2 ∈ R with c1, c2 not both zero such that c1~v1 + c2~v2 = ~0. Hence, we have
c1~v1 + c2~v2 + 0~v3 = ~0
with not all coefficients equal to zero, which contradicts the fact that {~v1, ~v2, ~v3} is linearly independent.
1.1.10 To prove this, we will prove that each set is a subset of the other.
Let ~x ∈ Span{~v1, ~v2}. Then there exist c1, c2 ∈ R such that ~x = c1~v1 + c2~v2. Since t ≠ 0 we get
~x = c1~v1 + (c2/t)(t~v2)
so ~x ∈ Span{~v1, t~v2} and hence Span{~v1, ~v2} ⊆ Span{~v1, t~v2}. Similarly, any ~y = d1~v1 + d2(t~v2) ∈ Span{~v1, t~v2} can be written as ~y = d1~v1 + (d2 t)~v2, so we also have Span{~v1, t~v2} ⊆ Span{~v1, ~v2}. Therefore, Span{~v1, ~v2} = Span{~v1, t~v2}.
1.1.11 Assume that {~v1, ~v2} is linearly independent and ~v3 ∉ Span{~v1, ~v2}. Consider
c1~v1 + c2~v2 + c3~v3 = ~0
If c3 ≠ 0, then we have ~v3 = −(c1/c3)~v1 − (c2/c3)~v2, which contradicts the fact that ~v3 ∉ Span{~v1, ~v2}. Hence, c3 = 0.
Thus, we have c1~v1 + c2~v2 + c3~v3 = ~0 implies c1~v1 + c2~v2 = ~0. Since {~v1 , ~v2 } is linearly independent, the
only solution to this is c1 = c2 = 0. Therefore, we have shown the only solution to c1~v1 + c2~v2 + c3~v3 = ~0
is c1 = c2 = c3 = 0, so {~v1 , ~v2 , ~v3 } is linearly independent.
1.1.12 Let ~x = (x1, x2, x3) be any vector in R3. Consider the equation
(x1, x2, x3) = c1(1, 2, −1) + c2(0, 1, 2) = (c1, 2c1 + c2, −c1 + 2c2)
Comparing entries gives
c1 = x1
2c1 + c2 = x2
−c1 + 2c2 = x3
Substituting c1 = x1 into the second equation we get c2 = −2x1 + x2. Substituting both of these into the third equation gives
−5x1 + 2x2 = x3
Thus, ~x is in the span of {(1, 2, −1), (0, 1, 2)} if and only if −5x1 + 2x2 = x3. Thus, the vector (0, 0, 1) is not in the span, so {(1, 2, −1), (0, 1, 2)} does not span R3 and hence is not a basis.
Then, by Problem 1.1.11, the set {(1, 2, −1), (0, 1, 2), (0, 0, 1)} is linearly independent. Consider
(x1, x2, x3) = c1(1, 2, −1) + c2(0, 1, 2) + c3(0, 0, 1) = (c1, 2c1 + c2, −c1 + 2c2 + c3)
Thus, we have
c1 = x1
2c1 + c2 = x2
−c1 + 2c2 + c3 = x3
Solving gives c1 = x1, c2 = −2x1 + x2, and c3 = 5x1 − 2x2 + x3. Therefore, {(1, 2, −1), (0, 1, 2), (0, 0, 1)} also spans R3 and hence is a basis for R3.
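A quick numerical cross-check of this extension (a sketch assuming NumPy):

```python
import numpy as np

# The two given vectors with (0, 0, 1) appended, as columns.
A = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, 0.0],
              [-1.0, 2.0, 1.0]])

# A nonzero determinant confirms the three columns form a basis of R^3.
det = np.linalg.det(A)
assert abs(det) > 1e-12

# Spot-check the coordinate formulas c1 = x1, c2 = -2x1 + x2, c3 = 5x1 - 2x2 + x3.
x = np.array([1.0, 1.0, 1.0])
c = np.linalg.solve(A, x)
assert np.allclose(c, [1.0, -1.0, 4.0])
```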
1.1.13 We have
~x + 2~v = ~v + (−~x)
~x + 2~v + ~x = ~v + (−~x) + ~x add ~x on the right to both sides
~x + ~x + 2~v = ~v + ~0 by V3 and V5
1~x + 1~x + (1 + 1)~v = ~v by V4, V10, and normal addition in R
(1 + 1)~x + 1~v + 1~v = ~v by V8
2~x + ~v + ~v = ~v by V10, and normal addition in R
2~x + ~v + ~v + (−~v) = ~v + (−~v) by V5, add (−~v) on the right to both sides
2~x + ~v + ~0 = ~0 by V5
2~x + ~v = ~0 by V4
2~x + ~v + (−~v) = ~0 + (−~v) by V5, add (−~v) on the right to both sides
2~x + ~0 = ~0 + (−~v) by V5
2~x = (−~v) by V4
(1/2)(2~x) = (1/2)(−~v)    multiply both sides by 1/2
((1/2)2)~x = (1/2)(−~v)    by V7
1~x = (1/2)(−~v)    by normal multiplication in R
~x = (1/2)(−~v)    by V10
1.1.14 (a) The statement is true. If ~v1 ≠ ~0, then the only solution to c~v1 = ~0 is c = 0, so {~v1} is linearly independent. On the other hand, if ~v1 = ~0, then {~v1} is linearly dependent by Theorem 1.1.4.
(b) The statement is false. For example, Span{(1, 0, 0), (2, 0, 0)} is a line in R3.
(c) The statement is true. Let ~x ∈ R2 . Since {~v1 , ~v2 } spans R2 there exists c1 , c2 ∈ R such that
~x = c1~v1 + c2~v2 . Then we get
and
(x1 + y1 ) + (x2 + y2 ) = x1 + x2 + y1 + y2 = x3 + y3
So, ~x + ~y satisfies the condition of S4 , so ~x + ~y ∈ S4 .
Similarly, for any c ∈ R, c~x = (cx1, cx2, cx3) and
cx1 + cx2 = c(x1 + x2) = cx3
so c~x ∈ S4.
Thus, by the Subspace Test, S4 is a subspace of R3 .
To find a basis for S4 we need to find a linearly independent spanning set for S4. We first find a spanning set. Every vector ~x ∈ S4 satisfies x1 + x2 = x3 and so has the form
(x1, x2, x3) = (x1, x2, x1 + x2) = x1(1, 0, 1) + x2(0, 1, 1), x1, x2 ∈ R
Thus, B = {(1, 0, 1), (0, 1, 1)} spans S4. Moreover, neither vector is a scalar multiple of the other, so the set is linearly independent. Thus, B is a basis for S4 and so S4 is a plane in R3.
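A numerical sanity check of this basis (a sketch assuming NumPy): every combination of the two basis vectors should satisfy the defining condition x1 + x2 = x3.

```python
import numpy as np

b1 = np.array([1.0, 0.0, 1.0])
b2 = np.array([0.0, 1.0, 1.0])

rng = np.random.default_rng(0)
for _ in range(100):
    s, t = rng.standard_normal(2)
    x = s * b1 + t * b2
    # Every vector in Span B satisfies the defining condition of S4.
    assert np.isclose(x[0] + x[1], x[2])
```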
(e) By definition, S5 is a non-empty subset of R4.
Let ~x, ~y ∈ S5. Then ~x = (0, 0, 0, 0) and ~y = (0, 0, 0, 0). Hence
~x + ~y = (0, 0, 0, 0) + (0, 0, 0, 0) = (0, 0, 0, 0) ∈ S5
(a1 + a2 ) − (b1 + b2 )
(b 1 + b2 ) − (d1 + d2 )
~x + ~y =
(a1 + a2 ) + (b1 + b2 ) − 2(d1 + d2 )
(c1 + c2 ) − (d1 + d2 )
Now, consider
(0, 0, 0, 0) = c1(1, 0, 1, 0) + c2(−1, 1, 1, 0) + c3(0, 0, 0, 1) = (c1 − c2, c2, c1 + c2, c3)
Comparing entries gives c1 − c2 = 0, c2 = 0, c1 + c2 = 0, and c3 = 0. Thus, c1 = c2 = c3 = 0 is the only solution, so B = {(1, 0, 1, 0), (−1, 1, 1, 0), (0, 0, 0, 1)} is also linearly independent, and so is a basis for S8. Therefore, S8 is a hyperplane in R4.
1.2.2 By definition, a plane P in R3 has vector equation ~x = c1~v1 + c2~v2 + ~b, c1 , c2 ∈ R where {~v1 , ~v2 } is linearly
independent.
If P is a subspace of R3, then ~0 ∈ P and hence P passes through the origin. On the other hand, if P passes through the origin, then there exist d1, d2 ∈ R such that
~0 = d1~v1 + d2~v2 + ~b
Hence ~b = −d1~v1 − d2~v2, and every point of P has the form ~x = c1~v1 + c2~v2 + ~b = (c1 − d1)~v1 + (c2 − d2)~v2. Since c1 and c2 can be any real numbers, (c1 − d1) and (c2 − d2) can be any real numbers, so P = Span{~v1, ~v2}. Consequently, P is a subspace of R3 by Theorem 1.2.2.
1.2.3 Since S does not contain the zero vector it cannot be a subspace.
1.2.4 (a) Let ~x = (x1, x2, x3) be any vector in R3. Consider the equation
(x1, x2, x3) = c1(1, 3, 1) + c2(2, 0, 1) = (c1 + 2c2, 3c1, c1 + c2)
Comparing entries gives
c1 + 2c2 = x1
3c1 = x2
c1 + c2 = x3
Subtracting the third equation from the first equation gives c2 = x1 − x3. The second equation gives c1 = (1/3)x2. Substituting both into the third equation we get
(1/3)x2 + x1 = 2x3
Thus, ~x is in the span of {(1, 3, 1), (2, 0, 1)} if and only if (1/3)x2 + x1 = 2x3. Thus, the vector (1, 0, 0) is not in the span, so B does not span R3 and hence is not a basis for R3.
(b) From our work in (a), we have that B is a basis for the subspace
S = {~x ∈ R3 | (1/3)x2 + x1 = 2x3} = Span{(1, 3, 1), (2, 0, 1)}
1.2.5 (a) If d ≠ 0, then ~0 ∉ P since a(0) + b(0) + c(0) ≠ d. Hence, P cannot be a subspace.
(b) If P had a basis, then there would be a spanning set for P. But, this would contradict Theorem
1.2.2, since P is not a subspace.
(c) If a = b = c = 0, then P is the empty set. If a ≠ 0, then every vector ~x ∈ P has the form
(x1, x2, x3) = (d/a − (b/a)x2 − (c/a)x3, x2, x3) = (d/a, 0, 0) + x2(−b/a, 1, 0) + x3(−c/a, 0, 1), x2, x3 ∈ R
1.2.6 A set S is a subset of Rn if every element of S is in Rn . For S to be a subspace, it not only has to be a
subset, but it also must be closed under addition and scalar multiplication of vectors (by the Subspace
Test).
Section 1.3 Solutions
1.3.2 (a) Since the plane passes through (0, 0, 0) we get that a scalar equation is
(b) For two planes to be parallel, their normal vectors must be scalar multiples of each other. Hence, a normal vector for the required plane is ~n = (3, 2, −1). Thus, a scalar equation of the plane is
Since the plane passes through the origin, we get that a scalar equation is
2x1 − x2 + x3 = 0
(d) We have (3, 3, 3) × (2, 1, −1) = (−6, 9, −3). Any non-zero scalar multiple of this will be a normal vector for the plane. We pick ~n = (−2, 3, −1). Since the plane passes through the origin, we get that a scalar equation is
−2x1 + 3x2 − x3 = 0
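The cross-product computation can be reproduced numerically; a sketch assuming NumPy:

```python
import numpy as np

u = np.array([3, 3, 3])
v = np.array([2, 1, -1])

n = np.cross(u, v)
print(n)  # [-6  9 -3]

# Any nonzero scalar multiple is also a normal; (-2, 3, -1) is n / 3.
assert np.array_equal(n // 3, np.array([-2, 3, -1]))
# The normal is orthogonal to both original vectors.
assert n @ u == 0 and n @ v == 0
```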
Since the plane passes through the origin, we get that a scalar equation is
(b) We have
(1, 2, −1) · (3, −2, −1) = 0
(1, 2, −1) · (2, 1, 4) = 0
(3, −2, −1) · (2, 1, 4) = 0
(c) We have
(0, 0, 0) · (1, 0, 1) = 0
(0, 0, 0) · (0, 1, 0) = 0
(1, 0, 1) · (0, 1, 0) = 0
Similarly,
0 = ~v2 · ~0 = ~v2 · (c1~v1 + c2~v2) = c1(~v2 · ~v1) + c2(~v2 · ~v2) = 0 + c2‖~v2‖²
so c2 = 0. Thus, c1~v1 + c2~v2 = ~0 implies c1 = c2 = 0, so {~v1, ~v2} is linearly independent.
1.3.8 We have
~y · ~n = (c1~v + c2~w) · ~n = c1(~v · ~n) + c2(~w · ~n) = 0 + 0 = 0
1.3.10 By definition, the set S of all vectors orthogonal to ~x = (1, 1, 1) is a subset of R3. Moreover, since ~0 · ~x = 0 we have that ~0 ∈ S. Thus, S is non-empty.
Let ~y,~z ∈ S. Then ~x · ~y = 0 and ~x · ~z = 0. Thus, we have
~x · (~y + ~z) = ~x · ~y + ~x · ~z = 0 + 0 = 0
and
~x · (t~y) = t(~x · ~y) = 0
for any t ∈ R. Thus, ~y + ~z ∈ S and t~y ∈ S, so S is a subspace of R3 .
1.3.11 Let ~x ∈ Rn. Then,
~0 · ~x = (0, . . . , 0) · (x1, . . . , xn) = 0x1 + · · · + 0xn = 0
as required.
1.3.12 (a) The statement is false. If ~x = (1, 0, 0, 1), ~y = (0, 1, 1, 0), and ~z = (1, 1, −1, 0), then ~x · ~y = 0 and ~y · ~z = 0, but ~x · ~z = 1.
(b) The statement is false. If ~x = (0, 0) and ~y = (1, 0), then ~x · ~y = 0, but {~x, ~y} is linearly dependent since it contains the zero vector.
(c) The statement is true. We have
(s~x) · (t~y) = (st)(~x · ~y) = 0
Thus, {s~x, t~y} is an orthogonal set for any s, t ∈ R.
(d) The statement is true. We have
~z · ~v = ~z · (s~x + t~y) = s(~z · ~x) + t(~z · ~y) = 0 + 0 = 0
Hence, ~z is orthogonal to ~v = s~x + t~y for any s, t ∈ R.
(e) The statement is false. Take ~x = (1, 0, 0), ~y = (0, 1, 0), and ~z = (0, 1, 0). Then
~x × (~y × ~z) = (1, 0, 0) × (0, 0, 0) = (0, 0, 0)
But,
(~x × ~y) × ~z = (0, 0, 1) × (0, 1, 0) = (−1, 0, 0)
22 Section 1.3 Solutions
(f) The statement is true. If ~x · ~n = ~b · ~n is a scalar equation of the plane, then multiplying both sides by t ≠ 0 gives another scalar equation for the plane, ~x · (t~n) = ~b · (t~n). Thus, t~n is also a normal vector for the plane.
Section 1.4 Solutions
(b) We have
proj~v(~u) = ((~u · ~v)/‖~v‖²)~v = (9/10)(1, 3) = (9/10, 27/10)
perp~v(~u) = ~u − proj~v(~u) = (3, 2) − (9/10, 27/10) = (21/10, −7/10)
(c) We have
proj~v(~u) = ((~u · ~v)/‖~v‖²)~v = (0/13)(2, −3) = (0, 0)
perp~v(~u) = ~u − proj~v(~u) = (6, 4) − (0, 0) = (6, 4)
(d) We have
proj~v(~u) = ((~u · ~v)/‖~v‖²)~v = (9/13)(3, 2) = (27/13, 18/13)
perp~v(~u) = ~u − proj~v(~u) = (1, 3) − (27/13, 18/13) = (−14/13, 21/13)
(e) We have
proj~v(~u) = ((~u · ~v)/‖~v‖²)~v = (3/1)(1, 0, 0) = (3, 0, 0)
perp~v(~u) = ~u − proj~v(~u) = (3, 2, 4) − (3, 0, 0) = (0, 2, 4)
(f) We have
proj~v(~u) = ((~u · ~v)/‖~v‖²)~v = (9/3)(1, 1, 1) = (3, 3, 3)
perp~v(~u) = ~u − proj~v(~u) = (3, 2, 4) − (3, 3, 3) = (0, −1, 1)
(g) We have
proj~v(~u) = ((~u · ~v)/‖~v‖²)~v = (8/2)(1, 0, 0, 1) = (4, 0, 0, 4)
perp~v(~u) = ~u − proj~v(~u) = (2, 5, −6, 6) − (4, 0, 0, 4) = (−2, 5, −6, 2)
(h) We have
proj~v(~u) = ((~u · ~v)/‖~v‖²)~v = (8/34)(1, 4, 4, −1) = (4/17, 16/17, 16/17, −4/17)
perp~v(~u) = ~u − proj~v(~u) = (1, 1, 1, 1) − (4/17, 16/17, 16/17, −4/17) = (13/17, 1/17, 1/17, 21/17)
(i) We have
proj~v(~u) = ((~u · ~v)/‖~v‖²)~v = (4/6)(1, 2, 0, 1) = (2/3, 4/3, 0, 2/3)
perp~v(~u) = ~u − proj~v(~u) = (3, 1, 1, −1) − (2/3, 4/3, 0, 2/3) = (7/3, −1/3, 1, −5/3)
(j) We have
proj~v(~u) = ((~u · ~v)/‖~v‖²)~v = (−8/24)(−2, −4, 0, −2) = (2/3, 4/3, 0, 2/3)
perp~v(~u) = ~u − proj~v(~u) = (3, 1, 1, −1) − (2/3, 4/3, 0, 2/3) = (7/3, −1/3, 1, −5/3)
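All of parts (b)–(j) follow the same two formulas, so a small helper pair (a sketch assuming NumPy) reproduces any of them; part (f) is used as the test case:

```python
import numpy as np

def proj(u, v):
    """Projection of u onto v: (u . v / ||v||^2) v."""
    return (u @ v) / (v @ v) * v

def perp(u, v):
    """Component of u perpendicular to v."""
    return u - proj(u, v)

# Part (f): u = (3, 2, 4), v = (1, 1, 1).
u = np.array([3.0, 2.0, 4.0])
v = np.array([1.0, 1.0, 1.0])
assert np.allclose(proj(u, v), [3.0, 3.0, 3.0])
assert np.allclose(perp(u, v), [0.0, -1.0, 1.0])
# perp(u, v) is always orthogonal to v.
assert np.isclose(perp(u, v) @ v, 0.0)
```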
1.4.2 (a) A normal vector for the plane P is ~n = (3, −2, 3). Thus,
projP(~v) = perp~n(~v) = ~v − ((~v · ~n)/‖~n‖²)~n = (1, 0, −1) − (0/22)(3, −2, 3) = (1, 0, −1)
(b) A normal vector for the plane P is ~n = (1, 2, 2). Thus,
projP(~v) = perp~n(~v) = ~v − ((~v · ~n)/‖~n‖²)~n = (−1, 2, 1) − (5/9)(1, 2, 2) = (−14/9, 8/9, −1/9)
(c) Observe that (0, −1, 1) × (0, 3, 1) = (−4, 0, 0). Hence, a normal vector for the plane P is ~n = (1, 0, 0). Thus,
projP(~v) = perp~n(~v) = ~v − ((~v · ~n)/‖~n‖²)~n = (3, −4, 1) − (3/1)(1, 0, 0) = (0, −4, 1)
(d) A normal vector for the plane P is ~n = (1, −2, 1) × (−2, 1, −2) = (3, 0, −3). Hence,
projP(~v) = perp~n(~v) = ~v − ((~v · ~n)/‖~n‖²)~n = (1, 1, 2) + (3/18)(3, 0, −3) = (3/2, 1, 3/2)
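The identity projP(~v) = perp~n(~v) used above is easy to code; a sketch assuming NumPy, with part (d) as the test case:

```python
import numpy as np

def proj_plane(v, n):
    """Project v onto the plane through the origin with normal n."""
    return v - (v @ n) / (n @ n) * n

# Part (d): the normal comes from the cross product of the direction vectors.
n = np.cross([1.0, -2.0, 1.0], [-2.0, 1.0, -2.0])
assert np.allclose(n, [3.0, 0.0, -3.0])

v = np.array([1.0, 1.0, 2.0])
assert np.allclose(proj_plane(v, n), [1.5, 1.0, 1.5])
```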
1.4.3 A normal vector for the plane P is ~n = (1, 1, 1) × (1, 0, −1) = (−1, 2, −1).
(a) The projection of ~x = (1, 2, 3) onto P is
projP(~x) = perp~n(~x) = ~x − ((~x · ~n)/‖~n‖²)~n = (1, 2, 3) − (0/6)(−1, 2, −1) = (1, 2, 3)
(b) The projection of ~x = (2, 1, 0) onto P is
projP(~x) = perp~n(~x) = ~x − ((~x · ~n)/‖~n‖²)~n = (2, 1, 0) − (0/6)(−1, 2, −1) = (2, 1, 0)
(c) The projection of ~x = (1, −2, 1) onto P is
projP(~x) = perp~n(~x) = ~x − ((~x · ~n)/‖~n‖²)~n = (1, −2, 1) − (−6/6)(−1, 2, −1) = (0, 0, 0)
(d) The projection of ~x = (2, 3, 3) onto P is
projP(~x) = perp~n(~x) = ~x − ((~x · ~n)/‖~n‖²)~n = (2, 3, 3) − (1/6)(−1, 2, −1) = (13/6, 8/3, 19/6)
(a) We have
proj~v(~x + ~y) = (((~x + ~y) · ~v)/‖~v‖²)~v
= ((~x · ~v + ~y · ~v)/‖~v‖²)~v
= ((~x · ~v)/‖~v‖²)~v + ((~y · ~v)/‖~v‖²)~v
= proj~v(~x) + proj~v(~y)
(b) We have
proj~v(s~x) = (((s~x) · ~v)/‖~v‖²)~v = s((~x · ~v)/‖~v‖²)~v = s proj~v(~x)
(c) We have
proj~v(proj~v(~x)) = ((proj~v(~x) · ~v)/‖~v‖²)~v
= (((((~x · ~v)/‖~v‖²)~v) · ~v)/‖~v‖²)~v
= (((~x · ~v)/(~v · ~v))(~v · ~v)/‖~v‖²)~v
= ((~x · ~v)/‖~v‖²)~v
= proj~v(~x)
(c) The statement is false. If ~x is orthogonal to ~v, then proj~v (~x) = ~0 and hence {proj~v (~x), perp~v (~x)} is
linearly dependent.
(d) The statement is true. We have
proj~v(perp~v(~x)) = ((perp~v(~x) · ~v)/‖~v‖²)~v
= (((~x − ((~x · ~v)/(~v · ~v))~v) · ~v)/‖~v‖²)~v
= ((~x · ~v − ((~x · ~v)/(~v · ~v))(~v · ~v))/‖~v‖²)~v
= ((~x · ~v − ~x · ~v)/‖~v‖²)~v = ~0
and
perp~v(proj~v(~x)) = ((~x · ~v)/‖~v‖²)~v − ((proj~v(~x) · ~v)/‖~v‖²)~v
= ((~x · ~v)/‖~v‖²)~v − (((~x · ~v)/(~v · ~v))(~v · ~v)/‖~v‖²)~v
= ((~x · ~v)/‖~v‖²)~v − ((~x · ~v)/‖~v‖²)~v
= ~0 = proj~v(perp~v(~x))
1.4.6 Since P is a plane in R3, {~v1, ~v2} must be linearly independent. So, by Problem 1.1.11, we only need to prove that proj~n(~x) ∉ P.
For a contradiction, assume that proj~n(~x) ∈ P. Then, there exist c1, c2 ∈ R such that
((~x · ~n)/‖~n‖²)~n = c1~v1 + c2~v2
Then,
~x · ~n = ((~x · ~n)/‖~n‖²)(~n · ~n) = (((~x · ~n)/‖~n‖²)~n) · ~n = (c1~v1 + c2~v2) · ~n = 0
since ~n is the normal vector for P and ~v1, ~v2 ∈ P. Hence, ~x is orthogonal to ~n, which implies that ~x ∈ P, which is a contradiction. Therefore, proj~n(~x) ∉ P and the result follows from Problem 1.1.11.
Chapter 2 Solutions
x1 = s + 1, x2 = 2s, x3 = s + 1
Since the third equation is not satisfied, ~x is not a solution of the system.
Section 2.1 Solutions
(c) We have
2.1.4 (a) TRUE. Since we can write the equation as x1 + 3x2 + x3 = 0, it is linear.
(b) FALSE. Since it contains the square of a variable, it is not a linear equation.
(c) FALSE. The system x1 = 1, x2 = 1, and x3 = 1 has solution (1, 1, 1), but clearly (2, 2, 2) is not a solution.
(d) FALSE. The system x1 + x2 = 0 has more unknowns than equations, but it has a solution (−1, 1).
2.1.5 (a) Since the right-hand side of every equation is 0, the system consists of hyperplanes which all pass through the origin. Since the origin lies on all of the hyperplanes, it is a solution. Therefore, the system is consistent.
(b) The i-th equation of the system has the form
ai1 x1 + · · · + ain xn = 0
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 2.
iii. x3 is a free variable, so let x3 = t ∈ R. Then x1 = 3 and x2 + x3 = −1, so x2 = −1 − t.
Therefore, all solutions have the form
~x = (3, −1 − t, t) = (3, −1, 0) + t(0, −1, 1), t ∈ R
iv. The system is a pair of planes in R3 which intersect in a line passing through (3, −1, 0).
(b) i. The coefficient and augmented matrices are
[2 2 −3; 1 1 1] and [2 2 −3 | 3; 1 1 1 | 9]
ii. Row reducing the augmented matrix:
[2 2 −3 | 3; 1 1 1 | 9] R1 ↔ R2 ∼ [1 1 1 | 9; 2 2 −3 | 3] R2 − 2R1 ∼ [1 1 1 | 9; 0 0 −5 | −15] −(1/5)R2 ∼ [1 1 1 | 9; 0 0 1 | 3] R1 − R2 ∼ [1 1 0 | 6; 0 0 1 | 3]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 2.
iii. x2 is a free variable, so let x2 = t ∈ R. Then x1 + x2 = 6, so x1 = 6 − t, and x3 = 3. Therefore, all solutions have the form
~x = (6 − t, t, 3) = (6, 0, 3) + t(−1, 1, 0), t ∈ R
iv. The system is a pair of planes in R3 which intersect in a line passing through (6, 0, 3).
(c) i. The coefficient and augmented matrices are
[2 1 −1; 2 −1 −3; 3 2 −1] and [2 1 −1 | 4; 2 −1 −3 | −1; 3 2 −1 | 8]
ii. Row reducing the augmented matrix:
[2 1 −1 | 4; 2 −1 −3 | −1; 3 2 −1 | 8] R1 ↔ R3 ∼ [3 2 −1 | 8; 2 −1 −3 | −1; 2 1 −1 | 4] R1 − R3 ∼ [1 1 0 | 4; 2 −1 −3 | −1; 2 1 −1 | 4] R2 − 2R1, R3 − 2R1 ∼ [1 1 0 | 4; 0 −3 −3 | −9; 0 −1 −1 | −4] −(1/3)R2 ∼ [1 1 0 | 4; 0 1 1 | 3; 0 −1 −1 | −4] R1 − R2, R3 + R2 ∼ [1 0 −1 | 1; 0 1 1 | 3; 0 0 0 | −1] R1 + R3, R2 + 3R3 ∼ [1 0 −1 | 0; 0 1 1 | 0; 0 0 0 | −1] (−1)R3 ∼ [1 0 −1 | 0; 0 1 1 | 0; 0 0 0 | 1]
Hence, the rank of the coefficient matrix is 2, and the rank of the augmented matrix is 3.
iii. Since the rank of the augmented matrix is greater than the rank of the coefficient matrix, the system is inconsistent.
iv. The system is a set of three planes in R3 which have no common point of intersection.
(d) i. The coefficient and augmented matrices are
[1 2 0; 1 3 −1; −3 −8 2] and [1 2 0 | 3; 1 3 −1 | 4; −3 −8 2 | −11]
ii. Row reducing the augmented matrix:
[1 2 0 | 3; 1 3 −1 | 4; −3 −8 2 | −11] R2 − R1, R3 + 3R1 ∼ [1 2 0 | 3; 0 1 −1 | 1; 0 −2 2 | −2] R1 − 2R2, R3 + 2R2 ∼ [1 0 2 | 1; 0 1 −1 | 1; 0 0 0 | 0]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 2.
iii. x3 is a free variable, so let x3 = t ∈ R. Then x1 + 2x3 = 1 and x2 − x3 = 1, so x1 = 1 − 2t, and x2 = 1 + t. Therefore, all solutions have the form
~x = (1 − 2t, 1 + t, t) = (1, 1, 0) + t(−2, 1, 1), t ∈ R
iv. The system is three planes in R3 which intersect in a line passing through (1, 1, 0).
(e) i. The coefficient and augmented matrices are
[1 3 −1; 2 1 −2] and [1 3 −1 | 9; 2 1 −2 | −2]
ii. Row reducing the augmented matrix:
[1 3 −1 | 9; 2 1 −2 | −2] R2 − 2R1 ∼ [1 3 −1 | 9; 0 −5 0 | −20] −(1/5)R2 ∼ [1 3 −1 | 9; 0 1 0 | 4] R1 − 3R2 ∼ [1 0 −1 | −3; 0 1 0 | 4]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 2.
iv. The system is a pair of planes in R3 which intersect in a line passing through (−3, 4, 0).
(f) i. The coefficient and augmented matrices are
[2 −2 2 1; 3 −3 3 4] and [2 −2 2 1 | −4; 3 −3 3 4 | 9]
ii. Row reducing the augmented matrix:
[2 −2 2 1 | −4; 3 −3 3 4 | 9] (1/2)R1 ∼ [1 −1 1 1/2 | −2; 3 −3 3 4 | 9] R2 − 3R1 ∼ [1 −1 1 1/2 | −2; 0 0 0 5/2 | 15] (2/5)R2 ∼ [1 −1 1 1/2 | −2; 0 0 0 1 | 6] R1 − (1/2)R2 ∼ [1 −1 1 0 | −5; 0 0 0 1 | 6]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 2.
iii. x2 and x3 are free variables, so let x2 = s ∈ R and x3 = t ∈ R. Then x4 = 6, and x1 − x2 + x3 = −5, so x1 = −5 + s − t. Therefore, all solutions have the form
~x = (−5 + s − t, s, t, 6) = (−5, 0, 0, 6) + s(1, 1, 0, 0) + t(−1, 0, 1, 0), s, t ∈ R
iv. The system is a pair of hyperplanes in R4 which intersect in a plane passing through (−5, 0, 0, 6).
(g) i. The coefficient and augmented matrices are
[1 1 1 6; 0 2 −4 4; 2 0 4 5; −1 2 −3 4] and [1 1 1 6 | 5; 0 2 −4 4 | −4; 2 0 4 5 | 4; −1 2 −3 4 | 9]
ii. Row reducing the augmented matrix:
[1 1 1 6 | 5; 0 2 −4 4 | −4; 2 0 4 5 | 4; −1 2 −3 4 | 9] R3 − 2R1, R4 + R1 ∼ [1 1 1 6 | 5; 0 2 −4 4 | −4; 0 −2 2 −7 | −6; 0 3 −2 10 | 14] (1/2)R2 ∼ [1 1 1 6 | 5; 0 1 −2 2 | −2; 0 −2 2 −7 | −6; 0 3 −2 10 | 14] R1 − R2, R3 + 2R2, R4 − 3R2 ∼ [1 0 3 4 | 7; 0 1 −2 2 | −2; 0 0 −2 −3 | −10; 0 0 4 4 | 20] (−1)R3, (1/4)R4 ∼ [1 0 3 4 | 7; 0 1 −2 2 | −2; 0 0 2 3 | 10; 0 0 1 1 | 5] R3 ↔ R4 ∼ [1 0 3 4 | 7; 0 1 −2 2 | −2; 0 0 1 1 | 5; 0 0 2 3 | 10] R1 − 3R3, R2 + 2R3, R4 − 2R3 ∼ [1 0 0 1 | −8; 0 1 0 4 | 8; 0 0 1 1 | 5; 0 0 0 1 | 0] R1 − R4, R2 − 4R4, R3 − R4 ∼ [1 0 0 0 | −8; 0 1 0 0 | 8; 0 0 1 0 | 5; 0 0 0 1 | 0]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 4.
iii. The solution is x1 = −8, x2 = 8, x3 = 5, x4 = 0.
iv. The system is four hyperplanes in R4 which intersect only at the point (−8, 8, 5, 0).
(h) i. The coefficient and augmented matrices are
[1 2 1 2; 2 1 2 1; 0 2 1 1; 1 1 2 0] and [1 2 1 2 | −2; 2 1 2 1 | 2; 0 2 1 1 | 2; 1 1 2 0 | 6]
ii. Row reducing the augmented matrix:
[1 2 1 2 | −2; 2 1 2 1 | 2; 0 2 1 1 | 2; 1 1 2 0 | 6] R2 − 2R1, R4 − R1 ∼ [1 2 1 2 | −2; 0 −3 0 −3 | 6; 0 2 1 1 | 2; 0 −1 1 −2 | 8] −(1/3)R2 ∼ [1 2 1 2 | −2; 0 1 0 1 | −2; 0 2 1 1 | 2; 0 −1 1 −2 | 8] R1 − 2R2, R3 − 2R2, R4 + R2 ∼ [1 0 1 0 | 2; 0 1 0 1 | −2; 0 0 1 −1 | 6; 0 0 1 −1 | 6] R1 − R3, R4 − R3 ∼ [1 0 0 1 | −4; 0 1 0 1 | −2; 0 0 1 −1 | 6; 0 0 0 0 | 0]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 3.
iii. x4 is a free variable, so let x4 = t ∈ R. Then x1 + x4 = −4, x2 + x4 = −2, and x3 − x4 = 6. Thus, x1 = −4 − t, x2 = −2 − t, and x3 = 6 + t. Therefore, all solutions have the form
~x = (−4 − t, −2 − t, 6 + t, t) = (−4, −2, 6, 0) + t(−1, −1, 1, 1), t ∈ R
iv. The system is four hyperplanes in R4 which intersect in a line passing through (−4, −2, 6, 0).
(i) i. The coefficient and augmented matrices are
[0 1 −2; 2 2 3; 1 2 4; 2 1 −1] and [0 1 −2 | 3; 2 2 3 | 1; 1 2 4 | −1; 2 1 −1 | 1]
ii. Row reducing the augmented matrix:
[0 1 −2 | 3; 2 2 3 | 1; 1 2 4 | −1; 2 1 −1 | 1] R1 ↔ R3 ∼ [1 2 4 | −1; 2 2 3 | 1; 0 1 −2 | 3; 2 1 −1 | 1] R2 − 2R1, R4 − 2R1 ∼ [1 2 4 | −1; 0 −2 −5 | 3; 0 1 −2 | 3; 0 −3 −9 | 3] R2 ↔ R3 ∼ [1 2 4 | −1; 0 1 −2 | 3; 0 −2 −5 | 3; 0 −3 −9 | 3] R1 − 2R2, R3 + 2R2, R4 + 3R2 ∼ [1 0 8 | −7; 0 1 −2 | 3; 0 0 −9 | 9; 0 0 −15 | 12] −(1/9)R3 ∼ [1 0 8 | −7; 0 1 −2 | 3; 0 0 1 | −1; 0 0 −15 | 12] R1 − 8R3, R2 + 2R3, R4 + 15R3 ∼ [1 0 0 | 1; 0 1 0 | 1; 0 0 1 | −1; 0 0 0 | −3] −(1/3)R4 ∼ [1 0 0 | 1; 0 1 0 | 1; 0 0 1 | −1; 0 0 0 | 1] R1 − R4, R2 − R4, R3 + R4 ∼ [1 0 0 | 0; 0 1 0 | 0; 0 0 1 | 0; 0 0 0 | 1]
Hence, the rank of the coefficient matrix is 3, and the rank of the augmented matrix is 4.
iii. Since the rank of the augmented matrix is greater than the rank of the coefficient matrix, the system is inconsistent.
iv. The system is a set of four planes in R3 which have no common point of intersection.
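The rank comparison used in parts (a)–(i) can be automated; a sketch assuming NumPy, applied to the inconsistent system from (i):

```python
import numpy as np

# Coefficient matrix and right-hand side from part (i).
A = np.array([[0, 1, -2],
              [2, 2, 3],
              [1, 2, 4],
              [2, 1, -1]])
b = np.array([3, 1, -1, 1])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab)  # 3 4

# rank [A | b] > rank A exactly when the system is inconsistent.
assert rank_Ab > rank_A
```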
2.2.2 (a) We need to determine if there exist c1, c2 ∈ R such that
(1, 3, 1) = c1(1, 2, −1) + c2(3, −4, 2) = (c1 + 3c2, 2c1 − 4c2, −c1 + 2c2)
Hence, we need to solve the system of equations
c1 + 3c2 = 1
2c1 − 4c2 = 3
−c1 + 2c2 = 1
Row reducing the corresponding augmented matrix, the second row corresponds to the equation 0 = 5. Thus, the system is inconsistent, so ~x ∉ Span B.
Hence, c1 = 2/5 and c2 = 1/5, and so ~x ∈ Span B. We can verify this answer by checking that
(1, 0, 0) = (2/5)(1, 2, −1) + (1/5)(3, −4, 2)
(c) We need to determine if there exist c1, c2, c3 ∈ R such that
(1, −3, −7, −9) = c1(3, 1, 2, 3) + c2(2, 1, 3, −4) + c3(−1, 0, −4, 3) = (3c1 + 2c2 − c3, c1 + c2, 2c1 + 3c2 − 4c3, 3c1 − 4c2 + 3c3)
Hence, we need to solve the system of equations
3c1 + 2c2 − c3 = 1
c1 + c2 = −3
2c1 + 3c2 − 4c3 = −7
3c1 − 4c2 + 3c3 = −9
We row reduce the corresponding augmented matrix:
[3 2 −1 | 1; 1 1 0 | −3; 2 3 −4 | −7; 3 −4 3 | −9] R1 ↔ R2 ∼ [1 1 0 | −3; 3 2 −1 | 1; 2 3 −4 | −7; 3 −4 3 | −9] R2 − 3R1, R3 − 2R1, R4 − 3R1 ∼ [1 1 0 | −3; 0 −1 −1 | 10; 0 1 −4 | −1; 0 −7 3 | 0] R1 + R2, R3 + R2, R4 − 7R2 ∼ [1 0 −1 | 7; 0 −1 −1 | 10; 0 0 −5 | 9; 0 0 10 | −70] R4 + 2R3 ∼ [1 0 −1 | 7; 0 −1 −1 | 10; 0 0 −5 | 9; 0 0 0 | −52]
The fourth row corresponds to the equation 0 = −52. Thus, the system is inconsistent, so ~x ∉ Span B.
The third row corresponds to the equation 0 = −55/3. Thus, the system is inconsistent, so ~x ∉ Span B.
2.2.3 (a) Consider
(0, 0, 0) = c1(1, 2, −1) + c2(1, 3, 1) + c3(1, −1, 4) + c4(2, 1, 1)
Observe that this will give 3 equations in 4 unknowns. Hence, the rank of the matrix is at most 3,
so by Theorem 2.2.5, the system must have at least 4 − 3 = 1 free variable. Therefore, the set is
linearly dependent.
(b) Consider
(0, 0, 0) = c1(1, 0, −1) + c2(2, 1, −2) + c3(3, 3, −1) = (c1 + 2c2 + 3c3, c2 + 3c3, −c1 − 2c2 − c3)
This corresponds to the homogeneous system
c1 + 2c2 + 3c3 = 0
c2 + 3c3 = 0
−c1 − 2c2 − c3 = 0
To find the solution space of the system, we row reduce the corresponding coefficient matrix:
[1 2 3; 0 1 3; −1 −2 −1] R3 + R1 ∼ [1 2 3; 0 1 3; 0 0 2] R1 − 2R2, (1/2)R3 ∼ [1 0 −3; 0 1 3; 0 0 1] R1 + 3R3, R2 − 3R3 ∼ [1 0 0; 0 1 0; 0 0 1]
Hence, the only solution to the system is c1 = c2 = c3 = 0. Therefore, the set is linearly indepen-
dent.
(c) Consider
(0, 0, 0) = c1(1, 2, 3) + c2(4, 5, 6) + c3(7, 8, 9) = (c1 + 4c2 + 7c3, 2c1 + 5c2 + 8c3, 3c1 + 6c2 + 9c3)
This corresponds to the homogeneous system
c1 + 4c2 + 7c3 = 0
2c1 + 5c2 + 8c3 = 0
3c1 + 6c2 + 9c3 = 0
To find the solution space of the system, we row reduce the corresponding coefficient matrix:
[1 4 7; 2 5 8; 3 6 9] R2 − 2R1, R3 − 3R1 ∼ [1 4 7; 0 −3 −6; 0 −6 −12] R3 − 2R2 ∼ [1 4 7; 0 −3 −6; 0 0 0]
Hence, we see that there will be a free variable. Consequently, the homogeneous system has infinitely many solutions, and so the set is linearly dependent.
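The rank argument can be confirmed numerically; a sketch assuming NumPy, for the set in (c):

```python
import numpy as np

# Vectors from (c) as columns.
A = np.array([[1, 4, 7],
              [2, 5, 8],
              [3, 6, 9]])

# Rank 2 with 3 columns forces a free variable, so the set is linearly dependent.
rank = np.linalg.matrix_rank(A)
print(rank)  # 2

# An explicit dependency: v1 - 2*v2 + v3 = 0.
assert np.array_equal(A @ np.array([1, -2, 1]), np.zeros(3, dtype=int))
```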
(d) Consider
(0, 0, 0) = c1(5, −3, 4) + c2(−6, 3, −4) + c3(6, −6, 8) = (5c1 − 6c2 + 6c3, −3c1 + 3c2 − 6c3, 4c1 − 4c2 + 8c3)
This corresponds to the homogeneous system
5c1 − 6c2 + 6c3 = 0
−3c1 + 3c2 − 6c3 = 0
4c1 − 4c2 + 8c3 = 0
From our work in Problem 2.2.2 (d), we know the rank of the coefficient matrix is 2. Thus, there
is one free variable and so the homogeneous system has infinitely many solutions. Thus, the set is
linearly dependent.
(e) Consider
(0, 0, 0) = c1(3, 1, 2) + c2(2, 1, 3) + c3(1, 3, 2) = (3c1 + 2c2 + c3, c1 + c2 + 3c3, 2c1 + 3c2 + 2c3)
This corresponds to the homogeneous system
3c1 + 2c2 + c3 = 0
c1 + c2 + 3c3 = 0
2c1 + 3c2 + 2c3 = 0
Thus, we see the rank of the coefficient matrix is 3 and hence the solution is unique. Consequently,
the set is linearly independent.
(f) Consider
(0, 0, 0, 0) = c1(3, 1, 3, 5) + c2(6, 4, −3, 4) + c3(0, 2, −1, 8) = (3c1 + 6c2, c1 + 4c2 + 2c3, 3c1 − 3c2 − c3, 5c1 + 4c2 + 8c3)
This corresponds to the homogeneous system
3c1 + 6c2 = 0
c1 + 4c2 + 2c3 = 0
3c1 − 3c2 − c3 = 0
5c1 + 4c2 + 8c3 = 0
Thus, we see the rank of the coefficient matrix is 3 and hence the solution is unique. Consequently,
the set is linearly independent.
c1 + 2c2 + 4c3 = 0
3c1 − 5c2 + c3 = 0
−2c1 + 3c2 = 0
Hence, the rank of the coefficient matrix is 3. Therefore, the only solution to the homogeneous system is c1 = c2 = c3 = 0, so the set is linearly independent. Moreover, since the rank equals the number of rows, by Theorem 2.2.5 (3), the system [A | ~b] is consistent for every ~b ∈ R3, so the set also spans R3. Therefore, it is a basis for R3.
(d) Consider
(0, 0, 0) = c1(1, 1, 1) + c2(2, 1, −1) + c3(1, 0, −3) = (c1 + 2c2 + c3, c1 + c2, c1 − c2 − 3c3)
c1 + 2c2 + c3 = 0
c1 + c2 = 0
c1 − c2 − 3c3 = 0
Hence, the rank of the coefficient matrix is 3. Therefore, the only solution to the homogeneous system is c1 = c2 = c3 = 0, so the set is linearly independent. Moreover, since the rank equals the number of rows, by Theorem 2.2.5 (3), the system [A | ~b] is consistent for every ~b ∈ R3, so the set also spans R3. Therefore, it is a basis for R3.
(e) Consider
(0, 0, 0) = c1(1, 0, −4) + c2(3, 3, −5) + c3(3, 6, 2) = (c1 + 3c2 + 3c3, 3c2 + 6c3, −4c1 − 5c2 + 2c3)
This gives the homogeneous system
c1 + 3c2 + 3c3 = 0
3c2 + 6c3 = 0
−4c1 − 5c2 + 2c3 = 0
Hence, the rank of the coefficient matrix is 2. Therefore, the homogeneous system has infinitely many solutions and so the set is linearly dependent. Consequently, it is not a basis.
(f) Consider
(0, 0, 0) = c1(3, 2, −3) + c2(4, 4, 0) + c3(2, 1, −2) = (3c1 + 4c2 + 2c3, 2c1 + 4c2 + c3, −3c1 − 2c3)
Hence, the rank of the coefficient matrix is 3. Therefore, the only solution to the homogeneous system is c1 = c2 = c3 = 0, so the set is linearly independent. Moreover, since the rank equals the number of rows, by Theorem 2.2.5 (3), the system [A | ~b] is consistent for every ~b ∈ R3, so the set also spans R3. Therefore, it is a basis for R3.
2.2.5 Let {~v1 , . . . , ~vn } be a set of n vectors in Rn and let A = [~v1 · · · ~vn].
On the other hand, if ad − bc ≠ 0, then at least one of a and c is non-zero. Hence, we can row reduce as in the first case to find that the rank of the coefficient matrix is 2. Then, Theorem 2.2.5 implies that B is a basis for R2.
2.2.7 Since there is no point in common between the three planes, we know that rank A < rank[A | ~b]. But the
planes are not parallel, so rank A > 1, and there are only three rows of [A | ~b], so rank[A | ~b] ≤ 3. It
follows that rank A = 2.
2.2.8 The intersection of two planes in R3 is represented by a system of 2 linear equations in 3 unknowns.
This will correspond to a system whose coefficient matrix has 2 rows and 3 columns. Hence, the rank
of the coefficient matrix of the system is either 1 or 2. Since we are given that the system is consistent,
Theorem 2.2.5(2) tells us that the solution set has either 1 or 2 free variables. Theorem 2.2.6 tells us that
the vectors corresponding to the free variables must be linearly independent. Thus, the solution set is
either a line or a plane in R3 .
2.2.9 (a) The statement is false. The solution set of x1 + x2 = 1 is not a subspace of R2 , since it does not
contain the zero vector.
(b) The statement is false. The system x1 + x2 + x3 + x4 + x5 = 1, x1 + x2 + x3 + x4 + x5 = 2,
x1 + x2 + x3 + x4 + x5 = 3, does not have infinitely many solutions. It is not consistent.
(c) The statement is false. The system x1 + x2 + x3 = 1, 2x1 + 2x2 + 2x3 = 2, 3x1 + 3x2 + 3x3 = 3,
4x1 + 4x2 + 4x3 = 4, 5x1 + 5x2 + 5x3 = 5 has infinitely many solutions.
(d) The statement is true. We know that a homogeneous system is always consistent, and the maximum rank is the minimum of the number of equations and the number of variables. Hence, the rank of the coefficient matrix is at most 3. Therefore, there are at least 5 − 3 = 2 free variables in the system. Thus, there must be infinitely many solutions.
(e) The statement is false. The system x1 + x2 + x3 + x4 = 1, x1 + x2 + x3 + x4 = 2, x2 + x3 + x4 = 0,
has rank 2, but it is inconsistent, so the solution set is not a plane.
(f) The statement is true. Since the rank equals the number of equations, the system must be consistent
by Theorem 2.2.5. Then, by Theorem 2.2.6, a vector equation for the solution set must have the
form
~x = ~c + t1~v1 + t2~v2 , t1 , t2 ∈ R
where {~v1 , ~v2 } is a linearly independent set in R5 . Therefore, the solution set is a plane in R5 .
(g) The statement is true. If the solution space is a line, then it must have the form ~x = t~v1, t ∈ R, where ~v1 ≠ ~0. Therefore, the system must have had 1 free variable. Hence, by Theorem 2.2.5, the rank of the coefficient matrix equals the number of variables minus the number of free variables. Hence, the rank is 2 − 1 = 1.
(h) The statement is false. Consider the system x1 + x2 + x3 = 1, 2x1 + 2x2 + 2x3 = 2. The system has m = 2 equations and n = 3 variables, but the rank of the corresponding coefficient matrix A is 1 ≠ m.
(i) If [A | ~b] is consistent for every ~b ∈ Rm, then rank A = m by Theorem 2.2.5(3). Since the solution is unique, we have that rank A = n by Theorem 2.2.5(2). Thus, m = rank A = n.
2.2.10 Assume that B is linearly independent. Then, the only solution to the system [[~v1 · · · ~vk] | ~0] is the trivial solution. Hence, by Theorem 2.2.5, the rank of the coefficient matrix must equal the number of columns, which is k.
On the other hand, if the rank of [~v1 · · · ~vk] is k, then [[~v1 · · · ~vk] | ~0] has a unique solution by Theorem 2.2.5. Thus, B is linearly independent.
2.2.11 (a) We have that {~v1 , . . . , ~vk } spans Rn if and only if the equation c1~v1 + · · · + ck~vk = ~b is consistent for every ~b ∈ Rn. This equation is consistent for every ~b ∈ Rn if and only if the corresponding system of n equations in k unknowns [A | ~b] is consistent for every ~b ∈ Rn. Finally, by Theorem 2.2.5(3), [A | ~b] is consistent for every ~b ∈ Rn if and only if rank A = n.
(b) If k < n, then the coefficient matrix A of the system c1~v1 + · · · + ck~vk = ~b must have rank A < n
since the rank is the number of leading ones and there cannot be more leading ones than rows. But
then, we have Span{~v1 , . . . , ~vk } = Rn and rank A < n which contradicts part (a). Hence, k ≥ n.
2.2.12 (a) Since ~ui ∈ S and Span{~v1 , . . . , ~vℓ } = S, we can write each ~ui as a linear combination of ~v1 , . . . , ~vℓ .
Say, ~ui = ai1~v1 + · · · + aiℓ~vℓ for some ai j ∈ R, 1 ≤ i ≤ k.
Consider
c1~u1 + · · · + ck~uk = ~0
Substituting, we get
~0 = c1 (a11~v1 + · · · + a1ℓ~vℓ ) + · · · + ck (ak1~v1 + · · · + akℓ~vℓ )
0~v1 + · · · + 0~vℓ = (c1 a11 + · · · + ck ak1 )~v1 + · · · + (c1 a1ℓ + · · · + ck akℓ )~vℓ
a11 c1 + · · · + ak1 ck = 0
..
.
a1ℓ c1 + · · · + akℓ ck = 0
This homogeneous system of ℓ equations in k unknowns (c1 , . . . , ck ) must have a unique solution as
otherwise, we would contradict the fact that {~u1 , . . . , ~uk } is linearly independent. By Theorem 2.2.5
(2), the rank of the matrix must equal the number of variables k. Therefore, ℓ ≥ k as otherwise there would not be enough rows to contain the k leading ones in the RREF of the coefficient matrix.
(b) If {~v1 , . . . , ~vℓ } is a basis, then Span{~v1 , . . . , ~vℓ } = S. If {~u1 , . . . , ~uk } is a basis, then it is linearly
independent. Hence, by part (a), k ≤ ℓ. Similarly, we also have that {~u1 , . . . , ~uk } spans S and
{~v1 , . . . , ~vℓ } is linearly independent, so ℓ ≤ k. Therefore, k = ℓ.
2.2.13 The i-th equation of the system [A | ~b] has the form
ai1 x1 + · · · + ain xn = bi
If ~y is a solution of [A | ~0], then we have
ai1 y1 + · · · + ain yn = 0
If ~z is a solution of [A | ~b], then we have
ai1 z1 + · · · + ain zn = bi
Hence,
ai1(z1 + cy1) + · · · + ain(zn + cyn) = ai1 z1 + · · · + ain zn + c(ai1 y1 + · · · + ain yn) = bi + c(0) = bi
so ~z + c~y is also a solution of [A | ~b].
f1 + f2 = 50
f2 + f3 + f5 = 60
f4 + f5 = 50
f1 − f3 + f4 = 40
(b) We row reduce the augmented matrix corresponding to the system in part (a):
[1 1 0 0 0 | 50]
[0 1 1 0 1 | 60]
[0 0 0 1 1 | 50]
[1 0 −1 1 0 | 40]
Applying R4 − R1, then R4 + R2, R1 − R2, and R4 − R3 gives
[1 0 −1 0 −1 | −10]
[0 1 1 0 1 | 60]
[0 0 0 1 1 | 50]
[0 0 0 0 0 | 0]
This corresponds to the system
f1 − f3 − f5 = −10
f2 + f3 + f5 = 60
f4 + f5 = 50
We see that f3 and f5 are free variables. Setting f3 = s and f5 = t, the general solution is
(f1, f2, f3, f4, f5) = (−10 + s + t, 60 − s − t, s, 50 − t, t), s, t ∈ R
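As a quick numerical check (an added illustration, not part of the original solution), the general solution can be substituted back into the original flow equations for arbitrary values of the free variables:

```python
import numpy as np

# Coefficient matrix and right-hand side of the network-flow system from
# part (a) (columns correspond to f1, ..., f5).
A = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1],
              [1, 0, -1, 1, 0]], dtype=float)
b = np.array([50, 60, 50, 40], dtype=float)

def flow(s, t):
    # General solution with free variables f3 = s, f5 = t.
    return np.array([-10 + s + t, 60 - s - t, s, 50 - t, t])

# Spot-check a few choices of the free variables.
for s, t in [(0, 0), (5, -3), (12.5, 7)]:
    assert np.allclose(A @ flow(s, t), b)
print("general solution verified")
```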
Section 3.1 Solutions 49
(g) The product doesn’t exist since the number of rows of the second matrix does not equal the number
of columns of the first matrix.
(h) [−1 2 0; 2 1 1][5 −3; 2 7; −1 −3] = [−1 17; 11 −2]
(i) [2; 3; 5]T[1; −1; 1] = [2 3 5][1; −1; 1] = 4
(j) [5 −3; 2 7; −1 −3][−1 2 0; 2 1 1] = [−11 7 −3; 12 11 7; −5 −5 −3]
(k) [2; 3; 5][1; −1; 1]T = [2; 3; 5][1 −1 1] = [2 −2 2; 3 −3 3; 5 −5 5]
(l) [2; 3; 5]T[x1; x2; x3] = [2 3 5][x1; x2; x3] = 2x1 + 3x2 + 5x3
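The products in (h)–(k) can be reproduced with numpy. This block is an added cross-check, not part of the original solution:

```python
import numpy as np

M = np.array([[-1, 2, 0],
              [2, 1, 1]])
N = np.array([[5, -3],
              [2, 7],
              [-1, -3]])
u = np.array([[2], [3], [5]])   # column vector
v = np.array([[1], [-1], [1]])  # column vector

print(M @ N)     # part (h): [[-1 17] [11 -2]]
print(u.T @ v)   # part (i): [[4]]
print(N @ M)     # part (j): the 3x3 product above
print(u @ v.T)   # part (k): rank-1 outer product
```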
(b) We have
tr(AT B) = sum_{i=1}^n (AT B)_ii = sum_{i=1}^n sum_{j=1}^n (AT)_ij (B)_ji = sum_{i=1}^n sum_{j=1}^n (A)_ji (B)_ji
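The identity says tr(AT B) is the entrywise ("Frobenius") inner product of A and B. A small random check (an added sketch, not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# tr(A^T B) equals the sum over all j, i of A_ji * B_ji,
# i.e. the entrywise product of A and B summed over all entries.
lhs = np.trace(A.T @ B)
rhs = np.sum(A * B)
print(np.isclose(lhs, rhs))  # True
```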
3.1.4 We take A = [3 2 −1; 2 −1 5], ~x = [x1; x2; x3], and ~b = [4; 5].
3c1 + 2c2 + c3 = 0
c1 + c2 + c3 = 0
3c1 + 2c2 + c3 = 0
c1 − c2 − 2c3 = 0
t1 + t2 + 2t3 = −1
2t1 + t2 + 3t3 = −4
−2t1 + t3 = 3
−t1 − t2 + t3 = −2
[1 1 2 | −1; 2 1 3 | −4; −2 0 1 | 3; −1 −1 1 | −2] ∼ [1 0 0 | −2; 0 1 0 | 3; 0 0 1 | −1; 0 0 0 | 0]
Hence, t1 = −2, t2 = 3, t3 = −1.
3.1.9 Let A = [~a1 · · · ~an]. We have
A In = A[~e1 · · · ~en] = [A~e1 · · · A~en] = [~a1 · · · ~an] = A
as required.
3.1.11 (a) Since A has a row of zeros, we can write A in block form as A = [A1; ~0T; A2]. Thus,
AB = [A1; ~0T; A2]B = [A1B; ~0T B; A2B] = [A1B; ~0T; A2B]
so AB also has a row of zeros.
(c) We have
Thus, [L] = A.
3.2.2 (a) Since B has 4 columns for B~x to be defined we must have ~x ∈ R4 . Thus, the domain of f is R4 .
Moreover, since B has 3 rows, we get that B~x ∈ R3 . Thus, the codomain of f is R3 .
(b) We have
f(2, −2, 3, 1) = [1 2 −3 0; 2 −1 0 3; 1 0 2 −1][2; −2; 3; 1] = [−11; 9; 7]
f(−3, 1, 4, 2) = [1 2 −3 0; 2 −1 0 3; 1 0 2 −1][−3; 1; 4; 2] = [−13; −1; 3]
Section 3.2 Solutions 53
(c) We have
f(~e1) = B~e1 = [1; 2; 1]
f(~e2) = B~e2 = [2; −1; 0]
f(~e3) = B~e3 = [−3; 0; 2]
f(~e4) = B~e4 = [0; 3; −1]
Thus, [ f ] = B.
3.2.3 (a) Let ~x, ~y ∈ R3 and s, t ∈ R. Then,
L(s~x + t~y) = L(sx1 + ty1 , sx2 + ty2 , sx3 + ty3 ) = ([sx1 + ty1 ] + [sx2 + ty2 ], 0)
= s(x1 + x2 , 0) + t(y1 + y2 , 0)
= sL(~x) + tL(~y)
Hence,
[L] = [L(1, 0, 0) L(0, 1, 0) L(0, 0, 1)] = [1 1 0; 0 0 0]
L(s~x + t~y) = L(sx1 + ty1 , sx2 + ty2 , sx3 + ty3 ) = ([sx1 + ty1 ] − [sx2 + ty2 ], [sx2 + ty2 ] + [sx3 + ty3 ])
= s(x1 − x2 , x2 + x3 ) + t(y1 − y2 , y2 + y3 )
= sL(~x) + tL(~y)
Hence,
[L] = [L(1, 0, 0) L(0, 1, 0) L(0, 0, 1)] = [1 −1 0; 0 1 1]
(c) Observe that L(0, 0, 0) = (1, 0, 0) and L(1, 0, 0) = (1, 1, 0), so L(0, 0, 0) + L(1, 0, 0) = (2, 1, 0). But L((0, 0, 0) + (1, 0, 0)) = L(1, 0, 0) = (1, 1, 0) ≠ (2, 1, 0), so L is not linear.
Hence,
[L] = [L(1, 0, 0) L(0, 1, 0) L(0, 0, 1)] = [0 0 0; 0 0 0; 0 0 0]
L(s~x + t~y) = L(sx1 + ty1 , sx2 + ty2 , sx3 + ty3 , sx4 + ty4 )
= ([sx4 + ty4 ] − [sx1 + ty1 ], 2[sx2 + ty2 ] + 3[sx3 + ty3 ])
= s(x4 − x1 , 2x2 + 3x3 ) + t(y4 − y1 , 2y2 + 3y3 )
= sL(~x) + tL(~y)
Hence,
[L] = [L(1, 0, 0, 0) L(0, 1, 0, 0) L(0, 0, 1, 0) L(0, 0, 0, 1)] = [−1 0 0 1; 0 2 3 0]
3.2.4 Let ~a = [2; 2].
(a) We have
proj~a(~e1) = ((~e1 · ~a)/||~a||^2)~a = (2/8)[2; 2] = [1/2; 1/2]
proj~a(~e2) = ((~e2 · ~a)/||~a||^2)~a = (2/8)[2; 2] = [1/2; 1/2]
Hence,
[proj~a] = [proj~a(~e1) proj~a(~e2)] = [1/2 1/2; 1/2 1/2]
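The standard matrix of a projection onto ~a is ~a~aT/||~a||^2, so it can be built in one line. This is an added sketch, not part of the original solution:

```python
import numpy as np

a = np.array([2.0, 2.0])

# Columns of [proj_a] are proj_a(e1), proj_a(e2), i.e. (e_i . a / ||a||^2) a.
# Collected together this is the rank-1 matrix a a^T / ||a||^2.
P = np.outer(a, a) / (a @ a)
print(P)  # [[0.5 0.5] [0.5 0.5]]
```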
3.2.5 We have
reflP(~e1) = [1; 0; 0] − (6/11)[3; 1; −1] = [−7/11; −6/11; 6/11]
reflP(~e2) = [0; 1; 0] − (2/11)[3; 1; −1] = [−6/11; 9/11; 2/11]
reflP(~e3) = [0; 0; 1] + (2/11)[3; 1; −1] = [6/11; 2/11; 9/11]
Hence,
[reflP] = [−7/11 −6/11 6/11; −6/11 9/11 2/11; 6/11 2/11 9/11]
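A sanity check on the reflection matrix (an added sketch, not part of the original solution). The computations above subtract 2(~ei · ~n)/||~n||^2 times ~n, which suggests the plane P has normal vector ~n = (3, 1, −1); with that assumption the matrix can be rebuilt and verified to be an involution:

```python
import numpy as np

# Normal vector of the plane P, inferred from the computations above
# (an assumption for this check).
n = np.array([3.0, 1.0, -1.0])

# Reflection through the plane with normal n: I - 2 n n^T / ||n||^2.
R = np.eye(3) - 2.0 * np.outer(n, n) / (n @ n)
print(R * 11)  # entries are elevenths: [[-7 -6 6] [-6 9 2] [6 2 9]]

# A reflection is its own inverse.
print(np.allclose(R @ R, np.eye(3)))  # True
```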
3.2.11 (a) The standard matrix of such a linear mapping would be [L] = [L(1, 0) L(0, 1)] = [3 1; −1 −5; 4 9]. Hence, a linear mapping would be
L(~x) = [L]~x = [3x1 + x2; −x1 − 5x2; 4x1 + 9x2]
(b) We could take L(x1, x2) = (3x1^2 + x2, −x1 − 5x2, 4x1 + 9x2). Then, L(1, 0) = (3, −1, 4), L(0, 1) = (1, −5, 9), but L is not linear since L(2, 0) = (12, −2, 8) ≠ 2L(1, 0).
3.2.12 If L is linear, then we would get
L(1, 0) = L((1/2)[(1, 1) + (1, −1)]) = (1/2)L(1, 1) + (1/2)L(1, −1) = (1/2)(2, 3) + (1/2)(3, 1) = (5/2, 2)
Section 3.3 Solutions 57
and
L(0, 1) = L((1/2)[(1, 1) − (1, −1)]) = (1/2)L(1, 1) − (1/2)L(1, −1) = (1/2)(2, 3) − (1/2)(3, 1) = (−1/2, 1)
3.2.14 Let ~u = [u1; . . . ; un] be a unit vector. By definition, the i-th column of [proj~u] is
proj~u(~ei) = ((~ei · ~u)/||~u||^2)~u = ui~u
Hence, [proj~u] = [u1~u · · · un~u] = ~u~uT
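The column-by-column construction and the outer-product formula agree, which can be checked directly (an added sketch, not part of the original solution):

```python
import numpy as np

# For a unit vector u, the i-th column of [proj_u] is (e_i . u) u = u_i u,
# so the whole matrix is the outer product u u^T.
u = np.array([1.0, 2.0, 2.0])
u = u / np.linalg.norm(u)  # normalize so ||u|| = 1

# Build [proj_u] column by column from the standard basis vectors.
P_columns = np.column_stack([(e @ u) * u for e in np.eye(3)])
print(np.allclose(P_columns, np.outer(u, u)))  # True
```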
Thus, B = {[1; −1; 0], [0; 0; 1]} spans ker(L) and is clearly linearly independent. Hence, B is a basis for ker(L).
Every vector ~y ∈ Range(L) has the form
L(~x) = [x1 + x2; 0] = (x1 + x2)[1; 0], (x1 + x2) ∈ R
Hence, C = {[1; 0]} spans Range(L) and is linearly independent, so it is a basis for Range(L).
ii. We have
L(1, 0, 0) = [1; 0]  L(0, 1, 0) = [1; 0]  L(0, 0, 1) = [0; 0]
Thus, [L] = [1 1 0; 0 0 0].
iii. By Theorem 3.3.4, B is a basis for Null([L]). By Theorem 3.3.5, C is a basis for Col([L]).
By definition, Row([L]) = Span{[1; 1; 0], [0; 0; 0]}. Thus, {[1; 1; 0]} spans Row([L]) and is clearly linearly independent, so it is a basis for Row([L]).
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]T ~x = ~0. Row reducing the corresponding coefficient matrix gives
[1 0; 1 0; 0 0] ∼ [1 0; 0 0; 0 0]
Hence, x1 = 0 and x2 is free, so {[0; 1]} is a basis for Null([L]T).
Thus, B = {[1; 1; −1]} spans ker(L) and is clearly linearly independent. Hence, B is a basis for ker(L).
Every vector ~y ∈ Range(L) has the form
L(~x) = [x1 − x2; x2 + x3] = x1[1; 0] + x2[−1; 1] + x3[0; 1], x1, x2, x3 ∈ R
Hence, C = {[1; 0], [0; 1]} spans Range(L) and is linearly independent, so it is a basis for Range(L).
ii. We have
L(1, 0, 0) = [1; 0]  L(0, 1, 0) = [−1; 1]  L(0, 0, 1) = [0; 1]
Thus, [L] = [1 −1 0; 0 1 1].
iii. By Theorem 3.3.4 B is a basis for Null([L]). By Theorem 3.3.5 C is a basis for Col([L]).
By definition, Row([L]) = Span{[1; −1; 0], [0; 1; 1]}. Thus, {[1; −1; 0], [0; 1; 1]} spans Row([L]) and is clearly linearly independent, so it is a basis for Row([L]).
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]T ~x = ~0.
Row reducing the corresponding coefficient matrix gives
[1 0; −1 1; 0 1] ∼ [1 0; 0 1; 0 0]
Thus, we have x1 = x2 = 0, so Null([L]T ) = {~0}. Thus, a basis for Null([L]T ) is the empty
set.
(c) i. Observe that for any ~x ∈ R3, we have L(x1, x2, x3) = [0; 0; 0].
So, ker(L) = R3 . Thus, a basis for ker(L) is the standard basis for R3 .
Hence, Range(L) = {~0} and so a basis for Range(L) is the empty set.
ii. We have
L(1, 0, 0) = L(0, 1, 0) = L(0, 0, 1) = [0; 0; 0]
Thus, [L] = [0 0 0; 0 0 0; 0 0 0].
iii. By Theorem 3.3.4 the standard basis for R3 is a basis for Null([L]). By Theorem 3.3.5 the
empty set is a basis for Col([L]).
By definition, Row([L]) = Span{[0; 0; 0]}. Thus, Row([L]) = {~0} and so a basis for Row([L]) is the empty set.
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]T ~x = ~0.
Clearly, the solution space is R3 . Hence, the standard basis for R3 is a basis for Null([L]T ).
(d) i. If ~x ∈ ker(L), then we have
[0; 0] = L(x1, x2, x3, x4) = [x1; x2 + x4]
Hence, x1 = 0 and x4 = −x2. Thus, we have
~x = [x1; x2; x3; x4] = [0; x2; x3; −x2] = x2[0; 1; 0; −1] + x3[0; 0; 1; 0], x2, x3 ∈ R
Thus, B = {[0; 1; 0; −1], [0; 0; 1; 0]} spans ker(L) and is clearly linearly independent. Hence, B is a basis for ker(L).
Every vector ~y ∈ Range(L) has the form
L(~x) = [x1; x2 + x4] = x1[1; 0] + (x2 + x4)[0; 1], x1, (x2 + x4) ∈ R
Hence, C = {[1; 0], [0; 1]} spans Range(L) and is linearly independent, so it is a basis for Range(L).
ii. We have
L(1, 0, 0, 0) = [1; 0]  L(0, 1, 0, 0) = [0; 1]  L(0, 0, 1, 0) = [0; 0]  L(0, 0, 0, 1) = [0; 1]
Thus, [L] = [1 0 0 0; 0 1 0 1].
iii. By Theorem 3.3.4 B is a basis for Null([L]). By Theorem 3.3.5 C is a basis for Col([L]).
By definition, Row([L]) = Span{[1; 0; 0; 0], [0; 1; 0; 1]}. Thus, {[1; 0; 0; 0], [0; 1; 0; 1]} spans Row([L]) and is clearly linearly independent. Therefore, it is a basis for Row([L]).
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]T ~x = ~0.
Row reducing the corresponding coefficient matrix gives
[1 0; 0 1; 0 0; 0 1] ∼ [1 0; 0 1; 0 0; 0 0]
Thus, we have x1 = x2 = 0, so Null([L]T ) = {~0}. Thus, a basis for Null([L]T ) is the empty
set.
(e) i. If ~x ∈ ker(L), then we have
[0; 0] = L(x1, x2) = [x1; x2]
Hence, x1 = x2 = 0. So, ker(L) = {~0} and so a basis for ker(L) is the empty set.
Every vector ~y ∈ Range(L) has the form
L(~x) = [x1; x2] = x1[1; 0] + x2[0; 1], x1, x2 ∈ R
Hence, C = {[1; 0], [0; 1]} spans Range(L) and is linearly independent, so it is a basis for Range(L).
ii. We have
L(1, 0) = [1; 0]  L(0, 1) = [0; 1]
Thus, [L] = [1 0; 0 1].
iii. By Theorem 3.3.4 the empty set is a basis for Null([L]). By Theorem 3.3.5 C is a basis for Col([L]).
By definition, Row([L]) = Span{[1; 0], [0; 1]}. Thus, {[1; 0], [0; 1]} is a basis for Row([L]).
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]T ~x = ~0. We
see that the only solution is ~x = ~0, so Null([L]T ) = {~0}. Thus, a basis for Null([L]T ) is the
empty set.
(f) L(x1 , x2 , x3 , x4 ) = (x4 − x1 , 2x2 + 3x3 )
i. If ~x ∈ ker(L), then we have
[0; 0] = L(x1, x2, x3, x4) = [x4 − x1; 2x2 + 3x3]
Hence, x4 = x1 and x3 = −(2/3)x2. Thus, we have
~x = [x1; x2; x3; x4] = [x1; x2; −2x2/3; x1] = x1[1; 0; 0; 1] + x2[0; 1; −2/3; 0], x1, x2 ∈ R
Thus, B = {[1; 0; 0; 1], [0; 1; −2/3; 0]} spans ker(L) and is clearly linearly independent. Hence, B is a basis for ker(L).
Every vector ~y ∈ Range(L) has the form
L(~x) = [x4 − x1; 2x2 + 3x3] = (x4 − x1)[1; 0] + (2x2 + 3x3)[0; 1]
Hence, C = {[1; 0], [0; 1]} spans Range(L) and is linearly independent, so it is a basis for Range(L).
ii. We have
L(1, 0, 0, 0) = [−1; 0]  L(0, 1, 0, 0) = [0; 2]  L(0, 0, 1, 0) = [0; 3]  L(0, 0, 0, 1) = [1; 0]
Thus, [L] = [−1 0 0 1; 0 2 3 0].
iii. By Theorem 3.3.4 B is a basis for Null([L]). By Theorem 3.3.5 C is a basis for Col([L]).
By definition, Row([L]) = Span{[−1; 0; 0; 1], [0; 2; 3; 0]}. Thus, {[−1; 0; 0; 1], [0; 2; 3; 0]} spans Row([L]) and is clearly linearly independent. Therefore, it is a basis for Row([L]).
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]T ~x = ~0.
Row reducing the corresponding coefficient matrix gives
[−1 0; 0 2; 0 3; 1 0] ∼ [1 0; 0 1; 0 0; 0 0]
Thus, we have x1 = x2 = 0, so Null([L]T ) = {~0}. Thus, a basis for Null([L]T ) is the empty
set.
(g) i. If ~x ∈ ker(L), then we have
0 = L(~x) = (3x1 )
Hence, x1 = 0. Therefore, ker(L) = {0}, so a basis for ker(L) is the empty set.
Every vector in the range of L has the form L(~x) = (3x1 ). Hence, {1} spans Range(L) and is
linearly independent, so it is a basis for Range(L).
ii. We have L(1) = 3. Thus, [L] = [3].
iii. By Theorem 3.3.4, the empty set is a basis for Null([L]). By Theorem 3.3.5, {1} is a basis for Col([L]). Now notice that [L]T = [L]. Hence, the empty set is also a basis for Null([L]T) and {1} is also a basis for Row([L]).
i. If ~x ∈ ker(L), then
Thus, B = {[0; 1; 0]} spans ker(L) and is linearly independent. Hence, it is a basis for ker(L).
Every vector ~y ∈ Range(L) has the form
L(~x) = [2x1 + x3; x1 − x3; x1 + x3] = x1[2; 1; 1] + x3[1; −1; 1], x1, x3 ∈ R
Thus, C = {[2; 1; 1], [1; −1; 1]} spans Range(L) and is clearly linearly independent, so it is a basis for Range(L).
ii. We have
L(1, 0, 0) = [2; 1; 1],  L(0, 1, 0) = [0; 0; 0],  L(0, 0, 1) = [1; −1; 1]
Thus, [L] = [2 0 1; 1 0 −1; 1 0 1].
iii. By Theorem 3.3.4 B is a basis for Null([L]). By Theorem 3.3.5 C is a basis for Col([L]).
By definition, Row([L]) = Span{[2; 0; 1], [1; 0; −1], [1; 0; 1]}. However, observe that
(1/2)[1; 0; −1] + (3/2)[1; 0; 1] = [2; 0; 1]
Hence, Row([L]) = Span{[1; 0; −1], [1; 0; 1]}. Since {[1; 0; −1], [1; 0; 1]} is also clearly linearly independent, it is a basis for Row([L]).
To find a basis for the left nullspace of [L], we solve [L]T ~x = ~0. Row reducing the corresponding coefficient matrix gives
[2 1 1; 0 0 0; 1 −1 1] ∼ [1 0 2/3; 0 1 −1/3; 0 0 0]
Hence, {[−2/3; 1/3; 1]} spans the left nullspace of [L] and is clearly linearly independent, so it is a basis for the left nullspace of [L].
(h) i. If ~x ∈ ker(L), then we have
0 = L(~x) = (x1 , 2x1 , −x1 )
Hence, x1 = 0. Therefore, ker(L) = {0}, so a basis for ker(L) is the empty set.
Every vector in the range of L has the form
L(~x) = [x1; 2x1; −x1] = x1[1; 2; −1], x1 ∈ R
Hence, C = {[1; 2; −1]} spans Range(L) and is linearly independent, so it is a basis for Range(L).
ii. We have L(1) = [1; 2; −1]. Thus, [L] = [1; 2; −1].
iii. By Theorem 3.3.4, the empty set is a basis for Null([L]). By Theorem 3.3.5, C is a basis for Col([L]).
We have Row([L]) = Span{1, 2, −1} = Span{1}. Thus, {1} is a basis for Row([L]).
To find a basis for the left nullspace of [L], we solve
0 = [L]T ~x = x1 + 2x2 − x3
Thus, every vector in the left nullspace of [L] has the form
~x = [x1; x2; x3] = [−2x2 + x3; x2; x3] = x2[−2; 1; 0] + x3[1; 0; 1], x2, x3 ∈ R
Hence, {[−2; 1; 0], [1; 0; 1]} is a basis for Null([L]T).
i. If ~x ∈ ker(L), then
(0, 0) = L(x1 , x2 ) = (x1 + x2 , 2x1 + 4x2 )
Hence, x1 + x2 = 0 and 2x1 + 4x2 = 0. This implies that x1 = x2 = 0. Therefore, ker(L) = {~0}
and so a basis for ker(L) is the empty set.
Every vector ~y ∈ Range(L) has the form
L(~x) = [x1 + x2; 2x1 + 4x2] = x1[1; 2] + x2[1; 4], x1, x2 ∈ R
Thus, C = {[1; 2], [1; 4]} spans Range(L) and is clearly linearly independent, so it is a basis for Range(L).
ii. We have
L(1, 0) = [1; 2],  L(0, 1) = [1; 4]
Thus, [L] = [1 1; 2 4].
iii. By Theorem 3.3.4 the empty set is a basis for Null([L]). By Theorem 3.3.5 C is a basis for Col([L]).
By definition, Row([L]) = Span{[1; 1], [2; 4]}. Hence, {[1; 1], [2; 4]} spans Row([L]) and is also clearly linearly independent, so it is a basis for Row([L]).
To find a basis for the left nullspace of [L], we solve [L]T ~x = ~0. Row reducing the corresponding coefficient matrix gives
[1 2; 1 4] ∼ [1 0; 0 1]
Hence, the only vector in Null([L]T ) is the zero vector. So, a basis for the left nullspace of
[L] is the empty set.
3.3.2 (a) By definition, {[1; 3], [2; 4]} spans Col(A). Since it is also linearly independent, it is a basis for Col(A).
By definition, {[1; 2], [3; 4]} spans Row(A). Since it is also linearly independent, it is a basis for Row(A).
To find Null(A), we solve A~x = ~0. Row reducing the coefficient matrix of the corresponding system gives
[1 2; 3 4] ∼ [1 0; 0 1]
Thus, the only solution is ~x = ~0. Hence, Null(A) = {~0} and so a basis is the empty set.
To find Null(AT), we solve AT ~x = ~0. Row reducing the coefficient matrix of the corresponding system gives
[1 3; 2 4] ∼ [1 0; 0 1]
Thus, the only solution is ~x = ~0. Hence, Null(AT) = {~0} and so a basis is the empty set.
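All four answers in part (a) follow from the single fact that rank A = 2, which is easy to confirm numerically (an added check, not part of the original solution):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# rank(A) = 2, so by rank-nullity both Null(A) and Null(A^T) are {0}
# (empty bases), while the columns and the rows each form bases of R^2.
r = np.linalg.matrix_rank(A)
print(r)              # 2
print(A.shape[1] - r) # nullity of A: 0
```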
(b) By definition, Col(B) = Span{[1; 0], [0; 1], [1; −1]}. Since [1; −1] = 1[1; 0] + (−1)[0; 1], we get that
Col(B) = Span{[1; 0], [0; 1], [1; −1]} = Span{[1; 0], [0; 1]}
by Theorem 1.1.2. Thus, {[1; 0], [0; 1]} is a spanning set for Col(B) and is also linearly independent, so it is a basis for Col(B).
By definition, {[1; 0; 1], [0; 1; −1]} spans Row(B). Since it is also linearly independent, it is a basis for Row(B).
To find Null(B), we solve B~x = ~0. The coefficient matrix is already in RREF, so we get x1 + x3 = 0
and x2 − x3 = 0. Let x3 = t ∈ R. Then, every vector ~x ∈ Null(B) has the form
~x = [x1; x2; x3] = [−x3; x3; x3] = x3[−1; 1; 1]
So, {[−1; 1; 1]} spans Null(B) and is clearly linearly independent and hence is a basis for Null(B).
To find Null(BT), we solve BT ~x = ~0. Row reducing the coefficient matrix of the corresponding system gives
[1 0; 0 1; 1 −1] ∼ [1 0; 0 1; 0 0]
Thus, the only solution is ~x = ~0. Hence, Null(BT) = {~0} and so a basis is the empty set.
(c) By definition, Col(C) = Span{[1; −1; 1], [2; −2; 3], [0; 0; −1]}. Observe that
2[1; −1; 1] − [0; 0; −1] = [2; −2; 3]
Hence, {[1; −1; 1], [0; 0; −1]} also spans Col(C). Since it is also linearly independent, it is a basis for Col(C).
By definition,
Row(C) = Span{[1; 2; 0], [−1; −2; 0], [1; 3; −1]} = Span{[1; 2; 0], [1; 3; −1]}
Hence, {[1; 2; 0], [1; 3; −1]} is a basis for Row(C) since it spans Row(C) and is linearly independent.
To find Null(C), we solve C~x = ~0. We row reduce the corresponding coefficient matrix to get
[1 2 0; −1 −2 0; 1 3 −1] ∼ [1 0 2; 0 1 −1; 0 0 0]
Hence, x1 + 2x3 = 0 and x2 − x3 = 0. Then, every vector ~x ∈ Null(C) has the form
~x = [x1; x2; x3] = [−2x3; x3; x3] = x3[−2; 1; 1], x3 ∈ R
So, {[−2; 1; 1]} spans Null(C) and is clearly linearly independent and hence is a basis for Null(C).
To find Null(CT), we solve CT ~x = ~0. Row reducing the coefficient matrix of the corresponding system gives
[1 −1 1; 2 −2 3; 0 0 −1] ∼ [1 −1 0; 0 0 1; 0 0 0]
So, {[1; 1; 0]} spans Null(CT) and is clearly linearly independent and hence is a basis for Null(CT).
(d) To find a basis for Col(A), we need to determine which columns of A can be written as linear
combinations of other columns. We consider
[0; 0; 0] = c1[1; −2; −1] + c2[2; −5; −3] + c3[1; −2; −1] + c4[1; 1; 2] = [c1 + 2c2 + c3 + c4; −2c1 − 5c2 − 2c3 + c4; −c1 − 3c2 − c3 + 2c4]
If we ignore the last column and pretend the third column is an augmented part, this shows that the
third column equals 1 times the first column (which is obvious). If we ignore the third column and
pretend the fourth column is an augmented part, we see that 7 times the first column plus -3 times
Hence, we have that {[1; −2; −1], [2; −5; −3]} spans Col(A). Since it is also linearly independent (just consider the first two columns in the system above), it is a basis for Col(A).
To find a basis for Row(A), we need to determine which rows of A can be written as linear combinations of the other rows, so we consider the corresponding system for AT. If we ignore the last column and pretend the third column is an augmented part, this shows that the third column of AT equals the sum of the first two columns. That is, the third row of A equals the sum of the first two rows. Hence, we have that {[1; 2; 1; 1], [−2; −5; −2; 1]} spans Row(A). Since it is also linearly independent (just consider the first two columns in the system above), it is a basis for Row(A).
To find Null(A), we solve A~x = ~0. Observe that the coefficient matrix is A, which we already row reduced above. Hence, we get x1 + x3 + 7x4 = 0 and x2 − 3x4 = 0. Then, every vector ~x ∈ Null(A) has the form
~x = [x1; x2; x3; x4] = [−x3 − 7x4; 3x4; x3; x4] = x3[−1; 0; 1; 0] + x4[−7; 3; 0; 1], x3, x4 ∈ R
So, {[−1; 0; 1; 0], [−7; 3; 0; 1]} spans Null(A) and is clearly linearly independent and hence is a basis for Null(A).
To find Null(AT), we solve AT ~x = ~0. We observe that we row reduced AT above. We get x1 + x3 = 0 and x2 + x3 = 0. Then, every vector ~x ∈ Null(AT) has the form
~x = [x1; x2; x3] = [−x3; −x3; x3] = x3[−1; −1; 1], x3 ∈ R
So, {[−1; −1; 1]} spans Null(AT) and is clearly linearly independent and hence is a basis for Null(AT).
1
3.3.3 If A~x = ~b has a unique solution, then A~x = ~0 has a unique solution ~x = ~0. Thus, Null(A) = {~0}.
On the other hand, if Null(A) = {~0}, then A~x = ~0 has a unique solution. Thus, by Theorem 2.2.4,
rank(A) = n and so A~x = ~b has a unique solution.
3.3.4 By definition Range(L) is a subset of Rm and ~0 ∈ Range(L) by Lemma 3.3.1.
Let ~y, ~z ∈ Range(L). Then, there exist ~x, ~w ∈ Rn such that L(~x) = ~y and L(~w) = ~z. We now see that
L(~x + ~w) = L(~x) + L(~w) = ~y + ~z
and
L(t~x) = tL(~x) = t~y
So, ~y + ~z ∈ Range(L) and t~y ∈ Range(L) for all t ∈ R. Thus, Range(L) is a subspace of Rm by the
Subspace Test.
3.3.5 Consider
c1 L(~v1 ) + · · · + ck L(~vk ) = ~0
Then,
L(c1~v1 + · · · + ck~vk ) = ~0
Therefore, c1~v1 + · · · + ck~vk ∈ ker(L) and hence c1~v1 + · · · + ck~vk = ~0 since ker(L) = {~0}. Thus, c1 = · · · =
ck = 0 since {~v1 , . . . , ~vk } is linearly independent. Hence, {L(~v1 ), . . . , L(~vk )} is linearly independent.
3.3.6 Let [L] be the standard matrix of L. By definition of the standard matrix, we have [L] is an n × n matrix.
Thus, we get that ker(L) = {~0} if and only if
~0 = L(~x) = [L]~x
has a unique solution. By Theorem 2.2.4(2), this is true if and only if rank[L] = n. Moreover, by
Theorem 2.2.4(3), we have that ~b = [L]~x = L(~x) is consistent for all ~b ∈ Rn if and only if rank[L] = n.
Hence, ker(L) = {~0} if and only if rank[L] = n if and only if Range(L) = Rn .
(a) By definition Null(A) is a subset of Rn and we have A~0 = ~0, so ~0 ∈ Null(A). Thus, Null(A) is
non-empty.
Let ~x, ~y ∈ Null(A). Then, A~x = ~0 and A~y = ~0, so
A(~x + ~y) = A~x + A~y = ~0 + ~0 = ~0
Therefore, ~x + ~y ∈ Null(A).
For any t ∈ R, we have
A(t~x) = tA~x = t(~0) = ~0
So, t~x ∈ Null(A).
Consequently, Null(A) is a subspace of Rn by the Subspace Test.
(b) Observe that A~x = ~0 represents a homogeneous system of linear equations. Thus, the set of all ~x
which satisfy this equation is the solution space of the homogeneous system, which is a subspace
of Rn .
3.3.7 (a) By definition Null(A) is a subset of Rn . Also, we have A~0 = ~0, so ~0 ∈ Null(A).
Observe that A~x = ~0 represents a homogeneous system of linear equations. Thus, Null(A) is the solution space of this homogeneous system, which we know is a subspace by Theorem 2.2.3.
3.3.8 We just need to prove that if B is obtained from A by applying any one of the three elementary row
operations, then Row(A) = Row(B).
Let A = [~a1T; . . . ; ~amT], so that ~a1, . . . , ~am are the rows of A. Then, Row(A) = Span{~a1, . . . , ~am}.
Section 3.4 Solutions 71
If B is obtained from A by swapping row i and row j, then B still has the same rows as A (just in a
different order), so
Row(B) = Span{~a1 , . . . , ~am } = Row(A)
3.3.9 (a) If ~b = A~x for some ~x ∈ Rn , then by definition ~b ∈ Col(A). On the other hand, if ~b ∈ Col(A),
then there exists ~x ∈ Rn such that A~x = ~b, so the system is consistent. Hence, we have proven the
statement.
(b) We disprove the statement with a counterexample. Let L : R2 → R2 be defined by L(~x) = ~0. Then, L(1, 1) = ~0 and so [1; 1] ∈ ker(L), but [1; 1] ≠ ~0.
(c) We disprove the statement with a counterexample. Let A = [1 1; 1 1] and let L : R2 → R2 be defined by L(~x) = A~x. Then, observe that Range(L) = Span{[2; 2], [3; 3]}, but [L] ≠ [2 3; 2 3].
Hence,
[L + M] = [5 0; 2 1]
(b) We have
(L + M)(x1, x2, x3) = L(x1, x2, x3) + M(x1, x2, x3) = [x1 + x2; x1 + x3] + [−x3; x1 + x2 + x3] = [x1 + x2 − x3; 2x1 + x2 + 2x3]
Hence,
[L + M] = [1 1 −1; 2 1 2]
(c) We have
(L + M)(x1, x2, x3) = L(x1, x2, x3) + M(x1, x2, x3) = [2x1 − x2; x2 + 3x3] + [−2x1 + x2; −x2 − 3x3] = [0; 0]
Hence,
[L + M] = [0 0 0; 0 0 0]
3.4.2 (a) We have
(M ◦ L)(x1, x2) = M(L(x1, x2)) = M(2x1 − x2, x1) = [3(2x1 − x2) + x1; 2x1 − x2 + x1] = [7x1 − 3x2; 3x1 − x2]
(L ◦ M)(x1, x2) = L(M(x1, x2)) = L(3x1 + x2, x1 + x2) = [2(3x1 + x2) − (x1 + x2); 3x1 + x2] = [5x1 + x2; 3x1 + x2]
Hence, [M ◦ L] = [7 −3; 3 −1] and [L ◦ M] = [5 1; 3 1].
(b) We have
(c) We have
So, Range(M ◦ L) = R2 .
(c) No, it isn’t. If M ◦ L is onto, then for any ~z ∈ R p , there exists ~x ∈ Rn such that ~z = (M ◦ L)(~x) =
M(L(~x)). Hence, ~z ∈ Range(M). So, Range(M) = R p .
3.4.4 Let L, M ∈ L and let c, d ∈ R.
V4 Let ~x, ~y ∈ Rn and s, t ∈ R. Hence,
Hence, L + O = L.
V5 Let ~x, ~y ∈ Rn and s, t ∈ R. Then,
Thus, L + (−L) = O
V8 For any ~x ∈ Rn we have
2t1 + t2 = 0
t2 = 0
t2 = 0
t1 = 0
−t1 + t2 = 0
t1 + 2t2 + t3 = 0
3t1 + 5t2 + t3 = 0
6t1 + 8t2 − 2t3 = 0
t1 + 3t2 + 3t3 = 0
[1 2 1; 3 5 1; 6 8 −2; 1 3 3] ∼ [1 0 −3; 0 1 2; 0 0 0; 0 0 0]
Thus, we get
[t1; t2; t3] = [3t3; −2t3; t3] = t3[3; −2; 1], t3 ∈ R
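The claimed solution space can be verified directly (an added numerical check, not part of the original solution): the vector (3, −2, 1) should satisfy every equation, and the coefficient matrix should have rank 2 so that the nullity is exactly 1.

```python
import numpy as np

# Coefficient matrix of the homogeneous system in t1, t2, t3.
A = np.array([[1, 2, 1],
              [3, 5, 1],
              [6, 8, -2],
              [1, 3, 3]], dtype=float)

# Claimed solution space: all multiples of (3, -2, 1).
v = np.array([3.0, -2.0, 1.0])
print(np.allclose(A @ v, 0))     # True: v solves the system
print(np.linalg.matrix_rank(A))  # 2, so the nullity is 3 - 2 = 1
```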
3.4.8 Define Id : Rn → Rn by Id(~x) = ~x. Then, for any ~x, ~y ∈ Rn and s, t ∈ R, we have
Hence, Id is linear.
For any ~x ∈ Rn we have
(L ◦ Id)(~x) = L(Id(~x)) = L(~x)
and
(Id ◦L)(~x) = Id(L(~x)) = L(~x)
as required.
Chapter 4 Solutions
(e) By definition S5 is a subset of P2(R) and the zero polynomial z(x) = 0 + 0x + 0x^2 ∈ S5 since 0 = 0.
Let a + bx + cx^2, d + ex + f x^2 ∈ S5. Then, a = c and d = f. Hence,
1 + x^2 = c1(1 + x + x^2) + c2(−1 + 2x + 2x^2) + c3(5 + x + x^2)
= (c1 − c2 + 5c3) + (c1 + 2c2 + c3)x + (c1 + 2c2 + c3)x^2
c1 − c2 + 5c3 = 1
c1 + 2c2 + c3 = 0
c1 + 2c2 + c3 = 1
c1 + 3c3 = 2
c2 + c3 = 5
−c1 + 2c2 − 3c3 = 4
c1 + 2c2 + c3 = 4
c1 + c2 + 2c3 = 0
2c1 + 3c2 + 4c3 = −3
c2 + 3c3 = 3
−c1 − c2 − 5c3 = −6
[1 1 2 | 0; 2 3 4 | −3; 0 1 3 | 3; −1 −1 −5 | −6] ∼ [1 0 0 | −1; 0 1 0 | −3; 0 0 1 | 2; 0 0 0 | 0]
Hence, c1 = −1, c2 = −3, c3 = 2.
c1 + 2c2 + c3 = 0
c1 + 2c2 − 2c3 = 0
−2c1 + c2 + 3c3 = 0
c1 + 4c3 = 0
Hence, we have the same homogeneous system as in part (c) so this set is also linearly independent.
Section 4.1 Solutions 81
(e) Consider
[0 0; 0 0] = c1[1 −1; 2 1] + c2[3 1; 2 1] + c3[−1 5; 0 1] + c4[3 1; 2 2] = [c1 + 3c2 − c3 + 3c4, −c1 + c2 + 5c3 + c4; 2c1 + 2c2 + 2c4, c1 + c2 + c3 + 2c4]
This gives the homogeneous system
c1 + 3c2 − c3 + 3c4 = 0
−c1 + c2 + 5c3 + c4 = 0
2c1 + 2c2 + 2c4 = 0
c1 + c2 + c3 + 2c4 = 0
[1 3 −1 3; −1 1 5 1; 2 2 0 2; 1 1 1 2] ∼ [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]
Hence, the only solution is the trivial solution, so the set is linearly independent.
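Linear independence of a set of 2×2 matrices reduces to linear independence of their flattened vectors in R4, which gives a quick rank check (an added sketch, not part of the original solution):

```python
import numpy as np

# Each 2x2 matrix is flattened into a vector in R^4; the set of matrices
# is linearly independent exactly when these four vectors are.
mats = [np.array([[1, -1], [2, 1]]),
        np.array([[3, 1], [2, 1]]),
        np.array([[-1, 5], [0, 1]]),
        np.array([[3, 1], [2, 2]])]

V = np.column_stack([m.flatten() for m in mats])
print(np.linalg.matrix_rank(V))  # 4: the set is linearly independent
```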
t1 + 2t2 + t3 = 0
3t1 + 5t2 + t3 = 0
6t1 + 8t2 − 2t3 = 0
t1 + 3t2 + 3t3 = 0
[1 2 1; 3 5 1; 6 8 −2; 1 3 3] ∼ [1 0 −3; 0 1 2; 0 0 0; 0 0 0]
Thus, we get
[t1; t2; t3] = [3t3; −2t3; t3] = t3[3; −2; 1], t3 ∈ R
4.1.5 (a) The statement is true. Assume that {~v1, . . . , ~vℓ} is any subset of B that is linearly dependent. Then there exist coefficients c1, . . . , cℓ, not all zero, such that
c1~v1 + · · · + cℓ~vℓ = ~0
Hence, ~x + ~y ∈ {~0V}. We need to show that s~x = ~0 for all s ∈ R. We have 0~x = ~0 ∈ S by Theorem 4.1.1. If s ≠ 0, then for any ~z ∈ V we can let ~w = (1/s)~z. Then, we get
~z + s~0 = s~w + s~0 = s(~w + ~0) = s~w = ~z
V3 We have
~s + ~t = {s1 + t1 , s2 + t2 , . . .} = {t1 + s1 , t2 + s2 , . . .} = ~t + ~s
V4 We have
~s + ~0 = {s1 + 0, s2 + 0, . . .} = {s1 , s2 , . . .} = ~s
V8 We have
(a + b)~s = {(a + b)s1, (a + b)s2, . . .} = {as1 + bs1, as2 + bs2, . . .} = {as1, as2, . . .} + {bs1, bs2, . . .} = a~s + b~s
V9 We have
a(~s + ~t) = a{s1 + t1, s2 + t2, . . .} = {a(s1 + t1), a(s2 + t2), . . .} = {as1 + at1, as2 + at2, . . .} = {as1, as2, . . .} + {at1, at2, . . .} = a~s + a~t
V10 We have
1~s = {1s1 , 1s2 , . . .} = {s1 , s2 , . . .} = ~s
Thus, S is a vector space under these operations.
4.1.9 Let V be a vector space. Prove that (−~x) = (−1)~x for all ~x ∈ V. We have
~0 = ~x + (−~x) by V5
0~x = ~x + (−~x) by Theorem 4.1.1(1)
(1 − 1)~x = ~x + (−~x) operations on reals
1~x + (−1)~x = ~x + (−~x) by V8
~x + (−1)~x = ~x + (−~x) by V10
(−~x) + ~x + (−1)~x = (−~x) + ~x + (−~x) by V5
~0 + (−1)~x = ~0 + (−~x) by V5
(−1)~x = (−~x) by V4
4.1.10 We are assuming that there exist c1 , . . . , ck−1 ∈ R such that
c1~v1 + · · · + ci−1~vi−1 + ci~vi+1 + · · · + ck−1~vk = ~vi
Let ~x ∈ Span{~v1 , . . . , ~vk }. Then, there exist d1 , . . . , dk ∈ R such that
~x = d1~v1 + · · · + di−1~vi−1 + di~vi + di+1~vi+1 + · · · + dk~vk
= d1~v1 + · · · + di−1~vi−1 + di (c1~v1 + · · · + ci−1~vi−1 + ci~vi+1 + · · · + ck−1~vk ) + di+1~vi+1 + · · · + dk~vk
= (d1 + di c1 )~v1 + · · · + (di−1 + di ci−1 )~vi−1 + (di+1 + di ci )~vi+1 + · · · + (dk + di ck−1 )~vk
Thus, ~x ∈ Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk }. Hence, Span{~v1 , . . . , ~vk } ⊆ Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk }.
Clearly, we have Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk } ⊆ Span{~v1 , . . . , ~vk } and so
Span{~v1 , . . . , ~vk } = Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk }
as required.
4.1.11 If ~vi ∈ Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk }, then there exist c1 , . . . , ck−1 ∈ R such that
~vi = c1~v1 + · · · + ci−1~vi−1 + ci~vi+1 + · · · + ck−1~vk
Thus,
c1~v1 + · · · + ci−1~vi−1 − ~vi + ci~vi+1 + · · · + ck−1~vk = ~0
Hence, {~v1 , . . . , ~vk } is linearly dependent.
Section 4.2 Solutions 85
0 = c1 (1 + x2 ) + c2 (1 − x + x3 ) + c3 (2x + x2 − 3x3 ) + c4 (1 + 4x + x2 ) + c5 (1 − x − x2 + x3 )
= (c1 + c2 + c4 + c5 ) + (−c2 + 2c3 + 4c4 − c5 )x + (c1 + c3 + c4 − c5 )x2 + (c2 − 3c3 + c5 )x3
0 = c1 + c2 + c3
0 = c1 + 2c2 + 3c3
0 = c1 − c2 + 2c3
a + bx + ax2 = a(1 + x2 ) + bx
Thus, S5 = Span{1 + x2 , x}. Since {1 + x2 , x} is also clearly linearly independent, it is a basis for
S5 . Thus, dim S5 = 2.
" #
a b
(f) Let A = ∈ S6 . Then A satisfies AT = A which implies that
c d
" # " #
a b a c
=
c d b d
0 = c1 (1 + x) + c2 (1 + x + x2 ) + c3 (x + 2x2 ) + c4 (x + x2 + x3 ) + c5 (1 + x2 + x3 )
= c1 + c2 + c5 + (c1 + c2 + c3 + c4 )x + (c2 + 2c3 + c4 + c5 )x2 + (c4 + c5 )x3
Row reducing the coefficient matrix of the corresponding homogeneous system gives
Thus, treating this as an augmented matrix, we get that 1 + x2 + x3 can be written as a linear combination of the other vectors. Ignoring the last column shows that {1 + x, 1 + x + x2 , x + 2x2 , x + x2 + x3 } is linearly independent, and hence a basis for the set it spans. Thus, dim S8 = 4.
(i) Consider
[0]      [ 1]      [−1]      [ 1]      [0]      [0]   [c1 − c2 + c3]
[0] = c1 [−3] + c2 [ 4] + c3 [−1] + c4 [2] + c5 [1] = [−3c1 + 4c2 − c3 + 2c4 + c5]
[0]      [ 2]      [−1]      [ 4]      [2]      [1]   [2c1 − c2 + 4c3 + 2c4 + c5]
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[ 1 −1  1 0 0]   [1 0 3 2 1]
[−3  4 −1 2 1] ∼ [0 1 2 2 1]
[ 2 −1  4 2 1]   [0 0 0 0 0]
Thus, we see that the last three vectors can be written as linear combinations of the first two vectors. Moreover, the first two columns show that the set {[1; −3; 2], [−1; 4; −1]} is linearly independent, and hence a basis for the set it spans. Thus, dim S9 = 2.
(" # " # " # " # " # " #)
1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0
4.2.3 , , , , ,
0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1
4.2.4 (1) This is the contrapositive of Theorem 4.2.1.
(2) Assume {~v1 , . . . , ~vk } spans V where k < n. Then, we can use Theorem 4.1.1 to remove linearly
dependent vectors (if any) to get a basis for V. Thus, we can find a basis for V with fewer than n
vectors which contradicts Theorem 4.2.4.
(3) Assume that {~v1 , . . . , ~vn } is linearly independent, but does not span V. Then, there exists ~v ∈ V
such that ~v < Span{~v1 , . . . , ~vn }. Hence, by Theorem 4.2.2 we have that {~v1 , . . . , ~vn , ~v} is a linearly
independent set of n + 1 vectors in V. But, this contradicts (1). Therefore, {~v1 , . . . , ~vn } also spans
V.
Assume {~v1 , . . . , ~vn } spans V, but is linearly dependent. Then, there exists some vector ~vi ∈ Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vn }. Hence, by Theorem 4.2.1 we have that Span{~v1 , . . . , ~vn } = Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vn }.
So, V is spanned by n − 1 vectors which contradicts (2). Thus, {~v1 , . . . , ~vn } is also linearly inde-
pendent.
4.2.5 Since we know that dim P3 (R) = 4, we need to add two vectors to the set {1 + x + x3 , 1 + x2 } so that the
set is still linearly independent. There are a variety of ways of picking such vectors. We observe that
neither x2 nor x can be written as a linear combination of 1 + x + x3 and 1 + x2 , so we will try to prove
that B = {1 + x + x3 , 1 + x2 , x2 , x} is a basis for P3 . Consider
[1 1 1 0 0 0]   [1 0 0 0  4/5 −3/5]
[2 2 0 1 0 0] ∼ [0 1 0 0 −1/5  2/5]
[2 3 0 0 1 0]   [0 0 1 0 −3/5  1/5]
[1 4 0 0 0 1]   [0 0 0 1 −6/5  2/5]
If we ignore the last two columns, we see that this implies that B = {~v1 , ~v2 , ~e1 , ~e2 } is a linearly independent set of 4 vectors in M2×2 (R). Thus, since dim M2×2 (R) = 4, we get that B is a basis for M2×2 (R).
4.2.7 Three vectors in the hyperplane are ~v1 = [1; −1; 0; 0], ~v2 = [0; 2; −1; 0], and ~v3 = [0; 0; 1; −2]. Consider
[0; 0; 0; 0] = c1~v1 + c2~v2 + c3~v3 = [c1 ; −c1 + 2c2 ; −c2 + c3 ; −2c3 ]
The only solution is c1 = c2 = c3 = 0 so the vectors are linearly independent. Since a hyperplane in R4
has dimension 3 and {~v1 , ~v2 , ~v3 } is a linearly independent set of three vectors in the hyperplane, it forms
a basis for the hyperplane.
We need to add a vector ~v4 so that {~v1 , ~v2 , ~v3 , ~v4 } is a basis for R4 . Since ~v4 = [1; 0; 0; 0] does not satisfy the equation of the hyperplane, it is not in Span{~v1 , ~v2 , ~v3 } and so is not a linear combination of ~v1 , ~v2 , ~v3 . Thus, {~v1 , ~v2 , ~v3 , ~v4 } is a linearly independent set with four elements in R4 and hence is a basis for R4 since R4 has dimension 4.
4.2.8 Let B = {~v1 , . . . , ~vk } be a basis for S. Then, B is a linearly independent set of k vectors in V. But,
dim V = dim S = k, so B is also a basis for V. Hence, V = S.
4.2.9 Since k < n, we have that Span{~v1 , . . . , ~vk } ≠ V. Thus, there exists a vector ~vk+1 ∉ Span{~v1 , . . . , ~vk }. Thus, by Theorem 4.1.5, {~v1 , . . . , ~vk+1 } is linearly independent.
If k + 1 = n, then {~v1 , . . . , ~vk+1 } is a basis for V. If not, we can keep repeating this procedure until we get a basis {~v1 , . . . , ~vk+1 , . . . , ~vn }.
4.2.10 (a) This statement is true as it is the contrapositive of Theorem 4.2.3(1).
(b) This is false. Since dim P2 (R) = 3, we have that every basis for P2 (R) has 3 vectors in it.
(c) This is false. Taking a = b = 1 and c = d = 2, gives {~v1 + ~v2 , 2~v1 + 2~v2 } which is clearly linearly
dependent.
(d) This is true. If {~v1 , . . . , ~vk } is linearly independent, then it is a basis. If it is linearly dependent, then by Theorem 4.1.5, there is some vector ~vi such that ~vi ∈ Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk }. Then, by Theorem 4.1.4, we have that V = Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk }. We can continue to repeat this process until we have a linearly independent spanning set.
(e) This is true. If {~v1 , . . . , ~vk } is a linearly independent set in V, then either dim V = k or, by Theorem 4.2.3, there exist ~wk+1 , . . . , ~wn such that {~v1 , . . . , ~vk , ~wk+1 , . . . , ~wn } is a basis for V. Thus, dim V = n ≥ k.
(" # " # " # " #)
1 0 2 0 3 0 4 0
(f) This is false. , , , is clearly linearly dependent and hence not a basis
0 0 0 0 0 0 0 0
for M2×2 (R).
90 Section 4.3 Solutions
[ 0 1 −1 |  4 −1]   [1 0 0 | −2 −2]
[ 1 0  1 | −3  0] ∼ [0 1 0 |  3  1]
[ 0 1  1 |  2  3]   [0 0 1 | −1  2]
[−1 1  2 |  3  7]   [0 0 0 |  0  0]
Hence, for the first system we have c1 = −2, c2 = 3, and c3 = −1, and for the second system we have d1 = −2, d2 = 1, and d3 = 2. Thus,
[A]B = [−2; 3; −1] and [B]B = [−2; 1; 2]
[1 0  2 | 0 −4]   [1 0 0 | −2 −2]
[1 1  0 | 1  1] ∼ [0 1 0 |  3  3]
[1 1  0 | 1  1]   [0 0 1 |  1 −1]
[0 1 −1 | 2  4]   [0 0 0 |  0  0]
Hence, for the first system we have c1 = −2, c2 = 3, and c3 = 1, and for the second system we have d1 = −2, d2 = 3, and d3 = −1. Thus,
[A]B = [−2; 3; 1] and [B]B = [−2; 3; −1]
c1 (1 + x + x2 ) + c2 (1 + 3x + 2x2 ) + c3 (4 + x2 ) = −2 + 8x + 5x2
d1 (1 + x + x2 ) + d2 (1 + 3x + 2x2 ) + d3 (4 + x2 ) = −4 + 8x + 4x2
Hence, [p(x)]B = [−14; 13; −5] and [q(x)]B = [−17; 16; −5].
4.3.5 To find the change of coordinates matrix from B-coordinates to C-coordinates, we need to determine the
C-coordinates of the vectors in B. That is, we need to find c1 , c2 , d1 , d2 such that
" # " # " # " # " # " #
2 5 3 2 5 5
c1 + c2 = , d1 + d2 =
1 2 1 1 2 3
We row reduce the corresponding doubly augmented matrix to get
" # " #
2 5 3 5 1 0 −1 5
∼
1 2 1 3 0 1 1 −1
" #
−1 5
Thus, C PB = .
1 −1
To find the change of coordinates matrix from C-coordinates to B-coordinates, we need to determine the
B-coordinates of the vectors in C. That is, we need to find c1 , c2 , d1 , d2 such that
" # " # " # " # " # " #
3 5 2 3 5 5
c1 + c2 = , d1 + d2 =
1 3 1 1 3 2
We row reduce the corresponding doubly augmented matrix to get
" # " #
3 5 2 5 1 0 1/4 5/4
∼
1 3 1 2 0 1 1/4 1/4
" #
1/4 4/5
Thus, B PC = .
1/4 1/4
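Since C PB and B PC convert between the same two bases, they must be inverse matrices. A small numerical sketch (not part of the original solution; entries taken from the two results above):

```python
# Change of coordinates matrices from the solutions above; their product should be I.
C_P_B = [[-1, 5], [1, -1]]
B_P_C = [[1/4, 5/4], [1/4, 1/4]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

I = matmul(B_P_C, C_P_B)
assert I == [[1.0, 0.0], [0.0, 1.0]]
print(I)  # → [[1.0, 0.0], [0.0, 1.0]]
```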
4.3.6 To find B PC we need to find the B-coordinates of the vectors in C. Thus, we need to solve the systems
b1 (1) + b2 (−1 + x) + b3 (1 − 2x + x2 ) = 1 + x + x2
c1 (1) + c2 (−1 + x) + c3 (1 − 2x + x2 ) = 1 + 3x − x2
d1 (1) + d2 (−1 + x) + d3 (1 − 2x + x2 ) = 1 − x − x2
Similarly, to find C PB we find the C-coordinates of the vectors in B by row reducing the corresponding
triply augmented matrix to get
[1  1  1 | 1 −1  1]   [1 0 0 | 1/2 −1/2    1]
[1  3 −1 | 0  1 −2] ∼ [0 1 0 |   0  1/4 −3/4]
[1 −1 −1 | 0  0  1]   [0 0 1 | 1/2 −3/4  3/4]
Thus, C PB = [1/2 −1/2 1; 0 1/4 −3/4; 1/2 −3/4 3/4].
4.3.7 (a) To find S PB we need to find the S-coordinates of the vectors in B. We have
[1 − x + x2 ]S = [1; −1; 1] , [1 − 2x + 3x2 ]S = [1; −2; 3] , [1 − 2x + 4x2 ]S = [1; −2; 4]
Hence, S PB = [1 1 1; −1 −2 −2; 1 3 4].
To find B PS we need to find the B-coordinates of the vectors in S. Thus, we need to solve the
systems
b1 (1 − x + x2 ) + b2 (1 − 2x + 3x2 ) + b3 (1 − 2x + 4x2 ) = 1
c1 (1 − x + x2 ) + c2 (1 − 2x + 3x2 ) + c3 (1 − 2x + 4x2 ) = x
d1 (1 − x + x2 ) + d2 (1 − 2x + 3x2 ) + d3 (1 − 2x + 4x2 ) = x2
(b) We have
[p(x)]S = S PB [p(x)]B = [1 1 1; −1 −2 −2; 1 3 4][3; 1; −2] = [2; −1; −2]
(d) We have
[r(x)]S = S PB [r(x)]B = [1 1 1; −1 −2 −2; 1 3 4][2; −3; 2] = [1; 0; 1]
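The coordinate conversions in (b) and (d) are just matrix-vector products, so they are easy to re-check; a sketch (not part of the original solution):

```python
# S_P_B from part (a); apply it to the B-coordinate vectors from parts (b) and (d).
S_P_B = [[1, 1, 1], [-1, -2, -2], [1, 3, 4]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

p_S = matvec(S_P_B, [3, 1, -2])   # [p(x)]_S
r_S = matvec(S_P_B, [2, -3, 2])   # [r(x)]_S
assert p_S == [2, -1, -2]
assert r_S == [1, 0, 1]
print(p_S, r_S)  # → [2, -1, -2] [1, 0, 1]
```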
Hence,
3 −1 1
S PB = 0 3 −3
3 0 1
To find the change of coordinates matrix from S to B we need to find the coordinates of the vectors in S
with respect to B. We need to find a1 , a2 , a3 , b1 , b2 , b3 , c1 , c2 , c3 such that
" # " # " # " #
1 0 3 0 −1 3 1 −3
= a1 + a2 + a3
0 0 0 3 0 0 0 1
" # " # " # " #
0 1 3 0 −1 3 1 −3
= b1 + b2 + b3
0 0 0 3 0 0 0 1
" # " # " # " #
0 0 3 0 −1 3 1 −3
= c1 + c2 + c3
0 1 0 3 0 0 0 1
Row reducing the augmented matrix of the corresponding systems of equations gives
[3 −1  1 | 1 0 0]   [1 0 0 | 1/3  1/9 0]
[0  3 −3 | 0 1 0] ∼ [0 1 0 | −1     0 1]
[3  0  1 | 0 0 1]   [0 0 1 | −1  −1/3 1]
Hence,
B PS = [1/3 1/9 0; −1 0 1; −1 −1/3 1]
Thus, we take
−4/3 8/3 −1
B = B PS = 1/2 −1 1/2
2/3 −1/3 0
4.3.10 If B = {~v1 , . . . , ~vn } is a basis for V, show that {[~v1 ]B , . . . , [~vn ]B } is a basis for Rn . Consider
~0 = c1 [~v1 ]B + · · · + cn [~vn ]B
Thus, since B is linearly independent, this implies that c1 = · · · = cn = 0. Consequently, {[~v1 ]B , . . . , [~vn ]B }
is a linearly independent set of n vectors in Rn . Hence, by Theorem 4.2.3, it is a basis for Rn .
4.3.11 Observe that [~vi ]B = ~ei . Thus, we have that [~vi ]C = ~ei and so
~vi = 0~w1 + · · · + 0~wi−1 + 1~wi + 0~wi+1 + · · · + 0~wn = ~wi
Thus, ~vi = ~wi for all i.
Chapter 5 Solutions
98 Section 5.1 Solutions
Thus, [1 2 1; 0 −2 4; 3 4 4]−1 = [−4 −2/3 5/3; 2 1/6 −2/3; 1 1/3 −1/3].
Thus, [3 1 −2; 1 2 1; 2 1 1]−1 = [1/10 −3/10 1/2; 1/10 7/10 −1/2; −3/10 −1/10 1/2].
Thus, [1 0 1; 0 2 −3; 0 0 3]−1 = [1 0 −1/3; 0 1/2 1/2; 0 0 1/3].
[−1  0 −1 2 | 1 0 0 0]   [−1  0 −1   2 |  1  0 0 0]
[ 1 −1 −2 4 | 0 1 0 0] ∼ [ 0 −1 −3   6 |  1  1 0 0]
[−3 −1  5 2 | 0 0 1 0]   [ 0  0 11 −10 | −4 −1 1 0]
[−1 −2  4 4 | 0 0 0 1]   [ 0  0  0   0 |  1 −1 −1 1]
This gives us two systems A~b1 = ~e1 and A~b2 = ~e2 . We solve both by row reducing a doubly augmented matrix. We get
[3 1 0 | 1 0] ∼ [1 0 0 |  2/5 −1/5]
[1 2 0 | 0 1]   [0 1 0 | −1/5  3/5]
Thus, the general solution of the first system is [2/5; −1/5; 0] + t[0; 0; 1] and the general solution of the second system is [−1/5; 3/5; 0] + s[0; 0; 1]. Therefore, every right inverse of A has the form
B = [2/5 −1/5; −1/5 3/5; t s]
h i
5.1.5 We need to find all 3 × 2 matrices B = ~b1 ~b2 such that
h i h i
~e1 ~e2 = AB = A~b1 A~b2
This gives us two systems A~b1 = ~e1 and A~b2 = ~e2 . We solve both by row reducing a doubly augmented matrix. We get
[1 −2 1 | 1 0] ∼ [1 0 1 | −1 2]
[1 −1 1 | 0 1]   [0 1 0 | −1 1]
Thus, the general solution of the first system is [−1; −1; 0] + t[−1; 0; 1] and the general solution of the second system is [2; 1; 0] + s[−1; 0; 1]. Therefore, every right inverse of A has the form
B = [−1 − t  2 − s; −1  1; t  s]
" #T
1 −2 1
5.1.6 Observe that B = . Also, observe that if AB = I2 , then (AB)T = I2 and so BT AT = I2 . Thus,
1 −1 1
" #
−1 − t −1 t
from our work in Problem 5, we have that all left inverses of B are .
2−s 1 s
5.1.7 To find all left inverses of B, we use the same trick we did in Problem 6. We will find all right inverses of BT = [1 0 3; 2 −1 3]. We get
[1  0 3 | 1 0] ∼ [1 0 3 | 1  0]
[2 −1 3 | 0 1]   [0 1 3 | 2 −1]
Thus, the general solution of the first system is [1; 2; 0] + t[−3; −3; 1] and the general solution of the second system is [0; −1; 0] + s[−3; −3; 1]. Therefore, every left inverse of B has the form
[1 − 3t  2 − 3t  t]
[−3s  −1 − 3s  s]
5.1.8 (a) Using our work in Example 4, we get that
A−1 = (1/(2(1) − 1(3))) [1 −1; −3 2] = [−1 1; 3 −2]
B−1 = (1/(1(5) − 2(3))) [5 −2; −3 1] = [−5 2; 3 −1]
#"
5 9
(b) We have AB = and hence
6 11
" # " #
−1 1 11 −9 11 −9
(AB) = = = B−1 A−1
5(11) − 9(6) −6 5 −6 5
# "
4 2
(c) We have 2A = , so
6 2
" # " #
−1 1 2 −2 −1/2 1/2 1
(2A) = = = A−1
4(2) − 2(6) −6 4 3/2 −1 2
" # " # " #
2 3 1 1 −3 −1 3
(d) We have AT = , so (AT )−1 = 2(1)−3(1) = and
1 1 −1 2 1 −2
" #" # " #
T T −1 2 3 −1 3 1 0
A (A ) = =
1 1 1 −2 0 1
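Parts (a)–(d) all use the 2 × 2 adjugate formula A−1 = (1/(ad − bc))[d −b; −c a]. The sketch below re-derives the inverses with that formula; the matrices A = [2 1; 3 1] and B = [1 2; 3 5] are assumptions inferred from the inverses printed above (they are not restated in this excerpt):

```python
from fractions import Fraction as F

def inv2(M):
    # 2x2 inverse via the adjugate formula: A^{-1} = 1/(ad - bc) [d -b; -c a].
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[F(d, det), F(-b, det)], [F(-c, det), F(a, det)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[2, 1], [3, 1]]   # assumed matrices consistent with the printed inverses
B = [[1, 2], [3, 5]]
assert inv2(A) == [[-1, 1], [3, -2]]
assert inv2(B) == [[-5, 2], [3, -1]]
assert inv2(matmul(A, B)) == matmul(inv2(B), inv2(A))  # (AB)^{-1} = B^{-1} A^{-1}
result = [[int(x) for x in row] for row in inv2(matmul(A, B))]
print(result)  # → [[11, -9], [-6, 5]]
```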
5.1.9 We have
AT (A−1 )T = [A−1 A]T = I T = I
Hence, (AT )−1 = (A−1 )T by Theorem 5.1.4.
5.1.10 Consider
c1 A~v1 + · · · + ck A~vk = ~0
Then, we have
A(c1~v1 + · · · + ck~vk ) = ~0
Since A is invertible, the only solution to this equation is
c1~v1 + · · · + ck~vk = A−1~0 = ~0
Hence, we get c1 = · · · = ck = 0, because {~v1 , . . . , ~vk } is linearly independent.
AB~x = A~0 = ~0
Since AB is invertible, this has the unique solution ~x = (AB)−1~0 = ~0. Thus, B is invertible.
Since AB and B−1 are invertible, we have that A = (AB)B−1 is invertible by Theorem 5.1.5(3).
" #
1 0 0
(b) Let C = and D = C T . Then C and D are both not invertible since they are not square,
0 1 0
" #
1 0
but CD = is invertible.
0 1
5.1.12 Assume that A has a right inverse B = [~b1 · · · ~bm ]. Then, we have
[~e1 · · · ~em ] = Im = AB = A[~b1 · · · ~bm ] = [A~b1 · · · A~bm ]
Hence, A~x = ~y is consistent for all ~y ∈ Rm . Therefore, rank A = m by Theorem 2.2.4(3). But then we must have n ≥ m which is a contradiction.
5.1.13 (a) The statement is false. The 3 × 2 zero matrix cannot have a left inverse.
(b) The statement is true. By the Invertible Matrix Theorem, we get that if Null(AT ) = {~0}, then AT is
invertible. Hence, A is invertible by Theorem 5.1.5.
" # " # " #
1 0 −1 0 0 0
(c) The statement is false. A = and B = are both invertible, but A + B = is
0 1 0 −1 0 0
not invertible.
" #
1 0
(d) The statement is false. A = satisfies AA = A, but A is not invertible.
0 0
(e) Let ~b ∈ Rn . Consider the system of equations A~x = ~b. Then, we have
AA~x = A~b
~x = (AA)−1 A~b
Thus, A~x = ~b is consistent for all ~b ∈ Rn and hence A is invertible by the Invertible Matrix
Theorem.
" # " #
0 0 1 0
(f) The statement is false. If A = and B = , then AB = O2,2 and B is invertible.
0 0 0 1
(" # " # " # " #)
1 0 1 0 0 1 0 1
(g) The statement is true. One such basis is , , , .
0 1 0 2 1 0 2 0
(h) If A has a column of zeroes, then AT has a row of zeroes and hence rank(AT ) < n. Thus, AT is not
invertible and so A is also not invertible by the Invertible Matrix Theorem.
(i) If A~x = ~0 has a unique solution, then rank(A) = n by Theorem 2.2.5 and so A is invertible and
hence the columns of A form a basis for Rn by the Invertible Matrix Theorem. Thus, Col(A) = Rn ,
so the statement is true.
(j) If rank(AT ) = n, then AT is invertible by the Invertible Matrix theorem, and hence A is also
invertible by the Invertible Matrix Theorem. Thus, rank(A) = n by Theorem 5.1.6.
(k) If A is invertible, then AT is also invertible. Thus, the columns of AT form a basis for Rn and hence
must be linearly independent.
(l) We have that I = −A2 + 2A = A(−A + 2I). Thus, A is invertible by Theorem 5.1.4. In particular
A−1 = −A + 2I.
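The identity in (l) is easy to illustrate concretely. The matrix below is a hypothetical example chosen for this sketch (it is not from the text): it satisfies A² = 2A − I, so −A + 2I must be its inverse.

```python
# Hypothetical example: A = [[1, 1], [0, 1]] satisfies A^2 = 2A - I.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[1, 1], [0, 1]]
I = [[1, 0], [0, 1]]
A2 = matmul(A, A)
twoA_minus_I = [[2 * A[i][j] - I[i][j] for j in range(2)] for i in range(2)]
assert A2 == twoA_minus_I            # A satisfies -A^2 + 2A = I

A_inv = [[2 * I[i][j] - A[i][j] for j in range(2)] for i in range(2)]  # -A + 2I
assert matmul(A, A_inv) == I
print(A_inv)  # → [[1, -1], [0, 1]]
```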
" # " # " # " # " #
1 1 2 1 2 −1 2 −1 5 3
5.1.14 (a) Let A = and B = . Then, A−1 = and B−1 = . Then, AB =
1 2 3 2 −1 1 −3 2 8 5
" # " #
5 −3 5 −4
and (AB)−1 = , but A−1 B−1 = .
−8 5 −5 3
(b) If (AB)−1 = A−1 B−1 = (BA)−1 , then we must have AB = BA.
104 Section 5.2 Solutions
Hence,
E1 = [1 0 0; 1 1 0; 0 0 1], E2 = [1 0 0; 0 1 0; −2 0 1], E3 = [1 0 0; 0 1/3 0; 0 0 1], E4 = [1 −2 0; 0 1 0; 0 0 1], E5 = [1 0 0; 0 1 0; 0 3 1]
and E5 E4 E3 E2 E1 A = R. Then
Hence,
0 0 1 1/2 0 0 1 0 0
E1 = 0 1 0 E2 = 0 1 0 E3 = 0 −1 0
1 0 0 0 0 1 0 0 1
1 −1 0 1 0 0 1 0 0
E4 = 0 1 0 E5 = 0 1 0 E6 = 0 1 −2
0 0 1 0 −1 1 0 0 1
and E6 E5 E4 E3 E2 E1 A = R. Then
A = E1−1 E2−1 E3−1 E4−1 E5−1 E6−1 R
0 0 1 2 0 0 1 0 0 1 1 0 1 0 0 1 0 0 1 0 0
= 0 1 0 0 1 0 0 −1 0 0 1 0 0 1 0 0 1 2 0 1 0
1 0 0 0 0 1 0 0 1 0 0 1 0 1 1 0 0 1 0 0 1
Hence,
0 1 0 1 0 0 1 0 0 1 0 0 1 0 −2
E1 = 1 0 0 E2 = −2 1 0 E3 = 0 1 0 E4 = 0 −1/3 0 E5 = 0 1 0
0 0 1 0 0 1 −1 0 1 0 0 1 0 0 1
and E5 E4 E3 E2 E1 A = R. Then
A = E1−1 E2−1 E3−1 E4−1 E5−1 R
0 1 0 1 0 0 1 0 0 1 0 0 1 0 2 1 2 0
= 1 0 0 2 1 0 0 1 0 0 −3 0 0 1 0 0 0 1
0 0 1 0 0 1 1 0 1 0 0 1 0 0 1 0 0 0
and E8 E7 E6 E5 E4 E3 E2 E1 A = R. Then
Hence,
1 0 0 1 0 0 1 −2 0 1 0 7 1 0 0
E1 = 0 1 0 E2 = 0 0 1 E3 = 0 1 0 E4 = 0 1 0 E5 = 0 1 −4
1 0 1 0 1 0 0 0 1 0 0 1 0 0 1
and E5 E4 E3 E2 E1 A = R. Then
Hence,
1 0 0 1 0 0 1 −1 0 1 0 0
E1 = −1 1 0 E2 = 0 1 0 E3 = 0 1 0 E4 = 0 1 0
0 0 1 −1 0 1 0 0 1 0 0 −1/2
1 0 −4 1 0 0
E5 = 0 1 0 E6 = 0 1 1
0 0 1 0 0 1
and E6 E5 E4 E3 E2 E1 A = I. Hence,
A−1 = E6 E5 E4 E3 E2 E1
and
Hence,
E1 = [0 1; 1 0], E2 = [1 0; 0 1/3], E3 = [1 2; 0 1]
and E3 E2 E1 A = I. Hence,
A−1 = E3 E2 E1
and
A = E1−1 E2−1 E3−1 = [0 1; 1 0][1 0; 0 3][1 −2; 0 1]
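The factorization into elementary matrices can be checked by multiplying out; the sketch below rebuilds A from the inverse factors and confirms that E3E2E1 really is its inverse (factors taken from the solution above):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

E1_inv, E2_inv, E3_inv = [[0, 1], [1, 0]], [[1, 0], [0, 3]], [[1, -2], [0, 1]]
A = matmul(matmul(E1_inv, E2_inv), E3_inv)   # A rebuilt from its elementary factors

E1, E2, E3 = [[0, 1], [1, 0]], [[1, 0], [0, 1/3]], [[1, 2], [0, 1]]
A_inv = matmul(matmul(E3, E2), E1)
prod = matmul(A, A_inv)
assert all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(2) for j in range(2))
print(A)
```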
Hence,
0 1 0 1 0 0 1 0 0 1 −4 0
E1 = 1 0 0 E2 = 0 1/2 0 E3 = 0 1 0 E4 = 0 1 0
0 0 1 0 0 1 1 0 1 0 0 1
1 0 0 1 0 0 1 0 8 1 0 0
E5 = 0 1 0 E6 = 0 1 0 E7 = 0 1 0 E8 = 0 1 −3
0 −6 1 0 0 −1/6 0 0 1 0 0 1
and E8 E7 E6 E5 E4 E3 E2 E1 A = I. Hence,
A−1 = E8 E7 E6 E5 E4 E3 E2 E1
and
2 0 21 R1
" # " #
4 2
R1 − R2 ∼ ∼
2 2 2 2 12 R2
" # " #
1 0 1 0
∼
1 1 R2 − R1 0 1
Hence,
1 0 0 1 0 0 1 0 0
E1 = 2 E2 = 0 1 0 E3 = 0 0 1
1 0
0 0 1 4 0 1 0 1 0
1 0 0 1 0 1
E4 = 0 1 0 E5 = 0 1 0
0 0 −1/4 0 0 1
and E5 E4 E3 E2 E1 A = I. Hence,
A−1 = E5 E4 E3 E2 E1
and
Hence,
1 0 0 0
1 0 0 0
1 2 0 0
1 1 0 0 0 1 0 0 0 1 0 0
E1 = E2 = E3 =
0 0 1 0 0 0 1 0 0 0 1 0
0 0 0 1 2 0 0 1 0 0 0 1
0 −2 0 0 −1
1 0 0 0
1 1 0
1 0 0 0
0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0
E4 = E5 = E6 = E7 =
0 −1 1 0 0 0 1 0 0 0 1 0 0 0 1/2 0
0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1
and E7 E6 E5 E4 E3 E2 E1 A = I. Hence,
A−1 = E7 E6 E5 E4 E3 E2 E1
and
5.2.5 Since multiplying on the left is the same as applying the corresponding elementary row operations we get
A = [1 2; 0 1][1/2 0; 0 1][0 1; 1 0][1 0; −3 1][1 1; 0 3]
= [1 2; 0 1][1/2 0; 0 1][0 1; 1 0][1 1; −3 0]
= [1 2; 0 1][1/2 0; 0 1][−3 0; 1 1]
= [1 2; 0 1][−3/2 0; 1 1]
= [1/2 2; 1 1]
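The chain of products above can be verified in one pass with exact rational arithmetic; a sketch (factors copied from the first line of the computation):

```python
from functools import reduce
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

factors = [
    [[1, 2], [0, 1]],
    [[F(1, 2), 0], [0, 1]],
    [[0, 1], [1, 0]],
    [[1, 0], [-3, 1]],
    [[1, 1], [0, 3]],
]
A = reduce(matmul, factors)   # left-to-right product of all five factors
assert A == [[F(1, 2), 2], [1, 1]]
print(A)
```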
112 Section 5.3 Solutions
(c) We have
[1 2 3]           ∼ [1 2  3] R1 − R2 ∼ [1 0  4]          ∼ [1 0    4]
[2 6 5] R2 − 2R1    [0 2 −1]           [0 2 −1] (1/2)R2    [0 1 −1/2]
5.2.7 If A is invertible, then by Corollary 5.2.7, there exists a sequence of elementary matrices E1 , . . . , Ek such
that A = E1−1 · · · Ek−1 . If B is row equivalent to A, then there exists a sequence of elementary matrices
F1 , . . . , Fk such that Fk · · · F1 A = B. Thus,
B = Fk · · · F1 E1−1 · · · Ek−1
det A = |1 a a2 a3 ; 0 b−a b2−a2 b3−a3 ; 0 c−a c2−a2 c3−a3 ; 0 d−a d2−a2 d3−a3 |
= |b−a b2−a2 b3−a3 ; c−a c2−a2 c3−a3 ; d−a d2−a2 d3−a3 |
= (b − a)(c − a)(d − a) |1 b+a b2+ba+a2 ; 1 c+a c2+ca+a2 ; 1 d+a d2+da+a2 |
= (b − a)(c − a)(d − a) |1 b+a b2+ba+a2 ; 0 c−b c2+ca−b2−ba ; 0 d−b d2+da−b2−ba |
= (b − a)(c − a)(d − a) |c−b c2+ca−b2−ba ; d−b d2+da−b2−ba |
= (b − a)(c − a)(d − a)(c − b)(d − b) |1 a+b+c ; 1 a+b+d |
= (b − a)(c − a)(d − a)(c − b)(d − b)(d − c)
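The Vandermonde factorization above can be confirmed numerically for sample values of a, b, c, d; the sketch below evaluates the 4 × 4 determinant directly (Leibniz formula) and compares it against the product of differences:

```python
from itertools import permutations

def det(M):
    # Leibniz formula; fine for a 4x4 sanity check.
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def vandermonde(a, b, c, d):
    return [[1, x, x**2, x**3] for x in (a, b, c, d)]

a, b, c, d = 1, 2, 4, 7
expected = (b-a)*(c-a)*(d-a)*(c-b)*(d-b)*(d-c)
assert det(vandermonde(a, b, c, d)) == expected
print(expected)  # → 540
```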
det A−1 = 1/ det A
5.3.4 By Theorem 5.3.9 and Corollary 5.3.10 we get
det B = det(P−1 AP) = det P−1 det A det P = (1/ det P) det A det P = det A
5.3.5 (a) We have AAT = AA−1 = I. Hence, by Theorem 5.3.9 and Corollary 5.3.10 we get
Inductive Hypothesis: Assume that the result holds for any (n − 1) × (n − 1) matrix.
Inductive Step: Suppose that B is an n × n matrix obtained from A by swapping two rows. If the i-th
row of A was not swapped, then the cofactors of the i-th row of B are (n − 1) × (n − 1) matrices which
can be obtained from the cofactors of the i-th row of A by swapping the same two rows. Hence, by the
inductive hypothesis, the cofactors Ci∗j of B and Ci j of A satisfy Ci∗j = −Ci j . Hence,
det B = ai1 Ci1∗ + · · · + ain Cin∗ = ai1 (−Ci1 ) + · · · + ain (−Cin ) = −(ai1 Ci1 + · · · + ain Cin ) = − det A
6 −6 16
(d) We have adj A = 0 3 −7. Hence,
0 0 2
6 0 0
A(adj A) = 0 6 0 = (det A)I
0 0 6
−1 4 −2
(e) We have adj A = 2 −9 5 . Hence,
3 −11 6
1 0 0
A(adj A) = 0 1 0 = (det A)I
0 0 1
−1 −8 −4
(f) We have adj A = −2 −12 −4. Hence,
−2 −8 −4
4 0 0
A(adj A) = 0 4 0 = (det A)I
0 0 4
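Each of the checks in (d)–(f) verifies the identity A(adj A) = (det A)I. Since the matrices A themselves are not restated in this excerpt, the sketch below demonstrates the identity on a sample matrix (an assumption, chosen only for illustration), building the adjugate as the transpose of the cofactor matrix:

```python
def minor(M, i, j):
    return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def det3(M):
    a, b, c = M[0]
    return (a * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
            - b * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
            + c * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def adj(M):
    # adjugate = transpose of the cofactor matrix
    C = [[(-1)**(i+j) * det2(minor(M, i, j)) for j in range(3)] for i in range(3)]
    return [[C[j][i] for j in range(3)] for i in range(3)]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

A = [[1, 2, 0], [0, 1, 3], [2, 0, 1]]   # sample matrix, not from the text
d = det3(A)
assert matmul(A, adj(A)) == [[d, 0, 0], [0, d, 0], [0, 0, d]]
print(d)  # → 13
```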
" #
d −b
5.4.2 (a) We have det A = ad − bc and adj A = , thus
−c a
" #
−1 1 d −b
A =
ad − bc −c a
−1 t + 3 −3
(b) We have det A = −17 − 2t and adj A = 5 2 − 3t −2t − 2. Hence,
−2 −11 −6
−1 t + 3 −3
1
A−1 = 5 2 − 3t −2t − 2
−17 − 2t
−2 −11 −6
4t + 1 −1 −3t − 1
(c) We have det A = 14t + 1 and adj A = −10 14 11 . Hence,
3 + 2t −4 2t − 3
4t + 1 −1 −3t − 1
−1 1
A = −10 14 11
14t + 1
3 + 2t −4 2t − 3
cd 0 −cb
(d) We have det A = acd − b2 c and adj A = 0 ad − b2 0 . Hence,
−cb 0 ac
cd 0 −cb
1
A−1 = 0 ad − b2 0
acd − b2 c
−cb 0 ac
5.4.3 Solve the following systems of linear equations using Cramer’s Rule.
" #
3 −1
(a) The coefficient matrix is A = , so det A = 21 + 4 = 25. Hence,
4 7
1 2 −1 19
x1 = =
25 5 7 25
1 3 2 7
x2 = =
25 4 5 25
" #
19/25
Thus, the solution is ~x = .
7/25
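Cramer's Rule answers are easy to validate by substituting back into the system. The right-hand side ~b = (2, 5) below is an assumption read off from the numerator determinants above; the sketch recomputes both unknowns exactly:

```python
from fractions import Fraction as F

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

# System from part (a): 3x1 - x2 = 2, 4x1 + 7x2 = 5 (RHS inferred from the numerators above).
A = [[3, -1], [4, 7]]
b = [2, 5]
dA = det2(A)
x1 = F(det2([[b[0], A[0][1]], [b[1], A[1][1]]]), dA)   # replace column 1 by b
x2 = F(det2([[A[0][0], b[0]], [A[1][0], b[1]]]), dA)   # replace column 2 by b
assert (x1, x2) == (F(19, 25), F(7, 25))
# Substitute back into the original system:
assert A[0][0]*x1 + A[0][1]*x2 == b[0] and A[1][0]*x1 + A[1][1]*x2 == b[1]
print(x1, x2)  # → 19/25 7/25
```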
" #
2 1
(b) The coefficient matrix is A = , so det A = 14 − 3 = 11. Hence,
3 7
1 1 1 9
x1 = =
11 −2 7 11
1 2 1 7
x2 = =−
11 3
−2 11
" #
9/11
Thus, the solution is ~x = .
−7/11
" #
2 1
(c) The coefficient matrix is A = , so det A = 14 − 3 = 11. Hence,
3 7
1 3 1 16
x1 = =
11 5 7 11
1 2 3 1
x2 = =
11 3 5 11
" #
16/11
Thus, the solution is ~x = .
1/11
(d) The coefficient matrix is A = [2 3 1; 1 1 −1; −2 0 2], so det A = 6. Hence,
x1 = (1/6)|1 3 1; −1 1 −1; 1 0 2| = 4/6
x2 = (1/6)|2 1 1; 1 −1 −1; −2 1 2| = −3/6
x3 = (1/6)|2 3 1; 1 1 −1; −2 0 1| = 7/6
Hence, the solution is ~x = [2/3; −1/2; 7/6].
120 Section 5.5 Solutions
(e) The coefficient matrix is A = [5 1 −1; 9 1 −1; 1 −1 5], so det A = −16. Hence,
x1 = (1/−16)|4 1 −1; 1 1 −1; 2 −1 5| = 12/−16
x2 = (1/−16)|5 4 −1; 9 1 −1; 1 2 5| = −166/−16
x3 = (1/−16)|5 1 4; 9 1 1; 1 −1 2| = −42/−16
Thus, the solution is ~x = [−3/4; 83/8; 21/8].
5.5.4 Since adding a multiple of one row to another does not change the determinant, we get
V = | det[~v1 · · · ~vn ]| = | det[~v1 · · · ~vn + t~v1 ]|
Hence,
[L]B = [[L(1, −2)]B [L(2, 1)]B ] = [1 −4; −1 6]
(b) We have
L(1, 1) = [6; 2] = 10 [1; 1] − 4 [1; 2]
L(1, 2) = [10; 1] = 19 [1; 1] − 9 [1; 2]
Hence,
[L]B = [[L(1, 1)]B [L(1, 2)]B ] = [10 19; −4 −9]
(c) We have
L(−3, 5) = [6; −10] = −2 [−3; 5] + 0 [1; −2]
L(1, −2) = [−1; 2] = 0 [−3; 5] − 1 [1; −2]
Hence,
[L]B = [[L(−3, 5)]B [L(1, −2)]B ] = [−2 0; 0 −1]
Section 6.1 Solutions 123
(d) We have
L(1, 1, 1) = [4; 0; 5] = (−1) [1; 1; 1] + 5 [1; −1; 0] − 6 [0; −1; −1]
L(1, −1, 0) = [1; −1; −1] = 1 [1; 1; 1] + 0 [1; −1; 0] + 2 [0; −1; −1]
L(0, −1, −1) = [−3; 0; −3] = 0 [1; 1; 1] − 3 [1; −1; 0] + 3 [0; −1; −1]
Hence,
[L]B = [[L(1, 1, 1)]B [L(1, −1, 0)]B [L(0, −1, −1)]B ] = [−1 1 0; 5 0 −3; −6 2 3]
(e) We have
16 2 1 0
L(2, −1, −1) = −2 = (−1) −1 + 5 1 − 6 0
10 −1 1 −1
1 2 1 0
L(1, 1, 1) = −1 = (−1) −1 + 5 1 − 6 0
−1 −1 1 −1
−3 2 1 0
0 −1
L(0, 0, −1) = = (−1) + 5 − 6 0 1
−3 −1 1 −1
Hence,
h i −1 1 0
[L]B = [L(2, −1, −1)]B [L(1, 1, 1)]B [L(0, 0, −1)]B = 5 0 −3
−6 2 3
(f) We have
L(0, 1, 0) = [0; 2; 0] = 2 [0; 1; 0] + 0 [2; 0; 1] + 0 [1; 0; 1]
L(2, 0, 1) = [4; 0; 2] = 0 [0; 1; 0] + 2 [2; 0; 1] + 0 [1; 0; 1]
L(1, 0, 1) = [−1; 0; −1] = 0 [0; 1; 0] + 0 [2; 0; 1] − 1 [1; 0; 1]
Hence,
[L]B = [[L(0, 1, 0)]B [L(2, 0, 1)]B [L(1, 0, 1)]B ] = [2 0 0; 0 2 0; 0 0 −1]
(g) We have
L(0, 1, 0) = [0; 2; 0] = 2 [0; 1; 0] + 0 [−1; 0; 1] + 0 [1; 1; 1]
L(−1, 0, 1) = [−1; 0; 1] = 0 [0; 1; 0] + 1 [−1; 0; 1] + 0 [1; 1; 1]
L(1, 1, 1) = [3; 4; 1] = 2 [0; 1; 0] − 1 [−1; 0; 1] + 2 [1; 1; 1]
Hence,
[L]B = [[L(0, 1, 0)]B [L(−1, 0, 1)]B [L(1, 1, 1)]B ] = [2 0 2; 0 1 −1; 0 0 2]
Thus,
L(~x) = 4 [2; 1; 0] + 19 [−1; 0; 1] + 15 [1; 1; 0] = [4; 19; 19]
(b) We have
0
[L(~y)]B = [L]B [~y]B = −2
3
Thus,
2 −1 1 5
L(~y) = 0 1 − 2 0 + 3 1 = 3
0 1 0 1
(c) From our work on similar matrices, we have that [L]B = P−1 [L]P where P is the change of coordinates matrix from B-coordinates to standard coordinates. Thus,
[L] = P[L]B P−1 = [2 −1 1; 1 0 1; 0 1 0][0 1 0; 0 4 2; 3 3 0][2 −1 1; 1 0 1; 0 1 0]−1 = [5 −7 6; 3 −3 7; −2 4 2]
Thus,
L(~x) = 6 [1; 1; 0] + 13 [0; 1; 1] + 2 [1; 0; 1] = [8; 19; 15]
(b) We have
[L(~y)]B = [L]B [~y]B = [3; 6; 2]
Thus,
L(~y) = 3 [1; 1; 0] + 6 [0; 1; 1] + 2 [1; 0; 1] = [5; 9; 8]
(c) From our work on similar matrices, we have that [L]B = P−1 [L]P where P is the change of coordinates matrix from B-coordinates to standard coordinates. Thus,
[L] = P[L]B P−1 = [1 0 1; 1 1 0; 0 1 1][0 0 3; 5 0 −1; 0 2 2][1 0 1; 1 1 0; 0 1 1]−1 = [3/2 −3/2 7/2; 7/2 3/2 −3/2; 2 3 −1]
Therefore, [proj~a ]B = [1 0; 0 0].
(b) ("
For #our
" geometrically
#) natural basis, we include ~b and a vector orthogonal to ~b. We pick B =
1 −1
, . Then we have
1 1
" # " #
~ ~ 1 −1
proj~b (b) = b = 1 +0
1 1
" #! " # " #
−1 1 −1
proj~b = ~0 = 0 +0
1 1 1
Hence, # "
1 0
[perp~b ]B =
0 0
(c) ("
For #our
" geometrically
#) natural basis, we include ~b and a vector orthogonal to ~b. We pick B =
1 −1
, . Then we have
1 1
" # " #
~ ~ 1 −1
perp~b (b) = 0 = 0 +0
1 1
" #! " # " # " #
−1 −1 1 −1
perp~b = =0 +1
1 1 1 1
Hence, " #
0 0
[perp~b ]B =
0 1
(d) For our geometrically natural basis, we include the normal vector ~n for the plane of reflection and a
basis for the plane. To pick a basis for the plane, we need to pick a set of two linearly independent
vectors which are orthogonal to ~n. Thus, we pick
1 0 1
B= 0 , 1 , 0
−1 0 1
Then we get
refl~n (~n) = −~n = −1 [1; 0; −1] + 0 [0; 1; 0] + 0 [1; 0; 1]
refl~n ([0; 1; 0]) = [0; 1; 0] = 0 [1; 0; −1] + 1 [0; 1; 0] + 0 [1; 0; 1]
refl~n ([1; 0; 1]) = [1; 0; 1] = 0 [1; 0; −1] + 0 [0; 1; 0] + 1 [1; 0; 1]
Hence,
−1 0 0
[refl~n ]B = 0 1 0
0 0 1
Thus,
(PQ)−1 A(PQ) = C
So, A is similar to C.
6.1.6 If AT A is similar to I, then there exists an invertible matrix P such that
P−1 AT AP = I
rank(Ek · · · E1 A) = rank(BEk · · · E1 )
rank(A) = rank(BEk · · · E1 ) by Corollary 5.2.5
rank(AT ) = rank([BEk · · · E1 ]T )
rank(AT ) = rank(E1T · · · EkT BT )
rank(AT ) = rank(BT ) by Corollary 5.2.5
rank(A) = rank(B)
as required.
128 Section 6.2 Solutions
# " " #
1 0 1 1
6.1.9 (a) The statement is false. Let A = and B = , then rank A = rank B, but tr A , tr B, so A
0 0 2 2
and B cannot be similar by Theorem 6.1.1.
(b) The statement is true. If A and B are similar, then there exists an invertible matrix P such that
P−1 AP = B. Thus, B is a product of invertible matrices, so B is invertible by Theorem 5.1.5.
" # " #
2 0 1 0
(c) The statement is false. Let A = . Its RREF is R = . We have det A , det R, so A and
0 0 0 0
R cannot be similar by Theorem 6.1.1.
(d) The statement is true. If A is similar to the diagonal matrix D = diag(d11 , d22 , . . . , dnn ), then
there exists an invertible matrix P such that P−1 AP = D and so A = PDP−1 . By Theorem 5.3.2,
Theorem 5.3.9, and Corollary 5.3.10 we get
1
det A = det(PDP−1 ) = det P det D det P−1 = det D det P = det D = d11 d22 · · · dnn
det P
(c) We have
A~v3 = [1 3 3; 6 7 12; −3 −3 −5][2; −2; 1] = [−1; 10; −5]
Hence, the eigenvalues are λ1 = 1 with aλ1 = 2, and λ2 = −2 with aλ2 = 1. Since λ2 has algebraic
multiplicity 1 it also has geometric multiplicity 1 by Theorem 6.2.4. For λ1 we get
A − λ1 I = [−4 −2 −2; 2 1 1; 0 0 0] ∼ [1 1/2 1/2; 0 0 0; 0 0 0]
Thus, a basis for Eλ1 is {[−1; 2; 0], [−1; 0; 2]}. Thus, λ1 has geometric multiplicity 2.
(e) We have
C(λ) = det(A − λI) = |−3−λ 6 −2; −1 2−λ −1; 1 −3 −λ|
= |−3−λ 6 −2; 0 −1−λ −1−λ; 1 −3 −λ| = |−3−λ 8 −2; 0 0 −1−λ; 1 −3+λ −λ|
= −(λ + 1)(λ2 − 1) = −(λ + 1)2 (λ − 1)
Hence, the eigenvalues are λ1 = −1 with algebraic multiplicity 2 and λ2 = 1 with algebraic mul-
tiplicity 1. Since λ2 has algebraic multiplicity 1 it also has geometric multiplicity 1 by Theorem
6.2.4. For λ1 we get
A − λ1 I = [−2 6 −2; −1 3 −1; 1 −3 1] ∼ [1 −3 1; 0 0 0; 0 0 0]
Hence, the eigenvalues are λ1 = 2 with aλ1 = 2, and λ2 = 8 with aλ2 = 1. Since λ2 has algebraic
multiplicity 1 it also has geometric multiplicity 1 by Theorem 6.2.4. For λ1 we get
2 2 2 1 1 1
A − λ1 I = 2 2 2 ∼ 0 0 0
2 2 2 0 0 0
Hence, the eigenvalues are λ1 = 1, λ2 = 3, and λ3 = −1 all with algebraic multiplicity 1. Thus,
they all have geometric multiplicity 1 by Theorem 6.2.4.
(h) We have
2 − λ 2 −1
C(λ) = det(A − λI) = −2 1 − λ 2
2 2 −1 − λ
1 − λ 2 −1 1 − λ 2 −1
= 0 1−λ 2 = 0 1 − λ 2
1 − λ 2 −1 − λ 0
0 −λ
= −λ(λ − 1)2
Hence, the eigenvalues are λ1 = 0 with algebraic multiplicity 1 and λ2 = 1 with algebraic mul-
tiplicity 2. Since λ1 has algebraic multiplicity 1 it also has geometric multiplicity 1 by Theorem
6.2.4. For λ2 we get
1 2 −1 1 0 −1
A − λ2 I = −2 0 2 ∼ 0 1 0
2 2 −2 0 0 0
Hence, the eigenvalues are λ1 = 3, λ2 = −9, and λ3 = 6 all with algebraic multiplicity 1. Thus,
they all have geometric multiplicity 1 by Theorem 6.2.4.
(j) We have
1 − λ 0 1
C(λ) = det(A − λI) = 0 2−λ 1
0 0 2 − λ
= (1 − λ)(2 − λ)2
6.2.4 Geometrically, we are looking for vectors that are mapped to a scalar multiple of themselves. For a rotation by π/3, the only vector that will be mapped to a scalar multiple of itself is the zero vector. Thus, Rπ/3 has no real eigenvalues and eigenvectors. Since the projection onto ~v is always a scalar multiple of ~v, the only eigenvectors can be vectors that are parallel to ~v or orthogonal to ~v. Indeed, by definition, we have proj~v ~x = ((~x · ~v)/k~vk2 )~v. Thus, if ~x = t~v, for t ≠ 0, then we get
(t~v) · ~v
proj~v (t~v) = ~v = t~v = 1(t~v)
k~vk2
Thus, λ1 = 1 is an eigenvalue with Eλ1 = Span{~v}.
If ~y is any non-zero vector orthogonal to ~v, then we get
proj~v ~y = ~0 = 0~v
(AB)(5~u + 3~v) = A(5B~u + 3B~v) = A(25~u + 9~v) = 25A~u + 9A~v = 150~u + 90~v = 30(5~u + 3~v)
AB~w = AB(c1~u + c2~v) = c1 (AB)~u + c2 (AB)~v = c1 (30~u) + c2 (30~v) = 30(c1~u + c2~v) = 30~w = [6; 42]
# " " #
1 0 1 0
6.2.10 It does not imply that λµ is an eigenvalue of AB. Let A = and B = . Then λ = 2 is an
0 2 0 1/2
" #
1 0
eigenvalue of A and µ = 1 is an eigenvalue of B, but λµ = 2 is not an eigenvalue of AB = .
0 1
6.2.11 By definition of matrix-vector multiplication, we have that
c
Hence, a basis for Eλ1 is {[−1; 1; 0], [1; 0; 1]}.
For λ2 = 3 we have A − λ2 I = [2 −1 1; −1 2 1; 1 1 2] ∼ [1 0 1; 0 1 1; 0 0 0].
Hence, a basis for Eλ2 is {[−1; −1; 1]}.
It follows that A is diagonalized by P = [−1 1 −1; 1 0 −1; 0 1 1]. The resulting diagonal matrix D has the eigenvalues of A as its diagonal entries, so D = [6 0 0; 0 6 0; 0 0 3].
(e) We have
C(λ) = |1−λ 0 −1; 2 2−λ 1; 4 0 5−λ| = (2 − λ)(λ2 − 6λ + 9) = −(λ − 2)(λ − 3)2
136 Section 6.3 Solutions
For λ2 = 2 we have A − λ2 I = [−1 1 2; 2 −2 1; 1 −1 −3] ∼ [1 −1 0; 0 0 1; 0 0 0].
Hence, a basis for Eλ2 is {[1; 1; 0]}.
For λ3 = −2 we have A − λ3 I = [3 1 2; 2 2 1; 1 −1 1] ∼ [1 0 3/4; 0 1 −1/4; 0 0 0].
Hence, a basis for Eλ3 is {[−3; 1; 4]}.
It follows that A is diagonalized by P = [−1 1 −3; −3 1 1; 2 0 4]. The resulting diagonal matrix D has the eigenvalues of A as its diagonal entries, so D = [0 0 0; 0 2 0; 0 0 −2].
(g) We have
C(λ) = det [1 − λ −2 3; 2 6 − λ −6; 1 2 −1 − λ] = det [2 − λ 0 2 − λ; 2 6 − λ −6; 1 2 −1 − λ]   (R1 → R1 + R3)
     = det [2 − λ 0 0; 2 6 − λ −8; 1 2 −2 − λ]   (C3 → C3 − C1)
     = (2 − λ)(λ^2 − 4λ + 4) = −(λ − 2)^3
(h) We have
C(λ) = det [3 − λ 1 1; −4 −2 − λ −5; 2 2 5 − λ] = det [3 − λ 1 0; −4 −2 − λ −3 + λ; 2 2 3 − λ]   (C3 → C3 − C2)
     = det [3 − λ 1 0; −2 −λ 0; 2 2 3 − λ]   (R2 → R2 + R3)
     = (3 − λ)(λ^2 − 3λ + 2) = −(λ − 3)(λ − 2)(λ − 1)
For λ2 = 2 we have A − λ2 I = [1 1 1; −4 −4 −5; 2 2 3] ∼ [1 1 0; 0 0 1; 0 0 0].
Hence, a basis for Eλ2 is { [−1; 1; 0] }.
For λ3 = 1 we have A − λ3 I = [2 1 1; −4 −3 −5; 2 2 4] ∼ [1 0 −1; 0 1 3; 0 0 0].
Hence, a basis for Eλ3 is { [1; −3; 1] }.
It follows that A is diagonalized by P = [0 −1 1; −1 1 −3; 1 0 1]. The resulting diagonal matrix D has the
eigenvalues of A as its diagonal entries, so D = diag(3, 2, 1).
(i) We have
C(λ) = det [−3 − λ −3 5; 13 10 − λ −13; 3 2 −1 − λ] = det [2 − λ −3 5; 0 10 − λ −13; 2 − λ 2 −1 − λ]   (C1 → C1 + C3)
     = det [2 − λ −3 5; 0 10 − λ −13; 0 5 −6 − λ]   (R3 → R3 − R1)
     = (2 − λ)(λ^2 − 4λ + 5)
Hence, a basis for Eλ1 is { [−3/5; 1; 0], [1; 0; 1] }.
For λ2 = 6 we have A − λ2 I = [2 6 −10; 5 −5 −5; 5 3 −13] ∼ [1 0 −2; 0 1 −1; 0 0 0].
Hence, a basis for Eλ2 is { [2; 1; 1] }.
It follows that A is diagonalized by P = [−3/5 1 2; 1 0 1; 0 1 1]. The resulting diagonal matrix D has the
eigenvalues of A as its diagonal entries, so D = diag(−2, −2, 6).
(k) We have
C(λ) = det [2 − λ 1 0; 1 3 − λ 1; 3 1 4 − λ] = (2 − λ)(3 − λ)(4 − λ) − 3 + 2λ = −(λ^3 − 9λ^2 + 24λ − 21)
We find that C(λ) has non-real roots, and so A is not diagonalizable over R.
(l) We have
C(λ) = det [1 − λ 1 1 1; 1 1 − λ 1 1; 1 1 1 − λ 1; 1 1 1 1 − λ]
Performing row and column operations and expanding along the resulting zeros, we get
C(λ) = λ^2 (λ^2 − 4λ) = λ^3 (λ − 4)
3 0 3 − λ
For λ2 = 4 we have A − λ2 I = [−3 1 1 1; 1 −3 1 1; 1 1 −3 1; 1 1 1 −3] ∼ [1 0 0 −1; 0 1 0 −1; 0 0 1 −1; 0 0 0 0].
Hence, a basis for Eλ2 is { [1; 1; 1; 1] }.
It follows that A is diagonalized by P = [−1 −1 −1 1; 1 0 0 1; 0 1 0 1; 0 0 1 1]. The resulting diagonal
matrix D has the eigenvalues of A as its diagonal entries, so D = diag(0, 0, 0, 4).
" #
3 −1
(a) We have [L] = . Thus, from our work in problem 6.3.1(a), we get that if we take
−1 3
(" # " #) " #
1 −1 2 0
B= , , then [L]B = .
1 1 0 4
" #
1 3
(b) We have [L] = .
4 2
We have
1 − λ 3
C(λ) = = λ2 − 3λ = 10 = (λ − 5)(λ + 2)
4 2 − λ
Thus, the eigenvalues of A are λ1 = 5 and λ2 = −2. Since all the eigenvalues have algebraic
multiplicity 1, we know that [L] is diagonalizable.
" # " #
−4 3 4 −3
For λ1 = 5 we have A − λ1 I = ∼ .
4 −3 0 0
(" #)
3
Hence, a basis for Eλ1 is .
4
" # " #
3 3 1 1
For λ2 = −2 we have A − λ2 I = ∼ .
4 4 0 0
(" #)
−1
Hence, a basis for Eλ2 is .
1
(" # " #) " #
3 −1 5 0
Therefore, we take B = , and our we get [L]B = .
4 1 0 −2
(c) We have [L] = [−2 2 2; −3 3 2; −2 2 2].
We have
C(λ) = det [−2 − λ 2 2; −3 3 − λ 2; −2 2 2 − λ] = λ(−2 + 3λ − λ^2 ) = −λ(λ − 2)(λ − 1)
Thus, the eigenvalues of A are λ1 = 0, λ2 = 2, and λ3 = 1. Since all the eigenvalues have algebraic
multiplicity 1, we know that [L] is diagonalizable.
For λ1 = 0 we have A − λ1 I = [−2 2 2; −3 3 2; −2 2 2] ∼ [1 −1 0; 0 0 1; 0 0 0].
Hence, a basis for Eλ1 is { [1; 1; 0] }.
For λ2 = 2 we have A − λ2 I = [−4 2 2; −3 1 2; −2 2 0] ∼ [1 0 −1; 0 1 −1; 0 0 0].
Hence, a basis for Eλ2 is { [1; 1; 1] }.
For λ3 = 1 we have A − λ3 I = [−3 2 2; −3 2 2; −2 2 1] ∼ [1 0 −1; 0 1 −1/2; 0 0 0].
Hence, a basis for Eλ3 is { [2; 1; 2] }.
Therefore, we take B = { [1; 1; 0], [1; 1; 1], [2; 1; 2] } and we get [L]B = [0 0 0; 0 2 0; 0 0 1].
6.3.2 (a) By definition, A is similar to D = diag(λ1 , . . . , λn ); that is, there is an invertible matrix P with
P^−1 AP = diag(λ1 , . . . , λn )
Hence, by Theorem 6.1.1, we have that
tr A = tr diag(λ1 , . . . , λn ) = λ1 + · · · + λn
as required.
6.3.5 If A is diagonalizable, then there exists an invertible matrix P and diagonal matrix D such that P−1 AP =
D. Thus,
D = DT = (P−1 AP)T = PT AT (P−1 )T
Let Q = (P−1 )T , then Q−1 = PT and so we have that
Q−1 AT Q = D
0 ≤ (a + d)2 − 4(ad − bc) = a2 + 2ad + d2 − 4ad + 4bc = a2 − 2ad + d2 − 4bc = (a − d)2 + 4bc
as required.
" #
0 1
(b) If A = , then (a − d)2 + 4bc = (0 − 0)2 + 4(1)(0) = 0, but A is not diagonalizble since a0 = 2,
0 0
but g0 = 1 < a0 .
If A is the zero matrix, then (a − d)2 + 4bc = 0, but A is diagonalizable since it is already diagonal.
(c) If (a − d)^2 + 4bc = (2x)^2 where x ∈ Z, then observe that a − d must be even since (a − d)^2 = (2x)^2 − 4bc
is even. Hence a and d have the same parity, so a + d = 2z for some z ∈ Z, and the eigenvalues of A are
λ = ( (a + d) ± √((a − d)^2 + 4bc) ) / 2 = (2z ± 2|x|) / 2 = z ± |x|
which are both integers.
Section 6.4 Solutions 143
A^100 = P D^100 P^−1 = [−5 −2; 4 1] [2^100 0; 0 (−1)^100] (1/3) [1 2; −4 −5]
      = (1/3) [−5 · 2^100 + 8   −5 · 2^101 + 10; 2^102 − 4   2^103 − 5]
6.4.3 We have
2 − λ 2
C(λ) = = λ2 + 3λ − 4 = (λ − 1)(λ + 4)
−3 −5 − λ
Thus, the eigenvalues of A are λ1 = 1 and λ2 = −4.
For λ1 = 1 we get
A − λ1 I = [1 2; −3 −6] ∼ [1 2; 0 0]
Hence, a basis for Eλ1 is { [−2; 1] }.
For λ2 = −4 we get
A − λ2 I = [6 2; −3 −1] ∼ [1 1/3; 0 0]
Hence, a basis for Eλ2 is { [−1; 3] }.
" # " #
−2 −1 1 0
It follows that A is diagonalized by P = to D = . Thus, we have A = PDP−1 and so
1 3 0 −4
" #" # " #
100 100 −1 −2 −1 1 0 1 3 1
A = PD P =
1 3 0 4100 −5 −1 −2
1 −6 + 4100 −2 + 2 · 4100
" #
=−
5 3 − 3 · 4100 1 − 6 · 4100
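Since Python integers have arbitrary precision, the closed form for A^100 can be checked exactly against repeated matrix multiplication:

```python
# Sketch: check the closed form for A^100 with exact integer arithmetic.
from fractions import Fraction as F

A = [[2, 2], [-3, -5]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[1, 0], [0, 1]]
for _ in range(100):
    M = matmul(M, A)            # M = A^100, computed exactly

p = 4 ** 100
expected = [[-F(-6 + p, 5), -F(-2 + 2 * p, 5)],
            [-F(3 - 3 * p, 5), -F(1 - 6 * p, 5)]]
assert M == expected
```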
6.4.4 We have
C(λ) = det [−2 − λ 2; −3 5 − λ] = λ^2 − 3λ − 4 = (λ + 1)(λ − 4)
Thus, the eigenvalues are λ1 = −1 and λ2 = 4.
For λ1 = −1 we have
A − λ1 I = [−1 2; −3 6] ∼ [1 −2; 0 0]
Hence, a basis for Eλ1 is { [2; 1] }.
For λ2 = 1 we have
A − λ2 I = [−3 1 1; −1 −1 1; −2 2 0] ∼ [1 0 −1/2; 0 1 −1/2; 0 0 0]
Hence, a basis for Eλ2 is { [1; 1; 2] }.
It follows that A is diagonalized by P = [1 1 1; 1 0 1; 0 1 2] to D = [−1 0 0; 0 −1 0; 0 0 1]. Thus, we have
A = PDP^−1 and so
6.4.6 If λ is an eigenvalue of A, then there exists a non-zero vector ~v such that A~v = λ~v. We will prove by
induction that An~v = λn~v.
Base Case: n = 1. We have A1~v = λ1~v.
Inductive Hypothesis: Assume that Ak~v = λk~v.
Inductive Step: We have
A^(k+1)~v = A(A^k~v) = A(λ^k~v) = λ^k A~v = λ^k (λ~v) = λ^(k+1)~v
as required.
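The induction can be illustrated numerically; the sketch below uses the matrix and eigenpair from problem 6.4.3 (eigenvector (−1, 3) with eigenvalue −4) and checks A^n~v = λ^n~v for several n:

```python
# Sketch: A^n v = lambda^n v, checked on a sample eigenpair.
A = [[2, 2], [-3, -5]]          # has eigenvector (-1, 3) with eigenvalue -4
v = [-1, 3]
lam = -4

x = v[:]
for n in range(1, 11):
    x = [A[0][0] * x[0] + A[0][1] * x[1],   # x <- A x, so x = A^n v
         A[1][0] * x[0] + A[1][1] * x[1]]
    assert x == [lam ** n * v[0], lam ** n * v[1]]
```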
6.4.7
6.4.8
6.4.9
6.4.10
6.4.11
Chapter 7 Solutions
x1 − 3x3 = 0
x2 + 3x3 = 0
x3 is a free variable, so we let x3 = t ∈ R. Then we have any vector ~x in the nullspace satisfies
[x1; x2; x3] = [3t; −3t; t] = t [3; −3; 1]
Thus, a basis for the nullspace is { [3; −3; 1] }.
To find a basis for the left nullspace, we row reduce the transpose of the matrix. We get
[0 1; 1 2; 3 3] ∼ [1 0; 0 1; 0 0]
Thus, the left nullspace is {~0}. Therefore, a basis for the left nullspace is the empty set.
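The row reduction used throughout this section can be automated; a minimal sketch in exact rational arithmetic, applied to the matrix from part (a):

```python
# Sketch: row reduction to RREF and a nullspace check, with exact fractions.
from fractions import Fraction as F

def rref(M):
    M = [[F(x) for x in row] for row in M]
    lead = 0
    for r in range(len(M)):
        if lead >= len(M[0]):
            break
        i = r
        while M[i][lead] == 0:
            i += 1
            if i == len(M):
                i, lead = r, lead + 1
                if lead == len(M[0]):
                    return M
        M[i], M[r] = M[r], M[i]
        M[r] = [x / M[r][lead] for x in M[r]]
        for j in range(len(M)):
            if j != r:
                M[j] = [a - M[j][lead] * b for a, b in zip(M[j], M[r])]
        lead += 1
    return M

A = [[0, 1, 3], [1, 2, 3]]
assert rref(A) == [[1, 0, -3], [0, 1, 3]]
# x3 = t is free, so x = t(3, -3, 1): the basis vector found above.
x = [3, -3, 1]
assert all(sum(a * b for a, b in zip(row, x)) == 0 for row in A)
```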
(b) Row reducing we get " # " #
1 2 −1 1 2 0
∼
2 4 3 0 0 1
148 Section 7.1 Solutions
1 0 (" # " #)
1 −1
Hence, a basis for the row space is 2 , 0, a basis for the column space is , . To find
0 1
2 3
a basis for the nullspace we solve
x1 + 2x2 = 0
x3 = 0
x2 is a free variable, so we let x2 = t ∈ R. Then we have any vector ~x in the nullspace satisfies
[x1; x2; x3] = [−2t; t; 0] = t [−2; 1; 0]
Thus, a basis for the nullspace is { [−2; 1; 0] }.
To find a basis for the left nullspace, we row reduce the transpose of the matrix. We get
[1 2; 2 4; −1 3] ∼ [1 0; 0 1; 0 0]
Thus, the left nullspace is {~0}. Therefore, a basis for the left nullspace is the empty set.
(c) Row reducing we get
[2 −1 5; 3 4 2; 1 1 1] ∼ [1 0 2; 0 1 −1; 0 0 0]
Hence, a basis for the row space is { [1; 0; 2], [0; 1; −1] }, and a basis for the column space is { [2; 3; 1], [−1; 4; 1] }. To
find a basis for the nullspace we solve
find a basis for the nullspace we solve
x1 + 2x3 = 0
x2 − x3 = 0
x3 is a free variable, so we let x3 = t ∈ R. Then we have any vector ~x in the nullspace satisfies
[x1; x2; x3] = [−2t; t; t] = t [−2; 1; 1]
Thus, a basis for the nullspace is { [−2; 1; 1] }.
To find a basis for the left nullspace, we row reduce the transpose of the matrix. We get
[2 3 1; −1 4 1; 5 2 1] ∼ [1 0 1/11; 0 1 3/11; 0 0 0]
This corresponds to the system
x1 + (1/11)x3 = 0
x2 + (3/11)x3 = 0
Let x3 = t ∈ R. Then every vector ~x in the left nullspace satisfies
[x1; x2; x3] = [−t/11; −3t/11; t] = t [−1/11; −3/11; 1]
Hence, a basis for the left nullspace is { [−1/11; −3/11; 1] }.
x1 − 2x2 + 2x4 = 0
x3 + x4 = 0
x2 and x4 are free variables, so we let x2 = s ∈ R and x4 = t ∈ R. Then we have any vector ~x in
the nullspace satisfies
[x1; x2; x3; x4] = [2s − 2t; s; −t; t] = s [2; 1; 0; 0] + t [−2; 0; −1; 1]
Thus, a basis for the nullspace is { [2; 1; 0; 0], [−2; 0; −1; 1] }.
To find a basis for the left nullspace, we row reduce the transpose of the matrix. We get
x1 − (5/3)x3 = 0
x2 − (7/3)x3 = 0
x1 + 3x3 = 0
x2 − 2x3 = 0
x4 = 0
x3 is a free variable, so we let x3 = t ∈ R. Then we have any vector ~x in the nullspace satisfies
x1 − 3x4 = 0
x2 = 0
x3 + 10x4 = 0
Also, ~a3 , ~0, since ~e3 is not in the nullspace of A. Thus, a basis for Col(A) is {~a3 }.
7.1.3 Pick ~z ∈ {RE~x | ~x ∈ Rn }. Then ~z = RE~x for some ~x ∈ Rn . By definition of matrix-vector multiplication
we have that E~x = ~y ∈ Rn for some ~y ∈ Rn . Thus, ~z = R~y ∈ {R~y | ~y ∈ Rn }.
On the other hand, pick w~ ∈ {R~y | ~y ∈ Rn }. Then w~ = R~y for some ~y ∈ Rn . Since E is an invertible matrix,
there exists a unique vector ~x ∈ Rn such that E~x = ~y. Thus, w~ = R~y = RE~x ∈ {RE~x | ~x ∈ Rn }.
Therefore, the sets are subsets of each other and hence equal.
7.1.4 The result follows immediately from the Invertible Matrix Theorem.
7.1.5 (a) If ~x ∈ Null(BT ), then by definition ~0 = BT ~x. Taking transposes of both sides gives
~0T = (BT ~x)T = ~xT (BT )T = ~xT B
~0 = BT ~x = [~b1 · ~x; . . . ; ~bn · ~x]
7.1.6 We need to find a matrix A such that A~x = ~0 if and only if ~x is a linear combination of the columns of A.
Observe that we need dim Null(A) = dim Col(A) = rank(A). So, by the Dimension Theorem, we have
2 = rank(A) + dim Null(A) = 2 rank(A). Thus, we need rank(A) = 1. Hence, we can pick A to be of the
form A = [x1 x2; 0 0]. It is easy to see that taking A = [0 1; 0 0] gives
Col(A) = Span{ [1; 0] } = Null(A)
Similarly, since the rank of a matrix equals the rank of the transpose of a the matrix, we get
7.1.9 (a) If A~x = ~0, then AT A~x = AT ~0 = ~0. Hence, the nullspace of A is a subset of the nullspace of AT A.
On the other hand, consider AT A~x = ~0. Then,
Thus, A~x = ~0. Hence, the nullspace of AT A is also a subset of the nullspace of A, and the result
follows.
(b) Using part (a), we get that dim(Null(AT A)) = dim(Null(A)). Thus, the Dimension Theorem gives
as required.
7.1.10 Let ~x ∈ Col(B). Then, there exists ~y such that B~y = ~x. Now, observe that
Thus, ~x ∈ Null(A). Hence, Col(B) is a subset of Null(A) and thus, since Col(B) is a vector space, Col(B)
is a subspace of Null(A).
1 1
7.1.11 (a) A = 1 0
1 2
" #
1 1 1
(b) B =
1 0 2
(c) We first observe that C must have 3 columns. So, let's make C a 1 × 3 matrix. We require that
C = [−2 1 1]
−2
(d) Using our work in part (c), we see that we can take D = 1 .
1
(e) No, there cannot be such a matrix F since we would have dim(Col(F)) = 2, dim(Null(F)) = 2,
and F would have to have 3 columns, but this would contradict the Dimension Theorem.
Chapter 8 Solutions
Thus, T is linear.
(e) Let a1 + b1 x + c1 x^2 + d1 x^3 , a2 + b2 x + c2 x^2 + d2 x^3 ∈ P3 (R) and s, t ∈ R. Then,
D[s(a1 + b1 x + c1 x^2 + d1 x^3 ) + t(a2 + b2 x + c2 x^2 + d2 x^3 )] = D[(sa1 + ta2 ) + (sb1 + tb2 )x + (sc1 + tc2 )x^2 + (sd1 + td2 )x^3 ]
= (sb1 + tb2 ) + 2(sc1 + tc2 )x + 3(sd1 + td2 )x^2
= s(b1 + 2c1 x + 3d1 x^2 ) + t(b2 + 2c2 x + 3d2 x^2 )
= sD(a1 + b1 x + c1 x^2 + d1 x^3 ) + tD(a2 + b2 x + c2 x^2 + d2 x^3 )
Section 8.1 Solutions 155
Thus, D is linear.
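The linearity computation can be mirrored in code by representing a polynomial in P3(R) by its coefficient list; a small sketch:

```python
# Sketch: differentiation on P3(R) as a map on coefficient lists
# [a, b, c, d] ~ a + b x + c x^2 + d x^3, with the linearity check above.
def D(p):
    a, b, c, d = p
    return [b, 2 * c, 3 * d]        # derivative: b + 2c x + 3d x^2

def comb(s, p, t, q):
    return [s * pi + t * qi for pi, qi in zip(p, q)]

p, q, s, t = [1, 2, 3, 4], [5, -1, 0, 2], 3, -2
assert D(comb(s, p, t, q)) == comb(s, D(p), t, D(q))   # D(sp + tq) = sD(p) + tD(q)
```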
(f) We have
L([1 0; 0 1]) + L([−1 0; 0 −1]) = 1 + 1 = 2
but
L([1 0; 0 1] + [−1 0; 0 −1]) = L([0 0; 0 0]) = 0
So, L is not linear.
8.1.2 (a) If L is linear, then we have
L(a1 , a2 , a3 ) = a1 L(1, 0, 0) + a2 L(0, 1, 0) + a3 L(0, 0, 1)
= a1 (1 + x) + a2 (1 − x2 ) + a3 (1 + x + x2 )
= (a1 + a2 + a3 )1 + (a1 + a3 )x + (−a2 + a3 )x2
=
..
.
sbn + tcn
b1 c1
(M ◦ L)(~v) = M(L(~v))
Observe that L(~v) ∈ W which is in the domain of M. Hence, we get that M(L(~v)) ∈ Range(M) which is
in W. Thus, the codomain of M ◦ L is W.
Let ~x, ~y ∈ V and s, t ∈ R. Then,
Hence, M ◦ L is linear.
8.1.6 (a) Consider c1~v1 + · · · + ck~vk = ~0. Then, we have
Thus, c1 = · · · = ck = 0 since {L(~v1 ), . . . , L(~vk )} is linearly independent. Thus, {~v1 , . . . , ~vk } must
also be linearly independent.
(b) If L : R2 → R2 is the linear mapping defined by L(~x) = ~0 and {~v1 , ~v2 } = {~e1 , ~e2 } is the standard
basis for R2 , then {~v1 , ~v2 } is linearly independent, but {L(~v1 ), L(~v2 )} is linearly dependent since it
contains the zero vector.
8.1.7 Let L, M ∈ L and t ∈ R.
(a) By definition tL is a mapping with domain V. Also, since W is closed under scalar multiplication,
we have that (tL)(~v) = tL(~v) ∈ W. Thus, the codomain of tL is W. For any ~x, ~y ∈ V and c, d ∈ R
we have
Hence, tL ∈ L.
(b) For any ~v ∈ V we have
[t(L + M)](~v) = t[(L + M)(~v)] = t[L(~v) + M(~v)] = tL(~v) + tM(~v) = [tL + tM](~v)
If [L] is invertible, then there exists a matrix A such that A[L] = I = [L]A. Define M : Rn → Rn by
M(~x) = A~x. Then, for any ~x ∈ Rn we have
and
(L ◦ M)(~x) = L(M(~x)) = [L]A~x = I~x = ~x
Thus, M = L−1 .
" #
−1 2 1
8.1.9 Using our work from problem 7, we know that the standard matrix of L is the inverse of [L] = .
3 −4
" #
−1 4/11 1/11
We have that [L] = . Consequently,
3/11 −2/11
!
4 1 3 2
L−1 (x1 , x2 ) = x1 + x2 , x1 − x2
11 11 11 11
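A quick check that the stated inverse matrix really undoes [L] (using exact fractions):

```python
# Sketch: verify [L] [L]^-1 = [L]^-1 [L] = I for [L] = [[2, 1], [3, -4]].
from fractions import Fraction as F

L = [[F(2), F(1)], [F(3), F(-4)]]
Linv = [[F(4, 11), F(1, 11)], [F(3, 11), F(-2, 11)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[F(1), F(0)], [F(0), F(1)]]
assert matmul(L, Linv) == I and matmul(Linv, L) == I
```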
Thus, a − b = 0 and b + c = 0. So, a = b = −c. So, every vector a + bx + cx2 ∈ ker(L) has the
form −c − cx + cx2 = c(−1 − x + x2 ). Thus, a basis for ker(L) is {−1 − x + x2 }. Consequently,
nullity(L) = 1 and the Rank-Nullity Theorem gives
Hence, a + b = 0 and a + b + c = 0. Thus, c = 0 and b = −a. So, every vector in ker(L) has the
form
[a; b; c] = [a; −a; 0] = a [1; −1; 0]
Section 8.2 Solutions 159
Thus, a basis for ker(L) is { [1; −1; 0] }. Thus, nullity(L) = 1 and
Thus, Range(L) = W.
160 Section 8.3 Solutions
8.2.6 (a) Since the rank of a linear mapping is equal to the dimension of the range, we consider the range
of both mappings. Let ~x ∈ Range(M ◦ L). Then there exists ~v ∈ V such that ~x = (M ◦ L)(~v) =
M(L(~v)) ∈ Range(M).
Hence, Range(M ◦ L) is a subset, and hence a subspace, of Range(M). Therefore, dim Range(M ◦
L) ≤ dim Range(M) which implies rank M ◦ L ≤ rank M.
(b) The kernel of L is a subspace of the kernel of M ◦ L, because if L(~x) = ~0, then
(M ◦ L)(~x) = M(L(~x)) = M(~0) = ~0
Therefore
nullity(L) ≤ nullity(M ◦ L)
so
n − nullity(L) ≥ n − nullity(M ◦ L)
Hence rank(L) ≥ rank(M ◦ L), by the Rank-Nullity Theorem.
(c) We have
T ([1 1; 0 0]) = [1; 1] = 1 [1; 1] + 0 [−1; 1]
T ([0 1; 1 0]) = [1; 2] = (3/2) [1; 1] + (1/2) [−1; 1]
T ([1 0; 0 1]) = [1; 0] = (1/2) [1; 1] − (1/2) [−1; 1]
T ([0 0; 1 1]) = [1; 1] = 1 [1; 1] + 0 [−1; 1]
Hence, C [T ]B = [1 3/2 1/2 1; 0 1/2 −1/2 0].
(d) We have
L(1 + x^2 ) = [1; 2] = 0 [1; 1] + 1 [1; 2]
L(x − x^2 ) = [0; 0] = 0 [1; 1] + 0 [1; 2]
L(x^2 ) = [0; 1] = (−1) [1; 1] + 1 [1; 2]
L(x^3 ) = [0; 1] = (−1) [1; 1] + 1 [1; 2]
Hence, C [L]B = [0 0 −1 −1; 1 0 1 1].
(e) We have
(f) We have
L(1) = 0 = 0 + 0x + 0x^2
L(x) = 1 = 1 + 0x + 0x^2
L(x^2 ) = 2x = 0 + 2x + 0x^2
Hence, [L]B = [0 1 0; 0 0 2; 0 0 0].
(b) For ease, name the vectors in B as ~v1 , ~v2 , ~v3 , ~v4 .
" #! " #
1 1 1 1
L = = 1~v1 + 0~v2 + 0~v3 + 0~v4
0 0 0 0
" #! " #
1 0 1 0
L = = 0~v1 + 1~v2 + 0~v3 + 0~v4
0 1 0 1
" #! " #
1 1 1 1
L = = 0~v1 + 0~v2 + 1~v3 + 0~v4
0 1 0 1
" #! " #
0 0 0 1
L = = 0~v1 + −1~v2 + 1~v3 + 0~v4
1 0 0 0
1 0 0 0
0 1 0 −1
Thus, [L]B = .
0 0 1 1
0 0 0 0
(c) We have
T ([1 0; 0 1]) = [2 0; 0 3] = 0 [1 0; 0 1] + 1 [2 0; 0 3]
T ([2 0; 0 3]) = [5 0; 0 7] = 1 [1 0; 0 1] + 2 [2 0; 0 3]
Thus, [T ]B = [0 1; 1 2].
(d) We have
Thus, h i
[L]B = [L(~v1 )]B ··· [L(~vn )]B = I
Section 8.4 Solutions 163
" # (" #)
1 1 1
8.3.4 Let A = and define L : R2 → R2 by L(~x) = A~x. Clearly we have Range(L) = Span . Let
1 1 1
(" # " #)
1 1
B= , . Then, we get
1 −1
" #! " # " # " #
1 2 1 1
L = =2 +0
1 2 1 −1
" #! " # " # " #
1 0 1 1
L = =0 +0
−1 0 1 −1
" #
2 0
Thus, [L]B = . Clearly, Col([L]B ) , Range(L).
0 0
Linear: Let any two elements of P1 (R) be ~a = a0 + a1 x and ~b = b0 + b1 x and let s, t ∈ R then
L(s~a + t~b) = L s(a0 + a1 x) + t(b0 + b1 x)
= L (sa0 + tb0 ) + (sa1 + tb1 )x
" # " # " #
sa0 + tb0 a0 b
= =s + t 0 = sL(~a) + tL(~b)
sa1 + tb1 a1 b1
Therefore, L is linear.
One-to-one: If a0 + a1 x ∈ ker(L), then
" # " #
0 a
= L(a0 + a1 x) = 0
0 a1
= L (sa0 + tb0 ) + (sa1 + tb1 )x + (sa2 + tb2 )x2 + (sa3 + tb3 )x3
" # " # " #
sa0 + tb0 sa1 + tb1 a a b b
= = s 0 1 + t 0 1 = sL(~a) + tL(~b)
sa2 + tb2 sa3 + tb3 a2 a3 b2 b3
Therefore, L is linear.
One-to-one: If a0 + a1 x + a2 x2 + a3 x3 ∈ ker(L), then
" # " #
0 0 a a1
= L(a0 + a1 x + a2 x2 + a3 x3 ) = 0
0 0 a2 a3
Therefore L is linear.
One-to-one: Assume ~a ∈ ker(L). Then
" # " #
0 0 2 a2 a1
= L (x − 1)(a2 x + a1 x + a0 ) =
0 0 0 a0
we define L : S → U by
−1
1
1 0
L a + b = a(1 + x) + bx2
0 0
0 1
−1 −1
1 1
1 0 1 0
Linear: Let any two elements of S be ~a = a1 + b1 and ~b = a2 + b2 and let s, t ∈ R
0 0 0 0
0 1 0 1
then
−1
1
1 0
L(s~a + t~b) = L (sa1 + ta2 ) + (sb1 + tb2 )
0 0
0 1
= (sa1 + ta2 )(1 + x) + (sb1 + tb2 )x2
= sa1 (1 + x) + ta2 (1 + x) + sb1 x2 + tb2 x2
= s[a1 (1 + x) + b1 x2 ] + t[a2 (1 + x) + b2 x2 ] = sL(~a) + tL(~b)
Therefore L is linear.
One-to-one: Assume L(~a) = L(~b). Then a1 (1 + x) + b1 x2 = a2 (1 + x) + b2 x2 . This gives a1 = b1
and a2 = b2 hence ~a = ~b so L is one-to-one.
−1
1
1 0
Onto: For any a(1 + x) + bx2 ∈ U we can pick ~a = a + b ∈ S so that we have L(~a) =
0 0
0 1
a(1 + x) + bx2 hence L is onto.
L[s(b1~v1 + · · · + bn~vn ) + t(c1~v1 + · · · + cn~vn )] = L (sb1 + tc1 )~v1 + · · · + (sbn + tcn )~vn
Therefore, L is linear.
L[s(b1~v1 + · · · + bn~vn ) + t(c1~v1 + · · · + cn~vn )] = L (sb1 + tc1 )~v1 + · · · + (sbn + tcn )~vn
Therefore, L is linear.
One-to-one: If L(b1~v1 + · · · + bn~vn ) = L(c1~v1 + · · · + cn~vn ), then
b1 + b2 x + b3 x2 + · · · + bn xn−1 = c1 + c2 x + c3 x2 + · · · + cn xn−1
L(c1~v1 + · · · + cn~vn ) = ~0
Hence, c1~v1 + · · · + cn~vn ∈ ker(L). Since L is an isomorphism, it is one-to-one and thus ker(L) = {~0} by
Lemma 8.4.1. Thus, we get c1~v1 + · · · + cn~vn = ~0. This implies that c1 = · · · = cn = 0 since {~v1 , . . . , ~vn }
is linearly independent. Consequently, {L(~v1 ), . . . , L(~vn )} is a linearly independent set of n vectors in an
n-dimensional vector space, and hence is a basis.
8.4.4 ~ ∈ W. Since M is onto, there exist a ~u ∈ U such that M(~u) = w
(a) Let w ~ . Then, since L is onto, there
exists a ~v ∈ V such that L(~v) = ~u. Hence,
~
(M ◦ L)(~v) = M(L(~v)) = M(~u) = w
as required.
(b) Let L : R → R2 be defined by L(x1 ) = (x1 , 0). Clearly, L is not onto. Now, define M : R2 → R by
M(x1 , x2 ) = x1 . Then, we get
~ = (M ◦ L)(~v) = M(L(~v))
w
F is onto: Let B = {~v1 , . . . , ~vn } and let ~y ∈ Col(A). Then, ~y = A~x for some ~x = [x1; . . . ; xn] ∈ Rn . Let
~v = x1~v1 + · · · + xn~vn so that [~v]B = ~x. Then,
(b) Since Range(L) is isomorphic to Col(A), they have the same dimension. Hence,
8.4.6 Since L and M are one-to-one, we have nullity(L) = 0 = nullity(M) by Lemma 8.4.1. Since the
range of a linear mapping is a subspace of its codomain, we have that dim V ≥ dim Range(L) and
dim U ≥ dim Range(M). Thus, by the Rank-Nullity Theorem we get
and
dim U ≥ dim Range(M) = rank(M) = dim V
Hence, dim U = dim V and so U and V are isomorphic by Theorem 8.4.2.
8.4.7 (a) We disprove the statement with a counter example. Let U = R2 and V = R2 and L : U → V be the
linear mapping defined by L(~x) = ~0. Then, clearly dim U = dim V, but L is not an isomorphism
since it isn’t one-to-one nor onto.
(b) By definition, we have that dim(Range(L)) = rank(L). Thus, the Rank-Nullity Theorem gives us
that
dim(Range(L)) = dim V − nullity(L)
If dim V < dim W, then dim V − nullity(L) < dim W and hence the range of L cannot be equal to
W. Thus, we have proven the statement is true.
(c) We disprove the statement with a counter example. Let V = R3 and W = R2 and L : V → W be
the linear mapping defined by L(~x) = ~0. Then, clearly dim V > dim W, but L is not onto.
(d) The Rank-Nullity Theorem tells us that
Thus, nullity(L) > 0 and hence L is not one-to-one by Lemma 8.4.1. Therefore, we have proven
the statement is true.
Chapter 9 Solutions
and 0 = h~x, ~xi = 2x12 + 3x22 + 4x32 if and only if x1 = x2 = x3 = 0. Thus, h~x, ~xi = 0 if and only if
~x = ~0. So, it is positive definite.
h~x, ~yi = 2x1 y1 + 3x2 y2 + 4x3 y3 = 2y1 x1 + 3y2 x2 + 4y3 x3 = h~y, ~xi
Thus, it is symmetric.
hs~x + t~y,~zi = 2(sx1 + ty1 )z1 + 3(sx2 + ty2 )z2 + 4(sx3 + ty3 )z3
= s(2x1 z1 + 3x2 z2 + 4x3 z3 ) + t(2y1 z1 + 3y2 z2 + 4y3 z3 ) = sh~x,~zi + th~y,~zi
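The three axioms can also be spot-checked on sample vectors; this does not prove positive definiteness (which needs the argument above) but it catches arithmetic slips:

```python
# Sketch: spot-check the axioms for <x, y> = 2 x1 y1 + 3 x2 y2 + 4 x3 y3.
def ip(x, y):
    return 2 * x[0] * y[0] + 3 * x[1] * y[1] + 4 * x[2] * y[2]

x, y, z, s, t = [1, -2, 3], [0, 4, -1], [2, 2, 5], 3, -7
assert ip(x, y) == ip(y, x)                                # symmetric
sxty = [s * a + t * b for a, b in zip(x, y)]
assert ip(sxty, z) == s * ip(x, z) + t * ip(y, z)          # linear in slot 1
assert ip(x, x) > 0 and ip([0, 0, 0], [0, 0, 0]) == 0      # positive definite (samples)
```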
170 Section 9.1 Solutions
Thus, h~x, ~xi ≥ 0 and h~x, ~xi = 0 if and only if ~x = ~0. So, it is positive definite.
Thus, it is symmetric.
hs~x + t~y,~zi = 2(sx1 + ty1 )z1 − (sx1 + ty1 )z2 − (sx2 + ty2 )z1 + 2(sx2 + ty2 )z2 + (sx3 + ty3 )z3
= s(2x1 z1 − x1 z2 − x2 z1 + 2x2 z2 + x3 z3 ) + t(2y1 z1 − y1 z2 − y2 z1 + 2y2 z2 + y3 z3 )
= sh~x,~zi + th~y,~zi
and hp, pi = 0 if and only if p(−2) = p(1) = p(2) = 0. Since p is a polynomial of degree at most
2 with three roots we get that p = 0. Hence, hp, pi = 0 if and only if p = 0.
and hp, pi = 0 if and only if p(−1) = p(0) = p(1) = 0. Thus, hp, pi = 0 if and only if p = 0.
as required.
9.1.9 Let f, g, h ∈ C[−π, π] and a, b ∈ R. We have
h f, f i = ∫_{−π}^{π} ( f (x))^2 dx ≥ 0
172 Section 9.2 Solutions
since ( f (x))2 ≥ 0. Moreover, we have h f, f i = 0 if and only if f (x) = 0 for all x ∈ [−π, π].
h f, gi = ∫_{−π}^{π} f (x)g(x) dx = ∫_{−π}^{π} g(x) f (x) dx = hg, f i
ha~x + b~y, c~x + d~yi = ha~x + b~y, c~xi + ha~x + b~y, d~yi
= ha~x, c~xi + hb~y, c~xi + ha~x, d~yi + hb~y, d~yi
= abh~x, ~xi + bch~y, ~xi + adh~x, ~yi + bdh~y, ~yi
= abh~x, ~xi + (bc + ad)h~x, ~yi + bdh~y, ~yi
Therefore, it is an orthogonal set. Since it does not contain the zero vector, we get that it is
linearly independent by Theorem 9.2.3. Thus, it is a linearly independent set of 3 vectors in P2 (R)
and hence it is an orthogonal basis for P2 (R).
(b) We have
h1 + x, 1i / k1 + xk^2 = 3/5
h−2 + 3x, 1i / k−2 + 3xk^2 = −6/30 = −1/5
h2 − 3x^2 , 1i / k2 − 3x^2 k^2 = 0/6 = 0
Hence, [1]B = [3/5; −1/5; 0].
(c) We have
h1 + x, xi / k1 + xk^2 = 2/5
h−2 + 3x, xi / k−2 + 3xk^2 = 6/30 = 1/5
h2 − 3x^2 , xi / k2 − 3x^2 k^2 = 0/6 = 0
Hence, [x]B = [2/5; 1/5; 0].
(d) We have
h1 + x, x^2 i / k1 + xk^2 = 2/5
h−2 + 3x, x^2 i / k−2 + 3xk^2 = −4/30 = −2/15
h2 − 3x^2 , x^2 i / k2 − 3x^2 k^2 = −2/6 = −1/3
Hence, [x^2 ]B = [2/5; −2/15; −1/3].
(e) i. Consider
3 − 7x + x^2 = c1 (1 + x) + c2 (−2 + 3x) + c3 (2 − 3x^2 ) = (c1 − 2c2 + 2c3 ) + (c1 + 3c2 )x + (−3c3 )x^2
ii. We have
iii. We have
h1 + x, 3 − 7x + x^2 i / k1 + xk^2 = −3/5
h−2 + 3x, 3 − 7x + x^2 i / k−2 + 3xk^2 = −64/30 = −32/15
h2 − 3x^2 , 3 − 7x + x^2 i / k2 − 3x^2 k^2 = −2/6 = −1/3
Hence, [3 − 7x + x^2 ]B = [−3/5; −32/15; −1/3].
Therefore, B is an orthogonal set in R3 . Since it does not contain the zero vector, we get that it is
linearly independent by Theorem 9.2.3. Thus, it is a linearly independent set of 3 vectors in R3
and hence it is an orthogonal basis for R3 .
(b) We have
h[1; −2; 2], [1; −2; 2]i = 1^2 + (−2)^2 + 2^2 = 9
h[2; 2; 1], [2; 2; 1]i = 2^2 + 2^2 + 1^2 = 9
h[−2; 1; 2], [−2; 1; 2]i = (−2)^2 + 1^2 + 2^2 = 9
(c) We have
h[1; −2; 2], [4; 3; 5]i = 1(4) + (−2)(3) + 2(5) = 8
h[2; 2; 1], [4; 3; 5]i = 2(4) + 2(3) + 1(5) = 19
h[−2; 1; 2], [4; 3; 5]i = (−2)(4) + 1(3) + 2(5) = 5
(d) We have
h[1/3; −2/3; 2/3], [4; 3; 5]i = (1/3)(4) + (−2/3)(3) + (2/3)(5) = 8/3
h[2/3; 2/3; 1/3], [4; 3; 5]i = (2/3)(4) + (2/3)(3) + (1/3)(5) = 19/3
h[−2/3; 1/3; 2/3], [4; 3; 5]i = (−2/3)(4) + (1/3)(3) + (2/3)(5) = 5/3
Therefore, B is an orthogonal set in R3 under the given inner product. Since it does not contain
the zero vector, we get that it is linearly independent by Theorem 9.2.3. Thus, it is a linearly
independent set of 3 vectors in R3 and hence it is an orthogonal basis for R3 .
(b) We have
h[1; 2; −1], [1; 2; −1]i = 1^2 + 2(2)^2 + (−1)^2 = 10
h[3; −1; −1], [3; −1; −1]i = 3^2 + 2(−1)^2 + (−1)^2 = 12
h[3; 1; 7], [3; 1; 7]i = 3^2 + 2(1)^2 + 7^2 = 60
(c) We have
h[1; 2; −1], [1; 1; 1]i = 1(1) + 2(2)(1) + (−1)(1) = 4
h[3; −1; −1], [1; 1; 1]i = 3(1) + 2(−1)(1) + (−1)(1) = 0
h[3; 1; 7], [1; 1; 1]i = 3(1) + 2(1)(1) + 7(1) = 12
Therefore, by Theorem 9.2.4 we have [~x]B = [4/10; 0/12; 12/60] = [2/5; 0; 1/5].
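Theorem 9.2.4's coordinate formula is easy to check in code; the sketch below recomputes [~x]B for this basis and inner product:

```python
# Sketch: coordinates of x = (1, 1, 1) in the orthogonal basis B under
# <x, y> = x1 y1 + 2 x2 y2 + x3 y3, via entries <x, v_i> / ||v_i||^2.
from fractions import Fraction as F

def ip(x, y):
    return x[0] * y[0] + 2 * x[1] * y[1] + x[2] * y[2]

B = [[1, 2, -1], [3, -1, -1], [3, 1, 7]]
x = [1, 1, 1]
coords = [F(ip(x, v), ip(v, v)) for v in B]
assert coords == [F(2, 5), 0, F(1, 5)]
```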
(d) We have
h(1/√10) [1; 2; −1], [1; 1; 1]i = 4/√10
h(1/√12) [3; −1; −1], [1; 1; 1]i = 0
h(1/√60) [3; 1; 7], [1; 1; 1]i = 12/√60
Therefore, by Corollary 9.2.5 we have [~x]C = [4/√10; 0; 12/√60].
Therefore, it is an orthogonal set. Since it does not contain the zero vector, we get that it is
linearly independent by Theorem 9.2.3. By definition, it is also a spanning set, and hence it is an
orthogonal basis for S.
(b) We have
h[1 1; −1 −1], [1 1; −1 −1]i = 1^2 + 1^2 + (−1)^2 + (−1)^2 = 4
h[1 −1; 1 −1], [1 −1; 1 −1]i = 1^2 + (−1)^2 + 1^2 + (−1)^2 = 4
h[−1 1; 1 −1], [−1 1; 1 −1]i = (−1)^2 + 1^2 + 1^2 + (−1)^2 = 4
(c) We have
h[1/2 1/2; −1/2 −1/2], [−3 6; −2 −1]i = 3
h[1/2 −1/2; 1/2 −1/2], [−3 6; −2 −1]i = −5
h[−1/2 1/2; 1/2 −1/2], [−3 6; −2 −1]i = 4
Therefore, [~x]C = [3; −5; 4].
This implies that a = 0 and −2a + 2b − 2c = 0, so any polynomial of the form bx + cx^2 is orthogonal to
each polynomial in B. We use x + x^2 , and normalize it to (1/2)(x + x^2 ). This polynomial extends B to an
orthonormal basis of P2 (R).
9.2.7 We have
kt~vk = √(ht~v, t~vi) = √(t^2 h~v, ~vi) = √(t^2 ) √(h~v, ~vi) = |t| k~vk
9.2.8 (a) Since PPT = I we get
9.2.10 We have
But, {~v1 , . . . , ~vm } is orthonormal, so ~vi · ~vi = 1 and ~vi · ~v j = 0 for all i , j. Hence we have QT Q = I as
required.
9.2.11 (a) We have
But, since {~v1 , . . . , ~vn } is orthonormal we have < ~vi , ~vi >= 1 and < ~vi , ~v j >= 0 for i , j, hence we
get
< ~x, ~y >= c1 d1 + · · · + cn dn
(b) Taking ~y = ~x, the result of part (a) gives k~xk^2 = h~x, ~xi = c1^2 + · · · + cn^2 .
9.2.12 (a) We have d(~x, ~y) = k~x − ~yk ≥ 0 by Theorem 9.2.1(1).
(b) We have 0 = d(~x, ~y) = k~x − ~yk But, by Theorem 9.2.1(1), we have that k~x − ~yk = 0 if and only if
~x − ~y = ~0 as required.
(c) We have
d(~x, ~y) = k~x − ~yk = √(h~x − ~y, ~x − ~yi) = √(h(−1)(~y − ~x), (−1)(~y − ~x)i) = √((−1)(−1)h~y − ~x, ~y − ~xi) = k~y − ~xk = d(~y, ~x)
(d) We have
d(~x,~z) = k~x − ~zk = k~x − ~y + ~y − ~zk ≤ k~x − ~yk + k~y − ~zk = d(~x, ~y) + d(~y,~z)
(e) We have
d(~x, ~y) = k~x − ~yk = k~x + ~z − ~z − ~yk = k~x + ~z − (~y + ~z)k = d(~x + ~z, ~y + ~z)
d(c~x, c~y) = kc~x − c~yk = kc(~x − ~y)k = |c| k~x − ~yk = |c| d(~x, ~y)
180 Section 9.3 Solutions
~v3 = w~3 − (hw~3 , ~v1 i/k~v1 k^2 )~v1 − (hw~3 , ~v2 i/k~v2 k^2 )~v2
    = [1; 3; 1] − (0/2) [1; 0; −1] − (22/66) [5; 4; 5] = [−2/3; 5/3; −2/3]
So, we take ~v2 = [2; 3; −3]. Finally, we get
~v3 = w~3 − (hw~3 , ~v1 i/k~v1 k^2 )~v1 − (hw~3 , ~v2 i/k~v2 k^2 )~v2
    = [2; 3; −1] − (−2/11) [−3; 1; −1] − (16/22) [2; 3; −3] = [0; 1; 1]
w3 , ~v1 i
h~ w3 , ~v2 i
h~
~v3 = w ~3 − 2
~v1 − ~v2
k~v1 k k~v2 k2
−2
1 1 0
2 −2 2 9 3 0
= − − =
−2 16 2 12 −1 −1
0 2 −1 1
= 2 − x − ((4(−2) + 2 · 2)/(4 · 3 + 2(−2) + 2(−2) + 4))(2x^2 + x) = 2 − x + (1/2)(2x^2 + x) = 2 + x^2 − (1/2)x
So an orthogonal basis is B2 = {x, 2x^2 + x, 2 + x^2 − (1/2)x}.
9.3.8 (a) Consider
~0 = c1~v1 + c2~v2 + c3~e1 + c4~e2 + c5~e3 + c6~e4
Row reducing the corresponding coefficient matrix gives
[1 3 1 0 0 0; −1 1 0 1 0 0; −1 1 0 0 1 0; 1 −1 0 0 0 1] ∼ [1 0 1/4 0 0 3/4; 0 1 1/4 0 0 −1/4; 0 0 0 1 0 1; 0 0 0 0 1 1]
So, we have extended {~v1 , ~v2 } to an orthogonal basis {~v1 , ~v2 , ~v3 , ~v4 } for R4 .
Section 9.4 Solutions 183
(c) We first observe that a basis for S3 is { [1; 0; 1] }. Let ~x = [x1; x2; x3] ∈ S3⊥ . Then
0 = [1; 0; 1] · [x1; x2; x3] = x1 + x3
Consequently, we have that B = { [−1; 0; 1], [0; 1; 0] } spans S3⊥ and is clearly linearly independent, so B is
a basis for S3⊥ .
" #
a1 a2
(d) Let A = ∈ S⊥4 . Then
a3 a4
" # " #
a a2 1 0
0=h 1 , i = a1 + a3
a3 a4 1 0
" # " #
a a2 2 −1
0=h 1 , i = 2a1 − a2 + a3 + 3a4
a3 a4 1 3
Row reducing the coefficient matrix of the homogeneous system gives
" # " #
1 0 1 0 1 0 1 0
∼
2 −1 1 3 0 1 1 −3
Hence, we get the general solution is
[a1; a2; a3; a4] = s [−1; −1; 1; 0] + t [0; 3; 0; 1], s, t ∈ R
(" # " #) (" # " #)
−1 −1 0 3 −1 −1 0 3
Therefore, the orthogonal complement of S4 is S⊥4 = Span , . Since ,
1 0 0 1 1 0 0 1
is also clearly linearly independent, it is a basis for S⊥4 .
" #
a a2
(e) Let A = 1 ∈ S⊥5 . Then
a3 a4
" # " #
a1 a2 2 1
0=h , i = 2a1 + a2 + a3 + a4
a3 a4 1 1
" # " #
a a2 −1 1
0=h 1 , i = −a1 + a2 + 3a3 + a4
a3 a4 3 1
9.4.2 (a) We need to find the general form of any vector orthogonal to x2 + 1. If a + bx + cx2 ∈ S⊥ , then we
have
0 = ha + bx + cx2 , x2 + 1i = (a − b + c)(2) + a(1) + (a + b + c)(2) = 5a + 4c
Hence, c = −(5/4)a. Thus, we have
a + bx + cx^2 = a + bx − (5/4)ax^2 = a(1 − (5/4)x^2 ) + bx
(b) Denote the vectors in the orthonormal basis from part (a) by ~c1 , ~c2 , and ~c3 respectively. Then we
get
projS (~y) = h~y, ~c1 i~c1 + h~y, ~c2 i~c2 + h~y, ~c3 i~c3
−1 6
1 1
12 1 2 12 1 −1 −12 1 0
= √ √ + + =
6 6 0 2 2 1 12 3 0
1 1 −1 6
(c) We have
perpS (~z) = ~z − projS (~z) = ~z − h~z, ~c1 i~c1 + h~y, ~c2 i~c2 + h~y, ~c3 i~c3
−1 1 3/2
1 1 1
0 3 1 2 3 1 −1 −3 1 0 0
= − √ √ + + = −
0 6 6 0 2 2 1 12 3 0 0
2 1 1 −1 2 3/2
−1/2
0
=
0
1/2
"# " # " #
1 0 1 1 2 0
9.4.4 (a) Denote the given basis by ~z1 = , ~z2 = , ~z3 = ~ 1 = ~z1 . Then, we get
. Let w
−1 1 1 1 1 1
" # " # " #
~z2 · w
~1 1 1 1 1 0 2/3 1
~ 2 = ~z2 − projw~ 1 (~z2 ) = ~z2 −
w ~1 =
w − = To simplify calculations
k~
w1 k 2 1 1 3 −1 1 4/3 2/3
" #
2 3
we use w ~2 = instead. Then, we get
4 2
" # " # " # " #
~z3 · w
~1 ~ 2)
(~z3 · w 2 0 2 1 0 10 2 3 8/11 −10/11
~ 3 = z3 −
w ~
w 1 − ~
w 2 = − − =
w1 k2
k~ w2 k2
k~ 1 1 3 −1 1 33 4 2 5/11 −3/11
"#
8 −10
~3 =
We pick w . Then the set {~ ~ 2, w
w1 , w ~ 3 } is an orthogonal basis for S.
5 −3
(b) From our work in (a)
" #
~x · w
~1 ~x · w
~2 ~x · w
~3 11 19 −23 35/9 13/9
projS (A) = ~
w 1 + ~
w 2 + ~
w 3 = ~
w 1 + ~
w 2 + ~
w 3 =
k~w1 k2 k~w2 k2 k~w3 k2 3 33 198 −35/18 93/18
(d) Since projS (B) = B we have that B ∈ S. Since projS (A) , A, we have A < S.
~v2 = x − x^2 − (hx − x^2 , 1i/k1k^2 ) 1 = x − x^2 − (−2/3) 1 = 2/3 + x − x^2
k1k 3 3
projS (1 + x + x^2 ) = (h1 + x + x^2 , 1i/k1k^2 ) 1 + (h1 + x + x^2 , 2 + 3x − 3x^2 i/k2 + 3x − 3x^2 k^2 )(2 + 3x − 3x^2 )
= (5/3) 1 + (4/24)(2 + 3x − 3x^2 )
= 2 + (1/2)x − (1/2)x^2
perpS (1 + x + x^2 ) = 1 + x + x^2 − projS (1 + x + x^2 ) = −1 + (1/2)x + (3/2)x^2
(c) Since projS and perpS are linear mappings, from our work in (b) we get
and
perpS (2 + 2x + 2x2 ) = 2 perpS (1 + x + x2 ) = −2 + x + 3x2
9.4.6 We first extend the set to a basis for R3 . We take
B = {w~1 , w~2 , w~3 } = { [1; 0; −1], [1; 0; 0], [0; 1; 0] }
We then apply the Gram-Schmidt procedure. Take ~v1 = [1; 0; −1]. Then
w~2 − (hw~2 , ~v1 i/k~v1 k^2 )~v1 = [1; 0; 0] − (2/5) [1; 0; −1] = [3/5; 0; 2/5]
Hence, we take ~v2 = [3; 0; 2].
~v3 = w~3 − (hw~3 , ~v1 i/k~v1 k^2 )~v1 − (hw~3 , ~v2 i/k~v2 k^2 )~v2 = [0; 1; 0] − 0 [1; 0; −1] − 0 [3; 0; 2] = [0; 1; 0]
9.4.7 Let {~v1 , . . . , ~vk } be an orthonormal basis for W. Then, for any ~u, ~v ∈ V and s, t ∈ R, we have
projW (s~u + t~v) =hs~u + t~v, ~v1 i~v1 + · · · + hs~u + t~v, ~vk i~vk
=s h~u, ~v1 i~v1 + · · · + h~u, ~vk i~vk + t h~v, ~v1 i~v1 + · · · + h~v, ~vk i~vk
= s projW ~u + t projW ~v
Hence, projW is linear.
Let w~ ∈ W⊥ . Then hw~ , ~vi i = 0 for 1 ≤ i ≤ k. Hence projW w~ = hw~ , ~v1 i~v1 + · · · + hw~ , ~vk i~vk = ~0, so
w~ ∈ ker(projW ). Therefore, W⊥ ⊆ ker(projW ).
Let ~x ∈ ker(projW ). Then ~0 = projW ~x = h~x, ~v1 i~v1 + · · · + h~x, ~vk i~vk .
Since {~v1 , . . . , ~vk } is linearly independent, we get that h~x, ~vi i = 0 for 1 ≤ i ≤ k. Therefore, ~x ∈ W⊥ by
theorem. Hence, ker(projW ) ⊆ W⊥ . Consequently, W⊥ = ker(projW ) as required.
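Both facts can be illustrated numerically for a concrete W; the sketch below uses the standard dot product on R^3 and an orthonormal basis of a coordinate plane (chosen only for illustration):

```python
# Sketch: proj_W via an orthonormal basis, checking linearity and that
# vectors in W-perp are sent to the zero vector.
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

v1, v2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]   # orthonormal basis of W (illustrative)

def proj_W(x):
    return [dot(x, v1) * a + dot(x, v2) * b for a, b in zip(v1, v2)]

u, v, s, t = [1.0, 2.0, 3.0], [-4.0, 0.5, 2.0], 2.0, -3.0
lhs = proj_W([s * a + t * b for a, b in zip(u, v)])
rhs = [s * a + t * b for a, b in zip(proj_W(u), proj_W(v))]
assert lhs == rhs                          # proj_W is linear
w = [0.0, 0.0, 5.0]                        # w orthogonal to v1 and v2
assert proj_W(w) == [0.0, 0.0, 0.0]        # so w is in ker(proj_W)
```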
9.4.8 A basis for S⊥ is {p4 (x)} = {1}. Then
projS⊥ (−1 + x + x^2 − x^3 ) = (h1, −1 + x + x^2 − x^3 i/k1k^2 ) 1 = (−4/4) 1 = −1
Therefore,
projS (−1 + x + x2 − x3 ) = (−1 + x + x2 − x3 ) − projS⊥ (−1 + x + x2 − x3 ) = x + x2 − x3
~x = (AT A)−1 AT ~b = [3 2; 2 6]^−1 [1 1 1; 2 1 −1] [1; 2; 5]
   = (1/14) [6 −2; −2 3] [8; −1]
   = [25/7; −19/14]
(b) We have A = [1 5; −2 −7; 1 2] and ~b = [1; 1; 0]. Hence, the vector ~x that minimizes kA~x − ~bk is
~x = (AT A)−1 AT ~b = [6 21; 21 78]^−1 [1 −2 1; 5 −7 2] [1; 1; 0]
   = (1/27) [78 −21; −21 6] [−1; −2]
   = [−4/3; 1/3]
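Solving the normal equations for part (b) exactly is a useful cross-check on the arithmetic (note that the second component works out to +1/3):

```python
# Sketch: solve the normal equations A^T A x = A^T b for part (b) exactly.
from fractions import Fraction as F

A = [[1, 5], [-2, -7], [1, 2]]
b = [1, 1, 0]

ATA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
ATb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]
assert ATA == [[6, 21], [21, 78]] and ATb == [-1, -2]

det = ATA[0][0] * ATA[1][1] - ATA[0][1] * ATA[1][0]      # = 27
x = [F(ATA[1][1] * ATb[0] - ATA[0][1] * ATb[1], det),
     F(-ATA[1][0] * ATb[0] + ATA[0][0] * ATb[1], det)]
assert x == [F(-4, 3), F(1, 3)]
```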
(c) We have A = [2 1; 2 −1; 3 2] and ~b = [4; −1; 8]. Hence, the vector ~x that minimizes kA~x − ~bk is
~x = (AT A)−1 AT ~b = [17 6; 6 6]^−1 [2 2 3; 1 −1 2] [4; −1; 8]
   = (1/66) [6 −6; −6 17] [30; 21]
   = [9/11; 59/22]
Section 9.6 Solutions 191
(d) We have A = [1 2; 1 2; 1 3; 1 3] and ~b = [2; 3; 2; 3]. Hence, the vector ~x that minimizes kA~x − ~bk is
~x = (AT A)−1 AT ~b = [4 10; 10 26]^−1 [1 1 1 1; 2 2 3 3] [2; 3; 2; 3]
   = (1/4) [26 −10; −10 4] [10; 25]
   = [5/2; 0]
(e) We have A = [1 −1 0; 2 −1 1; 2 2 1; 1 −1 0] and ~b = (2, −2, 3, −2)^T. Hence, the vector ~x that minimizes ‖A~x − ~b‖ is

~x = (A^T A)^{-1} A^T ~b
= [10 0 4; 0 7 1; 4 1 2]^{-1} [1 2 2 1; −1 −1 2 −1; 0 1 1 0] (2, −2, 3, −2)^T
= (5/3, 5/3, −11/3)^T
(f) We have A = [2 −2; −1 1; 3 1; 2 −1] and ~b = (1, 2, 1, 2)^T. Hence, the vector ~x that minimizes ‖A~x − ~b‖ is

~x = (A^T A)^{-1} A^T ~b
= [18 −4; −4 7]^{-1} [2 −1 3 2; −2 1 1 −1] (1, 2, 1, 2)^T
= (1/110) [7 4; 4 18] (7, −1)^T
= (9/22, 1/11)^T
(d) In our calculations in (a), (b), and (c), we see that ~y · ~n > ~x · ~n and ~z ∈ P, so ~y is the furthest from
P.
9.6.3 (a) Let X = [1 −1; 1 0; 1 1] and ~y = (−3, 2, 2)^T. Then we get

~a = (X^T X)^{-1} X^T ~y = (1/3, 5/2)^T
9.6.4 (a) Let X = [1 −2 4; 1 −1 1; 1 1 1; 1 2 4] and ~y = (0, −2, 0, 1)^T. Then we get

~a = (X^T X)^{-1} X^T ~y = (−3/2, 2/5, 1/2)^T

So, y = −3/2 + (2/5)x + (1/2)x².
Chapter 10 Solutions
P1^T A P1 = [−3 4 5; 0 1 4; 0 7 4] = [−3 ~b^T; ~0 A1]

where A1 = [1 4; 7 4] and ~b = (4, 5)^T.
Section 10.1 Solutions 195
" # " #
4 −7 8 −3
Our work in part (b) tells us that we can take Q = √1 and get T 1 = .
65 7 4 0 −3
1 0 0√
1 ~0T √
= 0 4/ 65 −7/ 65. Let P = P1 P2 = P2 , then
We now take P2 = ~ √ √
0 Q
0 7/ 65 4/ 65
√ √
−3 51/ 65 −8/ 65
~
T
−3 b Q
PT AP = ~
= 0 8 −3
0 T1
0 0 −3
(d) Observe that λ = 7 is an eigenvalue of A with corresponding eigenvector ~v1 = (1, 0, 0)^T. We extend {~v1} to the basis {(1, 0, 0)^T, (0, 1, 0)^T, (0, 0, 1)^T} for R3. Thus, we pick P1 = I. Thus, we have

P1^T A P1 = [7 −6 5; 0 1 4; 0 6 3] = [7 ~b^T; ~0 A1]

where A1 = [1 4; 6 3] and ~b = (−6, 5)^T.
" # " #
1 −1 ~ 2
where A1 = and b = .
4 5 1
We have C(λ) = det(A − λI) = λ2 − 6λ + 9 = (λ − 3)2 . So, λ1 = 3 and
" # " #
−2 −1 1 1/2
A − 3I = ∼
4 2 0 0
" #
−1
Hence, a unit eigenvector for λ1 = 3 is ~v1 = √15 . We extend {~v1 } to the orthonormal basis
2
" # " # " #
2 1 2 1 −1 2 T 3 5
{~v1 , ~v2 } for R with ~v2 = √5 . Taking Q = √5 gives Q A1 Q = = T1.
1 2 4 0 3
1 0√ 0√
1 ~0T
= 0 −1/ 5 2/ 5. Let P = P1 P2 , then
We now take P2 = ~ √ √
0 Q
0 2/ 5 1/ 5
√
2 0 5
~
T
2 b Q
PT AP = ~
= 0 3 5
0 T1
0 0 3
is upper triangular.
(g) Observe that λ = 2 is an eigenvalue of A with corresponding eigenvector ~v1 = ~e3. We extend {~v1} to the basis {~e3, ~e1, ~e2, ~e4} for R4. Thus, we pick P1 = [~e3 ~e1 ~e2 ~e4]. Thus, we have

P1^T A P1 = [2 −1 1 0; 0 2 0 1; 0 0 2 −1; 0 0 0 2]
is upper triangular.
10.1.2 If A is orthogonally similar to B, then there exists an orthogonal matrix P such that PT AP = B. If B is
orthogonally similar to C, then there exists an orthogonal matrix Q such that QT BQ = C. Hence,
B−1 = QT A−1 Q
(b) If A and B are orthogonally similar, then there exists an orthogonal matrix P such that PT AP = B.
Thus, we have
B2 = (PT AP)(PT AP) = PT A(PPT )AP = PT A2 P
Thus, A2 and B2 are also orthogonally similar.
(c) If A and B are orthogonally similar, then there exists an orthogonal matrix P such that PT AP = B.
Thus,
BT = (PT AP)T = PT AT (PT )T = PT AP = B
(d) If A and B are orthogonally similar, then there exists an orthogonal matrix P such that P^T A P = B. Assume that there exists an orthogonal matrix Q such that Q^T B Q = T is upper triangular. Then,

T = Q^T (P^T A P) Q = (PQ)^T A (PQ)

where PQ is orthogonal since both P and Q are orthogonal. Thus, A is also orthogonally similar to T.
Hence, λ1 = 1 is an eigenvalue, and by the quadratic formula, we get that the other eigenvalues are λ2 = (1 + √33)/2 and λ3 = (1 − √33)/2.
For λ1 = 1 we get

A − λ1 I = [−1 2 −2; 2 0 0; −2 0 0] ∼ [1 0 0; 0 1 −1; 0 0 0]

A basis for the eigenspace of λ1 is {(0, 1, 1)^T}.
Section 10.2 Solutions 199
For λ2 = (1 + √33)/2 we get

A − λ2 I = [−(1+√33)/2 2 −2; 2 (1−√33)/2 0; −2 0 (1−√33)/2] ∼ [1 0 (−1+√33)/4; 0 1 1; 0 0 0]

A basis for the eigenspace of λ2 is {(1 − √33, −4, 4)^T}.
For λ3 = (1 − √33)/2 we get

A − λ3 I = [−(1−√33)/2 2 −2; 2 (1+√33)/2 0; −2 0 (1+√33)/2] ∼ [1 0 (−1−√33)/4; 0 1 1; 0 0 0]

A basis for the eigenspace of λ3 is {(1 + √33, −4, 4)^T}.
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R3. Hence,

P = [0 (1−√33)/√(66−2√33) (1+√33)/√(66+2√33); 1/√2 −4/√(66−2√33) −4/√(66+2√33); 1/√2 4/√(66−2√33) 4/√(66+2√33)]

orthogonally diagonalizes A to

P^T A P = diag(1, (1 + √33)/2, (1 − √33)/2).
(c) We have C(λ) = det(A − λI) = |4−λ −2; −2 7−λ| = λ² − 11λ + 24 = (λ − 3)(λ − 8)
Hence, the eigenvalues are λ1 = 3 and λ2 = 8.
For λ1 = 3 we get A − λ1 I = [1 −2; −2 4] ∼ [1 −2; 0 0]. Thus, a basis for Eλ1 is {(2, 1)^T}.
For λ2 = 8 we get A − λ2 I = [−4 −2; −2 −1] ∼ [1 1/2; 0 0]. Thus, a basis for Eλ2 is {(−1, 2)^T}.
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R2. Hence,
P = [2/√5 −1/√5; 1/√5 2/√5] is orthogonal and P^T A P = [3 0; 0 8].
(d) We have
C(λ) = |2−λ −2 −5; −2 −5−λ −2; −5 −2 2−λ| = |7−λ 0 −7+λ; −2 −5−λ −2; −5 −2 2−λ| = |7−λ 0 0; −2 −5−λ −4; −5 −2 −3−λ|
= −(λ − 7)(λ2 + 8λ + 7) = −(λ − 7)(λ + 1)(λ + 7)
Hence, the eigenvalues are λ1 = 7, λ2 = −1, and λ3 = −7. A basis for the eigenspace of λ1 is {(−1, 0, 1)^T}.
For λ2 = −1 we get A − λ2 I = [3 −2 −5; −2 −4 −2; −5 −2 3] ∼ [1 0 −1; 0 1 1; 0 0 0]
A basis for the eigenspace of λ2 is {(1, −1, 1)^T}.
For λ3 = −7 we get A − λ3 I = [9 −2 −5; −2 2 −2; −5 −2 9] ∼ [1 0 −1; 0 1 −2; 0 0 0]
A basis for the eigenspace of λ3 is {(1, 2, 1)^T}.
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R3. Hence,

P = [−1/√2 1/√3 1/√6; 0 −1/√3 2/√6; 1/√2 1/√3 1/√6]

orthogonally diagonalizes A to P^T A P = diag(7, −1, −7).
(e) We have C(λ) = det(A − λI) = |1−λ −2; −2 −2−λ| = λ² + λ − 6 = (λ + 3)(λ − 2)
Hence, the eigenvalues are λ1 = −3 and λ2 = 2.
" # " # (" #)
4 −2 1 −1/2 1
For λ1 = −3 we get A − λ1 I = ∼ . Thus, a basis for Eλ1 is .
−2 1 0 0 2
" # " # (" #)
−1 −2 1 2 −2
For λ2 = 2 we get A − λ2 I = ∼ . Thus, a basis for Eλ2 is .
−2 −4 0 0 1
2
After" normalizing,
√ √the# basis vectors for the eigenspaces
" form
# an orthonormal basis for R . Hence,
1/ √5 −2/√ 5 −3 0
P= is orthogonal and PT AP = .
2/ 5 1/ 5 0 2
(f) We have

C(λ) = |1−λ 0 2; 0 1−λ 4; 2 4 2−λ| = (1 − λ)(λ² − 3λ − 14) − 4(1 − λ) = −(λ − 1)(λ² − 3λ − 18) = −(λ − 1)(λ − 6)(λ + 3)
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R3. Hence,

P = [−1/√2 −1/√6 1/√3; 1/√2 −1/√6 1/√3; 0 2/√6 1/√3]

orthogonally diagonalizes A to P^T A P = diag(0, 0, 3).
(h) We have

C(λ) = |2−λ 2 4; 2 4−λ 2; 4 2 2−λ| = −(λ + 2)(λ² − 10λ + 16) = −(λ + 2)(λ − 2)(λ − 8)
Thus, {(−1, 1, 0)^T, (1, 1, 2)^T} is an orthogonal basis for the eigenspace of λ1.
For λ2 = 3 we get A − 3I = [2 −1 1; −1 2 1; 1 1 2] ∼ [1 0 1; 0 1 1; 0 0 0]
A basis for the eigenspace of λ2 is {(−1, −1, 1)^T}.
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R3. Hence,

P = [−1/√2 1/√6 −1/√3; 1/√2 1/√6 −1/√3; 0 2/√6 1/√3]

orthogonally diagonalizes A to P^T A P = diag(6, 6, 3).
(k) We have

C(λ) = |−λ 1 −1; 1 −λ 1; −1 1 −λ| = (1 − λ)(λ² + λ − 2) = −(λ − 1)²(λ + 2)

For λ1 = 1, a basis for the eigenspace is {(1, 1, 0)^T, (−1, 0, 1)^T}. Applying the Gram-Schmidt procedure gives

(−1, 0, 1)^T − (−1/2)(1, 1, 0)^T = (−1/2, 1/2, 1)^T

Remembering that we need an orthonormal basis, we take ~v1 = (1/√2, 1/√2, 0)^T and ~v2 = (−1/√6, 1/√6, 2/√6)^T.
Next, we have

A + 2I = [2 1 −1; 1 2 1; −1 1 2] ∼ [1 0 −1; 0 1 1; 0 0 0]

Hence, ~v3 = (1/√3, −1/√3, 1/√3)^T.
h i
Thus, {~v1 , ~v2 , ~v3 } is an orthonormal basis for R3 and hence, taking P = ~v1 ~v2 ~v3 gives
PT AP = diag(1, 1, −2)
(l) We have

C(λ) = |1−λ 1 1 1; 1 1−λ 1 1; 1 1 1−λ 1; 1 1 1 1−λ| = λ³(λ − 4)

The eigenvalues are λ1 = 0 and λ2 = 4.
For λ1 = 0 we get A − 0I = [1 1 1 1; 1 1 1 1; 1 1 1 1; 1 1 1 1] ∼ [1 1 1 1; 0 0 0 0; 0 0 0 0; 0 0 0 0]
A basis for the eigenspace of λ1 is {(−1, 1, 0, 0)^T, (−1, 0, 1, 0)^T, (−1, 0, 0, 1)^T} = {~w1, ~w2, ~w3}.
However, we need an orthogonal basis for each eigenspace, so we apply the Gram-Schmidt procedure to this basis.
We take ~v1 = ~w1 = (−1, 1, 0, 0)^T and ~v2 = ~w2 − (1/2)~w1 = (−1/2, −1/2, 1, 0)^T. Then, we take ~v3 = (−1, −1, −1, 3)^T and we get that {~v1, ~v2, ~v3} is an orthogonal basis for the eigenspace of λ1.
For λ2 = 4 we get A − 4I = [−3 1 1 1; 1 −3 1 1; 1 1 −3 1; 1 1 1 −3] ∼ [1 0 0 −1; 0 1 0 −1; 0 0 1 −1; 0 0 0 0]
A basis for the eigenspace of λ2 is {(1, 1, 1, 1)^T}.
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R4. Hence,

P = [−1/√2 −1/√6 −1/√12 1/2; 1/√2 −1/√6 −1/√12 1/2; 0 2/√6 −1/√12 1/2; 0 0 3/√12 1/2]

orthogonally diagonalizes A to P^T A P = diag(0, 0, 0, 4).
10.2.2 Observe that {~v1, ~v2, ~v3} is an orthogonal set, but we need an orthogonal matrix. So, we normalize the vectors and take

P = [−1/3 −2/√5 2/√45; −2/3 1/√5 4/√45; 2/3 0 5/√45]

We want to find A such that P^T A P = diag(−3, 1, 6). Thus,

A = P diag(−3, 1, 6) P^T = [1 0 2; 0 1 4; 2 4 2]
10.2.3 If A is orthogonally diagonalizable, then there exists an orthogonal matrix P and diagonal matrix D
such that D = PT AP = P−1 AP. Since A and P are invertible, we get that D is invertible and hence,
10.2.5 Assume that there exists a symmetric matrix B such that A = B². Since B is symmetric, there exists an orthogonal matrix P such that P^T B P = D = diag(λ1, . . . , λn). Hence, we have B = P D P^T and so

A = B² = P D P^T P D P^T = P D² P^T = P diag(λ1², . . . , λn²) P^T

Conversely, if

A = Q diag(λ1², . . . , λn²) Q^T = [Q diag(λ1, . . . , λn) Q^T][Q diag(λ1, . . . , λn) Q^T]

then we define B = Q diag(λ1, . . . , λn) Q^T. We have that Q^T B Q = diag(λ1, . . . , λn) and so B is symmetric since it is orthogonally diagonalizable.
" √ √ #
1/ √2 1/ √2
10.2.6 (a) The statement is false. The matrix P = is orthogonal, but it is not symmetric,
−1/ 2 1/ 2
so it is not orthogonally diagonalizable.
(b) The statement is false by the result of 4(c).
(c) The statement is true. If A is orthogonally similar to B, then there exists an orthogonal matrix
Q such that QT AQ = B. If B is symmetric, then there exists an orthogonal matrix P such that
PT BP = D. Hence,
D = PT (QT AQ)P = PT QT AQP = (QP)T A(QP)
where QP is orthogonal since a product of orthogonal matrices is orthogonal. Hence, A is orthog-
onally diagonalizable.
(d) If A is symmetric, then it is orthogonally diagonalizable by the Principal Axis Theorem. Thus, A
is diagonalizable and hence gλ = aλ by Corollary 6.3.4.
x1 = −(1/3)y1 − (2/3)y2 + (2/3)y3
x2 = (2/3)y1 − (2/3)y2 − (1/3)y3
x3 = (2/3)y1 + (1/3)y2 + (2/3)y3
(g) i. A = [4 −1 1; −1 4 1; 1 1 4]
ii. We have
C(λ) = −(λ − 5)2 (λ − 2)
Hence, the eigenvalues are λ1 = 5 and λ2 = 2 so the quadratic form and symmetric matrix
are positive definite.
iii. The corresponding diagonal form is Q = 5y21 + 5y22 + 2y23
We find that a matrix which orthogonally diagonalizes A is

P = [−1/√2 1/√6 −1/√3; 1/√2 1/√6 −1/√3; 0 2/√6 1/√3]

So, computing ~x = P~y we get that the required change of variables is

x1 = −(1/√2)y1 + (1/√6)y2 − (1/√3)y3
x2 = (1/√2)y1 + (1/√6)y2 − (1/√3)y3
x3 = (2/√6)y2 + (1/√3)y3
(h) i. A = [−4 1 1; 1 −4 −1; 1 −1 −4]
ii. We have
C(λ) = −(λ + 3)2 (λ + 6)
Hence, the eigenvalues are λ1 = −3 and λ2 = −6 so the quadratic form and symmetric matrix
are negative definite.
iii. The corresponding diagonal form is Q = −3y21 − 3y22 − 6y23
Section 10.3 Solutions 211
We find that a matrix which orthogonally diagonalizes A is

P = [1/√2 1/√6 −1/√3; 1/√2 −1/√6 1/√3; 0 2/√6 1/√3]

So, computing ~x = P~y we get that the required change of variables is

x1 = (1/√2)y1 + (1/√6)y2 − (1/√3)y3
x2 = (1/√2)y1 − (1/√6)y2 + (1/√3)y3
x3 = (2/√6)y2 + (1/√3)y3
10.3.2 (a) Let λ1 , λ2 be the eigenvalues of A. If det A > 0 then ac − b2 > 0 so a and c must both have the
same sign. Thus, c > 0. We know that det A = λ1 λ2 and λ1 + λ2 = a + c and so λ1 and λ2 must
have the same sign since det A > 0 and we have λ1 + λ2 = a + c > 0 so we must have λ1 and λ2
both positive so Q is positive definite.
(b) Let λ1 , λ2 be the eigenvalues of A. If det A > 0 then ac − b2 > 0 so a and c must both have the
same sign. Thus, c < 0. We know that det A = λ1 λ2 and λ1 + λ2 = a + c and so λ1 and λ2 must
have the same sign since det A > 0 and we have λ1 + λ2 = a + c < 0 so we must have λ1 and λ2
both negative so Q is negative definite.
(c) Let λ1 , λ2 be the eigenvalues of A. If det A < 0 then det A = λ1 λ2 < 0 so λ1 and λ2 must have
different signs thus Q is indefinite.
10.3.3 If ⟨ , ⟩ is an inner product on Rn, then it is symmetric and positive definite. Observe that ~y^T A ~x is a 1 × 1 matrix, and hence it is symmetric. Thus,

~x^T A ~y = ⟨~x, ~y⟩ = ⟨~y, ~x⟩ = ~y^T A ~x = (~y^T A ~x)^T = ~x^T A^T ~y

Hence, ~x^T A ~y = ~x^T A^T ~y for all ~x, ~y ∈ Rn. Therefore, A = A^T by Theorem 3.1.4. Observe that ⟨~x, ~x⟩ = ~x^T A ~x > 0 for all ~x ≠ ~0 and ⟨~x, ~x⟩ = 0 if and only if ~x = ~0. Thus, ⟨ , ⟩ is positive definite.
We have

⟨~x, ~y⟩ = ~x^T A ~y = (~x^T A ~y)^T = ~y^T A^T ~x = ~y^T A ~x = ⟨~y, ~x⟩

So, ⟨ , ⟩ is symmetric.
We have

⟨s~x + t~y, ~w⟩ = (s~x + t~y)^T A ~w = (s~x^T + t~y^T) A ~w = s~x^T A ~w + t~y^T A ~w = s⟨~x, ~w⟩ + t⟨~y, ~w⟩
10.3.4 (a) If A is a positive definite symmetric matrix, then ~x^T A ~x > 0 for all ~x ≠ ~0. Hence, for 1 ≤ i ≤ n, we have

a_ii = ~e_i^T A ~e_i > 0

as required.
(b) If A is a positive definite symmetric matrix, then all the eigenvalues of A are positive. Thus, since
the determinant of A is the product of the eigenvalues, we have that det A > 0 and hence A is
invertible.
10.3.5 We first must observe that A + B is symmetric. Since the eigenvalues of A and B are all positive, the quadratic forms ~x^T A ~x and ~x^T B ~x are positive definite. Let ~x ≠ ~0. Then ~x^T A ~x > 0 and ~x^T B ~x > 0, so ~x^T (A + B) ~x = ~x^T A ~x + ~x^T B ~x > 0, and the quadratic form ~x^T (A + B) ~x is positive definite. Thus the eigenvalues of A + B must be positive.
Section 10.4 Solutions 213
" #
1
To sketch x12 + 8x1 x2 + x22 = 6 in the x1 x2 -plane we first draw the y1 -axis in the direction of ~v1 =
1
" #
−1
and the y2 -axis in the direction of ~v2 = . Next, to convert the asymptotes into the x1 x2 -plane,
1
" # " √ √ #" # " #
y1 T 1/ √2 1/ √2 x1 1 x1 + x2
we use the change of variables = P ~x = = √ . So, the
y2 −1/ 2 1/ 2 x2 2 −x1 + x2
asymptotes in the x1 x2 -plane are
r
5
y2 = ± y1
3
r !
1 5 1
√ (−x1 + x2 ) = ± √ 1(x + x2 )
2 3 2
√ √
3(−x1 + x2 ) = ± 5(x1 + x2 )
√ √ √ √
( 3 ∓ 5)x2 = ( 3 ± 5)x1
√ √
3± 5
x2 = √ √ x1
3∓ 5
To sketch 3x1² − 2x1x2 + 3x2² = 12 in the x1x2-plane we first draw the y1-axis in the direction of ~v1 = (−1, 1)^T and the y2-axis in the direction of ~v2 = (1, 1)^T. Plotting gives
" #
−4 2
(c) The corresponding symmetric matrix is . We find that the characteristic polynomial is
2 −7
C(λ) = (λ + 8)(λ + 3). So, we have eigenvalues λ1 = −8 and λ2 = −3.
" # " # " #
4 2 1 1/2 −1
For λ1 we get A − λ1 I = ∼ . So, a corresponding eigenvector is ~v1 = .
2 1 0 0 2
" # " # " #
−1 2 1 −2 2
For λ2 we get A − λ2 I = ∼ . So, a corresponding eigenvector is ~v2 = .
2 −4 0 0 1
Thus, we have the ellipse −8y21 − 3y22 = −8 with principal axis ~v1 for x1 and ~v2 for y1 .
Plotting this we get the graph in the y1 y2 -plane is
To sketch −4x12 + 4x1 x2 − 7x22 = −8 in the x1 x2 -plane we first draw the y1 -axis in the direction of
" # " #
−1 2
~v1 = and the y2 -axis in the direction of ~v2 = . Plotting gives
2 1
" #
−3 −2
(d) The corresponding symmetric matrix is . We find that the characteristic polynomial is
−2 0
C(λ) = (λ + 4)(λ − 1). So, we have eigenvalues λ1 = 1 and λ2 = −4.
" # " # " #
−4 −2 1 1/2 −1
For λ1 we get A − λ1 I = ∼ . So, a corresponding eigenvector is ~v1 = .
−2 −1 0 0 2
" # " # " #
1 −2 1 −2 2
For λ2 we get A − λ2 I = ∼ . So, a corresponding eigenvector is ~v2 = .
−2 4 0 0 1
" #
−1
To sketch −3x12 −4x1 x2 = 4 in the x1 x2 -plane we first draw the y1 -axis in the direction of ~v1 =
2
" #
2
and the y2 -axis in the direction of ~v2 = . Next, to convert the asymptotes into the x1 x2 -plane,
1
" # " √ √ #" # " #
y1 T −1/√ 5 2/ √5 x1 1 −x1 + 2x2
we use the change of variables = P ~x = = √ . So, the
y2 2/ 5 1/ 5 x2 5 2x1 + x2
asymptotes in the x1 x2 -plane are
1
y2 = ± y1
2 !
1 1 1
√ (2x1 + x2 ) = ± √ (−x1 + 2x2 )
5 2 5
2(2x1 + x2 ) = ±(−x1 + 2x2 )
(2 ∓ 2)x2 = (−4 ∓ 1)x1
To sketch −x1² + 4x1x2 + 2x2² = 6 in the x1x2-plane we first draw the y1-axis in the direction of ~v1 = (1, 2)^T and the y2-axis in the direction of ~v2 = (−2, 1)^T. Next, to convert the asymptotes into the x1x2-plane, we use the change of variables

(y1, y2)^T = P^T ~x = [1/√5 2/√5; −2/√5 1/√5] (x1, x2)^T = (1/√5) (x1 + 2x2, −2x1 + x2)^T

So, the asymptotes in the x1x2-plane are

y2 = ±√(3/2) y1
(1/√5)(−2x1 + x2) = ±√(3/2) · (1/√5)(x1 + 2x2)
√2(−2x1 + x2) = ±√3 (x1 + 2x2)
(√2 ∓ 2√3) x2 = (2√2 ± √3) x1
x2 = (2√2 ± √3)/(√2 ∓ 2√3) x1
To sketch 4x1² + 4x1x2 + 4x2² = 12 in the x1x2-plane we first draw the y1-axis in the direction of ~v1 = (−1, 1)^T and the y2-axis in the direction of ~v2 = (1, 1)^T. Plotting gives
"#
2 −2
(g) The corresponding symmetric matrix is A = . The eigenvalues are λ1 = 3 and λ2 =
−2 −1
" # " #
−2 1
−2 with corresponding eigenvectors ~v1 = , ~v2 = . Thus, A is diagonalized by P =
1 2
Section 10.4 Solutions 219
" √ √ # " #
−2/√ 5 1/ √5 3 0
to D = . Hence, we have the hyperbola 3y21 − 2y22 = 6 with princi-
1/ 5 2/ 5 0 2
pal axis ~v1 , and ~v2 . Since this is a hyperbola we need to graph the asymptotes. They are when
0 = 3y21 − 2y22 .
q
Hence, the equations of the asymptotes are y2 = ± 32 y1 . Graphing give the diagram below to the
left. We have " # " #" # " #
y1 T 1 −2 1 x1 1 −2x1 + x2
= P ~x = √ = √
y2 5 1 2 x2 5 x1 + 2x2
So, the asymptotes in the x1 x2 -plane are given by
r !
1 3 1
√ (x1 + 2x2 ) = ± √ (−2x1 + x2 )
5 2 5
Solving for x2 gives √
−1 ± 2 3/2
x2 = √ x1
2 ± 3/2
which is x2 ≈ 0.449x1 and x2 ≈ −4.449x1 .
Alternately, we can find that the direction vectors of the asymptotes are
" # " #" # " √ √ #
x1 1 −2 1 1 (−2 + 3/2)/
= P~x = √ √ = √ √5
x2 5 1 2 3/2 (1 + 2 3/2)/ 5
y1² + 5y2² = 5
Graphing this in the y1y2-plane gives the diagram below to the left.
The principal axes are ~v1 and ~v2. So, graphing it in the x1x2-plane gives the diagram below to the right.
Section 10.5 Solutions 221
C(λ) = (λ − 5)(λ − 3)
Hence, the eigenvalues of A are λ1 = 5 and λ2 = 3. Therefore, the maximum is 5 and the minimum
is 3.
" #
3 5
(b) The corresponding symmetric matrix is A = . Hence, we have
5 −3
√ √
C(λ) = (λ − 34)(λ + 34)
√ √ √
√ of A are λ1 =
Hence, the eigenvalues 34 and λ2 = − 34. Therefore, the maximum is 34 and
the minimum is − 34.
(c) The corresponding symmetric matrix is A = [−1 0 0; 0 8 2; 0 2 11]. Hence, we have
Hence, the eigenvalues of A are λ1 = 7, λ2 = −1, and λ3 = 12. Therefore, the maximum is 12 and
the minimum is −1.
(d) The corresponding symmetric matrix is A = [3 −2 4; −2 6 2; 4 2 3]. Hence, we have
C(λ) = (λ − 7)2 (λ + 2)
Hence, the eigenvalues of A are λ1 = 7 and λ2 = −2. Therefore, the maximum is 7 and the
minimum is −2.
(e) The corresponding symmetric matrix is A = [2 1 1; 1 2 1; 1 1 2]. Hence, we have
C(λ) = (λ − 1)2 (λ − 4)
Hence, the eigenvalues of A are λ1 = 1 and λ2 = 4. Therefore, the maximum is 4 and the minimum
is 1.
222 Section 10.6 Solutions
Next compute

~u1 = (1/σ1) A ~v1 = (1/√15) [1 3; 2 1; 1 1] (2/√13, 3/√13)^T = (11/√195, 7/√195, 5/√195)^T
~u2 = (1/σ2) A ~v2 = (1/√2) [1 3; 2 1; 1 1] (−3/√13, 2/√13)^T = (3/√26, −4/√26, −1/√26)^T

We then need to extend {~u1, ~u2} to an orthonormal basis for R3. Since we are in R3, we can use the cross product. We have

(11, 7, 5)^T × (3, −4, −1)^T = (13, 26, −65)^T

So, we can take ~u3 = (1/√30, 2/√30, −5/√30)^T. Thus, we have

U = [11/√195 3/√26 1/√30; 7/√195 −4/√26 2/√30; 5/√195 −1/√26 −5/√30]

Then, A = UΣV^T as required.
" #
12 −6
(e) We have AT A = . The eigenvalues of AT A are (ordered from greatest to least) λ1 = 15
−6 3
and λ2 = 0.
√ # " " √ #
−2/√ 5 1/ √5
Corresponding normalized eigenvectors are ~v1 = for λ1 and ~v2 = for λ2 .
1/ 5 2/ 5
" √ √ #
T −2/√ 5 1/ √5
Hence, A A is orthogonally diagonalized by V = .
1/ 5 2/ 5
√
√ 15 0
The singular values of A are σ1 = 15 and σ2 = 0. Thus, the matrix Σ is Σ = 0 0.
0 0
Next compute √
√ # −1/ 3
2 −1 " √
1 1 −2/√ 5
~u1 = A~v1 = √ 2 −1
= −1/ 3
σ1 15 2 −1 1/ 5 √
−1/ 3
We then need to extend {~u1 } to an orthonormal basis for R3 . One way of doing this is to find an
−1 −1
T T
orthonormal basis for Null(A ). A basis for Null(A ) is 1 , 0 . To make this an orthogonal
0
1
−1
basis, we apply the Gram-Schmidt procedure. We take ~v1 = 1 , and then we get
0
−1 −1 −1/2
0 − 1 1 = −1/2
2
1 0 1
√ √
−1/ 2 −1/ 6
√ √
Thus, we pick ~u2 = 1/ 2 and ~u3 = −1/ 6. Consequently, we have
√
0
2 6
√ √ √
−1/ 3 −1/ 2 −1/ 6
√ √ √
U = −1/ 3 1/ 2 −1/ 6 and A = UΣV T as required.
√ √
−1/ 3
0 2/ 6
(f) We have A^T A = [3 0 0; 0 3 0; 0 0 3]. The only eigenvalue of A^T A is λ1 = 3 with multiplicity 3.
Next compute

~u1 = (1/σ1) A ~v1 = (1/√3) (1, 1, −1, 0)^T
~u2 = (1/σ2) A ~v2 = (1/√3) (0, 1, 1, 1)^T
~u3 = (1/σ3) A ~v3 = (1/√3) (1, −1, 0, 1)^T

We then need to extend {~u1, ~u2, ~u3} to an orthonormal basis for R4. We find that a basis for Null(A^T) is {(−1, 0, −1, 1)^T}. After normalizing this vector, we take

U = [1/√3 0 1/√3 −1/√3; 1/√3 1/√3 −1/√3 0; −1/√3 1/√3 0 −1/√3; 0 1/√3 1/√3 1/√3]

and A = UΣV^T as required.
" #
T 4 8
(g) We have A A = . The eigenvalue of AT A are (ordered from greatest to least) λ1 = 20 and
8 16
" √ # " √ #
1/ √5 −2/√ 5
λ2 = 0. Corresponding normalized eigenvectors are ~v1 = for λ1 and ~v2 = for λ2 .
2/ 5 1/ 5
T
Hence, " A√ A is orthogonally √ # diagonalized by
1/ √5 −2/√ 5 √
V= . The singular values of A are σ1 = 20 and σ2 = 1. Thus the matrix Σ is
2/ 5 1/ 5
√
20 0
0 0
Σ = . Next compute
0 0
0 0
1 2 " √ # 1/2
1 1 1 2 1/ √5 1/2
~u1 = A~v1 = √ =
σ1 20 1 2 2/ 5 1/2
1 2 1/2
~u2 = (−1/2, −1/2, 1/2, 1/2)^T, ~u3 = (1/2, −1/2, 1/2, −1/2)^T, ~u4 = (1/2, −1/2, −1/2, 1/2)^T
Thus, we take

~u1 = (1/σ1) A ~v1 = (1/√45) [4 3; 4 2; −2 4] (4/5, 3/5)^T = (5/√45, 22/√1125, 4/√1125)^T
~u2 = (1/σ2) A ~v2 = (1/√20) [4 3; 4 2; −2 4] (−3/5, 4/5)^T = (0, −2/√125, 11/√125)^T

Thus, we have U = [~u1 ~u2 ~u3]. Then, A = (A^T)^T = (UΣV^T)^T = VΣ^T U^T.
" #
T 40 0
10.6.2 (a) We have A A = . Thus, the eigenvalues of AT A are λ1 = 40 and λ2 = 10. Hence, the
0 10
√
maximum of kA~xk subject to k~xk = 1 is 40.
" #
T 6 −3
(b) We have A A = . Thus,
−3 14
10.6.5 Let U = [~u1 · · · ~um] and V = [~v1 · · · ~vn].
By Lemma 10.6.6, {~u1, . . . , ~ur} forms an orthonormal basis for Col(A).
By the Fundamental Theorem of Linear Algebra, the nullspace of A^T is the orthogonal complement of Col(A). We are given that {~u1, . . . , ~um} forms an orthonormal basis for Rm and hence a basis for the orthogonal complement of Span{~u1, . . . , ~ur} is B = {~ur+1, . . . , ~um}. Thus B is a basis for Null(A^T).
By Theorem 10.6.1, we have that ‖A~vi‖ = σi. But, since UΣV^T is a singular value decomposition of A, we have that σr+1, . . . , σn are the zero singular values of A. Hence, {~vr+1, . . . , ~vn} is an orthonormal set of n − r vectors in the nullspace of A. But, since A has rank r, we know by the Rank-Nullity Theorem that dim Null(A) = n − r. So {~vr+1, . . . , ~vn} is a basis for Null(A).
Hence, by the Fundamental Theorem of Linear Algebra we have that the orthogonal complement of Null(A) is Row(A). Hence {~v1, . . . , ~vr} forms a basis for Row(A).
10.6.6 (a) We get A^T A = [4 2 2; 2 5 1; 2 1 5] which has eigenvalues λ1 = 8, λ2 = 4, and λ3 = 2 and corresponding unit eigenvectors ~v1 = (1/√3, 1/√3, 1/√3)^T, ~v2 = (0, −1/√2, 1/√2)^T, and ~v3 = (−2/√6, 1/√6, 1/√6)^T. Thus, B = {~v1, ~v2, ~v3}.
(b) We have

~u1 = (1/σ1) A ~v1 = (1/√8) [2 1 1; 0 2 0; 0 0 2] (1/√3, 1/√3, 1/√3)^T = (4/√24, 2/√24, 2/√24)^T
~u2 = (1/σ2) A ~v2 = (1/2) [2 1 1; 0 2 0; 0 0 2] (0, −1/√2, 1/√2)^T = (0, −1/√2, 1/√2)^T
~u3 = (1/σ3) A ~v3 = (1/√2) [2 1 1; 0 2 0; 0 0 2] (−2/√6, 1/√6, 1/√6)^T = (−2/√12, 2/√12, 2/√12)^T

Hence,

C[L]B = [√8 0 0; 0 2 0; 0 0 √2]
A = UΣV^T = [~u1 · · · ~um] Σ [~v1^T; ⋮; ~vn^T]
= [σ1~u1 · · · σr~ur ~0 · · · ~0] [~v1^T; ⋮; ~vn^T]
= σ1~u1~v1^T + · · · + σr~ur~vr^T
Chapter 11 Solutions
2iz = 4
z = −2i
(b) We have

1 − z = 1/(1 − 5i) = (1 + 5i)/26
z = 1 − (1 + 5i)/26 = 25/26 − (5/26)i
Section 11.1 Solutions 231
(c) We have

(1 + i)z + (2 + i)z = 2
(3 + 2i)z = 2
z = 2/(3 + 2i)
z = 6/13 − (4/13)i
11.1.3 (a) Row reducing gives

[1 2+i i 0 | 1+i; i −1+2i 0 2i | −i; 1 2+i 1+i 2i | 2−i] ∼ [1 2+i 0 2 | −1; 0 0 1 2i | 1−2i; 0 0 0 0 | 0]

Hence, the system is consistent with two parameters. Let z2 = s ∈ C and z4 = t ∈ C. Then, the general solution is

~z = (−1, 0, 1−2i, 0)^T + s(−2−i, 1, 0, 0)^T + t(−2, 0, −2i, 1)^T
(b) Row reducing gives

[i 2 −3−i | 1; 1+i 2−2i −4 | i; i 2 −3−3i | 1+2i] ∼ [1 −2i −1+i | −1+i; 0 0 1 | −1; 0 0 0 | i]

Hence, the system is inconsistent.
(c) Row reducing gives
i 1+i 1 2i −iR3 1 1−i −i 2
1 − i 1 − 2i −2 + i −2 + i
∼
1 − i 1 − 2i −2 + i −2 +i R2 + (1 − i)R1 ∼
2i 2i 2 4 + 2i − 21 iR3 1 1 −i 1 − 2i R3 − R1
1 1 − i −i 2 R1 − (1 − i)R2 1 0 −1 − 4i 3 − 7i
0 1 −1 + 2i −4 + 3i ∼ 0 1 −1 + 2i −4 + 3i ∼
1
0 i 0 −1 − 2i R3 − iR2 0 0 2+i 2 + 2i 2+i R3
Hence, the system is consistent with one parameter. Let z3 = t ∈ C. Then, the general solution is

~z = (i, −i, 0)^T + t(−1−i, −1, 1)^T
(e) We have
11.1.5 Let z1 = a + bi and z2 = c + di. If z1 + z2 is a negative real number, then b = −d and a + c < 0. Also, we
have z1 z2 = (ac − bd) + (ad + bc)i is a negative real number, so ad + bc = 0 and ac − bd < 0. Combining
these we get
0 = ad + bc = ad − dc = d(a − c)
so either d = 0 or a = c. If a = c, then ac − bd < 0 implies that a2 + b2 < 0 which is impossible. Hence,
we must have d = 0. But then b = −d = 0 so z1 and z2 are real numbers.
11.1.6 Let z = a + bi. Then a² + b² = 1. Hence

1/(1 − z) = 1/(1 − a − bi) = (1 − a + bi)/((1 − a)² + b²)
= (1 − a + bi)/(1 − 2a + a² + b²) = (1 − a + bi)/(2 − 2a)
= (1 − a)/(2 − 2a) + (b/(2 − 2a)) i

Hence, Re(1/(1 − z)) = 1/2.
11.1.7 Let z = a + bi. We will prove this by induction on n. If n = 1, then z = z. Assume that zk = (z)k for
some integer k. Then, by Theorem 5.1.3 part 5,
as required.
α~z = (αz1, αz2, αz3)^T ∈ S1 since i(αz1) = α(iz1) = αz3. Therefore, by the Subspace Test, S1 is a subspace of C3.
Every ~z ∈ S1 has the form

(z1, z2, z3)^T = (z1, z2, iz1)^T = z1(1, 0, i)^T + z2(0, 1, 0)^T

Thus, B = {(1, 0, i)^T, (0, 1, 0)^T} spans S1 and is clearly linearly independent, so it is a basis for S1. Consequently, dim S1 = 2.
(b) Observe that (1, 0, 0)^T ∈ S2 since 1(0) = 0 and (0, 1, 0)^T ∈ S2 since 0(1) = 0. However, (1, 0, 0)^T + (0, 1, 0)^T = (1, 1, 0)^T ∉ S2 since 1(1) ≠ 0. Consequently, S2 is not closed under addition and hence is not a subspace.
(c) By definition S3 is a subset of C3 and ~0 ∈ S3 since 0 + 0 + 0 = 0. Let ~z = (z1, z2, z3)^T, ~y = (y1, y2, y3)^T ∈ S3. Then, z1 + z2 + z3 = 0 and y1 + y2 + y3 = 0. Hence, ~z + ~y = (z1 + y1, z2 + y2, z3 + y3)^T ∈ S3 since (z1 + y1) + (z2 + y2) + (z3 + y3) = (z1 + z2 + z3) + (y1 + y2 + y3) = 0 + 0 = 0. Similarly, α~z = (αz1, αz2, αz3)^T ∈ S3 since αz1 + αz2 + αz3 = α(z1 + z2 + z3) = α(0) = 0. Therefore, by the Subspace Test, S3 is a subspace of C3.
Every ~z ∈ S3 has the form

(z1, z2, z3)^T = (z1, z2, −z1 − z2)^T = z1(1, 0, −1)^T + z2(0, 1, −1)^T

Thus, B = {(1, 0, −1)^T, (0, 1, −1)^T} spans S3 and is clearly linearly independent, so it is a basis for S3. Consequently, dim S3 = 2.
11.2.2 (a) Row reducing the matrix A to its reduced row echelon form R gives

[1 i; 1+i −1+i; −1 i] ∼ [1 0; 0 1; 0 0]
Section 11.2 Solutions 235
The non-zero rows of R form a basis for the rowspace of A. Hence, a basis for Row(A) is {(1, 0)^T, (0, 1)^T}.
The columns from A which correspond to columns in R which have leading ones form a basis for the columnspace of A. So, a basis for Col(A) is

{(1, 1+i, −1)^T, (i, −1+i, i)^T}

Solving the homogeneous system A~z = ~0, we find that a basis for Null(A) is the empty set.
To find a basis for the left nullspace, we row reduce A^T. We get

[1 1+i −1; i −1+i i] ∼ [1 1+i 0; 0 0 1]
(b) Row reducing the matrix B to its reduced row echelon form R gives

[1 1+i −1; i −1+i i] ∼ [1 1+i 0; 0 0 1]

The non-zero rows of R form a basis for the rowspace of B. Hence, a basis for Row(B) is {(1, 1+i, 0)^T, (0, 0, 1)^T}.
The columns from B which correspond to columns in R which have leading ones form a basis for the columnspace of B. So, a basis for Col(B) is {(1, i)^T, (−1, i)^T}.
Solving the homogeneous system B~z = ~0, we find that a basis for Null(B) is {(−1−i, 1, 0)^T}.
(c) Row reducing the matrix A to its reduced row echelon form R gives

[1 1+i 3; 0 2 2−2i; i 1−i −i] ∼ [1 0 1; 0 1 1−i; 0 0 0]

The non-zero rows of R form a basis for the rowspace of A. Hence, a basis for Row(A) is {(1, 0, 1)^T, (0, 1, 1−i)^T}.
The columns from A which correspond to columns in R which have leading ones form a basis for the columnspace of A. So, a basis for Col(A) is {(1, 0, i)^T, (1+i, 2, 1−i)^T}.
Solving the homogeneous system A~z = ~0, we find that a basis for Null(A) is {(−1, −1+i, 1)^T}.
Thus L is linear.
" # " # " #
−i 1+i 1 + 2i
We have L(1, 0, 0) = , L(0, 1, 0) = , and L(0, 0, 1) = . Hence [L] =
−1 + i −2i −3i
" #
−i 1 + i 1 + 2i
.
−1 + i −2i −3i
(b) Row reducing [L] we get
" # " #
−i 1 + i 1 + 2i 1 −1 + i 0
∼
−1 + i −2i −3i 0 0 1
1 − i
A basis for ker(L) is the general solution of [L]~x = ~0, hence a basis is
1 .
0
(" # " #)
−i 1 + 2i
The range of L is equal to the columnspace of [L]. Thus, a basis for the range of L is , .
−1 + i −3i
11.2.5 (a) Let ~w, ~z ∈ C2 and α, β ∈ C. Then

L(α~z + β~w) = L(αz1 + βw1, αz2 + βw2) = (αz1 + βw1 + (1+i)(αz2 + βw2), (1+i)(αz1 + βw1) + 2i(αz2 + βw2))^T
= α(z1 + (1+i)z2, (1+i)z1 + 2iz2)^T + β(w1 + (1+i)w2, (1+i)w1 + 2iw2)^T
= αL(~z) + βL(~w)
Hence, L is linear.
" #
z
(b) If ~z = 1 ∈ ker(L), then
z2
" # " #
0 z + (1 + i)z2
= L(~z) = 1
0 (1 + i)z1 + 2iz2
Hence, we need to solve the homogeneous system of equations
z1 + (1 + i)z2 = 0
(1 + i)z1 + 2iz2 = 0
Hence,

[L]B = [0 0; 0 1+2i]
11.2.6 We have
1 −1 i 1 0 0
1 1
1 + i −i i = 1 + i 1 1 =
=0
i i
1 − i −1 + 2i 1 + 2i 1 − i i i
11.2.7 We have

[1 1 2 | 1 0 0; 0 1+i 1 | 0 1 0; i 1+i 1+2i | 0 0 1] ∼ [1 0 0 | 2+2i −i −2+i; 0 1 0 | 1 −i i; 0 0 1 | −1−i i 1−i]

Hence,

[1 1 2; 0 1+i 1; i 1+i 1+2i]^{-1} = [2+2i −i −2+i; 1 −i i; −1−i i 1−i]
Section 11.3 Solutions 239
Assume the result holds for n − 1 × n − 1 matrices and consider an n × n matrix A. If we expand det A
along the first row, we get by definition of the determinant
n
X
det A = a1iC1i (A)
i=1
where C1i (A) represents the cofactors of A. But, each of these cofactors is the determinant of an n − 1 ×
n − 1 matrix, so we have by our inductive hypothesis that C1i (A) = C1i (A). Hence,
n
X n
X
det A = a1iC1i (A) = a1iC1i (A) = det A
i=1 i=1
Solving λ² + 16 = 0 we find that the eigenvalues are λ1 = 4i and λ2 = −4i. For λ1 = 4i we get

A − 4iI = [3−4i 5; −5 −3−4i] ∼ [5 3+4i; 0 0]

Hence, a basis for Eλ1 is {(3+4i, −5)^T}. For λ2 = −4i we get

A + 4iI = [3+4i 5; −5 −3+4i] ∼ [5 3−4i; 0 0]

Hence, a basis for Eλ2 is {(3−4i, −5)^T}.
Therefore, A is diagonalized by P = [3+4i 3−4i; −5 −5] to D = [4i 0; 0 −4i].
(c) We have

C(λ) = |2−λ 2 −1; −4 1−λ 2; 2 2 −1−λ| = −λ(λ² − 2λ + 5)

Hence, the eigenvalues are λ1 = 0 and the roots of λ² − 2λ + 5. By the quadratic formula, we get that λ2 = 1 + 2i and λ3 = 1 − 2i. For λ1 = 0 we get

A − 0I = [2 2 −1; −4 1 2; 2 2 −1] ∼ [1 0 −1/2; 0 1 0; 0 0 0]

Hence, a basis for Eλ1 is {(1, 0, 2)^T}. For λ2 = 1 + 2i we get

A − λ2 I = [1−2i 2 −1; −4 −2i 2; 2 2 −2−2i] ∼ [1 0 −1; 0 1 −i; 0 0 0]

Hence, a basis for Eλ2 is {(1, i, 1)^T}. For λ3 = 1 − 2i we get

A − λ3 I = [1+2i 2 −1; −4 2i 2; 2 2 −2+2i] ∼ [1 0 −1; 0 1 i; 0 0 0]

Hence, a basis for Eλ3 is {(1, −i, 1)^T}.
Therefore, A is diagonalized by P = [1 1 1; 0 i −i; 2 1 1] to D = diag(0, 1+2i, 1−2i).
(d) We have

C(λ) = |1−λ i; i −1−λ| = λ²

Hence, the only eigenvalue is λ1 = 0.
Hence, the eigenvalues are λ1 = 1 and the roots of λ² − 4λ + 5. By the quadratic formula, we get that λ2 = 2 + i and λ3 = 2 − i. For λ1 = 1 we get

A − I = [1 1 −1; 2 0 0; 3 −1 1] ∼ [1 0 0; 0 1 −1; 0 0 0]

Hence, a basis for Eλ1 is {(0, 1, 1)^T}. For λ2 = 2 + i we get

A − λ2 I = [−i 1 −1; 2 −1−i 0; 3 −1 −i] ∼ [1 0 −(1+2i)/5; 0 1 −(3+i)/5; 0 0 0]

Hence, a basis for Eλ2 is {(1+2i, 3+i, 5)^T}. For λ3 = 2 − i we get

A − λ3 I = [i 1 −1; 2 −1+i 0; 3 −1 i] ∼ [1 0 −(1−2i)/5; 0 1 −(3−i)/5; 0 0 0]

Hence, a basis for Eλ3 is {(1−2i, 3−i, 5)^T}.
Therefore, A is diagonalized by P = [0 1+2i 1−2i; 1 3+i 3−i; 1 5 5] to D = diag(1, 2+i, 2−i).
(f) We have

C(λ) = |1 + i − λ, 1, 0; 1, 1 − λ, −i; 1, 0, 1 − λ| = |1 + i − λ, 1, 0; 0, 1 − λ, λ − 1 − i; 1, 0, 1 − λ|

performing R₂ → R₂ − R₃. Expanding along the first column gives

C(λ) = (1 + i − λ)(1 − λ)² + (λ − 1 − i) = −(λ − (1 + i))[λ² − 2λ + 1 − 1] = −(λ − (1 + i))(λ − 2)λ

Hence, the eigenvalues are λ₁ = 1 + i, λ₂ = 2, and λ₃ = 0. For λ₁ = 1 + i we get

A − λ₁I = [0, 1, 0; 1, −i, −i; 1, 0, −i] ∼ [1, 0, −i; 0, 1, 0; 0, 0, 0]

Hence, a basis for E_λ₁ is { [i, 0, 1]^T }. For λ₂ = 2 we get

A − λ₂I = [−1 + i, 1, 0; 1, −1, −i; 1, 0, −1] ∼ [1, 0, −1; 0, 1, −1 + i; 0, 0, 0]

Hence, a basis for E_λ₂ is { [1, 1 − i, 1]^T }. For λ₃ = 0 we get

A − λ₃I = [1 + i, 1, 0; 1, 1, −i; 1, 0, 1] ∼ [1, 0, 1; 0, 1, −1 − i; 0, 0, 0]

Hence, a basis for E_λ₃ is { [−1, 1 + i, 1]^T }.

Therefore, A is diagonalized by P = [i, 1, −1; 0, 1 − i, 1 + i; 1, 1, 1] to D = [1 + i, 0, 0; 0, 2, 0; 0, 0, 0].
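A short numerical check of (f), as a sketch: A is read off from the expression for A − λ₃I = A − 0I in the solution, and P⁻¹AP should recover diag(1 + i, 2, 0).

```python
import numpy as np

# A is recovered from A - 0I in the solution above.
A = np.array([[1 + 1j, 1, 0],
              [1, 1, -1j],
              [1, 0, 1]], dtype=complex)
P = np.array([[1j, 1, -1],
              [0, 1 - 1j, 1 + 1j],
              [1, 1, 1]], dtype=complex)
D = np.linalg.inv(P) @ A @ P
```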
(g) We have

C(λ) = |5 − λ, −1 + i, 2i; −2 − 2i, 2 − λ, 1 − i; 4i, −1 − i, −1 − λ| = |1 − λ, 0, i − iλ; −2 − 2i, 2 − λ, 1 − i; 4i, −1 − i, −1 − λ|

performing R₁ → R₁ + iR₃. Next, performing C₃ → C₃ − iC₁ gives

C(λ) = |1 − λ, 0, 0; −2 − 2i, 2 − λ, −1 + i; 4i, −1 − i, 3 − λ| = (1 − λ)[λ² − 5λ + 4] = −(λ − 1)²(λ − 4)

Hence, the eigenvalues are λ₁ = 1 and λ₂ = 4. For λ₁ = 1 we get

A − λ₁I = [4, −1 + i, 2i; −2 − 2i, 1, 1 − i; 4i, −1 − i, −2] ∼ [1, (−1 + i)/4, i/2; 0, 0, 0; 0, 0, 0]

Hence, a basis for E_λ₁ is { [1 − i, 4, 0]^T, [−i, 0, 2]^T }. For λ₂ = 4 we get

A − λ₂I = [1, −1 + i, 2i; −2 − 2i, −2, 1 − i; 4i, −1 − i, −5] ∼ [1, 0, i; 0, 1, (1 − i)/2; 0, 0, 0]

Hence, a basis for E_λ₂ is { [−2i, −1 + i, 2]^T }.

Therefore, A is diagonalized by P = [1 − i, −i, −2i; 4, 0, −1 + i; 0, 2, 2] to D = [1, 0, 0; 0, 1, 0; 0, 0, 4].
(h) We have
−6 − 3i − λ −2 −3 − 2i i − λ −2 −3 − 2i
C(λ) = 10 2−λ 5 =
0 2−λ 5
8 + 6i 3 4 + 4i − λ −2i + 2λ 3 4 + 4i − λ
i − λ −2 −3 − 2i
= 0 2−λ 5
0 −1 −2 − λ
= (i − λ)(λ2 − 1) = −(λ − i)2 (λ + i)
244 Section 11.3 Solutions
−1
Hence, a basis for Eλ1 is 0 .
2
Observe that if sin θ = 0, then Rθ is diagonal, so we can just take P = I. So, we now assume that sin θ ≠ 0.

For λ₁ = cos θ + i sin θ, we get

Rθ − λ₁I = [−i sin θ, −sin θ; sin θ, −i sin θ] ∼ [1, −i; 0, 0]

so an eigenvector for λ₁ is [i, 1]^T. Similarly, for λ₂ = cos θ − i sin θ, we get

Rθ − λ₂I = [i sin θ, −sin θ; sin θ, i sin θ] ∼ [1, i; 0, 0]

so an eigenvector for λ₂ is [−i, 1]^T.

Thus, P = [i, −i; 1, 1] and D = [cos θ + i sin θ, 0; 0, cos θ − i sin θ].

(b) We have R₀ = [1, 0; 0, 1], which is already diagonal, so R₀ is diagonalized by P = I as we stated in (a).

Rπ/4 = [1/√2, −1/√2; 1/√2, 1/√2]. From our work in (a), Rπ/4 is diagonalized by P = [i, −i; 1, 1] to D = [(1 + i)/√2, 0; 0, (1 − i)/√2]. Indeed, we find that

P⁻¹ Rπ/4 P = (1/2)[−i, 1; i, 1] [1/√2, −1/√2; 1/√2, 1/√2] [i, −i; 1, 1] = [(1 + i)/√2, 0; 0, (1 − i)/√2]
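The rotation-matrix diagonalization can be verified numerically for θ = π/4; note that cos θ ± i sin θ = e^{±iθ}, which the sketch below uses.

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P = np.array([[1j, -1j], [1, 1]])

# P^{-1} R P should be diag(e^{i theta}, e^{-i theta}).
D = np.linalg.inv(P) @ R @ P
```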
Section 11.4 Solutions 245
11.3.5 If A is a 3 × 3 real matrix with a non-real eigenvalue λ, then by Theorem 11.3.1, the conjugate conj(λ) is also an eigenvalue of A. Also, by Corollary 11.3.2, we have that A must have a real eigenvalue. Therefore, A has 3 distinct eigenvalues and hence is diagonalizable.
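The conjugate-pair behaviour is easy to see numerically. The matrix below is a hypothetical example chosen for illustration (it is not from the exercises): its eigenvalues are i, −i, and 2, so the non-real eigenvalues occur in a conjugate pair and one eigenvalue is real.

```python
import numpy as np

# Hypothetical real 3x3 matrix with non-real eigenvalues (illustration only).
A = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 2.]])
eigs = np.linalg.eigvals(A)
```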
(g) ⟨w, z⟩ = w · conj(z) = conj(z · conj(w)) = conj(⟨z, w⟩) = 5 − 5i

(h) ⟨u + z, 2iw − iz⟩ = ⟨[3, 0, 2 − 2i]^T, [−2, 2 + 2i, −3 − i]^T⟩ = 3(−2) + 0(2 − 2i) + (2 − 2i)(−3 + i) = −10 + 8i
(b) We have
⟨B, A⟩ = i(1) + 2(−2i) + (1 − i)(i) + 3(1 − i) = 4 − 5i
(c) We have
‖A‖ = √⟨A, A⟩ = √(1(1) + 2i(−2i) + (−i)(i) + (1 + i)(1 − i)) = √8
(d) We have
‖B‖ = √⟨B, B⟩ = √(i(−i) + 2(2) + (1 − i)(1 + i) + 3(3)) = √16 = 4
and
u_i* u_j = conj(u_i)^T u_j = ⟨u_j, u_i⟩ = 0
whenever i ≠ j. Thus, U*U = I as required.
11.4.5 (a) We have
(c) Note that Span{z, w} = Span{z, v}, where v is the vector from part (b) and {z, v} is orthogonal over C, so we have

proj_S u = (⟨u, z⟩ / ‖z‖²) z + (⟨u, v⟩ / ‖v‖²) v
11.4.6 Denote the vectors in the spanning set by A₁, A₂, and A₃ respectively.
Let B₁ = A₁ = [1, i; 0, 0]. Next, we get

A₂ − (⟨A₂, B₁⟩ / ‖B₁‖²) B₁ = [i, 0; 0, i] − (i/2)[1, i; 0, 0] = [i/2, 1/2; 0, i]

So, we let B₂ = [i, 1; 0, 2i]. Next, we get

B₃ = A₃ − (⟨A₃, B₁⟩ / ‖B₁‖²) B₁ − (⟨A₃, B₂⟩ / ‖B₂‖²) B₂ = [0, 2; 0, i] − (−2i/2)[1, i; 0, 0] − (4/6)[i, 1; 0, 2i] = [i/3, 1/3; 0, −i/3]

So, an orthogonal basis for S is {B₁, B₂, B₃}.
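The Gram-Schmidt computation can be replayed numerically with the inner product ⟨X, Y⟩ = Σ x_ij conj(y_ij). The matrices A₁, A₂, A₃ below are taken from the computation above (a reconstruction of the spanning set, so treat them as assumptions); note the solution scales B₂ by 2, which does not affect orthogonality.

```python
import numpy as np

def ip(X, Y):
    # <X, Y> = sum of x_ij * conj(y_ij)
    return np.sum(X * np.conj(Y))

# Spanning set as reconstructed in the solution above (assumed).
A1 = np.array([[1, 1j], [0, 0]], dtype=complex)
A2 = np.array([[1j, 0], [0, 1j]], dtype=complex)
A3 = np.array([[0, 2], [0, 1j]], dtype=complex)

B1 = A1
B2 = A2 - (ip(A2, B1) / ip(B1, B1)) * B1
B3 = A3 - (ip(A3, B1) / ip(B1, B1)) * B1 - (ip(A3, B2) / ip(B2, B2)) * B2
```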
11.4.7 (a) We have

(A*)* = conj(A*)^T = conj(conj(A)^T)^T = (A^T)^T = A

(b) We have

(αA)* = conj(αA)^T = (conj(α) conj(A))^T = conj(α) conj(A)^T = conj(α) A*

(c) We have

(AB)* = conj(AB)^T = (conj(A) conj(B))^T = conj(B)^T conj(A)^T = B* A*

(d) For any z, w ∈ Cⁿ we have

⟨Az, w⟩ = (Az) · conj(w) = z^T A^T conj(w) = z^T conj(conj(A)^T w) = z · conj(A* w) = ⟨z, A* w⟩
11.4.8 (a) For any z, w ∈ Cⁿ we have

⟨Uz, Uw⟩ = (Uz)^T conj(Uw) = z^T U^T conj(U) conj(w) = z^T conj(U*U) conj(w) = z^T conj(w) = ⟨z, w⟩
(b) Suppose λ is an eigenvalue of U with eigenvector v. We get ‖Uv‖² = ⟨Uv, Uv⟩ = ⟨v, v⟩ = ‖v‖² by part (a). But we also have ‖Uv‖² = ‖λv‖² = ⟨λv, λv⟩ = λ conj(λ) ⟨v, v⟩ = |λ|² ‖v‖², so since ‖v‖ ≠ 0, we must have |λ| = 1.

(c) The matrix U = [i, 0; 0, i] is unitary since U*U = I, and its only eigenvalue is i, with multiplicity 2.
11.4.9 If ⟨u, v⟩ = 0, then we have

‖u + v‖² = ⟨u + v, u + v⟩ = ⟨u, u + v⟩ + ⟨v, u + v⟩ = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩ = ‖u‖² + 0 + 0 + ‖v‖² = ‖u‖² + ‖v‖²

The converse is not true. For a counterexample, consider V = C with its standard inner product and let u = 1 + i and v = 1 − i. Then ‖u + v‖² = ‖2‖² = 4 and ‖u‖² + ‖v‖² = 2 + 2 = 4, but ⟨u, v⟩ = (1 + i)(1 + i) = 2i ≠ 0.
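The counterexample is easy to confirm numerically; with ⟨u, v⟩ = u conj(v) on C, both sides of the Pythagorean identity equal 4 while the inner product is 2i.

```python
import numpy as np

u, v = 1 + 1j, 1 - 1j
inner = u * np.conj(v)            # standard inner product on C
lhs = abs(u + v) ** 2             # ||u + v||^2
rhs = abs(u) ** 2 + abs(v) ** 2   # ||u||^2 + ||v||^2
```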
248 Section 11.5 Solutions
Thus, T is also skew-Hermitian. Since T is upper triangular, we have that T* is lower triangular. Consequently, T is both upper and lower triangular and hence diagonal.
So, U*AU = T is diagonal as required.
(b) If A is skew-Hermitian, then A* = −A and hence

λ⟨z, z⟩ = ⟨λz, z⟩ = ⟨Az, z⟩ = ⟨z, A*z⟩ = ⟨z, −Az⟩ = ⟨z, −λz⟩ = −conj(λ)⟨z, z⟩

Since ⟨z, z⟩ ≠ 0, this gives λ = −conj(λ), so λ is purely imaginary.
1 = ‖t₁‖ = |t₁₁|
Therefore, t₁₃ = t₂₃ = 0. Continuing in this way we get that T is diagonal and all of the diagonal entries satisfy |tᵢᵢ| = 1.
So, U*AU = T is diagonal as required.
AA∗ = I = A∗ A
(b) We have C(λ) = λ² − 5λ + 4 = (λ − 4)(λ − 1). Thus, the eigenvalues are λ₁ = 4 and λ₂ = 1. For λ₁ = 4 we get

A − λ₁I = [−2, 1 + i; 1 − i, −1]  ⇒  z₁ = [1 + i, 2]^T

For λ₂ = 1 we get

A − λ₂I = [1, 1 + i; 1 − i, 2]  ⇒  z₂ = [−1 − i, 1]^T

Hence we have D = [4, 0; 0, 1] and U = [(1 + i)/√6, (−1 − i)/√3; 2/√6, 1/√3].
(c) We have C(λ) = λ² − 8λ + 12 = (λ − 2)(λ − 6). Thus, the eigenvalues are λ₁ = 2 and λ₂ = 6. For λ₁ = 2 we get

A − λ₁I = [3, √2 − i; √2 + i, 1]  ⇒  z₁ = [−√2 + i, 3]^T

For λ₂ = 6 we get

A − λ₂I = [−1, √2 − i; √2 + i, −3]  ⇒  z₂ = [√2 − i, 1]^T

Hence we have D = [2, 0; 0, 6] and U = [(−√2 + i)/√12, (√2 − i)/2; 3/√12, 1/2].
(d) We have

C(λ) = |1 − λ, 0, 1; 1, 1 − λ, 0; 0, 1, 1 − λ| = (1 − λ)³ + 1 = −λ³ + 3λ² − 3λ + 2

Since A is a 3 × 3 real matrix, we know that A must have at least one real eigenvalue. We try the Rational Roots Theorem to see if the real root is rational. The possible rational roots are ±1 and ±2. We find that λ₁ = 2 is a root. Then, by the Factor Theorem and polynomial division, we get

−λ³ + 3λ² − 3λ + 2 = −(λ − 2)(λ² − λ + 1)

Using the quadratic formula, we find that the other eigenvalues are λ₂ = 1/2 + (√3/2)i and λ₃ = 1/2 − (√3/2)i.

For λ₁ we get

A − 2I = [−1, 0, 1; 1, −1, 0; 0, 1, −1] ∼ [1, 0, −1; 0, 1, −1; 0, 0, 0]

Thus, a corresponding unit eigenvector is z₁ = [1/√3, 1/√3, 1/√3]^T.

For λ₂ we get

A − λ₂I = [1/2 − (√3/2)i, 0, 1; 1, 1/2 − (√3/2)i, 0; 0, 1, 1/2 − (√3/2)i] ∼ [1, 0, −2/(−1 + √3 i); 0, 1, 2/(1 + √3 i); 0, 0, 0]

Thus, a corresponding unit eigenvector is z₂ = [2/(−√3 + 3i), −2/(√3 + 3i), 1/√3]^T.
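The eigenvalues found in (d) can be confirmed numerically; A = [1, 0, 1; 1, 1, 0; 0, 1, 1] is read off from the determinant above.

```python
import numpy as np

A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=complex)

eigs = np.sort_complex(np.linalg.eigvals(A))
expected = np.sort_complex(np.array([2,
                                     0.5 + (np.sqrt(3) / 2) * 1j,
                                     0.5 - (np.sqrt(3) / 2) * 1j]))
z1 = np.ones(3) / np.sqrt(3)   # unit eigenvector for lambda_1 = 2
```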
(e) We have C(λ) = −(λ − 2)(λ² − λ − 2) = −(λ − 2)²(λ + 1), so we get eigenvalues λ = 2 and λ = −1. For λ = 2 we get

C − λI = [−1, 0, 1 + i; 0, 0, 0; 1 − i, 0, −2]  ⇒  z₁ = [0, 1, 0]^T, z₂ = [1 + i, 0, 1]^T

For λ = −1 we get

C − λI = [2, 0, 1 + i; 0, 3, 0; 1 − i, 0, 1]  ⇒  z₃ = [1 + i, 0, −2]^T
B*B = (A*A⁻¹)*(A*A⁻¹) = (A⁻¹)* A A* A⁻¹ = (A⁻¹)* A* A A⁻¹ = (AA⁻¹)* (AA⁻¹) = I

11.5.7 If AB is Hermitian, then AB = (AB)* = B*A* = BA, since A and B are Hermitian.
A³ + 2A − 2I = O
A³ + 2A = 2I
A( (1/2)A² + I ) = I

Thus, A⁻¹ = (1/2)A² + I.
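The matrix A itself is not printed in this solution, but the identity A⁻¹ = (1/2)A² + I holds for any A satisfying A³ + 2A − 2I = O. As a sketch, the companion matrix of p(x) = x³ + 2x − 2 is one such matrix (a choice made here for illustration, not taken from the exercise), since by the Cayley-Hamilton Theorem it satisfies its own characteristic polynomial.

```python
import numpy as np

# Companion matrix of p(x) = x^3 + 2x - 2, so C^3 + 2C - 2I = O.
C = np.array([[0, 0, 2],
              [1, 0, -2],
              [0, 1, 0]], dtype=float)
I = np.eye(3)

residual = np.linalg.matrix_power(C, 3) + 2 * C - 2 * I
C_inv = 0.5 * (C @ C) + I   # the formula derived above
```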
11.6.2 By the Cayley-Hamilton Theorem, we have that

A³ − 147A + 686I = O

so

A³ = 147A − 686I = [441, −1176, −294; −1176, −1323, −588; −294, −588, 882] − [686, 0, 0; 0, 686, 0; 0, 0, 686] = [−245, −1176, −294; −1176, −2009, −588; −294, −588, 196]
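This arithmetic is easy to double-check; A = [3, −8, −2; −8, −9, −4; −2, −4, 6] is recovered from the matrix 147A shown above.

```python
import numpy as np

# A recovered from 147A in the computation above.
A = np.array([[ 3, -8, -2],
              [-8, -9, -4],
              [-2, -4,  6]], dtype=float)

A3 = np.linalg.matrix_power(A, 3)
expected = np.array([[-245, -1176, -294],
                     [-1176, -2009, -588],
                     [-294, -588, 196]], dtype=float)
```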
11.6.3 Observe that λ³ − 3λ + 2 = (λ + 2)(λ − 1)². Hence, one choice is to take A = [1, 0, 0; 0, 1, 0; 0, 0, 1] = I. This gives

A³ − 3A + 2I = I³ − 3I + 2I = I − 3I + 2I = O

NOTE: The characteristic polynomial of our choice of A is (λ − 1)³ = λ³ − 3λ² + 3λ − 1. Thus, we have shown that the converse of the Cayley-Hamilton Theorem is not true.
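The two claims check out numerically: I satisfies λ³ − 3λ + 2 but its characteristic polynomial has the coefficients of (λ − 1)³, not of λ³ − 3λ + 2.

```python
import numpy as np

A = np.eye(3)  # A = I satisfies A^3 - 3A + 2I = O

p_of_A = np.linalg.matrix_power(A, 3) - 3 * A + 2 * np.eye(3)

# Coefficients of the characteristic polynomial det(lambda*I - A) = (lambda - 1)^3.
char_poly = np.poly(A)
```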