
2. Vector spaces

Definition 2.1.

A vector space over a field K is a set V with two operations: vector addition + : V × V → V and multiplication by scalars · : K × V → V such that:
(1) V is an abelian group with respect to +;
(2) 1v = v and (ab)v = a(bv) for 1, a, b ∈ K and v ∈ V;
(3) (a + b)v = av + bv and a(u + v) = au + av for a, b ∈ K and u, v ∈ V.

Later on we will usually omit any mention of the ground field K. The symbol 0 will denote the neutral element of the abelian group V as well as the zero scalar in K; the context will make clear which is meant. We have 0v = (0 + 0)v = 0v + 0v, and it follows that 0v = 0 for any v ∈ V. We shall also write −v for (−1)v.

Example 2.2. The field K is a vector space over itself. The set K^n of n-tuples (a1, …, an) with ai ∈ K is a vector space; the addition and scalar multiplication are defined component-wise. When K = R this is the n-dimensional Euclidean space R^n. The set Mn(K) of n × n matrices with entries in K, with matrix addition and scalar multiplication, forms a vector space over K. The set of all K-valued functions on any set S, with pointwise addition and scalar multiplication, forms a vector space over K.

Definition 2.3. Let V, W be two vector spaces. Then a function f : V → W is called a linear map if for all a ∈ K and u, v ∈ V
f(u + v) = f(u) + f(v),   f(av) = af(v).
A bijective linear map is called an isomorphism (of vector spaces).

Example 2.4. (1) V = W = K^n; A is an n × n matrix with entries in K. Then f : V → W defined by f(v) = Av is a linear map. Linear maps having the same domain and target space are usually called linear operators. (2) V is the set of all real-valued functions on R; W = R. Define f by f(g) = g(0). Then f is a linear map V → R. Linear maps whose target is the ground field are usually called linear functionals. (3) Let V = W be the space of all polynomials in one variable x; then f defined by f(p) = dp/dx is a linear map.

2.1. Linear combinations, bases and dimension. A linear combination of vectors v1, …, vk in V is an expression a1v1 + … + akvk where ai ∈ K and vi ∈ V.
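As an illustration (not part of the notes), linear combinations in K^n can be computed concretely with the component-wise operations of Example 2.2. The following is a minimal Python sketch, taking K = Q modeled by Python's Fraction; the helper names vadd, smul and lincomb are ours:

```python
from fractions import Fraction

# Vectors in Q^n as tuples of Fractions; operations are component-wise
# as in Example 2.2.
def vadd(u, v):
    return tuple(x + y for x, y in zip(u, v))

def smul(a, v):
    return tuple(a * x for x in v)

def lincomb(coeffs, vectors):
    """a1*v1 + ... + ak*vk, computed component-wise."""
    result = tuple(Fraction(0) for _ in vectors[0])
    for a, v in zip(coeffs, vectors):
        result = vadd(result, smul(a, v))
    return result

u = (Fraction(1), Fraction(2))
v = (Fraction(0), Fraction(1))
# 3u + 2v = (3, 8)
assert lincomb([Fraction(3), Fraction(2)], [u, v]) == (Fraction(3), Fraction(8))

# axiom (3) on a sample: (a + b)u = au + bu
a, b = Fraction(2), Fraction(5)
assert smul(a + b, u) == vadd(smul(a, u), smul(b, u))
```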
A linear combination is called trivial if all the ai are zero, and nontrivial otherwise.

Definition 2.5. A subspace of a vector space V is a subset W of V which is itself a vector space under the operations of addition and scalar multiplication in V.

Note that both {0} and V are subspaces of V. In order to prove that W is a subspace it is sufficient to verify that any sum v + w of elements in W, as well as any multiple aw of an element in W, belongs to W. It follows that for any two subspaces U, W of V their intersection U ∩ W is a subspace of V.

Example 2.6. The set of all vectors (a1, …, an) in K^n whose first component is zero forms a subspace of K^n. The set of all differentiable functions f : R → R is a subspace of the space of all functions on R. The set of all differentiable functions on R whose derivative vanishes at zero forms a subspace of the space of all differentiable functions on R.

Definition 2.7. Given a set of vectors S = {vi ∈ V : i ∈ I} in a vector space V, for some indexing set I, the collection of all linear combinations of vectors in S is called the span of S and denoted by span(S). It is clear that span(S) is a subspace of V for any set S.

Example 2.8. The set {(0, 1), (1, 1), (2, 1)} spans the vector space R². The set {1, x, x²} spans the subspace of polynomials of degree ≤ 2 in the space V of all polynomials in one variable x.

We've come to one of the central notions of linear algebra: linear (in)dependence.

Definition 2.9. Let S be a collection of vectors in a vector space V. Then S is called linearly dependent if there exists a nontrivial linear combination of vectors in S which is equal to the zero vector. Otherwise S is called linearly independent.

Example 2.10. Let V = R² and S = {(1, 1), (0, 1)}. Then S is linearly independent. Indeed, an arbitrary linear combination of vectors in S will have the form a(1, 1) + b(0, 1) = (a, a + b). If it is equal to zero then a = 0 and a + b = 0, and it follows that a = b = 0.

We have the following easy result.

Proposition 2.11. Let S be a subset of a vector space V.
(1) If 0 ∈ S then S is linearly dependent.
(2) If S is linearly independent then so is any subset T ⊆ S.
(3) If S contains a linearly dependent subset then S itself is linearly dependent.

Proof. (1) If 0 ∈ S then 1 · 0 = 0 is a nontrivial linear combination in S which equals zero, so S is linearly dependent. (2) If S is linearly independent then no nontrivial linear combination of vectors in S can equal zero.
In particular, no nontrivial linear combination of vectors in T can equal zero, so T is also linearly independent. (3) If T ⊆ S is linearly dependent then a certain nontrivial linear combination of vectors in T equals zero. But the same combination can be considered as a linear combination in S, so S is also linearly dependent.
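Linear (in)dependence of a finite set of vectors in K^n can be tested mechanically: the set is linearly independent exactly when row reduction of the vectors yields a rank equal to the number of vectors. A short Python sketch over Q (the rank routine and helper names are ours, not from the notes):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of vectors over Q, by Gaussian elimination."""
    if not rows:
        return 0
    rows = [list(map(Fraction, r)) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            if rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def independent(vectors):
    """A finite set is linearly independent iff its rank equals its size."""
    return rank(vectors) == len(vectors)

assert independent([(1, 1), (0, 1)])              # the set of Example 2.10
assert not independent([(0, 1), (1, 1), (2, 1)])  # 3 vectors in R^2
```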

Definition 2.12. If a subset S of a vector space V is linearly independent and span(S) = V then S is called a basis of V.

Example 2.13. (1) The set S in Example 2.10 is a basis of R². (2) The monomials 1, x, x², … form an (infinite) basis of the space of all polynomials in one variable. (3) The vectors of the form (0, 0, …, 0, 1, 0, …, 0) form a basis, called the standard basis, for the vector space K^n. (4) Let T be a set and V the set of all real-valued functions on T. Then V is naturally a vector space over R. (Why?) Consider the set S = {δa : a ∈ T} of functions on T, where δa is the function equal to 1 at a ∈ T and zero otherwise. Then S will form a basis of V if T is finite. (Why?) What happens if T is infinite?

Theorem 2.14. A subset S ⊆ V is a basis if and only if every vector in V can be represented uniquely as a linear combination of elements of S.

Proof. That every vector can be expressed as a linear combination of S follows from the condition that span(S) = V; the uniqueness follows from the linear independence of S. Conversely, suppose every vector has a unique such representation. Then certainly span(S) = V, and a nontrivial linear combination of elements of S equal to zero would give two different representations of the zero vector, contradicting uniqueness; hence S is linearly independent.

Theorem 2.15. Let B = {v1, …, vn} be a basis of V. Then
(1) every subset of V containing at least n + 1 distinct vectors is linearly dependent;
(2) every subset of V containing at most n − 1 vectors does not span V.

Proof. For (1) it suffices to consider a subset {u0, …, un} of n + 1 distinct vectors. We may assume that none of the ui is the zero vector (otherwise the set is already linearly dependent by Proposition 2.11). Because B is a basis we can find scalars aij for which
u0 = a01v1 + … + a0nvn,
…,
un = an1v1 + … + annvn.
Since u0 ≠ 0, at least one of the a0j ≠ 0, and so we have
vj = a0j⁻¹u0 − a0j⁻¹a01v1 − … − a0j⁻¹a0,j−1vj−1 − a0j⁻¹a0,j+1vj+1 − … − a0j⁻¹a0nvn.
Using this equation we can replace vj in the remaining equations with a combination of u0 and the other vi. If u0 and u1 are linearly dependent the result follows; if not, we can replace some vk in the equations for u2, …, un, rewriting them in terms of u0, u1, v1, …, vj−1, vj+1, …, vk−1, vk+1, …, vn.
After repeating this procedure n times, the vectors v1, …, vn will all be replaced by the vectors u0, …, un−1, and we will have a linear relation expressing un in terms of the remaining ui. Therefore the set {u0, …, un} is linearly dependent.

For part (2), suppose that S contains fewer than n elements and spanned V. Choosing a maximal linearly independent subset of S, we may assume without loss of generality that S is a basis. But then, by part (1), any subset of V containing more elements than S (in particular B) could not be linearly independent. That contradicts our assumption that B is a basis.

Corollary 2.16. If V has a basis with finitely many elements then any other basis has exactly the same number of elements.

Definition 2.17. If V contains a basis with n elements then n is called the dimension of V, written dim V. In this case V is called finite-dimensional. A vector space that is not finite-dimensional is called infinite-dimensional.

It is clear from Corollary 2.16 that the notion of dimension of a vector space does not depend on the choice of a basis. So far we have not addressed the question of the existence of a basis. This is not so easy to prove in general; however, for certain types of spaces it is rather straightforward.

Theorem 2.18. Let V ≠ {0} be a vector space and assume that there exists a finite set S which spans V. Then
(1) there exists a subset of S which is a basis of V; in particular, V is finite-dimensional;
(2) if a subset T ⊆ V is linearly independent then T can be extended to a basis of V;
(3) if U is a subspace of V then U is finite-dimensional; if furthermore dim U = dim V then U = V.

Proof. (1) There are only finitely many subsets of S, and among them there are linearly independent ones (e.g. the empty set). Choose among such subsets one, say S′, with the largest number of elements. Such a subset S′ is a basis of V: it is firstly linearly independent, and secondly any element of S may be expressed as a linear combination of elements of S′ (otherwise S′ could be enlarged while staying independent), so that S′ spans V. (2) Among the supersets of T there are linearly independent ones (e.g. T itself). Furthermore, since V is finite-dimensional, any set with more than dim V vectors must be linearly dependent, by Theorem 2.15. Thus there is a linearly independent superset T′ ⊇ T with the largest number of elements. As in part (1), it follows that T′ is a basis. (3) Again since V is finite-dimensional, we may choose a linearly independent subset T ⊆ U with the largest number of elements. As in parts (1) and (2), it follows that T is a basis of U. If dim U = dim V then by Theorem 2.15 a basis of U will also be a basis of V, and so U = V.
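The construction in part (1) of Theorem 2.18 is effectively a greedy algorithm: scan the spanning set and keep only those vectors that enlarge the span. A Python sketch over Q illustrating this (the rank routine and names are ours):

```python
from fractions import Fraction

def rank(rows):
    """Rank over Q by Gaussian elimination."""
    if not rows:
        return 0
    rows = [list(map(Fraction, r)) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            if rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def basis_from_spanning(S):
    """Greedily keep the vectors that enlarge the span -- the construction
    of Theorem 2.18(1) applied to a finite spanning set."""
    basis = []
    for v in S:
        if rank(basis + [v]) > rank(basis):
            basis.append(v)
    return basis

S = [(0, 1), (1, 1), (2, 1)]   # spans R^2 (Example 2.8)
B = basis_from_spanning(S)
assert len(B) == 2             # any basis of R^2 has 2 elements (Corollary 2.16)
```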

2.2. More on linear maps. We will now discuss in more detail linear maps between vector spaces.

Definition 2.19. Let f : V → W be a linear map. Its kernel Ker f is the set {v ∈ V : f(v) = 0}. Note that Ker f is a subspace of V. Next, the image Im f is the set {w ∈ W : f(v) = w for some v ∈ V}. Again, it is easy to see that Im f is a subspace of W.

Proposition 2.20. Let f : V → W be a linear map. Then
(1) f is injective if and only if Ker f = {0};
(2) f is surjective if and only if Im f = W.

Proof. Clearly, if f is injective then its kernel is trivial. Conversely, suppose that Ker f = {0}, and consider two vectors v1, v2 ∈ V with f(v1) = f(v2). Then f(v1 − v2) = 0, so v1 − v2 ∈ Ker f = {0} and therefore v1 = v2; thus f is injective. The statement about surjectivity is left to you as an exercise.

Recall that an isomorphism is a bijective linear map V → W.

Proposition 2.21. Let f : V → W be an isomorphism of vector spaces. Then its inverse f⁻¹ : W → V is also a linear map.

Proof. Let us prove, for example, that f⁻¹(w1 + w2) = f⁻¹(w1) + f⁻¹(w2). Indeed, both sides of this equality are vectors in V, and to show that they are equal it is sufficient to show that their images under f are equal, since f is injective. We have
f(f⁻¹(w1 + w2)) = w1 + w2 = f(f⁻¹(w1) + f⁻¹(w2))
since f is linear. The equality f⁻¹(aw) = af⁻¹(w) is proved similarly.

The following important theorem connects the dimension of a space V with those of the kernel and image of a linear map out of V.

Theorem 2.22. Let V be finite-dimensional and f : V → W a linear map. Then
dim V = dim Ker f + dim Im f.

Proof. First note that Ker f is finite-dimensional by Theorem 2.18. Also, given any basis of V, its image under f must span Im f. (Why?) Thus Im f is also finite-dimensional, by Theorem 2.18. Now let {v1, …, vn} be a basis of Ker f and {w1, …, wk} a basis of Im f. For each wi choose ui ∈ V for which f(ui) = wi. (Note that this choice need not be unique.) We claim that {v1, …, vn, u1, …, uk} is a basis of V.

Linear independence: suppose that a1v1 + … + anvn + b1u1 + … + bkuk = 0. Applying f to this equality and using the linear independence of the wi, we deduce that b1 = … = bk = 0. From the linear independence of the vi we further derive that all the ai are zero.

Spanning: consider any vector v ∈ V and its image f(v) ∈ W. Since the wi form a basis of Im f, there exist b1, …, bk such that f(v) = b1w1 + … + bkwk. It follows that the vector v − b1u1 − … − bkuk belongs to Ker f. Therefore there exist a1, …, an such that v − b1u1 − … − bkuk = a1v1 + … + anvn. Hence v = b1u1 + … + bkuk + a1v1 + … + anvn, as required.

Corollary 2.23. Let V be a finite-dimensional vector space and f : V → V a linear operator.
If f is injective or if f is surjective then f is an isomorphism.

Proof. If f is injective then Ker f = {0}, and by Theorem 2.22 dim V = dim(Im f). Thus V = Im f by Theorem 2.18(3), so f is surjective and hence an isomorphism. Conversely, if f is surjective then dim V = dim(Im f), so dim Ker f = 0 by Theorem 2.22, i.e. Ker f = {0}. Therefore f is injective, and hence an isomorphism.

Theorem 2.24. Let V, W be vector spaces of the same dimension n. Then V and W are isomorphic. In particular, any n-dimensional vector space over K is isomorphic to K^n.

Proof. Let {v1, …, vn} be a basis of V and {w1, …, wn} a basis of W. Define f : V → W by the formula
f(a1v1 + … + anvn) = a1w1 + … + anwn.
Then f is clearly linear. Given any w ∈ W, we may write w = a1w1 + … + anwn, since the wi form a basis. But then w = f(a1v1 + … + anvn), so f is surjective. Similarly, if f(a1v1 + … + anvn) = 0 then all the ai are zero, again since the wi form a basis. Thus f is injective, and hence an isomorphism.
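To make Theorem 2.22 concrete, the following Python sketch (ours, not from the notes) computes a kernel basis for a matrix map v ↦ Av over Q from the reduced row echelon form, and checks the dimension formula; rref and kernel_basis are illustrative helper names:

```python
from fractions import Fraction

def rref(A):
    """Reduced row echelon form over Q; returns (rows, pivot columns)."""
    rows = [list(map(Fraction, row)) for row in A]
    pivots, r = [], 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    return rows, pivots

def kernel_basis(A):
    """One basis vector of Ker(v -> Av) per free column of the RREF."""
    R, pivots = rref(A)
    n = len(A[0])
    basis = []
    for fc in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[fc] = Fraction(1)
        for row, pc in zip(R, pivots):
            v[pc] = -row[fc]
        basis.append(v)
    return basis

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # a sample 3x3 matrix over Q
ker = kernel_basis(A)
_, pivots = rref(A)
# Theorem 2.22: dim V = dim Ker f + dim Im f (dim Im f = number of pivots)
assert len(A[0]) == len(ker) + len(pivots)
# each kernel basis vector really maps to zero
for v in ker:
    assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
```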

2.3. Direct sums and quotients.

Definition 2.25. Let V and W be vector spaces. Denote by V ⊕ W the vector space formed by all pairs (v, w) with v ∈ V, w ∈ W. The addition is defined by (v, w) + (v′, w′) = (v + v′, w + w′), and the multiplication by a scalar a by a(v, w) = (av, aw). The zero vector of V ⊕ W is the pair (0, 0).

Example 2.26. Let V = K^n, W = K^m. Then the correspondence
((a1, …, an), (b1, …, bm)) ↦ (a1, …, an, b1, …, bm)
determines an isomorphism K^n ⊕ K^m ≅ K^{n+m}.

Proposition 2.27. Let V and W be vector spaces of dimensions n and m respectively, and let {v1, …, vn} and {w1, …, wm} be bases of V and W. Then
{(v1, 0), …, (vn, 0), (0, w1), …, (0, wm)}
is a basis of V ⊕ W. In particular, V ⊕ W has dimension n + m.

Proof. Exercise.

Now suppose that V, W are themselves subspaces of a vector space U. Define their sum V + W to be the subspace of U spanned by all possible sums of the form v + w with v ∈ V, w ∈ W. Note that V + W is the span of V ∪ W. One can ask about the relationship between V ⊕ W and V + W; the following theorem gives an answer.

Theorem 2.28. Let V, W be subspaces of U and define a map f : V ⊕ W → U by the formula
f(v, w) = v + w for v ∈ V, w ∈ W.
Then f is linear; it is injective if and only if V ∩ W = {0}; and it is surjective if and only if V ∪ W spans U.

Proof. The linearity of f is immediate from the definition. Now suppose that V ∩ W = {0}, and let (v, w) ∈ Ker f. Then 0 = f(v, w) = v + w, so v = −w ∈ V ∩ W. It follows that v = w = 0, and thus Ker f = {0}. Conversely, if v ∈ V ∩ W then −v ∈ W and (v, −v) ∈ Ker f; hence Ker f = {0} implies that V ∩ W = {0}. Concerning surjectivity: the image of f is the subspace V + W, and if V ∪ W spans U then V + W coincides with all of U. Conversely, if f is surjective then Im f = V + W = U, so V ∪ W spans U.

Remark 2.29. The notion of a direct sum as defined above is sometimes referred to as an external direct sum.
If two spaces V, W are subspaces of another space U and their intersection is zero, then their sum V + W is called an internal direct sum. Theorem 2.28 shows that the external direct sum is then isomorphic to the internal one. In this connection the following question arises. The external direct sum clearly allows iteration: we can consider direct sums of the form V1 ⊕ V2 ⊕ … ⊕ Vn. Similarly, one can consider sums of subspaces of a given space: V1 + V2 + … + Vn. Under what condition on the subspaces Vi ⊆ U is their internal sum isomorphic to their external sum? The answer is that all the intersections
Vi ∩ span(V1 ∪ … ∪ Vi−1 ∪ Vi+1 ∪ … ∪ Vn)
must be zero.
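For subspaces of K^n given by bases, the injectivity criterion of Theorem 2.28 can be checked by a dimension count: the sum V + W is direct exactly when dim(V + W) = dim V + dim W. A Python sketch over Q (the rank routine and helper names are ours):

```python
from fractions import Fraction

def rank(rows):
    """Rank over Q by Gaussian elimination."""
    if not rows:
        return 0
    rows = [list(map(Fraction, r)) for r in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            if rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def sum_is_direct(BV, BW):
    """V + W is direct iff dim(V + W) = dim V + dim W, i.e. iff the
    concatenation of the two bases is still linearly independent."""
    return rank(BV + BW) == rank(BV) + rank(BW)

BV = [(1, 0, 0)]
BW = [(0, 1, 0), (0, 0, 1)]
assert sum_is_direct(BV, BW)                          # the intersection is zero
assert not sum_is_direct(BV, [(1, 1, 0), (1, 0, 0)])  # the sum is not direct
```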

We will now discuss quotients. This notion naturally complements the notion of a subspace.

Definition 2.30. Let V be a vector space and W its subspace. Define an equivalence relation ∼ on V by declaring v1 ∼ v2 if v1 − v2 ∈ W. The set of equivalence classes V/∼ is called the quotient of V by its subspace W and is denoted by V/W.

The set V/W is in fact a vector space itself. Indeed, denote the equivalence class containing v ∈ V by v + W. Then set (v + W) + (v′ + W) := (v + v′) + W, and for a scalar a ∈ K set a(v + W) := av + W. Let us check that our vector addition does not depend on the choice of representatives. Suppose that v1 ∼ v1′ and v2 ∼ v2′. Then clearly v1 + v2 ∼ v1′ + v2′, which implies that (v1 + v2) + W = (v1′ + v2′) + W. The well-definedness of the scalar multiplication is checked similarly. The zero vector in V/W is the class 0 + W = W.

Remark 2.31. Note the formal similarity between quotients of vector spaces and the abelian groups Z/nZ. In fact both constructions are instances of one general construction, the quotient of a group by a normal subgroup, which you will encounter in the group theory unit.

Given any vector space V and a subspace W ⊆ V, there is a linear map V → V/W which sends a vector v ∈ V to its equivalence class v + W. This is a linear map whose kernel is equal to the subspace W. (Check it!)

Theorem 2.32. Let V and W be vector spaces and f : V → W a linear map. Then there is an isomorphism Im f ≅ V/Ker f.

Proof. Define a map f̄ : V/Ker f → Im f by the formula f̄(v + Ker f) = f(v). One has to check that f̄ does not depend on the choice of v: for v ∼ v′ we have f(v) − f(v′) = f(v − v′) = 0. We further claim that f̄ is an isomorphism. Injectivity: suppose that f̄(v + Ker f) = 0. That means that f(v) = 0, i.e. that v ∈ Ker f, so v + Ker f is the zero vector in V/Ker f. Surjectivity: let w ∈ Im f. By definition there exists v ∈ V such that f(v) = w. But then f̄(v + Ker f) = w, as required.

Remark 2.33. Let W be a subspace of V and consider the linear map V → V/W.
This map is surjective and its kernel is W. By Theorem 2.22 we conclude that dim V/W = dim V − dim W.

Example 2.34. Let V be the space of all polynomials of degree ≤ n; note that dim V = n + 1. Consider the linear operator f from V to itself given by f : p(x) ↦ p″(x). The kernel W of this map is the subspace of polynomials of degree ≤ 1, which has dimension 2. It follows that dim V/W = n − 1.

2.4. Dual spaces. Let V be a vector space over K. Its dual is the vector space V* of all linear functionals V → K. The addition of linear functionals f, g ∈ V* is defined by (f + g)(v) = f(v) + g(v), and the multiplication of a functional f by a scalar a ∈ K by (af)(v) = af(v). Suppose now that V is finite-dimensional over K and let {v1, …, vn} be a basis. Define the linear functionals vi* ∈ V*, i = 1, 2, …, n, by the formula
vi*(a1v1 + … + anvn) := ai.

Theorem 2.35. The set {v1*, …, vn*} is a basis of V*, called the dual basis to {v1, …, vn}. In particular, V* has dimension n, the same as that of V.

Proof. Let us prove first that v1*, …, vn* are linearly independent. Assume that a1v1* + … + anvn* = 0. Applying this equality to vi, i = 1, …, n, we obtain 0 = (a1v1* + … + anvn*)(vi) = ai. It follows that each ai = 0, so v1*, …, vn* are indeed linearly independent. Next we show that v1*, …, vn* span V*. Given an arbitrary f ∈ V*, we claim that f = f(v1)v1* + … + f(vn)vn*.

To see this we need to prove that both sides of the above equality produce the same result when applied to an arbitrary vector v ∈ V. Let v = a1v1 + … + anvn. Then
f(v) = f(v1)a1 + … + f(vn)an = [f(v1)v1* + … + f(vn)vn*](v)

and we are done.

Example 2.36. Let V = K^n, the n-dimensional vector space, and let {v1, …, vn} be the standard basis of V. Then the dual basis v1*, …, vn* is formed by the functionals vi* defined by
vi*(a1, …, an) = ai. In other words, vi* associates to an n-tuple (a1, …, an) its i-th component.
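The expansion f = f(v1)v1* + … + f(vn)vn* from the proof of Theorem 2.35 can be checked numerically for K^n with its standard basis. A Python sketch (the coordinate encoding of functionals and the helper name apply_f are ours, for illustration):

```python
from fractions import Fraction

# A linear functional on K^n is determined by its values on the standard
# basis, so we store it as an n-tuple of those values; applying it to a
# vector is then a dot product.
def apply_f(f, v):
    return sum(a * x for a, x in zip(f, v))

n = 3
# dual basis: vi* extracts the i-th coordinate
dual = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
std = dual  # in coordinates, the standard basis vectors look the same

f = [Fraction(2), Fraction(-1), Fraction(5)]   # an arbitrary functional
v = [Fraction(4), Fraction(7), Fraction(1)]

# check f(v) = [f(v1) v1* + ... + f(vn) vn*](v)
lhs = apply_f(f, v)
rhs = sum(apply_f(f, std[i]) * apply_f(dual[i], v) for i in range(n))
assert lhs == rhs
```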

2.5. Eigenvectors and eigenvalues. In this subsection we will give a preliminary study of eigenvectors and eigenvalues. This subject is central to linear algebra and will be studied more thoroughly later in this course.

Definition 2.37. Let f : V → V be a linear operator on a K-vector space V. A non-zero vector v ∈ V is called an eigenvector of f if f(v) = λv for some λ ∈ K. The scalar λ is called the eigenvalue corresponding to the eigenvector v.

Example 2.38. (1) Let f : V → V be a scalar operator, i.e. f(v) = λv for a fixed λ ∈ K and all v ∈ V. Clearly any non-zero vector in V is then an eigenvector of f with eigenvalue λ. (2) Let V be the space of differentiable functions on R and f : p ↦ p′. Then the function p(x) = e^{kx} is an eigenvector of f with eigenvalue k. This example is important in the study of differential equations. (3) There are operators with no eigenvectors. For example, let V = R² and f(a, b) = (−b, a). If (a, b) is an eigenvector of f then (−b, a) = λ(a, b), so that −b = λa and a = λb. It follows that a = −λ²a, so a = b = 0, contradicting the requirement that an eigenvector be non-zero.

Proposition 2.39. Let f : V → V be a linear operator. Then the set Vλ of all eigenvectors v with the same eigenvalue λ, together with the zero vector, forms a subspace of V.

Proof. We need to check that Vλ is closed under addition and scalar multiplication. This follows easily from the definition.

The subspace Vλ ⊆ V is called the eigenspace corresponding to λ. Its dimension is called the multiplicity of the eigenvalue λ.
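Definition 2.37 is easy to verify mechanically for operators on K² given by matrices. The Python sketch below (ours, not from the notes) checks candidate eigenvectors for a diagonal operator, and confirms via the characteristic polynomial that a 90-degree rotation, a standard example of an operator with no real eigenvectors, has no real eigenvalues:

```python
from fractions import Fraction

def is_eigenvector(A, v, lam):
    """Check that v is non-zero and Av = lam*v, for the operator f(v) = Av."""
    if all(x == 0 for x in v):
        return False   # eigenvectors are non-zero by definition
    Av = [sum(a * x for a, x in zip(row, v)) for row in A]
    return Av == [lam * x for x in v]

# a diagonal operator: the standard basis vectors are eigenvectors
D = [[Fraction(2), Fraction(0)], [Fraction(0), Fraction(3)]]
assert is_eigenvector(D, [Fraction(1), Fraction(0)], Fraction(2))
assert is_eigenvector(D, [Fraction(0), Fraction(1)], Fraction(3))

# rotation by 90 degrees, f(a, b) = (-b, a): trace 0 and determinant 1
# give characteristic polynomial t^2 + 1, whose discriminant is negative,
# so there are no real eigenvalues
R = [[Fraction(0), Fraction(-1)], [Fraction(1), Fraction(0)]]
tr = R[0][0] + R[1][1]
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
assert tr * tr - 4 * det < 0
```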
