The norm of ~u = (u1, u2, . . . , un) is √(u1² + u2² + . . . + un²), and
4. ‖~u‖² = ~u · ~u
Cauchy Inequality
If ~u, ~v ∈ Rⁿ, then |~u · ~v| ≤ ‖~u‖‖~v‖.
Proof:
The general case is proven in the textbook; here we will consider the case where n = 2.
In R², we can use the definition of the dot product involving cos θ to prove this.
|~u · ~v| = |‖~u‖‖~v‖ cos θ|
= ‖~u‖‖~v‖ |cos θ|
≤ ‖~u‖‖~v‖ (1), since |cos θ| ≤ 1
Triangle Inequality
‖~u + ~v‖ ≤ ‖~u‖ + ‖~v‖
Proof:
We will prove that ‖~u + ~v‖² ≤ (‖~u‖ + ‖~v‖)², and since ‖~u + ~v‖ ≥ 0 and ‖~u‖ + ‖~v‖ ≥ 0 we can take
square roots of both sides and the desired inequality will hold.
‖~u + ~v‖² = (~u + ~v) · (~u + ~v)
= ~u · ~u + ~u · ~v + ~v · ~u + ~v · ~v
= ‖~u‖² + 2(~u · ~v) + ‖~v‖²
≤ ‖~u‖² + 2|~u · ~v| + ‖~v‖², since any real number is less than or equal to its absolute value
≤ ‖~u‖² + 2‖~u‖‖~v‖ + ‖~v‖², by the Cauchy Inequality
= (‖~u‖ + ‖~v‖)², as desired.
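Both inequalities are easy to test numerically. The sketch below (function names are our own, not from the notes) checks them on many random vectors:

```python
# Numerical sanity check of the Cauchy and Triangle Inequalities in R^n.
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

random.seed(0)
for _ in range(1000):
    n = random.randint(2, 6)
    u = [random.uniform(-10, 10) for _ in range(n)]
    v = [random.uniform(-10, 10) for _ in range(n)]
    # Cauchy Inequality: |u . v| <= ||u|| ||v||
    assert abs(dot(u, v)) <= norm(u) * norm(v) + 1e-9
    # Triangle Inequality: ||u + v|| <= ||u|| + ||v||
    w = [a + b for a, b in zip(u, v)]
    assert norm(w) <= norm(u) + norm(v) + 1e-9
```

The small `1e-9` slack only guards against floating-point rounding; the inequalities themselves are exact.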
Definition:
A set {~u1, ~u2, . . . , ~uk} of non-zero vectors in Rⁿ is called an orthogonal set if ~ui · ~uj = 0 for all i ≠ j.
This set {~u1, ~u2, . . . , ~uk} is called orthonormal if it is orthogonal and each ~ui is a unit vector
(i.e. if ‖~ui‖ = 1 for all i).
Examples
1. The standard basis {~e1, ~e2, . . . , ~en} is an orthonormal set in Rⁿ.
2. If {~u1, ~u2, . . . , ~uk} is orthogonal, then so is {a1~u1, a2~u2, . . . , ak~uk}, where the ai are nonzero real scalars.
We can create an orthonormal set from any orthogonal set simply by dividing each vector by its length,
making it a unit vector. That is, if {~u1, ~u2, . . . , ~uk} is an orthogonal set, then
{ (1/‖~u1‖) ~u1, (1/‖~u2‖) ~u2, . . . , (1/‖~uk‖) ~uk }
is an orthonormal set.
Example:
If ~f1 = [1 1 1 -1]^T, ~f2 = [1 0 1 2]^T, ~f3 = [-1 0 1 0]^T, and ~f4 = [-1 3 -1 1]^T, then show that {~f1, ~f2, ~f3, ~f4} is orthogonal,
then normalize this set.
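A quick computational check of this example, with the four vectors written out explicitly (the helper `dot` is our own): every pairwise dot product is 0, and dividing each vector by its length gives an orthonormal set.

```python
# Check that the example set is orthogonal, then normalize it.
f1 = [1, 1, 1, -1]
f2 = [1, 0, 1, 2]
f3 = [-1, 0, 1, 0]
f4 = [-1, 3, -1, 1]
vectors = [f1, f2, f3, f4]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Pairwise dot products are all 0, so the set is orthogonal.
for i in range(4):
    for j in range(i + 1, 4):
        assert dot(vectors[i], vectors[j]) == 0

# Normalize: divide each vector by its length.
normalized = []
for f in vectors:
    length = dot(f, f) ** 0.5
    normalized.append([a / length for a in f])

for g in normalized:
    assert abs(dot(g, g) - 1) < 1e-12   # each normalized vector is a unit vector
```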
Pythagorean Theorem in Rⁿ
If {~u1, ~u2, . . . , ~uk} is orthogonal, then
‖~u1 + ~u2 + . . . + ~uk‖² = ‖~u1‖² + ‖~u2‖² + . . . + ‖~uk‖²
Proof:
Expand ‖~u1 + ~u2 + . . . + ~uk‖² = (~u1 + . . . + ~uk) · (~u1 + . . . + ~uk); every cross term ~ui · ~uj with i ≠ j is 0 by orthogonality, leaving ‖~u1‖² + ‖~u2‖² + . . . + ‖~uk‖².
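A quick numeric instance of the theorem in R³, using a small orthogonal set of our own choosing:

```python
# Pythagorean Theorem check in R^3 with an orthogonal set of our own.
u1 = [1, 1, 0]
u2 = [1, -1, 0]
u3 = [0, 0, 2]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Confirm the set is orthogonal.
assert dot(u1, u2) == dot(u1, u3) == dot(u2, u3) == 0

s = [a + b + c for a, b, c in zip(u1, u2, u3)]      # u1 + u2 + u3
# ||u1 + u2 + u3||^2 equals ||u1||^2 + ||u2||^2 + ||u3||^2
assert dot(s, s) == dot(u1, u1) + dot(u2, u2) + dot(u3, u3)   # 8 == 2 + 2 + 4
```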
In R² or R³, we can see the connection between orthogonal sets and linear independence: two orthogonal vectors in R² are linearly independent, and three orthogonal vectors in R³ are linearly independent.
In general, we have the following theorem:
Theorem
Every orthogonal set in Rⁿ is linearly independent.
Proof:
Suppose {~u1, ~u2, . . . , ~uk} is orthogonal.
By definition, none of these is the zero vector.
Then we will form the vector equation ~0 = x1~u1 + x2~u2 + . . . + xk ~uk .
We are interested in the type of solution to this equation.
We will take the dot product of each side of this equation with ~u1 :
Then we have
0 = ~0 · ~u1 = (x1~u1 + x2~u2 + . . . + xk~uk) · ~u1
= x1‖~u1‖² + x2(~u2 · ~u1) + . . . + xk(~uk · ~u1)
= x1‖~u1‖² + 0 + . . . + 0
Thus, since ~u1 is not the zero vector, ‖~u1‖² ≠ 0 and so x1 = 0.
Similarly, we can find that x2 = 0, . . . , xk = 0; thus our equation has only the trivial solution and
our set of vectors is linearly independent.
Since orthogonal sets are always linearly independent, we could consider using them as a basis for a
subspace of Rⁿ. In fact, they are the best type of basis, because it is very easy to find the coefficient
to place in front of each basis vector when representing a vector in the subspace as a linear combination of
the basis vectors. There is a simple formula for calculating these coefficients, rather than row reducing
each time.
Expansion Theorem
Let {~f1, ~f2, . . . , ~fm} be an orthogonal basis of a subspace U of Rⁿ. If ~b is any vector in U, then we
can write
~b = ((~b · ~f1)/‖~f1‖²) ~f1 + ((~b · ~f2)/‖~f2‖²) ~f2 + . . . + ((~b · ~fm)/‖~fm‖²) ~fm
Proof:
Since {~f1, ~f2, . . . , ~fm} is a basis for U, this set spans U, and so we can write any vector ~b as some
linear combination of these basis vectors.
~b = x1~f1 + x2~f2 + . . . + xm~fm
Using the same method as our last proof, we will take the dot product of both sides with ~f1 to obtain:
~b · ~f1 = (x1~f1 + x2~f2 + . . . + xm~fm) · ~f1
= x1‖~f1‖² + x2(~f2 · ~f1) + . . . + xm(~fm · ~f1)
= x1‖~f1‖² + 0 + . . . + 0
Thus, x1 = (~b · ~f1)/‖~f1‖².
We can find x2, . . . , xm in a similar fashion.
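The coefficient formula can be sketched in code. The orthogonal basis and vector below are a small example of our own, not from the notes:

```python
# Expansion Theorem in R^2: the coefficient of each basis vector f_i is
# (b . f_i) / ||f_i||^2 -- no row reduction needed.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

f1 = [1, 1]
f2 = [1, -1]
assert dot(f1, f2) == 0          # {f1, f2} is an orthogonal basis of R^2

b = [3, 5]
x1 = dot(b, f1) / dot(f1, f1)    # = 4.0
x2 = dot(b, f2) / dot(f2, f2)    # = -1.0

reconstructed = [x1 * p + x2 * q for p, q in zip(f1, f2)]
assert reconstructed == [3.0, 5.0]   # b = x1 f1 + x2 f2
```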
Definition:
If U is a subspace of Rⁿ, then the orthogonal complement of U, denoted U⊥, is the set of all
vectors that are perpendicular to every vector in U.
U⊥ = {~x ∈ Rⁿ : ~x · ~u = 0 for all ~u ∈ U}
Theorem:
If U is a subspace of Rⁿ and ~x ∈ Rⁿ, then projU ~x ∈ U and ~x - projU ~x ∈ U⊥.
Proof:
Let {~f1, ~f2, . . . , ~fm} be an orthogonal basis for U.
projU ~x ∈ U since it is a linear combination of the basis vectors of U.
If we can show that (~x - projU ~x) ⊥ ~fi for each i = 1, 2, . . . , m, then ~x - projU ~x ∈ U⊥, since if a vector
~y is perpendicular to every basis vector of U, then it will be perpendicular to every vector in U.
(To see this, let ~u represent any vector in U. Since {~f1, ~f2, . . . , ~fm} is an orthogonal basis for U,
we can write ~u = c1~f1 + c2~f2 + . . . + cm~fm. Now if we take ~y · ~u = ~y · (c1~f1 + c2~f2 + . . . + cm~fm) =
c1(~y · ~f1) + c2(~y · ~f2) + . . . + cm(~y · ~fm) = 0 + 0 + . . . + 0 = 0, and so ~y ⊥ ~u.)
Now, using the Expansion Theorem,
~x - projU ~x = ~x - [ ((~x · ~f1)/‖~f1‖²) ~f1 + ((~x · ~f2)/‖~f2‖²) ~f2 + . . . + ((~x · ~fm)/‖~fm‖²) ~fm ]
and if we take the dot product of ~x - projU ~x with each ~fi, we will get 0.
For example,
(~x - projU ~x) · ~f1 = ~x · ~f1 - [ ((~x · ~f1)/‖~f1‖²) (~f1 · ~f1) + ((~x · ~f2)/‖~f2‖²) (~f2 · ~f1) + . . . + ((~x · ~fm)/‖~fm‖²) (~fm · ~f1) ]
= ~x · ~f1 - [ ~x · ~f1 + 0 + . . . + 0 ]
= 0
Example:
Find the vector in U that is closest to ~x, and express ~x as the sum of a vector in U and a vector in U⊥,
where U = span{ [1 -1 0 1]^T, [-1 0 1 1]^T } and ~x = [3 1 0 2]^T.
Solution:
We can write ~x = projU ~x + (~x - projU ~x).
Let ~f1 = [1 -1 0 1]^T and ~f2 = [-1 0 1 1]^T. By the Expansion Theorem, we have
projU ~x = ((~x · ~f1)/‖~f1‖²) ~f1 + ((~x · ~f2)/‖~f2‖²) ~f2
= (4/(1² + (-1)² + 0² + 1²)) [1 -1 0 1]^T + (-1/((-1)² + 0² + 1² + 1²)) [-1 0 1 1]^T
= (4/3) [1 -1 0 1]^T - (1/3) [-1 0 1 1]^T
= [5/3 -4/3 -1/3 1]^T ∈ U
And ~x - projU ~x = [3 1 0 2]^T - [5/3 -4/3 -1/3 1]^T = [4/3 7/3 1/3 1]^T ∈ U⊥.
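The arithmetic in this example can be verified exactly with `fractions.Fraction` (a sketch of our own; the variable names are not from the notes):

```python
# Project x onto U = span{f1, f2} with the expansion-theorem formula,
# then check that the leftover piece lies in U-perp.
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

f1 = [F(1), F(-1), F(0), F(1)]
f2 = [F(-1), F(0), F(1), F(1)]
x = [F(3), F(1), F(0), F(2)]
assert dot(f1, f2) == 0                      # {f1, f2} is orthogonal

c1 = dot(x, f1) / dot(f1, f1)                # 4/3
c2 = dot(x, f2) / dot(f2, f2)                # -1/3
proj = [c1 * a + c2 * b for a, b in zip(f1, f2)]
assert proj == [F(5, 3), F(-4, 3), F(-1, 3), F(1)]

diff = [a - b for a, b in zip(x, proj)]
assert diff == [F(4, 3), F(7, 3), F(1, 3), F(1)]
assert dot(diff, f1) == 0 and dot(diff, f2) == 0   # diff is in U-perp
```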
So, given an orthogonal basis for a subspace U, we can easily project any vector onto U. But what
if we are not given an orthogonal basis to begin with?
We can always construct an orthogonal basis from a regular basis; this process is known as the Gram-Schmidt algorithm.
Suppose a subspace W has basis {~x1, ~x2, ~x3} and we want to construct an orthogonal basis {~f1, ~f2, ~f3}.
Step 1: Let ~f1 = ~x1 and let W1 = span{~f1} = span{~x1}.
Step 2: Let ~f2 = ~x2 - projW1 ~x2 = ~x2 - ((~x2 · ~f1)/‖~f1‖²) ~f1, so that ~f2 ⊥ ~f1.
Now, let W2 = span{~f1, ~f2} = span{~x1, ~x2} (since ~f2 is a linear combination of ~x2 and ~f1 = ~x1).
Step 3: Let ~f3 = ~x3 - projW2 ~x3 = ~x3 - ((~x3 · ~f1)/‖~f1‖²) ~f1 - ((~x3 · ~f2)/‖~f2‖²) ~f2, so that ~f3 is perpendicular to both ~f1 and ~f2. Then {~f1, ~f2, ~f3} is an orthogonal basis for W.
Convert the basis B = { [1 1 1]^T, [2 0 1]^T, [0 1 0]^T } to an orthogonal basis for U = span{B}.
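The steps above can be sketched as a short Gram-Schmidt routine, applied here to the exercise basis (entries as read from the source; all helper names are our own). Exact `Fraction` arithmetic avoids rounding:

```python
# A minimal Gram-Schmidt sketch: from each x_i, subtract its projection
# onto the span of the previously built f's.
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Convert a basis to an orthogonal basis for the same subspace."""
    fs = []
    for x in basis:
        f = list(x)
        for g in fs:
            c = dot(x, g) / dot(g, g)          # expansion-theorem coefficient
            f = [a - c * b for a, b in zip(f, g)]
        fs.append(f)
    return fs

B = [[F(1), F(1), F(1)], [F(2), F(0), F(1)], [F(0), F(1), F(0)]]
fs = gram_schmidt(B)
# Every pair in the result is orthogonal:
for i in range(3):
    for j in range(i + 1, 3):
        assert dot(fs[i], fs[j]) == 0
```

Note that `fs` still spans the same subspace as `B`, since each `f` differs from the corresponding `x` by a linear combination of earlier vectors.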