
Week 10

Section 5.3: Orthogonality


We looked at the dot product and length of vectors in $\mathbb{R}^2$ and $\mathbb{R}^3$ in Chapter 4. We can extend these
definitions to $\mathbb{R}^n$.
1. If $\vec{u} = [u_1\ u_2\ \dots\ u_n]^T$ and $\vec{v} = [v_1\ v_2\ \dots\ v_n]^T$ are vectors in $\mathbb{R}^n$, then
   $\vec{u} \cdot \vec{v} = u_1 v_1 + u_2 v_2 + \dots + u_n v_n$.

2. Two vectors $\vec{u}, \vec{v} \in \mathbb{R}^n$ are orthogonal if $\vec{u} \cdot \vec{v} = 0$.

3. The length (or norm) of a vector $\vec{u} = [u_1\ u_2\ \dots\ u_n]^T$ is $\|\vec{u}\| = \sqrt{u_1^2 + u_2^2 + \dots + u_n^2}$, and it is always non-negative (and positive unless $\vec{u} = \vec{0}$).

4. $\|\vec{u}\|^2 = \vec{u} \cdot \vec{u}$
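To make these definitions concrete, here is a minimal sketch in Python with NumPy (not part of the original notes; the vectors are chosen arbitrarily for illustration):

```python
import numpy as np

u = np.array([1.0, 2.0, -1.0, 3.0])   # two vectors in R^4, chosen arbitrarily
v = np.array([2.0, 1.0, 4.0, 0.0])

dot = np.dot(u, v)                     # u . v = u1*v1 + u2*v2 + ... + un*vn
norm_u = np.sqrt(np.dot(u, u))         # ||u|| = sqrt(u . u)

print(dot)                             # 0.0, so u and v are orthogonal
print(norm_u, np.linalg.norm(u))       # both print sqrt(15)
```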
Cauchy Inequality
If $\vec{u}, \vec{v} \in \mathbb{R}^n$, then $|\vec{u} \cdot \vec{v}| \le \|\vec{u}\|\,\|\vec{v}\|$.
Proof:
The general case is proven in the textbook; we will consider the case where $n = 2$.
In $\mathbb{R}^2$, we can use the definition of the dot product involving $\cos\theta$ to prove this.
$|\vec{u} \cdot \vec{v}| = |\,\|\vec{u}\|\,\|\vec{v}\|\cos\theta\,|$
$= \|\vec{u}\|\,\|\vec{v}\|\,|\cos\theta|$
$\le \|\vec{u}\|\,\|\vec{v}\|(1)$, since $|\cos\theta| \le 1$
Triangle Inequality
If $\vec{u}, \vec{v} \in \mathbb{R}^n$, then $\|\vec{u} + \vec{v}\| \le \|\vec{u}\| + \|\vec{v}\|$.
Proof:
We will prove that $\|\vec{u} + \vec{v}\|^2 \le (\|\vec{u}\| + \|\vec{v}\|)^2$; since $\|\vec{u} + \vec{v}\| \ge 0$ and $\|\vec{u}\| + \|\vec{v}\| \ge 0$, we can take
square roots of both sides and the desired inequality will hold.
$\|\vec{u} + \vec{v}\|^2 = (\vec{u} + \vec{v}) \cdot (\vec{u} + \vec{v})$
$= \vec{u} \cdot \vec{u} + \vec{u} \cdot \vec{v} + \vec{v} \cdot \vec{u} + \vec{v} \cdot \vec{v}$
$= \|\vec{u}\|^2 + 2\,\vec{u} \cdot \vec{v} + \|\vec{v}\|^2$
$\le \|\vec{u}\|^2 + 2|\vec{u} \cdot \vec{v}| + \|\vec{v}\|^2$, since any real number is less than or equal to its absolute value
$\le \|\vec{u}\|^2 + 2\,\|\vec{u}\|\,\|\vec{v}\| + \|\vec{v}\|^2$, by the Cauchy Inequality
$= (\|\vec{u}\| + \|\vec{v}\|)^2$, as desired.
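A quick numerical sanity check of both inequalities, again a sketch with arbitrarily chosen vectors:

```python
import numpy as np

u = np.array([3.0, -1.0, 2.0])
v = np.array([1.0, 4.0, -2.0])

# Cauchy Inequality: |u . v| <= ||u|| ||v||
print(abs(np.dot(u, v)) <= np.linalg.norm(u) * np.linalg.norm(v))      # True

# Triangle Inequality: ||u + v|| <= ||u|| + ||v||
print(np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v))  # True
```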

Definition:
A set $\{\vec{u}_1, \vec{u}_2, \dots, \vec{u}_k\}$ of non-zero vectors in $\mathbb{R}^n$ is called an orthogonal set if $\vec{u}_i \cdot \vec{u}_j = 0$ for all $i \ne j$.
This set $\{\vec{u}_1, \vec{u}_2, \dots, \vec{u}_k\}$ is called orthonormal if it is orthogonal and each $\vec{u}_i$ is a unit vector (i.e.,
if $\|\vec{u}_i\| = 1$ for all $i$).
Examples
1. The standard basis $\{\vec{e}_1, \vec{e}_2, \dots, \vec{e}_n\}$ is an orthonormal set in $\mathbb{R}^n$.

2. If $\{\vec{u}_1, \vec{u}_2, \dots, \vec{u}_k\}$ is orthogonal, then so is $\{a_1\vec{u}_1, a_2\vec{u}_2, \dots, a_k\vec{u}_k\}$, where the $a_i$ are non-zero real scalars.

We can create an orthonormal set from any orthogonal set simply by dividing each vector by its length,
making it a unit vector. That is, if $\{\vec{u}_1, \vec{u}_2, \dots, \vec{u}_k\}$ is an orthogonal set, then
$\left\{\dfrac{1}{\|\vec{u}_1\|}\vec{u}_1, \dfrac{1}{\|\vec{u}_2\|}\vec{u}_2, \dots, \dfrac{1}{\|\vec{u}_k\|}\vec{u}_k\right\}$
is an orthonormal set.
Example:
If $\vec{f}_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ -1 \end{bmatrix}$, $\vec{f}_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 2 \end{bmatrix}$, $\vec{f}_3 = \begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \end{bmatrix}$, and $\vec{f}_4 = \begin{bmatrix} -1 \\ 3 \\ -1 \\ 1 \end{bmatrix}$, then show that $\{\vec{f}_1, \vec{f}_2, \vec{f}_3, \vec{f}_4\}$ is orthogonal,
then normalize this set.
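As a check on this example, the sketch below verifies pairwise orthogonality and then normalizes each vector; the signs of the entries are as reconstructed above, so treat the specific numbers as an assumption:

```python
import numpy as np

F = [np.array([ 1.0, 1.0,  1.0, -1.0]),
     np.array([ 1.0, 0.0,  1.0,  2.0]),
     np.array([-1.0, 0.0,  1.0,  0.0]),
     np.array([-1.0, 3.0, -1.0,  1.0])]

# Orthogonal: every distinct pair has dot product 0.
for i in range(len(F)):
    for j in range(i + 1, len(F)):
        print(i + 1, j + 1, np.dot(F[i], F[j]))        # all 0.0

# Normalize: divide each vector by its length to get an orthonormal set.
unit_F = [f / np.linalg.norm(f) for f in F]
print([round(np.linalg.norm(f), 10) for f in unit_F])  # all 1.0
```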
Pythagorean Theorem in $\mathbb{R}^n$
If $\{\vec{u}_1, \vec{u}_2, \dots, \vec{u}_k\}$ is orthogonal, then
$\|\vec{u}_1 + \vec{u}_2 + \dots + \vec{u}_k\|^2 = \|\vec{u}_1\|^2 + \|\vec{u}_2\|^2 + \dots + \|\vec{u}_k\|^2$

Proof:

In $\mathbb{R}^2$ or $\mathbb{R}^3$, we can see the connection between orthogonal sets and linear independence: two (non-zero) orthogonal vectors in $\mathbb{R}^2$ are linearly independent, and three (non-zero) orthogonal vectors in $\mathbb{R}^3$ are linearly independent.
In general, we have the following theorem:
Theorem
Every orthogonal set in $\mathbb{R}^n$ is linearly independent.
Proof:
Suppose $\{\vec{u}_1, \vec{u}_2, \dots, \vec{u}_k\}$ is orthogonal.
By definition, none of these vectors is the zero vector.
Then we will form the vector equation $\vec{0} = x_1\vec{u}_1 + x_2\vec{u}_2 + \dots + x_k\vec{u}_k$.
We are interested in the type of solution to this equation.
We will take the dot product of each side of this equation with $\vec{u}_1$:
Then we have
$0 = \vec{0} \cdot \vec{u}_1 = (x_1\vec{u}_1 + x_2\vec{u}_2 + \dots + x_k\vec{u}_k) \cdot \vec{u}_1$
$= x_1\|\vec{u}_1\|^2 + x_2\,\vec{u}_2 \cdot \vec{u}_1 + \dots + x_k\,\vec{u}_k \cdot \vec{u}_1$
$= x_1\|\vec{u}_1\|^2 + 0 + \dots + 0$
Since $\vec{u}_1 \ne \vec{0}$, we know $\|\vec{u}_1\|^2 \ne 0$, and thus $x_1 = 0$.
Similarly, we can find that $x_2 = 0, \dots, x_k = 0$, and thus our equation has only the trivial solution and
our set of vectors is linearly independent.
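One way to see this theorem numerically: put an orthogonal set into the columns of a matrix and confirm the rank equals the number of vectors. A sketch, reusing the orthogonal set from the earlier example:

```python
import numpy as np

# Columns are the orthogonal vectors f1, f2, f3, f4 from the example above.
A = np.array([[ 1.,  1., -1., -1.],
              [ 1.,  0.,  0.,  3.],
              [ 1.,  1.,  1., -1.],
              [-1.,  2.,  0.,  1.]])

print(np.linalg.matrix_rank(A))   # 4, so the four columns are linearly independent
```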
Since orthogonal sets are always linearly independent, we could consider using them as bases for
subspaces of $\mathbb{R}^n$. In fact, they are the best type of basis, because it is very easy to find the coefficient
to place in front of each basis vector when representing any vector in the subspace as a linear combination of
the basis vectors. There is a simple formula for calculating these coefficients, rather than row reducing
each time.

Expansion Theorem
Let $\{\vec{f}_1, \vec{f}_2, \dots, \vec{f}_m\}$ be an orthogonal basis of a subspace $U$ of $\mathbb{R}^n$. If $\vec{b}$ is any vector in $U$, then we
can write
$\vec{b} = \left(\dfrac{\vec{b} \cdot \vec{f}_1}{\|\vec{f}_1\|^2}\right)\vec{f}_1 + \left(\dfrac{\vec{b} \cdot \vec{f}_2}{\|\vec{f}_2\|^2}\right)\vec{f}_2 + \dots + \left(\dfrac{\vec{b} \cdot \vec{f}_m}{\|\vec{f}_m\|^2}\right)\vec{f}_m$
Proof:
Since $\{\vec{f}_1, \vec{f}_2, \dots, \vec{f}_m\}$ is a basis for $U$, this set spans $U$, and so we can write any vector $\vec{b}$ as some
linear combination of these basis vectors:
$\vec{b} = x_1\vec{f}_1 + x_2\vec{f}_2 + \dots + x_m\vec{f}_m$
Using the same method as in our last proof, we will take the dot product of both sides with $\vec{f}_1$ to obtain:
$\vec{b} \cdot \vec{f}_1 = (x_1\vec{f}_1 + x_2\vec{f}_2 + \dots + x_m\vec{f}_m) \cdot \vec{f}_1$
$= x_1\|\vec{f}_1\|^2 + x_2\,\vec{f}_2 \cdot \vec{f}_1 + \dots + x_m\,\vec{f}_m \cdot \vec{f}_1$
$= x_1\|\vec{f}_1\|^2 + 0 + \dots + 0$
Solving for $x_1$, we have $x_1 = \dfrac{\vec{b} \cdot \vec{f}_1}{\|\vec{f}_1\|^2}$.
We can find $x_2, \dots, x_m$ in a similar fashion.

Example:
Express the vector $\begin{bmatrix} 8 \\ 7 \\ 6 \\ 1 \end{bmatrix}$ as a linear combination of the orthogonal basis $\{\vec{f}_1, \vec{f}_2, \vec{f}_3, \vec{f}_4\}$ from above.
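A sketch of how the Expansion Theorem produces these coefficients without row reduction; the basis vectors are those reconstructed earlier and the target vector is taken as $[8\ 7\ 6\ 1]^T$ as written, so the numbers are illustrative:

```python
import numpy as np

F = [np.array([ 1.0, 1.0,  1.0, -1.0]),
     np.array([ 1.0, 0.0,  1.0,  2.0]),
     np.array([-1.0, 0.0,  1.0,  0.0]),
     np.array([-1.0, 3.0, -1.0,  1.0])]
b = np.array([8.0, 7.0, 6.0, 1.0])

# Coefficient on f_i is (b . f_i) / ||f_i||^2 -- no row reduction needed.
coeffs = [np.dot(b, f) / np.dot(f, f) for f in F]
print(coeffs)

# Check: the linear combination reproduces b.
print(sum(c * f for c, f in zip(coeffs, F)))   # [8. 7. 6. 1.]
```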

8.1 Orthogonal Complements & Projections


If we look at the Expansion Theorem again, the formula for each coefficient reminds us of the orthogonal
projection formula $\operatorname{proj}_{\vec{d}}\,\vec{x} = \left(\dfrac{\vec{x} \cdot \vec{d}}{\|\vec{d}\|^2}\right)\vec{d}$.
We have already seen how to project a vector $\vec{x}$ onto a line $L$ or a plane $P$ in Section 4.4.
And if $\{\vec{f}_1, \vec{f}_2, \vec{f}_3\}$ is the standard basis in $\mathbb{R}^3$, we have seen how to decompose any vector
$\vec{x} = [x_1\ x_2\ x_3]^T \in \mathbb{R}^3$ as $x_1\vec{f}_1 + x_2\vec{f}_2 + x_3\vec{f}_3$, where we can think of each term as the projection of $\vec{x}$
onto each coordinate axis.
When we express a vector $\vec{x} \in \mathbb{R}^n$ in terms of its orthogonal projections onto an orthogonal basis for
a subspace $U \subseteq \mathbb{R}^n$, we say we find the projection of the vector $\vec{x}$ onto the subspace $U$, and we
write
$\operatorname{proj}_U\,\vec{x} = \left(\dfrac{\vec{x} \cdot \vec{f}_1}{\|\vec{f}_1\|^2}\right)\vec{f}_1 + \dots + \left(\dfrac{\vec{x} \cdot \vec{f}_m}{\|\vec{f}_m\|^2}\right)\vec{f}_m$
where $\{\vec{f}_1, \dots, \vec{f}_m\}$ is an orthogonal basis for $U$.

We say $\operatorname{proj}_U\,\vec{x}$ is the vector in $U$ that is closest to $\vec{x}$, where we think of the perpendicular distance
as being the shortest distance.
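This formula translates directly into code. Below is a minimal sketch of a helper (the name proj_U is chosen here, not taken from the notes) that projects a vector onto the span of a given orthogonal basis:

```python
import numpy as np

def proj_U(x, orthogonal_basis):
    """Project x onto the subspace spanned by an *orthogonal* basis."""
    p = np.zeros_like(x, dtype=float)
    for f in orthogonal_basis:
        p += (np.dot(x, f) / np.dot(f, f)) * f   # (x . f / ||f||^2) f
    return p
```

Note that this relies on the supplied basis being orthogonal; for a general basis the coefficients interact and this term-by-term formula no longer gives the projection.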

Definition:
If $U$ is a subspace of $\mathbb{R}^n$, then the orthogonal complement of $U$, denoted $U^\perp$, is the set of all
vectors that are perpendicular to every vector in $U$:
$U^\perp = \{\vec{x} \in \mathbb{R}^n : \vec{x} \cdot \vec{u} = 0 \text{ for all } \vec{u} \in U\}$
Theorem:
If $U$ is a subspace of $\mathbb{R}^n$ and $\vec{x} \in \mathbb{R}^n$, then $\operatorname{proj}_U\,\vec{x} \in U$ and $\vec{x} - \operatorname{proj}_U\,\vec{x} \in U^\perp$.
Proof:
Let $\{\vec{f}_1, \vec{f}_2, \dots, \vec{f}_m\}$ be an orthogonal basis for $U$.
$\operatorname{proj}_U\,\vec{x} \in U$ since it is a linear combination of the basis vectors of $U$.
If we can show that $\vec{x} - \operatorname{proj}_U\,\vec{x} \perp \vec{f}_i$ for each $i = 1, 2, \dots, m$, then $\vec{x} - \operatorname{proj}_U\,\vec{x} \in U^\perp$, since if a vector
$\vec{y}$ is perpendicular to every basis vector for $U$, then it will be perpendicular to every vector in $U$.
(To see this, let $\vec{u}$ represent any vector in $U$. Since $\{\vec{f}_1, \vec{f}_2, \dots, \vec{f}_m\}$ is an orthogonal basis for $U$,
we can write $\vec{u} = c_1\vec{f}_1 + c_2\vec{f}_2 + \dots + c_m\vec{f}_m$. Now if we take $\vec{y} \cdot \vec{u} = \vec{y} \cdot (c_1\vec{f}_1 + c_2\vec{f}_2 + \dots + c_m\vec{f}_m) =
c_1\,\vec{y} \cdot \vec{f}_1 + c_2\,\vec{y} \cdot \vec{f}_2 + \dots + c_m\,\vec{y} \cdot \vec{f}_m = 0 + 0 + \dots + 0 = 0$, and so $\vec{y} \perp \vec{u}$.)
Now, using the Expansion Theorem,
$\vec{x} - \operatorname{proj}_U\,\vec{x} = \vec{x} - \left(\dfrac{\vec{x} \cdot \vec{f}_1}{\|\vec{f}_1\|^2}\right)\vec{f}_1 - \left(\dfrac{\vec{x} \cdot \vec{f}_2}{\|\vec{f}_2\|^2}\right)\vec{f}_2 - \dots - \left(\dfrac{\vec{x} \cdot \vec{f}_m}{\|\vec{f}_m\|^2}\right)\vec{f}_m$
and if we take the dot product of $\vec{x} - \operatorname{proj}_U\,\vec{x}$ with each $\vec{f}_i$, we will get $0$.
For example,
$(\vec{x} - \operatorname{proj}_U\,\vec{x}) \cdot \vec{f}_1 = \vec{x} \cdot \vec{f}_1 - \left(\dfrac{\vec{x} \cdot \vec{f}_1}{\|\vec{f}_1\|^2}\right)\vec{f}_1 \cdot \vec{f}_1 - \left(\dfrac{\vec{x} \cdot \vec{f}_2}{\|\vec{f}_2\|^2}\right)\vec{f}_2 \cdot \vec{f}_1 - \dots - \left(\dfrac{\vec{x} \cdot \vec{f}_m}{\|\vec{f}_m\|^2}\right)\vec{f}_m \cdot \vec{f}_1$
$= \vec{x} \cdot \vec{f}_1 - \vec{x} \cdot \vec{f}_1 - 0 - \dots - 0$
$= 0$

Example:
Find the vector in $U$ that is closest to $\vec{x}$, and express $\vec{x}$ as the sum of a vector in $U$ and a vector in
$U^\perp$, where
$U = \operatorname{span}\left\{\begin{bmatrix} 1 \\ 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \\ 1 \end{bmatrix}\right\}$ and $\vec{x} = \begin{bmatrix} 3 \\ -1 \\ 0 \\ 2 \end{bmatrix}$.
Solution:
We can write $\vec{x} = \operatorname{proj}_U\,\vec{x} + (\vec{x} - \operatorname{proj}_U\,\vec{x})$.
The two spanning vectors $\vec{f}_1 = [1\ 1\ 0\ 1]^T$ and $\vec{f}_2 = [-1\ 0\ 1\ 1]^T$ are orthogonal, so by the Expansion Theorem we have
$\operatorname{proj}_U\,\vec{x} = \left(\dfrac{\vec{x} \cdot \vec{f}_1}{\|\vec{f}_1\|^2}\right)\vec{f}_1 + \left(\dfrac{\vec{x} \cdot \vec{f}_2}{\|\vec{f}_2\|^2}\right)\vec{f}_2$
$= \dfrac{4}{1^2 + 1^2 + 0^2 + 1^2}\begin{bmatrix} 1 \\ 1 \\ 0 \\ 1 \end{bmatrix} + \dfrac{-1}{(-1)^2 + 0^2 + 1^2 + 1^2}\begin{bmatrix} -1 \\ 0 \\ 1 \\ 1 \end{bmatrix}$
$= \dfrac{4}{3}\begin{bmatrix} 1 \\ 1 \\ 0 \\ 1 \end{bmatrix} - \dfrac{1}{3}\begin{bmatrix} -1 \\ 0 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 5/3 \\ 4/3 \\ -1/3 \\ 1 \end{bmatrix} \in U$
And $\vec{x} - \operatorname{proj}_U\,\vec{x} = \begin{bmatrix} 3 \\ -1 \\ 0 \\ 2 \end{bmatrix} - \begin{bmatrix} 5/3 \\ 4/3 \\ -1/3 \\ 1 \end{bmatrix} = \begin{bmatrix} 4/3 \\ -7/3 \\ 1/3 \\ 1 \end{bmatrix} \in U^\perp$.
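A numerical check of this example (a sketch; the vectors are as reconstructed in the example statement):

```python
import numpy as np

f1 = np.array([ 1.0,  1.0, 0.0, 1.0])
f2 = np.array([-1.0,  0.0, 1.0, 1.0])
x  = np.array([ 3.0, -1.0, 0.0, 2.0])

p = (np.dot(x, f1) / np.dot(f1, f1)) * f1 + (np.dot(x, f2) / np.dot(f2, f2)) * f2
print(p)                                      # (5/3, 4/3, -1/3, 1)
print(x - p)                                  # (4/3, -7/3, 1/3, 1)
print(np.dot(x - p, f1), np.dot(x - p, f2))   # both 0 up to rounding, so x - p is in U-perp
```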

So, given an orthogonal basis for a subspace $U$, we can easily project any vector onto $U$. But what
if we are not given the orthogonal basis to begin with?
We can always construct an orthogonal basis from a regular basis.
Suppose a subspace $W$ has basis $\{\vec{x}_1, \vec{x}_2, \vec{x}_3\}$ and we want to construct an orthogonal basis $\{\vec{f}_1, \vec{f}_2, \vec{f}_3\}$.
Step 1: Let $\vec{f}_1 = \vec{x}_1$ and let $W_1 = \operatorname{span}\{\vec{f}_1\} = \operatorname{span}\{\vec{x}_1\}$.
Step 2: Let $\vec{f}_2 = \vec{x}_2 - \operatorname{proj}_{W_1}\vec{x}_2 \in W_1^\perp$. Thus $\vec{f}_2 \perp \vec{f}_1$.
Now, let $W_2 = \operatorname{span}\{\vec{f}_1, \vec{f}_2\} = \operatorname{span}\{\vec{x}_1, \vec{x}_2\}$ (since $\vec{f}_2$ is a linear combination of $\vec{x}_2$ and $\vec{f}_1 = \vec{x}_1$).
Step 3: Let $\vec{f}_3 = \vec{x}_3 - \operatorname{proj}_{W_2}\vec{x}_3 \in W_2^\perp$. Then $\vec{f}_3 \perp \vec{f}_1$ and $\vec{f}_3 \perp \vec{f}_2$.
And now if we let $W_3 = \operatorname{span}\{\vec{f}_1, \vec{f}_2, \vec{f}_3\}$, then if we can also show that
$W_3 = \operatorname{span}\{\vec{x}_1, \vec{x}_2, \vec{x}_3\} = W$, we will have constructed an orthogonal set of vectors that spans $W$.
We can verify that $\{\vec{f}_1, \vec{f}_2, \vec{f}_3\}$ is an orthogonal basis for $W$ since first of all the vectors are linearly
independent because they form an orthogonal set.
We need only show that $\operatorname{span}\{\vec{f}_1, \vec{f}_2, \vec{f}_3\} = \operatorname{span}\{\vec{x}_1, \vec{x}_2, \vec{x}_3\}$ to make sure that our new set of vectors
spans the same subspace $W$.
If we can write each vector $\vec{f}_i$ as a linear combination of $\{\vec{x}_1, \vec{x}_2, \vec{x}_3\}$ and each vector $\vec{x}_i$ as a linear
combination of $\{\vec{f}_1, \vec{f}_2, \vec{f}_3\}$, then the two spanning sets will be equal.
We have
$\vec{f}_1 = \vec{x}_1$
$\vec{f}_2 = \vec{x}_2 - \operatorname{proj}_{W_1}\vec{x}_2 = \vec{x}_2 - \left(\dfrac{\vec{x}_2 \cdot \vec{f}_1}{\|\vec{f}_1\|^2}\right)\vec{f}_1 = \vec{x}_2 - \left(\dfrac{\vec{x}_2 \cdot \vec{x}_1}{\|\vec{x}_1\|^2}\right)\vec{x}_1$
$\vec{f}_3 = \vec{x}_3 - \left(\dfrac{\vec{x}_3 \cdot \vec{f}_1}{\|\vec{f}_1\|^2}\right)\vec{f}_1 - \left(\dfrac{\vec{x}_3 \cdot \vec{f}_2}{\|\vec{f}_2\|^2}\right)\vec{f}_2$
Substituting the expressions for $\vec{f}_2$ and $\vec{f}_1$ in terms of $\vec{x}_1$ and $\vec{x}_2$ from above, we can see that $\vec{f}_3$ is
also a linear combination of $\{\vec{x}_1, \vec{x}_2, \vec{x}_3\}$.
Similarly, you can verify that each $\vec{x}_i$ is a linear combination of $\{\vec{f}_1, \vec{f}_2, \vec{f}_3\}$.

Gram-Schmidt Process for Creating an Orthogonal Basis


Suppose $\{\vec{x}_1, \vec{x}_2, \dots, \vec{x}_m\}$ is a basis for a subspace $U$ of $\mathbb{R}^n$.
If we let
$\vec{f}_1 = \vec{x}_1$
$\vec{f}_2 = \vec{x}_2 - \left(\dfrac{\vec{x}_2 \cdot \vec{f}_1}{\|\vec{f}_1\|^2}\right)\vec{f}_1$
$\vec{f}_3 = \vec{x}_3 - \left(\dfrac{\vec{x}_3 \cdot \vec{f}_1}{\|\vec{f}_1\|^2}\right)\vec{f}_1 - \left(\dfrac{\vec{x}_3 \cdot \vec{f}_2}{\|\vec{f}_2\|^2}\right)\vec{f}_2$
$\vdots$
$\vec{f}_m = \vec{x}_m - \left(\dfrac{\vec{x}_m \cdot \vec{f}_1}{\|\vec{f}_1\|^2}\right)\vec{f}_1 - \dots - \left(\dfrac{\vec{x}_m \cdot \vec{f}_{m-1}}{\|\vec{f}_{m-1}\|^2}\right)\vec{f}_{m-1}$
Then, $\{\vec{f}_1, \vec{f}_2, \dots, \vec{f}_m\}$ is an orthogonal basis for $U$.
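A minimal sketch of the process in code (the function name gram_schmidt is chosen here, not from the notes); each new $\vec{f}_k$ is $\vec{x}_k$ minus its projections onto the previously constructed vectors:

```python
import numpy as np

def gram_schmidt(basis):
    """Turn a list of linearly independent vectors into an orthogonal list."""
    ortho = []
    for x in basis:
        x = np.asarray(x, dtype=float)
        # f_k = x_k - sum over earlier f of (x_k . f / ||f||^2) f
        f = x - sum((np.dot(x, g) / np.dot(g, g)) * g for g in ortho)
        ortho.append(f)
    return ortho
```

Dividing each output vector by its length afterwards would give an orthonormal basis, as discussed earlier.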
Example:
Convert the basis $B = \left\{\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \begin{bmatrix} 3 \\ 2 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix}\right\}$ to an orthogonal basis for $U = \operatorname{span}\{B\}$.
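As a rough check, the sketch below runs the same process on $B$ as transcribed (if any signs were lost in the original, the numbers would change accordingly):

```python
import numpy as np

B = [np.array([1.0, 1.0, 1.0, 1.0]),
     np.array([3.0, 2.0, 0.0, 1.0]),
     np.array([1.0, 0.0, 1.0, 0.0])]

ortho = []
for x in B:
    f = x - sum((np.dot(x, g) / np.dot(g, g)) * g for g in ortho)
    ortho.append(f)

for f in ortho:
    print(f)                                   # the orthogonal basis f1, f2, f3
print(np.dot(ortho[0], ortho[1]),
      np.dot(ortho[0], ortho[2]),
      np.dot(ortho[1], ortho[2]))              # all pairwise dot products are 0
```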
