
CS 143 Linear Algebra Review

Stefan Roth
September 29, 2003
Introductory Remarks
This review does not aim for mathematical rigor, but rather for ease of understanding and conciseness.
Please see the course webpage on background material for links
to further linear algebra material.
Overview
Vectors in $\mathbb{R}^n$
Scalar product
Bases and transformations
Inverse Transformations
Eigendecomposition
Singular Value Decomposition
Warm up: Vectors in $\mathbb{R}^n$
We can think of vectors in two ways:
As points in a multidimensional space with respect to some coordinate system
As a translation of the coordinate origin in a multidimensional space
Example in two dimensions ($\mathbb{R}^2$):
Vectors in $\mathbb{R}^n$ (II)

Notation:

$x \in \mathbb{R}^n, \quad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad x^T = (x_1, x_2, \ldots, x_n)$
We will skip some rather obvious things: adding and subtracting vectors, multiplication by a scalar, and simple properties of these operations.
Scalar Product
A product of two vectors
Amounts to projection of one vector onto the other
Example in 2D:
The shown segment has length $\langle x, y \rangle$ if $x$ and $y$ are unit vectors.
Scalar Product (II)
Various notations:
$\langle x, y \rangle$
$x^T y$
$x \cdot y$ or $x \circ y$
We will only use the first and second one!
Other names: dot product, inner product
Scalar Product (III)
Built on the following axioms:

Linearity:
$\langle x + x', y \rangle = \langle x, y \rangle + \langle x', y \rangle$
$\langle x, y + y' \rangle = \langle x, y \rangle + \langle x, y' \rangle$
$\langle \lambda x, y \rangle = \lambda \langle x, y \rangle$
$\langle x, \lambda y \rangle = \lambda \langle x, y \rangle$

Symmetry: $\langle x, y \rangle = \langle y, x \rangle$

Non-negativity:
$\forall x \neq 0: \langle x, x \rangle > 0, \qquad \langle x, x \rangle = 0 \Leftrightarrow x = 0$
Scalar Product in $\mathbb{R}^n$

Here: Euclidean space

Definition:

$x, y \in \mathbb{R}^n: \quad \langle x, y \rangle = \sum_{i=1}^n x_i y_i$

In terms of angles:

$\langle x, y \rangle = \|x\| \, \|y\| \cos(\angle(x, y))$
Other properties: commutative, associative, distributive
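The definition and the angle formula are easy to check numerically. A minimal sketch in NumPy (added here for illustration; the vectors are arbitrary examples, not from the slides):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Definition: sum of elementwise products
dot = np.sum(x * y)
assert np.isclose(dot, np.dot(x, y))

# Angle formula: <x, y> = ||x|| ||y|| cos(angle(x, y))
cos_angle = dot / (np.linalg.norm(x) * np.linalg.norm(y))
print(dot, np.arccos(cos_angle))  # scalar product and angle in radians
```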
Norms on $\mathbb{R}^n$

The scalar product induces the Euclidean norm (sometimes called 2-norm):

$\|x\| = \|x\|_2 = \sqrt{\langle x, x \rangle} = \sqrt{\sum_{i=1}^n x_i^2}$
The length of a vector is defined to be its (Euclidean) norm. A unit vector is one of length 1.
Non-negativity properties also hold for the norm:
$\forall x \neq 0: \|x\|_2 > 0, \qquad \|x\|_2 = 0 \Leftrightarrow x = 0$
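A short sketch of the norm and of normalizing to a unit vector (NumPy, illustrative values):

```python
import numpy as np

x = np.array([3.0, 4.0])

# Euclidean norm: sqrt(<x, x>)
norm = np.sqrt(np.dot(x, x))
assert np.isclose(norm, np.linalg.norm(x))  # 5.0 for (3, 4)

# Dividing by the norm yields a unit vector (length 1)
x_unit = x / norm
assert np.isclose(np.linalg.norm(x_unit), 1.0)
```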
Bases and Transformations
We will look at:
Linear dependence
Bases
Orthogonality
Change of basis (Linear transformation)
Matrices and matrix operations
Linear dependence
A linear combination of vectors $x_1, \ldots, x_n$:

$\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_n x_n$

A set of vectors $X = \{x_1, \ldots, x_n\}$ is linearly dependent if some $x_i \in X$ can be written as a linear combination of the rest, $X \setminus \{x_i\}$.
Linear dependence (II)
Geometrically:
In $\mathbb{R}^n$ it holds that
a set of 2 to n vectors can be linearly dependent
sets of n + 1 or more vectors are always linearly dependent
Bases
A basis is a linearly independent set of vectors that spans the whole space, i.e. we can write every vector in our space as a linear combination of vectors in that set.
Every set of n linearly independent vectors in $\mathbb{R}^n$ is a basis of $\mathbb{R}^n$.
Orthogonality: Two non-zero vectors $x$ and $y$ are orthogonal if $\langle x, y \rangle = 0$.
A basis is called
orthogonal, if every basis vector is orthogonal to all other basis vectors
orthonormal, if additionally all basis vectors have length 1.
Bases (Examples in $\mathbb{R}^2$)
Bases continued
Standard basis in $\mathbb{R}^n$ (also called unit vectors):

$e_i \in \mathbb{R}^n: \quad e_i = (\underbrace{0, \ldots, 0}_{i-1 \text{ times}}, 1, \underbrace{0, \ldots, 0}_{n-i \text{ times}})^T$
We can write a vector in terms of its standard basis, e.g.

$\begin{pmatrix} 4 \\ 7 \\ -3 \end{pmatrix} = 4 \cdot e_1 + 7 \cdot e_2 - 3 \cdot e_3$
Important observation: $x_i = \langle e_i, x \rangle$, i.e. to find the coefficient for a particular basis vector, we project our vector onto it.
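To make this concrete: projecting onto the standard basis vectors just reads off the coordinates. A small sketch (NumPy; the vector is the example from above):

```python
import numpy as np

x = np.array([4.0, 7.0, -3.0])
E = np.eye(3)  # rows are the standard basis vectors e_1, e_2, e_3

# x_i = <e_i, x>: projecting onto e_i recovers the i-th coefficient
coeffs = np.array([np.dot(E[i], x) for i in range(3)])
assert np.allclose(coeffs, x)
```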
Change of basis
Suppose we have a basis $B = \{b_1, \ldots, b_n\}$, $b_i \in \mathbb{R}^m$, and a vector $x \in \mathbb{R}^m$ that we would like to represent in terms of B.

Note that m and n can be equal, but don't have to be.
To that end we project x onto all basis vectors:

$\tilde{x} = \begin{pmatrix} \langle b_1, x \rangle \\ \langle b_2, x \rangle \\ \vdots \\ \langle b_n, x \rangle \end{pmatrix} = \begin{pmatrix} b_1^T x \\ b_2^T x \\ \vdots \\ b_n^T x \end{pmatrix}$
Change of basis (Example)
Change of basis continued
Let's rewrite the change of basis:

$\tilde{x} = \begin{pmatrix} b_1^T x \\ b_2^T x \\ \vdots \\ b_n^T x \end{pmatrix} = \begin{pmatrix} b_1^T \\ b_2^T \\ \vdots \\ b_n^T \end{pmatrix} x = Bx$

Voilà, we get an $n \times m$ matrix B times an m-vector x!

The matrix-vector product written out is thus:

$\tilde{x}_i = (Bx)_i = b_i^T x = \sum_{j=1}^m b_{ij} x_j$
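A sketch of the same computation in NumPy (the orthonormal basis below is an arbitrary example):

```python
import numpy as np

# A basis of R^2, one basis vector per row of B
B = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)
x = np.array([2.0, 3.0])

# Change of basis as a matrix-vector product ...
x_tilde = B @ x
# ... equals projecting x onto every basis vector
assert np.allclose(x_tilde, [np.dot(b, x) for b in B])
```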
Some properties
Because of the linearity of the scalar product, this transformation is linear as well:

$B(\lambda x) = \lambda Bx$
$B(x + y) = Bx + By$

The identity transformation / matrix maps a vector to itself:

$I = \begin{pmatrix} e_1^T \\ e_2^T \\ \vdots \\ e_n^T \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & 1 \end{pmatrix}$
Matrix Multiplication
We can now also understand what a matrix-matrix multiplication is. Consider what happens when we change the basis first to B and then from there change the basis to A:

$\hat{x} = ABx = A(Bx) = A\tilde{x}$

$\hat{x}_i = (A\tilde{x})_i = \sum_{j=1}^n a_{ij} \tilde{x}_j = \sum_{j=1}^n a_{ij} \sum_{k=1}^m b_{jk} x_k = \sum_{k=1}^m \Big( \sum_{j=1}^n a_{ij} b_{jk} \Big) x_k = \sum_{k=1}^m (AB)_{ik} x_k$

Therefore $A \cdot B = \Big( \sum_{j=1}^n a_{ij} b_{jk} \Big)_{ik}$
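Numerically, applying B and then A is indeed the same as applying the single matrix AB. A quick check with random matrices (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))  # 2x3
B = rng.standard_normal((3, 4))  # 3x4
x = rng.standard_normal(4)

# Changing basis with B first and then A equals one change with AB
assert np.allclose(A @ (B @ x), (A @ B) @ x)
print((A @ B).shape)  # (2, 4)
```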
More matrix properties
The product of an $l \times n$ and an $n \times m$ matrix is an $l \times m$ matrix.
The matrix product is associative and distributive, but not commutative.
We will sometimes use the following rule: $(AB)^T = B^T A^T$
Addition, subtraction, and multiplication by a scalar are defined much like for vectors. Matrices are associative, distributive, and commutative regarding these operations.
One more example - Rotation matrices
Suppose we want to rotate a vector in $\mathbb{R}^2$ around the origin. This amounts to a change of basis, in which the basis vectors are rotated:

$R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$
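For instance, rotating $e_1$ by 90 degrees should yield $e_2$. A minimal sketch:

```python
import numpy as np

def rotation(theta):
    # 2D rotation matrix for a counter-clockwise rotation by theta (radians)
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

R = rotation(np.pi / 2)  # rotate by 90 degrees
e1 = np.array([1.0, 0.0])
assert np.allclose(R @ e1, [0.0, 1.0])  # e_1 is mapped to e_2
```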
Inverse Transformations - Overview
Rank of a Matrix
Inverse Matrix
Some properties
Rank of a Matrix
The rank of a matrix is the number of linearly independent rows or
columns.
Examples:

$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ has rank 2, but $\begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix}$ only has rank 1.
Equivalent to the dimension of the range of the linear
transformation.
A matrix with full rank is called non-singular; otherwise it is singular.
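The two example matrices can be checked with NumPy's rank routine (a sketch; `matrix_rank` determines the rank numerically via an SVD):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[2.0, 1.0],
              [4.0, 2.0]])  # second row is twice the first

print(np.linalg.matrix_rank(A))  # 2 -> full rank, non-singular
print(np.linalg.matrix_rank(B))  # 1 -> singular
```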
Inverse Matrix
A linear transformation can only have an inverse if the associated matrix is non-singular.

The inverse $A^{-1}$ of a matrix A is defined by:

$A^{-1} A = I \quad \left( = A A^{-1} \right)$

We cannot cover here how the inverse is computed; nevertheless, it is similar to solving ordinary systems of linear equations.
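A sketch of computing and using the inverse (NumPy; in practice one usually solves the linear system directly instead of forming the inverse):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # non-singular (rank 2)

A_inv = np.linalg.inv(A)
assert np.allclose(A_inv @ A, np.eye(2))  # A^{-1} A = I

# Usually preferable: solve A z = b without forming A^{-1}
b = np.array([3.0, 1.0])
assert np.allclose(np.linalg.solve(A, b), A_inv @ b)
```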
Some properties
Matrix multiplication: $(AB)^{-1} = B^{-1} A^{-1}$.

For orthonormal matrices it holds that $A^{-1} = A^T$.

For a diagonal matrix $D = \mathrm{diag}(d_1, \ldots, d_n)$:

$D^{-1} = \mathrm{diag}(d_1^{-1}, \ldots, d_n^{-1})$
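Both special cases are easy to verify; the rotation matrix from earlier serves as an orthonormal example (a sketch):

```python
import numpy as np

# Orthonormal example: a rotation matrix, whose inverse is its transpose
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(R.T @ R, np.eye(2))  # A^{-1} = A^T

# Diagonal example: invert the diagonal entries elementwise
D = np.diag([2.0, 4.0, 5.0])
assert np.allclose(np.linalg.inv(D), np.diag([0.5, 0.25, 0.2]))
```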
Determinants
Determinants are a pretty complex topic; we will only cover basic
properties.
Definition ($\Sigma(n)$ = permutations of $\{1, \ldots, n\}$):

$\det(A) = \sum_{\sigma \in \Sigma(n)} (-1)^{|\sigma|} \prod_{i=1}^n a_{i\sigma(i)}$

Some properties:
$\det(A) = 0$ iff A is singular.
$\det(AB) = \det(A) \cdot \det(B), \quad \det(A^{-1}) = \det(A)^{-1}$
$\det(\lambda A) = \lambda^n \det(A)$
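The properties can be spot-checked numerically (illustrative matrices; `np.linalg.det` uses an LU factorization, not the exponential permutation formula):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[2.0, 1.0], [4.0, 2.0]])  # rank 1, hence singular

assert np.isclose(np.linalg.det(B), 0.0)        # singular -> det = 0
assert np.isclose(np.linalg.det(A @ A),
                  np.linalg.det(A) ** 2)        # det(AB) = det(A) det(B)
lam, n = 3.0, 2
assert np.isclose(np.linalg.det(lam * A),
                  lam ** n * np.linalg.det(A))  # det(lam A) = lam^n det(A)
```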
Eigendecomposition - Overview
Eigenvalues and Eigenvectors
How to compute them?
Eigendecomposition
Eigenvalues and Eigenvectors
All non-zero vectors x for which there is a $\lambda \in \mathbb{R}$ so that

$Ax = \lambda x$

are called eigenvectors of A; the $\lambda$ are the associated eigenvalues.

If e is an eigenvector of A, then so is $c \cdot e$ with $c \neq 0$.

Label the eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ with their eigenvectors $e_1, \ldots, e_n$ (assumed to be unit vectors).
How to compute them?
Rewrite the definition as

$(A - \lambda I)x = 0$

$\det(A - \lambda I) = 0$ has to hold, because $A - \lambda I$ cannot have full rank.
This gives a polynomial in $\lambda$, the so-called characteristic polynomial.
Find the eigenvalues by finding the roots of that polynomial.
Find the associated eigenvectors by solving a linear equation system.
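This recipe can be traced step by step: `np.poly` returns the coefficients of the characteristic polynomial of a matrix and `np.roots` finds its roots (a sketch; production code would simply call `np.linalg.eig`):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)         # coefficients of det(A - lambda I), highest power first
eigvals = np.roots(coeffs)  # roots of the characteristic polynomial

# Same eigenvalues as the standard routine (up to ordering)
assert np.allclose(np.sort(eigvals), np.sort(np.linalg.eig(A)[0]))
print(eigvals)  # 3.0 and 1.0 for this matrix
```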
Eigendecomposition
Every real, square, symmetric matrix A can be decomposed as:

$A = V D V^T,$

where V is an orthonormal matrix of A's eigenvectors and D is a diagonal matrix of the associated eigenvalues.

The eigendecomposition is essentially a restricted variant of the Singular Value Decomposition.
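A sketch for a small symmetric example (`np.linalg.eigh` is NumPy's routine for symmetric matrices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # real, square, symmetric

w, V = np.linalg.eigh(A)  # eigenvalues w (ascending), eigenvectors as columns of V
D = np.diag(w)

assert np.allclose(V @ D @ V.T, A)      # A = V D V^T
assert np.allclose(V.T @ V, np.eye(2))  # V is orthonormal
```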
Some properties
The determinant of a square matrix is the product of its eigenvalues: $\det(A) = \lambda_1 \cdot \ldots \cdot \lambda_n$.
A square matrix is singular iff one of its eigenvalues is 0.
A square matrix A is called positive (semi-)definite if all of its eigenvalues are positive (non-negative). Equivalent criterion: $x^T A x > 0 \;\; \forall x \neq 0$ ($\geq 0$ if semi-definite).
Singular Value Decomposition (SVD)
Suppose $A \in \mathbb{R}^{m \times n}$. Then $\sigma \geq 0$ is called a singular value of A if there exist $u \in \mathbb{R}^m$ and $v \in \mathbb{R}^n$ such that

$Av = \sigma u \quad \text{and} \quad A^T u = \sigma v$

We can decompose any matrix $A \in \mathbb{R}^{m \times n}$ as

$A = U \Sigma V^T,$

where $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ are orthonormal and $\Sigma$ is a diagonal matrix of the singular values.
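A minimal sketch (note that `np.linalg.svd` returns $V^T$ rather than V, and the singular values as a vector):

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)       # arbitrary 2x3 example

U, s, Vt = np.linalg.svd(A)            # full SVD: U is 2x2, Vt is 3x3
Sigma = np.zeros_like(A)               # embed the singular values in a 2x3 matrix
Sigma[:len(s), :len(s)] = np.diag(s)

assert np.allclose(U @ Sigma @ Vt, A)  # A = U Sigma V^T
```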
Relation to Eigendecomposition
The columns of U are the eigenvectors of $AA^T$, and the (non-zero) singular values of A are the square roots of the (non-zero) eigenvalues of $AA^T$.

If $A \in \mathbb{R}^{m \times n}$ ($m \ll n$) is a matrix of m i.i.d. samples from an n-dimensional Gaussian distribution, we can use the SVD of A to compute the eigenvectors and eigenvalues of the covariance matrix without building it explicitly.
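A hedged sketch of that trick. Assumptions not stated on the slide: the rows of A are the samples and have been mean-centered, and the covariance is taken as $A^T A / (m - 1)$; its eigenvectors are then the columns of V and its eigenvalues the squared singular values divided by $m - 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 100                    # few samples, high dimension (m << n)
A = rng.standard_normal((m, n))
A = A - A.mean(axis=0)           # center the samples (rows)

# Thin SVD of the m x n data matrix: the n x n covariance is never built
U, s, Vt = np.linalg.svd(A, full_matrices=False)
eig_vals = s**2 / (m - 1)        # eigenvalues of the covariance
eig_vecs = Vt                    # rows: the corresponding eigenvectors

# Check against the explicit covariance (feasible here, expensive for large n)
C = A.T @ A / (m - 1)
assert np.allclose(C @ eig_vecs[0], eig_vals[0] * eig_vecs[0])
```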