
VECTOR SPACE INTERPRETATION OF RANDOM VARIABLES

Interpretation of random variables as elements of a vector space helps in understanding many operations involving random variables. We start with an introduction to the concepts of a vector space. We will then discuss the principles of minimum mean-square error estimation and linear minimum mean-square error estimation of a signal from noise, and the vector-space interpretation of the latter.

Vector space
Consider a set $V$ with elements called vectors and the set of real numbers $\mathbb{R}$. The elements of $\mathbb{R}$ will be called scalars. A vector will be denoted by a bold-face character like $\mathbf{v}$. Suppose two operations, called vector addition and scalar multiplication respectively, are defined on $V$.

$V$ is called a vector space if the following properties are satisfied.


(1) $(V, +)$ is a commutative group. Thus $(V, +)$ satisfies the following properties.
(i) Closure property: For any pair of elements $\mathbf{v}, \mathbf{w} \in V$, there exists a unique element $(\mathbf{v} + \mathbf{w}) \in V$.
(ii) Associativity: Vector addition is associative: $\mathbf{v} + (\mathbf{w} + \mathbf{z}) = (\mathbf{v} + \mathbf{w}) + \mathbf{z}$ for any three vectors $\mathbf{v}, \mathbf{w}, \mathbf{z} \in V$.
(iii) Existence of the zero vector: There is a vector $\mathbf{0} \in V$ such that $\mathbf{v} + \mathbf{0} = \mathbf{0} + \mathbf{v} = \mathbf{v}$ for any $\mathbf{v} \in V$.
(iv) Existence of the additive inverse: For any $\mathbf{v} \in V$ there is a vector $-\mathbf{v} \in V$ such that $\mathbf{v} + (-\mathbf{v}) = \mathbf{0} = (-\mathbf{v}) + \mathbf{v}$.
(v) Commutativity: For any $\mathbf{v}, \mathbf{w} \in V$, $\mathbf{v} + \mathbf{w} = \mathbf{w} + \mathbf{v}$.


(2) For any element $\mathbf{v} \in V$ and any $r \in \mathbb{R}$, the scalar multiple $r\mathbf{v} \in V$.
Thus the scalar multiplication has the following properties for any $r, s \in \mathbb{R}$ and any $\mathbf{v}, \mathbf{w} \in V$:

(1) Associativity: $r(s\mathbf{v}) = (rs)\mathbf{v}$ for $r, s \in \mathbb{R}$ and $\mathbf{v} \in V$

(2) Distributivity with respect to vector addition: $r(\mathbf{v} + \mathbf{w}) = r\mathbf{v} + r\mathbf{w}$

(3) Distributivity with respect to scalar addition: $(r + s)\mathbf{v} = r\mathbf{v} + s\mathbf{v}$

(4) Unity scalar: $1\mathbf{v} = \mathbf{v}$


Example 1:
Let $S$ be an arbitrary set and $V$ be the set of all functions from $S$ to $\mathbb{R}$. Suppose $f: S \to \mathbb{R}$ and $g: S \to \mathbb{R}$ denote two functions, $s \in S$ and $a \in \mathbb{R}$. Then, by definition,
$(f + g)(s) = f(s) + g(s)$ and
$(af)(s) = a f(s)$.
Therefore, the addition of two functions and the scalar multiplication of a function are again functions in $V$. It is easy to verify that the addition of functions and the multiplication of a function by a scalar satisfy the properties of a vector space. In particular, the zero function is the function that maps all elements of $S$ to the real number 0. Thus,
$\mathbf{0}(s) = 0, \quad \forall s \in S$

The random variables defined on a probability space $(S, \mathcal{F}, P)$ are functions on the sample space. Therefore, the set of random variables forms a vector space with respect to the addition of random variables and the scalar multiplication of a random variable by a real number.
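As a small illustration (the finite sample space, the outcome values and the particular random variables below are assumptions made for the example, not taken from the notes), the following Python sketch treats random variables on a finite sample space as functions and combines them with pointwise addition and scalar multiplication:

```python
# Random variables on a finite sample space S, viewed as functions S -> R
# and represented as dictionaries (outcome -> real value).
S = ["s1", "s2", "s3", "s4"]
X = {"s1": 1.0, "s2": -2.0, "s3": 0.5, "s4": 3.0}
Y = {"s1": 0.0, "s2": 1.0, "s3": -1.5, "s4": 2.0}

def add(f, g):
    """Pointwise addition: (f + g)(s) = f(s) + g(s)."""
    return {s: f[s] + g[s] for s in S}

def scale(a, f):
    """Scalar multiplication: (a f)(s) = a * f(s)."""
    return {s: a * f[s] for s in S}

Z = add(X, scale(2.0, Y))      # the random variable X + 2Y, again a function on S
zero = {s: 0.0 for s in S}     # the zero vector of this space

print(Z)
print(add(Z, zero) == Z)       # adding the zero function changes nothing
```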
Subspace
Suppose $W$ is a non-empty subset of $V$. $W$ is called a subspace of $V$ if $W$ is a vector space with respect to the vector addition and scalar multiplication defined on $V$.
For a non-empty subset $W$ to be a subspace of $V$, it is sufficient that $W$ is closed under the vector addition and the scalar multiplication of $V$. Thus the sufficient conditions are:
(1) $\forall \mathbf{v}, \mathbf{w} \in W,\ (\mathbf{v} + \mathbf{w}) \in W$ and
(2) $\forall \mathbf{v} \in W,\ \forall r \in \mathbb{R},\ r\mathbf{v} \in W$
Linear Independence and Basis
Consider a subset of $n$ vectors $B = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n\}$.
If $c_1\mathbf{b}_1 + c_2\mathbf{b}_2 + \cdots + c_n\mathbf{b}_n = \mathbf{0}$ implies that $c_1 = c_2 = \cdots = c_n = 0$, then $\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n$ are called linearly independent (LI).
The subset $B = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n\}$ of $n$ LI vectors is called a basis if each $\mathbf{v} \in V$ can be expressed as a linear combination of the elements of $B$. The number of elements in $B$ is called the dimension of $V$. Thus $B = \{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$ is a basis of $\mathbb{R}^3$.
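As a quick numerical illustration (not part of the original notes, and using arbitrary example vectors), linear independence of a set of vectors in $\mathbb{R}^n$ can be checked by stacking them as columns of a matrix and comparing its rank to the number of vectors:

```python
import numpy as np

b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])
b3 = np.array([1.0, 1.0, 0.0])   # b3 = b1 + b2, so {b1, b2, b3} is dependent

A = np.column_stack([b1, b2, b3])
print(np.linalg.matrix_rank(A) == A.shape[1])   # False: not linearly independent

# The standard basis {i, j, k} of R^3 is linearly independent:
I = np.eye(3)
print(np.linalg.matrix_rank(I) == 3)            # True
```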

Norm of a vector
Suppose $\mathbf{v}$ is a vector in a vector space $V$ defined over $\mathbb{R}$. The norm $\|\mathbf{v}\|$ is a scalar such that $\forall \mathbf{v}, \mathbf{w} \in V$ and $r \in \mathbb{R}$,

1. $\|\mathbf{v}\| \ge 0$
2. $\|\mathbf{v}\| = 0$ only when $\mathbf{v} = \mathbf{0}$
3. $\|r\mathbf{v}\| = |r|\,\|\mathbf{v}\|$
4. $\|\mathbf{v} + \mathbf{w}\| \le \|\mathbf{v}\| + \|\mathbf{w}\|$ (Triangle Inequality)

A vector space $V$ on which a norm is defined is called a normed vector space. For example, the following are valid norms of $\mathbf{v} = [v_1\ v_2\ \ldots\ v_n]$:
(i) $\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$
(ii) $\|\mathbf{v}\| = \max(|v_1|, |v_2|, \ldots, |v_n|)$
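As a quick numerical check (the vectors are arbitrary examples), the two norms above can be computed with NumPy, and the triangle inequality can be verified for both:

```python
import numpy as np

v = np.array([3.0, -4.0, 12.0])   # arbitrary example vector in R^3
w = np.array([1.0,  2.0, -2.0])

# (i) Euclidean norm: sqrt(v1^2 + ... + vn^2)
euclidean = np.sqrt(np.sum(v ** 2))       # same as np.linalg.norm(v)

# (ii) Max norm: max(|v1|, ..., |vn|)
max_norm = np.max(np.abs(v))              # same as np.linalg.norm(v, ord=np.inf)

print(euclidean, max_norm)                # 13.0  12.0

# Triangle inequality holds for both norms:
print(np.linalg.norm(v + w) <= np.linalg.norm(v) + np.linalg.norm(w))
print(np.linalg.norm(v + w, np.inf) <= np.linalg.norm(v, np.inf) + np.linalg.norm(w, np.inf))
```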

Inner Product
If $\mathbf{v}$ and $\mathbf{w}$ are real vectors in a vector space $V$ defined over $\mathbb{R}$, the inner product $\langle \mathbf{v}, \mathbf{w} \rangle$ is a scalar such that $\forall \mathbf{v}, \mathbf{w}, \mathbf{z} \in V$ and $r \in \mathbb{R}$,

1. $\langle \mathbf{v}, \mathbf{w} \rangle = \langle \mathbf{w}, \mathbf{v} \rangle$
2. $\langle \mathbf{v}, \mathbf{v} \rangle = \|\mathbf{v}\|^2 \ge 0$, where $\|\mathbf{v}\|$ is the norm induced by the inner product
3. $\langle \mathbf{v} + \mathbf{w}, \mathbf{z} \rangle = \langle \mathbf{v}, \mathbf{z} \rangle + \langle \mathbf{w}, \mathbf{z} \rangle$
4. $\langle r\mathbf{v}, \mathbf{w} \rangle = r\langle \mathbf{v}, \mathbf{w} \rangle$

A vector space $V$ on which an inner product is defined is called an inner product space. The following are examples of inner product spaces:
(i) $\mathbb{R}^n$ with the inner product $\langle \mathbf{v}, \mathbf{w} \rangle = \mathbf{v}^T\mathbf{w} = v_1 w_1 + v_2 w_2 + \cdots + v_n w_n$, where $\mathbf{v} = [v_1\ v_2\ \ldots\ v_n]^T$ and $\mathbf{w} = [w_1\ w_2\ \ldots\ w_n]^T$
(ii) The space $L_2(\mathbb{R})$ of square-integrable real functions with the inner product
$\langle f_1, f_2 \rangle = \int_{-\infty}^{\infty} f_1(x) f_2(x)\,dx, \quad f_1, f_2 \in L_2(\mathbb{R})$
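A short numerical illustration of these two inner product spaces is sketched below; the vectors and the square-integrable functions are arbitrary choices, and the $L_2$ integral is approximated by a Riemann sum on a truncated grid:

```python
import numpy as np

# (i) Inner product on R^n: <v, w> = v1*w1 + ... + vn*wn
v = np.array([1.0, 2.0, -1.0])
w = np.array([3.0, 0.5,  2.0])
print(np.dot(v, w))                  # 2.0

# (ii) L2 inner product <f1, f2> = integral of f1(x) f2(x) dx, approximated
# on a grid (example functions that decay fast enough for truncation).
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
f1 = np.exp(-x**2)                   # Gaussian bump
f2 = x * np.exp(-x**2)               # odd function
print(np.sum(f1 * f2) * dx)          # ~0: these two functions are orthogonal
```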

Cauchy-Schwarz Inequality

For any two vectors $\mathbf{v}$ and $\mathbf{w}$ belonging to an inner product space $V$,
$|\langle \mathbf{v}, \mathbf{w} \rangle| \le \|\mathbf{v}\|\,\|\mathbf{w}\|$

Let us consider
$\mathbf{z} = \mathbf{v} - c\mathbf{w}$; then $\|\mathbf{z}\|^2 = \langle \mathbf{v} - c\mathbf{w}, \mathbf{v} - c\mathbf{w} \rangle \ge 0$
$\Rightarrow \langle \mathbf{v}, \mathbf{v} \rangle - 2c\langle \mathbf{v}, \mathbf{w} \rangle + \langle c\mathbf{w}, c\mathbf{w} \rangle \ge 0$
$\Rightarrow \|\mathbf{v}\|^2 - 2c\langle \mathbf{v}, \mathbf{w} \rangle + c^2\|\mathbf{w}\|^2 \ge 0$

The left-hand side of the last inequality is a quadratic expression in the variable $c$. For the above quadratic expression to be non-negative for every $c$, the discriminant must be non-positive. So
$|2\langle \mathbf{v}, \mathbf{w} \rangle|^2 - 4\|\mathbf{v}\|^2\|\mathbf{w}\|^2 \le 0$
$\Rightarrow |\langle \mathbf{v}, \mathbf{w} \rangle|^2 \le \|\mathbf{v}\|^2\|\mathbf{w}\|^2$
$\Rightarrow |\langle \mathbf{v}, \mathbf{w} \rangle| \le \|\mathbf{v}\|\,\|\mathbf{w}\|$

The equality holds when
$\|\mathbf{z}\| = 0 \Leftrightarrow \mathbf{v} - c\mathbf{w} = \mathbf{0} \Leftrightarrow \mathbf{v} = c\mathbf{w}$
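A minimal numerical check of the inequality is sketched below, using randomly drawn vectors in $\mathbb{R}^5$ (an arbitrary choice of dimension); equality is attained when one vector is a scalar multiple of the other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Check |<v, w>| <= ||v|| ||w|| for many randomly drawn pairs of vectors.
for _ in range(1000):
    v = rng.normal(size=5)
    w = rng.normal(size=5)
    assert abs(np.dot(v, w)) <= np.linalg.norm(v) * np.linalg.norm(w) + 1e-12

# Equality holds when v is a scalar multiple of w (v = c w):
w = rng.normal(size=5)
v = 2.5 * w
print(abs(np.dot(v, w)), np.linalg.norm(v) * np.linalg.norm(w))  # equal values
```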

Hilbert space
Note that the inner product induces a norm that measures the size of a vector. Thus, $\|\mathbf{v} - \mathbf{w}\|$ is a measure of the distance between the vectors $\mathbf{v}, \mathbf{w} \in V$. We can define the convergence of a sequence of vectors in terms of this norm.
Consider a sequence of vectors $\{\mathbf{v}_n,\ n = 1, 2, \ldots\}$. The sequence is said to converge to a limit vector $\mathbf{v}$ if, corresponding to every $\epsilon > 0$, we can find a positive integer $N$ such that
$\|\mathbf{v} - \mathbf{v}_n\| < \epsilon$ for $n > N$.
The sequence of vectors $\{\mathbf{v}_n,\ n = 1, 2, \ldots\}$ is said to be a Cauchy sequence if
$\lim_{n, m \to \infty} \|\mathbf{v}_m - \mathbf{v}_n\| = 0$.
In analysis, we may require that the limit of a Cauchy sequence of vectors is also a member of the inner product space. Such an inner product space, where every Cauchy sequence of vectors is convergent, is known as a Hilbert space.
Orthogonal vectors
Two vectors $\mathbf{v}$ and $\mathbf{w}$ belonging to an inner product space $V$ are called orthogonal if
$\langle \mathbf{v}, \mathbf{w} \rangle = 0$
Non-zero orthogonal vectors are linearly independent, and a set of $n$ non-zero orthogonal vectors $B = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_n\}$ forms a basis of the $n$-dimensional vector space.

Orthogonal projection
It is one of the important concepts in linear algebra, widely used in random signal processing.
Suppose $W$ is a subspace of an inner product space $V$. Then the subset
$W^{\perp} = \{\mathbf{v} \mid \mathbf{v} \in V,\ \langle \mathbf{v}, \mathbf{w} \rangle = 0\ \forall \mathbf{w} \in W\}$
is called the orthogonal complement of $W$.
Any vector $\mathbf{v}$ in a Hilbert space $V$ can be expressed as
$\mathbf{v} = \mathbf{w} + \mathbf{w}_1$
where $\mathbf{w} \in W$ and $\mathbf{w}_1 \in W^{\perp}$. In such a decomposition, $\mathbf{w}$ is called the orthogonal projection of $\mathbf{v}$ on $W$ and represents the closest approximation of $\mathbf{v}$ by a vector in $W$ in the following sense:
$\|\mathbf{v} - \mathbf{w}\| = \min_{\mathbf{u} \in W} \|\mathbf{v} - \mathbf{u}\|$

We omit the proof of this result. The result can be geometrically illustrated as follows:

[Figure: orthogonal decomposition of $\mathbf{v}$ into its projection $\mathbf{w}$ on the subspace $W$ and the component $\mathbf{w}_1$ orthogonal to $W$.]
Gram-Schmidt orthogonalisation

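A minimal sketch of the classical Gram-Schmidt procedure is given below (the vectors are illustrative assumptions); it builds an orthogonal basis of a subspace $W$ and then uses that basis to compute the orthogonal projection discussed above:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn linearly independent vectors into an
    orthogonal set spanning the same subspace."""
    ortho = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for b in ortho:
            u -= (np.dot(v, b) / np.dot(b, b)) * b   # remove component along b
        ortho.append(u)
    return ortho

# Example: an orthogonal basis of the subspace W spanned by a1, a2 in R^3.
a1 = np.array([1.0, 1.0, 0.0])
a2 = np.array([1.0, 0.0, 1.0])
b1, b2 = gram_schmidt([a1, a2])
print(np.dot(b1, b2))                      # ~0: orthogonal

# Orthogonal projection of v on W = span{b1, b2}: the closest vector in W to v.
v = np.array([1.0, 2.0, 3.0])
w = (np.dot(v, b1) / np.dot(b1, b1)) * b1 + (np.dot(v, b2) / np.dot(b2, b2)) * b2
w1 = v - w
print(np.dot(w1, b1), np.dot(w1, b2))      # both ~0: w1 lies in the orthogonal complement
```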

Joint Expectation as an inner product

Interpreting the random variables $X$ and $Y$ as vectors, the joint expectation $EXY$ satisfies the properties of the inner product of the two vectors $X$ and $Y$. Thus
$\langle X, Y \rangle = EXY$
We can also define the norm of the random variable $X$ by
$\|X\|^2 = \langle X, X \rangle = EX^2$
Similarly, two random variables $X$ and $Y$ are orthogonal if $\langle X, Y \rangle = EXY = 0$.
We can easily verify that $EXY$ satisfies the axioms of the inner product.
The norm of a random variable $X$ is given by $\|X\| = \sqrt{EX^2}$.
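To illustrate (the joint model for $X$ and $Y$ below is an arbitrary assumption, not from the notes), the inner product $EXY$ and the induced norm can be estimated by sample averages:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 200_000

# Illustrative joint model: X standard normal, Y = 2X + independent noise.
X = rng.normal(size=n_samples)
Y = 2.0 * X + rng.normal(size=n_samples)

inner = np.mean(X * Y)              # sample estimate of <X, Y> = E[XY]   (~2)
norm_X = np.sqrt(np.mean(X ** 2))   # ||X|| = sqrt(E[X^2])                (~1)
norm_Y = np.sqrt(np.mean(Y ** 2))   # ||Y|| = sqrt(E[Y^2])                (~sqrt(5))

print(inner, norm_X, norm_Y)
# Cauchy-Schwarz for random variables: |E[XY]| <= sqrt(E[X^2]) sqrt(E[Y^2])
print(abs(inner) <= norm_X * norm_Y)
```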

For two $n$-dimensional random vectors $\mathbf{X} = [X_1\ X_2\ \ldots\ X_n]^T$ and $\mathbf{Y} = [Y_1\ Y_2\ \ldots\ Y_n]^T$, the inner product is
$\langle \mathbf{X}, \mathbf{Y} \rangle = E\,\mathbf{X}^T\mathbf{Y} = \sum_{i=1}^{n} E X_i Y_i$
The norm of an $n$-dimensional random vector $\mathbf{X}$ is given by
$\|\mathbf{X}\|^2 = \langle \mathbf{X}, \mathbf{X} \rangle = E\,\mathbf{X}^T\mathbf{X} = \sum_{i=1}^{n} E X_i^2$

Orthogonal Random Variables and Orthogonal Random Vectors

Two vectors $\mathbf{v}$ and $\mathbf{w}$ are called orthogonal if $\langle \mathbf{v}, \mathbf{w} \rangle = 0$.
Two random variables $X$ and $Y$ are called orthogonal if $EXY = 0$.
Similarly, two $n$-dimensional random vectors $\mathbf{X}$ and $\mathbf{Y}$ are called orthogonal if
$E\,\mathbf{X}^T\mathbf{Y} = \sum_{i=1}^{n} E X_i Y_i = 0$
Just like the independent random variables and the uncorrelated random variables, the orthogonal random variables form an important class of random variables.
If $X$ and $Y$ are uncorrelated, then
$E(X - \mu_X)(Y - \mu_Y) = 0$
$\Rightarrow (X - \mu_X)$ is orthogonal to $(Y - \mu_Y)$
If each of $X$ and $Y$ is of zero mean, then
$\mathrm{Cov}(X, Y) = EXY$
In this case, $EXY = 0 \Leftrightarrow \mathrm{Cov}(X, Y) = 0$.
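The following sketch illustrates this with an assumed zero-mean pair that is uncorrelated but not independent; the particular construction $Y = X^2 - 1$ is chosen only for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000

# Zero-mean, uncorrelated pair: X standard normal, Y = X^2 - 1.
# (Uncorrelated because E[X^3] = 0, even though Y is a function of X.)
X = rng.normal(size=n)
Y = X ** 2 - 1.0

print(np.mean(X), np.mean(Y))   # both ~0: zero mean
print(np.mean(X * Y))           # ~0: E[XY] = 0, so X and Y are orthogonal
print(np.cov(X, Y)[0, 1])       # ~0: Cov(X, Y) = 0, matching EXY = 0
```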

Você também pode gostar