From 300 B.C. to the 1800s, mathematicians believed that Euclidean geometry was the correct idealization of real physical space. Philosophers like Kant defended that our mind has an a priori conception of reality, whereas philosophers like Hume assumed that science is purely empirical, so that the laws of Euclidean geometry are not necessarily physical truths. Kant's point of view was accepted by his contemporaries, who believed that the external world is known only in relation to how our minds are forced to interpret it.
The beginning of the problem was that the Euclidean axiom of the parallels was not clear: although it seemed obvious to everybody, it lacked the certainty needed of an axiom. In trying to prove it from the other Euclidean axioms, geometers realized that they could not. And this was the beginning of the non-Euclidean geometries.
1. The space Rn and its topology
The space Rn is the n-dimensional space of vector algebra, where a point, or n-tuple, (x1, x2, . . . , xn) is a sequence of n real numbers. This is a continuous space: for any given point there exists another as close to it as we want, or, equivalently, between any two given points there are infinitely many others. With this concept we can state the local topology of the space Rn.
What we first need to define a topology is the concept of a distance function between any two points. If
x = (x1, x2, . . . , xn) and y = (y1, y2, . . . , yn)
are two points of Rn, then the distance function between them is defined as
d(x, y) = [(x1 − y1)² + (x2 − y2)² + . . . + (xn − yn)²]^(1/2).    (1)
A neighbourhood of radius r of the point x ∈ Rn is the set of points N_r(x) whose distance from x is less than r; that is, N_r(x) is the set of points contained in an open ball centred at x with radius r, so that d(x, y) < r. Now we can define more precisely the continuity of the space. A set of points of Rn is discrete if each point has a neighbourhood which contains no other points of the set; by this criterion, Rn is not discrete. A set of points S of Rn is open if every point x ∈ S has a neighbourhood entirely contained in S. Open sets can equally be built from some other distance function
d′(x, y),    (2)
which also defines neighbourhoods and open sets. The key point is that any set which is open according to d′(x, y) is also open according to d(x, y), and vice versa. The proof rests on the fact that any given d-type neighbourhood of x contains a d′-type neighbourhood entirely within it, and vice versa. Hence
∀ N_r(x), ∃ r′ | N′_r′(x) ⊂ N_r(x),
where N and N′ denote the d- and d′-type neighbourhoods, respectively. So we can conclude that if a set is open as defined by d(x, y) it is also open as defined by d′(x, y), and vice versa. We therefore say that both d and d′ induce the same topology on Rn. So although we began with the usual Euclidean distance function d(x, y), the topology we have defined is in fact largely independent of the form of d. It is called the natural topology of Rn. Topology is a more primitive concept than distance: we do not care about the actual distance between two points, since there are many different possible definitions; all we need is the notion that the distance between points can be made arbitrarily small and that no two distinct points have zero distance between them.
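The equivalence of the topologies can be illustrated numerically. Since the explicit form of d′ in Eq. (2) is not given above, the sketch below uses the max-distance as a stand-in alternative distance (an assumption, not the text's d′); the mutual bounds d′ ≤ d ≤ √n · d′ are exactly what make every d-ball contain a d′-ball and vice versa.

```python
import math
import random

def d(x, y):
    # Euclidean distance, Eq. (1)
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

def d_alt(x, y):
    # hypothetical alternative distance (max metric), standing in for d' of Eq. (2)
    return max(abs(xi - yi) for xi, yi in zip(x, y))

n = 3
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(n)]
    y = [random.uniform(-1, 1) for _ in range(n)]
    # the two distances bound each other: d_alt <= d <= sqrt(n) * d_alt,
    # so every d-ball contains a d_alt-ball and vice versa -> same topology
    assert d_alt(x, y) <= d(x, y) <= math.sqrt(n) * d_alt(x, y) + 1e-12
```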
The topology of a manifold is more general than any particular distance function, so the word neighbourhood is often used in a different sense: we will let a neighbourhood of a point x be any set containing an open set containing x.
2. Mapping
A map f from a space M to a space N is a rule which associates with each element x of M a unique element y of N; that is,
f : M → N, x ↦ y = f(x).    (3)
We remark that a map gives a unique f(x) for every x, but not necessarily a unique x for every f(x). When more than one value of x gives the same value f(x), the map is called many-to-one (not injective). More generally, if f maps M to N, then for any set S in M the elements of N mapped from the points of S form a set T called the image of S under f, denoted by f(S).
If f : M → N and S ⊂ M, then S ↦ T = f(S) ⊂ N, the image of S.
Conversely, the set S is called the inverse image of T (sometimes the Kern, from German), denoted by f⁻¹(T). If the map f is many-to-one, then the inverse image of a single point of N is not a single point of M, so there is no map f⁻¹ from N to M, since every map must have a unique image. So in general f⁻¹(T) must be read as a single symbol: it is not the image of T under a map f⁻¹, but simply a set called f⁻¹(T). On the other hand, if every point in f(S) has a unique inverse image point in S, then f is said to be one-to-one (1-1) and there does exist another 1-1 map f⁻¹, called the inverse of f, which maps the image of M back to M.
For example, f(x) = sin x is many-to-one, since f(x) = f(x + 2nπ) = f[(2n + 1)π − x] ∀ n ∈ Z. Therefore, a true inverse function does not exist. The usual inverse function, arcsin y or sin⁻¹ y, is obtained by restricting the original sine function to the principal values, −π/2 ≤ x ≤ π/2, on which it is indeed 1-1 and invertible.
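A minimal numerical sketch of this point: `math.asin` always returns the principal value, so it recovers x only when x already lies on the principal branch.

```python
import math

x = 2.5                    # a point outside the principal branch
y = math.sin(x)

# asin returns the principal value in [-pi/2, pi/2], not the original x:
x_back = math.asin(y)
assert abs(math.sin(x_back) - y) < 1e-12   # same image point...
assert abs(x_back - x) > 0.1               # ...but not the original preimage

# On the principal branch the restricted sine is 1-1 and asin is a true inverse:
x0 = 0.7
assert abs(math.asin(math.sin(x0)) - x0) < 1e-12
```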
If we have two maps f and g, f : M → N and g : N → P, then there is a map called the composition of f and g, denoted by g ∘ f, which maps from M to P (g ∘ f : M → P).
The composition takes a point x ∈ M, finds the point f(x) ∈ N, and uses g to map it to P: (g ∘ f)(x) = g(f(x)).
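The rule (g ∘ f)(x) = g(f(x)) can be sketched directly; the maps f and g below are hypothetical real-valued examples, not from the text.

```python
def f(x):
    return x + 1              # f : M -> N (hypothetical example map)

def g(y):
    return y * y              # g : N -> P (hypothetical example map)

def compose(g, f):
    # (g o f)(x) = g(f(x)): apply f first, then g
    return lambda x: g(f(x))

gf = compose(g, f)            # g o f : M -> P
assert gf(2) == g(f(2)) == 9
```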
If a map is defined at every point of M, then we say that it is a mapping from M into N. If, in addition, every point of N has an inverse image (not necessarily a unique one), we say it is a mapping from M onto N. As we said, if the inverse image is unique, the map is 1-1. A map which is both 1-1 and onto is called a bijection.
A map f : M → N is continuous at x ∈ M if any open set of N containing f(x) contains the image of an open set of M. [This presupposes that M and N are topological spaces.] More generally, f is continuous on M if it is continuous at all the points of M. In calculus notation we say that f is continuous at a point x0 if
∀ ε > 0, ∃ δ > 0 | |f(x) − f(x0)| < ε when |x − x0| < δ.
In terms of open sets, taking the distance function d(x, x0) = |x − x0|, we say that f is continuous at x0 if every neighbourhood of f(x0) contains the image of a neighbourhood of x0. [Remember that these neighbourhoods are open sets.]
A differentiable map f : Rn → Rn is given by n functions yi = fi(x1, . . . , xn), or
y = f(x).
If the functions {fi, i = 1, . . . , n} are all C^k-differentiable, then the map is said to be C^k-differentiable. The Jacobian matrix of a C^1 map is the matrix of partial derivatives ∂fi/∂xj. The determinant of this matrix is simply called the Jacobian, J, and is denoted by
J = ∂(f1, . . . , fn)/∂(x1, . . . , xn) =
| ∂f1/∂x1  · · ·  ∂f1/∂xn |
|    .      .        .     |    (4)
| ∂fn/∂x1  · · ·  ∂fn/∂xn |
If the Jacobian at a point x is nonzero, then the inverse function theorem assures
us that the map f is 1-1 and onto in some neighbourhood of x.
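As a worked example (the polar map is my choice here, not the text's), take f : (r, θ) ↦ (r cos θ, r sin θ). Its Jacobian determinant is r, so the inverse function theorem guarantees local invertibility wherever r ≠ 0, and gives no guarantee at r = 0.

```python
import math

def jacobian_det(r, theta):
    # Jacobian matrix [[dx/dr, dx/dtheta], [dy/dr, dy/dtheta]] of
    # the polar map (r, theta) -> (r cos(theta), r sin(theta))
    a, b = math.cos(theta), -r * math.sin(theta)
    c, d = math.sin(theta),  r * math.cos(theta)
    return a * d - b * c

r, theta = 2.0, 0.3
assert abs(jacobian_det(r, theta) - r) < 1e-12   # J = r analytically
assert jacobian_det(0.0, theta) == 0.0           # no local invertibility guarantee at r = 0
```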
If a function g(x1, . . . , xn) is mapped into a function g′(y1, . . . , yn) by the rule
g′[f1(x1, . . . , xn), . . . , fn(x1, . . . , xn)] = g(x1, . . . , xn)
(that is, g′ has the same value at f(x) as g has at x), then the integral of g over M equals the integral of g′ |J|⁻¹ over N:
∫_M g(x1, . . . , xn) dx1 . . . dxn = ∫_N g′(y1, . . . , yn) |J|⁻¹ dy1 . . . dyn.
Since g and g′ have the same value at corresponding points, the volume element dx1 . . . dxn has changed to |J|⁻¹ dy1 . . . dyn under the coordinate change y = f(x).
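A one-dimensional numerical check of this change of variables, under an assumed toy map y = f(x) = 2x from M = [0, 1] to N = [0, 2], with g(x) = x² and hence g′(y) = (y/2)². Here J = dy/dx = 2, and both sides should equal 1/3.

```python
def riemann(h, a, b, n=100000):
    # midpoint-rule approximation of the integral of h over [a, b]
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

J = 2.0                                                 # dy/dx for y = 2x
lhs = riemann(lambda x: x ** 2, 0.0, 1.0)               # integral of g over M
rhs = riemann(lambda y: (y / 2) ** 2 / J, 0.0, 2.0)     # integral of g' * J^(-1) over N
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - 1 / 3) < 1e-6
```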
3. Real analysis
A real function of a single variable, f (x), is said to be analytic at x = x0 if it
has a Taylor expansion about x0 which converges to f (x) in some neighbourhood
of x0 , and we write
f(x) = f(x0) + df/dx|x0 (x − x0) + (1/2) d²f/dx²|x0 (x − x0)² + (1/3!) d³f/dx³|x0 (x − x0)³ + . . .
So functions which are not C^∞ at x0 are not analytic there. Likewise, there are C^∞ functions which are not analytic: for example exp(−1/x²), whose value and all of whose derivatives are zero at x = 0, but which is not identically zero in any neighbourhood of x = 0. However, nonanalytic functions can often be well approximated by analytic ones in the following sense. Let g(x1, . . . , xn) be a real-valued function defined on an open region S of Rn; then it is said to be square-integrable if the multiple integral
∫_S [g(x1, . . . , xn)]² dx1 . . . dxn    (5)
converges.
An operator maps functions into functions. Consider, for example, a differential operator D, D(f) = ∂f/∂x, and an integral operator G, which maps f into the value of some integral of f. In each case the operator may or may not be defined on all functions f: D may not be defined on a function which is not C¹, while G is undefined on functions which give unbounded integrals. Specifying the set of functions on which an operator is allowed to act in fact forms part of the definition of the operator; this set is called its domain.
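The flatness of exp(−1/x²) at the origin can be seen numerically: it vanishes faster than any power of x as x → 0 (so every Taylor coefficient at 0 is zero), yet the function is not identically zero.

```python
import math

def f(x):
    # exp(-1/x^2), extended by f(0) = 0: C-infinity but not analytic at 0
    return math.exp(-1.0 / x ** 2) if x != 0 else 0.0

x = 0.05
for k in range(1, 11):
    # f(x) / x^k is tiny for every power k: f is "flatter" than any polynomial at 0
    assert f(x) / x ** k < 1e-100
assert f(0.5) > 0          # but f is not identically zero near 0
```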
The commutator of two operators A and B, denoted [A, B], is another operator defined as
[A, B](f) = (AB − BA)(f) = A[B(f)] − B[A(f)].    (6)
The commutator only makes sense when acting on a function. If two operators have vanishing commutator, they are said to commute. One has to be careful about the domains of the operators: the domain of [A, B] may not be as large as that of either A or B. For example, if A = d/dx and B = x d/dx, then we may take both their domains to be all C¹ functions. But not for every C¹ function f will the successive operation A[B(f)] be defined, since it involves second derivatives. The operators AB and BA can be given all C² functions as domains, which is a smaller set than the C¹ functions. Then the commutator [A, B] has only C² functions in its domain. We can enlarge the domain (extending the operator) in this case, though not always, by observing that for any C² function f
[A, B](f) = d/dx ( x df/dx ) − x d/dx ( df/dx ) = df/dx,
so we can identify [A, B] simply with d/dx and thereby extend its domain to
all C 1 functions. The point is that the commutator may be defined even on
functions on which the products in the commutator are not.
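The identity [A, B] = d/dx for A = d/dx and B = x d/dx can be checked exactly on polynomials, represented here as coefficient lists (coeffs[i] multiplies x^i); this representation is my sketch, not the text's.

```python
def D(c):
    # d/dx on a polynomial given as a coefficient list
    return [i * c[i] for i in range(1, len(c))] or [0.0]

def X(c):
    # multiplication by x
    return [0.0] + list(c)

def B(c):
    # the operator x d/dx
    return X(D(c))

def commutator(c):
    # [A, B](f) = A(B(f)) - B(A(f)) with A = D
    ab, ba = D(B(c)), B(D(c))
    m = max(len(ab), len(ba))
    ab += [0.0] * (m - len(ab))
    ba += [0.0] * (m - len(ba))
    return [p - q for p, q in zip(ab, ba)]

f = [1.0, 2.0, 0.0, 5.0]        # f(x) = 1 + 2x + 5x^3
assert commutator(f) == D(f)    # [A, B] acts as d/dx, as derived above
```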
4. Group theory
A collection of elements G together with a binary operation ◦ is a group if it satisfies the axioms:
(Gi) Associativity: if x, y, z ∈ G, then
x ◦ (y ◦ z) = (x ◦ y) ◦ z.
(Gii) Right identity: G contains an element e such that, ∀ x ∈ G,
x ◦ e = x.
(Giii) Right inverse: ∀ x ∈ G there is an element x⁻¹ ∈ G for which
x ◦ x⁻¹ = e.
A group is said to be Abelian or commutative if in addition it satisfies
(Giv) x ◦ y = y ◦ x ∀ x, y ∈ G.
A familiar example is the group of all permutations of n objects; the binary composition of two permutations is simply the permutation obtained by following one permutation by the other. The group has n! elements. Its identity element is the permutation which leaves all objects fixed.
From (Gi) to (Giii) we can conclude that the identity element e is unique and is also a left identity (e ◦ x = x), and that the inverse element x⁻¹ is unique for any x and is also a left inverse (x⁻¹ ◦ x = e). As usual, it is common to omit the symbol ◦ when there is no risk of confusion.
The most important kind of group in modern physics is the Lie group. It is a continuous group: any open set of the elements of a Lie group has a 1-1 map onto an open set of Rn for some n. An example of a Lie group is the translation group of Rn (x → x + a, a = const). Each point a of Rn corresponds to an element of the group, so the group has in fact a 1-1 map onto all of Rn. The group composition law is simply addition: two elements a = (a1, . . . , an) and b = (b1, . . . , bn) compose to form c = a + b = (a1 + b1, . . . , an + bn). This illustrates the fact that one need not always use the symbol ◦ to represent the group operation; with Abelian groups, as this one is, it is more common to use the symbol +. A subgroup S of a group G is a collection of elements of G which themselves form a group with the same binary operation, denoted by S ⊂ G. As a group, a subgroup must have an identity element. Since the group's identity e is unique, any subgroup must also contain e. For example,
a subgroup of the permutation group could be the permutations of n objects which do not change the position of the first object, because (i) the identity e leaves the first object fixed; (ii) the inverse of such a permutation still leaves the first object fixed; and (iii) the composition of any two such permutations still leaves the first object fixed. This subgroup is identical to the group of permutations of n − 1 objects. This statement, that a certain subgroup of the permutation group is identical to the group of permutations of n − 1 objects, is an example
of a group isomorphism. Two groups G1 and G2, with binary operations ◦ and ∗ respectively, are isomorphic (which means identical in their group properties) if there is a 1-1 map f of G1 onto G2 which respects the group operations:
f(x ◦ y) = f(x) ∗ f(y).    (7)
The isomorphism f of our example: an element of the subgroup of the n-permutation group which permutes only the last n − 1 objects is mapped to the same permutation in the (n − 1)-permutation group. Another example: let G1 = (R+, ×) and G2 = (R, +). [G1 is a group with identity e = 1, inverse x⁻¹ = 1/x for every x ∈ R+, and associative multiplication, x × (y × z) = (x × y) × z.] Then if x ∈ G1,
f(x) = log x defines a map f : G1 → G2 which satisfies (7):
log(xy) = log x + log y.
The two groups are isomorphic and f is an isomorphism.
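The isomorphism property (7) for f = log can be verified numerically over many random pairs of positive reals:

```python
import math
import random

# f(x) = log x maps G1 = (R+, x) to G2 = (R, +) and respects the operations, Eq. (7)
random.seed(1)
for _ in range(100):
    x = random.uniform(0.1, 10.0)
    y = random.uniform(0.1, 10.0)
    assert abs(math.log(x * y) - (math.log(x) + math.log(y))) < 1e-12

# the identities correspond: log 1 = 0
assert math.log(1.0) == 0.0
```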
Another relation between groups is the group homomorphism. It is like an isomorphism except that the map may be many-to-one and may be into; condition (7) must still be satisfied.
5. Linear algebra
A vector space (over R) is a set V with a binary operation + under which it is an Abelian group, together with a multiplication by real numbers, (V, +, ·), which satisfies the axioms: let x̄, ȳ ∈ V and a, b ∈ R; then
(Vi) a · (x̄ + ȳ) = (a · x̄) + (a · ȳ),
(Vii) (a + b) · x̄ = (a · x̄) + (b · x̄),
(Viii) (ab) · x̄ = a · (b · x̄),
(Viv) 1 · x̄ = x̄.
The identity element of V is called 0 or 0̄. Some examples of vector spaces are:
(i) The set of all n × n matrices, where + means adding corresponding entries and · means multiplying each entry by the real number.
(ii) The set of all continuous functions f(x) defined on the interval a ≤ x ≤ b.
A linear combination of vectors is an expression like
a x̄ + b ȳ + c z̄,    (8)
where x̄, ȳ, z̄ ∈ V and a, b, c ∈ R. A set of elements {x̄1, x̄2, . . . , x̄m} of V is said to be linearly dependent if it is possible to find real numbers {a1, a2, . . . , am}, not all zero, for which
a1 x̄1 + a2 x̄2 + . . . + am x̄m = 0.    (9)
If no such numbers exist, the set is linearly independent.
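For a square system, linear independence of n vectors in Rn is equivalent to a nonzero determinant; the vectors below are hypothetical examples chosen to illustrate both cases.

```python
def det3(m):
    # determinant of a 3x3 matrix given as three row-tuples
    (a, b, c), (d, e, f), (g, h, k) = m
    return a * (e * k - f * h) - b * (d * k - f * g) + c * (d * h - e * g)

independent = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]
dependent   = [(1, 2, 3), (2, 4, 6), (0, 1, 0)]   # second row = 2 * first row

assert det3(independent) != 0   # only a1 = a2 = a3 = 0 solves Eq. (9)
assert det3(dependent) == 0     # a nontrivial combination sums to zero
```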
The set is a maximal linearly independent set if including any other vector of V in it would make it linearly dependent. This means that any other vector ȳ in V can be expressed as a linear combination
ȳ = Σ_{i=1}^{n} ai x̄i,    (10)
and the numbers {ai, i = 1, . . . , n} are called the components of ȳ on the basis {x̄i}. As a complement, let us play the theorem-proof game.
Theorem 2. If V is a vector space over a field F, then a basis of V is a maximal linearly independent set in V.
Proof 2. Let B = (x̄1, . . . , x̄m) be a basis of V. Then any vector ȳ ∈ V with ȳ ∉ B can be expressed as a linear combination of the vectors of the basis B, i.e.,
ȳ = a1 x̄1 + . . . + am x̄m,  ai ∈ F,
so adjoining ȳ to B produces a linearly dependent set; hence B is maximal.
A norm on V is a function n which assigns a real number n(x̄) to every vector x̄, satisfying:
(Ni) n(x̄) ≥ 0 ∀ x̄ ∈ V, and n(x̄) = 0 ⇔ x̄ = 0;
(Nii) n(a x̄) = |a| n(x̄) ∀ a ∈ R, x̄ ∈ V;
(Niii) n(x̄ + ȳ) ≤ n(x̄) + n(ȳ) ∀ x̄, ȳ ∈ V.
Indeed, n is just a function, and many functions can satisfy these axioms. For example, consider Rn as a vector space, where vector addition is defined by
x + y = (x1 + y1, . . . , xn + yn),    (11)
and multiplication by real numbers by
a x = (a x1, . . . , a xn).    (12)
Then we can define a norm as the distance of a vector from the origin, as we did in section 1:
n(x) = [(x1)² + (x2)² + . . . + (xn)²]^(1/2).    (13)
Other norms are possible as well; for instance,
n′(x) = [λ1 (x1)² + . . . + λn (xn)²]^(1/2),  λ1, . . . , λn > 0.    (14)
An inner product x̄ · ȳ assigns a real number to every pair of vectors, and is linear in each argument:
(a x̄) · ȳ = a (x̄ · ȳ),    (15)
(x̄ + ȳ) · z̄ = (x̄ · z̄) + (ȳ · z̄),    (16)
or, combining the two,
z̄ · (a x̄ + b ȳ) = a (z̄ · x̄) + b (z̄ · ȳ).    (17)
Symmetry means that
x̄ · ȳ = ȳ · x̄  (commutativity),    (18)
and nondegeneracy that
x̄ · x̄ = 0 ⇔ x̄ = 0.    (19)
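The norm axioms (Ni)-(Niii) can be checked numerically for the Euclidean norm of Eq. (13) over random vectors and scalars:

```python
import math
import random

def n(x):
    # Euclidean norm, Eq. (13)
    return math.sqrt(sum(xi * xi for xi in x))

random.seed(2)
for _ in range(200):
    x = [random.uniform(-5, 5) for _ in range(4)]
    y = [random.uniform(-5, 5) for _ in range(4)]
    a = random.uniform(-3, 3)
    assert n(x) >= 0                                                     # (Ni)
    assert abs(n([a * xi for xi in x]) - abs(a) * n(x)) < 1e-9           # (Nii)
    assert n([xi + yi for xi, yi in zip(x, y)]) <= n(x) + n(y) + 1e-12   # (Niii)
assert n([0.0] * 4) == 0.0
```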
A linear transformation T maps V into itself,
T : V → V, x̄ ↦ T(x̄),
where linearity means
T(a x̄ + b ȳ) = a T(x̄) + b T(ȳ).    (20)
Given a basis (ē1, ē2, . . . , ēn) for V, we can express any vector as a linear combination of these vectors,
x̄ = Σ_{i=1}^{n} ai ēi,    (21)
so that by linearity
T(x̄) = Σ_{i=1}^{n} ai T(ēi) = Σ_{i=1}^{n} Σ_{j=1}^{n} ai Tij ēj,    (22)
where T(ēi) = Σ_{j=1}^{n} Tij ēj. The numbers Tij are called the components of T, and can be represented as a square n × n matrix.
Another important algebraic result is that
Σ_{i=1}^{n} ai ( Σ_{j=1}^{m} Bij cj ) = Σ_{j=1}^{m} cj ( Σ_{i=1}^{n} Bij ai ),    (23)
which means that the order in which the sums are performed makes no difference. Therefore, we can write
Σ_{i=1}^{n} Σ_{j=1}^{m} ai Bij cj  or just  Σ_{i,j} ai Bij cj,    (24)
the sum being simply the sum of the various products over all possible combinations of indices.
Two linear transformations T and U acting on the space V produce the composite transformation U ∘ T:
U ∘ T(x̄) = U(T(x̄)) = U( Σ_{ij} ai Tij ēj ) = Σ_{ijk} ai Tij Ujk ēk = Σ_{ik} ai ( Σ_{j} Tij Ujk ) ēk,
so the composite transformation has the components
(T U)ik = Σ_{j} Tij Ujk.    (25)
This is the rule for matrix multiplication: the product BA of two matrices has components
(BA)ik = Σ_{j} Bij Ajk = Σ_{j} Ajk Bij.    (26)
Comparing (25) and (26) we see that the order of the factors is important: the matrix product is, in general, not commutative.
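A minimal sketch of the component rule (26) and of non-commutativity, using two small hypothetical matrices:

```python
def matmul(A, B):
    # (AB)_ik = sum_j A_ij B_jk, as in Eq. (26)
    n = len(A)
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(n)]
            for i in range(n)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
assert matmul(A, B) == [[2, 1], [1, 1]]
assert matmul(B, A) == [[1, 1], [1, 2]]
assert matmul(A, B) != matmul(B, A)      # the order of the factors matters
```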
The transpose A^T of a matrix A has elements
(A^T)ij = Aji,    (27)
i.e., rows and columns are interchanged. [If A is complex we define the adjoint A† of A by (A†)ij = Āji, where the bar denotes complex conjugation.] The unit matrix, I, has ones on the main diagonal and zeros elsewhere:
Iij = δij = { 1, i = j;  0, i ≠ j },    (28)
where δij is the Kronecker delta symbol. The identity transformation is the one which maps any vector x̄ into itself. The inverse A⁻¹ of a matrix A is a matrix such that
A⁻¹A = AA⁻¹ = I.    (29)
Not every matrix has an inverse; for example, the zero matrix does not. When an inverse exists it is unique. Of course, A is the inverse of A⁻¹. When A⁻¹ exists, A is said to be nonsingular (otherwise it is singular). The set of all nonsingular n × n matrices forms a group under matrix multiplication. The group identity is I. This is a very important Lie group called GL(n, R) (the general linear group).
The determinant of a 2 × 2 matrix
A = | a b |
    | c d |
is called det A, and is defined as
det(A) = ad − bc.    (30)
For the 3 × 3 matrix
A = | a b c |
    | d e f |    (31)
    | g h k |
the cofactor of a is ek − fh, while that of f is bg − ah. In general, the determinant of an n × n matrix A is defined as
det(A) = Σ_{j=1}^{n} aij a^{ij},  for a fixed i,    (32)
where a^{ij} is the cofactor of the element aij.
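The cofactor expansion of Eq. (32) can be sketched recursively; the sign (−1)^(i+j) carried by each cofactor is what makes the expansion work along any fixed row.

```python
def det(A):
    # determinant by cofactor expansion along the first row, Eq. (32)
    n = len(A)
    if n == 1:
        return A[0][0]
    i = 0  # the fixed row
    total = 0
    for j in range(n):
        # minor: delete row i and column j, then multiply by the signed cofactor
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** (i + j) * A[i][j] * det(minor)
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
assert det(A) == -3
assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3   # matches Eq. (30)
```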