Você está na página 1de 102

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011

Chapter 1

Topology
We start by defining a topological space.

A topological space is a set S together with a collection O of


subsets called open sets such that the following are true:
i) the empty set and S are open, , S O
ii) the intersection of a finite number of open sets is open; if
U1 , U2 O, then U1 U2 O
iii) S
the union of any number of open sets is open, if Ui O, then
Ui O irrespective of the range of i.
2
i

It is the pair {S, O} which is, precisely speaking, a topological


space, or a space with topology. But it is common to refer to S as
a topological space which has been given a topology by specifying
O.
Example: S = R, the real line, with the open sets being open
intervals ]a, b[ , i.e. the sets {x R | a < x < b} and their unions,
plus and R itself. Then (i) above is true by definition.
For two such open sets U1 = ]a1 , b1 [ and U2 = ]a2 , b2 [ , we can
suppose a1 < a2 . Then if b1 6 a2 , the intersection U1 U2 = O .
Otherwise U1 U2 = ]a2 , b1 [ which is an open interval and thus
U1 U2 O. So (ii) is true.
And (iii) is also true by definition.
2
Similarly Rn can be given a topology via open rectangles, i.e.
via the sets {(x1 , , xn ) Rn | ai < xi < bi }. This is called the
standard or usual topology of Rn .

The trivial topology on S consists of O = {, S}.


2
1

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 1. Topology

The discrete topology on a set S is defined by O = {A | A S},


i.e., O consists of all subsets of S.
2

A set A is closed if its complement in S, also written S\A or


as A{ , is open.
2
Closed rectangles in Rn are closed sets as are closed balls and
single point sets.
A set can be neither open nor closed, or both open and closed.
In a discrete topology, every set A S is both open and closed,
whereas in a trivial topology, any set A 6= or S is neither open nor
closed.
The collection C of closed sets in a topological space S satisfy
the following:
i) the empty set and S are open, , S C
ii) the union of a finite number of open sets is open; if A1 , A2 C ,
then A1 A2 C
iii) the intersection
of any number of open sets is open, if Ai C ,
T
then Ai C irrespective of the range of i.
i

Closed sets can also be used to define a topology. Given a set S


with a collection C of subsets satisfying the above three properties of
closed sets, we can always define a topology, since the complements
of closed sets are open. (Exercise!)

An open neighbourhood of a point P in a topological space


S is an open set containing P . A neighbourhood of P is a set
containing an open neighbourhood of P . Neighbourhoods can be
defined for sets as well in a similar fashion.
2
Examples: For a point x R, and for any  > 0,
]x , x + [ is an open neighbourhood of x,
[x , x + [ is a neighbourhood of x,
{x  6 y < } is a neighbourhood of x,
[x, x + [ is not a neighbourhood of x.
2

A topological space is Hausdorff if two distinct points have


disjoint neighbourhoods.
2
Topology is useful to us in defining continuity of maps.

A map f : S1 S2 is continuous if given any open set U S2


its inverse image (or pre-image, what it is an image of) f 1 (U ) S1
is open.
2
m
n
When this definition is applied to functions from R to R , it is

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


3
the same as the usual  definition of continuity, which says that

f : Rm Rn is continuous at x0 if given  > 0, we can always


find a > 0 such that |f (x) f (x0 )| <  whenever |x x0 | < . 2
For the case of functions from a topological space to Rn , this
definition says that

f : S Rn is continuous at s0 S if given  > 0, we can


always find an open neighbourhoood U of s0 such that |f (s)f (s0 )| <
 whenever s U.
2

If a map f : S1 S2 is one-to-one and onto, i.e. a bijection,


and both f and f 1 are continuous, f is called a homeomorphism
and we say that S1 and cs2 are homeomorphic.
2
Proposition: The composition of two continuous maps is a continuous map.
Proof: If f : S1 S2 and g : S2 S3 are continuous maps, and
U is some open set in S3 , then its pre-image g 1 (U ) is open in S2 .
So f 1 (g 1 (U )), which is the pre-image of that, is open in S1 . Thus
(g f )1 (U ) = f 1 (g 1 (U )) is open in S1 . Thus g f is continuous.
2

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 2

Manifolds
Now that we have the notions of open sets and continuity, we are
ready to define the fundamental object that will hold our attention
during this course.

A manifold is a topological space which is locally like Rn . 2


That is, every point of a manifold has an open neighbourhood
with a one-to-one map onto some open set of Rn .

More precisely, a topological space M is a smooth ndimensional manifold if the following are true:
i) We can cover the space with open sets U , i.e. every point of
M lies within some U .
ii) a map : U Rn , where is one-to-one and onto
some open set of Rn . is continuous, 1 is continuous, i.e.
V Rn is a homeomorphism for V .
(U , ) is called a chart (U is called the domain of the
chart). The collection of charts is called an atlas.
iii) In any intersection U U , the maps 1 , which are
called transition functions and take open sets of Rn to open
sets of Rn , i.e. 1 : (U U ) (U U ), are
smooth maps.
2

n is called the dimension of M.


2
We have defined smooth manifolds. A more general definition is
that of a C k manifold, in which the transition functions are C k , i.e. k
times differentiable. Smooth means k is large enough for the purpose
at hand. In practice, k is taken to be as large as necessary, up to
C . We get real analytic manifolds when the transition functions
4

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


5
are real analytic, i.e. have a Taylor expansion at each point, which
converges. Smoothness of a manifold is useful because then we can
say unambiguously if a function on the manifold is smooth as we will
see below.

A complex analytic manifold is defined similarly by replacing Rn with Cn and assuming the transitions functions 1 to
be holomorphic (complex analytic).
2

Given a chart (U , ) for a neighbourhood of some point P, the


image (x1 , , xn ) Rn of P is called the coordinates of P in the
chart (U , ). A chart is also called a local coordinate system.2
In this language, a manifold is a space on which a local coordinate system can be defined, and coordinate transformations between
different local coordinate systems are smooth. Often we will suppress
U and write only for a chart around some point in a manifold. We
will always mean a smooth manifold when we mention a manifold.
Examples: Rn (with the usual topology) is a manifold.
2
The typical example of a manifold is the sphere. Consider the
sphere S n as a subset of Rn+1 :
(x1 )2 + + (xn+1 )2 = 1

(2.1)

It is not possible to cover the sphere by a single chart, but it is


possible to do so by two charts.1
For the two charts, we will construct what is called the stereographic projection. It is most convenient to draw this for a circle
in the plane, i.e. S 1 in R2 , for which the equatorial plane is simply
an infinite straight line. Of course the construction works for any
S n . consider the equatorial plane defined as the x1 = 0, i.e. the
set {(0, x2 , , xn+1 )}, which is simply Rn when we ignore the first
zero. We will find homeomorphisms from open sets on S n to open
sets on this Rn . Let us start with the north pole N , defined as the
point (1, 0, , 0).
We draw a straight line from N to any point on the sphere. If
that point is in the upper hemisphere (x1 > 0) the line is extended till
it hits the equatorial plane. The point where it hits the plane is the
1

The reason that it is not possible to cover the sphere with a single chart
is that the sphere is a compact space, and the image of a compact space under a continuous map is compact. Since Rn is non-compact, there cannot be a
homeomorphism between S n and Rn .

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 2. Manifolds

image of the point on the sphere which the line has passed through.
For points on the lower hemisphere, the line first passes through the
equatorial plane (image point) before reaching the sphere (source
point). Then using similarity of triangles we find (Exercise!) that
the coordinates on the equatorial plane Rn of the image of a point
on S n \{N } is given by



x2
xn+1
1 2
n+1
N : x , x , , x
7
.
(2.2)
, ,
1 x1
1 x1
Similarly, the stereographic projection from the south pole is
S : S n \{S} Rn ,
1

n+1

x ,x , ,x


7

x2
xn+1
,

,
1 + x1
1 + x1


.

(2.3)

If we write

z=

xn+1
x2
,

,
1 x1
1 x1


,

(2.4)

we find that

2
 n+1 2
x2
x
1 (x1 )2
1 + x1
2
|z|
+

+
=
=
(2.5)
1 x1
1 x1
(1 x1 )2
1 x1
The overlap between the two sets is the sphere without the poles.
Then the transition function between the two projections is
S N : Rn \{0} Rn \{0},

z 7

z
.
|z|2

(2.6)

These are differentiable functions of z in Rn \{0}. This shows that


the sphere is an n-dimensional differentiable manifold.
2

A Lie group is a group G which is also a smooth (real analytic


for the cases we will consider) manifold such that group composition
written as a map (x, y) 7 xy 1 is smooth.
2
Another way of defining a Lie group is to start with an nparameter continuous group G which is a group that can be
parametrized by n (and only n) real continuous variables. n is called
the dimension of the group, n = dim G. (This is a different definition of the dimension. The parameters are global, but do not in
general form a global coordinate system.)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


7
Then any element of the group can be written as g(a) where
a = (a1 , , an ) . Since the composition of two elements of G must
be another element of G, we can write g(a)g(b) = g((a, b)) where
= (1 , , n ) are n functions of a and b. Then for a Lie group,
the functions are smooth (real analytic) functions of a and b.
These definitions of a Lie group are equivalent, i.e. define the
same objects, if we are talking about finite dimensional Lie groups.
Further, it is sufficient to define them as smooth manifolds if we are
interested only in finite dimensions, because all such groups are also
real analytic manifolds. Apparently there is another definition of
a Lie group as a topological group (like n-parameter continuous
group, but without an a priori restriction on n, in which the composition map (x, y) 7 xy 1 is continuous) in which it is always possible
to find an open neighbourhood of the identity which does not contain
a subgroup.
Any of these definitions makes a Lie group a smooth manifold,
an n-dimensional Lie group is an n-dimensional manifold.
2
The phase space of N particles is a 6N -dimensional manifold, 3N
coordinates and 3N momenta.
2
The M
obius strip is a 2-dimensional manifold.
2
The space of functions with some specified properties is often a manifold. For example, linear combinations of solutions of
Schr
odinger equation which vanish outside some region form a manifold.
2
Finite dimensional vector spaces are manifolds.
2
Infinite dimensional vector spaces with finite norm (e.g. Hilbert
spaces) are manifolds.
2

A connected manifold cannot be written as the disjoint union


of open sets. Alternatively, the only subsets of a connected manifold
which are both open and closed are and the manifold itself.
2
SO(3), the group of rotations in three dimensions, is a 3dimensional connected manifold. O(3), the group of rotations plus
reflections in three dimensions, is also a 3-dimensional manifold,
but is not connected since it can be written as the disjoint union
SO(3)PSO(3) where P is reflection.
2
L+ , the group of proper (no space reflection) orthochronous (no
time reflection) Lorentz transformations, is a 6-dimensional connected manifold. The full Lorentz group is a 6-dimensional manifold,

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 2. Manifolds

not connected.
2
Rotations in three dimensions can be represented by 3 3 real
orthogonal matrices R satisfying RT R = I. Reflection is represented
by the matrix P = I. The space of 3 3 real orthogonal matrices
is a connected manifold.
2
The space of all n real non-singular matrices is called GL(n, R).
This is an n2 -dimensional Lie group and connected manifold.
2

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 3

Tangent vectors
Vectors on a manifold are to be thought in terms tangents to the
manifold, which is a generalization of tangents to curves and surfaces, and will be defined shortly. But a tangent to a curve is like
the velocity of a particle at that point, which of course comes from
motion along the curve, which is its trajectory. And motion means
comparing things at nearby points along the trajectory. And comparing functions at nearby points leads to differentiation. So in order
to get to vectors, let us first start with the definitions of these things.

A function f : M R is differentiable at a point P M if


in a chart at P, the function f 1 : Rn R is differentiable at
(P ).
2
1
This definition does not depend on the chart. If f
is
differentiable at (P ) in a chart (U , ) at P , the f 1 is
differentiable at (P ) for any chart (U , ) because
f 1 = (f 1 ) ( 1 )

(3.1)

and the transition functions ( 1 ) are differentiable.


This should be thought of as a special case of functions from one
manifold to another. Consider two manifolds M and N of dimension
m and n, and a mapping f : M N , P 7 Q. Consider local charts
(U, ) around P and (W, ) around Q. Then f 1 is a map
from Rm Rn and represents f in these local charts.

f is differentiable at P if f 1 is differentiable at (P ).
In other words, f is differentiable at P if the coordinates y i = f i (x )
of Q are differentiable functions of the coordinates x of P .
2
1

If f is a bijection (i.e. one-to-one and onto) and f and f are


9

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


10

Chapter 3. Tangent vectors

both differentiable, we say that f is a diffeomorphism and that M


and N are diffeomorphic.
2
k
In all of these definitions, differentiable can be replaced by C or
smooth.

Two Lie groups are isomorphic if there is a diffeomorphism


between them which is also a group homomorphism.
2

A curve in a manifold M is a map of a closed interval R


to M. (This definition can be given also when M is a topological
space.)
2
We will take this interval to be I = [0, 1] R. Then a curve is a
map : I M. If (0) = P and (1) = P 0 , for some , we say that
joins P and P 0 .

A manifold M is connected (actually arcwise connected)1 if


any two points in it can be joined by a continuous curve in M. 2
As for any map, a curve is called smooth iff its image in a chart
is smooth in Rn , i.e., iff : I Rn is smooth in Rn .
Note that the definition of a curve implies that it is parametrized.
So the same collection of points in M can stand for two different
curves if they have different parametrizations.
We are now ready to define tangent vectors and the tangent space
to a manifold. There are different ways of defining tangent vectors.
i) Coordinate approach: Vectors are defined to be objects satisfying certain transformation rules under a change of chart, i.e.
coordinate transformation, (U , ) (U , ).
ii) Derivation approach: A vector is defined as a derivation of functions on the manifold. This is thinking of a vector as defining
a directional derivative.
iii) Curves approach: A vector tangent to a manifold is tangent to
a curve on the manifold.
The approaches are equivalent in the sense that they end up defining the same objects and the same space. We will follow the third
approach, or perhaps a mix of the second and the third approaches.
Later we will briefly look at the derivation approach more carefully
and compare it with the way we have defined tangent vectors.
Consider a smooth function f : M R. Given a curve :
I M, the map f : I R is well-defined, with a well-defined
1

It can be shown that an arcwise connected space is connected.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


11
df
derivative. The rate of change of f along is written as
.
dt
Suppose another curve another curve (s) meets (t) at some
point P , where s = s0 and t = t0 , such that


d
d


(f ) = (f )
f C (M)
(3.2)
dt
ds
P
P
That is, we are considering a situation where two curves are tangent
to each other in geometric and parametric sense. Let us introduce a
convenient notation. In any chart containing the point P, let us
write (P ) = (x1 , , xn ). Let us write f = (f 1 ) ( ),
so that the maps are
f 1 : Rn R,
n

: I R ,

x 7 f (x) or f (xi )
i

t 7 {x ((t))}.

(3.3)
(3.4)

The last are the coordinates of the curve in Rn .


Using the chain rule for differentiation, we find
d
f dxi ((t))
d
(f ) = f (x((t))) =
.
dt
dt
xi
dt
Similarly, for the curve we find

(3.5)

d
f dxi ((s))
d
(f ) = f (x((s))) =
.
(3.6)
ds
ds
xi
ds
Since f is arbitrary, we can say that two curves , have the same
tangent vector at the point P M (where t = t0 and s = s0 ) iff


dxi ((t))
dxi ((s))
=
.
(3.7)


dt
ds
t=t0
s=s0
We can say that these numbers completely determine the rate of
change of any function along the curve or at P. So we can define
the tangent to the curve.

The tangent vector to a curve at a point P on it is defined


as the map
d
(f )|P .
(3.8)
dt
As we have already seen, in a chart with coordinates {xi } we can
write using chain rule

dxi ((t)) f
P (f ) =
(3.9)
dt
xi (P )
P : C (M) R,

f 7 P (f )

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


12

Chapter 3. Tangent vectors



dxi ((t))
The numbers
are thus the components of P . We will

dt
(P )
often write a tangent vector at P as vP without referring to the curve
it is tangent to.
We note here that there is another description of tangent vectors
based on curves. Let us write if and are tangent to each
other at the point P . It is easy to see, using Eq. (3.7) for example,
that this relation is transitive, reflexive, and symmetric. In other
words, is an equivalence relation, for which the equivalence class
[] contains all curves tangent to (as well as to one another) at P .

A tangent vector at P M is an equivalence class of curves


under the above equivalence relation.
2
The earlier definition is related to this by saying that if a vector
vP is tangent to some curve at P , i.e. if vP = P , we can write
vP = [].

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 4

Tangent Space

The set of all tangent vectors (to all curves) at some point P
M is the tangent space TP M at P.
2
Proposition: TP M is a vector space with the same dimensionality n as the manifold M.
Proof: We need to show that TP M is a vector space, i.e.
XP + YP TP M ,

(4.1)

aXP TP M ,

(4.2)

XP , YP TP M, a R.
That is, given curves , passing through P such that XP =
P , YP = P , we need a curve passing through P such that
P (f ) = XP (f ) + YP (f )f C (M).
Define : I Rn in some chart around P by = +
(P ). Then is a curve in Rn , and
=:I M

(4.3)

is a curve with the desired property.


2
Note: we cannot define = + P because addition does not
make sense on the right hand side.
The proof of the other part works similarly. (Exercise!)
To see that TP M has n basis vectors, we consider a chart with
coordinates xi . Then take n curves k such that


k (t) = x1 (P ), , xk (P ) + t, , xn (P ) ,
(4.4)
i.e., only the k-th coordinate varies along t. So k is like the axis of
the k-th coordinate (but only in some open neighbourhood of P ).
13

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


14

Chapter 4. Tangent Space





Now denote the tangent vector to k at P by


, i.e.,
xk P




.
=
(f

)
f
=

(f
)
(4.5)

k
k

dt
xk P
P
P

This notation makes sense when we remember Eq. (3.9). Using it we


can write



k (f ) = f
f C (M).
(4.6)
xk P
P



is notation. We should understand this as


Note that
xk P





f

f=
f
(4.7)
xk P
xk
xk (P )
(P )



in a chart around P . The


are defined only when this chart
xk P
is given, but these are vectors on the manifold at P , not on Rn .
Let us now show that the tangent space at P has k |P as a basis.
Take any vector vP TP M , which is the tangent vector to some
curve at P . (We may sometimes refer to P as (0) or as t = 0.)
Then


d
vP (f ) =
(f )
(4.8)
dt
t=0




d

1
=
((f )
( )) .
(4.9)

dt
(P )
t=0

Note that : I Rn , t 7 (x1 ((t)), , xn ((t))) are the coordinates of the curve , so we can use the chain rule of differentiation
to write



d i
1
vP (f ) =
(f )
(x )
(4.10)
i
x
t=0
(P ) dt

1
vP (xi ) .
(4.11)
=
(f )
i
x
(P )
The first factor is exactly as shown in Eq. (4.7), so we can write



vP (f ) =
f v (xi )
f C (M)
(4.12)
xk P P

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


15
i.e., we can write
i

vP = vP

xk


vP TP M

(4.13)


where vPi = vP (xi ). Thus the vectors x k P span TP M . These are
to be thought of as tangents to the coordinate curves in. These
can be shown to be linearly independent as well, so x k P form a
basis of TP M and vPi are the components of vP in that basis.

oxk P are called coordinate basis vectors and the set


n The


is called the coordinate basis.


xk P
It can be shown quite easily that for any smooth (actually C 1 )
function f a vector vP defines a derivation f 7 vP (f ) , i.e., satisfies
linearity and Leibniz rule,
vP (f + g) = vP (f ) + vP (g)
vP (f g) = vP (f )g(P ) + f (P )vP (g)
1

f, g C (M) and R

(4.14)
(4.15)
(4.16)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 5

Dual space

The dual space TP M of TP M is the space of linear mappings


: TP M R.
2
We will write the action of on vP TP M as (vP ) or sometimes
as h | vP i .
Linearity of the mapping means
(uP + avP ) = (uP ) + a(vP ) ,

(5.1)

uP , vP TP M and a R .
The dual space is a vector space under the operations of vector addition and scalar multiplication defined by
a1 1 + a2 2 : vP 7 a1 1 (vP ) + a2 2 (vP ) .

(5.2)

The elements of TP M are called dual vectors, covectors,


cotangent vectors etc.
2
A dual space can be defined for any vector space V as the space of
linear mappings V R (or V C if V is a complex vector space).
Example:

Vector
column vectors
kets |i
functions

Dual vector
row vector
bras h|
linear functionals, etc.2

Given a function on a manifold f : M R , every vector at


P produces a number, vP (f ) R vP TP M . Thus f defines a
16

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


17
covector df , given by df (vP ) = vP (f ) called the differential or
2
Since vP is linear, so is df ,

gradient of f .

df (vP + awP ) = (vP + awP )(f )


= vP (f ) + awP (f )

(5.3)

vP , wP TP M, a R .
Thus df TP M .
Proposition: TP M is also n-dimensional.
Proof: Consider a chart with coordinate functions xi . Each xi
is a smooth function xi : M R . then the differentials dxi satisfy







i
i
1
i
= ji . (5.4)
=
(x ) =
x
dx
xj
xj
xj
P

(P )

The differentials dxi are covectors, as we already know. So we


have constructed n covectors in TP M . Next consider a linear combination of these covectors, = i dxi . If this vanishes, it must vanish
on every one of the basis vectors. In other words,



=0
=0
xj P



i
i dx
=0
xj P
i ji = 0 i.e. j = 0 .
(5.5)
So the dxi are linearly independent.
Finally,
given any covector , consider the covector =


i
xi P dx . Then letting this act on a coordinate basis vector, we
get



xj P







i
=

dx
xj P
xi P
xj P





i = 0j
(5.6)
j
x P
xi P j



So vanishes on all vectors, since the


form a basis. Thus
xj P
the dxi span TP M , so TP M is n-dimensional.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


18

Chapter 5. Dual space


Also, as we have just seen, any covector TP M can be written

as
= i dx


where

i =

xi


,

(5.7)

so in particular for = df , we get







f
i (df )i = df
=
xi P
xi (P )

(5.8)

This justifies the name gradient.


It is straightforward to calculate the effect of switching to another
overlapping chart, i.e. a coordinate transformation. In a new chart
0 where the coordinates are y i (and the transition functions are thus
y i (x)) we can use Eq. (5.8) to write the gradient of y i as
 i
y
i
dy =
dxj
(5.9)
xj P
This is the result of coordinate transformations on a basis of covectors.

 

Since
is the dual basis in TP M to {dxi }, in order
i
 P

 x

to be the dual basis to {dy i } we must have


for
y i P


 j 


=
(5.10)
y i P
y i P xj P
These formulae can be generalized to arbitrary bases.
Given a vector v, it is not meaningful to talk about its dual, but
given a basis {ea }, we can define its dual basis { a } by a (eb ) = ba .
We can make a change of bases by a linear transformation,
a 7 0a = Aab b ,

ea 7 e0a = (A1 )ba eb ,

(5.11)

with A a non-singular matrix, so that 0a (e0b ) = ba .


Given a 1-form we can write it in both bases,
= a a = 0a 0a = 0a Aab a ,
from which it follows that 0a = (A1 )ba b .

(5.12)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


19
Similarly, if v is a vector, we can write
v = v a ea = v 0a e0a = v 0a (A1 )ba eb ,

(5.13)

and it follows that v a = Aab v b .

Quantities which transform like a are called covariant, while


those transforming like v a are called contravariant.
2

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 6

Vector fields

Consider the (disjoint) union of tangent spaces at all points,


[
TM =
TP M .
(6.1)
P M

This is called the tangent bundle of M.


2

A vector field v chooses an element of TP M for every P , i.e.


v : P 7 v(P ) vP TP M.
2
We will often write v(f )|P = vP (f ).
Given a chart, v has components v i in the chart,



i
vP = v
,
(v i )P = vP (xi ) .
(6.2)
xi P

The vector field v is smooth if the functions v i = v(xi ) are


smooth for any chart (and thus for all charts).
2

A rule that selects a covector from TP M for each P is called a


oneform (often written as a 1form).
2

Given a smooth vector field v (actually C 1 is sufficient) we can


define an integral curve of v, which is a curve in M such that
(t)|

P = vP at every P . (One curve need not pass through all


P M.)
2
Suppose is an integral curve of a given vector field v, with
(0) = P. Then in a chart containing P , we can write
(t)

=v

d i
x ((t)) = v i (x(t)) ,
dt

(6.3)

with initial condition xi (0) = xi |P . This is a set of ordinary first order


differential equations. If v i are smooth, the theory of differential
20

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


21
equations guarantees, at least for small t (i.e. locally), the existence
of exactly one solution. The uniqueness of these solutions implies
that the integral curves of a vector field do not cross.
One use of integral curves is that they can be thought of as coordinate lines. Given a smooth vector field v such that v|P 6= 0, it
is possible to define a coordinate system {xi } in a neighbourhood

around P such that v =


.
xi

A vector field v is said to be complete if at every point P M


the integral curve the integral curve (t) of v passing through P can
be extended to all t R .
2
The tangent bundle T M is a product manifold, i.e., a point in
T M is an ordered pair (P, v) where P M and v TP M. The topological structure and differential structure are given appropriately.

The map : T M M, (P, v) 7 P (where v TP M) is called


the canonical projection (or simply projection).
2

For each P M, the pre-image 1 (P ) is TP M. It is called the


fiber over P . Then a vector field can be thought of as a section of
the tangent bundle.
2
Given a smooth vector field v, we can define an integral curve
through any point P by (t)

= v , i.e.,
d i
x ((t)) = v i ((t)) v(xi ((t))) ,
dt
(0) = P .

(6.4)
(6.5)

We could also choose (t0 ) = P.


Then in any neighbourhood U of P we also have Q , the integral
curve through Q. So we can define a map : I U M given by
(t, Q) = Q (t) where Q (t) satisfies

d i
x (Q (t)) = v(xi Q (t)) ,
dt
Q (0) = Q .

(6.6)
(6.7)

This defines a map t : U M at each t by t (Q) = (t, Q) =


Q (t) , i.e. for given t, t takes a point by a parameter distance t along
the curve Q (t). This t is called the local flow of v.
2
The local flow has the following properties:
i) 0 is the identity map of U ;

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


22

Chapter 6. Vector fields

ii) s t = s+t for all s, t, s + t U ;


iii) each flow is a diffeomorphism with t 1 = t .
The first property is obvious, while the second property follows
from the uniqueness of integral curves, i.e. of solutions to first order
differential equations. Then the integral curve passing through the
point Q (s) is the same as the integral curve passing through Q, so
that moving a parameter distance t from Q (s) finds the same point
on M as by moving a parameter distance s + t from Q (0) Q .
A vector field can also be thought of as a map from the space
of differentiable functions to itself v : C (M) C (M), f 7
v(f ) , with v(f ) : M R, P 7 vP (f ) . Often v(f ) is called the Lie
derivative of f along v and denoted v f .
The map v : f 7 v(f ) has the following properties:
v(f + g) = v(f ) + v(g)

(6.8)

v(f g) = f v(g) + v(f )g

(6.9)

f, g C (M),

The set of all (real) vector fields V (M) on a manifold M has the
structure of a (real) vector space under vector addition defined by
(u + v)(f ) = u(f ) + v(f ),

u, v V (M),

R.
(6.10)

It is possible to replace by some function on C (M). If u, v


are vector fields on M and is now a smooth function on M, define
u + v by
(u + v)P (f ) = uP (f ) + (P )vP (f )

f C (M),

P M.
(6.11)

This looks like a vector space but actually it is what is called a


module.

A ring R is a set or space with addition and multiplication


defined on it, satisfying (xy)z = x(yz) , x(y + z) = xy + xz , (x +
y)z = xz + yz , and two special elements 0 and 1, the additive and
multiplicative identity elements, 0 + x = x + 0 = x , 1x = x1 =
x . A module X is an Abelian group under addition, with scalar
multiplication by elements of a ring defined on it.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


23

A module becomes a vector space when this ring is a commu


tative division ring, i.e. when the ring multiplication is commutative, xy = yx, and an inverse exists for every element except 0.
Given a smooth function , in general 1
/ C (M), so the space
of vector fields on M is in general a module, not a vector space.
Given a vector field v, in an open neighbourhood of some P M
and in a chart, and for any f C (M) , we have



f

i
v(f ) = vP (f ) = vP
,
where vPi = vP (xi ) . (6.12)
i
x
P
P

Thus we can write


v = vi

xi

with v i = v(xi ) ,

(6.13)

as an obvious generalization of vector space expansion to the module


V (M).

The v i are now the components of the vector field v, and x


i are
now vector fields, which we will call the coordinate vector fields.
Note that this is correct only in some open neighbourhood on which a
chart can be defined. In particular, it may not be possible in general
to define the coordinate vector fields globally, i.e. everywhere on M,
and thus the components v i may not be defined globally either.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 7

Pull back and push forward


Two important concepts are those of pull back (or pull-back or pullback) and push forward (or push-forward or pushforward) of maps
between manifolds.

Given manifolds M1 , M2 , M3 and maps f : M1 M2 , g :


M2 M3 , the pullback of g under f is the map f g : M1 M3
defined by
f g = g f .

(7.1)

2 So in particular, if M1 and M2 are two manifolds with a map


f : M1 M2 and g : M2 R is a function on M2 , the pullback
of g under f is a function on M1 ,
f g = g f .

(7.2)

While this looks utterly trivial at this point, this concept will become
increasingly useful later on.

Given two manifolds M1 and M2 with a smooth map f : M1


M2 , P 7 Q the pushforward of a vector v TP M1 is a vector
f v TQ M2 defined by
f v(g) = v(g f )
for all smooth functions g : M2 R .
Thus we can write
f v(g) = v(f g) .
24

(7.3)
2

(7.4)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


25
The pushforward is linear,
f (v1 + v2 ) = f v1 + f v2

(7.5)

f (v) = f v .

(7.6)

And if M1 , M2 , M3 are manifolds with maps f : M1 M2 , g :


M2 M3 , it follows that
(g f ) = g f ,
(g f ) v = g f v

i.e.
v TP M1 .

(7.7)

Remember that we can think of a vector v as an equivalence class of


curves []. The pushforward of an equivalence class of curves is
f v = f [] = [f ]

(7.8)

Note that for this pushforward to be defined, we do not need the


original maps to be 1-1 or onto. In particular, the two manifolds may
have different dimensions.
Suppose M1 and M2 are two manifolds with dimension m and n
respectively. So in the respective tangent spaces TP M1 and TQ M2
are also of dimension m and n respectively. So for a map f : M1
M2 , P 7 Q , the pushforward f will not have an inverse if m 6= n .
Let us find the components of the pushforward f v in terms of
the components of v for any vector v. Let us in fact consider, given
charts : P 7 (x1 , , xm ) , : Q 7 (y 1 , , y n ) the pushforward
of the basis vectors.



For the basis vector x


i P , we want the pushforward f xi P ,
 

which is a vector in TQ M2 , so we can expand it in the basis



f

xi


P

 
  


= f
xi P
y Q

y i

(7.9)

In any coordinate basis, the components of a vector are given by the


action of the vector on the coordinates as in Chap. 4,
vP = vP (y )
Thus we can write
 
 



f
= f
(y )
xi P
xi P

(7.10)

(7.11)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


26

Chapter 7. Pull back and push forward

But
f v(g) = v(g f ) ,

(7.12)

so

f

xi

(y ) =
P

xi

(y f ) .

(7.13)

But y f are the coordinate functions of the map f , i.e., coordinates


around the point f (P ) = Q . So we can write y f as y (x) , which
is what we understand by this. Thus

  

 

y (x)

=
(y f ) =
.
(7.14)
f
xi P
xi P
xi P
Because we are talking about derivatives of coordinates, these are
actually done in charts around P and Q = f (P ) , so the chart maps
are hidden in this equation.

The right hand side is called the Jacobian matrix (of y (x) =
y f with respect to xi ). Note that since m and n may be unequal,
this matrix need not be invertible and a determinant may not be
defined for it.
2
For the basis vectors, we can then write





y (x)

f
=
(7.15)
xi P
xi P y f (P )
Since f is linear, we can use this to find the components of (f v)Q
for any vector vP ,
 
 

i
f vP = f vP
xi P



i
= vP f
xi P



i y (x)
= vP
(7.16)
xi P y f (P )

i y (x)

(f vP ) = vP
.
(7.17)
i
x P
Note that since f is linear, we know that the components of f v
should be linear combinations of the components of v , so we can

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


27
already guess that (f vP ) = Ai vPi for some matrix Ai . The matrix
is made of first derivatives because vectors are first derivatives.
Another example of the pushforward map is the following. Remember that tangent vectors are derivatives along curves. Suppose
vP TP M is the derivative along . Since : I M is a map,
we can consider pushforwards under , of derivatives on I . Thus for
: I M , t 7 (t) = P , and for some g : M R ,
 
d
d

g =
(g )|t=0
dt t=0
dt
= P (g)|t=0 = vP (g) ,
(7.18)
so

d
dt


= vP

(7.19)

t=0

We can use this to give another definition of integral curves.


Suppose we have a vector field v on M . Then the integral curve of v
passing through P M is a curve : t 7 (t) such that (0) = P
and
 
d

= v|(t)
(7.20)
dt t
for all t in some interval containing P .
2
Even though in order to define the pushforward of a vector v
under a map f , we do not need f to be invertible, the pushforward
of a vector field can be defined only if f is both one-to-one and onto.
If f is not one-to-one, different points P and P 0 may have the
same image, f (P ) = Q = f (P 0 ) . Then for the same vector field v we
must have
f v|Q = f (vP ) = f (vP 0 ) ,

(7.21)

which may not be true. And if f : M N is not onto, f v will be


meaningless outside some region f (M) , so f v will not be a vector
field on N .
If f is one-to-one and onto, it is a diffeomorphism, in which case
vector fields can be pushed forward, by the rule
(f v)f (P ) = f (vP ) .

(7.22)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 8

Lie brackets
A vector field v is a linear map C (M) C (M) since it is basically a derivation at each point, v : f 7 v(f ) . In other words, given
a smooth function f , v(f ) is a smooth function on M . Suppose we
consider two vector fields u , v . Then u(v(f )) is also a smooth function, linear in f . But is uv u v a vector field? To find out, we
consider
u(v(f g)) = u(f v(g) + v(f )g)
= u(f )v(g) + f u(v(g)) + u(v(f ))g + v(f )u(g) . (8.1)
We reorder the terms to write this as
uv(f g) = f uv(g) + uv(f )g + u(f )v(g) + v(f )u(g) ,

(8.2)

so Leibniz rule is not satisfied by uv . But if we also consider the


combination vu , we get
vu(f g) = f (vu(g) + vu(f )g + v(f )u(g) + u(f )v(g) .

(8.3)

Thus
(uv vu)(f g) = f (uv vu)(g) + (uv vu)(f )g ,

(8.4)

which means that the combination


[u , v] := uv vu

(8.5)

is a vector field on M , with the product uv signifying successive


operation on any smooth function on M .
28

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


29

This combination is called the commutator or Lie bracket of


the vector fields u and v .
2
In any chart around the point P M , we can write a vector
field in local coordinates
v(f ) = v i

f
,
xi

(8.6)

so that



i f
u(v(f )) = u
v
xj
xi
v i f
2f
= uj j i + uj v i j i ,
x x
x x
i
u f
2f
v(u(f )) = v j j i + uj v i j i .
x x
x x
j

(8.7)

Subtracting, we get
u(v(f )) v(u(f )) = uj

i
v i f
j u f

v
,
xj xi
xj xi

(8.8)

from which we can read off the components of the commutator,


[u , v]i = uj

i
v i
j u

v
xj
xj

(8.9)

The commutator is antisymmetric, [u , v] = [v , u] , and satisfies


the Jacobi identity
[[u , v] , w] + [[v , w] , u] + [[w , u] , v] = 0 .

(8.10)

The commutator is
 useful
 for the following reason: Once we have a

chart, we can use


as a basis for vector fields in a neighbourxi
hood.
Any set of n linearly independent vector fields may be chosen as
a basis, but they need not form a coordinate system. In a coordinate
system,



,
= 0,
(8.11)
xi xj
because partial derivatives commute. So n vector fields will form a
coordinate system only if they commute, i.e., have vanishing commutators with one another. Then the coordinate lines are the integral

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


30

Chapter 8. Lie brackets

curves of the vector fields. For analytic manifolds, this condition is


sufficient as well.
A simple example is the polar coordinate system in R2 . The unit
vectors are
er = ex cos + ey sin
e = ex sin + ey cos ,

(8.12)

and ey =
being the Cartesian coordinate basis
x
y
vectors, and
p
x
y
cos = ,
sin = ,
r = x2 + y 2
(8.13)
r
r
with ex =

Using these expressions, it is easy to show that [er , e ] 6= 0 , so


{er , e } do not form a coordinate basis.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 9

Lie algebra

An (real) algebra is a (real) vector space equipped with a bilinear operation (product) under which the algebra is closed, i.e., for
an algebra A
i) x y A

x, y A

ii) (x + y) z = x z + y z
x (y + z) = x y + x z

x, y, z A ,

, R .

If , are complex numbers and A is a complex vector space, we


get a complex algebra.
2

A Lie algebra is an algebra in which the operation is


i) antisymmetric, x y = y x , and
ii) satisfies the Jacobi identity ,
(x y) z + (y z) x + (z x) y = 0 .

(9.1)
2

The Jacobi identity is not really an identity it does not hold


for an arbitrary algebra but it must be satisfied by an algebra for
it to be called a Lie algebra.
Example:
i) The space Mn = {all n n matrices} under matrix multiplication, A B = AB . This is an associative algebra since
matrix multiplication is associative, (AB)C = A(BC) .
ii) The same space Mn of all n n matrices as above, but now
31

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


32

Chapter 9. Lie algebra


with matrix commutator as the product,
A B = [A , B] = AB BA .

(9.2)

This product is antisymmetric and satisfies Jacobi identity, so


Mn with this product is a Lie algebra.
iii) The angular momentum algebra in quantum mechanics.
If Li are the angular momentum operators with [Li , Lj ] =
iijk Lk , we can write the elements of this algebra as
(
)
X
(9.3)
L= a=
i Li |i C
i

If a =

ai Li and b = bi Li , their product is


X
X
a b [a , b] =
ai bj [Li , Lj ] = i
ijk ai bj Lk .

(9.4)

This is a Lie algebra because it [a , a] = 0 and the Jacobi identity is satisfied.


iv) The Poisson bracket algebra of a classical dynamical system consists of functions on the phase space, with the product
defined by the Poisson bracket,
f g = [f , g]P.B. .
This is a Lie algebra.
dimensional.

(9.5)

As a vector space it is infinite-

v) Vector fields on a manifold form a real Lie algebra under the


commutator bracket, since the Jacobi identity is a genuine identity, i.e. automatically satisfied, as we have seen in the previous chapter. This algebra is infinite-dimensional. (It can be
thought of as the Lie algebra of the group of diffeomorphisms,
Dif f (M)) .

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 10

Local flows
We met local flows and integral curves in Chapter 6. Given a vector
field v , write its local flow as t .

The collection t for t <  (for some  > 0, or alternatively for


t < 1) is a oneparameter group of local diffeomorphisms . 2
Consider the vector field in a neighbourhood U of a point Q
M . Since t : U M, Q 7 Q (t) is local diffeomorphism , i.e.
diffeomorphism for sufficiently small values of t , we can use t to push
forward vector fields. At some point P we have the curve t (P ) . We
push forward a vector field at t =  to t = 0 and compare with the
vector field at t = 0 .
We recall that for a map : M1 M2 the pullback of a function
f C (M2 ) is defined as
f = f : M1 R ,

(10.1)

and f C (M1 ) if is C .
The pushforward of a vector vP is defined by
vP (f ) = vP (f ) = vP ( f )
vP TP M1 ,

vP T(P ) M2 .

(10.2)
(10.3)

If is a diffeomorphism, we can define the pushforward of a vector


field v by
v(f )|(P ) = v (f )|P
i.e.

v(f )|Q = v (f )|1 Q


= v ( f )|1 Q .
33

(10.4)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


34

Chapter 10. Local flows

We can rewrite this definition in several different ways,


( v)(f ) = v (f ) 1
= (1 ) (v (f ))
= (1 ) (v ( f )) .

(10.5)

If : M1 M2 is not invertible, v is not a vector field on


M2 . If 1 exists but is not differentiable, v is not differentiable.
But there are some and some v such that v is a differentiable
vector field, even if is not invertible or 1 is not differentiable.
Then v and v are said to be related.
2
Proposition: Given a diffeomorphism : M1 M2 (say both
C manifolds) the pushforward is an isomorphism on the Lie
algebra of vector fields, i.e.
[u , v] = [ u , v] .

(10.6)

Proof:
[u , v](f ) = [u , v] (f ) 1
= u (v (f )) 1 u v ,

(10.7)

[ u , v](f ) = u ( v (f )) u v

while

= u ( v (f ) ) 1 u v


= u v (f ) 1 1 u v
= u (v (f )) 1 u v .

(10.8)

A vector field v is said to be invariant under a diffeomorphism


: M M if v = v , i.e. if (vP ) = v(P ) for all P M .
2
We can write for any f C (M)
( v) (f ) = 1

(v ( f ))

(( v) (f )) = v ( f ) ,
v = v .

(10.9)

So if v is an invariant vector field, we can write


v = v .

(10.10)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


35
This expresses invariance under , and is satisfied by all differential
operators invariant under .
Consider a vector field u , and the local flow (or one-parameter
diffeomorphism group) t corresponding to u ,
t (Q) = Q (t) ,

Q (t) = u(Q (t)) .

(10.11)

But for any f C (M) ,



d
f Q (t)
dt
d
(f t (Q))
=
dt

d

(t (f )) = uQ (t) (f ) u(f )
=
dt
Q (t)

Q (f ) =

(10.12)

At t = 0 we get the equation




d


(t (f ))
= u(f )
dt
t=0
Q

(10.13)

We can also write


d
( f ) (Q) = u(f ) (t (Q)) = t u(f )(Q) .
dt t

(10.14)

This formula can be used to solve linear partial differential equations


of the form
n
X

f (x, t) =
(10.15)
v i (x) i f (x, t)
t
x
i=1

with initial condition f (x, 0) = g(x) and everything smooth. This is


an equation on Rn+1 , so it can be on a chart for a manifold as well.
We can treat v i (x) as components of a vector field v . Then a
solution to this equation is
f (x, t) = t g(x)
g (t (x)) g t (x) ,

(10.16)

where t is the flow of v .


Proof:

d
f
f (x, t) =
(t g) = v(f ) v i i ,
t
dt
x

(10.17)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


36

Chapter 10. Local flows

using Eq. (10.13) .


2
Thus the partial differential equation can be solved by finding the
integral curves of v (the flow of v) and then by pushing (also called
dragging) g along those curves. It can be shown, using well-known
theorems about the uniqueness of solutions to first order partial differential equations, that this solution is also unique.
Example: Consider the equation in 2+1 dimensions



f
f
(10.18)
f (x, t) = (x y)

t
x y
with initial condition f (x, 0) = x2 + y 2 . The corresponding vector
field is v(x) = (x y, x + y) . The integral curve passing through
the point P = (x0 , y0 ) is given by the coordinates
(t) = (vx (P )t + x0 , vy (P )t + y0 ) ,

(10.19)

so the integral curve passing through (x, y) in our example is given


by
(t) = ((x y)t + x, (x + y)t + y)

(10.20)

= t (x, y) ,
the flow of v. So the solution is
f (x, t) = t f (x, 0) = f (x, 0) t (x, y)
= [(x y)t + x]2 + [(x + y)t + y]2
= (x y)2 t2 + x2 + 2(x y)xt + (x y)2 t2 + y 2 2(x y)yt
= 2(x y)2 t2 + (x2 + y 2 )(1 + 2t) 4xyt .

(10.21)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 11

Lie derivative
Given some diffeomorphism , we have Eq. (10.5) for pushforwards
and pullbacks,

( v)f = 1 v ( (f )) .
(11.1)
We will apply this to the flow t of a vector field u , defined by

d

(t f )
= u(f ) .
(11.2)
dt
t=0
Q
Applying this at t , we get


t v(f ) = 1
v t (f )
t

= t v t (f ) ,
(11.3)
where we have used the relation 1
t = t . Let us differentiate this
equation with t ,


d
d

(t v)(f )
= t v t (f )
(11.4)
dt
dt
t=0
t=0
On the right hand side, t acts linearly on vectors and v acts linearly
on functions, so we can imagine At = t v as a kind of linear operator
acting on the function ft = t f . Then the right hand side is of
the form







d
d
d
At ft
=
At ft
+ At ft
dt
dt
dt t=0
t=0


 t=0




d
d

+ At
=
t v ft
t (f )
dt
dt
t=0
t=0




= u (v(f ))
v (u(f ))
t=0
t=0

= [u , v](f ) .
(11.5)
t=0

37

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


38

Chapter 11. Lie derivative

The things in the numerator are numbers, so they can be compared


at different points, unlike vectors which may be compared only on
the same space. We can also write this as
lim

t0

t vt (P ) vP
t

= [u , v] .

(11.6)

This has the look of a derivative, and it can be shown to have


the properties of a derivation on the module of vector fields, appropriately defined. So the Lie bracket is also called the Lie derivative,
and written as
u v = [u , v] .
(11.7)
The derivation on functions by a vector field u : C (M)
C (M) , f 7 u(f ) , can be defined similarly as
t f f
.
t0
t

u(f ) = lim

(11.8)

So this can also be called the Lie derivative of f with respect


to u , and written as u f .
2
Then it is easy to see that
u (f g) = (u f ) g + f (u g) ,
and

u (f + ag) = u f + au g .

(11.9)

So u is a derivation on the space C (M) . Also,


u (v + aw) = u v + au w ,
and

u (f v) = (u f ) v + f u v

f C (M)
(11.10)
.

So u is a derivation on the module of vector fields. Also, using


Jacobi identity, we see that
u (v w) = (u v) w + v (u w) ,

(11.11)

where v w = [v , w] , so u is a derivation on the Lie algebra of


vector fields.
Lie derivatives are useful in physics because they describe invariances. For functions, u f = 0 means t f = f , so the function does
not change along the flow of u . So the flow of u preserves f , or leaves
f invariant.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


39
If there are two vector fields u and v which leave f invariant,
u f = 0 = v f . But we know from the Eq. (11.8), which defines
the Lie derivative of a function that
u+av f = u f + av = 0
and

[u , v ]f = [u ,v] f = 0 .

a R
(11.12)

So the vector fields which preserve f form a Lie algebra.


Similarly, a vector field is invariant under a diffeomorphism if
v = v , as mentioned earlier. Using the flow of u , we find that a
vector field v is invariant under the flow of u if
t v = v

u v = v .

(11.13)

So if a vector field w is invariant under the flows of u and v , i.e. if


u w = 0 = v w , we find that
0 = u v w v u w = [u ,v] w .

(11.14)

Thus again the vector fields leaving w invariant form a Lie algebra.

Let us also define the corresponding operations for 1-forms. As


we mentioned in Chap. 6, a 1form is a section of the cotangent
bundle

T M =

TP M .

(11.15)

Alternatively, a 1-form is a smooth linear map from the space of


vector fields on M to the space of smooth functions on M ,
: v 7 (v) C (M),

(u + av) = (U ) + a(v) . (11.16)

A 1-form is a rule that (smoothly) selects a cotangent vector at each


point.
2

Given a smooth map M1 M2 (say a diffeomorphism, for


convenience), the pullback is defined by
( ) (v) = ( ) .

(11.17)

We have already seen the gradient 1-form for a function f :


M R , which is a linear map from the space of vector fields to
functions,
df (u + av) = u(f ) + av(f ) ,
(11.18)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


40

Chapter 11. Lie derivative

and which can be written as


df =

f i
dx
xi

(11.19)

in some chart.
2
For an arbitrary 1-form , we can write in a chart and for any
vector field v ,
= i dxi ,

v = vi

,
xi

(v) = i v i .

(11.20)

All the components i , v i are smooth functions, so is I v i . The


space of 1-forms is a module. Since the function (v) is chartindependent, we can find the components i0 of in a new chart
by noting that
0
(v) = i v i = 0i v i .
(11.21)
Note that the notation is somewhat ambiguous here i0 also runs
from 1 to n , and the prime actually distinguished the chart, or the
coordinate system, rather than the index i .
If the components of v in the new chart are related to those in
0
0
the old one by v i = Aij v j , it follows that
0

i0 Aij v j = j v j

i0 Aij = j

(11.22)

Since coordinate transformations are invertible, we can multiply both


sides of the last equation by A1 and write
j
(11.23)
i0 = A1 i0 j .
0

For coordinate transformations from a chart {xi } to a chart {x0i } ,


0
j
xj
x0i
,
A1 i0 =
(11.24)
j
x
x0i0
0
x0i j
xj
0
so
vi =
v ,
i0 =
j . (11.25)
j
x
x0i0
We can define the Lie derivative of a 1-form very conveniently by
going to a chart, and treating the components of 1-forms and vector
fields as functions,
0

Aij =




u (v) = u i v i = uj j i v i
x
i
v i
= uj j v i + uj i j .
x
x

(11.26)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


41
But we want to define things such that
u (v) = (u ) (v) + (u v) .

(11.27)

We already know the left hand side of this equation from Eq. (11.26),
and the right hand side can be calculated in a chart as
(u ) (v) + (u v) = (u )i v i + i (u v)i
= (u )i v i + i [u , v]i


i
i
i
j v
j u
= (u )i v + i u
v
. (11.28)
xj
xj
Equating the right hand side of this with the right hand side of
Eq. (11.26), we can write
(u )i = uj

uj
i
+

.
j
xj
xi

(11.29)

These are the components of u in a given chart {xi } .


For the sake of convenience, let us write down the Lie derivatives of the coordinate basis vector fields and basis 1-forms. The
coordinate basis vector corresponding to the i-th coordinate is

v j = ij .
xi
Putting this into the formula for Lie derivatives, we get
v=

= [u , v]j j
xi
x


j
j

k v
k u
= u
v
k
k
xj
x
x


uj

= 0 ik k
xj
x
 j
u

=
.
i
x xj

(11.30)

(11.31)

Similarly, the 1-form corresponding to the i-th basis coordinate is



dxi = ji dxj ,
i.e.
dxi j = ji .
(11.32)
Using this in the formula Eq. (11.29) we get
u dxi = ki

ui j
uk j
dx
=
dx .
xj
xj

(11.33)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


42

Chapter 11. Lie derivative

There is also a geometric description of the Lie derivative of 1forms,


i
1h
u |P = lim
t | (P ) P
t
t0 t

d
=
.
(11.34)
dt t P
We will not discuss this in detail, but only mention that it leads to
the same Leibniz rule as in Eq. (11.27), and the same description in
terms of components as in Eq. (11.29).

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 12

Tensors
So far, we have defined tangent vectors, cotangent vectors, and also
vector fields and 1-forms. We will now define tensors. We will do
this by starting with the example of a specific type of tensor.

A (1, 2) tensor AP at P M is a map


AP : TP M TP M TP M R

(12.1)
2

which is linear in every argument.


So given two vectors uP , vP and a covector P ,
AP : (uP , vP , P ) 7 AP (uP , vP ; P ) R .

(12.2)

Suppose {ea }, {a } are bases for TP M, TP M . Write


Acab = AP (ea , eb ; c ) .

(12.3)

Then for arbitrary vectors uP = ua ea , vP = v a ea , and covector P =


a a we get using linearity of the tensor map,


AP (uP , vP ; P ) = AP ua ea , v b eb ; c c
= ua v b c Acab .

(12.4)

It is a matter of convention whether A as written above should


be called a (1, 2) tensor or a (2, 1) tensor, and the convention varies
between books. So it is best to specify the tensor by writing indices
as there is no confusion about Acab .
A tensor of type (p, q) can be defined in the same way,

Ap,q
: TP M TP M TP M TP M
P
|
{z
}
|
{z
}
q times
p times

43

R
(12.5)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


44

Chapter 12. Tensors

in such a way that the map is linear in every argument.

Alternatively, AP is an element of the tensor product space

AP TP M TP M TP M TP M
|
{z
}
|
{z
}
p times
q times

(12.6)

We can define the components of this tensor in the same way that
we did for the (1, 2) tensor. Then a (p, q) tensor has components
which can be written as
a a
Ab11bqp .

Some special types of (p, q) tensors have special names. A (1, 0)


tensor is a linear map AP : TP M R , so it is a tangent vector. A
(0, 1) tensor is a cotangent vector. A (p, 0) tensor has components
with p upper indices. It is called a contravariant ptensor. A
(0, q) tensor has components with q lower indices. It is called a
covariant qtensor.
2
It is possible to add tensors of the same type, but not of different
types,
a a
a a
a a
(12.7)
Ab11bqp + Bb11bqp = (A + B)b11bqp .
A tensor field is a rule giving a tensor at each point.
2
We can now define the Lie derivative of a tensor field by using
Leibniz rule in a chart. Let us first consider the components of a
tensor field in a chart. For a (1, 2) tensor field A , the components
in a chart are

Akij = A( i , j ; dxk ) .
(12.8)
x x
The components are functions of x in a chart. Thus we can write
this tensor field as

A = Akij dxi dxj

,
xk

(12.9)

where the indicates a product, in the sense that its action on two
vectors and a 1-form is a product of the respective components,



i
j
dx dx k (u, v; ) = ui v j k .
(12.10)
x
Thus we find, in agreement with the earlier definition,
A(u, v; ) = Akij ui v j k .

(12.11)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


45
0

Under a change of charts, i.e. coordinate system xi x0i , the


components of the tensor field change according to
A = Akij dxi dxj
Since

0
0
0
= Aki0 j 0 dx0i dx0j 0k0
xk
x

(12.12)

x0i i

xi
dx
,
=
xi
x0i0
x0i0 xi
0
(i and i are not equal in general), we get
0

dx0i =

Akij dxi dxj

(12.13)

xk

x0i i x0j
0
dx
dxj 0k0 k .
= Aki0 j 0
i
j
k
x
x
x
x x
(12.14)

Equating components, we can write


0

Akij
0

x0i x0j xk
xi xj x0k0
0
xi xj x0k
.
x0i0 x0j 0 xk

0
Aki0 j 0

Aki0 j 0 = Akij

(12.15)
(12.16)

f
From now on, we will use the notation i for
and i f for
xi
xi
unless there is a possibility of confusion. This will save some space
and make the formulae more readable.
We can calculate the Lie derivative of a tensor field (with respect
to a vector field u, say) by using the fact that u is a derivative on
the modules of vector fields and 1-forms, and by assuming Leibniz
rule for tensor products. Consider a tensor field
mn
T = Tab
m n dxa dxb .

(12.17)

Then
mn
u T = (u Tab
) m n dxa dxb
mn
+Tab
(u m ) n dxa dxb +
mn
+Tab
m n (u dxa ) dxb , +

(12.18)
where the dots stand for the terms involving all the remaining upper and lower indices. Since the components of a tensor field are
functions on the manifold, we have
mn
mn
u Tab
= ui i Tab
,

(12.19)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


46

Chapter 12. Tensors

and we also know that


u m =

ui
i ,
xm

u dxa =

ua i
dx .
xi

(12.20)

Putting these into the expression for the Lie derivative for T and
relabeling the dummy indices, we find the components of the Lie
derivative,
i
mn
(u T )mn
ab = u i Tab
in
mi
Tab
i um Tab
i un
mn
mn
a ui + + Tai
b ui .
+ Tib

(12.21)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 13

Differential forms
There is a special class of tensor fields, which is so useful as to have
a separate treatment. There are called differential pforms or
pforms for short.

A pform is a (0, p) tensor which is completely antisymmetric,


i.e., given vector fields v1 , , vp ,
(v1 , , vi , , vj , , vp ) = (v1 , , vj , , vi , , vp )
(13.1)
for any pair i, j .
2

A 0-form is defined to be a function, i.e. an element of C (M) ,


and a 1-form is as defined earlier.
The antisymmetry of any p-form implies that it will give a nonzero result only when the p vectors are linearly independent. On the
other hand, no more than n vectors can be linearly independent in
an n-dimensional manifold. So p 6 n .
Consider a 2-form A . Given any two vector fields v1 , v2 , we have
A(v1 , v2 ) = A(v2 , v1 ) . Then the components of A in a chart are
Aij = A (i , j ) = Aji .

(13.2)

Similarly, for a p-form , the components are i1 ip , and components are multiplied by (1) whenever any two indices are interchanged.
 
n
It follows that a p-form has
independent components in np
dimensions.
Any 1-form produces a function when acting on a vector field. So
given a pair of 1-forms A, B, it is possible to construct a 2-form
47

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


48

Chapter 13. Differential forms

by defining
(u, v) = A(u)B(v) B(u)A(v),

u, v .

(13.3)

This is usually written as = A B B A , where is called


the outer product.
2

Then the above construction defines a product written as


= A B = B A ,

(13.4)

and called the wedge product . Clearly, is a 2-form.


2
Let us work in a coordinate basis, but the results we find can be
generalized to any basis. The coordinate bases for the vector fields,
{i } , and 1-forms, {dxi } , satisfy dxi (j ) = ji . A 1-form A can be
written as A = Ai dxi , and a vector field v can be written as v = v i i ,
so that A(v) = Ai v i . Then for the defined above and for any pair
of vector fields u, v,
(u, v) = A(u)B(v) B(u)A(v)
= Ai ui Bj v j Bi ui Aj v j
= (Ai Bj Bi Aj ) ui v j .

(13.5)

The components of are ij = (i , j ) , so that


(u, v) = (ui i , v j j ) = ij ui v j .

(13.6)

Then ij = Ai Bj Bi Aj for the 2-form defined above. We can now


construct a basis for 2-forms, which we write as dxi dxj ,
dxi dxj = dxi dxj dxj dxi .

(13.7)

Then a 2-form can be expanded in this basis as


=

1
ij dxi dxj ,
2!

(13.8)

because then

1
ij dxi dxj dxj dxi (u, v)
2!

1
= ij ui v j uj v i = ij ui v j .
2!

(u, v) =

(13.9)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


49
Similarly, a basis for pforms is
dxi1 dxip = dx[i1 dxip ] ,

(13.10)

where the square brackets stand for total antisymmetrization: all


even permutations of the indices are added and all the odd permutations are subtracted. (Caution: some books define the square
brackets as antisymmetrization with a factor 1/p! .) For example,
for a 3-form, a basis is
dxi dxj dxk = dxi dxj dxk dxj dxi dxk
+dxj dxk dxi dxk dxj dxi
+dxk dxi dxj dxi dxk dxj .

(13.11)

Then an arbitrary 3-form can be written as


=

1
ijk dxi dxj dxk .
3!

(13.12)

Note that there is a sum over indices, so that the factorial goes away if
we write each basis 3-form up to permutations, i.e. treating different
permutations as equivalent. Thus a pform can be written in
terms of its components as
=

1
i i dxi1 dxip .
p! 1 p

(13.13)

Examples: A 2-form in two dimensions can be written as


1
ij dxi dxj
2!

1
=
12 dx1 dx2 + 21 dx2 dx1
2!
1
(12 21 ) dx1 dx2
=
2!
= 12 dx1 dx2 .

(13.14)
2

A 2-form in three dimensions can be written as


1
ij dxi dxj
2!
= 12 dx1 dx2 + 23 dx2 dx3 + 31 dx3 dx1 (13.15)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


50

Chapter 13. Differential forms

2
In three dimensions, consider two 1-forms = i dxi , = i dxi .
Then
1
=
(i j j i ) dxi dxj
2!
=
i j dxi dxj
=

(1 2 2 1 ) dx1 dx2
+ (2 3 3 2 ) dx2 dx3
+ (3 1 1 3 ) dx3 dx1 .

(13.16)

The components are like the cross product of vectors in three dimensions. So we can think of the wedge product as a generalization of
the cross product.

We can also define the wedge product of a pform and a


qform as a (p + q)form satisfying, for any p + q vector fields
v1 , , vp+q ,
1 X
(v1 , , vp+q ) =
(1)deg P (P (v1 , , vp+q )) .
p!q!
P
(13.17)
Here P stands for a permutation of the vector fields, and deg P is 0 or
1 for even and odd permutations, respectively. In the outer product
on the right hand side, acts on the first p vector fields in a given
permutation P , and acts on the remaining q vector fields.
2
The wedge product above can also be defined in terms of the
components of and in a chart as follows.
1
=
i i dxi1 dxip
p! 1 p
1
j j dxj1 dxjq
=
q! 1 q


1
=
i1 ip j1 jq dxi1 dxip dxj1 dxjq .
p!q!
(13.18)
Note that = 0 if p + q > n , and that a term in which some i
is equal to some j must vanish because of the antisymmetry of the
wedge product.
It can be shown by explicit calculation that wedge products are
associative,
( ) = ( ) .
(13.19)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


51
Cross-products are not associative, so there is a distinction between
cross-products and wedge products. In fact, for 1-forms in three
dimensions, the above equation is analogous to the identity for the
triple product of vectors,
a (b c) = (a b) c .

(13.20)

For a p-form and q-form , we find


= (1)pq .

(13.21)

Proof: Consider the wedge product written in terms of the components. We can ignore the parentheses separating the basis forms
since the wedge product is associative. Then we exchange the basis
1-forms. One exchange gives a factor of 1 ,
dxip dxj1 = dxj1 dxip .

(13.22)

Continuing this process, we get


dxi1 dxip dxj1 dxjq
= (1)p dxj1 dxi1 dxip dxj2 dxjq
=
= (1)pq dxj1 dxjq dxi1 dxip . (13.23)
Putting back the components, we find
= (1)pq

(13.24)

as wanted.
2

The wedge product defines an algebra on the space of differential


forms. It is called a graded commutative algebra .
2

Given a vector field v , we can define its contraction with a


p-form by
v = (v, )
(13.25)
with p 1 empty slots. This is a (p 1)-form. Note that the position
of v only affects the sign of the contracted form.
2
Example: Consider a 2-form made of the wedge product of two
1-forms, = = . Then contraction by v gives
v = (v, ) = (v) (v) = ( , v) .

(13.26)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


52

Chapter 13. Differential forms

If we have a p-form = p!1 i1 ip dxi1 dxip , its contraction


with a vector field v = v i i is
1
ii i v i dxi2 dxip .
(13.27)
v =
(p 1)! 2 p
1
, we
Note the sum over indices. To see how the factor becomes (p1)!
write the contraction as

1
v = i1 ip dxi1 dxip v i i .
(13.28)
p!
Since the contraction is done in the first slot, so we consider the
action of each basis 1-form dxik on i by carrying dxik to the first
position and then writing a iik . This gives a factor of (1) for each
exchange, but we get the same factor by rearranging the indices of
, thus getting a +1 for each index. This leads to an overall factor
of p .

given a diffeomorphism : M1 M2 , the pullback of a 1form (on M2 ) is , defined by

(v) = ( v)

(13.29)

for any vector field v on M1 .


2

i
i
Then we can consider the pullback dx of a basis 1-form dx .
For a general 1-form = i dxi , we have = (i dxi ) . But
(v) = ( v) = i dxi ( v) .

(13.30)

Now, dxi ( v) = dxi (v) and the thing on the right hand side is a
function on M1 , so we can write this as
(v) = ( i ) dxi (v) ,

(13.31)

where i are now functions on M1 , i.e.


( i )|P = i |(P )

(13.32)

So we can write = ( i ) dxi . For the wedge product of two


1-forms,
( )(u, v) = ( )( u , v)
= ( u , v) ( u , v)
= ( u)( v) ( u)( v)
= (u) (v) (u) (v)
= ( )(u , v) .

(13.33)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


53
Since u, v are arbitrary vector fields it follows that
( ) =
(dxi dxj ) = dxi dxj .

(13.34)

Since the wedge product is associative, we can write (by assuming


an obvious generalization of the above formula)





dxi dxj dxk = dxi dxj dxk

= dxi dxj dxk
= dxi dxj dxk ,

(13.35)

and we can continue this for any number of basis 1-forms. So for any
p-form , let us define the pullback by
(v1 , , vp ) = ( v1 , , vp ) ,

(13.36)

and in terms of components, by


=


1
i1 ip dxi1 dxip .
p!

(13.37)

We assumed above that the pullback of the wedge product of a


2-form and a 1-form is the wedge product of the pullbacks of the
respective forms, but it is not necessary to make that assumption
it can be shown explicitly by taking three vector fields and following
the arguments used earlier for the wedge product of two 1-forms.
Then for any p-form and q-form we can calculate from this
that
( ) = .
(13.38)
Thus pullbacks commute with (are distributive over) wedge products.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 14

Exterior derivative
The exterior derivative is a generalization of the gradient of a function. It is a map from p-forms to (p + 1)-forms. This should be a
derivation, so it should be linear,
p-forms , .

d( + ) = d + d

(14.1)

This should also satisfy Leibniz rule, but the algebra of p-forms is
not a commutative algebra but a graded commutator algebra, i.e.,
involves a factor of (1)pq for exchanges. So we need
d( ) = d + (1)pq d ,

(14.2)

d( ) = d + (1)p d .

(14.3)

or alternatively,

This will be the Leibniz rule for wedge products. Note that it gives
the correct result when one or both of , are 0-forms, i.e., functions.
The two formulas are identical by virtue of the fact that d is a
(q + 1)-form, so that
d = (1)p(q+1) d .

(14.4)

We will try to define the exterior derivative in a way such that it has
these properties.
Let us define the exterior derivative of a p-form in a chart as
d =

1
i i1 ip dxi1 dxip
p!
54

(14.5)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


55
This clearly has the first property of linearity. To check the (graded)
Leibniz rule, let us write in components. Then

1
i i1 ip j1 jq dxi dxi1 dxjq
p!q!


1 
=
i i1 ip j1 jq + i1 ip i j1 jq dxi dxi1 dxjq
p!q!

1
=
i i1 ip j1 jq dxi dxi1 dxip dxj1 dxjq
p!q!

1
+
(1)p i1 ip i j1 jq dxi1 dxip dxi dxj1 dxjq
p!q!
= d + (1)p d .
(14.6)

d( ) =

A third property of the exterior derivative immediately follows


from here,
d2 = 0 .
(14.7)
To see this, we write

1
d i i1 ip dxi dxi1 dxip
p!
1
=
j i i1 ip dxj dxi dxi1 dxip .
p!

d(d) =

(14.8)

But the wedge product is antisymmetric, dxj dxi = dxi dxj ,


and the indices are summed over, so the above object must be antisymmetric in j , i . But that vanishes. So d2 = 0 on all forms.
Note that we can also write
d =


1
di1 ip dxi1 dxip ,
p!

(14.9)

where the object in parentheses is a gradient 1-form corresponding


to the gradient of the component.
Consider a 1-form A = A dx where A are smooth functions on
M . Then using this definition we can write
dA = (dA ) dx

(dA)

= A dx dx
1
= ( A A ) dx dx
2
= A A .

(14.10)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


56

Chapter 14. Exterior derivative

We can generalize this result to write for a p-form,


1
dx1 dxp
(14.11)
p! 1 p

1
d =
1 p dx1 dxp
p!
1
=

dx dx1 dxp
(p + 1)! [ 1 p ]
(d)1 p = [ 1 p ]
(14.12)
=

Example: For p = 1 i.e. for a 1-form A we get from this formula


(dA) = A A , in agreement with our previous calculation.
For p = 2 we have a 2-form, call it . Then using this formula
we get
(d) = [ ]
= + + .
(14.13)
Note that d is not defined on arbitrary tensors, but only on forms.
2
By definition, d2 = 0 on any p-form. So if = d , it follows that
d = 0 . But given a p-form for which d = 0 , can we say that
there must be some (p 1)-form such that = d ?

This is a good place to introduce some terminology. Any form


such that d = 0 is called closed, whereas any form such that
= d is called exact.
2
So every exact form is closed. Is every closed form exact? The
answer is yes, in a sufficiently small neighbourhood. We say that
every closed form is locally exact. Note that if a p-form = d , we
cannot uniquely specify the (p 1)-form since for any (p 2)-form
, we can always write = d 0 , where 0 = + d .
Thus a more precise statement is that given any p-form such
that d = 0 in a neighbourhood of some point P , there is some
neighbourhood of this point and some (p1)-form such that = d
in that neighbourhood. But this may not be true globally. This
statement is known as the Poincare lemma.
2
Example: In R2 remove the origin. Consider the 1-form
=

xdy ydx
.
x2 + y 2

(14.14)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


57
Then



2x2
1
2y 2
1

dx dy

dy dx
d =
x2 + y 2 (x2 + y 2 )2
x2 + y 2 (x2 + y 2 )2
2
x2 + y 2
= 2
dx dy = 0 .
(14.15)
dx

dy

2
x + y2
(x2 + y 2 )2


Introduce polar coordinates r, with x = r cos , y = r sin .


Then
dx = dr cos r sin d

dy = dr sin + r cos d

r cos (sin dr + r cos d) r sin (cos dr r sin d)

r2 
r2
2
2
2
r cos + sin d
=
= d .
(14.16)
r2

Thus is exact, but is multivalued so there is no function f


such that = df everywhere. In other words, = d is exact only
in a neighbourhood small enough that remains single-valued.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 15

Volume form
 
n
The space of p-forms in n dimensions is
dimensional. So the
p
space of n-forms in n dimensions is 1-dimensiona, i.e., there is only
one independent component, and all n-forms are scalar multiples of
one another.
Choose an n-form field. Call it . Suppose 6= 0 at some point
P . Then given any basis {e } of TP M , we have (e1 , en ) 6= 0
since 6= 0 . Thus all vector bases at P fall into two classes, one for
which (e1 , en ) > 0 and the other for which it is < 0 .
Once we have identified these two classes, they are independent of
. That is, if 0 is another n-form which is non-zero at P , there must
be some function f 6= 0 such that 0 = f . Two bases which gave
positive numbers under will give the same sign both positive or
both negative under 0 and therefore will be in the same class.

So every basis (set of n linearly independent vectors) is a member of one of the two classes. These are called righthanded and
lefthanded.
2

A manifold is called orientable if it is possible to define a continuous n-form field which is non-zero everywhere on the manifold.
Then it is possible to choose a basis with the same handedness everywhere on the manifold continuously.
2
Euclidean space is orientable, the Mobius band is not.

An orientable manifold is called oriented once an orientation


has been chosen, i.e. once we have decided to choose basis vectors
with the same handedness everywhere on the manifold.
2

It is necessary to choose an oriented manifold when we discuss


the integration of forms. On an n-dimensional manifold, a set of n
58

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


59
linearly independent vectors define an n-dimensional parallelepiped.
If we define an n-form 6= 0 we can think of the value of these vectors
as the volume of this parallelepiped. This is called a volume form.
2
Once a volume form has been chosen, any set of n linearly independent vectors will define a positive or negative volume.
The integral of a function f on Rn is the sum of the values of f ,
multiplied by infinitesimal volumes of coordinate elements. Similarly,
we define the integral of a function f on an oriented manifold as the
sum of the values of f , multiplied by infinitesimal volumes. The way
to do that is the following.
Given a function f , define an n-form in a chart by = f dx1
dxn . To integrate over an open set U , divide it up into infinitesimal
cells, spanned by vectors


x
, x2 2 , , xn n
1
x
x
x
1


,

where the xi are small numbers.


Then the integral of f over one such cell is approximately
f x1 x2 xn = f dx1 dxn x1 1 , , xn n
= (cell) .


(15.1)

Adding up the contributions from all cells and taking the limit of cell
size going to zero, we find
Z

Z
=

f dn x .

(15.2)

(U )

The right hand side is the usual integration in calculus of n variables,


and the left hand side is our notation which we are defining.
The right hand side can be seen to be independent of the choice
of coordinate system. If we choose a different coordinate system, we
get a Jacobian, but also a redefinition of the region (U ) . Let us
check that the left hand side is also invariant of the choice of the
coordinates. We will do this in two dimensions with = f dx1 dx2 .

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


60

Chapter 15. Volume form

In another coordinate system (y 1 , y 2 ) corresponding to 0 (U )


x1 1 x1 2
dy + 2 dy
y 1
y
2
x
x2 2
1
dx2 =
dy
+
dy
y 1
y 2

 1 2
x1 x2
x x
1
2
2 2 dy 1 dy 2
dx dx =
y 1 y 1
y y
1
2
= Jdy dy ,
(15.3)
dx1 =

and J is the Jacobian.


So what we have here is
Z
Z
=
f (x1 , x2 )dx1 dx2
U

Z
=

f (y 1 , y 2 )Jdy 1 dy 2

Z
=

f (y 1 , y 2 )Jd2 y ,

(15.4)

0 (U )

so we get the same result both ways.


Given the same f , if we choose a basis with the opposite orientation, the integral of will have the opposite sign. This is why the
choice of orientation has to be made before integration.
Manifolds become even more interesting if we define a metric.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 16

Metric tensor

A metric on a vector space V is a function g : V V R


which is
i) bilinear:
g(av1 + v2 , w) = ag(v1 , w) + g(v2 , w)
g(v, w1 + aw2 ) = g(v, w1 ) + ag(v, w2 ) ,

(16.1)

i.e., g is a (0,2) tensor;


ii) symmetric:
g(v, w) = g(w, v);

(16.2)

(16.3)

iii) non-degenerate:
g(v, w) = 0

v = 0.

If for some v, w 6= 0 , we find that g(v, w) = 0 , we say that v, w


are orthogonal.
2

Given a metric g on V , we can always find an orthonormal


basis {e } such that g(e , e ) = 0 if 6= and 1 if = .
2

If the number of (+1)s is p and the number of (1)s is q , we


say that the metric has signature (p, q) .
We have defined a metric for a vector space. We can generalize
this definition to a manifold M by the following.

A metric g on a manifold M is a (0, 2) tensor field such that if


(v, w) are smooth vector fields, g(v, w) is a smooth function on M ,
and has the properties (16.1), (16.2) and (16.3) mentioned earlier. 2
61

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


62

Chapter 16. Metric tensor

It is possible to show that smoothness implies that the signature


is constant on any connected component of M , and we will assume
that it is constant on all of M .
A vector space becomes related to its dual space by the metric.
Given a vector space V with metric g , and vector v defines a linear
map g(v, ) : V R , w 7 g(v, w) R . Thus g(v, ) V where V
is the dual space of V . But g(v, ) is itself linear in v , so the map
V V defined by g(v, ) is linear. Since g is non-degenerate, this
map is an isomorphism. It then follows that on a manifold we can
use the metric to define a linear isomorphism between vectors and
1-forms.
In a basis, the components of the metric are g = g(e , e ) . This
is an n n matrix in an n-dimensional manifold. We can thus write
g(v, w) = g v w in terms of the components. Non-degeneracy
implies that this matrix is invertible. Let g denote the inverse
matrix. Then, by definition of an inverse matrix, we have
g g = = g g .

(16.4)

Then the linear isomorphism takes the following form.


i) If v = v e is a vector field in a chart, and { } is the dual
basis to {e } ,
g(v, ) = v ,
(16.5)
where v = g v .
ii) If A = A is a 1-form written in a basis { } , the corresponding vector field is A e , where A = g A .
This is the isomorphism between vector fields and 1-forms. (We
could of course define a similar isomorphism between vectors and
covectors without referring to a manifold.) A similar isomorphism
holds for tensors, e.g. in terms of components,
T T T T
T

(16.6)
T

(16.7)

These correspondences are not equalities the components are not


equal. What it means is that, if we know one set of components, say
T , and the metric, we also know every other set of components.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


63

Using the fact that a non-degenerate metric defines a 1-1 linear


map between vectors and 1-forms, we can define an inner product
of 1forms, by
hA | Bi = g A B
(16.8)
for 1-forms A, B . This result is independent of the choice of basis, i.e.
independent of the coordinate system, just like the inner product
of vector fields,
hv | wi = g(v, w) = g v w .

(16.9)

2
Given a manifold with metric, there is a canonical volume form
dV (sometimes written as vol) , which in a coordinate chart reads
q
dV = | det g |dx1 dxn .
(16.10)
Note that despite the notation, this is not a 1-form, nor the gradient
of some function V . This is clearly a volume form because it is an
n-form which is non-zero everywhere, as g is non-degenerate.
We need to show that this definition is independent of the chart.
Take an overlapping chart. Then in the new chart, the corresponding
volume form is
q
0 |dx01 dx0n .
dV 0 = | det g
(16.11)
We wish to show that dV 0 = dV . In the overlap,
dx0 =

x0
dx = A dx (say)
x

(16.12)

Then dx01 dx0n = (det A)dx1 dxn .


On the other hand, if we look at the components of the metric
tensor in the new chart,
0
g
= g(0 , 0 )


x
x
=

x0
x0


 
= g A1 , A1


= A1 A1 g .

(16.13)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


64

Chapter 16. Metric tensor

Taking determinants, we find

Thus

0
det g
= (det A)2 (det g ) .

(16.14)

q
q
1
0
| det g | = |det A|
| det g | ,

(16.15)

and so dV 0 = dV .

This is called the metric volume form and written as


p
dV = |g|dx1 dxn
(16.16)
in a chart.
2
When we write dV , sometimes
we
mean
the
n-form
as
defined
p
above, and sometimes we mean |g|dn x , the measure for the usual
integral. Another way of writing the volume form in a chart is in
terms of its components,
p
|g|
dV =
1 n dx1 dxn
(16.17)
n!
where  is the totally
p antisymmetric Levi-Civita symbol, with
12n = +1 . Thus |g| 1 n are the components of the volume
form.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 17

Hodge duality
We will next define the Hodge star operator. We will defineit in a
chart rather than abstractly.

The Hodge star operator, denoted ? in an n-dimensional


manifold is a map from p-forms to (n p)-forms given by
(?)1 np

p
|g|

1 n g np+1 1 g n p 1 p , (17.1)
p!

where is a p-form.
2
The ? operator acts on forms, not on components.
Example: Consider R3 with metric +++, i.e.
g =
diag(1, 1, 1) . Then |g| g = 1 , g diag(1, 1, 1) . Write the coordinate basis 1-forms as dx, dy, dz . Their components are clearly
(dx)i = i1 , (dy)i = i2 , (dz)i = i3 ,

(17.2)

the s on the right hand sides are Kroenecker deltas. So


(?dx)ij = ijk g kl (dx)l = ijk g kl l1 = ijk g k1
1
1

?dx = (?dx)ij dxi dxj = ijk g k1 dxi dxj


2!
2!
g k1 = 1 for k = 1 , 0 otherwise

1

?dx =
dx2 dx3 dx3 dx2 = dx2 dx3 = dy dz .
2!
(17.3)
Similarly, ?dy = dz dx ,

?dz = dx dy .
65

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


66

Chapter 17. Hodge duality

Example: Consider p = 0 (scalar), i.e. a 0-form in n dimensions.


p
(?)1 n = |g|1 n
p

(?1)1 n = |g|1 n
p
|g|

(?1) =
1 n dx1 dxn
n!
= dV
(17.4)
2
Example: p = n . Then
p
|g|
1 n g 1 1 g n n 1 n .
(?) =
n!

(17.5)

For the volume form,


p
|g|
dV =
1 n dx1 dxn
n!
p
(dV )1 n = |g|1 n
|g|
 g 1 1 g n n 1 n
(?dV ) =
n! 1 n
|g|
|g| n!
=
n!(det g)1 =
= sign(g) = (1)s ,(17.6)
n!
n! g
where s is the number of (1) in g .
So we find that

?(?1) = ?dV = (1)s ,

(17.7)

?(?dV ) = (1)s (?1) = (1)s dV ,

(17.8)

and
i.e., (?)2 = (1)s on 0-forms and n-forms.
In general, on a p-form in an n-dimensional manifold with signature (s, n s) , it can be shown in the same way that
(?)2 = (1)p(np)+s .

(17.9)

In particular, in four dimensional Minkowski space, s = 1, n = 4 , so


(?)2 = (1)p(4p)+1 .

(17.10)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


67
It is useful to work out the Hodge dual of basis p-forms. Suppose
we have a basis p-form dxI1 dxIp , where the indices are arranged in increasing order Ip > > I1 . Then its components are
I
p!I11 pp . So

? dxI1 dxIp 1 np
p
|g|
0
0
I
=
1 np 1 p g 1 1 g p p p! I10 p0
p
1
p!
p
1 I1
p Ip
= |g| 1 np 1 p g
g
.
(17.11)
We will use this to calculate ? .
For a p-form , we have
1
dx1 dxp
p! 1 p
X
=
I1 Ip dxI1 dxIp

(17.12)

where the sum over I means a sum over all possible index sets I =
I1 < < Ip , but there is no sum over the indices {I1 , , Ip }
themselves, in a given index set the Ik are fixed. Using the dual of
basis pforms, and Eq. (13.13), we get
X
? =
I1 Ip ? (dxI1 dxIp )
I

X
I

p
|g|
 g 1 I1 g p Ip I1 Ip dx1 dxnp .
(n p)! 1 np 1 p
(17.13)

The sum over I is a sum over different index sets as before, and
the Greek indices are summed over as usual. Thus we calculate
p
|g| X
? =
1 np 1 p g 1 I1 g p Ip I1 Ip
(n p)!
I,J

dx1 dxnp J1 Jp dxJ1 dxJp
p
|g| X
=
1 np 1 p g 1 I1 g p Ip I1 Ip J1 Jp
(n p)!
I,J

dx1 dxnp dxJ1 dxJp

(17.14)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


68

Chapter 17. Hodge duality

We see that the set {1 , , np } cannot have any overlap with the
set J = {J1 , , Jp }, because of the wedge product. On the other
hand, {1 , , np } cannot have any overlap with {1 , , p } because  is totally antisymmetric in its indices. So the set {1 , , p }
must have the same elements as the set J = {J1 , , Jp } , but they
may not be in the same order.
Now consider the case where the basis is orthogonal, i.e. g is
diagonal. Then g k Ik = g Ik Ik etc. and we can write
p
|g| X
? =
1 np I1 Ip g I1 I1 g Ip Ip I1 Ip J1 Jp
(n p)!
I,J

dx1 dxnp dxJ1 dxJp . (17.15)


We see that in each term of the sum, the indices {I1 Ip } must be
the same as {J1 Jp } because both sets are totally antisymmetrized
with the indices {1 np }.
Since both sets are ordered, it follows that we can replace J by
I,
p
|g| X
1 np I1 Ip g I1 I1 g Ip Ip I1 Ip I1 Ip
? =
(n p)!
I

dx1 dxnp dxI1 dxIp


p
|g| X
=
1 np I1 Ip I1 Ip I1 Ip
(n p)!
I

dx1 dxnp dxI1 dxIp . (17.16)


In each term of this sum, the indices {1 np } are completely
determined, so we can replace them by the corresponding ordered
set K = K1 < < Knp , which is completely determined by the
set I , so that
p X
? = |g|
K1 Knp I1 Ip I1 Ip I1 Ip
I
p
dxK1 dxKnp dxI1 dxI(17.17)
.

The indices on this  are a permutation of {1, , n} , so  is 1.


But this sign is the same as that for the permutation to bring the
basis to the order dx1 dxn , so the overall sign to get both to

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


69
the standard order is positive. Thus we get
p X I I
? = |g|
1 p I1 Ip 1n dx1 dxn
I

p 1
= |g| 1 p 1 p dx1 dxn
p!
1 1 p
=
1 p (vol)
p!

(17.18)

If we are in a basis where the metric is not diagonal, it is still


symmetric. So we can diagonalize it locally by going to an appropriate basis, or set of coordinates, at each point. In this basis, the
components of may be 0 1 p , so we can write


1 01 0p
01 0p (vol0 )
(17.19)
? =

p!
But both factors are invariant under a change of basis. So we can
now change back to our earlier basis, and find Eq. (17.18) even when
the metric is not diagonal. Note that the metric may not be diagonalizable globally or even in an extended region.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 18

Maxwell equations
We will now consider a particular example in physics where differential forms are useful. The Maxwell equations of electrodynamics are,
with c = 1 ,
E
E
B
t
B
B
E+
t

(18.1)

=j

(18.2)

=0

(18.3)

= 0.

(18.4)

The electric and magnetic fields are all vectors in three dimensions,
but these equations are Lorentz-invariant. We will write these equations in terms of differential forms.
Consider R4 with Minkowski metric g = diag(1, 1, 1, 1) . For
the magnetic field define a 2-form
B = Bx dy dz + By dz dx + Bz dx dy .

(18.5)

For the electric field define a 1-form


E = Ex dx + Ey dy + Ez dz .

(18.6)

Combine these two into a 2-form F = B + E dt . Let us calculate


dF = d(B + E dt) = dB + dE dt . As usual, We will write 1, 2, 3
70

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


71
for the component labels x, y, z .
dB = d(B1 dy dz + B2 dz dx + B3 dx dy)
= t B1 dt dy dz + 1 B1 dx dy dz
+t B2 dt dz dx + 2 B2 dy dz dx
+t B3 dt dx dy + 3 B3 dz dx dy .

(18.7)

And
d(E dt) = d(E1 dx dt + E2 dy dt + E3 dz dt)
= 2 E1 dy dx dt + 3 E1 dz dx dt
+1 E2 dx dy dt + 3 E2 dz dy dt
+1 E3 dx dz dt + 2 E3 dy dz dt . (18.8)
Thus, remembering that the wedge product changes sign under each
exchange, we can combine these two to get
dF = (t B1 + 2 E3 3 E2 ) dt dy dz
+ (t B2 + 1 E3 3 E1 ) dt dz dx
+ (t B3 + +1 E2 2 E1 ) dt dx dy
+ (1 B1 + 2 B2 + 3 B3 ) dx dy dz
= (t B1 + ( E)1 ) dt dy dz
+ (t B2 + ( E)2 ) dt dz dx
+ (t B3 + ( E)3 ) dt dx dy
+ ( B) dx dy dz .

(18.9)

Thus two of Maxwells equations are equivalent to dF = 0 .


For the other two equations we need ?F . Using the formula
(17.11) for dual basis forms, it is easy to calculate that
?(dx dy) = dt dz ,

?(dy dz) = dt dx ,

?(dz dx) = dt dy ,

?(dx dt) = dy dz ,

?(dy dt) = dz dx ,

?(dz dt) = dx dy .
(18.10)

We use these to calculate


?F = ?(B + E dt)
= B1 dt dx + B2 dt dy + B3 dt dz
+E1 dy dz + E2 dz dx + E3 dx dy .

(18.11)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


72

Chapter 18. Maxwell equations

Then in the same way as for the previous calculation, we find


d?F = ( E) dx dy dz
+ (t E1 ( B)1 ) dt dy dz
+ (t E2 ( B)2 ) dt dz dx
+ (t E3 + ( B)3 ) dt dx dy .

(18.12)

We need to relate this to the charge-current.


Define the current four-vector as
j = t + j 1 1 + j 2 2 + j 3 3 .

(18.13)

Then there is a corresponding one-form j dx with j = g j . So


in terms of components,
j dx = dt + j1 dx1 + j2 dx2 + j3 dx3 .

(18.14)

Then using Eq. (17.11) it is easy to calculate that


?j = dx dy dz + j1 dt dy dz
+j2 dt dz dx + j3 dt dx dy .

(18.15)

Comparing this equation with Eq. (18.12) we find that the other two
Maxwell equations can be written as
d?F = ?j .

(18.16)

Finally, using Eq. (17.18), we see that the action of electromagnetism


can be written as
Z
1

F ?F
(18.17)
2
This expression holds in both flat and curved spacetimes. For the
latter, with local coordinates (t, x, y, z) we find

F ?F = (B 2 E 2 ) g dt dx dy dz .
(18.18)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 19

Stokes theorem
We will next discuss a very beautiful result called Stokes formula.
This is actually a theorem, but we will not prove it, only state the
result and discuss its applications. So for us it is only a formula, but
still deep and beautiful.

A submanifold S is a subset of points in M such that any point


in S has an open neighbourhoood in M for which there is some chart
where (n m) coordinates vanish. S is then m-dimensional.
2

Suppose U is a region of an oriented manifold M . The bound


ary U of U is a submanifold of dimension n 1 which divides M
in such a way that any curve joining a point in U with a point in U c
must contain a point in U .
Now suppose U has an oriented smooth boundary U . Then U
is automatically an oriented manifold, by considering the restrictions
of the charts on U to U .

Consider a smooth (n 1) form in M . Stokes formula says


that
Z
Z
d = .
(19.1)
U

If M is a compact manifold with boundary M , this formula can


be applied to all of M . If vanishes outside some compact region
we can again set U = M . Also, U can be a submanifold in another
manifold, like a 2-surface in a 3-manifold.
2
Example: Let U = [0, 1] . Then a function f : M R is a
0-form, and df = f 0 (x)dx is a 1-form. Take the orientation of M to
be from 0 to 1. Then M consists of the points x = 0 and x = 1 ,
73

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


74

Chapter 19. Stokes theorem

and Stokes formula says that


Z

Z
df =
M

Z1
i.e.

f
M

f 0 (x)dx = f (1) f (0) .

(19.2)

Example: Consider a 2-d disk D in R2 , with boundary D .


Take a 1-form A . Then Stokes formula says
Z
Z
A = dA .
(19.3)
D

Let us seee this equation in a chart. We can write


A = Ai dxi
dA = i Aj dxi dxj
(19.4)
 
d
d
A evaluated on D can be written as A
where
is tangent
dt
dt
 
d
dxi
to D . So we can write A
= Ai
dt , and
dt
dt
Z
Z
i
Ai dx =
i Aj dxi dxj
D

Z
=

(1 A2 2 A1 ) dx1 dx2

Z
=

(1 A2 2 A1 ) d2 x .

(19.5)

(D)

Similarly for higher forms on higher dimensional manifolds.

Gauss divergence theorem is a special case of Stokes theorem. Before getting to Gauss theorem, we need to make a new
definition. Consider an n-form 6= 0 on an n-dimensional manifold.
We can write this in a chart as
= f dx1 dxn
1
= f 1 n dx1 dxn .
n!

(19.6)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


75
Given a vector field v , its contraction with is
1
v 1 dx2 dxn
(n 1)! 1 2 n
= f v 1 dx2 dxn f v 2 dx1 dx3 dxn +

v = (v, ) =

(19.7)
Then we can calculate
d(v ) = d(v, ) = 1 (f v 1 ) dx1 dx2 dxn
+2 (f v 2 ) dx1 dx2 dxn
+ + n (f v n ) dx1 dx2 dxn
= (f v ) dx1 dx2 dxn
1
= (f v ) .
f
In particular, if is the volume form, we can write
p
|g|
=
1 n dx1 dxn ,
n!
p
1
d(v (vol)) = p (v |g|)(vol) .
|g|

(19.8)

(19.9)

This is called the divergence of the vector field v .


There is another expression for the divergence. Remember that
given a vector field v , we can define a one-form, also called v , with
components defined with the help of the metric,
v = g0 v

(19.10)

Consider ?v , which has components


p
0
(?v)1 n1 = |g|1 n1 g v0
p
= |g|1 n1 v .
(19.11)
p
|g|

?v =
 v dx1 dxn1
(n 1)! 1 n1
!
p
|g|
d?v = n
 v dxn dx1 dxn1
(n 1)! 1 n1

p

(1)n1
=
1 n1 n
|g|v dx1 dxn
(n 1)!
(19.12)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


76

Chapter 19. Stokes theorem

Both and n must be different from (1 , , n1 ) , so = n .


Thus in each term of the sum, the choice of (1 , , n1 ) automatically selects n (= ) , so a sum over (1 , , n ) overcounts n
times. So we can write
 p

(1)n1
d?v =
1 n
|g|v dx1 dxn
(n 1)!
p
1
(19.13)
= (1)n1 p ( |g|v )(vol) .
|g|
Since this is an n-form in n dimensions, we can calculate from
here that
?d?v =

p
(1)n+s1
p
( |g|v ) ,
|g|

(19.14)

where as before s is the signature of the manifold, i.e. the number


of negative entries in the metric in a locally diagonal form.
Let us now go back to Stokes formula. Take a region U of M
which is covered by a single chart and has an orientable boundary
U as before. Then we find
Z
Z
p
1
p ( |g|v )(vol) =
d(v (vol))
|g|
U
U
Z
=
v (vol) .
(19.15)
U

 
d
Now suppose b is a 1-form normal to U , i.e. b
= 0 for any
dt
d
vector
tangent to U , and is an (n 1)-form such that b =
dt
(vol) . Since all n-forms are proportional, always exists. For the
same reason, if b 6= 0 on U , it is unique up to a factor. And b 6= 0 on
U because U is defined as the submanifold where one coordinate is
d
constant, usually set to zero, so that one component of
vanishes
dt
at any point on U , and therefore the corresponding component of
b can be chosen to be non-zero.
So b is unique up to a rescaling b b0 = f b for some nonvanishing
1
function f . But we can always scale 0 = f so that b0 0 =
b . Further, if we restrict to U , i.e. if acts only on tangent

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


77
vectors to U , we find that is an (n 1)-form on an (n 1)dimensional manifold, so it is unique up to scaling. Therefore, is
unique once b is given. Finally, for any vector v ,




(19.16)
v (vol) = v (b )
U

is an (n 1)-form on U which acts only on vectors tangent to U .


Then




(19.17)
v (b ) = b(v)
U

because all terms of the form b v gives zero for any choice of
(n 1) vectors on U .
Then we have
Z
Z
p
1

p ( |g|v )(vol) =
b(v)
|g|
U
U
Z
=
(n v ) .
(19.18)
U

Usually b is taken to have norm 1. Then is the volume form on


U , and we can write
Z
Z
q
p
1
p ( |g|v )(vol) =
(n v ) |g(U ) |dn1 x . (19.19)
|g|
U

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 20

Lie groups
We start a brief discussion on Lie groups, mainly with an eye to
their structure as manifolds and also application to the theory of
fiber bundles.

A Lie group is a group which is also an analytic manifold. 2


We did not define a Lie group in this way in Chap. 2 , but said that
a Lie group was a manifold in which the group product is analytic in
the group parameters, or alternatively the group product and group
inverse are both C .
The definition above comes from a theorem that given a continuous group G in which the group product and group inverse are
C functions of the group parameters, it is always possible to find
a set of coordinate charts covering G such that the overlap functions
are real analytic, i.e. are C and their Taylor series at any point
converge to their respective values.

A Lie subgroup of G is a subset H of G which is a subgroup of


G , a submanifold of G , and is a topological group, i.e., a topological
space in which the group product and group inverse are continuous
maps.
2

Sometimes this expressed in terms of another definition. P is


an immersed submanifold of M if the inclusion map j : P , M
is smooth and at each point p P its differential djp is one to one,
with djp being defined by djp : Tp P Tj(p) M such that djp (v)(g) =
v(g jp ) .
2
We have mentioned some specific examples of Lie groups earlier.
Let us mention some more examples.
Example: Rn is a Lie group under addition. So is Cn .
78

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


79
Example: Rn \{0} is a Lie group under multiplication. So is

Cn \{0} .

Example: The direct product of two Lie groups is itself a Lie


group, with multiplication (g1 , h1 )(g2 , h2 ) = (g1 g2 , h1 h2 ) .
2
Example: The set of all n n real invertible matrices forms
a group under matrix multiplication, called the General Linear
group GL(n, R) . This is also the space of all invertible linear maps
of Rn to itself. We can similarly define GL(n, C) .
2
The next few examples are Lie subgroups of GL(n, R) .
Example: The Special Linear group SL(n, R) is the subset
of GL(n, R) for which all the matrices have determinant +1 , i.e.,
SL(n, R) = {A GL(n, R)| det A = 1} . One can define SL(n, C) in
a similar manner.
2
Example: The Orthogonal group O(n) = {R GL(n, R) |
R T R = I} .
2

Example: The Unitary group U (n) = {U GL(n, C) | U U =


I} .
2
Example: The Symplectic group Sp(n) , defined as the subgroup of U (2n) given by AT JA = J , where

J=

1nn

1nn
0

2
Example: O(p, q) = {R GL(p+q, R) | RT p,q R = p,q } , where

p,q =

1pp
0

1qq

2
Example: U (p, q) = {U GL(p + q, C) | U p,q U = p,q } .
Example The Special Orthogonal group SO(n) is the subgroup of O(n) for which determinant is +1. Similarly, the Special
unitary group SU (n) is the subgroup of U (n) with determinant
+1. Similarly for SO(p.q) and SU (p, q) .
The group U (1) is the group of phases U (1) = {ei | R} . As a
manifold, this is isomorphic to a circle S 1 .
The group SU (2) is isomorphic as a manifold to a three-sphere
S 3 . These are the only two spheres (other than the point S 0 ) which
admit a Lie group structure.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


80

Chapter 20. Lie groups

An important property of a Lie group is that the tangent space


of any point is isomorphic to the tangent space at the identity by
an appropriate group operation. Of course, the tangent space at any
pooint of a manifold is isomorphic to the tangent space at any other
point. For Lie groups, the isomorphism between the tangent spaces
is induced by group operations, so is in some sense natural.
For any Lie group G , we can define diffeomorphisms of G labelled
by elements g G , called

Left translation lg : G G
g 0 7 gg 0 ;
2

Right translation rg : G G
g 0 7 g 0 g .
2
These can be defined for any group, but are diffeomorphisms for
Lie groups. We see that
lg1 lg (g 0 ) = lg1 (gg 0 ) = g 1 gg 0 = g 0

(lg )1 = lg1

rg1 rg (g 0 ) = rg1 (g 0 g) = g 0 gg 1 = g 0

(rg )1 = rg1 . (20.1)

It is easy to check that


lg1 lg2 = lg1 g2

rg1 rg2 = rg2 g1

(20.2)

Further, lg1 (g) = e and rg1 (g) = e , so any element of G can be


moved to the identity by a diffeomorphism. The tangent space at
the identity forms a Lie algebra, as we shall see. The left and right
translations lead to diffeomorphisms which relate the tangent space
at any point to this Lie algebra, as we shall see now.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 21

Tangent space at the identity


A point on the Lie group is a group element. So a vector field on
the Lie group selects a vector at each g G . Since left and right
translations are diffeomorphisms, we can consider the pushforwards
due to them.

A left-invariant vector field X is invariant under left ttranslations, i.e.,


g G .

X = lg (X)

(21.1)

In other words, the vector (field) at g 0 is pushed forward by lg to the


same vector (field) at lg (g 0 ):
g, g 0 G .

lg (Xg0 ) = Xgg0

(21.2)

Similarly, a right-invariant vector field X is defined by


X = rg (X)
i.e.

rg (Xg0 ) = Xg0 g

g G ,
g, g 0 G .

(21.3)

A left or right invarian vector field has the important property


that it is completely determined by its value at the identity element
e of the Lie group, since
g G ,

lg (Xe ) = Xg

(21.4)

and similarly for right-invariant vector fields.


Write the set of all left-invariant vector fields on G as L(G) . Since
the push-forward is linear, we get
lg (aX + Y ) = alg X + lg Y ,
81

(21.5)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


82

Chapter 21. Tangent space at the identity

so that if both X and Y are left-invariant,


lg (aX + Y ) = aX + Y ,

(21.6)

so the set of left-invariant vector fields form a real vector space.


We also know that push-forwards leave the Lie algebra invariant,
i.e., for lg ,
[lg X , lg Y ] = lg [X , Y ] .

(21.7)

Thus if X, Y L(G) ,
lg [X , Y ] = [lg X , lg Y ] = [X , Y ] ,

(21.8)

so [X , Y ] L(G) . Thus the set of all left-invariant vector fields on


G forms a Lie algebra.

This L(G) is called the Lie algebra of G .


2
The dimension of this Lie algebra is the same as that of G because
of the
Theorem: L(G) as a real vector space is isomorphic to the tangent space Te G to G at the identity of G .
Proof: We will show that left translation leads to an isomorphism.
For X Te G , define the vector field LX on G by

g G
(21.9)
LX g LX
g := lg X
Then for all g, g 0 G ,
X
lg0 (LX
g ) = lg 0 (lg X) = lg 0 g X = Lg 0 g .

(21.10)

Note that for two diffeomorphisms 1 , 2 , we can write


(1 (2 v))(f ) = (2 v)(f 1 )
= v(f 1 2 )
= ((1 2 ) v)(f )

1 (2 v) = (1 2 ) v

(21.11)

Since left translation is a diffeomorphism,


lg0 (lg X) = (lg0 lg ) X = (lg0 g )X

(21.12)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


83
So it follows that LX is a left-invariant vector field, and we have a
map Te G L(G) . Since the pushforward is a linear map, so is the
map X LX . We need to prove that this map is 1-1 and onto.
If LX = LY , we have
Y
LX
g = Lg

g G ,

(21.13)

so
Y
lg1 LX
g = lg 1 Lg

X=Y

( Te G) .

(21.14)

So the map X LX is 1-1.


Now given LX , define Xe Te G by
Xe = lg1 LX
g

for any g G .

(21.15)

We can also write


Xe = LX
e .

(21.16)

X
lg Xe = lg lg1 LX
g = Lg .

(21.17)

Then

So the map X 7 LX is onto.


Then we can define a Lie bracket on Te G by

[u , v] = [Lu , Lv ] e .

(21.18)

The Lie algebra of vectors in Te G based on this bracket is thus the


Lie algebra of the group G . It follows that
dim L(G) = dim Te G = dim G .

(21.19)

Note that since commutators are defined for vector fields and not
vectors, the Lie bracket on Te G has to be defined using the commutator of left-invariant vector fields on G and the isomorphism
Te G L(G) .

If for an n-dimensional Lie group G , {t1 , , tn } is a set of basis


vectors on Te G ' L(G) , the Lie bracket of any pair of these vectors
must be a linear combination of them, so
X
k
[ti , tj ] =
Cij
tk
(21.20)
k

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


84

Chapter 21. Tangent space at the identity

k . These numbers are known as the


for some set of real numbers Cij
structure constants of the Lie group or algebra.
2
Since L(G) is a Lie algebra, with the Lie bracket as the product,
the Lie bracket is antisymmetric,

[ti , tj ] = [tj , ti ]
X
X
k
k
Cij
tk =
Cji
tk
k

k
k
k
Cij
= Cji
,

(21.21)

and the structure constants satisfy the Jacobi identity


[ti , [tj , tk ]] + [tj , [tk , ti ]] + [tk , [ti , tj ]] = 0

l
m
m
l
l
= 0.
Cjl
Cilm + Cki
Cij
Ckl
+ Cjk

(21.22)

A similar construction can be done using a set of right-invariant


vector fields defined by
RgX := rg X
and its inverse Xe = rg1 RgX .

for X Te G

(21.23)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 22

One parameter subgroups


There is another characterization of Te G for a Lie group G as the set
of its one parameter subgroups, which we will now define. This is
also called the infinitesimal description of a Lie group, and what
Lie called an infinitesimal group.

A one parameter subgroup of a Lie group G is a smooth


homomorphism from the additive group of real numbers to G , :
(R, +) G . Then : R G is a curve such that (s + t) =
(s)(t) , (0) = e , and (t) = (t)1 .
2
Also, since this is a homomorphism, the one parameter subgroup
is Abelian.
Example: For G = (R\{0}, ) the multiplicative group of nonzero real numbers, (t) = et is a 1-p subgroup.
Example: G = U (1) ,
(t) = eit.

cos t sin t
.
Example: G = SU (2) ,
(t) =

sin t cos t

cos t sin t 0
Example: G = GL(3, R) ,
(t) = sin t cos t 0 .
0
0 et
The relation between 1-p subgroups and
and Te G is given by the
Theorem: The map 7 (0)

= e defines a 1-1 correspondence between 1-p subgroups of G, and Te G .


Proof: For any X Te G define LX = lg X as the corresponding
left-invariant vector field. We need to find a smooth homomorophism
from R to G using LX . This homomorphism is provided by the flow
or integral curve of LX , but let us work this out in more detail.

85

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


86

Chapter 22. One parameter subgroups


Denote the integral curve of LX by X (t) , i.e.,
X (0) = LX
e =X
and

X (t) = LX
(t) = l(t) X

(22.1)

X
Since LX is left-invariant, lg0 LX
g = Lg 0 g . Consider the equation
 
d
d
X
(t) = L(t) = l(t) X
.
(22.2)
dt
dt t

Given some , replace (t) by ( + t) to get


( + t) = l( +t) X .

(22.3)

Remember that (t) is an element of the group for each t . Now


replace (t) in Eq. (22.2) by ( )(t) to get
 

d
( )(t)
(22.4)
= LX
( )(t)
dt
We see that (t + ) and ( )(t) are both integral curves of LX , i.e.
both satisfy the equation of the integral curve of LX , and at t = 0
both curves are at the point ( ) . Thus by uniqueness these two are
the same curve,
( + t) = ( )(t) ,

(22.5)

and t 7 (t) is the homomorphism R G that we are looking for.


Thus for each X Te G we find a 1-p subgroup (t) given by the
integral curve of LX ,
(t)

= LX
(t) = l(t) X ,

(22.6)

where, as mentioned earlier, ((0)

= X) .
2
In a compact connected Lie group G , every element lies on some
1-p subgroup. This is not true in a non-compact G , i.e. there are
elements in G which do not lie on a 1-p subgroup. However, an
Abelian non-compact group will always have a 1-p subgroup, so this
remark applies only to non-Abelian non-compact groups.
For matrix groups, every 1-p subgroup is of the form

n
o

(t) = etM M fixed, t R .
(22.7)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


87
Let us see why. Suppose {(t)} is a 1-p subgroup of the matrix
group. Then (t) is a matrix for each t , and
(s)(t) = (s + t) .

(22.8)

Differentiate with respect to s and set s = 0 . Then


(0)(t)

= (t)
.

(22.9)

Write (0)

= M . Since G is a matrix group, M is a matrix. Then


the unique solution for is
(t) = etM .

(22.10)

The properties of M are determined by the properties of the group


and vice versa, not every matrix M will generate any group.
The allowed matrices {M } for a given group G are the {(0)}

for
all the 1-p subgroups (t) , so these are in fact the tangent vectors
at the identity. The allowed matrices {M } for a given matrix group
G thus form a Lie algebra with the Lie bracket being given by the
matrix commutator. This Lie algebra is isomorphic to the Lie algebra
of the group G . (We will not a give a proof of this here.)
We can find the Lie algebra of a matrix group by considering
elements of the form (t) = etM for small t , i.e.,
(t) = I + tM

(22.11)

for small t . Conversely, once we are given, or have found, a Lie


algebra with basis {ti } , we can exponentiate the Lie algebra to find
the set of 1-p subgroups


(a) = exp ai ti
(22.12)

This is the infinitesimal group, for compact connected groups


this is identical to the Lie group itself. So in such cases, the entire
group can be generated by exponentiating the Lie algebra. Noncompact groups cannot be written as the exponential of the Lie algebra in general.
2
Example: Consider SO(N ) , the group of N N real orthogonal
matrices R with RT R = I , det R = 1 . Write R = I + A , then AT =
A , i.e. the Lie algebra is spanned by N N real antisymmetric
matrices. Let us construct a basis for this algebra.

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


88

Chapter 22. One parameter subgroups

An N N antisymmetric matrix has N (N 1)/2 independent


elements. So we define N (N 1)/2 independent antisymmetric matrices, labelled by , = 1, , N ,
M = M
(M ) = (M ) ,

, are not matrix indices


, are matrix indices .

(22.13)

A convenient choice for the basis is given by


(M ) = .

(22.14)

Then the commutators are calculated to be


[M , M ] = M M + M M . (22.15)
This defines the Lie algebra.
Example: For SU (N ) , the group of N N unitary matrices U with U U = I , det U = 1 , the 1-p subgroups are given by
(t) = etM with M + M = 0 in the same way as above, and
det(I + tM ) = 1 Tr M = 0 . So the SU (N ) Lie algebra consists
of traceless antihermitian matrices. Often the basis is multiplied by
i to write (a) = exp(iaj tj ) , where tj are now Hermitian matrices,
with
[ti , tj ] = ifabc tc .

(22.16)

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


Chapter 23

Fiber bundles
Consider a manifold M with the tangent bundle T M =

S
P M

TP M .

Let us look at this more closely. T M can be thought of as the original


manifold M with a tangent space stuck at each point P M . Thus
there is a projection map : T M M , TP M 7 P , which
associates the point P M with TP M .
Then we can say that T M consists of pooints P M and vectors
v TP M as an ordered pair (P, vP ) . Then in the neighbourhood of
any point P , we can think of T M as a product manifold, i.e. as
the set of ordered pairs (P, vP ) .
This is generalized to the definition of a fiber bundle. Locally
a fiber bundle is a product manifold E = B F with the following
properties.

B is a manifold called the base manifold, and F is another


manifold called the typical fiber or the standard fiber.

There is a projection map : E B , and if P B , the preimage 1 (P ) is homeomorphic, i.e. bicontinuously isomorphic, to
the standard fiber.
2
E is called the total space, but usually it is also called the bundle, even though the bundle is actually the triple (E, , B) .

E is locally a product space. We express this in the following


way. Given an open set Ui of B , the pre-image 1 (Ui ) is homeomorphic to Ui F , or in other words there is a bicontinuous isomorphism
i : 1 (Ui ) Ui F . The set {Ui , i } is called a local trivializa
tion of the bundle.
2

If E can be written globally as a product space, i.e. E = B F ,


it is called a trivial bundle.
2
89

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


90

Chapter 23. Fiber bundles

This description includes a homeomorphism 1 (P ) F for


each P Ui . Let us denote this map by hi (P ) . Then in some overlap
Ui Uj the fiber on P , 1 (P ) , has homeomorphisms hi (P ) and
hj (P ) onto F . It follows that hj (P ) hi (P )1 is a homeomorphism
F F . These are called transition functions. The transition
functions F F form a group, called the structure group of F . 2
Let us consider an example. Suppose B = S 1 . Then the tangent bundle E = T S 1 has F = R and (P, v) 7 P , where
P S 1 , v T S 1 . Consider a covering of S 1 by open sets Ui , and
let the coordinates of Ui S 1 be denoted by i . Then any vector at
d
TP S 1 can be written as v = ai
(no sum) for P Ui .
di
So we can define a homeomorphism hi (P ) : TP S 1 R, v 7
ai (fixed i). If P Ui Uj there are two such homeomorphisms
T S 1 R , and since i and j are independent, ai and aj are also
independent.
Then hi (P ) hj (P )1 : F F (or R R) maps aj to ai .
The homeomorphism, which in this case relates the component of
the vector in two coordinate systems, is simply multiplication by the
ai
number rij =
R\{0} . So the structure group is R\{0} with
aj
multiplication.
For an n-dimensional manifold M , the structure group of T M
is GL(n, R) .

A fiber bundle where the standard fiber is a vector space is called


a vector bundle .
2
A cylinder can be made by glueing two opposite edges of a flat
strip of paper. This is then a Cartesian product of acircle S 1 with
a line segment I . So B = S 1 , F = I and this is a trivial bundle,
i.e. globally a product space. On the other hand, a Mobius strip is
obtained by twisting the strip and then glueing. Locally for some
open set U ( S 1 we can still write a segment of the Mobius strip as
U I , but the total space is no longer a product space. As a bundle,
the M
obius strip is non-trivial.

Given two bundles (E1 , 1 , B1 ) and (E2 , 2 , B2 ) , the relevant


or useful maps between these are those which preserve the bundle
structure locally, i.e. those which map fibers into fibers. They are
called bundle morphisms.
2
A bundle morphism is a pair of maps (F, f ) , F : E1 E2 , f :

c Amitabha Lahiri: Lecture Notes on Differential Geometry for Physicists 2011


91
B1 B2 , such that 2 F = f 1 . (This is of course better
understood in terms of a commutative diagram.)
Not all systems of coordinates are appropriate for a bundle. But it
is possible to define a set of fiber coordinates in the following way.
Given a differentiable fiber bundle with n-dimensional base manifold
B and p-dimensional fiber F , the coordinates of the bundle are given
by bundle morphisms onto open sets of Rn Rp .
2

Given a manifold M the tangent space TP M , consider AP =


(e1 , , en ) , a set of n linearly independent vectors at P . AP is a
basis in TP M . The typical fiber in the frame bundle is the set of
all bases, F = {AP } .
2
Given a particular basis AP = (e1 , , en ) , any basis AP may
be expressed as
ei = aji ej .

(23.1)

The numbers aji can be thought of as the components of a matrix,


which must be invertible so that we can recover the original basis
from the new one. Thus, starting from any one basis, any other basis
can be reached by an nn invertible matrix, and any nn invertible
matrix produces a new basis. So there is a bijection between the set
of all frames in TP M and GL(n , R) .
Clearly the structure group of the typical fiber of the frame bundle is also GL(n , R) .

A fiber bundle in which the typical fiber $F$ is identical (or homeomorphic) to the structure group $G$, and $G$ acts on $F$ by left translation, is called a principal fiber bundle. □
Examples: 1. Typical fiber $= S^1$, structure group $U(1)$.
2. Typical fiber $= S^3$, structure group $SU(2)$.
3. Bundle of frames, for which the typical fiber is $GL(n, \mathbb{R})$, as is the structure group.

A section of a fiber bundle $(E, \pi, B)$ is a mapping $s : B \to E$, $p \mapsto s(p)$, where $p \in B$, $s(p) \in \pi^{-1}(p)$. So we can also say $\pi \circ s =$ identity. □
Example: A vector field is a section of the tangent bundle, $v : P \mapsto v_P$.
Example: A function on $M$ is a section of the bundle which locally looks like $M \times \mathbb{R}$ (or $M \times \mathbb{C}$ if we are talking about complex functions).
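For instance, here is a minimal sketch (my own example, not from the notes) of a section of the trivial bundle $S^1 \times \mathbb{R}$, i.e. a real-valued field on the circle; the defining property $\pi \circ s =$ identity is checked at a few sample points.

```python
# Sketch: a section of the trivial bundle E = S^1 x R.  The section
# s(theta) = (theta, sin(theta)) picks one point in each fiber.
import math

def pi(point):                  # projection E -> S^1
    theta, _ = point
    return theta

def s(theta):                   # a section: base point -> a point in its fiber
    return (theta, math.sin(theta))

thetas = [0.0, 1.0, 2.5, 3.14]
print(all(math.isclose(pi(s(t)), t) for t in thetas))   # True: pi o s = id
```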


Starting from the tangent bundle we can define the cotangent bundle, in which the typical fiber is the dual space of the tangent space. This is written as $T^*M$. As we have seen before, a section of $T^*M$ is a 1-form field on $M$. □

Remember that a vector bundle $F \to E \to B$ is a bundle in which the typical fiber $F$ is a vector space. □

A vector bundle $(E, \tilde\pi, B, F, G)$ with typical fiber $F$ and structure group $G$ is said to be associated to the principal bundle $(P, \pi, B, G)$ by the representation $\{D(g)\}$ of $G$ on $F$ if its transition functions are the images under $D$ of the transition functions of $P$. □

That is, suppose we have a covering $\{U_i\}$ of $B$, and the local trivialization of $P$ with respect to this covering is $\phi_i : \pi^{-1}(U_i) \to U_i \times G$, which is essentially the same as writing $\phi_{i,x} : \pi^{-1}(x) \to G$, $x \in U_i$. Then the transition functions of $P$ are of the form
\[
  g_{ij} = \phi_i \circ \phi_j^{-1} : U_i \cap U_j \to G \,. \tag{23.2}
\]

The transition functions of $E$ corresponding to the same covering of $B$ are given by $\tilde\phi_i : \tilde\pi^{-1}(U_i) \to U_i \times F$, with $\tilde\phi_i \circ \tilde\phi_j^{-1} = D(g_{ij})$. That is, if $v_i$ and $v_j$ are images of the same vector $v_x \in F_x$ under overlapping trivializations $\tilde\phi_i$ and $\tilde\phi_j$, we must have
\[
  v_i = D\!\left(g_{ij}(x)\right) v_j \,. \tag{23.3}
\]

A more physical way of saying this is that if two observers look at the same vector at the same point, their observations are related by a group transformation, $(p, v_i) \simeq (p, D(g_{ij})\, v_j)$.
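A small numerical illustration (my own example, not from the notes), assuming $G = SO(2)$ acting on $F = \mathbb{R}^2$ in the defining representation: the two observers' components of the same vector are related by $v_i = D(g_{ij}) v_j$, and observer-independent quantities such as the length agree.

```python
# Sketch: two observers describe the same tangent vector at a point p using
# frames that differ by a rotation, as in Eq. (23.3).
import numpy as np

def D(angle):                       # defining representation of SO(2)
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

g_ij = 0.7                          # transition "angle" at the point p (assumed)
v_j = np.array([1.0, 2.0])          # components seen by observer j
v_i = D(g_ij) @ v_j                 # components seen by observer i

# Both describe the same geometric vector: its length is observer-independent.
print(np.isclose(np.linalg.norm(v_i), np.linalg.norm(v_j)))   # True
```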

These relations are what are called gauge transformations in physics, and $G$ is called the gauge group. Usually $G$ is a Lie group for reasons of continuity. □
Fields appearing in various physical theories are sections of vector bundles, which in some trivialization look like $U \times V$, where $U$ is some open neighbourhood of the point we are interested in, and $V$ is a vector space. $V$ carries a representation of some group $G$, usually a Lie group, which characterizes the theory.

To discuss this a little more concretely, let us consider an associated vector bundle $(E, \tilde\pi, B, F, G)$ of a principal bundle $(P, \pi, B, G)$. Then the transition functions are in some representation of the group $G$. Because the fiber carries a representation $\{D(g)\}$ of $G$, there are always linear transformations $T_x : E_x \to E_x$ which are members of the representation $\{D(g)\}$. Let us write the space of all sections of this bundle as $\Gamma(E)$. An element of $\Gamma(E)$ is a map from the base space to the bundle. Such a map assigns an element of $V$ to each point of the base space.

We say that a linear map $T : \Gamma(E) \to \Gamma(E)$ is a gauge transformation if at each point $x$ of the base space, $T_x \in \{D(g)\}$ for some $g$, i.e. if
\[
  T_x : (x, v) \mapsto (x, D(g)v) \,, \tag{23.4}
\]
for some $g \in G$ and for $(x, v) \in U \times F$. In other words, a gauge transformation is a representation-valued linear transformation of the sections at each point of the base space. The right hand side is often written as $(x, gv)$. □
This definition is independent of the choice of $U$. To see this, consider a point $x \in U \cap U'$. Then
\[
  (x, v) = (x, g'v) \,, \tag{23.5}
\]
in the two overlapping trivializations. In the other notation we have been using, $v$ and $v'$ are images of the same vector $v_x \in V_x$, and $v' = D(g')v$. A gauge transformation $T$ acts as
\[
  T_x : (x, v) \mapsto (x, gv) \,. \tag{23.6}
\]
But we also have
\[
  (x, gv) = (x, g'gv) \,, \tag{23.7}
\]
using Eq. (23.5). So it is also true that
\[
  T_x : (x, g'v) \mapsto (x, g'gv) \,. \tag{23.8}
\]
Since $F$ carries a representation of $G$, we can think of $g'v$ as a change of variables, i.e. define $v' = g'v$. Then Eq. (23.8) can also be written as
\[
  T_x : (x, v') \mapsto (x, g''v') \,, \tag{23.9}
\]
where now $g'' = g'gg'^{-1}$. So $T$ is a gauge transformation in $U'$ as well. The definition of a gauge transformation is independent of the choice of $U$, but $T$ itself is not. The set of all gauge transformations $\mathcal{G}$ is a group, with
\[
  (gh)(x) = g(x)h(x) \,, \qquad (g^{-1})(x) = \left(g(x)\right)^{-1} \,. \tag{23.10}
\]
The groups $G$ and $\mathcal{G}$ are both called the gauge group by different people. □
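The group structure of Eq. (23.10) and the conjugation of Eq. (23.9) can be sampled numerically. This is a minimal sketch (my own example, not from the notes), assuming $G = GL(2,\mathbb{R})$ and a base space sampled at a few points.

```python
# Sketch: gauge transformations as G-valued functions on the base space,
# composed pointwise as in Eq. (23.10); a change of trivialization acts by
# pointwise conjugation g -> g'' = g' g g'^{-1}, as in Eq. (23.9).
import numpy as np

rng = np.random.default_rng(1)
points = range(3)                                        # sample base points
g  = {x: rng.normal(size=(2, 2)) + 3 * np.eye(2) for x in points}
h  = {x: rng.normal(size=(2, 2)) + 3 * np.eye(2) for x in points}
gp = {x: rng.normal(size=(2, 2)) + 3 * np.eye(2) for x in points}   # g' per point

gh    = {x: g[x] @ h[x] for x in points}                 # (gh)(x) = g(x) h(x)
g_inv = {x: np.linalg.inv(g[x]) for x in points}         # (g^-1)(x) = g(x)^-1
g_pp  = {x: gp[x] @ g[x] @ np.linalg.inv(gp[x]) for x in points}    # g''(x)

# the pointwise inverse really inverts the pointwise product:
print(all(np.allclose(g_inv[x] @ gh[x], h[x]) for x in points))     # True
```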



Chapter 24

Connections
There is no canonical way to differentiate sections of a fiber bundle. That is to say, no unique derivative arises from the definition of a bundle. Let us see why. The usual derivative of a function on $\mathbb{R}$ is of the form
\[
  f'(x) = \lim_{\epsilon \to 0} \frac{f(x + \epsilon) - f(x)}{\epsilon} \,. \tag{24.1}
\]
But a section of a bundle assigns an element of the fiber at any point $P \in M$ to the base point $P$. Each fiber is isomorphic to the standard fiber, but the isomorphism is not canonical or unique. So there is no unique way of adding or subtracting points on different fibers. Thus there are many ways of differentiating sections of fiber bundles.

Each way of taking a derivative, i.e. of comparing, is called a connection. Let us consider a bundle $\pi : E \to B$, where $\Gamma(E)$ is the space of all sections. Then a connection $D$ assigns to every vector field $v$ on $B$ a map $D_v : \Gamma(E) \to \Gamma(E)$ satisfying
\[
  \begin{aligned}
  D_v(s + \lambda t) &= D_v s + \lambda D_v t \,,\\
  D_v(f s) &= v(f)\, s + f D_v s \,,\\
  D_{v + f w}(s) &= D_v s + f D_w s \,,
  \end{aligned} \tag{24.2}
\]
where $s, t$ are sections of the bundle, $s, t \in \Gamma(E)$, $v, w$ are vector fields on $B$, $f \in C^\infty(B)$ and $\lambda$ is a real number (or complex, depending on what the manifold is). □
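As a sanity check of these axioms (my own sketch, not from the notes), take the trivial line bundle over $\mathbb{R}$ and the operator $D_v s = v\,(\partial_x s + A s)$, where $v\,\partial_x$ is a vector field and $A(x)$ is an arbitrary function; the three conditions of Eq. (24.2) can be verified symbolically.

```python
# Sketch: verify that D_v s = v * (ds/dx + A*s) satisfies the connection
# axioms of Eq. (24.2) on the trivial line bundle over R.
import sympy as sp

x = sp.symbols('x')
A, s, t, f, v, w = [sp.Function(n)(x) for n in ('A', 's', 't', 'f', 'v', 'w')]
lam = sp.symbols('lambda')

def D(vec, sec):                       # covariant derivative along vec*d/dx
    return vec * (sp.diff(sec, x) + A * sec)

checks = [
    D(v, s + lam * t) - (D(v, s) + lam * D(v, t)),           # linearity
    D(v, f * s) - (v * sp.diff(f, x) * s + f * D(v, s)),     # Leibniz rule
    D(v + f * w, s) - (D(v, s) + f * D(w, s)),               # C^oo-linearity in v
]
print([sp.expand(c) for c in checks])   # [0, 0, 0]
```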
Note that this is a connection, not some unique connection. In other words, we need to choose a connection before we can talk about it. In what follows, whenever we refer to a connection $D$, we mean that we have chosen a connection $D$ and that is what we are discussing.

We call $D_v s$ the covariant derivative of $s$. □
To be specific, let us consider the bundle to be a vector bundle on a manifold $M$, and try to understand the meaning of $D$ by going to a chart. Consider coordinates $x^\mu$ in an open set $U \subset M$, with the coordinate basis vector fields $\partial_\mu$. Write $D_\mu = D_{\partial_\mu}$. Also choose a basis for sections, which is like a basis for the fiber (vector space) at each point of $M$, i.e. like a set of basis vector fields, but the vectors are not along $M$, but along the fiber at each point of $M$.

Call this basis $\{e_i\}$; then $\{e_i(x)\}$ is a basis for the fiber at $P \in M$, with $\{x\}$ being the set of coordinates at $P$. Any element of $V \simeq F_x$ can be written uniquely as a linear combination of the $e_i(x)$. But then $D_\mu e_j$ can be expressed uniquely as a linear combination of the $e_i$,
\[
  D_\mu e_j = A^i{}_{\mu j}\, e_i \,. \tag{24.3}
\]
These $A^i{}_{\mu j}$ are called components of the vector potential or the connection one-form. □
Given a section $s = s^i e_i$ with $s^i \in C^\infty(M)$, we can write
\[
  D_v s = D_{v^\mu \partial_\mu} s = v^\mu D_\mu s \,. \tag{24.4}
\]
Also,
\[
  \begin{aligned}
  D_\mu s = D_\mu (s^i e_i) &= (\partial_\mu s^i)\, e_i + s^i D_\mu e_i \\
  &= (\partial_\mu s^i)\, e_i + s^i A^j{}_{\mu i}\, e_j \\
  &= \left( \partial_\mu s^i + A^i{}_{\mu j}\, s^j \right) e_i \,,
  \end{aligned} \tag{24.5}
\]
so writing $D_\mu s = (D_\mu s)^i e_i$, we can say
\[
  (D_\mu s)^i = \partial_\mu s^i + A^i{}_{\mu j}\, s^j \,. \tag{24.6}
\]
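In matrix notation Eq. (24.6) reads $D_\mu s = \partial_\mu s + A_\mu s$. The following is a minimal symbolic sketch (my own, not from the notes) for a rank-2 bundle over $\mathbb{R}^2$ with arbitrary connection components, checking the Leibniz property of this component formula.

```python
# Sketch: (D_mu s)^i = del_mu s^i + A^i_{mu j} s^j, written as
# D_mu s = del_mu s + A_mu s with 2x2 matrices A_mu of arbitrary functions
# (assumed data), plus a check of the Leibniz rule.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
A = [sp.Matrix(2, 2, lambda i, j: sp.Function(f'A{mu}{i}{j}')(x, y))
     for mu in range(2)]
s = sp.Matrix(2, 1, lambda i, _: sp.Function(f's{i}')(x, y))
f = sp.Function('f')(x, y)

def D(mu, sec):
    # component formula (24.6) in matrix form
    return sp.diff(sec, coords[mu]) + A[mu] * sec

print((D(0, f * s) - (sp.diff(f, x) * s + f * D(0, s))).expand())   # zero matrix
```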

We have considered connections on a vector bundle, which may be associated to a principal fiber bundle such as the frame bundle. So we should be able to talk about gauge transformations.

Remember that a gauge transformation is a linear map $T : E \to E$, $(x, v) \mapsto (x, gv)$ for some $g \in G$ and for all $v \in V \simeq F_x$. Let us apply this idea to the section $s$. We claim that given a connection $D$, there is a connection $D'$ on $E$ such that
\[
  D'_v(g\psi) = g D_v \psi \,, \tag{24.7}
\]

where $v$ is a vector field on $M$ and $\psi \in \Gamma(E)$ (i.e. $\psi = s$).

Let us first check if the definition makes sense. Since $g(x) \in G$ for all $x \in M$, we know that $g^{-1}(x)$ exists for all $x$. So
\[
  D'_v(\psi) = D'_v\!\left(gg^{-1}\psi\right) = g D_v\!\left(g^{-1}\psi\right) \,, \tag{24.8}
\]
and thus $D'$ is defined on all $\psi$ for which $D_v$ is defined, i.e. $D'$ exists because $D$ does. We have of course assumed that $g(x)$ is differentiable as a function of $x$.
Let us now check that $D'$ is a connection according to our definitions. $D'$ is linear since
\[
  \begin{aligned}
  D'_v(\psi_1 + \lambda\psi_2) &= g D_v\!\left(g^{-1}(\psi_1 + \lambda\psi_2)\right) \\
  &= g D_v\!\left(g^{-1}\psi_1\right) + \lambda\, g D_v\!\left(g^{-1}\psi_2\right) \\
  &= D'_v\psi_1 + \lambda D'_v\psi_2 \,.
  \end{aligned} \tag{24.9}
\]
And it satisfies the Leibniz rule because
\[
  \begin{aligned}
  D'_v(f\psi) &= g D_v\!\left(g^{-1} f\psi\right) = g D_v\!\left(f g^{-1}\psi\right) \\
  &= g\, v(f)\, g^{-1}\psi + g f D_v\!\left(g^{-1}\psi\right) \\
  &= v(f)\,\psi + f\, g D_v\!\left(g^{-1}\psi\right) \\
  &= v(f)\,\psi + f D'_v(\psi) \,.
  \end{aligned} \tag{24.10}
\]

Similarly,
\[
  \begin{aligned}
  D'_{v+w}\psi &= g D_{v+w}\!\left(g^{-1}\psi\right) \\
  &= g\left( D_v(g^{-1}\psi) + D_w(g^{-1}\psi) \right) \\
  &= g D_v(g^{-1}\psi) + g D_w(g^{-1}\psi) \\
  &= D'_v\psi + D'_w\psi \,.
  \end{aligned} \tag{24.11}
\]

So $D'$ is a connection, i.e. there is a connection $D'$ satisfying $D'_v(g\psi) = g\,(D_v\psi)$.

Since $\psi$ is a section, i.e. $\psi \in \Gamma(E)$, so is $g^{-1}\psi$, and thus $D_v(g^{-1}\psi) \in \Gamma(E)$ and also $g D_v(g^{-1}\psi) \in \Gamma(E)$. Therefore, $D'_v$ maps sections to sections, $D'_v : \Gamma(E) \to \Gamma(E)$. This completes the definition of the gauge transformation of the connection. We can now write
\[
  D'_\mu \psi = \left( \partial_\mu \psi^i + A'^i{}_{\mu j}\, \psi^j \right) e_i \,. \tag{24.12}
\]



Using the dual space, let us write
A = Aij ei j ,
A0 = A0ij ei j ,

(24.13)


where i is the dual basis to {ei } . The gauge transformation is
then given by

Dv0 = gDv g 1


D0 = gD g 1
h

i
j i

i + A0ij ei = g g 1 + Aij g 1 ei ,
(24.14)
where, as always, the gs are in some appropriate representation of
G. Then we can write the right hand side as
h
i
i i
i + g g 1 j j + gA g 1 j j ei
h
i i
(24.15)
= i + g g 1 + gA g 1 j j ei .
From this we can read off
A0 = gA g 1 + g g 1 .

(24.16)

A connection which transforms like this is also called a $G$-connection. □

Example: Consider $G = U(1)$. Suppose $E$ is a trivial complex line bundle over $M$, i.e. $E = M \times \mathbb{C}$, so that the fiber over any point $p \in M$ is $\mathbb{C}$. A connection $D$ on $E$ may be written as $D_\mu = \partial_\mu + A_\mu$. We can make $E$ into a $U(1)$ bundle by thinking of the fiber $\mathbb{C}$ as the fundamental representation space of $U(1)$. Then sections are complex functions, and a gauge transformation is multiplication by a phase. □
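For this $U(1)$ case, Eq. (24.16) reduces to $A'_\mu = A_\mu - i\,\partial_\mu\lambda$ for $g = e^{i\lambda(x)}$. Here is a minimal symbolic check (my own sketch, not from the notes) in one dimension that this transformed potential indeed satisfies $D'(g\psi) = g\,D\psi$.

```python
# Sketch: U(1) gauge transformation of the connection on a trivial complex
# line bundle over R, with g(x) = exp(i*lambda(x)).
import sympy as sp

x = sp.symbols('x', real=True)
lam = sp.Function('lambda', real=True)(x)
A = sp.Function('A')(x)
psi = sp.Function('psi')(x)

g = sp.exp(sp.I * lam)
A_prime = sp.simplify(g * A / g + g * sp.diff(1 / g, x))   # Eq. (24.16)

D  = lambda f: sp.diff(f, x) + A * f
Dp = lambda f: sp.diff(f, x) + A_prime * f

print(sp.expand(A_prime - (A - sp.I * sp.diff(lam, x))))   # 0
print(sp.expand(Dp(g * psi) - g * D(psi)))                 # 0
```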



Chapter 25

Curvature
We start with a connection $D$, two vector fields $v, w$ on $B$, and a section $s$, all on some vector bundle $E$ associated to some principal $G$-bundle. Then $D_v, D_w$ are both maps $\Gamma(E) \to \Gamma(E)$.

We will define the curvature of this connection $D$ as a rule $F$ which, given two vector fields $v, w$, produces a linear map $F(v, w) : \Gamma(E) \to \Gamma(E)$ by
\[
  F(v, w)s = D_v D_w s - D_w D_v s - D_{[v, w]} s \,. \tag{25.1}
\]

Remember that
\[
  \begin{aligned}
  D_v s = D_{v^\mu \partial_\mu} s = v^\mu D_\mu s
  &= v^\mu \left( \partial_\mu s^i + A^i{}_{\mu j}\, s^j \right) e_i \\
  &= v^\mu \partial_\mu(s^i)\, e_i + v^\mu A^i{}_{\mu j}\, s^j e_i \,.
  \end{aligned} \tag{25.2}
\]

$D_w s$ is again a section. So we can act on it with $D_v$ and write
\[
  \begin{aligned}
  D_v D_w s &= D_v\!\left[ w^\nu \partial_\nu(s^j)\, e_j + w^\nu A^j{}_{\nu k}\, s^k e_j \right] \\
  &= v^\mu \partial_\mu\!\left( w^\nu \partial_\nu s^j \right) e_j
     + v^\mu \partial_\mu\!\left( w^\nu A^j{}_{\nu k}\, s^k \right) e_j \\
  &\quad + \left( w^\nu \partial_\nu s^j + w^\nu A^j{}_{\nu k}\, s^k \right) v^\mu A^i{}_{\mu j}\, e_i \,.
  \end{aligned} \tag{25.3}
\]
Since the connection components $A^j{}_{\nu k}$ are functions, we can write
\[
  v^\mu \partial_\mu\!\left( w^\nu A^j{}_{\nu k}\, s^k \right)
  = v(w^\nu)\, A^j{}_{\nu k}\, s^k
  + w^\nu\, v(A^j{}_{\nu k})\, s^k
  + w^\nu A^j{}_{\nu k}\, v(s^k) \,. \tag{25.4}
\]

Inserting this into the previous equation and writing $D_w D_v s$ similarly, we find
\[
  \begin{aligned}
  D_v D_w s - D_w D_v s &= [v, w](s^i)\, e_i + [v, w]^\mu A^i{}_{\mu j}\, s^j e_i \\
  &\quad + \left( w^\nu v^\mu \partial_\mu A^i{}_{\nu j} - v^\nu w^\mu \partial_\mu A^i{}_{\nu j} \right) s^j e_i \\
  &\quad + v^\mu w^\nu \left( A^i{}_{\mu j} A^j{}_{\nu k} - A^i{}_{\nu j} A^j{}_{\mu k} \right) s^k e_i \,.
  \end{aligned} \tag{25.5}
\]
Also,
\[
  D_{[v, w]} s = [v, w](s^i)\, e_i + [v, w]^\mu A^i{}_{\mu j}\, s^j e_i \,, \tag{25.6}
\]
so that
\[
  F(v, w)s = v^\mu w^\nu \left( \partial_\mu A^i{}_{\nu j} - \partial_\nu A^i{}_{\mu j}
  + A^i{}_{\mu k} A^k{}_{\nu j} - A^i{}_{\nu k} A^k{}_{\mu j} \right) s^j e_i \,. \tag{25.7}
\]
Thus we can define $F_{\mu\nu}$ by
\[
  F(\partial_\mu, \partial_\nu)s = F_{\mu\nu}\, s = (F_{\mu\nu} s)^i\, e_i = (F_{\mu\nu})^i{}_j\, s^j e_i \,, \tag{25.8}
\]
so that
\[
  (F_{\mu\nu})^i{}_j = \partial_\mu A^i{}_{\nu j} - \partial_\nu A^i{}_{\mu j}
  + A^i{}_{\mu k} A^k{}_{\nu j} - A^i{}_{\nu k} A^k{}_{\mu j} \,. \tag{25.9}
\]
Note that since coordinate basis vector fields commute, $[\partial_\mu, \partial_\nu] = 0$,
\[
  F_{\mu\nu}\, s = F(\partial_\mu, \partial_\nu)s = D_\mu D_\nu s - D_\nu D_\mu s = [D_\mu, D_\nu]s \,. \tag{25.10}
\]
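In matrix notation Eq. (25.9) is $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + A_\mu A_\nu - A_\nu A_\mu$. The following minimal symbolic sketch (my own, not from the notes) verifies Eq. (25.10), $F_{\mu\nu}s = [D_\mu, D_\nu]s$, for a rank-2 bundle over $\mathbb{R}^2$ with arbitrary connection components.

```python
# Sketch: check F_{xy} s = [D_x, D_y] s with D_mu = del_mu + A_mu in matrix
# notation, for arbitrary 2x2 connection matrices A_mu (assumed data).
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
A = [sp.Matrix(2, 2, lambda i, j: sp.Function(f'A{mu}{i}{j}')(x, y))
     for mu in range(2)]
s = sp.Matrix(2, 1, lambda i, _: sp.Function(f's{i}')(x, y))

def D(mu, sec):
    return sp.diff(sec, coords[mu]) + A[mu] * sec

F_xy = sp.diff(A[1], x) - sp.diff(A[0], y) + A[0] * A[1] - A[1] * A[0]

commutator = D(0, D(1, s)) - D(1, D(0, s))
print((commutator - F_xy * s).expand())   # zero matrix
```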

It is not very difficult to work out that the curvature acts linearly on the module of sections,
\[
  F(u, v)(s_1 + f s_2) = F(u, v)s_1 + f\, F(u, v)s_2 \,, \tag{25.11}
\]
where $f \in C^\infty(B)$. Also,
\[
  F(u, v + f w)\, s = F(u, v)s + f\, F(u, w)s \,. \tag{25.12}
\]

For coordinate basis vector fields $[\partial_\mu, \partial_\nu] = 0$, so
\[
  F_{\mu\nu}\, s = F(\partial_\mu, \partial_\nu)s = D_\mu D_\nu s - D_\nu D_\mu s = [D_\mu, D_\nu]s \,. \tag{25.13}
\]
Since $F(\partial_\mu, \partial_\nu)s$ is a section, so is
\[
  D_\lambda(F_{\mu\nu}\, s) = D_\lambda [D_\mu, D_\nu]s \,. \tag{25.14}
\]

Similarly, since $D_\lambda s$ is a section, so is
\[
  F_{\mu\nu}\, D_\lambda s = [D_\mu, D_\nu] D_\lambda s \,. \tag{25.15}
\]
Thus
\[
  D_\lambda(F_{\mu\nu}\, s) - F_{\mu\nu}\, D_\lambda s = [D_\lambda, [D_\mu, D_\nu]]s \,. \tag{25.16}
\]
Considering $C^\infty$ sections, and noting that maps are associative under map composition, we find that
\[
  [D_\lambda, [D_\mu, D_\nu]]s + \text{cyclic} = 0 \,. \tag{25.17}
\]



On the other hand,
\[
  F_{\mu\nu}\, s = (F_{\mu\nu} s)^i\, e_i = (F_{\mu\nu})^i{}_j\, s^j e_i \,, \tag{25.18}
\]
where $(F_{\mu\nu})^i{}_j$ and $s^i$ are in $C^\infty(B)$. So we can write
\[
  \begin{aligned}
  D_\lambda(F_{\mu\nu}\, s) &= \left( \partial_\lambda (F_{\mu\nu})^i{}_j\, s^j
      + (F_{\mu\nu})^i{}_j\, \partial_\lambda s^j \right) e_i
      + (F_{\mu\nu})^i{}_j\, s^j\, D_\lambda e_i \\
  &= \partial_\lambda (F_{\mu\nu})^i{}_j\, s^j e_i + (F_{\mu\nu})^i{}_j\, \partial_\lambda s^j\, e_i
      + (F_{\mu\nu})^i{}_j\, s^j A^k{}_{\lambda i}\, e_k \\
  &= \left( \partial_\lambda (F_{\mu\nu})^i{}_j + (F_{\mu\nu})^k{}_j\, A^i{}_{\lambda k} \right) s^j e_i
      + (F_{\mu\nu})^i{}_j\, \partial_\lambda s^j\, e_i \\
  &\quad - (F_{\mu\nu})^i{}_k\, A^k{}_{\lambda j}\, s^j e_i + (F_{\mu\nu})^i{}_j\, A^j{}_{\lambda k}\, s^k e_i \\
  &= (D_\lambda F_{\mu\nu})^i{}_j\, s^j e_i + (F_{\mu\nu})^i{}_j\, (D_\lambda s)^j e_i \,,
  \end{aligned} \tag{25.19}
\]
where we have defined $(D_\lambda F_{\mu\nu})$ by its components $(D_\lambda F_{\mu\nu})^i{}_j$ in this equation. Then this is a Leibniz rule,
\[
  D_\lambda(F_{\mu\nu}\, s) = (D_\lambda F_{\mu\nu})\, s + F_{\mu\nu}\, (D_\lambda s) \,. \tag{25.20}
\]

Then we can write
\[
  \begin{aligned}
  & D_\lambda(F_{\mu\nu}\, s) - F_{\mu\nu}\,(D_\lambda s) + \text{cyclic} = 0 \\
  \Rightarrow\quad & (D_\lambda F_{\mu\nu})\, s + \text{cyclic} = 0 \qquad \forall\, s \\
  \Rightarrow\quad & D_\lambda F_{\mu\nu} + \text{cyclic} = 0 \,.
  \end{aligned} \tag{25.21}
\]
This is known as the Bianchi identity. □
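In matrix notation the components defined in Eq. (25.19) read $D_\lambda F_{\mu\nu} = \partial_\lambda F_{\mu\nu} + [A_\lambda, F_{\mu\nu}]$, and the identity can be checked symbolically. This is a minimal sketch (my own, not from the notes) for a rank-2 bundle over $\mathbb{R}^3$ with arbitrary connection components.

```python
# Sketch: Bianchi identity D_lam F_{mu nu} + cyclic = 0 with
# F_{mu nu} = del_mu A_nu - del_nu A_mu + [A_mu, A_nu] and
# D_lam F = del_lam F + [A_lam, F], for arbitrary 2x2 matrices A_mu.
import sympy as sp

t = sp.symbols('x y z')
A = [sp.Matrix(2, 2, lambda i, j: sp.Function(f'A{mu}{i}{j}')(*t))
     for mu in range(3)]

def F(mu, nu):
    return (sp.diff(A[nu], t[mu]) - sp.diff(A[mu], t[nu])
            + A[mu] * A[nu] - A[nu] * A[mu])

def DF(lam, mu, nu):
    return sp.diff(F(mu, nu), t[lam]) + A[lam] * F(mu, nu) - F(mu, nu) * A[lam]

bianchi = DF(0, 1, 2) + DF(1, 2, 0) + DF(2, 0, 1)
print(bianchi.expand())     # zero 2x2 matrix
```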
Given $D$ and $g$ such that $g(x) \in G$, we have $D'$ given by
\[
  D'_v \psi = g D_v\!\left(g^{-1}\psi\right) \,. \tag{25.22}
\]


Then
\[
  D'_u D'_v \psi = D'_u\!\left( g D_v(g^{-1}\psi) \right) = g D_u D_v\!\left(g^{-1}\psi\right) \,, \tag{25.23}
\]
and thus
\[
  \begin{aligned}
  F'(u, v)\psi &\equiv \left( D'_u D'_v - D'_v D'_u - D'_{[u,v]} \right)\psi \\
  &= g\left( D_u D_v(g^{-1}\psi) - D_v D_u(g^{-1}\psi) \right) - g D_{[u,v]}(g^{-1}\psi) \\
  &= g\, F(u, v)\, g^{-1}\psi \,,
  \end{aligned}
\]
i.e.
\[
  F'_{\mu\nu} = g\, F_{\mu\nu}\, g^{-1} \,. \tag{25.24}
\]

As before, $g$ is in some representation of $G$, and $D$ (and thus $F$) acts on the same representation. This is the meaning of the statement that the curvature is gauge covariant. □
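A direct symbolic check of Eq. (25.24) (my own sketch, not from the notes): starting from an arbitrary connection on a rank-2 bundle over $\mathbb{R}^2$ and the transformed potential of Eq. (24.16), the curvature built from $A'$ equals $g F g^{-1}$. A unipotent $g(x,y)$ is assumed so that its inverse is exact and the check reduces cleanly to zero.

```python
# Sketch: verify F' = g F g^{-1} with A'_mu = g A_mu g^{-1} + g del_mu g^{-1},
# for arbitrary 2x2 connection matrices A_mu and a unipotent g(x, y).
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
A = [sp.Matrix(2, 2, lambda i, j: sp.Function(f'A{mu}{i}{j}')(x, y))
     for mu in range(2)]

h = sp.Function('h')(x, y)
g = sp.Matrix([[1, h], [0, 1]])          # GL(2, R)-valued gauge function
ginv = sp.Matrix([[1, -h], [0, 1]])      # exact inverse of g

def curv(B):                             # F_xy = dA + A^2 in components
    return sp.diff(B[1], x) - sp.diff(B[0], y) + B[0] * B[1] - B[1] * B[0]

Ap = [g * A[mu] * ginv + g * sp.diff(ginv, coords[mu]) for mu in range(2)]

print((curv(Ap) - g * curv(A) * ginv).expand())   # zero 2x2 matrix
```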

Você também pode gostar