
Groups and Symmetries January - April 2013

Lecturer: Dr. Benjamin Doyon


Contents
1 Lecture 1
2 Lecture 2
3 Lecture 3
4 Lecture 4
5 Lecture 5
6 Lecture 6
7 Lecture 7
8 Lecture 8
9 Lecture 9
10 Lecture 10
11 Lecture 11
12 Lecture 12
13 Lecture 13
14 Lecture 14
15 Lecture 15
16 Lecture 16
17 Lecture 17
18 Lecture 18
19 Lecture 19
20 Lecture 20
21 To be adjusted
1 Lecture 1
Definition: A group is a set G with
• Closure: a product G × G → G denoted g1 g2; that is, g1 g2 ∈ G for all g1, g2 ∈ G.
• Associativity: g1 (g2 g3) = (g1 g2) g3 for all g1, g2, g3 ∈ G.
• Identity: there exists an e ∈ G such that ge = eg = g for all g ∈ G.
• Inverse: for all g ∈ G there exists a g^{-1} ∈ G such that g g^{-1} = g^{-1} g = e.
Easy consequences (simple proofs omitted):
• The identity is unique.
• The inverse is unique.
• (g1 g2)^{-1} = g2^{-1} g1^{-1}
• (g^{-1})^{-1} = g.
Definition: The order of a group G is the number of elements of G, denoted |G|.
Definition: If g1 g2 = g2 g1 for all g1, g2 ∈ G, then G is a commutative, or abelian, group.
Definition: A subset H ⊆ G is a subgroup of G if
• h1 h2 ∈ H for all h1, h2 ∈ H
• e ∈ H
• h^{-1} ∈ H for all h ∈ H.
Easy consequence (simple proof omitted): a subgroup is itself a group.
Cyclic groups

Notation: g^n (n ∈ Z) is the n-times product of g for n > 0, the |n|-times product of g^{-1} for n < 0, and e for n = 0.

Let G be a group. The set generated by g ∈ G, denoted ⟨g⟩, is the set of all elements h of G of the form h = g^n for some n ∈ Z. This can be written as

⟨g⟩ = {g^n : n ∈ Z, g^n ∈ G} = {g^n : n ∈ Z}

(this is a set: elements don't have multiplicities).

Note: ⟨g⟩ is in fact a subgroup of G.

Definition: A group G is called cyclic if there exists a g ∈ G such that G = ⟨g⟩. Such an element is called a generating element for G.
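The generated set ⟨g⟩ can be computed directly in a small example. Here is a minimal sketch in Python, taking (Z_n, addition mod n) as the ambient group; the function name is ours, not from the notes:

```python
def generated(g, n):
    """Return the subgroup <g> of (Z_n, + mod n): all "powers" of g, i.e. multiples mod n."""
    elems, h = set(), 0
    while True:
        elems.add(h)
        h = (h + g) % n   # apply g once more (the group law here is addition mod n)
        if h == 0:
            break
    return elems

# Z_6 is cyclic: 1 (and also 5) is a generating element, but 2 only generates a proper subgroup.
print(generated(1, 6))  # {0, 1, 2, 3, 4, 5}
print(generated(2, 6))  # {0, 2, 4}
```

Note how ⟨2⟩ in Z_6 is a subgroup of order 3, consistent with the theorems on subgroups of cyclic groups below.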
2 Lecture 2
Easy observation: a cyclic group is abelian.

Let us consider for a while groups of finite order only, that is, |G| < ∞.
Theorem 2.1
Let G be a cyclic group, G = ⟨g0⟩. Then g0^n, n = 0, 1, 2, . . . , |G| − 1, are all distinct elements, and g^{|G|} = e for all g ∈ G.
Proof. Since G is cyclic, there is a g0 such that G = ⟨g0⟩. First, g0^n, n = 0, 1, 2, . . . , |G| − 1, are all distinct elements. Indeed, if this were not true, say they were the same for n1 and n2, then we would have that g0^{n2−n1} = e. Denote q = n2 − n1 < |G|. Then, since we can always write an integer n as n = kq + r for r = 0, . . . , q − 1 and k integer, we find that g0^n = g0^r, hence that in ⟨g0⟩ there are only q < |G| elements (the possible values of r), a contradiction.
Second, g0^{|G|} = e. Indeed, if g0^{|G|} were a different element from g0^n for n = 0, . . . , |G| − 1, then we would have at least |G| + 1 elements in ⟨g0⟩, a contradiction; and if g0^{|G|} = g0^n for some n = 1, . . . , |G| − 1, then we would have the situation that we had above with n2 − n1 = q < |G|, a contradiction again. Finally, we may always write g = g0^n, hence we have that g^{|G|} = g0^{n|G|} = (g0^{|G|})^n = e^n = e.
Theorem 2.2
Every subgroup of a cyclic group is cyclic.
Proof. Let H ⊆ G = ⟨a⟩ = {e, a, a^2, . . . , a^{|G|−1}} be a subgroup. Let q be the smallest positive integer such that a^q ∈ H. Consider c = a^n ∈ H, and write a^n = a^{kq+r} for k integer and r = 0, . . . , q − 1. Since H is a subgroup, a^{−kq} ∈ H, so that c a^{−kq} ∈ H. Hence, a^r ∈ H. This is a contradiction unless r = 0. Hence, c = a^{kq} = (a^q)^k. That is, we have H = ⟨a^q⟩, so H is cyclic.
Examples of cyclic groups:
• the integers Z = ⟨1⟩.
• {e^{2πik/n} : k = 0, 1, . . . , n − 1} under multiplication, for n a positive integer; the group is generated by e^{2πi/n}.
• the integers modulo n, Z_n = {0, 1, 2, . . . , n − 1} under addition modulo n, Z_n = ⟨1⟩ (note: same notation as for Z, but here the multiplication law is different).
3 Lecture 3
Symbols and relations, and another example of a cyclic group.
Consider an alphabet of two letters: e, a. Let us consider all words that we can form from these: e, a, ea, aa ≡ a^2, a^3 e, aea, etc. This set of words does not quite form a group, but it is a set with a multiplication law: the concatenation of words, e.g. a multiplied with ea gives aea. The multiplication law is automatically associative (easy to check).
Let us now impose some relations. The trivial ones that we will always impose when looking at symbols and relations are those having to do with the identity element: e^2 = e and ea = ae = a. That reduces the words we can form: ea = a, aea = a^2, etc. It also guarantees that we now have a set with an associative multiplication law and an identity element, the word e.
The words we have now are e, a, a^2, a^3, a^4, etc. Let us impose one more relation: a^n = e for some fixed positive integer n (e.g. for n = 4). This unique additional relation further reduces the number of words we can make. Take n = 4 for instance. We now have e, a, a^2, a^3 and nothing else (any other word reduces to one of these four). This now forms a group: we can check that every element has an inverse: a^{-1} = a^3, (a^2)^{-1} = a^2, and (a^3)^{-1} = a. It is a cyclic group, generated by a, so it is the group ⟨a⟩.
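The reduction of words described above can be made mechanical. A sketch (the function name is ours): since the e-relations let us drop every e, a word is determined by how many a's it contains, and a^n = e folds that count mod n.

```python
def reduce_word(word, n):
    """Reduce a word in the letters e, a using e^2 = e, ea = ae = a and a^n = e."""
    k = word.count('a') % n   # dropping the e's leaves only a's; a^n = e folds the count mod n
    return 'e' if k == 0 else 'a' * k

# With n = 4 every word collapses to one of e, a, aa, aaa:
print(reduce_word('aea', 4))     # aa
print(reduce_word('aaaaa', 4))   # a
print(reduce_word('aaaa', 4))    # e
```

This makes concrete why only the four words e, a, a^2, a^3 survive for n = 4.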
Maps
Consider two sets X, Y and a map f : X → Y. We write y = f(x) for the value in Y that is mapped from x ∈ X.
Definition: y is the image of x under f.
Definition: f(X) = {y ∈ Y : ∃ x such that y = f(x)} ⊆ Y is the image of X under f.
Definition: The map f is onto (or surjective) if every y in Y is the image of at least one x in X, i.e. if f(X) = Y.
Definition: The map f is one-to-one (or injective) if for all y ∈ f(X) there exists a unique x ∈ X such that y = f(x). That is, if the following proposition holds: f(x1) = f(x2) ⇒ x1 = x2.
Definition: A map f that is one-to-one and onto is called bijective.
Observation: if f is bijective, then there is a unique correspondence between X and Y, and the inverse map f^{-1} : Y → X exists and is itself bijective (easy proof omitted). The inverse map f^{-1} is defined by f^{-1}(y) = x if f(x) = y.
Given two maps f, g from X to X, we can form a third by composition: h = f ∘ g, given by h(x) = f(g(x)). Consider the set Map(X, X) of all such maps. Then: 1) if f and g are such maps, then f ∘ g also is; 2) given f, g, h ∈ Map(X, X), we have that (f ∘ g) ∘ h = f ∘ (g ∘ h); 3) Map(X, X) contains the identity id(x) = x for all x ∈ X. Hence we only need one more property, the existence of inverses, to have a group.
Theorem 3.1
The set of all bijective maps of a finite set X forms a group under composition, called the permutation group, Perm(X).
Proof. We just need to check inverses. Since we have bijectivity, f^{-1} exists and is bijective itself by the comments above. Hence f^{-1} is an element of the group. It has the property that f^{-1} ∘ f = f ∘ f^{-1} = id, because f^{-1}(f(x)) is the element x' such that f(x') = f(x), hence such that x' = x by bijectivity of f; and because f(f^{-1}(x)) is f(x') where x' is such that f(x') = x.
If X is a finite set, label its elements by the integers 1, 2, . . . , n. An element of Perm(X) is a mapping k ↦ i_k for k = 1, 2, . . . , n, where all integers i_k are distinct (so that the map is bijective).
Definition: The permutation group of n objects is called the symmetric group S_n.
We can denote elements of S_n by ( 1 2 ⋯ n ; i1 i2 ⋯ i_n ), the top row listing the points and the bottom row their images. The product is easy to work out in this notation.
S_3 has 6 elements: e the identity, a the shift to the right by one, b the inversion ( 1 2 3 ; 3 2 1 ), and then the shift to the right by 2 (or equivalently to the left by 1) a^2, the inversion plus shift to the right by 1, ab, and by 2, a^2 b. It is easy to see

a^3 = e,  b^2 = e,  ab = ba^2,  a^2 b = ba.

Hence, we have S_3 = {e, a, a^2, b, ab, a^2 b} subject to these relations. So we can say that S_3 is generated by a and b, and we write S_3 = ⟨a, b⟩ (with in mind that a and b satisfy the relations above).
Note: S_3 is nonabelian.
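These relations can be verified concretely by composing permutations. A sketch (our encoding: 0-indexed tuples, where p[x] is the image of x, and the product fg means "apply g first, then f", matching composition of maps):

```python
def compose(f, g):
    """(fg)(x) = f(g(x)) for permutations stored as tuples: p[x] is the image of x."""
    return tuple(f[g[x]] for x in range(len(f)))

e = (0, 1, 2)      # identity
a = (1, 2, 0)      # shift to the right by one: 1 -> 2, 2 -> 3, 3 -> 1 (0-indexed)
b = (2, 1, 0)      # inversion: 1 <-> 3

a2 = compose(a, a)
assert compose(a, a2) == e                  # a^3 = e
assert compose(b, b) == e                   # b^2 = e
assert compose(a, b) == compose(b, a2)      # ab = ba^2
assert compose(a2, b) == compose(b, a)      # a^2 b = ba
assert compose(a, b) != compose(b, a)       # S_3 is nonabelian
print("all relations hold")
```

All four defining relations hold, and the last check exhibits the nonabelianness directly.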
Isomorphisms
Definition: A map f : G1 → G2 is an isomorphism if it is bijective and if it satisfies f(gh) = f(g)f(h) for all g, h ∈ G1 (in other words: f preserves the multiplication rule).
Definition: Two groups G1 and G2 are isomorphic if there exists an isomorphism f : G1 → G2.
Theorem 3.2
The relation of being isomorphic is an equivalence relation.
Proof. We need to check that the relation is reflexive, symmetric and transitive. Reflexive: G1 is isomorphic to G1 because the identity map id is an isomorphism. Symmetric: if f : G1 → G2 is an isomorphism, then so is f^{-1} : G2 → G1. Indeed f^{-1} is a bijection. Further, let g = f(h), g' = f(h') ∈ G2 with h, h' ∈ G1 (h and h' always exist because f is a bijection). Then f^{-1}(gg') = f^{-1}(f(h)f(h')) = f^{-1}(f(hh')), where in the last step we have used that f preserves the multiplication law. Then this equals hh' = f^{-1}(g)f^{-1}(g') by definition of f^{-1}, so f^{-1} preserves the multiplication law. Finally, transitive: let f' and f be isomorphisms. Certainly f' ∘ f is bijective. Also f' ∘ f(gg') = f'(f(gg')) = f'(f(g)f(g')) = f'(f(g))f'(f(g')) = f' ∘ f(g) · f' ∘ f(g'), so f' ∘ f preserves the multiplication law.
We have:
• f(e1) = e2: f(g) = f(ge1) = f(g)f(e1), hence f(e1) = f(g)^{-1}f(g) = e2.
• f(g^{-1}) = f(g)^{-1}: e2 = f(gg^{-1}) = f(g)f(g^{-1}), hence f(g^{-1}) = f(g)^{-1}.
Theorem 3.3
Two cyclic groups of the same order are isomorphic.
Proof. (proof done in class).
Example
Let G1 = {1, −1} under multiplication of integers, and G2 = Perm({1, 2}) = { ( 1 2 ; 1 2 ), ( 1 2 ; 2 1 ) }.
Consider the function f(1) = ( 1 2 ; 1 2 ) and f(−1) = ( 1 2 ; 2 1 ). The map is clearly bijective. Also, f((−1)·(−1)) = f(1) = ( 1 2 ; 1 2 ) = ( 1 2 ; 2 1 ) ∘ ( 1 2 ; 2 1 ) = f(−1)f(−1). Also, f(1) = ( 1 2 ; 1 2 ), hence f maps identity to identity. This is sufficient to verify that f is an isomorphism.
Example
Let G1 = Z_n and G2 = {e^{2πik/n} : k = 0, 1, . . . , n − 1}. Consider the map f : G1 → G2 given by f(k) = e^{2πik/n}, k = 0, 1, 2, . . . , n − 1. It is well-defined: since we restrict to k between 0 and n − 1, they are all different integers when taken mod n. It is surjective: just by definition, we get all e^{2πik/n}, k = 0, 1, . . . , n − 1. It is injective: if f(k) = f(k') then e^{2πi(k−k')/n} = 1, hence k − k' = 0 mod n, so that k = k' mod n. Further, we have f(k + k' mod n) = e^{2πi(k+k' mod n)/n} = e^{2πi(k+k')/n} = e^{2πik/n} e^{2πik'/n} = f(k)f(k').

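The homomorphism property f(k + k' mod n) = f(k)f(k') can be spot-checked numerically. A sketch (floating-point comparison up to a small tolerance, since the phases are computed with floats):

```python
import cmath

def f(k, n):
    """The map f : Z_n -> {e^{2 pi i k / n}} from the example above."""
    return cmath.exp(2j * cmath.pi * k / n)

n = 5
for k in range(n):
    for kp in range(n):
        # addition mod n on the left maps to multiplication of phases on the right
        assert abs(f((k + kp) % n, n) - f(k, n) * f(kp, n)) < 1e-9
print("f is a homomorphism on Z_5")
```

The wrap-around cases (k + k' ≥ n) are exactly where e^{2πi} = 1 is being used.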
Cosets and Lagrange's theorem

Let H be a subgroup of G; define the equivalence relation a ∼ b iff ab^{-1} ∈ H. It is an equivalence relation because:
• reflexive: a ∼ a
• symmetric: a ∼ b ⇒ b ∼ a
• transitive: a ∼ b, b ∼ c ⇒ a ∼ c.
(simple proofs omitted).
The equivalence class of a is denoted [a] := {b ∈ G : a ∼ b}.
Definition: The set of such classes is the set of right cosets of G with respect to H.
We have [a] = Ha. Clearly, by definition, if a ∼ b then [a] = [b].
4 Lecture 4
Theorem 4.1
Two right cosets of G with respect to H are either disjoint or identical.
Proof. Let a, b ∈ G. If [a] and [b] have no element in common, then they are disjoint. If c ∈ [a] and c ∈ [b], then a ∼ c and b ∼ c, hence a ∼ b, hence [a] = [b].
Note: the latter is true for equivalence classes in general, not just cosets of groups.
Theorem 4.2
All cosets of G w.r.t. H have the same number of elements.
Proof. Consider the map M_a : Ha → H defined by b ↦ ba^{-1}. It is a bijection. Indeed, if M_a(b) = h then b = ha is unique and exists.
Definition: the number of cosets of G w.r.t. H is called the index of H in G, which we will denote i(H, G).
Theorem 4.3
(Lagrange's theorem) Let H be a subgroup of G. The order of H divides the order of G. More precisely, |G| = |H| · i(H, G).
Proof. The right cosets of G w.r.t. H divide the group into i(H, G) disjoint sets, each with exactly |H| elements.
Definition: A proper subgroup of a group G is a subgroup H that is different from the trivial group {e} and from G itself.
Definition: The order of an element a in a group G is the smallest positive integer k such that a^k = e.
Note: if H is a subgroup of G and |H| = |G|, then H = G.
Corollary (i): If |G| is prime, then the group G has no proper subgroup.
Proof. If H is a proper subgroup of G, then |H| divides |G| and |H| is a number not equal to 1 or |G|. Contradiction.
Corollary (ii): Let a ∈ G and let k be the order of a. Then k divides |G|.
Proof. Consider the cyclic subgroup ⟨a⟩ of G generated by a. This has order k. Hence k divides |G|.
Corollary (iii): If |G| is prime then G is a cyclic group.
Proof. Given any a ≠ e in G, consider the subgroup ⟨a⟩. Since |G| is prime, G has no proper subgroup. Since the order of ⟨a⟩ is greater than 1, ⟨a⟩ must be G itself.
Hence: any group of prime order is unique up to isomorphism.
5 Lecture 5
Example
Examples of cosets. Take the group G = S_3 = {e, a, a^2, b, ab, a^2 b} with the relations written previously: a^3 = e, b^2 = e, ab = ba^2, a^2 b = ba. Take the subgroup H = {e, a, a^2}. Let us calculate all the right cosets associated to this subgroup. We have

He = H,  Ha = {a, a^2, a^3 = e} = H,  Ha^2 = {a^2, a^3 = e, a^4 = a} = H

and

Hb = {b, ab, a^2 b},  Hab = {ab, a^2 b, a^3 b = b} = Hb,  Ha^2 b = {a^2 b, a^3 b = b, a^4 b = ab} = Hb.

Hence we find that any two cosets Hg1 and Hg2 for g1, g2 ∈ G are either disjoint or identical. We find that there are exactly 2 different cosets occurring, which are the sets H and Hb. This is in agreement with Lagrange's theorem, because |G| = 6, |H| = 3, so that the number of different cosets should be i(H, G) = |G|/|H| = 2, which it is.
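This coset computation can be replayed by machine, using the permutation realization of S_3 (our encoding: tuples with compose(f, g) meaning "g first, then f"):

```python
from itertools import permutations

def compose(f, g):
    """(fg)(x) = f(g(x)) for permutations stored as tuples."""
    return tuple(f[g[x]] for x in range(len(f)))

S3 = list(permutations(range(3)))           # all 6 bijections of {0, 1, 2}
a = (1, 2, 0)
H = [(0, 1, 2), a, compose(a, a)]           # the subgroup {e, a, a^2}

# Right coset Hg = {hg : h in H}; collect the distinct ones as frozensets.
cosets = {frozenset(compose(h, g) for h in H) for g in S3}
print(len(cosets))                          # 2 = |G| / |H|, as Lagrange's theorem predicts
assert all(len(c) == len(H) for c in cosets)
```

Exactly two distinct cosets come out, each of size |H| = 3, matching the hand computation above.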

Example
Consider S_3 = {e, a, a^2, b, ab, a^2 b} with the relations as before. Consider the subgroup H = {e, b} = ⟨b⟩. We have Ha = {a, a^2 b}, Ha^2 = {a^2, ab}. We have 3 cosets, each with 2 elements, 6 elements in total.
Groups of low order

• |G| = 1: G = {e}.
• |G| = 2: G = {e, a} with a ≠ e. It must be that a^2 = a or a^2 = e. In the former case, a^2 a^{-1} = a so that e = a, a contradiction. Hence only the latter case is possible: a^2 = e. The group is called Z_2. Here, we could simply have used the results above: 2 is prime, hence G must be cyclic, G = ⟨a⟩ with a^2 = e.
• |G| = 3: since 3 is prime, G must be cyclic, so we can always write G = {e, a, a^2} with a^3 = e.
• |G| = 4: G = {e, a, b, c} (all distinct). ⟨a⟩ is a subgroup, so its order divides 4. Hence there are 2 or 4 elements in ⟨a⟩. Hence a^2 = e or a^4 = e. In the latter case, ⟨a⟩ = G, so G is cyclic. Let us assume that G is not cyclic, in order to see what other group we can have. So a^2 = e. We can then do the same for b and c, so we have a^2 = b^2 = c^2 = e. Then consider ab, and check whether associativity holds:
1. ab = a: then (a^2)b = b while a(ab) = a^2 = e, forcing b = e, a contradiction.
2. ab = b: similarly, a = e, a contradiction.
3. ab = e: then (a^2)b = b while a(ab) = a, forcing b = a, a contradiction.
Hence if the group exists, it must have ab = c. Similarly, we must have ba = c, bc = cb = a and ca = ac = b. To show existence of the group, one must check associativity in all possible triple products abc, a^2 b, etc. (left as exercise).
The above arguments show:
Theorem 5.1
Every group of order 4 is either cyclic, or has the rules (this is a Cayley table)

  | e a b c
e | e a b c
a | a e c b
b | b c e a
c | c b a e

Definition: The group with the rules above is denoted V_4, and called the Klein four-group (Vierergruppe).
Notes:
• Both the cyclic group and the group V_4 are abelian. Hence there are no non-abelian groups of order less than or equal to 4.
• The group V_4 is the smallest non-cyclic group.
• V_4 is such that all elements different from e have order 2.
• V_4 has 5 subgroups: the trivial one and V_4 itself, as well as the 3 proper subgroups ⟨a⟩, ⟨b⟩ and ⟨c⟩.
• V_4 can be seen as a subgroup of S_4: { e, ( 1 2 3 4 ; 2 1 4 3 ), ( 1 2 3 4 ; 3 4 1 2 ), ( 1 2 3 4 ; 4 3 2 1 ) }.
• V_4 can also be described using symbols and relations. It is generated by the symbols a, b, with the relations a^2 = b^2 = e and ab = ba. We have V_4 = ⟨a, b⟩ = {e, a, b, ab}.
Example
Consider the 2 × 2 matrices

e = ( 1 0 ; 0 1 ),  a = ( -1 0 ; 0 1 ),  b = ( 1 0 ; 0 -1 ),  c = ( -1 0 ; 0 -1 )

and the product rules on these matrices given by the usual matrix multiplication. These form the group V_4.
Example
Take V_4 = {e, a, b, ab} with the relations shown above. A (cyclic) subgroup is of course H = {e, a} = ⟨a⟩. One coset is H = Ha, the other is Hb = {b, ab}. They have no element in common, and have the same number of elements. Lagrange's theorem holds.
Direct products

Let G1 and G2 be two groups. Then G = G1 × G2 = {(g1, g2) : g1 ∈ G1, g2 ∈ G2} is a group, with the multiplication law (g1, g2)(g1', g2') = (g1 g1', g2 g2'). The axioms of a group are satisfied.
Note: if G1 and G2 are abelian, then so is G1 × G2. Also, G1 × {e2} = {(g1, e2) : g1 ∈ G1} (where e2 is the identity in G2) is a subgroup of G1 × G2, which is isomorphic to G1. Likewise, {e1} × G2 ≅ G2.
Theorem 5.2
|G1 × G2| = |G1| · |G2|
Proof. Trivial.
Consider G1 = {e1, a1} with a1^2 = e1, and G2 = {e2, a2} with a2^2 = e2. That is, G1 ≅ Z_2 and G2 ≅ Z_2. Then G1 × G2 has order 4. Hence it must be isomorphic to Z_4 or to V_4. Which one? Since all elements of G1 × G2 different from (e1, e2) are of order 2, this cannot be Z_4, which has an element of order 4. Hence it must be V_4. That is, we have found

Z_2 × Z_2 ≅ V_4.
Theorem 5.3
(without proof) A group of order 6 is isomorphic either to Z_6 (the cyclic group of order 6) or to S_3.

Consider the group Z_2 × Z_3. This has order 6. Is it isomorphic to Z_6 or to S_3? We note that Z_2 × Z_3 is abelian. Also, Z_6 is abelian, but S_3 is not. Hence we must have

Z_2 × Z_3 ≅ Z_6.
6 Lecture 6
In which situations do we have Z_p × Z_q ≅ Z_{pq}? The answer is
Theorem 6.1
Z_p × Z_q ≅ Z_{pq} if and only if p and q are relatively prime (i.e. do not have prime factors in common).
Proof. Let Z_p = ⟨a⟩ and Z_q = ⟨b⟩. That is, a^p = e and b^q = e (and there are no smaller positive integers such that these are true). Then, consider (a, b) ∈ Z_p × Z_q. We show that if p and q are relatively prime, then (a, b) has order pq. Indeed, let n > 0 be such that (a, b)^n = (e, e) (the identity in Z_p × Z_q). Then we must have both a^n = e and b^n = e. Hence, n = rp = tq where r and t are positive integers. The smallest values for t and r are: t = the product of the prime factors of p not included in q, and r = the product of the prime factors of q not included in p. If p and q are relatively prime, then we find t = p and r = q. Hence, n = pq. That is, the subgroup ⟨(a, b)⟩ of Z_p × Z_q has order pq = |Z_p × Z_q|. Hence we have found Z_p × Z_q = ⟨(a, b)⟩: it is cyclic, hence isomorphic to Z_{pq}. For the opposite proposition, we show that if p and q are not relatively prime, then Z_p × Z_q is not isomorphic to Z_{pq}. In the argument above, in place of (a, b) we take an arbitrary element of the form c = (a^v, b^w) for v, w ≥ 0, and look for some n > 0 such that c^n = e. Clearly, (a^v)^p = e and (b^w)^q = e. Hence the argument above, with a replaced by a^v and b replaced by b^w, shows that we can take t < p and r < q, so that n < pq. This means that any element of Z_p × Z_q has order less than pq. But in Z_{pq} there is at least one element that has order pq. Hence, we do not have an isomorphism.
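The order count in this proof is just a least common multiple: the element (1, 1) of Z_p × Z_q (written additively) has order lcm(p, q), which equals pq exactly when gcd(p, q) = 1. A sketch (function name is ours):

```python
from math import gcd

def order_in_product(p, q):
    """Order of the element (1, 1) in Z_p x Z_q, written additively: lcm(p, q)."""
    return p * q // gcd(p, q)

# (1, 1) generates the whole product exactly when p and q are relatively prime:
assert order_in_product(2, 3) == 6    # Z_2 x Z_3 is cyclic, isomorphic to Z_6
assert order_in_product(2, 2) == 2    # Z_2 x Z_2: no element of order 4, so not Z_4
print(order_in_product(4, 6))         # 12 < 24, so Z_4 x Z_6 is not Z_24
```

The failing case gcd(p, q) > 1 is visible immediately: lcm(p, q) < pq, so no element can have order pq.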
Matrices, symmetry transformations and dihedral groups

Let us consider the natural transformations of the Euclidean plane: translations, rotations and reflections. Consider the subset of the plane formed by a circle centred at the origin and of radius 1:

{(x, y) : x^2 + y^2 = 1}.  (1)

What are its symmetries? The only transformations that preserve this circle are the rotations w.r.t. the origin, and the reflections w.r.t. any axis passing by the origin. We can describe these by matrices, acting on the coordinates (x ; y) simply by matrix multiplication. Rotations are

A_θ = ( cos θ  -sin θ ; sin θ  cos θ ).

For the reflections, one is that through the x axis (i.e. inverting the y coordinate):

B = ( 1 0 ; 0 -1 ).

Reflections through any other axis can be obtained from these two transformations: if we want the reflection through the axis that is at angle θ, we just need to construct

B_θ := A_θ B A_{-θ}

(i.e. first rotate the angle-θ axis to the x axis by a rotation A_{-θ}, then do a reflection, then rotate back by an angle θ). Hence the set of all symmetry transformations is {A_θ : θ ∈ [0, 2π)} ∪ {A_θ B A_{-θ} : θ ∈ [0, π)} (note that we only need to consider axes of reflection at angles between 0 and π, because a rotation of an axis by π gives back the same axis). In general, multiple rotations lead to a single rotation with the sum of the angles:

A_θ A_φ = A_{θ+φ}

(check by matrix multiplication).
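Both matrix identities above can be checked numerically. A sketch with plain 2 × 2 matrices as nested lists (helper names are ours); it verifies A_θ A_φ = A_{θ+φ} and that B_θ := A_θ B A_{-θ} squares to the identity, as a reflection should:

```python
from math import cos, sin, pi

def matmul(M, N):
    """Product of two 2x2 matrices."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def A(t):
    """Rotation by angle t."""
    return [[cos(t), -sin(t)], [sin(t), cos(t)]]

B = [[1, 0], [0, -1]]   # reflection through the x axis

def close(M, N):
    """Entrywise comparison up to floating-point tolerance."""
    return all(abs(M[i][j] - N[i][j]) < 1e-9 for i in range(2) for j in range(2))

assert close(matmul(A(0.3), A(0.5)), A(0.8))          # A_theta A_phi = A_{theta+phi}
Bt = matmul(matmul(A(pi / 3), B), A(-pi / 3))         # reflection through the pi/3 axis
assert close(matmul(Bt, Bt), [[1, 0], [0, 1]])        # B_theta^2 = identity
print("rotation and reflection identities hold")
```

One can also check that Bt, like B, is orthogonal with determinant −1, distinguishing reflections from rotations.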
In order to check mathematically that some matrix multiplication gives rise to a symmetry of the subset (1), we can proceed as follows: we start with the expression for the transformed subset, and make a change of variable. Notice that A_θ, B and A_θ B A_{-θ} all are orthogonal matrices, i.e. matrices M satisfying M^T M = M M^T = I. Let us then consider such an orthogonal matrix for the transformation (writing (x ; y) for the column vector):

{ M (x ; y) ∈ R^2 : x^2 + y^2 = 1 }
= { M (x ; y) ∈ R^2 : (x ; y)^T (x ; y) = 1 }
= { (x' ; y') ∈ R^2 : (M^T (x' ; y'))^T (M^T (x' ; y')) = 1 }
= { (x' ; y') ∈ R^2 : (x' ; y')^T M M^T (x' ; y') = 1 }
= { (x' ; y') ∈ R^2 : (x' ; y')^T (x' ; y') = 1 }
= { (x' ; y') ∈ R^2 : (x')^2 + (y')^2 = 1 }.

Hence we get back to the set (1). We will discuss later on what group this forms. But for now, let us consider simpler groups formed by subsets of the transformations discussed here. Let us consider, instead of the circle, less symmetric objects: polygons.
Definition: Let n ≥ 2 be an integer. The set of rotations and reflections that preserve the regular polygon P_n formed by successively joining the points

( cos(2πk/n) ; sin(2πk/n) ),  k = 0, 1, 2, . . . , n − 1

is called the dihedral group D_n. This is the symmetry group of this polygon.
Note that the case n = 2 does not quite form a polygon: it is rather just a segment, and it is not actually closed. But it does fit well in the considerations below, so it is appropriate to include it here.
7 Lecture 7
• n = 2. From geometric considerations, the symmetries of the segment are the rotations by angle 0 (the identity) and π, as well as the reflections w.r.t. axes at angles 0 and π/2, i.e. the x and y axes. These are the matrices, in the notation A_θ (rotation by an angle θ) and B_θ (reflection w.r.t. the axis at angle θ from the x axis, with B_0 = B) introduced above:

1 = ( 1 0 ; 0 1 ),  B = ( 1 0 ; 0 -1 ),  B_{π/2} = ( -1 0 ; 0 1 ),  A_π = ( -1 0 ; 0 -1 ).

To show that, e.g., A_π is a symmetry, we proceed as follows: the segment is described by {(x, y) ∈ R^2 : y = 0, −1 < x < 1}, and the transformed segment is

{ A_π (x ; y) ∈ R^2 : y = 0, −1 < x < 1 }
= { (−x ; −y) ∈ R^2 : y = 0, −1 < x < 1 }
= { (x' ; y') ∈ R^2 : −y' = 0, −1 < −x' < 1 }
= { (x' ; y') ∈ R^2 : y' = 0, −1 < x' < 1 },

where we have made the change of variable x' = −x and y' = −y, and have used −x' < 1 ⇔ x' > −1 (and similarly for the other inequality). From previously, we see that these matrices form the group V_4, so we have D_2 ≅ V_4. We know that V_4 ≅ Z_2 × Z_2. Can we see the Z_2 factors in the symmetry transformations? Yes. Consider A_π and B. They satisfy A_π^2 = A_{2π} = 1 and B^2 = 1. Also, A_π B = B A_π. These are the relations describing the group V_4: the set {1, A_π, B, A_π B} has the multiplication law of the group V_4. Also, there are at least two Z_2 subgroups: {1, A_π} (identity and rotation by π) and {1, B} (the identity and the reflection w.r.t. the x axis). The direct product Z_2 × Z_2 is a way of putting these two subgroups together into one whole symmetry group of a single mathematical object.
• n = 3. Again from geometric considerations, the symmetries are the identity, the rotations A_{2π/3}, A_{4π/3}, and the reflections B, B_{π/3} and B_{2π/3}. Note that the angles w.r.t. which we have reflection symmetries are the halves of the angles of the rotation symmetries. This is a general fact for the dihedral groups. Hence we have here 6 elements. We know the group can only be S_3 or Z_6 ≅ Z_2 × Z_3. Which group is it? A simple check is that the matrices don't all commute; for instance, a rotation followed by a reflection gives something different than the same reflection followed by the same rotation. So we must have S_3. More precisely, we have A_{2π/3}^3 = 1 and B^2 = 1, as well as A_{2π/3} B = B A_{2π/3}^2 and A_{2π/3}^2 B = B A_{2π/3}. These are indeed the relations describing the group S_3. Hence, we have D_3 ≅ S_3. We note that P_3 is an equilateral triangle, that the rotations just cyclically permute the three vertices, and that the reflection B exchanges two vertices. These are indeed what the elements of S_3 do on the 3 members 1, 2, 3 of the set on which they are bijective maps.

In general, we can set e = 1, a = A_{2π/n} and b = B, and we have a^n = e, b^2 = e and a^k b = b a^{n−k} for k = 1, 2, . . . , n − 1 (the cases k = 1 and k = n − 1 give the same relation, etc.). The set of group elements generated by these symbols under these relations is {e, a, a^2, . . . , a^{n−1}, b, ab, a^2 b, . . . , a^{n−1} b}. This is the group D_n, and it has order 2n.
Conjugations and normal subgroups

Definition: Given a group G, we say that a is conjugate to b if there exists a g ∈ G such that a = g b g^{-1}.
Theorem 7.1
The conjugacy relation is an equivalence relation.
Proof. (i) a is conjugate to itself: a = e a e^{-1}. (ii) If a is conjugate to b, then a = g b g^{-1} ⇒ b = g^{-1} a g = g^{-1} a (g^{-1})^{-1}, hence b is conjugate to a. (iii) If a is conjugate to b and b is conjugate to c, then a = g b g^{-1} and b = g' c (g')^{-1} (for some g, g' ∈ G), hence a = g g' c (g')^{-1} g^{-1} = (g g') c (g g')^{-1}.
Hence, the group G is divided into disjoint conjugacy classes, [a] = {g a g^{-1} : g ∈ G}. These classes cover the whole group, ∪_{a∈G} [a] = G, hence form a partition of G. Further remarks:
• [e] = {e}. Hence no other class is a subgroup (i.e. e ∉ [a] for any a ≠ e).
• All elements of a conjugacy class have the same order. Because (g a g^{-1})^n = g a g^{-1} · g a g^{-1} ⋯ g a g^{-1} (n factors) = g a^n g^{-1} (using g^{-1} g = e). Hence if a^n = e (i.e. a has order n), then (g a g^{-1})^n = g e g^{-1} = e. Hence also g a g^{-1} has order n.
• If G is abelian, then [a] = {a} for all a ∈ G.
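The partition into conjugacy classes can be computed exhaustively for a small group. A sketch for S_3, reusing the tuple encoding of permutations (helper names are ours):

```python
from itertools import permutations

def compose(f, g):
    """(fg)(x) = f(g(x)) for permutations stored as tuples."""
    return tuple(f[g[x]] for x in range(len(f)))

def inverse(p):
    """Inverse permutation: q[y] = x whenever p[x] = y."""
    q = [0] * len(p)
    for x, y in enumerate(p):
        q[y] = x
    return tuple(q)

G = list(permutations(range(3)))   # S_3
classes = {frozenset(compose(compose(g, a), inverse(g)) for g in G) for a in G}

# Three classes: {e}, the two 3-cycles, and the three transpositions.
print(sorted(len(c) for c in classes))   # [1, 2, 3]
assert sum(len(c) for c in classes) == len(G)   # the classes partition G
```

Note that the class sizes 1, 2, 3 all divide |G| = 6; elements in the same class indeed share the same order (1, 3 and 2 respectively).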
Definition: A subgroup H is called normal or invariant if g H g^{-1} ⊆ H for all g ∈ G. (Note: as usual, g H g^{-1} = {g h g^{-1} : h ∈ H}.) That is, H is normal if for every h ∈ H and every g ∈ G, we have g h g^{-1} ∈ H.
Notes:
• A simple consequence of normality is that g H g^{-1} = H. Indeed, we have H ⊆ g H g^{-1}: for every h ∈ H, there is an h' = g^{-1} h g ∈ H (by normality) such that g h' g^{-1} = h.
• Let H be a normal subgroup. If h ∈ H then [h] ⊆ H. That is, H is composed of entire conjugacy classes. In fact: a subgroup is normal if and only if it is a union of conjugacy classes.
• {e} is a normal subgroup.
• Every subgroup of an abelian group is normal.
Definition: A group is simple if it has no proper normal subgroup. A group is semi-simple if it has no proper abelian normal subgroup.
Definition: The center Z(G) of a group G is the set of all elements which commute with all elements of G:

Z(G) = {a ∈ G : ag = ga for all g ∈ G}.
Theorem 7.2
The center Z(G) of a group is a normal subgroup.
Proof. Subgroup. Closure: let a, b ∈ Z(G) and g ∈ G; then abg = agb = gab, hence ab ∈ Z(G). Identity: e ∈ Z(G). Inverse: let a ∈ Z(G) and g ∈ G; then ag^{-1} = g^{-1}a, hence ga^{-1} = a^{-1}g, hence a^{-1} ∈ Z(G). Normal: let a ∈ Z(G) and g ∈ G; then g a g^{-1} = g g^{-1} a = a ∈ Z(G).
Note: If G is simple, then Z(G) = {e} or Z(G) = G.
Quotients

Let G be a group and H a subgroup of G.
Definition: A left-coset of G with respect to H is a class [a]_L = {b ∈ G : a^{-1} b ∈ H} for a ∈ G. That is, [a]_L = aH. (Compare with right-cosets: same principle, since a^{-1} b ∈ H is an equivalence relation between a and b.)
Hence we have two types of cosets (right and left), with two types of equivalence relations. We will denote the equivalence relations respectively by ∼_R and ∼_L, and the classes by [a]_R = Ha and [a]_L = aH.
Definition: The quotient G/H = {[a]_L : a ∈ G} is the set of all left-cosets; that is, the set of equivalence classes under ∼_L. The quotient H\G = {[a]_R : a ∈ G} is the set of all right-cosets; that is, the set of equivalence classes under ∼_R.
We only discuss G/H, but a similar discussion holds for H\G.
Given two subsets A and B of G, we define the multiplication law AB = {ab : a ∈ A, b ∈ B}.
Theorem 7.3
If H is normal, then the quotient G/H, with the multiplication law on subsets, is a group.
Proof. We need to check 4 things.
• Closure: We have [g1]_L [g2]_L = g1 H g2 H = g1 g2 (g2^{-1} H g2) H = g1 g2 H H, where we used H = g2^{-1} H g2 (because H is normal). Since e ∈ H (because H is a subgroup), we have that H ⊆ HH. Clearly HH ⊆ H by closure. Hence HH = H. Hence we find [g1]_L [g2]_L = g1 g2 H, so that

[g1]_L [g2]_L = [g1 g2]_L.

• Associativity: This follows immediately from associativity of G and the relation found in the previous point.
• Identity: Similarly it follows that [e]_L is an identity.
• Inverse: Similarly it follows that [g^{-1}]_L is the inverse of [g]_L.
We call G/H the (left-)quotient group of G with respect to H.
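The closure step — that the subset product of two cosets of a normal subgroup is again a coset — can be tested by brute force in S_3 (helper names are ours; compose(f, g) means "g first, then f"):

```python
from itertools import permutations

def compose(f, g):
    """(fg)(x) = f(g(x)) for permutations stored as tuples."""
    return tuple(f[g[x]] for x in range(len(f)))

G = list(permutations(range(3)))                 # S_3
a = (1, 2, 0)
H = frozenset({(0, 1, 2), a, compose(a, a)})     # normal subgroup {e, a, a^2}

def coset(g):
    """Left coset gH."""
    return frozenset(compose(g, h) for h in H)

def mult(C, D):
    """Multiplication law on subsets: CD = {cd : c in C, d in D}."""
    return frozenset(compose(c, d) for c in C for d in D)

cosets = {coset(g) for g in G}
assert len(cosets) == 2
# The 9 pairwise products inside CD collapse back to a single coset, since H is normal:
assert all(mult(C, D) in cosets for C in cosets for D in cosets)
print("G/H is closed under the subset product")
```

Running the same check with the non-normal subgroup {e, b} would fail: the subset product of two of its cosets is not a coset.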
Homomorphisms

Definition: Let G1 and G2 be groups. A map φ : G1 → G2 is a homomorphism if φ(gg') = φ(g)φ(g') for all g, g' ∈ G1.
Note:
• φ(e1) = e2
• φ(g^{-1}) = φ(g)^{-1}
Example
A homomorphism S_2 → S_3:

( 1 2 ; 1 2 ) ↦ ( 1 2 3 ; 1 2 3 ),  ( 1 2 ; 2 1 ) ↦ ( 1 2 3 ; 2 1 3 ).

Note: this is not an isomorphism, though; S_2 and S_3 are not isomorphic (in particular, they don't even have the same number of elements).

A homomorphism which is bijective is an isomorphism.
8 Lecture 8
Definition: Let φ be a homomorphism of G1 onto G2. Then the kernel of φ is

ker φ = {g ∈ G1 : φ(g) = e2}.

Note: e1 ∈ ker φ.
Theorem 8.1
The kernel is a normal subgroup.
Proof. Let φ : G → G' and H = ker φ ⊆ G. We have: 1) if h1, h2 ∈ H then φ(h1 h2) = φ(h1)φ(h2) = e' e' = e', hence h1 h2 ∈ H; 2) φ(e) = e', hence e ∈ H; 3) if h ∈ H then φ(h^{-1}) = φ(h)^{-1} = e', hence h^{-1} ∈ H. Hence H is a subgroup. Also: if h ∈ H and g ∈ G then φ(g h g^{-1}) = φ(g)φ(h)φ(g^{-1}) = φ(g g^{-1}) = φ(e) = e', hence g h g^{-1} ∈ H. Hence H is normal.
Theorem 8.2
The image of a homomorphism φ : G1 → G2 is a subgroup of G2.
Proof. Let g2, g2' ∈ Im φ ⊆ G2. Closure: g2 = φ(g1), g2' = φ(g1') with g1, g1' ∈ G1. So g2 g2' = φ(g1)φ(g1') = φ(g1 g1') ∈ Im φ. Identity: we know that φ(e1) = e2, hence e2 ∈ Im φ. Inverse: we know that φ(g1)^{-1} = φ(g1^{-1}). We have g2 = φ(g1), so g2^{-1} = φ(g1)^{-1} = φ(g1^{-1}) ∈ Im φ.
Note: Im φ in general is not normal.
The homomorphism theorem

Theorem 8.3
(Homomorphism theorem) Let φ : G → G' be a homomorphism. Then G/ker φ ≅ Im φ.
Proof. Suppose without loss of generality that φ is onto G', i.e. Im φ = G'. Let H = ker φ (this is a normal subgroup of G). Let us first find a homomorphism ψ : G/H → G'. Define ψ(gH) = φ(g). This is well defined because if g1 H = g2 H (recall, g1 H and g2 H are either equal or disjoint) then g1 ∼_L g2, hence g1 = g2 h for some h ∈ H, hence φ(g1) = φ(g2 h) = φ(g2)φ(h) = φ(g2), because H is the kernel of φ. Also, ψ is a homomorphism: ψ(gH g'H) = ψ(g g' H) = φ(g g') = φ(g)φ(g') = ψ(gH) ψ(g'H).
to be continued...
9 Lecture 9
...continuation
Second, we show that the map ψ (defined on G/H = G/ker φ by ψ(gH) = φ(g)) is bijective. It is clearly surjective (onto G'), because φ is onto G': the set of all left-cosets is the set of all gH for all g ∈ G, and ψ({gH : g ∈ G}) = φ(G) = G'. Hence we only need to show injectivity. Suppose ψ(gH) = ψ(g'H). Then φ(g) = φ(g'). Hence φ(g)^{-1} φ(g') = e, hence φ(g^{-1} g') = e, hence g^{-1} g' ∈ H, hence g' = gh for some h ∈ H, hence g ∼_L g', so that gH = g'H.
Example
Take S_3 = {e, a, a^2, b, ab, a^2 b} (with the usual relations). Choose H = {e, a, a^2}. Check that it is a normal subgroup. Clearly it is a subgroup, as it is H = ⟨a⟩. We need to check that gHg^{-1} ⊂ H; it is sufficient to check for g = b, ab, a^2 b (i.e. g ∉ H). We have bab^{-1} = bab = a^2 bb = a^2 ∈ H, and (ab) a (ab)^{-1} = abab^{-1}a^{-1} = ababa^2 = a a^2 bba^2 = a^3 a^2 = a^2 ∈ H, and (a^2 b) a (a^2 b)^{-1} = a^2 bab^{-1}(a^2)^{-1} = a^2 baba = a^2 a^2 bba = a^4 a = a^5 = a^2 ∈ H. Using these results, the rest follows: ba^2 b^{-1} = (bab^{-1})^2 = a^4 = a ∈ H, etc. Hence H is normal. Interestingly, we also obtain from these calculations the conjugacy classes of a and a^2 (because the other things to calculate are trivial since H is abelian: aaa^{-1} = a, etc.): we have [a]_C = {a, a^2} and [a^2]_C = {a, a^2}. Along with [e]_C = {e}, we see indeed that H = [e]_C ∪ [a]_C so it contains whole conjugacy classes.
We have two left-cosets: H = [e]_L and bH = [b]_L = {b, ab, a^2 b}. Hence, S_3/H has two elements, [e]_L and [b]_L. Explicitly multiplying these subsets we find:
[e]_L [e]_L = [e]_L,  [e]_L [b]_L = [b]_L,  [b]_L [b]_L = [e]_L
Indeed, this forms a group, and is in agreement with the relations found (using b^2 = e). We have [e]_L = [a]_L = [a^2]_L and [b]_L = [ab]_L = [a^2 b]_L. The multiplication law is also in agreement with other choices of representatives, because e.g. [a]_L [ab]_L = [a^2 b]_L and [ab]_L [ab]_L = [abab]_L = [a^3 b^2]_L = [e]_L, etc. In the end, we find that the multiplication law is that of Z_2, i.e.
S_3/H ≅ Z_2 □
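The coset computations above are easy to verify on a computer. The following short Python sketch (an illustration added here, not part of the notes) realises S_3 as the permutations of {0, 1, 2} and checks that H = ⟨a⟩ is normal and that the coset multiplication table is that of Z_2:

```python
# Sketch (not from the notes): verifying with explicit permutations that
# H = <a> is normal in S_3 and that S_3/H has the Z_2 multiplication table.
from itertools import permutations

def mul(p, q):  # composition (p after q) of permutations given as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):     # inverse permutation
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

S3 = list(permutations(range(3)))
e = (0, 1, 2)
a = (1, 2, 0)          # a^3 = e
b = (1, 0, 2)          # a transposition, b^2 = e
H = {e, a, mul(a, a)}  # the subgroup <a>

# normality: g H g^{-1} = H for every g in S_3
assert all({mul(mul(g, h), inv(g)) for h in H} == H for g in S3)

# the two left-cosets H and bH partition S_3
bH = {mul(b, h) for h in H}
assert H | bH == set(S3) and H & bH == set()

# coset multiplication: H*H = H, H*bH = bH, bH*bH = H  (the Z_2 law)
def coset_mul(X, Y):
    return {mul(x, y) for x in X for y in Y}

assert coset_mul(H, H) == H
assert coset_mul(H, bH) == bH
assert coset_mul(bH, bH) == H
```

The same pattern of checks works for any small finite group given by explicit permutations.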
Example
Consider the above example again. We have the following homomorphism: φ : S_3 = {e, a, a^2, b, ab, a^2 b} → Z_2 = {e, b}, given by φ(a^n) = e and φ(a^n b) = b. This is a homomorphism: φ(a^n a^m) = e = φ(a^n)φ(a^m), φ(a^n b a^m b) = φ(a^{n−m} b^2) = φ(a^{n−m}) = e = b^2 = φ(a^n b)φ(a^m b), φ(a^n b a^m) = φ(a^{n−m} b) = b = be = φ(a^n b)φ(a^m), and likewise φ(a^n a^m b) = b = φ(a^n)φ(a^m b). We see the following: given the group H, we have found a homomorphism φ : S_3 → S_3/H (onto) such that ker φ = H. (And put differently, we also see that, as in the previous example, S_3/ker φ ≅ im φ.) □
Example
Let R^* be the nonzero reals. This is a group under multiplication of real numbers (e = 1, x^{-1} = 1/x). Let Z_2 be the group {1, −1} under multiplication. Let R^+ be the group of positive real numbers (it is a normal subgroup of R^*, but this doesn't matter). Define φ : R^* → R^+ (onto) by φ(x) = |x|. This is a homomorphism: φ(xx′) = |xx′| = |x| |x′| = φ(x)φ(x′). Its kernel is ker φ = {x ∈ R^* : |x| = 1} = {1, −1} = Z_2. Hence Z_2 is a normal subgroup of R^*. Further, let us calculate R^*/Z_2. This is the set {xZ_2 : x ∈ R^*} = {{x, −x} : x ∈ R^*} = {{x, −x} : x ∈ R^+}. That is, it is the set of pairs of a number and its negative, and each pair can be completely characterised by a positive real number. These pairs form a group (the quotient group): {x, −x} {x′, −x′} = {xx′, −xx′}.
We note that there is an isomorphism between R^*/Z_2 and R^+. Indeed, define ψ : R^*/Z_2 → R^+ as the bijective map ψ({x, −x}) = |x| (for x ∈ R^*). It is clearly onto, and it is injective because, given a value of |x| > 0, there is a unique pair {x, −x}. Also, it is a homomorphism: ψ({x, −x} {x′, −x′}) = ψ({xx′, −xx′}) = |xx′| = |x| |x′| = ψ({x, −x}) ψ({x′, −x′}). Hence, we have found that R^*/Z_2 ≅ R^+, that is,
R^*/ker φ ≅ im φ
(where im φ = φ(G) is the image of φ). □
Example
(not done in class)
Consider the groups (matrices written here with rows separated by a semicolon)
K_2 = { ( α^{-1} 0 ; β α ) : β ∈ C, α ∈ C^* },  L_2 = { ( 1 0 ; λ 1 ) : λ ∈ C }
We can check that K_2 is a group, and that L_2 is a normal subgroup of K_2.
K_2 is a group: 1) closure:
( α^{-1} 0 ; β α )( α′^{-1} 0 ; β′ α′ ) = ( (αα′)^{-1} 0 ; βα′^{-1} + αβ′ αα′ )
2) associativity: immediate from matrix multiplication;
3) identity: choose α = 1 and β = 0;
4) inverse:
( α^{-1} 0 ; β α )^{-1} = ( α 0 ; −β α^{-1} )
(just check from the multiplication law above).
L_2 is a subgroup: just choose α = 1; this is a subset that is preserved under multiplication (check the multiplication law above), that contains the identity (λ = 0), and that contains the inverse of every element (check the form of the inverse above).
L_2 is a normal subgroup: the argument is simple: under the multiplication rule, elements of the diagonal get multiplied directly. Hence, with element g = ( α^{-1} 0 ; β α ) ∈ K_2 and h = ( 1 0 ; λ 1 ) ∈ L_2, we have that ghg^{-1} is a matrix with, on the diagonal, α^{-1}·1·α = 1 and α·1·α^{-1} = 1, hence a matrix of the form ( 1 0 ; μ 1 ) ∈ L_2.
So, we can form the quotient group K_2/L_2: this is the group of left-cosets, under element-wise multiplication of left-cosets. The set of left-cosets is
{gL_2 : g ∈ K_2}
= { { ( α^{-1} 0 ; β α )( 1 0 ; λ 1 ) : λ ∈ C } : β ∈ C, α ≠ 0 }
= { { ( α^{-1} 0 ; β + αλ α ) : λ ∈ C } : β ∈ C, α ≠ 0 }
= { { ( α^{-1} 0 ; λ α ) : λ ∈ C } : α ≠ 0 }
= { ( α^{-1} 0 ; C α ) : α ≠ 0 }    (2)
where in the third step, we changed variable to λ′ = β + αλ (and then renamed λ′ to λ), which preserves C, i.e. {β + αλ : λ ∈ C} = C, because α ≠ 0. That is, a left-coset is a subset ( α^{-1} 0 ; C α ), where the C in the lower-left entry means that this entry runs over all complex numbers.
The multiplication law is
( α^{-1} 0 ; C α )( α′^{-1} 0 ; C α′ ) = ( (αα′)^{-1} 0 ; Cα′^{-1} + αC αα′ ) = ( (αα′)^{-1} 0 ; C αα′ )
hence clearly the identity in the quotient group is
( 1 0 ; C 1 ) = L_2
There exists a bijective map Φ : K_2/L_2 → C^* = {α ∈ C : α ≠ 0}, given by
Φ( ( α^{-1} 0 ; C α ) ) = α
This is bijective. Indeed, it is surjective: given α ∈ C^*, there is the element ( α^{-1} 0 ; C α ) that maps to it; and it is injective: if both ( α_1^{-1} 0 ; C α_1 ) and ( α_2^{-1} 0 ; C α_2 ) map to α, then α_1 = α_2 = α, hence ( α_1^{-1} 0 ; C α_1 ) = ( α_2^{-1} 0 ; C α_2 ).
The map Φ is also a homomorphism, hence it is an isomorphism. Indeed, using the multiplication law of the quotient group above, we see that
Φ( ( α^{-1} 0 ; C α )( α′^{-1} 0 ; C α′ ) ) = Φ( ( (αα′)^{-1} 0 ; C αα′ ) ) = αα′ = Φ( ( α^{-1} 0 ; C α ) ) Φ( ( α′^{-1} 0 ; C α′ ) ).
Hence, we have shown that K_2/L_2 ≅ C^* (we have found an isomorphism from K_2/L_2 onto C^*).
Let us now consider the homomorphism ψ : K_2 → C^* given by
ψ( ( α^{-1} 0 ; β α ) ) = α
This is onto C^* (clearly, by similar arguments as above), and it is indeed a homomorphism (clearly, from the multiplication law above). Its kernel is
ker ψ = { ( α^{-1} 0 ; β α ), α ∈ C^*, β ∈ C : ψ( ( α^{-1} 0 ; β α ) ) = 1 } = { ( α^{-1} 0 ; β α ) : α = 1, β ∈ C } = { ( 1 0 ; β 1 ) : β ∈ C } = L_2    (3)
Hence by the homomorphism theorem we have that K_2/L_2 ≅ C^*, which is indeed true by the construction above. □
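A quick numerical sketch (an illustration added here, not part of the notes; it assumes the matrix form of K_2 reconstructed above) of the closure law, using numpy:

```python
# A small numerical sketch (not from the notes) of the K_2 example: with
# k(alpha, beta) the matrix ( alpha^{-1} 0 ; beta alpha ), products stay of
# this form, with the diagonal parameter multiplying directly; the map
# psi(k(alpha, beta)) = alpha is then a homomorphism with kernel L_2.
import numpy as np

def k(alpha, beta):
    return np.array([[1 / alpha, 0], [beta, alpha]], dtype=complex)

rng = np.random.default_rng(0)
for _ in range(100):
    a1, a2 = rng.normal(size=2) + 1j * rng.normal(size=2)
    b1, b2 = rng.normal(size=2) + 1j * rng.normal(size=2)
    prod = k(a1, b1) @ k(a2, b2)
    # closure: the product is again k(alpha, beta), with alpha = a1*a2
    expected = k(a1 * a2, b1 / a2 + a1 * b2)
    assert np.allclose(prod, expected)

# kernel of psi: alpha = 1 gives exactly the L_2 matrices ( 1 0 ; beta 1 )
assert np.allclose(k(1, 2 + 3j), np.array([[1, 0], [2 + 3j, 1]]))
```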
Additional theorems
Theorem 9.1
A homomorphism φ : G → G′ is an isomorphism if and only if it is onto and ker φ = {e}.
Proof. If φ is an isomorphism, then φ is bijective (in particular, onto), so that the relations φ(g) = e′ and φ(e) = e′ imply that g = e. Hence ker φ = {e}.
Oppositely, if ker φ = {e} and φ is onto, then we only need to prove injectivity. We have that φ(g_1) = φ(g_2) implies φ(g_1)φ(g_2)^{-1} = e′, hence φ(g_1)φ(g_2^{-1}) = e′, hence φ(g_1 g_2^{-1}) = e′, hence g_1 g_2^{-1} = e by using ker φ = {e}. Hence g_1 = g_2 and we have injectivity.
Theorem 9.2
Given a group G and a normal subgroup H, there exists a homomorphism φ : G → G/H (onto) such that ker φ = H.
Proof. If g ∈ G, let φ(g) = gH. This is a homomorphism: φ(gg′) = gg′H = gH g′H = φ(g)φ(g′). Its kernel is ker φ = {g ∈ G : gH = H} = {g ∈ G : g ∼_L e} = H.
Automorphisms
Definition: An automorphism is an isomorphism of G onto itself.
Example
Let a ∈ G. Define σ_a : G → G by σ_a(g) = aga^{-1}. Then σ_a is an automorphism:
Homomorphism: σ_a(g_1 g_2) = ag_1 g_2 a^{-1} = ag_1 a^{-1} ag_2 a^{-1} = σ_a(g_1) σ_a(g_2).
Onto: given g ∈ G, there exists g′ ∈ G such that σ_a(g′) = g: indeed, take g′ = a^{-1}ga.
ker σ_a = {e}: indeed, if σ_a(g) = e then aga^{-1} = e, then g = a^{-1}ea = e. □
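The three properties of σ_a can be checked exhaustively on a small group. The sketch below (an illustration, not part of the notes) does this on S_3, again realised as permutation tuples:

```python
# Illustration (not from the notes): for every a in S_3, the conjugation map
# sigma_a(g) = a g a^{-1} is a homomorphism and a bijection of S_3 onto itself.
from itertools import permutations

def mul(p, q):  # composition (p after q)
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

S3 = list(permutations(range(3)))
for a in S3:
    sigma = {g: mul(mul(a, g), inv(a)) for g in S3}
    # homomorphism: sigma_a(g1 g2) = sigma_a(g1) sigma_a(g2)
    assert all(sigma[mul(g1, g2)] == mul(sigma[g1], sigma[g2])
               for g1 in S3 for g2 in S3)
    # bijection: the image is all of S_3
    assert set(sigma.values()) == set(S3)
```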
10 Lecture 10
Example
Consider G = Z_2 × Z_2, with Z_2 = {e, a}, a^2 = e. Define ν : G → G by ν((g_1, g_2)) = (g_2, g_1). This is a nontrivial (i.e. different from the identity map) automorphism. Indeed: 1) it is bijective, 2) it is nontrivial: ν((e, a)) = (a, e) ≠ (e, a), 3) it is a homomorphism: ν(g)ν(g′) = ν((g_1, g_2))ν((g′_1, g′_2)) = (g_2, g_1)(g′_2, g′_1) = (g_2 g′_2, g_1 g′_1) = ν((g_1 g′_1, g_2 g′_2)) = ν(gg′). But there is no g ∈ Z_2 × Z_2 such that ν = σ_g. Indeed, we would have σ_g((e, a)) = (g_1, g_2)(e, a)(g_1^{-1}, g_2^{-1}) = (e, g_2 a g_2^{-1}) ≠ (a, e) for any g_2.
Definition: An inner automorphism is an automorphism ν such that ν = σ_a for some a ∈ G. If ν is not inner, it is called outer. The set of inner automorphisms of a group G is denoted Inn(G).
Definition: the set of all automorphisms of a group G is denoted Aut(G).
Note: If G is abelian, then every inner automorphism is the identity map.
Theorem 10.1
The set of all automorphisms Aut(G) is a group under composition. The subset Inn(G) is a normal subgroup.
Proof. First, we know that composing bijective maps we get a bijective map, that composition is associative, and that there is an identity and an inverse which are also bijective. Hence we need to check 3 things: the composition is a homomorphism, the identity is a homomorphism, and the inverse is a homomorphism. Let ν_1, ν_2 ∈ Aut(G). Closure: (ν_1 ∘ ν_2)(gg′) = ν_1(ν_2(gg′)) = ν_1(ν_2(g) ν_2(g′)) = ν_1(ν_2(g)) ν_1(ν_2(g′)) = (ν_1 ∘ ν_2)(g) (ν_1 ∘ ν_2)(g′), so indeed the composed map is a homomorphism (hence an automorphism, because bijective). Also, the identity is obviously a homomorphism. Finally, let ν ∈ Aut(G). If ν(g_1) = g′_1, then g_1 is the unique one such that this is true, and the inverse map is defined by ν^{-1}(g′_1) = g_1. Let also ν(g_2) = g′_2. Then, ν^{-1}(g′_1 g′_2) = ν^{-1}(ν(g_1) ν(g_2)) = ν^{-1}(ν(g_1 g_2)) = g_1 g_2 = ν^{-1}(g′_1) ν^{-1}(g′_2), so that indeed ν^{-1} is a homomorphism (hence, again, an automorphism).
Second, the subset of inner automorphisms is a subgroup. Indeed, σ_a ∘ σ_b = σ_{ab} because σ_a(σ_b(g)) = abgb^{-1}a^{-1} = (ab)g(ab)^{-1}. Also, the identity automorphism is σ_e, and the inverse of σ_a is σ_{a^{-1}}.
Third, the subset of inner automorphisms is normal. Let ν be any automorphism. Then ν ∘ σ_a ∘ ν^{-1} = σ_{ν(a)}, so it is indeed an inner automorphism. This is shown as follows: ν ∘ σ_a ∘ ν^{-1}(g) = ν(σ_a(ν^{-1}(g))) = ν(a ν^{-1}(g) a^{-1}) = ν(a) g ν(a^{-1}) = ν(a) g ν(a)^{-1}.
Note on notation: be careful of the meaning of where we put the −1 in the exponent: in ν(g^{-1}) = ν(g)^{-1}, on the r.h.s. we take the inverse of the element ν(g). But in σ_{a^{-1}}(g) = σ_a^{-1}(g), on the r.h.s. we take the inverse σ_a^{-1} of the map σ_a, and then apply it to g.
Note the important formulae:
ν ∘ σ_a ∘ ν^{-1} = σ_{ν(a)},  σ_a ∘ σ_b = σ_{ab}
11 Lecture 11
Theorem 11.1
G/Z(G) ≅ Inn(G).
Proof. We only have to realise that the map ν : G → Aut(G) given by ν(g) = σ_g is a homomorphism. Indeed, the calculations above have shown that ν(g_1 g_2) = σ_{g_1 g_2} = σ_{g_1} ∘ σ_{g_2} = ν(g_1)ν(g_2). Hence, we can use the homomorphism theorem, G/ker ν ≅ im ν. Clearly, by definition, im ν = Inn(G). Let us calculate the kernel. We look for all g ∈ G such that ν(g) = id. That is, all g such that σ_g(h) = h ∀ h ∈ G. That is, ghg^{-1} = h ∀ h ∈ G, so that gh = hg ∀ h ∈ G. Hence, g ∈ Z(G), so that indeed ker ν = Z(G).
Basics of matrices
We denote by M_N(C) and M_N(R) the set of all N × N matrices with elements in C and R resp. We may:
add matrices C = A + B
multiply matrices by scalars C = λA, λ ∈ C or R,
multiply matrices C = AB
The identity matrix is I, with elements I_{jk} = δ_{jk} = 1 (j = k), 0 (j ≠ k). Note: 1) IA = AI = A; 2) matrix multiplication is associative.
Given any matrix A, we may
Take its complex conjugate Ā : (Ā)_{jk} is the complex conjugate of A_{jk}
Take its transpose A^T : (A^T)_{jk} = A_{kj}
Take its adjoint A† = (Ā)^T (equivalently, the complex conjugate of A^T).
Definition: A matrix A is
self-adjoint if A† = A
symmetric if A^T = A
unitary if A† = A^{-1}
diagonal if A_{jk} = 0 for j ≠ k
Definition: The trace of a matrix is Tr(A) = Σ_{j=1}^{N} A_{jj}.
Note: Tr(A_1 A_2 ··· A_k) = Tr(A_k A_1 ··· A_{k−1}), Tr(I) = N (simple proofs omitted).
Definition: A matrix A is invertible if there exists a matrix A^{-1} such that AA^{-1} = A^{-1}A = I.
Definition: The determinant of a matrix is det(A) = Σ_{j_1=1}^{N} ··· Σ_{j_N=1}^{N} ε_{j_1···j_N} A_{1 j_1} ··· A_{N j_N}, where ε_{12···N} = 1 and ε_{j_1···j_N} is completely anti-symmetric: it changes its sign if two indices are interchanged. Example: ε_{12} = 1, ε_{21} = −1, ε_{11} = ε_{22} = 0.
Properties:
det(AB) = det(A) det(B) (hence, in particular, if A^{-1} exists, then det(A^{-1}) = 1/det(A))
det(A) ≠ 0 if and only if A is invertible (recall Cramer's rule for evaluating the inverse of a matrix)
det(I) = 1
det(Ā) = the complex conjugate of det(A)
det(A^T) = det(A)
det(λA) = λ^N det(A)
if A is diagonal, then det(A) = Π_{j=1}^{N} A_{jj}.
Note: det(SAS^{-1}) = det(A), and also Tr(SAS^{-1}) = Tr(A), by the properties above.
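These properties are easy to spot-check numerically. The following sketch (an illustration, not part of the notes) verifies them on random complex matrices with numpy:

```python
# Numerical spot-check (not part of the notes) of the listed determinant and
# trace properties, on random complex matrices.
import numpy as np

rng = np.random.default_rng(1)
N = 4
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
S = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
assert np.isclose(np.linalg.det(A.conj()), np.linalg.det(A).conj())
assert np.isclose(np.linalg.det(2 * A), 2**N * np.linalg.det(A))

# similarity invariance of det and trace
Sinv = np.linalg.inv(S)
assert np.isclose(np.linalg.det(S @ A @ Sinv), np.linalg.det(A))
assert np.isclose(np.trace(S @ A @ Sinv), np.trace(A))

# cyclicity of the trace
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```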
Note: the determinant can also be written as det(A) = Σ_{π ∈ S_N} par(π) Π_{k=1}^{N} A_{k, π(k)}. Here, par(π) is the parity of the permutation π. The parity of a permutation is defined through the number of times one needs to exchange two elements in order to obtain the result of the permutation. More precisely, let A_N be the set of two-element exchanges, i.e. permutations of the form
( 1 ··· j ··· k ··· N )
( 1 ··· k ··· j ··· N )
for some 1 ≤ j < k ≤ N. We can always write π ∈ S_N as a product π = τ_1 ··· τ_m with τ_j ∈ A_N ∀ j. This is easy to see by induction. Further, although there are many ways of writing π as such products, it turns out that if π = τ′_1 ··· τ′_{m′} then m′ = m mod 2. Hence, the parity par(π) = (−1)^m is only a function of π, and this is what we define as par(π). It is important that the parity be only a function of π for the symbol ε_{j_1···j_N} to be nonzero if its indices are all different: otherwise, we could exchange indices an odd number of times and get back to the same set of indices, getting the negative of what we had, concluding that the value must be 0.
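The permutation-sum formula can be implemented directly for small N. The sketch below (an illustration, not from the notes; it computes par(π) by counting inversions, which has the same parity as any decomposition into exchanges) compares it against numpy's determinant:

```python
# A direct implementation (illustrative, not from the notes) of
# det(A) = sum over permutations pi of par(pi) * prod_k A[k, pi(k)],
# compared against numpy.
import numpy as np
from itertools import permutations

def parity(p):
    # count inversions; par(pi) = (-1)^{number of inversions}
    inv_count = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                    if p[i] > p[j])
    return -1 if inv_count % 2 else 1

def det_by_permutations(A):
    n = A.shape[0]
    return sum(parity(p) * np.prod([A[k, p[k]] for k in range(n)])
               for p in permutations(range(n)))

A = np.random.default_rng(2).normal(size=(4, 4))
assert np.isclose(det_by_permutations(A), np.linalg.det(A))
```

This is O(N!) and only practical for very small N; it is meant to illustrate the formula, not to compute determinants efficiently.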
The classical groups: the matrix groups
These are groups where the group elements are matrices, and the multiplication law is the usual matrix multiplication.
Below we see C^* and R^* as groups under ordinary multiplication (closure is clear, associativity as well, identity is 1, inverse always exists).
1. The general linear group
GL(N, C) = {A ∈ M_N(C) : det(A) ≠ 0}
Group: 1) closure: det(AB) = det(A) det(B) ≠ 0 if det(A) ≠ 0 and det(B) ≠ 0. 2) associativity: by matrix multiplication. 3) identity: I ∈ GL(N, C) because det(I) = 1 ≠ 0. 4) inverse: A^{-1} exists because det(A) ≠ 0, and det(A^{-1}) = 1/det(A) ≠ 0, so that also A^{-1} ∈ GL(N, C).
Likewise,
GL(N, R) = {A ∈ M_N(R) : det(A) ≠ 0}
and clearly GL(N, R) is a subgroup of GL(N, C).
Theorem 11.2
det : GL(N, C) → C^* is a homomorphism onto C^*. Also, det : GL(N, R) → R^* is a homomorphism onto R^*.
Proof. It is onto because for any λ ∈ C^* we can always find a matrix A such that det(A) = λ: just take A ∈ GL(N, C) with matrix entries A_{11} = λ, A_{jj} = 1 for j > 1, and A_{jk} = 0 for j ≠ k. It is a homomorphism because det(AB) = det(A) det(B). Idem for det : GL(N, R) → R^*.
2. Special linear group
SL(N, C) = {A ∈ M_N(C) : det(A) = 1}
Theorem 11.3
SL(N, C) is a normal subgroup of GL(N, C).
Proof. By definition, we have SL(N, C) = ker det, where by det we mean the map det : GL(N, C) → C^*. Hence, SL(N, C) is a normal subgroup of GL(N, C) (so in particular it is a group).
Similarly for SL(N, R).
Theorem 11.4
GL(N, C)/SL(N, C) ≅ C^*. Also GL(N, R)/SL(N, R) ≅ R^*.
Proof. Again, consider the homomorphism det : GL(N, C) → C^*, whose kernel is SL(N, C), and which is onto C^*. By the homomorphism theorem, the theorem immediately follows. Idem for the real case.
3. Unitary group
U(N) = {A ∈ M_N(C) : A† = A^{-1}}
Note: the condition A† = A^{-1} automatically implies that A^{-1} exists, because of course A† exists for any matrix A; hence it implies that det(A) ≠ 0. The condition can also be written A†A = AA† = I.
Group: 1) closure: if A_1, A_2 ∈ U(N) then (A_1 A_2)† = A_2† A_1† = A_2^{-1} A_1^{-1} = (A_1 A_2)^{-1}, hence A_1 A_2 ∈ U(N). 2) associativity: from matrix multiplication. 3) identity: I† = I = I^{-1}, hence I ∈ U(N). 4) inverse: if A† = A^{-1} then, using (A†)† = A, we find (A^{-1})† = A = (A^{-1})^{-1}, hence A^{-1} ∈ U(N).
12 Lecture 12
Note: U(1) = {z ∈ C : z̄z = 1}. Writing z = e^{iθ} we see that the condition z̄z = 1 implies θ ∈ R; we may restrict to θ ∈ [0, 2π). In terms of θ, the group is addition modulo 2π.
Theorem 12.1
The map det : U(N) → U(1) is onto and is a group homomorphism.
Proof. Onto: for z ∈ C with |z| = 1, we can construct A diagonal with A_{11} = z and A_{jj} = 1 for j > 1. Homomorphism: because of the property of det as before.
4. Special unitary group
SU(N) = {A ∈ U(N) : det(A) = 1}
Clearly SU(N) is a subgroup of U(N). It is SU(N) = ker det, hence it is a normal subgroup. Since det : U(N) → U(1) maps onto U(1), we find that
U(N)/SU(N) ≅ U(1)
5. Orthogonal group
O(N) = {A ∈ M_N(R) : A^T = A^{-1}}
This is the group of orthogonal matrices. Note: O(N) ⊂ U(N). If A ∈ O(N) then det(A) = ±1. Hence, here det : O(N) → Z_2 is onto.
6. Special orthogonal group
SO(N) = {A ∈ O(N) : det(A) = 1}
As before, SO(N) = ker det for det : O(N) → Z_2. Hence we have again
O(N)/SO(N) ≅ Z_2
Centers of general linear and special linear groups
Theorem 12.2
Z(GL(N, C)) ≅ C^*. Also Z(GL(N, R)) ≅ R^*.
Proof. Suppose AB = BA for all B ∈ GL(N, C). That is, Σ_k A_{jk} B_{kl} = Σ_k B_{jk} A_{kl}. Since this holds for all B, choose B diagonal with diagonal entries all different from each other. Then we have
A_{jl} B_{ll} = B_{jj} A_{jl}  ⟹  (B_{ll} − B_{jj}) A_{jl} = 0
hence A_{jl} = 0 for j ≠ l. Hence, we find that A must be diagonal. Further, consider
B = ( 0 1 0 ··· 0 )
    ( 1 0 0 ··· 0 )
    ( 0 0 1 ··· 0 )
    ( ⋮         ⋱ )
    ( 0 0 0 ··· 1 )
Check that det(B) = −1 ≠ 0, so B ∈ GL(N, C). The equation with j = 1 and l = 2 then gives us A_{11} B_{12} = B_{12} A_{22}, hence A_{11} = A_{22}. Similarly, A_{jj} = A_{11} for all j. So A = λI for λ ∈ C^*. The group of such diagonal matrices is obviously isomorphic to C^*. Idem for the real case. □
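The two steps of this proof can be illustrated numerically for N = 3 (a sketch added here, not part of the notes): commuting with a generic diagonal matrix forces A to be diagonal, and commuting with the swap matrix then forces equal diagonal entries.

```python
# Numerical illustration (not from the notes) of the center argument for N = 3:
# only scalar multiples of the identity commute with both test matrices.
import numpy as np

N = 3
D = np.diag([1.0, 2.0, 3.0])        # diagonal, all entries distinct
Bswap = np.eye(N)
Bswap[[0, 1]] = Bswap[[1, 0]]       # swaps the first two coordinates

# scalar matrices commute with everything
A = 5.0 * np.eye(N)
assert np.allclose(A @ D, D @ A) and np.allclose(A @ Bswap, Bswap @ A)

# a non-scalar diagonal matrix commutes with D but not with Bswap
A2 = np.diag([1.0, 7.0, 7.0])
assert np.allclose(A2 @ D, D @ A2)
assert not np.allclose(A2 @ Bswap, Bswap @ A2)

# a non-diagonal matrix fails already against D
A3 = np.eye(N); A3[0, 1] = 1.0
assert not np.allclose(A3 @ D, D @ A3)
```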
Theorem 12.3
Z(SL(N, C)) ≅ Z_N. Also Z(SL(N, R)) ≅ Z_2 if N is even, and Z(SL(N, R)) ≅ {1} if N is odd.
Proof. The proof for GL(N, C) above goes through to show that the center must be matrices proportional to the identity I. Hence we look for all a ∈ C such that det(aI) = 1. But det(aI) = a^N, hence a must be an N-th root of unity. The group of these roots of unity under multiplication is isomorphic to the group Z_N. For the case a ∈ R: if N is even, then a = ±1; if N is odd, then a = 1.
13 Lecture 13
Structures of some groups
Theorem 13.1
If N is odd, O(N) ≅ Z_2 × SO(N).
Proof. Let us construct an isomorphism that does the job. We define it as
ν : O(N) → Z_2 × SO(N),  A ↦ ν(A) = ( det(A), A/det(A) )
All we have to show is that this maps into the right space as specified (because this is not immediately obvious), and then that it is indeed an isomorphism.
First: that it maps into Z_2 × SO(N) is shown as follows. 1) Clearly det(A) ∈ Z_2 by the discussion above. 2) Also (A/det(A))^T = A^T/det(A) = A^{-1}/det(A) and (A/det(A))^{-1} = det(A) A^{-1} = det(A)^2 A^{-1}/det(A) = (±1)^2 A^{-1}/det(A) = A^{-1}/det(A). Hence indeed A/det(A) ∈ O(N). 3) Further, det(A/det(A)) = det(A)/det(A)^N = det(A)^{1−N} = (±1)^{1−N}, so if N is odd, then N − 1 is even, so (±1)^{1−N} = 1. Hence indeed det(A/det(A)) = 1, so A/det(A) ∈ SO(N).
Second: that it is a homomorphism:
ν(AB) = ( det(AB), AB/det(AB) ) = ( det(A) det(B), (A/det(A))(B/det(B)) ) = ( det(A), A/det(A) )( det(B), B/det(B) ) = ν(A) ν(B)
Third: that it is bijective. Injectivity: if ν(A_1) = ν(A_2) then det(A_1) = det(A_2) and A_1/det(A_1) = A_2/det(A_2), hence A_1 = A_2, so indeed it is injective. Surjectivity: take a ∈ Z_2 and B ∈ SO(N). We can always find a matrix A ∈ O(N) such that ν(A) = (a, B). Indeed, just take A = aB. This has determinant det(A) = det(aB) = a^N det(B) = a det(B) (since N is odd) = a (since B ∈ SO(N)). Hence, ν(A) = (a, A/a) = (a, B) as it should.
We see that the inverse map is ν^{-1}(a, B) = aB: simply multiply the SO(N) matrix by the sign a. But of course this only works for N odd. For N even, there is the concept of semi-direct product that would work...
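For N = 3 the map ν can be checked numerically; the sketch below (an illustration, not from the notes) generates random orthogonal matrices via QR decomposition:

```python
# Numerical check (illustrative, not from the notes) that for N = 3 the map
# nu(A) = (det A, A/det A) sends O(3) into Z_2 x SO(3) and is a homomorphism.
import numpy as np

rng = np.random.default_rng(3)

def random_orthogonal(n):
    # QR decomposition of a random matrix gives an orthogonal Q
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return Q

for _ in range(20):
    A, B = random_orthogonal(3), random_orthogonal(3)
    dA, dB = np.linalg.det(A), np.linalg.det(B)
    assert np.isclose(abs(dA), 1.0)                  # det(A) is +1 or -1
    assert np.isclose(np.linalg.det(A / dA), 1.0)    # A/det(A) is in SO(3)
    # homomorphism: nu(AB) = nu(A) nu(B) componentwise
    assert np.isclose(np.linalg.det(A @ B), dA * dB)
    assert np.allclose((A @ B) / np.linalg.det(A @ B), (A / dA) @ (B / dB))
```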
Semi-direct products
The semi-direct product is a generalisation of the direct product. Take two groups G and H, and consider the Cartesian product of these sets, G × H = {(g, h) : g ∈ G, h ∈ H}. This new set can be given the structure of a group simply by taking the multiplication law (g, h)(g′, h′) = (gg′, hh′). But there is another way of defining the multiplication law.
Definition: Given a homomorphism ν : G → Aut(H), where we denote ν(g) =: ν_g, g ∈ G, we define the semi-direct product G ⋉_ν H as the group with elements all those of the set G × H, and with multiplication law
(g, h)(g′, h′) = (gg′, h ν_g(h′)).
14 Lecture 14
To check that this defines a group, we must check associativity,
(g, h)((g′, h′)(g″, h″)) = (g, h)(g′g″, h′ ν_{g′}(h″)) = (gg′g″, h ν_g(h′ ν_{g′}(h″)))
where the second member in the last term can be written h ν_g(h′) ν_g(ν_{g′}(h″)) = h ν_g(h′) ν_{gg′}(h″). On the other hand,
((g, h)(g′, h′))(g″, h″) = (gg′, h ν_g(h′))(g″, h″) = (gg′g″, h ν_g(h′) ν_{gg′}(h″))
which is in agreement with the previous result. We must also check the presence of an identity, id = (id, id) (obvious, using ν_{id} = id and ν_g(id) = id; recall the general properties of homomorphisms). Finally, we must check that an inverse exists. It is given by
(g, h)^{-1} = (g^{-1}, ν_{g^{-1}}(h^{-1}))
because we have
(g, h)^{-1}(g, h) = (id, ν_{g^{-1}}(h^{-1}) ν_{g^{-1}}(h)) = (id, id)
and
(g, h)(g, h)^{-1} = (id, h ν_g(ν_{g^{-1}}(h^{-1}))) = (id, h ν_{id}(h^{-1})) = (id, id).
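The multiplication and inverse laws just checked can be coded directly. The sketch below (an illustration, not part of the notes) takes G = Z_2 = {+1, −1} acting on H = Z_3 (written additively) by ν_{+1} = id and ν_{−1}(h) = −h, and verifies the group axioms exhaustively; the resulting order-6 group is non-abelian (it is the dihedral group, i.e. S_3):

```python
# A sketch (not from the notes) of the semi-direct product law, with
# G = Z_2 acting on H = Z_3 by nu_{+1} = id and nu_{-1}(h) = -h.
def nu(s, h):          # the action of s in {+1, -1} on h in Z_3
    return h % 3 if s == 1 else (-h) % 3

def mult(x, y):        # (g, h)(g', h') = (gg', h + nu_g(h'))
    (g, h), (gp, hp) = x, y
    return (g * gp, (h + nu(g, hp)) % 3)

def inverse(x):        # (g, h)^{-1} = (g^{-1}, nu_{g^{-1}}(h^{-1}))
    g, h = x
    return (g, nu(g, (-h) % 3))    # in Z_2, g^{-1} = g

elements = [(s, h) for s in (1, -1) for h in range(3)]
e = (1, 0)

# group axioms: associativity, identity, inverses
assert all(mult(mult(x, y), z) == mult(x, mult(y, z))
           for x in elements for y in elements for z in elements)
assert all(mult(x, e) == x and mult(e, x) == x for x in elements)
assert all(mult(x, inverse(x)) == e and mult(inverse(x), x) == e
           for x in elements)

# non-abelian: the semi-direct law differs from the direct one
assert mult((-1, 0), (1, 1)) != mult((1, 1), (-1, 0))
```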
15 Lecture 15
Theorem 15.1
Consider Z_2 = {+1, −1} and the isomorphism Φ : Z_2 → {I, R}, R = diag(−1, 1, 1, ..., 1), given by Φ(1) = I and Φ(−1) = R. If N is even, then
O(N) ≅ Z_2 ⋉_ν SO(N),
where ν(s) = ν_s is given by ν_s(g) = Φ(s) g Φ(s) for s ∈ Z_2 and g ∈ SO(N).
Proof. 1) (done in Lecture 14) We first check that ν : Z_2 → Aut(SO(N)) is a homomorphism.
a) We check that ν_s is an automorphism of SO(N) for any s. This is clear, using Φ(s)^2 = I: we have ν_s(gg′) = Φ(s) gg′ Φ(s) = Φ(s) g Φ(s) Φ(s) g′ Φ(s) = ν_s(g) ν_s(g′). It is also onto because ν_s(Φ(s) g Φ(s)) = g for any g ∈ SO(N), and if det(g) = 1 then det(Φ(s) g Φ(s)) = det(Φ(s))^2 det(g) = det(g) = 1. Further, its kernel is the identity: if ν_s(g) = id then g = id. Hence it is an isomorphism from SO(N) onto SO(N).
b) Then, we check that ν is a homomorphism. Indeed, ν(s) ν(s′) = ν_s ∘ ν_{s′}, which acts as (ν_s ∘ ν_{s′})(g) = Φ(s) Φ(s′) g Φ(s′) Φ(s) = Φ(ss′) g Φ(s′s) = Φ(ss′) g Φ(ss′) = ν_{ss′}(g).
2) Second, we construct an isomorphism that maps O(N) onto Z_2 ⋉_ν SO(N). We define it as
ψ : O(N) → Z_2 ⋉_ν SO(N),  A ↦ ψ(A) = ( det(A), A Φ(det(A)) )
Again all we have to show is that this maps into the right space as specified (because this is not immediately obvious), and then that it is indeed an isomorphism.
First: that it maps into Z_2 ⋉_ν SO(N) is shown as follows. a) Clearly det(A) ∈ Z_2. b) Also (Φ(det(A)) A)^T = A^T Φ(det(A))^T = A^{-1} Φ(det(A)) and (Φ(det(A)) A)^{-1} = A^{-1} Φ(det(A))^{-1} = A^{-1} Φ(det(A)^{-1}) = A^{-1} Φ(det(A)). Hence indeed A Φ(det(A)) ∈ O(N). c) Further, det(A Φ(det(A))) = det(A) det(Φ(det(A))) = det(A)^2 = 1. Hence indeed A Φ(det(A)) ∈ SO(N).
Second: that it is a homomorphism:
ψ(AB) = ( det(AB), AB Φ(det(AB)) )
= ( det(A) det(B), AB Φ(det(A) det(B)) )
= ( det(A) det(B), AB Φ(det(A)) Φ(det(B)) )
= ( det(A) det(B), A Φ(det(A)) Φ(det(A)) B Φ(det(A)) Φ(det(B)) )
= ( det(A) det(B), A Φ(det(A)) Φ(det(A)) B Φ(det(B)) Φ(det(A)) )
= ( det(A) det(B), A Φ(det(A)) ν_{det(A)}(B Φ(det(B))) )
= ( det(A), A Φ(det(A)) ) ( det(B), B Φ(det(B)) )
= ψ(A) ψ(B)
Third: that it is bijective. Injectivity: if ψ(A_1) = ψ(A_2) then det(A_1) = det(A_2) and A_1 Φ(det(A_1)) = A_2 Φ(det(A_2)), hence A_1 = A_2, so indeed it is injective. Surjectivity: take s ∈ Z_2 and B ∈ SO(N). We can always find a matrix A ∈ O(N) such that ψ(A) = (s, B). Indeed, just take A = B Φ(s). This has determinant det(A) = s det(B) = s (since B ∈ SO(N)). Further, A Φ(s) = B Φ(s)^2 = B. Hence, ψ(A) = (det(A), A Φ(s)) = (s, B) as it should.
The semi-direct product decomposition makes very clear the structures involved in the quotient, e.g. O(N)/SO(N) ≅ Z_2; see below.
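The map ψ of Theorem 15.1 is easy to test for N = 2. The sketch below (an illustration, not part of the notes) checks that ψ intertwines matrix multiplication in O(2) with the semi-direct law:

```python
# Numerical illustration (not from the notes) of Theorem 15.1 for N = 2:
# psi(A) = (det A, A Phi(det A)), with Phi(1) = I and Phi(-1) = diag(-1, 1),
# satisfies psi(A B) = psi(A) psi(B) in the semi-direct product.
import numpy as np

def Phi(s):
    return np.eye(2) if s > 0 else np.diag([-1.0, 1.0])

def psi(A):
    s = int(np.sign(np.linalg.det(A)))
    return (s, A @ Phi(s))

def semidirect_mult(x, y):   # (s, g)(s', g') = (ss', g nu_s(g'))
    (s, g), (sp, gp) = x, y
    return (s * sp, g @ Phi(s) @ gp @ Phi(s))

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

A = rot(0.7) @ Phi(-1)       # an O(2) element with det = -1
B = rot(1.3)                 # an SO(2) element
sAB, gAB = psi(A @ B)
s2, g2 = semidirect_mult(psi(A), psi(B))
assert sAB == s2 and np.allclose(gAB, g2)
```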
Theorem 15.2
The subset {(e, h) : h ∈ H} ⊂ G × H is a subgroup of G ⋉_ν H that is isomorphic to H and that is normal. The subset {(g, e) : g ∈ G} is a subgroup of G ⋉_ν H that is isomorphic to G.
Proof. For the first statement: it is a subgroup because it contains the identity, it is closed, (e, h)(e, h′) = (e, h ν_e(h′)) = (e, hh′), and it contains the inverse, (e, h)^{-1} = (e, h^{-1}), by the multiplication rule just established. It is also clearly isomorphic to H, with (e, h) ↦ h, thanks again to the multiplication rule. Further, it is normal:
(g, h)^{-1}(id, h′)(g, h) = (g^{-1}, ν_{g^{-1}}(h^{-1}h′))(g, h) = (id, ν_{g^{-1}}(h^{-1}h′h)).
For the second statement, the subset contains the identity, is closed, (g, e)(g′, e) = (gg′, ν_g(e)) = (gg′, e), and, by this multiplication law, it contains the inverse. Clearly again, it is isomorphic to G.
A special case of the semi-direct product is the direct product, where ν_g = id for all g ∈ G (that is, ν : G → Aut(H) is trivial, ν(g) = id). In this case, both G and H are normal subgroups.
Note: we usually denote simply by G and H the subgroups {(g, e) : g ∈ G} and {(e, h) : h ∈ H} of G ⋉ H (as we did for the direct product).
Theorem 15.3
The left cosets of G ⋉ H with respect to the normal subgroup H are the subsets {(g, h) : h ∈ H} for all g ∈ G. Also, (G ⋉ H)/H ≅ G.
Proof. For the first statement: the left cosets are (g, h)(e, H) = (g, h ν_g(H)) = (g, hH) = (g, H), since ν_g is an automorphism. For the second statement: the isomorphism is (g, H) ↦ g. This is clearly bijective, and it is a homomorphism, because (g, H)(g′, H) = (gg′, H ν_g(H)) = (gg′, H).
16 Lecture 16
In general, if H is a normal subgroup of a group J, it is not necessarily true that J is isomorphic to a semi-direct product G ⋉ H with G ≅ J/H. But this is true in the case where 1) H is the kernel of some homomorphism φ (actually this is always true, by the inverse of the homomorphism theorem, Theorem 10.3), and 2) there is a subgroup G in J on which φ is an isomorphism.
Coming back to our example: SO(N) is indeed a normal subgroup of O(N) ≅ Z_2 ⋉ SO(N), but the Z_2 of this decomposition, although it is a subgroup, is not normal. The Z_2 of this decomposition can be obtained as an explicit subgroup of O(N) by the inverse map ψ^{-1} of Theorem 15.1: ψ^{-1}((s, I)) = Φ(s) for s = ±1. Hence the subgroup is {I, diag(−1, 1, ..., 1)}. Here, we indeed have that SO(N) is the kernel of det, and that {I, R} is a subgroup on which det is an isomorphism.
Note: Clearly, there are many Z_2 subgroups, for instance {I, −I}; this one is normal. But it does not take part in any decomposition of O(N) into Z_2 and SO(N).
Extra material
Continuous groups
S_N : finite number of elements. Z: infinite number of elements, but countable. But other groups have infinitely many elements forming a continuum. Ex: C^* = C \ {0} (group under multiplication of complex numbers); or all the classical matrix groups.
More precisely: a group, as a set, may be a manifold; i.e. it may be locally diffeomorphic to open subsets of R^N, and have a continuous, differentiable structure on it.
Manifold:
A topological space (M, J) (M: the space, J: the open sets) that is Hausdorff (every two distinct points possess distinct neighbourhoods);
an atlas {(U, φ_U) : U ∈ I}, I ⊂ J, where (U, φ_U) is a chart;
∪_{U ∈ I} U = M;
φ_U : U → R^n a homeomorphism;
if U ∩ V ≠ ∅ for U, V ∈ I, then φ_V ∘ φ_U^{-1}, which maps φ_U(U ∩ V) ⊂ R^n into R^n, is smooth.
The coordinates on U are the components of the map φ_U; that is, in a neighbourhood U, the coordinates of a point p ∈ M are x^1 = φ_U^1(p), x^2 = φ_U^2(p), etc.
The n above is the dimension of the manifold.
If G is a manifold, then clearly also G × G, the cartesian product, is a manifold, with the cartesian product topology, etc. The dimension is 2n if n is the dimension of G.
Definition: A Lie group G is a manifold G with a group structure, such that the group operations are smooth functions: multiplication G × G → G, and inverse G → G.
Essentially, for the classical matrix groups above, the number of dimensions is the number of free, continuous parameters.
Example
The (real) dimensions for some of these Lie groups are
C^* : 2 dimensions.
GL(N, R): N^2 dimensions.
SL(N, R): N^2 − 1 dimensions, due to the one condition det(A) = 1.
SO(N): N(N−1)/2 dimensions. Indeed: there are N × N real matrices, so there are N^2 parameters. There is the condition A^T A = I. This is a condition on the matrix A^T A, which contains N^2 elements. But this matrix is symmetric no matter what A is, because (A^T A)^T = A^T A. Hence, the constraint A^T A = I in fact has 1 + 2 + ... + N constraints only (looking at the top row with N elements, then the second row with N − 1 elements, etc.). That is, N(N + 1)/2 constraints. These are independent constraints. Hence, the dimension is N^2 − N(N + 1)/2 = N(N − 1)/2. □
Euclidean group
We now describe the Euclidean group E_N. We start with a formal definition, then a geometric interpretation.
Formal definition
Consider the groups O(N) and R^N. The former is the orthogonal group of N by N matrices. The latter is the (abelian) group of real vectors in R^N under addition. The Euclidean group is
E_N = O(N) ⋉_ν R^N
where ν : O(N) → Aut(R^N), ν(A) = ν_A is a homomorphism, with ν_A defined by
ν_A(b) = Ab.
To make sure this is a good definition: we must show that ν is a homomorphism and that ν_A is an automorphism. First the latter. ν_A is clearly bijective, because the matrix A is invertible (work out injectivity and surjectivity from this). Further, it is a homomorphism because ν_A(x + y) = A(x + y) = Ax + Ay = ν_A(x) + ν_A(y). Second, ν is a homomorphism because ν_{AA′}(b) = AA′b = A(A′b) = ν_A(ν_{A′}(b)) = (ν_A ∘ ν_{A′})(b), so that ν(AA′) = ν(A)ν(A′) (the multiplication in automorphisms is the composition of maps). Hence, this is a good definition of a semidirect product.
Explicitly, the multiplication law is
(A, b)(A′, b′) = (AA′, b + Ab′).
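This multiplication law can be checked against the composition of the corresponding affine maps x ↦ Ax + b. A short sketch (an illustration, not part of the notes):

```python
# Sketch (not from the notes): the pair (A, b), acting as x -> Ax + b,
# composes exactly by the semi-direct law (A, b)(A', b') = (AA', b + Ab').
import numpy as np

def act(Ab, x):
    A, b = Ab
    return A @ x + b

def euclid_mult(Ab, Abp):
    (A, b), (Ap, bp) = Ab, Abp
    return (A @ Ap, b + A @ bp)

rng = np.random.default_rng(4)
Q1, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random orthogonal matrix
Q2, _ = np.linalg.qr(rng.normal(size=(3, 3)))
b1, b2 = rng.normal(size=3), rng.normal(size=3)
x = rng.normal(size=3)

# composing the two maps equals acting with the product element
lhs = act((Q1, b1), act((Q2, b2), x))
rhs = act(euclid_mult((Q1, b1), (Q2, b2)), x)
assert np.allclose(lhs, rhs)
```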
Geometric meaning
Consider the space of all translations in R^N. This is the space of all maps x ↦ x + b for b ∈ R^N: maps that take each point x to x + b in R^N. Denote a translation by T_b, so that T_b(x) = x + b. The composition law is obtained from
(T_b ∘ T_{b′})(x) = T_b(T_{b′}(x)) = T_b(x + b′) = x + b + b′ = T_{b+b′}(x).
Hence, T_b ∘ T_{b′} = T_{b+b′}, so that compositions of translations are translations. Further, there is a translation that does nothing (choosing b = 0), and one can always undo a translation, T_{−b} ∘ T_b = T_0 = id. Hence, the set of all translations, with multiplication law the composition, is a group T_N. Note also that T_b ≠ T_{b′} if b ≠ b′, and for any b ∈ R^N there is a T_b. Clearly, then, this is a group that is isomorphic to R^N, by the map
T_b ↦ b
Consider now another type of operations on R^N: the orthogonal transformations (those that preserve the length of vectors in R^N), forming the group O(N) as we have seen. For a matrix A ∈ O(N), we will denote the action on a point x in R^N by A(x) := Ax.
Consider the direct product of the sets of orthogonal transformations and translations, with elements (A, T_b) for A ∈ O(N) and T_b ∈ T_N. Suppose we define the action of elements of this form on the space R^N by first an orthogonal transformation, then a translation. That is, (A, T_b) := T_b ∘ A, i.e.
(A, T_b)(x) = T_b(A(x)) = Ax + b.
Then, let us see what happens when we compose such transformations. We have
((A, T_b) ∘ (A′, T_{b′}))(x) = A(A′x + b′) + b = AA′x + Ab′ + b = (AA′, T_{Ab′+b})(x).
That is, we obtain a transformation that can be described by first an orthogonal transformation AA′, then a translation by the vector Ab′ + b. Combined with the definition of the Euclidean group above and the fact that T_b ↦ b is an isomorphism, what we have just shown is that the set of all transformations "orthogonal transfo followed by translation" is the same set as the set E_N, and has the same composition law; hence the first is a group, and the two groups are isomorphic. That is, the Euclidean group can be seen as the group of such transformations.
Note how the semi-direct multiplication law occurs essentially because rotations and translations don't commute:
A(T_b(x)) = A(x + b) = Ax + Ab,  T_b(A(x)) = Ax + b
so that T_b ∘ A ∘ T_{b′} ∘ A′ ≠ T_b ∘ T_{b′} ∘ A ∘ A′. We rather have T_b ∘ A ∘ T_{b′} ∘ A′ = T_b ∘ (A ∘ T_{b′} ∘ A^{-1}) ∘ A ∘ A′, and we find the conjugation law
(A ∘ T_{b′} ∘ A^{-1})(x) = A(A^{-1}x + b′) = x + Ab′ = T_{Ab′}(x)
That is: the conjugation of a translation by a rotation is again a translation, but by the rotated vector, and this is what gives rise to the semi-direct product law. This is true generally: if two types of transformations don't commute, but the conjugation of one by the other is again of the first type, then we have a semi-direct product. Recall also the examples of SO(2) and Z_2 in their geometric interpretation as rotations and reflections.
There is more. We could decide to try to do translations and rotations in any order: that is, we can look at all transformations of R^N that are obtained by doing rotations and translations in any order and of any kind. A general transformation will look like A_1 ∘ A_2 ∘ T_{b_1} ∘ T_{b_2} ∘ A′_1 ∘ A′_2 ∘ ··· etc. But since orthogonal transformations and translations independently form groups, we can multiply successive orthogonal transformations to get a single one, and likewise for translations, so we get something of the form A ∘ T_b ∘ A′ ∘ ··· etc. Further, taking into account that we can always put the identity orthogonal transformation at the beginning, and the identity translation at the end, if need be, we always recover something of the form (A, T_b)(A′, T_{b′}) ···. Hence, we recover a Euclidean transformation. Hence, the Euclidean group is the one generated by translations and orthogonal transformations. We have proved:
Theorem 16.1
The Euclidean group E_N is the group generated by translations and orthogonal transformations of R^N.
Euclidean symmetries of subsets
Consider a subset of R^2. In order to find the Euclidean transformations of R^2 that preserve it, follow these steps. 1) Try to see if translations are symmetries. 2) Then, try to see if there is one or many points about which rotations are symmetries, whether by all possible angles or by some discrete set of angles. 3) Then, try to see if there are one or many axes about which reflection is a symmetry.
Translations form the group R or R^2 (depending on how many directions you can translate; here, for simplicity, we're in 2-d only). Rotations about one given fixed point (any point) and by all possible angles form the group SO(2); if it's not all possible angles that we have, but just n of them (including the 0 angle), then this is the group Z_n. Reflections about any given fixed axis form the group Z_2 (including the identity transformation).
When we put things together: if the group actions commute, then it's a direct product. If not, it's a semi-direct product. In semi-direct products, we always have the order: reflections to the left of rotations, to the left of translations. Remember that Z_2 ⋉ SO(2) ≅ O(2), that O(n) ⋉ R^n = E(n), that Z_2 ⋉ Z_n ≅ D_n (in the latter, the case n = 2 is Z_2 ⋉ Z_2 = Z_2 × Z_2: the semi-direct product becomes a direct product). If there is translation, then there is a factor R^n. If there is rotation as well, then pick one center; this corresponds to the factor SO(n) in the semi-direct product. If there is reflection as well, then pick one axis; this corresponds to the factor Z_2 in the semi-direct product.
(Examples...)
17 Lecture 17
The group SO(2)
Let us explicitly construct the group SO(2). Write

A = [ a  b ]
    [ c  d ],   Aᵀ A = I,  det(A) = 1.

We have four conditions:

Aᵀ A = I  ⟹  a² + c² = 1,  b² + d² = 1,  ab + cd = 0.

The first two conditions imply that a = cos θ, c = sin θ and b = cos φ, d = sin φ for some
θ, φ ∈ [0, 2π) (or θ, φ ∈ R mod 2π). The third condition then is cos(θ − φ) = 0. Hence,
φ = θ ± π/2 mod 2π. This gives b = −sin θ, d = cos θ or b = sin θ, d = −cos θ. The last
condition,

det(A) = 1  ⟺  ad − bc = 1,

then implies that the first choice must hold:

A = A(θ) := [ cos θ  −sin θ ]
            [ sin θ   cos θ ],   θ ∈ [0, 2π).

This clearly is of dimension 1 (there is one real parameter remaining), as it should be.
Explicit multiplication of matrices gives

A(θ) A(θ′) = A(θ + θ′),   A(θ)⁻¹ = A(−θ).

In particular, SO(2) is abelian.
Here, the group manifold is the circle S¹. Indeed, we have one parameter θ, and the matrix
elements vary continuously for θ ∈ [0, 2π), and also going back to 0 instead of 2π. Every point
on S¹ corresponds to a unique group element A(θ), and all group elements are covered in this
way. Note: geometrically, the manifold is indeed S¹ rather than the interval [0, 2π) (which
strictly speaking wouldn't be a manifold anyway) because of periodicity, i.e. continuity from the
endpoint 2π back to the starting point 0.
The interpretation of SO(2) is that of rotations about the origin: we act with SO(2) on the
plane R² by

v′ = Av,   A ∈ SO(2),   v = [ x ]
                            [ y ],   (x, y) ∈ R².

This gives

x′ = x cos θ − y sin θ,   y′ = x sin θ + y cos θ.

All this makes it clear that SO(2) ≅ U(1). Indeed, just put the points x, y into the complex
plane via x + iy, and consider the action of U(1) as multiplication by e^{iθ}. Hence the
isomorphism is

SO(2) → U(1) : A(θ) ↦ e^{iθ}.
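The composition and inverse laws, and the fact that the correspondence with U(1) respects products, are easy to check numerically (a sketch, with our own naming):

```python
import numpy as np

def A(theta):
    # the matrix A(theta) constructed above
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

t1, t2 = 0.7, 2.1
addition_law = np.allclose(A(t1) @ A(t2), A(t1 + t2))      # A(t)A(t') = A(t + t')
inverse_law = np.allclose(np.linalg.inv(A(t1)), A(-t1))    # A(t)^{-1} = A(-t)
# under A(t) -> e^{it}, matrix products map to products of phases
iso_respects_product = np.isclose(np.exp(1j*(t1 + t2)), np.exp(1j*t1) * np.exp(1j*t2))
```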
The group O(2)
O(2) is made of two disconnected sets: O(2) = SO(2) ∪ C, where C = {orthogonal matrices
with determinant −1}. Note: C is not a group. The condition det = −1 implies, from the
previous analysis, that

C = { B(θ) := [ −cos θ  −sin θ ]
              [ −sin θ   cos θ ] : θ ∈ [0, 2π) }.

Note that, with

R = [ −1  0 ]
    [  0  1 ],

we have

A(θ) R = B(θ)

for all θ. Note also that R ∈ O(2), in fact R ∈ C. The group element R is the reflection w.r.t.
the y axis, and we have

O(2) = group generated by rotations about the origin and the reflection about the y axis.

O(2) is not abelian; in particular A(θ)R ≠ RA(θ) in general.
Clearly, then, O(2) cannot be isomorphic to Z₂ × SO(2), because both Z₂ and SO(2) are abelian.
The semi-direct product Z₂ ⋉ SO(2), however, is not abelian. We see that the subgroup Z₂ in
Z₂ ⋉ SO(2) corresponds to the group composed of the identity and the reflection, {I, R}. We
see also that det is the isomorphism of this subgroup onto Z₂ = {1, −1}.
The geometric interpretation of the semi-direct product structure is as follows: we can always
represent an element of O(2) as AB, where A is a rotation and B is either the identity or the
reflection R w.r.t. the y axis. Indeed, if C ∈ O(2) with det C = 1 we can take A = C and B = I,
and if det C = −1 we can take A = CR and B = R (using R² = I). Then let us perform in
succession C₂ then C₁ on the vector v. The result is C₁C₂v. The transformation C₃ = C₁C₂
can also be written as C₃ = A₃B₃. We have

C₁C₂ = A₁B₁A₂B₂ = A₁(B₁A₂B₁⁻¹)B₁B₂

and we can identify A₃ = A₁(B₁A₂B₁⁻¹) and B₃ = B₁B₂.
So if we use the notation in pairs C = (B, A), we obtain

(B₁, A₁)(B₂, A₂) = (B₁B₂, A₁ B₁A₂B₁⁻¹),

which is exactly the semi-direct product structure with φ_B(A) = BAB⁻¹, as we had done before
(here we don't use explicitly the abstract Z₂, as we use directly the matrices B, which are I or
R; we could set B_i = φ(g_i) and use the pairs C = (g, A) instead).
Note: we have the following relation:

R A(θ) R = A(−θ).

But then,

A(θ) R A(−θ) = A(θ) R A(−θ) R R = A(θ)² R = A(2θ) R = B(2θ).

The quantity A(θ) R A(−θ) has a nice geometric meaning: it is the reflection w.r.t. the axis
rotated by angle θ from the y axis. In order to cover all B(θ) for θ ∈ [0, 2π), we only need
A(θ) R A(−θ) for θ ∈ [0, π). This indeed gives all possible axes passing through the origin. Hence:

O(2) ≅ group of all rotations about the origin and all reflections about all different axes passing through the origin.
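The relations B(θ) = A(θ)R, R A(θ) R = A(−θ) and A(θ) R A(−θ) = B(2θ) can be checked numerically (a sketch; helper names are ours):

```python
import numpy as np

def A(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

R = np.diag([-1.0, 1.0])            # reflection w.r.t. the y axis

t = 0.9
B = A(t) @ R                        # an element of the det = -1 component C
in_C = np.isclose(np.linalg.det(B), -1.0)
conjugation = np.allclose(R @ A(t) @ R, A(-t))            # R A(t) R = A(-t)
rotated_axis = np.allclose(A(t) @ R @ A(-t), A(2*t) @ R)  # A(t) R A(-t) = B(2t)
non_abelian = not np.allclose(A(t) @ R, R @ A(t))
```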
The group SU(2)
Let us look at SU(2). We will use the Pauli matrices: along with the identity I, the Pauli
matrices form a basis (over C) for the linear space M₂(C):

σ₁ = σ_x := [ 0  1 ]     σ₂ = σ_y := [ 0  −i ]     σ₃ = σ_z := [ 1   0 ]
            [ 1  0 ],                [ i   0 ],                [ 0  −1 ].

We will denote a general complex 2 by 2 matrix by

A = aI + b · σ

where we understand σ as a vector of Pauli matrices, so that b · σ = b_x σ_x + b_y σ_y + b_z σ_z.
Some properties:

σᵢ² = I,  σᵢσⱼ = −σⱼσᵢ (i ≠ j),  σ_x σ_y = iσ_z,  σ_y σ_z = iσ_x,  σ_z σ_x = iσ_y,
σᵢ† = σᵢ,  det(A) = a² − ||b||².

From these, we find:

(x · σ)(y · σ) = (x · y) I + i (x × y) · σ

where x × y is the vector product. We also find

(aI + b · σ)(aI − b · σ) = a² I − (b · b) I − i (b × b) · σ = (a² − ||b||²) I = det(A) I.

Hence,

(aI + b · σ)⁻¹ = (aI − b · σ) / det(A).
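These identities are easy to verify numerically; the sketch below (function names ours) checks the product rule, the determinant formula and the inverse formula on arbitrary choices of a, b, x, y:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dot_sigma(b):
    # b . sigma = b_x sigma_x + b_y sigma_y + b_z sigma_z
    return b[0]*sx + b[1]*sy + b[2]*sz

x, y = np.array([1.0, -2.0, 0.5]), np.array([0.3, 1.1, -0.7])
# (x.sigma)(y.sigma) = (x.y) I + i (x cross y).sigma
product_identity = np.allclose(dot_sigma(x) @ dot_sigma(y),
                               np.dot(x, y)*I2 + 1j*dot_sigma(np.cross(x, y)))

a, b = 0.4, np.array([1.0, 2.0, 3.0])
A = a*I2 + dot_sigma(b)
det_identity = np.isclose(np.linalg.det(A), a**2 - np.dot(b, b))
inverse_identity = np.allclose(np.linalg.inv(A), (a*I2 - dot_sigma(b)) / np.linalg.det(A))
```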
All these properties point to a nice analogy with complex numbers. Define, instead of one, three imaginary
numbers: ī = −iσ_x, j̄ = −iσ_y, k̄ = −iσ_z (signs chosen so that ī j̄ = k̄). They satisfy ī² = j̄² = k̄² = −1.
Definition: The division algebra H of quaternions is the non-commutative field of all real linear combi-
nations z = a + b_x ī + b_y j̄ + b_z k̄ (a, b_x, b_y, b_z ∈ R), with the relations ī² = j̄² = k̄² = −1 and
ī j̄ = −j̄ ī = k̄ and cyclic permutations. It is associative.
We define the quaternion conjugate by z̄ = a − b_x ī − b_y j̄ − b_z k̄ (from the point of view of 2 by 2 matrices, this
is z̄ = z†), and we have z̄ z = z z̄ = a² + ||b||² ≥ 0, with equality iff z = 0. Hence we can define |z| = √(z̄ z).
We also have z⁻¹ = z̄/|z|², as for complex numbers. An important identity is |z₁z₂| = |z₁| |z₂| for any
z₁, z₂ ∈ H, which follows from |z₁z₂|² = (z₁z₂)‾ z₁z₂ = z̄₂ z̄₁ z₁ z₂ = z̄₂ z₂ z̄₁ z₁, where we used the fact that z̄₁ z₁ ∈ R ⊂ H
hence commutes with everything. Any quaternion z has a unique inverse, except for 0. This is what makes
the quaternions a division algebra: we have the addition and the multiplication, with distributivity and
with two particular numbers, 0 and 1, having the usual properties, and we have that only 0 doesn't have
a multiplicative inverse. Note that there are no other division algebras that satisfy associativity, besides
the real numbers, the complex numbers and the quaternions. There is one more division algebra, which
is not associative in addition to not being commutative: the octonions, with 7 imaginary numbers.
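The 2 by 2 matrix realisation of the quaternions and the multiplicativity |z₁z₂| = |z₁| |z₂| can be checked as follows (a sketch, names ours; in this realisation |z|² = a² + ||b||² equals det(z)):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
iq, jq, kq = -1j*sx, -1j*sy, -1j*sz     # the three imaginary units as 2x2 matrices

def quat(a, bx, by, bz):
    """The quaternion a + bx i + by j + bz k in the 2x2 matrix realisation."""
    return a*np.eye(2) + bx*iq + by*jq + bz*kq

def qnorm(z):
    # |z|^2 = a^2 + ||b||^2 = det(z) in this realisation
    return np.sqrt(np.linalg.det(z).real)

unit_relations = (np.allclose(iq @ iq, -np.eye(2)) and np.allclose(iq @ jq, kq)
                  and np.allclose(jq @ iq, -kq))
z1, z2 = quat(1.0, 2.0, -0.5, 0.3), quat(-0.7, 0.1, 1.2, 0.4)
modulus_multiplicative = np.isclose(qnorm(z1 @ z2), qnorm(z1) * qnorm(z2))
```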
Group SU(2): with A = aI + x · σ, we require det(A) = 1, hence a² − x · x = 1, and A† = A⁻¹,
hence a* I + x* · σ = aI − x · σ, so that a ∈ R and x* = −x. Writing x = ib we have b ∈ R³
and a² + ||b||² = 1. That is,

SU(2) = { A = aI + i b · σ : a ∈ R, b ∈ R³, a² + ||b||² = 1 }.

Note that, in terms of quaternions, this is:

SU(2) = { z ∈ H : |z| = 1 }.

Note the similarity with

U(1) = { z ∈ C : |z| = 1 }.

Note that in both cases we have a group because in both cases |z₁z₂| = |z₁| |z₂|, so the condition |z| = 1 is
preserved under multiplication.
Geometrically, the condition |z| = 1 for quaternions is the condition for a 3-sphere in R⁴. This is the
manifold of SU(2).
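As a quick check (names ours), a point (a, b) on the unit 3-sphere in R⁴ should give a matrix aI + i b · σ that is unitary with determinant 1:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(1)
v = rng.normal(size=4)
v /= np.linalg.norm(v)                 # a point (a, b) on the 3-sphere in R^4
a, b = v[0], v[1:]
U = a*np.eye(2) + 1j*(b[0]*sx + b[1]*sy + b[2]*sz)   # A = aI + i b.sigma
is_unitary = np.allclose(U.conj().T @ U, np.eye(2))
has_det_one = np.isclose(np.linalg.det(U), 1.0)
```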
18 Lecture 18
Invariance of physical laws: scalars and vectors
Invariance of mathematical objects under transformations is the main concept that leads to the
study of groups. A proper definition of an invariance requires us to say three things: 1) what
is the family of mathematical objects that we consider, 2) what is the family of transformations
that we admit, and 3) what particular object we claim to be invariant. We have seen the
examples of subsets in the plane, and briefly of vector fields.
We now consider other examples. First, the equations of motion of physics. Suppose the
3-dimensional vectors x(t) and y(t), as functions of time t, satisfy the following equations (New-
ton's equations):

m d²x(t)/dt² = −GmM (x(t) − y(t)) / ||x(t) − y(t)||³,   M d²y(t)/dt² = −GmM (y(t) − x(t)) / ||x(t) − y(t)||³.   (4)

Then clearly the new vectors x′(t) = Ax(t) + b and y′(t) = Ay(t) + b, where A ∈ O(3) is an
orthogonal matrix and b is a constant vector, also satisfy the same equations, because

d²/dt² (Ax(t) + b) = A d²x(t)/dt²

and

−GmM (Ax(t) − Ay(t)) / ||Ax(t) − Ay(t)||³ = −GmM A (x(t) − y(t)) / ||x(t) − y(t)||³,

so that for instance

m d²x′(t)/dt² + GmM (x′(t) − y′(t)) / ||x′(t) − y′(t)||³ = A ( m d²x(t)/dt² + GmM (x(t) − y(t)) / ||x(t) − y(t)||³ ) = 0.

The transformations (x, y) ↦ (Ax + b, Ay + b) satisfy the rules of the Euclidean group: hence,
Newton's equations are invariant under the Euclidean group.
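The covariance computation above amounts to the identity F(Ax + b, Ay + b) = A F(x, y) for the force term, which can be checked numerically (a sketch in units where G = 1; names ours):

```python
import numpy as np

G, m, M = 1.0, 2.0, 5.0   # units where G = 1, masses arbitrary

def force_term(x, y):
    # right-hand side of the equation for x: -G m M (x - y)/||x - y||^3
    r = x - y
    return -G * m * M * r / np.linalg.norm(r)**3

rng = np.random.default_rng(2)
A, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random A in O(3)
b = rng.normal(size=3)
x, y = rng.normal(size=3), rng.normal(size=3)
# covariance: the force term evaluated on transformed positions is A times the old one
covariant = np.allclose(force_term(A @ x + b, A @ y + b), A @ force_term(x, y))
```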
Let us concentrate on the O(3) transformations for a while, forgetting about the translations.
Definition: An object V that transforms like V′ = AV under the transformation A ∈ O(3) is
said to be a vector under O(3). If it transforms like that only under SO(3), then it is a vector
under SO(3). An object S that transforms like S′ = S under the same transformation is said
to be a scalar.
The basic vectors in our example are x(t) and y(t): 3-component functions of t. They transform
as vectors by the very definition of how we want to act on space. But then, the object x(t) − y(t)
is a new object, formed out of the previous ones. It is also a vector. Likewise, d²x(t)/dt² is a
vector. Further, the object ||x(t) − y(t)|| is also a new object, and it is a scalar. All in all, the
left- and right-hand sides of (4) are all vectors. This fact, that they are all vectors, is why the
equation is invariant under O(3).
Other examples: x(t) · y(t) is a scalar because the inner product is invariant under O(3),
x(t + t₀) is a vector, etc.
Another example that can be worked out: if x and y are vectors under SO(3), then so is x × y
(the vector product).
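A numerical illustration (names ours): the vector product is equivariant under SO(3), while an O(3) element of determinant −1 flips its sign, which is why x × y is a vector under SO(3) only:

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                  # force det = +1: Q in SO(3)
x, y = rng.normal(size=3), rng.normal(size=3)

vector_under_so3 = np.allclose(np.cross(Q @ x, Q @ y), Q @ np.cross(x, y))
P = np.diag([-1.0, 1.0, 1.0]) @ Q  # det = -1: in O(3) but not SO(3)
sign_flips = np.allclose(np.cross(P @ x, P @ y), -P @ np.cross(x, y))
```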
A more precise definition of vectors/scalars would require the theory of representations. For our
purposes, we may do as follows. Let f(x₁, x₂, . . .) be a function of many variables x₁, x₂, . . . ∈ R³.
Consider a transformation of variables, with x′₁ = Ax₁, x′₂ = Ax₂, . . ., where A ∈ O(3) (resp.
SO(3)). Then, we say that f(x₁, x₂, . . .) is a vector under O(3) (resp. SO(3)) if f(x′₁, x′₂, . . .) =
A f(x₁, x₂, . . .) for all A ∈ O(3) (resp. SO(3)). If there is just one variable x₁, then we can see
this variable as a position variable in R³; then a vector according to this definition is just an O(3)
(or SO(3)) invariant vector field.
More complicated example (which doesn't quite fall into the more precise definition above, but
which follows the general principle): the 3-component differential operator

( ∂/∂x₁ )
( ∂/∂x₂ )
( ∂/∂x₃ )

is also a vector. This is because, by the chain rule,

∂/∂xᵢ = Σⱼ (∂x′ⱼ/∂xᵢ) ∂/∂x′ⱼ = Σ_{j,k} A_{jk} (∂x_k/∂xᵢ) ∂/∂x′ⱼ = Σⱼ A_{ji} ∂/∂x′ⱼ

so that (using the vector notation)

∂/∂x = Aᵀ ∂/∂x′  ⟹  ∂/∂x′ = A ∂/∂x,

where we used Aᵀ = A⁻¹.
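The transformation law ∂/∂x′ = A ∂/∂x can be illustrated with finite differences: if g is f expressed in the primed coordinates (so g(x′) = f(Aᵀx′)), its gradient is A times the old gradient. A sketch (the test function f is an arbitrary choice of ours):

```python
import numpy as np

def f(v):
    # an arbitrary scalar test function of position
    return v[0] * v[1] + np.sin(v[2])

def num_grad(g, w, h=1e-6):
    # central finite differences
    return np.array([(g(w + h*e) - g(w - h*e)) / (2*h) for e in np.eye(3)])

rng = np.random.default_rng(4)
A, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # orthogonal change of coordinates x' = A x
w = rng.normal(size=3)                         # a point in the primed coordinates
g = lambda w: f(A.T @ w)                       # f in primed coordinates (x = A^T x')
# d/dx' = A d/dx: the primed gradient is A times the unprimed gradient
grad_transforms = np.allclose(num_grad(g, w), A @ num_grad(f, A.T @ w), atol=1e-5)
```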
19 Lecture 19
Inner product
Let V be a finite-dimensional vector space over C. A map V × V → C, (x, y) ↦ (x, y) is an inner
product if it has the properties:

(x, y)* = (y, x),  (x, ay + bz) = a(x, y) + b(x, z),  (x, x) ≥ 0,  (x, x) = 0 ⟺ x = 0

(z* = z̄ is the complex conjugate). This makes V into a Hilbert space. Note that the first and
second properties imply

(ay + bz, x) = a*(y, x) + b*(z, x).

This, along with the second property, is called sesquilinearity. The restriction to the real vector
space (real restriction: C^N becomes R^N; same basis, but only consider real coefficients) then
gives

(x, y) = (y, x),  (ay + bz, x) = a(y, x) + b(z, x).

This restriction is bilinear and symmetric. The only example we will use is:

(x, y) = Σᵢ xᵢ* yᵢ.

In particular, using matrix and column vector notation, this is

(x, y) = x† y.

This implies that if A is a linear operator, then

(x, Ay) = (A† x, y).

The norm of a vector is defined by

||x|| = √(x, x)

(positive square root).
Structure of O(N)
Theorem 19.1
A real-linear transformation A of R^N is such that ||Ax|| = ||x|| for all x ∈ R^N iff A ∈ O(N).
Proof. If A ∈ O(N), then Aᵀ = A⁻¹, so that

||Ax||² = (Ax, Ax) = (Ax)ᵀ Ax = xᵀ Aᵀ A x = xᵀ x = ||x||²   (5)

where in the 2nd step we use reality, so that † = ᵀ. Also: if ||Ax|| = ||x||, then replace x by
x + y and use bilinearity and symmetry:

(A(x+y), A(x+y)) = (x+y, x+y)  ⟹  (Ax, Ax) + (Ay, Ay) + 2(Ax, Ay) = (x, x) + (y, y) + 2(x, y).

Using ||Ax|| = ||x|| and ||Ay|| = ||y||, the first and last terms cancel out, so

(Ax, Ay) = (x, y).

Hence (Aᵀ A x, y) = (x, y), so that (Aᵀ A x − x, y) = 0. This holds for all y, hence it must be
that Aᵀ A x − x = 0 (obtained by choosing y = Aᵀ A x − x), so that Aᵀ A x = x. This holds for
all x, hence Aᵀ A = I. Hence A ∈ O(N).
Note: the same holds if we take A real and ask for ||Ax|| = ||x|| for all x ∈ C^N. Indeed,
this includes x ∈ R^N, so in one direction this is obvious; in the other direction we use again
(Ax, Ax) = (Aᵀ A x, x), which holds for x ∈ C^N as well.
Theorem 19.2
If x ∈ C^N is an eigenvector of A ∈ O(N) with eigenvalue λ, then |λ| = 1.
Proof. Ax = λx with x ≠ 0, hence (Ax, Ax) = (x, x), hence (x, x) = |λ|² (x, x), hence |λ|² = 1
since x ≠ 0.
Theorem 19.3
If A ∈ SO(3), then there exists a vector n ∈ R³ such that An = n (i.e. with eigenvalue 1).
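Theorems 19.1, 19.2 and 19.3 can be illustrated numerically on a random SO(3) matrix (a sketch; the QR construction of a random orthogonal matrix is our own choice):

```python
import numpy as np

rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                              # force det = +1: Q in SO(3)

x = rng.normal(size=3)
norm_preserved = np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))  # Theorem 19.1
evals = np.linalg.eigvals(Q)
all_unit_modulus = np.allclose(np.abs(evals), 1.0)                     # Theorem 19.2
has_eigenvalue_one = bool(np.any(np.isclose(evals, 1.0)))              # Theorem 19.3
```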
20 Lecture 20
Proof. Consider P(λ) = det(A − λI). We know that P(0) = 1 because A ∈ SO(3). Also,
P(λ) = −λ³ + . . . + 1, because the only order-3 term comes from the −λI part. Hence,
P(λ) = (λ₁ − λ)(λ₂ − λ)(λ₃ − λ) with λ₁λ₂λ₃ = 1.
There are 2 possibilities. First, at least one of the eigenvalues, say λ₁, is complex. It must be a
pure phase: λ₁ = e^{iθ} for some θ ∈ (0, 2π), θ ≠ π. If x is the associated eigenvector, then
Ax = e^{iθ} x, hence Ax* = e^{−iθ} x*, so that x* is a new eigenvector with a different
eigenvalue, λ₂ = e^{−iθ}. But since λ₁λ₂λ₃ = 1, it must be that λ₃ = 1.
Second, all λᵢ are real, so all are ±1. Since λ₁λ₂λ₃ = 1, an even number of them are −1, hence
at least one is 1.
Finally, if An = n, and if n is not real, then n* is another eigenvector with eigenvalue 1. Hence
the third eigenvalue also must be 1: all eigenvalues are 1. Since the eigenvectors then span C³,
we have Ax = x for any x ∈ C³, so that A = I, and we may choose n real. Hence, if An = n,
then n is real or may be chosen so.
We may normalise to ||n|| = 1. Suppose n = x̂ (the unit vector in the x direction). Then An = n
implies that the first column of A is (1, 0, 0)ᵀ:

A = [ 1  ∗  ∗ ]
    [ 0  ∗  ∗ ]
    [ 0  ∗  ∗ ].

Further, Aᵀ A = I implies that the first row is (1, 0, 0) as well:

A = [ 1  0  0 ]
    [ 0  ∗  ∗ ]
    [ 0  ∗  ∗ ].

The 2 by 2 matrix in the bottom-right, which we denote a, has the property

aᵀ a = I,  det(a) = 1,

hence it is an SO(2) matrix. Hence,

A = [ 1    0       0    ]
    [ 0  cos θ  −sin θ ]
    [ 0  sin θ   cos θ ].   (6)

That is, A is a rotation by θ around the x axis. In general, A will be a rotation about the
axis spanned by n, i.e. the axis {cn : c ∈ R}, if An = n. Hence, an SO(3) matrix is always a
rotation w.r.t. a certain axis. Clearly, if A is a rotation, then (Ax, Ax) = (x, x) for any x ∈ R³,
so A ∈ O(3). Also, An = n for any vector n along the rotation axis, and in a perpendicular,
right-handed basis that includes n, A has the form (6), hence A ∈ SO(3). That is:

SO(3) = all rotations about all possible axes in R³.
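A numerical illustration (names ours): conjugating the block form (6) by a matrix P ∈ SO(3) gives a rotation by θ about the axis n = P x̂, which fixes n; and since the trace is basis independent, (6) gives Tr A = 1 + 2 cos θ, recovering the angle:

```python
import numpy as np

def rotation_about_x(theta):
    # the block form (6)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0, 0], [0, c, -s], [0, s, c]])

rng = np.random.default_rng(6)
P, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(P) < 0:
    P[:, 0] *= -1                       # P in SO(3): a right-handed change of basis

theta = 1.2
A = P @ rotation_about_x(theta) @ P.T   # rotation by theta about the axis n = P e_x
n = P @ np.array([1.0, 0.0, 0.0])
axis_is_fixed = np.allclose(A @ n, n)   # A n = n
# the trace is basis independent, so (6) gives Tr A = 1 + 2 cos(theta)
angle_recovered = np.isclose(np.trace(A), 1 + 2*np.cos(theta))
```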
Geometrically: we may parametrise the space of axes times direction of rotation by the space
of unit vectors in R³: the rotation will be about the axis {cn : c ∈ R}, in the right-handed
direction w.r.t. n. This space is the unit sphere in R³. We may then characterise the space of
all rotations, by angles in [0, π] in any direction, by giving a non-unit length to the vectors,
equal to the angle of rotation. Hence, this space is the closed ball of radius π in R³. But we
are over-counting: we need to take away exactly half of the sphere of radius π (say, the open
upper hemisphere, plus half of the equator including exactly one of its two endpoints). This is
the space of all of SO(3). To make it a manifold: we simply have to connect points on the
sphere of radius π that are diametrically opposed. These points are indeed very similar
rotations: a rotation in one direction by π, or in the other direction by π, is the same rotation.
This is the manifold of SO(3), and it is such that multiplications (i.e. compositions) of
rotations by small angles correspond to small motions on the manifold.
For O(3), we simply need to add the reflections, as before. The set O(3) is all rotations, and
all reflections about all possible planes. The reflection about all three axes at once is described
by the matrix −I, and the group structure is O(3) ≅ Z₂ × SO(3), where the Z₂ is the subgroup
{I, −I} (a direct product, since −I commutes with everything). But there is another way of
representing O(3): we could also consider the reflection about the y-z plane, for instance. This
is the matrix R = diag(−1, 1, 1), and the subgroup {I, R} also has the structure of Z₂. Using
this Z₂, we then can write O(3) ≅ Z₂ ⋉ SO(3): the same semi-direct product that we used in
the case of O(n) with n even.
21 To be adjusted
We can now construct particular elements of the Euclidean group that are of geometric interest,
in 3 dimensions (and also in 2) for simplicity: rotations or reflections about an axis passing
through a translated point b. These are described by, for A ∈ O(3) (or A ∈ O(2)),

R_{A,b}(x) := T_b ∘ A ∘ T_{−b}(x) = Ax − Ab + b.

Indeed, they preserve the point b: with x = b we obtain back b. Moreover, they preserve the
length of vectors starting at b and ending at x: ||R_{A,b}(x) − R_{A,b}(b)|| = ||Ax − Ab + b − b|| =
||A(x − b)|| = ||x − b||. (More generally, of course, this is an orthogonal transformation with
respect to the point b.) Take A ∈ SO(3): this is a rotation, and there is a vector n such that
An = n (this is the vector in the axis of rotation). We could modify b by a vector in the
direction of n without modifying R_{A,b}. That is, R_{A,b+cn}(x) = Ax − A(b + cn) + b + cn =
Ax − Ab + b = R_{A,b}(x).
Structure of U(N)
Theorem 21.1
A complex linear transformation A on C^N preserves the norm, ||Ax|| = ||x|| for all x ∈ C^N, iff
A ∈ U(N).
Proof. 1) If A ∈ U(N) then ||Ax||² = (Ax, Ax) = x† A† A x = x† x = (x, x) = ||x||², hence
||Ax|| = ||x||.
2) Consider x = y + az. We have ||Ax||² = (Ay + aAz, Ay + aAz) = ||Ay||² + |a|² ||Az||² +
a(Ay, Az) + a*(Az, Ay). Likewise, ||x||² = ||y||² + |a|² ||z||² + a(y, z) + a*(z, y). Then, using
||Ay||² = ||y||² and ||Az||² = ||z||², we find

a(Ay, Az) + a*(Az, Ay) = a(y, z) + a*(z, y).

Choosing a = 1, this is (Ay, Az) + (Az, Ay) = (y, z) + (z, y), and choosing a = i, this is (Ay, Az) −
(Az, Ay) = (y, z) − (z, y). Combining these two equations, we find (Ay, Az) = (y, z) for all
y, z ∈ C^N. Then we can use similar techniques as before: using (Ay, Az) = (A† A y, z) we get
((A† A − I) y, z) = 0 for all y, z ∈ C^N, which implies A† A = I.
A 3-sphere in R⁴ can be imagined by enumerating its slices, starting from a pole (a point on the
3-sphere). The slices are 2-spheres, much like the slices of a 2-sphere are 1-spheres (i.e. circles).
Starting from a point on the 3-sphere, we enumerate growing 2-spheres until we reach the single
one with the maximal radius (this is the radius of the 3-sphere, i.e. 1 in our case), then we
enumerate shrinking 2-spheres back to a point. Putting all these objects together, we see that
the set is a double cover of an open unit ball in R³, plus a single unit 2-sphere. This looks very
similar to the manifold of SO(3), but there we had exactly half of that (an open ball plus half
of a 2-sphere). There is a group structure behind this:
A nice theorem
Theorem 21.2
SU(2)/Z₂ ≅ SO(3)
Proof. This proof is not complete, but the main ingredients are there. The idea of the proof is
to use the homomorphism theorem, with a homomorphism Π : SU(2) → SO(3) that is onto,
such that ker Π ≅ Z₂.
Proposition: There exists a linear bijective map v : 𝔥 → R³ between the real-linear space 𝔥 of
self-adjoint traceless 2 by 2 matrices, and the real-linear space R³.
Proof. Note that the Pauli matrices are self-adjoint traceless 2 by 2 matrices. Take a general 2
by 2 matrix A = aI + b · σ for a, b_x, b_y, b_z ∈ C. The condition of tracelessness imposes a = 0.
The condition of self-adjointness imposes b* = b, hence b ∈ R³. Hence, the real-linear space 𝔥
of self-adjoint traceless 2 by 2 matrices is the space of real linear combinations A = b · σ. Given
such a matrix A, we have a unique v(A) = b ∈ R³, hence we have injectivity. On the other
hand, given any b ∈ R³, we can form A = b · σ, hence we have surjectivity. Finally, it is clear
that given A, A′ ∈ 𝔥 and c ∈ R, we have v(cA) = c v(A) and v(A + A′) = v(b · σ + b′ · σ) =
v((b + b′) · σ) = b + b′ = v(A) + v(A′), so v is linear.
Given any U ∈ SU(2), we can form a linear bijective map φ_U : 𝔥 → 𝔥 as follows:

φ_U(A) = U A U†.

This maps 𝔥 into 𝔥 because if A ∈ 𝔥, then 1) Tr(φ_U(A)) = Tr(U A U†) = Tr(U† U A) = Tr(A) = 0,
and 2) (U A U†)† = U A† U† = U A U†. Hence, φ_U(A) ∈ 𝔥. Moreover, it is bijective because 1)
injectivity: if U A U† = U A′ U† then U† U A U† U = U† U A′ U† U, hence A = A′, and 2) surjectivity:
for any B ∈ 𝔥, we have that U† B U ∈ 𝔥 (by the same arguments as above) and we have
φ_U(U† B U) = U U† B U† U = B, so we have found an A = U† B U that maps to B. Finally, it
is linear, quite obviously.
The map φ_U induces a map on R³ via the map v. We define R_U : R³ → R³ by

R_U = v ∘ φ_U ∘ v⁻¹

for any U ∈ SU(2). By the properties of φ_U and of v derived above, we have that R_U is linear
and bijective.
We now want to show that R_U ∈ SO(3).
1) From the properties of Pauli matrices, we know that det(b · σ) = −||b||². Hence, we have for
any A ∈ 𝔥 that det(A) = −||v(A)||², or in other words det(v⁻¹(b)) = −||b||². Hence,

||R_U(b)||² = ||v(φ_U(v⁻¹(b)))||² = −det(φ_U(v⁻¹(b))) = −det(U v⁻¹(b) U†) = −det(v⁻¹(b)) = ||b||².

That is, R_U is a real-linear map on R³ that preserves lengths of vectors. By the previous
theorems, it must be that R_U ∈ O(3).
2) Further, consider the map g : SU(2) → R given by g(U) = det(R_U) (where we see the linear map R_U
as a 3 by 3 real orthogonal matrix). This is continuous as a function of the matrix elements
of U. Indeed, we can calculate any matrix element of R_U by choosing two basis vectors x and
y in R³ and by computing x · R_U(y). This is x · v(U v⁻¹(y) U†). The operation U ↦ U†
and the operations of matrix multiplication are continuous in the matrix elements, hence the
map U ↦ U v⁻¹(y) U† is, for any matrix element of the resulting 2 by 2 matrix, continuous in
the matrix elements of U. Since v is linear, it is also continuous, and finally the dot-product
operation is continuous. Hence, all matrix elements of R_U are continuous functions of the
matrix elements of U, so that det(R_U) is also a continuous function of the matrix elements of
U. Moreover, we know that with U = I, we find R_U = I (the former: identity 2 by 2 matrix,
the latter: identity 3 by 3 matrix). Hence, g(I) = 1. But since g(U) ∈ {1, −1} (because the
determinant of an O(3) matrix is ±1), it must be that g(U) = 1 for all U ∈ SU(2) that can be
reached by a continuous path from I (indeed, if γ : [0, 1] → SU(2) is such a continuous path,
γ(0) = I and γ(1) = U with γ(t) a continuous function of t, then g(γ(t)) is a continuous function
of t with g(γ(t)) ∈ {1, −1} and g(γ(0)) = 1; the only possibility is g(γ(1)) = 1 by continuity).
Since SU(2) is connected, all U ∈ SU(2) can be reached by a continuous path from I, hence
g(U) = 1 for all U ∈ SU(2), hence det(R_U) = 1 for all U ∈ SU(2), hence R_U ∈ SO(3).
We have shown that R_U ∈ SO(3). Hence, we have a map Π : SU(2) → SO(3) given by
Π(U) = R_U.
We now want to show that Π is a homomorphism. We have

v⁻¹(R_{U₁U₂}(b)) = φ_{U₁U₂}(v⁻¹(b)) = U₁U₂ v⁻¹(b) U₂† U₁† = φ_{U₁}(φ_{U₂}(v⁻¹(b)))

hence

Π(U₁U₂) = R_{U₁U₂} = v ∘ φ_{U₁} ∘ φ_{U₂} ∘ v⁻¹ = (v ∘ φ_{U₁} ∘ v⁻¹)(v ∘ φ_{U₂} ∘ v⁻¹) = R_{U₁} R_{U₂},

which is the homomorphism property.
Then, we would have to prove that Π is onto; this requires a more precise calculation of what
R_U is as a function of the matrix elements of U. We will omit this step.
Finally, we can use the homomorphism theorem. We must calculate ker Π. The identity in O(3)
is the identity matrix. We have Π(U) = I ∈ O(3) iff v(φ_U(v⁻¹(b))) = b for all b ∈ R³, which
is true iff φ_U(b · σ) = b · σ for all b ∈ R³, which is true iff U (b · σ) U† = b · σ, iff U (b · σ) =
(b · σ) U, iff U σᵢ = σᵢ U for i = 1, 2, 3. Since also U I = I U, we then have that Π(U) = I ∈ O(3)
iff U (aI + b · σ) = (aI + b · σ) U for all a, b_x, b_y, b_z ∈ C. Hence, iff U A = A U for all A ∈ M₂(C).
This only holds if U = cI for some c ∈ C. Since we must have U ∈ SU(2), then |c|² = 1 and
det(U) = c² = 1, so that c = ±1. Hence, ker Π = {I, −I} ⊂ SU(2). Clearly, {I, −I} ≅ Z₂. This
shows the theorem.
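The homomorphism Π can be realised concretely: since Tr(σᵢσⱼ) = 2δᵢⱼ, the relation U(b·σ)U† = (R_U b)·σ gives (R_U)ᵢⱼ = ½ Tr(σᵢ U σⱼ U†). Below is a sketch (names ours) checking that R_U ∈ SO(3), the homomorphism property, and that −I lies in the kernel:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([sx, sy, sz])

def R_of(U):
    """(R_U)_{ij} = (1/2) Tr(sigma_i U sigma_j U^dag), from U(b.sigma)U^dag = (R_U b).sigma."""
    return np.array([[0.5 * np.trace(sigma[i] @ U @ sigma[j] @ U.conj().T).real
                      for j in range(3)] for i in range(3)])

def random_su2(seed):
    # a random point on the 3-sphere gives U = aI + i b.sigma in SU(2)
    rng = np.random.default_rng(seed)
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    return v[0] * np.eye(2) + 1j * (v[1]*sx + v[2]*sy + v[3]*sz)

U1, U2 = random_su2(7), random_su2(8)
R1, R2 = R_of(U1), R_of(U2)
in_so3 = np.allclose(R1.T @ R1, np.eye(3)) and np.isclose(np.linalg.det(R1), 1.0)
homomorphism = np.allclose(R_of(U1 @ U2), R1 @ R2)
minus_I_in_kernel = np.allclose(R_of(-np.eye(2, dtype=complex)), np.eye(3))
```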
A vector from Pauli matrices
An interesting example of a vector constructed from the Pauli matrices is the following. Suppose
we look at SU(2) transformations, and we decide to transform not 2-component vectors, but
rather unitary matrices X, as X ↦ X′ := UX for U ∈ SU(2). (Compare: the 2 by 2 traceless
hermitian matrices, the space 𝔥 introduced earlier, transform by conjugation: A ↦ A′ := U A U†.)
Let us consider the vector of 2 by 2 matrices x = X† σ X, with components (x)ᵢ given by

(x)ᵢ = X† σᵢ X,  i = 1, 2, 3.

How does this vector transform? We have

x′ = (X′)† σ X′ = X† U† σ U X.

But, for any fixed vector y, we found above that U (y · σ) U† = R_U(y) · σ, hence
y · (U† σ U) = R_U⁻¹(y) · σ = (Aᵀ y) · σ, where we write A := R_U ∈ SO(3) (so that R_U⁻¹ = Aᵀ).
Then

y · x′ = X† ((Aᵀ y) · σ) X = (Aᵀ y) · x,

or in other words,

(Ay) · x′ = y · x.

Since this must hold for all y, we find that x′ = Ax. Hence x is a vector. Here, though, we
had to make a proper definition of the A ∈ SO(3) associated with U ∈ SU(2), essentially using our
isomorphism SU(2)/Z₂ ≅ SO(3).
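This transformation law can be checked numerically (a sketch; it redefines the hypothetical R_of helper from the previous sketch, and the identity in fact holds for any 2 by 2 matrix X, unitary or not):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([sx, sy, sz])

def R_of(U):
    # same hypothetical helper as in the previous sketch
    return np.array([[0.5 * np.trace(sigma[i] @ U @ sigma[j] @ U.conj().T).real
                      for j in range(3)] for i in range(3)])

rng = np.random.default_rng(9)
v = rng.normal(size=4)
v /= np.linalg.norm(v)
U = v[0] * np.eye(2) + 1j * (v[1]*sx + v[2]*sy + v[3]*sz)   # U in SU(2)
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # any 2x2 matrix works

x = np.array([X.conj().T @ s @ X for s in sigma])    # components x_i = X^dag sigma_i X
Xp = U @ X                                           # X -> X' = U X
xp = np.array([Xp.conj().T @ s @ Xp for s in sigma])
A = R_of(U)
# x'_i = sum_j A_{ij} x_j, componentwise as 2x2 matrices: x transforms as a vector
transforms_as_vector = np.allclose(xp, np.einsum('ij,jkl->ikl', A, x))
```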
Invariance definition of the Euclidean group
The Euclidean group is the one that keeps invariant a certain mathematical object, and this is
what gives it its name. Consider R^N as a metric space: a space where a distance between points
is defined. The Euclidean N-dimensional space is R^N with the distance function given by

D(x, y) = ||x − y||.

Theorem 21.3
The set of transformations Q : R^N → R^N such that D(Q(x), Q(y)) = D(x, y) is the set of
Euclidean transformations.
Proof. First in the opposite direction: if Q is a Euclidean transformation, Q = (A, T_b), then

||Q(x) − Q(y)|| = ||Ax + b − Ay − b|| = ||A(x − y)|| = ||x − y||,

so indeed it preserves the distance function.
Second, in the direct direction: assume Q preserves the distance function. Let b = Q(0). Define
Q′ = T_{−b} ∘ Q. Hence, we have Q′(0) = 0: this preserves the origin. Hence, Q′ preserves lengths
of vectors: ||Q′(x)|| = ||Q′(x) − Q′(0)|| = ||x − 0|| = ||x||. More than that: Q′ also preserves
the inner product:

(Q′(x), Q′(y)) = ½ (Q′(x), Q′(x)) + ½ (Q′(y), Q′(y)) − ½ (Q′(x) − Q′(y), Q′(x) − Q′(y))
             = ½ ||Q′(x)||² + ½ ||Q′(y)||² − ½ ||Q′(x) − Q′(y)||²
             = ½ ||x||² + ½ ||y||² − ½ ||x − y||²
             = (x, y).
Now we show that Q′ is linear. Consider eᵢ, i = 1, 2, . . . , N, the unit vectors in orthogonal
directions (the standard basis for R^N as a vector space). Let e′ᵢ := Q′(eᵢ). Then, (e′ᵢ, e′ⱼ) = (eᵢ, eⱼ) =
δᵢⱼ. Hence e′ᵢ, i = 1, 2, . . . , N, also form an orthonormal basis. Now let x = Σᵢ xᵢ eᵢ for xᵢ ∈ R,
and write Q′(x) = Σᵢ x′ᵢ e′ᵢ (this can be done because the e′ᵢ form a basis). We can find x′ᵢ by taking
inner products with e′ᵢ: x′ᵢ = (Q′(x), e′ᵢ). But then x′ᵢ = (Q′(x), e′ᵢ) = (Q′(x), Q′(eᵢ)) = (x, eᵢ) =
xᵢ. Hence, Q′(x) = Σᵢ xᵢ e′ᵢ, which means that Q′ is a linear transformation. Hence, we have
found that Q′ is a linear transformation which preserves lengths of vectors in R^N, so it must be
in O(N). Hence, Q has the form T_b ∘ A for A ∈ O(N), so that Q ∈ E_N.
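The proof's recipe is constructive: from a distance-preserving Q one recovers b = Q(0), and the orthogonal part as Q′ = T_{−b} ∘ Q applied to the basis vectors. A sketch (names ours):

```python
import numpy as np

rng = np.random.default_rng(10)
A, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # an O(3) matrix
b = rng.normal(size=3)
Q = lambda x: A @ x + b                          # a distance-preserving map

x, y = rng.normal(size=3), rng.normal(size=3)
preserves_distance = np.isclose(np.linalg.norm(Q(x) - Q(y)), np.linalg.norm(x - y))

# the proof's recipe: b = Q(0), and Q' = T_{-b} o Q is the orthogonal part
b_rec = Q(np.zeros(3))
Qp = lambda v: Q(v) - b_rec
A_rec = np.column_stack([Qp(e) for e in np.eye(3)])   # columns Q'(e_i)
recovered = np.allclose(A_rec, A) and np.allclose(A_rec.T @ A_rec, np.eye(3))
```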
The Poincaré group