
Lie algebras

Course Notes
Alberto Elduque
Departamento de Matemáticas
Universidad de Zaragoza
50009 Zaragoza, Spain
© 2005-2008 Alberto Elduque
These notes are intended to provide an introduction to the basic theory of finite dimensional Lie algebras over an algebraically closed field of characteristic 0 and their representations. They are aimed at beginning graduate students in either Mathematics or Physics.
The basic references that have been used in preparing the notes are the books in the following list. By no means should these notes be considered as an alternative to the reading of these books.
N. Jacobson: Lie Algebras, Dover, New York, 1979. Republication of the 1962 original (Interscience, New York).
J.E. Humphreys: Introduction to Lie Algebras and Representation Theory, GTM 9, Springer-Verlag, New York, 1972.
W. Fulton and J. Harris: Representation Theory. A First Course, GTM 129, Springer-Verlag, New York, 1991.
W.A. de Graaf: Lie Algebras: Theory and Algorithms, North-Holland Mathematical Library, Elsevier, Amsterdam, 2000.
Contents

1 A short introduction to Lie groups and Lie algebras            1
   1. One-parameter groups and the exponential map               2
   2. Matrix groups                                              5
   3. The Lie algebra of a matrix group                          7

2 Lie algebras                                                  17
   1. Theorems of Engel and Lie                                 17
   2. Semisimple Lie algebras                                   22
   3. Representations of sl_2(k)                                29
   4. Cartan subalgebras                                        31
   5. Root space decomposition                                  34
   6. Classification of root systems                            37
   7. Classification of the semisimple Lie algebras             51
   8. Exceptional Lie algebras                                  54

3 Representations of semisimple Lie algebras                    61
   1. Preliminaries                                             61
   2. Properties of weights and the Weyl group                  64
   3. Irreducible representations                               68
   4. Freudenthal's multiplicity formula                        72
   5. Characters. Weyl's formulae                               77
   6. Tensor products decompositions                            84

A Simple real Lie algebras                                      89
   1. Real forms                                                89
   2. Involutive automorphisms                                  96
   3. Simple real Lie algebras                                 104
Chapter 1
A short introduction to Lie
groups and Lie algebras
This chapter is devoted to giving a brief introduction to the relationship between Lie groups and Lie algebras. This will be done in a concrete way, avoiding the general theory of Lie groups.
It is based on the very nice article by R. Howe: "Very Basic Lie Theory", Amer. Math. Monthly 90 (1983), 600-623.
Lie groups are important since they are the basic objects used to describe symmetry. This makes them an unavoidable tool in Geometry (think of Klein's Erlangen Program) and in Theoretical Physics.
A Lie group is a group endowed with a structure of smooth manifold, in such a way that both the algebraic group structure and the smooth structure are compatible, in the sense that both the multiplication ((g, h) ↦ gh) and the inverse map (g ↦ g^{-1}) are smooth maps.
To each Lie group a simpler object may be attached: its Lie algebra, which almost determines the group.
Definition. A Lie algebra over a field k is a vector space g, endowed with a bilinear multiplication

    [·,·] : g × g → g,   (x, y) ↦ [x, y],

satisfying the following properties:

    [x, x] = 0   (anticommutativity)
    [[x, y], z] + [[y, z], x] + [[z, x], y] = 0   (Jacobi identity)

for any x, y, z ∈ g.
Example. Let A be any associative algebra, with multiplication denoted by juxtaposition. Consider the new multiplication on A given by

    [x, y] = xy − yx

for any x, y ∈ A. It is an easy exercise to check that A, with this multiplication, is a Lie algebra, which will be denoted by A^−.
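The following small sketch is not part of the original notes; assuming the Python library numpy is available, it checks both axioms of the definition for the commutator bracket on random real matrices.

    # Sketch (not from the notes): numerically check the Lie algebra axioms
    # for the bracket [x, y] = xy - yx on random real matrices.
    import numpy as np

    rng = np.random.default_rng(0)
    bracket = lambda a, b: a @ b - b @ a

    x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

    # anticommutativity: [x, x] = 0
    assert np.allclose(bracket(x, x), 0)
    # Jacobi identity: [[x,y],z] + [[y,z],x] + [[z,x],y] = 0
    jacobi = bracket(bracket(x, y), z) + bracket(bracket(y, z), x) + bracket(bracket(z, x), y)
    assert np.allclose(jacobi, 0)
    print("commutator bracket satisfies [x,x] = 0 and the Jacobi identity")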
As for any algebraic structure, one can immediately define in a natural way the concepts of subalgebra, ideal, homomorphism, isomorphism, ..., for Lie algebras.
The most usual Lie groups and Lie algebras are groups of matrices and their Lie algebras. These concrete groups and algebras are the ones that will be considered in this chapter, thus avoiding the general theory.
1. One-parameter groups and the exponential map
Let V be a real finite dimensional normed vector space with norm ‖·‖. (So that V is isomorphic to R^n.)
Then End_R(V) is a normed space with

    ‖A‖ = sup{ ‖Av‖/‖v‖ : 0 ≠ v ∈ V } = sup{ ‖Av‖ : v ∈ V and ‖v‖ = 1 }.

The determinant provides a continuous (even polynomial) map det : End_R(V) → R. Therefore

    GL(V) = det^{-1}(R \ {0})

is an open set of End_R(V), and it is a group. Moreover, the maps

    GL(V) × GL(V) → GL(V), (A, B) ↦ AB,   and   GL(V) → GL(V), A ↦ A^{-1},

are continuous. (Actually, the first map is polynomial, and the second one rational, so they are smooth and even analytical maps. Thus, GL(V) is a Lie group.)
One-parameter groups
A one-parameter group of transformations of V is a continuous group homomorphism

    φ : (R, +) → GL(V).

Any such one-parameter group satisfies the following properties:

1.1 Properties.

(i) φ is differentiable.
Proof. Let F(t) = ∫_0^t φ(u) du. Then F'(t) = φ(t) for any t and, for any t, s:

    F(t + s) = ∫_0^{t+s} φ(u) du
             = ∫_0^t φ(u) du + ∫_t^{t+s} φ(u) du
             = ∫_0^t φ(u) du + ∫_t^{t+s} φ(t)φ(u − t) du
             = F(t) + φ(t) ∫_0^s φ(u) du
             = F(t) + φ(t)F(s).
But

    F'(0) = lim_{s→0} F(s)/s = φ(0) = I

(the identity map on V), and the determinant is continuous, so

    lim_{s→0} det( F(s)/s ) = lim_{s→0} det F(s) / s^n = 1 ≠ 0,

and hence a small s_0 can be chosen with invertible F(s_0). Therefore

    φ(t) = ( F(t + s_0) − F(t) ) F(s_0)^{-1}

is differentiable, since so is F.
(ii) There is a unique A ∈ End_R(V) such that

    φ(t) = e^{tA}   ( = Σ_{n=0}^∞ t^n A^n / n! ).

(Note that this series converges absolutely, since ‖A^n‖ ≤ ‖A‖^n, and uniformly on each bounded neighborhood of 0, in particular on B_s(0) = {A ∈ End_R(V) : ‖A‖ < s}, for any 0 < s ∈ R.) Besides, A = φ'(0).
Proof. For any 0 ≠ v ∈ V, let v(t) = φ(t)v. In this way, we have defined a map R → V, t ↦ v(t), which is differentiable and satisfies

    v(t + s) = φ(s)v(t)

for any s, t ∈ R. Differentiate with respect to s at s = 0 to get

    v'(t) = φ'(0)v(t),   v(0) = v,

which is a linear system of differential equations with constant coefficients. By elementary linear algebra(!), it follows that

    v(t) = e^{tφ'(0)} v

for any t. Moreover,

    ( φ(t) − e^{tφ'(0)} ) v = 0

for any v ∈ V, and hence φ(t) = e^{tφ'(0)} for any t.
(iii) Conversely, for any A ∈ End_R(V), the map φ : t ↦ e^{tA} is a one-parameter group.

Proof. If A and B are two commuting elements in End_R(V), then

    e^A e^B = lim_{n→∞} ( Σ_{p=0}^n A^p/p! )( Σ_{q=0}^n B^q/q! )
            = lim_{n→∞} ( Σ_{r=0}^n (A + B)^r/r! + R_n(A, B) ),
with

    R_n(A, B) = Σ_{1≤p,q≤n, p+q>n} (A^p/p!)(B^q/q!),

so

    ‖R_n(A, B)‖ ≤ Σ_{1≤p,q≤n, p+q>n} (‖A‖^p/p!)(‖B‖^q/q!) ≤ Σ_{r=n+1}^{2n} (‖A‖ + ‖B‖)^r / r!,

whose limit is 0. Hence, e^A e^B = e^{A+B}.
Now, given any A ∈ End_R(V) and any scalars t, s ∈ R, tA commutes with sA, so φ(t + s) = e^{tA+sA} = e^{tA} e^{sA} = φ(t)φ(s), thus proving that φ is a group homomorphism. The continuity is clear.
(iv) There exists a positive real number r and an open set U in GL(V) contained in B_s(I), with s = e^r − 1, such that the exponential map

    exp : B_r(0) → U,   A ↦ exp(A) = e^A,

is a homeomorphism.

Proof. exp is differentiable because of its uniform convergence. Moreover, its differential at 0 satisfies:

    d exp(0)(A) = lim_{t→0} (e^{tA} − e^0)/t = A,

so that

    d exp(0) = id   (the identity map on End_R(V))

and the Inverse Function Theorem applies.
Moreover, e^A − I = Σ_{n=1}^∞ A^n/n!, so ‖e^A − I‖ ≤ Σ_{n=1}^∞ ‖A‖^n/n! = e^{‖A‖} − 1. Thus U ⊆ B_s(I).
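A quick numerical illustration of properties (ii) and (iii) — not in the original notes, and assuming the Python libraries numpy and scipy are available — is the following check that t ↦ e^{tA} is indeed a group homomorphism:

    # Sketch (not from the notes): exp((t+s)A) = exp(tA) exp(sA), since tA and sA commute.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    t, s = 0.3, -1.7

    lhs = expm((t + s) * A)
    rhs = expm(t * A) @ expm(s * A)
    assert np.allclose(lhs, rhs)
    print("exp((t+s)A) = exp(tA) exp(sA) up to rounding")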
Note that for V = R (dim V = 1), GL(V) = R \ {0} and exp : R → R \ {0} is not onto, since it does not take negative values.
Also, for V = R^2, identify End_R(V) with Mat_2(R). Then, with

    A = ( 0  −1 )
        ( 1   0 ),

it follows that A^2 = −I, A^3 = −A and A^4 = I. It follows that

    e^{tA} = ( cos t  −sin t )
             ( sin t   cos t ).

In particular, e^{tA} = e^{(t+2π)A} and, therefore, exp is not one-to-one.
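The rotation example above can be checked numerically; this short sketch is not part of the original notes and assumes numpy and scipy are available.

    # Sketch (not from the notes): exp(tA) is the rotation by angle t, and it is
    # 2*pi-periodic in t, so exp is not one-to-one.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, -1.0], [1.0, 0.0]])
    t = 0.9
    rotation = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

    assert np.allclose(expm(t * A), rotation)
    assert np.allclose(expm(t * A), expm((t + 2 * np.pi) * A))
    print("exp(tA) is the rotation matrix, with period 2*pi in t")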
Adjoint maps
1. For any g ∈ GL(V), the linear map Ad g : End_R(V) → End_R(V), A ↦ gAg^{-1}, is an inner automorphism of the associative algebra End_R(V).
The continuous group homomorphism

    Ad : GL(V) → GL(End_R(V)),   g ↦ Ad g,

is called the adjoint map of GL(V).
2. For any A ∈ End_R(V), the linear map ad_A (or ad A) : End_R(V) → End_R(V), B ↦ [A, B] = AB − BA, is an inner derivation of the associative algebra End_R(V).
The linear map

    ad : End_R(V) → End_R(End_R(V)),   A ↦ ad_A (or ad A),

is called the adjoint map of End_R(V).
We will denote by gl(V) the Lie algebra End_R(V)^−. Then ad is a homomorphism of Lie algebras ad : gl(V) → gl(End_R(V)).
1.2 Theorem. The following diagram is commutative:

(1.1)
                     ad
         gl(V) ------------> gl(End_R(V))
           |                      |
          exp                    exp
           ↓                      ↓
         GL(V) ------------> GL(End_R(V))
                     Ad
Proof. The map φ : t ↦ Ad exp(tA) is a one-parameter group of transformations of End_R(V) and, therefore,

    Ad exp(tA) = exp(t𝒜)

with 𝒜 = φ'(0) ∈ gl(End_R(V)). Hence,

    𝒜 = lim_{t→0} ( Ad(exp(tA)) − I ) / t

and for any B ∈ End_R(V),

    𝒜(B) = lim_{t→0} ( exp(tA) B exp(−tA) − B ) / t
          = d/dt ( exp(tA) B exp(−tA) )|_{t=0}
          = AB − BA = ad_A(B).

Therefore, 𝒜 = ad_A and Ad( exp(tA) ) = exp(t ad_A) for any t ∈ R.
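Theorem 1.2 can also be tested numerically. The sketch below is not part of the original notes; it assumes numpy and scipy, and it represents ad_A as a matrix acting on column-major vectorized matrices via Kronecker products.

    # Sketch (not from the notes): Ad(exp A)(B) = exp(A) B exp(-A) agrees with
    # exp(ad_A)(B), where ad_A acts on vec(B) (column-major) as I (x) A - A^T (x) I.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(2)
    n = 3
    A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

    ad_A = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))      # matrix of ad_A on vec(B)
    lhs = expm(A) @ B @ expm(-A)                                 # Ad(exp A)(B)
    rhs = (expm(ad_A) @ B.flatten(order="F")).reshape((n, n), order="F")
    assert np.allclose(lhs, rhs)
    print("Ad(exp A) = exp(ad A) verified numerically")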
2. Matrix groups
2.1 Definition. Given a real vector space V, a matrix group on V is a closed subgroup of GL(V).

Any matrix group inherits the topology of GL(V), which is an open subset of the normed vector space End_R(V).

2.2 Examples.

1. GL(V) is a matrix group, called the general linear group. For V = R^n, we denote it by GL_n(R).
2. SL(V) = {A ∈ GL(V) : det A = 1} is called the special linear group.

3. Given a nondegenerate symmetric bilinear map b : V × V → R, the matrix group

    O(V, b) = {A ∈ GL(V) : b(Au, Av) = b(u, v) ∀u, v ∈ V}

is called the orthogonal group relative to b.

4. Similarly, given a nondegenerate alternating form b_a : V × V → R, the matrix group

    Sp(V, b_a) = {A ∈ GL(V) : b_a(Au, Av) = b_a(u, v) ∀u, v ∈ V}

is called the symplectic group relative to b_a.

5. For any subspace U of V, P(U) = {A ∈ GL(V) : A(U) ⊆ U} is a matrix group. By taking a basis of U and completing it to a basis of V, it consists of the endomorphisms whose associated matrix is in upper block triangular form.

6. Any finite intersection of matrix groups is again a matrix group.

7. Let T_1, ..., T_n be elements in End_R(V); then G = {A ∈ GL(V) : [T_i, A] = 0, i = 1, ..., n} is a matrix group.
In particular, consider C^n as a real vector space, by restriction of scalars. There is the natural linear isomorphism

    C^n → R^{2n},   (x_1 + iy_1, ..., x_n + iy_n) ↦ (x_1, ..., x_n, y_1, ..., y_n).

The multiplication by i in C^n becomes, through this isomorphism, the linear map J : R^{2n} → R^{2n}, (x_1, ..., x_n, y_1, ..., y_n) ↦ (−y_1, ..., −y_n, x_1, ..., x_n). Then we may identify the group of invertible complex n × n matrices GL_n(C) with the matrix group {A ∈ GL_{2n}(R) : [J, A] = 0}.
8. If G_i is a matrix group on V_i, i = 1, 2, then G_1 × G_2 is naturally isomorphic to a matrix group on V_1 ⊕ V_2.

9. Let G be a matrix group on V, and let G^o be its connected component of I. Then G^o is a matrix group too.
Proof. For any x ∈ G^o, xG^o is connected (homeomorphic to G^o) and xG^o ∩ G^o ≠ ∅ (as x belongs to this intersection). Hence xG^o ∪ G^o is connected and, by maximality, we conclude that xG^o ⊆ G^o. Hence G^o G^o ⊆ G^o. Similarly, (G^o)^{-1} is connected, (G^o)^{-1} ∩ G^o ≠ ∅, so that (G^o)^{-1} ⊆ G^o. Therefore, G^o is a subgroup of G. Moreover, G^o is closed, because the closure of a connected set is connected. Hence G^o is a closed subgroup of GL(V).
10. Given any matrix group G on V, its normalizer N(G) = {g ∈ GL(V) : gGg^{-1} ⊆ G} is again a matrix group.
3. The Lie algebra of a matrix group
Let G be a matrix group on the vector space V. Consider the set

    g = {A ∈ gl(V) : exp(tA) ∈ G ∀t ∈ R}.

Our purpose is to prove that g is a Lie algebra, called the Lie algebra of G.
3.1 Technical Lemma.

(i) Let A, B, C ∈ gl(V) be such that ‖A‖, ‖B‖, ‖C‖ ≤ 1/2 and exp(A) exp(B) = exp(C). Then

    C = A + B + (1/2)[A, B] + S

with ‖S‖ ≤ 65( ‖A‖ + ‖B‖ )^3.

(ii) For any A, B ∈ gl(V),

    exp(A + B) = lim_{n→∞} ( exp(A/n) exp(B/n) )^n

(Trotter's Formula).

(iii) For any A, B ∈ gl(V),

    exp([A, B]) = lim_{n→∞} [ exp(A/n) : exp(B/n) ]^{n^2},

where, as usual, [g : h] = ghg^{-1}h^{-1} denotes the commutator of two elements in a group.
Proof. Note that, by continuity, there are real numbers 0 < r, r_1 ≤ 1/2 such that exp( B_{r_1}(0) ) exp( B_{r_1}(0) ) ⊆ exp( B_r(0) ). Therefore, item (i) makes sense.
For (i) several steps will be followed. Assume A, B, C satisfy the hypotheses there.
Write exp(C) = I + C + R_1(C), with R_1(C) = Σ_{n=2}^∞ C^n/n!. Hence

(3.2)    ‖R_1(C)‖ ≤ ‖C‖^2 Σ_{n=2}^∞ ‖C‖^{n−2}/n! ≤ ‖C‖^2 Σ_{n=2}^∞ 1/n! ≤ ‖C‖^2,

because ‖C‖ < 1 and e − 2 < 1.
Also exp(A) exp(B) = I + A + B + R_1(A, B), with

    R_1(A, B) = Σ_{n=2}^∞ (1/n!) Σ_{k=0}^n (n choose k) A^k B^{n−k}.

Hence,

(3.3)    ‖R_1(A, B)‖ ≤ Σ_{n=2}^∞ (‖A‖ + ‖B‖)^n/n! ≤ (‖A‖ + ‖B‖)^2,

because ‖A‖ + ‖B‖ ≤ 1.
Therefore, C = A + B + R_1(A, B) − R_1(C) and, since ‖C‖ ≤ 1/2 and ‖A‖ + ‖B‖ ≤ 1, equations (3.2) and (3.3) give

    ‖C‖ ≤ ‖A‖ + ‖B‖ + (‖A‖ + ‖B‖)^2 + ‖C‖^2 ≤ 2(‖A‖ + ‖B‖) + (1/2)‖C‖,

and thus

(3.4)    ‖C‖ ≤ 4(‖A‖ + ‖B‖).

Moreover,

(3.5)    ‖C − (A + B)‖ ≤ ‖R_1(A, B)‖ + ‖R_1(C)‖ ≤ (‖A‖ + ‖B‖)^2 + ( 4(‖A‖ + ‖B‖) )^2 ≤ 17(‖A‖ + ‖B‖)^2.
Let us take one more term now, thus exp(C) = I + C + C^2/2 + R_2(C). The arguments in (3.2) give, since e − 2 − 1/2 < 1/3,

(3.6)    ‖R_2(C)‖ ≤ (1/3)‖C‖^3.

Now, with S = C − ( A + B + (1/2)[A, B] ), we get

(3.7)    exp(C) = I + ( A + B + (1/2)[A, B] + S ) + (1/2)C^2 + R_2(C)
                = I + A + B + (1/2)[A, B] + (1/2)(A + B)^2 + ( S + (1/2)( C^2 − (A + B)^2 ) ) + R_2(C)
                = I + A + B + (1/2)(A^2 + 2AB + B^2) + ( S + (1/2)( C^2 − (A + B)^2 ) ) + R_2(C).

On the other hand,

(3.8)    exp(A) exp(B) = I + A + B + (1/2)(A^2 + 2AB + B^2) + R_2(A, B),

with

(3.9)    ‖R_2(A, B)‖ ≤ (1/3)(‖A‖ + ‖B‖)^3.
But exp(C) = exp(A) exp(B), so by (3.7) and (3.8)

    S = R_2(A, B) + (1/2)( (A + B)^2 − C^2 ) − R_2(C)

and, because of (3.4), (3.5), (3.6) and (3.9),

    ‖S‖ ≤ ‖R_2(A, B)‖ + (1/2)‖(A + B)(A + B − C) + (A + B − C)C‖ + ‖R_2(C)‖
        ≤ (1/3)(‖A‖ + ‖B‖)^3 + (1/2)( ‖A‖ + ‖B‖ + ‖C‖ )‖A + B − C‖ + (1/3)‖C‖^3
        ≤ (1/3)(‖A‖ + ‖B‖)^3 + (5/2)(‖A‖ + ‖B‖) · 17(‖A‖ + ‖B‖)^2 + (1/3)·4^3 (‖A‖ + ‖B‖)^3
        = ( 65/3 + 85/2 )(‖A‖ + ‖B‖)^3 ≤ 65(‖A‖ + ‖B‖)^3.
To prove (ii) it is enough to realize that for large enough n,

    exp(A/n) exp(B/n) = exp(C_n),

with (because of (3.5))

    ‖C_n − (A + B)/n‖ ≤ 17( (‖A‖ + ‖B‖)/n )^2.

In other words,

    exp(A/n) exp(B/n) = exp( (A + B)/n + O(1/n^2) ).

Therefore,

    ( exp(A/n) exp(B/n) )^n = exp(C_n)^n = exp(nC_n) → exp(A + B) as n → ∞,

since exp is continuous.
Finally, for (iii) use that for large enough n,

    exp(A/n) exp(B/n) = exp( (A + B)/n + (1/(2n^2))[A, B] + S_n ),

with ‖S_n‖ ≤ 65(‖A‖ + ‖B‖)^3 / n^3, because of the first part of the Lemma. Similarly,

    exp(A/n)^{-1} exp(B/n)^{-1} = exp(−A/n) exp(−B/n) = exp( −(A + B)/n + (1/(2n^2))[A, B] + S'_n ),

with ‖S'_n‖ ≤ 65(‖A‖ + ‖B‖)^3 / n^3. Again by the first part of the Lemma we obtain

(3.10)    [ exp(A/n) : exp(B/n) ] = exp( (1/n^2)[A, B] + O(1/n^3) ),

since (1/2)[ (A + B)/n + (1/(2n^2))[A, B] + S_n , −(A + B)/n + (1/(2n^2))[A, B] + S'_n ] = O(1/n^3), and one can proceed as before.
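A crude numerical illustration of Trotter's Formula and of the commutator formula (iii) — not part of the original notes, assuming numpy and scipy — is the following; here n is just a truncation parameter and the printed errors are of order 1/n.

    # Sketch (not from the notes): Trotter's Formula and the commutator formula.
    import numpy as np
    from scipy.linalg import expm, inv

    rng = np.random.default_rng(3)
    A, B = 0.5 * rng.standard_normal((3, 3)), 0.5 * rng.standard_normal((3, 3))
    n = 400

    trotter = np.linalg.matrix_power(expm(A / n) @ expm(B / n), n)
    print("Trotter error:", np.linalg.norm(trotter - expm(A + B)))

    comm = expm(A / n) @ expm(B / n) @ inv(expm(A / n)) @ inv(expm(B / n))
    group_comm = np.linalg.matrix_power(comm, n * n)
    bracket = A @ B - B @ A
    print("commutator formula error:", np.linalg.norm(group_comm - expm(bracket)))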
3.2 Theorem. Let G be a matrix group on the vector space V and let g = {A ∈ gl(V) : exp(tA) ∈ G ∀t ∈ R}. Then:

(i) g is a Lie subalgebra of gl(V).

(ii) The map exp : g → G maps a neighborhood of 0 in g bijectively onto a neighborhood of 1 in G. (Here g is a real vector space endowed with the topology coming from the norm of End_R(V) induced by the norm of V.)

Proof. By its own definition, g is closed under multiplication by real numbers. Now, given any A, B ∈ g and t ∈ R, since G is closed, the Technical Lemma shows us that

    exp( t(A + B) ) = lim_{n→∞} ( exp(tA/n) exp(tB/n) )^n ∈ G,
    exp( t[A, B] ) = lim_{n→∞} [ exp(tA/n) : exp(B/n) ]^{n^2} ∈ G.

Hence g is closed too under addition and Lie brackets, and so it is a Lie subalgebra of gl(V).
To prove the second part of the Theorem, let us first check that, if (A_n)_{n∈N} is a sequence in exp^{-1}(G) with lim_n ‖A_n‖ = 0, and (s_n)_{n∈N} is a sequence of real numbers, then any cluster point of the sequence (s_n A_n)_{n∈N} lies in g:
Actually, we may assume that lim_n s_n A_n = B. Let t ∈ R. For any n ∈ N, take m_n ∈ Z such that |m_n − t s_n| ≤ 1. Then,

    ‖m_n A_n − tB‖ = ‖(m_n − t s_n)A_n + t(s_n A_n − B)‖ ≤ |m_n − t s_n| ‖A_n‖ + |t| ‖s_n A_n − B‖.

Since both ‖A_n‖ and ‖s_n A_n − B‖ converge to 0, it follows that lim_n m_n A_n = tB. Also, A_n ∈ exp^{-1}(G), so that exp(m_n A_n) = exp(A_n)^{m_n} ∈ G. Since exp is continuous and G is closed, exp(tB) = lim_n exp(m_n A_n) ∈ G for any t ∈ R, and hence B ∈ g, as required.
Let now m be a subspace of gl(V) with gl(V) = g ⊕ m, and let π_g and π_m be the associated projections onto g and m. Consider the analytical function

    E : gl(V) → GL(V),   A ↦ exp( π_g(A) ) exp( π_m(A) ).

Then,

    d/dt ( exp( π_g(tA) ) exp( π_m(tA) ) )|_{t=0}
        = ( d/dt exp( π_g(tA) )|_{t=0} ) exp(0) + exp(0) ( d/dt exp( π_m(tA) )|_{t=0} )
        = π_g(A) + π_m(A) = A.

Hence, the differential of E at 0 is the identity and, thus, E maps homeomorphically a neighborhood of 0 in gl(V) onto a neighborhood of 1 in GL(V). Let us take r > 0 and a neighborhood U of 1 in GL(V) such that E|_{B_r(0)} : B_r(0) → U is a homeomorphism. It is enough to check that exp( B_r(0) ∩ g ) = E( B_r(0) ∩ g ) contains a neighborhood of 1 in G.
Otherwise, there would exist a sequence (B_n)_{n∈N} ⊆ exp^{-1}(G) with B_n ∉ B_r(0) ∩ g and such that lim_n B_n = 0. For large enough n, exp(B_n) = E(A_n), with lim_n A_n = 0. Hence exp( π_m(A_n) ) = exp( π_g(A_n) )^{-1} exp(B_n) ∈ G.
Since lim_n A_n = 0, lim_n π_m(A_n) = 0 too and, for large enough n, π_m(A_n) ≠ 0, as A_n ∉ g (note that if A_n ∈ g, then exp(B_n) = E(A_n) = exp(A_n) and, since exp is a bijection on a neighborhood of 0, we would have B_n = A_n ∈ g, a contradiction).
The sequence ( (1/‖π_m(A_n)‖) π_m(A_n) )_{n∈N} is bounded, and hence it has cluster points, which lie in m (closed in gl(V), since it is a subspace). We know that these cluster points lie in g, so in g ∩ m = 0. But the norm of all these cluster points is 1, a contradiction.
3.3 Remark. Given any A ∈ gl(V), the set {exp(tA) : t ∈ R} is the continuous image of the real line, and hence it is connected. Therefore, if g is the Lie algebra of the matrix group G, exp(g) is contained in the connected component G^o of the identity. Therefore, the Lie algebra of G equals the Lie algebra of G^o.
Also, exp(g) contains an open neighborhood U of 1 in G. Thus, G^o contains the open neighborhood xU of any x ∈ G^o. Hence G^o is open in G but, as a connected component, it is closed too: G^o is open and closed in G.
Let us look at the Lie algebras of some interesting matrix groups.
3.4 Examples.

1. The Lie algebra of GL(V) is obviously the whole general linear Lie algebra gl(V).

2. For any A ∈ gl(V) (or any square matrix A), det e^A = e^{trace(A)}. This is better checked for matrices. Since any real matrix can be considered as a complex matrix, it is well known that given any such matrix there is a regular complex matrix P such that J = PAP^{-1} is upper triangular. Assume that λ_1, ..., λ_n are the eigenvalues of A (or J), counted according to their multiplicities. Then Pe^A P^{-1} = e^J and det e^A = det e^J = Π_{i=1}^n e^{λ_i} = e^{Σ_{i=1}^n λ_i} = e^{trace(J)} = e^{trace(A)}.
Hence, for any t ≠ 0, det e^{tA} = 1 if and only if trace(A) = 0. This shows that the Lie algebra of the special linear group SL(V) is the special linear Lie algebra sl(V) = {A ∈ gl(V) : trace(A) = 0}.

3. Let b : V × V → R be a bilinear form. If A ∈ gl(V) satisfies b(e^{tA}v, e^{tA}w) = b(v, w) for any t ∈ R and v, w ∈ V, take derivatives at t = 0 to get b(Av, w) + b(v, Aw) = 0 for any v, w ∈ V. Conversely, if b(Av, w) = −b(v, Aw) for any v, w ∈ V, then b( (tA)^n v, w ) = (−1)^n b( v, (tA)^n w ), so b(e^{tA}v, e^{tA}w) = b(v, e^{−tA}e^{tA}w) = b(v, w) for any t ∈ R and v, w ∈ V.
Therefore, the Lie algebra of the matrix group G = {g ∈ GL(V) : b(gv, gw) = b(v, w) ∀v, w ∈ V} is g = {A ∈ gl(V) : b(Av, w) + b(v, Aw) = 0 ∀v, w ∈ V}.
In particular, if b is symmetric and nondegenerate, the Lie algebra of the orthogonal group O(V, b) is called the orthogonal Lie algebra and denoted by o(V, b). Also, for alternating and nondegenerate b_a, the Lie algebra of the symplectic group Sp(V, b_a) is called the symplectic Lie algebra, and denoted by sp(V, b_a).
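Examples 2 and 3 above lend themselves to a quick numerical check. The following sketch is not part of the original notes; it assumes numpy and scipy and uses the standard inner product, for which the condition b(Av, w) + b(v, Aw) = 0 means that A is skew-symmetric.

    # Sketch (not from the notes): det(exp A) = exp(trace A), and exp of a
    # skew-symmetric matrix is orthogonal (it preserves the standard form b).
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(4)
    A = rng.standard_normal((4, 4))
    assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))

    S = A - A.T                              # skew-symmetric: b(Sv, w) + b(v, Sw) = 0
    Q = expm(0.7 * S)
    assert np.allclose(Q.T @ Q, np.eye(4))   # exp(tS) is orthogonal
    print("det(exp A) = exp(tr A); exp of skew-symmetric is orthogonal")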
4. For any subspace U of V, consider a complementary subspace W, so that V = U ⊕ W. Let π_U and π_W be the corresponding projections. For any A ∈ gl(V) and 0 ≠ t ∈ R, e^{tA}(U) ⊆ U if and only if π_W(e^{tA}u) = 0 for any u ∈ U. In this case, by taking derivatives at t = 0 we obtain that π_W(Au) = 0 for any u ∈ U, or A(U) ⊆ U. The converse is clear. Hence, the Lie algebra of P(U) = {g ∈ GL(V) : g(U) ⊆ U} is p(U) = {A ∈ gl(V) : A(U) ⊆ U}.

5. The Lie algebra of a finite intersection of matrix groups is the intersection of the corresponding Lie algebras.

6. The Lie algebra of G = G_1 × G_2 ( ≤ GL(V_1) × GL(V_2) ) is the direct sum g_1 ⊕ g_2 of the corresponding Lie algebras. This follows from the previous items because, inside GL(V_1 ⊕ V_2), GL(V_1) × GL(V_2) = P(V_1) ∩ P(V_2).

7. Given T_1, ..., T_n ∈ End_R(V), one checks with similar arguments that the Lie algebra of G = {g ∈ GL(V) : gT_i = T_i g, i = 1, ..., n} is g = {A ∈ gl(V) : AT_i = T_i A, i = 1, ..., n}. In particular, the Lie algebra of GL_n(C) is gl_n(C) (the Lie algebra of complex n × n matrices).
In the remainder of this chapter, the most interesting properties of the relationship
between matrix groups and their Lie algebras will be reviewed.
3.5 Proposition. Let G be a matrix group on a real vector space V, and let G^o be its connected component of 1. Let g be the Lie algebra of G. Then G^o is the group generated by exp(g).
Proof. We already know that exp(g) ⊆ G^o and that there exists an open neighborhood U of 1 in G with 1 ∈ U ⊆ exp(g). Let W = U ∩ U^{-1}, which is again an open neighborhood of 1 in G contained in exp(g). It is enough to prove that G^o is generated, as a group, by W.
Let H = ∪_{n∈N} W^n. H is closed under multiplication and inverses, so it is a subgroup of G contained in G^o. Actually, it is the subgroup of G generated by W. Since W is open, so is W^n = ∪_{v∈W} vW^{n−1} for any n, and hence H is an open subgroup of G. But any open subgroup is closed too, as G \ H = ∪_{x∈G\H} xH is a union of open sets. Therefore, H is an open and closed subset of G contained in the connected component G^o, and hence it fills all of G^o.
3.6 Theorem. Let G and H be matrix groups on the real vector space V with Lie algebras g and h.

(i) If H is a normal subgroup of G, then h is an ideal of g (that is, [g, h] ⊆ h).

(ii) If both G and H are connected, the converse is valid too.

Proof. Assume that H is a normal subgroup of G and let A ∈ h and B ∈ g. Since H is a normal subgroup of G, for any t ∈ R and n ∈ N, [ e^{tA/n} : e^{B/n} ] ∈ H, and hence, by the Technical Lemma, e^{t[A,B]} = lim_{n→∞} [ e^{tA/n} : e^{B/n} ]^{n^2} belongs to H (H is a matrix group, hence closed). Thus, [A, B] ∈ h.
Now, assume that both G and H are connected and that h is an ideal of g. Then, for any B ∈ g, ad_B(h) ⊆ h, so Ad e^B(h) = e^{ad_B}(h) ⊆ h. In other words, e^B h e^{−B} ⊆ h.
Since G is connected, it is generated by {e^B : B ∈ g}. Hence, g h g^{-1} ⊆ h for any g ∈ G. Thus, for any A ∈ h and g ∈ G, g e^A g^{-1} = e^{gAg^{-1}} ∈ e^h ⊆ H. Since H is connected, it is generated by the e^A's, so we conclude that gHg^{-1} ⊆ H for any g ∈ G, and hence H is a normal subgroup of G.
3.7 Theorem. Let G be a matrix group on the real vector space V with Lie algebra g, and let H be a matrix group on the real vector space W with Lie algebra h.
If φ : G → H is a continuous homomorphism of groups, then there exists a unique Lie algebra homomorphism dφ : g → h that makes the following diagram commutative:

                 dφ
         g --------------> h
         |                 |
        exp               exp
         ↓                 ↓
         G --------------> H
                  φ
Proof. The uniqueness is easy: since exp is bijective on a neighborhood of 0 in h, dφ is determined as (exp)^{-1} ∘ φ ∘ exp on a neighborhood of 0 in g. But dφ is linear and any neighborhood of 0 contains a basis. Hence dφ is determined by φ.
Now, to prove the existence of such a linear map, take any A ∈ g; then t ↦ φ(e^{tA}) is a one-parameter group on W with image in H. Thus, there is a unique B ∈ h such that φ(e^{tA}) = e^{tB} for any t ∈ R. Define dφ(A) = B. Therefore, φ(e^{tA}) = e^{t dφ(A)} for any t ∈ R and A ∈ g. Now, for any A_1, A_2 ∈ g,

    φ( e^{t(A_1+A_2)} ) = φ( lim_{n→∞} ( e^{tA_1/n} e^{tA_2/n} )^n )                (Trotter's formula)
                        = lim_{n→∞} ( φ(e^{tA_1/n}) φ(e^{tA_2/n}) )^n               (since φ is continuous)
                        = lim_{n→∞} ( e^{(t/n) dφ(A_1)} e^{(t/n) dφ(A_2)} )^n
                        = e^{t( dφ(A_1) + dφ(A_2) )}

and, hence, dφ is linear. In the same spirit, one checks that

    φ( e^{t[A_1,A_2]} ) = φ( lim_{n→∞} [ e^{tA_1/n} : e^{A_2/n} ]^{n^2} ) = ··· = e^{t[dφ(A_1), dφ(A_2)]},

thus proving that dφ is a Lie algebra homomorphism.
Several consequences of this Theorem will be drawn in what follows.
3.8 Corollary. Let G, H, g and h be as in the previous Theorem. If G and H are
isomorphic matrix groups, then g and h are isomorphic Lie algebras.
3.9 Remark. The converse of the Corollary above is false, even if G and H are connected. Take, for instance,

    G = SO(2) = { ( cos α  −sin α )            }
                { ( sin α   cos α ) : α ∈ R }

(the special orthogonal group on R^2, which is homeomorphic to the unit circle). Its Lie algebra is

    g = { ( 0  −α )            }
        { ( α   0 ) : α ∈ R }
(2 × 2 skew-symmetric matrices). Also, take

    H = { ( α  0 )                 }
        { ( 0  1 ) : α ∈ R_{>0} }

which is isomorphic to the multiplicative group of positive real numbers, whose Lie algebra is

    h = { ( α  0 )            }
        { ( 0  0 ) : α ∈ R }.

Both Lie algebras are one-dimensional vector spaces with trivial Lie bracket, and hence they are isomorphic as Lie algebras. However, G is not isomorphic to H (inside G one may find many finite order elements, but the identity is the only such element in H). (One can show that G and H are locally isomorphic.)
If G is a matrix group on V, and X ∈ g, g ∈ G and t ∈ R,

    exp( t Ad g(X) ) = g ( exp(tX) ) g^{-1} ∈ G,

so Ad g(g) ⊆ g. Hence, the adjoint map of GL(V) induces an adjoint map

    Ad : G → GL(g),

and, by restriction in (1.1), we get the following commutative diagram:

(3.11)
                  ad
         g --------------> gl(g)
         |                   |
        exp                 exp
         ↓                   ↓
         G --------------> GL(g)
                  Ad
3.10 Corollary. Let G be a matrix group on the real vector space V and let Ad : G → GL(g) be the adjoint map. Then d Ad = ad : g → gl(g).

Remember that, given a group G, its center Z(G) is the normal subgroup consisting of those elements commuting with every element: Z(G) = {g ∈ G : gh = hg ∀h ∈ G}.

3.11 Corollary. Let G be a connected matrix group with Lie algebra g. Then Z(G) = ker Ad, and this is a closed subgroup of G with Lie algebra the center of g: Z(g) = {X ∈ g : [X, Y] = 0 ∀Y ∈ g} (= ker ad).
Proof. With g ∈ Z(G) and X ∈ g, exp(tX) = g( exp(tX) )g^{-1} = exp( t Ad g(X) ) for any t ∈ R. Taking the derivative at t = 0 one gets Ad g(X) = X for any X ∈ g, so that g ∈ ker Ad. (Note that we have not used here the fact that G is connected.)
Conversely, take an element g ∈ ker Ad, so for any X ∈ g and t ∈ R, g( exp(tX) )g^{-1} = exp( t Ad g(X) ) = exp(tX). Since G is connected, it is generated by exp(g) and, thus, ghg^{-1} = h for any h ∈ G. That is, g ∈ Z(G).
Since Ad is continuous, it follows that Z(G) = ker Ad = Ad^{-1}(I) is closed.
Now, the commutativity of the diagram (3.11) shows that exp(ker ad) ⊆ ker Ad = Z(G), and hence ker ad is contained in the Lie algebra of Z(G). Conversely, if X ∈ g and exp(tX) ∈ Z(G) for any t ∈ R, then exp(t ad_X) = Ad exp(tX) = I and hence (take the derivative at t = 0) ad_X = 0, so X ∈ ker ad. Therefore, the Lie algebra of Z(G) = ker Ad is ker ad which, by its own definition, is the center of g.
3.12 Corollary. Let G be a connected matrix group with Lie algebra g. Then G is commutative if and only if g is abelian, that is, [g, g] = 0.

Finally, the main concept of this course will be introduced. Groups are important because they act as symmetries of other structures. The formalization, in our setting, of this leads to the following definition:

3.13 Definition.

(i) A representation of a matrix group G is a continuous homomorphism ρ : G → GL(W) for a real vector space W.

(ii) A representation of a Lie algebra g is a Lie algebra homomorphism ρ : g → gl(W), for a vector space W.

3.14 Corollary. Let G be a matrix group with Lie algebra g and let ρ : G → GL(W) be a representation of G. Then there is a unique representation dρ : g → gl(W) such that the following diagram is commutative:

                 dρ
         g --------------> gl(W)
         |                   |
        exp                 exp
         ↓                   ↓
         G --------------> GL(W)
                  ρ

The great advantage of dealing with dρ above is that this is a Lie algebra homomorphism, and it does not involve topology. In this sense, the representation dρ is simpler than the representation ρ of the matrix group, but it contains a lot of information about the latter. The message is that in order to study the representations of the matrix groups, we will study representations of Lie algebras.
Chapter 2
Lie algebras
The purpose of this chapter is to present the basic structure of the finite dimensional Lie algebras over fields, culminating in the classification of the simple Lie algebras over algebraically closed fields of characteristic 0.
1. Theorems of Engel and Lie
Let us first recall the definition of representation of a Lie algebra, which has already appeared in the previous chapter.

1.1 Definition. Let L be a Lie algebra over a field k. A representation of L is a Lie algebra homomorphism ρ : L → gl(V), where V is a nonzero vector space over k.

We will use the notation x.v = ρ(x)(v) for elements x ∈ L and v ∈ V. In this case, V is said to be a module for L.
As for groups, rings or associative algebras, we can immediately define the concepts of submodule, quotient module, irreducible module (or irreducible representation), homomorphism of modules, kernel, image, ...
In what follows, and unless otherwise stated, all the vector spaces and algebras considered will be assumed to be finite dimensional over a ground field k.
1.2 Engel's Theorem. Let ρ : L → gl(V) be a representation of a Lie algebra L such that ρ(x) is nilpotent for any x ∈ L. Then there is an element 0 ≠ v ∈ V such that x.v = 0 for any x ∈ L.

Proof. The proof will be done by induction on n = dim_k L, being obvious for n = 1.
Hence assume that dim_k L = n > 1 and that the result is true for Lie algebras of smaller dimension. If ker ρ ≠ 0, then dim_k ρ(L) < dim_k L = n, but the inclusion ρ(L) ⊆ gl(V) is a representation of the Lie algebra ρ(L) and the induction hypothesis applies.
Therefore, we may assume that ker ρ = 0 and, thus, that L is a subalgebra of gl(V). The hypothesis of the Theorem asserts then that x^m = 0 for any x ∈ L ⊆ gl(V) = End_k(V), where m = dim_k V. Let S be a proper maximal subalgebra of L. For any x, y ∈ L (and with ad x = ad_x for ease of notation),

    (ad x)^{2m−1}(y) = [x, [x, ..., [x, y]...]] = Σ_{i=0}^{2m−1} m_i x^i y x^{2m−1−i}
for suitable integers m_i (just expand the brackets: [x, y] = xy − yx). But for any 0 ≤ i ≤ 2m−1, either i or 2m−1−i is ≥ m. Hence (ad x)^{2m−1} = 0. In particular, the natural representation of the Lie algebra S on the quotient space L/S:

    ρ̃ : S → gl(L/S),   x ↦ ρ̃(x) : L/S → L/S, y + S ↦ [x, y] + S

(L is a module for S through ad, and L/S is a quotient module) satisfies the hypotheses of the Theorem, but with dim_k S < n. By the induction hypothesis, there exists an element z ∈ L \ S such that [x, z] ∈ S for any x ∈ S. Therefore, S ⊕ kz is a subalgebra of L which, by maximality of S, is the whole L. In particular S is an ideal of L.
Again, by induction, we conclude that the subspace W = {v ∈ V : x.v = 0 ∀x ∈ S} is nonzero. But for any x ∈ S, x.(z.W) ⊆ [x, z].W + z.(x.W) = 0 ([x, z] ∈ S). Hence z.W ⊆ W, and since z is a nilpotent endomorphism, there is a nonzero v ∈ W such that z.v = 0. Hence x.v = 0 for any x ∈ S and for z, so x.v = 0 for any x ∈ L.
1.3 Consequences.

(i) Let ρ : L → gl(V) be an irreducible representation of a Lie algebra L and let I be an ideal of L such that ρ(x) is nilpotent for any x ∈ I. Then I ⊆ ker ρ.

Proof. Let W = {v ∈ V : x.v = 0 ∀x ∈ I}, which is not zero by Engel's Theorem. For any x ∈ I, y ∈ L and w ∈ W, x.(y.w) = [x, y].w + y.(x.w) = 0, as [x, y] ∈ I. Hence W is a nonzero submodule of the irreducible module V and, therefore, W = V, as required.

(ii) Let ρ : L → gl(V) be a representation of a Lie algebra L. Let I be an ideal of L and let 0 = V_0 ⊆ V_1 ⊆ ··· ⊆ V_n = V be a composition series of V. Then ρ(x) is nilpotent for any x ∈ I if and only if for any i = 1, ..., n, I.V_i ⊆ V_{i−1}.

(iii) The descending central series of a Lie algebra L is the chain of ideals L = L^1 ⊇ L^2 ⊇ ··· ⊇ L^n ⊇ ···, where L^{n+1} = [L^n, L] for any n ∈ N. The Lie algebra is said to be nilpotent if there is an n ∈ N such that L^n = 0. Moreover, if n = 2, L is said to be abelian. Then

Theorem. (Engel) A Lie algebra L is nilpotent if and only if ad_x is nilpotent for any x ∈ L.

Proof. It is clear that if L^n = 0, then ad_x^{n−1} = 0 for any x ∈ L. Conversely, assume that ad_x is nilpotent for any x ∈ L, and consider the adjoint representation ad : L → gl(L). Let 0 = L_0 ⊆ ··· ⊆ L_{n+1} = L be a composition series of this representation. By item (ii) it follows that L.L_i = [L, L_i] ⊆ L_{i−1} for any i. Hence L^i ⊆ L_{n+2−i} for any i. In particular L^{n+2} = 0 and L is nilpotent.
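As an illustration of Engel's criterion (not in the original notes; it assumes the Python library numpy), one can take the Heisenberg algebra, realized inside the strictly upper triangular 3 × 3 matrices, and check that the adjoint map of an element is nilpotent.

    # Sketch (not from the notes): the Heisenberg algebra with basis x = E_12,
    # y = E_23, z = E_13, where [x, y] = z and [x, z] = [y, z] = 0.
    import numpy as np

    def E(i, j, n=3):
        m = np.zeros((n, n))
        m[i, j] = 1.0
        return m

    x, y, z = E(0, 1), E(1, 2), E(0, 2)
    basis = [x, y, z]
    bracket = lambda a, b: a @ b - b @ a

    v = 2 * x - y + 5 * z                                   # a sample element
    coords = lambda m: np.array([m[0, 1], m[1, 2], m[0, 2]])  # coordinates in (x, y, z)
    ad_v = np.column_stack([coords(bracket(v, b)) for b in basis])

    assert np.allclose(np.linalg.matrix_power(ad_v, 3), 0)  # ad_v is nilpotent
    print("ad_v is nilpotent; by Engel the Heisenberg algebra is nilpotent")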
1.4 Exercise. The ascending central series of a Lie algebra L is defined as follows: Z_0(L) = 0, Z_1(L) = Z(L) = {x ∈ L : [x, L] = 0} (the center of L) and Z_{i+1}(L)/Z_i(L) = Z( L/Z_i(L) ) for any i ≥ 1. Prove that this is indeed an ascending chain of ideals and that L is nilpotent if and only if there is an n ∈ N such that Z_n(L) = L.
Now we arrive at a concept which is weaker than nilpotency.

1.5 Definition. Let L be a Lie algebra and consider the descending chain of ideals defined by L^{(0)} = L and L^{(m+1)} = [L^{(m)}, L^{(m)}] for any m ≥ 0. Then the chain L = L^{(0)} ⊇ L^{(1)} ⊇ L^{(2)} ⊇ ··· is called the derived series of L. The Lie algebra L is said to be solvable if there is an n ∈ N such that L^{(n)} = 0.
1.6 Exercise. Prove the following properties:

1. Any nilpotent Lie algebra is solvable. However, show that L = kx + ky, with [x, y] = y, is a solvable but not nilpotent Lie algebra (a numerical sketch of this example appears after this exercise).

2. If L is nilpotent or solvable, so are its subalgebras and quotients.

3. If I and J are nilpotent (or solvable) ideals of L, so is I + J.

4. Let I be an ideal of L such that both I and L/I are solvable. Then L is solvable. Give an example to show that this is no longer valid with nilpotent instead of solvable.

As a consequence of these properties, the sum of all the nilpotent (respectively solvable) ideals of L is the largest nilpotent (resp. solvable) ideal of L. This ideal is denoted by N(L) (resp. R(L)) and called the nilpotent radical (resp. solvable radical) of L.
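The algebra of item 1 of the exercise can be realized by explicit matrices; the sketch below is not in the original notes (it assumes numpy) and simply verifies the bracket computations behind "solvable but not nilpotent".

    # Sketch (not from the notes): L = kx + ky with [x, y] = y, realized by
    # x = [[1, 0], [0, 0]], y = [[0, 1], [0, 0]].
    import numpy as np

    x = np.array([[1.0, 0.0], [0.0, 0.0]])
    y = np.array([[0.0, 1.0], [0.0, 0.0]])
    bracket = lambda a, b: a @ b - b @ a

    assert np.allclose(bracket(x, y), y)      # [x, y] = y
    # Derived series: L^(1) = [L, L] = ky and L^(2) = [ky, ky] = 0, so L is solvable.
    assert np.allclose(bracket(y, y), 0)
    # Descending central series: L^2 = ky and [y, x] = -y, so L^n = ky for all n >= 2
    # and it never reaches 0: L is not nilpotent.
    assert np.allclose(bracket(y, x), -y)
    print("L is solvable but not nilpotent")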
1.7 Lie's Theorem. Let ρ : L → gl(V) be a representation of a solvable Lie algebra L over an algebraically closed field k of characteristic 0. Then there is a nonzero element 0 ≠ v ∈ V such that x.v ∈ kv for any x ∈ L (that is, v is a common eigenvector for all the endomorphisms ρ(x), x ∈ L).

Proof. Since L is solvable, [L, L] ⊊ L and we may take a codimension 1 subspace S of L with [L, L] ⊆ S. Then clearly S is an ideal of L. Take z ∈ L \ S, so L = S ⊕ kz.
Arguing inductively, we may assume that there is a nonzero common eigenvector v of ρ(x) for any x ∈ S and, thus, there is a linear form λ : S → k such that x.v = λ(x)v for any x ∈ S. Let W = {w ∈ V : x.w = λ(x)w ∀x ∈ S}; W is a nonzero subspace of V. Let U be the linear span of v, z.v, z.(z.v), ..., with v as above. The subspace U is invariant under ρ(z), and for any x ∈ S and m ∈ N:

    ρ(x)ρ(z)^m(v) = ρ(x)ρ(z)ρ(z)^{m−1}(v) = ρ([x, z])ρ(z)^{m−1}(v) + ρ(z)( ρ(x)ρ(z)^{m−1}(v) ).

Now arguing by induction on m we see that

(i) ρ(x)ρ(z)^m(v) ∈ U for any m ∈ N, and hence U is a submodule of V.

(ii) ρ(x)ρ(z)^m(v) = λ(x)ρ(z)^m(v) + Σ_{i=0}^{m−1} α_i ρ(z)^i(v) for suitable scalars α_i ∈ k.

Therefore the action of ρ(x) on U is given by an upper triangular matrix with λ(x) on the diagonal and, hence, trace( ρ(x)|_U ) = λ(x) dim_k U for any x ∈ S. In particular,

    λ([x, z]) dim_k U = trace( ρ([x, z])|_U ) = trace( [ ρ(x)|_U , ρ(z)|_U ] ) = 0
(the trace of any commutator is 0), and since the characteristic of k is 0 we conclude that λ([S, L]) = 0.
But then, for any 0 ≠ w ∈ W and x ∈ S,

    x.(z.w) = [x, z].w + z.(x.w) = λ([x, z])w + z.( λ(x)w ) = λ(x)z.w,

and this shows that W is invariant under ρ(z). Since k is algebraically closed, there is a nonzero eigenvector of ρ(z) in W, and this is a common eigenvector for any x ∈ S and for z, and hence for any y ∈ L.
1.8 Remark. Note that the proof above is valid even if k is not algebraically closed, as long as the characteristic polynomial of ρ(x) for any x ∈ L splits over k. In this case ρ is said to be a split representation.
1.9 Consequences. Assume that the characteristic of the ground field k is 0.

(i) Let ρ : L → gl(V) be an irreducible split representation of a solvable Lie algebra. Then dim_k V = 1.

(ii) Let ρ : L → gl(V) be a split representation of a solvable Lie algebra. Then there is a basis of V such that the coordinate matrix of any ρ(x), x ∈ L, is upper triangular.

(iii) Let L be a solvable Lie algebra such that its adjoint representation ad : L → gl(L) is split. Then there is a chain of ideals 0 = L_0 ⊆ L_1 ⊆ ··· ⊆ L_n = L with dim L_i = i for any i.

(iv) Let ρ : L → gl(V) be a representation of a Lie algebra L. Then [L, R(L)] acts nilpotently on V; that is, ρ(x) is nilpotent for any x ∈ [L, R(L)]. The same is true of [L, L] ∩ R(L). In particular, with the adjoint representation, we conclude that [L, R(L)] ⊆ [L, L] ∩ R(L) ⊆ N(L) and, therefore, L is solvable if and only if [L, L] is nilpotent.

Proof. Let k̄ be an algebraic closure of k. Then k̄ ⊗_k L is a Lie algebra over k̄ and k̄ ⊗_k R(L) is solvable, and hence contained in R(k̄ ⊗_k L). Then, by extending scalars it is enough to prove the result assuming that k is algebraically closed.
Also, by taking a composition series of V, it suffices to prove the result assuming that V is irreducible. In this situation, as in the proof of Lie's Theorem, one shows that there is a linear form λ : R(L) → k such that W = {v ∈ V : x.v = λ(x)v ∀x ∈ R(L)} is a nonzero submodule of V. By irreducibility, we conclude that x.v = λ(x)v for any x ∈ R(L) and any v ∈ V. Moreover, for any x ∈ [L, L] ∩ R(L), 0 = trace ρ(x) = λ(x) dim_k V, so λ( [L, L] ∩ R(L) ) = 0 holds, and hence [L, R(L)].V ⊆ ( [L, L] ∩ R(L) ).V = 0.
The last part follows immediately from the adjoint representation. Note that if [L, L] is nilpotent, in particular it is solvable, and since L/[L, L] is abelian (and hence solvable), L is solvable by the exercise above.
We will prove now a criterion for solvability due to Cartan.
Recall that any endomorphism f ∈ End_k(V) over an algebraically closed field decomposes in a unique way as f = s + n with s, n ∈ End_k(V), s being semisimple (that is, diagonalizable), n nilpotent and [s, n] = 0 (Jordan decomposition). Moreover, s(V) ⊆ f(V), n(V) ⊆ f(V) and any subspace which is invariant under f is invariant too under s and n.
1.10 Lemma. Let V be a vector space over a field k of characteristic 0, and let M_1 ⊆ M_2 be two subspaces of gl(V). Let A = {x ∈ gl(V) : [x, M_2] ⊆ M_1} and let z ∈ A be an element such that trace(zy) = 0 for any y ∈ A. Then z is nilpotent.

Proof. We may extend scalars and assume that k is algebraically closed. Let n = dim_k V. Then the characteristic polynomial of z is (X − λ_1) ··· (X − λ_n), for λ_1, ..., λ_n ∈ k.
We must check that λ_1 = ··· = λ_n = 0. Consider the Q-subspace of k spanned by the eigenvalues λ_1, ..., λ_n: E = Qλ_1 + ··· + Qλ_n. Assume that E ≠ 0 and take 0 ≠ f : E → Q a Q-linear form. Let z = s + n be the Jordan decomposition and let {v_1, ..., v_n} be an associated basis of V, in which the coordinate matrix of z is triangular and s(v_i) = λ_i v_i for any i. Consider the corresponding basis {E_{ij} : 1 ≤ i, j ≤ n} of gl(V), where E_{ij}(v_j) = v_i and E_{ij}(v_l) = 0 for any l ≠ j. Then [s, E_{ij}] = (λ_i − λ_j)E_{ij}, so that ad_s is semisimple. Also ad_n is clearly nilpotent, and ad_z = ad_s + ad_n is the Jordan decomposition of ad_z. This implies that ad_z|_{M_2} = ad_s|_{M_2} + ad_n|_{M_2} is the Jordan decomposition of ad_z|_{M_2} and [s, M_2] = ad_s(M_2) ⊆ ad_z(M_2) ⊆ M_1.
Consider the element y ∈ gl(V) defined by means of y(v_i) = f(λ_i)v_i for any i. Then [y, E_{ij}] = ( f(λ_i) − f(λ_j) )E_{ij} = f(λ_i − λ_j)E_{ij}. Let p(T) be the interpolation polynomial such that p(0) = 0 (trivial constant term) and p(λ_i − λ_j) = f(λ_i − λ_j) for any 1 ≤ i ≠ j ≤ n. Then ad_y = p(ad_s) and hence [y, M_2] ⊆ M_1, so y ∈ A. Thus, 0 = trace(zy) = Σ_{i=1}^n λ_i f(λ_i). Apply f to get 0 = Σ_{i=1}^n f(λ_i)^2, which forces, since f(λ_i) ∈ Q for any i, that f(λ_i) = 0 for any i. Hence f = 0, a contradiction.
1.11 Proposition. Let V be a vector space over a field k of characteristic 0 and let L be a Lie subalgebra of gl(V). Then L is solvable if and only if trace(xy) = 0 for any x ∈ [L, L] and y ∈ L.

Proof. Assume first that L is solvable and take a composition series of V as a module for L: V = V_0 ⊇ V_1 ⊇ ··· ⊇ V_m = 0. Engel's Theorem and Consequences 1.9 show that [L, L].V_i ⊆ V_{i+1} for any i. This proves that trace( [L, L]L ) = 0.
Conversely, assume that trace(xy) = 0 for any x ∈ [L, L] and y ∈ L, and consider the subspace A = {x ∈ gl(V) : [x, L] ⊆ [L, L]}. For any u, v ∈ L and y ∈ A,

    trace( [u, v]y ) = trace(uvy − vuy)
                     = trace(vyu − yvu)
                     = trace( [v, y]u ) = 0   (since [v, y] ∈ [L, L]).

Hence trace(xy) = 0 for any x ∈ [L, L] and y ∈ A which, by the previous Lemma, shows that x is nilpotent for any x ∈ [L, L]. By Engel's Theorem, [L, L] is nilpotent, and hence L is solvable.
1.12 Theorem. (Cartan's criterion for solvability)
Let L be a Lie algebra over a field k of characteristic 0. Then L is solvable if and only if trace(ad_x ad_y) = 0 for any x ∈ [L, L] and any y ∈ L.

Proof. The adjoint representation ad : L → gl(L) satisfies ker ad = Z(L), which is abelian and hence solvable. Thus L is solvable if and only if so is L/Z(L) ≅ ad L, and the previous Proposition shows, since [ad L, ad L] = ad[L, L], that ad L is solvable if and only if trace(ad_x ad_y) = 0 for any x ∈ [L, L] and y ∈ L.
The bilinear form κ : L × L → k given by

    κ(x, y) = trace(ad_x ad_y)

for any x, y ∈ L, that appears in Cartan's criterion for solvability, plays a key role in studying Lie algebras over fields of characteristic 0. It is called the Killing form of the Lie algebra L.
Note that κ is symmetric and invariant (i.e., κ([x, y], z) = κ(x, [y, z]) for any x, y, z ∈ L).
2. Semisimple Lie algebras
A Lie algebra is said to be semisimple if its solvable radical is trivial: R(L) = 0. It is
called simple if it has no proper ideal and it is not abelian.
Any simple Lie algebra is semisimple, and given any Lie algebra L, the quotient
L/R(L) is semisimple.
2.1 Theorem. (Cartan's criterion for semisimplicity)
Let L be a Lie algebra over a field k of characteristic 0 and let κ(x, y) = trace(ad_x ad_y) be its Killing form. Then L is semisimple if and only if κ is nondegenerate.

Proof. The invariance of the Killing form κ of such a Lie algebra L implies that the subspace I = {x ∈ L : κ(x, L) = 0} is an ideal of L. By Proposition 1.11, ad_L I is a solvable subalgebra of gl(L), and this shows that I is solvable (ad_L I ≅ I/(Z(L) ∩ I)).
Hence, if L is semisimple, I ⊆ R(L) = 0, and thus κ is nondegenerate. (Note that this argument is valid had we started with a Lie subalgebra L of gl(V) for some vector space V, and had we replaced κ by the trace form of V: B : L × L → k, (x, y) ↦ B(x, y) = trace(xy).)
Conversely, assume that κ is nondegenerate, that is, that I = 0. If J were an abelian ideal of L, then for any x ∈ J and y ∈ L, ad_x ad_y(L) ⊆ J and ad_x ad_y(J) = 0. Hence (ad_x ad_y)^2 = 0 and κ(x, y) = trace(ad_x ad_y) = 0. Therefore, J ⊆ I = 0. Thus, L does not contain proper abelian ideals, so it does not contain proper solvable ideals and, hence, R(L) = 0 and L is semisimple.
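Both of Cartan's criteria can be illustrated by computing Killing forms from structure constants. The following sketch is not part of the original notes; it assumes numpy, and the helper killing_form is an ad hoc illustration (it recovers coordinates in a matrix basis by least squares).

    # Sketch (not from the notes): the Killing form of sl_2 is nondegenerate,
    # while that of the solvable algebra with [a, b] = b is degenerate.
    import numpy as np

    def killing_form(basis, bracket):
        """kappa(a, b) = trace(ad_a ad_b), with ad computed in the given basis."""
        n = len(basis)
        M = np.column_stack([b.flatten() for b in basis])
        coords = lambda m: np.linalg.lstsq(M, m.flatten(), rcond=None)[0]
        ad = [np.column_stack([coords(bracket(a, b)) for b in basis]) for a in basis]
        return np.array([[np.trace(ad[i] @ ad[j]) for j in range(n)] for i in range(n)])

    br = lambda a, b: a @ b - b @ a
    h = np.array([[1., 0.], [0., -1.]]); x = np.array([[0., 1.], [0., 0.]]); y = np.array([[0., 0.], [1., 0.]])
    K_sl2 = killing_form([h, x, y], br)
    print("det of Killing form of sl_2:", np.linalg.det(K_sl2))          # nonzero: semisimple

    a = np.array([[1., 0.], [0., 0.]]); b = np.array([[0., 1.], [0., 0.]])   # [a, b] = b
    K_solv = killing_form([a, b], br)
    print("det of Killing form of the solvable algebra:", np.linalg.det(K_solv))  # zero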
2.2 Consequences. Let L be a Lie algebra over a field k of characteristic 0.

(i) L is semisimple if and only if L is a direct sum of simple ideals. In particular, this implies that L = [L, L].

Proof. If L = L_1 ⊕ ··· ⊕ L_n with L_i a simple ideal of L for any i, and J is an abelian ideal of L, then [J, L_i] is an abelian ideal of L_i, and hence it is 0. Hence [J, L] = 0. This shows that the projection of J on each L_i is contained in the center Z(L_i), which is 0 by simplicity. Hence J = 0.
Conversely, assume that L is semisimple and let I be a minimal ideal of L. Take the orthogonal I^⊥ = {x ∈ L : κ(x, I) = 0}, which is an ideal of L by invariance of κ. Cartan's criterion for solvability (or better Proposition 1.11) shows that I ∩ I^⊥ is solvable and hence, as R(L) = 0, I ∩ I^⊥ = 0 and L = I ⊕ I^⊥. Now, I is
simple, since any ideal J of I satisfies [J, I^⊥] ⊆ [I, I^⊥] ⊆ I ∩ I^⊥ = 0, and hence [J, L] = [J, I] ⊆ J. Also, κ = κ_I ⊥ κ_{I^⊥} is the orthogonal sum of the Killing forms of I and I^⊥. So we can proceed with I^⊥ as we did for L to complete a decomposition of L into the direct sum of simple ideals.

(ii) Let K/k be a field extension; then L is semisimple if and only if so is the scalar extension K ⊗_k L.

Proof. Once a basis of L over k is fixed (which is also a basis of K ⊗_k L over K if we identify L with 1 ⊗ L), the coordinate matrices of the Killing forms of L and K ⊗_k L coincide, whence the result.

(iii) If L is semisimple and I is a proper ideal of L, then both I and L/I are semisimple.

Proof. As in (i), L = I ⊕ I^⊥ and the Killing form of L is the orthogonal sum of the Killing forms of these two ideals: κ_I and κ_{I^⊥}. Hence both Killing forms are nondegenerate and, hence, both I and I^⊥ are semisimple. Finally, L/I ≅ I^⊥.
(iv) Assume that L is a Lie subalgebra of gl(V) and that the trace form B : L × L → k, (x, y) ↦ B(x, y) = trace(xy), is nondegenerate. Then L = Z(L) ⊕ [L, L] and [L, L] is semisimple (recall that the center Z(L) is abelian). Moreover, the ideals Z(L) and [L, L] are orthogonal relative to B, and hence the restrictions of B to both Z(L) and [L, L] are nondegenerate.

Proof. Let V = V_0 ⊇ V_1 ⊇ ··· ⊇ 0 be a composition series of V as a module for L. Then we know, because of Consequences 1.9, that both [L, R(L)] and [L, L] ∩ R(L) act nilpotently on V. Therefore, B( [L, R(L)], L ) = 0 = B( [L, L] ∩ R(L), L ) and, as B is nondegenerate, this shows that [L, R(L)] = 0 = [L, L] ∩ R(L). In particular, R(L) = Z(L) and, since L/R(L) is semisimple, L/R(L) = [L/R(L), L/R(L)] = ( [L, L] + R(L) )/R(L). Hence L = [L, L] + R(L) and [L, L] ∩ R(L) = 0, whence it follows that L = Z(L) ⊕ [L, L]. Besides, by invariance of B, B( Z(L), [L, L] ) = B( [Z(L), L], L ) = 0 and the last part follows.
(v) An endomorphism d of a Lie algebra L is said to be a derivation if d([x, y]) = [d(x), y] + [x, d(y)] for any x, y ∈ L. For any x ∈ L, ad_x is a derivation, called an inner derivation. Then, if L is semisimple, any derivation is inner.

Proof. Let d be any derivation and consider the linear form L → k, x ↦ trace(d ad_x). Since κ is nondegenerate, there is a z ∈ L such that κ(z, x) = trace(d ad_x) for any x ∈ L. But then, for any x, y ∈ L,

    κ( d(x), y ) = trace( ad_{d(x)} ad_y )
                 = trace( [d, ad_x] ad_y )       (since d is a derivation)
                 = trace( d [ad_x, ad_y] )
                 = trace( d ad_{[x,y]} )
                 = κ( z, [x, y] ) = κ( [z, x], y ).

Hence, by nondegeneracy, d(x) = [z, x] for any x, so d = ad_z.
Let V and W be two modules for a Lie algebra L. Then both Hom_k(V, W) and V ⊗_k W are L-modules too by means of:

    (x.f)(v) = x.( f(v) ) − f(x.v),
    x.(v ⊗ w) = (x.v) ⊗ w + v ⊗ (x.w),

for any x ∈ L, f ∈ Hom_k(V, W), v ∈ V and w ∈ W.
2.3 Proposition. Let L be a Lie algebra over an algebraically closed field k of characteristic 0. Then any irreducible module for L is, up to isomorphism, of the form V = V_0 ⊗_k Z, with V_0 and Z modules such that dim_k Z = 1 and V_0 is irreducible and annihilated by R(L). (Hence, V_0 is a module for the semisimple Lie algebra L/R(L).)

Proof. By the proof of Consequence 1.9.(iv), we know that there is a linear form λ : R(L) → k such that x.v = λ(x)v for any x ∈ R(L) and v ∈ V. Moreover, λ( [L, R(L)] ) = 0 = λ( [L, L] ∩ R(L) ). Thus we may extend λ to a form L → k, also denoted by λ, in such a way that λ( [L, L] ) = 0.
Let Z = kz be a one dimensional vector space, which is a module for L by means of x.z = λ(x)z, and let W = V ⊗_k Z^* (Z^* is the dual vector space to Z), which is also an L-module. Then the linear map

    W ⊗_k Z → V,   (v ⊗ f) ⊗ z ↦ f(z)v,

is easily seen to be an isomorphism of modules. Moreover, since V is irreducible, so is W, and for any x ∈ R(L), v ∈ V and f ∈ Z^*, x.(v ⊗ f) = (x.v) ⊗ f + v ⊗ (x.f) = λ(x)v ⊗ f − λ(x)v ⊗ f = 0 (since (x.f)(z) = −f(x.z) = −λ(x)f(z)). Hence W is annihilated by R(L).
Recall the following denition.
2.4 Denition. A module is said to be completely reducible if and only if it is a direct
sum of irreducible modules or, equivalently, if any submodule has a complementary
submodule.
2.5 Weyls Theorem. Any representation of a semisimple Lie algebra over a eld of
characteristic 0 is completely reducible.
Proof. Let L be a semisimple Lie algebra over the eld k of characteristic 0, and let
: L gl(V ) be a representation and W a submodule of V . Does there exist a
submodule W
t
such that V = W W
t
?
We may extend scalars and assume that k is algebraically closed, because the exis-
tence of W
t
is equivalent to the existence of a solution to a system of linear equations:
does there exist End
L
(V ) such that (V ) = W and [
W
= I
W
(the identity map on
W)?
Now, assume rst that W is irreducible and V/W trivial (that is, L.V W). Then
we may change L by its quotient (L), which is semisimple too (or 0, which is a trivial
case), and hence assume that 0 ,= L gl(V ). Consider the trace form b
V
: L L k,
2. SEMISIMPLE LIE ALGEBRAS 25
(x, y) ↦ trace(xy). By Cartan's criterion for solvability, ker b_V is a solvable ideal of L, hence 0, and thus b_V is nondegenerate. Take dual bases {x_1, ..., x_n} and {y_1, ..., y_n} of L relative to b_V (that is, b_V(x_i, y_j) = δ_{ij} for any i, j).
Then the element c_V = Σ_{i=1}^n x_i y_i ∈ End_k(V) is called the Casimir element, and

    trace(c_V) = Σ_{i=1}^n trace(x_i y_i) = Σ_{i=1}^n b_V(x_i, y_i) = n = dim_k L.
Moreover, for any x ∈ L, there are scalars such that [x_i, x] = Σ_{j=1}^n α_{ij} x_j and [y_i, x] = Σ_{j=1}^n β_{ij} y_j for any i. Since

    b_V( [x_i, x], y_j ) + b_V( x_i, [y_j, x] ) = 0

for any i, j, it follows that α_{ij} + β_{ji} = 0 for any i, j, so

    [c_V, x] = Σ_{i=1}^n ( [x_i, x]y_i + x_i[y_i, x] ) = Σ_{i,j=1}^n ( α_{ij} + β_{ji} ) x_j y_i = 0.
We then have that c_V(V) ⊆ W and, by Schur's Lemma (W is assumed here to be irreducible), c_V|_W ∈ End_L(W) = kI_W. Besides, trace(c_V) = dim_k L. Therefore,

    c_V|_W = ( dim_k L / dim_k W ) I_W

and V = ker c_V ⊕ im c_V = ker c_V ⊕ W. Hence W' = ker c_V is a submodule that complements W.
Let us show now that the result holds as long as L.V ⊆ W.
To do so, we argue by induction on dim_k W, the result being trivial if dim_k W = 0. If W is irreducible, the result holds by the previous arguments. Otherwise, take a maximal submodule Z of W. By the induction hypothesis, there is a submodule Ṽ such that V/Z = W/Z ⊕ Ṽ/Z, and hence V = W + Ṽ and W ∩ Ṽ = Z. Now, L.Ṽ ⊆ Ṽ ∩ W = Z and dim_k Z < dim_k W, so there exists a submodule W' of Ṽ such that Ṽ = Z ⊕ W'. Hence V = W + W' and W ∩ W' ⊆ W ∩ Ṽ ∩ W' = Z ∩ W' = 0, as required.
In general, consider the following submodules of the L-module Hom_k(V, W):

    M = { f ∈ Hom_k(V, W) : there exists λ_f ∈ k such that f|_W = λ_f id },
    N = { f ∈ Hom_k(V, W) : f|_W = 0 }.

For any x ∈ L, f ∈ M, and w ∈ W:

    (x.f)(w) = x.( f(w) ) − f(x.w) = x.(λ_f w) − λ_f (x.w) = 0,

so L.M ⊆ N. Then there exists a submodule X of Hom_k(V, W) such that M = N ⊕ X. Since L.X ⊆ X ∩ N = 0, X is contained in Hom_L(V, W). Take f ∈ X with λ_f = 1, so f(V) ⊆ W and f|_W = id. Then V = ker f ⊕ W, and ker f is a submodule of V that complements W.
2.6 Consequences on Jordan decompositions. Let k be an algebraically closed field of characteristic 0.
(i) Let V be a vector space over k and let L be a semisimple Lie subalgebra of gl(V). For any x ∈ L, consider its Jordan decomposition x = x_s + x_n. Then x_s, x_n ∈ L.

Proof. We know that ad x_s is semisimple, ad x_n nilpotent, and that ad x = ad x_s + ad x_n is the Jordan decomposition of ad x. Let W be any irreducible submodule of V and consider the Lie subalgebra of gl(V):

    L_W = { z ∈ gl(V) : z(W) ⊆ W and trace(z|_W) = 0 }.

Since L = [L, L], trace(x|_W) = 0 for any x ∈ L. Hence L ⊆ L_W. Moreover, for any x ∈ L, x(W) ⊆ W, so x_s(W) ⊆ W, x_n(W) ⊆ W and x_s, x_n ∈ L_W.
Consider also the Lie subalgebra of gl(V):

    N = { z ∈ gl(V) : [z, L] ⊆ L } = { z ∈ gl(V) : ad z(L) ⊆ L }.

Again, for any x ∈ L, ad x(L) ⊆ L, so ad x_s(L) ⊆ L, ad x_n(L) ⊆ L, and x_s, x_n ∈ N.
Therefore, it is enough to prove that L = ( ∩_W L_W ) ∩ N, where W runs over the irreducible submodules of V. If we denote by L̃ the subalgebra ( ∩_W L_W ) ∩ N, then L is an ideal of L̃.
By Weyl's Theorem, there is a subspace U of L̃ such that L̃ = L ⊕ U and [L, U] ⊆ U. But [L, U] ⊆ [L, N] ⊆ L, so [L, U] = 0. Then, for any z ∈ U and irreducible submodule W of V, z|_W ∈ Hom_L(W, W) = kI_W (by Schur's Lemma) and trace(z|_W) = 0, since z ∈ L_W. Therefore z|_W = 0. But Weyl's Theorem asserts that V is a direct sum of irreducible submodules, so z = 0. Hence U = 0 and L = L̃.
(ii) Let L be a semisimple Lie algebra. Then L ≅ ad L, which is a semisimple subalgebra of gl(L). For any x ∈ L, let ad x = s + n be the Jordan decomposition in End_k(L) = gl(L). By item (i), there are unique elements x_s, x_n ∈ L such that s = ad x_s, n = ad x_n. Since ad is one-to-one, x = x_s + x_n. This is called the absolute Jordan decomposition of x.
Note that [x, x_s] = 0 = [x, x_n], since [ad x, ad x_s] = 0 = [ad x, ad x_n].
(iii) Let L be a semisimple Lie algebra and let ρ : L → gl(V) be a representation. Let x ∈ L and let x = x_s + x_n be its absolute Jordan decomposition. Then ρ(x) = ρ(x_s) + ρ(x_n) is the Jordan decomposition of ρ(x).

Proof. Since ρ(L) ≅ L/ker ρ is a quotient of L, ρ(x_s) = ρ(x)_s and ρ(x_n) = ρ(x)_n for the absolute Jordan decomposition in ρ(L) (this is because ad_{ρ(L)}( ρ(x_s) ) is semisimple and ad_{ρ(L)}( ρ(x_n) ) is nilpotent). Here ad_{ρ(L)} denotes the adjoint map in the Lie algebra ρ(L), to distinguish it from the adjoint map of gl(V). By item (i), if ρ(x) = s + n is the Jordan decomposition of ρ(x), then s, n ∈ ρ(L) and we obtain two Jordan decompositions in gl( ρ(L) ):

    ad_{ρ(L)}( ρ(x) ) = ad_{ρ(L)} s + ad_{ρ(L)} n = ad_{ρ(L)}( ρ(x_s) ) + ad_{ρ(L)}( ρ(x_n) ).

By uniqueness, s = ρ(x_s) and n = ρ(x_n).
There are other important consequences that can be drawn from Weyl's Theorem:
2.7 More consequences.
(i) (Whitehead's Lemma) Let L be a semisimple Lie algebra over a field k of characteristic 0, let V be a module for L, and let δ : L → V be a linear map such that

    δ( [x, y] ) = x.δ(y) − y.δ(x),

for any x, y ∈ L. Then there is an element v ∈ V such that δ(x) = x.v for any x ∈ L.

Proof. δ belongs to the L-module Hom_k(L, V), and for any x, y ∈ L:

(2.1)    (x.δ)(y) = x.δ(y) − δ( [x, y] ) = y.δ(x) = φ_{δ(x)}(y),

where φ_v(x) = x.v for any x ∈ L and v ∈ V. Moreover, for any x, y ∈ L and v ∈ V,

    (x.φ_v)(y) = x.( φ_v(y) ) − φ_v( [x, y] ) = x.(y.v) − [x, y].v = y.(x.v) = φ_{x.v}(y).

Thus, φ_V = {φ_v : v ∈ V} is a submodule of Hom_k(L, V), which is contained in W = { f ∈ Hom_k(L, V) : x.f = φ_{f(x)} ∀x ∈ L }, and this satisfies L.W ⊆ φ_V. By Weyl's Theorem there is another submodule W̃ such that W = φ_V ⊕ W̃ and L.W̃ ⊆ W̃ ∩ φ_V = 0.
But for any f ∈ W̃ and x, y ∈ L, (2.1) gives

    0 = (x.f)(y) = x.f(y) − f( [x, y] ) = φ_{f(y)}(x) − f( [x, y] )
      = (y.f)(x) − f( [x, y] ) = −f( [x, y] ).

Therefore, f(L) = f( [L, L] ) = 0. Hence W̃ = 0 and W = φ_V, as required.
(ii) (Levi-Malcev Theorem) Let L be a Lie algebra over a field k of characteristic 0. Then there exists a subalgebra S of L such that L = R(L) ⊕ S. If nontrivial, S is semisimple. Moreover, if T is any semisimple subalgebra of L, then there is an automorphism f of L, in the group of automorphisms generated by {exp ad_z : z ∈ N(L)}, such that f(T) ⊆ S.

Proof. In case S is a nontrivial subalgebra of L with L = R(L) ⊕ S, then S ≅ L/R(L) is semisimple.
Let us prove the existence result by induction on dim L, it being trivial if dim L = 1 (as L = R(L) in this case). If I is an ideal of L with 0 ⊊ I ⊊ R(L), then by the induction hypothesis, there exists a subalgebra T of L, containing I, with L/I = R(L)/I ⊕ T/I. Then T/I is semisimple, so I = R(T) and, by the induction hypothesis again, T = I ⊕ S for a subalgebra S of L. It follows that L = R(L) ⊕ S, as required. Therefore, it can be assumed that R(L) is a minimal nonzero ideal of L, and hence [R(L), R(L)] = 0 and [L, R(L)] is either 0 or R(L).
In case [L, R(L)] = 0, L is a module for the semisimple Lie algebra L/R(L), so Weyl's Theorem shows that L = R(L) ⊕ S for an ideal S.
Otherwise, [L, R(L)] = R(L). Consider then the module gl(L) for L (x.f = [ad_x, f] for any x ∈ L and f ∈ gl(L)). Let Φ be the associated representation. Then the subspaces

    M = { f ∈ gl(L) : f(L) ⊆ R(L) and there exists λ_f ∈ k such that f|_{R(L)} = λ_f id },
    N = { f ∈ gl(L) : f(L) ⊆ R(L) and f( R(L) ) = 0 },

are submodules of gl(L), with Φ(L)(M) ⊆ N ⊆ M. Moreover, for any x ∈ R(L), f ∈ M and z ∈ L:

(2.2)    [ad_x, f](z) = [x, f(z)] − f( [x, z] ) = −λ_f ad_x(z),

since [x, f(z)] ∈ [R(L), R(L)] = 0. Hence, Φ( R(L) )(M) ⊆ {ad_x : x ∈ R(L)} ⊆ N.
Write R = {ad_x : x ∈ R(L)}. Therefore, M/R is a module for the semisimple Lie algebra L/R(L) and, by Weyl's Theorem, there is another submodule Ñ with R ⊆ Ñ ⊆ M such that M/R = N/R ⊕ Ñ/R. Take g ∈ Ñ \ N with λ_g = 1. Since Φ(L)(M) ⊆ N, Φ(L)(g) ⊆ R, so for any y ∈ L, there is an element τ(y) ∈ R(L) such that

    [ad_y, g] = ad_{τ(y)},

and τ : L → R(L) is linear. Equation (2.2) shows that τ|_{R(L)} = −id, so that L = R(L) ⊕ ker τ and ker τ = {x ∈ L : Φ(x)(g) = 0} is a subalgebra of L.
Moreover, if T is a semisimple subalgebra of L, let us prove that there is a suitable
automorphism of L that embeds T into S. Since T is semisimple, T = [T, T]
[L, L] = [L, R(L)] S N(L) S. If N(L) = 0, the result is clear. Otherwise,
let I be a minimal ideal of L contained in N(L) (hence I is abelian). Arguing
by induction on dimL, we may assume that there are elements z
1
, . . . , z
r
in N(L)
such that
T
t
= exp ad
z
1
exp ad
z
r
(T) I S.
Now, it is enough to prove that there is an element z I such that exp ad
z
(T
t
) S.
Therefore, it is enough to prove the result assuming that L = RS, where R is an
abelian ideal of L. In this case, let : T R and : T S be the projections
of T on R and S respectively (that is, for any t T, t = (t) + (t)). For any
t
1
, t
2
T,
[t
1
, t
2
] = [(t
1
) +(t
1
), (t
2
) +(t
2
)]
= [(t
1
), t
2
] + [t
1
, (t
2
)] + [(t
1
), (t
2
)],
since [R, R] = 0. Hence ([t
1
, t
2
]) = [(t
1
), t
2
] + [t
1
, (t
2
)]. Witheheads Lemma
shows the existence of an element z R such that (t) = [t, z] for any t T. But
then, since (ad
z
)
2
= 0 because R is abelian,
exp ad
z
(t) = t + [z, t] = t (t) = (t) S,
for any t T. Therefore, exp ad
z
(T) S.
(iii) Let L be a Lie algebra over a eld k of characteristic 0, then [L, R(L)] = [L, L]
R(L).
Proof. L = R(L) S for a semisimple (if nonzero) subalgebra S, so [L, L] =
[L, R(L)] [S, S] = [L, R(L)] S, and [L, L] R(L) = [L, R(L)].
3. REPRESENTATIONS OF sl
2
(k) 29
3. Representations of sl
2
(k)
Among the simple Lie algebras, the Lie algebra sl
2
(k) of two by two trace zero matrices
plays a distinguished role. In this section we will study its representations over elds of
characteristic 0.
First note that sl
2
(k) = kh + kx + ky with h =
_
1 0
0 1
_
, x =
_
0 1
0 0
_
and y =
_
0 0
1 0
_
,
and that
[h, x] = 2x, [h, y] = 2y, [x, y] = h.
If the characteristic of the ground eld k is ,= 2, then sl
2
(k) is a simple Lie algebra.
Let V (n) be the vector space spanned by the homogeneous degree n polynomials in
two indeterminates X and Y , and consider the representation given by:

n
: sl
2
(k) gl
_
V (n)
_
h X

X
Y

Y
,
x X

Y
,
y Y

X
.
3.1 Exercise. Check that this indeed gives a representation of sl
2
(k).
3.2 Theorem. Let k be a eld of characteristic 0. Then the irreducible representations
of sl
2
(k) are, up to isomorphism, exactly the
n
, n 0.
Proof. Let us assume rst that k is algebraically closed, and let : sl
2
(k) gl(V ) be
an irreducible representation.
Since ad x is nilpotent, the consequences of Weyls Theorem assert that (x) is
nilpotent too (similarly, (y) is nilpotent and (h) semisimple). Hence W = w V :
x.w = 0 , = 0. For any w W,
x.(h.w) = [x, h].w +h.(x.w) = 2x.w +h.(x.w) = 0,
so W is h-invariant and, since (h) is semisimple, there is a nonzero v W such h.v = v
for some k.
But (y) is nilpotent, so there is an n Z
0
such that v, (y)(v), . . . , (y)
n
(v) ,= 0
but (y)
n+1
(v) = 0. Now, for any i > 0,
(h)(y)
i
(v) =
_
[h, y]
_
(y)
i1
(v) +(y)(h)(y)
i1
(v)
= 2(y)
i
(v) +(y)
_
(h)(y)
i1
(v)
_
which shows, recursively, that
h.
_
(y)
i
(v)
_
= ( 2i)(y)
i
(v),
and
(x)(y)
i
(v) =
_
[x, y]
_
(y)
i1
(v) +(y)(x)(y)
i1
(v)
=
_
2(i 1)
_
(y)
i1
(v) +(y)
_
(x)(y)
i1
(v)
_
30 CHAPTER 2. LIE ALGEBRAS
which proves that
x.
_
(y)
i
(v)
_
= i
_
(i 1)
_
(y)
i1
(v).
Therefore, with v
0
= v and v
i
= (y)
i
(v), for i > 0, we have
h.v
i
= ( 2i)v
i
,
y.v
i
= v
i+1
, (v
n+1
= 0),
x.v
i
= i
_
(i 1)
_
v
i1
, (v
1
= 0).
Hence,
n
i=0
kv
i
is a submodule of V and, since V is irreducible, we conclude that
V =
n
i=0
kv
i
. Besides,
0 = trace (h) = + ( 2) + + ( 2n) = (n + 1) (n + 1)n.
So = n. The conclusion is that there is a unique irreducible module V of dimension
n + 1, which contains a basis v
0
, . . . , v
n
with action given by
h.v
i
= (n 2i)v
i
, y.v
i
= v
i+1
, x.v
i
= i(n + 1 i)v
i1
(where v
n+1
= v
1
= 0.) Then, a fortiori, V is isomorphic to V (n). (One can check that
the assignment v
0
X
n
, v
i
n(n 1) (n i + 1)X
ni
Y
i
gives an isomorphism.)
Finally, assume now that k is not algebraically closed and that

k is an algebraic
closure of k. If V is an sl
2
(k)-module, then

k
k
V is an sl(2,

k)-module which, by
Weyls Theorem, is completely reducible. Then the previous arguments show that the
eigenvalues of (h) are integers (and hence belong to k). Now the same arguments above
apply, since the algebraic closure was only used to insure the existence of eigenvalues of
(h) on the ground eld.
3.3 Remark. Actually, the result above can be proven easily without using Weyls
Theorem. For k algebraically closed of characteristic 0, let 0 ,= v V be an eigenvector
for (h): h.v = v. Then, with the same arguments as before, h.(x)
n
(v) = ( +
2n)(x)
n
v and, since the dimension is nite and the characteristic 0, there is a natural
number n such that (x)
n
(v) = 0. This shows that W = w V : x.w = 0 , = 0. In the
same vein, for any w W there is a natural number m such that (y)
m
(w) = 0. This
is all we need for the proof above.
3.4 Corollary. Let k be a eld of characteristic 0 and let : sl
2
(k) gl(V ) be a
representation. Consider the eigenspaces V
0
= v V : h.v = 0 and V
1
= v V :
h.v = v. Then V is a direct sum of dim
k
V
0
+ dim
k
V
1
irreducible modules.
Proof. By Weyls Theorem, V =
N
i=1
W
i
, with W
i
irreducible for any i. Now, for
any i, there is an n
i
Z
0
such that W
i
= V (n
i
), and hence (h) has eigenvalues
n
i
, n
i
2, . . . , n
i
, all with multiplicity 1, on W
i
. Hence dim
k
W
i
0
+ dim
k
W
i
1
= 1 for
any i, where W
i
0
= W
i
V
0
, W
i
1
= W
i
V
1
. Since V
0
=
N
i=1
W
i
0
and V
1
=
N
i=1
W
i
1
, the
result follows.
Actually, the eigenvalues of (h) determine completely, up to isomorphism, the rep-
resentation, because the number of copies of V (n) that appear in the module V in the
Corollary above is exactly dim
k
V
n
dim
k
V
n+2
, where V
n
= v V : h.v = nv for any
n; because n appears as eigenvalue in V (n) and in V (n+2m) (m 1) with multiplicity
1, but n + 2 is also an eigenvalue of (h) in V (n + 2m), again with multiplicity 1.
4. CARTAN SUBALGEBRAS 31
3.5 Corollary. (Clebsch-Gordan formula)
Let n, m Z
0
, with n m, and let k be a eld of characteristic 0. Then, as modules
for sl
2
(k),
V (n)
k
V (m)

= V (n +m) V (n +m2) V (n m).
Proof. The eigenvalues of the action of h on V (n)
k
V (m) are n 2i + m 2j =
(n +m) 2(i +j), (0 i n, 0 j m). Therefore, for any 0 p n +m,
dim
k
V
n+m2p
=

(i, j) Z
0
Z
0
: 0 i n, 0 j m, i +j = p

and dim
k
V
n+m2p
dim
k
V
n+m2(p1)
= 1 for any p = 1, . . . , m, while dim
k
V
n+m2p

dim
k
V
n+m2(p1)
= 0 for p = m+ 1, . . . ,
_
n+m
2

.
4. Cartan subalgebras
In the previous section, we have seen the importance of the subalgebra kh of sl
2
(k). We
look for similar subalgebras in any semisimple Lie algebra.
4.1 Denition. Let L be a Lie algebra over a eld k, and let x L, the subalgebra
E
L
(x) = y L : n N such that (ad x)
n
(y) = 0
is called an Engel subalgebra of L relative to x.
4.2 Exercise. Check that E
L
(x) is indeed a subalgebra and that dim
k
E
L
(x) is the
multiplicity of 0 as an eigenvalue of ad x.
4.3 Properties. Let L be a Lie algebra over a eld k.
1. Let S be a subalgebra of L, and let x L such that E
L
(x) S. Then N
L
(S) = S,
where N
L
(S) = y L : [y, S] S is the normalizer of S in L. (Note that N
L
(S)
is always a subalgebra of L and S is an ideal of N
L
(S).)
Proof. We have x E
L
(x) S so 0 is not an eigenvalue of the action of ad x on
N
L
(S)/S. On the other hand ad x
_
N
L
(S)
_
[S, N
L
(S)] S. Hence N
L
(S)/S =
0, or N
L
(S) = S.
2. Assume that k is innite. Let S be a subalgebra of L and let z S be an element
such that E
L
(z) is minimal in the set E
L
(x) : x S. If S E
L
(z), then
E
L
(z) E
L
(x) for any x S.
Proof. Put S
0
= E
L
(z). Then S S
0
L. For any x S and k, z +x S,
so that ad(z +x) leaves invariant both S and S
0
. Hence, the characteristic poly-
nomial of ad(z +x) is a product f

(X)g

(X), where f

(X) is the characteristic


polynomial of the restriction of ad(z + x) to S
0
and g

(X) is the characteristic


32 CHAPTER 2. LIE ALGEBRAS
polynomial of the action of ad(z +x) on the quotient L/S
0
. Let r = dim
k
S
0
and
n = dim
k
L. Thus,
f

(X) = X
r
+f
1
()X
r1
+ +f
r
(),
g

(X) = X
nr
+g
1
()X
nr1
+ +g
nr
(),
with f
i
(), g
i
() polynomials in of degree i, for any i.
By hypothesis, g
nr
(0) ,= 0, and since k is innite, there are dierent scalars

1
, . . . ,
r+1
k with g
nr
(
j
) ,= 0 for any j = 1, . . . , r + 1. This shows that
E
L
(z +
j
x) S
0
for any j. But S
0
= E
L
(z) is minimal, so E
L
(z) = E
L
(z +
j
x)
for any j. Hence f

j
(X) = X
r
for any j = 1, . . . , r + 1, and this shows that
f
i
(
j
) = 0 for any i = 1, . . . , r and j = 1, . . . , r + 1. Since the degree of each f
i
is
at most r, this proves that f
i
= 0 for any i and, thus, ad(z +x) is shown to act
nilpotently on E
L
(z) = S
0
for any k: E
L
(z) E
L
(z +x) for any x S and
k. Therefore, E
L
(z) E
L
(x) for any x S.
4.4 Denition. Let L be a Lie algebra over a eld k. A subalgebra H of L is said to
be a Cartan subalgebra of L if it is nilpotent and self normalizing (N
L
(H) = H).
4.5 Example. kh is a Cartan subalgebra of sl
2
(k) if the characteristic of k is ,= 2.
4.6 Theorem. Let L be a Lie algebra over an innite eld k and let H be a subalgebra
of L. Then H is a Cartan subalgebra of L if and only if it is a minimal Engel subalgebra
of L.
Proof. If H = E
L
(z) is a minimal Engel subalgebra of L, then by Property 1 above, H
is self normalizing, while Property 2 shows that H E
L
(x) for any x H which, by
Engels Theorem, proves that H is nilpotent.
Conversely, let H be a nilpotent self normalizing subalgebra. By nilpotency, H
E
L
(x) for any x H and, hence, it is enough to prove that there is an element z H
with H = E
L
(z). Take z H with E
L
(z) minimal in E
L
(x) : x H. By Property
2 above, H E
L
(z) E
L
(x) for any x H. This means that ad x acts nilpotently
on E
L
(z)/H for any x H so, if H E
L
(z), Engels Theorem shows that there is an
element y E
L
(z) H such that [x, y] H for any x H, but then y N
L
(H) H, a
contradiction. Hence H = E
L
(z), as required.
4.7 Denition. Let L be a semisimple Lie algebra over a eld k of characteristic 0.
For any x L, let x = x
s
+ x
n
be its absolute Jordan decomposition in

k
k
L, with

k an algebraic closure of k. The element x will be said to be semisimple (respectively,


nilpotent) if x = x
s
(resp., if x = x
n
); that is, if ad x gl(L) is semisimple (resp.,
nilpotent).
A subalgebra T of L is said to be toral if all its elements are semisimple.
4.8 Theorem. Let L be a semisimple Lie algebra over an algebraically closed eld k of
characteristic 0, and let H be a subalgebra of L. Then H is a Cartan subalgebra of L if
and only if it is a maximal toral subalgebra of L.
4. CARTAN SUBALGEBRAS 33
Proof. Assume rst that H is a Cartan subalgebra of L so, by the previous Theorem,
H = E
L
(x) is a minimal Engel subalgebra. Take the absolute Jordan decomposition
x = x
s
+ x
n
. Then E
L
(x) = ker ad x
s
= C
L
(x
s
) (= y L : [x
s
, y] = 0, the centralizer
of x
s
). Therefore, there exists a semisimple element h H such that H = C
L
(h).
Since k is algebraically closed, ad h is diagonalizable, so that L = H
_

0,=k
L

(h)
_
,
where L

(h) is the eigenspace of L relative to ad h. (Note that L


0
(h) = H.)
One checks immediately that [L

(h), L

(h)] L
+
(h) and, thus,
_
L

(h), L

(h)
_
=
0 if ,= , where is the Killing form of L. Since is nondegenerate and
_
H, L

(h)
_
=
0 for any 0 ,= H

, the restriction of to H is nondegenerate too.


Now, H is nilpotent, and hence solvable. By Proposition 1.11 applied to ad H
gl(L),
_
[H, H], H
_
= 0 and, since [
H
is nondegenerate, we conclude that [H, H] = 0,
that is, H is abelian.
For any x H = C
L
(h), [x, h] = 0 implies that [x
s
, h] = 0 = [x
n
, h]. Hence
x
n
H and ad x
n
is nilpotent. Thus, for any y H, [x
n
, y] = 0, so ad
x
n
ad
y
is
a nilpotent endomorphism of L. This shows that (x
n
, H) = 0 and hence x
n
= 0.
Therefore H is toral. On the other hand, if H S, for a toral subalgebra S of L, then
S = H
_

(h)
_
. But for any 0 ,= x S

(h), with ,= 0, ad x is nilpotent (as


(ad x)
n
_
L

(h)
_
L
+n
(h), which is eventually 0). Hence x = x
n
= 0 as S is toral, a
contradiction. Thus, H is a maximal toral subalgebra of L.
Conversely, let T be a maximal toral subalgebra of L. If x T and [x, T] ,= 0 then,
since x is semisimple, there is a y T and a 0 ,= k with [x, y] = y. But then
(ad y)
2
(x) = 0 and, since y is semisimple, ad
y
(x) = 0, a contradiction. Hence T is an
abelian subalgebra of L.
Let x
1
, . . . , x
m
be a basis of T. Then ad x
1
, . . . , ad x
m
are commuting diagonaliz-
able endomorphisms of L, so they are simultaneously diagonalizable. This shows that
L =
T
L

(T), where T

is the dual vector space to T and L

(T) = y L :
[t, y] = (t)y t T. As before, [L

(T), L

(T)] L
+
(T) for any , T

and
L
0
(T) = C
L
(T) (= x L : [x, T] = 0), the centralizer of T.
For any x = x
s
+ x
n
C
L
(T), both x
s
, x
n
C
L
(T). Hence T + kx
s
is a toral
subalgebra. By maximality, x
s
T. Then ad x[
C
L
(T)
= ad x
n
[
C
L
(T)
is nilpotent, so by
Engels Theorem, H = C
L
(T) is a nilpotent subalgebra. Moreover, for any x N
L
(H)
and t T, [x, t] [x, H] H, so [[x, t], t] = 0 and, since t is semisimple, we get [x, t] = 0,
so x C
L
(T) = H. Thus N
L
(H) = H and H is a Cartan subalgebra of L. By the rst
part of the proof, H is a toral subalgebra which contains T and, by maximality of T,
T = H is a Cartan subalgebra of L.
4.9 Corollary. Let L be a semisimple Lie algebra over a eld k of characteristic 0 and
let H be a subalgebra of L. Then H is a Cartan subalgebra of L if and only if it is a
maximal subalgebra among the subalgebras which are both abelian and toral.
Proof. The properties of being nilpotent and self normalizing are preserved under ex-
tension of scalars. Thus, if

k is an algebraic closure of k and H is nilpotent and self
normalizing, so is

k
k
H. Hence

k
k
H is a Cartan subalgebra of

k
k
L. By the
previous proof, it follows that

k
k
H is abelian, toral and self centralizing, hence so is
H. But, since H = C
L
(H), H is not contained in any bigger abelian subalgebra.
Conversely, if H is a subalgebra which is maximal among the subalgebras which are
both abelian and toral, the arguments in the previous proof show that C
L
(H) is a Cartan
34 CHAPTER 2. LIE ALGEBRAS
subalgebra of L, and hence abelian and toral and containing H. Hence H = C
L
(H) is
a Cartan subalgebra.
4.10 Exercises.
(i) Let L = sl(n) be the Lie algebra of n n trace zero matrices, and let H be
the subalgebra consisting of the diagonal matrices of L. Prove that H is a Cartan
subalgebra of L and that L = H
_

1i,=jn
L

j
(H)
_
, where
i
H

is the linear
form that takes any diagonal matrix to its i
th
entry. Also show that L

j
(H) =
kE
ij
, where E
ij
is the matrix with 1 in the (i, j) position and 0s elsewhere.
(ii) Check that R
3
is a Lie algebra under the usual vector cross product. Prove that
it is toral but not abelian.
5. Root space decomposition
Throughout this section, L will denote a semisimple Lie algebra over an algebraically
closed eld k of characteristic 0, with Killing form . Moreover, H will denote a xed
Cartan subalgebra of L.
The arguments in the previous section show that there is a nite set H

0
of nonzero linear forms on H, whose elements are called roots, such that
(5.3) L = H
_

_
,
where L

= x L : [h, x] = (h)x h H , = 0 for any . Moreover, H = C


L
(H)
and [L

, L

] L
+
, where H = L
0
and L

= 0 if 0 ,= , .
5.1. Properties of the roots
(i) If , 0 and + ,= 0, then
_
L

, L

_
= 0.
Proof. ad x

ad x

takes each L

to L
+(+)
,= L

so its trace is 0.
(ii) If , then . Moreover, the restriction : L

k is nondegen-
erate.
Proof. Otherwise, (L

, L) would be 0, a contradiction with the nondegeneracy


of .
(iii) spans H

.
Proof. Otherwise, there would exist a 0 ,= h H with (h) = 0 for any , so
ad h = 0 and h = 0, because Z(L) = 0 since L is semisimple.
(iv) For any , [L

, L

] ,= 0.
Proof. It is enough to take into account that 0 ,=
_
L

, L

_
=
_
[H, L

], L

_
=

_
H, [L

, L

]
_
.
5. ROOT SPACE DECOMPOSITION 35
(v) For any H

, let t

H such that (t

, . ) = H

. Then for any ,


x

and y

,
[x

, y

] = (x

, y

)t

.
Proof. For any h H,
(h, [x

, y

]) = ([h, x

], y

)
= (h)(x

, y

) = (t

, h)(x

, y

) =
_
h, (x

, y

)t

_
and the result follows by the nondegeneracy of the restriction of to H = L
0
.
(vi) For any , (t

) ,= 0.
Proof. Take x

and y

such that (x

, y

) = 1. By the previous item


[x

, y

] = t

. In case (t

) = 0, then [t

, x

] = 0 = [t

, y

], so S = kx

+kt

+ky

is a solvable subalgebra of L. By Lies Theorem kt

= [S, S] acts nilpotently on


L under the adjoint representation. Hence t

is both semisimple (H is toral) and


nilpotent, hence t

= 0, a contradiction since ,= 0.
(vii) For any , dim
k
L

= 1 and k = .
Proof. With x

, y

and t

as above, S = kx

+kt

+ky

is isomorphic to sl
2
(k),
under an isomorphism that takes h to
2
(t

)
t

, x to x

, and y to
2
(t

)
y

.
Now, V = H
_

k
L

_
is a module for S under the adjoint representation,
and hence it is a module for sl
2
(k) through the isomorphism above. Besides,
V
0
= v V : [t

, v] = 0 coincides with H. The eigenvalues taken by h =


2
(t

)
t

are (h) =
2(t

)
(t

)
= 2 and, thus,
1
2
Z, since all these eigenvalues are
integers. On the other hand, ker is a trivial submodule of V , and S is another
submodule. Hence ker S is a submodule of V which exhausts the eigenspace of
ad h with eigenvalue 0. Hence by Weyls Theorem, V is the direct sum of ker S
and a direct sum of irreducible submodules for S in which 0 is not an eigenvalue
for the action of h. We conclude that the only even eigenvalues of the action of h
are 0, 2 and 2, and this shows that 2 , . That is, the double of a root is never
a root. But then
1
2
cannot be a root neither, since otherwise = 2
1
2
would not
be a root. As a consequence, 1 is not an eigenvalue of the action of h on V , and
hence V = ker S. In particular, L

= kx

, L

= ky

and k = .
(viii) For any , let h

=
2
(t

)
t

, which is the unique element h in [L

, L

] = kt

such that (h) = 2, and let x

and y

such that [x

, y

] = h

. Then,
for any , (h

) Z.
Proof. Consider the subalgebra S

= kx

+ kh

+ ky

, which is isomorphic to
sl
2
(k). From the representation theory of sl
2
(k), we know that the set of eigenval-
ues of the adjoint action of h

on L are integers. In particular, (h

) Z.
36 CHAPTER 2. LIE ALGEBRAS
More precisely, consider the S

-module V =
mZ
L
+m
. The eigenvalues of the
adjoint action of h

on V are (h

) + 2m : m Z such that L
+m
,= 0, which
form a chain of integers:
(h

) + 2q, (h

) + 2(q 1), . . . , (h

) 2r,
with r, q Z
0
and (h

) + 2q =
_
(h

) 2r
_
. Therefore, (h

) = r q Z.
The chain ( +q, . . . , r) is called the -string through . It is contained in
0.
5.2 Remark. Since the restriction of to H is nondegenerate, it induces a nonde-
generate symmetric bilinear form (. [ .) : H

k, given by ([) = (t

, t

)
(where, as before, t

is determined by = (t

, . ) for any H

). Then for any


, , (t

) = (t

, t

) = ([). Hence
(h

) =
2([)
([)
.
(ix) For any , consider the linear map

: H

, 2
([)
([)
. (This
is the reection through , since

() = and if is orthogonal to , that is,


([) = 0, then

() = . Hence
2

= 1.)
Then

() . In particular, the group J generated by

: is a nite
subgroup of GL(H

), which is called the Weyl group.


Proof. For any , ,

() = (r q) (r and q as before), which is in the


-string through , and hence belongs to . (Actually,

changes the order in


the -string, in particular

( +q) = r.)
Now J embeds in the symmetric group of , and hence it is nite.
(x) Let
1
, . . . ,
n
be a basis of H

contained in . Then Q
1
+ +Q
n
.
Proof. For any , =
1

1
+ +
n

n
with
1
, . . . ,
n
k. But for
i = 1, . . . , n,
2([
i
)
(
i
[
i
)
=
n

j=1

j
2(
j
[
i
)
(
i
[
i
)
and this gives a system of linear equations on the
j
s with a regular integral
matrix. Solving by Crammers rule, one gets that the
j
s belong to Q.
(xi) For any , , ([) Q. Moreover, the restriction (. [ .) : Q Q Q is
positive denite.
Proof. Since L = H
_

_
and dim
k
L

= 1 for any , given any ,


([) = (t

, t

) = trace
_
(ad t

)
2
_
=

(t

)
2
=
([)
2
4

(h

)
2
,
6. CLASSIFICATION OF ROOT SYSTEMS 37
and, therefore,
([) =
4

(h

)
2
Q
>0
.
Now, for any , ,
2([)
([)
Z, so ([) =
([)
2
2([)
([)
Q. And for any
Q, =
1

1
+ +
n

n
for some
j
s in Q, so
([) =

(t

)
2
=

1
(t

1
) + +
n
(t

n
)
_
2
0.
Besides ([) = 0 if and only if (t

) = 0 for any , if and only if t

= 0 since
spans H

, if and only if = 0.
Therefore, if the dimension of H is n (this dimension is called the rank of L, although
we do not know yet that it does not depend on the Cartan subalgebra chosen), then
E
Q
= Q is an n-dimensional vector space over Q endowed with a positive denite
symmetric bilinear form ( [ ).
Then E = R
Q
E
Q
is an euclidean n-dimensional vector space which contains a
subset which satises:
(R1) is a nite subset that spans E, and 0 , .
(R2) For any , too and R = .
(R3) For any , the reection on the hyperplane (R)

leaves invariant (i.e., for


any , ,

() ).
(R4) For any , , [) = 2
([)
([)
Z.
A subset of an euclidean space, satisfying these properties (R1)(R4), is called
a root system, and the subgroup J of the orthogonal group O(E) generated by the
reections

, , is called the Weyl group of the root system. The dimension of


the euclidean space is called the rank of the root system. Note that J is naturally
embedded in the symmetric group of , and hence it is a nite group.
6. Classication of root systems
Our purpose here is to classify the root systems. Hence we will work in the abstract
setting considered at the end of the last section. The arguments in this section follow
the ideas in the article by R.W. Carter: Lie Algebras and Root Systems, in Lectures
on Lie Groups and Lie Algebras (R.W. Carter, G. Segal and I. Macdonal), London
Mathematical Society, Student Texts 22, Cambridge University Press, 1995.
Let be a root system in a euclidean space E. Take E such that ([) ,= 0 for
any . This is always possible since is nite. (Here ( [ ) denotes the inner product
on E.) Let
+
= : ([) > 0 be the set of positive roots, so =
+

(disjoint union), where

=
+
(the set of negative roots).
A positive root is said to be simple if it is not the sum of two positive roots. Let
=
+
: is simple, is called a system of simple roots of (E, ).
38 CHAPTER 2. LIE ALGEBRAS
6.1 Proposition. Let be a root system on a euclidean vector space E and let =

1
, . . . ,
n
be a system of simple roots in (E, ). Then:
(i) For any ,= in , ([) 0.
(ii) is a basis of E.
(iii)
+
Z
0

1
+ +Z
0

n
.
(iv) For any ,

+

_
=
+
.
(v) If
t
E is a vector such that (
t
[) ,= 0 for any and
t
is the associated
system of simple roots, then there is an element J such that () =
t
.
Proof. For any , , consider the integer
N

= [)[) =
4([)
2
([)([)
Z
0
.
The Cauchy-Schwarz inequality shows that 0 N

4 and that N

= 4 if and only
if = , since R = by (R2).
Assume that ,
+
with ,= and ([) 0. Then 0 N

= [)[)
3, so either ([) = 0 or [) = 1 or [) = 1. If, for instance, [) = 1, then

() = [) = . If
+
, then = + ( ) is not simple,
while if

, then = + ( ) is not simple. This proves item (i).


Now, for any
+
, either or = +, with ,
+
. But in the latter
case, ([) = ([) + ([), with 0 < ([), ([), ([), so that both ([) and ([)
are strictly lower than ([). Now, we proceed in the same way with and . They are
either simple or a sum of smaller positive roots. Eventually we end up showing that
is a sum of simple roots, which gives (iii).
In particular, this shows that spans E. Assume that were not a basis, then
there would exist disjoint nonempty subsets I, J 1, . . . , n and positive scalars
i
such that

iI

i

i
=

jJ

j

j
. Let =

iI

i

i
=

jJ

j

j
. Then 0 ([) =

iI
jJ

j
(
i
[
j
) 0 (because of (i)). Thus = 0, but this would imply that 0 <

iI

i
([
i
) = ([) = 0, a contradiction that proves (ii).
In order to prove (iv), we may assume that =
1
. Let ,=
+
, then (iii)
shows that =

n
i=1
m
i

i
, with m
i
Z
0
for any i. Since ,= , there is a j 2 such
that m
j
> 0. Then

() = [) = (m
1
[))
1
+m
2

2
+ +m
n

n
, and
one of the coecients, m
j
, is > 0. Hence ,=

() ,

, so that

()
+
.
Finally, let us prove (v). We know that =
+

=
t
+

t

(with obvious
notation). Let =
1
2

+ (which is called the Weyl vector), and let J such


that ((
t
)[) is maximal. Then, for any :
((
t
)[) (

(
t
)[)
=
_
(
t
)[

()
_
(since
2

= 1 and

O(E))
=
_
(
t
)[
_
(because of (iv))
=
_
(
t
)[
_

_
(
t
)[
_
=
_
(
t
)[
_

t
[
1
()
_
,
6. CLASSIFICATION OF ROOT SYSTEMS 39
so
_

t
[
1
()
_
0. This shows that
1
()
t
+
, so
1
(

) =
t

and
1
()
then coincides with the set of simple roots in
t
+
, which is
t
.
Under the previous conditions, with =
1
, . . . ,
n
, consider
The square matrix C =
_

i
[
j
)
_
1i,jn
, which is called the Cartan matrix of the
root system.
Note that for any ,= in with ([) 0,
[) =
2([)
([)
=

([)
([)

4([)([)
([)([)
=
_
([)
_
([)
_
N

,
so we get a factorization of the Cartan matrix as C = D
1

CD
2
, where D
1
(re-
spectively D
2
) is the diagonal matrix with the elements
_
(
1
[
1
), . . . ,
_
(
n
[
n
)
(resp.
1

(
1
[
1
)
, . . . ,
1

(
n
[
n
)
) on the diagonal, and

C =
_
_
_
_
_
2
_
N

2

_
N

_
N

1
2
_
N

n
.
.
.
.
.
.
.
.
.
.
.
.

_
N

1

_
N

2
2
_
_
_
_
_
This matrix

C is symmetric and receives the name of Coxeter matrix of the root
system. It is nothing else but the coordinate matrix of the inner product ( [ ) in
the basis
1
, . . . ,
n
with
i
=

2
i

(
i
[
i
)
. Note that det C = det

C.
6.2 Exercise. What are the possible Cartan and Coxeter matrices for n = 2?
Here = , , and you may assume that ([) ([).
The Dynkin diagram of , which is the graph which consists of a node for each
simple root . The nodes associated to ,= are connected by N

(=
0, 1, 2 or 3) arcs. Moreover, if N

= 2 or 3, then and have dierent length


and an arrow is put pointing from the long to the short root. For instance,
C =
_
_
_
_
2 1 0 0
1 2 2 0
0 1 2 1
0 0 1 2
_
_
_
_

>

1

2

3

4
The Coxeter graph is the graph obtained by omitting the arrows in the Dynkin
diagram.
In our previous example it is .
Because of item (v) in Proposition 6.1, these objects depend only on and not on
, up to the same permutation of rows and columns in C and up to the numbering of
the vertices in the graphs.
The root system is said to be reducible if =
1

2
, with , =
i
(i = 1, 2) and
_

1
[
2
_
= 0. Otherwise, it is called irreducible.
40 CHAPTER 2. LIE ALGEBRAS
6.3 Theorem.
(a) A root system is irreducible if and only if its Dynkin diagram (or Coxeter graph)
is connected.
(b) Let L be a semisimple Lie algebra over an algebraically closed eld k of character-
istic 0. Let H be a Cartan subalgebra of L and let the associated root system.
Then is irreducible if and only if L is simple.
Proof. For (a), if is reducible with =
1

2
and is a system of simple roots,
then it is clear that =
_

1
_

2
_
and the nodes associated to the elements
in
1
are not connected with those associated to
2
. Hence the Dynkin diagram
is not connected.
Conversely, if =
1

2
(disjoint union) with , =
1
,
2
and
_

1
[
2
_
= 0,
let E
1
= R
1
and E
2
= R
2
, so that E is the orthogonal sum E = E
1
E
2
. Then

i
= E
i
is a root system in E
i
with system of simple roots
i
(i = 1, 2). It has to
be checked that =
1

2
. For any
1
,

[
E
2
is the identity. Hence, item (v)
in Proposition 6.1 shows that there exists an element J
1
such that (
1
) =
1
,
where J
1
is the subgroup of the Weyl group J generated by

:
1
. Order
the roots so that
1
=
1
, . . . ,
r
and
2
=
r+1
, . . . ,
n
. Then any can
be written as = m
1

1
+ + m
n

n
, with m
i
Z for any i, and either m
i
0 or
m
i
0 for any i. But () and, since (
1
) =
1
, () = m
t
1

1
+ +m
t
r

r
+
m
r+1

r+1
+ +m
n

n
, where (m
t
1
, . . . , m
t
r
) is a permutation of (m
1
, . . . , m
r
). Since
the coecients of () are also either all nonnegative or all nonpositive, we conclude
that either m
1
= = m
r
= 0 or m
r+1
= = m
n
= 0, that is, either
1
or

2
.
For (b), assume rst that is reducible, so =
1

2
with
_

1
[
2
) = 0 and

1
,= , =
2
. Then the subspace

+
1
_
L

+L

+[L

, L

]
_
is a proper ideal of L,
since
[L

, L

]
_
= 0 if + , 0, in particular if
1
and
2
L
+
otherwise.
Hence L is not simple in this case.
Conversely, if L is not simple, then L = L
1
L
2
with L
1
and L
2
proper ideals
of L. Hence (L
1
, L
2
) = 0 by the denition of the Killing form, and H = C
L
(H) =
C
L
1
(H) C
L
2
(H) = (H L
1
) (H L
2
), because for any h H and x
i
L
i
(i = 1, 2),
[h, x
1
+ x
2
] = [h, x
1
] + [h, x
2
], with the rst summand in L
1
and the second one in L
2
.
Now, for any , (H L
i
) ,= 0 for some i = 1, 2. Then L

= [H L
i
, L

] L
i
,
so the element t

such that (t

, . ) = satises that t

[L

, L

] L
i
. As a
consequence, =
1

2
(disjoint union), with
i
= : (H L
i
) ,= 0, and
([) = (t

, t

) = 0 for any
1
and
2
. Thus, is reducible.
6.4 Remark. The proof above shows that the decomposition of the semisimple Lie
algebra L into a direct sum of simple ideals gives the decomposition of its root system
into an orthogonal sum of irreducible root systems.
6. CLASSIFICATION OF ROOT SYSTEMS 41
Dynkin diagrams are classied as follows:
6.5 Theorem. The Dynkin diagrams of the irreducible root systems are precisely the
following (where n indicates the number of nodes):
(A
n
) , n 1.
(B
n
) > , n 2.
(C
n
) < , n 3.
(D
n
)

...................................
...................................
, n 4.
(E
6
)

.
(E
7
)

.
(E
8
)

.
(F
4
) > .
(G
2
) < .
Most of the remainder of this section will be devoted to the proof of this Theorem.
First, it will be shown that the irreducible Coxeter graphs are the ones correspond-
ing to (A
n
), (B
n
= C
n
), (D
n
), (E
6,7,8
), (F
4
) and (G
2
). Any Coxeter graph determines
the symmetric matrix
_
a
ij
_
with a
ii
= 2 and a
ij
=
_
N
ij
for i ,= j, where N
ij
is the
number of lines joining the vertices i and j. We know that this matrix is the coordinate
matrix of a positive denite quadratic form on a real vector space..
Any graph formed by nodes and lines connecting these nodes will be called a Coxeter
type graph. For each such graph we will take the symmetric matrix
_
a
ij
_
dened as
before and the associated quadratic form on R
n
, which may fail to be positive denite,
such that q(e
i
, e
j
) = a
ij
, where e
1
, . . . , e
n
denotes the canonical basis of R
n
.
6.6 Lemma. Let V be a real vector space with a basis v
1
, . . . , v
n
and a positive denite
quadratic form q : V R such that q(v
i
, v
j
) 0 for any i ,= j, and q(v
1
, v
2
) < 0. (Here
q(v, w) =
1
2
_
q(v +w) q(v) q(w)
_
gives the associated symmetric bilinear form.)
Let q : V R be a quadratic form such that its associated bilinear form satises
q(v
i
, v
j
) = q(v
i
, v
j
) for any (i, j) ,= (1, 2), i j, and 0 q(v
1
, v
2
) > q(v
1
, v
2
). Then
q is positive denite too and det q > det q (where det denotes the determinant of the
quadratic form in any xed basis).
42 CHAPTER 2. LIE ALGEBRAS
Proof. We apply a Gram-Schmidt process to obtain a new suitable basis of Rv
2
+ +Rv
n
as follows:
w
n
= v
n
w
n1
= v
n1
+
n1,n
w
n
.
.
.
.
.
.
w
2
= v
2
+
2,3
w
3
+ +
2,n
w
n
where the s are determined by imposing that q(w
i
, w
j
) = 0 for any i > j 2.
Note that q(w
i
, w
j
) = q(w
i
, w
j
) for any i > j 2, and that this process gives that

i,j
0 for any 2 i < j n and q(v
i
, w
j
) 0 for any 1 i < j n. Now
take w
1
= v
1
+
1,3
w
3
+ +
1,n
w
n
, and determine the coecients by imposing that
q(w
1
, w
i
) = 0 for any i 3. Then q(w
1
, w
2
) = q(v
1
, w
2
) q(v
1
, v
2
) < 0, q(w
1
, w
2
) =
q(v
1
, w
2
) q(v
1
, v
2
) 0, and 0 q(w
1
, w
2
) > q(w
1
, w
2
).
In the basis w
1
, . . . , w
n
, the coordinate matrices of q and q present the form
_
_
_
_
_
_
_

1
0 0

2
0 0
0 0
3
0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0
n
_
_
_
_
_
_
_
and
_
_
_
_
_
_
_

1

0 0


2
0 0
0 0
3
0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0
n
_
_
_
_
_
_
_
with 0

> . Since q is positive denite,
i
0 for any i and
1

2

2
> 0. Hence

2
>
1

2

2
> 0 and the result follows.
Note that by suppressing a line connecting nodes i and j in a Coxeter type graph,
with associated quadratic form q, the quadratic form q associated to the new graph
obtained diers only in that 0 > q(e
i
, e
j
) > q(e
i
, e
j
). Hence the previous Lemma imme-
diately implies the following result:
6.7 Corollary. If some lines connecting two nodes on a Coxeter type graph with positive
denite quadratic form are suppressed, then the new graph obtained is a new Coxeter
type graph with positive denite quadratic form.
Let us compute now the matrices associated to some Coxeter type graphs, as well
as their determinants.
A
n
(n 1) . Here the associated matrix is
M
A
n
=
_
_
_
_
_
_
_
2 1 0 0 0
1 2 1 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 2 1
0 0 0 1 2
_
_
_
_
_
_
_
whose determinant can be computed recursively by expanding along the rst row:
det M
A
n
= 2 det M
A
n1
det M
A
n2
, obtaining that det M
A
n
= n + 1 for any
n 1.
6. CLASSIFICATION OF ROOT SYSTEMS 43
B
n
= C
n
(n 2) . Here
M
B
n
=
_
_
_
_
_
_
_
2 1 0 0 0
1 2 1 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 2

2
0 0 0

2 2
_
_
_
_
_
_
_
and, by expanding along the last row, det M
B
n
= 2 det M
A
n1
2 det M
A
n2
, so
that det M
B
n
= 2.
D
n
(n 4)

...................................
...................................
. The associated matrix is
M
D
n
=
_
_
_
_
_
_
_
_
_
2 1 0 0 0 0
1 2 1 0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 2 1 1
0 0 0 1 2 0
0 0 0 1 0 2
_
_
_
_
_
_
_
_
_
so that det M
D
4
= 4 = det M
D
5
and by expanding along the rst row, det M
D
n
=
2 det M
D
n1
det M
D
n2
. Hence det M
D
n
= 4 for any n 4.
E
6

. Here det M
E
6
= 2 det M
D
5
det M
A
4
= 8 5 = 3 (expansion
along the row corresponding to the leftmost node).
E
7

. Here det M
E
7
= 2 det M
E
6
det M
D
5
= 6 4 = 2.
E
8

. Here det M
E
8
= 2 det M
E
7
det M
E
6
= 4 3 = 1.
F
4
. Here det M
F
4
= det M
B
3
det M
A
2
= 4 3 = 1.
G
2
. Here det M
G
2
= det

3 2

= 1.

A
n

...........................................................................................................................................................
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
(n + 1 nodes, n 2). Then
M

A
n
=
_
_
_
_
_
_
_
2 1 0 0 1
1 2 1 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 2 1
1 0 0 1 2
_
_
_
_
_
_
_
so the sum of the rows is the zero row. Hence det M

A
n
= 0.
44 CHAPTER 2. LIE ALGEBRAS

B
n


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
(n + 1 nodes, n 3) Let us number the nodes so that
the leftmost nodes are nodes 1 and 2, and node 3 is connected to both of them.
Then we may expand det M

B
n
= 2 det M
B
n
det M
A
1
det M
B
n1
= 44 = 0. (For
n = 3, det M

B
3
= 2 det M
B
2
det M
2
A
1
= 4 4 = 0.)

C
n
(n + 1 nodes, n 2). Then det M

C
n
= 2 det M
B
n

2 det M
B
n1
= 0. (For n = 2, det M

C
3
= 2 det M
B
2
2 det M
A
1
= 0.)

D
n

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
...................................
...................................
(n + 1 nodes, n 4). Here
det M

D
n
=
_

_
2 det M
D
4
det M
3
A
1
= 8 8 = 0, if n = 4,
2 det M
D
5
det M
A
1
det M
A
3
= 8 8 = 0, if n = 5,
2 det M
D
n
det M
A
1
det M
D
n2
= 8 8 = 0, otherwise.

E
6

. Here det M

E
6
= 2 det M
E
6
det M
A
5
= 6 6 = 0.

E
7

. Here det M

E
7
= 2 det M
E
7
det M
D
6
= 4 4 = 0.

E
8

. Here det M

E
8
= 2 det M
E
8
det M
E
7
= 22 = 0.

F
4
. Here det M

F
4
= 2 det M
F
4
det M
B
3
= 2 2 = 0.

G
2
. Here det M

G
2
= 2 det M
G
2
det M
A
1
= 2 2 = 0.
Now, if ( is a connected Coxeter graph and we suppress some of its nodes (and
the lines connecting them), a new Coxeter type graph with positive denite associated
quadratic form is obtained. The same happens, because of the previous Lemma 6.6, if
only some lines are suppressed. The new graphs thus obtained will be called subgraphs.
If ( contains a cycle, then it has a subgraph (isomorphic to)

A
n
, and this is a
contradiction since det M

A
n
= 0, so its quadratic form is not positive denite.
If ( contains a node which is connected to four dierent nodes, then it contains a
subgraph of type

D
4
, a contradiction.
If ( contains a couple of nodes (called triple nodes) connected to three other
nodes, then it contains a subgraph of type

D
n
, a contradiction again.
If ( contains two couples of nodes connected by at least two lines, then it contains
a subgraph of type

C
n
, which is impossible.
If ( contains a triple node and two nodes connected by at least two lines, then it
contains a subgraph of type

B
n
.
If ( contains a triple link, then either it is isomorphic to G
2
or contains a subgraph
of type

G
2
, this latter possibility gives a contradiction.
6. CLASSIFICATION OF ROOT SYSTEMS 45
If ( contains a double link and this double link is not at a extreme of the graph,
then either ( is isomorphic to F
4
or contains a subgraph of type

F
4
, which is
impossible.
If ( contains a double link at one extreme, then the Coxeter graph is B
n
= C
n
.
Finally, if ( contains only simple links, then it is either A
n
or it contains a unique
triple node. Hence it has the form:

.......................................
.......................................
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
p
q
r
with 1 p q r. But then either p = 1 or it contains a subgraph of type

E
6
,
a contradiction. If p = 1, then either q 2 or it contains a subgraph of type

E
7
,
another contradiction. Finally, with p = 1 and q = 2, either r 4 or it contains
a subgraph of type

E
8
, a contradiction again. Therefore, either p = q = 1 and we
get D
n
, or p = 1, q = 2 and r = 2, 3 or 4, thus obtaining E
6
, E
7
and E
8
.
Therefore, the only possible connected Coxeter graphs are those in Theorem 6.5.
What remains to be proven is to show that for each Dynkin diagram (A)(G), there
exists indeed an irreducible root system with this Dynkin diagram.
For types (A)(D) we will prove a stronger statement, since we will show that there
are simple Lie algebras, over an algebraically closed eld of characteristic 0, such that
their Dynkin diagrams of their root systems relative to a Cartan subalgebra and a set
of simple roots are precisely the Dynkin diagrams of types (A)(D).
(A
n
) Let L = sl
n+1
(k) be the Lie algebra of n + 1 trace zero square matrices. Let H
be the subspace of diagonal matrices in L, which is an abelian subalgebra, and let

i
: H k the linear form such that
i
_
diag(
1
, . . . ,
n+1
)
_
=
i
, i = 1, . . . , n +1.
Then
1
+ +
n+1
= 0. Moreover,
(6.4) L = H
_

1i,=jn+1
kE
ij
_
where E
ij
is the matrix with a 1 in the (i, j) entry, and 0s elsewhere. Since
[h, E
ij
] = (
i

j
)(h)E
ij
for any i ,= j, it follows that H is toral and a Cartan
subalgebra of L. It also follows easily that L is simple (using that any ideal is
invariant under the adjoint action of H) and that the set of roots of L relative to
H is
=
i

j
: 1 i ,= j n + 1.
The restriction of the Killing form to H is determined by
(h, h) =

1i,=jn+1
(
i

j
)
2
= 2

1i<jn+1
(
2
i
+
2
j
2
i

j
)
= 2(n + 1)

1in+1

2
i
= 2(n + 1) trace(h
2
)
(6.5)
46 CHAPTER 2. LIE ALGEBRAS
for any h = diag(
1
, . . . ,
n+1
) H, since 0 = (
1
+ +
n+1
)
2
=

1in+1

2
i
+
2

1i<jn+1

j
. Therefore, for any i ,= j, t

j
=
1
2(n+1)
(E
ii
E
jj
) and
_

i

j
[
h

k
_
= (
i

j
)
_
t

k
_
=
1
2(n + 1)
_

ih

jh

ik
+
jk
_
,
where
ij
is the Kronecker symbol. Thus we get the euclidean vector space E =
R
Q
Q and can take the vector = n
1
+ (n 2)
2
+ + (n)
n+1
= n(
1

n+1
) +(n2)(
2

n
) + E, which satises
_
[
i

j
_
> 0 if and only if i < j.
For this we obtain the set of positive roots
+
=
i

j
: 1 i < j n+1 and
the system of simple roots =
1

2
,
2

3
, . . . ,
n

n+1
. The corresponding
Dynkin diagram is (A
n
).
(B
n
) Consider the following orthogonal Lie algebra:
L = so
2n+1
(k)
=
_
X gl
2n+1
(k) : X
t
_
_
1 0 0
0 0 I
n
0 I
n
0
_
_
+
_
_
1 0 0
0 0 I
n
0 I
n
0
_
_
X = 0
_
=
_
_
_
0 b
t
a
t
a A B
b C A
t
_
_
: a, b Mat
n1
(k) ,
A, B, C Mat
n
(k), B
t
= B, C
t
= C
_
(6.6)
where I
n
denotes the identity n n matrix. Number the rows and columns of
these matrices as 0, 1, . . . , n,

1, . . . , n and consider the subalgebra H consisting


again of the diagonal matrices on L: H = diag(0,
1
, . . . ,
n
,
1
, . . . ,
n
) :

i
k, i = 1, . . . , n. Again we get the linear forms
i
: H k, such that

i
_
diag(0,
1
, . . . ,
n
,
1
, . . . ,
n
)
_
=
i
, i = 1, . . . , n. Then,
L = H
_

n
i=1
k(E
0i
E
i0
)
_

n
i=1
k(E
0

i
E
i0
)
_

1i,=jn
k(E
ij
E
j

i
)
_

1i<jn
k(E
i

j
E
j

i
)
_

1i<jn
k(E
ij
E
ji
)
_
= H
_

n
i=1
L

i
_

n
i=1
L

i
_

1i,=jn
L

j
_

1i<jn
L

i
+
j
_

1i<jn
L
(
i
+
j
)
_
,
where L

= x L : [h, x] = (h)x h H. It follows easily from here that H


is a Cartan subalgebra of L, that L is simple and that the set of roots is
=
i
,
i

j
: 1 i < j n.
Also, for any h H as above,
(h, h) = 2(
1
+ +
n
)
2
+

1i,=jn
(
i

j
)
2
+ 2

1i<jn
(
i
+
j
)
2
= 2(
2
1
+ +
2
n
) + 2

1i<jn
_
(
i

j
)
2
+ (
i
+
j
)
2
_
=
_
2 + 4(n 1)
_
(
2
1
+ +
2
n
_
= 2(2n 1)(
2
1
+ +
2
n
_
= (2n 1) trace(h
2
).
(6.7)
6. CLASSIFICATION OF ROOT SYSTEMS 47
Therefore, t

i
=
1
2(2n1)
(E
ii
E
i

i
) and
_

i
[
j
_
=
i
(t

j
) =
1
2(2n1)

ij
. We can take
the element = n
1
+ (n 1)
2
+ +
n
, whose inner product with any root is
never 0 and gives
+
=
i
,
i

j
: 1 i < j n and system of simple roots
=
1

2
,
2

3
, . . . ,
n1

n
,
n
. The associated Dynkin diagram is (B
n
).
6.8 Exercise. Prove that so
3
(k) is isomorphic to sl
2
(k). (k being algebraically
closed.)
(C
n
) Consider now the symplectic Lie algebra:
L = sp
2n
(k)
=
_
X gl
2n
(k) : X
t
_
0 I
n
I
n
0
_
+
_
0 I
n
I
n
0
_
X = 0
_
=
_
_
A B
C A
t
_
: A, B, C Mat
n
(k), B
t
= B, C
t
= C
_
(6.8)
where n 2 (for n = 1 we get sp
2
(k) = sl
2
(k)). Number the rows and columns
as 1, . . . , n,

1, . . . , n. As before, the subspace H of diagonal matrices is a Cartan


subalgebra with set of roots
= 2
i
,
i

j
: 1 i < j n
where
i
(h) =
i
for any i, with h = diag(
1
, . . . ,
n
,
1
, . . . ,
n
). Here
(h, h) = 2
n

i=1
4
2
i
+

1i,=jn
(
i

j
)
2
+ 2

1i<jn
(
i
+
j
)
2
= 8(
2
1
+ +
2
n
) + 2

1i<jn
_
(
i

j
)
2
+ (
i
+
j
)
2
_
=
_
8 + 4(n 1)
_
(
2
1
+ +
2
n
_
= 4(n + 1)(
2
1
+ +
2
n
_
= 2(n + 1) trace(h
2
),
(6.9)
t

i
=
1
4n
(E
ii
E
i

i
),
_

i
[
j
_
=
1
4n

ij
. Besides, we can take = n
1
+(n1)
2
+ +
n
,
which gives
+
= 2
i
,
i

j
: 1 i < j n and =
1

2
, . . . ,
n1

n
, 2
n
,
whose associated Dynkin diagram is (C
n
).
(D
n
) Finally, consider the orthogonal Lie algebra:
L = so
2n
(k)
=
_
X gl
2n
(k) : X
t
_
0 I
n
I
n
0
_
+
_
0 I
n
I
n
0
_
X = 0
_
=
_
_
A B
C A
t
_
: A, B, C Mat
n
(k), B
t
= B, C
t
= C
_
(6.10)
with n 4. Number the rows and columns as 1, . . . , n,

1, . . . , n. As it is always
the case, the subspace H of diagonal matrices is a Cartan subalgebra with set of
roots
=
i

j
: 1 i < j n
48 CHAPTER 2. LIE ALGEBRAS
where
i
(h) =
i
for any i, with h = diag(
1
, . . . ,
n
,
1
, . . . ,
n
). Here
(h, h) =

1i,=jn
(
i

j
)
2
+ 2

1i<jn
(
i
+
j
)
2
= 4(n 1)(
2
1
+ +
2
n
_
= 2(n 1) trace(h
2
),
(6.11)
t

i
=
1
4(n1)
(E
ii
E
i

i
),
_

i
[
j
_
=
1
4(n1)

ij
. Also, we can take = n
1
+(n1)
2
+
+
n
, which gives
+
=
i

j
: 1 i < j n and =
1

2
, . . . ,
n1

n
,
n1
+
n
, whose associated Dynkin diagram is (D
n
).
The remaining Dynkin diagrams correspond to the so called exceptional simple Lie
algebras, whose description is more involved. Hence, we will proceed in a dierent way:
(E
8
) Let E = R
8
with the canonical inner product ( . [ . ) and canonical orthonormal
basis e
1
, . . . , e
8
. Take e
0
=
1
2
(e
1
+ +e
8
) and Q = m
0
e
0
+

8
i=1
m
i
e
i
: m
i

Zi,

8
i=1
m
i
2Z, which is an additive subgroup of R
8
. Consider the set
= v Q : (v[v) = 2.
For v =

8
i=0
m
i
e
i
Q, (v[v) =

8
i=1
(m
i
+
1
2
m
0
)
2
, so if m
0
is even, then m
i
+
1
2
m
0
Z for any i and the only possibilities for v to belong to are v = e
i
e
j
,
1 i < j 8. On the other hand, if m
0
is odd, then m
i
+
1
2
m
0

1
2
+ Z for
any i and the only possibilities are v =
1
2
(e
1
e
2
e
8
). Moreover, since

8
i=1
m
i
must be even, the number of + signs in the previous expression must be
even. In particular, satises the restrictions (R1) and (R2) of the denition of
root system.
Besides, for any v , (v[v) = 2 and for any v, w , v[w) =
2(v[w)
(w[w)
= (v[w)
is easily shown to be in Z, hence (R4) is satised too. The proof that (R3) is
satised is a straightforward computation. Thus, is a root system.
Take now =

8
i=1
2
i
e
i
, then ([) ,= 0 for any . The associated set of
positive roots is
+
=
1
2
(e
1
e
2
e
7
+ e
8
), e
i
+ e
j
: i < j, and the set
of simple roots is
=
_

1
=
1
2
(e
1
e
2
e
3
e
4
e
5
e
6
e
7
+e
8
),
2
= e
1
+e
2
,
3
= e
2
e
1
,

4
= e
3
e
2
,
5
= e
4
e
3
,
6
= e
5
e
4
,
7
= e
6
e
5
,
8
= e
7
e
6
_
with associated Dynkin diagram

1

3

4

5

6

7

8

2
of type (E
8
).
(E
7
) and (E
6
) These are obtained as the root subsystems of (E
8
) generated by

8
and
7
,
8
above.
6. CLASSIFICATION OF ROOT SYSTEMS 49
(F
4
) Here consider the euclidean vector space E = R
4
, e
0
=
1
2
(e
1
+ e
2
+ e
3
+ e
4
),
Q = m
0
e
0
+

4
i=1
m
i
e
i
: m
i
Z, and
= v Q : (v[v) = 1 or 2 = e
i
, e
i
e
j
(i < j),
1
2
(e
1
e
2
e
3
e
4
).
This is a root system and with = 8e
1
+ 4e
2
+ 2e
3
+ e
4
one obtains
+
=
e
i
, e
i
e
j
(i < j),
1
2
(e
1
e
2
e
3
e
4
) and
= e
2
e
3
, e
3
e
4
, e
4
,
1
2
(e
1
e
2
e
3
e
4
),
with associated Dynkin graph (F
4
).
(G
2
) In the euclidean vector space E = (, , ) R
3
: + + = 0 = R(1, 1, 1)

,
with the restriction of the canonical inner product on R
3
, consider the subset
Q = m
1
e
1
+m
2
e
2
+m
3
e
3
: m
i
Z, m
1
+m
2
+m
3
= 0, and
= v Q : (v[v) = 2 or 6
= (e
i
e
j
) (i < j), (2e
1
e
2
e
3
), (e
1
+ 2e
2
e
3
), (e
1
e
2
+ 2e
3
).
Again, is a root system, and with = 2e
1
e
2
+ 3e
3
,
+
= e
i
e
j
(i >
j), 2e
1
+e
2
+e
3
, e
1
2e
2
+e
3
, e
1
e
2
+ 2e
3
and
= e
2
e
1
, e
1
2e
2
+e
3
,
with associated Dynkin diagram of type (G
2
).
This nishes the classication of the connected Dynkin diagrams. To obtain from
this classication a classication of the root systems, it is enough to check that any root
system is determined by its Dynkin diagram.
6.9 Denition. Let
i
be a root system in the euclidean space E
i
, i = 1, 2, and let
: E
1
E
2
be a linear map. Then is said to be a root system isomorphism between

1
and
2
if (
1
) =
2
and for any ,
1
, ()[()) = [).
6.10 Exercise. Prove that if is a root system isomorphism between the irreducible
root systems
1
and
2
, then is a similarity of multiplier
_
()[()
_
([)
for a xed

1
.
The next result is already known for roots that appear inside the semisimple Lie
algebras over algebraically closed elds of characteristic 0, because of the representation
theory of sl
2
(k).
6.11 Lemma. Let be a root system, , two roots such that ,= , let
r = maxi Z
0
: i and q = maxi Z
0
: +i . Then [) = r q,
r +q 3 and all the elements in the chain r, (r 1), . . . , , . . . , +q belong
to (this is called the -chain of ).
50 CHAPTER 2. LIE ALGEBRAS
Proof. Take = + q , then [) = [) + 2q. Besides, + i , for any
i Z
>0
, (r +q) , and (r +q +i) , for any i Z
>0
.
Then

() = [) , so [) r + q; while

_
(r + q)
_
=
[) + (r + q) , so r + q [) 0, or [) r + q. We conclude that
[) = r +q and this is 3 by the argument in the proof of Proposition 6.1. Besides,
[) = [) 2q = r q.
Thus, [) = 0, 1, 2 or 3. If [) = 0, then the -chain of consists only of
= . If [) = 1, then the -chain consists of and =

() . If
[) = 2, then [) = 1 and the -chain consists of , =

() and
2 =

() . Finally, if [) = 3, then again [) = 1 and the -chain consists


of , =

(), 2 =

( ) , and 3 =

() .
6.12 Theorem. Each Dynkin diagram determines a unique (up to isomorphism) root
system.
Proof. First note that it is enough to assume that the Dynkin diagram is connected.
We will do it.
Let be the set of nodes of the Dynkin diagram and x arbitrarily the length of a
short node. Then the diagram determines the inner product on E = R = R. This
is better seen with an example. Take, for instance the Dynkin diagram (F
4
), so we have
=
1
,
2
,
3
,
4
, with
>

1

2

3

4
Fix, for simplicity, (
3
[
3
) = 2 = (
4
[
4
). Then
1 =
3
[
4
) =
2(
3
[
4
)
(
4
[
4
)
, so (
3
[
4
) = 1,
2 =
2
[
3
) =
2(
2
[
3
)
(
3
[
3
)
, so (
2
[
3
) = 2.
1 =
3
[
2
) =
2(
3
[
2
)
(
2
[
2
)
, so (
2
[
2
) = 4 = (
1
[
1
).
1 =
1
[
2
), so (
1
[
2
) = 1.
Since is a basis of E, the inner product is completely determined up to a nonzero
positive scalar (the arbitrary length we have imposed on the short roots of ). For any
other connected Dynkin diagram, the argument is the same.
Now, with =
1
, . . . ,
n
, any
+
appears as =

n
i=1
m
i

i
with m
i
Z
0
.
Dene the height of as ht() = m
1
+ + m
n
. It is enough to prove that for any
N N, the subset
+
: ht() = N is determined by the Dynkin diagram, and
this is done by induction on N:
For N = 1 this is obvious, since ht() = 1 if and only if .
Assume that the result is valid for 1, . . . , N. Then it is enough to prove that the
roots of height N + 1 are precisely the vectors = + , with ht() = N, and
such that [) < r with r = maxi Z
0
: i
+
. Note that the height of the
roots i
+
, with i 0, is at most N, and hence all these roots are determined by
. Actually, if and satisfy these conditions, then r > [) = r q by the
Lemma, so q 1, and + is in the -chain of , and hence it is a root. Conversely, let
7. CLASSIFICATION OF THE SEMISIMPLE LIE ALGEBRAS 51
=

n
i=1
m
i

i
be a root of height N + 1. Then 0 < ([) =

n
i=1
([
i
), so there is an
i with ([
i
) > 0 and m
i
> 0. From the previous Lemma we know that =
i
,
and ht() = N. Besides, +
i
, so q > 1 in the previous Lemma, and hence
r q = [
i
) < r, as required.
6.13 Remark. Actually, the proof of this Theorem gives an algorithm to obtain a root
system , starting with its Dynkin diagram.
6.14 Exercise. Use this algorithm to obtain the root system associated to the Dynkin
diagram (G
2
).
6.15 Exercise. Let a root system and let =
1
, . . . ,
n
be a system of simple
roots of . Let = m
1

1
+ +m
n

n
be a positive root of maximal height and consider

1
=
i
: m
i
,= 0 and
2
=
1
. Prove that
_

1
[
2
_
= 0.
In particular, if is irreducible this shows that involves all the simple roots ( =

1
).
7. Classication of the semisimple Lie algebras
Throughout this section, the ground eld k will be assumed to be algebraically closed
of characteristic 0.
The aim here is to show that each root system determines, up to isomorphism, a
unique semisimple Lie algebra over k.
Let L = H
_

_
be the root space decomposition of a semisimple Lie algebra
over k, relative to a Cartan subalgebra H. We want to prove that the multiplication in
L is determined by .
For any , there are elements x

, y

such that [x

, y

] = h

,
with (h

) = 2. Besides, L

= kx

, L

= ky

and S

= L

[L

, L

] =
kx

ky

kh

is a subalgebra isomorphic to sl
2
(k). Also, for any , recall
that the -chain of consists of roots r, . . . , , . . . , +q, where [) = r q.
7.1 Lemma. Under the hypotheses above, let , with + , then [L

, L

] =
L
+
. Moreover, for any x L

,
_
[y

, [x

, x]] = q(r + 1)x,


[x

, [y

, x]] = r(q + 1)x.


Proof. This is a straightforward consequence of the representation theory of sl
2
(k), since

q
i=r
L
+i
is a module for S


= sl
2
(k). Hence, there are elements v
i
L
+(qi)
, i =
0, . . . , r+q, such that [y

, v
i
] = v
i+1
, [x

, v
i
] = i(r+q+1i)v
i1
, with v
1
= v
r+q+1
= 0
(see the proof of Theorem 3.2); whence the result.
Let =
1
, . . . ,
n
be a system of simple roots of . For any i = 1, . . . , n, let
x
i
= x

i
, y
i
= y

i
and h
i
= h

i
. For any
+
, the proof of Theorem 6.12 shows
that is a sum of simple roots: =
i
1
+ +
i
r
, with
i
1
+ +
i
j

+
for any
j = 1, . . . , r = ht(). For any
+
we x one such sequence I

= (i
1
, . . . , i
r
) and take
x

= ad x
i
r
ad x
i
2
_
x
i
1
_
and y

= ad y
i
r
ad y
i
2
_
y
i
1
_
. These elements are nonzero by
the previous Lemma, and hence L

= kx

and L

= ky

.
52 CHAPTER 2. LIE ALGEBRAS
7.2 Lemma. For any
+
, let J = J

= (j
1
, . . . , j
r
) be another sequence such that
=
j
1
+ +
j
r
, and let x
J
= ad x
j
r
ad x
j
2
_
x
j
1
) and y
J
= ad y
j
r
ad y
j
2
_
y
j
1
).
Then there are rational numbers q, q
t
Q, determined by , such that x
J
= qx

,
y
J
= q
t
y

.
Proof. Since x
J
L

, the previous Lemma shows that x


J
= q
1
[x
i
r
, [y
i
r
, x
J
]], for some
q
1
Q which depends on . Let s be the largest integer with j
s
= i
r
, then
[y
i
r
, x
J
] = ad x
j
r
ad x
j
s+1
ad y
i
r
ad x
j
s
(x
K
)
(where K = (j
1
, . . . , j
s1
), since [y
i
, x
j
] = 0 for any i ,= j)
= q
2
ad x
j
r
ad x
j
s+1
(x
K
) (by the previous Lemma)
= q
2
q
3
x
I
(by induction on r = ht()),
where q
2
, q
3
Q depend on and I
t
= (i
1
, . . . , i
r1
). Therefore, x
J
= q
1
q
2
q
3
[x
i
r
, x
I
] =
q
1
q
2
q
3
x

, with q
1
, q
2
, q
3
Q determined by . The proof for y
J
is similar.
Hence, we may consider the following basis for L: B = h
1
, . . . , h
n
, x

, y

:
+
,
with the x

s and y

s chosen as above.
7.3 Proposition. The product of any two elements in B is a rational multiple of another
element of B, determined by , with the exception of the products [x

, y

], which are
linear combinations of the h
i
s, with rational coecients determined by .
Proof. First note that [h
i
, h
j
] = 0, [h
i
, x

] = (h
i
)x

= [
i
)x

and [h
i
, y

] =
[
i
)y

, are all determined by .


Consider now ,
+
, and the corresponding xed sequences I

= (i
1
, . . . , i
r
),
I

= (j
1
, . . . , j
s
).
To deal with the product [x

, x

], let us argue by induction on r. If r = 1, [x

, x

] =
0 if + , , while [x

, x

] = qx
+
for some q Z determined by by the
previous Lemma. On the other hand, if r > 1 and I
t

= (i
1
, . . . , i
r1
), then [x

, x

] =
[[x
i
r
, x
I

], x

] = [x
i
r
, [x
I

, x

]] [x
I

, [x
i
r
, x

]] and now the induction hypothesis and the


previous Lemma yield the result. The same arguments apply to products [y

, y

].
Finally, we will argue by induction on r too to deal with the product [x

, y

]. If
r = 1 and =
i
, then [x

, y

] = 0 if 0 ,= , , [x

, y

] = h
i
if = ,
while if = , then y

= q[y
i
, y

] for some q Q determined by , and


[x

, y

] = q[x
i
, [y
i
, y

]] = qq
t
y

, determined by . On the other hand, if r > 1 then, as


before, [x

, y

] = [x
i
r
, [x
I

, y

]][x
I

, [x
i
r
, y

]] and the induction hypothesis applies.


What remains to be done is, on one hand, to show that for each of the irreducible root
systems E
6
, E
7
, E
8
, F
4
, G
2
there is a simple Lie algebra L over k and a Cartan subalgebra
H such that the corresponding root system is of this type. Since we have constructed
explicitly these root systems, the dimension of such an L must be 2[[ + rank(), so
dim
k
L = 78, 133, 248, 52 and 14 respectively. Later on, some explicit constructions of
these algebras will be given.
On the other hand, given a simple Lie algebra L over k and two Cartan subalgebras
H
1
and H
2
, it must be shown that the corresponding root systems
1
and
2
are
isomorphic. The next Theorem solves this question:
7. CLASSIFICATION OF THE SEMISIMPLE LIE ALGEBRAS 53
7.4 Theorem. Let L be one of the Lie algebras sl
n
(k) (n 2), so
n
(k) (n 3), or
sp
2n
(k) (n 1), and let H be any Cartan subalgebra of L. Then there is an element
g of the matrix group GL
n
(k), O
n
(k) or Sp
2n
(k) respectively, such that gHg
1
is the
subspace of diagonal matrices in L. In particular, for any two Cartan subalgebras of L,
there is an automorphism Aut(L) such that (H
1
) = H
2
.
The last assertion is valid too for the simple Lie algebras containing a Cartan sub-
algebra such that the associated root system is exceptional.
Proof. For the rst part, let V be the natural module for L (V = k
n
(column vectors)
for sl
n
(k) or so
n
(k), and V = k
2n
for sp
2n
(k)). Since H is toral and abelian, the elements
of H form a commuting space of diagonalizable endomorphisms of V . Therefore there is
a simultaneous diagonalization: V =
H
V

, where V

= v V : h.v = (h)v h
H.
If L = sl
n
(k), then this means that there is an element g GL
n
(k) such that
gHg
1
diagonal matrices. Now, the map x gxg
1
is an automorphism of L and
hence gHg
1
is a Cartan subalgebra too, in particular it is a maximal toral subalgebra.
Since the set of diagonal matrices in L is a Cartan subalgebra too, we conclude by
maximality that gHg
1
coincides with the space of diagonal matrices in L.
If L = so
n
(k) or L = sp
2n
(k), there is a nondegenerate symmetric or skew symmetric
bilinear form b : V V k such that (by its own denition) L = x gl(V ) :
b(x.v, w) + b(v, x.w) = 0 v, w V . But then, for any h H, , H

and v V

,
w V

, 0 = b(h.v, w) + b(v, h.w) =


_
(h) + (h)
_
b(v, w). Hence we conclude that
b(V

, V

) = 0 unless = . This implies easily the existence of a basis of V consisting


of common eigenvectors for H in which the coordinate matrix of b is either
_
_
1 0 0
0 0 I
n
0 I
n
0
_
_
,
_
0 I
n
I
n
0
_
or
_
0 I
n
I
n
0
_
according to L being so
2n+1
(k), sp
2n
(k) or so
2n
(k). Therefore, there is a g SO
2n+1
(k),
Sp
2n
(k) or SO
2n
(k) (respectively) such that gHg
1
is contained in the space of diagonal
matrices of L. As before, we conclude that gHg
1
lls this space.
Finally, let L be a simple Lie algebra with a Cartan subalgebra H such that the
associated root system is exceptional. Let H
t
be another Cartan subalgebra and
t
the associated root system. If
t
were classical, then Proposition 7.3 would show that
L is isomorphic to one of the simple classical Lie algebras, and by the rst part of
the proof, there would exist an automorphism of L taking H
t
to H, so that would
be classical too, a contradiction. Hence
t
is exceptional, and hence the fact that
dim
k
L = 2[[ + rank(), and the same for
t
, shows that and
t
are isomorphic.
But by Proposition 7.3 again, we can choose bases h
1
, . . . , h
n
, x

, y

: and
h
t
1
, . . . , h
t
n
, x
t

, y
t

:
t
with the same multiplication table. Therefore, there is
an automorphism of L such that (h
i
) = h
t
i
, (x

) = x
t

and (y

) = y
t

, for any
i = 1, . . . , n and . In particular, (H) = H
t
.
7.5 Remark. There is a more general classical result which asserts that if H
1
and H
2
are any two Cartan subalgebras of an arbitrary Lie algebra over k, then there is an
automorphism , in the subgroup of the automorphism group generated by exp ad x :
x L, ad x nilpotent such that (H
1
) = H
2
. For an elementary (not easy!) proof, you
may consult the article by A.A. George Michael: On the conjugacy theorem of Cartan
subalgebras, Hiroshima Math. J. 32 (2002), 155-163.
54 CHAPTER 2. LIE ALGEBRAS
The dimension of any Cartan subalgebra is called the rank of the Lie algebra.
Summarizing all the work done so far, and assuming the existence of the exceptional
simple Lie algebras, the following result has been proved:
7.6 Theorem. Any simple Lie algebra over k is isomorphic to a unique algebra in the
following list:
sl
n+1
(k) (n 1, A
n
), so
2n+1
(k) (n 2, B
n
), sp
2n
(k) (n 3, C
n
),
so
2n
(k) (n 4, D
n
), E
6
, E
7
, E
8
, F
4
, G
2
.
7.7 Remark. There are the following isomorphisms among dierent Lie algebras:
so
3
(k)

= sp
2
(k) = sl
2
(k), so
4
(k)

= sl
2
(k) sl
2
(k) sp
4
(k)

= so
5
(k), so
6
(k)

= sl
4
(k).
Proof. This can be checked by computing the root systems associated to the natural
Cartan subalgebras. If the root systems are isomorphic, then so are the Lie algebras.
Alternatively, note that the Killing form on the three dimensional simple Lie alge-
bra sl
2
(k) is symmetric and nondegenerate, hence the orthogonal Lie algebra so
3
(k)

=
so
_
sl
2
(k),
_
, which has dimension 3 and contains the subalgebra ad sl
2
(k)

= sl
2
(k),
which is three dimensional too. Hence so
3
(k)

= sl
2
(k).
Now consider V = Mat
2
(k), which is endowed with the quadratic form det and
its associated symmetric bilinear form b(x, y) =
1
2
_
det(x + y) det(x) det(y)
_
=
trace(xy) trace(x) trace(y). Then we get the one-to-one Lie algebra homomorphism
sl
2
(k)sl
2
(k) so(V, b)

= so
4
(k), (a, b)
a,b
, where
a,b
(x) = axxb. By dimension
count, this is an isomorphism.
Next, consider the vector space V = k
4
. The determinant provides a linear iso-
morphism det :
4
V

= k, which induces a symmetric nondegenerate bilinear map
b :
2
V
2
V k. The Lie algebra sl(V ) acts on
2
(V ), which gives an embed-
ding sl
4
(k)

= sl(V ) so
_

2
V, b
_

= so
6
(k). By dimension count, these Lie alge-
bras are isomorphic. Finally, consider a nondegenerate skew-symmetric bilinear form
c on V . Then c may be considered as a linear map c :
2
V k and the dimension
of K = ker c is 5. The embedding sl(V ) so(
2
V, b) restricts to an isomorphism
sp
4
(k)

= sp(V, c)

= so(K, b)

= so
5
(k).
8. Exceptional Lie algebras
In this section a construction of the exceptional simple Lie algebras will be given, thus
completing the proof of Theorem 7.6. The hypothesis of the ground eld k being al-
gebraically closed of characteristic 0 will be kept here. Many details will be left to the
reader.
Let V = k
3
= Mat
31
(k) and let denote the usual cross product on V . For any
x V , let l
x
denote the coordinate matrix, in the canonical basis, of the map y xy.
Hence for
x =
_
_
x
1
x
2
x
3
_
_
l
x
=
_
_
0 x
3
x
2
x
3
0 x
1
x
2
x
1
0
_
_
.
8. EXCEPTIONAL LIE ALGEBRAS 55
Consider also the map V
3
k, (x, y, z) (x y) z (where u v denotes the canonical
inner product on V ). Then a simple computation gives that for any a sl
3
(k), l
ax
=
(l
x
a+a
t
l
x
). Also, the identity of the double cross product: (xy)z = (xz)y(yz)x,
shows that l
xy
= yx
t
xy
t
. Using these properties, the proof of the following result
follows at once.
8.1 Proposition. The subspace
L =
_
_
_
_
_
0 2y
t
2x
t
x a l
y
y l
x
a
t
_
_
: a sl
3
(k), x, y k
3
_
_
_
is a fourteen dimensional Lie subalgebra of gl
7
(k).
For any a sl
3
(k), and x, y k
3
, let M
(a,x,y)
denote the matrix
_
_
0 2y
t
2x
t
x a l
y
y l
x
a
t
_
_
.
In particular we get:
[M
(a,0,0)
, M
(0,x,0)
] = M
(0,ax,0)
, [M
(a,0,0)
, M
(0,0,y)
] = M
(0,0,a
t
y)
.
Let H be the space of diagonal matrices in L, dim
k
H = 2 and let
i
: H k,
the linear map such that
i
_
diag(0,
1
,
2
,
3
,
1
,
2
,
3
)
_
=
i
, i = 1, 2, 3. Thus,

1
+
2
+
3
= 0. Let e
1
, e
2
, e
3
be the canonical basis of V = k
3
. Then we have a root
space decomposition
L = H
_

_
,
with = (
1

2
), (
1

3
), (
2

3
),
1
,
2
,
3
, where M
(E
ij
,0,0)
L

j
for
i ,= j, M
(0,e
i
,0)
L

i
, and M
(0,0,e
i
)
L

i
.
8.2 Theorem. L is simple of type G
2
.
Proof. Any proper ideal I of L is invariant under the adjoint action of H, so I = (IH)
_

(I L

)
_
. Also, sl
3
(k) is isomorphic to the subalgebra S = M
(a,0,0)
: a sl
3
(k)
of L. If I S ,= 0, then, since S is simple, H S I, and hence L = H + [H, L] I,
a contradiction. On the other hand, if I S = 0, then there is an i = 1, 2, 3 such that
L

I with =
i
. But 0 ,= [L

, L

] I S, a contradiction again.
Therefore L is simple of rank 2 and dimension 14. Since the classical Lie algebras of
rank 2 are sl
3
(k) of dimension 8, and so
5
(k) of dimension 10, the only possibility left is
that L must be of type G
2
.
8.3 Exercise. Compute the restriction to H of the Killing form of L. Get a system of
simple roots of and check directly that is the root system G
2
.
Let us proceed now to give a construction, due to Freudenthal, of the simple Lie
algebra of type E
8
. To do so, let V be a vector space of dimension 9 and V

its dual.
Consider a nonzero alternating multilinear map det : V
9
k (the election of det to
name this map is natural), which induces an isomorphism
9
V

= k, and hence another
isomorphism
9
V

=
_

9
V )

= k. Take a basis e
1
, . . . , e
9
of V with det(e
1
, . . . , e
9
) =
1, and consider its dual basis
1
, . . . ,
9
(so, under the previous isomorphisms,
1

. . .
9

9
V

corresponds to 1 k too.
56 CHAPTER 2. LIE ALGEBRAS
Consider now the simple Lie algebra of type A
8
, S = sl(V )

= sl
9
(k), which acts
naturally on V . Then V

is a module too for S with the action given by x.(v) = (x.v)


for any x S, v V and V

. Consider W =
3
V , which is a module too under the
action given by x.(v
1
v
2
v
3
) = (x.v
1
)v
2
v
3
+v
1
(x.v
2
)v
3
+v
1
v
2
(x.v
3
) for any
x S and v
1
, v
2
, v
3
V . The dual space (up to isomorphism) W

=
3
V

is likewise a
module for S. Here (
1

2

3
)(v
1
v
2
v
3
) = det
_

i
(v
j
)
_
for any
1
,
2
,
3
V

and v
1
, v
2
, v
3
V .
The multilinear map det induces a multilinear alternating map T : WWW k,
such that
T
_
v
1
v
2
v
3
, v
4
v
5
v
6
, v
7
v
8
v
9
) = det(v
1
, . . . , v
9
),
for any v
i
s in V . In the same vein we get the multilinear alternating map T

: W

k. These maps induce, in turn, bilinear maps WW W

, (w
1
, w
2
) w
1
w
2

W

, with (w
1
w
2
)(w) = T(w
1
, w
2
, w), and W

W, (
1
,
2
)
1

2
W,
with (
1

2
)() = T

(
1
,
2
, ), for any w
1
, w
2
, w W and
1
,
2
, W

, and where
natural identications have been used, like (W

)

= W.
Take now the bilinear map
3
V
3
V

sl(V ): (w, ) w , given by


(v
1
v
2
v
3
) (
1

2

3
)
=
1
2
_

,S
3
(1)

(1)

(1)
(v
(1)
)
(2)
(v
(2)
)v
(3)

(3)
_

1
3
det
_

i
(v
j
)
_
1
V
,
where (1)

denotes the signature of the permutation S


3
, v denotes the endo-
morphism u (u)v, and 1
V
denotes the identity map on V . Then for any w
3
V ,

3
V

and x sl(V ), the following equation holds:


trace
_
(w )x
_
= (x.w).
(It is enough to check this for basic elements e
J
= e
j
1
e
j
2
e
j
3
, where J = (j
1
, j
2
, j
3
)
and j
1
< j
2
< j
3
, in W and the elements in the dual basis of W

:
J
=
j
1

j
2

j
3
.)
Note that this equation can be used as the denition of w .
Now consider the vector space L = sl(V ) W W

with the Lie bracket given, for


any x, y sl(V ), w, w
1
, w
2
W and ,
1
,
2
W

by:
[x, y] is the bracket in sl(V ),
[x, w] = x.w W, [x, ] = x. W

,
[w
1
, w
2
] = w
1
w
2
W

,
[
1
,
2
] =
1

2
W,
[w, ] = w sl(V ).
A lengthy computation with basic elements, shows that L is indeed a Lie algebra.
Its dimension is dim
k
L = 80 + 2
_
9
3
_
= 80 + 2 84 = 244.
Let H be the Cartan subalgebra of sl(V ) consisting of the trace zero endomorphisms
with a diagonal coordinate matrix in our basis e
1
, . . . , e
9
, and let
i
: H k be
the linear form such that (identifying endomorphisms with their coordinate matrices)

i
_
diag(
1
, . . . ,
9
)
_
=
i
. Then
1
+ +
9
= 0, H is toral in L and there is a root
decomposition
L = H
_

_
,
8. EXCEPTIONAL LIE ALGEBRAS 57
where
=
i

j
: i ,= j (
i
+
j
+
k
) : i < j < k.
Here L

j
= kE
ij
sl(V ) (E
ij
denotes the endomorphism whose coordinate matrix
has (i, j)-entry 1 and 0s elsewhere), L

i
+
j
+
k
= k(e
i
e
j
e
k
) W and L
(
i
+
j
+
k
)
=
k(
i

j
e
k
) W

. Using that sl(V ) is simple, the same argument in the proof of


Theorem 8.2 proves that L is simple:
8.4 Theorem. L is simple of type E
8
.
Proof. We have shown that L is simple of rank 8. The classical Lie algebras of rank 8,
up to isomorphism, are sl
9
(k), so
17
(k), sp
16
(k) and so
16
(k), which have dimensions 80,
156, 156 and 120 respectively. Hence L is not isomorphic to any of them and hence it is
of type E
8
.
Take now the simple Lie algebra L of type E
8
and its generators h
i
, x
i
, y
i
: i =
1, . . . , 8 as in the paragraph previous to Lemma 7.2, the indexing given by the ordering
of the simple roots given in the next diagram:

1

3

4

5

6

7

8

2
Let be the Killing form of L. Then consider the subalgebra

L generated by h
i
, x
i
, y
i
:
i = 1, . . . , 7 and its subalgebra

H =
7
i=1
kh
i
. Since H is toral in L, so is

H in

L and

L =

H
_

(Z
1
+Z
7
)
L

_
.
From here it follows that

H is a Cartan subalgebra of

L. Since the restriction of to

H is nondegenerate (recall that the restriction of to



8
i=1
Qh
i
is positive denite!),
the restriction of to

L is nondegenerate. Thus we get a representation ad :

L gl(L)
with nondegenerate trace form, and hence

L = Z(

L) [

L,

L], with [

L,

L] semisimple
(recall Consequences 2.2). But

H [

L,

L] and Z(

L) C

L
(

H) =

H, so Z(

L) = 0 and

L is semisimple, with root system of type E


7
(which is irreducible). Theorem 6.3 shows
that

L is simple of type E
7
.
The same arguments show that the Lie subalgebra

L of L generated y h
i
, x
i
, y
i
:
i = 1, . . . , 6 is a simple Lie algebra of type E
6
.
Finally, the existence of a simple Lie algebra of type F
4
will be deduced from that
of E
6
. Let now

L be the simple Lie algebra of type E
6
considered above, with canonical
generators h
i
, x
i
, y
i
: i = 1, . . . , 6. Since the multiplication in

L is determined by the
Dynkin diagram, there is an automorphism of

L such that
(h
1
) = h
6
, (x
1
) = x
6
, (y
1
) = y
6
,
(h
6
) = h
1
, (x
6
) = x
1
, (y
6
) = y
1
,
(h
3
) = h
5
, (x
3
) = x
5
, (y
3
) = y
5
,
(h
5
) = h
3
, (x
5
) = x
3
, (y
5
) = y
3
,
(h
2
) = h
2
, (x
2
) = x
2
, (y
2
) = y
2
,
(h
4
) = h
4
, (x
4
) = x
4
, (y
4
) = y
4
.
58 CHAPTER 2. LIE ALGEBRAS
In particular,
2
is the identity, so

L =

L
0


L
1
, with

L
0
= z

L : (z) = z, while

L
1
= z

L : (z) = z, and it is clear that

L
0
is a subalgebra of

L, [

L
0
,

L
1
]

L
1
,
[

L
1
,

L
1
]

L
0
. For any z

L
0
and z
t


L
1
, (z, z
t
) =
_
(z), (z
t
)
_
= (z, z
t
), where
denotes the Killing form of

L. Hence (

L
0
,

L
1
) = 0 and, thus, the restriction of to

L
0
is
nondegenerate. This means that the adjoint map gives a representation ad :

L
0
gl(

L)
with nondegenerate trace form. As before, this gives

L
0
= Z(

L
0
) [

L
0
,

L
0
], and [

L
0
,

L
0
]
is semisimple.
Consider the following elements of

L
0
:

h
1
= h
1
+h
6
,

h
2
= h
3
+h
5
,

h
3
= h
2
,

h
4
= h
2
,
x
1
= x
1
+x
6
, x
2
= x
3
+x
5
, x
3
= x
4
, x
4
= x
2
,
y
1
= y
1
+y
6
, y
2
= y
3
+y
5
, y
3
= y
4
, y
4
= y
2
.
Note that [ x
i
, y
i
] =

h
i
for any i = 1, 2, 3, 4. The element

h = 10

h
1
+19

h
2
+27

h
3
+14

h
4
satises
_

1
(

h) =
6
(

h) = 20 19 = 1,

3
(

h) =
5
(

h) = 38 10 27 = 1,

4
(

h) = 54 38 14 = 2,

2
(

h) = 28 27 = 1.
Thus (

h) > 0 for any


+
, where is the root system of

L. In particular, (

h) ,= 0
for any .
Note that

H =
6
i=1
kh
i
is a Cartan subalgebra of

L. Besides, (

H) =

H and hence

H =

H
0


H
1
, with

H
0
=

H

L
0
=
4
i=1
k

h
i
and

H
1
=

H

L
1
= k(h
1
h
6
)k(h
3
h
5
).
Also, for any , x

+ (x

)

L
0
, and this vector is a common eigenvector for

H
0
with eigenvalue [

H
0
, which is not zero since (

h) ,= 0 for any . Hence there is a


root space decomposition

L
0
=

H
0

_

k(x

+(x

))
_
and it follows that Z(

L
0
) C

L
(

H
0
) L
0
=

H L
0
=

H
0
[

L
0
,

L
0
]. We conclude that
Z(

L
0
) = 0, so

L
0
is semisimple, and

H
0
is a Cartan subalgebra of

L
0
.
The root system

of

L
0
, relative to

H
0
, satises that

= [

H
0
: .
Also
i
=
i
[

H
0


, with x
i
(

L
0
)

i
and y
i
(

L
0
)

i
for any i = 1, 2, 3, 4. Moreover,
[ x
i
, y
i
] =

h
i
and
i
(

h
i
) = 2 for any i. Besides,

=

, with

+
=

: (

h) >
0 :
+
(and similarly with

). We conclude that

=
1
,
2
,
3
,
4

is a system of simple roots of



L
0
. We can compute the associated Cartan matrix. For
instance,
[

h
1
, x
2
] =
_

2
(

h
1
) x
2
=
2
[
1
) x
2
[h
1
+h
6
, x
3
+x
5
] =
3
(h
1
+h
6
)x
3
+
5
(h
1
+h
6
)x
5
= (x
3
+x
5
) = x
2
,
[

h
2
, x
3
] =
_

3
(

h
2
) x
3
=
3
[
2
) x
3
[h
3
+h
5
, x
4
] =
4
(h
3
+h
5
)x
4
= 2x
4
= 2 x
3
,
8. EXCEPTIONAL LIE ALGEBRAS 59
which shows that
2
[
1
) = 1 and
3
[
2
) = 2. In this way we can compute the
whole Cartan matrix, which turns out to be the Cartan matrix of type F
4
:
_
_
_
_
2 1 0 0
1 2 1 0
0 2 2 1
0 0 1 2
_
_
_
_
thus proving that

L
0
is the simple Lie algebra of type F
4
.
Chapter 3
Representations of semisimple Lie
algebras
Unless otherwise stated, the following assumptions will be kept throughout the chapter:
k will denote an algebraically closed eld of characteristic 0,
L will denote a semisimple Lie algebra over k,
H will be a xed Cartan subalgebra of L, will denote the corresponding set of
roots and L = H
_

_
the Cartan decomposition.
will denote the Killing form of L and ( [ ) : H

k the induced nondegen-


erate bilinear form.
For any , t

H is dened by the relation (h) = (t

, h) for any h H,
and h

=
2t

(t

)
.
=
1
, . . . ,
n
denotes a xed system of simple roots. Accordingly, decom-
poses as =
+

(disjoint union), where


+
(respectively

) is the set of
positive roots (resp., negative roots). Moreover,

=
+
.
J is the Weyl group, generated by

: .
L
+
=

+L

, L

, so that L = L

H L
+
.
This chapter is devoted to the study of the nite dimensional representations of such
an algebra L. By Weyls theorem (Chapter 2, 2.5), any representation is completely
reducible, so the attention is focused on the irreducible representations.
1. Preliminaries
Let : L gl(V ) be a nite dimensional representation of the Lie algebra L. Since the
Cartan subalgebra H is toral, V decomposes as
V =
H
V

,
where V

= v V : h.v = (h)v h H.
61
62 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
1.1 Denition. Under these circumstances, H

is said to be a weight of V if
V

,= 0. The set of weights of V is denoted P(V ).


1.2 Properties of P(V ).
(i) For any and P(V ), L

.V

V
+
.
(ii) For any P(V ) and , [) :=
2([)
([)
is an integer.
Proof. Let S

= L

[L

, L

], which is isomorphic to sl
2
(k) and take
elements x

and y

such that [x

, y

] = h

. Then W =
mZ
V
+m
is an S

-submodule of V . Hence the eigenvalues of the action of h

on W form an
unbroken chain of integers:
(1.1) ( +q)(h

), . . . , (h

), . . . , ( r)(h

),
with ( r)(h

) = ( +q)(h

). But (h

) = [) and (h

) = 2. Hence,
(h

) = [) = r q Z.
(iii) P(V ) is J-invariant.
Proof. For any P(V ) and ,

() = [) = (r q) P(V ),
since it belongs to the unbroken chain (1.1).
(iv) Let C =
_

i
[
j
)
_
be the Cartan matrix. Then
P(V )
1
det C
_
Z
1
+ +Z
n
_
E = R
1
+ +R
n
.
(Recall that E is an euclidean vector space.)
Proof. Since is a basis of H

, for any P(V ), there are scalars r


1
, . . . , r
n
k
such that = r
1

1
+ +r
n

n
. Then [
j
) =

n
i=1

i
[
j
)r
i
, j = 1, . . . , n. This
constitutes a system of linear equations with integer coecients, whose matrix is
C. Solving this system using Cramers rule gives r
i

1
det C
Z.
At this point it is useful to note that det A
n
= n+1, det B
n
= det C
n
= 2, det D
n
= 4,
det E
6
= 3, det E
7
= 2 and det E
8
= det F
4
= det G
2
= 1.
1.3 Denition.

R
= Z = Z is called the root lattice of L.

W
= H

: [
i
) Z i = 1, . . . , n (which is contained in
Z
det C
) is called
the weight lattice.
The elements of
W
are called weights of the pair (L, H).
An element
W
is said to be a dominant weight if [) 0 for any .
The set of dominant weights is denoted by
+
W
.
1. PRELIMINARIES 63
For any i = 1, . . . , n, let
i
H

such that
i
[
j
) =
ij
for any j = 1, . . . , n.
Then
i

+
W
,
W
= Z
1
+ . . . + Z
n
, and
+
W
= Z
0

1
+ + Z
0

n
. The
weights
1
, . . . ,
n
are called the fundamental dominant weights.
1.4 Proposition.
W
= H

: [) Z . In particular, the weight lattice


does not depend on the chosen set of simple roots .
Proof. It is trivial that H

: [) Z
W
. Conversely, let
W
and
+
. Let us check that [) Z by induction on ht(). If this height is
1, then and this is trivial. If ht() = n > 1, then = m
1

1
+ + m
n

n
,
with m
1
, . . . , m
n
Z
0
. Since ([) > 0, there is at least one i = 1, . . . , n such that
([
i
) > 0 and ht
_

i
()
_
= ht
_
[
i
)
i
_
< ht(). Then
[) =

i
()[

i
()) =

[
i
)
i
[

i
()
_
= [

i
()) [
i
)
i
[

i
())
and the rst summand is in Z by the induction hypothesis, and so is the second since

W
and [) Z.
1.5 Denition. Let : L gl(M) be a not necessarily nite dimensional representa-
tion.
(i) An element 0 ,= m M is called a highest weight vector if m is an eigenvector for
all the operators (h) (h H), and (L
+
)(m) = 0.
(ii) The module M is said to be a highest weight module if it contains a highest weight
vector that generates M as a module for L.
1.6 Proposition. Let : L gl(V ) be a nite dimensional representation of L. Then
(i) V contains highest weight vectors. If v V is such a vector and v V

( H

),
then
+
W
.
(ii) Let 0 ,= v V

be a highest weight vector. Then


W = kv +

r=1

1i
1
,...,i
r
n
k
_
(y

1
) (y

i
r
)(v)
_
is the submodule of V generated by v. Besides, W is an irreducible L-module and
P(W)
i
1

i
r
: r 0, 1 i
1
, . . . , i
r
n
_

n
i=1
Z
0

i
_
.
(iii) If V is irreducible, then it contains, up to scalars, a unique highest weight vector.
Its weight is called the highest weight of V .
(iv) (Uniqueness) For any
+
W
there is, up to isomorphism, at most one nite
dimensional irreducible L-module whose highest weight is .
Proof. (i) Let l Q such that (l[) > 0 for any (for instance, one can take
(l[) = 1 for any ), and let P(V ) such that (l[) is maximum. Then for any

+
, + , P(V ), so that L

.V

= 0. Hence L
+
.V

= 0 and any 0 ,= v V

is a
highest weight vector.
64 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
Besides, for any
+
, [) = r q = r 0 (since + , P(V )), so
+
W
.
(ii) The subspace W is invariant under the action of L

and the action of H (since


it is spanned by common eigenvectors for H). Therefore, since L
+
is generated by
x

: , it is enough to check that W is invariant under the action of (x

), for
. But x

.v = 0 (v is a highest weight vector) and for any , and w W,


x

.(y

.w) = [x

, y

].w y

.(x

.w), and [x

, y

] either is 0 or belongs to H. An easy


induction on r argument shows that x

.
_
(y

1
) (y

i
r
)(v)
_
W, as required.
Therefore, W is an L-submodule and P(W)
i
1

i
r
: r 0, 1
i
1
, . . . , i
r
n. (Note that up to now, the nite dimensionality of V has played no role.)
Moreover, V

W = kv and if W is the direct sum of two submodules W = W


t
W
tt
,
then W

= kv = W
t

W
tt

. Hence either v W
t

or v W
tt

. Since W is generated by
W, we conclude that either W = W
t
or W = W
tt
. Now, by nite dimensionality, Weyls
Theorem (Chapter 2, 2.5) implies that W is irreducible.
(iii) If V is irreducible, then V = W, V

= kv

and for any P(V ) there is


an r 0 and 1 i
1
, . . . , i
r
n such that =
i
1

i
r
. Hence (l[) < (l[).
Therefore, the highest weight is the only weight with maximum value of (l[).
(iv) If V
1
and V
2
are two irreducible highest weight modules with highest weight and
v
1
V
1

, v
2
V
2

are two highest weight vectors, then w = (v


1
, v
2
) is a highest weight
vector in V
1
V
2
, and hence W = kw+

r=1

1i
1
,...,i
r
n
k
_
(y

1
) (y

i
r
)(w)
_
is a
submodule of V
1
V
2
. Let
i
: V
1
V
2
V
i
denote the natural projection (i = 1, 2).
Then v
i

i
(W), so
i
(W) ,= 0 and, since both W and V
i
are irreducible by item (iii),
it follows that
i
[
W
: W V
i
is an isomorphism (i = 1, 2). Hence both V
1
and V
2
are
isomorphic to W.
There appears the natural question of existence: given a dominant weight
+
W
,
does there exist a nite dimensional irreducible L-module V whose highest weight is ?
Note that = m
1

1
+ + m
n

n
, with m
1
, . . . ,
n
Z
0
. If it can be proved that
there exists and irreducible nite dimensional highest weight module V (
i
) of highest
weight
i
, for any i = 1, . . . , n, then in the module
V (
1
)
m
1
V (
n
)
m
n
there is a highest weight vector of weight (the basic tensor obtained with the highest
weight vectors of each copy of V (
i
)), By item (ii) above this highest weight vector
generates an irreducible L-submodule of highest weight . Hence it is enough to deal
with the fundamental dominant weights and this can be done ad hoc. A more abstract
proof will be given here.
2. Properties of weights and the Weyl group
Let us go back to the abstract situation that appeared in Chapter 2.
Let E be an euclidean vector space, a root system in E and =
1
, . . . ,
n

a system of simple roots. Consider in this abstract setting the subsets we are already
familiar with:

R
= Z = Z,
2. PROPERTIES OF WEIGHTS AND THE WEYL GROUP 65

W
= E : [) Z = Z
1
+ +Z
n
(the weight lattice),

+
W
=
W
: [) 0 = Z
0

1
+ +Z
0

n
(the set of dominant
weights),

i
=

i
(i = 1, . . . , n), J =

: ) (Weyl group).
2.1 Properties.
(i) The Weyl group is generated by
1
, . . . ,
n
.
Proof. Let J
0
be the subgroup of J generated by
1
, . . . ,
n
. It is enough to
prove that

J
0
for any
+
. This will be proven by induction on ht(),
and it is trivial if ht() = 1. Assume that ht() = r and that

J
0
for any

+
with ht() < r. The arguments in the proof of Proposition 1.4 show
that there is an i = 1, . . . , n, such that ([
i
) > 0, so =

i
() = [
i
)
i
satises that ht() < ht(). Hence

J
0
. But for any isometry and any
E:

()
() = () ()[())() =
_
[)
_
=

(),
so
()
=


1
. In particular, with =
i
,

=
i

i
J
0
.
(ii) If t 2,
1
, . . . ,
t
and

1
. . .

t1
(
t
)

, then there is an index


1 s t 1 such that

1
. . .

t
=

1
. . .

s1

s+1
. . .

t1
.
(That is, there is a simpler expression as a product of generators.)
Proof. Let s be the largest index (1 s < t 1) with

s
. . .

t1
(
t
)

.
Thus

s
_

s+1
. . .

t1
(
t
)
_

. But

s
_

_
=
+

s
(Chapter 2,
Proposition 6.1), so

s+1
. . .

t1
(
t
) =
s
and, using the argument in the
proof of (i),
_

s+1
. . .

t1
_

t

_

t1
. . .

s+1
_
=

s+1
...

t1
(
t
)
=

s
,
whence

s+1
. . .

t1

t
=

s+1
. . .

t1
.
(iii) Given any J, item (i) implies that there are
1
, . . . ,
t
such that =

1
. . .

t
. This expression is called reduced if t is minimum. (For = id,
t = 0.) By the previous item, if the expression is reduced (
t
)

. In particular,
for any id ,= J, () ,= . Therefore, because of Chapter 2, Proposition 6.1,
J acts simply transitively on the systems of simple roots.
(iv) Let =
i
1

i
t
be a reduced expression. Write l() = t. Also let n() =
[
+
: ()

[. Then l() = n().


66 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
Proof. By induction on l() = t. If t = 0 this is trivial. For t > 0, =
i
1

i
t
satises (
i
t
)

. But
i
t
_

i
t

_
=
+

i
t
. Hence n
_

i
1

i
t1
_
=
n() 1 and the induction hypothesis applies.
(v) There is a unique element
0
J such that
0
() = . Moreover,
2
0
= id and
l(
0
) = [
+
[.
Proof. J acts simply transitively on the system of simple roots, so there is a
unique
0
J such that
0
() = . Since
2
0
() = , it follows that
2
0
= id.
Also,
0
(
+
) =

, so l(
0
) = n(
0
) = [
+
[.
(vi) Dene a partial order on E by if Z
0

1
+ +Z
0

n
. If
+
W
,
then [
+
W
: [ is nite.
Proof.
W
, so = r
1

1
+ +r
n

n
, with r
1
, . . . , r
n
Q (see Properties 1.2).
It is enough to prove that if
+
W
, then r
i
0 for any i; because if
+
W
and
, then = s
1

1
+ + s
n

n
with s
i
Q, s
i
0 and r
i
s
i
Z
0
for any
i, and this gives a nite number of possibilities.
Hence, it is enough to prove the following result: Let v
1
, . . . , v
n
be a basis of
an euclidean vector space with (v
i
[v
j
) 0 for any i ,= j, and let v E such that
(v[v
i
) 0 for any i = 1, . . . , n, then v R
0
v
1
+ +R
0
v
n
.
To prove this, consider the dual basis w
1
, . . . , w
n
(so (v
i
[w
j
) =
ij
for any i, j). If
(v[v
i
) =
i
0 for any i, then v =
1
w
1
+ +
n
w
n
. So it is enough to prove the
result for w
n
(the argument for w
i
, i < n, is the same). Now, w
n
=
1
v
1
+ +
n
v
n
and 0 < (w
n
[w
n
) = (w
n
[
n
v
n
) =
n
, while the vector w
n

n
v
n
Rv
1
+ +Rv
n1
satises that (w
n

n
v
n
[v
i
) =
n
(v
n
[v
i
) 0. Now, by induction on the dimension
n it can be assumed that
1
, . . . ,
n1
0.
(vii) For any
W
, there is a unique
+
W
J. That is, for any
W
, its
orbit under the action of J intersects
+
W
in exactly one weight.
Proof. Let = m
1

1
+ + m
n

n
, with m
i
= [
i
) Z, i = 1, . . . , n. Let us
prove that there is a
+
W
J. If m
i
0 for any i, then we can take = .
Otherwise, if m
j
< 0 for some j, then
1
=
j
() = m
j

j
satises that
1
>
and
1
J. If
1

+
W
we are done, otherwise we proceed now with
1
and
obtain a chain =
0
<
1
<
2
< , with
i
J. Since J is nite, this
process must stop, so there is an r such that
r

+
W
.
To prove the uniqueness, it is enough to prove that if ,
+
W
and there exists
a J with () = , then = . For this, take such a of minimal length
and consider a reduced expression for : =

t
. If t = 0, = id and
= . Otherwise, t > 0 and (
t
) < 0. Hence
0 ([
t
) =
_
()[(
t
)
_
=
_
[(
t
)
_
0,
so ([
t
) = 0,

t
() = , and = () =

t1
(), a contradiction with
the minimality of the length.
2. PROPERTIES OF WEIGHTS AND THE WEYL GROUP 67
(viii) Let
+
W
be a dominant weight. Then () for any J. Moreover,
its stabilizer J

= J : () = is generated by
i
: ([
i
) = 0. In
particular, if is strictly dominant (that is, [
i
) > 0 for any i = 1, . . . , n), then
J

= 1.
Proof. Let =
i
1

i
t
be a reduced expression of id ,= J, and let

s
=
i
s

i
t
(), 0 s t. Then, for any 1 s t,
(
s
[
i
s1
) =
_

i
s

i
t
()[
i
s1
_
=
_
[
i
t

i
s
(
i
s1
)
_
and this is 0, because item (ii) shows that
i
t

i
s
(
i
s1
)
+
. Hence

s1
=
i
s1
(
s
) =
s

s
[
i
s1
)
i
s1

s
.
Therefore, () =
1

2

t
= and () = if and only if
s
= for any
s, if and only if ([
i
1
) = = ([
i
t
) = 0.
(ix) A subset of
W
is said to be saturated if for any , , and i Z
between 0 and [), i . In particular, is invariant under the action of
J. If, in addition, there is a dominant weight such that any satises
, then is said to be saturated with highest weight .
Let
+
W
. Then the subset is saturated with highest weight if and only if
=

+
W

J. In particular, is nite in this case.


Proof. For
+
W
, let

+
W

J.
If is saturated with highest weight , and , there is a J such that
()
+
W
. But is J-invariant, so = ()
+
W
, and hence J,
with
+
W
and . Therefore,

. To check that =

it is enough
to check that any
+
W
with < , belongs to . But, if
t
= +

n
i=1
m
i

i
is any weight in with m
i
Z
0
for any i, and
t
,= (that is
t
> ), then
0 < (
t
[
t
), so there is an index j such that m
j
> 0 and (
t
[
j
) > 0.
Now, since
+
W
, [
j
) 0, so
t
[
j
) > 0, and since is saturated,
tt
=

j
. Starting with
t
= and proceeding in this way, after a nite number
of steps we obtain that .
Conversely, we have to prove that for any
+
W
,

is saturated. By its very


denition,

is J-invariant. Let

, and i Z between 0 and


[). It has to be proven that i

. Take J such that ()


+
W
.
Since ()[()) = [), we may assume that
+
W
. Also, changing if
necessary by , we may assume that
+
. Besides, with m = [),

( i) = (m i), so it is enough to assume that 0 < i


m
2
|. Then
i[) = m2i 0.
If i[) > 0 and J satises (i)
+
W
, then 0 < (i)[()), so
()
+
and ( i) = () i() < () , since is dominant. Hence
( i)

and so does i. On the other hand, if m is even and i =


m
2
,
then i[) = 0. Take again a J such that (i)
+
W
. If ()
+
,
the same argument applies and ( i) < () . But if ()

, take
=

, then ( i) = ( i)
+
W
and () = ()
+
, so again
the same argument applies.
68 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
(x) Let =
1
2

+ be the Weyl vector (see Chapter 2, 6.1), and let


+
W
and
J. Then ( + [ + ) ( + [ + ), and they are equal if and only if
= . The same happens for any
+
W
with . Hence, in particular,
( +[ +) < ( +[ +) for any

.
Proof. Since
i
() =
i
for any i = 1, . . . , n, it follows that [
i
) = 1 for any
i, so =
1
+ +
n

+
W
. Let J and let J such that = ().
Then,
(+[+) (+[+) = (+[+) (() +[() +) = 2(()[).
But () < (item (viii)) and is strictly dominant, so ([()) > 0, and the
rst assertion follows.
Now, if
+
W
with , then
( +[ +) ( +[ +) = ( +[ ) + 2( [) 0
since +
+
W
, 0 and is strictly dominant. Besides, this is 0 if and
only if = 0.
Later on, it will be proven that if V is any irreducible nite dimensional module over
L, then its set of weights P(V ) is a saturated set of weights.
3. Irreducible representations
In this section innite dimensional vector spaces will be allowed.
Given a vector space V , recall that its tensor algebra is the direct sum
T(V ) = k V (V
k
V ) V
n
,
with the associative multiplication determined by
(v
1
v
n
)(w
1
w
m
) = v
1
v
n
w
1
w
m
.
Then T(V ) is a unital associative algebra over k.
Given the Lie algebra L, let I be the ideal generated by the elements
x y y x [x, y] L (L L) T(L),
where x, y L. The quotient algebra
U(L) = T(L)/I
is called the universal enveloping algebra of L. Let us denote by : L U(L) the
natural embedding: x (x) = x + I. Sometimes L will be identied with its image
under . The universal property of the tensor algebra immediately gives:
3.1 Universal property. Given an associative algebra over k, let A

be the Lie algebra


dened on A by means of [x, y] = xy yx, for any x, y A. Then for any Lie algebra
homomorphism : L A

, there is a unique homomorphism of associative algebras


: U(L) A such that the following diagram is commutative
3. IRREDUCIBLE REPRESENTATIONS 69
L

U(L)
A

-
@
@
@
@R


?
Remark. The universal enveloping algebra makes sense for any Lie algebra, not just
for semisimple Lie algebras over algebraically closed elds of characteristic 0.
In particular, any representation : L gl(V ) induces a representation of U(L):
: U(L) End
k
(V ). Therefore a module for L is the same thing as a left module for
the associative algebra U(L).
Now, given a linear form H

, consider:
J() =

+ U(L)x

+

n
i=1
U(L)(h
i
(h
i
)1), which is a left ideal of U(L),
where h
i
= h

i
for any i = 1, . . . , n.
M() = U(L)/J(), which is called the associated Verma module. (It is a left
module for U(L), hence a module for L.)
: U(L) M(), u u +J(), the canonical homomorphism of modules.
m

= (1) = 1 +J(), the canonical generator: M() = U(L)m

.
Then x

J() for any


+
, so x

= 0. Therefore, L
+
.m

= 0. Also,
h
i
(h
i
)1 J(), so h
i
m

= (h
i
)m

for any i, and hence hm

= (h)m

for any
h H.
Therefore, as in the proof of Proposition 1.6
M() = km

r=1

k
_
y

i
1
.(y

i
2
(y

i
r
.m

))
_
.
(Note that y

i
1
.(y

i
2
(y

i
r
.m

)) M()

i
1

i
r
.)
Let K() be the sum of all the proper submodules of M(). Then m

, K(), so
K() ,= M() and V () = M()/K() is an irreducible L-module (although, in general,
of innite dimension). However,
dimM()

i
1

i
r
[(
j
1
, . . . ,
j
r
)
r
:
j
1
+ +
j
r
=
i
1
+ +
i
r
[ r!,
so for any H

, the dimension of the weight space V ()

is nite.
3.2 Theorem. For any H

, dimV () is nite if and only if


+
W
.
Proof. The vector v

= m

+K() is a highest weight vector of V () of weight , and


hence by Proposition 1.6, if dimV () is nite, then
+
W
.
Conversely, assume that
+
W
. Let x
i
= x

i
, y
i
= y

i
and h
i
= h

i
, i =
1, . . . , n, be the standard generators of L. Denote by : L gl
_
V ()
_
the associated
representation. For any i = 1, . . . , n, m
i
= [
i
) Z
0
, because is dominant. Several
steps will be followed now:
70 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
1. (y
i
)
m
i
+1
(v

) = 0 for any i = 1, . . . , n.
Proof. Let u
i
= (y
i
)
m
i
+1
(v

) = y
m
i
+1
i
v

(as usual we denote by juxtaposition the


action of an associative algebra, in this case U(L), on a left module, here V ()).
For any j ,= i, [x
j
, y
i
] = 0, so x
j
u
i
= y
m
i
+1
i
(x
j
v

) = 0. By induction, it is checked
that in U(L), x
i
y
m+1
i
= y
m+1
i
x
i
+ (m + 1)y
m
i
h
i
m(m + 1)y
m
i
for any m Z
0
.
Hence
x
i
u
i
= x
i
y
m
i
+1
i
v

= y
m
i
+1
i
x
i
v

+ (m
i
+ 1)y
m
i
i
h
i
v

m
i
(m
i
+ 1)y
m
i
i
v

= 0 + (m
i
+ 1)(h
i
)v

m
i
(m
i
+ 1)v

= 0
Thus L
+
u
i
= 0 and hence u
i
is a highest weight vector of weight
i
= m
i

i
.
Then W
i
= ku
i
+

r
i=1

k
_
y

i
1
y

i
r
u
i
_
is a proper submodule of V (), and
hence it is 0. In particular u
i
= 0, as required.
2. Let S
i
= L

i
L

i
[L

i
, L

i
] = kx
i
+ ky
i
+ kh
i
, which is a subalgebra of L
isomorphic to sl
2
(k). Then V () is a sum of nite dimensional S
i
-submodules.
Proof. The linear span of v

, y
i
v

, . . . , y
m
i
i
v

is an S
i
-submodule. Hence the sum
V
t
of the nite dimensional S
i
-submodules of V () is not 0. But if W is a nite
dimensional S
i
-submodule, then consider

W = LW =

zW, where z runs over
a xed basis of L. Hence dim

W < . But for any w W, x
i
(zw) = [x
i
, z]w +
z(x
i
w) LW =

W and, also, y
i
(zw) LW =

W. Therefore,

W = LW is a nite
dimensional S
i
-submodule, and hence contained in V
t
. Thus, V
t
is a nonzero
L-submodule of the irreducible module V (), so V
t
= V (), as required.
3. The set of weights P
_
V ()
_
is invariant under J.
Proof. It is enough to see that P
_
V ()
_
is invariant under
i
, i = 1, . . . , n. Let
P
_
V ()
_
and 0 ,= v V ()

. Then v V () = V
t
, so by complete reducibil-
ity (Weyls Theorem, Chapter 2, 2.5) there are nite dimensional S
i
-submodules
W
1
, . . . , W
m
such that v W
1
W
m
. Thus, v = w
1
+ + w
m
, with
w
j
W
j
for any j, and we may assume that w
1
,= 0. Since h
i
v = (h
i
)v, it
follows that h
i
w
j
= (h
i
)w
j
for any j. Hence (h
i
) is an eigenvalue of h
i
in
W
1
, and the representation theory of sl
2
(k) shows that (h
i
) is another eigen-
value. Besides, if (h
i
) 0, then 0 ,= y
(h
i
)
i
w
1
(W
1
)
(h
i
)
, so 0 ,= y
(h
i
)
i
v
V ()
(h
i
)
i
= V ()

i
()
; while if (h
i
) < 0, then 0 ,= x
(h
i
)
i
w
1
(W
1
)
(h
i
)
, so
0 ,= x
(h
i
)
i
v V ()
(h
i
)
i
= V ()

i
()
. In any case
i
() P
_
V ()
_
.
4. For any P
_
V ()
_

i
1

i
r
: r 0, 1 i
1
, . . . , i
r
n
W
, there
is a J such that ()
+
W
. Hence, by the previous item, () P
_
V ()
_
,
so () . Therefore, P
_
V ()
_

+
W

J. Hence P
_
V ()
_
is nite, and
since all the weight spaces of V () are nite dimensional, we conclude that V ()
is nite dimensional.
3. IRREDUCIBLE REPRESENTATIONS 71
3.3 Corollary. The map

+
W
isomorphism classes of nite dimensional irreducible L-modules
the class of V (),
is a bijection.
3.4 Proposition. For any
+
W
, P
_
V ()
_
=

(the saturated set of weights with


highest weight , recall Properties 2.1).
Moreover, for any P
_
V ()
_
, dimV ()

= dimV ()
()
for any J, and
( +[ +) ( +[ +), with equality only if = .
Proof. For any P
_
V ()
_
and any ,
mZ
V ()
+m
is a module for S

=
L

[L

, L

]

= sl
2
(k). Hence its weights form a chain: the -chain of :
+ q, . . . , , . . . , r with [) = r q. Therefore, P
_
V ()
_
is a saturated set of
weights with highest weight , and thus P
_
V ()
_
=

by Properties 2.1.
The last part also follows from Properties 2.1
Now, if : L gl
_
V ()
_
is the associated representation, for any
+
, ad (x

)
End
k
_
(L)
_
and (x

) End
k
_
V ()
_
are nilpotent endomorphisms. Moreover, ad (x

)
is a derivation of the Lie algebra (L). Hence exp
_
ad (x

)
_
is an automorphism of (L),
while exp (x

) GL
_
V ()
_
. The same applies to (y

). Consider the maps:

= exp
_
ad (x

)
_
exp
_
ad (y

)
_
exp
_
ad (x

)
_
Aut (L),

= exp (x

) exp((y

)) exp (x

) GL
_
V ()
_
.
For any z L, exp
_
ad (x

)
_
(z) = exp
_
L
(x

)
R
(x

)
_
= exp L
(x

)
exp
_
R
(x

)
_
,
where L
a
and R
a
denote the left and right multiplication by the element a End
k
_
V ()
_
,
which are commuting endomorphisms. Hence
exp
_
ad (x

)
_
((z)) =
_
exp (x

)
_
(z)
_
exp((x

))
_
and

((z)) =

(z)
1

.
For any h H, exp
_
ad (x

)
_
((h)) = (h) + [(x

), (h)] = (h (h)x

). Hence
exp
_
ad (y

)
_
exp
_
ad (x

)
_
((h))
= (h (h)x

) +
_
[h (h)x

, y

]
_
+
1
2

_
[[h (h)x

, y

], y

]
_
=
_
h (h)x

(h)y

(h)h

+
1
2
2(h)y

_
=
_
h (h)h

(h)x

_
and, nally,

_
(h)
_
= exp
_
ad (x

)
__
h (h)h

(h)x

_
=
_
h (h)h

(h)x

+ [x

, h (h)h

(h)x

]
_
=
_
h (h)h

(h)x

(h)x

+ 2(h)x

_
=
_
h (h)h

_
,
72 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
so for any 0 ,= v V ()

and h H,

()(h)v =
_
[)
_
(h)v =
_
h (h)h

_
v =

_
(h)
_
(v) =

(h)
_

(v)
_
.
That is, (h)
_

(v)
_
=

()(h)
1

(v) for any h H, so


1

(v) V ()

()
and

_
V ()

_
V ()

()
. But also,
1

_
V ()

()
_
V ()

()
= V ()

. Therefore,

_
V ()

_
= V ()

()
and both weight spaces have the same dimension.
4. Freudenthals multiplicity formula
Given a dominant weight
+
W
and a weight
W
, the dimension of the associated
weight space, m

= dimV ()

, is called the multiplicity of in V (). Of course, m

= 0
unless P
_
V ()
_
.
The multiplicity formula due to Freudenthal gives a recursive method to compute
these multiplicities:
4.1 Theorem. (Freudenthals multiplicity formula, 1954) For any
+
W
and

W
:
_
( +[ +) ( +[ +)
_
m

= 2

j=1
( +j[)m
+j
.
(Note that the sum above is nite since there are only nitely many weights in
P
_
V ()
_
. Also, starting with m

= 1, and using Proposition 3.4, this formula allows


the recursive computation of all the multiplicities.)
Proof. Let : L gl
_
V ()
_
be the associated representation and denote also by the
representation of U(L), : U(L) End
k
_
V ()
_
. Let a
1
, . . . , a
m
and b
1
, . . . , b
m
be
dual bases of L relative to the Killing form (that is, (a
i
, b
j
) =
ij
for any i, j). Then
for any z L, [a
i
, z] =

m
j=1

j
i
a
j
for any i and [b
j
, z] =

m
i=1

i
j
b
i
for any j. Hence,
inside U(L),
m

i=1
[a
i
b
i
, z] =
m

i=1
_
[a
i
, z]b
i
+a
i
[b
i
, z]
_
=
n

i,j=1
_

j
i
+
i
j
_
a
j
b
i
,
but
0 = ([a
i
, z], b
j
) +(a
i
, [b
j
, z]) =
j
i
+
i
j
,
so [

m
i=1
a
i
b
i
, L] = 0. Therefore, the element c =

m
i=1
a
i
b
i
is a central element in U(L),
which is called the universal Casimir element (recall that a Casimir element was used
in the proof of Weyls Theorem, Chapter 2, 2.5). By the well-known Schurs Lemma,
(c) is a scalar.
Take a basis g
1
, . . . , g
n
of H with (g
i
, g
j
) =
ij
for any i, j. For any H

, let
t

H, such that (t

, . ) = . Then t

= r
1
g
1
+ +r
n
g
n
, with r
i
= (t

, g
i
) = (g
i
)
for any i. Hence,
([) = (t

) =
n

i=1
r
i
(g
i
) =
n

i=1
(g
i
)
2
.
4. FREUDENTHALS MULTIPLICITY FORMULA 73
For any
+
, take x

and x

such that (x

, x

) = 1 (so that
[x

, x

] = t

). Then the element


c =
n

i=1
g
2
i
+

+
(x

+x

) =
n

i=1
g
2
i
+

+
t

+ 2

+
x

is a universal Casimir element.


Let 0 ,= v

V ()

be a highest weight vector, then since x

= 0 for any
+
,
(c)v

=
_
n

i=1
(g
i
)
2
+

+
(t

)
_
v

=
_
([) + 2([)
_
v

= ([ + 2)v

,
because 2 =

+ . Therefore, since (c) is a scalar, (c) = ([ + 2)id.


For simplicity, write V = V (). Then trace
V

(c) = ([ + 2)m

.
Also, for any v V

,
(c)v =
_
n

i=1
(g
i
)
2
_
v +
_

+
(t

)
_
v + 2
_

+
(x

)(x

)
_
v
= ([ + 2)v + 2

+
(x

)(x

)v.
Recall that if f : U
1
U
2
and g : U
2
U
1
are linear maps between nite dimensional
vector spaces, then trace
U
1
gf = trace
U
2
fg. In particular,
trace
V

(x

)(x

) = trace
V
+
(x

)(x

)
= trace
V
+
_
(t

) +(x

)(x

)
_
= ( +[)m
+
+ trace
V
+
(x

)(x

)
=

j=1
( +j[)m
+j
.
(The argument is repeated until V
+j
= 0 for large enough j.) Therefore,
([ + 2)m

= ([ + 2)m

+ 2

j=1
( +j[)m
+j
,
and this is equivalent to Freudenthals multiplicity formula.
4.2 Remark. Freudenthals multiplicity formula remains valid if the inner product is
scaled by a nonzero factor.
4.3 Example. Let L be the simple Lie algebra of type G
2
and write = , .
The Cartan matrix is
_
2 1
3 2
_
, so we may scale the inner product so that ([) = 2,
([) = 6 and ([) = 3. The set of positive roots is (check it!):

+
= , , +, 2 +, 3 +, 3 + 2.
Let
1
,
2
be the fundamental dominant weights, so:

1
[) = 1,
1
[) = 0, so
1
= 2 +,

2
[) = 0,
2
[) = 1, so
2
= 3 + 2.
74 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
.................................................................................. . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ...........................
..........................
.......................................................................................................................................................................
. . . . . . . . . . . . . . . . . . . . . . . . . .
....................................................................................................................................................................................................................... . . . . .....................
. . . . . . . . . . . . . . . . . . . . . . . . . .
................................................................................................................................................................... . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . .
.................................................................................. . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . .
................................................
................................................................
................................................................................
................................................................................................
................................................
................................................................
................................................................................
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.











1
2
1

Figure 4.1: G
2
, roots and weights
Consider the dominant weight =
1
+
2
= 5 + 2 (see gure 4.1).
Then

+
W
: = , 2
1
,
2
,
1
, 0,
and in order to compute the weight multiplicities of V () it is enough to compute
m

= 1, m
2
1
, m

2
, m

1
and m
0
.
The Weyl group J is generated by

and

, which are the reections along the


lines trough the origin and perpendicular to and respectively. The composition

is the counterclockwise rotation of angle



3
. Thus J is easily seen to be the dihedral
group of order 12. Therefore, P
_
V ()
_
consists of the orbits of the dominant weights
(3.4), which are the weights marked in Figure 4.1.
A simple computation gives that = =
1
+
2
, (
1
[
1
) = 2, (
2
[
2
) = 6,
(
1
[
2
) = 3 and
( +[ +) = 4([) = 56,
(2
1
+[2
1
+) = (3
1
+
2
[3
1
+
2
) = 42,
(
2
+[
2
+) = (
1
+ 2
2
[
1
+ 2
2
) = 38,
(
1
+[
1
+) = (2
1
+
2
[2
1
+
2
) = 26,
([) = 14.
We start with m

= 1, then Freudenthals multiplicity formula gives:


(56 42)m
2
1
= 2

+
(2
1
+[)m
2
1
+
= 2
_
(2
1
+[)m
2
1
+
+ (2
1
+[)m
2
1
+
+ (2
1
+ +[ +)m
2
1
++
_
= 2
_
(5 + 2[) + (4 + 3[) + (5 + 3[ +)
_
= 2
_
(10 6) + (12 + 18) + (10 24 + 18)
_
= 28,
4. FREUDENTHALS MULTIPLICITY FORMULA 75
and we conclude that m
2
1
=
28
14
= 2. Thus the multiplicity of the weight spaces, whose
weight is conjugated to 2
1
is 2. (These are the weights marked with a in in Figure
4.1.)
In the same vein,
(56 38)m

2
= 2

j=1
(
2
+j[)m

2
+j
= 2
_
(
2
+[)m

2
+
+ (
2
+ 2[)m

2
+2
+ (
2
+ +[ +)m

2
++
+ (
2
+ 2 +[2 +)m

2
+2+
_
= 2
_
(4 + 2[)2 + (5 + 2[) + (4 + 3[ +) + (5 + 3[2 +)
_
= 2
_
(8 6)2 + (10 6) + (8 12 9 + 18) + (20 15 18 + 18)
_
= 36,
and we conclude that m

2
= 2. Now,
(56 26)m

1
= 2

j=1
(
1
+j[)m

1
+j
= 2
_
(
1
+[)m

1
+
+ (
1
+ 2[)m

1
+2
+ (
1
+[)m

1
+
+ (
1
+ +[ +)m

1
++
+ (
1
+ 2( +)[ +)m

1
+2(+)
+ (
1
+ 2 +[2 +)m

1
+2+
+ (
1
+ 3 +[3 +)m

1
+3+
+ (
1
+ 3 + 2[3 + 2)m

1
+3+2
_
= 2
_
(3 +[)2 + (4 +[)1 + (2 + 2[)2 + (3 + 2[ +)2
+ (4 + 3[ +)1 + (4 + 2[2 +)2 + (5 + 2[3 +)1
+ (5 + 3[3 + 2)1
_
= 120,
so m

1
= 4. Finally,
(56 14)m
0
= 2

j=1
(j[)m
j
= 2

j=1
([)jm
j
= 2
_
2(4 + 2 2) + 6 2 + 2(4 + 2 2) + 2(4 + 2 2) + 6 2 + 6 2
_
= 168,
so m
0
= 4.
Taking into account the sizes of the orbits, we get also that
dim
k
V () = 12 1 + 6 2 + 6 2 + 6 4 + 4 = 64.
In the computations above, we made use of the symmetry given by the Weyl group.
This can be improved.
4.4 Lemma. Let
+
W
, P
_
V ()
_
and . Then

jZ
( +j[)m
+j
= 0.
76 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
Proof.
jZ
V ()
+j
is a module for S

(notation as in the proof of Proposition 3.4).


But S

= [S

, S

], since it is simple, hence the trace of the action of any of its elements
is 0. In particular,
0 = trace

jZ
V ()
+j
(t

) =

jZ
( +j[)m
+j
.
Now, Freudenthals formula can be changed slightly using the previous Lemma and
the fact that 2 =

+ .
4.5 Corollary. For any
+
W
and
W
:
([ + 2)m

j=1
( +j[)m
+j
+ ([)m

.
We may change j = 1 for j = 0 in the sum above, since ([)m

+ ([ )m

= 0
for any .
Now, if
+
W
, P
_
V ()
_

+
W
and J

(the stabilizer of , which


is generated by the
i
s with ([
i
) = 0 by 2.1), then for any and j Z,
m
+j
= m
+j()
.
Let I be any subset of 1, . . . , n and consider

I
=
_

iI
Z
i
_
(a root system in
iI
R
i
!)
J
I
, the subgroup of J generated by
i
, i I,
J

I
, the group generated by J
I
and id.
For any , let O
I,
= J

I
. Then, if
I
, =

() J
I
, so J

I
= J
I
.
However, if ,
I
, then =

n
i=1
r
i

i
and there is an index j , I with r
j
,= 0. For any
J
I
, () = r
j

j
+

i,=j
r
t
i

i
(the coecient of
j
does not change). In particular,
if

, then J
I

. Therefore, J

I
is the disjoint union of J
I
and J
I
.
4.6 Proposition. (Moody-Patera) Let
+
W
and P
_
V ()
_

+
W
. Consider
I =
_
i 1, . . . , n : ([
i
) = 0
_
and the orbits O
1
, . . . O
r
of the action of J

I
on .
Take representatives
i
O
i

+
for any i = 1, . . . , r. Then,
_
( +[ +) ( +[ +)
_
m

=
r

i=1
[O
i
[

j=1
( +j
i
[
i
)m
+j
i
.
Proof. Arrange the orbits so that
I
= O
1
O
s
and
I
= O
s+1
O
r
.
Hence O
i
= J
I

i
for i = 1, . . . , s, while O
i
= J
i

i
J
I

i
for i = s + 1, . . . , r. Then,
5. CHARACTERS. WEYLS FORMULAE 77
using the previous Lemma,

j=1
( +j[)m
+j
=
s

i=1
[J
I

i
[

j=1
( +j
i
[
i
)m
+j
i
+
r

i=s+1
[J
I

i
[

j=1
_
( +j
i
[
i
)m
+j
i
+ ( j
i
[
i
)m
j
i
_
=
s

i=1
[J
I

i
[

j=1
( +j
i
[
i
)m
+j
i
+
r

i=s+1
[J
I

i
[
_
2

j=1
( +j
i
[
i
)m
+j
i
+ ([
i
)m

_
.
But, for any i = s + 1, . . . , r, 2[J
I

i
[ = [O
i
[ and
r

i=s+1
[J
I

i
[([
i
) =

+
\
+
I
([) =

+
([) = 2([).
Now, substitute this in the formula in Corollary 4.5 to get the result.
4.7 Example. In the previous Example, for = 0, J
I
= J and there are two orbits:
the orbit of
1
(the short roots) and the orbit of
2
(the long roots), both of size 6.
Hence,
_
( +[ +) ([)
_
m
0
= (56 14)m
0
= 6
_
(
1
[
1
)m

1
+ (2
1
[
1
)m
2
1
+ (
2
[
2
)m

2
_
= 6
_
2 4 + 4 2 + 6 2
_
= 168,
so again we get m
0
= 4.
5. Characters. Weyls formulae
Consider the group algebra R
W
. To avoid confusion between the binary operation
(the addition) in
W
and the addition in R
W
, multiplicative notation will be used for

W
. Thus any
W
, when considered as an element of R
W
, will be denoted by the
formal symbol e

, and the binary operation (the addition) in


W
becomes the product
e

= e
+
in R
W
. Hence,
R
W
=
_

W
r

: r

R, r

= 0 for all but nitely many s


_
.
78 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
Since
W
is freely generated, as an abelian group, by the fundamental dominant
weights:
W
= Z
1
Z
n
, R
W
is isomorphic to the ring of Laurent polynomials
in n variables by means of:
R[X
1
1
, . . . , X
1
n
] R
W
p(X
1
, . . . , X
n
) p
_
e

1
, . . . , e

n
_
.
In particular, R
W
is an integral domain.
There appears a natural action of the Weyl group J on R
W
by automorphisms:
J Aut
_
R
W
_

_
e

= e
()
_
.
An element p R
W
is said to be symmetric if p = p for any J, and it is
said alternating if p = (1)

p for any J, where (1)

= det (= 1).
Consider the alternating map
/ : R
W
R
W
p

V
(1)

p.
Then,
(i) For any p R
W
, /(p) is alternating.
(ii) If p R
W
is alternating, /(p) = [J[p.
(iii) The alternating elements are precisely the linear combinations of the elements
/(e

), for strictly dominant (that is, [) > 0 for any


+
).
Proof. For any
W
, there is a J such that ()
+
W
(Properties 2.1),
and /() = (1)

/(e
()
). But if there is a simple root
i
such that [
i
) = 0,
then =
i
(), so /(e

) = (1)

i
/(e

i
()
) = /(e

) = 0. Now, item (ii) nishes


the proof.
5.1 Lemma. Let =
1
2

+ be the Weyl vector, and consider the element


q = e

+
_
e

1
_
= e

+
_
1 e

_
in R
W
. Then q = /(e

).
Proof. For any simple root ,

+

_
=
+
(Proposition 6.1). Hence

() = and

(q) = e

(1 e

+
\
_
1 e

_
= e

_
e

1
_

+
\
_
1 e

_
= q.
Thus, q is alternating.
5. CHARACTERS. WEYLS FORMULAE 79
But, by its own denition, q is a real linear combination of elements e

, with =

(where

is either 0 or 1). Hence


q =
1
[J[
/(q) =

+
W
strictly dominant
c

/(e

),
for some real scalars c

such that c

,= 0 only if is strictly dominant, and


=

as above. But then, for any such and i = 1, . . . , n,


[
i
) = 1 [
i
) 0,
because [
i
) 1, as is strictly dominant. Hence ( [) 0 for any
+
and
0 ( [ ) = ( [

) 0,
so = . We conclude that q = c/(e

) for some scalar c, but the denition of q shows


that
q = e

+ a linear combination of terms e

, with < ,
so c = 1 and q = /(e

).
Consider the euclidean vector space E = R
Q
Q, and the R
W
-module R
W

R
E.
Extend the inner product ( . [ . ) on E to a R
W
-bilinear map
_
R
W

R
E
_

_
R
W

R
E
_
R
W
,
and consider the R-linear maps dened by:
(GRADIENT) grad : R
W
R
W

R
E
e

,
(LAPLACIAN) : R
W
R
W
e

([)e

,
which satisfy, for any f, g R
W
:
_
grad(fg) = f grad(g) +g(grad(f),
(fg) = f(g) +g(f) + 2
_
grad(f)[ grad(g)
_
.
5.2 Denition. Let V be a nite dimensional module for L, the element

V
=

W
(dim
k
V

)e

of R
W
is called the character of V .
For simplicity, we will write

instead of
V ()
, for any
+
W
.
80 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
5.3 Theorem. (Weyls character formula) For any
+
W
,

/(e

) = /(e
+
).
In theory, this allows the computation of

as a closed quotient in R
W
. In practice,
Freudenthals multiplicity formula is more ecient.
Proof. Note (Corollary 4.5) that Freudenthals multiplicity formula is equivalent to
(5.2) ([ + 2)m

j=0
( +j[)m
+j
+ ([)m

for any
W
. Multiply by e

and sum on to get


([ + 2)

j=0
( +j[)m
+j
e

+ (

).
Now,

(e

1) =

+
(e

1)(e

1) = q
2
with = 1. Multiply by q
2
to obtain
([ + 2)

q
2
(

)q
2
=

j=1
( +j[)m
+j
(e
+
e

,=
(e

1)
=

,=
(e

1)e

W
_

j=0
( +j[)m
+j

j=0
( + +j[)m
++j
_
e

,=
(e

1)e

W
m

([)e

=
_

_
e

,=
(e

1)
_

W
m

_
=
_
grad(q
2
)[ grad(

)
_
= 2q
_
grad(q)[ grad(

)
_
= q
_
(

q)

(q) q(

)
_
.
That is,
([ + 2)

q = (

q)

(q).
But q =

V
(1)

e
()
by the previous Lemma, so
(q) =

V
_
()[()
_
(1)

e
()
= ([)q,
so
(5.3) ( +[ +)

q = (

q).
5. CHARACTERS. WEYLS FORMULAE 81
Now,

q is a linear combination of some e


+()
s, with P
_
V ()
_
and J, and

_
e
+()
_
=
_
+()[ +()
_
e
+()
=
_

1
() +[
1
() +
_
e
+()
.
Therefore, e
+()
is an eigenvector of with eigenvalue
_

1
()+[
1
()+
_
, which
equals ( +[ +) because of (5.3). This implies (Properties 2.1) that
1
() = , or
= () and, hence,

q is a linear combination of e
(+)
: J.
Since

is symmetric, and q is alternating,

q is alternating. Also, ( + ) is
strictly dominant if and only if = id. Hence

q is a scalar multiple of /(e


+
), and
its coecient of e
+
is 1. Hence,

q = /(e
+
), as required.
Weyls character formula was derived by Weyl in 1926 in a very dierent guise.
5.4 Corollary. (Weyls dimension formula) For any
+
W
,
dim
k
V () =

+
([ +)
([)
=

+
+[)
[)
.
Proof. Let R[[t]] be the ring of formal power series on the variable t, and for any
W
consider the homomorphism of real algebras given by:

: R
W
R[[t]]
e

exp
_
([)t
_
=

s=0
1
s!
_
([)t
_
s
.
For any ,
W
,

_
/(e

)
_
=

V
(1)

exp
_
(()[)t
_
=

V
(1)

exp
_
([
1
())t
_
=

_
/(e

)
_
.
The homomorphism

will be applied now to Weyls character formula. First,

_
/(e

)
_
=

_
/(e

)
_
=

(q)
=

_
e

+
_

(e

1)
_
= exp
_
([)t
_

+
_
exp
_
([)t
_
1
_
=

+
_
exp
_
1
2
([)t
_
exp
_

1
2
([)t
_
.
Hence,

q) =

+
_
exp
_
1
2
([)t
_
exp
_

1
2
([)t
_
,
82 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
while

_
/(e
+
)
_
=

+
_
exp
_
1
2
([ +)t
_
exp
_

1
2
([ +)t
_
.
With N = [
+
[,

+
_
exp
_
1
2
([)t
_
exp
_

1
2
([)t
_
=
_

+
([)
_
t
N
+ higher degree terms,
so if we look at the coecients of t
N
in

q) =

_
/(e
+
)
_
we obtain, since the
coecient of t
0
in

) is dim
k
V (), that
dim
k
V ()

+
([) =

+
([ +).
5.5 Example. If L is the simple Lie algebra of type G
2
,
+
= , , +, 2+, 3+
, 3 + 2 (see Example 4.3) . Take = n
1
+m
2
. Then Weyls dimension formula
gives
dim
k
V () =
(n + 1)3(m + 1)

n + 1 + 3(m + 1)

2(n + 1) + 3(m + 1)

3(n + 1) + 3(m + 1)

3(n + 1) + 6(m + 1)

1 3 4 5 6 9
=
1
120
(n + 1)(m+ 1)(n +m+ 2)(n + 2m+ 3)(n + 3m+ 4)(2n + 3m+ 5).
In particular, dim
k
V (
1
) = 7 and dim
k
V (
2
) = 14.
5.6 Remark. Weyls dimension formula is extremely easy if is a multiple of . Ac-
tually, if = m, then
dim
k
V () =

+
_
(m+ 1)[)
_
([)
= (m+ 1)
[
+
[
.
For instance, with =
1
+
2
for G
2
, dim
k
V () = 2
6
= 64 (compare with Example
4.3).
Two more formulae to compute multiplicities will be given. First, for any
W
consider the integer:
p() =

_
(r

+ Z
[
+
[
0
: =

+
r

.
Thus p(0) = 1 = p() for any . Also, if , , with ,= and ([) ,= 0,
then p( +) = 2, as + can be written in two ways as a Z
0
-linear combination of
positive roots: 1 +1 +0 ( +) and 0 +0 +1 ( +). Note, nally, that
p() = 0 if , Z
0
.
5.7 Theorem. (Kostants formula, 1959) For any
+
W
and
W
,
dim
k
V ()

V
(1)

p
_
( +) ( +)
_
.
5. CHARACTERS. WEYLS FORMULAE 83
Proof. Take the formal series

W
p()e

+
_
1 +e

+e
2
+
_
=

+
_
1 e

_
1
,
in the natural completion of R
W
(which is naturally isomorphic to the ring of formal
Laurent series R[[X
1
1
, . . . , X
1
n
]]). Thus,
_

W
p()e

__

+
_
1 e

_
_
= 1.
Let : R
W
R
W
be the automorphism given by (e

) = e

for any
W
. If this
is applied to Weyls character formula (recall that q = /(e

) = e

+
_
1 e

_
=
e

+
_
1 e

_
), we obtain
_

W
m

_
e

+
(1 e

) =

V
(1)

e
(+)
.
Multiply this by e

W
p()e

_
to get

W
m

=
_

V
(1)

e
(+)
__

W
p()e

_
=

W
(1)

p()e
+(+)
,
which implies that
m

V
(1)

p(

), with

such that +

( +) =
=

V
(1)

p
_
( +) ( +)
_
.
5.8 Corollary. For any 0 ,=
W
,
p() =

1,=V
(1)

p
_
( ())
_
.
Proof. Take = 0 in Kostants formula. Then V (0) = k and
0 = dim
k
V (0)

W
(1)

p
_
() ( +)
_
.
5.9 Theorem. (Racahs formula, 1962) For any
+
W
and P
_
V ()
_
, with
,= ,
m

1,=V
(1)

m
+()
.
(m

= dim
k
V ()

for any
W
).)
84 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
Note that, since is strictly dominant, () < for any 1 ,= J, hence +
() > and thus Racahs formula gives a recursive method starting with m

= 1.
Proof. Recall that + is strictly dominant, so if J satises (+) = +, then
( + [ + ) = ( + [ + ) and = (Properties 2.1). Now, by Kostants formula
and the previous Corollary,
m

V
(1)

p
_
( +) ( +)
_
=

1,=V
(1)

(1)

p
_
( +) ( +) ( ())
_
=

1,=V
(1)

V
(1)

p
_
( +)
_
( + ()) +
_
_
=

1,=V
(1)

m
+()
.
5.10 Example. Consider again the simple Lie algebra of type G
2
, and = =
1
+
2
.
For the rotations 1 ,= J one checks that (see Figure 4.1):
() = + 2, 6 + 2, 10 + 2, 9 + 4, 4 +,
while for the symmetries in J,
() = , , 4 +, 9 + 6, 10 + 5, 6 + 2.
Starting with m

= 1 we obtain,
m
2
1
= m
2
1
+
+m
2
1
+
= m

+m

= 2,
since both 2
1
+ and 2
1
+ are conjugated, under the action of J, to . In the
same spirit, one can compute:
m

2
= m

2
+
= m
2
1
= 2,
m

1
= m

1
+
+m

1
+
= m

2
+m
2
1
= 4,
m
0
= m

+m

m
+2
m
4+
= m

1
+m

2
m

= 4.
6. Tensor products decompositions
Given two dominant weights
t
,
tt

+
W
, Weyls Theorem on complete reducibility
shows that the tensor product V (
t
)
k
V (
tt
) is a direct sum of irreducible modules:
V (
t
)
k
V (
tt
)

=

+
W
n

V ().
Moreover, for any
W
,
_
V (
t
)
k
V (
tt
)
_

W
V (
t
)


k
V (
tt
)

,
6. TENSOR PRODUCTS DECOMPOSITIONS 85
which shows that
V (

)
k
V (

)
=

. Hence

+
W
n

.
The purpose of this section is to provide methods to compute the multiplicities n

.
6.1 Theorem. (Steinberg, 1961) For any
t
,
tt

+
W
,
n

,V
(1)

p
_
(
t
+) +(
tt
+) ( + 2)
_
.
Proof. From

+
W
n

, we get

/(e

)
_
=

+
W
n

/(e

)
_
,
which, by Weyls character formula, becomes
(6.4)
_

W
m
t

__

V
(1)

e
(

+)
_
=

+
W
n

W
(1)

e
(+)
_
.
The coecient of e
+
on the right hand side of (6.4) is n

, since in each orbit J(+)


there is a unique dominant weight, namely +.
On the other hand, by Kostants formula, the left hand side of (6.4) becomes:
_

V
(1)

p
_
(
t
+)
_
e

__

V
(1)

e
(

+)
_
=

,V
(1)

p
_
(
t
+)
_
e
+(

+)
.
Note that + (
tt
+ ) = + if and only if = (
tt
+ ) ( + 2), so the
coecient of e
+
on the left hand side of (6.4) is

,V
(1)

p
_
(
t
+) +(
tt
+) ( + 2)
_
,
as required.
6.2 Corollary. (Racah, 1962) For any ,
t
,
tt

+
W
and any
W
, the multi-
plicity of V () in V (
t
)
k
V (
tt
) is
n

V
(1)

m
t
+(

+)
.
(For any weight , m
t

denotes the multiplicity of in V (


t
).)
86 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
Proof.
n

,V
(1)

p
_
(
t
+) +(
tt
+) ( + 2)
_
(Steinberg)
=

V
(1)

V
(1)

p
_
(
t
+) ( + (
tt
+) +)
_
_
=

V
(1)

m
+(

+)
(Kostant).
To give a last formula to compute

some more notation is needed. First, by


Weyls character formula 5.3, for any
+
W
,

=
,(e
+
)
,(e

)
. Let us extend this, by
dening

for any
W
by means of this formula. For any weight
W
, recall
that J

denotes the stabilizer of in J: J

= J : () = . If this stabilizer
is trivial, then there is a unique J such that ()
+
W
(Properties 2.1). Consider
then
s() =
_
0 if J

,= 1
(1)

if J

= 1, and ()
+
W
.
Denote also by the unique dominant weight which is conjugate to . Let J
such that () = . If is strictly dominant, then /(e

) = (1)

/(e

) =
s()/(e

), otherwise there is an i = 1, . . . , n such that


i
() = () (Properties
2.1) so
1

i
J

and s() = 0; also /(e

) = /(
i
e

) = /(e

) = 0 and
/(e

) = 0 too. Hence /(e

) = s()/(e

) for any
W
. Therefore, for any
W
,
/(e
+
) = s( +)/(e
+
), and

= s( +)
+
.
6.3 Theorem. (Klymik, 1968) For any
t
,
tt

+
W
,

P
_
V (

)
_
m
t

+
.
Note that this can be written as
(6.5)

P
_
V (

)
_
m
t

s( +
tt
+)
+

+
.
By Properties 2.1, if
W
and s() ,= 0, then is strictly dominant, and hence

+
W
, so all the weights +
tt
+ that appear with nonzero coecient
on the right hand side of the last formula are dominant.
6. TENSOR PRODUCTS DECOMPOSITIONS 87
Proof. As in the proof of Steinbergs Theorem, with P
t
= P
_
V (
t
)
_
,

/(e

)
_
=

/(e

+
)
=
_

m
t

__

V
(1)

e
(

+)
_
=

V
(1)

m
t

e
()
_
e
(

+)
(as m
t

= m
t
()
J)
=

m
t

V
(1)

e
(+

+)
=

m
t

/
_
e
+

+
_
.
6.4 Corollary. Let ,
t
,
tt

+
W
. If V () is (isomorphic to) a submodule of V (
t
)
k
V (
tt
), then there exists P
_
V (
t
)
_
such that = +
tt
.
Proof. Because of (6.5), if V () is isomorphic to a submodule of V (
t
)
k
V (
tt
) there
is a P
_
V (
t
)
_
such that +
tt
+ = +. Take P
_
V (
t
)
_
and J such
that (+
tt
+) = + and has minimal length. It is enough to prove that l() = 0.
If l() = t 1, let =

t
be a reduced expression. Then (
t
)

by
Properties 2.1 and
0
_
+[(
t
)
_
=
_

1
( +)[
t
_
=
_
+
tt
+[
t
_
=
_
[
t
_
+
_

tt
+[
t
_

_
[
t
_
,
since + and
tt
+ are dominant. Hence 0 +
tt
+ [
t
) [
t
) and =
+
tt
+[
t
)
t
P
_
V (
t
)
_
. Therefore,
+ = ( +
tt
+) = (

t
)
_

t
( +
tt
+)
_
= (

t
)
_
+
tt
+ +
tt
+[
t
)
t
_
=

t
( +
tt
+),
a contradiction with the minimality of l(), as

t
=

t1
.
6.5 Example. As usual, let L be the simple Lie algebra of type G
2
. Let us decompose
V (
1
)
k
V (
2
) using Klymiks formula.
Recall that
1
= 2+,
2
= 3+2, so = 2
1

2
and = 3
1
+2
2
. Scaling
so that ([) = 2, one gets (
1
[) = 1, (
2
[) = 3, and (
1
[) = 0 = (
2
[).
Also, P
_
V (
1
)
_
= J
1
J0 = 0, , ( + ), (2 + ) (the short roots and
0). The multiplicity of any short root equals the multiplicity of
1
, which is 1.
Freudenthals formula gives
_
(
1
+[
1
+) ([)
_
m
0
= 2

j=1
(j[)m
j
= 2

+
short
([) = 12,
since m

1
= 1, so m

= 1 for any short , as all of them are conjugate. But (


1
+[
1
+
) ([) = (
1
[
1
+ 2) = (
1
[3
1
+ 2
2
) = (3
1
+ 2
2
[2 +) = 12. Thus, m
0
= 1.
88 CHAPTER 3. REPRESENTATIONS OF SEMISIMPLE LIE ALGEBRAS
Hence all the weights of V (
1
) have multiplicity 1, and Klymiks formula gives then

2
=

P(V (
1
))
s( +
2
+)
+
2
+
.
Let us compute the contribution to this sum of each P
_
V (
1
)
_
:
0 +
2
+ is strictly dominant, so s(0 +
2
+) = 1, and we obtain the summand
1

2
,
+
2
+ = 2
1
+ is strictly dominant, so s(+
2
+) = 1 and we get 1
2
1
,
+
2
+ =
1
+3
2
is not dominant, and

(
1
+3
2
) =
1
+3
2
+ =

1
+2
2
=
2
+ is strictly dominant, so s(+
2
+) = 1 and get (1)

2
,
+ +
2
+ = 3
2
is stabilized by

, so s( + +
2
+) = 0,
( +) +
2
+ =
1
+ is strictly dominant, so we get 1

1
,
2 + +
2
+ =
1
+
2
+ is strictly dominant, so we get 1

1
+
2
,
(2 +) +
2
+ =
2
is stabilized by

.
Therefore, Klymiks formula gives:
(6.6) V (
1
)
k
V (
2
)

= V (
1
+
2
) V (2
1
) V (
1
).
With some insight, we could have proceeded in a dierent way. First, the multiplicity
of the highest possible weight
t
+
tt
in V (
t
)
k
V (
tt
) is always 1, so V (
t
+
tt
) always
appears in V (
t
)
k
V (
tt
) with multiplicity 1.
In the example above, if P
_
V (
1
)
_
and +
2

+
W
, then +
2

1
+

2
, 2
1
,
1
,
2
. Hence,
V (
1
)
k
V (
2
)

= V (
1
+
2
) pV (2
1
) qV (
1
) rV (
2
),
and dim
k
V (
1
)
k
V (
2
) = 7 14 = 98, dim
k
V (
1
+
2
) = dim
k
V () = 2
6
= 64.
Weyls dimension formula gives
dim
k
V (2
1
) =

+
(2
1
+[)
([)
=
3 3 6 9 12 15
1 3 4 5 6 9
= 27.
The only possibility of 98 = 64 +p 27 +q 7 +r 14 is p = q = 1, r = 0, thus recovering
(6.6).
Appendix A
Simple real Lie algebras
Let L be a simple real Lie algebra. By Schurs Lemma, the centralizer algebra End
L
(L)
is a real division algebra, but for any , End
L
(L) and x, y L,

_
[x, y]
_
=
_
x, y] = [x, y] =
_
[x, y]
_
=
_
[x, y]
_
,
and, since L = [L, L], it follows that End
L
(L) is commutative. Hence End
L
(L) is
(isomorphic to) either R or C.
In the latter case, L is then just a complex simple Lie algebra, but considered as a
real Lie algebra.
In the rst case, End
L
(L) = R, so End
L
C(L
C
) = C, where L
C
= C
R
L = L iL.
Besides, L
C
is semisimple because its Killing form is the extension of the Killing form of
L, and hence it is nondegenerate. Moreover, if L
C
is the direct sum of two proper ideals
L
C
= L
1
L
2
, then C = End
L
C(L
C
) End
L
C(L
1
) End
L
C(L
2
), which has dimension at
least 2 over C, a contradiction. Hence L
C
is simple. In this case, L is said to be central
simple and a real form of L
C
. (More generally, a simple Lie algebra over a eld k is
said to be central simple, if its scalar extension

k
k
L is a simple Lie algebra over

k, an
algebraic closure of k.)
Consider the natural antilinear automorphism of L
C
= C
R
L = L iL given by
= id ( is the standard conjugation in C). That is,
: L
C
L
C
x +iy x iy.
Then L is the xed subalgebra by , which is called the conjugation associated to L.
Therefore, in order to get the real simple Lie algebras, it is enough to obtain the real
forms of the complex simple Lie algebras.
1. Real forms
1.1 Denition. Let L be a real semisimple Lie algebra.
L is said to be split if it contains a Cartan subalgebra H such that ad
h
is diago-
nalizable (over R) for any h H.
L is said to be compact if its Killing form is denite.
89
90 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
L is said to be a real form of a complex Lie algebra S if L
C
is isomorphic to S (as
complex Lie algebras).
1.2 Proposition. The Killing form of any compact Lie algebra is negative denite.
Proof. Let be the Killing form of the compact Lie algebra L with dim
R
L = n. For any
0 ,= x L, let
1
, . . . ,
n
C be the eigenvalues of ad
x
(possibly repeated). If for some
j = 1, . . . , n,
j
R0, then there exists a 0 ,= y L such that [x, y] =
j
y. Then the
subalgebra T = Rx +Ry is solvable and y [T, T]. By Lies Theorem (Chapter 2, 1.7)
ad
y
is nilpotent, so (y, y) = 0, a contradiction with being denite. Thus,
j
, R0
for any j = 1, . . . , n. Now, if
j
= +i with ,= 0, then

j
= i is an eigenvalue
of ad
x
too, and hence there are elements y, z L, not both 0, such that [x, y] = y +z
and [x, z] = y +z. Then
[x, [y, z]] = [[x, y], z] + [y, [x, z]] = 2[y, z].
The previous argument shows that either = 0 or [y, z] = 0. In the latter case T =
Rx +Ry +Rz is a solvable Lie algebra with 0 ,= y +z [T, T]. But this gives again
a contradiction.
Therefore,
1
, . . . ,
n
Ri and (x, x) =

n
j=1

2
j
0.
1.3 Theorem. Any complex semisimple Lie algebra contains both a split and a compact
real forms.
Proof. Let S be a complex semisimple Lie algebra and let h
j
, x
j
, y
j
: j = 1, . . . , n be
a set of canonical generators of S relative to a Cartan subalgebra and an election of a
simple system of roots, as in Chapter 2, 7. For any
+
choose I

= (j
1
, . . . , j
m
)
(ht() = m) such that 0 ,= ad
x
j
m
ad
x
j
2
(x
j
1
) S

and take x

= ad
x
j
m
ad
x
j
2
(x
j
1
)
and y

= ad
y
j
m
ad
y
j
2
(y
j
1
). Then h
1
, . . . , h
n
, x

, y

:
+
is a basis of S and its
structure constants are rational numbers that depend on the Dynkin diagram. Therefore,
L =
n

j=1
Rh
j
+

+
_
Rx

+Ry

_
is a split real form of S = L iL. Its associated conjugation : S S is determined
by (x
j
) = x
j
and (y
j
) = y
j
for any j = 1, . . . , n.
But there is a unique automorphism Aut
C
S such that (x
j
) = y
j
and (y
j
) =
x
j
for any j = 1, . . . , n, because is another simple system of roots. Note that
(h
j
) = h
j
for any j and
2
= id. Then
(x
j
) = (x
j
) and (y
j
) = (y
j
)
for any j, so = , or = . Consider the antilinear involutive automorphism
= = of S. Let us check that is the conjugation associated to a compact real
form of S. Denote by the Killing form of S.
First, by induction on ht(), let us prove that (x

, (x

)) is a negative rational
number:
1. REAL FORMS 91
If ht() = 1, then =
j
for some j, x

= x
j
and (x
j
, (x
j
)) = (x
j
, y
j
) =

2
(
j
[
j
)
< 0, since h

=
2
([)
t

= [x

, y

] = (x

, y

)t

(see Chapter 2, 5.) for


any , and the bilinear form ( . [ . ) is positive denite on R.
If ht() = m + 1, then x

= q[x
j
, x

] for some j = 1, . . . , n and q Q, with


ht() = m, then

_
x

, (x

)
_
= q
2

_
[x
j
, x

], [(x
j
), (x

)]
_
= q
2

_
x

, [x
j
, [y
j
, (x

)]]
_
Q
>0

_
x

, (x

)
_
(by Chapter 2, Lemma 7.1)
Q
<0
(by the induction hypothesis).
Now take K the xed subalgebra S

of . Hence,
K =
n

j=1
R(ih
j
) +

+
_
R
_
x

+(x

)
_
+Ri
_
x

(x

)
_
_
,
which is a real form of S = K iK. Note that
(ih
r
, ih
s
) = (h
r
, h
s
), and the restriction of to

n
j=1
Rh
j
is positive denite,

_
x

+(x

), x

+(x

)
_
= 2
_
x

, (x

)
_
< 0, by the previous argument,

_
i(x

(x

)), i(x

(x

))
_
= 2
_
x

, (x

)
_
< 0, and

_
x

+(x

), i(x

(x

))
_
= i
_
x

+(x

), x

(x

)
_
= 0.
Hence the Killing form of K, which is obtained by restriction of , is negative denite,
and hence K is compact.
1.4 Remark. The signature of the Killing form of the split form L above is rank L,
while for the compact form K is dimK.
1.5 Denition. Let S be a complex semisimple Lie algebra and let
1
,
2
be the con-
jugations associated to two real forms. Then:

1
and
2
are said to be equivalent if the corresponding real forms S

1
and S

2
are isomorphic.

1
and
2
are said to be compatible if they commute:
1

2
=
2

1
.
Given a complex semisimple Lie algebra and a conjugation , this is said to be split
or compact if so is its associated real form S

.
Note that the split and compact conjugations considered in the proof of Theorem
1.3 are compatible.
1.6 Proposition. Let S be a complex semisimple Lie algebra and let
1
,
2
be the
conjugations associated to two real forms. Then:
(i)
1
and
2
are equivalent if and only if there is an automorphism Aut
C
S such
that
2
=
1

1
.
92 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
(ii)
1
and
2
are compatible if and only if =
1

2
(which is an automorphism
of S) is involutive (
2
= id). In this case leaves invariant both real forms
([
S

i Aut
R
(S

i
), i = 1, 2).
(iii) If
1
and
2
are compatible and compact, then
1
=
2
.
Proof. For (i), if : S

1
S

2
is an isomorphism, then induces an automorphism
: S = S

1
iS

1
S = S

2
iS

2
((x + iy) = (x) + i(y) for any x, y S

1
).
Moreover, it is clear that
1
=
2
as this holds trivially for the elements in S

1
.
Conversely, if
2
=
1

1
for some Aut
C
S, then
_
S

1
_
S

2
and the restriction
[
S

1 gives an isomorphism S

1
S

2
.
For (ii), it is clear that if
1
and
2
are compatible, then =
1

2
is C-linear (as
a composition of two antilinear maps) and involutive (
2
=
1

2
=
2
1

2
2
= id).
Conversely, if
2
= id, then
1

2
= id =
2
1

2
2
, so
1

2
=
2

1
(as
1
and
2
are
invertible).
Finally, assume that
1
and
2
are compatible and compact, and let =
1

2
,
which is an involutive automorphism which commutes with both
1
and
2
. Then
S

1
= S

1
+
S

, where S

= x S

1
: (x) = x. Let be the Killing form of S,
which restricts to the Killing forms of S

i
(i = 1, 2). For any x S

, 0 (x, x) =
(x, (x)) = (x,
2
(x)), as (x) =
1

2
(x) =
2

1
(x) =
2
(x). But the map
h

2
: S S C
(u, v) (u,
2
(v))
is hermitian, since
_

2
(u),
2
(v)
_
= (u, v) for any u, v, because
2
is an antilinear
automorphism, and it is also positive denite since the restriction of h

2
to S

2
S

2
equals [
S

2S

2 , which is positive denite, since S

2
is compact. Therefore, for any
x S

, 0 (x, x) = h

2
(x, x) 0, so (x, x) = 0, and x = 0, since S

1
is compact.
Hence S

= 0 and id = [
S

1 , so = id as S = S

1
iS

1
and
1
=
2
.
1.7 Theorem. Let S be a complex semisimple Lie algebra, and let and be two
conjugations, with being compact. Then there is an automorphism Aut
C
S such
that and
1
(which is compact too) are compatible. Moreover, can be taken of
the form exp(i ad
u
) with u K = S

.
Proof. Consider the positive denite hermitian form
h

: S S C
(x, y)
_
x, (y)
_
and the automorphism = Aut
C
S. For any x, y S,
h

_
(x), y
_
=
_
(x), (y)
_
=
_
x,
1
(y)
_
=
_
x, (y)
_
= h

_
x, (y)
_
.
Thus, is selfadjoint relative to h

and, hence, there is an orthonormal basis x


1
, . . . , x
N

of S over C, relative to h

, formed by eigenvectors for . The corresponding eigenvalues


are all real and nonzero. Identify endomorphisms with matrices through this basis to
get the diagonal matrices:
= diag(
1
, . . . ,
N
),
2
= diag([
1
[
2
, . . . , [
N
[
2
) = exp
_
diag
_
2 log[
1
[, . . . , 2 log[
N
[
_
_
.
1. REAL FORMS 93
For any r, s = 1, . . . , N, [x
r
, x
s
] =

N
j=1
c
j
rs
x
j
for suitable structure constants. With

j
= [
j
[
2
=
2
j
for any j = 1, . . . , N, and since
2
is an automorphism, we get
r

s
c
j
rs
=

j
c
j
rs
for any r, s, j = 1, . . . , N, and hence (either c
j
rs
= 0 or
r

s
=
j
) for any t R,

t
r

t
s
c
j
rs
=
t
j
c
j
rs
, which shows that, for any t R,

t
= diag(
t
1
, . . . ,
t
N
) = exp
_
diag
_
2t log[
1
[, . . . , 2t log[
N
[
_
_
is an automorphism of S.
On the other hand, = =
1
, so
1
=
2
=
2
=
1
. This shows that
diag(
1
, . . . ,
N
) = diag(
1
1
, . . . ,
1
N
) and, as before, this shows that
t
=
t

for any t R. Let


t
=
t

t
. We will look for a value of t that makes
t
=
t
.
But,

t
=
t

t
=
2t
=
2t
,

t
=
t

t
=
2t
=
2t

1
=
1

2t
.
(
1
and
2t
commute as they both are diagonal.)
Hence
t
=
t
if and only if
2
=
4t
, if and only if t =
1
4
. Thus we take
= 1
4
= diag
_

1
4
1
, . . . ,
1
4
N
_
= exp
_
diag
_
1
2
log[
1
[, . . . ,
1
2
log[
N
[
_
_
= exp d,
with d = diag
_
1
2
log[
1
[, . . . ,
1
2
log[
N
[
_
. But
t
Aut
C
S for any t R, so exp td
Aut
C
S for any t R and, by dierentiating at t = 0, this shows that d is a derivation
of S so, by Chapter 2, Consequences 2.2, there is a z S such that d = ad
z
. Note that
d = ad
z
is selfadjoint ((ad
z
)

= ad
z
) relative to the hermitian form h

. But S = KiK
and for any u K and x, y S
h

_
[u, x], y
_
=
_
[u, x], (y)
_
=
_
x, [u, (y)]
_
=
_
x, ([u, y])
_
(since (u) = u)
= h

_
x, [u, y]
_
,
so ad
u
is skew relative to h

. Therefore, ad
z
is selfadjoint if and only if z iK.
1.8 Remark. Under the conditions of the proof above, for any Aut
R
S such that
= and = , one has = and hence (working with the real basis
x
1
, ix
1
, . . . , x
N
, ix
N
) one checks that
t
=
t
for any t R so, in particular, =
. That is, the automorphism commutes with any real automorphism commuting
with and .
1.9 Corollary. Let S be a complex semisimple Lie algebra and let , be two compact
conjugations. Then and are equivalent. That is, up to isomorphism, S has a unique
compact form.
Proof. By Theorem 1.7, there is an automorphism such that and
1
are com-
patible (and compact!). By Proposition 1.6, =
1
.
94 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
1.10 Theorem. Let S be a complex semisimple Lie algebra, and involutive automor-
phism of S and a compact conjugation. Then there is an automorphism Aut
C
S
such that commutes with
1
. Moreover, can be taken of the form exp(i ad
u
) with
u K = S

. In particular, there is a compact form, namely (K), which is invariant


under .
Proof. First note that ()
2
is an automorphism of S and for any x, y S,
h

_
()
2
(x), y
_
=
_
()
2
(x), (y)
_
=
_
x, ()
2
(y)
_
=
_
x, ()
2
(y)
_
(()
1
= )
= h

(x, ()
2
(y)
_
,
so ()
2
is selfadjoint. Besides,
h

_
()
2
(x), x
_
=
_
()
2
(x), (x)
_
=
_
()(x), (x)
_
=
_
(x), (x)
_
( Aut
C
S and
2
= id)
= h

_
(x), (x)
_
0,
so ()
2
is selfadjoint and positive denite. Hence there is an orthonormal basis of S
in which the matrix of ()
2
is diag(
1
, . . . ,
N
) with
j
> 0 for any j = 1, . . . , N.
Identifying again endomorphisms with their coordinate matrices in this basis, consider
the automorphism
t
= diag(
t
1
, . . . ,
t
N
) for any t R.
Since ()
2
= ()
2
, it follows that
t
=
t
and, as in the proof of Theorem
1.3 take
t
=
t

t
. Then,

t
=
t

t
=
2t
,

t
=
t

t
=
2t
()
1
=
2t
,
where it has been used that, since ()
2
commutes with , so does
t
for any t. Hence

t
=
t
if and only if t =
1
4
.
The rest follows as in the proof of Theorem 1.3.
Now, a map can be dened for any complex semisimple Lie algebra S:
:
_
Isomorphism classes of
real forms of S
_

_
Conjugation classes in Aut
C
S
of involutive automorphisms
_
[] []
where [ . ] denotes the corresponding conjugation class and is a compact conjugation
that commutes with (see 1.7). Note that we are identifying any real form with the
conjugation class in Aut
C
S of the corresponding conjugation (Proposition 1.6).
1.11 Theorem. The map above is well dened and bijective.
1. REAL FORMS 95
Proof. If is a conjugation and
1
and
2
are compact conjugations commuting with
, then there is a Aut
C
S such that
2
=
1

1
(Corollary 1.9) and commutes
with any real automorphism commuting with
1
and
2
(Remark 1.8). Hence
2
=

1
= (
1
)
1
. Hence [
2
] = [
1
] and, therefore, the image of [] does not
depend on the compact conjugation chosen.
Now, if
1
,
2
are equivalent conjugations and Aut
C
S satises
2
=
1

1
,
if
1
is a compact conjugation commuting with
1
, then
2
=
1

1
is a compact
conjugation commuting with
2
, and
2

2
=
1

1
=
1

1
. Hence, is
well dened.
Let Aut
C
S be an involutive automorphism, and let be a compact conjugation
commuting with (Theorem 1.10). Then = is a conjugation commuting with
and
_
[]
_
= [] = [
2
] = [].
Finally, to check that is one-to-one, let
1
,
2
be two conjugations and let
1
,
2
be two compact conjugations with
i

i
=
i

i
, i = 1, 2. Write
i
=
i

i
. Assume that
there is a Aut
C
S such that
2
=
1

1
. Is [
1
] = [
2
]?
By Corollary 1.9, there exists Aut
C
S such that
2
=
1

1
. Thus, [
1
] =
[
1

1
] and
_
[
1
]
_
= [
1

1
] = [
1

2
]. Hence we may assume that

1
=
2
= , so
i
=
i
, i = 1, 2.
Now, by Theorem 1.7 and Remark 1.8, there is an automorphism Aut
C
S such
that
1
and
1
are compatible and commutes with
1
, since
1
commutes
with , and also
1
=
1

2
commutes with
1
. But two compatible compact
conjugations coincide (Proposition 1.6), so
1
=
1
. Then,

2
=
2
=
1

=
1
(
1
)
1
=
1

1
=
1
()
1
= ()
1
()
1
.
Hence [
1
] = [
2
].
1.12 Remark.
(i) The proof of Theorem 1.3 shows that ([split form]) = [] ((x
j
) = y
j
, (y
j
) =
x
j
for any j). Trivially, ([compact form]) = [id].
(ii) Let Aut
C
S be an involutive automorphism, and let be a compact conjugation
commuting with . Take = . Then S

= K = K

0
K
1
, where K
0
= x
K : (x) = x and K
1
= x K : (x) = x. Then the real form corresponding
to is S

= L = K
0
iK
1
, and its Killing form is

L
=
S
[
L
= [
K
0
[
iK
1

= [
K
0

_
[
K
1
_
.
Since [
K
0
and [
K
1
are negative denite, the signature of
L
is dim
R
K
1

dim
R
K
0
= dim
C
S
1
dim
C
S
0
, where S
0
= x S : (x) = x and S
1
=
x S : (x) = x.
The decomposition L = K
0
iK
1
is called a Cartan decomposition of L.
96 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
(iii) To determine the real simple Lie algebras it is enough then to classify the involutive
automorphisms of the simple complex Lie algebras, up to conjugation. This will
be done over arbitrary algebraically closed elds of characteristic 0 by a process
based on the paper by A.W. Knapp: A quick proof of the classication of simple
real Lie algebras, Proc. Amer. Math. Soc. 124 (1996), no. 10, 32573259.
2. Involutive automorphisms
Let k be an algebraically closed eld of characteristic 0, and let L be a semisimple Lie
algebra over k, H a xed Cartan subalgebra of L, the corresponding root system and
=
1
, . . . ,
n
a system of simple roots. Let x
1
, . . . , x
n
, y
1
, . . . , y
n
be the canonical
generators that are being used throughout.
(i) For any subset J 1, . . . , n, there is a unique involutive automorphism
J
of L
such that
_

J
(x
i
) = x
i
,
J
(y
i
) = y
i
, if i , J,

J
(x
i
) = x
i
,
J
(y
i
) = y
i
, if i J.
We will say that
J
corresponds to the Dynkin diagram of (, ), where the nodes
corresponding to the roots
i
, i J, are shaded.
(ii) Also, if is an involutive automorphism of the Dynkin diagram of (, ), that
is, a bijection among the nodes of the diagram that respects the Cartan integers,
and if J is a subset of 1, . . . , n consisting of xed nodes by , then there is a
unique involutive automorphism
,J
of L given by,
_

,J
(x
i
) = x
(i)
,
,J
(y
i
) = y
(i)
, if i , J,

,J
(x
i
) = x
i
,
,J
(y
i
) = y
i
, if i J.
We will say that
,J
corresponds to the Dynkin diagram of (, ) with the nodes
in J shaded and where is indicated by arrows, like the following examples:


...................................
...................................


These diagrams, where some nodes are shaded and a possible involutive diagram au-
tomorphism is specied by arrows, are called Vogan diagrams (see A.W. Knapp: Lie
groups beyond an Introduction, Birkhauser, Boston 1996).
2.1 Theorem. Let k be an algebraically closed eld of characteristic 0. Then, up to con-
jugation, the involutive automorphisms of the simple Lie algebras are the automorphisms
that correspond to the Vogan diagrams that appear in Tables A.1, A.2.
In these tables, one has to note that for the orthogonal Lie algebras of small di-
mension over an algebraically closed eld of characteristic 0, one has the isomorphisms
so
3

= A
1
, so
4

= A
1
A
1
and so
6

= A
3
. Also, Z denotes a one-dimensional Lie al-
gebra, and so
r,s
(R) denotes the orthogonal Lie algebra of a nondegenerate quadratic
2. INVOLUTIVE AUTOMORPHISMS 97
form of dimension r + s and signature r s. Besides, so

2n
(R) denotes the Lie al-
gebra of the skew matrices relative to a nondegenerate antihermitian form on a vec-
tor space over the quaternions: so

2n
(R) = x Mat
n
(H) : x
t
h + h x = 0, where
h = diag(i, . . . , i). In the same vein, sp
n
(H) = x Mat
n
(H) : x
t
+ x = 0, while
sp
r,s
(H) = x Mat
r+s
(H) : x
t
h +h x = 0, where h = diag(1, . . . , 1, 1, . . . , 1) (r 1s
and s 1s). Finally, an expression like E
8,24
denotes a real form of E
8
such that the
signature of its Killing form is 24.
Proof. Let L be a simple Lie algebra over k and let Aut L be an involutive automor-
phism. Then L = S T, with S = x L : (x) = x and T = x L : (x) = x.
The subspaces S and T are orthogonal relative to the Killing form (since the Killing
form is invariant under ).
(i) There exists a Cartan subalgebra H of L which contains a Cartan subalgebra of S
and is invariant under :
In fact, the adjoint representation ad : S gl(L) has a nondegenerate trace form, so
S = Z(S) [S, S] and [S, S] is semisimple (Chapter 2, 2.2). Besides, for any x Z(S),
x = x
s
+ x
n
with x
s
, x
n
N
L
(T) C
L
(S) = S C
L
(S) = Z(S) (as the normalizer
N
L
(T) is invariant under and N
L
(T) T is an ideal of L and hence trivial). Besides,
(x
n
, S) = 0, so x
n
= 0. Hence Z(S) is a toral subalgebra, and there is a Cartan
subalgebra H
S
of S with H
S
= Z(S)
_
H
S
[S, S]
_
. Then H
S
is toral on L. Let
H = C
L
(H
S
) = H
S
H
T
, where H
T
= C
L
(H)T. Then [H, H] = [H
T
, H
T
] S. Hence
[[H, H], H] = 0, so H is a nilpotent subalgebra. Thus, [H, H] acts both nilpotently and
semisimply on L. Therefore, [H, H] = 0 and H is a Cartan subalgebra of L, since for
any x H
T
, (x
n
, H) = 0 and hence x
n
= 0.
(ii) Fix one such Cartan subalgebra H and let be the associated set of roots. Then
induces a linear map

: H

, = [
H
. Since is an automorphism,
(L

) = L

for any , so

= . Besides, for any and any h H,
(h) = ((h)) = (t

, (h)) = ((t

), h), so (t

) = t

for any . This shows
that

Qt

is invariant under .
(iii) Consider the subsets
S
= : L

S and
T
= : L

T. Then

S

T
= : = :
Actually, [H
T
, S] T and [H
T
, T] S, so for any
S

T
, (H
T
) = 0 and
= . Conversely, if (H
T
) = 0, then L

= (L

S) (L

T) and, since dimL

= 1,
either L

S or L

T.
(iv) The rational vector space

E =

Qt

is invariant under and [

E
is positive
denite (taking values on Q). Hence

E =

E
S


E
T
, where

E
S
=

ES and

E
T
=

ET.
Also, E =

Q = E
S
E
T
, where E
S
= E : (H
T
) = 0 and
E
T
= E : (H
S
) = 0, with E
S
and E
T
orthogonal relative to the positive denite
symmetric bilinear form ( . [ . ) induced by . Moreover, E
T
= :
In fact, if and (H
S
) = 0, then for any x = x
S
+x
T
L

(x
S
S, x
T
T),
and any h H
S
, [h, x
S
+ x
T
] = (h)(x
S
+ x
T
) = 0. Hence x
S
C
S
(H
S
) = H
S
and
[H, x
S
] = 0. Now, for any h H
T
, (h)(x
S
+ x
T
) = [h, x
S
+ x
T
] = [h, x
T
] S. Hence
x
T
= 0 = x
S
, a contradiction.
(v) There is a system of simple roots such that

= :
For any , =
S
+
T
, with
S
E
S
,
T
E
T
and
S
,= 0 because of (iv).
Choose E
S
such that ([) = ([
S
) ,= 0 for any . Then ([) = ( [

) =
98 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
Type Vogan diagram Fixed subalgebra Real form (k = C)
A
n
A
n
( = id) su
n+1
(R)

p
A
p1
A
np
Z su
p,n+1p
(R)
(1 p
_
n
2

) (A
0
= 0)


so
2r+1
sl
n+1
(R)
(n = 2r > 1)


so
2r
sl
n+1
(R)
(n = 2r 1 > 1)


sp
2r
sl
r
(H)
(n = 2r 1 > 1)
B
n
> B
n
( = id) so
2n+1
(R)

p
>
so
2n+1p
so
p
so
2n+1p,p
(R)
(1 p n)
_
so
1
= 0, so
2
= Z
_
C
n
< C
n
( = id) sp
n
(H)

p
<
sp
2p
sp
2(np)
sp
np,p
(H)
(1 p
n
2
|)
< A
n1
Z sp
2n
(R)
D
n

...................................
...................................
D
n
( = id) so
2n
(R)

...................................
...................................
p
so
2(np)
so
2p
so
2(np),2p
(R)
(1 p
n
2
|)

...................................
...................................
A
n1
Z so

2n
(R)
(n > 4)

...................................
...................................
B
n1
so
2n1,1
(R)

...................................
...................................
p
so
2n2p1
so
2p+1
so
2n2p1,2p+1
(R)
(1 p
n1
2
|)
Table A.1: Involutive automorphisms: classical cases
2. INVOLUTIVE AUTOMORPHISMS 99
Type Vogan diagram Fixed subalgebra Real form (k = C)
E
6

E
6
E
6,78

D
5
Z E
6,14

A
5
A
1
E
6,2


F
4
E
6,26


C
4
E
6,6
E
7

E
7
E
7,133

E
6
Z E
7,25

D
6
A
1
E
7,5

A
7
E
7,7
E
8

E
8
E
8,248

E
7
A
1
E
8,24

D
8
E
8,8
F
4
> F
4
F
4,52
> B
4
F
4,20
> C
3
A
1
F
4,4
G
2
< G
2
G
2,14
< A
1
A
1
G
2,2
Table A.2: Involutive automorphisms: exceptional cases
100 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
( [) for any , so that in the total order on given by ,
+
=
+
and
+
is simple if and only if so is .
(vi) Let be a system of simple roots invariant under , hence
=
1
, . . . ,
s
,
s+1
, . . . ,
s+2r
,
with
i
=
i
, for i = 1, . . . , s, and
s+2i1
=
s+2i
for i = 1, . . . , r. Let = m
1

1
+ +
m
s+2r

s+2r
be a root with = and assume that s 1. Then
S
(respectively

T
) if and only if

T
m
i
is even (respectively odd):
To prove this, it can be assumed that
+
. We will proceed by induction on ht().
If ht() = 1, then = , so there is an index i = 1, . . . , s such that =
i
and the result
is trivial. Hence assume that ht() = n > 1 and that = . If there is an i = 1, . . . , s
such that ([
i
) > 0, then = +
i
, for some
+
with =

. Besides L

=
[L

, L

i
] and the induction hypothesis applies. Otherwise, there is an index j > s such
that ([
j
) > 0, so ([
j
) = ( [
j
) = ([
j
) > 0 and (
j
[
j
) = ([
j
)(
j
[
j
) > 0,
since (
j
[
j
) 0. Note that, since s 1,
j
and
j
are not connected in the Dynkin
diagram, since

induces an automorphism of the diagram (the only possibility for


j
and
j
to be connected would be in a diagram A
2r
, but with s = 0), hence
j
+
j
, ,
so if L

= k(x
S
+x
T
), then L

k
= k(x
S
x
T
) and 0 = [x
S
+x
T
, x
S
x
T
] = 2[x
S
, x
T
].
Therefore, = +
j
+
j
, with , +
j
, L

= [L

j
, [L

j
, L

]] and =

.
Hence L

= ad
x
S
x
T
ad
x
S
+x
T
(L

) =
_
ad
2
x
S
ad
2
x
T
_
(L

). Thus, L

is contained in S
(respectively T) if and only if so is L

.
Once we have such a system of simple roots, it is clear that canonical generators of
L can be chosen so that becomes the automorphism associated to a Vogan diagram
(if

(
i
) =
j
with i ,= j, then it is enough to take x
j
= (x
i
) and y
j
= (y
i
)). Let
us check that it is possible to choose such a system so that the corresponding Vogan
diagram is one of the diagrams that appear in Tables A.1, A.2, where there is at most
a node shaded and this node has some restrictions.
(vii) Let = E
S
: ([) Z and ([) 2Z + 1
T
. Then, if
s 1, ,= :
Note that with as above, E
S
=

s
i=1
Q
i
+

r
j=1
Q
_

s+2j1
+
s+2j
_
, while
E
T
=

r
j=1
Q
_

s+2j1

s+2j
_
. Let
i

s+2r
i=1
be the dual basis of . Then
1
, . . . ,
s
are
orthogonal to
s+2j1
and
s+2j
for any j = 1, . . . , r, so
1
, . . . ,
s
E

T
= E
S
. Also, the
invariance of ( . [ . ) under the automorphism induced by shows that
s+2j
=
s+2j1
for any j = 1, . . . , r. Let =

i:
i

i
, which satises that (
i
[) = 1 for any i
with
i

T
and (
j
[) = 0 otherwise. Hence by (vi) ([) 2Z + 1 for any
T
.
(viii) Note that E : ([) Z = Z
1
+ + Z
s+2r
, which is a
discrete subset of E. Let 0 ,= of minimal norm. Then there exists a system of
simple roots
t
such that
t
=

t
and with ([) 0 for any
t
:
Let E
S
as in (v), take a positive and large enough r Q such that, for any
, ([ + r) is > 0 if and only if, either ([) > 0 or ([) = 0 and ([) > 0.
Then consider the total order in given by
t
= + r (
t
=

t
). The associated
system of simple roots
t
satises the required conditions.
(ix) Take and the system of simple roots
t
in (viii). Then

t
=
t
1
, . . . ,
t
s
,
t
s

+1
, . . . ,
t
s

+2r
,
2. INVOLUTIVE AUTOMORPHISMS 101
with
t
i
=
t
i
, i = 1, . . . , s
t
and
t
s

+2j1
=
t
s

+2j
, j = 1, . . . , r
t
. Let
t
i

+2r

i=1
be the
dual basis to
t
. Since ([) 0 for any
+
( is said dominant then), and = ,
=
s

i=1
m
i

t
i
+
r

j=1
m
s

+j
_

t
s

+2j1
+
t
s

+2j
_
,
with m
1
, . . . , m
s

+r
Z
0
.
Note that if 0 ,= h
1
, h
2

+ Qt

with (h
i
) 0 for any
+
and i = 1, 2,
then (h
1
, h
2
) = trace
_
ad
h
1
ad
h
2
_
= 2

+ (h
1
)(h
2
) > 0 (use Exercise 6.15 in
Chapter 2). As a consequence, the inner product of any two nonzero dominant elements
of E is > 0.
Hence if some m
i
> 0, i = 1, . . . , s
t
, then
t
i
is dominant, so (
t
i
[
t
i
) 0 and this
is 0 if and only if =
t
i
. Now, 2
t
i
and (2
t
i
[2
t
i
) = ([)4(
t
i
[
t
i
)
([). By the minimality of , we conclude that (
t
i
[
t
i
) = 0 and =
t
i
. Therefore

t

T

t
i
.
On the other hand, if m
i
= 0, for any i = 1, . . . , s
t
, then ([
t
i
) = 0 (even!), so

t

T
= .
Therefore there is at most one shaded node in the associated Vogan diagram. More
precisely, either
t

T
= , or
t

T
=
t
i
and =
t
i
for some i = 1, . . . , s
t
. In
this latter case, for any i ,= j = 1, . . . , s
t
, (
j
[
t
j
) 0 (otherwise 2
t
j
with
_
2
t
j
[ 2
t
j
_
< ([)). Also, if for some j = 1, . . . , r
t
,
_

1
2
(
t
s

+2j1
+
t
s

+2j
)

t
s

+2j1
+
t
s

+2j
_
> 0,
we would have
_
(
t
s

+2j1
+
t
s

+2j
)

(
t
s

+2j1
+
t
s

+2j
)
_
< ([),
a contradiction with the minimality of , since (
t
s

+2j1
+
t
s

+2j
) , because for
any
T
,
_

t
s

+2j1
[
_
=
_

t
s

+2j1
[
_
=
_

t
s

+2j
[
_
.
Therefore, if
t

T
=
t
i
for some i = 1, . . . , r, then
=
t
i
,
(
t
j
[
t
j
) 0 for any i ,= j = 1, . . . , s,
_

1
2
(
t
s

+2j1
+
t
s

+2j
)[
t
s

+2j1
+
t
s

+2j
_
0, for any j = 1, . . . , r.
(2.1)
(The last condition in (2.1) does not appear in Knapps article.)
(x) Looking at Tables A.1, A.2, what remains to be proved is to check that for Vogan
diagrams associated to the Lie algebras of type C
n
, D
n
, E
6
, E
7
, E
8
, F
4
or G
2
, in case
there is a shaded node, this node satises the requirements in the Tables A.1, A.2. This
can be deduced easily case by case from (2.1):
For C
n
, order the roots as follows,

1

2

3

n1

n
<
Here
i
=
i

i+1
, i = 1, . . . , n 1 and
n
= 2
n
where, up to a nonzero
scalar, (
i
[
j
) =
ij
for any i, j. Hence
t
i
=
1
+ +
i
for i = 1, . . . , n 1 and

t
n
=
1
2
(
1
+ +
n
).
102 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
For any i = 1, . . . , n 1,
_

t
i

t
n
[
t
n
_
=
1
2
_
i (n i)
_
=
1
2
(2i n),
so (2.1) is satised if and only if i
_
n
2
_
.
For D
n
,

1

2

3

n2

n1

n
...................................
...................................
Here either

is the identity, or
n1
=
n
(
i
=
i
for i n 1). Also,
i
=

i+1
for i = 1, . . . , n1 and
n
=
n1
+
n
where, up to a scalar, (
i
[
j
) =
ij
.
Hence
t
i
=
1
+ +
i
, for i = 1, . . . , n 2,
t
n1
=
1
2
(
1
+ +
n1

n
) and

t
n
=
1
2
(
1
+ +
n1
+
n
).
For any i = 1, . . . , n 2,
_

t
i

t
n
[
t
n
_
=
1
4
(2i n),
_

t
i

1
2
(
t
n1
+
t
n
)

t
n1
+
t
n
_
=
1
4
_
2i (n 1)
_
,
so if

= id, then (2.1) is satised if i


_
n
2
_
, while if

,= id, (2.1) is satised if


i
_
n1
2
_
.
For E
8
, take the simple roots as follows:

1

3

4

5

6

7

8

2
Here
8
=
1
, . . . ,
8
with

1
=
1
2
(
1

2

7
+
8
),

2
=
1
+
2
,

i
=
i1

i2
, i = 3, . . . , 8,
for an orthonormal basis (up to a scaling of the inner product)
i
: i = 1, . . . , 8.
Hence

t
1
= 2
8
,

t
2
=
1
2
(
1
+ +
7
+ 5
8
),

t
3
=
1
2
(
1
+
2
+ +
7
+ 7
8
),

t
i
=
i1
+ +
7
+ (9 i)
8
, i = 4, . . . , 8, .
For any i = 2, . . . , 6,
_

t
i

t
1
[
t
1
_
> 0,
_

t
i

t
8
[
t
8
_
> 0,
so if (2.1) is satised, then i = 1 or i = 8.
2. INVOLUTIVE AUTOMORPHISMS 103
For E
7
,
7
=
8

8
. It follows that

t
1
=
8

7
,

t
2
=
1
2
_

1
+ +
6
+ 2(
8

7
)
_
,

t
3
=
1
2
_

1
+ +
6
+ 3(
8

7
)
_
,

t
4
=
3
+ +
6
+ 2(
8

7
),

t
5
=
4
+
5
+
6
+
3
2
(
8

7
),

t
6
=
5
+
6
+ (
8

7
),

t
7
=
6
+
1
2
(
8

7
).
Hence
_

t
i

t
7
[
t
7
_
> 0 for i = 3, 4, 5, 6, so (2.1) imply that i = 1, 2 or 7.
For E
6
, take
6
=
7

7
. Here either

= id or it interchanges
1
and
6
,
and
3
and
5
. Besides,

t
1
=
2
3
(
8

7

6
),

t
2
=
1
2
_

1
+ +
5
+ (
8

7

6
)
_
,

t
3
=
1
2
_

1
+ +
5
_
+
5
6
_

8

7

6
_
,

t
4
=
3
+
4
+
5
+ (
8

7

6
),

t
5
=
4
+
5
+
2
3
(
8

7

6
),

t
6
=
5
+
1
3
(
8

7

6
).
Moreover,
_

t
3

t
1
[
t
1
_
> 0,
_

t
5

t
6
[
t
6
_
> 0,
_

t
4

t
2
[
t
2
_
> 0,
so if

= id, then (2.1) implies that i = 1, 2 or 6, so the symmetry of the diagram


shows that after reordering, it is enough to consider the cases of i = 1 or i = 2.
On the other hand, if

,= id, then i = 2 is the only possibility.


For F
4
, consider the ordering of the roots given by

1

2

3

4
>
Here

1
=
2

3
,

2
=
3

4
,

3
=
4
,

4
=
1
2
(
1

2

3

4
),
104 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
for a suitable orthonormal basis. Hence,

t
1
=
1
+
2
,

t
2
= 2
1
+
2
+
3
,

t
3
= 3
1
+
2
+
3
+
4
,

t
4
= 2
1
,
so
_

t
2

t
1
[
t
1
_
> 0,
_

t
3

t
4
[
t
4
_
> 0,
and (2.1) imply that i = 1 or i = 4.
For G
2
, order the roots as follows:

1

2
<
Then
1
=
2

1
,
2
=
1
2
(
1
2
2
+
3
), where
1
,
2
,
3
is an orthonormal
basis of a three-dimensional inner vector space and
t
=
1
,
2
generates a
two-dimensional vector subspace. Then
t
1
=
3

1
and
t
2
=
1
3
(
1

2
+ 2
3
),
so
_

t
1

t
2
[
t
2
_
> 0, and hence (2.1) forces i = 2.
(xi) The assertions on the third column in Tables A.1, A.2 follows by straightforward
computations, similar to the ones used for the description of the exceptional simple Lie
algebra of type F
4
in Chapter 2, Section 8. (Some more information will be given in
the next section.) The involutive automorphisms that appear in these tables for each
type are all nonconjugate, since their xed subalgebras are not isomorphic.
3. Simple real Lie algebras
What is left is to check that the information on the fourth column in Tables A.1, A.2 is
correct.
First, because of item (ii) in Remark 1.12, the signature of the Killing form of the
real form of a simple complex Lie algebra S associated to an involutive automorphism
Aut
C
S is dim
C
S
1
dim
C
S
0
, and this shows that the third column in Table A.2
determines completely the fourth. Thus, it is enough to deal with the classical cases.
Here, only the type A
n
will be dealt with, leaving the other types as an exercise.
Let S = sl
n+1
(C) be the simple complex Lie algebra of type A
n
. The special unitary
Lie algebra
su
n+1
(R) = x sl
n+1
(C) : x
t
= x
is a compact real subalgebra of S (here the bar denotes complex conjugation), as for
any x su
n+1
(R),
(x, x) = 2(n + 1) trace(x
2
) = 2(n + 1) trace(x x
t
) < 0
(see Equation (6.5) in Chapter 2). Let be the associated compact conjugation:
(x) = x
t
. For any Vogan diagram, we must nd an involutive automorphism
Aut
C
sl
n+1
(C) associated to it and that commutes with . Then = is the conjuga-
tion associated to the corresponding real form.
3. SIMPLE REAL LIE ALGEBRAS 105
(a) For = id, = and the real form is S

= su
n+1
(R).
(b) Let a
p
= diag(1, . . . , 1, 1, . . . , 1) be the diagonal matrix with p 1s and (n +
1 p) 1s. Then a
2
p
= I
n+1
(the identity matrix). The involutive automorphism

p
: x a
p
xa
p
= a
p
xa
1
p
of sl
n+1
(C) commutes with , its xed subalgebra is
formed by the block diagonal matrices with two blocks of size p and n + 1 p, so
S

p
= sl
p
(C) sl
n+1p
(C) Z, where Z is a one-dimensional center. Moreover,
the usual Cartan subalgebra H of the diagonal matrices in S (see Equation (6.4)
in Chapter 2) contains a Cartan subalgebra of the xed part and is invariant
under
p
. Here x
i
= E
i,i+1
(the matrix with a 1 on the (i, i + 1) position and 0s
elsewhere). Then
p
(x
p
) = x
p
, while
p
(x
j
) = x
j
for j ,= p, so the associated
Vogan diagram is

p
Now, with
p
=
p
, the associated real form is
x sl
n+1
(C) : s
t
a
p
+a
p
x = 0 = su
p,n+1p
(R).
(c) With n = 2r, consider the symmetric matrix of order n + 1 = 2r + 1
b =
_
_
1 0 0
0 0 I
r
0 I
r
0
_
_
,
which satises b
2
= I
n
, and the involutive automorphism
b
: x bx
t
b, which
commutes with . The xed subalgebra by
b
is precisely so
2r+1
(C). Again
b
preserves the by now usual Cartan subalgebra H. With the description of the
root system in Chapter 2, (6.4), it follows that

b
(
1
) =
1

b
=
1
, while

(
i
) =
r+i
, for i = 2, . . . , r + 1. Take the system of simple roots

t
=
2

3
,
3

4
, . . . ,
r

r+1
,
r+1

1
,
1

2r+1
,
2r+1

2r
, . . . ,
r+3

r+2

which is invariant under

and shows that the associated Vogan diagram is




Finally, consider the regular matrix
a =
_
_
1 0 0
0 I
r
I
r
0 iI
r
iI
r
_
_
,
which satises that b = a
1
a = a
1
a, and the associated conjugation
b
=
b
.
Its real form is
S

b
= x sl
n+1
(C) : b xb = x
= x sl
n+1
(C) : axa
1
= axa
1

= a
1
sl
n+1
(R)a

= sl
n+1
(R).
106 APPENDIX A. SIMPLE REAL LIE ALGEBRAS
(d) In the same vein, with n = 2r 1, consider the symmetric matrix d =
_
0 I
r
I
r
0
_
and the involutive automorphism
d
: x dx
t
d = dx
t
d
1
. Here the xed
subalgebra is so
2r
(C), and

d
(
i
) =
r+i
for i = 1, . . . , r. A suitable system of
simple roots is

t
=
r

r1
,
r1

r2
, . . . ,
2

1
,
1

r+1
,
r+1

r+2
, . . . ,
2r1

2r
.
The only root in
t
xed by
d
is
1

r+1
and
d
(E
1,r+1
) = E
1,r+1
, which
shows that the associated Vogan diagram is


As before, with
d
=
d
, one gets the real form S

d
= sl
n+1
(R).
(e) Finally, with n = 2r 1, consider the skew-symmetric matrix c =
_
0 I
r
I
r
0
_
and
the involutive automorphism
c
: x cx
t
c = cx
t
c
1
. Here the xed subalgebra
is sp
2r
(C), and

c
(
i
) =
r+i
for i = 1, . . . , r. The same
t
of the previous
item works here but
c
(E
1,r+1
) = E
1,r+1
, which shows that the associated Vogan
diagram is


With
c
=
c
, one gets the real form
S

c
= x sl
n+1
(C) : c xc = x
=
__
p q
q p
_
: p, q gl
r
(C) and Re
_
trace(p)
_
= 0
_

= p +jq gl
r
(H) : Re
_
trace(p)
_
= 0 = sl
r
(H),
where j H satises j
2
= 1 and ij = ji and Re denotes the real part.

Você também pode gostar