Algebra II
Lecture Notes
Kiyoshi Igusa
May 2007
These are lecture notes from a graduate course given at
Brandeis University in Spring 2007
using the 4th edition of Serge Lang's book Algebra.
Contents
Syllabus
Part A: Homological Algebra
1. Additive categories
2. Abelian categories
3. Injective modules
4. Divisible groups
5. Projective resolutions
Part B: Commutative Algebra
1. Integrality
2. Transcendental extensions
3. Algebraic spaces
4. Local rings
Part C: Semisimplicity
1. Simple rings and modules
2. Semisimple modules
3. Semisimple rings
Part D: Representations of finite groups
1. The group ring k[G]
2. Characters
3. Induction
Appendix: Homework and answers
Syllabus
Below is the syllabus for Algebra I and II (101a,b). I tried to cover everything
in the Algebra II syllabus, and I assumed that students had learned the
Algebra I material. We used Lang's Algebra, 4th ed.
There was weekly homework but no quizzes or final exam. Class participation
was also counted. Some effort was made to coordinate material
with Math 121b (Algebraic Topology), which grad students were taking concurrently.
Appendix B: Syllabi for Required Courses
Math 101a: Algebra I
Core topics (always covered):
1. Group theory:
(a) Quick review of the basic theory (subgroups, homomorphisms, etc.).
(b) Group actions, conjugacy classes, Sylow theorems.
(c) Solvable and nilpotent groups.
(d) Free groups, presentations.
2. Categories: Basic notions of categories, functors and natural transformations are
introduced and used as a language during the course.
3. Rings and Modules:
(a) Review of basic theory (subrings, ideals, fields, homomorphisms, etc.)
(b) UFDs, PIDs, polynomial rings.
(c) Linear algebra over rings (free modules, tensor products, exterior and symmetric
powers, determinants).
(d) Finitely generated modules over a PID and applications.
4. Field theory:
(a) Field extensions, splitting fields, finite fields.
(b) Separable and inseparable extensions, algebraic closure.
(c) Fundamental theorem of Galois theory, solvability by radicals.
Additional topics. As time and inclination permit, one can go deeper into:
Field theory (trace and norm, transcendental extensions, purely inseparable extensions,
infinite Galois extensions, Kummer theory).
Category theory (adjoint functors, Yoneda's lemma, limits).
Possible Texts:
Lang: Algebra
Jacobson: Basic Algebra
Math 101b: Algebra II
Core topics:
1. Homological algebra: Exact sequences, complexes and homology, projective and injective
modules, Ext and Tor.
2. Commutative algebra: Chain conditions, Hilbert basis theorem, Nullstellensatz, localization.
3. Representation theory (of finite groups): Maschke's theorem, Schur's Lemma, Frobenius
reciprocity, characters.
4. Noncommutative algebra: Semisimple rings and Wedderburn's theorem.
Additional topics. Additional 101a topics, or:
Representation theory (representations of S_n, Brauer's theorem, representations in
finite characteristic, representations of Lie groups).
Commutative algebra/number theory (integrality, completion, DVRs, Dedekind domains).
Commutative algebra/algebraic geometry (dimension theory, Noether normalization,
the ideal-variety correspondence, primary decomposition).
Possible Texts: See 101a.
Part A
Homological Algebra
MATH 101B: ALGEBRA II
PART A: HOMOLOGICAL ALGEBRA
These are notes for our first unit on the algebraic side of homological
algebra. While this is the last topic (Chap XX) in the book, it makes
sense to do this first so that grad students will be more familiar with
the ideas when they are applied to algebraic topology (in 121b). At
the same time, it is not my intention to cover the same material twice.
The topics are
Contents
1. Additive categories
2. Abelian categories
2.1. some definitions
2.2. definition of abelian category
2.3. examples
3. Projective and injective objects
4. Injective modules
4.1. dual module
4.2. constructing injective modules
4.3. proof of lemmas
4.4. Examples
5. Divisible groups
6. Injective envelope
7. Projective resolutions
7.1. Definitions
7.2. Modules over a PID
7.3. Chain complexes
7.4. Homotopy uniqueness of projective resolutions
7.5. Derived functors
7.6. Left derived functors
1. Additive categories
On the first day I talked about additive categories.
Definition 1.1. An additive category is a category C for which every
hom set HomC(X, Y) is an additive group and
(1) composition is biadditive, i.e., (f1 + f2) ∘ g = f1 ∘ g + f2 ∘ g and
f ∘ (g1 + g2) = f ∘ g1 + f ∘ g2;
(2) the category has finite direct sums.
I should have gone over the precise definition of direct sum:
Definition 1.2. The direct sum ⊕(i=1..n) Ai is an object X together with
morphisms ji : Ai → X, pi : X → Ai so that
(1) pi ∘ ji = id : Ai → Ai,
(2) pi ∘ jj = 0 if i ≠ j,
(3) ∑ ji ∘ pi = idX : X → X.
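To make Definition 1.2 concrete, here is a small sanity check (my own illustration, not from the notes) in the category of abelian groups, with X = Z² ⊕ Z³ realized inside Z⁵:

```python
def j1(a):            # include Z^2 into the first two coordinates of Z^5
    return a + (0, 0, 0)

def j2(b):            # include Z^3 into the last three coordinates of Z^5
    return (0, 0) + b

def p1(x):            # project Z^5 onto the first two coordinates
    return x[:2]

def p2(x):            # project Z^5 onto the last three coordinates
    return x[2:]

def add(x, y):        # coordinatewise addition in Z^5
    return tuple(u + v for u, v in zip(x, y))

a, b = (1, -2), (3, 0, 7)
x = add(j1(a), j2(b))

assert p1(j1(a)) == a                   # p_i j_i = id
assert p2(j2(b)) == b
assert p1(j2(b)) == (0, 0)              # p_i j_j = 0 for i != j
assert p2(j1(a)) == (0, 0, 0)
assert add(j1(p1(x)), j2(p2(x))) == x   # sum of j_i p_i = id_X
```

The last assertion is exactly condition (3): every element of X is recovered from its two components.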
Theorem 1.3. ⊕Ai is both the product and the coproduct of the Ai.
Proof. Suppose that fi : Y → Ai are morphisms. Then there is a morphism
f = ∑ ji ∘ fi : Y → ⊕Ai
which has the property that pi ∘ f = pi ∘ ji ∘ fi = fi. Conversely, given any
morphism g : Y → ⊕Ai satisfying pi ∘ g = fi for all i, we have:
f = ∑ ji ∘ fi = ∑ ji ∘ pi ∘ g = idX ∘ g = g
So, f is unique and ⊕Ai is the product of the Ai. By an analogous
argument, it is also the coproduct.
The converse is also true:
Proposition 1.4. Suppose that X = ∏Ai = ∐Ai and the composition
of the inclusion ji : Ai → X with pj : X → Aj is
pj ∘ ji = δij : Ai → Aj
I.e., it is the identity on Ai for i = j and it is zero for i ≠ j. Then
∑ ji ∘ pi = idX
Proof. Let f = ∑ ji ∘ pi : X → X. Then
pj ∘ f = ∑i pj ∘ ji ∘ pi = ∑i δij ∘ pi = pj
So, f = idX by the universal property of a product.
In class I pointed out that the sum of no objects is the zero object 0, which
is both initial and terminal. Also, I asked you to prove the following.
Problem. Show that a morphism f : A → B is zero if and only if it
factors through the zero object.
2. Abelian categories
2.1. some definitions. First I explained the abstract definition of kernel,
monomorphism, cokernel and epimorphism.
Definition 2.1. A morphism f : A → B in an additive category C is
a monomorphism if, for any object X and any morphism g : X → A,
f ∘ g = 0 if and only if g = 0. (g acts like an element of A. It goes
to zero in B iff it is zero.)
Another way to say this is that f : A → B is a monomorphism, and we
write 0 → A → B, if
0 → HomC(X, A) --f_#--> HomC(X, B)
is exact, i.e., f_# is a monomorphism of abelian groups. The lower sharp
means composition on the left, or post-composition. It is covariant:
(f ∘ g)_# = f_# ∘ g_#
Epimorphisms are defined analogously: f : B → C is an epimorphism
if for any object Y we get a monomorphism of abelian groups given by
pre-composition (the upper sharp):
0 → HomC(C, Y) --f^#--> HomC(B, Y)
MATH 101B: ALGEBRA II PART A: HOMOLOGICAL ALGEBRA 3
An abelian category is an additive category which has kernels and
cokernels satisfying all the properties that one would expect, which can
be stated categorically. First, I explained the categorical definition of
kernel and cokernel.
Definition 2.2. The kernel of a morphism f : A → B is an object K
with a morphism j : K → A so that
(1) f ∘ j = 0 : K → B,
(2) for any other object X and morphism g : X → A so that
f ∘ g = 0 there exists a unique h : X → K so that g = j ∘ h.
Since this is a universal property, the kernel is unique if it exists.
Theorem 2.3. A is the kernel of f : B → C if and only if
0 → HomC(X, A) → HomC(X, B) → HomC(X, C)
is exact for any object X. In particular, j : ker f → B is a monomorphism.
If you replace A with 0 in this theorem you get the following statement.
Corollary 2.4. A morphism is a monomorphism if and only if 0 is its
kernel.
Cokernel is defined analogously and satisfies the following theorem
which can be used as the definition.
Theorem 2.5. The cokernel of f : A → B is an object C with a
morphism B → C so that
0 → HomC(C, Y) → HomC(B, Y) → HomC(A, Y)
is exact for any object Y.
Again, letting C = 0 we get the statement:
Corollary 2.6. A morphism is an epimorphism if and only if 0 is its
cokernel.
These two theorems can be summarized by the following statement.
Corollary 2.7. For any additive category C, HomC is left exact in each
coordinate.
2.2. definition of abelian category.
Definition 2.8. An abelian category is an additive category C so that
(1) every morphism has a kernel and a cokernel;
(2) every monomorphism is the kernel of its cokernel;
(3) every epimorphism is the cokernel of its kernel;
(4) every morphism f : A → B can be factored as the composition
of an epimorphism A ↠ I and a monomorphism I ↪ B;
(5) a morphism f : A → B is an isomorphism if and only if it is
both mono and epi.
Proposition 2.9. The last condition follows from the first four conditions.
Proof. First of all, isomorphisms are always both mono and epi. The
definition of an isomorphism is that it has an inverse g : B → A so
that f ∘ g = idB and g ∘ f = idA. The second equation implies that f
is mono since
(g ∘ f)_# = g_# ∘ f_# = id_# = id
which implies that f_# is mono and f is mono. Similarly, f ∘ g = idB
implies that f is epi.
Conversely, suppose that f : A → B is both mono and epi. Then, by
(2), it is the kernel of its cokernel, which is B → 0. So, by left exactness
of Hom we get:
0 → HomC(B, A) → HomC(B, B) → HomC(B, 0) = 0
In other words, f_# : HomC(B, A) ≅ HomC(B, B). So, there is a unique
element g : B → A so that f ∘ g = idB. Similarly, by (3), there is a
unique h : B → A so that h ∘ f = idA. If we can show that g = h then
it will be the inverse of f, making f invertible and thus an isomorphism.
But this is easy:
h = h ∘ idB = h ∘ f ∘ g = idA ∘ g = g
2.3. examples. The following are abelian categories:
(1) The category of abelian groups and homomorphisms.
(2) The category of finite abelian groups. This is an abelian category
since any homomorphism of finite abelian groups has a
finite kernel and cokernel, and a finite direct sum of finite abelian
groups is also finite.
(3) R-mod = the category of all left R-modules and homomorphisms.
(4) R-Mod = the category of finitely generated (f.g.) left R-modules
is an abelian category assuming that R is left Noetherian (all
submodules of f.g. left R-modules are f.g.).
(5) mod-R = the category of all right R-modules and homomorphisms.
(6) Mod-R = the category of f.g. right R-modules is abelian if R is
right Noetherian.
The following are examples of additive categories which are not
abelian.
(1) Free abelian groups. (This category does not have cokernels.)
(2) Let R be a non-Noetherian ring, for example a polynomial ring
in infinitely many variables:
R = k[X1, X2, ...]
Then R-Mod, the category of f.g. R-modules, is not abelian since
it does not have kernels. E.g., the kernel of the augmentation
map
R → k
is infinitely generated.
3. Projective and injective objects
At the end of the second lecture we discussed the definition of injective
and projective objects in any additive category. And it was easy to
show that the category of R-modules has sufficiently many projectives.
Definition 3.1. An object P of an additive category C is called projective
if for any epimorphism f : A → B and any morphism g : P → B
there exists a morphism g̃ : P → A so that f ∘ g̃ = g. The map g̃ is
called a lifting of g to A.
Theorem 3.2. P is projective if and only if HomC(P, −) is an exact
functor.
Proof. If 0 → A → B → C → 0 is exact then, by left exactness of Hom,
we get an exact sequence:
0 → HomC(P, A) → HomC(P, B) → HomC(P, C)
By definition, P is projective if and only if the last map is always an
epimorphism, i.e., iff we get a short exact sequence
0 → HomC(P, A) → HomC(P, B) → HomC(P, C) → 0
Theorem 3.3. Any free R-module is projective.
Proof. Suppose that F is free with generators x_α. Then every element
of F can be written uniquely as ∑ r_α x_α where the coefficients r_α ∈ R
are almost all zero (only finitely many are nonzero). Suppose that
g : F → B is a homomorphism. Then, for every index α, the element
g(x_α) comes from some element y_α ∈ A, i.e., g(x_α) = f(y_α). Then a
lifting g̃ of g is given by
g̃(∑ r_α x_α) = ∑ r_α y_α
The verification that this is a lifting is straightforward (I would say
obvious), but it would go like this. The claim is that, first, g̃ : F → A
is a homomorphism of R-modules and, second, it is a lifting: f ∘ g̃ = g.
The second statement is easy:
f(g̃(∑ r_α x_α)) = f(∑ r_α y_α) = ∑ r_α f(y_α) = ∑ r_α g(x_α) = g(∑ r_α x_α)
The first claim says that g̃ is additive:
g̃(∑ r_α x_α + ∑ s_α x_α) = g̃(∑ (r_α + s_α) x_α)
= ∑ (r_α + s_α) y_α = g̃(∑ r_α x_α) + g̃(∑ s_α x_α)
and g̃ commutes with the action of R:
g̃(r ∑ r_α x_α) = g̃(∑ r r_α x_α)
= ∑ r r_α y_α = r ∑ r_α y_α = r g̃(∑ r_α x_α)
For every R-module M there is a free R-module which maps onto
M, namely the free module F generated by symbols [x] for all x ∈ M
and with projection map p : F → M given by
p(∑ r_α [x_α]) = ∑ r_α x_α
The notation [x] is used to distinguish between the element x ∈ M and
the corresponding generator [x] ∈ F. The homomorphism p is actually
just defined by the equation p[x] = x.
Corollary 3.4. The category of R-modules has sufficiently many projectives,
i.e., for every R-module M there is a projective R-module
which maps onto M.
This implies that every R-module M has a projective resolution
0 ← M ← P0 ← P1 ← P2 ← ⋯
This is an exact sequence in which every Pi is projective. The projective
modules are constructed inductively as follows. First, P0 is any
projective which maps onto M. This gives an exact sequence:
P0 → M → 0
By induction, we get an exact sequence
Pn → Pn−1 → Pn−2 → ⋯ → P0 → M → 0
Let Kn be the kernel of dn : Pn → Pn−1. Since there are enough
projectives, there is a projective module Pn+1 which maps onto Kn.
The composition Pn+1 ↠ Kn ↪ Pn is the map dn+1 : Pn+1 → Pn
which extends the exact sequence one more step.
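As a concrete instance (my example, not from the notes), the construction terminates immediately for M = Z/n over R = Z: P0 = Z maps onto M, its kernel K0 = nZ is free on one generator, so P1 = Z with d1 = multiplication by n gives the resolution 0 → Z → Z → Z/n → 0. A quick numerical check of exactness on a window of integers:

```python
n = 6
p  = lambda k: k % n     # projection P0 = Z -> M = Z/n
d1 = lambda k: n * k     # d1 : P1 = Z -> P0 = Z, multiplication by n

window = range(-48, 49)
# p is onto: every residue class is hit
assert {p(k) for k in window} == set(range(n))
# d1 is a monomorphism: n*k = 0 only for k = 0
assert all(d1(k) != 0 for k in range(-8, 9) if k != 0)
# exactness at P0: ker p = im d1 (checked on the window)
assert {k for k in window if p(k) == 0} == {d1(k) for k in range(-8, 9)}
```

Here P2 and beyond are zero, which also shows that pd(Z/n) = 1 over Z, as expected for a PID.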
Definition 3.5. An object Q of C is injective if, for any monomorphism
A → B, any morphism A → Q extends to B, i.e., iff
HomC(B, Q) → HomC(A, Q) → 0
is exact. As before, this is equivalent to:
Theorem 3.6. Q is injective if and only if HomC(−, Q) is an exact
functor.
The difficult theorem we need to prove is the following:
Theorem 3.7. The category of R-modules has sufficiently many injectives,
i.e., every R-module embeds in an injective R-module.
As in the case of projective modules, this theorem will tell us that
every R-module M has an injective coresolution, which is an exact
sequence:
0 → M → Q0 → Q1 → Q2 → ⋯
where each Qi is injective.
4. Injective modules
I will go over Lang's proof that every R-module M embeds in an
injective module Q. Lang uses the dual of the module.
4.1. dual module.
Definition 4.1. The dual of a left R-module M is defined to be the
right R-module
M* := HomZ(M, Q/Z)
with right R-action given by
(φr)(x) = φ(rx)
for all φ ∈ M*, r ∈ R.
Proposition 4.2. Duality is a left exact functor
(−)* : R-mod → mod-R
which is additive and takes sums to products:
(⊕ M_α)* ≅ ∏ M_α*
Proof. We already saw that the hom functor HomZ(−, X) is left exact
for any abelian group X. It is also obviously additive, which means
that (f + g)* = f* + g* for all f, g : N → M. I.e., the duality functor
induces a homomorphism (of abelian groups):
HomR(N, M) → HomZ(M*, N*)
Duality also takes sums to products since a homomorphism
f : ⊕ M_α → X
is given uniquely by its restrictions to each summand, f_α : M_α → X,
and the f_α can all be nonzero. (So, it is the product, not the sum.)
4.2. constructing injective modules. In order to get an injective
left R-module we need to start with a right R-module.
Theorem 4.3. Suppose F is a free right R-module. (I.e., F = ⊕ R_R
is a direct sum of copies of R considered as a right R-module.) Then
F* is an injective left R-module.
This theorem follows from the following lemma.
Lemma 4.4. (1) A product of injective modules is injective.
(2) HomR(M, (R_R)*) ≅ HomZ(M, Q/Z).
(3) Q/Z is an injective Z-module.
Proof of the theorem. Lemma (3) implies that HomZ(−, Q/Z) is an exact
functor. (2) implies that HomR(−, (R_R)*) is an exact functor. Therefore,
(R_R)* is an injective R-module. Since duality takes sums to products,
(1) implies that F* is injective for any F which is a sum of R_R's, i.e.,
any free right R-module F.
We need one more lemma to prove the main theorem. Then we have
to prove the lemmas.
Lemma 4.5. Any left R-module is naturally embedded in its double
dual:
M ↪ M**
Assume this 4th fact for a moment.
Theorem 4.6. Every left R-module M can be embedded in an injective
left R-module.
Proof. Let F be a free right R-module which maps onto M*:
F → M* → 0
Since duality is left exact we get:
0 → M** → F*
By the last lemma we have M ↪ M** ↪ F*. So, M embeds in the
injective module F*.
4.3. proof of lemmas. There are four lemmas to prove. Suppose for
a moment that T = Q/Z is injective; then the other three lemmas are
easy.
Proof of Lemma 4.5. A natural embedding M ↪ M** is given by the
evaluation map ev which sends x ∈ M to ev_x : M* → T, which is
evaluation at x:
ev_x(φ) = φ(x)
Evaluation is additive:
ev_{x+y}(φ) = φ(x + y) = φ(x) + φ(y) = ev_x(φ) + ev_y(φ) = (ev_x + ev_y)(φ)
Evaluation is an R-module homomorphism:
ev_{rx}(φ) = φ(rx) = (φr)(x) = ev_x(φr) = (r ev_x)(φ)
Finally, we need to show that ev is a monomorphism. In other words,
for every nonzero element x ∈ M we need to find some additive map
φ : M → T so that ev_x(φ) = φ(x) ≠ 0. To do this take the cyclic group
C generated by x:
C = {kx | k ∈ Z}
This is either Z or Z/n. In the second case let f : C → T be given by
f(kx) = k/n + Z ∈ Q/Z
This is nonzero on x since 1/n is not an integer. If C ≅ Z then let
f : C → T be given by
f(kx) = k/2 + Z
Then again, f(x) is nonzero. Since T is Z-injective, f extends to an
additive map φ : M → T. So, ev_x is nonzero and ev : M → M** is a
monomorphism.
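For a cyclic group the map φ in this proof can be written down explicitly. A small check (my own, with M = Z/n, so C = M and φ = f): φ(x) = x/n + Z is nonzero for every nonzero x, which is exactly the statement that ev_x ≠ 0.

```python
from fractions import Fraction

n = 12

def phi(x):
    # phi : Z/n -> Q/Z, x -> x/n mod 1, as in the proof above
    return Fraction(x, n) % 1

assert phi(0) == 0
# ev_x(phi) = phi(x) is nonzero for every nonzero x in Z/n
assert all(phi(x) != 0 for x in range(1, n))
```

Representing Q/Z by rationals in [0, 1) via `% 1` is the only convention assumed here.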
Proof that products of injectives are injective. Suppose that the J_α are
injective. Then we want to show that Q = ∏ J_α is injective. Let
p_α : Q → J_α be the projection map. Suppose that f : A → B is
a monomorphism and g : A → Q is any morphism. Then we want to
extend g to B.
Since each J_α is injective, each composition p_α ∘ g : A → J_α extends
to a morphism g_α : B → J_α. I.e., g_α ∘ f = p_α ∘ g for all α. By definition
of the product there exists a unique morphism g̃ : B → Q = ∏ J_α so
that p_α ∘ g̃ = g_α for each α. So,
p_α ∘ g̃ ∘ f = g_α ∘ f = p_α ∘ g : A → J_α
The uniquely induced map A → ∏ J_α is g̃ ∘ f = g. Therefore, g̃ is an
extension of g to B as required.
Finally, we need to prove that
HomR(M, (R_R)*) ≅ HomZ(M, Q/Z)
To do this we will give a 1-1 correspondence and show that it (the
correspondence) is additive.
If f ∈ HomR(M, (R_R)*) then f is a homomorphism f : M → (R_R)*, which
means that for each x ∈ M we get a homomorphism f(x) : R → Q/Z.
In particular we can evaluate this at 1 ∈ R. This gives Φ(f) : M →
Q/Z by the formula
Φ(f)(x) = f(x)(1)
This defines a mapping
Φ : HomR(M, (R_R)*) → HomZ(M, Q/Z)
We need to know that this is additive. I used "know" instead of
"show" since this is one of those steps that you should normally skip.
However, you need to know what it is that you are skipping. The fact
that we need to know is that
Φ(f + g) = Φ(f) + Φ(g)
This is an easy calculation which follows from the way that f + g
is defined, namely, addition of functions is defined pointwise, which
means that (f + g)(x) = f(x) + g(x) by definition. So, for all x ∈ M,
Φ(f + g)(x) = (f + g)(x)(1) = [f(x) + g(x)](1) = f(x)(1) + g(x)(1)
= Φ(f)(x) + Φ(g)(x) = [Φ(f) + Φ(g)](x)
Finally we need to show that Φ is a bijection. To do this we find
the inverse Φ⁻¹ = Ψ. For any homomorphism g : M → Q/Z let
Ψ(g) : M → (R_R)* be given by
Ψ(g)(x)(r) = g(rx)
Since this is additive in all three variables, Ψ is additive and Ψ(g) is
additive. We also need to check that Ψ(g) is a homomorphism of left
R-modules, i.e., that Ψ(g)(rx) = rΨ(g)(x). This is an easy calculation:
Ψ(g)(rx)(s) = g(s(rx)) = g((sr)x)
[rΨ(g)(x)](s) = [Ψ(g)(x)](sr) = g((sr)x)
The verification that Ψ is the inverse of Φ is also straightforward.
For all f ∈ HomR(M, (R_R)*) we have
Ψ(Φ(f))(x)(r) = Φ(f)(rx) = f(rx)(1) = [rf(x)](1) = f(x)(1·r) = f(x)(r)
So, Ψ(Φ(f)) = f. Similarly, for all g ∈ HomZ(M, Q/Z) we have:
Φ(Ψ(g))(x) = Ψ(g)(x)(1) = g(1x) = g(x)
So, Φ(Ψ(g)) = g.
I will do the last lemma (injectivity of Q/Z) tomorrow.
4.4. Examples. I prepared three examples but I only got to two of them
in class.
4.4.1. polynomial ring. Let R = Z[t], the integer polynomial ring in one
generator. This is a commutative Noetherian ring. It has dimension 2
since a maximal tower of prime ideals is given by
0 ⊂ (t) ⊂ (t, 2)
These ideals are prime since the quotients of R by these ideals are domains
(i.e., have no zero divisors):
R/0 = Z[t], R/(t) = Z
are domains and
R/(t, 2) = Z/2Z
is a field, making (t, 2) a maximal ideal.
Proposition 4.7. A Z[t]-module M is the same as an abelian group
together with an endomorphism M → M given by the action of t. A
homomorphism of Z[t]-modules f : M → N is an additive homomorphism
which commutes with the action of t.
Proof. I will use the fact that the structure of an R-module on an
additive group M is the same as a homomorphism of rings φ : R →
End(M). When R = Z[t], this homomorphism is given by its value on
t since φ(f(t)) = f(φ(t)). For example, if f(t) = 2t² + 3 then
φ(f(t)) = φ(2t² + 3) = 2φ(t) ∘ φ(t) + 3 idM = f(φ(t))
Therefore, φ is determined by φ(t) ∈ EndZ(M), which is arbitrary.
What do the injective R-modules look like? We know that Q = (R_R)*
is injective. What does that look like?
Q = HomZ(Z[t], Q/Z)
But Z[t] is a free abelian group on the generators 1, t, t², t³, ⋯. Therefore,
an element f ∈ Q, f : Z[t] → Q/Z, is given uniquely by the
sequence
f(1), f(t), f(t²), f(t³), ⋯ ∈ Q/Z
Multiplication by t shifts this sequence to the left since (tf)(tⁱ) = f(tⁱ·t) =
f(tⁱ⁺¹). This proves the following.
Theorem 4.8. The injective module Q = Z[t]* is isomorphic to the
additive group of all sequences (a0, a1, a2, ⋯) of elements ai ∈ Q/Z,
with the action of t given by shifting to the left and dropping the first
coordinate. I.e.,
t(a0, a1, a2, ⋯) = (a1, a2, ⋯)
The word "isomorphic" is correct here because these are not the
same set.
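In code (a finite-truncation sketch of my own; genuine elements of Q are infinite sequences, so tracking only finitely many coordinates is an assumption of this illustration), the t-action of Theorem 4.8 is just a left shift:

```python
from fractions import Fraction

def t_act(seq):
    # t . (a0, a1, a2, ...) = (a1, a2, ...): drop the first coordinate,
    # i.e., (t f)(t^i) = f(t^(i+1)) in terms of the dual description
    return seq[1:]

# a represents f with f(1) = 1/2, f(t) = 1/3, f(t^2) = 2/3, f(t^3) = 0
a = (Fraction(1, 2), Fraction(1, 3), Fraction(2, 3), Fraction(0))
assert t_act(a) == (Fraction(1, 3), Fraction(2, 3), Fraction(0))
assert t_act(t_act(a)) == (Fraction(2, 3), Fraction(0))
```

The shift is exactly how the endomorphism φ(t) of Proposition 4.7 acts on this particular abelian group.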
4.4.2. fields. Suppose that R = k is a field. Then I claim that all
k-modules are both projective and injective.
First note that a k-module is the same as a vector space over the
field k. Since every vector space has a basis, all k-modules are free.
Therefore, all k-modules are projective. Then I went through a roundabout
argument to show that all k-modules are injective, and I only
managed to show that finitely generated k-modules are injective. (More
on this later.)
Finally, I started over and used the following theorem.
Theorem 4.9. Suppose that R is any ring. Then the following are
equivalent (tfae):
(1) All left R-modules are projective.
(2) All left R-modules are injective.
(3) Every short exact sequence of R-modules splits.
First I recalled the definition of a splitting of a short exact sequence.
Proposition 4.10. Given a short exact sequence of left R-modules
(4.1) 0 → A --f--> B --g--> C → 0
tfae:
(1) B = f(A) ⊕ D for some submodule D ⊆ B.
(2) f has a retraction, i.e., a morphism r : B → A s.t. r ∘ f = idA.
(3) g has a section, i.e., a morphism s : C → B s.t. g ∘ s = idC.
Proof. This is a standard fact that most people know very well. For
example, (1) ⇒ (2) because a retraction r is given by projection to
the first coordinate followed by the inverse of the isomorphism f :
A → f(A). (2) ⇒ (1) by letting D = ker r. [You need to verify that
B = f(A) ⊕ D, which is in two steps: D ∩ f(A) = 0 and D + f(A) = B.
For example, for any x ∈ B, x = f(r(x)) + (x − f(r(x))) ∈ f(A) + D.]
Proof of Theorem 4.9. (1) ⇒ (3): In the short exact sequence (4.1), C is
projective (since all modules are assumed projective). Therefore, the
identity map C → C lifts to B and the sequence splits.
(3) ⇒ (1): Since any epimorphism g : B → C has a section s, any
morphism f : X → C has a lifting f̃ = s ∘ f : X → B.
The equivalence (2) ⇔ (3) is similar.
5. Divisible groups
Z-modules are the same as abelian groups. And we will see that
injective Z-modules are the same as divisible groups.
Definition 5.1. An abelian group D is called divisible if for any x ∈ D
and any positive integer n there exists y ∈ D so that ny = x. (We say
that x is divisible by n.)
For example, Q is divisible, and 0 is divisible. A finite group is divisible
if and only if it is 0.
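These examples are easy to check by hand, but here is a quick sanity check anyway (my own, not from the notes): in Q the equation ny = x is solved by y = x/n, while in a nonzero finite group such as Z/4 divisibility already fails for n = 2:

```python
from fractions import Fraction

def divide_in_Q(x, n):
    # solve n*y = x in Q: y = x/n always exists, so Q is divisible
    return Fraction(x) / n

x, n = Fraction(3, 7), 5
assert n * divide_in_Q(x, n) == x

# Z/4 is not divisible: 2*y = 1 has no solution mod 4
m = 4
assert all((2 * y) % m != 1 for y in range(m))
```

The second check is an exhaustive search, which is only possible because the group is finite.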
Proposition 5.2. Any quotient of a divisible group is divisible.
Proof. Suppose D is divisible and K is a subgroup. Then any element
of the quotient D/K has the form x + K where x ∈ D. This is divisible
by any positive n since, if ny = x, then
n(y + K) = ny + K = x + K
Therefore D/K is divisible.
Theorem 5.3. The following are equivalent (tfae) for any abelian
group D:
(1) D is divisible.
(2) If A is a subgroup of a cyclic group B then any homomorphism
A → D extends to B.
(3) D is an injective Z-module.
Proof. It is easy to see that the first two conditions are equivalent.
Suppose that x ∈ D and n > 0. Then A = nZ is a subgroup of the
cyclic group B = Z and f : nZ → D can be given by sending the
generator n to x. The homomorphism f : nZ → D can be extended to
Z if and only if x is divisible by n (an extension g gives y = g(1) with
ny = x). Thus (2) implies (1) and (1) implies (2)
in the case B = Z. The argument for any cyclic group is the same.
It follows from the definition of injectivity that (3) ⇒ (2). So, we
need to show that (1) and (2) imply (3).
So, suppose that D is divisible. Then we will use Zorn's lemma to
prove that it is injective. Suppose that A is a submodule of B and
f : A → D is a homomorphism. Then we want to extend f to all
of B. To use Zorn's lemma we take the set of all pairs (C, g) where
A ⊆ C ⊆ B and g is an extension of f (i.e., f = g|A). This set is
partially ordered in an obvious way: (C, g) < (C′, g′) if C ⊆ C′ and
g = g′|C. It also satisfies the hypothesis of Zorn's lemma. Namely,
any totally ordered subset {(C_α, g_α)} has an upper bound: (∪C_α, ∪g_α).
Zorn's lemma tells us that this set has a maximal element, say (M, g).
We just need to show that M = B. We show this by contradiction.
If M ≠ B then there is at least one element x ∈ B which is not
in M. Let Zx = {kx | k ∈ Z} be the subgroup of B generated by x.
Then M + Zx is strictly bigger than M. So, if we can find an extension
g̃ : M + Zx → D of g then we have a contradiction, proving the theorem.
There are two cases.
Case 1. M ∩ Zx = 0. In this case, let g̃ = (g, 0), i.e., g̃(a + kx) = g(a).
Case 2. M ∩ Zx = nZx. (n is the smallest positive integer so that
nx ∈ M.) Since D is divisible, there is an element y ∈ D so that
ny = g(nx). Let g̃ : M + Zx → D be defined by g̃(a + kx) = g(a) + ky.
This is well defined by the following lemma since, for any a = knx,
g(a) = g(knx) = kg(nx) = kny
Lemma 5.4. Suppose that A, B are submodules of an R-module C and
f : A → X, g : B → X are homomorphisms of R-modules which agree
on A ∩ B. Then we get a well-defined homomorphism f + g : A + B → X
by the formula
(f + g)(a + b) = f(a) + g(b)
Proof. Well-defined means that, if the input is written in two different
ways, the output is still the same. So suppose that a + b = a′ + b′.
Then a − a′ = b′ − b ∈ A ∩ B. So,
f(a − a′) = f(a) − f(a′) = g(b′ − b) = g(b′) − g(b)
by assumption. Rearranging the terms, we get f(a) + g(b) = f(a′) + g(b′)
as desired.
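Here is a worked instance of the lemma (my own choice of A, B, f, g): take A = 2Z and B = 3Z inside C = Z with X = Q/Z. The maps f and g below agree on A ∩ B = 6Z, so f + g is well defined on A + B = Z.

```python
from fractions import Fraction

def f(a):
    # f : 2Z -> Q/Z, f(2k) = k/2 mod 1
    assert a % 2 == 0
    return Fraction(a, 4) % 1      # a/4 = (2k)/4 = k/2

def g(b):
    # g : 3Z -> Q/Z, g(3k) = k/4 mod 1
    assert b % 3 == 0
    return Fraction(b, 12) % 1     # b/12 = (3k)/12 = k/4

# f and g agree on the intersection A ∩ B = 6Z
assert all(f(6 * k) == g(6 * k) for k in range(-10, 11))

# two decompositions of 1 in 2Z + 3Z = Z give the same value of f + g:
# 1 = (-2) + 3 = 4 + (-3)
assert f(-2) + g(3) == f(4) + g(-3)
```

The last assertion is the "two different ways of writing the input" step of the proof, checked numerically.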
6. Injective envelope
There is one other very important fact about injective modules which
was not covered in class for lack of time and which is also not covered
in the book. This is the fact that every R-module M embeds in a
minimal injective module, which is called the injective envelope of M.
This is from Jacobson's Basic Algebra II.
Definition 6.1. An embedding A ↪ B is called essential if every
nonzero submodule of B meets A. I.e., C ⊆ B, C ≠ 0 ⇒ A ∩ C ≠ 0.
For example, Z ↪ Q is essential because, if a subgroup of Q contains
a nonzero element a/b, then it contains a ∈ Z. Also, every isomorphism
is essential.
Exercise 6.2. Show that the composition of essential maps is essential.
Lemma 6.3. Suppose A ⊆ B. Then
(1) there exists X ⊆ B s.t. A ∩ X = 0 and A ↪ B/X is essential;
(2) there exists C ⊆ B maximal so that A ⊆ C is essential.
Proof. For (1), the set of all X ⊆ B s.t. A ∩ X = 0 has a maximal
element by Zorn's lemma. Then A ↪ B/X must be essential;
otherwise there would be a disjoint submodule of the form Y/X with
X ⊊ Y and A ∩ Y = 0, contradicting the maximality of X. For (2), C exists
by Zorn's lemma.
Lemma 6.4. Q is injective iff every short exact sequence
0 → Q → M → N → 0
splits.
Proof. If Q is injective then the identity map Q → Q extends to a
retraction r : M → Q, giving a splitting of the sequence. Conversely,
suppose that every sequence as above splits. Then for any monomorphism
i : A ↪ B and any morphism f : A → Q we can form the
pushout M in the following diagram:

        i
    A ----> B
    |       |
   f|       |f′
    v       v
    Q ----> M
        j

As you worked out in your homework, these morphisms form an exact
sequence:
A --(f, −i)--> Q ⊕ B --(j, f′)--> M → 0
Since i is a monomorphism, the map (f, −i) is a monomorphism and
A is the kernel of (j, f′). Therefore (again using your homework) A is
the pullback in the above diagram. This implies that j is a monomorphism.
[Any morphism g : X → Q which goes to zero in M, i.e., so that
j ∘ g = 0, will give a morphism (g, 0) : X → Q ⊕ B which goes to zero
in M and therefore lifts uniquely to h : X → A so that f ∘ h = g and
i ∘ h = 0. But i is a monomorphism. So, i ∘ h = 0 implies h = 0, which
in turn implies that g = f ∘ h = 0. So, j is a monomorphism.]
Since j is a monomorphism there is a short exact sequence
0 → Q --j--> M → coker j → 0
We are assuming that all such sequences split. So, there is a retraction
r : M → Q (r ∘ j = idQ). Then it is easy to see that r ∘ f′ : B → Q is
the desired extension of f : A → Q:
r ∘ f′ ∘ i = r ∘ j ∘ f = idQ ∘ f = f
So, Q is injective.
Lemma 6.5. Q is injective if and only if every essential embedding
Q ↪ M is an isomorphism.
Proof. (⇒) Suppose Q is injective and Q ↪ M is essential. Then the
identity map Q → Q extends to a retraction r : M → Q whose kernel
is disjoint from Q and therefore must be zero, making M ≅ Q.
(⇐) Now suppose that every essential embedding of Q is an isomorphism.
We want to show that Q is injective. By the previous lemma
it suffices to show that every short exact sequence
0 → Q --j--> M → N → 0
splits. By Lemma 6.3 there is a submodule X ⊆ M so that j(Q) ∩ X = 0
and Q ↪ M/X is essential. Then, by assumption, this map must be
an isomorphism. So, M = j(Q) ⊕ X and the sequence splits, proving that
Q is injective.
Theorem 6.6. For any R-module M there exists an essential embedding
M ↪ Q with Q injective. Furthermore, Q is unique up to
isomorphism under M.
Proof. We know that there is an embedding M ↪ Q0 where Q0 is
injective. By Lemma 6.3 we can find Q maximal with M ⊆ Q ⊆ Q0
so that M ↪ Q is essential.
Claim: Q is injective.
If not, there exists a proper essential embedding Q ↪ N. Since Q0 is
injective, there exists f : N → Q0 extending the embedding Q ↪ Q0.
Since f is an embedding on Q, ker f ∩ Q = 0. This forces ker f = 0
since Q ↪ N is essential. So, f : N → Q0 is a monomorphism. This
contradicts the maximality of Q, since the image of N is an essential
extension of M in Q0 which is larger than Q.
It remains to show that Q is unique up to isomorphism. So, suppose
M ↪ Q′ is another essential embedding of M into an injective Q′.
Then the inclusion M ↪ Q′ extends to a map g : Q → Q′ which must
be a monomorphism since its kernel is disjoint from M. Also, g must
be onto: g(Q) ≅ Q is injective, making the inclusion g(Q) ↪ Q′ split,
which would contradict the assumption that M ↪ Q′ is essential unless
g(Q) = Q′.
7. Projective resolutions
We talked for a week about projective resolutions.
(1) Definitions
(2) Modules over a PID
(3) Chain complexes, maps and homotopies
(4) Homotopy uniqueness of projective resolutions
(5) Examples
7.1. Definitions. Suppose that M is an R-module (or, more generally,
an object of any abelian category with enough projectives); then
a projective resolution of M is defined to be a long exact sequence of
the form
⋯ → P_{n+1} --d_{n+1}--> P_n --d_n--> P_{n−1} → ⋯ → P_0 → M → 0
where the P_i are all projective.
The (left) projective dimension of M is the smallest integer n ≥ 0 so
that there is a projective resolution of the form
0 → P_n → P_{n−1} → ⋯ → P_0 → M → 0
We write pd(M) = n. If there is no finite projective resolution then
pd(M) = ∞.
The (left) global dimension of the ring R written gl dim(R) is the
maximum projective dimension of any module.
Example 7.1. (0) R has global dimension 0 if and only if it is
semisimple (e.g., any field).
(1) Any principal ideal domain (PID) has global dimension at most 1,
since every submodule of a free module is free and every module
(over any ring) is (isomorphic to) the quotient of a free module.
An injective coresolution of a module M is an exact sequence of the
form

0 → M → Q_0 → Q_1 → ···

where all of the Q_i are injective. If an abelian category has enough
injectives then every object has an injective coresolution. We went to a
lot of trouble to show this holds for the category of R-modules.
The injective dimension id(M) is the smallest integer n so that there
is an injective coresolution of the form

0 → M → Q_0 → Q_1 → ··· → Q_n → 0

We will see later that the maximum injective dimension is equal to the
maximum projective dimension.
7.2. Modules over a PID. At this point I decided to go through Lang's
proof of the following well-known theorem that I had already mentioned
several times.
Theorem 7.2. Suppose that R is a PID and E is a free R-module.
Then every submodule of E is free.
Proof. (This proof is given on page 880 as an example of Zorn's lemma.)
Suppose that E is free with basis I and let F be an arbitrary submodule
of E. Then we consider the set P of all pairs (J, w) where J ⊆ I and w
is a basis for F_J := F ∩ E_J, where E_J is the submodule of E generated
by J. In other words, F_J is the set of all elements of F which are linear
combinations of elements of the subset J of the given basis of E.
For example, suppose that I = {i, j, k} and J = {i, j}. If F ⊆ E is
the submodule given by

F = {(x, y, z) | x + y + z = 0}

then F_J = {(x, −x, 0)}.
The set P = {(J, w)} is partially ordered in the usual way: (J, w) ≤
(J′, w′) if J ⊆ J′ and w ⊆ w′. To apply Zorn's lemma we need to check
that every tower has an upper bound. So, suppose that {(J_α, w_α)} is
a tower. Then the upper bound is given in the usual way by

(J, w) = (∪ J_α, ∪ w_α)

This clearly has the property that (J_α, w_α) ≤ (J, w) for all α. We need
to verify that (J, w) is an element of the poset P. Certainly, J = ∪ J_α
is a subset of I. So, it remains to check that
(1) w is linearly independent.
(2) w spans F_J, i.e., w ⊆ F_J and every element of F_J is a linear
combination of elements of w.
The first point is easy, since any linear dependence among elements of
w involves only a finite number of elements of w, which must all belong
to some w_α: if x_1, …, x_n ∈ w then each x_i is contained in some w_{α_i}.
Let α be the largest α_i. Then w_α contains all the x_i. Since w_α is a
basis for F_{J_α}, these elements are linearly independent.
The second point is also easy. w_α ⊆ F_{J_α} ⊆ F_J. So, the union
w = ∪ w_α is contained in F_J. Any element x ∈ F_J has only a finite
number of nonzero coordinates, which all lie in some J_α. So x ∈ F_{J_α},
which is spanned by w_α.
This verifies the hypothesis of Zorn's lemma. Therefore, the conclusion
holds and our poset P has a maximal element (J, w). We claim
that J = I. This would mean that F_J = F_I = F and w would be a
basis for F and we would be done.
To prove that J = I, suppose J is strictly smaller. Then there exists
an element k of I which is not in J. Let J′ = J ∪ {k}. Then we want to
find a basis w′ for F_{J′} so that (J, w) < (J′, w′), i.e., the basis w′ needs
to contain w. In that case we get a contradiction to the maximality of
(J, w). There are two cases.
Case 1. F_{J′} = F_J. In that case take w′ = w and we are done.
Case 2. F_{J′} ≠ F_J. This means that there is at least one element of
F_{J′} whose k-coordinate is nonzero. Let A be the set of all elements of
R which appear as k-coordinates of elements of F_{J′}. This is the image
of the k-coordinate projection map

F_{J′} ⊆ E_{J′} →(p_k) R

So, A is an ideal in R. Since R is a PID, A = Rs for some s ∈ R. Let
x ∈ F_{J′} so that p_k(x) = s. Then I claim that w′ = w ∪ {x} is a basis
for F_{J′}. First, w′ is clearly linearly independent, since any linear
combination which involves x will have nonzero k-coordinate and so
cannot be zero, and any linear combination not involving x cannot be
zero since w is linearly independent. Finally, w′ spans F_{J′}: given any
z ∈ F_{J′} we must have p_k(z) ∈ A = Rs. So p_k(z) = rs and
p_k(z − rx) = 0. This implies that z − rx ∈ F_J, which is spanned by w.
So z is rx plus a linear combination of elements of w. So, w′ spans F_{J′}
and we are done. □
Since Z is a PID we get the following.
Corollary 7.3. Every subgroup of a free abelian group is free.
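The theorem can be checked concretely on the example from the proof. The sketch below (a Python check; the basis b1, b2 is our own choice for illustration, not one produced by the Zorn's lemma argument) verifies that F = {(x, y, z) ∈ Z³ : x + y + z = 0} is free of rank 2 by expressing sample elements of F as integer combinations of two chosen basis vectors.

```python
from fractions import Fraction

# F = {(x, y, z) in Z^3 : x + y + z = 0} is a submodule of the free
# module Z^3.  The theorem says F is free; we claim the following is
# a basis (hypothetical choice for this illustration):
b1, b2 = (1, -1, 0), (0, 1, -1)

def coordinates(v):
    """Solve v = r*b1 + s*b2 over Q and verify the solution."""
    r, s = Fraction(v[0]), Fraction(-v[2])
    assert all(r * a + s * b == c for a, b, c in zip(b1, b2, v))
    return r, s

for v in [(3, -1, -2), (0, 5, -5), (-7, 7, 0)]:
    assert sum(v) == 0                             # v lies in F
    r, s = coordinates(v)
    assert r.denominator == s.denominator == 1     # integer coordinates
```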
7.3. Chain complexes. At this point I decided to review the basic
definitions for chain complexes, chain maps and chain homotopies.
Suppose that A is an abelian category. Then a chain complex over
A is an infinite sequence of objects and morphisms (called boundary
maps):

··· → C_n →(d_n) C_{n−1} →(d_{n−1}) ··· → C_1 →(d_1) C_0

so that the composition of any two arrows is zero:

d_{n−1} ∘ d_n = 0

The chain complex is denoted either C_∗ or (C_∗, d_∗).
Given two chain complexes C_∗, D_∗, a chain map f_∗ : C_∗ → D_∗ is a
sequence of morphisms f_n : C_n → D_n so that d^D_n ∘ f_n = f_{n−1} ∘ d^C_n,
where the superscripts keep track of which chain complex the boundary
maps d_n are in. These morphisms form a big commuting diagram in
the shape of a ladder.
7.3.1. category of chain complexes.
Proposition 7.4. If A is abelian, let C_∗(A) be the category of chain
complexes over A and chain maps. Then C_∗(A) is also abelian.
I didn't give a detailed proof but I pointed out how direct sums,
kernels and cokernels are constructed. First the direct sum: C_∗ ⊕ D_∗
is the chain complex with objects C_n ⊕ D_n and boundary maps

d^{C⊕D}_n = d^C_n ⊕ d^D_n : C_n ⊕ D_n → C_{n−1} ⊕ D_{n−1}

Then the kernel of a chain map f_∗ : C_∗ → D_∗ is defined to be the chain
complex with nth term ker f_n and boundary map

d′_n : ker f_n → ker f_{n−1}

induced by the morphism d^C_n : C_n → C_{n−1}. (Since f_{n−1} ∘ d^C_n =
d^D_n ∘ f_n = 0 on ker f_n, we get this induced map.) The cokernel complex
is given similarly by

coker f_n →(d̄_n) coker f_{n−1}

where d̄_n is the morphism induced by d^D_n.
A cochain complex over an abelian category A is a sequence of objects
and morphisms (called coboundary maps)

C^0 →(d^0) C^1 →(d^1) C^2 → ···

so that d^{n+1} ∘ d^n = 0. A morphism of cochain complexes C^∗ → D^∗
is a sequence of morphisms f^n : C^n → D^n which form a commuting
ladder diagram. It is convenient to use the fact that this is the same as
a chain complex over the opposite category A^op, which is also abelian.
If the category of cochain complexes over A is denoted C^∗(A) then this
duality can be written as

C^∗(A)^op ≅ C_∗(A^op)
7.3.2. homology. The homology of a chain complex C_∗ is defined to be
the sequence of objects:

H_n(C_∗) := ker d_n / im d_{n+1}

In theory these are defined only up to isomorphism. So, they are not
true functors. However, in practice, they can almost always be explicitly
constructed. The construction does not have to be particularly
elegant or simple. But it avoids set-theoretic headaches, since the Axiom
of Choice does not apply to categories: we are not allowed to
choose an object H_n(C_∗) for every chain complex C_∗, only for a set
of chain complexes.
Homology is a functor in the sense that any chain map f_∗ : C_∗ → D_∗
induces a morphism in homology H_n(f_∗) : H_n(C_∗) → H_n(D_∗). This
is because commutativity of the ladder implies that ker d^C_n maps to
ker d^D_n and im d^C_{n+1} maps to im d^D_{n+1}. This functor is additive in the
sense that H_n(f_∗ + g_∗) = H_n(f_∗) + H_n(g_∗). In other words, H_n gives a
homomorphism

H_n : Hom_{C_∗(A)}(C_∗, D_∗) → Hom_A(H_n(C_∗), H_n(D_∗))

Additivity follows from the shape of the diagram in a way that I will
explain later.
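For chain complexes of finite-dimensional vector spaces over Q, the homology objects H_n = ker d_n / im d_{n+1} can be computed from ranks alone: dim H_n = dim ker d_n − rank d_{n+1}. A minimal numerical sketch (our own example, the simplicial chain complex of a circle with two vertices and two edges):

```python
import numpy as np

def betti(dims, boundary):
    """dim H_n = dim ker d_n - rank d_{n+1} for a chain complex of
    finite-dimensional Q-vector spaces.  boundary[n] is the matrix of
    d_n : C_n -> C_{n-1}; missing entries are treated as zero maps."""
    def rank(n):
        d = boundary.get(n)
        return np.linalg.matrix_rank(d) if d is not None else 0
    return [dims[n] - rank(n) - rank(n + 1) for n in range(len(dims))]

# Circle with two vertices a, b and two edges e1, e2:
# d1(e1) = b - a, d1(e2) = a - b; all other boundary maps are zero.
d1 = np.array([[-1, 1],
               [ 1, -1]])
print(betti([2, 2], {1: d1}))   # [1, 1]: H_0 = Q, H_1 = Q
```

The ranks capture exactly the dimension count in H_n(C_∗) = ker d_n / im d_{n+1}; over a field no torsion phenomena appear.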
7.3.3. chain homotopy. Two chain maps f_∗, g_∗ : C_∗ → D_∗ are called
chain homotopic if there is a sequence of morphisms

h_n : C_n → D_{n+1}

so that

d^D_{n+1} ∘ h_n + h_{n−1} ∘ d^C_n = g_n − f_n

for all n ≥ 0, where h_{−1} = 0. We call h_∗ a homotopy from f_∗ to g_∗ and
we write

h_∗ : f_∗ ≃ g_∗
Theorem 7.5. Homotopic chain maps induce the same map in homology.
Proof. This follows from the fact that H_n is additive:

H_n(g_∗) − H_n(f_∗) = H_n(g_∗ − f_∗) = H_n(d^D h + h d^C)
= H_n(d^D h) + H_n(h d^C)

But both of these are zero, since d^D h maps into the image of d^D and
therefore to zero in H_∗(D_∗), and h d^C is zero on ker d^C. □
7.3.4. homotopy equivalence. Two chain complexes C_∗, D_∗ are called
(chain) homotopy equivalent, and we write C_∗ ≃ D_∗, if there exist chain
maps f_∗ : C_∗ → D_∗ and g_∗ : D_∗ → C_∗ so that f_∗ ∘ g_∗ ≃ id_{D_∗} and
g_∗ ∘ f_∗ ≃ id_{C_∗}. The chain maps f_∗, g_∗ are called (chain) homotopy
equivalences and we write f_∗ : C_∗ ≃ D_∗.
Corollary 7.6. Any chain homotopy equivalence induces an isomorphism
in homology.
Proof. Theorem 7.5 implies that H_n(f_∗) ∘ H_n(g_∗) = H_n(id_{D_∗}) = id_{H_n(D_∗)}
and similarly the other way. So, H_n(f_∗) is an isomorphism with inverse
H_n(g_∗). □
7.4. Homotopy uniqueness of projective resolutions. Here I proved
that the projective resolution of any R-module (or any object of an
abelian category with enough projectives) is unique up to chain homotopy.
I used diagrams and the (equivalent) equations. First I wrote
down the understood standard interpretation of a diagram.
Lemma 7.7. Given that the solid arrows (below) form a commuting
diagram, there exists a dotted arrow (the lift f̃) as indicated, making
the entire diagram commute. [This is the understood meaning of this
kind of diagram.] The dotted arrow is not necessarily uniquely determined
(it is labeled ∃ and not ∃!). The assumptions are that P is
projective and that C_{n+1} maps onto ker d_n. Explicitly: given
f : P → C_n with d_n ∘ f = 0, there exists f̃ : P → C_{n+1} with
d_{n+1} ∘ f̃ = f.

           C_{n+1}
      f̃ ↗    ↓ d_{n+1}
   P →(f)   C_n
      (0) ↘  ↓ d_n
           C_{n−1}

Proof. By definition of kernel, f lifts uniquely to ker d_n. But C_{n+1} maps
onto ker d_n. So, by definition of P being projective, f lifts to C_{n+1}. □
Lemma 7.8. With standard wording as above: given the solid arrows in
the ladder below, there exists a morphism f_{n+1} : P_{n+1} → C_{n+1} making
the top square commute. The additional assumptions are that the right
hand column is exact (i.e., im d^C_{n+1} = ker d^C_n), P_{n+1} is projective and
the left hand column is a chain complex (i.e., d^P_n ∘ d^P_{n+1} = 0).

P_{n+1} →(f_{n+1}) C_{n+1}
  ↓ d^P_{n+1}        ↓ d^C_{n+1}
P_n    →(f_n)     C_n
  ↓ d^P_n            ↓ d^C_n
P_{n−1} →(f_{n−1}) C_{n−1}

Proof. The assumptions imply that d^C_n ∘ (f_n ∘ d^P_{n+1}) = f_{n−1} ∘ d^P_n ∘ d^P_{n+1} = 0.
By the previous lemma this implies that f_n ∘ d^P_{n+1} lifts to C_{n+1}, and
this lift is the required f_{n+1}. □
Theorem 7.9. Suppose P_∗ → M → 0 is a projective chain complex
(augmented) over M and C_∗ → N → 0 is a resolution of N (i.e., an
exact sequence). Suppose f : M → N is any morphism. Then
(1) There exists a chain map f_∗ : P_∗ → C_∗ over f, i.e., the following
diagram commutes:

P_∗ →(f_∗) C_∗
 ↓ε          ↓ε
M  →(f)   N

(2) f_∗ is unique up to chain homotopy.
Proof. (1) Since ε : C_0 → N is an epimorphism and P_0 is projective,
the map f ∘ ε : P_0 → N lifts to a map f_0 : P_0 → C_0. The rest is by
induction using the lemma we just proved.
(2) To prove the existence of the homotopy, I first restated Lemma
7.7 as an equation. It says that for any homomorphism f : P → C_n so
that d_n ∘ f = 0, there exists a homomorphism f̃ : P → C_{n+1} so that
d_{n+1} ∘ f̃ = f.
We want to show that f_∗ is unique up to homotopy. So, suppose
f_∗, g_∗ are two chain maps over f : M → N. Then we want to show
that there exists a sequence of morphisms h_n : P_n → C_{n+1} so that

d^C_{n+1} ∘ h_n + h_{n−1} ∘ d^P_n = g_n − f_n

We set h_{−1} = 0 by definition. So, for n = 0 we get:

d_1 ∘ h_0 = g_0 − f_0

First, h_0 exists because ε ∘ (g_0 − f_0) = (f − f) ∘ ε = 0. If h_0, …, h_{n−1}
exist satisfying the above equation then in particular we have:

(7.1) d_n ∘ h_{n−1} + h_{n−2} ∘ d_{n−1} = g_{n−1} − f_{n−1}

We want to show that h_n exists satisfying the equation

d^C_{n+1} ∘ h_n = g_n − f_n − h_{n−1} ∘ d^P_n

The right hand side is the f of Lemma 7.7 and the map that we want
(h_n) is the f̃ of Lemma 7.7. So, all we need to do is show that
d^C_n ∘ (g_n − f_n − h_{n−1} ∘ d^P_n) = 0:

d_n ∘ (g_n − f_n − h_{n−1} ∘ d_n) = d_n ∘ g_n − d_n ∘ f_n − d_n ∘ h_{n−1} ∘ d_n
= g_{n−1} ∘ d_n − f_{n−1} ∘ d_n − d_n ∘ h_{n−1} ∘ d_n

Factoring out the d_n and using Equation (7.1) we get:

= (g_{n−1} − f_{n−1} − d_n ∘ h_{n−1}) ∘ d_n = h_{n−2} ∘ d_{n−1} ∘ d_n = 0

This is 0 since d_{n−1} ∘ d_n = 0. Thus h_n exists and f_∗ ≃ g_∗. □
This gives us the statement that we really want:
Corollary 7.10. In any abelian category with enough projectives, any
object A has a projective resolution P_∗ → A. Furthermore, any two
projective resolutions of A are homotopy equivalent.
Proof. If there are two projective resolutions P_∗, P′_∗ then the first part
of the theorem above tells us that there are chain maps f_∗ : P_∗ → P′_∗
and g_∗ : P′_∗ → P_∗ which cover the identity map on A. Since g_∗ ∘ f_∗ and
the identity map are both chain maps P_∗ → P_∗ over the identity of A,
the second part of the theorem tells us that

g_∗ ∘ f_∗ ≃ id_{P_∗}

and similarly f_∗ ∘ g_∗ ≃ id_{P′_∗}. So, P_∗ ≃ P′_∗. □
The dual argument gives us the following. [In general you should
state the dual theorem but not prove it.]
Theorem 7.11. In any abelian category with enough injectives, any
object B has an injective coresolution. Furthermore, any two injective
coresolutions of B are homotopy equivalent.
Following this rule, I should also give the statement of the dual of
the previous theorem:
Theorem 7.12. Suppose 0 → M → Q_∗ is an injective cochain complex
under M and 0 → N → C_∗ is a coresolution of N (i.e., a long exact
sequence). Suppose f : N → M is any morphism. Then
(1) There exists a cochain map f_∗ : C_∗ → Q_∗ under f, i.e., the
following diagram commutes:

C_∗ →(f_∗) Q_∗
 ↑           ↑
N  →(f)   M

(2) f_∗ is unique up to chain homotopy.
7.5. Derived functors.
Definition 7.13. Suppose that A, B are abelian categories, where A
has enough injectives, and F : A → B is a left exact (additive) functor.
Then the right derived functors R^i F are defined as follows. For any
object B of A choose an injective coresolution B → Q_∗ and let R^i F(B)
be the ith cohomology of the cochain complex F(Q_∗):

0 → F(Q_0) → F(Q_1) → F(Q_2) → ···

In the case F = Hom_R(A, −), the right derived functors are the Ext
functors:

Ext^i_R(A, B) := R^i F(B) = H^i(Hom_R(A, Q_∗))

Note that the derived functors are only well-defined up to isomorphism.
If there is another choice of injective coresolution Q′_∗ then
Q_∗ ≃ Q′_∗, which implies that F(Q_∗) ≃ F(Q′_∗), which implies that

H^i(F(Q_∗)) ≅ H^i(F(Q′_∗))

I pointed out later that, for R-modules, there is a canonical minimal
injective coresolution for any module.
By definition, F(Q_∗) involves only the injective objects. The term
F(B) is deliberately excluded. But F is left exact by assumption. So
we have an exact sequence

0 → F(B) → F(Q_0) → F(Q_1)

Thus:
Theorem 7.14. The zeroth derived functor R^0 F is canonically isomorphic
to F. In particular,

Ext^0_R(A, B) ≅ Hom_R(A, B)
At this point we tried to do an example: compute Ext^1_Z(Z/3, Z/2).
We took an injective coresolution of Z/2:

0 → Z/2Z → Q/2Z →(j) Q/Z → 0

We used the fact that any quotient of a divisible group is divisible. We
applied Hom(Z/3, −) to the injective part:

j_∗ : Hom(Z/3, Q/2Z) → Hom(Z/3, Q/Z)

Then I claimed that this map is an isomorphism. Here is a simple-minded
proof. A homomorphism Z/3 → Q/Z is given by its value
on the generator 1 + 3Z of Z/3Z. This must be a coset a/b + Z so
that 3a/b ∈ Z. In other words b = 3 and a = 0, 1 or 2. Similarly,
a homomorphism Z/3 → Q/2Z sends the generator of Z/3 to a coset
a/b + 2Z so that 3a/b ∈ 2Z. So, b = 3 and a = 0, 2 or 4. So, both
of these groups have exactly three elements and a simple calculation
shows that j_∗ is a bijection. It follows that Ext^1_Z(Z/3, Z/2) =
coker(j_∗) = 0.
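The same answer can be checked by brute force using the other description of Ext (via a projective resolution, proved equivalent in Theorem 7.24 below): applying Hom(−, Z/b) to the free resolution 0 → Z →(a) Z → Z/a → 0 gives Ext^1_Z(Z/a, Z/b) ≅ (Z/b)/a(Z/b) ≅ Z/gcd(a, b). A small sketch (the helper name is ours):

```python
from math import gcd

def ext1_size(a: int, b: int) -> int:
    """|Ext^1_Z(Z/a, Z/b)| computed from the resolution
    0 -> Z --a--> Z -> Z/a -> 0.  Hom(Z, Z/b) = Z/b and the induced
    map is multiplication by a, so Ext^1 is the cokernel (Z/b)/a(Z/b)."""
    image = {(a * x) % b for x in range(b)}   # the subgroup a(Z/b)
    return b // len(image)                    # index = size of cokernel

assert ext1_size(3, 2) == 1        # Ext^1(Z/3, Z/2) = 0, as in the text
assert ext1_size(4, 6) == gcd(4, 6)
```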
7.5.1. delta operator. One of the basic properties of the derived functors
is that they fit into a long exact sequence.
Theorem 7.15. Given any short exact sequence

0 → A → B → C → 0

there is a sequence of homomorphisms

δ_n : R^n F(C) → R^{n+1} F(A)

making the following sequence exact:

0 → F(A) → F(B) → F(C) →(δ_0) R^1 F(A) → R^1 F(B) → R^1 F(C)
→(δ_1) R^2 F(A) → R^2 F(B) → R^2 F(C) →(δ_2) R^3 F(A) → ···

Furthermore, δ_n is natural in the sense that, given any commuting
diagram with exact rows:

0 → A  → B  → C  → 0
    ↓f     ↓g     ↓h
0 → A′ → B′ → C′ → 0

we get a commuting square:

R^n F(C)  →(δ_n) R^{n+1} F(A)
   ↓h_∗              ↓f_∗
R^n F(C′) →(δ_n) R^{n+1} F(A′)

I gave the following construction of these operators. First I needed
the following lemmas, the first being obvious.
Lemma 7.16. If Q is injective then R^n F(Q) = 0 for all n ≥ 1.
Lemma 7.17. If 0 → A → Q → K → 0 is a short exact sequence
where Q is injective, then we have an exact sequence

0 → F(A) → F(Q) → F(K) → R^1 F(A) → 0

and

R^n F(K) ≅ R^{n+1} F(A)

for all n ≥ 1.
Proof. We can use Q = Q_0 as the beginning of an injective coresolution
of A:

0 → A →(j) Q_0 →(j_0) Q_1 →(j_1) Q_2 →(j_2) Q_3 → ···

Since coker j ≅ im j_0 = ker j_1 ≅ K, we can break this up into two exact
sequences:

0 → A →(j) Q_0 → K → 0

0 → K → Q_1 →(j_1) Q_2 →(j_2) Q_3 → ···

The second exact sequence shows that the injective coresolution of K
is the same as that for A but shifted to the left with the first term
deleted. So,

R^n F(K) ≅ R^{n+1} F(A)

for all n ≥ 1.
When n = 0 we have, by left exactness of F, the following exact
sequence:

0 → F(K) → F(Q_1) →((j_1)_∗) F(Q_2)

In other words, F(K) = ker (j_1)_∗. The image of (j_0)_∗ : F(Q_0) →
F(Q_1) lands in F(K) = ker (j_1)_∗. The cokernel is by definition the first
cohomology of the cochain complex F(Q_∗), which is equal to R^1 F(A).
So, we get the exact sequence

F(Q_0) → F(K) → R^1 F(A) → 0

We already know that the kernel of F(Q_0) → F(K) is F(A), so this
proves the lemma. □
My construction of the delta operator proceeded as follows. Start
with any short exact sequence 0 → A → B → C → 0. Then choose an
injective coresolution of A:

0 → A →(j) Q_0 →(j_0) Q_1 →(j_1) Q_2 →(j_2) Q_3 → ···

Let K = ker j_1 = im j_0 ≅ coker j. Since Q_0 is injective, the map A →
Q_0 extends to B, and cokernels map to cokernels, giving a commuting
diagram:

      0 → A → B    → C → 0
(7.2)     ↓id_A ↓f       ↓g
      0 → A →(j) Q_0 →(p) K → 0

The map g : C → K induces a map g_∗ : R^n F(C) → R^n F(K) and I
defined the connecting homomorphism δ_n for n ≥ 1 to be the composition:

δ_n : R^n F(C) →(g_∗) R^n F(K) ≅ R^{n+1} F(A)
I showed that this is independent of the choice of g : C → K since, for
any other choice g′, the difference g − g′ lifts to Q_0: f − f′ : B → Q_0
is zero on A and therefore factors through C. So, g_∗ − g′_∗ = R^n F(g − g′)
factors through R^n F(Q_0) = 0, so g_∗ = g′_∗. To show independence from
the choice of Q_0 I said that there was a canonical choice for Q_0 called
the injective envelope of A, and I promised to write up the proof of that.
What about n = 0? In this case, Lemma 7.17 gives us a 4 term exact
sequence:

0 → F(A) → F(Q_0) → F(K) → R^1 F(A) → 0

So, we can define δ_0 : F(C) → R^1 F(A) to be the composition

δ_0 : F(C) →(g_∗) F(K) → R^1 F(A)

Again, for any other choice g′ : C → K, the difference g − g′ factors
through Q_0. This time F(Q_0) ≠ 0. But that is OK since the image of
F(Q_0) is in the kernel of the next map F(K) → R^1 F(A) by the 4 term
exact sequence.
7.5.2. Proof of Theorem 7.15. Write the short exact sequence as
0 → A → B →(β) C → 0. From your homework you might remember
that the diagram (7.2) gives a short exact sequence:

0 → B →((f,β)) Q_0 ⊕ C →(p−g) K → 0

where the first map is b ↦ (f(b), β(b)) and the second is (q, c) ↦
p(q) − g(c). Since R^n F(Q_0) = 0 for n ≥ 1, the top row in the following
diagram is exact:

··· → R^{n−1}F(K) →(δ′_{n−1}) R^n F(B) → R^n F(C) →(g_∗) R^n F(K) → ···
          ↓≅                     ↓=              ↓=               ↓≅
··· → R^n F(A)   →    R^n F(B) → R^n F(C) →(δ_n) R^{n+1} F(A) → ···

In the top sequence R^n F(C) occurs in position 3n − 1 and in the
bottom sequence it occurs in position 3n. Therefore, exactness of the
bottom sequence for all 0 → A → B → C → 0 at position k − 1
implies the exactness of the top sequence at position k − 1, which implies
the exactness of the bottom sequence at position k. So, it is exact
everywhere, proving the theorem.
We just need to check that the diagram commutes. (Actually it
doesn't. But that's OK.) The middle square obviously commutes. The
right hand square commutes by definition of δ_n. The left square anticommutes
(i.e., going one way is negative the other way). But that is
good enough for the argument to work. This will follow from the way
that δ′_{n−1} and the vertical isomorphisms are defined.
The morphism α : A → B (the first map of the short exact sequence)
induces a cochain map of injective coresolutions:

0 → A →(j)   Q_0  → Q_1  → Q_2  → ···
    ↓α          ↓α_0    ↓α_1    ↓α_2
0 → B →(j^B) Q^B_0 → Q^B_1 → Q^B_2 → ···

The cochain map α_∗ induces a cochain map α_∗ : F(Q_∗) → F(Q^B_∗).
The induced map in cohomology is α_∗ : R^n F(A) → R^n F(B) by definition.
If the cokernels of j, j^B are K, L we get the commuting diagrams:

0 → A →(j)   Q_0  →(p)   K → 0
    ↓α          ↓α_0        ↓ᾱ
0 → B →(j^B) Q^B_0 →(p^B) L → 0

0 → K → Q_1  → Q_2  → ···
    ↓ᾱ     ↓α_1    ↓α_2
0 → L → Q^B_1 → Q^B_2 → ···

Just as we had R^n F(K) ≅ R^{n+1} F(A), we also have R^n F(L) ≅ R^{n+1} F(B),
and the above diagrams show that the maps of injective coresolutions
induced by α : A → B and ᾱ : K → L are the same but shifted.
In other words, α_∗ : R^n F(A) → R^n F(B) is the same as the map
ᾱ_∗ : R^{n−1} F(K) → R^{n−1} F(L).
We need one more commuting diagram:

0 → B →((f,β)) Q_0 ⊕ C →(p−g) K → 0
    ‖               ↓(α_0, −h)       ↓ᾱ
0 → B →(j^B)   Q^B_0    →(p^B)   L → 0

Here h : C → Q^B_0 is the morphism needed to make the diagram commute:
the maps α_0 ∘ f and j^B : B → Q^B_0 are not equal, but they agree
on A. So their difference factors through C, i.e., there is h : C → Q^B_0
so that

h ∘ β = α_0 ∘ f − j^B

The coboundary map δ′_{n−1} : R^{n−1} F(K) → R^n F(B) is given by the
composition

δ′_{n−1} : R^{n−1} F(K) →(ᾱ_∗) R^{n−1} F(L) ≅ R^n F(B)

By what I said in the last paragraph, this is the same as α_∗ : R^n F(A) →
R^n F(B), proving the theorem. (The n = 1 case is slightly different.) □
7.6. Left derived functors. There are two cases when we use projective
resolutions instead of injective coresolutions:
(1) When the functor is left exact but contravariant, e.g., F =
Hom_R(−, B).
(2) When the functor is right exact and covariant, e.g., F = − ⊗_R B.
In both cases we take a projective resolution P_∗ → A → 0 and define
the left derived functors to be L_n F(A) = H^n(F(P_∗)) in the first case
and L_n F(A) = H_n(F(P_∗)) in the second case.
Definition 7.18. The left derived functors of F(A) = A ⊗_R B are
called L_n F(A) = Tor^R_n(A, B).
7.6.1. review of tensor product. Following Lang, I will take tensor products
only over commutative rings. The advantage is that A ⊗_R B will be
an R-module. The tensor product is defined by a universal condition.
Definition 7.19. Suppose that A, B are modules over a commutative
ring R. Then a map

g : A × B → C

from the Cartesian product A × B to a third R-module C is called
R-bilinear if it is an R-homomorphism in each variable. I.e., for each
a ∈ A, the mapping b ↦ g(a, b) is a homomorphism B → C and
similarly g(−, b) ∈ Hom_R(A, C) for all b ∈ B. A ⊗_R B is defined to be
the R-module which is the target of the universal R-bilinear map

f : A × B → A ⊗_R B

When I say that f is universal I mean that for any other R-bilinear
map g : A × B → C there is a unique R-homomorphism h : A ⊗_R B → C
so that g = h ∘ f.
The universal property tells us that A ⊗_R B is unique if it exists. To
prove existence we need to construct it. But this is easy. You just take
A ⊗_R B to be the free R-module generated by all symbols a ⊗ b, where
a ∈ A, b ∈ B, modulo the relations that are required, namely:
(1) (ra) ⊗ b = r(a ⊗ b)
(2) (a + a′) ⊗ b = a ⊗ b + a′ ⊗ b
(3) a ⊗ rb = r(a ⊗ b)
(4) a ⊗ (b + b′) = a ⊗ b + a ⊗ b′
I pointed out that the universal property can be expressed as an
isomorphism

Hom_R(A ⊗ B, C) ≅ BiLin_R(A × B, C)
And the definition of R-bilinear can be expressed as the isomorphisms

BiLin_R(A × B, C) ≅ Hom_R(A, Hom_R(B, C)) ≅ Hom_R(B, Hom_R(A, C))

So, we conclude that

Hom_R(A ⊗ B, C) ≅ Hom_R(A, Hom_R(B, C))

This is a special case of:

Hom_R(F(A), C) ≅ Hom_R(A, G(C))

with F = − ⊗ B and G = Hom_R(B, −). When we have this kind of
isomorphism, F is called the left adjoint, G is called the right
adjoint, and F, G are called adjoint functors.
Lemma 7.20. Any left adjoint functor is right exact. In particular,
tensor product is right exact. Also, any right adjoint functor is left
exact.
Proof. In the first case, suppose that F is a left adjoint functor and

(7.3) 0 → A →(α) A′ → A″ → 0

is a short exact sequence. Then for any C, the left exactness of
Hom_R(−, G(C)) gives an exact sequence

0 → Hom_R(A″, G(C)) → Hom_R(A′, G(C)) → Hom_R(A, G(C))

By adjunction, this is equivalent to an exact sequence

0 → Hom_R(F(A″), C) → Hom_R(F(A′), C) → Hom_R(F(A), C)

The exactness of this sequence for all C is equivalent to the exactness
of the following sequence, by definition of coker F(α):

F(A) →(F(α)) F(A′) → F(A″) → 0

The left exactness of G is analogous. (Also, the proof uses the left
exactness of Hom, so the second case is dumb.) □
Take the sequence (7.3) and suppose that it splits, i.e., A′ ≅ A ⊕ A″
and there is a retraction r : A′ → A so that r ∘ α = id_A. Then, in the
exact sequence

F(A) →(F(α)) F(A′) → F(A″) → 0

F(α) is a monomorphism since F(r) ∘ F(α) = F(r ∘ α) = id_{F(A)}. So,
we get a short exact sequence which furthermore splits. This proves
the following.
Lemma 7.21. If F is any right (or left) exact functor then F(A ⊕ A″) ≅
F(A) ⊕ F(A″). In particular,

(A ⊕ A″) ⊗_R B ≅ (A ⊗_R B) ⊕ (A″ ⊗_R B)

Another important lemma was the following.
Lemma 7.22. R ⊗_R B is isomorphic to B as R-modules.
Proof. I showed that B satisfies the universal property. Let

f : R × B → B

be the map f(r, b) = rb. This is R-bilinear when R is commutative.
Suppose that g : R × B → C is another R-bilinear map. Then we
can define h : B → C by h(b) = g(1, b). This is R-linear since g is
R-bilinear. The required diagram commutes since

h ∘ f(r, b) = h(rb) = g(1, rb) = rg(1, b) = g(r, b)

Furthermore, h is unique since it has no choice but to send b to g(1, b).
Since B satisfies the universal property, B ≅ R ⊗ B. Also the proof
gives the isomorphism: r ⊗ b ∈ R ⊗ B corresponds to rb ∈ B. □
There was one other lemma that I didn't prove because it was obvious.
Lemma 7.23. A ⊗ B ≅ B ⊗ A
7.6.2. computations. With these lemmas, I did some computations.
Suppose that R = Z and A = Z/n. Then a projective resolution
of A is given by

0 → Z →(n) Z → Z/n → 0

Since this sequence is exact it gives the following right exact sequence
for any abelian group B:

Z ⊗ B →(n) Z ⊗ B → Z/nZ ⊗ B → 0

Using the lemma that R ⊗_R B ≅ B, this becomes:

B →(n) B → Z/nZ ⊗ B → 0

So, we conclude that

Z/nZ ⊗ B ≅ B/nB

More generally, if A is any finitely generated abelian group then

A ≅ Z^r ⊕ Z/t_1 ⊕ Z/t_2 ⊕ ··· ⊕ Z/t_n

and, since tensor product distributes over direct sum, we get:

A ⊗_Z B ≅ B^r ⊕ B/t_1B ⊕ B/t_2B ⊕ ··· ⊕ B/t_nB
The derived functor Tor^Z_1(Z/n, B) is by definition the kernel of the map

Z ⊗ B →(n) Z ⊗ B

Since Z ⊗ B ≅ B, this is just the map B → B given by multiplication
by n. So,

Tor^Z_1(Z/n, B) = {b ∈ B | nb = 0}

It is the subgroup of B consisting of all elements whose order divides
n, i.e., the n-torsion subgroup of B. Maybe that is why it is called
Tor.
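For B = Z/m both computations can be checked by brute force: tensoring the resolution 0 → Z →(n) Z → Z/n → 0 with Z/m gives the two-term complex Z/m →(n) Z/m, whose cokernel is Z/n ⊗ Z/m and whose kernel is Tor_1(Z/n, Z/m); both come out to Z/gcd(n, m). A sketch (the helper name is ours):

```python
from math import gcd
from itertools import product

def tensor_and_tor(n, m):
    """Sizes of Z/n (x) Z/m and Tor_1(Z/n, Z/m), computed from the
    complex Z/m --n--> Z/m obtained by tensoring the resolution of Z/n."""
    image = {(n * b) % m for b in range(m)}            # n * (Z/m)
    cokernel_size = m // len(image)                    # |Z/n (x) Z/m|
    kernel_size = sum(1 for b in range(m) if (n * b) % m == 0)  # n-torsion
    return cokernel_size, kernel_size

# Both groups are cyclic of order gcd(n, m):
for n, m in product(range(1, 8), repeat=2):
    assert tensor_and_tor(n, m) == (gcd(n, m), gcd(n, m))
```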
7.6.3. extension of scalars. Suppose that R is a subring of S (e.g.,
Z ⊆ R). A homomorphism of free R-modules

f : R^n → R^m

is given by a matrix M(f) = (a_{ij}) as follows. If the basis elements of
R^n are e_j and the basis elements of R^m are e_i then

f(e_j) = Σ_{i=1}^m a_{ij} e_i

for some a_{ij} ∈ R. These numbers determine f since, for an arbitrary
element x = Σ x_j e_j ∈ R^n, we have

f(x) = f(Σ_j x_j e_j) = Σ_{i,j} x_j a_{ij} e_i = Σ_{i,j} a_{ij} x_j e_i

since R is commutative. (Take free right R-modules when R is not
commutative and this will still work.) This can be written in matrix
form:

f(x_1, x_2, …, x_n)^T = (Σ_j a_{1j} x_j, Σ_j a_{2j} x_j, …, Σ_j a_{mj} x_j)^T
= (a_{ij})(x_1, x_2, …, x_n)^T

When you tensor with S you get R^n ⊗_R S ≅ (R ⊗_R S)^n ≅ S^n:

R^n ⊗_R S ≅ S^n →(f ⊗ id_S) R^m ⊗_R S ≅ S^m

The claim is that M(f ⊗ id_S) = M(f). So, f ⊗ id_S is obtained
from f by extending scalars to S. If you have an integer matrix, you
just take the same matrix and consider it as a real matrix.
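The "same matrix, bigger scalars" point can be made concrete with a tiny numerical sketch (our own example matrix): one integer matrix serves both as f on the lattice Z² and as f ⊗ id_R on R².

```python
import numpy as np

# An integer matrix M(f) for f : Z^2 -> Z^2.  Extension of scalars
# Z -> R means M(f (x) id_R) is literally the same matrix, now
# acting on real vectors.
M = np.array([[2, 1],
              [0, 3]])

x_int = np.array([4, -1])        # an element of Z^2
x_real = np.array([0.5, np.pi])  # an element of R^2 = Z^2 (x)_Z R

print(M @ x_int)                 # f on the lattice: [7, -3]
print(M @ x_real)                # f (x) id_R on R^2, same matrix
```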
7.6.4. two definitions of Ext. The last thing I did was to prove that
the two definitions of Ext^n_R(A, B) that we now had were equivalent.
Theorem 7.24. If P_∗ → A is a projective resolution of A and B → Q_∗
is an injective coresolution of B then

H^n(Hom(P_∗, B)) ≅ H^n(Hom(A, Q_∗))

So, either formula gives Ext^n_R(A, B).
Proof. The theorem is true in the case n = 0 because both sides
are isomorphic to Hom_R(A, B). So, suppose n ≥ 1. I gave the
proof in the case n = 2.
I want to construct a homomorphism

H^n(Hom(A, Q_∗)) → H^n(Hom(P_∗, B))

So, take an element [f] ∈ H^n(Hom(A, Q_∗)). The notation means

f ∈ ker((j_2)_∗ : Hom_R(A, Q_2) → Hom_R(A, Q_3))

[f] = f + im((j_1)_∗ : Hom_R(A, Q_1) → Hom_R(A, Q_2))

This gives the following diagram, in which the vertical maps are the
diagonal maps f : A → Q_2, f_0 : P_0 → Q_1, f_1 : P_1 → Q_0, f_2 : P_2 → B
and f_3 : P_3 → 0:

(7.4)
P_3 →(d_3) P_2 →(d_2) P_1 →(d_1) P_0 →(d_0) A → 0
      f_2 ↘      f_1 ↘      f_0 ↘       f ↘
0 → B → Q_0 →(j_0) Q_1 →(j_1) Q_2 →(j_2) Q_3

Since j_2 ∘ f = 0, f maps to the kernel K of j_2. But P_∗ is a projective
resolution of A and B → Q_0 → ··· → Q_{n−1} → K is a resolution of K.
So, we proved that there is a chain map from P_∗ to this resolution of
K which is unique up to chain homotopy. This gives the maps f_0, f_1,
etc. in the diagram. Note that f_2 ∘ d_3 = 0 ∘ f_3 = 0. So,

f_2 ∈ ker((d_3)_∗ : Hom_R(P_2, B) → Hom_R(P_3, B))

But f_2 is only well defined up to a homotopy h : P_1 → B. So, we could
get f_2′ = f_2 + h ∘ d + d ∘ h. But the second term must be zero since it
goes through 0, and the first term

h ∘ d_2 ∈ im((d_2)_∗ : Hom_R(P_1, B) → Hom_R(P_2, B))

This means that

[f_2] = f_2 + im (d_2)_∗

is a well defined element of H^2(Hom(P_∗, B)) and we have a homomorphism:

Hom_R(A, ker j_2) → H^2(Hom(P_∗, B))
But this homomorphism is zero on the image of (j_1)_∗ : Hom(A, Q_1) →
Hom(A, Q_2) because, if f = j_1 ∘ g, then we can take f_0 = g ∘ d_0 and
f_1 = 0 = f_2. Therefore, we have a well defined map

H^2(Hom(A, Q_∗)) → H^2(Hom(P_∗, B))

which sends [f] to [f_2].
This is enough! The reason is that the diagram (7.4) is symmetrical.
We can use the dual argument to define a map

H^2(Hom(P_∗, B)) → H^2(Hom(A, Q_∗))

which will send [f_2] back to [f]. So, the two maps are inverse to each
other, making them isomorphisms. □
By the way, this gives a symmetrical definition of Ext^n: namely, it is
the group of homotopy classes of chain maps from the chain complex
P_∗ → A → 0 to the cochain complex 0 → B → Q_∗ shifted by n.
Elements of Ext^n_R(A, B) are represented by vertical maps as in
diagram (7.4).
MATH 101B: ALGEBRA II
PART B: COMMUTATIVE ALGEBRA
I want to cover Chapters VIII, IX, X, XII. But it is a lot of material.
Here is a list of some of the particular topics that I will try to cover.
Maybe I won't get to all of it.
(1) integrality (VII.1)
(2) transcendental field extensions (VIII.1)
(3) Noether normalization (VIII.2)
(4) Nullstellensatz (IX.1)
(5) idealvariety correspondence (IX.2)
(6) primary decomposition (X.3) [if we have time]
(7) completion (XII.2)
(8) valuations (VII.3, XII.4)
There are some basic facts that I will assume because they are much
earlier in the book. You may want to review the definitions and theorems:
Localization (II.4): Invert a multiplicative subset, form the quotient
field of an integral domain (= entire ring), localize at a
prime ideal.
PIDs (III.7): k[X] is a PID. All f.g. modules over PIDs are
direct sums of cyclic modules. And we proved in class that all
submodules of free modules are free over a PID.
Hilbert basis theorem (IV.4): If A is Noetherian then so is A[X].
Algebraic field extensions (V): Every field has an algebraic closure.
If you adjoin all the roots of an equation you get a normal
(Galois) extension.
An excellent book in this area is Atiyah–Macdonald, Introduction to
Commutative Algebra.
Contents
1. Integrality 2
1.1. Integral closure 3
1.2. Integral elements as lattice points 4
1.3. Proof of Lemma 1.3 7
2. Transcendental extensions 8
2.0. Purely transcendental extensions 8
2.1. Transcendence basis 9
2.2. Noether Normalization Theorem 11
3. Outline of rest of Part B 15
3.1. Valuation rings 15
3.2. Noetherian rings 17
4. Algebraic spaces 18
4.0. Preliminaries 18
4.1. Hilberts Nullstellensatz 19
4.2. Algebraic sets and varieties 22
5. Noetherian rings 26
5.1. Hilbert basis theorem 26
5.2. Noetherian modules 27
5.3. Associated primes 29
5.4. Primary decomposition 34
5.5. Spec(R) 38
6. Local rings 39
6.1. Basic definitions and examples 39
6.2. Nakayamas Lemma 40
6.3. Complete local rings 42
6.4. Discrete valuation rings 43
1. Integrality
I just want to go over briefly the basic properties of integral extensions.
All rings are commutative with 1.
Definition 1.1. Suppose that R is a subring of S and S. Then is
integral over R if any of the following equivalent conditions is satisfied.
(1) is the root of a monic polynomial with coefficients in R. I.e.,
f () = n + r1 n1 + + rn = 0
for some ri R.
(2) The subring R[] S is a finitely generated (f.g.) Rmodule.
(3) There exists a faithful R[]module which is a f.g. Rmodule.
Each condition makes some aspect of integrality most apparent. The first condition implies:
Lemma 1.2. If α is integral over R then α is integral over any subring of S which contains R.
The second (and third) condition implies:
Lemma 1.3. If R ⊆ T are subrings of S, α ∈ S is integral over T and T is finitely generated as an R-module, then α is integral over R.
This follows from the following lemma.
Lemma 1.4. If R is a subring of T and T is finitely generated as an R-module then any f.g. T-module is also f.g. as an R-module.
Proof. Let x_1, …, x_n be generators of T as an R-module. Then any t ∈ T can be written as t = ∑ r_j x_j. If M is a f.g. T-module with generators y_1, …, y_m then any element of M can be written as
∑ t_i y_i = ∑∑ r_{ij} x_j y_i.
So, the products x_j y_i generate M as an R-module. □
The last condition looks strange. A faithful module M is one where the only element of the ring which annihilates M is 0. (Show that any nonzero free module over any ring is faithful.) In other words, M is a faithful R[α]-module if the action of R[α] on M gives a ring monomorphism:
R[α] ↪ End_Z(M)
One immediate consequence of the third definition is the following:
Lemma 1.5. Any β ∈ R[α] is also integral over R.
Proof.
β ∈ R[β] ⊆ R[α] ↪ End_Z(M)
and M is f.g. as an R-module. □
Proof of equivalence of three definitions. (1) ⇒ (2) since 1, α, …, α^{n−1} generate R[α] as an R-module.
(2) ⇒ (3): M = R[α] is a faithful f.g. R[α]-module.
(3) ⇒ (1). Suppose that M is a faithful R[α]-module which is generated by w_1, w_2, …, w_n as an R-module. Then, for each w_j,
(1.1)   αw_j = ∑_{i=1}^n a_{ij} w_i
for some a_{ij} ∈ R. Then I claim that α is a root of the characteristic polynomial of the n × n matrix A = (a_{ij}):
f(t) = det(tI_n − A) = t^n − (Tr A)t^{n−1} + ⋯ + (−1)^n det A
The reason is that Equation (1.1) can be written in matrix form as:
(w_1, w_2, …, w_n)αI_n = (w_1, w_2, …, w_n)A
or
(w_1, w_2, …, w_n)(αI_n − A) = (0, 0, …, 0)
If we multiply by the adjoint matrix (αI_n − A)^{ad} and use the fact that
(αI_n − A)(αI_n − A)^{ad} = det(αI_n − A)I_n = f(α)I_n
we get:
(w_1, …, w_n)f(α)I_n = (f(α)w_1, f(α)w_2, …, f(α)w_n) = (0, 0, …, 0)
Since f(α)w_j = 0 for all generators w_j of M we get f(α)M = 0. This implies that f(α) = 0 since M is a faithful R[α]-module. □
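The characteristic polynomial trick can be tried on a small case (my example, checked with sympy, not part of the original notes): take α = √2 acting on M = Z[√2] with generators w_1 = 1, w_2 = √2. Then αw_1 = w_2 and αw_2 = 2w_1, and the characteristic polynomial of the resulting matrix is the familiar monic polynomial satisfied by √2:

```python
from sympy import Matrix, symbols, sqrt, simplify

t = symbols('t')
# alpha = sqrt(2) acting on Z[sqrt(2)] with generators w1 = 1, w2 = sqrt(2):
#   alpha*w1 = 0*w1 + 1*w2,  alpha*w2 = 2*w1 + 0*w2
A = Matrix([[0, 2], [1, 0]])
f = A.charpoly(t).as_expr()
print(f)                              # t**2 - 2
print(simplify(f.subs(t, sqrt(2))))   # 0: alpha is a root of the monic char. polynomial
```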
1.1. Integral closure.
Proposition 1.6. If R is a subring of S then the set of all α ∈ S which are integral over R forms a ring (which contains R). This is called the integral closure of R in S.
Proof. Suppose that α, β ∈ S are integral over R. Then β is integral over R[α] by Lemma 1.2. Any element of R[α, β] is integral over R[α] by Lemma 1.5. So every element of R[α, β] (e.g., α + β, αβ) is also integral over R by Lemma 1.3. Therefore, α + β and αβ are integral over R and the integral elements form a ring. □
Here is an example.
Theorem 1.7. Z is the integral closure of Z in Q.
Proof. Suppose that x = a/b ∈ Q is integral over Z, where a, b ∈ Z are relatively prime. Then there are integers n, c_1, …, c_n so that
x^n + c_1 x^{n−1} + c_2 x^{n−2} + ⋯ + c_n = 0.
Multiplying by b^n we get the integer equation
a^n + c_1 a^{n−1} b + c_2 a^{n−2} b^2 + ⋯ + c_n b^n = 0.
This implies that b divides a^n. Since a, b are relatively prime this means b = ±1 and x = a/b ∈ Z. □
Definition 1.8. A domain (= entire ring) is called integrally closed if it is integrally closed in its fraction field.
The last theorem shows that Z is integrally closed.
Here is another example. The domain R = Z + Z√5 is not integrally closed since its fraction field contains the golden ratio
φ = (1 + √5)/2
which is a root of the monic polynomial
x^2 − x − 1.
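As a quick check (a sympy sketch, not part of the original notes): the minimal polynomial of φ is monic with integer coefficients, so φ is integral over Z even though φ ∉ Z + Z√5:

```python
from sympy import sqrt, minimal_polynomial, symbols

x = symbols('x')
phi = (1 + sqrt(5)) / 2
p = minimal_polynomial(phi, x)
print(p)  # x**2 - x - 1: monic over Z, so phi is integral over Z
```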
1.2. Integral elements as lattice points. Suppose V is a vector space over a field k of characteristic 0 and B = {b_1, …, b_n} is a basis for V. Then the additive subgroup ZB generated by B forms a lattice L in V. (A lattice in V is defined to be a free additive subgroup whose free basis elements form a basis for V as a vector space over k.)
Theorem 1.9. Suppose that K is an algebraic number field, i.e., a finite extension of Q. Let O_K be the integral closure of Z in K. Then
(1) O_K is a lattice in K as a vector space over Q. (So, O_K is the free additive group generated by some Q-basis for K.)
(2) O_K contains any other subring of K which is finitely generated as an additive group. (So, O_K contains any subring of K which is a lattice.)
To prove this theorem we need to review the properties of the trace.
1.2.1. example. Take K = Q(i) where i = √−1. Then Q(i) has an automorphism σ given by complex conjugation:
σ(a + bi) = a − bi.
Q(i) is a Galois extension of Q with Galois group
Gal(Q(i)/Q) = {1, σ}.
Proposition 1.10. The ring of integers in Q(i) (= the integral closure of Z in Q(i)) is
O_{Q(i)} = Z[i] = {a + bi | a, b ∈ Z}.
Proof. Certainly, Z[i] ⊆ O_{Q(i)} since 1 and i are integral elements of Q(i). Conversely, suppose that α = a + bi is integral. Then σ(α) = a − bi is also integral. So, the sum and product of α, σ(α), which are called the trace and norm of α, are also elements of the ring O_{Q(i)}. Since O_{Q(i)} ∩ Q = Z (Theorem 1.7), these are rational integers:
Tr_{K/Q}(α) := α + σ(α) = 2a ∈ Z
N_{K/Q}(α) := ασ(α) = a^2 + b^2 ∈ Z
Also, Tr(iα) = −2b ∈ Z. These imply that a, b ∈ Z as claimed (if a = p/2 and b = q/2 with p, q ∈ Z, then a^2 + b^2 = (p^2 + q^2)/4 ∈ Z forces p and q to be even). □
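The integrality criterion can be tested concretely (a sympy sketch, my examples): 1 + i has a monic integer minimal polynomial, while for (1 + i)/2 the primitive integer minimal polynomial has leading coefficient 2, so it is not a Gaussian integer:

```python
from sympy import I, minimal_polynomial, symbols

x = symbols('x')
p1 = minimal_polynomial(1 + I, x)        # monic over Z: 1 + i is integral
p2 = minimal_polynomial((1 + I) / 2, x)  # not monic: (1 + i)/2 is not integral
print(p1)  # x**2 - 2*x + 2
print(p2)  # 2*x**2 - 2*x + 1
```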
1.2.2. properties of trace. Suppose that E is a finite separable field extension of K. This means that
[E : K] = [E : K]_s
where [E : K] = dim_K(E) is the degree of the extension and [E : K]_s is the number of distinct embeddings
σ_i : E ↪ K̄
over K. Here K̄ is the algebraic closure of K and an embedding over K means that σ_i is the inclusion map on K for each i = 1, …, n.
Recall that, if K has characteristic zero then all algebraic extensions of K are separable.
The trace Tr_{E/K} : E → K is defined by
Tr_{E/K}(x) = ∑ σ_i(x).
This is an element of K since any element of Gal(K̄/K) will permute the σ_i and therefore fix ∑ σ_i(x). It is clear that this mapping is K-linear since it is a sum of K-linear maps.
Lemma 1.11. If R is an integrally closed subring of K and S is the integral closure of R in E then Tr_{E/K}(α) ∈ R for all α ∈ S.
Proof. If α is integral over R then so is each σ_i(α). Thus their sum, Tr_{E/K}(α), is also integral over R. But this trace is an element of K. Since R is integrally closed in K, Tr_{E/K}(α) ∈ R. □
Lemma 1.12. If E is a finite separable extension of K, the mapping
E × E → K
sending (a, b) to Tr_{E/K}(ab) is a nondegenerate symmetric K-bilinear pairing which induces an isomorphism of E with its K-dual:
E ≅ E* = Hom_K(E, K).
Proof. (char 0 case) The bilinear map Tr(ab) gives a linear map
E ⊗_K E → K
whose adjoint is the map E → E*. Since E, E* have the same dimension, the map E → E* is an isomorphism if and only if it is a monomorphism. In other words, we need to show that, for any nonzero a ∈ E, there exists a b ∈ E so that Tr_{E/K}(ab) ≠ 0. This is easy: just take b = a^{−1}. Then Tr(ab) = Tr(1) = n ≠ 0 since char K = 0. □
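A concrete sketch of the nondegeneracy (the field Q(√2) is my choice, not from the notes): for E = Q(√2) over K = Q the two embeddings send √2 to ±√2, so Tr(u) = u + ū. The Gram matrix of the trace pairing in the basis {1, √2} has nonzero determinant:

```python
from sympy import sqrt, Matrix, expand, S

s = sqrt(2)
def tr(u):
    # trace from Q(sqrt(2)) down to Q: sum over the embeddings sqrt(2) -> +/- sqrt(2)
    return expand(u + u.subs(s, -s))

basis = [S.One, s]
G = Matrix(2, 2, lambda i, j: tr(basis[i] * basis[j]))
print(G)        # Matrix([[2, 0], [0, 4]])
print(G.det())  # 8, nonzero, so the trace pairing is nondegenerate
```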
1.2.3. proof of the theorem. Now we can prove Theorem 1.9. First choose a basis x_1, …, x_n for K over Q.
Claim: There are positive integers m_i so that y_i = m_i x_i ∈ O_K. To see this, suppose that x_i is a root of the polynomial f(X) ∈ Q[X]. By multiplying by all the denominators we may assume that f(X) ∈ Z[X]. So, there are integers m_j so that
m_0 x_i^d + m_1 x_i^{d−1} + ⋯ + m_d = 0.
Multiply by m_0^{d−1} and you get:
(m_0 x_i)^d + m_1 (m_0 x_i)^{d−1} + m_2 m_0 (m_0 x_i)^{d−2} + ⋯ = 0.
So, y_i = m_0 x_i ∈ O_K.
Thus O_K contains the n linearly independent elements y_i. So, the rank of this additive group is at least n.
By Lemma 1.12, there is a dual basis z_1, …, z_n ∈ K so that
Tr_{K/Q}(y_i z_j) = δ_{ij}.
Take any α ∈ O_K. Then α = ∑ a_j z_j for some a_j ∈ Q. But then
Tr_{K/Q}(y_i α) = a_i ∈ O_K ∩ Q = Z.
So, O_K is a subgroup of the additive free group generated by the z_i. This implies that it is free of rank at most n. But we already know that its rank is at least n. So, O_K ≅ Z^n and it spans K. □
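The Claim can be checked on a small case (my example, with sympy): x = √5/3 satisfies the non-monic equation 9x^2 − 5 = 0, and clearing the leading coefficient shows that m_0 x = 3x = √5 is integral:

```python
from sympy import sqrt, minimal_polynomial, symbols

x = symbols('x')
t = sqrt(5) / 3
q1 = minimal_polynomial(t, x)      # not monic: t itself is not integral
q2 = minimal_polynomial(3 * t, x)  # monic: 3*t = sqrt(5) lies in O_K
print(q1)  # 9*x**2 - 5
print(q2)  # x**2 - 5
```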
1.3. Proof of Lemma 1.3. I explained this in class but didn't write it up yet. You need properties (2) and (3) to show that if α ∈ S is integral over T and T is a f.g. R-module then α is integral over R.
Property (2) implies that T[α] is a f.g. T-module. Lemma 1.4 tells us that T[α] is a f.g. R-module. But T[α] contains R[α], so it is a faithful R[α]-module. Condition (3) then tells us that α is integral over R. □
2. Transcendental extensions
Transcendental means not algebraic. We want to look at finitely generated field extensions
k(x_1, x_2, …, x_n)
where not all the x_i are algebraic over k. Transcendental extensions are also called function fields. The simplest cases are:
2.0. Purely transcendental extensions. These are field extensions of k which are isomorphic to the fraction field of a polynomial ring:
k(X_1, …, X_n) = Q k[X_1, …, X_n].
Here the capital letters X_i are formal variables. So, k[X_1, …, X_n] is the ring of polynomials in the generators X_1, …, X_n with coefficients in the field k, and Q is the functor which inverts all the nonzero elements. I.e., QR is the quotient field of an integral domain R. Elements of k(X_1, …, X_n) are fractions f(X)/g(X) where g(X) ≠ 0. These are called rational functions in n variables.
When is k(x_1, …, x_n) ≅ k(X_1, …, X_n)?
Definition 2.1. Suppose that R is a k-algebra, i.e., a (commutative) ring which contains the field k. We say that y_1, …, y_n ∈ R are algebraically independent over k if the k-algebra homomorphism (a homomorphism of rings containing k which is the identity on k)
φ : k[X_1, …, X_n] → R
which sends X_i to y_i is a monomorphism. Equivalently,
f(y_1, …, y_n) ≠ 0
for every nonzero polynomial f over k in n variables.
For example, y ∈ E is algebraically independent (as a one-element set) if and only if it is transcendental over k.
Proposition 2.2. Suppose that E is a field extension of k and y_1, …, y_n ∈ E are algebraically independent over k. Then we get an isomorphism
k(X_1, …, X_n) ≅ k(y_1, …, y_n)
sending X_i to y_i.
Because of this we can define a purely transcendental field extension to be an extension k(y_1, …, y_n) generated by a set of algebraically independent elements.
2.1. Transcendence basis.
Definition 2.3. If E is a transcendental extension of k then a transcendence basis for E over k is defined to be a maximal algebraically independent subset {x_1, …, x_n}.
If {x_1, …, x_n} is a transcendence basis then, if we add one more element, the set becomes algebraically dependent.
2.1.1. algebraic dependence. Suppose that y_1, …, y_m are algebraically dependent and m is minimal. In other words, if we delete any element, the remaining elements become algebraically independent. Then there exists a nonzero polynomial f(X) ∈ k[X] so that
f(y_1, …, y_m) = 0.
Furthermore, by minimality of m, every variable y_i appears in the polynomial. The polynomial f(X) can be written as a polynomial in one variable X_1 with coefficients in k[X_2, …, X_m]. Plugging in the elements y_i for X_i we get:
f(y_1, …, y_m) = ∑_j g_j(y_2, …, y_m) y_1^j.
This means that y_1 is algebraic over the purely transcendental extension k(y_2, …, y_m). Similarly, each y_i is algebraic over the purely transcendental extension k(y_1, …, ŷ_i, …, y_m).
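A toy illustration (my example, with sympy): y_1 = X and y_2 = X^2 in k(X) satisfy the relation f(Y_1, Y_2) = Y_1^2 − Y_2. Writing f as a polynomial in Y_1 exhibits y_1 as a root of a polynomial with coefficients in k[y_2]:

```python
from sympy import symbols, Poly

X, Y1, Y2 = symbols('X Y1 Y2')
f = Y1**2 - Y2
# the relation holds for y1 = X, y2 = X**2:
print(f.subs({Y1: X, Y2: X**2}))   # 0
# the coefficients g_j of f as a polynomial in Y1:
print(Poly(f, Y1).all_coeffs())    # [1, 0, -Y2]: y1 is a root of t**2 - y2
```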
2.1.2. transcendence degree. We say that E has transcendence degree m over k if it has a transcendence basis with m elements. The following theorem shows that this is a well-defined number.
Theorem 2.4. Every transcendence basis for E over k has the same number of elements.
I'll use the following lemmas, which are supposed to be obvious.
Lemma 2.5. If {x_1, …, x_m} is a transcendence basis for E over k then {x_2, …, x_m} is a transcendence basis for E over k(x_1).
Proof. Suppose not. Then there is a nonzero polynomial f in m − 1 variables with coefficients in k(x_1) ≅ k(X_1) so that f(x_2, …, x_m) = 0. We can multiply by all the denominators to get another polynomial g with coefficients in k[x_1] ≅ k[X_1]. But then g is a polynomial in m variables so g(x_1, …, x_m) = 0, which is a contradiction. □
Lemma 2.6. Suppose that Y is a subset of E so that E is algebraic
over k(Y). Then any maximal algebraically independent subset of Y is
a transcendence basis for E.
Proof of the theorem. Suppose that {x_1, …, x_m} is any transcendence basis for E over k. Suppose that Y is a subset of E so that E is algebraic over k(Y). Then we want to show that Y has at least m elements, because this will imply in particular that every transcendence basis has at least m elements. I will show this by induction on m.
Suppose that m = 0. Then the statement is that Y has at least 0 elements, which is true. Now suppose that m > 0 and Y = {w_1, …, w_n}.
Suppose first that w_1 = x_1. Then {x_2, …, x_m} will be a transcendence basis for E over k(x_1) and E will be algebraic over k(x_1)(w_2, …, w_n) = k(x_1, w_2, …, w_n). So, n − 1 ≥ m − 1 by induction on m and this implies n ≥ m. So, all we have to do is replace one of the w_i with x_1.
By the previous lemma, Y contains a transcendence basis which, by rearranging the elements, can be taken to be {w_1, …, w_r}. If we add x_1 the set becomes algebraically dependent. So, there will be some polynomial in x_1 and some of the w_i which is zero. Rearrange the w_i so that only w_1, …, w_s are involved in the polynomial and s is minimal. So,
f(x_1, w_1, …, w_s) = 0.
Since x_1 is transcendental, we must have s ≥ 1. So we can write this as a polynomial in w_1:
f(x_1, w_1, …, w_s) = ∑_{j=0}^N g_j(x_1, w_2, …, w_s) w_1^j = 0
where N is the highest power of w_1 which appears in f. Then
g_N(x_1, w_2, …, w_s) ≠ 0
by minimality of s. Therefore, w_1 is algebraic over k(x_1, w_2, …, w_s), which implies that E is algebraic over k(x_1, w_2, …, w_n). By the previous paragraph, this implies by induction on m that n ≥ m and we are done. □
2.1.3. example. Let k = C and let E = C(X)[Y]/(f(X, Y)) where
f(X, Y) = Y^2 − (X − a)(X − b)(X − c).
Since f is irreducible, E is a quadratic extension of C(X). E is also a cubic extension of C(Y) since f is a cubic polynomial in X with coefficients in C(Y). Therefore, {X} and {Y} are transcendence bases for E and the transcendence degree is 1.
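The two degree counts can be confirmed mechanically (a sympy sketch, taking a = 1, b = 2, c = 3 as sample distinct values, my choice):

```python
from sympy import symbols, Poly

X, Y = symbols('X Y')
f = Y**2 - (X - 1)*(X - 2)*(X - 3)   # sample values a = 1, b = 2, c = 3
degY = Poly(f, Y).degree()
degX = Poly(f, X).degree()
print(degY)  # 2: E is quadratic over C(X)
print(degX)  # 3: E is cubic over C(Y)
```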
2.2. Noether Normalization Theorem. The statement is:
Theorem 2.7 (Noether Normalization). Suppose that R is a finitely generated domain over a field K. Then there exists an algebraically independent subset Y = {y_1, y_2, …, y_r} of R so that R is integral over K[Y].
I pointed out that r (the maximal number of algebraically independent elements of R over K) must be equal to the transcendence degree of the quotient field Q(R) of R over K. Recall that a transcendence basis is a maximal algebraically independent subset. If Y is not a transcendence basis for Q(R) then we can add at least one more element x = a/b ∈ Q(R), where a, b ∈ R. But then y_1, …, y_r and a are algebraically independent elements of R (b is algebraic over K(Y), so a = bx is transcendental over K(Y)), which is a contradiction. So, {y_1, …, y_r} is a transcendence basis for Q(R) over K.
2.2.1. motivation. Before I proved the theorem, I explained why this is important using general language and the specific example of the elliptic curve (2.1.3).
The basic idea is that the inclusion map
K[Y] = K[y_1, y_2, …, y_r] ↪ R
corresponds to a mapping of spaces going the other way:
X → K^r.
The correspondence is that K[Y] is the ring of polynomial functions on K^r and R is supposed to be the ring of polynomial functions on some space X. The fact that R is integral over K[Y] means that R is finitely generated as a K[Y]-module. This corresponds to the fact that the mapping of spaces is n-to-one, where n is the minimal number of generators of R over K[Y], provided that K is algebraically closed.
The specific example made this a lot clearer.
Suppose that K = C. Then the equation
f(X, Y) = Y^2 − (X − a)(X − b)(X − c) = 0
defines a subset E_f ⊆ C^2. Projection to the first coordinate gives a mapping
p_1 : E_f → C.
Because the polynomial f is monic in Y, this mapping has exactly 2 inverse image points for every x ∈ C except for the three points a, b, c (which I am assuming are distinct) and, even at these three points, the inverse image is Y = 0, which is a double root of the equation Y^2 = 0.
If the polynomial f were not monic, e.g., if the equation were:
(X − d)Y^2 − (X − a)(X − b)(X − c) = 0
then the polynomial would have a different degree in Y for different values of X. For example, when X = d, this polynomial has no roots at all. Therefore, d ∈ C would not be in the image of the projection map to C.
At this point I decided to do some topology to determine that the elliptic curve E_f is topologically a torus. We need to add the point at infinity ∞ to make it compact. The projection map p_1 : E_f → C extends to a continuous mapping to the Riemann sphere
Ē_f → C ∪ {∞} = S^2.
This mapping has four branch points: a, b, c, ∞. (When X is very large, the constants a, b, c are negligible and the equation looks like Y^2 = X^3. As X rotates a full 2π, Y rotates 3π, i.e., changes sign. So ∞ is a branch point.)
Now comes the Euler characteristic calculation: Cut up the Riemann sphere S^2 into two squares along edges connecting a to b to c to ∞ and back to a. This decomposes Ē_f into
[Figure: the sphere S^2 cut into two squares along edges joining the branch points a, b, c, ∞; the labels b and c appear on the cut.]
(1) four squares (since each square in S^2 has two squares lying over it)
(2) eight edges (each of the 4 edges in S^2 has two edges over it)
(3) four vertices (each of the four vertices in S^2 is a branch point and has only one point lying over it)
So, the Euler characteristic of Ē_f is
χ(Ē_f) = 4 − 8 + 4 = 0 = 2 − 2g
making the genus g = 1. So, it is a torus.
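The branch behavior of p_1 can be seen numerically (a sympy sketch, again with the sample values a = 1, b = 2, c = 3): over a generic point there are two preimages, over a branch point only one:

```python
from sympy import symbols, roots

Y = symbols('Y')
p = lambda x: (x - 1)*(x - 2)*(x - 3)   # sample values a = 1, b = 2, c = 3
# count distinct solutions of Y**2 = p(x) at a generic point and at a branch point:
n_generic = len(roots(Y**2 - p(0), Y))
n_branch = len(roots(Y**2 - p(1), Y))
print(n_generic)  # 2 preimages over the generic point x = 0
print(n_branch)   # 1 preimage over the branch point x = a = 1 (a double root)
```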
2.2.2. proof of the theorem. The proof was by induction on n, the number of generators of R over K. Thus
R = K[x_1, …, x_n].
If n = 0 then R = K and there is nothing to prove.
If n = 1 then R = K[x_1]. There are two cases: either x_1 is algebraic or it is transcendental. If x_1 is algebraic then r = 0 and x_1 is integral over K. So, the theorem holds. If x_1 is transcendental then let y_1 = x_1. We get R = K[y_1] which is integral over K[y_1].
Now suppose that n ≥ 2. If x_1, …, x_n are algebraically independent then we let Y = {x_1, …, x_n} and we are done. If they are not algebraically independent then there is a nonzero polynomial f(X) ∈ K[X_1, …, X_n] so that f(x_1, …, x_n) = 0. The polynomial f(X) can be written as
f(X) = ∑ c_α X^α
where we use the notation X^α = X_1^{a_1} X_2^{a_2} ⋯ X_n^{a_n} for α = (a_1, …, a_n).
We can write this as a polynomial in X_1 with coefficients in K[X_2, …, X_n]:
f(X) = ∑_{j=0}^N f_j(X_2, …, X_n) X_1^j.
(Since f is nonzero, it involves at least one of the X_i and we can assume it is X_1.) We want to somehow arrange to have f_N = 1. Then x_1 would be integral over K[x_2, …, x_n], which by induction on n would be integral over K[Y] for some algebraically independent subset Y. Since integral extensions of integral extensions are integral, the theorem follows.
To make f monic in X_1 we change coordinates as follows. Let Y_2, …, Y_n and y_2, …, y_n ∈ R be given by
Y_i = X_i − X_1^{m_i},   y_i = x_i − x_1^{m_i}
or: X_i = Y_i + X_1^{m_i} and x_i = y_i + x_1^{m_i}, where the positive integers m_i will be chosen later. We get the new polynomial
g(X_1, Y_2, …, Y_n) = f(X_1, …, X_n) ∈ K[X_1, Y_2, …, Y_n]
so that g(x_1, y_2, …, y_n) = 0. Also, it is clear that R = K[x_1, y_2, …, y_n].
Now take m_i = d^{i−1} where d is an integer greater than any of the exponents a_i which occur in the polynomial f(X). For example, let
f(X) = X_1^9 X_2^5 X_3^2 + X_1^2 X_2^2 X_3^3 + X_2^8 X_3.
Then d = 10 will do. The three multi-indices which occur are α = (9, 5, 2), (2, 2, 3), (0, 8, 1). Reverse the order of each of these and write them in descending lexicographic order: (3, 2, 2), (2, 5, 9), (1, 8, 0). Saying that these are in lexicographic order is the same as saying that
322 > 259 > 180.
More generally, the value of ∑ a_i d^{i−1} is different for every multi-index α and is the largest for the multi-index which is maximal in this lexicographic order. Look at
g(X_1, Y_2, …, Y_n) = ∑ c_α X_1^{a_1} (X_1^d + Y_2)^{a_2} (X_1^{d^2} + Y_3)^{a_3} ⋯ (X_1^{d^{n−1}} + Y_n)^{a_n}.
The highest power of X_1 which occurs is N = ∑ a_i d^{i−1} where α is maximal in lexicographic order. The coefficient of X_1^N is c_α. We can divide g by this nonzero constant to make g monic in X_1 and we are done by induction on n as discussed earlier.
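The example can be verified directly (a sympy sketch of the substitution with d = 10): after the change of coordinates, g has X_1-degree 322 = 2 + 2·10 + 3·100, coming from the multi-index α = (2, 2, 3), and the coefficient of X_1^322 is the constant c_α = 1:

```python
from sympy import symbols, expand, Poly

X1, Y2, Y3 = symbols('X1 Y2 Y3')
d = 10
f = lambda x1, x2, x3: x1**9 * x2**5 * x3**2 + x1**2 * x2**2 * x3**3 + x2**8 * x3
# substitute X2 = Y2 + X1**d and X3 = Y3 + X1**(d**2):
g = expand(f(X1, Y2 + X1**d, Y3 + X1**(d**2)))
p = Poly(g, X1)
print(p.degree())  # 322, from the multi-index (2, 2, 3): 2 + 2*10 + 3*100
print(p.LC())      # 1: the leading coefficient is a nonzero constant, so g is monic in X1
```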
3. Outline of rest of Part B
For the next three weeks we will discuss
(1) Algebraic spaces (for motivation)
(2) Noetherian rings
(3) Valuation rings (local rings)
This is in motivational rather than logical order.
The main idea is that somehow algebraic spaces are "the same as" Noetherian rings, which are "the same as" transcendental extensions. This is based on the correspondence
points ↔ local rings
3.1. Valuation rings. The definition of places and valuation rings given in most books seems complicated and artificial. I like the old definition, which is very simple.
Definition 3.1. Suppose that E is a field extension of K. Then a place in E (over K) is a subring K ⊆ V ⊆ E so that, for any nonzero x ∈ E, either x ∈ V or x^{−1} ∈ V or both.
3.1.1. example. E = K(X), V = K[X]_{(X−a)}. This is K[X] localized at the prime ideal (X − a).
Recall that if p is a prime ideal in R then the complement S of p in R is a multiplicative set and
R_p = R localized at p := S^{−1}R = {a/b | a, b ∈ R, b ∉ p}.
R_p is a local ring, i.e., it has a unique maximal ideal S^{−1}p. By the following lemma this is equivalent to saying that all elements in the complement of S^{−1}p are invertible.
Lemma 3.2. The union of all maximal ideals in a ring R is equal to the complement of the set of all units.
Proof. (⊆) Ideals cannot contain invertible elements, otherwise they would contain 1.
(⊇) Conversely, any nonunit a generates an ideal (a) = Ra ≠ R. This is contained in a maximal ideal by Zorn's lemma. □
Back to the example: p = (X − a) is a maximal ideal, and thus prime, since it is the kernel of the homomorphism φ : K[X] → K given by φ(f) = f(a). Since K is a field, ker φ = p is maximal.
V = K[X]_{(X−a)} = { f(X)/g(X) ∈ K(X) | g(a) ≠ 0 }
To show that V is a place, suppose that x = f(X)/g(X) ∈ E = K(X). We can assume that f, g are relatively prime. This implies that either f(a) ≠ 0 or g(a) ≠ 0 (otherwise both f(X) and g(X) would be divisible by X − a). If g(a) ≠ 0 then x = f/g ∈ V. If f(a) ≠ 0 then x^{−1} = g/f ∈ V. So, V is an example of a place.
This means that every point a ∈ K corresponds to a place in K(X). But there are other places, such as the place at infinity, which is given by
V_∞ = { f(X)/g(X) ∈ K(X) | deg f ≤ deg g }.
This is a local ring with unique maximal ideal
m_∞ = { f(X)/g(X) ∈ K(X) | deg f < deg g }.
3.1.2. properties of places.
Proposition 3.3. Suppose that V is any place in E over K. Then
(1) V is a local ring
(2) V is integrally closed in E.
Proof. First I showed (2): V is integrally closed in E. Suppose that x ∈ E is integral over V. Then
x^n = ∑_{k=0}^{n−1} a_k x^k
where a_k ∈ V. Suppose that x ∉ V. Then x^{−1} ∈ V and
x = x^n / x^{n−1} = ∑_{k=0}^{n−1} a_k x^{k−n+1}.
But this is an element of V since k − n + 1 ≤ 0 for all k ≤ n − 1, contradicting x ∉ V.
To prove (1) it suffices to show that the set W of nonunits of V is closed under subtraction and under multiplication by elements of V. So, suppose that r ∈ V and y ∈ W, i.e., y^{−1} ∉ V. Then ry is not a unit (otherwise (ry)^{−1} = z ∈ V gives y^{−1} = rz ∈ V). So, V·W ⊆ W.
Finally, I have to show that, if x, y ∈ W then x − y ∈ W. If x = 0 or y = 0 then we are done since x − y = −y or x. So, suppose that x, y are nonzero. Then either x/y ∈ V or y/x ∈ V. Suppose by symmetry that x/y ∈ V. Then x/y − 1 ∈ V and x − y = (x/y − 1)y ∈ V·W ⊆ W. So, the nonunits form the unique maximal ideal and V is a local ring. □
3.2. Noetherian rings.
Definition 3.4. The ascending chain condition (ACC) for ideals in a ring R says that every increasing sequence of ideals is eventually stationary. I.e., if
I_1 ⊆ I_2 ⊆ I_3 ⊆ ⋯
is an increasing sequence of ideals in R then, for some N, we have I_N = I_{N+1} = ⋯.
Definition 3.5. A ring is called Noetherian if it has the ACC for ideals.
There are two other well-known equivalent characterizations of Noetherian rings.
Theorem 3.6. A ring R is Noetherian if and only if every ideal is
finitely generated.
So, e.g., PIDs are Noetherian.
Proof. (⇐) Suppose that every ideal is finitely generated. Then we have to show that every ascending chain of ideals I_1 ⊆ I_2 ⊆ ⋯ stops. Let I = ∪I_i. This is an ideal, so it is generated by a finite set {x_1, …, x_n}. Each x_i lies in some I_j. So, taking N to be the largest such j, we see that I_N contains all of the x_i. But then I = I_N = I_{N+1} = ⋯.
(⇒) Suppose that I is an ideal which is not finitely generated. Let x_1 ∈ I. Then I ≠ (x_1), so there is an element x_2 ∈ I which is not in (x_1), and there is an element x_3 ∈ I which is not in (x_1, x_2), etc. This gives a strictly increasing sequence of ideals in R:
(x_1) ⊊ (x_1, x_2) ⊊ (x_1, x_2, x_3) ⊊ ⋯
So, R is not Noetherian. □
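In Z every ideal is principal, so the union ideal I = (x_1, x_2, …) in the proof is generated by a gcd and the chain stabilizes quickly. A small Python sketch (the sample list is my choice):

```python
from math import gcd
from functools import reduce

xs = [24, 36, 30, 45, 7, 11, 13]
# the ideal (x_1, ..., x_k) in Z is (gcd(x_1, ..., x_k)); watch the chain stabilize:
chain = [reduce(gcd, xs[:k]) for k in range(1, len(xs) + 1)]
print(chain)  # [24, 12, 6, 3, 1, 1, 1]: stationary once the gcd reaches 1
```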
The last characterization I only proved in one direction.
Theorem 3.7. R is Noetherian if and only if every submodule of a finitely generated R-module is finitely generated.
Corollary 3.8. The category R-Mod of finitely generated R-modules is an abelian category if R is Noetherian.
Proof of the theorem in one direction. (⇐) Suppose that every submodule of a finitely generated R-module is finitely generated. Then this applies to the free module R, which is generated by one element. The submodules of R are the ideals of R. So, ideals are all finitely generated, making R Noetherian. □
4. Algebraic spaces
We want to study zero sets of polynomial equations. The basic theorem is the Nullstellensatz. But first we need some preliminaries.
4.0. Preliminaries. First we need to know that any finitely generated ring over a field K can be mapped to the algebraic closure K̄. This is not true for finitely generated field extensions. For example, there is no homomorphism of K(X) into K̄ over K. I used Noether normalization, which makes the proofs shorter.
Lemma 4.1. If R is integral over S and R is a field then S is a field.
Proof. We just need to show that if 0 ≠ x ∈ S then x^{−1} ∈ S. Since R is a field, x^{−1} ∈ R. Since R is integral over S, x^{−1} satisfies a monic polynomial:
x^{−n} = ∑_{i=0}^{n−1} a_i x^{−i}
where a_i ∈ S. Multiply by x^{n−1} to get
x^{−1} = x^{n−1} x^{−n} = ∑_{i=0}^{n−1} a_i x^{n−i−1}.
Since n − i − 1 ≥ 0 for all i ≤ n − 1, this is an element of S and we are done. □
Theorem 4.2. If R = K[x_1, …, x_n] is a field then x_1, …, x_n are algebraic over K.
Proof. By Noether normalization there is an algebraically independent set Y = {y_1, …, y_r} in R so that R is integral over K[Y]. Since R is a field, the lemma says that K[Y] must be a field. This is only possible if r = 0. So, R is integral, and thus algebraic, over K. □
Corollary 4.3. Suppose that K is a field and R = K[x_1, …, x_n] is a finitely generated ring over K. Then there is a homomorphism
φ : R → K̄
of rings over K, i.e., so that φ is the identity on K.
Proof. R f.g. implies R ≅ K[X_1, …, X_n]/I for some ideal I (namely I = {f(X) | f(x) = 0}). Let M be a maximal ideal containing I. Then we have an epimorphism of rings over K:
R → K[X]/M = L.
Since M is maximal, L is a field. Since L is an extension of K which is finitely generated as a ring, the theorem says that L is algebraic over K. Therefore L ⊆ K̄ and the homomorphism we want is the composition:
φ : R → L ↪ K̄. □
Corollary 4.4. Suppose that R = K[x_1, …, x_n] is a f.g. domain over K. Suppose that y_1, …, y_m are nonzero elements of R. Then there is a homomorphism
φ : R → K̄
of rings over K so that φ(y_i) ≠ 0 for all i.
Proof. In the quotient field Q(R) we have y_1^{−1}, …, y_m^{−1}. Let
S = K[x_1, …, x_n, y_1^{−1}, …, y_m^{−1}] ⊆ Q(R).
Then R ⊆ S, and by the previous corollary there is a homomorphism
ψ : S → K̄
of rings over K. Then φ = ψ|_R is the homomorphism that we want: since ψ(y_i)ψ(y_i^{−1}) = 1, we have φ(y_i) ≠ 0. □
4.1. Hilbert's Nullstellensatz. Now we can prove the theorem about zero sets of polynomials. First, I gave the definition and some examples.
Definition 4.5. Suppose that S is a subset of K[X] = K[X_1, …, X_n] and L is a field extension of K. Then let Z_S(L) denote the set of all common zeroes of f ∈ S in L^n:
Z_S(L) = {(a) = (a_1, …, a_n) ∈ L^n | f(a) = 0 for all f ∈ S}.
One of the key features of the zero set is the duality between points and polynomials, namely, the equation
f(a) = 0
can be interpreted in two ways: (a) ∈ L^n is a zero of f, or f ∈ K[X] lies in the kernel of the evaluation-at-(a) mapping
ev_{(a)} : K[X] → L.
The equivalence of these two statements gives the following lemma.
Lemma 4.6. (a) ∈ Z_S(L) ⟺ S ⊆ ker(ev_{(a)} : K[X] → L).
Since S ⊆ ker(ev_{(a)}) iff the ideal (S) of K[X] generated by S is contained in the ideal ker(ev_{(a)}), we get:
Proposition 4.7. Z_S(L) = Z_{(S)}(L).
We are assuming the Hilbert basis theorem, which implies that K[X] is Noetherian. Therefore, every ideal of K[X] is finitely generated. So, we may assume that S is a finite set of polynomials.
Now comes the first version of the Nullstellensatz:
Theorem 4.8. Let S = {f_1, …, f_m} ⊆ K[X_1, …, X_n] and suppose that L ⊇ K is algebraically closed. Then either
(1) 1 = ∑ f_i g_i for some g_i ∈ K[X], or
(2) Z_S(L) ≠ ∅.
Proof. (S) = {∑ f_i g_i} is either equal to K[X] or it is a proper ideal in K[X]. In the first case we get 1 ∈ (S). So, 1 = ∑ f_i g_i. In the second case, R = K[X]/(S) is a finitely generated ring over K. So, there is a homomorphism of rings over K:
φ : K[X]/(S) → K̄ ↪ L.
Let a_i = φ(X_i). Then φ(f) = f(a). Since each f_i ∈ S is in the kernel of φ we have f_i(a) = 0 for all i. I.e., (a) ∈ Z_S(L). □
Theorem 4.9 (weak Nullstellensatz). Suppose that K is algebraically closed. Then the maximal ideals of K[X] are
(X_1 − a_1, X_2 − a_2, …, X_n − a_n)
where a_i ∈ K. (So, the maximal ideals of K[X] are in 1-1 correspondence with the points in K^n.)
Proof. Take any proper ideal I in K[X]. By the previous theorem, there is a point (a_1, …, a_n) ∈ Z_I(K). By the lemma, this is equivalent to:
I ⊆ (X_1 − a_1, …, X_n − a_n).
If I is maximal, these must be equal. □
Theorem 4.10 (Hilbert's Nullstellensatz). If f ∈ K[X] is such that f(a) = 0 for all (a) ∈ Z_S(K̄) then f^m ∈ (S) for some m ≥ 1.
Proof. Let S = {h_1, …, h_r}. Introduce a new variable Y and one more polynomial:
h_0 = 1 − Y f(X_1, …, X_n).
Then
Z_{h_0, h_1, …, h_r} = ∅
since, for any common zero (a_1, …, a_n, b), we have f(a) = 0 by assumption and
0 = h_0(a, b) = 1 − b f(a) = 1,
which is a contradiction. Therefore, there exist g_0, …, g_r ∈ K[X, Y] so that
1 = ∑_{i=0}^r g_i h_i.
Plugging in Y = 1/f(X) makes h_0 = 0 and we get
1 = ∑_{i=1}^r g_i(X_1, …, X_n, 1/f(X)) h_i(X).
If m is sufficiently large then
f(X)^m g_i(X, 1/f(X)) ∈ K[X]
for all i, and f^m ∈ (S). □
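A tiny instance of the theorem (my example, checked with sympy's Groebner basis tools): take S = {X^2 + Y^2 − 1, X − 1}, whose only common zero is (1, 0). The polynomial f = Y vanishes there, and indeed f^2 ∈ (S) with the explicit certificate Y^2 = h_1 − (X + 1)h_2, while f itself is not in (S):

```python
from sympy import symbols, groebner, expand

X, Y = symbols('X Y')
h1, h2 = X**2 + Y**2 - 1, X - 1
# explicit certificate that Y**2 lies in the ideal (h1, h2):
print(expand(h1 - (X + 1)*h2))   # Y**2
G = groebner([h1, h2], X, Y)
print(G.contains(Y**2))  # True:  f**2 = Y**2 lies in (S)
print(G.contains(Y))     # False: f = Y does not, so the exponent m = 2 is really needed
```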
4.2. Algebraic sets and varieties. Now I just want to talk about the consequences of the Nullstellensatz. One formulation of the statement is that there is a 1-1 correspondence between algebraic sets and reduced ideals.
Definition 4.11. An algebraic set is a subset A ⊆ L^n (where L = L̄ is algebraically closed) which is defined by polynomial equations with coefficients in K ⊆ L. In other words,
A = Z_S(L)
where S ⊆ K[X_1, …, X_n], K ⊆ L. We say that A is defined over K.
4.2.1. associated ideal.
Definition 4.12. If A ⊆ L^n is an algebraic set defined over K then the associated ideal a is defined by
a = {f ∈ K[X] | f(a) = 0 for all (a) ∈ A}.
Hilbert's Nullstellensatz says that if L = L̄ and a ⊆ K[X] is an ideal then the ideal associated to the algebraic set Z_a(L) is the radical
rad(a) = {f ∈ K[X] | f^n ∈ a for some n ≥ 1}.
Definition 4.13. The radical of an ideal I in a ring R is defined to be the set of all f ∈ R so that some positive power of f lies in I. The radical of I is written rad(I).
Some people write the radical of I as √I. (But then we get silly things like √(8) = (2).)
Definition 4.14. The radical of the ring R is defined to be the radical of the ideal 0:
rad(R) := rad(0) = {r ∈ R | r^n = 0 for some n ≥ 1}.
Proposition 4.15. rad(I) = π^{−1}(rad(R/I)) where π : R → R/I is the quotient map.
Proof. x^n ∈ I ⟺ (x + I)^n = 0 in R/I. □
Now we can restate the Nullstellensatz again: It says that there is a 1-1 correspondence between algebraic subsets of L^n defined over K and ideals a in K[X_1, …, X_n] so that rad(a) = a (we call such ideals reduced):
{algebraic sets A ⊆ L^n defined over K} ⟷ {reduced ideals a ⊆ K[X_1, …, X_n]}
Note that
a) This bijection is inclusion-reversing:
a ⊆ b ⟺ Z_a ⊇ Z_b
Assuming the Hilbert basis theorem (K[X] is Noetherian), this has
the following immediate consequence.
Proposition 4.16. Algebraic sets satisfy the DCC.
Proof. Suppose that A_1 ⊇ A_2 ⊇ A_3 ⊇ ⋯ is a descending sequence
of algebraic subsets of Lⁿ. Then the associated ideals form an ascending
sequence: a_1 ⊆ a_2 ⊆ a_3 ⊆ ⋯. So, it eventually stops:
a_N = a_{N+1} = ⋯. And the corresponding algebraic sets must be
equal: A_N = A_{N+1} = ⋯. □
One consequence of this is that every algebraic set decomposes as a
finite union of irreducible sets.
4.2.2. irreducible sets.
Definition 4.17. An algebraic set defined over K is called K-irreducible
if it is not the union of two proper algebraic subsets. An irreducible
algebraic set is called an (affine) variety over K.
When people talk about varieties they often view L as being variable.
Lang formalizes this by calling the function L ↦ Z_a(L) an
algebraic space and defining a variety as an algebraic space rather than
an irreducible algebraic set.
The field of definition is very important. For example, the two point
set A = {i, −i} ⊆ C is R-irreducible but not C-irreducible. This has
many explanations. It is because the polynomial f(X) = X² + 1 which
defines this set is irreducible over R but not over C. Another way to
say it is that the Galois group of C/R acts transitively on the set.
Corollary 4.18. Every algebraic set is a finite union of irreducible
sets.
Proof. Suppose that A is not a finite union of irreducible sets. Then
it is not irreducible. So A = A_0 ∪ A_1. Either A_0 or A_1 is not a finite
union of irreducible sets (otherwise we get a contradiction). Suppose
it is A_0. Then A_0 = A_{00} ∪ A_{01} where one of the pieces is not a finite
union of irreducibles, etc. This contradicts the DCC. So, the corollary
holds. □
Another feature of the algebraic set / reduced ideal correspondence:
b) It converts products into unions, e.g.
Z_{fg} = Z_f ∪ Z_g
if f, g K[X]. We will discuss products of ideals later.
This has the following important consequence.
Theorem 4.19. A is K-irreducible iff the associated ideal a ⊆ K[X]
is prime.
The proof is easy if you realize that the Nullstellensatz can be
formulated in the following way:
Lemma 4.20. Suppose a ⊆ K[X] is a reduced ideal, f ∈ K[X] and
K ⊆ L = L̄. Then f ∈ a iff Z_a(L) ⊆ Z_f(L).
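The product-to-union rule (b) rests on L being a domain: fg vanishes at a point exactly when f or g does. A quick pointwise check in one variable (the sample polynomials are my choice, not from the text):

```python
# Z_{fg} = Z_f ∪ Z_g: a product vanishes at a point iff one factor does.
f = lambda x: x**2 - 1          # zero set {1, -1}
g = lambda x: x - 2             # zero set {2}

def zero_set(h, points):
    return {a for a in points if h(a) == 0}

pts = range(-5, 6)
union = zero_set(f, pts) | zero_set(g, pts)
product = zero_set(lambda x: f(x) * g(x), pts)
print(sorted(product))          # [-1, 1, 2]
assert product == union
```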
4.2.3. coordinate ring. For the next subtopic I need to assume that
K = K̄ = L and A ⊆ Kⁿ is an irreducible algebraic set with associated
(prime) ideal p ⊆ K[X]. In that case the coordinate ring of A is defined
by
R := K[X]/p
This can be interpreted as the ring of all polynomial functions
A → K
since two polynomials f, g ∈ K[X] give the same function A → K iff
f − g = 0 on A. By the Nullstellensatz this is equivalent to saying that
f − g ∈ rad p = p.
Lemma 4.21. Suppose that I is an ideal in a ring R. Then the maximal
ideals of R/I are m/I where m is a maximal ideal of R containing
I.
Proof. What I said in class was that m is a maximal ideal in R iff R/m
is a field. But
R/m ≅ (R/I)/(m/I)
which is a field iff m/I is a maximal ideal in R/I. This proof assumes
that m ⊆ R is an ideal containing I.
Suppose that M is a maximal ideal in R/I. Then M is the kernel of
an epimorphism φ : R/I → K where K is a field. Let m = π⁻¹(M) be
the inverse image of M under π : R → R/I. Then m ⊆ R is an ideal
containing the kernel I of π. So the previous paragraph applies. □
To apply this lemma to the coordinate ring R = K[X]/p we need to
recall the weak Nullstellensatz which says that the maximal ideals of
K[X] are (X_1 − a_1, …, X_n − a_n) where (a) = (a_1, …, a_n) ∈ Kⁿ. This
ideal is the kernel of the evaluation map:
ev_{(a)} : K[X] → K
And you need to remember that S is contained in this kernel iff (a) ∈
Z_S(K). Putting S = p we see that p is contained in (X_1 − a_1, …, X_n −
a_n) iff (a) ∈ A = Z_p(K). This proves the following.
Theorem 4.22. There is a one-to-one correspondence between the points of
A and the maximal ideals of the coordinate ring R of A.
5. Noetherian rings
Since I have an extra day, I decided to go back and prove some of
the basic properties of Noetherian rings that I havent already proven.
5.1. Hilbert basis theorem. Recall that R is called a Noetherian
ring iff the ideals of R satisfy the ACC. We saw that this was equivalent
to saying that every ideal a ⊆ R has a finite set of generators a_1, …, a_n,
which means that a is the set of all R-linear combinations of the a_i.
Theorem 5.1 (Hilbert Basis Theorem). If R is Noetherian then so is
R[X].
Proof. Suppose that I is an ideal in R[X]. For each n ≥ 0 let a_n be
the set consisting of 0 and all leading coefficients of polynomials in I
of degree n. This can also be described as the set of all a ∈ R so that
I contains an element of the form
aXⁿ + a_1X^{n−1} + a_2X^{n−2} + ⋯ + a_n
Claim 1. a_n is an ideal of R or is equal to R.
Proof: To show that a_n is an ideal, choose two elements a, b ∈ a_n.
This is equivalent to saying that I contains two polynomials of the form
f(X) = aXⁿ + lower terms
g(X) = bXⁿ + lower terms
Suppose that r, s ∈ R. Then rf + sg ∈ I. But
rf + sg = (ra + sb)Xⁿ + lower terms
Therefore, ra + sb ∈ a_n. So, a_n is an ideal in R (or a_n = R).
Claim 2. a_n ⊆ a_{n+1}.
This statement is obvious: If a ∈ a_n then I contains a polynomial
f(X) = aXⁿ + ⋯. Since X ∈ R[X], I also contains Xf(X) = aX^{n+1} +
⋯. So, a ∈ a_{n+1}.
Since R is Noetherian, the ascending sequence
a_0 ⊆ a_1 ⊆ a_2 ⊆ ⋯
stops at some point N and we get a_N = a_{N+1} = ⋯.
For i = 0, …, N let a_{ij} ∈ R be a finite set of generators for the ideal
a_i. Let f_{ij} ∈ I be a polynomial of degree i with leading coefficient a_{ij}.
Claim 3. The polynomials f_{ij} for i = 0, …, N generate I as an ideal
of R[X].
I proved this by induction on n: If f (X) is a polynomial in I of
degree n then f (X) is equal to a polynomial of degree less than n plus
some R[X]linear combination of the polynomials fij . To prove this I
considered two cases.
Case 1. Suppose first that n > N. In that case f(X) = aXⁿ + ⋯.
Since a ∈ a_n = a_N there are elements r_j ∈ R so that a = ∑ r_j a_{Nj}.
This means that ∑ r_j f_{Nj}(X) = aX^N + ⋯. So,
f(X) − ∑ r_j X^{n−N} f_{Nj}(X)
is an element of I of degree < n.
Case 2. Suppose that n ≤ N. Then the same argument applies to
show that there exist r_j ∈ R so that
f(X) − ∑ r_j f_{nj}(X)
is an element of I of degree < n.
This proves Claim 3, which proves the theorem. □
There are some immediate consequences of this theorem which I hope
I mentioned. First, we have the following immediate consequence of
the definition of Noetherian ring.
Lemma 5.2. Any quotient of a Noetherian ring is Noetherian.
And then we have the following consequence of this lemma and the
Hilbert basis theorem.
Corollary 5.3. If R is Noetherian, then any finitely generated ring
over R is also Noetherian.
For example, any finitely generated ring over a field is Noetherian.
5.2. Noetherian modules. The next basic property was the third
equivalent definition of a Noetherian ring:
Theorem 5.4. A ring R is Noetherian if and only if every submodule
of a finitely generated R-module is finitely generated.
I pointed out that one direction (⇐) is obvious since R is a finitely
generated module over itself and the submodules of R are the
ideals. To prove the converse I needed some definitions.
Definition 5.5. Suppose that R is any ring. Then an R-module M
is called Noetherian if the submodules of M satisfy the ACC. This
is clearly equivalent to saying that every submodule of M is finitely
generated.
Using this definition, a ring R is Noetherian if and only if R is a
Noetherian R-module. From this we want to conclude that every f.g.
R-module is Noetherian. But every f.g. R-module is the quotient of a
f.g. free module Rⁿ. Therefore, the theorem follows from the following
two lemmas.
Lemma 5.6. If M is a Noetherian R-module then so is every quotient
module of M.
This is obvious because the submodules of any quotient M/N correspond
to the submodules of M containing N.
Lemma 5.7. If M, N are Noetherian R-modules then so is M ⊕ N.
I proved the following generalization of this lemma. And I pointed
out that since the proposition proves the lemma and the lemma proves
the theorem we will be done.
Proposition 5.8. Suppose that 0 → K → M → N → 0 is a short
exact sequence of R-modules. Then tfae.
(1) M is Noetherian.
(2) K and N are both Noetherian.
Proof. It is clear that the first statement implies the second. So, suppose
that the second statement is true. Let L_1 ⊆ L_2 ⊆ ⋯ be an
ascending sequence of submodules of M. Then we want to show that
the sequence stops.
Let A_i = L_i ∩ K. This is an increasing sequence of submodules of
K. So, for large enough N, we have A_N = A_{N+1} = ⋯.
Let B_i be the image of L_i in N. Then, because N is Noetherian,
there is a large number N′, which we can take to be equal to N,
so that B_{N′} = B_{N′+1} = ⋯.
Now we claim that L_N = L_{N+1} = ⋯. This is a special case of the
5-lemma applied to the following diagram.
0 → A_N → L_N → B_N → 0
    ‖       ↓       ‖
0 → A_{N+1} → L_{N+1} → B_{N+1} → 0
Since the rows are exact and the diagram commutes, the inclusion
L_N ↪ L_{N+1} is an isomorphism. I.e., L_N = L_{N+1}. □
5.3. Associated primes. The idea is to obtain something similar to
the unique factorization theorem for integers: n = ∏ p_i^{e_i} where we
replace p_i with prime ideals in a Noetherian ring R and we replace
n with a f.g. R-module M. For example, 12 = 2²·3 corresponds to
the statement that the prime ideals (2), (3) ⊆ Z are associated to the
Z-module M = Z/(12).
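For M = Z/(12) the associated primes can be found by brute force: the annihilator of x + 12Z is the ideal generated by 12/gcd(x, 12), and the prime values that occur are exactly 2 and 3. A sketch (helper names are mine):

```python
from math import gcd

n = 12
def ann_generator(x: int) -> int:
    """Generator of ann(x + nZ) as an ideal of Z."""
    return n // gcd(x, n)

def is_prime(p: int) -> bool:
    return p > 1 and all(p % d for d in range(2, int(p**0.5) + 1))

# keep only the annihilators that are prime ideals
ass_M = {ann_generator(x) for x in range(1, n) if is_prime(ann_generator(x))}
print(ass_M)    # {2, 3}
```

For instance ann(6 + 12Z) = (2) and ann(4 + 12Z) = (3), realizing both associated primes.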
There are three ways to associate prime ideals to modules. Two are
easy and we also use a third method which has better properties. The
first way is to look at the cyclic submodules. For example, M = Z/(12)
contains submodules isomorphic to Z/(2) and Z/(3). The other simple
method is to look at the quotient modules. There is an epimorphism
Z/(12) → Z/(p) only for the primes p = 2, 3. However, there tend to
be more quotients than submodules. For example, Z/(p) is a quotient
of M = Z for every prime p. But the only prime ideal p ⊆ Z so that
Z/p is embedded in Z is p = 0.
5.3.1. annihilators and associated primes. I will always assume that R
and M are both Noetherian.
Definition 5.9. If x ≠ 0 ∈ M then the annihilator ann(x) is the set
of all a ∈ R so that ax = 0. It is easy to see that this is an ideal in R.
Definition 5.10. For any R-module M, the associated primes are the
prime ideals p which occur as annihilators of nonzero elements of M.
This is equivalent to the statement that there is a monomorphism
R/p ↪ M
The set of associated primes is denoted ass(M). So,
ass(M) = {p ⊆ R prime | p = ann(x) for some x ∈ M}
Example 5.11. If p ⊆ R is prime then ass(R/p) = {p}.
Proof. Clearly, p is a prime associated to R/p. Suppose that q is
another associated prime. Then q = ann(x) for some x = y + p ≠ 0 in R/p.
But then
q = ann(x) = {a ∈ R | ax = 0 ∈ R/p}
= {a ∈ R | ay ∈ p} = p □
The next theorem is that the union of the associated primes is the
set of zero divisors.
Definition 5.12. a ∈ R is called a zero divisor of M if ax = 0 for
some nonzero x ∈ M. In other words, the set of zero divisors of M is
equal to the union of all ann(x) for all nonzero x ∈ M.
I pointed out that a ∈ R is not a zero divisor of M if and only if
multiplication by a does not kill any nonzero element of M. I.e.,
a_M : M → M (multiplication by a)
is a monomorphism. In that case a is called M-regular.
Theorem 5.13. The union of the associated primes of M is equal to
the set of zero divisors of M:
⋃_{p ∈ ass(M)} p = {zero divisors of M}
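For M = Z/12 the theorem says: a ∈ Z kills some nonzero element of Z/12 exactly when a lies in (2) ∪ (3), the union of the associated primes found above. A brute-force check (helper name mine):

```python
n = 12
def is_zero_divisor(a: int) -> bool:
    """Does a kill some nonzero element of Z/12?"""
    return any(a * x % n == 0 for x in range(1, n))

# the union of the associated primes (2) and (3) is {a : 2|a or 3|a}
for a in range(-30, 31):
    assert is_zero_divisor(a) == (a % 2 == 0 or a % 3 == 0)
print("zero divisors of Z/12 = (2) ∪ (3)")
```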
First we need a lemma:
Lemma 5.14. Consider the set of all ideals I ⊆ R so that I = ann(x)
for some nonzero x ∈ M. Then the maximal elements of this set are prime.
Proof. Suppose that I = ann(x) is maximal but not prime. Then there
exist a, b ∈ R so that ab ∈ I but a, b ∉ I. Since b ∉ I, bx ≠ 0. But
abx = 0. So, a ∈ ann(bx) and a ∉ I = ann(x). This implies that ann(bx)
is strictly larger than ann(x), which is a contradiction. □
5.3.2. localization and support. There are two other methods to associate
primes to R-modules. The easy way is to take the set of all p so
that M/pM ≠ 0. However, localization is a better method even though
it is more complicated.
Recall that, for any multiplicative subset S of R, S⁻¹M is the R-module
given by:
S⁻¹M = {x/s | x ∈ M, s ∈ S}/∼
where x/s ∼ y/t iff rtx = rsy for some r ∈ S. If p is a prime ideal
then the localization M_p of M at p is defined to be M_p = S⁻¹M where
S is the complement of p in R.
Definition 5.15. The support of M is defined to be the set of all
primes p so that M_p ≠ 0:
supp(M) := {p ⊆ R prime | M_p ≠ 0}
Example 5.16. supp(R/I) = {p ⊆ R prime | I ⊆ p}.
This follows from the elementary fact that
S⁻¹(R/I) = 0 ⟺ S ∩ I ≠ ∅
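For R = Z and I = (12) this fact computes supp(Z/12): the complement S of a prime (p) meets (12) iff some multiple of 12 avoids p, which happens iff p ∤ 12 (if p | 12 then p divides every multiple of 12). A sketch (function names mine):

```python
n = 12
def localization_nonzero(p: int) -> bool:
    """(Z/n)_(p) != 0 iff no multiple of n avoids p, i.e. iff p | n."""
    return n % p == 0

support = {p for p in (2, 3, 5, 7, 11) if localization_nonzero(p)}
print(support)    # {2, 3}, i.e. the primes containing (12)
```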
The advantage of the localization functor M ↦ M_p is that it is exact
while M ↦ M/pM is not exact.
Proposition 5.17. Suppose that S ⊆ R is a multiplicative set. Then
the functor M ↦ S⁻¹M is exact.
Proof. Suppose that α : N → M, β : M → L and the sequence
N → M → L is exact, i.e., ker β = im α. Then we want to show that
S⁻¹N → S⁻¹M → S⁻¹L
is exact. The composition is certainly 0. So, suppose that x/s ∈ S⁻¹M
is in the kernel of S⁻¹β. Then β(x)/s = 0. So, ∃ t ∈ S s.t. tβ(x) =
0 = β(tx). This implies that tx = α(y) for some y ∈ N. Then
x/s = tx/ts = α(y)/ts. So, ker S⁻¹β = im S⁻¹α. □
An immediate consequence of the exactness of localization is the
following.
Corollary 5.18. If 0 → N → M → M/N → 0 is a short exact
sequence of R-modules then
supp(M) = supp(N) ∪ supp(M/N)
Proof. For any prime p we get a short exact sequence
0 → N_p → M_p → (M/N)_p → 0
p ∈ supp(M) ⟺ M_p ≠ 0 ⟺ either N_p ≠ 0 or (M/N)_p ≠ 0
⟺ p ∈ supp(N) ∪ supp(M/N). □
The main result relating associated primes and the support is the
following.
Theorem 5.19. Suppose that R and M are Noetherian. Then
ass(M) ⊆ supp(M)
Furthermore, the minimal elements of supp(M) are associated primes.
The minimal elements of supp(M) are called the minimal (or isolated)
associated primes.
Proof. Suppose that p is an associated prime. Then we have a short
exact sequence
0 → R/p → M → X → 0
By exactness of localization, we get an exact sequence
0 → (R/p)_p → M_p → X_p → 0
But (R/p)_p ≠ 0 by Example 5.16. So, M_p ≠ 0. So p is in the support
of M.
Now suppose that p ∈ supp(M) is minimal. Then M_p ≠ 0. So it
has a nonzero element x/s. This being nonzero means that tx ≠ 0 for
all t ∈ S (where S is the complement of p in R). Consequently, the
annihilator of x/s is disjoint from S. So
ann(x) ⊆ ann(x/s) ⊆ p
Let q = ann(x/s) be maximal. Then q ∈ ass(M_p) by Lemma 5.14.
Since R is Noetherian, q = (a_1, …, a_n). This means that there are
elements t_1, …, t_n ∈ S so that t_i a_i x = 0. This implies that
q ⊆ ann(∏ t_i x) ⊆ ann(∏ t_i x/s)
But ann(∏ t_i x/s) = q by maximality of q. So,
q = ann(∏ t_i x) ∈ ass(M) ⊆ supp(M)
which implies that q = p ∈ ass(M). □
5.3.3. intersection of associated primes. One consequence of the theorem
is that the intersection of the associated primes is the same as the
intersection of the primes in the support.
Corollary 5.20.
⋂_{p ∈ ass(M)} p = ⋂_{p ∈ supp(M)} p = rad(ann(M))
To prove this we need the following lemma.
Lemma 5.21. The radical of any ideal I is the intersection of all prime
ideals containing it:
rad I = ⋂_{I ⊆ p prime} p
For this we need a description of the embedding
Spec(S⁻¹R) ↪ Spec(R)
where Spec(R) is the set of prime ideals in R. This embedding sends
any ideal a in S⁻¹R to the ideal
I = a ∩ R := {a ∈ R | a/1 ∈ a}
This is clearly an ideal in R which is disjoint from S (otherwise a
contains a unit s/1). Also, a prime implies S⁻¹R/a is a domain, which
implies that a ∩ R is prime since it is the kernel of the composition
R → S⁻¹R → S⁻¹R/a.
(a ↦ I = a ∩ R describes an embedding since a = S⁻¹I.)
Proof of the lemma. (⊆) is clear since I ⊆ p ⇒ rad I ⊆ p.
(⊇) If a ∉ rad(I) then the multiplicative set S = {1, a, a², …} is
disjoint from I. This implies that the ring T⁻¹(R/I) is nonzero (T is
the image of S in R/I). So, it has a maximal (and thus prime) ideal
m. Then P = m ∩ (R/I) is a prime in R/I disjoint from T. Then P = p/I
for some prime ideal p containing I and disjoint from S. This implies
that a ∉ p. So, a ∉ ⋂ p. □
Proof of corollary. Since the minimal associated primes are the same
as the minimal supporting primes, the two intersections are equal.
(⊇) ann(M) ⊆ ann(x) for all x ∈ M. In particular, ann(M) is
contained in every associated prime p. But then rad(ann(M)) is also
contained in each p.
(⊆) Suppose that a ∉ rad(ann(M)). Then, by the lemma, there is a
prime ideal p containing ann(M) so that a ∉ p. But p ⊇ ann(M) ⇒
M_p ≠ 0. So, p is a supporting prime. So, a ∉ ⋂ p. □
5.3.4. primes associated to extensions.
Theorem 5.22. Suppose that 0 → N → M → M/N → 0 is a short
exact sequence of R-modules. Then
ass(N) ⊆ ass(M) ⊆ ass(N) ∪ ass(M/N)
Proof. The first inclusion is obvious: If R/p embeds in N then it
embeds in M. So, suppose that p is associated to M. Then p = ann(x) for
some x ∈ M. There are two cases. Either Rx ∩ N = 0 or Rx ∩ N ≠ 0.
If Rx ∩ N = 0 then R/p ≅ Rx embeds in M/N making p an associated
prime of M/N.
If Rx ∩ N contains y = ax ≠ 0 then, in particular, a ∉ ann(x) = p. The
annihilator of y is the set of all b ∈ R so that bax = 0. But this implies
ba ∈ p. Since a ∉ p we must have b ∈ p. So, ann(y) = p ∈ ass(N).
The theorem holds in both cases. □
When the short exact sequence splits, this theorem implies that
ass(M) = ass(N) ∪ ass(M/N)
By induction this implies the following.
Corollary 5.23. ass(⊕ M_i) = ⋃ ass(M_i).
5.4. Primary decomposition. In a Noetherian ring, the radical of
any ideal a is the intersection of a finite number of prime ideals
(5.1) rad a = ⋂ p_i
where p_i are the primes associated with the module R/a. When R =
K[X_1, …, X_n] this corresponds, by the Nullstellensatz, to the decomposition
of the corresponding algebraic set A = Z_a(L) ⊆ Lⁿ as a finite
union of irreducible sets:
(5.2) A = ⋃ V_i
The ideal itself is the intersection of corresponding primary ideals q_i:
(5.3) a = ⋂ q_i
The correspondence is that rad q_i = p_i. Thus every term in the
expression (5.1) is the radical of the corresponding term in (5.3). It is
interesting that only the minimal primes are needed in (5.1) but all of
the primary ideals in (5.3) are needed.
The zero sets for the ideals in (5.3) corresponding to the nonminimal
embedded associated primes are contained in the union of the others.
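In Z this is the familiar picture: the primary decomposition of (12) is (12) = (4) ∩ (3), with rad((4)) = (2) and rad((3)) = (3), so (5.1) reads rad((12)) = (2) ∩ (3) = (6). A numeric spot-check (intersection of principal ideals in Z is generated by the lcm):

```python
from math import lcm    # available in Python 3.9+

assert lcm(4, 3) == 12          # (4) ∩ (3) = (lcm(4, 3)) = (12)
window = range(-60, 61)
in_12 = {x for x in window if x % 12 == 0}
in_4_and_3 = {x for x in window if x % 4 == 0 and x % 3 == 0}
assert in_12 == in_4_and_3      # elementwise check of (12) = (4) ∩ (3)
print("(12) = (4) ∩ (3)")
```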
5.4.1. primary submodule.
Definition 5.24. A submodule Q of an R-module M is called primary
if any zero divisor of M/Q is also nilpotent on M/Q.
(1) a ∈ R is a zero divisor of M/Q iff there is some x ∈ M, x ∉ Q
so that ax ∈ Q.
(2) a ∈ R is nilpotent on M/Q iff some power of a annihilates
M/Q, i.e., aⁿM ⊆ Q for some n ≥ 1. In other words, a ∈
rad(ann(M/Q)).
Since the set of zero divisors of M/Q is the union of the associated
primes and rad ann(M/Q) is the intersection of the associated primes,
the condition that Q be primary becomes:
⋃_{p ∈ ass(M/Q)} p = ⋂_{p ∈ ass(M/Q)} p = rad(ann(M/Q))
So,
Proposition 5.25. Q ⊆ M is primary if and only if M/Q has a unique
associated prime p = rad(ann(M/Q)).
We call p the prime belonging to Q and we say that Q is p-primary. In
the special case when M = R, Q = q is an ideal in R and ann(R/q) = q.
So, p = rad q. The definition of a primary module translates to the
following.
Definition 5.26. An ideal q ⊆ R is called primary if whenever a, b ∈ R,
ab ∈ q, b ∉ q ⇒ a ∈ rad q.
Proposition 5.27. The intersection of finitely many p-primary submodules
of M is p-primary.
Proof. Suppose that Q_1, …, Q_n are p-primary submodules of M. Then
ass(M/Q_i) = {p}. This implies that
ass(⊕ M/Q_i) = ⋃ ass(M/Q_i) = {p}
Since M/⋂ Q_i embeds in ⊕ M/Q_i it also has p as its unique associated
prime. So, ⋂ Q_i is p-primary. □
5.4.2. existence of primary decomposition. Suppose that N ⊆ M. Then
a primary decomposition of N is defined to be a minimal expression of
the form
N = Q_1 ∩ Q_2 ∩ ⋯ ∩ Q_n
where the Q_i are primary submodules of M. By the proposition, the prime
ideals p_i belonging to the Q_i will be distinct.
Theorem 5.28. Every submodule N ⊆ M admits a primary decomposition.
Proof. If not, there exists a maximal N with no primary decomposition.
The plan of the proof is to express N as the intersection of two larger
submodules N = K̄ ∩ Ī. By maximality of N, both K̄ and Ī are
intersections of primary submodules. This makes N an intersection of
primary submodules and we will be done.
Since N is a counterexample, it is in particular not primary. So there
is an a ∈ R which is a zero divisor for M/N but is not nilpotent on
M/N. This gives a sequence of submodules of L = M/N:
ker a_L ⊆ ker a²_L ⊆ ker a³_L ⊆ ⋯
By the ACC, this sequence stops. So, ker a^m_L = ker a^{m+1}_L = ⋯. Let
K = ker a^m_L ⊆ M/N. Since a is a zero divisor, K ≠ 0. Let I = im a^m_L.
Since a is not nilpotent on L, I ≠ 0. But K ∩ I = 0 since any element
of the intersection has the form a^m x and satisfies a^m(a^m x) = 0, which
implies that x ∈ ker a^{2m}_L = ker a^m_L. So, a^m x = 0.
But K = K̄/N and I = Ī/N for some K̄, Ī ⊆ M. And K ∩ I = 0 ⟺
K̄ ∩ Ī = N. So we are done. □
5.4.3. partial uniqueness of primary decomposition.
Lemma 5.29. There is a primary decomposition N = Q_1 ∩ ⋯ ∩ Q_n
which is reduced in the sense that
(1) the primes p_i belonging to Q_i are all distinct and
(2) each of the Q_i is necessary, i.e., N ≠ Q_1 ∩ ⋯ ∩ Q̂_i ∩ ⋯ ∩ Q_n
for all i.
In the following theorem I used N = 0 in class. But that seemed to
be confusing so I put in an arbitrary N. This also explains where L
came from.
Theorem 5.30. Suppose N = Q_1 ∩ ⋯ ∩ Q_n is a (reduced) primary
decomposition of N ⊆ M. Let p_i be the prime belonging to Q_i. Then
ass(M/N) = {p_1, …, p_n}
Proof. (⊆) Since N = Q_1 ∩ ⋯ ∩ Q_n, we have a monomorphism
M/N ↪ ⊕ M/Q_i
So,
ass(M/N) ⊆ ass(⊕ M/Q_i) = ⋃ ass(M/Q_i) = {p_1, …, p_n}
(⊇) We want to show that p_1 ∈ ass(M/N). Let
L = Q_2 ∩ ⋯ ∩ Q_n
Then, L ∩ Q_1 = N. So, we have a monomorphism L/N ↪ M/Q_1. So,
ass(L/N) ⊆ ass(M/Q_1) = {p_1}
Since L ≠ N, L/N has at least one associated prime (a maximal ann(x)
where x ≠ 0 ∈ L/N). Therefore, ass(L/N) = {p_1}. Since L/N ⊆ M/N
this implies that p_1 ∈ ass(M/N). □
Theorem 5.31. Suppose that ass(M/N) = {p_1, …, p_n} and N = Q_1 ∩
⋯ ∩ Q_n, N = Q′_1 ∩ ⋯ ∩ Q′_n are two primary decompositions of N ⊆ M
where Q_i, Q′_i are p_i-primary. Then for every minimal (= isolated) p_i,
Q_i = Q′_i.
Proof. Suppose that p_1 is minimal. This means that it does not contain
any of the other associated primes. So, for i ≥ 2, there exists a_i ∈ p_i
so that a_i ∉ p_1. This implies that a = a_2 a_3 ⋯ a_n ∈ p_i for i ≥ 2 but
a ∉ p_1.
Claim: Q_1 = {x ∈ M | a^m x ∈ N for some m > 0}. This will prove
that Q_1 = Q′_1 since the expression on the right is independent of the
primary decomposition.
Pf: (⊆) Let x ∈ Q_1. Then we want to show that a^m x lies in each
Q_i for sufficiently large m. This will show that a^m x ∈ ⋂ Q_i = N.
x ∈ Q_1 ⇒ a^m x ∈ Q_1. For i ≥ 2, a ∈ p_i = rad ann(M/Q_i) ⇒ a^m x ∈ Q_i.
(⊇) Suppose that a^m x ∈ N. Then x ∈ Q_1. Otherwise, a is a zero
divisor for M/Q_1, which implies that a ∈ p_1, which is a contradiction.
This proves the claim and the theorem follows. □
5.4.4. example. (from Atiyah-Macdonald) In this example, p is a prime
whose square p² is not primary. However, rad p² = p.
Let R = K[X, Y, Z]/(XY − Z²) and let x, y, z denote the images of
X, Y, Z in R. Let p = (x, z). This is a prime ideal in R since
R/p ≅ K[X, Y, Z]/(X, Z, XY − Z²) ≅ K[Y]
But p² = (x², xz, z²) is not primary since xy = z² ∈ p² but x ∉ p² and
no power of y lies in p². Finally, it is clear that rad p² = p. (For any
w ∈ p, w² ∈ p². Conversely, p² ⊆ p ⇒ rad p² ⊆ p.)
The ideal p² has two associated primes: p and the maximal ideal
m = (x, y, z) which contains p. It is easy to verify that these are
associated primes since p is the annihilator of z modulo p² and m is
the annihilator of x modulo p². There are no other associated primes
because the primary decomposition of p² has only two terms:
p² = q_1 ∩ q_2
where
q_1 = (x) = (x, z²)
This is p-primary since p/q_1 is generated by z with annihilator p. So
there is a short exact sequence
0 → R/p → R/q_1 → R/p → 0
which implies that p is the only prime associated to R/q_1 and, therefore,
q_1 is p-primary. The other primary ideal is
q_2 = m² = (x², xz, z², y², yz)
This is m-primary since any p_2 ∈ ass(R/m²) contains m² and therefore
contains rad m² = m. Since m is maximal, p_2 = m is the only associated prime.
There is a modified version of the powers of a prime ideal p, called
the symbolic power of p, which always gives a p-primary ideal. As a
special case of HW6, problem 2, this is given by
p⁽²⁾ = p²R_p ∩ R
5.5. Spec(R). For any ideal I ⊆ R let C(I) (= Z_I) be the set of all
prime ideals p of R which contain I.
Definition 5.32. If R is a Noetherian ring then Spec(R) is the set of
all prime ideals in R with the topology given by taking C(I) (for all
ideals I), the empty set and the whole space Spec(R) to be the closed
subsets.
Since R is Noetherian, Spec(R) satisfies the DCC for closed subsets.
In particular, any collection of closed subsets has a minimal element.
To verify that this is a topology we need to show that any intersection
or finite union of closed sets is closed. The DCC implies that any
intersection is a finite intersection.
The first problem on HW6 was to show that, given any ideal I, the
set C(I) contains a finite number of minimal elements. The fancy proof
of this is the following. First, define a closed subset of Spec(R) to be
indecomposable if it is not the union of two proper closed subsets.
Lemma 5.33. C(I) = C(rad(I)).
Lemma 5.34. C(I) is indecomposable iff rad(I) is prime.
Lemma 5.35. In any topological space satisfying the DCC for closed
subsets, every closed subset is a finite union of indecomposable closed
subsets.
Theorem 5.36. For every ideal I, there are finitely many primes
p_1, …, p_n containing I so that any other prime which contains I will
contain one of the p_i.
Proof. Let C(I) = C_1 ∪ ⋯ ∪ C_n be a decomposition of C(I) into
indecomposables. Then C_i = C(p_i) for some prime p_i containing I and,
for any other prime P containing I we have that C(P) ⊆ C(I) and
therefore,
C(P) = (C(P) ∩ C_1) ∪ ⋯ ∪ (C(P) ∩ C_n)
Since C(P) is indecomposable, this implies that C(P) ⊆ C_i = C(p_i)
and this implies that P contains p_i. □
6. Local rings
Our last topic in commutative algebra is local rings. I first went over
the basic definitions, talked about Nakayama's Lemma and I plan to
do discrete valuation rings and then more general valuation rings and
then return to places in fields. All rings are commutative with 1. They
might not be Noetherian.
6.1. Basic definitions and examples.
Definition 6.1. A local ring is a ring R with a unique maximal ideal
m.
Proposition 6.2. A ring is local iff the nonunits form an ideal.
Proof. Suppose first that R is local with maximal ideal m. Let x be
any element not in m. Then x must be a unit. Otherwise, x generates
an ideal (x) which is contained in a maximal ideal other than m.
Conversely, suppose that R is a ring in which the nonunits form an
ideal I. Then every ideal in R must be contained in I since ideals
cannot contain units.
Example 6.3. An example is Z_(p), the integers localized at the prime
ideal (p). Recall that R_p = S⁻¹R is the set of all equivalence classes
of fractions a/b where a ∈ R and b ∈ S where S is the complement of
p. When R is an integral domain, R_p is contained in the quotient field
QR. So, it is easier to think about:
Z_(p) = {a/b ∈ Q | p ∤ b}
This is a local ring with unique maximal ideal
m = {a/b ∈ Q | p divides a, p ∤ b}
This is the unique maximal ideal since all of the other elements are
clearly units. The quotient Z_(p)/m is isomorphic to Z/p although this
is not completely trivial.
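Membership and units in Z_(p) are easy to test with exact rational arithmetic. The following sketch (p = 5; helper names are mine) checks that a/b ∈ Z_(5) is a unit exactly when 5 divides neither a nor b, so the nonunits are precisely the elements of m:

```python
from fractions import Fraction

p = 5
def in_Zp(q: Fraction) -> bool:
    """q = a/b in lowest terms lies in Z_(p) iff p does not divide b."""
    return q.denominator % p != 0

def is_unit(q: Fraction) -> bool:
    # a/b is a unit iff both a/b and b/a lie in Z_(p)
    return q != 0 and in_Zp(q) and in_Zp(1 / q)

assert is_unit(Fraction(3, 7))                                   # 5 ∤ 3, 5 ∤ 7
assert in_Zp(Fraction(10, 7)) and not is_unit(Fraction(10, 7))   # lies in m
print("nonunits of Z_(5) form the ideal m")
```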
More generally we have:
Proposition 6.4. If R is a ring and p is a prime ideal then R_p = S⁻¹R
is a local ring with maximal ideal S⁻¹p.
6.2. Nakayama's Lemma. You probably already know this but it is
a very useful result which is also very easy to prove. There are two
equivalent versions.
Lemma 6.5 (Nakayama's Lemma, version 1). Suppose that R is a local
ring and M is a f.g. R-module. If M = mM then M = 0.
Lemma 6.6 (Nakayama's Lemma, version 2). Suppose that R is a
local ring and E is a f.g. R-module and F ⊆ E is a submodule. If
E = F + mE then E = F.
First I showed that these are equivalent. The first version is obviously
a special case of the second version: just let F = 0. To prove the second
given the first let M = E/F . Then E = F +mE implies that M = mM
which implies M = E/F = 0 which is the same as E = F .
Proof of Nakayama, 1st version. This will be by induction on the number
of generators. If this number is zero, then M = 0 so the lemma is
true. So, suppose that x_1, …, x_s is a minimal set of generators for M
and s ≥ 1. Since x_s ∈ M = mM, there exist a_1, …, a_s ∈ m so that
x_s = a_1x_1 + ⋯ + a_sx_s
This gives:
(1 − a_s)x_s = a_1x_1 + ⋯ + a_{s−1}x_{s−1}
But 1 − a_s is invertible since it is not an element of m (if 1 − a_s ∈ m then
we would get 1 = (1 − a_s) + a_s ∈ m which is not possible). Therefore,
x_s = (1 − a_s)⁻¹(a_1x_1 + ⋯ + a_{s−1}x_{s−1})
which implies that x_1, …, x_{s−1} generate M. So, M = 0 by induction
on s. □
Remark 6.7. Note that M/mM is an R/m-module. Since R/m is a field
(called the residue field of R), M/mM is a vector space over the residue
field. If f : M → N is a homomorphism of R-modules then we get an
induced linear mapping
f̄ : M/mM → N/mN
given by f̄(x + mM) = f(x) + mN (writing x̄ for x + mM, f̄(x̄) is the
class of f(x)). This defines a functor from the category of R-modules to
the category of vector spaces over R/m.
Definition 6.8. One definition of the dimension of a local ring R is
the vector space dimension of m/m²:
dim R = dim_{R/m} m/m²
Proposition 6.9. x_1, …, x_n are generators for the R-module M if
and only if their images x̄_1, …, x̄_n span the vector space M/mM.
Proof. (⇒) This direction is clear. Every element x ∈ M can be written
as x = ∑ a_i x_i. So, any element x̄ = x + mM of M/mM can be written
as x̄ = ∑ ā_i x̄_i.
(⇐) Let N be the submodule of M generated by x_1, …, x_n. If
x̄_1, …, x̄_n span M/mM then N + mM = M. Then N = M by
Nakayama. □
Corollary 6.10. x_1, …, x_n is a minimal set of generators for M iff
x̄_1, …, x̄_n form a basis for the vector space M/mM.
Theorem 6.11. Any finitely generated R-module M is projective if
and only if it is free (isomorphic to Rⁿ).
Proof. It is clear that every free module is projective. So, suppose that
M is projective. Let x_1, …, x_n be a minimal set of generators for M.
Then we get an epimorphism π : Rⁿ → M sending the ith generator
of Rⁿ to x_i. Since M is projective by assumption, there is a section
s : M → Rⁿ of this homomorphism. I.e., πs = id_M. This gives the
following diagram:
M —s→ Rⁿ —π→ M
↓         ↓         ↓
M/mM —s̄→ Rⁿ/mRⁿ —π̄→ M/mM
Since πs is the identity on M, π̄s̄ is the identity on M/mM. Therefore,
s̄ : M/mM → Rⁿ/mRⁿ is a monomorphism. But M/mM and
Rⁿ/mRⁿ have the same finite dimension over R/m. Therefore, s̄ is an
isomorphism. This implies that s(M) + mRⁿ = Rⁿ. By Nakayama this
shows that s(M) = Rⁿ. So, M ≅ Rⁿ as we wanted to show. □
6.3. Complete local rings. If R is a local ring we get a sequence of
ring homomorphisms:
⋯ → R/m^{n+1} → R/mⁿ → R/m^{n−1} → ⋯ → R/m
The inverse limit lim← R/mⁿ is the set of all sequences (a_n ∈ R/mⁿ)
which are compatible in the sense that a_n maps to a_{n−1} under the
homomorphism R/mⁿ → R/m^{n−1}, i.e., a_{n−1} = a_n + m^{n−1}.
The inverse limit is defined by a universal condition. It is the universal
object L with ring homomorphisms f_n : L → R/mⁿ so that the
composition L → R/mⁿ → R/m^{n−1} is f_{n−1}. In other words, given any
other L′ with homomorphisms f′_n : L′ → R/mⁿ, there exists a unique
ring homomorphism g : L′ → L so that f_n ∘ g = f′_n for all n. It is
easy to see that the set of all compatible sequences (a_n) satisfies this
universal property since g(x) = (f′_n(x)).
Definition 6.12. R is a complete local ring if R = lim← R/mⁿ.
Example 6.13. The p-adic integers Z_p form a complete local ring by
definition. They are defined to be Z_p = lim← Z/pⁿZ. I.e., it is the
inverse limit of the sequence
⋯ → Z/pⁿ → Z/p^{n−1} → ⋯ → Z/p
In other words, Z_p is the set of all sequences (a_n) of integers a_n defined
modulo pⁿ so that a_n + p^{n−1}Z = a_{n−1}. There is a natural ring
monomorphism Z ↪ Z_p sending m to the sequence (a_n = m) for all n.
The unique maximal ideal is given by a_1 = 0, i.e., m is the kernel of
the epimorphism Z_p → Z/p given by (a_n) ↦ a_1.
A typical element of Z3 has an infinite 3nary expansion with digits
0, 1, 2 to the left of the decimal place:
2110220112.
padic integers do not have signs. They are all positive since, e.g.,
1 = 2222222.
The maximal ideal is the set of all numbers with last digit equal to 0.
Problem: Show that there is a monomorphism of rings Z(p) , Zp
which is not an epimorphism (since Zp is a Cantor set and therefore
uncountable whereas Z(p) is countable, being a subset of Q).
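A quick computational illustration (plain Python; the helper name is mine, not from the notes): the image of −1 in each truncation Z/3^k has every base-3 digit equal to 2, and the truncations are compatible in the sense above.

```python
def padic_digits(n, p, k):
    """First k base-p digits (least significant first) of the image
    of the integer n in Z_p, i.e. of the residue n mod p^k."""
    digits = []
    r = n % p**k                # representative in Z/p^k
    for _ in range(k):
        digits.append(r % p)
        r //= p
    return digits

# -1 in Z_3: every digit is 2, matching -1 = ...2222222
print(padic_digits(-1, 3, 7))   # [2, 2, 2, 2, 2, 2, 2]

# compatibility: the residue mod 3^7 determines the residue mod 3^6
assert ((-1) % 3**7) % 3**6 == (-1) % 3**6
```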
6.4. Discrete valuation rings. This is the last topic in commutative
algebra.
6.4.1. definition. I gave two of the elementary definitions.
Definition 6.14 (1st definition). A discrete valuation ring (DVR) is
an integral domain R together with a mapping
    v : Q_R^× → Z
called a valuation, from the group Q_R^× of nonzero elements of the
quotient field Q_R of R onto Z, so that
(1) v(ab) = v(a) + v(b) for all a, b ∈ Q_R^×
(2) v(a + b) ≥ min(v(a), v(b)) for all a, b ∈ Q_R^×
(3) R = {a ∈ Q_R^× | v(a) ≥ 0} ∪ {0}
I pointed out that the first condition implies v(1) = 0 (since v(1) =
v(1) + v(1)) and v(a/b) = v(a) − v(b) (since a = (a/b)b implies v(a) =
v(a/b) + v(b)).
Example 6.15. Take R = Z_(p). Then Q_{Z_(p)} = Q and we can define
    v_p : Q^× → Z
by v_p(a/b) = n where n is the number of times that p divides a/b, i.e.,
    a/b = p^n (c/d)
where p ∤ c and p ∤ d. It is easy to verify that the conditions are satisfied.
We figured out in class that equality holds in (2) when v(a) ≠ v(b).
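A small sketch of v_p in code (plain Python using the standard fractions module; the function name is mine), checking conditions (1) and (2) and the equality case just mentioned on a sample pair:

```python
from fractions import Fraction

def vp(q, p):
    """p-adic valuation of a nonzero rational q: the n with
    q = p^n * (c/d) where p divides neither c nor d."""
    q = Fraction(q)
    assert q != 0
    n, a, b = 0, q.numerator, q.denominator
    while a % p == 0:
        a //= p
        n += 1
    while b % p == 0:
        b //= p
        n -= 1
    return n

a, b = Fraction(9, 4), Fraction(5, 3)     # v_3(a) = 2, v_3(b) = -1
assert vp(a * b, 3) == vp(a, 3) + vp(b, 3)          # condition (1)
assert vp(a + b, 3) >= min(vp(a, 3), vp(b, 3))      # condition (2)
assert vp(a + b, 3) == min(vp(a, 3), vp(b, 3))      # equality since v(a) != v(b)
```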
There is one thing I don't like about the first definition. The valuation
is assumed to be defined outside of the original set R. The second
definition restricts the valuation to the nonzero elements of R.
Definition 6.16 (2nd definition). A DVR is a domain R together with
a mapping
    v : R ∖ {0} → {0, 1, 2, …}
so that
(1) v(ab) = v(a) + v(b) for all a, b ∈ R ∖ {0}
(2) v(a + b) ≥ min(v(a), v(b)) for all a, b ∈ R ∖ {0}
(3′) b | a ⟺ v(b) ≤ v(a)
Proposition 6.17. These two definitions are equivalent.
Proof. (⇒) If v is defined on Q_R^× we can just take the restriction of
v to R ∖ {0}. Then the only condition we need to check is (3′). If b | a then
a = bc for some c ∈ R. So, v(a) = v(b) + v(c) ≥ v(b). Conversely, if
v(b) ≤ v(a) then v(a/b) = v(a) − v(b) ≥ 0, so a/b ∈ R and b | a.
(⇐) Suppose we have v defined on R ∖ {0}. Then we can extend it to
Q_R^× by the equation
    v(a/b) = v(a) − v(b)
This is well defined since a/b = c/d implies ad = bc, which implies that
    v(a) + v(d) = v(ad) = v(bc) = v(b) + v(c)
    ⟹ v(a) − v(b) = v(c) − v(d)
Condition (1) is obvious. For condition (2):
    v(a/c + b/c) = v((a + b)/c) = v(a + b) − v(c)
        ≥ min(v(a), v(b)) − v(c)
        = min(v(a) − v(c), v(b) − v(c))
        = min(v(a/c), v(b/c))
For the last condition:
    v(a/b) ≥ 0 ⟺ v(a) ≥ v(b) ⟺ b | a ⟺ a/b ∈ R □
6.4.2. properties.
Proposition 6.18. Suppose that R is a DVR with valuation v. Then
(1) a ∈ R is a unit iff v(a) = 0.
(2) R is a local ring.
(3) m = {a ∈ R | v(a) ≥ 1 or a = 0}.
(4) m = (π) for any π ∈ R so that v(π) = 1.
(5) m^n = (π^n).
π is called the uniformizer of R.
Proof. For (1), a is a unit iff a | 1 iff v(a) ≤ v(1) = 0 iff v(a) = 0. This
implies that the set of nonunits is I = {a ∈ R | v(a) ≥ 1 or a = 0}.
R is a local ring iff I is an ideal. But this is easy: I is closed under
addition since
    v(a + b) ≥ min(v(a), v(b)) ≥ 1
and it is an ideal since, for any nonzero a ∈ I, r ∈ R,
    v(ra) = v(r) + v(a) ≥ v(a) ≥ 1
which implies ra ∈ I. Therefore, I = m is the unique maximal ideal.
Now choose any π ∈ R so that v(π) = 1. Then π divides any
element a ∈ m since v(π) ≤ v(a). So, m = (π).
Finally, it is clear that if π generates m then π^n generates m^n. □
The converse to this is also true, giving a 3rd definition of DVR:
Proposition 6.19. If R is a local domain whose unique maximal ideal
is principal, then R is a DVR.
Proof. If π is a generator for m then any nonzero element of R can be
written uniquely as π^n u where u is a unit (this uses that ⋂_n m^n = 0).
Then the valuation is given by v(π^n u) = n. □
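For R = Z_(p) this factorization can be computed directly (a sketch in plain Python with exact fractions; the helper name is mine, not from the notes): here π = p, and stripping all factors of p from numerator and denominator leaves a unit of Z_(p).

```python
from fractions import Fraction

def factor(q, p):
    """Write the nonzero q in Z_(p) as p^n * u with u a unit of Z_(p);
    returns the pair (n, u)."""
    q = Fraction(q)
    assert q != 0
    n = 0
    while q.numerator % p == 0:
        q /= p
        n += 1
    while q.denominator % p == 0:
        q *= p
        n -= 1
    return n, q        # u has numerator and denominator prime to p

n, u = factor(Fraction(50, 3), 5)
print(n, u)            # 2 2/3, i.e. 50/3 = 5^2 * (2/3) with 2/3 a unit
```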
6.4.3. DVRs and Riemann surfaces. The theorem is that there is a
1-1 correspondence between isomorphism classes of field extensions E
of C of transcendence degree 1 and isomorphism classes of (compact)
Riemann surfaces (complex curves). The points of the Riemann surface
are given by the discrete valuations v : E^× → Z associated to places of
E which are DVRs.
I didn't prove this but I gave an example, the simplest possible example,
which is E = C(X). This field corresponds to the Riemann
sphere S^2 = C ∪ {∞}. Any point x_0 ∈ C corresponds to the valuation
v_{x_0} : E^× → Z given by
    v_{x_0}(f(X)/g(X)) = n
where n is the number of times that X − x_0 divides f(X)/g(X). In
other words,
    f(X)/g(X) = (X − x_0)^n φ(X)/ψ(X)
where φ(X), ψ(X) are not divisible by X − x_0. The condition that
these are not divisible by X − x_0 is equivalent to the condition that
φ(x_0) ≠ 0 and ψ(x_0) ≠ 0. In other words, the function f(X)/g(X) has
a zero of order n at x_0. When n < 0, f(X)/g(X) has a pole at x_0,
i.e., f(x_0)/g(x_0) = ∞. The DVR is the set of all f(X)/g(X) which
do not have a pole at x_0. The maximal ideal is the set of all rational
functions which are zero at x_0.
There is one other valuation on E = C(X) corresponding to the point
at infinity. Its DVR should be the set of all rational functions f(X)/g(X)
which do not have a pole at ∞. This is equivalent to saying that
deg(f) ≤ deg(g). So, the valuation at infinity is:
    v_∞(f(X)/g(X)) = deg(g) − deg(f).
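Both valuations are easy to compute for concrete rational functions. A sketch in plain Python (polynomials as coefficient lists, lowest degree first; all names are mine, not from the notes):

```python
def deg(f):
    """Degree of a polynomial given as a coefficient list."""
    while f and f[-1] == 0:
        f = f[:-1]
    return len(f) - 1

def v_infinity(f, g):
    """v_inf(f/g) = deg(g) - deg(f): order of vanishing at infinity."""
    return deg(g) - deg(f)

def v_at(x0, f, g):
    """Order of f/g at x0: (times X - x0 divides f) minus (same for g)."""
    def mult(h):
        n = 0
        while True:
            # synthetic (Horner) division of h by X - x0; r is h(x0)
            q, r = [], 0
            for c in reversed(h):
                r = c + x0 * r
                q.append(r)
            if r != 0:
                return n
            h = list(reversed(q[:-1]))
            n += 1
    return mult(f) - mult(g)

# f/g = X^2 / (X - 1): double zero at 0, simple poles at 1 and infinity
f, g = [0, 0, 1], [-1, 1]
print(v_at(0, f, g), v_at(1, f, g), v_infinity(f, g))  # 2 -1 -1
```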
MATH 101B: ALGEBRA II
PART C: SEMISIMPLICITY
We have one week to talk about semisimple rings and semisimple
modules (Chapter XVII). A semisimple R-module is a finite direct sum
of simple modules
    M = S_1 ⊕ ⋯ ⊕ S_n
and a semisimple ring is a ring R for which all f.g. modules are semisimple.
The main reason that I am choosing this particular topic in noncommutative
algebra is for the study of representations of finite groups,
which we will do after the break.
If G is a finite group then a representation of G over C is the same
as a module over the group ring C[G] (also written CG). Once we have
the basic definitions it will be very easy to see that C[G] is a semisimple
ring. This makes the representation theory of finite groups elementary.
From now on, all rings will be associative rings with 1 ≠ 0 (which
may or may not be commutative) and "R-module" will be understood to
mean left R-module.
Contents
1. Simple rings and modules 2
1.1. Simple rings 2
1.2. Simple modules 3
2. Semisimple modules 5
2.1. Finiteness conditions 5
2.2. Definition of semisimple 6
2.3. Unique decomposition 7
2.4. Jacobson radical 8
3. Semisimple rings 9
3.1. Jacobson radical of a ring 9
3.2. Wedderburn structure theorem 10
1. Simple rings and modules
Although semisimple rings are not defined to be products of simple
rings (this being a theorem and not a definition), it still makes sense
to talk about simple rings first.
1.1. Simple rings.
Definition 1.1. A ring R is called simple if it has no nonzero proper
two-sided ideals.
For example, any field is simple. There is also a noncommutative
version of a field:
1.1.1. division rings.
Definition 1.2. A division ring is a ring R in which every nonzero
element has a two-sided inverse. I.e., for all a ≠ 0 in R there is a b ∈ R
so that ba = ab = 1.
Proposition 1.3. R is a division ring iff every nonzero element has a
left inverse.
Proof. If every a ≠ 0 in R has a left inverse b (so that ba = 1) then b
also has a left inverse c with cb = 1. But then
    c = c(ba) = (cb)a = a
So b is a two-sided inverse for a, making R a division ring. □
Theorem 1.4. Division rings are simple.
Proof. Any nonzero two-sided ideal I ⊆ D contains a nonzero element
a. So, I ⊇ Da ∋ a^(−1)a = 1, which forces I = D. □
1.1.2. matrix rings.
Definition 1.5. For any ring R let Mat_n(R) denote the ring of n × n
matrices (a_ij) with coefficients a_ij in R. Addition is coordinatewise:
(a_ij) + (b_ij) = (c_ij) where c_ij = a_ij + b_ij. Multiplication is matrix
multiplication: (a_ij)(b_ij) = (c_ij) where c_ij = ∑_{k=1}^n a_ik b_kj. A tedious
and unnecessary computation will show that Mat_n(R) is a ring. (There
is an easy proof which we will see later.)
I will assume that everyone knows how matrices work. There is one
point that you should be careful about. Since R is noncommutative,
the determinant does not behave the way it should. I.e., there is in general no
function det : Mat_n(R) → R so that det(A) det(B) = det(AB).
One thing I should have pointed out is that Mat_n(R) is a free R-module
with basis given by the matrices x_ij which have a 1 in the ij
position and 0 everywhere else. A matrix with coefficients a_ij ∈ R can
be written as
    (a_ij) = ∑ a_ij x_ij
Theorem 1.6. If D is a division ring then Mat_n(D) is simple.
Proof. If I is a nonzero two-sided ideal in Mat_n(D) then I want to show
that I = Mat_n(D). Let A = (a_ij) be a nonzero element of I. Then one
of the entries is nonzero: a_ij ≠ 0. Multiplying on the left by x_ii and on
the right by the matrix a_ij^(−1) x_jj we see that
    x_ij = x_ii A (a_ij^(−1) x_jj) ∈ I
This implies that every matrix is in I since, for any b ∈ D and any
k, ℓ ≤ n, we have
    (b x_ki) x_ij x_jℓ = b x_kℓ ∈ I
Taking sums we get any element of Mat_n(D). □
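The key step of this proof, recovering a matrix unit x_ij from one nonzero entry of A, can be checked numerically over the division ring D = Q (a plain-Python sketch with exact fractions; the helper names are mine):

```python
from fractions import Fraction as F

n = 2

def unit(i, j):
    """The matrix unit x_ij: 1 in position (i, j), 0 elsewhere."""
    return [[F(1) if (r, c) == (i, j) else F(0) for c in range(n)]
            for r in range(n)]

def mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

A = [[F(0), F(3)], [F(5), F(7)]]    # nonzero entry a_01 = 3
i, j = 0, 1
# x_ii * A * (a_ij^{-1} x_jj) = x_ij, as in the proof
X = mul(mul(unit(i, i), A),
        [[e / A[i][j] for e in row] for row in unit(j, j)])
print(X == unit(i, j))  # True
```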
Basically, there are no other examples of simple rings, at least in the Artinian case.
1.2. Simple modules.
Definition 1.7. A (left) R-module M is called simple if M ≠ 0 and
M has no proper nonzero submodules. (N ⊆ M is proper if N ≠ M.)
I made some trivial observations without proof:
(1) If N ⊆ M is a maximal proper submodule then M/N is simple.
(2) M is simple iff 0 is a maximal proper submodule.
Theorem 1.8. _RR (R considered as a left R-module) is simple iff R
is a division ring.
Proof. This is obvious both ways. (⇐) A submodule N of _RR is the
same as a left ideal. If N is nonzero then it has a nonzero element a
with inverse b. Then ba = 1 ∈ N. So, N = R. (⇒) Conversely,
suppose that _RR is simple and a ≠ 0 in R. Then Ra is a nonzero
submodule of R. Therefore, Ra = R. So, 1 = ba ∈ Ra for some b, and b is a left
inverse for a. So, R is a division ring by Proposition 1.3. □
One of the most important theorems about simple modules is also
trivial:
Lemma 1.9 (Schur's lemma). If M, N are simple R-modules and f :
M → N is an R-module homomorphism then either f = 0 or f is an
isomorphism.
Proof. The kernel of f is a submodule of M. So it is either 0 or M.
If ker f = M then f = 0. If ker f = 0 then f is a monomorphism,
which means that M is isomorphic to the image of f. But the image
im f = f(M) is a submodule of N. So, there are two possibilities.
Either f(M) = 0 or f(M) = N. The first case is not possible since
M ≠ 0. So, f(M) = N, which means that f is onto. But f is also a
monomorphism. So, f is an isomorphism. □
Theorem 1.10. The endomorphism ring End_R(M) of a simple module
M is a division ring.
At this point I decided to review the definition.
Definition 1.11. The endomorphism ring End_R(M) of an R-module
M is the set of all R-module homomorphisms f : M → M with multiplication
defined by composition of functions, fg = f ∘ g, and pointwise
addition: (f + g)(x) = f(x) + g(x).
Proof of the theorem. If M is simple then Schur's lemma tells us that
any nonzero element of End_R(M) is an isomorphism and thus invertible. □
I had time for one more trivial theorem. First a definition.
Definition 1.12. For any ring R the opposite ring R^op is defined to be
the same set R with the order of multiplication reversed. In other words,
the opposite of the ring (R, ·, +) is the ring (R, ·op, +) with the same set
R and the same addition + but with a new multiplication ·op defined
by
    a ·op b = ba
R and R^op are said to be anti-isomorphic. (So, S is anti-isomorphic
to R iff S ≅ R^op.)
Theorem 1.13. For any ring R we have End_R(_RR) ≅ R^op.
Proof. An anti-isomorphism φ : End_R(_RR) → R is given by φ(f) =
f(1), with inverse ψ : R → End_R(_RR) sending a ∈ R to ψ(a) given by
right multiplication by a. (So, ψ(a)(x) = xa.) It is easy to see that the
mappings φ and ψ are inverse to each other. The point is that they
reverse the order of multiplication:
    ψ(ab)(x) = xab = ψ(b)ψ(a)(x)
So, they induce an isomorphism End_R(_RR) ≅ R^op. □
2. Semisimple modules
2.1. Finiteness conditions. We will need to assume some finiteness
conditions for modules. These work in the same way that they do for
modules over commutative rings.
2.1.1. Noetherian modules.
Definition 2.1. An R-module M is called Noetherian if it satisfies the
ascending chain condition (ACC) for submodules. This is equivalent
to saying that every submodule of M is finitely generated. A ring R is
called left Noetherian if _RR is a Noetherian module.
Just as in the commutative case we have:
Theorem 2.2. (1) If M is Noetherian then every submodule and
quotient module of M is Noetherian.
(2) R is left Noetherian iff every finitely generated R-module is Noetherian.
Noetherian modules have lots of maximal submodules, by which I
mean maximal proper submodules.
Proposition 2.3. Every proper submodule of a Noetherian module is
contained in a maximal submodule.
Proof. Otherwise, we would get a sequence of larger and larger submodules,
contradicting the ACC. □
Sometimes it is enough just to assume that M is finitely generated.
Lemma 2.4 (finite sum lemma). If a f.g. module M is a sum of submodules
M = ∑_{α∈I} N_α, then there is a finite subset J of the index set
I so that M = ∑_{α∈J} N_α.
Proof. To say that M is the sum of the submodules N_α is the same as
saying that the inclusion maps N_α ↪ M give an epimorphism
    ⊕_{α∈I} N_α → M
Suppose that x_1, …, x_n generate M and, for each i, choose y_i ∈
⊕_{α∈I} N_α which maps onto x_i. Then each y_i has only finitely many
nonzero coordinates. Let J ⊆ I be the set of all α ∈ I so that some y_i
has a nonzero α-coordinate. Then M = ∑_{α∈J} N_α. □
2.1.2. Artinian modules.
Definition 2.5. An R-module M is called Artinian if it satisfies the
descending chain condition (DCC) for submodules. A ring R is called
left Artinian if _RR is an Artinian module.
Analogous to the corresponding statements for Noetherian modules,
and with analogous proofs, we have the following.
Theorem 2.6. (1) Every submodule and quotient module of an Artinian
module is Artinian.
(2) R is left Artinian iff every f.g. R-module is Artinian.
(3) Every nonzero submodule of an Artinian module contains a simple
submodule.
A statement which is dual to the finite sum lemma for Noetherian
modules is the following.
Lemma 2.7 (finite intersection lemma). Suppose that M is an Artinian
module and N = ∩_{α∈I} N_α is an intersection of submodules.
Then there is a finite subset J of the index set I so that N = ∩_{α∈J} N_α.
Proof. Finite intersections form a descending sequence of submodules
which stops when it is equal to the infinite intersection. □
Example 2.8. (1) Z is a Noetherian Z-module but it is not Artinian.
(2) Take the ring
    Z[1/p] = {a/p^n | a ∈ Z, n ≥ 0}
Z ⊆ Z[1/p] is a subring (not an ideal). So the quotient Z[1/p]/Z is a Z-module
(not a ring). This module is Artinian but not Noetherian. (But
there is a theorem that says that all Artinian rings are Noetherian.)
2.2. Definition of semisimple.
Definition 2.9. A f.g. R-module M is called semisimple if it satisfies
one of the following equivalent definitions.
(1) M is a direct sum of simple modules.
(2) M is a sum of simple submodules.
(3) Every submodule of M is a direct summand.
First I need a trivial lemma.
Lemma 2.10. Suppose that N, S are submodules of any module M,
where S is simple. Then N + S is either equal to N or to N ⊕ S. In
both cases, N is a direct summand of N + S.
Proof. N ∩ S is a submodule of S. So, it is either 0 or S. In the first
case, N + S = N ⊕ S. In the second case, N + S = N. □
Proof of equivalence of definitions. Clearly, (1) implies (2). Also, by
the finite sum lemma (2.4), (2) implies
(2′) M is a finite sum of simple submodules.
Since every simple module is generated by one element (any nonzero
element), condition (2′) includes the assumption that M is finitely generated.
(2′) ⟹ (3): Suppose that N ⊆ M and M = ∑_{i=1}^n S_i. For each k ≤ n
let
    N_k = N + S_1 + ⋯ + S_k
Lemma 2.10 says that N_k is a direct summand of N_{k+1} = N_k + S_{k+1}
for every k. Therefore, N is a direct summand of N_n = M.
(3) ⟹ (1): Since any submodule N ⊆ M is a summand, M = N ⊕ K,
N ≅ M/K is also a quotient of M. Therefore, every submodule of M
is f.g., making M Noetherian. If M = M/0 is not a direct sum of simple
modules then, by the ACC, there is a submodule K maximal with the
property that M/K is not a direct sum of simple modules. But then M = N ⊕ K
by assumption. So, N ≅ M/K is not a direct sum of simple modules.
In particular, N is not simple. So, it has a nonzero proper submodule
N_1. But then we must have
    N = N_1 ⊕ N_2
(M = N_1 ⊕ L and we can take N_2 = L ∩ N). Since K ⊊ K ⊕ N_1,
the quotient M/(K ⊕ N_1) ≅ N_2 is a direct sum of simple modules by
maximality of K. Similarly, N_1 is a direct sum of simple modules. So,
N = N_1 ⊕ N_2 ≅ M/K is a direct sum of simple modules and we have
a contradiction, which shows that (1) must hold. □
2.3. Unique decomposition.
Theorem 2.11. The simple summands of a semisimple R-module are
uniquely determined up to isomorphism. In other words, if
    M = S_1 ⊕ ⋯ ⊕ S_n = T_1 ⊕ ⋯ ⊕ T_m
where S_i, T_j are simple submodules of M, then n = m and S_i ≅ T_{σ(i)}
for some permutation σ of {1, …, n}.
Proof. For each j let
    N_j = T_1 ⊕ ⋯ ⊕ T̂_j ⊕ ⋯ ⊕ T_m
(omitting the jth summand). This is a maximal submodule of M since
M/N_j ≅ T_j is simple. Also, ∩ N_j = 0. So, there is some j so that
S_n ⊄ N_j. Since S_n is simple,
this implies that S_n ∩ N_j = 0. Since N_j is maximal we conclude that
M = S_n ⊕ N_j and therefore,
    M/S_n ≅ S_1 ⊕ ⋯ ⊕ S_{n−1} ≅ N_j ≅ T_1 ⊕ ⋯ ⊕ T̂_j ⊕ ⋯ ⊕ T_m
The theorem follows by induction on n. □
Definition 2.12. Define the length ℓ(M) of a semisimple module M to
be the number of simple summands in any decomposition M = ⊕ S_i.
Corollary 2.13. Submodules and quotient modules of semisimple modules
are semisimple. Furthermore, ℓ(N) + ℓ(M/N) = ℓ(M) for any
submodule N of a semisimple module M.
Proof. By (3), M = N ⊕ K for some submodule K ≅ M/N. Each is a
quotient of M and therefore a sum of finitely many simple modules.
So, they are both semisimple. Decomposing N, K into direct sums of
n, m simple modules, we get a decomposition of M = N ⊕ K into n + m
simple modules. So,
    ℓ(M) = n + m = ℓ(N) + ℓ(M/N) □
Corollary 2.14. Semisimple modules are both Noetherian and Artinian.
Proof. For any increasing or decreasing sequence of submodules the
lengths increase or decrease. □
2.4. Jacobson radical. An Artinian module M is semisimple iff its
Jacobson radical is zero.
Definition 2.15. The Jacobson radical rM of any R-module M is
defined to be the intersection of all maximal (proper) submodules of
M.
Proposition 2.16 (Naturality of rM). If f : M → N is a homomorphism
of R-modules, then f(rM) ⊆ rN.
Proof. For any maximal submodule L′ ⊆ N, f^(−1)(L′) is either equal
to M or to a maximal submodule of M. Therefore,
    f^(−1)(∩ L′) = ∩ f^(−1)(L′)
contains rM, which is what we wanted to show. □
Theorem 2.17. Suppose that M is an Artinian R-module. Then
(1) M/rM is semisimple.
(2) M is semisimple iff rM = 0.
Proof. By the finite intersection lemma (2.7), the Jacobson radical of
M is a finite intersection of maximal submodules: rM = ∩ N_i. Thus
we have an exact sequence:
    0 → rM → M → ⊕ M/N_i
Since each N_i is maximal, M/N_i is simple. So, M/rM is also semisimple,
being isomorphic to a submodule of ⊕ M/N_i.
This shows that rM = 0 implies M is semisimple. Conversely, suppose
M = ⊕ S_i. Then the kernel of each projection M → S_i is a
maximal submodule N_i and rM ⊆ ∩ N_i = 0. □
3. Semisimple rings
3.1. Jacobson radical of a ring. The theorem is that an Artinian
ring is semisimple if and only if its Jacobson radical is zero. But we
need to define the terms.
Definition 3.1. The Jacobson radical rR of a ring R is defined to be
the intersection of all maximal left ideals. (This is the same as the
Jacobson radical of _RR.)
As a special case of the previous theorem we have the following.
Corollary 3.2. If R is a left Artinian ring then _RR is semisimple iff
rR = 0.
Definition 3.3. A ring R is called semisimple if every f.g. R-module
is semisimple.
By what we know, it is clear that all f.g. modules are semisimple if
and only if _RR itself is semisimple. This gives the following.
Corollary 3.4. A ring is semisimple iff it is left Artinian (and Noetherian)
and its Jacobson radical is zero.
Proof. (⇐) This follows from the previous corollary.
(⇒) If R is a semisimple ring, _RR is a semisimple module, which
implies that it is Artinian and Noetherian and its radical is zero. □
Theorem 3.5. The Jacobson radical of R is a two-sided ideal.
Proof. rR is clearly a left ideal. So, let a ∈ R. Then right multiplication
by a is an R-module homomorphism ψ(a) : _RR → _RR. By naturality of
r (2.16) this implies that (rR)a ⊆ rR. So, rR is also a right ideal. □
Corollary 3.6. Simple Artinian rings are semisimple. (The radical is
a proper two-sided ideal, hence zero.)
Corollary 3.7. Division rings and matrix rings over division rings are
semisimple.
Proof. Division rings are clearly Artinian. So, they are semisimple.
The matrix ring Mat_n(D) is finitely generated as a D-module and
therefore is Artinian (since all left ideals are also D-submodules). So,
it is also semisimple by the previous corollary. □
3.2. Wedderburn structure theorem.
Theorem 3.8 (Wedderburn structure theorem). A ring R is semisimple
if and only if it is a finite product of matrix rings over division
rings:
    R ≅ ∏_i Mat_{n_i}(D_i)
To show that these products are semisimple we need the following
lemma.
Lemma 3.9. If R, S are semisimple rings then their product R × S is
semisimple.
Proof. The unit 1 = (1, 1) of the ring R × S can be written as a sum:
    1 = (1, 0) + (0, 1) = e_1 + e_2
where the e_i are central, orthogonal idempotents. (Central means e_i x = x e_i
for all x, orthogonal means e_1 e_2 = 0, and idempotent means e_i^2 = e_i.) If
M is any R × S module then any x ∈ M can be written as x = 1x =
e_1 x + e_2 x. Thus
    M = e_1 M ⊕ e_2 M.
Since e_1(0 × S) = (0 × S)e_1 = 0, the action of 0 × S on e_1 M is zero. So, the action
of R × S on e_1 M factors through (R × S)/(0 × S) ≅ R, and similarly the action of
R × S on e_2 M factors through S. If M is f.g. then e_1 M, e_2 M are finitely
generated modules over R, S respectively. So they are semisimple. This
makes M semisimple. □
This lemma, together with Corollary 3.7, proves that the rings named
in the structure theorem are all semisimple.
3.2.1. endomorphisms. Suppose that M is a semisimple R-module. Then
we want to show that the endomorphism ring End_R(M) is the opposite
of one of the rings in the Wedderburn structure theorem. This will
prove the structure theorem because of the observation (Theorem 1.13)
that End_R(_RR) ≅ R^op: If R is semisimple then _RR is a semisimple R-module.
This will imply that End_R(_RR) is the opposite of one of the
rings in the structure theorem and therefore R is one of those rings.
In the decomposition M = ⊕ S_i, some of the simples S_i may be
isomorphic to each other. We use the notation nS_i to denote a direct
sum of n copies of the simple module S_i. Then we can write:
    M ≅ ⊕_{i=1}^m n_i S_i
where the S_i are nonisomorphic.
By Schur's lemma, there are no nonzero homomorphisms from S_i to S_j for
i ≠ j. Therefore,
    End_R(M) ≅ ∏_{i=1}^m End_R(n_i S_i)
So, it suffices to show the following lemma.
So, it suffices to show the following lemma.
Lemma 3.10. (1) If S is a simple Rmodule then
EndR (nS) = M atn (D)
where D = EndR (S).
(2) If D is a division algebra then Dop is also a division algebra.
(3) M atn (D)op
= M atn (Dop ).
Proof. (1) An isomorphism : EndR (nS) M atn (D) is given as
follows. For any f : nS nS let (f ) M atn (D) be the matrix with
ijcoordinate (in D = EndR (S)) given by the composition
tj f pi
pi f tj : S nS
nS S
where tj : S nS is the inclusion of the jth summand and pi : nS S
is the projection to the ith summand.
This is a homomorphism since
X
(f g) = (pi f g tj ) = (pi f tk pk g tj )
= (pi f tk )(pk g tj ) = (f )(g)
P
This uses the equation tk pk = id which should be familiar from
your very first assignment.
1
PTo show that is an isomorphism, we give the inverse: (fij ) =
ti fij pj
(2) is obvious. To show (3), an antiisomorphism
M atn (Dop )
: M atn (D)
is given by transpose: (f ) = f t is the matrix with ijcoordinate equal
to fji .
3.2.2. algebraically closed fields. We want to talk about algebras over
algebraically closed fields.
Definition 3.11. An algebra over a field K is defined to be a ring A
which contains the field K in its center. (The center of A is the set
of elements which commute with all other elements of A. This is a
subring but not an ideal.)
Since the center of A acts on all A-modules, every module over a
K-algebra will be a vector space over K. If a K-algebra A is finite
dimensional as a vector space over K, then it is clearly Artinian.
Theorem 3.12. The only finite dimensional division algebra over an
algebraically closed field K is K itself.
Proof. Suppose that D is an n-dimensional division algebra over an
algebraically closed field K. Take any a ∈ D. Right multiplication by
a gives a K-linear endomorphism of the n-dimensional vector space D:
    ψ(a) : D ≅ K^n → D ≅ K^n
Since K is algebraically closed, it contains all of the eigenvalues (roots
of the characteristic polynomial) of this endomorphism. Let λ ∈ K be
one of these eigenvalues. Then right multiplication by a − λ is singular,
i.e., there is an x ≠ 0 in D (the eigenvector) so that x(a − λ) = 0. But
D is a division algebra. So, this implies that a = λ ∈ K. Since a ∈ D
was arbitrary, this implies that D = K. □
Corollary 3.13. The only semisimple algebras over C are finite products
of matrix algebras:
    ∏ Mat_{n_i}(C).
MATH 101B: ALGEBRA II
PART D: REPRESENTATIONS OF FINITE GROUPS
For the rest of the semester we will discuss representations of groups,
mostly finite groups. An elementary treatment would start with characters
and their computation. But the Wedderburn structure theorem
will allow us to start at a higher level, which will give more meaning
to the character tables which we will be constructing. This is from
Lang, XVIII, 1–7, with additional material from Serre's Linear Representations
of Finite Groups (Springer Graduate Texts in Math 42)
and Alperin and Bell's Groups and Representations (Springer GTM
162).
Contents
1. The group ring k[G] 2
1.1. Representations of groups 2
1.2. Modules over k[G] 3
1.3. Semisimplicity of k[G] 5
1.4. idempotents 8
1.5. Center of C[G] 9
2. Characters 10
2.1. Basic properties 10
2.2. Irreducible characters 13
2.3. formula for idempotents 18
2.4. character tables 19
2.5. orthogonality relations 23
3. Induction 31
3.1. induced characters 31
3.2. Induced representations 36
3.3. Artins theorem 42
1. The group ring k[G]
The main idea is that representations of a group G over a field k are
the same as modules over the group ring k[G]. First I defined both
terms.
1.1. Representations of groups.
Definition 1.1. A representation of a group G over a field k is defined
to be a group homomorphism
    ρ : G → Aut_k(V)
where V is a vector space over k.
Here Aut_k(V) is the group of k-linear automorphisms of V. This is also
written as GL_k(V). This is the group of units of the ring End_k(V) =
Hom_k(V, V) which, as I explained before, is a ring with addition defined
pointwise and multiplication given by composition. If dim_k(V) = d
then Aut_k(V) ≅ Aut_k(k^d) = GL_d(k), which can also be described as
the group of units of the ring Mat_d(k) or as:
    GL_d(k) = {A ∈ Mat_d(k) | det(A) ≠ 0}
d = dim_k(V) is called the dimension of the representation ρ.
1.1.1. examples.
Example 1.2. The first example I gave was the trivial representation.
This is usually defined to be the one dimensional representation V = k
with trivial action of the group G (which can be arbitrary). Trivial
action means that ρ(σ) = 1 = id_V for all σ ∈ G.
In the next example, I pointed out that the group G needs to be
written multiplicatively no matter what.
Example 1.3. Let G = Z/3. Written multiplicatively, the elements
are 1, σ, σ^2. Let k = R and let V = R^2 with ρ(σ) defined to be rotation
by 120° = 2π/3. I.e.,

    ρ(σ) = ( −1/2   −√3/2 )
           (  √3/2  −1/2  )
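Since σ^3 = 1 in Z/3, ρ(σ)^3 must be the identity matrix; a quick numerical check (plain Python, floating point; not from the lecture):

```python
import math

c, s = math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3)
R = [[c, -s], [s, c]]        # rho(sigma): rotation by 120 degrees

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R3 = matmul(matmul(R, R), R)     # rho(sigma)^3
# R3 is (numerically) the 2x2 identity matrix, as sigma^3 = 1 requires
assert all(abs(R3[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(2) for j in range(2))
```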
Example 1.4. Suppose that E is a field extension of k and G =
Gal(E/k). Then G acts on E by k-linear transformations. This gives
a representation:
    ρ : G ↪ Aut_k(E)
Note that this map is an inclusion by definition of the Galois group.
1.1.2. axioms. In an elementary discussion of group representations I
would write a list of axioms as a definition. However, they are just
long-winded explanations of what it means for ρ : G → Aut_k(V) to be
a group homomorphism. The only advantage is that you don't need
to assume that ρ(σ) is an automorphism. Here are the axioms. (I
switched the order of (2) and (3) in the lecture.)
(1) ρ(1) = 1, i.e., 1v = v for all v ∈ V
(2) ρ(στ) = ρ(σ)ρ(τ) for all σ, τ ∈ G, i.e., (στ)v = σ(τv) for all v ∈ V
(3) ρ(σ) is k-linear for all σ ∈ G, i.e., σ(av + bw) = aσv + bσw for all v, w ∈ V and a, b ∈ k
The first two conditions say that ρ is an action of G on V. Actions
are usually written by juxtaposition:
    σv := ρ(σ)(v)
The third condition says that the action is k-linear. So, together, the
axioms say that a representation of G is a k-linear action of G on a
vector space V.
1.2. Modules over k[G]. The group ring k[G] is defined to be the
set of all finite k-linear combinations of elements of G: ∑ a_σ σ where
a_σ ∈ k for all σ ∈ G and a_σ = 0 for almost all σ.
For example, R[Z/3] is the set of all linear combinations
    x + yσ + zσ^2
where x, y, z ∈ R. I.e., R[Z/3] ≅ R^3. In general k[G] is a vector space
over k with G as a basis.
Multiplication in k[G] is given by
    (∑ a_σ σ)(∑ b_τ τ) = ∑ c_λ λ
where c_λ ∈ k can be given in three different ways:
    c_λ = ∑_{στ=λ} a_σ b_τ = ∑_{σ∈G} a_σ b_{σ^(−1)λ} = ∑_{τ∈G} a_{λτ^(−1)} b_τ
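The convolution formula is easy to implement for a cyclic group. A small sketch in plain Python (elements of k[Z/n] as coefficient lists [a_1, a_σ, …]; the function name is mine, not from the notes):

```python
def group_ring_mult(a, b, n=3):
    """Multiply two elements of k[Z/n] given as coefficient lists:
    c_lambda = sum over sigma*tau = lambda of a_sigma * b_tau."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % n] += ai * bj   # sigma^i * sigma^j = sigma^(i+j)
    return c

# (1 + sigma)(1 + sigma^2) = 1 + sigma^2 + sigma + sigma^3
#                          = 2 + sigma + sigma^2
print(group_ring_mult([1, 1, 0], [1, 0, 1]))  # [2, 1, 1]
```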
Proposition 1.5. k[G] is a k-algebra.
This is straightforward and tedious. So, I didn't prove it. But I did
explain what it means and why it is important.
Recall that an algebra over k is a ring which contains k in its center.
The center Z(R) of a (noncommutative) ring R is defined to be the set
of elements of R which commute with all the other elements:
    Z(R) := {x ∈ R | xy = yx for all y ∈ R}
Z(R) is a subring of R.
The center is important for the following reason. Suppose that M
is a (left) R-module. Then each element r ∈ R acts on M by left
multiplication:
    λ_r : M → M,  λ_r(x) = rx
This is a homomorphism of Z(R)-modules since:
    λ_r(ax) = rax = arx = aλ_r(x) for all a ∈ Z(R)
Thus the action of R on M gives a ring homomorphism:
    λ : R → End_{Z(R)}(M)
Getting back to k[G], suppose that M is a k[G]-module. Then the
action of k[G] on M is k-linear since k is in the center of k[G]. So, we
get a ring homomorphism
    ρ̃ : k[G] → End_k(M)
This restricts to a group homomorphism
    ρ : G → Aut_k(M)
I pointed out that, in general, any ring homomorphism φ : R → S
will induce a group homomorphism U(R) → U(S) where U(R) is the
group of units of R. And I pointed out earlier that Aut_k(M) is the
group of units of End_k(M). G is contained in the group of units of
k[G]. (An interesting related question is: Which finite groups occur as
groups of units of rings?)
This discussion shows that a k[G]-module M gives, by restriction, a
representation of the group G on the k-vector space M. Conversely,
suppose that
    ρ : G → Aut_k(V)
is a group representation. Then we can extend ρ to a ring homomorphism
    ρ̃ : k[G] → End_k(V)
by the simple formula
    ρ̃(∑ a_σ σ) = ∑ a_σ ρ(σ)
When we say that a representation of a group G is "the same" as
a k[G]-module, we are talking about this correspondence. The vector
space V is also called a G-module. So, it would be more accurate to
say that a G-module is the same as a k[G]-module.
Corollary 1.6. (1) Any group representation ρ : G → Aut_k(V)
extends uniquely to a ring homomorphism ρ̃ : k[G] → End_k(V),
making V into a k[G]-module.
(2) For any k[G]-module M, the action of k[G] on M restricts to
give a group representation G → Aut_k(M).
(3) These two operations are inverse to each other in the sense that
ρ is the restriction of ρ̃, and an action of the ring k[G] is the
unique extension of its restriction to G.
There are some conceptual differences between the group representation and the corresponding $k[G]$-module. For example, the module might not be faithful even if the group representation is:

Definition 1.7. A group representation $\rho : G \to \operatorname{Aut}_k(V)$ is called faithful if only the trivial element of $G$ acts as the identity on $V$, i.e., if the kernel of $\rho$ is trivial. An $R$-module $M$ is called faithful if the annihilator of $M$ is zero, where $\operatorname{ann}(M) = \{r \in R \mid rx = 0\ \forall x \in M\}$.
These two definitions do not agree. For example, take the representation
$$\rho : \mathbb{Z}/3 \hookrightarrow GL_2(\mathbb{R})$$
which we discussed earlier. This is faithful. But the extension to a ring homomorphism
$$\tilde\rho : \mathbb{R}[\mathbb{Z}/3] \to \operatorname{Mat}_2(\mathbb{R})$$
is not a monomorphism since $1 + \sigma + \sigma^2$ is in its kernel.
1.3. Semisimplicity of k[G]. The main theorem about $k[G]$ is the following.

Theorem 1.8 (Maschke). If $G$ is a finite group of order $|G| = n$ and $k$ is a field with $\operatorname{char} k \nmid n$ (or $\operatorname{char} k = 0$) then $k[G]$ is semisimple.

Instead of saying $\operatorname{char} k$ is either 0 or a prime not dividing $n$, I will say that $1/n \in k$. By the Wedderburn structure theorem we get the following.

Corollary 1.9. If $1/|G| \in k$ then
$$k[G] \cong \operatorname{Mat}_{d_1}(D_1) \times \cdots \times \operatorname{Mat}_{d_b}(D_b)$$
where the $D_i$ are finite dimensional division algebras over $k$.

Example 1.10.
$$\mathbb{R}[\mathbb{Z}/3] \cong \mathbb{R} \times \mathbb{C}$$
In general, if $G$ is abelian, then the numbers $d_i$ must all be 1 and the $D_i$ must be finite field extensions of $k$.
1.3.1. homomorphisms. In order to prove Maschke's theorem, we need to talk about homomorphisms of $G$-modules. We can define these to be the same as homomorphisms of $k[G]$-modules. Then the following is a proposition. (Or, we can take the following as the definition of a $G$-module homomorphism, in which case the proposition is that $G$-module homomorphisms are the same as homomorphisms of $k[G]$-modules.)

Proposition 1.11. Suppose that $V, W$ are $k[G]$-modules. Then a $k$-linear mapping $\phi : V \to W$ is a homomorphism of $k[G]$-modules if and only if it commutes with the action of $G$, i.e., if
$$\phi(\sigma v) = \sigma \phi(v)$$
for all $\sigma \in G$.
Proof. Any homomorphism of $k[G]$-modules will commute with the action of $k[G]$ and therefore with the action of $G \subseteq k[G]$. Conversely, if $\phi : V \to W$ commutes with the action of $G$ then, for any $\sum a_\sigma \sigma \in k[G]$, we have
$$\phi\left(\sum_{\sigma \in G} a_\sigma \sigma v\right) = \sum_{\sigma \in G} a_\sigma \phi(\sigma v) = \sum_{\sigma \in G} a_\sigma \sigma \phi(v) = \left(\sum_{\sigma \in G} a_\sigma \sigma\right) \phi(v)$$
So, $\phi$ is a homomorphism of $k[G]$-modules. $\square$
We also have the following Proposition/Definition of a $G$-submodule.

Proposition 1.12. A subset $W$ of a $G$-module $V$ over $k$ is a $k[G]$-submodule (and we call it a $G$-submodule) if and only if
(1) $W$ is a vector subspace of $V$ and
(2) $W$ is invariant under the action of $G$, i.e., $\sigma W \subseteq W$ for all $\sigma \in G$.
Proof of Maschke's Theorem. Suppose that $V$ is a finitely generated $G$-module and $W$ is any $G$-submodule of $V$. Then we want to show that $W$ is a direct summand of $V$. This is one of the characterizations of semisimple modules. This will prove that all f.g. $k[G]$-modules are semisimple and therefore $k[G]$ is a semisimple ring.

Since $W$ is a submodule of $V$, it is in particular a vector subspace of $V$. So, there is a linear projection map $\pi : V \to W$ so that $\pi|W = \operatorname{id}_W$. If $\pi$ is a homomorphism of $G$-modules, then $V = W \oplus \ker \pi$ and $W$ would split from $V$. So, we would be done. If $\pi$ is not a $G$-homomorphism, we can make it into a $G$-homomorphism by averaging over the group, i.e., by replacing it with
$$\phi = \frac{1}{n} \sum_{\sigma \in G} \sigma \pi \sigma^{-1}$$
First, I claim that $\phi|W = \operatorname{id}_W$. To see this take any $w \in W$. Then $\sigma^{-1} w \in W$. So, $\pi(\sigma^{-1} w) = \sigma^{-1} w$ and
$$\phi(w) = \frac{1}{n} \sum_{\sigma \in G} \sigma \pi(\sigma^{-1} w) = \frac{1}{n} \sum_{\sigma \in G} \sigma \sigma^{-1} w = w$$
Next I claim that $\phi$ is a homomorphism of $G$-modules. To show this take any $\tau \in G$ and $v \in V$. Then, substituting $\alpha = \tau^{-1}\sigma$,
$$\phi(\tau v) = \frac{1}{n} \sum_{\sigma \in G} \sigma \pi(\sigma^{-1} \tau v) = \frac{1}{n} \sum_{\alpha \in G} \tau \alpha \pi(\alpha^{-1} v) = \tau\, \frac{1}{n} \sum_{\alpha \in G} \alpha \pi(\alpha^{-1} v) = \tau \phi(v)$$
So, $\phi$ gives a splitting of $V$ as required. $\square$
1.3.2. R[Z/3]. I gave a long-winded explanation of Example 1.10 using the universal property of the group ring $k[G]$. In these notes, I will just summarize this property in one equation. If $R$ is any $k$-algebra and $U(R)$ is the group of units of $R$, then:
$$\operatorname{Hom}_{k\text{-alg}}(k[G], R) \cong \operatorname{Hom}_{\mathrm{grp}}(G, U(R))$$
The isomorphism is given by restriction and linear extension.

The isomorphism $\mathbb{R}[\mathbb{Z}/3] \cong \mathbb{R} \times \mathbb{C}$ is given by the mapping:
$$\rho : \mathbb{Z}/3 \to \mathbb{R} \times \mathbb{C}$$
which sends the generator $\sigma$ to $(1, \omega)$ where $\omega$ is a primitive third root of unity. Since $(1, 1), (1, \omega), (1, \omega^2)$ are linearly independent over $\mathbb{R}$, the linear extension of $\rho$ is an isomorphism of $\mathbb{R}$-algebras.
1.3.3. group rings over C. We will specialize to the case $k = \mathbb{C}$. In that case, there are no finite dimensional division algebras over $\mathbb{C}$ other than $\mathbb{C}$ itself (Part C, Theorem 3.12). So, we get only matrix algebras:

Corollary 1.13. If $G$ is any finite group then
$$\mathbb{C}[G] \cong \operatorname{Mat}_{d_1}(\mathbb{C}) \times \cdots \times \operatorname{Mat}_{d_b}(\mathbb{C})$$
In particular, $n = |G| = \sum d_i^2$.

Example 1.14. If $G$ is a finite abelian group of order $n$ then $\mathbb{C}[G] \cong \mathbb{C}^n$.

Example 1.15. Take $G = S_3$, the symmetric group on 3 letters. Since this group is nonabelian, the numbers $d_i$ cannot all be equal to 1. But the only way that 6 can be written as a sum of squares, not all 1, is $6 = 1 + 1 + 4$. Therefore,
$$\mathbb{C}[S_3] \cong \mathbb{C} \times \mathbb{C} \times \operatorname{Mat}_2(\mathbb{C})$$
This can be viewed as a subalgebra of $\operatorname{Mat}_4(\mathbb{C})$ given by
$$\begin{pmatrix} * & 0 & 0 & 0 \\ 0 & * & 0 & 0 \\ 0 & 0 & * & * \\ 0 & 0 & * & * \end{pmatrix}$$
Each star ($*$) represents an independent complex variable. In this description, it is easy to visualize the simple factors $\operatorname{Mat}_{d_i}(\mathbb{C})$ given by the Wedderburn structure theorem. But what are the corresponding factors of the group ring $\mathbb{C}[G]$?
1.4. idempotents. Suppose that $R = R_1 \times R_2 \times R_3$ is a product of three subrings. Then the unity of $R$ decomposes as $1 = (1, 1, 1)$. This can be written as a sum of unit vectors:
$$1 = (1, 0, 0) + (0, 1, 0) + (0, 0, 1) = e_1 + e_2 + e_3$$
This is a decomposition of unity (1) as a sum of central, orthogonal idempotents $e_i$.

Recall that idempotent means that $e_i^2 = e_i$ for all $i$. Also, 0 is not considered to be an idempotent. Orthogonal means that $e_i e_j = 0$ if $i \neq j$. Central means that $e_i \in Z(R)$.

Theorem 1.16. A ring $R$ can be written as a product of $b$ subrings $R_1, R_2, \ldots, R_b$ iff $1 \in R$ can be written as a sum of $b$ central, orthogonal idempotents and, in that case, $R_i = e_i R$.

A central idempotent $e$ is called primitive if it cannot be written as a sum of two central orthogonal idempotents.

Corollary 1.17. The number of factors $R_i = e_i R$ is maximal iff each $e_i$ is primitive.

So, the problem is to write unity $1 \in \mathbb{C}[G]$ as a sum of primitive, central (and orthogonal) idempotents. We will derive a formula for this decomposition using characters.
1.5. Center of C[G]. Before I move on to characters, I want to prove one last thing about the group ring $\mathbb{C}[G]$.

Theorem 1.18. The number of factors $b$ in the decomposition
$$\mathbb{C}[G] \cong \prod_{i=1}^{b} \operatorname{Mat}_{d_i}(\mathbb{C})$$
is equal to the number of conjugacy classes of elements of $G$.

For example, the group $S_3$ has three conjugacy classes: the identity $\{1\}$, the transpositions $\{(12), (23), (13)\}$ and the 3-cycles $\{(123), (132)\}$.

In order to prove this we note that $b$ is the dimension of the center of the right hand side. Any central element of $\operatorname{Mat}_{d_i}(\mathbb{C})$ is a scalar multiple of the unit matrix, which we are calling $e_i$ (the $i$th primitive central idempotent). Therefore:

Lemma 1.19. The center of $\prod_{i=1}^{b} \operatorname{Mat}_{d_i}(\mathbb{C})$ is the vector subspace spanned by the primitive central idempotents $e_1, \ldots, e_b$. In particular it is $b$-dimensional.
So, it suffices to show that the dimension of the center of $\mathbb{C}[G]$ is equal to the number of conjugacy classes of elements of $G$. (If $G$ is abelian, this is clearly true.)

Definition 1.20. A class function on $G$ is a function $f : G \to X$ so that $f$ takes the same value on conjugate elements, i.e.,
$$f(\tau \sigma \tau^{-1}) = f(\sigma)$$
for all $\sigma, \tau \in G$. Usually, $X = \mathbb{C}$.

For example, any function on an abelian group is a class function.

Lemma 1.21. For any field $k$, the center of $k[G]$ is the set of all $\sum_{\sigma \in G} a_\sigma \sigma$ so that $\sigma \mapsto a_\sigma$ is a class function on $G$. So, $Z(k[G]) \cong k^c$ where $c$ is the number of conjugacy classes of elements of $G$.

Proof. If $\sum_{\sigma \in G} a_\sigma \sigma$ is central then
$$\sum_{\sigma \in G} a_\sigma \sigma = \tau \left( \sum_{\sigma \in G} a_\sigma \sigma \right) \tau^{-1} = \sum_{\sigma \in G} a_\sigma\, \tau \sigma \tau^{-1}$$
The coefficient of $\tau\sigma\tau^{-1}$ on both sides must agree. So
$$a_{\tau\sigma\tau^{-1}} = a_\sigma$$
I.e., $\sigma \mapsto a_\sigma$ is a class function. The converse is also clear. $\square$

These two lemmas clearly imply Theorem 1.18 (which can now be stated as: $b = c$ if $k = \mathbb{C}$).
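Lemma 1.21 is easy to test by direct computation. A minimal sketch (not from the notes, in Python), representing group-ring elements of $k[S_3]$ as coefficient dictionaries: the class sum of the three transpositions has class-function coefficients, so it should commute with every group element.

```python
from itertools import permutations

# S3 as tuples: p[i] is the image of i. Composition (p*q)(i) = p[q[i]].
G = list(permutations(range(3)))
def comp(p, q):
    return tuple(p[q[i]] for i in range(3))

def conv(a, b):
    """Product in the group ring k[S3] (convolution of coefficients)."""
    out = {}
    for p, x in a.items():
        for q, y in b.items():
            pq = comp(p, q)
            out[pq] = out.get(pq, 0.0) + x * y
    return out

# The class sum of the three transpositions (permutations moving exactly two
# points) has coefficients forming a class function, so it is central.
transpositions = [p for p in G if sum(p[i] != i for i in range(3)) == 2]
z = {t: 1.0 for t in transpositions}

for g in G:
    delta = {g: 1.0}
    left, right = conv(delta, z), conv(z, delta)
    assert all(abs(left.get(p, 0) - right.get(p, 0)) < 1e-12 for p in G)
print("class sum of transpositions is central in k[S3]")
```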
2. Characters

If $\rho : G \to GL_d(\mathbb{C})$ is a representation of $G$ over $\mathbb{C}$ then the character of $\rho$ is the function
$$\chi_\rho : G \to \mathbb{C}$$
given by $\chi_\rho(\sigma) = \operatorname{Tr}(\rho(\sigma))$.

The main property of characters is that they determine the representation uniquely up to isomorphism. So, once we find all the characters (by constructing the character table) we will in some sense know all the representations. We will assume that all groups $G$ are finite and all representations are finite dimensional over $\mathbb{C}$.
2.1. Basic properties. The basic property of trace is that it is invariant under conjugation:
$$\operatorname{Tr}(ABA^{-1}) = \operatorname{Tr}(B)$$
Letting $A = \rho(\tau)$, $B = \rho(\sigma)$ we get
$$\chi_\rho(\tau\sigma\tau^{-1}) = \operatorname{Tr}(\rho(\tau\sigma\tau^{-1})) = \operatorname{Tr}(\rho(\tau)\rho(\sigma)\rho(\tau)^{-1}) = \operatorname{Tr}(\rho(\sigma)) = \chi_\rho(\sigma)$$
for any representation $\rho$. So:

Theorem 2.1. Characters are class functions. (They have the same value on conjugate elements.)

If $\rho : G \to \operatorname{Aut}_{\mathbb{C}}(V)$ is a representation of $G$ over $\mathbb{C}$, then the character of $\rho$, also called the character of $V$, is defined to be the function
$$\chi = \chi_V : G \to \mathbb{C}$$
given by
$$\chi_V(\sigma) = \operatorname{Tr}(\rho(\sigma)) = \operatorname{Tr}(\psi\, \rho(\sigma)\, \psi^{-1})$$
for any linear isomorphism $\psi : V \to \mathbb{C}^d$.

There are three basic formulas that I want to explain. In order of difficulty they are:
(1) The character of a direct sum is the sum of the characters:
$$\chi_{V \oplus W} = \chi_V + \chi_W$$
(2) The character of a tensor product is the product of the characters:
$$\chi_{V \otimes W} = \chi_V \chi_W$$
(3) The character of the dual representation is the complex conjugate of the original character:
$$\chi_{V^*} = \overline{\chi_V}$$
2.1.1. direct sum. The trace of a direct sum of matrices is the sum of traces:
$$\operatorname{Tr}(A \oplus B) = \operatorname{Tr}\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} = \operatorname{Tr}(A) + \operatorname{Tr}(B)$$

Theorem 2.2. If $V, W$ are two $G$-modules then
$$\chi_{V \oplus W} = \chi_V + \chi_W$$

Proof. If $\rho_V, \rho_W, \rho_{V \oplus W}$ are the corresponding representations then
$$\rho_{V \oplus W}(\sigma) = \rho_V(\sigma) \oplus \rho_W(\sigma)$$
The theorem follows. $\square$
2.1.2. character formula using dual basis. Instead of using traces of matrices, I prefer the following equivalent formula for characters using bases and dual bases.

If $V$ is a $G$-module, we choose a basis $\{v_1, \ldots, v_d\}$ for $V$ as a vector space over $\mathbb{C}$. Then recall that the dual basis for $V^* = \operatorname{Hom}_{\mathbb{C}}(V, \mathbb{C})$ consists of the dual vectors $v_1^*, \ldots, v_d^* : V \to \mathbb{C}$ given by
$$v_j^*\left(\sum_{i=1}^{d} a_i v_i\right) = a_j$$
I.e., $v_j^*$ picks out the coefficient of $v_j$.

Proposition 2.3.
$$\chi_V(\sigma) = \sum_{i=1}^{d} v_i^*(\sigma v_i)$$

Proof. The matrix of the linear transformation $\rho(\sigma)$ has $(i, j)$ entry $v_i^*(\sigma v_j)$. Therefore, its trace is $\sum v_i^*(\sigma v_i)$. $\square$

For example, the trace of the identity map is
$$\operatorname{Tr}(\operatorname{id}_V) = \sum_{i=1}^{d} v_i^*(v_i) = d$$

Theorem 2.4. The value of the character at 1 is the dimension of the representation:
$$\chi_V(1) = d = \dim_{\mathbb{C}}(V)$$
2.1.3. tensor product. If $V, W$ are two $G$-modules then the tensor product $V \otimes W$ is defined to be the tensor product over $\mathbb{C}$ with the following action of $G$:
$$\sigma(v \otimes w) = \sigma v \otimes \sigma w$$

Theorem 2.5. The character of $V \otimes W$ is the product of the characters of $V$ and $W$, i.e.,
$$\chi_{V \otimes W}(\sigma) = \chi_V(\sigma)\chi_W(\sigma)$$
for all $\sigma \in G$.

Proof. Choose bases $\{v_i\}, \{w_j\}$ for $V, W$ with dual bases $\{v_i^*\}, \{w_j^*\}$. Then the tensor product $V \otimes W$ has basis elements $v_i \otimes w_j$ with dual basis elements $v_i^* \otimes w_j^*$. So, the character is:
$$\chi_{V \otimes W}(\sigma) = \sum_{i,j} (v_i^* \otimes w_j^*)(\sigma(v_i \otimes w_j)) = \sum_{i,j} (v_i^* \otimes w_j^*)(\sigma v_i \otimes \sigma w_j)$$
$$= \sum_{i,j} v_i^*(\sigma v_i)\, w_j^*(\sigma w_j) = \sum_i v_i^*(\sigma v_i) \sum_j w_j^*(\sigma w_j) = \chi_V(\sigma)\chi_W(\sigma) \qquad \square$$
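In matrix terms the tensor product of representations is the Kronecker product, and the trace of a Kronecker product is the product of the traces. A minimal check (not from the notes, in Python/NumPy) with two representations of $\mathbb{Z}/2$:

```python
import numpy as np

# Two representations of Z/2: V = C^2 with the coordinate-swap action,
# W = C^1 with the sign action. The tensor product acts by the Kronecker
# product of the two matrices.
V = {0: np.eye(2), 1: np.array([[0.0, 1.0], [1.0, 0.0]])}
W = {0: np.eye(1), 1: -np.eye(1)}

for g in (0, 1):
    chi_V = np.trace(V[g])
    chi_W = np.trace(W[g])
    chi_tensor = np.trace(np.kron(V[g], W[g]))
    # Theorem 2.5: chi_{V (x) W}(g) = chi_V(g) * chi_W(g)
    assert np.isclose(chi_tensor, chi_V * chi_W)
print("character of the tensor product = product of characters")
```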
2.1.4. dual representation. The dual space $V^*$ is a right $G$-module. In order to make it a left $G$-module we have to invert the elements of the group, i.e., for all $f \in V^*$ we define
$$(\sigma f)(v) := f(\sigma^{-1} v)$$

Lemma 2.6.
$$\chi_{V^*}(\sigma) = \chi_V(\sigma^{-1})$$

Lemma 2.7.
$$\chi_V(\sigma^{-1}) = \overline{\chi_V(\sigma)}$$

Proof. The trace of a matrix $A$ is equal to the sum of its eigenvalues $\lambda_i$. If $A$ has finite order, $A^m = I$, then its eigenvalues are roots of unity. Therefore, their inverses are equal to their complex conjugates. So,
$$\operatorname{Tr}(A^{-1}) = \sum \lambda_i^{-1} = \sum \overline{\lambda_i} = \overline{\operatorname{Tr}(A)}$$
Since $G$ is finite, the lemma follows. $\square$

Theorem 2.8. The character of the dual representation $V^*$ is the complex conjugate of the character of $V$:
$$\chi_{V^*}(\sigma) = \overline{\chi_V(\sigma)}$$
2.2. Irreducible characters.

Definition 2.9. A representation $\rho : G \to \operatorname{Aut}_{\mathbb{C}}(V)$ is called irreducible if $V$ is a simple $G$-module. The character
$$\chi = \chi_V : G \to \mathbb{C}$$
of an irreducible representation is called an irreducible character.

Theorem 2.10. Every character is a nonnegative integer linear combination of irreducible characters.

Proof. Since $\mathbb{C}[G]$ is semisimple, any $G$-module $V$ is a direct sum of simple modules $V = \bigoplus S_\alpha$. So, the character of $V$ is a sum of the corresponding irreducible characters: $\chi_V = \sum \chi_{S_\alpha}$. $\square$

If we collect together multiple copies of the same simple module we get $V \cong \bigoplus n_i S_i$ and
$$\chi_V = \sum_{i=1}^{r} n_i \chi_i$$
where $\chi_i$ is the character of $S_i$. This makes sense only if we know that there are only finitely many nonisomorphic simple modules $S_i$ and that the corresponding characters $\chi_i$ are distinct functions $G \to \mathbb{C}$. In fact we will show the following.

Theorem 2.11. (1) There are exactly $b$ (the number of blocks) irreducible representations $S_i$ up to isomorphism.
(2) The corresponding characters $\chi_i$ are linearly independent.

This will immediately imply the following.

Corollary 2.12. The irreducible characters $\chi_1, \ldots, \chi_b$ form a basis for the $b$-dimensional vector space of all class functions $G \to \mathbb{C}$.
2.2.1. regular representation. This is a particularly elementary representation and character which contains all the simple modules.

Definition 2.13. The group ring $\mathbb{C}[G]$, considered as a free left module over itself, is called the regular representation of $G$. The corresponding character is called the regular character: $\chi_{\mathrm{reg}} = \chi_{\mathbb{C}[G]} : G \to \mathbb{C}$.

Theorem 2.14.
$$\chi_{\mathrm{reg}}(\sigma) = \begin{cases} n = |G| & \text{if } \sigma = 1 \\ 0 & \text{if } \sigma \neq 1 \end{cases}$$

Proof. I used the basis–dual basis formula for characters. The regular representation $V = \mathbb{C}[G]$ has basis elements $\sigma \in G$ and dual basis elements $\sigma^*$ given by
$$\sigma^*\left(\sum a_\tau \tau\right) = a_\sigma$$
I.e., $\sigma^*(x)$ is the coefficient of $\sigma$ in the expansion of $x$. The regular character is then given by
$$\chi_{\mathrm{reg}}(\tau) = \sum_{\sigma \in G} \sigma^*(\tau\sigma)$$
But this is clearly equal to 0 if $\tau \neq 1$ since the coefficient of $\sigma$ in $\tau\sigma$ is 0. And we already know that $\chi_{\mathrm{reg}}(1) = \dim \mathbb{C}[G] = n$. $\square$
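Theorem 2.14 can be confirmed by writing out the permutation matrices of the regular representation. A minimal sketch (not from the notes, in Python/NumPy) for $G = S_3$:

```python
import numpy as np
from itertools import permutations

# The regular representation of S3: each group element permutes the basis
# {e_sigma : sigma in G} by left multiplication, tau . e_sigma = e_{tau sigma}.
G = list(permutations(range(3)))
index = {g: i for i, g in enumerate(G)}
def comp(p, q):
    return tuple(p[q[i]] for i in range(3))

def reg_matrix(tau):
    M = np.zeros((6, 6))
    for sigma in G:
        M[index[comp(tau, sigma)], index[sigma]] = 1.0
    return M

# chi_reg(tau) counts the sigma with tau*sigma = sigma, i.e. n at the
# identity and 0 everywhere else.
for tau in G:
    chi = np.trace(reg_matrix(tau))
    assert chi == (6.0 if tau == (0, 1, 2) else 0.0)
print("chi_reg is n at the identity and 0 elsewhere, as in Theorem 2.14")
```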
Lemma 2.15. There are only finitely many isomorphism classes of simple $G$-modules.

Proof. First choose a decomposition of the regular representation into simple modules:
$$\mathbb{C}[G] \cong \bigoplus S_\alpha$$
Then I claim that any simple module $S$ is isomorphic to one of the $S_\alpha$ in this decomposition. And this will prove the lemma.

To prove the claim, choose any nonzero element $x_0 \in S$. Then $x_0$ generates $S$ (the submodule generated by $x_0$ is either 0 or $S$). Therefore, we have an epimorphism
$$\phi : \mathbb{C}[G] \to S$$
given by $\phi(r) = r x_0$. When we restrict $\phi$ to each simple component $S_\alpha$, we get a homomorphism $\phi_\alpha : S_\alpha \to S$ which, by Schur's lemma, must either be zero or an isomorphism. These restrictions cannot all be zero since $\phi$ is an epimorphism. Therefore, one of them is an isomorphism $S_\alpha \cong S$. This proves the claim. $\square$

This proof shows more than the lemma states. It proves:

Lemma 2.16. The regular representation $\mathbb{C}[G]$ contains an isomorphic copy of every simple $G$-module.

Therefore, in order to find all the irreducible representations, we need to decompose the regular representation as a sum of simple modules.
2.2.2. decomposition of the regular representation. At this point I used the Wedderburn structure theorem again:
$$\mathbb{C}[G] \cong \prod_{i=1}^{b} \operatorname{Mat}_{d_i}(\mathbb{C}) = \prod R_i$$
where, following Lang, we write $R_i = \operatorname{Mat}_{d_i}(\mathbb{C})$.

Let $S_i = \mathbb{C}^{d_i}$ be the vector space of column vectors. Then $R_i$ acts on the left by matrix multiplication and it is easy to see that $S_i$ is a simple $R_i$-module since it is generated by any nonzero element.
Lemma 2.17. If $\phi : R' \to R$ is an epimorphism of rings and $S$ is any simple $R$-module, then $S$ becomes a simple $R'$-module with the action of $R'$ induced by $\phi$.

Proof. If $x_0 \in S$ is any nonzero element then $R' x_0 = R x_0 = S$. So, any nonzero element of $S$ generates the whole thing as an $R'$-module. So, it is simple. $\square$

Since $\mathbb{C}[G] \cong \prod R_i$, we can make $S_i$ into a $G$-module with the ring homomorphism:
$$\tilde\rho_i : \mathbb{C}[G] \twoheadrightarrow R_i \to \operatorname{End}_{\mathbb{C}}(S_i)$$
Since the projection $\mathbb{C}[G] \to R_i$ is an epimorphism, $S_i$ becomes a simple $G$-module. In other words, the corresponding representation is irreducible:
$$\rho_i : G \to \operatorname{Aut}_{\mathbb{C}}(S_i)$$
Also, Lang points out that
$$R_j S_i = 0$$
if $i \neq j$. (And $R_i S_i = S_i$.) This is the key point. It shows immediately that the $G$-modules $S_i$ are not isomorphic. And it will also show that the characters are linearly independent.
In order to show that the characters
$$\chi_i = \chi_{\rho_i} = \chi_{S_i}$$
are linearly independent we will evaluate them on the central idempotents $e_i$ corresponding to the decomposition $\mathbb{C}[G] \cong \prod R_i$. As we discussed earlier, this product decomposition gives a decomposition of unity:
$$1 = e_1 + \cdots + e_b$$
where $e_i$ is the unity of $R_i$. (We want to say $e_i = 1$ but there would be too many 1s.) We then need to compute $\chi_i(e_j)$. But this is not defined since $e_j$ is not an element of $G$. We need to extend $\chi_i$ to a map on $\mathbb{C}[G]$.
2.2.3. linear extension of characters. If $\chi : G \to \mathbb{C}$ is any character, we define the linear extension of $\chi$ to $\mathbb{C}[G]$ by the formula
$$\chi\left(\sum a_\sigma \sigma\right) = \sum a_\sigma \chi(\sigma)$$
Since the symbol $\overline\chi$ is already taken ($\overline\chi$ is the complex conjugate of $\chi$), I decided to use the same symbol $\chi$ to denote the linear extension of $\chi$ given by the above formula.
The linear extension of $\chi_\rho$ is $\chi_{\tilde\rho}$, which is the trace of the linear extension of $\rho$. To see this let $x = \sum a_\sigma \sigma$. Then
$$\sum a_\sigma \chi_\rho(\sigma) = \sum a_\sigma \operatorname{Tr}(\rho(\sigma)) = \operatorname{Tr}\left(\sum a_\sigma \rho(\sigma)\right) = \operatorname{Tr}(\tilde\rho(x))$$

Lemma 2.18.
$$\chi_i(e_j) = \begin{cases} 0 & \text{if } i \neq j \\ d_i & \text{if } i = j \end{cases}$$

Proof. If $i \neq j$ we have
$$\chi_i(e_j) = \operatorname{Tr}(\tilde\rho_i(e_j)) = 0$$
since $\tilde\rho_i(e_j)$ is the zero matrix (giving the action of $e_j \in R_j$ on $S_i$). If $i = j$ then
$$\chi_i(e_i) = \dim S_i = d_i$$
since $e_i$ is the unity of $R_i$. $\square$

This proves the second part of Theorem 2.11: If $\sum a_i \chi_i = 0$ then
$$\sum a_i \chi_i(e_j) = a_j d_j = 0$$
which forces $a_j = 0$ for all $j$.
Theorem 2.19. The regular representation decomposes as:
$$\mathbb{C}[G] \cong \bigoplus_{i=1}^{b} d_i S_i$$

Proof. The $i$th block of the Wedderburn decomposition is a $d_i \times d_i$ matrix algebra which, as a left module over itself, decomposes into its $d_i$ columns, i.e., into a direct sum of $d_i$ copies of the simple module $S_i$. $\square$
2.2.4. example. Take $G = S_3$. Then we already saw that
$$\mathbb{C}[S_3] \cong \mathbb{C} \times \mathbb{C} \times \operatorname{Mat}_2(\mathbb{C})$$
So, there are three simple modules $S_1, S_2, S_3$:
$S_1 = \mathbb{C}$ is the trivial representation.
$S_2$ is the sign representation, $\rho_2(\sigma) = \operatorname{sgn}(\sigma) = \pm 1$.
$S_3$ is a simple 2-dimensional module.

Since characters are class functions, their value is the same on conjugate elements. So, we only need their values on the representatives $1, (12), (123)$. The characters $\chi_1, \chi_2$ are easy to compute. The last irreducible character is determined by the equation
$$\chi_{\mathrm{reg}} = \chi_1 + \chi_2 + 2\chi_3$$
So, here is the character table of $S_3$:

              |  1   (12)  (123)
      chi_1   |  1    1      1
      chi_2   |  1   -1      1
      chi_3   |  2    0     -1
      chi_reg |  6    0      0

All characters of $S_3$ are nonnegative integer linear combinations of $\chi_1, \chi_2, \chi_3$.
2.3. formula for idempotents. Lang gives a formula for the idempotents $e_i \in \mathbb{C}[G]$ in terms of the corresponding irreducible character $\chi_i$. The key point is that the linear extension
$$\tilde\rho_i : \mathbb{C}[G] \to \operatorname{End}_{\mathbb{C}}(S_i) = \operatorname{Mat}_{d_i}(\mathbb{C})$$
of $\rho_i$ sends $e_i$ to the identity matrix. Therefore,
$$\tilde\rho_i(e_i \sigma) = \tilde\rho_i(e_i)\rho_i(\sigma) = \rho_i(\sigma)$$
Also, $\tilde\rho_i(e_j \sigma)$ is the zero matrix if $i \neq j$. Therefore,
$$\chi_i(e_j \sigma) = \begin{cases} \chi_i(\sigma) & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$
Now use the regular character. If $e_i = \sum a_\tau \tau$ then
$$a_\tau = \frac{1}{n}\, \chi_{\mathrm{reg}}(e_i \tau^{-1}) = \frac{1}{n} \sum_j d_j \chi_j(e_i \tau^{-1}) = \frac{d_i}{n}\, \chi_i(\tau^{-1})$$

Theorem 2.20.
$$e_i = \frac{d_i}{n} \sum_{\sigma \in G} \chi_i(\sigma^{-1})\, \sigma$$
This formula has an important consequence.

Corollary 2.21. $d_i \mid n$. (Each $d_i$ divides $n = |G|$.)

Proof. First recall that $e_i$ is an idempotent. So,
$$e_i = e_i^2 = \frac{d_i}{n} \sum_{\sigma \in G} \chi_i(\sigma^{-1})\, \sigma e_i$$
Now multiply by $n/d_i$ to get:
$$(2.1) \qquad \frac{n}{d_i}\, e_i = \sum_{\sigma \in G} \chi_i(\sigma^{-1})\, \sigma e_i$$
I mentioned earlier that $\chi_i(\sigma^{-1}) = \sum \lambda_j$ is a sum of $m$th roots of unity where $m = o(\sigma^{-1}) = o(\sigma)$. But this number divides $n = |G|$. So, each $\lambda_j$ is a power of $\zeta = e^{2\pi i/n}$.

Let $M_i \subseteq \mathbb{C}[G]$ be the additive subgroup generated by all elements of the form $\zeta^j \sigma e_i$ (for all $j$ and all $\sigma \in G$, with $i$ fixed). This is a finitely generated torsion-free (and thus free) $\mathbb{Z}$-module and equation (2.1) shows that $M_i$ is invariant under multiplication by the rational number $n/d_i$. Therefore, $n/d_i$ is integral. Since $\mathbb{Z}$ is integrally closed in $\mathbb{Q}$ this implies that $n/d_i \in \mathbb{Z}$. $\square$
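Theorem 2.20 can be checked exactly. A minimal sketch (not from the notes, in Python with exact rational arithmetic), assuming the character table of $S_3$ computed in section 2.2.4: the formula produces three orthogonal central idempotents summing to 1.

```python
from itertools import permutations
from fractions import Fraction

# S3 and its group-ring multiplication.
G = list(permutations(range(3)))
def comp(p, q):
    return tuple(p[q[i]] for i in range(3))
def inv(p):
    return tuple(p.index(i) for i in range(3))
def conv(a, b):
    out = {}
    for p, x in a.items():
        for q, y in b.items():
            pq = comp(p, q)
            out[pq] = out.get(pq, 0) + x * y
    return out

def cls(p):
    """Conjugacy class of p: 0 = identity, 1 = transpositions, 2 = 3-cycles."""
    fixed = sum(p[i] == i for i in range(3))
    return {3: 0, 1: 1, 0: 2}[fixed]

# (d_i, values of chi_i on the three classes), from the table in 2.2.4.
chars = [(1, [1, 1, 1]), (1, [1, -1, 1]), (2, [2, 0, -1])]

# e_i = (d_i / n) * sum_sigma chi_i(sigma^{-1}) sigma      (Theorem 2.20)
n = 6
idem = [{s: Fraction(d * chi[cls(inv(s))], n) for s in G} for d, chi in chars]

# Check: central orthogonal idempotents summing to 1.
for i, e in enumerate(idem):
    for j, f in enumerate(idem):
        prod = conv(e, f)
        target = e if i == j else {}
        assert all(prod.get(s, 0) == target.get(s, 0) for s in G)
total = {s: sum(e.get(s, 0) for e in idem) for s in G}
assert total == {s: (1 if s == (0, 1, 2) else 0) for s in G}
print("e_1, e_2, e_3 are orthogonal idempotents with e_1 + e_2 + e_3 = 1")
```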
2.4. character tables. I decided to construct some character tables (as I did for $G = S_3$) and explain properties of characters using the examples. The character table is defined to be the $b \times b$ matrix with entries $\chi_i(c_j)$ where $c_j$ is the $j$th conjugacy class. The characters are usually arranged in order of degree $d_i$ with $\chi_1$ being the trivial character. The conjugacy classes are arranged arbitrarily with $c_1 = \{1\}$. So, the character table looks like this:

              |  1    c_2   c_3  ...  c_b
      chi_1   |  1     1     1   ...   1
      chi_2   | d_2
      chi_3   | d_3        chi_i(c_j)
       ...    | ...
      chi_b   | d_b
2.4.1. one-dimensional characters. The case $d = 1$ is very special. First of all, any one-dimensional representation of $G$ is irreducible. So, it is one of the $\rho_i$. Here are all the things I pointed out:

Proposition 2.22. Suppose that $d_i = 1$. Then
(1) $\chi_i = \rho_i$: The character is the representation.
(2) $\chi_i(\sigma)$ is an $m$th root of unity where $m = o(\sigma)$.
(3) $\chi_i(\sigma\tau) = \chi_i(\sigma)\chi_i(\tau)$.

Proof. This hardly needs proof. When $d_i = 1$, the representation is:
$$\rho_i : G \to \operatorname{Aut}_{\mathbb{C}}(S_i) = GL_1(\mathbb{C}) = \mathbb{C}^*$$
The trace of a $1 \times 1$ matrix is equal to the matrix itself. So, $\chi_i(\sigma) = \rho_i(\sigma)$. Since $\rho_i$ is a homomorphism, so is $\chi_i$. This means $\chi_i$ is multiplicative. Also, $\sigma^m = 1$ implies that $\chi_i(\sigma)^m = 1$. $\square$
2.4.2. example: Z/3. Since $\mathbb{Z}/3 = \{1, \sigma, \sigma^2\}$ is an abelian group we have $b = c = n = 3$. Every element is its own conjugacy class. Also, all blocks have size $d_i = 1$. This gives the following partial character table.

              |  1   sigma    sigma^2
      chi_1   |  1     1         1
      chi_2   |  1
      chi_3   |  1

From our discussion of one-dimensional characters we know that each $\chi_i(\sigma)$ is a third root of unity:
$$\chi_i(\sigma) = 1, \omega, \omega^2$$
and $\chi_i(\sigma^2) = \chi_i(\sigma)^2 = 1, \omega^2, \omega$, respectively. So, the complete character table is:

              |  1   sigma    sigma^2
      chi_1   |  1     1         1
      chi_2   |  1   omega     omega^2
      chi_3   |  1   omega^2   omega
2.4.3. example: Z/2 × Z/2. Let's call the elements of the group $1, \sigma, \tau, \sigma\tau$. Since $\mathbb{Z}/2 \times \mathbb{Z}/2$ is abelian, all characters are again one-dimensional and the values must be square roots of 1, i.e., they must be $\pm 1$. So, we get the following.

              |  1   sigma   tau   sigma tau
      chi_1   |  1     1      1        1
      chi_2   |  1     1     -1       -1
      chi_3   |  1    -1      1       -1
      chi_4   |  1    -1     -1        1

Each row is clearly a one-dimensional representation. There are no others because we know that there are exactly $b = 4$ such representations. So, this is the complete character table.
2.4.4. example: D4. This is the dihedral group of order 8 with presentation:
$$D_4 = \langle \sigma, \tau \mid \sigma^4, \tau^2, \sigma\tau\sigma\tau \rangle$$
(Replace 4 by any $n$ to get the dihedral group of order $2n$.) To find the numbers $d_i$ we have to write $n = 8$ as a sum of squares which are not all 1 (because $D_4$ is nonabelian) and so that there is at least one 1 (since $d_1 = 1$). The solution is:
$$8 = 1 + 1 + 1 + 1 + 4$$
Therefore, $b = c = 5$.

The elements of the group are:
$$D_4 = \{1, \sigma, \sigma^2, \sigma^3, \tau, \sigma\tau, \sigma^2\tau, \sigma^3\tau\}$$
Among these, $\sigma, \sigma^3$ are conjugate since $\tau\sigma\tau^{-1} = \sigma^3$; $\tau, \sigma^2\tau = \sigma\tau\sigma^{-1}$ are conjugate; and $\sigma\tau, \sigma^3\tau = \sigma(\sigma\tau)\sigma^{-1}$ are conjugate. There are no other conjugacy relations since we got it down to 5 classes.

Among the 5 characters, the first 4 are 1-dimensional. And we can find them very quickly as follows. The center of $D_4$ is the set of elements which are alone in their conjugacy class. So,
$$Z(D_4) = \{1, \sigma^2\}$$
This is a normal subgroup of $D_4$ with quotient isomorphic to $\mathbb{Z}/2 \times \mathbb{Z}/2$. We already have four irreducible representations $\chi_1, \ldots, \chi_4$ of $\mathbb{Z}/2 \times \mathbb{Z}/2$. We can compose with the projection to get four irreducible representations of $D_4$:
$$D_4 \to D_4/Z \xrightarrow{\;\chi_i\;} \mathbb{C}^*$$
This gives the first four lines in the character table:

              |  1   sigma^2   sigma   tau   sigma tau
      chi_1   |  1      1        1      1        1
      chi_2   |  1      1        1     -1       -1
      chi_3   |  1      1       -1      1       -1
      chi_4   |  1      1       -1     -1        1
      chi_5   |  2     -2        0      0        0

To get the last line we use the equation:
$$\chi_{\mathrm{reg}} = \sum d_i \chi_i = \chi_1 + \chi_2 + \chi_3 + \chi_4 + 2\chi_5$$
2.4.5. kernel of a representation. Looking at the character table, we can determine which elements of the group lie in the kernel of each representation.

Lemma 2.23. $\sigma \in \ker(\rho)$ iff $\chi(\sigma) = d = \chi(1)$.

Proof. In a $d$-dimensional representation, $\chi(\sigma) = \lambda_1 + \cdots + \lambda_d$ is a sum of $d$ roots of unity. This sum is equal to $d$ if and only if every $\lambda_i = 1$, which is equivalent to saying that $\rho(\sigma)$ is the identity matrix (since $\rho(\sigma)$ has finite order). $\square$

Using the same argument it follows that:

Proposition 2.24. $|\chi(\sigma)| = d$ if and only if $\rho(\sigma) = \zeta I_d$ is a scalar multiple of the identity matrix. Furthermore, $\zeta = \chi(\sigma)/d$.

For example, in the last irreducible representation of $D_4$ we have
$$|\chi_5(\sigma^2)| = |-2| = 2 = d_5$$
Therefore, $\rho_5(\sigma^2) = -I_2$.
2.4.6. finding all normal subgroups. Finally, I claimed that the character table determines all normal subgroups of the group $G$. This is based on the trick that we used to construct the character table of $D_4$. Suppose that $N$ is a normal subgroup of $G$ and $\rho_i$, $i = 1, \ldots, r$ are the irreducible representations of $G/N$.

Lemma 2.25.
$$N = \bigcap \ker(\rho_i \circ \pi)$$
where $\pi : G \to G/N$ is the quotient map.

Proof. Let $K = \bigcap \ker(\rho_i \circ \pi)$. Then clearly, $N \subseteq K$. So, suppose that $K$ is bigger than $N$. Then the representations $\rho_i$ would all factor through the quotient $G/K$:
$$\rho_i : G/N \to G/K \xrightarrow{\;\bar\rho_i\;} \operatorname{Aut}_{\mathbb{C}}(S_i)$$
This is not possible because the sum of the squares of the dimensions of these representations adds up to the order of $G/N$:
$$|G/K| < |G/N| = \sum d_i^2$$
So, the $\bar\rho_i$ are distinct irreducible representations of $G/K$ whose dimensions squared add up to more than the order of the group. This contradiction proves the lemma. $\square$

Combining Lemmas 2.25 and 2.23, we get the following.

Theorem 2.26. The normal subgroups of a finite group $G$ can be determined from its character table as follows.
(1) The kernel of $\rho_i$ is the union of all conjugacy classes $c_j$ for which $\chi_i(c_j) = d_i = \chi_i(1)$.
(2) A collection of conjugacy classes forms a normal subgroup if and only if it is an intersection of kernels of irreducible representations $\rho_i$.
2.5. orthogonality relations. The character table satisfies two orthogonality relations:
(1) row orthogonality
(2) column orthogonality
First, I will do row orthogonality. The rows are the characters $\chi_i$. We want to show that they are orthogonal in some sense.

2.5.1. main theorem and consequences.

Definition 2.27. If $f, g : G \to \mathbb{C}$ are class functions then we define $\langle f, g \rangle \in \mathbb{C}$ by
$$\langle f, g \rangle = \langle g, f \rangle = \frac{1}{n} \sum_{\sigma \in G} f(\sigma) g(\sigma^{-1})$$
The main theorem is the following.

Theorem 2.28. If $V, W$ are $G$-modules then
$$\langle \chi_V, \chi_W \rangle = \dim_{\mathbb{C}} \operatorname{Hom}_G(V, W)$$
Before I prove this let me explain the consequences.

Corollary 2.29. The rows of the character table are orthonormal in the sense that:
$$\langle \chi_i, \chi_j \rangle = \delta_{ij}$$

Proof. It follows from Schur's lemma that
$$\langle \chi_i, \chi_j \rangle = \dim_{\mathbb{C}} \operatorname{Hom}_G(S_i, S_j) = \delta_{ij}$$
since $\operatorname{Hom}_G(S_i, S_j) = 0$ for $i \neq j$ and $\operatorname{Hom}_G(S_i, S_i) = \mathbb{C}$. $\square$

Since only conjugacy classes appear in the character table we have:
$$\langle \chi_i, \chi_j \rangle = \sum_{k=1}^{b} \frac{|c_k|}{n}\, \chi_i(c_k)\, \chi_j(c_k^{-1})$$
For example, for $G = S_3$ we have the character table:

      |c_j|   |  1    3     2
              |  1   (12)  (123)
      chi_1   |  1    1      1
      chi_2   |  1   -1      1
      chi_3   |  2    0     -1

$$\langle \chi_1, \chi_2 \rangle = \frac{(1)(1) + 3(1)(-1) + 2(1)(1)}{6} = \frac{1 - 3 + 2}{6} = 0$$
This formula also tells us that a representation is determined by its character in the following way.
Corollary 2.30. Suppose that the semisimple decomposition of the $G$-module $V$ is $V \cong \bigoplus n_i S_i$. Then
$$n_i = \langle \chi_V, \chi_i \rangle$$

Proof. Since $\chi_{V \oplus W} = \chi_V + \chi_W$, we have $\chi_V = \sum n_j \chi_j$. So,
$$\langle \chi_V, \chi_i \rangle = \left\langle \sum n_j \chi_j, \chi_i \right\rangle = n_i \qquad \square$$
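Corollaries 2.29 and 2.30 can be checked directly from the $S_3$ table above. A minimal sketch (not from the notes, in Python with exact arithmetic); since the characters of $S_3$ are real and every element is conjugate to its inverse, the inner product reduces to a weighted sum over conjugacy classes. The permutation character of $S_3$ on 3 letters (which counts fixed points) decomposes as trivial plus the 2-dimensional irreducible.

```python
from fractions import Fraction

# Class data for S3: class sizes and character-table rows on (1, (12), (123)).
n = 6
sizes = [1, 3, 2]
chi = {1: [1, 1, 1], 2: [1, -1, 1], 3: [2, 0, -1]}

def inner(f, g):
    """<f, g> = (1/n) sum_sigma f(sigma) g(sigma^{-1}), reduced to a sum over
    classes; valid here because these characters are real-valued and every
    element of S3 is conjugate to its inverse."""
    return sum(Fraction(c * x * y, n) for c, x, y in zip(sizes, f, g))

# Rows of the character table are orthonormal (Corollary 2.29):
for i in chi:
    for j in chi:
        assert inner(chi[i], chi[j]) == (1 if i == j else 0)

# The permutation character of S3 on 3 letters takes values (3, 1, 0):
chi_P = [3, 1, 0]
multiplicities = [inner(chi_P, chi[i]) for i in (1, 2, 3)]
print("P decomposes with multiplicities", multiplicities)
```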
2.5.2. proof of the main theorem. The theorem will follow from three lemmas. The first lemma calculates the dimension of the fixed point set of $V$.

Definition 2.31. If $V$ is a $G$-module then the fixed point set of the action of $G$ is given by
$$V^G := \{v \in V \mid \sigma v = v\ \forall \sigma \in G\}$$

Lemma 2.32. The dimension of the fixed point set is equal to the average value of the corresponding character:
$$\dim_{\mathbb{C}} V^G = \frac{1}{n} \sum_{\sigma \in G} \chi_V(\sigma)$$

Proof. The projection map
$$\pi : V \to V^G$$
is given by
$$\pi(v) = \frac{1}{n} \sum_{\sigma} \sigma v$$
It is clear that
(1) $\pi(v) \in V^G$ since multiplication by any $\tau \in G$ will just permute the summands.
(2) $\pi(v) = v$ if $v \in V^G$ because, in that case, each $\sigma v = v$ and there are $n$ terms.
Therefore, $\pi$ is a projection map, i.e., a linear retraction onto $V^G$.

Looking at the formula we see that $\pi$ is multiplication by the idempotent $e_1 = \frac{1}{n} \sum_{\sigma \in G} \sigma$. (This is the idempotent corresponding to the trivial representation.) So:
$$\dim V^G = \operatorname{Tr}(\pi) = \chi_V(e_1) = \chi_V\left(\frac{1}{n} \sum_{\sigma \in G} \sigma\right) = \frac{1}{n} \sum_{\sigma \in G} \chi_V(\sigma)$$
Explanations:
(1) $\dim V^G = \operatorname{Tr}(\pi)$ because $V = V^G \oplus W$ ($W = \ker \pi$). So, the matrix of $\pi$ is:
$$\pi = \begin{pmatrix} 1_{V^G} & 0 \\ 0 & 0_W \end{pmatrix}$$
making $\operatorname{Tr}(\pi) = \operatorname{Tr}(1_{V^G}) = \dim_{\mathbb{C}} V^G$.
(2) $\operatorname{Tr}(\pi) = \chi_V(e_1)$ by definition of the character:
$$\chi_V(e_1) := \operatorname{Tr}(e_1 : V \to V)$$
This is the trace of the mapping $V \to V$ given by multiplication by $e_1$. But we are calling that mapping $\pi$. $\square$
Lemma 2.33. If $V, W$ are representations of $G$ then
$$\operatorname{Hom}_G(V, W) = \operatorname{Hom}_{\mathbb{C}}(V, W)^G$$
where $G$ acts on $\operatorname{Hom}_{\mathbb{C}}(V, W)$ by conjugation, i.e., $\sigma \cdot f = \sigma f \sigma^{-1}$, which means that
$$(\sigma \cdot f)(v) = \sigma f(\sigma^{-1} v)$$

Proof. This is trivial. Given any linear map $f : V \to W$, $f$ is a $G$-homomorphism iff
$$f\sigma = \sigma f \iff \sigma f \sigma^{-1} = f \iff \sigma \cdot f = f$$
iff $f \in \operatorname{Hom}_{\mathbb{C}}(V, W)^G$. $\square$
Lemma 2.34. $\operatorname{Hom}_{\mathbb{C}}(V, W) \cong V^* \otimes W$ as $G$-modules.

Proof. Let $\phi : V^* \otimes W \to \operatorname{Hom}_{\mathbb{C}}(V, W)$ be given by
$$\phi(f \otimes w)(v) = f(v)w$$
To check that this is a $G$-homomorphism we need to show that $\phi\sigma = \sigma\phi$ for any $\sigma \in G$. So, we compute both sides:
$$\phi\sigma(f \otimes w) = \phi(\sigma f \otimes \sigma w)$$
which sends $v \in V$ to
$$(\sigma f)(v)\, \sigma w = f(\sigma^{-1} v)\, \sigma w$$
On the other side we have:
$$\sigma \cdot \phi(f \otimes w) = \sigma \circ \phi(f \otimes w) \circ \sigma^{-1}$$
which also sends $v \in V$ to
$$\sigma\left(\phi(f \otimes w)(\sigma^{-1} v)\right) = \sigma(f(\sigma^{-1} v)\, w) = f(\sigma^{-1} v)\, \sigma w$$
This shows that $\phi$ commutes with the action of $G$. The fact that $\phi$ is an isomorphism is well-known: If $v_i, v_i^*$ form a basis–dual basis pair for $V$ and $w_j$ form a basis for $W$ then the $v_j^* \otimes w_i$ form a basis for $V^* \otimes W$ and
$$\phi(v_j^* \otimes w_i) : v = \sum a_j v_j \mapsto v_j^*(v)\, w_i = a_j w_i$$
is the mapping whose matrix has $ij$-entry equal to 1 and all other entries 0. So, these homomorphisms form a basis for $\operatorname{Hom}_{\mathbb{C}}(V, W)$ and $\phi$ is an isomorphism. $\square$
Proof of main theorem 2.28. Using the three lemmas we get:
$$\dim_{\mathbb{C}} \operatorname{Hom}_G(V, W) \overset{2.33}{=} \dim_{\mathbb{C}} \operatorname{Hom}_{\mathbb{C}}(V, W)^G \overset{2.34}{=} \dim_{\mathbb{C}} (V^* \otimes W)^G$$
$$\overset{2.32}{=} \frac{1}{n} \sum_{\sigma \in G} \chi_{V^* \otimes W}(\sigma) = \frac{1}{n} \sum_{\sigma} \chi_{V^*}(\sigma)\chi_W(\sigma) = \frac{1}{n} \sum_{\sigma} \chi_V(\sigma^{-1})\chi_W(\sigma) = \langle \chi_V, \chi_W \rangle \qquad \square$$
2.5.3. character table of S4. Using these formulas we can calculate the character table for $S_4$. First note that there are five conjugacy classes represented by
$$1, \quad (12), \quad (123), \quad (12)(34), \quad (1234)$$
The elements of cycle form $(12)(34)$ form (with 1) a normal subgroup
$$K = \{1, (12)(34), (13)(24), (14)(23)\} \trianglelefteq S_4$$
called the Klein 4-group. The quotient $S_4/K$ is isomorphic to the symmetric group on 3 letters. Imitating the case of $D_4$, this allows us to construct the following portion of the character table for $S_4$:

      |c_j|   |  1    6     8       3        6
              |  1   (12) (123) (12)(34)  (1234)
      chi_1   |  1    1     1       1        1
      chi_2   |  1   -1     1       1       -1
      chi_3   |  2    0    -1       2        0
      chi_4   |  3
      chi_5   |  3

Explanations:
(1) Since $(12)(34) \in K$, the value of the first three characters on this conjugacy class is $d_i$, the same as in the first column.
(2) Since $(1234)K = (12)K$, these two columns have the same values of $\chi_1, \chi_2, \chi_3$.
(3) Finally, the two unknown characters $\chi_4, \chi_5$ must be 3-dimensional since
$$24 = \sum d_i^2 = 1 + 1 + 4 + d_4^2 + d_5^2$$
has only one solution: $d_4 = d_5 = 3$.
To figure out the unknown characters we need another representation. The permutation representation $P$ is the 4-dimensional representation of $S_4$ in which the elements of $S_4$ act by permuting the unit coordinate vectors. For example
$$\rho_P(12) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Note that the trace of $\rho_P(\sigma)$ is equal to the number of letters left fixed by $\sigma$. So, $\chi_P$ takes values $4, 2, 1, 0, 0$ as shown:

      |c_j|               |  1    6     8       3        6
                          |  1   (12) (123) (12)(34)  (1234)
      chi_1               |  1    1     1       1        1
      chi_2               |  1   -1     1       1       -1
      chi_3               |  2    0    -1       2        0
      chi_P               |  4    2     1       0        0
      chi_V = chi_P-chi_1 |  3    1     0      -1       -1

The representation $P$ contains one copy of the trivial representation and no copies of the other two:
$$\langle \chi_P, \chi_1 \rangle = \frac{1}{24}(4 + 6(2) + 8(1)) = 1$$
$$\langle \chi_P, \chi_2 \rangle = \frac{1}{24}(4 + 6(-1)(2) + 8(1)(1)) = 0$$
$$\langle \chi_P, \chi_3 \rangle = \frac{1}{24}((2)(4) + 8(-1)(1)) = 0$$
So, $P = S_1 \oplus V$ where $V$ is a 3-dimensional module which does not contain $S_1$, $S_2$ or $S_3$. So, $V \cong nS_4 \oplus mS_5$. But $S_4, S_5$ are both 3-dimensional. So, $V \cong S_4$ (or $S_5$).

Using the fact that
$$\chi_1 + \chi_2 + 2\chi_3 + 3\chi_4 + 3\chi_5 = \chi_{\mathrm{reg}}$$
we can now complete the character table of $S_4$:

      |c_j|   |  1    6     8       3        6
              |  1   (12) (123) (12)(34)  (1234)
      chi_1   |  1    1     1       1        1
      chi_2   |  1   -1     1       1       -1
      chi_3   |  2    0    -1       2        0
      chi_4   |  3    1     0      -1       -1
      chi_5   |  3   -1     0      -1        1

From the character table of $S_4$ we can find all normal subgroups. First, the kernels of the 5 irreducible representations are:
(1) $\ker \rho_1 = S_4$.
(2) $\ker \rho_2 = A_4$, containing the conjugacy classes of $1, (123), (12)(34)$.
(3) $\ker \rho_3 = K$, containing $1, (12)(34)$ and conjugates.
(4) $\ker \rho_4 = 1$. I.e., $\rho_4$ is a faithful representation.
(5) $\ker \rho_5 = 1$. So, $\rho_5$ is also faithful.
Since these subgroups contain each other:
$$1 < K < A_4 < S_4$$
intersecting them will not give any other subgroups. So, these are the only normal subgroups of $S_4$.
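The kernel computation of Theorem 2.26 can be mechanized. A minimal sketch (not from the notes, in Python), reading the kernels off the $S_4$ character table above: a class is in the kernel of $\chi_i$ exactly when $\chi_i$ takes its value at 1 there, and the kernels here form a chain.

```python
# Reading normal subgroups of S4 off its character table (Theorem 2.26).
# Classes: 1, (12), (123), (12)(34), (1234), with sizes:
sizes = [1, 6, 8, 3, 6]
table = {1: [1, 1, 1, 1, 1],
         2: [1, -1, 1, 1, -1],
         3: [2, 0, -1, 2, 0],
         4: [3, 1, 0, -1, -1],
         5: [3, -1, 0, -1, 1]}

def kernel(chi):
    """Indices of the conjugacy classes where chi takes its value at 1."""
    return frozenset(k for k, v in enumerate(chi) if v == chi[0])

kernels = {i: kernel(chi) for i, chi in table.items()}
order = lambda classes: sum(sizes[k] for k in classes)

assert order(kernels[1]) == 24                        # ker(rho_1) = S4
assert order(kernels[2]) == 12                        # ker(rho_2) = A4
assert order(kernels[3]) == 4                         # ker(rho_3) = K
assert order(kernels[4]) == order(kernels[5]) == 1    # rho_4, rho_5 faithful
# The kernels form a chain, so intersections give nothing new:
assert kernels[4] <= kernels[3] <= kernels[2] <= kernels[1]
print("normal subgroups of S4: 1 < K < A4 < S4")
```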
2.5.4. column orthogonality. The columns of the character table also satisfy an orthogonality condition. To see it we first take the row orthogonality condition (using $\chi_j(c_k^{-1}) = \overline{\chi_j(c_k)}$)
$$\langle \chi_i, \chi_j \rangle = \sum_{k=1}^{b} \frac{|c_k|}{n}\, \chi_i(c_k)\, \overline{\chi_j(c_k)} = \delta_{ij}$$
and write it in matrix form:
$$T \begin{pmatrix} \frac{|c_1|}{n} & & 0 \\ & \ddots & \\ 0 & & \frac{|c_b|}{n} \end{pmatrix} \overline{T}^t = I_b$$
where $T$ is the character table $T = (\chi_i(c_j))$. This equation shows that the character table $T$ is an invertible matrix with inverse
$$T^{-1} = D\overline{T}^t$$
where $D$ is the diagonal matrix with diagonal entries $\frac{|c_j|}{n}$. Multiplying both sides of this equation on the right by $T$ and on the left by $D^{-1}$ we get:
$$\overline{T}^t T = D^{-1} = \begin{pmatrix} \frac{n}{|c_1|} & & 0 \\ & \ddots & \\ 0 & & \frac{n}{|c_b|} \end{pmatrix}$$
Looking at the entries of these matrices we get the column orthogonality relation:

Theorem 2.35. If $\sigma, \tau \in G$ then
$$\sum_{i=1}^b \chi_i(\sigma)\overline{\chi_i(\tau)} = \begin{cases} n/c & \text{if } \sigma, \tau \text{ are conjugate}\\ 0 & \text{if not} \end{cases}$$
Here $c$ is the number of conjugates of $\sigma$ in $G$. (So, $n/c$ is the order of the centralizer $C(\sigma) = \{\tau \in G \mid \sigma\tau = \tau\sigma\}$ of $\sigma$.)
Corollary 2.36. The character table $T = (\chi_i(c_j))$ determines the size of each conjugacy class $c_j$.

Proof. Taking $\tau = \sigma$ in the above theorem we get
$$|C(\sigma)| = \sum_i \|\chi_i(\sigma)\|^2$$
The size of the conjugacy class $c$ of $\sigma$ is the index of its centralizer: $c = |G : C(\sigma)| = n/|C(\sigma)|$.
As an example, look at the character table for $S_3$:

          1  (12) (123)
\chi_1    1    1    1
\chi_2    1   -1    1
\chi_3    2    0   -1

Column orthogonality means that the usual Hermitian dot product of the columns is zero. For example, the dot product of the first and third columns is
$$(1)(1) + (1)(1) + (2)(-1) = 0$$
Also, the dot product of the $j$th column vector with itself (its length squared) is equal to $n/|c_j|$. For example, the length squared of the third column vector is
$$1 + 1 + 1 = 3$$
making the number of conjugates of $(123)$ equal to $6/3 = 2$.
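This column computation is easy to verify mechanically. Here is a small sketch (hard-coding the $S_3$ character table above; not part of the notes) that checks the column dot products:

```python
# Check column orthogonality for the S3 character table: distinct columns
# are orthogonal, and the squared length of column j is n/|c_j|.
T = [[1, 1, 1],
     [1, -1, 1],
     [2, 0, -1]]      # rows chi_1..chi_3; columns 1, (12), (123)
sizes = [1, 3, 2]     # class sizes
n = 6                 # |S3|

def col_dot(j, k):
    return sum(T[i][j] * T[i][k] for i in range(3))

print([col_dot(0, j) for j in range(3)])   # first column against all three columns
```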
3. Induction

If $H$ is a subgroup of $G$ then any representation of $G$ will restrict to a representation of $H$ by composition:
$$H \hookrightarrow G \to \mathrm{Aut}_{\mathbb C}(V)$$
Induction is a more complicated process which goes the other way: it starts with a representation of $H$ and produces a representation of $G$. Following Lang, I will construct the same object in several different ways, starting with an elementary equation for the induced character.
3.1. induced characters.

Definition 3.1. Suppose that $H \le G$ ($H$ is a subgroup of $G$) and $\chi : H \to \mathbb C$ is a character (or any class function). Then the induced character
$$\mathrm{Ind}_H^G \chi : G \to \mathbb C$$
is the class function on $G$ defined by
$$\mathrm{Ind}_H^G \chi(\sigma) = \frac1{|H|} \sum_{\tau \in G} \chi(\tau\sigma\tau^{-1})$$
where $\chi(\sigma) = 0$ if $\sigma \notin H$.
The main theorem about the induced character is the following.

Theorem 3.2. If $V$ is any representation of $H$ then there exists a representation $W$ of $G$ so that
$$\chi_W = \mathrm{Ind}_H^G \chi_V$$
Furthermore, $W$ is unique up to isomorphism.

The representation $W$ is written $W = \mathrm{Ind}_H^G V$ and is called the induced representation. We will study that tomorrow.
Before proving this theorem let me give two examples.

3.1.1. example 1. Here is a trivial observation.

Proposition 3.3. If $G$ is abelian then
$$\mathrm{Ind}_H^G \chi(\sigma) = |G : H|\,\chi(\sigma)$$

Now suppose that $G = \mathbb Z/4 = \{1, \sigma, \sigma^2, \sigma^3\}$ and $H = \{1, \tau\}$ with $\tau = \sigma^2$. Then the character table of $H \cong \mathbb Z/2$ is

H = Z/2    1  \tau
\chi_+     1    1
\chi_-     1   -1
I want to calculate $\mathrm{Ind}_{\mathbb Z/2}^{\mathbb Z/4} \chi_-$. By the proposition, the value of this induced character on $1, \sigma, \sigma^2, \sigma^3$ is the index $|G : H| = 2$ times $1, 0, -1, 0$ respectively. This gives $2, 0, -2, 0$ as indicated below the character table for $G = \mathbb Z/4$:

G = Z/4       1  \sigma  \sigma^2  \sigma^3
\chi_1        1    1        1         1
\chi_2        1   -1        1        -1
\chi_3        1    i       -1        -i
\chi_4        1   -i       -1         i
Ind \chi_-    2    0       -2         0

By examination we see that
$$\mathrm{Ind}_{\mathbb Z/2}^{\mathbb Z/4} \chi_- = \chi_3 + \chi_4$$
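The induced character can also be computed directly from Definition 3.1. A sketch (writing $\mathbb Z/4$ additively as integers mod 4, an encoding chosen here for illustration, not notation from the notes):

```python
# Compute Ind chi_- from Definition 3.1 for G = Z/4 and H = {0, 2},
# where sigma^a corresponds to the residue a mod 4.
G = [0, 1, 2, 3]
chi = {0: 1, 2: -1}                  # chi_- on H, extended by 0 off H

def induced(s):
    # (1/|H|) * sum over tau in G of chi(tau + s - tau);
    # conjugation is trivial here since G is abelian
    return sum(chi.get((t + s - t) % 4, 0) for t in G) // 2

print([induced(s) for s in G])       # values on 1, sigma, sigma^2, sigma^3
```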
3.1.2. example 2. In the nonabelian case we have the following formula which is analogous to the one in the abelian case.

Proposition 3.4.
$$\mathrm{Ind}_H^G \chi(\sigma) = |G : H|\,\bigl(\text{average value of } \chi(\tau\sigma\tau^{-1})\bigr)$$

Now let $G = S_3$ and $H = \{1, (12)\} \cong \mathbb Z/2$. Using the same notation as in the previous example, let $\chi_-$ be the one dimensional character on $H$ given by $\chi_-(1) = 1$, $\chi_-(12) = -1$. We want to compute the induced character $\mathrm{Ind}_H^G \chi_-$.
$$\mathrm{Ind}_H^G \chi_-(1) = |G : H|\,\chi_-(1) = (3)(1) = 3$$
Since $(12)$ has three conjugates, only one of which lies in $H$, the average value of $\chi_-$ on these conjugates is
$$\frac13(-1 + 0 + 0) = -\frac13$$
So,
$$\mathrm{Ind}_H^G \chi_-(12) = |G : H|\left(-\frac13\right) = -\frac33 = -1$$
Since neither of the conjugates of $(123)$ lies in $H$ we have:
$$\mathrm{Ind}_H^G \chi_-(123) = 0$$
So, $\mathrm{Ind}_H^G \chi_-$ takes the values $3, -1, 0$ on the conjugacy classes of $G = S_3$. Put it below the character table of $S_3$:

G = S3        1  (12) (123)
\chi_1        1    1    1
\chi_2        1   -1    1
\chi_3        2    0   -1
Ind \chi_-    3   -1    0

We can see that
$$\mathrm{Ind}_H^G \chi_- = \chi_2 + \chi_3$$
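The same values come out of a brute-force evaluation of Definition 3.1 over all of $S_3$. A sketch (permutations encoded as tuples, an encoding of my own choosing):

```python
# Compute Ind chi_- from Definition 3.1 for G = S3, H = {1, (12)},
# with permutations of {0,1,2} written as tuples p (p[i] is the image of i).
from itertools import permutations

def compose(p, q):                 # (p*q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))
e, t12 = (0, 1, 2), (1, 0, 2)
chi = {e: 1, t12: -1}              # chi_- on H, extended by 0 off H

def induced(s):
    # (1/|H|) * sum over tau in G of chi(tau * s * tau^{-1})
    return sum(chi.get(compose(compose(t, s), inverse(t)), 0) for t in G) // 2

values = [induced(e), induced(t12), induced((1, 2, 0))]
print(values)   # one value per conjugacy class: 1, (12), (123)
```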
3.1.3. Frobenius reciprocity for characters. First I need some fancy notation for a very simple concept. If $f : G \to \mathbb C$ is any class function then the restriction of $f$ to $H$, denoted $\mathrm{Res}_H^G f$, is the composition of $f$ with the inclusion map $j : H \hookrightarrow G$:
$$\mathrm{Res}_H^G f = f \circ j : H \to \mathbb C$$

Theorem 3.5 (Frobenius reciprocity). Suppose that $g, h$ are class functions on $G, H$ respectively. Then
$$\langle \mathrm{Ind}_H^G h, g\rangle_G = \langle h, \mathrm{Res}_H^G g\rangle_H$$
Suppose for a moment that this is true. Then, letting $h = \chi_V$ and taking $g$ to be the irreducible character $g = \chi_i$, we get:
$$\langle \mathrm{Ind}_H^G \chi_V, \chi_i\rangle_G = \langle \chi_V, \mathrm{Res}_H^G \chi_i\rangle_H = n_i$$
Since $\mathrm{Res}_H^G \chi_i$ is the character of the $G$-module $S_i$ considered as an $H$-module, the number $n_i$ is a nonnegative integer, namely:
$$n_i = \dim_{\mathbb C} \mathrm{Hom}_H(V, S_i)$$
This implies that
$$\mathrm{Ind}_H^G \chi_V = \chi_W$$
where $W$ is the $G$-module $W = \bigoplus n_i S_i$. In other words, the induced character is an effective character (the character of some representation).

Corollary 3.6. If $h : H \to \mathbb C$ is an effective character then so is $\mathrm{Ind}_H^G h : G \to \mathbb C$.

This is a rewording of the main theorem (Theorem 3.2).
Proof of Frobenius reciprocity for characters. Since
$$\mathrm{Ind}_H^G h(\sigma) = \frac1{|H|} \sum_{\tau \in G} h(\tau\sigma\tau^{-1})$$
the left hand side of our equation is
$$\mathrm{LHS} = \frac1{|G|} \sum_{\sigma \in G} \frac1{|H|} \sum_{\tau \in G} h(\tau\sigma\tau^{-1})\,\overline{g(\sigma)}$$
Since $g$ is a class function, $\overline{g(\sigma)} = \overline{g(\tau\sigma\tau^{-1})}$. Letting $\lambda = \tau\sigma\tau^{-1}$ we get a sum of terms of the form
$$h(\lambda)\overline{g(\lambda)}$$
How many times does each such term occur?

Claim: The number of ways that $\lambda$ can be written as $\lambda = \tau\sigma\tau^{-1}$ is exactly $n = |G|$.

The proof of this claim is simple. For each $\tau \in G$ there is exactly one $\sigma$ which works, namely, $\sigma = \tau^{-1}\lambda\tau$.

This implies that
$$\mathrm{LHS} = \frac1{|H|} \sum_{\lambda \in G} h(\lambda)\overline{g(\lambda)}$$
Since $h$ is a class function on $H$, $h(\lambda) = 0$ if $\lambda \notin H$. Therefore, the sum can be restricted to $\lambda \in H$ and this expression is equal to the RHS of the Frobenius reciprocity equation.
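The theorem can also be sanity-checked numerically on the $S_3$ example above. This sketch (hard-coding the characters computed earlier) verifies $\langle \mathrm{Ind}\,\chi_-, \chi_i\rangle_G = \langle \chi_-, \mathrm{Res}\,\chi_i\rangle_H$ for each irreducible $\chi_i$:

```python
# Numerical check of Frobenius reciprocity for G = S3 and H = {1, (12)}.
sizes = [1, 3, 2]                       # class sizes of 1, (12), (123) in S3
chars = {1: [1, 1, 1], 2: [1, -1, 1], 3: [2, 0, -1]}
ind = [3, -1, 0]                        # Ind chi_-, computed in the text
chim = [1, -1]                          # chi_- on H
res = {i: chars[i][:2] for i in chars}  # restriction: columns for 1 and (12)

def inner_G(a, b):
    return sum(c * x * y for c, x, y in zip(sizes, a, b)) / 6

def inner_H(a, b):
    return sum(x * y for x, y in zip(a, b)) / 2

both = {i: (inner_G(ind, chars[i]), inner_H(chim, res[i])) for i in chars}
print(both)   # the two inner products agree for every irreducible character
```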
3.1.4. examples of Frobenius reciprocity. Let's take the two examples of induced characters that we did earlier and look at what Frobenius reciprocity says about them.

In the case $G = \mathbb Z/4$, $H = \mathbb Z/2$, the restrictions of the four irreducible characters of $G = \mathbb Z/4$ to $H$ (given by the first and third columns) are:
$$\mathrm{Res}_H^G \chi_1 = \chi_+$$
$$\mathrm{Res}_H^G \chi_2 = \chi_+$$
$$\mathrm{Res}_H^G \chi_3 = \chi_-$$
$$\mathrm{Res}_H^G \chi_4 = \chi_-$$
Frobenius reciprocity says that the number of times that $\chi_-$ appears in the decomposition of $\mathrm{Res}_H^G \chi_i$ is equal to the number of times that $\chi_i$ appears in the decomposition of $\mathrm{Ind}_H^G \chi_-$. So,
$$\mathrm{Ind}_H^G \chi_- = \chi_3 + \chi_4$$
In the case $G = S_3$, $H = \{1, (12)\}$, the restrictions of the three irreducible characters of $G = S_3$ to $H$, as given by the first two columns, are:
$$\mathrm{Res}_H^G \chi_1 = \chi_+$$
$$\mathrm{Res}_H^G \chi_2 = \chi_-$$
$$\mathrm{Res}_H^G \chi_3 = (2, 0) = \chi_+ + \chi_-$$
Since $\chi_-$ appears once in the restrictions of $\chi_2, \chi_3$ we have
$$\mathrm{Ind}_H^G \chi_- = \chi_2 + \chi_3$$
3.1.5. induction-restriction tables. The results of the calculations in these two examples are summarized in the following tables which are called induction-restriction tables.

For $G = \mathbb Z/4$ and $H = \mathbb Z/2$ the induction-restriction table is:

          \chi_+  \chi_-
\chi_1      1       0
\chi_2      1       0
\chi_3      0       1
\chi_4      0       1

For $G = S_3$ and $H = \{1, (12)\}$ the induction-restriction table is:

          \chi_+  \chi_-
\chi_1      1       0
\chi_2      0       1
\chi_3      1       1

In both cases, the rows give the decompositions of $\mathrm{Res}_H^G \chi_i$ and the columns give the decompositions of $\mathrm{Ind}_H^G \chi_\pm$.
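Since every entry of such a table is an inner product over $H$, the table can be generated mechanically. A sketch for the $S_3$ case (values hard-coded from the character table; not part of the notes):

```python
# Build the induction-restriction table for (S3, H = {1, (12)}) from the
# character table: entry (i, ±) = <Res chi_i, chi_±>_H.
res = {1: (1, 1), 2: (1, -1), 3: (2, 0)}     # chi_i at 1 and (12)
chi_plus, chi_minus = (1, 1), (1, -1)

def inner_H(a, b):
    # inner product over H = Z/2 (both elements are their own inverses)
    return sum(x * y for x, y in zip(a, b)) // 2

table = {i: (inner_H(r, chi_plus), inner_H(r, chi_minus))
         for i, r in res.items()}
print(table)   # rows of the induction-restriction table
```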
3.2. Induced representations. Last time we proved that the induced character $\mathrm{Ind}_H^G \chi_V$ is the character of some representation which is uniquely determined up to isomorphism. Today I want to construct that representation explicitly. There are two methods, abstract and concrete. The abstract version is short and immediately implies Frobenius reciprocity. The concrete version is complicated but you can see what the representation actually looks like.

Definition 3.7. If $V$ is a representation of $H$ and $H \le G$ then the induced representation is defined to be
$$\mathrm{Ind}_H^G V = \mathbb C[G] \otimes_{\mathbb C[H]} V$$
3.2.1. Frobenius reciprocity. One of the main theorems follows immediately from basic properties of the tensor product:

Theorem 3.8 (Frobenius reciprocity). If $V$ is a representation of $H$ and $W$ is a representation of $G$ then
$$\mathrm{Hom}_G(\mathrm{Ind}_H^G V, W) \cong \mathrm{Hom}_H(V, \mathrm{Res}_H^G W)$$

This follows from:

Theorem 3.9 (adjunction formula).
$$\mathrm{Hom}_R({}_R M_S \otimes_S {}_S V, {}_R N) \cong \mathrm{Hom}_S({}_S V, \mathrm{Hom}_R(M_S, N))$$

And the easy formula:
$$\mathrm{Hom}_R(R, N) \cong N$$

Letting $M = R$ and $S \subseteq R$, we get the following.

Corollary 3.10. If $S$ is a subring of $R$, $V$ is an $S$-module and $W$ is an $R$-module then
$$\mathrm{Hom}_R(R \otimes_S V, W) \cong \mathrm{Hom}_S(V, W)$$

Putting $R = \mathbb C[G]$, $S = \mathbb C[H]$, this gives Frobenius reciprocity. Thus it suffices to prove the adjunction formula.
Proof of adjunction formula. The first step follows from the definition of the tensor product. When $S$ is a noncommutative ring, such as $S = \mathbb C[H]$, the tensor product $M \otimes_S V$ is sometimes called a balanced product. It is an abelian group characterized by the following universal property:
(1) There is a mapping $f : M \times V \to M \otimes_S V$ which is
(a) bilinear in the sense that it is a homomorphism in each variable ($f(x, -) : V \to M \otimes_S V$ and $f(-, v) : M \to M \otimes_S V$ are homomorphisms for all $x, v$) and
(b) balanced in the sense that
$$f(xs, v) = f(x, sv)$$
for all $x \in M$, $s \in S$, $v \in V$.
In other words, $f$ is bilinear and balanced.
(2) For any other bilinear, balanced mapping $g : M \times V \to W$ there is a unique homomorphism $\hat g : M \otimes_S V \to W$ so that $g = \hat g \circ f$.

Let $\mathrm{BiLin}(M \times_S V, W)$ denote the set of all balanced bilinear maps $M \times V \to W$. Then the universal property says that
$$\mathrm{BiLin}(M \times_S V, W) \cong \mathrm{Hom}(M \otimes_S V, W)$$
On the other hand the definitions of balanced and bilinear imply that
$$\mathrm{BiLin}(M \times_S V, W) \cong \mathrm{Hom}_S(V, \mathrm{Hom}(M, W))$$
The balanced bilinear map $\phi : M \times_S V \to W$ corresponds to its adjoint $\hat\phi : V \to \mathrm{Hom}(M, W)$ given by $\hat\phi(v)(x) = \phi(x, v)$.
(1) $\phi(x, v)$ is linear in $x$ iff $\hat\phi(v) \in \mathrm{Hom}(M, W)$. This is clear.
(2) $\phi(x, v)$ is linear in $v$ iff $\hat\phi$ is linear, i.e., gives a homomorphism of abelian groups $V \to \mathrm{Hom}(M, W)$. This is also clear.
(3) Finally, $\phi$ is balanced iff $\phi(xs, v) = \phi(x, sv)$ iff
$$\hat\phi(sv)(x) = \hat\phi(v)(xs) = [s\hat\phi(v)](x)$$
iff $\hat\phi(sv) = s\hat\phi(v)$, i.e., $\hat\phi$ is an $S$-homomorphism.
In the case when $M$ is an $R$-$S$-bimodule we just need to observe the obvious fact that $\phi$ is an $R$-homomorphism in the first coordinate iff $\hat\phi(V) \subseteq \mathrm{Hom}_R(M, W)$.
The adjunction formula follows from these observations.
3.2.2. example. Here is the simplest example of an induced representation. Take $G = \mathbb Z/4 = \{1, \sigma, \sigma^2, \sigma^3\}$ and $H = \mathbb Z/2 = \{1, \tau\}$ where $\tau = \sigma^2$. Let $\chi_-$ be the one dimensional sign representation $\chi_-(\tau) = -1$. Let $V$ denote the $H$-module of the representation $\chi_-$. So, $V = \mathbb C$ with $\tau$ acting by $-1$.

What is the induced representation $\mathrm{Ind}_{\mathbb Z/2}^{\mathbb Z/4} \chi_-$?

The induced module is $\mathbb C[G] \otimes_{\mathbb C[H]} V$ which is 2-dimensional. It is generated by four elements $1 \otimes 1$, $\sigma \otimes 1$, $\sigma^2 \otimes 1$, $\sigma^3 \otimes 1$. But $\sigma^2 = \tau$. So,
$$\sigma^2 \otimes 1 = 1 \otimes \tau 1 = -1 \otimes 1$$
and
$$\sigma^3 \otimes 1 = \sigma \otimes \tau 1 = -\sigma \otimes 1$$
So, $\mathbb C[G] \otimes V$ is two dimensional with basis $w_1 = 1 \otimes 1$, $w_2 = \sigma \otimes 1$, and $\sigma$ acts by: $\sigma w_1 = w_2$ and $\sigma w_2 = \sigma^2 \otimes 1 = -1 \otimes 1 = -w_1$. So, the matrix of the representation $\rho = \mathrm{Ind}_H^G \chi_-$ is given by:
$$\rho(\sigma) = \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix}$$
Since $G$ is cyclic this determines the other matrices:
$$\rho(\sigma^2) = \rho(\sigma)^2 = \begin{pmatrix} -1 & 0\\ 0 & -1 \end{pmatrix}, \qquad \rho(\sigma^3) = \rho(\sigma)^3 = \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}$$
Notice that these matrices are all monomial, which means that they have exactly one nonzero entry in every row and every column. The induced representation is always given by monomial matrices.
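A quick sketch (not from the notes) verifying these matrices, and also that their traces recover the induced character $2, 0, -2, 0$ computed earlier:

```python
# Matrices of rho = Ind chi_- for G = Z/4 in the basis w1, w2.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

rho_s = [[0, -1], [1, 0]]             # rho(sigma)
rho_s2 = matmul(rho_s, rho_s)         # rho(sigma^2)
rho_s3 = matmul(rho_s2, rho_s)        # rho(sigma^3)
ident = matmul(rho_s3, rho_s)         # rho(sigma^4) should be the identity

traces = [2,                          # trace of rho(1)
          rho_s[0][0] + rho_s[1][1],
          rho_s2[0][0] + rho_s2[1][1],
          rho_s3[0][0] + rho_s3[1][1]]
print(traces)   # should match Ind chi_- on 1, sigma, sigma^2, sigma^3
```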
3.2.3. monomial matrices. A monomial matrix of size $m$ with coefficients in a group $H$ is defined to be an element of $\mathrm{Mat}_m(\mathbb Z[H])$ having exactly one nonzero entry in every row and every column and so that those entries lie in $H$. Every monomial matrix $M$ is a product of a permutation matrix $P_\sigma$ and a diagonal matrix $D$:
$$M = P_\sigma D(h_1, h_2, \dots, h_m)$$
Here $P_\sigma$ is the matrix obtained from the identity matrix $I_m$ by permuting the rows by the permutation $\sigma$. For example, if $\sigma = (132)$ then
$$P_{(132)} = \begin{pmatrix} 0&1&0\\ 0&0&1\\ 1&0&0 \end{pmatrix}$$
This is obtained by taking the identity matrix, moving row 1, which is $(1,0,0)$, to row $\sigma(1) = 3$, moving row 2, which is $(0,1,0)$, to row $\sigma(2) = 1$, etc. The entries of the matrix are:
$$(P_\sigma)_{ij} = \begin{cases} 1 & \text{if } i = \sigma(j)\\ 0 & \text{otherwise} \end{cases}$$
The notation for the diagonal matrix is the obvious one: $D(h_1, \dots, h_m)$ is the diagonal matrix with $(i,i)$ entry $h_i$. So, for example,
$$P_{(132)} D(h_1, h_2, h_3) = \begin{pmatrix} 0&h_2&0\\ 0&0&h_3\\ h_1&0&0 \end{pmatrix}$$
So, $h_j$ is in the $j$th column.
How do monomial matrices multiply? We need to calculate:
$$P_\sigma D(h_1, \dots, h_m)\, P_\tau D(\ell_1, \dots, \ell_m)$$
But
$$D(h_1, \dots, h_m) P_\tau = P_\tau D(h_{\tau(1)}, \dots, h_{\tau(m)})$$
So,
$$(3.1)\quad P_\sigma D(h_1, \dots, h_m)\, P_\tau D(\ell_1, \dots, \ell_m) = P_{\sigma\tau} D(h_{\tau(1)}\ell_1, \dots, h_{\tau(m)}\ell_m)$$

Definition 3.11. Let $M_m(H)$ denote the group of all $m \times m$ monomial matrices with coefficients in $H$. We denote the elements by
$$M(\sigma; h_1, \dots, h_m) = P_\sigma D(h_1, \dots, h_m)$$
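The multiplication rule (3.1) can be checked mechanically. In the sketch below the coefficient group $H$ is taken to be $\{\pm 1\}$, so that monomial matrices are honest integer matrices, and permutations are 0-indexed tuples; both choices are mine, for illustration only:

```python
# Verify rule (3.1): M(sigma; h) * M(tau; l)
#   = M(sigma∘tau; h_{tau(1)} l_1, ..., h_{tau(m)} l_m).
def mono(sigma, h):
    """M(sigma; h) = P_sigma D(h): entry (i, j) is h[j] if i == sigma[j]."""
    m = len(sigma)
    return [[h[j] if i == sigma[j] else 0 for j in range(m)] for i in range(m)]

def matmul(A, B):
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

sigma = (2, 0, 1)                  # permutations as tuples: j -> sigma[j]
tau = (1, 2, 0)
h = [1, -1, -1]                    # coefficients in H = {±1}
l = [-1, 1, -1]

lhs = matmul(mono(sigma, h), mono(tau, l))
comp = tuple(sigma[tau[j]] for j in range(3))          # sigma∘tau
rhs = mono(comp, [h[tau[j]] * l[j] for j in range(3)])
print(lhs == rhs)
```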
3.2.4. monomial representation. Suppose that $H$ is a subgroup of a group $G$ with index $|G : H| = m$. Then
$$G = t_1 H \sqcup t_2 H \sqcup \dots \sqcup t_m H$$
where $t_1, \dots, t_m$ form what is called a (left) transversal, which is a set of representatives for the left cosets of $H$. Then we will get a monomial representation, by which I mean a homomorphism
$$\Phi : G \to M_m(H)$$
First, I start with the permutation representation
$$\pi : G \to S_m$$
which is given by the action of $G$ on the set of left cosets of $H$. If $\sigma \in G$ then
$$\sigma t_j H = t_i H$$
where $i = \bar\sigma(j) = \pi(\sigma)(j)$.
For example, suppose $G = S_3$, $H = \{1, (12)\}$. Choose the transversal: $t_1 = 1$, $t_2 = (13)$, $t_3 = (23)$. Then $\sigma = (13)$ acts on the three left cosets by transposing the first two and fixing the third:
$$(13)t_1 H = t_2 H, \quad (13)t_2 H = t_1 H, \quad (13)t_3 H = t_3 H$$
Therefore, $\bar\sigma = \pi(13) = (12)$.
Now, look at the elements of $H$ that we get:
$$\sigma t_j = t_{\bar\sigma(j)} h_j$$
where
$$h_j = t_{\bar\sigma(j)}^{-1} \sigma t_j$$

Definition 3.12. The monomial representation
$$\Phi : G \to M_m(H)$$
is given by
$$\Phi(\sigma) = M(\bar\sigma;\; t_{\bar\sigma(1)}^{-1}\sigma t_1, \dots, t_{\bar\sigma(m)}^{-1}\sigma t_m)$$
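The permutation $\bar\sigma$ and the coefficients $h_j$ can be computed by brute force over the transversal. A sketch for the $S_3$ example above (letters relabeled $1, 2, 3 \to 0, 1, 2$, an encoding chosen here for convenience, so $H = \{e, (01)\}$ and the transversal is $e, (02), (12)$):

```python
# Compute sigma-bar and h_j = t_{sigma-bar(j)}^{-1} sigma t_j for S3.
def compose(p, q):                     # (p*q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

e = (0, 1, 2)
H = {e, (1, 0, 2)}                     # {1, (12)} in the text's numbering
t = [e, (2, 1, 0), (0, 2, 1)]          # transversal: 1, (13), (23)

def bar_and_h(sigma):
    bar, hs = [], []
    for tj in t:
        x = compose(sigma, tj)                 # sigma * t_j
        for i, ti in enumerate(t):
            hj = compose(inverse(ti), x)       # t_i^{-1} sigma t_j
            if hj in H:                        # found: sigma t_j H = t_i H
                bar.append(i)
                hs.append(hj)
                break
    return tuple(bar), hs

bar, hs = bar_and_h((2, 1, 0))         # sigma = (13) in the text's numbering
print(bar, hs)                         # bar should swap cosets 1 and 2
```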
The following calculation verifies that $\Phi$ is a homomorphism:
$$\Phi(\sigma)\Phi(\tau) = M(\bar\sigma;\; t_{\bar\sigma(1)}^{-1}\sigma t_1, \dots)\, M(\bar\tau;\; t_{\bar\tau(1)}^{-1}\tau t_1, \dots) = M(\bar\sigma\bar\tau;\; \dots, t_{\bar\sigma(i)}^{-1}\sigma t_i\, t_{\bar\tau(j)}^{-1}\tau t_j, \dots)$$
But $i = \bar\tau(j)$ by the formula (3.1). So,
$$t_{\bar\sigma(i)}^{-1}\sigma t_i\, t_{\bar\tau(j)}^{-1}\tau t_j = t_{\bar\sigma\bar\tau(j)}^{-1}\sigma\tau\, t_j$$
and
$$\Phi(\sigma)\Phi(\tau) = M(\overline{\sigma\tau};\; \dots, t_{\overline{\sigma\tau}(j)}^{-1}\sigma\tau\, t_j, \dots) = \Phi(\sigma\tau)$$
3.2.5. induced representation as monomial representation. Suppose that $\rho : H \to GL(k, \mathbb C)$ is a $k$-dimensional representation of $H$ and $V = \mathbb C^k$ is the corresponding $H$-module. Then I claim that the induced representation $\mathrm{Ind}_H^G \rho$ is a monomial representation. More precisely, the statement is:

Proposition 3.13. The induced representation
$$\mathrm{Ind}_H^G \rho : G \to GL(mk, \mathbb C)$$
is the composition of the monomial representation $\Phi : G \to M_m(H)$ with the homomorphism
$$M_m(\rho) : M_m(H) \to M_m(GL(k, \mathbb C)) \subseteq GL(mk, \mathbb C)$$
induced by $\rho : H \to GL(k, \mathbb C)$.

Proof. As a right $H$-module, $\mathbb C[G]$ is free of rank $m$ with a basis given by a left transversal $t_1, \dots, t_m$. So,
$$\mathbb C[G] = t_1 \mathbb C[H] \oplus \dots \oplus t_m \mathbb C[H]$$
As a $G$-module the induced representation is defined to be
$$\mathbb C[G] \otimes_{\mathbb C[H]} V = (t_1 \otimes V) \oplus \dots \oplus (t_m \otimes V)$$
An arbitrary element is given by $\sum_j t_j \otimes v_j$ where the $v_j$ are arbitrary elements of $V$. Each $\sigma \in G$ acts by
$$\sigma \sum t_j \otimes v_j = \sum \sigma t_j \otimes v_j = \sum t_{\bar\sigma(j)} h_j \otimes v_j = \sum t_{\bar\sigma(j)} \otimes \rho(h_j)v_j$$
In other words, $\sigma$ acts on $V^m$ by multiplying the $j$th copy of $V$ by the matrix
$$\rho(h_j) = \rho(t_{\bar\sigma(j)}^{-1}\sigma t_j)$$
and then moving it to the $\bar\sigma(j)$ slot. So:
$$\mathrm{Ind}_H^G \rho(\sigma) = M(\bar\sigma;\; \dots, \rho(t_{\bar\sigma(j)}^{-1}\sigma t_j), \dots)$$
This is $M_m(\rho)$ applied to the standard monomial representation as I claimed.

Proposition 3.14. The character of the induced representation is the induced character.

Proof. This is a simple calculation. The trace of a monomial matrix is given by the points left fixed by the permutation representation $\pi(\sigma) = \bar\sigma$:
$$\mathrm{Tr}\bigl(\mathrm{Ind}_H^G \rho(\sigma)\bigr) = \mathrm{Tr}\, M(\bar\sigma;\; \dots, \rho(t_{\bar\sigma(j)}^{-1}\sigma t_j), \dots) = \sum_{j = \bar\sigma(j)} \mathrm{Tr}\, \rho(t_{\bar\sigma(j)}^{-1}\sigma t_j) = \sum_{j=1}^m \chi_\rho(t_j^{-1}\sigma t_j)$$
because $\chi_\rho(t_j^{-1}\sigma t_j) = 0$ when $j \neq \bar\sigma(j)$.
Since $\chi_\rho$ is a class function on $H$,
$$\chi_\rho(t_j^{-1}\sigma t_j) = \chi_\rho(h^{-1} t_j^{-1}\sigma t_j h)$$
for all $h \in H$. So,
$$\mathrm{Tr}\bigl(\mathrm{Ind}_H^G \rho(\sigma)\bigr) = \frac1{|H|} \sum_{h \in H} \sum_{j=1}^m \chi_\rho(h^{-1} t_j^{-1}\sigma t_j h)$$
Since $t_j h$ runs over all the elements of $G$, this is equal to
$$\mathrm{Ind}_H^G \chi_\rho(\sigma) = \frac1{|H|} \sum_{\tau \in G} \chi_\rho(\tau^{-1}\sigma\tau)$$
proving the proposition.
3.3. Artin's theorem. One of the main theorems is that all characters on finite groups are integer linear combinations of characters induced from abelian subgroups. I don't have time to do this theorem. But I can prove a weaker version which says that all characters are rational linear combinations of characters induced from cyclic subgroups.
Before I prove this, I want to make sense out of the statement of the theorem. What happens when we take linear combinations of characters when the coefficients are arbitrary integers or rational numbers?
3.3.1. character ring.

Definition 3.15. The character ring $R(G)$ of $G$ is defined to be the ring of all virtual characters, which are defined to be differences of effective characters:
$$f = \chi_V - \chi_W$$
These can also be described as integer linear combinations of irreducible characters:
$$f = \sum n_i \chi_i, \quad n_i \in \mathbb Z$$
$R(G)$ is a ring because (pointwise) sums and products of effective characters are effective. So, the same holds for virtual characters.
Proposition 3.16. A group homomorphism $\phi : H \to G$ induces a ring homomorphism $\phi^* : R(G) \to R(H)$. In particular, if $H \le G$,
$$\mathrm{Res}_H^G : R(G) \to R(H)$$
is a ring homomorphism.

I won't prove this because it is sort of obvious and I don't need it. I want to look at the induction map.

Proposition 3.17. If $H \le G$ then
$$\mathrm{Ind}_H^G : R(H) \to R(G)$$
is a group homomorphism, i.e., it is additive.

Proof. This follows from the fact that tensor product distributes over direct sum:
$$\mathrm{Ind}_H^G(V \oplus W) = \mathbb C[G] \otimes_{\mathbb C[H]} (V \oplus W) \cong \bigl(\mathbb C[G] \otimes_{\mathbb C[H]} V\bigr) \oplus \bigl(\mathbb C[G] \otimes_{\mathbb C[H]} W\bigr) = \mathrm{Ind}_H^G V \oplus \mathrm{Ind}_H^G W$$
3.3.2. statement of the theorem. We want a collection of subgroups $\mathcal X = \{H\}$ of $G$ with the property that the maps $\mathrm{Ind}_H^G : R(H) \to R(G)$, taken together for all $H \in \mathcal X$, give an epimorphism
$$\sum \mathrm{Ind}_H^G : \bigoplus_{H \in \mathcal X} R(H) \to R(G)$$
This would say that every (effective) character on $G$ is an integer linear combination of characters induced from the subgroups $H \in \mathcal X$. But we will only get this rationally, which is the same as saying that the cokernel is a finite group.

Theorem 3.18 (Artin). Suppose that $\mathcal X$ is a collection of subgroups $H \le G$. Then the following conditions are equivalent.
(1) For every $\sigma \in G$ there is an $H \in \mathcal X$ so that $H$ contains a conjugate of $\sigma$.
(2) Every character on $G$ is a rational linear combination of characters induced from the subgroups $H \in \mathcal X$.

As an example, the collection of cyclic subgroups of $G$ satisfies condition (1) since every element of $G$ is contained in a cyclic subgroup.
3.3.3. example: $D_4$. Take the dihedral group
$$G = D_4 = \{1, \sigma, \sigma^2, \sigma^3, \tau, \sigma\tau, \sigma^2\tau, \sigma^3\tau\}$$
Let $\mathcal X = \{\mathbb Z/4, \langle\tau\rangle, \langle\sigma\tau\rangle\}$. These three subgroups meet all of the conjugacy classes of $D_4$. So, Artin's theorem applies. To find the image of the induction map we start with the character table of $D_4$:

           1  \sigma^2  \sigma  \tau  \sigma\tau
\chi_1     1     1        1      1       1
\chi_2     1     1        1     -1      -1
\chi_3     1     1       -1      1      -1
\chi_4     1     1       -1     -1       1
\chi_5     2    -2        0      0       0

From this we can easily compute the induction-restriction table:

D_4      Z/4 = <\sigma>    <\tau>    <\sigma\tau>
          1  i  -1  -i      +  -        +  -
\chi_1    1  0   0   0      1  0        1  0
\chi_2    1  0   0   0      0  1        0  1
\chi_3    0  0   1   0      1  0        0  1
\chi_4    0  0   1   0      0  1        1  0
\chi_5    0  1   0   1      1  1        1  1

Here $\zeta$ denotes the one dimensional character of a cyclic group of order $n$ which sends the generator to $\zeta$ (which must be an $n$th root of unity).
This $5 \times 8$ matrix $T$ gives the induction map:
$$\mathrm{Ind} : R(\mathbb Z/4) \oplus R(\langle\tau\rangle) \oplus R(\langle\sigma\tau\rangle) \to R(D_4)$$
which, in terms of the bases of irreducible characters, is multiplication by $T$:
$$T : \mathbb Z^4 \oplus \mathbb Z^2 \oplus \mathbb Z^2 = \mathbb Z^8 \to \mathbb Z^5$$
Artin's theorem says that the cokernel of this map is a finite group. To find this group we use integer row and column operations, which change the basis for $\mathbb Z^5$ and $\mathbb Z^8$ respectively, to reduce the matrix $T$ to the form:
$$\begin{pmatrix} 1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&2&0&0&0 \end{pmatrix}$$
This means that the cokernel of the induction map is $\mathbb Z/2$. So, for any representation $V$ of $D_4$, twice the character of $V$ is a sum of virtual characters induced from virtual representations of the three cyclic subgroups in the list $\mathcal X$.
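The reduction to this diagonal form can be verified by computing the invariant factors $d_k/d_{k-1}$, where $d_k$ is the gcd of all $k \times k$ minors of $T$. A brute-force sketch (the matrix is hard-coded from the induction-restriction table above; not part of the notes):

```python
# Invariant factors of the 5x8 induction matrix T for D4 via gcds of minors.
from itertools import combinations
from math import gcd

T = [[1, 0, 0, 0, 1, 0, 1, 0],
     [1, 0, 0, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 0, 0, 1],
     [0, 0, 1, 0, 0, 1, 1, 0],
     [0, 1, 0, 1, 1, 1, 1, 1]]     # rows chi_1..chi_5; cols 1, i, -1, -i, +, -, +, -

def det(M):
    # cofactor expansion along the first row; fine for sizes up to 5
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)) if M[0][j])

def d(k):
    g = 0
    for rows in combinations(range(5), k):
        for cols in combinations(range(8), k):
            g = gcd(g, abs(det([[T[i][j] for j in cols] for i in rows])))
    return g

dk = [d(k) for k in range(1, 6)]
factors = [dk[0]] + [dk[k] // dk[k - 1] for k in range(1, 5)]
print(factors)   # diagonal of the Smith normal form
```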
3.3.4. proof of the theorem. (2) $\Rightarrow$ (1). Let $\sigma \in G$. Then there is an irreducible character $\chi_i$ so that $\chi_i(\sigma) \neq 0$. Since $\chi_i$ is a rational linear combination of induced characters from $H \in \mathcal X$, there must be some $H \in \mathcal X$ and some representation $V$ of $H$ so that $\mathrm{Ind}_H^G \chi_V(\sigma) \neq 0$. By the definition of induced character this implies that some conjugate of $\sigma$ lies in $H$.

(1) $\Rightarrow$ (2). Suppose that (2) is false. Then the set of induced virtual characters spans a subgroup $L$ of $R(G) = \mathbb Z^b$ of rank $a < b$. Let $\alpha_1, \dots, \alpha_a$ be a set of characters induced from elements $H \in \mathcal X$ which span $L$. We can decompose each $\alpha_i$ into an integer linear combination of the irreducible characters $\chi_j$:
$$\alpha_i = \sum n_{ij} \chi_j$$
The numbers $n_{ij}$ form an $a \times b$ matrix which defines a $\mathbb Q$-linear map:
$$(n_{ij}) : \mathbb Q^b \to \mathbb Q^a$$
Since $a < b$ this linear map has a kernel, i.e., there are rational numbers $c_j$, not all zero, so that
$$\sum_j n_{ij} c_j = 0 \quad \forall i$$
Multiplying by the denominators, we may assume the numbers $c_j$ are integers. This gives a nonzero virtual character
$$\sum c_j \chi_j = \chi_W - \chi_{W'}$$
which is orthogonal to all the $\alpha_i$ and therefore to all of $L$:
$$\langle \alpha_i, \chi_W - \chi_{W'}\rangle = \left\langle \alpha_i, \sum c_j \chi_j \right\rangle = \sum_j n_{ij} c_j = 0$$
But $L$ contains all induced characters:
$$\alpha = \mathrm{Ind}_H^G \chi_V$$
for all $H \in \mathcal X$ and all representations $V$ of $H$. So, by Frobenius reciprocity, we have:
$$\langle \alpha, \chi_W - \chi_{W'}\rangle = \langle \mathrm{Ind}_H^G \chi_V, \chi_W - \chi_{W'}\rangle_G = \langle \chi_V, \mathrm{Res}_H^G(\chi_W - \chi_{W'})\rangle_H = 0$$
Since this is true for all representations $V$ of $H$, we must have
$$\mathrm{Res}_H^G(\chi_W - \chi_{W'}) = 0$$
for all $H \in \mathcal X$. This in turn implies that
$$\chi_W(\sigma) = \chi_{W'}(\sigma)$$
for all $\sigma \in H$.
But, for any $\sigma \in G$ there is an $H \in \mathcal X$ which contains a conjugate of $\sigma$. But then
$$\chi_W(\sigma) = \chi_{W'}(\sigma)$$
So, the virtual character $\chi_W - \chi_{W'}$ must be zero, which is a contradiction. This proves the theorem.
Appendix
Homework and answers
Algebra II: Homework
These are the weekly homework assignments and answers. Students were
encouraged to work on them in groups. Some of the problems turned out to
be more difficult than I intended.
Homework
1. Homework 1 (Abelian categories)
2. Homework 2 (Chain complexes)
3. Homework 3 (Ext^1)
4. Homework 4 (Integral closure)
5. Homework 5 (Transcendence degree)
6. Homework 6 (Associated primes)
7. Homework 7 (Semisimplicity)
8. Homework 8 (Group rings)
9. Homework 9 (Character table)
10. Homework 10 (More characters)
Answers to homework
MATH 101B: HOMEWORK
1. Homework 01
The following three problems are due next Thursday (1/25/7).
1.1. Show that the category of finite abelian groups contains no nontrivial projective or injective objects. (Use the Fundamental Theorem: all finite abelian groups are direct sums of cyclic $p$-groups $\mathbb Z/p^n$.)

1.2. Show that abelian categories have pushouts and pullbacks. I.e., given an abelian category $\mathcal A$ and morphisms $f : A \to C$, $g : B \to C$, there exists an object $D$ (called the pullback) with morphisms $\alpha : D \to A$, $\beta : D \to B$ forming a commuting square ($f\alpha = g\beta$) so that for any other object $X$ with maps to $A, B$ forming another commuting square, there exists a unique morphism $X \to D$ making a big commutative diagram. (Draw the diagram.) Hint: There is an exact sequence
$$0 \to D \to A \oplus B \to C$$
1.3. Let $k$ be a field and let $R$ be the polynomial ring $R = k[X]$. Let $Q$ be the $k$ vector space of all sequences:
$$(a_0, a_1, a_2, \dots)$$
where $a_i \in k$, with the action of $X \in R$ given by shifting to the left:
$$X(a_0, a_1, a_2, \dots) = (a_1, a_2, a_3, \dots)$$
Then show that $Q$ is injective. Hint: first prove that any homomorphism $f : A \to Q$ is determined by its first coordinate.
Date: January 18, 2007.
2. Homework 02
The following three problems are due next Thursday (2/2/7). The
strict deadline is 1:30pm Friday.
2.1. Give a precise description of the ring $R$ indicated below and show that it has the property that left $R$-modules are the same as chain complexes of abelian groups with three terms, i.e.,
$$C_2 \xrightarrow{d_2} C_1 \xrightarrow{d_1} C_0$$
where $C_0, C_1, C_2$ are abelian groups and $d_1, d_2$ are homomorphisms so that $d_1 d_2 = 0$, and a homomorphism of $R$-modules is a chain map $f : C \to D$, i.e. it consists of three homomorphisms $f_i : C_i \to D_i$ so that the following diagram commutes.
C_2 ----> C_1 ----> C_0
 |f_2      |f_1      |f_0
 v         v         v
D_2 ----> D_1 ----> D_0
$R$ is a quotient ring:
$$R = \begin{pmatrix} \mathbb Z&\mathbb Z&\mathbb Z\\ 0&\mathbb Z&\mathbb Z\\ 0&0&\mathbb Z \end{pmatrix} \Bigg/ \begin{pmatrix} 0&0&\mathbb Z\\ 0&0&0\\ 0&0&0 \end{pmatrix}$$
[Hint: R is additively free abelian with five generators, three of which
are idempotents e0 , e1 , e2 and Ci = ei C.]
2.2. Describe the chain complex corresponding to the free $R$-module $R^n$ ($n$ finite).
2.3. Assume without proof that the analogous statements hold for right $R$-modules, namely they are cochain complexes
$$C^2 \leftarrow C^1 \leftarrow C^0$$
where $C^i = Ce_i$ (just as $C_i = e_i C$). Give a description (as a chain complex with three terms) of the injective $R$-module $\mathrm{Hom}_{\mathbb Z}(R_R^n, \mathbb Q/\mathbb Z)$.
Date: January 25, 2007.
3. Homework 03
The following problem is due next Thursday (2/8/7). The strict
deadline is 1:30pm Friday.
Compute $\mathrm{Ext}^i_{\mathbb Q[X]}(\mathbb Q[X]/(f), \mathbb Q[X]/(g))$ using both the projective resolution of $\mathbb Q[X]/(f)$ and the injective coresolution of $\mathbb Q[X]/(g)$.
[First take the projective resolution $P_\bullet$ of $\mathbb Q[X]/(f)$. Then
$$\mathrm{Ext}^i_{\mathbb Q[X]}(\mathbb Q[X]/(f), \mathbb Q[X]/(g)) = H^i\bigl(\mathrm{Hom}_{\mathbb Q[X]}(P_\bullet, \mathbb Q[X]/(g))\bigr)$$
Then, find the injective (co)resolution $Q^\bullet$ of $\mathbb Q[X]/(g)$. The Ext groups can also be found by the formula
$$\mathrm{Ext}^i_{\mathbb Q[X]}(\mathbb Q[X]/(f), \mathbb Q[X]/(g)) = H^i\bigl(\mathrm{Hom}_{\mathbb Q[X]}(\mathbb Q[X]/(f), Q^\bullet)\bigr)$$
We don't have time to prove the theorem that says that these two definitions are equivalent. But working out this example might give you an idea of why it is true for $i = 0, 1$.]
Date: February 2, 2007.
4. Homework 04
The following problems are due Thursday (3/1/7). The strict dead
line is 1:30pm Friday.
(1) Show that unique factorization domains (UFDs) are integrally closed.
(2) Show that the integral closure of $\mathbb Z[\sqrt 5]$ (in its fraction field) is $\mathbb Z[\phi]$ where
$$\phi = \frac{1 + \sqrt 5}{2}$$
(3) Combining these we see that $\mathbb Z[\sqrt 5]$ is not a UFD. Find a number which can be written in two ways as a product of irreducible elements. [Look at the proof of problem 1 and see where it fails for the element $\phi$ in problem 2.]
(4) If $E$ is a finite separable extension of $K$ and $\alpha \in E$, show that the trace $\mathrm{Tr}_{E/K}(\alpha)$ is equal to the trace of the $K$-linear endomorphism of $E$ given by multiplication by $\alpha$. [Show that the eigenvalues of this linear transformation are the Galois conjugates of $\alpha$ and each eigenvalue has the same multiplicity.]
Date: February 15, 2007.
5. Homework 05
The following problems are due Thursday (3/8/7). The strict dead
line is 1:30pm Friday.
(1) (Problem #3 on page 374) If $L \subseteq E$ are finitely generated field extensions of $K$ then show that the transcendence degree of $E$ over $K$ is equal to the sum of the transcendence degree of $E$ over $L$ and the transcendence degree of $L$ over $K$.
(2) Let $R = K[X, Y]/(f)$ where $f(X, Y) = (X - a)Y^2 - (X - b)$ for some $a \neq b \in K$. Find a transcendental element $Z$ of $R$ so that $R$ is integral over $K[Z]$. [Use the proof of Noether Normalization.]
Date: March 3, 2007.
6. Homework 06
The following problems are due Thursday (3/22/7). The strict dead
line is 1:30pm Friday.
Assume R, M are both Noetherian.
(1) Show that for any ideal I in R there are only finitely many min
imal primes containing I. [Take a maximal counterexample.]
(2) Suppose that $p$ is a prime ideal and $n > 0$. Let
$$p^{(n)}M := p^n M_p \cap M$$
Show that this is a $p$-primary submodule of $M$.
(3) If $\phi : R \to S$ is a homomorphism of Noetherian rings and $M$ is an $S$-module then show that
$$\mathrm{ass}_R(M) = \phi^*(\mathrm{ass}_S(M))$$
where $\phi^* : \mathrm{Spec}(S) \to \mathrm{Spec}(R)$ is the map induced by $\phi$.
Date: March 15, 2007.
7. Homework 07
The following problems are due Thursday (4/12/7). The strict dead
line is 3pm Friday.
(1) Show that $\mathbb Z[1/p]/\mathbb Z$ is an Artinian $\mathbb Z$-module. [Show that every proper subgroup is finite.]
(2) Show that the center of the ring $\mathrm{Mat}_n(R)$ is isomorphic to the center of $R$. [Show that the center $Z(\mathrm{Mat}_n(R))$ consists of the scalar multiples $aI_n$ of the identity matrix where $a \in Z(R)$.]
(3) Suppose that $M$ is both Artinian and Noetherian. Then show that
(a) $r^n M = 0$ for some $n > 0$ (here $r^n M := r \cdots r M$ with $n$ factors of $r$)
(b) $r^i M/r^{i+1} M$ is f.g. semisimple for all $i \geq 0$.
(4) Prove the converse, i.e., (a) and (b) imply that $M$ is both Artinian and Noetherian.
Date: April 4, 2007.
8. Homework 08
The following problems are due Thursday (4/19/7). The strict dead
line is 1:30pm Friday.
(1) Let $K$ be any field, let $G$ be any finite group. Let $V = K$ be the trivial representation of $G$. Let $E = K[G]$ be the group ring considered as a representation (this is called the regular representation). Find all $G$-homomorphisms
$$f : V \to E$$
(and show that you have a complete list).
(2) If $K$ is the field with two elements and $G$ is the group with two elements then show that $K[G]$ is not semisimple. [Using your answer to question 1, show that you have a short exact sequence $0 \to V \to E \to V \to 0$ which does not split.]
Date: April 15, 2007.
9. Homework 09
One problem:
Calculate the character table of the quaternion group and find the 2-dimensional irreducible representation.
$$Q = \{1, i, j, k, t, it, jt, kt\}$$
where $t = i^2 = j^2 = k^2$ is central and $ij = k = jit$, $jk = i = kjt$, $ki = j = ikt$.
10. Homework 10
This is the last assignment. There is no final exam or any other
work to do for this course! The following problems are due Thursday
(5/3/7). The strict deadline is 1:30pm Friday.
(1) If $V, S$ are irreducible representations of $G$ and $S$ is one-dimensional, then show that $V \otimes S$ is also irreducible.
(2) Show that the irreducible representations of a product $G \times H$ are the tensor products of irreducible representations of $G, H$.
(3) Compute the character table of the alternating group $A_4$ and the induction-restriction table for the pair $(S_4, A_4)$.
Date: April 26, 2007.
Answers to homework
1. Homework 01
The following three problems are due next Thursday (1/25/7).
1.1. Show that the category of finite abelian groups contains no nontrivial projective or injective objects. (Use the Fundamental Theorem: all finite abelian groups are direct sums of cyclic $p$-groups $\mathbb Z/p^n$.)
For any abelian group $A$, $\mathrm{Hom}(\mathbb Z/n, A)$ is isomorphic to the set of all $a \in A$ so that $na = 0$, with the isomorphism given by evaluating functions on a fixed generator of $\mathbb Z/n$. If $n = p^k$ ($p$ prime) and $j \geq 0$ this gives
$$\mathrm{Hom}(\mathbb Z/p^k, \mathbb Z/p^{j+k}) \cong p^j\mathbb Z/p^{j+k}\mathbb Z \cong \mathbb Z/p^k$$
So, the short exact sequence
$$(1.1)\quad 0 \to \mathbb Z/p^k \xrightarrow{\alpha} \mathbb Z/p^{2k} \xrightarrow{\beta} \mathbb Z/p^k \to 0$$
gives the exact sequence
$$0 \to \mathrm{Hom}(\mathbb Z/p^k, \mathbb Z/p^k) \xrightarrow{\alpha_\#} \mathrm{Hom}(\mathbb Z/p^k, \mathbb Z/p^{2k}) \xrightarrow{\beta_\#} \mathrm{Hom}(\mathbb Z/p^k, \mathbb Z/p^k)$$
which is isomorphic to
$$0 \to \mathbb Z/p^k \xrightarrow{\cong} \mathbb Z/p^k \xrightarrow{0} \mathbb Z/p^k$$
Therefore, $\alpha_\#$ is an isomorphism and $\beta_\# = 0$. This means that $\mathbb Z/p^k$ is not projective for any $k > 0$. This implies that there are no nontrivial projectives in the category of finite abelian groups, since any such projective $P$ would have a direct summand of the form $\mathbb Z/p^k$ for $k > 0$ by the fundamental theorem of finite abelian groups, and any direct summand of a projective module is projective. (To see that, suppose that $P = X \oplus Y$ is projective and $g : A \to B$ is an epimorphism. Then any morphism $f : X \to B$ extends to a morphism $f + 0 : P = X \oplus Y \to B$ which lifts to $A$. The restriction of the lifting to $X$ gives a lifting of $f$, making $X$ projective.)
Similarly, if we apply $\mathrm{Hom}(-, \mathbb Z/p^k)$ to the exact sequence (1.1) we get:
$$0 \to \mathrm{Hom}(\mathbb Z/p^k, \mathbb Z/p^k) \xrightarrow{\beta^\#} \mathrm{Hom}(\mathbb Z/p^{2k}, \mathbb Z/p^k) \xrightarrow{\alpha^\#} \mathrm{Hom}(\mathbb Z/p^k, \mathbb Z/p^k)$$
Date: February 3, 2007.
which is isomorphic to
$$0 \to \mathbb Z/p^k \xrightarrow{\cong} \mathbb Z/p^k \xrightarrow{0} \mathbb Z/p^k$$
So, $\alpha^\# = 0$, which implies that $\mathbb Z/p^k$ is not injective for $k > 0$. Therefore there is no nontrivial injective object in the category of finite abelian groups.
1.2. Show that abelian categories have pushouts and pullbacks. I.e., given an abelian category $\mathcal A$ and morphisms $f : A \to C$, $g : B \to C$, there exists an object $D$ (called the pullback) with morphisms $\alpha : D \to A$, $\beta : D \to B$ forming a commuting square ($f\alpha = g\beta$) so that for any other object $X$ with maps to $A, B$ forming another commuting square, there exists a unique morphism $X \to D$ making a big commutative diagram. (Draw the diagram.) Hint: There is an exact sequence
$$0 \to D \to A \oplus B \to C$$
I will use the notation $j_A, j_B$ for the inclusion maps of $A, B$ into $A \oplus B$ and $p_A, p_B$ for the projections to $A, B$. Then $j_A p_A + j_B p_B = \mathrm{id}$ as part of the definition of a direct sum (or as a consequence, as explained in the notes).
Since $A \oplus B$ is the coproduct of $A$ and $B$, the morphisms $f$ and $-g$ induce a morphism
$$h = (f, -g) : A \oplus B \to C$$
so that $h \circ j_A = f$, $h \circ j_B = -g$. Let $D \subseteq A \oplus B$ be the kernel of this morphism. In other words $D$ is the kernel in the exact sequence
$$0 \to D \xrightarrow{k} A \oplus B \xrightarrow{h} C$$
Let $\alpha = p_A \circ k : D \to A$ and $\beta = p_B \circ k : D \to B$, which is written $k = \binom{\alpha}{\beta}$. Then

Claim 1. $f\alpha = g\beta : D \to C$. In other words, $f\alpha - g\beta = 0$.

This is a calculation:
$$f\alpha - g\beta = (h \circ j_A) \circ (p_A \circ k) + (h \circ j_B) \circ (p_B \circ k) = h \circ (j_A p_A + j_B p_B) \circ k = h \circ k = 0$$
This calculation is often written in the matrix notation:
$$h \circ k = (f, -g)\binom{\alpha}{\beta} = f\alpha - g\beta$$
Claim 2. For any object $X$ and any pair of morphisms $\alpha' : X \to A$, $\beta' : X \to B$ so that $f\alpha' = g\beta'$, there is a unique morphism $\gamma : X \to D$ so that $\alpha' = \alpha\gamma$ and $\beta' = \beta\gamma$.
For existence, let φ = (α_0; β_0) : X → A ⊕ B be the product of α_0 and β_0, i.e., the unique morphism so that α_0 = p_A φ and β_0 = p_B φ. Then

h φ = h (j_A p_A + j_B p_B) φ = f α_0 − g β_0 = 0

Therefore, there is a unique morphism γ : X → D so that φ = k γ. Then

α_0 = p_A φ = p_A k γ = α γ

and similarly, β_0 = β γ.
For uniqueness suppose that γ' : X → D is another morphism so that α γ' = α_0 and β γ' = β_0. Then φ' = k γ' is another morphism so that p_A φ' = α_0 and p_B φ' = β_0. By the universal property of the product, φ' = φ, which implies that γ' = γ since k is a monomorphism.
The conclusion is that D is the pullback.
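The construction above can be checked concretely in the abelian category of rational vector spaces, where the pullback of f and g is literally the kernel of h = (f, −g) on A ⊕ B. This is a sketch; the particular maps f and g below are hypothetical choices.

```python
from sympy import Matrix

# Pullback of f : A -> C and g : B -> C in rational vector spaces:
# D = ker(h) where h = (f, -g) : A (+) B -> C, here A = B = Q^2, C = Q^1.
f = Matrix([[1, 0]])     # f : Q^2 -> Q^1  (hypothetical choice)
g = Matrix([[0, 1]])     # g : Q^2 -> Q^1  (hypothetical choice)
h = f.row_join(-g)       # h = (f, -g) : Q^4 -> Q^1

D = h.nullspace()        # basis of the pullback D = ker h
assert len(D) == 3       # dim D = dim A + dim B - rank h

# alpha = p_A k, beta = p_B k: every element of D is a compatible pair,
# i.e., f(alpha(d)) = g(beta(d)).
assert all(f * d[0:2, :] == g * d[2:4, :] for d in D)
```

The same kernel description works verbatim over any abelian category; the matrix computation only illustrates it in the vector-space case.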
If we reverse all the arrows in the above argument (or, equivalently, repeat the same argument in the opposite category A^op) we see that the pushout of a diagram

A --f--> B
|
g
|
v
C

is the cokernel of

(f; −g) : A → B ⊕ C
1.3. Let k be a field and let R be the polynomial ring R = k[X]. Let Q be the k-vector space of all sequences:

(a_0, a_1, a_2, ...)

where a_i ∈ k, with the action of X ∈ R given by shifting to the left:

X(a_0, a_1, a_2, ...) = (a_1, a_2, a_3, ...)

Then show that Q is injective. Hint: first prove that any homomorphism f : A → Q is determined by its first coordinate.
Let p_i : Q → k be the projection to the ith coordinate. This is a linear map (i.e., a morphism of k-modules) but not a morphism of R-modules.
Lemma 1.1. For any R-module A there is an isomorphism

Φ : Hom_R(A, Q) ≅ Hom_k(A, k)

given by sending f : A → Q to Φ(f) = p_0 f.
Proof. f can be reconstructed from g = Φ(f) as follows. For any a ∈ A, let

h(a) = (g(a), g(Xa), g(X^2 a), g(X^3 a), ...)

If a is replaced by Xa then this entire sequence shifts to the left. Therefore h(Xa) = Xh(a), making h a homomorphism of R-modules h : A → Q. To see that f = h, look at the coordinates of f(a):

f(a) = (f_0(a), f_1(a), f_2(a), ...)

g(a) is the first coordinate: g(a) = f_0(a), and the other coordinates are given by shifting to the left and then taking the first coordinate:

f_n(a) = p_0(X^n f(a)) = p_0(f(X^n a)) = g(X^n a) = h_n(a)

Also, if we start with g and take h then Φ(h) = g. So, Φ is a bijection and thus an isomorphism.
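As a sanity check on the lemma, here is a small computation. The module A = Q[X]/(X^3) and the functional g are hypothetical choices; the point is that h(a) = (g(a), g(Xa), g(X^2 a), ...) really is X-equivariant for the left-shift action.

```python
# A = Q[X]/(X^3) with basis 1, X, X^2; elements are coefficient triples.
def X_act(a):            # multiplication by X in A (shifts coefficients)
    return (0, a[0], a[1])

def g(a):                # an arbitrary linear functional on A (hypothetical)
    return 2*a[0] + 3*a[1] - a[2]

def h(a, length=6):      # h(a) = (g(a), g(Xa), g(X^2 a), ...) as in the proof
    seq, cur = [], a
    for _ in range(length):
        seq.append(g(cur))
        cur = X_act(cur)
    return seq

def shift_left(s):       # the action of X on the sequence module Q
    return s[1:] + [0]

a = (1, 5, -2)
assert h(X_act(a))[:5] == shift_left(h(a))[:5]   # h(Xa) = X h(a)
```

Since X acts nilpotently on A, the sequence h(a) is eventually zero, so truncating it is harmless here.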
To show that Q is injective as an R-module, suppose that

(1.2) 0 → A → B → C → 0

is a short exact sequence of R-modules. Then we want to show that the induced sequence

(1.3) 0 → Hom_R(C, Q) → Hom_R(B, Q) → Hom_R(A, Q) → 0

is exact. By composing with p_0 we get the sequence

(1.4) 0 → Hom_k(C, k) → Hom_k(B, k) → Hom_k(A, k) → 0

which is exact since (1.2) is a split exact sequence of k-vector spaces. Therefore the isomorphic sequence (1.3) is also exact, making Q an injective R-module.
2. Homework 02
The following three problems are due next Thursday (2/2/7). The
strict deadline is 1:30pm Friday.
2.1. Give a precise description of the ring R indicated below and show that it has the property that left R-modules are the same as chain complexes of abelian groups with three terms, i.e.,

C_2 --d_2--> C_1 --d_1--> C_0

where C_0, C_1, C_2 are abelian groups and d_1, d_2 are homomorphisms so that d_1 d_2 = 0, and a homomorphism of R-modules is a chain map f : C → D, i.e., it consists of three homomorphisms f_i : C_i → D_i so that the following diagram commutes.

C_2 → C_1 → C_0
 |f_2  |f_1  |f_0
 v     v     v
D_2 → D_1 → D_0
R is a quotient ring:

    [Z Z Z]   [0 0 Z]
R = [0 Z Z] / [0 0 0]
    [0 0 Z]   [0 0 0]

[Hint: R is additively free abelian with five generators, three of which are idempotents e_0, e_1, e_2 and C_i = e_i C.]
R is additively free abelian with five generators e_0, e_1, e_2, α, β, where the first three are the diagonal matrices

e_0 = diag(1, 0, 0), e_1 = diag(0, 1, 0), e_2 = diag(0, 0, 1),

α is the matrix with a 1 in the (0, 1)-entry and 0s elsewhere, and β is the matrix with a 1 in the (1, 2)-entry and zeros elsewhere.
Given an R-module C, the corresponding chain complex is given as follows. Let C_i = e_i C. Since 1 = e_0 + e_1 + e_2 and the e_i are orthogonal idempotents in the sense that e_i e_j = δ_{ij} e_j, we get a decomposition

C = e_0 C ⊕ e_1 C ⊕ e_2 C = C_0 ⊕ C_1 ⊕ C_2
Date: February 4, 2007.
Since α = e_0 α e_1, multiplication by α is zero on C_0 ⊕ C_2 and has image in C_0. So it gives a homomorphism C_1 → C_0. Similarly, multiplication by β gives a homomorphism C_2 → C_1. Since αβ = 0, the composition of these two homomorphisms is zero. Therefore, we have a chain complex structure on C.
Conversely, suppose we have a chain complex C_2 --d--> C_1 --d--> C_0. Then we have an action of R on C = ⊕ C_i by

[a_00 a_01 0   ] [x_0]   [a_00 x_0 + a_01 d(x_1)]
[0    a_11 a_12] [x_1] = [a_11 x_1 + a_12 d(x_2)]
[0    0    a_22] [x_2]   [a_22 x_2]

where we write elements of C in column matrix form. We see that multiplication by e_i is projection onto C_i and multiplication by the matrix α sends (x_0, x_1, x_2) to (d(x_1), 0, 0), i.e., it is projection to C_1 followed by d : C_1 → C_0 followed by inclusion C_0 ⊆ C. Similarly for β.
So, the chain complex corresponding to this Rmodule is the original
chain complex.
Conversely, if we start with an R-module, form the chain complex, and then make the chain complex back into an R-module by the above construction, then we get back the same R-module. This is because d_1 : C_1 → C_0 is multiplication by α restricted to C_1 and d_2 : C_2 → C_1 is multiplication by β restricted to C_2.
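The matrix action above can be sketched numerically. An element of R is recorded as the tuple (a_00, a_01, a_11, a_12, a_22) of its surviving entries; the maps d_1, d_2 below are hypothetical choices with d_1 d_2 = 0, and the assertion checks that (rs)·x = r·(s·x) — associativity holds precisely because the discarded corner entry would act through d_1 d_2 = 0.

```python
import numpy as np

# A chain complex C_2 -> C_1 -> C_0 with d1 @ d2 = 0 (hypothetical example)
d2 = np.array([[1], [0]])          # C_2 = Z^1 -> C_1 = Z^2
d1 = np.array([[0, 1]])            # C_1 = Z^2 -> C_0 = Z^1
assert (d1 @ d2 == 0).all()

def act(r, x):                     # the R-action on C = C_0 (+) C_1 (+) C_2
    a00, a01, a11, a12, a22 = r
    x0, x1, x2 = x
    return (a00*x0 + a01*(d1 @ x1), a11*x1 + a12*(d2 @ x2), a22*x2)

def mult(r, s):                    # product in R; the (0,2)-corner is discarded
    return (r[0]*s[0], r[0]*s[1] + r[1]*s[2],
            r[2]*s[2], r[2]*s[3] + r[3]*s[4], r[4]*s[4])

x = (np.array([2]), np.array([1, -1]), np.array([3]))
r, s = (1, 2, 0, 5, -1), (4, 1, 2, 0, 3)
for u, v in zip(act(mult(r, s), x), act(r, act(s, x))):
    assert (u == v).all()          # (rs)x = r(sx) uses d1 d2 = 0
```

Expanding r·(s·x) by hand produces one extra term r_01 s_12 d_1 d_2 x_2, which vanishes exactly when d_1 d_2 = 0; this is why the quotient by the corner is needed.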
2.2. Describe the chain complex corresponding to the free R-module R^n (n finite).
First take n = 1. Then R is additively free with five generators: e_0, e_1, e_2, α, β. Multiplying on the left with e_0 kills all but e_0, α. So, e_0 R ≅ Z^2 with free generators e_0, α. Similarly, e_1 R ≅ Z^2 with free generators e_1, β, and e_2 R ≅ Z with free generator e_2. So, the chain complex is:

Z --d_2--> Z^2 --d_1--> Z^2

As we already discussed, d_2 is left multiplication by β, which sends the generator e_2 of C_2 = Z to the second generator β of C_1 = Z^2. And d_1 is left multiplication by α, which sends the first generator e_1 of C_1 to the second generator α of C_0 = Z^2 and sends the second generator β of C_1 to zero.
If we take the direct sum of n copies of this we get the chain complex

Z^n --d_2--> Z^{2n} --d_1--> Z^{2n}

where d_2 is the inclusion into the second factor of Z^{2n} = Z^n ⊕ Z^n and d_1 is the projection to the first factor of C_1 followed by inclusion into the second factor of C_0. In matrix notation:

d_2 = [0  ]    d_1 = [0   0]
      [I_n]          [I_n 0]
2.3. Assume without proof that the analogous statements hold for right R-modules, namely they are cochain complexes

C^0 → C^1 → C^2

where C^i = C e_i (just as C_i = e_i C). Give a description (as a chain complex with three terms) of the injective R-module Hom_Z(R_R^n, Q/Z).
The right R-module R is the cochain complex with C^2 = R e_2 ≅ Z^2 with free generators e_2, β, C^1 = R e_1 ≅ Z^2 with free generators e_1, α, and C^0 = R e_0 ≅ Z with generator e_0. The map d^1 : C^1 → C^2, being right multiplication by β, is an isomorphism of the first factor of C^1 onto the second factor of C^2, and d^0 : C^0 → C^1 is an isomorphism of C^0 onto the second factor of C^1.
If we hom into Q/Z we get

(Q/Z)^2 --d_2--> (Q/Z)^2 --d_1--> Q/Z

where d_2 is an isomorphism from the second factor of C_2 onto the first factor of C_1 and d_1 is an isomorphism from the second factor of C_1 onto C_0. Hom_Z(R_R^n, Q/Z) is a direct sum of n copies of this chain complex. So, it is

(Q/Z)^{2n} --d_2--> (Q/Z)^{2n} --d_1--> (Q/Z)^n

where

d_2 = [0 I_n]    d_1 = [0 I_n]
      [0 0  ]
3. Homework 03 answers
The following problem was due Thursday (2/8/7).
Compute Ext^i_{Q[X]}(Q[X]/(f), Q[X]/(g)) using both the projective resolution of Q[X]/(f) and the injective coresolution of Q[X]/(g).
[First take the projective resolution P_* of Q[X]/(f). Then

Ext^i_{Q[X]}(Q[X]/(f), Q[X]/(g)) ≅ H^i(Hom_{Q[X]}(P_*, Q[X]/(g)))

Then, find the injective (co)resolution Q^* of Q[X]/(g). The Ext groups can also be found by the formula

Ext^i_{Q[X]}(Q[X]/(f), Q[X]/(g)) ≅ H^i(Hom_{Q[X]}(Q[X]/(f), Q^*))

We don't have time to prove the theorem that says that these two definitions are equivalent. But working out this example might give you an idea of why it is true for i = 0, 1.]
Actually, I did end up proving that theorem.
3.1. Projective resolution. Since Q[X] is a domain, the projective resolution of Q[X]/(f) is given by

0 → Q[X] --·f--> Q[X] → Q[X]/(f) → 0

where ·f is multiplication by f. Since Hom_R(R, M) ≅ M, we get:

Hom_{Q[X]}(Q[X], Q[X]/(g)) ≅ Q[X]/(g)

The isomorphism x ↦ φ_x is given by φ_x(1) = x. The map induced by ·f is precomposition with ·f, which corresponds to multiplication by f:

(·f)^#(φ_x)(1) = φ_x(f · 1) = fx

So, Ext^1_{Q[X]}(Q[X]/(f), Q[X]/(g)) is the cokernel of the map

Q[X]/(g) → Q[X]/(g)

given by multiplication by f. This is Q[X] modulo the ideal I consisting of all Q[X]-linear combinations of f and g. In other words, I = (f, g). But Q[X] is a PID. So this ideal is generated by one element, the greatest common divisor of f and g. Call this h. Then

Ext^1_{Q[X]}(Q[X]/(f), Q[X]/(g)) ≅ Q[X]/(h)
Date: February 27, 2007.
Ext^0_{Q[X]}(Q[X]/(f), Q[X]/(g)) = Hom_{Q[X]}(Q[X]/(f), Q[X]/(g)) is the kernel of the map

·f : Q[X]/(g) → Q[X]/(g)

This is K/(g) where K is the kernel of the composite map

Q[X] → Q[X]/(g) --·f--> Q[X]/(g)

But this means that K is the set of all polynomials which, when multiplied by f, become divisible by g. To figure out what this is, write f and g as products f = ha, g = hb where h = gcd(f, g). (So a, b are relatively prime.) kf = kha is divisible by g = hb iff k is divisible by b. Therefore, K = (b) and

Ext^0_{Q[X]}(Q[X]/(f), Q[X]/(g)) ≅ K/(g) ≅ Q[X]/(h)

where the isomorphism Q[X]/(h) ≅ K/(g) is given by multiplication by b.
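Since both Ext groups come out to Q[X]/(h) with h = gcd(f, g), the computation reduces to a polynomial gcd. A small sympy check, with f and g hypothetical choices:

```python
from sympy import symbols, gcd, Poly, expand

X = symbols('X')
f = (X - 1)*(X - 2)      # hypothetical choice of f
g = (X - 1)*(X - 3)      # hypothetical choice of g

h = gcd(f, g)            # Ext^0 and Ext^1 are both Q[X]/(h)
assert expand(h - (X - 1)) == 0

# dim_Q Ext^1 = dim_Q Q[X]/(h) = deg h
assert Poly(h, X).degree() == 1
```

When f and g are relatively prime, h = 1 and both Ext groups vanish, as the formula predicts.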
3.2. Injective resolution. To get an injective resolution of Q[X]/(g) we first need a lemma:

Lemma 3.1. The module Q[X]/(g) is isomorphic to its dual:

(Q[X]/(g))^ := Hom_Z(Q[X]/(g), Q) ≅ Q[X]/(g)

Suppose that this is true. Then the short exact sequence

(3.1) 0 → Q[X] --·g--> Q[X] → Q[X]/(g) → 0

when dualized gives an injective coresolution of Q[X]/(g) ≅ (Q[X]/(g))^:

(3.2) 0 → (Q[X]/(g))^ → Q[X]^ --·g--> Q[X]^ → 0

This is exact since (3.1) splits over Q. The injective module Q[X]^ consists of infinite sequences of rational numbers

(a_0, a_1, a_2, ...)

where X acts by moving the sequence to the left:

X(a_0, a_1, ...) = (a_1, a_2, ...)
From the previous homework we know that

Hom_{Q[X]}(Q[X]/(f), Q[X]^) ≅ Hom_Z(Q[X]/(f), Q)

which, by the lemma, is isomorphic to Q[X]/(f).
When we apply Hom_{Q[X]}(Q[X]/(f), −) to the injective coresolution (3.2) we get

Q[X]/(f) --·g--> Q[X]/(f)
which has cokernel

Ext^1_{Q[X]}(Q[X]/(f), Q[X]/(g)) = Q[X]/(f, g) = Q[X]/(h)

where h is the greatest common divisor of f and g, and kernel

Ext^0_{Q[X]}(Q[X]/(f), Q[X]/(g)) = (a)/(f) ≅ Q[X]/(h)

where a = f/h.
Proof of the lemma. Suppose first that g is irreducible of degree n. Then M = Q[X]/(g) is n-dimensional over Q and is annihilated by multiplication by g. Then M^ is also n-dimensional and is annihilated by multiplication by g. So, M^ ≅ M. Next suppose that g is a power g = p^k of an irreducible polynomial p of degree n. Then M is characterized by the fact that it is nk-dimensional and is annihilated by p^k but not by p^{k−1}. But M^ also has this property. So, M^ ≅ M.
Finally suppose that g = ∏ p_i^{k_i} where the p_i are distinct irreducible monic polynomials. Then M = ⊕ Q[X]/(p_i^{k_i}) and

M^ ≅ ⊕ (Q[X]/(p_i^{k_i}))^ ≅ ⊕ Q[X]/(p_i^{k_i}) = M
4. Homework 04 answers
The following problems were due Thursday (3/1/7).
1. Show that unique factorization domains (UFDs) are integrally closed.
Suppose that R is a UFD with quotient field Q_R. Then we want to show that any element x = a/b ∈ Q_R which is integral over R lies in R. By writing a, b as products of primes, we may assume that they are relatively prime. If x is integral over R then there are elements c_1, ..., c_n ∈ R so that

x^n + c_1 x^{n−1} + ... + c_n = 0

Multiplying by b^n gives

a^n + c_1 a^{n−1} b + ... + c_n b^n = 0

So, b divides a^n. This is impossible unless b is a unit, in which case a/b ∈ R.
2. Show that the integral closure of Z[√5] (in its fraction field) is Z[ω] where

ω = (1 + √5)/2

Let R be the integral closure of Z[√5] in its fraction field Q[√5]. Then R contains ω, and R is integral over Z. So it suffices to show that any a + b√5 ∈ Q[√5] which is integral over Z lies in Z[ω].
You can prove this using the trace Tr : R → Z, which is given by

Tr(a + b√5) = (a + b√5) + (a − b√5) = 2a

2a ∈ Z means a = n/2 for some n ∈ Z, and the norm N : R → Z is given by

N(a + b√5) = (a + b√5)(a − b√5) = a^2 − 5b^2

Using the unique factorization of b we see that this is an integer iff b = m/2 where m ∈ n + 2Z. So, a + b√5 lies in either Z[√5] or ω + Z[√5]; in either case a + b√5 lies in Z[ω]. So, R = Z[ω] is the integral closure of Z, and thus of Z[√5], in Q[√5].
Date: March 18, 2007.
3. Combining these we see that Z[√5] is not a UFD. Find a number which can be written in two ways as a product of irreducible elements. [Look at the proof of problem 1 and see where it fails for the element in problem 2.]
The proof in problem 1 fails for ω = (1 + √5)/2 because, although the numerator 1 + √5 and the denominator 2 are relatively prime, the denominator divides the square of the numerator:

(1 + √5)^2 = 6 + 2√5 = 2(3 + √5)

This gives two factorizations of the same number 6 + 2√5 into irreducible factors. To show that 1 + √5, 2, 3 + √5 are irreducible, note that their norms are −4, 4, 4 respectively.
Lemma 4.1. Any element of Z[√5] with norm ±4 is irreducible.
Proof. This follows from the following two statements:
(1) There is no element of Z[√5] with norm ±2.
(2) Any element of Z[√5] with norm ±1 is a unit.
The first follows immediately from the fact that 2 and 3 are not squares modulo 5. The second follows from the fact that if the norm of a + b√5 is ±1 then its inverse is given by ±(a − b√5).
Given these two facts, any element of Z[√5] with norm ±4 must be irreducible since, if it factors, one factor must have norm ±1, because the only way that 4 factors as a product of positive integers is 1·4 and 2·2.
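All the arithmetic in this problem is small enough to verify by machine. The sketch below models a + b√5 as the pair (a, b), checks the two factorizations of 6 + 2√5, the norms of the factors, and the mod-5 obstruction to norm ±2:

```python
def norm(a, b):                     # N(a + b*sqrt5) = a^2 - 5b^2
    return a*a - 5*b*b

def mult(x, y):                     # (a + b*sqrt5)(c + d*sqrt5)
    return (x[0]*y[0] + 5*x[1]*y[1], x[0]*y[1] + x[1]*y[0])

# (1 + sqrt5)^2 = 6 + 2*sqrt5 = 2*(3 + sqrt5): two factorizations
assert mult((1, 1), (1, 1)) == (6, 2)
assert mult((2, 0), (3, 1)) == (6, 2)

# The factors have norms -4, 4, 4, so they are irreducible by Lemma 4.1 ...
assert [norm(1, 1), norm(2, 0), norm(3, 1)] == [-4, 4, 4]

# ... since a^2 - 5b^2 = +/-2 would force a^2 to be 2 or 3 mod 5,
# but the squares mod 5 are only {0, 1, 4}.
assert {a*a % 5 for a in range(5)} == {0, 1, 4}
```

The last assertion is exactly the "2 and 3 are not squares modulo 5" step in the proof of the lemma.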
4. If E is a finite separable extension of K and α ∈ E, show that the trace Tr_{E/K}(α) is equal to the trace of the K-linear endomorphism of E given by multiplication by α. [Show that the eigenvalues of this linear transformation are the Galois conjugates of α and each eigenvalue has the same multiplicity.]
The first proof ignores the hint. For any α ∈ E take the intermediate field K(α). Then n = [K(α) : K] is the degree of the minimal polynomial of α, which is given by

f(X) = ∏(X − α_i) = X^n + c_1 X^{n−1} + ... + c_n

where c_1 = −∑ α_i and c_n = (−1)^n ∏ α_i. A basis for K(α) over K is given by 1, α, α^2, ..., α^{n−1}. Let x_1, ..., x_m be a basis for E as a vector space over K(α). Then b_{ij} = α^i x_j, i = 0, ..., n−1, j = 1, ..., m, forms a basis for E over K. The linear function μ_α : E → E given by
multiplication by α takes b_{ij} to b_{i+1,j} if i ≠ n−1 and

μ_α(b_{n−1,j}) = α^n x_j = −∑_{k=1}^n c_k α^{n−k} x_j = −∑_{k=1}^n c_k b_{n−k,j}

So, the trace of μ_α is

Tr(μ_α) = ∑_{j=1}^m (−c_1) = m ∑ α_i

We need to show that this is equal to Tr_{E/K}(α). Let σ_i : K(α) → K̄, i = 1, ..., n, be the n distinct embeddings of K(α) over K. Then σ_i(α) = α_i are the Galois conjugates of α. Each embedding σ_i extends in exactly m ways to an embedding σ_{ij} : E → K̄ over K, since E is a separable extension of K(α). Therefore,

Tr_{E/K}(α) = ∑ σ_{ij}(α) = m ∑ σ_i(α) = m ∑ α_i

which is the same as the trace of μ_α.
Now, a proof using the hint. Let g(X) ∈ K[X] be the minimal polynomial of the K-linear endomorphism μ_α : E → E. Since g(α) = g(μ_α)(1) = 0 it follows that f(X) | g(X). Conversely, g(X) | f(X) since f(μ_α) is multiplication by f(α) = 0. Therefore g(X) = f(X) (assuming they are chosen to be monic). So, deg(g) = deg(f) = n = [K(α) : K]. Choose a basis x_1, ..., x_m for E over K(α). Then

E = K(α)x_1 ⊕ K(α)x_2 ⊕ ... ⊕ K(α)x_m

and each summand K(α)x_j is invariant under the endomorphism μ_α. By the same argument as above, the minimal polynomial of μ_α restricted to each summand is f(X). So, the characteristic polynomial of μ_α is f(X)^m. The eigenvalues of μ_α are the roots of the characteristic polynomial, which are the roots of f(X), each with multiplicity m. Therefore, the trace of the matrix of μ_α is m ∑ α_i = Tr_{E/K}(α).
This proof uses the following two facts about the characteristic polynomial of a linear endomorphism φ of a vector space V:
(1) If V = V_1 ⊕ V_2 where V_1, V_2 are both invariant under φ, then the characteristic polynomial of φ is the product of the characteristic polynomials of φ restricted to the V_i.
(2) The minimal polynomial of φ divides the characteristic polynomial of φ. In particular, they are equal if they have the same degree.
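A concrete instance of the statement (the field E and element α below are hypothetical choices): take E = Q(√2, √3), K = Q, and α = 1 + √2. Here n = 2, m = 2, the conjugates of α are 1 ± √2, so Tr_{E/K}(α) = 2((1+√2) + (1−√2)) = 4, and the characteristic polynomial of μ_α should be f(X)^2 with f = X^2 − 2X − 1.

```python
from sympy import Matrix, symbols, expand

# Multiplication by alpha = 1 + sqrt2 on E = Q(sqrt2, sqrt3),
# columns = images of the basis (1, sqrt2, sqrt3, sqrt6):
M = Matrix([
    [1, 2, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 2],
    [0, 0, 1, 1],
])

# trace of mu_alpha = m * (sum of conjugates) = 2 * 2 = 4 = Tr_{E/Q}(alpha)
assert M.trace() == 4

# characteristic polynomial = f(X)^m with f = X^2 - 2X - 1, m = 2
lam = symbols('lam')
assert expand(M.charpoly(lam).as_expr() - ((lam - 1)**2 - 2)**2) == 0
```

The block-diagonal shape of M reflects the decomposition E = K(α)·1 ⊕ K(α)·√3 into μ_α-invariant summands used in the second proof.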
5. Homework 05 answers
The following problems were due Thursday (3/8/7).
(1) (Problem #3 on page 374) If K ⊆ L ⊆ E are finitely generated field extensions of K, then show that the transcendence degree of E over K is equal to the sum of the transcendence degree of E over L and the transcendence degree of L over K.
Lang says: Let x_1, ..., x_n be a transcendence basis for L over K and let y_1, ..., y_m be a transcendence basis for E over L. Then show that x_1, ..., x_n, y_1, ..., y_m form a transcendence basis for E over K.
So, we need to show
(a) x_1, ..., x_n, y_1, ..., y_m are algebraically independent, and
(b) every element of E is algebraic over K(x_1, ..., y_m).
Proof of (a): Suppose that f(X, Y) ∈ K[X, Y] is such that

f(x_1, ..., x_n, y_1, ..., y_m) = 0

Rewrite f(X, Y) as

f(X, Y) = ∑_α f_α(X) Y^α

where f_α(X) ∈ K[X] for all multi-indices α. If we substitute X_i = x_i then each f_α(x) ∈ K[x] ⊆ L. So, f(x, Y) is an element of L[Y]. Since the y_j are algebraically independent over L and f(x, y) = 0, it must be that f_α(x) = 0 for all α. But the x_i are algebraically independent over K. So, f_α(X) = 0 for all α. So, f(X, Y) is the zero polynomial, and the x_i and y_j together form an algebraically independent set.
Proof of (b): We have to show that E is algebraic over K(x, y). So, take any element a ∈ E. Then a is algebraic over L(y) = L(y_1, ..., y_m). So, there is a polynomial g(Z) with coefficients in L(y) so that g(a) = 0. But each of the coefficients of g is a rational function in y_1, ..., y_m with coefficients in L. The set of these coefficients of L that occur forms a finite set S ⊆ L so that a is algebraic over K(x, S, y). But each element of S is algebraic over K(x) and therefore also over K(x, y). So, a is also algebraic over K(x, y) as claimed.
(2) Let R = K[X, Y]/(f) where f(X, Y) = (X − a)Y^2 − (X − b) for some a ≠ b ∈ K. Find a transcendental element Z of R so that R is integral over K[Z]. [Use the proof of Noether normalization.]
The proof of Noether normalization says that Z = X − Y^m for some large m. But m = 1 is large enough. Then Z = X − Y and X = Z + Y. We see that K[X, Y] = K[Z, Y]. The polynomial f, when written in terms of Y and Z, becomes:

f(Z, Y) = (Z + Y − a)Y^2 − (Z + Y − b) = Y^3 + (Z − a)Y^2 − Y − (Z − b)

Since this is a monic polynomial in Y with coefficients in K[Z] we see two things:
(a) Y is integral over K[Z].
(b) R = K[Z, Y]/(f) is a free module over K[Z] of rank 3.
This implies that R is infinite dimensional as a vector space over K. Therefore, it cannot be an algebraic extension of K. So, Z is transcendental and R is integral over K[Z].
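The substitution X = Z + Y and the resulting monic cubic can be verified symbolically:

```python
from sympy import symbols, expand

Y, Z, a, b = symbols('Y Z a b')

# Substitute X = Z + Y in f = (X - a)*Y^2 - (X - b):
f_sub = (Z + Y - a)*Y**2 - (Z + Y - b)

# Monic cubic in Y over K[Z], so K[Z,Y]/(f) is free of rank 3 over K[Z]:
target = Y**3 + (Z - a)*Y**2 - Y - (Z - b)
assert expand(f_sub - target) == 0
```

The leading coefficient in Y being 1 (rather than X − a) is exactly what the change of variables buys: it makes Y integral over K[Z].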
6. Homework 06 answers
The following problems were due Thursday (3/22/7).
Assume R, M are both Noetherian.
(1) Show that for any ideal I in R there are only finitely many minimal primes containing I. [Take a maximal counterexample.]
Let M = R/I. Then I ⊆ p iff M_p ≠ 0. (Proof: M_p = 0 iff 1 = 0 in M_p iff there is an s ∉ p with s · 1 = 0 in M, i.e., s ∈ I.)
Therefore, the minimal primes containing I are exactly the minimal supporting primes of R/I, which are the same as the minimal associated primes. We proved that there are only finitely many of these.
(2) Suppose that p is a prime and n > 0. Let

p^{(n)}M := p^n M_p ∩ M

the inverse image of p^n M_p under the localization map M → M_p. Show that this is a p-primary submodule of M.
Let N = p^{(n)}M. Then, by definition, N is the inverse image of p^n M_p in M. So, M/N is isomorphic to an R-submodule of M_p/p^n M_p. So, it suffices to show that p is the only associated prime of M_p/p^n M_p.
On M_p each element s ∈ S = R∖p acts as an isomorphism (with inverse s^{−1} ∈ R_p). This means that on M_p, and more generally on any R_p-module considered as an R-module, the annihilator of any nonzero element will be disjoint from S, i.e., ann_R(x) ⊆ p. On the other hand, p annihilates the entire module p^k M_p/p^{k+1} M_p. So, every nonzero element x will have ann_R(x) = p. This means that p is the only associated prime (in R) of Q_k = p^k M_p/p^{k+1} M_p. But Q = M_p/p^n M_p is an iterated extension of the quotients Q_k. So, p is the only associated prime of Q. Since M/N is a submodule of Q, p is also the only prime associated to M/N. So, N is p-primary.
(3) If φ : R → S is a homomorphism of Noetherian rings and M is an S-module, then show that

ass_R(M) = φ^*(ass_S(M))

where φ^* : Spec(S) → Spec(R) is the map induced by φ (φ^*(p) = φ^{−1}(p)).
Suppose first that M has only one associated prime in S, call it p. Then p is the set of zero divisors of M, and every zero divisor acts nilpotently on M in the sense that a power of it annihilates M. Then q = φ^*(p) is the set of zero divisors of M in R, and each element of q acts nilpotently on M. This means that ass_R(M) = {q}. So, the theorem is true in this case.
Let 0 = ⋂ Q_i be a primary decomposition of 0 ⊆ M as an S-submodule. Suppose that Q_i is p_i-primary. Then

ass_S(M/Q_i) = {p_i}, so ass_R(M/Q_i) = {q_i}

where q_i = φ^*(p_i) by the argument above. This implies that 0 = ⋂ Q_i is also a primary decomposition of 0 as an R-submodule of M. So,

ass_R(M) = ⋃ ass_R(M/Q_i) = {q_1, ..., q_n} = φ^*(ass_S(M))
7. Homework 07 answers
The following problems were due Thursday (4/12/7).
(1) Show that Z[1/p]/Z is an Artinian Z-module. [Show that every proper subgroup is finite.]
The logic of the argument is as follows. This abelian group A = Z[1/p]/Z has a sequence of elements a_1, a_2, ... satisfying the following two conditions:
(a) The sequence generates the group.
(b) If a subgroup B ⊆ A does not contain a_{n+1} then it has at most p^n elements.
Together, these two statements imply that any proper subgroup of A is finite. So, the DCC must hold for subgroups and A is Artinian.
(a) The generating elements are a_n = 1/p^n. By definition, any element of Z[1/p] is an integer linear combination of these elements. So, they generate Z[1/p] and, consequently, they also generate A.
(b) Suppose that B is a subgroup of A which does not contain 1/p^{n+1}. Then the highest denominator in the elements of B is p^n. So, the only possible elements of B are i/p^n + Z where 0 ≤ i < p^n. So, |B| ≤ p^n.
(2) Show that the center of the ring Mat_n(R) is isomorphic to the center of R. [Show that the center Z(Mat_n(R)) consists of the scalar multiples aI_n of the identity matrix where a ∈ Z(R).]
Let Z = Z(Mat_n(R)) be the center of Mat_n(R). Let x_{ij} be the matrix with 1 in the (i, j) position and zero everywhere else. Then any matrix in Mat_n(R) can be written as A = ∑ a_{ij} x_{ij} where a_{ij} ∈ R. If A ∈ Z then, e.g.,

x_{12} A = ∑_{j=1}^n a_{2j} x_{1j} = A x_{12} = ∑_{i=1}^n a_{i1} x_{i2}

Comparing the coefficients of x_{12} on both sides we see that a_{22} = a_{11}. For j ≠ 2 the RHS has no x_{1j} terms. So, a_{2j} = 0. Changing 1, 2 to other indices we see that a_{ij} = 0 if i ≠ j and the diagonal terms a_{ii} must all be equal.
To see that these diagonal entries must lie in the center of R, take any r ∈ R. Then

(r x_{11}) A = r a_{11} x_{11} = A (r x_{11}) = a_{11} r x_{11}

So, r a_{11} = a_{11} r.
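For a small finite coefficient ring the claim can be verified exhaustively. The sketch below brute-forces the center of Mat_2(Z/3) (here R = Z/3 is commutative, so Z(R) = R, and the center should be exactly the 3 scalar matrices):

```python
import itertools
import numpy as np

p = 3
mats = [np.array(m, dtype=int).reshape(2, 2)
        for m in itertools.product(range(p), repeat=4)]

# A is central iff AB = BA mod p for every B in Mat_2(Z/p).
center = [A for A in mats
          if all(((A @ B - B @ A) % p == 0).all() for B in mats)]

assert len(center) == p          # exactly the scalar matrices 0, I, 2I
assert all(A[0, 1] == 0 and A[1, 0] == 0 and A[0, 0] == A[1, 1]
           for A in center)
```

The two assertions mirror the two halves of the proof: the off-diagonal entries vanish and the diagonal entries agree.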
(3) Suppose that M is both Artinian and Noetherian. Then show that
(a) r^n M = 0 for some n > 0 (here rM denotes the radical of M and r^n M := r(r^{n−1}M)).
Since M is Noetherian, it has at least one maximal submodule (assuming M ≠ 0). Therefore, the radical of M is a proper submodule. Repeating this process we get a descending sequence of submodules

M ⊋ rM ⊋ r^2 M ⊋ ...

Since M is Artinian, the sequence stops: r^n M = r^{n+1} M. But this can happen only if r^n M = 0, since the radical of a nonzero Noetherian module is a proper submodule.
(b) r^i M/r^{i+1} M is f.g. semisimple for all i ≥ 0.
By the finite intersection lemma, rM = ⋂ N_i is a finite intersection of maximal submodules N_i. So, M/rM is a submodule of the semisimple module ⊕ M/N_i, making it semisimple. Similarly, each of the other subquotients r^i M/r^{i+1} M is semisimple.
(4) Prove the converse, i.e., (a) and (b) imply that M is both Artinian and Noetherian.
Suppose that n = 1. Then rM = 0 and M = M/rM is semisimple by assumption. But we know that semisimple modules are both Artinian and Noetherian.
Now suppose that n ≥ 2. We may assume by induction that rM is Artinian and Noetherian. We also know that M/rM satisfies both chain conditions. Let N_1, N_2, ... be a sequence of submodules which is either increasing or decreasing. Then N_i ∩ rM must eventually become stationary. Similarly, the image of N_i in M/rM must also become stationary. Using the 5-lemma on the following diagram (just as we did for Noetherian modules over commutative rings), we see that the inclusion N_i ↪ N_j is an isomorphism, and thus an equality, for sufficiently large i, j. (i < j or i > j depending on whether N_i is an ascending or descending chain.)

0 → N_i ∩ rM → N_i → (N_i + rM)/rM → 0
      |≅         |          |≅
      v          v          v
0 → N_j ∩ rM → N_j → (N_j + rM)/rM → 0
8. Homework 08 answers
The following problems were due Thursday (4/19/7).
8.1. Let K be any field and let G be any finite group. Let V = K be the trivial representation of G. Let E = K[G] be the group ring considered as a representation (this is called the regular representation). Find all G-homomorphisms

f : V → E

(and show that you have a complete list).
Let N ∈ K[G] denote the sum of all the elements of the group:

N = ∑_{σ∈G} σ

Then N is clearly invariant under multiplication by elements of G:

τN = N for all τ ∈ G

For any a ∈ K let f_a : V → E be given by f_a(x) = axN. Then this is a linear map which is G-equivariant:

f_a(τ · x) = f_a(x) = axN = τ(axN) = τ · f_a(x)

To show that the f_a are all the homomorphisms V → E, let f : V → E be a homomorphism of G-modules. Let

f(1) = ∑_σ a_σ σ

Then

τ f(1) = f(τ · 1) = f(1) = ∑_σ a_σ σ

Comparing the coefficients of τσ we see that a_σ = a_{τσ} for all σ, τ ∈ G. Taking σ = 1 we see that a_τ = a_1 for all τ, i.e., the coefficients are all the same. So, f(1) = aN for some a ∈ K. But then f(x) = axN. So, f = f_a.
8.2. If K is the field with two elements and G is the group with two elements, then show that K[G] is not semisimple. [Using your answer to question 1, show that you have a short exact sequence 0 → V → E → V → 0 which does not split.]
By problem 1 there are only two homomorphisms V → E, namely f_0 = 0 and f_1 : V → E, which has image V_0 = {0, N}. The second point is that all one-dimensional representations are isomorphic to the trivial representation. This is because Aut_K(K) = K^× has only one element. If E were semisimple, then E = V_0 ⊕ W where W ⊆ E is another 1-dimensional submodule of E. But then W ≅ V and we would get another homomorphism V → E, contradicting problem 1. So, E is not a semisimple module. So, K[G] is not a semisimple ring.
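The failure of semisimplicity here is small enough to compute directly. Writing an element a·1 + b·s of F_2[Z/2] as the pair (a, b), the sketch below checks that N = 1 + s is nilpotent and that {0, N} is the set of all G-fixed elements, so V_0 is the only one-dimensional submodule and the sequence cannot split:

```python
from itertools import product

# K[G] for K = F_2, G = {1, s}: elements a*1 + b*s as pairs (a, b) mod 2.
def mul(x, y):
    return ((x[0]*y[0] + x[1]*y[1]) % 2, (x[0]*y[1] + x[1]*y[0]) % 2)

N = (1, 1)                       # N = 1 + s
assert mul(N, N) == (0, 0)       # N^2 = 2 + 2s = 0: a nonzero nilpotent

# s acts by swapping coordinates; the fixed elements are 0 and N only,
# so V_0 = {0, N} is the unique 1-dimensional submodule.
s = (0, 1)
fixed = [x for x in product(range(2), repeat=2) if mul(s, x) == x]
assert sorted(fixed) == [(0, 0), (1, 1)]
```

A nonzero nilpotent element is itself enough to rule out semisimplicity, since a semisimple ring has trivial radical.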
9. Homework 09 answers
The following problems were due Thursday (4/26/7).
9.1. Calculate the character table of the quaternion group Q and find the 2-dimensional irreducible representation.
If we mod out the central element t = −1 we get Q/⟨t⟩ ≅ Z/2 × Z/2 with character table

       1    i    j    k
χ_1    1    1    1    1
χ_2    1    1   −1   −1
χ_3    1   −1    1   −1
χ_4    1   −1   −1    1

Pulling this back to Q gives:

       1    i    j    k    t
χ_1    1    1    1    1    1
χ_2    1    1   −1   −1    1
χ_3    1   −1    1   −1    1
χ_4    1   −1   −1    1    1
χ_5    2    0    0    0   −2
Since characters determine the representation, the unique 2-dimensional irreducible representation is any representation with character χ_5. The module is the quaternions H viewed as a right C-module. The units ±1, ±i, ±j, ±k in H form a group isomorphic to the quaternion group Q, and left multiplication by these elements commutes with right multiplication by elements of C and is therefore C-linear. To find matrices for the elements of Q we choose a basis for H, say v_1 = 1, v_2 = j. Then the matrices for i, j, k are given by:

ρ(i) = [i  0]   ρ(j) = [0 −1]   ρ(k) = [ 0 −i]
       [0 −i]          [1  0]          [−i  0]

For example, iv_1 = v_1 i and iv_2 = v_2(−i), making ρ(i) = diag(i, −i). Taking traces we see that this is the representation with character χ_5. So, it is the one we want.
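The matrices above can be checked numerically: they should satisfy the quaternion relations and have the traces prescribed by χ_5.

```python
import numpy as np

one_i = 1j
I2 = np.eye(2)
rho = {
    'i': np.array([[one_i, 0], [0, -one_i]]),
    'j': np.array([[0, -1], [1, 0]], dtype=complex),
    'k': np.array([[0, -one_i], [-one_i, 0]]),
}

# Quaternion relations: i^2 = j^2 = k^2 = -1 and ij = k
for g in 'ijk':
    assert np.allclose(rho[g] @ rho[g], -I2)
assert np.allclose(rho['i'] @ rho['j'], rho['k'])

# Traces reproduce chi_5 = (2, 0, 0, 0, -2) on the classes 1, i, j, k, t = -1
assert np.isclose(np.trace(I2), 2)
assert all(np.isclose(np.trace(rho[g]), 0) for g in 'ijk')
assert np.isclose(np.trace(-I2), -2)
```

Since ρ(i)ρ(j) = ρ(k) and ρ(i)^2 = −I, the eight matrices ±I, ±ρ(i), ±ρ(j), ±ρ(k) form a faithful copy of Q inside GL_2(C).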
10. Homework 10
This was the last assignment. It was due Thursday (5/3/7).
10.1. If V, S are irreducible representations of G and S is one-dimensional, then show that V ⊗ S is also irreducible.
There are two proofs. The first proof uses the fact that tensor product distributes over direct sum and one-dimensional characters have inverses. The second proof is a computation.
First note that V ⊗ C ≅ V as G-modules if C is the trivial representation. This follows from the computation:

χ_{V⊗C}(σ) = χ_V(σ) χ_1(σ) = χ_V(σ) · 1 = χ_V(σ)

If χ is a one-dimensional character then χ̄ is its inverse since

χ(σ) χ̄(σ) = |χ(σ)|^2 = 1

for all σ ∈ G. Consequently,

(V ⊗ S) ⊗ S̄ ≅ V ⊗ S ⊗ S̄ ≅ V ⊗ C ≅ V

is irreducible. This implies that V ⊗ S must be irreducible.
The second proof is just a computation. If V ⊗ S ≅ ⊕ n_i S_i then

‖χ_{V⊗S}‖^2 = ∑ n_i^2 = ⟨χ_{V⊗S}, χ_{V⊗S}⟩
  = (1/|G|) ∑_σ χ_V(σ) χ_S(σ) χ̄_V(σ) χ̄_S(σ) = (1/|G|) ∑_σ χ_V(σ) χ̄_V(σ) = 1

which implies that V ⊗ S is irreducible.
10.2. Show that the irreducible representations of a product G × H are the tensor products of irreducible representations of G, H.
Suppose that ρ_1, ..., ρ_a are the irreducible representations of G and σ_1, ..., σ_b are the irreducible representations of H. Let π_G : G × H → G, π_H : G × H → H be the projection maps. Let ρ̃_i = ρ_i ∘ π_G, σ̃_j = σ_j ∘ π_H be the pullbacks of ρ_i, σ_j to G × H.
Claim 1: ρ̃_i ⊗ σ̃_j is irreducible.
Proof: The restriction of ρ̃_i ⊗ σ̃_j to G = G × 1 is a direct sum of copies of the irreducible representation ρ_i:

Res^{G×H}_G (ρ̃_i ⊗ σ̃_j) = (dim σ_j) ρ_i

Claim 2: The irreducible representations ρ̃_i ⊗ σ̃_j are distinct.
Proof: ρ̃_i ⊗ σ̃_j restricts to a multiple of ρ_i on G and of σ_j on H.
Claim 3: G × H has ab conjugacy classes.
Proof: (σ, τ) and (σ', τ') are conjugate in G × H iff σ, σ' are conjugate in G and τ, τ' are conjugate in H.
These three together show that the ρ̃_i ⊗ σ̃_j give a complete set of irreducible representations of G × H.
Another proof uses the equation (writing χ_i for the character of ρ̃_i and ψ_j for the character of σ̃_j, so that χ_i ψ_j is the character of ρ̃_i ⊗ σ̃_j):

⟨χ_i ψ_j, χ_k ψ_ℓ⟩ = ⟨χ_i, χ_k⟩ ⟨ψ_j, ψ_ℓ⟩ = δ_{ik} δ_{jℓ}

This equation shows simultaneously that the ρ̃_i ⊗ σ̃_j are irreducible and that they are distinct. Since there are ab of them, these are all the irreducible characters of G × H.
10.3. Compute the character table of the alternating group A_4 and the induction-restriction table for the pair (S_4, A_4).
First you need to find the conjugacy classes of A_4. By the orbit-stabilizer formula, the size of the conjugacy class of σ in G is equal to the index of the centralizer C_G(σ) of σ in G. It is easy to see what these centralizers are:

σ          C(σ)                                  |A_4 : C(σ)|
1          A_4                                    1
(123)      {1, (123), (132)}                      4
(12)(34)   {1, (12)(34), (13)(24), (14)(23)}      3

Since A_4 has 8 elements of cycle type (abc) and (123) has only 4 conjugates, there must be another conjugacy class of 3-cycles which also has 4 elements (elements of N ⊴ G which are conjugate in G have the same number of conjugates in N). To find it you just conjugate (123) by any odd permutation, e.g., (23), to get (132).
The quotient of A_4 by the Klein 4-group K = {1, (12)(34), (13)(24), (14)(23)} is cyclic of order 3. So, its character table is:

       1    (123)K    (132)K
χ_1    1      1         1
χ_2    1      ω         ω^2
χ_3    1      ω^2       ω

where ω = (−1 + √−3)/2.
Pulling back to A_4, we get:

       1    (123)    (132)    (12)(34)
χ_1    1      1        1         1
χ_2    1      ω        ω^2       1
χ_3    1      ω^2      ω         1
χ_4    3      0        0        −1

Here we use the fact that (12)(34) ∈ K to get χ_i((12)(34)) = χ_i(1) for i = 1, 2, 3, and χ_4(1) = 3 is the only solution to the equation 12 = 1 + 1 + 1 + χ_4(1)^2. The other values of χ_4 are obtained by column orthogonality.
Looking at the character table of S_4, which I did in class, we can get the induction-restriction table:

S_4                                              A_4
       1   (12)   (123)   (12)(34)   (1234)     χ_1  χ_2  χ_3  χ_4
χ_1    1    1       1        1         1         1    0    0    0
χ_2    1   −1       1        1        −1         1    0    0    0
χ_3    2    0      −1        2         0         0    1    1    0
χ_4    3    1       0       −1        −1         0    0    0    1
χ_5    3   −1       0       −1         1         0    0    0    1

The induction-restriction table is obtained by looking at the values of the characters of S_4 on the conjugacy classes 1, (123), (12)(34). The restricted characters have the same value on the conjugacy classes (123) and (132).
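The completed A_4 character table can be verified against row orthogonality, weighting each column by its class size:

```python
import numpy as np

w = np.exp(2j*np.pi/3)           # w = (-1 + sqrt(-3))/2
# Character table of A_4; columns: 1, (123), (132), (12)(34)
T = np.array([
    [1, 1,    1,    1],
    [1, w,    w**2, 1],
    [1, w**2, w,    1],
    [3, 0,    0,   -1],
], dtype=complex)
sizes = np.array([1, 4, 4, 3])   # conjugacy class sizes, |A_4| = 12

# Row orthogonality: (1/12) sum_c |c| chi_i(c) conj(chi_j(c)) = delta_ij
gram = (T * sizes) @ T.conj().T / 12
assert np.allclose(gram, np.eye(4))
```

Column orthogonality (the relation used in the text to fill in the last row of the table) follows from the same matrix identity applied to the transpose.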