
Notes Quantum Many Body Physics

Notes
December 9, 2013 Giordon Stark

Contents
Notes - December 9, 2013 1

Day 1 - September 30, 2013 3

1 Syllabus Information 3

2 Introduction 3

3 Path Integrals 4
3.1 Single Particle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Day 2 - October 2nd, 2013 8

4 Path Integral 8
4.1 Partition Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Day 3 - October 4th, 2013 14

5 Reminder of last time 14

6 Simple examples of path integral 15


6.1 Free Particle (Example 1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
6.2 Harmonic Oscillator (Example 2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

Day 4 - October 9th, 2013 24

7 Stationary Phase Approximation 24

8 Semiclassical Approximation 26
8.1 Example 1: Single (anharmonic) well . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
8.2 Example 2: Double well . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

Day 5 - October 11th, 2013 34

9 Last Time 34

10 Double anharmonic well (continued) 35


10.1 What is the instanton action S0 ? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
10.1.1 Conservation of “energy" . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
10.1.2 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
10.2 Missing part of derivation: K . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
10.2.1 Calculate K . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

Day 6 - October 16th, 2013 44

11 Path Integral for Spin 44


11.1 Phase convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
11.2 Hamiltonian is zero . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
11.3 Non-zero Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

12 Larger Spin - Generalization 48

13 Berry Phase 50
13.1 Berry Phase and adiabatic evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
13.1.1 Some Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

Day 7 - October 18th, 2013 56

14 Last Time 56

15 Berry Phase (continued) 58


15.1 Berry phase for spin-1/2 example . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

16 Berry Phase and Path Integrals 60


16.1 Other examples of Berry phase terms . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
16.2 Analogy with gauge theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
16.3 2 types of Berry phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

17 Linear Response 64
17.1 Computing D(t, t0 ) using path integral . . . . . . . . . . . . . . . . . . . . . . . . . . 65
17.2 How to compute G? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

17.2.1 1-dimensional quantum mechanical case . . . . . . . . . . . . . . . . . . . . . 66

Day 8 - October 23rd, 2013 68

18 Response and Correlation (continued) 68


18.1 Example: Harmonic Oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

19 Relationship between D and G 71


19.1 First Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
19.2 Second Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
19.2.1 Harmonic Oscillator Example . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

20 Bosons 73
20.1 Second Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

Day 9 - October 25th, 2013 77

21 Coherent state path integral for bosons 77


21.1 Add interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
21.2 Short-range interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
21.2.1 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
21.2.2 What are the properties for different values of µ, V0 , m? . . . . . . . . . . . . 79
21.2.3 How does E/V behave as a function of µ? . . . . . . . . . . . . . . . . . . . . 80
21.3 Dilute Bose gas Transition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
21.3.1 µ < 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
21.3.2 µ > 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
21.4 Cartoon wave function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
21.5 Spontaneous Symmetry Breaking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
21.5.1 SSB v.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
21.5.2 SSB v.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
21.5.3 SSB v.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
21.5.4 SSB v.4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

Day 10 - October 30th, 2013 87

22 Last time 87
22.1 Identifying operators in effective theory . . . . . . . . . . . . . . . . . . . . . . . . . 90
22.2 The punchline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

Day 11 - November 1st, 2013 95

23 Last time 95
23.1 Now: see this explicitly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
23.2 Think about k = 0 part of Heff now . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
23.3 Total Hamiltonian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

Day 12 - November 6th, 2013 102

24 Last Time 102

25 Superfluids in Low Dimension 103


25.1 (1 + 1)D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
25.2 Example of Mermin-Wagner Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 109
25.3 (2 + 1)D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

Day 13 - November 8th, 2013 112

26 Superfluidity at Finite Temperature 112


26.1 Long range order? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
26.2 What about η? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
26.3 Why do superfluids flow without relaxation? . . . . . . . . . . . . . . . . . . . . . . . 122

Day 14 - November 13th, 2013 126

27 Introduction 126
27.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

28 Integer Quantum Hall Effect 129

Day 15 - November 15th, 2013 137

29 Last time 137

30 Robustness of the Quantum Hall Effect 140


30.1 Laughlin flux insertion argument . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
30.1.1 1st Way . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
30.1.2 2nd Way . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
30.2 With weak interactions and/or disorder . . . . . . . . . . . . . . . . . . . . . . . . . 145
30.3 Puzzling concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

Day 16 - November 20th, 2013 149

31 Integer Quantum Hall Edge 149


31.1 Case 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
31.2 Case 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
31.3 Case 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
31.4 All cases combined . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

32 Properties of Gapless excitations 154


32.1 Redo σxy calculation with edge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

33 Simplest Toy model for Integer Quantum Hall 165

Day 17 - November 22nd, 2013 168

34 Landau Levels in circular gauge 168


34.1 Many-body wave function for ν = 1 IQH State . . . . . . . . . . . . . . . . . . . . . 171

35 Fractional Quantum Hall Effect 173


35.1 FQH effect requires interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
35.1.1 An educated guess . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
35.1.2 Why does it have uniform density? . . . . . . . . . . . . . . . . . . . . . . . . 178

Day 18 - November 25th, 2013 183

36 Last Time 183


36.1 Why ν = 1/m? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
36.2 Why uniform density? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
36.3 Why is state gapped? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
36.3.1 What about the gap? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187

37 Interpretation of Laughlin States 188


37.1 Low energy excitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
37.1.1 ν = 1 IQH case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
37.1.2 Laughlin ν = 1/3 case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192

Day 19 - November 27th, 2013 196

38 Last Time 196

38.1 Quasihole excitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
38.2 What is the charge of quasihole? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197

39 Fractional Statistics 201


39.1 Identical Particles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
39.1.1 Example 1: ν = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
39.1.2 Example 2: ν = 1/3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
39.2 What is statistics and how does one compute it? . . . . . . . . . . . . . . . . . . . . 204
39.3 What kinds of actions satisfy this locality constraint? . . . . . . . . . . . . . . . . . . 208

40 Topological Classes of n-particle paths 208

Day 20 - December 4th, 2013 211

41 Identical Particles 211


41.1 Last Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211

42 Topological Action 213


42.1 3D Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
42.2 2D Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216

43 Measuring S_top with adiabatic evolution 218


43.1 2D Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
43.2 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
43.3 Relation between statistics and wavefunctions . . . . . . . . . . . . . . . . . . . . . . 221

Day 21 - December 6th, 2013 224

44 Last Time 224

45 Statistics of Laughlin quasihole 225

46 Berry phase for 2 quasiholes 230

Foreword

Dear reader, thanks for stumbling upon these notes. First, I would like to thank
Professor Michael Levin for teaching the course on Quantum Many-Body Systems
at the University of Chicago (PHYS411) during the Fall term of 2013. There were 21
lectures total: the first week contained three 1-hour lectures, and the following
9 weeks each contained two 1.5-hour lectures. These lecture notes are currently
in rough-draft form and may contain typos, both grammatical and physical.
Consume at your own risk [insert surgeon general warning].

On top of this, I would like to thank

• Dylan Sabulsky (Class of 2014 at UChicago)

• Lei Feng (PhD Student at UChicago)

• Gustaf Downs (PhD Student at UChicago)

for the extra help in putting the notes together by way of their drawings when
I, the author, didn’t feel like drawing the images in class (for whatever reason).

Also, thanks to my family and friends whom I’ve ignored while editing these notes.
I’m sure they still love me.

- Giordon Stark (PhD Student at UChicago; kratsg@gmail.com)

Notes Quantum Many Body Physics
Day 1
September 30, 2013 Giordon Stark

1 Syllabus Information

Professor: Michael Levin


Office: Accelerator Bldg. 102
Course Webpage: http://chalk.uchicago.edu
Hours: subject to change (BE INFORMED)
Books: X.G. Wen QFT of Many-Body Systems; Altland and Simons: Condensed Matter Field
Theory
Homework: 1 Problem Set a week; typically due first class of the week

2 Introduction

So I should just tell you a little bit of what we are going to cover. There’s a lot of freedom
in choosing the syllabus of this course; it’s not a standard course. So what I’m going to do
is teach a few things that I think are interesting tools and concepts relevant to quantum
many-body physics. We will do path integrals, and that will be one tool that we will be
working on, and mean-field approximations, semiclassical approximations, and then interesting
concepts like Berry phase and fractional statistics. These are things I will talk about.
In terms of physical systems, I will focus on two model systems for most of the class.
In terms of physical systems I will focus on two model systems for most of the class.

So for the first half of the class, the first thing I will talk about is basically superfluids. I will
spend some time introducing superfluids and deriving some of their properties; that’s a model
boson system. Then for the second part of the class, I will be talking about quantum Hall states,
and those will be interesting correlated fermionic systems (of course, they can be bosonic, but we
will focus on fermionic). And then we will do more modern stuff on describing what topological
phases are, and analyzing their properties.

3 Path Integrals

3.1 Single Particle

Particle in 1D (non-relativistic):

$$\hat H = \frac{\hat p^2}{2m} + V(\hat x), \qquad [\hat x, \hat p] = i \qquad (\hbar = 1 \text{ throughout class})$$
You would want to know what the time evolution operator is. What’s the probability amplitude
that a particle that starts at a given position at time $t_i$ ends at some other position at time $t_f$?
We want the time evolution operator

$$U(x_f, t_f; x_i, t_i) = \langle x_f | e^{-iH(t_f - t_i)} | x_i \rangle$$

It’s hard to exponentiate the Hamiltonian times a large time period $\Delta T = t_f - t_i$, so we will split
the time period up into smaller chunks of time that we can deal with. Idea: break it up into $N$ small
time steps, with $\Delta t = (t_f - t_i)/N$. Then do a time evolution over each of those steps, and we will put it all
together at the end.

$$\begin{aligned}
\langle x_f|e^{-iH(t_f - t_i)}|x_i\rangle &= \langle x_f|e^{-iH\Delta t}\cdot e^{-iH\Delta t}\cdots e^{-iH\Delta t}|x_i\rangle\\
&= \langle x_f|e^{-iH\Delta t}\cdot\mathbb{1}\cdot e^{-iH\Delta t}\cdot\mathbb{1}\cdots e^{-iH\Delta t}|x_i\rangle\\
&= \int\prod_{k=1}^{N-1} dx_k\; \langle x_f|e^{-iH\Delta t}|x_{N-1}\rangle\langle x_{N-1}|e^{-iH\Delta t}|x_{N-2}\rangle\cdots\langle x_1|e^{-iH\Delta t}|x_i\rangle
\end{aligned}$$

using the completeness relation $\mathbb{1} = \int dx\,|x\rangle\langle x|$. For notational purposes, we will denote $x_f \equiv x_N$
and $x_i \equiv x_0$.

Figure 1: The total amplitude is the sum of all the paths from xi to xf .

So my claim, more specifically, is that you can write each small step like this up to an error term which vanishes like
$(\Delta t)^2$. There are different ways you can see this. One way is to expand both sides as power
series: they match up to second order, and the second-order terms fail to match only because the two
operators don’t commute with each other. Another way to state it is that the error is controlled by the
commutator of the two operators, which comes with a factor of order $(\Delta t)^2$. So infinitesimal evolutions commute with
each other: for any two operators, the evolutions commute up to order $(\Delta t)^2$. That’s the
key point. That’s what’s going to allow us to evaluate all of these matrix elements:

$$e^{-i\hat H\Delta t} = e^{-i\left(\frac{\hat p^2}{2m} + V(\hat x)\right)\Delta t} = e^{-i\frac{\hat p^2}{2m}\Delta t}\cdot e^{-iV(\hat x)\Delta t}\left(1 + O\!\left((\Delta t)^2\right)\right)$$

We drop the last factor, so that the total error is of order $N(\Delta t)^2 \sim N(1/N)^2 \to 0$ as $N \to \infty$.
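For reference, the step being used here is the lowest order of the Baker-Campbell-Hausdorff formula; a minimal worked version of the error estimate:

$$e^{-iA\Delta t}\, e^{-iB\Delta t} = e^{-i(A+B)\Delta t \,-\, \frac{1}{2}[A,B](\Delta t)^2 \,+\, O((\Delta t)^3)}, \qquad A = \frac{\hat p^2}{2m},\quad B = V(\hat x),$$

so each small step differs from the exact evolution $e^{-iH\Delta t}$ by $O((\Delta t)^2)$, and the $N$ steps together differ by $O(N(\Delta t)^2) = O(1/N)$.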
Now, we can compute an individual term of the path integral formalism
 
$$\begin{aligned}
\langle x_k|e^{-i\hat H\Delta t}|x_{k-1}\rangle &\approx \langle x_k|e^{-i\frac{\hat p^2}{2m}\Delta t}\cdot e^{-iV(\hat x)\Delta t}|x_{k-1}\rangle\\
&= \int dp_k\;\langle x_k|p_k\rangle\langle p_k|e^{-i\frac{\hat p^2}{2m}\Delta t}\cdot e^{-iV(\hat x)\Delta t}|x_{k-1}\rangle\\
&= \int\frac{dp_k}{2\pi}\; e^{ip_k(x_k - x_{k-1})}\; e^{-i\frac{p_k^2}{2m}\Delta t - iV(x_{k-1})\Delta t}
\end{aligned}$$

noting that by our normalization

$$\langle x|p\rangle = \frac{1}{\sqrt{2\pi}}\, e^{ipx}$$
and you can series expand the operators that are exponentiated. For example,

$$\begin{aligned}
\langle p_k|e^{-i\frac{\hat p^2}{2m}\Delta t} &= \langle p_k|\left[1 - i\frac{\hat p^2}{2m}\Delta t + \frac{1}{2!}\left(i\frac{\hat p^2}{2m}\Delta t\right)^2 + \cdots\right]\\
&= \langle p_k|\left[1 - i\frac{p_k^2}{2m}\Delta t + \frac{1}{2!}\left(i\frac{p_k^2}{2m}\Delta t\right)^2 + \cdots\right]\\
&= e^{-i\frac{p_k^2}{2m}\Delta t}\,\langle p_k|
\end{aligned}$$

since $\hat p|p_k\rangle = p_k|p_k\rangle$, where $p_k$ is an eigenvalue of the momentum operator.

Substituting back into the unitary operator above,

$$\begin{aligned}
U(x_f, t_f; x_i, t_i) &= \lim_{N\to\infty}\int\prod_{k=1}^{N-1}dx_k\prod_{l=1}^{N}\frac{dp_l}{2\pi}\;\exp\left\{i\Delta t\sum_{n=1}^N\left[p_n\frac{x_n - x_{n-1}}{\Delta t} - \frac{p_n^2}{2m} - V(x_{n-1})\right]\right\}\\
&\equiv \int_{x_i,t_i}^{x_f,t_f} Dx\,Dp\;\exp\left\{i\int_{t_i}^{t_f}dt\,\Big[p\,\partial_t x - \underbrace{\Big(\tfrac{p^2}{2m} + V(x)\Big)}_{H(x,p)}\Big]\right\}
\end{aligned}$$
This is known as the Hamiltonian Path Integral or the Phase Space Path Integral. We can also do
integrals over pk explicitly
$$\int\frac{dp_k}{2\pi}\; e^{ip_k(x_k - x_{k-1}) - i\frac{p_k^2}{2m}\Delta t} = \frac{1}{2\pi}\sqrt{\frac{2\pi m}{i\Delta t}}\; e^{i\Delta t\,\frac{m}{2}\left(\frac{x_k - x_{k-1}}{\Delta t}\right)^2}$$

So substitution gives

$$\begin{aligned}
U(x_f, t_f; x_i, t_i) &= \lim_{N\to\infty}\left(\frac{m}{2\pi i\Delta t}\right)^{N/2}\int\prod_{k=1}^{N-1}dx_k\;\exp\left\{i\Delta t\sum_{l=1}^N\left[\frac{m}{2}\left(\frac{x_l - x_{l-1}}{\Delta t}\right)^2 - V(x_{l-1})\right]\right\}\\
&\equiv \int_{x_i,t_i}^{x_f,t_f} Dx\; e^{\,i\int_{t_i}^{t_f}dt\,\left[\frac{1}{2}m(\partial_t x)^2 - V(x)\right]}\\
&\equiv \int Dx\; e^{\,iS[x(t)]}
\end{aligned}$$

where

$$S = \int_{t_i}^{t_f}dt\; L[x(t)] = \int_{t_i}^{t_f}dt\,\left[\frac{m}{2}(\partial_t x)^2 - V(x)\right]$$

This is known as the Lagrangian Path Integral.
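To make the discrete definition concrete, here is a minimal numerical sketch (Python; all parameter choices are illustrative, not from the lecture) that builds the imaginary-time version of exactly this discretization, the same construction with $i\Delta t \to \Delta\tau$ that appears in the next lecture, as an $N$-fold kernel composition on a spatial grid, and checks it against the exact free-particle answer $U_{im} = \sqrt{m/2\pi\tau}\, e^{-m(x_f - x_i)^2/2\tau}$:

```python
import numpy as np

# Discretized imaginary-time propagator for a free particle (V = 0), hbar = 1.
# Each Trotter step is a Gaussian kernel; the path integral becomes an N-fold
# matrix product over a spatial grid (a crude stand-in for the \int \prod dx_k).
m, tau, N = 1.0, 1.0, 200           # mass, total imaginary time, Trotter steps
dtau = tau / N
L, M = 20.0, 801                    # box size and number of grid points
x = np.linspace(-L / 2, L / 2, M)
dx = x[1] - x[0]

# Single-step kernel K(x, x') = sqrt(m / (2 pi dtau)) exp(-m (x - x')^2 / (2 dtau))
K = np.sqrt(m / (2 * np.pi * dtau)) * np.exp(-m * (x[:, None] - x[None, :])**2 / (2 * dtau))

# Compose N steps; each intermediate integral contributes a factor of dx
U = np.eye(M) / dx                  # delta function on the grid
for _ in range(N):
    U = K @ U * dx

# Compare with the exact result sqrt(m / (2 pi tau)) exp(-m (xf - xi)^2 / (2 tau))
i0 = M // 2                          # x_i = 0
exact = np.sqrt(m / (2 * np.pi * tau)) * np.exp(-m * x**2 / (2 * tau))
print(np.max(np.abs(U[:, i0] - exact)))   # small; shrinks as N and M grow
```

The grid spacing, box size, and number of slices here are arbitrary; the point is only that the product of many short-step kernels reproduces the continuum propagator.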

I have been warning you to be careful about what we mean by the path integral, and I have
emphasized on many occasions that to truly define this path integral, it’s really defined in terms
of a particular discretization, what I have written here. It’s tempting, and in some cases it’s okay,
to think intuitively that there is a true meaning to the path integral, independent of any discretization.
And it’s true in some cases: there are many natural ways I could write down an approximate
discrete version of this expression, and they agree, so it is an unambiguously defined quantity without any
further information. However, you might wonder: is it unambiguous for any action?

If that were the case, you would run into a bit of a puzzle, because it suggests a way to turn any
classical theory into a quantum theory: if you can make sense of this expression without more
details, then I have a very clear way of taking any classical theory and getting a quantum theory.
But that’s puzzling, because we know that in general there isn’t a unique way to quantize a
classical theory. So you may wonder how it is possible that I have some kind of well-defined way to
quantize a theory. That’s kind of a puzzle. How can that be? You will explore this puzzle in the
homework, but the hint, as I suppose I have hinted here, is that maybe path integrals are
not completely well-defined in general.
Question: Can we quantize arbitrary classical action S? Is that quantization unique?

So if I just write down an expression like this, maybe two sensible people could write
down two discretizations and get two different theories out. You will explore that in the homework.
Ultimately, what you find is that the ambiguity in quantizing a classical theory shows up as an
ambiguity in discretizing the path integral. So that’s a thing that path integrals bring up. The path
integral is not an independent way to fix the problems of canonical quantization; it might seem to be
a better way to do it, but any problem with quantization shows up in the definition
of the path integral. The path integral itself, in continuum form, is not fully defined in some cases:
you need to give more specific information about how you discretize it.

It turns out that for this particular type of path integral, with a quadratic kinetic term in $\dot x$
and a potential $V(x)$, there is no ambiguity in how to discretize it. But even for terms like $a(x)\dot x$, you can get interesting
ambiguities, which you will explore in the homework.

Notes Quantum Many Body Physics
Day 2
October 2nd, 2013 Giordon Stark

4 Path Integral

4.1 Partition Function

So we are just going to calculate the partition function of a particle in one dimension. We want to
calculate this trace.
$$Z = \text{Tr}\left(e^{-\beta\hat H}\right) = \int dx\;\langle x|e^{-\beta\hat H}|x\rangle = \int dx\; U_{im}(x, \tau_f, x, \tau_i), \qquad \beta = \frac{1}{kT}$$

where $\tau_f - \tau_i = \beta$, and the imaginary time evolution operator is

$$U_{im}(x_f, \tau_f, x_i, \tau_i) = \langle x_f|e^{-\hat H(\tau_f - \tau_i)}|x_i\rangle$$

where you let $\tau = it$ (imaginary time). So calculating the partition function amounts to evolving
in imaginary time:

$$e^{-H(\tau_f - \tau_i)} = e^{-H\Delta\tau}\cdot e^{-H\Delta\tau}\cdots e^{-H\Delta\tau}, \qquad \Delta\tau = \frac{\tau_f - \tau_i}{N}$$
In fact, you don’t have to redo the calculation. Imagine we compute it as before: we will get
exactly the same results we had before, but at every stage we have to make the replacements

$$i\Delta t \to \Delta\tau, \qquad dt \to -i\,d\tau, \qquad \partial_t \to i\,\partial_\tau$$

Okay. So we just make these simple replacements; we know what we will get, so let’s just do that.
First, the Hamiltonian path integral becomes the Hamiltonian imaginary-time path integral:

$$U_{im}(x_f, \tau_f, x_i, \tau_i) = \int_{x_i,\tau_i}^{x_f,\tau_f} Dx\,Dp\; e^{\int d\tau\left[ip\,\partial_\tau x - \frac{p^2}{2m} - V(x)\right]}$$

We can also do the Lagrangian imaginary-time path integral; it’s another expression for this
imaginary time evolution:

$$U_{im}(x_f, \tau_f, x_i, \tau_i) = \int Dx\; e^{-\int d\tau\left(\frac{1}{2}m(\partial_\tau x)^2 + V(x)\right)}$$

So I just told you what the imaginary time evolution operator is. Of course, we want the partition
function, so let’s write down the expression for it. Remember, the partition function is the integral
of the imaginary time evolution:

$$Z = \int dx\; U_{im}(x, \tau_f, x, \tau_i) = \int_{\text{periodic}} Dx\; e^{-\int_{\tau_i}^{\tau_f} d\tau\left(\frac{1}{2}m(\partial_\tau x)^2 + V(x)\right)}$$

So here, when we calculate this, we set $x_i$ equal to $x_f$: we set the two equal and integrate over all
possible values of that common endpoint. If you substitute this in, you can think of it as an integral
over all periodic paths: instead of integrating over paths with fixed endpoints, we integrate over
paths whose endpoints coincide and are free to move.

So that’s a key result: the trace can be written as a path integral with periodic boundary conditions.
One way to think about it schematically: I drew a cylinder, where the direction around the cylinder
is imaginary time τ. Before, we were integrating over paths between particular endpoints; now we
integrate over all paths winding around the cylinder, with endpoints free to move. So basically, all
possible loops you can draw on this cylinder.

This looks like a partition function for a classical string. What do I mean? Imagine the following,
maybe slightly contrived, statistical mechanical problem. I have some potential shaped like a
parabola (I’m trying to draw a curve here, though not very well), and imagine I have a string sitting
on it. We assume the string can only move up and down: the only degree of freedom is the
displacement u(x), one-dimensional, stuck on this surface. In other words, looking down at it from
above, the string can oscillate about the minimum of this potential W(u).
So imagine a string in a potential well $W(u)$ and let $\sigma$ be the tension. The potential energy is
going to be a functional of the string configuration $u(x)$, with two terms: the potential energy
$W(u)$ penalizes the string for being far from its minimum, and the string tension term, as you know
from wave mechanics at lowest order, is proportional to the derivative $(\partial_x u)^2$:

$$E[u(x)] = \int_0^L dx\,\left[\frac{1}{2}\sigma(\partial_x u)^2 + W(u)\right]$$

Now I’m going to forget the kinetic energy. It’s a real string so it can vibrate, but forget about that;
let’s think about the potential energy part. Then the partition function is just given by integrating
over all configurations (ignoring the kinetic energy and assuming periodic boundary conditions):

$$Z_{\text{string, classical}} = \int_{\text{periodic}} Du\; e^{-\beta E[u(x)]} = \int_{\text{periodic}} Du\; e^{-\beta\int_0^L dx\left[\frac{1}{2}\sigma(\partial_x u)^2 + W(u)\right]}$$

Compare this with

$$Z_{\text{particle, quantum}} = \int_{\text{periodic}} Dx\; e^{-\int_0^\beta d\tau\left(\frac{1}{2}m(\partial_\tau x)^2 + V(x)\right)}$$

Okay. So that’s our partition function; if we write it out explicitly, we can see it looks just
like the quantum partition function we wrote down. When you look at these two expressions, this
was our partition function for our quantum particle, and it looks identical to the partition function
of our classical string. So there’s a correspondence between these two systems: you can map
one onto the other exactly. I will explain in a second, but there’s a mapping between quantum
systems in d dimensions and classical systems in d+1 dimensions, and I’m trying to give an example of that.
Now, this doesn’t mean every classical system in d+1 dimensions has a simple quantum analog, but
there is a mapping which is very useful in many cases. And it’s just a classical statistical physics
model: the fact that it’s an actual physical string isn’t important. I’m sure there are many physical
systems you could approximate by this kind of partition function.

0-dimensional quantum $\leftrightarrow$ 1-dimensional classical:

$$Z_{\text{particle, quantum}} = Z_{\text{string, classical}}, \qquad \beta \leftrightarrow L, \quad m \leftrightarrow \beta\sigma, \quad V \leftrightarrow \beta W, \quad \tau \leftrightarrow x$$

Comparing those two, we see a correspondence where β is replaced by L: the inverse temperature
of the quantum system is like the length of the classical system. Then there are some other
replacements which are maybe less important and more system-specific: m gets replaced by βσ
(β here being the inverse temperature of the classical system), and our quantum potential V gets
replaced by βW.

And what was imaginary time is now the spatial coordinate x. Okay. The main point I’m trying to
make here is about the dimensionality of these systems. The string is a one-dimensional classical
field theory, if you like, while the particle is a zero-dimensional quantum system: the fact that the
particle lives in one spatial dimension is not relevant, since the degrees of freedom, the fields you
are talking about, depend only on time. There are no spatial coordinates. So we have a mapping
from a zero-dimensional quantum system to a one-dimensional classical system. And this is a
general thing: in many cases, not always, you can do this for a quantum system in d dimensions,
whether it’s a quantum field theory or just a lattice quantum system in d dimensions.

If you go to the path integral, you can see the path integral introduces an extra dimension, which
is time (or, if I’m calculating the partition function, imaginary time τ), and I have an integral over
configurations in this d+1 dimensional space. And in some cases, like what I have here, the weights
that appear in your exponential are purely positive, so they can really be interpreted as classical
Boltzmann weights. In those cases where you have a completely positive weight in your path
integral, you can think of a classical system in one higher dimension. So this is a common thing;
it’s an example of a general correspondence.

This is a correspondence between d-dimensional quantum systems in imaginary time (their partition
functions, if you like, when you are talking about partition functions) and (d+1)-dimensional
classical statistical mechanical systems.

Example of general correspondence:

D-dimensional quantum mechanics in imaginary time $\leftrightarrow$ (D+1)-dimensional classical statistical mechanics

And the other key thing to know about this correspondence is the line β ↔ L; this is very important,
and it’s what we have been emphasizing here: the inverse temperature is the variable that
corresponds to the classical spatial dimension. So if you wanted to consider a thermodynamic limit,
where we want all dimensions to be very large, that maps onto a quantum system at zero
temperature: the temperature goes to zero and β goes to infinity. If you are interested in quantum
systems at low temperature, you need to understand the corresponding classical statistical
mechanical problem. This correspondence is valuable both analytically and numerically. In fact,
beyond a few cases, mostly in one dimension, this is really the only way we know how to solve
quantum systems: for pretty much any quantum system in two spatial dimensions or above, the
only way we know to say anything about it on a computer with any efficiency at all is that some of
those systems, somewhat special ones, have this property that they map onto a classical system
when you look at the partition function, and classical systems are something you can simulate on a
computer. This is very valuable.
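To illustrate how this gets used in practice, here is a minimal numerical sketch (Python; the harmonic potential and all parameter values are illustrative choices, with $\hbar = m = 1$) that treats each imaginary-time slice as a row of the equivalent classical system, composes the transfer matrices around the periodic direction, and compares the result with the exact oscillator partition function $Z = 1/(2\sinh(\beta\omega/2))$:

```python
import numpy as np

# Partition function of a 1D quantum harmonic oscillator (V = 0.5 * omega**2 * x**2)
# from the discretized periodic path integral: Z = Tr T^N, where T is the
# one-slice imaginary-time transfer matrix of the equivalent classical system.
omega, beta, N = 1.0, 2.0, 400      # frequency, inverse temperature, time slices
dtau = beta / N
L, M = 16.0, 401                    # spatial box and number of grid points
x = np.linspace(-L / 2, L / 2, M)
dx = x[1] - x[0]
V = 0.5 * omega**2 * x**2

# One slice: kinetic Gaussian kernel times (symmetrized) potential weights.
T = (np.sqrt(1.0 / (2 * np.pi * dtau))
     * np.exp(-(x[:, None] - x[None, :])**2 / (2 * dtau))
     * np.exp(-0.5 * dtau * (V[:, None] + V[None, :])))

# Periodic boundary conditions = trace over the N-slice composition.
w = np.linalg.eigvalsh(T * dx)      # kernel is symmetric, so eigvalsh is safe
Z_path = np.sum(w**N)
Z_exact = 1.0 / (2 * np.sinh(beta * omega / 2))
print(Z_path, Z_exact)              # agree to a small fraction of a percent here
```

The eigenvalues of the transfer matrix play the role of $e^{-\Delta\tau E_n}$, so tracing the N-fold product is exactly summing $e^{-\beta E_n}$, which is the classical-simulation strategy described above.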

Some quantum systems, when you do this trick, don’t map onto a classical system: even in
imaginary time, the weight that appears in your path integral is still complex, and then you cannot
map onto a classical system. Those are the systems that are really puzzling, and actually usually
the most interesting ones. But anyway, this is an interesting correspondence: it’s useful, and it
gives you interesting intuition about one system in terms of the other.

And finally, I think it raises an interesting puzzle, because in classical statistical mechanics we
know how to derive this structure, the kind of structure I wrote here with Boltzmann weights,
almost mathematically from first principles, by thinking about theorems about large numbers. You
can derive Boltzmann weights just from that.

In quantum mechanics, by contrast, the path integral, or even the formulation of quantum
mechanics itself, is not derived from anything more basic: you just assume it based on experiment.
Of course it works, but there’s a close relation here between something which is derivable from
much more basic postulates and something which we have no idea how to motivate beyond basic
experiment.

WF 2:30-3:50, room 101

Notes Quantum Many Body Physics
Day 3
October 4th, 2013 Giordon Stark

5 Reminder of last time

Okay, so today we’re going to continue our discussion of path integrals. First I want to remind you
of the results from last time and mention one connection between real and imaginary time path
integrals. Then we’ll do some examples, actually calculating path integrals for various systems.
Okay, so first let’s remember two of our major results from last time.

Last time we showed that you could calculate the time evolution operator of a particle in one
dimension: the probability amplitude for a particle that starts at position $x_i$ at, say, time 0 and
ends at position $x_f$ at time t. Before I was writing $t_f$, $t_i$, but for this lecture $t_i = 0$. We
wanted to calculate

$$U(x_f, t, x_i, 0) = \langle x_f|e^{-iHt}|x_i\rangle = \int_{x_i,0}^{x_f,t} Dx\; e^{\,i\int_0^t dt'\left[\frac{1}{2}m(\partial_{t'}x)^2 - V(x)\right]}$$

$$U_{im}(x_f, \tau, x_i, 0) = \langle x_f|e^{-H\tau}|x_i\rangle = \int_{x_i}^{x_f} Dx\; e^{-\int_0^\tau d\tau'\left[\frac{1}{2}m(\partial_{\tau'}x)^2 + V(x)\right]}$$

Here we’re going to focus on the Lagrangian path integral, an integral over all paths joining $x_i$
with $x_f$. But we also talked about the imaginary time evolution operator, useful for example in
computing partition functions. We defined it very analogously: instead of evolving under the usual
equation, we basically set t to be a negative imaginary number. Note that

$$U_{im}(x_f, \tau, x_i, 0) = U(x_f, -i\tau, x_i, 0)$$

and it was this that allowed us to relate these quantities to statistical mechanical models. So those
are the two main results from last time. Are there any questions about them? Okay. I want
to make one observation I didn’t get a chance to make last time, about the relationship between
these two time evolution operators. The second equation here is really a definition; the path
integral expressions are what we proved.

Here what we imagine doing is formally setting t equal to some imaginary number, like $-i\tau$. In
fact, t can be any complex number, as long as it has a negative imaginary part: then the expression
is defined and converges, assuming H has a lower bound. As long as t has a negative imaginary
part, this thing decays at high energies: when the energy is large, the real part of iHt is positive,
which means that at very high energies the summand decays, and we get a nice convergent quantity
we can talk about. So as long as t has a negative imaginary part, everything is well-defined. If it
had a positive imaginary part, you might get divergences, and you don’t know exactly what’s going
to happen.

More generally, we can define $U(x_f, t, x_i, 0)$ for any t with $\Im(t) < 0$.

Imagine letting t be a complex number, with a real part and an imaginary part. What we’ve shown
is the following: the real time evolution operator is U evaluated for t close to the real axis. On the
other hand, the imaginary time evolution operator is U evaluated down close to the negative
imaginary axis.

What I’m trying to argue is that this expression U is analytic throughout the lower-half plane. And
what that means is that, where I naively said you can go between the real and imaginary time
evolution operators by just substituting $t = -i\tau$, more accurately: if we know the value of U
along the real axis, we can calculate the value of $U_{im}$ along the imaginary axis, or at least close
to that axis. Often, it’s just substitution, but in general it’s a slightly more subtle thing.

You have things like square roots and so on. If I know U on the real time axis, I can find $U_{im}$,
the imaginary time evolution operator. So in general, you can go from $U \to U_{im}$ by analytic
continuation. And like I said, many times that will be simple to do; it will look like you’re just
substituting. But if you have things like square roots, of course, you’d have to choose a branch for
your square root.

This gives you a prescription: if I know one of them, there’s a unique way to calculate the other.
There’s a one-to-one correspondence.

6 Simple examples of path integral

This is something that is best learned by example and practice, but this is to give you the lay of
the land. So let me give you some examples. I’m going to do a bunch of examples, getting more
and more complicated, and we start with the simplest one.

6.1 Free Particle (Example 1)

$$\hat H = \frac{\hat p^2}{2m}$$

First, compute U directly:

$$\langle x_f|e^{-i\frac{\hat p^2}{2m}t}|x_i\rangle = \int dp\;\langle x_f|p\rangle\langle p|e^{-i\frac{\hat p^2}{2m}t}|x_i\rangle = \int\frac{dp}{2\pi}\; e^{ip(x_f - x_i)}\; e^{-i\frac{p^2 t}{2m}}$$

We’re going to be pedantic and do it in a general way that will carry over to the path integral and
clarify what we’re doing. One way to do the integral is to look at the exponent and find its
stationary point. Take the derivative:

$$\frac{d}{dp}\left[p(x_f - x_i) - \frac{p^2 t}{2m}\right] = 0 \quad\Rightarrow\quad (x_f - x_i) - \frac{pt}{m} = 0$$

Thus we have a stationary point $p_c = m(x_f - x_i)/t$, which is the classical momentum that a particle
would have if it went from $x_i$ to $x_f$ in time t. The next step is a Gaussian integral: expand
around $p_c$, setting $p = p_c + y$, so

$$\begin{aligned}
&= \int\frac{dy}{2\pi}\; e^{i(p_c + y)(x_f - x_i)}\, e^{-i\frac{(p_c + y)^2 t}{2m}}\\
&= \int\frac{dy}{2\pi}\; e^{ip_c(x_f - x_i)}\, e^{-i\frac{p_c^2 t}{2m}}\, e^{-i\frac{t y^2}{2m}} \qquad\text{(the terms linear in } y \text{ cancel)}\\
&= \frac{1}{2\pi}\sqrt{\frac{2\pi m}{it}}\; e^{ip_c(x_f - x_i) - i\frac{p_c^2 t}{2m}}\\
&= \sqrt{\frac{m}{2\pi i t}}\; e^{\frac{im(x_f - x_i)^2}{2t}}
\end{aligned}$$
Suppose we had done this in imaginary time: we would have calculated the same thing and gotten
$\sqrt{m/2\pi\tau}$ times a Gaussian, and then continued to real time. That tells us how to choose the
square root, i.e. which sign to choose: along the imaginary time axis it’s the positive square root,
and that defines what the sign is here when you continue to the real axis. This is just the usual
Gaussian integral.

What I’m trying to say is that this is exactly where the issue shows up. When you work exactly
along the real axis, things are slightly ill-defined because the integrals are not completely
convergent: the sign of the square root is not well-defined the way I’ve written it. You have to pick
a prescription to be precise: you put in a small negative imaginary part, and then there’s a
well-defined value of the integral with a certain choice of sign here. There’s a slight pathology right
along the real axis.
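To spell the continuation out for this result (a short worked version of the branch-choice remark above):

$$U_{im}(x_f,\tau,x_i,0) = \sqrt{\frac{m}{2\pi\tau}}\; e^{-\frac{m(x_f - x_i)^2}{2\tau}} \quad\xrightarrow{\ \tau \to it\ }\quad U(x_f,t,x_i,0) = \sqrt{\frac{m}{2\pi i t}}\; e^{\frac{im(x_f - x_i)^2}{2t}},$$

with the branch of the square root fixed by requiring $U_{im} > 0$ on the positive $\tau$ axis and continuing through the lower-half t plane.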

Next, do it with a path integral:

$$\begin{aligned}
U(x_f, t, x_i, 0) &= \int_{x_i,0}^{x_f,t} Dx\; e^{\,i\int_0^t dt'\,\frac{m}{2}(\partial_{t'}x)^2}\\
&= \lim_{N\to\infty}\left(\frac{m}{2\pi i\Delta t}\right)^{N/2}\int\prod_{k=1}^{N-1}dx_k\; e^{\,i\Delta t\sum_{k=1}^N\frac{m(x_k - x_{k-1})^2}{2(\Delta t)^2}}
\end{aligned}$$

Again, we have a multi-dimensional Gaussian integral; it’s quadratic in the variables. We can use
the same approach we used before: find the stationary path $x_k^c$,

$$\frac{\partial}{\partial x_k}\left(\sum_{k'=1}^N (x_{k'} - x_{k'-1})^2\right) = 0$$

The gradient has to vanish with respect to each $x_k$; keeping only the terms containing $x_k$,

$$(x_k - x_{k-1}) - (x_{k+1} - x_k) = 0$$

This says that the amount x moves in every time slice is the same, so the stationary path moves
along a straight line with constant velocity. That’s what minimizes it:

$$x_k^c = \frac{x_f - x_i}{t}\, k\Delta t + x_i$$

The main point is to understand that there’s a particular path, a particular set of $x_k$’s, that
minimizes this action. Expand around $x_k^c$: let $x_k = x_k^c + y_k$. Then

$$\begin{aligned}
&= \lim_{N\to\infty}\left(\frac{m}{2\pi i\Delta t}\right)^{N/2}\int\prod_{k=1}^{N-1}dy_k\;\underbrace{e^{\,i\Delta t\sum_{k=1}^N\frac{m(x_k^c - x_{k-1}^c)^2}{2(\Delta t)^2}}}_{e^{iS_c},\ S_c\text{ the classical action}}\; e^{\,i\Delta t\sum_{k=1}^N\frac{m(y_k - y_{k-1})^2}{2(\Delta t)^2}}\\
&= \lim_{N\to\infty}\left(\frac{m}{2\pi i\Delta t}\right)^{N/2} e^{iS_c}\int\prod_{k=1}^{N-1}dy_k\; e^{\frac{i}{2}y^T A y}
\end{aligned}$$

$$A = \frac{m}{\Delta t}\begin{pmatrix}
2 & -1 & & \cdots & 0\\
-1 & 2 & -1 & & 0\\
0 & \cdots & \cdots & & 0\\
0 & & -1 & 2 & -1\\
0 & \cdots & & -1 & 2
\end{pmatrix}$$

$$\begin{aligned}
&= \lim_{N\to\infty}\left(\frac{m}{2\pi i\Delta t}\right)^{N/2} e^{iS_c}\left(\sqrt{2\pi}\right)^{N-1}\sqrt{\det(iA^{-1})} \qquad\text{(general formula for Gaussians; see Altland and Simons)}\\
&= \lim_{N\to\infty}\left(\frac{m}{2\pi i\Delta t}\right)^{N/2} e^{iS_c}\left(\sqrt{2\pi}\right)^{N-1}\frac{1}{\sqrt{\det(-iA)}}
\end{aligned}$$

We need the determinant. Let $D_N$ denote the determinant of the matrix above, without the
$m/\Delta t$ prefactor, of size $N\times N$. Expanding along the first row gives the recurrence
$D_N = 2D_{N-1} - (-1)(-1)D_{N-2} = 2D_{N-1} - D_{N-2}$:

$$D_1 = \det(2) = 2, \qquad D_2 = \det\begin{pmatrix}2 & -1\\ -1 & 2\end{pmatrix} = 3, \qquad D_3 = 2D_2 - D_1 = 4, \qquad \ldots, \qquad D_n = n + 1$$

The original A matrix we had was $(N-1)\times(N-1)$, so

$$\det(-iA) = \left(\frac{-im}{\Delta t}\right)^{N-1} D_{N-1} = \left(\frac{-im}{\Delta t}\right)^{N-1} N$$
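As an aside, a quick numerical check of this recurrence and the closed form $D_n = n + 1$ (a minimal sketch):

```python
import numpy as np

# Verify D_n = n + 1 for the tridiagonal matrix with 2 on the diagonal
# and -1 on the off-diagonals (the m/dt prefactor is factored out).
for n in range(1, 9):
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    print(n, round(np.linalg.det(A)), n + 1)   # the last two columns match
```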
And we can plug that back in above:

$$\begin{aligned}
&= \lim_{N\to\infty}\left(\frac{m}{2\pi i\Delta t}\right)^{N/2} e^{iS_c}\sqrt{\frac{(2\pi)^{N-1}}{N\left(\frac{-im}{\Delta t}\right)^{N-1}}}\\
&= \lim_{N\to\infty}\left(\frac{m}{2\pi i\Delta t}\right)^{N/2}\left(\frac{2\pi i\Delta t}{m}\right)^{\frac{N-1}{2}}\frac{1}{\sqrt N}\, e^{iS_c}\\
&= \lim_{N\to\infty}\sqrt{\frac{m}{2\pi i\,\Delta t\, N}}\; e^{iS_c}\\
&= \sqrt{\frac{m}{2\pi i t}}\; e^{\,i\frac{m(x_f - x_i)^2}{2t}}
\end{aligned}$$

using $N\Delta t = t$ in the last step.
This is the exact answer for a free particle, with no approximations: the only step we dropped in
setting up the path integral was the $O((\Delta t)^2)$ error per time slice, and that contributed nothing
in the limit. So are there any questions? Okay, let me start on the next example, then.

6.2 Harmonic Oscillator (Example 2)

This way of calculating path integrals from the discrete approximation is, as you already recognize,
cumbersome, and I don’t think it’s used very commonly, if ever. Actually, you can only do it for
integrals that we already know how to do: quadratic problems, problems that are solvable in every
possible way.

Okay, so now, for the second example, I want to do the harmonic oscillator. You could do the
harmonic oscillator the way we did the free particle: it’s a Gaussian, you can write out the path
integral, determine the matrix, and follow the same procedure. In fact, it’s a good exercise to
repeat that calculation for the harmonic oscillator. But I’m not going to do it here. Instead, I want
to show you another way to compute path integrals, which maybe gives you another point of view;
it’s closely related, but a different way to go. We want to compute the time evolution operator for
the harmonic oscillator:

$$U(x_f, t, x_i, 0) = \int Dx\; e^{\,i\int_0^t dt'\left[\frac{1}{2}m(\partial_{t'}x)^2 - \frac{m\omega_0^2 x^2}{2}\right]}$$

So how do we proceed? The first step is the same as before: we’re going to compute the path
integral, but without going to the discrete limit. That by its very nature means we’re in some sense
not going to do something completely rigorous: I defined the path integral as a discrete integral,
and I’m going to take a leap of faith and treat it as a true continuum object without taking the
discrete limit. We’re going to evaluate this Gaussian integral directly in the continuum, following
the same procedure as before: expand around the stationary path. We’ll find the stationary path
$x_c$,

$$\frac{\delta S}{\delta x} = 0$$
and then expand $x = x_c + y$. I’m not going to write out what $x_c$ is: in general, the stationary
path depends on the initial and final positions and there’s a formula for it. It’s a purely classical
problem.

$$\begin{aligned}
&= \int Dy\;\underbrace{e^{\,i\int dt'\left[\frac{m}{2}(\partial_{t'}x_c)^2 - \frac{m\omega_0^2 x_c^2}{2}\right]}}_{e^{iS_c}}\; e^{\,i\int dt'\left[\frac{m}{2}(\partial_{t'}y)^2 - \frac{m\omega_0^2 y^2}{2}\right]} \qquad\text{(linear terms vanish)}\\
&= e^{iS_c}\int Dy\; e^{\,i\int dt'\; y\left(-\frac{m}{2}\partial_{t'}^2 - \frac{m}{2}\omega_0^2\right)y}\\
&= N e^{iS_c}\,\frac{1}{\sqrt{\det\left[-i\left(-\frac{m}{2}\partial_{t'}^2 - \frac{m}{2}\omega_0^2\right)\right]}}\\
&= N' e^{iS_c}\,\frac{1}{\sqrt{\det\left(-\partial_{t'}^2 - \omega_0^2\right)}}
\end{aligned}$$


The part that we’re going to care about here has to do with theq
square root of the determinant.

Just like we have the 1/ −ia, we can do the same thing here. 1/ det(−iÔ). Of course, when we
do the integrals, there are nasty factors and there’s also a factor of two usually when you use the
formulas. There are other factors. They’re infinite. There’s some infinite dimension. We’re going
to combine the factors to some unknown that we don’t know.

So we have this expression. We have this normalization factor we don’t know what to do with.
Let’s go ahead blindly and try to calculate this determinant. We need to calculate the determinant
of some operator. Just like you calculate the determinant of the matrix by taking the eigenvalues,
we take the product of the eigenvalues.

You can think of it as the matrix. It’s a continuous version. To find the eigenvalues, what are
the basic functions it’s operating on, the operator? What is the space of function? These paths
lie, what are their properties? We had some kind of classical trajectory. I’ll write like this. We’re
expanding around it. y is some kind of deviation from that. But you can see the picture, y had
to vanish in the initial final points. The path has to start at xi and end at xf . So we want our
eigenvalues with the boundary conditions.

−∂t2 y − ω02 y = λy y(0) = 0, q(t) = 0

That’s a familiar problem; it’s the problem you solve when you find the normal modes of a string.
The normal modes of a string are

$$y_n = \sin\left(\frac{n\pi t'}{t}\right), \qquad \lambda_n = \left(\frac{n\pi}{t}\right)^2 - \omega_0^2, \qquad n = 1, 2, 3, \cdots$$

What’s interesting is that the calculation completely decomposes: the action broke up into a
classical piece plus a deviation piece, and the linear term was gone. So the fluctuation piece doesn’t
even know what the classical trajectory is: calculating the quantum fluctuations here is
independent of the path. That’s particular to this quadratic example; it’s because it’s a quadratic
problem. Whatever trajectory you’re on, this determinant is the same. If you had anharmonic
terms, in general, which trajectory you’re on would affect the determinant (you’d get a different
equation here), but since it’s a perfectly quadratic problem, it’s simple.

So we have the eigenvalues, and the determinant is just the product of all of the eigenvalues. If we
plug that in, we can see what the time evolution operator is:

$$U(x_f, t, x_i, 0) = N' e^{iS_c}\,\frac{1}{\sqrt{\prod_{n=1}^\infty\left[\left(\frac{n\pi}{t}\right)^2 - \omega_0^2\right]}}$$

So we have two problems:

1. $N' = ?$

2. The product hopelessly diverges.

Even for $\omega_0 \neq 0$, it’s essentially a product of $n^2$ over all integers; it doesn’t make any
sense. But these two problems are not unrelated: there’s an infinity here and an infinity there, and
if we combine them correctly, we can get something finite. How do we do that?

We’re going to compare with the problem we already solved: the free particle. Imagine we did the
same calculation for $\omega_0 = 0$. The $N'$ does not depend on $\omega_0$ (remember, $N'$ was
just the factors of $2\pi$ and the like), so we have the same $N'$ here, and we get almost exactly the
same expression for the free time evolution operator:

$$U_{\text{free}}(x_f, t, x_i, 0) = N' e^{iS_{c,\text{free}}}\,\frac{1}{\sqrt{\prod_{n=1}^\infty\left(\frac{n\pi}{t}\right)^2}}$$

And so, if we look at the ratio,

$$\frac{U}{U_{\text{free}}} = \frac{e^{iS_c}}{e^{iS_{c,\text{free}}}}\cdot\frac{1}{\sqrt{\prod_{n=1}^\infty\left[1 - \left(\frac{\omega_0 t}{n\pi}\right)^2\right]}}$$

In fact, there’s a nice well-known expression for this product, which you may be familiar with. Are
there questions so far? You may remember the following product formula, perhaps from complex
analysis. What do you think this is equal to?

$$\prod_{n=1}^\infty\left[1 - \left(\frac{x}{n\pi}\right)^2\right] = \frac{\sin x}{x}$$

Any guesses? Here’s a good way to figure it out; this is the way all kinds of cool identities were
originally derived. Look at the zeros of this function, thinking of it as a polynomial: where does it
have zeros? At multiples of π. What analytic function do you know that has zeros at every integer
multiple of π? You say sin x, right? But there’s one slight issue: this product doesn’t have a zero at
zero. So what function has zeros there but not at zero? Yeah, exactly: $\sin x/x$. Substitute that
in. Thus

$$\frac{U}{U_{\text{free}}} = \frac{e^{iS_c}}{e^{iS_{c,\text{free}}}}\sqrt{\frac{\omega_0 t}{\sin\omega_0 t}}$$

and

$$U = e^{iS_c}\left(\frac{U_{\text{free}}}{e^{iS_{c,\text{free}}}}\right)\sqrt{\frac{\omega_0 t}{\sin\omega_0 t}}$$
and from this, using

$$U_{\text{free}} = \sqrt{\frac{m}{2\pi i t}}\; e^{iS_{c,\text{free}}}$$

we get

$$U = e^{iS_c}\sqrt{\frac{m\omega_0}{2\pi i\sin\omega_0 t}}$$

So the ratio of the two is just a prefactor: all together, U equals the exponential of the classical
action times this prefactor. Combining the two factors, the t’s cancel and we get
$\sqrt{m\omega_0/(2\pi i\sin\omega_0 t)}$. That’s sensible: we want this result to reduce to the
free-particle one in the limit $\omega_0\to 0$, and indeed $\sin\omega_0 t \to \omega_0 t$ in that limit,
which gives back $\sqrt{m/2\pi i t}$. So this is our final result for the harmonic oscillator in real
time.
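As a quick numerical sanity check of the product identity used above, here is a minimal sketch (Python; truncating the infinite product at a large but finite N, with an arbitrary test value of x):

```python
import numpy as np

# Check prod_{n=1}^{N} (1 - (x / (n pi))^2) -> sin(x) / x  as N grows.
x = 1.7                                    # any x that is not a multiple of pi
for N in (10, 100, 10000):
    n = np.arange(1, N + 1)
    partial = np.prod(1 - (x / (n * np.pi))**2)
    print(N, partial, np.sin(x) / x)       # partial product approaches sin(x)/x
```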

This result has some interesting structure. As I was saying before, it’s not analytic on the real axis:
it has singularities at particular, interesting values of t, namely when $\omega_0 t$ is an integer
multiple of π. Does anyone have an interesting question? Why doesn’t somebody ask me this: why
does it have this structure?

What’s special about times which are multiples of half the period of the harmonic oscillator? (Note
it only takes multiples of π, not 2π: the motion doesn’t have to go all the way around.) Here’s an
interesting picture; I don’t know if this explains it, but maybe it does. Look at the classical
equations of motion for the harmonic oscillator in the x-p plane. What do the trajectories look
like? They just traverse circles, or rather ellipses (whether it’s a circle or an ellipse is a matter of
units). At multiples of the half-period, every trajectory leaving $x_i$ returns to the same position at
once, which is suggestive of why the prefactor blows up there.

Notes Quantum Many Body Physics
Day 4
October 9th, 2013 Giordon Stark

Today, we’re going to talk about a particular method for evaluating path integrals, very similar to
approximations that people make when evaluating conventional integrals.

7 Stationary Phase Approximation

Consider

$$\int_{-\infty}^{\infty} e^{if(x)/a}\, dx$$

where f(x) is some real function (like a quadratic) that has a minimum at x = 0, so $f'(0) = 0$.

Figure 2: A regular function with a minimum at x = 0

How can we estimate the integral if a is small?

Figure 3: How the exponential varies as a function of a (it oscillates much more slowly around a
minimum)

The integral is dominated by the region near x = 0. Expand f(x) around x = 0:

$$f(x) = f(0) + \frac{1}{2}f''(0)x^2 + \cdots$$

We will drop the higher order terms and then do the Gaussian integral:

$$\int_{-\infty}^\infty e^{if(x)/a}\,dx \approx \int_{-\infty}^\infty e^{i\left(f(0) + \frac{1}{2}f''(0)x^2\right)/a}\,dx = e^{if(0)/a}\sqrt{\frac{2\pi a}{-i f''(0)}}$$

This is for the case where f(x) has one stationary point, some x where $f'(x) = 0$. If there are
more stationary points, then more generally we sum over all stationary points $x_c$ with
$f'(x_c) = 0$:

$$\approx \sum_{x_c} e^{if(x_c)/a}\sqrt{\frac{2\pi a}{-i f''(x_c)}}$$

How big is the error? The typical fluctuations satisfy $f''(0)x^2 \sim a$, which implies that the width
is $x \sim \sqrt a$. So higher order terms like $x^3/a$ give $(\sqrt a)^3/a \sim \sqrt a$.

So all together, we conclude that the integral is given by the stationary phase approximation up to
a relative error of order $\sqrt a$: more precisely, it is the expression above times $1 + O(\sqrt a)$.
(If the next term were quartic instead, you would get $(\sqrt a)^4/a \sim a$.) In any case, you can
see that in the limit $a \to 0$ we’ve found the lowest-order term.

I haven’t given you a proof of any of these statements; you can prove them with complex variables
or other approaches, but I’m giving you the intuitive argument. The stationary points are the
points where the integrand doesn’t oscillate rapidly, so we can imagine that the main contribution
comes from the regions near the stationary points: each contributes within a window around $x_c$.
In the limit $a \ll 1$, these are tiny windows, and they won’t overlap, so you can just sum them up.

Overall, the idea is simple. When we integrate over the region from $-\infty$ to $+\infty$, we break
it up into windows from $x_c - \epsilon$ to $x_c + \epsilon$ around each stationary point, and these
windows contribute the most to our integral. Then, since the typical fluctuations of x are of order
$\sqrt a$, we want $\epsilon \sim \sqrt a$ to capture the contribution of each stationary point.

By the way, you’re very familiar with this idea; in fact, there’s an even simpler case. The only
reason we call it the stationary phase approximation is that we had the phase factor $e^{if(x)/a}$.
You can also imagine integrals with no i in the exponent at all: if I put a minus sign there,
$e^{-f(x)/a}$, that is also a case of interest. You can quickly convince yourself it is dominated by
the minimum of f, since away from that minimum the exponential is extremely small. Again, in
this case you would sum over all the places where $f'(x) = 0$, and estimate each contribution by a
Gaussian integral. I’m going to use these two approximations interchangeably: they’re part of the
same approximation, expanding around the points where your integrand is stationary. One of them
is what we’ll do in real time, with $e^{iS[x]}$; the other is doing things in imaginary time, where
you get something that is real, at least in simple cases.
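Here is a minimal numerical sketch (Python) of the no-i version just described, comparing the exact integral of $e^{-f(x)/a}$ against the Gaussian estimate $e^{-f(0)/a}\sqrt{2\pi a/f''(0)}$ as a shrinks; the quartic f is just an illustrative choice, not from the lecture:

```python
import numpy as np

# f has a single minimum at x = 0 with f(0) = 0 and f''(0) = 1; the x**4 term
# is the anharmonicity that the saddle-point (Laplace) estimate ignores.
f = lambda x: 0.5 * x**2 + x**4

x = np.linspace(-5, 5, 200001)      # dense grid; integrand is negligible at the edges
dx = x[1] - x[0]
for a in (1.0, 0.1, 0.01):
    exact = np.sum(np.exp(-f(x) / a)) * dx
    saddle = np.sqrt(2 * np.pi * a)  # e^{-f(0)/a} * sqrt(2 pi a / f''(0))
    print(a, exact / saddle)         # ratio -> 1 as a -> 0; the error here is O(a)
```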

8 Semiclassical Approximation

Re-introduce ℏ. We should write down what the path integral looks like with ℏ:

$$\langle x_f|e^{-iHt/\hbar}|x_i\rangle = \int Dx\; e^{iS[x]/\hbar}$$

where S is the usual action (we’re dealing with a single particle, as always),

$$S = \int\left[\frac{1}{2}m(\partial_t x)^2 - V(x)\right]dt$$

In the limit $\hbar \to 0$, the integral is dominated by “stationary paths" with
$\frac{\delta S}{\delta x} = 0$, which are the classical paths $x_c$. We can apply the same kind of
approximation scheme: take each one of the classical paths and expand around it. So we write
$x = x_c + y$, where y is the deviation from the classical path.

Figure 4: An example of a path from xi → xf

So expand out the action (think of $\frac{\delta^2 S}{\delta x^2}$ like a matrix):

$$S[x] = S[x_c] + \frac{1}{2}\, y\, \frac{\delta^2 S}{\delta x^2}\bigg|_{x_c} y + \cdots$$

So let’s do the Gaussian integrals; what do we get?

$$\langle x_f|e^{-iHt/\hbar}|x_i\rangle = N e^{iS_c/\hbar}\cdot\left[\det\left(-\frac{i}{\hbar}\frac{\delta^2 S}{\delta x^2}\bigg|_{x_c}\right)\right]^{-1/2}$$

We can ask the same question as before: how big is the error from dropping the higher-order terms?
The same approach applies, estimating the typical fluctuations:

$$\underbrace{S[x_c + y] - S[x_c]}_{\sim\, y^2} \sim \hbar \quad\Rightarrow\quad y \sim \sqrt\hbar$$

Higher-order terms like $y^3/\hbar$ give $(\sqrt\hbar)^3/\hbar \sim \sqrt\hbar$. So we conclude that,
just like before, the time evolution operator in the lowest-order semiclassical approximation (the
lowest-order quantum picture of the problem) is

$$\langle x_f|e^{-iHt/\hbar}|x_i\rangle = N e^{iS_c/\hbar}\cdot\left[\det\left(-\frac{i}{\hbar}\frac{\delta^2 S}{\delta x^2}\bigg|_{x_c}\right)\right]^{-1/2}\left[1 + O\left(\sqrt\hbar\right)\right]$$

So the point is, for any given system, you can imagine tuning your parameters toward the classical
limit, and in that limit this is the kind of approximation you would make to describe the system.
Whether it’s valid in a particular system, you have to see how big this parameter actually is.

In general, the approximation gives the leading-order result in ℏ. For the free particle and the
harmonic oscillator it gives the exact answer (the action is quadratic; we haven’t dropped any
higher-order terms).

8.1 Example 1: Single (anharmonic) well

Figure 5: An anharmonic potential. It is not parabolic.

I’m assuming the potential has a minimum at x = 0 and is not a parabola, but we know that the
second derivative there (which is what’s important) is $m\omega^2$. For simplicity, let’s start at the
point $x_i = 0$, wait some period of time, and ask for the amplitude to return to that same point.
There is no real deep reason to do this case; it’s just simple:

$$\langle x_f = 0|e^{-iHt/\hbar}|x_i = 0\rangle$$

To calculate according to this scheme, first find the classical path and then expand around that
path. We need the stationary path, which here is $x_c(t) \equiv 0$ for all t:

$$\langle x_f = 0|e^{-iHt/\hbar}|x_i = 0\rangle = N e^{iS_c/\hbar}\cdot\left[\det\left(-\frac{i}{\hbar}\frac{\delta^2 S}{\delta x^2}\bigg|_{x_c}\right)\right]^{-1/2}\left[1 + O(\sqrt\hbar)\right]$$

where the classical action is

$$S_c = \int\left[\frac{1}{2}m(\partial_t x_c)^2 - V(x_c)\right]dt = 0$$

So then

$$\langle x_f = 0|e^{-iHt/\hbar}|x_i = 0\rangle = N\left[\det\left(-\frac{i}{\hbar}\left(-m\partial_t^2 - m\omega^2\right)\right)\right]^{-1/2} = \sqrt{\frac{m\omega}{2\pi i\hbar\sin\omega t}}$$
To extract the ground state energy, we consider imaginary time and large T:

$$\langle 0|e^{-HT/\hbar}|0\rangle = \sqrt{\frac{m\omega}{2\pi\hbar\sinh\omega T}} \approx \sqrt{\frac{m\omega}{\pi\hbar}}\; e^{-\omega T/2} \qquad (T\to\infty)$$

since under the continuation the $i\sin\omega t$ becomes a $\sinh\omega T$, and the $\sinh\omega T$
grows like an exponential. On the other hand, inserting a complete set of energy eigenstates, we can
write

$$\langle 0|e^{-HT/\hbar}|0\rangle = \langle 0|\left(\sum_n |n\rangle e^{-E_n T/\hbar}\langle n|\right)|0\rangle \approx |\langle 0|GS\rangle|^2\; e^{-E_{gs}T/\hbar}$$

$$\Rightarrow\quad E_{gs} = \frac{1}{2}\hbar\omega$$

where $|\langle 0|GS\rangle|^2 = \sqrt{\frac{m\omega}{\pi\hbar}}$.

8.2 Example 2: Double well

What we want to do is repeat what we did for the single well: calculate the ground-state energy.
Classically, there are two ground states, one at each minimum. So ultimately we are not calculating
a single ground state energy: we’re going to find the two lowest states. We want their energies, and
some information about things like their probability density at these two points.

Figure 6: The double well potential

Let’s do our path integral in imaginary time from the beginning. Compute

$$\langle\pm a|e^{-HT/\hbar}|\pm a\rangle = \int Dx\; e^{-S[x]/\hbar}, \qquad S = \int\left[\frac{1}{2}m(\partial_\tau x)^2 + V(x)\right]d\tau$$

The stationarity condition, minimizing the action, is $\frac{\delta S}{\delta x} = 0$. This says

$$-m\partial_\tau^2 x + V'(x) = 0 \quad\to\quad m\partial_\tau^2 x = V'(x)$$

so the particle accelerates in the direction of higher potential. This is because we’re doing imaginary
time, so the effective potential we should think about is $-V$.

Figure 7: The effective potential

There are two trivial solutions with $x(0) = x(T) = \pm a$:

$$x_1(\tau) = a \;\;\forall\,\tau, \qquad x_2(\tau) = -a \;\;\forall\,\tau$$

Figure 8: The two trivial stationary paths


If we stopped here, we would get two degenerate ground states with energies $E = \frac{\hbar\omega}{2}$
and no energy difference between them. But we know there is splitting, so to get the splitting we
need the other stationary path.

Figure 9: The instanton stationary path

This is the "instanton" path, contributing to $\langle +a|e^{-HT/\hbar}|-a\rangle$. There is some characteristic width $\Delta t$ over which the particle rolls from one extremum to the other (with exponential tails on either side). The event is localized in time, which is why it is called an instanton.

Figure 10: An instanton

Outside of this window where the particle rolls, the path looks like the other stationary paths. So if we are interested in calculating the amplitude to propagate from $+a$ back to $+a$, we want to sum over all numbers of instantons in the path integral.

Figure 11: An example of n instantons for a given path

$$\langle a|e^{-HT/\hbar}|a\rangle = \sum_{n\ \mathrm{even}} \int_0^T d\tau_1\int_{\tau_1}^T d\tau_2\cdots\int_{\tau_{n-1}}^T d\tau_n\; e^{-S(\tau_1,\cdots,\tau_n)/\hbar}\cdot N\det(\cdots)$$

We're going to assume these instantons are far enough away from each other that the distance between them is much larger than their width $\Delta t$. If they start to overlap at that length-scale (or time-scale) $\Delta t$, it becomes hard to think about. So we assume the dominant contributions to the path integral come from a dilute gas of instantons; in other words, $n$ is not too big. If the density isn't too high, the total action will just be the sum of the actions of the individual instantons: along the flat plateaus the potential and the derivative both vanish, so there is no contribution from those regions, and as long as the instantons are far apart they each contribute the same amount. So then the action is

$$S(\tau_1, \tau_2, \cdots, \tau_n) = nS_0$$

S0 is the action of a single instanton. With n dilute instantons


$$N\det(\text{instantons}) = \underbrace{\sqrt{\frac{m\omega}{\pi\hbar}}\,e^{-\omega T/2}}_{\det(\text{straight paths only})}\cdot\,K^n$$

Note that if there are no instantons ($n = 0$), we have the two trivial stationary paths, which have zero action since the potential at the minima vanishes, $V(\pm a) = 0$, and $(\partial_\tau x)^2 = 0$. A configuration with $n$ instantons is labeled by the times $\tau_i$ at which they occur. We will compute $K$ later. So

$$\langle a|e^{-HT/\hbar}|a\rangle = \sqrt{\frac{m\omega}{\pi\hbar}}\,e^{-\omega T/2}\sum_{n\ \mathrm{even}}\frac{T^n}{n!}\left(Ke^{-S_0/\hbar}\right)^n = \sqrt{\frac{m\omega}{\pi\hbar}}\,e^{-\omega T/2}\cosh\left(KTe^{-S_0/\hbar}\right)$$
π~
And then one more:
$$\langle -a|e^{-HT/\hbar}|a\rangle = \sqrt{\frac{m\omega}{\pi\hbar}}\,e^{-\omega T/2}\sinh\left(KTe^{-S_0/\hbar}\right)$$

Notes Quantum Many Body Physics
Day 5
October 11th, 2013 Giordon Stark

9 Last Time

Okay. So I'm going to continue what we did last time: understanding the physics of a double-well potential. But first I want to make one comment. I won't go into details, but there's actually a mistake in the derivation I did for the single anharmonic well, where we applied the semiclassical approximation. I don't want to go into it now because it would interrupt the logical flow; whoever is interested, I can discuss it after class. The final answer was correct: we calculated the imaginary-time evolution operator for a particle starting at position zero and ending at position zero for this anharmonic well, and at large $T$ we got the expression above. That is actually the correct answer, but there is a problem in the derivation, which I think some of you caught. For now, let's continue with what we were doing last time, which is the double anharmonic well. (Editor's note: the mistake has to do with the fact that there are more stationary paths to include in the real-time path integral than in the imaginary-time path integral; the derivation for the anharmonic well started from real time, while the double well started from imaginary time – hence the lack of a mistake.) There's no mistake in our derivation for the double anharmonic well.

10 Double anharmonic well (continued)

Figure 12: Double well from last time

We expect classically that there will be two ground states, and we want to calculate the splitting between them using the path integral approach. The way we went about doing that: we wanted to calculate the matrix element for the particle to return to the point it started at,

$$\langle a|e^{-HT/\hbar}|a\rangle = \;?$$

So the stationary condition led to this inverted potential, because we’re working in imaginary time.

Figure 13: Inverted potential in imaginary time

But then we thought about it a little harder and realized there are other paths we should include: other (approximately) stationary paths where the particle rolls down the hill, comes to $-a$, and then rolls back to $+a$. We called each of these rolling events an instanton. We can actually have any number of instantons between time 0 and time $T$, and such a path would also be stationary – not exactly, but to a very good approximation. They're not exact stationary paths because of the true equations of motion: if you leave a particle sitting at $-a$ (in the inverted potential), you have to add some tiny force to get it to roll, and at $+a$ you have to add some tiny force to get it to roll back. This force becomes smaller and smaller, so these become very good approximate stationary paths.

To sum over the paths, we note the number of instantons $n$ has to be even, because the path starts at $a$, goes to $-a$, and must come back to $a$. So all the possible stationary paths we should sum over:
$$\langle a|e^{-HT/\hbar}|a\rangle \approx \sum_{n\ \mathrm{even}}\underbrace{\int_0^T d\tau_1\cdots\int_{\tau_{n-1}}^T d\tau_n}_{T^n/n!}\;\underbrace{e^{-S(\tau_1,\cdots,\tau_n)/\hbar}}_{e^{-nS_0/\hbar}}\;\underbrace{N\det(\cdots)}_{K^n\sqrt{\frac{m\omega}{\pi\hbar}}\,e^{-\omega T/2}} = \sqrt{\frac{m\omega}{\pi\hbar}}\,e^{-\omega T/2}\cosh\left(KTe^{-S_0/\hbar}\right)$$

That was the evolution operator from $a \to a$ (and $-a \to -a$ is similar). What we want now is the time evolution operator from $-a \to a$, which just replaces $\cosh \to \sinh$, since now only odd $n$ contribute. Thus
$$\langle a|e^{-HT/\hbar}|-a\rangle = \sqrt{\frac{m\omega}{\pi\hbar}}\,e^{-\omega T/2}\sinh\left(KTe^{-S_0/\hbar}\right)$$
Figure 14: Ground state wavefunctions for symmetric and anti-symmetric modes

So these are our time evolution matrix elements; now we want to extract some physics from this result. To do that, we need to compare with our expectation from quantum mechanics for what these time evolution operators should look like. So let's remember the picture from quantum mechanics: we expect two nearly degenerate low-lying states. One is the symmetric combination of the two well ground states, and the other is the anti-symmetric combination.

The symmetric one we'll call $|S\rangle$, and the anti-symmetric combination we'll call $|A\rangle$. These are the exact two lowest eigenstates if you solve the quantum mechanical problem. We expect the two to be very close in energy if the barrier is high, because the splitting between them is proportional to the tunneling amplitude to get from one well to the other, which is exponentially suppressed as the barrier gets high.

Figure 15: Energy splittings

We expect these two nearly degenerate states to be separated from the excited states by a finite energy of order $\hbar\omega$, where $\omega$ is the frequency at the bottom of each well. So you will have $|S\rangle$ and $|A\rangle$, and then some higher-energy excited states; the $S$–$A$ splitting is much smaller. At large $T$,
$$e^{-HT/\hbar} \approx |S\rangle e^{-E_S T/\hbar}\langle S| + |A\rangle e^{-E_A T/\hbar}\langle A|$$

And then substitute back in:
$$\langle a|e^{-HT/\hbar}|\pm a\rangle \approx e^{-E_S T/\hbar}\langle a|S\rangle\langle S|\pm a\rangle + e^{-E_A T/\hbar}\langle a|A\rangle\langle A|\pm a\rangle$$

Let's define the probability density
$$|\langle a|S\rangle|^2 = |\langle a|A\rangle|^2 \equiv |c|^2,\qquad \langle a|S\rangle\langle S|\pm a\rangle = |c|^2,\qquad \langle a|A\rangle\langle A|\pm a\rangle = \pm|c|^2$$

So
$$\langle a|e^{-HT/\hbar}|\pm a\rangle \approx |c|^2\left(e^{-E_S T/\hbar} \pm e^{-E_A T/\hbar}\right)$$
which gives us
$$E_S = \frac{\hbar\omega}{2} - \hbar Ke^{-S_0/\hbar},\qquad E_A = \frac{\hbar\omega}{2} + \hbar Ke^{-S_0/\hbar}$$
and we can read off $|c|^2$:
$$|c|^2 = \frac{1}{2}\sqrt{\frac{m\omega}{\pi\hbar}}$$

10.1 What is the instanton action S0 ?

Figure 16: An example of an instanton with width ∆T

$$S_0 = \int_{-\infty}^{\infty}\left[\frac{1}{2}m(\partial_\tau x)^2 + V(x)\right]d\tau$$

In principle, what you would do is find the particular solution to the equations of motion, calculate its time derivative, and just do this integral. But we can simplify it a little, because for a particle rolling in a potential there is a conserved quantity: conserved energy. So we have a conservation of energy law.

Now I'm going to put "energy" in quotes, because this isn't the usual dynamics: it is motion in the inverted potential, since we're working in imaginary time. The conserved "energy" of the inverted-potential problem is the kinetic energy minus the potential energy – usually you would put a plus sign here, but the potential is inverted.

10.1.1 Conservation of “energy"

$$\frac{1}{2}m(\partial_\tau x)^2 - V(x) = \text{const.}$$
At $\tau = -\infty$: $x(\tau) = -a$, $\partial_\tau x = 0$, and $V(-a) = 0$, so $\frac{1}{2}m(\partial_\tau x)^2 - V(x) = 0$.
$$S_0 = \int_{-\infty}^{\infty}\left[\frac{1}{2}m(\partial_\tau x)^2 + V(x)\right]d\tau = \int_{-\infty}^{\infty}2V(x)\,d\tau = \int_{-a}^{a}\frac{2V(x)}{dx/d\tau}\,dx$$

So let's simplify by changing variables from $\tau$ to $x$. By the conservation "law" above, we know
$$\frac{dx}{d\tau} = \sqrt{\frac{2V(x)}{m}}$$

So the instanton action can be rewritten as
$$S_0 = \int_{-a}^{a}\sqrt{2mV(x)}\,dx$$

and the energy splitting is
$$\Delta E = E_A - E_S = 2\hbar Ke^{-S_0/\hbar} = 2\hbar K\,e^{-\frac{1}{\hbar}\int_{-a}^{a}\sqrt{2mV(x)}\,dx}$$
which looks a lot like the WKB result for tunneling.
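To make this concrete, here is a small check (my own example, not from the lecture) for the quartic double well $V(x) = \lambda(x^2 - a^2)^2$, where the integral can be done in closed form: $\sqrt{2mV} = \sqrt{2m\lambda}\,(a^2 - x^2)$ on $[-a, a]$, so $S_0 = \frac{4}{3}\sqrt{2m\lambda}\,a^3$.

import numpy as np
from scipy.integrate import quad

# Quartic double well V(x) = lam*(x^2 - a^2)^2; parameter values are arbitrary.
m, lam, a = 1.0, 0.5, 1.5
V = lambda x: lam * (x**2 - a**2)**2

# Instanton action S0 = \int_{-a}^{a} sqrt(2 m V(x)) dx
S0_numeric, _ = quad(lambda x: np.sqrt(2.0 * m * V(x)), -a, a)
S0_exact = (4.0 / 3.0) * np.sqrt(2.0 * m * lam) * a**3
print(S0_numeric, S0_exact)   # agree; splitting ~ 2*hbar*K*exp(-S0/hbar)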

10.1.2 Remarks

1. It is non-perturbative ($\sim e^{-1/\hbar}$). If you only looked at higher-order fluctuations around a single straight path, you would never get any splitting at all, because the splitting has to do with tunneling between the two minima.

2. When I wrote down the formulas for the energies of the symmetric and anti-symmetric states, it wasn't really correct. Strictly speaking, it is not correct to write
$$E_S = \frac{\hbar\omega}{2} - \hbar Ke^{-S_0/\hbar}$$
If you go beyond Gaussian fluctuations about the straight paths, we have dropped terms of order $\frac{\hbar\omega}{2}\times O(\hbar) \gg \hbar Ke^{-S_0/\hbar}$, and we are also not including higher-order fluctuations about the instanton path (only the straight paths). More precisely, what we calculated is the leading-order term in the splitting:
$$\Delta E = 2\hbar Ke^{-S_0/\hbar}$$

3. Check the "dilute gas" assumption: the typical distance between instantons should be much larger than their size. How are we going to check that? Remember the expression for the matrix element: to calculate it, we summed over all instanton configurations,
$$\langle\cdots|\cdots\rangle = \sum_n \frac{(KTe^{-S_0/\hbar})^n}{n!}$$
If you look at a power series like this with a large argument (which it is, because we make $T$ large), the first several terms grow, because a large number is being raised to a bigger and bigger power; far enough out in the expansion, the factorial wins and the terms shrink. The dominant term comes from some value of $n$ determined by how big the argument is. What is that value of $n$? A quick estimate (a nice exercise) gives

$$n \sim KTe^{-S_0/\hbar}\quad\Rightarrow\quad \underbrace{\frac{n}{T}}_{\text{instanton density}} \sim Ke^{-S_0/\hbar}$$
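The quick estimate, spelled out: by Stirling's approximation the terms of $\sum_n x^n/n!$ satisfy
$$\ln\frac{x^n}{n!} \approx n\ln x - n\ln n + n,\qquad \frac{d}{dn}\left(n\ln x - n\ln n + n\right) = \ln\frac{x}{n} = 0 \;\Rightarrow\; n \sim x = KTe^{-S_0/\hbar}$$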

So we need (remembering the average size of an instanton is $\Delta T$, so $1/\Delta T \sim \omega$):
$$\frac{n}{T} \ll \frac{1}{\Delta T}\quad\Rightarrow\quad Ke^{-S_0/\hbar} \ll \frac{1}{\Delta T} \sim \omega$$
And we will see that $K \sim \sqrt{\frac{S_0}{\hbar}}\,\omega$, which gives
$$\sqrt{\frac{S_0}{\hbar}}\,e^{-S_0/\hbar} \ll 1$$
which means the dilute gas approximation is okay as long as $S_0 \gg \hbar$.

10.2 Missing part of derivation: K

So $K$ was some parameter whose form we just assumed: for each instanton we get one factor of $K$, together with the prefactor below. Let's quickly motivate that and then try to calculate what $K$ is.

We assumed that the Gaussian fluctuation factor for a configuration with $n$ instantons equals
$$\sqrt{\frac{m\omega}{\pi\hbar}}\,e^{-\omega T/2}\,K^n$$
The basic idea is that we want to calculate the Gaussian integral over fluctuations about some path. Let's compare the Gaussian integral around the instanton path to the Gaussian integral around a straight path sitting at $\pm a$.

Figure 17: The Gaussian integral idea between instanton stationary path and a straight-line sta-
tionary path

So the idea is that K is some factor that gets introduced to sort of make corrections to the Gaussian
integrals that get introduced when an instanton is integrated over.

Figure 18: Then one instanton path is like a stationary path multiplied by a prefactor K

10.2.1 Calculate K

Figure 19: The method to calculate K involves taking the ratio of what we showed above

So
$$K = \frac{\int\mathcal{D}y\;e^{-\frac{1}{2\hbar}y\frac{\delta^2 S}{\delta x^2}\big|_{x_c}y}}{\int\mathcal{D}y\;e^{-\frac{1}{2\hbar}y\frac{\delta^2 S}{\delta x^2}\big|_{x=a}y}} = \frac{N\det\left(\frac{1}{\hbar}\frac{\delta^2 S}{\delta x^2}\Big|_{x_c}\right)^{-1/2}}{N\det\left(\frac{1}{\hbar}\frac{\delta^2 S}{\delta x^2}\Big|_{x=a}\right)^{-1/2}} = \frac{\det\left(\frac{1}{\hbar}(-m\partial_\tau^2 + V''(x_c))\right)^{-1/2}}{\det\left(\frac{1}{\hbar}(-m\partial_\tau^2 + m\omega^2)\right)^{-1/2}}$$

So we need to diagonalize $(-m\partial_\tau^2 + V''(x_c))$. Problem: this operator has an eigenstate with eigenvalue $\lambda = 0$. That means the determinant will be 0, and when we raise it to the $-1/2$ power we get infinity, which suggests we've done something wrong. Why does it have an eigenstate with eigenvalue 0? To see this:

$$m\partial_\tau^2 x_c = V'(x_c)\qquad\text{(the classical equation of motion)}$$
Differentiate with respect to $\tau$:
$$m\partial_\tau^3 x_c = V''(x_c)\,\partial_\tau x_c$$
This gives
$$\underbrace{\left(-m\partial_\tau^2 + V''(x_c)\right)}_{\text{operator}}\underbrace{(\partial_\tau x_c)}_{\text{state}} = 0$$
Thus we have a vanishing eigenvalue $\Rightarrow \det = 0 \Rightarrow K \to \infty$.

Figure 20: A fluctuation of an instanton by shifting it in time – a flat direction.

Now, to resolve the problem, let's think about what this zero eigenvalue means. Why are we even thinking about the eigenstates of this operator? We're considering the various fluctuations $y$ around the path; the quadratic action describes the cost of a given fluctuation, and we want to integrate over all of them. A zero eigenvalue corresponds to a particular $y$ along which the quadratic form is flat: it's like doing a Gaussian integral whose quadratic form has a flat direction, and when we integrate over that flat direction we get infinity. Why is there a flat direction? It corresponds to $y \propto \partial_\tau x_c$. And what is $\partial_\tau x_c$? If you plot it, it is 0 everywhere except where the tunneling event occurs.

So the flat direction just corresponds to shifting the instanton in time, forward or backward – there's a simple reason it's there. There are lots of other, much more complicated fluctuations, but this simple time-shift is our flat direction.

So the infinity comes from the fact that we can always shift the instanton's position in time without costing any action. But if you think back, you should realize we don't actually want to include that flat direction when we do the integral: we are already integrating over the positions of the instantons, so including it in $K$ would double-count. The original equation for $K$ is therefore not correct. The $\lambda = 0$ mode $\leftrightarrow$ integration over the position of the instanton. The correct expression for $K$ is then
 
$$\int K\,d\tau = \frac{\int\mathcal{D}y\;e^{-\frac{1}{2\hbar}y\frac{\delta^2 S}{\delta x^2}\big|_{x_c}y}}{\int\mathcal{D}y\;e^{-\frac{1}{2\hbar}y\frac{\delta^2 S}{\delta x^2}\big|_{x=a}y}} \equiv \frac{\text{numerator}}{\text{denominator}}$$

Numerator
Write $y = \sum_n c_n y_n$, where $y_n$ has eigenvalue $\lambda_n$ and $y_1$ has eigenvalue $\lambda_1 = 0$. Then
$$\text{numerator} = \int\prod_{n\geq 1}dc_n\;e^{-\frac{1}{2\hbar}\sum_n\lambda_n c_n^2} = \int dc_1\int\prod_{n>1}dc_n\;e^{-\frac{1}{2\hbar}\sum_{n>1}\lambda_n c_n^2}$$

$dc_1 = (\text{const.})\,d\tau$. To get the constant relating $c_1$ and $\tau$: shifting the instanton in time by $d\tau$ changes the curve by
$$\delta x_c = \frac{\partial x_c}{\partial\tau}\,d\tau$$
Now ask how much the curve changes if you change $c_1$ a little bit:
$$\delta x_c = y_1\,dc_1,\qquad y_1 = \sqrt{\frac{m}{S_0}}\times\frac{dx_c}{d\tau}\quad\left(\text{normalization factor so that }\int_{-\infty}^{\infty}y_1^2\,d\tau = 1\right)$$
$$\Rightarrow\quad dc_1 = \sqrt{\frac{S_0}{m}}\,d\tau$$
So
$$K = \sqrt{\frac{S_0}{m}}\cdot\frac{\int\prod_{n>1}dc_n\;e^{-\frac{1}{2\hbar}\sum_n\lambda_n c_n^2}}{\int\mathcal{D}y\;e^{-\frac{1}{2\hbar}y\frac{\delta^2 S}{\delta x^2}\big|_{x=a}y}} = \sqrt{\frac{S_0}{m}}\cdot\frac{\frac{N}{\sqrt{2\pi\hbar}}\,{\det}'\left[\frac{1}{\hbar}(-m\partial_\tau^2 + V''(x_c))\right]^{-1/2}}{N\det\left[\frac{1}{\hbar}(-m\partial_\tau^2 + m\omega^2)\right]^{-1/2}} = \sqrt{\frac{S_0}{m}}\sqrt{\frac{m}{\hbar}}\cdot\frac{{\det}'\left[-\partial_\tau^2 + \frac{1}{m}V''(x_c)\right]^{-1/2}}{\det\left[-\partial_\tau^2 + \omega^2\right]^{-1/2}}$$
where ${\det}'$ means the determinant with the zero eigenvalue omitted. Since the ratio of determinants is of order $\omega$, this gives $K \sim \sqrt{S_0/\hbar}\,\omega$, as claimed above.

Notes Quantum Many Body Physics
Day 6
October 16th, 2013 Giordon Stark

11 Path Integral for Spin

From now on we will set $\hbar = 1$ – we are done with the semiclassical discussion which took up most of last week. Consider the simplest quantum system one can write down: a single spin-1/2, with a Hamiltonian describing it in a magnetic field,
$$H = -\vec B\cdot\vec S$$

Just to make sure we are on the same page,
$$S^x = \frac{1}{2}\sigma_x = \frac{1}{2}\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix},\qquad S^y = \frac{1}{2}\sigma_y = \frac{1}{2}\begin{pmatrix}0 & -i\\ i & 0\end{pmatrix},\qquad S^z = \frac{1}{2}\sigma_z = \frac{1}{2}\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}$$

I'm not familiar with all the history, but the spin path integral came much later than the path integral for various field theories – the 60s, 70s, 80s really. As far as I know, people wanted to write down a path integral for spin and didn't really know how to do it. There are definitely some subtleties in developing a path integral for spin, but here we're going to take a naive, not particularly rigorous approach, and write down something that is useful in some circumstances.

How can we write down a path integral for spin – how can we describe $H$ by a path integral? One approach: work in the $|s^z = \pm 1/2\rangle$ basis,
$$\langle s^z_f|e^{-iHt}|s^z_i\rangle = \int\mathcal{D}s^z\,\cdots$$
But of course $s^z$ is a discrete number, so the paths we would be talking about have jumps in them. Nevertheless, one can imagine doing this: inserting complete sets of states, breaking the evolution into small time steps, and developing some kind of path integral. But this approach has two problems.

Figure 21: basis $\vec n$


1. Discontinuous paths – due to the discrete quantum number we are dealing with. This is a little handwavy, but we would like to have smoother paths.

2. It breaks the $SU(2)$ symmetry – or at least, the symmetry is not manifest.

Our choice of basis was naive. There’s another approach which sort of addresses both of these
problems. We’re going to use the fact that you don’t need to use orthogonal states, you just need
to use states which satisfy some kind of completeness relation.

Instead, use spin coherent states $|\vec n\rangle$, where $|\vec n\rangle$ is defined by
$$\vec n\cdot\vec S\,|\vec n\rangle = \frac{1}{2}|\vec n\rangle$$

Figure 22: definition of spin coherent states

This is an overcomplete basis, but we can handle that. These states do indeed satisfy a completeness relation: if you integrate $|\vec n\rangle\langle\vec n|$ over all points $\vec n$ on the unit sphere, you get the identity matrix,
$$\int_{|\vec n|=1}\frac{d^2 n}{2\pi}\,|\vec n\rangle\langle\vec n| = 1$$
How can we see that? Almost immediately, from a quick derivation. The left-hand side is rotationally symmetric, since we integrate over all possible values of $\vec n$ – more precisely, it commutes with all rotations – so it has to be proportional to the identity. To fix the constant, take the trace of both sides. The right-hand side is the $2\times 2$ identity, so you get 2. On the left-hand side, each $|\vec n\rangle\langle\vec n|$ has trace 1, and we are integrating over the sphere, which has area $4\pi$; with the $2\pi$ in the measure this also gives 2.
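As a quick numerical check of this completeness relation (my own sketch, using the explicit parameterization $|\vec n\rangle = (\cos\frac{\theta}{2},\,e^{i\phi}\sin\frac{\theta}{2})$ introduced in the next subsection):

import numpy as np

# Check  \int_{|n|=1} d^2n/(2 pi) |n><n| = identity  by brute-force quadrature.
Nth, Nph = 200, 200
thetas = np.linspace(0.0, np.pi, Nth)
phis = np.linspace(0.0, 2.0 * np.pi, Nph, endpoint=False)
dth, dph = thetas[1] - thetas[0], phis[1] - phis[0]

M = np.zeros((2, 2), dtype=complex)
for th in thetas:
    for ph in phis:
        z = np.array([np.cos(th / 2), np.exp(1j * ph) * np.sin(th / 2)])
        M += np.outer(z, z.conj()) * np.sin(th) * dth * dph / (2.0 * np.pi)
print(np.round(M, 3))   # ~ 2x2 identity matrix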

So this is the completeness relation and now we’ll use this completeness relation to derive our path
integral.

11.1 Phase convention

First there's one issue we have to deal with, which is a little annoying and maybe not so essential: I haven't really defined $|\vec n\rangle$ by this eigenvalue equation – there's still a phase ambiguity. To be precise, I have to tell you the phase choice. You can use any phase choice you like and develop a path integral for any of them; they're not really that different from one another. But just to be concrete, I'll choose a particular phase convention.

Let's parameterize $\vec n$ by the polar angle $\theta$ from the North Pole and the azimuthal angle $\phi$:
$$\vec n = (\cos\phi\sin\theta,\ \sin\phi\sin\theta,\ \cos\theta)$$
Define
$$|\vec n\rangle = \begin{pmatrix}z_1\\ z_2\end{pmatrix} = \begin{pmatrix}\cos\frac{\theta}{2}\\ e^{i\phi}\sin\frac{\theta}{2}\end{pmatrix}\quad\leftarrow\text{an arbitrary choice of phase}$$

Figure 23: The parameterization

There are probably some issues with this phase convention: it breaks down, becoming singular near one of the poles – in this case the South Pole. We'll be okay with this convention. So now we want to work out our path integral: we want to calculate $\langle n_f|e^{-iHt}|n_i\rangle$.

11.2 Hamiltonian is zero

$$\langle n_f|e^{-iHt}|n_i\rangle = \int\prod_{k=1}^{N-1}\frac{d^2 n_k}{2\pi}\;\langle n_f|n_{N-1}\rangle\langle n_{N-1}|n_{N-2}\rangle\cdots\langle n_1|n_i\rangle\qquad\text{(suppose } H = 0\text{)}$$

So we should calculate each one of these matrix elements:
$$\langle n_k|n_{k-1}\rangle = \underbrace{1 - \langle n_k|n_k\rangle}_{0} + \langle n_k|n_{k-1}\rangle \approx e^{-\langle n_k|n_k\rangle + \langle n_k|n_{k-1}\rangle}$$

A somewhat hand-wavy justification for this is as follows. In the limit where $N$ is very large, we have a product of many, many of these matrix elements. There's certainly no reason a priori why $n_k$ and $n_{k-1}$ have to be close together – they can be quite different from one another. But most of these matrix elements have to be very close to 1, otherwise the product is extremely small. In other words, the dominant contribution to the integral, in the limit of large $N$, comes from smooth paths: the dominant paths have $n_k$ close to $n_{k-1}$. Note that the farthest apart $n_k$ and $n_{k-1}$ can be is antiparallel, in which case the states are orthogonal (the inner product is zero).

$$\langle n_f|e^{-i0\cdot t}|n_i\rangle = \lim_{N\to\infty}\int\prod_k\frac{d^2 n_k}{2\pi}\;e^{i\Delta t\sum_{k=1}^N i\frac{\langle n_k|n_k\rangle - \langle n_k|n_{k-1}\rangle}{\Delta t}} = \int\mathcal{D}n\;e^{iS[n]}$$
where the action is
$$S = \int i\langle n|\partial_t|n\rangle\,dt$$

Write the action in terms of $z = \begin{pmatrix}z_1\\ z_2\end{pmatrix} = |\vec n\rangle$:
$$S = \int iz^\dagger\partial_t z\;dt$$

and then, in terms of $\theta, \phi$:
$$z = \begin{pmatrix}\cos\frac{\theta}{2}\\ e^{i\phi}\sin\frac{\theta}{2}\end{pmatrix}$$
Simplifying the $z^\dagger\partial_t z$ term:
$$z^\dagger\partial_t z = \begin{pmatrix}\cos\frac{\theta}{2} & e^{-i\phi}\sin\frac{\theta}{2}\end{pmatrix}\cdot\begin{pmatrix}-\frac{1}{2}\sin\frac{\theta}{2}\,\partial_t\theta\\ \frac{1}{2}e^{i\phi}\cos\frac{\theta}{2}\,\partial_t\theta + ie^{i\phi}\sin\frac{\theta}{2}\,\partial_t\phi\end{pmatrix} = i\sin^2\!\left(\frac{\theta}{2}\right)\partial_t\phi = i\left(\frac{1 - \cos\theta}{2}\right)\partial_t\phi$$
So we can parameterize the action in terms of $\theta, \phi$:
$$S = \int\left(\frac{\cos\theta - 1}{2}\right)\partial_t\phi\;dt$$
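One can verify this algebra symbolically; a minimal sketch with sympy (my own check, treating $\theta$ and $\phi$ as functions of $t$):

import sympy as sp

t = sp.symbols('t', real=True)
theta = sp.Function('theta', real=True)(t)
phi = sp.Function('phi', real=True)(t)

z = sp.Matrix([sp.cos(theta / 2), sp.exp(sp.I * phi) * sp.sin(theta / 2)])
expr = sp.simplify((z.conjugate().T * z.diff(t))[0])
print(expr)   # should reduce to I*sin(theta/2)**2 * Derivative(phi, t)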

11.3 Non-zero Hamiltonian

So that was the case where the Hamiltonian was zero. Let’s talk about what happens when you
have a non-zero Hamiltonian. So you can repeat the whole analysis, but I’m not going to do it here.

You want to find
$$\langle n_f|e^{-iHt}|n_i\rangle = \int\mathcal{D}n\;e^{iS[n]}$$

and the non-zero Hamiltonian just changes the action:
$$S = \int\left(i\langle n|\partial_t|n\rangle - \langle n|H|n\rangle\right)dt$$

So let's think about the particular choice $H = -\vec B\cdot\vec S$:
$$S = \int\left(i\langle n|\partial_t|n\rangle + \frac{1}{2}\vec B\cdot\vec n\right)dt$$
where $\langle n|\vec S|n\rangle = \frac{1}{2}\vec n$. In terms of $z$:
$$S = \int\left(iz^\dagger\partial_t z + \frac{1}{2}\vec B\cdot z^\dagger\vec\sigma z\right)dt$$

where
$$\langle\vec n|\vec B\cdot\vec S|\vec n\rangle = z^\dagger\,\frac{\vec B\cdot\vec\sigma}{2}\,z = \frac{\vec B}{2}\cdot z^\dagger\vec\sigma z$$

12 Larger Spin - Generalization

We want a path integral for arbitrary spin, using the same approach as before. We still use spin coherent states, which means the same thing as before:
$$|\vec n\rangle:\quad \vec n\cdot\vec S\,|\vec n\rangle = S|\vec n\rangle$$
Here $S$ can be any half-integer, not just 1/2. If you follow exactly the same steps – with an explicit normalized state $|\vec n\rangle$ and its completeness relation – you derive the action
$$S = \int dt\left(i\langle\vec n|\partial_t|\vec n\rangle + S\vec B\cdot\vec n\right)$$

There is an extra factor of $S$ because we are dealing with a larger spin; otherwise it is pretty much the same. The only question is how exactly to parameterize the spin coherent states for larger spins. We can parameterize $|\vec n\rangle$ in terms of $z = \begin{pmatrix}z_1\\ z_2\end{pmatrix}$: these are states you can think of as pointing in the $\vec n$ direction, i.e. eigenstates of the operator $\vec n\cdot\vec S$. We can think of any spin as being

composed out of some number of spin-1/2's: a spin 1, say, can be thought of as two spin-halves combined, and so on. Write
$$|\vec n\rangle = \underbrace{|z\rangle\otimes|z\rangle\otimes\cdots\otimes|z\rangle}_{2S}$$

So
$$\vec S = \frac{1}{2}\vec\sigma\otimes 1\otimes 1\cdots\otimes 1 + 1\otimes\frac{1}{2}\vec\sigma\otimes 1\cdots + \cdots$$
What I'm doing here is representing my spin 1 in terms of two spin-halves combined into a triplet (there are three triplet states). It's equivalent to thinking of a single spin 1: I can write down operators acting on those states that satisfy the same commutation relations as the usual spin operators – namely the total spin $S_1 + S_2$, which obeys the same commutation relations as a single spin 1 would. It is an equivalent representation; there is no difference between a single spin 1 and the triplet of two spin-$\frac{1}{2}$'s. So this is just a way to parameterize spin-1 states. To be clear, I'm not claiming that all spin states can be written as coherent states – I am claiming all states can be written as superpositions of these coherent states.

The total spin is just the sum of the spin operators acting on each factor, and if I dot it with $\vec n$, the product state is an eigenstate of that operator. Effectively, this is a trick for reducing a larger spin to a smaller spin problem. Using this representation, we can calculate $\langle\vec n|\partial_t|\vec n\rangle$ for the large spin in terms of $z_1, z_2$: taking the time derivative gives a contribution from each factor,
$$\langle\vec n|\partial_t|\vec n\rangle = \left(\langle z|\otimes\cdots\otimes\langle z|\right)\left(\partial_t|z\rangle\otimes\cdots\otimes|z\rangle + |z\rangle\otimes\partial_t|z\rangle\otimes\cdots\otimes|z\rangle + \cdots\right) = 2S\,z^\dagger\partial_t z$$

Multiplying this out, I get $2S$ identical contributions, one for each factor – all together $2S$ times what I got before, the $z^\dagger\partial_t z$. Similarly, if I want to calculate:
$$\langle\vec n|\vec B\cdot\vec S|\vec n\rangle = S\vec B\cdot z^\dagger\vec\sigma z$$

Plugging in this expression, the expectation value gets a contribution from the first spin, the second spin, the third spin, and so on – a sum over all of them, giving a factor of $2S$ relative to the single-spin result $\frac{1}{2}\vec B\cdot z^\dagger\vec\sigma z$. Putting this all together, the action for the large spin looks like the previous action except for the factors of $S$:
$$S = \int dt\left(2iSz^\dagger\partial_t z + S\vec B\cdot z^\dagger\vec\sigma z\right)$$

So the larger spin is reduced to the old expression that we had. You can see this makes sense physically: the classical limit of a path integral is the limit where the action is large compared to $\hbar$. Looking at this expression, there's an overall factor of the spin $S$ you can pull out front, so the limit $S \to \infty$ of very large spin is precisely the limit where the action is very large compared with $\hbar$. It makes sense that there's a factor of $S$ out front, because large $S$ is the limit where this becomes a classical system.

13 Berry Phase

13.1 Berry Phase and adiabatic evolution

Consider a spin-$\frac{1}{2}$ in a time-dependent magnetic field,
$$H = -\vec B(t)\cdot\vec S$$
First suppose $\vec B = B\vec n$ is constant in time and points in some direction $\vec n$. The ground state is $|\vec n\rangle$, where $\vec S\cdot\vec n\,|\vec n\rangle = \frac{1}{2}|\vec n\rangle$. If we evolve in time keeping the magnetic field fixed,
$$\langle\vec n|e^{-iHT}|\vec n\rangle = e^{-iE_0 T},\qquad E_0 = \text{ground state energy}$$

Now suppose $\vec B$ is changing in time, and consider the limit where it changes very slowly:
$$\vec B = B\cdot\vec n(t)$$
If $B$ changes slowly enough, there's a general result you're probably familiar with which says the system stays in its ground state: $|\vec n(0)\rangle$ evolves into $|\vec n(t)\rangle$ – the "adiabatic theorem". The relevant time scale is set by $B$, so you have to change the field slowly compared with $1/B$ (throw in $\hbar$'s to fix up the units).

So where are we? At any given time, our spin follows and tracks the magnetic field. Let's repeat the same calculation we made before, but this time between an initial state $|n(0)\rangle$ and a final state $|n(T)\rangle$ (note the time-ordering symbol $\mathcal{T}$):
$$\langle n(T)|\mathcal{T}e^{-i\int_0^T(-\vec B(t)\cdot\vec S)\,dt}|n(0)\rangle = \langle n(T)|\cdots|n(2\Delta t)\rangle\langle n(2\Delta t)|e^{-i\vec B(\Delta t)\cdot\vec S\,\Delta t}|n(\Delta t)\rangle\langle n(\Delta t)|e^{-i\vec B(0)\cdot\vec S\,\Delta t}|n(0)\rangle$$
$$= \left(e^{-i\Delta t\,E_0}\right)^N\underbrace{\langle n(T)|n(T-\Delta t)\rangle\cdots\langle n(\Delta t)|n(0)\rangle}_{e^{i\int dt\; i\langle n|\partial_t|n\rangle}} = e^{-iE_0 T}\,e^{i\theta_B},\qquad \theta_B = \int dt\;i\langle n|\partial_t|n\rangle$$

Here we inserted projections onto the instantaneous ground states, which (by the adiabatic theorem) doesn't change anything: we know that at time $\Delta t$ the system is in $|n(\Delta t)\rangle$, et cetera. We see that there is an additional contribution to the phase: the Berry phase.

13.1.1 Some Observations

1. $\theta_B$ only depends on the path, independent of the time taken to traverse it. For example, a change of variables shows
$$\int dt\;i\langle n|\partial_t|n\rangle = \int dt\;iz^\dagger\partial_t z = \int_\gamma iz^\dagger dz$$
which depends only on the path $\gamma$.

2. $\theta_B$ is real:
$$i\langle n|\partial_t|n\rangle - \left(i\langle n|\partial_t|n\rangle\right)^* = i\langle n|\partial_t|n\rangle + i\left(\partial_t\langle n|\right)|n\rangle = i\,\partial_t\left(\langle n|n\rangle\right) = 0$$
by the product rule: the integrand itself is real, so the imaginary part vanishes.

3. eiθB defines a phase factor for every path γ on the sphere.

How does $\theta_B$ depend on the choice of phase for $|\vec n\rangle$? Let $|n(t)\rangle \to e^{i\phi(t)}|n(t)\rangle$. Then
$$\theta_B \to \int_0^T dt\;i\langle n(t)|e^{-i\phi(t)}\partial_t e^{i\phi(t)}|n(t)\rangle = \int_0^T dt\left(i\langle n(t)|\partial_t|n(t)\rangle - \partial_t\phi\right)$$

56
The first term is the old Berry phase, and the $\partial_t\phi$ term just integrates to a phase difference between the endpoints:
$$\theta_B \to \theta_B + \phi(0) - \phi(T)$$

Figure 24: path $\gamma$

All right, so what do we see? The choice of phase has a definite impact on the Berry phase: it shifts it, and in particular all that matters is the phase of the initial and final states. There are two cases to consider: an open path from $n(0)$ to $n(T)$, and a closed path. If it is an open path – which is what I was implicitly assuming here – this change of phase changes the Berry phase. That means the Berry phase is not a well-defined quantity for open paths; it depends on the phase choice.

So for open paths, $\theta_B$ depends on the phase choice – that's what this calculation shows. On the other hand, for a closed path we're not allowed to choose $\phi$ arbitrarily: by a closed path I mean the final state is identical to the initial state, including its phase, which restricts the phase changes we can make. In particular, $\phi(0)$ would have to equal $\phi(T)$, and then the two contributions cancel. So in that case, for closed paths, the Berry phase is independent of the choice of phase.

To conclude. For open paths, θB depends on the phase choice. For closed paths, θB is independent
of phase choice.

(a) open path (b) closed path

Figure 25: paths – open and closed

What does it mean to choose them? They're different states, right? You can't compare the phases of two distinct physical states. To be precise: if you have a closed path and you insist that $n(0)$ and $n(T)$ are identical states, even up to the choice of phase, then $\theta_B$ is independent of any choice of phase – for the initial and final states or for any of the states in between. The idea is the following. If you have an open path, I can actually choose the phases of all my intermediate points in such a way that the Berry phase vanishes, so that the integrand vanishes everywhere along the path.

Figure 26: Open path with prescription of $\Delta t$

Think about it as the product $\langle n(T)|n(T-\Delta t)\rangle\cdots\langle n(\Delta t)|n(0)\rangle$: I split the path up into these little steps. Now, $n(0)$ is given to me. What I do is choose the phase of $n(\Delta t)$ – I can choose it however I like – so that $\langle n(\Delta t)|n(0)\rangle$ is real. That gives a unique choice of phase for $n(\Delta t)$. Then choose the phase of the next state so the next overlap is real, and keep proceeding all the way up to $n(T)$. When I do that, each factor is individually real, so the phase is zero: in the limit $N \to \infty$ the product of the $N$ factors just goes to one. In particular, the Berry phase vanishes – and it vanishes everywhere along the path, in the integrand we use to compute the integral.

Another way to see this: for open paths, we can always choose phases so that $\theta_B = 0$:
$$e^{i\theta_B} = \lim_{N\to\infty}\underbrace{\langle n(T)|n(T-\Delta t)\rangle}_{\text{choose real}}\cdots\underbrace{\langle n(\Delta t)|n(0)\rangle}_{\text{choose real}} = \lim_{N\to\infty}\left(1 - O(\Delta t)^2\right)^N = 1$$
If you try this for a closed path, you end up with $|n(T)\rangle \neq |n(0)\rangle$.

Figure 27: closed path with prescribed $\Delta t$

Now suppose you tried to do that for a closed path. You can choose the phase of the first state so the first overlap is real, and keep doing that as you go around. But to make the final overlap real, you need a particular choice of phase for $n(T)$ – and if you follow this prescription, you end up with $n(T) \neq n(0)$. The relative phase is
$$|n(T)\rangle = e^{i\theta_B}|n(0)\rangle$$
Thus the Berry phase always shows up, either in the integral $\int i\langle n|\partial_t|n\rangle\,dt$ or in the relative phase between $|n(0)\rangle$ and $|n(T)\rangle$: there is a "frustration in phase choice".

The phase of $n(T)$ is fixed by this procedure: when you go all the way around, you come back with a phase that differs from your original phase, and the phase difference between the two is exactly the Berry phase, $|n(T)\rangle = e^{i\theta_B}|n(0)\rangle$. If all the intermediate overlaps are made real, the only phase left at the end is the mismatch between $n(0)$ and $n(T)$. So there are two ways the Berry phase can appear in a calculation: it can appear in the integrand, as a phase accumulating along the path, or it can appear as a phase difference at the end. The point is that no matter how you choose your phase convention, you will always see the phase somewhere – either along the path, or when you compare the initial and final states. You can push the Berry phase into the relative phase at the end, or use the convention I was assuming earlier, where the initial and final states are the same state and the Berry phase shows up in the integral. One way or the other, you always see this phase.

The reason the closed path has a frustration in phase choice comes down to how we look at the Berry phase:
$$\text{closed:}\quad e^{i\theta_B} = \langle n(0)|n(T-\Delta t)\rangle\cdots\langle n(\Delta t)|n(0)\rangle$$
$$\text{open:}\quad e^{i\theta_B} = \langle n(T)|n(T-\Delta t)\rangle\cdots\langle n(\Delta t)|n(0)\rangle$$
With our prescription, we can choose the phases step by step so that each $\langle n(k\Delta t)|n((k-1)\Delta t)\rangle$ is real. So in both the open and closed cases we can arrange
$$\langle n(T-\Delta t)|n(T-2\Delta t)\rangle\cdots\langle n(\Delta t)|n(0)\rangle \in \mathbb{R}$$

Then all that is left is the last factor:
$$\text{closed:}\quad e^{i\theta_B} \propto \langle n(0)|n(T-\Delta t)\rangle$$
$$\text{open:}\quad e^{i\theta_B} \propto \langle n(T)|n(T-\Delta t)\rangle$$
For the open path, we can choose the phase of $n(T)$ to make that term real, so that $\theta_B = 0$. In the closed path, however, we do not get that choice, so $\theta_B$ need not be zero. So when going around a closed path, the spin might pick up an extra phase: the Berry phase.

Notes Quantum Many Body Physics
Day 7
October 18th, 2013 Giordon Stark

14 Last Time

Okay. I guess we should start. In the first part of this class I want to continue our discussion of the Berry phase from last time, tie up some loose ends, and maybe clarify some things – I think at the end I was rushed. Let's remember what we talked about last time. We considered a model system: a single spin one-half in a time-dependent magnetic field, with Hamiltonian
$$H(t) = -\vec S\cdot\vec B(t),\qquad \vec B(t) = B\cdot\vec n(t)$$

And then we wanted to ask what the phase of the final state was:
$$\langle n(T)|\mathcal{T}e^{-i\int(-\vec B(t)\cdot\vec S)\,dt}|n(0)\rangle$$

The $\mathcal{T}$ here is the time-ordering symbol; we'll cover it later in the class. When you calculate the time evolution operator for a time-dependent Hamiltonian, it is not just the exponential – you have to say something a little more precise about how you order the operators.

So our question was: what happens when we do this? We know that in the end we're going to be in the ground state, with the spin pointing along the magnetic field direction $\hat n$; the question is, what's the phase? We calculated this and found two contributions. First, a contribution which grew linearly with the time of the process (from time 0 to time $T$). Second, an additional contribution beyond the typical dynamical phase, which we wrote as the Berry phase:
$$e^{i\theta_B} = \lim_{N\to\infty}\langle n(T)|n(T-\Delta t)\rangle\langle n(T-\Delta t)|\cdots|n(\Delta t)\rangle\langle n(\Delta t)|n(0)\rangle = \exp\left(i\int_0^T i\langle n|\partial_t|n\rangle\,dt\right)$$

Figure 28: Berry phase calculations for an open path from n(0) to n(T )

Now, the next question we wanted to ask was: to what extent does this Berry phase depend on our choice of phase for the states? I know $|n(t)\rangle$ only up to a phase, and that phase choice was arbitrary. So suppose we choose different phases, $|n(t)\rangle \to e^{i\phi(t)}|n(t)\rangle$ – in other words, someone working in another galaxy may use some other phase convention. We found that their Berry phase would differ from ours by two contributions:
$$\theta_B \to \theta_B + \phi(0) - \phi(T)$$

We were asking: what is the phase accumulated between the state $n(0)$ and the state $n(T)$? Obviously, if I change the phases of those two states, the matrix element changes accordingly – this is all saying the same thing. So it really makes sense that for an open path the Berry phase depends on our particular choice of phases; it is not a particularly meaningful or physical quantity. For open paths, $\theta_B$ depends on the phase choice.

Figure 29: A Berry phase calculation for a closed path, $|n(T)\rangle = |n(0)\rangle$.

On the other hand, for closed paths the story is very different, because there is a natural convention we can take: if we have a closed path, then at the end our system returns to the original ground state, and we can insist that the phase we choose for $n(T)$ is the same as for $n(0)$. So let's do the calculation in the case where $n(T) = n(0)$. Physically, the question we're asking is: we start in some state, move our magnetic field around some loop, come back to our state, and ask what the phase of the final state is compared with the phase of the initial state. There are two contributions: the usual ground-state energy and the Berry phase. The claim is that when you go back to the final state and compare its phase to the phase of the initial state, it is a well-defined physical quantity, independent of any phase conventions.

$$e^{i\theta_B} = \lim_{N\to\infty}\langle n(0)|n(T-\Delta t)\rangle\langle n(T-\Delta t)|\cdots|n(\Delta t)\rangle\langle n(\Delta t)|n(0)\rangle$$

Again, this is obvious from the formula: if you insist these two states are equal, then when I change the phases I have to change them both the same way, and every state appears as both a bra and a ket. One thing I was saying at the end of class last time: suppose someone doesn't believe the Berry phase is well defined. They can choose the phase of $n(\Delta t)$ to make the first overlap real, then choose the phase of the next state to make the next one real, and so on. But when they get to the endpoint, they can't change the phase of the last state – the rule is that it has to be $n(0)$. When they do that, they make everything real, but they end up with a big phase coming from the last factor.

$$e^{i\theta_B} = \lim_{N\to\infty}\underbrace{\langle n(0)|n(T-\Delta t)\rangle}_{\text{can't choose real}}\,\underbrace{\langle n(T-\Delta t)|\cdots|n(\Delta t)\rangle\langle n(\Delta t)|n(0)\rangle}_{\text{choose real}}$$

The phase of the integrand would vanish everywhere along the path if they chose the phases appropriately, and then right at the end there would be a big contribution from this last piece. What I was trying to say last time is: no matter how you choose your phases, you always get a contribution if there is a non-trivial Berry phase. You can distribute it along the whole path or bunch it together at the end, but there's always something there. In general, people use different phase conventions: sometimes it's useful to use a convention where the integrand is essentially 0 everywhere and there's a phase mismatch at the end; sometimes people use one where the phase is spread more uniformly. That is all I was trying to say.

15 Berry Phase (continued)

So what am I trying to argue? A closed path $\gamma$ is associated with a Berry phase, $\gamma \to e^{i\theta_B(\gamma)}$. In the case of our spin model, it is a closed path on the sphere – or a closed path in Hamiltonian space, depending on your point of view. For each of these closed paths, we have a well-defined phase we can associate with it: if I make a physical process where I deform my state along this path, and then ask how the final state compares with the initial state and how much phase has been accumulated, the Berry phase is one contribution, along with the usual dynamical factor $e^{-iE_0T}$.

Although we sometimes write the Berry phase in integral form, which makes it look like a real-valued number, it is only defined modulo $2\pi$: someone using a different phase convention could shift it by $2\pi$. The physical quantity is $e^{i\theta_B}$, i.e. $\theta_B\ \mathrm{mod}\ 2\pi$. We can associate that with any closed path.

15.1 Berry phase for spin- 21 example

$$\theta_B(\gamma) = \int_\gamma i\langle n|\partial_t|n\rangle\,dt$$

So $\theta_B$ is only defined mod $2\pi$ – that is one point. The other point: I've been focusing on spin one-half, so let me give you an example of what actually happens if you do this calculation for spin one-half. What is the Berry phase for spin one-half?

Figure 30: Unit sphere with a path $\gamma$ traced out. It encloses area $\mathcal{A}$ if you trace it clockwise, and area $4\pi - \mathcal{A}$ if you trace it counter-clockwise.

It turns out it has a very beautiful geometric picture. We can think of our paths as lying on the unit sphere, since the coherent states are parameterized by $\vec n$; let $\gamma$ be some closed path. What is the phase for spin one-half if I take it around this trajectory? Of course we know how to calculate it: by definition, it is the formula above integrated over the path $\gamma$. If you work out this calculation (I think it's on the homework), you find it has a very nice geometric meaning – it is exactly one-half times the (oriented) area enclosed by the path:
$$\theta_B(\gamma) = \int_\gamma i\langle n|\partial_t|n\rangle\,dt = \frac{1}{2}\,\mathrm{Area}(\gamma)$$
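One can check this numerically (my own sketch): take the discrete product of overlaps $\prod_k\langle n_{k+1}|n_k\rangle$ around a circle of colatitude $\theta_0$ and compare its phase to half the enclosed (oriented) solid angle, $\pm\pi(1-\cos\theta_0)$:

import numpy as np

# Discrete Berry phase around a latitude circle theta = theta0:
# e^{i theta_B} = prod_k <n_{k+1}|n_k>, with |n> = (cos(t/2), e^{i p} sin(t/2)).
theta0, N = 1.0, 2000
phis = np.linspace(0.0, 2.0 * np.pi, N + 1)   # closed loop: phi_N = phi_0
z = np.stack([np.full(N + 1, np.cos(theta0 / 2)),
              np.exp(1j * phis) * np.sin(theta0 / 2)], axis=1)

overlaps = np.einsum('ki,ki->k', z[1:].conj(), z[:-1])
theta_B = np.angle(np.prod(overlaps))
print(theta_B, -np.pi * (1.0 - np.cos(theta0)))   # agree (sign = orientation)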

For example, if you have a path going around the equator of the sphere, it encloses an area of $2\pi$, so you get $\frac{1}{2}\times 2\pi$: the Berry phase is $\pi$. Now, one point may come to mind: how do you define the area? The path cuts the sphere into two pieces – which piece do I calculate the area for? The answer is that it doesn't matter, because by area I mean oriented area: going around one way I get some area $\mathcal{A}$; thinking of it the other way, I get $\mathcal{A} - 4\pi$.

The reason is that the total area of the sphere is $4\pi$, so I'd either get $\mathcal{A}$ or $\mathcal{A} - 4\pi$. When I take half of that, the two answers differ by $2\pi$, so the physical quantity $e^{i\theta_B}$ is identical. The formula is therefore well-defined – but only because the coefficient is an integer multiple of one-half. That is one way to see why spin comes in multiples of $\frac{1}{2}$: it has to, for the Berry phases to be consistent. It sort of works out as a necessity.

Now, you might be confused by $\mathcal{A} - 4\pi$ being a negative area. The point is that the sign you assign to the area is a convention, depending on whether the path is traversed clockwise or counter-clockwise; someone looking from the other side of the sphere sees the opposite orientation, hence a minus sign.

16 Berry Phase and Path Integrals

So let's go back to our spin-$\frac{1}{2}$ example again. We calculated the path integral for this last class:
$$\langle n_f|e^{-iHt}|n_i\rangle = \int_{n_i}^{n_f}\mathcal{D}n\;e^{iS[n]},\qquad S = \int dt\left(i\langle n|\partial_t|n\rangle + \frac{1}{2}\vec B\cdot\vec n\right)$$

What was this first term? This is exactly identical to the berry phase. We can see that this same
phase that occurs in calculations of adiabatic evolution also appears in the path integral. So what
does this mean? Remember, the first term was the berry phase

S[n] = θB [n] + · · ·

and there will be some other dynamical contributions (and other contributions we haven’t included).
The question you can ask now: Does this berry phase affect the dynamics of our system?

Figure 31: A pictorial representation of what the paths may look like for a path integral (it does not represent the physical behavior of a system undergoing adiabatic evolution). Every time we calculate the amplitude of a path, we have to include the Berry phase contribution, and that contribution has a big impact on the total sum over all these paths.

So naively, you might say it is not going to affect the dynamics at all. The way I portrayed the Berry phase before was as a phase factor acquired going around a loop, and you may say phases don't matter – how is a phase going to show up in any physical quantity? The point is that it isn't an overall phase that matters: if you added the same constant to the action of every path, it wouldn't affect anything. What matters is the relative phase accumulated on one path as opposed to another. Of course the relative phase matters, because when I add up the contributions of two paths they may interfere constructively or destructively. So what matters is the relative Berry phase between two paths. That is going to affect the amplitude – the probability to go from one state to another. So this is certainly going to affect the path integral, the dynamics, and all our probability amplitudes.

And again, it makes perfect sense that it should be the relative Berry phase, by the same point we made before: the Berry phase isn't really well-defined for a single open path, but if you compare the Berry phases of two open paths with the same endpoints, that is like the phase of a closed loop, which is well-defined. It is consistent that a physical quantity like this can affect our dynamics. And this effect on the dynamics is not some esoteric quantum phenomenon – it's important even at the classical level, as you'll see on the homework: Berry phase terms are crucial if you try to derive the classical equations of motion from this action.

On the homework – we will show that the berry phase is important, even in the classical
equation of motion.

16.1 Other examples of Berry phase terms

1. coherent state path integral (integrate one of the terms by parts):
$$S = \int dt\;\underbrace{\frac{i}{2}\left(\alpha^*\partial_t\alpha - \alpha\partial_t\alpha^*\right)}_{=\;i\alpha^*\partial_t\alpha\;\equiv\;i\langle\alpha|\partial_t|\alpha\rangle} + \cdots$$

2. phase space path integral (I claim that the first term can be viewed as a Berry phase term):
$$S = \int dt\left[p\,\partial_t x - \cdots\right]$$
In general, any term with a single time derivative can be viewed as a Berry phase term, because it depends only on the path, not on the time taken to traverse it.

3. particle in an external magnetic field:
$$S = \int dt\;A(x)\,\partial_t x + \cdots$$

As I said, any term with a single time derivative can be thought of as a Berry phase term. This last example brings up a very important analogy between Berry phase and gauge theory. This is because
$$S = \int dt\;A\,\partial_t x + \cdots = \int dt\;A\,\frac{\partial x}{\partial t} + \cdots = \int A\,dx + \cdots$$

16.2 Analogy with gauge theory

So there's an analogy with gauge theory which I think is pretty useful. To see it, think about spin one-half. So far, when talking about the Berry phase, I focused on a single path. But say we're going to calculate a lot of Berry phases for spin one-half: instead of computing $n(t)$ for each and every path, we should come up with some general parameterization of $n$ as a function of $\theta, \phi$, and use the same parameterization no matter what the path is – we can always do that. Then we can define a useful quantity called the Berry connection $\mathcal{A}$:
$$\mathcal{A} = i\langle n|\partial_\alpha|n\rangle$$

I'm calling the parameters $\alpha$; in general the system depends on a whole bunch of parameters, so $\mathcal{A}$ is a multi-component vector, like a gradient. So what's the Berry phase?
$$\theta_B = \int_0^T i\langle n|\partial_t|n\rangle\,dt = \int_0^T i\langle n|\partial_\alpha|n\rangle\,\frac{d\alpha}{dt}\,dt = \int_\gamma \mathcal{A}\cdot d\alpha$$
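For the spin-$\frac{1}{2}$ example with the phase convention from last lecture, $z = (\cos\frac{\theta}{2},\,e^{i\phi}\sin\frac{\theta}{2})$, the Berry connection can be written out explicitly (a short check of my own):
$$\mathcal{A}_\theta = iz^\dagger\partial_\theta z = 0,\qquad \mathcal{A}_\phi = iz^\dagger\partial_\phi z = -\sin^2\frac{\theta}{2} = \frac{\cos\theta - 1}{2}$$
so for a loop at fixed $\theta$, $\theta_B = \oint\mathcal{A}_\phi\,d\phi = 2\pi\cdot\frac{\cos\theta - 1}{2}$, reproducing the half-the-area result from earlier.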

So with this picture, with this notation, we can make a nice correspondence between Berry phase and gauge theory:

    Berry Phase                                       Gauge Theory
    Berry connection $\mathcal{A}$                 ↔  vector potential $A$
    phase choice                                   ↔  gauge choice
    $e^{i\theta_B}$                                ↔  Wilson line operator $e^{i\int A\,dx}$
    Berry curvature $\Omega = \nabla\times\mathcal{A}$  ↔  magnetic field $B = \nabla\times A$

16.3 2 types of berry phase

1. Geometric Berry phase → depends on the geometry of the path. An example is the spin-half system: if I move the loop a little or stretch it, the area changes. → affects the classical equations of motion.

2. Topological Berry phase – everything that is not geometric. It depends not on the geometry of the path, but only on its topology. If I think of a particle propagating in a plane with a hole in it, I can have a topological Berry phase which is the same for all paths around the hole, and the same for all paths not around the hole: it depends only on the topology. → no effect on the classical equations of motion (it is important at the quantum level, e.g. the Aharonov-Bohm effect).

Figure 32: A system with a physical hole, with a magnetic field through the hole providing a flux $\Phi$. An example of a topological Berry phase. Editor's note: this is very very similar to superconductivity and the Meissner effect that the Tel Aviv group talks about!

It is important when you take quantum effects into account. Take my example of a particle in a plane with a hole: classically the particle doesn't feel the magnetic field – if there's magnetic field only inside the hole and nowhere else, there is no classical effect. But quantum mechanically, if I put flux through the hole and look at an interference pattern, inserting the flux shifts the fringes, because it changes whether the paths interfere constructively or destructively. In a quantum interference experiment you can see the dependence on the flux: that is a topological Berry phase. But it has no effect on the classical equations of motion. That is the classic example – you can see a topological Berry phase in an interference experiment, where the phase becomes important; classically the particle only goes along one path, so it would never know about it. (You need a particle that is cold and quantum-coherent to see this kind of interference; it goes away if it decoheres.) If I put flux through the hole, there's a phase factor in my path integral: it is this $\int A\,dx$ term (for a closed path).

Suppose I want the amplitude to start at one point and end at another. I have to add up lots of paths, and the point is that paths on opposite sides of the hole pick up different values of this term: the $\int A\,dx$ along one path and the other differ by exactly the amount of flux in the central region. That is the Aharonov-Bohm effect (you can read about it in various textbooks). It affects the relative phase of the two paths, and it is exactly this term. It is a topological Berry phase when $\nabla\times A = 0$ everywhere the particle can go: Stokes' theorem then tells me the phase difference depends only on the enclosed flux, which can be non-trivial if there is a hole in the system like this.
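Concretely, for two paths $\gamma_1, \gamma_2$ from the same start point to the same end point, one passing on each side of the hole, the relative phase is (a standard way to state the effect)
$$\Delta\theta = \int_{\gamma_1}A\cdot dx - \int_{\gamma_2}A\cdot dx = \oint_{\gamma_1 - \gamma_2}A\cdot dx = \int(\nabla\times A)\cdot d\vec S = \Phi_{\text{enclosed}}$$
which is nonzero only when the closed loop $\gamma_1 - \gamma_2$ winds around the flux.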

In the example on the homework, where we had an action for a particle in a magnetic field,
$$S = \int A\,\underbrace{(\partial_t x)\,dt}_{dx} + \cdots$$
we can see that we will have a topological Berry phase when $\nabla\times A = 0$ in the region the particle can explore – in other words, when the magnetic field vanishes everywhere the particle actually goes.

17 Linear Response

Okay. So what I want to do now is review linear response theory. The purpose is to show you one other, more typical way path integrals connect to what we physically measure: we don't usually measure something like the probability amplitude for a particle to go from one point to another. Linear response is the usual way we probe systems.

Suppose the system is in a ground state |ψ0 i of H0 at time t = −∞. Apply a time-dependent
perturbation

H(t) = H0 + f (t) · O1

What is the response of the system – how does some other expectation value change at some later time $t$? What are examples? Maybe we apply an electric field to our system and measure the change in the current at a later time; maybe we apply a force and measure how much something moves. A huge variety of experiments take this form: we apply some perturbation and watch how an observable changes.

We want $\delta\langle O_2\rangle$ at a later time $t$. First-order perturbation theory gives
$$\delta\langle O_2\rangle_t = -i\int_{-\infty}^{t}f(t')\,\langle\psi_0|[O_2(t), O_1(t')]|\psi_0\rangle\,dt'$$
where
$$O_1(t) = e^{iH_0t}O_1e^{-iH_0t}$$

And we can see that this comes from expanding $\langle O_2\rangle = \langle\psi(t)|O_2(t)|\psi(t)\rangle$ with $|\psi(t)\rangle \approx |\psi_0\rangle - i\int^t f(t')\,O_1(t')\,dt'\,|\psi_0\rangle$ to first order in $f$.
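Filling in the step: in the interaction picture, to first order in $f$,
$$|\psi(t)\rangle \approx \left(1 - i\int_{-\infty}^{t}f(t')\,O_1(t')\,dt'\right)|\psi_0\rangle$$
so
$$\delta\langle O_2\rangle_t = \langle\psi(t)|O_2(t)|\psi(t)\rangle - \langle\psi_0|O_2(t)|\psi_0\rangle = -i\int_{-\infty}^{t}f(t')\,\langle\psi_0|[O_2(t), O_1(t')]|\psi_0\rangle\,dt' + O(f^2)$$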

Let's write this in a useful form:
$$\delta\langle O_2\rangle_t = \int_{-\infty}^{\infty}dt'\,D(t,t')\,f(t'),\qquad D(t,t') = -i\Theta(t-t')\cdot\langle\psi_0|[O_2(t), O_1(t')]|\psi_0\rangle$$
$D(t,t')$ is known as the response function ($\Theta$ is the ordinary step function).

17.1 Computing D(t, t′) using path integral

Before we compute this, let's compute something easier. Consider a related problem: I'm going to define what I call the Green's function,
$$iG(t_2, t_1) = \langle 0|\mathcal{T}\left(O_2(t_2)\,O_1(t_1)\right)|0\rangle,\qquad |0\rangle \equiv |\psi_0\rangle$$

Let me explain what $\mathcal{T}$ is: it is the time-ordering symbol, which I've used before. Define
$$\mathcal{T}\left(O_1(t_1)O_2(t_2)\right) \equiv \begin{cases}O_1(t_1)O_2(t_2) & t_1 > t_2\\ O_2(t_2)O_1(t_1) & t_1 < t_2\end{cases}$$
So $G$ is known as the time-ordered correlation function.

17.2 How to compute G?

17.2.1 1-dimensional quantum mechanical case

Let $\hat O_1 = f_1(\hat x)$, $\hat O_2 = f_2(\hat x)$, and assume $t_2 > t_1$. Let's look at the path integral over all paths from some initial state $x_i$ to some final state $x_f$ – at the end of the day, these states will drop out, but we put them in for now:
$$\int_{x_i,\,-T}^{x_f,\,T}\mathcal{D}x\;f_2(x(t_2))f_1(x(t_1))\,e^{iS} = \int dx_1\,dx_2\int_{x(t_1)=x_1,\;x(t_2)=x_2}\mathcal{D}x\;f_2(x(t_2))f_1(x(t_1))\,e^{iS}$$
where the inner path integral has boundary conditions $x(t_1) = x_1$ and $x(t_2) = x_2$.

Figure 33: We want to look at all possible paths such that x(t1 ) = x1 and x(t2 ) = x2 are fixed at
time t1 and t2 .

Take any path; we can parameterize it by its position at time t₁ — call it x₁ — and its position at time t₂ — call it x₂. I'll integrate over all possible x₁ and x₂, which can be anywhere on their time slices, and then for each value of x₁ and x₂ I integrate over all paths satisfying those boundary conditions. As long as I integrate over all possibilities, I'm integrating over all paths; I'm just parameterizing them by their positions at the two intermediate times. So then
$$= \int dx_1\,dx_2\;\langle x_f|e^{-iH(T-t_2)}|x_2\rangle f_2(x_2)\cdot\langle x_2|e^{-iH(t_2-t_1)}|x_1\rangle f_1(x_1)\cdot\langle x_1|e^{-iH(t_1-(-T))}|x_i\rangle$$

$$= \langle x_f|e^{-iH(T-t_2)}\,O_2\,e^{-iH(t_2-t_1)}\,O_1\,e^{-iH(t_1-(-T))}|x_i\rangle$$

$$= \langle x_f|e^{-iHT}\,\underbrace{O_2(t_2)\,O_1(t_1)}_{=\,T[O_2(t_2)O_1(t_1)]\text{ since }t_2>t_1}\,e^{-iHT}|x_i\rangle$$

Here we used $f(\hat x) = \int dx\,|x\rangle f(x)\langle x|$. This is almost what we want, but not quite: we want |0⟩, not |x_i⟩ and |x_f⟩, and we also want to kill the e^{−iHT} factors. First take the limit T → ∞(1 − iε), i.e. give T a small negative imaginary part. Then

$$e^{-iHT}|x_i\rangle \to \sum_n |n\rangle\, e^{-iE_n\infty(1-i\epsilon)}\,\langle n|x_i\rangle \approx |0\rangle\, e^{-iE_0\infty(1-i\epsilon)}\,\langle 0|x_i\rangle$$

All of the states are going to get suppressed, but the ground state is suppressed the least. To get the remaining factors to cancel, divide by the path integral with no insertions:

$$\langle 0|T\,O_2(t_2)O_1(t_1)|0\rangle = \lim_{T\to\infty(1-i\epsilon)} \frac{\int_{-T}^{T} Dx\; f_2(x(t_2))\,f_1(x(t_1))\,e^{iS}}{\int_{-T}^{T} Dx\; e^{iS}}$$

since

$$\frac{\langle x_f|0\rangle\, e^{-iE_0\infty}\,\langle 0|T(\cdots)|0\rangle\, e^{-iE_0\infty}\,\langle 0|x_i\rangle}{\langle x_f|0\rangle\, e^{-iE_0\infty}\,\langle 0|0\rangle\, e^{-iE_0\infty}\,\langle 0|x_i\rangle} = \langle 0|T(\cdots)|0\rangle$$

where all the overlap factors and exponentials cancel between numerator and denominator.

Notes Quantum Many Body Physics
Day 8
October 23rd, 2013 Giordon Stark

18 Response and Correlation (continued)

Last time, we discussed the response function

$$D(t_2,t_1) = -i\,\Theta(t_2-t_1)\,\langle 0|[O_2(t_2), O_1(t_1)]|0\rangle$$

And remember what it means physically: to lowest order in perturbation theory, if we perturb the Hamiltonian by some operator O₁ at time t₁ and measure some observable O₂, the response function tells us how much the expectation value of O₂ changes at the later time t₂.

We also defined another function which is a correlation function which is defined to be

iG(t2 , t1 ) = h0|T O2 (t2 )O1 (t1 )|0i

We said this looks very similar to the response function, and once we compute G we can relate it to D by a simple extra step. So the question was: how do we compute G, in particular using the path integral machinery? We worked that out last time for a particle in one-dimensional quantum mechanics, where the operators are functions of the particle's position,

$$O_1 = f_1(\hat x), \qquad O_2 = f_2(\hat x)$$

and we could write G as a ratio of two path integrals:

$$iG(t_2,t_1) = \lim_{T\to\infty(1-i\epsilon)} \frac{\int_{-T}^{T} Dx\; f_2(x(t_2))\,f_1(x(t_1))\,e^{iS}}{\int_{-T}^{T} Dx\; e^{iS}}$$

Another reason we call this a correlation function: if you think of e^{iS} as a probability density, the expression looks exactly like what you would calculate in probability theory. To compute a correlation there, you integrate over all possible configurations the probability weight times whatever functions you're interested in, and divide by the total sum of probabilities (which is 1). This is exactly that kind of expression, except instead of a positive weight we have e^{iS}. It is a quantum correlation function: we weight each path by e^{iS}, evaluate the observables at t₂ and t₁, and divide by the unweighted sum over paths, giving a kind of expectation value of the product. So that was our result last time.

18.1 Example: Harmonic Oscillator

$$L = \frac{1}{2}m\dot{x}^2 - \frac{1}{2}m\omega_0^2 x^2$$

and we have O₁ = O₂ = x̂ (the operators). Applying our formula here, the correlation function is

$$iG_x(t_2,t_1) = \frac{\int_{-\infty}^{\infty} Dx\; x(t_2)\,x(t_1)\,e^{iS}}{\int_{-\infty}^{\infty} Dx\; e^{iS}}$$

(we will put in the iε's later).

So to evaluate this ratio of integrals, we're going to use a well-known Gaussian formula. The action here is

$$S = \int_{-\infty}^{\infty} dt\,\left[\frac{1}{2}m(\partial_t x)^2 - \frac{1}{2}m\omega_0^2 x^2\right]$$

We first write down the Gaussian integral formula for a finite-dimensional space, and then carry it over to functional space. Suppose I have a Gaussian of the following form:

$$\frac{\int d^N x\;\, x_k\, x_l\; e^{\frac{i}{2}x^T A x}}{\int d^N x\;\, e^{\frac{i}{2}x^T A x}} = i\,(A^{-1})_{kl}$$

where x = (x₁, ···, x_N), and x_k, x_l are components of x. There's a nice feature we didn't have before when dealing with Gaussian integrals — normalization factors. By taking the ratio of two Gaussian integrals, all those normalization factors drop out because they're the same in the top and bottom, and you get a pretty simple result.
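A quick way to convince yourself of this identity — my own check, not from the lecture — is to verify its Euclidean (Wick-rotated) cousin ⟨x_k x_l⟩ = (A⁻¹)_{kl} for the weight e^{−½xᵀAx}, where the normalization factors cancel in exactly the same way. A minimal Monte Carlo sketch with a randomly generated positive-definite A:

```python
import numpy as np

# Check <x_k x_l> = (A^{-1})_{kl} for the Gaussian weight exp(-x^T A x / 2).
# This is the Euclidean version of the lecture's formula; the i's come along
# for the ride after Wick rotation. Sampling with covariance A^{-1} is the
# same as sampling the weight exp(-x^T A x / 2).
rng = np.random.default_rng(0)
N = 4
M = rng.normal(size=(N, N))
A = M @ M.T + N * np.eye(N)                  # random positive-definite matrix

cov = np.linalg.inv(A)                       # exact answer (A^{-1})_{kl}
samples = rng.multivariate_normal(np.zeros(N), cov, size=500_000)
cov_mc = samples.T @ samples / len(samples)  # Monte Carlo <x_k x_l>

print(np.abs(cov_mc - cov).max())            # ~1e-3: statistical error only
```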

In our case, A = −m∂ₜ² − mω₀². (Note: (∂ₜx)² → −x∂ₜ²x after integration by parts, dropping the boundary terms.) So we have

$$iG_x(t_2,t_1) = i\left(-m\partial_t^2 - m\omega_0^2\right)^{-1} \quad\Longleftrightarrow\quad \left(-m\partial_{t_2}^2 - m\omega_0^2\right)G_x(t_2,t_1) = \delta(t_2-t_1)$$

So when we apply this differential operator to G_x, we should get the identity. You can think of G_x(t₂,t₁) as a very big matrix with two continuous indices; the operator (a derivative with respect to t₂) acts on the left index of G, and multiplying them together gives the identity matrix, whose analog in these variables is the delta function.

So we just need to solve this equation. So we will go to frequency space


$$G_x(t_2,t_1) = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\; G_x(\omega)\, e^{-i\omega(t_2-t_1)}, \qquad G_x(\omega) = \int_{-\infty}^{\infty} dt\; G_x(t,0)\, e^{i\omega t}$$

And I’m just writing these out to make it clear what my conventions are. So let’s substitute the
equation we want to solve
$$\int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\left(-m\partial_{t_2}^2 - m\omega_0^2\right) G_x(\omega)\, e^{-i\omega(t_2-t_1)} = \underbrace{\int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\, e^{-i\omega(t_2-t_1)}}_{\delta(t_2-t_1)\text{ in Fourier space}}$$

The derivative brings down −iω twice, ∂²_{t₂} → −ω², so

$$\int \frac{d\omega}{2\pi}\,(m\omega^2 - m\omega_0^2)\, G_x(\omega)\, e^{-i\omega(t_2-t_1)} = \int \frac{d\omega}{2\pi}\, e^{-i\omega(t_2-t_1)}$$

which ends up giving us

$$G_x(\omega) = \frac{1}{m\omega^2 - m\omega_0^2}$$

Now we need to put in the iε's we had before from the limit of large T. The rotated time means dt → dt(1 − iε) and ∂ₜ → ∂ₜ/(1 − iε). So let's make the substitution and see how things change:

$$\int dt\left[\frac{1}{2}m(\partial_t x)^2 - \frac{1}{2}m\omega_0^2 x^2\right] \;\to\; \int (1-i\epsilon)\,dt\left[\frac{1}{2}m\frac{(\partial_t x)^2}{(1-i\epsilon)^2} - \frac{1}{2}m\omega_0^2 x^2\right]$$

So the operator A that we were inverting becomes

$$A \to \frac{-m\partial_t^2}{1-i\epsilon} - m\omega_0^2(1-i\epsilon)$$

and we can look at what happens to our Green's function:

$$G_x(\omega) = \frac{1}{\frac{m\omega^2}{1-i\epsilon} - m\omega_0^2(1-i\epsilon)} = \frac{1-i\epsilon}{m\omega^2 - m\omega_0^2(1-i\epsilon)^2}$$

Two points to be made here. One is that ε's in the numerator are innocuous; we can set them to zero. The only ε's that matter are in the denominator: at ω = ±ω₀ you run into the poles, so we keep the ε there and expand

$$G_x(\omega) = \frac{1}{m\omega^2 - m\omega_0^2(1-2i\epsilon)} = \frac{1}{m\omega^2 - m\omega_0^2 + i\epsilon} = \frac{m^{-1}}{\omega^2 - \omega_0^2 + i\epsilon}$$

(absorbing positive constants into ε along the way).

So that is Gx (ω) and if you wanted it in the time-domain, you would Fourier transform this. So
that was supposed to demonstrate how you can calculate Gx (ω) from Gaussian integrals and so on.
Ultimately, we want to measure things like linear response, so we need to understand how you go
from time-ordered correlation function to the response function.

The prefactor of the ε doesn't matter. It is more of a prescription, to remind ourselves that we should add a very small imaginary part (and remember its sign) to fix up the singularities.
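As a check — my own, not from the lecture — you can Fourier-invert this G_x(ω) numerically at small but finite ε and compare with the known time-ordered correlator of the oscillator, iG(t) = ⟨0|T x(t)x(0)|0⟩ = e^{−iω₀|t|}/(2mω₀):

```python
import numpy as np

# Numerically invert G(w) = m^{-1}/(w^2 - w0^2 + i*eps) and compare with the
# analytic time-ordered correlator G(t) = -i e^{-i w0 |t|} / (2 m w0).
m, w0, eps = 1.0, 1.0, 0.05
w = np.linspace(-100, 100, 800_001)
dw = w[1] - w[0]
G_w = (1.0 / m) / (w ** 2 - w0 ** 2 + 1j * eps)

for t in [-3.0, 0.5, 2.0]:
    G_t = np.sum(G_w * np.exp(-1j * w * t)) * dw / (2 * np.pi)
    exact = -1j * np.exp(-1j * w0 * abs(t)) / (2 * m * w0)
    print(t, abs(G_t - exact))  # -> 0 as eps -> 0 (few percent at eps = 0.05)
```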

19 Relationship between D and G

D(t2 , t1 ) = −iΘ(t2 − t1 )h0|[O2 (t2 ), O1 (t1 )]|0i


iG(t2 , t1 ) = h0|T O2 (t2 )O1 (t1 )|0i

How are these two related? So in general, I’ll give you a few examples of how they’re related and
hopefully the examples will indicate that generally, if you know how to compute the time-ordered
correlation functions, you can always extract these response functions. Basically, these contain all
the information. So these can always be obtained by some simple manipulations from the G’s. So
let me give you two examples and then maybe you’ll see how you could do it more generally.

19.1 First Example

Suppose O1 = O1† , O2 = O2† . Then

D(t2 , t1 ) = −iΘ(t2 − t1 ) [h0|O2 (t2 )O1 (t1 )|0i − h0|O1 (t1 )O2 (t2 )|0i]

The first term is great because it is time-ordered since t2 > t1 . The second term is not.
$$D(t_2,t_1) = -i\,\Theta(t_2-t_1)\left[\langle 0|O_2(t_2)O_1(t_1)|0\rangle - \langle 0|O_2^\dagger(t_2)O_1^\dagger(t_1)|0\rangle^*\right]$$

$$= -i\,\Theta(t_2-t_1)\left[iG(t_2,t_1) - \big(iG(t_2,t_1)\big)^*\right]$$

$$= 2\,\Theta(t_2-t_1)\cdot\mathrm{Re}\,G(t_2,t_1)$$

This shows that any time I want a response function, I can always calculate it from time-ordered
correlation functions.

19.2 Second Example

If O₁ = O₂†, then Re D(ω) = Re G(ω) and Im D(ω) = Im G(ω) · sign(ω). This is a general statement; let's show it with the harmonic oscillator.

19.2.1 Harmonic Oscillator Example

$$G(\omega) = \frac{m^{-1}}{\omega^2 - \omega_0^2 + i\epsilon}$$

$$\mathrm{Re}\,G(\omega) = \frac{1}{2}\left[\frac{m^{-1}}{\omega^2-\omega_0^2+i\epsilon} + \frac{m^{-1}}{\omega^2-\omega_0^2-i\epsilon}\right] = \frac{m^{-1}(\omega^2-\omega_0^2)}{(\omega^2-\omega_0^2)^2+\epsilon^2}$$

$$\mathrm{Im}\,G(\omega) = \frac{1}{2i}\left[\frac{m^{-1}}{\omega^2-\omega_0^2+i\epsilon} - \frac{m^{-1}}{\omega^2-\omega_0^2-i\epsilon}\right] = \frac{-m^{-1}\epsilon}{(\omega^2-\omega_0^2)^2+\epsilon^2}$$

Using the relation above,

$$\mathrm{Re}\,D(\omega) = \frac{m^{-1}(\omega^2-\omega_0^2)}{(\omega^2-\omega_0^2)^2+\epsilon^2}, \qquad \mathrm{Im}\,D(\omega) = \frac{-m^{-1}\epsilon\cdot\mathrm{sign}(\omega)}{(\omega^2-\omega_0^2)^2+\epsilon^2}$$

$$\Rightarrow\quad D(\omega) = \frac{m^{-1}}{\omega^2 - \omega_0^2 + i\epsilon\cdot\mathrm{sign}(\omega)}$$

The iε·sign(ω) shifts both poles below the real axis: one sits at ω ≈ +ω₀ − iε and one at ω ≈ −ω₀ − iε. Doing the Fourier transform,

$$D(t_2,t_1) = \int \frac{d\omega}{2\pi}\; D(\omega)\, e^{-i\omega(t_2-t_1)} = -\Theta(t_2-t_1)\,\frac{\sin\omega_0(t_2-t_1)}{m\omega_0}$$
The Θ(t₂ − t₁) is there to remember that t₂ > t₁: the response function is causal. Concretely, when you do the ω integral you have to be careful, and this only matters when you have poles. The iε tells you the rules for the contour integral — how the poles are shifted with respect to the real axis — and you find that the integral vanishes when t₁ > t₂ (causality). In this particular example, both poles are below the real axis. If t₂ < t₁, then e^{−iω(t₂−t₁)} decays exponentially as Im ω → +∞, so we close the contour in the upper half-plane; it encloses no poles, and the integral is zero.

Figure 34: The contour prescription, with both poles shifted below the real axis by iε.
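Here is a numerical version of that contour argument — my own sketch, not from the lecture — inverting D(ω) on a fine frequency grid at small finite ε:

```python
import numpy as np

# Invert D(w) = m^{-1}/(w^2 - w0^2 + i*eps*sign(w)) numerically and compare
# with the contour-integral result D(t) = -theta(t) sin(w0 t) / (m w0).
m, w0, eps = 1.0, 1.0, 0.05
w = np.linspace(-100, 100, 800_001)
dw = w[1] - w[0]
D_w = (1.0 / m) / (w ** 2 - w0 ** 2 + 1j * eps * np.sign(w))

for t in [-2.0, 1.0, 3.0]:
    D_t = np.sum(D_w * np.exp(-1j * w * t)) * dw / (2 * np.pi)
    exact = -np.sin(w0 * t) / (m * w0) if t > 0 else 0.0
    print(t, D_t.real, exact)  # agree up to O(eps); D_t -> 0 for t < 0
```

The t < 0 values come out essentially zero, which is the causality statement, and the t > 0 values approach −sin(ω₀t)/(mω₀) as ε → 0.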

20 Bosons

We're going to shift topics. This was admittedly a very brief introduction to linear response and correlation, but that's all I'm going to say about it; I just tried to give you a flavor for how you compute these things and what the logical structure is — what is related to what. What I want to move to next is superfluids. Before we get there, we first have to set up some machinery, so I'm going to talk about second quantization for bosons and set up notation. Next time we'll work out the path integral representation of many-particle boson systems, and then we can really start talking about superfluids.

We're now done with the single-particle part of the class, the foundational material on path integrals. Now I'll start talking about many-particle systems: first many-boson systems, and in the second half of the class, many-fermion systems. I need to make sure everybody is using the same notation.

20.1 Second Quantization

Let's think about N bosons in a d-dimensional box, an L × L × ··· × L cube (Figure 35). I'm going to consider totally non-interacting bosons:

$$H = \sum_{i=1}^{N} \frac{p_i^2}{2m}$$

Figure 35: Cube of side length L

So what is the Hilbert space for the system? H_N: symmetric wavefunctions in N variables, ψ(x₁, ···, x_N). Now we want a basis for our Hilbert space, and a natural basis is to pick eigenstates of our Hamiltonian. The eigenstates of H are

$$\psi_{k_1,\cdots,k_N}(x_1,\cdots,x_N) = \frac{\mathcal{N}}{L^{Nd/2}}\left[e^{i(k_1x_1+k_2x_2+\cdots+k_Nx_N)} + \text{permutations } x_i\leftrightarrow x_j\right] \;\leftrightarrow\; |k_1,k_2,\cdots,k_N\rangle$$

where 𝒩 is a normalization factor. These states are parameterized by the set of momenta the bosons occupy. This gives us a basis for our Hilbert space, but not a very convenient one. A more compact notation is to consider the Hilbert space which combines the 0-boson, 1-boson, 2-boson, ... sectors:

$$\mathcal{H} = \mathcal{H}_0 \oplus \mathcal{H}_1 \oplus \mathcal{H}_2 \oplus \cdots$$

H now has a basis |n_{k₁}, n_{k₂}, ···⟩ where n_k = 0, 1, ··· is the number of bosons in state |k⟩. For example, |n_{k₁} = 1, n_{k₂} = 2⟩ ↔ |k₁, k₂, k₂⟩. So our Hamiltonian takes a new form:

$$H = \sum_k \frac{k^2}{2m}\,\hat{n}_k$$

Once we write it like this, we see it looks a lot like a harmonic oscillator with Hamiltonian H = (n̂ + ½)ω₀. In other words, it looks like a collection of independent harmonic oscillators with frequencies ω_k = k²/2m (ignoring the zero-point energy ½ω₀). It is convenient and natural to define creation and annihilation operators

$$a_k^\dagger|n_1,\cdots,n_k,\cdots\rangle = \sqrt{n_k+1}\;|n_1,\cdots,n_k+1,\cdots\rangle$$
$$a_k|n_1,\cdots,n_k,\cdots\rangle = \sqrt{n_k}\;|n_1,\cdots,n_k-1,\cdots\rangle$$

which has

1. [ak , a†k0 ] = δkk0

2. ak |nk = 0i = 0

3. n̂k = a†k ak

So we can rewrite our Hamiltonian yet again as

$$H = \sum_k \frac{k^2}{2m}\, a_k^\dagger a_k$$
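These relations are easy to verify concretely — a sketch of my own, not part of the lecture — by building a single mode's a and a† as matrices on a truncated Fock space |0⟩, ..., |n_max⟩:

```python
import numpy as np

# One bosonic mode on a truncated Fock space: a|n> = sqrt(n)|n-1> and
# a†|n> = sqrt(n+1)|n+1>, so both are simple off-diagonal matrices.
nmax = 6
n = np.arange(nmax + 1)
a = np.diag(np.sqrt(n[1:]), k=1)
adag = a.T

print(np.allclose(adag @ a, np.diag(n)))           # n_hat = a† a
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(nmax)))   # [a, a†] = 1, except in
# the last row/column, which is an artifact of truncating the Fock space
```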

It is also convenient to introduce real-space creation/annihilation operators

$$a(x) = \frac{1}{L^{d/2}}\sum_k a_k\, e^{ikx}$$

Our k's here are discrete, and the sum carries a factor of 1/L^{d/2}. This is basically just a normalization factor: I'm choosing my momentum states so that each mode function e^{ikx}/L^{d/2}, when squared and integrated over the whole box, gives 1. Alternatively, you can invert this:

$$a_k = \frac{1}{L^{d/2}}\int d^d x\; a(x)\, e^{-ikx}$$

or, for an infinite system,

$$a(x) = \int \frac{d^d k}{(2\pi)^d}\; a_k\, e^{ikx}, \qquad a_k = \int d^d x\; a(x)\, e^{-ikx}$$

Then we have nice real space commutation relations by substituting in to the ones from before

1. [a(x), a† (y)] = δ(x − y)

2. Boson density is given by ρ(x) = a† (x)a(x)

So then we can write our Hamiltonian in real space:

$$H = \sum_k \frac{k^2}{2m}\, a_k^\dagger a_k = \int d^d x\; a^\dagger(x)\left(-\frac{1}{2m}\partial_x^2\right) a(x)$$

One last point: a nice thing about the second quantization picture is that it unifies the two types of quantities we've extracted from the path integral so far — time evolution operators and correlation functions. The interesting point is that what we thought of as the time evolution operator in the single-particle context, the probability amplitude for a particle to end up at x_f at time t, can now be written as a correlation function of creation/annihilation operators:

$$U(x_f,t;\,x_i,0) = \langle 0|a(x_f,t)\,a^\dagger(x_i,0)|0\rangle$$

where |0⟩ is the no-particle state. If you think about what this says: at time t = 0 we create a particle at x_i, we evolve under the Hamiltonian, and after time t we annihilate a particle at x_f. That is the same as asking: if I start a particle at x_i, what is the amplitude to find it at x_f? So we have a relationship between the single-particle time evolution operator and a correlation function. In fact, this is also another way of thinking about the trick we used in the second or third lecture for computing time evolution operators: we eventually wrote them as a ratio of two path integrals, which is how we got all the normalization factors to cancel. We were basically using the same trick we use to calculate correlation functions, which can likewise be expressed as a ratio of two path integrals. Anyway, that's all I was going to say; next time, with this machinery, we'll work out the many-body path integral.

Notes Quantum Many Body Physics
Day 9
October 25th, 2013 Giordon Stark

21 Coherent state path integral for bosons

Last class, we did a review of second quantization; the goal was to get our notation ready so we can start talking about many-body boson systems and write down a path integral for them. Now we're going to construct the coherent state path integral for bosons, analyze the system using path integrals, and try to understand some of the different phases of matter that bosons can form.

Our starting point is the homework problem where you worked out the coherent state path integral for the harmonic oscillator. Recall the way it worked: we used coherent states as a basis. It is an over-complete basis, but we can still use it because it satisfies a completeness relation. In the homework I labeled the coherent states by α; here, just for notational reasons, I'm going to call it φ. We have eigenstates

$$a|\phi\rangle = \phi|\phi\rangle, \qquad \phi\in\mathbb{C}$$

$$H = \omega\, a^\dagger a \quad\leftrightarrow\quad \int D^2\phi\; e^{\,i\int dt\left[\frac{i}{2}(\phi^*\partial_t\phi - \phi\partial_t\phi^*) - \omega|\phi|^2\right]}$$

This is what we should have derived on the homework. Now we do the same thing for bosons: start with free bosons, generalized to d dimensions,

$$H_0 = \sum_k \left(\frac{k^2}{2m} - \mu\right) a_k^\dagger a_k \qquad\text{where } \mu \text{ is the chemical potential}$$

$$\leftrightarrow\quad a_k|\phi_k\rangle = \phi_k|\phi_k\rangle$$

$$\leftrightarrow\quad \int \prod_k D^2\phi_k\;\; e^{\,i\sum_k\int dt\left[\frac{i}{2}(\phi_k^*\partial_t\phi_k - \phi_k\partial_t\phi_k^*) - \left(\frac{k^2}{2m}-\mu\right)|\phi_k|^2\right]}$$

So that is in perfect analogy with the harmonic oscillator. Now we want to go to real space. There are different ways you can think about this: either you can introduce a real-space field φ(x) by Fourier transforming φ_k, or, remembering that last time we wrote down the Hamiltonian in real space, you could have applied this procedure directly to that. Either way, converting to real space we get an integral over D²φ (I won't write the product anymore, since x is now a continuous variable).

Real space, φ(x,t):

$$\int D^2\phi\;\; e^{\,i\int d^d x\, dt\left[\frac{i}{2}(\phi^*\partial_t\phi - \phi\partial_t\phi^*) - \frac{1}{2m}|\partial_x\phi|^2 + \mu|\phi|^2\right]}$$

21.1 Add interactions

Okay. So this is our path integral representation of our free Boson system. Now we want to add
interactions. So let’s add some interactions. We’ll add two-body interactions. So we’ll consider
Hamiltonian

Ĥ = Ĥ0 + V̂

I'm going to assume it is a two-body interaction:

$$\hat{V} = \frac{1}{2}\int d^d x\, d^d y\;\, V(x-y)\left[\rho(x)\rho(y) - \rho(x)\delta(x-y)\right] = \sum_{i<j} V(x_i-x_j), \qquad \rho(x) = \sum_i \delta(x-x_i)$$

The overall ½ corrects for double-counting, and the ρ(x)δ(x−y) term cancels the self-interactions.

Substituting ρ(x) = a†(x)a(x) into V̂ and using the commutators, we get

$$\hat{V} = \frac{1}{2}\int d^d x\, d^d y\;\, a^\dagger(x)\,a^\dagger(y)\,V(x-y)\,a(x)\,a(y)$$

21.2 Short-range interactions

For simplicity, I'm going to specialize to short-range interactions — in fact, to a δ-function interaction V(x−y) = V₀δ(x−y). So we're thinking about bosons with some kind of repulsive short-range interaction. Adding this term to the Hamiltonian changes the path integral by an extra interaction term:

$$\int D^2\phi\;\; e^{\,i\int d^d x\, dt\left[\frac{i}{2}(\phi^*\partial_t\phi - \phi\partial_t\phi^*)\; \overbrace{-\;\frac{1}{2m}|\partial_x\phi|^2 + \mu|\phi|^2 - \frac{V_0}{2}|\phi|^4}^{-\langle\phi|H|\phi\rangle}\right]}$$

So we get this path integral, and at this point it is an exact representation of the many-body interacting boson system — with the caveat that we have to be specific about how the path integral is defined at short distances. Setting that issue aside, it is an exact description.

21.2.1 Notes

Now a few comments about this path integral description and about the system, which we're going to spend a lot of time investigating in different dimensions. The first comment concerns the δ-function interaction: not surprisingly, it is an approximation to the real system we're interested in. What we really have in mind is bosons with a short-range repulsion — if these are atoms, on the scale of the size of the atom they don't want to overlap.

1. The δ-function is an approximation to a short-range interaction between bosons; we need to modify/cut off the path integral at a short distance scale "a". Ignore this for now.

2. For people who have taken QFT: when you see this action, the first impulse is to think "I've seen this before, this is |φ|⁴ theory". But this is not the relativistic |φ|⁴ theory — most textbooks have a second-order (∂ₜφ)² term, whereas here the time derivative enters at first order, as φ*∂ₜφ.

21.2.2 What are the properties for different values of µ, V0 , m?

Today we're going to analyze the system completely in the classical limit; later we'll introduce quantum fluctuations.

$$L = \int d^d x\left[\underbrace{\frac{i}{2}(\phi^*\partial_t\phi - \phi\partial_t\phi^*)}_{i\phi^*\partial_t\phi\,+\,\text{total deriv.}} - \frac{1}{2m}|\partial_x\phi|^2 + \mu|\phi|^2 - \frac{V_0}{2}|\phi|^4\right]$$

And then we can write out the Hamiltonian formulation. First we need the momentum:

$$\Pi = \frac{\partial L}{\partial\dot\phi} = i\phi^*$$

The Hamiltonian is then (classically, we can drop the total derivative term from the Lagrangian)

$$H = \Pi\,\partial_t\phi - L = \int d^d x\left[\frac{1}{2m}|\partial_x\phi|^2 - \mu|\phi|^2 + \frac{V_0}{2}|\phi|^4\right]$$
2m 2
Now for the classical ground state. Staring at this, it is pretty clear that to minimize the energy you're going to want φ to be spatially constant, since changes of φ in space cost kinetic energy. So set φ(x) ≡ φ₀ everywhere in space; this kills off the first term, and we choose the value of φ₀ to minimize the remaining two terms. What is the ground-state energy? I'll write it as energy per volume, since the system is extensive (V is the d-dimensional volume of the system):

$$\frac{E}{V} = -\mu|\phi_0|^2 + \frac{V_0}{2}|\phi_0|^4$$

And we see that the minimum depends on the chemical potential µ. When µ < 0: φ₀ = 0. When µ > 0: φ₀ = √(µ/V₀)·e^{iθ₀}.

21.2.3 How does E/V behave as a function of µ?

$$\frac{E(\mu)}{V} = \begin{cases} 0 & \mu < 0 \\[2pt] -\dfrac{\mu^2}{2V_0} & \mu > 0 \end{cases}$$

So here's the energy density as a function of µ: it is exactly flat (zero) for µ < 0, and then bends down quadratically for µ > 0. Why am I showing this plot? The main point is that this is not an analytic function — it changes from being exactly flat to being quadratic. There is non-analytic behavior of the energy density at µ = 0.
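A two-line numerical minimization (my own check, not from the lecture) makes the kink visible:

```python
import numpy as np

# Minimize e(phi0) = -mu |phi0|^2 + (V0/2)|phi0|^4 over a grid of |phi0| and
# confirm E/V = 0 for mu < 0 and E/V = -mu^2/(2 V0) for mu > 0.
V0 = 1.0
mus = np.linspace(-1, 1, 201)
phis = np.linspace(0, 2, 4001)
e_min = np.array([(-mu * phis**2 + 0.5 * V0 * phis**4).min() for mu in mus])
print(np.abs(e_min - np.where(mus > 0, -mus**2 / (2 * V0), 0.0)).max())

# the second derivative d^2(E/V)/dmu^2 jumps from 0 to -1/V0 at mu = 0
d2 = np.gradient(np.gradient(e_min, mus), mus)
print(d2[60], d2[140])   # ~0 at mu = -0.4, ~ -1/V0 at mu = +0.4
```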

Figure 36: sketch of the energy density E/V as a function of µ

And why do I mention that? Because the definition of a phase transition is basically that the free energy — or, at zero temperature, the ground-state energy — has non-analytic behavior. Generically, as you vary parameters everything is perfectly smooth, except at certain points or certain lines where you see non-analyticity like this. That is the definition of a phase transition.

Here E/V and its first derivative are continuous; the discontinuity comes in at the second derivative of E/V. The phase transition is "2nd order".

Figure 37: the discontinuity appears in the second derivative of E/V

So we have a second-order phase transition. It has a particular name: the dilute Bose gas transition.

21.3 Dilute Bose gas Transition

What are the two phases on the diagram?

21.3.1 µ<0

We said that at the classical level φ₀ = 0 everywhere in space. Since ρ̂(x) = â†(x)â(x), the density is ρ = |φ₀|² = 0. This is a zero-density phase — in other words, a system without any bosons. This is the no-boson phase.
[picture of no bosons]

21.3.2 µ>0

Then φ₀ ≠ 0 ⇒ ρ ≠ 0: we're dealing with a finite density of bosons, which makes sense because the chemical potential now sits up higher, above the bottom of the band.

[picture of bosons]

Without interactions, the system would put all its bosons in the lowest state; with interactions, it settles at some finite number of bosons. It likes to have bosons there because the chemical potential is above the minimum. The ground state |φ(x) = φ₀⟩ is an eigenstate of a(x):

a(x)|φ0 i = φ0 |φ0 i

21.4 Cartoon wave function

Here is a cartoon wave function. When I say cartoon, I'm just going to draw some pictures, okay?

Figure 38: Cartoon wavefunction showing how the destruction operator has an eigenvalue of φ such
that a(x)|φ0 i = φ0 |φ0 i for the coherent states

When I draw this cartoon, what I mean is a sum over all possible configurations of N bosons at all possible positions, all with the same amplitude — and the sum includes every particle number, all the way down to 0 bosons. So it is a sum over 0-boson states up to any number of bosons, and the amplitude of a configuration is φ₀ raised to the number of bosons, no matter where they are.

What happens if I act on this wave function with a(x)? Acting on a configuration that happens to have a boson at x, a(x) removes it, converting each (N+1)-boson picture into an N-boson picture. So after applying a(x), each N-boson configuration receives amplitude φ₀^{N+1} — it came from an (N+1)-boson configuration — which is exactly φ₀ times the original amplitude φ₀^N. In other words, applying a(x) gives back the original state with an extra factor of φ₀. Maybe you have to stare at it yourself to see it, but my claim is you can convince yourself that this is the unique eigenstate of the annihilation operators with that eigenvalue.
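You can verify the eigenvalue claim directly for a single mode — my own numerical sketch, not from the lecture — where the Fock-normalized coherent state carries an extra 1/√N! that, in the cartoon, is hidden inside the normalization of the |N⟩ states:

```python
import numpy as np
from scipy.special import factorial

# Single-mode coherent state |phi> ∝ sum_N phi^N / sqrt(N!) |N> satisfies
# a|phi> = phi|phi> (up to an exponentially small truncation error).
nmax, phi = 30, 0.7 + 0.2j
n = np.arange(nmax + 1)
a = np.diag(np.sqrt(n[1:]), k=1)

state = phi ** n / np.sqrt(factorial(n))
print(np.allclose(a @ state, phi * state, atol=1e-12))   # True
```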

Okay. So the cartoon way to think about it: it is a state that doesn't care about the boson positions — a superposition of any number of bosons at any positions, with an amplitude that is just φ₀ to the number of bosons. This state is called a superfluid state, and this is the superfluid phase we're talking about.

It is a fluid because the density is uniform and, in this idealized state, completely uncorrelated: it is not a solid, it doesn't have any crystalline order. So the phase transition we've been talking about is between an empty, no-boson state and a superfluid state, as the chemical potential goes from negative to positive values.

21.5 Spontaneous Symmetry Breaking

The key property of the superfluid phase, the most important property, is that it has “spontaneous
symmetry breaking". So I’m going to explain several different definitions of an SSB that are getting
more and more sophisticated/accurate.

21.5.1 SSB v.1

The Hamiltonian is invariant under U (1) symmetry: φ(x) → eiθ φ(x). This is called a U (1) because
it is only defined by just an angular parameter θ. This means

H(φ) = H(eiθ φ)

But the ground state is not invariant: φ₀ ≠ e^{iθ}φ₀ (there are infinitely many degenerate ground states).

This is in classical language, so we want to rephrase it quantum mechanically. First we need to understand how to write the symmetry in our quantum system: the classical symmetry φ → φe^{iθ} should correspond to some unitary operator on our Hilbert space. I claim it corresponds to the unitary operator e^{iN̂θ}, where N̂ = Σ_k n̂_k = Σ_k a_k†a_k is the total particle number.

To see this, note:

eiN̂ θ |φ(x)i = |φ(x)eiθ i

So let me just show this for the uniform case, where φ₀ is uniform in space:

$$|\phi_0\rangle = \sum_{N=0}^{\infty} \phi_0^N\,|N\rangle$$

So

$$e^{i\hat{N}\theta}|\phi_0\rangle = \sum_{N=0}^{\infty} \phi_0^N\, e^{iN\theta}|N\rangle = \sum_{N=0}^{\infty} (\phi_0 e^{i\theta})^N|N\rangle = |\phi_0 e^{i\theta}\rangle$$
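The same truncated-mode sketch as before (again mine, not the lecture's) confirms that e^{iN̂θ} just rotates the coherent-state label:

```python
import numpy as np
from scipy.special import factorial

# e^{i n_hat theta} |phi> = |phi e^{i theta}> for a single mode.
nmax, phi, theta = 30, 0.7, 1.3
n = np.arange(nmax + 1)
coh = lambda z: z ** n / np.sqrt(factorial(n))   # unnormalized coherent state

rotated = np.exp(1j * n * theta) * coh(phi)      # apply e^{i n_hat theta}
print(np.allclose(rotated, coh(phi * np.exp(1j * theta))))   # True
```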

21.5.2 SSB v.2

The quantum Hamiltonian Ĥ is invariant under the quantum symmetry eiN̂ θ , i.e.

e−iN̂ θ ĤeiN̂ θ = Ĥ

but the ground state |φ₀⟩ is not invariant:

$$e^{i\hat{N}\theta}|\phi_0\rangle \neq e^{i\lambda}|\phi_0\rangle$$

Not invariant means that if I apply e^{iN̂θ} to |φ₀⟩, I do not get back |φ₀⟩ — not even up to a phase factor. If I got it back up to a phase, it would really be the same state and I would claim it's invariant under the operation. But we just calculated what happens: the transformation rotates the phase of the coherent state, producing an orthogonal state in the Hilbert space. I do not get back the same state.

By a theorem in quantum mechanics, a ground state that is not invariant under a symmetry of the Hamiltonian can only occur if the ground state is exactly degenerate — and in a finite system that essentially never happens. It only appears to happen here because we took the classical limit, which in this respect is a mistake. So this version of the definition is not strictly correct; I'll explain in what sense it is morally correct next time or the time after.

Is there another question? Yeah. The statement is: in general, you're never going to find a system that obeys this, unless it has an exact degeneracy, which never happens.

(Equivalently, [N̂, H] = 0 but N̂|φ₀⟩ ≠ λ|φ₀⟩. The sense in which our system spontaneously breaks the symmetry is that the Hamiltonian commutes with N̂, but our ground state does not have a definite number of particles.)

21.5.3 SSB v.3

There exists an operator transforming non-trivially under the symmetry eiN̂ θ with a nonzero ground
state expectation value

e−iN̂ θ âeiN̂ θ = eiθ â

So this tells me that the operator â transforms nontrivially under the symmetry — if it transformed trivially, we would have the number 1 on the right-hand side and it would commute with the symmetry; instead it picks up a phase factor. And, of course, it has a nonzero ground state expectation value:

$$\langle\phi_0|\hat{a}|\phi_0\rangle \neq 0$$

This â is the "order parameter".

This is a stronger statement than the previous definition: if an operator that transforms nontrivially has a ground state expectation value, the state cannot be invariant, so this implies the definition above. Basically, any operator that transforms nontrivially under the symmetry and acquires an expectation value is called an order parameter.

I'm not saying that all operators transforming nontrivially under the symmetry acquire expectation values — I'm saying there exists an operator that does. To make this intuitive, think of a system like a magnet: the spin direction transforms nontrivially under rotations, and the fact that it acquires an expectation value tells you the system has broken rotational symmetry. It is the same thing here, just a little more abstract, because the order parameter is not a direction in physical space — it is the annihilation operator â. You can think of it as a two-component vector, similar to the ordering of spins in a two-dimensional plane: the phase of â is like the direction of the spin.

21.5.4 SSB v.4

Final definition — and the only one that is really strictly correct. The final property is that there exists an operator transforming nontrivially under e^{iN̂θ} with long-range correlations:

$$\langle\phi_0|\hat{a}^\dagger(x)\,\hat{a}(y)|\phi_0\rangle \;\xrightarrow{\;|x-y|\to\infty\;}\; |\phi_0|^2 \neq 0$$

If you look at a two-point correlation function of your order parameter, like the one written here, and it goes to a nonzero constant in the limit where the two points are far apart, that is also a definition of spontaneous symmetry breaking — and it is the only definition that is actually correct. This statement holds for any system with spontaneous symmetry breaking, whereas the other definitions fail for finite-size systems: no finite system will satisfy definitions 1, 2 or 3, but this one is correct. (Of course, strictly one takes the separation to infinity as the system size gets larger and larger, which is how to make the definition a little more precise.) This phenomenon of long-range correlations is called long-range order. Again, you're probably more familiar with it for a magnet: the real way to characterize the ordered phase is that the spin direction here becomes correlated with the spin direction far away — the spins still prefer one common direction.

Notes Quantum Many Body Physics
Day 10
October 30th, 2013 Giordon Stark

22 Last time

In the first part of last class we derived a path integral representation of many-body interacting boson systems, with a Lagrangian in which φ parameterizes a coherent state, labeled by a complex number:

$$L = \int d^d x\left[\frac{i}{2}(\phi^*\partial_t\phi - \phi\partial_t\phi^*) - \frac{1}{2m}|\partial_x\phi|^2 + \mu|\phi|^2 - \frac{V_0}{2}|\phi|^4\right]$$

Last time we just treated this as a classical Lagrangian: we solved for the ground state and looked at various properties. Today we move beyond the purely classical analysis and ask what happens if you include small quantum fluctuations. In particular, we're going to expand around the classical solution and keep terms to quadratic order.

So far: purely classical analysis. Now: include Gaussian fluctuations. Remember there were two phases our many-boson system could be in: the no-boson phase when the chemical potential was negative, and a superfluid phase when it was positive. We want to analyze both, though the one we're really interested in is the superfluid phase.

• µ < 0 phase – exact ground state is a no-boson state.

Figure 39: No-boson state, for µ < 0

The ground state is kind of boring — what about the excitations above it? Expand around φ₀ = 0; it is pretty clear what happens: to quadratic order we drop the quartic |φ|⁴ term,

$$\Rightarrow\quad L = \int d^d x\left[\underbrace{\frac{i}{2}(\phi^*\partial_t\phi - \phi\partial_t\phi^*)}_{i\phi^*\partial_t\phi} \;\underbrace{-\,\frac{1}{2m}|\partial_x\phi|^2}_{\frac{1}{2m}\phi^*\partial_x^2\phi} + \mu|\phi|^2\right]$$

So let's look at the equation of motion (an easy exercise):

$$0 = \frac{\partial L}{\partial\phi^*} = i\partial_t\phi + \frac{1}{2m}\partial_x^2\phi + \mu\phi$$

In (ω, k) space we have ω_k = k²/2m − µ. This implies the excitations have an energy gap Δ = −µ (positive, since µ < 0 here). The excitations are the "original bosons".
The excitations are “original bosons”.

• µ > 0.

Figure 40: µ > 0 is not a no-boson state, there will be bosons

We now have a whole circle of degenerate minima at the bottom of the potential (the radial cross-section looks like a double well). We will expand around φ₀ ≠ 0, using polar coordinates:

$$\phi = \sqrt{\rho_0 + \delta\rho}\;\, e^{i(\theta_0+\theta)}$$

Figure 41: Parameterization of stuff

θ₀ can be anything we want — it won't matter, since all the minima are degenerate — so set θ₀ = 0 and parameterize in terms of δρ and θ. We're going to assume that δρ is small, but not θ.

$$\phi^*\partial_t\phi = \sqrt{\rho_0+\delta\rho}\;e^{-i\theta}\left[\frac{\partial_t\delta\rho}{2\sqrt{\rho_0+\delta\rho}}\,e^{i\theta} + \sqrt{\rho_0+\delta\rho}\;\,i\partial_t\theta\; e^{i\theta}\right] = \underbrace{\frac{1}{2}\partial_t\delta\rho}_{\text{total derivative, cancels}} + \,i(\rho_0+\delta\rho)\,\partial_t\theta$$
2
So that is the first term, let’s look at the next term

$$|\partial_x\phi|^2 = (\partial_x\phi^*)(\partial_x\phi) = \left|\frac{\partial_x\delta\rho}{2\sqrt{\rho_0+\delta\rho}}\,e^{i\theta} + \sqrt{\rho_0+\delta\rho}\;\,i\partial_x\theta\, e^{i\theta}\right|^2 = \frac{(\partial_x\delta\rho)^2}{4(\rho_0+\delta\rho)} + (\rho_0+\delta\rho)(\partial_x\theta)^2$$

$$\approx \frac{(\partial_x\delta\rho)^2}{4\rho_0} + \rho_0(\partial_x\theta)^2$$

where we dropped δρ compared to ρ₀, keeping only the leading-order terms. So the last term is
$$\mu|\phi|^2 - \frac{V_0}{2}|\phi|^4 = \mu(\rho_0+\delta\rho) - \frac{V_0}{2}(\rho_0+\delta\rho)^2 = \text{const.} - \frac{V_0}{2}(\delta\rho)^2$$

(the terms linear in δρ cancel because ρ₀ = µ/V₀).
So putting it all together, we have
$$L = \int d^d x\left[-\delta\rho\,\partial_t\theta - \frac{1}{8m\rho_0}(\partial_x\delta\rho)^2 - \frac{\rho_0}{2m}(\partial_x\theta)^2 - \frac{V_0}{2}(\delta\rho)^2\right]$$

Note, from this first term: θ and δρ are canonically conjugate — conjugate variables like position and momentum — so this is a phase-space path integral with δρ playing the role of the momentum. Just as one goes from a phase-space path integral to a Lagrangian one, we will do the same thing here: integrate out the density and write everything in terms of θ. I want to emphasize that with this step we're just going from a Hamiltonian description to a Lagrangian description — we're not throwing away any degrees of freedom, merely writing the theory purely in terms of "position" rather than both momentum and position.

Integrate out δρ to express the theory in terms of θ. The action is quadratic in δρ, so the integral is Gaussian; we do it by finding the classical value of δρ and substituting it back in:

$$0 = \frac{\partial L}{\partial\delta\rho} = -\partial_t\theta + \frac{1}{4m\rho_0}\partial_x^2\delta\rho - V_0\,\delta\rho$$

$$\left(\frac{1}{4m\rho_0}\partial_x^2 - V_0\right)\delta\rho = \partial_t\theta \quad\Rightarrow\quad \delta\rho = \left(-V_0 + \frac{1}{4m\rho_0}\partial_x^2\right)^{-1}\partial_t\theta$$

Substituting this classical δρ back into the Lagrangian:

$$L = \int d^d x\left[-\frac{\rho_0}{2m}(\partial_x\theta)^2 - \frac{1}{2}(\partial_t\theta)\left(-V_0 + \frac{1}{4m\rho_0}\partial_x^2\right)^{-1}(\partial_t\theta)\right]$$

The derivative ∂_x² ∼ 1/L² pulls down one over a length squared, while the V₀ term has no such factor. So for very long wavelength fluctuations the ∂_x² term is suppressed compared with V₀, and it is justified to drop it if we're interested in physics at length scales where it is small.

Comparing the magnitudes of the two terms gives a characteristic length scale ξ:

$$\frac{1}{4m\rho_0\,\xi^2} \sim V_0 \quad\Rightarrow\quad \xi \sim \frac{1}{\sqrt{m\rho_0 V_0}}$$
And we'll call this length scale ξ the "coherence length". So we drop the ∂_x² term at lengths ≫ ξ:

$$L_{\text{eff}} = \int d^d x\left[\frac{1}{2V_0}(\partial_t\theta)^2 - \frac{\rho_0}{2m}(\partial_x\theta)^2\right]$$

Our claim is that this Lagrangian is a good description of the problem if we're interested in physics at length scales longer than the coherence length ξ (there's a corresponding energy scale below which it is a reasonable description). Also, θ is defined modulo 2π, so we should really think of this as a theory of an angular-valued field; because of that, people call it the "XY model". For now, we can treat θ as real-valued.

22.1 Identifying operators in effective theory

For now we're just blindly doing this semiclassical approximation, and this is the picture we get. So this is our effective Lagrangian. We're going to investigate it shortly, trying to understand its properties; in particular we want to derive, for example, the excitation spectrum of our superfluid.

But before we do that, one important fact: writing down an effective theory like this is not complete until you also say how to express all physical observables in terms of your effective fields. So we need to know how to express observables in terms of θ. That is what we'll talk about next.

So we'll talk about how you identify operators in effective theories. You did a version of this on the homework due today: you wrote down an effective theory of electrons within a single band and identified the position operator within that effective theory.

How do we write the density ρ in terms of θ? How do we write the current J⃗ in terms of θ?

Identify by adding a source term. So let's start with ρ:

$$\hat\rho(x) = \hat a^\dagger(x)\,\hat a(x) \;\to\; |\phi(x)|^2 \;\to\; \rho_0 + \delta\rho$$

So let's imagine that at the very beginning of our analysis we added a term

$$L \to L - (\rho_0+\delta\rho)\cdot A_0$$

with some source A₀. If we followed it through every stage, we'd get

$$L \to L - \left[\rho_0 + \left(-V_0 + \frac{1}{4m\rho_0}\partial_x^2\right)^{-1}\partial_t\theta\right] A_0$$

so, at long wavelengths,

$$L_{\text{eff}} \to L_{\text{eff}} - \left(\rho_0 - V_0^{-1}\partial_t\theta\right) A_0$$

And we conclude ρ = ρ₀ − V₀⁻¹∂ₜθ. What does it really mean to identify an operator? If I couple the system to some external field, I want to know how the system responds — I want to know how to couple the potential to my effective theory. So we couple it to the microscopic theory and see what it translates to in the effective theory. The punchline: density ∼ time derivative of θ.

Let's do the same thing for J, the current.

$$\hat J(x) = \frac{1}{2mi}\left[a^\dagger(x)\,\partial_x a(x) - a(x)\,\partial_x a^\dagger(x)\right] \;\to\; \frac{1}{2mi}\left[\phi^*\partial_x\phi - \phi\,\partial_x\phi^*\right]$$

Substituting the polar parameterization (the ∂_xδρ pieces cancel between the term and its complex conjugate):

$$\to\; \frac{1}{2mi}\Big[(\rho_0+\delta\rho)\,i\partial_x\theta - \text{c.c.}\Big] = \frac{\rho_0+\delta\rho}{m}\,\partial_x\theta$$
So, adding a source and seeing how the Lagrangian changes:

$$L \to L - \vec A\cdot\frac{1}{m}(\rho_0+\delta\rho)\,\partial_x\theta \quad\Rightarrow\quad L_{\text{eff}} \to L_{\text{eff}} - \vec A\cdot\frac{\rho_0}{m}\,\partial_x\theta$$

and the current is

$$\vec J = \frac{\rho_0}{m}\,\partial_x\theta$$

So the current ∼ spatial derivative of θ.

22.2 The punchline

The Lagrangian describes density waves. What is the dispersion relation? Let's write out our Lagrangian again:

$$L = \int d^d x\left[\frac{1}{2V_0}(\partial_t\theta)^2 - \frac{\rho_0}{2m}(\partial_x\theta)^2\right]$$

The classical equation of motion (a legitimate shortcut here, since the action is completely quadratic) is

$$0 = \frac{\delta L}{\delta\theta} = -\frac{1}{V_0}\partial_t^2\theta + \frac{\rho_0}{m}\partial_x^2\theta$$

which is a wave equation. In (ω, k) space,

$$\frac{\omega^2}{V_0} - \frac{\rho_0 k^2}{m} = 0 \quad\Rightarrow\quad \omega = v|k|, \qquad v = \sqrt{\frac{\rho_0 V_0}{m}}$$

So we recognize this as the usual wave equation with linearly dispersing modes, and there are two key conclusions: the modes are linearly dispersing, and they are gapless — unlike the no-boson case, we have gapless excitations.

So at least classically — and this was just a classical analysis of the equation of motion — our superfluid phase has gapless, linearly dispersing density waves. We can interpret them as density waves given that ∂ₜθ is proportional to the density and ∂ₓθ to the current. Quantum mechanically, there will be corresponding quantum excitations — some quanta, which are called phonons: gapless, linearly dispersing excitations. Next time we will be careful and actually derive these phonons quantum mechanically, rather than just reading them off the classical equation of motion, and then we'll investigate in more detail what the low-energy physics of our superfluid looks like.
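As a preview of that quantum treatment, here is a small lattice check — my own sketch with made-up units, not part of the lecture — discretizing θ on a ring and reading the normal-mode frequencies off the quadratic Hamiltonian; at small k they approach v|k|:

```python
import numpy as np

# Discretize theta on a ring of L sites (spacing 1):
# H = sum_j (V0/2) pi_j^2 + (rho0/2m)(theta_{j+1} - theta_j)^2.
# Normal-mode frequencies are sqrt(V0 * eigenvalues of the coupling matrix).
V0, rho0, m, L = 1.0, 1.0, 1.0, 128
K = (rho0 / m) * (2 * np.eye(L)
                  - np.roll(np.eye(L), 1, axis=0)
                  - np.roll(np.eye(L), -1, axis=0))
omega = np.sqrt(np.clip(V0 * np.linalg.eigvalsh(K), 0, None))

k1 = 2 * np.pi / L                 # smallest nonzero momentum on the ring
v = np.sqrt(rho0 * V0 / m)
print(omega[1] / (v * k1), omega[2] / (v * k1))  # ~1: the degenerate ±k pair
```

The lowest eigenvalue is the k = 0 zero mode (ω = 0), which is exactly the mode that needs the special treatment discussed in the next lecture.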

Classically: the superfluid phase has gapless, linearly dispersing density waves. This implies gapless, linearly dispersing quantum excitations — the phonons.

Notes Quantum Many Body Physics
Day 11
November 1st, 2013 Giordon Stark

23 Last time

If you remember, last time we started with an exact description of a many-body boson system, made some approximations, and derived an effective theory describing its long-length-scale, low-energy physics. Ultimately, we arrived at an effective Lagrangian of the form

$$L_{\text{eff}} = \int d^d x\left[\frac{1}{2V_0}(\partial_t\theta)^2 - \frac{\rho_0}{2m}(\partial_x\theta)^2\right]$$
In addition to this Lagrangian, we also identified the key observables. The density field is

$$\rho = \rho_0 - V_0^{-1}\partial_t\theta$$

and, just as the density is a time derivative of θ, the current is a spatial derivative of θ:

$$\vec J = \frac{\rho_0}{m}\,\partial_x\theta$$
So from this point of view, this effective theory is really describing density waves in our system — it is an effective theory for how those density waves propagate.

And this theory, remember, was an effective theory: we made certain approximations and dropped some terms, so it is only valid at long length scales,

$$\text{scales} \gg \xi = \frac{1}{\sqrt{\rho_0 m V_0}}$$
Okay. So that's what we found, and then we wanted to analyze the properties of this effective Lagrangian, and first we just analyzed it classically. We found the classical equation of motion

$$\Rightarrow\quad \underbrace{\frac{1}{V_0}\partial_t^2\theta}_{-\,d\rho/dt} = \underbrace{\frac{\rho_0}{m}\partial_x^2\theta}_{\vec\nabla\cdot\vec J}$$

which, given the identifications above, is just the continuity equation.

Then we went to Fourier space and saw that, like every wave equation, it has a linear dispersion:

$$\Rightarrow\quad \omega = v|k|, \qquad v = \sqrt{\frac{\rho_0 V_0}{m}}$$

We argued that we must therefore have quantum excitations with this dispersion relation: just like in electrodynamics, where classically we see electromagnetic waves and quantum mechanically we find photons, here we find linearly dispersing density waves and hence quantum particles — phonons. So we concluded, without ever solving the quantum problem, that we must have linearly dispersing gapless quantum excitations, which are these quantized density waves.

23.1 Now: see this explicitly

Okay. So what I want to do now is be a little more concrete: that was just the classical equations of motion, so now let's actually solve this as a quantum system and verify that this is the structure we get, seeing in more detail what the low-energy spectrum looks like.

$$L_{\text{eff}} = \int d^d x\left[\frac{1}{2V_0}(\partial_t\theta)^2 - \frac{\rho_0}{2m}(\partial_x\theta)^2\right]$$

So now we're going to quantize the theory. First we find the momentum conjugate to θ:

$$\pi = \frac{\partial L}{\partial\dot\theta} = \frac{1}{V_0}\partial_t\theta$$

This gives us the Hamiltonian

$$H_{\text{eff}} = \pi\dot\theta - L_{\text{eff}} = \int d^d x\left[\frac{V_0}{2}\pi^2 + \frac{\rho_0}{2m}(\partial_x\theta)^2\right]$$

and the quantum Hamiltonian has the same form, so quantize:

$$\hat H_{\text{eff}} = \int d^d x\left[\frac{V_0}{2}\hat\pi^2 + \frac{\rho_0}{2m}(\partial_x\hat\theta)^2\right]$$

with [θ̂(x), π̂(y)] = iδ(x − y). Going to k-space, imagining our system in a finite box with periodic boundary conditions so that k itself is discrete:

$$\hat\pi(x) = \sum_k \frac{1}{\sqrt{V}}\, e^{ikx}\,\hat\pi_k, \qquad \hat\theta(x) = \sum_k \frac{1}{\sqrt{V}}\, e^{ikx}\,\hat\theta_k$$

So then the Hamiltonian is

$$\hat H_{\text{eff}} = \sum_k \left[\frac{V_0}{2}\,\hat\pi_k\hat\pi_{-k} + \frac{\rho_0|k|^2}{2m}\,\hat\theta_k\hat\theta_{-k}\right], \qquad [\hat\theta_k,\hat\pi_{-k'}] = i\delta_{kk'}$$

So now we can see very explicitly that what we have is just a collection of harmonic oscillators decoupled from one another — more or less one harmonic oscillator for each value of k. Let's write it in a form that looks more like the standard harmonic oscillator Hamiltonian:

$$\hat H_{\text{eff}} = \sum_k \left[\frac{1}{2V_0^{-1}}\,\hat\pi_k\hat\pi_{-k} + \frac{1}{2}V_0^{-1}\omega_k^2\,\hat\theta_k\hat\theta_{-k}\right], \qquad \omega_k = \sqrt{\frac{\rho_0 V_0}{m}}\,|k|$$

Okay, so we're already re-deriving the structure we had from the classical dispersion relation. Any questions so far? Finally, once it's written in this form — a regular harmonic oscillator Hamiltonian — we know how to define raising and lowering operators:

$$\alpha_k = \sqrt{\frac{V_0^{-1}\omega_k}{2}}\left(\hat\theta_k + \frac{i}{V_0^{-1}\omega_k}\,\hat\pi_k\right)$$

in analogy with the usual harmonic oscillator. Notice, by the way, that this definition only makes sense for k ≠ 0: k = 0 is a special case — the frequency vanishes there, so it isn't really a harmonic oscillator at all.

So substituting this in, we can rewrite our Hamiltonian in terms of the α_k and we get

$$H_{\text{eff}} = \sum_{k\neq 0} \omega_k\left(\alpha_k^\dagger\alpha_k + \frac{1}{2}\right) + (k=0 \text{ piece}), \qquad [\alpha_k,\alpha_{k'}^\dagger] = \delta_{kk'}$$

(Note: I should have mentioned that θ̂_k and π̂_k are not Hermitian; rather π̂_k† = π̂_{−k} and θ̂_k† = θ̂_{−k}.)

Okay. So now we're basically done: we've reduced our system to a collection of harmonic oscillators, and we have the complete energy spectrum except for the k = 0 piece. In particular, this Hamiltonian looks exactly like the free-boson Hamiltonian from second quantization (plus a constant we don't care about), but with a dispersion relation that is linear instead of quadratic:

$$\epsilon_k = \omega_k = \sqrt{\frac{\rho_0 V_0}{m}}\,|k|$$

So our superfluid can be modeled as a collection of gapless, non-interacting, linearly dispersing bosons. These are not the original bosons!

That's kind of interesting: we started with interacting bosons with quadratic dispersion, and at low energies we found that the excitations are non-interacting, linearly dispersing bosons. So it should be clear that the bosonic excitations of our superfluid have nothing to do with the original bosons — unlike the no-boson phase, where I said the excitations were the original bosons (and everyone was confused by what I said). You cannot understand the excitations of the superfluid in terms of adding an individual boson to the fluid. These are really collective excitations of the system; there is no way to understand them in terms of the microscopic bosons.

So these collective excitations have a name: “phonons”.

Phonons are particles: you can have a phonon excitation with any given value of k, and the energy of one phonon with momentum k is ω_k. I can have two phonons with one momentum and five with another — 25 phonons with other momenta — and at very low energies they don't interact with each other. So the claim is: even though the strongly interacting system cannot be solved exactly, its low-energy behavior is universal. We derived it with many approximations and for a particular interaction, but no matter what the high-energy spectrum is, the low-energy physics always looks this way, characterized by just two parameters. If you diagonalized the system on a computer and looked at big systems at low energies, you would see a tower of states that looks like this collection of harmonic oscillators.
23.2 Think about k = 0 part of Heff now

$$H_{\text{eff},\,k=0} = \frac{V_0}{2}\,\pi_0^2$$

There's no potential term now (ω_{k=0} = 0). Naively this looks like a continuum spectrum of excitations with energies E = V₀π₀²/2.

This is not correct. So why is it not correct? There are two reasons.

1. θ0 lives on a circle ⇒ π0 should be quantized.

2. We need to include the total derivative terms we dropped earlier.

Okay. So those are the two things we need to correct to get the k = 0 spectrum right. We're going to redo our quantization, being careful to keep all the total derivative terms, and see what we get:

$$L = \int d^d x\left[\frac{1}{2V_0}(\partial_t\theta)^2 - \frac{\rho_0}{2m}(\partial_x\theta)^2\right]$$

There are two more terms I want to put in: the total derivative term −ρ₀∂ₜθ that we dropped, plus some constant terms that make the structure clearer but are otherwise unimportant:

$$-\rho_0\,\partial_t\theta + \left(\mu\rho_0 - \frac{1}{2}V_0\rho_0^2\right)$$

Focus on the k = 0 piece, so let θ(x,t) ≡ θ(t):

$$L = V\left[\frac{1}{2V_0}\dot\theta^2 - \rho_0\dot\theta + \mu\rho_0 - \frac{1}{2}V_0\rho_0^2\right], \qquad \rho_0 = |\phi_0|^2 = \frac{\mu}{V_0}$$

$$= V\left[\frac{1}{2V_0}\dot\theta^2 - \frac{\mu}{V_0}\dot\theta + \frac{\mu^2}{2V_0}\right] = \frac{V}{2V_0}\big(\dot\theta - \mu\big)^2$$
This has conjugate momentum

$$p = \frac{\partial L}{\partial\dot\theta} = \frac{V}{V_0}(\dot\theta - \mu)$$

so the Hamiltonian is

$$H = p\dot\theta - L = p\left(\frac{V_0}{V}p + \mu\right) - \frac{V_0}{2V}p^2 = \frac{V_0}{2V}p^2 + \mu p$$

We can see that, since θ is an angular variable (defined modulo 2π), p is quantized as an integer. Also [p, H] = 0. What is the physical interpretation of p? It is related to the particle number:

$$p = \frac{V}{V_0}(\dot\theta - \mu) = \frac{V}{V_0}\Big(V_0(\rho_0-\rho) - \mu\Big) = \rho_0 V - \rho V - \underbrace{\frac{\mu}{V_0}}_{\rho_0}V = -\rho V = -N$$

minus the total number of particles (using ρ = ρ₀ − V₀⁻¹∂ₜθ and ρ₀ = µ/V₀).

In terms of N:

$$H = \underbrace{\frac{V_0}{2V}N^2}_{\text{interaction energy}} - \underbrace{\mu N}_{\text{chemical potential}}$$
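Numerically this k = 0 piece is trivial to explore — my own illustration, with made-up numbers:

```python
import numpy as np

# E(N) = V0 N^2/(2V) - mu N over integer N: minimized near N = mu V / V0,
# with a neighboring-N splitting ~ V0/V that vanishes as V -> infinity.
V0, mu, V = 0.5, 1.0, 1000.0
N = np.arange(5001)
E = V0 * N ** 2 / (2 * V) - mu * N

print(N[np.argmin(E)], mu * V / V0)   # 2000, 2000.0
print(E[2001] - E[2000])              # ~ V0/(2V): tiny level splitting
```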

23.3 Total Hamiltonian

 
$$H = \frac{V_0}{2V}N^2 - \mu N + \sum_{k\neq 0}\omega_k\,\alpha_k^\dagger\alpha_k + \text{constant}$$

The ground state has

$$N = N_{\min} \approx \frac{\mu}{V_0}V \equiv \rho_0 V$$
and no phonons, α_k†α_k = 0. The first thing you notice is that this is very different from the classical picture: we have a unique ground state, generically. (If you fine-tune the chemical potential, two particle numbers can be degenerate, but generically there is a unique preferred particle number.) That is in stark contrast to the classical analysis, where we found a whole infinite set of degenerate ground states at the bottom of the potential:

$$\phi_0 = \sqrt{\frac{\mu}{V_0}}\,e^{i\theta}$$
Also,

hNmin |a(x)|Nmin i = 0

while classical analysis gave

hφ0 |a(x)|φ0 i = φ0

Right. So why am I pointing these things out? Because in our introductory discussion of symmetry breaking, one of our definitions was: if you have a field like â that acquires an expectation value, then you have spontaneous symmetry breaking — and in the classical treatment we indeed found that situation. But here, doing the analysis more carefully, we see that at least this field doesn't work; in fact, more generally, there is no order parameter you can construct that would have a nonzero expectation value. Our state is manifestly invariant under the symmetry transformation e^{iθN̂}: it has a fixed particle number, it is an eigenstate of N̂, so it is invariant under the U(1) symmetry generated by N̂.

So all of these things say that our exact ground state does not have spontaneous symmetry breaking, at least according to the original definitions: it looks like no symmetry breaking according to SSB v.2 and SSB v.3.

So, what's going on? Let's try to understand what happens in the thermodynamic limit: is there a sense in which order parameters start to exist in that limit? To see it, think about the energy spectrum as N gets large, in particular the spectrum of the lowest excitations above the ground state. As the system gets large, there are many nearly degenerate low-lying states:
E_{N_{\min}+1} - E_{N_{\min}} \sim \frac{V_0}{V} = \frac{V_0}{L^d} \to 0 \quad \text{as } L\to\infty
A large system doesn't care exactly how many particles it has. The phonon energies also vanish, but only as

E_{\text{phonon}} \sim \frac{V}{L} \to 0 \quad \text{as } L\to\infty

(V here being the phonon velocity),
so

\frac{E_{N_{\min}+1} - E_{N_{\min}}}{E_{\text{phonon}}} \sim \frac{V_0/L^d}{V/L} \to 0 \quad \text{as } L\to\infty \;(d>1)
On the time scale of phonons, these states are effectively degenerate. So we can construct approximate symmetry-breaking ground states

|\theta\rangle = \sum_N e^{iN\theta}\, e^{-(N-N_{\min})^2/2n^2}\, |N\rangle
which have an approximately well-defined phase:

\langle\theta|a|\theta\rangle \sim \text{const}\; e^{i\theta}

As the system size becomes increasingly large, the claim is that there is symmetry breaking in this sense, provided we consider the approximate ground states |\theta\rangle (d > 1).
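As a toy illustration of this claim (my own sketch, with the k = 0 mode treated as a single bosonic mode so that a|N\rangle = \sqrt{N}|N-1\rangle; the width and parameter values are arbitrary), one can build |\theta\rangle numerically and check that \langle\theta|a|\theta\rangle comes out close to \sqrt{N_{\min}}\,e^{i\theta}:

```python
# Toy check (assumed single-mode model, a|N> = sqrt(N)|N-1>):
# the superposition |theta> has <theta|a|theta> ~ sqrt(Nmin) e^{i theta}.
import numpy as np

Nmin, width, theta = 400, 10.0, 0.7
N = np.arange(0, 1000)
c = np.exp(1j * N * theta - (N - Nmin)**2 / (2 * width**2))
c /= np.linalg.norm(c)
a_exp = np.sum(np.conj(c[:-1]) * np.sqrt(N[1:]) * c[1:])  # <theta|a|theta>
print(a_exp, np.sqrt(Nmin) * np.exp(1j * theta))          # nearly equal
```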

Notes Quantum Many Body Physics
Day 12
November 6th, 2013 Giordon Stark

24 Last Time

For the last couple of lectures we've been talking about an effective theory describing the low-energy physics of superfluids. We wrote down the Lagrangian

L = \frac{1}{2V_0}(\partial_t\theta)^2 - \frac{\rho_0}{2m}(\partial_x\theta)^2
We analyzed this and showed that when you quantize the theory you get phonon excitations. We also worked out more carefully what happens with the k = 0 mode, which is not a phonon mode: there is an extra term in the energy, so besides the phonons there is another degree of freedom, the total number of particles in the system.

When we thought about the exact ground states of the system, the exact ground state had a definite number of particles N and no phonon excitations. By our earlier definition, a state with a definite particle number shows no symmetry breaking:

\langle GS|a|GS\rangle = 0

But we discussed that, looking more carefully, there is a whole collection of low-lying states which are approximately degenerate, with very small splittings, separated from the phonon excitations above them. While each eigenstate has vanishing order parameter, you can construct approximate ground states by taking superpositions in which the order parameter is non-zero, i.e. true symmetry-breaking states. That is the sense in which symmetry breaking can occur even in a finite-sized system: it never occurs in eigenstates, but it can be seen in these approximate eigenstates.

But that was a somewhat complicated construction, so what I want to do now is use the fourth definition of symmetry breaking, in terms of long-range order, to probe long-range order in our ground states. The advantage is that we won't have to think about approximate states: we'll just take the exact ground state of any finite system and look for long-range order. That is what we're going to do today.

Recall the precise definition of symmetry breaking (SSB v.4):

\langle GS|a^\dagger(x)a(0)|GS\rangle \to \text{const} \neq 0 \quad (\text{as } x\to\infty)

25 Superfluids in Low Dimension

We are going to check whether \langle a^\dagger(x)a(0)\rangle \to \text{const}, i.e. check for "long range order":

\langle GS|T(a^\dagger(x,t)a(0,0))|GS\rangle = \frac{\int D\phi\, \phi^*(x,t)\phi(0,0)\, e^{iS}}{\int D\phi\, e^{iS}}
Of course, there’s some subtlety with taking the time to have a very small negative imaginary part,
but I won’t write in those details at this point.

This is our correlator in terms of the original φ fields, but we want to express it in terms of our effective theory, which describes the low-energy, long-distance physics. We are interested precisely in long distances, in particular whether the correlator goes to a constant or to zero at infinity. So we want to re-express it in terms of θ. Remember φ can be written as \sqrt{\rho_0 + \delta\rho}\, e^{i\theta}. The claim is that re-expressing the correlator in terms of θ gives:

\langle GS|T(a^\dagger(x,t)a(0,0))|GS\rangle \sim |\phi_0|^2\, \underbrace{\frac{\int D\theta\, e^{-i\theta(x,t)}\, e^{i\theta(0,0)}\, e^{iS_{\text{eff}}}}{\int D\theta\, e^{iS_{\text{eff}}}}}_{\langle e^{-i\theta(x,t)}\, e^{i\theta(0,0)}\rangle}

The magnitude |\phi_0|^2 comes outside as a constant prefactor, which we don't care about. The remaining expectation value is taken with respect to the complex weight of the effective path integral, and that is what we want to calculate: how it behaves as x \to \infty. To calculate it, it's helpful to generalize the discussion a little and define the following quantity.
Z[f(x,t)] = \int D\theta\, e^{iS}\, e^{i\int d^dx\, dt\, f(x,t)\theta(x,t)}

So that

\langle e^{-i\theta(x_0,t_0)}\, e^{i\theta(0,0)}\rangle = \frac{Z[\delta(x,t) - \delta(x-x_0, t-t_0)]}{Z[0]}
So we need to calculate the ratio Z[f]/Z[0],
which is analogous to the Gaussian identity

\frac{\int e^{-\frac{a}{2}x^2 + bx}\,dx}{\int e^{-\frac{a}{2}x^2}\,dx} = e^{b^2/2a}
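A quick numerical check of this Gaussian identity (my own sketch; the values of a and b are arbitrary):

```python
# Check: Int e^{-a x^2/2 + b x} dx / Int e^{-a x^2/2} dx = e^{b^2/(2a)}.
import numpy as np

a, b = 1.7, 0.9
x = np.linspace(-20.0, 20.0, 200001)
num = np.trapz(np.exp(-a * x**2 / 2 + b * x), x)
den = np.trapz(np.exp(-a * x**2 / 2), x)
print(num / den, np.exp(b**2 / (2 * a)))  # agree to high precision
```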
We have to identify what a and b are in our case. Recall the action (from above):

S = \int dt\, L = \int dt\, d^dx \left[ \frac{1}{2V_0}(\partial_t\theta)^2 - \frac{\rho_0}{2m}(\partial_x\theta)^2 \right] = \int dt\, d^dx\, \frac{\chi}{2}\left[ (\partial_t\theta)^2 - V^2(\partial_x\theta)^2 \right], \qquad \chi = \frac{1}{V_0}
So we have a Gaussian integral with

\frac{Z[f]}{Z[0]} = e^{\frac{1}{2}\int dt_1 dt_2\, d^dx_1 d^dx_2\, \underbrace{if(x_1,t_1)}_{b}\, \underbrace{\left[-i\chi(-\partial_t^2 + V^2\partial_x^2)\right]^{-1}}_{a^{-1}}\, \underbrace{if(x_2,t_2)}_{b}}

= e^{-\frac{i}{2}\int dt_1 dt_2\, d^dx_1 d^dx_2\, f(x_1,t_1)\, G_\theta(x_1-x_2,\, t_1-t_2)\, f(x_2,t_2)}

and we recognize a^{-1} as being related to the Green's function:

\left[-i\chi(-\partial_t^2 + V^2\partial_x^2)\right]^{-1} = \langle T(\theta(x,t)\theta(0,0))\rangle = iG_\theta(x,t)

Now substitute the f we're interested in, f = \delta(x,t) - \delta(x-x_0, t-t_0). Plugging this in gives four terms when expanded out; you can think of them as interaction energies between two point charges. First there are the self-interaction (diagonal) terms: since they are evaluated at the same point, they involve G_\theta(0,0). Then there are the cross-terms, involving the product of the two deltas, which give G_\theta(x_0,t_0). The self-interaction terms come with a plus sign and the cross-terms with a minus sign (from the minus sign in f). All together we get the following important result:

\langle e^{-i\theta(x,t)}\, e^{i\theta(0,0)}\rangle = e^{i(G_\theta(x,t) - G_\theta(0,0))} = e^{\langle\theta(x,t)\theta(0,0)\rangle - \langle\theta(0,0)\theta(0,0)\rangle}
I did this for our particular superfluid action, but these manipulations hold for any free (quadratic) theory: as long as you're dealing with a Gaussian integral, you can calculate the correlation function of exponentials from two-point correlation functions in this way. This is exact starting from the effective action; the original theory, of course, was more complicated, with interactions, so we did make approximations in going from the original theory to this effective theory of the angular part of the field.

25.1 (1 + 1)D

Now we just need to compute these Green's functions (or expectation values) for θ. We'll start in 1 + 1 dimensions, i.e. one spatial dimension. There are many ways to compute these Green's functions; I'm going to do a brute-force calculation, pretty similar to what is written in the book, but working in imaginary time, which makes it slightly cleaner, and then continue back to real time. In imaginary time, the path integral looks like
Z = \int D\theta\, e^{-\frac{\chi}{2}\int \theta\left(-\partial_\tau^2 - V^2\partial_x^2\right)\theta\, dx\, d\tau}

As a matter of notation, whenever we have path integrals like this I will write them as a partition function Z, even when we are not calculating one, even in real time. We want to calculate the Green's function, which is given by inverting this operator:

G_\theta^{\text{im}}(x,\tau) = \langle\theta(x,\tau)\theta(0,0)\rangle = \frac{\chi^{-1}}{-\partial_\tau^2 - V^2\partial_x^2}

Applying this operator to the Green's function had better give a delta function:

\left(-\partial_\tau^2 - V^2\partial_x^2\right) G_\theta^{\text{im}}(x,\tau) = \chi^{-1}\delta(x)\delta(\tau) = \chi^{-1}\sum_k \frac{1}{L}\, e^{ikx} \int \frac{d\omega}{2\pi}\, e^{i\omega\tau}

We invert the operator in ω and k space; writing out the delta functions in Fourier space as above makes sure we get all the normalizations correct. I'm going to treat space and time slightly differently, in the following sense: I will consider a finite-sized system in space.

For our 1 + 1 dimensional system, I'm going to put space on a circle with some finite circumference L, and let the imaginary time direction be infinite. The path integral then lives on a cylinder: the fields live on a cylinder whose circumference is the spatial direction x and whose length is the time direction τ.

The reason I'm using a finite-sized system is that otherwise we get divergences. Ultimately we're interested in correlation functions between two points whose separation is much smaller than the circumference: both the distance x between them and the time separation will be much smaller than L. We think of the separation as fixed, with finite L regulating everything. Since this is the system we have in mind, when we work in Fourier space the k's are quantized: they have to be multiples of 2π/L. The ω's are not quantized and can take any values. We expand in the basis (1/L)e^{ikx}, the discrete Fourier expression in space, while for ω we can use the continuum (integral) expression:

G_\theta^{\text{im}}(x,\tau) = \frac{\chi^{-1}}{L}\sum_k \int \frac{d\omega}{2\pi}\, \frac{e^{i(kx+\omega\tau)}}{\omega^2 + V^2k^2}

= \frac{\chi^{-1}}{L}\sum_k \int \frac{d\omega}{2\pi}\left( \frac{1}{\omega - iV|k|} - \frac{1}{\omega + iV|k|} \right)\frac{1}{2iV|k|}\, e^{i(kx+\omega\tau)}

(assume \tau > 0)

= \frac{\chi^{-1}}{L}\sum_k (2\pi i)\,\frac{1}{2\pi}\,\frac{1}{2iV|k|}\, e^{ikx - V|k|\tau}

= \frac{1}{2V\chi L}\sum_k \frac{1}{|k|}\, e^{ikx - V|k|\tau}, \qquad k = \frac{2\pi n}{L}

= \frac{1}{2V\chi L}\Bigg[ \underbrace{\sum_{n=1}^{\infty}\frac{1}{2\pi n/L}\, e^{\frac{2\pi in}{L}(x+iV\tau)}}_{k>0} + \underbrace{\sum_{n=1}^{\infty}\frac{1}{2\pi n/L}\, e^{-\frac{2\pi in}{L}(x-iV\tau)}}_{k<0} \Bigg]

= -\frac{1}{4\pi\chi V}\left[ \log\left(1 - e^{\frac{2\pi i}{L}(x+iV\tau)}\right) + \log\left(1 - e^{-\frac{2\pi i}{L}(x-iV\tau)}\right) \right]

Here we used \log(1-x) = -(x + x^2/2 + x^3/3 + \cdots), and the ω integral was done by contour integration: depending on whether τ is positive or negative, you close the contour in the upper or lower half plane and pick up the residue of the corresponding pole. For τ > 0 we close in the upper half plane, where things are well-behaved as \omega \to i\infty, so the only pole enclosed is \omega = iV|k|, giving the factor 2\pi i. The L's cancel, we pull out the 2π's, and we end up with 1/(4\pi\chi V) times the two sums. Each sum is just the power series of a logarithm: more precisely, -\log(1-x) with x the quantity being raised to the n-th power. For k < 0 the exponent picks up a minus sign, giving -\frac{2\pi i}{L}(x - iV\tau), and the overall minus sign out front comes from the \log(1-x) expansion.
In the limit of large L, or x, \tau \ll L, this becomes

= -\frac{1}{4\pi\chi V}\log\left( \frac{4\pi^2(x^2 + V^2\tau^2)}{L^2} \right)
So we conclude that

\langle\theta(x,\tau)\theta(0,0)\rangle = -\frac{1}{2\pi\chi V}\log\left( \frac{2\pi\sqrt{x^2+V^2\tau^2}}{L} \right)
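As a check on this algebra (my own numerical sketch; parameters are arbitrary), the discrete mode sum reproduces the logarithm when x, V\tau \ll L:

```python
# Compare G(x,tau) = (1/(2 V chi L)) sum_{k != 0} e^{ikx - V|k|tau}/|k|
# with -(1/(2 pi chi V)) log(2 pi sqrt(x^2 + V^2 tau^2)/L).
import numpy as np

chi, V, L = 1.0, 1.0, 200.0
x, tau = 3.0, 2.0
k = 2 * np.pi * np.arange(1, 200000) / L
mode_sum = np.sum(np.cos(k * x) * np.exp(-V * k * tau) / k) / (V * chi * L)
log_form = -np.log(2 * np.pi * np.hypot(x, V * tau) / L) / (2 * np.pi * chi * V)
print(mode_sum, log_form)  # agree to ~1% for x, V*tau << L
```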

Now let's continue to real time. In imaginary time we take the time direction along the negative imaginary axis, so to go to real time we rotate τ (with ε an infinitesimal):

\tau \to it(1 - i\epsilon) = it + \epsilon t

So we get

\langle\theta(x,t)\theta(0,0)\rangle = -\frac{1}{2\pi\chi V}\log\left( \frac{2\pi\sqrt{x^2 - V^2t^2 + i\epsilon}}{L} \right)

In terms of the Green's function,

iG_\theta(x,t) = -\frac{1}{2\pi\chi V}\log\left( \frac{2\pi\sqrt{x^2 - V^2t^2 + i\epsilon}}{L} \right)

What about Gθ (0, 0)?

If you naively set x = t = 0, you get \log(0), i.e. something divergent. And that's correct in some sense. The key point, which I maybe didn't emphasize enough, is that our effective theory is only valid at long distance scales, larger than some scale ξ. The precise meaning is that the effective theory is a theory of fluctuations at distances larger than ξ: when we do the path integral, we should only include fluctuations with wavelength longer than ξ. In other words, the k sum should only run up to a maximum value which is basically \xi^{-1}. When we derived the effective theory (I didn't explain this carefully), what we really did was integrate out all the fluctuations with wavelengths shorter than ξ, so the theory has a cutoff in it.

So G_\theta(0,0) looks divergent, but we have a cutoff at the short distance scale ξ:

iG_\theta(0,0) \to iG_\theta(\xi,0) = -\frac{1}{2\pi\chi V}\log\left( \frac{2\pi\xi}{L} \right)
Substituting these in, you'll see why the minus sign matters; it has a very important effect:

\langle e^{-i\theta(x,t)}\, e^{i\theta(0,0)}\rangle = e^{i(G_\theta(x,t) - G_\theta(0,0))}

= \left( \frac{L}{2\pi\sqrt{x^2 - V^2t^2 + i\epsilon}} \right)^{1/2\pi\chi V} \Bigg/ \left( \frac{L}{2\pi\xi} \right)^{1/2\pi\chi V}

= \left( \frac{\xi}{\sqrt{x^2 - V^2t^2 + i\epsilon}} \right)^{1/2\pi\chi V}

The whole calculation was really about computing this power. That's our correlation function, and we see a very important point: to look for long-range order we set t = 0 and let x \to \infty, and the correlation function goes to zero, with power-law (algebraic) decay. This shows that

\langle a^\dagger(x,t)a(0,0)\rangle \to 0 \quad \text{as } x\to\infty \;(t=0)

There is no long range order and no symmetry breaking. Note that the iε is only important to tell you how to handle the singularity in the square root.

The correlator does not go to a non-zero constant, which is consistent with what we found earlier from the energy spectra: the nearly degenerate low-lying states needed for symmetry breaking only exist in dimension greater than 1. Here we see explicitly that there's no long-range order in the 1d superfluid. This should be contrasted with what we calculated very early on in the purely classical analysis, where the ground state was a perfect coherent state and the correlator went to a constant, given by \phi_0^2. What we've shown is that going beyond the classical analysis, including the Gaussian fluctuations (which is basically what we've done here), changes the physics: the long-range order is destroyed by the Gaussian fluctuations.

So recall that the classical analysis gave symmetry breaking independent of the dimensionality. What we've shown is that including quantum fluctuations destroys this result in dimension 1. This is very important: it's not just a quantitative effect changing a few numbers, it changes the qualitative structure, and there is no symmetry breaking.

25.2 Example of Mermin-Wagner Theorem

What we found is that the Green's function gives a power law when you look at the correlation function of things like e^{i\theta}. The Mermin-Wagner theorem says this happens more generally: there is no continuous symmetry breaking for d \le 2 (classical systems, including thermal fluctuations) and for d \le 1 (quantum systems with linear dispersion \omega \sim k).

The claim is that in general, if you repeat the quantum calculation, you end up with this logarithm. As we discussed very early on (and we will see more of this), classical systems are very closely related to quantum systems, so you can do a very similar calculation for classical systems: thermal fluctuations can likewise destroy the order. We'll talk about the classical finite-temperature system probably next time.

25.3 (2 + 1)D

Again, going to imaginary time:

G_\theta^{\text{im}} = \langle\theta(x,\tau)\theta(0,0)\rangle = \frac{\chi^{-1}}{-\partial_\tau^2 - V^2\partial_x^2}

Following exactly the same logic as before,

= \chi^{-1}\int \frac{d^2k}{(2\pi)^2}\int \frac{d\omega}{2\pi}\, \frac{e^{i(kx+\omega\tau)}}{\omega^2 + V^2k^2}

= \frac{\chi^{-1}}{4\pi V\sqrt{x^2 + V^2\tau^2}}
This is an integral you're probably very familiar with: after rescaling, it's basically the Fourier transform of 1/k^2 in three dimensions. In two dimensions the Green's function is a logarithm, but in three dimensions it is 1 over the distance, 1/(4\pi R), where here (because of the factor of V) R = \sqrt{x^2 + V^2\tau^2}; setting V = 1 it is literally 1/(4\pi R). You can also get this by explicitly evaluating the integral. Note, by the way, that before we had an infrared divergence: integrating over ω gave something like 1/k, so we had to use a finite system to regulate the long-distance divergence and sum over discrete values of k. Here we still get a 1/k, but the measure is d^2k, so the integral is infrared convergent and we can immediately take L \to \infty, which is why we didn't have to do the discrete sum over k again.
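For completeness, here is the rescaling step spelled out (my own filling-in; it uses only the standard result \int \frac{d^3q}{(2\pi)^3}\frac{e^{iq\cdot r}}{q^2} = \frac{1}{4\pi|r|}):

```latex
\int \frac{d^2k\,d\omega}{(2\pi)^3}\,
     \frac{e^{i(k\cdot x+\omega\tau)}}{\omega^2+V^2k^2}
\overset{k'=Vk}{=}
\frac{1}{V^2}\int \frac{d^2k'\,d\omega}{(2\pi)^3}\,
     \frac{e^{i(k'\cdot x/V+\omega\tau)}}{\omega^2+k'^2}
= \frac{1}{V^2}\cdot\frac{1}{4\pi\sqrt{x^2/V^2+\tau^2}}
= \frac{1}{4\pi V\sqrt{x^2+V^2\tau^2}}
```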

Now we can continue to real time as before: \tau \to it(1 - i\epsilon).

iG_\theta(x,t) = \langle\theta(x,t)\theta(0,0)\rangle = \frac{\chi^{-1}}{4\pi V\sqrt{x^2 - V^2t^2 + i\epsilon}}

We also need, of course, iG_\theta(0,0). As before, this has a short-distance divergence, which gets cut off at our short scale ξ:

iG_\theta(0,0) \sim iG_\theta(\xi,0) = \frac{\chi^{-1}}{4\pi V\xi}
So then

\langle e^{-i\theta(x,t)}\, e^{i\theta(0,0)}\rangle = e^{iG_\theta(x,t) - iG_\theta(0,0)}

= \exp\left( \frac{\chi^{-1}}{4\pi V\sqrt{x^2 - V^2t^2 + i\epsilon}} - \frac{\chi^{-1}}{4\pi V\xi} \right)

\to \exp\left( -\frac{\chi^{-1}}{4\pi V\xi} \right) \quad (\text{as } x\to\infty)

So

\langle a^\dagger(x,t)a(0,0)\rangle \to |\phi_0|^2\, e^{-\frac{1}{4\pi V\xi\chi}}

and we have long range order! We can also see what the quantum fluctuations did in this case. Classically we found this correlation function went to \phi_0^2, the value in the classical ground state:

\langle a^\dagger(x,t)a(0,0)\rangle \to |\phi_0|^2 \quad \text{classically}

So the only effect of quantum fluctuations here is to reduce the order parameter by some factor; the correlation is reduced, but it hasn't been killed.
Notes Quantum Many Body Physics
Day 13
November 8th, 2013 Giordon Stark

26 Superfluidity at Finite Temperature

Let's go back to our original coherent-state path integral description of bosons. Recall that, given the path integral for a quantum system, you can calculate the partition function at finite temperature by evaluating the path integral in imaginary time with periodic boundary conditions in the imaginary time direction:

Z = \int D\phi\; e^{-\int_0^\beta d\tau \int d^dx \left[ \frac{1}{2}\left(\phi^*\partial_\tau\phi - \phi\,\partial_\tau\phi^*\right) + \frac{1}{2m}|\partial_x\phi|^2 - \mu|\phi|^2 + \frac{V_0}{2}|\phi|^4 \right]}

Instead of going over real time, we go over imaginary time from 0 to \beta = 1/T. What we mean by this integral is a geometry like the following, drawn schematically with two spatial dimensions and one imaginary time dimension (I can't draw three spatial dimensions): the spatial dimensions are x, and the imaginary time direction has extent β and is periodic, so the field at the top of the slab is identified with the field at the bottom. If we do that, we have the partition function.

Figure 42: A box of two spatial dimensions and imaginary time dimension β

Of course the basic problem in statistical physics is to compute the partition function. How will we do it? We're not actually going to compute it quantitatively (of course we can't do it exactly anyway in the interacting case); instead, I'm going to describe a way to think about the structure of this partition function.

There's a basic conceptual point which applies to any system at finite temperature. If we think about our system, with its spatial dimensions and its imaginary time dimension, and zoom out to look at it from far away (because we're interested in long length scales), the imaginary time dimension becomes small and we can't see it. So instead of a system in d + 1 dimensions, which is what this integral is over, the long-distance physics will look d-dimensional: at scales large compared with the extent of the temperature direction, it will look like a d-dimensional theory instead of a d + 1 dimensional one. Formally, the effective theory that describes the long-distance correlation functions should be expressible as a d-dimensional theory. Let's describe how to do that.

Idea: At long length scales, β direction no longer looks like an extra dimension. So now, the
effective theory at long distances is d dimensional.

This is true at any finite temperature. If we're working at zero temperature, which is what we've been doing up until now, the β direction is infinitely large, and we're genuinely dealing with a d + 1 dimensional theory. At finite temperature the extra time direction has finite extent, so the long-distance physics can be described in terms of a d-dimensional system rather than d + 1.

So how are we going to derive this effective d-dimensional description from the path integral? The idea is to Fourier transform the modes along the time direction. Since the field is periodic in τ, there will be a lowest, zero-frequency mode, and then other modes at quantized frequencies that go like 2π/β. Those higher-frequency modes can be integrated out at long distances, leaving an effective theory for the single remaining mode.

It's conceptually similar to extra dimensions in high-energy physics: at low energy you can't see the extra dimension, because it is only visible if you can probe energies high enough to create excitations with higher modes in that direction. So at low energies, or long distances, this dimension isn't going to appear.

Formally, consider a Fourier transform in the β-direction:

\phi(x,\tau) = \sum_{n=-\infty}^{\infty} \phi_n(x)\, e^{-i\omega_n\tau}, \qquad \omega_n = \frac{2\pi n}{\beta}, \quad n = 0, \pm 1, \pm 2, \cdots

We can always expand in such a sum, with n parameterizing the different modes \omega_n, because the field is periodic. So then

Z = \int \prod_n D\phi_n\; e^{-\sum_n \int d^dx \left( -i\omega_n|\phi_n|^2 + \frac{1}{2m}|\partial_x\phi_n|^2 - \mu|\phi_n|^2 + \frac{V_0}{2}(\cdots) \right)}

(I first tried to give you some intuition above; if that didn't make sense, pay attention to the calculation.) One of these modes is special: the n = 0 mode, for which the frequency \omega_0 = 0. If it's not clear why it's special, we will argue why below. All the other modes are massive; they propagate in the x direction but can be integrated out at long distances, leaving an effective theory in terms of \phi_0, the low-energy, long-wavelength degree of freedom:

Z = \int D\phi_0\; e^{-S_{\text{eff}}[\phi_0]}

S_{\text{eff}} is complicated. It includes non-local terms such as

\int d^dx\, d^dx'\, |\phi_0(x)|^2\, K(x-x')\, |\phi_0(x')|^2

Remember this is not a Gaussian integral: we have interactions, so integrating out the higher modes is a complicated calculation, and we can get many, many terms in \phi_0, in general including non-local ones. For example, you might get density-density interactions of the form |\phi_0(x)|^2 K(x-x') |\phi_0(x')|^2, and they won't be perfectly short-distance interactions.

On the other hand, the claim is that, because we're integrating out modes which are in some sense massive, these kernels are short ranged; for example, we would expect some sort of exponential decay:

K(x-x') \sim e^{-|x-x'|/l_T}

Here l_T is a thermal length scale: it depends on temperature and diverges as the temperature goes to zero. At length scales longer than this scale (if you believe this claim, which I haven't proven for you yet), K will look essentially local, because it is confined to within a few times l_T, so we can at least well-approximate the action as a local action. In other words, there is some length scale l_T such that at lengths larger than l_T, the action S_{\text{eff}} is local.

Why this expectation? Consider a system:

\int D\Phi\; e^{-\int d^dx \left( -i\omega_n\Phi^2 + \frac{1}{2m}(\partial_x\Phi)^2 - \mu\Phi^2 + \Phi\cdot J(x) \right)}

Instead of adding the more complicated \Phi^4 interactions, I couple Φ to some external source J(x). If I integrate out Φ completely, I get a number; since the action is completely quadratic, the result is quadratic in J:

\sim \exp\left( \frac{1}{2}\int d^dx\, d^dx'\, J(x)\left[-i\omega_n - \frac{\partial_x^2}{2m} - \mu\right]^{-1} J(x') \right)

So what does inverting that operator look like?

\sim \int d^dk\; \frac{e^{ik(x-x')}}{-i\omega_n + \frac{k^2}{2m} - \mu}

We've done these types of integrals before: whenever you Fourier transform a function with no singularities (e.g. poles) on the real axis, you get something that decays faster than any power. Here is the key point: as long as \omega_n \neq 0, the denominator never vanishes, because the rest of it is real, so the Fourier transform decays exponentially. (If \omega_n vanished and, say, μ were positive, the denominator would have singularities at certain values of k, giving power-law behavior.) Doing the Fourier transform, you will indeed get something like

\sim e^{-|x-x'|/l}, \qquad l \sim \frac{1}{\sqrt{m\omega_n}}, \quad \omega_n = \frac{2\pi n}{\beta}
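A numerical sketch of this claim (one spatial dimension; the values of m, \mu and \omega_n are my own illustrative choices): the Fourier transform of 1/(-i\omega_n + k^2/2m - \mu) decays exponentially in |x-x'| whenever \omega_n \neq 0.

```python
# Fourier transform (1d, illustrative parameters) of the kernel
# 1/(-i w_n + k^2/2m - mu); |K(x)| falls off exponentially.
import numpy as np

m, mu, wn = 1.0, 1.0, 2 * np.pi        # first Matsubara frequency, beta = 1
k = np.linspace(-400.0, 400.0, 2**20)
f = 1.0 / (-1j * wn + k**2 / (2 * m) - mu)
xs = np.array([0.5, 1.0, 1.5, 2.0])
K = np.array([np.trapz(f * np.exp(1j * k * x), k) / (2 * np.pi) for x in xs])
print(np.abs(K))                        # decreasing
print(np.abs(K[1:]) / np.abs(K[:-1]))   # roughly constant ratio: e^{-dx/l}
```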

You can also see this from dimensional analysis. Remember that \omega_n depends on temperature: \omega_n = 2\pi n/\beta. So we see exactly the physics I was indicating: first, the length scale l grows as \omega_n decreases (here l \propto 1/\sqrt{\omega_n} \propto \sqrt{\beta}), so as \beta \to \infty, i.e. as we go to zero temperature, this scale gets larger and larger. It is the length scale at which the physics crosses over from d + 1 dimensional to d dimensional.

The key point is that the modes we integrate out are massive: the i\omega_n term effectively gives the field a mass (it literally would be a mass if it weren't imaginary). So whenever we integrate these modes out, the kernels we generate have this decaying property, and we get a short-range K. That's my argument for why we end up with a nice, local action at the end of the day.

So what does the local action S_{\text{eff}} look like?

S_{\text{eff}} = \beta \int d^dx \left[ V(|\phi|^2, T, \cdots) + \frac{1}{2m^*(T)}|\partial_x\phi|^2 + \underbrace{\cdots}_{\text{higher-order derivatives}} \right]

Depending on T and other parameters, V may have different forms:

1. V looks like a single-well potential.

2. V looks like a double-well potential.

In general, taking the same philosophy as up to now, we assume fluctuations are small and expand around the stationary point. In the first case we expect no symmetry breaking, whereas in the second case the system could have symmetry breaking, because (forgetting about fluctuations) it wants a non-zero value of \phi_0 in the lowest free-energy configuration. So let's focus on the second case, where we expect superfluidity to happen.

Analyze the second case. Of course, if fluctuations are strong, anything can happen in either case and we have no idea; but in the limit of weak fluctuations, we follow the same procedure we used when analyzing the quantum superfluid:

\phi = \sqrt{\rho_0 + \delta\rho}\; e^{i\theta}

Assuming \delta\rho \ll \rho_0, expand around \rho_0 (the minimum configuration). Notice, by the way, one difference from the action we had in the quantum case: there is no Berry phase term.
We just have spatial derivatives at this point, and we can follow the same procedure as before: integrate out δρ to get the effective action

S_{\text{eff}} = \int d^dx\; \frac{\eta}{2}(\partial_x\theta)^2
We get something similar to the zero-temperature case. The |\phi|^2 terms drop out of the problem because we're only looking at fluctuations near the minimum, and they have no dependence on θ; our only degree of freedom at the end of the day is θ. We can't necessarily predict the coefficient η, that is non-trivial, but by symmetry the only terms we can get are (\partial_x\theta)^2 plus higher-order derivative terms, which we drop because we're interested in long-distance physics. So just by symmetry principles, you know that integrating out δρ gives an effective action of this form.

This action has a name: the "classical d-dimensional XY model". I want to emphasize that it is extremely similar to the quantum XY model we've been studying. Writing them next to each other, compare the "quantum mechanical (d-1)+1-dimensional XY model"

S_{\text{eff}} = \int d^{d-1}x\, dt\; \frac{\chi}{2}\left( (\partial_t\theta)^2 - V^2(\partial_x\theta)^2 \right)
Maybe "quantum" and "classical" is confusing language; by quantum I mean T = 0. The T = 0 model in imaginary time is equal to the classical d-dimensional XY model; you could just say "this is T = 0, this is T > 0." The reason to call the latter model classical is that, even though we derived it from a quantum system at finite temperature, at the end of the day you can think of it as a purely statistical mechanical model: its weight is a Boltzmann weight. It can come out of the effective theory of a quantum system, but we can also think of it as a completely classical statistical mechanical model.

26.1 Long range order?

So remember that last time, we showed that quantum fluctuations destroy long-range order in one-
dimension and below in the quantum XY model. Similarly, thermal fluctuations destroy long range
order in classical XY model in d ≤ 2.

So we can work this out explicitly.

1. d = 1: By now we know how to calculate correlation functions for Gaussian models: the Green's function is given by inverting the operator that appears in the quadratic part of the action, here -\eta\partial_x^2 (the minus sign comes from integrating by parts). Symbolically,

\langle\theta(x)\theta(0)\rangle = -\frac{1}{\eta\partial_x^2} = \frac{1}{L}\sum_k \frac{e^{ikx}}{\eta k^2} = -\frac{1}{2\eta}|x| + \text{const}\cdot L

(a numerical check of this mode sum appears after this list).

If you wanted to, you could calculate this formal quantity by going to Fourier space; for example, this is how one solves

(-\partial_t^2 + \omega_0^2)\, G(t) = \delta(t) = \int \frac{d\omega}{2\pi}\, e^{i\omega t}

for G(t).

For a finite system of length L, going to Fourier space means summing over the allowed k's of e^{ikx}, like the calculation we did before in the quantum case: we invert the operator in Fourier space and could calculate the sum explicitly, as we did then. But let's not bother; use the shortcut and remember that we just need to find the Green's function. We basically have to solve electrostatics in one dimension: find the function whose Laplacian is a δ function. You can easily see what that is and write it down. The Green's function is only determined up to a constant, and in fact, if you actually perform the sum, you get a constant piece which is divergent; it is not important because it will cancel out, but I wrote it above for completeness. Once we have this expression, we can quickly calculate the correlation function we are interested in:

\langle e^{i\theta(x)}\, e^{-i\theta(0)}\rangle = e^{\langle\theta(x)\theta(0)\rangle - \langle\theta(0)\theta(0)\rangle} = e^{-|x|/2\eta} \;\Rightarrow\; \text{exponential decay}

So there is no long range order for the d = 1 superfluid at finite temperature. Last time we studied one- and two-dimensional quantum systems; in one-dimensional quantum systems we found algebraic (quasi-)long-range order. We didn't bother with the 0-dimensional quantum case, which would have been the analog of this: a tiny puddle of bosons, where it's obvious you can't have symmetry breaking, since there is no thermodynamic limit. Here we're essentially redoing that calculation: the classical d = 1 calculation is very much analogous to a 0-dimensional quantum calculation.

2. d = 2:

\langle\theta(x)\theta(0)\rangle = \frac{1}{-\eta(\partial_x^2 + \partial_y^2)} = \frac{1}{L^2}\sum_k \frac{e^{ikx}}{\eta|k|^2} = -\frac{1}{2\pi\eta}\log|x| + \frac{1}{2\pi\eta}\log L + \text{const}

So then we can compute the correlation function

\langle e^{-i\theta(x)}\, e^{i\theta(0)}\rangle \sim e^{\langle\theta(x)\theta(0)\rangle - \langle\theta(l)\theta(0)\rangle} = \left(\frac{l}{x}\right)^{1/2\pi\eta}
where we have a cutoff at length scale l (like last time): the theory only includes fluctuations with wavelengths longer than l, so the k sum is cut off at 1/l. Substituting in, we get a power law just like in the quantum case: algebraic decay, i.e. quasi-long-range order, for the d = 2 superfluid at finite temperature.

3. d = 3:

\langle\theta(x)\theta(0)\rangle = \frac{-1}{\eta(\partial_x^2 + \partial_y^2 + \partial_z^2)} = \frac{1}{4\pi\eta|x|}

Again cutting off at the short-distance scale l,

\langle e^{-i\theta(x)}\, e^{i\theta(0)}\rangle = e^{\langle\theta(x)\theta(0)\rangle - \langle\theta(l)\theta(0)\rangle} \sim e^{-\frac{1}{4\pi\eta l}}

which shows we have true long range order for the d = 3 superfluid at finite temperature.
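Here is the promised numerical check of the d = 1 mode sum (my own sketch; η and L are arbitrary): taking differences removes the divergent constant and exposes the -|x|/2\eta behavior.

```python
# Check: (1/L) sum_k e^{ikx}/(eta k^2) = -|x|/(2 eta) + const * L, x << L.
import numpy as np

eta, L = 1.3, 1000.0
x = np.array([1.0, 2.0, 5.0])
k = 2 * np.pi * np.arange(1, 100000) / L
G = np.array([2 * np.sum(np.cos(k * xi) / (eta * k**2)) / L for xi in x])
print(G - G[0])                 # differences remove the constant
print(-(x - x[0]) / (2 * eta))  # ~ matches for x << L
```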

26.2 What about η?

That brings us to an interesting question: what about this parameter η? Looking at all these results, does the value of η matter? It is sometimes called the stiffness of the fluid:

S_{\text{eff}} = \beta \int \frac{\eta}{2}(\partial_x\theta)^2

It is the coefficient of (\partial_x\theta)^2. If η is very large, the θ's really want to order: they hate spatial variation. If η is small, θ can have lots of fluctuations without costing much free energy.

From what's on the board, d = 2 gives algebraic long range order and d > 2 gives long range order, apparently independent of η: once we get to this theory, the value of the stiffness seems not to affect these conclusions. But that's incorrect, for very important reasons: η affects much more fundamental physics than just the power law or the value of the order parameter. How is that possible? Our action is purely quadratic, and we made no approximations once we got to this stage, so at that level the value of η only has quantitative effects. But the key point is that our true effective action is not a completely quadratic theory, in the following sense.

We've been treating θ as a real-valued field, and that's the key issue: we should really treat θ as defined modulo 2π, an angular-valued field. If you make θ angular-valued, this simple action is no longer so simple; it becomes a much richer problem, in which the value of η can have important effects. What does θ being defined modulo 2π really mean physically? When we treat θ as real-valued, we are only considering certain configurations of θ in space: we are missing vortex configurations.

Figure 43: A vortex about a point in 2 dimensions

With the true angular θ you can have configurations like the one drawn, with the phase shown as an arrow: the value of θ winds as we go around the central point, the vortex. We did not integrate over such configurations in our calculations, because a smooth real-valued field can't have a configuration like this. The key point is that these vortex configurations exist and should have been included in the path integral. That is d = 2; in d = 3 we instead have vortex lines, which I'll draw like this:

Figure 44: Vortices in 3 dimensions

It's harder to draw, but you get the idea: these form closed loops, vortex loops. This is very similar to what we did early on in our instanton calculations: we first integrated over fluctuations around one solution, the path where the particle just stayed at one minimum of the double well the whole time, and then realized there were other saddle points we had to include. Summing over all stationary points, including those other configurations, changed the physics: it gave us tunneling and so on.

When we treat θ as real-valued, we are integrating over small fluctuations around a particular saddle point, θ = 0 (or some fixed configuration of θ). But just like with the instantons, there are other saddle-point solutions: configurations with vortices in space, and we should integrate over all of them. These have a very important effect. In the quadratic theory, where θ is real-valued, the value of η doesn't matter much; but if we treat θ as defined modulo 2π, then when η gets small, vortex configurations no longer cost much free energy (the free energy of a vortex scales with η), and they start to proliferate. A typical configuration will then contain a huge gas of vortices or vortex lines all over the place. Once that happens, the long-range order is destroyed: instead of algebraic order, you get completely short-range correlations. That is the key point.

So if η is small, vortices (vortex lines) can proliferate, which destroys long range order. Think about the d = 2 case: for η above some critical value \eta_c there is algebraic long range order, but for \eta < \eta_c there is no long range order at all. This is a very interesting story called the Kosterlitz-Thouless transition.

The same thing is true in d = 3 dimensions: for \eta > \eta_c there is long range order (as we've seen), while for \eta < \eta_c there are vortices everywhere and no long range order.

26.3 Why do superfluids flow without relaxation?

This is one of the most amazing properties of superfluids: if you take a jar of superfluid helium and get the helium spinning, it can sustain a continuous flow, seemingly for extremely long periods of time. Why is that? Why does it flow without dissipation? I'm going to discuss this at T = 0 for simplicity (we're back to T = 0 for this discussion).

So let’s imagine a superfluid in a box, but it’s a strange box. It’s actually going to be periodic.

Figure 45: A box of side length L

So let's think about this system. What's the ground state? To lowest order, in a completely classical picture, it is described by a coherent state with some fixed value of φ throughout space:

\phi = \sqrt{\rho_0}\, e^{i\theta}

with θ constant in space, say θ = 0. We can ask: how much current is flowing in the ground state? Not surprisingly, none; to see that, note the ground state describes a superfluid without any flow:

j = \frac{\rho_0}{m}\partial_x\theta = 0
If θ is constant, the current is zero: the ground state has no flow. The next question is: how do we describe a superfluid moving with velocity v? Basically, by making a Galilean boost: we take the state with no flow and transform it into a state moving with average velocity v,

\psi_{\text{boost}}(x_1,\cdots,x_N) = e^{imv(x_1+\cdots+x_N)}\, \psi(x_1,\cdots,x_N)

That's how you do boosts in quantum mechanics. In terms of coherent states, a boost is simple: |\phi\rangle \to |\phi_{\text{boost}}\rangle where \phi_{\text{boost}}(x) = \phi(x)e^{imvx}. So the boosted ground state is |\theta(x) = 0\rangle \to |\theta(x) = mvx\rangle: θ is no longer constant but has a gradient in space. Sanity check: the current for |\theta = mvx\rangle is

j = \frac{\rho_0}{m}\partial_x\theta = \frac{\rho_0}{m}(mv) = \rho_0 v

which is the current we expect for a system with density \rho_0 moving at velocity v.

Now we're almost ready for the punchline, but there's one key point to make first: the value of v is quantized in this system. The reason is the periodic boundary conditions: θ(L) must equal θ(0) up to a multiple of 2π, so mvL = 2\pi n. Each of these states therefore describes the superfluid moving at a velocity v = \frac{2\pi n}{mL} for some integer n. Any of these states clearly has higher energy than the ground state, because the gradient in θ costs energy when plugged into the Hamiltonian.
So these states have higher energy than the ground state. But can they relax? That's the question. Normally, when we have a higher-energy state, there are processes that allow the system to dissipate energy and go back to the ground state, in which case the flow would stop. The answer is that here it cannot relax easily.

The reason can be seen from a simple picture: draw the box as a ring of circumference L. In the flowing state, θ winds an integer number n of times as you go all the way around; n is just how many times it winds. To relax, you want to go from this winding configuration (the excited state with flow) to one with no winding at all (the ground state, with no flow). You can see from the picture that it can't relax easily: you can't smoothly go from one configuration to the other without climbing a big energy barrier. The only way to do it, if you think about it, is for a vortex (or vortex line) to tunnel across the system, which causes a phase slip; every time a vortex tunnels across, n decreases by 1. But for a vortex to tunnel through costs a lot of energy: it is a quantum mechanical tunneling process, suppressed exponentially in the length the vortex has to tunnel through, i.e. the width of the sample. So relaxation can occur, but it is exponentially slow in this length scale, which I also called L. That is the key point: the only way the winding can change is if a vortex, or vortex line, tunnels across the sample. This is known as a phase slip event. The tunneling rate decays exponentially with system size, which means the relaxation time goes to infinity as L goes to infinity (in the thermodynamic limit).

Even for a not-so-big system, what we have is, in some sense, a metastable state: the flowing state has higher energy than the ground state, but it can't relax, because getting to the ground state requires a difficult process, climbing over an energy barrier. The only way the flow can unwind is by a vortex tunneling across the sample, and that metastability makes it flow essentially forever.

Notes Quantum Many Body Physics
Day 14
November 13th, 2013 Giordon Stark

27 Introduction

First we spent a lot of time talking about a many-body system of bosons, and learned about new phenomena related to symmetry breaking; it is a very typical example of a bosonic many-body system. In the second part of the course we're going to talk about another model many-body system, made of fermions: liquids formed in two-dimensional electron gases. Just as the bosonic system demonstrated phenomena having to do with symmetry breaking, the fermionic systems will demonstrate phenomena that have nothing to do with symmetry breaking, but have a topological origin. I hope these two systems demonstrate two important types of phenomena that can occur in many-body systems.

27.1 Background

We’re going to imagine a 2D electron gas, electrons confined to a plane.

Figure 46: electrons in a gas

These electrons have charge e and are in the presence of a uniform perpendicular magnetic field \vec{B} = B\hat{z}. So classically, what happens in such a situation? If we solve the equations of motion, we get cyclotron orbits. The motion of the electrons is kind of boring.

Figure 47: Standard orbits in magnetic field

We're going to imagine applying an electric field to our gas: now we add a uniform electric field \vec{E} = E\hat{y}.

Figure 48: Turn on electric field

Classically, instead of closed orbits you get drifting cyclotron orbits (I may not get the direction right). We can calculate the drift velocity very easily by balancing the electric force against the magnetic force; if you want the full orbits, some of you did that on the homework, but we're not interested in the details of the orbits, only in the drift:

eE = evB \quad\Rightarrow\quad v = \frac{E}{B}
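As a small illustration (my own sketch; the parameters, time step and the crude integrator are arbitrary choices), integrating m\,dv/dt = e(E + v \times B) shows the time-averaged x-velocity approaching E/B:

```python
# Crude semi-implicit Euler integration of m dv/dt = e(E + v x B)
# with B = B z and E = E y: the orbit drifts in x at ~ E/B on average.
import numpy as np

e, m, B, E = 1.0, 1.0, 2.0, 0.5
dt, steps = 1e-3, 100000
v = np.zeros(2)        # (vx, vy), starting at rest
x_total = 0.0
for _ in range(steps):
    ax = (e / m) * (v[1] * B)       # (v x B)_x = vy * B
    ay = (e / m) * (E - v[0] * B)   # (v x B)_y = -vx * B
    v = v + dt * np.array([ax, ay])
    x_total += dt * v[0]
print(x_total / (steps * dt), E / B)  # average vx ~ 0.25
```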
This drift happens for all the electrons in our gas; they're all going to be drifting. So there will be some net current in this direction, which we can calculate easily enough.

j_x = \rho v = \underbrace{\frac{\rho}{B}}_{\sigma_{xy}} E_y

\sigma_{xy} is the Hall conductivity. This result is very general: even though it was derived naively, it can be proven for any system with Galilean invariance (our real system won't have Galilean invariance, but it is worth mentioning). Based on this, we expect that if you take your 2D electron gas and plot \sigma_{xy} versus 1/B, you get a straight line, with the slope giving you the density.

That is the classical expectation, and it is indeed what you would see at low magnetic fields and reasonably high temperatures. But at low temperatures and high magnetic fields you see something different: at small magnetic fields the nice linear behavior, but at larger and larger magnetic fields more structure develops, in particular plateaus. On each plateau, we see

\sigma_{xy} = \frac{ne^2}{h}, \qquad n = 1, 2, 3, \cdots
which is quantized to many decimal places; I believe it is one of the most precisely measured quantities in condensed matter physics. At high integers the plateaus get shorter and shorter, and eventually get washed out. This is known as the "integer quantum Hall effect". It was originally rather mysterious: it's hard to understand why you get such beautifully perfect quantization out of a system that is messy and disordered. But what was even more interesting was what happens when you go to higher magnetic fields.

One then sees plateaus at \sigma_{xy} = \frac{p}{q}\frac{e^2}{h} where p, q are integers. This is known as the "fractional quantum Hall effect". It was completely mysterious when it was originally discovered; people had no idea where it could come from. We'll talk about both the integer and fractional quantum Hall effects. The integer effect is simpler because the physics can be understood, at least at a qualitative level, from a non-interacting electron model, whereas the fractional quantum Hall effect depends strongly on interactions: there is no way to understand it even qualitatively without them.

28 Integer Quantum Hall Effect

So let's talk about the integer quantum Hall effect. Let's try to understand where these plateaus come from, and why we have exact quantization even in a messy, interacting system. Since interactions are not particularly important for the integer quantum Hall effect, we consider a toy system without any interactions: non-interacting electrons in a magnetic field \vec{B} = B\hat{z} and electric field \vec{E} = E\hat{y}. Neglect spin, set \hbar = c = 1, and work at zero temperature. Since the system is non-interacting, we can solve the single-particle Hamiltonian, and the ground state will be a Slater determinant of single-particle states. The single-particle problem is the usual Hamiltonian of an electron moving in electric and magnetic fields.
H = \frac{(p - eA)^2}{2m} - eEy, \qquad \vec\nabla\times\vec{A} = B\hat{z}

And we are going to work in the Landau gauge: \vec{A} = -By\hat{x}. We have an electric field that wants to push our electrons to increasing values of y. So the Hamiltonian is

H = \frac{(p_x + eBy)^2}{2m} + \frac{p_y^2}{2m} - eEy
A nice feature of our Hamiltonian: there is no dependence on x, so [p_x, H] = 0. (If I chose a different gauge, I wouldn't necessarily have that; but in this gauge p_x commutes with H and is a good quantum number.) So I can choose the eigenstates to be eigenstates of p_x: \psi = e^{ikx}\psi(y).

Now we just want to find the wavefunction ψ(y), the dependence in the y direction. Plugging this form into the time-independent Schrödinger equation, we write the effective Hamiltonian as

H_k = \frac{(k + eBy)^2}{2m} + \frac{p_y^2}{2m} - eEy = \frac{k^2}{2m} + \frac{ekBy}{m} + \frac{e^2B^2y^2}{2m} + \frac{p_y^2}{2m} - eEy
So let's complete the square:

H_k = \frac{p_y^2}{2m} + \frac{e^2B^2}{2m}\left( y + \frac{ekB/m - eE}{e^2B^2/m} \right)^2 + \text{const.}

= \frac{p_y^2}{2m} + \frac{e^2B^2}{2m}\left( y - \frac{mE}{eB^2} + \frac{k}{eB} \right)^2 + \text{const.}

= \frac{p_y^2}{2m} + \frac{e^2B^2}{2m}(y - y_k)^2 + \text{const.}, \qquad y_k = \frac{mE}{eB^2} - \frac{k}{eB}
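A quick symbolic check of the completed square (my own sketch using sympy; no new physics assumed):

```python
# Verify (k + eBy)^2/(2m) - eEy = (e^2 B^2/2m)(y - y_k)^2 + const,
# with y_k = mE/(eB^2) - k/(eB): the remainder is y-independent.
import sympy as sp

y, k, e, B, E, m = sp.symbols('y k e B E m', positive=True)
Hy = (k + e*B*y)**2 / (2*m) - e*E*y
yk = m*E/(e*B**2) - k/(e*B)
remainder = sp.expand(Hy - (e**2*B**2/(2*m)) * (y - yk)**2)
print(sp.simplify(sp.diff(remainder, y)))  # prints 0
```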
So our Hamiltonian is simple at this point, but to make it even simpler, let's write it in terms of the natural length and frequency (energy) scales of the problem:

\underbrace{\ell = \frac{1}{\sqrt{eB}}}_{\text{magnetic length}}, \qquad \underbrace{\omega_c = \frac{eB}{m}}_{\text{cyclotron frequency}}

Roughly, \ell^2 is the area threaded by one flux quantum. The other natural scale is the cyclotron energy \omega_c. Writing everything in terms of these quantities:

H_k = \frac{p_y^2}{2m} + \frac{1}{2}m\omega_c^2 (y - y_k)^2 + \text{const.}

where y_k = y_0 - k\ell^2 and y_0 = \frac{mE}{eB^2}. This is just a simple harmonic oscillator whose eigenstates are oscillator wavefunctions centered at y_k (the constant feeds into the energies below). So the eigenstates of H_k are \psi_n(y - y_k) with n = 0, 1, 2, \cdots.

Those are the eigenstates of H_k; the eigenstates of H are plane waves times these:

\psi_{k,n}(x,y) = e^{ikx}\, \psi_n(y - y_k)

If you multiply those two together, that gives our lowest eigenstate, and the next ones have nodes
in the $y$ direction. Depending on $k$, two things happen: $k$ changes the wavelength of the plane-wave
oscillations, but $k$ also sets the position of the orbital. So as we change $k$, the orbital shifts in the
$y$ direction. The schematic picture is a set of orbitals that are plane waves in $x$ and Gaussians (or
higher harmonic oscillator wavefunctions) in $y$, with different positions for different $k$’s.

Those are our eigenstates. What about the energies? They depend on both $k$ and $n$, and we know
what they are: the harmonic oscillator levels, plus the pieces from the electric field.
$$E_{k,n} = \left(n + \frac{1}{2}\right)\omega_c - eEy_k + \frac{m}{2}\left(\frac{E}{B}\right)^2$$
where the $-eEy_k$ term is like the potential energy of an orbital at position $y_k$, and the last term is
the kinetic energy of the drift. First, suppose $E = 0$. Then there is no dependence on $k$ (which came
entirely from the electric field, which favored orbitals at large $y$), and $E_{k,n} = (n + \frac{1}{2})\omega_c$.

So different $k$’s have the same energy. Why do they have the same energy? Is there a physical
argument for why the bands are exactly flat? That is a puzzle; we’ll think about it. Every $k$ has
the exact same energy, so we have this huge degeneracy. These highly degenerate bands are called
Landau levels.

So far I’ve been thinking about an infinite system: electrons on an infinite plane. Now imagine we
have a finite system, a Hall bar, of dimensions $L_x$ and $L_y$. On the infinite plane each Landau level
contains infinitely many states, but in a finite system there is a finite number of states in each level.
So that is the question we want to ask: how many states are there in a Landau level? To answer it,
we’ll think about a finite-sized system. We want translational symmetry in the $x$ direction, so I’m
going to put periodic boundary conditions in the $x$ direction.

So once we have a finite-sized system, $k$ is no longer a continuous quantity. It has to be quantized
in multiples of $2\pi/L_x$, so the allowed $k$’s are given by
$$k = \pm\frac{2\pi}{L_x}, \pm\frac{4\pi}{L_x}, \cdots$$
So what is the spacing in $y$ now?
$$\Delta y = y_{2(m+1)\pi/L_x} - y_{2m\pi/L_x} = \frac{2\pi}{L_x}\ell^2$$
So that gives the spacing. From this, we can get the total number of states:
$$N = \frac{L_y}{\Delta y} = \frac{L_x L_y}{2\pi\ell^2}$$

So the number of states in the band is proportional to the area, just as in a lattice it would be
proportional to the number of unit cells. One point I’ll come back to: some of you may be wondering
what happens near the boundary. Near the boundary, the Gaussian wavefunctions I’ve been using
are not going to be exact eigenstates, and I haven’t even defined what the boundary looks like or
what potential is present there. But if we are only interested in the scaling with the area, the
boundary contribution is negligible. We will come back to this, because the boundary is going to
be important.

Something happens near the edge, but we won’t worry about it here. So let’s write the total number
of states in a different way
$$N = \frac{L_x L_y}{2\pi\ell^2} = \frac{\overbrace{BL_xL_y}^{\Phi_{\text{total}}}}{\underbrace{2\pi/e}_{\Phi_0}} = \frac{\Phi_{\text{total}}}{\Phi_0}$$

where $\Phi_{\text{total}}$ is the total flux through the Hall bar and $\Phi_0 = 2\pi/e$ is the flux quantum, usually written
as $h/e$, but we’re using units where $\hbar = 1$. Therefore we have one state per flux quantum.
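To get a feel for the size of this degeneracy, here is a quick arithmetic illustration (my own numbers, not the lecture’s) of $N = \Phi_{\text{total}}/\Phi_0$ in SI units, where $\Phi_0 = h/e$:

```python
# Quick arithmetic illustration (my own sample values): the number of states
# in one Landau level is N = Phi_total / Phi_0 = B*Lx*Ly / (h/e) in SI units.
h_SI = 6.62607015e-34    # Planck constant, J s
e_SI = 1.602176634e-19   # electron charge, C
B = 10.0                 # tesla
Lx = Ly = 1e-3           # a 1 mm x 1 mm sample
Phi_0 = h_SI / e_SI      # flux quantum, ~4.14e-15 Wb
N = B * Lx * Ly / Phi_0
print(f"states per Landau level: {N:.3e}")   # ~2.4e9
```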

So now let’s see if we can explain the integer quantum Hall effect, where we see this quantized value;
we will try to compute the conductivity explicitly.

So we have a collection of Landau levels. Now we imagine attaching our system to a reservoir at
some chemical potential $\mu$. We will assume that $\mu$ sits somewhere between the $n$th and $(n+1)$th
Landau levels. (There is some bad notation here: I gave the lowest level the index zero, which is
probably a bad idea.) With the chemical potential there, the ground state fills each state below $\mu$
with exactly one electron and no more. So the first $n$ Landau levels are filled, and all the levels
above are unoccupied.

And now the question is: if we start with a state like this and apply some weak electric field, what
is the current response? So first, say we have $n$ filled Landau levels and then apply an electric field
in the $y$ direction. The levels will no longer be flat, but acquire a slight dispersion. As long as the
electric field isn’t too big, the occupations stay the same (the slope won’t be too large, and electrons
won’t jump to a new level).

By weak, I mean weak enough to avoid electrical breakdown, where an insulator starts to conduct;
that probably isn’t a problem in this perfectly clean toy model anyway. As long as my electric field
is less than some critical value, when I apply it I’m going to occupy exactly the same states. The
electric field doesn’t change the form of the wavefunctions; all it does in this model is shift the
positions $y_k$ of the orbitals a little bit and change their energies a little bit. So when we apply this
weak electric field and the wavefunctions shift a little, what is the current?

So we want to calculate the current after turning on the electric field. Since it is a single-particle
problem, I can calculate the current of each particle and add them all up. So let’s compute the
current for each single-particle state:
 
$$I_{k,n} = \frac{e}{2m}\langle\psi_{k,n}|\,\underbrace{(p_x - eA_x)}_{k+eBy}\delta(x - x_0) + \delta(x - x_0)(p_x - eA_x)\,|\psi_{k,n}\rangle, \qquad \psi_{k,n}(x,y) = \frac{e^{ikx}}{\sqrt{L_x}}\psi_n(y - y_k)$$

So we get something like a velocity times a delta function, symmetrized: $(p_x - eA_x)\delta(x - x_0) + \delta(x - x_0)(p_x - eA_x)$.
Normally, if you wanted the current density at a particular point $(x_0, y_0)$, you would also include
a delta function in the $y$ direction; multiplying by that would give the current density at that point.

Here I’m interested in the total current through the cross-section at $x_0$. So I would put a delta
function $\delta(y - y_0)$ and integrate over all values of $y_0$: fixing $x_0$ and $y_0$ gives the current at that
particular position, and integrating over $y_0$ gives the total current through the line. Once I integrate
over all $y_0$, the delta function in the $y$ direction just drops out (note the integral is over $y_0$, not $y$;
the $y$ integral is the nontrivial part). So it is like a one-dimensional current, with only $\delta(x - x_0)$
remaining. Now, what is the expectation value? Look at the first term: acting on our state, which
is an eigenstate of $p_x$ with eigenvalue $k$, it brings down $k - eA_x$.

Since $A_x = -By$, this gives $k + eBy$; the second term gives the same thing when it acts to the left.
The integral then separates: the integrand is a function of $x$ times a function of $y$, so it splits into a
product of an $x$ integral and a $y$ integral. The $x$ integral just involves the delta function times $\psi^*\psi$,
integrated over $x$, which gives the density of particles at position $x_0$.
$$\begin{aligned}
I_{k,n} &= \frac{e}{2m}\langle\psi_{k,n}|\,[(p_x - eA_x)\delta(x - x_0) + \delta(x - x_0)(p_x - eA_x)]\,|\psi_{k,n}\rangle\\
&= \frac{1}{L_x}\int dx\,dy\;\psi_{k,n}^*\,\frac{e}{2m}\left[\left(\tfrac{1}{i}\partial_x - eA_x\right)\delta(x - x_0) + \delta(x - x_0)\left(\tfrac{1}{i}\partial_x - eA_x\right)\right]\psi_{k,n}\\
&= \frac{1}{L_x}\left(\int dx\,e^{-ikx}\delta(x - x_0)e^{ikx}\right)\frac{e}{m}\int dy\;\psi_n^*(y - y_k)(k - eA_x)\psi_n(y - y_k)\\
&= \frac{1}{L_x}\frac{e}{m}\int dy\;\psi_n^*(y - y_k)(k + eBy)\psi_n(y - y_k)\\
&= \frac{1}{L_x}\frac{e}{m}(k + eBy_k)\\
&= \frac{1}{L_x}\frac{e}{m}\left(k + eB\left[\frac{mE}{eB^2} - \frac{k}{eB}\right]\right)\\
&= \frac{1}{L_x}\,e\,\underbrace{\frac{E}{B}}_{\text{drift}}
\end{aligned}$$
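This drift result, $\langle v_x\rangle = E/B$ for every orbital regardless of $k$, is easy to confirm numerically. A sketch (same finite-difference method as the earlier snippet; the field strengths are arbitrary assumptions of mine):

```python
# A sketch (my parameter choices) verifying that each orbital carries drift
# velocity <v_x> = <k + eBy>/m = E/B, independent of k.
import numpy as np

m, e, B, E = 1.0, 1.0, 1.0, 0.1
y = np.linspace(-20.0, 20.0, 1200); dy = y[1] - y[0]

def vx(k):
    V = (k + e * B * y)**2 / (2 * m) - e * E * y    # potential part of H_k
    diag = 1.0 / (m * dy**2) + V
    off = -0.5 / (m * dy**2) * np.ones(len(y) - 1)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    w, vecs = np.linalg.eigh(H)
    psi0 = vecs[:, 0]                               # lowest eigenstate, n = 0
    return np.sum(psi0**2 * (k + e * B * y)) / m    # grid expectation value

for k in (-2.0, 0.0, 2.0):
    print(k, vx(k), "expected:", E / B)
```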

So the total current density is
$$j = \frac{I_{\text{total}}}{L_y} = \frac{\sum_{\text{occupied states}} I_{k,n}}{L_y} = \frac{1}{L_y}\,n\,\frac{L_xL_y}{2\pi\ell^2}\,\frac{1}{L_x}\,e\,\frac{E}{B} = \frac{ne}{2\pi\ell^2 B}E = \frac{ne^2}{2\pi}E$$
So, almost done: we restore the $\hbar$ so that
$$j = \frac{ne^2}{2\pi\hbar}E = \frac{ne^2}{h}E$$
showing that $\sigma_{xy} = \frac{ne^2}{h}$.
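As a side note (my addition), restoring SI units, the quantized Hall resistance $R_{xy} = h/(ne^2)$ is easy to evaluate; the $n = 1$ value is the von Klitzing constant:

```python
# Side check in SI units (my addition): R_xy = h / (n e^2); the n = 1 value
# is the von Klitzing constant, ~25812.807 ohm.
h, e = 6.62607015e-34, 1.602176634e-19
for n in (1, 2, 3):
    print(n, h / (n * e**2), "ohm")
```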

And you can see above that the charge density is $\rho = \frac{ne}{2\pi\ell^2}$, so $\rho/B = ne^2/2\pi$ is quantized; that
is why we’re getting this result here. Anyway, that is all I have to say today. Next time we’ll try to
understand this apparently magical cancellation: from more general principles, why do we get this
quantized result, and why is it stable?

Notes Quantum Many Body Physics
Day 15
November 15th, 2013 Giordon Stark

29 Last time

To begin, I want to quickly review what we derived last time. At the end of the class we calculated
the Hall conductivity of electrons in a perpendicular magnetic field. We had these Landau levels and
we imagined that we had just the right density of electrons to fill up some integer number of them.

Figures: (a) Landau levels; (b) the cylinder geometry.

And then we considered a finite geometry of dimensions $L_x$, $L_y$ and applied periodic boundary
conditions in the $x$ direction, so we thought of it as a cylinder. We asked, "What is the current in
the $x$ direction?" (Current can flow in the $x$ direction because it’s periodic.) We solved this problem
exactly with an electric field in the Landau gauge and got Landau orbitals: plane waves in the $x$
direction times harmonic oscillator states in the $y$ direction, Gaussians in the lowest level, states
with nodes in the higher levels, et cetera. We explicitly calculated the current of these states in the
presence of the electric field and then added them all up.

So we got $\sigma_{xy}$ for a state with $n$ filled Landau levels. I want to redo this calculation in a specific
notation, to be consistent with what I will do today, since I wasn’t very clear about it before.
$$\begin{aligned}
I_{k,n} &= \frac{e}{2m}\langle\psi_{k,n}|\,(p_x - eA_x)\delta(x - x_0) + \delta(x - x_0)(p_x - eA_x)\,|\psi_{k,n}\rangle\\
&= \frac{1}{L_x}\int dx_0\;\langle\psi_{k,n}|\cdots|\psi_{k,n}\rangle\\
&= \frac{e}{m}\langle\psi_{k,n}|\,(p_x - eA_x)\frac{1}{L_x}\,|\psi_{k,n}\rangle\\
&= \frac{e}{m}\langle\psi_{k,n}|\,(k + eBy)\frac{1}{L_x}\,|\psi_{k,n}\rangle\\
&= \frac{e}{m}\left(k + eB\left[\frac{mE}{eB^2} - \frac{k}{eB}\right]\right)\frac{1}{L_x}\\
&= \frac{e}{L_x}\cdot\frac{E}{B}
\end{aligned}$$
Since our states are stationary states (eigenstates), the current through any point must be the same.
If I measure the current through a different point $x_0'$, I have to get the same value: by current
conservation, if the two currents differed, the density in between would have to be changing in time,
but a stationary state has constant density. So I can measure the current through any point I like
and get the same answer. That means we can average over all $x_0$ if we want, writing the current as
$\frac{1}{L_x}\int dx_0$ of the same matrix element.

I won’t rewrite the whole matrix element; it has the same value for every $x_0$. Doing it that way,
it is clear what happens: when we integrate over $x_0$, the delta functions just become 1 (I’ll bring
the resulting $1/L_x$ inside the matrix element to keep the notation consistent), and the two terms
combine to give a factor of 2. We end up with a pretty simple expression: $p/m$ times the charge
is basically the velocity of our state, and the $1/L_x$ comes from the state being uniformly spread
out over the whole circumference.

Okay. So we get this expression, and our state is an eigenstate of $p_x$ with eigenvalue $k$, while $eA_x$
in the Landau gauge is $-eBy$, so $p_x - eA_x$ becomes $k + eBy$. Now we’re almost done. The only
nontrivial piece is the expectation value of $y$ in the state $\psi_{k,n}$; since $\psi_n$ is symmetric about $y_k$, it is
just $y_k$. And we remember from last time that $y_k$ is $-k/(eB)$ plus a constant, $mE/(eB^2)$, which is the
one place where the electric field comes in: it shifted the centers of the orbits a little bit.

Because the field shifts the position $y_k$, it affects the current of each of our states. Looking at the
final expression, we get the nice cancellations we remarked on last time: the $m$’s cancel, the $k$’s
cancel. We end up with $(e/L_x)(E/B)$, which is very natural: $E/B$ is the drift velocity, $e$ is the
charge, and $1/L_x$ is the density of a single state. It is exactly what you would expect for the current.

end math talk for above derivation

So then we see that every state contributes exactly the same current. It is just charge times average
density times the velocity, so
$$J = \frac{1}{L_y}I_{\text{tot}} = \frac{1}{L_y}\Bigg(\underbrace{n\,\frac{L_xL_y}{2\pi\ell^2}}_{\text{number of states}}\cdot\frac{e}{L_x}\cdot\frac{E}{B}\Bigg) = \frac{ne^2}{2\pi}E$$

So this shows $\sigma_{xy} = \frac{ne^2}{2\pi} = \frac{ne^2}{h}$. We’ve shown, at least in this toy model of a perfectly clean system
with exactly $n$ filled Landau levels, that we get the integer exactly quantized.

There are a couple of issues. One is that in a real system we don’t have exactly the right number
of electrons to fill $n$ Landau levels: $n$ is a fixed integer while you’re tuning your magnetic field, so
you’re essentially never going to be exactly in this scenario. The other problem, as I mentioned
offhand last time, is that at the root of this result is Galilean invariance, which this system does
have. On general principles, we could have guessed the Hall conductance from the very beginning:
once we calculated the number of states in a Landau level, it immediately follows that $e\rho/B$ equals
$ne^2/h$, and everything is set.

So in some sense this doesn’t seem very surprising: it all followed from Galilean invariance and
having exactly the right density. And now there are two issues. Number 1, which I just brought up:
in a real system we won’t have exactly the right density, an integer number of filled Landau levels.
Number 2: we have disorder, so the system is certainly not Galilean invariant in any approximation.
Once we throw away those two assumptions, it seems like there’s no way to get this result, because
it appeared to depend on having exactly the right density and on Galilean invariance, or at least
you might think so.

So what I want to argue, or show for you now, addresses these two issues: what happens if we don’t
have the right density, and how do we deal with the fact that a real system has no Galilean
invariance? I’m going to show that the quantization is extremely robust: you can add disorder and
interactions, and as long as you have the right number of electrons to fill up $n$ Landau levels, the
Galilean invariance is very unimportant. How we deal with not having the right density is not
something I’m going to talk about too much in class, but as we mentioned, it has to do with
disorder. Right now I’m going to discuss robustness not against changing the density, but against
adding arbitrary interactions and disorder. Okay? I’m still working with this toy model, but now
we won’t have clean Landau levels anymore, because we’re going to start adding all kinds of other
stuff to our system.

30 Robustness of the Quantum Hall Effect

Now, let’s explain why $\sigma_{xy} = \frac{ne^2}{h}$ even with disorder, interactions, etcetera.

30.1 Laughlin flux insertion argument

We’re going to consider (still) a finite-sized system, still periodic in the $x$ direction. Since it is a
cylinder, we can imagine inserting magnetic flux $\Phi$ through the hole in the cylinder. What we’re
going to study is: what are the properties of our many-body ground state as a function of the flux?

Figure 49: Insert flux through cylinder

To start off the argument, I want to show that we get a robustly quantized value of $\sigma_{xy}$ even with
interactions and disorder. Let’s start with the clean, non-interacting system first, then introduce
interactions and disorder. The many-particle Hamiltonian is
$$H = \sum_i\left[\frac{\left(p_{ix} - eA_x + \frac{e\Phi}{L_x}\right)^2}{2m} + \frac{(p_{iy} - eA_y)^2}{2m} - eEy_i\right]$$

I’m choosing a gauge in which the flux is incorporated by changing $A_x \to A_x - \Phi/L_x$; the reason is
that integrating this extra vector potential around the circumference of the whole cylinder gives
exactly $\Phi$. So
$$H = \sum_i\left[\frac{\left(p_{ix} + eBy_i + \frac{e\Phi}{L_x}\right)^2}{2m} + \frac{p_{iy}^2}{2m} - eEy_i\right]$$


Now let’s imagine adiabatically changing $\Phi$ from $\Phi = 0$ to $\Phi = \Phi_0 = \frac{2\pi}{e}$. By slowly, I mean slowly
enough not to excite the system: slower than the energy scale of the gap to excited states, $\omega_c$. So
I’m going to do it more slowly than the cyclotron frequency. By the adiabatic theorem, if we do
things slowly enough, our system evolves eigenstate to eigenstate: starting from the ground state at
zero flux, it evolves to some eigenstate at higher flux. So $\psi_0$, with energy $E_0$, will evolve into a state
$\psi_\Phi$ with energy $E_\Phi$.

Figure 50: energy evolution of state under adiabatic change

So we’re going to compute ∆E = EΦ0 − E0 – how much energy shift there is, in two different
ways.

30.1.1 1st Way

Consider first-order perturbation theory. This gives
$$\frac{\partial E_\Phi}{\partial\Phi} = \langle\psi_\Phi|\,\frac{\partial H}{\partial\Phi}\,|\psi_\Phi\rangle \qquad \text{(Feynman–Hellmann theorem)}$$
$$= \langle\psi_\Phi|\,\underbrace{\sum_i\frac{p_{ix} + eBy_i + \frac{e\Phi}{L_x}}{m}\cdot\frac{e}{L_x}}_{I_\Phi\ \text{(total current)}}\,|\psi_\Phi\rangle = I_\Phi$$
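The Feynman–Hellmann theorem itself is easy to verify on a toy example. A sketch (my own random-matrix example, not the lecture’s Hamiltonian):

```python
# A toy check of the Feynman-Hellmann theorem (my own example): for
# H(s) = H0 + s*H1, the derivative dE_n/ds equals <psi_n| H1 |psi_n>.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)); H0 = (A + A.T) / 2   # random symmetric matrices
C = rng.normal(size=(6, 6)); H1 = (C + C.T) / 2

s, ds = 0.3, 1e-6
w, v = np.linalg.eigh(H0 + s * H1)
fh = v[:, 0] @ H1 @ v[:, 0]                       # <psi_0| dH/ds |psi_0>
numeric = (np.linalg.eigvalsh(H0 + (s + ds) * H1)[0]
           - np.linalg.eigvalsh(H0 + (s - ds) * H1)[0]) / (2 * ds)
print(fh, numeric)                                # agree to high precision
```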

So in other words, what we see here is that the change in energy with respect to flux is the current.
This is actually a very general relation. I just showed it here to be concrete, but it’s true. Essentially,
for any system, it is the definition of current. If you like, you can derive this from Faraday’s law.
The idea is, if I insert flux into some system, if it’s time-dependent flux, I’m going to generate some
$\varepsilon$, an EMF, which is given by (being careless about signs)
$$\varepsilon = \frac{d\Phi}{dt}$$
So it generates a voltage around my cylinder, so I’ll do work. If my system has current flowing
around the cylinder, there’s some work on the system. The amount of work done is

dW = ε · I · dt = IdΦ ⇒ dE = IdΦ

Anyway, you don’t have to take that on faith, since we’ve shown it explicitly here. So we can
calculate the shift in energy:
$$\Delta E = \int_0^{\Phi_0} I_\Phi\, d\Phi$$

Now, the main physical assumption comes in (in some settings it can even be proven mathematically):
we expect the current to be independent of the flux in the limit of large system size. Think of the
cylinder: you can only feel the flux if you travel all the way around it, so changing the flux is like
changing the boundary conditions of your system, and boundary conditions shouldn’t matter for a
large system. In the thermodynamic limit, you don’t expect the amount of current flowing to depend
on the flux (how does the system know what the boundary conditions are if it is very large?). So
this is the physical assumption: if $L_x$ is large, $I_\Phi$ is independent of $\Phi$.

So the claim is that the current, this velocity we measured, isn’t going to depend much on the global
boundary condition on the wavefunction. It’s probably more convincing in a different gauge, where
the vector potential generating $\Phi$ is zero everywhere except along one line, where it’s a delta
function; that is literally putting a twisted boundary condition on $\psi$. That is the way people usually
think of it: the flux just changes the boundary conditions. What generates the current here is the
electric field: we have an electric field, a magnetic field coming out, and even from the classical Hall
effect we expect a current around the cylinder. The flux is doing very little; it just changes a global
boundary condition around the whole cylinder, so you don’t expect it to do much of anything. It
will have an effect for finite system size, but in the thermodynamic limit we expect it to be
negligible: the current depends on $\Phi$ only up to terms that are exponentially small in the
circumference.

So to conclude,
$$\Delta E = I\,\Delta\Phi = I\cdot\frac{2\pi}{e}$$

30.1.2 2nd Way

The other way of calculating the change in energy is to note that inserting a full flux quantum has a
special meaning. It is exactly the amount of flux, $\frac{2\pi}{e}$, such that an electron going all the way around
the cylinder picks up a phase of $2\pi$. That means our system is insensitive to it: once you put in $\frac{2\pi}{e}$
of flux, no physical quantity will register that you have flux. There is a name for this: the
Byers–Yang theorem. A full flux quantum can be gauged away. So once we insert a full flux
quantum, our Hamiltonian has exactly the same energy levels as before, everything is the same: the
Hamiltonian is identical to the zero-flux Hamiltonian up to a gauge transformation.

The other way to see it: remember, I told you the flux acts as a boundary condition on the
wavefunction, and once the phase reaches $2\pi$, it is the same as periodic boundary conditions. Since
it is the same Hamiltonian, it has exactly the same energies: all single-particle energies must be the
same as in the zero-flux case.

Remember, I’m starting with the non-interacting case right now. So the energy of our system is just
the sum of the single-particle energies over all occupied states. Since all the single-particle energies
are the same, the only way to get a change of energy is if the states are populated in a different
way. In other words, the change of energy must come from repopulating the single-particle states.
We know on general grounds that they must be repopulated; let’s see how. Look at the single-particle
Hamiltonian:
$$H = \frac{\left(p_x + eBy + \frac{e\Phi}{L_x}\right)^2}{2m} + \frac{p_y^2}{2m} - eEy = \frac{\left(p_x + eB\left[y + \frac{\Phi}{BL_x}\right]\right)^2}{2m} + \frac{p_y^2}{2m} - eE\left(y + \frac{\Phi}{BL_x}\right) + \text{const}$$
Just by looking at this, it is clear that all $\Phi$ is doing is shifting $y \to y + \frac{\Phi}{BL_x}$. When $\Phi = \frac{2\pi}{e}$, each
orbital shifts by $\frac{2\pi}{eBL_x} = \frac{2\pi\ell^2}{L_x}$. But $\frac{2\pi\ell^2}{L_x}$ is the spacing between the orbitals (we found this last
time).
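The identity used here, that one flux quantum shifts each orbital by exactly one orbital spacing, is a one-line check (arbitrary numbers, my addition):

```python
# Tiny consistency check (my numbers, hbar = 1 units): inserting one flux
# quantum shifts each orbital center by Phi_0/(B*Lx), which equals the
# orbital spacing 2*pi*l^2/Lx.
import numpy as np
e, B, Lx = 1.0, 2.0, 50.0
l2 = 1.0 / (e * B)              # magnetic length squared
Phi0 = 2 * np.pi / e            # flux quantum
print(Phi0 / (B * Lx), 2 * np.pi * l2 / Lx)   # identical: a one-orbital shift
```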

(In my notes I wrote the shift with the opposite sign, which probably means $\Phi$ or $\Phi/B$ is negative
there; the direction doesn’t really matter.) So each orbital shifts over by one spacing, which is
indeed what we expected. For intermediate values of $\Phi$ the single-particle energies look different,
but at $\Phi = \frac{2\pi}{e}$ all the energies and all the wavefunctions are exactly the same as at zero flux, up to
a gauge transformation.

Figure 51: Diagram of orbital energy evolution under flux insertion

So we can see that the orbitals just shift by one. What is the point here? The key point is to
remember that these orbitals are each occupied by an electron. The net result of the process, with
each orbital moving over by one spacing, is that one electron is shifted across the sample: an
electron moves from an orbital at one edge to an orbital at the other edge. Since we have $n$ filled
Landau levels, we get $n$ electrons shifted from bottom to top. So the net result of this flux insertion
is that one electron per Landau level is transferred from one edge to the other.

Since the potential difference is $V$, we have $\Delta E = neV$. Equating the two expressions for $\Delta E$ gives
$$I\,\frac{2\pi}{e} = neV \quad\Rightarrow\quad I = \frac{ne^2}{2\pi}V = \frac{ne^2}{h}V$$
So we have successfully re-derived the result $\sigma_{xy} = \frac{ne^2}{h}$.

30.2 With weak interactions and/or disorder

Now suppose we add weak interactions and/or disorder (smaller than ~ωc ).

Figure 52: An example of non-interacting model with interactions, disorder, etc. included

For simplicity, keep H (the clean model) the same near the edge. So let’s do the same calculation
in this case and see what happens. We’re still going to insert flux and try to calculate the shift in
our energy, or ground state energy when we insert the flux.

Think about the first way we computed it: we used the fact that the change in energy with flux is
given by the current, and that relation is very general (I argued it follows from Faraday’s law). So
we still have
$$\frac{\partial E_\Phi}{\partial\Phi} = I_\Phi$$

which means $\Delta E = I\,\frac{2\pi}{e}$. What about the second way we computed the shift in energy? We argued
that when you insert a full flux quantum, it doesn’t stick: the Hamiltonian returns to its original
form, so the shift in energy must come from the single-particle eigenstates getting re-populated.

So that means, by our old arguments, since the model is kept clean near the edges and there are $n$
occupied Landau levels there, $n$ electrons move at the bottom edge and $n$ electrons move at the top
edge as a result of this process: it’s just the orbitals shifting over.

So now the question is: what happens in the interior when we insert flux? The state is going to
reorganize in some way; how do the electrons get transported? In general it is hard to say, since this
is an interacting, disordered system. But we do know something: this is an adiabatic process, slower
than the cyclotron frequency, i.e. the bulk gap. So we know that, at the end of the process, we
cannot have any extra electrons or holes sitting anywhere in the interior, because those excitations
would cost energy $\sim \hbar\omega_c$ and our adiabatic process is much slower than that.

So what does that mean? There’s only one possibility we can immediately deduce: if $n$ electrons
move at one edge and $n$ electrons move at the other, we must also have exactly $n$ electrons
transported across the interior region; otherwise charge would accumulate at the boundaries of the
region.

So even with interactions, the same result must hold: you must still have $n$ electrons transported
from the bottom to the top. That can’t change. Since we still transfer $n$ electrons, we still have the
same relation,
$$\Delta E = neV$$
Since the two ways of computing the change in energy must agree, we get $I = \left(\frac{ne^2}{h}\right)V$. So we
conclude that $\sigma_{xy}$ is robust against weak interactions or disorder. The key point is that it’s robust
as long as the gap of our system doesn’t close, i.e. the excitation gap remains open in the bulk: by
current conservation, $n$ charges must flow across the sample, because $n$ leave one edge, $n$ arrive at
the other, and nothing accumulates in the middle. So more generally, $\sigma_{xy}$ is robust as long as the
bulk gap remains open.

30.3 Puzzling concepts

1. Why isn’t ∆E = 0?

2. Why isn’t ∆E > ~ωc ?

Okay. The first puzzle, which we’ll discuss next time, is that this was supposed to be an adiabatic
process: we slowly inserted flux, and the adiabatic theorem says ground states go to ground states,
so $\Delta E$ should be 0. But we just calculated $\Delta E$: the ground state did not go to the ground state,
we got a repopulation of the orbitals, and we definitely got a shift in energy.

Question number 2 (also explained next time): at the end of this process we get a new state, $\psi_\Phi$,
starting from $\psi_0$. Once you’ve inserted a full flux quantum, it is the same thing as having zero flux
up to a gauge transformation, and $\psi_\Phi$ is not the same as $\psi_0$, so it’s an excited state. Shouldn’t its
excitation energy be bigger than the gap? We calculated $\Delta E = neV$, and for small voltage this
change in energy can be made arbitrarily small, smaller than the gap. So it seems like we have two
contradictions. They have to do with two subtleties in our argument that we’ll discuss next time.

Notes Quantum Many Body Physics
Day 16
November 20th, 2013 Giordon Stark

31 Integer Quantum Hall Edge

Today we’re going to talk about the integer quantum Hall edge. At the end of class last time, I
mentioned two puzzles related to the adiabatic flux insertion argument. To fully understand that
argument, one needs to understand why the adiabatic theorem does not apply in the usual way: we
start in a ground state and supposedly do an adiabatic process, yet the energy of our final state
differs from the energy of our initial state. The other puzzle was that the energy of the final state
was not only larger than that of the original state, but smaller than the gap. So there were two
strange things going on there. I wish I had time to discuss them; it turns out they’re both related
to the existence of edge states. So I’m going to skip the two puzzles and go straight on to the edge
physics, the integer quantum Hall edge, which is probably more important for most applications.

Okay. So far we analyzed the integer quantum Hall state, mostly in this cylinder geometry: periodic
in the $x$ direction, with boundaries in the $y$ direction. We’ve been pushing some things under the
rug about what exactly happens near those two boundaries, so today we’re going to treat the
boundaries carefully. So far we’ve ignored boundary effects; we argued they weren’t important when
we were counting how many states were in a Landau level, since most of the states are in the bulk
anyway. But now we’re going to treat the boundary carefully.

What we’re going to do is redo our calculation for free, non-interacting electrons in a magnetic field,
except now we’ll actually treat the boundary by imposing boundary conditions. First we’ll solve the
problem with no electric field: we’ll still have the magnetic field coming out of the plane, but no
electric field. Later, we’ll turn on the electric field and see how the system responds.

First suppose $E = 0$. What is our Hamiltonian? Since the particles are non-interacting, I go back to
the single-particle picture:
$$H = \frac{(p_x - eA_x)^2}{2m} + \frac{(p_y - eA_y)^2}{2m} + V_{\text{edge}}(y)$$
with the new addition of an edge potential that keeps the particle in the system:
$$V_{\text{edge}}(y) = \begin{cases} 0 & 0 \le y \le L_y \\ \infty & \text{elsewhere} \end{cases}$$

For simplicity, we take this edge potential to depend only on the $y$ coordinate; it vanishes inside the
sample. So again, we’ll work in the Landau gauge,
$$\vec A = -By\,\hat x$$

We choose this gauge because, in this gauge, our system has manifest translational symmetry in the
$x$ direction, which allows us to solve the problem. Because of that symmetry, the wavefunctions can
be written as plane waves in the $x$ direction times some function of $y$. So eigenstates are of the
form $e^{ikx}\psi(y)$, where $\psi$ is an eigenstate of
$$H_k = \frac{p_y^2}{2m} + \frac{e^2B^2}{2m}(y - y_k)^2 + V_{\text{edge}}(y), \qquad y_k = -k\ell^2$$
This is exactly what we had before: we’ve completed the square and obtained a shifted harmonic
oscillator. The constant piece vanishes because there is no electric field; the important point is that
$y_k$ varies linearly with $k$. Without the edge potential, we have harmonic oscillator states centered at
$y_k$, and as we vary $k$ the centers sweep across the $y$ direction. (In the $x$ direction it’s convenient to
keep periodic boundary conditions, since we want to describe a system with current flowing around
the cylinder.) The only new element is the potential $V_{\text{edge}}$, but it makes the problem no longer
exactly solvable. Nevertheless, we can still understand the physics by drawing some simple pictures.

31.1 Case 1

The physics of the low-energy spectrum of this Hamiltonian depends on the position of $y_k$. Case 1:
$y_k$ deep in the interior, $|y_k|, |L_y - y_k| \gg \ell$.

Figure 53: Deep in the interior, the edge potential is completely irrelevant

In that case, we kind of already know what’s going to happen: the boundary condition is just not
going to matter. If we ignore the wall, the eigenstates are the harmonic oscillator states, which are
localized, Gaussian-like wavefunctions. They almost exactly satisfy the boundary condition of
vanishing at the walls, because the harmonic oscillator wavefunction is extremely small by the time
it reaches the boundary. So they remain eigenstates to a very good approximation.

In other words, the harmonic oscillator eigenstates $\psi_n(y - y_k)$ already satisfy the two boundary
conditions, $\psi_n(0 - y_k) \approx 0$ and $\psi_n(L_y - y_k) \approx 0$, because $y_k$ is deep in the interior and these
values are exponentially small. So our original eigenstates are still very good approximate
eigenstates of the system, and the eigenstates and energies are basically unchanged.

So the eigenstates and energies are the same as in the original calculation. In particular, we still get
the Landau-level structure: the energy as a function of $k$ is
$$E_{k,n} = \left(n + \frac{1}{2}\right)\omega_c$$

31.2 Case 2

$y_k$ close to the boundary, say near $L_y$. Then the total potential looks like the oscillator well cut off
by the wall. The wall increases the potential, so the energy of the $n$th state is greater: the edge
potential pushes the energies up,
$$E_{k,n} > \left(n + \frac{1}{2}\right)\omega_c$$

31.3 Case 3

$y_k$ far outside the boundary, say $y_k - L_y \gg \ell$ (I’ll focus on the upper boundary). Let’s see what
happens in this case. Again, we can draw a picture of the total potential.

Now the wall cuts the potential off entirely: we get an infinite wall, and the effective potential is the
steeply rising side of the parabola. This potential obviously pushes the energies up relative to the
case with no boundary. Let’s estimate what the energy levels look like: estimate the energy of the
lowest eigenstate by the minimum value of the potential inside the sample,

$$E_{k,0} \approx E_{\min} = \frac{e^2B^2}{2m}(y_k - L_y)^2 = \frac{e^2B^2}{2m}(-k\ell^2 - L_y)^2$$
2m
The main point is that the dependence on $k$ is quadratic: once $y_k$ goes beyond the boundary, the
energies get pushed up quadratically in the distance from the wall.

31.4 All cases combined

Combining the three cases:

And then we just interpolate between these three regimes to get a picture like this: flat Landau
levels in the interior, bending up near both edges. This is what the energy spectrum looks like with
a boundary. Okay. Now let’s imagine that the chemical potential $\mu$ sits somewhere in the middle
between two Landau levels.
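This combined picture is easy to generate numerically. A sketch (my own construction, not from the lecture, reusing the finite-difference diagonalization of the earlier snippets; the hard wall is imposed by forcing $\psi = 0$ outside $0 < y < L_y$):

```python
# A sketch (my grid method and parameters) of the lowest E(k) with hard walls
# at y = 0 and y = Ly: flat Landau levels when y_k is deep inside the sample,
# bending upward as y_k approaches or passes either wall.
import numpy as np

m, e, B = 1.0, 1.0, 1.0            # hbar = 1, so l = 1 and w_c = 1
Ly = 30.0
y = np.linspace(0.0, Ly, 602)[1:-1]   # interior points; psi = 0 at the walls
dy = y[1] - y[0]

def levels(k, nlev=3):
    V = (k + e * B * y)**2 / (2 * m)
    diag = 1.0 / (m * dy**2) + V
    off = -0.5 / (m * dy**2) * np.ones(len(y) - 1)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:nlev]

for k in np.linspace(-32.0, 2.0, 8):
    yk = -k / (e * B)                           # orbital center y_k = -k l^2
    print(f"y_k = {yk:6.1f}   E = {levels(k)}")  # ~[0.5, 1.5, 2.5] deep inside
```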

Then the ground state fills up all states below the chemical potential, and the states above it are
empty. And so we can immediately see something interesting: there actually are low-energy
excitations of our system. We can take, for example, an electron occupying a state just below $\mu$
near an edge and bump it up to the empty state just above, or a little bit further up; similarly,
there are low-energy excitations at the other edge obtained by moving an electron just above the
chemical potential. The gap goes to zero for these edge excitations.

We have gapless excitations which is maybe a little bit surprising, because if you remember, at
least when we ignored the edge, we found that there was an energy gap. It took a finite amount of
energy to get to this level. Now we’re finding low-energy excitations when we include the boundary
effect.

32 Properties of Gapless excitations

1. These excitations are truly edge excitations: they correspond to orbitals localized near the
   edge. Another way to see that they must be edge excitations: imagine we impose periodic
   boundary conditions in both directions, gluing the two ends so the system forms a torus. Then
   it is not hard to convince yourself that our original Landau-level structure is correct: with
   periodic boundary conditions there is no edge, everywhere is bulk, and the energy levels we
   found are more or less exact. So on the torus you would find exactly flat Landau levels with
   nothing else (there is a periodicity: if I keep going in the $y$ direction, I come back around the
   other side). If I put my chemical potential somewhere between two levels, I get an energy gap.
   The gapless excitations only occur with open boundary conditions, which is more evidence that
   these are truly edge excitations: they disappear if we have periodic boundary conditions in
   both the $x$ and $y$ directions.

   So in other words, the bulk is fully gapped. These gapless excitations are a feature of edges.

2. Edge excitations are chiral. To see that, let’s draw the spectrum again, focusing on the lowest
   Landau level (it’s easier to draw), with the chemical potential $\mu$ between levels. Look at the
   excitations near the $y = 0$ edge: the slope of $E(k)$ there is positive, so their group velocity is
   positive. So excitations near the $y = 0$ edge have
   $$v = \frac{\partial E_{k,n}}{\partial k} > 0$$
That means a wave packet built from these states can only move in the $+x$ direction. This is a
strange thing, which you’ve probably never seen in a conventional system: you can’t make a wave
packet that moves to the left, it can only move to the right. How is that possible? If you try to
make a wave packet that moves in the opposite direction, you have to use a collection of states
from the other side of the spectrum, but that kind of wave packet lives on the other edge. Said
another way, the excitations near $y = L_y$ have negative group velocity. So we’ve taken the left-
and right-moving modes and separated them spatially: the right-moving modes are on one
boundary and the left-moving modes are on the other.

There’s a simple cartoon picture for this, which you’ve maybe seen before, but I’ll show it anyway;
it is useful for getting signs right and making sure the modes move in the right directions. Classically,
a particle in the center of our Hall bar undergoes cyclotron motion; with the magnetic field pointing
out of the board and a positive charge, it moves clockwise.
So we have cyclotron motion in the bulk. But orbits near the boundary perform skipping orbits:
the particle hits the wall, bounces off, and starts another orbit. Near the top boundary, a particle
trying to do cyclotron motion hits the wall and skips one way; near the bottom, the other way. So
we expect right-moving modes near one boundary and left-moving modes near the other, which is
exactly what we found: with the magnetic field out of the $x$–$y$ plane, we got positive-velocity modes
near $y = 0$ and left-moving modes at $y = L_y$.
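This skipping-orbit cartoon is easy to animate classically. A toy simulation (my own, crude Euler integration with a specular wall; all parameters are arbitrary):

```python
# A classical cartoon (my own toy simulation): cyclotron motion plus a hard
# wall at y = 0 produces skipping orbits with a net one-way drift along the
# edge, i.e. a chiral edge mode.
import numpy as np

q, m, B, dt = 1.0, 1.0, 1.0, 1e-3
r = np.array([0.0, 0.5])          # start near the wall
v = np.array([1.0, 0.0])
x_start = r[0]
for _ in range(40000):            # roughly six cyclotron periods
    a = (q * B / m) * np.array([v[1], -v[0]])   # F = q v x B for B = B z-hat
    v = v + a * dt
    r = r + v * dt
    if r[1] < 0.0:                # specular bounce off the wall at y = 0
        r[1], v[1] = -r[1], -v[1]
print("net x displacement:", r[0] - x_start)    # strictly one sign: chiral
```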

3. For $n$ filled Landau levels ($\nu = n$), there are $n$ edge modes at each boundary. The reason: if our
   chemical potential cuts through, say, two filled Landau levels ($\nu = 2$), each level contributes
   one crossing at each edge, so we get two left-moving modes at one boundary and two
   right-moving modes at the other. In general the $n$ modes at an edge will have different
   velocities, unless some symmetry forces them to coincide.

4. Edge modes are “protected”: they can’t be destroyed; they have a protected structure. So far
   we used the cleanest possible model, an infinite wall potential, but let’s imagine we put some
   other kind of edge potential. We can use any potential we like, with the only restriction that it
   should confine the particles: as $y$ goes off to infinity, the potential should curve upward. So
   imagine we consider some weird potential. Let me draw what I mean by weird:

imagine the potential has some kinks in it. Then when we solve for the energy levels, we get a flat
Landau level in the bulk and something curving around near the edges; what I’m plotting here is
the energy levels, and of course they closely track the potential. In general there is no reason the
potential has to be monotonic, and it doesn’t have to be the same on the left and right edges (that
would be crazy). Say the chemical potential is in the middle, and focus on the right edge. What is
the effect of this strange potential? We fill up all the states below the chemical potential, including
the extra pocket of states created by the kink. So we actually have three modes at this edge, not
one (say there’s another Landau level far above that we don’t care about). Normally we have one
edge mode in the clean example; here we have three: two of them, with increasing slope, are
right-moving, and one is left-moving. So in the edge spectrum, instead of one right-moving mode,
we have $n_R = 2$ and $n_L = 1$.

So the statement that we have exactly one edge mode for the $\nu = 1$ state is not universal: it can be
affected by the potential. What is universal is the difference between the numbers of right- and
left-moving modes. The difference is guaranteed to be one: every kink we add increases the right-
and left-moving counts together, so no matter how many kinks I put in, I have the same imbalance.
The total number of modes can change, but $n_R - n_L = 1$ always (for a single filled Landau level).
In particular, there is always at least one gapless edge mode. The phrase people use a lot is that
this is “topologically protected”. And if I look at the total strip, including both edges, the number
of right-moving and the number of left-moving modes always have to be equal.

If I have both a right- and a left-moving mode at the same edge, those two can gap out: for
example, impurities or a periodic potential can introduce back-scattering between them and gap
them out. That is, in some sense, what can happen here: by changing the potential, I can add or
subtract pairs of modes, so the total number is not protected. What is protected is the difference
between right- and left-moving modes, because back-scattering always gaps modes out in pairs,
one right-mover with one left-mover.

So the minimal edge has just the protected modes, but an edge doesn’t have to be minimal: you
could have a model with both extra left- and right-moving modes. There are two distinct types of
stability here. The difference between right- and left-moving modes is universally stable, to any
interactions and any impurities. But that doesn’t mean that an extra right/left pair will always be
generically gapped out.

So in general, there can be multiple possible edges, depending on the details of the potential. The
point I’m trying to make is that what is universal is just the difference. It doesn’t mean you always
get the minimal edge with one right-moving mode; in some circumstances you might, and you can
also come up with scenarios where the edge is more complicated than minimal and still
perturbatively stable, for example. But perturbative stability is different from the statement I’m
making here, which is stability to arbitrary interactions, even strong ones.

32.1 Redo σxy calculation with edge

Assume $\nu = n$ and apply an electric field $\vec E = E\hat y$. What is the current? As before, we calculate
the current of each of our occupied states and add them up.
a particular state, |k, ni? So if you remember from last time, we calculated this as an expectation
value of the current operator. We eventually wrote it in this simplified form
$$I_{k,n} = \frac{e}{m}\langle\psi_{k,n}|\,(p_x - eA_x)\frac{1}{L_x}\,|\psi_{k,n}\rangle$$
The current operator had a $\delta$ function in it, but we argued that we could average over all possible
$x_0$; it didn’t matter, since the current was the same everywhere, and we got this factor of $1/L_x$.
that was our expression for the current. But there’s an alternative expression for it which we more
or less derived in the process of this flux insertion argument. So let me remind you of how the
alternative expression for the current looks.

$$H_k = \frac{(k - eA_x)^2}{2m} + \frac{p_y^2}{2m} - eEy$$
So then from the Hamiltonian, we have
$$I_{k,n} = \frac{e}{L_x}\langle\psi_{k,n}|\frac{\partial H_k}{\partial k}|\psi_{k,n}\rangle$$
This should remind you of the expression in the flux insertion argument, where we said the current
could be calculated by taking the derivative of the Hamiltonian with respect to the flux $\Phi$: flux and
wave vector $k$ play analogous roles. This expression is more or less the single-particle analog of that
many-particle formula.

Remember, by the Feynman–Hellmann theorem (first-order perturbation theory), this expectation
value is the derivative of the energy with respect to $k$: as I change my Hamiltonian by a small
amount, the change in energy is exactly the expectation value of the change in the Hamiltonian. So
$$I_{k,n} = \frac{e}{L_x}\frac{\partial E_{k,n}}{\partial k}$$

The total current is the sum over all occupied states:
$$I = \sum_{k,n} I_{k,n} = \frac{e}{L_x}\sum_{k,n}\frac{\partial E_{k,n}}{\partial k}$$

Okay. So we first need to understand what the occupied states look like, before and after we turn
on the electric field. Before we turn on the electric field (I’ll draw just one Landau level to keep the
picture simple, with the chemical potential above it), we have this nice flat Landau level with some
curvature near the edges, and all these states are occupied.
The states above are unoccupied. Then we turn on the electric field. What happens is pretty simple:
the first time we discussed the integer quantum Hall effect, we solved for the energy of a particle in
crossed electric and magnetic fields, and the only effect of the electric field is to add a term like
$-eEy_k$ (plus a constant). The electric field naturally favors particles at larger $y$, which have lower
energy, and $y_k$ decreases with increasing $k$. So turning on the field tilts the spectrum: it gives the
flat interior a slope. What happens near the edge is more complicated; we know the levels still curve
up eventually, so we just draw something like that. If we imagine turning the field on slowly, the
energies get shifted, but the occupation numbers stay the same.

So what we can see in particular is that at one edge we fill states up to some chemical potential $\mu_1$,
and at the other edge up to some $\mu_2$. Notice the system is not in its equilibrium ground state: I’m
fixing the number of particles, and in true equilibrium some of these particles would move across so
that the chemical potential is the same everywhere. This is a non-equilibrium state, with different
chemical potentials on the two edges, $\mu_1$, $\mu_2$, with

µ1 − µ2 ≈ eELy

and the current is
$$I = \frac{e}{L_x}\sum_{k,n}\frac{\partial E_{k,n}}{\partial k}$$

So a key feature we’ve seen here is that we get different chemical potentials on the two edges, $\mu_1$
and $\mu_2$. The other thing we can do (an approximate statement) is roughly estimate their difference:
for a long strip with slope set by $E$, the difference is roughly $eEL_y$. I’ll come back to this point
shortly.

So the chemical potential difference is roughly the electric field multiplied by the separation
between the two edges. Now we’re ready to calculate the current. In the thermodynamic limit
$L_x \to \infty$, the allowed $k$ points become finer and finer, and we can replace the sum over $k$ by an
integral (exact in the limit):

$$I = \frac{e}{L_x}\sum_n\int_{k_2}^{k_1}\frac{L_x}{2\pi}\,dk\;\frac{\partial E_{k,n}}{\partial k} = \frac{e}{L_x}\,n\,\frac{L_x}{2\pi}(\mu_1 - \mu_2) = \frac{ne}{2\pi}(\mu_1 - \mu_2)$$

So we can easily see that this is an integral of a derivative, so we’re going to get the energy of the
occupied state at momentum k1 minus the energy of the occupied state at momentum k2 . That is
the chemical potential µ1 − µ2 . And then there are n of these integral pieces.
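To put a number on this (my own illustration, assuming a chemical potential difference of 1 meV), each filled Landau level contributes $e/h$ times the chemical potential difference to the current in SI units:

```python
# Illustrative numbers (mine): each filled Landau level carries
# I = (e/h) * (mu1 - mu2). For mu1 - mu2 = 1 meV this is ~38.7 nA per level.
h, e = 6.62607015e-34, 1.602176634e-19
dmu = 1e-3 * e                       # 1 meV in joules
for n in (1, 2):
    print(n, n * e * dmu / h, "A")
```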

Now, if we want to, we can use the naive approximation above and write this in terms of the total
electrostatic voltage. (The spacing between allowed $k$’s is $\delta k = 2\pi/L_x$, which is where the factor
$L_x/2\pi$ came from in converting the sum to an integral.) So naively, taking $\mu_1 - \mu_2 \approx eEL_y$, we
can write

$$I \approx \frac{ne^2}{2\pi}EL_y = \frac{ne^2}{2\pi}V_0$$
where $V_0 = EL_y$ is the electrostatic voltage. So then we have

$$\sigma_{xy} \approx \frac{ne^2}{h}$$
again. On the one hand, this is nice because we’ve re-derived the result. On the other hand, it is
not so nice, because it makes the Hall conductance look like just an approximation: the relation
$\mu_1 - \mu_2 \approx eEL_y$ is a rough estimate. What exactly is $L_y$, the width of our bar? It is ill-defined,
at least up to something on the order of the magnetic length. So there is no reason the chemical
potential difference has to be given exactly by the electrostatic potential difference, and this is
strange: it looks like the quantization is only approximate.

As I said, in fact, more generally this could be badly wrong. How do I actually create a voltage? I
usually attach two leads to my system, so the chemical potential difference is set by the Fermi
levels of the leads; it may not have anything to do with the electrostatic voltage between the two
edges. So in fact, $\mu_1 - \mu_2$ can be quite different from $eEL_y$.

Even if $L_y$ is much bigger than the magnetic length, there is no reason the two have to be equal.
More generally, in an experimental setup I attach two leads, each setting a chemical potential, $\mu_1$
on one edge and $\mu_2$ on the other, and there might also be some electrostatic slope across the
sample. But the electrostatic potential difference need not have anything to do with the chemical
potential difference: they are essentially two independent things. You can turn on an electric field,
or change the chemical potential on either side by attaching a lead. They may be correlated,
because electrons carry charge, but there is no exact relationship between the two.

Okay. So it looks like the quantization breaks down. I think this was actually historically the impetus for thinking about edge states: people, in particular Bert Halperin, realized that the difference between the two chemical potentials does not have to be the same as the electrostatic potential difference.

The problem is that we are using the wrong definition of voltage. A better definition, the true definition of voltage and what is measured experimentally, is not the electrostatic voltage but the chemical potential difference. Meaning: what is the amount of work to move an electron from one lead to another? That could be electrostatic work or chemical work (moving to a different level). So

$$V = \frac{\mu_1 - \mu_2}{e}$$

and not ELy. This is also what is measured experimentally. If we use this definition, we get exact quantization, because the approximation occurred when we tried to approximate the chemical potential difference by the electrostatic voltage. If we write the current in terms of the chemical potential difference, we recover the exact result

$$I = \frac{ne^2}{h} V$$
If there is no boundary, what does that mean? You have to include the boundary; that is important. The reason we ignored it at first is sort of pedagogical: try to understand the bulk first, then introduce the boundary. It is also historically how people thought about it. Ultimately, to get the quantization of the Hall conductance, you have to include both bulk and edge effects. They’re both important; what we did before included just the bulk. Another key point: people often misstate what is quantized. The thing that is quantized is the Hall conductance I/V, the total current divided by the total voltage; it is not J versus E. The conductivity relating J to E is only defined in some long-wavelength limit, and maybe in some limits you can recover some quantization from it, but the exact quantization is of I/V. There is a qualitative difference between thinking about J versus E and the exact quantization of I/V.

Actually, one other point I want to make: you might ask, what about that Laughlin flux insertion argument, which argued that the quantization was exact? If you look back at that argument, what we calculated was the energy change when we inserted flux, in two different ways.

One we related to the current, and the other we related to the voltage. The second calculation was how much energy it took to transfer an electron from one edge to the other. That actually was the voltage according to this definition. So what the Laughlin flux argument really proved was this formula: its definition of voltage was the total work it takes to transfer an electron from the bottom edge to the top edge, which is the chemical potential difference, not the electrostatic potential difference.

Actually, if we re-examine the Laughlin flux argument, there we also implicitly used the definition V = (µ1 − µ2)/e.

33 Simplest Toy model for Integer Quantum Hall

Since we’ve seen there are two separate things, the electrostatic voltage and the chemical potential difference, the simplest toy model you can think of has no electrostatic voltage at all. Let’s imagine we impose a chemical potential difference between the two edges while keeping the potential in the bulk flat. (In practice this would also come with some density difference and hence some electrostatic difference, but we ignore that in the toy model.)

Here, we’re just changing the occupation levels near the edge by attaching something. Let’s think about the current. The current is given by the derivative of the energy with respect to k:

$$I_{k,n} \sim \frac{\partial E_{k,n}}{\partial k} \quad\Rightarrow\quad \text{all current localized near the edge}$$
That is why it is a nice toy model. So how does the Hall current come about? It comes about because there’s an imbalance between the two edge currents. Since we have a higher chemical potential on one edge, we have more current going to the right at the y = 0 edge than we have going to the left at the other edge. So we get some picture like this.

So the difference between these two edge currents, you can calculate purely using edge physics
because all currents are being carried at the edge. You will do this on the homework and analyze
this toy model. The difference between the edge currents is

ne2 (µ2 − µ1 )
I=
h e
V is the difference in chemical potentials divided by e, so in this toy model the quantized Hall current just comes out. There is no current in the bulk; the chemical potential difference produces the imbalance between the two edge currents. But the point I’m trying to make is that this is generally not the way it goes.

Actually, let me just finish with one more thing; this is the last thing I want to say. This is a useful toy model, but more generally the current is non-zero in both the bulk and at the edge. So more generally we have this picture, which I guess I sort of drew before: you might have some kind of electrostatic potential induced by some charges, and then you have some chemical potential difference. The difference in chemical potentials is not necessarily the same as the electrostatic voltage. All of these states are occupied, and you can see that I’m going to get some current in the bulk, because there’s some slope there, and I’m going to get some current at the edge. The edge currents are going to be imbalanced because the chemical potentials are different. The key point is that you get current in both the bulk and at the edge. The thing that is quantized is the total current, which magically is exactly proportional to the voltage. That is the key point. Basically, what we’ve proven is that this total current is exactly quantized in terms of the total chemical potential difference between the two sides, and the reason we got integer quantization had to do with the fact that, in the flux insertion experiment, we pumped an integer number of electrons from one edge to the other.

Notes Quantum Many Body Physics
Day 17
November 22nd, 2013 Giordon Stark

34 Landau Levels in circular gauge

First I need to tell you how to look at Landau levels in a different gauge. We are going to talk about the circular (symmetric) gauge, which turns out to be convenient for thinking about fractional quantum hall states. For the moment we’re back to the single-particle problem, so we have the Hamiltonian

$$H = \frac{(p - eA)^2}{2m}$$
Up until now we’ve been choosing a gauge which gave us translational symmetry, which was convenient for the Hall conductance calculations. Now we want a gauge which gives us rotational symmetry about a particular point. We can do that with the circular gauge

$$A_x = -\frac{By}{2} \qquad A_y = \frac{Bx}{2}$$
where we wanted a vector potential like this

Figure 54: A circular vector potential

You can see by inspection that this gives our Hamiltonian rotational symmetry about the point r = 0. So H is rotationally symmetric, which means we can choose eigenstates of definite angular momentum: the eigenstates must be of the form ψm(r)e^{imθ} for some radial function ψm(r).

Now, of course, these radial functions will take different forms depending on which Landau level you’re in, just as in the Landau gauge the wavefunctions in the y direction were Gaussians or Gaussians times polynomials depending on the Landau level. The same is true here, but we’re going to focus only on the lowest Landau level (LLL) from now on.

In that case, the eigenstates are of the form

$$\psi_m(r) = r^m e^{-\frac{r^2}{4\ell^2}} \qquad m = 0, 1, 2, \cdots$$

m = 0 is just a Gaussian centered at the origin. As you get to large m, these describe orbitals
centered at some position, some particular radial coordinate.

Figure 55: An orbital

and so including the θ dependence

$$\psi_m(r, \theta) = r^m e^{im\theta} e^{-\frac{r^2}{4\ell^2}}$$

All the states in the lowest Landau level are of this form. As I said, these states are analogous to the orbitals that we found in the Landau gauge. If you draw a disk, we have lots of these ring-shaped orbitals, very much like the strip-shaped orbitals we had in the Hall bar geometry. You can probably get from one to the other by some conformal map.

Figure 56: Analogues

These are the eigenstates in the lowest Landau level; if you went to the second lowest, you wouldn’t have quite the same radial structure: you would get some different wave function, with the analogue of the first or second Hermite polynomial multiplying the Gaussian. This is all we have to know.

Let’s do a sanity check and make sure that the total number of orbitals in the lowest Landau level is exactly what we calculated earlier. Our previous calculation showed that there is one state for every flux quantum; let’s make sure we recover that in this geometry. Consider a disk of radius R. We’re basically going to count how many orbitals fit inside the disk. We only care about the extensive part that scales with the area; it doesn’t matter exactly what happens near the edge, which only makes a relative correction of order 1/R to the total counting.

So we want to include all orbitals with rm ≤ R, where rm is the peak of ψm. Each of these wavefunctions is peaked at a particular radius rm; that gives a rough way of counting, just like before when we counted the yk between the top and bottom of our Hall bar. So let’s calculate rm:

$$0 = \frac{\partial}{\partial r} \log \psi_m = \frac{m}{r} - \frac{r}{2\ell^2}$$

And the peak is located at $r_m = \sqrt{2m}\,\ell$. We can see that the orbitals are spaced like square roots
and they get more and more dense as we go out which makes sense because we’re going to want the
number of these to be proportional to the area. So they have to get more and more dense since the
area grows like R2 . So let’s finish our calculation, how many of the orbitals are there within our
disk? We need
$$r_m \le R \quad\Rightarrow\quad m \le \frac{R^2}{2\ell^2}$$

And the total number is N = R²/(2ℓ²). Our previous calculation had

$$N = N_\phi = \frac{BA}{2\pi/e} = \frac{eB\pi R^2}{2\pi} = \frac{R^2}{2\ell^2}$$

(using ℓ² = 1/eB), in agreement.
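As a quick numerical cross-check of this counting (a sketch; the grid, disk radius, and units ℓ = 1 are my own choices, not from the lecture):

```python
import numpy as np

# Check that |psi_m(r)| ~ r^m exp(-r^2/4 l^2) peaks at r_m = sqrt(2 m) l,
# and that the number of orbitals peaked inside a disk of radius R
# approaches R^2 / (2 l^2), i.e. one state per flux quantum.
l = 1.0                                    # magnetic length (set to 1)
r = np.linspace(1e-3, 30.0, 200_000)

for m in [1, 4, 9, 25]:
    log_psi = m * np.log(r) - r**2 / (4 * l**2)   # log avoids overflow
    print(m, r[np.argmax(log_psi)], np.sqrt(2 * m) * l)

R = 20.0 * l
n_orbitals = sum(np.sqrt(2 * m) * l <= R for m in range(10_000))
print(n_orbitals, R**2 / (2 * l**2))       # agree up to an O(1) edge term
```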

34.1 Many-body wave function for ν = 1 IQH State

Why is this circular gauge so useful? Let’s think about the many-body wave function. Let’s start
with the ν = 1 state. So the ν = 1 is the integer quantum hall state. Many body wave function is
going to be some slater determinant of all the states in the lowest Landau level. So let’s write that
down. To do that, it is useful to use complex coordinates. So we will define a coordinate

$$z = x + iy = re^{i\theta}$$

So if we write these wavefunctions in complex coordinates, we can see very conveniently that

$$\psi_m(r, \theta) = r^m e^{im\theta} e^{-\frac{r^2}{4\ell^2}} = z^m e^{-\frac{|z|^2}{4\ell^2}} \equiv \psi_m(z)$$

So these orbitals are given by powers of z, which is really very nice. Let’s think about the many-body wave function: it is going to be a wave function of N coordinates, written in terms of these complex coordinates. So what is this wave function? We know it should be a Slater determinant of all the lowest Landau level states, the states labeled by ψm with m = 0, 1, 2, · · ·. We want to put one electron in the m = 0 state, the m = 1 state, the m = 2 state, and so on, filling every orbital out to some maximum radius. This describes a circular droplet of integer quantum hall state.

Figure 57: Picture of the N filled orbitals within the LLL

So how do we do that? Since it is a non-interacting problem, you can take the antisymmetrized product of the single-particle wave functions:

$$\psi(z_1, \cdots, z_N) = \mathcal{A}\left[\psi_0(z_1)\psi_1(z_2)\psi_2(z_3)\cdots\psi_{N-1}(z_N)\right]$$

where A is the antisymmetrizer. It’s just notation: it means a sum over all permutations, with a sign given by whether the permutation is even or odd. And this is all for the spinless case, so we’re suppressing the spin label (the electrons are fully spin polarized).

Spin is another issue: in principle, there are two states per Landau level. Usually the way people count things is in terms of spin-polarized Landau levels. Naively, in a strong magnetic field the electrons want to align their spins to lower their energy, so you fill up the Landau level first with spin up, and then it costs a bit more energy to fill it with spin down, which becomes relevant at higher filling fractions. When I talk about ν = 1, I’m thinking about filling up the lowest Landau level with spin-polarized electrons. So I’m forgetting spin; that is what we have been doing throughout this discussion.

So it’s an antisymmetrizer of this product. If we write that out in terms of these guys, we can write
it as follows.
$$\psi(z_1, \cdots, z_N) = \mathcal{A}\left[z_1^0 z_2^1 z_3^2 \cdots z_N^{N-1}\right] e^{-\sum_i \frac{|z_i|^2}{4\ell^2}}$$

$$= \det \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ z_1 & z_2 & z_3 & \cdots & z_N \\ z_1^2 & z_2^2 & z_3^2 & \cdots & z_N^2 \\ \vdots & \vdots & \vdots & & \vdots \\ z_1^{N-1} & z_2^{N-1} & z_3^{N-1} & \cdots & z_N^{N-1} \end{pmatrix} e^{-\sum_i \frac{|z_i|^2}{4\ell^2}}$$

$$\psi_{\nu=1} = \pm \prod_{i<j} (z_i - z_j)\, e^{-\sum_i \frac{|z_i|^2}{4\ell^2}}$$

We can pull the Gaussian factor out of the antisymmetrizer, so we only have to antisymmetrize the product of powers, which is exactly the determinant written above. Of course, this is the famous Vandermonde determinant. From its structure it is clear that if any zi = zj, two columns are the same, so the determinant vanishes whenever any two coordinates coincide. That makes the product form easy to guess: the polynomial must have a zero every time zi approaches zj, and it has maximum power N − 1 in each variable, and the unique polynomial with those properties is the product over all i < j of (zi − zj). I’m not too concerned about the overall phase.

So we can see that our many-body wave function, ψ, is just given by this pretty simple formula,
product of zi − zj times this Gaussian factor. So that will be useful to keep in mind when we start
talking about fractional quantum hall.
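A small numerical check of the Vandermonde identity used here (a sketch; the random coordinates and N = 5 are arbitrary choices of mine):

```python
import numpy as np
from itertools import combinations

# det[ z_i^j ] with rows labeled by the power j = 0..N-1 equals the
# product prod_{i<j} (z_j - z_i), which is the polynomial factor of
# the nu = 1 wavefunction (up to overall sign conventions).
rng = np.random.default_rng(0)
N = 5
z = rng.normal(size=N) + 1j * rng.normal(size=N)

M = np.array([[z[i] ** j for i in range(N)] for j in range(N)])
det = np.linalg.det(M)
prod = np.prod([z[j] - z[i] for i, j in combinations(range(N), 2)])
print(abs(det - prod) / abs(prod))   # ~ 1e-14: identical up to roundoff
```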

To get this simple expression, it was somehow important that we worked in the circular gauge. I’m not sure how to write down this kind of simple algebraic expression, even for the ν = 1 quantum hall state, if you work in another gauge. It somehow relies on algebraic properties of complex numbers that we can write it in this form.

35 Fractional Quantum Hall Effect

So now we’re ready to think about the fractional quantum hall effect with this setup. The starting point is to ask: what are the key features we need to explain? Remember that we had the general result that the Hall conductivity is given by the density divided by the magnetic field for systems with Galilean invariance:

$$\sigma_{xy} = \frac{\rho}{B}$$
This follows from a simple argument about drift velocities: if you have Galilean invariance, the drift velocity picture holds, the center of mass of the fluid moves with a constant drift velocity, and the Hall conductance follows in terms of the magnetic field and the density. Let’s write this in a different form.
$$\frac{\sigma_{xy}}{e^2/h} = \frac{\sigma_{xy}}{e^2/2\pi} = \frac{\rho}{Be^2/2\pi} = \frac{Ne/A}{Be^2/2\pi} = \frac{N}{eBA/2\pi} = \frac{N}{N_\phi} = \text{“filling fraction” } \nu$$

This is just for Galilean invariant systems. We will call this quantity the filling fraction: it is the number of electrons per flux quantum, and we denote it by ν.
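For concreteness, here is a quick evaluation with assumed, illustrative numbers (typical of a GaAs 2DEG; not values from the lecture):

```python
# nu = n_e / (B / (h/e)) = n_e h / (e B): electrons per flux quantum.
h = 6.62607015e-34    # Planck constant, J s
e = 1.602176634e-19   # electron charge, C

n_e = 2e15            # sheet density in m^-2 (assumed typical value)
B = 10.0              # magnetic field in tesla (assumed)
print(n_e * h / (e * B))   # ~ 0.83 electrons per flux quantum
```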

So based on this fact, imagine we want to model the fractional quantum hall effect with a completely clean system with Galilean invariance. Consider a 2D electron gas at filling fraction ν = 1/3, and suppose it has Galilean invariance, which basically just means it is clean: no disorder, no background potential for the electrons to move in. For such a clean system, by the basic formula above, we automatically get

$$\sigma_{xy} = \frac{e^2}{3h}$$
Okay. Of course we don’t just want to get σxy; we want to explain why the fractional quantum hall conductance is quantized and perfectly exact. Our real system does not have Galilean invariance; it has disorder and so on. So we need to explain why we still get this. But remember our flux insertion argument for the integer quantum hall state: we first talked about a clean system and showed σxy = e²/h, which we could have guessed immediately from these general principles, but then we gave the flux insertion argument showing that even if we added disorder and broke Galilean invariance, no matter what interactions we add, as long as they weren’t so strong as to close the bulk gap, this quantity had to remain fixed. The key feature in that argument was that the system, or bulk, had a gap.

In particular, it had a gap to adding extra electrons or holes. If you remember, we eventually considered a process where we inserted flux, and we argued that in the non-interacting model some number of electrons were pumped across the system. We didn’t know exactly what happened once we added interactions, but we knew that no extra electrons or holes could accumulate, because the system had a gap and we inserted the flux slowly compared with the gap, so we couldn’t create any excitations in the bulk. Therefore the number of electrons pumped remained fixed even after we included interactions.

So the key feature in that argument, the thing the robustness depended on, was that our system had a gap to adding charges in the bulk: a charge gap. In other words, the key feature was that the bulk was incompressible. The result couldn’t change under small perturbations, as long as the perturbations weren’t so big as to close the gap. The same argument can be made with basically no changes here: as long as we have filling fraction 1/3 and our system is incompressible, this result will be robust to adding disorder and so on.

So if the ground state is incompressible (i.e. there is a gap to adding charged excitations in the bulk), then by these general arguments we know that σxy will be robust. This holds even if Galilean invariance is broken; it is a topologically protected quantity.

What we showed in class with the flux insertion argument is: we took a system with fixed density, and asked whether, at that fixed density, the Hall conductance would be robust to adding interactions or disorder. In a real system, the filling fraction is not exactly 1/3; it never is. The plateau extends over a whole finite range of filling fractions, and that requires another piece of physics, with extra electrons, localized states, et cetera, which I haven’t talked about.

The fundamental property that generalizes to systems beyond quantum hall is understanding why, even at fixed density, the phenomenon is robust to adding disorder and interactions. The extra piece of physics behind the plateaus is certainly important, but I don’t have time to discuss it, and it is a slightly less fundamental thing, specific to quantum hall experiments.

So we conclude that to explain at least this feature, it is sufficient to understand why our 2D electron gas has an incompressible ground state. To explain the σxy = e²/3h FQH effect, it’s enough to show that the 2DEG at ν = 1/3 has an incompressible ground state. Understanding the plateaus is a separate matter, having to do with extra electrons or quasiparticles being trapped by impurities.

Now, a key observation is that we’re never going to be able to get this from a non-interacting model: the effect requires interactions. To see that, think about the case where I have six flux quanta and two electrons. I have six orbitals, all exactly degenerate in energy, and two electrons, so in a non-interacting model I can put my two electrons in any of the six-choose-two possible configurations and they would all be exactly degenerate in energy. I would have a huge degeneracy.

Figure 58: 6 flux quanta, 2 electrons

We have $\binom{6}{2} = 15$ degenerate ground states, and the degeneracy diverges in the thermodynamic limit. So we conclude that understanding the fractional quantum hall effect requires interactions. The integer quantum hall effect doesn’t: there we get a gap already in the non-interacting problem, just from the cyclotron frequency. Here we’re really going to need interactions.
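A tiny enumeration makes the degeneracy count concrete (a sketch of the counting argument above):

```python
from itertools import combinations
from math import comb

# 6 degenerate LLL orbitals, 2 non-interacting electrons: every placement
# has the same kinetic energy, so all C(6,2) configurations are degenerate.
N_phi, N = 6, 2
states = list(combinations(range(N_phi), N))
print(len(states), comb(N_phi, N))   # 15 15
```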

35.1 FQH effect requires interaction

So what model are we going to use? We want to include interactions, namely Coulomb interactions:

$$H = \sum_i \frac{(p_i - eA)^2}{2m} + \sum_{i<j} \frac{e^2}{|r_i - r_j|}$$

We write it as an N-particle Hamiltonian since we’re including interactions. The question we want to ask: at filling fraction ν = 1/3, with exactly one electron for every three flux quanta, what is the ground state of this Hamiltonian?

We can also ask what the excitations above the ground state are and whether there is an energy gap to those excitations. Of course, there’s no way we can solve this problem directly; it is completely beyond any analytic calculation. It is a strongly interacting system with no small parameter. So instead of trying to solve the problem from first principles, we’re going to guess a candidate ground state and argue that it is a good description of our system: an educated guess.

35.1.1 An educated guess

This type of thinking is pretty common in strongly correlated many-body physics. In many cases we can’t really solve particular microscopic Hamiltonians, and in fact that is not really the goal. Instead, we try to figure out what universality class the Hamiltonian is in, work out what should be seen experimentally or numerically, and verify those predictions.

This is due to Bob Laughlin and I’m going to follow his kind of reasoning, how he kind of guessed
the answer, starting from this Hamiltonian.

1. The first thing he said was: “Let’s imagine that our magnetic field is very strong, so our cyclotron frequency is very large, larger than the Coulomb interaction scale.” That is, e²/ℓB ≪ ωc. Then we can restrict our attention to the lowest Landau level. Experimentally, this may or may not be a reasonable assumption in particular cases (in many samples the two scales are of the same order of magnitude, depending on the sample), but we can make this assumption theoretically.

2. The LLL states are of the form

$$f(z_1, \cdots, z_N)\, e^{-\sum_i \frac{|z_i|^2}{4\ell^2}}$$

where f is an anti-symmetric polynomial – since we’re dealing with fermions. This is the most
general state you can have in the LLL for fermions.

3. In order to have filling fraction ν = 1/3, the maximum power z_i^m should have m/N = 3. What I mean is: if I expand the polynomial and ask what the highest power of z1 is, it should be three times the number of electrons. Why? Because the highest power tells us the size of our droplet, and if it is 3N, the number of electrons equals 1/3 of the lowest Landau level states that fit in the droplet. This ensures the density is exactly 1/3 electron per orbital; the maximum orbital out here is at three times the number of electrons. So I want a certain degree for my polynomial: every zi should be raised to a power of order 3N, and no higher. This condition only constrains the maximum power; you shouldn’t go farther out than the orbital at 3N.

4. So far I have lots of possibilities: I can take any antisymmetric polynomial of degree m = 3N and it would be a candidate description of a ν = 1/3 state, with the right number of electrons per flux quantum. Without interactions, all these polynomials have the same energy; all these states are degenerate. However, interactions, in particular Coulomb interactions, favor the particular polynomials where the zi, the different electrons, stay far away from each other. That means wavefunctions which vanish rapidly when zi → zj. (A generic polynomial of that degree would not vanish rapidly.)

So a natural candidate is going to be

$$f(z_1, \cdots, z_N) = \prod_{i<j} (z_i - z_j)^3$$

This generalizes to any odd m: we can take (zi − zj)^m times the Gaussian factor, and we claim this explains all the plateaus at filling fractions 1/m. (Historically, ν = 1/3 was the fraction that was found first.) These are called the Laughlin 1/m states, and they are believed to explain the plateaus which are seen, via the wavefunction

$$\psi_m(z_1, \cdots, z_N) = \prod_{i<j} (z_i - z_j)^m\, e^{-\sum_i \frac{|z_i|^2}{4\ell^2}}$$

where the requirement that m is odd comes from having a fermionic system: the wavefunction must be antisymmetric, and switching zi ↔ zj produces a factor (−1)^m. This kind of wave function explains all these plateaus. There are other fractions that are seen, for example 2/5, 3/5, 3/7 and so on; building on these wave functions, people have developed an understanding of those other plateaus too, but they’re more complicated. These are the simplest quantum hall states, the 1/m plateaus.

Laughlin ν = 1/m states explain the σxy = e²/mh FQH plateaus.
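A quick numerical check that the (zi − zj)^m factor has the right exchange statistics (a sketch, with arbitrary random coordinates of my choosing):

```python
import numpy as np
from itertools import combinations

def laughlin_factor(z, m):
    # prod_{i<j} (z_i - z_j)^m, the polynomial part of the Laughlin state
    return np.prod([(z[i] - z[j]) ** m
                    for i, j in combinations(range(len(z)), 2)])

rng = np.random.default_rng(1)
z = rng.normal(size=4) + 1j * rng.normal(size=4)
zs = z.copy()
zs[[0, 1]] = zs[[1, 0]]                      # exchange two particles

for m in [1, 2, 3]:
    print(m, np.round(laughlin_factor(zs, m) / laughlin_factor(z, m), 10))
    # ratio is -1 for odd m (fermionic), +1 for even m (bosonic)
```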

So why is this wave function good? ψm(z1, · · · , zN) has several nice properties:

1. mth order zero when zi → zj.

2. Describes a circular droplet with uniform density ν = 1/m.

3. Energy gap in the bulk; in particular, incompressible.

35.1.2 Why does it have uniform density?

So why does it have uniform density? To understand that, let’s look at the probability density for this wave function: the probability to have particles at positions z1 through zN, which by definition is just |ψm|²,

$$\rho(z_1, \cdots, z_N) = |\psi_m(z_1, \cdots, z_N)|^2$$

Since this is a non-negative number, we can always write it in the form

$$|\psi_m(z_1, \cdots, z_N)|^2 = e^{-\beta V(z_1, \cdots, z_N)}$$

We’re going to think of this probability density as coming from a Boltzmann weight. You can do that for any quantum system, but in general the function V will not describe local, physical interactions; it won’t be easy to interpret in terms of simple interactions. In this case, however, because of the special form of our wave function, V has a very nice form which is easy to interpret.

Let’s choose a particular value of β, which is just a formal parameter here: choose β = 2/m. Then solving for V(z1, · · · , zN),

$$V(z_1, \cdots, z_N) = -m^2 \sum_{j<k} \log|z_j - z_k| + \frac{m}{4\ell^2} \sum_j |z_j|^2$$

The first term looks like a 2D coulomb interaction between charge m objects. So they interact by
a logarithmic interaction, it’s minus a logarithm. So that means potential decreases when particles
are far away – it’s a repulsive interaction like you would get between charge m objects in two
dimensions.

The second term also has a simple interpretation: we can interpret it as an interaction between our charge-m objects and a background charge with charge density ρφ = −1/(2πℓ²).

Figure 59: Negative charge density background

My claim is we can get this by looking at the energy of the following kind of configuration. So we
have +m charges at each of these coordinates, zi , and then we have this background charge which
is negative, right? So it is a negative background charge everywhere.

And the claim is, if you consider such a system and work out its 2D electrostatic energy, you recover exactly this V. The first term is the repulsive energy between the charge-m objects, and the second is an attractive energy between the particles and the background: the energy goes up as z → ∞, so it makes the particles want to stay close to the origin.

Figure 60: A single electron

The first term is clear by inspection: by definition, that is what the 2D Coulomb interaction between the charges looks like. Let’s show that the second term comes from the background charge. The idea is the following. Suppose I have a +m charge at radius r. From the usual electrostatic argument, it effectively interacts only with the background charge inside the disk of radius r; the forces from all the charge outside cancel. So let’s calculate the force between the enclosed background charge and our charge:

$$F = \frac{mQ}{r} = \frac{m \cdot \frac{\pi r^2}{2\pi\ell^2}}{r} = \frac{mr}{2\ell^2}$$
The force is m, our charge, times the total enclosed background charge, which I’ll call Q, divided by r: the logarithmic 2D Coulomb interaction gives 1/r forces. Here I’m using the fact that in a circularly symmetric geometry you can concentrate all the enclosed charge at the center, so it is like our charge m interacting with a point charge Q (or −Q) at the origin. So we have potential energy

$$V = \int F\, dr = \frac{mr^2}{4\ell^2}$$
So this term is exactly the interaction of the background charge so that’s good. Now we have this
nice interpretation of our wave function. We can think of probability density as identical to charges
interacting via logarithmic 2D interactions.

So our probability density is exactly the Boltzmann weight for this classical problem: it’s the density you would get from a classical statistical analysis of this system at temperature 2/m. This statistical mechanics problem of charges with logarithmic interactions at some temperature has a name: it is called the classical one-component plasma (OCP), and it has been studied previously.
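Since |ψm|² is a classical Boltzmann weight, it can be sampled directly. Here is a minimal Metropolis sketch (with my own toy parameters: small N, units ℓ = 1, simple local moves), which should show a droplet of roughly uniform density out to radius ≈ √(2mN):

```python
import numpy as np

def log_weight(z, m):
    # log|psi_m|^2 = 2m sum_{j<k} log|z_j - z_k| - (1/2) sum_j |z_j|^2
    j, k = np.triu_indices(len(z), k=1)
    return 2 * m * np.sum(np.log(np.abs(z[j] - z[k]))) - 0.5 * np.sum(np.abs(z) ** 2)

rng = np.random.default_rng(2)
m, N, steps = 3, 20, 40_000
R = np.sqrt(2 * m * N)                              # expected droplet radius
z = np.sqrt(rng.random(N)) * R * np.exp(2j * np.pi * rng.random(N))
lw = log_weight(z, m)
radii = []
for t in range(steps):
    trial = z.copy()
    trial[rng.integers(N)] += rng.normal() + 1j * rng.normal()  # local move
    lw_trial = log_weight(trial, m)
    if np.log(rng.random()) < lw_trial - lw:        # Metropolis accept/reject
        z, lw = trial, lw_trial
    if t > steps // 2:                              # keep second half only
        radii.append(np.abs(z))

r = np.concatenate(radii)
print(np.percentile(r, 95), R)   # droplet edge close to sqrt(2 m N) l
```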

The physics of this is understood; we know the phase diagram. Generally speaking, a system like this can be in two types of phases. One is a plasma phase: by definition, a phase where the charges screen test charges. If I put a test charge into my system, the m charges perfectly screen it; in the plasma phase the charges are arranged with roughly uniform density, except that when I insert a test charge a few of them come and screen it. The other possibility is a non-plasma phase, which can have non-uniform density; it can form something like a crystal. In this case the phase diagram is known, and it turns out that depending on the value of m there are two phases.

This is known numerically. When m ≲ 70, the OCP is in the plasma phase and the background charge is screened exactly: the plasma phase screens all charges perfectly, in particular the background charge. The particles have exactly the right density on average, distributed perfectly uniformly, so as to screen the background charge. That is when m is not too large. When m is small, the particles don’t interact with each other too much, so they’re distributed somewhat randomly; their main interaction is with the background charge, which they screen. When m becomes large, the interactions between the particles become very strong, and there’s a phase transition out of the plasma phase.

Let’s talk about what happens when m ≲ 70, when we’re in the plasma phase. Since we know the background charge is screened exactly, we know exactly what the density of the particles is: it must be perfectly uniform, at exactly the right density to screen the background charge. That tells us the electron density. The electron density ne, the number of electrons per unit area, is exactly the amount that screens the background charge:

$$n_e = \frac{\rho_\phi}{m} = \frac{1}{2\pi m \ell^2} \quad\Leftrightarrow\quad \nu = \frac{1}{m} \qquad \text{uniformly throughout the droplet}$$
The reason I divide by m here is every electron carries a charge m – so if I want to screen this
background charge density ρφ , I have to have ρφ /m electrons.

This is exactly the density corresponding to filling fraction ν = 1/m. So the state has uniform density when it is in this phase. When m ≳ 70, there is a phase transition in this model; I’ll just mention it offhand, since it is not so important for the physics of the fractional quantum hall effect. The OCP enters a hexagonal crystal phase, so we get something like this:

The point is that when m becomes large, the particles repel strongly, so they arrange themselves to get as far away from each other as possible, and the density is totally non-uniform in this state: it is a crystal. That means the Laughlin state is no good for large m; it doesn’t describe a uniform density state anymore. The main point is that by mapping our wave function onto this plasma, we know what the physics is: when m is not too large, the particles are distributed perfectly uniformly on average around the droplet, so the state has uniform density.

Notes Quantum Many Body Physics
Day 18
November 25th, 2013 Giordon Stark

36 Last Time

$$\psi_m(z_1, \cdots, z_N) = \prod_{i<j} (z_i - z_j)^m\, e^{-\sum_i \frac{|z_i|^2}{4\ell^2}} \qquad \nu = \frac{1}{m}, \quad m \text{ odd}$$

We had a model for fractional quantum hall effect that could explain the plateaus that occur at
filling fraction ν = 1/m where m is odd. And basically, we wanted to kind of guess an answer for
what the ground state of our system looks like. It is an interacting electron system at filling fraction
ν = 1/m. There is no way to analytically – from first principles – figure out what the ground state
looks like. Instead we explained how Laughlin guessed an answer for what the ground state looks
like. He guessed a particular wavefunction, the Laughlin wavefunction. Why was this wavefunction
such a compelling guess? Because it had very nice properties.

1. mth order zeros when zi → zj

2. describes a circular droplet with uniform density ν = 1/m

3. gapped in the bulk, in particular incompressible

Today, we’re going to talk about why it makes sense that this state is gapped in the bulk. I’m going
to explain property number three. But first, let me quickly review property number two here.

36.1 Why ν = 1/m?

How can you see that? What is special about this filling fraction? Think about the circular orbitals, and first look at the highest power: z1 is raised to the power m(N − 1), because z1 comes in each factor (z1 − z2), (z1 − z3), · · · and there are N − 1 of those.

Figure 61: The orbitals

The largest occupied orbital is therefore of order z^{m(N−1)}, so there are about mN − m orbitals. Taking the N electrons divided by the total number of orbitals, the filling fraction goes to 1/m in the limit N → ∞. We can read off the filling fraction from the degree of the polynomial. Mathematically,

$$\lim_{N\to\infty} \frac{N}{m(N-1)} = \frac{1}{m} \equiv \nu$$
This just tells us that the average density is correct.

36.2 Why uniform density?

How do we know it is uniform density? The way we argued that is that we mapped our wavefunction
on to a statistical mechanical problem. We mapped the probability density. So we looked at ψ 2

|ψm (z1 , · · · , zN )|2 = e−βV (z1 ,··· ,zN )

and said we can think of this as a Boltzmann weight. And V here takes the very simple form

$$V = -m^2 \sum_{j<k} \log|z_j - z_k| + \frac{m}{4\ell^2} \sum_j |z_j|^2$$

This was for a particular choice of β, namely β = 2/m. This map tells us that the probability density of the electrons in the state equals the probability density of this classical problem at this temperature. This has a very simple picture.

It describes charge-m objects interacting logarithmically, together with a background charge. Roughly speaking, we have a collection of charges, one at each position z1 through zN, and then there is a constant background charge everywhere in space, extending all the way out to infinity. V is the potential energy of this collection of charges together with the background charge, whose density is −1/(2πℓ²).

Figure 62: sea of charges with density −1/(2πℓ²)

What we argued is that it is known numerically that as long as m ≲ 70, this statistical mechanical problem forms a plasma phase. That means the charges are fluctuating

a lot and their interactions with a background charge are more than their interactions with each
other. So they screen the background charge. On average, they distribute themselves exactly right
so their density cancels the background charge. If you tune m > 70, the charges go to a different
phase where, instead of perfectly screening the background charge, they tend to form a crystalline
structure which would mean our wavefunction over here, instead of having uniform density, would
have some non-uniform density – some crystalline structure.

So to describe a state with uniform density, an incompressible state, we need m ≲ 70 (plasma phase). This gets us perfect screening of the background charge and implies that the electron density is uniform.

36.3 Why is state gapped?

Gapped means there is a finite excitation energy above the ground state. I have not yet given you a Hamiltonian for which this is the ground state; it is not the ground state of the Coulomb Hamiltonian. I’m going to describe an ideal, cartoon Hamiltonian for which this is the exact ground state, and then we will discuss whether this Hamiltonian has a gap. So what is the Hamiltonian?

$$H_{\text{ideal}} = \underbrace{\sum_i \frac{(p_i - eA_i)^2}{2m}}_{\text{kinetic}} + \underbrace{\sum_{i<j} V\!\left(\frac{r_i - r_j}{a}\right) \cdot \frac{1}{a^{2m-2}}}_{\text{potential}}$$

The ideal Hamiltonian has two terms: a kinetic energy term, and a potential energy term consisting of short-range repulsive interactions. I introduce some potential V, which depends on (ri − rj)/a for some length scale a, and for technical reasons I put a coefficient 1/a^{2m−2} out front. What we’re going to do is consider this Hamiltonian and take the limit a → 0.

Now I should tell you what V is. V can be essentially any short-range repulsive interaction, for example V(x) = V0 e^{−|x|}. This would be a reasonable interaction if your Coulomb interaction were screened. The claim is that in the limit a → 0, the Laughlin state is the exact ground state of this ideal Hamiltonian. Let’s verify that.

In the limit a → 0, the energy of ψm scales like

$$E \sim \underbrace{N\,\frac{\omega_c}{2}}_{(p_i - eA_i)^2 \text{ term, drop}} + \frac{N(N-1)}{2} \int d^2z\; p(z_1 - z_2 = z)\, V\!\left(\frac{z}{a}\right) \frac{1}{a^{2m-2}}$$

There are two terms. The kinetic energy term gives us half the cyclotron frequency per electron, the harmonic oscillator ground-state energy, times the N electrons. We’re not going to be interested in this piece, so we drop it; we’re always considering the limit where the cyclotron frequency is large and we stay in the lowest Landau level. Within the lowest Landau level, which states are favorable is determined by the potential energy.

Let’s think about the potential energy. We can estimate it from the expectation value of the interaction term: we need the probability of having two particles separated by a given distance. So we integrate, over all two-particle displacements z (a complex displacement), the probability that, say, z1 and z2 are separated by z, times the interaction energy associated with that separation, including the factor 1/a^{2m−2} out front. That accounts for the first two particles; the combinatorial factor N(N − 1)/2 counts all pairs.

So we drop the kinetic term, which is the same for all states in the LLL; set it to zero. In the limit that a gets very small, the interaction is localized: it only has range a, so the integral is effectively over a region of area a². Using p(z1 − z2 = z) ∼ |z|^{2m} for small z (from the mth order zeros),

$$\int d^2z\,(\cdots) \sim \frac{1}{a^{2m-2}} \cdot a^2 \cdot a^{2m} \sim a^4$$

Therefore we conclude that E → 0 as a → 0. And it’s all because of these mth order zeros which
is making the probability that the two particles are near each other very very small. On the other
hand you can convince yourself that when you look at other states, you get an energy which is
strictly positive.
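To make the scaling fully explicit, substitute z = au (a quick added check, using p(z) ∼ |z|^{2m} for small z):

$$\int d^2z\; |z|^{2m}\, V\!\left(\frac{z}{a}\right) \frac{1}{a^{2m-2}} = \frac{a^2\, a^{2m}}{a^{2m-2}} \int d^2u\; |u|^{2m}\, V(u) \sim a^4$$

since the remaining integral is a finite constant for any short-range V.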

The basic point is the following fact: ψm is the unique LLL state with mth order zeros and maximum power z^{m(N−1)}; it only goes out to that orbital. Any other polynomial with that degree will have at least one zero of order m − 2 or lower, and because of that, all the competing states have higher energy when we calculate it.

Consider a competing state with a zero of order m − 2. In that case, when we calculate the energy,

$$E \sim a^2 \cdot a^{2(m-2)} \cdot \frac{1}{a^{2m-2}} \sim 1$$
This is why I put that factor in: it’s not a proof, but with it, the competing states not only have higher energy, they have an energy of order one. That is important because I want to argue there is a finite energy gap even in the limit a → 0; the interaction is very strong and very short-range. If I didn’t put that factor there, ψm would still be the lowest-energy state, but the gap would go to zero as a → 0. It is not so important, but that is the reason I did it.

All other LLL states have lower-order zeros and therefore have E > 0. From this we conclude that ψm is the exact ground state of Hideal in the limit a → 0, ωc → ∞ (restricting to the LLL).

This does not prove there’s a gap. All it proves is that this state is the ground state: other states have energies greater than zero, but maybe that energy difference goes to zero in the thermodynamic limit. In fact, as far as I know, there is no way to prove this Hamiltonian has a gap; it is very hard in general to prove a gap for any Hamiltonian, except in special exactly solvable cases. But this particular short-range Hamiltonian can be studied by exact diagonalization on computers, and the numerical evidence suggests that in the thermodynamic limit the splitting between the ground state and the excited states approaches a nonzero constant.

36.3.1 What about the gap?

So what about the gap? There’s nothing I can say analytically, but numerically it looks gapped. That is the sense in which this state is gapped: we have an ideal Hamiltonian for which it is the exact ground state, and numerically that Hamiltonian seems to have an actual energy gap separating it from excited states. The reason I had to put in that power 1/a^{2m−2} is that without it, this statement wouldn’t strictly be correct: in the limit a → 0 there would be low-energy excitations. It is a question of order of limits to some extent. For any finite a, I think the system has a gap; with the factor included, the gap stays constant as a → 0.

I claim it is the unique ground state. What if there were a Hamiltonian for which it has some degeneracy? I’m not sure how to answer that; physically it’s hard to imagine a scenario. When do you get degeneracies in physics? We may touch on this at the end of the class. Topological degeneracy doesn’t apply because we’re on a disk, so it is hard to imagine physically why you would have a degeneracy. We would like to say something about the Coulomb Hamiltonian, and in that case we believe there is a unique ground state, with no degeneracy, separated by some gap. Here I’ve come up with a toy model which captures some of that physics: an exactly solvable Hamiltonian that has a unique ground state and, numerically, appears to have a gap.

37 Interpretation of Laughlin States

It seems like it has a gap, which is what we wanted in order to understand why ν = 1/m has an incompressible ground state. But what is the precise connection to the physical system? Remember that the real system is described by the Coulomb Hamiltonian (a good first approximation), with long-range Coulomb interactions between the electrons:
$$H = \sum_i \frac{(p_i - eA_i)^2}{2m} + \sum_{i<j} \frac{e^2}{|r_i - r_j|}$$

So let’s imagine we take this Hamiltonian and compute its ground state, which we can call ψcoul. The ground state of this Hamiltonian is definitely not the Laughlin state for any system size: ψcoul ≠ ψm. So what is the relationship between the two? Why am I telling you about the Laughlin state when it is really this ground state we should be interested in? Numerically, it looks like ψcoul also has an energy gap. Furthermore, and this is the important point, it’s believed, again based on numerical evidence, that not only does this Hamiltonian have an energy gap, it can be smoothly deformed into the ideal Hamiltonian I wrote down without closing the gap.

That is, there exists a continuous deformation of H, called H(s), with

$$H(s=0) = H_{\text{ideal}} \qquad H(s=1) = H_{\text{coul}}$$

where the gap doesn’t close anywhere in between (an adiabatic path).

So here’s our Hideal, here’s Hcoul, and the claim is that you can smoothly connect them by changing the structure of the interaction. For example, maybe you interpolate: add a very weak Coulomb potential, then slowly increase the Coulomb potential while decreasing the short-range potential, and connect from one Hamiltonian to the other. There exists such a path where the gap does not close in between.

Figure 63: Cartoon of smooth deformation

That has important physical significance, because it means these two Hamiltonians belong to the same phase: you can go from one to the other, the gap doesn’t close, they’re smoothly connected, and there is no phase transition in between.

Whenever you have two Hamiltonians or two ground states connected this way, you say they are adiabatically connected; by definition, they are part of the same quantum phase, the set of all Hamiltonians that can be connected together without a phase transition. The key point is that for Hamiltonians or ground states belonging to the same quantum phase, certain properties are universal: they don’t depend on where you are in the phase, only on the phase itself. That is true of many of the most interesting properties, such as the Hall conductance and the existence of fractional charge and statistics. Basically, many of the most interesting properties of FQH states are shared by every state in the same quantum phase.

In some sense, this is what we mean by physical properties. In a lot of many-body condensed matter physics, we are not interested in the details of one particular interaction; we are interested in what holds independent of those details. Properties that are independent of the details and depend only on the quantum phase are called universal properties. If you’re only interested in universal properties, any ground state in the phase is as good as any other: we can study ψm (Laughlin) instead of ψcoul (Coulomb).

37.1 Low energy excitations

So what is the nature of the excitations above the gap? That is what I want to talk about next.

37.1.1 ν = 1 IQH case

In the integer quantum hall case, it’s pretty clear there are two types of bulk excitations: electrons and holes. The electrons would be something you put into the second Landau level, and holes would be something you remove from the lowest Landau level. For simplicity, let’s focus on the holes, because for technical reasons they are slightly easier to understand: they live entirely within the lowest Landau level, so we can think purely in terms of lowest Landau level physics.

Figure 64: N filled orbitals

So let’s imagine we have an IQH state in some kind of circular droplet, filling all the orbitals up to some maximum orbital with angular momentum N − 1; the maximum filled orbital is z^{N−1}. So I start with N electrons and fill all the orbitals up to z^{N−1}.

Let me say it in a different way. We start with N electrons and impose a boundary condition that only lets the system fill orbitals up to z^{N−1}. You can think of that as putting a potential at the edge; we’re trying to capture the fact that we want a droplet contained in a finite disk. (One could instead put the system on a sphere, but I don’t want to do that here.) So we imagine putting our IQH state in a disk where only orbitals up to z^{N−1} can be filled, imposing the maximum power (maximum angular momentum) condition

$$L_{\max} \le N - 1$$

With that condition, we can fill up to z^{N−1}; we have a unique ground state where we fill them all up. We get a Slater determinant of all the orbitals up to the maximum outer orbital. What does the wavefunction look like for this ground state?

$$\psi(z_1, \cdots, z_N) = \det \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 \\ z_1 & z_2 & z_3 & \cdots & z_N \\ z_1^2 & z_2^2 & z_3^2 & \cdots & z_N^2 \\ \vdots & \vdots & \vdots & & \vdots \end{pmatrix} e^{-\sum_i \frac{|z_i|^2}{4\ell^2}} = \prod_{i<j} (z_i - z_j)\, e^{-\sum_i \frac{|z_i|^2}{4\ell^2}}$$

There is this Gaussian factor out front. Equivalently, of course, as we discussed last time, you can write this determinant in the simple product form, a product of (zi − zj).

So if we impose this maximum angular momentum condition, we have a unique ground state, just as I claimed the Laughlin state is a unique ground state given the interactions plus this boundary condition. For the integer quantum hall state we don’t need interactions: as long as we have N electrons and this condition, we already have a unique ground state.

Figure 65: An empty orbital

Now let’s imagine we take out one of the electrons: consider exactly the same system, but with N − 1 electrons and Lmax ≤ N − 1 (boundary condition). You get N degenerate states, given by leaving any one of the orbitals z^k empty. If I want to write down the wavefunction for such a state, we can call it
 
$$\psi_{\text{hole}}(z_1, \cdots, z_{N-1}; k) = \det \begin{pmatrix} 1 & 1 & \cdots \\ z_1 & z_2 & \cdots \\ \vdots & \vdots & \\ \widehat{z_1^k} & \widehat{z_2^k} & \cdots \\ \vdots & \vdots & \end{pmatrix} e^{-\sum_i \frac{|z_i|^2}{4\ell^2}}$$

(where the hats indicate that the row of kth powers is omitted).
So we’ve taken out one of the rows. In this model, these are N different hole states, one for each orbital that can be left empty. That is one way to parameterize our hole states: leave any one of the orbitals empty. But there’s another way. Parameterizing by orbitals is natural if you think in orbital space; you might instead want to think more physically and imagine taking an electron out at a particular point in space rather than from a whole orbital. That also gives a way to organize the hole states.

(I guess they’re not quasihole states at this point, just hole states.) We generate them by applying an electron destruction operator at some point in space, parameterized by ξ: apply an electron destruction operator at the point ξ to my wavefunction ψ. Schematically, I have some collection of electrons, and at some point in space, right here say, I take out an electron. That point is ξ.

The zi’s are the coordinates of the other electrons. I’m going to take out an electron at the point ξ: I apply my destruction operator to my original full Slater determinant state, and I get some other wavefunction with one fewer electron, parameterized by where I took out the electron. That is, we can generate hole states by applying the electron destruction operator c(ξ) to ψ:

$$c(\xi)\psi(z_1, \cdots, z_N) = \psi_{\text{hole}}(z_1, \cdots, z_{N-1}; \xi) = \prod_i (\xi - z_i) \prod_{i<j} (z_i - z_j)\, e^{-\sum_i \frac{|z_i|^2}{4\ell^2}}$$

This I'm going to call ψhole. What does ψhole look like? We can write down its wavefunction. Once we take out an electron at this point, what we get is a wavefunction that looks just like the old integer quantum Hall state. Our original wavefunction had a zero whenever two particles came close to each other. So when I apply the destruction operator, the only non-zero contribution comes from configurations where one of the electrons is located at ξ; otherwise I just get zero. And if an electron is located at ξ, what does the operator do? It returns a state with the N − 1 other electrons, and the amplitude for that state is the original amplitude, which vanished whenever any other particle got near ξ, since there was an electron there. I didn't say it very clearly, but maybe you can convince yourself that applying this destruction operator just replaces one of your zi's by ξ. It is true for any fermionic wavefunction: if I apply a destruction operator in first-quantized form, it substitutes that position into my wavefunction.
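To make this substitution property concrete, here is a minimal symbolic check (my own sketch, not from the lecture, assuming Python with sympy). It verifies, for the polynomial part of the ν = 1 state, that setting one electron coordinate equal to ξ reproduces the hole-state formula, up to the sign (−1)^(N−1) that comes from reordering the (ξ − zi) factors; the Gaussian factors only shift the normalization.

    import sympy as sp

    # Polynomial part of the filled nu = 1 state: prod_{i<j} (v_i - v_j).
    def vandermonde(vs):
        return sp.prod([vs[i] - vs[j]
                        for i in range(len(vs)) for j in range(i + 1, len(vs))])

    N = 4                                # electrons before removing one (arbitrary)
    zs = list(sp.symbols(f'z1:{N}'))     # z1 ... z_{N-1}: the remaining electrons
    xi = sp.Symbol('xi')

    lhs = vandermonde(zs + [xi])         # first-quantized c(xi): set z_N = xi
    rhs = vandermonde(zs) * sp.prod([xi - z for z in zs])   # hole-state formula

    # they agree up to (-1)^(N-1) from flipping each (xi - z_i) factor
    assert sp.expand(lhs - (-1) ** (N - 1) * rhs) == 0
    print("c(xi) acts by substituting xi into the wavefunction")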

Instead of parameterizing the hole states by which orbital I've taken the electron out of, I've parameterized them by an actual position in space. Of course there are infinitely many positions ξ in space, while there are only N different orbitals, so you can immediately see that these states are not orthogonal to one another. The set of hole states ψhole(z1, · · · , zN−1, ξ) forms a very over-complete basis (like coherent states).

37.1.2 Laughlin ν = 1/3 case

Again, focus on holes for simplicity. Start with N electrons, Lmax ≤ 3(N − 1). Again, let’s put this
kind of boundary condition which says we can fill up orbitals up to some maximum momentum.
This is a poor man’s way of doing a problem on a sphere. I’m not going to do that here. Use
Hideal .

Let's consider the ideal Hamiltonian. This is the system we just talked about earlier in class: you impose this boundary condition and you look at the ideal Hamiltonian, and there is a unique ground state, because there is a unique polynomial with 3rd-order zeros and the maximum angular momentum. So you get a unique ground state just like before. The unique ground state is
ψ(z1, · · · , zN) = ∏i<j (zi − zj)³ e^(−Σi |zi|²/4ℓ²)

This is just the Laughlin state. Okay. Now let's do the same thing we did before: take out one of the electrons and ask what happens. What does the spectrum look like? How many low-energy states are there? What are their wavefunctions? Now consider a system with (N − 1) electrons, Lmax ≤ 3(N − 1).

In this case, since we have one fewer electron, if you start playing around with polynomials you can see there are lots you can write down that have 3rd-order zeros: with one fewer electron, it's easier to fit them into the droplet, and there are several different ways of doing it. In fact, if you work it out (it's kind of fun to try for a small number of electrons on a piece of paper), you will find multiple states with 3rd-order zeros, and their number scales like N³. These are all zero-energy states, because they all have the 3rd-order zeros; so they are all degenerate ground states.

We will find N³ degenerate ground states. We have an overcomplete basis:


ψ(z1, · · · , zN−1, ξ1, ξ2, ξ3) = ∏i (ξ1 − zi) ∏i (ξ2 − zi) ∏i (ξ3 − zi) ∏i<j (zi − zj)³ e^(−Σi |zi|²/4ℓ²)

These are states that have N − 1 electrons, 3rd-order zeros, and the maximum allowed angular momentum. Each has the properties:

• 3rd order zeros

• Maximum angular momentum Lmax ≤ 3(N − 1)

They would be ground states of that ideal Hamiltonian if I had one fewer electron. My claim is that they span the entire space: they're an over-complete basis for this space. So we've figured out a way to parameterize the excitations. Now, we have three parameters here, ξ1, ξ2, ξ3. Physically, what do they mean? If you look at this wavefunction and plot any property (density, energy density, particle number density, anything) as a function of position, you will find that the state looks exactly like the ground state everywhere except at three special points, ξ1, ξ2, ξ3. There are three localized excitations; away from those points the state looks like the ground state, and all properties reduce to those of the original ground state.

Figure 66: 3 quasiholes

It will look like the ground state with N electrons; even the density will be the same. All the missing charge has been taken out at these points. Just like before: in the integer quantum Hall case I didn't say it because maybe it was obvious, but there I had wavefunctions parameterized by a single position ξ, and the state looks just like a filled integer quantum Hall state except near the position where the hole is located. I am saying the same thing here: everywhere away from the ξi it looks like a Laughlin state with N electrons, and at each ξi there is some localized excitation.

ψ consists of 3 localized excitations, called quasihole excitations. What is their relationship to the usual hole? The usual hole is what I get by applying the electron destruction operator at some position ξ to my state:
c(ξ)ψ(z1, · · · , zN) = ∏i (ξ − zi)³ ∏i<j (zi − zj)³ e^(−Σi |zi|²/4ℓ²)

Figure 67: A single hole, ξ

The usual hole is a composite of 3 quasiholes! So what does that mean? Put another way: if I take out an electron and create an excitation like this, it can split into three quasiholes parameterized by three separate positions ξ1, ξ2, ξ3.

Figure 68: a single hole is 3 quasiholes!

These zeros can move in different directions and you get something like this. It means that if you add a hole to your system (or take an electron out of it), it will generically split into pieces. I've been talking about the Laughlin ν = 1/3 case just for simplicity here, but if you do it in general for the ν = 1/m case, the hole can break into m quasiholes.

Day 19 - November 27th, 2013

38 Last Time

In the first part of class I want to finish last time's discussion of quasihole excitations in fractional quantum Hall states. The way we introduced quasihole excitations was to think about a physical system: we considered a system of N electrons in a kind of disk geometry, where they can only fill up orbitals up to z^(3(N−1)). That is, N electrons with Lmax ≤ 3(N − 1).

And we considered the ideal Hamiltonian which we introduced last time, and we argued that for this Hamiltonian with N electrons and this boundary condition, there is a unique ground state, given by the Laughlin wavefunction. That is the ground state of the system without any quasiholes; it is the uniform state. The ground state of Hideal is

ψ(z1, · · · , zN) = ∏i<j (zi − zj)³ e^(−Σi |zi|²/4ℓ²)

Then what we did is say: imagine we take one electron out of the system and look at the lowest-energy states of Hideal with one fewer electron. So we consider a system with N − 1 electrons but the same boundary condition, Lmax ≤ 3(N − 1). You might expect the number of degenerate states to scale like N when you take out an electron; that is what would happen in a non-interacting problem. Instead you find there are roughly N³ degenerate ground states. And furthermore, I claimed (maybe without proof) that there is a simple basis which spans the space of all of these states. It is an over-complete basis, so these are not orthogonal states, but in any case they span the whole space. They are wavefunctions of N − 1 electrons parameterized by three positions, an overcomplete basis:
ψ(z1, · · · , zN−1, ξ1, ξ2, ξ3) = ∏i (ξ1 − zi) ∏i (ξ2 − zi) ∏i (ξ3 − zi) ∏i<j (zi − zj)³ e^(−Σi |zi|²/4ℓ²)

Okay. So the claim was that these states really are zero-energy states: they have the 3rd-order zeros and they have the right maximum angular momentum. I won't prove that this is all of them; you can check it in small cases if you'd like.

And for these states, the three parameters ξ1, ξ2, ξ3 really parameterize the locations of three localized excitations. The reason the count scales like N³ is that we can put each of these three excitations in any of roughly N possible places in our system. These localized excitations, created when we take an electron out of the system, are called quasihole excitations.

38.1 Quasihole excitations

We talked about quasihole excitations versus a conventional hole, where you take an electron out. For the usual hole you apply a single electron destruction operator to the Laughlin wavefunction, which gives

c(ξ)ψ(z1, · · · , zN) = ∏i (ξ − zi)³ ∏i<j (zi − zj)³ e^(−Σi |zi|²/4ℓ²)

A hole is a special case where ξ1 , ξ2 , ξ3 coincide. A regular hole is the same as three quasi-holes.
If you take away an electron in some position ξ, so you create a hole, then that hole can split into
three parts. Energetics determines whether it wants to split into three parts, whether the state is
lower in energy than the composite object, which is a conventional hole. But the important thing is
that it can split, which is a very strange, unusual phenomenon, that an electron or a hole can split
into three localized excitations that can move far apart from one another.

We pointed out that for the ideal Hamiltonian, these are both eigenstates of the Hamiltonian. So
for the ideal Hamiltonian, if I create a hole, it would sit there and not do anything. Generically,
probably, this would no longer be an eigenstate and you might expect that if you had Coulomb
interactions, this would be lower energy to split into three pieces. But again, it is not the energetics
that is important, the important thing is that an electron can fractionalize. I think quantum hall
was really the first example where this was seen, even as a theoretical possibility.

Elementary excitations of this Laughlin state are actually pieces of an electron, in this case a third of an electron. More generally, if you look at the ν = 1/m case, the same story holds, but now a conventional hole will break into m quasiholes.

38.2 What is the charge of quasihole?

At the simplest level, we can immediately see the answer. If you put m quasiholes together and combine them, you get one usual hole, one conventional hole: m quasiholes = 1 conventional hole.

And it's kind of clear that these quasiholes are all equivalent: they're basically identical particles, so they must carry equal charge. Since the charge of the conventional hole is −e, it follows that each quasihole has

⇒ charge = −e/m
Okay, so that's a quick argument: it must carry a fractional charge. But let's try to see it more directly. Let's analyze what the density of electrons is in the vicinity of the point ξ1: what is the deviation in the density? We're going to integrate it in the region around ξ and find a deficit of e/m.

So let’s see this more directly. Let’s calculate the charge directly. So the way we’ll do that is look
at the wavefunction for a single quasihole. I’m thinking of a disc geometry. You can have a single
quasihole as I’ll explain. So you have a single quasihole at some position ξ

ψqh(z1, · · · , zN; ξ) = ∏i (ξ − zi) ∏i<j (zi − zj)³ e^(−Σi |zi|²/4ℓ²)

So what we'll do is calculate the charge density near ξ. This is a complicated wavefunction; calculating the density exactly requires doing some complicated integrals, and it is not easy to extract information from these wavefunctions directly. But we have a nice property: we can map them onto the one-component plasma (OCP), a model that is well understood. How does that mapping work? We look at |ψ|², which is really the quantity we're interested in:
|ψqh(z1, · · · , zN; ξ)|² = e^(−βV(z1, ··· , zN, ξ)),    β = 2/m
This is what is going to encode properties like the density. This is the probability density of having
particles at these positions, and we write it as a Boltzmann weight. We choose β = 2/m like we did
previously. Let’s write out what V is
V(z1, · · · , zN, ξ) = −m² Σi<j log|zi − zj| + (m/4ℓ²) Σi |zi|² − m Σi log|zi − ξ|

We get one term from the product of (zi − zj)³ (more generally, raised to the mth power), and the next term from taking the logarithm of the exponential factor. The last term is new, one we didn't have before: it comes from the logarithm of the (ξ − zi) factors. But this new term has a very simple physical significance. If you remember, we could think of the one-component plasma as the energy of a bunch of charge-m objects interacting logarithmically with each other.

Figure 69: Interaction with a +1 charge

So we have the background charge, which is negative, and we have these +m charges interacting logarithmically with each other and with the background potential. That is the one-component plasma, and we argued that it gives a uniform density for m ≲ 70 (the plasma phase), where the +m charges screen the background charge. But now we have this new term, and you can easily understand it: it looks like all N charges are interacting with one more charge, a +1 charge sitting at position ξ. All the other guys sitting at positions z1, · · · , zN each get an interaction like m log(), whereas a +m interacting with a +m gives an m² log(). Once we have this model, we can immediately understand what the charge density looks like in the vicinity of ξ.

Let's assume m ≲ 70, so the OCP is in the plasma phase and we get perfect screening of the Q = 1 charge. What does perfect screening mean? It means all the +m's move away slightly, on average, from where the +1 charge sits, just enough so that there is an extra negative charge density there. This gives a local accumulation of screening charge Qscreen = −1, which corresponds to a physical charge of −e/m.

It makes sense physically, right? If you just look at the wavefunction you can see that the density
of electrons is going to be a little bit lower near where ξ is because you know, this wavefunction has
zeros. So the idea is that there will be slightly fewer electrons near ξ and the amount fewer has to
be exactly this amount, −e/m. So let’s look at what the charge density looks like.

Figure 70: Ground state electron density with n electrons and zero quasiholes.

So in the ground state, what happens? In the ground state, the charge density is completely uniform
everywhere within this disk, right? So that means if you look at the density, it is just going to be
flat until we get out to the edge which is, let’s say, here. And then of course the charge density is
going to drop off to zero. So we will see some picture like this in the ground state.

Figure 71: Ground state electron density with n electrons and one quasihole at ξ

So what does ρquasihole look like? It looks exactly like the ground state density until we get to the point ξ, where there is some deficit: it probably goes down to zero right at ξ, comes back up, and then runs flat for a while. The claim is that if you integrate ρ over this little blip, the deficit is exactly −e/m of charge.

Now, of course, we know our system has an integer total charge, so you can't have −e/m charge here without it being compensated somewhere else. How is it compensated? It is not hard to convince yourself that out at the edge, where the wavefunction falls off, the density extends just a little bit further. Relative to the original ground state, there is some extra charge at the edge, +e/m.

So they both have the same total charge. The charge is distributed a little bit differently. If you
look at that wavefunction, you have this product of ξ − z. What does that factor do to your
wavefunction? It suppresses the wavefunction near ξ. So you get this deficit. But you can also see
it is kind of going to increase the probability at large z, because you have this extra factor. So that
is what leads to the fact that it falls off a little bit slower out at the edge.
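None of this has to be taken on faith: |ψqh|² is a perfectly good Boltzmann weight, so the density profile can be sampled directly. Below is a rough Metropolis sketch (my own, not from the lecture, assuming Python with numpy; the particle number, step count, and step size are arbitrary choices, with no careful error analysis). It samples |ψqh|² for the m = 3 Laughlin state with the quasihole pinned at the origin and histograms the radial electron density; the profile shows the dip near ξ = 0 and the flat plateau further out, with an integrated deficit approaching 1/m of an electron for large N.

    import numpy as np

    m, N, ell = 3, 20, 1.0
    xi = 0.0 + 0.0j                     # quasihole pinned at the origin

    def log_weight(z):
        # log|psi_qh|^2 up to a constant, i.e. minus the "plasma" energy beta*V
        i, j = np.triu_indices(N, k=1)
        pair  = 2 * m * np.sum(np.log(np.abs(z[i] - z[j])))
        gauss = -np.sum(np.abs(z) ** 2) / (2 * ell ** 2)
        hole  = 2 * np.sum(np.log(np.abs(z - xi)))
        return pair + gauss + hole

    rng = np.random.default_rng(0)
    z = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(m * N) * ell / 2
    logw = log_weight(z)

    radii = []
    for step in range(200_000):
        trial = z.copy()
        trial[rng.integers(N)] += 0.5 * (rng.normal() + 1j * rng.normal())
        lw = log_weight(trial)
        if np.log(rng.random()) < lw - logw:        # Metropolis accept/reject
            z, logw = trial, lw
        if step > 20_000 and step % 50 == 0:        # measure after equilibration
            radii.extend(np.abs(z))

    counts, edges = np.histogram(radii, bins=40, range=(0, 12))
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    n_samples = len(radii) // N
    print(counts / (area * n_samples))   # radial density: dip at r ~ 0, flat plateau

Running the same sampler with the hole factor dropped gives the uniform droplet of Figure 70, which makes the comparison with Figure 71 explicit.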

39 Fractional Statistics

So what we've shown here is something really unusual: the excitations of this fluid, of this quantum Hall system, actually carry fractional charge. They are fractions of an electron.

But the next thing I want to discuss is that these things, they not only carry fractional charge,
they also carry fractional statistics which I want to now explain “what are fractional statistics” and
then I want to compute it for you explicitly and prove to you that these excitations carry fractional
statistics.

39.1 Identical Particles

So the starting point, I need to explain what is fractional statistics? And to do that, I have to tell
you a little bit about identical particles.

In my view, most treatments of identical particles in most textbooks are very misleading; they really confuse formalism with physics. So I'm going to give another way of thinking about identical particles, which will hopefully clarify things. But it is a little bit more involved than the usual

approach, which says something like, "Oh, wavefunctions can be symmetric or anti-symmetric." Those are statements about formalism. Here I want to address: what does it physically mean to have fermionic statistics or bosonic statistics? Why can we have more kinds of statistics in two dimensions? And so on.

I want to think about a gapped Hamiltonian H in d dimensions with short-range interactions. We're thinking generally here, no longer about quantum Hall. Now let's suppose, in addition, that this Hamiltonian H has particle-like excitations. You may have multiple types of excitations, but let's focus on one sort of elementary excitation. By particle-like, I mean an excitation like these quasiholes, which have a typical size on the order of a magnetic length (ℓ): they are localized to some finite size.

Consider a set of all states with n excitations. Expect the states can be parameterized by position
of particles:

|{x1, x2, · · · , xN}⟩    (unordered set)
The order doesn't matter. Once you specify where the particles are located, there is some unique excited state corresponding to those locations (I'll give an example of this shortly). This is typically the way we represent the structure of our excited states: we just describe them by saying where the positions of the particles are. In general, this set of states |{x1, · · · , xN}⟩ is going to form a (possibly overcomplete) basis for the n-particle excited states. They may or may not be orthogonal to each other; I don't actually care about that. But they form a basis, and they span the entire space of n-particle excited states. That is maybe an assumption: that we can span the entire space of these n-particle excitations with states of this form. This set of states, and the space it spans, we'll call the low-energy Hilbert space with n particles.

Figure 72: Set of particles, specified by position

39.1.1 Example 1: ν = 1

So our first example is to think about the ν = 1 IQH state:


|{ξ1, ξ2}⟩ = ∏i (zi − ξ1) ∏i (zi − ξ2) ∏i<j (zi − zj) e^(−Σi |zi|²/4ℓ²)

So if we think about that state: it comes from a gapped Hamiltonian in two dimensions (we can assume short-range interactions in some model), and it has hole-like excitations. If we consider excitations with two holes, they can be parameterized by the positions of the two holes, ξ1 and ξ2. Once I say there is a hole at position ξ1 and one at ξ2, I've specified a unique excited state. It's a many-body state, written as the wavefunction above. This is what we just said previously: the set of all two-particle states is spanned by these states.

So this is a two-hole state. And the set of all two-hole states spans a certain space of two-hole
excitations.

39.1.2 Example 2: ν = 1/3

Similarly, again, if I stay restricted to quasi-hole states, these can be labeled by the positions of the
two quasiholes, and they’re given by two-quasihole states:
|{ξ1, ξ2}⟩ = ∏i (zi − ξ1) ∏i (zi − ξ2) ∏i<j (zi − zj)³ e^(−Σi |zi|²/4ℓ²)

This is a little abstract. I’m giving you an example. I’m giving you two quasiholes because it’s
quicker to write down. No reason why I did two.

So this is very general. The reason I'm giving you all this background is that I want to formulate the question of what it means to talk about statistics. The question is the following: suppose someone gives us some Hamiltonian, we solve it, and we restrict our attention to excited states that contain n particles, which we understand completely. We know exactly what the wavefunctions of these n-particle states are in terms of the underlying degrees of freedom. The next question we ask is: these particle excitations must have some well-defined statistics; are they fermions, bosons?

So each particle-like excitation has some well-defined notion of exchange statistics: is it a boson, a fermion, or something else? For example, in the integer quantum Hall case we can easily see what the statistics of these excitations are: these are holes of a non-interacting electron system, so it's sort of clear that the holes are fermions, because the electrons are fermions.

So the non-interacting case is easy. It is example 1 ⇒ Fermi statistics. But what about the second
case here? Again, we have these particle-like excitations. We can ask: what are these excitations?
Are they fermions? Are they bosons? Are they something else?

39.2 What is statistics and how does one compute it?

So that brings us to the basic question, what is statistics? What does it really mean to say something
is a fermion or a boson? And how does one compute it?

So how are we going to do that? The idea is to write down a path integral description of the dynamics of n-particle states, as follows. It's the usual story: how did we introduce path integrals originally? We wanted to compute some probability amplitude, or if you like, the matrix elements of the time evolution operator e^(−iHt): if I evolve for some time, what is the amplitude to get to some final state? So here let's imagine we start with some initial state which is an n-particle excited state, we evolve for some period of time, and we ask: what is the probability amplitude that we're in some other n-particle excited state?

⟨{x′1, · · · , x′N}| e^(−iHt) |{x1, · · · , xN}⟩ = ⟨{x′1, · · · , x′N}| e^(−iH∆t) · · · e^(−iH∆t) |{x1, · · · , xN}⟩

where between each pair of factors we insert ∫ |{x1, · · · }⟩⟨{x1, · · · }|.

So between each pair of factors we insert this completeness relation, an integral over x1 through xN. It was one of my assumptions that these states form an over-complete basis; what I meant by that was that this integral is equal to the identity within the subspace. So there is some kind of completeness relation satisfied by these states, at least in the low-energy space; I'm restricting attention to this N-particle space throughout this discussion. The basis could be over-complete or just complete, the states orthogonal or non-orthogonal. That I don't care about.

It is pretty easy to see that I'm going to get some kind of path integral, not over a single particle path, but over n-particle paths. Ultimately, I'm going to get a path integral of the form

∫ ∏i Dxi e^(iS[{x1(t), ··· , xN(t)}])

where S is a functional of n-particle paths γ.

Figure 73: n-particle paths γ. This is one path. Another path, maybe these guys don’t exchange.
Another path, they do something else.

So what this path integral says is that to compute the amplitude to get from this state to that state, I integrate over all n-particle paths. For example, here's one path. An n-particle path is any path starting with particles at positions xi and ending with particles at positions x′i. You sum over all possible paths connecting these points, and that gives you the probability amplitude of getting from one state to the other.

We'll talk later about what happens when particles get near each other. I should say that throughout this discussion, I'm more or less assuming the dimensionality d is 2 or larger; we'll talk about one dimension afterwards. You should have two or higher dimensions in mind (two or three, really). The claim is that S can't be an arbitrary functional of paths, because S came from a gapped Hamiltonian with local interactions. Gapped Hamiltonians have no massless particles to carry information from one region to another.

So all correlations, everything is exponentially localized. As a result of that, the claim is S must be
“local” in some sense. It can’t have very non-local terms in it because it came out of a Hamiltonian
with an energy gap with short-range interactions.

It can’t have some strange non-local interactions, where the amplitude for some path depends a
lot on particles that are far away. All I’m saying is that the dynamics have to be local. We’re
starting with a system that was gapped which gives a measure of locality – the dynamics have to
be local. Let me define what I mean by local. This is really the essential point. What exactly is
the restriction on this effective action that must hold for any gapped Hamiltonian?

So more precisely, imagine perturbing a path near (x0 , t0 )

Figure 74: An example of a local perturbation on a three-particle path, γ1 , γ2 .

Now, what I'm going to do is take this path and, in some region, make a small perturbation to it, leaving it the same everywhere else. Out here they'll be identical; in this region I make a small change (I'll draw the change as a dotted line): instead of going straight, the path has a little blip in it, while the other path I leave the same. So I've made a small change, just in this region, and I'll call the result γ2. I've taken the path and changed it only in some localized region of space and time. If you do that and compare the actions for the two paths, which differ only in this local way, the claim is that S(γ2) − S(γ1) only depends on the perturbation and the structure of the paths near (x0, t0).

So to make that more precise, let me just draw another set of paths just to clarify.

Figure 75: Another set of paths γ10 , γ20

So the point is that away from this region, γ′1 and γ′2 look different from γ1 and γ2. But if I zero in on this little local region, they look identical: the perturbation taking γ′1 to γ′2 is the same as the one taking γ1 to γ2. The claim I'm making is that the difference between these two actions is the same as the difference between S(γ2) and S(γ1). Let me write that here:

S(γ′2) − S(γ′1) = S(γ2) − S(γ1)

Why should it satisfy this? Let me explain. The property is equivalent to the following statement: if I do a measurement at some point in space-time, the outcome of that measurement is independent of what's going on far away. And the difference between two actions is a physically measurable quantity. Why? If I want to calculate some probability amplitude, the difference between two actions determines whether the two paths add constructively or destructively. So differences between actions correspond to something physically measurable. This is something we talked about with Berry phases: you can't talk about the Berry phase of an open path, but you can compare the Berry phases of two open paths with the same end points; that is a physical thing. The action of a single open path doesn't have any meaning by itself; it is not physically measurable. But the difference between two paths with the same end points, whether it is 2π or π, is physically measurable. Any physically measurable quantity should be independent of what's going on far away. This quantity affects things like the detailed dynamics of the two particles when they come near each other, and that dynamics has to be independent of things far away from (x0, t0).

Whether two paths interfere constructively or destructively only depends on the local structure. If
the two paths differ by some local change, it depends on that local structure and not things far
away. So I probably didn’t explain that very well. But it may be something that you have to think
about.

39.3 What kinds of actions satisfy this locality constraint?

One solution that works:

S = ∫ dt [ (1/2) Σi m ẋi² − Σi ẋi A(xi) − Σi V(xi) + Σi<j f(xi − xj) ]

where the last term, the interaction f, is short-ranged.

So this is an example of a particular solution, but you can see that more generally, any Lagrangian you write down will be fine as long as it involves only short-range interactions. More generally, any

Sshort-range = ∫ dt L(xi, ẋi)

where L has short-range interactions is ok. Is this the only possibility consistent with locality? No! And this form describes precisely the bosonic particles: only the actions of bosonic particles can be written in perfectly local form like this.

40 Topological Classes of n-particle paths

Two n-particle paths are “topologically equivalent” if you can continuously deform one path into the
other while keeping the particles far apart from one another. That means you can think of these
paths as strings – physical strings – you can’t let the strings pass through each other.

Let me give you examples of what topological equivalence means. So let me draw a few cases. I’m
going to assume I’m thinking about paths with some fixed n points. So these are all joining the
same n points here and here.

Figure 76: Example of 4 paths, (a), (b), (c), (d)

So the claim is that (a) ∼ (d): I can imagine pulling on this one and deforming it into that picture without having anything cross. But (a) ≁ (b) ≁ (c). The endpoints are all fixed, and I can't turn one into the other if I'm really thinking of these as strings in three dimensions, that is, as worldlines of particles.

So then this brings us to the basic punchline. Once we have this notion of topological equivalence, we can take the set of all paths with fixed endpoints, x1, · · · , xN down here and x′1 through x′N up here, and break them into classes: all paths that can be deformed into one another go in the same class, and paths that can't be deformed into each other go in different classes.

So the set of all n-particle paths breaks into different topological equivalence classes. For example,
(a) and (d) would be in the same class, but (a), (b), (c) are all in different classes. Most general S
that satisfies locality constraint:

S(γ) = Sshort-range (γ) + Stopo (γ)

Let me define what these two pieces are. Sshort-range(γ) only depends on the local properties of the paths,

Ss-r(γ) = ∫ dt (1/2) Σi m ẋi² + · · ·
and Stopo(γ) only depends on the topological class of γ. It is another term that can appear in our action, consistent with locality, on top of the conventional terms. It looks non-local, but it is a special type of term: it depends only on the topology of the paths. Any action consistent with locality can be written as a sum of something purely short-range and such a topological term.

And what I'm going to argue is that the short-range piece doesn't contain universal information; it is the topological term that encodes the statistics of our particles. Fermions have a particular topological term in their action, bosons have no topological term, and particles with fractional statistics have a third type of topological term in their action.

The language is confusing, so let me repeat. I defined the notion of topological classes of paths: all the n-particle paths get broken up into classes; for example, (a) and (d) would be in one class, while (b) and (c) are each in different classes. All the paths in the same class are topologically equivalent to one another. The most general action that is local has a piece which is short-range, involving the usual terms you've seen in actions, and a second term which is not a local term but has its own special structure: it only depends on the class of γ. In other words, Stopo evaluated on paths (a) and (d) would always give the same thing, since those two are equivalent, but it would in general give different values for (b) and (c). The claim is that you can have two kinds of terms: one which depends on local properties of your path, and one which depends on the global topology of the path. Both are allowed by locality, and the second term encodes the statistics of our particles.

So fermions will have a particular term here, and bosons another; what defines fermions or bosons is encoded in what this term looks like. The next thing is to classify the possible topological terms that are consistent. In three dimensions there are only two terms you can write down, one corresponding to fermions and one to bosons. In two dimensions, other terms correspond to the different types of fractional statistics you can have.

Day 20 - December 4th, 2013

41 Identical Particles

41.1 Last Time

Gapped d-dimensional Hamiltonian H with short-range interactions. Suppose H (d ≥ 2) has a particle-like excitation. How do we know if the particle is a boson, a fermion, or something else?

The first question: how do we compute statistics? Suppose we can find the excitations; how can we compute whether each one is a boson, a fermion, or something else? How would you measure it? The starting point: imagine we have a Hamiltonian we can solve, so we can compute everything about it exactly on a computer. How would we calculate the statistics of the excitations? We have to define statistics in order to know how to compute it.

So one way to find statistics (discussed last time) is to construct an effective action that describes
the dynamics of these particles, the low-energy dynamics of these particles. In particular, we
want to construct an effective action that describes n particles, the dynamics of a collection of n
particles.

Figure 77: Paths from xi → x′j

This action is sort of defined by the following equation:

⟨{x′1, · · · , x′n}| e^(−iHt) |{x1, · · · , xn}⟩ = ∫ ∏(i=1..n) Dxi e^(iS[x1(t), ··· , xn(t)])

where the path integral runs over all n-particle paths from {x1, · · · , xn} to {x′1, · · · , x′n}.

The idea is that this S here should, by definition, encode the information about the dynamics of
the particle. So it should encode the statistics of the particles. So S is a functional of n-particle
paths γ.

So now, we expect S encodes all our information. So the next question is, what does S look like?
Last time we said that S, because we’re starting with a gapped Hamiltonian, must obey some kind
of locality constraint (defined last time).

But we defined a very particular notion of locality that S must satisfy on sort of general grounds. S
has to be local in a certain sense. Once we know S has to obey this local constraint, the next question
is: “What kinds of actions could we find if we started with some generic gapped Hamiltonian and
we were able to compute this action, what types of things could we find?”

So I claimed, without proof, that the most general solution of the locality constraint has to be of the following form:

S(γ) = Ss-r(γ) + Stop(γ)

There is one piece which I called short-range and the other piece is topological. What were these
two pieces? The first one, the short-range, was any action
Z
Ss-r (γ) ≡ dt L(x, ẋ)

where L has short-range interaction. For example,


 

Ss-r(γ) = ∫ dt [ Σi (1/2) m ẋi² − Σi ẋi A(xi) + Σi<j f(xi − xj) ]

where the last term, the interaction f, is short-ranged.

So this is a kind of a pretty natural action. There is the usual kinetic energy term that you would
expect to find, you know, that describes particles with mass m. Maybe your particles are going to
be coupled to some kind of external vector potential. Finally, there is some kind of interaction, a
short-range interaction. That’s important.

The second piece is topological: any action that is topologically invariant, meaning the action depends only on the topological class of the n-particle path γ. Two n-particle paths are equivalent if I can deform one set of world lines into the other without having them cross each other. We can think of the world lines as strings: two sets of strings are equivalent if we can deform one into the other without having to pull one string through another.

42 Topological Action

Now, what about Stopological? Not all topologically invariant terms are possible; there are some constraints. What are the possible terms that can appear in our topological action? Let's focus on the 3D case.

42.1 3D Case

Focus on two-particle paths for simplicity.

Consider paths from {x1, x2} back to {x1, x2}. When we think about paths of this type, there are only two topological classes such paths can fall into:

Figure 78: γ1 on the left, γ2 on the right

The claim is that any other way of connecting up these endpoints can be continuously deformed into one of these two pictures without the strings ever passing through each other. The reason is that we're in three dimensions: the world lines live in 3 + 1 dimensions, and you can't tie a knot in four dimensions. So no matter how you connect up the endpoints, all that matters is which way they connect up; there is no such thing as one worldline winding around another in four dimensions.

So there are really only two topological classes. Our topological term, by definition, takes some value on one class of paths and some value on the other. So denote

e^(iStop(γ1)) = α,    e^(iStop(γ2)) = β

Let's try to see whether α and β are arbitrary: are there any constraints on them? Let's imagine what happens when we stack two of these paths together.

Figure 79: stacking γ2 paths together

Then

eiStop (γ3 ) = β 2

If you think about this path, you can convince yourself it is topologically equivalent to the parallel path γ1. If you exchange two particles twice in three dimensions, nothing changes: exchanging twice is the same as taking one particle around the other by 360 degrees, and in three dimensions you can continuously shrink that loop to nothing. So exchanging twice can be continuously deformed into doing nothing. By topological invariance,

eiStop (γ3 ) = eiStop (γ1 ) = α ⇒ α = β2

But there is another constraint, from stacking the other path on top of itself.

Figure 80: stacking γ1 paths together

From this, we see that

eiStop (γ4 ) = eiStop (γ1 ) = α ⇒ α = α2

Therefore, we only get two solutions:

1. α = 1, β = 1, which implies e^(iStop) = (+1)^(# exchanges): bosonic statistics! β is the phase under exchange, and β = 1 means there is no extra phase accumulated when particles exchange with one another; the action doesn't care whether they exchange or not. This is a boson.

2. α = 1, β = −1, which implies e^(iStop) = (−1)^(# exchanges): β is the phase accumulated per exchange, so for any path the phase we accumulate is −1 for every exchange. This is what we call fermionic statistics.
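As a quick sanity check, the two stacking constraints can also be solved mechanically (a small sketch of mine, assuming Python with sympy). Discarding the root α = β = 0, which is not a phase, leaves exactly the bosonic and fermionic solutions:

    import sympy as sp

    a, b = sp.symbols('alpha beta')
    sols = sp.solve([a - b**2, a - a**2], [a, b], dict=True)
    # alpha and beta are phases, so keep only unit-modulus solutions
    print([s for s in sols if abs(s[a]) == 1])
    # two solutions survive: {alpha: 1, beta: 1} and {alpha: 1, beta: -1}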

42.2 2D Case

Again, focus on two-particle paths from {x1 , x2 } to {x1 , x2 }. Infinite # of topological classes.

Figure 81: Infinite number of topological classes...

These paths correspond to braids of two strings. You have fixed endpoints, and you can braid the strings around each other in different ways; different classes correspond to different braids. With only two particles it is easy to label all the braids: label them by how many times the particles exchange in the clockwise direction. There is a different braid for every number of times they exchange.

So label the braids by the number n of times the particles exchange in the clockwise direction. Let

eiStop (n clockwise exchanges) = αn

Then we have αn = (α1 )n (by the stacking argument). Let α1 = eiθ . Then I have

eiStop (n clockwise exchanges) = einθ

Now we see that any phase θ gives consistent statistics. We call θ the statistical angle of the excitation. Then:

• θ = 0 is a boson.

• θ = π is a fermion.

• θ ≠ 0, π is "fractional statistics", or "abelian anyons".

So in two dimensions there is a whole family of topological terms, parameterized by the angle θ, which tells you the statistical angle of your excitations. So far, this has been a purely theoretical exercise: you have to calculate this effective action and analyze it in depth.
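Since the topological term only depends on the braid class, the class itself can be extracted numerically by counting exchanges. Here is a small sketch (my own, not from the lecture, assuming Python with numpy; the trajectories are made up) that classifies a two-particle braid in 2D by the winding of the relative coordinate and then evaluates e^(inθ). For θ I plug in π/3, the standard value for the ν = 1/3 Laughlin quasihole, which is where this discussion is headed.

    import numpy as np

    def exchange_number(x1, x2):
        # number of clockwise exchanges: angle swept by x1 - x2, in units of pi;
        # x1, x2 are complex arrays sampling the two 2D trajectories densely
        dtheta = np.diff(np.unwrap(np.angle(x1 - x2)))
        return -np.sum(dtheta) / np.pi      # clockwise winding counted as positive

    t = np.linspace(0, 1, 2001)
    x2 = np.zeros_like(t, dtype=complex)    # particle 2 sits still at the origin
    x1 = np.exp(-2j * np.pi * t)            # particle 1 circles it once, clockwise

    n = exchange_number(x1, x2)             # a full braid counts as 2 exchanges
    theta = np.pi / 3
    print(f"n = {n:.2f}, e^(i n theta) = {np.exp(1j * n * theta):.3f}")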

43 Measuring Stop with adiabatic evolution

So the next question is: how would you measure statistics, in terms of a thought experiment? The idea is that we can measure this topological term using adiabatic evolution. Stop doesn't depend on time: being topological, it depends only on the path, not on the time it takes to traverse the path. Thus Stop is a Berry phase term! As we've seen before in class, Berry phase terms can be "measured" by adiabatic evolution.

43.1 2D Case

Let's focus on 2D again, because it is easier to draw the pictures. Consider the following thought experiment: imagine trapping two of our particle excitations in two local potential wells, one at x1 and one at x2. Now imagine taking x1 and gradually moving it around x2. Let's call this path γ.

So now let’s think about the berry phase that we accumulate under this adiabatic evolution. So the
berry phase by definition, we want to think about the matrix element. We start in some {x1 , x2 }.
We evolve over some period of time and then we end up in {x1 , x2 }, we want to know what is
the extra phase that is accumulated. So the evolution can be described by some time-dependent
Hamiltonian. There is probably some time ordering I need to put here. Not a big deal.
⟨{x1, x2}| T e^(−i ∫_0^T H(t) dt) |{x1, x2}⟩ = e^(−iE0T) e^(iθB(γ))

So what is the phase I get under this process? We discussed this last time: there are two contributions in any adiabatic process. First there is the usual dynamical phase that is accumulated in any quantum system, and then there is the Berry phase term. The same is true for this particular adiabatic process.

So let's think about what this Berry phase looks like. In general, we have an expression for the Berry phase term; we know how to calculate it as a functional of the path. Here |{x1, x2}⟩ denotes the ground state of our system with the wells at positions x1 and x2. Remember, to calculate the Berry phase you compute the overlaps of neighboring states along the path, differentiate, and integrate over the whole path:

θB(γ) = ∫_0^T dt ⟨{x1, x2}| i∂t |{x1, x2}⟩ = Stop(γ) + θs-r(γ)

Now, in general, what do we expect to find in this Berry phase? There are two contributions. One is the topological Berry phase. But there were also Berry phase terms in the short-range part of our action, which I'll call θshort-range. For example, maybe the particles carry a charge, so when I take my particle around this loop, it picks up another phase. So the short-range piece could include terms like

θs-r ⊃ Σi ∫ A(xi) · ẋi dt

Maybe while we’re moving the particles around, one of the particles has some kind of internal spin
that rotates around.

In general our Berry phase contains both the piece we're interested in and other pieces we're not interested in; when we measure it, we'll get both. So we need to separate out the piece we're interested in, Stop, which tells us the statistics of the particles, from the non-universal piece θs-r, which may depend on details like how strong the magnetic field is and so on. How do we separate them? To separate out Stop(γ), compare with a second path γ′.

We take x1 around exactly the same circle as before; however, instead of putting x2 at the center, x2 will sit at some other position outside the loop. The idea is that when we do this process, the short-range parts of the Berry phase, things like the amount of magnetic flux enclosed by the path, are identical to what we had before. So when we compare the two Berry phases, we can extract the topological term.

Let's see that explicitly. Call the second path γ′ and the first path γ. For the short-range part of the action: since the paths of x1 are completely identical and the particles stay far apart from each other, the short-range Berry phase must be identical to the one accumulated on the other path; the same amount of magnetic flux, the same spin rotation, all those Berry phases are the same. On the other hand, if we look at the topological term for γ′, the moving particle doesn't enclose the other one, so it is exactly zero, whereas it was non-zero in the first case where the particle went all the way around:

θs-r(γ′) = θs-r(γ)

Stop(γ′) = 0

This implies that if we subtract the two and look at the difference, we get

θB(γ) − θB(γ′) = Stop(γ) = 2θ

since a full braid counts as n = 2 exchanges. In this way, we can measure the statistical angle θ.

So in principle, to calculate a Berry phase you need some kind of interference experiment, and to extract statistics it is not enough to do one interference experiment: you have to compare two, one set up with a particle at the center of the loop and one with the particle elsewhere. Each Berry phase would be encoded in some interference pattern, and by comparing the fringes in the two cases you can extract the difference. That is where the statistics is encoded.

43.2 Remarks

1. Can we measure eiθ rather than e2iθ ? Yes, using adiabatic exchange (you want to consider
an exchange of the two particles rather than a full braiding – but it is a little bit trickier to
set up two paths which will cancel all the local berry phase terms).

2. We discussed 2D and 3D. What about 1D? By a 1D system I mean something wire-like: it doesn't have to be perfectly one dimensional, but it has finite extent in all directions except one, along which it can get arbitrarily long.

3. Can I think about statistics of particles on this sort of wire-like quasi-1D system? The answer is: not really. The problem is that in one dimension, whenever we consider two particles exchanging with each other, no matter how we do it, the particles have to come close to one another; the maximum distance they can keep between them is the width of the wire. Since the particles must come close together whenever you exchange them or wind one around the other, the short-range part of the action can never be completely separated from the topological term: the short-range part of the Berry phase always contributes, and you can't separate it out. So there is no way to separate Stop from Ss-r, and thus the statistics are not well-defined (not universal) in 1D.

4. I gave you a definition of statistics in terms of a Berry phase term: a term in the effective action that describes the dynamics of your particles, or equivalently a term you can measure by adiabatic evolution, where you exchange two particles or wind one particle around the other. But you've also learned another definition of statistics, probably in your quantum mechanics class, which has to do with wavefunctions. You might say that one is not a definition but a formalism: bosons are described by symmetric wavefunctions and fermions by antisymmetric wavefunctions. What is the relationship between that wavefunction formalism and this physical definition? That is, what is the relation between the Berry phase definition of statistics and symmetric/anti-symmetric wavefunctions?

43.3 Relation between statistics and wavefunctions

What I've given you here is a physical definition of statistics: whether a particle is an elementary particle like an electron or some emergent excitation, you can always measure its statistics by the adiabatic evolution experiments I'm suggesting here. Once we understand what statistics is, how it is measured and how it is defined, what does it have to do with symmetric and anti-symmetric wavefunctions, which is how statistics is often presented in elementary quantum mechanics classes?

So what I want to claim is that symmetric and antisymmetric wavefunctions are just a formalism, a very clever formalism, for describing this topological Berry phase. Let me explain how that works. Suppose you don't know anything about Berry phases and you follow the prescription in the textbooks: "I have some particles, and I'm going to restrict their Hilbert space to symmetric or antisymmetric wavefunctions." My claim is that by following that prescription, you've automatically encoded a particular Berry phase term into your system.

Suppose we require wavefunctions to be symmetric or antisymmetric: we describe helium atoms with symmetric wavefunctions, electrons with antisymmetric ones. Let's imagine calculating within that formalism. I want to calculate the Berry phase, so what happens if I take two particles and exchange them? Imagine trapping two particles in two potential wells like we did before. To calculate the Berry phase for an exchange, we first have to find the ground state for each configuration of the wells. Assuming harmonic wells, the ground state wavefunction for electrons (antisymmetric) or bosons (symmetric) is

ψ{x1,x2}(y1, y2) = e^(iφ) [ e^(−(y1−x1(t))²/2d²) e^(−(y2−x2(t))²/2d²) ± e^(−(y1−x2(t))²/2d²) e^(−(y2−x1(t))²/2d²) ]

which I can symmetrize or antisymmetrize if I have bosons or electrons. Once I have my wavefunc-
tions, I can calculate my berry phase under an exchange.
θB = ∫_0^T dt ⟨{x1, x2}| i∂t |{x1, x2}⟩

By definition, it is given by this integral of the wavefunction we gave. For the symmetric case, we
find eiθB = 1. This encodes bosonic Berry phase.

For antisymmetric case, we find eiθB = −1. This encodes fermionic Berry phase.
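This claim is easy to verify numerically without doing any integrals by hand. The sketch below (my own, not from the lecture, assuming Python with numpy) uses the closed-form overlap of normalized Gaussian packets, ⟨g_x|g_x′⟩ = e^(−|x−x′|²/4d²), builds the symmetrized and antisymmetrized two-particle states, rotates the two wells by π about their midpoint (which exchanges them), and accumulates the discrete Berry phase −arg ∏k ⟨ψk|ψk+1⟩ around the loop, which closes in ray space because {x1, x2} is an unordered pair. The well separation and step count are arbitrary choices.

    import numpy as np

    d = 1.0                                 # wavepacket width

    def S(a, b):
        # overlap of normalized Gaussian packets at 2D points a, b (complex numbers)
        return np.exp(-abs(a - b) ** 2 / (4 * d ** 2))

    def overlap(c1, c2, sign):
        # normalized overlap of (anti)symmetrized two-particle states with well
        # configurations c1 = (a, b), c2 = (a', b'); sign = +1 bosons, -1 fermions
        (a, b), (ap, bp) = c1, c2
        num = S(a, ap) * S(b, bp) + sign * S(a, bp) * S(b, ap)
        norm = np.sqrt((1 + sign * S(a, b) ** 2) * (1 + sign * S(ap, bp) ** 2))
        return num / norm

    R, steps = 4.0, 200                     # wells far apart compared to d
    phis = np.linspace(0, np.pi, steps + 1) # rotating by pi exchanges the wells
    confs = [(R * np.exp(1j * p), -R * np.exp(1j * p)) for p in phis]
    confs.append(confs[0])                  # close the loop: same unordered pair

    for sign, name in [(+1, "bosons"), (-1, "fermions")]:
        prod = np.prod([overlap(confs[k], confs[k + 1], sign)
                        for k in range(len(confs) - 1)])
        print(name, "e^(i theta_B) =", int(np.sign(prod)))
    # bosons give +1, fermions give -1

The −1 in the fermionic case is exactly the topological Berry phase of a single exchange, produced automatically by the antisymmetrization.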

So the basic idea is that this prescription of using symmetric or anti-symmetric wavefunctions is a trick for encoding the Berry phase term, taking care of it from the very beginning so you never have to worry about it. In particular, in the fermionic case, what is really going on from first principles is that we're trying to describe particles that get a phase of −1 under each exchange. If you think in terms of path integrals, that seems like a complicated problem; how do you solve it? But this formalism of restricting the Hilbert space gives you a trick for encoding that topological term into the structure of the Hilbert space itself. It is there from the very beginning, so you can work with perfectly local Hamiltonians, without anything non-local, and you automatically capture this non-local physics, this Berry phase term, just from the way you've defined your Hilbert space. It is kind of a non-trivial result.

So the claim here is that symmetric and anti-symmetric wavefunctions are basically a trick for describing bosonic and fermionic Berry phases: a way of encoding these non-trivial topological terms, at least in the fermionic case, in a purely local formalism.

Anyons can be described using multi-valued wavefunctions.

Day 21 - December 6th, 2013

44 Last Time

Last time we talked about general principles: how are the statistics encoded in the system, and how do you actually calculate them? We talked about it in two different ways. One way is to look at a system of n excitations and construct the effective action that describes them. By analyzing the properties of that effective action, we argued that every effective action has local terms plus possibly a topological term, and we classified the different types of topological terms you can have. We discovered that in three dimensions there are two types, one of which corresponds to bosonic particles and one to fermionic particles, while in two dimensions there is a whole family. So one point of view is: if you can solve your system, you can find this effective action, find the topological term, and read off the statistics.

The other point of view, which is maybe more physical: how do you actually get at this effective action that describes these particles? The simplest way is to do an adiabatic transport experiment, where you trap your particles in some wells and move them in a controlled fashion. That way you can measure this topological term directly, as the Berry phase you accumulate in the process. So by doing this kind of adiabatic exchange, you can explicitly measure the topological term and then tell whether it is a boson or a fermion, depending on what phase you get when you exchange them.

So that point of view says the statistics is encoded in the Berry phase associated with adiabatically moving two particles around each other, and it gives a simple prescription for calculating statistics: calculate the Berry phase for adiabatic transport. It is really the same thing as the effective action that describes their dynamics; Berry phase terms appear both in adiabatic evolution and in the effective action. But the Berry phase language is the language we'll use today: we'll be asking what the Berry phase is when I transport one particle around another, or exchange two particles.

45 Statistics of Laughlin quasihole

So last time we talked about sort of how to think about identical particles, how to define the notion
of statistics and in principle how to compute statistics. And we discussed the relationship between
statistics and berry phase and so on. So what we’ll do today is we’ll kind of do an example. And
basically what we’re going to do is we’re going to compute the statistics of the excitations in the
Laughlin state. And for simplicity, we’ll work with the statistics of the quasihole excitation.

So what we want to do is use all that general formalism to calculate the statistics of the Laughlin quasihole excitation. As I said, to get the statistics we need to look at a Berry phase. In particular, we're going to look at the following process: take one quasihole all the way around another quasihole, and ask what Berry phase is accumulated. What is the Berry phase for this path? It is a path in Hamiltonian space or, equivalently, a path in Hilbert space: if I take the wavefunctions corresponding to these excitations, I can calculate the Berry phase along that path in many-body wavefunction space. Each point along the path is described by a many-body wavefunction.

Figure 82: Moving a quasihole around

So to get the statistics, we need to find the Berry phase associated with this process; the dots drawn in the figure are the quasiholes. Now, for simplicity, we're going to start with something even simpler: instead of calculating the Berry phase for this two-quasihole process, I'm going to consider a system with just a single quasihole, move it along some path, and ask what Berry phase is accumulated. So begin with the Berry phase for a single quasihole. We had a formula for it, but we need to know the wavefunction for the quasihole. What was it?
$$\psi(z_1, \cdots, z_N; \xi) = \prod_i (\xi - z_i) \prod_{i<j} (z_i - z_j)^m \, e^{-\sum_i |z_i|^2/4\ell^2}$$

where ξ is the coordinate of the quasihole. What we want to ask is: as we change this parameter ξ, the wavefunction changes – it traces out some path in Hilbert space – and we want to calculate the Berry phase corresponding to that path. Let ψh(z1, · · · , zN; ξ) be a normalized wavefunction:
$$\psi_h(z_1, \cdots, z_N; \xi) = \frac{1}{\sqrt{c(\xi, \xi^*)}}\,\psi(z_1, \cdots, z_N; \xi)$$
The normalization is a function of the position ξ. Now once we have these normalized states, the
Berry phase in principle is just given by the usual formula
$$\theta_B = \int dt\, \langle \psi_h | i\partial_t | \psi_h \rangle$$
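One quick observation, worth recording since we use it implicitly: because the states are normalized, ⟨ψh|∂t|ψh⟩ is purely imaginary, so θB is real:
$$0 = \partial_t \langle \psi_h | \psi_h \rangle = \langle \partial_t \psi_h | \psi_h \rangle + \langle \psi_h | \partial_t \psi_h \rangle = 2\,\mathrm{Re}\,\langle \psi_h | \partial_t \psi_h \rangle .$$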

You could just plug this in, do the integral, and be done. But of course it is not going to be so easy, because this is a complicated wavefunction: it is a function of N electrons, highly correlated, and calculating the overlaps is highly non-trivial. We're going to have to use some tricks. In fact, we can't do it exactly. What we're really interested in is the Berry phase in the thermodynamic limit N → ∞, and in that limit we can calculate it.

The first step is to rewrite our Berry phase. The Berry phase doesn't really depend on time, only on the path we traverse, so instead of an integral over time let's write it as a line integral over the path, using the chain rule:
$$\theta_B = \int dt \left( \frac{\partial \xi}{\partial t}\, \langle\psi_h| i\partial_\xi |\psi_h\rangle + \frac{\partial \xi^*}{\partial t}\, \langle\psi_h| i\partial_{\xi^*} |\psi_h\rangle \right) = \int \Big( d\xi\, \underbrace{\langle\psi_h| i\partial_\xi |\psi_h\rangle}_{a_\xi} + d\xi^*\, \underbrace{\langle\psi_h| i\partial_{\xi^*} |\psi_h\rangle}_{a_{\xi^*}} \Big)$$

If you find this confusing, you could redo all of this in terms of dx and dy, using the real and imaginary parts of ξ. Treating ξ and ξ* as independent variables is a trick people use all the time: it is a mathematical trick, but it gives you the same answers as long as you follow the math. You just treat them blindly like independent variables and it works. But you can always rewrite everything in terms of real and imaginary pieces if you want.
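For reference, the dictionary between the two descriptions is the standard Wirtinger calculus (with ξ = x + iy):
$$\partial_\xi = \tfrac{1}{2}(\partial_x - i\partial_y), \qquad \partial_{\xi^*} = \tfrac{1}{2}(\partial_x + i\partial_y), \qquad a_\xi\, d\xi + a_{\xi^*}\, d\xi^* = \underbrace{(a_\xi + a_{\xi^*})}_{a_x}\, dx + \underbrace{i(a_\xi - a_{\xi^*})}_{a_y}\, dy .$$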

So the trick is going to be to calculate these objects, the Berry connection. Let's look at the first one. Basically, we make the following observation, using the product rule:
$$\langle\psi_h|\partial_\xi|\psi_h\rangle = \langle\psi|\,\frac{1}{\sqrt c}\,\partial_\xi\Big(\frac{1}{\sqrt c}\,|\psi\rangle\Big) = \partial_\xi\Big(\langle\psi|\frac{1}{\sqrt c}\frac{1}{\sqrt c}|\psi\rangle\Big) - \Big(\partial_\xi\,\langle\psi|\frac{1}{\sqrt c}\Big)\frac{1}{\sqrt c}\,|\psi\rangle, \qquad \langle\psi|\psi\rangle \equiv c$$

The first term vanishes, since ⟨ψ|(1/√c)(1/√c)|ψ⟩ = c/c = 1, so we're left with the second term, which has a minus sign. For that we need ∂ξ acting on the bra over the normalization constant, and you can see that ∂ξ⟨ψ| = 0, since ψ* only depends on ξ*. This implies
$$\langle\psi_h|\partial_\xi|\psi_h\rangle = -\langle\psi|\Big(\partial_\xi\frac{1}{\sqrt c}\Big)\frac{1}{\sqrt c}|\psi\rangle = -\sqrt{c}\;\partial_\xi\Big(\frac{1}{\sqrt c}\Big) = \frac{1}{2}\,\partial_\xi \log c$$
This is an amazing fact if you think about it. The Berry phase is deeply connected to the phase of our wavefunction – understanding how the phase winds along the path is how we thought about the Berry phase in the first place – and yet we've just seen that we can calculate it knowing only the normalization of our wavefunction, which would seem to carry no information whatsoever about phase factors. That is a very special property, true only for this very special wavefunction: it comes from the fact that our wavefunction is a perfectly analytic function of ξ, and because it is analytic, you can extract phase information from amplitude information.

It’s a very special property, but it’s very helpful in this case because calculating the normalization
constant is much easier – well, it is still hard, but it is a little bit more tractable than really having to
think about all the phase information. Thus we only need to find the normalization factor c.
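As a small sanity check of this trick (my own illustration, not from the lecture), take a single electron, N = 1, where the norm is an elementary Gaussian integral:
$$c(\xi, \xi^*) = \int d^2 z \, |\xi - z|^2 \, e^{-|z|^2/2\ell^2} = 2\pi\ell^2\left(|\xi|^2 + 2\ell^2\right), \qquad a_\xi = \frac{i}{2}\,\partial_\xi \log c = \frac{i}{2}\,\frac{\xi^*}{|\xi|^2 + 2\ell^2},$$
so the Berry connection really does come out of the norm alone. (Of course, with one electron there is no plasma to screen the quasihole, so this aξ does not match the bulk answer derived below; the point is only the mechanics of the trick.)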

So now we use the plasma analogy. The idea is the following: we want to calculate c, so we have to integrate |ψ|² over all the z coordinates. It turns out to be convenient to consider, instead,
$$\psi(z_1, \cdots, z_N; \xi)\, e^{-\frac{1}{4m\ell^2}|\xi|^2}$$
and compute the norm of this modified wavefunction. Since the extra factor is a known function of ξ, c is recovered from it as $c = e^{|\xi|^2/2m\ell^2}\,\big\|\psi\, e^{-|\xi|^2/4m\ell^2}\big\|^2$.

$$\Big\|\psi\, e^{-\frac{1}{4m\ell^2}|\xi|^2}\Big\|^2 = \int \prod_{i=1}^{N} d^2 z_i\, \underbrace{\Big|\psi(z_1, \cdots, z_N; \xi)\, e^{-\frac{1}{4m\ell^2}|\xi|^2}\Big|^2}_{e^{-\beta V(\{z_i\}, \xi)}}$$

and choose β = 2/m, as this will make V look especially nice. So what does V look like if we rewrite it in that form? It has a few terms in it; I'll write them and then discuss what they mean. There's nothing that special about this: I've just taken the logarithm of my wavefunction squared, with the extra factor I put in for convenience.
$$V = \underbrace{-m^2 \sum_{i<j} \log|z_i - z_j|}_{m\text{-}m\text{ interaction}} \; \underbrace{-\, m \sum_i \log|\xi - z_i|}_{m\text{-}1\text{ interaction}} \; + \underbrace{\frac{m}{4\ell^2}\sum_i |z_i|^2}_{m\text{-background interaction}} + \underbrace{\frac{1}{4\ell^2}|\xi|^2}_{1\text{-background interaction}}$$

The 1-background interaction term is the only new term, and you can see it is just like the m-background interaction term, except with a 1 instead of an m. Looking at all four of these terms, this is the complete energy of a system that consists of +m charges, a single +1 test charge, and some background charge: the total Coulomb energy, in two dimensions, of this kind of picture. So we have some +m charges; this is where all of the electrons are.

And then we have a +1 charge located at position ξ, and background charge everywhere. So we can interpret V as the total energy of a plasma with one extra +1 charge – I'm going to call it a test charge – at location ξ. That is, V is the energy of the plasma with a unit "test" charge at ξ.

The reason I put in this extra factor in front of the wavefunction is that, by including it, we get something that really is the total energy: it includes even the interaction between the unit charge and the background. So it was convenient to put in that term so we could interpret V as the total energy.

So we have
$$\int \prod_i d^2 z_i\, e^{-\beta V} = Z(\xi) = e^{-\beta F(\xi)}$$

where F (ξ) is the free energy of the plasma with test charge at position ξ.

So we have this interpretation for our integral in terms of free energy. Now we can think physically: how do we expect the free energy of our system to change as we move our unit charge around? It depends on what phase the plasma is in. If it is in a plasma phase where the charges screen perfectly – that is the definition of the plasma phase – then wherever I put this unit test charge, it is going to be perfectly screened by the plasma, which will surround it with a −1 charge that exactly cancels the +1. When that happens, the test charge is like a neutral object: it won't feel the background charge at all.

So in the plasma phase, this thing gets perfectly screened – it is like a neutral object, so it feels no force from the background charge. And we know numerically that for these Laughlin states we're in the plasma phase for any reasonable value of m, so we know that we have complete screening. Complete screening of the test charge means that F is just constant: the test charge feels no force. If F is constant, that means the thing we started with, ψ × (extra factor), was correctly normalized up to a constant. We were clever: the extra factor perfectly normalizes it.

"Complete" screening ⇒ F = constant. This implies that $\psi\, e^{-|\xi|^2/4m\ell^2}$ is normalized (up to a constant independent of ξ), which means we can write $c \propto e^{|\xi|^2/2m\ell^2}$ (the 4 becomes a 2 because c normalizes the wavefunction squared).
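A minimal numerical sketch of this screening claim (my own illustration, not part of the lecture): sample electron configurations from |ψ|² with a simple Metropolis walk and estimate the ratio c(ξ1)/c(ξ2) using the identity c(ξ1)/c(ξ2) = ⟨∏i |ξ1 − zi|²/|ξ2 − zi|²⟩, with the average taken in the state with the quasihole at ξ2; then compare against e^{(|ξ1|²−|ξ2|²)/2mℓ²}. All parameter choices below (N, m, step sizes, sample counts) are illustrative, and with so few electrons the screening, and hence the agreement, is only approximate.

```python
import numpy as np

# Monte Carlo check of the screening prediction c(xi) ~ exp(|xi|^2 / 2 m l^2).
# Sample z ~ |psi(z; xi2)|^2 with Metropolis, then estimate
#   c(xi1)/c(xi2) = < prod_i |xi1 - z_i|^2 / |xi2 - z_i|^2 >.
rng = np.random.default_rng(0)
m, ell, N = 3, 1.0, 8
xi1, xi2 = 1.0 + 0.0j, 0.0 + 0.0j
pairs_i, pairs_j = np.triu_indices(N, k=1)

def log_psi_sq(z, xi):
    """log |psi(z_1..z_N; xi)|^2 for the Laughlin quasihole wavefunction."""
    return (2.0 * np.sum(np.log(np.abs(xi - z)))
            + 2.0 * m * np.sum(np.log(np.abs(z[pairs_i] - z[pairs_j])))
            - np.sum(np.abs(z) ** 2) / (2.0 * ell ** 2))

z = 2.0 * ell * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
logp = log_psi_sq(z, xi2)
samples = []
for step in range(200_000):
    znew = z + 0.7 * ell * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    logp_new = log_psi_sq(znew, xi2)
    if np.log(rng.random()) < logp_new - logp:    # Metropolis accept/reject
        z, logp = znew, logp_new
    if step > 20_000 and step % 20 == 0:          # discard burn-in, thin the chain
        samples.append(np.prod(np.abs(xi1 - z) ** 2 / np.abs(xi2 - z) ** 2))

print("Monte Carlo  c(xi1)/c(xi2) ~", np.mean(samples))
print("screening prediction       ~", np.exp((abs(xi1)**2 - abs(xi2)**2) / (2 * m * ell**2)))
```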

So now that we know the normalization factor, at least up to a proportionality constant, we're basically done, because we can calculate our Berry connection – we showed with the trick above that it is determined entirely by the normalization factor. So

$$a_\xi = i\,\langle\psi_h|\partial_\xi|\psi_h\rangle = \frac{i}{2}\,\partial_\xi \log c = \frac{i}{2}\,\partial_\xi\left(\frac{\xi\xi^*}{2m\ell^2}\right) = \frac{i}{4}\,\frac{\xi^*}{m\ell^2}$$
and similarly $a_{\xi^*} = -\frac{i}{4m\ell^2}\,\xi$.
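A quick consistency check (my own aside, anticipating the interpretation derived next): this connection has constant Berry curvature, as it should for motion in a uniform magnetic field. Writing the Berry phase as a surface integral via Stokes' theorem,
$$\partial_\xi a_{\xi^*} - \partial_{\xi^*} a_\xi = -\frac{i}{2m\ell^2}, \qquad d\xi \wedge d\xi^* = -2i\, dx \wedge dy \quad\Longrightarrow\quad \theta_B = -\frac{1}{m\ell^2}\times(\text{enclosed area}),$$
i.e., a uniform Berry curvature of magnitude 1/mℓ².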

So the total Berry phase is
$$\begin{aligned}
\theta_B &= \oint_\gamma (a_\xi\, d\xi + a_{\xi^*}\, d\xi^*) \\
&= \frac{i}{4m\ell^2}\oint_\gamma (\xi^*\, d\xi - \xi\, d\xi^*) \\
&= \frac{i}{4m\ell^2}\oint_\gamma \big[(x - iy)(dx + i\,dy) - (x + iy)(dx - i\,dy)\big] \\
&= \frac{i}{4m\ell^2}\oint_\gamma 2i\,(x\,dy - y\,dx) \\
&= \frac{eB}{2m}\oint_\gamma (y\,dx - x\,dy) \qquad (\ell^2 = 1/eB) \\
&= -\frac{e}{m}\oint_\gamma (A_x\,dx + A_y\,dy), \qquad \nabla\times A = B
\end{aligned}$$

and so A is just the vector potential for the magnetic field B. So we can interpret this as follows: when the quasihole is taken around the loop, it picks up a phase proportional to the total magnetic flux through the loop. This makes sense – any charged particle, whether it is a collective excitation or an elementary particle, is going to feel a Berry phase when it moves around a circle. This means θB is the Aharonov-Bohm phase for a charge −e/m particle in a magnetic field B.
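To make the magnitude concrete (my own rephrasing of the same result): for a counterclockwise loop enclosing area $\mathcal{A}$, and hence flux $\Phi = B\mathcal{A}$,
$$|\theta_B| = \frac{e}{m}\, B\mathcal{A} = \frac{2\pi}{m}\cdot\frac{\Phi}{\Phi_0}, \qquad \Phi_0 = \frac{2\pi}{e} \quad (\hbar = 1),$$
so the quasihole picks up 2π/m of phase per flux quantum enclosed, exactly as an object of charge magnitude e/m should.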

This result applies inside the bulk. If you get near the boundary – actually, we didn't discuss this, but there are gapless excitations near the boundary – then when you take your particle around, it is hard to identify an adiabatic process.

46 Berry phase for 2 quasiholes

What we're going to do is have one quasihole sit in place and not do anything, while we move the other one around it along some path, which again I'll call γ. We're going to calculate the Berry phase associated with this process.

So what’s the wavefunction?
$$\psi(\{z_i\}, \xi_1, \xi_2) = \prod_i (\xi_1 - z_i)\prod_i (\xi_2 - z_i)\prod_{i<j}(z_i - z_j)^m\, e^{-\sum_i |z_i|^2/4\ell^2}$$

Normalize the wavefunction


$$\psi_h(\{z_i\}, \xi_1, \xi_2) = \frac{1}{\sqrt c}\,\psi(\{z_i\}, \xi_1, \xi_2)$$
and c is going to depend on the positions of both quasiholes. We can use the same argument as before: if you look back, the argument relied on the fact that our wavefunction is an analytic function of these parameters, and that is still true in the two-quasihole case. So, as before, we can compute the Berry phase from c, using the same formula we had before. What is c? We will use the same plasma analogy.

Again, I find it convenient, to make the presentation cleaner, to consider not |ψ|² itself but
$$\Big\|\psi\, e^{-\frac{1}{4m\ell^2}(|\xi_1|^2 + |\xi_2|^2) + \frac{1}{m}\log|\xi_1 - \xi_2|}\Big\|^2$$

We can then deduce the norm of ψ, which is what we're actually interested in. Write it as
$$\int \prod_i d^2 z_i\, e^{-\beta V(\{z_i\}, \xi_1, \xi_2)}$$
and choose β = 2/m; then we get
$$\begin{aligned}
V = &-m^2 \sum_{i<j} \log|z_i - z_j| - m \sum_i \log|\xi_1 - z_i| - m \sum_i \log|\xi_2 - z_i| \\
&+ \frac{m}{4\ell^2}\sum_i |z_i|^2 + \frac{1}{4\ell^2}\left(|\xi_1|^2 + |\xi_2|^2\right) - \log|\xi_1 - \xi_2|
\end{aligned}$$

These are mostly familiar terms – I think we've seen all of these types of terms before. The first is the interaction of our +m charges with each other. The next two are the interactions of the +m charges with the +1 test charges at ξ1 and ξ2. Then comes the interaction of the +m charges with the background. The remaining terms are the ones we put in by hand: they look like the interaction between the test charges and the background, and finally the last term is the interaction between our two test charges.

So if you look at it, we've really included all possible interactions between the two test charges, our plasma, and the background. We're looking at the system where we have all our +m charges, a +1 test charge at ξ1 and another at ξ2, and a uniform background charge everywhere. We've included all interactions, so V here is the total energy of a plasma with two test charges.

Now since V is the total energy, the thing we're trying to calculate is again a partition function for our system:
$$\int \prod_i d^2 z_i\, e^{-\beta V} = e^{-\beta F(\xi_1, \xi_2)}$$
We can think of it as a partition function, or equivalently write it as e^{−βF}.
So F here is the free energy of our plasma with two test charges. Using exactly the same argument as before, we know that if we have screening, these charges will be perfectly screened: they become neutral objects when screened by the +m charges, so they feel no long-range force, either from the background charge or from each other. The free energy will therefore be independent of their positions. That holds as long as the charges are in the bulk of our plasma and not too close together – there is going to be some finite screening radius, and if they get too close, they'll interact.

There's really only one length scale in our problem, the magnetic length ℓ. So, as before, we have complete screening, and the free energy doesn't depend on the positions as long as the quasiholes are not close. Ultimately we want the Berry phase for a process where the particles stay very far apart – that is how we isolate the topological Berry phase – so that is fine: we keep our particles far apart, and F = constant. That means we've normalized our wavefunction correctly once we've multiplied by this factor, and we can deduce the normalization factor.
So
$$c \propto e^{\frac{1}{2m\ell^2}\left(|\xi_1|^2 + |\xi_2|^2\right) - \frac{2}{m}\log|\xi_1 - \xi_2|}$$

Now we have c, we can calculate our Berry phase:


$$\theta_B(\gamma) = \int dt\, \langle\psi_h|i\partial_t|\psi_h\rangle = \oint_\gamma \Big( d\xi_1\, \langle\psi_h|i\partial_{\xi_1}|\psi_h\rangle + d\xi_1^*\, \langle\psi_h|i\partial_{\xi_1^*}|\psi_h\rangle \Big)$$

I don't have a line integral over ξ2 because ξ2 isn't moving (see the picture of the path).
So we can compute, using the same trick as before (and writing $\log|\xi_1 - \xi_2| = \tfrac{1}{2}\log(\xi_1 - \xi_2) + \tfrac{1}{2}\log(\xi_1^* - \xi_2^*)$),
$$\langle\psi_h|i\partial_{\xi_1}|\psi_h\rangle = \frac{i}{2}\,\partial_{\xi_1}\log c = \frac{i}{4m\ell^2}\,\xi_1^* - \frac{i}{2m(\xi_1 - \xi_2)}$$
And this gets us
$$\begin{aligned}
\theta_B &= \oint_\gamma \left(\frac{i}{4m\ell^2}\,\xi_1^* - \frac{i}{2m(\xi_1 - \xi_2)}\right) d\xi_1 + \text{c.c.} \\
&= \frac{i}{4m\ell^2}\oint_\gamma (\xi_1^*\, d\xi_1 - \xi_1\, d\xi_1^*) + \oint_\gamma \frac{-i}{2m(\xi_1 - \xi_2)}\, d\xi_1 + \oint_\gamma \frac{i}{2m(\xi_1^* - \xi_2^*)}\, d\xi_1^* \\
&= (\text{A--B phase}) + \left(-\frac{i}{2m}\right)(2\pi i) + \frac{i}{2m}(-2\pi i) \\
&= \underbrace{(\text{A--B phase})}_{\theta_{\text{s-r}}} + \underbrace{\frac{2\pi}{m}}_{\theta_{\text{topo}}}
\end{aligned}$$

What do we have here? The first term is the old term – the Aharonov-Bohm phase, the same thing we found before. The second term is the new term. That integral you can do easily: it is a complex contour integral, and we know how to integrate 1/ξ. If ξ1 wraps around ξ2, it is going around a pole, so this ∮ dξ/ξ type of integral gives a factor of 2πi. So all together we have computed our Berry phase: the Berry phase for our path is the Aharonov-Bohm phase plus 2π/m.

To get the statistics, it is not enough to just compute the Berry phase of your path: the Berry phase has two contributions, the statistical or topological Berry phase, and also geometric or short-range contributions. What you need to do is extract the topological term. In principle, the way you would do that is to subtract the Berry phases for two different paths: one is the path γ where one particle goes around the other, and the other is a path γ′ where the particle doesn't go around the other. You can see what's going to happen – I won't do it here. When we subtract the two, the Aharonov-Bohm phases will be the same, while the pole integral only contributes when ξ1 actually winds around ξ2. So after the subtraction we are left with the topological term – not surprisingly, the same 2π/m term. The main point is that one piece is the θs-r term and the other is the θtopo term.
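Since the pole integral is doing all the topological work here, a quick numerical check is reassuring (my own illustration; the positions and radii are arbitrary):

```python
import numpy as np

# Check the residue integral behind the statistical phase: integrating
# d(xi1) / (xi1 - xi2) over one counterclockwise loop gives 2*pi*i when
# the loop encloses xi2, and ~0 when it does not.
xi2 = 0.3 + 0.1j
t = np.linspace(0.0, 2.0 * np.pi, 20001)

def loop_integral(center, radius):
    xi1 = center + radius * np.exp(1j * t)        # the path gamma
    mid = 0.5 * (xi1[:-1] + xi1[1:]) - xi2        # segment midpoints, relative to pole
    return np.sum(np.diff(xi1) / mid)             # midpoint-rule contour integral

print(loop_integral(xi2, 1.0))        # encloses the pole: ~ 6.2832j = 2*pi*i
print(loop_integral(xi2 + 5.0, 1.0))  # misses the pole:   ~ 0
```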

The topological Berry phase for this path is e^{2iθ}, where θ is the statistical angle: if you remember, taking one particle all the way around the other is a full 360-degree winding, i.e., two exchanges. So we know
$$e^{iS_{\text{topo}}(\gamma)} = e^{\frac{2\pi i}{m}}$$
and we conclude that, in principle, there are two possibilities for θ: the statistical angle satisfies $e^{2i\theta} = e^{2\pi i/m}$, so θ = π/m or θ = π/m + π. We can rule out the second possibility with more work (adiabatic exchange).

You can rule it out if you consider an adiabatic exchange rather than an adiabatic winding – it is just slightly trickier in terms of doing the subtraction. In any case, what have we found? The Laughlin quasihole has statistical angle θ = π/m. This is a sensible answer because, remember, in the case m = 1 it is an integer quantum Hall state and we know the quasihole is a fermion. But for m > 1, we get fractional statistics.
