1 Lev Landau
1.1 Life
1.1.1 Early years
1.1.2 Leningrad and Europe
1.1.3 National Scientific Center Kharkiv Institute of Physics and Technology, Kharkiv
1.1.4 Institute for Physical Problems, Moscow
1.1.5 Scientific achievements
1.1.6 Personal life and views
1.1.7 Last years
1.1.8 Death
1.2 Legacy
1.3 Landau's List
1.4 In popular culture
1.5 Works
1.5.1 Landau and Lifshitz Course of Theoretical Physics
1.5.2 Other
1.6 See also
1.7 References
1.8 Further reading
1.9 External links
2 Felix Bloch
2.1 Life and work
2.2 See also
2.3 Footnotes
2.4 References
2.5 Further reading
2.6 External links
3 Quantum thermodynamics
3.1 A dynamical view of quantum thermodynamics
3.1.1 The emergence of time derivative of first law of thermodynamics
3.1.2 The emergence of the second law of thermodynamics
3.1.3 The Quantum and Thermodynamic Adiabatic Conditions and Quantum Friction
3.1.4 The emergence of the dynamical version of the third law of thermodynamics
3.2 Typicality as a source of emergence of thermodynamical phenomena
3.3 Quantum thermodynamics resource theory
3.4 References
3.5 Further reading
3.6 External links
4 Master equation
4.1 Introduction
4.1.1 Detailed description of the matrix A, and properties of the system
4.1.2 Examples of master equations
4.2 Quantum master equations
4.3 See also
4.4 References
4.5 External links
5 Markov property
5.1 Introduction
5.2 History
5.3 Definition
5.4 Alternative formulations
5.5 Strong Markov property
5.6 In forecasting
5.7 Examples
5.8 See also
5.9 References
6 Lindblad equation
6.1 Diagonalization
6.2 Harmonic oscillator example
6.3 See also
6.4 References
6.5 External links
7 Amir Caldeira
7.1 Selected Scientific Articles
7.2 References
9 Nitrogen-vacancy center
9.1 Structure
9.2 Production
9.3 Basic optical properties
9.4 Energy level structure and its manipulation by external fields
9.5 Spin dynamics
9.6 Potential applications
9.7 Historical remarks
9.8 See also
9.9 References
10 Quantum mechanics
10.1 History
10.2 Mathematical formulations
10.3 Mathematically equivalent formulations of quantum mechanics
10.4 Interactions with other scientific theories
10.4.1 Quantum mechanics and classical physics
10.4.2 Copenhagen interpretation of quantum versus classical kinematics
10.4.3 General relativity and quantum mechanics
10.4.4 Attempts at a unified field theory
10.5 Philosophical implications
10.6 Applications
10.6.1 Electronics
10.6.2 Cryptography
10.6.3 Quantum computing
10.6.4 Macroscale quantum effects
10.6.5 Quantum theory
10.7 Examples
10.7.1 Free particle
10.7.2 Step potential
10.7.3 Rectangular potential barrier
10.7.4 Particle in a box
10.7.5 Finite potential well
10.7.6 Harmonic oscillator
10.8 See also
10.9 Notes
10.10 References
10.11 Further reading
10.12 External links
11 Markov chain
11.1 Introduction
11.2 History
11.3 Examples
11.3.1 Gambling
11.3.2 A birth-death process
11.3.3 A non-Markov example
11.4 Markov property
11.4.1 The general case
11.4.2 For discrete-time Markov chains
11.5 Formal definition
11.5.1 Discrete-time Markov chain
11.5.2 Continuous-time Markov chain
11.6 Transient evolution
11.7 Properties
11.7.1 Reducibility
11.7.2 Periodicity
11.7.3 Transience and recurrence
11.7.4 Ergodicity
11.7.5 Steady-state analysis and limiting distributions
11.8 Finite state space
11.8.1 Stationary distribution relation to eigenvectors and simplices
11.8.2 Time-homogeneous Markov chain with a finite state space
11.8.3 Convergence speed to the stationary distribution
11.9 Reversible Markov chain
11.9.1 Closest reversible Markov chain
11.10 Bernoulli scheme
11.11 General state space
11.11.1 Harris chains
11.11.2 Locally interacting Markov chains
11.12 Markovian representations
11.13 Transient behaviour
11.14 Stationary distribution
11.14.1 Example 1
11.14.2 Example 2
11.15 Hitting times
11.15.1 Expected hitting times
11.16 Time reversal
11.17 Embedded Markov chain
11.18 Applications
11.18.1 Physics
11.18.2 Chemistry
11.18.3 Testing
11.18.4 Speech recognition
11.18.5 Information and computer science
11.18.6 Queueing theory
11.18.7 Internet applications
11.18.8 Statistics
11.18.9 Economics and finance
11.18.10 Social sciences
11.18.11 Mathematical biology
11.18.12 Genetics
11.18.13 Games
11.18.14 Music
11.18.15 Baseball
11.18.16 Markov text generators
11.18.17 Bioinformatics
11.19 See also
11.20 Notes
11.21 History
11.22 References
11.23 External links
12 Density matrix
12.1 History
12.2 Pure and mixed states
12.2.1 Example: Light polarization
12.2.2 Mathematical description
12.3 Definition
12.4 Measurement
12.5 Entropy
12.6 The von Neumann equation for time evolution
12.7 Quantum Liouville, Moyal's equation
12.8 Composite systems
12.9 C*-algebraic formulation of states
12.10 See also
12.11 Notes and references
13 Matrix (mathematics)
13.1 Definition
13.1.1 Size
13.2 Notation
Lev Landau
Lev Davidovich Landau (Russian: Лев Давидович Ландау; 22 January [O.S. 9 January] 1908 – 1 April 1968) was a Soviet physicist who made fundamental contributions to many areas of theoretical physics. His accomplishments include the independent co-discovery of the density matrix method[1] in quantum mechanics (alongside John von Neumann), the quantum mechanical theory of diamagnetism, the theory of superfluidity, the theory of second-order phase transitions, the Ginzburg–Landau theory of superconductivity, the theory of Fermi liquid, the explanation of Landau damping in plasma physics, the Landau pole in quantum electrodynamics, the two-component theory of neutrinos, and Landau's equations for S matrix singularities.[2] He received the 1962 Nobel Prize in Physics for his development of a mathematical theory of superfluidity that accounts for the properties of liquid helium II at a temperature below 2.17 K (−270.98 °C).[3]

1.1 Life

1.1.1 Early years

Landau was born on 22 January 1908 to Jewish parents[3][4][5][6] in Baku, Azerbaijan, in what was then the Russian Empire. Landau's father was an engineer with the local oil industry and his mother was a doctor. He learned to differentiate at age 12 and to integrate at age 13. Landau graduated in 1920 at age 13 from gymnasium. His parents considered him too young to attend university, so for a year he attended the Baku Economical Technical School. In 1922, at age 14, he matriculated at the Baku State University, studying in two departments simultaneously: the Departments of Physics and Mathematics, and the Department of Chemistry. Subsequently, he ceased studying chemistry, but remained interested in the field throughout his life.

1.1.2 Leningrad and Europe

In 1924, he moved to the main centre of Soviet physics at the time: the Physics Department of Leningrad State University. In Leningrad, he first made the acquaintance of theoretical physics and dedicated himself fully to its study, graduating in 1927. Landau subsequently enrolled for post-graduate studies at the Leningrad Physico-Technical Institute where he eventually received a doctorate in Physical and Mathematical Sciences in 1934.[7] Landau got his first chance to travel abroad during the period 1929–1931, on a Soviet government (People's Commissariat for Education) travelling fellowship supplemented by a Rockefeller Foundation fellowship. By that time he was fluent in German and French and could communicate in English.[8] He later improved his English and learned Danish.[9]

After brief stays in Göttingen and Leipzig, he went to Copenhagen on 8 April 1930 to work at Niels Bohr's Institute for Theoretical Physics. He stayed there till 3 May of the same year. After the visit, Landau always considered himself a pupil of Niels Bohr, and Landau's approach to physics was greatly influenced by Bohr. After his stay in Copenhagen, he visited Cambridge (mid-1930), where he worked with P. A. M. Dirac,[10] Copenhagen again (20 September to 22 November 1930),[11] and Zurich (December 1930 to January 1931), where he worked with Wolfgang Pauli.[10] From Zurich Landau went back to Copenhagen for the third time[12] and stayed there from 25 February till 19 March 1931 before returning to Leningrad the same year.[13]

1.1.3 National Scientific Center Kharkiv Institute of Physics and Technology, Kharkiv

Between 1932 and 1937 he headed the Department of Theoretical Physics at the National Scientific Center Kharkiv Institute of Physics and Technology and lectured at the University of Kharkiv and the Kharkiv Polytechnical Institute. Apart from his theoretical accomplishments, Landau was the principal founder of a great tradition of theoretical physics in Kharkiv, Soviet Union, sometimes referred to as the Landau school. In Kharkiv, he and his friend and former student, Evgeny Lifshitz, began writing the Course of Theoretical Physics, ten volumes that together span the whole of the subject and are still widely used as graduate-level physics texts.
During the Great Purge, Landau was investigated within the UPTI Affair in Kharkiv, but he managed to leave for Moscow to take up a new post.[14]

Landau developed a famous comprehensive exam called the "Theoretical Minimum" which students were expected to pass before admission to the school. The exam covered all aspects of theoretical physics, and between 1934 and 1961 only 43 candidates passed, but those who did later became quite notable theoretical physicists.[15][16]

In 1932, he computed the Chandrasekhar limit;[17] however, he did not apply it to white dwarf stars.

1.1.6 Personal life and views

1.1.8 Death

Landau died on 1 April 1968, aged 60, from complications of the injuries sustained in the car accident he was involved in six years earlier. He was buried at the Novodevichy Cemetery.[29][30]
1.2 Legacy
Two celestial objects are named in his honour:
1.5.1 Landau and Lifshitz Course of Theoretical Physics

A complete list of Landau's works appeared in 1998 in the Russian journal Physics-Uspekhi.[34]
[5] Frontiers of Physics: Proceedings of the Landau Memorial Conference, Tel Aviv, Israel, 6–10 June 1988 (Pergamon Press, 1990). ISBN 0080369391, pp. 13–14.

[6] Edward Teller, Memoirs: A Twentieth Century Journey in Science and Politics, Basic Books, 2002. ISBN 0738207780, p. 124.

[7] František Janouch, Lev Landau: A Portrait of a Theoretical Physicist, 1908–1988, Research Institute for Physics, 1988, p. 17.

[8] Rumer, Yuriy. berkovich-zametki.com.

[9] Bessarab, Maya (1971). Moscow.

[12] Sykes, J. B. (2013). Landau: The Physicist and the Man: Recollections of L. D. Landau. Elsevier, p. 81. ISBN 9781483286884.

[13] Haensel, P.; Potekhin, A. Y.; Yakovlev, D. G. (2007). Neutron Stars 1: Equation of State and Structure. Springer Science & Business Media, p. 2. ISBN 0387335439.

[14] Gennady Gorelik, "The Top Secret Life of Lev Landau", Scientific American, 1997.

[15] Blundell, Stephen J. (2009). Superconductivity: A Very Short Introduction. Oxford U. Press, p. 67. ISBN 9780191579097.

[16] Ioffe, Boris L. (25 April 2002). "Landau's Theoretical Minimum, Landau's Seminar, ITEP in the beginning of the 1950s". arXiv:hep-ph/0204295.

[17] "On the Theory of Stars", in Collected Papers of L. D. Landau, ed. and with an introduction by D. ter Haar, New York: Gordon and Breach, 1965; originally published in Phys. Z. Sowjet. 1 (1932), 285.

[22] Petr Leonidovich Kapitsa, Experiment, Theory, Practice: Articles and Addresses, Springer, 1980. ISBN 9027710619, p. 329.

[23] Mishina, Irina (17 December 2012). [Dual personalities]. Versiya (in Russian). Retrieved 3 March 2014.

[24] Schaefer, Henry F. (2003). Science and Christianity: Conflict or Coherence?. The Apollos Trust, p. 9. ISBN 9780974297507. "I present here two examples of notable atheists. The first is Lev Landau, the most brilliant Soviet physicist of the twentieth century."

[25] "Lev Landau". Soylent Communications. 2012. Retrieved 7 May 2013.

[29] "Lev Davidovich Landau". Find a Grave. Retrieved 28 January 2012.

[30] "Obelisk at the Novodevichye Cemetery". novodevichye.com (26 October 2008). Retrieved 28 January 2012.

[31] Schmadel, Lutz D. (2003). Dictionary of Minor Planet Names (5th ed.). Springer Verlag, p. 174. ISBN 3-540-00238-3.

[32] Hey, Tony (1997). Einstein's Mirror. Cambridge University Press, p. 1. ISBN 0-521-43532-3.

[33] Mitra, Asoke; Ramlo, Susan; Dharamsi, Amin; Dolan, Richard; Smolin, Lee (2006). "New Einsteins Need Positive Environment, Independent Spirit". Physics Today 59 (11): 10. Bibcode:2006PhT....59k..10H. doi:10.1063/1.2435630.

[34] "Complete list of L D Landau's works". Phys. Usp. 41 (6): 621–623. June 1998. Bibcode:1998PhyU...41..621. doi:10.1070/PU1998v041n06ABEH000413.
Articles
Felix Bloch
This article is about the Swiss physicist. For the man accused of espionage, see Felix Bloch (diplomatic officer).

Felix Bloch (23 October 1905 – 10 September 1983) was a Swiss physicist, working mainly in the U.S.[1] He and Edward Mills Purcell were awarded the 1952 Nobel Prize for their development of new ways and methods for nuclear magnetic precision measurements.[2] In 1954–1955, he served for one year as the first Director-General of CERN.

2.1 Life and work

[Figure: Felix Bloch in the lab, 1950s]

Bloch was born in Zürich, Switzerland to Jewish[3] parents Gustav and Agnes Bloch.

He was educated at the Cantonal Gymnasium in Zurich and at the Eidgenössische Technische Hochschule (ETHZ), also in Zürich. Initially studying engineering, he soon changed to physics. During this time he attended lectures and seminars given by Peter Debye and Hermann Weyl at ETH Zürich and Erwin Schrödinger at the neighboring University of Zürich. A fellow student in these seminars was John von Neumann. Graduating in 1927, he continued his physics studies at the University of Leipzig with Werner Heisenberg, gaining his doctorate in 1928. His doctoral thesis established the quantum theory of solids, using Bloch waves to describe the electrons.

In 1940 he married Lore Misch.[4]

He remained in European academia, studying with Wolfgang Pauli in Zürich, Niels Bohr in Copenhagen and Enrico Fermi in Rome before he went back to Leipzig assuming a position as privatdozent (lecturer). In 1933, immediately after Hitler came to power, he left Germany because he was Jewish.[5] He emigrated to work at Stanford University in 1934. In the fall of 1938, Bloch began working with the University of California at Berkeley 37-inch cyclotron to determine the magnetic moment of the neutron.[6] Bloch went on to become the first professor for theoretical physics at Stanford. In 1939, he became a naturalized citizen of the United States. During WW II he worked on nuclear power at Los Alamos National Laboratory, before resigning to join the radar project at Harvard University.

After the war he concentrated on investigations into nuclear induction and nuclear magnetic resonance, which are the underlying principles of MRI.[7][8][9] In 1946 he proposed the Bloch equations which determine the time evolution of nuclear magnetization. When CERN was being set up in the early 1950s, its founders were searching for someone of the stature and international prestige to head the fledgling international laboratory, and in 1954 Professor Bloch became CERN's first Director-General,[10] at the time when construction was getting under way on the present Meyrin site and plans for the first machines were being drawn up. After leaving CERN, he returned to Stanford University, where he in 1961 was made Max Stein Professor of Physics.

At Stanford, he was the advisor of Carson D. Jeffries, who became a professor of Physics at the University of California, Berkeley.

In 1964, he was elected a foreign member of the Royal Netherlands Academy of Arts and Sciences.[11]

He died in Zurich.[4]
[9] Shampo, M. A.; Kyle, R. A. (September 1995). "Felix Bloch – developer of magnetic resonance imaging". Mayo Clin. Proc. 70 (9): 889. PMID 7643644. doi:10.4065/70.9.889.

[10] "People and things: Felix Bloch". CERN Courier. CERN. Retrieved 1 September 2015.

[11] "F. Bloch (1905–1983)". Royal Netherlands Academy of Arts and Sciences. Retrieved 22 May 2016.

2.4 References

Oral History interview transcript with Felix Bloch, 15 August 1968, American Institute of Physics, Niels Bohr Library and Archives

Oral History interview transcript with Felix Bloch, 15 December 1981, American Institute of Physics, Niels Bohr Library and Archives

Felix Bloch Papers, 1931–1987 (33 linear ft.) are housed in the Department of Special Collections and University Archives at Stanford University Libraries

National Academy of Sciences Biographical Memoir
Quantum thermodynamics
3.1 A dynamical view of quantum thermodynamics

$$H = H_S + H_B + H_{SB}$$

where $H_S$ is the system's Hamiltonian, $H_B$ is the bath Hamiltonian and $H_{SB}$ is the system–bath interaction. The state of the system is obtained from a partial trace over the combined system and bath: $\rho_S(t) = \mathrm{Tr}_B(\rho_{SB}(t))$. Reduced dynamics is an equivalent description of the system's dynamics utilizing only system operators. Assuming the Markov property for the dynamics, the basic equation of motion for an open quantum system is the Lindblad equation (L-GKS):[4][5]

$$\dot\rho_S = -\frac{i}{\hbar}[H_S, \rho_S] + L_D(\rho_S)$$

3.1.1 The emergence of time derivative of first law of thermodynamics

$$\frac{dE}{dt} = \left\langle \frac{\partial H_S}{\partial t} \right\rangle + \left\langle L_D^{\dagger}(H_S) \right\rangle$$

where power is interpreted as $P = \langle \partial H_S / \partial t \rangle$ and the heat current $J = \langle L_D^{\dagger}(H_S) \rangle$.[6][7][8]

Additional conditions have to be imposed on the dissipator $L_D$ to be consistent with thermodynamics. First, the invariant $\rho_S(\infty)$ should become an equilibrium Gibbs state. This implies that the dissipator $L_D$ should commute with the unitary part generated by $H_S$.[3] In addition, an equilibrium state is stationary and stable. This
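The open-system bookkeeping described above (an L-GKS generator, with power $P$ and heat current $J$) can be illustrated numerically. The following is a minimal sketch, not code from any referenced work: it assumes $\hbar = 1$, a hypothetical two-level system with splitting $\omega$, and illustrative emission/absorption rates chosen to satisfy detailed balance. It propagates the Lindblad equation, checks that the invariant state is the Gibbs state, and verifies that with a constant $H_S$ (so $P = 0$) the rate of energy change equals the heat current, since the unitary term drops out of the trace.

```python
import numpy as np

# Sketch of an L-GKS (Lindblad) evolution for a qubit in a thermal bath.
# hbar = 1; omega, beta and the rates below are illustrative assumptions.
omega, beta = 1.0, 1.0                  # level splitting, inverse bath temperature
g_down = 1.0                            # emission rate (assumed)
g_up = g_down * np.exp(-beta * omega)   # absorption rate fixed by detailed balance

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)  # sigma^-: |e> -> |g>
sp = sm.conj().T                                # sigma^+
H = 0.5 * omega * sz                            # system Hamiltonian H_S

def dissipator(rho):
    """L_D(rho): thermal GKS dissipator in the Schroedinger picture."""
    out = np.zeros_like(rho)
    for g, L in ((g_down, sm), (g_up, sp)):
        LdL = L.conj().T @ L
        out += g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out

def rhs(rho):
    """Lindblad equation: drho/dt = -i[H, rho] + L_D(rho)."""
    return -1j * (H @ rho - rho @ H) + dissipator(rho)

# Integrate from the excited state with 4th-order Runge-Kutta.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
dt = 0.002
for _ in range(10000):                  # evolve to t = 20, long past relaxation
    k1 = rhs(rho); k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2); k4 = rhs(rho + dt * k3)
    rho += (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# The invariant state should be the Gibbs state exp(-beta*H)/Z.
p = np.exp(-beta * np.array([0.5 * omega, -0.5 * omega]))
gibbs = np.diag(p / p.sum()).astype(complex)

# First law with constant H_S: P = 0, so dE/dt = Tr(rhs(rho) H) must equal
# the heat current J = Tr(L_D(rho) H); Tr([H, rho] H) vanishes identically.
rho_neq = np.array([[0.7, 0.1], [0.1, 0.3]], dtype=complex)  # any valid state
dEdt = np.trace(rhs(rho_neq) @ H).real
J = np.trace(dissipator(rho_neq) @ H).real
```

Because the rates obey detailed balance, the relaxation drives the qubit populations to the ratio $e^{-\beta\omega}$, which is exactly the Gibbs ratio; with a driven (time-dependent) $H_S$, the same split of $dE/dt$ would acquire a nonzero power term.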
assumption is used to derive the Kubo–Martin–Schwinger stability criterion for thermal equilibrium, i.e., the KMS state.

A unique and consistent approach is obtained by deriving the generator, $L_D$, in the weak system–bath coupling limit.[9] In this limit, the interaction energy can be neglected. This approach represents a thermodynamic idealization: it allows energy transfer, while keeping a tensor product separation between the system and bath, i.e., a quantum version of an isothermal partition.

Markovian behavior involves a rather complicated cooperation between system and bath dynamics. This means that in phenomenological treatments, one cannot combine arbitrary system Hamiltonians, $H_S$, with a given L-GKS generator. This observation is particularly important in the context of quantum thermodynamics, where it is tempting to study Markovian dynamics with an arbitrary control Hamiltonian. Erroneous derivations of the quantum master equation can easily lead to a violation of the laws of thermodynamics.

An external perturbation modifying the Hamiltonian of the system will also modify the heat flow. As a result, the L-GKS generator has to be renormalized. For a slow change, one can adopt the adiabatic approach and use the instantaneous system's Hamiltonian to derive $L_D$. An important class of problems in quantum thermodynamics is periodically driven systems. Periodic quantum heat

…information gathered by measurement. An example is the case of Maxwell's demon, which has been resolved by Leó Szilárd.[13][14][15]

The entropy of an observable is associated with the complete projective measurement of an observable, $\langle A \rangle$, where the operator $A$ has a spectral decomposition $A = \sum_j \alpha_j P_j$, with $P_j$ the projection operator of the eigenvalue $\alpha_j$. The probability of outcome $j$ is $p_j = \mathrm{Tr}(\rho P_j)$. The entropy associated with the observable $A$ is the Shannon entropy with respect to the possible outcomes:

$$S_A = -\sum_j p_j \ln p_j$$

The most significant observable in thermodynamics is the energy represented by the Hamiltonian operator $H$, and its associated energy entropy, $S_E$.[16]

John von Neumann suggested to single out the most informative observable to characterize the entropy of the system. This invariant is obtained by minimizing the entropy with respect to all possible observables. The most informative observable operator commutes with the state of the system. The entropy of this observable is termed the von Neumann entropy and is equal to:

$$S_{vN} = -\mathrm{Tr}(\rho \ln \rho)$$
engines and power-driven refrigerators fall into this class.
A reexamination of the time-dependent heat current ex-
pression using quantum transport techniques has been Svn = T r( ln )
proposed.[10]
As a consequence, SA Svn for all observables. At
A derivation of consistent dynamics beyond the weak thermal equilibrium the energy entropy is equal to the von
coupling limit has been suggested.[11] Neumann entropy: SE = Svn .
Svn is invariant to a unitary transformation changing the
3.1.2 The emergence of the second law of state. The Von Neumann entropy Svn is additive only for
a system state that is composed of a tensor product of its
thermodynamics subsystems:
The second law is a statement on the irreversibility of
dynamics or, the breakup of time reversal symmetry (T-
symmetry). This should be consistent with the empirical = j j
direct denition: heat will ow spontaneously from a hot
source to a cold sink.
Clausius version of the II-law
From a static viewpoint, for a closed quantum system, the
II-law of thermodynamics is a consequence of the uni- No process is possible whose sole result is the transfer of
tary evolution.[12] In this approach, one accounts for the heat from a body of lower temperature to a body of higher
entropy change before and after a change in the entire sys- temperature.
tem. A dynamical viewpoint is based on local accounting
for the entropy changes in the subsystems and the entropy This statement for N-coupled heat baths in steady state
generated in the baths. becomes:
Entropy Jn
0
n
Tn
In thermodynamics, entropy is related to a concrete pro-
cess. In quantum mechanics, this translates to the ability A dynamical version of the II-law can be proven, based
to measure and manipulate the system based on the in- on Spohns inequality[17]
3.1. A DYNAMICAL VIEW OF QUANTUM THERMODYNAMICS 11
The entropy of any pure substance in thermody- This equation introduces the relation between the char-
namic equilibrium approaches zero as the temper- acteristic exponents and . When < 0 then the bath
ature approaches zero. is cooled to zero temperature in a nite time, which im-
plies a valuation of the third law. It is apparent from the
The second formulation is dynamical, known as the last equation, that the unattainability principle is more re-
unattainability principle[21] strictive than the Nernst heat theorem.
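The consistency conditions above can be checked numerically for the smallest open system. The sketch below is illustrative and not from the original text: the two-level parameters, the operator conventions and ℏ = k_B = 1 are assumptions. It builds the L-GKS generator for a qubit coupled to a thermal bath, verifies that the stationary state is the Gibbs state, evaluates the heat current J = Tr(H_S L_D(ρ)) for an excited state (heat flows out, J < 0), and checks S_A ≥ S_vn on a pure superposition.

```python
import numpy as np

# Two-level system: basis |g> = (1,0), |e> = (0,1); hbar = kB = 1 (assumption).
eps, gamma, T = 1.0, 1.0, 0.5          # level splitting, coupling, bath temperature
nbar = 1.0 / (np.exp(eps / T) - 1.0)   # thermal occupation of the resonant bath mode

H = np.diag([0.0, eps]).astype(complex)          # H_S
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |g><e|
Ls = [np.sqrt(gamma * (nbar + 1)) * sm,          # emission into the bath
      np.sqrt(gamma * nbar) * sm.conj().T]       # absorption from the bath

def dissipator(rho):
    """L_D(rho) = sum_k ( L rho L+ - (1/2){L+L, rho} )."""
    out = np.zeros_like(rho)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

# Row-major vectorization: vec(A rho B) = (A kron B^T) vec(rho).
I2 = np.eye(2, dtype=complex)
Lsup = -1j * (np.kron(H, I2) - np.kron(I2, H.T))
for L in Ls:
    LdL = L.conj().T @ L
    Lsup += np.kron(L, L.conj()) - 0.5 * (np.kron(LdL, I2) + np.kron(I2, LdL.T))

# Stationary state: eigenvector of the Liouvillian with eigenvalue ~ 0.
w, v = np.linalg.eig(Lsup)
rho_ss = v[:, np.argmin(np.abs(w))].reshape(2, 2)
rho_ss = rho_ss / np.trace(rho_ss)

gibbs = np.diag(np.exp(-np.array([0.0, eps]) / T))
gibbs = gibbs / np.trace(gibbs)

# Heat current J = Tr(H_S L_D(rho)) for the fully excited state.
J = np.trace(H @ dissipator(np.diag([0.0, 1.0]).astype(complex))).real

# S_A >= S_vn: a pure superposition has S_vn = 0 but nonzero energy entropy S_E.
plus = np.full((2, 2), 0.5, dtype=complex)       # |+><+|
p = np.diag(plus).real                           # outcome probabilities of measuring H_S
S_E = -np.sum(p * np.log(p))
evals = np.linalg.eigvalsh(plus)
evals = evals[evals > 1e-12]
S_vn = -np.sum(evals * np.log(evals))
```

The dissipator's stationary state coinciding with the Gibbs state of H_S is exactly the "commutes with the unitary part" condition in action for this simple bath model.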
3.2 Typicality as a source of emergence of thermodynamical phenomena

The basic idea of quantum typicality is that the vast majority of all pure states featuring a common expectation value of some generic observable at a given time will yield very similar expectation values of the same observable at any later time. This is meant to apply to Schrödinger-type dynamics in high-dimensional Hilbert spaces. As a consequence, individual dynamics of expectation values are then typically well described by the ensemble average.[23]

The quantum ergodic theorem originated by John von Neumann is a strong result arising from the mere mathematical structure of quantum mechanics. The QET is a precise formulation of what is termed normal typicality, i.e. the statement that, for typical large systems, every initial wave function ψ_0 from an energy shell is "normal": it evolves in such a way that ψ_t, for most t, is macroscopically equivalent to the micro-canonical density matrix.[24]

References

[1] Einstein, Albert. "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt". Annalen der Physik 322, no. 6 (1905): 132–148.

[2] von Neumann, John. Mathematical Foundations of Quantum Mechanics. No. 2. Princeton University Press, 1955.

[3] Kosloff, Ronnie. "Quantum thermodynamics: A dynamical viewpoint". Entropy 15, no. 6 (2013): 2100–2128.

[4] Lindblad, G. "On the generators of quantum dynamical semigroups". Comm. Math. Phys. 1976, 48, 119–130.

[5] Gorini, V.; Kossakowski, A.; Sudarshan, E. C. G. "Completely positive dynamical semigroups of N-level systems". J. Math. Phys. 1976, 17, 821–825.

[6] Spohn, H.; Lebowitz, J. "Irreversible thermodynamics for quantum systems weakly coupled to thermal reservoirs". Adv. Chem. Phys. 1979, 38, 109.

[7] Alicki, R. "Quantum open systems as a model of a heat engine". J. Phys. A: Math. Gen. 1979, 12, L103–L107.

[8] Kosloff, R. "A quantum mechanical open system as a model of a heat engine". J. Chem. Phys. 1984, 80, 1625–1631.

[9] Davies, E. B. "Markovian master equations". Comm. Math. Phys. 1974, 39, 91–110.

[19] Kosloff, R.; Feldmann, T. "A discrete four stroke quantum heat engine exploring the origin of friction". Phys. Rev. E 2002, 65, 055102.

[26] Goold, John; Huber, Marcus; Riera, Arnau; del Rio, Lidia; Skrzypczyk, Paul. "The role of quantum information in thermodynamics – a topical review". Journal of Physics A: Mathematical and Theoretical 49, 143001 (2016).
Master equation
For the master equation used in quantum physics, see Lindblad equation. For the classical and quantum master equations in quantum field theory, see Batalin–Vilkovisky formalism.

4.1 Introduction

A master equation is a phenomenological set of first-order differential equations describing the time evolution of (usually) the probability of a system to occupy each one of a discrete set of states with regard to a continuous time variable t. The most familiar form of a master equation is a matrix form:

dP/dt = A P,

where P is a column vector (where element i represents state i), and A is the matrix of connections. The way connections among states are made determines the dimension of the problem; it is either

a d-dimensional system (where d is 1, 2, 3, ...), where any state is connected with exactly its 2d nearest neighbors, or

a network, where every pair of states may have a connection, depending on the network's properties.

When the connections are time-independent rate constants, the master equation represents a kinetic scheme, and the process is Markovian (any jumping time probability density function for state i is an exponential, with a rate equal to the value of the connection). When the connections depend on the actual time (i.e. matrix A depends on the time, A → A(t)), the process is not stationary and the master equation reads

dP/dt = A(t) P.

When the connections represent memory, the process is semi-Markovian and the equation of motion is an integro-differential equation, the generalized master equation:

dP/dt = ∫₀ᵗ A(t − τ) P(τ) dτ.

The matrix A can also represent birth and death, meaning that probability is injected (birth) or taken from (death) the system, where then the process is not in equilibrium.

4.1.1 Detailed description of the matrix A, and properties of the system

Let A be the matrix describing the transition rates (also known as kinetic rates or reaction rates). As always, the first subscript represents the row, the second subscript the column. That is, the source is given by the second subscript, and the destination by the first subscript. This is the opposite of what one might expect, but it is technically convenient.

For each state k, the increase in occupation probability depends on the contribution from all other states to k, and is given by:

dP_k/dt = Σ_ℓ A_kℓ P_ℓ,

where P_ℓ is the probability for the system to be in the state ℓ, while the matrix A is filled with a grid of transition-rate constants. Similarly, P_k contributes to the occupation of all other states P_ℓ.
Markov property
5.1 Introduction
A stochastic process has the Markov property if the
conditional probability distribution of future states of the
process (conditional on both past and present values)
depends only upon the present state; that is, given the
present, the future does not depend on the past. A process
with this property is said to be Markovian or a Markov
process. The most famous Markov process is a Markov
chain. Brownian motion is another well-known Markov
process.
This stochastic process of observed colors doesn't have the Markov property. Using the same experiment above, if sampling without replacement is changed to sampling with replacement, the process of observed colors will have the Markov property.[5]

An application of the Markov property in a generalized form is in Markov chain Monte Carlo computations in the context of Bayesian statistics.

5.4 Alternative formulations

Alternatively, the Markov property can be formulated as follows:

E[f(X_t) | F_s] = E[f(X_t) | σ(X_s)]

for all t ≥ s ≥ 0 and f: S → R bounded and measurable.[4]
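The sampling example can be made concrete by exact enumeration. The sketch below assumes the standard form of this experiment (an urn with two red balls and one green ball and three successive draws; these details are assumptions, since the setup itself is described earlier in the original article): without replacement, tomorrow's color depends on both today's and yesterday's colors, while with replacement it does not.

```python
from fractions import Fraction
from itertools import permutations, product

def cond_prob(seqs, target, given):
    """P(X3 = target | constraints); `given` maps draw index -> observed color.
    All sequences in `seqs` are equally likely."""
    match = [s for s in seqs if all(s[i] == c for i, c in given.items())]
    hits = [s for s in match if s[2] == target]
    return Fraction(len(hits), len(match))

balls = ['R', 'R', 'G']

# Without replacement: the three draws are an ordering of the three balls.
wo = list(permutations(balls))
p_given_today = cond_prob(wo, 'G', {1: 'R'})            # know only today's color
p_given_both  = cond_prob(wo, 'G', {0: 'R', 1: 'R'})    # know yesterday's too

# With replacement: three independent draws from the same urn.
wr = list(product(balls, repeat=3))
q_given_today = cond_prob(wr, 'G', {1: 'R'})
q_given_both  = cond_prob(wr, 'G', {0: 'R', 1: 'R'})
```

Without replacement the two conditional probabilities differ (1/2 versus 1), so the process of colors is not Markov; with replacement they coincide.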
5.5 Strong Markov property

5.8 See also

Markov chain
Lindblad equation
In quantum mechanics, the Gorini–Kossakowski–Sudarshan–Lindblad equation (GKSL equation, named after Vittorio Gorini, Andrzej Kossakowski, George Sudarshan and Göran Lindblad) or master equation in Lindblad form is the most general type of Markovian and time-homogeneous master equation describing non-unitary evolution of the density matrix ρ that is trace-preserving and completely positive for any initial condition. The Schrödinger equation is a special case of the more general Lindblad equation, which has led to some speculation that quantum mechanics may be productively extended and expanded through further application and analysis of the Lindblad equation.[1]

The Lindblad master equation for an N-dimensional system's reduced density matrix ρ can be written:

dρ/dt = −(i/ℏ)[H, ρ] + Σ_{n,m=1}^{N²−1} h_{n,m} ( L_n ρ L_m† − ½(ρ L_m† L_n + L_m† L_n ρ) )

where H is a (Hermitian) Hamiltonian part, the L_m are an arbitrary linear basis of the operators on the system's Hilbert space, and the h_{n,m} are constants which determine the dynamics. The coefficient matrix h = (h_{n,m}) must be positive to ensure that the equation is trace-preserving and completely positive. The summation only runs to N² − 1 because we have taken L_{N²} to be proportional to the identity operator, in which case the summand vanishes. Our convention implies that the L_m are traceless for m < N². The terms in the summation where m = n can be described in terms of the Lindblad superoperator,

D(L)ρ = L ρ L† − ½( L† L ρ + ρ L† L ).

If the h_{n,m} terms are all zero, then this is the quantum Liouville equation (for a closed system), which is the quantum analog of the classical Liouville equation. A related equation describes the time evolution of the expectation values of observables; it is given by the Ehrenfest theorem.

Note that H is not necessarily equal to the self-Hamiltonian of the system. It may also incorporate effective unitary dynamics arising from the system-environment interaction.

Lindblad equations can also be expressed as the following equations for quantum observables:

dA/dt = (i/ℏ)[H, A] + Σ_k ( L_k† A L_k − ½( A L_k† L_k + L_k† L_k A ) ),

where A is a quantum observable and we assumed a diagonal coefficient matrix h = (h_{n,m}).

6.1 Diagonalization

Since the matrix h = (h_{n,m}) is positive, it can be diagonalized with a unitary transformation u:

u† h u = diag(γ₁, γ₂, ..., γ_{N²−1}),

where the eigenvalues γ_i are non-negative. If we define another orthonormal operator basis

A_i = Σ_{j=1}^{N²−1} u_{j,i} L_j,

we can rewrite the Lindblad equation in diagonal form:

dρ/dt = −(i/ℏ)[H, ρ] + Σ_{i=1}^{N²−1} γ_i ( A_i ρ A_i† − ½( ρ A_i† A_i + A_i† A_i ρ ) ).

This equation is invariant under a unitary transformation of Lindblad operators and constants,

√γ_i A_i → √γ_i′ A_i′ = Σ_{j=1}^{N²−1} v_{j,i} √γ_j A_j,

and also under an inhomogeneous transformation that shifts the A_i by multiples of the identity, with a compensating shift of H.
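The diagonalization step can be verified directly: for any positive coefficient matrix h, the full double-sum dissipator must agree with the diagonal form built from the eigenbasis of h. A minimal numerical check (the qubit operator basis and the particular h are assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Orthonormal traceless operator basis for a qubit: Pauli matrices / sqrt(2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Ls = [sx / np.sqrt(2), sy / np.sqrt(2), sz / np.sqrt(2)]

# Random positive coefficient matrix h = B+ B.
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
h = B.conj().T @ B

gammas, u = np.linalg.eigh(h)              # h = u diag(gammas) u+

# New operator basis A_i = sum_j u[j, i] L_j.
As = [sum(u[j, i] * Ls[j] for j in range(3)) for i in range(3)]

def term(Ln, Lm, rho):
    """L_n rho L_m+ - (1/2)(rho L_m+ L_n + L_m+ L_n rho)."""
    LmLn = Lm.conj().T @ Ln
    return Ln @ rho @ Lm.conj().T - 0.5 * (rho @ LmLn + LmLn @ rho)

rho = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])   # some density matrix

D_full = sum(h[n, m] * term(Ls[n], Ls[m], rho) for n in range(3) for m in range(3))
D_diag = sum(gammas[i] * term(As[i], As[i], rho) for i in range(3))
```

Since h = u diag(γ) u†, expanding A_i ρ A_i† reproduces the h_{n,m} weights term by term, which is why the two dissipators agree; the dissipator is also traceless, reflecting trace preservation.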
6.2 Harmonic oscillator example

The most common Lindblad equation is that describing the damping of a quantum harmonic oscillator; it has

L₁ = a
L₂ = a†

h_{n,m} = (γ/2)(n̄ + 1) for n = m = 1; (γ/2) n̄ for n = m = 2; 0 else.

Here n̄ is the mean number of excitations in the reservoir damping the oscillator and γ is the decay rate. Additional Lindblad operators can be included to model various forms of dephasing and vibrational relaxation. These methods have been incorporated into grid-based density matrix propagation methods.
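This damped-oscillator generator can be integrated directly in a truncated Fock space. The sketch below is illustrative: the cutoff, rates and initial state are assumptions, and the two jump rates are written as explicit constants k↓ and k↑ rather than through the h-matrix convention above. From d⟨n⟩/dt = −k↓⟨n⟩ + k↑(⟨n⟩ + 1) one expects ⟨n⟩(t) = n_ss + (n₀ − n_ss) e^{−(k↓−k↑)t} with n_ss = k↑/(k↓ − k↑), which the simulation checks.

```python
import numpy as np

N = 30                                                     # Fock-space cutoff (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), 1).astype(complex)   # annihilation operator
num = a.conj().T @ a                                       # number operator
H = num                                                    # oscillator Hamiltonian, hbar*omega = 1

k_down, k_up = 1.0, 0.25                 # damping / thermal-excitation rates (assumptions)
Ls = [np.sqrt(k_down) * a, np.sqrt(k_up) * a.conj().T]

def rhs(rho):
    """Right-hand side of the Lindblad equation."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

rho = np.zeros((N, N), dtype=complex)    # initial state: Fock state |5>
rho[5, 5] = 1.0

dt, steps = 0.01, 400                    # classic RK4 integration up to t = 4
t = dt * steps
for _ in range(steps):
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

n_final = np.trace(num @ rho).real
n_ss = k_up / (k_down - k_up)
n_expected = n_ss + (5 - n_ss) * np.exp(-(k_down - k_up) * t)
```

The trace stays equal to one throughout, and the mean excitation number follows the exponential relaxation law to well below the truncation error of the cutoff.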
6.3 See also

Open quantum system
Quantum jump method
Quantum dynamical semigroup

6.4 References

Accardi, Luigi; Lu, Yun Gang; Volovich, I. V. (2002). Quantum Theory and Its Stochastic Limit. New York: Springer Verlag. ISBN 978-3-5404-1928-0.

Alicki, Robert; Lendi, Karl (1987). Quantum Dynamical Semigroups and Applications. Berlin: Springer Verlag. ISBN 978-0-3871-8276-6.

Attal, Stéphane; Joye, Alain; Pillet, Claude-Alain (2006). Open Quantum Systems II: The Markovian Approach. Springer. ISBN 978-3-5403-0992-5.

Breuer, Heinz-Peter; Petruccione, F. (2002). The Theory of Open Quantum Systems. Oxford University Press. ISBN 978-0-1985-2063-4.

Gardiner, C. W.; Zoller, Peter (2010). Quantum Noise. Springer Series in Synergetics (3rd ed.). Berlin Heidelberg: Springer-Verlag. ISBN 978-3-642-06094-6.

Ingarden, Roman S.; Kossakowski, A.; Ohya, M. (1997). Information Dynamics and Open Systems: Classical and Quantum Approach. New York: Springer Verlag. ISBN 978-0-7923-4473-5.

Lindblad, G. (1983). Non-Equilibrium Entropy and Irreversibility. Dordrecht: Delta Reidel. ISBN 1-4020-0320-X.

Tarasov, Vasily E. (2008). Quantum Mechanics of Non-Hamiltonian and Dissipative Systems. Amsterdam, Boston, London, New York: Elsevier Science. ISBN 978-0-0805-5971-1.
Amir Caldeira
Amir Ordacgi Caldeira (born 1950 in Rio de Janeiro) is a Brazilian physicist. He received his bachelor's degree in 1973 from the Pontifícia Universidade Católica do Rio de Janeiro, his M.Sc. degree in 1976 from the same university, and his Ph.D. in 1980 from the University of Sussex. His Ph.D. advisor was the Physics Nobel Prize winner Anthony James Leggett. He joined the faculty at Universidade Estadual de Campinas (UNICAMP) in 1980. In 1984 he did post-doctoral work at the Kavli Institute for Theoretical Physics (KITP) at the University of California, Santa Barbara and at the Thomas J. Watson Research Laboratory at IBM. In 1994-1995 he spent a sabbatical at the University of Illinois at Urbana-Champaign. He is currently a Full Professor at Universidade Estadual de Campinas. He was the recipient of the Wataghin Prize, from Universidade Estadual de Campinas, for his contributions to theoretical physics in 1986.

Caldeira's research interests are in theoretical condensed matter physics, in particular quantum dissipation and strongly correlated electron systems. His best known work is on the Caldeira-Leggett model, which is one of the first and most important treatments of decoherence in quantum mechanical systems.[1]

… for dissipation in quantum mechanics, Physical Review Letters 67, 1960 (1991).

7.2 References

[1] Amir O. Caldeira's page on the Brazilian Academy of Sciences website. Archived March 11, 2007, at the Wayback Machine.
Chapter 8

Anthony James Leggett

Sir Anthony James Leggett KBE FRS[4] (born 26 March 1938) has been a professor of physics at the University of Illinois at Urbana-Champaign since 1983.[5] Leggett is widely recognized as a world leader in the theory of low-temperature physics, and his pioneering work on superfluidity was recognized by the 2003 Nobel Prize in Physics.[6] He has shaped the theoretical understanding of normal and superfluid helium liquids and strongly coupled superfluids.[7] He set directions for research in the quantum physics of macroscopic dissipative systems and use of condensed systems to test the foundations of quantum mechanics.[8][9]

8.1 Early life and education

Leggett was born in Camberwell, South London, and raised Catholic.[10] His father's forebears were village cobblers in a small village in Hampshire; his grandfather broke with this tradition to become a greengrocer, and his father would relate how he used to ride with him to buy vegetables at the Covent Garden market in London. His mother's parents were of Irish descent; her father had emigrated to England and worked as a clerk in the naval dockyard in Chatham.[10] His maternal grandmother, who survived into her eighties, was sent out to domestic service at the age of twelve. She eventually married his grandfather and raised a large family, then in her late sixties emigrated to Australia to join her daughter and son-in-law, and finally returned to the UK for her last years.

His father and mother were each the first in their families to receive a university education; they met and became engaged while students at the Institute of Education at the University of London, but were unable to get married for some years because his father had to care for his own mother and siblings. His father worked as a secondary school teacher of physics, chemistry and mathematics. His mother also taught secondary school mathematics for a time, but had to give this up when he was born. He was eventually followed by two sisters, Clare and Judith, and two brothers, Terence and Paul, all raised in their parents' Roman Catholic faith. Leggett ceased to be a practising Catholic in his early twenties.[10]

Soon after he was born, his parents bought a house in Upper Norwood, south London. When he was 18 months old, WWII broke out and he was evacuated to Englefield Green, a small village in Surrey on the edge of the great park of Windsor Castle, where he stayed for the duration of the war. After the end of the war, he returned to the Upper Norwood house and lived there until 1950; his father taught at a school in north-east London and his mother looked after the five children full-time. He attended the local Catholic primary school and later, following a successful performance in the 11-plus, which he took rather earlier than most, transferred to Wimbledon College.

Leggett won a scholarship to Balliol College, Oxford, in December 1954 and entered the University the following year with the intention of reading the degree technically known as Literae Humaniores (classics). He completed a second undergraduate degree, this time in physics, at Merton College, Oxford.[11] One person who was willing to overlook his unorthodox credentials was Dirk ter Haar, then a reader in theoretical physics and a fellow of Magdalen College, Oxford; so he signed up for research under his supervision. As with all ter Haar's students in that period, the tentatively assigned thesis topic was "Some Problems in the Theory of Many-Body Systems", which left a considerable degree of latitude.

Dirk took a great interest in the personal welfare of his students and their families, and was meticulous in making sure they received adequate support; indeed, he encouraged Leggett to apply for a Prize Fellowship at Magdalen, which he held from 1963 to 1967. In the end Leggett's thesis consisted of studies of two somewhat disconnected problems in the general area of liquid helium, one on higher-order phonon interaction processes in superfluid ⁴He and the other on the properties of dilute solutions of ⁴He in normal liquid ³He (a system which unfortunately turned out to be much less experimentally accessible than the other side of the phase diagram, dilute solutions of ³He in ⁴He). The University of Oxford awarded Leggett an Honorary DLitt in June 2005.
Nitrogen-vacancy center
Primitive picture of the N-V center

The nitrogen-vacancy center is a point defect in the diamond lattice. It consists of a nearest-neighbor pair of a nitrogen atom, which substitutes for a carbon atom, and a lattice vacancy.

9.2 Production

Main article: Crystallographic defects in diamond

Nitrogen-vacancy centers are typically produced from single substitutional nitrogen centers (called C or P1 centers in the diamond literature) by irradiation followed by annealing at temperatures above 700 °C.[1] A wide range of high-energy particles are suitable for such irradiation, including electrons, protons, neutrons, ions, and gamma photons. Irradiation produces lattice vacancies, which are a part of N-V centers. Those vacancies are immobile at room temperature, and annealing is required to move them. Single substitutional nitrogen produces strain in the diamond lattice;[10] it therefore efficiently captures moving vacancies,[11] producing the N-V centers.

During chemical vapor deposition of diamond, a small fraction of single substitutional nitrogen impurity (typically <0.5%) traps vacancies generated as a result of the plasma synthesis. Such nitrogen-vacancy centers are preferentially aligned to the growth direction.[12]

Diamond is notorious for having a relatively large lattice strain. Strain splits and shifts optical transitions from individual centers, resulting in broad lines in the ensembles of centers.[1] Special care is taken to produce extremely sharp N-V lines (line width ~10 MHz)[13] required for most experiments: high-quality, pure natural or, better, synthetic diamonds (type IIa) are selected. Many of them already have sufficient concentrations of grown-in N-V centers and are suitable for applications. If not, they are irradiated by high-energy particles and annealed. Selection of a certain irradiation dose allows tuning the concentration of produced N-V centers such that individual N-V centers are separated by micrometre-large distances. Then, individual N-V centers can be studied with standard optical microscopes or, better, near-field scanning optical microscopes having sub-micrometre resolution.[7][14]

N-V centers emit bright red light which can be conveniently excited by visible light sources, such as argon or krypton lasers, frequency-doubled Nd:YAG lasers, dye lasers, or He-Ne lasers. Excitation can also be achieved at energies below that of zero-phonon emission.[15] Laser illumination, however, also converts some N-V⁻ into N-V⁰ centers.[3] Emission is very quick (relaxation time ~10 ns).[16][17] At room temperature, no sharp peaks are observed because of the thermal broadening. However, cooling the N-V centers with liquid nitrogen or liquid helium dramatically narrows the lines down to a width of a few megahertz.

An important property of the luminescence from individual N-V centers is its high temporal stability. Whereas many single-molecular emitters bleach after emission of 10⁶–10⁸ photons, no bleaching is observed for the N-V centers at room temperature.[7][14]

Because of these properties, the ideal technique to address the N-V centers is confocal microscopy, both at room temperature and at low temperature. In particular, low-temperature operation is required to specifically address only the zero-phonon line (ZPL).

9.4 Energy level structure and its manipulation by external fields

The energy level structure of the N-V center was established by combining optical, electron paramagnetic resonance and theoretical results, as shown in the figure. In particular, several theoretical works have been done, using the Linear Combination of Atomic Orbitals (LCAO)
approach, to build the electronic orbitals describing the possible quantum states, looking at the NV center as a molecule. Moreover, group theory results are used to take into account the symmetry of the diamond crystal, and so the symmetry of the NV itself. The energy levels are labeled according to group theory, and in particular are labelled after the irreducible representations of the C₃ᵥ symmetry group of the defect center: A₁, A₂ and E. The numbers 3 in ³A and 1 in ¹A represent the number of allowable mₛ spin states, or the spin multiplicity, which range from −S to S for a total of 2S + 1 possible states. If S = 1, mₛ can be −1, 0, or 1. The ¹A level is predicted by theory but not directly observed in experiment, and it is believed to play an important role in the quenching of photoluminescence.

In the absence of an external magnetic field, the ground and excited states are split by the magnetic interaction between the two unpaired electrons at the N-V center (see microscopic model): when two electrons have parallel spins (mₛ = ±1), their energy is higher than when the spins are antiparallel (mₛ = 0). The farther apart the electrons are, the weaker their interaction energy D (roughly D ~ 1/r³).[5] Thus the smaller splitting in the excited state can be viewed in terms of larger electron-electron separation in the excited state. When an external magnetic field is applied to the N-V center, it does not affect the mₛ = 0 states nor the ¹A state (because it has S = 0), but it splits the mₛ = ±1 levels. If a magnetic field is oriented along the defect axis and reaches about 1027 G (or 508 G), then the mₛ = −1 and mₛ = 0 states in the ground (or excited) state become equal in energy; they strongly interact, resulting in so-called spin polarization, which strongly affects the intensity of optical absorption and luminescence transitions involving those states.[18]

This happens because transitions between electronic states are mediated by a photon, which cannot change the overall spin. Thus optical transitions must preserve the total spin and occur between levels of the same total spin. For this reason, the transitions ³E ↔ ¹A and ¹A ↔ ³A are non-radiative and quench the luminescence. Whereas the mₛ = −1 (excited state) ↔ mₛ = 0 (ground state) transition was forbidden in the absence of an external magnetic field, it becomes allowed when a magnetic field mixes the mₛ = −1 and mₛ = 0 levels in the ground state. As a measurable outcome of this phenomenon, the luminescence intensity can be strongly modulated by a magnetic field.

An important property of the non-radiative transition between ³E and ¹A is that it is stronger for mₛ = ±1 and weaker for mₛ = 0. This property results in a very useful manipulation of the N-V center, which is called optical spin-polarization. First, an off-resonance excitation is applied which has a higher frequency (typically 2.32 eV (532 nm)) than the frequencies of all transitions and thus lies in the vibronic bands for all transitions. By using a pulse of this wavelength, one can excite all spin states and create phonons as well. A spin state with mₛ = 0, due to conservation of spin in the transition, will be excited to the corresponding mₛ = 0 state in ³E and then go back to the original state. However, a spin state with mₛ = ±1 in ³A, after the excitation, has a relatively high probability to jump to the intermediate state ¹A by a non-radiative transition and go to the ground state with mₛ = 0. After sufficient cycles, the state of the NV center can be regarded as being in the mₛ = 0 state. Such a process can be used in the initialization of the quantum state in quantum information processing.

There is an additional level splitting in the excited ³E state due to the orbital degeneracy and spin-orbit interaction. Importantly, this splitting can be modulated by applying a static electric field,[13][19] in a similar fashion to the magnetic field mechanism outlined above, though the physics of the splitting is somewhat more complex. Nevertheless, an important practical outcome is that the intensity and position of the luminescence lines can be modulated by applying electric and/or magnetic fields.

The energy difference between the mₛ = 0 and mₛ = ±1 states corresponds to the microwave region. Thus by irradiating the N-V centers with microwave radiation, one can change the relative population of those levels, thereby again modulating the luminescence intensity.

There is an additional splitting of the mₛ = ±1 energy levels, which originates from the "hyperfine" interaction between the nuclear and electron spins. Thus finally, the optical absorption and luminescence from the N-V center consists of roughly a dozen sharp lines with a separation in the MHz-GHz range, and all those lines can be resolved, given proper sample preparation. The intensity and position of those lines can be modulated using the following tools:

1. Amplitude and orientation of a magnetic field, which splits the mₛ = ±1 levels in the ground and excited states.

2. Amplitude and orientation of an elastic field (strain), which can be applied by, e.g., squeezing the diamond. Similar effects can be induced by applying an electric field,[13][19] and the electric field can be controlled with much higher precision.

3. Continuous-wave microwave radiation, which changes the population of the sublevels within the ground and excited state.[19]

4. A tunable laser, which can selectively excite certain sublevels of the ground and excited state.[19][20]

5. In addition to those static perturbations, numerous dynamic effects (spin echo, Rabi oscillations, etc.) can be exploited by applying a carefully designed sequence of microwave pulses.[21][22][23][24][25] The first pulse coherently excites the electron spins, and this coherence is then manipulated and probed by the subsequent pulses. Those dynamic effects are rather important for the practical realization of quantum computers, which ought to work at high frequency.
As a final remark, it should be noted that the above-described energy structure is by no means exceptional for a defect in diamond or other semiconductors.[26] It was not this structure alone, but a combination of several favorable factors (previous knowledge, easy production and excitation, etc.) which suggested the use of the N-V center.

9.5 Spin dynamics

…radiatively to the singlet state ¹A₁, a phenomenon called intersystem crossing (ISC). This happens at an appreciable rate because the energy curve as a function of the position of the atoms for the ³E,1 state intersects the curve for the ¹A₁ state. Therefore, for some instant during the vibrational relaxation that the ions undergo after the excitation, it is possible for the spin to flip with little or no energy required in the transition.[30] It is important to note that this mechanism also leads to a transition from ³E,0 to ¹A₁, but the rate of this ISC is much lower than that of the ³E,1 states; therefore this transition is indicated with a thin line. The diagram also shows the non-radiative and infrared competing decay paths between the two singlet states, and the fine splitting in the triplet states, whose differences in energy correspond to microwave frequencies.

Some authors explain the dynamics of the NV⁻ center by admitting that the transition from ³E,±1 to ¹A₁ is small, but as Robledo et al. show,[31] the fact alone that the probability of decaying to ¹A₁ is smaller for ³E,0 than for ³E,1 is enough to polarize the spin to m = 0.

…stability, etc.). Compared with large single-crystal diamonds, nanodiamonds are cheap (about 1 USD per gram) and available from various suppliers. N-V centers are produced in diamond powders with sub-micrometre particle size using the standard process of irradiation and annealing described above. Those nanodiamonds are introduced in a cell, and their luminescence is monitored using a standard fluorescence microscope.[37]

Further, the N-V center has been hypothesized to be a potential bio-mimetic system for emulating radical pair spin dynamics of the avian compass.[38][39]
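The spin-polarization argument above (a larger intersystem-crossing probability for the m = ±1 branch than for the m = 0 branch) can be illustrated with a toy rate model. All branching probabilities here are hypothetical illustration values, not measured NV rates; the singlet is simply assumed to return to either ground spin state with equal probability.

```python
# Toy per-cycle Markov model of optical spin polarization:
# with probability a (m = 0) or b (m = +-1, with b > a) the excited state
# crosses to the singlet, which then relaxes to m = 0 or m = +-1 with
# probability 1/2 each; otherwise the optical cycle conserves the spin.
def pump(p0, p1, a=0.1, b=0.5, cycles=200):
    """Return populations of m = 0 and m = +-1 after repeated optical cycles."""
    for _ in range(cycles):
        p0, p1 = (p0 * (1 - a / 2) + p1 * b / 2,
                  p1 * (1 - b / 2) + p0 * a / 2)
    return p0, p1

p0, p1 = pump(0.5, 0.5)   # start from an unpolarized spin
# The steady state satisfies p0 / p1 = b / a, i.e. p0 -> b / (a + b),
# so any asymmetry b > a pumps population into m = 0.
```

The point matches the text: no detailed level structure is needed; a mere difference in the two crossing probabilities is enough to polarize the spin into m = 0 under optical cycling.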
9.9 References
[17] Hanzawa, H.; Nisida, Y.; Kato, T. (1997). "Measurement of decay time for the NV centre in Ib diamond with a picosecond laser pulse". Diamond and Related Materials. 6 (11): 1595. Bibcode:1997DRM.....6.1595H. doi:10.1016/S0925-9635(97)00037-X.

[18] Fuchs, G. D.; et al. (2008). "Excited-State Spectroscopy Using Single Spin Manipulation in Diamond". Physical Review Letters. 101 (1): 117601. Bibcode:2008PhRvL.101k7601F. PMID 18851332. arXiv:0806.1939. doi:10.1103/PhysRevLett.101.117601.

[19] Tamarat, Ph.; et al. (2008). "Spin-flip and spin-conserving optical transitions of the nitrogen-vacancy centre in diamond". New Journal of Physics. 10 (4): 045004. Bibcode:2008NJPh...10d5004T. doi:10.1088/1367-2630/10/4/045004.

[20] Santori, C.; et al. (2006). "Coherent Population Trapping of Single Spins in Diamond under Optical Excitation". Physical Review Letters. 97 (24): 247401. Bibcode:2006PhRvL..97x7401S. PMID 17280321. arXiv:quant-ph/0607147. doi:10.1103/PhysRevLett.97.247401.

[21] Hanson, R.; Gywat, O.; Awschalom, D. D. (2006). "Room-temperature manipulation and decoherence of a single spin in diamond". Physical Review B. 74 (16): 161203. Bibcode:2006PhRvB..74p1203H. arXiv:quant-ph/0608233. doi:10.1103/PhysRevB.74.161203.

[22] Dutt, M. V. G.; et al. (2007). "Quantum Register Based on Individual Electronic and Nuclear Spin Qubits in Diamond" (PDF). Science. 316 (5829): 1312–6. Bibcode:2007Sci...316.....D. PMID 17540898. doi:10.1126/science.1139831.

[23] Childress, L.; et al. (2006). "Coherent Dynamics of Coupled Electron and Nuclear Spin Qubits in Diamond". Science. 314 (5797): 281–5. Bibcode:2006Sci...314..281C. PMID 16973839. doi:10.1126/science.1131871.

[24] Batalov, A.; et al. (2008). "Temporal Coherence of Photons Emitted by Single Nitrogen-Vacancy Defect Centers in Diamond Using Optical Rabi-Oscillations". Physical Review Letters. 100 (7): 077401. Bibcode:2008PhRvL.100g7401B. PMID 18352594. doi:10.1103/PhysRevLett.100.077401.

[25] Jelezko, F.; et al. (2004). "Observation of Coherent Oscillations in a Single Electron Spin" (PDF). Physical Review Letters. 92 (7): 076401. Bibcode:2004PhRvL..92g6401J. PMID 14995873. doi:10.1103/PhysRevLett.92.076401.

[26] Aharonovich, I.; et al. (2009). "Enhanced single-photon emission in the near infrared from a diamond color center". Physical Review B. 79 (23): 235316. Bibcode:2009PhRvB..79w5316A. doi:10.1103/PhysRevB.79.235316.

[27] Gordon, Luke; Weber, Justin R.; Varley, Joel B.; Janotti, Anderson; Awschalom, David D.; Van de Walle, Chris G. (2013-10-01). "Quantum computing with defects". MRS Bulletin. 38 (10): 802–807. ISSN 1938-1425. doi:10.1557/mrs.2013.206.

[28] Rogers, L. J.; Doherty, M. W.; Barson, M. S. J.; Onoda, S.; Ohshima, T.; Manson, N. B. (2015-01-01). "Singlet levels of the NV⁻ centre in diamond". New Journal of Physics. 17 (1): 013048. Bibcode:2015NJPh...17a3048R. ISSN 1367-2630. arXiv:1407.6244. doi:10.1088/1367-2630/17/1/013048.

[29] Doherty, Marcus W.; Manson, Neil B.; Delaney, Paul; Jelezko, Fedor; Wrachtrup, Jörg; Hollenberg, Lloyd C. L. (2013-07-01). "The nitrogen-vacancy colour centre in diamond". Physics Reports. 528 (1): 1–45. Bibcode:2013PhR...528....1D. arXiv:1302.3288. doi:10.1016/j.physrep.2013.02.001.

[30] Choi, SangKook (2012-01-01). "Mechanism for optical initialization of spin in NV⁻". Physical Review B. 86 (4). Bibcode:2012PhRvB..86d1202C. doi:10.1103/PhysRevB.86.041202.

[31] Robledo, Lucio; Bernien, Hannes; Sar, Toeno van der; Hanson, Ronald (2011-01-01). "Spin dynamics in the optical cycle of single nitrogen-vacancy centres in diamond". New Journal of Physics. 13 (2): 025013. Bibcode:2011NJPh...13b5013R. ISSN 1367-2630. arXiv:1010.1192. doi:10.1088/1367-2630/13/2/025013.

[32] Maze, J. R.; Stanwix, P. L.; Hodges, J. S.; Hong, S.; Taylor, J. M.; Cappellaro, P.; Jiang, L.; Dutt, M. V. G.; Togan, E.; Zibrov, A. S.; Yacoby, A.; Walsworth, R. L.; Lukin, M. D. (2008). "Nanoscale magnetic sensing with an individual electronic spin in diamond" (PDF). Nature. 455 (7213): 644–647. Bibcode:2008Natur.455..644M. PMID 18833275. doi:10.1038/nature07279.

[33] Dolde, F.; Fedder, H.; Doherty, M. W.; Nöbauer, T.; Rempp, F.; Balasubramanian, G.; Wolf, T.; Reinhard, F.; Hollenberg, L. C. L.; Jelezko, F.; Wrachtrup, J. (2011). "Electric-field sensing using single diamond spins". Nature Physics. 7 (6): 459. Bibcode:2011NatPh...7..459D. arXiv:1103.3432. doi:10.1038/nphys1969.

[34] Grazioso, F.; Patton, B. R.; Delaney, P.; Markham, M. L.; Twitchen, D. J.; Smith, J. M. (2013). "Measurement of the full stress tensor in a crystal using photoluminescence from point defects: The example of nitrogen vacancy centers in diamond". Applied Physics Letters. 103 (10): 101905. Bibcode:2013ApPhL.103j1905G. arXiv:1110.3658. doi:10.1063/1.4819834.

[35] Shao, Linbo; Zhang, Mian; Markham, Matthew; Edmonds, Andrew; Loncar, Marko (15 December 2016). "Diamond Radio Receiver: Nitrogen-Vacancy Centers as Fluorescent Transducers of Microwave Signals". Phys. …
Quantum mechanics
Not to be confused with Quantum field theory.

For a more accessible and less technical introduction to this topic, see Introduction to quantum mechanics.

Quantum mechanics (QM; also known as quantum physics or quantum theory) …

…profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle.
Important applications of quantum theory[2] include
quantum chemistry, superconducting magnets, light-
emitting diodes, and the laser, the transistor and
semiconductors such as the microprocessor, medical and
research imaging such as magnetic resonance imaging
and electron microscopy, and explanations for many bio-
logical and physical phenomena.
10.1 History
Main article: History of quantum mechanics
E = hν

…waves are neither simply particle nor wave but have certain properties of each. This originated the concept of wave-particle duality.

By 1930, quantum mechanics had been further unified and formalized by the work of David Hilbert, Paul Dirac and John von Neumann[11] with greater emphasis on measurement, the statistical nature of our knowledge of reality, and philosophical speculation about the "observer". It has since permeated many disciplines including quantum chemistry, quantum electronics, quantum optics, and quantum information science. Its speculative modern developments include string theory and quantum gravity theories. It also provides a useful framework for many features of the modern periodic table of elements, and describes the behaviors of atoms during chemical bonding and the flow of electrons in computer semiconductors, and therefore plays a crucial role in many modern technologies.

While quantum mechanics was constructed to describe the world of the very small, it is also needed to explain some macroscopic phenomena such as superconductors[12] and superfluids.[13]

The word quantum derives from the Latin, meaning "how great" or "how much".[14] In quantum mechanics, it refers to a discrete unit assigned to certain physical quantities such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics dealing with atomic and subatomic systems which is today called quantum mechanics. It underlies the mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics.[15] Some fundamental aspects of the theory are still actively studied.[16]

Quantum mechanics is essential to understanding the behavior of systems at atomic length scales and smaller. If the physical nature of an atom were solely described by classical mechanics, electrons would not orbit the nucleus, since orbiting electrons emit radiation (due to circular motion) and would eventually collide with the nucleus due to this loss of energy. This framework was unable to explain the stability of atoms. Instead, electrons remain in an uncertain, non-deterministic, smeared, probabilistic wave-particle orbital about the nucleus, defying the traditional assumptions of classical mechanics and electromagnetism.[17]

Quantum mechanics was initially developed to provide a better explanation and description of the atom, especially the differences in the spectra of light emitted by different isotopes of the same chemical element, as well as subatomic particles. In short, the quantum-mechanical atomic model has succeeded spectacularly in the realm where classical mechanics and electromagnetism falter.

Broadly speaking, quantum mechanics incorporates four classes of phenomena for which classical physics cannot account:

quantization of certain physical properties
quantum entanglement
principle of uncertainty
wave-particle duality

10.2 Mathematical formulations

Main article: Mathematical formulation of quantum mechanics
See also: Quantum logic

In the mathematically rigorous formulation of quantum mechanics developed by Paul Dirac,[18] David Hilbert,[19] John von Neumann,[20] and Hermann Weyl,[21] the possible states of a quantum mechanical system are symbolized[22] as unit vectors (called state vectors). Formally, these reside in a complex separable Hilbert space (variously called the state space or the associated Hilbert space of the system) that is well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projective space of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system; for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. Each observable is represented by a maximally Hermitian (precisely: by a self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can attain only those discrete eigenvalues.

In the formalism of quantum mechanics, the state of a system at a given time is described by a complex wave function, also referred to as state vector in a complex vector space.[23] This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one can never make simultaneous predictions of conjugate variables, such as position and momentum, to arbitrary precision. For instance, electrons may be considered (to a certain probability) to be located somewhere within a given region of space, but with their exact positions unknown. Contours of constant probability, often referred to as "clouds", may be drawn around the nucleus of an atom to conceptualize where the electron might be located with the most probability. Heisenberg's uncertainty principle quantifies the inability to precisely locate the particle given its conjugate momentum.[24]

According to one interpretation, as the result of a measurement the wave function containing the probability information for a system collapses from a given initial state to a particular eigenstate. The possible results of a measurement are the eigenvalues of the operator representing the observable, which explains the choice of Hermitian operators, for which all the eigenvalues are real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator. Heisenberg's uncertainty principle is represented by the statement that the operators corresponding to certain observables do not commute.

The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr-Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the relative state interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled, so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.[25]

Generally, quantum mechanics does not assign definite values. Instead, it makes a prediction using a probability distribution; that is, it describes the probability of obtaining the possible outcomes from measuring an observable. Often these results are skewed by many causes, such as dense probability clouds. Probability clouds are approximate (but better than the Bohr model) whereby electron location is given by a probability function, the wave function eigenvalue, such that the probability is the squared modulus of the complex amplitude, or quantum state nuclear attraction.[26][27] Naturally, these probabilities will depend on the quantum state at the "instant" of the measurement. Hence, uncertainty is involved in the value. There are, however, certain states that are associated with a definite value of a particular observable. These are known as eigenstates of the observable ("eigen" can be translated from German as meaning "inherent" or "characteristic").[28]

In the everyday world, it is natural and intuitive to think of everything (every observable) as being in an eigenstate. Everything appears to have a definite position, a definite momentum, a definite energy, and a definite time of occurrence. However, quantum mechanics does not pinpoint the exact values of a particle's position and momentum (since they are conjugate pairs) or its energy and time (since they too are conjugate pairs); rather, it provides only a range of probabilities in which that particle might be given its momentum and momentum probability. Therefore, it is helpful to use different words to describe states having uncertain values and states having definite values (eigenstates). Usually, a system will not be in an eigenstate of the observable (particle) we are interested in. However, if one measures the observable, the wave function will instantaneously be an eigenstate (or "generalized" eigenstate) of that observable. This process is known as wave function collapse, a controversial and much-debated process[29] that involves expanding the system under study to include the measurement device. If one knows the corresponding wave function at the instant before the measurement, one will be able to compute the probability of the wave function collapsing into each of the possible eigenstates. For example, the free particle in the previous example will usually have a wave function that is a wave packet centered around some mean position x0 (neither an eigenstate of position nor of momentum). When one measures the position of the particle, it is impossible to predict with certainty the result.[25] It is probable, but not certain, that it will be near x0, where the amplitude of the wave function is large. After the measurement is performed, having obtained some result x, the wave function collapses into a position eigenstate centered at x.[30]

The time evolution of a quantum state is described by the Schrödinger equation, in which the Hamiltonian (the operator corresponding to the total energy of the system) generates the time evolution. The time evolution of wave functions is deterministic in the sense that, given a wave function at an initial time, it makes a definite prediction of what the wave function will be at any later time.[31]

During a measurement, on the other hand, the change of the initial wave function into another, later wave function is not deterministic, it is unpredictable (i.e., random).[32][33] A time-evolution simulation can be seen here.

Wave functions change as time progresses. The Schrödinger equation describes how wave functions change in time, playing a role similar to Newton's second law in classical mechanics. The Schrödinger equation, applied to the aforementioned example of the free particle, predicts that the center of a wave packet will move through space at a constant velocity (like a classical particle with no forces acting on it). However, the wave packet will also spread out as time progresses, which means that the position becomes more uncertain with time. This also has the effect of turning a position eigenstate (which can be thought of as an infinitely sharp wave packet) into a broadened wave packet that no longer represents a (definite, certain) position eigenstate.[34]

Some wave functions produce probability distributions …
and that observables of that system are Hermitian operators acting on that space, although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical mechanics when a system moves to higher energies or, equivalently, larger quantum numbers, i.e. whereas a single particle exhibits a degree of randomness, in systems incorporating millions of particles averaging takes over and, at the high energy limit, the statistical probability of random behaviour approaches zero. In other words, classical mechanics is simply a quantum mechanics of large systems. This "high energy" limit is known as the classical or correspondence limit. One can even start from an established classical model of a particular system, then attempt to guess the underlying quantum model that would give rise to the classical model in the correspondence limit.

When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.

Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein-Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one that has been employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical −e²/(4πε₀r) Coulomb potential. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.

Quantum field theories for the strong nuclear force and the weak nuclear force have also been developed. The quantum field theory of the strong nuclear force is called quantum chromodynamics, and describes the interactions of subnuclear particles such as quarks and gluons. The weak nuclear force and the electromagnetic force were unified, in their quantized forms, into a single quantum field theory (known as electroweak theory), by the physicists Abdus Salam, Sheldon Glashow and Steven Weinberg. These three men shared the Nobel Prize in Physics in 1979 for this work.[39]

It has proven difficult to construct quantum models of gravity, the remaining fundamental force. Semi-classical approximations are workable, and have led to predictions such as Hawking radiation. However, the formulation of a complete theory of quantum gravity is hindered by apparent incompatibilities between general relativity (the most accurate theory of gravity currently known) and some of the fundamental assumptions of quantum theory. The resolution of these incompatibilities is an area of active research, and theories such as string theory are among the possible candidates for a future theory of quantum gravity.

Classical mechanics has also been extended into the complex domain, with complex classical mechanics exhibiting behaviors similar to quantum mechanics.[40]

10.4.1 Quantum mechanics and classical physics

Predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy.[41] According to the correspondence principle between classical and quantum mechanics, all objects obey the laws of quantum mechanics, and classical mechanics is just an approximation for large systems of objects (or a statistical quantum mechanics of a large collection of particles).[42] The laws of classical mechanics thus follow from the laws of quantum mechanics as a statistical average at the limit of large systems or large quantum numbers.[43] However, chaotic systems do not have good quantum numbers, and quantum chaos studies the relationship between classical and quantum descriptions in these systems.

Quantum coherence is an essential difference between classical and quantum theories as illustrated by the Einstein-Podolsky-Rosen (EPR) paradox, an attack on a certain philosophical interpretation of quantum mechanics by an appeal to local realism.[44] Quantum interference involves adding together probability amplitudes, whereas classical waves infer that there is an adding together of intensities. For microscopic bodies, the extension of the system is much smaller than the coherence length, which gives rise to long-range entanglement and other nonlocal phenomena characteristic of quantum systems.[45] Quantum coherence is not typically evident at macroscopic scales, though an exception to this rule may occur at extremely low temperatures (i.e. approaching absolute zero) at which quantum behavior may manifest itself macroscopically.[46] This is in accordance with the following observations:
Many macroscopic properties of a classical system are a direct consequence of the quantum behavior of its parts. For example, the stability of bulk matter (consisting of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of the interaction of electric charges under the rules of quantum mechanics.[47]

While the seemingly exotic behavior of matter posited by quantum mechanics and relativity theory become more apparent when dealing with particles of extremely small size or velocities approaching the speed of light, the laws of classical, often considered "Newtonian", physics remain accurate in predicting the behavior of the vast majority of large objects (on the order of the size of large molecules or bigger) at velocities much smaller than the velocity of light.[48]

10.4.2 Copenhagen interpretation of quantum versus classical kinematics

A big difference between classical and quantum mechanics is that they use very different kinematic descriptions.[49]

In Niels Bohr's mature view, quantum mechanical phenomena are required to be experiments, with complete descriptions of all the devices for the system, preparative, intermediary, and finally measuring. The descriptions are in macroscopic terms, expressed in ordinary language, supplemented with the concepts of classical mechanics.[50][51][52][53] The initial condition and the final condition of the system are respectively described by values in a configuration space, for example a position space, or some equivalent space such as a momentum space. Quantum mechanics does not admit a completely precise description, in terms of both position and momentum, of an initial condition or state (in the classical sense of the word) that would support a precisely deterministic and causal prediction of a final condition.[54][55] In this sense, advocated by Bohr in his mature writings, a quantum phenomenon is a process, a passage from initial to final condition, not an instantaneous state in the classical sense of that word.[56][57] Thus there are two kinds of processes in quantum mechanics: stationary and transitional. For a stationary process, the initial and final condition are the same. For a transition, they are different. Obviously by definition, if only the initial condition is given, the process is not determined.[54] Given its initial condition, prediction of its final condition is possible, causally but only probabilistically, because the Schrödinger equation is deterministic for wave function evolution, but the wave function describes the system only probabilistically.[58][59]

For many experiments, it is possible to think of the initial and final conditions of the system as being a particle. In some cases it appears that there are potentially several spatially distinct pathways or trajectories by which a particle might pass from initial to final condition. It is an important feature of the quantum kinematic description that it does not permit a unique definite statement of which of those pathways is actually followed. Only the initial and final conditions are definite, and, as stated in the foregoing paragraph, they are defined only as precisely as allowed by the configuration space description or its equivalent. In every case for which a quantum kinematic description is needed, there is always a compelling reason for this restriction of kinematic precision. An example of such a reason is that for a particle to be experimentally found in a definite position, it must be held motionless; for it to be experimentally found to have a definite momentum, it must have free motion; these two are logically incompatible.[60][61]

Classical kinematics does not primarily demand experimental description of its phenomena. It allows completely precise description of an instantaneous state by a value in phase space, the Cartesian product of configuration and momentum spaces. This description simply assumes or imagines a state as a physically existing entity without concern about its experimental measurability. Such a description of an initial condition, together with Newton's laws of motion, allows a precise deterministic and causal prediction of a final condition, with a definite trajectory of passage. Hamiltonian dynamics can be used for this. Classical kinematics also allows the description of a process analogous to the initial and final condition description used by quantum mechanics. Lagrangian mechanics applies to this.[62] For processes that need account to be taken of actions of a small number of Planck constants, classical kinematics is not adequate; quantum mechanics is needed.

10.4.3 General relativity and quantum mechanics

Even with the defining postulates of both Einstein's theory of general relativity and quantum theory being indisputably supported by rigorous and repeated empirical evidence, and while they do not directly contradict each other theoretically (at least with regard to their primary claims), they have proven extremely difficult to incorporate into one consistent, cohesive model.[63]

Gravity is negligible in many areas of particle physics, so that unification between general relativity and quantum mechanics is not an urgent issue in those particular applications. However, the lack of a correct theory of quantum gravity is an important issue in physical cosmology and the search by physicists for an elegant "Theory of Everything" (TOE). Consequently, resolving the inconsistencies between both theories has been a major goal of 20th and 21st century physics. Many prominent physicists, including Stephen Hawking, have labored for many years in the attempt to discover a theory underlying everything. This TOE would combine not only the different models of subatomic physics, but also derive the four fundamental forces of nature - the strong force, electromagnetism, the weak force, and gravity - from a single force or phenomenon. While Stephen Hawking was initially a believer in the Theory of Everything, after considering Gödel's Incompleteness Theorem, he has concluded that one is not obtainable, and has stated so publicly in his lecture "Gödel and the End of Physics" (2002).[64]

The quest to unify the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently (in the perturbative regime at least) the most accurately tested physical theory in competition with general relativity,[65][66] has been successfully merged with the weak nuclear force into the electroweak force and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around 10¹⁴ GeV the three aforementioned forces are fused into a single unified field.[67] Beyond this "grand unification", it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10¹⁹ GeV. However, while special relativity is parsimoniously incorporated into quantum electrodynamics, the expanded general relativity, currently the best theory describing the gravitational force, has not been fully incorporated into quantum theory. One of those searching for a coherent TOE is Edward Witten, a theoretical physicist who formulated the M-theory, which is an attempt at describing the supersymmetry-based string theory. M-theory posits that our apparent 4-dimensional spacetime is, in reality, actually an 11-dimensional spacetime containing 10 spatial dimensions and 1 time dimension, although 7 …

…electromagnetism or the discrete levels of the energy of the atoms. But here it is space itself which is discrete. More precisely, space can be viewed as an extremely fine fabric or network "woven" of finite loops. These networks of loops are called spin networks. The evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 1.616×10⁻³⁵ m. According to theory, there is no meaning to length shorter than this (cf. Planck scale energy). Therefore, LQG predicts that not just matter, but also space itself, has an atomic structure.

10.5 Philosophical implications

Since its inception, the many counter-intuitive aspects and results of quantum mechanics have provoked strong philosophical debates and many interpretations. Even fundamental issues, such as Max Born's basic rules concerning probability amplitudes and probability distributions, took decades to be appreciated by society and many leading scientists. Richard Feynman once said, "I think I can safely say that nobody understands quantum mechanics."[68] According to Steven Weinberg, "There is now in my opinion no entirely satisfactory interpretation of quantum mechanics."[69]

The Copenhagen interpretation, due largely to Niels Bohr and Werner Heisenberg, remains most widely accepted amongst physicists, some 75 years after its enunciation. According to this interpretation, the probabilistic nature of quantum mechanics is not a temporary feature which will eventually be replaced by a deterministic theory, but instead must be considered a final renunciation of the classical idea of "causality". It is also believed therein that any well-defined application of the quantum mechanical formalism must always make reference to the experimental arrangement, due to the conjugate nature of evidence obtained under different experimental situations.

Albert Einstein, himself one of the founders of quantum …
of the spatial dimensions are - at lower energies - com- theory, did not accept some of the more philosophical or
pletely compactied (or innitely curved) and not read- metaphysical interpretations of quantum mechanics, such
ily amenable to measurement or probing. as rejection of determinism and of causality. He is fa-
Another popular theory is Loop quantum gravity (LQG), mously quoted as saying, in response to this aspect, God
a theory rst proposed by Carlo Rovelli that describes does not play with dice.[70] He rejected the concept that
the quantum properties of gravity. It is also a theory of the state of a physical system depends on the experimen-
quantum space and quantum time, because in general rel- tal arrangement for its measurement. He held that a state
ativity the geometry of spacetime is a manifestation of of nature occurs in its own right, regardless of whether or
gravity. LQG is an attempt to merge and adapt standard how it might be observed. In that view, he is supported
quantum mechanics and standard general relativity. The by the currently accepted denition of a quantum state,
main output of the theory is a physical picture of space which remains invariant under arbitrary choice of cong-
where space is granular. The granularity is a direct conse- uration space for its representation, that is to say, man-
quence of the quantization. It has the same nature of the ner of observation. He also held that underlying quantum
granularity of the photons in the quantum theory of elec- mechanics there should be a theory that thoroughly and
directly expresses the rule against action at a distance; in other words, he insisted on the principle of locality. He considered, but rejected on theoretical grounds, a particular proposal for hidden variables to obviate the indeterminism or acausality of quantum mechanical measurement. He considered that quantum mechanics was a currently valid but not a permanently definitive theory for quantum phenomena. He thought its future replacement would require profound conceptual advances, and would not come quickly or easily. The Bohr–Einstein debates provide a vibrant critique of the Copenhagen interpretation from an epistemological point of view. In arguing for his views, he produced a series of objections, the most famous of which has become known as the Einstein–Podolsky–Rosen paradox.

John Bell showed that this EPR paradox led to experimentally testable differences between quantum mechanics and theories that rely on added hidden variables. Experiments have been performed confirming the accuracy of quantum mechanics, thereby demonstrating that quantum mechanics cannot be improved upon by addition of hidden variables.[71] Alain Aspect's initial experiments in 1982, and many subsequent experiments since, have definitively verified quantum entanglement.

Entanglement, as demonstrated in Bell-type experiments, does not, however, violate causality, since no transfer of information happens. Quantum entanglement forms the basis of quantum cryptography, which is proposed for use in high-security commercial applications in banking and government.

The Everett many-worlds interpretation, formulated in 1956, holds that all the possibilities described by quantum theory simultaneously occur in a multiverse composed of mostly independent parallel universes.[72] This is not accomplished by introducing some "new axiom" to quantum mechanics, but on the contrary, by removing the axiom of the collapse of the wave packet. All of the possible consistent states of the measured system and the measuring apparatus (including the observer) are present in a real physical (not just formally mathematical, as in other interpretations) quantum superposition. Such a superposition of consistent state combinations of different systems is called an entangled state. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we can only observe the universe (i.e., the consistent state contribution to the aforementioned superposition) that we, as observers, inhabit. Everett's interpretation is perfectly consistent with John Bell's experiments and makes them intuitively understandable. However, according to the theory of quantum decoherence, these parallel universes will never be accessible to us. The inaccessibility can be understood as follows: once a measurement is done, the measured system becomes entangled with both the physicist who measured it and a huge number of other particles, some of which are photons flying away at the speed of light towards the other end of the universe. In order to prove that the wave function did not collapse, one would have to bring all these particles back and measure them again, together with the system that was originally measured. Not only is this completely impractical, but even if one could theoretically do this, it would have to destroy any evidence that the original measurement took place (including the physicist's memory). In light of these Bell tests, Cramer (1986) formulated his transactional interpretation.[73] Relational quantum mechanics appeared in the late 1990s as the modern derivative of the Copenhagen interpretation.

10.6 Applications

Quantum mechanics has had enormous[74] success in explaining many of the features of our universe. Quantum mechanics is often the only tool available that can reveal the individual behaviors of the subatomic particles that make up all forms of matter (electrons, protons, neutrons, photons, and others). Quantum mechanics has strongly influenced string theories, candidates for a Theory of Everything (see reductionism).

Quantum mechanics is also critically important for understanding how individual atoms combine covalently to form molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. Relativistic quantum mechanics can, in principle, mathematically describe most of chemistry. Quantum mechanics can also provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others and the magnitudes of the energies involved.[75] Furthermore, most of the calculations performed in modern computational chemistry rely on quantum mechanics.

In many aspects modern technology operates at a scale where quantum effects are significant.

10.6.1 Electronics

Many modern electronic devices are designed using quantum mechanics. Examples include the laser, the transistor (and thus the microchip), the electron microscope, and magnetic resonance imaging (MRI). The study of semiconductors led to the invention of the diode and the transistor, which are indispensable parts of modern electronics systems, computer and telecommunication devices. Another application is the light-emitting diode, which is a high-efficiency source of light.

Many electronic devices operate by the effect of quantum tunneling. It even exists in the simple light switch. The switch would not work if electrons could not quantum tunnel through the layer of oxidation on the metal contact surfaces. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells. Some negative differential resistance devices, such as the resonant tunneling diode, also utilize the quantum tunneling effect. Unlike classical diodes, its current is carried by resonant tunneling through two potential barriers (see right figure). Its negative resistance behavior can only be understood with quantum mechanics: as the confined state moves close to the Fermi level, tunnel current increases. As it moves away, current decreases. Quantum mechanics is vital to understanding and designing such electronic devices.

A working mechanism of a resonant tunneling diode device, based on the phenomenon of quantum tunneling through potential barriers. (Left: band diagram; Center: transmission coefficient; Right: current-voltage characteristics.) As shown in the band diagram (left), although there are two barriers, electrons still tunnel through via the confined states between two barriers (center), conducting current.

10.6.2 Cryptography

Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to more fully develop quantum cryptography, which will theoretically allow guaranteed secure transmission of information.

10.6.3 Quantum computing

A more distant goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Instead of using classical bits, quantum computers use qubits, which can be in superpositions of states. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum information over arbitrary distances.

10.6.4 Macroscale quantum effects

While quantum mechanics primarily applies to the smaller atomic regimes of matter and energy, some systems exhibit quantum mechanical effects on a large scale.

Quantum theory also provides accurate descriptions for many previously unexplained phenomena, such as black-body radiation and the stability of the orbitals of electrons in atoms. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures.[77] Recent work on photosynthesis has provided evidence that quantum correlations play an essential role in this fundamental process of plants and many other organisms.[78] Even so, classical physics can often provide good approximations to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. Since classical formulas are much simpler and easier to compute than quantum formulas, classical approximations are used and preferred when the system is large enough to render the effects of quantum mechanics insignificant.

10.7 Examples

10.7.1 Free particle

For example, consider a free particle. In quantum mechanics, there is wave-particle duality, so the properties of the particle can be described as the properties of a wave. Therefore, its quantum state can be represented as a wave of arbitrary shape and extending over space as a wave function. The position and momentum of the particle are observables. The uncertainty principle states that both the position and the momentum cannot simultaneously be measured with complete precision. However, one can measure the position (alone) of a moving free particle, creating an eigenstate of position with a wave function that is very large (a Dirac delta) at a particular position x, and zero everywhere else. If one performs a position measurement on such a wave function, the resultant x will be obtained with 100% probability (i.e., with full certainty, or complete precision). This is called an eigenstate of position, or, stated in mathematical terms, a generalized position eigenstate (eigendistribution). If the particle is in an eigenstate of position, then its momentum is completely unknown. On the other hand, if the particle is in an eigenstate of momentum, then its position is completely unknown.[79] In an eigenstate of momentum having a plane wave form, it can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate.[80]

10.7.2 Step potential

Main article: Solution of Schrödinger equation for a step potential

The potential in this case is given by:

V(x) = \begin{cases} 0, & x < 0, \\ V_0, & x \ge 0. \end{cases}

10.7.3 Rectangular potential barrier

Main article: Rectangular potential barrier

This is a model for the quantum tunneling effect which plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy. Quantum tunneling is central to physical phenomena involved in superlattices.

10.7.4 Particle in a box

Inside the box the particle is free, so the general solutions of the Schrödinger equation are superpositions of left- and right-moving waves,

\psi(x) = A e^{ikx} + B e^{-ikx}, \qquad E = \frac{\hbar^2 k^2}{2m},

or, from Euler's formula,

\psi(x) = C \sin(kx) + D \cos(kx).

The infinite potential walls of the box require the wave function to vanish at the boundaries, so \psi(0) = 0 and D = 0. At x = L, \sin(kL) = 0 requires kL = n\pi, and the allowed energies are

E = \frac{\hbar^2 \pi^2 n^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}.

10.7.5 Finite potential well

Main article: Finite potential well

A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth.

The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem, as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions, as it is nonzero in regions outside the well.

10.7.6 Harmonic oscillator

The potential in this case is

V(x) = \frac{1}{2} m \omega^2 x^2.

This problem can either be treated by directly solving the Schrödinger equation, which is not trivial, or by using the more elegant "ladder method" first proposed by Paul Dirac. The eigenstates are given by

\psi_n(x) = \frac{1}{\sqrt{2^n \, n!}} \left( \frac{m\omega}{\pi\hbar} \right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}} H_n\!\left( \sqrt{\frac{m\omega}{\hbar}} \, x \right), \qquad n = 0, 1, 2, \ldots,

where H_n are the Hermite polynomials

H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2},

and the corresponding energy levels are

E_n = \hbar\omega \left( n + \frac{1}{2} \right).
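The particle-in-a-box energy formula above is easy to check numerically. The sketch below (plain Python; the electron mass and the 1 nm box width are illustrative choices, not values from the text) evaluates E = n²h²/(8mL²) and confirms it agrees with the equivalent form ħ²π²n²/(2mL²):

```python
import math

h = 6.62607015e-34      # Planck's constant, J*s (exact SI value)
hbar = h / (2 * math.pi)
m_e = 9.1093837015e-31  # electron mass, kg
L = 1e-9                # box width: 1 nm (illustrative)

def box_energy(n, m=m_e, L=L):
    """Particle-in-a-box level E_n = n^2 h^2 / (8 m L^2), in joules."""
    return n**2 * h**2 / (8 * m * L**2)

def box_energy_alt(n, m=m_e, L=L):
    """The same level written as hbar^2 pi^2 n^2 / (2 m L^2)."""
    return hbar**2 * math.pi**2 * n**2 / (2 * m * L**2)

for n in (1, 2, 3):
    E = box_energy(n)
    # The two textbook forms are algebraically identical (hbar = h / 2pi).
    assert math.isclose(E, box_energy_alt(n), rel_tol=1e-12)
    print(f"n={n}: E = {E:.3e} J = {E / 1.602176634e-19:.3f} eV")
```

Note that the levels grow as n², so the spacing between adjacent box levels increases with n, unlike the evenly spaced harmonic-oscillator ladder.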
10.8 See also

Angular momentum diagrams (quantum mechanics)
EPR paradox
Fractional quantum mechanics
List of quantum-mechanical systems with analytical solutions
Macroscopic quantum phenomena
Phase space formulation
Regularization (physics)
Spherical basis

10.9 Notes

[5] Kragh, Helge (2002). Quantum Generations: A History of Physics in the Twentieth Century. Princeton University Press. p. 58. ISBN 0-691-09552-3. Extract of page 58.
[6] Ben-Menahem, Ari (2009). Historical Encyclopedia of Natural and Mathematical Sciences, Volume 1. Springer. p. 3678. ISBN 3540688315. Extract of page 3678.
[7] E. Arunan (2010). "Peter Debye" (PDF). Resonance (journal). Indian Academy of Sciences. 15 (12).
[8] Kuhn, T. S. (1978). Black-Body Theory and the Quantum Discontinuity 1894-1912. Oxford: Clarendon Press. ISBN 0195023838.
[9] Kragh, Helge (1 December 2000). "Max Planck: the reluctant revolutionary". PhysicsWorld.com.
[10] Einstein, A. (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" [On a heuristic point of view concerning the production and transformation of light]. Annalen der Physik. 17 (6): 132-148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607. Reprinted in The Collected Papers of Albert Einstein, John Stachel, editor, Princeton University Press, 1989, Vol. 2, pp. 149-166, in German; see also "Einstein's early work on the quantum hypothesis", ibid. pp. 134-148.
[11] van Hove, Leon (1958). "Von Neumann's contributions to quantum mechanics" (PDF). Bulletin of the American Mathematical Society. 64 (3): Part 2: 95-99. doi:10.1090/s0002-9904-1958-10206-2.
[12] Feynman, Richard. "The Feynman Lectures on Physics III 21-4". California Institute of Technology. Retrieved 2015-11-24. "...it was long believed that the wave function of the Schrödinger equation would never have a macroscopic representation analogous to the macroscopic representation of the amplitude for photons. On the other hand, it is now realized that the phenomena of superconductivity presents us with just this situation."
[13] Richard Packard (2006). "Berkeley Experiments on Superfluid Macroscopic Quantum Effects". Archived November 25, 2015, at the Wayback Machine. Retrieved 2015-11-24.
[19] D. Hilbert, Lectures on Quantum Theory, 1915-1927.
[20] J. von Neumann, Mathematische Grundlagen der Quantenmechanik, Springer, Berlin, 1932 (English translation: Mathematical Foundations of Quantum Mechanics, Princeton University Press, 1955).
[21] H. Weyl, The Theory of Groups and Quantum Mechanics, 1931 (original title: Gruppentheorie und Quantenmechanik).
[22] Dirac, P.A.M. (1958). The Principles of Quantum Mechanics, 4th edition, Oxford University Press, Oxford UK, p. ix: "For this reason I have chosen the symbolic method, introducing the representatives later merely as an aid to practical calculation."
[23] Greiner, Walter; Müller, Berndt (1994). Quantum Mechanics Symmetries, Second edition. Springer-Verlag. p. 52. ISBN 3-540-58080-8. Chapter 1, p. 52.
[24] "Heisenberg - Quantum Mechanics, 1925-1927: The Uncertainty Relations". Aip.org. Retrieved 2012-08-18.
[25] Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics, Second edition. Jones and Bartlett Publishers, Inc. p. 215. ISBN 0-7637-2470-X. Chapter 8, p. 215.
[26] "[Abstract] Visualization of Uncertain Particle Movement". Actapress.com. Retrieved 2012-08-18.
[27] Hirshleifer, Jack (2001). The Dark Side of the Force: Economic Foundations of Conflict Theory. Cambridge University Press. p. 265. ISBN 0-521-80412-4. Chapter , p.
[28] "dict.cc dictionary :: eigen :: German-English translation". dict.cc. Retrieved 11 September 2015.
[42] Tipler, Paul; Llewellyn, Ralph (2008). Modern Physics (5th ed.). W. H. Freeman and Company. pp. 160-161. ISBN 978-0-7167-7550-8.
[43] "Quantum mechanics course iwhatisquantummechanics". Scribd.com. 2008-09-14. Retrieved 2012-08-18.
[44] A. Einstein, B. Podolsky, and N. Rosen, "Can quantum-mechanical description of physical reality be considered complete?" Phys. Rev. 47, 777 (1935).
[45] N. P. Landsman (June 13, 2005). "Between classical and quantum" (PDF). Retrieved 2012-08-19. Handbook of the Philosophy of Science Vol. 2: Philosophy of Physics (eds. John Earman & Jeremy Butterfield).
[46] (see macroscopic quantum phenomena, Bose-Einstein condensate, and Quantum machine)
[47] "Atomic Properties". Academic.brooklyn.cuny.edu. Retrieved 2012-08-18.
[52] Bohr, N. (1948). "On the notions of complementarity and causality", Dialectica 2: 312-319. "As a more appropriate way of expression, one may advocate limitation of the use of the word phenomenon to refer to observations obtained under specified circumstances, including an account of the whole experiment."
[53] Ludwig, G. (1987). An Axiomatic Basis for Quantum Mechanics, volume 2, Quantum Mechanics and Macrosystems, translated by K. Just, Springer, Berlin, ISBN 978-3-642-71899-1, Chapter XIII, Special Structures in Preparation and Registration Devices, 1, Measurement chains, p. 132.
[54] Heisenberg, W. (1927). "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", Z. Phys. 43: 172-198. Translation as "The actual content of quantum theoretical kinematics and mechanics" here. "But in the rigorous formulation of the law of causality, 'If we know the present precisely, we can calculate the future' it is not the conclusion that is faulty, but the premise."
[55] Green, H.S. (1965). Matrix Mechanics, with a foreword by Max Born, P. Noordhoff Ltd, Groningen. "It is not possible, therefore, to provide 'initial conditions' for the prediction of the behaviour of atomic systems, in the way contemplated by classical physics. This is accepted by quantum theory, not merely as an experimental difficulty, but as a fundamental law of nature", p. 32.
[56] Rosenfeld, L. (1957). "Misunderstandings about the foundations of quantum theory", pp. 41-45 in Observation and Interpretation, edited by S. Körner, Butterworths, London. "A phenomenon is therefore a process (endowed with the characteristic quantal wholeness) involving a definite type of interaction between the system and the apparatus."
[57] Dirac, P.A.M. (1973). "Development of the physicist's conception of nature", pp. 1-55 in The Physicist's Conception of Nature, edited by J. Mehra, D. Reidel, Dordrecht, ISBN 90-277-0345-0, p. 5: "That led Heisenberg to his really masterful step forward, resulting in the new quantum mechanics. His idea was to build up a theory entirely in terms of quantities referring to two states."
[58] Born, M. (1927). "Physical aspects of quantum mechanics", Nature 119: 354-357. "These probabilities are thus dynamically determined. But what the system actually does is not determined ..."
[59] Messiah, A. (1961). Quantum Mechanics, volume 1, translated by G.M. Temmer from the French Mécanique Quantique, North-Holland, Amsterdam, p. 157.
[60] Bohr, N. (1928). "The Quantum postulate and the recent development of atomic theory", Nature 121: 580-590.
[61] Heisenberg, W. (1930). The Physical Principles of the Quantum Theory, translated by C. Eckart and F.C. Hoyt, University of Chicago Press.
[62] Goldstein, H. (1950). Classical Mechanics, Addison-Wesley, ISBN 0-201-02510-8.
[63] "There is as yet no logically consistent and complete relativistic quantum field theory.", p. 4. V. B. Berestetskii, E. M. Lifshitz, L. P. Pitaevskii (1971). J. B. Sykes, J. S. Bell (translators). Relativistic Quantum Theory 4, part I. Course of Theoretical Physics (Landau and Lifshitz). ISBN 0-08-016025-5.
[64] "Stephen Hawking; Gödel and the end of physics". cam.ac.uk. Retrieved 11 September 2015.
[65] "The Nature of Space and Time". google.com. Retrieved 11 September 2015.
[66] Tatsumi Aoyama; Masashi Hayakawa; Toichiro Kinoshita; Makiko Nio (2012). "Tenth-Order QED Contribution to the Electron g-2 and an Improved Value of the Fine Structure Constant". Physical Review Letters. 109 (11): 111807. Bibcode:2012PhRvL.109k1807A. arXiv:1205.5368v2. doi:10.1103/PhysRevLett.109.111807.
[67] Parker, B. (1993). Overcoming some of the problems. pp. 259-279.
[68] The Character of Physical Law (1965), Ch. 6; also quoted in The New Quantum Universe (2003), by Tony Hey and Patrick Walters.
[69] Weinberg, S. "Collapse of the State Vector", Phys. Rev. A 85, 062116 (2012).
[70] Harrison, Edward (16 March 2000). Cosmology: The Science of the Universe. Cambridge University Press. p. 239. ISBN 978-0-521-66148-5.
[71] "Action at a Distance in Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. 2007-01-26. Retrieved 2012-08-18.
[72] "Everett's Relative-State Formulation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. Retrieved 2012-08-18.
[73] "The Transactional Interpretation of Quantum Mechanics" by John Cramer. Reviews of Modern Physics 58, 647-688, July (1986).
[74] See, for example, the Feynman Lectures on Physics for some of the technological applications which use quantum mechanics, e.g., transistors (vol III, pp. 14-11 ff.), integrated circuits, which are follow-on technology in solid-state physics (vol II, pp. 8-6), and lasers (vol III, pp. 9-13).
[75] Pauling, Linus; Wilson, Edgar Bright (1985-03-01). Introduction to Quantum Mechanics with Applications to Chemistry. ISBN 9780486648712. Retrieved 2012-08-18.
[76] Chen, Xie; Gu, Zheng-Cheng; Wen, Xiao-Gang (2010). "Local unitary transformation, long-range quantum entanglement, wave function renormalization, and topological order". Phys. Rev. B. 82: 155138. Bibcode:2010PhRvB..82o5138C. arXiv:1004.3835. doi:10.1103/physrevb.82.155138.
[77] Anderson, Mark (2009-01-13). "Is Quantum Mechanics Controlling Your Thoughts? | Subatomic Particles". DISCOVER Magazine. Retrieved 2012-08-18.
[78] "Quantum mechanics boosts photosynthesis". physicsworld.com. Retrieved 2010-10-23.
[79] Davies, P. C. W.; Betts, David S. (1984). Quantum Mechanics, Second edition. Chapman and Hall. p. 79. ISBN 0-7487-4446-0. Chapter 6, p. 79.
[80] Baofu, Peter (2007-12-31). The Future of Complexity: Conceiving a Better Way to Understand Order and Chaos. ISBN 9789812708991. Retrieved 2012-08-18.
[81] "Derivation of particle in a box", chemistry.tidalswan.com.
10.11 Further reading

Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 0-13-111892-7. OCLC 40251748. A standard undergraduate text.
Eisberg, Robert; Resnick, Robert (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). Wiley. ISBN 0-471-87373-X.
Max Jammer, 1966. The Conceptual Development of Quantum Mechanics. McGraw Hill.
Liboff, Richard L. (2002). Introductory Quantum Mechanics. Addison-Wesley. ISBN 0-8053-8714-5.
Merzbacher, Eugen (1998). Quantum Mechanics. Wiley, John & Sons, Inc. ISBN 0-471-88702-1.
Sakurai, J. J. (1994). Modern Quantum Mechanics. Addison Wesley. ISBN 0-201-53929-2.
Shankar, R. (1994). Principles of Quantum Mechanics. Springer. ISBN 0-306-44790-8.
Stone, A. Douglas (2013). Einstein and the Quantum. Princeton University Press. ISBN 978-0-691-13968-5.
Martinus J. G. Veltman, 2003. Facts and Mysteries in Elementary Particle Physics.
Shushi, Tomer (2014). "The Influence of Particle Interactions on the Existence of Quantum Particles Properties" (PDF). Haifa, Israel: Journal of Physical Science and Application.
Zukav, Gary (1979, 2001). The Dancing Wu Li Masters: An Overview of the New Physics (Perennial Classics Edition). HarperCollins.

10.12 External links

Stanford Continuing Education PHY 25: Quantum Mechanics by Leonard Susskind, see course description Fall 2007
5 Examples in Quantum Mechanics
Imperial College Quantum Mechanics Course
Spark Notes - Quantum Physics
Quantum Physics Online: interactive introduction to quantum mechanics (RS applets)
Experiments to the foundations of quantum physics with single photons
AQME: Advancing Quantum Mechanics for Engineers, by T. Barzso, D. Vasileska and G. Klimeck; online learning resource with simulation tools on nanohub
Quantum Mechanics by Martin Plenio
Quantum Mechanics by Richard Fitzpatrick
Online course on Quantum Transport
Markov chain
In probability theory and related fields, a Markov process (or Markoff process), named after the Russian mathematician Andrey Markov, is a stochastic process that satisfies the Markov property[1][2] (sometimes characterized as "memorylessness"). Loosely speaking, a process satisfies the Markov property if one can make predictions for the future of the process based solely on its present state just as well as one could knowing the process's full history; i.e., conditional on the present state of the system, its future and past states are independent.

A Markov chain is a type of Markov process that has either discrete state space or discrete index set (often representing time), but the precise definition of a Markov chain varies.[3] For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time),[4][5][6][7] but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space).[8]

Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906, but earlier uses of Markov processes already existed.[9][10][11] Random walks on the integers and the Gambler's ruin problem are examples of Markov processes[12][13] and were studied hundreds of years earlier.[14][15] Two important examples of Markov processes are the Wiener process, also known as the Brownian motion process, and the Poisson process,[16] which are considered the most important and central stochastic processes in the theory of stochastic processes,[17][18][19] and were discovered repeatedly and independently, both before and after 1906, in various settings.[20][21] These two processes are Markov processes in continuous time, while random walks on the integers and the Gambler's ruin problem are examples of Markov processes in discrete time.[12][13]

Markov chains have many applications as statistical models of real-world processes,[22][23][24] such as studying cruise control systems in motor vehicles, queues or lines of customers arriving at an airport, exchange rates of currencies, storage systems such as dams, and population growths of certain animal species.[25] The algorithm known as PageRank, which was originally proposed for the internet search engine Google, is based on a Markov process.[26][27] Furthermore, Markov processes are the basis for general stochastic simulation methods known as Gibbs sampling and Markov Chain Monte Carlo, are used for simulating random objects with specific probability distributions, and have found extensive application in Bayesian statistics.[25][28][29]

The adjective Markovian is used to describe something that is related to a Markov process.[30]

A diagram representing a two-state Markov process, with the states labelled E and A. Each number represents the probability of the Markov process changing from one state to another state, with the direction indicated by the arrow. For example, if the Markov process is in state A, then the probability it changes to state E is 0.4, while the probability it remains in state A is 0.6.

11.1 Introduction

A Markov chain is a stochastic process with the Markov property. The term "Markov chain" refers to the sequence of random variables such a process moves through, with the Markov property defining serial dependence only between adjacent periods (as in a "chain"). It can thus be used for describing systems that follow a
with probability 4/10 or cheese with probability 6/10. It will not eat lettuce again tomorrow.

This creature's eating habits can be modeled with a Markov chain since its choice tomorrow depends solely on what it ate today, not what it ate yesterday or any other time in the past. One statistical property that could be calculated is the expected percentage, over a long period, of the days on which the creature will eat grapes.

A series of independent events (for example, a series of coin flips) satisfies the formal definition of a Markov chain. However, the theory is usually applied only when the probability distribution of the next step depends non-trivially on the current state.

11.2 History

Andrey Markov studied Markov chains in the early 20th century. Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov, who claimed independence was necessary for the weak law of large numbers to hold.[36] In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption,[37][38][39] which had been commonly regarded as a requirement for such mathematical laws to hold.[39] Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.[37]

In 1912 Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov.[37][38] After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades

paper an equation, now called the Chapman-Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement.[46] The differential equations are now called the Kolmogorov equations[47] or the Kolmogorov-Chapman equations.[48] Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.[43]

11.3 Examples

Main article: Examples of Markov chains

11.3.1 Gambling

See also: random walk and Markov chain

Suppose that you start with $10, and you wager $1 on an unending, fair, coin toss indefinitely, or until you lose all of your money. If X_n represents the number of dollars you have after n tosses, with X_0 = 10, then the sequence {X_n : n in N} is a Markov process. If I know that you have $12 now, then it would be expected that with even odds, you will either have $11 or $13 after the next toss. This guess is not improved by the added knowledge that you started with $10, then went up to $11, down to $10, up to $11, and then to $12.

The process described here is a Markov chain on a countable state space that follows a random walk.

11.3.2 A birth-death process

See also: birth-death process and Poisson point process
earlier by Irne-Jules Bienaym.[40] Starting in 1928,
Maurice Frchet became interested in Markov chains, If one pops one hundred kernels of popcorn, each ker-
eventually resulting in him publishing in 1938 a detailed nel popping at an independent exponentially-distributed
study on Markov chains.[37][41] time, then this would be a continuous-time Markov pro-
Andrei Kolmogorov developed in a 1931 paper a large cess. If Xt denotes the number of kernels which have
part of the early theory of continuous-time Markov popped up to time t, the problem can be dened as nd-
processes.[42][43] Kolmogorov was partly inspired by ing the number of kernels that will pop in some later time.
Louis Bacheliers 1900 work on uctuations in the stock The only thing one needs to know is the number of kernels
market as well as Norbert Wiener's work on Einsteins that have popped prior to the time t. It is not necessary
model of Brownian movement.[42][44] He introduced and to know when they popped, so knowing Xt for previous
studied a particular set of Markov processes known as times t is not relevant.
diusion processes, where he derived a set of dierential The process described here is an approximation of a
equations describing the processes.[42][45] Independent of Poisson point process - Poisson processes are also Markov
Kolmgorovs work, Sydney Chapman derived in a 1928 processes.
52 CHAPTER 11. MARKOV CHAIN
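The even-odds claim in the gambling example can be checked by simulation. A minimal Python sketch (the ruin-at-$0 rule is made explicit; the particular history is the one from the text, and the dynamics deliberately ignore it, which is exactly the Markov property):

```python
import random

def gamble_step(x, rng):
    # One $1 wager on a fair coin toss; $0 is absorbing (ruin).
    if x == 0:
        return 0
    return x + 1 if rng.random() < 0.5 else x - 1

# Only the current state matters; the earlier 10, 11, 10, 11 are irrelevant.
history = [10, 11, 10, 11, 12]
rng = random.Random(0)
counts = {}
for _ in range(100_000):
    y = gamble_step(history[-1], rng)
    counts[y] = counts.get(y, 0) + 1

freq = {state: n / 100_000 for state, n in counts.items()}
# freq[11] and freq[13] should each be close to 0.5
```

The empirical frequencies of $11 and $13 come out near one half, as the text predicts, and no function of the earlier history changes them.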
11.3.3 A non-Markov example

Suppose that you have a coin purse containing five quarters (each worth 25¢), five nickels (each worth 5¢) and five dimes (each worth 10¢), and one-by-one, you randomly draw coins from the purse and set them on a table. If Xn represents the total value of the coins set on the table after n draws, with X0 = 0, then the sequence {Xn : n ∈ N} is not a Markov process.

To see why this is the case, suppose that in your first six draws, you draw all five nickels, and then a quarter. So X6 = $0.50. If we know not just X6, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel, so we can determine that X7 ≥ $0.60 with probability 1. But if we do not know the earlier values, then based only on the value X6 we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about X7 are impacted by our knowledge of values prior to X6.

11.4 Markov property

Main article: Markov property

11.4.1 The general case

Let (Ω, F, P) be a probability space with a filtration (F_t, t ∈ T), for some (totally ordered) index set T; and let (S, Σ) be a measure space. An S-valued stochastic process X = (X_t, t ∈ T) adapted to the filtration is said to possess the Markov property with respect to the {F_t} if, for each A ∈ Σ and each s, t ∈ T with s < t,

P(X_t ∈ A | F_s) = P(X_t ∈ A | X_s). [49]

A Markov process is a stochastic process which satisfies the Markov property with respect to its natural filtration.

11.4.2 For discrete-time Markov chains

A discrete-time Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:

Pr(X_{n+1} = x | X_1 = x_1, X_2 = x_2, ..., X_n = x_n) = Pr(X_{n+1} = x | X_n = x_n),

if both conditional probabilities are well defined, i.e. if Pr(X_1 = x_1, ..., X_n = x_n) > 0.

The possible values of X_i form a countable set S called the state space of the chain.

Markov chains are often described by a sequence of directed graphs, where the edges of graph n are labeled by the probabilities of going from one state at time n to the other states at time n+1, Pr(X_{n+1} = x | X_n = x_n). The same information is represented by the transition matrix from time n to time n+1. However, Markov chains are frequently assumed to be time-homogeneous (see variations below), in which case the graph and matrix are independent of n and are thus not presented as sequences.

These descriptions highlight the structure of the Markov chain that is independent of the initial distribution Pr(X_1 = x_1). When time-homogeneous, the chain can be interpreted as a state machine assigning a probability of hopping from each vertex or state to an adjacent one. The probability Pr(X_n = x | X_1 = x_1) of the machine's state can be analyzed as the statistical behavior of the machine with an element x_1 of the state space as input, or as the behavior of the machine with the initial distribution Pr(X_1 = y) = [x_1 = y] of states as input, where [P] is the Iverson bracket.

The fact that some sequences of states might have zero probability of occurring corresponds to a graph with multiple connected components, where we omit edges that would carry a zero transition probability. For example, if a has a nonzero probability of going to b, but a and x lie in different connected components of the graph, then Pr(X_{n+1} = b | X_n = a) is defined, while Pr(X_{n+1} = b | X_1 = x, ..., X_n = a) is not.
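The coin-purse example above can be made concrete by simulation. A short Python sketch: conditioning only on X6 = $0.50 leaves the event X7 ≥ $0.60 uncertain, whereas the five-nickels-then-a-quarter history forces it, so the process cannot be Markov:

```python
import random

PURSE = [25] * 5 + [10] * 5 + [5] * 5  # coin values in cents

def estimate(trials, rng):
    # Empirical Pr(X7 >= $0.60 | X6 = $0.50), conditioning on X6 alone.
    hits = jumps = 0
    for _ in range(trials):
        coins = PURSE[:]
        rng.shuffle(coins)
        if sum(coins[:6]) == 50:
            hits += 1
            if sum(coins[:7]) >= 60:
                jumps += 1
    return jumps / hits

p = estimate(200_000, random.Random(1))
# p is strictly between 0 and 1: X6 = $0.50 can also arise from four dimes
# and two nickels, after which a nickel (giving X7 = $0.55) is possible.
# Given the full five-nickels-then-a-quarter history, the remaining coins
# are four quarters and five dimes, so X7 >= $0.60 is certain.
```

Exact reasoning gives p = 23/33 ≈ 0.70 here, visibly different from the value 1 obtained when the full history is known.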
A Markov chain of order m (or a Markov chain with memory m), where m is finite, is a process satisfying

Pr(X_n = x_n | X_{n−1} = x_{n−1}, X_{n−2} = x_{n−2}, ..., X_1 = x_1)
= Pr(X_n = x_n | X_{n−1} = x_{n−1}, X_{n−2} = x_{n−2}, ..., X_{n−m} = x_{n−m}) for n > m.

In other words, the future state depends on the past m states. It is possible to construct a chain (Y_n) from (X_n) which has the 'classical' Markov property by taking as state space the ordered m-tuples of X values, i.e. Y_n = (X_n, X_{n−1}, ..., X_{n−m+1}).

Example

Main article: Examples of Markov chains

A state diagram for a simple example is shown in the figure on the right. In particular, if at time n the system is in state 2 (bear), then at time n + 3 the distribution is

x^(n+3) = x^(n+2) P = (x^(n+1) P) P = x^(n+1) P^2 = (x^(n) P) P^2 = x^(n) P^3

x^(n+3) = [0 1 0] · [[0.9, 0.075, 0.025], [0.15, 0.8, 0.05], [0.25, 0.25, 0.5]]^3
        = [0 1 0] · [[0.7745, 0.17875, 0.04675], [0.3575, 0.56825, 0.07425], [0.4675, 0.37125, 0.16125]]
        = [0.3575, 0.56825, 0.07425].

Using the transition matrix it is possible to calculate, for example, the long-term fraction of weeks during which the market is stagnant, or the average number of weeks it will take to go from a stagnant to a bull market. Using the transition probabilities, the steady-state probabilities indicate that 62.5% of weeks will be in a bull market, 31.25% of weeks will be in a bear market and 6.25% of weeks will be stagnant, since:

lim_{N→∞} P^N = [[0.625, 0.3125, 0.0625], [0.625, 0.3125, 0.0625], [0.625, 0.3125, 0.0625]]

[Figure: the continuous-time Markov chain is characterized by the transition rates, the derivatives with respect to time of the transition probabilities between states i and j.]

For a continuous-time Markov chain, the Markov property takes the form

Pr(X_{t_{n+1}} = i_{n+1} | X_{t_0} = i_0, X_{t_1} = i_1, ..., X_{t_n} = i_n) = p_{i_n i_{n+1}}(t_{n+1} − t_n)

where p_ij is the solution of the forward equation (a first-order differential equation)

P′(t) = P(t) Q

with initial condition P(0) is the identity matrix.

11.7 Properties

The n-step transition probability is p^(n)_ij = Pr(X_n = j | X_0 = i) and, by time-homogeneity, p^(n)_ij = Pr(X_{k+n} = j | X_k = i).

11.7.1 Reducibility

A Markov chain is said to be irreducible if it is possible to get to any state from any state. The following explains this definition more formally.

A state j is said to be accessible from a state i (written i → j) if a system started in state i has a non-zero probability of transitioning into state j at some point. Formally, state j is accessible from state i if there exists an integer n_ij ≥ 0 such that

Pr(X_{n_ij} = j | X_0 = i) = p^(n_ij)_ij > 0.
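The n-step probabilities above are exactly the entries of matrix powers, so the market-example arithmetic is easy to reproduce. A small Python sketch (plain lists, no external libraries, assuming the transition matrix given in the example):

```python
def mat_mul(a, b):
    # Multiply two square matrices given as lists of rows.
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(m, k):
    # k-th matrix power; p^(n)_ij is entry (i, j) of the n-th power.
    n = len(m)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(k):
        result = mat_mul(result, m)
    return result

# States: 0 = bull, 1 = bear, 2 = stagnant (the market example above).
P = [[0.90, 0.075, 0.025],
     [0.15, 0.80,  0.05],
     [0.25, 0.25,  0.50]]

x3 = mat_pow(P, 3)[1]        # distribution three weeks after "bear"
steady = mat_pow(P, 200)[0]  # rows of high powers approach the steady state
```

Row 1 of P^3 matches the hand computation [0.3575, 0.56825, 0.07425], and every row of P^200 is close to the steady state [0.625, 0.3125, 0.0625].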
A state i is absorbing if p_ii = 1 and p_ij = 0 for i ≠ j. If every state can reach an absorbing state, then the Markov chain is an absorbing Markov chain.

11.7.4 Ergodicity

A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic.

It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in at most N steps (in other words, the number of steps taken are bounded by a finite positive integer N). In case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1.

A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic.

11.7.5 Steady-state analysis and limiting distributions

If the Markov chain is a time-homogeneous Markov chain, so that the process is described by a single, time-independent matrix p_ij, then the vector π is called a stationary distribution (or invariant measure) if ∀ j ∈ S it satisfies

π_j = Σ_{i∈S} π_i p_ij, with 0 ≤ π_j ≤ 1 and Σ_{j∈S} π_j = 1.

If the chain is both irreducible and aperiodic, then

lim_{n→∞} p^(n)_ij = C / M_j.

Note that there is no assumption on the starting distribution; the chain converges to the stationary distribution regardless of where it begins. Such π is called the equilibrium distribution of the chain.

If a chain has more than one closed communicating class, its stationary distributions will not be unique (consider any closed communicating class C_i in the chain; each one will have its own unique stationary distribution π_i. Extending these distributions to the overall chain, setting all values to zero outside the communication class, yields that the set of invariant measures of the original chain is the set of all convex combinations of the π_i's). However, if a state j is aperiodic, then

lim_{n→∞} p^(n)_jj = C / M_j

and for any other state i, letting f_ij be the probability that the chain ever visits state j if it starts at i,

lim_{n→∞} p^(n)_ij = C f_ij / M_j.

If a state i is periodic with period k > 1 then the limit

lim_{n→∞} p^(n)_ii

does not exist, although the limit

lim_{n→∞} p^(kn+r)_ii

exists for every integer r.
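The bounded-reachability characterization of ergodicity can be tested mechanically on small chains. A Python sketch using boolean matrix powers (the example matrices are the market chain from earlier and a two-state deterministic swap, chosen as illustrations):

```python
def is_ergodic(P, max_n=None):
    # A chain is ergodic if some boolean power A^N of the positive-transition
    # adjacency matrix is all-True, i.e. every state reaches every state in
    # N steps. For an n-state chain, checking up to n*n powers suffices.
    n = len(P)
    a = [[p > 0 for p in row] for row in P]
    power = a
    for _ in range(max_n or n * n):
        if all(all(row) for row in power):
            return True
        power = [[any(power[i][k] and a[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
    return False

market = [[0.90, 0.075, 0.025], [0.15, 0.80, 0.05], [0.25, 0.25, 0.50]]
swap = [[0.0, 1.0], [1.0, 0.0]]  # period-2 chain: irreducible but not ergodic

# market is a fully connected transition matrix, so N = 1 already works.
```

The swap chain alternates between two deterministic states, so its boolean powers never become all-True, matching the statement that a chain with one out-going transition per state cannot be ergodic.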
This arises in Markov chain Monte Carlo (MCMC) methods in situations where a number of different transition matrices are used, because each is efficient for a particular kind of mixing, but each matrix respects a shared equilibrium distribution.

The values of a stationary distribution π_i are associated with the state space of P and its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as Σ_i 1 · π_i = 1, we see that the dot product of π with a vector whose components are all 1 is unity and that π lies on a simplex.

11.8.2 Time-homogeneous Markov chain with a finite state space

If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, P^k.

If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. Additionally, in this case P^k converges to a rank-one matrix in which each row is the stationary distribution π, that is,

lim_{k→∞} P^k = 1 π

where 1 is the column vector with all entries equal to 1. Writing Q for this limit, Q satisfies QP = Q, that is,

Q (P − I_n) = 0_{n,n},

where I_n is the identity matrix of size n, and 0_{n,n} is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each of the rows in P is 1, there are n+1 equations for determining n unknowns, so it is computationally easier if on the one hand one selects one row in Q and substitutes each of its elements by one, and on the other one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of the transformed former matrix to find Q.

Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − I_n)]^{−1} exists then

Q = f(0_{n,n}) [f(P − I_n)]^{−1}.

Explanation: The original matrix equation is equivalent to a system of n×n linear equations in n×n variables. And there are n more linear equations from the fact that Q is a right stochastic matrix whose each row sums to 1. So it needs any n×n independent linear equations of the (n×n + n) equations to solve for the n×n variables. In this example, the n equations from "Q multiplied by the right-most column of (P − I_n)" have been replaced by the n stochastic ones.

One thing to notice is that if P has an element P_{i,i} on its main diagonal that is equal to 1 and the ith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers P^k. Hence, the ith row or column of Q will have the 1 and the 0's in the same positions as in P.

11.8.3 Convergence speed to the stationary distribution

As stated earlier, from the equation π = πP, (if it exists) the stationary (or steady state) distribution π is a left eigenvector of the row stochastic matrix P. Then assuming that P is diagonalizable or equivalently that P has n linearly independent eigenvectors, speed of convergence is elaborated as follows. (For non-diagonalizable, i.e. defective matrices, one may start with the Jordan normal form of P and proceed with a bit more involved set of arguments in a similar way.[54])

Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1) where each column is a left eigenvector of P and let Σ be the diagonal matrix of left eigenvalues of P, i.e. Σ = diag(λ_1, λ_2, λ_3, ..., λ_n). Then by eigendecomposition

P = U Σ U^{−1}.

Write the initial distribution x in terms of the eigenvectors, as

x^T = Σ_{i=1}^{n} a_i u_i

for some set of a_i. If we start multiplying P with x from left and continue this operation with the results, in the end we get the stationary distribution π. In other words, π = x P P P ⋯ P = x P^k as k goes to infinity. That means

π^{(k)} = x (U Σ U^{−1}) (U Σ U^{−1}) ⋯ (U Σ U^{−1})
        = x U Σ^k U^{−1}

since U^{−1} U = I, the identity matrix, and a power of a diagonal matrix is also a diagonal matrix where each entry is taken to that power. Hence

π^{(k)} = (a_1 u_1^T + a_2 u_2^T + ⋯ + a_n u_n^T) U Σ^k U^{−1}
        = a_1 λ_1^k u_1 + a_2 λ_2^k u_2 + ⋯ + a_n λ_n^k u_n,

since the eigenvectors are orthonormal. Then[55]

π^{(k)} = λ_1^k { a_1 u_1 + a_2 (λ_2/λ_1)^k u_2 + a_3 (λ_3/λ_1)^k u_3 + ⋯ + a_n (λ_n/λ_1)^k u_n }.

Since π = u_1, π^{(k)} approaches π as k goes to infinity with a speed in the order of λ_2/λ_1 exponentially. This follows because |λ_2| ≥ |λ_3| ≥ ⋯ ≥ |λ_n|, hence λ_2/λ_1 is the dominant term. Random noise in the state distribution π can also speed up this convergence to the stationary distribution.[56]

11.9 Reversible Markov chain

A Markov chain is said to be reversible if there is a probability distribution π over its states such that

π_i Pr(X_{n+1} = j | X_n = i) = π_j Pr(X_{n+1} = i | X_n = j)

for all times n and all states i and j. Summing this detailed balance condition π_i p_ij = π_j p_ji over i gives

Σ_i π_i p_ij = Σ_i π_j p_ji = π_j Σ_i p_ji = π_j,

so π is a stationary distribution of the chain.
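The replace-a-column recipe from subsection 11.8.2 can be exercised on the three-state market matrix used earlier. The Python sketch below is an illustration: a small hand-rolled Gaussian-elimination solver stands in for the matrix inverse, and one row of Q (the stationary distribution π) is recovered; a final check ties in to section 11.9, since this particular chain happens to satisfy detailed balance:

```python
def solve(a, b):
    # Solve the linear system a x = b by Gaussian elimination with
    # partial pivoting (a is a list of rows, b a list).
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def stationary(P):
    # Build f(P - I): subtract the identity, then replace the right-most
    # column with all 1's; pi then satisfies pi . f(P - I) = (0, ..., 0, 1).
    n = len(P)
    fPI = [[(P[i][j] - (i == j)) if j < n - 1 else 1.0 for j in range(n)]
           for i in range(n)]
    # Transpose so the unknown row vector becomes the right-hand unknown.
    At = [[fPI[i][j] for i in range(n)] for j in range(n)]
    return solve(At, [0.0] * (n - 1) + [1.0])

market = [[0.90, 0.075, 0.025], [0.15, 0.80, 0.05], [0.25, 0.25, 0.50]]
pi = stationary(market)  # ~ [0.625, 0.3125, 0.0625]

# Detailed balance pi_i p_ij = pi_j p_ji holds here, so the chain is reversible.
reversible = all(abs(pi[i] * market[i][j] - pi[j] * market[j][i]) < 1e-12
                 for i in range(3) for j in range(3))
```

The first n−1 columns of f(P − I) encode π(P − I) = 0 and the column of ones encodes Σπ = 1, which is exactly the substitution described in the text.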
11.10 Bernoulli scheme

A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is even independent of the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process.

In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the 'current' and 'future' states. For example, let X be a non-Markovian process. Then define a process Y, such that each state of Y represents a time-interval of states of X. Mathematically, this takes the form:

Y(t) = { X(s) : s ∈ [a(t), b(t)] }.

11.14.2 Example 2

The central state and the border states 2 and 8 of the adjacent secret passageway are visited most and the corner states are visited least.

11.15 Hitting times

Main article: phase-type distribution

The hitting time is the time, starting in a given set of states until the chain arrives in a given state or set of states. The distribution of such a time period has a phase type distribution. The simplest such distribution is that of a single exponentially distributed transition.

For a continuous-time chain with transition rate matrix Q, the associated jump matrix S may be written as

S = I − (diag(Q))^{−1} Q

where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero. To find the stationary probability distribution vector, we must next find φ such that

φ S = φ.
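For a discrete-time chain, mean hitting times satisfy a linear system: with k_target = 0, each other state obeys k_i = 1 + Σ_j p_ij k_j (summing over non-target j). The Python sketch below applies this standard recipe, not spelled out in the text, to the earlier market example to get the "average number of weeks to go from a stagnant to a bull market":

```python
# States: 0 = bull, 1 = bear, 2 = stagnant; target = bull.
P = [[0.90, 0.075, 0.025],
     [0.15, 0.80,  0.05],
     [0.25, 0.25,  0.50]]
target = 0

# Iterate the fixed-point equations k_i = 1 + sum over non-target j of
# P[i][j] * k[j] until they stabilise (the sub-stochastic restriction of P
# is a contraction, so this converges geometrically).
k = [0.0, 0.0, 0.0]
for _ in range(2000):
    k = [0.0 if i == target
         else 1.0 + sum(P[i][j] * k[j] for j in range(3) if j != target)
         for i in range(3)]

# k[2] ~ 36/7 ~ 5.14 weeks from stagnant, k[1] ~ 44/7 ~ 6.29 from bear.
```

Solving the two equations by hand gives exactly 44/7 and 36/7, which the iteration reproduces to machine precision.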
Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is n times the probability a given molecule is in that state.

The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis–Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains.

An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products.[63] As a molecule is grown, a fragment is selected from the nascent molecule as the current state. It is not aware of its past (i.e., it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds.

Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (e.g., whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due to steric effects, second-order Markov effects may also play a role in the growth of some polymer chains.

Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains.[64]

11.18.3 Testing

Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets (samples) as a replacement for exhaustive testing. MCSTs also have uses in temporal state-based networks; Chilukuri et al.'s paper entitled "Temporal Uncertainty Reasoning Networks for Evidence Fusion with Applications to Object Detection and Tracking" (ScienceDirect) gives a background and case study for applying MCSTs to a wider range of applications.

Hidden Markov models are the basis for most modern automatic speech recognition systems.

11.18.5 Information and computer science

Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy through Markov modeling of the English language. Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning.

Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection[65]).

The LZMA lossless data compression algorithm combines Markov chains with Lempel-Ziv compression to achieve very high compression ratios.

11.18.6 Queueing theory

Main article: Queueing theory

Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917.[66] This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth).[67]

Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i − 1 (for i ≥ 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue.

11.18.7 Internet applications

The PageRank of a webpage as used by Google is defined by a Markov chain.[68] It is the probability to be at page i in the stationary distribution on the following Markov chain on all (known) webpages. If N is the number of known webpages, and a page i has k_i links, then it has transition probability α/k_i + (1 − α)/N for all pages that are linked to and (1 − α)/N for all pages that are not linked to. The parameter α is taken to be about 0.85.[69]

Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.

11.18.8 Statistics

Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically.

11.18.9 Economics and finance

Markov chains are used in finance and economics to model a variety of different phenomena, including asset prices and market crashes. The first financial model to use a Markov chain was from Prasad et al. in 1974.[70] Another was the regime-switching model of James D. Hamilton (1989), in which a Markov chain is used to model switches between periods of high and low GDP growth (or alternatively, economic expansions and recessions).[71] A more recent example is the Markov Switching Multifractal model of Laurent E. Calvet and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models.[72][73] It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns.

Dynamic macroeconomics heavily uses Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting.[74]

Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit ratings.[75]

11.18.10 Social sciences

Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's Das Kapital, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as size of the middle class, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from authoritarian to democratic regime.[76]

11.18.11 Mathematical biology

Markov chains also have many applications in biological modelling, particularly population processes, which are useful in modelling processes that are (at least) analogous to biological populations. The Leslie matrix is one such example, used to describe the population dynamics of many species, though some of its entries are not probabilities (they may be greater than 1). Another example is the modeling of cell shape in dividing sheets of epithelial cells.[77] Yet another example is the state of ion channels in cell membranes.

Markov chains are also used in simulations of brain function, such as the simulation of the mammalian neocortex.[78]

11.18.12 Genetics

Markov chains have been used in population genetics in order to describe the change in gene frequencies in small populations affected by genetic drift, for example in the diffusion equation method described by Motoo Kimura.[79]

11.18.13 Games

Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).

11.18.14 Music

Markov chains are employed in algorithmic music composition, particularly in software such as CSound, Max and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric.[80]

A second-order Markov chain can be introduced by considering the current state and also the previous state, as indicated in the second table. Higher, nth-order chains tend to group particular notes together, while 'breaking off' into other patterns and sequences occasionally.
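The first-order scheme described above can be sketched in a few lines of Python. The pitch set and transition probabilities below are invented for illustration only; a real system would estimate them from a corpus or let a composer specify them:

```python
import random

# Hypothetical transition probabilities between three pitches; each row
# is a probability vector over the next note.
TRANSITIONS = {
    "C": [("C", 0.1), ("E", 0.6), ("G", 0.3)],
    "E": [("C", 0.4), ("E", 0.1), ("G", 0.5)],
    "G": [("C", 0.7), ("E", 0.2), ("G", 0.1)],
}

def generate(start, length, rng):
    # Walk the chain: sample each next note from the row of the current one.
    notes = [start]
    for _ in range(length - 1):
        options = TRANSITIONS[notes[-1]]
        r = rng.random()
        cumulative = 0.0
        for note, p in options:
            cumulative += p
            if r < cumulative:
                notes.append(note)
                break
        else:
            # Guard against floating-point rounding of the row sum.
            notes.append(options[-1][0])
    return notes

melody = generate("C", 16, random.Random(7))
```

A second-order version would key the table on the last two notes instead of one, exactly as the text describes.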
These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system.[81]

Markov chains can be used structurally, as in Xenakis's Analogique A and B.[82] Markov chains are also used in systems which use a Markov model to react interactively to music input.[83]

Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.[84]

11.18.15 Baseball

Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain state when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players as well as a team.[85] He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing and differences when playing on grass vs. astroturf.[86]

11.18.16 Markov text generators

Markov processes can also be used to generate superficially real-looking text given a sample document: they are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison,[87] Mark V Shaney[88][89]). These processes are also used by spammers to inject real-looking hidden paragraphs into unsolicited email and post comments in an attempt to get these messages past spam filters.

11.18.17 Bioinformatics

In the bioinformatics field, they can be used to simulate DNA sequences.[102]

11.19 See also

Hidden Markov model
Markov blanket
Markov chain geostatistics
Markov chain mixing time
Markov chain Monte Carlo
Markov decision process
Markov information source
Markov network
Quantum Markov chain
Semi-Markov process
Telescoping Markov chain
Variable-order Markov model
Brownian motion
Dynamics of Markovian particles
Examples of Markov chains
Markov chain approximation method

11.20 Notes

[2] Y.A. Rozanov (6 December 2012). Markov Random Fields. Springer Science & Business Media. p. 58. ISBN 978-1-4613-8190-7.

[3] Søren Asmussen (15 May 2003). Applied Probability and Queues. Springer Science & Business Media. p. 7. ISBN 978-0-387-00211-8.

[4] Emanuel Parzen (17 June 2015). Stochastic Processes. Courier Dover Publications. p. 188. ISBN 978-0-486-79688-8.

[5] Samuel Karlin; Howard E. Taylor (2 December 2012). A First Course in Stochastic Processes. Academic Press. pp. 29 and 30. ISBN 978-0-08-057041-9.

[6] John Lamperti (1977). Stochastic processes: a survey of the mathematical theory. Springer-Verlag. pp. 106–121. ISBN 978-3-540-90275-1.

[7] Sheldon M. Ross (1996). Stochastic processes. Wiley. pp. 174 and 231. ISBN 978-0-471-12062-9.

[8] Søren Asmussen (15 May 2003). Applied Probability and Queues. Springer Science & Business Media. p. 7. ISBN 978-0-387-00211-8.

[9] Charles Miller Grinstead; James Laurie Snell (1997). Introduction to Probability. American Mathematical Soc. pp. 464–466. ISBN 978-0-8218-0749-1.

[10] Pierre Bremaud (9 March 2013). Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer Science & Business Media. p. ix. ISBN 978-1-4757-3124-8.

[11] Hayes, Brian (2013). "First links in the Markov chain". American Scientist. 101 (2): 92–96.

[12] Ionut Florescu (7 November 2014). Probability and Stochastic Processes. John Wiley & Sons. pp. 373 and 374. ISBN 978-1-118-59320-2.

[13] Samuel Karlin; Howard E. Taylor (2 December 2012). A First Course in Stochastic Processes. Academic Press. p. 49. ISBN 978-0-08-057041-9.

[17] Emanuel Parzen (17 June 2015). Stochastic Processes. Courier Dover Publications. pp. 7 and 8. ISBN 978-0-486-79688-8.

[20] Jarrow, Robert; Protter, Philip (2004). "A short history of stochastic integration and mathematical finance: the early years, 1880–1970": 75–91. ISSN 0749-2170. doi:10.1214/lnms/1196285381.

[21] Guttorp, Peter; Thorarinsdottir, Thordis L. (2012). "What Happened to Discrete Chaos, the Quenouille Process, and the Sharp Markov Property? Some History of Stochastic Point Processes". International Statistical Review. 80 (2): 253–268. ISSN 0306-7734. doi:10.1111/j.1751-5823.2012.00181.x.

[22] Samuel Karlin; Howard E. Taylor (2 December 2012). A First Course in Stochastic Processes. Academic Press. p. 47. ISBN 978-0-08-057041-9.

[23] Bruce Hajek (12 March 2015). Random Processes for Engineers. Cambridge University Press. ISBN 978-1-316-24124-0.

[24] G. Latouche; V. Ramaswami (1 January 1999). Introduction to Matrix Analytic Methods in Stochastic Modeling. SIAM. p. 4. ISBN 978-0-89871-425-8.

[25] Sean Meyn; Richard L. Tweedie (2 April 2009). Markov Chains and Stochastic Stability. Cambridge University Press. p. 3. ISBN 978-0-521-73182-9.

[26] Gupta, Brij; Agrawal, Dharma P.; Yamaguchi, Shingo (16 May 2016). Handbook of Research on Modern Cryptographic Solutions for Computer and Cyber Security. IGI Global. p. 448. ISBN 978-1-5225-0106-0.

[27] Langville, Amy N.; Meyer, Carl D. (2006). "A Reordering for the PageRank Problem". SIAM Journal on Scientific Computing. 27 (6): 2112–2113. ISSN 1064-8275. doi:10.1137/040607551.
[28] Reuven Y. Rubinstein; Dirk P. Kroese (20 September 2011). Simulation and the Monte Carlo Method. John Wiley & Sons. p. 225. ISBN 978-1-118-21052-9.

[29] Dani Gamerman; Hedibert F. Lopes (10 May 2006). Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference, Second Edition. CRC Press. ISBN 978-1-58488-587-0.

[30] "Markovian". Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005. (Subscription or UK public library membership required.)

[37] Charles Miller Grinstead; James Laurie Snell (1997). Introduction to Probability. American Mathematical Soc. pp. 464–466. ISBN 978-0-8218-0749-1.

[38] Pierre Bremaud (9 March 2013). Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer Science & Business Media. p. ix. ISBN 978-1-4757-3124-8.

[39] Hayes, Brian (2013). "First links in the Markov chain". American Scientist. 101 (2): 92–96.

[40] Seneta, E. (1998). "I.J. Bienaymé [1796–1878]: Criticality, Inequality, and Internationalization". International Statistical Review / Revue Internationale de Statistique. 66 (3): 291–292. ISSN 0306-7734. doi:10.2307/1403518.

[41] Bru, B.; Hertz, S. (2001). "Maurice Fréchet": 331–334. doi:10.1007/978-1-4613-0179-0_71.

[42] Kendall, D. G.; Batchelor, G. K.; Bingham, N. H.; Hayman, W. K.; Hyland, J. M. E.; Lorentz, G. G.; Moffatt, H. K.; Parry, W.; Razborov, A. A.; Robinson, C. A.; Whittle, P. (1990). "Andrei Nikolaevich Kolmogorov (1903–1987)". Bulletin of the London Mathematical Society. 22 (1): 33. ISSN 0024-6093. doi:10.1112/blms/22.1.31.

[43] Cramer, Harald (1976). "Half a Century with Probability Theory: Some Personal Recollections". The Annals of Probability. 4 (4): 509–546. ISSN 0091-1798. doi:10.1214/aop/1176996025.

[44] Marc Barbut; Bernard Locker; Laurent Mazliak (23 August 2016). Paul Lévy and Maurice Fréchet: 50 Years of Correspondence in 107 Letters. Springer London. p. 5. ISBN 978-1-4471-7262-8.

[45] Valeriy Skorokhod (5 December 2005). Basic Principles and Applications of Probability Theory. Springer Science & Business Media. p. 146. ISBN 978-3-540-26312-8.

[46] Bernstein, Jeremy (2005). "Bachelier". American Journal of Physics. 73 (5): 395–398. ISSN 0002-9505. doi:10.1119/1.1848117.

[52] Asher Levin, David (2009). Markov chains and mixing times. p. 16. ISBN 978-0-8218-4739-8. Retrieved 2016-03-04.

[53] Serfozo, Richard (2009). Basics of Applied Stochastic Processes. Probability and Its Applications. Berlin: Springer-Verlag. p. 35. ISBN 978-3-540-89331-8. MR 2484222. doi:10.1007/978-3-540-89332-5.

[54] Florian Schmitt and Franz Rothlauf, "On the Mean of the Second Largest Eigenvalue on the Convergence Rate of Genetic Algorithms", Working Paper 1/2001, Working Papers in Information Systems, 2001. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.28.6191

[55] Gene H. Golub, Charles F. Van Loan, Matrix Computations, Third Edition, The Johns Hopkins University Press, Baltimore and London, 1996.

[56] Franzke, Brandon; Kosko, Bart (1 October 2011). "Noise can speed convergence in Markov chains". Physical Review E. 84 (4). doi:10.1103/PhysRevE.84.041112.

[57] Richard Durrett (19 May 2012). Essentials of Stochastic Processes. Springer Science & Business Media. p. 37. ISBN 978-1-4614-3615-7.

[58] A. Nielsen and M. Weber, "Computing the nearest reversible Markov chain". Numerical Linear Algebra with Applications, 22(3):483–499, 2015.
[59] Spitzer, Frank (1970). "Interaction of Markov Processes". Advances in Mathematics. 5 (2): 246–290. doi:10.1016/0001-8708(70)90034-4.

[60] R. L. Dobrushin; V. I. Kriukov; A. L. Toom (1978). Stochastic Cellular Systems: Ergodicity, Memory, Morphogenesis. ISBN 9780719022067. Retrieved 2016-03-04.

[61] Doblinger, G., 1998. "Smoothing of Noisy AR Signals Using an Adaptive Kalman Filter". In EUSIPCO 98. pp. 781–784. Available at: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.251.3078 [Accessed January 15, 2015].

[62] Norris, J. R. (1997). "Continuous-time Markov chains II". Markov Chains. p. 108. ISBN 9780511810633. doi:10.1017/CBO9780511810633.005.

[63] Kutchukian, Peter; Lou, David; Shakhnovich, Eugene (2009). "FOG: Fragment Optimized Growth Algorithm for the de Novo Generation of Molecules occupying Druglike Chemical Space". Journal of Chemical Information and Modeling. 49 (7): 1630–1642. PMID 19527020. doi:10.1021/ci9000458.

[64] Kopp, V. S.; Kaganer, V. M.; Schwarzkopf, J.; Waidick, F.; Remmele, T.; Kwasniewski, A.; Schmidbauer, M. (2011). "X-ray diffraction from nonperiodic layered structures with correlations: Analytical calculation and experiment on mixed Aurivillius films". Acta Crystallographica Section A. 68: 148–155. doi:10.1107/S0108767311044874.

[65] Pratas, D; Silva, R; Pinho, A; Ferreira, P (May 18, 2015). "An alignment-free method to find and visualise rearrangements between pairs of DNA sequences". Scientific Reports (Nature Group). 5 (10203): 10203. PMID 25984837. doi:10.1038/srep10203.

[66] O'Connor, John J.; Robertson, Edmund F., "Markov chain", MacTutor History of Mathematics archive, University of St Andrews.

[67] S. P. Meyn. Control Techniques for Complex Networks. Cambridge University Press, 2007.

[68] U.S. Patent 6,285,999

[69] Page, Lawrence; Brin, Sergey; Motwani, Rajeev; Winograd, Terry (1999). The PageRank Citation Ranking: Bringing Order to the Web (Technical report). Retrieved 2016-03-04.

[70] Prasad, NR; RC Ender; ST Reilly; G Nesgos (1974). "Allocation of resources on a minimized cost basis". 1974 IEEE Conference on Decision and Control including the 13th Symposium on Adaptive Processes. 13: 402–3. doi:10.1109/CDC.1974.270470.

[71] Hamilton, James (1989). "A new approach to the economic analysis of nonstationary time series and the business cycle". Econometrica. 57 (2): 357–84. JSTOR 1912559. doi:10.2307/1912559.

[72] Calvet, Laurent E.; Fisher, Adlai J. (2001). "Forecasting Multifractal Volatility". Journal of Econometrics. 105 (1): 27–58. doi:10.1016/S0304-4076(01)00069-0.

[73] Calvet, Laurent; Adlai Fisher (2004). "How to forecast long-run volatility: regime-switching and the estimation of multifractal processes". Journal of Financial Econometrics. 2: 49–83. doi:10.1093/jjfinec/nbh003.

[74] Brennan, Michael; Xia, Yihong. "Stock Price Volatility and the Equity Premium" (PDF). Department of Finance, the Anderson School of Management, UCLA.

[75] "A Markov Chain Example in Credit Risk Modelling". Columbia University lectures.

[76] Acemoglu, Daron; Georgy Egorov; Konstantin Sonin (2011). "Political model of social evolution". Proceedings of the National Academy of Sciences. 108: 21292–21296. doi:10.1073/pnas.1019454108.

[77] Gibson, Matthew C; Patel, Ankit P.; Perrimon, Norbert (2006). "The emergence of geometric order in proliferating metazoan epithelia". Nature. 442 (7106): 1038–1041. PMID 16900102. doi:10.1038/nature05014.

[78] George, Dileep; Hawkins, Jeff (2009). Friston, Karl J., ed. "Towards a Mathematical Theory of Cortical Micro-circuits". PLoS Comput Biol. 5 (10): e1000532. PMC 2749218. PMID 19816557. doi:10.1371/journal.pcbi.1000532.

[79] Watterson, G. (1996). "Motoo Kimura's Use of Diffusion Theory in Population Genetics". Theoretical Population Biology. 49 (2): 154–188. doi:10.1006/tpbi.1996.0010. PMID 8813021.

[80] K McAlpine; E Miranda; S Hoggar (1999). "Making Music with Algorithms: A Case-Study System". Computer Music Journal. 23 (2): 19–30. doi:10.1162/014892699559733.

[81] Curtis Roads (ed.) (1996). The Computer Music Tutorial. MIT Press. ISBN 0-262-18158-4.

[82] Xenakis, Iannis; Kanach, Sharon (1992). Formalized Music: Mathematics and Thought in Composition. Pendragon Press. ISBN 1576470792.

[83] "Continuator". Archived July 13, 2012, at the Wayback Machine.

[84] Pachet, F.; Roy, P.; Barbieri, G. (2011). "Finite-Length Markov Processes with Constraints". Proceedings of the 22nd International Joint Conference on Artificial Intelligence, IJCAI, pages 635–642, Barcelona, Spain, July 2011.

[85] Pankin, Mark D. "Markov Chain Models: Theoretical Background". Retrieved 2007-11-26.

[86] Pankin, Mark D. "Baseball as a Markov Chain". Retrieved 2009-04-24.

[87] "Poet's Corner – Fieralingue". Archived December 6, 2010, at the Wayback Machine.

[88] Kenner, Hugh; O'Rourke, Joseph (November 1984). "A Travesty Generator for Micros". BYTE. 9 (12): 129–131, 449–469.

[89] Hartman, Charles (1996). Virtual Muse: Experiments in Computer Poetry. Hanover, NH: Wesleyan University Press. ISBN 0-8195-2239-2.
Density matrix

See also: Quantum statistical mechanics
Not to be confused with dense matrix.

A density matrix is a matrix that describes a quantum system in a mixed state, a statistical ensemble of several quantum states. This should be contrasted with a single state vector that describes a quantum system in a pure state. The density matrix is the quantum-mechanical analogue to a phase-space probability measure (probability distribution of position and momentum) in classical statistical mechanics.

Mixed states arise in situations where the experimenter does not know which particular states are being manipulated. Examples include a system in thermal equilibrium (or additionally chemical equilibrium) or a system with an uncertain or randomly varying preparation history (so one does not know which pure state the system is in). Also, if a quantum system has two or more subsystems that are entangled, then each subsystem must be treated as a mixed state even if the complete system is in a pure state.[1] The density matrix is also a crucial tool in quantum decoherence theory.

The density matrix is a representation of a linear operator called the density operator. The density matrix is obtained from the density operator by choice of basis in the underlying space. In practice, the terms density matrix and density operator are often used interchangeably. Both matrix and operator are self-adjoint (or Hermitian), positive semi-definite, of trace one, and may be infinite-dimensional.[2]

12.1 History

The formalism of density operators and matrices was introduced by John von Neumann[3] in 1927 and, independently but less systematically, by Lev Landau[4] and Felix Bloch[5] in 1927 and 1946 respectively.

12.2 Pure and mixed states

In quantum mechanics, the state of a quantum system is represented by a state vector (or ket) |ψ⟩. A quantum system with a state vector |ψ⟩ is called a pure state. However, it is also possible for a system to be in a statistical ensemble of different state vectors: for example, there may be a 50% probability that the state vector is |ψ1⟩ and a 50% chance that the state vector is |ψ2⟩. This system would be in a mixed state. The density matrix is especially useful for mixed states, because any state, pure or mixed, can be characterized by a single density matrix.

A mixed state is different from a quantum superposition. The probabilities in a mixed state are classical probabilities (as in the probabilities one learns in classical probability theory / statistics), unlike the quantum probabilities in a quantum superposition. In fact, a quantum superposition of pure states is another pure state, for example |ψ⟩ = (|ψ1⟩ + |ψ2⟩)/√2. In this case, the coefficients 1/√2 are not probabilities, but rather probability amplitudes.

12.2.1 Example: Light polarization

An example of pure and mixed states is light polarization. Photons can have two helicities, corresponding to two orthogonal quantum states, |R⟩ (right circular polarization) and |L⟩ (left circular polarization). A photon can also be in a superposition state, such as (|R⟩ + |L⟩)/√2 (vertical polarization) or (|R⟩ − |L⟩)/√2 (horizontal polarization). More generally, it can be in any state α|R⟩ + β|L⟩ (with |α|² + |β|² = 1), corresponding to linear, circular, or elliptical polarization. If we pass (|R⟩ + |L⟩)/√2 polarized light through a circular polarizer which allows either only |R⟩ polarized light or only |L⟩ polarized light, intensity would be reduced by half in both cases. This may make it seem like half of the photons are in state |R⟩ and the other half in state |L⟩. But this is not correct: both |R⟩ and |L⟩ photons are partly absorbed by a vertical linear polarizer, but the (|R⟩ + |L⟩)/√2 light will pass through that polarizer with no absorption whatsoever. However, unpolarized light (such as the light from an incandescent light bulb) is different from any state like α|R⟩ + β|L⟩.
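The distinction between the coherent superposition and the 50/50 classical mixture can be checked numerically. The following is a minimal sketch using NumPy, with |R⟩ and |L⟩ represented by the standard basis vectors; the purity tr(ρ²) is used to separate the two cases:

```python
import numpy as np

R = np.array([1.0, 0.0])            # |R>, right circular polarization
L = np.array([0.0, 1.0])            # |L>, left circular polarization
psi = (R + L) / np.sqrt(2)          # coherent superposition: still a pure state

rho_pure = np.outer(psi, psi)                            # |psi><psi|
rho_mix = 0.5 * np.outer(R, R) + 0.5 * np.outer(L, L)    # 50/50 classical mixture

# Both are Hermitian with unit trace, as every density matrix must be.
print(np.trace(rho_pure), np.trace(rho_mix))

# The purity tr(rho^2) separates them: 1 for a pure state, 1/2 for this mixture.
print(np.trace(rho_pure @ rho_pure))   # approximately 1.0
print(np.trace(rho_mix @ rho_mix))     # 0.5
```

The two matrices assign the same 50% intensity to a circular-polarization measurement, yet they are different states, exactly as the polarizer argument above shows.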
For a mixed state with probability p of |ψ1⟩ and probability 1 − p of |ψ2⟩, the density operator is

ρ = p|ψ1⟩⟨ψ1| + (1 − p)|ψ2⟩⟨ψ2|.

Two ensembles describe the same density operator when their weighted states are related by a unitary matrix (uij):

√pi |ψi⟩ = Σj uij √pj |φj⟩.

For the above example of unpolarized light, the density operator is ρ = ½|R⟩⟨R| + ½|L⟩⟨L|.

The expectation value of the measurement can be calculated by extending from the case of pure states (see Measurement in quantum mechanics):

⟨A⟩ = Σj pj ⟨ψj|A|ψj⟩ = Σj pj tr(|ψj⟩⟨ψj| A) = tr(Σj pj |ψj⟩⟨ψj| A) = tr(ρA),

where tr denotes trace. Moreover, if A has spectral resolution

A = Σi ai |ai⟩⟨ai| = Σi ai Pi,

where Pi = |ai⟩⟨ai|, the corresponding density operator after the measurement is given by:

ρ′ = Σi Pi ρ Pi.

Note that the above density operator describes the full ensemble after measurement. The sub-ensemble for which the measurement result was the particular value ai is described by the different density operator

ρi′ = Pi ρ Pi / tr[ρ Pi].

This is true assuming that |ai⟩ is the only eigenket (up to phase) with eigenvalue ai; more generally, Pi in this expression would be replaced by the projection operator into the eigenspace corresponding to eigenvalue ai.

12.5 Entropy

The von Neumann entropy S of a mixture can be expressed in terms of the eigenvalues of ρ or in terms of the trace and logarithm of the density operator ρ. Since ρ is a positive semi-definite operator, it has a spectral decomposition such that ρ = Σi λi |φi⟩⟨φi|, where the |φi⟩ are orthonormal vectors, λi ≥ 0 and Σi λi = 1. Then the entropy of a quantum system with density matrix ρ is

S = −Σi λi ln λi = −tr(ρ ln ρ).

This entropy can increase but never decrease with a projective measurement; however, generalised measurements can decrease entropy.[9][10] The entropy of a pure state is zero, while that of a proper mixture is always greater than zero. Therefore, a pure state may be converted into a mixture by a measurement, but a proper mixture can never be converted into a pure state. Thus the act of measurement induces a fundamental irreversible change on the density matrix; this is analogous to the collapse of the state vector, or wavefunction collapse. Perhaps counterintuitively, the measurement actually decreases information by erasing quantum interference in the composite system; cf. quantum entanglement, einselection, and quantum decoherence.

(A subsystem of a larger system can be turned from a mixed to a pure state, but only by increasing the von Neumann entropy elsewhere in the system. This is analogous to how the entropy of an object can be lowered by putting it in a refrigerator: the air outside the refrigerator's heat-exchanger warms up, gaining even more entropy than was lost by the object in the refrigerator. See second law of thermodynamics, and Entropy in thermodynamics and information theory.)

12.6 The von Neumann equation for time evolution

See also: Liouville's theorem (Hamiltonian) and the quantum Liouville equation

Just as the Schrödinger equation describes how pure states evolve in time, the von Neumann equation (also known as the Liouville–von Neumann equation) describes how a density operator evolves in time (in fact, the two equations are equivalent, in the sense that either can be derived from the other). The von Neumann equation dictates that[11][12]

iℏ ∂ρ/∂t = [H, ρ],

where the brackets denote a commutator.

Note that this equation only holds when the density operator is taken to be in the Schrödinger picture, even though this equation seems at first look to emulate the Heisenberg equation of motion in the Heisenberg picture, with a crucial sign difference. Taking the density operator to be in the Schrödinger picture makes sense, since it is composed of 'Schrödinger' kets and bras evolved in time, as per the Schrödinger picture. If the Hamiltonian is time-independent, this differential equation can be easily solved to yield

ρ(t) = e^(−iHt/ℏ) ρ(0) e^(iHt/ℏ).

For a more general Hamiltonian, if G(t) is the wavefunction propagator over some interval, then the time evolution of the density matrix over that same interval is given by

ρ(t) = G(t) ρ(0) G(t)†.

12.8 Composite systems

The joint density matrix of a composite system of two systems A and B is described by ρAB. Then the subsystems are described by their reduced density operator,

ρA = trB ρAB,

where trB is called the partial trace over system B. If A and B are two distinct and independent systems, then ρAB = ρA ⊗ ρB, which is a product state.
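The entropy and partial-trace statements above can be illustrated together on the smallest interesting example, an entangled two-qubit state. This is an illustrative sketch with NumPy; the Bell state and the helper name `entropy` are chosen for the example:

```python
import numpy as np

# Bell state |psi> = (|00> + |11>)/sqrt(2) of a two-qubit system AB.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_AB = np.outer(psi, psi)        # pure joint state

# Partial trace over B: reshape rho_AB to indices (a, b, a', b')
# and sum over the matched B indices.
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A)                       # 0.5 * identity: maximally mixed

def entropy(rho):
    """Von Neumann entropy S = -tr(rho ln rho), from the eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]         # 0 ln 0 -> 0 by convention
    return float(-np.sum(lam * np.log(lam)))

print(entropy(rho_AB))             # ~0: the joint state is pure
print(entropy(rho_A))              # ~ln 2: the entangled subsystem is mixed
```

This is exactly the situation described earlier: the complete system is in a pure state, yet each entangled subsystem must be treated as a mixed state, and the subsystem's entropy is strictly positive.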
Quantum state
POVM
Generalized measurement
Purification of quantum state
Wave function
Wigner quasi-probability distribution

[13] See appendix, Mackey, George Whitelaw (1963), Mathematical Foundations of Quantum Mechanics, Dover Books on Mathematics, New York: Dover Publications, ISBN 978-0-486-43517-6.

[14] Emch, Gerard G. (1972), Algebraic methods in statistical mechanics and quantum field theory, Wiley-Interscience, ISBN 978-0-471-23900-0.
Matrix (mathematics)
For other uses, see Matrix. "Matrix theory" redirects here. For the physics topic, see Matrix string theory.

In mathematics, a matrix (plural matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns.

[Figure: schematic of an m-by-n matrix, with m rows and n columns; the entry ai,j sits in row i, column j.]

A matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. For example, the rotation of vectors in three-dimensional space is a linear transformation, which can be represented by a rotation matrix R: if v is a column vector (a matrix with only one column) describing the position of a point in space, the product Rv is a column vector describing the position of that point after a rotation. The product of two transformation matrices is a matrix that represents the composition of two linear transformations.
which acts on the Taylor series of a function.

13.1 Definition

A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined.[6] Most commonly, a matrix over a field F is a rectangular array of scalars each of which is a member of F.[7][8] Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers, respectively. More general types of entries are discussed below. For instance, this is a real matrix:

    [ −1.3   0.6 ]
A = [ 20.4   5.5 ]
    [  9.7  −6.2 ]

The numbers, symbols or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively.

13.1.1 Size

The size of a matrix is defined by the number of rows and columns that it contains. A matrix with m rows and n columns is called an m × n matrix or m-by-n matrix, while m and n are called its dimensions. For example, the matrix A above is a 3 × 2 matrix.

Matrices which have a single row are called row vectors, and those which have a single column are called column vectors. A matrix which has the same number of rows and columns is called a square matrix. A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix.

13.2 Notation

Matrices are commonly written in box brackets or parentheses:

    [ a11  a12  ⋯  a1n ]
A = [ a21  a22  ⋯  a2n ] = (aij) ∈ R^(m×n).
    [  ⋮    ⋮   ⋱   ⋮  ]
    [ am1  am2  ⋯  amn ]

The specifics of symbolic matrix notation vary widely, with some prevailing trends. Matrices are usually symbolized using upper-case letters (such as A in the examples above), while the corresponding lower-case letters, with two subscript indices (for example, a11, or a1,1), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style (for example, A).

The entry in the i-th row and j-th column of a matrix A is sometimes referred to as the i,j, (i,j), or (i,j)th entry of the matrix, and most commonly denoted as ai,j or aij. Alternative notations for that entry are A[i,j] or Ai,j. For example, the (1,3) entry of the following matrix A is 5 (also denoted a13, a1,3, A[1,3] or A1,3):

    [  4  −7   5   0 ]
A = [ −2   0  11   8 ]
    [ 19   1  −3  12 ]

Sometimes, the entries of a matrix can be defined by a formula such as ai,j = f(i, j). For example, each of the entries of the following matrix A is determined by aij = i − j:

    [ 0  −1  −2  −3 ]
A = [ 1   0  −1  −2 ]
    [ 2   1   0  −1 ]

In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses. For example, the matrix above is defined as A = [i−j], or A = ((i−j)). If the matrix size is m × n, the above-mentioned formula f(i, j) is valid for any i = 1, ..., m and any j = 1, ..., n. This can be either specified separately, or indicated using m × n as a subscript. For instance, the matrix A above is 3 × 4, and can be defined as A = [i − j] (i = 1, 2, 3; j = 1, ..., 4), or A = [i − j]3×4.

Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an m-by-n matrix. Some programming languages start the numbering of array indexes at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1.[9] This article follows the more common convention in mathematical writing where enumeration starts from 1.

An asterisk is occasionally used to refer to whole rows or columns in a matrix. For example, ai,∗ refers to the ith row of A, and a∗,j refers to the jth column of A. The set of all m-by-n matrices is denoted M(m, n).

13.3 Basic operations

There are a number of basic operations that can be applied to modify matrices, called matrix addition, scalar multiplication, transposition, matrix multiplication, row operations, and submatrix.[11]
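The formula-defined matrix A = [i − j] from the notation discussion above can be reproduced directly. This is a small illustrative sketch in plain Python; the 1-based mathematical indices are used in the comprehension, while Python's own list indexing remains 0-based:

```python
# Build the 3-by-4 matrix A = [i - j] with 1-based indices i = 1..3, j = 1..4.
A = [[i - j for j in range(1, 5)] for i in range(1, 4)]
for row in A:
    print(row)
# [0, -1, -2, -3]
# [1, 0, -1, -2]
# [2, 1, 0, -1]

# The (1,3) entry in mathematical (1-based) notation is A[0][2] in Python:
print(A[0][2])   # -2
```

The off-by-one shift between the two conventions is exactly the zero-based indexing issue the text describes for programming languages.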
13.3.1 Addition, scalar multiplication and transposition

Main articles: Matrix addition, Scalar multiplication, and Transpose

Familiar properties of numbers extend to these operations of matrices: for example, addition is commutative, that is, the matrix sum does not depend on the order of the summands: A + B = B + A.[12] The transpose is compatible with addition and scalar multiplication, as expressed by (cA)T = c(AT) and (A + B)T = AT + BT. Finally, (AT)T = A.

13.3.2 Matrix multiplication

Main article: Matrix multiplication

Multiplication of two matrices is defined if and only if the number of columns of the left matrix equals the number of rows of the right matrix. If A is an m-by-n matrix and B is an n-by-p matrix, then their matrix product AB is the m-by-p matrix whose entries are

[AB]ij = ai1 b1j + ai2 b2j + ⋯ + ain bnj,

where 1 ≤ i ≤ m and 1 ≤ j ≤ p.[13] For example, the underlined entry 2340 in the product is calculated as (2 × 1000) + (3 × 100) + (4 × 10) = 2340:

[ 2  3  4 ] [ 0  1000 ]   [ 3  2340 ]
[ 1  0  0 ] [ 1   100 ] = [ 0  1000 ].
            [ 0    10 ]

Matrix multiplication satisfies the rules (AB)C = A(BC) (associativity), and (A + B)C = AC + BC as well as C(A + B) = CA + CB (left and right distributivity), whenever the size of the matrices is such that the various products are defined.[14] The product AB may be defined without BA being defined, namely if A and B are m-by-n and n-by-k matrices, respectively, and m ≠ k. Even if both products are defined, they need not be equal; that is, generally

AB ≠ BA,

that is, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is:

[ 1  2 ] [ 0  1 ]   [ 0  1 ]
[ 3  4 ] [ 0  0 ] = [ 0  3 ],

whereas

[ 0  1 ] [ 1  2 ]   [ 3  4 ]
[ 0  0 ] [ 3  4 ] = [ 0  0 ].

13.3.3 Row operations

These operations are used in a number of ways, including solving linear equations and finding matrix inverses.
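Both multiplication examples above are easy to check numerically. The sketch below uses a small helper `matmul` (an invented name for this example) that implements the entrywise definition of the product in plain Python:

```python
def matmul(A, B):
    """Entry (i, j) of AB is the dot product of row i of A and column j of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# The worked example: the top-right entry is (2*1000) + (3*100) + (4*10) = 2340.
print(matmul([[2, 3, 4], [1, 0, 0]],
             [[0, 1000], [1, 100], [0, 10]]))   # [[3, 2340], [0, 1000]]

# Matrix multiplication is not commutative:
A = [[1, 2], [3, 4]]
B = [[0, 1], [0, 0]]
print(matmul(A, B))   # [[0, 1], [0, 3]]
print(matmul(B, A))   # [[3, 4], [0, 0]]
```

Note that `matmul(A, B)` is only well defined when the inner dimensions agree, mirroring the m-by-n times n-by-p condition in the text.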
13.3.4 Submatrix

A submatrix of a matrix is obtained by deleting any collection of rows and/or columns.[16][17][18] For example, from the following 3-by-4 matrix, we can construct a 2-by-3 submatrix by removing row 3 and column 2:

    [ 1   2   3   4 ]
A = [ 5   6   7   8 ]  →  [ 1  3  4 ]
    [ 9  10  11  12 ]     [ 5  7  8 ].

The minors and cofactors of a matrix are found by computing the determinant of certain submatrices.[18][19]

A principal submatrix is a square submatrix obtained by removing certain rows and columns. The definition varies from author to author. According to some authors, a principal submatrix is a submatrix in which the set of row indices that remain is the same as the set of column indices that remain.[20][21] Other authors define a principal submatrix to be one in which the first k rows and columns, for some number k, are the ones that remain;[22] this type of submatrix has also been called a leading principal submatrix.[23]

13.5 Linear transformations

Main articles: Linear transformation and Transformation matrix

Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known as linear maps.

[Figure: the 2-by-2 matrix with columns (a, b) and (c, d) maps the unit square to the parallelogram with vertices (0, 0), (a, b), (a + c, b + d), and (c, d); its area is |ad − bc|.]
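The unit-square picture in the figure above can be reproduced numerically. This is an illustrative sketch with NumPy, using arbitrary example values for a, b, c, d:

```python
import numpy as np

# A 2-by-2 matrix [[a, c], [b, d]] sends the unit square to the parallelogram
# with vertices (0,0), (a,b), (a+c,b+d), (c,d); its area is |ad - bc|.
a, b, c, d = 3.0, 1.0, 1.0, 2.0
A = np.array([[a, c],
              [b, d]])

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]]).T   # corners as columns
print((A @ square).T)    # (0,0), (a,b), (a+c,b+d), (c,d)
print(abs(a * d - b * c), abs(np.linalg.det(A)))        # both equal the area, 5.0
```

The agreement between |ad − bc| and the determinant previews the geometric interpretation of determinants discussed later in the chapter.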
13.6 Square matrix

Main article: Square matrix

A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. The entries aii form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix.

13.6.1 Main types

A square matrix A that is equal to its transpose, that is, A = AT, is a symmetric matrix. If instead A is equal to the negative of its transpose, that is, A = −AT, then A is a skew-symmetric matrix. In complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A∗ = A, where the star or asterisk denotes the conjugate transpose of the matrix, that is, the transpose of the complex conjugate of A.

By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; that is, every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.[29] This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns; see below.
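The spectral-theorem claims for symmetric matrices can be verified directly on a small example. This is an illustrative sketch with NumPy (`numpy.linalg.eigh` is the eigensolver intended for symmetric/Hermitian matrices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert np.allclose(A, A.T)              # A is symmetric

evals, evecs = np.linalg.eigh(A)
print(evals)                            # the eigenvalues are real numbers

# The columns of evecs form an orthonormal eigenbasis:
print(np.allclose(evecs.T @ evecs, np.eye(2)))           # True
# and they diagonalize A: A V = V diag(evals).
print(np.allclose(A @ evecs, evecs @ np.diag(evals)))    # True
```

For a complex Hermitian matrix the same calls apply, with the conjugate transpose in place of the plain transpose.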
A symmetric n×n matrix is called positive-definite (respectively negative-definite; indefinite) if, for all nonzero vectors x, the associated quadratic form Q(x) = xTAx takes only positive values (respectively only negative values; both some negative and some positive values).[32] If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.

A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, that is, the matrix is positive-semidefinite and it is invertible.[33] The table at the right shows two possibilities for 2-by-2 matrices.

Allowing as input two different vectors instead yields the bilinear form associated to A:

BA(x, y) = xTAy.[34]

Orthogonal matrix

Trace

The trace, tr(A), of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors:

tr(AB) = tr(BA).

This is immediate from the definition of matrix multiplication:

tr(AB) = Σ(i=1..m) Σ(j=1..n) Aij Bji = tr(BA).

Also, the trace of a matrix is equal to that of its transpose, that is,

tr(A) = tr(AT).

Determinant

Main article: Determinant

The determinant det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. The determinant of a product of square matrices equals the product of their determinants:

det(AB) = det(A) det(B).[36]
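The trace and determinant identities above are easy to confirm numerically on random matrices. This is an illustrative sketch with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# tr(AB) = tr(BA), even though AB != BA in general:
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))    # True
# tr(A) = tr(A^T):
print(np.isclose(np.trace(A), np.trace(A.T)))          # True
# det(AB) = det(A) det(B):
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B))) # True
```

Because these hold for every pair of square matrices of matching size, a random spot-check like this is a useful sanity test, though of course not a proof.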
the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1.[37] Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, that is, determinants of smaller matrices.[38] This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.[39]

Eigenvalues and eigenvectors

Main article: Eigenvalues and eigenvectors

A number λ and a non-zero vector v satisfying

Av = λv

are called an eigenvalue and an eigenvector of A, respectively.[40][41] The number λ is an eigenvalue of an n×n-matrix A if and only if A − λIn is not invertible, which is equivalent to

det(A − λIn) = 0.[42]

The polynomial pA in an indeterminate X given by evaluating the determinant det(XIn − A) is called the characteristic polynomial of A. It is a monic polynomial of degree n. Therefore the polynomial equation pA(λ) = 0 has at most n different solutions, that is, eigenvalues of the matrix.[43] They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, pA(A) = 0, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.

13.7 Computational aspects

Matrix calculations can often be performed with different techniques. Many problems can be solved by both direct algorithms and iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a sequence of vectors xn converging to an eigenvector when n tends to infinity.[44]

To be able to choose the more appropriate algorithm for each specific problem, it is important to determine both the effectiveness and precision of all the available algorithms. The domain studying these matters is called numerical linear algebra.[45] As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability.

Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations such as additions and multiplications of scalars are necessary to perform some algorithm, for example, multiplication of matrices. For example, calculating the matrix product of two n-by-n matrices using the definition given above needs n^3 multiplications, since for any of the n^2 entries of the product, n multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only n^2.807 multiplications.[46] A refined approach also incorporates specific features of the computing devices.

In many practical situations additional information about the matrices involved is known. An important case are sparse matrices, that is, matrices most of whose entries are zero. There are specifically adapted algorithms for, say, solving linear systems Ax = b for sparse matrices A, such as the conjugate gradient method.[47]

An algorithm is, roughly speaking, numerically stable if little deviations in the input values do not lead to big deviations in the result. For example, calculating the inverse of a matrix via Laplace's formula (adj(A) denotes the adjugate matrix of A)

A⁻¹ = adj(A) / det(A)

may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse.[48]

Although most computer languages are not designed with commands or libraries for matrices, as early as the 1970s, some engineering desktop computers such as the HP 9830 had ROM cartridges to add BASIC commands for matrices. Some computer languages such as APL were designed to manipulate matrices, and various mathematical programs can be used to aid computing with matrices.[49]

13.8 Decomposition

Main articles: Matrix decomposition, Matrix diagonalization, Gaussian elimination, and Montante's method

There are several methods to render matrices into a more easily accessible form. They are generally referred to as matrix decomposition or matrix factorization techniques. The interest of all these techniques is that they preserve certain properties of the matrices in question, such as determinant, rank, or inverse, so that these quantities can be
calculated after applying the transformation, or that certain matrix operations are algorithmically easier to carry out for some types of matrices.

The LU decomposition factors matrices as a product of a lower (L) and an upper (U) triangular matrix.[50] Once this decomposition is calculated, linear systems can be solved more efficiently, by a simple technique called forward and back substitution. Likewise, inverses of triangular matrices are algorithmically easier to calculate. Gaussian elimination is a similar algorithm; it transforms any matrix to row echelon form.[51] Both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. Singular value decomposition expresses any matrix A as a product UDV∗, where U and V are unitary matrices and D is a diagonal matrix. The eigendecomposition or diagonalization expresses A as a product VDV⁻¹, where D is a diagonal matrix and V is a suitable invertible matrix;[52] more generally, the Jordan decomposition brings any matrix into Jordan normal form.[53] Powers of a diagonal matrix are easily obtained from the corresponding powers of its diagonal entries, which is much easier than doing the exponentiation for A instead. This can be used to compute the matrix exponential e^A, a need frequently arising in solving linear differential equations, matrix logarithms and square roots of matrices.[54] To avoid numerically ill-conditioned situations, further algorithms such as the Schur decomposition can be employed.[55]

13.9 Abstract algebraic aspects and generalizations

Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension are tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realised as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers.[56] Matrices, subject to certain requirements, tend to form groups known as matrix groups. Similarly, under certain conditions matrices form rings known as matrix rings. Though the product of matrices is not in general commutative, certain matrices form fields known as matrix fields.
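The last point can be checked in a small numerical sketch (Python with NumPy is assumed here; the helper name c_matrix is this example's own, not notation from the text): the real 2-by-2 matrices of the form [[a, −b], [b, a]] multiply exactly like the complex numbers a + ib, commute with one another, and every nonzero one is invertible, so together they behave like a field even though matrix multiplication in general is noncommutative.

```python
import numpy as np

def c_matrix(a, b):
    """Real 2-by-2 matrix standing for the complex number a + ib."""
    return np.array([[a, -b], [b, a]], dtype=float)

# Matrix multiplication is not commutative in general ...
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
assert not np.allclose(A @ B, B @ A)

# ... but matrices of the form [[a, -b], [b, a]] do commute:
X = c_matrix(1.0, 2.0)   # stands for 1 + 2i
Y = c_matrix(3.0, -1.0)  # stands for 3 - i
assert np.allclose(X @ Y, Y @ X)

# Their product matches complex multiplication: (1+2i)(3-i) = 5 + 5i.
assert np.allclose(X @ Y, c_matrix(5.0, 5.0))

# det = a^2 + b^2 > 0, so every nonzero element is invertible.
assert np.allclose(X @ np.linalg.inv(X), np.eye(2))
```

This is only an illustration of the simplest matrix field; the article returns to this correspondence with the complex numbers in the applications section.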
13.9.1 Matrices with more general entries

If the ring R is commutative, that is, its multiplication is commutative, then M(n, R) is a unitary noncommutative (unless n = 1) associative algebra over R. The determinant of square matrices over a commutative ring R can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in R, generalising the situation over a field F, where every nonzero element is invertible.[59] Matrices over superrings are called supermatrices.[60]

Matrices do not always have all their entries in the same ring, or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ordinary ring; but their sizes must fulfil certain compatibility conditions.

13.9.2 Relationship to linear maps

Linear maps R^n → R^m are equivalent to m-by-n matrices, as described above. More generally, any linear map f: V → W between finite-dimensional vector spaces can be described by a matrix A = (a_ij), after choosing bases v1, ..., vn of V, and w1, ..., wm of W (so n is the dimension of V and m is the dimension of W), which is such that

f(v_j) = Σ_{i=1}^{m} a_{i,j} w_i   for j = 1, ..., n.

In other words, column j of A expresses the image of vj in terms of the basis vectors wi of W; thus this relation uniquely determines the entries of the matrix A. Note that the matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent matrices.[61] Many of the above concrete notions can be reinterpreted in this light, for example, the transpose matrix A^T describes the transpose of the linear map given by A, with respect to the dual bases.[62]

These properties can be restated in a more natural way: the category of all matrices with entries in a field k, with multiplication as composition, is equivalent to the category of finite dimensional vector spaces and linear maps over this field.

More generally, the set of m×n matrices can be used to represent the R-linear maps between the free modules R^m and R^n for an arbitrary ring R with unity. When n = m, composition of these maps is possible, and this gives rise to the matrix ring of n×n matrices representing the endomorphism ring of R^n.

13.9.3 Matrix groups

Main article: Matrix group

A group is a mathematical structure consisting of a set of objects together with a binary operation, that is, an operation combining any two objects to a third, subject to certain requirements.[63] A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group.[64][65] Since in a group every element has to be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups.

Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (that is, a smaller group contained in) their general linear group, called a special linear group.[66] Orthogonal matrices, determined by the condition

M^T M = I,

form the orthogonal group.[67] Every orthogonal matrix has determinant 1 or −1. Orthogonal matrices with determinant 1 form a subgroup called special orthogonal group.

Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group.[68] General groups can be studied using matrix groups, which are comparatively well-understood, by means of representation theory.[69]

13.9.4 Infinite matrices

It is also possible to consider matrices with infinitely many rows and/or columns,[70] even if, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication and transposition can still be defined without problem; however matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general.

If R is any ring with unity, then the ring of endomorphisms of M = ⊕_{i∈I} R as a right R-module is isomorphic to the ring of column finite matrices CFM_I(R), whose entries are indexed by I × I and whose columns each contain only finitely many nonzero entries. The endomorphisms of M considered as a left R-module result in an analogous object, the row finite matrices RFM_I(R), whose rows each only have finitely many nonzero entries.

If infinite matrices are used to describe linear maps, then only those matrices can be used all of whose columns have but a finite number of nonzero entries, for the following reason. For a matrix A to describe a linear map f: V → W, bases for both spaces must have been chosen; recall that by definition this means that every vector in
the space can be written uniquely as a (finite) linear combination of basis vectors, so that, written as a (column) vector v of coefficients, only finitely many entries vi are nonzero. Now the columns of A describe the images by f of individual basis vectors of V in the basis of W, which is only meaningful if these columns have only finitely many nonzero entries. There is no restriction on the rows of A, however: in the product Av there are only finitely many nonzero coefficients of v involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined. Moreover, this amounts to forming a linear combination of the columns of A that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries, because each of those columns does. One also sees that products of two matrices of the given type are well defined (provided as usual that the column-index and row-index sets match), are again of the same type, and correspond to the composition of linear maps.

If R is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent sequences form a ring. Analogously, the matrices whose row sums are absolutely convergent series also form a ring.

In that vein, infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that have to be imposed. However, the explicit point of view of matrices tends to obfuscate the matter,[71] and the abstract and more powerful tools of functional analysis can be used instead.

13.9.5 Empty matrices

An empty matrix is a matrix in which the number of rows or columns (or both) is zero.[72][73]

13.10 Applications

There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of alternatives the players choose.[74] Text mining and automated thesaurus compilation makes use of document-term matrices such as tf-idf to track frequencies of certain words in several documents.[75]

Complex numbers can be represented by particular real 2-by-2 matrices via

a + ib ↔ [ a  −b ; b  a ],

under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions[76] and Clifford algebras in general.

Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break.[77] Computer graphics uses matrices both to represent objects and to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation.[78] Matrices over a polynomial ring are important in the study of control theory.

Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method.
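The correspondence between 2-by-2 rotation matrices and complex numbers of absolute value 1, and their use as graphics transforms, can be illustrated with a short NumPy sketch (the code and the function name rotation are this example's own, not taken from the text):

```python
import numpy as np

def rotation(theta):
    """2-by-2 rotation matrix by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Composing two rotations is the same as rotating by the summed angle,
# mirroring the multiplication of unit complex numbers.
t1, t2 = 0.3, 1.1
assert np.allclose(rotation(t1) @ rotation(t2), rotation(t1 + t2))

# A rotation is orthogonal (R^T R = I) with determinant +1, so it
# preserves lengths and orientation.
R = rotation(t1)
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)

# Rotating the point (1, 0) by 90 degrees gives (0, 1).
p = rotation(np.pi / 2) @ np.array([1.0, 0.0])
assert np.allclose(p, [0.0, 1.0], atol=1e-12)
```

In graphics pipelines the same idea is applied with larger (affine or projective) matrices, but the group structure — composition by multiplication — is identical.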
[Figure: An undirected graph on the vertices 1, 2, 3 with adjacency matrix
(1 1 0
 1 0 1
 0 1 0).]

13.10.2 Analysis and geometry

The Hessian matrix of a differentiable function f: R^n → R consists of the second derivatives of f with respect to the several coordinate directions, that is,[81]

H(f) = [ ∂²f / (∂x_i ∂x_j) ].

If f_1, ..., f_m denote the components of a differentiable function f: R^n → R^m, then the Jacobi matrix is defined as[83]

J(f) = [ ∂f_i / ∂x_j ]_{1 ≤ i ≤ m, 1 ≤ j ≤ n}.

If n > m, and if the rank of the Jacobi matrix attains its maximal value m, f is locally invertible at that point, by the implicit function theorem.[84]

Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has decisive influence on the set of possible solutions of the equation in question.[85]

The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation.[86]
13.10.3 Probability theory and statistics

Properties of the Markov chain like absorbing states, that is, states that any particle attains eventually, can be read off the eigenvectors of the transition matrices.[88]

Statistics also makes use of matrices in many different forms.[89] Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables.[90] Another technique using matrices is linear least squares, a method that approximates a finite set of pairs (x1, y1), (x2, y2), ..., (xN, yN) by a linear function

y_i ≈ a x_i + b,   i = 1, ..., N,

which can be formulated in terms of matrices, related to the singular value decomposition of matrices.[91]

Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as the matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics.[92][93]

13.10.4 Symmetries and transformations in physics

Further information: Symmetry in physics

Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.[94] For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to, the basic quark states that define particles with specific and distinct masses.[95]

13.10.5 Linear combinations of quantum states

The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states.[96] This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates.[97]

Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: Collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles.[98]

13.10.6 Normal modes

A general application of matrices in physics is to the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms.[99] They are also needed for describing mechanical vibrations, and oscillations in electrical circuits.[100]

13.10.7 Geometrical optics

Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix called ray transfer matrix: the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element. Actually, there are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components' matrices.[101]
The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many rows and columns.[116] Later, von Neumann carried out the mathematical formulation of quantum mechanics, by further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions.

The word has been used in unusual ways by at least two authors of historical importance.

Bertrand Russell and Alfred North Whitehead in their Principia Mathematica (1910–1913) use the word "matrix" in the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the bottom (0 order) the function is identical to its extension:

    "Let us give the name of matrix to any function, of however many variables, which does not involve any apparent variables. Then any possible function other than a matrix is derived from a matrix by means of generalization, that is, by considering the proposition which asserts that the function in question is true with all possible values or with some value of one of the arguments, the other argument or arguments remaining undetermined."[117]

For example, a function Φ(x, y) of two variables x and y can be reduced to a collection of functions of a single variable, for example, y, by considering the function for all possible values of "individuals" ai substituted in place of variable x. And then the resulting collection of functions of the single variable y, that is, ∀ai: Φ(ai, y), can be reduced to a "matrix" of values by considering the function for all possible values of "individuals" bi substituted in place of variable y:

13.12 See also

Matrix calculus
Matrix function
Periodic matrix set
Tensor
Geometric multiplicity
Gram–Schmidt process

13.13 Notes

[2] Anton (1987, p. 23)
[3] Beauregard & Fraleigh (1973, p. 56)
[4] Young, Cynthia. Precalculus. Laurie Rosatone. p. 727.
[5] K. Bryan and T. Leise. The $25,000,000,000 eigenvector: The linear algebra behind Google. SIAM Review, 48(3):569-581, 2006.
[6] Lang 2002
[7] Fraleigh (1976, p. 209)
[8] Nering (1970, p. 37)
[9] Oualline 2003, Ch. 5
[10] "How to organize, add and multiply matrices - Bill Shillito". TED ED. Retrieved April 6, 2013.
[11] Brown 1991, Definition I.2.1 (addition), Definition I.2.4 (scalar multiplication), and Definition I.2.33 (transpose)
[12] Brown 1991, Theorem I.2.6
[13] Brown 1991, Definition I.2.20
[14] Brown 1991, Theorem I.2.24
[15] Horn & Johnson 1985, Ch. 4 and 5
[16] Bronson (1970, p. 16)
[17] Kreyszig (1972, p. 220)
[18] Protter & Morrey (1970, p. 869)
[23] Horn, Roger A.; Johnson, Charles R. (2012), Matrix Analysis (2nd ed.), Cambridge University Press, p. 17, ISBN 9780521839402.
[25] Greub 1975, Section III.2
[26] Brown 1991, Definition II.3.3
[27] Greub 1975, Section III.1
[28] Brown 1991, Theorem II.3.22
[29] Horn & Johnson 1985, Theorem 2.5.6
[30] Brown 1991, Definition I.2.28
[31] Brown 1991, Definition I.5.13
[32] Horn & Johnson 1985, Chapter 7
[33] Horn & Johnson 1985, Theorem 7.2.1
[34] Horn & Johnson 1985, Example 4.0.6, p. 169
[35] Brown 1991, Definition III.2.1
[36] Brown 1991, Theorem III.2.12
[37] Brown 1991, Corollary III.2.16
[38] Mirsky 1990, Theorem 1.4.1
[39] Brown 1991, Theorem III.3.18
[40] "Eigen" means "own" in German and in Dutch.
[41] Brown 1991, Definition III.4.1
[42] Brown 1991, Definition III.4.9
[43] Brown 1991, Corollary III.4.10
[44] Householder 1975, Ch. 7
[45] Bau III & Trefethen 1997
[46] Golub & Van Loan 1996, Algorithm 1.3.1
[47] Golub & Van Loan 1996, Chapters 9 and 10, esp. section 10.2
[48] Golub & Van Loan 1996, Chapter 2.3
[49] For example, Mathematica, see Wolfram 2003, Ch. 3.7
[50] Press, Flannery & Teukolsky 1992
[51] Stoer & Bulirsch 2002, Section 4.1
[52] Horn & Johnson 1985, Theorem 2.5.4
[53] Horn & Johnson 1985, Ch. 3.1, 3.2
[54] Arnold & Cooke 1992, Sections 14.5, 7, 8
[55] Bronson 1989, Ch. 15
[56] Coburn 1955, Ch. V
[57] Lang 2002, Chapter XIII
[58] Lang 2002, XVII.1, p. 643
[59] Lang 2002, Proposition XIII.4.16
[61] Greub 1975, Section III.3
[62] Greub 1975, Section III.3.13
[63] See any standard reference on groups.
[64] Additionally, the group is required to be closed in the general linear group.
[65] Baker 2003, Def. 1.30
[66] Baker 2003, Theorem 1.2
[67] Artin 1991, Chapter 4.5
[68] Rowen 2008, Example 19.2, p. 198
[69] See any reference on representation theory or group representation.
[70] See the item "Matrix" in Itô, ed. 1987
[71] "Not much of matrix theory carries over to infinite-dimensional spaces, and what does is not so useful, but it sometimes helps." Halmos 1982, p. 23, Chapter 5
[72] "Empty Matrix: A matrix is empty if either its row or column dimension is zero", Glossary, O-Matrix v6 User Guide
[73] "A matrix having at least one dimension equal to zero is called an empty matrix", MATLAB Data Structures
[74] Fudenberg & Tirole 1983, Section 1.1.1
[75] Manning 1999, Section 15.3.4
[76] Ward 1997, Ch. 2.8
[77] Stinson 2005, Ch. 1.1.5 and 1.2.4
[78] Association for Computing Machinery 1979, Ch. 7
[79] Godsil & Royle 2004, Ch. 8.1
[80] Punnen 2002
[81] Lang 1987a, Ch. XVI.6
[82] Nocedal 2006, Ch. 16
[83] Lang 1987a, Ch. XVI.1
[84] Lang 1987a, Ch. XVI.5. For a more advanced, and more general statement see Lang 1969, Ch. VI.2
[85] Gilbarg & Trudinger 2001
[86] Šolín 2005, Ch. 2.5. See also stiffness method.
[87] Latouche & Ramaswami 1999
[88] Mehata & Srinivasan 1978, Ch. 2.8
[89] Healy, Michael (1986), Matrices for Statistics, Oxford University Press, ISBN 978-0-19-850702-4
[90] Krzanowski 1988, Ch. 2.2., p. 60
[91] Krzanowski 1988, Ch. 4.1
[93] Zabrodin, Brezin & Kazakov et al. 2006
[94] Itzykson & Zuber 1980, Ch. 2
[95] See Burgess & Moore 2007, section 1.6.3. (SU(3)), section 2.4.3.2. (Kobayashi–Maskawa matrix)
[96] Schiff 1968, Ch. 6
[97] Bohm 2001, sections II.4 and II.8
[98] Weinberg 1995, Ch. 3
[99] Wherrett 1987, part II
[102] Shen, Crossley & Lun 1999 cited by Bretscher 2005, p. 1
[103] Discrete Mathematics 4th Ed. Dossey, Otto, Spense, Vanden Eynden, Published by Addison Wesley, October 10, 2001, ISBN 978-0321079121, pp. 564-565
[104] Needham, Joseph; Wang Ling (1959). Science and Civilisation in China. III. Cambridge: Cambridge University Press. p. 117. ISBN 9780521058018.
[105] Discrete Mathematics 4th Ed. Dossey, Otto, Spense, Vanden Eynden, Published by Addison Wesley, October 10, 2001, ISBN 978-0321079121, p. 564
[106] Merriam-Webster dictionary, Merriam-Webster, retrieved April 20, 2009
[107] Although many sources state that J. J. Sylvester coined the mathematical term "matrix" in 1848, Sylvester published nothing in 1848. (For proof that Sylvester published nothing in 1848, see: J. J. Sylvester with H. F. Baker, ed., The Collected Mathematical Papers of James Joseph Sylvester (Cambridge, England: Cambridge University Press, 1904), vol. 1.) His earliest use of the term "matrix" occurs in 1850 in: J. J. Sylvester (1850) "Additions to the articles in the September number of this journal, 'On a new class of theorems,' and on Pascal's theorem," The London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, 37: 363-370. From page 369: "For this purpose we must commence, not with a square, but with an oblong arrangement of terms consisting, suppose, of m lines and n columns. This will not in itself represent a determinant, but is, as it were, a Matrix out of which we may form various systems of determinants ..."
[108] The Collected Mathematical Papers of James Joseph Sylvester: 1837-1853, Paper 37, p. 247
[109] Phil. Trans. 1858, vol. 148, pp. 17-37; Math. Papers II, 475-496
[110] Dieudonné, ed. 1978, Vol. 1, Ch. III, p. 96
[111] Knobloch 1994
[112] Hawkins 1975
[113] Kronecker 1897
[114] Weierstrass 1915, pp. 271-286
[115] Bôcher 2004
[116] Mehra & Rechenberg 1987
[117] Whitehead, Alfred North; and Russell, Bertrand (1913) Principia Mathematica to *56, Cambridge at the University Press, Cambridge UK (republished 1962), cf page 162.
[118] Tarski, Alfred (1946) Introduction to Logic and the Methodology of Deductive Sciences, Dover Publications, Inc, New York NY, ISBN 0-486-28462-X.

13.14 References

Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0
Arnold, Vladimir I.; Cooke, Roger (1992), Ordinary differential equations, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-54813-3
Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1
Association for Computing Machinery (1979), Computer Graphics, Tata McGraw-Hill, ISBN 978-0-07-059376-3
Baker, Andrew J. (2003), Matrix Groups: An Introduction to Lie Group Theory, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-1-85233-470-3
Bau III, David; Trefethen, Lloyd N. (1997), Numerical linear algebra, Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-361-9
Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X
Bretscher, Otto (2005), Linear Algebra with Applications (3rd ed.), Prentice Hall
Bronson, Richard (1970), Matrix Methods: An Introduction, New York: Academic Press, LCCN 70097490
Bronson, Richard (1989), Schaum's outline of theory and problems of matrix operations, New York: McGraw-Hill, ISBN 978-0-07-007978-6
Brown, William C. (1991), Matrices and vector spaces, New York, NY: Marcel Dekker, ISBN 978-0-8247-8419-5
Coburn, Nathaniel (1955), Vector and tensor analysis, New York, NY: Macmillan, OCLC 1029828
Conrey, J. Brian (2007), Ranks of elliptic curves and random matrix theory, Cambridge University Press, ISBN 978-0-521-69964-8
Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1
Fudenberg, Drew; Tirole, Jean (1983), Game Theory, MIT Press
Gilbarg, David; Trudinger, Neil S. (2001), Elliptic partial differential equations of second order (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-3-540-41160-4
Godsil, Chris; Royle, Gordon (2004), Algebraic Graph Theory, Graduate Texts in Mathematics, 207, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-95220-8
Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Johns Hopkins, ISBN 978-0-8018-5414-9
Greub, Werner Hildbert (1975), Linear algebra, Graduate Texts in Mathematics, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90110-7
Halmos, Paul Richard (1982), A Hilbert space problem book, Graduate Texts in Mathematics, 19 (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-90685-0, MR 675952
Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6
Householder, Alston S. (1975), The theory of matrices in numerical analysis, New York, NY: Dover Publications, MR 0378371
Itô, Kiyosi, ed. (1987), Encyclopedic dictionary of mathematics. Vol. I-IV (2nd ed.), MIT Press, ISBN 978-0-262-09026-1, MR 901762
Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8
Krzanowski, Wojtek J. (1988), Principles of multivariate analysis, Oxford Statistical Science Series, 3, The Clarendon Press Oxford University Press, ISBN 978-0-19-852211-9, MR 969370
Lang, Serge (1969), Analysis II, Addison-Wesley
Lang, Serge (1987a), Calculus of several variables (3rd ed.), Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-96405-8
Lang, Serge (1987b), Linear algebra, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-
Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556
Latouche, Guy; Ramaswami, Vaidyanathan (1999), Introduction to matrix analytic methods in stochastic modeling (1st ed.), Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-425-8
Manning, Christopher D.; Schütze, Hinrich (1999), Foundations of statistical natural language processing, MIT Press, ISBN 978-0-262-13360-9
Mehata, K. M.; Srinivasan, S. K. (1978), Stochastic processes, New York, NY: McGraw-Hill, ISBN 978-0-07-096612-3
Mirsky, Leonid (1990), An Introduction to Linear Algebra, Courier Dover Publications, ISBN 978-0-486-66434-7
Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76-91646
Nocedal, Jorge; Wright, Stephen J. (2006), Numerical Optimization (2nd ed.), Berlin, DE; New York, NY: Springer-Verlag, p. 449, ISBN 978-0-387-30303-1
Oualline, Steve (2003), Practical C++ programming, O'Reilly, ISBN 978-0-596-00419-4
Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1992), "LU Decomposition and Its Applications", Numerical Recipes in FORTRAN: The Art of Scientific Computing (2nd ed.), Cambridge University Press, pp. 34-42
Protter, Murray H.; Morrey, Jr., Charles B. (1970), College Calculus with Analytic Geometry (2nd ed.), Reading: Addison-Wesley, LCCN 76087042
Punnen, Abraham P.; Gutin, Gregory (2002), The traveling salesman problem and its variations, Boston, MA: Kluwer Academic Publishers, ISBN 978-1-4020-0664-7
Reichl, Linda E. (2004), The transition to chaos: conservative classical systems and quantum manifestations, Berlin, DE; New York, NY: Springer-Verlag, ISBN 978-0-387-98788-0
Rowen, Louis Halle (2008), Graduate Algebra: noncommutative view, Providence, RI: American Mathematical Society, ISBN 978-0-8218-4153-2
Šolín, Pavel (2005), Partial Differential Equations and the Finite Element Method, Wiley-Interscience,
96412-6 ISBN 978-0-471-76409-0
94 CHAPTER 13. MATRIX (MATHEMATICS)
Guenther, Robert D. (1990), Modern Optics, John Mehra, Jagdish; Rechenberg, Helmut (1987), The
Wiley, ISBN 0-471-60538-7 Historical Development of Quantum Theory (1st
ed.), Berlin, DE; New York, NY: Springer-Verlag,
Itzykson, Claude; Zuber, Jean-Bernard (1980), ISBN 978-0-387-96284-9
Quantum Field Theory, McGrawHill, ISBN 0-07- Shen, Kangshen; Crossley, John N.; Lun, Anthony
032071-3 Wah-Cheung (1999), Nine Chapters of the Mathe-
matical Art, Companion and Commentary (2nd ed.),
Riley, Kenneth F.; Hobson, Michael P.; Bence, Oxford University Press, ISBN 978-0-19-853936-0
Stephen J. (1997), Mathematical methods for physics
and engineering, Cambridge University Press, ISBN Weierstrass, Karl (1915), Collected works, 3
0-521-55506-X
Online books
Chapter 14

Eigenvalues and eigenvectors

"Characteristic root" redirects here. For other uses, see Characteristic root (disambiguation).

In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector whose direction does not change when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation

T(v) = λv.

The prefix eigen- is adopted from the German word eigen for "proper, inherent"; "own, individual, special"; "specific, peculiar, or characteristic".[4] Originally utilized to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization.

In essence, an eigenvector v of a linear transformation T is a non-zero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue. This condition can be written as the equation

Av = λv.
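This defining condition is easy to check numerically. The following is a quick illustrative sketch using NumPy; the 2-by-2 matrix here is an arbitrary example chosen for the sketch, not one from the text:

```python
import numpy as np

# An arbitrary symmetric 2x2 matrix, used only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding (unit-norm) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify A v = lambda v for every eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

For this matrix the two eigenvalues come out as 3 and 1, with eigenvectors proportional to [1, 1] and [1, -1].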
…to the right and points in the bottom half are moved to the left proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left and made longer or shorter by the transformation. Notice that points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one because the mapping does not change their length, either.

Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d/dx, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as

(d/dx) e^{λx} = λ e^{λx}.

Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices that are also referred to as eigenvectors. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation above for a linear transformation can be rewritten as the matrix multiplication

Av = λv,

where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix, for example by diagonalizing it.

Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them:

The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.[5][6]

The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T.[7][8]

If the set of eigenvectors of T form a basis of the domain of T, then this basis is called an eigenbasis.

14.2 History

Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations.

In the 18th century Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.[9] Lagrange realized that the principal axes are the eigenvectors of the inertia matrix.[10] In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions.[11] Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.[12][13]

Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur.[14] Sturm developed Fourier's ideas further and brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that real symmetric matrices have real eigenvalues.[11] This was extended by Hermite in 1855 to what are now called Hermitian matrices.[12] Around the same time, Brioschi proved that the eigenvalues of orthogonal matrices lie on the unit circle,[11] and Clebsch found the corresponding result for skew-symmetric matrices.[12] Finally, Weierstrass clarified an important aspect in the stability theory started by Laplace by realizing that defective matrices can cause instability.[11]

In the meantime, Liouville studied eigenvalue problems similar to those of Sturm; the discipline that grew out of their work is now called Sturm–Liouville theory.[15] Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.[16]

At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices.[17] He was the first to use the German word eigen, which means "own", to denote eigenvalues and eigenvectors in 1904,[18] though he may have been following a related usage by Helmholtz. For some time, the standard term in English was proper value, but the more distinctive term eigenvalue is standard today.[19]

The first numerical algorithm for computing eigenvalues and eigenvectors appeared in 1929, when Von Mises published the power method. One of the most popular methods today, the QR algorithm, was proposed independently by John G.F. Francis[20] and Vera Kublanovskaya[21] in 1961.[22]
14.3 Eigenvalues and eigenvectors of matrices

See also: Euclidean vector and Matrix (mathematics)

Eigenvalues and eigenvectors are often introduced to students in the context of linear algebra courses focused on matrices.[23][24] Furthermore, linear transformations can be represented using matrices,[1][2] which is especially common in numerical and computational applications.[25]

Consider the vectors

x = [1, 3, 4]^T and y = [20, 60, 80]^T.

These vectors are said to be scalar multiples of each other, or parallel or collinear, if there is a scalar λ such that

x = λy.

In this case λ = 1/20.

Now consider the linear transformation of n-dimensional vectors defined by an n by n matrix A, with components

w_i = A_{i1} v_1 + A_{i2} v_2 + … + A_{in} v_n = Σ_{j=1}^{n} A_{ij} v_j.

If it occurs that v and w are scalar multiples, that is if Av = w = λv, then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. Equation (1) is the eigenvalue equation for the matrix A.

Equation (1) can be stated equivalently as

Ax = λx.

Using Leibniz' rule for the determinant, the left-hand side of Equation (3) is a polynomial function of the variable λ, and the degree of this polynomial is n, the order of the matrix A. Its coefficients depend on the entries of A, except that its term of degree n is always (−1)^n λ^n. This polynomial is called the characteristic polynomial of A. Equation (3) is called the characteristic equation or the secular equation of A.

The fundamental theorem of algebra implies that the characteristic polynomial of an n by n matrix A, being a polynomial of degree n, can be factored into the product of n linear terms,
Taking the determinant of (M − λI), the characteristic polynomial of M is

|M − λI| = |2 − λ, 1; 1, 2 − λ| = 3 − 4λ + λ².

Setting the characteristic polynomial equal to zero, it has roots at λ = 1 and λ = 3, which are the two eigenvalues of M. The eigenvectors corresponding to each eigenvalue can be found by solving for the components of v in the equation Mv = λv. In this example, the eigenvectors are any non-zero scalar multiples of

v_{λ=1} = [1, −1]^T,  v_{λ=3} = [1, 1]^T.

If the entries of the matrix A are all real numbers, then the coefficients of the characteristic polynomial will also be real numbers, but the eigenvalues may still have non-zero imaginary parts. The entries of the corresponding eigenvectors therefore may also have non-zero imaginary parts. Similarly, the eigenvalues may be irrational numbers even if all the entries of A are rational numbers or even if they are all integers. However, if the entries of A are all algebraic numbers, which include the rationals, the eigenvalues are complex algebraic numbers.

The non-real roots of a real polynomial with real coefficients can be grouped into pairs of complex conjugates, namely with the two members of each pair having imaginary parts that differ only in sign and the same real part. If the degree is odd, then by the intermediate value theorem at least one of the roots is real. Therefore, any real matrix with odd order has at least one real eigenvalue, whereas a real matrix with even order may not have any real eigenvalues. The eigenvectors associated with these complex eigenvalues are also complex and also appear in complex conjugate pairs.

14.3.2 Algebraic multiplicity

Let λ_i be an eigenvalue of an n by n matrix A. The algebraic multiplicity μ_A(λ_i) of the eigenvalue is its multiplicity as a root of the characteristic polynomial, that is, the largest integer k such that (λ − λ_i)^k divides evenly that polynomial.[8][26][27]

Suppose a matrix A has dimension n and d ≤ n distinct eigenvalues. Whereas Equation (4) factors the characteristic polynomial of A into the product of n linear terms with some terms potentially repeating, the characteristic polynomial can instead be written as the product of d terms each corresponding to a distinct eigenvalue and raised to the power of the algebraic multiplicity,

|A − λI| = (λ_1 − λ)^{μ_A(λ_1)} (λ_2 − λ)^{μ_A(λ_2)} … (λ_d − λ)^{μ_A(λ_d)}.

If d = n then the right-hand side is the product of n linear terms and this is the same as Equation (4). The size of each eigenvalue's algebraic multiplicity is related to the dimension n as

1 ≤ μ_A(λ_i) ≤ n,

μ_A = Σ_{i=1}^{d} μ_A(λ_i) = n.

If μ_A(λ_i) = 1, then λ_i is said to be a simple eigenvalue.[27] If μ_A(λ_i) equals the geometric multiplicity of λ_i, γ_A(λ_i), defined in the next section, then λ_i is said to be a semisimple eigenvalue.

14.3.3 Eigenspaces, geometric multiplicity, and the eigenbasis for matrices

Given a particular eigenvalue λ of the n by n matrix A, define the set E to be all vectors v that satisfy Equation (2),

E = {v : (A − λI)v = 0}.

On one hand, this set is precisely the kernel or nullspace of the matrix (A − λI). On the other hand, by definition, any non-zero vector that satisfies this condition is an eigenvector of A associated with λ. So, the set E is the union of the zero vector with the set of all eigenvectors of A associated with λ, and E equals the nullspace of (A − λI). E is called the eigenspace or characteristic space of A associated with λ.[7][8] In general λ is a complex number and the eigenvectors are complex n by 1 matrices. A property of the nullspace is that it is a linear subspace, so E is a linear subspace of C^n.

Because the eigenspace E is a linear subspace, it is closed under addition. That is, if two vectors u and v belong to the set E, written (u, v) ∈ E, then (u + v) ∈ E or equivalently A(u + v) = λ(u + v). This can be checked using the distributive property of matrix multiplication. Similarly, because E is a linear subspace, it is closed under scalar multiplication. That is, if v ∈ E and α is a complex number, (αv) ∈ E or equivalently A(αv) = λ(αv). This can be checked by noting that multiplication of complex matrices by complex numbers is commutative. As long as u + v and αv are not zero, they are also eigenvectors of A associated with λ.

The dimension of the eigenspace E associated with λ, or equivalently the maximum number of linearly independent eigenvectors associated with λ, is referred to as the eigenvalue's geometric multiplicity γ_A(λ). Because E is also the nullspace of (A − λI), the geometric multiplicity of λ is the dimension of the nullspace of (A − λI), also called the nullity of (A − λI), which relates to the dimension and rank of (A − λI) as …
A*u* = κ*u*.

Comparing this equation to Equation (1), the left eigenvectors of A are the conjugate transpose of the right eigenvectors of A*. The eigenvalues of the left eigenvectors are the solution of the characteristic polynomial |A* − κ*I| = 0. Because the identity matrix is Hermitian and |M*| = |M|* for a square matrix M, the eigenvalues of the left eigenvectors of A are the complex conjugates of the eigenvalues of the right eigenvectors of A. Recall that if A is a real matrix, all of its complex eigenvalues appear in complex conjugate pairs. Therefore, the eigenvalues of the left and right eigenvectors of a real matrix are the same. Similarly, if A is a real matrix, all of its complex eigenvectors also appear in complex conjugate pairs. Therefore, the left eigenvectors simplify to the transpose of the right eigenvectors of A^T if A is real.

14.3.6 Diagonalization and the eigendecomposition

Main article: Eigendecomposition of a matrix

Suppose the eigenvectors of A form a basis, or equivalently A has n linearly independent eigenvectors v_1, v_2, ..., v_n with associated eigenvalues λ_1, λ_2, ..., λ_n. The eigenvalues need not be distinct. Define a square matrix Q whose columns are the n linearly independent eigenvectors of A,

Q = [v_1 v_2 … v_n].

Since each column of Q is an eigenvector of A, right multiplying A by Q scales each column of Q by its associated eigenvalue,

AQ = [λ_1 v_1  λ_2 v_2  …  λ_n v_n].

With this in mind, define a diagonal matrix Λ where each diagonal element Λ_ii is the eigenvalue associated with the ith column of Q. Then

Q^{-1} A Q = Λ.

A can therefore be decomposed into a matrix composed of its eigenvectors, a diagonal matrix with its eigenvalues along the diagonal, and the inverse of the matrix of eigenvectors. This is called the eigendecomposition and it is a similarity transformation. Such a matrix A is said to be similar to the diagonal matrix Λ or diagonalizable. The matrix Q is the change of basis matrix of the similarity transformation. Essentially, the matrices A and Λ represent the same linear transformation expressed in two different bases. The eigenvectors are used as the basis when representing the linear transformation as Λ.

Conversely, suppose a matrix A is diagonalizable. Let P be a non-singular square matrix such that P^{-1}AP is some diagonal matrix D. Left multiplying both by P, AP = PD. Each column of P must therefore be an eigenvector of A whose eigenvalue is the corresponding diagonal element of D. Since the columns of P must be linearly independent for P to be invertible, there exist n linearly independent eigenvectors of A. It then follows that the eigenvectors of A form a basis if and only if A is diagonalizable.

A matrix that is not diagonalizable is said to be defective. For defective matrices, the notion of eigenvectors generalizes to generalized eigenvectors and the diagonal matrix of eigenvalues generalizes to the Jordan normal form. Over an algebraically closed field, any matrix A has a Jordan normal form and therefore admits a basis of generalized eigenvectors and a decomposition into generalized eigenspaces.

14.3.7 Variational characterization

Main article: Min-max theorem

In the Hermitian case, eigenvalues can be given a variational characterization. The largest eigenvalue of H is the maximum value of the quadratic form x^T H x / x^T x. A value of x that realizes that maximum is an eigenvector.

14.3.8 Matrix examples

Two-dimensional matrix example
…(A − 3I) v_{λ=3} = [−1, 1; 1, −1] [v_1; v_2] = [0; 0],

so

v_{λ=3} = [1, 1]^T.

…λ_3 = λ_2* = −1/2 − i√3/2,

where i = √(−1) is the imaginary unit.

For the real eigenvalue λ_1 = 1, any vector with three equal non-zero entries is an eigenvector. For example,

A [5, 5, 5]^T = [5, 5, 5]^T = 1 · [5, 5, 5]^T.

For the complex conjugate pair of imaginary eigenvalues, note that …

Each diagonal element corresponds to an eigenvector whose only non-zero component is in the same row as that diagonal element. In the example, the eigenvalues correspond to the eigenvectors,

v_1 = [1, 0, 0]^T,  v_2 = [0, 1, 0]^T,  v_3 = [0, 0, 1]^T,

respectively, as well as scalar multiples of these vectors.
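The eigendecomposition Q^{-1}AQ = Λ from section 14.3.6 can also be sketched numerically. The small non-symmetric matrix below is an illustrative choice (it has distinct eigenvalues, so it is diagonalizable):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [6.0, 3.0]])

# Columns of Q are eigenvectors of A.
eigenvalues, Q = np.linalg.eig(A)

# Similarity transformation: Lambda = Q^{-1} A Q is diagonal,
# with the eigenvalues of A along its diagonal.
Lam = np.linalg.inv(Q) @ A @ Q
assert np.allclose(Lam, np.diag(eigenvalues))

# Conversely, A is reconstructed from its eigendecomposition.
assert np.allclose(Q @ np.diag(eigenvalues) @ np.linalg.inv(Q), A)
```

For a defective matrix the same code would fail at the first assertion, since no invertible Q of eigenvectors exists; that is exactly the situation the Jordan normal form handles.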
Triangular matrix example

A =
[2 0 0 0]
[1 2 0 0]
[0 1 3 0]
[0 0 1 3]

|A − λI| = (2 − λ)² (3 − λ)².

The roots of this polynomial, and hence the eigenvalues, are 2 and 3. The algebraic multiplicity of each eigenvalue is 2; in other words they are both double roots. The sum of the algebraic multiplicities of each distinct eigenvalue is μ_A = 4 = n, the order of the characteristic polynomial and the dimension of A. On the other hand, the geometric multiplicity of the eigenvalue 2 is only 1, because its eigenspace is spanned by just one vector [0, 1, −1, 1]^T and is therefore 1-dimensional. Similarly, the geometric multiplicity of the eigenvalue 3 is 1 because its eigenspace is spanned by just one vector [0, 0, 0, 1]^T. The total geometric multiplicity γ_A is 2, which is the smallest it could be for a matrix with two distinct eigenvalues. Geometric multiplicities are defined in a later section.

Df(t) = λf(t)

The functions that satisfy this equation are eigenvectors of D and are commonly called eigenfunctions.

14.4.1 Derivative operator example

Consider the derivative operator d/dt with eigenvalue equation …

f(t) = f(0) e^{λt}

is the eigenfunction of the derivative operator. Note that in this case the eigenfunction is itself a function of its associated eigenvalue. In particular, note that for λ = 0 the eigenfunction f(t) is a constant.

The main eigenfunction article gives other examples.

14.5 General definition

The concept of eigenvalues and eigenvectors extends naturally to arbitrary linear transformations on arbitrary vector spaces. Let V be any vector space over some field K of scalars, and let T be a linear transformation mapping V into V,

T : V → V.

…

E = {v : T(v) = λv},

which is the union of the zero vector with the set of all eigenvectors associated with λ. E is called the eigenspace or characteristic space of T associated with λ.

By definition of a linear transformation,

T(x + y) = T(x) + T(y),
T(αx) = αT(x),
So, both u + v and αv are either zero or eigenvectors of T associated with λ, namely (u + v, αv) ∈ E, and E is closed under addition and scalar multiplication. The eigenspace E associated with λ is therefore a linear subspace of V.[8][34][35] If that subspace has dimension 1, it is sometimes called an eigenline.[36]

The geometric multiplicity γ_T(λ) of an eigenvalue λ is the dimension of the eigenspace associated with λ, i.e., the maximum number of linearly independent eigenvectors associated with that eigenvalue.[8][27] By the definition of eigenvalues and eigenvectors, γ_T(λ) ≥ 1 because every eigenvalue has at least one eigenvector.

The eigenspaces of T always form a direct sum. As a consequence, eigenvectors of different eigenvalues are always linearly independent. Therefore, the sum of the dimensions of the eigenspaces cannot exceed the dimension n of the vector space on which T operates, and there cannot be more than n distinct eigenvalues.[37]

Any subspace spanned by eigenvectors of T is an invariant subspace of T, and the restriction of T to such a subspace is diagonalizable. Moreover, if the entire vector space V can be spanned by the eigenvectors of T, or equivalently if the direct sum of the eigenspaces associated with all the eigenvalues of T is the entire vector space V, then a basis of V called an eigenbasis can be formed from linearly independent eigenvectors of T. When T admits an eigenbasis, T is diagonalizable.

14.5.2 Zero vector as an eigenvector

While the definition of an eigenvector used in this article excludes the zero vector, it is possible to define eigenvalues and eigenvectors such that the zero vector is an eigenvector.[38]

Consider again the eigenvalue equation, Equation (5). Define an eigenvalue to be any scalar λ ∈ K such that there exists a non-zero vector v ∈ V satisfying Equation (5). It is important that this version of the definition of an eigenvalue specify that the vector be non-zero, otherwise by this definition the zero vector would allow any scalar in K to be an eigenvalue. Define an eigenvector v associated with the eigenvalue λ to be any vector that, given λ, satisfies Equation (5). Given the eigenvalue, the zero vector is among the vectors that satisfy Equation (5), so the zero vector is included among the eigenvectors by this alternate definition.

…In general, the operator (T − λI) may not have an inverse even if λ is not an eigenvalue.

For this reason, in functional analysis eigenvalues can be generalized to the spectrum of a linear operator T as the set of all scalars λ for which the operator (T − λI) has no bounded inverse. The spectrum of an operator always contains all its eigenvalues but is not limited to them.

14.5.4 Associative algebras and representation theory

Main article: Weight (representation theory)

One can generalize the algebraic object that is acting on the vector space, replacing a single operator acting on a vector space with an algebra representation: an associative algebra acting on a module. The study of such actions is the field of representation theory.

The representation-theoretical concept of weight is an analog of eigenvalues, while weight vectors and weight spaces are the analogs of eigenvectors and eigenspaces, respectively.

14.6 Dynamic equations

The simplest difference equations have the form

x_t = a_1 x_{t−1} + a_2 x_{t−2} + … + a_k x_{t−k}.

The solution of this equation for x in terms of t is found by using its characteristic equation

λ^k − a_1 λ^{k−1} − a_2 λ^{k−2} − … − a_{k−1} λ − a_k = 0,

which can be found by stacking into matrix form a set of equations consisting of the above difference equation and the k − 1 equations x_{t−1} = x_{t−1}, …, x_{t−k+1} = x_{t−k+1}, giving a k-dimensional system of the first order in the stacked variable vector [x_t, …, x_{t−k+1}] in terms of its once-lagged value, and taking the characteristic equation of this system's matrix. This equation gives k characteristic roots λ_1, …, λ_k, for use in the solution equation …
14.7 Calculation

Main article: Eigenvalue algorithm

14.7.1 Eigenvalues

The eigenvalues of a matrix A can be determined by finding the roots of the characteristic polynomial. Explicit algebraic formulas for the roots of a polynomial exist only if the degree n is 4 or less. According to the Abel–Ruffini theorem there is no general, explicit and exact algebraic formula for the roots of a polynomial with degree 5 or more.

It turns out that any polynomial with degree n is the characteristic polynomial of some companion matrix of order n. Therefore, for matrices of order 5 or more, the eigenvalues and eigenvectors cannot be obtained by an explicit algebraic formula, and must therefore be computed by approximate numerical methods.

In theory, the coefficients of the characteristic polynomial can be computed exactly, since they are sums of products of matrix elements; and there are algorithms that can find all the roots of a polynomial of arbitrary degree to any required accuracy.[39] However, this approach is not viable in practice because the coefficients would be contaminated by unavoidable round-off errors, and the roots of a polynomial can be an extremely sensitive function of the coefficients (as exemplified by Wilkinson's polynomial).[39]

Efficient, accurate methods to compute eigenvalues and eigenvectors of arbitrary matrices were not known until the advent of the QR algorithm in 1961.[39] Combining the Householder transformation with the LU decomposition results in an algorithm with better convergence than the QR algorithm. For large Hermitian sparse matrices, the Lanczos algorithm is one example of an efficient iterative method to compute eigenvalues and eigenvectors, among several other possibilities.[39]

14.7.2 Eigenvectors

Once the (exact) value of an eigenvalue is known, the corresponding eigenvectors can be found by finding non-zero solutions of the eigenvalue equation, that becomes a system of linear equations with known coefficients. For example, once it is known that 6 is an eigenvalue of the matrix

A = [4, 1; 6, 3]

we can find its eigenvectors by solving the equation Av = 6v, that is

[4, 1; 6, 3] [x; y] = 6 [x; y].

This matrix equation is equivalent to two linear equations

4x + y = 6x  and  6x + 3y = 6y,

that is,

−2x + y = 0  and  6x − 3y = 0.

Both equations reduce to the single linear equation y = 2x. Therefore, any vector of the form [a, 2a]^T, for any non-zero real number a, is an eigenvector of A with eigenvalue λ = 6.

The matrix A above has another eigenvalue λ = 1. A similar calculation shows that the corresponding eigenvectors are the non-zero solutions of 3x + y = 0, that is, any vector of the form [b, −3b]^T, for any non-zero real number b.

Some numeric methods that compute the eigenvalues of a matrix also determine a set of corresponding eigenvectors as a by-product of the computation.

14.8 Applications

14.8.1 Eigenvalues of geometric transformations

The following table presents some example transformations in the plane along with their 2×2 matrices, eigenvalues, and eigenvectors.

Note that the characteristic equation for a rotation is a quadratic equation with discriminant D = −4(sin θ)², which is a negative number whenever θ is not an integer multiple of 180°. Therefore, except for these special cases, the two eigenvalues are complex numbers, cos θ ± i sin θ; and all eigenvectors have non-real entries. Indeed, except for those special cases, a rotation changes the direction of every nonzero vector in the plane.

A linear transformation that takes a square to a rectangle of the same area (a squeeze mapping) has reciprocal eigenvalues.

14.8.2 Schrödinger equation

An example of an eigenvalue equation where the transformation T is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

Hψ_E = Eψ_E

where H, the Hamiltonian, is a second-order differential operator and ψ_E, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy.
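One way to make this eigenvalue problem concrete is to discretize H on a grid, which turns the differential operator into a matrix whose eigenvalues approximate the energies. The sketch below uses the standard second-order finite-difference setup for a particle in a box on [0, 1] (units ħ = m = 1); this setup is a textbook illustration, not something given in the text above:

```python
import numpy as np

N = 500                       # number of interior grid points
h = 1.0 / (N + 1)             # grid spacing on the box [0, 1]

# Discretize H = -(1/2) d^2/dx^2 with psi = 0 at the walls,
# using the central second-difference approximation.
H = (np.diag(np.full(N, 1.0 / h**2))
     + np.diag(np.full(N - 1, -0.5 / h**2), 1)
     + np.diag(np.full(N - 1, -0.5 / h**2), -1))

# H is real symmetric, so eigvalsh applies; it returns the
# eigenvalues (energies) in ascending order.
energies = np.linalg.eigvalsh(H)

# Exact particle-in-a-box energies are E_n = (n*pi)^2 / 2.
assert abs(energies[0] - np.pi**2 / 2) < 1e-3
assert abs(energies[1] - (2 * np.pi)**2 / 2) < 1e-2
```

The corresponding eigenvectors (from `np.linalg.eigh`) approximate the wavefunctions, here discretized sine waves.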
Figure: PCA of the multivariate Gaussian distribution centered at (1, 3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) covariance matrix scaled by the square root of the corresponding eigenvalue. (Just as in the one-dimensional case, the square root is taken because the standard deviation is more readily visualized than the variance.)

…sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthonormal eigen-basis for the space of the observed data: in this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.

Principal component analysis is used to study large data sets, such as those encountered in bioinformatics, data mining, chemical research, psychology, and in marketing. PCA is popular especially in psychology, in the field of psychometrics. In Q methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing; cf. criteria for determining the number of factors). More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.

14.8.6 Vibration analysis

Main article: Vibration

Figure: Mode shape of a tuning fork at eigenfrequency 440.09 Hz.

…undamped vibration is governed by

m d²x/dt² + kx = 0

or

m d²x/dt² = −kx,

that is, acceleration is proportional to position (i.e., we expect x to be sinusoidal in time).

In n dimensions, m becomes a mass matrix and k a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem

kx = ω² mx,

where ω² is the eigenvalue and ω is the (imaginary) angular frequency. Note that the principal vibration modes are different from the principal compliance modes, which are the eigenvectors of k alone. Furthermore, damped vibration, governed by

m d²x/dt² + c dx/dt + kx = 0,

leads to a so-called quadratic eigenvalue problem, …

The orthogonality properties of the eigenvectors allows decoupling of the differential equations so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, but this neatly generalizes the solution to scalar-valued vibration problems.

14.8.8 Tensor of moment of inertia

In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.
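In practice the principal axes are found by diagonalizing the symmetric inertia tensor. A small sketch; the tensor values here are invented purely for the example:

```python
import numpy as np

# An illustrative symmetric moment-of-inertia tensor (kg*m^2).
inertia = np.array([[ 4.0, -1.0,  0.0],
                    [-1.0,  3.0,  0.0],
                    [ 0.0,  0.0,  5.0]])

# eigh is the solver for symmetric/Hermitian matrices: the
# eigenvalues are the principal moments of inertia, and the
# eigenvector columns are the principal axes.
moments, axes = np.linalg.eigh(inertia)

# The principal axes are mutually orthogonal (axes is orthogonal).
assert np.allclose(axes.T @ axes, np.eye(3))

# Each axis v satisfies the eigenvalue equation I v = lambda v.
for lam, v in zip(moments, axes.T):
    assert np.allclose(inertia @ v, lam * v)
```

Because the tensor is symmetric, the spectral theorem guarantees real principal moments and orthogonal axes, which is why `eigh` rather than the general `eig` is the right tool here.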
14.8.10 Graphs

…one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time tG has passed. R0 is then the largest eigenvalue of the next generation matrix.[46][47]

14.9 See also

Antieigenvalue theory
Eigenplane
Eigenvalue algorithm
Introduction to eigenstates
Jordan normal form
List of numerical analysis software
Nonlinear eigenproblem
Quadratic eigenvalue problem

In 1755, Johann Andreas Segner proved that any body has three principal axes of rotation: Johann Andreas Segner, Specimen theoriae turbinum [Essay on the theory of tops (i.e., rotating bodies)] (Halle (Halae), (Germany): Gebauer, 1755). On p. XXVIIII (i.e., 29), Segner derives a third-degree equation in t, which proves that a body has three principal axes of rotation. He then states (on the same page): "Non autem repugnat tres esse eiusmodi positiones plani HM, quia in aequatione cubica radices tres esse possunt, et tres tangentis t valores." (However, it is not inconsistent [that there] be three such positions of the plane HM, because in cubic equations, [there] can be three roots, and three values of the tangent t.) The relevant passage of Segner's work was discussed briefly by Arthur Cayley. See: A. Cayley (1862) "Report on the progress of the solution of certain special problems of dynamics", Report of the Thirty-second Meeting of the British Association for the Advancement of Science; held at Cambridge in October 1862, 32: 184-252; see especially pages 225-226.
section), pp. 49-91. From page 51: "Insbesondere in dieser ersten Mitteilung gelange ich zu Formeln, die die Entwickelung einer willkürlichen Funktion nach gewissen ausgezeichneten Funktionen, die ich Eigenfunktionen nenne, liefern:" (In particular, in this first report I arrive at formulas that provide the [series] development of an arbitrary function in terms of some distinctive functions, which I call eigenfunctions:) Later on the same page: "Dieser Erfolg ist wesentlich durch den Umstand bedingt, daß ich nicht, wie es bisher geschah, in erster Linie auf den Beweis für die Existenz der Eigenwerte ausgehe, ..." (This success is mainly attributable to the fact that I do not, as it has happened until now, first of all aim at a proof of the existence of eigenvalues, ...) For the origin and evolution of the terms eigenvalue, characteristic value, etc., see: Earliest Known Uses of Some of the Words of Mathematics (E)

[19] See Aldrich 2006

[20] Francis, J. G. F. (1961), "The QR Transformation, I (part 1)", The Computer Journal, 4 (3): 265-271, doi:10.1093/comjnl/4.3.265 and Francis, J. G. F. (1962), "The QR Transformation, II (part 2)", The Computer Journal, 4 (4): 332-345, doi:10.1093/comjnl/4.4.332

[21] Kublanovskaya, Vera N. (1961), "On some algorithms for the solution of the complete eigenvalue problem", USSR Computational Mathematics and Mathematical Physics, 3: 637-657. Also published in: [On certain algorithms for the solution of the complete eigenvalue problem], (Journal of Computational Mathematics and Mathematical Physics), 1 (4): 555-570, 1961

[22] See Golub & van Loan 1996, 7.3; Meyer 2000, 7.3

[23] Cornell University Department of Mathematics (2016) Lower-Level Courses for Freshmen and Sophomores. Accessed on 2016-03-27.

[24] University of Michigan Mathematics (2016) Math Course Catalogue. Accessed on 2016-03-27.

[25] Press (2007, pp. 38)

[26] Fraleigh (1976, p. 358)

[27] Golub & Van Loan (1996, p. 316)

[28] Beauregard & Fraleigh (1973, p. 307)

[29] Herstein (1964, p. 272)

[34] Shilov 1977, p. 109

[35] Lemma for the eigenspace

[36] Schaum's Easy Outline of Linear Algebra, p. 111

[37] For a proof of this lemma, see Roman 2008, Theorem 8.2 on p. 186; Shilov 1977, p. 109; Hefferon 2001, p. 364; Beezer 2006, Theorem EDELI on p. 469; and Lemma for linear independence of eigenvectors

[38] Axler, Sheldon, Ch. 5, Linear Algebra Done Right (2nd ed.), p. 77

[39] Trefethen, Lloyd N.; Bau, David (1997), Numerical Linear Algebra, SIAM

[40] Graham, D.; Midgley, N. (2000), "Graphical representation of particle shape using triangular diagrams: an Excel spreadsheet method", Earth Surface Processes and Landforms, 25 (13): 1473-1477, Bibcode:2000ESPL...25.1473G, doi:10.1002/1096-9837(200012)25:13<1473::AID-ESP158>3.0.CO;2-C

[41] Sneed, E. D.; Folk, R. L. (1958), "Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis", Journal of Geology, 66 (2): 114-150, Bibcode:1958JG.....66..114S, doi:10.1086/626490

[42] Knox-Robinson, C.; Gardoll, Stephen J. (1998), "GIS-stereoplot: an interactive stereonet plotting module for ArcView 3.0 geographic information system", Computers & Geosciences, 24 (3): 243, Bibcode:1998CG.....24..243K, doi:10.1016/S0098-3004(97)00122-2

[43] Stereo32 software

[44] Benn, D.; Evans, D. (2004), A Practical Guide to the Study of Glacial Sediments, London: Arnold, pp. 103-107

[45] Xirouhakis, A.; Votsis, G.; Delopoulus, A. (2004), Estimation of 3D motion and structure of human faces (PDF), National Technical University of Athens

[46] Diekmann O, Heesterbeek JA, Metz JA (1990), "On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations", Journal of Mathematical Biology, 28 (4): 365-382, PMID 2117040, doi:10.1007/BF00178324

[47] Odo Diekmann; J. A. P. Heesterbeek (2000), Mathematical epidemiology of infectious diseases, Wiley series in mathematical and computational biology, West Sussex, England: John Wiley & Sons
Aldrich, John (2006), "Eigenvalue, eigenfunction, eigenvector, and related terms", in Jeff Miller (Editor), Earliest Known Uses of Some of the Words of Mathematics, retrieved 2006-08-22
Alexandrov, Pavel S. (1968), Lecture notes in analytical geometry (in Russian), Moscow: Science Publishers
Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0
Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Co., ISBN 0-395-14017-X
Beezer, Robert A. (2006), A first course in linear algebra, Free online book under GNU licence, University of Puget Sound
Betteridge, Harold T. (1965), The New Cassell's German Dictionary, New York: Funk & Wagnall, LCCN 58-7924
Bowen, Ray M.; Wang, Chao-Cheng (1980), Linear and multilinear algebra, New York: Plenum Press, ISBN 0-306-37508-7
Brown, Maureen (October 2004), Illuminating Patterns of Perception: An Overview of Q Methodology
Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3
Carter, Tamara A.; Tapia, Richard A.; Papaconstantinou, Anne, Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students, Rice University, Online Edition, retrieved 2008-02-19
Cohen-Tannoudji, Claude (1977), "Chapter II. The mathematical tools of quantum mechanics", Quantum mechanics, John Wiley & Sons, ISBN 0-471-16432-1
Curtis, Charles W. (1999), Linear Algebra: An Introductory Approach (4th ed.), Springer, ISBN 0-387-90992-3
Demmel, James W. (1997), Applied numerical linear algebra, SIAM, ISBN 0-89871-389-7
Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1
Fraleigh, John B.; Beauregard, Raymond A. (1995), Linear algebra (3rd ed.), Addison-Wesley Publishing Company, ISBN 0-201-83999-7
Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (1989), Linear algebra (2nd ed.), Englewood Cliffs, New Jersey: Prentice Hall, ISBN 0-13-537102-3
Gelfand, I. M. (1971), Lecture notes in linear algebra (in Russian), Moscow: Science Publishers
Gohberg, Israel; Lancaster, Peter; Rodman, Leiba (2005), Indefinite linear algebra and applications, Basel-Boston-Berlin: Birkhäuser Verlag, ISBN 3-7643-7349-0
Golub, Gene F.; van der Vorst, Henk A. (2000), "Eigenvalue computation in the 20th century", Journal of Computational and Applied Mathematics, 123: 35-65, Bibcode:2000JCoAM.123...35G, doi:10.1016/S0377-0427(00)00413-1
Golub, Gene H.; Van Loan, Charles F. (1996), Matrix computations (3rd ed.), Baltimore, Maryland: Johns Hopkins University Press, ISBN 978-0-8018-5414-9
Greub, Werner H. (1975), Linear Algebra (4th ed.), New York: Springer-Verlag, ISBN 0-387-90110-8
Halmos, Paul R. (1987), Finite-dimensional vector spaces (8th ed.), New York: Springer-Verlag, ISBN 0-387-90093-4
Hawkins, T. (1975), "Cauchy and the spectral theory of matrices", Historia Mathematica, 2: 1-29, doi:10.1016/0315-0860(75)90032-4
Hefferon, Jim (2001), Linear Algebra, Online book, St Michael's College, Colchester, Vermont, USA
Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016
Horn, Roger A.; Johnson, Charles F. (1985), Matrix analysis, Cambridge University Press, ISBN 0-521-30586-1
Kline, Morris (1972), Mathematical thought from ancient to modern times, Oxford University Press, ISBN 0-19-501496-0
Korn, Granino A.; Korn, Theresa M. (2000), Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review (2nd Revised ed.), New York: McGraw-Hill, Dover Publications, Bibcode:1968mhse.book.....K, ISBN 0-486-41147-8
Kuttler, Kenneth (2007), An introduction to linear algebra (PDF), Online e-book in PDF format, Brigham Young University
Lancaster, P. (1973), Matrix theory (in Russian), Moscow, Russia: Science Publishers
Larson, Ron; Edwards, Bruce H. (2003), Elementary linear algebra (5th ed.), Houghton Mifflin Company, ISBN 0-618-33567-6
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), Numerical Recipes: The Art of Scientific Computing (3rd ed.), ISBN 9780521880688
Roman, Steven (2008), Advanced linear algebra (3rd ed.), New York: Springer Science + Business Media, LLC, ISBN 978-0-387-72828-5
Sharipov, Ruslan A. (1996), Course of Linear Algebra and Multidimensional Geometry: the textbook, Bibcode:2004math......5323S, ISBN 5-7477-0099-5, arXiv:math/0405323
Shilov, Georgi E. (1977), Linear algebra, Translated and edited by Richard A. Silverman, New York: Dover Publications, ISBN 0-486-63518-X
Shores, Thomas S. (2007), Applied linear algebra and matrix analysis, Springer Science+Business Media, LLC, ISBN 0-387-33194-8
Strang, Gilbert (1993), Introduction to linear algebra, Wellesley, Massachusetts: Wellesley-Cambridge Press, ISBN 0-9614088-5-5
Strang, Gilbert (2006), Linear algebra and its applications, Belmont, California: Thomson, Brooks/Cole, ISBN 0-03-010567-6

14.12 External links

Eigen Vector Examination working applet
Same Eigen Vector Examination as above in a Flash demo with sound
Computation of Eigenvalues
Numerical solution of eigenvalue problems, edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst
Eigenvalues and Eigenvectors on the Ask Dr. Math forums

14.12.2 Demonstration applets

Java applet about eigenvectors in the real plane
Wolfram Language functionality for Eigenvalues, Eigenvectors and Eigensystems
Positive-definite matrix
Not to be confused with Positive matrix and Totally positive matrix.

In linear algebra, a symmetric n × n real matrix M is said to be positive definite if the scalar zᵀMz is positive for every non-zero column vector z of n real numbers. Here zᵀ denotes the transpose of z.[1]

More generally, an n × n Hermitian matrix M is said to be positive definite if the scalar z*Mz is real and positive for all non-zero column vectors z of n complex numbers. Here z* denotes the conjugate transpose of z.

The negative definite, positive semi-definite, and negative semi-definite matrices are defined in the same way, except that in the last two cases 0s are allowed, i.e. the expression zᵀMz or z*Mz is required to be always negative, non-negative, and non-positive, respectively.

Positive definite matrices are closely related to positive-definite symmetric bilinear forms (or sesquilinear forms in the complex case), and to inner products of vector spaces.[2]

Some authors use more general definitions of positive definite that include some non-symmetric real matrices, or non-Hermitian complex ones.

15.1 Examples

The identity matrix I = [1 0; 0 1] is positive definite (and as such also positive semi-definite). Seen as a real matrix, it is symmetric, and, for any non-zero column vector z with real entries a and b, one has

zᵀIz = [a b] [1 0; 0 1] [a b]ᵀ = a² + b²

Seen as a complex matrix, for any non-zero column vector z with complex entries a and b one has

z*Iz = [a* b*] [1 0; 0 1] [a b]ᵀ = a*a + b*b = |a|² + |b|²

Either way, the result is positive since z is not the zero vector (that is, at least one of a and b is not zero).

The real symmetric matrix

M = [2 −1 0; −1 2 −1; 0 −1 2]

is positive definite since for any non-zero column vector z with entries a, b and c, we have

zᵀMz = (zᵀM)z = [(2a − b) (−a + 2b − c) (−b + 2c)] [a b c]ᵀ
     = 2a² − 2ab + 2b² − 2bc + 2c²
     = a² + (a − b)² + (b − c)² + c²

This result is a sum of squares, and therefore non-negative; and is zero only if a = b = c = 0, that is, when z is zero.

For any real invertible matrix A, the product AᵀA is a positive definite matrix. A simple proof is that for any non-zero vector z, the condition zᵀAᵀAz = ‖Az‖² > 0 holds, since the invertibility of matrix A means that Az ≠ 0.

The examples M and N above show that a matrix in which some elements are negative may still be positive-definite, and conversely a matrix whose entries are all positive may not be positive definite.

15.2 Connections

A general purely quadratic real function f(z) on n real variables z₁, ..., zₙ can always be written as zᵀMz where z is the column vector with those variables, and M is a symmetric real matrix. Therefore, the matrix being positive definite means that f has a unique minimum (zero) when z is zero, and is strictly positive for any other z.
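A mechanical way to check small examples like the 3 × 3 matrix M above is Sylvester's criterion (all leading principal minors strictly positive; it appears in the See also list). The following plain-Python sketch is illustrative, not part of the article:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def is_positive_definite_3x3(m):
    """Sylvester's criterion: a symmetric matrix is positive definite
    iff every leading principal minor is strictly positive."""
    d1 = m[0][0]
    d2 = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return d1 > 0 and d2 > 0 and det3(m) > 0

M = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]   # the matrix from the example
P = [[1, 2, 0], [2, 1, 0], [0, 0, 1]]       # no negative entries, yet indefinite
```

Here is_positive_definite_3x3(M) is True (minors 2, 3, 4), while P fails on its second minor, matching the remark that an all-positive matrix need not be positive definite.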
A Hermitian matrix is negative-definite, negative-semidefinite, or positive-semidefinite if and only if all of its eigenvalues are negative, non-positive, or non-negative, respectively. A Hermitian matrix which is neither positive definite, negative definite, positive-semidefinite, nor negative-semidefinite is called indefinite. Indefinite matrices are also characterized by having both positive and negative eigenvalues.

15.6.1 Negative-definite

The n × n Hermitian matrix M is said to be negative-definite if

x*Mx < 0

for all non-zero x in Cⁿ (or, all non-zero x in Rⁿ for the real matrix), where x* is the conjugate transpose of x. A matrix is negative definite if its k-th order leading principal minor is negative when k is odd, and positive when k is even.

15.7 Further properties

If M is a Hermitian positive-semidefinite matrix, one sometimes writes M ≥ 0 and if M is positive-definite one writes M > 0.[3] The notion comes from functional analysis, where positive-semidefinite matrices define positive operators.

For arbitrary square matrices M, N we write M ≥ N if M − N ≥ 0; i.e., M − N is positive semi-definite. This defines a partial ordering on the set of all square matrices. One can similarly define a strict partial ordering M > N.
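The eigenvalue characterization (all eigenvalues negative for negative-definite, both signs for indefinite) is easy to demonstrate for symmetric 2 × 2 matrices, whose eigenvalues come from the quadratic formula. This example code is an illustrative sketch only:

```python
import math

def classify_symmetric_2x2(a, b, d):
    """Classify M = [[a, b], [b, d]] by the signs of its (real) eigenvalues."""
    tr, det = a + d, a * d - b * b
    # tr^2 - 4*det = (a - d)^2 + 4*b^2 >= 0, so both roots are real.
    disc = math.sqrt(tr * tr - 4 * det)
    lo, hi = (tr - disc) / 2, (tr + disc) / 2
    if lo > 0:
        return "positive definite"
    if hi < 0:
        return "negative definite"
    if lo < 0 < hi:
        return "indefinite"
    return "semidefinite"  # at least one zero eigenvalue
```

For instance, [[1, 2], [2, 1]] has eigenvalues −1 and 3 and so is classified as indefinite.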
2. If M is positive definite and r > 0 is a real number, then rM is positive definite.[6] If M and N are positive definite, then the sum M + N[6] and the products MNM and NMN are also positive definite. If MN = NM, then MN is also positive definite.

3. Every principal submatrix of a positive definite matrix is positive definite.

4. If M is positive-semidefinite, then QᵀMQ is positive-semidefinite. If M is positive definite and Q has full rank, then QᵀMQ is positive definite.[7]

5. The diagonal entries mᵢᵢ are real and non-negative. As a consequence the trace, tr(M) ≥ 0. Furthermore,[8] since every principal submatrix (in particular, 2-by-2) is positive definite,

|mᵢⱼ| ≤ √(mᵢᵢ mⱼⱼ) ≤ (mᵢᵢ + mⱼⱼ)/2

and thus

max |mᵢⱼ| ≤ max |mᵢᵢ|

6. A matrix M is positive semi-definite if and only if there is a positive semi-definite matrix B with B² = M. This matrix B is unique,[9] is called the square root of M, and is denoted with B = M^(1/2) (the square root B is not to be confused with the matrix L in the Cholesky factorization M = LL*, which is also sometimes called the square root of M). If M > N > 0 then M^(1/2) > N^(1/2) > 0.

7. If M is a symmetric matrix of the form mᵢⱼ = m(i−j), and the strict inequality holds ...

12. If M, N ≥ 0, although MN is not necessarily positive-semidefinite, the Kronecker product M ⊗ N ≥ 0, the Hadamard product M ∘ N ≥ 0 (this result is often called the Schur product theorem),[10] and the Frobenius product M : N ≥ 0 (Lancaster-Tismenetsky, The Theory of Matrices, p. 218).

13. Regarding the Hadamard product of two positive-semidefinite matrices M = (mᵢⱼ) ≥ 0, N ≥ 0, there are two notable inequalities:

Oppenheim's inequality: det(M ∘ N) ≥ det(N) ∏ᵢ mᵢᵢ.[11]

det(M ∘ N) ≥ det(M) det(N).[12]

15.8 Block matrices

A positive 2n × 2n matrix may also be defined by blocks:

M = [A B; C D]

where each block is n × n. By applying the positivity condition, it immediately follows that A and D are Hermitian, and C = B*.

We have that z*Mz ≥ 0 for all complex z, and in particular for z = (v, 0)ᵀ. Then

[v* 0] [A B; B* D] [v 0]ᵀ = v*Av ≥ 0.
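Property 12 (the Schur product theorem) and the two determinant inequalities of property 13 can be spot-checked numerically. The sketch below uses two small hypothetical positive-definite matrices chosen for illustration:

```python
def hadamard(m, n):
    """Entrywise (Hadamard) product of two equal-size matrices."""
    return [[m[i][j] * n[i][j] for j in range(len(m[0]))]
            for i in range(len(m))]

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def is_pd_symmetric_2x2(m):
    """Sylvester's criterion for a symmetric 2x2 matrix."""
    return m[0][0] > 0 and det2(m) > 0

M = [[2, -1], [-1, 2]]   # positive definite (illustrative)
N = [[3, 1], [1, 2]]     # positive definite (illustrative)
H = hadamard(M, N)       # Schur product theorem: H is positive definite too
```

Here H = [[6, −1], [−1, 4]] with det(H) = 23, which respects both det(M ∘ N) ≥ det(M) det(N) = 15 and Oppenheim's inequality det(M ∘ N) ≥ det(N) · m₁₁m₂₂ = 20.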
B are Hermitian, therefore z*Az and z*Bz are individually real. If z*Mz is real, then z*Bz must be zero for all z. Then B is the zero matrix and M = A, proving that M is Hermitian.

By this definition, a positive definite real matrix M is Hermitian, hence symmetric; and zᵀMz is positive for all non-zero real column vectors z. However, the last condition alone is not sufficient for M to be positive definite. For example, if

M = [1 1; −1 1],

then for any real vector z with entries a and b we have zᵀMz = (a − b)a + (a + b)b = a² + b², which is always positive if z is not zero. However, if z is the complex vector with entries 1 and i, one gets

z*Mz = [1, −i] M [1, i]ᵀ = [1 + i, 1 − i] [1, i]ᵀ = 2 + 2i,

which is not real. Therefore, M is not positive definite.

On the other hand, for a symmetric real matrix M, the condition "zᵀMz > 0 for all nonzero real vectors z" does imply that M is positive definite in the complex sense.

15.9.2 Extension for non-symmetric matrices

Some authors choose to say that a complex matrix M is positive definite if Re(z*Mz) > 0 for all non-zero complex vectors z, where Re(c) denotes the real part of a complex number c.[13] This weaker definition encompasses some non-Hermitian complex matrices, including some non-symmetric real ones, such as [1 1; −1 1].

Indeed, with this definition, a real matrix is positive definite if and only if zᵀMz > 0 for all nonzero real vectors z, even if M is not symmetric.

In general, we have Re(z*Mz) > 0 for all complex nonzero vectors z if and only if the Hermitian part (M + M*)/2 of M is positive definite in the narrower sense. Similarly, we have xᵀMx > 0 for all real nonzero vectors x if and only if the symmetric part (M + Mᵀ)/2 of M is positive definite in the narrower sense.

In summary, the distinguishing feature between the real and the complex case is that a bounded positive operator on a complex Hilbert space is necessarily Hermitian, or self-adjoint. The general claim can be argued using the polarization identity. That is no longer true in the real case.

15.10 See also

Cholesky decomposition
Covariance matrix
M-matrix
Positive-definite function
Positive-definite kernel
Schur complement
Square root of a matrix
Sylvester's criterion

15.11 Notes

[1] http://onlinelibrary.wiley.com/doi/10.1002/9780470173862.app3/pdf

[2] Stewart, J. (1976). "Positive definite functions and generalizations, an historical survey". Rocky Mountain J. Math, 6(3).

[3] This may be confusing, as sometimes nonnegative matrices are also denoted in this way. A common alternative notation is M ⪰ 0 and M ≻ 0 for positive semidefinite and positive definite matrices, respectively.

[4] Horn & Johnson (1985), p. 397

[5] Horn & Johnson (1985), Corollary 7.7.4(a)

[6] Horn & Johnson (1985), Observation 7.1.3

[7] Horn, Roger A.; Johnson, Charles R. (2013). "7.1 Definitions and Properties". Matrix Analysis (Second Edition). Cambridge University Press. p. 431. ISBN 978-0-521-83940-2. Observation 7.1.8: Let A ∈ Mₙ be Hermitian and let C ∈ Mₙ,ₘ. Suppose that A is positive semidefinite. Then C*AC is positive semidefinite, nullspace(C*AC) = nullspace(AC), and rank(C*AC) = rank(AC). Suppose that A is positive definite. Then rank(C*AC) = rank(C), and C*AC is positive definite if and only if rank(C) = m.

[8] Horn & Johnson (1985), p. 398

[9] Horn & Johnson (1985), Theorem 7.2.6 with k = 2

[10] Horn & Johnson (1985), Theorem 7.5.3

[11] Horn & Johnson (1985), Theorem 7.8.6

[13] Weisstein, Eric W. "Positive Definite Matrix". From MathWorld--A Wolfram Web Resource. Accessed on 2012-07-26
15.12 References
Horn, Roger A.; Johnson, Charles R. (1990). Matrix
Analysis. Cambridge University Press. ISBN 978-
0-521-38632-6.
For the football club, see Cambridge University Press F.C.

Cambridge University Press (CUP) is the publishing business of the University of Cambridge. Granted letters patent by Henry VIII in 1534, it is the world's oldest publishing house and the second-largest university press in the world (after Oxford University Press).[1][2] It also holds letters patent as the Queen's Printer.[3]

The Press's mission is "To further the University's mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence".[4]

Cambridge University Press is a department of the University of Cambridge and is both an academic and educational publisher. With a global sales presence, publishing hubs, and offices in more than 40 countries, it publishes over 50,000 titles by authors from over 100 countries. Its publishing includes academic journals, monographs, reference works, textbooks, and English-language teaching and learning publications. Cambridge University Press is a charitable enterprise that transfers part of its annual surplus back to the university.

16.1 History

Cambridge University Press is both the oldest publishing house in the world and the oldest university press. It originated from Letters Patent granted to the University of Cambridge by Henry VIII in 1534, and has been producing books continuously since the first University Press book was printed. Cambridge is one of the two privileged presses (the other being Oxford University Press). Authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, Bertrand Russell, and Stephen Hawking.[5]

University printing began in Cambridge when the first practising University Printer, Thomas Thomas, set up a printing house on the site of what became the Senate House lawn, a few yards from where the Press's bookshop now stands. In those days, the Stationers' Company in London jealously guarded its monopoly of printing, which partly explains the delay between the date of the University's Letters Patent and the printing of the first book.

In 1591, Thomas's successor, John Legate, printed the first Cambridge Bible, an octavo edition of the popular Geneva Bible. The London Stationers objected strenuously, claiming that they had the monopoly on Bible printing. The university's response was to point out the provision in its charter to print "all manner of books". Thus began the Press's tradition of publishing the Bible, a tradition that has endured for over four centuries, beginning with the Geneva Bible, and continuing with the Authorized Version, the Revised Version, the New English Bible and the Revised English Bible.

The restrictions and compromises forced upon Cambridge by the dispute with the London Stationers did not really come to an end until the scholar Richard Bentley was given the power to set up a "new-style press" in 1696. In July 1697 the Duke of Somerset made a loan of £200 to the university "towards the printing house and presse" and James Halman, Registrary of the University, lent £100 for the same purpose.[6]

It was in Bentley's time, in 1698, that a body of senior scholars ("the Curators", known from 1733 as "the Syndics") was appointed to be responsible to the university for the Press's affairs. The Press Syndicate's publishing committee still meets regularly (eighteen times a year), and its role still includes the review and approval of the Press's planned output. John Baskerville became University Printer in the mid-eighteenth century. Baskerville's concern was the production of the finest possible books using his own type-design and printing techniques. Baskerville wrote, "The importance of the work demands all my attention; not only for my own (eternal) reputation;
16.2 Governance

16.3.3 Education

The Education group delivers educational products and solutions for primary, secondary and international schools, and Education Ministries worldwide.

The Cambridge English group publishes English language teaching courses and resources for all ages around the world.[11] The group works closely with Cambridge English Language Assessment to provide solutions that improve language proficiency, aligned to the Common European Framework of Reference for Languages, or CEFR.

In 2007, controversy arose over CUP's decision to destroy all remaining copies of its 2006 book, Alms for Jihad: Charity and Terrorism in the Islamic World, by Burr and Collins, as part of the settlement of a lawsuit brought by Saudi billionaire Khalid bin Mahfouz.[16] Within hours, Alms for Jihad became one of the 100 most
sought after titles on Amazon.com and eBay in the United States. CUP sent a letter to libraries asking them to remove copies from circulation. CUP subsequently sent out copies of an errata sheet for the book.

The American Library Association issued a recommendation to libraries still holding Alms for Jihad: "Given the intense interest in the book, and the desire of readers to learn about the controversy first hand, we recommend that U.S. libraries keep the book available for their users."

The publisher's decision did not have the support of the book's authors and was criticised by some who claimed it was incompatible with freedom of speech and with freedom of the press and that it indicated that English libel laws were excessively strict.[17][18] In a New York Times Book Review (7 October 2007), United States Congressman Frank R. Wolf described Cambridge's settlement as "basically a book burning".[19] CUP pointed out that, at that time, it had already sold most of its copies of the book. Cambridge defended its actions, saying it had acted responsibly and that it is a global publisher with a duty to observe the laws of many different countries.[20]

16.6 Community work

The Press partnered with Bookshare in 2010 to make their books accessible to people with qualified print disabilities. Under the terms of the digital rights licence agreement, the Press delivers academic and scholarly books from all of its regional publishing centres around the world to Bookshare for conversion into accessible formats. People with qualified print disabilities around the world can download the books for a nominal Bookshare membership fee and read them using a computer or other assistive technology, with voice generated by text-to-speech technology, as well as options for digital Braille.[22]

16.7 Open access

CUP is one of thirteen publishers to participate in the Knowledge Unlatched pilot, a global library consortium approach to funding open access books.[23] CUP is a member of the Open Access Scholarly Publishers Association.
[2] Black, Michael (1984). Cambridge University Press, 1583-1984. pp. 328-9. ISBN 978-0-521-66497-4.

[11] Black, Michael (2000). A Short History of Cambridge University Press. Cambridge University Press. pp. 65-66. ISBN 978-0-521-77572-4.

[12] "The Queen's Printer's Patent". Cambridge University Press website. Retrieved 15 October 2012.

[13] Neill, Graeme (1 November 2010). "CUP looks to digital". The Bookseller. Retrieved 4 May 2011.

[14] Neilan, Catherine (7 December 2009). "CUP launches online books platform". The Bookseller. Retrieved 4 May 2011.

[21] Annual Report and Accounts for the year that ended 30 April 2009 (PDF). Cambridge University Press. 2009. p. 30. Retrieved 4 May 2011.

A History of Cambridge University Press, Volume 3: New Worlds for Learning, 1873-1972; McKitterick, David; 1998; ISBN 978-0-521-30803-8

A Short History of Cambridge University Press; Black, Michael; 2000; ISBN 978-0-521-77572-4

Cambridge University Press 1584-1984; Black, Michael, Foreword by Gordon Johnson; 2000; ISBN 978-0-521-66497-4, Hardback ISBN 978-0-521-26473-0
16.10 References
Anonymous; The Student's Guide to the University of Cambridge. Third Edition, Revised and Partly Re-written; Deighton Bell, 1874 (reissued by Cambridge University Press, 2009; ISBN 978-1-108-00491-6)
In quantum mechanics, the Hamiltonian is the operator corresponding to the total energy of the system in most of the cases. It is usually denoted by H, also Ȟ or Ĥ. Its spectrum is the set of possible outcomes when one measures the total energy of a system. Because of its close relation to the time-evolution of a system, it is of fundamental importance in most formulations of quantum theory.

The Hamiltonian is named after William Rowan Hamilton, who also created a revolutionary reformulation of Newtonian mechanics, now called Hamiltonian mechanics, that is important in quantum physics.

17.1 Introduction

... is the potential energy operator and

T̂ = p̂ · p̂ / 2m = p̂² / 2m = −(ℏ²/2m) ∇²

is the kinetic energy operator, in which m is the mass of the particle, the dot denotes the dot product of vectors, and

p̂ = −iℏ∇

is the momentum operator, wherein ∇ is the del operator. The dot product of ∇ with itself is the Laplacian ∇². In three dimensions using Cartesian coordinates the Laplace operator is
V = V (r1 , r2 rN , t)
N
is the potential energy function, now a function of the V = V (ri , t) = V (r1 , t)+V (r2 , t)+ +V (rN , t)
spatial conguration of the system and time (a particular i=1
set of spatial positions at some instant of time denes a
conguration) and; The general form of the Hamiltonian in this case is:
pn pn
Tn = 2 1 2
N N
2mn H = + Vi
2 i=1 mi i i=1
is the kinetic energy operator of particle n, and is the
gradient for particle n, 2 is the Laplacian for particle N ( )
2 2
using the coordinates: = i + Vi
i=1
2mi
N
2 2 2 = Hi
2n = + +
x2n yn2 zn2 i=1
Combining these yields the Schrdinger Hamiltonian for where the sum is taken over all particles and their cor-
the N-particle case: responding potentials; the result is that the Hamiltonian
of the system is the sum of the separate Hamiltonians
for each particle. This is an idealized situation - in prac-
N
tice the particles are usually always inuenced by some
H = Tn + V
potential, and there are many-body interactions. One il-
n=1
lustrative example of a two-body interaction where this
N
pn ^
^ pn form would not apply is for electrostatic potentials due to
= + V (r1 , r2 rN , t)
n=1
2m n charged particles, because they interact with each other
by Coulomb interaction (electrostatic force), as shown be-
2 1 2
N
= + V (r1 , r2 rN , t) low.
2 n=1 mn n
2
i j H |(t) = i |(t) .
2M t
where M denotes the mass of the collection of particles This equation is the Schrdinger equation. It takes the
resulting in this extra kinetic energy. Terms of this form same form as the HamiltonJacobi equation, which is one
are known as mass polarization terms, and appear in the of the reasons H is also called the Hamiltonian. Given the
Hamiltonian of many electron atoms (see below). state at some initial time (t = 0), we can solve it to obtain
For N interacting particles, i.e. particles which interact the state at any subsequent time. In particular, if H is
mutually and constitute a many-body situation, the po- independent of time, then
tential energy function V is not simply a sum of the sep-
arate potentials (and certainly not a product, as this is di-
mensionally incorrect). The potential energy function can |(t) = eiHt/ |(0) .
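As a quick numerical illustration of this evolution law, the sketch below (an illustrative toy, not part of the article) builds a random 3 × 3 Hermitian matrix as a stand-in Hamiltonian with ℏ = 1, forms the propagator e^{-iHt} from its spectral decomposition, and checks that the evolution is unitary and conserves the energy expectation value:

```python
import numpy as np

# A small Hermitian stand-in "Hamiltonian" on a 3-dimensional state space
# (hbar set to 1; the matrix itself is a made-up example).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (A + A.conj().T) / 2                     # Hermitian by construction

def propagator(H, t):
    """U(t) = exp(-iHt), built from the spectral decomposition H = V diag(E) V†."""
    E, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)
U = propagator(H, t=1.7)
psi_t = U @ psi0

# U is unitary, so the evolved state stays normalized.
print(np.allclose(U.conj().T @ U, np.eye(3)))   # True
print(np.isclose(np.linalg.norm(psi_t), 1.0))   # True

# For time-independent H, the energy expectation value is conserved.
E0 = np.vdot(psi0, H @ psi0).real
Et = np.vdot(psi_t, H @ psi_t).real
print(np.isclose(E0, Et))                       # True
```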
The exponential operator on the right-hand side of the Schrödinger equation is usually defined by the corresponding power series in H. One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic, functional calculus suffices. We note again, however, that for common calculations the physicists' formulation is quite sufficient.

By the *-homomorphism property of the functional calculus, the operator

U = e^{-iHt/\hbar}

is a unitary operator. It is the time evolution operator, or propagator, of a closed quantum system. If the Hamiltonian is time-independent, {U(t)} form a one-parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance.

17.4 Dirac formalism

However, in the more general formalism of Dirac, the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way:

The eigenkets (eigenvectors) of H, denoted |a⟩, provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted {E_a}, solving the equation:

H |a\rangle = E_a |a\rangle.

Since H is a Hermitian operator, the energy is always a real number.

From a mathematically rigorous point of view, care must be taken with the above assumptions. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation.

17.5 Expressions for the Hamiltonian

Following are expressions for the Hamiltonian in a number of situations.[2] Typical ways to classify the expressions are the number of particles, the number of dimensions, and the nature of the potential energy function, importantly its space and time dependence. Masses are denoted by m, and charges by q.

17.5.1 General forms for one particle

17.5.2 Free particle

The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest. For one dimension:

H = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2}

and in three dimensions:

H = -\frac{\hbar^2}{2m} \nabla^2

17.5.3 Constant-potential well

For a particle in a region of constant potential V = V_0 (no dependence on space or time), in one dimension, the Hamiltonian is:

H = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V_0

and in three dimensions:

H = -\frac{\hbar^2}{2m} \nabla^2 + V_0

This applies to the elementary "particle in a box" problem, and to step potentials.

17.5.4 Simple harmonic oscillator

For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to:

V = \frac{k}{2} x^2 = \frac{m\omega^2}{2} x^2

where the angular frequency ω, effective spring constant k, and mass m of the oscillator satisfy:

\omega^2 = \frac{k}{m}

so the Hamiltonian is:

H = -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + \frac{m\omega^2}{2} x^2
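The one-dimensional oscillator Hamiltonian can be checked numerically. The sketch below (an assumption: a simple finite-difference discretization with ℏ = m = ω = 1, not from the article) approximates −(1/2) d²/dx² + (1/2)x² on a grid; its lowest eigenvalues should approach the exact values ℏω(n + 1/2) = 0.5, 1.5, 2.5, …:

```python
import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + (1/2) x^2  (hbar = m = omega = 1)
# on a uniform grid, using the standard three-point Laplacian.
N, L = 1000, 10.0                  # grid points, half-width of the box
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# Kinetic part: -(1/2) * second-difference matrix.
T = (-0.5 / dx**2) * (np.diag(np.ones(N - 1), 1)
                      + np.diag(np.ones(N - 1), -1)
                      - 2.0 * np.eye(N))
V = np.diag(0.5 * x**2)            # potential on the diagonal
H = T + V

E = np.linalg.eigvalsh(H)[:4]      # lowest four eigenvalues
print(E)                           # approximately [0.5, 1.5, 2.5, 3.5]
```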
For three dimensions, this becomes

H = -\frac{\hbar^2}{2m} \nabla^2 + \frac{m\omega^2}{2} r^2

where the three-dimensional position vector r using cartesian coordinates is (x, y, z), and its magnitude is

r^2 = \mathbf{r} \cdot \mathbf{r} = |\mathbf{r}|^2 = x^2 + y^2 + z^2

Writing out the Hamiltonian in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction:

H = -\frac{\hbar^2}{2m} \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) + \frac{m\omega^2}{2} (x^2 + y^2 + z^2)
  = \left( -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + \frac{m\omega^2}{2} x^2 \right) + \left( -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial y^2} + \frac{m\omega^2}{2} y^2 \right) + \left( -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial z^2} + \frac{m\omega^2}{2} z^2 \right)

17.5.5 Rigid rotor

For a rigid rotor, i.e. a system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom, say due to double or triple chemical bonds), the Hamiltonian is:

H = -\frac{\hbar^2}{2I_{xx}} J_x^2 - \frac{\hbar^2}{2I_{yy}} J_y^2 - \frac{\hbar^2}{2I_{zz}} J_z^2

where I_{xx}, I_{yy}, and I_{zz} are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor), and J_x, J_y and J_z are the total angular momentum operators (components), about the x, y, and z axes respectively.

17.5.6 Electrostatic or Coulomb potential

The Coulomb potential energy for two point charges q_1 and q_2 (i.e. charged particles, since particles have no spatial extent), in three dimensions, is (in SI units, rather than the Gaussian units which are frequently used in electromagnetism):

V = \frac{q_1 q_2}{4\pi\varepsilon_0 |\mathbf{r}|}

For many charged particles, each charge has a potential energy due to every other point charge; the potential energy of charge q_j due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges):[3]

V_j = \frac{1}{2} q_j \sum_{i \neq j} \phi(\mathbf{r}_i) = \frac{1}{8\pi\varepsilon_0} \sum_{i \neq j} \frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|}

where \phi(\mathbf{r}_i) is the electrostatic potential of charge q_j at \mathbf{r}_i. The total potential of the system is then the sum over j:

V = \frac{1}{8\pi\varepsilon_0} \sum_{j=1}^{N} \sum_{i \neq j} \frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|}

so the Hamiltonian is:

H = -\frac{\hbar^2}{2} \sum_{j=1}^{N} \frac{1}{m_j} \nabla_j^2 + \frac{1}{8\pi\varepsilon_0} \sum_{j=1}^{N} \sum_{i \neq j} \frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|}

17.5.7 Electric dipole in an electric field

For an electric dipole moment d constituting charges of magnitude q, in a uniform, electrostatic field (time-independent) E, positioned in one place, the potential is:

V = -\hat{\mathbf{d}} \cdot \mathbf{E}

Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:

H = -\hat{\mathbf{d}} \cdot \mathbf{E} = -q \mathbf{E} \cdot \hat{\mathbf{r}}

17.5.8 Magnetic dipole in a magnetic field

For a magnetic dipole moment μ in a uniform, magnetostatic field (time-independent) B, positioned in one place, the potential is:

V = -\boldsymbol{\mu} \cdot \mathbf{B}
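The factor 1/(8πε₀) in the many-charge potential, rather than 1/(4πε₀), compensates for counting each ordered pair (i, j) and (j, i) twice in the double sum. A small numeric sketch (illustrative only; `coulomb_energy` is a hypothetical helper name):

```python
import numpy as np

# Total electrostatic potential energy of N point charges:
# V = (1/(8*pi*eps0)) * sum_j sum_{i != j} q_i q_j / |r_i - r_j|
# The 1/(8 pi eps0) prefactor absorbs the double counting of each pair.
eps0 = 8.8541878128e-12            # vacuum permittivity, SI units

def coulomb_energy(q, r):
    q = np.asarray(q, dtype=float)
    r = np.asarray(r, dtype=float)
    V = 0.0
    for j in range(len(q)):
        for i in range(len(q)):
            if i != j:
                V += q[i] * q[j] / np.linalg.norm(r[i] - r[j])
    return V / (8.0 * np.pi * eps0)

# Two unit charges 1 m apart: V = 1/(4 pi eps0), about 8.99e9 J.
q = [1.0, 1.0]
r = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
print(coulomb_energy(q, r))
```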
17.5.9 Charged particle in an electromagnetic field

For a charged particle q in an electromagnetic field, described by the scalar potential φ and vector potential A, there are two parts of the Hamiltonian to substitute for.[1] The momentum operator must be replaced by the kinetic momentum operator, which includes a contribution from the A field:

\hat{\boldsymbol{\Pi}} = \hat{\mathbf{P}} - q\mathbf{A}

where \hat{\mathbf{P}} is the canonical momentum operator, given as the usual momentum operator:

\hat{\mathbf{P}} = -i\hbar\nabla

so the corresponding kinetic energy operator is:

\hat{T} = \frac{\hat{\boldsymbol{\Pi}} \cdot \hat{\boldsymbol{\Pi}}}{2m} = \frac{1}{2m} \left( \hat{\mathbf{P}} - q\mathbf{A} \right)^2

and the potential energy, which is due to the φ field:

V = q\phi

Casting all of these into the Hamiltonian gives:

\hat{H} = \frac{1}{2m} \left( -i\hbar\nabla - q\mathbf{A} \right)^2 + q\phi

17.6 Energy eigenket degeneracy, symmetry, and conservation laws

In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. Degeneracy occurs whenever a nontrivial unitary operator U commutes with the Hamiltonian: if |a⟩ is an energy eigenket, then U|a⟩ is an energy eigenket with the same eigenvalue. Since U is nontrivial, at least one pair of |a⟩ and U|a⟩ must represent distinct states. Therefore, H has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape.

The existence of a symmetry operator implies the existence of a conserved observable. Let G be the Hermitian generator of U:

U = I - i\varepsilon G + O(\varepsilon^2)

It is straightforward to show that if U commutes with H, then so does G:

[H, G] = 0

Therefore,

\frac{\partial}{\partial t} \langle\psi(t)|G|\psi(t)\rangle = \frac{1}{i\hbar} \langle\psi(t)|[G, H]|\psi(t)\rangle = 0.

In obtaining this result, we have used the Schrödinger equation, as well as its dual,

\langle\psi(t)|H = -i\hbar \frac{\partial}{\partial t} \langle\psi(t)|.

Thus, the expected value of the observable G is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum.
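The conservation argument can be illustrated numerically: the sketch below (an illustrative construction, not from the article) builds H and G with a shared eigenbasis so that [H, G] = 0 by construction (H is given a degenerate eigenvalue, as discussed above), and checks that ⟨G⟩ is unchanged by time evolution (ℏ = 1):

```python
import numpy as np

# If a Hermitian G commutes with H, the expectation value <G> is conserved.
rng = np.random.default_rng(1)

# Build H and G with a shared eigenbasis, so [H, G] = 0 by construction.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(A)                                 # random unitary eigenbasis
H = Q @ np.diag([0.0, 1.0, 1.0, 2.0]) @ Q.conj().T     # note the degenerate level
G = Q @ np.diag([3.0, -1.0, 2.0, 0.5]) @ Q.conj().T

assert np.allclose(H @ G, G @ H)                       # [H, G] = 0

def evolve(H, psi, t):
    """Apply exp(-iHt) to psi via the spectral decomposition of H."""
    E, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi))

psi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi0 /= np.linalg.norm(psi0)

g0 = np.vdot(psi0, G @ psi0).real
gt = np.vdot(evolve(H, psi0, 2.5), G @ evolve(H, psi0, 2.5)).real
print(np.isclose(g0, gt))      # True: <G> is conserved
```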
17.7 Hamilton's equations

Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states {|n⟩}, which need not necessarily be eigenstates of the energy; for simplicity, assume they are discrete and orthonormal:

\langle n' | n \rangle = \delta_{n'n}.

Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time.

The instantaneous state of the system at time t, |ψ(t)⟩, can be expanded in terms of these basis states:

|\psi(t)\rangle = \sum_n a_n(t) |n\rangle

where a_n(t) = ⟨n|ψ(t)⟩. Defining the conjugate momenta as

\pi_n(t) = i\hbar\, a_n^*(t),

the equations of motion for the expansion coefficients take the form

\frac{\partial \langle H \rangle}{\partial \pi_n} = \frac{\partial a_n}{\partial t}, \qquad \frac{\partial \langle H \rangle}{\partial a_n} = -\frac{\partial \pi_n}{\partial t}

which is precisely the form of Hamilton's equations, with the a_n's as the generalized coordinates, the π_n's as the conjugate momenta, and ⟨H⟩ taking the place of the classical Hamiltonian.
Quantum state
In quantum physics, a quantum state refers to the state of an isolated quantum system. A quantum state provides a probability distribution for the value of each observable, i.e. for the outcome of each possible measurement on the system. Knowledge of the quantum state together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior.

A mixture of quantum states is again a quantum state. Quantum states that cannot be written as a mixture of other states are called pure quantum states; all other states are called mixed quantum states.

Mathematically, a pure quantum state can be represented by a ray in a Hilbert space over the complex numbers.[1] The ray is a set of nonzero vectors differing by just a complex scalar factor; any of them can be chosen as a state vector to represent the ray, and thus the state. A unit vector is usually picked, but its phase factor can be chosen freely anyway. Nevertheless, such factors are important when state vectors are added together to form a superposition.

Hilbert space is a generalization of the ordinary Euclidean space[2]:93–96 and it contains all possible pure quantum states of the given system. If this Hilbert space, by choice of representation (essentially a choice of basis corresponding to a complete set of observables), is exhibited as a function space (a Hilbert space in its own right), then the representatives are called wave functions.

For example, when dealing with the energy spectrum of the electron in a hydrogen atom, the relevant state vectors are identified by the principal quantum number n, the angular momentum quantum number l, the magnetic quantum number m, and the spin z-component s_z. A more complicated case is given (in bra–ket notation) by the spin part of a state vector

|\psi\rangle = \frac{1}{\sqrt{2}} \left( |{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle \right),

which involves superposition of joint spin states for two particles with spin 1/2.

A mixed quantum state corresponds to a probabilistic mixture of pure states; however, different distributions of pure states can generate equivalent (i.e., physically indistinguishable) mixed states. Mixed states are described by so-called density matrices. A pure state can also be recast as a density matrix; in this way, pure states can be represented as a subset of the more general mixed states.

For example, if the spin of an electron is measured in any direction, e.g. with a Stern–Gerlach experiment, there are two possible results: up or down. The Hilbert space for the electron's spin is therefore two-dimensional. A pure state here is represented by a two-dimensional complex vector (α, β), with a length of one; that is, with

|\alpha|^2 + |\beta|^2 = 1,

where |α| and |β| are the absolute values of α and β. A mixed state, in this case, is a 2 × 2 matrix that is Hermitian, positive-definite, and has trace 1.

Before a particular measurement is performed on a quantum system, the theory usually gives only a probability distribution for the outcome, and the form that this distribution takes is completely determined by the quantum state and the observable describing the measurement. These probability distributions arise for both mixed states and pure states: it is impossible in quantum mechanics (unlike classical mechanics) to prepare a state in which all properties of the system are fixed and certain. This is exemplified by the uncertainty principle, and reflects a core difference between classical and quantum physics. Even in quantum theory, however, for every observable there are some states that have an exact and determined value for that observable.[2]:45[3]

18.1 Conceptual description

18.1.1 Pure states

In the mathematical formulation of quantum mechanics, pure quantum states correspond to vectors in a Hilbert space, while each observable quantity (such as the energy or momentum of a particle) is associated with a mathematical operator. The operator serves as a linear function which acts on the states of the system. The eigenvalues of the operator correspond to the possible values of the observable.
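The qubit description above can be made concrete. The following sketch (illustrative only) represents a pure spin state as a normalized two-component vector and as a density matrix, builds an equal mixture of up and down as a mixed state, and verifies the Hermitian and trace-1 conditions; the purity tr(ρ²) separates the two cases:

```python
import numpy as np

# A qubit pure state (alpha, beta) with |alpha|^2 + |beta|^2 = 1,
# and its density matrix rho = |psi><psi|.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([alpha, beta])
assert np.isclose(np.abs(alpha)**2 + np.abs(beta)**2, 1.0)

rho_pure = np.outer(psi, psi.conj())

# An equal classical mixture of "up" and "down" is a mixed state.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
rho_mixed = 0.5 * np.outer(up, up) + 0.5 * np.outer(down, down)

for rho in (rho_pure, rho_mixed):
    assert np.allclose(rho, rho.conj().T)          # Hermitian
    assert np.isclose(np.trace(rho).real, 1.0)     # trace 1

# Purity tr(rho^2): 1 for a pure state, < 1 for a mixed state.
print(np.isclose(np.trace(rho_pure @ rho_pure).real, 1.0))   # True
print(np.trace(rho_mixed @ rho_mixed).real)                  # 0.5
```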
(This approach was taken in the earlier part of the discussion above, with a time-varying state |ψ(t)⟩ = Σ_n C_n(t)|n⟩.) Conceptually (and mathematically), the two approaches are equivalent; choosing one of them is a matter of convention.

Both viewpoints are used in quantum theory. While non-relativistic quantum mechanics is usually formulated in terms of the Schrödinger picture, the Heisenberg picture is often preferred in a relativistic context, that is, for quantum field theory. Compare with Dirac picture.[10]:65

18.2 Formalism in quantum physics

See also: Mathematical formulation of quantum mechanics

18.2.1 Pure states as rays in a Hilbert space

Quantum physics is most commonly formulated in terms of linear algebra, as follows. Any given system is identified with some finite- or infinite-dimensional Hilbert space. The pure states correspond to vectors of norm 1. Thus the set of all pure states corresponds to the unit sphere in the Hilbert space.

Multiplying a pure state by a scalar is physically inconsequential (as long as the state is considered by itself). If one vector is obtained from the other by multiplying by a scalar of unit magnitude, the two vectors are said to correspond to the same ray in Hilbert space[11] and also to the same point in the projective Hilbert space.

18.2.2 Bra–ket notation

Main article: Bra–ket notation

Calculations in quantum mechanics make frequent use of linear operators, scalar products, dual spaces and Hermitian conjugation. In order to make such calculations flow smoothly, and to make it unnecessary (in some contexts) to fully understand the underlying linear algebra, Paul Dirac invented a notation to describe quantum states, known as bra–ket notation. Although the details of this are beyond the scope of this article, some consequences of this are:

• The expression used to denote a state vector (which corresponds to a pure quantum state) takes the form |ψ⟩ (where the "ψ" can be replaced by any other symbols, letters, numbers, or even words). This can be contrasted with the usual mathematical notation, where vectors are usually bold, lower-case letters, or letters with arrows on top.

• Dirac defined two kinds of vector, bra and ket, dual to each other.[12]

• Each ket |ψ⟩ is uniquely associated with a so-called bra, denoted ⟨ψ|, which corresponds to the same physical quantum state. Technically, the bra is the adjoint of the ket. It is an element of the dual space, and related to the ket by the Riesz representation theorem. In a finite-dimensional space with a chosen basis, writing |ψ⟩ as a column vector, ⟨ψ| is a row vector; to obtain it just take the transpose and entry-wise complex conjugate of |ψ⟩.

• Scalar products[13][14] (also called brackets) are written so as to look like a bra and ket next to each other: ⟨ψ₁|ψ₂⟩. (The phrase "bra-ket" is supposed to resemble "bracket".)

18.2.3 Spin

Main article: Mathematical formulation of quantum mechanics § Spin

The angular momentum has the same dimension (M L² T⁻¹) as the Planck constant and, at quantum scale, behaves as a discrete degree of freedom of a quantum system. Most particles possess a kind of intrinsic angular momentum that does not appear at all in classical mechanics and arises from Dirac's relativistic generalization of the theory. Mathematically it is described with spinors. In non-relativistic quantum mechanics the group representations of the Lie group SU(2) are used to describe this additional freedom. For a given particle, the choice of representation (and hence the range of possible values of the spin observable) is specified by a non-negative number S that, in units of Planck's reduced constant ℏ, is either an integer (0, 1, 2, ...) or a half-integer (1/2, 3/2, 5/2, ...). For a massive particle with spin S, its spin quantum number m always assumes one of the 2S + 1 possible values in the set

\{-S, -S + 1, \ldots, +S - 1, +S\}

As a consequence, the quantum state of a particle with spin is described by a vector-valued wave function with values in C^{2S+1}. Equivalently, it is represented by a complex-valued function of four variables: one discrete quantum number variable (for the spin) is added to the usual three continuous variables (for the position in space).
When symmetrization or anti-symmetrization is unnecessary, N-particle spaces of states can be obtained simply by tensor products of one-particle spaces, to which we will return later.

In terms of the continuous position basis |r⟩, a normalized state |ψ⟩ is characterized by

\int d^3 r \, |\psi(\mathbf{r})|^2 = 1.

Rescaling the expansion coefficients of a state yields a different quantum state (possibly not normalized); note that which quantum state it is depends on both the magnitudes and the relative phases of the coefficients.
18.4 Mathematical generalizations

States can be formulated in terms of observables, rather than as vectors in a vector space. These are positive normalized linear functionals on a C*-algebra, or sometimes other classes of algebras of observables. See State on a C*-algebra and Gelfand–Naimark–Segal construction for more details.

18.5 See also

Atomic electron transition

18.6 Notes

[1] Sometimes written "|>"; see angle brackets.

[2] To avoid misunderstandings: here we mean that Q(t) and P(t) are measured in the same state, but not in the same run of the experiment.

[3] i.e. separated by a zero delay. One can think of it as stopping the time, then making the two measurements one after the other, then resuming the time. Thus, the measurements occurred at the same time, but it is still possible to tell which was first.

[4] For concreteness' sake, suppose that A = Q(t₁) and B = P(t₂) in the above example, with t₂ > t₁ > 0.

[5] Note that a state |ψ⟩ is a superposition of different basis states |r⟩, so |ψ⟩ and |r⟩ are elements of the same Hilbert space. A particle in state |r⟩ is located precisely at position r = (x, y, z), while a particle in state |ψ⟩ can be found at different positions with corresponding probabilities.

[6] In the continuous case, the basis kets |r⟩ are not unit kets (unlike the state |ψ⟩): they are normalized according to ∫ d³r′ ⟨r|r′⟩ = 1; see Landau (1965).[15]

[7] Note that this criterion works when the density matrix is normalized so that its trace is 1, as it is for the standard definition given in this section. Occasionally a density matrix will be normalized differently, in which case the criterion is Tr(ρ²) = (Tr ρ)².

18.7 References

[1] Weinberg, S. (2002). The Quantum Theory of Fields, Vol. I. Cambridge University Press. ISBN 0-521-55001-7.

[2] Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 0-13-111892-7.

[9] Dirac, P.A.M. (1958). The Principles of Quantum Mechanics (4th ed.). Oxford University Press, Oxford.

[10] Gottfried, Kurt; Yan, Tung-Mow (2003). Quantum Mechanics: Fundamentals (2nd, illustrated ed.). Springer. ISBN 9780387955766.

[11] Weinberg, Steven (1995). The Quantum Theory of Fields, Vol. 1. Cambridge University Press, p. 50.

[12] Dirac (1958),[9] p. 20: "The bra vectors, as they have been here introduced, are quite a different kind of vector from the kets, and so far there is no connexion between them except for the existence of a scalar product of a bra and a ket."

[13] Dirac (1958),[9] p. 19: "A scalar product B|A now appears as a complete bracket expression."

[14] Gottfried (2013),[10] p. 31: "to define the scalar products as being between bras and kets."

[15] Landau (1965), p. 17: "∫ Ψ_{f′} Ψ_f* dq = δ(f′ − f)" (the left side corresponds to ⟨f|f′⟩), "∫ δ(f′ − f) df′ = 1".

[16] Blum, K. Density Matrix Theory and Applications, p. 39.
Hermitian matrix

See also: Self-adjoint operator, Unitary matrix
Trace (linear algebra)

In linear algebra, the trace of an n-by-n square matrix A is defined to be the sum of the elements on the main diagonal (the diagonal from the upper left to the lower right) of A, i.e.,

\mathrm{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn} = \sum_{i=1}^{n} a_{ii}

where a_{ii} denotes the entry on the ith row and ith column of A. The trace of a matrix is the sum of its (complex) eigenvalues, and it is invariant with respect to a change of basis. This characterization can be used to define the trace of a linear operator in general. Note that the trace is only defined for a square matrix (i.e., n × n).

20.1 Example

Let A be a matrix, with

A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}

Then

\mathrm{tr}(A) = a + e + i.

20.2 Properties

20.2.1 Basic properties

The trace is a linear mapping:

\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B), \qquad \mathrm{tr}(cA) = c\,\mathrm{tr}(A)

for all square matrices A and B, and all scalars c.

A matrix and its transpose have the same trace:

\mathrm{tr}(A) = \mathrm{tr}(A^{\mathrm{T}}).

This follows immediately from the fact that transposing a square matrix does not affect the elements along the main diagonal.

20.2.2 Trace of a product

The trace of a product can be rewritten as the sum of entry-wise products of elements:

\mathrm{tr}(A^{\mathrm{T}} B) = \sum_{i,j} A_{ij} B_{ij}.

This means that the trace of a product of matrices functions similarly to a dot product of vectors. For this reason, generalizations of vector operations to matrices (e.g. in matrix calculus and statistics) often involve a trace of matrix products.

The matrices in a trace of a product can be switched without changing the result: if A is an m × n matrix and B is an n × m matrix, then

\mathrm{tr}(AB) = \mathrm{tr}(BA).[1]
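The definitions above are easy to verify numerically; in particular tr(AB) = tr(BA) holds even when A and B are non-square (of compatible shapes), as the sketch below illustrates (illustrative only, using NumPy's trace):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))
C = rng.standard_normal((3, 3))

# Trace = sum of main-diagonal entries.
assert np.isclose(np.trace(C), C.diagonal().sum())
# A matrix and its transpose have the same trace.
assert np.isclose(np.trace(C), np.trace(C.T))
# tr(AB) = tr(BA) even for non-square A (3x4) and B (4x3).
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # True
```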
However, if products of three symmetric matrices are considered, any permutation is allowed. (Proof: tr(ABC) = tr(AᵀBᵀCᵀ) = tr(Aᵀ(CB)ᵀ) = tr((CB)ᵀAᵀ) = tr((ACB)ᵀ) = tr(ACB), where the last equality holds because the traces of a matrix and its transpose are equal.) For more than three factors this is not true.

Unlike the determinant, the trace of a product is not the product of the traces, that is:

\mathrm{tr}(XY) \neq \mathrm{tr}(X)\,\mathrm{tr}(Y)

What is true is that the trace of the Kronecker product of two matrices is the product of their traces:

\mathrm{tr}(X \otimes Y) = \mathrm{tr}(X)\,\mathrm{tr}(Y)

20.2.3 Other properties

If A is symmetric and B is antisymmetric, then

\mathrm{tr}(AB) = 0.

The trace of the identity matrix is the dimension of the space; this leads to generalizations of dimension using trace. The trace of an idempotent matrix A (for which A² = A) is the rank of A. The trace of a nilpotent matrix is zero.

More generally, if f(x) = (x − λ₁)^{d₁} ⋯ (x − λ_k)^{d_k} is the characteristic polynomial of a matrix A, then

\mathrm{tr}(A) = d_1 \lambda_1 + \cdots + d_k \lambda_k.

When both A and B are n-by-n, the trace of the (ring-theoretic) commutator of A and B vanishes: tr([A, B]) = 0; one can state this as "the trace is a map of Lie algebras gl_n → k from operators to scalars", as the commutator of scalars is trivial (it is an abelian Lie algebra). In particular, using similarity invariance, it follows that the identity matrix is never similar to the commutator of any pair of matrices.

Conversely, any square matrix with zero trace is a linear combination of the commutators of pairs of matrices.[3] Moreover, any square matrix with zero trace is unitarily equivalent to a square matrix with diagonal consisting of all zeros.

The trace of any power of a nilpotent matrix is zero. When the characteristic of the base field is zero, the converse also holds: if tr(xᵏ) = 0 for all k, then x is nilpotent.

The trace of a Hermitian matrix is real, because the elements on the diagonal are real.

The trace of a projection matrix is the dimension of the target space.

The quantity tr(exp(A)) is sometimes referred to as the exponential trace function; it is used in the Golden–Thompson inequality.

20.4 Trace of a linear operator

Given some linear map f : V → V (where V is a finite-dimensional vector space), generally we can define the trace of this map by considering the trace of a matrix representation of f, that is, choosing a basis for V, describing f as a matrix relative to this basis, and taking the trace of this square matrix. The result will not depend on the basis chosen, since different bases will give rise to similar matrices, allowing for the possibility of a basis-independent definition for the trace of a linear map.

Such a definition can be given using the canonical isomorphism between the space End(V) of linear maps on V and V ⊗ V*, where V* is the dual space of V.
Let v be in V and let f be in V*. Then the trace of the indecomposable element v ⊗ f is defined to be f(v); the trace of a general element is defined by linearity. Using an explicit basis for V and the corresponding dual basis for V*, one can show that this gives the same definition of the trace as given above.

20.4.1 Eigenvalue relationships

If A is a linear operator represented by a square n-by-n matrix with real or complex entries and if λ₁, …, λ_n are the eigenvalues of A (listed according to their algebraic multiplicities), then

\mathrm{tr}(A) = \sum_i \lambda_i

This follows from the fact that A is always similar to its Jordan form, an upper triangular matrix having λ₁, …, λ_n on the main diagonal. In contrast, the determinant of A is the product of its eigenvalues; i.e.,

\det(A) = \prod_i \lambda_i

In fact, there is an internal direct sum decomposition gl_n = sl_n ⊕ k of operators/matrices into traceless operators/matrices and scalar operators/matrices. The projection map onto scalar operators can be expressed in terms of the trace; concretely, there is an exact sequence

0 \to \mathfrak{sl}_n \to \mathfrak{gl}_n \xrightarrow{\mathrm{tr}} k \to 0

which is analogous to

1 \to \mathrm{SL}_n \to \mathrm{GL}_n \xrightarrow{\det} K^* \to 1

for Lie groups. However, the trace splits naturally (via 1/n times scalars), so gl_n = sl_n ⊕ k, but the splitting of the determinant would be as the nth root times scalars, and this does not in general define a function, so the determinant does not split and the general linear group does not decompose: GL_n ≠ SL_n × K*.

20.5 Applications

Consider, for example, the one-parameter family of plane rotations through an angle θ. These transformations all have determinant 1, so they preserve area. The derivative of this family at θ = 0, the identity rotation, is the antisymmetric matrix

A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}

which clearly has trace zero, indicating that this matrix represents an infinitesimal transformation which preserves area.

A related characterization of the trace applies to linear vector fields. Given a matrix A, define a vector field F on ℝⁿ by F(x) = Ax. The components of this vector field are linear functions (given by the rows of A). Its divergence div F is a constant function, whose value is equal to tr(A). By the divergence theorem, one can interpret this in terms of flows: if F(x) represents the velocity of a fluid at location x and U is a region in ℝⁿ, the net flow of the fluid out of U is given by tr(A) · vol(U), where vol(U) is the volume of U.

The trace is a linear operator, hence it commutes with the derivative:

d\,\mathrm{tr}(X) = \mathrm{tr}(dX).

20.6 Inner product

Two matrices x and y are said to be trace orthogonal if

\mathrm{tr}(xy) = 0.

For m-by-n matrices with complex (or real) entries,

\langle A, B \rangle = \mathrm{tr}(A^\dagger B)

yields an inner product on the space of all complex (or real) m-by-n matrices.

The norm derived from the above inner product is called the Frobenius norm, which satisfies a submultiplicative property as a matrix norm. Indeed, it is simply the Euclidean norm if the matrix is considered as a vector of length m·n.

It follows that if A and B are real positive semi-definite matrices of the same size, then

0 \le [\mathrm{tr}(AB)]^2 \le \mathrm{tr}(A^2)\,\mathrm{tr}(B^2) \le [\mathrm{tr}(A)]^2\,[\mathrm{tr}(B)]^2.[5]
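The eigenvalue relationships and the trace inner product lend themselves to a direct numerical check. A sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4))

# tr(M) is the sum of the eigenvalues; det(M) is their product.
lam = np.linalg.eigvals(M)
print(np.isclose(np.trace(M), lam.sum().real))          # True
print(np.isclose(np.linalg.det(M), np.prod(lam).real))  # True

# <A, B> = tr(A^T B) induces the Frobenius norm.
F = rng.standard_normal((3, 5))
assert np.isclose(np.sqrt(np.trace(F.T @ F)), np.linalg.norm(F, 'fro'))

# Chain of inequalities for real positive semi-definite A and B.
P = rng.standard_normal((3, 3))
Q = rng.standard_normal((3, 3))
A, B = P @ P.T, Q @ Q.T            # PSD by construction
t2 = np.trace(A @ B) ** 2
ok = 0 <= t2 <= np.trace(A @ A) * np.trace(B @ B) <= np.trace(A) ** 2 * np.trace(B) ** 2
print(ok)   # True
```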
20.6.1 Bilinear forms

The bilinear form

B(x, y) = \mathrm{tr}(\mathrm{ad}(x)\,\mathrm{ad}(y)), \quad \text{where } \mathrm{ad}(x)y = [x, y] = xy - yx,

is called the Killing form, which is used for the classification of Lie algebras.

The trace defines a bilinear form

(x, y) \mapsto \mathrm{tr}(xy)

(x, y square matrices). The form is symmetric, non-degenerate[4] and associative in the sense that tr(x[y, z]) = tr([x, y]z).

20.8 Generalizations

The concept of the trace of a matrix is generalized to the trace class of compact operators on Hilbert spaces, and the analog of the Frobenius norm is called the Hilbert–Schmidt norm.

If K is trace-class, then for any orthonormal basis (φ_n)_n, the trace is given by

\mathrm{tr}(K) = \sum_n \langle \varphi_n, K \varphi_n \rangle,

and is finite and independent of the orthonormal basis.[6]

The partial trace is another generalization of the trace that is operator-valued. The trace of a linear operator Z which lives on a product space A ⊗ B is equal to the partial traces over A and B:

\mathrm{tr}(Z) = \mathrm{tr}_A(\mathrm{tr}_B(Z)) = \mathrm{tr}_B(\mathrm{tr}_A(Z)).
For more properties and a generalization of the partial trace, see the article on traced monoidal categories.

If A is a general associative algebra over a field k, then a trace on A is often defined to be any map tr : A → k which vanishes on commutators: tr([a, b]) = 0 for all a, b in A. Such a trace is not uniquely defined; it can always at least be modified by multiplication by a nonzero scalar.

A supertrace is the generalization of a trace to the setting of superalgebras.

The operation of tensor contraction generalizes the trace to arbitrary tensors.

20.9 Coordinate-free definition

In the coordinate-free approach, the trace is defined on V ⊗ V* by tr(v ⊗ f) = f(v), extended by linearity; writing a matrix in this form recovers the sum of its coefficients along the diagonal. This method, however, makes coordinate invariance an immediate consequence of the definition.

The multiplication map

(V \otimes V^*) \otimes (V \otimes V^*) \to (V \otimes V^*)

comes from the pairing V* × V → F on the middle terms. Taking the trace of the product then comes from pairing on the outer terms, while taking the product in the opposite order and then taking the trace just switches which pairing is applied first. On the other hand, taking the trace of A and the trace of B corresponds to applying the pairing on the left terms and on the right terms (rather than on the inner and outer ones), and is thus different.

In coordinates, this corresponds to indexes: multiplication is given by

(AB)_{ik} = \sum_j a_{ij} b_{jk},

so tr(AB) = Σ_{ij} a_{ij} b_{ji} and tr(BA) = Σ_{ij} b_{ij} a_{ji}, which is the same, while tr(A)·tr(B) = Σ_i a_{ii} · Σ_j b_{jj}, which is different.

20.9.1 Dual

Further, one may dualize this map, obtaining a map F = F* → V ⊗ V* = End(V). This map is precisely the inclusion of scalars, sending 1 ∈ F to the identity matrix: "trace is dual to scalars". In the language of bialgebras, scalars are the unit, while trace is the counit.

One can then compose these, F →^I End(V) →^tr F, which yields multiplication by n, as the trace of the identity is the dimension of the vector space.

20.11 Notes

[1] This is immediate from the definition of the matrix product:

\mathrm{tr}(AB) = \sum_{i=1}^{m} (AB)_{ii} = \sum_{i=1}^{m} \sum_{j=1}^{n} A_{ij} B_{ji} = \sum_{j=1}^{n} \sum_{i=1}^{m} B_{ji} A_{ij} = \sum_{j=1}^{n} (BA)_{jj} = \mathrm{tr}(BA).

[2] Proof: f(e_{ij}) = 0 if i ≠ j, and f(e_{jj}) = f(e_{11}) (with the standard basis e_{ij}), and thus

f(A) = \sum_{i,j} [A]_{ij} f(e_{ij}) = \sum_i [A]_{ii} f(e_{11}) = f(e_{11})\,\mathrm{tr}(A).

[3] Proof: sl_n is a semisimple Lie algebra and thus every element in it is a linear combination of commutators of some pairs of elements; otherwise the derived algebra would be a proper ideal.

[4] This follows from the fact that tr(A*A) = 0 if and only if A = 0.
Symmetric matrix

In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, matrix A is symmetric if

A = A^{\mathrm{T}}.

Because equal matrices have equal dimensions, only square matrices can be symmetric.

The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if the entries are written as A = (a_{ij}), then a_{ij} = a_{ji} for all indices i and j. The following 3 × 3 matrix is symmetric:

\begin{pmatrix} 1 & 7 & 3 \\ 7 & 4 & 5 \\ 3 & 5 & 6 \end{pmatrix}

21.1 Properties

The sum and difference of two symmetric matrices is again symmetric, but this is not always true for the product: given symmetric matrices A and B, AB is symmetric if and only if A and B commute, i.e., if AB = BA. So for integer n, Aⁿ is symmetric if A is symmetric. If A⁻¹ exists, it is symmetric if and only if A is symmetric.

Let Mat_n denote the space of n × n matrices. A symmetric n × n matrix is determined by n(n + 1)/2 scalars (the number of entries on or above the main diagonal). Similarly, a skew-symmetric matrix is determined by n(n − 1)/2 scalars (the number of entries above the main diagonal). If Sym_n denotes the space of n × n symmetric matrices and Skew_n the space of n × n skew-symmetric matrices, then Mat_n = Sym_n + Skew_n and Sym_n ∩ Skew_n = {0}, i.e.

\mathrm{Mat}_n = \mathrm{Sym}_n \oplus \mathrm{Skew}_n.

A real n × n matrix A is symmetric if and only if

\langle Ax, y \rangle = \langle x, Ay \rangle \quad \text{for all } x, y \in \mathbb{R}^n.

Since this definition is independent of the choice of basis, symmetry is a property that depends only on the linear operator A and a choice of inner product. This characterization of symmetry is useful, for example, in differential geometry: each tangent space to a manifold may be endowed with an inner product, giving rise to what is called a Riemannian manifold. Another area where this formulation is used is in Hilbert spaces.
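The direct sum Mat_n = Sym_n ⊕ Skew_n corresponds to the explicit splitting M = (M + Mᵀ)/2 + (M − Mᵀ)/2, which the sketch below verifies (illustrative only):

```python
import numpy as np

# Every square matrix splits uniquely into symmetric + skew-symmetric parts.
rng = np.random.default_rng(6)
M = rng.standard_normal((4, 4))

S = (M + M.T) / 2        # symmetric part
K = (M - M.T) / 2        # skew-symmetric part

assert np.allclose(S, S.T)       # S is symmetric
assert np.allclose(K, -K.T)      # K is skew-symmetric
assert np.allclose(S + K, M)     # together they recover M

# Dimension count: n(n+1)/2 + n(n-1)/2 = n^2 independent entries.
n = 4
print(n * (n + 1) // 2 + n * (n - 1) // 2 == n * n)   # True
```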
The finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: for every symmetric real matrix A there exists a real orthogonal matrix Q such that D = QᵀAQ is a diagonal matrix. Every symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix.

If A and B are n × n real symmetric matrices that commute, then they can be simultaneously diagonalized: there exists a basis of ℝⁿ such that every element of the basis is an eigenvector for both A and B.

Every real symmetric matrix is Hermitian, and therefore all its eigenvalues are real. (In fact, the eigenvalues are the entries in the diagonal matrix D (above), and therefore D is uniquely determined by A up to the order of its entries.) Essentially, the property of being symmetric for real matrices corresponds to the property of being Hermitian for complex matrices.

Every real non-singular matrix can be uniquely factored as the product of an orthogonal matrix and a symmetric positive definite matrix, which is called a polar decomposition. Singular matrices can also be factored, but not uniquely.

Cholesky decomposition states that every real positive-definite symmetric matrix A is a product of a lower-triangular matrix L and its transpose, A = LLᵀ. If the matrix is symmetric indefinite, it may still be decomposed as PAPᵀ = LDLᵀ, where P is a permutation matrix (arising from the need to pivot), L a lower unit triangular matrix, and D a direct sum of symmetric 1 × 1 and 2 × 2 blocks; this is called the Bunch–Kaufman decomposition.[5]

21.1.2 Complex symmetric matrices

A complex symmetric matrix may not be diagonalizable by similarity, whereas every real symmetric matrix is diagonalizable by a real orthogonal similarity. Every complex symmetric matrix A can be diagonalized by unitary congruence

A = Q \Lambda Q^{\mathrm{T}}

where Q is a unitary matrix. If A is real, the matrix Q is a real orthogonal matrix (the columns of which are
ing a unitary matrix: thus if A is a complex symmet- eigenvectors of A), and is real and diagonal (having the
ric matrix, there is a unitary matrix U such that UAUT eigenvalues of A on the diagonal). To see orthogonality,
is a real diagonal matrix. This result is referred to as suppose x and y are eigenvectors corresponding to dis-
the AutonneTakagi factorization. It was originally tinct eigenvalues 1 , 2 . Then
proved by Lon Autonne (1915) and Teiji Takagi (1925)
and rediscovered with dierent proofs by several other
mathematicians.[2][3] In fact, the matrix B = A A is Her- 1 x, y = Ax, y = x, Ay = 2 x, y
mitian and non-negative, so there is a unitary matrix V
such that V BV is diagonal with non-negative real entries. Since 1 and 2 are distinct, thus we have x, y = 0 the
Thus C = V T AV is complex symmetric with C C real. orthogonality.
Writing C = X + iY with X and Y real symmetric matrices,
C C = X2 + Y 2 + i(XY YX). Thus XY = YX. Since X and
Y commute, there is a real orthogonal matrix W such that 21.3 Hessian
both WXW T and WYW T are diagonal. Setting U = WV T ,
the matrix UAU T is complex diagonal. Post-multiplying Symmetric n-by-n matrices of real functions appear as the
U by another diagonal matrix the diagonal entries can be Hessians of twice continuously dierentiable functions of
made to be real and non-negative. Since their squares are n real variables.
the eigenvalues of A A, they coincide with the singular
values of A. (Note, about the eigen-decomposition of a Every quadratic form q on Rn can be uniquely written in
complex symmetric matrix A, the Jordan normal form of the form q(x) = xT Ax with a symmetric n-by-n matrix
A may not be diagonal, therefore A may not be diagonal- A. Because of the above spectral theorem, one can then
ized by any similarity transformation.) say that every quadratic form, up to the choice of an or-
thonormal basis of Rn , looks like
21.2 Decomposition
n
q(x1 , . . . , xn ) = i x2i
i=1
Using the Jordan normal form, one can prove that every
square real matrix can be written as a product of two real with real numbers i. This considerably simplies the
symmetric matrices, and every square complex matrix study of quadratic forms, as well as the study of the level
can be written as a product of two complex symmetric sets {x : q(x) = 1} which are generalizations of conic sec-
matrices.[4] tions.
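As a concrete illustration of this diagonalization (an editorial sketch in plain Python, not part of the original article), take the symmetric matrix A = [[2, 1], [1, 2]]. Its eigenvalues are 3 and 1, with orthonormal eigenvectors (1, 1)/√2 and (1, −1)/√2, so conjugating by the orthogonal matrix Q whose columns are these eigenvectors produces the diagonal matrix of coefficients λ_i:

```python
import math

# Symmetric matrix A and an orthonormal eigenbasis Q found by hand
# for this 2x2 example: eigenvalues 3 and 1, eigenvectors (1,1)/sqrt(2)
# and (1,-1)/sqrt(2) as the columns of Q.
A = [[2.0, 1.0], [1.0, 2.0]]
s = 1.0 / math.sqrt(2.0)
Q = [[s, s], [s, -s]]

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

# D = Q^T A Q is diagonal, with the eigenvalues of A on the diagonal.
D = matmul(transpose(Q), matmul(A, Q))
print([[round(x, 10) for x in row] for row in D])  # [[3.0, 0.0], [0.0, 1.0]]
```

In the rotated coordinates y = Q^T x, the quadratic form q(x) = x^T A x becomes 3 y_1² + y_2², exactly the sum of λ_i-weighted squares described above.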
This is important partly because the second-order behavior of every smooth multi-variable function is described by the quadratic form belonging to the function's Hessian; this is a consequence of Taylor's theorem.

21.4 Symmetrizable matrix

An n×n matrix A is said to be symmetrizable if there exist an invertible diagonal matrix D and a symmetric matrix S such that A = DS. The transpose of a symmetrizable matrix is symmetrizable, since A^T = (DS)^T = SD = D^-1(DSD) and DSD is symmetric. A matrix A = (a_ij) is symmetrizable if and only if the following conditions are met:

1. a_ij = 0 implies a_ji = 0 for all 1 ≤ i ≤ j ≤ n.

2. a_{i1 i2} a_{i2 i3} ... a_{ik i1} = a_{i2 i1} a_{i3 i2} ... a_{i1 ik} for any finite sequence (i1, i2, ..., ik).

21.5 See also

Other types of symmetry or pattern in square matrices have special names; see for example:

Antimetric matrix
Centrosymmetric matrix
Hilbert matrix
Persymmetric matrix
Skew-symmetric matrix
Sylvester's law of inertia

21.6 Notes

[1] Jesús Rojo García (1986). Álgebra lineal (in Spanish) (2nd ed.). Editorial AC. ISBN 84-7288-120-2.

[3] See:

Autonne, L. (1915), "Sur les matrices hypohermitiennes et sur les matrices unitaires", Ann. Univ. Lyon, 38: 1–77.
Takagi, T. (1925), "On an algebraic problem related to an analytic theorem of Carathéodory and Fejér and on an allied theorem of Landau", Japan. J. Math., 1: 83–93.
Siegel, Carl Ludwig (1943), "Symplectic Geometry", American Journal of Mathematics, 65: 1–86, JSTOR 2371774, doi:10.2307/2371774, Lemma 1, page 12.
Hua, L.-K. (1944), "On the theory of automorphic functions of a matrix variable I — geometric basis", Amer. J. Math., 66: 470–488, doi:10.2307/2371910.
Schur, I. (1945), "Ein Satz über quadratische Formen mit komplexen Koeffizienten", Amer. J. Math., 67: 472–480, doi:10.2307/2371974.
Benedetti, R.; Cragnolini, P. (1984), "On simultaneous diagonalization of one Hermitian and one symmetric form", Linear Algebra Appl., 57: 215–226, doi:10.1016/0024-3795(84)90189-7.

[4] Bosch, A. J. (1986). "The factorization of a square matrix into two symmetric matrices". American Mathematical Monthly. 93 (6): 462–464. JSTOR 2323471. doi:10.2307/2323471.

[5] G. H. Golub, C. F. van Loan (1996). Matrix Computations. The Johns Hopkins University Press, Baltimore, London.

21.8 External links

Hazewinkel, Michiel, ed. (2001), "Symmetric matrix", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
150 CHAPTER 21. SYMMETRIC MATRIX
Njerseyguy, Mets501, Hansbethe, Myasuda, Mct mht, Second Quantization, Azaghal of Belegost, Cotton2, Smite-Meister, VolkovBot,
Sam729, Shai mach, SchreiberBike, Jscg, Phys0111, WikHead, Addbot, Dr. Universe, Yobot, AnomieBOT, Citation bot, Churchill17,
Omnipaedista, Baz.77.243.99.32, Kirsim, Pavithransiyer, Akerans, , Helpful Pixie Bot, Bibcode Bot, BG19bot, Andreamari84, Luca
Innocenti, Corvus-TAU, Bender the Bot, PrimeBOT and Anonymous: 26
Amir Caldeira Source: https://en.wikipedia.org/wiki/Amir_Caldeira?oldid=749945120 Contributors: Rsabbatini, Masud1011, Tim-
rollpickering, Kinu, T. Anthony, SmackBot, Victor Lopes, Judgesurreal777, Cydebot, Batamtig, Dalliance, Alphachimpbot, Waacstats,
Phonon, Fordescort79, Aboutmovies, Jonas Mur~enwiki, BOTijo, Addbot, Lightbot, Yobot, RjwilmsiBot, BG19bot, Brad7777, Giso6150,
KasparBot, InternetArchiveBot, GreenC bot and Anonymous: 3
Anthony James Leggett Source: https://en.wikipedia.org/wiki/Anthony_James_Leggett?oldid=766625531 Contributors: Amillar, Rsab-
batini, Mic, Ams80, Maximus Rex, Kenatipo, SWAdair, Wmahan, ChicXulub, HorsePunchKid, Phe, PDH, John Foley, D6, Garri-
son, Aris Katsaris, ChristophDemmer, Srbauer, Craigy144, Ksnow, Cortonin, RyanGerbil10, Angr, Woohookitty, Emerson7, BD2412,
Wachholder0, Melesse, Rjwilmsi, The wub, Ttwaring, FlaBot, Margosbot~enwiki, Gary Cziko, CarolGray, Choess, Srleer, CJLL
Wright, YurikBot, RussBot, Alex Bakharev, Rms125a@hotmail.com, LeonardoRob0t, Garion96, Le Hibou~enwiki, Philip Stevens,
KnightRider~enwiki, SmackBot, Vald, Bluebot, Paulcardan, George Church, Tsca.bot, EdGl, Andrei Stroe, Chymicus, Ser Amantio di
Nicolao, John, Tim bates, Regan123, Ambuj.Saxena, Trafalgar007, HennessyC, CmdrObot, Drinibot, Themightyquill, Cydebot, MWaller,
Thijs!bot, TonyTheTiger, Headbomb, Batamtig, Bunzil, Salavat, RobotG, Chill doubt, NBeale, Gcm, MER-C, Here2xCategorizations,
Drdavidhill, CommonsDelinker, Johnpacklambert, SuperGirl, Paravane, Slim cop, Domminico, Paragghosh, GrahamHardy, VolkovBot,
MertonLister, Setchcr, TXiKiBoT, Jimmyeatskids, Victor vakaryuk, Duncan.Hull, Strangerer, Rcb1, AlleborgoBot, Millbanks, SieBot,
Kzirkel, Utternutter, Arjen Dijksman, Seedbot, Jobas, Someone111111, Martarius, Gaia Octavia Agrippa, Joao Xavier, Pointillist, Mas-
terpiece2000, DragonBot, Excirial, Alexbot, DeltaQuad, Cardinalem, Tjako, MessinaRagazza, D.M. from Ukraine, Addbot, Kamuichikap,
LaaknorBot, Tassedethe, Lightbot, Zorrobot, Luckas-bot, Yobot, Amirobot, KamikazeBot, AnomieBOT, Ciphers, AdjustShift, Cyan22,
Citation bot, ArthurBot, KHirsch, Carolyne M. Van Vliet, Davshul, Omnipaedista, Citation bot 1, Sopher99, Plucas58, TobeBot, Badger
M., MrX, SeoMac, Srv.poddar, RjwilmsiBot, VernoWhitney, EmausBot, WikitanvirBot, PBS-AWB, Parsonscat, JeanneMish, ClueBot
NG, HBook, Bibcode Bot, BG19bot, Zoldyick, JYBot, VIAFbot, Here is where you cut it, Tentinator, RaphaelQS, Fallen skirts, Monkbot,
EdYuan, KasparBot, JorisEnter, InternetArchiveBot, Bender the Bot and Anonymous: 46
Nitrogen-vacancy center Source: https://en.wikipedia.org/wiki/Nitrogen-vacancy_center?oldid=763309032 Contributors: Dratman,
Mike Schwartz, Gary, Gene Nygaard, WilliamKF, BD2412, Rjwilmsi, Shaddack, Oakwood, Reyk, Tentrillion, SaveTheWhales, Smack-
Bot, Chris the speller, Mgiganteus1, Sasata, Wideofthemark, Trev M, Headbomb, VolkovBot, TXiKiBoT, FKmailliW, Lightmouse, Mild
Bill Hiccup, K a r n a, Justin545, Wikimedes, Addbot, Tassedethe, AnomieBOT, JackieBot, Khcf6971, Materialscientist, Citation bot,
LilHelpa, Mononomic, J04n, Herrenberg, BenzolBot, Sbalian, Citation bot 1, Tom.Reding, Ripchip Bot, AManWithNoPlan, DokReggar,
Nscozzaro, Frietjes, Bibcode Bot, BG19bot, Dexbot, Me, Myself, and I are Here, Lagoset, Monkbot, Vishven, Brunolucatto, Zijin Shi,
Nadsokor, SarcasticHalli, MinusBot and Anonymous: 16
Quantum mechanics Source: https://en.wikipedia.org/wiki/Quantum_mechanics?oldid=769477577 Contributors: AxelBoldt, Paul Drye,
Chenyu, Derek Ross, CYD, Eloquence, Mav, The Anome, AstroNomer, Taral, Ap, Magnus~enwiki, Ed Poor, XJaM, Rgamble, Christian
List, William Avery, Roadrunner, Ellmist, Mjb, Olivier, Stevertigo, Bdesham, Michael Hardy, Tim Starling, JakeVortex, Vudujava, Owl,
Norm, Gabbe, Menchi, Ixfd64, Axlrosen, TakuyaMurata, Shanemac, Alo, Looxix~enwiki, Mdebets, Ahoerstemeier, Cyp, Stevenj, J-
Wiki, Theresa knott, Snoyes, Gyan, Nanobug, Cipapuc, Jebba, , Glenn, Kyokpae~enwiki, Nikai, Dod1, Jouster, Mxn, Charles
Matthews, Tantalate, Timwi, Stone, Jitse Niesen, Rednblu, Wik, Dtgm, Patrick0Moran, Tpbradbury, Nv8200pa, Phys, Bevo, Jecar, Fvw,
Stormie, Sokane, Optim, Bcorr, Johnleemk, Jni, Rogper~enwiki, Robbot, Ke4roh, Midom, MrJones, Jaleho, Astronautics~enwiki, Fredrik,
Chris 73, Moncrief, Goethean, Bkalafut, Lowellian, Centic, Gandalf61, StefanPernar, Academic Challenger, Rursus, Texture, Matty j,
Moink, Hadal, Papadopc, Johnstone, Fuelbottle, Lupo, HaeB, Mcdutchie, Xanzzibar, Tea2min, David Gerard, Enochlau, Ancheta Wis,
Decumanus, Giftlite, Donvinzk, DocWatson42, ScudLee, Awolf002, Barbara Shack, Harp, Fudoreaper, Lethe, Fastssion, Zigger, Mon-
edula, Wwoods, Anville, Alison, Bensaccount, Tromer, Sukael, Andris, Jason Quinn, Gracefool, Solipsist, Nathan Hamblen, Foobar,
SWAdair, Mckaysalisbury, AdamJacobMuller, Utcursch, CryptoDerk, Knutux, Yath, Amarvc, Pcarbonn, Stephan Leclercq, Antandrus, Jo-
Jan, Savant1984, Jossi, Karol Langner, CSTAR, Rdsmith4, APH, Anythingyouwant, Bumm13, Thincat, Aaron Einstein, Edsanville, Robin
klein, Muijz, Zondor, Guybrush, Grunt, Lacrimosus, Chris Howard, L-H, Ta bu shi da yu, Freakofnurture, Sfngan, Venu62, Spiy sperry,
CALR, Ultratomio, KeyStroke, Noisy, Discospinster, Caroline Thompson, Rich Farmbrough, H0riz0n, FT2, Pj.de.bruin, Hidaspal, Pjacobi,
Vsmith, Wk muriithi, Silence, Smyth, Phil179, Moogoo, WarEagleTH, Smear~enwiki, Paul August, Dmr2, Bender235, ESkog, Nabla, Dat-
aphile, Dpotter, Floorsheim, El C, Lankiveil, Kross, Laurascudder, Edward Z. Yang, Shanes, Spearhead, RoyBoy, Femto, MPS, Bobo192,
Army1987, John Vandenberg, AugustinMa, Geek84, GTubio, Clarkbhm, SpaceMonkey, Sjoerd visscher, 9SGjOSfyHJaQVsEmy9NS,
Sriram sh, Matt McIrvin, Sasquatch, BM, Firewheel, MtB, Nsaa, Storm Rider, Alansohn, Gary, ChristopherWillis, Tek022, ZiggyZig,
Keenan Pepper, La hapalo, Gpgarrettboast, Pippu d'Angelo, PAR, Batmanand, Hdeasy, Bart133, Snowolf, Wtmitchell, Tycho, Leoadec, Jon
Cates, Mikeo, Dominic, Bsadowski1, W7KyzmJt, GabrielF, DV8 2XL, Alai, Nick Mks, KTC, Dan100, Chughtai, Falcorian, Oleg Alexan-
drov, Ashujo, Ott, Feezo, OwenX, Woohookitty, Linas, Superstring, Tripodics, Shoyer, StradivariusTV, Kzollman, Kosher Fan, JeremyA,
Tylerni7, Pchov, GeorgeOrr, Mpatel, Adhalanay, Firien, Wikiklrsc, GregorB, AndriyK, SeventyThree, Wayward, Prashanthns, DL5MDA,
Palica, Pfalstad, Graham87, Magister Mathematicae, Chun-hian, FreplySpang, Baker APS, JIP, RxS, Search4Lancer, Canderson7, Sj,
Saperaud~enwiki, Rjwilmsi, Jake Wartenberg, Linuxbeak, Tangotango, Bruce1ee, Darguz Parsilvan, Mike Peel, Pasky, HappyCamper,
Ligulem, The wub, Ttwaring, Reinis, Hermione1980, Sango123, Oo64eva, St33lbird, Kevmitch, Titoxd, Das Nerd, Alejo2083, FlaBot,
Moskvax, RobertG, Urbansky~enwiki, Arnero, Latka, Nihiltres, Pathoschild, Quuxplusone, Srleer, Kri, Cpcheung, Acett, Chobot,
DVdm, Gwernol, Niz, YurikBot, Wavelength, Paulraine, Arado, Loom91, Xihr, GLaDOS, Khatharr, Firas@user, Gaius Cornelius, Chaos,
Rsrikanth05, Rodier, Wimt, Anomalocaris, Royalbroil, David R. Ingham, NawlinWiki, Grafen, NickBush24, RazorICE, Stephen e nelson,
JocK, SCZenz, Randolf Richardson, Vb, E2mb0t~enwiki, Tony1, Syrthiss, SFC9394, DeadEyeArrow, Bota47, Kkmurray, Werdna, Bmju,
Wknight94, WAS 4.250, FF2010, Donbert, Light current, Enormousdude, 21655, Zzuuzz, TheKoG, Lt-wiki-bot, Nielad, Closedmouth,
Ketsuekigata, E Wing, Brina700, Modify, Dspradau, Netrapt, Petri Krohn, Badgettrg, Peter, Willtron, Mebden, RG2, GrinBot~enwiki,
Mejor Los Indios, Sbyrnes321, CIreland, Luk, Itub, Hvitlys, SmackBot, Paulc1001, Moeron, Rex the rst, InverseHypercube, Knowl-
edgeOfSelf, Royalguard11, K-UNIT, Lagalag, Pgk, Jagged 85, Clpo13, Chairman S., Pxfbird, Grey Shadow, Delldot, Petgraveyard,
Weiguxp, David Woolley, Lithium412, Philmurray, Yamaguchi , Robbjedi, Gilliam, Slaniel, Betacommand, Skizzik, Dauto, Holy Ganga,
JSpudeman, Modusoperandi, Anachronist, Stevenwagner, DetlevSchm, MK8, Jprg1966, MalafayaBot, Marks87, Silly rabbit, Complexica,
Colonies Chris, Darth Panda, Sajendra, Warbirdadmiral, El Chupacabra, Zhinz, Can't sleep, clown will eat me, Physika~enwiki, Scott3,
Scray, ApolloCreed, Ackbeet, Le fantome de l'opera, Onorem, Surfcuba, Voyajer, Addshore, Stiangk, Paul E T, Huon, Khoikhoi, King-
don, DenisDiderot, Cybercobra, Nakon, Nick125, SnappingTurtle, Dreadstar, Richard001, Akriasas, Freemarket, Weregerbil, Kleuske,
DeFoaBuSe, DMacks, Salamurai, LeoNomis, Sadi Carnot, Pilotguy, Byelf2007, Xezlec, DJIndica, Akubra, Rory096, Bcasterline, Har-
152 CHAPTER 21. SYMMETRIC MATRIX
ryboyles, JzG, Richard L. Peterson, RTejedor, AmiDaniel, UberCryxic, Wtwilson3, Zslevi, LWF, Gobonobo, Jaganath, JorisvS, Evan
Robidoux, Mgiganteus1, Zarniwoot, Goodnightmush, Jordan M, Ex nihil, Gwendy, SirFozzie, Waggers, MarphyBlack, Caiaa, Asyn-
deton, Dan Gluck, BranStark, Iridescent, JMK, Dreftymac, Joseph Solis in Australia, UncleDouggie, Rnb, Hikui87~enwiki, Cain47,
Mbenzdabest, Nturton, Civil Engineer III, Cleric12121, Tawkerbot2, Chetvorno, Carborn1, Mustbe, SkyWalker, JForget, Frovingslosh,
Ale jrb, Peace love and feminism, Wafulz, Sir Vicious, Asmackey, Dycedarg, Lavateraguy, Van helsing, The ed17, Bad2101, Jayunder-
scorezero, CBM, BeenAroundAWhile, JohnCD, Nunquam Dormio, Harriemkali, Swwright, Wquester, N2e, Melicans, Smallpond, Mya-
suda, Gregbard, Xanas Servant, Dragons Blood, Cydebot, Wrwrwr, Beek man, Meznaric, Jack O'Lantern, Peterdjones, Meno25, Gogo
Dodo, Islander, DangApricot, NijaMunki, Pascal.Tesson, Hughgr, Benvogel, Michael C Price, Xlynx, Doug Weller, Christian75, Dumb-
BOT, FastLizard4, Waxigloo, Amit Moscovich, FrancoGG, CieloEstrellado, Thijs!bot, Epbr123, Derval Sloan, Koeplinger, Mbell, N5iln,
Headbomb, Marek69, Ujm, Second Quantization, Martin Hedegaard, Philippe, CharlotteWebb, Nick Number, MichaelMaggs, Sbandrews,
Mentisto, Austin Maxwell, Cyclonenim, AntiVandalBot, Luna Santin, Widefox, Tkirkman, Eveross, Lontax, Grafnita, Rakniz, Prolog,
Gnixon, CStar, TimVickers, Dylan Lake, Casomerville, Danger, Farosdaughter, Tim Shuba, North Shoreman, Yellowdesk, Glennwells,
Byrgenwulf, GaaraMsg, Figma, JAnDbot, Leuko, Husond, Superior IQ Genius, MER-C, CosineKitty, Matthew Fennell, Eurobas, IJMacD,
Andonic, Dcooper, Hut 8.5, 100110100, Skewwhiy, Four Dog Night, Acroterion, Magioladitis, Connormah, Mattb112885, Bongwar-
rior, VoABot II, AtticusX, Kuyabribri, JamesBWatson, SHCarter, FagChops, Bene, Rivertorch, Michele123, Zooloo, Jmartinsson, Thun-
derhead~enwiki, Couki, Catgut, Indon, ClovisPt, Dirac66, 28421u2232nfenfcenc, Joe hill, Schumi555, Adventurer, Cpl Syx, Robb37,
Quantummotion, DerHexer, Chaujie328, Khalid Mahmood, Teardrop onthere, Guitarspecs, Info D, Seba5618, Gjd001, CiA10386, Mar-
tinBot, Arjun01, Rettetast, Mike6271, Keith D, Fpaiano~enwiki, CommonsDelinker, AlexiusHoratius, Andrej.westermann, Thirdright,
Dinkytown, Shellwood, J.delanoy, DrKay, Trusilver, Kaesle, Numbo3, NightFalcon90909, Uncle Dick, Maurice Carbonaro, Kevin ayl-
ward, 5Q5, StonedChipmunk, Foober, Acalamari, Metaldev, Bot-Schafter, Katalaveno, DarkFalls, McSly, Bustamonkey2003, Ignatzmice,
Tarotcards, JayJasper, Gcad92, Detah, LucianLachance, Midnight Madness, NewEnglandYankee, Rwessel, Nin0rz4u 2nv, SJP, MKolt-
now, KCinDC, Han Solar de Harmonics, Cmichael, Juliancolton, Cometstyles, MoForce, Chao129, Elenseel, Wfaze, Samlyn.josfyn,
Martial75, GrahamHardy, CardinalDan, Sheliak, Spellcast, Signalhead, Pgb23, Cuzkatzimhut, Zakuragi, MBlue2020, Pleasantville, Lo-
kiClock, Lears Fool, Soliloquial, Philip Trueman, TXiKiBoT, Oshwah, Maximillion Pegasus, SanfordEsq, RyanB88, SCriBu, Nxavar,
Sean D Martin, Sankalpdravid, ChooseAnother, Qxz, Someguy1221, Liko81, Bsharvy, Olly150, XeniaKon, Clarince63, Seraphim, Sai-
bod, Fizzackerly, Zolot, Raymondwinn, David in DC, Handsome Pete, Geometry guy, Ilyushka88, Leavage, Krazywrath, V81, Sodicadl,
RandomXYZb, Lerdthenerd, Andy Dingley, Enigmaman, Meters, Lindsaiv, Synthebot, Antixt, Falcon8765, Enviroboy, Spinningspark,
H1nomaru senshi, The Devils Advocate, Monty845, AlleborgoBot, Nagy, The Mad Genius, Logan, PGWG, DarthBotto, Vitalikk, Bel-
sazar, Katzmik, EmxBot, Givegains, Kbrose, Mk2rhino, YohanN7, SieBot, Ivan tambuk, Nibbleboob, Graham Beards, WereSpielChe-
quers, Dawn Bard, AdevarTruth, RJaguar3, Hekoshi, Yintan, 4RM0~enwiki, Ujjwol, Bentogoa, Ferret, Jc-S0CO, JSpung, Arjen Di-
jksman, Oxymoron83, Antonio Lopez, Henry Delforn (old), Hello71, AnonGuy, Lightmouse, Radzewicz, Hobartimus, Jaquesthehunter,
Michael Courtney, Macy, Hatster301, Swegei, Curlymeatball38, Quackbumper, Coldcreation, Zenbullets, StaticGull, Heptarchy of teh
Anglo-Saxons, baby, Mygerardromance, Fishnet37222, Stentor7, Mouselb, Randy Kryn, Velvetron, ElectronicsEnthusiast, Darrellpenta,
Soporaeternus, Martarius, ClueBot, NickCT, AllPurposeScientist, Scottstensland, Yeahyeahkickball, The Thing That Should Not Be,
EMC125, Zero over zero, Infrasonik, MichaelVernonDavis, Herakles01, Drmies, Cp111, Diafanakrina, Macka92, Mrsastrochicken, Van-
dalCruncher, Agge1000, Otolemur crassicaudatus, Ridge Runner, Neverquick, Asdf1990, DragonBot, Djr32, Ondon, Excirial, HounsGut,
Welsh-girl-Lowri, Quercus basaseachicensis, Jusdafax, Krackenback, Winston365, Brews ohare, Sukaj, Viduoke, NuclearWarfare, Ice
Cold Beer, Arjayay, Terra Xin, PhySusie, Kding, Imalad, The Red, Mikaey, SchreiberBike, Vlatkovedral, Joeawfjdls453, Thingg, Russel
Mcpigmin, Aitias, Scalhotrod, Versus22, Maaparty303, SoxBot III, Apparition11, Mrvanner, Crowsnest, Vanished user uih38riiw4hjlsd,
DumZiBoT, Finalnight, CBMIBM, Javafreakin, X41, XLinkBot, Megankerr, Yokabozeez, Arthur chos, Odenluna, Matthewsasse1, Sakura
Cartelet, Ajcheema, AndreNatas, Paul bunion, WikHead, Loopism, NellieBly, Mifter, JinJian, Truthnlove, Airplaneman, Billcosbyislone-
lypart2, Mojska, Stephen Poppitt, Willieru18, Tayste, Addbot, Ryan ley, 11341134a, Willking1979, Manuel Trujillo Berges, Kadski,
TylerM37, Wareagles18, XTRENCHARD29x, 11341134b, Tcncv, Betterusername, Non-dropframe, Captain-tucker, Robertd514, Fg-
nievinski, Mjamja, Harrytipper, SunDragon34, Blethering Scot, Ronhjones, PandaSaver, WMdeMuynck, Aboctok, JoshTW, Canadian-
LinuxUser, Fluernutter, Looie496, Cst17, MrOllie, BualoBill90, Mitchellsims08, Neonorange, Chzz, AnnaFrance, Favonian, LinkFA-
Bot, Adolfman, Brufnus, Barak Sh, AgadaUrbanit, Ehrenkater, Tide rolls, Lightbot, NoEdward, Romaioi, Jan eissfeldt, Teles, Jarble,
Csdavis1, Ttasterul, Luckas-bot, Yobot, OrgasGirl, Tohd8BohaithuGh1, TaBOT-zerem, Niout, II MusLiM HyBRiD II, Kan8eDie, Nal-
limbot, Brougham96, KamikazeBot, Fearingfearitself, Positivetruthintent, IW.HG, Solo Zone, Jackthegrape, Eric-Wester, Magog the Ogre,
Armegdon, N1RK4UDSK714, Octavianvs, AnomieBOT, Captain Quirk, Jim1138, IRP, Rnpg1014, Piano non troppo, AdjustShift, Csi-
gabi, Giants27, Materialscientist, Gierens22, Supppersmart, The High Fin Sperm Whale, Citation bot, Bci2, Frankenpuppy, LilHelpa,
The Firewall, Joshualmer, Rightly, Mollymop, Xqbot, Nxtid, Sionus, Raaziq, Amareto2, Melmann, Capricorn42, Jostylr, Dbroesch,
Mark Swiggle, TripLikeIDo, Benvirg89, Sokratesinabasket, Gilo1969, Physprof, Grim23, P99am, Qwertyuio 132, Gap9551, Almabot,
Polgo, GrouchoBot, Abce2, Jagbag2, Frosted14, Toofy mcjack34, Richard.decal, Qzd800, Trurle, Omnipaedista, Mind my edits, Willi-
unWeales, Kesaloma, Charvest, The Spam-a-nata, Dale Ritter, Shipunits, FaTony, Gr33k b0i, Shadowjams, Adrignola, Dingoatscritch,
Spakwee, A. di M., Naturelles, Dougofborg, Cigarettizer, , C.c. hopper, JoshC306, Chjoaygame, GliderMaven, Dylan620 II, Bboy-
dill, Magnagr, Kroin, Tobby72, Pepper, Commander zander, Guy82914, PhysicsExplorer, Kenneth Dawson, Colinue, Steve Quinn,
N4tur4le, Pratik.mallya, Razataza, Machine Elf 1735, 06twalke, TTGL, Izodman2012, Xenfreak, Iquseruniv, HamburgerRadio, Cita-
tion bot 1, Cheryledbernard, Greg HWWOK Shaw, WQUlrich, Brettwats, Pinethicket, I dream of horses, Pink Bull, Tom.Reding, Lithium
cyanide, DanielGlazer, Serols, Deaddogwalking, FloridaSpaceCowboy, RobinK, Liarliar2009, JereyVest, Seattle Jrg, Reconsider the
static, Fredkinfollower, Superlions123, GreenReections, Roseohioresident, Tjlafave, FoxBot, Chris5858, Anonwhymus, Trappist the
monk, Buddy23Lee, 3peasants, Beta Orionis, Train2104, Hickorybark, Creativethought20, Lotje, PorkHeart, Michael9422, Lmp883,
Bowlofknowledge, Leesy1106, Doc Quintana, Reaper Eternal, Azatos, SeriousGrinz, Pokemon274, Specs112, Vera.tetrix, Earthandmoon,
MicioGeremia, Tbhotch, Jesse V., Sideways713, Dannideak, Factosis, MR87, Borki0, Taylo9487, Updatehelper, Seawolf1111, Ones-
moothlefty, Carowinds, Bento00, Beyond My Ken, Andy chase, WildBot, Deadlyops, Phyguy03, EmausBot, John of Reading, Dave-
johnsan, Orphan Wiki, Bookalign, WikitanvirBot, Mahommed alpac, Dr Aaij, Gfoley4, Roxbreak, Word2need, Beatnik8983, Alamadte,
Racerx11, Dickwet89, GoingBatty, RA0808, Minimacs Clone, NotAnonymous0, Dmblub, KHamsun, Wham Bam Rock II, Solarra, Ste-
venganzburg, Elee, Slightsmile, Tommy2010, Uploadvirus, Wikipelli, Dcirovic, Elitedarklord dragonslyer 3.14159, AsceticRose, JSquish,
AlexBG72, PBS-AWB, White Trillium, Harddk, Checkingfax, Angelsages, NickJRocks95, F, Josve05a, Stanford96, MithrandirAgain,
Imperial Monarch, 1howardsr1, Plotfeat, User10 5, Brazmyth, Raggot, Alvindclopez, Dalma112211221122, Wayne Slam, Tolly4bolly,
EricWesBrown, Mattedia, Jacksccsi, L Kensington, Qmtead, Chrisman62, Lemony123, Final00123, Maschen, Donner60, HCPotter, Sci-
entic29, Notolder, Pat walls1, ChuispastonBot, Roberts Ken, RockMagnetist, TYelliot, Llightex, DJDunsie, DASHBotAV, The beings,
Whoop whoop pull up, Isocli, ClueBot NG, KagakuKyouju, Professormeowington, CocuBot, MelbourneStar, This lousy T-shirt, Satellizer,
MC ShAdYzOnE, Baseball Watcher, Sabri Al-Sa, Arespectablecitizen, Jj1236, Braincricket, ScottSteiner, Wikishotaro, Widr, Machdeep,
21.9. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 153
Ciro.santilli, Mikeiysnake, Dorje108, Anupmehra, Theopolisme, MerlIwBot, BlooddRose, Helpful Pixie Bot, Novusuna, Olaniyob, Billy-
bobjow, Leo3232, Elochai26, Jubobro, Ieditpagesincorrectly, Bibcode Bot, Psaup09, Lowercase sigmabot, Saurabhagat, BG19bot,
Physics1717171, Brannan.brouse, ThisLaughingGuyRightHere, Happyboy2011, Hashem sfarim, The Mark of the Beast, Northamer-
ica1000, Declan12321, Cyberpower678, BobTheBuilder1997, Yowhatsupdude, Metricopolus, Solomon7968, Mark Arsten, Bigsean0300,
Chander, Guythundar, Joydeep, Trevayne08, Roopydoop55, Aranea Mortem, Jamessweeen, F=q(E+v^B), Vagigi, DARIO SEVERI,
Snow Blizzard, Hipsupful, Laye Mehta, Glacialfox, Winston Trechane, In11Chaudri, Achowat, Bfong2828, PinkShinyRose, Tm14, Lieu-
tenant of Melkor, Penguinstorm300, Pkj61, Williamxu26, Jnracv, Samwalton9, Lbkt, Kisokj, Bakkedal, Cyberbot II, StopTheCrackpots,
Layzeeboi, Callum Inglis, Davidwhite18, Macven, Khazar2, Adwaele, Gdrg22, BuzyBody, BrightStarSky, Dexbot, Webclient101, Autis-
ticCatnip, Garuda0001, William.winkworth, Belief action, Harrycol123, Saehry, Matthewrobertolson, Jamesx12345, Josophie, Brirush,
Athomeinkobe, Thepalerider2012, JustAMuggle, Reatlas, Joeinwiki, Mmcev106, Darvii, Loganfalco, Everymorning, Jakec, Rod Pierce,
Backendgaming, DavidLeighEllis, Geometriccentaur, Rauledc, Eapbar, Ryomaiinsai12345, Pr.malek, LieutenantLatvia, Quadratic for-
mula, Desswarrior, Ray brock, The Herald, Shawny J, DrYusuf786, Bubblynoah, JWNoctis, Asherkirschbaum, W. P. Uzer, Cfunkera,
SJ Defender, Melquiades Babilonia, Bojo1498, Atticus Finch28, Dfranz1012, PhuongAlex, JaconaFrere, 15petedc, Adamtt9, Aspaas-
Bekkelund, QuantumMatt101, Htp0020, Derenek, Russainbiaed na, Internucleotide, Emmaellix, Renegade469, Nikrulz07, HiYahhFriend,
Johntrollston1233, BethNaught, HolLak456, Castielsbloodyface, Trackteur, Black789Green456, Kinetic37, Theskru, DaleReese1962,
Zazzi01, Gareld Gareld, Potatomuncher2000, 3primetime3, 420noscopekills, HMSLavender, The Original Bob, EvilLair, 427454LSX,
ChamithN, SA 13 Bro, Suman Chatterjee DHEP, HelloFriendsOfPlanetEarth, Y-S.Ko, Zppix, Audiorew, Trentln1852, CheeseButtery,
Blackbeast75, Justdausualtf, Whijus19, Govindaharihari, Rubbish computer, Dubsir, Iazyges, Lanzdsey, Tetra quark, Isambard King-
dom, Rohin2002, Bloodorange1234, Harsh mahesheka, Skipfortyfour, Username12345678901011121314151617181920, Camisboss5,
WebdriverHead, SamuelFey666, Cnbr15, Amccann421, Jerodlycett, KasparBot, MintyTurtle01, Peter Richard Obama, Conana007, An-
archyte, Sweepy, Pengyulong7, TheDoctor07, Tropicalkitty, Shinobabu20081996, C.Gesualdo, Jpskycak, Murph9000, CAPTAIN RAJU,
Babymissfortune, TerraCodes, Javathunderman, MB, Kschmit90, Seventhorbitday, GamersUnite, 420BlazeItPhaggot, Jejef, Ryan-
merl8, Matthewadinatajapati, Fthatshiit, Thierry Dugnolle, Zackwright07, InternetArchiveBot, RedExplosiveswiki, Urmomisdumb69, Jon-
ahSpars, KGirlTrucker81, GreenC bot, Eisengetribe13, PlayGatered, ThePlatypusofDoom, Eep03, Fmadd, RunnyAmiga, PANDA12346,
Jmcgnh, Konic004, Mindopener420, Imsarvesh18, Ecnomercy, Bender the Bot, Ebizo5209, Mramoeba, Joeypeeps, Zane Kata, Pot-
holet, Je shcin, Cyrus noto3at bulaga, MysterKitty, Dajonsmmns, Moosealinee, T.J.M, Friendshipofdeath25, VeritasLaureate, Ni-
hal7799622906, Asgbe, KehrerK, WP 3456, Suhasbhokare, Cosmicpixelgames, Bennv3771, Jake.hoer, Omi patil, , Pratik tan-
gade, Amk7313, Eschuelk000, Wadneyare2017, DrHelenK, Bigboy42, A010101, Michpod, Quatumdoodoo, All Knowing Time Demon,
Mackestu55, SALUTATIONEZZZ, Ghori mazail, CybFox and Anonymous: 1857
Markov chain Source: https://en.wikipedia.org/wiki/Markov_chain?oldid=768623230 Contributors: AxelBoldt, The Anome, Ed Poor,
Deb, Olivier, Drseudo, Jdlh, Michael Hardy, Dan Koehl, Kku, Tomi, Chadloder, Tregoweth, Ejrh, Samuelsen, Angela, Andres, Bjcairns,
A5, Charles Matthews, Timwi, Dysprosia, Furrykef, Finlay McWalter, Aliekens, Phil Boswell, Merriam~enwiki, LaurentPerrinet, Hadal,
Qlmatrix, Ruakh, Seth Ilys, Wile E. Heresiarch, Magic Window, Tea2min, Centrx, Giftlite, Gwalla, Christopher Parham, DavidCary,
Telemakh0s, BenFrantzDale, Soundray~enwiki, Dratman, Mellum, Leonard G., Elmindreda, Dj Vu, Neilc, Anthonywong, Andycjp,
Altarego, Gzuckier, Karol Langner, Alex Cohn, Cihan, K-squire, Ehudshapira, Urhixidur, Shiftchange, Luqui, Ericamick, Xezbeth, Ben-
der235, Kjoonlee, MisterSheik, El C, Shanes, SgtThroat, O18, C S, Jumbuck, Storm Rider, Msh210, Denis.arnaud, Gintautasm, Arthena,
Neonumbers, Sligocki, Julovi, Mbloore, Danhash, Jheald, RainbowOfLight, Boyd Steere, Rotring, Kenyon, Oleg Alexandrov, Marasmu-
sine, Linas, Mindmatrix, LOL, Aaron McDaid, OdedSchramm, Wikiklrsc, Isnow, Joke137, Stoni, Rjwilmsi, Salix alba, Miserlou, Cww,
Klortho, GnniX, JYOuyang, Chobot, WriterHound, Timboe, YurikBot, Wavelength, Grubber, Stunetii, Archelon, Gaius Cornelius, Pseu-
domonas, Thane, Gareth Jones, Ino5hiro, Schmock, Tachs, Jmchen, Hirak 99, Mythobeast, Lt-wiki-bot, Cedar101, Digfarenough, Tyre-
nius, DmitriyV, Shepard, Rdbrady, MartinGugino, SmackBot, RDBury, FocalPoint, Tom Lougheed, Karl Stroetmann, Shabda, Frymaster,
Vvarkey, Chris the speller, MK8, Indy90~enwiki, Andreyf, Roscelese, Droll, DoctorW, Thomas Bliem, DHN-bot~enwiki, Jdthood, Svein
Olav Nyberg, Zoonfafer, TheKMan, LouScheer, Ddon, Dreadstar, Hl, Eggstone, Jbergquist, Henning Makholm, Andrei Stroe, Argle-
bargleIV, Wvbailey, Loodog, Khim1, Jim.belk, Nijdam, JHunterJ, Slakr, Doczilla, Drae, MTSbot~enwiki, Negrulio, Yoderj, Iridescent,
JMK, W0le, Eassin, Matthew Meta, Tawkerbot2, Ylloh, Trevor.tombe, CmdrObot, Shorespirit, Nicolaennio, DavidFHoughton, Reques-
tion, Ezrakilty, Cosy, Katya0133, Myasuda, ChrisKennedy, Peterdjones, Skittleys, Josephorourke, Fij, Quibik, HitroMilanese, Mikewax,
Zalgo, Grubbiv, Epbr123, Headbomb, Rkrish67, Mikeeg555, Nick Number, Urdutext, Mmortal03, LachlanA, Gioto, Quintote, Lovibond,
Mack2, Ncauthor, Alhenawy, JAnDbot, Harish victory, CosineKitty, BrotherE, F.Shelley, Coee2theorems, Magioladitis, Leandro79,
Brusegadi, Bubba hotep, SSZ, A3nm, David Eppstein, JaGa, Eeera, Ekotkie, Seanpor, Michael.Clerx, Sarma.bhs, Gaidheal1, Steve98052,
Faridani, Tarotcards, Quantling, JonMcLoone, Policron, Doctahdrey, DavidCBryant, HyDeckar, DorganBot, SirHolo, TWiStErRob,
Yayavar, Mistercupcake, VolkovBot, JohnBlackburne, LokiClock, Maghnus, TXiKiBoT, Egkauston, Someguy1221, UnitedStatesian,
Ankitpatel715, Greg searle, Falcon8765, Forwardmeasure, Paulthree, Canavalia, Fcady2007, Maxlittle2007, SieBot, Ph0t0phobic, IradBG,
Maurizio.polito, BusError, Lightmouse, Fratrep, OKBot, AlanUS, Randomblue, Melcombe, SanderEvers, ClueBot, Rks22, Snigbrook, Bo-
bathon71, MATThematical, Jim 14159, Boing! said Zebedee, Idempotent, Nanmus, Ecov, Bender2k14, Spmeyn, Aurelius173, Terra Xin,
Stypex, SchreiberBike, Citrus Lover, Aardvark23a, 1ForTheMoney, Siniestra, Qwfp, Rmerks, XLinkBot, Efexan~enwiki, Autotomic,
Shepelyansky, Jeferman, Voice In The Wilderness, Tayste, Addbot, DOI bot, Cuaxdon, MrOllie, Download, Xicer9, Dyaa, OlEnglish, Jar-
ble, Greyhood, Lnummeli, Luckas-bot, Yobot, Ht686rg90, Ptbotgourou, Ljaun, Nallimbot, Bryanjohnston, AnomieBOT, ModernMajor,
Rubinbot, AdjustShift, Materialscientist, Citation bot, ArthurBot, Xqbot, Dithridge, Jtsch, Drilnoth, Jergling, NOrbeck, GrouchoBot, Und-
soweiter, Eed3si9n, FrescoBot, Olexa Riznyk, Gausseliminering, Syngola, Planetmarshall, Citation bot 1, MorganGreen, Greender, I dream
of horses, Kiefer.Wolfowitz, Bpiwowar, Stpasha, RedBot, Izmirlig, So nazzer its pav, Vaahteramäen Eemeli, Icfaldu, Kokoklems, Ptj tsubasa, RC Howe, Begoon, Duoduoduo, Yunesj, RjwilmsiBot, Xphileprof, PeterWT, Dewritech, Wikipelli, Dcirovic, Julienbarlan, Kazantsev,
Quondum, L Kensington, Sigma0 1, 123forman, ClueBot NG, BarrelProof, Makrai, JXtra, Mpaa, Jj1236, Kdcoo, Theinactivist, Capodistria, Helpful Pixie Bot, Ricardohz, Wbm1058, BG19bot, Andytango, Jafusmaximus, Nunoxic, Samoon, Guanhomer, Austinprince, Protein Chemist, CitationCleanerBot, Mocahante, Kritboy, Manoguru, M.berlinkov, BillBucket, Lucyinthesky45, Sofia karampataki, Vu2swx,
Pratyya Ghosh, Cyberbot II, Ideafarmcity, Ethlew, Dexbot, Mogism, Imdadasad, Numbermaniac, Sixstring91, Y256, RMCD bot, Ma-
gusapollo, Limit-theorem, Cosmicraga, 7804j, Nbrader, PierreYvesLouis, TreeFell, Uociucamin, Pokechu22, Myconix, Iancj88, Sangdon
Lee, Eugenio López Cortegano, Elenktik, Geitonaki, Sergio Yaksic, Quenhitran, Improbable keeler, Ambrosia0, Ixthings 69, Monkbot,
Sherloco, Yikkayaya, Abacenis, Technetium-99, JiyiZhou, Jh4mit, Firstinshow, Rjason1991, Sourabhghurye, Evermore345, KasparBot,
Ikkohm, Baking Soda, InternetArchiveBot, Latex-yow, Arif212, GreenC bot, Fmadd, Bender the Bot, Raghummr, Moritz Kohls, Sir Rin,
Aleekaror, Lkulakova, Edthat2, Magic links bot, Deniskelleher, Jemonggg and Anonymous: 466
Density matrix Source: https://en.wikipedia.org/wiki/Density_matrix?oldid=769112974 Contributors: AxelBoldt, The Anome, Tarquin,
FlorianMarquardt, Modemac, Michael Hardy, Delirium, Looxix~enwiki, Popas11, AugPi, Phys, Shahard~enwiki, Bkell, Giftlite, Lock-
eownzj00, CSTAR, Creidieki, Frau Holle, Chris Howard, V79, Floorsheim, Phobos~enwiki, Chtito, Oleg Alexandrov, Isnow, BD2412,
154 CHAPTER 21. SYMMETRIC MATRIX
Nanite, Ketiltrout, Rjwilmsi, J S Lundeen, Chobot, Tone, YurikBot, Ugha, Wavelength, Archelon, Aaron Brenneman, Oakwood, Ne-
trapt, Sbyrnes321, SmackBot, Betacommand, Chris the speller, Colonies Chris, Vina-iwbot~enwiki, Mets501, Andreas Rejbrand, Heqs,
Mct mht, Michael C Price, Thijs!bot, Headbomb, Marek69, Gnixon, Shlomi Hillel, B7582, Damonturney, Uisqebaugh, RogierBrussee,
Jakob.scholbach, MetsBot, GIrving, Victor Blacus, Marek.zukowski, Cuzkatzimhut, VolkovBot, LokiClock, Saibod, Ocsenave, YohanN7,
SieBot, Randy Kryn, DFRussia, Brews ohare, SchreiberBike, DumZiBoT, Hess88, Addbot, Dr. Universe, Legobot, Luckas-bot, Yobot,
Niout, Li3939108, Unara, Citation bot, Zeroimpl, LouriePieterse, GrouchoBot, Pradameinho, FrescoBot, Caridin, Citation bot 1,
MastiBot, Aadagger, Jordgette, Havresylt, Mauriachigar, ZroBot, Bamyers99, AManWithNoPlan, Zueignung, RockMagnetist, Chester
Markel, Physics is all gnomes, Zak.estrada, Helpful Pixie Bot, Bibcode Bot, JpMarat, SrijitaK, Pracec, Mathphysman, Monkbot, Oisguad,
Luis Goslin, SSA7471, Integrvl, EntropicPrinciple, Zachb97 and Anonymous: 73
Matrix (mathematics) Source: https://en.wikipedia.org/wiki/Matrix_(mathematics)?oldid=769419035 Contributors: AxelBoldt, Tarquin,
Tbackstr, Hajhouse, XJaM, Ramin Nakisa, Stevertigo, Patrick, Michael Hardy, Wshun, Cole Kitchen, SGBailey, Chinju, Zeno Gantner,
Dcljr, Ejrh, Looxix~enwiki, Muriel Gottrop~enwiki, Angela, Poor Yorick, Rmilson, Andres, Schneelocke, Charles Matthews,
Dysprosia, Jitse Niesen, Lou Sander, Dtgm, Bevo, J D, Francs2000, Robbot, Mazin07, Sander123, Chrism, Fredrik, R3m0t, Gandalf61,
MathMartin, Sverdrup, Rasmus Faber, Bkell, Paul Murray, Neckro, HaeB, Tea2min, Tosha, Giftlite, Jao, Arved, BenFrantzDale, Neto-
holic, Herbee, Dissident, Dratman, Michael Devore, Waltpohl, Duncharris, Macrakis, Utcursch, Alexf, Antandrus, MarkSweep, Profvk,
Wiml, Urhixidur, Sam nead, Azuredu, Barnaby dawson, Porges, PhotoBox, Shahab, Rich Farmbrough, FiP, ArnoldReinhold, Pavel Voze-
nilek, Paul August, Bender235, ZeroOne, El C, Rgdboer, JRM, NetBot, The strategy freak, La goutte de pluie, Obradovic Goran, Mdd,
Tsirel, LutzL, Landroni, Jumbuck, Jigen III, Alansohn, ABCD, Fritzpoll, Wanderingstan, Mlm42, Jheald, Simone, RJFJR, Dirac1933,
AN(Ger), Adrian.benko, Oleg Alexandrov, Nessalc, Woohookitty, Igny, LOL, Webdinger, David Haslam, UbiquitousUK, Username314,
Tabletop, Waldir, Prashanthns, Mandarax, Qwertyus, SixWingedSeraph, Grammarbot, Porcher, Sjakkalle, Koavf, Salix alba, Joti~enwiki,
Watcharakorn, SchuminWeb, Old Moonraker, RexNL, Jrtayloriv, Krun, Fresheneesz, Srleer, Vonkje, Masnevets, NevilleDNZ, Chobot,
Krishnavedala, Karch, DVdm, Bgwhite, YurikBot, Wavelength, Borgx, RussBot, Michael Slone, Bhny, NawlinWiki, Rick Norwood,
Jfheche, 48v, Bayle Shanks, Jimmyre, Misza13, Samuel Huang, Merosonox, DeadEyeArrow, Bota47, Glich, Szhaider, Ms2ger, Jezz-
abr, Leptictidium, Mythobeast, Spondoolicks, Alasdair, Lunch, Sardanaphalus, SmackBot, RDBury, CyclePat, KocjoBot~enwiki, Jagged
85, GoonerW, Minglai, Scott Paeth, Gilliam, Skizzik, Saros136, Chris the speller, Optikos, Bduke, Silly rabbit, DHN-bot~enwiki, Colonies
Chris, Darth Panda, Scwlong, Foxjwill, Can't sleep, clown will eat me, Smallbones, KaiserbBot, Rrburke, Mhym, SundarBot, Jon Aw-
brey, Tesseran, Aghitza, The undertow, Lambiam, Wvbailey, Attys, Nat2, Cronholm144, Terry Bollinger, Nijdam, Aleenf1, IronGar-
goyle, Jacobdyer, WhiteHatLurker, Beetstra, Kaarebrandt, Mets501, Neddyseagoon, Dr.K., P199, MTSbot~enwiki, Quaeler, Rschwieb,
Levineps, JMK, Tawkerbot2, Dlohcierekim, DKqwerty, Dan1679, Propower, CRGreathouse, CBM, JohnCD, INVERTED, SelfStudyBuddy, HalJor, MC10, Pascal.Tesson, Bkgoodman, Alucard (Dr.), Juansempere, Codetiger, Bellayet, Epbr123, Paragon12321, Markus Pössel, Aeriform, Gamer007, Headbomb, Marek69, RobHar, Urdutext, AntiVandalBot, Lself, Jj137, Hermel, Oatmealcookiemon, Dhrm77, JAnDbot, Fullverse, MER-C, The Transhumanist, Yanngerotin~enwiki, Bennybp, VoABot II, Fusionmix, T@nn, JNW,
Jakob.scholbach, Rivertorch, EagleFan, JJ Harrison, Sullivan.t.j, David Eppstein, User A1, ANONYMOUS COWARD0xC0DE, JoergenB,
Philg88, Nevit, Hbent, Gjd001, Doccolinni, Yodalee327, R'n'B, Alfred Legrand, J.delanoy, Rlsheehan, Maurice Carbonaro, Richard777,
Wayp123, Toghrul Talibzadeh, Aqwis, It Is Me Here, Cole the ninja, TomyDuby, Peskydan, AntiSpamBot, JonMcLoone, Policron, Doug4,
Fylwind, Kevinecahill, Ben R. Thomas, CardinalDan, OktayD, Egghead06, X!, Malik Shabazz, UnicornTapestry, Shiggity, VolkovBot,
Dark123, JohnBlackburne, LokiClock, VasilievVV, DoorsAjar, TXiKiBoT, Hlevkin, Rei-bot, Anonymous Dissident, D23042304, PaulTa-
nenbaum, LeaveSleaves, BigDunc, Wolfrock, Surajx, Wdrev, Brianga, Dmcq, KjellG, AlleborgoBot, Symane, Anoko moonlight, W4chris,
Typoer, Neparis, T-9000, D. Recorder, ChrisMiddleton, GirasoleDE, Dogah, SieBot, Ivan tambuk, Bachcell, Gerakibot, Cwkmail,
Yintan, Radon210, Elcobbola, Blueclaw, Paolo.dL, Oxymoron83, Ddxc, Oculi, Manway, AlanUS, Anchor Link Bot, Rinconsoleao, Denis-
arona, Canglesea, Myrvin, DEMcAdams, ClueBot, Sural, Wpoely86, Remag Kee, SuperHamster, LizardJr8, Masterpiece2000, Excirial,
Da rulz07, Bender2k14, Ftbhrygvn, Muhandes, Brews ohare, Tyler, Livius3, Jotterbot, Hans Adler, Manco Capac, MiraiWarren, Qwfp,
Johnuniq, TimothyRias, Lakeworks, XLinkBot, Marc van Leeuwen, Rror, AndreNatas, Jaan Vajakas, Porphyro, Stephen Poppitt, Addbot,
Proofreader77, Deepmath, RPHv, Steve.jaramillov~enwiki, WardenWalk, Jccwiki, CactusWriter, Mohamed Magdy, MrOllie, Tide rolls,
Gail, Jarble, CountryBot, LuK3, Luckas-bot, Yobot, Senator Palpatine, QueenCake, TestEditBot, AnomieBOT, Autarkaw, Gazzawi, Ar-
chon 2488, IDangerMouse, MattTait, Kingpin13, Materialscientist, Citation bot, Wrelwser43, LilHelpa, FactSpewer, Xqbot, Capricorn42,
Drilnoth, HHahn, El Caro, BrainFRZ, J04n, Nickmn, RibotBOT, Cerniagigante, Smallman12q, WaysToEscape, Much noise, LucienBOT,
Tobby72, VS6507, Recognizance, Sławomir Biały, Izzedine, IT2000, HJ Mitchell, Sae1962, Jamesooders, Cafreen, Citation bot 1, Swordsmankirby, I dream of horses, Kiefer.Wolfowitz, MarcelB612, NoFlyingCars, RedBot, RobinK, Kallikanzarid, Jordgette, ItsZippy, Vairoj,
Rentzepopoulos, SeoMac, MathInclined, The last username left was taken, Earthandmoon, Birat lamichhane, Katovatzschyn, Soupjvc,
Sfbaldbear, Salvio giuliano, Cowpig, Mandolinface, EmausBot, Lkh2099, Nurath224, Primefac, DesmondSteppe, RIS cody, Slawekb,
Gclink, Quondum, Chocochipmun, Jadzia2341, U+003F, Rcorcs, Maschen, Babababoshka, Adjointh, Donner60,
Pun, JFB80, Anita5192, Petrb, Mikhail Ryazanov, ClueBot NG, Wcherowi, Michael P. Barnett, Rtucker913, Satellizer, Rank Penguin,
Tyrantbrian, Frietjes, Dsperlich, Helpful Pixie Bot, Rxnt, Christian Matt, MarcoPotok, BG19bot, Wiki13, Muscularmussel, MusikAnimal, JMtB03, Brad7777, Ren Vpenk, Sofia karampataki, BattyBot, Freesodas, IkamusumeFan, Lucaspentzlp, OwenGage, Enterprisey,
Dexbot, Mark L MacDonald, Numbermaniac, Frosty, Gordino110, JustAMuggle, Reatlas, Acetotyce, Debouch, Wamiq, Ugog Nizdast,
Zenibus, SwimmerOfAwesome, Jianhui67, OrthogonalFrog, Albert Maosa, Airwoz, Derpghvdyj, Mezafo, Botha42, CarnivorousBunny,
Xxhihi, Fzzle, Sordin, Username89911998, Gronk Oz, Hidrolandense, Ansathas, Kellywacko, Frost.derec, Norbornene, Solid Frog, Lo-
raof, Cleaner 1, JArnold99, Anson Law Sum Kiu, Mutantoe, Kavya l, Graboy, Minima2014, Mikeloud, H.dryad, Yrtnasd, Skpandey12,
Kavyalat9, Fmadd, Pictomania, Chuckwoodjohn, Nickfury95, Tompop888, CarlosGonz27 and Anonymous: 659
Eigenvalues and eigenvectors Source: https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors?oldid=768140060 Contributors: Tar-
quin, Gareth Owen, Tomo, Stevertigo, Patrick, Michael Hardy, Chris-martin, Cyde, Delirium, Stevenj, Dcoetzee, Dysprosia, Jitse Niesen,
Patrick0Moran, Bevo, McKay, Spikey, Shizhao, Nickshanks, Aliekens, Robbot, Josh Cherry, Moriori, Schutz, Gak, Lowellian, Gandalf61,
Timrollpickering, Sunray, Dmn, Tea2min, Adam78, Denwid, Ancheta Wis, Connelly, Giftlite, Nichalp, Haeleth, BenFrantzDale, Dratman, Kmote, Bovlb, Zinnmann, Jason Quinn, Jorge Stolfi, LucasVB, DragonflySixtyseven, Pmanderson, Almit39, Edsanville, Fintor, Mike
Rosoft, Natrij, Lone Isle, Iainscott, ObsessiveMathsFreak, Rspeer, Mattrix, ZeroOne, Zaslav, Kjoonlee, Kimbly, Pedant, Eric Forste,
Brian0918, Pt, Rgdboer, Circeus, Billymac00, Kappa, Giraffedata, Scott Ritchie, Blotwell, Rajah, Deryck Chan, Crust, Jakew, Varuna,
Landroni, Thornn~enwiki, Jrme, Alansohn, Jhertel, Kanie, Keenan Pepper, Katefan0, LordViD, Jefromi, Jheald, RJFJR, Reaverdrop,
Itsmine, Forderud, Oleg Alexandrov, Soultaco, Woohookitty, Shreevatsa, Igny, LOL, Dmazin, StradivariusTV, Guardian of Light, BillC,
Ruud Koot, HcorEric X, Jok2000, Tabletop, Male1979, Plrk, Waldir, Agthorr, Wayward, Jbarta, Marudubshinki, Magister Mathemati-
cae, BD2412, Abstracte, FreplySpang, Yurik, Rjwilmsi, Tyraios, NatusRoma, MarSch, Salix alba, Somesh, HappyCamper, Bdegfcunbbfv,
Boris Alexeev, Titoxd, Zylinder~enwiki, Mathbot, JYOuyang, TheDJ, Saketh, Fresheneesz, TeaDrinker, Tatpong~enwiki, Kri, Chobot,
21.9. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES 155
Chrisbaird.ma, Bgwhite, Metaeducation, JPD, Rmbyoung, YurikBot, Wavelength, Mushin, Hairy Dude, Dmharvey, Michael Slone, Conscious, Hede2000, JabberWok, Markus Schmaus, Dotancohen, Grubber, Gaius Cornelius, NawlinWiki, Rick Norwood, Edinborgarstefan, Buster79, TVilkesalo~enwiki, Yahya Abdal-Aziz, Taco325i, Joelr31, Cleared as filed, Capagot, Orizon, Dhollm, Vb, TDogg310,
DanBri, Crasshopper, Chichui, Ms2ger, Deeday-UK, NormDor, Bobguy7, JahJah, Kier07, Stevelinton, Adpadu, Kungfuadam, Phyrexi-
caid, Mebden, Williampoetra, GunnerJr, Amberrock, Finell, Luk, Sardanaphalus, SmackBot, Nihonjoe, Incnis Mrsi, TheRealInsomnius,
Bjelleklang, Jtwdog, InvictaHOG, Eskimbot, Oxygene123, Shai-kun, JeeAlex, Commander Keane bot, Gilliam, Jcarroll, Hitman012,
Davigoli, AhmedHan, PabloE, Oli Filth, Silly rabbit, Complexica, Adpete, Nbarth, DHN-bot~enwiki, Colonies Chris, Hongooi, Adam-
Smithee, Javalenok, Aacool, Ntjohn, Kjetil1001, RProgrammer, Riteshsood, LkNsngth, Napalm Llama, Trifon Triantallidis, Michael-
Billington, Lacatosias, Yoshigev, Gbnogkfs~enwiki, Cotterr2, J. Finkelstein, Moala, Severoon, Sebastian Klein, WhiteHatLurker, Hiiiiiiiii-
iiiiiiiiiiii, Hetar, Simon12, Iridescent, JMK, Tawkerbot2, Tiny green, Vaughan Pratt, Ahmad.tachyon, CRGreathouse, Laplacian, A civilian,
Mcstrother, Szabolcs Nagy, DavidFHoughton, Tac-Tics, Myasuda, Safalra, Mct mht, Cydebot, Benzi455, Skittleys, Christian75, Editor at
Large, Xantharius, Repliedthemockturtle, Rbanzai, Thijs!bot, Wikid77, Headbomb, Rlupsa, Marek69, Gimbeline, Nick Number, Urdutext, Luna Santin, Ste4k, Danger, LaQuilla, Etr52, Daytona2, Erxnmedia, JAnDbot, Dr. Nobody, Wootery, Sanchom, Jeff560, Thenub314,
VoABot II, Albmont, Jakob.scholbach, Rich257, Baccyak4H, Crunchy Numbers, Not a dog, MetsBot, Dirac66, David Eppstein, User
A1, ANONYMOUS COWARD0xC0DE, JaGa, GermanX, Ekwity, Dwwaddell, Sameerkale, Dima373, Bissinger, R'n'B, Johnpacklam-
bert, Leyo, Maurice Carbonaro, Yonidebot, Wayp123, Smite-Meister, Lantonov, TomyDuby, Bknittel, Jayden54, Haseldon, Veganaxos,
Lzyvzl, Policron, 83d40m, STBotD, DorganBot, Maziar.irani, PatriciaJH, H1voltage, Izno, Idioma-bot, Error9312, Larryisgood, The Duke
of Waltham, JohnBlackburne, LokiClock, Butwhatdoiknow, Hydrogravity, Gdorner, Oshwah, Like.liberation, Red Act, Rei-bot, Josp-
mathilde, Dependent Variable, Ttennebkram, Reddevyl, Zapurva, LBehounek, TerraNova whatcanidotomakethisnottoosimilartosomeoth-
ername, Aarongeller, Ajto8, Anoko moonlight, StevenJohnston, SieBot, Philgoetz, Ichakrab, Cwkmail, Flyer22 Reborn, Paolo.dL, Moon-
raker12, M4ry73, AlanUS, Anchor Link Bot, Tesi1700, Randomblue, Lalahuma, Curtdbz, Kanonkas, Madanor, ClueBot, Justin W Smith,
Plastikspork, JuPitEer, Mild Bill Hiccup, Ellliottt, TypoBoy, Timhoooey, Rchandan, Bender2k14, Muhandes, Jakarr, Arjayay, Blaner-
hoads, Kausikghatak, Urtis~enwiki, AlexCornejo, Martyulrich, Acabashi, Rubybrian, Alousybum, Crowsnest, Humanengr, Bochev, DumZ-
iBoT, One-eyed pirate, RexxS, Gtbanta, XLinkBot, Forbes72, Marc van Leeuwen, Jamesjhs, Porejide, D1ma5ad, Shishir0610, Luolimao,
MystBot, Gelsus~enwiki, Stephen Poppitt, Addbot, Fgnievinski, Protonk, EconoPhysicist, FiriBot, Favonian, OlEnglish, Zorrobot, TeH
nOmInAtOr, Wireless friend, Kevinj04, Yobot, Timeroot, Amirobot, CinchBug, Doleszki, AnomieBOT, Piano non troppo, Citation bot, Xelnx, Xqbot, Restu20, Raamaiden, Srich32977, The suffocated, GrouchoBot, RibotBOT, Alainr345, NinjaDreams, FrescoBot,
Fortdj33, Gwideman, Mxipp, Boyzindahoos, Sławomir Biały, NewEconomist, Tharthan, Tkuvho, I dream of horses, Kiefer.Wolfowitz, Cosmikz, MarcelB612, Hunter.moseley, Rohitphy, Foobarnix, MedicineMan555, Gunderburg, Pushkar3, Matthewmoncek, Ruslan Sharipov,
Dinamik-bot, David Binner, Skakkle, Xnn, Kedwar5, TjBot, 123Mike456Winston789, Mandolinface, EmausBot, WikitanvirBot, Joseph-
Catrambone, KHamsun, Dcirovic, Slawekb, Thecheesykid, Ouzel Ring, Chire, A930913, Quondum, AManWithNoPlan, Ms2756, Frank-
Flanagan, Anita5192, Mikhail Ryazanov, ClueBot NG, Saburr, Dfsisson, Jj1236, Frietjes, TreyGreer62, 336, Widr, Helpful Pixie Bot,
Rxnt, Curb Chain, Bibcode Bot, BG19bot, Prof McCarthy, JinPan, Krucraft, GlaedrH, Hankel operator, Rushiagr, Trombonechamp,
Zedshort, JMtB03, Brad7777, Winston Trechane, Thegreatgrabber, Kalmiopsiskid, DarafshBot, Sschongster, ChrisGualtieri, Dexbot, JurgenNL, Mark L MacDonald, Colonel angel, Mogism, JeffAEdmonds, Amgtech, Frizzil, Jonex, Indronil Ghosh, Manixer, MartCMdeJong,
Tator2, Abottelli, EFZR090440, Popa910, DavidLeighEllis, Qiangshiweiguan, Schismata, Nigellwh, Aupi, Sherif helmy, Xdever, Lepfernandes, Pscrape, Ekman, Menthoaltum, CanuckMonkey, GregBrimble, Monkbot, Trackteur, Ndv79, Verdana
Bold, Pratapraj11pawar, IchiroSuzuki51, Purgy Purgatorio, Loraof, Sudaharan, Carlostp12, User0x539, Wf6humboldt, VexorAbVikip-
dia, Monchisan, Csa slb, Maja123321, Denofs8, Zyvov, Yangyang0117, Sudoer41, Juz bcoz im prem, Dataclysm, Sjarrad, Fmadd, TheRe-
alPascalRascal, L8 ManeValidus, PattyLocke and Anonymous: 568
Positive-definite matrix Source: https://en.wikipedia.org/wiki/Positive-definite_matrix?oldid=768525835 Contributors: AxelBoldt,
Shd~enwiki, Torfason, Michael Hardy, Wshun, Dan Koehl, Cyp, Stevenj, Charles Matthews, Dcoetzee, Jitse Niesen, Phys, Josh Cherry,
MathMartin, Elusus, Tea2min, Giftlite, Fropuff, Jorge Stolfi, TedPavlic, Mattrix, Bender235, Floorsheim, Pt, El C, Erik456, O18, Hesperian, Blahma, PAR, Sean3000, Cburnett, Jheald, Forderud, Simetrical, Eclecticos, Btyner, Qwertyus, Sj, Strait, Kevmitch, FlaBot, Don
Gosiewski, Sodin, Chobot, Algebraist, YurikBot, Wavelength, Syth, Bruguiea, Crasshopper, Eli Osherovich, Lunch, SmackBot, Maksim-
e~enwiki, Eskimbot, Cabe6403, Njerseyguy, Drewnoakes, Nbarth, Svein Olav Nyberg, Kjetil1001, Lambiam, Tim bates, Breno, Lilily,
CRGreathouse, Myasuda, Mct mht, Thijs!bot, Lfscheidegger, LachlanA, Ben pcc, JAnDbot, MER-C, Wootery, Stangaa, Magioladitis,
JamesBWatson, Cyktsui, MetsBot, Americanhero, Tercer, Mythealias, Leyo, Maurice Carbonaro, Nathanshao, Policron, Ratfox, DavidIMcIntosh, Tomtheebomb, NathanHagen, PaulTanenbaum, Kkilger, Philmac, Daviddoria, AlleborgoBot, Hsbhat, PeterBFZ, Andrés Catalán,
Yahastu, Sharov, Skippydo, DesolateReality, Yoda of Borg, Jdgilbey, Wcy~enwiki, Bender2k14, Muhandes, Bluemaster, Qwfp, Dinge-
nis, Gjnaasaa, Job Inkop~enwiki, Tayste, Addbot, Cst17, Dr. Universe, PV=nRT, Luckas-bot, Yobot, Nghtwlkr, Nimrody, Legendre17,
AnomieBOT, Joule36e5, Materialscientist, ArthurBot, Bdmy, Airalcorn2, Raamaiden, Mardebikas, Pupdike, Sławomir Biały, Vineethkuruvath, Haein45, Avarela1965, MastiBot, Dzlot, Dividingbyzerofordummies, Ashis.csedu, Toolnut, Pfm77, Begoon, Duoduoduo, Suffusion of Yellow, Xnn, Wyverald, EmausBot, Wisapi, GoingBatty, Felix Homann, Chaohuang, Osociety, Jadzia2341, Wayne Slam, Zaran,
Joao Meidanis, 1, Maschen, Akseli.palen, ClueBot NG, Est nomis, Fioravante Patrone, Joel B. Lewis, Helpful Pixie Bot, Lubdone,
BG19bot, Dvomedo, Solomon7968, Intervallic, Manoguru, Msinvent, ChrisGualtieri, YFdyh-bot, Preston Kemeny, Davidcy123, Egorlar-
ionov, Frytvm, Digiuno, Webclient101, Limit-theorem, HEKrogstad, GoplaWHya, The Disambiguator, Brownerthanu, Loraof, Srinivas
tudelft, Rangdor, Rdsk2014, H1729, Fmadd, Ben300694, Jarlesog and Anonymous: 176
Cambridge University Press Source: https://en.wikipedia.org/wiki/Cambridge_University_Press?oldid=766638248 Contributors: Vicki
Rosenzweig, Ianp, Olivier, Rbrwr, Michael Hardy, Lquilter, Ronz, Notheruser, Dimadick, Pigsonthewing, Postdlf, Geogre, Giftlite, Lupin,
Duncharris, Solipsist, Stevietheman, Alexf, BozMo, Piotrus, Icairns, Jayjg, Bluap, Dirac1933, Pcpcpc, Woohookitty, Xover, David Haslam,
Tim!, Chirags, FlaBot, Nihiltres, SouthernNights, NekoDaemon, Gdrbot, Adoniscik, The Rambling Man, Mark Ironie, Daniel Mietchen,
Jpbowen, Number 57, Nikkimaria, SMcCandlish, SmackBot, JK23, Sebesta, Chris the speller, MalafayaBot, Bazonka, Ste, Andrei Stroe,
JzG, Prof02, Simongraham, Hu12, Colonel Warden, Cydebot, Danrok, Lo2u, Malleus Fatuorum, Julia Rossi, Geniac, Magioladitis, The
Anomebot2, Matt Lewis, Kathrynbooth, Axlq, It Is Me Here, Krishnachandranvn, Robertson-Glasgow, Pjv7ex, Djr13, PacificWonderland, GrahamHardy, Hugo999, Nikthestunned, Deor, Shortride, Mrh30, TXiKiBoT, Guillaume2303, JhsBot, Broadbot, Tamorlan, Irfan82, Noveltyghost, SE7, SimonTrew, Int21h, Svick, Calatayudboy, ClueBot, Testu, CharlieRCD, Alexbot, Adimovk5, Versus22, DumZiBoT, RMFan1, Captain108, BarretB, Gerhardvalentin, Addbot, Betterusername, Fgnievinski, Numbo3-bot, Kicior99, Lightbot, Legobot,
Luckas-bot, Yobot, Amirobot, We66er, Fosterjjj, Johnmanj, AnomieBOT, Momoricks, Theovetes, Galoubet, Tmwns, XYGuy, Mud-
snakezim, Glappelle, Eric Blatant, Miesianiacal, Omnipaedista, Dfhehuii, Fianara, Aaditya 7, Dorecou, Orphdrug, McAnt, FrescoBot,
OMcSpin, Filosophy, Moonraker, MondalorBot, CLC Editorial, Lotje, Leto 78, Spdiy, Cmdcam01, RjwilmsiBot, Shhhnotsoloud, Ja-
son86~enwiki, ZroBot, Solus ipse Inc., Philafrenzy, Iketsi, Spicemix, ClueBot NG, Helpful Pixie Bot, Chilliwow, Churchway, BG19bot,
Aliwal2012, YFdyh-bot, Khazar2, Mogism, Dads Knife, Rupert loup, Randykitty, A7592, Fuzzy mongoose, Acjones49, ArcticTree, Za-
cwill, Michaelthelamb, Narky Blert, Berlidam Dergilk, Jtrrs0, KasparBot, Spyros 78, BU Rob13, Permstrump, VernonF, Nguyenthibiet,
Quantum state Source: https://en.wikipedia.org/wiki/Quantum_state?oldid=768300822 Contributors: RTC, Michael Hardy, Julesd, An-
dres, Laussy, Patrick0Moran, Bevo, BenRG, Bkalafut, Rorro, Papadopc, Tea2min, Giftlite, MathKnight, MichaelHaeckel, CSTAR, H
Padleckas, Elroch, Mschlindwein, Chris Howard, Freakofnurture, Hidaspal, Slipstream, Bender235, Giraffedata, Geschichte, Alansohn,
Cortonin, Dan East, Ott, Woohookitty, Mpatel, Dzordzm, Colin Watson, Rjwilmsi, Mathbot, Margosbot~enwiki, Fresheneesz, Bgwhite,
Wavelength, RobotE, Bambaiah, Agent Foxtrot, Hydrargyrum, PoorLeno, Larsobrien, Modify, Sbyrnes321, A13ean, Incnis Mrsi, Ptpare,
Jutta234, Physis, Erwin, CapitalR, Petr Matas, BeenAroundAWhile, Mct mht, Phatom87, Dragons Blood, Waxigloo, Thijs!bot, Colincmr,
Headbomb, Second Quantization, Iviney, Eleuther, Bizzon, Magioladitis, Tercer, B. Wolterding, R'n'B, Hans Dunkelberg, Maurice Car-
bonaro, ARTE, Hulten, Sheliak, VolkovBot, LokiClock, Kinneytj, Thurth, TXiKiBoT, V81, Spinningspark, Kbrose, YohanN7, SieBot,
Phe-bot, Jdcaneld, OKBot, Denisarona, Randy Kryn, StewartMH, ClueBot, Alksentrs, EoGuy, Rockfang, SchreiberBike, The-tenth-
zdog, TimothyRias, Dragon, SilvonenBot, RealityDysfunction, Porphyro, Stephen Poppitt, Addbot, Bob K31416, Luckas-bot, Yobot,
JTXSeldeen, AnomieBOT, Gtz, Xqbot, Pvkeller, J04n, GrouchoBot, Omnipaedista, Nathanielvirgo, Waleswatcher, WaysToEscape,
Chjoaygame, FrescoBot, Freddy78, Steve Quinn, Machine Elf 1735, Oxonienses, RedBot, RobinK, BasvanPelt, Heurisko, Lotje, Ea-
gleclaw6, RjwilmsiBot, Pierluigi.taddei, EmausBot, John of Reading, Gaurav biraris, Solomonfromnland, Harddk, Josve05a, Zephyrus
Tavvier, Maschen, Xronon, ClueBot NG, MelbourneStar, Theopolisme, Helpful Pixie Bot, Bibcode Bot, BG19bot, F=q(E+v^B), Ganitvidya, DrBugKiller, Chetan666, Jochen Burghardt, W. P. Uzer, Noix07, 7Sidz, Monkbot, Cpt Wise, Pratixit, AliShug, Tyttcfm, InternetArchiveBot, étale.cohomology, AlterHollow, GreenC bot, Bender the Bot and Anonymous: 80
21.9.2 Images
File:AAMarkov.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/70/AAMarkov.jpg License: Public domain Contributors: Russian-language publication, 1964. Original artist: Unknown (wikidata:Q4233718)
File:ASA_conference_2008_-_13.JPG Source: https://upload.wikimedia.org/wikipedia/commons/5/57/ASA_conference_2008_-_13.
JPG License: CC BY-SA 3.0 Contributors: Own work (taken by myself) Original artist: myself (User:Piotrus)
File:Area_parallellogram_as_determinant.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/Area_parallellogram_
as_determinant.svg License: Public domain Contributors: Own work, created with Inkscape Original artist: Jitse Niesen
File:Bloch_Sphere.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f4/Bloch_Sphere.svg License: CC BY-SA 3.0 Con-
tributors: Own work Original artist: Glosser.ca
File:CUPPress2.jpg Source: https://upload.wikimedia.org/wikipedia/en/a/a6/CUPPress2.jpg License: CC-BY-SA-3.0 Contributors:
I McAnt created this work entirely by myself.
Original artist:
McAnt (talk)
File:CamPress1.jpg Source: https://upload.wikimedia.org/wikipedia/en/a/a2/CamPress1.jpg License: CC-BY-SA-3.0 Contributors:
I McAnt created this work entirely by myself.
Original artist:
McAnt (talk)
File:Cambridge_University_Press_Letters_Patent.jpg Source: https://upload.wikimedia.org/wikipedia/en/8/8b/Cambridge_
University_Press_Letters_Patent.jpg License: Public domain Contributors: ? Original artist: ?
File:Cambridge_University_Press_logo.svg Source: https://upload.wikimedia.org/wikipedia/en/1/11/Cambridge_University_Press_
logo.svg License: Fair use Contributors:
The logo is from the Annual Report for the year ending 30 April 2010 on cambridge.org website. Original artist: ?
File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: PD Contributors: ? Origi-
nal artist: ?
File:Determinant_example.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a7/Determinant_example.svg License: CC
BY-SA 3.0 Contributors: Own work Original artist: Krishnavedala
File:Diagram_for_spin_dynamics.png Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Diagram_for_spin_dynamics.
png License: CC BY-SA 4.0 Contributors: Own work Original artist: Brunolucatto
File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The
Tango! Desktop Project. Original artist:
The people from the Tango! project. And according to the meta-data in the file, specifically: Andreas Nilsson, and Jakub Steiner (although
minimally).
File:Eigenfaces.png Source: https://upload.wikimedia.org/wikipedia/commons/6/67/Eigenfaces.png License: Attribution Contributors: ?
Original artist: ?
File:Eigenvalue_equation.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/58/Eigenvalue_equation.svg License: GFDL
Contributors: This vector image was created with Inkscape. Original artist: Lyudmil Antonov Lantonov 16:35, 13 March 2008 (UTC)
File:Eigenvectors.gif Source: https://upload.wikimedia.org/wikipedia/commons/0/06/Eigenvectors.gif License: Public domain Contributors: Own work Original artist: Kieff
File:Ellipse_in_coordinate_system_with_semi-axes_labelled.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8e/
Ellipse_in_coordinate_system_with_semi-axes_labelled.svg License: CC BY-SA 3.0 Contributors: Own work Original artist:
Jakob.scholbach
File:Felix_Bloch,_Stanford_University.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/0b/Felix_Bloch%2C_
Stanford_University.jpg License: CC BY 3.0 Contributors: Stanford News Service Original artist: Stanford University / Courtesy Stanford
News Service
File:Felix_Bloch_1950s.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9c/Felix_Bloch_1950s.jpg License: Public do-
main Contributors: http://www.gettyimages.co.uk/detail/news-photo/portrait-of-swiss-physicist-felix-bloch-leaning-on-a-news-photo/
158744802 Original artist: Unknown (Mondadori Publishers)
File:Finance_Markov_chain_example_state_space.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/95/Finance_
Markov_chain_example_state_space.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Gareth Jones
File:Financial_Markov_process.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a6/Financial_Markov_process.svg
License: CC BY-SA 3.0 Contributors: Own work Original artist: Gareth Jones
File:Flag_of_Brazil.svg Source: https://upload.wikimedia.org/wikipedia/en/0/05/Flag_of_Brazil.svg License: PD Contributors: ? Origi-
nal artist: ?
File:Flip_map.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3f/Flip_map.svg License: CC BY-SA 3.0 Contributors:
derived from File:Rotation_by_pi_over_6.svg Original artist: Jakob.scholbach
File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-by-
sa-3.0 Contributors: ? Original artist: ?