
Quantum Mechanics: bits and pieces

Notes by Sergei Winitzki


DRAFT July 18, 2005
Contents

1 Overview
  1.1 Relation of quantum mechanics to the rest of physics
  1.2 Outline of quantum mechanics

2 Hilbert spaces and operators
  2.1 Abstract vector spaces
    2.1.1 Motivation
    2.1.2 Notation
    2.1.3 Definition: complex vector space
  2.2 Examples of vector spaces
    2.2.1 Finite-dimensional spaces
    2.2.2 Infinite-dimensional spaces
  2.3 Spaces with scalar product
    2.3.1 Motivation
    2.3.2 Definition of Hermitian scalar product
    2.3.3 Examples
  2.4 Dual space. Bra and ket vectors
    2.4.1 Motivation
    2.4.2 Definition of dual space
    2.4.3 Examples
    2.4.4 Correspondence between ket and bra vectors
    2.4.5 Example of the ket-bra correspondence
  2.5 Hilbert spaces
    2.5.1 Orthonormal bases in finite dimensions
    2.5.2 Infinite bases, completeness, and separability
    2.5.3 Definition and main examples of Hilbert spaces
    2.5.4 Useful properties of Hilbert spaces
  2.6 Operators and tensor products
    2.6.1 Linear operators
    2.6.2 Eigenvectors and eigenvalues
    2.6.3 Decomposition of unity
    2.6.4 Hermitian conjugation
    2.6.5 Properties of Hermitian operators
    2.6.6 Tensor products
  2.7 Life outside of Hilbert space
    2.7.1 Definition of subspace
    2.7.2 Examples of subspaces
    2.7.3 Generalized vectors: motivation
    2.7.4 Generalized vectors: definition and examples
    2.7.5 Operators defined only on a subspace
    2.7.6 Dense subspaces
1 Overview
1.1 Relation of quantum mechanics to the rest
of physics
Physics is an application of mathematics to the approximate description of observable (measurable) phenomena in the inanimate world. Physical theories can be classified in many ways. For instance, one could suggest the following dichotomies:

1. Whether the theory assumes a perfect (complete) knowledge of the state of physical systems, i.e. whether a certain mathematical object is supposed to completely describe the internal state of the entire system. Theories that allow incomplete knowledge are summarily called statistical physics. Quantum mechanics (QM) does assume perfect knowledge. The extension of QM to describe incomplete knowledge is quantum statistical mechanics.

2. Whether the theory claims (in principle) to predict observable phenomena with certainty. If yes, one may call the theory deterministic. Since QM predicts only probabilities for observing particular measurement outcomes, it is a non-deterministic theory in this sense. (Examples of deterministic theories are: Newton's mechanics, Maxwell's electrodynamics, Einstein's general relativity.)

3. Whether the theory describes the properties of infinitely localized objects (point-like particles) or objects that fill the entire space (fields). Quantum mechanics (in the usual sense) describes only point-like particles. The extension of QM to fields is called quantum field theory.

4. Whether the theory contains a notion of fixed space and time in which systems are located and events take place. In its usual formulation, QM describes events in a fixed spacetime. In contrast, Einstein's general relativity describes spacetime itself as a dynamical object (the gravitational field). An extension of QM to gravitation is not yet completely developed.
1.2 Outline of quantum mechanics
The formalism of quantum mechanics is based on the theory of linear operators in Hilbert spaces, which is a branch of functional analysis. Predictions for measurement outcomes are obtained from a rather complicated set of calculational procedures. The interpretation of these procedures is far from intuitive. Only the spectacular experimental success of QM has persuaded physicists that the quantum-mechanical formalism is valid. Compared with classical mechanics, the formalism is far more abstract and convoluted, to such an extent that the standard theoretical physics curriculum cannot afford the time needed to fully study the mathematical foundations of QM. In this text, many mathematical subtleties will be ignored for lack of time.
Recall that in classical (Newtonian) mechanics, the position of a point-like object (called a particle) is described by coordinates q(t) which are functions satisfying certain differential equations (equations of motion). The standard physical problem is to compute the coordinates at time t_2, given the initial coordinates q and velocities q̇ = dq/dt at time t_1. The answer is obtained by solving the equations of motion with given initial conditions. In this way, one assumes perfect knowledge of the initial state of the system, q(t_1), q̇(t_1), and completely and uniquely determines the measurement outcome q(t_2), q̇(t_2).
Experiments with atoms and subatomic particles gradually led to an understanding that Newtonian mechanics is inadequate for describing the motion of very small objects. The Newtonian description assumes that a particle always follows a particular trajectory q(t); this assumption conflicts with observations such as the double-slit experiment. Precise experiments show that elementary particles such as photons or electrons generally do not follow well-defined trajectories. One of the basic facts that refutes classical physics is the Heisenberg uncertainty principle, qualitatively formulated as "the position and the velocity of a particle cannot be exactly defined at the same time." This is in direct contradiction with the Newtonian formalism, where the function q(t) is a complete representation of the state of the particle and therefore the velocity q̇(t_0) is uniquely and exactly determined together with the position q(t_0) at any time t_0.
The mathematical formalism capable of describing the Heisenberg uncertainty relation can be motivated by considering the properties of eigenvectors of linear transformations. Instead of the description of a particle in terms of the function q(t), one introduces the following mathematical construction:

• the state of the system is a vector (the "state vector") in some abstract vector space (not the position vector q(t) in the usual three-dimensional space);

• all observable quantities A, B, etc. (such as coordinates or velocities), are represented by certain transformations Â, B̂, etc., defined in that vector space;

• the observed value of a quantity A is always an eigenvalue of the corresponding transformation Â; and

• the observable A definitely has the value λ only if the state of the system is an eigenvector of Â with the eigenvalue λ.

If the transformations Â and B̂ have different eigenvectors, then it is quite possible that the system has a definite value of the observable A but no definite value of the observable B. Thus the formalism can describe the impossibility of measuring the coordinate and the velocity simultaneously.
The above is a (very rough) outline of the formalism of quantum mechanics. Additionally, QM gives a prescription for how to choose the vector space and the state vector that correspond to each particular physical experiment, and how to define the transformations Â, B̂, etc., that describe interesting physical observables. What remains is a mathematical problem of computing the eigenvalues and eigenvectors of particular linear operators. This problem is often quite complicated since the relevant vector spaces are infinite-dimensional. Special methods were developed for solving problems of this kind. Learning these methods and the underlying mathematical concepts is also an integral part of studying quantum mechanics. We start with a brief outline of the necessary facts of linear algebra.
2 Hilbert spaces and operators
2.1 Abstract vector spaces
2.1.1 Motivation
Familiar vector spaces, such as the two-dimensional plane or the three-dimensional Euclidean space, can be easily generalized to an n-dimensional space by considering n-tuples of coordinates (x_1, ..., x_n). However, physics also needs infinite-dimensional spaces, whose properties are more subtle. The first step is to define a vector space in a way that does not explicitly depend on the number of dimensions. (Note that we shall only consider vectors with complex coordinates.)
2.1.2 Notation
In physics, vectors are denoted variously by letters with arrows (v⃗), by boldface letters (v), by letters with indices (v_i or v^i), or by the bra/ket notation (the "Dirac notation") |v⟩. When unambiguous, one can also denote vectors by simple letters v. We shall mostly use the Dirac notation because it is conventional in quantum mechanics. Numbers (i.e. scalars) will be denoted by Greek letters, e.g. λ.
2.1.3 Definition: complex vector space

A complex vector space is first of all an abelian (commutative) group V with elements |v⟩ ∈ V and group operation denoted by |v⟩ + |w⟩ (addition of vectors); in other words, one can add and subtract vectors, and this operation is commutative. We may denote by 0 ∈ V the zero vector, i.e. the zero element of the additive group (this notation never causes confusion). For example, we write

    |v⟩ − |v⟩ = 0.    (1)

Secondly, the operation called multiplication of a vector by a scalar, λ|v⟩ ∈ V, is also defined. In other words, for any vector |v⟩ and for any complex scalar λ ∈ ℂ the vector λ|v⟩ is defined. These two operations, |v⟩ + |w⟩ and λ|v⟩, must be such that the following axioms hold (for all |v_k⟩ ∈ V and λ, μ ∈ ℂ):

    λ(|v_1⟩ + |v_2⟩) = λ|v_1⟩ + λ|v_2⟩,    (2)
    (λ + μ)|v⟩ = λ|v⟩ + μ|v⟩,    (3)
    λ(μ|v⟩) = (λμ)|v⟩,    (4)
    1 · |v⟩ = |v⟩.    (5)
Simple corollaries:

    |v⟩ = 1 · |v⟩ = (1 + 0)|v⟩ = |v⟩ + 0 · |v⟩  ⇒  0 · |v⟩ = 0 ∈ V;
    (−1)|v⟩ + |v⟩ = (−1 + 1)|v⟩ = 0  ⇒  (−1)|v⟩ = −|v⟩.

All these properties look natural, i.e. we are already familiar with such rules of vector algebra. All we have done so far is to formalize these rules as axioms. Any set satisfying these axioms is a vector space in which most statements of linear algebra hold.
2.2 Examples of vector spaces
The above definition does not specify a particular vector space; it merely gives a list of properties needed for an abelian group V to be a vector space. To verify that a particular set V with a particular definition of scalar multiplication is a vector space, one needs to check that the axioms (2)-(5) hold. Here are some examples.
2.2.1 Finite-dimensional spaces
The set of all n-tuples (z_1, ..., z_n) of complex numbers z_k is a complex vector space if one defines the multiplication by scalars as

    λ(z_1, ..., z_n) ≡ (λz_1, ..., λz_n).    (6)

This is the main example of a finite-dimensional space (this space has dimension n).

In any n-dimensional space there exists a basis of n vectors |e_1⟩, ..., |e_n⟩, such that any vector |v⟩ can be expressed as a linear combination

    |v⟩ = ∑_{j=1}^{n} v_j |e_j⟩,    (7)

where the v_j are complex numbers called the components of the vector |v⟩ in the basis |e_1⟩, ..., |e_n⟩. Once a basis is fixed, each vector |v⟩ is unambiguously represented by the set of its components v_j. Therefore, the vector space of n-tuples (z_1, ..., z_n) is a good model of any n-dimensional vector space.
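The n-tuple picture is easy to experiment with numerically. The following sketch (in Python with NumPy; the code and all names in it are illustrative additions, not part of these notes) checks the axioms (2)-(5) and the decomposition (7) for vectors of ℂ³ represented as arrays.

    # Illustrative sketch (not part of the notes): complex n-tuples as a vector space.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 3
    v1 = rng.normal(size=n) + 1j * rng.normal(size=n)   # a vector |v1> as an n-tuple
    v2 = rng.normal(size=n) + 1j * rng.normal(size=n)   # a vector |v2>
    lam, mu = 2.0 - 1.0j, 0.5 + 3.0j                     # scalars lambda and mu

    # The axioms (2)-(5) hold componentwise for n-tuples:
    assert np.allclose(lam * (v1 + v2), lam * v1 + lam * v2)   # (2)
    assert np.allclose((lam + mu) * v1, lam * v1 + mu * v1)    # (3)
    assert np.allclose(lam * (mu * v1), (lam * mu) * v1)       # (4)
    assert np.allclose(1 * v1, v1)                             # (5)

    # Decomposition (7) in the standard basis: the components v_j are just the entries.
    basis = np.eye(n, dtype=complex)                     # rows are |e_1>, ..., |e_n>
    assert np.allclose(sum(v1[j] * basis[j] for j in range(n)), v1)
    print("axioms (2)-(5) and decomposition (7) verified")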
2.2.2 Infinite-dimensional spaces

The examples below are all infinite-dimensional complex vector spaces. Intuitively, an infinite-dimensional vector space is made of vectors with infinitely many components.

1. The set of all infinite sequences (z_1, z_2, ...) of complex numbers is a complex vector space. The multiplication by scalars is defined similarly to Eq. (6).

2. Consider the set of infinite sequences (z_1, z_2, ...) restricted by one of the following conditions:

   (a) The coefficients z_j are such that the series ∑_{j=1}^{∞} |z_j|² converges.

   (b) The coefficients z_j are arbitrary except that z_117 = 0.

   (c) The coefficients z_j decay at large j (more precisely, lim_{j→∞} z_j = 0).

   (d) Only finitely many coefficients z_j are nonzero, i.e. the z_j all become zero for large enough j. More precisely: for a given sequence (z_1, z_2, ...) there must exist a number p such that z_j = 0 for all j ≥ p; the number p may be different for each sequence.

   Note: In each case, one obtains an infinite-dimensional vector space, but these spaces are all different! Two spaces are "the same," or isomorphic, if there exists a one-to-one linear map between them. In general, it is a subtle question to decide whether two infinite-dimensional spaces are isomorphic. We omit the proofs of these statements.
3. The set of all infinite matrices A_ij with complex coefficients. (Note: we do not need to multiply these infinite matrices, we only need to add them and multiply them element-wise by scalars.)

4. The set of all complex-valued functions f(x), where x is a real number within a range [a, b], is a complex vector space if one defines the multiplication by scalars in the natural way: the function f(x) is multiplied at every point by the number λ and the new function is λf(x).

5. Consider now the set of all complex-valued functions f(x) defined for x ∈ [a, b] and restricted by one of the following conditions:

   (a) The functions f(x) are continuous. The result is a vector space because the sum of two continuous functions is again continuous, and the product of a continuous function by a number is again a continuous function.

   (b) The functions f(x) are smooth (infinitely many times differentiable).

   (c) The functions f(x) are square-integrable, i.e. such that the following integral converges,

       ∫_a^b |f(x)|² dx < ∞.    (8)

   (d) The functions f(x) are square-integrable with a fixed weight function ρ(x), e.g.

       ∫_a^b |f(x)|² ρ(x) dx < ∞.    (9)

       For example, if ρ(x) = (x − a)², then functions f(x) that have a pole at x = a still belong to the space, whereas such functions are not square-integrable in the usual sense.

   (e) The functions f(x) are polynomials of finite but unknown degree, i.e. f(x) = f_0 + f_1 x + ... + f_n x^n with some coefficients f_n, where n can be different for each function f(x).

   (f) The functions f(x) are continuous and vanish at a prescribed point x_0, i.e. f(x_0) = 0. (The same point x_0 is used for all functions.)

   (g) The functions f(x) are analytic, i.e. the Taylor series of f(x) at any point x_0 ∈ [a, b] converges to f(x) within a neighborhood of x_0.

   It is easy to check that in each case one obtains an infinite-dimensional vector space. (These spaces are all different!)
Proof: Square-integrable functions are a vector space. As an example, we prove that the sum of two square-integrable functions is again square-integrable. If two complex-valued functions f(x) and g(x) are such that

    ∫ |f(x)|² dx < ∞,    ∫ |g(x)|² dx < ∞,    (10)

we need to show that

    ∫ |f(x) + g(x)|² dx < ∞.    (11)

This is proved by the following calculation,

    |f + g|² ≤ |f + g|² + |f − g|² = 2|f|² + 2|g|²,    (12)

therefore

    ∫ |f(x) + g(x)|² dx ≤ 2 ∫ |f(x)|² dx + 2 ∫ |g(x)|² dx < ∞.    (13)
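As a sanity check of the estimate (12)-(13), one can approximate the integrals numerically for arbitrarily chosen sample functions; the sketch below (Python/NumPy, an added illustration and not part of the notes) does this with a simple Riemann sum.

    # Illustrative sketch (not part of the notes): numerical check of Eqs. (12)-(13).
    import numpy as np

    x = np.linspace(0.0, 1.0, 10001)
    dx = x[1] - x[0]
    f = np.exp(2j * np.pi * x) / (0.1 + x) ** 0.25   # arbitrary square-integrable samples
    g = np.sin(5 * x) + 1j * x**2

    def norm2(h):
        # Riemann-sum approximation of  int_0^1 |h(x)|^2 dx
        return np.sum(np.abs(h) ** 2) * dx

    print(norm2(f + g), "<=", 2 * norm2(f) + 2 * norm2(g))
    assert norm2(f + g) <= 2 * norm2(f) + 2 * norm2(g)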
2.3 Spaces with scalar product
2.3.1 Motivation
To build basic vector calculus, one needs to be able to say that a sequence of vectors converges to a limit. The notion of limit requires one to decide whether a vector is "small," i.e. one needs a concept of "size" or "length" that applies to vectors. In ordinary Euclidean geometry, the notions of length and angle are provided by the scalar product of vectors: the length of a vector a⃗ is equal to |a⃗| ≡ √(a⃗ · a⃗) and the angle θ between two vectors a⃗ and b⃗ is found from

    |a⃗| |b⃗| cos θ = a⃗ · b⃗.    (14)

The scalar product of three-dimensional vectors is defined by

    (x_1, y_1, z_1) · (x_2, y_2, z_2) ≡ x_1 x_2 + y_1 y_2 + z_1 z_2.    (15)

When considering an abstract vector space, one cannot give a specific definition of a scalar product. Instead one lists the properties that a scalar product should satisfy. In the case of a complex vector space, these properties are somewhat different from those of the familiar Euclidean scalar product.
2.3.2 Definition of Hermitian scalar product

A Hermitian scalar product in a complex vector space is a map (·, ·) which maps pairs of vectors into complex numbers and satisfies the following axioms:

    (|v_1⟩, α|v_2⟩ + β|v_3⟩) = α(|v_1⟩, |v_2⟩) + β(|v_1⟩, |v_3⟩);    (16)
    (|v_2⟩, |v_1⟩) = (|v_1⟩, |v_2⟩)*;    (17)
    (|v⟩, |v⟩) > 0 if |v⟩ ≠ 0.    (18)

Here the asterisk * denotes complex conjugation. Note that it follows from the axiom (17) that (|v⟩, |v⟩) is always a real number. The norm ‖v‖ of a vector |v⟩ is the non-negative real number defined as √((|v⟩, |v⟩)).
Simple corollaries:

    (|v_1⟩, λ|v_2⟩) = λ(|v_1⟩, |v_2⟩);    (19)
    (|v_1⟩ + λ|v_2⟩, |v_3⟩) = (|v_1⟩, |v_3⟩) + λ*(|v_2⟩, |v_3⟩);    (20)
    (0, |v⟩) = (|v⟩, 0) = 0;    (21)
    (λ|v⟩, λ|v⟩) = |λ|² (|v⟩, |v⟩).    (22)
2.3.3 Examples
1. For the finite-dimensional vector space of n-tuples (z_1, ..., z_n), a Hermitian scalar product can be defined by

       ((w_1, ..., w_n), (z_1, ..., z_n)) ≡ ∑_{j=1}^{n} w_j* z_j.    (23)

   Note that the complex conjugation is applied to w_j: this is needed to satisfy the property (19). (A short numerical illustration is given after these examples.)

2. In the space of square-integrable functions f(x) for x ∈ [a, b] (Example 5c in Sec. 2.2.2), a Hermitian scalar product can be defined by

       (f, g) ≡ ∫_a^b f*(x) g(x) dx.    (24)

   The condition of square integrability (8) is needed for the definition (24) to make sense, because the scalar product (f, g) must be defined for all functions f and g within the vector space, so (f, f) < ∞. One can show that square integrability of f and g is also sufficient for (f, g) to be a convergent integral.

3. Consider the space of all infinite sequences (z_1, z_2, ...) such that the series ∑_{j=1}^{∞} |z_j|² converges (Example 2a in Sec. 2.2.2). In this space, a Hermitian scalar product can be defined by

       ((w_1, w_2, ...), (z_1, z_2, ...)) ≡ ∑_{j=1}^{∞} w_j* z_j.    (25)

   It can be shown that the convergence of this series is guaranteed by the requirement that ∑_{j=1}^{∞} |z_j|² < ∞ for each infinite sequence in the space. The definition (25) of the scalar product would fail for the space of all infinite sequences.

Finally, note that a Hermitian scalar product may be multiplied by any positive real number and yields another Hermitian scalar product. So in any case there is more than one way to define such a scalar product on a given vector space.
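Here is the minimal numerical illustration of Example 1 promised above, assuming Python with NumPy (an added sketch, not part of the notes): the built-in np.vdot conjugates its first argument, so it implements exactly the sum in Eq. (23).

    # Illustrative sketch (not part of the notes): the scalar product (23) on n-tuples.
    import numpy as np

    w = np.array([1 + 2j, 0.5j, -3.0])
    z = np.array([2 - 1j, 4.0, 1 + 1j])

    sp = np.sum(np.conj(w) * z)                  # sum_j w_j^* z_j, as in Eq. (23)
    assert np.allclose(sp, np.vdot(w, z))        # np.vdot conjugates its first argument

    # Axiom (17): (w, z) = (z, w)^*
    assert np.allclose(np.vdot(w, z), np.conj(np.vdot(z, w)))
    # Axiom (18) and the norm: (z, z) > 0 for z != 0, and ||z|| = sqrt((z, z))
    assert np.vdot(z, z).real > 0
    print("norm of z:", np.sqrt(np.vdot(z, z).real))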
2.4 Dual space. Bra and ket vectors
2.4.1 Motivation
The notion of dual space is an extremely useful mathematical construction. Unlike the intuitive picture of vectors as "directed magnitudes" in physical space, the dual space has no such visual interpretation. (It is perhaps for this reason that dual spaces are not taught in school.)
2.4.2 Definition of dual space

The dual space V* to a vector space V is the set of all complex-valued linear functions on V, i.e. the set of maps f : V → ℂ such that

    f(α|v_1⟩ + β|v_2⟩) = α f(|v_1⟩) + β f(|v_2⟩).    (26)

The operations of addition and scalar multiplication are naturally defined on linear functions: it is easy to see that if f and g are two linear functions on V and λ ∈ ℂ, then f + g and λf are also linear functions. Therefore the set of all linear functions is itself another vector space, which is denoted by V*. In this way, a new vector space V* is built from the space V.

In the Dirac notation, the vectors in V are denoted |v⟩ and the dual vectors (elements of the dual space V*) are denoted ⟨f|. Instead of f(|v⟩), one writes ⟨f|v⟩. The vectors |v⟩ are called the "ket" vectors and the dual vectors ⟨f| are called the "bra" vectors, after the mnemonic ⟨f|v⟩, "bra-ket." Dual vectors are also called covectors (or covariant vectors), while normal vectors are called contravariant.
2.4.3 Examples
1. For a finite-dimensional vector space V, the dual space is easy to describe. Consider the n-dimensional space of n-tuples (z_1, ..., z_n) and consider the function f_k that maps a vector into its k-th coordinate, i.e.

       f_k : (z_1, ..., z_n) ↦ z_k.    (27)

   It is easy to see that f_k is a linear function of the vector (z_1, ..., z_n). Therefore f_k belongs to the dual space V*. Then one can show that the set f_1, ..., f_n is a basis of V*, so that any linear function f ∈ V* is uniquely represented as

       f = A_1 f_1 + ... + A_n f_n,    (28)

   where the A_k are some complex coefficients (the components of f in the basis f_1, ..., f_n). Thus V* is also an n-dimensional vector space. The application of the function f to a vector (z_1, ..., z_n) yields

       f(z_1, ..., z_n) = ∑_{j=1}^{n} A_j z_j.    (29)
2. The situation with infinite-dimensional spaces is rather more complicated. Consider the space V of all infinite sequences (z_1, z_2, ...) without restrictions on the coefficients (Example 1 in Sec. 2.2.2). We shall now prove that its dual space V* is the space in Example 2d in Sec. 2.2.2. A linear function on a sequence (z_1, z_2, ...) is generally of the form

       f(z_1, z_2, ...) = A_1 z_1 + A_2 z_2 + ...,    (30)

   where the A_k are some complex coefficients. However, the definition (30) makes sense only if the series A_1 z_1 + A_2 z_2 + ... converges for all sequences (z_1, z_2, ...). Since the numbers z_k are arbitrary, we can choose a particular sequence (z_1, z_2, ...) so that the series A_1 z_1 + A_2 z_2 + ... has the most difficulty converging. For instance, choosing

       z_j = { 0 if A_j = 0;  1/A_j if A_j ≠ 0 },    (31)

   we find that the series A_1 z_1 + A_2 z_2 + ... consists of 0s and 1s and thus its sum is the total number of nonzero coefficients A_j. Therefore the admissible sets of coefficients A_j are such that only a finite number of them are nonzero. This is precisely the definition of the vector space in Example 2d in Sec. 2.2.2.
3. Consider the space V of infinite sequences (z_1, z_2, ...) such that only finitely many z_j are nonzero (Example 2d in Sec. 2.2.2). Its dual space V* is the space of arbitrary infinite sequences (Example 1 in Sec. 2.2.2). To prove this, repeat the reasoning of Example 2 above. (Since only finitely many z_j are nonzero, the series A_1 z_1 + A_2 z_2 + ... is always a finite sum and thus is well-defined for arbitrary A_j.)
4. Consider the space V of square-integrable functions f(x) defined on the interval [a, b] (Example 5c in Sec. 2.2.2). Suppose that φ(x) is some square-integrable function; then a map ⟨φ| of V into complex numbers can be defined by

       ⟨φ| : f(x) ↦ ∫_a^b φ(x) f(x) dx.    (32)

   It is easy to verify that this map is linear (and that the integral is well-defined); therefore the map ⟨φ| is indeed a dual vector that belongs to the space V*.
Note: Examples 2 and 3 above suggest that V is the dual space to V*, i.e. (V*)* = V. This is indeed always the case (we shall not prove this). Depending on the space V, the dual space V* may be "larger" than V, or "the same" as V, or V* could be "smaller" than V.
2.4.4 Correspondence between ket and bra vectors
If a vector space V has a Hermitian scalar product (·, ·), one can define a correspondence between "ket" and "bra" vectors. Such a correspondence is mathematically described as a map V → V* that maps any vector |a⟩ into the corresponding dual vector ⟨a|. This map is defined by the equation

    |a⟩ ↦ ⟨a| :  ⟨a|v⟩ ≡ (|a⟩, |v⟩).    (33)

This equation is read as follows: any vector |a⟩ ∈ V is mapped to the dual vector ⟨a| ∈ V* defined as the linear function that acts on an arbitrary vector |v⟩ ∈ V and returns the number which is the scalar product (|a⟩, |v⟩). It is easy to derive from the axiom (16) that ⟨a| is indeed a linear map from vectors |v⟩ into complex numbers.

Note that the ket-bra correspondence is a one-to-one map between V and V* only if V is a Hilbert space (see Sec. 2.5.3), but not for other infinite-dimensional spaces (we shall not prove this).
2.4.5 Example of the ket-bra correspondence
This example illustrates that the ket-bra correspondence is not necessarily a one-to-one map between V and V*. Consider the space of infinite sequences (z_1, z_2, ...) such that only finitely many z_j are nonzero (Example 2d in Sec. 2.2.2). A Hermitian scalar product on this space can be defined by Eq. (25) because the infinite series in Eq. (25) is a finite sum when applied to such sequences. A ket vector |v⟩ ≡ (v_1, v_2, ...) is mapped to the bra vector ⟨v| which acts on vectors |z⟩ ≡ (z_1, z_2, ...) as

    ⟨v|z⟩ ≡ ∑_{j=1}^{∞} v_j* z_j.    (34)

By construction, only finitely many coefficients v_j* are nonzero. However, the dual space V* is a space of arbitrary sequences (A_1, A_2, ...) that act on |z⟩ as

    ⟨A|z⟩ = ∑_{j=1}^{∞} A_j z_j.    (35)

The coefficients A_j are not restricted in any way. It is clear that some bra vectors ⟨A| cannot be obtained from ket vectors |v⟩ by the correspondence map.
2.5 Hilbert spaces
2.5.1 Orthonormal bases in finite dimensions

In an n-dimensional vector space, a basis is a set of n vectors |e_1⟩, ..., |e_n⟩ such that any vector |v⟩ can be represented by a linear combination

    |v⟩ = ∑_{j=1}^{n} v_j |e_j⟩    (36)

with some coefficients v_j. If a Hermitian scalar product is available, one defines an orthonormal basis by the condition

    (|e_j⟩, |e_k⟩) = δ_jk ≡ { 0 if j ≠ k;  1 if j = k }.    (37)

A standard result of linear algebra is that orthonormal bases exist (and can be explicitly computed given an arbitrary basis), and that the components v_j of a vector |v⟩ in an orthonormal basis |e_1⟩, ..., |e_n⟩ are given by

    v_j = (|e_j⟩, |v⟩) ≡ ⟨e_j|v⟩,    (38)

where ⟨e_j| is the dual vector corresponding to |e_j⟩.
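The following sketch (Python/NumPy, an added illustration and not part of the notes) builds a random orthonormal basis of ℂ⁴ from a QR decomposition, verifies the condition (37), and reconstructs a vector from its components (38).

    # Illustrative sketch (not part of the notes): components in an orthonormal basis.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4
    # Columns of the unitary Q from a QR decomposition form an orthonormal basis of C^4.
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    e = [Q[:, j] for j in range(n)]                      # basis vectors |e_j>

    # Orthonormality condition (37): (e_j, e_k) = delta_jk
    gram = np.array([[np.vdot(e[j], e[k]) for k in range(n)] for j in range(n)])
    assert np.allclose(gram, np.eye(n))

    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    components = [np.vdot(ej, v) for ej in e]            # v_j = <e_j|v>, Eq. (38)
    reconstructed = sum(c * ej for c, ej in zip(components, e))
    assert np.allclose(reconstructed, v)                 # the expansion (36) is recovered
    print("components:", np.round(components, 3))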
2.5.2 Infinite bases, completeness, and separability

In an infinite-dimensional space, the situation is more subtle. One may expect that an infinite basis |e_1⟩, |e_2⟩, ... exists such that any vector |v⟩ is expressed as an infinite linear combination

    |v⟩ = ∑_{j=1}^{∞} v_j |e_j⟩.    (39)

However, this expression is an infinite series, and the concept of an infinite basis makes sense only if this series converges to the vector |v⟩. The concept of convergence in a vector space is defined as usual, using the scalar product: an infinite sequence of vectors |a_1⟩, |a_2⟩, ... converges to the limit vector |a⟩ if the distance between |a⟩ and |a_j⟩ tends to zero as j → ∞:

    lim_{j→∞} ‖ |a_j⟩ − |a⟩ ‖ = 0.    (40)

It is a nontrivial question to decide whether a given infinite-dimensional space admits an infinite basis in this sense: it might happen that a space needs a basis {|e_j⟩} with a continuous (not countable) index j, or that some series of the form (39) do not converge to a vector in the space. For instance, assuming that the series (39) converges and that the basis {|e_j⟩} is orthonormal, one computes the norm of the vector |v⟩ as

    (|v⟩, |v⟩) ≡ ⟨v|v⟩ = ∑_{j=1}^{∞} ∑_{k=1}^{∞} v_j v_k* ⟨e_k|e_j⟩ = ∑_{j=1}^{∞} |v_j|².    (41)
Therefore, the vector |v⟩ has a finite norm only if the following series converges,

    ∑_{j=1}^{∞} |v_j|² < ∞.    (42)

Thus, in any case, the components v_j of a vector in an orthonormal basis cannot be arbitrary numbers.

At this point one could consider each of the spaces introduced in Sec. 2.2.2 and investigate whether they admit a Hermitian scalar product and a countable basis. This is a mathematical exercise which is standard in courses of functional analysis, and for lack of time we shall omit the necessary proofs and constructions. The result is that some spaces admit a countable basis and some do not. In quantum mechanics one almost always uses vector spaces that admit a countable basis in the above sense. These spaces are known as separable and complete. A space is separable if a countable set of vectors |e_1⟩, |e_2⟩, ... exists such that any vector |v⟩ can be approximated arbitrarily well by a finite linear combination ∑_{j=1}^{N} v_j |e_j⟩ with some coefficients v_j and a sufficiently large N. A non-separable space cannot admit a countable basis. A space is complete if any series of the form (39) converges to a limit as long as the coefficients v_j satisfy the condition (42). An incomplete space has "holes" between the vectors because it admits a sequence of vectors that seems to approach a limit but does not have a limit, similarly to a Cauchy sequence of rational numbers that has no rational limit.
2.5.3 Definition and main examples of Hilbert spaces

A Hilbert space is a complete vector space with a Hermitian scalar product.

In quantum mechanics one almost always uses separable spaces, therefore we shall refer to a separable Hilbert space as simply a "Hilbert space." Any finite-dimensional space is also a Hilbert space, but this is a trivial fact, and by a Hilbert space we shall usually mean an infinite-dimensional space.

The following infinite-dimensional spaces are Hilbert spaces (we omit the required proofs):

1. The space of square-summable sequences {z_j}, i.e. sequences such that ∑_{j=1}^{∞} |z_j|² < ∞. This is the space in Example 2a in Sec. 2.2.2. The scalar product is defined by Eq. (25). Note that the sequence (1, 1, 1, ...) does not belong to this space, while the sequence (1, 1/2, 1/3, ...) does. The bra vectors are also square-summable sequences {w_j} that act on {z_j} by

       ⟨w|z⟩ ≡ ∑_{j=1}^{∞} w_j* z_j.    (43)
2. The space L²([a, b]) of square-integrable functions f(x) defined on an interval [a, b]; the scalar product is defined by Eq. (24). The functions are not necessarily continuous, and one identifies any two functions f(x), g(x) which are at zero "distance" from each other, i.e.

       ‖f(x) − g(x)‖² = ∫_a^b |f(x) − g(x)|² dx = 0.    (44)

   This identification excludes such "useless" functions as

       f(x) = { 0 if x ≠ a;  1 if x = a }    (45)

   because this function satisfies ∫_a^b |f(x)|² dx = 0 and is identified with the zero function. Bra vectors are also square-integrable functions g(x) that act on ket vectors as

       ⟨g|f⟩ ≡ ∫_a^b g*(x) f(x) dx.    (46)

   More generally, one considers square-integrable functions of several variables x, y, z, etc., instead of functions of x.

These two Hilbert spaces are the most frequently used in quantum mechanics.
One also uses the Hilbert space L²(ℝ) of square-integrable functions on the whole real line. This space, too, is a separable Hilbert space; nevertheless, one needs to use special tricks (such as the generalized vectors of Sec. 2.7) when working with this space.
2.5.4 Useful properties of Hilbert spaces
The following statements hold for (separable) Hilbert spaces:

1. There exists a countable orthonormal basis |e_1⟩, |e_2⟩, ..., such that any vector can be uniquely decomposed into a series of the form (39) with coefficients satisfying the condition (42). Note that the expression

       ∑_{j=1}^{∞} |e_j⟩ = |e_1⟩ + |e_2⟩ + ... ∉ V    (47)

   is not a vector in the Hilbert space because the condition (42) is violated; on the other hand,

       ∑_{j=1}^{∞} (1/j) |e_j⟩ = |e_1⟩ + (1/2)|e_2⟩ + (1/3)|e_3⟩ + ... ∈ V    (48)

   is a well-defined Hilbert space vector.

2. The dual space V* is isomorphic to V; this means that the ket vectors |v⟩ and the bra vectors ⟨v| are in a one-to-one correspondence.

3. All Hilbert spaces are isomorphic to the space of square-summable sequences {z_j}. (However, it is not always convenient to determine the required orthonormal basis and to work with it.)
4. The Cauchy-Schwarz inequality holds:

       |⟨a|b⟩|² = ⟨a|b⟩ ⟨b|a⟩ ≤ ⟨a|a⟩ ⟨b|b⟩.    (49)

   To prove this, it suffices to consider (for |b⟩ ≠ 0) the non-negative norm

       0 ≤ ‖ |a⟩ − (⟨b|a⟩ / ⟨b|b⟩) |b⟩ ‖² = ⟨a|a⟩ − ⟨a|b⟩ ⟨b|a⟩ / ⟨b|b⟩.    (50)
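A quick numerical check of the inequality (49), and of the non-negative squared norm used in the proof (50), for random complex vectors (Python/NumPy sketch, an added illustration and not part of the notes):

    # Illustrative sketch (not part of the notes): the Cauchy-Schwarz inequality (49).
    import numpy as np

    rng = np.random.default_rng(2)
    a = rng.normal(size=6) + 1j * rng.normal(size=6)
    b = rng.normal(size=6) + 1j * rng.normal(size=6)

    lhs = abs(np.vdot(a, b)) ** 2                    # |<a|b>|^2
    rhs = np.vdot(a, a).real * np.vdot(b, b).real    # <a|a> <b|b>
    assert lhs <= rhs

    # The vector used in the proof, Eq. (50), indeed has a non-negative squared norm:
    residual = a - (np.vdot(b, a) / np.vdot(b, b)) * b
    assert np.vdot(residual, residual).real >= 0.0
    print(lhs, "<=", rhs)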
In conclusion, Hilbert spaces have the properties one may intuitively expect when one imagines a well-behaved infinite-dimensional space: one can decompose arbitrary vectors into an orthonormal basis consisting of countably many vectors |e_j⟩, and an infinite series (39) converges if its coefficients v_j decay sufficiently quickly with j to satisfy the condition (42).
2.6 Operators and tensor products
In this section we shall usually be working in a Hilbert space V, although some results do not require the assumptions of completeness, separability, or the existence of a Hermitian scalar product.
2.6.1 Linear operators
A linear operator in a vector space is a map Â : V → V of the vector space to itself such that the following property holds for an arbitrary scalar λ and arbitrary vectors |v_1⟩, |v_2⟩:

    Â(λ|v_1⟩ + |v_2⟩) = λ Â|v_1⟩ + Â|v_2⟩.    (51)

We denote the action of the map Â on a vector |v⟩ by Â|v⟩.
Some examples of linear operators:

1. The identity operator 1̂, which does not change any vectors, i.e. 1̂|v⟩ = |v⟩.

2. The operator of multiplication by a constant, λ1̂. This operator multiplies all vectors by λ, i.e. λ1̂|v⟩ = λ|v⟩.

3. The operator of projection P̂_|a⟩ onto a fixed vector |a⟩: any vector |v⟩ is transformed into λ|a⟩, where the coefficient λ is equal to the scalar product of |a⟩ and |v⟩, i.e. λ ≡ (|a⟩, |v⟩). The action of this operator on a vector |v⟩ can be written as

       P̂_|a⟩ |v⟩ = (|a⟩, |v⟩) |a⟩ ≡ |a⟩ ⟨a|v⟩,    (52)

   and this motivates writing the operator P̂_|a⟩ itself as |a⟩⟨a|. Note that an operator that maps every vector |v⟩ into a fixed vector |a⟩ is not a linear operator. (A numerical sketch of this projection operator and of the shift operator below is given after these examples.)

4. Consider the Hilbert space of sequences {z_j} in Example 1 (Sec. 2.5.3). The shift operator Ŝ_n is defined by

       Ŝ_n {z_1, z_2, ...} ≡ {z_{n+1}, z_{n+2}, ...}.    (53)

   It is easy to check that this operator is linear.

5. Consider the Hilbert space of square-integrable functions f(x) in Example 2 (Sec. 2.5.3). Suppose that α(x, y) is some bounded function, e.g. |α(x, y)| < 1 for x, y ∈ [a, b]; then an operator α̂ : V → V can be defined by

       α̂ : f(x) ↦ ∫_a^b α(x, y) f(y) dy.    (54)

   It is easy to verify that α̂ is a linear map (and that the integral is well-defined).
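The projection operator of example 3 and the shift operator of example 4 are easy to realize in finite truncations; the sketch below (Python/NumPy, an added illustration and not part of the notes) verifies Eq. (52) and the linearity of the shift.

    # Illustrative sketch (not part of the notes): the projection and shift operators.
    import numpy as np

    # Projection operator P_|a> = |a><a| acting on |v>, Eq. (52).
    a = np.array([1.0, 1j, 0.0])
    v = np.array([2.0, 3.0, 1 - 1j])
    P = np.outer(a, np.conj(a))                      # matrix of |a><a|
    assert np.allclose(P @ v, np.vdot(a, v) * a)     # P|v> = <a|v> |a>

    # Shift operator S_n of Eq. (53), acting on a truncated sequence {z_1, z_2, ...}.
    def shift(z, n):
        return z[n:]                                 # drop the first n entries

    z = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])
    w = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    lam = 1 - 2j
    # Linearity: S_n(lam*z + w) = lam*S_n(z) + S_n(w)
    assert np.allclose(shift(lam * z + w, 2), lam * shift(z, 2) + shift(w, 2))
    print("shifted:", shift(z, 2))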
2.6.2 Eigenvectors and eigenvalues
A vector |v⟩ is called an eigenvector of an operator Â with eigenvalue λ (where λ is a complex number) if |v⟩ ≠ 0 and Â|v⟩ = λ|v⟩. Finding the eigenvalues and the eigenvectors of a given operator Â is a standard problem in linear algebra.

For example, the scalar multiplication operator λ1̂ has all nonzero vectors as eigenvectors with eigenvalue λ. The projection operator |a⟩⟨a| has |a⟩ as an eigenvector with eigenvalue ⟨a|a⟩. The shift operator Ŝ_n acting on sequences {z_j} has many eigenvectors |λ⟩ of the form

    |λ⟩ ≡ (z_1, z_2, ..., z_n, λz_1, ..., λz_n, λ²z_1, ..., λ²z_n, ...)    (55)

with arbitrary components z_1, ..., z_n and eigenvalues λ such that |λ| < 1. For |λ| ≥ 1 the sequence (55) violates the condition (42) and therefore does not belong to the Hilbert space.

By definition, the spectrum of an operator Â is the set of all eigenvalues of the operator Â. In other words, it is the set of all numbers λ such that there exists at least one vector |v_λ⟩ ≠ 0 which satisfies Â|v_λ⟩ = λ|v_λ⟩. For example, the spectrum of the shift operator Ŝ_n is the set of complex numbers λ such that |λ| < 1.
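The eigenvector (55) can be checked numerically on a finite truncation of the sequence; the sketch below (Python/NumPy, an added illustration with an arbitrarily chosen block length n = 2, not part of the notes) confirms that shifting reproduces λ times the sequence, up to the truncated tail.

    # Illustrative sketch (not part of the notes): a truncated eigenvector (55) of S_n.
    import numpy as np

    n, lam = 2, 0.4 + 0.3j                   # |lam| < 1, so the full sequence is square-summable
    block = np.array([1.0, 2.0 - 1.0j])      # arbitrary components z_1, ..., z_n
    # First few blocks of (z_1, ..., z_n, lam z_1, ..., lam z_n, lam^2 z_1, ...):
    seq = np.concatenate([lam**k * block for k in range(6)])

    shifted = seq[n:]                        # S_n applied to the truncated sequence
    assert np.allclose(shifted, lam * seq[: len(shifted)])   # S_n |lam> = lam |lam>
    print("eigenvalue check passed for lambda =", lam)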
2.6.3 Decomposition of unity
If |e_1⟩, |e_2⟩, ... is an orthonormal basis, one has the useful formula

    1̂ = ∑_{j=1}^{∞} |e_j⟩ ⟨e_j| .    (56)

This formula is called the decomposition of unity. To prove it, consider the decomposition of an arbitrary vector |v⟩ in the basis,

    |v⟩ = ∑_{j=1}^{∞} v_j |e_j⟩,    v_j = ⟨e_j|v⟩,    (57)

and apply both sides of Eq. (56) to |v⟩:

    ( ∑_{j=1}^{∞} |e_j⟩ ⟨e_j| ) |v⟩ = ∑_{j=1}^{∞} |e_j⟩ v_j = |v⟩ = 1̂ |v⟩.    (58)
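In a finite-dimensional space the decomposition of unity (56) becomes a finite sum of projectors that adds up to the identity matrix; a short check (Python/NumPy, an added sketch, not part of the notes):

    # Illustrative sketch (not part of the notes): decomposition of unity (56), finite case.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 5
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    e = [Q[:, j] for j in range(n)]                        # orthonormal basis |e_j>

    unity = sum(np.outer(ej, np.conj(ej)) for ej in e)     # sum_j |e_j><e_j|
    assert np.allclose(unity, np.eye(n))                   # equals the identity operator

    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    assert np.allclose(unity @ v, v)                       # reproduces |v>, as in Eq. (58)
    print("decomposition of unity verified")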
2.6.4 Hermitian conjugation
We have seen (Sec. 2.4.4) that the Hermitian scalar product allows one to define a ket-bra map. Another important operation maps any linear operator Â into another operator denoted by Â†, which is called the Hermitian conjugate of Â. The new operator Â† is defined by

    ( Â†|v_1⟩, |v_2⟩ ) ≡ ( |v_1⟩, Â|v_2⟩ )    for all |v_1⟩, |v_2⟩.    (59)

This equation is read as follows: the result of applying Â† to |v_1⟩ is a new vector Â†|v_1⟩ such that its scalar product with an arbitrary |v_2⟩ is equal to (|v_1⟩, Â|v_2⟩). The vector Â†|v_1⟩ is uniquely defined by specifying its scalar products with all vectors |v_2⟩.
In the Dirac notation, operators act only to the right, so the definition (59) must be rewritten like this,

    ⟨v_2| Â† |v_1⟩ = ⟨v_1| Â |v_2⟩*.    (60)

An operator Â is called Hermitian if Â† = Â. Hermitian operators play a very important role in quantum mechanics because all observable quantities are described by Hermitian operators acting in suitable Hilbert spaces.
Examples:

1. The operators 1̂ and |a⟩⟨a| are Hermitian.

2. The Hermitian conjugate of λ1̂ is λ*1̂. More generally, the Hermitian conjugate of λÂ is λ*Â†.

3. The Hermitian conjugate of |a⟩⟨b| is |b⟩⟨a|. So if |a⟩ is not parallel to |b⟩, the operator |a⟩⟨b| is not Hermitian.

4. Consider the derivative operator ∂_x ≡ d/dx acting in the space of square-integrable functions f(x) on [−∞, +∞]. The Hermitian conjugate of d/dx is −d/dx, because using integration by parts one has

       ( |f⟩, (d/dx)|g⟩ ) ≡ ∫_{−∞}^{+∞} f*(x) (dg/dx) dx = − ∫_{−∞}^{+∞} (df*/dx) g(x) dx ≡ ( −(d/dx)|f⟩, |g⟩ ).    (61)

   It follows that i∂_x is Hermitian.

5. Consider the operator ∂²_x ≡ d²/dx² acting in the space of square-integrable functions f(x) on [−∞, +∞]. This operator is Hermitian (to verify this, one needs to integrate by parts twice).
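In a finite orthonormal basis, Hermitian conjugation is simply the conjugate transpose of the matrix of the operator. The sketch below (Python/NumPy, an added illustration and not part of the notes) checks the definition (59), the rule (|a⟩⟨b|)† = |b⟩⟨a|, and a crude finite-difference analogue of example 4.

    # Illustrative sketch (not part of the notes): Hermitian conjugation in a finite basis.
    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    v1 = rng.normal(size=4) + 1j * rng.normal(size=4)
    v2 = rng.normal(size=4) + 1j * rng.normal(size=4)

    A_dag = A.conj().T                                    # conjugate transpose
    # Definition (59): (A^dagger v1, v2) = (v1, A v2)
    assert np.allclose(np.vdot(A_dag @ v1, v2), np.vdot(v1, A @ v2))

    # Example 3: (|a><b|)^dagger = |b><a|
    a = rng.normal(size=4) + 1j * rng.normal(size=4)
    b = rng.normal(size=4) + 1j * rng.normal(size=4)
    assert np.allclose(np.outer(a, b.conj()).conj().T, np.outer(b, a.conj()))

    # Example 4, crudely discretized: a central-difference matrix D for d/dx
    # (with zero boundary terms) is real antisymmetric, so D^dagger = -D and i*D is Hermitian.
    N, h = 50, 0.1
    D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)
    assert np.allclose(D.conj().T, -D)
    assert np.allclose((1j * D).conj().T, 1j * D)
    print("conjugation checks passed")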
2.6.5 Properties of Hermitian operators
1. It is convenient to imagine that a Hermitian operator acts also on bra vectors "to the left," so that ⟨v|Â is the result of acting with Â on the bra vector ⟨v|. This is justified since the bra vector corresponding to the ket vector Â|v⟩ acts on a vector |w⟩ by

       ( Â|v⟩, |w⟩ ) = ( |v⟩, Â|w⟩ ) ≡ ⟨v| Â |w⟩.    (62)

2. All eigenvalues of a Hermitian operator Â are real. Proof: since Â† = Â, we have (|v⟩, Â|v⟩) = (Â|v⟩, |v⟩) for an eigenvector |v⟩, and therefore (note that |v⟩ ≠ 0)

       λ ⟨v|v⟩ = ( |v⟩, λ|v⟩ ) = ( λ|v⟩, |v⟩ ) = λ* ⟨v|v⟩,    (63)

   so λ = λ*.

3. Two eigenvectors |v_1⟩ and |v_2⟩ of a Hermitian operator Â are orthogonal if they have different eigenvalues λ_1 ≠ λ_2. Proof:

       ⟨v_1| Â |v_2⟩ = λ_2 ⟨v_1|v_2⟩ = λ_1 ⟨v_1|v_2⟩,    (64)

   therefore either λ_1 = λ_2 or ⟨v_1|v_2⟩ = 0.
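Properties 2 and 3 are easy to verify numerically for a Hermitian matrix; a short sketch (Python/NumPy, an added illustration, not part of the notes) using np.linalg.eigh, which is designed for Hermitian matrices:

    # Illustrative sketch (not part of the notes): properties 2 and 3 for a Hermitian matrix.
    import numpy as np

    rng = np.random.default_rng(5)
    B = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
    H = B + B.conj().T                        # Hermitian by construction

    vals, vecs = np.linalg.eigh(H)            # eigh assumes a Hermitian matrix
    print("eigenvalues:", vals)               # property 2: all eigenvalues are real
    # Property 3: eigenvectors (columns of vecs) with different eigenvalues are orthogonal;
    # eigh in fact returns a full orthonormal set.
    assert np.allclose(vecs.conj().T @ vecs, np.eye(5))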
2.6.6 Tensor products
The convenient way of writing the projection operator as |a⟩⟨a| motivates the following question: can one give a useful interpretation to the expression |a⟩|b⟩, where |a⟩ and |b⟩ are two vectors? The answer is positive: the expression |a⟩|b⟩ is called the tensor product of the vectors |a⟩ and |b⟩ and can be interpreted as a linear map from dual vectors into vectors:

    |a⟩|b⟩ :  ⟨v| ↦ ⟨v|a⟩ |b⟩    (65)

(note that ⟨v|a⟩ is a scalar). Similarly, ⟨a|⟨b| is interpreted as a map from vectors to dual vectors,

    ⟨a|⟨b| :  |v⟩ ↦ ⟨a| ⟨b|v⟩.    (66)

This interpretation is consistent with the rule that |a⟩⟨b| is a linear map from vectors to vectors that acts as

    |a⟩⟨b| :  |v⟩ ↦ |a⟩ ⟨b|v⟩.    (67)

Therefore linear operators can be thought of as tensor products of vectors and covectors.

We would like to emphasize that tensor products |a⟩|b⟩ are not themselves vectors from the space V but objects that belong to another vector space. Linear combinations of tensor products can be naturally considered. For example,

    Â ≡ λ_1 |a_1⟩⟨b_1| + λ_2 |a_2⟩⟨b_2|    (68)

is a linear operator that acts on vectors |v⟩ by

    Â|v⟩ = λ_1 |a_1⟩ ⟨b_1|v⟩ + λ_2 |a_2⟩ ⟨b_2|v⟩.    (69)

It is clear that linear combinations of tensor products, such as λ_1 |a_1⟩|b_1⟩ + λ_2 |a_2⟩|b_2⟩, belong to a vector space of their own: e.g. |a⟩|b⟩ belongs to the space of linear maps from covectors to vectors. This space is called the tensor product of V with itself and is denoted by V ⊗ V. Elements of this space are linear combinations of the form

    λ_1 |a_1⟩|b_1⟩ + λ_2 |a_2⟩|b_2⟩ + ...    (70)

In the mathematical literature, the tensor product of vectors |a⟩ and |b⟩ is denoted by |a⟩ ⊗ |b⟩ (pronounced "A tensor B"), but in the physics literature the ⊗ sign is frequently omitted.

Similarly, one can define tensor products of more than two spaces; for instance, |a⟩|b⟩|c⟩ is an element of the space V ⊗ V ⊗ V. In quantum mechanics we rarely use such higher-order tensor products.
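In finite dimensions, |a⟩⟨b| is the outer product of the component arrays and |a⟩ ⊗ |b⟩ can be represented by the Kronecker product; the sketch below (Python/NumPy, an added illustration and not part of the notes) checks Eqs. (67)-(69).

    # Illustrative sketch (not part of the notes): tensor products in finite dimensions.
    import numpy as np

    a = np.array([1.0, 1j])
    b = np.array([2.0, 0.0, -1j])
    v = np.array([1.0, 2.0, 3.0])

    # |a><b| is a linear map from vectors to vectors, Eq. (67):
    M = np.outer(a, b.conj())
    assert np.allclose(M @ v, a * np.vdot(b, v))          # |a> <b|v>

    # |a> (x) |b> lives in the tensor-product space C^2 (x) C^3, here modeled as C^6:
    ab = np.kron(a, b)
    print("shape of |a>|b>:", ab.shape)

    # A linear combination like Eq. (68) acts on |v> as in Eq. (69):
    a2 = np.array([0.0, 1.0])
    b2 = np.array([1j, 1.0, 0.0])
    A = 2.0 * np.outer(a, b.conj()) + (1 - 1j) * np.outer(a2, b2.conj())
    assert np.allclose(A @ v, 2.0 * a * np.vdot(b, v) + (1 - 1j) * a2 * np.vdot(b2, v))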
2.7 Life outside of Hilbert space
In quantum mechanics one needs to consider not only vectors and operators in a Hilbert space, but also certain objects that do not belong to the Hilbert space, for instance, "generalized vectors." These objects can be understood as functions defined only on a subset of the Hilbert space.

2.7.1 Definition of subspace

A subspace of a vector space V is a subset W ⊂ V which is itself a vector space.

By definition, V ⊂ V and {0} ⊂ V, where by {0} we denote the trivial vector space consisting of the single vector 0.
2.7.2 Examples of subspaces
1. The space of n-tuples (z_1, ..., z_n) such that z_1 = 0 is a subspace of the space of all n-tuples. This is so because a sum of two n-tuples with z_1 = 0 is again an n-tuple of the same kind.

2. The space of all infinite sequences (z_1, z_2, ...) such that ∑_{j=1}^{∞} |z_j|² < ∞ is a subspace of the space of sequences such that lim_{j→∞} z_j = 0, which in turn is a subspace of the space of all infinite sequences.

3. Consider complex-valued functions defined on an interval [a, b]. Then the space of analytic functions ⊂ the space of smooth functions ⊂ the space of continuous functions ⊂ the space of bounded functions ⊂ the space of square-integrable functions ⊂ the space of all functions.

4. Suppose that a vector space V is not necessarily a Hilbert space but has a Hermitian scalar product. Then V ⊂ V* because the ket-bra correspondence (Sec. 2.4.4) maps vectors |v⟩ into dual vectors ⟨v|.
2.7.3 Generalized vectors: motivation
We again work in a Hilbert space V with an orthonormal basis |e_1⟩, |e_2⟩, .... Consider a general linear combination of bra vectors,

    ⟨a| ≡ ∑_{j=1}^{∞} a_j ⟨e_j| .    (71)

As we know, Eq. (71) defines a certain bra vector ⟨a| as long as the sequence {a_j} is square-summable [i.e. satisfies the condition (42)]. If the sequence {a_j} is not square-summable, the series (71) diverges "in the vector sense," i.e. it does not define any dual vector ⟨a| ∈ V*. For example, with a_j = 1 we find

    ⟨a| = ∑_{j=1}^{∞} ⟨e_j| ∉ V*.    (72)

More precisely, this divergence means that we cannot apply the linear function ⟨a| to all vectors |v⟩ ∈ V, because for some vectors |v⟩ the series ⟨a|v⟩ ≡ ∑_{j=1}^{∞} a_j ⟨e_j|v⟩ diverges. For instance, ∑_{j=1}^{∞} (1/j)|e_j⟩ ∈ V is a well-defined vector since ∑_{j=1}^{∞} 1/j² = π²/6 < ∞; however,

    ⟨a| ( ∑_{j=1}^{∞} (1/j) |e_j⟩ ) = ∑_{j=1}^{∞} 1/j = ∞.    (73)

However, we may be able to apply ⟨a| to some (not all) vectors |v⟩, for instance to vectors whose components decay sufficiently quickly as j → ∞. It is easy to prove that there exists a certain subspace V_(a) of vectors |v⟩ to which ⟨a| can be applied. Therefore the expression (72) is meaningful as an element of the dual space V*_(a) to the subspace V_(a). Such expressions are called generalized vectors and are quite useful in calculations.
2.7.4 Generalized vectors: definition and examples

Generalized bra vectors are linear functions defined on a certain subspace W ⊂ V of a Hilbert space V, but not defined on the entire space V. In other words, generalized bra vectors are elements of the dual space W*. Since W ⊂ V, we have V* ⊂ W*, i.e. W* is a larger space than V*.

Generalized ket vectors are linear combinations of the form (39) with coefficients that do not satisfy the convergence condition (42) but can be acted on by bra vectors from a certain subspace W* ⊂ V*. In other words, generalized ket vectors are "limits" of vector sequences that do not converge to any vector in V, but yield well-defined limits if a bra vector from W* is applied to them.

Note that generalized vectors are always defined with respect to a certain subspace W ⊂ V. This subspace is chosen for convenience in particular calculations.
Some examples:

1. The first example of a generalized bra vector was given by Eq. (72).

2. Consider the Hilbert space V ≡ L²([a, b]) of square-integrable functions f(x) and define a linear map ⟨δ_a| : V → ℂ by taking the value f(a), i.e.

       ⟨δ_a| :  f(x) ↦ f(a).    (74)

   It is easy to verify that this map is linear. However, ⟨δ_a| is only defined on functions f(x) which are continuous at x = a and is undefined on all other functions. The Hilbert space only restricts the functions f(x) to be square-integrable and thus admits functions that are discontinuous at x = a, such as (check this!)

       f_1(x) = |x − a|^(−1/4)   or   f_2(x) = sin[1/(x − a)].    (75)

   Therefore ⟨δ_a| can be viewed as a generalized bra vector which is defined on the subspace of continuous functions. This bra vector cannot be represented by an integral,

       ⟨δ_a|f⟩ = f(a) = ∫_a^b g*(x) f(x) dx  (?)    (76)

   because no integration kernel g(x) exists that satisfies Eq. (76) for all continuous functions f(x). Nevertheless, in calculations it is convenient to pretend that such an integration kernel exists and to write

       f(a) = ∫ δ(x − a) f(x) dx,    (77)

   where the symbol δ(x − a) is the Dirac delta function. We emphasize that the integral sign and the symbol δ(x − a) are only a symbolic notation for the linear map ⟨δ_a|.
3. What is the generalized ket vector analogous to ⟨δ_a|? Consider the space V ≡ L²([−1, 1]) and the following sequence of vectors,

       |f_n⟩ ≡ f_n(x) = (n/√π) exp(−n²x²),    n = 1, 2, 3, ...    (78)

   It is easy to verify that as n → ∞, the sequence {f_n(x)} does not converge to any function from the space V. However, if we apply a bra vector ⟨g| to the sequence before taking the limit n → ∞, we shall obtain (after some calculations) a finite limit,

       lim_{n→∞} ⟨g|f_n⟩ ≡ lim_{n→∞} ∫_{−1}^{1} g*(x) f_n(x) dx = g*(0),    (79)

   as long as ⟨g| comes from a function that is continuous at x = 0; the limit does not exist if g(x) is discontinuous at x = 0. Therefore we say that the sequence {|f_n⟩} converges to the generalized ket vector which we might denote |δ_0⟩, by analogy with ⟨δ_a|. This vector is such that ⟨g|δ_0⟩ = g*(0) for continuous functions g(x). Comparing Eqs. (77) and (79), we find that the generalized vector |δ_0⟩ can be written using the Dirac delta function as

       |δ_0⟩ ≡ lim_{n→∞} f_n(x) = δ(x).    (80)

Generalized vectors, and especially the Dirac delta function, are useful in many calculations.
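The limit (79) can also be observed numerically: the sketch below (Python/NumPy, an added illustration; the test function g is an arbitrary choice with g*(0) = 1, not taken from the notes) evaluates ⟨g|f_n⟩ for the Gaussians (78) and watches the value approach g*(0).

    # Illustrative sketch (not part of the notes): the delta sequence (78) applied to a test function.
    import numpy as np

    x = np.linspace(-1.0, 1.0, 40001)
    dx = x[1] - x[0]
    g = np.cos(3 * x) + 1j * x          # arbitrary function continuous at x = 0; g^*(0) = 1

    for n in (1, 5, 20, 100):
        f_n = n / np.sqrt(np.pi) * np.exp(-(n * x) ** 2)
        inner = np.sum(np.conj(g) * f_n) * dx            # <g|f_n> = int g^*(x) f_n(x) dx
        print(n, np.round(inner, 4))
    # The printed values approach g^*(0) = 1 as n grows, as in Eq. (79).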
2.7.5 Operators defined only on a subspace

Frequently we shall consider linear operators which are defined only on a subspace V_1 ⊂ V of a Hilbert space V. Examples:

1. In the Hilbert space L², the derivative operator

       d/dx :  f(x) ↦ df/dx    (81)

   is defined only on differentiable functions f(x) such that f′(x) is square-integrable. Such functions f(x) form a subspace of L².

2. In a generic Hilbert space with an orthonormal basis {|e_j⟩}, define the operator

       M̂ ≡ ∑_{j=1}^{∞} j |e_j⟩ ⟨e_j| .    (82)

   It is clear that this operator cannot be applied to the vector

       |v_1⟩ ≡ ∑_{j=1}^{∞} (1/j) |e_j⟩;    M̂|v_1⟩ = ∑_{j=1}^{∞} |e_j⟩ ∉ V,    (83)

   but is well-defined on

       |v_2⟩ ≡ ∑_{j=1}^{∞} (1/j²) |e_j⟩;    M̂|v_2⟩ = ∑_{j=1}^{∞} (1/j) |e_j⟩ = |v_1⟩ ∈ V.    (84)
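Numerically, the difference between (83) and (84) is just the divergence or convergence of ∑_j j²|v_j|², which is the squared norm of M̂|v⟩; the sketch below (Python/NumPy, an added illustration and not part of the notes) shows this for truncations of the two vectors.

    # Illustrative sketch (not part of the notes): the squared norm of M|v> for Eqs. (83)-(84).
    import numpy as np

    def norm2_of_Mv(component, N):
        # After applying M, the squared norm of the result is  sum_j j^2 |v_j|^2.
        j = np.arange(1, N + 1, dtype=float)
        return np.sum(j**2 * np.abs(component(j)) ** 2)

    for N in (10**2, 10**4, 10**6):
        diverging = norm2_of_Mv(lambda j: 1.0 / j, N)       # grows like N: M|v_1> has no finite norm
        converging = norm2_of_Mv(lambda j: 1.0 / j**2, N)   # approaches pi^2/6: M|v_2> = |v_1> is in V
        print(N, diverging, converging)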
2.7.6 Dense subspaces
There is usually no problem if an operator is defined only on a subspace of a Hilbert space, as long as this subspace is "large enough." The mathematical definition is the following. A subspace V_1 ⊂ V is dense if any vector |v⟩ ∈ V can be approximated arbitrarily well by vectors from V_1; in other words, if for every ε > 0 there exists |v_1⟩ ∈ V_1 such that ‖ |v⟩ − |v_1⟩ ‖ < ε.

For example, the subspace of differentiable functions is dense in the Hilbert space of square-integrable functions.