
Chapter 1

Vector Analysis

Problem Set #1: 1.2, 1.3, 1.9, 1.10, 1.11, 1.15, 1.18, 1.20 (Due Thursday
Jan. 23rd)
Problem Set #2: 1.14, 1.17, 1.28, 1.29, 1.30,1.33, 1.39, 1.43, 1.46 (Due
Tuesday Feb. 18th)

1.1 Vector Algebra


1.1.1 Vectors
Vector quantities (or three-vectors) are denoted by boldface letters A, B, ...
in contrast to scalar quantities denoted by ordinary letters A, B, .... For
example, in Cartesian coordinates a vector

A = (Ax , Ay , Az ) (1.1)

has a length (or magnitude)


A ≡ |A| = √(Ax² + Ay² + Az²) (1.2)

which is a scalar. Scalars are real numbers, i.e. elements of the space R, and vectors
are elements of the space R³. For vectors A, B, ... ∈ R³ and angle θ between A
and B one can define:

1. Addition:
A + B ≡ (Ax + Bx , Ay + By , Az + Bz ) (1.3)
which is commutative
A+B=B+A (1.4)


associative
(A + B) + C = A + (B + C) (1.5)
and defines the inverse (or minus) vector

A + (−A) ≡ 0 (1.6)

where the zero vector is


0 ≡ (0, 0, 0). (1.7)
Geometrically the addition is understood by parallel transporting vector B so
that it starts where the vector A ends. Then the vector A + B points from the
beginning of vector A to the end of vector B.

2. Multiplication by scalar:

aA ≡ (aAx , aAy , aAz ) (1.8)

which is distributive

a (A + B) = aA + aB (1.9)

where a ∈ R is the scalar.


Geometrically the resulting vector aA points in the same direction as A (or in
the opposite direction if a < 0), and its magnitude is |a| times the magnitude
of A.

3. Dot product (or scalar product):

A · B ≡ AB cos θ (1.10)

which is commutative
A · B =B · A (1.11)
and distributive

A · (B + C) = A · B + A · C. (1.12)

Geometrically the dot product is the length of the projection of A onto the
direction of B times the magnitude B, or equivalently the length of the
projection of B onto the direction of A times the magnitude A.

4. Cross product (or vector product):

A × B ≡ AB sin θn̂ (1.13)



where the vector n̂ has unit length (unit vector)


|n̂| = 1 (1.14)
which is non-commutative (or anti-commutative)
A×B=−B×A (1.15)
and distributive
A × (B + C) = A × B + A × C. (1.16)
Geometrically the magnitude of the vector A × B is the area of the parallelogram
generated by A and B, and the vector points in the direction n̂ perpendicular
to both A and B chosen by the right-hand rule (just a convention). Note
that the cross product exists only in three- and seven-dimensional spaces.

1.1.2 Components
It is convenient to write vectors in the component form
A = (Ax , Ay , Az ) = Ax x̂ + Ay ŷ + Az ẑ (1.17)
where x̂, ŷ and ẑ are unit vectors in the direction of positive x, y and z axes.
Then,
1. Addition:
A+B = (Ax x̂ + Ay ŷ + Az ẑ)+(Bx x̂ + By ŷ + Bz ẑ) = (Ax + Bx ) x̂+(Ay + By ) ŷ+(Az + Bz ) ẑ
(1.18)
2. Multiplication by scalar:
aA = a (Ax x̂ + Ay ŷ + Az ẑ) = (aAx ) x̂ + (aAy ) ŷ + (aAz ) ẑ (1.19)

3. Dot product:
A · B = (Ax x̂ + Ay ŷ + Az ẑ) · (Bx x̂ + By ŷ + Bz ẑ)
= Ax x̂ · (Bx x̂ + By ŷ + Bz ẑ) + Ay ŷ · (Bx x̂ + By ŷ + Bz ẑ) + Az ẑ · (Bx x̂ + By ŷ + Bz ẑ)
= Ax Bx + Ay By + Az Bz (1.20)
since
x̂ · x̂ = ŷ · ŷ = ẑ · ẑ = 1 (1.21)
and
x̂ · ŷ = ŷ · ẑ = ẑ · x̂ = 0. (1.22)
Note that
A = √(Ax² + Ay² + Az²) = √(A · A). (1.23)

4. Cross product:

A × B = (Ax x̂ + Ay ŷ + Az ẑ) × (Bx x̂ + By ŷ + Bz ẑ)
= Ax x̂ × (Bx x̂ + By ŷ + Bz ẑ) + Ay ŷ × (Bx x̂ + By ŷ + Bz ẑ) + Az ẑ × (Bx x̂ + By ŷ + Bz ẑ)
= (Ay Bz − Az By) x̂ + (Az Bx − Ax Bz) ŷ + (Ax By − Ay Bx) ẑ

= det ⎛ x̂  ŷ  ẑ  ⎞
      ⎜ Ax Ay Az ⎟ (1.24)
      ⎝ Bx By Bz ⎠

since
x̂ × x̂ = ŷ × ŷ = ẑ × ẑ = 0 (1.25)
and (for the right-handed coordinate system)

x̂ × ŷ = ẑ,   ŷ × ẑ = x̂,   ẑ × x̂ = ŷ (1.26)

ŷ × x̂ = −ẑ,   ẑ × ŷ = −x̂,   x̂ × ẑ = −ŷ. (1.27)
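These multiplication rules are easy to check numerically. Here is a minimal plain-Python sketch (the example vectors A and B are arbitrary choices, not from the text) verifying that the component formulas reproduce the geometric definitions (1.10) and (1.13):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(dot(a, a))

A = (1.0, 2.0, 2.0)      # arbitrary example vectors
B = (3.0, 0.0, 4.0)
theta = math.acos(dot(A, B) / (norm(A) * norm(B)))

# A · B = AB cos θ by construction of theta; check |A × B| = AB sin θ (1.13)
assert abs(norm(cross(A, B)) - norm(A) * norm(B) * math.sin(theta)) < 1e-12
# and that A × B is perpendicular to both A and B
assert abs(dot(cross(A, B), A)) < 1e-12
assert abs(dot(cross(A, B), B)) < 1e-12
```

The same helper functions work for any pair of three-vectors, since the component formulas make no reference to a particular choice of A and B.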

It follows that the so-called scalar triple product

C · (A × B) = (Ay Bz − Az By) Cx + (Az Bx − Ax Bz) Cy + (Ax By − Ay Bx) Cz

= det ⎛ Cx Cy Cz ⎞
      ⎜ Ax Ay Az ⎟ (1.28)
      ⎝ Bx By Bz ⎠

is nothing but the (signed) volume of a parallelepiped generated by A, B and C and

A · (B × C) = B · (C × A) = C · (A × B) . (1.29)

There is also a vector triple product

A × (B × C) = B (A · C) − C (A · B) , (1.30)

but you are not required to memorize these formulas since they can always
be re-derived from the components representation of vectors.
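Both triple-product identities can indeed be re-derived (or at least spot-checked) from the component representation, as the text suggests. A short plain-Python sketch, with arbitrarily chosen example vectors:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def scale(c, a):
    return tuple(c * x for x in a)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

A, B, C = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 10.0)

# cyclic symmetry of the scalar triple product, eq. (1.29)
assert dot(A, cross(B, C)) == dot(B, cross(C, A)) == dot(C, cross(A, B))

# the "BAC-CAB" rule, eq. (1.30): A × (B × C) = B (A·C) − C (A·B)
assert cross(A, cross(B, C)) == sub(scale(dot(A, C), B), scale(dot(A, B), C))
```

With integer-valued components all the arithmetic is exact in floating point, so plain equality comparisons are safe here.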

1.1.3 Notations
Let us now introduce some notations:

1. Position vector is a vector

r = xx̂ + yŷ + zẑ (1.31)



describes the position of a point (x, y, z) relative to the origin (whose
coordinates are (0, 0, 0)). Its magnitude is

r = |r| = √(x² + y² + z²) (1.32)

and the unit vector in the direction of r is

r̂ = r/r = (x x̂ + y ŷ + z ẑ)/√(x² + y² + z²). (1.33)

2. Separation vector is a vector

s ≡ r − r′ = (x − x′ ) x̂ + (y − y ′) ŷ + (z − z ′ ) ẑ (1.34)

describes the position of a point (x, y, z) relative to the source point (whose
coordinates are (x′, y′, z′)). Its magnitude is

s = |r − r′| = √((x − x′)² + (y − y′)² + (z − z′)²) (1.35)

and the unit vector in the direction of s is

ŝ = s/s = ((x − x′) x̂ + (y − y′) ŷ + (z − z′) ẑ)/√((x − x′)² + (y − y′)² + (z − z′)²). (1.36)

3. Displacement vector is an infinitesimal vector

dr ≡ dxx̂ + dy ŷ + dzẑ (1.37)

describes the displacement from point (x, y, z) to point (x + dx, y + dy, z + dz).
What is the magnitude of dr? Is there a unit vector in the direction
of dr?

1.2 Differential Calculus


1.2.1 Gradient
Consider a function of a single variable f(x); one can expand it around
some point x₀ as
f(x) = f(x₀) + (df/dx)|x=x₀ (x − x₀) + (1/2)(d²f/dx²)|x=x₀ (x − x₀)² + .... (1.38)

If we only keep the linear term, then

f(x) − f(x₀) ≈ (df/dx)|x=x₀ (x − x₀) (1.39)

which in differential form is simply

df = (df/dx) dx. (1.40)

Similarly, for a function of three variables f(x, y, z), the expansion to linear
order is

f(x, y, z) − f(x₀, y₀, z₀) ≈ (∂f/∂x)(x − x₀) + (∂f/∂y)(y − y₀) + (∂f/∂z)(z − z₀) (1.41)

where the partial derivatives are evaluated at (x₀, y₀, z₀),
or in differential form
df = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz. (1.42)
This can also be rewritten as a dot product of two vectors

df = ((∂f/∂x) x̂ + (∂f/∂y) ŷ + (∂f/∂z) ẑ) · (dx x̂ + dy ŷ + dz ẑ)
   = (∂f/∂x, ∂f/∂y, ∂f/∂z) · (dx, dy, dz)
   = ((∂/∂x, ∂/∂y, ∂/∂z) f) · (dx, dy, dz), (1.43)
where in the last line a vector-like operator acts on the function f to produce
a vector. The vector is called a gradient of f defined as
∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z) = (∂f/∂x) x̂ + (∂f/∂y) ŷ + (∂f/∂z) ẑ. (1.44)

Geometrically the gradient ∇f is a vector pointing in the direction of the
(local) maximum increase of the function f, and its magnitude gives the rate of
that increase. At a local maximum, minimum or saddle point the gradient of a
function is the zero vector.
It is also useful to define a vector-like operator known as the del (or nabla)
operator

∇ ≡ (∂/∂x, ∂/∂y, ∂/∂z). (1.45)

Then gradients can be produced by acting with nabla on functions

∇f = (∂/∂x, ∂/∂y, ∂/∂z) f = (∂f/∂x, ∂f/∂y, ∂f/∂z) (1.46)

where ∇ is treated as a vector quantity and f is treated as a scalar quantity.
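The differential relation df = ∇f · dl translates directly into a numerical recipe: approximate each partial derivative in (1.44) by a central difference. A plain-Python sketch (the test function f and the evaluation point are arbitrary examples):

```python
import math

def grad(f, p, h=1e-6):
    """Central-difference approximation to the gradient (1.44)."""
    x, y, z = p
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

f = lambda x, y, z: x * y + math.sin(z)      # arbitrary example scalar field
g = grad(f, (1.0, 2.0, 0.5))
# the exact gradient of this f is (y, x, cos z) = (2, 1, cos 0.5)
assert abs(g[0] - 2.0) < 1e-6
assert abs(g[1] - 1.0) < 1e-6
assert abs(g[2] - math.cos(0.5)) < 1e-6
```

The step size h trades truncation error against floating-point rounding; 1e-6 is a reasonable middle ground for double precision.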

1.2.2 Divergence and Curl


One can also imagine a vector function which has three values Ax(x′, y′, z′),
Ay(x′, y′, z′) and Az(x′, y′, z′) at each point in space, or equivalently a
three-component vector v(x′, y′, z′) attached to each point (x′, y′, z′).
Mathematically speaking, vectors are elements of the tangent space T_(x′,y′,z′)M
at a given point (x′, y′, z′) on a manifold M (in our case the 3-dimensional
Euclidean space M = R³), and vector fields are elements of the tangent bundle,
v ∈ TM. One can also think of the nabla operator as a vector field operator,
although it is not usually called that:

∇ ≡ ( (∂/∂x)|_(x′,y′,z′), (∂/∂y)|_(x′,y′,z′), (∂/∂z)|_(x′,y′,z′) ). (1.47)
(Take your time and think what it means). We shall omit writing (x′ , y ′, z ′ ),
but it is assumed that the scalar and vector quantities are fields.
Given scalar fields f, g, ... vector fields v, u, ... and a vector operator ∇ one
can do the usual vector manipulations at each point separately to produce
new fields:
1. Addition:
A + B = (Ax + Bx, Ay + By, Az + Bz) (1.48)

2. Multiplication by scalar:

aA = (aAx, aAy, aAz) (1.49)

or

∇a = (∂/∂x, ∂/∂y, ∂/∂z) a = (∂a/∂x, ∂a/∂y, ∂a/∂z) (1.50)

which is the gradient of a scalar field a, itself a vector field as we have
already mentioned.

3. Dot product (or scalar product):

A · B = Ax Bx + Ay By + Az Bz (1.51)

or

∇ · A = (∂/∂x, ∂/∂y, ∂/∂z) · (Ax, Ay, Az) = ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z (1.52)

which is a scalar field called the divergence of a vector A. Geometrically
the divergence measures the amount by which the lines of the vector field
diverge from each other.

4. Cross product (or vector product):

A × B = det ⎛ x̂  ŷ  ẑ  ⎞
            ⎜ Ax Ay Az ⎟ (1.53)
            ⎝ Bx By Bz ⎠

or

∇ × A = det ⎛ x̂    ŷ    ẑ    ⎞
            ⎜ ∂/∂x ∂/∂y ∂/∂z ⎟ (1.54)
            ⎝ Ax   Ay   Az   ⎠

which is a vector field called the curl of a vector A. Geometrically the curl
measures the amount by which the lines of the vector field curl around a
given point.

According to the Helmholtz theorem, knowledge of the divergence ∇ · A and the
curl ∇ × A of some vector field A is sufficient to determine the vector field
itself (given that both ∇ · A and ∇ × A fall off faster than 1/r² as r → ∞).
Using the definitions of gradient (1.50), divergence (1.52) and curl (1.54) it is
straightforward to derive different product rules

∇(ab) = a∇b + b∇a
∇(A · B) = A × (∇ × B) + B × (∇ × A) + (A · ∇) B + (B · ∇) A
∇ · (aA) = a∇ · A + A · ∇a
∇ · (A × B) = B · (∇ × A) − A · (∇ × B)
∇ × (aA) = a (∇ × A) − A × (∇a)
∇ × (A × B) = (B · ∇) A − (A · ∇) B + A (∇ · B) − B (∇ · A) (1.55)

and quotient rules

∇(a/b) = (b∇a − a∇b)/b²
∇ · (A/a) = (a(∇ · A) − A · (∇a))/a²
∇ × (A/a) = (a(∇ × A) + A × (∇a))/a². (1.56)

From the definitions one can also derive expressions for second derivatives,
the most useful of which is the Laplacian operator

∇2 a ≡ ∇ · (∇a) . (1.57)

It is also extremely important to remember that the curl of a gradient or the
divergence of a curl is always zero

∇ × (∇a) = (0, 0, 0)
∇ · (∇ × A) = 0. (1.58)
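Both identities in (1.58) can be spot-checked numerically by nesting central differences: the mixed second partials then cancel identically, so the results vanish to rounding accuracy. A plain-Python sketch (the fields f and A below are arbitrary examples):

```python
import math

def pd(f, p, i, h=1e-4):
    """Central-difference partial derivative of f(x, y, z) along axis i."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(*q1) - f(*q2)) / (2 * h)

def grad(f):
    return [lambda x, y, z, i=i: pd(f, (x, y, z), i) for i in range(3)]

def curl(F):
    Fx, Fy, Fz = F
    return [lambda x, y, z: pd(Fz, (x, y, z), 1) - pd(Fy, (x, y, z), 2),
            lambda x, y, z: pd(Fx, (x, y, z), 2) - pd(Fz, (x, y, z), 0),
            lambda x, y, z: pd(Fy, (x, y, z), 0) - pd(Fx, (x, y, z), 1)]

def div(F):
    return lambda x, y, z: sum(pd(F[i], (x, y, z), i) for i in range(3))

f = lambda x, y, z: x * y * z + math.sin(x)   # arbitrary scalar field
A = [lambda x, y, z: y * z,                   # arbitrary vector field
     lambda x, y, z: x * x,
     lambda x, y, z: x + z]

p = (0.3, 0.7, 1.1)
for comp in curl(grad(f)):
    assert abs(comp(*p)) < 1e-6               # curl of a gradient vanishes
assert abs(div(curl(A))(*p)) < 1e-6           # divergence of a curl vanishes
```

Because the same symmetric difference stencil is used for both orders of each mixed partial, the cancellation is exact up to floating-point rounding, not just to truncation order.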

1.2.3 Maxwell Equations


Consider a scalar field, which is a single function (in three space and one time
dimensions)

ρ(t′, x′, y′, z′), (1.59)

and three-vector fields, each of which is a collection of three functions (in
three space and one time dimensions)

B ≡ (Bx , By , Bz )
E ≡ (Ex , Ey , Ez )
J ≡ (Jx , Jy , Jz ) . (1.60)

It looks a bit odd and in fact (as you might suspect) there is a more natural
object (the so-called four-vector potential) which is an element of the tangent
bundle of the four-dimensional manifold describing space-time.
Then to promote these vectors to vector fields we should imagine they
are functions of spatial x′ , y ′, z ′ and temporal t′ coordinates

B(t′ , x′ , y ′, z ′ ) ≡ (Bx (t′ , x′ , y ′, z ′ ), By (t′ , x′ , y ′, z ′ ), Bz (t′ , x′ , y ′, z ′ ))


E(t′ , x′ , y ′, z ′ ) ≡ (Ex (t′ , x′ , y ′ , z ′ ), Ey (t′ , x′ , y ′ , z ′ ), Ez (t′ , x′ , y ′, z ′ ))
J(t′ , x′ , y ′, z ′ ) ≡ (Jx (t′ , x′ , y ′, z ′ ), Jy (t′ , x′ , y ′, z ′ ), Jz (t′ , x′ , y ′, z ′ )) .

In writing the fields we usually omit the ugly looking (t′ , x′ , y ′, z ′ ) but the
dependence on space and time coordinates is always implied.
Now if we think of ρ and J as the electric charge and electric current
density (fields) and of electric E and magnetic B fields then there are the
so-called Maxwell equations which relate these fields to each other. In SI
units the famous equations take the following form

∇ · E = ρ/ε₀ (Gauss's law) (1.61)
∇ × B − (1/c²) ∂E/∂t = µ₀ J (Ampère's law) (1.62)
∇ × E + ∂B/∂t = 0 (Faraday's law) (1.63)
∇ · B = 0 (Gauss's law for magnetism) (1.64)

where c = 1/√(ε₀µ₀) is the speed of light. Why light? Because light is nothing
but waves of the electric E and magnetic B fields (electromagnetic waves for
short) propagating in space.
These equations can be derived from a variational principle, and interested
students are encouraged to do so at the end of the course. It turns out
that the fundamental fields are not the electric E and magnetic B fields, but
the scalar V and vector A potential (fields). In terms of these potential fields

∂A
E = −∇V − (1.65)
∂t
B = ∇ × A. (1.66)

The two potentials combined form a four-vector (V, A) which is the element
of the tangent bundle of our four-dimensional space-time.
And to derive (1.61,1.62,1.63,1.64) from first principles (i.e. variational
principle) one should start with a particular Lagrangian written in terms of
V and A and vary it with respect to V and A. We are not going to do this,
but we will assume that the Maxwell equations give a correct description
of electricity and magnetism for macroscopic charges, currents, distances,
energies, etc.
The Maxwell equations describe how charges (stationary ρ or moving J)
generate the electric E and magnetic B fields, but do not describe how the
charges move due to electric and magnetic forces. For that you need an
additional equation known as Lorentz force law:

F = q (E + v × B) . (1.67)

Equation (1.67) together with equations (1.61,1.62,1.63,1.64) describes
everything there is to know in this course, but before we start let us review
the mathematics of integral calculus.

1.3 Integral Calculus


1.3.1 Integrals
For a function of a single variable f(x) there is only one type of integral

∫ₐᵇ f(x) dx (1.68)

but for a function of three variables f(x, y, z) one can integrate over a line
(or path), over an area (or surface), and over a volume:

• Path integral

∫_P v(l) · dl (1.69)

where the integral is taken over some path P from point l = a to point
l = b. (The subscript P is often dropped, but it is always implied that
the integral is over some path.) For example, the work required to move a
particle along some path is given by

W = ∫ F(l) · dl (1.70)

where F is the force acting on the particle.

• Surface integral

∫_S v(l) · da (1.71)

where the integral is taken over some surface S (the subscript is also often
omitted) and da is an infinitesimal patch of area with direction
perpendicular to the surface (lousy, but common notation), with a sign
ambiguity in the definition. For closed surfaces

∮ v(l) · da (1.72)

the convention is that the "infinitesimal area" vector points outwards.


For example, consider the surface integral of

v(x, y, z) = 2xz x̂ + (x + 2) ŷ + y(z² − 3) ẑ (1.73)

over a cubical box of side 2 with one corner at the origin and edges along the
axes. Then there will be six contributions to the integral:

1. x = 2 and da = dy dz x̂ implies v · da = 2xz dy dz = 4z dy dz, and

∫ v · da = 4 ∫₀² dy ∫₀² z dz = 16. (1.74)

2. x = 0 and da = −dy dz x̂ implies v · da = −2xz dy dz = 0, and

∫ v · da = 0. (1.75)

3. y = 2 and da = dx dz ŷ implies v · da = (x + 2) dx dz, and

∫ v · da = ∫₀² (x + 2) dx ∫₀² dz = 12. (1.76)

4. y = 0 and da = −dx dz ŷ implies v · da = −(x + 2) dx dz, and

∫ v · da = −∫₀² (x + 2) dx ∫₀² dz = −12. (1.77)

5. z = 2 and da = dx dy ẑ implies v · da = y(z² − 3) dx dy = y dx dy, and

∫ v · da = ∫₀² dx ∫₀² y dy = 4. (1.78)

6. z = 0 and da = −dx dy ẑ implies v · da = −y(z² − 3) dx dy = 3y dx dy, and

∫ v · da = 3 ∫₀² dx ∫₀² y dy = 12. (1.79)

And the total flux is

∮ v · da = 16 + 0 + 12 − 12 + 4 + 12 = 32. (1.80)

• Volume integral

∫_V f d³x = ∫_V f(x, y, z) dx dy dz (1.81)

which is nothing but a triple integral that can be taken in any order

∫(∫(∫ f(x, y, z) dx) dy) dz = ∫(∫(∫ f(x, y, z) dz) dx) dy = ... (1.82)
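The six-face flux computation (1.74)-(1.80) can be reproduced numerically with a midpoint rule on each face. A plain-Python sketch (the grid resolution n is an arbitrary choice):

```python
def flux_through_cube(v, L=2.0, n=200):
    """Midpoint-rule flux of v = (vx, vy, vz) out of the cube [0, L]^3."""
    h = L / n
    pts = [(i + 0.5) * h for i in range(n)]
    total = 0.0
    for a in pts:
        for b in pts:
            total += h * h * (v(L, a, b)[0] - v(0.0, a, b)[0]    # x = L and x = 0 faces
                              + v(a, L, b)[1] - v(a, 0.0, b)[1]  # y faces
                              + v(a, b, L)[2] - v(a, b, 0.0)[2]) # z faces
    return total

v = lambda x, y, z: (2*x*z, x + 2, y*(z*z - 3))   # the field of eq. (1.73)
print(flux_through_cube(v))   # ≈ 32, matching the face-by-face sum (1.80)
```

For this particular field every face integrand is at most linear in each integration variable, so the midpoint rule is exact up to floating-point rounding.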

1.3.2 Exact differentials


The fundamental theorem describes how to integrate functions which are
exact differentials. For example, if

F(x) = df(x)/dx (1.83)

then

∫ₐᵇ F(x) dx = f(b) − f(a). (1.84)
In higher dimensions this result generalizes to integrating a function

F(x, y, z) = ∇f(x, y, z) (1.85)

over an arbitrary path

∫ₐᵇ (∇f) · dl = f(b) − f(a) (1.86)

connecting points a and b. For example, if the force in equation (1.70) is
conservative (e.g. the gravitational force, but not the friction force)

F = −∇V (1.87)

then the work depends only on the value of the potential at the initial and
final points

W = ∫ₐᵇ F(l) · dl = V(a) − V(b), (1.88)

and over closed paths the work is always zero

W = ∮ F(l) · dl = 0. (1.89)

Note that it is always possible to rewrite a curl-less vector field as a gradient

∇ × F = 0 ⇔ F = −∇V (1.90)

and thus the path independence of work (1.88) and vanishing of work for
closed paths (1.89) would follow automatically. And if the vector field is not
curl-less we can still rewrite it as

∇ × F ̸= 0 ⇒ F = −∇V + ∇ × A.

A straightforward generalization of the same idea leads to Gauss's (or
divergence) theorem

∫_V (∇ · v) d³x = ∮_S v · da (1.91)

and to Stokes's (or curl) theorem

∫_S (∇ × v) · da = ∮_P v · dl. (1.92)

Roughly speaking, Gauss's theorem (1.91) describes two ways to calculate the
number of (vector v) field lines leaving a given volume minus the number of
field lines entering it. One can calculate it either by integrating the
divergence of the field over the volume (as on the left-hand side), or by
integrating the flow of the field lines through the surface (as on the
right-hand side).
Similarly, Stokes's theorem (1.92) describes two ways to calculate the swirling
of the field lines. One can calculate it by integrating the curl of the field
over the area (as on the left-hand side), or by integrating the rotation of the
field lines as we go around the boundary (as on the right-hand side).
For a vector field (1.73)

v(x, y, z) = 2xzx̂ + (x + 2)ŷ + y(z 2 − 3)ẑ (1.93)

we can find

∇ · v = 2z + 2yz (1.94)

∇ × v = det ⎛ x̂    ŷ     ẑ        ⎞
            ⎜ ∂/∂x ∂/∂y  ∂/∂z     ⎟ = (z² − 3) x̂ + 2x ŷ + ẑ. (1.95)
            ⎝ 2xz  x + 2 y(z² − 3) ⎠

It is now easy to check that

∫₀² ∫₀² ∫₀² (2z + 2yz) dx dy dz = 16 + 16 = 32 (1.96)

is the same as the total flux through the boundary (1.80), as correctly
predicted by Gauss's theorem (1.91). Moreover, for the face of the cube at
y = 0,

∫₀² ∫₀² ((z² − 3) x̂ + 2x ŷ + ẑ) · ŷ dx dz = ∫₀² ∫₀² 2x dx dz = 8 (1.97)

or

∮ v · dl = ∫_{z=0} (2xz x̂) · x̂ dx + ∫_{x=2} (y(z² − 3) ẑ) · ẑ dz
         + ∫_{z=2} (2xz x̂) · (−x̂) dx + ∫_{x=0} (y(z² − 3) ẑ) · (−ẑ) dz
       = 0 + 0 + ∫₂⁰ (−4x) dx + 0
       = ∫₀² 4x dx = 8 (1.98)

in agreement with Stokes's theorem (1.92).
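The left-hand side of Gauss's theorem for the field (1.73) can be checked the same way, with a midpoint rule over the volume. A plain-Python sketch (the grid resolution n is an arbitrary choice):

```python
def volume_integral(f, L=2.0, n=50):
    """Midpoint-rule triple integral of f over the cube [0, L]^3."""
    h = L / n
    pts = [(i + 0.5) * h for i in range(n)]
    return sum(f(x, y, z) for x in pts for y in pts for z in pts) * h**3

div_v = lambda x, y, z: 2*z + 2*y*z   # the divergence computed in (1.94)
print(volume_integral(div_v))          # ≈ 32, the flux through the boundary (1.80)
```

Since 2z + 2yz is at most linear in each variable, the midpoint rule again gives the exact answer up to rounding, in agreement with (1.96).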


In conclusion, let us consider an exact differential of a product of two
functions,

d(fg)/dx = f (dg/dx) + g (df/dx), (1.99)

then we can integrate both sides to obtain

∫ₐᵇ (d(fg)/dx) dx = ∫ₐᵇ f (dg/dx) dx + ∫ₐᵇ g (df/dx) dx,

[fg]ₐᵇ = ∫ₐᵇ f (dg/dx) dx + ∫ₐᵇ g (df/dx) dx, (1.100)

or

∫ₐᵇ f (dg/dx) dx = −∫ₐᵇ g (df/dx) dx + [fg]ₐᵇ. (1.101)

Thus we can replace integration of the function f (dg/dx) with integration of
the function −g (df/dx) plus a boundary term, which is often set to zero.
Expression (1.101) is known as integration by parts.
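Identity (1.101) is easy to test numerically with any quadrature rule. A plain-Python sketch using composite Simpson integration (the pair f = x², g = sin x on [0, 1] is an arbitrary example):

```python
import math

def simpson(func, a, b, n=1000):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += func(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

f,  df = (lambda x: x**2),        (lambda x: 2*x)          # arbitrary example pair
g,  dg = (lambda x: math.sin(x)), (lambda x: math.cos(x))

a, b = 0.0, 1.0
lhs = simpson(lambda x: f(x) * dg(x), a, b)
rhs = -simpson(lambda x: g(x) * df(x), a, b) + f(b)*g(b) - f(a)*g(a)
assert abs(lhs - rhs) < 1e-10     # integration by parts, eq. (1.101)
```

The identity holds exactly for the true integrals; the tolerance only absorbs the (tiny) quadrature error of the Simpson rule.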
Although trivial, integration by parts is an extremely useful tool which can
also be generalized to the more complicated integrals described above with the
use of either Gauss's or Stokes's theorems. For example,

∇ · (f A) = f ∇ · A + A · ∇f (1.102)

implies

∫_V f (∇ · A) d³x = −∫_V A · (∇f) d³x + ∮_S f A · da (1.103)

and

∇ × (f A) = f (∇ × A) − A × (∇f) (1.104)

implies

∫_S f (∇ × A) · da = −∫_S (A × (∇f)) · da + ∮_P f A · dl. (1.105)

1.3.3 Generalized functions


There is a special class of functions known as generalized functions (or
distributions). The most useful example of such a function is the (Dirac)
δ-function. Strictly speaking it is not a function, as it only makes sense to
talk about the δ-function when it appears inside an integral. In one dimension
it can be defined by the following expression

∫_{−∞}^{∞} δ(x − a) f(x) dx ≡ f(a) (1.106)

where f(x) is an arbitrary function. Sometimes it is convenient to express the
δ-function as a derivative of the Heaviside step function, i.e.

δ(x) = dH(x)/dx. (1.107)
In three dimensions it is defined as a product of three delta functions

δ⁽³⁾(r) = δ(x)δ(y)δ(z) (1.108)

so that

∫_{−∞}^{∞} ∫_{−∞}^{∞} ∫_{−∞}^{∞} δ(x − a) δ(y − b) δ(z − c) f(x, y, z) dx dy dz = f(a, b, c) (1.109)

or

∫ f(r) δ⁽³⁾(r − a) d³x = f(a). (1.110)

One can think of the δ-function as a probability distribution for a point
particle located at a, since the integral over the entire space is exactly one

∫_{−∞}^{∞} δ⁽³⁾(r − a) d³x = 1. (1.111)
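In practice the δ-function is often regularized as a narrow normalized Gaussian, which makes the sifting property (1.106) testable numerically. A plain-Python sketch (the width ε, the test function and the integration range are arbitrary choices):

```python
import math

def delta_approx(x, eps=1e-3):
    """Narrow normalized Gaussian, a standard regularization of δ(x)."""
    return math.exp(-x*x / (2*eps*eps)) / (eps * math.sqrt(2*math.pi))

def integrate(func, a, b, n=60000):
    """Simple midpoint rule."""
    step = (b - a) / n
    return sum(func(a + (i + 0.5) * step) for i in range(n)) * step

f = lambda x: x**2 + 1.0                  # arbitrary test function
val = integrate(lambda x: delta_approx(x - 0.5) * f(x), -1.0, 2.0)
assert abs(val - f(0.5)) < 1e-3           # sifting property, eq. (1.106)
```

As ε → 0 the approximation error shrinks like ε², which is the precise sense in which the δ-function only makes sense inside an integral.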

1.4 Transformations
1.4.1 Simple transformations
Clearly the choice of the reference frame or coordinate system (i.e. origin,
axes, handedness, etc.) is arbitrary, and we want the laws of physics not to
depend on this choice. In other words, if we make predictions of how a given
system should behave in one coordinate system then we should have a rule for
how to make predictions in another coordinate system. For that we need a rule
for how to transform different quantities from one system to another. In fact
all of the quantities (such as scalars, vectors, tensors, spinors, etc.) are
distinguished from each other by the way they transform under changes of
coordinates.
What are the possible transformations in Euclidean three dimensional
space (denoted by 3D)? There are:
• 3 translations (or shifts) along x̂, ŷ and ẑ directions

• 3 rotations from x̂ to ŷ, from ŷ to ẑ and from ẑ to x̂.


These are linearly independent transformations (i.e. there is no non-zero
linear combination of these six transformations which leaves the system
untransformed), but one can produce other, linearly dependent, transformations
by forming linear combinations of these six (e.g. shift by −5 meters along ŷ,
rotate by π/5 from ẑ to x̂ and then rotate by π/7 from x̂ to ŷ). How many
linearly independent transformations are there in 1D? 2D? 4D? nD? In n
dimensions there are n translations and as many rotations as there are
distinct pairs of axes (rotations from x to y, from x to z, etc.):

(n − 1) + (n − 2) + ... + 2 + 1 = n(n − 1)/2.
Thus there are

n + n(n − 1)/2 = n(n + 1)/2

independent transformations.
A linear combination of translations can be described by a translation vector

T = (Tx, Ty, Tz) (1.112)

of the old coordinate system (x, y, z) to the new coordinate system (x′, y′, z′).
Note that the translation vector is also expressed in the old (unprimed)
coordinates. Then scalars (e.g. A) and vectors (e.g. A = (Ax, Ay, Az))
transform into A′ and A′ = (A′x′, A′y′, A′z′) such that

A′ = A (1.113)

and

A′ = A. (1.114)

This is just a statement of the fact that vectors parallel transported in the
Euclidean space do not change.
For brevity of notation (and to confuse readers) the primes are often
dropped either for the newly transformed vector (as in books on general
relativity) or for the new coordinates (as in books on electrodynamics) so
that
A′x = Ax
A′y = Ay
A′z = Az (1.115)
The notation is confusing, but it should always be clear from the context
whether we are in the old (unprimed) or the new (primed) coordinate
system.
A linear combination of rotations can be described by a rotation matrix

⎛ Rxx Rxy Rxz ⎞   ⎛ cos φ₁  sin φ₁  0 ⎞ ⎛ 1  0        0       ⎞ ⎛ cos φ₃  0  −sin φ₃ ⎞
⎜ Ryx Ryy Ryz ⎟ ≡ ⎜ −sin φ₁ cos φ₁  0 ⎟ ⎜ 0  cos φ₂   sin φ₂  ⎟ ⎜ 0       1  0       ⎟ (1.116)
⎝ Rzx Rzy Rzz ⎠   ⎝ 0       0       1 ⎠ ⎝ 0  −sin φ₂  cos φ₂  ⎠ ⎝ sin φ₃  0  cos φ₃  ⎠

for some angles φ₁, φ₂ and φ₃. Then scalars and vectors transform as

A′ = A (1.117)

A′_i = Σ_{j=1}^{3} R_{ij} A_j (1.118)

where it is assumed that i = 1, 2, 3 stands correspondingly for x, y, z, or,
using the Einstein summation convention (always sum over repeated indices),

A′_i = R_{ij} A_j. (1.119)
For more complicated objects such as tensors the transformation law would be
written as

M′_{ij} = Σ_{k,l=1}^{3} R_{ik} R_{jl} M_{kl} (1.120)

or (with the Einstein summation convention) simply

M′_{ij} = R_{ik} R_{jl} M_{kl}. (1.121)

Clearly, the simplest transformation rule is for scalar quantities: they do not
change under coordinate transformations.
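The transformation rule A′_i = R_ij A_j is easy to implement directly, and one can check that scalars built from vectors, such as the length, do not change. A plain-Python sketch (the rotation below mixes x̂ into ŷ only; the angle and the example vector are arbitrary):

```python
import math

def rot_z(phi):
    """Rotation mixing x̂ into ŷ, the first factor in (1.116)."""
    c, s = math.cos(phi), math.sin(phi)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, A):
    # A'_i = R_ij A_j, the Einstein sum (1.119) written out
    return [sum(R[i][j] * A[j] for j in range(3)) for i in range(3)]

A = [1.0, 2.0, 3.0]                       # arbitrary example vector
Ap = apply(rot_z(0.3), A)
norm = lambda v: math.sqrt(sum(x * x for x in v))
# scalars such as the length |A| do not change under rotations
assert abs(norm(Ap) - norm(A)) < 1e-12
```

Composing `apply` with products of the three factor matrices in (1.116) gives a general rotation, and the length is preserved by each factor separately.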

1.4.2 General transformations


So far we have been using Cartesian coordinates, but nothing stops us from
describing the points on different manifolds using other coordinate systems.
The two most useful examples are the so-called spherical and cylindrical
coordinates, both of which are generalizations of two-dimensional polar
coordinates to our three-dimensional space:

• Spherical coordinates

x = r sin θ cos φ
y = r sin θ sin φ
z = r cos θ (1.122)

where r ∈ (0, ∞) is the radial distance, θ ∈ (0, π) is the inclination angle
and φ ∈ [0, 2π) is the azimuthal angle.

• Cylindrical coordinates

x = s cos φ
y = s sin φ
z = z (1.123)

where s ∈ (0, ∞) is the radial distance projected onto the x-y plane and
φ ∈ [0, 2π) is the same azimuthal angle as in spherical coordinates.

Then any vector in Cartesian coordinates (x, y, z)

A = Ax x̂ + Ay ŷ + Az ẑ (1.124)

can be expressed in terms of the new coordinates (r, θ, φ)

A = Ar r̂ + Aθ θ̂ + Aφ φ̂ (1.125)

or (s, φ, z)

A = As ŝ + Aφ φ̂ + Az ẑ (1.126)

and vice versa. The transformation matrix is called the Jacobian and can be
calculated for any transformation.
For example, using the inverse Jacobian matrix

Ĵ⁻¹ = ⎛ ∂x/∂r  ∂x/∂θ  ∂x/∂φ ⎞   ⎛ sin θ cos φ  r cos θ cos φ  −r sin θ sin φ ⎞
      ⎜ ∂y/∂r  ∂y/∂θ  ∂y/∂φ ⎟ = ⎜ sin θ sin φ  r cos θ sin φ   r sin θ cos φ ⎟ (1.127)
      ⎝ ∂z/∂r  ∂z/∂θ  ∂z/∂φ ⎠   ⎝ cos θ       −r sin θ         0             ⎠
one can express vectors given in the new coordinates

A ∝ (1, 0, 0),   B ∝ (0, 1, 0),   C ∝ (0, 0, 1) (1.128)

in terms of the old coordinates:

Ĵ⁻¹A ∝ (sin θ cos φ, sin θ sin φ, cos θ) = sin θ cos φ x̂ + sin θ sin φ ŷ + cos θ ẑ,
Ĵ⁻¹B ∝ (r cos θ cos φ, r cos θ sin φ, −r sin θ) = r cos θ cos φ x̂ + r cos θ sin φ ŷ − r sin θ ẑ,
Ĵ⁻¹C ∝ (−r sin θ sin φ, r sin θ cos φ, 0) = −r sin θ sin φ x̂ + r sin θ cos φ ŷ, (1.129)
which can be normalized to define

r̂ ≡ sin θ cos φ x̂ + sin θ sin φ ŷ + cos θ ẑ,
θ̂ ≡ cos θ cos φ x̂ + cos θ sin φ ŷ − sin θ ẑ,
φ̂ ≡ −sin φ x̂ + cos φ ŷ. (1.130)
In fact these normalization constants (1, r and r sin θ) are important as they
appear in the general infinitesimal displacement

dl = dr r̂ + r dθ θ̂ + r sin θ dφ φ̂ (1.131)

often written in terms of the so-called metric tensor g,

dl² = dr² + r² dθ² + r² sin² θ dφ²,   g = ⎛ 1  0   0         ⎞
                                          ⎜ 0  r²  0         ⎟. (1.132)
                                          ⎝ 0  0   r² sin² θ ⎠
Similarly for cylindrical coordinates
x = s cos φ
y = s sin φ
z = z (1.133)

the inverse Jacobian matrix is

Ĵ⁻¹ = ⎛ ∂x/∂s  ∂x/∂φ  ∂x/∂z ⎞   ⎛ cos φ  −s sin φ  0 ⎞
      ⎜ ∂y/∂s  ∂y/∂φ  ∂y/∂z ⎟ = ⎜ sin φ   s cos φ  0 ⎟ (1.134)
      ⎝ ∂z/∂s  ∂z/∂φ  ∂z/∂z ⎠   ⎝ 0       0        1 ⎠

and

Ĵ⁻¹A ∝ (cos φ, sin φ, 0) = cos φ x̂ + sin φ ŷ,
Ĵ⁻¹B ∝ (−s sin φ, s cos φ, 0) = −s sin φ x̂ + s cos φ ŷ,
Ĵ⁻¹C ∝ (0, 0, 1) = ẑ, (1.135)

which can be normalized to define

ŝ ≡ cos φx̂ + sin φŷ


φ̂ ≡ − sin φx̂ + cos φŷ
ẑ ≡ ẑ. (1.136)

with infinitesimal displacement

dl = dsŝ + sdφφ̂ + dzẑ (1.137)

and metric tensor

dl² = ds² + s² dφ² + dz²,   g = ⎛ 1  0   0 ⎞
                                ⎜ 0  s²  0 ⎟. (1.138)
                                ⎝ 0  0   1 ⎠

Note that to transform from Cartesian coordinates to spherical or cylindrical
coordinates one should start with

r = √(x² + y² + z²)
θ = arccos(z/√(x² + y² + z²))
φ = arctan(y/x) (1.139)

or

s = √(x² + y²)
φ = arctan(y/x)
z = z (1.140)

and calculate the Jacobian matrix

Ĵ = ⎛ ∂r/∂x  ∂r/∂y  ∂r/∂z ⎞
    ⎜ ∂θ/∂x  ∂θ/∂y  ∂θ/∂z ⎟ (1.141)
    ⎝ ∂φ/∂x  ∂φ/∂y  ∂φ/∂z ⎠

but the logic is exactly the same.


One can also rewrite gradients, divergences and curls in terms of the new
coordinates, and the simplest of all is the gradient:

∇f = (∂f/∂r) r̂ + (1/r)(∂f/∂θ) θ̂ + (1/(r sin θ))(∂f/∂φ) φ̂ (1.142)

or

∇f = (∂f/∂s) ŝ + (1/s)(∂f/∂φ) φ̂ + (∂f/∂z) ẑ. (1.143)
When transforming the divergence and the curl we must transform both the nabla
operator and the vector, which is a tedious but straightforward exercise
leading to the formulas listed in any electrodynamics book.
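The coordinate maps (1.122) and (1.139) are inverses of each other, which is easy to confirm numerically. A plain-Python sketch (using atan2 rather than the naive arctan(y/x), so that all four quadrants are handled; the test point is arbitrary):

```python
import math

def to_spherical(x, y, z):
    """Cartesian -> spherical, following eq. (1.139)."""
    r = math.sqrt(x*x + y*y + z*z)
    theta = math.acos(z / r)
    phi = math.atan2(y, x)     # quadrant-aware version of arctan(y/x)
    return r, theta, phi

def to_cartesian(r, theta, phi):
    """Spherical -> Cartesian, eq. (1.122)."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

p = (1.0, -2.0, 0.5)           # arbitrary test point (not on the z axis)
q = to_cartesian(*to_spherical(*p))
assert all(abs(a - b) < 1e-12 for a, b in zip(p, q))
```

The round trip fails only at the coordinate singularities (r = 0 and the z axis, where φ is undefined), which is exactly why the angular ranges in the text are written as open intervals.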
