
NOTES ON MATHEMATICS FOR MICHAELMAS TERM 2010

DANIEL E. WORRALL
Abstract. Topics included are: Vectors, Functions and Series, Complex Variables,
Ordinary Differential Equations, Linear Difference Equations, Partial Differentiation
and Matrices
1. Vectors
1.1. Vectors.
i) have magnitude
ii) have direction
iii) obey the parallelogram rule of addition
Vector Properties:
a − b = a + (−b)
a + b = b + a (commutative)
(a + b) + c = a + (b + c) (associative)
k(a + b) = ka + kb (distributive under scalar multiplication)
Terminology:
Perpendicular vectors are said to be orthogonal
Perpendicular UNIT vectors are said to be orthonormal
Miscellaneous notes on Vectors
- Independent of co-ordinate system
- Laws of physics (classical) are all vector laws
- Vectors exist without co-ordinate systems
1.2. Representing Vectors.
Notation 1.
a = a₁i + a₂j + a₃k = (a₁, a₂, a₃)ᵗ = [aᵢ] (i = 1, 2, 3)
Here, aᵢ is called the i-th component of a. Also, i, j & k could technically be any
independent vectors eᵢ, but it is easiest to choose perpendicular ones.
1.3. Scalar (Dot) Product.
Definition 1.
a · b = |a||b| cos θ = aᵗb = bᵗa = a_x b_x + a_y b_y + a_z b_z = b · a
(2 vectors → 1 scalar)
Scalar Product Properties:
a · b = b · a (commutative)
a · (b + c) = a · b + a · c (distributive)
if a and b are:
orthogonal then a · b = 0
parallel then a · b = |a||b|
anti-parallel then a · b = −|a||b|
1.4. Vector (Cross) Product (DATABOOK).
Definition 2.
a × b = |a||b| sin θ n̂ = det[ i j k ; a₁ a₂ a₃ ; b₁ b₂ b₃ ]
(2 vectors → 1 vector)
n̂ is a unit vector perpendicular to the ab-plane in a right-handed co-ordinate system,
i.e. if a = i & b = j then a × b = k
Vector Product Properties:
a × b = −b × a (NON-commutative)
a × (b + c) = a × b + a × c (distributive)
if a and b are:
orthogonal then |a × b| = |a||b|
anti-/parallel then |a × b| = 0
1.5. Scalar Triple Product.
Definition 3.
[a, b, c] ≡ a · (b × c) = det[ a_x a_y a_z ; b_x b_y b_z ; c_x c_y c_z ]
Also (constant cyclic order):
[a, b, c] ≡ [c, a, b] ≡ [b, c, a] ≡ −[a, c, b] ≡ −[c, b, a] ≡ −[b, a, c]
Note: the scalar triple product can be interpreted most easily as the volume of a
parallelepiped with edges a, b & c.
Note: a · b × c = a × b · c; the dot and the cross can be interchanged, hence the [ , , ]
notation only distinguishes cyclic order.
Note: a, b & c should be independent; if they are co-planar then [a, b, c] = 0
Lagrange's Identity:
(1) (a × b) · (c × d) ≡ (a · c)(b · d) − (a · d)(b · c)
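The volume and cyclic-order properties of the triple product are easy to check numerically. A minimal sketch in pure Python (the helper names `cross`, `dot` and `triple` are mine, not from the notes):

```python
def cross(a, b):
    # a x b for 3-tuples
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def triple(a, b, c):
    # scalar triple product [a, b, c] = a . (b x c)
    return dot(a, cross(b, c))

a, b, c = (1, 0, 0), (0, 2, 0), (0, 0, 3)
vol = triple(a, b, c)       # volume of the box spanned by a, b, c
cyclic = triple(c, a, b)    # cyclic permutation: unchanged
swapped = triple(a, c, b)   # swapping two arguments flips the sign
```

For this axis-aligned box `vol` is 6, `cyclic` is also 6, and `swapped` is −6, matching the identities above; coplanar inputs give 0.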
1.6. Vector Triple Product.
a × (b × c) ≡ (a · c)b − (a · b)c
(2) (a × b) × c ≡ (a · c)b − (b · c)a
1.7. Lines and Planes.
1.7.1. Lines. The vector equation of a line is:
(3) r = a + λ(b − a)
where a and b are known position vectors pointing to known points on the line, and
(b − a) = μt̂ (t̂ being a unit vector in the direction b − a, i.e. in the direction of the line).
Therefore an alternative form of the above expression is:
(4) r = a + λt̂
Equation 3 is just vector shorthand for
(x, y, z)ᵗ = (a_x, a_y, a_z)ᵗ + λ(b_x − a_x, b_y − a_y, b_z − a_z)ᵗ
With a little thought, this can be broken down into three equations and recombined into
the following Cartesian equation:
(5) (x − a_x)/(b_x − a_x) = (y − a_y)/(b_y − a_y) = (z − a_z)/(b_z − a_z)
There is another form of representing a line vectorially, using the cross-product:
(6) r × t̂ = a × t̂ = c, i.e. (r − a) × t̂ = 0
Here, c is a constant vector perpendicular to both t̂ and a. The equation basically says
that if we can find an r such that r − a is parallel to t̂, then r lies on the line through a
parallel to t̂.
1.7.2. Planes. The vector equation of a plane is:
(7) (r − a) · n̂ = 0, i.e. r · n̂ = a · n̂ = d
Here, r is any point on the plane, a is a known point on the plane, n̂ is the unit normal
to the plane and d is the shortest distance between the plane and the origin.
The Cartesian equation for this plane is:
(8) lx + my + nz = d
where n̂ = li + mj + nk
Another way to represent a plane is:
(9) r = a + λ(b − a) + μ(c − a)
where b and c are two more known points lying in the plane, such that (b − a) ≠ k(c − a),
and λ and μ are scalar parameters.
It is also possible to write this equation as:
(10) r = αa + βb + γc
where α + β + γ = 1 (this is actually really useful, because it demonstrates the linearity
of the equation and that it has two degrees of freedom)
1.8. Intersection of Planes.
i) If n̂_p and n̂_q are the unit normal vectors to two planes P and Q, defined as
r · n̂_p = p
r · n̂_q = q
such that n̂_p ≠ k n̂_q (i.e. the planes are not parallel), then P and Q intersect
along a line of the form:
r = c + λ(n̂_p × n̂_q)
ii) If, however, n̂_p/p = k(n̂_q/q), where k ≠ 0:
for k ≠ 1 the planes are parallel but distinct, hence there is no solution
for k = 1 the planes are one and the same, hence there are infinitely many
solutions, given by the equation of the planes.
iii) If there is an additional third plane r · n̂_s = s, provided n̂_p ≠ k n̂_q ≠ l n̂_s (i.e. the
normals are non-coplanar), all three planes will intersect at a single point. The
solution of the three equations can be found through simultaneous linear
equations or matrices (section 7), i.e. by solving the equations below:
a₁x + a₂y + a₃z = p
b₁x + b₂y + b₃z = q
c₁x + c₂y + c₃z = s
or, in matrix form,
[ a₁ a₂ a₃ ; b₁ b₂ b₃ ; c₁ c₂ c₃ ] (x, y, z)ᵗ = (p, q, s)ᵗ
(i.e. r = A⁻¹v, where v = (p, q, s)ᵗ)
If all three normal vectors are coplanar (i.e. [a, b, c] = 0), then there will not be a single
point of intersection; in other words, there is no single solution to the three equations.
This may arise because
(1) the planes are parallel but separate → no solution
(2) the planes are one and the same → ∞ly many solutions
(3) they all intersect along a line → ∞ly many solutions
(4) 2 of the planes are parallel → no solution
(5) we get a toblerone → no solution
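The non-coplanar case above can be sketched with Cramer's rule, which is just ratios of 3 × 3 determinants (helper names are mine; in practice a linear-algebra library would do this):

```python
def det3(m):
    # determinant of a 3x3 matrix given as a list of rows
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def intersect_planes(A, v):
    # Solve A r = v by Cramer's rule; only valid when det(A) != 0,
    # i.e. the three plane normals are not coplanar.
    D = det3(A)
    if D == 0:
        raise ValueError("normals are coplanar: no unique intersection point")
    coords = []
    for j in range(3):
        M = [row[:] for row in A]       # copy, then replace column j by v
        for i in range(3):
            M[i][j] = v[i]
        coords.append(det3(M) / D)
    return tuple(coords)

# planes x = 1, y = 2, z = 3 meet at the point (1, 2, 3)
point = intersect_planes([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [1, 2, 3])
```

A zero determinant signals one of the degenerate cases (1)-(5), where no unique point exists.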
2. Functions and Series
2.1. Curve Sketching. Things to consider when curve sketching are:
zeros of the function
turning points
vertical/horizontal asymptotes
oblique asymptotes
It is also useful sometimes to think of a function as being the product/sum of two or
more other functions
e.g. Sketch f(x) = e⁻ˣ sin(2πx)
[Figure 1. Sketching a curve as the function of others: e⁻ˣ sin(2πx) plotted together with its envelope ±e⁻ˣ and with sin(2πx)]
2.2. Hyperbolic Functions.
Definition 4.
sinh x = (eˣ − e⁻ˣ)/2
cosh x = (eˣ + e⁻ˣ)/2
2.2.1. Properties of the hyperbolics.
sinh x is an odd function
cosh x is an even function
As x → ∞, sinh x → ∞ and cosh x → ∞
[Figure 2. Exponentials vs. Hyperbolics: (a) eˣ and e⁻ˣ; (b) sinh x and cosh x]
As x → −∞, sinh x → −∞ and cosh x → +∞
sinh x is strictly increasing, whereas cosh x has a minimum value of 1 at x = 0
sinh x has a single zero at x = 0
2.2.2. Identities of hyperbolic functions. - REMEMBER THESE
(11) sinh x + cosh x = eˣ and cosh x − sinh x = e⁻ˣ
(12) cosh²x − sinh²x = 1
(13) cosh²x + sinh²x = cosh 2x
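Identities (11)-(13) follow directly from the exponential definitions, and a quick numerical spot-check makes a good sanity test when memorising them:

```python
import math

# spot-check identities (11)-(13) at a few sample points
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    s, c = math.sinh(x), math.cosh(x)
    assert math.isclose(s + c, math.exp(x))        # (11), first half
    assert math.isclose(c - s, math.exp(-x))       # (11), second half
    assert math.isclose(c*c - s*s, 1.0)            # (12)
    assert math.isclose(c*c + s*s, math.cosh(2*x)) # (13)
```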
2.2.3. Calculus of hyperbolic functions. As simple as:
(14) d/dx (cosh x) = sinh x and d/dx (sinh x) = cosh x
2.2.4. Other hyperbolic functions. These include: tanh x, coth x, sech x and cosech x
2.2.5. Inverse hyperbolic functions. - REMEMBER THESE
sinh⁻¹x = ln(x + √(x² + 1)) (Note: the square root takes the +ve value only)
(15) cosh⁻¹x = ln(x ± √(x² − 1)), where x ∈ [1, ∞) (obviously)
Note that cosh⁻¹x maps to two values, denoted by the ± symbol, although a calculator
usually only returns the positive one.
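The log forms can be checked against the library inverse-hyperbolic functions; a small sketch (function names `asinh_log`/`acosh_log` are mine):

```python
import math

def asinh_log(x):
    # sinh^-1 via the log formula above
    return math.log(x + math.sqrt(x*x + 1))

def acosh_log(x, sign=+1):
    # cosh^-1 via the log formula in (15); sign picks the +/- branch, x >= 1
    return math.log(x + sign*math.sqrt(x*x - 1))

assert math.isclose(asinh_log(2.0), math.asinh(2.0))
assert math.isclose(acosh_log(3.0), math.acosh(3.0))
# the two cosh^-1 branches are negatives of each other
assert math.isclose(acosh_log(3.0, -1), -math.acosh(3.0))
```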
2.3. Power Series Expansions.
Definition 5. The power series expansion for a function f(x) about the point x = 0 is
the infinite sum of powers of x:
f(x) = a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ + ⋯ = Σ_{i=0}^{∞} aᵢxⁱ
Note: A truncated power series is a good approximation when x is small.
Q: But how small is "small"?
A: Well, it depends on the function; experience will be needed.
2.3.1. Convergence. Often, power series only converge within a certain range, usually
for |x| < r, where r is called the radius of convergence of the series.
2.3.2. Obtaining power series. There are three main methods:
(1) Databook
(2) Taylor's Theorem
(3) Express it as a combination of known series
2.3.3. Taylor's (Maclaurin's) Theorem. For f(x) defined about some point x = a:
(16) f(x) = Σ_{n=0}^{∞} [ (1/n!) (dⁿf/dxⁿ)|_{x=a} ] (x − a)ⁿ
= f(a) + (x − a)f′(a) + ((x − a)²/2!) f″(a) + ⋯ + ((x − a)ⁿ/n!) f⁽ⁿ⁾(a) + …
(When a = 0 this is often called a Maclaurin series)
2.3.4. Manipulating power series. If two power series converge for the same range, then
they can be manipulated in the following ways:
They can be added:
(a₀ + a₁x + a₂x² + …) + (b₀ + b₁x + b₂x² + …) = (a₀ + b₀) + (a₁ + b₁)x + (a₂ + b₂)x² + …
They can be multiplied:
(a₀ + a₁x + a₂x² + …)(b₀ + b₁x + b₂x² + …) = a₀b₀ + (a₀b₁ + a₁b₀)x + (a₀b₂ + a₁b₁ + a₂b₀)x² + …
They can be integrated and differentiated term by term:
d/dx (a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴ + …) = a₁ + 2a₂x + 3a₃x² + 4a₄x³ + …
2.4. When do we expect there to be a valid power series? For a valid power series
to exist, the function should be continuous in the region concerned, which is usually
about x = 0. All of its derivatives f′(x), f″(x), f‴(x) etc. must also be continuous in
this region.
All of the functions below do NOT have power series expansions about x = 0.
[Figure 3. Functions without a power series about x = 0: (a) 1/x², (b) sgn x, (c) |x|, (d) x|x|]
(A) has a singularity at x = 0, so f(0) is meaningless; other examples are ln x or 1/x
(B) is discontinuous at x = 0
(C) f is continuous but f′ is discontinuous at x = 0
(D) f and f′ are continuous but f″ is discontinuous at x = 0
2.5. Approximations. Taking the first few terms of a power series is a good way of
generating an approximation to a function near a given point. The approximations
below increase in accuracy with every extra term taken:
eˣ ≈ 1 + x,  eˣ ≈ 1 + x + x²/2!  and  eˣ ≈ 1 + x + x²/2! + x³/3!
This can either be viewed as a more accurate representation of the function or a good
approximation for a wider range.
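The "extra term, extra accuracy" claim is easy to see numerically. A minimal sketch (the function name `exp_series` is mine):

```python
import math

def exp_series(x, n_terms):
    # truncated Maclaurin series 1 + x + x^2/2! + ... (n_terms terms)
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)   # next term: x^(k+1)/(k+1)!
    return total

x = 0.1
errors = [abs(exp_series(x, n) - math.exp(x)) for n in (2, 3, 4)]
# each extra term shrinks the error: errors is strictly decreasing
```

For small x the error drops by roughly a factor of x per extra term, which is why a truncated series is only useful when x is "small".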
2.6. Big O Notation: being precisely imprecise.
2.6.1. Introduction. If we had the function f(x) = x³ + 6(sin x − x) it would be
incorrect to make the approximation sin x ≈ x for small x and hence f(x) ≈ x³; in
fact f(x) ≈ x⁵/20, but why?
A: Inside the sin x there is actually a hidden −x³ term which cancels with the x³ earlier
in the function. Approximating sin x by x incorrectly gets rid of this cancelling x³ term;
hence we need to keep track of the size of the terms we are ignoring, which we do with
Big O Notation.
Notation 2.
O(xⁿ) ≡ terms of order xⁿ and possibly higher order terms
e.g. sin x = x − x³/3! + O(x⁵)
2.6.2. Operations with Big O.
xO(xⁿ) = O(xⁿ⁺¹)
O(xⁿ)O(xᵐ) = O(xⁿ⁺ᵐ)
O(xⁿ) ± O(xⁿ) = O(xⁿ)
cO(xⁿ) = O(xⁿ)
O(xⁿ) + O(xᵐ) = O(xᵐ) (m ≤ n & m, n ∈ ℤ⁺)
2.7. Limits. A limit is roughly defined as the value which a function tends to when
approaching a certain point; or, if as x gets closer and closer to a, a function f(x) gets
closer and closer to a value ℓ, we say that ℓ is the limit of the function f(x) as x
approaches a, written:
lim_{x→a} f(x) = ℓ
Remember Big O is useful.
2.8. de l'Hôpital's Rule.
If f(0) = g(0) = 0 and g′(0) ≠ 0 then lim_{x→0} f(x)/g(x) = f′(0)/g′(0)
What if g′(0) = 0?
If f′(0) ≠ 0 then f(x)/g(x) → ±∞
If f′(0) = 0 and g′(0) = 0 then
lim_{x→0} f′(x)/g′(x) = f″(0)/g″(0) (provided g″(0) ≠ 0)
If g″(0) = 0, repeat …
2.9. Using Power Series about other points. Just shift the origin, e.g. if you wish
to evaluate sin x about x = π/4, set y = x − π/4, so x = y + π/4 and
sin(π/4 + y) ≈ (√2/2)(1 + y − y²/2! − y³/3! + y⁴/4!) (y small)
This may look different to the power series which you are used to, but remember, a
power series is an expansion in powers of the thing that is meant to be small.
2.10. Miscellaneous Limits.
(a) nˢxⁿ → 0 as n → ∞, if |x| < 1, ∀s ∈ ℝ
(b) xⁿ/n! → 0 as n → ∞
(c) (1 + x/n)ⁿ → eˣ as n → ∞
(d) xˢ ln x → 0 as x → 0, where s > 0
Note: these can be found in Section 2 of the Mathematics Databook
Note: it is also worth noting that logarithmic growth is extremely slow, such that in
general exponentials win over powers and everything wins over log.
3. Complex Variables
Definition 6. z = x + iy = r cos θ + i r sin θ = re^{iθ}, where i = √(−1)
r is the modulus of z, denoted |z|
θ is the argument or phase of z, denoted arg(z) or ph(z)
It is easily seen that:
(17) x = r cos θ, y = r sin θ and r = |z| = √(x² + y²), θ = tan⁻¹(y/x)
Complex Variable Properties
z₁ + z₂ = (x₁ + iy₁) + (x₂ + iy₂) = (x₁ + x₂) + i(y₁ + y₂)
z₁ − z₂ = (x₁ + iy₁) − (x₂ + iy₂) = (x₁ − x₂) + i(y₁ − y₂)
z₁z₂ = (x₁ + iy₁)(x₂ + iy₂) = x₁x₂ − y₁y₂ + i(x₁y₂ + x₂y₁)
z₁/z₂ = (x₁ + iy₁)/(x₂ + iy₂) = [(x₁ + iy₁)(x₂ − iy₂)]/[(x₂ + iy₂)(x₂ − iy₂)]
= [(x₁x₂ + y₁y₂) + i(y₁x₂ − x₁y₂)]/(x₂² + y₂²)
3.1. Complex Conjugation.
Definition 7. complex conjugate = z̄ = z* = x − iy
In an Argand diagram this is just a reflection in the real axis
Complex Conjugation Properties
z z̄ = |z|² (this is real)
ℜ(z) = (z + z̄)/2 and ℑ(z) = (z − z̄)/2i
If z = z̄ then z ∈ ℝ
|z̄| = |z|
conj(z₁ + z₂) = z̄₁ + z̄₂,  conj(z₁z₂) = z̄₁z̄₂,  conj(z₁/z₂) = z̄₁/z̄₂
3.2. de Moivre's Theorem.
Definition 8.
e^{inθ} = (e^{iθ})ⁿ ⟹ cos nθ + i sin nθ = (cos θ + i sin θ)ⁿ
Simple but powerful.
3.3. Principal Value of θ. e^{2πin} = 1, ∀n ∈ ℤ, thus z = re^{iθ} = re^{i(θ+2πn)}, so for a given
z there are an infinity of θs which will produce such a result. Sometimes, in order to
avoid ambiguity, we restrict the range of θ to −π < θ ≤ π.
3.4. Evaluating nth roots. e.g. Find the cube roots of −1:
z³ = −1 = e^{iπ(1+2n)}
z = e^{iπ(1/3 + 2n/3)}, n = 0, 1, 2
These are at the vertices of a regular n-gon centred on the origin.
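The root formula can be evaluated directly with complex exponentials. A quick sketch with Python's `cmath` (variable names are mine):

```python
import cmath
import math

# the three cube roots of -1: e^{i pi (1 + 2n)/3}, n = 0, 1, 2
roots = [cmath.exp(1j * math.pi * (1 + 2*n) / 3) for n in range(3)]

for z in roots:
    assert abs(z**3 - (-1)) < 1e-12   # each really is a cube root of -1
    assert abs(abs(z) - 1) < 1e-12    # all lie on the unit circle
```

The n = 1 case gives the purely real root −1; the other two are the complex conjugate pair e^{±iπ/3}.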
3.5. Polynomials with Real Coefficients. Roots of polynomials with real
coefficients are either real or come as complex conjugate pairs. So if z₁ is a root of a
real-coefficient polynomial p(z) then so is z̄₁, unless z₁ = z̄₁ (z₁ is real).
3.6. Complex Trigonometric and Hyperbolic Functions (DATABOOK).
(18) cosh(iw) = cos(w), cos(iw) = cosh(w), sinh(iw) = i sin(w), sin(iw) = i sinh(w)
3.7. Natural Logarithms ln z.
ln z = ln(re^{i(θ+2πn)}) = ln r + ln e^{i(θ+2πn)} = ln r + iθ + i2πn = ln|z| + i arg z + i2πn
3.8. Make it complex, Make it easier. It is sometimes easiest to break trig
functions back into complex exponentials, but it is better still to take the real or
imaginary part, i.e. cos x = ℜ(e^{ix}) and sin x = ℑ(e^{ix}), e.g.
I = ∫₀^π e⁻²ˣ cos x dx = ℜ[ ∫₀^π e^{(−2+i)x} dx ]
∫₀^π e^{(−2+i)x} dx = (e^{−2π} + 1)(2 + i)/(2² + 1²)
ℜ(…) = I = 2(e^{−2π} + 1)/5
Quick and easy; we can now also, without any effort, evaluate ∫₀^π e⁻²ˣ sin x dx (take
the imaginary part instead).
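The closed form obtained from the complex-exponential trick can be checked against a brute-force numerical integral (a midpoint Riemann sum; all names are mine):

```python
import math

# closed form from the complex-exponential trick: I = 2(e^{-2 pi} + 1)/5
I_closed = 2 * (math.exp(-2*math.pi) + 1) / 5

# midpoint-rule check of the integral of e^{-2x} cos x over [0, pi]
N = 200_000
h = math.pi / N
I_numeric = sum(math.exp(-2*(k + 0.5)*h) * math.cos((k + 0.5)*h) * h
                for k in range(N))
```

The two values agree to well within the O(h²) accuracy of the midpoint rule.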
3.9. Miscellaneous Extras.
(a) Σ cos nθ = ℜ(Σ e^{inθ}), a geometric series in e^{iθ}
(b) Electrical engineering uses j instead of i, because i is reserved for current
4. Ordinary Differential Equations
4.1. Introduction. An ordinary differential equation (ODE) consists of a dependent
variable (e.g. y) and an independent variable (e.g. x); partial differential equations
(PDEs) have more than one independent variable. The highest derivative determines
the order of the equation.
This is a linear nth-order ODE:
(19) dⁿy/dxⁿ + A₁(x) dⁿ⁻¹y/dxⁿ⁻¹ + A₂(x) dⁿ⁻²y/dxⁿ⁻² + ⋯ + A_{n−1}(x) dy/dx + Aₙ(x)y = 0
- For functions of space, we use y′ to mean dy/dx
- For functions of time, we use ẏ to mean dy/dt
The general form of an ODE is:
f(x, y, dy/dx, d²y/dx², d³y/dx³, …) = 0
We shall define a new operator:
(20) Lₙ = Dⁿ + A₁Dⁿ⁻¹ + A₂Dⁿ⁻² + ⋯ + A_{n−1}D + Aₙ = Σ_{k=0}^{n} A_{n−k}Dᵏ, where Dⁿy = y⁽ⁿ⁾ (and A₀ = 1)
4.2. 1st-Order ODEs.
4.2.1. Direct Integration. If
y′ = g′(x)
we can do the following:
∫ (dy/dx) dx = y = ∫ g′(x) dx
Given initial conditions we can evaluate the RHS and get:
y = g(x) + c₁, where c₁ depends on the initial conditions
4.2.2. Separating the variables. If the 1st-order ODE can be written
h(y) dy/dx = g(x)
then it can also be rearranged to
h(y) dy = g(x) dx
so
∫ h(y) dy = ∫ g(x) dx
which has solution
H(y) = G(x) + c
4.2.3. Integrating factor. As in Section 9 of the Mathematics Databook.
Given the 1st-order ODE
dy/dx + p(x)y = q(x)
it can be integrated via an integrating factor
I.F. = e^{∫p(x)dx}
such that
d/dx [ y e^{∫p(x)dx} ] = q(x) e^{∫p(x)dx}
yielding a solution
y = e^{−∫p(x)dx} ∫ q(x) e^{∫p(x)dx} dx
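As a worked check of the integrating-factor recipe, take the example (my own, not from the notes) dy/dx + y = x, so p(x) = 1 and q(x) = x. The integrating factor is eˣ, giving d/dx(y eˣ) = x eˣ and hence y = x − 1 + Ce⁻ˣ. The solution can be verified against the ODE numerically:

```python
import math

# candidate solution of dy/dx + y = x from the integrating-factor method
def y(x, C=2.0):
    return x - 1 + C * math.exp(-x)

# check that dy/dx + y = x holds, using a central-difference derivative
h = 1e-6
for x in (0.0, 0.5, 2.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dydx + y(x) - x) < 1e-6
```

The constant C is fixed by an initial condition, e.g. y(0) = 1 gives C = 2.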
4.3. 2nd-order and higher-order ODEs.
4.3.1. Homogeneity. We define a homogeneous equation as one which has no function
of the independent variable appearing on its own, i.e.
(21) ( Σ_{k=0}^{n} A_{n−k}Dᵏ ) y = Lₙy = 0
4.3.2. Solving homogeneous equations. Solve (with constant coefficients Aᵢ):
dⁿy/dxⁿ + A₁ dⁿ⁻¹y/dxⁿ⁻¹ + A₂ dⁿ⁻²y/dxⁿ⁻² + ⋯ + A_{n−1} dy/dx + Aₙy = 0
Step 1. Try y = Be^{λx}, which yields
λⁿBe^{λx} + A₁λⁿ⁻¹Be^{λx} + A₂λⁿ⁻²Be^{λx} + ⋯ + A_{n−1}λBe^{λx} + AₙBe^{λx} = 0
Divide through by Be^{λx}; the result is called the auxiliary or characteristic equation:
λⁿ + A₁λⁿ⁻¹ + A₂λⁿ⁻² + ⋯ + A_{n−1}λ + Aₙ = 0
Step 2. Find all n solutions λ₁, λ₂, …, λₙ
Step 3. The general solution y_GS depends upon the nature of the λᵢ:
(a) real and distinct λₖ ⟹ y(x) = ⋯ + Cₖe^{λₖx} + …
(b) complex conjugate pair λₖ = α + iβ and λ_{k+1} = α − iβ ⟹
y(x) = ⋯ + Cₖe^{(α+iβ)x} + C_{k+1}e^{(α−iβ)x} + ⋯ = ⋯ + e^{αx}(C′ₖ cos βx + C′_{k+1} sin βx) + …
(c) repeated root λₖ = λ_{k+1} ⟹ y(x) = ⋯ + (Cₖ + C_{k+1}x)e^{λₖx} + …
Step 4. Substitute any initial conditions.
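Steps 1-3 can be sketched on a concrete example (my own): y″ − 3y′ + 2y = 0 has auxiliary equation λ² − 3λ + 2 = 0, with real distinct roots λ = 1 and λ = 2, so y = C₁eˣ + C₂e²ˣ. A numerical check that this really solves the ODE:

```python
import math

# auxiliary equation lambda^2 + A1*lambda + A2 = 0 for y'' - 3y' + 2y = 0
A1, A2 = -3.0, 2.0
disc = math.sqrt(A1*A1 - 4*A2)
lam1, lam2 = (-A1 + disc) / 2, (-A1 - disc) / 2   # roots 2 and 1

def y(x, C1=1.0, C2=1.0):
    return C1 * math.exp(lam1 * x) + C2 * math.exp(lam2 * x)

# verify y'' - 3y' + 2y = 0 with central differences
h = 1e-4
for x in (0.0, 0.3, 1.0):
    d1 = (y(x + h) - y(x - h)) / (2*h)
    d2 = (y(x + h) - 2*y(x) + y(x - h)) / (h*h)
    assert abs(d2 - 3*d1 + 2*y(x)) < 1e-4
```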
4.3.3. Solving inhomogeneous constant coefficient ODEs. Solve:
dⁿy/dxⁿ + A₁ dⁿ⁻¹y/dxⁿ⁻¹ + A₂ dⁿ⁻²y/dxⁿ⁻² + ⋯ + A_{n−1} dy/dx + Aₙy = f(x)
(a) treat as a homogeneous equation, temporarily ignoring f(x), i.e. solve:
dⁿy/dxⁿ + A₁ dⁿ⁻¹y/dxⁿ⁻¹ + A₂ dⁿ⁻²y/dxⁿ⁻² + ⋯ + A_{n−1} dy/dx + Aₙy = 0
(b) the general solution of the homogeneous equation is called the complementary
function y_CF of the inhomogeneous equation
(c) now tackle the f(x): from the databook choose a particular integral, e.g.
y_PI = αe^{6x}, suitable for f(x), and plug this into the original equation to figure
out any coefficients, e.g. α = 1/10 ⟹ y_PI = (1/10)e^{6x}
(d) the general solution of the inhomogeneous equation is y_GS = y_CF + y_PI
(e) apply initial conditions.
Note: If the complementary function shares a term with the RHS of the ODE f(x),
then multiply an extra x-term onto the particular integral and try that. If this fails,
repeat until it works.
4.4. Linear System Concept. The inhomogeneous linear ODE:
aẍ + bẋ + cx = f(t)
may be thought of as a linear system whose input is f(t) and output is x(t).
It can be shown that for multiple inputs f₁(t) and f₂(t):
INPUT f₁(t) + f₂(t) + … produces OUTPUT x₁(t) + x₂(t) + …
5. Linear Difference Equations
These deal with quantities which do not vary continuously but at certain discrete
intervals. An example would be interest applied to a bank account on a monthly basis.
They come in the form:
(22) A₀yₙ + A₁y_{n−1} + A₂y_{n−2} + ⋯ = f(n)
Note: This is known as a linear difference equation (LDE, or recurrence relation)
because the dependent variable y only occurs in linear combinations. The independent
variable is n.
Note: An LDE going back as far as y_{n−i} is called an i-th-order equation.
5.1. 1st-order LDEs. These are easy. Given:
yₙ = r y_{n−1}
The solution is quite simply:
yₙ = arⁿ, where y₀ = a
5.2. 2nd-order and higher-order LDEs.
5.2.1. Solving homogeneous LDEs. Solve:
A₀yₙ + A₁y_{n−1} + A₂y_{n−2} + ⋯ + Aᵢy_{n−i} = 0
Step 1. Try yₙ = Bλⁿ, which yields
A₀Bλⁿ + A₁Bλⁿ⁻¹ + A₂Bλⁿ⁻² + ⋯ + AᵢBλⁿ⁻ⁱ = 0
Cancel the B and divide through by λⁿ⁻ⁱ; the result is called the auxiliary equation:
A₀λⁱ + A₁λⁱ⁻¹ + A₂λⁱ⁻² + ⋯ + Aᵢ = 0
Step 2. Find all i solutions λ₁, λ₂, …, λᵢ
Step 3. The general solution is simply
yₙ^{GS} = C₁λ₁ⁿ + C₂λ₂ⁿ + ⋯ + Cᵢλᵢⁿ
Step 4. Apply initial conditions.
Note: There will be i initial conditions.
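The steps above are exactly how Binet's closed form for the Fibonacci numbers arises (my example): yₙ = y_{n−1} + y_{n−2} has auxiliary equation λ² = λ + 1, with roots (1 ± √5)/2, and the initial conditions y₀ = 0, y₁ = 1 fix the constants:

```python
import math

# roots of the auxiliary equation lambda^2 - lambda - 1 = 0
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

def fib_closed(n):
    # general solution C1*phi^n + C2*psi^n with y0 = 0, y1 = 1
    return round((phi**n - psi**n) / math.sqrt(5))

def fib_recurrence(n):
    # direct iteration of the LDE for comparison
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib_closed(n) == fib_recurrence(n) for n in range(20))
```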
5.2.2. Solving inhomogeneous LDEs. Solve:
A₀yₙ + A₁y_{n−1} + A₂y_{n−2} + ⋯ + Aᵢy_{n−i} = f(n)
(a) treat as a homogeneous equation, temporarily ignoring f(n), i.e. solve:
A₀yₙ + A₁y_{n−1} + A₂y_{n−2} + ⋯ + Aᵢy_{n−i} = 0
(b) the general solution of the homogeneous equation is called the complementary
function yₙ^{CF} of the inhomogeneous equation
(c) now tackle the f(n): choose a particular solution of the same form as f(n), e.g.
yₙ^{PS} = α(1.07)ⁿ, suitable for 63.685(1.07)ⁿ, and plug this into the original
equation to figure out any coefficients, e.g. α = 6285.6 ⟹ yₙ^{PS} = 6285.6(1.07)ⁿ
(d) the general solution of the inhomogeneous equation is yₙ^{GS} = yₙ^{CF} + yₙ^{PS}
(e) apply initial conditions.
Note: Watch out for cases where f(n) contains part of the complementary function, in
which case multiply the particular solution by n repeatedly until it works.
Note: Notice the similarities between LDEs and ODEs.
5.3. Boundary conditions. Given yₙ^{GS} = 2ⁿA + (−1/2)ⁿB, find the solution which is
bounded for large n and for which y₀ = 1.
As n → ∞, 2ⁿ → ∞, hence A = 0
and using y₀ = 1:
y₀ = (−1/2)⁰B = 1 ⟹ B = 1
The solution is
yₙ = (−1/2)ⁿ
6. Partial Differentiation
Definition 9. For a function f(x) = f(x₁, x₂, …, xₙ) the partial derivative is
∂f/∂xᵢ (x) = lim_{δxᵢ→0} [ f(x₁, x₂, …, xᵢ + δxᵢ, …, xₙ) − f(x₁, x₂, …, xᵢ, …, xₙ) ] / δxᵢ
= lim_{δxᵢ→0} [ f(x + δxᵢ) − f(x) ] / δxᵢ
6.1. The Total Differential.
(23) δf ≈ (∂f/∂x₁)δx₁ + (∂f/∂x₂)δx₂ + ⋯ + (∂f/∂xₙ)δxₙ
In 2D Euclidean space ℝ²:
(24) δf = (∂f(x, y)/∂x)δx + (∂f(x, y)/∂y)δy + O(δx², δxδy, δy²)
Taking the limit as δxⱼ → 0 for j = 1 to n yields
(25) df = (∂f/∂x₁)dx₁ + (∂f/∂x₂)dx₂ + ⋯ + (∂f/∂xₙ)dxₙ
6.2. The Total Derivative.
(26) df/dxᵢ = (∂f/∂x₁)(dx₁/dxᵢ) + (∂f/∂x₂)(dx₂/dxᵢ) + ⋯ + (∂f/∂xᵢ) + ⋯ + (∂f/∂xₙ)(dxₙ/dxᵢ)
Notice how the partial derivative ∂f/∂xᵢ only forms part of the RHS.
6.3. The Directional Derivative. The derivative of a function f in a particular
direction u = (u₁, u₂, …, uₙ)ᵗ is
(27) D_u f(x) = ∇f · û = lim_{h→0⁺} [ f(x₁ + hu₁, x₂ + hu₂, …, xₙ + huₙ) − f(x₁, x₂, …, xₙ) ] / (h|u|)
= lim_{h→0⁺} [ f(x + hu) − f(x) ] / (h|u|)
= (∂f/∂x₁)(u₁/|u|) + (∂f/∂x₂)(u₂/|u|) + ⋯ + (∂f/∂xₙ)(uₙ/|u|)
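Equation 27 says the one-sided difference quotient along û converges to ∇f · û. A small numerical sketch for f(x, y) = x²y, whose gradient is (2xy, x²) (example and names are mine):

```python
import math

def f(x, y):
    return x*x*y

x0, y0 = 1.0, 2.0
u = (3.0, 4.0)
norm = math.hypot(*u)

grad = (2*x0*y0, x0*x0)                           # analytic gradient (4, 1)
D_analytic = (grad[0]*u[0] + grad[1]*u[1]) / norm  # grad f . u / |u|

h = 1e-6                                           # one-sided difference, h -> 0+
D_numeric = (f(x0 + h*u[0]/norm, y0 + h*u[1]/norm) - f(x0, y0)) / h
```

Here both values come out as 16/5 = 3.2, the rate of change of f per unit distance in the direction of u.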
6.4. The Chain Rule. This is the same as in single-variable calculus:
For f(u(x, y)):
∂f/∂x = (df/du)(∂u/∂x)
∂f/∂y = (df/du)(∂u/∂y)
For f(u(x, y), v(x, y)):
∂f/∂x = (∂f/∂u)(∂u/∂x) + (∂f/∂v)(∂v/∂x)
∂f/∂y = (∂f/∂u)(∂u/∂y) + (∂f/∂v)(∂v/∂y)
6.5. Normal Vector to a Line/Surface. Consider a surface in ℝ³ defined by
f(x, y, z) = constant;
this can be parametrised as
f(g(s, t), h(s, t), l(s, t)) = constant.
Now the tangent vector to this surface in the s-direction is defined as
(28) t_s = (∂x/∂s)i + (∂y/∂s)j + (∂z/∂s)k
and
(29) ∇f = (∂f/∂x)i + (∂f/∂y)j + (∂f/∂z)k
Differentiating f wrt s yields
(∂f/∂x)(∂x/∂s) + (∂f/∂y)(∂y/∂s) + (∂f/∂z)(∂z/∂s) = 0
which is another way of writing
∇f · t_s = 0
hence ∇f is perpendicular to the surface f(x, y, z) = constant.
Note: it is equally possible to have taken the tangent in the t-direction and
differentiated the function of the surface wrt t.
6.6. Taylor Series In More Than One Variable. The Taylor series expansion of a
function f about the point (a₁, …, a_d) is:
(30) f(x₁, x₂, …, x_d) = Σ_{n₁=0}^{∞} ⋯ Σ_{n_d=0}^{∞} [ (x₁ − a₁)^{n₁} ⋯ (x_d − a_d)^{n_d} / (n₁! ⋯ n_d!) ] [ ∂^{n₁+⋯+n_d} f(a₁, …, a_d) / ∂x₁^{n₁} ⋯ ∂x_d^{n_d} ]
Don't worry, this form is only for reference; the nicer forms for 2 and 3 dimensions are
in the databook.
7. Matrices
Terminology
- The square matrix I, where x = Ix and M = IM = MI (x being a column
vector and M a square matrix), is called the identity matrix; its entries are
i_{ij} = δ_{ij}, where δ_{ij} is the Kronecker delta.
- If A = Aᵗ then A is said to be symmetric.
- If A = −Aᵗ then A is said to be antisymmetric or skew-symmetric.
- The columns or rows of a matrix may be thought of as individual vectors qᵢ.
7.1. Matrix Multiplication. Defined as
(31) [AB]_{ij} = Σ_{all k} a_{ik}b_{kj}
7.2. Matrix Properties.
(AB)C = A(BC) (associative)
(A+B)C = AC+BC (distributive over addition)
AB ≠ BA (NON-commutative, in general)
7.3. Transposition. As already known, the transpose of a matrix is defined as
(Aᵗ)_{ij} = a_{ji}
For two matrices
(AB)ᵗ = BᵗAᵗ
Proof.
[(AB)ᵗ]_{ij} = (AB)_{ji} = Σₖ a_{jk}b_{ki} = Σₖ b_{ki}a_{jk} = Σₖ (Bᵗ)_{ik}(Aᵗ)_{kj} = [BᵗAᵗ]_{ij}
⟹ (AB)ᵗ = BᵗAᵗ ∎
In general
(32) (ABC…F)ᵗ = Fᵗ…CᵗBᵗAᵗ
7.4. Orthogonality. An orthogonal matrix is one whose column vectors qᵢ are
orthonormal, i.e. qᵢ · qⱼ = 0 for i ≠ j, and |qᵢ| = 1.
7.4.1. The inverse of an orthogonal matrix is also its transpose.
Proof. An orthogonal matrix has orthonormal column vectors, hence (writing the rows
of Qᵗ as q₁ᵗ, q₂ᵗ, q₃ᵗ and the columns of Q as q₁, q₂, q₃):
QᵗQ = [ q₁·q₁ q₁·q₂ q₁·q₃ ; q₂·q₁ q₂·q₂ q₂·q₃ ; q₃·q₁ q₃·q₂ q₃·q₃ ] = [ 1 0 0 ; 0 1 0 ; 0 0 1 ] = I
hence,
QᵗQQ⁻¹ = IQ⁻¹ ⟹ Qᵗ = Q⁻¹ ∎
Note: The transpose of an orthogonal matrix is itself orthogonal, which also implies
that the rows of an orthogonal matrix are orthonormal as well as the columns.
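The Qᵗ = Q⁻¹ property can be demonstrated with a 2 × 2 rotation matrix, whose columns (cos φ, sin φ)ᵗ and (−sin φ, cos φ)ᵗ are orthonormal (helper names are mine):

```python
import math

def rotation(phi):
    # 2x2 rotation matrix as nested tuples of rows
    c, s = math.cos(phi), math.sin(phi)
    return ((c, -s), (s, c))

def matmul2(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def transpose2(A):
    return tuple(tuple(A[j][i] for j in range(2)) for i in range(2))

Q = rotation(0.7)
QtQ = matmul2(transpose2(Q), Q)   # should be the identity, since Q^t = Q^-1
```

Up to floating-point rounding, `QtQ` is the 2 × 2 identity for any angle.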
7.5. The Determinant.
(33) D = det(A) = |A| = Σ_{k=1}^{n} a_{jk}(−1)^{j+k}M_{jk} = Σ_{k=1}^{n} a_{jk}C_{jk} (expansion along any row j ∈ [1, n])
= Σ_{j=1}^{n} a_{jk}(−1)^{j+k}M_{jk} = Σ_{j=1}^{n} a_{jk}C_{jk} (expansion along any column k ∈ [1, n])
Here, M_{jk} is the minor of a_{jk} in D and C_{jk} is the cofactor of a_{jk} in D.
For a 3 × 3 matrix the determinant is
det[ a₁₁ a₁₂ a₁₃ ; a₂₁ a₂₂ a₂₃ ; a₃₁ a₃₂ a₃₃ ]
= a₁₁ det[ a₂₂ a₂₃ ; a₃₂ a₃₃ ] − a₁₂ det[ a₂₁ a₂₃ ; a₃₁ a₃₃ ] + a₁₃ det[ a₂₁ a₂₂ ; a₃₁ a₃₂ ]
= a₁₁(a₂₂a₃₃ − a₂₃a₃₂) − a₁₂(a₂₁a₃₃ − a₂₃a₃₁) + a₁₃(a₂₁a₃₂ − a₂₂a₃₁)
Determinant Properties:
(a) interchange of 2 rows multiplies the value of the determinant D by −1 (THINK
scalar triple product and cyclic order)
(b) addition of a multiple of a row to another does not alter D.
(c) multiplication of a row by a constant c multiplies D by c.
(d) !!! det(cA) = cⁿ det(A) !!! for an n × n matrix A
(e) transposition leaves D unaltered.
(f) a zero row/column ⟹ D = 0.
(g) proportional rows/columns ⟹ D = 0, in particular when two rows/columns are
identical.
(h) for a triangular matrix D = Πᵢ a_{ii}, the product of the diagonal entries (not the trace)
(i) for a 2 × 2 matrix D represents the ratio (area of transformed rectangle)/(area of
original rectangle), and for a 3 × 3 the ratio (volume of transformed parallelepiped)/
(volume of original parallelepiped).
7.6. The Inverse.
(34) A⁻¹ = Cᵗ/det(A), where C is the cofactor matrix
This has the property AA⁻¹ = A⁻¹A = I
Note: If a matrix has an inverse it is called non-singular; if it has no inverse it is called
singular and D = 0.
7.7. Mappings and Transformations. Suppose we have a vector
r = (x, y)ᵗ = (r cos θ, r sin θ)ᵗ
A rotation of φ about the z-axis yields
r′ = (x′, y′)ᵗ = (r cos(θ + φ), r sin(θ + φ))ᵗ = … = [ cos φ −sin φ ; sin φ cos φ ] (x, y)ᵗ = Qr
Q is the rotation matrix. Alternatively, it can be seen that
(1, 0)ᵗ → (cos φ, sin φ)ᵗ and (0, 1)ᵗ → (−sin φ, cos φ)ᵗ ⟹ Q = [ cos φ −sin φ ; sin φ cos φ ]
since, for a matrix A with columns a₁, a₂, a₃:
Ai = a₁, Aj = a₂ and Ak = a₃
Note: If det Q = 1, Q is a rotation; if det Q = −1, Q is a reflection.
Note: Rotations and reflections form orthogonal matrices.
Note: These conserve length and the angle between vectors operated on by the same
matrix.
7.8. Change of Basis. A is some linear transformation in a co-ordinate system C,
such that y = Ax. Let there be 2 new vectors x′ and y′ in a new co-ordinate system C′
such that x′ = Qx and y′ = Qy. Q is some co-ordinate transform C → C′, e.g. a
rotation, which means it is orthogonal; hence, x = Qᵗx′ and y = Qᵗy′.
y′ = Qy = QAx = QAQᵗx′ = A′x′
Thus A in C becomes A′ in C′, where
(35) A′ = QAQᵗ
A and A′ are called similar matrices.
The transformation g = A′h = QAQᵗh can be viewed as:
h → (Qᵗ transforms to the old co-ordinate system) → h* → (A applies the
transformation) → g* → (Q transforms back to the new co-ordinate system) → g
Note: The change of basis has no effect on the intrinsic nature of the transformation,
i.e. if A is symmetric and non-singular, so is A′.
Note: (A′)ⁿ = (QAQᵗ)ⁿ = QAQᵗQAQᵗ…QAQᵗ (n times) = QAⁿQᵗ
7.9. Eigenvectors and Eigenvalues. u is an eigenvector of A if u ∥ Au, i.e.
Au = λu, where λ is a scalar called the eigenvalue
7.9.1. Finding eigenvectors. Take the equation:
[ a₁₁ a₁₂ ; a₂₁ a₂₂ ] (u₁, u₂)ᵗ = λ(u₁, u₂)ᵗ
⟹ a₁₁u₁ + a₁₂u₂ = λu₁ and a₂₁u₁ + a₂₂u₂ = λu₂
⟹ (a₁₁ − λ)u₁ + a₁₂u₂ = 0 and a₂₁u₁ + (a₂₂ − λ)u₂ = 0
⟹ [ a₁₁ − λ  a₁₂ ; a₂₁  a₂₂ − λ ] (u₁, u₂)ᵗ = 0
hence
(36) (A − λI)u = 0
DO NOT conclude u = 0 or A = λI, since this will not help; instead, consider the
determinant. If |A − λI| ≠ 0 then u = 0 and this is boring, so we shall demand
|A − λI| = 0, because then we know we can always get two solutions for λ.
|A − λI| = 0 ⟹ (a₁₁ − λ)(a₂₂ − λ) − a₁₂a₂₁ = 0
⟹ λ² − (a₁₁ + a₂₂)λ + |A| = 0
Solve for λ to get the eigenvalues λ₁ and λ₂.
Next, substitute these into one of the original simultaneous equations, e.g.
a₁₁u₁ + a₁₂u₂ = λu₁; but we cannot solve for u uniquely, since u can take any
magnitude. What we want is the direction, hence we take the ratio u₁ : u₂:
u₂/u₁ = (λ − a₁₁)/a₁₂
Note: It is conventional to normalise the eigenvector, i.e. |u| = 1
Note: In the general case of an n × n matrix, solve the simultaneous equations
Auᵢ = λuᵢ to get each element of uᵢ in terms of u₁. CAVEAT: If all goes wrong,
consider the possibility u₁ = 0.
Note: In a multi-dimensional system two eigenvalues may be the same; this means
there is a whole plane of solutions, which will require a bit of jiggery-pokery: just
choose two orthogonal vectors in this plane.
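The 2 × 2 recipe can be sketched end to end on a symmetric example (mine): A = [2 1; 1 2] gives λ² − 4λ + 3 = 0, so λ = 3 and λ = 1, with eigenvector directions from the ratio formula above:

```python
import math

# symmetric example A = [[2, 1], [1, 2]]
a11, a12, a21, a22 = 2.0, 1.0, 1.0, 2.0
tr, det = a11 + a22, a11*a22 - a12*a21
disc = math.sqrt(tr*tr - 4*det)
lam1, lam2 = (tr + disc)/2, (tr - disc)/2   # eigenvalues 3 and 1

def eigvec(lam):
    # direction from u2/u1 = (lam - a11)/a12, then normalise
    u = (1.0, (lam - a11)/a12)
    n = math.hypot(*u)
    return (u[0]/n, u[1]/n)

u1, u2 = eigvec(lam1), eigvec(lam2)
# A u = lam u for each pair, and the eigenvectors are orthogonal
assert abs(a11*u1[0] + a12*u1[1] - lam1*u1[0]) < 1e-12
assert abs(u1[0]*u2[0] + u1[1]*u2[1]) < 1e-12
```

The real eigenvalues and orthogonal eigenvectors here are exactly what the next subsection proves for all symmetric matrices.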
7.9.2. All symmetric matrices have real eigenvalues and orthogonal eigenvectors.
Proof of real eigenvalues. Say λ = α + iβ (eigenvalue) and u = x + iy (eigenvector)
such that:
A(x + iy) = (α + iβ)(x + iy)
(x − iy)ᵗ = xᵗ − iyᵗ is called the conjugate transpose of our eigenvector. Now,
(xᵗ − iyᵗ)A(x + iy) = (xᵗ − iyᵗ)(α + iβ)(x + iy)
xᵗAx + ixᵗAy − iyᵗAx + yᵗAy = (α + iβ)(xᵗx − iyᵗx + ixᵗy + yᵗy)
(the terms −iyᵗx + ixᵗy on the RHS cancel, since yᵗx = xᵗy)
Equating imaginary parts:
xᵗAy − yᵗAx = β(|x|² + |y|²)
The LHS equals zero, since both terms are scalars and xᵗAy = (xᵗAy)ᵗ = yᵗAᵗx; also,
if A = Aᵗ then A is symmetric, hence xᵗAy = yᵗAx. Thus β = 0 and the eigenvalue is
just the real number α. ∎
Proof of orthogonal eigenvectors. If
Au₁ = λ₁u₁ and Au₂ = λ₂u₂
such that
u₂ᵗAu₁ = λ₁u₂ᵗu₁ and u₁ᵗAu₂ = λ₂u₁ᵗu₂
then, since
u₂ᵗu₁ = u₁ᵗu₂
and
u₂ᵗAu₁ = (u₂ᵗAu₁)ᵗ = u₁ᵗAᵗu₂ = u₁ᵗAu₂ (provided A is symmetric)
subtracting the two equations gives
0 = (λ₁ − λ₂)u₁ᵗu₂ = (λ₁ − λ₂) u₁ · u₂ ⟹ u₁ · u₂ = 0, where λ₁ ≠ λ₂. ∎
7.9.3. Eigendecomposition. Consider the following:
[ a₁₁ a₁₂ ; a₂₁ a₂₂ ] (u₁₁, u₂₁)ᵗ = λ₁(u₁₁, u₂₁)ᵗ and [ a₁₁ a₁₂ ; a₂₁ a₂₂ ] (u₁₂, u₂₂)ᵗ = λ₂(u₁₂, u₂₂)ᵗ
These can be combined to form:
[ a₁₁ a₁₂ ; a₂₁ a₂₂ ] [ u₁₁ u₁₂ ; u₂₁ u₂₂ ] = [ u₁₁ u₁₂ ; u₂₁ u₂₂ ] [ λ₁ 0 ; 0 λ₂ ]
AU = UΛ
If A is symmetric, the column vectors of U (its eigenvectors) are orthogonal, but we
have also normalised them for convenience's sake; hence U is orthogonal, such that
Uᵗ = U⁻¹. Hence,
AU = UΛ ⟹ AUUᵗ = UΛUᵗ ⟹ A = UΛUᵗ
y = Ax = UΛUᵗx can be viewed as:
x → (Uᵗ rotates to the co-ordinate system aligned with the eigenvectors) → x* → (Λ
performs a stretch) → y* → (U rotates back to the original co-ordinate system) → y
