Lectures in Engineering Mathematics
George Nakos
The Johns Hopkins University
Fall 2004
2. Matrix Multiplication
Basic Definitions
6.1 Vectors
If n = 1, then A is called a column matrix, an m-vector, or simply a
vector. If m = 1, then A is called a row matrix, an n-row vector, or a
row vector. The entries of vectors are usually called components.
We can add two matrices of the same size by adding the cor-
responding entries. The resulting matrix is the sum of the two
matrices.
Example We have
\[
\begin{bmatrix} 1 & -3 & 0 \\ 2 & -4 & 7 \end{bmatrix}
+ \begin{bmatrix} 0 & 4 & 5 \\ -1 & 4 & -2 \end{bmatrix}
= \begin{bmatrix} 1 & 1 & 5 \\ 1 & 0 & 5 \end{bmatrix}
\]
In general, if A = [a_{ij}] and B = [b_{ij}], for 1 ≤ i ≤ m and 1 ≤ j ≤ n,
then
\[
A + B = [\,a_{ij} + b_{ij}\,]
\]
Example We have
\[
2 \begin{bmatrix} 1 & 0 \\ -3 & 4 \\ 5 & -1 \end{bmatrix}
= \begin{bmatrix} 2 & 0 \\ -6 & 8 \\ 10 & -2 \end{bmatrix}
\]
In general, if A = [a_{ij}], then
\[
cA = [\,c\,a_{ij}\,]
\]
This operation is called scalar multiplication. The multiplier c
is often called a scalar, because it scales A.
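Both operations are entrywise, so they are easy to check numerically. A minimal sketch (using NumPy, which is an assumption of this illustration, not part of the lecture):

```python
import numpy as np

# Matrix addition: add corresponding entries of two same-size matrices.
A = np.array([[1, -3, 0],
              [2, -4, 7]])
B = np.array([[0, 4, 5],
              [-1, 4, -2]])
S = A + B  # entrywise sum, as in the example above

# Scalar multiplication: the scalar c scales every entry of the matrix.
C = 2 * np.array([[1, 0],
                  [-3, 4],
                  [5, -1]])
```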
Lecture 1 / © George Nakos
Example We have
\[
\begin{bmatrix} 1 & -2 \\ 7 & 4 \\ 5 & -5 \\ 8 & 0 \end{bmatrix}
- \begin{bmatrix} 1 & -1 \\ 6 & 3 \\ 7 & 0 \\ -3 & 7 \end{bmatrix}
= \begin{bmatrix} 0 & -1 \\ 1 & 1 \\ -2 & -5 \\ 11 & -7 \end{bmatrix}
\]
2. A + B = B + A (Commutative Law)
3. A + 0 = 0 + A = A
4. A + (−A) = (−A) + A = 0
8. 1A = A
9. 0A = 0
Example We have
\[
\begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}^T
= \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}, \qquad
\begin{bmatrix} a & b & c & d \end{bmatrix}^T
= \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}, \qquad
\begin{bmatrix} 1 \\ 3 \\ -8 \end{bmatrix}^T
= \begin{bmatrix} 1 & 3 & -8 \end{bmatrix}
\]
Theorem
1. (A + B)^T = A^T + B^T
2. (cA)^T = c A^T
3. (A^T)^T = A
Let A be m × k and B be k × n:
\[
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1k} \\
\vdots & \vdots & & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ik} \\
\vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mk}
\end{bmatrix}, \qquad
B = \begin{bmatrix}
b_{11} & \cdots & b_{1j} & \cdots & b_{1n} \\
b_{21} & \cdots & b_{2j} & \cdots & b_{2n} \\
\vdots & & \vdots & & \vdots \\
b_{k1} & \cdots & b_{kj} & \cdots & b_{kn}
\end{bmatrix}
\]
The (i, j) entry of the product AB is
\[
c_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + \cdots + a_{ik} b_{kj}
= \sum_{r=1}^{k} a_{ir} b_{rj}
\]
Examples
\[
\begin{bmatrix} 2 & 0 & 1 \\ 2 & 1 & 2 \end{bmatrix}
\begin{bmatrix} 3 & 2 & 4 \\ -2 & 4 & 5 \\ 0 & 3 & -2 \end{bmatrix}
= \begin{bmatrix} 6 & 7 & 6 \\ 4 & 14 & 9 \end{bmatrix}
\]
\[
\begin{bmatrix} 4 & -1 & -2 & 1 \end{bmatrix}
\begin{bmatrix} 1 \\ -2 \\ 3 \\ 5 \end{bmatrix} = 5
\]
\[
\begin{bmatrix} 1 \\ -2 \\ 3 \\ 5 \end{bmatrix}
\begin{bmatrix} 4 & -1 & -2 & 1 \end{bmatrix}
= \begin{bmatrix}
4 & -1 & -2 & 1 \\
-8 & 2 & 4 & -2 \\
12 & -3 & -6 & 3 \\
20 & -5 & -10 & 5
\end{bmatrix}
\]
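The last two products can be reproduced numerically; a short sketch with NumPy (an assumption of this illustration):

```python
import numpy as np

# A row matrix times a column matrix is 1x1: effectively a scalar.
row = np.array([[4, -1, -2, 1]])       # 1 x 4
col = np.array([[1], [-2], [3], [5]])  # 4 x 1
s = row @ col                          # 1 x 1 matrix

# A column matrix times a row matrix is a full outer-product matrix.
M = col @ row                          # 4 x 4 matrix
```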
6. 0A = 0 and A0 = 0
7. (AB)^T = B^T A^T
" # " #
0 0 1 0
Example A = and B = commute.
1 1 2 3
Example
\[
A_1 = \begin{bmatrix} 1 & -1 \\ -2 & 3 \end{bmatrix}, \quad
A_2 = \begin{bmatrix} 3 & -4 \\ -8 & 11 \end{bmatrix}, \quad
A_3 = \begin{bmatrix} 11 & -15 \\ -30 & 41 \end{bmatrix}, \; \cdots
\]
\[
B_1 = \begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix}, \quad
B_2 = \begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix}, \quad
B_3 = \begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix}, \; \cdots
\]
\[
C_1 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad
C_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad
C_3 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \; \cdots
\]
Each of three appliance outlets receives and sells daily TVs and VCRs
from three factories, according to the following table.
TV VCR
Factory 1 40 50
Factory 2 70 80
Factory 3 60 65
Each outlet charges the following dollar amounts per appliance.
Outlet 1 Outlet 2 Outlet 3
TV 215 258 319
VCR 305 282 264
\[
AB = \begin{bmatrix} 40 & 50 \\ 70 & 80 \\ 60 & 65 \end{bmatrix}
\begin{bmatrix} 215 & 258 & 319 \\ 305 & 282 & 264 \end{bmatrix}
= \begin{bmatrix}
23850 & 24420 & 25960 \\
39450 & 40620 & 43450 \\
32725 & 33810 & 36300
\end{bmatrix}
\]
The (1, 1) entry 40 · 215 + 50 · 305 = 23, 850 is the first outlet’s
revenue from selling all the appliances coming from the first
factory. The remaining entries are interpreted similarly.
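The revenue table is just the matrix product AB; a sketch assuming NumPy:

```python
import numpy as np

# Rows of A: factories; columns: daily TV and VCR counts.
A = np.array([[40, 50],
              [70, 80],
              [60, 65]])
# Rows of B: TV and VCR prices; columns: outlets 1, 2, 3 (in dollars).
B = np.array([[215, 258, 319],
              [305, 282, 264]])
revenue = A @ B  # (i, j): outlet j's revenue from factory i's appliances
```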
\[
\begin{aligned}
3x + 2y + z &= 39 \\
2x + 3y + z &= 34 \\
x + 2y + 3z &= 26
\end{aligned}
\qquad
\begin{aligned}
x_1 + x_2 &= 5 \\
x_1 - 2x_2 &= 6 \\
-3x_1 + x_2 &= 1
\end{aligned}
\qquad
\begin{aligned}
y_1 + y_2 + y_3 &= -2 \\
y_1 - 2y_2 + 7y_3 &= 6
\end{aligned}
\]
Two linear systems with the same solution sets are called equiv-
alent. A solution that consists of zeros only is called a trivial
solution.
\[
\begin{aligned}
x_1 + 2x_2 &= -3 \\
2x_1 + 3x_2 - 2x_3 &= -10 \\
-x_1 + 6x_3 &= 9
\end{aligned}
\]
\[
\begin{bmatrix} 1 & 2 & 0 & -3 \\ 2 & 3 & -2 & -10 \\ -1 & 0 & 6 & 9 \end{bmatrix}
\xrightarrow[R_3 + R_1 \to R_3]{R_2 - 2R_1 \to R_2}
\begin{bmatrix} 1 & 2 & 0 & -3 \\ 0 & -1 & -2 & -4 \\ 0 & 2 & 6 & 6 \end{bmatrix}
\xrightarrow{R_3 + 2R_2 \to R_3}
\begin{bmatrix} 1 & 2 & 0 & -3 \\ 0 & -1 & -2 & -4 \\ 0 & 0 & 2 & -2 \end{bmatrix}
\]
\[
\xrightarrow{R_2 + R_3 \to R_2}
\begin{bmatrix} 1 & 2 & 0 & -3 \\ 0 & -1 & 0 & -6 \\ 0 & 0 & 2 & -2 \end{bmatrix}
\xrightarrow{R_1 + 2R_2 \to R_1}
\begin{bmatrix} 1 & 0 & 0 & -15 \\ 0 & -1 & 0 & -6 \\ 0 & 0 & 2 & -2 \end{bmatrix}
\xrightarrow[(1/2)R_3 \to R_3]{(-1)R_2 \to R_2}
\begin{bmatrix} 1 & 0 & 0 & -15 \\ 0 & 1 & 0 & 6 \\ 0 & 0 & 1 & -1 \end{bmatrix}
\]
Hence x_1 = -15, x_2 = 6, x_3 = -1.
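The same answer can be obtained with a library solver; a sketch assuming NumPy:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system above.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 3.0, -2.0],
              [-1.0, 0.0, 6.0]])
b = np.array([-3.0, -10.0, 9.0])
x = np.linalg.solve(A, b)  # Gaussian elimination (LAPACK) under the hood
```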
\[
x = 9r + 2, \quad y = -4r + 1, \quad z = r, \qquad r \in \mathbb{R}
\]
6.3 No Solutions
Solution: The augmented matrix of the system reduces to
\[
\begin{bmatrix} 2 & -1 & 1 & -2 \\ 0 & 1 & -2 & -5 \\ 0 & 0 & 0 & 5 \end{bmatrix}
\]
The last row corresponds to the false expression 0 = 5. Hence, the
system is inconsistent. Therefore, the planes do not have a common
intersection.
Ax = b (3)
where A is the coefficient matrix, x is the vector of the unknowns,
and b is the vector of constants.
Solution: We have
\[
\begin{bmatrix} 7 & 4 & 5 \\ 2 & -3 & 9 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} 1 \\ -8 \end{bmatrix}
\]
2. The leading entry of each nonzero row after the first occurs to the right
of the leading entry of the previous row.
4. All entries in the column above and below a leading 1 are zero.
2. If the first row has a zero in the column of step 1, interchange it with
one that has a nonzero entry in the same column.
3. Obtain zeros below the leading entry by adding suitable multiples of the
top row to the rows below that.
4. Cover the top row and repeat the same process starting with step 1
applied to the leftover submatrix. Repeat this process with the rest of
the rows, until the matrix is in echelon form.
5. Starting with the last nonzero row work upward: For each row obtain a
leading 1 and introduce zeros above it, by adding suitable multiples to
the corresponding rows.
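The steps above can be coded directly. A sketch in Python with exact rational arithmetic (it merges the downward and upward passes into one sweep, which produces the same reduced row echelon form):

```python
from fractions import Fraction

def rref(rows):
    """Reduce a matrix (list of lists) to reduced row echelon form,
    following the steps above with exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in rows]
    nrows, ncols = len(M), len(M[0])
    pivot_row = 0
    for col in range(ncols):
        # Steps 1-2: find a row at or below pivot_row with a nonzero entry.
        pr = next((r for r in range(pivot_row, nrows) if M[r][col] != 0), None)
        if pr is None:
            continue                                       # no pivot in this column
        M[pivot_row], M[pr] = M[pr], M[pivot_row]          # interchange rows
        piv = M[pivot_row][col]
        M[pivot_row] = [x / piv for x in M[pivot_row]]     # make a leading 1
        # Steps 3 and 5 combined: zeros below *and* above the leading 1.
        for r in range(nrows):
            if r != pivot_row and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == nrows:
            break
    return M
```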
Solution:
\[
\begin{bmatrix} 2 & -6 & 20 & 8 & 8 \\ -1 & 3 & -10 & -4 & -4 \\ 0 & 3 & -6 & -4 & -3 \\ 4 & -9 & 34 & 0 & 1 \end{bmatrix}
\xrightarrow{R_1 \leftrightarrow R_2}
\begin{bmatrix} -1 & 3 & -10 & -4 & -4 \\ 2 & -6 & 20 & 8 & 8 \\ 0 & 3 & -6 & -4 & -3 \\ 4 & -9 & 34 & 0 & 1 \end{bmatrix}
\xrightarrow[R_4 + 4R_1 \to R_4]{R_2 + 2R_1 \to R_2}
\begin{bmatrix} -1 & 3 & -10 & -4 & -4 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 3 & -6 & -4 & -3 \\ 0 & 3 & -6 & -16 & -15 \end{bmatrix}
\]
Moving the zero row to the bottom and continuing:
\[
\begin{bmatrix} -1 & 3 & -10 & -4 & -4 \\ 0 & 3 & -6 & -4 & -3 \\ 0 & 3 & -6 & -16 & -15 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}
\xrightarrow[(-1/12)R_3 \to R_3]{R_3 - R_2 \to R_3}
\begin{bmatrix} -1 & 3 & -10 & -4 & -4 \\ 0 & 3 & -6 & -4 & -3 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}
\xrightarrow[R_2 + 4R_3 \to R_2]{R_1 + 4R_3 \to R_1}
\begin{bmatrix} -1 & 3 & -10 & 0 & 0 \\ 0 & 3 & -6 & 0 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}
\]
\[
\xrightarrow{(1/3)R_2 \to R_2}
\begin{bmatrix} -1 & 3 & -10 & 0 & 0 \\ 0 & 1 & -2 & 0 & 1/3 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}
\xrightarrow[(-1)R_1 \to R_1]{R_1 - 3R_2 \to R_1}
\begin{bmatrix} 1 & 0 & 4 & 0 & 1 \\ 0 & 1 & -2 & 0 & 1/3 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}
\]
Definition Let v1, v2, . . . , vk be given n-vectors and let c1, c2, . . . , ck
be any scalars. The n-vector v
v = c1v1 + c2v2 + · · · + ck vk
is called a linear combination of v1, . . . , vk . The scalars c1, . . . , ck
are called the coefficients of the linear combination. If not all ci
are zero, we have a nontrivial linear combination. If all ci are
zero, we have the trivial linear combination. The trivial linear
combination represents the zero vector.
Solution: We have
−v1 + 3v2 + 4v3 = (−1) v1 + 3v2 + 4v3
v1 + 1.5v2 − 9v3 = 1v1 + (1.5) v2 + (−9) v3
v1 − v3 = 1v1 + 0v2 + (−1) v3
(a) v1 + v2
(b) v2 − v1
(c) 10v1
Solution: (a) We seek c_1, c_2, c_3 not all zero such that
\[
c_1 \begin{bmatrix} 0 \\ -2 \\ 3 \end{bmatrix}
+ c_2 \begin{bmatrix} 1 \\ 2 \\ 7 \end{bmatrix}
+ c_3 \begin{bmatrix} 3 \\ 14 \\ 9 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\]
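Whether such coefficients exist can be checked numerically: put the vectors as columns of a matrix and test whether it is singular. A sketch assuming NumPy:

```python
import numpy as np

# Columns are v1, v2, v3 from the example above.
V = np.array([[0, 1, 3],
              [-2, 2, 14],
              [3, 7, 9]], dtype=float)
d = np.linalg.det(V)  # zero exactly when a nontrivial combination exists

# One nontrivial dependence relation: 4 v1 - 3 v2 + v3 = 0.
combo = 4 * V[:, 0] - 3 * V[:, 1] + V[:, 2]
```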
if c1v1 + · · · + ck vk = 0, then c1 = 0, . . . , ck = 0
In other words, the homogeneous system [v1 · · · vk ] c = 0 has
only the trivial solution. We often say that v1, . . . , vk are linearly
independent.
\[
B = \begin{bmatrix}
1 & 2 & 2 & -1 \\
0 & 1 & -1 & -1 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
\]
Lecture 2 in Engineering Mathematics
4. Matrix Inversion
6. Linear Transformations
(A4) There exists a unique element 0 ∈ V, called the zero of V, such that for
all u in V
u+0=0+u=u
(A5) For each u ∈ V there exists a unique element −u ∈ V, called the negative
or opposite of u, such that
u + (−u) = (−u) + u = 0
The elements of a vector space are called vectors. Axioms (A1) and (M1)
are also expressed by saying that V is closed under addition and is closed
under scalar multiplication. Note that a vector space is a nonempty set,
because it has a zero by (A4).
3. Zero: The zero function 0 is the function whose values are all zero.
0(x) = 0 for all x ∈ R
6.4 Subspaces
Definition A subset W of a vector space V is called a subspace of
V, if W itself is a vector space under the same addition and scalar
multiplication as V . In particular, a subspace always contains the
zero element.
Then
\[
-1 + x - 2x^2 = (c_2 + c_3) + (c_1 + c_2 + c_3)\,x - c_1 x^2 + (c_1 + 2c_2)\,x^3
\]
Equating coefficients of the same powers of x yields the linear system
\[
c_2 + c_3 = -1, \quad c_1 + c_2 + c_3 = 1, \quad -c_1 = -2, \quad c_1 + 2c_2 = 0
\]
with solution c_1 = 2, c_2 = -1, c_3 = 0. Therefore, p is in the span of p_1, p_2, p_3.
Definition The set of vectors {v_1, . . . , v_k} from a vector space V is called
linearly independent if it is not linearly dependent. This is the same as saying
that there is no linear dependence relation among v_1, . . . , v_k. Equivalently,
c_1 v_1 + · · · + c_k v_k = 0 ⇒ c_1 = 0, . . . , c_k = 0
So, every nontrivial linear combination is nonzero.
6.4 Linear Independence Example Show that set {E11 , E12 , E21 , E22 }
is linearly independent in M22 .
Solution: Let
\[
c_1 \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}
+ c_2 \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}
+ c_3 \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}
+ c_4 \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} c_1 & c_2 \\ c_3 & c_4 \end{bmatrix}
= \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
\]
Hence, c1 = c2 = c3 = c4 = 0. So, the set is linearly independent.
Example Are 1 + x, −1 + x, 4 − x2 , 2 + x3 linearly independent in P3 ?
Solution: If a linear combination of these polynomials is the zero polynomial,
then
\[
c_1 (1 + x) + c_2 (-1 + x) + c_3 (4 - x^2) + c_4 (2 + x^3) = 0 \;\Rightarrow\;
(c_1 - c_2 + 4c_3 + 2c_4) + (c_1 + c_2)\,x + (-c_3)\,x^2 + c_4\,x^3 = 0
\]
Equating coefficients yields
c1 − c2 + 4c3 + 2c4 = 0, c1 + c2 = 0, −c3 = 0, c4 = 0
We solve this linear system to get c1 = c2 = c3 = c4 = 0. So, the vectors are
linearly independent in P3 .
6.4 Basis
Definition A subset {v_1, . . . , v_n} of a nonzero vector space V is
a basis of V, if
1. it is linearly independent, and
2. it spans V.
The empty set is, by definition, the only basis of the zero vector
space {0}.
v = c1v1 + · · · + cnvn
6.4 Dimension
dim(V ) = n
6.6 Determinants
" #
a11 a12
Let A = . The determinant, det (A) , of A is the
a21 a22
number
det(A) = a11a22 − a12a21
Let A be
\[
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}
\]
The determinant of A in terms of 2 × 2 determinants is the number
\[
\det(A) = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}
- a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}
+ a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}
\]
6.6 Determinants
First we assign the sign (−1)i+j to the entry aij of A. This is a checkerboard
pattern of ±’s.
\[
\begin{bmatrix}
+ & - & + & \cdots \\
- & + & - & \cdots \\
+ & - & + & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}
\]
Then we pick a row or column and multiply each entry aij of it by the corre-
sponding signed minor (−1)i+j Mij . Lastly, we add all these products.
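Cofactor expansion along the first row translates directly into a short recursive function; a sketch (illustrative only, O(n!) work, fine for small matrices):

```python
def det(A):
    """Determinant by cofactor expansion along the first row.
    The sign (-1)**j is the checkerboard pattern for row 0."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total
```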
2. Let B be obtained from A by multiplying one of its rows (or columns) by a
constant k. Then det(B) = k det(A). For example,
\[
\begin{vmatrix} a_1 & a_2 & a_3 \\ kb_1 & kb_2 & kb_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
= k \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix},
\qquad
\begin{vmatrix} a_1 & a_2 & ka_3 \\ b_1 & b_2 & kb_3 \\ c_1 & c_2 & kc_3 \end{vmatrix}
= k \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
\]
3. Let B be obtained from A by interchanging any two rows (or columns). Then det(B) =
− det(A). For example,
\[
\begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
= - \begin{vmatrix} b_1 & b_2 & b_3 \\ a_1 & a_2 & a_3 \\ c_1 & c_2 & c_3 \end{vmatrix},
\qquad
\begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
= - \begin{vmatrix} a_3 & a_2 & a_1 \\ b_3 & b_2 & b_1 \\ c_3 & c_2 & c_1 \end{vmatrix}
\]
4. Let B be obtained from A by adding a multiple of one row (or column) to another.
Then det(B) = det(A). For example,
\[
\begin{vmatrix} a_1 & a_2 & a_3 \\ ka_1 + b_1 & ka_2 + b_2 & ka_3 + b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
= \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
\]
Cramer’s Rule gives an explicit formula for the solution of a consistent square
system.
Cramer’s Rule If det(A) ≠ 0, then the system Ax = b has a unique solution
x = (x_1, . . . , x_n) given by
\[
x_1 = \frac{\det(A_1)}{\det(A)}, \quad
x_2 = \frac{\det(A_2)}{\det(A)}, \quad \ldots, \quad
x_n = \frac{\det(A_n)}{\det(A)}
\]
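Cramer's Rule is equally direct to code; a sketch assuming NumPy (the function name `cramer` is hypothetical):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i)/det(A), where A_i
    is A with column i replaced by b. Requires det(A) != 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b  # replace column i by the vector of constants
        x[i] = np.linalg.det(Ai) / d
    return x
```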
\[
\left[\begin{array}{ccc|ccc} 1 & 0 & -1 & 1 & 0 & 0 \\ 3 & 4 & -2 & 0 & 1 & 0 \\ 3 & 5 & -2 & 0 & 0 & 1 \end{array}\right]
\sim
\left[\begin{array}{ccc|ccc} 1 & 0 & -1 & 1 & 0 & 0 \\ 0 & 4 & 1 & -3 & 1 & 0 \\ 0 & 5 & 1 & -3 & 0 & 1 \end{array}\right]
\sim
\left[\begin{array}{ccc|ccc} 1 & 0 & -1 & 1 & 0 & 0 \\ 0 & 4 & 1 & -3 & 1 & 0 \\ 0 & 0 & -\tfrac14 & \tfrac34 & -\tfrac54 & 1 \end{array}\right]
\]
\[
\sim
\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & -2 & 5 & -4 \\ 0 & 4 & 0 & 0 & -4 & 4 \\ 0 & 0 & 1 & -3 & 5 & -4 \end{array}\right]
\sim
\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & -2 & 5 & -4 \\ 0 & 1 & 0 & 0 & -1 & 1 \\ 0 & 0 & 1 & -3 & 5 & -4 \end{array}\right]
\]
Therefore,
\[
A^{-1} = \begin{bmatrix} -2 & 5 & -4 \\ 0 & -1 & 1 \\ -3 & 5 & -4 \end{bmatrix}
\]
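A quick numerical check of this inverse, assuming NumPy:

```python
import numpy as np

A = np.array([[1, 0, -1],
              [3, 4, -2],
              [3, 5, -2]], dtype=float)
A_inv = np.linalg.inv(A)  # should match the Gauss-Jordan result above
```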
1. AB is invertible and
\[
(AB)^{-1} = B^{-1} A^{-1}
\]
3. cA is invertible and
\[
(cA)^{-1} = \frac{1}{c}\, A^{-1}
\]
4. A^T is invertible and
\[
(A^T)^{-1} = (A^{-1})^T
\]
AB = AC ⇒ B = C,   BA = CA ⇒ B = C
Proof: Let AB = AC. Since A^{-1} exists, we can multiply on the left by A^{-1}
to get A^{-1}AB = A^{-1}AC, hence IB = IC, i.e., B = C.
Theorem
1. det(A) = 0
6.7 Adjoint
Example Find the adjoint of A, where
\[
A = \begin{bmatrix} -1 & 2 & 2 \\ 4 & 3 & -2 \\ -5 & 0 & 3 \end{bmatrix}
\]
(a) Find u · v.
Solution:
(a) We have
\[
u \cdot v = \begin{bmatrix} -3 & 2 & 1 \end{bmatrix}
\begin{bmatrix} 4 \\ -1 \\ 5 \end{bmatrix}
= (-3)\,4 + 2\,(-1) + (1)(5) = -9
\]
(a) \|v\| = \sqrt{1^2 + 2^2 + (-3)^2 + 1^2} = \sqrt{15}
(b) \|v - u\| = \left\| \left( \tfrac12, \tfrac52, -\tfrac72, \tfrac32 \right) \right\| = \sqrt{21}
(c) \|u\| = \left\| \left( \tfrac12, -\tfrac12, \tfrac12, -\tfrac12 \right) \right\| = 1. So, u is a unit vector.
1. u · v = v · u (Symmetry)
2. u · (v + w) = u · v + u · w (Additivity)
6. (Cauchy-Bunyakovsky-Schwarz Inequality)
\[
|u \cdot v| \le \|u\|\, \|v\| \tag{8}
\]
A real vector space with an inner product is called an inner product space.
3. ⟨u − w, v⟩ = ⟨u, v⟩ − ⟨w, v⟩
5. ⟨0, v⟩ = ⟨v, 0⟩ = 0
4. Let f (x) and g(x) be in C[a, b], the vector space of the continuous real-
valued functions defined on [a, b]. Then the following defines an inner
product on C[a, b].
\[
\langle f, g \rangle = \int_a^b f(x)\, g(x)\, dx
\]
Note that
\[
d(0, v) = d(v, 0) = \|v\|
\]
A vector with norm 1 is called a unit vector. The set S of all unit vectors of
V is called the unit circle or the unit sphere:
\[
S = \{\, v : v \in V \text{ and } \|v\| = 1 \,\} \tag{12}
\]
(a) We have
\[
\langle \sin x, \sin 2x \rangle = \int_{-\pi}^{\pi} \sin x \sin 2x \, dx
= \frac12 \int_{-\pi}^{\pi} (\cos x - \cos 3x)\, dx
= \frac12 \left[ \sin x - \frac13 \sin 3x \right]_{-\pi}^{\pi} = 0
\]
so the functions are orthogonal.
(b) The norm is
\[
\|\sin 2x\| = \left( \int_{-\pi}^{\pi} \sin^2 2x \, dx \right)^{1/2}
= \left( \frac12 \int_{-\pi}^{\pi} (1 - \cos 4x)\, dx \right)^{1/2}
= \sqrt{\pi}
\]
Note: (a) \sin a \sin b = \frac12(\cos(a - b) - \cos(a + b));
(b) \sin^2 a = \frac12(1 - \cos 2a).
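These integrals can be sanity-checked with simple numerical quadrature; a sketch assuming NumPy:

```python
import numpy as np

# Check <sin x, sin 2x> = 0 and ||sin 2x|| = sqrt(pi) on [-pi, pi]
# with a plain Riemann sum (both integrands vanish at the endpoints,
# so this coincides with the trapezoid rule).
x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]
f = np.sin(x)
g = np.sin(2 * x)
inner = float(np.sum(f * g) * dx)             # inner product <f, g>
norm_g = float(np.sqrt(np.sum(g * g) * dx))   # norm of sin(2x)
```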
1. T (u + v) = T (u) + T (v)
2. T (cu) = cT (u)
are linear and represent reflection about the y-axis and the x-axis, reflec-
tion about the origin and rotation by θ radians about the origin.
\[
T\!\left( \begin{bmatrix} a_1 & b_1 \\ c_1 & d_1 \end{bmatrix}
+ \begin{bmatrix} a_2 & b_2 \\ c_2 & d_2 \end{bmatrix} \right)
= T \begin{bmatrix} a_1 + a_2 & b_1 + b_2 \\ c_1 + c_2 & d_1 + d_2 \end{bmatrix}
\]
and
\[
T\!\left( c \begin{bmatrix} a_1 & b_1 \\ c_1 & d_1 \end{bmatrix} \right)
= T \begin{bmatrix} ca_1 & cb_1 \\ cc_1 & cd_1 \end{bmatrix}
\]
Lecture 3 in Engineering Mathematics
7.1 Eigenvalues
Av = λv (13)
The scalar λ (which may be zero) is called an eigenvalue of A
corresponding to (or associated with) the eigenvector v.
7.1 Eigenvalues
Example Let
\[
A = \begin{bmatrix} 2 & 2 \\ 2 & -1 \end{bmatrix}, \quad
v_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \quad
v_2 = \begin{bmatrix} 1 \\ -2 \end{bmatrix}
\]
Solution: We have
\[
A v_1 = \begin{bmatrix} 2 & 2 \\ 2 & -1 \end{bmatrix}
\begin{bmatrix} 2 \\ 1 \end{bmatrix}
= \begin{bmatrix} 6 \\ 3 \end{bmatrix}
= 3 \begin{bmatrix} 2 \\ 1 \end{bmatrix} = 3 v_1
\]
\[
A v_2 = \begin{bmatrix} 2 & 2 \\ 2 & -1 \end{bmatrix}
\begin{bmatrix} 1 \\ -2 \end{bmatrix}
= \begin{bmatrix} -2 \\ 4 \end{bmatrix}
= -2 \begin{bmatrix} 1 \\ -2 \end{bmatrix} = -2 v_2
\]
Therefore, v1 is an eigenvector with corresponding eigenvalue λ = 3 and v2
is an eigenvector with corresponding eigenvalue λ = −2.
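The same eigenpairs fall out of a numerical eigensolver; a sketch assuming NumPy (eigenvalue order is not guaranteed):

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [2.0, -1.0]])
evals, evecs = np.linalg.eig(A)
# Each column v of evecs satisfies A v = lambda v for its eigenvalue.
```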
7.1 Eigenvalues
Example Find all the eigenvalues and eigenvectors of A geometrically, if
(a) A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.
(b) A is the standard matrix of the rotation by 30◦ in R3 about the z-axis in
the positive direction.
(a) Ax is the reflection of x about the line y = x. The only vectors that remain
on the same line after reflection are the vectors along the lines y = x and
y = −x. These, excluding the zero vector, are the only eigenvectors. For v
along y = x we have Av = 1v, so v is an eigenvector with corresponding
eigenvalue 1. For v along y = −x, Av = −1v, so v is an eigenvector with
corresponding eigenvalue −1.
(b) The only vectors that remain on the same line after rotation are the vectors
along the z-axis. These, excluding the zero vector, are the only eigenvectors.
The corresponding eigenvalue is 1.
1. We have
Av = λv ⇒ Av = λI v
⇒ Av − λI v = 0
⇒ (A − λI)v = 0
Hence, v is an eigenvector if and only if it is a nontrivial
solution of the homogeneous system (A − λI)v = 0.
7.1 Eigenspace
\[
\begin{vmatrix} 1 - \lambda & -1 & -1 \\ -2 & 0 - \lambda & 4 \\ -2 & 6 & -2 - \lambda \end{vmatrix}
= -\lambda^3 - \lambda^2 + 30\lambda
= -\lambda (\lambda - 5)(\lambda + 6) = 0
\]
Hence, the eigenvalues are
\[
\lambda_1 = 0, \quad \lambda_2 = 5, \quad \lambda_3 = -6
\]
Next, we find the eigenvectors. For λ_1 = 0 we have
\[
[A - 0I : 0] = \begin{bmatrix} 1 & -1 & -1 & 0 \\ -2 & 0 & 4 & 0 \\ -2 & 6 & -2 & 0 \end{bmatrix}
\sim \begin{bmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\]
For λ_2 = 5 we have
\[
[A - 5I : 0] = \begin{bmatrix} -4 & -1 & -1 & 0 \\ -2 & -5 & 4 & 0 \\ -2 & 6 & -7 & 0 \end{bmatrix}
\sim \begin{bmatrix} 1 & 0 & 1/2 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\]
For λ_3 = −6 we have
\[
[A - (-6)I : 0] = \begin{bmatrix} 7 & -1 & -1 & 0 \\ -2 & 6 & 4 & 0 \\ -2 & 6 & 4 & 0 \end{bmatrix}
\sim \begin{bmatrix} 1 & 0 & -1/20 & 0 \\ 0 & 1 & 13/20 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\]
Example A = \begin{bmatrix} 1 & -1 & 0 \\ 0 & -4 & 2 \\ 0 & 0 & -2 \end{bmatrix}.
A is triangular, so the eigenvalues are the diagonal entries 1, −2, −4. By row reducing
[A − 1I : 0], [A − (−2)I : 0], and [A − (−4)I : 0] we get
\[
E_1 = \mathrm{Span}\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \right\}, \quad
E_{-2} = \mathrm{Span}\left\{ \begin{bmatrix} 1/3 \\ 1 \\ 1 \end{bmatrix} \right\}, \quad
E_{-4} = \mathrm{Span}\left\{ \begin{bmatrix} 1/5 \\ 1 \\ 0 \end{bmatrix} \right\}
\]
The spanning eigenvectors define bases for the corresponding eigenspaces.
7.5 Diagonalization
Matrix arithmetic with diagonal matrices is easier than with any other matri-
ces. This is most notable in matrix multiplication. For example, a diagonal
matrix D does not mix the components of x in the product Dx.
\[
\begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}
\begin{bmatrix} a \\ b \end{bmatrix}
= \begin{bmatrix} 2a \\ 3b \end{bmatrix}
\]
Also, it does not mix rows of A in a product DA (or columns in AD).
\[
\begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}
\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix}
= \begin{bmatrix} 2a & 2b & 2c \\ 3d & 3e & 3f \end{bmatrix}
\]
Moreover, it is very easy to compute the powers D^k.
\[
\begin{bmatrix} 2 & 0 \\ 0 & 3 \end{bmatrix}^k
= \begin{bmatrix} 2^k & 0 \\ 0 & 3^k \end{bmatrix}
\]
7.5 Diagonalization
B = P −1 AP
Theorem Let A be an n × n matrix.
1. A is diagonalizable.
7.5 Diagonalization
Example A = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
Solution: We found before that λ_1 = 0, λ_2 = λ_3 = 1 and
\[
E_0 = \mathrm{Span}\left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} \right\}, \quad
E_1 = \mathrm{Span}\left\{ \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix},
\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \right\}
\]
A has 3 linearly independent eigenvectors so it is diagonalizable. We may
take
\[
P = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}, \qquad
D = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
We may check this by
\[
P^{-1} A P
= \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}^{-1}
\begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}
= \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = D
\]
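The check P⁻¹AP = D can also be done numerically; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
P = np.array([[1.0, 1.0, 0.0],   # columns: the basic eigenvectors above
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
D = np.linalg.inv(P) @ A @ P     # should be diag(0, 1, 1)
```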
7.5 Diagonalization
Example
\[
A = \begin{bmatrix} 1 & -1 & 0 \\ 0 & -4 & 2 \\ 0 & 0 & -2 \end{bmatrix}.
\]
7.5 Diagonalization
Theorem Let λ1 , . . . , λl be any distinct eigenvalues of an n × n matrix A.
Solution: A has eigenvalues 0, 2, 4 and the corresponding basic eigenvectors (−1, 0, 1),
(0, 1, 0), (1, 0, 3) are linearly independent. Hence,
\[
A^k = \begin{bmatrix} -1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 3 \end{bmatrix}
\begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 4 \end{bmatrix}^{k}
\begin{bmatrix} -1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 3 \end{bmatrix}^{-1}
= \begin{bmatrix} -1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 3 \end{bmatrix}
\begin{bmatrix} 0 & 0 & 0 \\ 0 & 2^k & 0 \\ 0 & 0 & 4^k \end{bmatrix}
\begin{bmatrix} -3/4 & 0 & 1/4 \\ 0 & 1 & 0 \\ 1/4 & 0 & 1/4 \end{bmatrix}
\]
\[
= \begin{bmatrix}
4^{k-1} & 0 & 4^{k-1} \\
0 & 2^k & 0 \\
3 \cdot 4^{k-1} & 0 & 3 \cdot 4^{k-1}
\end{bmatrix}
\]
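The closed form is easy to spot-check: diagonalization reduces the matrix power A^k = P D^k P⁻¹ to scalar powers. A sketch assuming NumPy:

```python
import numpy as np

P = np.array([[-1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 3.0]])
D = np.diag([0.0, 2.0, 4.0])
A = P @ D @ np.linalg.inv(P)

def A_power(k):
    # Matrix power via the eigendecomposition: only the diagonal is raised.
    return P @ np.diag([0.0, 2.0 ** k, 4.0 ** k]) @ np.linalg.inv(P)
```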
Let us now discuss an idea that is in the core of most applications of di-
agonalization. Let A be diagonalizable, diagonalized by P and D. Often a
matrix-vector equation f (A, x) = 0 can be substantially simplified, if we re-
place x by the new vector y such that
x = P y or y = P^{-1} x (16)
and replace A with P DP −1 to get an equation of the form g(D, y) = 0 that
involves the diagonal matrix D and the new vector y.
To illustrate, suppose we have a linear system Ax = b. Then we can convert
this system into a diagonal system as follows. We consider the new variable
vector y defined by y = P^{-1}x. We have
Ax = b ⇔ P^{-1}Ax = P^{-1}b
⇔ P^{-1}AP y = P^{-1}b
⇔ Dy = P^{-1}b
The last equation defines a diagonal system.
1. A is orthogonal.
2. AT A = I
3. A−1 = AT
To show that C is unitary, it suffices to check that \bar{C}^T C = I. We have
\[
\bar{C}^T C
= \overline{\begin{bmatrix} \frac12 & -\frac{\sqrt{3}}{2} i \\ -\frac{\sqrt{3}}{2} i & \frac12 \end{bmatrix}}^{\,T}
\begin{bmatrix} \frac12 & -\frac{\sqrt{3}}{2} i \\ -\frac{\sqrt{3}}{2} i & \frac12 \end{bmatrix}
= \begin{bmatrix} \frac12 & \frac{\sqrt{3}}{2} i \\ \frac{\sqrt{3}}{2} i & \frac12 \end{bmatrix}
\begin{bmatrix} \frac12 & -\frac{\sqrt{3}}{2} i \\ -\frac{\sqrt{3}}{2} i & \frac12 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2
\]
5. Equivalent statements for A being a unitary matrix are: \bar{A}^T A = I and,
by taking conjugates, A^T \bar{A} = I.
1. If A is Hermitian, then its eigenvalues are real. (Thus, this holds for
symmetric matrices.)
Lecture 4 in Engineering Mathematics
3. Euler’s Formula
5. Sturm-Liouville Theory
7. \cos^2 a = \dfrac{1 + \cos 2a}{2}
8. \sin^2 a = \dfrac{1 - \cos 2a}{2}
9. \sin a \cos b = \frac12 \sin(a + b) + \frac12 \sin(a - b)
10. \sin a \sin b = \frac12 \cos(a - b) - \frac12 \cos(a + b)
11. \cos a \cos b = \frac12 \cos(a - b) + \frac12 \cos(a + b)
1. We say that the distinct functions g_m(x) and g_n(x) are orthogonal on
[a, b] if their integral inner product is zero, i.e., if
\[
\langle g_m, g_n \rangle = \int_a^b g_m(x)\, g_n(x)\, dx = 0, \qquad \text{for } m \ne n
\]
Recall that the norm or length of each g_m on [a, b] under this inner product
is
\[
\|g_m\| = \sqrt{\langle g_m, g_m \rangle}
= \sqrt{\int_a^b g_m(x)\, g_m(x)\, dx}
= \sqrt{\int_a^b g_m(x)^2\, dx}
\]
4.7 Assumptions
Solution:
1. If m ≠ n, then
\[
\langle g_m, g_n \rangle = \int_{-\pi}^{\pi} \sin(mx) \sin(nx)\, dx
= \frac12 \int_{-\pi}^{\pi} [\cos((m - n)x) - \cos((m + n)x)]\, dx
\]
\[
= \frac{1}{2(m - n)} \sin((m - n)x) \Big|_{-\pi}^{\pi}
- \frac{1}{2(m + n)} \sin((m + n)x) \Big|_{-\pi}^{\pi}
= 0 + 0 = 0
\]
Solution:
1. We have
a.
\[
\langle 1, \cos(mx) \rangle = \int_{-\pi}^{\pi} (1)\cos(mx)\, dx
= \frac{\sin(mx)}{m} \Big|_{-\pi}^{\pi} = 0
\]
b.
\[
\langle 1, \sin(mx) \rangle = \int_{-\pi}^{\pi} (1)\sin(mx)\, dx
= -\frac{\cos(mx)}{m} \Big|_{-\pi}^{\pi} = 0
\]
d. If m ≠ n, then
\[
\langle \cos(mx), \sin(nx) \rangle = \int_{-\pi}^{\pi} \cos(mx) \sin(nx)\, dx
= \frac12 \int_{-\pi}^{\pi} (\sin((m + n)x) - \sin((m - n)x))\, dx
\]
\[
= \frac{-1}{2(m + n)} \cos((m + n)x) \Big|_{-\pi}^{\pi}
+ \frac{1}{2(m - n)} \cos((m - n)x) \Big|_{-\pi}^{\pi}
= 0 + 0 = 0
\]
e.
\[
\langle \cos(mx), \sin(mx) \rangle = \int_{-\pi}^{\pi} \cos(mx) \sin(mx)\, dx
= \frac12 \int_{-\pi}^{\pi} \sin(2mx)\, dx
= -\frac12 \cdot \frac{\cos 2mx}{2m} \Big|_{-\pi}^{\pi} = 0
\]
c. \|\sin(mx)\|^2 = π was proved in the last example. So the norms are
\[
\|1\| = \sqrt{2\pi}, \quad \|\cos(mx)\| = \sqrt{\pi}, \quad \|\sin(mx)\| = \sqrt{\pi},
\qquad \text{for } m = 1, 2, \ldots
\]
So the orthonormal set is
\[
\frac{1}{\sqrt{2\pi}},\; \frac{\cos x}{\sqrt{\pi}},\; \frac{\sin x}{\sqrt{\pi}},\;
\frac{\cos(2x)}{\sqrt{\pi}},\; \frac{\sin(2x)}{\sqrt{\pi}},\; \ldots,\;
\frac{\cos(mx)}{\sqrt{\pi}},\; \frac{\sin(mx)}{\sqrt{\pi}},\; \ldots
\]
2. If g_1(x), g_2(x), . . . is orthogonal with respect to the weight p(x) and we
set h_n(x) = \sqrt{p(x)}\, g_n(x), then by the weighted orthogonality we get
\[
\int_a^b h_m(x)\, h_n(x)\, dx
= \int_a^b \sqrt{p(x)}\, g_m(x)\, \sqrt{p(x)}\, g_n(x)\, dx
= \int_a^b p(x)\, g_m(x)\, g_n(x)\, dx = 0
\]
So the functions h_1(x), h_2(x), . . . are orthogonal in the usual sense.
Euler’s Formula
Euler’s Formula relates the complex exponential function with the trigono-
metric sines and cosines. If t is a real number, then
\[
e^{it} = \cos t + i \sin t
\]
where i = \sqrt{-1} is the complex unit, so that i^2 = -1.
Example We have
\[
e^{i\pi} = -1, \quad e^{i\pi/2} = i, \quad e^{i2\pi} = 1, \quad
e^{2+3i} = e^2(\cos 3 + i \sin 3)
\]
because
\[
e^{i\pi} = \cos\pi + i\sin\pi = -1 + i\,0 = -1
\]
\[
e^{i\pi/2} = \cos(\pi/2) + i\sin(\pi/2) = 0 + i = i
\]
\[
e^{i2\pi} = \cos(2\pi) + i\sin(2\pi) = 1 + i\,0 = 1
\]
\[
e^{2+3i} = e^2 e^{3i} = e^2(\cos 3 + i \sin 3) \approx -7.3151 + 1.0427\,i
\]
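These values are easy to confirm with Python's complex math; a sketch using only the standard library:

```python
import cmath
import math

z1 = cmath.exp(1j * math.pi)       # e^{i pi} = -1
z2 = cmath.exp(1j * math.pi / 2)   # e^{i pi/2} = i
z3 = cmath.exp(2 + 3j)             # e^2 (cos 3 + i sin 3)
```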
Review
Linear Homogeneous with Constant Coefficients
For the special case of a second order equation the general solution is dis-
cussed in the following theorem.
1. If (A2) has two distinct real roots r_1 and r_2, then the general real solution
of (H2) is given by
y(x) = c_1 e^{r_1 x} + c_2 e^{r_2 x}
2. If (A2) has a double real root r, then the general real solution
of (H2) is given by
y(x) = (c_1 + c_2 x) e^{r x}
Example Solve 2y 00 + 5y 0 − 3y = 0.
Example y 00 − 8y 0 + 20y = 0.
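Worked out, as a sketch (standard characteristic-equation computations; the details are not on the slides):

```latex
% 2y'' + 5y' - 3y = 0: the characteristic equation factors over the reals.
2r^2 + 5r - 3 = (2r - 1)(r + 3) = 0
  \;\Rightarrow\; r_1 = \tfrac{1}{2},\; r_2 = -3
  \;\Rightarrow\; y(x) = c_1 e^{x/2} + c_2 e^{-3x}

% y'' - 8y' + 20y = 0: complex conjugate roots give oscillatory factors.
r = \frac{8 \pm \sqrt{64 - 80}}{2} = 4 \pm 2i
  \;\Rightarrow\; y(x) = e^{4x}\left(c_1 \cos 2x + c_2 \sin 2x\right)
```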
Note that a S-L problem always has the trivial solution y(x) = 0 for all x in
[a, b]. If λ is a scalar such that the S-L problem has a nontrivial solution y(x),
λ is called an eigenvalue of the problem and the nontrivial y(x) is called
an eigenfunction corresponding to λ.
k_1 y(a) + k_2 y'(a) = 0
l_1 y(b) + l_2 y'(b) = 0
Let y_m(x) and y_n(x) be two eigenfunctions corresponding to different eigen-
values λ_m and λ_n. Then y_m(x) and y_n(x) are orthogonal with respect to the
weight function p(x). Furthermore:
\[
\begin{vmatrix} 1 - \cos(2\pi v) & -\sin(2\pi v) \\ \sin(2\pi v) & 1 - \cos(2\pi v) \end{vmatrix}
= 2 - 2\cos(2\pi v) = 4\sin^2(\pi v) = 0
\]
Lecture 5 in Engineering Mathematics
Goal: Calculate u(x, t), given that (a) the ends of the string are fixed and (b)
the initial displacement u(x, 0) and initial velocity u_t(x, 0) are known.
Assumptions:
1. The mass of the string per unit length is constant (homogeneous string).
The string is elastic and does not resist bending.
2. The tension caused by stretching is much greater than gravity. So, gravity
is not a factor here.
Forces:
Consider forces acting on small portions of the string.
Since there is no resistance to bending, the tension is tangential to the curve
of the string at each point.
Let T1 and T2 be the tensions at P and Q.
Horizontal direction: There is no motion in the horizontal direction, so the
horizontal component must be constant, say T . So
T1 cos α = T2 cos β = T (1)
Vertical direction: In the vertical direction we have two forces, the vertical
components −T1 sin α and T2 sin β.
Let ρ be the linear mass density of the string, i.e., mass per unit length. By
Newton's second law the resultant vertical force is the mass ρΔx times the
acceleration ∂²u/∂t², evaluated at some point between x and x + Δx:
\[
T_2 \sin\beta - T_1 \sin\alpha = \rho\, \Delta x\, \frac{\partial^2 u}{\partial t^2}
\]
Dividing by T = T_1 \cos\alpha = T_2 \cos\beta replaces the sines by the slopes
tan α and tan β of the string at x and x + Δx, so
\[
\frac{1}{\Delta x} \left( \left.\frac{\partial u}{\partial x}\right|_{x + \Delta x}
- \left.\frac{\partial u}{\partial x}\right|_{x} \right)
= \frac{\rho}{T}\, \frac{\partial^2 u}{\partial t^2}
\]
Letting Δx → 0 yields the one-dimensional wave equation
\[
\frac{\partial^2 u}{\partial t^2} = c^2\, \frac{\partial^2 u}{\partial x^2} \tag{W-1}
\]
and initial conditions specifying an initial deflection f(x) and initial velocity
g(x), for x such that 0 ≤ x ≤ L:
\[
u(x, 0) = f(x), \qquad
\left.\frac{\partial u}{\partial t}\right|_{t = 0} = g(x), \qquad
0 \le x \le L \tag{IC}
\]
Method of solution
First we seek nontrivial solutions of the system (W-1), (BC). Notice that the
trivial solution is already a solution. To solve (W-1), (BC) we use the method
of separation of variables, i.e., we seek solutions of the form
\[
u(x, t) = X(x)\, T(t)
\]
where X = X(x) is a function of x only and T = T(t) is a function of t only.
Substitution into (W-1) yields
\[
X T'' = c^2 X'' T
\]
where by X' we mean dX/dx and by T' we mean dT/dt. Now we separate
the variables by dividing both sides by c² X T to get
\[
\frac{T''}{c^2 T} = \frac{X''}{X}
\]
Now x and t are completely independent variables, one being location and
one being time. So the only way the function T''/(c²T) of t can equal the
function X''/X of x is if they are both equal to the same constant, say −λ. So,
Note that since sin(−x) = −sin(x) and sin(0) = 0, we need not keep any
negative integer values of n; these signs can be absorbed by the constant
coefficients.
Note that since the system (W-1), (BC) is homogeneous, any finite sum of
solutions un is also a solution.
\[
u(x, t) = \sum_{n=1}^{k} u_n(x, t)
\]
\[
u(x, t) = \sum_{n=1}^{\infty}
\left( a_n \cos\frac{cn\pi}{L} t + b_n \sin\frac{cn\pi}{L} t \right)
\sin\frac{n\pi}{L} x
\]
\[
a_n = \frac{2}{L} \int_0^L f(x) \sin\frac{n\pi}{L} x \, dx,
\qquad n = 1, 2, \ldots
\]
\[
b_n = \frac{2}{cn\pi} \int_0^L g(x) \sin\frac{n\pi}{L} x \, dx,
\qquad n = 1, 2, \ldots
\]
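Putting the formulas to work numerically for a concrete string: the shape function below is a hypothetical triangular pluck (not from the slides), with zero initial velocity so every b_n = 0. A sketch assuming NumPy:

```python
import numpy as np

L = 1.0
N = 20001
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
f = np.where(x < 0.5, x, 1.0 - x)  # hypothetical triangular pluck shape

def a_n(n):
    # a_n = (2/L) * integral_0^L f(x) sin(n pi x / L) dx, as a Riemann sum.
    return 2.0 / L * np.sum(f * np.sin(n * np.pi * x / L)) * dx

# Partial sum of the series at t = 0 should reproduce the pluck shape.
u0 = sum(a_n(n) * np.sin(n * np.pi * x / L) for n in range(1, 40))
```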