Chapter 8
Transposition of Matrices
Matrix Algebra
Matrix Inversion
Vectors
Vector spaces (also called linear spaces)
Systems of linear equations
A system of 2 simultaneous linear differential equations for the amplitudes y1(t) and y2(t):

m1 d^2 y1(t)/dt^2 + (k1 + k2) y1(t) - k2 y2(t) = 0
m2 d^2 y2(t)/dt^2 + k2 y2(t) - k2 y1(t) = 0
Matrices
Matrices are used to express arrays of numbers, variables, or data in a logical format
that can be accepted by digital computers.
Matrices are made up of ROWS and COLUMNS.
Matrices can represent vector quantities such as force vectors, stress vectors, velocity
vectors, etc. All these vector quantities consist of several components.
Huge amounts of numbers and data are commonplace in modern-day engineering analysis,
especially in numerical analyses such as finite element analysis (FEA) or finite difference
analysis (FDA).
1. Rectangular matrices:
Matrices with m rows and n columns, where m is the total number of rows and n is the total number of columns.
3. Row matrices:
Matrices with a single row of elements.

Lower triangular matrices:
All elements above the diagonal line of the matrix are zero:

      | a11   0    0  |
[A] = | a21  a22   0  |
      | a31  a32  a33 |
        (the diagonal line runs a11, a22, a33)
7. Diagonal matrices:
Matrices in which all elements except those on the diagonal line are zero:

      | a11   0    0    0  |
[A] = |  0   a22   0    0  |
      |  0    0   a33   0  |
      |  0    0    0   a44 |
        (the diagonal line runs a11, a22, a33, a44)
8. Unity matrices [I]:
A special case of diagonal matrices, with all diagonal elements of value 1 (unity value):

      | 1.0   0    0    0  |
[I] = |  0   1.0   0    0  |
      |  0    0   1.0   0  |
      |  0    0    0   1.0 |
Matrix Algebra:
Results of the above algebraic operations of matrices are in the form of matrices.
A matrix cannot be divided by another matrix, but the sense of division can be
accomplished by the inverse matrix technique.
1) Addition and subtraction of matrices
The involved matrices must have the SAME size (i.e., the same number of rows and columns):

[C] = [A] +/- [B] = [cij], with cij = aij +/- bij
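The element-by-element rule can be sketched in plain Python (a minimal sketch; the helper name mat_add is illustrative, not from the text):

```python
# Element-wise addition/subtraction of matrices stored as lists of lists.
# Both operands must have the SAME number of rows and columns.

def mat_add(A, B, sign=1):
    """Return [A] + [B] (sign=+1) or [A] - [B] (sign=-1), element by element."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "sizes must match"
    return [[a + sign * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))       # [[6, 8], [10, 12]]
print(mat_add(A, B, -1))   # [[-4, -4], [-4, -4]]
```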
3) Multiplication of 2 matrices:
The number of columns of [A] must equal the number of rows of [B]. So, for two (3x3) matrices we have:

 [A]  x  [B]  =  [C]
(3x3)   (3x3)   (3x3)
Example 8.2:
Multiply a rectangular matrix by a column matrix:

| c11  c12  c13 |     | x1 |     | c11 x1 + c12 x2 + c13 x3 |     | y1 |
| c21  c22  c23 |  x  | x2 |  =  | c21 x1 + c22 x2 + c23 x3 |  =  | y2 |
                      | x3 |

     (2x3)      x    (3x1)    =             (2x1)
Example 8.3:
(A) Multiplication of row and column matrices:

                      | b11 |
[a11  a12  a13]   x   | b21 |   =   a11 b11 + a12 b21 + a13 b31   (a scalar, or a single number)
                      | b31 |
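Both examples follow the general product rule cij = SUM over k of aik bkj. A minimal sketch in plain Python (mat_mul is an illustrative helper name, not from the text):

```python
# Matrix product [C] = [A][B]: c_ij = sum_k a_ik * b_kj.
# [A] is (m x n), [B] is (n x p), so [C] is (m x p).

def mat_mul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of [A] must equal rows of [B]"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

# Like Example 8.2: a (2x3) matrix times a (3x1) column gives a (2x1) column.
C = [[1, 2, 3], [4, 5, 6]]
x = [[1], [0], [2]]
print(mat_mul(C, x))                          # [[7], [16]]

# Like Example 8.3: a (1x3) row times a (3x1) column gives a single number.
print(mat_mul([[1, 2, 3]], [[4], [5], [6]]))  # [[32]]
```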
NOTE:
[A] x [A]^-1 = [I]  (a UNITY matrix)
in which |A'ij| is the determinant of the matrix obtained from [A] by deleting all elements
in its ith row and jth column.

Step 3: Transpose the co-factor matrix, [C] to [C]^T.

Step 4: The inverse matrix [A]^-1 for matrix [A] may be established by the following
expression:

[A]^-1 = (1/|A|) [C]^T        (8.15)
Example 8.5
Show the inverse of a (3x3) matrix:

      | 1   2   3 |
[A] = | 0  -1   4 |
      | 2  -5   3 |
Let us derive the inverse matrix of [A] by following the above steps:

Step 1: Evaluate the determinant of the matrix: |A| = 39, which is nonzero, so the inverse exists.

Step 2: Use Equation (8.14), cij = (-1)^(i+j) |A'ij|, to find the elements of the co-factor matrix [C]:

c11 = (-1)^(1+1) [(-1)(3) - (4)(-5)] = 17
c12 = (-1)^(1+2) [(0)(3) - (4)(2)] = 8
c13 = (-1)^(1+3) [(0)(-5) - (-1)(2)] = 2
c21 = (-1)^(2+1) [(2)(3) - (3)(-5)] = -21
c22 = (-1)^(2+2) [(1)(3) - (3)(2)] = -3
c23 = (-1)^(2+3) [(1)(-5) - (2)(2)] = 9
c31 = (-1)^(3+1) [(2)(4) - (3)(-1)] = 11
c32 = (-1)^(3+2) [(1)(4) - (3)(0)] = -4
c33 = (-1)^(3+3) [(1)(-1) - (2)(0)] = -1

      |  17    8    2 |
[C] = | -21   -3    9 |
      |  11   -4   -1 |
Step 3: Transpose the [C] matrix:

        |  17    8    2 |T   | 17  -21   11 |
[C]^T = | -21   -3    9 |  = |  8   -3   -4 |
        |  11   -4   -1 |    |  2    9   -1 |

Step 4: With |A| = 39, Equation (8.15) gives the inverse:

                  | 17  -21   11 |
[A]^-1 = (1/39) x |  8   -3   -4 |
                  |  2    9   -1 |

Check: [A][A]^-1 = [I]:

            | 1   2   3 |            | 17  -21   11 |   | 1  0  0 |
[A][A]^-1 = | 0  -1   4 |  x  (1/39) |  8   -3   -4 | = | 0  1  0 | = [I]
            | 2  -5   3 |            |  2    9   -1 |   | 0  0  1 |
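The four steps above can be sketched in plain Python for the (3x3) case (a minimal sketch; the helper names minor, det2, and inverse3 are illustrative, and exact fractions are used to avoid round-off):

```python
from fractions import Fraction

# Inverse of a 3x3 matrix by the co-factor method:
# c_ij = (-1)**(i+j) * |A'_ij|, then [A]^-1 = (1/|A|) [C]^T.

def minor(A, i, j):
    """Delete row i and column j of [A]."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inverse3(A):
    # Co-factor matrix [C], Equation (8.14).
    C = [[(-1) ** (i + j) * det2(minor(A, i, j)) for j in range(3)]
         for i in range(3)]
    # |A| by expansion along the first row.
    detA = sum(A[0][j] * C[0][j] for j in range(3))
    # Transpose [C] and divide by |A|, Equation (8.15).
    return [[Fraction(C[j][i], detA) for j in range(3)] for i in range(3)]

A = [[1, 2, 3], [0, -1, 4], [2, -5, 3]]   # the matrix of Example 8.5
Ainv = inverse3(A)
print(Ainv[0][0])   # 17/39, the (1,1) element of (1/39)[C]^T
```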
Solution of Simultaneous Equations
Using Matrix Algebra
[Figure: a real piston with connecting rod alongside its discretized FE model, and the FE analysis results showing the distribution of stresses. See http://www.npd-solutions.com/feaoverview.html for an overview of FEM analysis.]
FEM or FDM analyses result in one algebraic equation for every NODE in the discretized
model. Imagine the total number of (simultaneous) equations that need to be solved!!
Analyses using FEM that require solutions of tens of thousands of simultaneous equations
are not unusual.
Solution of Simultaneous Equations Using Inverse Matrix Technique
or in an abbreviated form:
[A]{x} = {r} (8.18)
4x1 + x2 = 24
x1 - 2x2 = -21

Let us express the above equations in a matrix form:

[A]{x} = {r}

where

      | 4   1 |          | x1 |            |  24 |
[A] = | 1  -2 |,  {x} =  | x2 |,  and {r} = | -21 |
Following the procedure presented in Section 8.5, we may derive the inverse matrix [A]^-1
to be:

                 | 2   1 |
[A]^-1 = (1/9) x | 1  -4 |

so that {x} = [A]^-1{r} gives x1 = 3 and x2 = 12.
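Carrying the numbers through (a minimal check in plain Python; exact fractions are used to avoid round-off):

```python
from fractions import Fraction as F

# {x} = [A]^-1 {r} for the system 4*x1 + x2 = 24, x1 - 2*x2 = -21,
# with [A]^-1 = (1/9) * [[2, 1], [1, -4]] as derived above.
A_inv = [[F(2, 9), F(1, 9)], [F(1, 9), F(-4, 9)]]
r = [24, -21]

x = [sum(A_inv[i][j] * r[j] for j in range(2)) for i in range(2)]
print(x[0], x[1])   # 3 12  ->  x1 = 3, x2 = 12
```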
The principal reason the Gaussian elimination method is popular in this type of
application is that the formulations in the solution procedure can be readily programmed
in languages such as FORTRAN and run on digital computers
with high computational efficiency.
The essence of the Gaussian elimination method:
1) Convert the coefficient matrix of the simultaneous equations into an upper triangular matrix by successive elimination steps.
2) The last unknown quantity in the converted upper triangular form of the simultaneous
equations then becomes immediately available, and the remaining unknowns follow by back substitution.
| a11  a12  a13 |   | x1 |     | r1 |
| a21  a22  a23 |   | x2 |  =  | r2 |        (8.21)
| a31  a32  a33 |   | x3 |     | r3 |

or in a simpler form: [A]{x} = {r}
We may express the unknown x1 in Equation (8.20a) in terms of x2 and x3 as follows:

x1 = r1/a11 - (a12/a11) x2 - (a13/a11) x3
Now, if we substitute x1 in Equations (8.20b) and (8.20c) by this expression, Equation (8.20b),
a21 x1 + a22 x2 + a23 x3 = r2, becomes:

0 + (a22 - a21 a12/a11) x2 + (a23 - a21 a13/a11) x3 = r2 - (a21/a11) r1        (8.22)
Treating Equation (8.20c) in the same way, the simultaneous equations after elimination step 1 take the form:

| a11   a12     a13    |   | x1 |     | r1    |
|  0    a22(1)  a23(1) |   | x2 |  =  | r2(1) |        (8.23)
|  0    a32(1)  a33(1) |   | x3 |     | r3(1) |

in which

a22(1) = a22 - a21 a12/a11        a23(1) = a23 - a21 a13/a11
a32(1) = a32 - a31 a12/a11        a33(1) = a33 - a31 a13/a11
r2(1)  = r2 - (a21/a11) r1        r3(1)  = r3 - (a31/a11) r1
The index number (1) indicates elimination step 1 in the above expressions.
Step 2 elimination involves expressing x2 in Equation (8.22b) in terms of x3:

from   (a22 - a21 a12/a11) x2 + (a23 - a21 a13/a11) x3 = r2 - (a21/a11) r1        (8.22b)

to     x2 = [ r2(1) - a23(1) x3 ] / a22(1)

and then eliminating x2 from the third equation.
The matrix form of the original simultaneous equations now takes the form:

| a11   a12     a13    |   | x1 |     | r1    |
|  0    a22(1)  a23(1) |   | x2 |  =  | r2(1) |        (8.24)
|  0     0      a33(2) |   | x3 |     | r3(2) |
We notice the coefficient matrix [A] has now become an upper triangular matrix, from
which we have the solution

x3 = r3(2) / a33(2)

The other two unknowns x2 and x1 may be obtained by the back substitution process from
Equation (8.24), such as:

x2 = [ r2(1) - a23(1) x3 ] / a22(1)
Recurrence relations for Gaussian elimination process:
Given a general form of n-simultaneous equations:
For back substitution:

x_i = [ r_i - SUM(j = i+1 to n) a_ij x_j ] / a_ii        (8.26)

with i = n-1, n-2, ......., 1
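The elimination and back-substitution recurrences above can be sketched as one routine (a minimal sketch in plain Python, with no pivoting; gauss_solve is an illustrative name):

```python
# Gaussian elimination for [A]{x} = {r}:
# forward elimination a_ij(k) = a_ij(k-1) - a_ik(k-1) * a_kj(k-1) / a_kk(k-1),
# then back substitution per Equation (8.26).

def gauss_solve(A, r):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    r = r[:]
    # Forward elimination: triangularize [A] column by column.
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]      # assumes nonzero pivots (no pivoting)
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            r[i] -= f * r[k]
    # Back substitution: x_i = (r_i - sum_{j>i} a_ij x_j) / a_ii.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (r[i] - s) / A[i][i]
    return x

# The 2x2 system solved earlier by the inverse matrix technique:
print(gauss_solve([[4, 1], [1, -2]], [24, -21]))   # [3.0, 12.0]
```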
Example
Solve the following simultaneous equations using the Gaussian elimination method:
x + z = 1
2x + y + z = 0        (a)
x + y + 2z = 1
Express the above equations in a matrix form:
| 1  0  1 |   | x |     | 1 |
| 2  1  1 |   | y |  =  | 0 |        (b)
| 1  1  2 |   | z |     | 1 |
If we compare Equation (b) with the following typical matrix expression of
3-simultaneous equations:
| a11  a12  a13 |   | x1 |     | r1 |
| a21  a22  a23 |   | x2 |  =  | r2 |
| a31  a32  a33 |   | x3 |     | r3 |

we will have the following coefficients:

a11 = 1, a12 = 0, a13 = 1, r1 = 1
a21 = 2, a22 = 1, a23 = 1, r2 = 0
a31 = 1, a32 = 1, a33 = 2, r3 = 1
For i = 2, j = 2 and 3:

i = 2, j = 2:  a22(1) = a22(0) - a21(0) a12(0)/a11(0) = 1 - (2)(0)/1 = 1
i = 2, j = 3:  a23(1) = a23(0) - a21(0) a13(0)/a11(0) = 1 - (2)(1)/1 = -1
i = 2:         r2(1)  = r2(0) - a21(0) r1(0)/a11(0)  = 0 - (2)(1)/1 = -2
For i = 3, j = 2 and 3:

i = 3, j = 2:  a32(1) = a32(0) - a31(0) a12(0)/a11(0) = 1 - (1)(0)/1 = 1
i = 3, j = 3:  a33(1) = a33(0) - a31(0) a13(0)/a11(0) = 2 - (1)(1)/1 = 1
i = 3:         r3(1)  = r3(0) - a31(0) r1(0)/a11(0)  = 1 - (1)(1)/1 = 0
So, the original simultaneous equations after Step 1 elimination have the form:

| 1  0   1 |   | x1 |     |  1 |
| 0  1  -1 |   | x2 |  =  | -2 |
| 0  1   1 |   | x3 |     |  0 |
Step 2 elimination, for i = 3 and j = 3:

a33(2) = a33(1) - a32(1) a23(1)/a22(1) = 1 - (1)(-1)/1 = 2

r3(2) = r3(1) - a32(1) r2(1)/a22(1) = 0 - (1)(-2)/1 = 2
The coefficient matrix [A] has now been triangularized, and the original simultaneous
equations have been transformed into the form:

| 1  0   1 |   | x1 |     |  1 |
| 0  1  -1 |   | x2 |  =  | -2 |
| 0  0   2 |   | x3 |     |  2 |

from which x3 = 2/2 = 1, and back substitution gives x2 = -2 + x3 = -1 and x1 = 1 - x3 = 0.
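As a final check, back substitution per Equation (8.26) on the triangularized system recovers the solution (a minimal sketch in plain Python, using the step-2 elimination results derived above):

```python
# Back substitution on the triangularized system of the example:
# [1 0 1; 0 1 -1; 0 0 2] {x1 x2 x3}^T = {1, -2, 2}^T
U = [[1, 0, 1], [0, 1, -1], [0, 0, 2]]
r = [1, -2, 2]
n = 3

x = [0.0] * n
for i in range(n - 1, -1, -1):          # i = n-1, ..., 0, per Equation (8.26)
    s = sum(U[i][j] * x[j] for j in range(i + 1, n))
    x[i] = (r[i] - s) / U[i][i]
print(x)   # [0.0, -1.0, 1.0]  ->  x1 = 0, x2 = -1, x3 = 1
```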