San Jose State University

Department of Mechanical and Aerospace Engineering

ME 130 Applied Engineering Analysis

Instructor: Tai-Ran Hsu, Ph.D.

Chapter 8

Matrices and Solution to Simultaneous Equations
by Gaussian Elimination Method

Chapter Outline

Matrices and Linear Algebra

Different Forms of Matrices

Transposition of Matrices

Matrix Algebra

Matrix Inversion

Solution of Simultaneous Equations
  Using Inverse Matrices
  Using Gaussian Elimination Method
Linear Algebra and Matrices

Linear algebra is a branch of mathematics concerned with the study of:

Vectors
Vector spaces (also called linear spaces)
Systems of linear equations

(Source: Wikipedia 2009)

Matrices are the logical and convenient representation of vectors in vector spaces, and

Matrix algebra is for arithmetic manipulation of matrices. It is a vital tool for solving systems of linear equations.

Systems of linear equations are common in engineering analysis:

A simple example is the free vibration of a mass-spring system with 2 degrees of freedom:


As we postulated for single mass-spring systems, the two masses m1 and m2 will vibrate, prompted by a small disturbance applied to mass m2 in the +y direction.

Following the same procedure used in deriving the equation of motion of a single mass (Newton's first and second laws), the free-body forces acting on m1 and m2 at time t are:

  Inertia forces:  F1(t) = m1 d²y1(t)/dt²  and  F2(t) = m2 d²y2(t)/dt²
  Spring force by k1 on m1:  Fs1 = k1[y1(t) + h1]
  Spring force by k2 between the masses:  Fs2 = k2[y1(t) - y2(t)]
  Weights:  W1 = m1g  and  W2 = m2g

where W1 and W2 = weights of the masses; h1, h2 = static deflections of springs k1 and k2.

The result is a system of 2 simultaneous linear DEs for the amplitudes y1(t) and y2(t):

  m1 d²y1(t)/dt² + (k1 + k2) y1(t) - k2 y2(t) = 0
  m2 d²y2(t)/dt² + k2 y2(t) - k2 y1(t) = 0
Matrices

Matrices are used to express arrays of numbers, variables or data in a logical format that can be accepted by digital computers.
Matrices are made up of ROWS and COLUMNS.
Matrices can represent vector quantities such as force vectors, stress vectors, velocity vectors, etc. All these vector quantities consist of several components.
Huge amounts of numbers and data are commonplace in modern-day engineering analysis, especially in numerical analyses such as finite element analysis (FEA) or finite difference analysis (FDA).

Different Forms of Matrices

1. Rectangular matrices:
The total number of rows (m) and the total number of columns (n) are, in general, not equal. The abbreviation of rectangular matrices is [A] = [a_ij], in which the a_ij are the matrix elements:

  Columns (n):  n=1  n=2  n=3  ...  n=n

        [ a11  a12  a13  ...  a1n ]   m=1
  [A] = [ a21  a22  a23  ...  a2n ]   m=2    Rows (m)
        [ ........................ ]
        [ am1  am2  am3  ...  amn ]   m=m
2. Square matrices:
A special case of rectangular matrices, with:
The total number of rows (m) = total number of columns (n)

Example of a 3x3 square matrix:

        [ a11  a12  a13 ]
  [A] = [ a21  a22  a23 ]
        [ a31  a32  a33 ]

The elements a11, a22, a33 form the diagonal line. All square matrices have a diagonal line.
Square matrices are the most common in computational engineering analyses.

3. Row matrices:

Matrices with only one row, that is, m = 1:

  [A] = [ a11  a12  a13  ...  a1n ]


4. Column matrices:
Opposite to row matrices, column matrices have only one column (n = 1), but more than one row:

        { a11 }
        { a21 }
  [A] = { a31 }
        { ... }
        { am1 }

Column matrices represent vector quantities in engineering analyses, e.g., a force vector:

        { Fx }
  {F} = { Fy }
        { Fz }

in which Fx, Fy and Fz are the three components along the x-, y- and z-axis in a rectangular coordinate system, respectively.

5. Upper triangular matrices:

These matrices have all elements of zero value below the diagonal line:

        [ a11  a12  a13 ]
  [A] = [  0   a22  a23 ]
        [  0    0   a33 ]
6. Lower triangular matrices:

All elements above the diagonal line of the matrices are zero:

        [ a11   0    0  ]
  [A] = [ a21  a22   0  ]
        [ a31  a32  a33 ]
7. Diagonal matrices:
Matrices in which all elements are zero, except those on the diagonal line:

        [ a11   0    0    0  ]
  [A] = [  0   a22   0    0  ]
        [  0    0   a33   0  ]
        [  0    0    0   a44 ]
8. Unity matrices [I]:
A special case of diagonal matrices, with all diagonal elements equal to 1 (unity value):

        [ 1.0   0    0    0  ]
  [I] = [  0   1.0   0    0  ]
        [  0    0   1.0   0  ]
        [  0    0    0   1.0 ]
Transposition of Matrices:

It is a common procedure in the manipulation of matrices.

The transposition of a matrix [A] is designated by [A]^T.

The transposition of a square matrix [A] is carried out by interchanging the elements across the diagonal line of that matrix:

        [ a11  a12  a13 ]            [ a11  a21  a31 ]
  [A] = [ a21  a22  a23 ]    [A]^T = [ a12  a22  a32 ]
        [ a31  a32  a33 ]            [ a13  a23  a33 ]

  (a) Original matrix              (b) Transposed matrix
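In code, transposition is just an index interchange; a minimal sketch in Python (the function name is illustrative):

```python
def transpose(A):
    """Return [A]^T: element (i,j) of the result is a_ji of the original."""
    m, n = len(A), len(A[0])
    return [[A[i][j] for i in range(m)] for j in range(n)]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
AT = transpose(A)
# Elements on the diagonal line stay in place; off-diagonal elements
# are interchanged across it, and transposing twice recovers [A].
```

The same index swap works for rectangular matrices: a row matrix transposes into a column matrix, and vice versa.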
Matrix Algebra

Matrices are expressions of ARRAYS of numbers or variables. They CANNOT be reduced to a single value, as in the case of a determinant.

Therefore matrices are not determinants.

Matrices can be summed, subtracted and multiplied, but cannot be divided.

Results of the above algebraic operations on matrices are themselves matrices.

A matrix cannot be divided by another matrix, but the sense of division can be accomplished by the inverse matrix technique.
1) Addition and subtraction of matrices

The matrices involved must have the SAME size (i.e., the same number of rows and columns):

  [C] = [A] ± [B]   with elements   c_ij = a_ij ± b_ij

2) Multiplication by a scalar quantity (α):

  [C] = α[A] = [α a_ij]

3) Multiplication of 2 matrices:

Multiplication of two matrices is possible only when:

The total number of columns in the 1st matrix = the total number of rows in the 2nd matrix:

   [C]   =   [A]   x   [B]
  (m x p)   (m x n)   (n x p)

The following recurrence relationship applies:

  c_ij = a_i1 b_1j + a_i2 b_2j + ... + a_in b_nj

with i = 1, 2, ..., m and j = 1, 2, ..., p
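The size rule and the recurrence for c_ij translate directly into code; a minimal sketch using nested lists (no error handling beyond the size check):

```python
def matmul(A, B):
    """[C] = [A][B] for an (m x n) matrix A and an (n x p) matrix B.
    c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj"""
    m, n, p = len(A), len(B), len(B[0])
    if any(len(row) != n for row in A):
        raise ValueError("columns of [A] must equal rows of [B]")
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

# (2x3) x (3x1) = (2x1), the shape used in Example 8.2 below
C = [[1, 2, 3],
     [4, 5, 6]]
x = [[1], [0], [2]]
print(matmul(C, x))  # [[7], [16]]
```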
Example 8.1

Multiply the following two 3x3 matrices:

                 [ a11  a12  a13 ] [ b11  b12  b13 ]
  [C] = [A][B] = [ a21  a22  a23 ] [ b21  b22  b23 ]
                 [ a31  a32  a33 ] [ b31  b32  b33 ]

    [ a11b11+a12b21+a13b31   a11b12+a12b22+a13b32   a11b13+a12b23+a13b33 ]
  = [ a21b11+a22b21+a23b31   a21b12+a22b22+a23b32   a21b13+a22b23+a23b33 ]
    [ a31b11+a32b21+a33b31   a31b12+a32b22+a33b32   a31b13+a32b23+a33b33 ]

So, we have:
  [A]  x  [B]  =  [C]
 (3x3)  (3x3)   (3x3)
Example 8.2:
Multiply a rectangular matrix by a column matrix:

                      { x1 }
  [ c11  c12  c13 ]   { x2 }  =  { c11 x1 + c12 x2 + c13 x3 }  =  { y1 }
  [ c21  c22  c23 ]   { x3 }     { c21 x1 + c22 x2 + c23 x3 }     { y2 }

  (2x3) x (3x1) = (2x1)

Example 8.3:
(A) Multiplication of row and column matrices:

                      { b11 }
  [ a11  a12  a13 ] x { b21 }  =  a11b11 + a12b21 + a13b31   (a scalar, or a single number)
                      { b31 }

  (1x3) x (3x1) = (1x1)

(B) Multiplication of a column matrix and a row matrix:

  { a11 }                        [ a11b11  a11b12  a11b13 ]
  { a21 } x [ b11  b12  b13 ]  = [ a21b11  a21b12  a21b13 ]   (a square matrix)
  { a31 }                        [ a31b11  a31b12  a31b13 ]

  (3x1) x (1x3) = (3x3)

Example 8.4
Multiplication of a square matrix and a column matrix:

  [ a11  a12  a13 ] { x }   { a11 x + a12 y + a13 z }
  [ a21  a22  a23 ] { y } = { a21 x + a22 y + a23 z }   (a column matrix)
  [ a31  a32  a33 ] { z }   { a31 x + a32 y + a33 z }

  (3x3) x (3x1) = (3x1)

NOTE:

Because of the size rule [C] = [A] x [B] with (m x p) = (m x n)(n x p), in general [A][B] ≠ [B][A].

Also, the following relationships will be useful:

Distributive law: [A]([B] + [C]) = [A][B] + [A][C]

Associative law: [A]([B][C]) = ([A][B])[C]

Product of two transposed matrices: ([A][B])^T = [B]^T[A]^T
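These rules are easy to confirm numerically; a small check with NumPy (the example matrices are arbitrary):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# [A][B] != [B][A] in general: matrix multiplication is not commutative
print(np.array_equal(A @ B, B @ A))          # False for this pair

# Product of two transposed matrices: ([A][B])^T = [B]^T [A]^T
print(np.array_equal((A @ B).T, B.T @ A.T))  # True

# Distributive law: [A]([B] + [C]) = [A][B] + [A][C]
C = np.array([[2, 0],
              [0, 2]])
print(np.array_equal(A @ (B + C), A @ B + A @ C))  # True
```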


Matrix Inversion
The inverse of a matrix [A], expressed as [A]^-1, is defined by:

  [A][A]^-1 = [A]^-1[A] = [I]          (8.13)

where [I] is a UNITY matrix.

NOTE: the inverse of a matrix [A] exists ONLY if |A| ≠ 0, where |A| = the equivalent determinant of matrix [A].

Following are the general steps for inverting the matrix [A]:

Step 1: Evaluate the equivalent determinant of the matrix. Make sure that |A| ≠ 0.
Step 2: If the elements of matrix [A] are a_ij, determine the elements of a co-factor matrix [C] to be:

  c_ij = (-1)^(i+j) |A'|          (8.14)

in which |A'| is the equivalent determinant of the matrix that contains all elements of [A] excluding those in the ith row and jth column.
Step 3: Transpose the co-factor matrix [C] to [C]^T.
Step 4: The inverse matrix [A]^-1 of matrix [A] may then be established by the expression:

  [A]^-1 = [C]^T / |A|          (8.15)
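Steps 1-4 can be sketched for a 3x3 matrix in plain Python (helper names are illustrative; in practice a library routine such as numpy.linalg.inv would be used):

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def minor(A, i, j):
    """The matrix [A'] of Step 2: [A] with row i and column j removed."""
    return [[A[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det3(A):
    """Step 1: the equivalent determinant |A|, expanded along the first row."""
    return sum((-1) ** j * A[0][j] * det2(minor(A, 0, j)) for j in range(3))

def inverse3(A):
    """Steps 2-4: co-factors c_ij = (-1)^(i+j)|A'|, transpose, divide by |A|."""
    d = det3(A)
    if d == 0:
        raise ValueError("inverse exists only if |A| != 0")
    C = [[(-1) ** (i + j) * det2(minor(A, i, j)) for j in range(3)]
         for i in range(3)]
    return [[C[j][i] / d for j in range(3)] for i in range(3)]  # [C]^T / |A|
```

Note the cost of this co-factor approach grows factorially with matrix size, which is one reason elimination-based methods (below) dominate in practice.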
Example 8.5
Find the inverse of the (3x3) matrix:

        [ 1   2  3 ]
  [A] = [ 0  -1  4 ]
        [ 2  -5  3 ]

Let us derive the inverse matrix of [A] by following the above steps:

Step 1: Evaluate the equivalent determinant of [A]:

  |A| = 1[(-1)(3) - (4)(-5)] - 2[(0)(3) - (4)(2)] + 3[(0)(-5) - (-1)(2)]
      = 1(17) - 2(-8) + 3(2) = 39 ≠ 0

So, we may proceed to determine the inverse of matrix [A].

Step 2: Use Equation (8.14), c_ij = (-1)^(i+j)|A'|, to find the elements of the co-factor matrix [C], where |A'| is the equivalent determinant of [A] with the ith row and jth column removed:

  c11 = (-1)^(1+1)[(-1)(3) - (4)(-5)] = 17
  c12 = (-1)^(1+2)[(0)(3) - (4)(2)]   = 8
  c13 = (-1)^(1+3)[(0)(-5) - (-1)(2)] = 2
  c21 = (-1)^(2+1)[(2)(3) - (3)(-5)]  = -21
  c22 = (-1)^(2+2)[(1)(3) - (3)(2)]   = -3
  c23 = (-1)^(2+3)[(1)(-5) - (2)(2)]  = 9
  c31 = (-1)^(3+1)[(2)(4) - (3)(-1)]  = 11
  c32 = (-1)^(3+2)[(1)(4) - (3)(0)]   = -4
  c33 = (-1)^(3+3)[(1)(-1) - (2)(0)]  = -1

We thus have the co-factor matrix [C] in the form:

        [  17   8   2 ]
  [C] = [ -21  -3   9 ]
        [  11  -4  -1 ]

Step 3: Transpose the [C] matrix:

          [ 17  -21  11 ]
  [C]^T = [  8   -3  -4 ]
          [  2    9  -1 ]

Step 4: The inverse of matrix [A] thus takes the form:

                              [ 17  -21  11 ]
  [A]^-1 = [C]^T/|A| = (1/39) [  8   -3  -4 ]
                              [  2    9  -1 ]

Check that the result is correct, i.e., [A][A]^-1 = [I]:

              [ 1   2  3 ]        [ 17  -21  11 ]   [ 1  0  0 ]
  [A][A]^-1 = [ 0  -1  4 ] (1/39) [  8   -3  -4 ] = [ 0  1  0 ] = [I]
              [ 2  -5  3 ]        [  2    9  -1 ]   [ 0  0  1 ]
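The result of Example 8.5 is easy to verify numerically; the sketch below assumes the matrix [[1, 2, 3], [0, -1, 4], [2, -5, 3]], whose determinant is 39:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, -1.0, 4.0],
              [2.0, -5.0, 3.0]])

det_A = np.linalg.det(A)   # the equivalent determinant |A|
A_inv = np.linalg.inv(A)

print(round(det_A))                        # 39
print(np.allclose(A @ A_inv, np.eye(3)))   # True: [A][A]^-1 = [I]
print(np.round(det_A * A_inv))             # recovers the integer matrix [C]^T
```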
Solution of Simultaneous Equations Using Matrix Algebra

A vital tool for solving very large numbers of simultaneous equations.

Why such a huge number of simultaneous equations in this type of analysis?
Numerical analyses, such as the finite element method (FEM) and the finite difference method (FDM), are two effective and powerful tools for engineering analysis of real but complex situations, for:
Mechanical stress and deformation analyses of machines and structures
Thermofluid analyses for temperature distributions in solids, and fluid flow behavior
requiring solutions in pressure drops and local velocity, as well as fluid-induced forces
The essence of FEM and FDM is to DISCRETIZE real structures or flow patterns
of complex configurations and loading/boundary conditions into FINITE number of
sub-components (called elements) inter-connected at common NODES
Analyses are performed in individual ELEMENTS instead of entire complex solids
or flow patterns

Example of the discretization of a piston in an internal combustion engine, and the resulting stress distributions in the piston and connecting rod:

[Figure: real piston, discretized piston (with connecting rod), and FE analysis results showing the distribution of stresses. Source: http://www.npd-solutions.com/feaoverview.html for FEM analysis]

FEM or FDM analyses result in one algebraic equation for every NODE in the discretized model. Imagine the total number of (simultaneous) equations that need to be solved!
Analyses using FEM that require solutions of tens of thousands of simultaneous equations are not unusual.
Solution of Simultaneous Equations Using Inverse Matrix Technique

Let us express the n simultaneous equations to be solved in the following form:

  a11 x1 + a12 x2 + a13 x3 + .......... + a1n xn = r1
  a21 x1 + a22 x2 + a23 x3 + .......... + a2n xn = r2
  a31 x1 + a32 x2 + a33 x3 + .......... + a3n xn = r3          (8.16)
  ......................................................
  am1 x1 + am2 x2 + am3 x3 + .......... + amn xn = rn

where a11, a12, ..., amn are constant coefficients,
x1, x2, ..., xn are the unknowns to be solved, and
r1, r2, ..., rn are the resultant constants.

The n simultaneous equations in Equation (8.16) can be expressed in matrix form as:

  [ a11  a12  a13  ...  a1n ] { x1 }   { r1 }
  [ a21  a22  a23  ...  a2n ] { x2 } = { r2 }          (8.17)
  [ ........................ ] { .. }   { .. }
  [ am1  am2  am3  ...  amn ] { xn }   { rn }

or in an abbreviated form:
  [A]{x} = {r}          (8.18)

in which [A] = coefficient matrix with m rows and n columns,
{x} = unknown matrix, a column matrix, and
{r} = resultant matrix, a column matrix.

Now, if we let [A]^-1 = the inverse matrix of [A] and multiply both sides of Equation (8.18) by [A]^-1, we get:

  [A]^-1[A]{x} = [A]^-1{r}

leading to: [I]{x} = [A]^-1{r}, in which [I] = a unity matrix.

The unknown matrix, and thus the values of the unknown quantities x1, x2, x3, ..., xn, may be obtained by the following relation:

  {x} = [A]^-1{r}          (8.19)
Example 8.6
Solve the following simultaneous equations using the matrix inversion technique:

  4x1 + x2 = 24
  x1 - 2x2 = -21

Let us express the above equations in the matrix form [A]{x} = {r}:

              [ 4   1 ]         { x1 }             {  24 }
  where [A] = [ 1  -2 ]   {x} = { x2 }   and {r} = { -21 }

Following the procedure presented in Section 8.5, we may derive the inverse matrix [A]^-1 (with |A| = (4)(-2) - (1)(1) = -9) to be:

  [A]^-1 = (1/9) [ 2   1 ]
                 [ 1  -4 ]

Thus, by using Equation (8.19), we have:

  { x1 }                     [ 2   1 ] {  24 }         { (2)(24) + (1)(-21)  }   { 27/9  }   {  3 }
  { x2 } = [A]^-1{r} = (1/9) [ 1  -4 ] { -21 } = (1/9) { (1)(24) + (-4)(-21) } = { 108/9 } = { 12 }

from which we solve for x1 = 3 and x2 = 12.
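Example 8.6 can be cross-checked with NumPy; a minimal sketch for the system 4x1 + x2 = 24, x1 - 2x2 = -21:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, -2.0]])
r = np.array([24.0, -21.0])

A_inv = np.linalg.inv(A)   # should equal (1/9)[[2, 1], [1, -4]]
x = A_inv @ r              # Equation (8.19): {x} = [A]^-1 {r}

print(np.allclose(9 * A_inv, [[2, 1], [1, -4]]))  # True
print(np.allclose(x, [3, 12]))                    # True
```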
Solution of Simultaneous Equations Using Gaussian Elimination Method

Johann Carl Friedrich Gauss (1777-1855)

A German astronomer (planet orbiting),
physicist (molecular bond theory, magnetic theory, etc.), and
mathematician (differential geometry, the Gaussian distribution in statistics, the Gaussian elimination method, etc.)

The Gaussian elimination method and its derivatives, e.g., the Gauss-Jordan elimination method and the Gauss-Seidel iteration method, are widely used in solving the large numbers of simultaneous equations required in many modern-day numerical analyses, such as FEM and FDM as mentioned earlier.

The principal reason the Gaussian elimination method is popular in this type of application is that the formulations in the solution procedure can be readily programmed, in languages such as FORTRAN, for digital computers with high computational efficiency.
The essence of the Gaussian elimination method:

1) Convert the square coefficient matrix [A] of a set of simultaneous equations into the form of the upper triangular matrix in Equation (8.5), using an elimination procedure:

        [ a11  a12  a13 ]   via elimination             [ a11  a12   a13  ]
  [A] = [ a21  a22  a23 ]   --------------->  [A]upper = [  0   a'22  a'23 ]
        [ a31  a32  a33 ]      process                  [  0    0   a''33 ]

2) The last unknown quantity in the converted upper triangular form of the simultaneous equations becomes immediately available:

  [ a11  a12   a13  ] { x1 }   { r1   }
  [  0   a'22  a'23 ] { x2 } = { r'2  }    =>    x3 = r''3 / a''33
  [  0    0   a''33 ] { x3 }   { r''3 }

3) The second-to-last unknown quantity may be obtained by substituting the newly found numerical value of the last unknown into the second-to-last equation:

  a'22 x2 + a'23 x3 = r'2    =>    x2 = (r'2 - a'23 x3) / a'22

4) The remaining unknown quantities may be obtained by the same procedure, working upward one equation at a time, which is termed back substitution.
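Steps 2)-4) above amount to a single back-substitution loop; a minimal sketch for an already triangularized system (the numbers shown are illustrative):

```python
def back_substitute(U, r):
    """Solve [U]{x} = {r} for an upper triangular [U]:
    x_i = (r_i - sum of u_ij * x_j for j > i) / u_ii, from the last row up."""
    n = len(r)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (r[i] - s) / U[i][i]
    return x

# Illustrative triangularized system: x3 comes out first, then x2, then x1
U = [[1.0, 0.0, 1.0],
     [0.0, 1.0, -1.0],
     [0.0, 0.0, 2.0]]
r = [1.0, -2.0, 2.0]
print(back_substitute(U, r))  # [0.0, -1.0, 1.0]
```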
The Gaussian Elimination Process:
We will demonstrate this process with the solution of 3 simultaneous equations:

  a11 x1 + a12 x2 + a13 x3 = r1
  a21 x1 + a22 x2 + a23 x3 = r2          (8.20 a,b,c)
  a31 x1 + a32 x2 + a33 x3 = r3

We will express Equation (8.20) in matrix form:

  [ a11  a12  a13 ] { x1 }   { r1 }
  [ a21  a22  a23 ] { x2 } = { r2 }          (8.21)
  [ a31  a32  a33 ] { x3 }   { r3 }

or in the simpler form [A]{x} = {r}.

We may express the unknown x1 in Equation (8.20a) in terms of x2 and x3 as follows:

  x1 = r1/a11 - (a12/a11) x2 - (a13/a11) x3

Now, if we substitute this expression for x1 into Equations (8.20b and c), Equation (8.20) turns from:

  a11 x1 + a12 x2 + a13 x3 = r1
  a21 x1 + a22 x2 + a23 x3 = r2
  a31 x1 + a32 x2 + a33 x3 = r3

into:

  a11 x1 + a12 x2 + a13 x3 = r1
  (a22 - a21 a12/a11) x2 + (a23 - a21 a13/a11) x3 = r2 - (a21/a11) r1          (8.22)
  (a32 - a31 a12/a11) x2 + (a33 - a31 a13/a11) x3 = r3 - (a31/a11) r1

You do not see x1 in the new Equations (8.22b and c) anymore:
x1 has been eliminated from these equations after Step 1 of the elimination.
The new matrix form of the simultaneous equations is:

  [ a11    a12      a13    ] { x1 }   { r1     }
  [  0   a22^(1)  a23^(1)  ] { x2 } = { r2^(1) }          (8.23)
  [  0   a32^(1)  a33^(1)  ] { x3 }   { r3^(1) }

with:

  a22^(1) = a22 - a21 a12/a11        a23^(1) = a23 - a21 a13/a11
  a32^(1) = a32 - a31 a12/a11        a33^(1) = a33 - a31 a13/a11
  r2^(1) = r2 - (a21/a11) r1         r3^(1) = r3 - (a31/a11) r1

The superscript (1) indicates elimination Step 1 in the above expressions.
Step 2 of the elimination involves expressing x2 in Equation (8.22b) in terms of x3:

from:

  (a22 - a21 a12/a11) x2 + (a23 - a21 a13/a11) x3 = r2 - (a21/a11) r1          (8.22b)

to:

  x2 = [ r2 - (a21/a11) r1 - (a23 - a21 a13/a11) x3 ] / (a22 - a21 a12/a11)

and substituting it into Equation (8.22c), which eliminates x2 from that equation.

The matrix form of the original simultaneous equations now takes the form:

  [ a11    a12      a13    ] { x1 }   { r1     }
  [  0   a22^(1)  a23^(1)  ] { x2 } = { r2^(1) }          (8.24)
  [  0     0      a33^(2)  ] { x3 }   { r3^(2) }

We notice the coefficient matrix [A] has now become an upper triangular matrix, from which we have the solution:

  x3 = r3^(2) / a33^(2)

The other two unknowns x2 and x1 may be obtained by the back substitution process from Equation (8.24), such as:

  x2 = [ r2^(1) - a23^(1) x3 ] / a22^(1)
Recurrence relations for the Gaussian elimination process:
Given the general form of n simultaneous equations:

  a11 x1 + a12 x2 + a13 x3 + .......... + a1n xn = r1
  a21 x1 + a22 x2 + a23 x3 + .......... + a2n xn = r2
  a31 x1 + a32 x2 + a33 x3 + .......... + a3n xn = r3          (8.16)
  ......................................................
  am1 x1 + am2 x2 + am3 x3 + .......... + amn xn = rn

the following recurrence relations can be used in the Gaussian elimination process.

For elimination (step n, applied for i > n and j > n):

  a_ij^(n) = a_ij^(n-1) - a_in^(n-1) a_nj^(n-1) / a_nn^(n-1)          (8.25a)

  r_i^(n) = r_i^(n-1) - a_in^(n-1) r_n^(n-1) / a_nn^(n-1)          (8.25b)

For back substitution:

  x_i = ( r_i - SUM over j = i+1 to n of a_ij x_j ) / a_ii          (8.26)

with i = n, n-1, n-2, ......., 1 (for i = n the sum is empty, giving x_n = r_n/a_nn).
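The recurrence relations (8.25) and (8.26) can be sketched directly in Python. The sketch below uses no pivoting, so a zero pivot a_nn^(n-1) will fail; practical solvers interchange rows to avoid this:

```python
def gauss_solve(A, r):
    """Solve [A]{x} = {r} by Gaussian elimination, Equations (8.25)-(8.26)."""
    n = len(r)
    A = [row[:] for row in A]   # work on copies of the inputs
    r = r[:]
    # Elimination, Eq. (8.25): step k zeroes out column k below the pivot a_kk
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            r[i] -= factor * r[k]
    # Back substitution, Eq. (8.26)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (r[i] - s) / A[i][i]
    return x
```

For instance, gauss_solve([[1.0, 0, 1], [2, 1, 1], [1, 1, 2]], [1.0, 0, 1]) returns [0.0, -1.0, 1.0], matching the worked example that follows.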
Example
Solve the following simultaneous equations using the Gaussian elimination method:

  x + z = 1
  2x + y + z = 0          (a)
  x + y + 2z = 1

Express the above equations in matrix form:

  [ 1  0  1 ] { x }   { 1 }
  [ 2  1  1 ] { y } = { 0 }          (b)
  [ 1  1  2 ] { z }   { 1 }

If we compare Equation (b) with the following typical matrix expression of 3 simultaneous equations:

  [ a11  a12  a13 ] { x1 }   { r1 }
  [ a21  a22  a23 ] { x2 } = { r2 }
  [ a31  a32  a33 ] { x3 }   { r3 }

we will have the following:

  a11 = 1   a12 = 0   a13 = 1        r1 = 1
  a21 = 2   a22 = 1   a23 = 1   and  r2 = 0
  a31 = 1   a32 = 1   a33 = 2        r3 = 1

Let us use the recurrence relations for the elimination process in Equation (8.25), with i > n and j > n:

  a_ij^(n) = a_ij^(n-1) - a_in^(n-1) a_nj^(n-1) / a_nn^(n-1)
  r_i^(n) = r_i^(n-1) - a_in^(n-1) r_n^(n-1) / a_nn^(n-1)

Step 1: n = 1, so i = 2, 3 and j = 2, 3

For i = 2, j = 2 and 3:

  i = 2, j = 2:  a22^(1) = a22^(0) - a21^(0) a12^(0)/a11^(0) = 1 - (2)(0)/1 = 1
  i = 2, j = 3:  a23^(1) = a23^(0) - a21^(0) a13^(0)/a11^(0) = 1 - (2)(1)/1 = -1
  i = 2:         r2^(1) = r2^(0) - a21^(0) r1^(0)/a11^(0) = 0 - (2)(1)/1 = -2

For i = 3, j = 2 and 3:

  i = 3, j = 2:  a32^(1) = a32^(0) - a31^(0) a12^(0)/a11^(0) = 1 - (1)(0)/1 = 1
  i = 3, j = 3:  a33^(1) = a33^(0) - a31^(0) a13^(0)/a11^(0) = 2 - (1)(1)/1 = 1
  i = 3:         r3^(1) = r3^(0) - a31^(0) r1^(0)/a11^(0) = 1 - (1)(1)/1 = 0

So, the original simultaneous equations after Step 1 of the elimination have the form:

  [ a11   a12      a13    ] { x1 }   { r1     }        [ 1  0   1 ] { x1 }   {  1 }
  [  0  a22^(1)  a23^(1)  ] { x2 } = { r2^(1) }   i.e. [ 0  1  -1 ] { x2 } = { -2 }
  [  0  a32^(1)  a33^(1)  ] { x3 }   { r3^(1) }        [ 0  1   1 ] { x3 }   {  0 }

We now have:  a22^(1) = 1,  a23^(1) = -1,  a32^(1) = 1,  a33^(1) = 1,  r2^(1) = -2,  r3^(1) = 0

Step 2: n = 2, so i = 3 and j = 3 (i > n, j > n)

  i = 3, j = 3:  a33^(2) = a33^(1) - a32^(1) a23^(1)/a22^(1) = 1 - (1)(-1)/1 = 2
  i = 3:         r3^(2) = r3^(1) - a32^(1) r2^(1)/a22^(1) = 0 - (1)(-2)/1 = 2

The coefficient matrix [A] has now been triangularized, and the original simultaneous equations have been transformed into the form:

  [ a11   a12      a13    ] { x1 }   { r1     }        [ 1  0   1 ] { x1 }   {  1 }
  [  0  a22^(1)  a23^(1)  ] { x2 } = { r2^(1) }   i.e. [ 0  1  -1 ] { x2 } = { -2 }
  [  0    0      a33^(2)  ] { x3 }   { r3^(2) }        [ 0  0   2 ] { x3 }   {  2 }

From the last equation, (0)x1 + (0)x2 + 2x3 = 2, we solve for x3 = 1. The other two unknowns x2 and x1 can be obtained by back substitution of x3 using Equation (8.26):

  x2 = [ r2^(1) - a23^(1) x3 ] / a22^(1) = [ -2 - (-1)(1) ] / 1 = -1

and

  x1 = [ r1 - a12 x2 - a13 x3 ] / a11 = [ 1 - (0)(-1) - (1)(1) ] / 1 = 0

We thus have the solution: x = x1 = 0; y = x2 = -1 and z = x3 = 1
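As a final cross-check, the same system can be handed to a library solver; np.linalg.solve performs an LU factorization, i.e., Gaussian elimination with row pivoting:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
r = np.array([1.0, 0.0, 1.0])

x = np.linalg.solve(A, r)
print(np.allclose(x, [0.0, -1.0, 1.0]))  # True: x = 0, y = -1, z = 1
```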
