Continuum Mechanics

About this ebook

The mechanics of fluids and the mechanics of solids represent the two major areas of physics and applied mathematics that meet in continuum mechanics, a field that forms the foundation of civil and mechanical engineering. This unified approach to the teaching of fluid and solid mechanics focuses on the general mechanical principles that apply to all materials. Students who have familiarized themselves with the basic principles can go on to specialize in any of the different branches of continuum mechanics. This text opens with introductory chapters on matrix algebra, vectors and Cartesian tensors, and an analysis of deformation and stress. Succeeding chapters examine the mathematical statements of the laws of conservation of mass, momentum, and energy as well as the formulation of the mechanical constitutive equations for various classes of fluids and solids. In addition to many worked examples, this volume features a graded selection of problems (with answers, where appropriate). Geared toward undergraduate students of applied mathematics, it will also prove valuable to physicists and engineers. 1992 edition.

LanguageEnglish
Release dateJun 8, 2012
ISBN9780486139470
Continuum Mechanics


    Continuum Mechanics - A. J. M. Spencer

    1979

    1

    Introduction

    1.1 Continuum mechanics

    Modern physical theories tell us that on the microscopic scale matter is discontinuous; it consists of molecules, atoms and even smaller particles. However, we usually have to deal with pieces of matter which are very large compared with these particles; this is true in everyday life, in nearly all engineering applications of mechanics, and in many applications in physics. In such cases we are not concerned with the motion of individual atoms and molecules, but only with their behaviour in some average sense. In principle, if we knew enough about the behaviour of matter on the microscopic scale it would be possible to calculate the way in which material behaves on the macroscopic scale by applying appropriate statistical procedures. In practice, such calculations are extremely difficult; only the simplest systems can be studied in this way, and even in these simple cases many approximations have to be made in order to obtain results. Consequently, our knowledge of the mechanical behaviour of materials is almost entirely based on observations and experimental tests of their behaviour on a relatively large scale.

    Continuum mechanics is concerned with the mechanical behaviour of solids and fluids on the macroscopic scale. It ignores the discrete nature of matter, and treats material as uniformly distributed throughout regions of space. It is then possible to define quantities such as density, displacement, velocity, and so on, as continuous (or at least piecewise continuous) functions of position. This procedure is found to be satisfactory provided that we deal with bodies whose dimensions are large compared with the characteristic lengths (for example, interatomic spacings in a crystal, or mean free paths in a gas) on the microscopic scale. The microscopic scale need not be of atomic dimensions; we can, for example, apply continuum mechanics to a granular material such as sand, provided that the dimensions of the region considered are large compared with those of an individual grain. In continuum mechanics it is assumed that we can associate a particle of matter with each and every point of the region of space occupied by a body, and ascribe field quantities such as density, velocity, and so on, to these particles. The justification for this procedure is to some extent based on statistical mechanical theories of gases, liquids and solids, but rests mainly on its success in describing and predicting the mechanical behaviour of material in bulk.

    Mechanics is the science which deals with the interaction between force and motion. Consequently, the variables which occur in continuum mechanics are, on the one hand, variables related to forces (usually force per unit area or per unit volume, rather than force itself) and, on the other hand, kinematic variables such as displacement, velocity and acceleration. In rigid-body mechanics, the shape of a body does not change, and so the particles which make up a rigid body may only move relatively to one another in a very restricted way. A rigid body is a continuum, but it is a very special, idealized and untypical one. Continuum mechanics is more concerned with deformable bodies, which are capable of changing their shape. For such bodies the relative motion of the particles is important, and this introduces as significant kinematic variables the spatial derivatives of displacement, velocity, and so on.

    The equations of continuum mechanics are of two main kinds. Firstly, there are equations which apply equally to all materials. They describe universal physical laws, such as conservation of mass and energy. Secondly, there are equations which describe the mechanical behaviour of particular materials; these are known as constitutive equations.

    The problems of continuum mechanics are also of two main kinds. The first is the formulation of constitutive equations which are adequate to describe the mechanical behaviour of various particular materials or classes of materials. This formulation is essentially a matter for experimental determination, but a theoretical framework is needed in order to devise suitable experiments and to interpret experimental results. The second problem is to solve the constitutive equations, in conjunction with the general equations of continuum mechanics, and subject to appropriate boundary conditions, to confirm the validity of the constitutive equations and to predict and describe the behaviour of materials in situations which are of engineering, physical or mathematical interest. At this problem-solving stage the different branches of continuum mechanics diverge, and we leave this aspect of the subject to more comprehensive and more specialized texts.

    2

    Introductory matrix algebra

    2.1 Matrices

    In this chapter we summarize some useful results from matrix algebra. It is assumed that the reader is familiar with the elementary operations of matrix addition, multiplication, inversion and transposition. Most of the other properties of matrices which we will present are also elementary, and some of them are quoted without proof. The omitted proofs will be found in standard texts on matrix algebra.

    An m × n matrix A is an ordered rectangular array of mn elements. We denote

    A = (Aij)    (2.1)

    so that Aij is the element in the ith row and the jth column of the matrix A. The index i takes values 1, 2, . . . , m, and the index j takes values 1, 2, . . . , n. In continuum mechanics the matrices which occur are usually either 3 × 3 square matrices, 3 × 1 column matrices or 1 × 3 row matrices. We shall usually denote 3 × 3 square matrices by bold-face roman capital letters (A, B, C, etc.) and 3 × 1 column matrices by bold-face roman lower-case letters (a, b, c, etc.). A 1 × 3 row matrix will be treated as the transpose of a 3 × 1 column matrix (aT, bT, cT, etc.). Unless otherwise stated, indices will take the values 1, 2 and 3, although most of the results to be given remain true for arbitrary ranges of the indices.

    A square matrix A is symmetric if

    A = AT    (2.2)

    and anti-symmetric if

    A = –AT    (2.3)

    where AT denotes the transpose of A.

    The 3 × 3 unit matrix is denoted by I, and its elements by δij. Thus

    I = (δij)    (2.4)

    where

    δij = 1 if i = j,   δij = 0 if i ≠ j    (2.5)

    Clearly δij = δji. The symbol δij is known as the Kronecker delta. An important property of δij is the substitution rule:

    Σj δijAjk = Aik,   Σj δijAkj = Aki    (2.6)

    The trace of a square matrix A is denoted by tr A, and is the sum of the elements on the leading diagonal of A. Thus, for a 3 × 3 matrix A,

    tr A = Σi Aii = A11 + A22 + A33    (2.7)

    In particular,

    tr I = Σi δii = 3    (2.8)
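    The substitution rule and the traces above can be checked by explicit summation. The following is a minimal pure-Python sketch; the helper name delta and the sample matrix A are our own, not the text's.

```python
# Hypothetical sketch: the Kronecker delta (2.5), the substitution rule (2.6)
# and the traces (2.7)-(2.8), checked by explicit summation over indices 1..3.

def delta(i, j):
    """Kronecker delta: 1 if i == j, else 0."""
    return 1 if i == j else 0

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]          # a sample 3 x 3 matrix; A[i-1][j-1] stores Aij

# Substitution rule (2.6): sum over j of delta_ij A_jk recovers A_ik.
for i in (1, 2, 3):
    for k in (1, 2, 3):
        assert sum(delta(i, j) * A[j-1][k-1] for j in (1, 2, 3)) == A[i-1][k-1]

# Trace (2.7): tr A = A11 + A22 + A33; and (2.8): tr I = delta_ii = 3.
tr_A = sum(A[i-1][i-1] for i in (1, 2, 3))
print(tr_A)                                  # 1 + 5 + 10 = 16
print(sum(delta(i, i) for i in (1, 2, 3)))   # 3
```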

    With a square matrix A there is associated its determinant, det A. We assume familiarity with the elementary properties of determinants. The determinant of a 3 × 3 matrix A can be expressed as

    det A = (1/3!) Σi Σj Σk Σp Σq Σr eijk epqr Aip Ajq Akr    (2.9)

    where the alternating symbol eijk is defined as:

    eijk = 1 if (i, j, k) is an even permutation of (1, 2, 3) (i.e. e123 = e231 = e312 = 1);

    eijk = –1 if (i, j, k) is an odd permutation of (1, 2, 3) (i.e. e321 = e132 = e213 = –1);

    eijk = 0 if any two of i, j, k are equal (e.g. e112 = 0, e333 = 0).

    It follows from this definition that eijk has the symmetry properties

    eijk = ejki = ekij = –ekji = –ejik = –eikj    (2.10)
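    The definition and symmetry properties of the alternating symbol can be verified by brute force. This is our own sketch; the function name eps and its product formula are assumptions, not from the text.

```python
# A sketch of the alternating symbol e_ijk: the product of pairwise index
# differences gives +1 for even permutations of (1,2,3), -1 for odd, 0 if
# any two indices coincide.

def eps(i, j, k):
    """e_ijk for indices in {1, 2, 3}."""
    return (j - i) * (k - i) * (k - j) // 2   # exact: the product is 0 or ±2

assert eps(1, 2, 3) == eps(2, 3, 1) == eps(3, 1, 2) == 1
assert eps(3, 2, 1) == eps(1, 3, 2) == eps(2, 1, 3) == -1
assert eps(1, 1, 2) == eps(3, 3, 3) == 0

# Symmetry properties (2.10): cyclic permutations of the indices leave e_ijk
# unchanged, while interchanging any two indices reverses its sign.
for i in (1, 2, 3):
    for j in (1, 2, 3):
        for k in (1, 2, 3):
            assert eps(i, j, k) == eps(j, k, i) == -eps(j, i, k)
print("symmetries verified")
```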

    The condition det A ≠ 0 is a necessary and sufficient condition for the existence of the inverse A–1 of A.

    A square matrix Q is orthogonal if it has the property

    QQT = I    (2.11)

    It follows that if Q is orthogonal, then

    QT = Q–1,   QTQ = I    (2.12)

    and

    det Q = ±1    (2.13)

    Our main concern will be with proper orthogonal matrices, for which

    det Q = 1

    If Q1 and Q2 are two orthogonal matrices, then their product Q1 Q2 is also an orthogonal matrix.
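    These properties can be illustrated numerically with rotation matrices, which are proper orthogonal. The sketch below is ours; the helper names (matmul, transpose, rotation_z, close) are assumptions for illustration.

```python
# A numerical sketch: a rotation about the 3-axis is proper orthogonal, and
# the product of two orthogonal matrices is again orthogonal.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def rotation_z(t):
    """Rotation by angle t about the 3-axis."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def close(A, B, tol=1e-12):
    """Elementwise comparison up to floating-point tolerance."""
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Q1, Q2 = rotation_z(0.3), rotation_z(1.1)

assert close(matmul(Q1, transpose(Q1)), I)   # (2.11): Q Q^T = I
Q = matmul(Q1, Q2)
assert close(matmul(Q, transpose(Q)), I)     # the product is also orthogonal
print("orthogonality verified")
```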

    2.2 The summation convention

    A very useful notational device in the manipulation of matrix, vector and tensor expressions is the summation convention. According to this, if the same index occurs twice in any expression, summation over the values 1, 2 and 3 of that index is automatically assumed, and the summation sign is omitted. Thus, for example, in (2.7) we may omit the summation sign and write

    tr A = Aii

    Similarly, the relations (2.6) are written as

    δijAjk = Aik, δijAkj = Aki

    and from (2.8),

    δii = 3

    Using this convention, (2.9) becomes

    det A = (1/3!) eijk epqr Aip Ajq Akr    (2.14)

    The conciseness introduced by the use of this notation is illustrated by the observation that, in full, the right-hand side of (2.14) contains 3⁶ = 729 terms, although because of the properties of eijk only six of these are distinct and non-zero.
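    The six-index contraction can be evaluated directly and compared with the familiar cofactor expansion. This is our own sketch; the sample matrix and helper names are assumptions.

```python
# A sketch of (2.14): summing e_ijk e_pqr A_ip A_jq A_kr over all six indices
# (729 terms, nearly all of which vanish because of e_ijk) and dividing by 3!
# reproduces det A.
from itertools import product

def eps(i, j, k):
    return (j - i) * (k - i) * (k - j) // 2

A = [[2, 1, 0],
     [1, 3, 4],
     [0, 4, 5]]

det_sum = sum(eps(i, j, k) * eps(p, q, r)
              * A[i-1][p-1] * A[j-1][q-1] * A[k-1][r-1]
              for i, j, k, p, q, r in product((1, 2, 3), repeat=6)) // 6

# Compare with the usual cofactor expansion of a 3 x 3 determinant.
det_direct = (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
              - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
              + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))
print(det_sum, det_direct)   # both -7
```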

    Some other examples of the use of summation convention are the following:

    (a) If A = (Aij), B = (Bij), then the element in the ith row and jth column of the product AB is Σk AikBkj, which is written as AikBkj.

    (b) Suppose that in (a) above, B = AT. Then Bij = Aji, and so the element in the ith row and jth column of AAT is AikAjk. In particular, if A is an orthogonal matrix Q = (Qij) we have from (2.12)

    QikQjk = δij,   QkiQkj = δij    (2.15)

    (c) A linear relation between two column matrices x and y has the form

    y = Ax    (2.16)

    which may be written as

    yi = Aijxj    (2.17)

    If A is non-singular, then from (2.16), x = A–1y. In particular, if A is an orthogonal matrix Q, then x = QTy.

    (d) The trace of AB is obtained by setting i = j in the last expression in (a) above; thus

    tr AB = AikBki    (2.18)

    By a direct extension of this argument

    tr ABC = AijBjkCki,

    and so on.

    (e) If a and b are column matrices with elements ai and bi respectively, then aTb is a 1 × 1 matrix whose single element is

    aTb = aibi = a1b1 + a2b2 + a3b3    (2.19)

    (f) If a is as in (e) above, and A is a 3 × 3 matrix, then Aa is a 3 × 1 column matrix, and the element in its ith row is Σr Airar, which is written as Airar.

    Two useful relations between the Kronecker delta and the alternating symbol are

    eijp ersp = δirδjs – δisδjr,   eipq erpq = 2δir    (2.20)

    These can be verified directly by considering all possible combinations of values of i, j, p, q, r and s. Actually, (2.20) are consequences of a more general relation between δij and eijk, which can also be proved directly, and is

                | δip  δiq  δir |
    eijk epqr = | δjp  δjq  δjr |    (2.21)
                | δkp  δkq  δkr |

    From (2.14) and (2.21) we can obtain the useful relation

    eijk Aip Ajq Akr = epqr det A    (2.22)
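    Checking all combinations of index values, as suggested above, is easily mechanized. This brute-force sketch is our own; it verifies the two relations (2.20) by exhaustive enumeration.

```python
# A brute-force verification of the e-delta relations (2.20): loop over every
# combination of free index values and compare both sides exactly.
from itertools import product

def delta(i, j):
    return 1 if i == j else 0

def eps(i, j, k):
    return (j - i) * (k - i) * (k - j) // 2

R = (1, 2, 3)

# e_ijp e_rsp = delta_ir delta_js - delta_is delta_jr (summed over p)
for i, j, r, s in product(R, repeat=4):
    lhs = sum(eps(i, j, p) * eps(r, s, p) for p in R)
    assert lhs == delta(i, r) * delta(j, s) - delta(i, s) * delta(j, r)

# e_ipq e_rpq = 2 delta_ir (summed over p and q)
for i, r in product(R, repeat=2):
    assert sum(eps(i, p, q) * eps(r, p, q) for p in R for q in R) == 2 * delta(i, r)

print("identities verified")
```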

    An index on which a summation is carried out is called a dummy index. A dummy index may be replaced by any other dummy index, for example Aii = Ajj. However, it is important always to ensure that, when the summation convention is employed, no index appears more than twice in any expression, because the expression is then ambiguous.

    In the remainder of this book it is to be assumed, unless the contrary is stated, that the summation convention is being employed. This applies, in subsequent chapters, to indices which label vector and tensor components as well as those which label matrix elements.

    2.3 Eigenvalues and eigenvectors

    In continuum mechanics, and in many other subjects, we frequently encounter homogeneous algebraic equations of the form

    Ax = λx    (2.23)

    where A is a given square matrix, x an unknown column matrix and λ an unknown scalar. In the applications which appear in this book, A will be a 3 × 3 matrix. We therefore confine the discussion to the case in which A is a 3 × 3 matrix, although the generalization to n × n matrices is straightforward. Equation (2.23) can be written in the form

    (A – λI)x = 0    (2.24)

    and the condition for (2.24) to have non-trivial solutions for x is

    det (A – λI) = 0    (2.25)

    This is the characteristic equation for the matrix A. When the determinant is expanded, (2.25) becomes a cubic equation for λ, with three roots λ1, λ2, λ3 which are called the eigenvalues of A. For the present we assume that λ1, λ2 and λ3 are distinct. Then, for example, the equation

    (A – λ1I)x = 0

    has a non-trivial solution x(1), which is indeterminate to within a scalar multiplier. The column matrix x(1) is the eigenvector of A associated with the eigenvalue λ1; eigenvectors x(2) and x(3) associated with the eigenvalues λ2 and λ3 are defined similarly.

    Since λ1, λ2, λ3 are the roots of (2.25), and the coefficient of λ³ on the left of (2.25) is -1, we have

    det (A – λI) = (λ1 – λ)(λ2 – λ)(λ3 – λ)    (2.26)

    This is an identity in λ, so it follows by setting λ = 0 that

    det A = λ1λ2λ3    (2.27)
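    Relation (2.27) can be checked numerically on a matrix whose eigenvalues are known in advance. In this sketch (our own example) A is upper triangular, so its eigenvalues are simply its diagonal entries.

```python
# A numerical sketch of (2.25) and (2.27): for an upper-triangular matrix the
# eigenvalues are the diagonal entries, so det A should equal their product.

A = [[2, 1, 0],
     [0, 3, 5],
     [0, 0, 4]]           # eigenvalues: 2, 3, 4

def det3(M):
    """Cofactor expansion of a 3 x 3 determinant."""
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
            - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
            + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def char_poly(lam):
    """det(A - lam*I), the left-hand side of the characteristic equation (2.25)."""
    return det3([[A[i][j] - (lam if i == j else 0) for j in range(3)]
                 for i in range(3)])

assert char_poly(2) == char_poly(3) == char_poly(4) == 0   # each λi is a root
assert det3(A) == 2 * 3 * 4                                # (2.27): det A = λ1λ2λ3
print(det3(A))   # 24
```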

    Now suppose that A is a real symmetric matrix. There is no a priori reason to expect λ1 and x(1) to be real, so denote their complex conjugates by λ̄1 and x̄(1). Then

    Ax(1) = λ1x(1)    (2.28)

    Transposing (2.28) and taking its complex conjugate gives

    x̄(1)TA = λ̄1x̄(1)T    (2.29)

    Now multiply (2.28) on the left by x̄(1)T and (2.29) on the right by x(1), and subtract. This gives

    (λ1 – λ̄1)x̄(1)Tx(1) = 0    (2.30)

    Since x̄(1)Tx(1) ≠ 0, it follows that λ1 = λ̄1, and so λ1 is real. Hence the eigenvalues of a real symmetric matrix are real.

    Also from (2.28),

    x(2)TAx(1) = λ1x(2)Tx(1)    (2.31)

    and similarly

    x(1)TAx(2) = λ2x(1)Tx(2)    (2.32)

    Now transpose (2.31) and subtract the resulting equation from (2.32). This gives

    (λ1 – λ2)x(1)Tx(2) = 0    (2.33)

    Hence the eigenvectors associated
