
Mish, K.D. and Mello, J. "Computer-Aided Engineering." Mechanical Engineering Handbook, Ed. Frank Kreith. Boca Raton: CRC Press LLC, 1999.

© 1999 by CRC Press LLC
Computer-Aided Engineering

Kyran D. Mish
California State University, Chico

Joseph Mello
Aerojet Corporation

15.1 Introduction
    Definition of Terms (CAD/CAM/CAE) • Overview of Computer-Aided Engineering • Goals of Computer-Aided Engineering Chapter
15.2 Computer Programming and Computer Architecture
    Software Engineering Overview • Computer Languages • Data Base Systems • Operating System Characteristics • Parallel Computation • Computer Graphics and Visualization
15.3 Computational Mechanics
    Computational Solid, Fluid, and Thermal Problems • Mathematical Characteristics of Field Problems • Finite-Element Approximations in Mechanical Engineering • Finite-Difference Approximations in Mechanics • Alternative Numerical Schemes for Mechanics Problems • Time-Dependent and Nonlinear Computational Mechanics • Standard Packages Criteria • Selection of Software • Benchmarking
15.4 Computer Intelligence
    Artificial Intelligence • Expert Systems in Mechanical Engineering Design • Neural Network Simulations • Fuzzy Logic and Other Statistical Methods
15.5 Computer-Aided Design (CAD)
    Introduction • Entity-Based CAD Systems • Solid or Feature-Based CAD Systems • Computer Aided Manufacturing (CAM)

15.1 Introduction
The revolution in computer technology that has taken place over the last few decades has changed the
face of all engineering disciplines. Fortunately, computers appropriate for solving complex mechanical
engineering problems have evolved from complicated, rare, and expensive mainframes or supercomputers
of yesterday to simple, common, and inexpensive desktop microcomputers widely available today. During
this period, the application of improved computer technology has revolutionized the profession of
mechanical engineering and enabled practicing engineers to solve problems routinely that a few decades
ago would have resisted any available solution technique. However, many engineers possess only a
limited grasp of important topics such as computational mechanics, computer science, and advanced
computer-aided design principles. Thus, this chapter presents the fundamentals of these computational
topics, in order to assist the practicing mechanical engineer in maintaining an appropriate high-level
understanding of these emerging engineering tools.

Definition of Terms (CAD/CAM/CAE)


First, it is necessary to define a few terms related to the integration of the computer into various aspects
of engineering practice:
•  Computer-aided engineering (CAE) is a broadly inclusive term that includes application of the computer for general tasks encountered in mechanical engineering, including analysis, design, and production.
•  Computer-aided manufacture (CAM) is the topic concerned with the integration of the computer into the manufacturing process, including such tasks as controlling real-time production devices on the factory floor. This subject is treated in Section 13.
•  Computer-aided design (CAD) is a general term outlining areas where computer technology is used to speed up or eliminate efforts by engineers within the design/analysis cycle. This topic is also covered in Section 11.
The acronym CAD is also used as an abbreviation for computer-aided drafting. Since drafting is a
key component of the mechanical engineering design process, this more narrow meaning will be
integrated into the general notion of computer-aided design practice.

Overview of Computer-Aided Engineering


There are many important components of integrating the computer into mechanical engineering. In the
past few decades, integrating the computer into mechanical engineering practice was largely a matter
of generating input data for computational analyses and subsequent examination of the output in order
to verify the results and to examine the response of the computer model. Today there are a wide variety
of integration schemes used in engineering practice, ranging from artificial intelligence applications
oriented toward removing the engineer from the design/analysis cycle, to "engineer in the loop"
simulations intended to offload to the computer the quantitative tedium of design and analysis in order to
permit the engineer to concentrate instead on qualitative issues of professional judgment. In the near
future, these existing computer integration techniques will be combined with virtual reality technology
and improved models for human/computer interaction. In order to use these new developments, engineers
must be appropriately schooled in the fundamental tenets of modern computer-aided engineering.

Goals of Computer-Aided Engineering Chapter


This chapter is designed to aid practicing mechanical engineers in understanding the scope of applications
of computer technology to their profession. To avoid premature obsolescence of the material presented
here, a high-level orientation is utilized. Details of computational techniques are relegated to a catalog
of appropriate references, and high-level practical concepts are emphasized. Low-level details (such
as computer performance issues) change rapidly relative to high-level concerns (such as general classi-
fications of computer function, or qualitative measures of computer solution quality). Thus, concentrating
on a technical overview of computer-aided engineering will permit this chapter to stay up-to-date over
the long term. In addition, this approach provides the practicing mechanical engineer with the funda-
mental principles required to accommodate the relentless pace of change that characterizes the world of
modern computation.
In addition to providing engineers with basic principles of computer science, computational mechanics,
computer intelligence, and examples of the application of CAD/CAM principles, this chapter also
provides insight into such simple but commonly neglected issues as:
•  What are the best ways to identify and judge computer applications for integration and use in a practical engineering setting?
•  What common pitfalls can be encountered using general-purpose CAE software, and how can these problems be avoided?
•  What are the most important characteristics of engineering computation, and how do engineering computer problems differ from computational methods used in other fields?
The development of material in this chapter is oriented toward presenting the practical "how" of
effectively using engineering software, as opposed to the mathematical "why" of identifying the cause of
pathological computer results. This chapter therefore first provides the practicing mechanical engineer
with an overview of the field of computer-aided engineering, and then presents that material in a manner
less likely to become obsolete due to the rapid pace of change in this important engineering discipline.
Finally, in order to avoid overlap with other chapters, many details (for example, mathematical defini-
tions) are relegated to other chapters, or to appropriate references. Thus this chapter mainly provides an
overview of the broad subject of computer-aided engineering, leaving the individual component subjects
to other chapters or to outside references.

15.2 Computer Programming and Computer Architecture


Few other aspects of engineering practice change as rapidly as the use of the computer. Fifty years ago,
mechanical engineers analyzed and designed automobiles that are not radically different from the
automobiles of today, but practicing engineers of that era had no means of electronic calculation. Twenty-
five years ago, mechanical engineers designed high-performance aircraft and space vehicles that are
similar to those currently under development, but the computers of that more recent era had less power
than a typical microprocessor available today in an automobile's braking system. Ten years ago, an
advanced supercomputer appropriate for high-performance engineering computation cost several million
dollars, required custom construction, installation, and support, and could perform between 10 million
and 100 million floating-point operations per second. Today, one can buy such computational power for
a few thousand dollars at a retail electronics store. Given this background of astonishing change in the
performance, size, and cost of computers, it is clear that the applications of computers to the practice
of mechanical engineering can be expected to advance at an equally breathtaking rate.
This section presents fundamental principles from the viewpoint of what can be expected between
today and the beginning of the next century. The emphasis is on basic principles and fundamental
definitions.

Software Engineering Overview


The modern field of software engineering is an outgrowth of what used to be called computer program-
ming. Like many other engineering disciplines, software engineering gains much of its knowledge base
from the study of failures in the specification, design, and implementation of computer programs. Unlike
most engineering fields, however, the accepted codes of practice for software engineering are not yet
universally adhered to and many preventable (and expensive) software failures still occur.
History of the Software Engineering Discipline
Software engineering is barely two decades old. Early efforts in this field are based on viewing the
process of designing and implementing computer programs as analogous to designing and constructing
physical objects. One of the most influential works demonstrating this "manufacturing tool" approach
to the conceptualization of software is Software Tools (Kernighan and Plauger, 1976). In this view, large
software projects are decomposed into reusable individual components in the same manner that complex
machinery can be broken up into smaller reusable parts. The notion that computer programs could be
constructed from components that can then be reused in new configurations is a theme that runs
throughout the history of the software engineering field. Modularity alone is not a panacea, however, in
that programming effort contained within modules must be readable so as to facilitate verification,
maintenance, and extension. One of the most influential references on creating readable and reliable
code is The Elements of Programming Style (Kernighan and Plauger, 1978).
Emphasizing a modular and reusable approach to software components represents one of the primary
threads of software engineering, and another important related thread arises in parallel to the idea of
manufacture of software as an administrative process. After the failure of several large software projects
in the 1960s and 1970s, early practitioners of software engineering observed that many of these software
failures resulted from poor management of the software development process. Chronicles of typical
failures and the lessons to be learned are detailed in Fred Brooks The Mythical Man-Month (Brooks,
1975), and Philip Metzgers Managing a Programming Project (Metzger, 1983). Brooks conclusions
are generally considered among the most fundamental principles of the field. For example, his observation
that adding manpower to a late software project makes it later is generally accorded the status of an
axiom, and is widely termed Brookss law. More recent efforts in software management have concen-
trated on constructive solutions to past failures, as exemplified by Watts Humphreys Managing the
Software Process (Humphrey, 1989).
Understanding of the local (poor design) and global (poor management) causes of software failure
has led to the modern study of software engineering. Obstacles to modularization and reuse of software
tools were identified and proposed remedies suggested and tested. Administrative and large-scale archi-
tectural failures have been documented, digested, and their appropriate solutions published. The result
is a substantial body of knowledge concerning such important topics as specification and design of
computer programs, implementation details (such as choice of programming language), and usability of
software (e.g., how the human user relates to the computer program). This collection of knowledge forms
the basis for the modern practice of software engineering (see Yourdon, 1982).
Standard Models for Program Design and Implementation
A simple conceptual model of a program design and implementation methodology is presented by
Yourdon (1993). This model is termed the waterfall model because its associated schematic overview
bears a resemblance to a waterfall. The fundamental steps in the waterfall model (see Figure 15.2.1)
proceed consecutively as follows:

FIGURE 15.2.1 Waterfall model for software development (System Analysis → System Design → Programming → Testing → Delivery).

•  System Analysis: the requirements of the software system are analyzed and enumerated as a set of program specifications.
•  System Design: the specifications are translated into precise details of implementation, including decomposition of the software system into relevant modules, and the implementation details required within this modular structure.
•  Programming: the proposed design is implemented in a computer programming language (or collection of languages) according to the structure imposed in the system design phase.
•  Testing: the resulting computer program is tested to ensure that no errors (bugs) are present, and that it is sufficiently efficient in terms of speed, required resources (e.g., physical memory), and usability.
•  Delivery: the completed program is delivered to the customer.
This approach to software development works well for small tasks, but for larger mechanical
engineering projects there are many problems with this model. Probably the most important difficulty
is that large projects may take years of development effort, and the customer is forced to wait nearly
until the end of the project before any results are seen. If the project's needs change during the
implementation period (or more commonly, if those needs were not specified with sufficient accuracy
in the first place), then the final product will be inadequate, and required changes will have to be specified
before those modifications can be percolated back through the waterfall life cycle again. Another
important difficulty with this scheme occurs because quality-control issues (e.g., testing) are delayed
until the project is nearly finished. In practice, preventative quality-assurance measures work better at
reducing the likelihood that errors will occur in the program.
To avoid the inherent difficulties of the waterfall design, this sequential design model can be restructured
into a spiral form that emphasizes more rapid incremental deliveries of programming function. These modified
techniques are termed "incremental" or "spiral" development schemes (see Figure 15.2.2). The emphasis
on quick delivery of incremental programming function permits seeing results sooner and, if necessary,
making changes.

FIGURE 15.2.2 Spiral model for software development (repeated cycles of analysis, design, programming, and evaluation, proceeding from a preliminary prototype toward the finished application).

One important interpretation of the spiral development model is that the initial implementation of the
program (i.e., the first pass through the spiral) can often be constructed as a simple prototype of the
desired final application. This prototype is often implemented with limited function, in order to simplify
the initial development process, so that little or no code developed in the prototypical implementation
ends up being reused in the final commercial version. This approach has a rich history in practical
software development, where it is termed "throwing the first one away" (after a similarly titled chapter
in Brooks, [1975]). This widely used scheme uses the prototype only for purposes of demonstration,
education (i.e., to learn the location of potential development pitfalls), and marketing. Once the prototype
has been constructed and approved, design and implementation strategies more similar to the traditional
waterfall approach are used for commercial implementation.
An obvious argument against spiral development models that incorporate a disposable prototype is
that the cost of development is paid twice: once for the prototype and once again for the final imple-
mentation. Current remedies for this inherent problem include rapid application development (RAD)
methods, which emphasize the use of computer-assisted software engineering (CASE) tools to permit
substantial reuse of the prototype in the final version. With this approach, the programmer uses specially
designed computer applications that build the programs out of existing components from a software
library. Program components that are especially mundane or difficult to implement (such as those
commonly associated with managing a program's graphical user interface) often represent excellent
candidates for automation using RAD or CASE tools. This area of software design and implementation
will undoubtedly become even more important in the future, as the ability of computer programs to
create other computer applications improves with time.

Computer Languages
Regardless of what future developments occur in the field of CASE tools, most current computer
programming efforts are carried out by human programmers using high-level programming languages.
These programming languages abstract low-level details of computer function (such as the processes of
performing numerical operations or allocating computer resources), thus allowing the programmer to
concentrate on high-level concepts appropriate for software design and implementation. Figure 15.2.3
diagrams the various interrelationships among abstractions of data, abstraction of instructions, and
combinations of abstracted data and instructions. In general, more expressive languages are to be found
along the diagonal line, but there are many instances in which increased abstraction is not necessarily
a primary goal (for instance, performing fast matrix operations, which can often be done efficiently in
a procedural language).

FIGURE 15.2.3 Language abstraction classification (abstraction of data vs. abstraction of instructions; assembly languages, procedural languages, data base languages, and object-programming languages lie along a diagonal of increasingly expressive languages).

In addition, high-level programming languages provide a means for making computer programs
portable, in that they may be moved from one type of computer to another with relative ease. The
combination of portability of computer languages and adherence to standard principles of software design
enables programmers to produce applications that run on a wide variety of computer hardware platforms.
In addition, programs that are designed to be portable across different computers available today are
generally easy to migrate to new computers of tomorrow, which insulates the longer-term software
development process from short-term advances in computer architecture.
Perhaps the best example of how good program design and high-level language adherence work
together to provide longevity for computer software is the UNIX operating system. This common
operating system was designed and implemented in the 1970s using the C programming language, and
runs with only minor modifications on virtually every computer available today. This operating system
and its associated suite of software tools provide an excellent demonstration of how good software
design, when implemented in a portable programming language, can result in a software product that
exhibits a lifespan much greater than that associated with any particular hardware platform.
Computer Language Classification
Computer programs consist of two primary components: data and instructions. Instructions reflect actions
taken by the computer program, and are comparable to verbs in natural human languages. In a similar
manner, data reflect the objects acted on by computer instructions, in analogy with nouns in natural
languages. Just as one would not attempt to construct sentences with only nouns or verbs, both data and
instructions are necessary components of computer programs. However, programming languages are
classified according to the relative importance of each of these components, and the resulting charac-
terization of programming languages as data-centered (object-oriented) or instruction-centered (pro-
cedural) has major ramifications toward design, implementation, and maintenance of computer software.
Older programming languages (such as FORTRAN and COBOL) reflect a preeminent role for instruc-
tions. Languages that abstract the instruction component of programs permit the programmer to recur-
sively decompose the overall computational task into logically separate subtasks. This successive
decomposition of overall function into component subtasks highlights the role of the individual task as
an encapsulated set of programming instructions. These encapsulated individual tasks are termed pro-
cedures (e.g., the SUBROUTINE structure in FORTRAN), and this instructions-first approach to
program design and construction is thus termed procedural programming. In procedural programming,
the emphasis on decomposition of function obscures the important fact that the data component of the
program is inherited from the procedural design, which places the data in a secondary role.
A more recent alternative model for programming involves elevating the data component of the
program to a more preeminent position. This newer approach is collectively termed object programming,
or object-oriented programming. Where procedural models abstract programming function via encap-
sulation of instructions into subroutines, object programming models bind the procedures to the data on
which they operate. The design of the program is initially oriented toward modeling the natural data of
the system in terms of its behaviors, and once this data component has been specified, the procedures
that act upon the data are defined in terms of their functional operations on these preeminent data
structures. Object programming models have been very successful in the important task of the creation
of reusable code, and are thus valuable for settings (like implementation of a graphical user interface,
where consistency among applications is desired) where there is a natural need for considerable code
reuse.
The choice of language in computer programming is difficult, because proponents of particular languages
and programming models often castigate those who advocate alternative approaches.
Fortunately, there are few other branches of engineering that are so heavily politicized. It is important
to look beyond the dogmatic issues surrounding language choice toward the important pragmatic goals
of creating readable, verifiable, extensible, and reusable code.
Finally, it is important to recognize that the classification of programming languages into procedural
and object-programming models is not precise. Regardless of whether data or instructions are given
relative precedence, computer programs need both, and successful software design demands that the
requirements of each component be investigated carefully. Although object-programming models relegate
procedures to a secondary role to data structures, much of the final effort in writing an object-oriented
program still involves design, creation, and testing of the procedures (which are termed class methods
in object-programming models) that act upon the data. Similarly, it is possible to gain many object-
programming advantages while using strictly procedural languages. In fact, some of the most successful
languages utilized in current practice (e.g., the C++ programming language) are completely suitable for
use as either procedural or object-oriented programming languages.
Procedural Programming Models
The most common procedural programming languages in current use include Ada, C, FORTRAN,
Basic, and Pascal. These procedural programming languages are characterized by a natural modular
structure based on programming function, which results in similar design methods being used for each.
Since procedural languages are primarily oriented toward encapsulating programming function, each
language has a rich set of control structures (e.g., looping, logical testing, etc.) that permits an appropriate
level of control over the execution of the various procedural functions. Beyond this natural similarity,
most procedural languages exhibit vast differences based on the expressiveness of the language, the
range and extensibility of native data types, the facilities available for implementing modularity in the
component procedures, and the run-time execution characteristics of the language.
Design Principles for Procedural Programming Models. The fundamental design principle for proce-
dural programming is based on the concept of "divide and conquer," and is termed functional decomposition
or top-down design. The overall effort of the program is successively decomposed into smaller
logically separate subtasks, until each remaining subtask is sufficiently limited in scope so as to admit
implementation within a procedure of appropriately small size. The overall process is diagrammed in
Figure 15.2.4.

FIGURE 15.2.4 Sample functional decomposition (a complete task decomposed into Subtasks 1, 2, and 3, each of which is further decomposed into smaller subtasks).

The basic motivation behind functional decomposition is that the human mind is incapable of under-
standing an entire large computer program unless it is effectively abstracted into smaller black boxes,
each of which is simple enough so that its complete implementation can be grasped by the programmer.
The issue of exactly how large an individual procedure can be before it becomes too large to understand
is not an easy question to answer, but rules of thumb range from around ten up to a few hundred lines
of executable statements. While it is generally accepted that extremely long procedures (more than a
few hundred lines) have been empirically shown to be difficult to comprehend, there is still a lively
debate in the software engineering community about how short procedures can become before an
increased error rate (e.g., average number of errors per line of code) becomes apparent.
Although procedural programming languages possess the common characteristic of encapsulating
programming function into separate granular modules, there are few similarities beyond this basic
architectural resemblance. The various procedural languages often exhibit substantial differences in
expressiveness, in the inhibition of practices associated with common language errors, and in the run-
time characteristics of the resulting computer program.
Expressiveness represents the range of features permitted by the language that can be used by the
programmer to implement the particular programming design. There is considerable evidence from the
study of linguistics to support the notion that the more expressive the language, the wider the range of
thoughts that can be entertained using this language. This important postulate is termed the Sapir-Whorf
hypothesis. While the Sapir-Whorf hypothesis is considered controversial in the setting of natural human
languages, in the vastly simpler setting of programming languages, this phenomenon has been commonly
observed in practice, where use of more expressive high-level languages has been correlated with overall
programmer productivity. Expressiveness in a computer language generally consists of permitting more
natural control structures for guiding the execution of the program, as well as permitting a wide range
of data representations appropriate for natural abstraction of data.
Sample Procedural Programming Languages. Some procedural languages (including FORTRAN and
Basic) permit only a limited set of control structures for looping, branching, and logical testing. For
instance, before the FORTRAN-77 standard was promulgated, FORTRAN had no way of expressing
the standard logical if-then-else statement. The FORTRAN-77 specification still does not permit
standard nondeterministic looping structures, such as "do while" and "repeat until." The standard Basic
language suffers from similar limitations and is further impeded by the fact that most implementations
of Basic are interpreted (each line of code is sequentially translated and then executed) instead of
compiled (where the entire program's code is translated first and executed subsequently). Interpreted
languages such as Basic are often incredibly inefficient, especially on problems that involve substantial
looping, as the overhead of retranslating each line of code cannot be amortized in the same manner
available to compiled languages. Finally, because Basic is limited in its expressiveness, many imple-
mentations of Basic extend the language to permit a greater range of statements or data types. While
language extension facilitates programmer expression, it generally compromises portability, as different
nonstandard dialects of the extended language generally develop on different computer platforms. The
extreme case is illustrated by Microsoft's Visual Basic language, which is completely tied to Microsoft
applications and operating systems software (and thus inherently nonportable), but so useful in its
extensions to the original Basic language that it has become the de facto scripting language for Microsoft
applications and operating systems.
Ada and Pascal are very expressive languages, permitting a rich set of control structures and a simple
extension of the set of permitted data types. In addition, Ada and some Pascal dialects force the
programmer to implement certain forms of data modularity that are specifically designed to aid in the
implementation of procedural programs. In a similar vein, standard Pascal is so strongly typed that it
forces the programmer to avoid certain common practices (such as misrepresenting the type of a data
structure passed to a procedure, which is a widespread and useful practice in FORTRAN) that are
associated with common errors in program implementation. In theory, Pascal's strict approach to repre-
senting data structures and its rich set of control structures ought to make it an attractive language for
engineering programming. In practice, its lack of features for arithmetic calculation and its strict rules
on data representation make it fairly difficult to use for numeric computation. Ada is a more recent
language that is based on Pascal, but remedies many of Pascal's deficiencies. Ada is a popular language
in mechanical engineering applications, as it is mandated for use on many Department of Defense
programming projects.
C and FORTRAN are among the most common procedural languages used in large-scale mechanical
engineering software applications. Both are weakly typed compiled languages with a rich set of available
mathematical operations. C permits a considerable range of expressive control structures and extensible
data structures. In addition, C is extremely portable and generally compiles and runs quickly, as the
language's features are closely tuned to the instruction sets of modern microprocessors used in current
generations of computers. The original C language specification was replaced in 1989 by a new ANSI
standard, and this current language specification adds some features (such as type checking on arguments
passed to procedures, a facility that aids greatly in preventing common programming errors) that resemble
those found in Pascal, but do not seem to compromise the overall utility of the original C language
standard.
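
As a simple illustration of this argument checking (a hypothetical routine, sketched here in C++, which inherits the ANSI C prototype mechanism), a declared prototype lets the compiler reject a call with the wrong argument types before the program ever runs:

    #include <iostream>

    // Prototype: the compiler now knows the number and types of the arguments.
    double stressRatio(double appliedStress, double yieldStress);

    int main()
    {
        const double ratio = stressRatio(150.0, 250.0);   // correct call
        // stressRatio("150", 250.0);                     // rejected at compile time: wrong argument type
        std::cout << "ratio = " << ratio << "\n";
        return 0;
    }

    double stressRatio(double appliedStress, double yieldStress)
    {
        return appliedStress / yieldStress;
    }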
FORTRAN's current implementation (FORTRAN-90) adds user-defined extensible data structures
and a more expressive instruction set to the FORTRAN-77 standard, but the FORTRAN-90 standard
has so far been slow to gain acceptance in the programming community. FORTRAN-77 compilers are
still common, and the problems associated with this version of the language (e.g., minimal resources
for abstracting data, limited control structures for expressing program flow) still compromise the
architecture of FORTRAN programs. However, FORTRAN retains a rich set of intrinsic numeric operations,
so it is still a good choice for its original goal of "Formula Translation" (from which the language derives its
name). In addition, FORTRAN programs often execute very rapidly relative to other procedural lan-
guages, so for programs that emphasize rapid mathematical performance, FORTRAN is still a good
language choice. Finally, many FORTRAN-callable libraries of mathematical operations commonly
encountered in engineering applications are available, and this ability to leverage existing procedural
libraries makes FORTRAN an excellent choice for many mechanical engineering applications.
Advantages and Disadvantages of Procedural Programming. Procedural programming has inherent
advantages and disadvantages. One of the most important advantages of some procedural languages
(notably FORTRAN) is the existence of many complete libraries of procedures for solving complex
tasks. For example, there are many standard libraries for linear algebra (e.g., LINPACK, EISPACK,
LAPACK) or general scientific numerical computation (e.g., IMSL) available in FORTRAN-callable
form. Reuse of modules from these existing libraries permits programmers to reduce development costs
substantially for a wide variety of engineering applications. Under most current portable operating
systems, multiple-language integration is relatively straightforward, so high-quality FORTRAN-callable
libraries can be called from C programs, and vice-versa. The development of standard procedural libraries
is largely responsible for the present proliferation of useful computer applications in mechanical engi-
neering.
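
For example, a C++ program can call the standard BLAS routine DDOT directly, provided the declaration matches the Fortran calling convention. The sketch below assumes the common Unix convention of a lowercase routine name with a trailing underscore and all arguments passed by reference, and it must be linked against a BLAS library; these details vary by compiler and platform.

    #include <iostream>

    // Declaration of the Fortran BLAS routine DDOT (dot product of two vectors).
    // The "ddot_" name and pass-by-reference arguments reflect a common, but not
    // universal, Fortran name-mangling convention.
    extern "C" double ddot_(const int* n, const double* x, const int* incx,
                            const double* y, const int* incy);

    int main()
    {
        const int n = 3, inc = 1;
        const double x[] = {1.0, 2.0, 3.0};
        const double y[] = {4.0, 5.0, 6.0};

        // Call the Fortran routine exactly as a Fortran program would: by reference.
        const double result = ddot_(&n, x, &inc, y, &inc);
        std::cout << "dot product = " << result << "\n";   // expected value: 32.0
        return 0;
    }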
Another important advantage of procedural languages is that many important computational tasks
(such as translating mathematical models into analogous computer codes) are naturally converted from
the underlying mathematical algorithm (which is generally a sequence of instructions, and hence ame-
nable to encapsulation within a procedure) into an associated modular procedure. As long as the data
used within a program do not become unduly complex, procedural languages permit easy implementation
of many of the standard methods used in engineering analysis and design.
Perhaps the biggest disadvantage of procedural models is that they are harder to reuse than competitive
object-programming models. These obstacles to code reuse arise from the fact that data are modeled in
procedural programming as an afterthought to the simulation of instructions. In order to reuse procedures
between two different programs, the programmer must force the representation of data to be identical
across the different computer applications. In procedural programming, the goal of code reuse requires
standardization of data structures across different computer programs, regardless of whether or not such
standardization is natural (or even desired) by those disparate computer programs. Object programming
models are an attractive alternative to procedural programming schemes in large part because these
newer programming methods successfully avoid such unwarranted data standardization.
Object Programming Models
Object-programming models place modeling of data structures in a more preeminent position, and then
bind to the data structures the procedures that manipulate the data. This relegation of procedures (which
are termed methods in object programming) to a more secondary role facilitates a degree of code
reuse substantially better than is feasible with conventional procedural programming languages. Object
programming languages employ aggregate data types (consisting of various data fields, as well as the
associated methods that manipulate the data) that are termed classes, and these classes serve as templates
for creation of objects that represent specific instances of the class type. Objects thus form the repre-
sentative granularity found in object-oriented programming models, and interactions among objects
during program execution are represented by messages that are passed among the various objects present.
Each message sent by one object to another tells the receiving object what to do, and the details of
exactly how the receiving object accomplishes the associated task are generally private to the class. This
latter issue of privacy regarding implementation details of object methods leads to an independence
among objects that is one of the main reasons that object-programming schemes facilitate the desired
goal of code reuse.
One of the most important limitations of procedural languages is abstraction. High-level languages
such as FORTRAN or C permit considerable abstraction of instructions, especially when compared to
the machine and assembly languages they are designed to supplant. Unfortunately, these languages do
not support similar levels of abstraction of data. For example, although FORTRAN-77 supports several
different numeric types (e.g., INTEGER, REAL, DOUBLE PRECISION), the only derived types avail-
able for extending these simple numeric representations are given by vectors and multidimensional
arrays. Unless the data of a problem are easily represented in one of these tabular forms, they cannot
be easily abstracted in FORTRAN. To some extent, experienced programmers can create new user-
defined data types in C using structures and typedefs, but effectively abstracting these derived data types
requires considerable self-discipline on the part of the programmer.
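
For instance, a user-defined type for a finite-element node might be sketched as follows (a hypothetical example using a C-style structure and typedef, shown here in C++):

    #include <iostream>

    // A user-defined type grouping the data that describe one finite-element node.
    typedef struct {
        int    id;        // node number
        double x, y, z;   // nodal coordinates
    } Node;

    int main()
    {
        Node n = {1, 0.0, 0.5, 1.0};   // create and initialize a node
        std::cout << "node " << n.id << " at ("
                  << n.x << ", " << n.y << ", " << n.z << ")\n";
        return 0;
    }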
Object-oriented programming languages avoid these pitfalls of procedural languages by using classes
as templates for abstraction of both instructions and data. By binding the instructions and the data for
classes together, the programmer can abstract both components of a programming model simultaneously,
and this increased level of abstraction results in a radically new programming model. For instance, a
natural class for finite-element modeling would be the class of finite-element mesh objects. A mesh
object (which could easily be composed internally of node and element objects) makes it possible for
the object programmer to hide all the details of mesh representation from the rest of the program. A
procedural programming model would require standardization of the mesh to consist of (for example):
•  A list of nodes, each associated with individual nodal coordinates given in 1D, 2D, or 3D (depending on the geometry of the model used)
•  A list of elements, each with a given number of associated nodes
•  A list of element characteristics, such as material properties or applied loads
In this representation, each procedure that manipulates any of the mesh data must know all of the
details of how these data have been standardized. In particular, each routine must know whether a 1D,
2D, or 3D finite-element analysis is being performed, and pertinent details of the analysis (e.g., is the
problem being modeled thermal conduction or mechanical deformation?) are also spread throughout the
code by the standardization of data into predefined formats. The sum of these constraints is to require
the programmer to recode substantial components of a procedural program every time a major modifi-
cation is desired.
In the setting of objects, the finite-element mesh object would store its particular geometric imple-
mentation internally, so that the rest of the program would be insulated from the effects of changes in
that representation. Rather than calling a procedure to generate a mesh by passing predefined lists of
nodes, elements, and element characteristics, an object-oriented approach to mesh generation would
employ sending a message such as "discretize yourself" to the mesh object. This object would then
create its internal representation of the mesh (perhaps using default values created by earlier messages)
and store this information privately. Alternatively, the object-oriented program might later send a "solve
yourself" message to the mesh object and then a "report your results in tabular form" message for
generating output. In each case, the rest of the program has no need to know the particular details of
how the mesh object is generating, storing, or calculating results. Only the internal procedures local to
the class (i.e., the class methods) generally need to know this private data, which are used locally to
implement the functions that act on the class.
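
A minimal C++ sketch of this idea follows; the class and method names are hypothetical, but they show how the mesh keeps its representation private while the rest of the program communicates with it only through public methods corresponding to the messages described above:

    #include <iostream>
    #include <vector>

    class Mesh {
    public:
        void discretize(int numberOfElements)   // "discretize yourself"
        {
            nodeCoordinates_.clear();
            for (int i = 0; i <= numberOfElements; ++i)
                nodeCoordinates_.push_back(static_cast<double>(i) / numberOfElements);
        }
        void solve()                            // "solve yourself" (placeholder computation)
        {
            solution_.assign(nodeCoordinates_.size(), 0.0);
        }
        void report() const                     // "report your results"
        {
            for (std::size_t i = 0; i < solution_.size(); ++i)
                std::cout << nodeCoordinates_[i] << "  " << solution_[i] << "\n";
        }
    private:
        // Internal representation: hidden from the rest of the program.
        std::vector<double> nodeCoordinates_;
        std::vector<double> solution_;
    };

    int main()
    {
        Mesh mesh;
        mesh.discretize(4);
        mesh.solve();
        mesh.report();
        return 0;
    }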
This hiding of internal function within an object is termed encapsulation, and object programming
models permit simultaneous encapsulation of both data and instructions via appropriate abstraction. In
this setting, encapsulation permits the programmer to concentrate on creating data and procedures
naturally, instead of forcing either component into predefined formats such as floating-point arrays (for
data) or predefined subroutine libraries (for instructions). Data and instruction abstraction of this form
are thus useful additions to the similar (but less flexible) features available in procedural languages. If
these new features constituted the only improvements available from object-programming models, then
they would offer only slight advantages over traditional procedural programming. There are many other
advantages present in object-programming models.
The most important advantages of object programming occur because of the existence of class
hierarchies. These hierarchies permit new objects to be created from others by concentrating only on
the differences between the objects' behaviors. For instance, a finite-element mesh for a rod lying in
three dimensions can be derived from a one-dimensional mesh by adding two additional coordinates at
each node. An object programmer could take an existing one-dimensional mesh class and derive a three-
dimensional version using only very simple steps:
•  Adding internal (private) representation for the additional coordinate data
•  Overriding the existing discretization method to generate the remaining coordinates when the "discretize yourself" message is sent
•  Overriding some low-level calculations in class methods pertinent to performing local element calculations using the new coordinate representation
Note that all of these steps are private to the mesh object, so that no other part of the program needs
to be changed to implement this major modification to the problem statement. In practice, the added
details are implemented via the creation of a derived class, where the additional coordinates and the
modified methods are created. When messages appropriate to the new class are sent, the derived object
created from the new class will handle only the modified data and instructions, and the parent object
(the original mesh object) will take care of the rest of the processing. This characteristic of object-
programming models is termed inheritance, as the individual derived (child) objects inherit their
behavior from the parent class. When the changes required by the modification to the program's
specifications are small, the resulting programming effort is generally simple. When the changes are
large (such as generalizing a one-dimensional problem to a fully three-dimensional one), it is still often
feasible to make only minor modifications to the program to implement the new features.
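
Continuing the hypothetical mesh example in C++, a sketch of this derivation might look as follows: the derived class adds the extra coordinate data and overrides only the discretization behavior, inheriting everything else from the parent class.

    #include <vector>

    // Parent class: a one-dimensional mesh with a virtual discretization method.
    class Mesh1D {
    public:
        virtual ~Mesh1D() {}
        virtual void discretize(int numberOfElements)
        {
            x_.clear();
            for (int i = 0; i <= numberOfElements; ++i)
                x_.push_back(static_cast<double>(i) / numberOfElements);
        }
    protected:
        std::vector<double> x_;   // nodal x-coordinates
    };

    // Derived class: a rod lying in three dimensions adds two coordinates per node
    // and overrides only the discretization behavior.
    class Mesh3D : public Mesh1D {
    public:
        void discretize(int numberOfElements) override
        {
            Mesh1D::discretize(numberOfElements);   // reuse the parent's behavior
            y_.assign(x_.size(), 0.0);              // additional (private) coordinate data
            z_.assign(x_.size(), 0.0);
        }
    private:
        std::vector<double> y_, z_;
    };

    int main()
    {
        Mesh3D rod;
        rod.discretize(10);   // the derived object handles the new data; the parent does the rest
        return 0;
    }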
One of the most important rationales for using object-programming methods arises from the desire
to provide a consistent user-interface across diverse programs. Existing standardized graphical interface
models (such as the Motif interface available on OSF/UNIX, or the Microsoft Windows interface used
on Windows and Windows NT) place a premium on a consistent "look and feel" across different
applications. Since managing the user-interface commonly constitutes much of the programming effort
required to implement interactive engineering applications, it is advantageous to consolidate all of the
code required to implement the standard graphical user-interface into a class library and allow the
programmer to derive new objects pertinent to the application at hand.
One such class library is the Microsoft Foundation Classes, which implement the Windows interface
via a class hierarchy requiring around a hundred thousand lines of existing C++ source code. Program-
mers using class libraries such as these can often generate full-featured graphics applications by writing
only a few hundred or a few thousand lines of code (notably, for reading and storing data in files, for
drawing content into windows, and for relevant calculations). In fact, it is relatively easy to graft graphical
user interfaces onto existing procedural programs (such as old FORTRAN applications) by wrapping a
C++ user-interface layer from an existing class library around the existing procedural code, and by
recycling relevant procedures as class methods in the new object-oriented setting. This "interface
wrapper" approach to recycling old procedural programs is one of many standard techniques used in
reengineering of existing legacy applications (Barton and Nackman, 1994).
One other important characteristic of many object-programming languages is polymorphism. Poly-
morphism (from the Greek for "many forms") refers to the ability of a single message to spawn different behaviors
in various objects. The precise meaning of polymorphism depends upon the run-time characteristics of
the particular object-programming language used, but it is an important practical feature in any object-
programming language.
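
A brief C++ sketch (with hypothetical element classes) illustrates the idea: the same report message, sent through a pointer to the base class, produces different behavior depending on the actual type of the receiving object.

    #include <iostream>
    #include <memory>
    #include <vector>

    class Element {
    public:
        virtual ~Element() {}
        virtual void report() const { std::cout << "generic element\n"; }
    };

    class BeamElement : public Element {
    public:
        void report() const override { std::cout << "beam element\n"; }
    };

    class ShellElement : public Element {
    public:
        void report() const override { std::cout << "shell element\n"; }
    };

    int main()
    {
        std::vector<std::unique_ptr<Element>> elements;
        elements.push_back(std::make_unique<BeamElement>());
        elements.push_back(std::make_unique<ShellElement>());

        // One message, many forms: each object responds with its own behavior.
        for (const auto& e : elements)
            e->report();
        return 0;
    }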
Object-Oriented Design Principles. Because object-oriented programming is a relatively new discipline
of software engineering (when compared to procedural programming), one cannot yet identify the best
design schemes among the various competing object-oriented design principles. For this reason (and to
avoid prejudging the future), this section treats the subject of object-oriented design in less detail than
procedural design methods.
The fundamental tenet of object-oriented program design is that the programming objects should be
chosen to model any real-world objects present in the system to be analyzed and simulated. For example,
in a thermal analysis of a microprocessor, one might identify such natural physical objects as "heat
sink," "thermocouple," and "fan." In general, the nature of the physical objects in a mechanical system
is stable over long periods of time, so they make natural candidates for programming objects, as their
specifications are least likely to vary, and thus they will require minimal modifications to the basic
program design.
The next step in performing an object-oriented design is to model the behaviors of the various objects
identified within the system. For example, a fan object can "turn on" and "turn off," or might vary in
intensity over a normalized range of values (e.g., 0.0 = off, 1.0 = high speed), and this behavior will
form the basis for the messaging protocols used to inform objects which behaviors they should exhibit.
At the same time, any relevant data appropriate to the object (in this case, fan speed, power consumption,
requisite operating voltage) should be identified and catalogued. Here, these individual items of data
will represent the private data of the fan class, and the behaviors of this class will be used to design
class methods.
The final step in specifying an object-oriented design is to examine the various objects for interrela-
tionships that can be exploited in a class hierarchy. In this setting, "heat sink" and "fan" could be
considered to be derived from a larger class of cooling devices (although in this trivial example, this
aggregation is probably unnecessary). Careful identification of hierarchical relationships among the
candidate objects will generally result in an arrangement of classes that will permit considerable code
reuse through inheritance, and this is one of the primary goals of object-programming design practice.
In practice, there is no final step in designing object-oriented programs, as the design process is
necessarily more complex and iterative than procedural programming models. In addition, the object-
oriented designer must take more care than given here in differentiating the role of classes (which are
the static templates for construction of objects) from objects themselves, which are the dynamic real-
ization of specific members of a class created when an object-oriented program executes. Objects are
thus specific instances of generic classes, and the process of creating objects at run time (including
setting all appropriate default values etc.) is termed instantiation.
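
A minimal C++ sketch of this design might look as follows (the class names, data fields, and behaviors are hypothetical, following the fan example above):

    #include <iostream>

    // Base class capturing the common behavior of cooling devices.
    class CoolingDevice {
    public:
        virtual ~CoolingDevice() {}
        virtual void turnOn()  { intensity_ = 1.0; }
        virtual void turnOff() { intensity_ = 0.0; }
        double intensity() const { return intensity_; }
    protected:
        double intensity_ = 0.0;   // normalized: 0.0 = off, 1.0 = maximum
    };

    // A fan adds its own data (power, voltage) and a behavior for varying
    // its intensity over the normalized range.
    class Fan : public CoolingDevice {
    public:
        Fan(double powerWatts, double voltage) : power_(powerWatts), voltage_(voltage) {}
        void setSpeed(double normalizedSpeed) { intensity_ = normalizedSpeed; }
    private:
        double power_;     // power consumption (W)
        double voltage_;   // requisite operating voltage (V)
    };

    class HeatSink : public CoolingDevice {};   // passive device: default behavior suffices

    int main()
    {
        Fan fan(2.5, 12.0);   // instantiation: a specific object created from the Fan class
        fan.turnOn();
        fan.setSpeed(0.5);    // run at half speed
        std::cout << "fan intensity = " << fan.intensity() << "\n";
        return 0;
    }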
Sample Object-Oriented Languages. There are not as many successful object-oriented languages as
there are procedural languages, because some languages (such as Ada and FORTRAN-90) that possess
limited object-oriented features are more properly classified as procedural languages. However, Ada
95 does include excellent facilities for object-oriented programming.
C++ is the most commonly used object-oriented language and was primarily developed at Bell Labs
in the same pragmatic vein as its close procedural relative, C. In theory, the C++ language includes both
procedural and object-programming models, and thus C++ can be used for either type of programming.
In practice, the procedural features of C++ are nearly indistinguishable from those of ANSI C, and
hence the phrase "programming in C++" is generally taken to mean object programming in C++. C++
is well known as an object-programming language that is not particularly elegant, but that is very popular
because of its intimate relation with the C procedural programming language (C++ is a superset of ANSI
C) and because of its extensive features. The design goal of maintaining back-compatibility with ANSI
C has led to shortcomings in the C++ language implementation, but none of these shortcomings has
seriously compromised its popularity. C++ is an efficient compiled language, providing the features of
object-programming models without undue loss of performance relative to straight procedural C, and
C++ is relatively easy to learn, especially for knowledgeable C programmers. It supports extensive
inheritance, polymorphism, and a variety of pragmatic features (such as templates and structured excep-
tion handling) that are very useful in the implementation of production-quality code.
An important recent development in object-oriented design is the Java programming language: the
popularity of this new language is closely tied to the explosion of interest in the Internet. Java is widely
used to provide interactive content on the World Wide Web; it has a syntax very similar to C++ and a
pervasive object orientation, and it provides portable elements for constructing graphical user interfaces.
Java programs can be deployed using interpreted forms over the web (utilizing a Java Virtual Machine
on the client platform), or by a more conventional (though less portable) compilation on the target
computer.
SmallTalk is one of the oldest and most successful object-programming languages available, and was
designed at the Xerox Corporation's Palo Alto Research Center (also responsible for the design of modern
graphical user interfaces). SmallTalk supports both inheritance (in a more limited form than C++) and
polymorphism, and is noted as a highly productive programming environment that is particularly ame-
nable to rapid application development and construction of prototypes. SmallTalk is not a compiled
language, and while this characteristic aids during the program implementation process, it generally
leads to computer programs that are substantially less efficient than those implemented in C++. SmallTalk
is generally used in highly portable programming environments that possess a rich library of classes, so
that it is very easy to use SmallTalk to assemble portable graphical interactive programs from existing
object components.
Eiffel is a newer object-oriented language with similar structure to object-oriented variants of the
Pascal procedural programming language. Eiffel is similar in overall function to C++ but is considerably
more elegant, as Eiffel does not carry the baggage of backward compatibility with ANSI C. Eiffel has
many important features that are commonly implemented in commercial-quality C++ class libraries,
including run-time checking for corruption of objects, which is a tremendous aid during the program
debugging process. Even with its elegant features, however, Eiffel has not gained the level of acceptance
of C++.
There are other object-oriented programming languages that are worth mentioning. The procedural
language Ada provides some support for objects, but neither inheritance nor polymorphism. FORTRAN-
90 is similarly limited in its support for object-programming practices. Object Pascal is a variant of
Pascal that grafts SmallTalk-like object orientation onto the Pascal procedural language, and several
successful implementations of Object Pascal exist (in particular, the Apple Macintosh microcomputer
used Object Pascal calling conventions, and this language was used for most commercial Macintosh
application development for many years). For now, none of these languages provides sufficient support
for object-oriented programming features (or a large-enough user community) to provide serious com-
petition for C++, SmallTalk, Eiffel, or Java.

Data Base Systems


In procedural programming practice, the modeling of data is relegated to an inferior role relative to the modeling of
instructions. Before the advent of object-oriented programming languages, which permit a greater degree
of data abstraction, problems defined by large or complex data sets required more flexibility for modeling
data than traditional procedural programming techniques allowed. To fill this void, specialized data base
management systems were developed, and a separate discipline of computer programming (data base
management) arose around the practical issues of data-centered programming practice. The study of
data base management evolved its own terminology and code of application design, and suffered through
many of the same problems (such as language standardization to provide cross-platform portability) that
had plagued early efforts in procedural programming. The data base management subdiscipline of
software engineering is still fundamentally important, but the widespread adoption of object-oriented
languages (which permit flexible modeling of data in a more portable manner than that provided by
proprietary data base management systems) has led to many of the concepts of data base management
becoming incorporated into the framework of object-oriented programming practice.
Technical Overview of Data Base Management


Many important engineering software applications are naturally represented as data base applications.
Data base applications are generally developed within specialized custom programming environments
specific to a particular commercial data base manager, and are usually programmed in a proprietary (and
often nonportable) data base language. Regardless of these issues, data base programming is a particular
form of computer programming, and so the relevant topics of software engineering, including procedural
and object models, portability, reliability, etc., apply equally well to data base programming. Because
many of these principles have already been presented in considerable detail, the following sections on
design and programming issues for data base systems are kept relatively concise.
Data base applications are very similar to conventional programming applications, but one of the most
important differences is in the terminology used. Data base applications have developed a nomenclature
specifically suited to dealing with structured and unstructured data, and this terminology must be
addressed. Some of the most important terms are enumerated below; a short code sketch following the
list illustrates how they map onto familiar programming constructs.
Table: a logically organized collection of related data
Record: a collection of data that is associated with a single item (records are generally represented
as rows in tabular data base applications)
Field: an individual item of data in a record (fields are generally represented as columns in a
tabular data base)
Schema: the structure of the data base (schema generally is taken to mean the structure and
organization of the tables in a tabular data base)
Query: a structured question regarding the data stored in the data base (queries are the mechanism
for retrieving desired data from the data base system)
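
To make these terms concrete, the following minimal C++ sketch (purely illustrative, and not tied to any
commercial data base product) models a small materials table in the spirit of the flat-file example shown
in Figure 15.2.5: each record is a structure whose members are the fields, a table is a container of records,
and a query is simply a function that selects the records satisfying a stated condition.

    #include <iostream>
    #include <string>
    #include <vector>

    // One record (row): its members are the fields (columns).
    struct MaterialRecord {
        std::string name;       // Field 1: material name
        double      yieldMPa;   // Field 2: yield strength (MPa)
        double      youngGPa;   // Field 3: Young's modulus (GPa)
        double      shearGPa;   // Field 4: shear modulus (GPa)
    };

    // A table: an organized collection of structurally identical records.
    using MaterialTable = std::vector<MaterialRecord>;

    // A query: return the records that satisfy a stated condition.
    MaterialTable strongerThan(const MaterialTable& table, double minYieldMPa) {
        MaterialTable result;
        for (const MaterialRecord& rec : table)
            if (rec.yieldMPa > minYieldMPa)
                result.push_back(rec);
        return result;
    }

    int main() {
        // The schema here is fixed at compile time by the struct definition above.
        MaterialTable materials = { {"Aluminum",  250.0,  70.0, 25.0},
                                    {"Magnesium", 150.0,  45.0, 18.0},
                                    {"Steel",     400.0, 200.0, 85.0} };
        for (const MaterialRecord& rec : strongerThan(materials, 200.0))
            std::cout << rec.name << ": " << rec.yieldMPa << " MPa\n";
        return 0;
    }

In this idealization the schema is frozen at compile time; commercial data base managers exist largely to
relax that restriction and to manage far larger collections of records than will fit in memory.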
There are many other relevant terms for data base management, but these are sufficient for this brief
introduction. One important high-level definition used in data base management is Structured Query
Language, or SQL. SQL is a standard language for creating and modifying data bases, retrieving
information from data bases, and adding information to data bases. In theory, SQL provides an ANSI
standard relational data base language specification that permits a degree of portability for data base
applications. In practice, standard SQL is sufficiently limited in function so that it is commonly extended
via proprietary addition of language features (this situation is similar to that of the Basic procedural
language, which suffers from many incompatible dialects). The practical effect of these nonstandard
extensions is to compromise the portability of some SQL-based data base systems, and additional
standardization schemes are presently under development in the data base management industry. One
such scheme is Microsoft's Open Data Base Connectivity (ODBC) programming interface, which provides
portable data base services for relational and non-relational data base applications.
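
As a hedged illustration of how a query might be submitted through the ODBC programming interface,
the abbreviated C++ sketch below issues a standard SQL SELECT statement against a hypothetical data
source; the data source name, user name, table name, and column names are invented for the example,
and all error checking and return-code testing have been omitted for brevity.

    #include <iostream>
    #include <sql.h>      // ODBC core definitions (header locations depend on the ODBC SDK in use)
    #include <sqlext.h>   // ODBC extended definitions

    int main() {
        SQLHENV env;  SQLHDBC dbc;  SQLHSTMT stmt;

        // Allocate an environment handle and request ODBC 3.x behavior.
        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
        SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);

        // Connect to a hypothetical data source named "Materials" (name, user, and
        // password are invented for this sketch).
        SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
        SQLConnect(dbc, (SQLCHAR*)"Materials", SQL_NTS,
                   (SQLCHAR*)"user", SQL_NTS, (SQLCHAR*)"password", SQL_NTS);

        // Submit a standard SQL query; the table and column names are illustrative.
        SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
        SQLExecDirect(stmt,
            (SQLCHAR*)"SELECT Name, YieldStrength FROM Material WHERE YieldStrength > 200",
            SQL_NTS);

        // Fetch the result set one record at a time.
        SQLCHAR name[64];
        double  strength;
        SQLLEN  nameLen, strengthLen;
        while (SQLFetch(stmt) == SQL_SUCCESS) {
            SQLGetData(stmt, 1, SQL_C_CHAR,   name,      sizeof(name), &nameLen);
            SQLGetData(stmt, 2, SQL_C_DOUBLE, &strength, 0,            &strengthLen);
            std::cout << (char*)name << ": " << strength << " MPa\n";
        }

        // Release the statement, connection, and environment resources.
        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        SQLDisconnect(dbc);
        SQLFreeHandle(SQL_HANDLE_DBC, dbc);
        SQLFreeHandle(SQL_HANDLE_ENV, env);
        return 0;
    }

The point of the sketch is the division of labor: the SQL text expresses what data are wanted, while the
ODBC calls provide a portable mechanism for delivering the query and retrieving the results.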
Classification of Data Base Systems
There are many different types of data base systems in common use. One of the most important initial
steps in designing and implementing a data base application is to identify the relevant characteristics of
the data in order to choose the most appropriate type of data base for development. Depending upon
the structure of the data to be modeled, the data base application developer can select the simplest
scheme that provides sufficient capabilities for the problem. Three sample data base structures are
presented below: flat-file data bases, relational data bases, and object-oriented data bases.
Flat-File Data Bases. Flat-file data bases represent the simplest conceptual model for data base struc-
ture. A flat data base can be idealized as a table with a two-dimensional matrix or grid structure. The
individual data base records are represented by the rows of the matrix, and each record's component
fields are represented by the columns of the matrix structure, as shown in Figure 15.2.5. Flat-file data
bases are thus confined to applications where all records are structurally identical (i.e., have the same
configuration of fields) and where the underlying matrix structure naturally represents the data component
of the application.


              Field 1:      Field 2:          Field 3:           Field 4:
              Name          Yield Strength    Young's Modulus    Shear Modulus
Record 1:
  Material 1  Aluminum      250 MPa           70 GPa             25 GPa
Record 2:
  Material 2  Magnesium     150 MPa           45 GPa             18 GPa
Record 3:
  Material 3  Steel         400 MPa           200 GPa            85 GPa

FIGURE 15.2.5 Flat data base example.

The simplicity of a flat-file data base is simultaneously its greatest advantage and worst disadvantage.
The main advantage of using a flat-file data base structure is that querying the data base is extremely
simple and fast, and the resulting data base is easy to design, implement, and port between particular
data base applications. In practice, spreadsheet applications are often used for constructing flat-file data
bases, because these packages already implement the requisite tabular structure and include a rich variety
of control structures for manipulating the data.
The biggest disadvantage of flat-file data bases is that the extreme simplicity of the flat structure
does not reflect many important characteristics of representative data base applications. For
programs requiring flexibility in the data base schema, or complex relationships among individual data fields,
flat-file data bases are a poor choice, and more complex data base models should be used.
Relational Data Bases. In practice, data base applications often require modeling relationships among
various fields that may be contained in separate data files. Applications with these relational features
are termed relational data bases. Relational data base technology is a rapidly evolving field, and this family
of data bases is very common in practical data base applications.
Relations provide a way to generalize flat-file data base tables to include additional features, such as
variation in the numbers of fields among different records. A schematic of a simple relational data base
schema is shown in Figure 15.2.6. Here a data base of material properties is represented by related
tables. Note that because of the disparity in number of material constants (i.e., differing numbers of
fields for each material record), a flat-file data base would not be suitable for this data base storage
scheme.
The material properties tables (containing the lists of material properties) are related to their parent
table, which contains overall identification information. These parent-child relationships give relational
data bases considerable flexibility in modeling diverse aggregates of data, but also add complexity to
the task of storing and retrieving data in the data base. In a flat-file data base system, a simple lookup
(similar to indexing into a two-dimensional array) is required to find a particular field. In a complex
relational data base, which may exhibit many nested layers of parent-child relations, the task of querying
may become very complex and potentially time-consuming. Because of this inherent complexity in
storing and retrieving data, the topics of efficient data base organization and query optimization deserve
careful study before any large-scale relational data base application is undertaken.
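
The flavor of a parent-child query can be suggested with a short C++ sketch patterned after the material
data base of Figure 15.2.6 (the structure and function names are invented for illustration): retrieving the
properties of a named material requires first locating its identifier in the parent table and then gathering
the matching rows from the child table.

    #include <iostream>
    #include <string>
    #include <vector>

    struct Material {              // parent table row
        int         id;
        std::string name;
        std::string type;
    };

    struct PropertyRow {           // child table row, related to the parent table
        int         materialId;    // foreign key referencing Material::id
        std::string property;
    };

    // A simple "join": find the parent record by name, then collect its child rows.
    std::vector<std::string> propertiesOf(const std::vector<Material>& parents,
                                          const std::vector<PropertyRow>& children,
                                          const std::string& name) {
        std::vector<std::string> found;
        for (const Material& m : parents)
            if (m.name == name)
                for (const PropertyRow& p : children)
                    if (p.materialId == m.id)
                        found.push_back(p.property);
        return found;
    }

    int main() {
        std::vector<Material> parents = { {1, "Steel", "Isotropic"}, {2, "Wood", "Orthotropic"} };
        std::vector<PropertyRow> children = {
            {1, "Yield Strength"}, {1, "Young's Modulus"}, {1, "Shear Modulus"},
            {2, "Tensile Strength with Grain"}, {2, "Shear Strength"} };
        for (const std::string& prop : propertiesOf(parents, children, "Wood"))
            std::cout << prop << "\n";
        return 0;
    }

The nested loops above scan every child row for every matching parent row; production relational data base
managers avoid this cost through indexing and query planning, which is precisely why those topics deserve
careful study.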
Object-Oriented Data Bases. Many of the data base schemes found in relational and flat-file data base
systems arose because of the inability to model data effectively in older procedural programming
languages like FORTRAN. Commercial relational data base managers combined powerful data-modeling
capabilities with new procedural languages (such as SQL or XBase) specifically designed to manipulate
data base constructs. More recently, the proliferation of object-oriented programming languages, whose
innate data-abstraction facilities rival those of dedicated data base management systems, has led to the
development of object-oriented data base systems. These object-oriented data
base packages provide extremely powerful features that may ultimately make traditional SQL-based
relational data base applications obsolete.


Material ID    Material Name    Material Type
1              Steel            Isotropic
2              Wood             Orthotropic

Material ID    Material    Property
1              Steel       Yield Strength
1              Steel       Young's Modulus
1              Steel       Shear Modulus
2              Wood        Tensile Strength with Grain
2              Wood        Cross-Grain Compressive Strength
2              Wood        Shear Strength
2              Wood        Tensile Elastic Modulus
2              Wood        Compressive Elastic Modulus
2              Wood        Shear Modulus

FIGURE 15.2.6 Example relational data base structure.

One interesting example of object-oriented data base technology is the integration of data base
technology into a C++ framework. The Microsoft Foundation Class library for C++ implements, as class
library members, numerous features that formerly required custom data base programming. For
example, there are extensible data base classes that provide direct support for common
data base functions, and there are ODBC (Open Data Base Connectivity, the extension of SQL to generic
data base environments) classes allowing the C++ program to access existing relational data bases
developed with specialized data base management systems. Given the extensibility of C++ class libraries,
this object-oriented approach makes it feasible to gain all of the advantages of proprietary relational
data base applications, while preserving the numerous features of working in a standard portable
programming language.
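
The idea underlying object-oriented data base support can be suggested schematically in C++ (this sketch
is not the Microsoft Foundation Class interface, and the class and member names are invented): the object
carries both its fields and the behavior needed to store and retrieve itself, so that persistence becomes a
property of the class rather than a service of a separate data base manager.

    #include <fstream>
    #include <iostream>
    #include <string>

    // A persistent object: the data and its storage behavior live together.
    class MaterialObject {
    public:
        MaterialObject() : yieldMPa_(0.0) {}
        MaterialObject(const std::string& name, double yieldMPa)
            : name_(name), yieldMPa_(yieldMPa) {}

        // Serialize the object's state to a stream (a stand-in for a data base store).
        void save(std::ostream& out) const { out << name_ << ' ' << yieldMPa_ << '\n'; }

        // Reconstruct the object's state from the stream.
        void load(std::istream& in) { in >> name_ >> yieldMPa_; }

        const std::string& name() const { return name_; }
        double yieldMPa() const { return yieldMPa_; }

    private:
        std::string name_;
        double      yieldMPa_;
    };

    int main() {
        {   // persist an object to a file acting as the data store
            std::ofstream out("materials.dat");
            MaterialObject("Steel", 400.0).save(out);
        }
        std::ifstream in("materials.dat");
        MaterialObject m;
        m.load(in);                                   // restore the object later
        std::cout << m.name() << ": " << m.yieldMPa() << " MPa\n";
        return 0;
    }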

Operating System Characteristics


Computer programs depend on low-level resources for execution support, including file services for
input/output, graphical display routines, scheduling, and memory management. The software layers that
provide these low-level services are collectively termed the computer's operating system. Operating
systems thus insulate individual programs from the details of the hardware platform where they are
executed, and choosing the right operating system can be a critical decision in engineering practice.
Engineering computation is generally identified by three fundamental characteristics:
Large demand for memory, where extremely large data sets (generally on the order of megabytes
or gigabytes) are used, and where all components of these demanded memory resources must be
accessible simultaneously (This demand for memory can be contrasted with standard on-line-
transaction-processing schemes used in finance and commerce, where there is a similar charac-
teristic of large data sets, but these large financial data models are seldom required to have all
components available in memory at the same time.)
Dependence on floating-point computation, where there are high-precision floating-point repre-
sentations of numbers (i.e., numbers stored in the binary equivalent of scientific notation, where
storage is divided among sign, mantissa, and exponent, requiring more extensive storage than that
required for characters or integer data types)


Extensive use of graphics in input and display, as graphics is generally characterized as an
engineer's second language, because only the human visual sense has sufficient bandwidth to
process the vast amounts of data generally present in engineering computation
While many of these characteristics may be found in other computational settings, the simultaneous
presence of all three is a hallmark of computation in science and engineering. Identifying and selecting
an operating system that provides appropriate support for these characteristics is thus a fundamentally
important problem in the effective development and use of engineering software.
Technical Overview of Operating Systems
A simple and effective way to gain an overview of operating systems theory is to review the classification
scheme used to identify various operating systems in terms of the services that they provide. The most
common characteristics used for these classifications are enumerated below.
Multitasking. Humans are capable of performing multiple tasks simultaneously, and this characteristic
is desirable in a computer operating system as well. Although an individual computer CPU can only
process the instructions of one application at a time, it is possible with high-performance CPUs to
manage the execution of separate programs concurrently by allocating processing time to each application
in sequence. This sequential processing of different applications makes the computer appear to be
executing more than one software application at a time. When an operating system is capable of managing
the performance of concurrent tasks, it is termed a multitasking operating system. Many early operating
systems (such as MS/DOS) could only execute a single task at a time and were hence termed single-
tasking systems. While it is possible to load and store several programs in memory at one time and let
the user switch between these programs (a technique sometimes termed context switching that is
commonly used in MS/DOS applications), the lack of any coherent strategy for allocating resources
among the competing programs limits the practical utility of this simple tasking scheme.
A simple generalization of context switching is known as cooperative multitasking, and this simple
tasking scheme works remarkably well in some settings (in fact, this method is the basis for the popular
Microsoft Windows 3.x and Apple Macintosh 7.x operating systems). In a cooperative multitasking
setting, the allocation of computer resources is distributed among the competing programs: the individual
programs are responsible for giving up resources when they are no longer needed. A comparison to
human experience is a meeting attended by well-behaved individuals who readily yield the floor whenever
another speaker desires to contribute. Just as this scheme for managing human interaction depends on
the number of individuals present (obviously, the more people in the meeting, the more difficult the task
of distributed management of interaction) as well as on the level of courtesy demonstrated by the
individual speakers (e.g., there is no simple means for making a discourteous speaker yield the floor
when someone else wants to speak), the successful use of cooperative multitasking schemes is completely
dependent on the number and behavior of the individual software applications that are being managed.
Ill-behaved programs (such as a communications application that allocates communications hardware
when executed, but refuses to release it when not needed) compromise the effectiveness of cooperative
multitasking schemes and may render this simple resource-sharing model completely unusable in many
cases.
The obvious solution to managing a meeting of humans is to appoint a chair who is responsible for
allocating the prioritized resources of the meeting: the chair decides who will speak and for how long,
depending upon scheduling information such as the meeting's agenda. The computer equivalent of this
approach is termed preemptive multitasking and is a very successful model for managing the allocation
of computer resources. Operating systems that use a preemptive multitasking model make use of a
scheduler subsystem that allocates computer resources (such as CPU time) according to a priority system.
Low-priority applications (such as a clock accessory, which can update its display every minute or so
without causing serious problems) are generally given appropriately rare access to system resources,
while high-priority tasks (such as real-time data acquisition applications used in manufacturing, which
cannot tolerate long intervals without access to the operating system's services) are given higher priority.
Of course, the scheduler itself is a software system and generally runs at the highest level of priority
available.
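
A toy C++ simulation of priority-based scheduling is sketched below (it illustrates the idea only, and does
not represent the scheduler of any actual operating system): ready tasks wait in a priority queue, and the
dispatcher always grants the next time slice to the most urgent task, preempting and requeueing it when
the slice expires.

    #include <iostream>
    #include <queue>
    #include <string>
    #include <vector>

    struct Task {
        std::string name;
        int priority;           // larger value = more urgent
        int slicesRemaining;    // how many time slices of work remain
    };

    // Order the ready queue so the highest-priority task is always on top.
    struct ByPriority {
        bool operator()(const Task& a, const Task& b) const { return a.priority < b.priority; }
    };

    int main() {
        std::priority_queue<Task, std::vector<Task>, ByPriority> ready;
        ready.push({"data acquisition", 10, 2});   // high priority: cannot wait long
        ready.push({"clock accessory",   1, 1});   // low priority: updates rarely
        ready.push({"text editor",       5, 3});

        // Dispatch loop: preempt after each slice and rechoose the most urgent task.
        while (!ready.empty()) {
            Task t = ready.top();
            ready.pop();
            std::cout << "running " << t.name << " for one time slice\n";
            if (--t.slicesRemaining > 0)
                ready.push(t);                     // still has work: back into the queue
        }
        return 0;
    }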
Preemptive multitasking operating systems are natural candidates for engineering software, as the
intense memory and hardware resources associated with engineering computation require appropriately
high-powered operating system support. Virtually all large engineering computers of the present era
(e.g., workstations, mainframes, and supercomputers) run operating systems that provide preemptive
multitasking, and many microcomputers are now available with similar operating system support.
Multithreading. In the setting of multitasking, the term task has some inherent imprecision, and this
ambiguity leads to various models for allocation of computer resources among and within applications.
In the simplest setting, a task can be identified as an individual software application, so that a multitasking
operating system allocates resources sequentially among individual applications. In a more general
context, however, individual programs may possess internal granularity in the form of subprocesses that
may execute in parallel within an application. These subprocesses are termed threads, and operating
systems that support multiple threads of internal program execution are termed multithreaded operating
systems.
Examples of multiple threads of execution include programs that support internally concurrent oper-
ations such as printing documents while other work is in progress (where a separate thread is spawned
to handle the printing process), displaying graphical results while performing other calculations (where
a separate thread can be used to paint the screen as data are read or calculated), or generating reports
from within a data base application while other queries are performed. In general, multithreading of
individual subtasks within an application will be advantageous whenever spawned threads represent
components of the application that are complicated enough so that waiting for them to finish (which
would be required in a single-threaded environment) will adversely affect the response of the program.
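
A minimal C++ sketch of this practice appears below: a slow subtask (a simulated print job, with invented
names and delays) is spawned on a separate thread so that the main thread can continue with other work,
waiting for the print thread only when it must.

    #include <chrono>
    #include <iostream>
    #include <thread>

    // A slow subtask, such as formatting and printing a long document.
    void printJob() {
        std::this_thread::sleep_for(std::chrono::milliseconds(500));  // pretend to print
        std::cout << "print job finished\n";
    }

    int main() {
        std::thread printer(printJob);       // spawn a separate thread for printing

        // The main thread remains responsive and performs other calculations.
        for (int i = 0; i < 3; ++i)
            std::cout << "main thread continues working (" << i << ")\n";

        printer.join();                      // wait for the spawned thread to complete
        return 0;
    }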
Multiprocessing. One of the most important advantages of separating a program into multiple threads
is that this decomposition of programming function permits individual threads to be shared among
different processors. Computers with multiple CPUs have been common platforms for performing high-
end engineering computation for over a decade (e.g., multiprocessor supercomputer architectures, such
as the Cray X/MP and Cray Y/MP models introduced in the 1980s), but the availability of multiple
processing units within a single computer has finally gravitated to the realm of low-end microcomputers.
The ability of an operating system to support concurrent execution of different program threads on
different processors is termed multiprocessing. Multiprocessing occurs in two fundamental flavors:
Symmetric multiprocessing (SMP), where each individual CPU is capable of executing any
process, including threads originating within applications or within operating system services
Asymmetric multiprocessing (ASMP), where different processors are relegated to different tasks,
such as running applications or running operating systems services
Asymmetrical processing is commonly implemented using a dual-CPU architecture involving a mas-
ter/slave relation between the processing units. The master CPU performs the application and some
system services, while the slave CPU is relegated to pure system tasks (such as printing, waiting for
slow input/output devices, etc.). Asymmetric multiprocessing architectures provide some speed-up of
individual programs, but this increased performance is often limited to reducing the wait time required
for some system services. Symmetric multiprocessing can produce substantial gains in program execution
speed, as long as individual threads do not contend for resources. The ability of a program (or an
operating system) to take advantage of multiple CPU resources is termed scalability, and scalable
operating systems are well positioned to take advantage of current improvements in available multipro-
cessing hardware platforms.
Virtual Memory. Providing the extensive memory resources required for most engineering software can
be an expensive undertaking. Dynamic Random-Access Memory (DRAM) is too expensive to maintain
an appropriate supply for every program used in a multitasking environment. In practice, much of the
memory demand in a multitasking setting can be satisfied by caching some of the blocks of data ostensibly
stored in main memory to a fast disk storage subsystem. These blocks of data can be reloaded to main
memory only when they are absolutely required, and this practice of paging memory to and from the
disk is termed virtual memory management. In most common implementations of virtual memory, the
paging scheme provides a level of independence of memory addressing between processes that is
carefully implemented so that one process cannot corrupt the memory of another. Such schemes that
implement memory protection to prevent interapplication memory corruption are termed protected virtual
memory management.
Depending on demand for physical memory, virtual memory schemes may be a great help or a
hindrance. While sophisticated paging algorithms are available that are designed to avoid writing
needed memory to disk, in practice, if enough different applications are competing for memory,
the relative disparity in speed of memory vs. disk subsystems may lead to very sluggish performance
for applications whose memory resources have been written to the disk subsystem. In addition, multi-
processing architectures place further constraints on virtual memory performance in order to avoid
corruption of memory by different threads running on different CPUs. Modern virtual memory manage-
ment is an active area of research in computer science, but one empirical rule is still true: perhaps the
best way to improve the performance of any virtual memory operating system is to add physical (real)
memory!
Networking and Security. One of the most fundamental shifts in computing over the last decade has
been the transition from disconnected individual computers to a distributed computing model character-
ized by networked workstations that support various remote processing models. Most modern operating
systems support standard networking protocols that allow easy integration of different computers into
local- and wide-area networks, and also permit sharing of resources among computers. Traditional
networking functions (such as sharing files between different computers on the same network) have been
augmented to encompass remote computing services, including sharing applications between networked
computers (which represents a generalization of symmetric multiprocessing architectures from a single
computer to a disparate network of connected computers).
Because of the tremendous pace of changes in the field of computer networking, one of the most
important features of any network operating system involves adherence to standard networking protocols.
Networking standards provide a portable implementation of networking function that effectively abstracts
network operations, allowing existing networking applications to survive current and future changes in
networking hardware and software. The most common current networking model is one promulgated
by the International Standards Organization and termed the Open Systems Interconnect (OSI) reference
model. The OSI model uses layers (ranging from low-level hardware to high-level application connec-
tions) to idealize networking function. Adherence to the OSI model permits operating systems to become
insulated from improvements in networking hardware and software, and thus preserves operating system
investment in the face of rapid technological improvements in the field of computer networking.
Once an individual computer is connected to a network, a whole host of security issues arise pertaining
to accessibility of data across the network. Secure operating systems must satisfy both internal (local to
an individual computer) and global (remote access across a network) constraints to ensure that sensitive
data can be protected from users who have no right to access it. Since many mechanical engineering
applications involve the use of military secrets, adherence to appropriate security models is an essential
component of choosing an operating system for individual and networked computers.
There are many aspects to securing computer resources, including some (such as protected virtual
memory schemes) that satisfy other relevant computer needs. In the setting of computer security,
operating systems are classified according to criteria developed by the Department of Defense (DOD
5200.28-STD, December 1985). These DOD criteria provide for such features as secure logons (i.e.,
logging into a computer requires a unique user identifier and password), access control structures (which
restrict access to computer resources such as files or volumes), and auditing information (which provides
automated record keeping of security resources so as to help prevent and detect unauthorized attempts
at gaining access to secure computer resources).
Portability. Some operating systems (for example, MS/DOS, written in Intel 8086 assembly language)
are inextricably tied to the characteristics of a particular hardware platform. Given the rapid pace of
development in CPU hardware, tying an operating system to a particular family of processors potentially
limits the long-term utility of that operating system. Since operating systems are computer software
systems, there is no real obstacle to designing and implementing them in accordance with standard
practice in software engineering, and in particular, they can be made portable by writing them in high-
level languages whenever possible.
A portable operating system generally abstracts the particular characteristics of the underlying hard-
ware platform by relegating all knowledge of these characteristics to a carefully defined module respon-
sible for managing all of the interaction between the low-level (hardware) layer of the operating system
and the overlying systems services that do not need to know precise details of low-level function. The
module that abstracts the low-level hardware layer is generally termed a hardware abstraction layer
(HAL), and the presence of a HAL permits an operating system to be ported to various processors with
relative ease. Perhaps the most common portable operating systems are UNIX and Windows NT. Both
of these operating systems are commonly used in engineering applications, operate on a wide variety
of different CPUs, and are almost entirely written in the procedural C language.
Classification of Representative Operating Systems
Several operating systems commonly encountered in engineering practice are classified below in accor-
dance with the definitions presented above. Note that some of these operating systems are presently
disappearing from use, some are new systems incorporating the latest advances in operating system
design, and some are in the middle of a potentially long life span.
MS/DOS and Windows 3.x. The MS/DOS (Microsoft Disk Operating System) operating system was
introduced in the 1980s as a low-level controlling system for the IBM PC and compatibles. Its architecture
is closely tailored to that of the Intel 8086/8088 microprocessor, which has been both an advantage (leading
to widespread use) and a disadvantage (relying on the 8086's arcane segmented memory addressing scheme
has prevented MS/DOS from realizing effective virtual memory schemes appropriate for engineering
computation). MS/DOS is a single-processor, single-tasking, single-threaded operating system with no native
support for virtual memory, networking, or security. Despite these serious shortcomings, MS/DOS has
found wide acceptance, primarily because the operating system is so simple that it can be circumvented
to provide new and desirable functions. In particular, the simplicity of MS/DOS provides an operating
system with little overhead relative to more complex multitasking environments: such low-overhead
operating systems are commonly used in real-time applications in mechanical engineering for such tasks
as process control, data acquisition, and manufacturing. In these performance-critical environments, the
increased overhead of more complex operating systems is often unwarranted, unnecessary, or counter-
productive.
Microsoft Windows is an excellent example of how MS/DOS can be patched and extended to provide
useful features that were not originally provided. Windows 3.0 and 3.1 provided the first widely used
graphical user-interface for computers using the Intel 80x86 processor family, and the Windows
subsystem layers, which run on top of MS/DOS, also provided for some limited forms of cooperative
multitasking and virtual memory for MS/DOS users. The combination of MS/DOS and Windows 3.1
was an outstanding marketing success. An estimated 40 million computers eventually ran this combination
worldwide. Although this operating system had some serious limitations for many engineering applica-
tions, it is widely used in the mechanical engineering community.
VAX/VMS. Another successful nonportable operating system that has found wide use in engineering is
VAX/VMS, developed by Dave Cutler at Digital Equipment Corporation (DEC) for the VAX family of
minicomputers. VMS (Virtual Memory System) was one of the first commercial 32-bit operating systems
that provided a modern interactive computing environment with features such as multitasking, multi-
threading, multiprocessing, protected virtual memory management, built-in high-speed networking, and
robust security. VMS is closely tied to the characteristics of the DEC VAX microprocessor, which has
limited its use beyond that platform (in fact, DEC has created a software emulator for its current family
of 64-bit workstations that allows them to run VMS without the actual VAX microprocessor hardware).
But the VMS architecture and feature set are widely imitated in many popular newer operating systems,
and the flexibility of this operating system was one of the main reasons that DEC VAX systems became very
popular platforms for midrange engineering computation during the 1980s.
Windows NT. Windows NT is a scalable, portable, multitasking, multithreaded operating system that
supports OSI network models, high-level DOD security, and protected virtual memory. The primary
architect of Windows NT is Dave Cutler (the architect of VAX/VMS), and there are many architectural
similarities between these two systems. Windows NT is an object-oriented operating system that supports
the client-server operating system topology, and is presently supported on a wide range of high-
performance microprocessors commonly used in engineering applications. Windows NT provides a
Windows 3.1 subsystem that runs existing Windows 3.1 applications within a more robust and crash-
proof computational environment, but NT also provides other user interfaces, including a console
interface for textual applications ported from mainframes and MS/DOS, a UNIX-like graphical user
interface (provided by implementing common UNIX window management functions on top of NT), and
the Macintosh-like interface similar to that introduced in Windows 95 as a replacement for Windows
3.1 in mid-1995.
UNIX. The UNIX operating system was developed during the 1970s at Bell Laboratories to satisfy the
need for a flexible and inexpensive operating system that would provide high-end system services (e.g.,
multitasking, virtual memory) on low-cost computers. Since its initial inception, the UNIX operating
system has evolved to become one of the most successful operating environments in history. UNIX
provides preemptive multitasking, multithreading (threads are termed lightweight processes in most
implementations of UNIX), multiprocessing, scalability, protected virtual memory management, and
built-in networking. Although there are a variety of competing UNIX implementations, substantial
standardization of the UNIX operating system has occurred under the auspices of the Open Software
Foundation (OSF), a consortium of computer companies that includes IBM, DEC, and Hewlett-Packard.
OSF UNIX is based on IBM's AIX UNIX implementation and represents one of the most advanced
operating systems available today. Another important emerging standard for UNIX is the open-source
version termed Linux: this UNIX variation runs on a wide range of computers, is freely distributed, and
has an incredibly diverse feature set, thanks to the legions of programmers around the world who have
dedicated their skills to its development, extension, and support. Various versions of UNIX run on
virtually every type of computer available, ranging from inexpensive microcomputers through engineer-
ing workstations to expensive supercomputers. The ability of the UNIX system to evolve and retain a
dominant market position over two decades is a concrete testimonial to the advantages of strict adherence
to the principles of software engineering, because nearly all aspects of the UNIX operating system have
been designed and implemented according to these principles. In fact, much of the history of software
engineering is inextricably bound up with the history of the UNIX operating system.

Parallel Computation
The use of multiple processors and specialized hardware to speed up large-scale calculations has a rich
history in engineering computation. Many early mainframe computers of the 1970s and most supercom-
puters of the 1980s used specialized hardware whose design was influenced by the nature of engineering
computation. Vector processors, gather/scatter hardware, and coarse-grain parallel CPU architectures
have been used successfully over the past few decades to increase the performance of large-scale
computers used in engineering computation. Currently, most of the hardware advances of these past
large computers have migrated to the desktop, where they are readily available on microcomputers and
workstations. Understanding the basic principles of these advanced computer architectures is essential
to gain efficient utilization of their advantages, and so the following sections present an overview of the
fundamentals of this important field.
Technical Overview of Parallel and Vectorized Computation
Parallelism in computer hardware can occur on many levels. The most obvious example was addressed
in the setting of operating system services, where scalability over multiple CPUs was presented as a
means for a computer's operating system to utilize additional CPU resources. Parallelism achieved by
adding additional CPUs to a computer commonly occurs in two variations: a coarse-grained parallelism
characterized by a relatively small number of independent CPUs (e.g., typically from 2 to 16 CPUs),
and a fine-grained parallelism commonly implemented with substantial numbers of CPUs (typically,
from a minimum of around 64 up to many thousands). The former is termed symmetric multiprocessing,
or SMP, and the latter is referred to as massively parallel (MP) computing. Each is frequently encountered
in practical engineering computation, although SMP is much more common due to its lower cost and
relative simplicity.
It is also possible for parallelization to occur within an individual CPU. On specialized mainframes
with attached vector processors (and within the CPUs of most supercomputers), various different machine
instructions can be pipelined so that more than one instruction occurs in parallel in a particular arithmetic
unit. The practical effect of this internal parallelization in instruction execution is that many common
arithmetic operations (such as the multiplication of a vector by a scalar) can be performed by carefully
arranging the pipelined calculations to permit impressive performance relative to the computational effort
required on a nonpipelined CPU. One of these forms of internal parallelism within the CPU (or attached
processor) is termed vectorization, as it permits vector operations (i.e., those associated with a list, or
vector, of floating-point numbers that are stored contiguously in memory) to be processed at very fast
rates.
Such pipelined execution characteristics have now become commonplace even on low-cost micro-
computers. In fact, current high-performance microprocessors used in engineering workstations and
high-performance microcomputers generally have multiple independent arithmetic units that can operate
in parallel. Such internally redundant designs are called superscalar architectures and allow the CPU to
execute more than one instruction per clock cycle. The cumulative effect of such multiple independent
pipelined arithmetic units operating in parallel within an individual CPU is that current microprocessors
exhibit astonishing performance on typical engineering problems when compared to large (and expensive)
central computers of the 1980s. Engineering problems that required dedicated vector processors in the
1980s are commonly executed faster on low-priced microcomputers today. In addition to current hardware
eclipsing older vector processors in performance levels, modern computer language compilers are now
commonly tuned to particular microprocessor characteristics in order to provide the same sort of
computational performance formerly associated with specialized vectorizing compilers (i.e., compilers
that could recognize common vector operations and generate appropriately efficient machine instruc-
tions).
Finally, another important form of parallelism has developed in conjunction with high-performance
networking schemes. Distributed computing applications are commonly designed to parallelize calcula-
tions by dividing up individual threads of execution among disparate computers connected via a high-
speed network. While this sort of distributed computing was sometimes encountered in the 1980s on
high-priced computers (such as DEC VAX clusters, which transparently balanced computational loads
over a collection of networked minicomputers), similar distributed computing schemes are now becoming
commonplace on microcomputers and workstations connected over local-area networks.
Classification of Parallel Architectures
There are many schemes to characterize parallel computer architectures, including the SMP/MP classi-
fication given above. Since computer programs consist of instructions and data, it is possible to further
classify parallelization schemes by considering the redundancy (or lack thereof) in these components.
The four possible classifications are single instruction/single data (SISD); single instruction/multiple
data (SIMD); multiple instruction/single data (MISD); and multiple instruction/multiple data (MIMD).
Of these four, the first pertains to nonparallel computation (such as a standard computer with a single
processor). The others include representations of practical schemes for MP (massively parallel) com-
puting, SMP (symmetric multiprocessing) computation, and networked distributed computing.
SIMD computers are particularly simple representatives of massively parallel systems. In order to run
at maximum speed, each CPU in a SIMD architecture has to execute the same instructions as its neighbors
(here neighbors is a flexible term that represents various topological arrangements among small groups
of processors). The result is a massively parallel computer where individual processors act in unison:
every instruction is executed by many processors, each with its own local data. If the underlying algorithm
used can be constrained to fit into this SIMD architecture, the performance obtained can be phenomenal.
Many standard numerical schemes were already well organized for SIMD calculations: for example,
simple two-dimensional finite-difference calculations for heat conduction involve replacing the value at
a node with the average of the node's four neighbors in a north/west/south/east pattern. This simple
calculation is relatively easy to implement on a SIMD computer, and so calculations such as these finite-
difference molecules result in highly scalable performance. More complex calculations, such as those
found in modeling nonlinear problems, are generally much more difficult to implement on SIMD
computers and often result in poor performance on these simplified computer architectures.
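
The finite-difference example just mentioned can be sketched in a few lines of C++; the essential point is
that every interior grid point executes exactly the same arithmetic on its own local data, which is the
single-instruction/multiple-data pattern (the serial loop below only exhibits the data-parallel structure and
is not itself a SIMD implementation).

    #include <vector>

    int main() {
        const int n = 64;    // grid dimension (illustrative)
        std::vector<std::vector<double>> told(n, std::vector<double>(n, 0.0));
        for (int j = 0; j < n; ++j)
            told[0][j] = 100.0;                    // hold one edge hot as a boundary condition
        std::vector<std::vector<double>> tnew = told;

        // One Jacobi-style relaxation sweep: each interior node is replaced by the
        // average of its north, south, east, and west neighbors. Every node performs
        // the identical instruction stream on its own data, so the sweep maps
        // naturally onto a SIMD machine with one (or a few) nodes per processor.
        for (int i = 1; i < n - 1; ++i)
            for (int j = 1; j < n - 1; ++j)
                tnew[i][j] = 0.25 * (told[i-1][j] + told[i+1][j] + told[i][j-1] + told[i][j+1]);

        told.swap(tnew);                           // the new field becomes the old field
        return 0;
    }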
MISD computers are commonplace today, both in the form of supercomputers and in SMP desktop
workstations. In each case, a small set of individual processors shares memory (single data), and each
processor operates more or less independently of the others (multiple instructions). This form of paral-
lelism has been aided by operating systems (such as current flavors of UNIX and Windows NT) that
support multithreading, allowing the programmer (or the operating system) to distribute threads among
the various processors in a typical SMP environment. Given appropriate programmer and operating
system support, MISD computers can be highly scalable, so that adding additional processors results in
associated decreases in execution time. There are substantial obstacles to increased performance in a
MISD environment, including the important issue of contention by different threads for common
resources. An example of resource contention occurs when two different processors attempt to access a
shared memory address simultaneously: some form of signaling and locking mechanism must be provided
to ensure that no two processors simultaneously modify the same memory location. Currently, many vendors
offer hardware-based support for resolving contention (and related bottlenecks) in symmetric multipro-
cessing, and standards are currently evolving in this important area of support.
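
The need for such a locking mechanism can be shown with a short C++ sketch: two threads repeatedly
update one shared counter, and a mutual-exclusion lock guarantees that only one thread at a time modifies
the shared memory location (on a symmetric multiprocessor the two threads may genuinely execute on
different CPUs).

    #include <iostream>
    #include <mutex>
    #include <thread>

    long counter = 0;        // shared data: a single memory location
    std::mutex counterLock;  // locking mechanism protecting the shared data

    void addMany() {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> guard(counterLock);  // acquire the lock
            ++counter;                                       // safely modify shared memory
        }                                                    // lock released at end of scope
    }

    int main() {
        std::thread a(addMany), b(addMany);   // two threads contend for the counter
        a.join();
        b.join();
        std::cout << "final count = " << counter << "\n";    // always 200000 with the lock
        return 0;
    }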
Probably the most common MIMD example of current interest is in distributed computing performed
on networked computers. Since each computer has its own local data and instruction stream (MIMD),
this approach combines many of the best features of single-computer implementations of both symmetric
multiprocessing and massively parallel architectures. While networks of computers have been used for
over a decade in solving many easily parallelized applications (such as classical mathematical problems
arising in number theory), using such distributed computer networks to perform general-purpose engi-
neering computations is a more recent development. MIMD networked computation requires that a host of
instruction synchronization and data replication issues be resolved (these are the network equivalent
of the resource contention problems of SMP architectures), but substantial progress is underway in
addressing these bottlenecks to distributed computing. The use of object-oriented models for distributed
computing, which permit hiding many of the details of the distribution process via appropriate process
abstraction, appears to be an especially important avenue toward the efficient use of distributed networked
computing.

Computer Graphics and Visualization


Mechanical engineering is one of the most important sources of applications in computer graphics, and
mechanical engineers form a significant market for computer-graphics software. Many of the most
important developments in the history of computer graphics, such as the development of spline models
for realistic surface representations, were originally motivated by the particular needs of the mechanical
engineering profession. Current topics of importance to researchers in computer graphics, such as the
applications of scientific visualization or the use of rational physical-based models in computer anima-
tion, are also motivated in large part by the diverse needs and knowledge base of the mechanical
engineering field.
Technical Overview of Computer Graphics
The fundamental motivation for computer graphics and visualization arises from the adage that a picture
is worth a thousand words. The human visual sense is the only sensory apparatus with sufficient
bandwidth (i.e., information-carrying capacity) to permit rapid evaluation of the large data sets charac-
teristically associated with problems in science and engineering. Mechanical engineers have historically
been aware of the importance of using graphics, and college students in this field have traditionally been
required to study engineering graphics as a required course during their first-year programs of study.
The commonly cited observation that graphics is an engineer's second language is pertinent today,
especially in light of the importance of television and other visual arts during the formative years of
younger engineers.
There is a rich nomenclature associated with the field of computer graphics (see Foley and VanDam,
1982, for a detailed overview). Some high-level terms commonly used in the field should be defined
before more detailed exposition of this important field is attempted; the definitions given below are not
intended to be all-inclusive, but instead to be concise enough to permit further review of this topic.
Visualization: the study of applying computer graphics toward the goal of displaying collections
of data in a relevant and informative manner
Rendering: converting a mathematical model of a scene geometry (or a collection of data values)
into a visually meaningful image on the computer
Virtual reality: technology permitting simultaneous display and control of graphical simulations
so that the user interprets the display as the existence of an alternative reality
Multimedia: the integration of senses besides the visual sense into a computer application or
simulation
Graphical user-interface: a metaphor for human-computer interaction using standardized graphical
mechanisms for control, input, and output
One of the main reasons that rapid progress has occurred in computer graphics is that practitioners
in the field have learned to leverage the efforts of others by adhering to industry-wide standards. The
code of standardization and practice in computer graphics exists on many levels, ranging from low-level
coordinate choices developed as a foundation toward abstracting the rendering and display process to
high-level format standardizations required for the portable use of multimedia content. The breadth of
standardization in computer graphics is beyond the scope of this handbook, but relevant information can
be found in the publications of the ACM/SIGGRAPH. This organization is the primary instrument of
dissemination and standardization efforts in the computer graphics industry.
Visualization Methods in Mechanical Engineering
The fundamental problem of visualization in science and engineering is the conversion of raw data into
an informative visual display. The data may arise from a closed-form mathematical representation, or it
may be obtained as an organized collection of data presented in tabular or related formats. The most
important initial step in engineering visualization is the recognition of the domain (i.e., the set of input
values) and range (i.e., the set of output values) of the data to be represented, so that an appropriate
visual display can be synthesized. For example, if the range of the data includes a temporal component,
then a time-dependent display scheme such as an animation is often warranted. If the domain is a physical
region in space (such as a mechanical object being analyzed), then a common visualization scheme
consists of mapping that region onto the display surface as a background for visual results. The details
of the particular display choice depend upon the range of the output data: for display of scalars (such
as temperature, or specific components of stress), appropriate visualization schemes include contour
maps using lines or color-filled contours, as shown in Figure 15.2.7 (in two dimensions) and in Figure
15.2.8 (in three dimensions, for the same general stress analysis problem). When rendering three-
dimensional results as in Figure 15.2.8, it is necessary to provide for removal of hidden surfaces from
the display.
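
At the heart of a color-filled contour display is a mapping from each scalar value in the data range to a
display color. A minimal C++ sketch of such a color map appears below; the simple blue-to-red ramp is a
stand-in for the multi-band legends used in Figures 15.2.7 and 15.2.8.

    #include <algorithm>
    #include <iostream>

    struct Color { int r, g, b; };   // display color, each channel 0-255

    // Map a scalar value in [lo, hi] to a color interpolated from blue (low) to red (high).
    Color colorMap(double value, double lo, double hi) {
        double t = (value - lo) / (hi - lo);        // normalize the scalar to [0, 1]
        t = std::max(0.0, std::min(1.0, t));        // clamp out-of-range data
        return { static_cast<int>(255 * t), 0, static_cast<int>(255 * (1.0 - t)) };
    }

    int main() {
        // Example: color a few stress values from a range like that of Figure 15.2.7.
        const double lo = -1.96e4, hi = 8.95e5;
        for (double stress : { -1.0e4, 3.0e5, 8.0e5 }) {
            Color c = colorMap(stress, lo, hi);
            std::cout << stress << " -> RGB(" << c.r << ", " << c.g << ", " << c.b << ")\n";
        }
        return 0;
    }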
For display of vectors, the use of arrows aligned with the local vector field is a common means for
displaying this more complex type of range: an example is shown in Figure 15.2.9.
In order to aid in the understanding of complicated visualizations, it is often useful to provide visual
cues that help the viewer interpret the display presented. In Figure 15.2.10, a contour plot of
pressure within a heater fan is shown (at high Reynolds number), with an inset picture showing the
geometry of the fan used in the computational simulation. The combination of virtual display (the
pressure contours) and physical display (the fan assembly) aids the viewer in understanding the model
geometry and solution response. (Figures 15.2.9 and 15.2.10 are included courtesy of Advanced Scientific
Computing Corporation of El Dorado Hills, CA.)
Another useful approach for providing contextual cues to aid in understanding a complex visualization
is to overlay the graphical image on top of a physical setting associated with the simulation. For example,
geographical simulations (such as models of contaminant transport used for air pollution modeling) can
be overlaid on a map of the geographical area under study: viewers will instantly grasp the problem
domain in terms of landmarks such as rivers, roads, mountains, and other obvious visual cues. An
alternative approach is shown in Figure 15.2.11, where a visualization of a complex reactor simulation
is overlaid on a picture of the reactor itself. In this figure, the color contours plot gas temperatures in
a model of a chemically reacting Navier-Stokes simulation used for modeling vapor deposition processes
occurring in semiconductor manufacture. (The simulation and visualization were both performed at
Sandia National Laboratories in Livermore, CA.)
When time-dependent results are to be visualized, there are two standard schemes commonly used.
The first approach involves treating the time-dependent behavior of the data as another spatial dimension.
This approach is commonly used to graph scalar-valued functions of time t as y = y(t), with the function's
variation plotted on the vertical axis and the time on the horizontal axis. It can also be used to draw
more complex functions such as the one shown in Figure 15.2.12, where a scalar-valued function of two
variables z = z(x, t) is plotted as a three-dimensional surface (in this example, the axial displacement of
a bar as a wave moves along its length). Note that the surface is drawn in perspective to aid in the
perception of three-dimensional behavior even though it is plotted on a two-dimensional page. Using
such depth cues as perspective (as well as others, including hidden-surface removal and shading and
realistic illumination models) to aid in realistic display of computer images is the essence of the rendering
process.
The other standard approach for visualizing time-dependent data is to treat the temporal variation of
the data in a natural form by animating the history of the display. Standard animation techniques were
formerly beyond the range of cost and effort for most engineering purposes, as computer-generated
animation workstations of the 1980s typically cost over $100,000 and required considerable dedicated
expertise for operation. Currently, microcomputers have become excellent platforms for animation and
visualization, and appropriate computer programs (known as nonlinear digital video editing software)
for generating and composing animations from engineering simulations have become popular on inex-
pensive microcomputers and low-end engineering workstations.
Multimedia in Mechanical Engineering Practice
Multimedia is an imprecise term whose definition has evolved over the recent past as the performance
of computers has improved substantially. Formerly, multimedia was taken to mean integration of different
types of display (such as a textual computer used to control a graphical device for display of archived
images). The current context of the term includes multiple senses, such as the integration of video and
sound, or the control of a visual display by a tactile three-dimensional graphical input device. The former
flavor of multimedia naturally draws the analogy with silent movies being replaced by those with a
sound track: that technological development revolutionized the movie industry in a manner similar to
the predicted impact of multimedia on computing practice. The latter example is widely used in virtual
reality applications.

FIGURE 15.2.7 Two-dimensional contour plot for stress analysis (color-filled contours of maximum principal
stress sigma-max, ranging from a minimum of -1.960E+04 to a maximum of 8.949E+05; contour legend omitted).
FIGURE 15.2.8 Three-dimensional contour plot for stress analysis (maximum principal stress, ranging from a
minimum of -1.914E+04 to a maximum of 9.563E+05; contour legend omitted).

FIGURE 15.2.9 Three-dimensional vector field plot for torque converter simulation.

FIGURE 15.2.10 Pressure contour plot with solution geometry cues.


FIGURE 15.2.11 Gas temperature contour plot with reactor background.

FIGURE 15.2.12 Time-dependent data viewed as space-time surface plot (axial displacement and axial stress
histories for a bar; axis annotations omitted).


Multimedia is presently used in education, training, marketing, and dissemination efforts in the
engineering profession (see Keyes [1994] for a detailed exposition of the application of multimedia to
all of these fields). Each of these uses is based on two empirical observations: (1) people generally find
multimedia presentation of data more interesting, and this increased interest level translates into better
retention of data presented; and (2) good use of visualization principles in multimedia presentation
provides a better audience understanding of the material presented. The promise of simultaneous
increased understanding and improved retention is a strong motivation for using multimedia in the
presentation of information.
There are considerable technical issues to address in efficient use of multimedia technology. The most
important technical problem is the sheer volume of stored data required to make animations, which are
a mainstay of multimedia presentations. A single high-quality color computer workstation image (e.g.,
one frame of an animation that would typically be displayed at 24 to 30 frames per second to convey
the sense of continuous motion) requires from approximately 1 to 4 million bytes of storage. Generating
full-screen animations therefore requires anywhere from a few dozen to over a hundred megabytes of
data to be processed and displayed every second. Such video bandwidths are rare on computers, and
even if they were available, the demand for storage would soon outstrip available supply, as each 10 sec
of video potentially requires up to a gigabyte of high-speed storage. Storing sound requires less dramatic
capacities, but even low-quality (e.g., AM-radio quality) audio requires substantial storage for long
durations.
The practical way to overcome these technical limitations to storage and display bandwidth is to
develop efficient ways to compress visual and aural redundancies out of the data stream on storage, and
then decompress the archived data for editing or display. Such COmpression/DECompression schemes
(termed CODECs) are generally used to support multimedia applications ranging from digital animation
preparation on microcomputers to video teleconferencing applications. Common CODECs are tailored
for individual applications and auxiliary technology (for example, playing video from a CD/ROM device
generally requires high compression ratios in order to accommodate the relatively poor data transfer
rates obtained with CD/ROM technology), and their performance is measured in terms of compression
ratio, which is the ratio of uncompressed image size to compressed image size.
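
The storage figures quoted above follow from simple arithmetic, as the short C++ calculation below
illustrates for a hypothetical 1152 x 900 true-color (3 bytes per pixel) animation displayed at 30 frames
per second; the last line shows the effect of a 100:1 compression ratio of the kind discussed next.

    #include <iostream>

    int main() {
        const double width = 1152, height = 900;   // illustrative workstation screen size
        const double bytesPerPixel = 3;            // 24-bit true color
        const double framesPerSecond = 30;

        const double bytesPerFrame  = width * height * bytesPerPixel;
        const double bytesPerSecond = bytesPerFrame * framesPerSecond;
        const double bytesPerTenSec = bytesPerSecond * 10;

        std::cout << "uncompressed frame:  " << bytesPerFrame  / 1.0e6 << " MB\n";    // about 3.1 MB
        std::cout << "uncompressed stream: " << bytesPerSecond / 1.0e6 << " MB/s\n";  // about 93 MB/s
        std::cout << "ten seconds:         " << bytesPerTenSec / 1.0e9 << " GB\n";    // about 0.93 GB

        const double compressionRatio = 100.0;     // e.g., an aggressive CODEC setting
        std::cout << "compressed stream:   "
                  << bytesPerSecond / compressionRatio / 1.0e6 << " MB/s\n";          // about 0.93 MB/s
        return 0;
    }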
Various CODECs in common use include:
JPEG (Joint Photographic Experts Group) CODEC. JPEG is a CODEC originally developed for
efficient storage of individual images, such as photographic libraries. JPEG is a lossy algorithm
(i.e., some information may be lost during compression), but JPEG is so carefully tuned to human
perceptual characteristics that it is feasible to achieve compression ratios on the order of 10:1 or
more without being able to detect any losses, even after repeated recompressions. Higher com-
pression ratios (on the order of 100:1) can be achieved, but these more efficient rates may introduce
visible artifacts of the compression process. JPEG is generally only feasible for animation appli-
cations through the use of specialized hardware, but the same hardware can be used for both
compression and decompression tasks. JPEG is a very commonly used CODEC in animation
applications, as the required hardware is relatively inexpensive, the performance (e.g., speed,
image quality) is acceptable, and the algorithm is easy to implement within animation applications
(e.g., digital video editing software).


MPEG (Motion Pictures Experts Group) CODEC. MPEG is a family of CODECs that is becoming
popular for display-only animation applications. The various MPEG implementations (MPEG
and MPEG-2) require prodigious amounts of computational effort for the compression step, so
they are not presently feasible for inexpensive video capture applications. MPEG CODECs
routinely provide compression ratios approaching 100:1 with relatively little degradation of image
quality. This compression efficiency permits very large and complex animations to be stored and
played back using low-technology devices such as CD/ROMs. The fundamental difference
between MPEG and JPEG schemes is that with JPEG, each video frame is compressed independently
of all the other frames, whereas MPEG uses adjoining frame information by concentrating
on the differences between frames to compress the overall animation more efficiently (most
animations have only small temporal differences among adjacent frames, and this time-dependent
redundancy is ignored by the JPEG algorithm and carefully exploited by the MPEG family). It
is expected that the near-term will see many inexpensive implementations of MPEG CODECs
for microcomputers, and many proposed video storage standards (e.g., new videodisc technology
designed to replace existing VHS video playback devices) rely on MPEG compression schemes
for efficient storage of movies on CD/ROMs.
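The sketch below illustrates, only conceptually and not as the actual MPEG algorithm, why exploiting temporal redundancy pays off: when consecutive frames differ only in a small region, the frame-to-frame difference is almost entirely zero and therefore far more compressible than the full second frame. The frame size and "moving patch" are arbitrary assumptions.

```python
import numpy as np

# Two synthetic consecutive "frames" that differ only in a small moving region.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, size=(480, 640), dtype=np.int16)
frame2 = frame1.copy()
frame2[200:220, 300:340] += 10        # only a 20 x 40 patch changes

diff = frame2 - frame1                # temporal difference image
changed = np.count_nonzero(diff)
print(f"changed pixels: {changed} of {diff.size} "
      f"({100.0 * changed / diff.size:.2f}%)")
# A difference image that is almost entirely zero compresses far better than
# re-encoding the full second frame; this is the redundancy MPEG exploits.
```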
Another important technical issue arising in multimedia applications is the synchronization of time-
dependent data streams. For example, multimedia applications using both video and sound must guar-
antee synchronization of these two data streams in order to maintain audience interest. Probably the
most important synchronization method is that provided by the Society of Motion Picture and Television
Engineers (SMPTE). SMPTE synchronization depends on a time-code that numbers every frame of
video. This time-code information can be used to synchronize audio information to the video source; in
fact, this is exactly how commercial movie soundtracks are synchronized with the motion picture. Other
time-code mechanisms include RC time-code, which is a frame-indexing scheme that is available on
many consumer-grade video products, including camcorders, video recorders, and video editing equip-
ment. RC provides the same sort of synchronization as SMPTE, but with fewer features and at a reduced
cost.
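A time-code of this kind is simply an encoding of an absolute frame count. The small function below converts a frame index to an HH:MM:SS:FF string; the 30 frame/sec rate and the non-drop-frame convention are simplifying assumptions.

```python
def frames_to_timecode(frame_index, fps=30):
    """Convert an absolute frame count to an HH:MM:SS:FF time-code string
    (non-drop-frame; the frame rate is an assumed parameter)."""
    ff = frame_index % fps
    total_seconds = frame_index // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(frames_to_timecode(107_892))   # 00:59:56:12 at 30 frames/sec
```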
Standard microcomputer multimedia programming interfaces generally provide synchronization
mechanisms for simultaneous audio and video display. Both Microsoft's Video for Windows system
software (used under Windows and Windows NT operating systems) and Apple's QuickTime software
(available for Macintoshes, Windows, Windows NT, and UNIX workstations) provide time-code infor-
mation pertinent for simple synchronization of multimedia. In addition, current versions of QuickTime
also provide SMPTE time-code support, in order to simplify the task of high-performance digital video
editing on the Macintosh platform.
Graphical User-Interface Design and Implementation
The marriage of human perceptual principles from applied psychology with the technology of interactive
computer graphics resulted in the development of graphical user-interfaces (GUIs). GUIs have since
evolved into one of the most revolutionary ideas in the history of computing. The basic idea behind a
GUI is that visualization can be used to provide a powerful abstraction for the interaction between human
and computer, and that basing that graphical abstraction on principles of human perception and ergo-
nomics will result in increased user productivity. Most of the early work on development of GUIs
occurred at the Xerox Palo Alto Research Center in California, and the fundamental ideas developed
there were first implemented in a consumer setting when Apple Computer released the Macintosh in
1984. Since then, virtually every major computer vendor has adopted a GUI similar in appearance to
that of the Macintosh, and it is clear that using a well-designed GUI yields substantial gains in produc-
tivity for most computer-assisted tasks.
The fundamental rules for implementing a GUI within a computer program are simple: the program's
interface must be intuitive, consistent, permissive, and responsive (see Apple [1985] for a well-written
introduction relating to the widely imitated Macintosh user-interface guidelines). An intuitive program


interface is easy to understand and responds naturally to a user's actions. Implementing an intuitive
interface is a similar task to good industrial design, where principles of cognitive ergonomics are used
to insure that controls of manufactured products (such as household appliances) are appropriately easy
to use. The classic example of an appliance with an intuitive user interface is a toaster (in fact, one of
the design goals of the Macintosh development process was to design a computer that would be as easy
to use as a toaster), as the controls are naturally placed and behave intuitively. Intuition in computer
applications thus implies natural composition, arrangement, and performance of graphical controls, such
as buttons to be pressed for execution of commands.
The role of consistency is essential to the goal of improving the productivity of computer users. Until
the widespread use of GUIs, most application programs used individual and unique schemes for such
mundane tasks as saving files, printing documents, or quitting the program. Unfortunately, there was
little standardization of commands for performing these common functions, so the user was faced with
a substantial learning curve for each program encountered. With the advent of consistent graphical user-
interfaces, most standard commands need to be learned only once; all other
applications will use the same mechanism for performing these common functions. The cost of training
is thus primarily amortized over the first program learned, and the resulting productivity gains are often
substantial. While the goal of complete consistency between applications (and especially between
different types of GUI) has not yet been realized, there has been considerable progress toward this
ultimate goal.
Permissiveness places the user in control of the computer, instead of the opposite situation, which
was routinely encountered in older textual interfaces. The user should decide what actions to do and
when to do them, subject only to the feasibility of those actions. Permissiveness allows users to structure
their workflow in a personalized manner, which allows them to seek and find the best individualized
way to accomplish computer-assisted tasks. Permissiveness is one of the most difficult attributes for a
computer programmer to implement, as it requires enabling and testing a myriad of different execution
pathways in order to guarantee that all feasible approaches to using the computer program have been
insured to work reliably. The quality control processes of permissive GUI-based programs are often
automated (using software that generates random input processes to simulate a variety of user actions)
in order to test unexpected command sequences.
Responsiveness is another essential element of ensuring that the user feels in control of the software.
In this sense, a responsive GUI-based program provides some appropriate feedback (generally graphical)
for every legitimate user-initiated action. This feedback may be as simple as a beep when an improper
command is issued, or as complicated as rendering a complex picture in response to user-supplied input
data. Responsiveness and intuition work together in the control of the programs actions; for example,
the pull-down menu command procedures of most current GUI programs permits the user to see clearly
(typically, via highlighting of the currently selected menu item) which of a series of intuitive commands
are about to be initiated.
Writing good GUI-based software is a difficult process and is especially cumbersome when procedural
languages are used. Modern event-driven interactive software is naturally suited to the more asynchronous
message-driven architecture available with object-oriented programming models, and it is in the area of
GUI-based applications that object-oriented languages have begun to corner the development market.
Since managing the GUI requires so much programmer effort (and because the goal of consistency
requires that all such code behave in a similar manner), there is a considerable incentive to reuse code
as much as possible in this task. The use of class libraries designed for implementing a standard GUI
model is a highly effective way to develop and maintain graphical interactive computer programs, and
this use is one of the most common applications of class libraries in software engineering.
There are many standard GUI elements, including:
Menus for permitting the user to initiate commands and exert program control
Dialogs for field-based input of data by the user


Windows for graphical display (output) of program data


Controls (such as buttons or scroll bars) for auxiliary control of the program
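A minimal sketch of these standard elements, written here in Python with the Tkinter toolkit standing in for any GUI class library, shows how little application code is required when menus, dialogs, windows, and controls are supplied by a reusable library.

```python
# Minimal sketch of the standard GUI elements listed above, using Tkinter
# as a stand-in for any GUI class library.
import tkinter as tk
from tkinter import messagebox, simpledialog

root = tk.Tk()                       # window: graphical display of program data
root.title("GUI sketch")

# Menu: lets the user initiate commands and exert program control
menubar = tk.Menu(root)
file_menu = tk.Menu(menubar, tearoff=0)
file_menu.add_command(label="Quit", command=root.quit)
menubar.add_cascade(label="File", menu=file_menu)
root.config(menu=menubar)

# Dialog: field-based input of data by the user
def ask_value():
    value = simpledialog.askfloat("Input", "Load magnitude:")
    if value is not None:
        messagebox.showinfo("Echo", f"You entered {value}")

# Control (a button) wired to the dialog above
tk.Button(root, text="Enter load...", command=ask_value).pack(padx=40, pady=40)

root.mainloop()   # event loop: the program responds to user-initiated events
```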
Virtual Reality Applications in Engineering
Virtual reality is one of the fastest-growing aspects of computer graphics. Efficient and realistic virtual reality
applications require incredible amounts of computational effort, so advances in this area have been
limited in the past, but the increased computational performance available with current CPUs (and with
better support for multiprocessing and parallelization of effort) has made virtual reality solutions feasible
for many engineering problems. The primary motivation for virtual reality is the same one as for computer
graphics: using the body's senses (and especially the sense of sight) is a very productive way to gain
insight into large amounts of data or for simulating difficult (e.g., life-threatening) situations.
There are two common technological approaches to virtual reality: windowed virtual reality applica-
tions, in which the view into the virtual world is represented by a conventional planar graphical computer
display, and immersive virtual reality which requires specialized hardware (such as display headsets) to
convey a sense of being immersed in the artificial virtual world. While windowed virtual reality appli-
cations are common (running on platforms ranging from high-performance computers with color displays
to mass-market dedicated computer game machines designed for use with television sets), immersive
virtual reality applications are still fairly expensive and rare. In addition, there is considerable anecdotal
evidence that attempting to perform immersive virtual reality with equipment insufficiently powerful to
update the display instantaneously (i.e., within the interval of around 1/30 sec associated with the
persistence of vision) may induce nausea and disorientation in many users.
There are many important types of virtual reality applications in use in engineering and related fields.
Walkthrough (or perambulatory) virtual reality programs permit the user to move around in the virtual
world, where either easily recognizable images (such as architectural renderings of virtual construction)
or completely virtual images (such as visualization displays involving surface plots, color contours,
streamline realizations, and other standard visualization schemes updated in realtime) are encountered
and manipulated. Walkthrough virtual reality applications are typically used to gain new perspective on
simulations or proposed constructions. The primary control ceded to the user is the power to move
through the virtual world, and perhaps to modify the type of display encountered.
Synthetic experience is another important component of the application of virtual reality. The classic
example of this form of virtual reality application is a flight simulator, where an integrated display and
a movable cockpit are used to simulate the look and feel of flight. Synthetic experience applications
permit the user to manipulate the virtual world in exactly the same manner as the real world, and these
programs are used for education, training, and professional practice. The basic motivation for this class
of virtual reality applications is to permit the user to practice and learn techniques that would be otherwise
too expensive or too dangerous to perform by real methods as opposed to virtual practice. This approach
has been used extensively in military applications (e.g., battle simulators) and is gaining wide use in
medicine and other professional fields (e.g., virtual surgery, where surgeons can practice surgical tech-
niques on the computer or even perform real surgery from a remote virtual location).


15.3 Computational Mechanics


Computational mechanics is the art and science of simulating mechanics problems on the computer. The
rapid development of computational mechanics is an important component of the foundation of today's
high-technology world. Current developments in aerospace, automotive, thermal, biomedical,
and electromagnetic engineering applications are in large part due to the success of computational
mechanics researchers in finding efficient schemes to simulate physical processes on the computer.
Understanding the strengths and weaknesses of the current computational mechanics field is an important
task for the practicing mechanical engineer interested in using computer-aided engineering methods.
Computational approximations for mechanics problems are generally constructed by discretizing the
domain of the problem (i.e., the underlying mechanical system) into smaller components. Ideally, in the
limit as the size of each of these individual discretized components becomes negligibly small, the
computational approximation obtained becomes exact. In practice, there is a lower limit on how small
a discretization can be used, as round-off error (caused by truncation of numerical representations used
on computers) and requisite computational effort eventually conspire to produce inaccurate or excessively
expensive results. Other alternative approximation schemes, including semi-analytical methods such as
truncated Fourier series representations, are also commonly used to provide numerical solutions for
engineering problems. These nondiscretized methods require letting the number of solution functions
(such as the trigonometric functions used in Fourier series) approach infinity to gain convergent results
and are thus superficially similar to discretization schemes such as finite-element and finite-difference
methods, where the number of grid points tends to infinity as the size of the grid spacing decreases.

Computational Solid, Fluid, and Thermal Problems


Before specific computational approximation schemes are presented, it is enlightening to enumerate the
wide variety of problems encountered in mechanical engineering practice. In many cases, each individual
subdiscipline of mechanics (e.g., solid mechanics) uses different modeling techniques to obtain compu-
tational approximations, so that reviewing the various branches of mechanics also serves to foreshadow
the spectrum of numerical schemes used in the different branches of computational mechanics.
Solid Mechanics
Solid mechanics is one of the oldest and best-understood branches of mechanics, and its associated
computational schemes are similarly well established. The most common application of solid mechanics
is stress analysis, where the mechanical deformation of a body is studied to determine its induced state
of stress and strain. The study of stress analysis formerly included considerable emphasis on construction
and instrumentation of physical models using strain gauges and photoelastic coatings, but modern
computational schemes for modeling deformation of solids have rendered these traditional laboratory
techniques nearly extinct. The most currently popular stress analysis methods are those based on
computational approximations using finite-element models. This family of computer simulation tech-
niques is one of the most successful branches of modern computational mechanics, and practical
implementations of finite-element methods represent a unified computational model for nearly the entire
range of solid mechanical response.
Solids are modeled by appealing to three classes of mathematical relationships: equilibrium, compat-
ibility, and stress-strain laws. Equilibrium statements represent mathematical expressions of conservation
laws and are generally expressed in terms of the conservation of linear and angular momentum. Com-
patibility conditions include the kinematical strain-displacement relations of mechanics, where strains
are expressed as spatial derivatives of the displacement field. Stress-strain laws are also termed consti-
tutive relations, because they idealize the constitution of the material in order to relate the interplay of
stresses and strains in a deformed body. The fundamental mathematical relations for the dynamic response
of a three-dimensional isotropic linear-elastic solid body are given below; note that these equations
are linear, in that they involve no products of the solution's components of displacement, stress, or strain.


This linearity arises from the assumptions of small deformations, a linear stress-strain law, and the fact
that the reference frame for these equations is attached to the solid body. The linear nature of these
relations makes them relatively easy to solve for a wide range of problems, and this simplicity of analysis
is a major reason why stress analyses based on these relations are such an important part of
computational mechanics.
Definitions for Solid Mechanics Equations

$$
\begin{aligned}
u,\ v,\ w &= x,\ y,\ z \text{ components of displacement}\\
\ddot{u},\ \ddot{v},\ \ddot{w} &= x,\ y,\ z \text{ components of acceleration}\\
\sigma_x,\ \sigma_y,\ \sigma_z,\ \tau_{xy},\ \tau_{yz},\ \tau_{zx} &= \text{stress components}\\
\epsilon_x,\ \epsilon_y,\ \epsilon_z,\ \gamma_{xy},\ \gamma_{yz},\ \gamma_{zx} &= \text{engineering strains}\\
g_x,\ g_y,\ g_z &= \text{gravity body force components}\\
E &= \text{Young's Modulus}\\
\nu &= \text{Poisson's Ratio}\\
\rho &= \text{Density}
\end{aligned}
\qquad (15.3.1)
$$

Dynamic Equilibrium Equations for a Solid

$$
\begin{aligned}
\rho\,\ddot{u} &= \frac{\partial \sigma_x}{\partial x} + \frac{\partial \tau_{xy}}{\partial y} + \frac{\partial \tau_{zx}}{\partial z} + \rho g_x\\
\rho\,\ddot{v} &= \frac{\partial \tau_{xy}}{\partial x} + \frac{\partial \sigma_y}{\partial y} + \frac{\partial \tau_{yz}}{\partial z} + \rho g_y\\
\rho\,\ddot{w} &= \frac{\partial \tau_{zx}}{\partial x} + \frac{\partial \tau_{yz}}{\partial y} + \frac{\partial \sigma_z}{\partial z} + \rho g_z
\end{aligned}
\qquad (15.3.2)
$$

Compatibility Equations for a Solid

$$
\begin{aligned}
\epsilon_x &= \frac{\partial u}{\partial x} \qquad & \gamma_{xy} &= \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\\
\epsilon_y &= \frac{\partial v}{\partial y} \qquad & \gamma_{yz} &= \frac{\partial v}{\partial z} + \frac{\partial w}{\partial y}\\
\epsilon_z &= \frac{\partial w}{\partial z} \qquad & \gamma_{zx} &= \frac{\partial w}{\partial x} + \frac{\partial u}{\partial z}
\end{aligned}
\qquad (15.3.3)
$$


Isotropic Linear-Elastic Stress-Strain Relations

$$
\begin{aligned}
\epsilon_x &= \frac{1}{E}\left[\sigma_x - \nu\left(\sigma_y + \sigma_z\right)\right] \qquad & \gamma_{xy} &= \frac{2(1+\nu)}{E}\,\tau_{xy}\\
\epsilon_y &= \frac{1}{E}\left[\sigma_y - \nu\left(\sigma_z + \sigma_x\right)\right] \qquad & \gamma_{yz} &= \frac{2(1+\nu)}{E}\,\tau_{yz}\\
\epsilon_z &= \frac{1}{E}\left[\sigma_z - \nu\left(\sigma_x + \sigma_y\right)\right] \qquad & \gamma_{zx} &= \frac{2(1+\nu)}{E}\,\tau_{zx}
\end{aligned}
\qquad (15.3.4)
$$
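For readers who wish to experiment with these relations, the short routine below evaluates the isotropic stress-strain equations (15.3.4) for a given stress state; the material values in the example (nominally mild steel) are illustrative assumptions.

```python
import numpy as np

def isotropic_strains(stress, E, nu):
    """Evaluate the isotropic linear-elastic relations (15.3.4).
    stress = (sx, sy, sz, txy, tyz, tzx); returns (ex, ey, ez, gxy, gyz, gzx)."""
    sx, sy, sz, txy, tyz, tzx = stress
    ex = (sx - nu * (sy + sz)) / E
    ey = (sy - nu * (sz + sx)) / E
    ez = (sz - nu * (sx + sy)) / E
    shear = 2.0 * (1.0 + nu) / E
    return np.array([ex, ey, ez, shear * txy, shear * tyz, shear * tzx])

# Example: assumed mild steel (E = 200 GPa, nu = 0.3) under 100 MPa uniaxial stress
print(isotropic_strains((100e6, 0, 0, 0, 0, 0), E=200e9, nu=0.3))
# axial strain 5.0e-4, lateral strains -1.5e-4, zero shear strains
```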

Solids are commonly distinguished from liquids by the ability of solids to maintain their shape without
restraint due to enclosing containers. Because of this particular form of mechanical strength, solids
provide many simple structural approximations that naturally arise as degenerations of full three-
dimensional solid mechanical response. These idealizations of solid mechanical behavior are diagrammed
in Figure 15.3.1 and include:
FIGURE 15.3.1 Common idealizations of solid mechanical behavior.

Rods, with a local one-dimensional geometry and applied loads directed along the natural axis
of the rod (rods are also termed spars or struts)
Beams, which are similar to rods except that beams permit loads to be applied in directions
perpendicular to the natural axis
Shafts, which are also similar to rods, except that they are loaded by torques directed along their
natural axis in addition to forces applied in that axial direction
Membranes, which are locally two-dimensional structures loaded in the plane defined by their
geometry
Plates, which are two-dimensional flat structures loaded by arbitrary forces and couples, including
those that excite membrane response and those that bend or twist the natural two-dimensional
reference surface of the plate
Shells, which are generalizations of plates to include curvature of the reference surface in three
dimensions


Theories of computational solid mechanics are often categorized by reference to the particular struc-
tural idealization being studied. In particular, computational theories for beams, plates, and shells are
similar because each is characterized by bending deformation, in which the primary mechanical response
may occur at right angles to the natural geometric reference frame for the structure. Problems that include
substantial bending deformation often require considerable care in computational modeling in order to
avoid inaccurate results.
Another very important aspect of solid mechanics involves nonlinear material and geometric response.
Nonlinear material behavior is commonly encountered in computational inelasticity, where the plastic,
viscous, or nonlinear elastic response of a solid material must be modeled. In many materials (such as
most metals) this inelastic behavior is realized after considerable elastic response, and hence it may
often be neglected. In other materials (such as rubber or its relatives commonly used in aerospace
applications), the nonlinear material properties must be addressed throughout the range of computational
simulation. Furthermore, many of the idealizations presented earlier (such as rods and membranes)
possess nonlinear geometric behavior that may dominate the mechanical response. One important
example of this form of geometric nonlinearity is the buckling of a rod subjected to axial compressive
stresses.
Fluid Mechanics
The basic principles of fluid mechanics are exactly the same as those for solid mechanics: the mechanical
system under consideration is analyzed using appropriate equilibrium, compatibility, and stress-strain
relations. In practice, because of the inability of a fluid to resist shear, fluids are considerably more
difficult to characterize than solids, and the effective computational mechanics of fluids is therefore a
more difficult and diverse topic. Perhaps the most important difference between common models for
solid and fluid mechanics is that the base reference frame for the solution process is generally different.
For solids, most analyses proceed by modeling the deformation relative to a reference frame attached
to the material body being deformed. Fluids are generally modeled instead by attaching the problem's
reference frame to a region in space and analyzing the flow of the fluid as it is transported through this
geometric reference frame. The latter scheme is termed an Eulerian formulation of the reference frame,
and the former is called a Lagrangian reference configuration. While the Eulerian world view is more
common for analyzing fluids (primarily because tracking the individual motion of all particles of the
fluid material is generally intractable in practice), this conceptually simpler approach gives rise to many
practical computational difficulties because the underlying conservation principles are most readily posed
in a reference frame attached to the material.
One of the most important mathematical models of fluid response is the set of Navier-Stokes equations.
These equations are presented below for a three-dimensional flow field involving a Newtonian fluid (i.e.,
one readily characterized by the material parameter viscosity, which is assumed not to depend upon fluid
stress) and an incompressible formulation (see Panton, 1984 or White, 1979). It is worth noting that
even in the case of these simplifying assumptions, which correspond to those given above for the
mathematical formulation of solid mechanics, the Navier-Stokes equations are nonlinear, in that products
of solution quantities occur in the convective terms on the left-hand side of the momentum conservation
equations. These nonlinearities arise because of the Eulerian reference frame used to model fluid flow,
and cause fluid mechanics problems to be generally much more difficult to solve than analogous solid
mechanics problems.


Definitions for Fluid Conservation Equations

$$
\begin{aligned}
u,\ v,\ w &= x,\ y,\ z \text{ components of velocity}\\
p &= \text{pressure field}\\
g_x,\ g_y,\ g_z &= \text{gravity body force components}\\
\rho &= \text{density}\\
\mu &= \text{viscosity}
\end{aligned}
\qquad (15.3.5)
$$

Fluid Momentum Conservation Equations

$$
\begin{aligned}
\rho\left(\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + w\frac{\partial u}{\partial z}\right) &= \rho g_x - \frac{\partial p}{\partial x} + \mu\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}\right)\\
\rho\left(\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + w\frac{\partial v}{\partial z}\right) &= \rho g_y - \frac{\partial p}{\partial y} + \mu\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} + \frac{\partial^2 v}{\partial z^2}\right)\\
\rho\left(\frac{\partial w}{\partial t} + u\frac{\partial w}{\partial x} + v\frac{\partial w}{\partial y} + w\frac{\partial w}{\partial z}\right) &= \rho g_z - \frac{\partial p}{\partial z} + \mu\left(\frac{\partial^2 w}{\partial x^2} + \frac{\partial^2 w}{\partial y^2} + \frac{\partial^2 w}{\partial z^2}\right)
\end{aligned}
\qquad (15.3.6)
$$

Fluid Mass Conservation Equation


$$
\frac{\partial (\rho u)}{\partial x} + \frac{\partial (\rho v)}{\partial y} + \frac{\partial (\rho w)}{\partial z} = 0
\qquad (15.3.7)
$$

Fluid mechanical response is generally classified according to a variety of schemes, including whether
the flow is internal (confined, as in flow-through ducts) or external (confining, as in the case of modeling
flow around an airfoil in aerospace applications), and whether the internal or external flow field contains
a fluid-free surface. Free-surface flows are considerably more complex to model using computational
methods because the physical extent of the fluid material (and hence of the problems geometric domain)
is unknown a priori, and therefore determining the geometry of the mechanical problem must be
considered as an essential part of the problem solution process.
One important facet of mechanical engineering practice involves diffusion phenomena, such as the
flow of heat through a conductor. Such problems in heat conduction, porous media flow, and other
diffusive systems can be readily modeled using standard fluid mechanical models appropriate for
diffusion and dispersion. In addition, problems involving combined advection and diffusion (such as
mass transport problems encountered in environmental engineering or in manufacturing) are readily
modeled using techniques similar to related fluid mechanics models.
One of the most common ways to categorize fluid mechanical behavior arises from consideration of
various dimensionless parameters defined by the constitution of the fluid and its flow regime. Represen-
tative dimensionless parameters relevant to fluid mechanics include:
The Reynolds number, which relates the effects of fluid inertia to those of fluid viscosity forces
and which naturally arises in a broad range of fluid problems
The Mach number, which measures the ratio of fluid speed to the speed of sound in the fluid:
this parameter is important in compressible flow problems
The Peclet number, which quantifies the relative effects of fluid transport due to advection and
dispersion and is used in fluid mass transport modeling


The Taylor number, which is used to predict instabilities present in various fluid flow regimes
and may be encountered when dealing with the computational ramifications of fluid instability
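These parameters are simple algebraic combinations of flow and material quantities, as the small helper functions below indicate; the water-pipe numbers in the usage example are illustrative assumptions, included only to show a typical Reynolds number calculation.

```python
def reynolds(rho, U, L, mu):
    """Re = rho*U*L/mu: fluid inertia relative to viscous forces."""
    return rho * U * L / mu

def mach(U, c):
    """Ma = U/c: flow speed relative to the speed of sound."""
    return U / c

def peclet(U, L, D):
    """Pe = U*L/D: advective transport relative to diffusive (dispersive) transport."""
    return U * L / D

# Assumed values: water at room temperature flowing at 2 m/s through a 0.05 m pipe
print(reynolds(rho=998.0, U=2.0, L=0.05, mu=1.0e-3))   # roughly 1e5, a turbulent regime
```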
Unlike solid mechanics, there is no body of computational expertise that provides a generic model
for computational approximation of the full range of fluid response. In practice, computational schemes
are chosen according to particular aspects of fluid behavior (such as the various classifications and
dimensionless parameters given), which substantially limits the range of applicability of any particular
computational solution scheme. Many fluid models are solved using finite-element models, as in solid
mechanics, but others have largely resisted characterization by such general computational schemes.
Fluid material models are generally more complex than those found in solid mechanics, as fluids
exhibit a bewildering array of diversified material responses. Because of the difficulty in modeling many
fluid constitutive parameters, many fluid models are analyzed using gross constitutive models that often
neglect complicating material behavior (one common example is the case of inviscid flow
models, where neglecting the presence of fluid viscosity results in tremendous simplification of the
resulting computational approximation). When fluid material response is important, as in the example
of modeling transition from laminar to turbulent fluid behavior, the requisite constitutive equations for
the fluid become appropriately more complex. While there is not yet a completely unified model for
fluid constitutive behavior, each individual subdiscipline of computational fluid mechanics utilizes fluid
material models that provide reasonable accuracy for fluid response within the range of the particular
problems studied.
Electromagnetics problems are commonly solved by practicing mechanical engineers using techniques
originally utilized for computational fluid dynamics. It is feasible to recast much of the computational
technology developed for fluids into the setting of electromagnetics problems, because many electro-
magnetic simulations are qualitatively similar to fluid response. For example, potential flow simulation
techniques are readily adapted to the potential formulations of electrostatics and magnetostatics. Alter-
natively, the full range of fluid response exhibited by the Navier-Stokes equations, which includes
transient inertial and dissipative effects, is qualitatively similar to large-scale transient problems involving
coupled magnetic and electrical fields. Mechanical engineers interested in performing electromagnetic
simulations will commonly find that expertise in fluid computer modeling provides an excellent under-
standing of methods appropriate for solving electromagnetic problems.
Thermal Analysis
The determination of the static or transient temperature distribution in a physical medium is one of the
best-known cases of the mathematical setting known as scalar potential field theory. In this case, the
scalar potential is the temperature field, and the heat flow in the physical domain occurs in a direction
governed by the product of a constitutive tensor known as the thermal conductivity and the gradient of
the temperature distribution. In simple terms, heat flows "downhill," or opposite the direction of the
temperature gradient. The governing mathematical relations for transient conductive heat transfer in a
three-dimensional solid are given below (see Kreith and Bohn, 1993). Note that this form of the energy
conservation law will not apply if the conductivity varies with position, as is common in many mechanical
engineering applications: in this case, the second derivatives on the right-hand side of this relation must
be generalized to include spatial derivatives of the conductivity term.


Definitions for Thermal Equation

$$
\begin{aligned}
T &= \text{temperature field}\\
k &= \text{thermal conductivity}\\
s &= \text{distributed heat sources}\\
\rho &= \text{density}\\
c &= \text{specific heat}
\end{aligned}
\qquad (15.3.8)
$$

Heat Energy Conservation Equation

$$
\rho c\,\frac{\partial T}{\partial t} = k\left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} + \frac{\partial^2 T}{\partial z^2}\right) + s
\qquad (15.3.9)
$$
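As a concrete illustration, the sketch below integrates the one-dimensional form of Equation (15.3.9) (constant conductivity, no internal sources) with an explicit finite-difference scheme; the material properties, mesh, and boundary temperatures are illustrative assumptions. The time step is deliberately chosen below the explicit stability limit discussed later in this section.

```python
import numpy as np

# 1-D form of (15.3.9): rho*c*dT/dt = k*d2T/dx2, explicit time stepping.
# Material values (nominally steel), mesh, and boundary data are assumptions.
k, rho, c = 50.0, 7800.0, 500.0          # W/m-K, kg/m^3, J/kg-K
alpha = k / (rho * c)                     # thermal diffusivity, m^2/s
L, n = 0.1, 51                            # bar length (m) and node count
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha                # below the stability limit dx^2/(2*alpha)

T = np.full(n, 20.0)                      # initial temperature, deg C
T[0], T[-1] = 100.0, 20.0                 # fixed end temperatures

for step in range(2000):                  # march forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(f"midpoint temperature after {2000 * dt:.1f} s: {T[n // 2]:.1f} C")
```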

Conductive thermal problems are relatively straightforward to solve, which leads to the incorporation
of thermal effects in many commercial solid and fluid mechanics analysis programs. The salient
mathematical quality of these thermal analyses is that they model diffusive phenomena: they exhibit
smooth solution behavior, and there are few localization effects in either space or time of the kind that
cause serious computational problems in transient solid, fluid, and electromagnetic analyses.
Convective thermal problems are considerably more difficult to model by computer because they involve
transport of heat by a moving fluid, and thus may suffer from the same difficulties inherent in modeling
transient fluid response.
Biomechanics
Biomechanics is an increasingly important component of mechanics (see Section 20.3). Computational
simulation of biomechanical systems is difficult because many biomechanical problems straddle the
boundary between solid and fluid mechanics and the mechanical properties of many organic components
are diverse. For example, the computational modeling of head trauma, which is of considerable interest
in automotive engineering applications, is complicated by the fact that the brain is a semisolid porous
mechanical system, and its motion and deformation are further constrained by the solid enclosure of the
skull. While rational finite-element mechanical modeling of head trauma has been underway for over
two decades (Shugar, 1994), the problem still resists accurate computational simulation.
Even though biomechanical problems are inherently difficult, they are often encountered in professional
practice. The practicing engineer interested in performing accurate computational modeling of complex
biomechanical systems should insure that appropriate verification measures (such as computer visualiza-
tion of model geometry and results) are undertaken in order to minimize the potential for errors caused
by the overwhelming difficulties in modeling these important but often intractable mechanical systems.
Coupled Solid/Fluid/Thermal Field Analyses
Many real-world problems encountered in mechanical engineering practice defy simple categorization
into solid, fluid, or otherwise. These practical problems often include combinations of classes of response,
and the efficient solution of such coupled problems requires careful investigation of the individual
components of the combined problem. There are several important questions that must be addressed
when solving a coupled mechanical problem:
Is it necessary to solve for all components of the solution response simultaneously (i.e., can the
problem components be decoupled)?
If the solution processes can be decoupled, which aspects of the solution must be found first, and
which components can be considered of secondary relevance during the decoupled solution
process?


Are there any mathematical pathologies present in the coupled problem that may compromise
the accuracy of the computed solution?
In general, fully coupled solution schemes (in which all components of the solution are found at the
same time) will be more accurate over the entire range of solution response and will also require more
computational effort. Under some circumstances, one or more components of the solution can be
identified as of relatively minor importance (either because of their relative magnitude or their insignif-
icant rate of change with time), and then the solution process can be decoupled in order to simplify the
resulting calculations. For example, in many problems involving the coupled mechanical/thermal
response of a material, it is common practice to assume that the temperature field can be calculated
independently of the state of stress. Furthermore, an additional simplification can be obtained by
assuming that the mechanical deformation of the system does not affect the temperature distribution
substantially. The analysis of a mechanical system during relatively slow changes in ambient temperature
(such as the diurnal variations of temperature encountered outdoors between night and daytime direct
sunlight) could be modeled with reasonable accuracy using these assumptions, and the net result of this
simplification is that the coupled problem could be solved as a series of uncoupled transient problems.
At each time in the solution history, the temperature field would be computed, and this temperature
distribution would then be used to compute the mechanical deformation at the same time. As long as
the underlying assumptions leading to these simplified decoupled problems are satisfied, this uncoupled
approach works well in practice.
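In outline, such a staggered (decoupled) strategy can be expressed as the loop sketched below; the thermal and mechanical solvers appear only as placeholders, since their contents depend entirely on the discretization and constitutive models chosen for a given problem.

```python
# Schematic of the decoupled (staggered) solution strategy described above:
# at each time step the temperature field is advanced first, and the resulting
# temperatures drive the mechanical (thermal-stress) update.  Both solver
# functions are placeholders for real thermal and structural analyses.

def advance_temperature(T, t, dt):
    """Placeholder transient thermal solve (e.g., a conduction time step)."""
    return T  # a real solver would integrate the heat equation here

def solve_deformation(T):
    """Placeholder static stress analysis driven by the current temperatures."""
    return {"displacement": None, "stress": None}

def staggered_solution(T0, t_end, dt):
    T, t, history = T0, 0.0, []
    while t < t_end:
        T = advance_temperature(T, t, dt)     # step 1: thermal field
        mech = solve_deformation(T)           # step 2: deformation at the same time
        history.append((t + dt, T, mech))
        t += dt
    return history

history = staggered_solution(T0=20.0, t_end=1.0, dt=0.1)
```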
In many problems, it is not possible to decouple the component problems, and all aspects of the
solution must be determined simultaneously. Some important examples occur in aerospace applications,
including the modeling of flutter instabilities in aircraft structures (where the coupled external flow and
solid mechanics problems are considered concurrently) and the ablation of rocket engine components
(where the modeling of combustion, heat transfer, thermal pyrolysis, mechanical deformation, and porous
media flow effects all must be considered simultaneously).
In many coupled problems, there are mathematical solution pathologies present that may corrupt the
accuracy of the computer solution. In particular, whenever the solution process involves determination
of coupled kinematical solution components (i.e., geometric unknowns such as velocities and displace-
ments) and statical solution parameters (i.e., force-like quantities such as pressure), then the mathematical
setting for guaranteeing accuracy of the computer approximation may become considerably more com-
plex. Coupled problems involving both statical and kinematical solution components are termed mixed
problems, and practicing engineers who must model mixed problems ought to familiarize themselves
with an outline of the potential pathologies that can be exhibited by naive application of mixed compu-
tational technologies. While a complete characterization of these problems is far beyond the scope of
this brief introduction to coupled problems, some hint of the underlying theory and practice is presented
in the following sections.

Mathematical Characteristics of Field Problems


Effective computational solution of the problems encountered in mechanical engineering requires at least
an introductory knowledge of the underlying mathematics. Without a fundamental understanding of such
relevant issues as solvability (the study of the conditions required for a solution to exist), convergence
(measuring whether the computed solution is accurate), and complexity (quantifying the effort required
to construct a computational solution), the practicing engineer may spend considerable time and resources
constructing computational approximations that bear little or no resemblance to the desired exact
answers. The material presented below is intended to provide an overview of the relevant mathematical
content required to evaluate the quality of a computer approximation.
Solvability Conditions for Computational Approximations
Before any physical problem is modeled using a computational approximation, it is essential to ensure
that the underlying mathematical model is well posed. Well-posed problems can be characterized as


those where a solution exists and where that solution is unique. While solvability concerns (i.e., those
relating to whether a mathematical problem can be solved) such as mathematical existence and unique-
ness are often considered as mere niceties by practicing engineers, in fact these properties are important
aspects of constructing a numerical solution. It is imperative for engineers using numerical approximation
schemes to understand that the computer programs generally used are deterministic: they will construct
a numerical solution to the problem and will generate results, regardless of whether the physical solution
actually exists. One of the most important components of the practical use of computational approxi-
mations such as finite-element models is the verification that the solution obtained actually provides a
reasonable representation of the physical reality. There are a number of important problems that arise
in mechanical engineering practice that are not well posed, and it is important for the engineer to
recognize symptoms of this sort of mathematical pathology.
Existence of a solution is often guaranteed by the underlying physics of the problem. Most practical
stress analyses model well-posed physical problems whose analogous mathematical existence is estab-
lished by appropriate mechanics theorems. For example, any linear-elastic stress analysis of a compress-
ible solid body can be shown to possess at least one solution. Furthermore, if the elastic body is
constrained to prevent rigid-body translations and rotations, those same theorems insure that the solution
is unique. As long as the engineer uses appropriately convergent computational models (i.e., ones that
can be shown to be capable of getting arbitrarily close to exact results in the limit as the discretization
is refined), then the unique solution constructed on the computer will represent an appropriate approx-
imation to the unique physical problem being modeled. Analogous existence and uniqueness theorems
can be found for a wide variety of problems commonly encountered in mechanical engineering practice.
There are many instances in which the physical problem is not well posed. One important example
is when a stress analysis is performed on a problem that is not constrained to prevent rigid-body
translations and/or rotations. In this case (which commonly arises during the modeling of vehicles and
aircraft) the computational solution obtained will generally represent the exact static solution super-
imposed onto a large rigid body displacement. Depending upon the precision of the computer used
(among other details), the computed stresses and strains may be accurate, or they may be contaminated
due to truncation errors resulting from the static displacements being overwhelmed by the computed
rigid-body motion that is superimposed upon the desired static solution. Proper attention to detail during
the modeling process, such as constraining the solution to prevent unwanted rigid body displacement,
will generally prevent this class of solvability pathologies from occurring.
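A very small example makes the rigid-body pathology concrete: the stiffness matrix of an unconstrained two-node spring is singular, and the static problem becomes solvable only after a displacement constraint is imposed. The numerical values are arbitrary.

```python
import numpy as np

# A two-node spring (stiffness k) with no displacement constraints: the
# stiffness matrix is singular because a rigid-body translation costs no
# strain energy, so the static problem has no unique solution.
k = 1000.0
K = np.array([[ k, -k],
              [-k,  k]])
f = np.array([0.0, 1.0])

print(np.linalg.matrix_rank(K))        # 1, not 2: the system is singular
# np.linalg.solve(K, f) would raise LinAlgError here.

# Constraining the first node (removing its row and column) restores a
# well-posed problem with a unique solution.
u2 = np.linalg.solve(K[1:, 1:], f[1:])
print(u2)                              # [0.001] = f / k
```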
Other more complex solvability concerns commonly arise in mechanical engineering problems where
there is an underlying constraint applied to components of the solution field. Problems such as incom-
pressible elasticity (where the volume of the mechanical body is invariant under deformation), incom-
pressible flow (where the conservation of mass implies a continuity constraint on the velocity field),
or electromagnetic field applications (where the magnetic field must be divergence-free) all possess
constraints that can lead to serious solvability difficulties during the computational approximation
process. In incompressible elasticity (the other mentioned cases admit similar resolutions), two particular
pathologies present themselves: if there are no prescribed displacement constraints on the problem, then
a solution will exist, but it will not be unique. In this case, the resulting computational approximation
will exhibit rigid body motion, and the underlying stress analysis may or may not be accurate, depending
upon the precision of the computer and the magnitude of the computed rigid-body motion. On the other
hand, if the entire boundary of the incompressible body is subject to a constrained displacement field,
then either the overall prescribed displacement of the boundary will satisfy a global incompressibility
condition (in which case the solution will exist, the computed displacements will be unique, but the
associated stresses may be suspect), or the global incompressibility condition is not satisfied (in which
case there is no feasible solution to the underlying physical problem, so the computed solution will be
completely spurious and should be ignored).
Because of the difficulty in ascertaining a priori whether solvability considerations are germane to
the physical problem being modeled, alternative schemes are often desirable for verifying that the
computed solution actually represents a reasonable approximation to the physical problem being mod-


eled. One particularly useful computational scheme for verifying solution quality involves the application
of computer graphics and visualization to display solution behavior (e.g., stresses and strains) for
interactive interpretation of the results by the engineer. Verification of input and output via graphical
means involves the development and use of graphical pre- and postprocessors (respectively) designed
to permit interactive review of the solution behavior. High-quality interactive pre- and postprocessors
for use with a particular computational mechanics application are very desirable characteristics and
should be given due consideration during the selection of CAE software for use in mechanical engineering
practice. When nonlinear problems (with their accompanying solvability pathologies) are encountered,
use of interactive graphics for solution verification is often required to insure the reliability of the
computed nonlinear solution.
Convergence, Stability, and Consistency
In any numerical method, the primary issue that must be addressed is that of convergence. A convergent
method is one that guarantees that a refinement of the discretization will produce generally more accurate
results. Ultimately, as the discretization mesh size decreases, the exact solution can be approximated to
an arbitrarily small tolerance; as the size of the discretization measure decreases, the answers should
converge to the correct solution. A second criterion is that of accuracy, which is related to that of
convergence. Where convergence addresses the question "does the error go to zero as the discretization
step size decreases?", accuracy is concerned with "at any particular step size, how close is the
approximation to the solution?" and "at what rate does the error go to zero with step size?". A convergent
method in which the error tends to zero as the square of the step size (a quadratic convergence rate)
will eventually become more accurate than another scheme in which the error and step size decrease at
the same rate (a linear convergence rate).
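The distinction between linear and quadratic convergence rates is easy to demonstrate numerically; the sketch below compares forward and central difference approximations to a derivative as the step size is halved (the function, evaluation point, and step sizes are arbitrary choices).

```python
import numpy as np

# Forward difference is first-order (linear) accurate; central difference is
# second-order (quadratic).  Halving h roughly halves the first error and
# quarters the second.
f, x, exact = np.sin, 1.0, np.cos(1.0)
for h in [0.1, 0.05, 0.025]:
    forward = (f(x + h) - f(x)) / h                 # O(h) accurate
    central = (f(x + h) - f(x - h)) / (2.0 * h)     # O(h^2) accurate
    print(f"h={h:6.3f}  forward error={abs(forward - exact):.2e}  "
          f"central error={abs(central - exact):.2e}")
```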
A concern especially relevant to time-dependent simulations is that of stability: a stable method is one
that guarantees that errors introduced at one step cannot grow with successive steps. If a method is
unstable, even when the errors introduced at each step are small, they can increase exponentially with
time, thus overwhelming the solution. In a stable method this cannot occur, although stability alone does
not imply anything about the size of solution errors that may be introduced at each step, or whether
errors from different times can grow by accumulation. Many numerical methods for time-dependent
approximation are only conditionally stable, implying that stability is guaranteed only when the step
size is smaller than some critical time scale dictated by the data of the physical problem and by the
discretization (in mechanics problems, this time scale is often related to the shortest period of vibration
for the structure). Some idea of this critical time scale must be known a priori for a conditionally stable
method to behave in a robust manner. For this reason, the use of unconditionally stable methods is often
preferred, since these methods are stable regardless of the step size (although the actual size of the errors
introduced at each step may still be large).
Another important issue in numerical approximations for differential equations is consistency. A
consistent discrete approximation is one that approaches the underlying continuous operator as the size
of the discretization tends to zero. All standard finite-difference operators (and their finite-element
relatives used for time-dependent approximations) are consistent, because as the size h of the discreti-
zation parameter tends to zero, the appropriate derivative is recovered. Inconsistent discretization schemes
are rarely (if ever) encountered in practice.
The convergence and stability characteristics of a numerical method are not independent: they are
related by the Lax Equivalence Theorem, which is alternatively termed The Fundamental Theorem of
Numerical Analysis. In simple terms, the Lax Equivalence Theorem states that convergence (which is
the desired property, but one which is often difficult to demonstrate directly) is equivalent to the
combination of consistency and stability. Because consistency and stability are relatively easy to deter-
mine, the task of finding convergent approximations is often replaced by that of searching for schemes
that are simultaneously consistent and stable.


Computational Complexity
Computational complexity is another important characteristic of a numerical scheme. The complexity
of an algorithm is measured in terms of the asymptotic rate of growth of computational effort required
to perform the algorithm. For example, standard schemes (such as factorization or Gauss Elimination)
for solving a full linear system of equations with N rows and columns require computational effort that
grows as the cube of N; these algorithms thus possess a cubic rate of growth of effort. (Note that standard
schemes such as Gauss Elimination also involve terms that grow as the square of N, or linearly with N:
as N becomes asymptotically large, these latter terms diminish in size relative to the cube of N, so the
algorithm is characterized by the cubic term which ultimately dominates the growth of effort.) Many
(or most) standard numerical schemes exhibit some form of polynomial computational complexity, and
for large problems, selecting the algorithm with the lowest polynomial exponent (i.e., the slowest
asymptotic rate of growth) is generally a good idea.
Many common algorithms encountered in engineering handbooks are not suitable for large-scale
engineering computation because their computational realizations exhibit prohibitively large growth
rates. For example, many recursive schemes, such as Cramer's rule for finding the solution of a linear
system of equations via recursive evaluation of various matrix determinants, exhibit rates of growth that
are larger than any polynomial measure. Cramer's rule possesses a factorial rate of growth of computational
effort (i.e., the effort grows as N!), and thus should be avoided completely for N larger than
ten. When faced with implementing or using a computational scheme such as this, the practical engineer
should consult appropriate references in order to determine whether alternative simpler algorithms are
available; in the case of Cramer's rule, it is easy to solve a linear system (or evaluate a determinant)
using algorithms that grow cubically, and for large N, there is a vast difference in effort between these
latter schemes and a naive implementation of Cramer's rule, as shown in Table 15.3.1 (note that the
values of N presented in the table are extremely small by the standards of typical computational
approximations, and hence the relative gain in using lower-order complexity schemes is even more
important than the results in the table might indicate).

TABLE 15.3.1 Polynomial and Factorial Growth Rates


N        N^2        N^3          N!

2        4          8            2
5        25         125          120
10       100        1,000        3,628,800
20       400        8,000        2.433 × 10^18
50       2,500      125,000      3.041 × 10^64
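The table entries are easily reproduced, as the short loop below shows; it also makes plain how quickly the factorial term outpaces the polynomial ones.

```python
from math import factorial

# Reproduce the growth rates of Table 15.3.1: polynomial effort (N^2, N^3)
# versus the factorial effort of a naive Cramer's-rule implementation.
for n in (2, 5, 10, 20, 50):
    print(f"N={n:3d}  N^2={n**2:>6}  N^3={n**3:>9}  N!={factorial(n):.3e}")
```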

In general, the choice of algorithms for computational simulation of mechanics problems should be
restricted to methods that are both convergent and at least conditionally stable. In addition, it should be
clear that an unconditionally stable method, especially if it has a higher-order convergence rate, is to be
desired. Finally, one normally prefers those methods that exhibit the slowest asymptotic rate of growth
of computational effort, especially when large problems are to be solved.

Finite-Element Approximations in Mechanical Engineering


Finite-element models are among the most efficient, general, and accurate computational approximation
methods available in engineering. Many of the attractive characteristics of finite-element computational
models arise from a seemingly unlikely source: instead of representing the mathematical statement of
the underlying mechanics as systems of differential equations, finite-element models are motivated by
considering equivalent integral formulations, e.g., the basic equations of equilibrium, compati-
bility, and stress-strain can be replaced by equivalent integral energy formulations such as the Principle


of Virtual Work or the Theorem of Minimum Potential Energy. These latter integral principles form the
basis for finite-element computational approximations.
For example, the differential equations for equilibrium of a solid body can be replaced by an integral
formulation: the principle of virtual work of elementary mechanics is one particular well-known example
of this equivalence. Once identified, the equivalent integral formulation is considerably simpler than the
differential form, and this simplicity is even more apparent when mechanics problems are generalized
to include such practical effects as point loads, arbitrary geometric constraints, and heterogeneous or
anisotropic material properties. Under these more interesting conditions (i.e., in situations that are likely
to arise in mechanical engineering practice), the differential equations of mechanics become nearly
impossible to write, as the standard calculus symbolism is barely sufficient to express the range of
intractable differential behavior. However, the integral formulation requires only marginal modifications
under these more general conditions, and this simpler integral approach also leads to appropriate elegant
and general computational approximation schemes.
In addition to providing a simpler means to express the fundamental mathematical relations of
mechanics, the integral foundation of finite-element models also provides other important features. One
important aspect of finite-element models is their computational efficiency; another is that strong and
general convergence proofs can be cited for large classes of finite-element models, obviating the need
for application of the more complicated Lax Equivalence Theorem to prove convergence and estimate
accuracy. Perhaps the most important theoretical characteristic of these integral calculus models is that
their general formulation provides a unified computational framework that includes many other practical
schemes (such as finite-difference models) as special cases of a general integral theory of computational
mechanics. Therefore, careful study of the mathematical underpinnings of finite-element modeling also
provides considerable insight into such diverse alternative computational schemes as finite-difference
models and boundary-element methods.
Overview of Finite-Element Models
Finite-element models are most commonly utilized in computational solid mechanics, where they have
been used successfully on a very general range of practical problems. Integral formulations for solid
mechanics (such as the Principle of Virtual Work) are well-established general results, and it is straight-
forward to develop generic finite-element computer models from these fundamental integral mechanics
formulations. Since many problems in other branches of mechanics (such as Stokes Flow in fluid
mechanics, which is a direct mathematical analog of the solid mechanical problem of deformation of
an incompressible elastic medium) are closely related to standard problems of solid mechanics, finite-
element models are rapidly gaining favor in many other branches of mechanics as well. In general, any
physical problem that can be expressed in terms of integral energy formulations is a natural candidate
for finite-element approximation.
The most common application of finite-element models is in computational solid mechanics. In this
area, finite-element models can be made arbitrarily accurate for a wide range of practical problems.
Standard stress analysis problems involving moderate deformations of well-characterized materials (such
as metals) are generally easy to solve using finite-element technology, and static or time-dependent
simulations of these mechanical systems are routinely performed in everyday mechanical engineering
practice. More complex problems (such as computational fluid mechanics, coupled problems, or elec-
tromagnetic simulations) can readily be solved using finite-element techniques, though the accuracy of
the solution in these areas is often limited, not by the finite-element technology available, but by
difficulties in determining accurate characterizations for the materials used. For example, in many
biomechanical problems, the model for simulating the stress-strain response of the biological material
may contain uncertainties on the order of many percent; in cases such as these, it is not the quality of
the finite-element application that limits the accuracy of the solution, but instead the inherent uncertainty
in dealing with realistic biological materials.
There are still many practical limitations to finite-element modeling in engineering practice. Some of
these problems arise from inherent difficulties in developing generic finite-element computer programs,
while others arise from the same generality that makes finite-element models so attractive as computa-
tional approximations. Examples of the former include computer programs that do not contain completely
robust modern finite-element models for plates and shells, while examples of the latter include finite-
element applications that incorporate the latest computational technology but are so difficult to use that
inaccuracies result from gross errors in preparing input data. Finally, many practical limitations of finite-
element modeling are due to theoretical difficulties inherent in the mathematical formulations of mechan-
ics, and these last pathologies are the most difficult to remove (as well as being problematic for any
computational scheme and not therefore particular to finite-element techniques).
Classification of Finite-Element Models
There are many classifications possible in computational mechanics, and so choosing an appropriate
classification scheme for any general method such as finite-element modeling is necessarily an ambiguous
task. The most common classification scheme for categorizing finite-element models is based on con-
sideration of the physical meaning of the finite-element solution, and a simplified version of this scheme
can be described by the following list:
If kinematical solution unknowns (such as displacement or velocity) are determined first, then
the model is termed a displacement-based finite-element model.
If statical finite-element solution parameters are the primary unknowns determined during the
computational solution process, then the model is termed a stress-based finite-element model.
If a combination of kinematical and statical solution unknowns are determined simultaneously,
then the computational scheme is called a mixed finite-element model.
Displacement-based finite-element models are the most common schemes used at present, and most
commercial-quality finite-element applications contain extensive element libraries consisting primarily
of displacement-based families. In a displacement-based model, the analyst creates a finite-element mesh
by subdividing the region of interest into nodes and elements, and the program calculates the solution
in terms of nodal displacements (or velocities, in the case of flow problems). Once the displacements
have been determined, the unknowns in the secondary solution (such as stress and strain) are derived
from the finite-element displacement field by using appropriate compatibility and stress-strain relations.
Some common two-dimensional finite elements used for displacement-based formulations are catalogued
in Figure 15.3.2. The elements shown are successfully used for common problems in stress analysis,
simple fluid flow, and thermal analyses, and their generalization to three-dimensional problems is
straightforward and computationally efficient.

FIGURE 15.3.2 Common two-dimensional finite elements: three-node linear, six-node quadratic, and ten-node cubic triangles; four-node bilinear Lagrange, eight-node serendipity, and nine-node biquadratic Lagrange quadrilaterals.
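
As an illustration of the post-processing step described above (recovering strains and stresses from computed nodal displacements), the listing below carries out this calculation for a single three-node, constant-strain triangle under plane stress. All of the numerical data in the listing (coordinates, displacements, and elastic constants) are illustrative, and commercial finite-element programs organize these computations quite differently; the sketch is intended only to make the use of the compatibility and stress-strain relations concrete.

      PROGRAM CSTPST
C     Post-processing sketch for a three-node (constant-strain)
C     triangle under plane stress: recover element strain and stress
C     from assumed nodal coordinates, displacements, and elastic
C     constants.  All data below are illustrative.
      REAL X(3), Y(3), U(3), V(3)
      REAL B(3), C(3), EPS(3), SIG(3)
      REAL E, NU, AREA2, D11, D12, D33
      INTEGER I
C     Assumed nodal coordinates (a unit right triangle)
      DATA X / 0.0, 1.0, 0.0 /
      DATA Y / 0.0, 0.0, 1.0 /
C     Assumed computed nodal displacements (u horizontal, v vertical)
      DATA U / 0.0, 1.0E-3, 0.0 /
      DATA V / 0.0, 0.0, -0.3E-3 /
C     Assumed elastic constants
      E  = 30.0E6
      NU = 0.3
C     Shape function derivatives: N,x = B(i)/(2A), N,y = C(i)/(2A)
      B(1) = Y(2) - Y(3)
      B(2) = Y(3) - Y(1)
      B(3) = Y(1) - Y(2)
      C(1) = X(3) - X(2)
      C(2) = X(1) - X(3)
      C(3) = X(2) - X(1)
C     Twice the element area
      AREA2 = X(2)*Y(3) - X(3)*Y(2) + X(3)*Y(1) - X(1)*Y(3)
     &      + X(1)*Y(2) - X(2)*Y(1)
C     Compatibility (strain-displacement) relations:
C     EPS(1) = eps-x, EPS(2) = eps-y, EPS(3) = gamma-xy
      EPS(1) = 0.0
      EPS(2) = 0.0
      EPS(3) = 0.0
      DO 10 I = 1, 3
         EPS(1) = EPS(1) + B(I)*U(I)/AREA2
         EPS(2) = EPS(2) + C(I)*V(I)/AREA2
         EPS(3) = EPS(3) + (C(I)*U(I) + B(I)*V(I))/AREA2
   10 CONTINUE
C     Plane-stress stress-strain relations
      D11 = E/(1.0 - NU*NU)
      D12 = NU*D11
      D33 = 0.5*E/(1.0 + NU)
      SIG(1) = D11*EPS(1) + D12*EPS(2)
      SIG(2) = D12*EPS(1) + D11*EPS(2)
      SIG(3) = D33*EPS(3)
C     For these data the result is uniaxial: SIG(1) = 30,000 while
C     SIG(2) and SIG(3) are zero to within round-off.
      WRITE (*,*) 'STRAIN (XX, YY, XY): ', EPS
      WRITE (*,*) 'STRESS (XX, YY, XY): ', SIG
      END

The recovered strain (and therefore stress) is constant over the element, which is why this element is called the constant-strain triangle; higher-order elements typically evaluate the same relations at interior sampling points instead.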
Stress-based finite-element schemes are much more rare than displacement models, although stress-
based finite elements have some attractive theoretical qualities (such as more direct calculation of stresses
and strains, which are the most important currency in practical stress analysis). Many practical stress-
based finite-element methods are recast into a form involving determination of nodal approximations
for displacement, so that these alternative formulations can be easily integrated into existing displace-
ment-based finite-element programs.
Mixed finite-element schemes commonly arise in problems involving constraints, such as electromag-
netics, incompressible deformation or flow, and (in some cases) solid mechanics problems characterized
by substantial bending deformation. When using a mixed computational scheme, considerable care must
be taken to insure that the resulting approximation is convergent: these subtle mathematical solvability
conditions are best handled in routine practice by insuring that the program used automatically
applies appropriate consistency conditions (which are commonly presented in the literature for each
given field of practice). In short, the best ways to insure that reasonable results are obtained from mixed
models are to:
Know which problems commonly result in mixed formulations.
Use finite-element computer programs that have already been tested and verified for use on the
class of problems of interest.
Take advantage of auxiliary software (such as graphical visualization applications) to verify the
input and output data for all problems.
Note that the potential convergence problems that haunt some mixed formulations in finite-element
modeling are generally caused by the underlying mathematical model used to represent the engineering
system. In these cases, any computational method used may exhibit solution pathologies, and, in fact,
the finite-element model (while still containing the pathology) is often the best approach for identifying
and rectifying the underlying mathematical problem (this result follows from the fact that the integral
framework underlying finite-element approximation permits a more general study of convergence than
do competing differential frameworks utilized in finite-difference and finite-volume models).
Finite-element formulations can also be classified into two particular flavors of development: Rayleigh-
Ritz and Galerkin. Rayleigh-Ritz finite-element models are developed by applying theorems of minimum
potential energy (or some similar integral extremization measure). Galerkin models arise from so-called
weak statements of boundary-value problems and are simultaneously more complicated to derive and
substantially more general in scope. For problems (such as standard linear stress-analysis problems)
where both minimum and equivalent weak statements of the problem exist, the resulting Rayleigh-Ritz
and Galerkin finite-element models will yield identical results. For problems where no extremization
principle exists (such as high-Reynolds number fluid flow), Galerkin finite-element models (and their
relatives) can still be used in practice.
Computational Characteristics of Finite-Element Models
Finite-element models are widely used in mechanical engineering precisely because of their computa-
tional efficiency. A brief enumeration of the computational characteristics of finite-element models
highlights many reasons why they form such an effective family of computational approximation
techniques.
Finite-element models tend to result in computational algorithms that are readily understood,
easily implemented, and exhibit well-known computational advantages. For example, finite-
element equation sets tend to be sparse (i.e., to have a relatively small percentage of nonzero
terms in the coefficient matrix, with these nonzero entries in a well-defined pattern that can be
utilized to achieve efficient computational implementation of required matrix operations) and
preserve physical characteristics (such as symmetry and structural stability) that are readily
exploited by common matrix algebra schemes.
Finite-element models are readily shown to possess optimal accuracy characteristics for a sub-
stantial portion of problems commonly encountered in engineering practice. For example, for
most classes of practical stress analysis, finite-element solution schemes can be shown to yield
the most accurate results (measured in terms of minimizing the energy of the error) of any standard
numerical approximation scheme, including competing finite-difference and finite-volume meth-
ods. In contrast to these latter discretization schemes, finite-element optimal convergence rates
are preserved even in the important case of irregular or unstructured meshes. The practical effect
of this optimality is that there is often little or no incentive to use any other technique for solution
of many computational mechanics problems encountered in the mechanical engineering profes-
sion.
Many finite-element calculations are inherently scalable, in that they can be readily implemented
in a manner amenable to parallel computer architectures. The practical result of this scalability
property is that current advances in multiprocessor computer architectures are easily leveraged
to gain substantial improvements in finite-element program performance with little additional
effort required by either users or programmers.
Finite-element results are defined everywhere in the problem domain, and these results include
derived parameters such as stress and strain. Because these solution components are defined
everywhere, it is very easy to apply additional computer technology (such as integrated graphical
postprocessors) to verify and interpret the results of the finite-element analysis. It is commonly
(and incorrectly!) believed that all numerical methods provide computational solutions only at a
finite number of points in the domain, but this observation is erroneous in the case of finite-
element models. The global definition of the finite-element solution is of considerable utility
toward the important goal of evaluating the convergence of finite-element models. In addition,
the derived secondary solution parameters are also convergent, so that the optimal accuracy
characteristics of finite-element models generally apply to both primary (e.g., displacement) and
secondary (e.g., stress) solution unknowns. This latter advantage is much harder to prove for
many competing numerical methods such as finite-difference and finite-volume schemes.
It is relatively easy to guarantee the convergence of common finite-element solution schemes,
and this characteristic, when combined with the mathematical optimality of the finite-element
solution, facilitates trust in the computed finite-element results. The convergence properties of
finite-element models are cast in an integral (as opposed to a differential) form, which results in
their applicability toward a wide spectrum of engineering problems that contain mathematical
pathologies (such as heterogeneous materials, point loads, and complex boundary conditions) that
compromise the convergence statements for models (such as finite-difference schemes) that would
otherwise be competitive with finite-element models in terms of computational effort.
For some problems, including those involving substantial convective nonlinearities due to high-
speed fluid flow, the optimality characteristics of standard finite-element models can no longer
be guaranteed, and an alternative integral convergence formulation must be sought. In the setting
of finite-element models for flow, this convergence framework is termed "weak" convergence. It is
important to understand that "weak" in this sense is not a pejorative term, but instead refers to
the topological structure of the solution space for these flow problems. In fact, weak formulations
for problems in science and engineering are useful representations for many practical cases; for
example, in the study of statics, the principle of virtual work is a weak formulation of the equations
of equilibrium of a rigid body. The weak integral convergence framework for Galerkin finite-
element models is an attractive alternative to analogous (but generally more complex) convergence
results used in finite-difference modeling for fluid problems.
In simple terms, finite-element models are very attractive candidates for computational approximation
efforts. Under very general conditions, they can be demonstrated to be extremely accurate, computa-
tionally efficient, and amenable to improvements in computer hardware and software technology. Because
of these computational advantages, finite-element models have become popular in virtually all branches
of computational mechanics.
Advantages and Disadvantages of Finite-Element Models
Like all computer approximation schemes, finite-element modeling has characteristic advantages and
disadvantages, and the effective use of finite-element technology requires at least an introductory
understanding of each. Besides the computational advantages of efficiency and optimality presented
above, most finite-element schemes include other practical advantages. Foremost among the advantages
are the characteristics of generality of function and flexibility of modeling. The primary disadvantages
occur when naive finite-element implementations are applied to mathematically ill-behaved problems.
Finite-element models can be cast into general computer applications that permit a tremendous variety
of problems to be solved within the scope of a single program. Some commercial-quality finite-element
programs permit the analysis of static and time-dependent problems in solid mechanics, fluid mechanics,
and heat conduction within one monolithic computer application. In contrast, alternative computer
analysis techniques often place substantial constraints on the range of problems that can be solved, and
these limitations require the practicing engineer to be familiar with a variety of different software
packages instead of an inclusive general analysis application.
Most finite-element packages are very permissive in the modeling of complexities that regularly occur
in engineering practice. These complicating factors include material discontinuities (which are common
in composite applications), material anisotropies (implementing more general material response within
typical finite-element applications is generally a simple programming task) and complex boundary
conditions (such as combinations of traction and displacement on boundaries not readily aligned with
the coordinate directions). Because most commercial finite-element applications have very general
features, a common source of analysis errors arises from gross mistakes in creating input data (such as
specifying an incorrect material model or boundary condition). One of the easiest ways to
identify gross modeling errors is to make use of solution verification tools (such as integrated graphical
pre- and postprocessors) whenever possible. Virtually all high-quality commercial finite-element pack-
ages incorporate integrated visualization tools, and utilizing these tools to verify input data and output
results is always a good idea.
There are also more complicated problems that can arise in the course of finite-element modeling:
the most common pathologies arise from analyses incorporating mixed finite-element models. Guaran-
teeing the convergence of mixed finite-element models is a much more difficult mathematical problem
than the analogous proof of convergence for displacement-based models, and it is possible to get
unreliable results from naive use of mixed finite-element models. While most modern commercial finite-
element programs will largely guard against the possibility of constructing inherently poor finite-element
approximations, there are older or less robust finite-element applications available that do not incorporate
completely dependable finite-element technologies. Therefore, it is important to identify potential prob-
lem areas for finite-element modeling. The most common sources of difficulty are presented below.
Incompressible Problems. Problems where volume is preserved commonly occur in mechanical engi-
neering practice, but errors arising from inaccurate finite-element modeling techniques are often readily
identified and corrected. Incompressible solid elastic materials include rubber and solid propellants, and
models for inelastic response of metals also require incompressible or near-incompressible plastic
behavior. The most common artifact of poor modeling of incompressibility is known as mesh locking,
and this aptly named phenomenon is characterized by grossly underestimated displacements. The easiest
way to test for potential incompressibility problems is to solve a suite of simple case studies using small
numbers of elements, as shown in Figure 15.3.3. If the finite-element mesh locks in these test examples,
then either different elements or a different finite-element application should be considered, as adding
more elements to the mesh does not generally improve the results in this case. It is easy to accommodate
incompressibility in the setting of standard finite-element libraries. Figure 15.3.4 shows several two-
dimensional elements that are accurate when used in an incompressible elasticity setting. Modern
textbooks on finite-element theory (e.g., Hughes, 1987, or Zienkiewicz, 1991) generally provide guidelines on choice of element for incompressible problems, and the user's manuals for many commercial finite-element packages offer similar preventative advice.

FIGURE 15.3.3 Test problem suite for incompressibility: a loaded block of incompressible material with fixed edges, analyzed using small quadrilateral test meshes and triangle test meshes (the triangle diagonal can be flipped).

FIGURE 15.3.4 Incompressible elasticity finite elements, showing displacement and pressure interpolations.
Bending Deformation. Problems involving beams, plates, and shells may exhibit solution pathologies
similar to those found in incompressible elasticity. In these problems, one-dimensional beam elements
or two-dimensional plate elements are loaded in directions normal to the element's reference surface,
and naive coding practices for these cases may result in locking of the finite-element mesh, with attendant
underprediction of the displacement field. Depending upon the characteristics of the particular element
and the stability of the geometric boundary constraints, mesh locking errors may instead be replaced by
wildly unstable solution response, with the displacement field oscillating uncontrollably over adjacent
elements. In practice, it is very easy to avoid both of these pathologies by incorporating simple additional
programming effort, so this problem is rarely encountered when commercial finite-element packages
are used. It may be important when the finite-element application is a local or unsupported product, and
in these cases, appropriate care should be taken to verify the solution. If graphical solution visualization tools are available, they should be used in these cases. If such tools are not available, then the computed results should be compared to simplified problems that can be solved analytically using engineering beam theory analogies or closed-form plate models. One simple test case for naive finite-element coding practices is shown in Figure 15.3.5 (see Hughes, 1987); if this simple (but marginally stable) structure is analyzed and gives excessively large or oscillating displacements, then a better finite-element model (or application) should be sought.

FIGURE 15.3.5 Single-element flexure test problems: a test load applied to a one-dimensional frame element, a two-dimensional plane stress element, and a three-dimensional elasticity element.
Lack of Curvature Deformation in Low-Order Elements. Low-order linear, bilinear, and trilinear ele-
ments (such as four-node bilinear quadrilaterals used in plane stress and plane strain applications) are
commonly utilized in finite-element packages because they are easy to implement and because many
software tools exist for creating two- and three-dimensional meshes for such low-order elements. Unfor-
tunately, there are common circumstances when these low-order elements give inaccurate and unconser-
vative results. One such example is shown in Figure 15.3.5. Many simple finite-element packages will
provide accurate answers for the one-dimensional problem, but will substantially underpredict the two-
and three-dimensional displacements. Unlike the mesh-locking pathology encountered in incompressible
cases, the underprediction of displacements from low-order elasticity elements eventually disappears as
more elements are used when the mesh is refined. It is easy to remedy this problem in general by choosing
appropriate continuum elements, and many commercial codes provide appropriate low-order elements
automatically. Problems due to poorly implemented low-order elements are best avoided by solving
simple one-element models first to determine whether the particular element to be used exhibits these
errors, and if this is found to be the case, the analyst can either choose a different type of element (or a
better finite-element package!) or take pains to insure that the mesh is appropriately refined.
It is important to recognize that many of the problems that arise when using finite-element models
for these more difficult cases are not completely due to the particular finite-element model used, but
may arise from pathologies in the underlying mathematical problem statement. In these cases, any
computational approximation scheme may be susceptible to errors, and so care should be used in
evaluating any computational results.
Test Problem Suites for Finite-Element Software
It is easy to avoid most errors in finite-element analysis, by first insuring that no mathematical pathologies
are present in the problem or in the elements to be utilized and then by guarding against gross modeling
errors in preparation of the input data. The best way to guard against both forms of errors is to use a
suite of simpler problems as a warm-up to the more complex problems that must be solved. The
general idea here is to solve a series of problems of increasing complexity until the final member of the
series (namely, the complex problem to be modeled accurately) is obtained. The simpler members of
the suite of problems generally involve simplifications in geometry, in material response, and in scale
of deformation. For example, if a complicated problem involving large-deformation behavior of an
inelastic material is to be analyzed, simpler problems using small-deformation theories, elastic material
models, and simplified problem geometry should be solved and carefully verified. These simpler problems
can then be used to verify the response of the complex problem, and this approach is of considerable
help in avoiding spectacular engineering blunders.
Another role of a problem suite arises when different competing commercial finite-element packages
are to be evaluated. Most commercial finite-element packages are specifically oriented toward different
engineering markets, and their respective feature sets depend upon the needs of those particular engi-
neering disciplines. When evaluating commercial finite-element software for potential purchase, a wide
selection of representative problems should be analyzed and verified, and two excellent sources of test
problems are the pathological examples of the last section (which will provide insight into the overall
quality of the commercial finite-element application) and any suite of test cases used to verify more
complex problems with a competitive program (which will provide information on how well the proposed
application provides support for the particular class of problem encountered in local practice). Problems
encountered with either type of test problems should be appropriately investigated before purchase of
new finite-element software.

Finite-Difference Approximations in Mechanics


Finite-difference models include many of the first successful examples of large-scale computer simulation
for mechanical engineering systems. Finite-difference modeling techniques were originally developed
for human calculation efforts, and the history of finite-difference calculations is considerably longer than
the history of finite-element modeling. Once automatic computation devices replaced humans for low-
level computing tasks, many of the existing finite-difference schemes were automated, and these simple
numerical techniques opened up much of engineering analysis to computer simulation methods. Over
time, improved finite-difference approximations (including ones that would be prohibitively difficult for
human calculation) resulted, and the range of problems solvable by finite-difference models became
more diverse.
Throughout the 1970s and 1980s, a natural division developed between finite-difference and finite-
element models in computational mechanics. Finite-difference models were utilized primarily for a wide
variety of flow problems, and finite-element approximations were used for modeling the mechanical
response of solids. While many finite-element approximations were proposed for fluid problems such
as the Navier-Stokes equations, in practice these first-generation finite-element models generally per-
formed poorly relative to more advanced finite-difference schemes. The poor performance of naive finite-
element models relative to carefully optimized finite-difference approximations was widely (but incor-
rectly) accepted in the mechanical engineering profession as evidence that fluid problems were best
solved by finite-difference methods, even as finite-element technology gained near-exclusive use in such
fields as stress analysis.
In fact, the problems associated with early-generation finite-element models in fluids were due
primarily to rampant overgeneralization of insight gained in solid mechanics instead of any inherent
limitations of finite-element technology applied to fluid mechanics problems. Similar problems developed
in the early stages of finite-difference modeling; for example, one of the first finite-difference references
published by Richardson used theoretically attractive but practically worthless difference schemes and
resulted in a completely unusable algorithm (see Richardson, 1910 or Ames, 1977 for details of this
interesting historical development). During the 1980s and 1990s, a better understanding emerged of how
to utilize finite-element technology on the relatively more complicated models for fluid behavior, and
there has been an explosion of interest in using finite-element approximations to replace finite-difference
schemes in a variety of fluid mechanics settings. While finite-difference methods (and especially carefully
optimized applications found in certain difficult and narrow specialties in fluid modeling) continue to
be widely used in fluid mechanics and electromagnetic simulations, the range of mechanics problems
thought to be resistant to finite-element solution approaches is rapidly diminishing.
Finally, a word regarding nomenclature is warranted. Originally, finite-difference models were devel-
oped using regular grids of equally spaced points, with the solution approximation computed at the
vertices of this structured lattice. As increasingly complicated problems were solved with finite-difference
technology, it became desirable to be able to model more complex problems involving variable material
properties, singular data (such as point sources), and irregular geometries. In order to permit these more
general problems to be solved, finite-difference modeling was generalized to include various averaged
(i.e., integrated) schemes that augmented classical vertex-oriented finite-difference methods to include
cell-centered schemes where appropriate integral averaging of the underlying differential equation is
utilized to permit modeling discontinuous or singular data. Ultimately, these various improvements led
to the development of finite-volume schemes, which are a general family of integrated finite-difference
approximations that preserve the relative simplicity of finite-difference equations, while casting the
underlying theory in a form more amenable to modeling complicated problems. Finite-volume methods
are discussed in a later section of this chapter, but the general characteristics given below apply to the
entire finite-difference/finite volume family of approximate solution techniques. Because of the wide-
spread application of these integral generalizations of classical vertex-based finite-difference schemes,
it has become commonplace to use the term finite-difference methods collectively to include all the
members of this computational family, including vertex-centered schemes, cell-averaged methods, and
finite-volume approximations.
Overview of Finite-Difference Models
Finite-difference methods arise from the combination of two components:
Differential equations of mathematical physics
Replacement of differential operators with discrete difference approximations
Differential statements of the equations of mathematical physics are well known, as most of engi-
neering and physics instruction is provided in this form. For example, the familiar relation f = m a
is a vector differential equation relating the components of the applied force to the differential temporal
rate of change of momentum. The analogous integral statement of this fundamental relation (which is
used to motivate finite-element modeling instead of finite-difference approximation) is probably more
useful in computational practice, but it is definitely less well known than its differential brethren. Similar
instances occur in every branch of mechanics, as the differential equations for engineering analysis are
generally introduced first, with their equivalent integral statement presented later, or not at all. Because
of this widespread knowledge of the differential form of standard equations for modeling physical
systems, the supply of differential equations that can be converted to analogous finite-difference schemes
is essentially inexhaustible.
The computational component of finite-difference modeling arises from the replacement of the con-
tinuous differential operators with approximations based on discrete difference relations. The gridwork
of points where the discrete difference approximation is calculated is termed the finite-difference grid,
and the collection of individual grid points used to approximate the differential operator is called (among
other terms) the finite-difference molecule. Sample finite-difference molecules for common differential
operators are diagrammed in Figure 15.3.6. One of the most obvious characteristics of the molecules
represented in Figure 15.3.6 is that the discrete set of points where the finite-difference approximation
is computed is very structured, with each grid point occurring in a regularly spaced rectangular lattice
of solution values.
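
To make the use of a difference molecule concrete, the sketch below applies the familiar five-point Laplacian molecule in a Jacobi iteration for Laplace's equation (steady heat conduction without internal sources) on a uniform square grid; the grid size, boundary values, and fixed iteration count are illustrative choices rather than recommendations.

      PROGRAM LAPL5
C     Illustrative use of the five-point Laplacian molecule: Jacobi
C     iteration for Laplace's equation (steady heat conduction with
C     no sources) on a uniform N x N grid.  The grid size, boundary
C     data, and iteration count below are illustrative choices.
      INTEGER N
      PARAMETER (N = 21)
      REAL U(N,N), UNEW(N,N)
      INTEGER I, J, ITER
C     Initialize the grid: u = 0 everywhere, u = 1 on the top edge.
      DO 20 J = 1, N
         DO 10 I = 1, N
            U(I,J) = 0.0
   10    CONTINUE
   20 CONTINUE
      DO 30 I = 1, N
         U(I,N) = 1.0
   30 CONTINUE
C     Jacobi sweeps: each interior value is replaced by the average
C     of its four neighbors (the five-point Laplacian molecule
C     applied to u and set equal to zero).
      DO 70 ITER = 1, 500
         DO 50 J = 2, N-1
            DO 40 I = 2, N-1
               UNEW(I,J) = 0.25*(U(I+1,J) + U(I-1,J)
     &                         + U(I,J+1) + U(I,J-1))
   40       CONTINUE
   50    CONTINUE
         DO 60 J = 2, N-1
            DO 55 I = 2, N-1
               U(I,J) = UNEW(I,J)
   55       CONTINUE
   60    CONTINUE
   70 CONTINUE
      WRITE (*,*) 'CENTER TEMPERATURE: ', U((N+1)/2, (N+1)/2)
      END

A production finite-difference code would replace this simple iteration with a more efficient solver, but the role of the molecule, relating each grid value to its immediate neighbors, is unchanged.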
The fundamental reason for this regular spacing has to do with necessary conditions for optimal
convergence rates for the finite-difference solution. This regular spacing of points provides simulta-
neously the greatest incentive and the biggest constraint toward practical use of finite-difference methods.
Performing finite-difference calculations on a regular grid provides desirable fast convergence rates on
one hand, but the regularity in the placement of solution sampling points often forces the analyst into
a fixed grid structure that is difficult to adapt to nonregular domains without sacrificing optimal rates of
convergence. For some problems, it is possible to perform coordinate transformations of the entire
physical (i.e., problem) domain onto a regularized computational domain in order to gain simultaneous
high convergence rate and geometric modeling flexibility. Unfortunately, these requisite coordinate
transformations may be difficult to determine for sufficiently complicated problem geometries. This limitation should be contrasted with the geometric flexibility inherent in analogous finite-element models, where optimal rates of convergence are readily realized even in the presence of irregular or unstructured finite-element meshes.

FIGURE 15.3.6 Sample finite-difference molecules: two Laplacian molecules (with quadratic and quartic convergence, respectively) and a biharmonic molecule (with quadratic convergence).
A geometric motivation for the improved performance of regularly spaced finite-difference molecules
can be seen in Figure 15.3.7. Here a representative function has been drawn, along with its tangent
derivative and two estimates of the derivative's slope given by standard secant formulas from elementary
calculus. In particular, the forward- and centered-difference approximations for the slope of the tangent
line can be visualized by passing lines through the points [x, f(x)] and [x + h, f(x + h)] for the forward-
difference approximation, and [x - h, f(x - h)] and [x + h, f(x + h)] for the centered-difference estimate.
(Note that it is also possible to define a backward-difference slope estimate by considering the secant
line passing through the function at the points x and (x - h). In problems involving spatial derivatives,
this backward-difference estimate has numerical properties that are generally indistinguishable from the
forward-difference approximation, but in the setting of time-dependent derivatives, there are considerable
differences between forward- and backward-difference schemes.)

FIGURE 15.3.7 Finite-difference formulas and geometry: the tangent slope $f'(x) = df/dx$ is compared with the forward-difference secant estimate $[f(x+h) - f(x)]/h$ and the centered-difference secant estimate $[f(x+h) - f(x-h)]/(2h)$, constructed from the sample points $x - h$, $x$, and $x + h$.

It is apparent from the figure that the centered-difference approximation for the slope is substantially
better than that obtained from the forward difference. While it may appear from this particular example
that the improved quality of the centered-difference estimate is due to the particular shape of the function
used in this case, in fact these results are very general and represent a common observation encountered
in finite-difference practice: centered-difference formulas are generally optimal in providing the highest
rate of convergence relative to the number of points required to evaluate the difference approximation.
While this observation can be motivated from a variety of viewpoints (the most common example for
purpose of development of finite-difference methods is the use of Taylor Series expansions), it is
particularly instructive to perform some computational experimentation regarding this optimality of
centered-difference models.
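
For a function f with enough continuous derivatives, that Taylor-series argument can be summarized as

$$
f(x \pm h) \;=\; f(x) \;\pm\; h\,f'(x) \;+\; \frac{h^2}{2}\,f''(x) \;\pm\; \frac{h^3}{6}\,f'''(x) \;+\; \cdots
$$

so that

$$
\frac{f(x+h) - f(x)}{h} \;=\; f'(x) + \frac{h}{2}\,f''(x) + \cdots
\qquad\text{while}\qquad
\frac{f(x+h) - f(x-h)}{2h} \;=\; f'(x) + \frac{h^2}{6}\,f'''(x) + \cdots
$$

The leading error of the forward difference is therefore proportional to h (linear convergence), while that of the centered difference is proportional to h^2 (quadratic convergence), in agreement with the slopes observed in the experiment that follows.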
Consider the following simple FORTRAN program:
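(A representative listing is given below; the loop count, the factor by which h is reduced, and the output statement are illustrative details.)

      PROGRAM FDEXP
C     Forward-difference estimate of the derivative of exp(x) at
C     x = 0, for a sequence of decreasing step sizes h.  The exact
C     derivative is exp(0) = 1.  Plot log10 of the error against
C     log10 of h to obtain curves like those of Figure 15.3.8.
      REAL H, SLOPE, ERR
      INTEGER I
      H = 1.0
      DO 10 I = 1, 21
         SLOPE = (EXP(H) - EXP(0.0))/H
         ERR   = ABS(SLOPE - 1.0)
         WRITE (*,*) H, ERR
         H = H/10.0
   10 CONTINUE
      END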

This program evaluates forward difference estimates for the derivative of the exponential function at
the point x = 0. The program is easily modified to provide a centered-difference approximation by adding
an evaluation of exp(x - h). The results of running these two computational experiments are diagrammed
in Figure 15.3.8, where forward- and centered-difference results are graphed for two different precisions
of FORTRAN REAL declarations (32-bit, which represents a lower limit of quality on floating-point
numeric representation, and 80-bit, which represents a high-quality floating-point format). Important
features presented in these two graphs are
In each graph there is a region at the right where the error in replacing the differential operator
with a difference approximation decreases with a characteristic linear behavior on the log-log
plots. In this region, the difference estimates are converging to the exact value of tangent slope,
as might be expected given the definition of the derivative as the limit of a secant.
In these linear regions, the centered-difference error decreases with a slope of two, while the
forward-difference error decreases with unit slope. This implies that the forward-difference
approximation has a linear rate of convergence, while the centered-difference estimate is con-
verging quadratically.
The size of these convergent regions depends upon the precision used for floating-point calcula-
tions. Extended-precision formats (such as 80-bit reals) provide both a larger region of conver-
gence and improved accuracy (obtained by being able to use smaller values of the step size h).
To the left (smaller values of the step size h) of these linear regions, the convergent behavior of
each estimate breaks down: upon passing through the minimum of each curve (which represents
the most accurate estimate that can be obtained for a given precision by using these difference
molecules), the effect of decreasing step size is not to decrease the error, but instead to ultimately
degrade the approximation quality.
Eventually, the numerator of each difference formula underflows (vanishes due to floating-point
round-off) as the values of f(x), f(x + h), and f(x - h) become indistinguishable in terms of their
respective floating-point representations. This implies that the estimate of the slope is computed
as zero, and thus the error eventually tends to unity, which each curve approaches at the left edge
[recall that exp(0) = 1].
This numerical experiment clearly shows the interplay of convergence rate, numerical precision, and
truncation error. The centered-difference formula converges much more rapidly than the forward-differ-
ence formula, and there exists an optimal step size for the various difference calculations where increasing
the step size results in larger errors (in agreement with the rate of convergence of the approximation),
and where decreasing the step size results in increased errors (due to the effects of round-off and
truncation errors in the calculation of the derivative estimate).

FIGURE 15.3.8 Results of finite-difference numerical experiment: log10 of the absolute error versus log10 of the step size h for the forward-difference estimate (convergence region with slope 1) and the centered-difference estimate (convergence region with slope 2), each computed using 32-bit and 80-bit reals.
Classification of Finite-Difference Models
As in the case of finite-element models, there are many alternative classifications available for catego-
rizing finite-difference approximations. The first relevant classification is a geometric one, based on
whether the finite-difference grid is structured (i.e., possesses a regularly spaced lattice structure) or
unstructured. In order to maintain optimal convergence rates, finite-difference models must use regular
spacings of grid points. This regularity is easy to maintain when the problem geometry is simple, but
if the natural geometry is complicated, then the regularity of the difference grid will be compromised,
with the potential for an attendant loss of convergence rate. In cases of irregular geometry, unstructured
meshes are used, and if optimal convergence rates are to be realized, then a coordinate mapping from
the actual problem geometry to a regularized computational domain must be constructed.
The problem with using structured grids on irregular or curved domains is diagrammed in Figure
15.3.9. Here a curved boundary is approximated using a low-order linear finite-element mesh (on the
left of the figure) and a corresponding finite-difference/finite volume scheme (on the right). In order to
capture the effect of boundary curvature, the structured finite-difference/finite-volume representation
must approximate the boundary as a stepped pattern of rectangular subdomains. If the curvature is
substantial, or if there is a considerable effect of regularizing the boundary on the physical model (a
common situation in fluid and electromagnetic applications), this imposition of structure on what is
really an unstructured physical problem can seriously compromise the quality of the resulting computational approximation. Finite-element models, in which the presence of unstructured geometries does not substantially affect the solution quality, are thus more natural candidates for problems involving irregular boundaries or poorly structured problem domains. In fact, it is easy using higher-order finite-element approximations (e.g., locally quadratic or cubic interpolants) to approximate the boundary curvature shown in the figure to an essentially exact degree.

FIGURE 15.3.9 Finite-difference modeling of irregular geometry: a curved boundary represented by an unstructured finite-element subdomain and by a stepped finite-difference/finite-volume subdomain.
Another classification scheme can be based upon whether the difference formulas used are forward,
backward, or centered. This classification is especially appropriate for finite-difference time-stepping
methods used to solve initial-value problems in engineering and science. In this time-dependent setting,
there is a coalescence of finite-element and finite-difference approaches, and the classification presented
below permits one unified terminology to be used for both topics. Each class of difference estimate leads
to time-marching discretization schemes that have distinctly different characteristics.
Forward difference schemes are commonly used to produce time-stepping methods that are very
efficient in terms of computational effort and memory demand. The important class of explicit time-
stepping algorithms arises largely from using forward-difference approaches: explicit schemes are defined
as those requiring no equation solution effort to advance the solution to the next time step (e.g., where
the coefficient matrix to be solved is diagonal). The primary disadvantage of forward-difference time-
stepping is that these methods are generally only conditionally stable, so that they will converge only
if the time step is relatively small. The use of explicit schemes thus involves a trade-off between reduced
computational effort at each step, and the attendant requirement of a small time step. Explicit schemes
based on forward-difference estimates for temporal derivatives are usually selected when the increased
number of steps required is overshadowed by the decrease in the amount of effort required at each step.
In some very large problems (such as complex three-dimensional models), explicit schemes are often
used exclusively because of their relatively small demand for memory.
Backward difference schemes are commonly used in time-stepping applications where stability of the
solution is especially important. In particular, backward-difference schemes generally provide numerical
dissipation of high-frequency response, and this characteristic is often desirable in engineering practice.
Many transient problems exhibit a wide range of natural time scales; for example, structural dynamics
problems in aerospace engineering commonly include long-period vibrations of the entire structure as
well as short-period response of individual structural components. One of the characteristics of discret-
ization methods, in general, is that they tend to predict low-frequency behavior very accurately, while
high-frequency response is predicted with considerably less accuracy. Since the high-frequency response
is therefore often inaccurate (or spurious), there is a strong incentive to attenuate its effects, and choosing
backward-difference temporal schemes is a good way to achieve this end. Backward-difference formu-
lations generally require solution of a system of equations to advance the solution across a time step,
and hence they are termed implicit time-stepping methods.
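
The practical trade-off between the two approaches can be seen on the scalar model problem du/dt = -λu with u(0) = 1, whose exact solution decays as exp(-λt); in the sketch below, the decay rate and the deliberately large time step are illustrative values chosen to expose the conditional stability of the explicit update.

      PROGRAM STPDMO
C     Forward (explicit) versus backward (implicit) Euler time
C     stepping for the scalar model problem du/dt = -LAMBDA*u with
C     u(0) = 1; the exact solution is exp(-LAMBDA*t).  The decay
C     rate, time step, and number of steps are illustrative.
      REAL LAMBDA, DT, T, UEXP, UIMP
      INTEGER N
      LAMBDA = 10.0
      DT     = 0.25
      T      = 0.0
      UEXP   = 1.0
      UIMP   = 1.0
      DO 10 N = 1, 20
         T = T + DT
C        Explicit (forward-difference) update: no equation solving
C        is needed, but the update is only conditionally stable;
C        here LAMBDA*DT = 2.5 exceeds the stability limit, so this
C        solution oscillates and grows.
         UEXP = (1.0 - LAMBDA*DT)*UEXP
C        Implicit (backward-difference) update: the new value must
C        be solved for (a trivial division for this scalar problem);
C        the result decays for any positive step size.
         UIMP = UIMP/(1.0 + LAMBDA*DT)
         WRITE (*,*) T, UEXP, UIMP, EXP(-LAMBDA*T)
   10 CONTINUE
      END

With this step size the explicit solution oscillates and grows, while the implicit solution (whose update here requires only a scalar division in place of a full equation solution) decays for any positive step; reducing the time step sufficiently restores the stability of the explicit scheme.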
Centered-difference approximations are generally chosen in order to gain optimal convergence rates
for time-stepping methods, and centered schemes are thus one of the most common choices for modeling
transient response. In practice, centered-difference schemes are a mixed blessing. They generally require
equation solution to advance across a time step, so they are implicit schemes which require more
computational effort and increased demand for memory. They also usually provide a higher convergence
rate, so that larger steps can be taken without sacrificing accuracy. In problems where the time-dependent
solution is expected to be well behaved, such as transient heat conduction or similar diffusion problems,
centered-difference methods are an excellent choice, especially if there are no spatial pathologies (such
as material discontinuities or singular initial conditions) present. In problems where time-dependent
response is more problematic (such as nonlinear dynamics of mechanical systems) many centered-
difference schemes often give poor results, due to their general inability to attenuate spurious high-
frequency discretization effects. There are a number of generalizations of centered-difference time-
stepping schemes that have recently been developed to provide optimal accuracy while damping spurious
high-frequency response, and these specialized schemes should be considered when complicated dynamic
simulations are inevitable (see, for example, the relevant material in Hughes, 1987).
Computational Characteristics of Finite-Difference Models
Finite-difference methods possess many computational similarities to finite-element models, and so their
computational characteristics are in large part similar. Both families of approximations generally result
in sparse equation sets to be solved, but finite-difference models are less likely to preserve any symmetry
of the governing differential equations in the form of matrix symmetry. In addition, many finite-difference
molecules (such as those used to represent derivatives of order higher than two) result in equation sets
that are not well conditioned and may give poor results when implemented in a low-precision, floating-
point environment (such as some inexpensive microcomputers working in single precision).
Convergence rates for many families of finite-difference models can be established using appropriate
Taylor series expansions of the solution function. These convergence estimates thus require the conditions
under which existence of a Taylor expansion can be guaranteed, and may be compromised when the
exact solution of the problem violates these existence conditions. In such cases, the precise rate of
convergence of the finite-difference model is more difficult to estimate exactly, though numerical
experiments on simple test problems can often be used to obtain estimates of the actual convergence
rate. The most common examples of conditions where an appropriate Taylor series expansion may not
exist include problems involving interfaces joining dissimilar materials, and singular problem data such
as point loads or localized sources. In each of these cases, the exact solution of the underlying differential
equation may not possess sufficient continuous derivatives to warrant a Taylor series expansion, and so
the rate of convergence may be compromised from that obtained on better-behaved data. For many
standard finite-difference schemes, this compromised accuracy is commonly localized around the region
possessing the singular or discontinuous data, and outside this disturbed region, better accuracy is
obtained. Estimating the extent of contamination of optimal accuracy is a task greatly facilitated by the
use of integrated graphics applications for visualizing and verifying solution behavior.
It is worthwhile to contrast this latter potential convergence problem with the analogous finite-element
case. Finite-element convergence proofs are generally cast in terms of least-squared error norms (such
as the energy norm used for solid mechanics and diffusive flow problems) or weak convergence error
measures. Each of these convergence settings is cast in an integral form, so few (if any) constraints upon
the existence or continuity of the solutions derivatives are required. The end result of the dependence
on integral convergence measures in finite-element modeling is that a substantially more robust mathe-
matical framework exists for guaranteeing optimal convergence of finite-element models.
Advantages and Disadvantages of Finite-Difference Models
Finite-difference schemes possess many important computational advantages. Chief among these is the
fact that finite-difference models are very easy to implement for a wide variety of problems, because
much of the work of developing a finite-difference application consists merely of converting each
derivative encountered in the mathematical statement of the problem to an appropriate difference estimate.
Because there are so many well-understood differential equations available in engineering and science,
the scope of finite-difference modeling is appropriately broad.
Finite-difference models are also relatively simple to implement in software. Unlike finite-element
models, which possess a bewildering generality that often complicates the task of software development,
most finite-difference schemes are sufficiently limited in scope so that the task of computational imple-
mentation is less general and therefore considerably simpler. Because these models are easy to motivate
and to implement, there is a vast body of existing finite-difference programs that can readily be applied
to an equally wide range of problems in mechanical engineering. As long as the problems to be solved
fit into the framework of finite-difference modeling, there is a strong incentive to use difference models
in mechanical engineering practice.
The primary disadvantages of finite-difference modeling arise from engineering problems that do not
fit neatly into the restrictive nature of finite-difference modeling. Real-world phenomena such as material
dissimilarities, complex geometries, unusual boundary conditions, and point sources often compromise
the utility of finite-difference approximations, and these complicating factors commonly arise in profes-
sional practice. Differential formulations of physical problems are inherently restrictive, as the conditions
under which derivatives exist are considerably more stringent than those relating to integrability. The
end result of relying on differential statements of physical problems is that finite-difference schemes are
generally easier to implement on simple problems than are finite-element models, but that for more
complex problems, this relative simplicity advantage disappears.
There are other disadvantages of finite-difference models including some important issues in verifying
solution results. Since finite-difference models generally only calculate the primary solution parameters
at a discrete lattice of sampling points, determining a global representation for the solution over the
regions between these lattice points is not necessarily an easy task. If the grid used is sufficiently regular,
a finite-element mesh topology can be synthesized and use made of existing finite-element postprocessing
applications. If the grid is irregular, then automated schemes to triangulate discrete data points may be
required before postprocessing applications can be used.
Another serious disadvantage of restricting the domain of the solution to a lattice of discrete points
is that determination of secondary parameters (such as heat flux in thermal analyses or stress and strain
in mechanical problems) becomes unnecessarily complicated. In a finite-element model, determination
of secondary derived parameters is simplified by two properties: first, the solution is defined globally
over the entire domain, which permits solution derivatives to be evaluated easily and accurately; and
second, the convergence of the solution is generally cast into a mathematical framework (such as the
so-called energy norm) that guarantees not only convergence of the primary solution unknowns, but
of the derived secondary solution parameters as well. In many finite-difference applications, neither of
these important finite-element advantages are easily realized.
Finally, another problem for general application of finite-difference models is in the implementation
of boundary conditions. As long as the primary solution unknown is specified on the boundary (and the
boundary is easily represented in terms of the finite-difference grid), then implementation of boundary
conditions is relatively simple in theory. However, if derivative boundary conditions are required, or if
the boundary itself is not readily represented by a finite-difference grid, then various tricks may be
required to coerce the problem into a form amenable for finite-difference modeling (a particular example
of a clever fix is the construction of phantom nodes located outside the problem domain specifically for
the purpose of implementing derivative boundary conditions). Competing finite-element models generally
permit trivial implementation of a wide spectrum of common boundary conditions.
The implementation of derivative boundary conditions in a simple finite-difference model (in this
case, heat conduction, where the temperature function is denoted by u) is diagrammed in Figure 15.3.10.
Here it is desired to implement an insulated boundary along a vertical section of the finite-difference
grid, where x = constant, and hence the normal derivative is easily evaluated. In order to make the
normal derivative vanish, which represents the condition of an insulated boundary, it is necessary either
to utilize a reduced control volume (shown in gray in the figure) or to implement a fictitious solution
node outside the problem domain (labeled as a phantom node in the figure). To simplify the resulting equations, it will be assumed that there is no heat generation taking place.

FIGURE 15.3.10 Finite-difference derivative boundary condition implementation: an insulated boundary passing through node (i, j) of a grid with spacings Δx and Δy, showing the reduced nodal control volume adjacent to the boundary and the phantom node (i - 1, j) located outside the problem domain.
A brief overview of each implementation strategy will be presented below in order to highlight the
general methods used in practice. It should be noted that implementing derivative boundary conditions
on inclined or curved boundary segments is much more difficult than the case for this simple example.
In addition, it is worthwhile to reiterate that derivative boundary conditions are common in engineering
practice, as constraints such as insulated boundaries, specified sources, or stress-free surfaces are
accurately modeled using derivative boundary conditions. Finally, it should be emphasized that imple-
menting derivative boundary conditions is a generally trivial task when using finite-element approxima-
tions, a fact which lends considerable practical advantage to finite-element analysis methods.
Implementation of the derivative condition at node (i, j) in the figure is most easily developed using
the rectangular control volume shown. If the steady-state conservation of heat energy is considered over
this subdomain, then the heat flowing through the top, bottom, and right sides of the control volume
must be balanced by heat generation within the volume and by flow through the boundary, and these
last two terms are taken to be zero by assumptions of no internal heat generation and of an insulated
boundary, respectively. The appropriate mathematical relation for heat conservation over the control
volume is given by

$$
-\,k\left.\frac{\partial u}{\partial y}\right|_{\text{bottom face}}\frac{\Delta x}{2}
\;+\; k\left.\frac{\partial u}{\partial y}\right|_{\text{top face}}\frac{\Delta x}{2}
\;+\; k\left.\frac{\partial u}{\partial x}\right|_{\text{right edge}}\Delta y \;=\; 0
$$

where k is the thermal conductivity of the material. This partial differential relationship among the heat
flows can easily be converted to a discrete set of difference relations by substituting appropriate difference
formulas for the partial derivatives, and letting u_{i,j} denote the temperature computed at node (i, j). The
result of this substitution is given by

\[
-\,k\,\frac{u_{i,j} - u_{i,j-1}}{\Delta y}\,\frac{\Delta x}{2}
\;+\; k\,\frac{u_{i,j+1} - u_{i,j}}{\Delta y}\,\frac{\Delta x}{2}
\;+\; k\,\frac{u_{i+1,j} - u_{i,j}}{\Delta x}\,\Delta y \;=\; 0
\]

This single equation can be used to augment the standard difference equations for the nodes not located
on the boundary, in order to embed the desired derivative boundary conditions into the complete finite-
difference thermal model. For each node located on the insulated boundary (e.g., nodes [i, j - 1], [i, j],
[i, j + 1], and [i, j + 2] in the figure), an appropriate difference relationship is synthesized and incorporated
into the set of difference equations governing the complete problem. Details of this procedure for the
case of heat conduction can be found in Kreith and Bohn (1993). For more complex problems, appropriate
derivations must be carried out to determine the correct form of the boundary relations for the finite-
difference/finite-volume model.
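To see the structure of this boundary relation more clearly, it can be specialized to a uniform square grid. Setting Δx = Δy and dividing through by k/2 (a simplification introduced here only for illustration, not part of the original derivation) collapses the relation to the compact stencil

\[
u_{i,j-1} + u_{i,j+1} + 2\,u_{i+1,j} - 4\,u_{i,j} = 0
\]

which is simply the interior five-point stencil with the missing left-hand neighbor replaced by a doubled weight on the right-hand neighbor; for this source-free example, the phantom-node construction discussed below recovers exactly the same relation.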
In practice, the first two difference terms above are often combined to yield a more accurate centered-
difference approximation for the solution. In this case, the third term, which captures the insulated
boundary condition specification, can be seen to represent a backward-difference approximation for the
boundary flux. Since such two-point backward difference schemes have been shown earlier in this chapter
to possess less desirable convergence characteristics than centered-difference formulas, it is often advan-
tageous to convert the insulated boundary derivative condition to a more optimal centered-difference
form. One scheme for gaining a centered-difference derivative formula at node (i, j) can be developed
by introducing the phantom node shown in the figure as node (i - 1, j). A fictitious (in the sense that
the node does not lie within the actual problem domain) thermal solution can be constructed at this node
and utilized to gain a centered-difference relationship for the derivative boundary condition that may be
more consistent with the relations used elsewhere in the finite-difference approximation. The details of
this form of boundary condition implementation can be found in Ames (1977) or any other advanced
text on finite-difference numerical analysis.
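The following sketch illustrates how such boundary relations enter an assembled finite-difference model. It solves steady heat conduction on a small rectangular grid with an insulated left edge, using the phantom-node elimination described above; the grid size and boundary temperatures are arbitrary values chosen for illustration, and the sketch is not drawn from any particular commercial code.

import numpy as np

# Steady 2-D heat conduction (Laplace equation) on a uniform nx-by-ny grid.
# The left edge (i = 0) is insulated; the other three edges have fixed temperatures.
# Equal spacing in x and y is assumed, so the conductivity k cancels from the stencils.
nx, ny = 5, 5                                  # illustrative grid size (assumption)
T_top, T_bottom, T_right = 100.0, 0.0, 50.0    # illustrative boundary data (assumption)

def index(i, j):
    """Map grid point (i, j) to its unknown number (row of the linear system)."""
    return i * ny + j

A = np.zeros((nx * ny, nx * ny))
b = np.zeros(nx * ny)

for i in range(nx):
    for j in range(ny):
        row = index(i, j)
        if j == 0:                              # bottom edge: fixed temperature
            A[row, row] = 1.0
            b[row] = T_bottom
        elif j == ny - 1:                       # top edge: fixed temperature
            A[row, row] = 1.0
            b[row] = T_top
        elif i == nx - 1:                       # right edge: fixed temperature
            A[row, row] = 1.0
            b[row] = T_right
        elif i == 0:                            # insulated left edge: phantom node
            # A centered difference for du/dx = 0 gives u(-1, j) = u(1, j), so the
            # standard five-point stencil picks up a doubled right-hand neighbor.
            A[row, row] = -4.0
            A[row, index(1, j)] = 2.0
            A[row, index(0, j - 1)] = 1.0
            A[row, index(0, j + 1)] = 1.0
        else:                                   # interior node: five-point stencil
            A[row, row] = -4.0
            A[row, index(i - 1, j)] = 1.0
            A[row, index(i + 1, j)] = 1.0
            A[row, index(i, j - 1)] = 1.0
            A[row, index(i, j + 1)] = 1.0

u = np.linalg.solve(A, b).reshape(nx, ny)
print(u)

In a production code the same boundary rows would simply be generated by whatever loop assembles the difference equations, which is the sense in which the boundary relation "augments" the standard interior equations.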
It is sufficient to note here that the analyst is often confronted with the choice of a feasible but less-
than-optimal implementation (such as the control volume approach taken above), or a more accurate but
correspondingly more complicated computational scheme for implementing a boundary derivative. Since
finite-element models do not exhibit any of these difficulties in boundary-condition implementation, it
is often much easier to utilize finite-element software for problems involving derivative boundary
conditions.
Test Problem Suite for Finite-Difference Software Evaluation
Before purchasing or using a new finite-difference program, appropriate evaluation measures should be
contemplated to insure that the proposed application is capable of solving the particular problems
encountered in mechanical engineering practice. As in the case of evaluation of finite-element software,
any known solution pathologies can be modeled using the new program in order to determine the quality
of the solution scheme. Many of the same mathematical pathologies presented for finite-element eval-
uation (such as mixed problems arising from incompressible flow constraints) can occur in finite-
difference models, so these standard tests may also be appropriate in the finite-difference setting. In
addition, any problems commonly encountered in professional practice should be solved with the new
code in order to verify that the proposed application is both easy to use and reliable.
Integrated graphical pre- and postprocessors are also important for use with finite-difference programs.
Preprocessors that can plot both structured and unstructured finite-element grids are especially valuable,
as verification of the coordinate transformations required to map complex physical domains to a regu-
larized coordinate system may necessitate plotting of the finite-element grid in both its irregular physical
setting and its regularized transformed computational setting. Postprocessors that automatically deter-
mine appropriate secondary solution parameters from the difference solution lattice are especially
attractive, as they provide additional solution interpretation functions while minimizing the extra work
required on the part of the engineer to construct the desired derived solution function.
If computational simulations of advection-dominated problems are to be generated, it is imperative
to insure that the proposed application provides robust algorithms for handling the nonlinear effects of
convective acceleration terms. Older computational technology (such as simple upwinding schemes used
to more accurately model advective effects in fluid dynamics) often results in poorer solution quality
than can be obtained from codes utilizing newer schemes from computational fluid dynamics. Comparing
results from various computer programs is a good way to determine which programs implement the best
means to capture nonlinear effects. In addition, if the application being evaluated is noncommercial (e.g.,
public domain codes commonly distributed by academic researchers or special-purpose applications
developed within an engineering organization), these comparisons will help to determine whether the
application is capable of robust modeling of the engineering problems encountered in normal professional
practice.

Alternative Numerical Schemes for Mechanics Problems


Finite-element and finite-difference approximations are the most common computational schemes used
in mechanics, but the practice of computational mechanics is not restricted to these two discretization
approaches. There are a variety of alternative methods available for solving many important problems
in mechanical engineering. Some of these alternatives are closely related to finite-element and finite-
difference models. Others have little in common with these general discretization schemes. A represen-
tative sampling of these alternative approaches is presented in the following sections.
Boundary-Element Models
Boundary elements are a novel generalization of finite-element technology to permit better computational
modeling of many problems where finite-element methods are unduly complex or expensive (see Brebbia
and Dominguez, 1989). In simple terms, boundary elements involve recasting the integral principles that
lead to finite-element models into a more advantageous form that only requires discretizations to be
constructed on the boundary of the body (hence the term boundary element), instead of over the entire
solution domain. For many problems involving complicated boundaries, such as infinite domains or
multiply connected three-dimensional shapes, boundary element technology is an attractive alternative
to finite-element modeling. In addition, because many existing computer-aided drafting applications
already idealize shapes in terms of enclosing boundary representations, boundary-element approaches
are natural candidates for integration with automated drafting and design technologies.
Boundary-element schemes are not without disadvantages, however. The recasting of the integral
formulation from the original domain to the boundary of the problem is not a trivial task, and this
conversion substantially complicates the process of developing new boundary-element technology. One
of the most important factors limiting the applicability of finite-element models is the formulation of an
integral principle for a mechanical problem that is equivalent to a more standard differential problem
statement. The conversion of this integral statement into a form amenable to boundary element approx-
imation is yet another laborious task that must be performed, and each of these difficult mathematical
problems represents potential errors when implemented in the setting of developing general-purpose
analysis software. Many of the potentially attractive capabilities of boundary-element technology, such
as the goal of accurate modeling of singular solution behavior (e.g., stress concentrations) that commonly
occur on the boundaries of the domain, have not yet been completely realized in existing boundary-
element software. However, it is clear that boundary-element models are promising candidates for future
computational approximation schemes.
At present, boundary-element codes are most commonly found in research applications, as there is
not yet a full range of commercial quality boundary element programs available for widespread use by
practicing engineers. While the boundary-element method has a promising future, other advances in
finite-element technology (such as robust automated mesh generation schemes that synthesize internal
element information from a simple boundary representation) may diminish the practical demand for
boundary-element models despite their considerable theoretical advantages. It is already feasible to combine
features of finite-element and boundary-element technology in the same code, so the future of boundary
element techniques may lie in integration with (instead of replacement of) finite-element applications.
Finite-Volume Schemes
Finite-volume methods are hybrid computational schemes that have been developed to ameliorate some
of the problems of finite-difference models. In simple terms, finite-volume models represent integrated
finite-difference approximations, where the underlying differential equation is integrated over individual
control volumes in order to obtain recurrence relations among discrete solution values. These recurrence
formulas are strikingly similar to standard finite-difference approximations, but this integrated approach
possesses some clear advantages over related finite-difference technology:
Because the equations are integrated over control volumes, finite-volume schemes are better able
to model material discontinuities, which may cause serious problems in standard finite-difference
models.
The integration process permits rational modeling of point sources, which are commonly imple-
mented in an ad hoc manner in finite-difference schemes.
The integration process permits more flexibility in modeling unusual geometries, as the individual
integration volumes can be adjusted locally to provide such important effects as curved boundaries.
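To make the integrated character of these schemes concrete, consider steady heat conduction with no internal sources, so that the divergence of the heat flux vanishes over each control volume V_i with surface ∂V_i. The notation below (face conductivity k_f, neighboring value u_N(f), center-to-center distance d_f, and face area A_f) is introduced here purely for illustration:

\[
\int_{V_i} \nabla \cdot \left( k\,\nabla u \right) dV
\;=\; \oint_{\partial V_i} k\,\frac{\partial u}{\partial n}\, dS
\;\approx\; \sum_{f} k_f\,\frac{u_{N(f)} - u_i}{d_f}\,A_f \;=\; 0
\]

Because each face flux is written with a single face conductivity k_f (commonly a harmonic average of the two adjacent material values), jumps in material properties are handled naturally, and a point source simply contributes an additional volume integral rather than requiring ad hoc treatment, which is the origin of the advantages listed above.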
While these are important generalizations for finite-difference theory, in practice they do not realize
many of the advantages available with finite-element models. Finite-volume schemes still require sub-
stantial regularity in the construction of the solution grid, or the construction of appropriate coordinate
transformations to map the problem domain to a simpler computational setting. The integration process
provides no relevant aid in the construction of secondary solution parameters such as stresses and strains.
Finite-element topology must still be synthesized to enable use of appropriate solution visualization
schemes, and this synthesis may be difficult when complex unstructured grids are utilized for the finite-
volume model. Therefore, finite-volume schemes are most likely to be found competing with existing
finite-difference models in areas where the underlying differential statements of the problem are well
established, but where increased modeling flexibility is required.
Finally, the convergence setting for finite-volume methods is similar to that of finite-difference models,
and the various integral convergence measures available for proving finite-element convergence are not
so readily applied to finite-volume approximations. Thus finite-volume models provide a way to gener-
alize finite-difference approximations to more realistic problem settings, but their limited utilization of
integral principles does not provide a unified integral framework of the quality found in finite-element
modeling applications.
Semi-Analytical Schemes
Many problems in mechanical engineering still admit solutions based on computational implementation
of analytical (i.e., nonnumeric) techniques originally used for hand computation using closed-form
solutions. This approach of approximating reality by simplifying the model to the point where it admits
a closed-form solution (as opposed to the standard computational approach of putting a complicated
model on the computer and using numerical approximations based on finite-element or finite-difference
relations) has a rich history in engineering and science, and it includes such classical techniques as
Fourier Series expansions for heat conduction problems, representation by orthogonal polynomials, and
other eigenfunction expansion schemes.
The advent of high-performance computers has had two effects on semi-analytic schemes such as
Fourier Series approximation. First, modern computers are capable of performing the trigonometric
evaluations required by Fourier Series representations very rapidly, so that these eigenfunction expansion
schemes are now computationally feasible to implement. On the other hand, the application of modern
computers to competitive approximation schemes (such as finite-element or finite-difference models for
heat conduction applications formerly solved using closed-form Fourier Series techniques) has obviated
the need to consider using semi-analytical schemes, as these more general computational competitors
are now capable of solving more complicated problems with similar amounts of computational effort.
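As a small illustration of such a semi-analytical computation, the classical separation-of-variables solution for transient conduction in a slab with both ends held at zero temperature can be summed numerically in a few lines. The slab length, diffusivity, initial temperature, and number of retained terms below are arbitrary illustrative assumptions:

import numpy as np

# Transient 1-D conduction in a slab of length L with both ends held at zero
# temperature and a uniform initial temperature T0.  Classical Fourier series:
#   u(x, t) = sum over odd n of (4*T0/(n*pi)) * sin(n*pi*x/L) * exp(-alpha*(n*pi/L)**2 * t)
L, alpha, T0 = 1.0, 1.0e-4, 100.0      # illustrative values (assumptions)
n_terms = 200                          # number of odd harmonics retained

def slab_temperature(x, t):
    n = np.arange(1, 2 * n_terms, 2)                   # odd harmonics 1, 3, 5, ...
    coeff = 4.0 * T0 / (np.pi * n)
    modes = np.sin(np.outer(x, n) * np.pi / L)
    decay = np.exp(-alpha * (n * np.pi / L) ** 2 * t)
    return modes @ (coeff * decay)

x = np.linspace(0.0, L, 11)
print(slab_temperature(x, t=100.0))

Summing a few hundred transcendental terms in this way is trivial on current hardware, which is precisely the first point made above; the second point is that the same hardware will also solve the equivalent finite-difference or finite-element model with comparable effort.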
While some applications of semi-analytical techniques can still be found, these computational imple-
mentations tend to be localized into special-purpose noncommercial codes. These classical schemes are
still useful tools for such niche uses as verification of more complex computational models, but there
is little likelihood that these older closed-form schemes will regain prominence relative
to modern computational approaches.

Nondeterministic Computational Models


All of the computational models presented so far are completely deterministic: the computer program
constructs one solution for every set of input data, and the output is completely determined by the values
of the input data set (except, of course, for some differences that may be encountered when using a
different computer or changing to a different precision for floating-point computation on the same
computer). In some problems, the input data can be characterized with sufficient precision so that these
deterministic models are sufficient for analysis. In other problems, however, the input data are not known
precisely, but are instead given only in a probabilistic sense. In these latter cases, it is more reasonable
not to speak of the precise values that define the solution (such as the nodal displacements of a finite-
element model), but instead to consider the problem of determining the probabilistic distribution of these
solution components.
This nondeterministic approach to finding computational solutions in a probabilistic sense is termed
stochastic modeling. Stochastic modeling of mechanical engineering systems is still an emerging field,
though related engineering disciplines have seen substantial developments in this area. For example, in
the modeling of structural systems subjected to earthquake loads (a field of civil/structural engineering
very similar to structural dynamics applications found in mechanical engineering practice), nondeter-
ministic models are commonly used to represent the fact that the precise excitation of an earthquake is
never known a priori. Engineers concerned with the reliability of structures subjected to earthquake
loads often resort to stochastic modeling techniques to estimate the probabilistic distribution of forces
in a structure in order to design the individual component structural members.
One area of considerable interest is in developing stochastic finite-element models that will permit
using much of the existing knowledge base of deterministic finite-element models toward more general
nondeterministic computations that can address directly the probabilistic distribution of finite-element
response. One of the main constraints to the practical use of such schemes is that large computational
models (e.g., ones with substantial numbers of displacement components) generate stochastic approxi-
mations that are computationally intractable because of the immense computational expense required to
generate and solve the resulting stochastic equation set. At present, only relatively small problems are
commonly solved via stochastic models, but this situation is reminiscent of early developments in
computational implementation of finite-element and finite-difference schemes. As computer performance
continues to improve, so will the practical utility of stochastic models in engineering.
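A brute-force (but widely used) alternative to formal stochastic finite-element formulations is Monte Carlo sampling, in which an ordinary deterministic model is simply evaluated many times with randomly drawn input data and the statistics of the outputs are collected. The sketch below propagates scatter in a load and a stiffness through a trivial one-degree-of-freedom model; the distributions and parameter values are illustrative assumptions only:

import numpy as np

# Monte Carlo propagation of input uncertainty through a deterministic model.
# Model: static displacement of a single spring, d = P / k, with the load P and
# stiffness k treated as random inputs (lognormal scatter about nominal values).
rng = np.random.default_rng(0)
n_samples = 100_000

P = rng.lognormal(mean=np.log(10.0e3), sigma=0.10, size=n_samples)   # load (N), assumed
k = rng.lognormal(mean=np.log(2.0e6), sigma=0.05, size=n_samples)    # stiffness (N/m), assumed

d = P / k                              # the deterministic model, evaluated per sample

print("mean displacement:", d.mean())
print("standard deviation:", d.std())
print("99th percentile:", np.quantile(d, 0.99))

The obvious drawback is cost: each sample is a complete deterministic analysis, which is why large finite-element models make direct sampling, like the formal stochastic formulations mentioned above, computationally expensive.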

Time-Dependent and Nonlinear Computational Mechanics


Many important problems in mechanical engineering are neither static nor linear-elastic. Mechanical
systems undergoing dynamic response or materials deformed past their elastic limit require fundamentally
different solution methods than do simpler static or elastic cases. In practice, static and elastic models
are often used for initial analysis purposes, and depending upon the analysis results or the solution
accuracy required, more complex time-dependent or nonlinear models may be subsequently analyzed.
In cases that warrant these more accurate and difficult analysis methods, it is generally a good idea to
perform simple static linear analyses first, in order to gain physical intuition into the more difficult
problem, and to remove gross modeling errors (such as inconsistent discretizations) before more complex
and expensive analyses are performed.
Practical Overview of Time-Dependent and Nonlinear Problems
While it may seem unusual to lump nonlinear and time-dependent problems into a single framework,
in practice there are many common features in implementing these two disparate solution approaches.
In each case, the more complex problem is solved by looping over a sequence of equivalent pseudostatic
or quasilinear computational problems in order to approximate time-dependence or nonlinear behavior.
In the case of time-dependent problems, the looping amounts to stepping the solution over a discretization
of the time domain; in the nonlinear case, the looping represents an iteration scheme used to trade in a
complex nonlinear problem for a series of simpler linear ones. In many problems, the response is both
time-dependent and nonlinear, and then nested looping schemes are generally utilized, requiring internal
iterations to solve each nonlinear step composed within a larger time-marching scheme used to advance
through the temporal domain of the problem.
Time-dependent problems in computational mechanics are generally solved by discretization of the
problem's history into a set of distinct times, and the time-dependent analysis is carried out by stepping
the solution between these discrete times. At each step, the temporal variation of the solution is combined
with discrete estimates for the temporal derivatives inherent in the problem, and these composed
approximations are utilized to transform the advancement across the time step into a computational form
that resembles a traditional static problem. In this manner, the sequential nature of the time-dependent
problem is transformed into a series of equivalent pseudostatic problems.
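The following sketch shows this transformation for the semi-discrete heat-conduction equations M du/dt + K u = f produced by a spatial (finite-element or finite-difference) discretization. A backward-difference approximation for the time derivative turns each step into a pseudostatic linear solve; the small matrices below are arbitrary stand-ins for a real discretization:

import numpy as np

# Semi-discrete heat conduction:  M du/dt + K u = f.
# Backward (implicit) Euler:  (M + dt*K) u_new = M u_old + dt*f,
# i.e., each time step reduces to an equivalent pseudostatic linear solve.
n = 4
M = np.eye(n)                                              # lumped capacitance (assumed)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)     # 1-D conduction stencil (assumed)
f = np.zeros(n)
f[0] = 1.0                                                 # heat input at one end (assumed)

dt, n_steps = 0.1, 50
u = np.zeros(n)                                            # initial temperatures
A = M + dt * K                          # constant for a linear problem with a fixed step
for step in range(n_steps):
    u = np.linalg.solve(A, M @ u + dt * f)                 # one pseudostatic solve per step

print(u)

In a nonlinear problem the matrix A would depend on u, and the single solve inside the loop would itself become an inner iteration, which is exactly the nested looping structure described above.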
Nonlinear problems are inherently difficult to solve, as they are commonly ill posed. Nonlinearities
are generally accompanied by problems of nonuniqueness (recall that two lines intersect in at most one
point, but two curves may intersect at many), and this nonunique character of the solution leads to
considerable concerns as to which (if any) of the solutions represents the real (feasible) physical state
of the problem. In addition, there is little general theory underlying nonlinear solution techniques, and
so concrete guarantees that a nonlinear iteration process will converge to a desired solution are generally
difficult to obtain.
Test Problems for Time-Dependent Program Evaluation
The most important issues relevant for effective solution of time-dependent problems have already been
presented and are reiterated here in a high-level setting. In particular, issues of stability and convergence
are especially relevant to practical time-stepping schemes, and insuring that the computed solution history
is free from unstable artifacts or inaccurate components is the most important aspect of the practical use
of time-stepping algorithms.
The basic trade-off in time-dependent solution processes is between the desire to minimize the number
of time steps required for a particular solution history and the need to minimize the amount of compu-
tational effort required to construct the solution at each individual time step. Ideally, it would be possible
to achieve both goals simultaneously, but this desirable state is seldom reached in practice. In general,
increasing the length of the time step involves performing more work at each step, while minimizing
the effort at each individual step is accompanied by a substantial increase in the number of steps required
to construct the entire solution history. Understanding the interplay of these two desirable goals consti-
tutes much of the practice of solving time-dependent problems. Since many commercial programs that
support time-dependent analysis methods provide some form of automated selection of the time step, it
is imperative for the analyst to have some idea of the trade-offs involved in the use of these various
time-stepping approaches.
Implicit schemes, such as those based upon centered- or backward-difference schemes for approxi-
mating temporal derivatives, generally permit taking larger time steps than do explicit schemes, which
are often based upon forward-difference approximations. Implicit schemes also require that sets of
simultaneous equations be solved to advance the solution across a time step, where explicit schemes
require little or no effort of this kind. Therefore, implicit schemes permit fewer steps to be taken with
more work per step, where explicit approaches permit rapid advancement of the solution across an
attendant larger set of time steps. In general, implicit schemes tend to be more stable than explicit
schemes: most practical implicit schemes are unconditionally stable, in that their stability properties do
not depend upon the time-step size. Explicit schemes tend to be conditionally stable, with a maximum
step size given by a so-called Courant condition: step sizes larger than the critical threshold will result
in solution oscillations that can easily dwarf the desired solution response.
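The conditional stability of explicit schemes is easy to demonstrate on the one-dimensional heat equation u_t = alpha*u_xx, for which the forward-difference (explicit) update is stable only when r = alpha*dt/dx^2 <= 1/2. The sketch below runs the same explicit update with a step just under and just over this threshold; the problem data are arbitrary illustrative choices:

import numpy as np

# Explicit (forward Euler) update for u_t = alpha * u_xx on a uniform grid with
# fixed zero temperatures at both ends.  Stable only if r = alpha*dt/dx**2 <= 0.5;
# larger steps produce oscillations that grow without bound.
alpha, dx, nx, n_steps = 1.0, 0.1, 21, 200     # illustrative values (assumptions)

def run_explicit(r):
    u = np.zeros(nx)
    u[nx // 2] = 1.0                           # initial "hot spot" in the middle (assumed)
    for _ in range(n_steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        u[0] = u[-1] = 0.0                     # fixed end temperatures
    return np.abs(u).max()

print("r = 0.45:", run_explicit(0.45))         # decays smoothly (stable)
print("r = 0.55:", run_explicit(0.55))         # grows explosively (unstable)

The corresponding implicit (backward-difference) update remains stable for either step size, at the cost of a simultaneous solve at every step, which is the trade-off described above.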
Centered-difference schemes for time-dependent solution, such as the Crank-Nicolson method used
to solve time-dependent heat conduction problems (Ames, 1977) or the Newmark algorithm used to
solve structural dynamics problems (Hughes, 1987), are attractive numerical approaches because they
possess optimal (quadratic) convergence rates for their time-dependent accuracy behavior. Unfortunately,
these optimal error properties are commonly associated with an inability to damp spurious oscillations
that are generally present in discrete computational models. The problem is less important in heat
conduction and other dispersive problems, as the spurious oscillations are generally damped over time.
In dynamics applications, however, these oscillations can eventually grow and contaminate the solution,
and this effect is especially pronounced in the important nonlinear time-dependent case. For this reason,
various schemes have been proposed to generalize the more naive centered-difference approach to permit
new schemes that damp unwanted high-frequency artifacts but preserve optimal convergence properties
(for example, the Hilber-Hughes-Taylor method is a simple extension of Newmark's algorithm, but
removes virtually all of the oscillation problems encountered with the latter scheme).
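Many of these algorithms can be viewed as members of a single generalized trapezoidal family. For the semi-discrete system M du/dt + K u = f, one common statement of the family (written here in a generic form, not tied to any particular code) is

\[
M\,\frac{u^{\,n+1} - u^{\,n}}{\Delta t}
\;+\; K\left[ (1-\theta)\,u^{\,n} + \theta\,u^{\,n+1} \right]
\;=\; (1-\theta)\,f^{\,n} + \theta\,f^{\,n+1}
\]

where θ = 0 recovers the explicit forward-difference scheme, θ = 1/2 the centered (Crank-Nicolson) scheme with its quadratic accuracy but weak damping of spurious high-frequency modes, and θ = 1 the backward-difference scheme, which is only first-order accurate but damps those modes strongly. The algorithmic parameters of structural dynamics families such as Newmark and Hilber-Hughes-Taylor play an analogous role for second-order (inertial) systems.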
One of the best ways to determine whether the solution accuracy is being eroded by stability or
accuracy concerns is to utilize appropriate visualization programs wherever possible. Undesirable or
spurious artifacts are readily observed on many graphical displays of solution history, and such solution
visualization schemes should be used whenever possible.
Classification of Nonlinear Mechanical Response
Many important problems in computational mechanics are nonlinear. Typical examples include:
Nonlinear materials, where the material parameters (for example, the material compliances)
depend upon the displacements
Nonlinear geometry, where large deformations require consideration of deformed geometry in
the strain-displacement relations, or where Eulerian coordinate formulations are used (as in fluid
mechanics problems, where convection introduces strong nonlinearities into otherwise well-posed
problems)
Nonlinear boundary conditions, where the value or type of the boundary condition depends upon
the solution of the problem
The different types of nonlinearities often lead to qualitatively different solution behavior. For example,
many material nonlinearities (such as the yielding of metals due to plastic deformation) tend to produce
nonlinear problems that are reasonably well posed, in that they are relatively simple to solve and tend
not to have serious problems with nonuniqueness of the solution or convergence of the nonlinear solution
scheme. Alternatively, many geometrically nonlinear problems (such as high Reynolds number flow, or
arbitrarily large deformations of materials) are considerably more difficult to solve effectively, owing to
the presence of alternative (but infeasible) solutions and strong nonlinear effects.
General Overview of Nonlinear Solution Schemes
It is possible to have problems where material and geometric nonlinearities are present simultaneously,
and this situation is made commonplace by the fact that inelastic material nonlinearities are often
accompanied by plastic flow, which may lead to large deformations that necessitate the use of nonlinear
strain-displacement relations. Therefore, it is appropriate to consider solution schemes for nonlinear
problems that do not depend upon the particular source of the nonlinearity.
There are many ways to solve nonlinear problems; indeed, it often seems as if there are too many schemes,
reflecting the important fact that there is little general theory for solving nonlinear problems. Even
though there are a wide variety of approaches that can be used to motivate nonlinear solution schemes,
one basic fact emerges from all of them: nonlinear problems are generally solved using an iteration
scheme.
The most common nonlinear iteration schemes are based on Newton's method, which provides a
relatively simple way to convert an intractable nonlinear equation solution process into a sequence of
linear solution processes. The ultimate goal is that this sequence of linear solutions will tend to a fixed
solution, and that this solution will solve the underlying nonlinear problem. Thus, Newton's method is
based upon a linearization of the nonlinear problem, and it is an especially effective approach in practice
because the sequence of linear solutions will often converge quadratically to the nonlinear solution (recall
that quadratic rates of convergence are especially attractive, as these schemes cause the error to go to
zero rapidly once a good approximation is found).
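A minimal sketch of Newton's method for a system of nonlinear equations R(u) = 0 is given below. The two-equation residual used here is an arbitrary illustrative example, and the Jacobian is formed by finite differences purely for brevity; production codes normally assemble it analytically:

import numpy as np

def newton(residual, u0, tol=1.0e-10, max_iter=20):
    """Solve residual(u) = 0 by Newton iteration with a finite-difference Jacobian."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            return u
        n, h = u.size, 1.0e-7
        J = np.empty((n, n))
        for k in range(n):                    # finite-difference Jacobian (for brevity)
            du = np.zeros(n)
            du[k] = h
            J[:, k] = (residual(u + du) - r) / h
        u = u - np.linalg.solve(J, r)         # linearized correction step
    raise RuntimeError("Newton iteration failed to converge")

def residual(u):                              # arbitrary illustrative nonlinear system
    x, y = u
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

print(newton(residual, u0=[2.0, 0.0]))

Near the solution the error is roughly squared on every pass, which is the quadratic convergence referred to above; far from the solution the iteration may stall or wander, which is why the incremental ("sneaking up") strategies discussed below matter in practice.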
Finally, a warning on nomenclature ought to be mentioned. Some authors reserve the term Newton's
method for the iteration scheme resulting from the minimization of a nonquadratic functional such as
those that govern many nonlinear problems in solid mechanics. These authors refer to the analogous
technique of solving a nonlinear system of equations (which often may not result from a functional
extremization problem, as in the case of the Navier-Stokes equations) as Newton-Raphson iteration.
Test Problems for Nonlinear Program Evaluation
Just as there is little general theory for solving nonlinear problems, there is little general practice that
can be cited to aid the engineer contemplating effective estimation of nonlinear response. Some basic
principles based on empirical observations can be given, however, and adherence to these ad hoc
guidelines will aid the practicing engineer working in this interesting but difficult field.
The most important observation is that it is unusual to be able to solve a complicated nonlinear
problem unless one can also solve a simpler version with the intractable nonlinearities removed. This
leads to the concept of "sneaking up" on a nonlinear solution by first performing a sequence of problems
of increasing modeling complexity and (hopefully) increasing accuracy. For example, in order to find
gross modeling errors (such as incorrect discretizations or geometric inconsistencies), it is sufficient to
generate a discretization mesh for the problem, but to use a simpler linear model for approximating
material or geometric nonlinearities. Once the gross modeling errors have been removed from these
simpler (and computationally less expensive) analyses, more accurate material or geometric models can
be constructed from the simpler input data. The results of the simplified analyses may often then be
reused to verify the overall mechanical response of the final nonlinear solution.
Another important way to verify nonlinear solution behavior occurs in the setting of time-dependent
problems, where the nonlinearities evolve over time from an initial linear state. Most time-dependent
algorithms are obtained by trading in a complicated historical problem for a sequence of quasistatic
solutions, and at each time step, a standard finite-element or finite-difference quasistatic problem is to
be solved (this is especially true when implicit time-stepping schemes are used). In this setting, it is
desirable to be able to use large time steps so that the entire solution history can be obtained with as
little effort as is possible. However, in many problems (most notably, nonlinear stress analyses), the
effect of the nonlinearities on each individual quasistatic problem decreases with the size of the time
step. As the time step gets smaller, the time-dependent component of the solution process becomes more
expensive, but each nonlinear iteration scheme generally becomes easier to solve. Therefore, when
solving nonlinear time-dependent processes, it is important to balance the desire to minimize the number
of time steps taken with the goal of more rapid convergence of the nonlinear iterations required at each
solution step.
Finally, the use of integrated graphics postprocessing to verify solution behavior is incredibly important
in the setting of solving nonlinear problems. The problems of nonuniqueness, localization of response,
and solution instabilities are commonly encountered in nonlinear analysis settings, and the use of solution
visualization techniques (combined with a good sense of engineering judgment and physical intuition)
greatly simplifies the task of dealing with these otherwise intractable difficulties.

Standard Packages Criteria


Poor computer acquisition policies can result in spectacular losses for a variety of public and private
organizations. Many of these failures arose because empirical concepts developed in the computer
industry were not given proper consideration. Guidelines for acquisition of computer hardware and
software are outlined below.
Functional Specifications
Regardless of whether an application is to be purchased off the shelf or developed for custom use, the
process of acquiring computer software begins with a functional specification of the role of the software.
The end result of this process is termed a functional requirements document, and it is a high-level
nontechnical enumeration of the goals (functions) that the software package is designed to meet. The
language used in a functional specification is normally reasonably devoid of any references to specific
technological issues. For instance, if the purchase of an application to aid in document preparation at
the XYZ Engineering company is contemplated, the functional requirements might include such
phrases as "be able to import existing documents for future modification" and "provide support for
various printers and other output devices normally used at XYZ Engineering." A good functional
specification would not be characterized by "XYZ Engineering should purchase the MicroWizard
DocuStuff Version 9.4 software to run on a new WhizBang 9000 Supercomputer": precise specifications
of hardware and software are normally not relevant in a functional specification unless they are used
only to indicate the need for backward compatibility with existing computer systems.
A functional specification should provide a view of the needs of the organization that are to be satisfied.
By stating these needs in broad terms, the functional specification admits the widest possible range of
hardware and software solutions. Since computer technology changes rapidly, it is essential to keep
technical details from creeping into the functional requirements, as these technical requirements may
cause useful alternative measures to be overlooked. When the procurement process is for a government
agency (where the lead time between identifying the need and purchasing the software or hardware may
be a year or more), it is imperative to insulate the statement of need from technical details that may
change rapidly over the course of the acquisition process, and functional specifications are an excellent
approach toward this important goal.
Technical Specifications
The next step in the computer materials acquisition process is the translation of the functional require-
ments to a technical specification. This step is generally performed immediately before bids are sought,
so that the most current technical information will be available for review. All of the relevant technical
data that was carefully excluded from the functional specification belong in the technical specification,
but care should still be taken to avoid unwarranted detail. It is especially important to avoid placing
technical information that is inaccurate into the technical specification, or else the resulting procurement
process may be compromised.
Evaluation and Testing
Once the technical specification has been finalized, it is either sent to vendors for bidding (in the case
of large purchases, where the work of matching specific models and manufacturers of computers can
be offloaded to the vendor) or handled internally (for small purchases), where local expertise is used to
match the technical requirements with a range of computer models.
One important component of the evaluation process for software procurement is regression testing,
where existing data sets are migrated to the new application (or computer), analyzed, and output results
compared. This process
Aids in evaluating whether the application is easy to use
Provides verification of the software application's capabilities and accuracy
Gives estimates of performance for the new application
Regression testing of computational mechanics software is greatly aided by the use of integrated
visualization tools. Comparing the outputs of complex simulations from two different codes (or even
two different revisions of the same code) can be an incredibly tedious process, but the use of graphical
display tools provides considerable help toward accurate implementation of the regression testing process.
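A regression test need not be elaborate: in its simplest form it is a scripted comparison of archived reference results against the results written by the new (or revised) application, within stated tolerances. The sketch below compares two whitespace-separated numeric result files; the file names, sample values, and tolerance are illustrative assumptions:

import numpy as np

def regression_compare(reference_file, candidate_file, rel_tol=1.0e-4):
    """Compare two whitespace-separated numeric result files within a relative tolerance."""
    ref = np.loadtxt(reference_file)
    new = np.loadtxt(candidate_file)
    if ref.shape != new.shape:
        return False, "result arrays differ in shape"
    worst = np.max(np.abs(new - ref) / np.maximum(np.abs(ref), 1.0e-30))
    return worst <= rel_tol, "worst relative difference = %.3e" % worst

# Illustrative usage: the two files stand in for archived reference output and the
# output produced by the new application on the same input data set.
np.savetxt("reference_results.dat", np.array([1.000, 2.500, -3.750]))
np.savetxt("candidate_results.dat", np.array([1.00002, 2.50004, -3.74995]))
ok, message = regression_compare("reference_results.dat", "candidate_results.dat")
print("PASS" if ok else "FAIL", "-", message)

Graphical comparison of field quantities, as recommended above, complements rather than replaces this kind of numeric check, since a plot reveals where two solutions differ while the tolerance test documents whether they differ significantly.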
Overview of Computer Models for Mechanical Engineering Practice
The following outline classifies families of computational models in terms of common mechanical
engineering problem characteristics. It emphasizes general rules of thumb. Although there are instances
in which these recommendations are not optimal, the outline represents a good starting point for picking
the right computational tool for a given task.
Simplicity: Finite-difference models are among the simplest computer models, as they can be
derived and implemented directly from the governing differential equation whenever the differ-
ential form of the boundary-value problem is known. If it is desired to develop, implement, and
test a computer model in a hurry, and the problem is well defined in terms of differential equations,
finite-difference schemes are a very powerful simulation tool.
Generality: Finite-element models are the most general computational schemes available for
mechanical engineers. The mathematical framework of finite-element theory makes it easy to
solve coupled problems, as well as problems with discontinuous and singular data. When the
underlying problem is intractable in terms of its complexity or material constitution, then finite-
element models are generally the first choice for solution.
Data definition: Depending upon how the data are defined, one computer model may be better
than others. If all of the data defining the problem domain are given in terms of surfaces (such
as would be obtained from some surface-based CAD solid modeling applications), then boundary
element technology may be a good choice if the underlying boundary element package can handle
the problem encountered. Alternatively, a finite-element package that automatically generates
internal element definition from the surface data would be a good choice for such surface-defined
modeling. If the data are specified naturally on an unstructured grid, then finite-element schemes
generally are easier to attempt than are finite-difference and finite-volume models.
Availability: It is generally a lot easier to use a commercial program to solve a complex mechanical
engineering problem than it is to develop the requisite modeling capabilities locally. Therefore,
one of the most important starting points for solving complex problems is to become aware of
the range of existing applications available. The next section provides an outline guide to a wide
spectrum of commercial and public-domain computer-aided engineering tools.
Sample Mechanical Engineering Applications
The following tables (Tables 15.3.2, 15.3.3, 15.3.4, 15.3.5, and 15.3.6) represent a sampling of available software for use in
typical mechanical engineering applications. The tables include software for the following tasks:
Computational solid mechanics and structural analysis
Computational fluid mechanics
Computational electromagnetics
Computational thermal analysis
Utility software for mechanical optimization and design
These tables only represent a sampling of the state of the software market at the time of publication
of this handbook. Since the world of computation and software development proceeds at a rapid pace,
it is a good idea to check the current literature (and the addresses listed) to gain a better idea of up-to-
date information. Finally, there are several abbreviations given in the tables that are enumerated below:
FE: finite-element application
FD: finite-difference application
FV: finite-volume application
BE: boundary-element application
AN: analytical or semi-analytical application
OPT: design optimization utility (often listed with preferred computational model)

Selection of Software Benchmarking


Benchmarks are software applications that are used to compare the performance of various computers,
or of various operating environments on the same computer. While benchmarks provide a simple means
of characterizing computer speed, they are commonly misused and often misunderstood.
TABLE 15.3.2 Sample Solid Mechanics Applications
Program | Type | Applications | Contact Address
ABAQUS | FE | General solid mechanics, heat transfer, and coupled thermal/stress problems | HKS, Inc., 1080 Main Street, Pawtucket, RI 02860
ADINA | FE | General solid mechanics, heat transfer, and coupled thermal/stress problems | ADINA, 71 Elton Avenue, Watertown, MA 02172
ANSYS | FE | General solid mechanics, heat transfer, and coupled thermal/stress problems | ANSYS, Inc., Johnson Road, P.O. Box 65, Houston, PA 15342-0065
BEASY | BE | Linear-elastic stress analysis and potential (e.g., thermal) problems | Computational Mechanics, Inc., 25 Bridge Street, Billerica, MA 01821
HONDO | FE | Static and dynamic stress analysis | Energy Science & Tech. Software Ctr., P.O. Box 1020, Oak Ridge, TN 37831-1020
LS-DYNA3D | FE | 3D explicit solid dynamics | Livermore Software Tech. Corp., 2876 Waverley Way, Livermore, CA 94550
MARC | FE | General solid mechanics, heat transfer, and coupled thermal/stress problems | MARC Analysis Corp., 260 Sheridan Avenue, Suite 309, Palo Alto, CA 94306
MSC/DYTRAN | FE | Coupled solid-fluid transient interactions | MacNeal-Schwendler Corp., 815 Colorado Boulevard, Los Angeles, CA 90041
MSC/NASTRAN | FE | General solid mechanics, heat transfer, and coupled thermal/stress problems | MacNeal-Schwendler Corp., 815 Colorado Boulevard, Los Angeles, CA 90041
NIKE 2D/3D | FE | 2D/3D implicit solid dynamics | Energy Science & Tech. Software Ctr., P.O. Box 1020, Oak Ridge, TN 37831-1020
NISA II | FE | General solid mechanics, heat transfer, and coupled thermal/stress problems | Engineering Mechanics Research, P.O. Box 696, 1607 East Big Beaver, Troy, MI 48099
STAAD III | FE | Structural analysis and design | Research Engineers, Inc., 22700 Savi Ranch, Yorba Linda, CA 92687
STARDYNE | FE | Static and dynamic structural analysis | The Titan Corporation, 9410 Topanga Canyon Boulevard, Suite 103, Chatsworth, CA 91311

In particular, many standard benchmarks provide information that is not always relevant to the performance issues
encountered in mechanical engineering practice. Furthermore, it is easy to generate benchmark figures
that are misleading by citing results out of context, since computer performance is often strongly
dependent on a variety of factors not immediately apparent from simple benchmark comparisons.
Therefore, in this section some representative schemes for comparing computer speed are presented to
aid in the important task of accurately gauging computer performance for mechanical engineering
practice.
Choice of Problem Test Suite for Benchmarking
There are many standard benchmarks commonly used in mechanical engineering applications to estimate
the relative performance of various aspects of computers. In particular, there are three common types
of numeric operations utilized in engineering computation:
TABLE 15.3.3 Sample Fluid Mechanics Applications
Program | Type | Applications | Contact Address
ADINA-F | FE | Incompressible viscous fluid flow | ADINA, 71 Elton Avenue, Watertown, MA 02172
ANSWER | FV | Laminar or turbulent fluid flow | Analytic and Computational Research, 1931 Stradella Road, Bel Air, CA 90077
ANSYS | FE | Fluid flow coupled analyses (among many other capabilities) | ANSYS, Inc., Johnson Road, P.O. Box 65, Houston, PA 15342-0065
ARC2D | FD | Navier-Stokes equations in 2D | COSMIC, University of Georgia, 382 East Broad Street, Athens, GA 30602
FDL3D | FD | Navier-Stokes equations (various explicit/implicit schemes used) | Wright Laboratory, WL/FIM, Wright Patterson AFB, OH 45433
FIDAP | FE | Fluid flow, heat and mass transfer | Fluid Dynamics International, 500 Davis Street, Suite 600, Evanston, IL 60201
FLOW-3D | FV | Fluid dynamics and heat transfer | Flow Science, Inc., P.O. Box 933, 1325 Trinity Drive, Los Alamos, NM 87544
FLUENT | FV | Navier-Stokes equations | Fluent, Inc., Centerra Resource Park, 10 Cavendish Court, Lebanon, NH 03766
INCA/3D | FV | Navier-Stokes equations | Amtec Engineering, Inc., P.O. Box 3633, Bellevue, WA 98009
MSC/DYTRAN | FE | Coupled solid-fluid transient interactions | MacNeal-Schwendler Corp., 815 Colorado Boulevard, Los Angeles, CA 90041
NEKTON | FE | 3D incompressible fluid flow, heat transfer | Fluent, Inc., Centerra Resource Park, 10 Cavendish Court, Lebanon, NH 03766
NISA/3D-FLUID | FE | General fluid flow and heat transfer | Engineering Mechanics Research, P.O. Box 696, 1607 East Big Beaver, Troy, MI 48099
RPLUS | FV | Navier-Stokes flows, reacting flows | NYMA, 2001 Aerospace Parkway, Brook Park, OH 44136
SPEAR/3D | FV | Navier-Stokes equations | Amtec Engineering, P.O. Box 3633, Bellevue, WA 98009
TEMPEST | FD | 3D fluid dynamics and heat transfer | BATTELLE Pacific NW Laboratories, P.O. Box 999, Richland, WA 99352
VSAERO | AN | Subsonic aerodynamics | Analytical Methods, Inc., 2133 152nd Avenue N.E., Redmond, WA 98052

Integer arithmetic, involving the calculation of sums, differences, products, and quotients of
numbers that represent signed integral quantities (i.e., numbers that have no decimal point or
exponent in their numeric representation)
Floating-point arithmetic, which involves the same basic arithmetic calculations as integer arith-
metic, but with operands obtained from numbers represented by the composition of sign, mantissa,
and exponent (although the computer represents floating-point numbers internally in base 2 format,
TABLE 15.3.4 Sample Electromagnetics Applications
Program | Type | Applications | Contact Address
ANSYS | FE | General electromagnetics (as well as solid, thermal, and fluid analysis) | ANSYS, Inc., Johnson Road, P.O. Box 65, Houston, PA 15342-0065
EMA3D | FD | Maxwell's equations | Electro Magnetic Applications, Inc., 7655 West Mississippi Avenue, Suite 300, Lakewood, CO 80226
FLUX2D/3D | FE | Static and dynamic electromagnetic fields | Magsoft Corp., 1223 Peoples Avenue, J Building, Troy, NY 12180
MAX3D | FV | Time-dependent Maxwell's equations | Wright Laboratory, WL/FIM, Wright Patterson AFB, OH 45433
MAXWELL3 | FE | Time-dependent Maxwell's equations | Energy Science & Tech. Software Ctr., P.O. Box 1020, Oak Ridge, TN 37831-1020
MSC/EMAS | FE | Maxwell's equations for antennas and electrical systems | MacNeal-Schwendler Corp., 815 Colorado Boulevard, Los Angeles, CA 90041
NISA/EMAG | FE | General electromagnetic problems | Engineering Mechanics Research, P.O. Box 696, 1607 East Big Beaver, Troy, MI 48099
TOSCA | FE | 3D electrostatics and magnetostatics | Vector Fields, Inc., 1700 North Farnsworth Avenue, Aurora, IL 60505

it is generally easier to convert these binary quantities to an equivalent base 10 form, so that they
are similar to more standard scientific notation for numeric representation used in engineering)
Transcendental computation, involving floating-point results utilizing more complex mathematical
functions, such as trigonometric functions, roots, exponentials, and logarithms (transcendental
calculations can be loosely defined as those that cannot be implemented using a finite number of
algebraic computational operations)
Each of these classes of numeric computation has particular characteristics: for instance, integer
arithmetic is the simplest and fastest type of numeric calculation, as this class of operand does not
require manipulation of exponents and mantissas (which often must be renormalized after calculations).
Transcendental calculations are the most expensive native numeric operation on computers appropriate
for engineering computation, and the speed of this class of operation is often independent of the
performance of integer and floating-point arithmetic (owing to the presence and quality of specialized
transcendental hardware available to speed up this class of computation). Most algorithms involved in
any branch of computing involve substantial integer arithmetic, so nearly all aspects of computing
performance increase as integer arithmetic improves. Many common forms of computer use (such as
word processing or some drawing packages) require little or no floating-point calculation, so these
applications have performance characteristics largely independent of the speed of a computer's floating-
point arithmetic hardware.
Most mechanical engineering applications (and especially computational mechanics software) are
highly dependent on floating-point performance issues, and so measuring the floating-point speed of
computers is generally important in mechanical engineering settings. While many floating-point appli-
cations in mechanical engineering involve negligible transcendental calculation (for example, most finite-
element or finite-difference schemes involve little or no transcendental results), there are some particular
software packages that require substantial amounts of transcendental computation (for example, Fourier
Series approximations, which require large-scale evaluation of trigonometric functions).
TABLE 15.3.5 Sample Thermal Applications
Program | Type | Applications | Contact Address
ABAQUS | FE | Heat transfer, as well as solid mechanics and coupled thermal/stress problems | HKS, Inc., 1080 Main Street, Pawtucket, RI 02860
ADINA | FE | Heat transfer, as well as solid mechanics and coupled thermal/stress problems | ADINA, 71 Elton Avenue, Watertown, MA 02172
ANSYS | FE | Heat transfer, as well as solid mechanics and coupled thermal/stress problems | ANSYS, Inc., Johnson Road, P.O. Box 65, Houston, PA 15342-0065
BEASY | BE | Potential simulations (including thermal) and linear-elastic stress analysis | Computational Mechanics, Inc., 25 Bridge Street, Billerica, MA 01821
MARC | FE | Heat transfer, as well as solid mechanics and coupled thermal/stress problems | MARC Analysis Corp., 260 Sheridan Avenue, Suite 309, Palo Alto, CA 94306
MSC/NASTRAN | FE | Heat transfer, as well as solid mechanics and coupled thermal/stress problems | MacNeal-Schwendler Corp., 815 Colorado Boulevard, Los Angeles, CA 90041
NISA II | FE | Heat transfer, as well as solid mechanics and coupled thermal/stress problems | Engineering Mechanics Research, P.O. Box 696, 1607 East Big Beaver, Troy, MI 48099
TACO3D | FE | Heat transfer | Energy Science & Tech. Software Ctr., P.O. Box 1020, Oak Ridge, TN 37831-1020
TAU | FE | Heat transfer | AEA Technology, Risley, Warrington, Cheshire, England WA3 6AT
TMG | FD | Heat transfer | MAYA Heat Transfer Tech. Ltd., 43 Thornhill, Montreal, Quebec, Canada H3Y 2E3

Depending upon the types of calculations expected, various benchmarks for engineering computation have been commonly
used for decades, including:
LINPACK benchmarks, which measure a mix of integer and floating-point instructions that are
commonly associated with manipulation of matrices such as those resulting from many engineer-
ing and scientific applications. LINPACK benchmarks are generally reported in terms of the
number of floating-point operations performed per second (generally abbreviated as FLOPS, or
MFLOPS and GFLOPS, as typical modern computers execute millions or billions of these
operations per second).
Whetstone benchmarks, which (supposedly) measure an instruction mix representative of engi-
neering and scientific computation, but with a considerably simpler implementation than that
available for the complicated LINPACK estimate. (Whetstone benchmarks are reported in terms
of the number of Whetstones performed per second).
Savage benchmarks, which measure both transcendental performance and round-off error in this
component of floating-point computation. The Savage benchmark is especially simple, and
because of its simplicity, its performance estimates are heavily dependent upon compiler tech-
nology quality.
Pure-integer benchmarks, such as the Sieve of Eratosthenes, which require no floating-point
computation at all, measuring only the speed of integer arithmetic. Integer benchmarks are
generally the most commonly quoted measures of computer performance, but these benchmarks
are often only of marginal utility in real-world mechanical engineering applications.
TABLE 15.3.6 Sample Optimization Utilities
Program | Type | Applications | Contact Address
ASTROS | FE OPT | Preliminary structural design or structural modification | USAF WL/FIBRA, Wright Laboratory, Wright Patterson AFB, OH 45433
CSAR/OPTIM2 | FE OPT | Automated optimal structural design for NASTRAN | Computerized Structural Analysis Corp., 28035 Dorothy Drive, Agoura Hills, CA 91301
DOT | OPT | General-purpose numerical optimizer suitable for use with other programs | Vanderplaats, Miura & Associates, 1767 S. 8th Street, Suite M200, Colorado Springs, CO 80906
GENESIS | FE OPT | General-purpose structural optimization | Vanderplaats, Miura & Associates, 1767 S. 8th Street, Suite M200, Colorado Springs, CO 80906
OPTDES | OPT | Design and optimization toolbox | MGA Software, 200 Baker Avenue, Concord, MA 01742
OPTISTRUCT | FE OPT | Optimum structural design with finite elements | Altair Engineering, 1757 Maplelawn Drive, Troy, MI 48084
STARSTRUC | FE OPT | Structural optimization application | Zentech Incorporated, 8582 Katy Freeway, Suite 205, Houston, TX 77024

This is because the instruction mix of engineering calculations usually involves a considerable
proportion of floating-point computation.
While these and other benchmarks are commonly quoted and compared, there are many practical
problems with their use. Many benchmarks may exhibit wildly different values depending upon minor
modifications of the source code, and hence may lead to inaccurate results. For instance, in the C language
implementation of many matrix benchmarks, various ways of declaring integer subscripts to be used as
matrix indices can result in substantial changes in the performance measured. Since the C language
specification includes a register declaration (which advises the compiler that the integer variable will
be used often, and is hence best placed in a fast on-chip storage location such as a register, instead of
being stored in slower main random-access-memory), the use of register declarations for some bench-
marks will result in substantial speed-ups on many CPUs that contain many general-purpose registers.
In some cases, published comparisons are based on a lowest-common-denominator approach, so that
if one CPU doesn't support general-purpose register allocation (for example, many earlier Intel CPU
designs), this feature is disallowed on all CPUs compared, which can give rise to misleading performance
results.
In order to remove some of these cross-platform performance problems, better benchmarking schemes
have evolved. One improved scheme involves standardized integer and floating-point calculations mea-
sured in SpecMarks, after the computer-industry standards organization (i.e., the Standard Performance
Evaluation Corporation) that proposes them. These benchmarks are termed the "SpecInt" and "SpecFp",
and measure integer and floating-point performance, respectively. The standards are updated every few
years, as CPU manufacturers often implement design features specifically oriented to improve these
important benchmarks (it should be noted that the problem of hardware manufacturers tuning designs
to improve standard benchmark performance is a widespread one and is not only encountered among
CPU vendors). The particular benchmark is generally suffixed with the year of its standardization (e.g.,
SpecInt95 for the 1995 version of the integer benchmark suite). SpecMark benchmarks are widely quoted
figures available for various processors, and the SpecMark family has been evolving to model more
accurately the increasingly complicated schemes available with modern CPU architectures to speed up
overall computational performance. Unfortunately, many published SpecMark benchmarks are given
only for integer performance, so the important issue of floating-point speed may not be available without
more detailed research efforts.
For practicing mechanical engineers, the best benchmarks are those that most accurately reflect the
computational characteristics of the problems encountered. One good way to measure the relative
performance of various computers is to perform large-scale simulations of typical practical problems on
the individual computer platforms. The best comparisons are obtained using the same software package
on each computer, but if this is not possible, different packages can be run on different machines and
the overall results compared (this latter case results in a higher uncertainty, as differences between the
individual software implementations may cloud issues of hardware performance). The wider the range
of problems run, the better the estimate of relative performance that will be obtained. These larger-scale
benchmarks can be supplemented with simpler benchmarks such as those discussed above to obtain an
estimate of relative computer performance characteristics.
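An application-level benchmark of the kind recommended here can be as modest as timing a representative kernel, such as the factorization and solution of a dense matrix of a size typical of daily practice, and reporting an approximate floating-point rate. The sketch below uses a dense solve as the kernel; the matrix size and the nominal operation count of about 2n^3/3 for the factorization are the usual textbook assumptions:

import time
import numpy as np

# Crude LINPACK-style benchmark: time one dense solve and report an approximate MFLOPS rate.
n = 1000                                   # representative problem size (assumption)
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization and triangular solves
elapsed = time.perf_counter() - start

flops = 2.0 * n**3 / 3.0                   # nominal count for the factorization
print("n = %d: %.3f s, roughly %.0f MFLOPS" % (n, elapsed, flops / elapsed / 1.0e6))
print("residual norm:", np.linalg.norm(A @ x - b))

Timings of this sort should be repeated several times and interpreted with the cautions discussed below, since operating system activity, memory behavior, and compiler or library quality all influence the numbers obtained.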
Accurate Measurement of Program Performance
Benchmark estimates of computer performance are a useful means for evaluating computer quality, but
there are additional considerations in estimating their reliability. In particular, many features available on
modern computers conspire to complicate the analysis and interpretation of standard benchmark measures.
Some of these issues are addressed below.
First, one must consider the hardware and software environment utilized on the computer compared.
Different computers provide different support for various types of numeric computation, and these
disparate levels of support may substantially alter benchmark results. For instance, many older computers
store integers as 16-bit quantities, where more modern computers use 32- and 64-bit storage locations
to represent integers. Other things being equal, it is a lot easier to manipulate 16-bit integers than longer
representations, and so benchmarks that do not require any integers longer than 16 bits (e.g., the Sieve
of Eratosthenes utilized to find all of the primes smaller than 65,536 = 2^16) may give potentially inaccurate
results, especially if problems requiring manipulation of integers larger than 65,536 will be required
(this latter case occurs nearly always in engineering practice). In order to compare benchmarks accurately,
it is important to know some background information concerning the range of numeric representations
used in the published results, and how this range corresponds to expected computational practice and
to the ranges readily implemented on a particular CPU.
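
The sieve calculation referred to above is simple enough to reproduce in a few lines of C, which helps
clarify why it exercises only a narrow range of integer arithmetic. The following sketch (the limit of
65,535 and the byte-per-flag storage are illustrative choices) counts the primes below 2^16; note that all
of the values the problem itself manipulates fit comfortably in 16-bit unsigned integers, so the benchmark
says little about performance on the wider integers common in engineering codes.

#include <stdio.h>
#include <string.h>

#define LIMIT 65535U            /* largest value tested: just below 2^16 */

int main(void)
{
    static unsigned char composite[LIMIT + 1];   /* one flag per candidate */
    unsigned long i, j, count = 0;

    memset(composite, 0, sizeof composite);

    for (i = 2; i * i <= LIMIT; i++)             /* classic sieve loops */
        if (!composite[i])
            for (j = i * i; j <= LIMIT; j += i)
                composite[j] = 1;

    for (i = 2; i <= LIMIT; i++)
        if (!composite[i])
            count++;

    printf("%lu primes below %lu\n", count, (unsigned long)LIMIT + 1);
    return 0;
}
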
In addition, different types of operating systems can also influence performance of benchmarks.
Modern preemptive multitasking operating systems slightly compromise program execution speed by
adding the overhead effort of a scheduler subsystem (these and other operating system principles are
detailed earlier in this chapter), and this additional operating system software effort simultaneously
improves the overall quality of the operating environment of the computer, while potentially degrading
its numeric performance. If a benchmark requires substantial amounts of random access memory (such
as the LINPACK benchmarks, where large quantities of matrix storage must be allocated), then operating
systems that implement protected paged virtual-memory subsystems may exhibit substantial slowdowns,
especially if the computer's physical memory is smaller than the virtual memory demanded by the
benchmark application. In these cases, the benchmark actually measures a complex combination of CPU,
operating system, and disk subsystem performance, as data are paged from main memory to disk and
back by the virtual memory resource subsystem.
Finally, the computer language utilized to implement the benchmark may alter the results as well.
This phenomenon may be due to obvious causes (such as the overall quality of available compilers,
which may penalize newer but faster CPUs that do not have carefully optimized native compilers available
relative to slower but more mature platforms), or more subtle effects (such as the original C language
specification, which required all floating-point arithmetic to be done in more accurate but less efficient
double-precision storage; this feature of the C language often results in slower performance on some
benchmarks than that obtained by using single-precision arithmetic, which is the usual default in the
FORTRAN language).
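
The single- versus double-precision issue is easy to demonstrate. The short program below (a sketch only;
the loop length and the harmonic-sum kernel are arbitrary choices) accumulates the same sum in float and in
double variables. Under the original K&R C rules the float version would nevertheless have been evaluated
in double precision, which is the behavior described above; an ANSI C compiler is permitted to keep the
float arithmetic in single precision, which may trade accuracy for speed.

#include <stdio.h>

int main(void)
{
    float  sum_single = 0.0f;       /* single-precision accumulator */
    double sum_double = 0.0;        /* double-precision accumulator */
    long   i, n = 10000000L;

    for (i = 1; i <= n; i++) {
        sum_single += 1.0f / (float)i;
        sum_double += 1.0  / (double)i;
    }

    printf("single-precision sum: %.7f\n",  sum_single);
    printf("double-precision sum: %.15f\n", sum_double);
    return 0;
}
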

Cross-Platform Comparison of Benchmark Metrics


In addition to specific computer software and hardware issues outlined above, there are other important
factors that govern comparisons of benchmarks run on different types of computers. When considering
performance analyses of different computer platforms, it is important to account for such factors as:
Presence or absence of specialized hardware, such as floating-point processors, vector processors,
or other numeric accelerators
Relative quality of optimizing compilers available for the individual platforms
Differences among operating system overhead for the various computers to be compared
Specialized hardware for auxiliary tasks, such as input/output accelerators for problems requiring
substantial access to input/output subsystems (for example, mainframes and supercomputers often
have only marginally better raw floating-point performance than many workstations and high-end
PCs, but for more realistic problems involving substantial transfer of data, these more expensive
computers are considerably faster than simple benchmarks generally indicate)
Many of these specific hardware issues are outlined elsewhere in this chapter, but all of them
should be investigated to ensure that accurate benchmark comparisons are obtained. In all cases, the
basic principle is to be able to normalize the benchmark results to account for variations in the equipment
tested (vs. what might be purchased), as well as trade-offs in performance due to desirable features (such
as preemptive multitasking resources in an operating system). A benchmark suite containing a range of
applications that exercise the aspects of computer performance that are of practical importance in the
particular mechanical engineering enterprise can be a useful tool for evaluating relative performance of
different computers, different operating environments, and different software applications.

15.4 Computer Intelligence


Computer intelligence encompasses a wide variety of thought and practice, ranging from philosophical
considerations of whether computers are capable of thought to the implementations of complex programs
that can play chess as well as virtually any human being. On a purely pragmatic level, computer
intelligence can be considered as including nearly every application of computation found in this chapter,
as all of the speedy calculations inherent in large-scale computer models can be viewed as the leveraging
of the natural (but slow) computational paradigms used by humans to solve problems for centuries. A
more narrow and useful view of computer intelligence can be developed by considering the computer
as an approximation to a thinking machine, in that computer intelligence can be limited to considerations
of simulations of higher-level human activities such as judgment and decision making, instead of the
rote issues of multiplication and addition. The narrowest viewpoint of computer intelligence (often
connected with the concept of strong artificial intelligence) is the belief that computers are actually
capable of true thought, just as humans are.
In this section, the intermediate philosophy of so-called weak artificial intelligence is considered,
by which computers are utilized as useful adjuncts for modeling high-level human activities such as
decision making. This practical approach to machine intelligence relegates purely computational issues
(such as the computational mechanics applications of the last section) to nonintelligent status, yet
sidesteps the philosophical quagmire of whether the computer is actually thinking. In addition, there are
many other topics in the field of artificial intelligence, such as autonomous robots, machine vision, and
intelligent control systems, that are addressed elsewhere in this handbook. Finally, the field of artificial
intelligence is one of the richest and most diverse areas of research in computer science, with applications
that span the full range of human endeavor; the material presented in this section is necessarily only a
brief introduction to a few specific areas of interest to practicing mechanical engineers. Broader and
deeper treatments of this important field can be found in references such as Shapiro (1992) and Kurzweil
(1990).

Artificial Intelligence
Considerations of artificial computer intelligence often begin with the famous Turing test, in which a
computer is judged to be capable of thought if and only if expert human judges (who are carefully
insulated from being able to sense whether they are dealing with another human or with a computer)
pose a series of questions and receive corresponding answers: if they cannot determine whether the
questions were answered by a human or a computer, then the computer is judged to be capable of human
thought. Although no computer has passed the Turing test (and it is implicitly assumed that the question
"are you a human or a computer?" is disallowed), computers seem to be gaining ground.
A simpler motivation is to consider the task of making machines intelligent, in that they behave in
intuitive and predictable ways, much like people behave (or in some cases, how we wish people would
behave). In this broader sense, artificial intelligence ranges in mechanical engineering application all
the way from robotics, where machines behave as highly skilled machinists and industrial technicians,
to computer interfaces, where such desirable characteristics as consistency and user friendliness can
be seen as emulating desirable human traits in software.
Classification of Artificial Intelligence Methods
There are many facets of artificial intelligence, and only a few will be outlined below. The various
threads that constitute this important field include (but are not necessarily limited to):
Expert systems, which attempt to emulate the decision-making capacities of human experts
Neural networks, which mimic the perceived operation of human brains in the hope that computers
can become self-learning

Fuzzy logic, which utilizes generalizations of classical mathematics to develop models capable
of handling imprecise or incomplete data
Data mining, which involves seeking implicitly defined new patterns in existing large data sets
Symbolic mathematics, which utilizes computers to perform many of the most difficult aspects
of mathematical derivations, such as integration and algebraic manipulation
Machine vision (or computer vision), where visual pattern-seeking algorithms are used to approx-
imate human visual processing
Comparison of Artificial Intelligence and Computational Science
Artificial intelligence and computational science are two contrasting approaches to the same problem in
engineering: leveraging the productivity of human engineers by efficient utilization of computer tech-
nology. How this augmenting of human efforts is realized is vastly different between the two approaches,
however. In artificial intelligence applications, the typical rationale for the computer application is to
eliminate wholesale human interaction from some component of the engineering analysis or design
process. As a concrete example, expert systems used in design applications have the general goal of
codifying the rules and inferences made by a human engineering expert into a computer implementation
that no longer requires that the engineer make those design decisions. On the other hand, computational
science (which represents the integration of high-level engineering knowledge and judgment, relevant
computer science, and mathematical numerical analysis) is motivated by the desire to immerse the
engineer in the problem-solving process by off-loading the difficult or tedious aspects of the engineering
workflow to the computer, while letting the human engineer make the high-level decisions supported
by computer tools. This process of putting the engineer into the design/analysis loop (instead of the
artificial intelligence motivation of removing engineers from the same process) leads to the class of
computer-assisted engineering applications collectively termed engineer in the loop solutions. Several
particular examples of engineer-in-the-loop applications to computer-aided drafting and design are
presented in a later section of this chapter.

Expert Systems in Mechanical Engineering Design


Expert systems are one of the oldest and most successful applications of artificial intelligence. Expert
systems have been used in such diverse fields as engineering, medicine, and finance, and their range of
applicability is enormous. There is an incredible variety of software tools available for doing expert
systems development, and this rich library of development tools makes this field especially productive
for generating artificial intelligence applications. A substantial component of artificial intelligence prac-
tice arises from expert systems, including efforts to develop these applications as well as research in
competing technologies (such as neural networks) that have developed at least in part as a reaction to
the limitations of expert systems approaches to computer intelligence solutions.
Overview of Expert Systems Methods
A simple schematic of an expert system is shown in Figure 15.4.1. There are three fundamental
components of the system: a knowledge base representing the concrete expression of rules known to a
human expert (and previously communicated to the computer), an inference engine that encapsulates
the how of the problem-solving process that the expert system is designed to implement, and a data
base representing the data relevant to the problem at hand (such as material parameters for a mechanical
design expert system, or answers to questions from a sick human patient for a medical diagnostic
application). Other important components (such as input/output systems or the computer itself) are not
shown in this simplified figure.
The expert system uses reasoning based on the rules present in the knowledge base, along with the
data at hand, to arrive at appropriate output in the form of decisions, designs, or checks of specification.
Once implemented, a well-designed expert system often works very well, but the process of programming
such a system is often very difficult.

FIGURE 15.4.1 Schematic representation of an expert system.

In addition to the obvious aspects of the problem, including implementing the data base, there are
other nontrivial issues that must be solved, such as coaxing
information out of experts who are not always sure what they are doing when they make informed
judgments. While programming expert systems is a complicated task, any incomplete progress on
elucidating rules for the knowledge base or implementation of an appropriate inference engine may
ultimately compromise the overall utility of the resulting expert system.
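
The division of labor shown in Figure 15.4.1 can be illustrated with a deliberately tiny forward-chaining
sketch in C. The rules and fact names below are hypothetical examples invented for illustration (they are
not drawn from any actual design system): the knowledge base is a table of if/then rules, the case-specific
data are a list of known facts, and the inference engine simply fires rules repeatedly until no new facts
can be derived.

#include <stdio.h>
#include <string.h>

#define MAX_FACTS 32

struct rule {
    const char *if_fact;        /* premise    */
    const char *then_fact;      /* conclusion */
};

/* knowledge base: hypothetical rules elicited from a human expert */
static const struct rule kb[] = {
    { "load_is_cyclic",      "check_fatigue" },
    { "check_fatigue",       "need_SN_data"  },
    { "temperature_is_high", "check_creep"   },
};

static const char *facts[MAX_FACTS];    /* case-specific data */
static int n_facts = 0;

static int known(const char *f)
{
    int i;
    for (i = 0; i < n_facts; i++)
        if (strcmp(facts[i], f) == 0)
            return 1;
    return 0;
}

static void assert_fact(const char *f)
{
    if (!known(f) && n_facts < MAX_FACTS)
        facts[n_facts++] = f;
}

int main(void)
{
    int changed = 1, i, nrules = (int)(sizeof kb / sizeof kb[0]);

    assert_fact("load_is_cyclic");          /* initial problem data */

    while (changed) {                       /* inference engine: fire rules   */
        changed = 0;                        /* until no new facts are derived */
        for (i = 0; i < nrules; i++)
            if (known(kb[i].if_fact) && !known(kb[i].then_fact)) {
                assert_fact(kb[i].then_fact);
                changed = 1;
            }
    }

    for (i = 0; i < n_facts; i++)
        printf("known or derived fact: %s\n", facts[i]);
    return 0;
}
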
Advantages and Disadvantages of Expert Systems in Engineering
The main advantage of expert systems is that they permit an incredible leveraging of human expertise.
Because they are software systems, they are readily cloned (copied), and they don't forget or misrepresent
what they've been taught. The expertise of the human experts used to create the expert system's knowledge
base is captured within the expert system, and the knowledge base can be readily generalized
to include new information or modified to accommodate new ways of dealing with old situations. Human
beings have a certain amount of psychological inertia that often prevents them from learning new
tricks, but this problem does not affect expert systems. In addition, other human foibles, such as emotion
or indigestion, can foil the opinions of human experts, but expert systems are completely consistent in
their exercise of computer decision-making skills. Finally, expert systems are readily scalable to take
advantage of faster computers or parallel computational resources, which permits them to solve larger
or more complicated problems than human experts could possibly handle.
The biggest disadvantage of expert systems is that they represent only the knowledge base that has
been programmed into them, so that such peculiarly human features as engineering judgment or creative
problem-solving (which are often important components of engineering decision-making practice) are
notoriously absent in this class of artificial intelligence. Another serious disadvantage of expert systems
is that they require prodigious amounts of training, which makes their development and implementation
costs large for appropriately complex problems. The latter trait of not being capable of autonomous
learning is such an important limitation of expert systems approaches that the field of neural networks
has developed in large part to remedy this important shortcoming of expert systems technology.
Sample Applications of Expert Systems in Engineering
Expert systems are among the most successful implementations of artificial intelligence, and along with
this success has come a wide range of problems that have been shown to be amenable to solution using
expert systems technology. Areas in which expert systems have been utilized include engineering design
(usually in settings of limited scope, where the rules of practice for design can be readily articulated for
programming an expert system), management (where expert systems-based decision-making tools have
enjoyed considerable success), biomedicine (where expert systems for diagnostic applications have
worked well in practice), and finance (where expert systems can be interfaced with computer models
for financial modeling to elicit information that would be too complex for most humans to discern). In
addition, research oriented toward using expert systems in conjunction with fuzzy logic has shown

considerable promise for generalizing expert systems approaches to handle noisy, poorly specified, or
incomplete data.

Neural Network Simulations


Neural networks represent a promising approach to developing artificial intelligence applications in
engineering, and neural nets have many characteristics that are very different from traditional expert
system-based solutions. The basic principles of neural networks are derived from inquiries into the nature
of biological neural networks such as the human brain. These biological systems are not well understood,
but are believed to function efficiently because of the web-like system of interconnected nerve cells
(neurons), which differs markedly from the regular topology of most conventional computers. Artificial
computer-based neural networks represent an attempt to capture some of the beneficial characteristics
of human thinking, such as autonomous programming and simple pattern-based learning, by mimicking
what is known of the human brain. There are many good introductory references to the basic principles
of neural network modeling, including books by Fausett (1994) and Wasserman (1989).
Overview of Neural Network Modeling
A computational neural network can be idealized as a three-layer system, consisting of individual
processors, termed units, and a web of communications circuitry, called connections. The topology of
the network is diagrammed in Figure 15.4.2. The individual processors operate only on the data that
they store locally or that they send and receive via the communications web. This distributed memory
architecture makes neural network computing readily parallelizable. In practice, the hidden layer that
connects the input and output layers may be composed of a variety of individual component layers, but
this aspect of the hidden layer's internal structure can be ignored in this elementary overview of neural
networks.

FIGURE 15.4.2 Schematic representation of a neural network.

While neural networks are theoretically capable of performing standard deterministic computation
(such as the numeric algorithms used in computational mechanics), in practice neural schemes cannot
presently compete with traditional computer architectures for these relatively straightforward tasks. It is
in artificial intelligence applications that neural networks are used primarily, because they have an
important advantage over virtually every other computational model: neural networks learn in a very
natural and relatively autonomous fashion. In many regards, neural networks are self-programming, and

this incredible characteristic makes them very useful in practical computer intelligence applications. In
simple terms, they learn by example and develop the capacity to generalize what they have learned to
new situations.
The mechanism by which the most common forms of neural networks learn to solve a problem is
relatively straightforward and similar to human learning: a battery of training examples are fed into the
inputs of the neural network, while corresponding outputs are compared to known correct values. An
alternative mathematical interpretation of this learning is that a weighted sum of the inputs is computed
and filtered in a nonlinear manner through the hidden layers to the output layer, and this output response
is compared to known output patterns in order to compute an error measure. This learning process is
repeated many times utilizing some form of error minimization process to perform a nonlinear regression
on the problem data, until the response of the network is sufficiently close to the desired response on
all of the sample learning cases. At this point, the system has been trained to solve some set of problems,
in that it can generate outputs for any set of input data. The technique described above is termed
backpropagation of error and is commonly used to develop and train neural network models.
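
A minimal numerical sketch may make the training loop described above more concrete. The following C
program implements online backpropagation for a small two-input, two-hidden-unit, one-output network
learning the exclusive-or function; the layer sizes, learning rate, epoch count, and random initialization
are illustrative assumptions, and a real application would use a far larger network, a separate validation
set to guard against overfitting, and a more careful training procedure.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

int main(void)
{
    /* the four training examples for exclusive-or and their target outputs */
    double x[4][2] = { {0,0}, {0,1}, {1,0}, {1,1} };
    double t[4]    = {  0,     1,     1,     0   };

    double w1[2][2], b1[2], w2[2], b2 = 0.0, eta = 0.5;
    int i, j, p, epoch;

    srand(1);                                   /* small random initial weights */
    for (i = 0; i < 2; i++) {
        b1[i] = 0.0;
        w2[i] = (double)rand() / RAND_MAX - 0.5;
        for (j = 0; j < 2; j++)
            w1[i][j] = (double)rand() / RAND_MAX - 0.5;
    }

    for (epoch = 0; epoch < 20000; epoch++) {
        for (p = 0; p < 4; p++) {
            double h[2], y, dy, dh[2];

            /* forward pass: weighted sums filtered through the hidden layer */
            for (i = 0; i < 2; i++)
                h[i] = sigmoid(w1[i][0]*x[p][0] + w1[i][1]*x[p][1] + b1[i]);
            y = sigmoid(w2[0]*h[0] + w2[1]*h[1] + b2);

            /* backward pass: propagate the output error back to the weights */
            dy = (t[p] - y) * y * (1.0 - y);
            for (i = 0; i < 2; i++)
                dh[i] = dy * w2[i] * h[i] * (1.0 - h[i]);

            for (i = 0; i < 2; i++) {
                w2[i] += eta * dy * h[i];
                b1[i] += eta * dh[i];
                for (j = 0; j < 2; j++)
                    w1[i][j] += eta * dh[i] * x[p][j];
            }
            b2 += eta * dy;
        }
    }

    /* report the trained response; it should approach 0, 1, 1, 0 if training
       succeeded (a different seed or more hidden units may be needed) */
    for (p = 0; p < 4; p++) {
        double h0 = sigmoid(w1[0][0]*x[p][0] + w1[0][1]*x[p][1] + b1[0]);
        double h1 = sigmoid(w1[1][0]*x[p][0] + w1[1][1]*x[p][1] + b1[1]);
        printf("%.0f XOR %.0f -> %.3f\n", x[p][0], x[p][1],
               sigmoid(w2[0]*h0 + w2[1]*h1 + b2));
    }
    return 0;
}
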
In practice, neural networks are not quite this simple to program, as there are many practical issues
that must be contended with, and solving these difficult problems is an area of active research in the
neural network intelligence field. The whole issue of how the weights are assigned, propagated through
the model, and how they evolve to yield an accurate regression of output response upon input data has
been glossed over, as has the more vexing problem of how best to train the network to learn autonomously
instead of simply memorizing the peculiarities of the training ensemble; this last undesirable trait is
termed overfitting and is an important stumbling block in autonomous neural network development.
Regardless of these practical considerations, the fact that neural networks are easy to construct and
straightforward to program has made them one of the fastest-growing and most important areas of
application of computer intelligence.
Advantages and Disadvantages of Neural Networks in Engineering
The most obvious advantage of neural networks is that they are very simple to implement and to program.
Since they are readily parallelizable and extremely simple to train, they can be easily applied to a wide
variety of useful applications requiring pattern matching and machine recognition skills. Neural networks
are also inherently scalable, in that their distributed architecture makes them easy to adapt to large
problems.
One of the most important disadvantages of neural-network systems is that they are presently used
only for certain classes of artificial intelligence problems: this limitation arises from their simple
architecture and the difficulty of achieving a completely rigorous understanding of how neural networks work.
Other important disadvantages include difficult considerations of topology, number and type of connec-
tions, training methods, and avoidance of pathological behaviors such as overfitting. As neural networks
become more commonplace and well understood, some of these disadvantages should become less
important.
Sample Applications of Neural Networks in Engineering
In addition to the obvious examples of pattern matching in production and quality control, neural
networks have found wide acceptance in a variety of important engineering roles. Other pattern-matching
applications commonly occur in imaging problems, and neural networks have performed well in these
cases. In addition, neural networks are natural candidates for use in computer-aided decision-making
applications, including management and business decision-making cases. In these settings, other com-
puter programs (such as computer analysis programs) may be utilized to generate case studies to be
used to train the neural network, and the resulting artificial intelligence application can readily determine
reasonable results from generalization of the training examples. When decisions are to be made based
on running very expensive computer programs (such as nonlinear transient engineering analysis programs
discussed in the computational mechanics section of this chapter), it is often more efficient to run a suite

of training problems, and then let a large neural network handle the prediction of results from other
values of input data without rerunning the original analysis application.

Fuzzy Logic and Other Statistical Methods


Fuzzy logic represents an extension of the principles of classical mathematics and statistics to handle
real-world systems, which are often imprecisely defined and poorly delineated. In simple terms, fuzzy
logic achieves this generalization of traditional logic by replacing Boolean values of either true or false
with a continuous spectrum of partially true and partially false. A common analogy is to think of
traditional statistical principles as based on black and white issues (each premise is either true or false),
while fuzzy logic allows premises to take on different shades of gray (ranging continuously from black
to white) and permits more realistic responses to be captured within its conceptual framework. For
example, such imprecise and yet common qualifiers as slightly and extremely are readily handled
within the framework of fuzzy methods, yet cause considerable difficulties for many traditional alternative
modeling schemes.
Engineering systems based on fuzzy logic represent an intermediate state between traditional math-
ematical representations, such as standard deterministic design and analysis techniques, and purely rule-
based representations, such as expert systems. In principle, fuzzy logic methods are superficially similar
to expert systems in that they are both dependent upon artificial intelligence and both remove the engineer
from many aspects of the design process. In practice, however, fuzzy systems are more closely related
to neural networks than expert systems, as fuzzy models permit considerable automation of the process
of converting descriptions of system behavior into a corresponding mathematical model. There are many
good references on fuzzy logic fundamentals (e.g., Klir and Folger, 1988).
Overview of Fuzzy Set Theory and Fuzzy Logic Principles
In traditional set theory, objects are categorized as either members of a set or of its complement (i.e.,
everything in the universe except the set in question). In real-life applications, it is much more difficult
to categorize whether anything is a member of a set. In most practical applications, engineers are
interested in determining quickly whether objects are members of a set, and such determinations are
naturally fuzzy, both in the fact that they may be imprecise in their underlying definitions, and in the
recognition that the test for set membership may be carried out only on small samples taken from a
larger product population. In order to permit modeling mathematical constructs based on such incomplete,
poorly defined, or noisy data, fuzzy set theory and fuzzy logic were developed (see Section 19.15).
In a similar manner, traditional mathematical logic represents premises as either true or false. Fuzzy
logic recasts these extremes by considering the degree of truth of individual premises as a spectrum of
numbers ranging from zero to one. The theory of fuzzy logic is based on extending more complex
principles of traditional logic, such as logical inference and Boolean operations, to a wider realm that
preserves the structure of the classical approach, yet permits an astonishing amount of realistic behavior
by permitting considerable flexibility to model poorly defined systems. There are many connections
between fuzzy set theory and fuzzy logic, including the obvious one of logical test of membership in
fuzzy sets. Many principles of one field (such as statistical clustering applications from fuzzy set theory)
are readily utilized in the other.
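
The basic machinery of fuzzy membership and the min/max generalizations of the Boolean AND and OR operations
can be sketched in a few lines of C. The trapezoidal membership function and the particular "warm" and
"fast" sets below are invented examples used only to illustrate the calculations.

#include <stdio.h>

static double clamp01(double v)
{
    return v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v);
}

/* membership rises from a to b, holds at 1 between b and c, falls from c to d */
static double trapezoid(double x, double a, double b, double c, double d)
{
    if (x <= a || x >= d) return 0.0;
    if (x < b)            return clamp01((x - a) / (b - a));
    if (x <= c)           return 1.0;
    return clamp01((d - x) / (d - c));
}

/* fuzzy AND and OR are commonly taken as the minimum and maximum */
static double fuzzy_and(double p, double q) { return p < q ? p : q; }
static double fuzzy_or (double p, double q) { return p > q ? p : q; }

int main(void)
{
    double temperature = 28.0, speed = 55.0;

    double warm = trapezoid(temperature, 15.0, 20.0, 30.0, 35.0);  /* "warm" */
    double fast = trapezoid(speed,       40.0, 60.0, 60.0, 80.0);  /* "fast" */

    printf("degree of 'warm' = %.2f, degree of 'fast' = %.2f\n", warm, fast);
    printf("warm AND fast = %.2f\n", fuzzy_and(warm, fast));
    printf("warm OR  fast = %.2f\n", fuzzy_or(warm, fast));
    return 0;
}
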
Advantages and Disadvantages of Fuzzy Principles in Engineering
One of the most important advantages of fuzzy applications in mechanical engineering is their flexibility.
Many industrial models represent engineering systems that are either not perfectly characterized or are
continuously evolving to better suit changing needs. In these settings, fuzzy systems are often excellent
candidates for control and decision making, because the resulting fuzzy model is simpler than traditional
approaches, owing to its intrinsic ability to adapt to poorly defined, incomplete, or changing data. Fuzzy
inference systems, which are similar to expert systems in that they are utilized for rule-based decision
making, are similarly flexible to specify and to implement, especially for engineers experienced in
applications of fuzzy principles to engineering control systems.

Another important advantage of fuzzy models is that they are relatively simple to develop and
implement. This advantage is primarily due to the fact that they can more accurately model realistic
engineering systems which are difficult to specify either completely or accurately. In general, simpler
models are less expensive to develop, and the simplicity of fuzzy models for engineering applications
is often directly correlated to improved economies in implementation and maintenance.
Probably the most serious disadvantage of fuzzy methods in practice is that they are a relatively new
approach to artificial intelligence applications. Because of their novelty, there is not much professional
experience with these methods, and a critical mass of software tools for specifying and developing fuzzy
models has only recently become available. Until these methods are commonly utilized and understood,
it is difficult to categorize their practical advantages and disadvantages.
Sample Applications of Fuzzy Methods in Mechanical Engineering
One of the first successful commercial applications of fuzzy logic in an engineering system was developed
for autofocusing features in video camcorders. Early autofocus systems for consumer-grade video
recorders failed to work in a wide variety of settings, as it is difficult to design a control system for such
a complex and dynamic system. In addition to the technical optical issues of rangefinding and focus
control, there are other pertinent (and naturally fuzzy) questions that must be solved to enable such
automatic devices to work robustly in a motion-recording setting:
What do you focus on when there are many objects in the field of view?
What if some or all of these objects are moving through the field?
What should be done when some objects are approaching and others receding from the viewer?
Each of these issues is readily modeled using fuzzy logical principles, and recasting such device
control problems into a fuzzy statistical setting often transforms the underlying engineering system into
a much simpler formulation. Since simpler models are generally easier to implement and to modify, the
cost of developing and extending fuzzy-based engineering models is often substantially lower than the
cost of competing traditional control systems.
The range of problems thought to be amenable to fuzzy implementations is enormous. Examples
range from automotive applications, where sensors generally reflect continuous response instead of
Boolean extremes, to tasks requiring categorization of objects as representatives of sets, such as pattern-
matching applications or quality-control applications. Many household appliances now incorporate fuzzy
methods for control, and the growth of this component of artificial-intelligence engineering applications
is expected to be rapid. In general, the more imprecise the data available from an engineering system,
the more likely a fuzzy approach will work well in comparison to more traditional artificial intelligence
approaches.

15.5 Computer-Aided Design (CAD)


Joseph Mello

Introduction
The computer is a powerful tool in the design of mechanical components and entire mechanical engi-
neering systems. There are currently many diverse computer applications that were created to augment
the mechanical design process. The development of a mechanical component or system can generally
be divided into three tasks: design, analysis, and manufacturing.
Historically, the design task was restricted to describing the part geometrically. This task included
conceptual sketches, layouts, and detailed drawings, all of which were created on a drawing board. The
designer or draftsman generally created drawings using orthographic or axonometric projections of the
three-dimensional object being designed or developed. These drawings served to illustrate the mechanical
part, and in the case of a detailed dimensional drawing, served as a data base for transferring the design
information to the manufacturer or to the shop floor.
The drawing performed in the design process has been automated and is now performed using the
computer and dedicated software. This automation process is termed computer-aided design, or CAD.
CAD generally refers to the geometric description of the part using the computer, but as the computer
and its corresponding software have evolved, other tools have been integrated within contemporary CAD
systems which are historically categorized as analysis or manufacturing tasks. This form of design,
analysis, and manufacturing integration will be discussed later in this section.
CAD systems presently fall into two general categories: those based on entities and those based on
solids or features. This classification refers to the basic tools used to describe a mechanical part. The
entity-based system can be thought of as a three-dimensional electronic drawing board. The designer in
this case uses lines, arcs, and curves to describe the part in a manner similar to the way objects were
defined on the drawing board. Solid modeler CAD systems take a different approach: the designer in
this latter case must think and work more like a manufacturer. The part is created or described using
more complicated solids or volumes, for example. Both of these systems are discussed in the following
sections.

Entity-Based CAD Systems


The entity-based CAD system, as mentioned earlier, can be thought of as a powerful electronic imple-
mentation of the drawing board. These systems have been designed for nearly all computer platforms
and operating systems. Currently, their low cost and portability allow them to run on an Intel-based
personal computer or on an Apple Macintosh, and these advantages result in their availability at a majority
of small or medium-sized design engineering or manufacturing firms. The workhorse and a typical
example system is the AutoCAD application running on an Intel-based microcomputer using a Pentium
microprocessor. Some of these CAD systems have optional digitizer pads that are used in the design
process, while most use standard keyboard and mouse hardware for input.
Overview of Modeling and Drafting
There are a variety of ways to describe a mechanical component using an entity-based CAD program.
For a simple well-defined part, the designer or draftsman may elect to make a two-dimensional (2D)
engineering drawing directly. This work is analogous to using a drawing board, in that the designer or
draftsman uses basic commands to draft and dimension the part. For a more complex component, the
designer may choose to create a three-dimensional (3D) wireframe model of the component, which can
be meshed with surfaces. The motivation to do this may be for visualization purposes or for the export
of the geometry to an analysis package or a computer-aided manufacturing application. The following
material briefly outlines the use of an entity-based CAD system.

Two-Dimensional Modeling
User Interface. The user interface varies among different systems, but some common user-interface
elements include pull-down menus, iconic menus, and dialog boxes for input of data. Menu systems are
manipulated using the mouse and keyboard. Some systems may use a digitizer tablet with icons to select
different drawing operations. The software may also provide the experienced user with a command line
input prompt as an alternative option to pulling down menus or dialog boxes.
Coordinate Systems. The key element of any entity-based CAD application is the Cartesian coordinate
system. All such applications have a default or global system and also allow the user to specify any
number of user-defined coordinate systems relative to the global system. Systems usually also offer the
user polar coordinate systems. All geometric entity data are referred to a particular coordinate system.
Basic Drawing Entities. The designer or draftsman at the drawing board uses a drafting machine to
draw straight lines, a compass or circle template to draw circles and arcs, and a French curve to draft
smooth curves. The entity-based CAD system has analogous basic entities which are the line, circle or
arc, and spline.
Lines are drawn by a variety of methods. The line is defined by its end points in the system coordinates.
Basic line construction is often accomplished by pulling down a line construction menu, selecting the
option to define the line's global coordinate end points, and then typing in the Cartesian coordinate pairs.
Another method for line construction is to use the control points of existing geometry: control points
are a line's or arc's end points or midpoints. All systems allow the user to snap to or select control points.
Given two lines defined by coordinate end points, one can close in a box with two more lines by selecting
the end points of the existing entities.
Circles and arcs are constructed in a manner similar to line definition. The most common circle
construction is to specify the center point in terms of an active coordinate system and either the radius
or diameter. Some other circle construction methods are to define three points on the circumference or
two diameter points. There are generally various ways to construct entities in addition to these methods.
Arcs are constructed in the same fashion as circles. Some examples of arc construction methods include:
start, center, and end point definition; start and end points and radius; or start and end points and included
angle. This list is not exhaustive and is of course system dependent. One important class of arc is the fillet.
Most systems offer the user the ability to round a corner by selecting two lines with a common end
point and then simply defining the fillet radius.
The spline is the mathematical equivalent of using the French curve to fit a smooth curve between
points. The CAD user defines the curve's points and the system fits a cubic spline to these points. There
are generally options for specifying a spline's end point tangents or other end conditions.
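
One way to make the notion of an "entity" concrete is to look at how such objects might be stored. The
following C sketch shows one plausible tagged-union representation of lines, arcs, and splines, each carrying
coordinate-system and layer attributes; the field names and layout are illustrative assumptions and do not
correspond to the internal data structures of any particular CAD package.

#include <stdio.h>

struct point2 { double x, y; };

enum entity_kind { ENT_LINE, ENT_ARC, ENT_SPLINE };

#define MAX_SPLINE_PTS 16

struct entity {
    enum entity_kind kind;
    int coord_system;      /* index of the coordinate system the data refer to */
    int layer;             /* layer used for display control                   */
    union {
        struct { struct point2 p1, p2; } line;                        /* end points    */
        struct { struct point2 center; double radius, a0, a1; } arc;  /* radius, angles */
        struct { int n; struct point2 pts[MAX_SPLINE_PTS]; } spline;  /* control points */
    } u;
};

int main(void)
{
    struct entity e;

    e.kind = ENT_LINE;          /* a line from (0, 0) to (25, 10) on layer 1 */
    e.coord_system = 0;
    e.layer = 1;
    e.u.line.p1.x = 0.0;  e.u.line.p1.y = 0.0;
    e.u.line.p2.x = 25.0; e.u.line.p2.y = 10.0;

    printf("line on layer %d from (%.1f, %.1f) to (%.1f, %.1f)\n",
           e.layer, e.u.line.p1.x, e.u.line.p1.y, e.u.line.p2.x, e.u.line.p2.y);
    return 0;
}
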
Design Aids. The utility of a CAD system can be realized when the user appeals to the myriad functions
designed to aid in drawing on the computer. Some typical tools are discussed below.
Layering or layer control gives the designer significant advantages when compared to the drawing
board. Layers can be thought of as electronic consecutive sheets of tracing paper. Entities can be
placed on different named or numbered layers. Thus the designer can put a whole system design
in one file with each component on its own layer. These layers can be turned on or off, and
therefore are visible and active or are visible for reference only.
Entity color, thickness, and line type can be set and later changed or modified. Thickness refers
to the displayed and plotted line, arc, or spline width. Type is reserved for hidden, centerline, or
phantom line types which are generally a variety of dashed lines.
Selection control determines how entities are picked and created. Systems have orthogonal grids
analogous to graph paper which can be used to sketch parts. Entity end points can be made to
snap to grid points. Grid scales can be modified to suit. Angles can be set to force all geometry
to be drawn at an angle to an active coordinate system. When building geometry by selecting end

points of existing geometry, one can filter selections. An example would be to set the appropriate
filters so that only entities that are red in color may be picked or selected.
View control refers to windowing, viewing, zooming, and panning. Generally all parts are created
at full scale in a CAD system. One then uses view control to look at the whole part or to zoom
in on a small portion of it. These view controls are essential, as even a large computer monitor
is deficient in view area compared to an E-size sheet on the drawing board.
Design information menus or calculations are built into CAD systems. These features can be as
simple as showing what layer an entity is on or measuring its length or location in Cartesian
space. Even the simplest systems can determine the area enclosed by a group of entities. Some
systems similarly calculate centroids and moments of inertia.
Drafting
Drafting is defined herein as operations such as dimensioning, tolerancing, hatching cross-section views,
and writing notes and title blocks in the creation of engineering drawings. These tasks are usually
performed after the model geometry has been completed.
Basic Dimensioning. Using a CAD system does not change where dimensions are placed on a drawing.
The drafter still applies the traditional rules, but the process of creating dimensions changes completely.
Since the component's geometry has been drawn to scale very accurately, a drafter needs only to tell
the CAD system where to start, stop, and place the dimension. The system automatically determines the
dimension value, draws extension/dimension lines and arrows, and places the dimension text.
There are generally five different types of dimensioning commands. Linear dimensions can be aligned
with horizontal or vertical geometry. Angular dimensions measure the angle between two nonparallel
lines. Radius and diameter commands dimension arcs and circles. Ordinate dimensioning commands
call out Cartesian coordinates of entity features such as spline points. These features, combined with
the ability to attach notes, allow the drafter to define the part formally.
Dimensioning Variables and Standards. Dimensioning variables are system standards that control
dimension format. Examples are arrowhead size, text location, and text size. These are generally
controlled by a set of dimensioning subcommands. Most systems' default settings conform to ANSI
Y14.5 standards. CAD systems also are designed to automate geometric dimensioning and tolerancing.
Menus and commands create and label reference datums. There is usually an extensive ANSI Y14.5 symbol
library so that a drafter may add features such as control references, symbols, and finish marks.
Drafting Aids. CAD systems are usually equipped with tools or commands to create drafting entities.
These entities include section and center lines. Cross-hatching is generally automated to some degree
and a large library of ANSI hatch styles is usually available. If the part drawings are created from a
3D wire frame model as some systems allow, hidden line detection or removal may be automated. This
is a very powerful perceptual cue for aiding in the visualization of 3D entities.
Libraries and Data Bases
CAD systems are designed to build up data bases and libraries; for example, special symbol libraries
can be created. Any component drawn or modeled can be grouped or blocked together, and the resulting
part can be saved and imported into future components or drawings as a single entity. It may then be
ungrouped or exploded into distinct entities if the designer wishes to modify it to create a different
version of the component.
Three-Dimensional Wireframe and Mesh Modeling
Entity-based CAD systems generally allow the designer to model in three dimensions. Historically, three-
dimensional (3D) drafting augmented standard engineering drawings with an isometric or oblique view
of the component being described. CAD systems have in a sense reversed this process, as complex parts
are now modeled in 3D and then rendered onto engineering drawing views from the 3D data base.

3D modeling is a simple extension of 2D drawing and modeling using CAD systems. Any of the basic
entities of lines, arcs, circles, and splines can be drawn using X, Y, and Z Cartesian coordinates for
control points. The basic entities can thus be used to create a wireframe model. The wireframe model
essentially has every edge of the component or part defined by basic entities. This extension to 3D is
simple in theory but is somewhat complex in practice because the interface and display capabilities are
two dimensional.
Key tools in productive 3D CAD efforts are coordinate system and view angle manipulation. Some
systems also allow definition of construction planes, which define a 2D plane in space on which all newly
created entities are placed. In addition, some systems allow hardware-assisted rotation of the view,
which makes visualization of the part in 3D extremely fast.
Surfaces. Points, lines, arcs, and splines are used to create wireframes, but surfaces are used to represent
an object accurately. Surfaces can be shaded to render objects in order to make them look realistic or
they can be exported to analysis or manufacturing systems. Surfaces on an entity-based system can be
idealized as complex entities, as they can be created from wireframe entities such as lines, arcs, and
splines or specified directly by input of the particular defining surface parameters.
Meshes are arrays of faces. Through the use of meshes, engineers can define multifaceted surfaces
as single entities. Some common surface types are
Ruled surface: a surface between two defined curves such as lines, arcs, or splines (see the sketch following this list)
Tabulated surface: a surface created by projecting a drafting curve some distance in space
Surface of revolution: a surface created by revolving a generator curve about an axis
Edge-defined Coons surface patch: a surface created by interpolating with a bicubic function from
four adjoining edges defined by lines, arcs, or splines
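
As an illustration of the first of these surface types, a ruled surface simply blends linearly between two
boundary curves: S(u,v) = (1 - v) C1(u) + v C2(u). The short C sketch below evaluates points on such a
surface for two arbitrarily chosen boundary curves (a straight line and a quarter arc); the curves
themselves are illustrative assumptions.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

struct pt3 { double x, y, z; };

/* first boundary curve: a straight line from (0, 0, 0) to (4, 0, 0) */
static struct pt3 curve1(double u)
{
    struct pt3 p;
    p.x = 4.0 * u;  p.y = 0.0;  p.z = 0.0;
    return p;
}

/* second boundary curve: a quarter arc of radius 4 at height z = 3 */
static struct pt3 curve2(double u)
{
    struct pt3 p;
    p.x = 4.0 * cos(0.5 * PI * u);
    p.y = 4.0 * sin(0.5 * PI * u);
    p.z = 3.0;
    return p;
}

/* ruled surface: linear blend between the two curves, S(u,v) = (1-v)C1 + vC2 */
static struct pt3 ruled(double u, double v)
{
    struct pt3 a = curve1(u), b = curve2(u), s;
    s.x = (1.0 - v) * a.x + v * b.x;
    s.y = (1.0 - v) * a.y + v * b.y;
    s.z = (1.0 - v) * a.z + v * b.z;
    return s;
}

int main(void)
{
    double u, v;
    for (u = 0.0; u <= 1.001; u += 0.25)
        for (v = 0.0; v <= 1.001; v += 0.5) {
            struct pt3 s = ruled(u, v);
            printf("u = %.2f  v = %.2f  ->  (%.3f, %.3f, %.3f)\n",
                   u, v, s.x, s.y, s.z);
        }
    return 0;
}
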

Solid or Feature-Based CAD Systems


CAD systems that are based on solid modeling or feature modeling are significantly different in
appearance and use from entity-based systems. There is generally no means to draft a simple part in 2D
using a feature-based system, so this description of feature-based CAD begins with solid modeling.
Overview of Solid Modeling
The design process starts with creating a solid model using solids, primitive shapes, or features. Examples
of these include solid extrusions, revolved solids, or solids created from general surfaces. There are also
solid removal features such as holes, cuts, and chamfers. One way to build a solid model is to extrude
a block and then to cut, make holes, and round the edges in a manner analogous to actual manufacturing,
where a block of material may be sawed, drilled, and milled into the desired shape. Some details of
creating a solid model follow in a simple case study. It should be noted that some solid modelers are
parametric. This allows the part to be reshaped by changing key dimensions. The parametric CAD system
automatically reshapes the model or part when the dimensions or parameters are modified.
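
The manufacturing analogy described above corresponds closely to the constructive solid geometry idea of
combining primitive volumes with Boolean operations. The following C sketch classifies points against a
hypothetical part defined as a rectangular block minus a drilled cylindrical hole; the dimensions and the
point-membership approach are illustrative only and are not how commercial feature-based modelers store
their geometry.

#include <stdio.h>

struct pt3 { double x, y, z; };

/* primitive 1: an axis-aligned block, 0..100 by 0..40 by 0..20 (mm) */
static int in_block(struct pt3 p)
{
    return p.x >= 0.0 && p.x <= 100.0 &&
           p.y >= 0.0 && p.y <=  40.0 &&
           p.z >= 0.0 && p.z <=  20.0;
}

/* primitive 2: a cylindrical hole of radius 10 drilled along z at (80, 20) */
static int in_hole(struct pt3 p)
{
    double dx = p.x - 80.0, dy = p.y - 20.0;
    return dx * dx + dy * dy <= 10.0 * 10.0;
}

/* the part: the block minus the hole (a Boolean difference of the primitives) */
static int in_part(struct pt3 p)
{
    return in_block(p) && !in_hole(p);
}

int main(void)
{
    struct pt3 a = { 50.0, 20.0, 10.0 };   /* well inside the solid material  */
    struct pt3 b = { 80.0, 20.0, 10.0 };   /* on the axis of the drilled bore */

    printf("point a inside part: %s\n", in_part(a) ? "yes" : "no");
    printf("point b inside part: %s\n", in_part(b) ? "yes" : "no");
    return 0;
}
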
After creation of the solid model the designer can combine solid models or parts to create assemblies.
The assembly process with a solid modeler also has analogies to the actual manufacturing assembly
process. For example, part surfaces can be mated and aligned to create an assembly.
The solid modeler usually has a separate drafting or drawing module that is used to create engineering
drawings. Views are generated nearly automatically from the solid model. Reference dimensions can be
added along with symbols and other drafting details. In the most powerful systems the drawing,
assemblies, and parts are associative; the designer can make changes in parts at the drawing or assembly
level, and these changes are automatically reflected in the model. The part, assembly, and drawing
features are separate entities but are associated to ensure a consistent data base.
An important advantage of solid modelers is that one essentially builds the parts and assemblies.
Shading and viewing from all sides are generally accomplished using simple keyboard commands and
standard mouse movements. Complete design information is available from a solid model, including

such parameters as volume, weight, centroids, areas, and inertias. Clearance and interference-checking
algorithms prove or test assembly function prior to manufacturing. If detailed computational analyses
are to be done, or the parts are to be made using computer-controlled machinery, complete part or neutral (i.e.,
intermediate transfer) files can be used to transfer information to analysis or manufacturing applications.
The current trend is to integrate CAE and CAM functions with the solid modeler, thus eliminating the
need for neutral files.
Hardware and Operating System Requirements
The CAD solid modeler generally requires significant hardware and software resources. The typical
large CAD system runs optimally on UNIX-based workstations with significant graphics hardware. The
recent increase in the power of personal computers and the availability of the Windows NT operating system
with its integrated graphics libraries have resulted in the availability of solid modeling on lower-cost
RISC workstations and Pentium-based personal computers. This is currently giving small- and medium-
sized companies design tools once reserved for the aerospace and automotive industries. The software is
expensive, but many companies realize a payoff when accounting is made for shorter, more efficient design
cycles and fewer errors that result from the use of such a system.
Part Solid Modeling Details
A case study of the design of a connecting rod will be used in the following section as a concrete
demonstration of how solid modeling systems are utilized in practice. The system used in this case study
was Parametric Technology Corporation's Pro/ENGINEER, which is currently considered to be one of
the best CAD systems. The hardware used was a Silicon Graphics Indy workstation equipped with a
MIPS R4400 processor and 64 megabytes of random access memory.
Datums, Sections, and Base Features. Pro/ENGINEER (Pro/E) uses what is termed feature-based
modeling to construct a part. This refers to the fact that the part is designed by creating all the features
that define the part. One generally begins the model of a part by creating default orthogonal datum
planes. Next, a base feature or solid is typically constructed: this is a solid that is approximately the size
and shape of the component to be created. Other features are then added to shape the base solid into
the part that is desired.
The connecting rod's base feature in this case will be an extrusion that is defined by a section normal
to the axis defined by the connecting rod holes. The section for the extrusion is sketched on a default
data plane. Mouse and menu picks allow the engineer to sketch the shape using the sketcher mode.
This part of the process is somewhat similar to entity-based CAD drafting, except that the sketch initially
need not be exact. The section is then dimensioned: once again this task is similar to dimensioning on
an entity-based system. The part is then regenerated or solved mathematically, assuming the given
dimensions and alignment to datums are adequate to define the part. The dimensions or parameters
are then modified to shape and size the part: see Figure 15.5.1 for the connecting rod's dimensioned
section. The resulting extrusion is shown in Figure 15.5.2 as a wireframe plot. The designer has the
option of viewing wireframes or shaded solids during the model creation process.
Rounds, Holes, and Cuts. The connecting rod can be made lighter in weight with a cut, which in this case
involves an extrusion that removes material, leaving the rod cross section in the form of an H.
The dimensioned cut section and resulting solid are shown in Figure 15.5.3. Given the cut base feature,
rounds can be added in this case to smooth out stress concentrations. These edge rounds are constructed
by selecting an edge and defining a radius. The solid modeler adds material in the form of a round.
Figure 15.5.4 shows the cut and rounded extrusion produced using this approach.
Hole features are now used to create the bores for the piston wrist pin and the crankshaft. These hole
features are created by defining the hole location, setting the parameters for the diameter, and describing
a bore depth or, in this case, by instructing the modeler to drill through all.
The connecting rod is finished with a collection of cuts, rounds, and chamfers. A shaded image of
the finished connecting rod is shown in Figure 15.5.5. Two different versions of the rod are shown. The
second rendering is a shorter rod with a larger main bore and a smaller wrist pin bore. This part was
created simply by selecting and changing a few of the defining dimensions and telling the modeler to
regenerate the part. This example illustrates the parametric nature of the modeler, which permits design
changes to be easily visualized and accommodated.

FIGURE 15.5.1 Connecting rod dimensioned section.

FIGURE 15.5.2 Wireframe plot of connecting rod.

FIGURE 15.5.3 Dimensioned cut rod, solid view.

FIGURE 15.5.4 Edge rounds added to extrusion.

FIGURE 15.5.5 Finished connecting rod.
Other Features. The features used to create this simple example are obviously not an inclusive enu-
meration of the solid features available to the designer. A brief list of other key features and corresponding
descriptions follows to aid in summarizing the solid modeler's capabilities.
Surface: extrude, revolve, sweep, or blend sections to create a surface
Shaft: a revolved addition to the part using a specified section
Slot: create a slot from a sketching plane
Protrusion: boss created from the sketching plane by revolving, extruding, sweeping, or blending
Neck: create a groove around a revolved part using a specified axial section
Flange: create a ring around the surface of a revolved part using a specified axial section
Rib: create a rib from a specified sketching plane
Shell: remove the inside of a solid to make it a thin shell
Pipe: create a 3D tube/pipe/wire
Tweak: create part draft, dome, offset, or other surface deformation features
Part Modeling Tools. The solid modeler contains a large assortment of tools that can be used to
manipulate the features or solids that comprise the component being designed. A description of a few
of these functions follows.

Solid modelers generally provide layer control similar to that discussed for entity-based CAD
systems. This function is particularly useful for controlling the large number of datums and other features
created by a solid modeler. Features may be redefined, reordered, and suppressed or blanked.
There are also capabilities built in to allow features to be patterned, copied, or mirrored. Patterns can
be used to create splines in a shaft or hole arrays in a flange, for example. Solid protrusions can also
be patterned to create spokes on a vehicle wheel.
Cross sections can be created by building or selecting a datum plane that passes through the part. The
system automatically creates the cross section at this plane. This information can be later used in the
drawing mode to create section views at key locations.
Pro/ENGINEER has a very powerful utility termed relations. A feature's geometric dimensions can
be made functions of other features or geometry. This is similar to having a spreadsheet controlling the
geometry. Relations can be expressed in terms of algebraic or trigonometric functions as needed. Logic
can be built into these relations, and the solution of systems of simultaneous equations can be utilized as well.
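The idea behind relations can be pictured with a small sketch in which some dimensions are recomputed from others, with optional logic, every time the part is regenerated. The Python fragment below is purely illustrative; the dimension names and update rules are hypothetical and do not use Pro/ENGINEER's actual relation syntax.

def regenerate(dims: dict) -> dict:
    """Apply spreadsheet-like relations to the driving dimensions."""
    d = dict(dims)  # do not modify the caller's dictionary
    # Driven dimensions expressed as functions of driving dimensions.
    d["small_end_od"] = 1.6 * d["wrist_pin_dia"]    # wall size proportional to pin size
    d["big_end_od"] = 1.4 * d["crank_bore_dia"]
    # Logic can be embedded in a relation as well.
    if d["rod_length"] < 120.0:
        d["web_thickness"] = 6.0
    else:
        d["web_thickness"] = 8.0
    return d

driving = {"rod_length": 140.0, "wrist_pin_dia": 22.0, "crank_bore_dia": 48.0}
print(regenerate(driving))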
Assemblies
Parts or solid models that have been created can then be put together to form assemblies. This process
is analogous to assembling actual components in a manufacturing environment. The process starts by
placing a base part or component in the assembly. This base part could be the connecting rod example
discussed previously. After placement of the base part, one can, for example, add or assemble components
to the base part, such as a wrist pin and piston in this case study. Components are added by defining
appropriate constraints. Some typical assembly constraints, illustrated in the brief sketch following the list, are
Mate: Planar surfaces are constrained to be coplanar and facing each other.
Align: Planar surfaces are constrained to be coplanar and facing in the same direction.
Insert: The axis of a male feature on one part is constrained to be coaxial with the axis of a female
feature on another part.
Coordinate system: Two datum coordinate systems on different components can be aligned.
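A minimal sketch of how an assembly might record such constraints is given below. The class names, constraint labels, and geometric reference strings are hypothetical, and a real modeler would also solve the constraint set to position each component, which is omitted here.

from dataclasses import dataclass

@dataclass
class Constraint:
    kind: str         # "mate", "align", "insert", or "csys"
    reference_a: str  # geometric reference on the first component
    reference_b: str  # geometric reference on the second component

class Assembly:
    def __init__(self, base_part: str):
        self.components = [base_part]
        self.constraints: list[Constraint] = []

    def add_component(self, name: str, constraints: list[Constraint]):
        self.components.append(name)
        self.constraints.extend(constraints)

# Rebuild the case-study subassembly as a constraint list (names are placeholders).
asm = Assembly("connecting_rod")
asm.add_component("wrist_pin", [
    Constraint("mate", "connecting_rod.small_end_bore", "wrist_pin.outer_surface"),
    Constraint("align", "connecting_rod.mid_datum", "wrist_pin.mid_datum"),
])
asm.add_component("piston", [
    Constraint("mate", "piston.pin_bore", "wrist_pin.outer_surface"),
    Constraint("align", "piston.mid_datum", "connecting_rod.mid_datum"),
])
print(f"{len(asm.components)} components, {len(asm.constraints)} constraints")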
The wrist pin is added to the connecting rod by mating the bore of the rod to the cylindrical surface
of the pin and aligning the datum plane at the middle of each component. The piston is added to the
assembly in the same way. Figure 15.5.6 shows a shaded view of this example subassembly. It should
be noted that changes to any of the components can be made at the assembly level and the associated
part is automatically updated even though they are separate entities.
There are some advanced tools provided at the assembly level. Material may be added to or subtracted
from one set of parts using another set of parts in the assembly. For example, the connecting rod could
be added to a block of material such as a mold base. Then the blank could be cut using the connecting
rod to form a mold with which to forge or cast the rod. Parts can also be merged at the assembly level,
thus becoming essentially one part.
Design Information Tools
Pro/Engineer provides utilities that calculate engineering information for both parts and assemblies.
There is an option to assign material properties directly, or they can be picked from a data base. Given
the material data, the modeler can generate the mass properties for a part; Figure 15.5.7 shows the mass
property results for the connecting rod example. Basic measurement and surface analysis functions are
also available, and cross-section properties can also be generated. Assembly mass properties can be
calculated in the assembly mode. Powerful clearance and interference calculations can be performed for
any combination of subassembly parts, surfaces, or entities; these serve to check the function of
hardware before committing to actual manufacturing. In general, the added effort of building solid models
is more than offset by the information and checking that are provided.
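The calculation behind a mass-property report is ordinary geometry combined with a density value. As a hedged illustration, the sketch below treats the wrist pin as a plain solid steel cylinder with placeholder dimensions and computes the mass, volume, and centroidal moments of inertia that a modeler reports far more generally for arbitrary solids.

import math

def cylinder_mass_properties(diameter: float, length: float, density: float):
    """Mass properties of a solid cylinder about its centroid (SI units)."""
    r = diameter / 2.0
    volume = math.pi * r**2 * length
    mass = density * volume
    # Principal moments of inertia about the centroid.
    i_axial = 0.5 * mass * r**2                                # about the cylinder axis
    i_trans = (1.0 / 12.0) * mass * (3.0 * r**2 + length**2)   # about a transverse axis
    return {"volume_m3": volume, "mass_kg": mass,
            "Ixx": i_trans, "Iyy": i_trans, "Izz": i_axial}

# Placeholder wrist-pin data: 22 mm diameter, 60 mm long, steel density 7850 kg/m^3.
props = cylinder_mass_properties(diameter=0.022, length=0.060, density=7850.0)
for name, value in props.items():
    print(f"{name:10s} = {value:.6e}")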

FIGURE 15.5.6 View of example subassembly.

Drawing
Documenting designs by creating engineering drawings with a solid modeling system such as Pro/E is
done using a drawing module. Since the part is completely described by a solid model, most of the work
normally associated with drafting or drawing is already completed. The process of creating a drawing
from a part or solid model is outlined briefly below.
The first step in drawing creation is to name the drawing, select the drawing size or sheet, and retrieve
the appropriate format. Formats are essentially borders and title blocks. A format mode editor exists to
create standard or custom formats with any drafting geometry, notes, and call outs desired. Views of the
desired model are then placed within the format. The user first places what is called a general view and
orients it with view control menu commands. Now the user can place projected views off the first view.
The system automatically creates the projected views from the model selected.
View types can be projections, auxiliary, or detailed portions of the geometry. View options include
full views, broken views, x-sections, or scale settings to scale a particular view independently. Views
can subsequently be moved, resized, and modified in a variety of ways. The most important property is
the drawing scale, which the system calculates automatically depending on the model size and the sheet
selected. A new scale can be defined from a modify menu, and all drawing views are rescaled accordingly.
Dimensions are now placed within the drawing. Since the solid model is dimensioned in the design
process, this step simplifies to that of instructing the drawing module to show all detailed dimensions.
The system then automatically places dimensions on the views defined. The real effort of dimensioning
becomes that of cleaning up the dimensions so that they meet traditional drafting standards.
Sometimes the dimensions used to model the part are inadequate in a detailed drafting sense, so a
complete drafting utility exists to create what are essentially reference dimensions.

FIGURE 15.5.7 Mass properties calculated for example cross section.

As discussed previously for entity-based CAD systems, the solid modeler's drawing or drafting utility
has detailed functions designed to create basic dimensions, specify geometric tolerances, and set data
as reference. Large ANSI symbol libraries are also available to aid in the detailing.
The part and the drawing exhibit bi-directional associativity: if the engineer completes the drawing
and then goes back to the part to make modifications, the drawing geometry associated with these
changes is automatically updated. In a similar manner, the part definition is changed if a defining
dimension is modified on the drawing.
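Bi-directional associativity can be pictured as the part and the drawing sharing a single set of dimension objects rather than each keeping its own copy. The sketch below is purely illustrative (the class names are invented) and shows why a change made through either view is immediately visible in the other.

class Dimension:
    """A single named dimension shared by the part model and its drawing."""
    def __init__(self, name: str, value: float):
        self.name = name
        self.value = value

class PartModel:
    def __init__(self, dims: dict):
        self.dims = dims          # the master dimension objects

class Drawing:
    def __init__(self, part: PartModel):
        self.dims = part.dims     # same objects, not copies

    def set_dimension(self, name: str, value: float):
        self.dims[name].value = value   # editing the drawing edits the part

dims = {"rod_length": Dimension("rod_length", 140.0)}
part = PartModel(dims)
drawing = Drawing(part)

drawing.set_dimension("rod_length", 120.0)   # change made on the drawing...
print(part.dims["rod_length"].value)         # ...is seen by the part: 120.0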
CAD Interface Overview
All contemporary CAD systems have utilities to import and export design data. These utilities have
translators which support many types of file formats. The most common interfaces and files will be
discussed in the following paragraphs. Since the CAD system usually defines the part, the focus will be
on the export of data, and Pro/ENGINEER will be used as an example.
Printing and Plotting. A very important interface is with printing and plotting hardware to get hard
copies of shaded images, wireframe plots, and drawings for use in design reviews and for archiving.
Shaded images are commonly written in PostScript page-description format for printing on a color printer
with such capabilities. Plot files of objects or parts, drawings, or assemblies can be generated in the standard
HPGL (Hewlett-Packard Graphics Language) or PostScript formats. Pro/ENGINEER, for example, has
specific configuration files for a variety of Hewlett-Packard, Calcomp, Gerber, Tektronix (for shaded
images), and Versatec plotters.
Computer-Aided Engineering (CAE). It is often desirable to export CAD geometry to other CAD
systems or analysis packages. Example destinations are
Thermal/Structural Finite Element Analysis
Computational Fluid Dynamics Analysis
Moldflow Analysis Packages
Kinematics and Dynamics Programs
Photorealistic Graphics Rendering Programs
The most common format to transfer graphic and textual data is the Initial Graphics Exchange
Specification (IGES) file format. It is important to determine what type of IGES entities the receiving
system desires, then adjust the IGES configuration prior to creation of the file. As an example, a variety
of Finite Element Analysis (FEA) packages use an IGES trimmed surface file as opposed to wireframe
entities where part edges are output. A finite element model of the case study connecting rod was
formulated using an exported IGES trimmed surface file of the Pro/E solid model part file. A stress
contour plot of a simple static analysis of this rod model is shown in Figure 15.5.8. The model was
automeshed with hexahedral elements directly from the surface files with little user intervention. The
meshing procedure took 30 to 40 min to initiate and the automesh time was about 10 hr on a Silicon
Graphics Workstation. Most production-quality finite element codes either already have or will eventually
possess such capabilities.

FIGURE 15.5.8 Stress contours and geometry of optimized rod.
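Because IGES files are fixed-format, 80-column text records with a section letter in column 73 (S, G, D, P, or T), a quick inventory of a file can be taken before handing it to a receiving system. The sketch below is a hedged example of such a check: it only counts records per section, does not attempt to parse entities, and the exported file name is hypothetical.

from collections import Counter

SECTION_NAMES = {"S": "Start", "G": "Global", "D": "Directory Entry",
                 "P": "Parameter Data", "T": "Terminate"}

def iges_section_counts(path: str) -> Counter:
    """Count 80-column IGES records by the section letter in column 73."""
    counts = Counter()
    with open(path, "r", errors="replace") as f:
        for line in f:
            if len(line) >= 73:
                counts[line[72]] += 1   # column 73 is index 72
    return counts

if __name__ == "__main__":
    # Hypothetical exported file name for the case-study part.
    for letter, n in iges_section_counts("connecting_rod.igs").items():
        print(f"{SECTION_NAMES.get(letter, letter):16s}: {n} records")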

Data exchange format (DXF) files are also very common and are used to export model and drawing
data to products such as AutoCAD or to products that were designed to interface with AutoCAD. Pro/E
also has its own neutral file which is a formatted text file containing more design information than an
IGES file. Part geometry is formatted so other software packages can use the complete design data.
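DXF is a plain-text format built from alternating group-code and value lines, so simple wireframe data can even be generated directly. The sketch below writes LINE entities into a minimal, ENTITIES-only DXF file; this stripped-down form is accepted by many, though not all, DXF readers, and the file name and geometry are placeholders.

def write_dxf_lines(path: str, lines):
    """Write 2D line segments ((x1, y1), (x2, y2)) as a minimal DXF ENTITIES section."""
    def rec(code, value):
        return f"{code}\n{value}\n"

    with open(path, "w") as f:
        f.write(rec(0, "SECTION") + rec(2, "ENTITIES"))
        for (x1, y1), (x2, y2) in lines:
            f.write(rec(0, "LINE") + rec(8, "0"))                                    # entity type, layer "0"
            f.write(rec(10, f"{x1:.3f}") + rec(20, f"{y1:.3f}") + rec(30, "0.000"))  # start point
            f.write(rec(11, f"{x2:.3f}") + rec(21, f"{y2:.3f}") + rec(31, "0.000"))  # end point
        f.write(rec(0, "ENDSEC") + rec(0, "EOF"))

# A unit square as four line segments (placeholder geometry and file name).
square = [((0.0, 0.0), (1.0, 0.0)), ((1.0, 0.0), (1.0, 1.0)),
          ((1.0, 1.0), (0.0, 1.0)), ((0.0, 1.0), (0.0, 0.0))]
write_dxf_lines("outline.dxf", square)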

Pro/E also directly writes neutral files for other software packages, such as PATRAN geometry files (PATRAN
is a generalized finite element pre- and postprocessor).
This is only a partial list of export options available. There exists a crucial problem with export,
however: associativity is lost. If the part is, for example, exported and analyzed, any subsequent part
modification renders the analysis model inaccurate. This is a problem, as the current trend is to use
computational analysis tools early in the design process instead of using them only as a final check.
Presently, software package alliances and cooperatives are forming to address this problem. This direct
integration of analysis packages has been accomplished by Structural Dynamics Research Corporation's
I-DEAS and Pro/ENGINEER's Mechanica, among others. Once again, the Pro/Mechanica integration will
be used as an example to review the current capabilities of such systems. The Mechanica packages are
thermal, motion simulation, and structural analysis packages that are completely integrated with Pro/E's
solid modeler. Analysis is thus another large set of pull-down menus and dialog boxes
within the CAD package. The beauty of the system is that it is a complete bi-directional link, in that
design changes made in Pro/E automatically update finite element models or vice versa. This feature
greatly simplifies the task of structural design optimization. Mechanica has routines which will optimize
and change model geometry based on static stress and structural dynamic considerations. There are
practical limitations, however, as Mechanica is currently limited to linear elastic structural analysis. The Mechanica
analysis package uses an adaptive p-element, where the order of the tetrahedral finite element is adjusted
until convergence is achieved. The adaptive scheme uses up to a ninth-order polynomial in the element
formulation. If the model does not converge, the mesh can be refined and the analysis tried again.
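The adaptive p-element strategy amounts to re-solving the same mesh with progressively higher-order shape functions until a monitored quantity stops changing. The sketch below is only a schematic of that loop; the solve function is a stand-in with fabricated values (it is not Mechanica's algorithm), and the convergence tolerance is arbitrary.

def solve(order: int) -> float:
    """Stand-in for a p-element analysis; returns a monitored result (e.g., strain energy)."""
    return 100.0 * (1.0 - 2.0 ** (-order))   # fabricated values that converge with order

def p_adaptive_solve(max_order: int = 9, tolerance: float = 0.01):
    previous = solve(1)
    for order in range(2, max_order + 1):
        current = solve(order)
        change = abs(current - previous) / abs(current)
        print(f"p = {order}: result = {current:.4f}, relative change = {change:.4%}")
        if change < tolerance:
            return current, order, True       # converged
        previous = current
    return previous, max_order, False         # did not converge: refine the mesh and retry

result, order, converged = p_adaptive_solve()
print("converged" if converged else "refine mesh and rerun", "at p =", order)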

Computer-Aided Manufacturing (CAM)


Parts that are modeled in 3D are generally fabricated with CAM. This fabrication task can involve rapid
prototyping, directly machining the part, or mold making to support casting, forging, or injection molding.
Rapid Prototyping
Rapid prototyping is a relatively new manufacturing technology which is complementary to CAD solid
modeling. A CAD design file of an object is converted into a physical model using special computer-
controlled sintering, layering, or deposition processes. Stereolithography (SLA) machines by companies
such as 3D Systems convert 3D CAD data of objects into vertical stacks of slices. A computer-controlled
low-power ultraviolet laser then traces across a photocurable resin, solidifying the part one layer or slice
at a time. Selective laser sintering and fused deposition processes operate in a similar manner. In all
cases, an accurate nonstructural model is created which can be used in design reviews, mock-ups, and
prototype assemblies. A physical model such as this is a very powerful tool in early product design
reviews and marketing efforts.
The interface to rapid prototyping is an SLA (*.stl) file. SLA files represent the solid model as
groups of small polygons that make up the surfaces. The polygons are written to an ASCII text or binary
file. Pro/E has an SLA selection in its interface menu. It is important to note that the designer partially
controls the quality of the prototype through two variables: the chord height of the tessellated surface
and an angle control variable. The connecting rod example is again used to demonstrate the effects of
chord height setting in Figure 15.5.9, which shows the polygon surfaces for SLA files with two different
chord heights.

FIGURE 15.5.9 Polygonal representation for stereolithography.
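The ASCII form of the (*.stl) file is simple enough to generate or inspect by hand: each triangular facet is written as a facet normal followed by its three vertices. The sketch below writes a single facet in this format with placeholder geometry and file name; production exports typically use the binary form and contain many thousands of facets, with the count governed by the chord height setting.

def write_ascii_stl(path: str, name: str, facets):
    """Write facets as ASCII STL; each facet is (normal, (v1, v2, v3))."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for (nx, ny, nz), vertices in facets:
            f.write(f"  facet normal {nx:e} {ny:e} {nz:e}\n")
            f.write("    outer loop\n")
            for (x, y, z) in vertices:
                f.write(f"      vertex {x:e} {y:e} {z:e}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# One triangle in the z = 0 plane, with its unit normal pointing in +z (placeholder data).
facet = ((0.0, 0.0, 1.0), ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
write_ascii_stl("rod_prototype.stl", "connecting_rod", [facet])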
Computer Numerical Control Machining
Numerical control machining refers to creating parts using computer-controlled machines such as mills,
lathes, or wire electric discharge machines. Parts or molds are made by machining away material. In
the case of complex parts it is essential to transfer CAD geometric data to a machining software package
and finally to the computer-controlled machine.
A typical machining center has a numerical control machine wired directly to the serial port of a
microcomputer. This configuration is termed computer numerical control or CNC, as it allows the
machinist to program the machine tool off-line using specialized machining programs. Most CAD
systems have optional integrated manufacturing modules similar in architecture and use to their integrated
analysis packages. There are also a variety of stand-alone machining software packages which accept a
spectrum of CAD input in the form of neutral, IGES, and DXF files. These packages or modules can
import and create geometry, as well as determine and simulate cutter paths. As with many CAD programs,
these machining applications now have 3D interactive graphic environments. As the programmer gen-
erates steps for the machine tool path, 3D models show the part being cut by the tool. Line-by-line
programming in a text mode is presently being replaced by graphically selecting geometry, tools, and
tool paths. These programs have postprocessors to write the G and M code specific to the machine
tool being used (G and M code is a standard language for programming machine tools). These ASCII
text files, based on numerical codes using G and M as prefixes, form an instruction list for a wide variety
of machine tool operations.
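A postprocessor ultimately reduces tool-path geometry to lines of G and M codes. The sketch below emits a few of the most common codes (G00 rapid moves, G01 feed moves, M03/M05 spindle control, M30 program end) for a simple rectangular path; it is a schematic illustration only, since a real postprocessor tailors its output to a specific controller.

def postprocess_rectangle(width: float, height: float, feed: float, depth: float):
    """Emit a minimal G/M-code program that traces a rectangle at a fixed depth."""
    corners = [(0.0, 0.0), (width, 0.0), (width, height), (0.0, height), (0.0, 0.0)]
    program = ["%", "M03 S1200"]                      # spindle on, 1200 rpm
    program.append("G00 X0.000 Y0.000 Z5.000")        # rapid move above the start point
    program.append(f"G01 Z{-depth:.3f} F{feed:.1f}")  # feed down to cutting depth
    for x, y in corners[1:]:
        program.append(f"G01 X{x:.3f} Y{y:.3f} F{feed:.1f}")   # linear cutting moves
    program.extend(["G00 Z5.000", "M05", "M30", "%"])          # retract, spindle off, end
    return program

for block in postprocess_rectangle(width=40.0, height=20.0, feed=150.0, depth=1.5):
    print(block)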

Further Information
There are a wide variety of excellent references available that summarize many of the principles found
in this chapter. For practical discussions of computer technology, the Internet-based UseNet newsgroups
form the most detailed and up-to-date sources of information on programming languages, computer
architecture, and software engineering. Most UseNet groups provide a list of Frequently Asked Questions
(FAQs), which often address introductory issues in considerably more detail than elementary textbooks,
and consulting such UseNet FAQs is a good way to begin more detailed study of most of the computa-
tional topics outlined in this chapter. Sample UseNet newsgroups include comp.ai (for general informa-
tion on artificial intelligence research and application), comp.object (for general information on object-
oriented programming), and comp.sys.intel (for information regarding Intel hardware). These general
UseNet topics are often further specialized toward more detailed discussions, such as the philosophical
information regarding computer intelligence that can be found in comp.ai.philosophy. Since UseNet
groups are created (and sometimes disappear) at near-random intervals, it is best to use standard Internet
browsing tools to determine the best newsgroups for researching a particular topic.
More traditional printed works are available for all the topics found in this chapter. The history and
philosophy of software engineering are presented in engaging detail in Yourdon's Writings of the
Revolution: Selected Readings on Software Engineering and Brooks's classic The Mythical Man-Month.
The classic modern reference on object-oriented programming is Booch's Object-Oriented Analysis and
Design, which also contains good descriptions of procedural programming models. An excellent overview
of artificial intelligence can be found in Kurzweil's The Age of Intelligent Machines.
One of the most detailed treatments of finite-element modeling available is Hughes's The Finite Element
Method: Linear Static and Dynamic Finite Element Analysis, which presents a comprehensive and
detailed view of finite-element modeling as oriented toward mechanical engineering applications, and
does an excellent job of presenting time-dependent solution techniques amenable to any numerical
discretization approach (including finite-difference methods). There are many standard references on
finite-difference methods, including the well-known book by Ames, Numerical Methods for Partial
Differential Equations.

Terms and Associated Acronyms


Artificial Intelligence (AI): a spectrum of ideas on the general topic of computer modeling of human
intelligence, ranging from Strong AI (defined as the premise that computers are capable of
emulating human intelligence) to Weak AI (defined as the premise that computers can be utilized
as useful adjuncts for modeling high-level human activities such as decision-making).
Benchmarks: application programs specifically designed to quantify the performance of a particular
computer system, or of its component subsystems. Benchmarks provide a convenient means to
compare measures of system performance obtained from disparate computer platforms.
Boundary-Element Method (BEM): a numerical approximation technique based upon integral equations
and related to the finite-element method. Boundary elements possess the advantage of requiring
only surface-based representations of the problem domain, in contrast to the volume-oriented data
required by finite-element methods.
Central Processing Unit (CPU): the main processor (i.e., the brain) of a computer, consisting of the
hardware necessary to perform the low-level arithmetic and logical operations that form the
building blocks of computer software.
Compressor/Decompressor (CODEC): a pair of computational algorithms designed to compress the
bandwidth required to process large streams of data. The most common applications of CODECs
are in computer video/animation applications, where it is physically impractical to process the
uncompressed raw data needed to maintain the illusion of continuous motion within a computer
animation.
Computational Mechanics: the field oriented toward integrating principles of mechanics with tools from
computer science and mathematics toward the goal of constructing accurate numerical models of
mechanical systems.
Computer-Aided Design (CAD): a general term outlining areas where computer technology is used to
speed up or eliminate efforts by engineers within the design/analysis cycle.
Computer-Aided Engineering (CAE): application of the computer toward general tasks encountered in
mechanical engineering, including analysis, design, and production.
Computer-Aided Manufacture (CAM): the integration of the computer into the manufacturing process,
including such tasks as controlling real-time production devices on the factory floor.
Computer-Assisted Software Engineering (CASE): software development tools oriented toward utilizing
the computer to perform many rote programming development tasks.
Computer Numerical Control (CNC): the creation of mechanical parts using computer-controlled machin-
ery such as lathes, mills, or wire electric discharge machines.
Convergence: in the setting of computer-aided engineering, convergence is taken as a measure of how
rapidly the computer approximation tends toward the exact solution, measured as a function of
increased computational effort. Ideally, as more computer effort is expended, the computational
approximation rapidly becomes appropriately more accurate.
Data base Management: computational techniques specifically oriented toward allocation, manipulation,
storage, and management of organized data structures.
Discretization: the fundamental tenet of most computational models for physical systems, in which a
complex continuous real system is approximated by a simpler discrete replacement. In theory, as
the level of refinement of the discretization is increased (i.e., as the discrete components get
simultaneously smaller and more numerous), the discrete system more closely approximates the
real one.
Distributed Multiprocessing: a parallel computer architecture characterized by independent processors
working cooperatively on a distributed set of resources (for example, a network of computers,
each equipped with its own local memory and input/output devices). In a distributed multipro-
cessing system, the communication bandwidth of the network connecting the individual CPUs is
often the limiting factor in overall performance.

Engineer-in-the-Loop: a class of computer-aided design applications oriented toward immersing the
engineer into the computational simulation with the goal of separating the rote calculation com-
ponents of the task from the decision-making processes. The latter efforts are handled by the
engineer, while the former are dealt with by the computer.
Eulerian Reference Frame: a reference frame attached to a region of space, to be used for constructing
computational mechanics approximations. Eulerian reference frames are generally used for fluids,
though they may also be profitably utilized in modeling the large deformations of solids.
Expert Systems: a class of artificial intelligence applications based on emulating the decision-making
capacities of human experts by using specialized computer software for inference and knowledge
representation.
Expressiveness: the desirable characteristic of a programming language's facility for controlling program
execution and abstracting programming functions and data.
Finite-Difference Method (FDM): a family of numerical approximation methods based on replacing the
differential equation-based statement of a physical problem by a set of discrete difference equations
that directly approximate the underlying differential equations.
Finite-Element Method (FEM): a family of numerical methods based on approximation of integral
statements of the problems of mathematical physics. This emphasis on integral formulations (in
contrast to the differential formulations used by finite-difference models) provides considerable
numerical advantages, such as better convergence properties and the ability to model discontinuous
problem data.
Finite-Volume Method (FVM): a family of numerical methods based on integrating the differential
statements of physical problems over small subdomains. Finite-volume modeling is essentially a
means to maintain the simplicity of formulation of finite-difference models, while permitting
realization of some of the advantages of integral finite-element schemes.
Fuzzy Methods: computational methods for artificial intelligence that generalize the simple Boolean
logical theories that traditionally form the basis of deterministic computer programs. These meth-
ods are based on more qualitative (i.e., fuzzy) representations of data and instructions than
commonly found in conventional computer programs.
Graphical User Interface (GUI): a model for software design that emphasizes graphical human/computer
interaction through the desired characteristics of responsiveness, intuitiveness, and consistency.
Inheritance: a feature of object-oriented programming languages that permits derivation of new objects
using a child-parent hierarchy. Inheritance makes it possible to express relationships among
objects in a flexible manner that facilitates programmer productivity and the reuse of code.
Lagrangian Reference Frame: a reference frame attached to a physical body, to be used for constructing
computational mechanics approximations. Lagrangian reference frames are primarily used for
solids (where it is relatively easy to track the deformation of the underlying physical domain),
and generally result in a simpler formulation of the conservation laws used in engineering mechanics.
Multiprocessing: the process of distributing threads of execution over different processors on the same
computer, or over different computers connected over a network. The various flavors of multipro-
cessing form the architectural divisions among different types of parallel processing.
Multitasking: managing the execution of separate computer programs concurrently by allocating pro-
cessing time to each application in sequence. This sequential processing of different applications
makes the computer appear to be executing more than one software application at a time.
Multithreading: managing the execution of separate concurrent threads of program execution by allo-
cating processing time to each thread. This low-level software resource management model forms
the basis of many multitasking operating systems and permits various forms of parallel computation
by distributing distinct threads over different processors.

Neural Networks: a class of largely self-programming artificial intelligence applications based on webs
of interconnected computational processes. The aggregate goal of the interconnected network of
computation is to emulate human thought patterns, and especially the autonomous programming
routinely exhibited by humans and notably lacking in computers.
Newton-Raphson Iteration: the basic algorithm for solving nonlinear problems, in which a nonlinear
equation set is traded for a sequence of linear equation sets, with the ultimate goal of rapid
convergence of the linear equation sequence to the solution of the underlying nonlinear problem.
Object-Oriented Programming: a standard for software development that emphasizes modeling the data
and behavior of the underlying objects that define the program. Object-oriented programs are
comprised of a collection of cooperating software objects, with these objects derived from an
underlying hierarchical organization of generic classes. Contrast this concept with procedural
programming, which uses algorithms as the fundamental component of program architecture.
Operating System: the software layers that provide low-level resources for execution of computer
programs, including file services for input/output, graphical display routines, scheduling, and
memory management.
Polymorphism: a feature of object-oriented programming languages where a single name (such as the
name of a variable) can be recognized by many different objects that are related by class inher-
itance. Polymorphism is one of the most important features of object-oriented programming, as
it permits many different objects to respond to a common event, a process that is considerably
more difficult to implement using procedural programming languages.
Portability: the highly desirable program characteristic of ease of implementation on different types of
computers. Portable programs are generally written in higher-level computer languages, such as C.
Procedural Programming: computer software development oriented toward abstraction of programming
function by subdividing the overall task into successively smaller subtasks, which are generically
referred to as procedures. Procedures represent the software implementation of algorithms, and
hence procedural programming is oriented toward modeling the algorithms of the underlying
system.
Random Access Memory (RAM): the collection of semiconductor hardware used for memory storage
on modern computers. The main memory of a computer is usually composed of Dynamic Random
Access Memory (DRAM), which is volatile and does not persist when the power is turned off.
Rapid Application Development (RAD): programming tools specifically designed to aid in the
development of prototype software applications for demonstration and initial application design.
Relational Data Base: a standard for data base management emphasizing modeling data by using flexible
relations among the various data structures of the program.
Schema: the structure of a data base (the term schema generally is taken to mean the structure and
organization of the tables in a tabular data base).
Software Engineering: the profession concerned with the effective specification, design, implementation,
and maintenance of computer programs.
Solvability: mathematical conditions relating to the existence and uniqueness of solutions to problems
in mathematical physics. Problems that do not possess unique solutions often lead to pathological
behavior when implemented on the computer, so determining the solvability characteristics of a
problem is an important first step in computational engineering applications.
Spiral Model: a standard for the incremental design and implementation of computer programs, char-
acterized by applying regular updates to a flexible design during prototyping and delivery phases.
Stability: the numerical characteristic by which errors can be guaranteed not to increase in magnitude
as they propagate throughout the mathematical domain of the problem. Unstable methods permit
errors to grow until the computed solution may become hopelessly corrupted, while stable methods
provide some insurance against this unfortunate outcome.
Stress Analysis: a family of computational schemes oriented toward determining the state of stress and
deformation of a physical medium. Stress analysis is normally associated with solid mechanics,
though its methods can be successfully applied to fluid systems as well.

Structured Grid: used to characterize finite-difference and finite-volume discretizations, where consid-
erable regularity (such as an evenly gridded rectangular geometry) is imposed on the topological
structure of the underlying grid. Structured grids are generally associated with optimal convergence
characteristics, but may not accurately represent the geometry of the underlying physical problem.
Structured Query Language (SQL): SQL is a standard and portable language for creating and modifying
data bases, retrieving information from data bases, and adding information to data bases.
Symmetric Multiprocessing (SMP): a parallel computer architecture generally characterized by indepen-
dent processors working cooperatively on a shared set of resources, such as memory or input/output
devices. In a symmetric multiprocessing system, the issue of contention for resources is often the
limiting factor in overall performance.
Unstructured Grid: used to characterize finite-difference and finite-volume discretizations, where little
regularity is imposed on the geometric and topological structure of the underlying grid. Unstruc-
tured grids are more easily adapted to realistic problem formulations, but require more effort and
often result in poorer convergence characteristics.
Vectorization: a process by which certain types of numeric computation (most notably, those involving
well-defined operations on vectors) can be internally pipelined on a computer's processing units,
leading to considerable efficiencies in performing the associated vector operations.
Virtual Memory: an operating system characteristic where some contents of main memory are temporarily
cached on a persistent storage device in order to permit allocation and management of memory
exceeding the physically available supply.
Virtual Reality: a class of computer graphics applications specifically designed to maintain the illusion
of an artificial world in which the user is immersed.
Visualization: the graphical representation of data in order to facilitate understanding by human observers.
The art and science of visualization exist independently of computers, but their principles are
widely used in computer graphics to display the data sets that are commonly encountered in large-
scale computation.
Waterfall Model: a standard for the sequential design and implementation of computer programs,
characterized by long lead times to delivery and limited scope for substantial modifications to the
program's function or architecture.

References
Ames, W.F. 1977. Numerical Methods for Partial Differential Equations. Academic Press, New York.
Apple Computer, Inc. 1985. Inside Macintosh, Vol. 1. Addison-Wesley, Reading, MA.
Barton, J.J. and Nackman, L.R. 1994. Scientific and Engineering C++: An Introduction with Advanced
Techniques and Examples, Addison-Wesley, Reading, MA.
Booch, G. 1994. Object-Oriented Analysis and Design with Applications, 2nd ed. Benjamin/Cum-
mings, Redwood City, CA.
Brooks, F.P. 1995. The Mythical Man-Month, Anniversary ed. Addison-Wesley, Reading, MA (original
edition published in 1975).
Brebbia, C.A. and Dominguez, J. 1989. Boundary Elements: An Introductory Course. McGraw-Hill,
New York.
DOD Trusted Computer System Evaluation Criteria. December 1985. DOD 5200.28-STD.
Fausett, L.V. 1994. Fundamentals of Neural Networks: Architectures, Algorithms, and Applications.
Prentice-Hall, Englewood Cliffs, NJ.
Foley, J.D. and VanDam, A. 1982. Fundamentals of Interactive Computer Graphics. Addison-Wesley,
Reading, MA.
Hughes, T.J.R. 1987. The Finite Element Method: Linear Static and Dynamic Finite Element Analysis.
Prentice-Hall, Englewood Cliffs, NJ.
Humphrey, W.S. 1989. Managing the Software Process. Addison-Wesley, Reading, MA.
Kernighan, B.W. and Plauger, P.J. 1976. Software Tools. Addison-Wesley, Reading, MA.

Kernighan, B.W. and Plauger, P.J. 1978. The Elements of Programming Style. McGraw-Hill, New York.
Keyes, J. (ed.). 1994. The McGraw-Hill Multimedia Handbook. McGraw-Hill, New York.
Klir, G.J. and Folger, T.A. 1988. Fuzzy Sets, Uncertainty, and Information. Prentice-Hall, Englewood
Cliffs, NJ.
Kreith, F. and Bohn, M.S. 1993. Principles of Heat Transfer, 5th ed. West Publishing, St. Paul, MN.
Kurzweil, R. 1990. The Age of Intelligent Machines. MIT Press, Cambridge, MA.
Metzger, P. 1983. Managing a Programming Project. Prentice-Hall, Englewood Cliffs, NJ.
Richardson, L.F. 1910. Philos. Trans. R. Soc. A210: 307.
Shapiro, S.C. (ed.). 1992. Encyclopedia of Artificial Intelligence, 2nd ed. John Wiley & Sons, New
York (1st ed., 1987).
Shugar, T. 1994. Visualization of skull pressure response: a historical perspective. In Proc. of the
Chico NSF Workshop on Visualization Applications in Earthquake Engineering, Chico, CA.
Wasserman, P.D. 1989. Neural Computing: Theory and Practice. Van Nostrand Reinhold, New York.
Yourdon, E. 1982. Writings of the Revolution: Selected Readings on Software Engineering. Yourdon
Press, New York.
Yourdon, E. 1993. Decline and Fall of the American Programmer. Yourdon Press, New York.
Zienkiewicz, O.C. and Taylor, R.L. 1991. The Finite Element Method, 4th ed. McGraw-Hill, Berkshire,
England.
