
Meshfree Methods: Their Origins, History, and Theories

The following article was written by Dr.-Ing. Timon Rabczuk, who now teaches finite element methods and meshfree methods at the Technical University of Munich, Germany, for a sister blog, the Meshfree Methods Blog. I enjoyed reading the article very much and would like to share it with you.

----------------------------------------------------------------------------------------

Meshfree methods go back to the seventies. The major difference from finite element methods is that the domain of interest is discretized only with nodes, often called particles. These particles interact via meshfree shape functions in a continuum framework, much as finite elements do, although particle connectivities can change over the course of a simulation. This flexibility of meshfree methods has been exploited in applications with large deformations in fluid and solid mechanics, e.g. free-surface flow, metal forming, and fracture and fragmentation, to name a few. Most meshfree methods are purely Lagrangian in character, though there are a few publications on meshfree methods formulated in an Eulerian (or ALE) description, e.g. Fries 2005.

The most important advantages of meshfree methods compared to finite elements are: their higher-order continuous shape functions, which can be exploited e.g. for thin shells or gradient-enhanced constitutive models; higher smoothness; simpler incorporation of h- and p-adaptivity; and certain advantages in crack problems (no mesh-alignment sensitivity; some methods do not need to enforce crack-path continuity). The most important drawback of meshfree methods is probably their higher computational cost, quite apart from the instabilities that certain meshfree methods exhibit.

One of the oldest meshfree methods is Smoothed Particle Hydrodynamics (SPH), developed by Lucy and by Gingold and Monaghan in 1977. SPH was first applied in astrophysics to model phenomena such as supernovae and was later employed in fluid dynamics.
In 1993, Petschek and Libersky extended SPH to solid mechanics. Early SPH formulations suffered from spurious instabilities and inconsistencies that were a hot topic of investigation, especially in the 90s. Many corrected SPH versions were developed that improved either the stability of SPH or its consistency. Consistency, often referred to as completeness in a Galerkin framework, means the ability to reproduce a polynomial of a certain order exactly: a method is called n-th order consistent (or complete) if it can reproduce a polynomial of order n exactly.

While most SPH methods are based on the strong form, a wide class of methods was developed based on the weak form. Building on an idea of Lancaster and Salkauskas, and probably motivated by the wish to model arbitrary crack propagation without computationally expensive remeshing, the group of Prof. Ted Belytschko developed the element-free Galerkin (EFG) method in 1994. The EFG method is based on an MLS approximation and avoids the inconsistencies inherent in some SPH formulations. In 1995, the group of Prof. W.K. Liu proposed a similar method, the Reproducing Kernel Particle Method (RKPM). Though the method is very similar to the EFG method, it originates from wavelets rather than from curve fitting.

The first method that employed an extrinsic basis was the hp-cloud method of Duarte and Oden. In contrast to the EFG and RKPM methods, the hp-cloud method increases the order of consistency (or completeness) by an extrinsic basis. In other words, additional unknowns are introduced into the variational formulation to increase the order of completeness. This idea was later adopted (and modified) in the XFEM context, though there the extrinsic basis (or extrinsic enrichment) is used to describe the crack kinematics rather than to increase the order of completeness in a p-refinement sense. The group of Prof.
Ivo Babuska discovered certain similarities between finite element and meshfree methods and formulated a general framework, the Partition of Unity Finite Element Method (PUFEM), which is similar to the Generalized Finite Element Method (GFEM) of Strouboulis and colleagues. Another very popular meshfree method worth mentioning is the Meshless Local Petrov-Galerkin (MLPG) method developed by the group

of Prof. S.N. Atluri in 1998. The main difference of the MLPG method from all the methods mentioned above is that local weak forms are generated over overlapping sub-domains rather than a global weak form. The integration of the weak form is then carried out in these local sub-domains. In this context, Atluri introduced the notion of truly meshfree methods, since truly meshfree methods do not require the construction of a background mesh for integration.

The issue of integration in meshfree methods has been a topic of investigation since their early days. Methods that are based on a global weak form may use three different types of integration schemes: nodal integration, stress-point integration, and integration (usually Gauss quadrature) over a background mesh that does not necessarily need to be aligned with the particles. Nodal integration is, from the computational point of view, the easiest and cheapest way to build the discrete equations, but, similar to reduced-integration finite elements, meshfree methods based on nodal integration suffer from an instability due to rank deficiency. Adding stress points to the nodes can eliminate (or at least alleviate) this instability. The term stress-point integration comes from the fact that additional points are added to the particles at which only stresses are evaluated; all kinematic values are obtained from the "original" particles. The concept of stress points was first introduced in one dimension in an SPH setting by Dyka, and was extended to higher dimensions by Randles and Libersky and by the group of Prof. Belytschko. There is a subtle difference between the two approaches: while Randles and Libersky evaluate stresses only at the stress points, Belytschko and colleagues evaluate stresses at the nodes as well. Since then, many different versions of stress-point integration have been developed.
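The n-th order consistency (completeness) property described earlier can be checked numerically. Below is a small sketch of my own (the node layout, support radius, and quartic weight function are arbitrary choices, not taken from any of the cited papers): 1D MLS shape functions with a linear basis are built at a point, and their reproduction of constants and linear fields is verified.

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)   # particle positions (assumed layout)

def mls_shape(x, nodes, radius=0.25):
    # quartic bump weight with compact support (one common, simple choice)
    r = np.abs(x - nodes) / radius
    w = np.where(r < 1.0, (1.0 - r**2)**2, 0.0)
    # linear basis p = [1, x]; columns of P are p evaluated at the nodes
    P = np.vstack([np.ones_like(nodes), nodes])        # 2 x n
    M = (P * w) @ P.T                                  # moment matrix, 2 x 2
    p = np.array([1.0, x])
    # phi_a(x) = w_a(x) * p(x)^T M(x)^-1 p(x_a)
    return w * (p @ np.linalg.solve(M, P))

x = 0.37
phi = mls_shape(x, nodes)
sum1 = phi.sum()        # should be 1 (partition of unity, 0th order)
sumx = phi @ nodes      # should be x (linear reproduction, 1st order)
```

Linear consistency holds exactly (up to round-off) regardless of where the evaluation point sits relative to the nodes, which is precisely what corrected SPH variants and the EFG/RKPM constructions are designed to guarantee.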
The most accurate way to obtain the discrete governing equations is Gauss quadrature. In contrast to finite elements, integration in meshfree methods is not exact: a background mesh has to be constructed, and usually a larger number of quadrature points than in finite elements is used. For example, while usually 4 quadrature points are used in linear quadrilateral finite elements, Belytschko recommended the use of 16 quadrature points in the EFG method.

Another important issue regarding the stability of meshfree methods is related to the kernel function, often called the window or weighting function. The kernel function is related to the meshfree shape function (in a way that depends on the method). The kernel can be expressed in terms of material coordinates or spatial coordinates; we then speak of Lagrangian or Eulerian kernels, respectively. Early meshfree methods such as SPH use an Eulerian kernel. Many meshfree methods based on Eulerian kernels exhibit a so-called tensile instability, meaning the method becomes unstable when tensile stresses occur. In a sequence of papers by Belytschko, it was shown that the tensile instability is caused by the use of an Eulerian kernel; meshfree methods based on Lagrangian kernels do not show this type of instability. Moreover, it was demonstrated that for certain strain-softening constitutive models, methods based on Eulerian kernels were not able to detect the onset of material instability correctly, while methods using Lagrangian kernels were. This is a striking drawback of Eulerian kernels when one wishes to model fracture. However, a general stability analysis is difficult to perform and will of course also depend on the underlying constitutive model. Note also that Libersky proposed a method based on Eulerian kernels and showed stability in the tension region, though he did not consider strain-softening materials.
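Returning to quadrature for a moment: the reason background-cell Gauss quadrature needs more points than in finite elements is that meshfree shape functions are rational, not polynomial, so no Gauss rule integrates them exactly. A toy 1D experiment of my own (the rational integrand merely stands in for an MLS shape-function product, and the cell count is arbitrary) shows the error dropping as the per-cell point count grows:

```python
import numpy as np

# rational integrand standing in for a product of rational MLS shape functions
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
exact = np.arctan(5.0) / 5.0          # closed form of integral of f over [0, 1]

def composite_gauss(f, npts, ncells=4):
    # Gauss-Legendre with `npts` points on each background cell of [0, 1]
    xg, wg = np.polynomial.legendre.leggauss(npts)
    edges = np.linspace(0.0, 1.0, ncells + 1)
    total = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        total += 0.5 * (b - a) * np.sum(wg * f(0.5 * (b - a) * xg + 0.5 * (a + b)))
    return total

e1 = abs(composite_gauss(f, 1) - exact)   # 1 point per cell: visibly inexact
e4 = abs(composite_gauss(f, 4) - exact)   # 4 points per cell: much closer
```

The same effect, in two dimensions and with genuinely rational integrands, is what motivates recommendations like 16 points per background cell for EFG.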
For very large deformations, methods based on Lagrangian kernels tend to become unstable as well, since the domain of influence in the current configuration can become extremely distorted. Some recent methods to model fracture try to combine Lagrangian and Eulerian kernels, though certain aspects still have to be studied, e.g. what happens in the transition area, or how additional unknowns are treated (in case an enrichment is used). In meshfree methods, we speak of approximation rather than interpolation, since the meshfree shape functions do not satisfy the Kronecker-delta property. This entails certain difficulties in imposing essential boundary conditions. Probably the simplest way to impose essential boundary

conditions is by boundary collocation. Other options are the penalty method, Lagrange multipliers, or Nitsche's method. Coupling to finite elements is one more alternative that has been extensively pursued in the literature; in this case, the essential boundary conditions are imposed in the finite element domain. In the first coupling method by Belytschko, the meshfree nodes have to be located at the finite element nodes, and a blending domain is constructed such that the meshfree shape functions vanish at the finite element boundary. In this first approach, discontinuous strains were obtained at the meshfree/finite element interface. Many improvements have since been made, and methods were developed that exploit the advantages of both meshfree methods and finite elements, e.g. the Moving Particle Finite Element Method (MPFEM) by Su Hao et al. or the Reproducing Kernel Element Method (RKEM) developed by the group of Prof. W.K. Liu.

Meanwhile, several textbooks on meshfree methods have been published, e.g. by S. Li and W.K. Liu, T. Belytschko, S.N. Atluri, and some books by Prof. G.R. Liu. Many meshfree methods were developed and applied in fracture mechanics to model arbitrary crack growth. The crack was initially modeled with the visibility criterion, i.e. the crack was considered opaque and the meshfree shape functions were cut at the crack surface. Later, the diffraction and transparency methods were used instead, since they remove certain inconsistencies of the visibility criterion. With the development of the extended finite element method (XFEM) in 1999, meshfree methods got a very strong competitor. The major drawback of meshfree methods with respect to XFEM is their higher computational cost; it is also less complex to incorporate XFEM into existing FE codes. There are still ongoing efforts to improve meshfree methods for material failure and fracture.
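To make the penalty idea concrete, here is a minimal 1D sketch, entirely my own construction: a non-interpolating Gaussian basis, an L2 projection standing in for a weak form, and an arbitrary penalty parameter beta. Because the basis lacks the Kronecker-delta property, nothing forces the approximation to match a prescribed boundary value; the penalty term drives uh(0) toward it.

```python
import numpy as np

# non-interpolating Gaussian basis on [0, 1] (assumed, illustrative setup)
nodes = np.linspace(0.0, 1.0, 11)
c = 0.08

def basis(x):
    x = np.atleast_1d(x)
    return np.exp(-((x[:, None] - nodes[None, :])**2) / c**2)

u = lambda x: x + 1.0     # field to project
g = 2.0                   # prescribed boundary value (deliberately != u(0))

# L2 projection via trapezoidal quadrature: M a = b
xq = np.linspace(0.0, 1.0, 201)
w = np.full(xq.size, xq[1] - xq[0]); w[0] *= 0.5; w[-1] *= 0.5
B = basis(xq)
M = B.T @ (w[:, None] * B)
b = B.T @ (w * u(xq))

# penalty enforcement of uh(0) = g: augment the system with beta * p0 p0^T
beta = 1.0e6
p0 = basis(0.0)[0]
a = np.linalg.solve(M + beta * np.outer(p0, p0), b + beta * g * p0)

uh0 = p0 @ a              # close to g, even though u(0) = 1
```

The same augmentation pattern carries over to a stiffness matrix and load vector; the usual trade-off is that beta must be large enough to enforce the condition but not so large that it ruins the conditioning of the system.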
However, it seems that much less attention is paid to the development of meshfree methods these days compared to the 90s. Nevertheless, meshfree methods are still applied frequently in many different areas, ranging from molecular dynamics and biomechanics to fluid dynamics.

posted by Shaofan Li @ 4:40 PM

Where Do We Stand on Meshfree Approximation Schemes?

In a previous posting, Timon provided a nice overview of meshfree methods, starting from SPH and leading up to some of the key developments over the past decade (diffuse element method, element-free Galerkin, reproducing kernel particle method/RKPM). Rather than present details on any particular method per se, here I focus on the most common approximations that are used in meshfree methods. In doing so, my goal is to bring out the commonalities, the distinctions, and some recent perspectives and improved understanding that have come about in the realm of data approximation and its ties to meshfree methods. Where appropriate, I will try to point out how the properties of the approximant lead to positive or negative consequences when used within a Galerkin method. The important issues of imposing essential boundary conditions and numerical integration in Galerkin meshfree methods are also discussed. In the interest of space, equations are inlined and no figures are included. Links to cited references (journal articles, web resources, or author's web page) are provided; the full citation of the references is available here. The reader will notice that the title of this post is an adaptation of Jaynes's (1979) article "Where Do We Stand on Maximum Entropy?"

Given a set of nodes {xa} (a = 1 to n) in R^d, we construct an approximation of a scalar-valued function u(x) in the form uh(x) = phi_a(x) ua (Einstein summation convention implied). Most, if not all, meshfree methods are based on some variant of either radial basis functions (RBFs), moving least squares (MLS) approximants (in computational geometry, Levin's (1998) MLS approximant is adopted), or natural-neighbor-based (nn) interpolants such as Sibson coordinates and the Laplace

interpolant. Recently, maximum-entropy approximants have also come to the forefront. The mathematical analysis of meshfree approximants has been carried out by Babuska et al. (2003). In meshfree methods, the construction of the nodal basis functions phi_a(x) is independent of a background element structure (unlike finite elements), and different approaches are used to construct a linearly independent set of basis functions. In this sense, these approximations are referred to as meshfree. A brief description of the above schemes follows.

Radial Basis Functions

In the radial basis function approximation, phi_a is constructed from translates of a fixed radial function Phi(r) with centers at the nodes xa. If global polynomial reproducibility is desired, a polynomial term is added to the approximation, which engenders an additional side condition. For certain choices of Phi(.), for example, the Gaussian radial basis function [Phi(r) = exp(-r^2/c^2)], multiquadrics [Phi(r) = sqrt(r^2 + c^2)], or thin-plate splines, the matrix Kab = Phi(||xb - xa||) is invertible (and positive-definite for the Gaussian), and hence the data interpolation problem has a unique solution for the coefficients ua. Note that uh interpolates, but the radial functions phi_a do not satisfy the Kronecker-delta property phi_a(xb) = delta_ab. In approximation theory, basis functions with the property phi_a(xb) = delta_ab are known as a cardinal basis. For a cardinal basis set, we immediately see that the basis functions are linearly independent. When a cardinal basis is used, the coefficients ua are more commonly referred to as nodal values (finite element terminology). The use of RBFs in collocation-based meshfree methods was initiated by Kansa (1990), and collocation methods based on global RBFs (full matrix, with exponential convergence) as well as compactly supported RBFs are an active area of current research.
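A minimal sketch of the RBF interpolation problem just described (the node count, the sine data, and the shape parameter c are arbitrary choices of mine): the multiquadric system is solved for the coefficients ua, and although those coefficients are not nodal values, the resulting uh passes through the data exactly.

```python
import numpy as np

xa = np.linspace(0.0, 1.0, 9)            # nodes / RBF centers
f = np.sin(2.0 * np.pi * xa)             # data to interpolate

c = 0.3                                  # multiquadric shape parameter
# K_ab = Phi(||xb - xa||) with Phi(r) = sqrt(r^2 + c^2)
K = np.sqrt((xa[:, None] - xa[None, :])**2 + c**2)
ua = np.linalg.solve(K, f)               # coefficients (NOT nodal values)

def uh(x):
    return np.sqrt((x - xa)**2 + c**2) @ ua

# uh interpolates: it reproduces the data at every node
err = max(abs(uh(x) - y) for x, y in zip(xa, f))
```

Note that the coefficients ua differ from the data f precisely because the multiquadric translates are not a cardinal basis; with a cardinal basis the two would coincide.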
Moving Least Squares Approximants

In the standard least squares approach, given a polynomial basis with m terms (a quadratic basis in one dimension is p(x) = {1, x, x^2}^T), the best fit to the nodal data ua is sought. To this end, we let uh(x) = p(x)^T a, where the constant parameter vector a is found such that the error vector P^T a - u is minimized. Here, P is a constant m x n matrix whose a-th column is p(xa). As the objective function, the square of the L2 norm of the error is chosen: I(a) = (1/2)(P^T a - u)^T (P^T a - u). This leads to a quadratic minimization problem, and hence a linear system of equations (the normal equations) is obtained for the unknown vector a.
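The normal equations can be written out in a few lines. In this sketch (node positions and data are my own, chosen to be exactly quadratic so the fit recovers the generating coefficients), minimizing I(a) gives (P P^T) a = P u:

```python
import numpy as np

xa = np.linspace(0.0, 1.0, 8)            # nodes
ua = 3.0 - 2.0 * xa + 0.5 * xa**2        # nodal data (itself quadratic)

# P is m x n with a-th column p(x_a) = [1, x_a, x_a^2]^T
P = np.vstack([np.ones_like(xa), xa, xa**2])

# normal equations from minimizing I(a) = 1/2 ||P^T a - u||^2
a = np.linalg.solve(P @ P.T, P @ ua)     # recovers [3, -2, 0.5]
```

The moving least squares construction differs from this in that the fit is reweighted around each evaluation point x by a compactly supported weight, so a becomes a function of x, but the algebra at each point is exactly this weighted variant of the normal equations.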